Algebraic geometry seminar, from Friday, June 01, 2018 to Monday, December 31, 2018
Wednesday, September 05, 2018, 4:00 PM - 5:30 PM, Math Tower P-131. Gabriele Di Cerbo, Princeton: Birational boundedness of low-dimensional elliptic Calabi-Yau varieties. I will discuss new results towards the birational boundedness of low-dimensional elliptic Calabi-Yau varieties, a joint project with Roberto Svaldi. Recent work in the minimal model program suggests that pairs with trivial log canonical class should satisfy certain boundedness properties. I will show that 4-dimensional Calabi-Yau pairs which are not birational to a product are indeed log birationally bounded. This implies birational boundedness of elliptically fibered Calabi-Yau manifolds with a section, in dimension up to 5. I will also explain how one could adapt our strategy to try and generalize the results in higher dimension, partly joint with W. Chen, J. Han, C. Jiang and R. Svaldi.
Thursday, September 13, 2018, 4:00 PM - 5:30 PM, Math Tower P-131. Andrei Okounkov, Columbia University: Enumerative symplectic duality. Please note the special day: Thursday instead of Wednesday. Inspired by theoretical physicists, enumerative geometers study various highly nonobvious "dual" ways to describe curve counts in algebraic varieties (the traditional mirror symmetry being perhaps the best known example). The notion of symplectic duality, or 3-dimensional mirror symmetry, originated in the study of 3-dimensional supersymmetric field theories and interchanges the degree-counting variables in generating functions with torus variables for equivariant counts. My goal in the talk is to explain the basic features of this phenomenon and indicate how it can be proven under a restricted definition of a symplectically dual pair (following ongoing joint work with Mina Aganagic).
Wednesday, September 19, 2018, 4:00 PM - 5:30 PM, Math Tower P-131. Chuanhao Wei, Stony Brook University: Log-Kodaira dimension and zeros of holomorphic log-one-forms. I will introduce a result about the relation between the zeros of holomorphic log-one-forms and the log-Kodaira dimension, which is a natural generalization of Popa and Schnell's result on zeros of one-forms. Some geometric corollaries will be stated, e.g. algebraic hyperbolicity of a log-smooth family of log-general type. I will also briefly introduce the idea that a log-D-module underlies a mixed Hodge module, which is a natural generalization of Deligne's canonical extension of variations of Hodge structures. All are welcome!
Wednesday, September 26, 2018, 4:00 PM - 5:30 PM, Math Tower P-131. Aaron Bertram, University of Utah: Stability Conditions on Projective Space. Gieseker stability uses the Hilbert polynomial of a coherent sheaf divided by its leading coefficient as an "asymptotic" slope function. We propose a family of stability conditions on Castelnuovo-Mumford regular sheaves that use the Hilbert polynomial divided by its derivative as "exact" slopes. We conjecture that this converges to Gieseker stability (it's true in dimensions 1, 2, 3). This is joint work with Matteo Altavilla and Marin Petkovic, and an application to the classification of Gorenstein rings is joint work with Brooke Ullery.
Wednesday, October 03, 2018, 4:00 PM - 5:30 PM, Math Tower P-131. Angela Gibney, Rutgers University: Basepoint free classes on the moduli space of stable n-pointed curves of genus zero. In this talk I will discuss basepoint free classes on the moduli space of stable pointed rational curves that arise as Chern classes of Verlinde bundles, constructed from integrable modules over affine Lie algebras, and the Gromov-Witten loci of smooth homogeneous varieties. We'll see that in the simplest cases these classes are equivalent. Examples and open problems will be discussed.
Wednesday, October 17, 2018, 4:00 PM - 5:30 PM, Math Tower P-131. Matt Kerr, Washington University in St. Louis: Hodge theory of degenerations. The asymptotics and monodromy of periods in degenerating families of algebraic varieties are encountered in many settings -- for example, in comparing (GIT, KSBA, Hodge-theoretic) compactifications of moduli, in computing limits of geometric normal functions, and in topological string theory. In this talk, based on work with Radu Laza, we shall describe several tools (beginning with classical ones) for comparing the Hodge theory of singular fibers to that of their nearby fibers, and touch on some relations to birational geometry.
Wednesday, October 24, 2018, 4:00 PM - 5:30 PM, Math Tower P-131. Will Sawin, Columbia University: What circles can do for you. In joint work with Tim Browning, we study the moduli spaces of rational curves on smooth hypersurfaces of very low degree (say, a degree $d$ hypersurface in $n$ variables with $n > 3 (d-1)2^{d-1}$). We show these moduli spaces are integral locally complete intersections and that they are smooth outside a set of high codimension. We get stronger results, with better codimensions, as the degrees of the rational curves grow. These results rely on the circle method from analytic number theory. I will explain how this application works, and how the same technique should apply to recent conjectures of Peyre about rational points on these hypersurfaces.
Monday, November 05, 2018, 4:00 PM - 5:30 PM, Math Tower P-131. Wei Ho, U. Michigan Ann Arbor and Columbia U.: Splitting Brauer classes with the universal Albanese. We prove that every Brauer class over a field splits over a torsor under an abelian variety. If the index of the class is not congruent to 2 modulo 4, we show that the Albanese variety of any smooth curve of positive genus that splits the class also splits the class. This can fail when the index is congruent to 2 modulo 4, but adding a single genus 1 factor to the Albanese suffices to split the class. This is joint work with Max Lieblich.
Wednesday, November 07, 2018, 4:00 PM - 5:30 PM, Math Tower P-131. Melody Chan, Brown University: Tropical curves, graph homology, and cohomology of M_g. Joint with Søren Galatius and Sam Payne. The cohomology ring of the moduli space of curves of genus g is not fully understood, even for small g. For example, in the 1980s Harer-Zagier showed that the Euler characteristic (up to sign) grows super-exponentially with g -- yet most of this cohomology is not explicitly known. I will explain how we obtained new results on the rational cohomology of moduli spaces of curves of genus g, via Kontsevich's graph complexes and the moduli space of tropical curves.
Wednesday, November 14, 2018, 4:00 PM - 5:30 PM, Math Tower P-131. Jakub Witaszek, IAS: Frobenius liftability of projective varieties. The celebrated proof of the Hartshorne conjecture by Shigefumi Mori allowed for the study of the geometry of higher-dimensional varieties through the analysis of deformations of rational curves. One of the many applications of Mori's results was Lazarsfeld's positive answer to the conjecture of Remmert and Van de Ven, which states that the only smooth variety that the projective space can map surjectively onto is the projective space itself. Motivated by this result, a similar problem has been considered for other kinds of manifolds such as abelian varieties (Demailly-Hwang-Mok-Peternell) or toric varieties (Occhetta-Wiśniewski). In my talk, I would like to present a completely new perspective on the problem coming from the study of Frobenius lifts in positive characteristic. Furthermore, I will provide applications of the theory of Frobenius lifts to varieties with trivial logarithmic cotangent bundle. This is based on a joint project with Piotr Achinger and Maciej Zdanowicz.
Wednesday, November 28, 2018, 4:00 PM - 5:30 PM, Math Tower P-131. Jason Starr, Stony Brook University: Symplectic Invariance of Rational Surfaces on Kaehler Manifolds. Gromov-Witten invariants are manifestly symplectically invariant and count holomorphic curves of given genus and homology class satisfying specified incidence conditions. The corresponding differential equations for holomorphic *surfaces* are not well-behaved and do not give invariants. Nonetheless, I will explain how the symplectically invariant Gromov-Witten theory can produce covering families of rational surfaces in Kaehler manifolds, e.g., every Kaehler manifold symplectically deformation equivalent to a projective homogeneous space has a covering family of rational surfaces. The key input is a positive curvature result for spaces of stable maps proved jointly with de Jong.
Wednesday, December 05, 2018, 4:00 PM - 5:00 PM, Math Tower P-131. Shizhang Li, Columbia University: An example of liftings with different Hodge numbers. Does a smooth proper variety in positive characteristic know the Hodge numbers of its liftings? The answer is "of course not". However, it's not that easy to come up with a counterexample. In this talk, I will first introduce the background of this problem. Then I shall discuss some obvious constraints on constructing a counterexample. Lastly I will present such a counterexample and state a few questions of similar flavor for which I do not know an answer.
|
{}
|
## Algebra 1
Simplify $\frac{4x+12}{x^2-2x}\cdot\frac{x}{6x+18}$.

$$\frac{4x+12}{x^2-2x}\cdot\frac{x}{6x+18} = \frac{4(x+3)}{x(x-2)}\cdot\frac{x}{6(x+3)} = \frac{4x(x+3)}{6x(x-2)(x+3)} = \frac{2(x+3)}{3(x-2)(x+3)} = \frac{2}{3(x-2)} = \frac{2}{3x-6}$$
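The simplification can be spot-checked with exact rational arithmetic (the sample points are arbitrary, chosen to avoid the zeros of the denominators):

```python
from fractions import Fraction

def lhs(x):
    # (4x+12)/(x^2-2x) * x/(6x+18), computed exactly
    return Fraction(4 * x + 12, x * x - 2 * x) * Fraction(x, 6 * x + 18)

def rhs(x):
    # claimed simplified form 2/(3x-6)
    return Fraction(2, 3 * x - 6)

# spot-check at points where no denominator vanishes (x != 0, 2, -3)
for x in (1, 3, 5, -1, 7):
    assert lhs(x) == rhs(x)
```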
|
{}
|
Lemma 39.18.1. Let $S$ be a scheme. Let $(U, R, s, t, c)$ be a groupoid scheme over $S$. Let $g : U' \to U$ be a morphism of schemes. Consider the following diagram
$\xymatrix{ R' \ar[d] \ar[r] \ar@/_3pc/[dd]_{t'} \ar@/^1pc/[rr]^{s'}& R \times _{s, U} U' \ar[r] \ar[d] & U' \ar[d]^ g \\ U' \times _{U, t} R \ar[d] \ar[r] & R \ar[r]^ s \ar[d]_ t & U \\ U' \ar[r]^ g & U }$
where all the squares are fibre product squares. Then there is a canonical composition law $c' : R' \times _{s', U', t'} R' \to R'$ such that $(U', R', s', t', c')$ is a groupoid scheme over $S$ and such that $U' \to U$, $R' \to R$ defines a morphism $(U', R', s', t', c') \to (U, R, s, t, c)$ of groupoid schemes over $S$. Moreover, for any scheme $T$ over $S$ the functor of groupoids
$(U'(T), R'(T), s', t', c') \to (U(T), R(T), s, t, c)$
is the restriction (see above) of $(U(T), R(T), s, t, c)$ via the map $U'(T) \to U(T)$.
Proof. Omitted. $\square$
|
{}
|
I have labels I feed to an LSTM model. I noticed that there were too few 1s and -1s compared to the number of 0s: at least 99.9% of the labels are 0s and the rest are 1s and -1s. I considered using weighted classes, where I give more weight to the 1 and -1 labels and a lot less weight to the 0 labels.
Is it good practice to put a kind of neighbourhood of 1s and -1s around each 1 and -1 in my dataset?
For instance, suppose I have:
..., 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, -1, 0, 0, 0, 0, 0, 0, ...
I would like to create a function that will transform that to:
..., 0, 0, 1, 1, 1, 1, 1, 0, 0, 0, 0, -1, -1, -1, -1, -1, 0, 0, 0, 0, ...
when k=2. So that way we catch the information around the initial labels.
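A minimal sketch of such a function (the name and the rule that a later nonzero label overwrites an earlier overlapping neighbourhood are my own choices):

```python
def expand_labels(labels, k=2):
    """Spread each nonzero label over the k positions on either side of it."""
    out = list(labels)
    for i, v in enumerate(labels):
        if v != 0:
            # clamp the window to the ends of the sequence
            for j in range(max(0, i - k), min(len(labels), i + k + 1)):
                out[j] = v
    return out

before = [0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, -1, 0, 0, 0, 0, 0, 0]
after = [0, 0, 1, 1, 1, 1, 1, 0, 0, 0, 0, -1, -1, -1, -1, -1, 0, 0, 0, 0]
assert expand_labels(before, k=2) == after
```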
|
{}
|
# Math Help - Puzzling Ans
1. ## Puzzling Ans
Hi all, below are 2 solutions which are very puzzling. Can anyone explain them?
Thanks a lot.
For the first question: how did step one, integrating from -1 to 1, jump to step 2, integrating from 0 to 1? And why did pi become 2pi?
For the second question, isn't the answer supposed to be e^1 instead of e^2? Since at the last step, when xe^-2x tends to infinity, it's supposed to be 1, right?
If $f$ is an integrable even function then
$\int_{ - a}^a {f(x)dx} = 2\int_0^a {f(x)dx} ~.$
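This identity can be sanity-checked numerically, e.g. with the even function $f(x) = x^2$ and $a = 1$ (the midpoint rule, the test function, and the tolerance are all illustrative choices):

```python
# Check ∫_{-a}^{a} f(x) dx = 2 ∫_{0}^{a} f(x) dx for the even function f(x) = x^2.
def integrate(f, a, b, n=100000):
    """Midpoint-rule approximation of the integral of f over [a, b]."""
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) for i in range(n)) * h

f = lambda x: x * x
assert abs(integrate(f, -1.0, 1.0) - 2 * integrate(f, 0.0, 1.0)) < 1e-6
assert abs(integrate(f, -1.0, 1.0) - 2.0 / 3.0) < 1e-6  # exact value is 2/3
```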
|
{}
|
# Algorithmic problem solving
Find a shortest path in the plane between two points, avoiding given disc-shaped obstacles.
### Build a weighted graph
First compute, for every pair of discs, all 4 tangent segments. The corresponding 8 endpoints are added to the vertex set of the graph, together with the source and the target point. Add edges between points if the corresponding segments do not intersect obstacles (other than on the border). Then the shortest path can be computed with Dijkstra's algorithm. The complexity of building the graph is $O(n^3)$ with a naive approach, and dominates the complexity of computing the shortest paths. As the instances have $n$ bounded by 50, the running time is small enough.
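The shortest-path step can be sketched with a standard heap-based Dijkstra over a weighted adjacency list (the tangent-segment construction is omitted here; the graph and node names below are purely illustrative):

```python
import heapq

def dijkstra(adj, source, target):
    """Shortest-path length in a graph given as {node: [(neighbor, weight), ...]}."""
    dist = {source: 0.0}
    heap = [(0.0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == target:
            return d
        if d > dist.get(u, float("inf")):
            continue  # stale heap entry
        for v, w in adj.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return float("inf")

# toy graph standing in for the tangent-segment graph
adj = {"s": [("a", 1.0), ("b", 4.0)], "a": [("b", 2.0), ("t", 6.0)], "b": [("t", 1.0)]}
assert dijkstra(adj, "s", "t") == 4.0  # path s -> a -> b -> t
```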
|
{}
|
# Counting Islands In Matrix Of Boolean
Code objective: Read a txt file, parse it into a matrix and count (and also print) the number of unique "islands" in the map formed by the matrix.
0 = Sea; 1 = Land; Islands are orthogonal groups of land.
testfile.txt :
5 9
00000
00010
10000
01101
10000
00100
01001
10010
01101
Python code:
# Reads the file and creates the matrix map.
with open('testfile.txt') as f:
    w, h = (int(n) for n in f.readline().split())
    map = [[2 for x in range(w)] for y in range(h)]
    for y in range(h):
        line = f.readline().strip()
        for x in range(w):
            map[y][x] = int(line[x])

# Swipes the matrix map after NEW land chunks.
def swipe():
    counter = 0
    for x in range(h):
        for y in range(w):
            if map[x][y] == 1:
                counter += 1
                landInSight(map, x, y, 99)
    print(counter)

# Recursive function to hide any land attached to a chunk already swiped.
def landInSight(m, h, w, c):
    if m[h][w] == 1:
        m[h][w] = c
        if w < len(m[0]) - 1: landInSight(m, h, w + 1, c)
        if h < len(m) - 1: landInSight(m, h + 1, w, c)
        if w > 0: landInSight(m, h, w - 1, c)
        if h > 0: landInSight(m, h - 1, w, c)

# Calls the swipe function.
swipe()
It is a very simple code, but I took way too long coding it. This is my first "program" using Python and although it works, it seems too rough. I am looking for any constructive inputs from Python people.
• Can you give a clearer definition of an island? Imagine you have a diagonal matrix - is it one island, or is the count equal to the matrix dimension? – pgs Apr 29 '17 at 10:38
• @pgs: The post says "islands are orthogonal groups of land". This seems clear to me (and it corresponds to the code). – Gareth Rees Apr 29 '17 at 11:18
• @pgs: As Gareth pointed out, two "lands" will be considered part of the same island only if they touch in an orthogonal relation; a diagonal would result in different islands unless there are one or more other land units forming an orthogonal connection between them. – Bernardo Araujo Apr 29 '17 at 16:18
You can make reading the input more concise and robust:
def read_matrix(inp_file):
"""
Reads the matrix from the input file and returns the number of rows, the number
of columns and the matrix itself as a list of lists of ints
"""
# reads the first line, splits it and parses the parts as integers
w, h = map(int, inp_file.readline().strip().split())
# converts the rest of the lines in the input to lists of integers
field = [list(map(int, line.strip())) for line in inp_file]
return h, w, field
This way it will work properly, even if one of the dimensions is larger than 9.
It's also common for the name of a function to be a verb. For instance, landInSight sounds a little bit weird. I'd call it traverse_matrix or flood_fill_matrix or something like that.
The fact that your landInSight function is recursive can cause issues for larger inputs (namely, a stack overflow error). You can make it non-recursive by using a stack (implemented with a standard list) and a loop.
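A minimal sketch of that iterative variant (the function name is my own; it assumes the same matrix convention as the original code):

```python
def flood_fill(m, start_h, start_w, c):
    """Iterative flood fill: marks the whole orthogonal chunk of 1s with c."""
    stack = [(start_h, start_w)]
    while stack:
        h, w = stack.pop()
        if 0 <= h < len(m) and 0 <= w < len(m[0]) and m[h][w] == 1:
            m[h][w] = c
            # visit the four orthogonal neighbours
            stack.extend([(h + 1, w), (h - 1, w), (h, w + 1), (h, w - 1)])

grid = [[1, 1, 0], [0, 1, 0], [0, 0, 1]]
flood_fill(grid, 0, 0, 99)
assert grid == [[99, 99, 0], [0, 99, 0], [0, 0, 1]]  # diagonal 1 is untouched
```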
• Thank you for the input, your parse is much more elegant than mine and also corrected a problem that I hadn't stumbled on yet (when the number of rows or columns is greater than nine). But, since the reader became a function, is there a way to pass its result directly to the new swipe function? In other words, can I turn these two lines: h, w, f = read_matrix(inp_file) and swipe(h, w, f) into one? – Bernardo Araujo Apr 29 '17 at 20:54
• @BernardoAraujo You can, but I don't think it's a good idea. These functions do different things. – kraskevich Apr 30 '17 at 7:49
• @BernardoAraujo I think argument unpacking is what you are looking for: swipe(*read_matrix(inp_file)) – Janne Karila Apr 30 '17 at 19:30
First of all, your solution to this task looks pretty elegant from my point of view. I will just offer minor comments to optimize this code.
Your input data is a boolean matrix, so instead of keeping every value as a byte you can use just one bit. For that purpose you can use the bitarray module.
Another point: you are loading the whole matrix into memory. That can be too expensive in certain circumstances. I can suggest a stream-based solution which calculates the number of islands in a dynamic manner. This method was discussed here.
• Your code doesn't seem to be correct. It prints 0 for a 1x1 matrix consisting of one element 1, while the correct answer is 1. – kraskevich Apr 29 '17 at 14:23
• @kraskevich: yes, I agree. It was a typo: b0 = count_continuous_block(s1) instead of b0 = count_continuous_block(s0). I changed it. Thank you! – pgs Apr 29 '17 at 14:29
• If the input is an arbitrary 0-1 matrix, your code still doesn't always work. It prints 0 for the matrix [[1, 1, 1], [1, 0, 1], [1,1, 1]], but there's clearly 1 connected component. – kraskevich Apr 29 '17 at 14:43
• @kraskevich: I guess I found a stream-like solution with help from CS Stack Exchange, but I would like to implement it in C instead of Python. – pgs May 2 '17 at 20:37
|
{}
|
# Hairpin formation
## Introduction
In this example, you will simulate a single strand of length 18 and sequence GCGTTGCTTCTCCAACGC at 334 K (~61 °C) in three different ways:
• with a molecular dynamics (MD) simulation of the sequence-averaged (SA) model. The input file is inputMD.
• with an MD simulation of the sequence-dependent (SD) model. The input file is inputMD_seq_dep.
• with a Monte Carlo (MC) simulation of the SA model in which two base pairs are connected by mutual traps (i.e. additional attractive interactions between two nucleotides). The input file is inputTRAP.
The traps act between the pairs depicted in blue and red in the sequence GCGTTGCTTCTCCAACGC. The details of the interaction associated to the traps can be changed in the file hairpin_forces.dat.
This strand, if T is sufficiently low, tends to form a hairpin with a 6-base-long stem and a 6-base-long loop. The temperature has been chosen to be close to the melting temperature of such a hairpin in the SA version of the model.
This document explains how to prepare the hairpin example (see Preparation) and how to run it (Running). Section Results contains results and plots extracted from the simulation output. In the following, $EXEC refers to the oxDNA executable.

## Preparation

The script run.sh generates the input files and runs all three simulations, one after the other. With the default input files, each simulation, lasting $10^8$ steps by default, takes approximately one hour on a modern CPU. The default run.sh expects $EXEC to be in the ../.. directory. If this is not the case, open run.sh and change the variable CODEDIR accordingly.
If you only want to generate the initial configuration, you can issue ./run.sh --generate-only. Then you can run the simulations by yourself. The generated initial configuration files are initial.top (which contains the topology) and initial.conf (which contains positions and orientations of the nucleotides).
## Running
##### Figure 2: Same as Figure 1, but for the SD model.
If mutual traps between stem base pairs are introduced, then the equilibrium properties of the hairpin change and, even if the SA model is employed, the hairpin is always (after the initial equilibration) in its folded conformation. The use of mutual traps can greatly decrease the simulation time required for the folding of strands into target structures (like DNA origami or DNA constructs).
|
{}
|
## ze4life 2 years ago: Write the missing statements and reasons of the proof. Given: $\angle A$ and $\angle C$ are rt. $\angle$s; $AB = CB$. Prove: $AD = CD$.
1. ze4life
|dw:1355528472985:dw|
2. ze4life
|dw:1355529709485:dw|
3. sirm3d
what do you think is the reason for #1?
4. ze4life
Given
5. sirm3d
at #2. why are the triangles BAD and BCD right triangles?
6. ze4life
? because they have right angles
7. ze4life
90 degrees
8. sirm3d
that's exactly the definition of a right triangle. Reason #2: Definition of a Right Triangle
9. ze4life
thnxs now how do i get number three
10. sirm3d
what do think is reason #3?
11. ze4life
umm they are congruent
12. ze4life
?
13. sirm3d
it is part of the given.
14. ze4life
so #3 is part of a given
15. sirm3d
Reason #3: Given
16. sirm3d
there are two given pieces of information: (1) the right angles A and C, and (2) the congruent segments AB and CB. So you can use the reason "Given" twice.
17. ze4life
oh okay i get it now statement 4 this is a little harder
18. ze4life
? :-(
19. ze4life
help plz
20. sirm3d
the reflexive property is used when something is congruent/equal to itself, like $$\overline{PQ} \cong \overline{PQ}$$ or $$\angle X \cong \angle X$$ in the problem, what triangle part is used or common to the two triangles?
21. ze4life
Triangle BCD?
22. sirm3d
what is the common part, maybe an angle or a segment, of the two triangles?
23. sirm3d
|dw:1355534984211:dw|
24. ze4life
BD
25. ze4life
segment BD
26. sirm3d
right. $\overline {BD} \cong \overline {BD}$
27. ze4life
5 is triangle BAD=BCD am i right
28. sirm3d
that goes into statement #5.
29. sirm3d
right it is. and for the reason?
30. ze4life
I already have the reason done HL
31. ze4life
32. sirm3d
the last line is what you were supposed to prove, $$\overline{AD}\cong \overline{CD}$$
33. ze4life
thats what i thought thnxs so much for ur patience
34. sirm3d
YW
|
{}
|
MathSciNet bibliographic data MR1042759 00A12 (40-00 41-00 44-00) Analysis. I. Integral representations and asymptotic methods. A translation of {\cyr Sovremennye problemy matematiki. Fundamental′nye napravleniya, Tom 13}, Akad. Nauk SSSR, Vsesoyuz. Inst. Nauchn. i Tekhn. Inform., Moscow, 1986 [MR0899751]. Translation by D. Newton. Translation edited by R. V. Gamkrelidze. Encyclopaedia of Mathematical Sciences, 13. Springer-Verlag, Berlin, 1989. vi+238 pp. ISBN: 3-540-17008-1 Article
|
{}
|
# Initial Commit
2 mins
Finally - a use for my personal domain name, bought back in - wow - 2010, but never used.
Potential topics I may or may not cover here:
• Programming
• Software Engineering
• Side projects
• Ideas
… depending on what I feel like.
Here’s some code:
object Hello {
  def main(args: Array[String]): Unit = {
    println("Hello, World")
  }
}
And the Schrödinger equation:

$$i\hbar \frac{\partial \Psi}{\partial t} = \hat{H}\Psi$$
Emoticons:
🍕🍜🥓🥚🥣
Tables:
| Tables        | Are           | Cool  |
| ------------- |:-------------:| -----:|
| col 3 is      | right-aligned | $1600 |
| col 2 is      | centered      |   $12 |
| zebra stripes | are neat      |    $1 |
Quotes:
> Well, Mr. Frankel, who started this program, began to suffer from the computer disease that anybody who works with computers now knows about. It’s a very serious disease and it interferes completely with the work. The trouble with computers is you play with them. They are so wonderful. You have these switches - if it’s an even number you do this, if it’s an odd number you do that - and pretty soon you can do more and more elaborate things if you are clever enough, on one machine.
>
> After a while the whole system broke down. Frankel wasn’t paying any attention; he wasn’t supervising anybody. The system was going very, very slowly - while he was sitting in a room figuring out how to make one tabulator automatically print arc-tangent X, and then it would start and it would print columns and then bitsi, bitsi, bitsi, and calculate the arc-tangent automatically by integrating as it went along and make a whole table in one operation.
>
> Absolutely useless. We had tables of arc-tangents. But if you’ve ever worked with computers, you understand the disease - the delight in being able to see how much you can do. But he got the disease for the first time, the poor fellow who invented the thing.

- Richard P. Feynman, *Surely You’re Joking, Mr. Feynman!: Adventures of a Curious Character*
|
{}
|
# Problem with marbles
1. May 24, 2005
### straycat
Hello all,
I have a problem that hopefully belongs in this forum.
Suppose that I have a bag full of M marbles, each marble being one of N colors, N <= M. The number of marbles of color n is p_n, so the sum over n of p_n equals M. Next I want to arrange all of my marbles in order from 1 to M. The question: how many unique ways are there to do this, as a function of the variables p_n?
I know for example that if each marble is its own separate color, ie M = N, then there are exactly M! different ways to arrange them. Alternatively if there is only one color, ie N = 1, then there is only one way to arrange them. It is the more general case that I have not yet solved.
David
2. May 24, 2005
### BicycleTree
First, permute all the colors. This is M!. Then divide out what you don't want. What is the overcount factor due to each color? Consider with M = 3
123
132
213
231
312
321
Now say that 2 and 3 are blue and 1 is red. What do you need to divide by to get to the correct answer? Why? Consider what the duplicates that you discard have in common with the corresponding ones that you accept.
3. May 25, 2005
### straycat
Hmm, in this case I divide by 2 because (2,3,*) looks like (3,2,*), etc. I suppose I'm actually dividing by 2!.
If I had 5 marbles, then I start with 5! Then if I assume that 3 of them (say #1,#2,#3) are red, with #4 blue, #5 green, then I divide by 3! because, eg, (1,2,3,*,*) looks like eg (3,2,1,*,*), and there are a total of 3! ways to get (r,r,r,*,*). In more general terms, every individual of the 5! sequences that I start out with will belong to a subset with 3! elements, each one of which is indistinguishable from the other elements of the same subset. So yes, I divide in this case by 3!, and in this case the final answer is 5! / 3!.
So now let's say we have 5!/3! sequences. Next let's say that we take the blue and green marbles and set them both to green. We repeat the above line of reasoning, ie, every individual of my 5!/3! sequences will belong to a subset with 2! elements, each one of which is indistinguishable from the other elements of the same subset. So we divide again by 2!.
Generalizing, I get M! / [p_1! * p_2! * ... * p_N!]. In the specific case that p_i = 1 for all i, and N = M, this reduces to M!. And in the specific case that p_1 = M, all other p_i = 0, this reduces to M! / M! = 1. Both of these are as expected.
Is that right? I just reasoned it out as I typed -- not sure if I've made an error.
David
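The multinomial count derived above can be spot-checked against a brute-force enumeration (the helper name is my own):

```python
from itertools import permutations
from math import factorial

def arrangements(colors):
    """M! / (p_1! * p_2! * ... * p_N!) for a multiset of colored marbles."""
    counts = {}
    for c in colors:
        counts[c] = counts.get(c, 0) + 1
    total = factorial(len(colors))
    for p in counts.values():
        total //= factorial(p)
    return total

# 5 marbles: 3 red, 1 blue, 1 green -> 5!/3! = 20 distinct orderings
marbles = ("r", "r", "r", "b", "g")
assert arrangements(marbles) == len(set(permutations(marbles))) == 20
```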
4. May 25, 2005
### BicycleTree
Yep, that's right.
5. May 25, 2005
### straycat
OK cool ... on to the next step.
The above is a prelude to a more difficult (I think) problem that I have in mind. Suppose I roll an N-sided die a total of M times and write down the results as an ordered sequence of M values. In total, there will be N^M different possible sequences. For any individual sequence, suppose we define p_n as the number of times that the die roll shows up as n. (So n is an integer in [1,N], and p_n is an integer in [0,M].) What I am trying to do is calculate what proportion f(p_n) of the N^M sequences have the result n showing up p_n times.
For example, suppose the die is 6-sided, and we roll it 10 times. We get a total of 6^10 sequences of die rolls. Of these 6^10 sequences, how many get, let's say, a 3 showing up 5 times?
One way to start would be to take the expression from my previous post and do sums over all possible values of each p_i for i not equal to n. I end up with a complicated expression, though, and I'm thinking there might be an easier way to approach the problem ...
David
6. May 25, 2005
### BicycleTree
First count all the ways you can have the other N - 1 sides come up on M - p_n rolls. Then count all the ways you can mix the p rolls of side n with the ways to roll the N - 1 sides over M - p_n rolls.
For example, with 4 rolls and a 3-sided die, and concerned with 2 rolls of side 1: first count the ways to roll sides 2 and 3 over the remaining 4 - 2 = 2 rolls.
22
23
32
33
Now find the number of ways to mix 11 with those. For example (mixing 11 with 22),
2211
2121
2112
1212
1221
1122
Or mixing 11 with 23
2311
2131
2113
1213
1231
1123
I think that should work.
7. May 26, 2005
### straycat
Yes, that is simpler.
Let's see. There is only one way to get the result n in a given set of p positions. Next, the number of ways to fill the other M-p rolls with anything other than n is (N-1)^(M-p). Then we multiply by the number of ways to mix an ordered sequence of p elements with an ordered sequence of M-p elements, ie the number of ways of distributing p elements over M positions, which is M choose p. Putting these together, we get the expression:
(N-1)^(M-p) M! / [p!(M-p)!]
This looks a lot like the Poisson distribution:
http://mathworld.wolfram.com/PoissonDistribution.html
I need to check all this -- gotta run now though --
David
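BicycleTree's mixing argument gives C(M,p) placements for the p rolls of side n times (N-1)^(M-p) fillings of the rest; a brute-force check on the 3-sided example (illustrative code, not from the thread):

```python
from itertools import product
from math import comb

def count_with_side(N, M, p):
    """Number of length-M sequences over an N-sided die in which a fixed
    side appears exactly p times: C(M, p) placements of that side times
    (N-1)**(M-p) fillings of the remaining rolls."""
    return comb(M, p) * (N - 1) ** (M - p)

# BicycleTree's example: 3-sided die, 4 rolls, side 1 exactly twice.
brute = sum(1 for s in product((1, 2, 3), repeat=4) if s.count(1) == 2)
```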
8. May 26, 2005
### BicycleTree
Yep. Doesn't look like the Poisson to me though.
9. May 27, 2005
### straycat
Dude! I'm making progress. (With your help Bicycle, thanks!) Yes, Poisson is a little different.
I have one final step. Let us once again consider an N-sided die that we roll M times. Let's arbitrarily pick one result of the die roll, say, 3. What I would ultimately like to show is that if we let M go to infinity, then we will find that "most of the time," we will get a 3 exactly 1/N of the time. In other words, for any individual sequence, say that p_3 is the number of times out of M rolls that we get a 3. Count up all of the N^M sequences in which p_3/M = 1/N. From the above, we know that this will happen in
(N - 1)^(M-p) M! / [p!(M-p)!] out of the N^M sequences. So the ratio of these two expressions, ie the percentage of sequences in which we get a 3 showing up 1/N of the time, is (N - 1)^(M-p) M! / [p!(M-p)!N^M] . I would like to show that as M goes to infinity, and keeping N constant, then the assumption that the above expression approaches 100 percent (approaches 1) will necessarily imply that p/M = 1/N. Let's see if I can make that look nicer. I want to show that:
$$\lim_{M \rightarrow \infty} \frac{(N-1)^{M-p}\,M!}{p!\,(M-p)!\,N^{M}} = 1 \;\Rightarrow\; p = \frac{M}{N}$$
But now I'm stuck, possibly because it's been years since I've worked with limits. I've just rediscovered l'Hopital's rule, but haven't been able to get very far with it. I also note that the exponential terms should dominate over the factorial terms, I think. (Well, they usually win out, right?) Any hints for me?
David
10. May 27, 2005
### BicycleTree
I don't understand what you're trying to do--your explanation is unclear to me. The percentage of the time that you have p_3/M = 1/N is the percentage of the time that you have p_3 = M/N, which is (N - 1)^(M-M/N) M! / [(M/N)!(M-M/N)!*(N^M)], assuming M/N is integer (the probability is 0 if it isn't).
11. May 27, 2005
### straycat
I suppose I should add in the assumption that M is an integer multiple of N, so that M/N is integer.
The way I phrased the question is to assume the limit equals 1 and use that to solve for p in terms of M and N to demonstrate that p = M/N. Although we could turn the problem around like this: assume first that p = M/N, and then demonstrate that the above limit is 1. It seems intuitive to me that it should be, but I haven't been able to prove it yet.
David
12. May 27, 2005
### BicycleTree
Well, I plugged in 100 for M and 10 for N and got 0.131865 (in the formula (N - 1)^(M-M/N) M! / [(M/N)!(M-M/N)!*(N^M)]). Then I tried it with 200 for M and 10 for N and got 0.093636. It seems to be heading for 0 if it's going anywhere.
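Those two numbers are just the formula evaluated at (N, M) = (10, 100) and (10, 200); a short check (not from the thread):

```python
from math import comb

def frac_exact(N, M):
    """Fraction of the N**M sequences in which a fixed side shows up
    exactly M/N times (M assumed to be a multiple of N)."""
    p = M // N
    return comb(M, p) * (N - 1) ** (M - p) / N ** M

a = frac_exact(10, 100)  # matches the 0.131865 quoted above
b = frac_exact(10, 200)  # matches the 0.093636 quoted above
```

Both values shrink as M grows, consistent with the limit heading to 0 rather than 1.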
13. May 28, 2005
### straycat
You're right.
I will have to reformulate the problem once again.
Let me explain how this came up. In another thread, I am discussing the multiple worlds interpretation (MWI) of quantum mechanics. I won't go into all the esoterics of that discussion. So I'll just say that for the context of this problem with the N-sided die rolled M times, I am imagining that this results in N^M "parallel worlds," with a separate "observer" in each world. Typically in these discussions we consider preparing M identical spin-whatever particles, with spin n being an integer in [1,N] upon measurement, and the predicted probability of measuring spin = n being m_n. For this thread, let's stick with the example of an N-sided die, and define the predicted probability of getting an n on any particular throw as being the probability "measure" m_n. Typically, for a die roll, we assume that each side is equiprobable, ie m_n = 1/N. But in the more general case, we simply require that $$\sum_{n = 1}^{N} m_{n} = 1$$.
Let us suppose that an experimenter rolls the die M times and calculates the percentage of time that the n-th result shows up, p_n/M, and compares this with the predicted probability measure m_n. We will imagine that at the end of the M die rolls, we have N^M different experimenters (or "observers") who are living in N^M "parallel worlds."
One of the esoterics of this other thread is that we are entertaining the notion that each of these N^M worlds is, ontologically speaking, on an "equal footing." So here's the difficulty. If the predicted probability measure m_n is NOT equal to 1/N, then as M goes to infinity, "most" of the observers in these N^M worlds will come to the conclusion that the predicted probability measure m_n is WRONG. So my objective is to show this, but more rigorously. In other words, I want to show that m_n = 1/N is the ONLY probability measure such that "most" of the observers will conclude that the predicted probability measure, m_n, is correct.
The way that any individual observer tests the predicted probability measure m_n is to compare it with the observed quantity, p_n/M. The difference between the predicted value m_n and the observed value p_n/M is the "error" $$\epsilon_n = |m_{n} - p_{n}/M|$$. So what I am looking for is an expression for m_n such that the following Criterion is true: for any arbitrary cutoff $$\delta$$, the proportion f_n of these N^M worlds such that the error is less than the cutoff, $$\epsilon_n < \delta$$, approaches 1 in the limit as M approaches infinity. It seems intuitive that m_n = 1/N will meet the Criterion. But what I am trying to prove is that m_n = 1/N is the ONLY expression for m_n that will meet the Criterion. Thus, my method is to define the equation $$\lim_{M \rightarrow \infty} f_{n} = 1$$ and show that this implies that m_n = 1/N.
So the issue is how to calculate f_n. Initially, I was thinking of defining $$\delta = 1/M$$, so that as M approaches infinity, the cutoff approaches zero. The reason I did this was to make the calculation of f_n easier: the denominator is the total number of worlds N^M, and the numerator is the number of worlds in which the predicted value matches exactly with the observed value, m_n = p_n/M. But as we have seen, for f_n defined this way, the limit goes to zero.
So I suppose what I should do is to define the cutoff to be arbitrary but fixed, ie not a function of M. So the numerator in the expression for f_n is the sum over all of the values of p_n that are close to m_n, ie such that $$|m_n - p_n/M| < \delta$$.
Does my statement of the problem make sense to you?
David
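The reformulated criterion (a fixed cutoff delta, with f_n the fraction of the N^M sequences whose observed frequency lies within delta of 1/N) can be checked numerically. A sketch, not from the thread:

```python
from math import comb

def f_n(N, M, delta):
    """Fraction of the N**M equally weighted sequences whose observed
    frequency p/M of one fixed side lies within delta of 1/N."""
    hits = sum(comb(M, p) * (N - 1) ** (M - p)
               for p in range(M + 1) if abs(p / M - 1 / N) < delta)
    return hits / N ** M

# With delta fixed, the fraction climbs toward 1 as M grows.
fracs = [f_n(10, M, 0.05) for M in (100, 400, 1600)]
```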
|
{}
|
# NVIDIA driver work
Emiel Kollof is working on the NVIDIA binary video driver; so far it loads correctly, but doesn’t work in X11.
## 3 Replies to “NVIDIA driver work”
1. Mezz says:
If I remember correctly, there's a patch over at http://daily.daemonnews.org .. I don't remember where it is, and I have no idea if it's already out of date with today's DragonFly kernel.
2. Emiel Kollof says:
I have a patch ready that works now. See the dragonfly newsserver/mailinglist, since I posted it there.
I could make it available on some webserver somewhere if someone prefers that.
3. Mezz says:
Yeah, I saw.. Bravo!! :-)
> I could make it available on some webserver
> somewhere if someone prefers that.
Umm, how about putting it in dfports' x11/nvidia-driver? I will reply over there..
|
{}
|
OM = VL_IRODR(R), where R is a rotation matrix, computes the inverse Rodrigues' formula of R, returning the rotation vector OM = dehat(logm(R)).
[OM,DOM] = VL_IRODR(R) computes also the derivative of the Rodrigues' formula. In matrix notation this is the expression
          d( dehat(logm(R)) )
  dom = ---------------------- .
              d(vec R)^T
[OM,DOM] = VL_IRODR(R), when R is a 9xK matrix (equivalently, an array with 9*K elements), repeats the operation for each column. In this case OM and DOM are arrays with K slices, one per rotation.
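For readers without MATLAB, here is a rough Python equivalent of the forward and inverse maps (an illustrative sketch, not VLFeat code; the derivative DOM is omitted and rotation angles are assumed to lie in (0, pi)):

```python
import numpy as np

def vl_hat(om):
    """Map a 3-vector to its skew-symmetric (cross-product) matrix."""
    x, y, z = om
    return np.array([[0.0, -z, y],
                     [z, 0.0, -x],
                     [-y, x, 0.0]])

def vl_rodr(om):
    """Rodrigues' formula: rotation vector om -> rotation matrix R."""
    th = np.linalg.norm(om)
    if th < 1e-12:
        return np.eye(3)
    K = vl_hat(om / th)
    return np.eye(3) + np.sin(th) * K + (1.0 - np.cos(th)) * (K @ K)

def vl_irodr(R):
    """Inverse Rodrigues' formula: R -> om = dehat(logm(R))."""
    th = np.arccos(np.clip((np.trace(R) - 1.0) / 2.0, -1.0, 1.0))
    if th < 1e-12:
        return np.zeros(3)
    w = np.array([R[2, 1] - R[1, 2],
                  R[0, 2] - R[2, 0],
                  R[1, 0] - R[0, 1]]) / (2.0 * np.sin(th))
    return th * w

om = np.array([0.1, -0.2, 0.3])
R = vl_rodr(om)
om_back = vl_irodr(R)   # round trip recovers the rotation vector
```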
|
{}
|
Sharp Expectation Bounds on Extreme Order Statistics from Possibly Dependent Random Variables
Title & Authors
Sharp Expectation Bounds on Extreme Order Statistics from Possibly Dependent Random Variables
Yun, Seokhoon;
Abstract
In this paper, we derive sharp upper and lower expectation bounds on the extreme order statistics from possibly dependent random variables when only their marginal distributions are known. The marginal distributions of the considered random variables need not be identical, and the expectation bounds are completely determined by the marginal distributions alone.
Keywords
Expectation bounds; extreme order statistics; dependent random variables
Language
English
|
{}
|
# Writeup "R u SAd?" from PlaidCTF 2019
by: dorianr
Description:
Tears dripped from my face as I stood over the bathroom sink. Exposed again! The tears melted into thoughts, and an idea formed in my head. This will surely keep my secrets safe, once and for all. I crept back to my computer and began to type.
We are given a relatively long python script implementing RSA encryption, together with a public key and an encrypted file. They use a custom key format which is saved using the pickle module:
class Key:
    PRIVATE_INFO = ['P', 'Q', 'D', 'DmP1', 'DmQ1']

    def __init__(self, **kwargs):
        for k, v in kwargs.items():
            setattr(self, k, v)
        assert self.bits % 8 == 0

    def ispub(self):
        return all(not hasattr(self, key) for key in self.PRIVATE_INFO)

    def ispriv(self):
        return all(hasattr(self, key) for key in self.PRIVATE_INFO)

    def pub(self):
        p = deepcopy(self)
        for key in self.PRIVATE_INFO:
            if hasattr(p, key):
                delattr(p, key)
        return p

    def priv(self):
        raise NotImplementedError()

def genkey(bits):
    assert bits % 2 == 0
    while True:
        p = genprime(bits // 2)
        q = genprime(bits // 2)
        e = 65537
        d, _, g = egcd(e, (p - 1) * (q - 1))
        if g != 1: continue
        iQmP, iPmQ, _ = egcd(q, p)
        return Key(
            N=p * q, P=p, Q=q, E=e, D=d % ((p - 1) * (q - 1)), DmP1=d % (p - 1), DmQ1=d % (q - 1),
            iQmP=iQmP % p, iPmQ=iPmQ % q, bits=bits,
        )
Notice that the values iPmQ and iQmP are not removed when constructing the public key. Let us call these values $a$ and $b$ in the following. If $a', b' = egcd(p, q)$, then $a'p+b'q=\gcd(p,q)=1$ by Bézout's identity. Since $a$ and $b$ equal $a'$ and $b'$ up to multiples of $q$ and $p$ respectively, we have, for small values $i,j,z\in \mathbb Z$:
$$(a+iq)p + (b+jp)q = 1 \;\Rightarrow\; ap+bq+(i+j)pq = 1 \;\Rightarrow\; ap+bq=1+zn=:c$$
Let $x, y = egcd(a, b)$. Since $gcd(a, b)=1$ (in our case), we have
$$ap+bq=1+zn=c \qquad\text{and}\qquad ax+by=1$$
$$\Rightarrow a(p-xc)+b(q-yc) = 0 \qquad \text{(subtract the second equation $c$ times from the first)}$$
$$\Rightarrow a(p-xc) = b(yc-q) \;\Rightarrow\; q-yc \equiv 0 \pmod{a} \;\Rightarrow\; q-yc=ka$$
We can expect $q/a$ to be small. Hence, $k\approx -yc/a$. Then, $q=ka+yc$.
## Code
a = k.iPmQ
b = k.iQmP
n = k.N
x, y, _ = egcd(a, b)
for z in range(-10, 10):
    c = 1 + z * n
    for kk in range(-y * c // a - 10, -y * c // a + 10):
        q = kk * a + y * c
        if q > 1 and n % q == 0:  # guard against q == 0 and the trivial divisor 1
            print(q, n // q)
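As a sanity check, not part of the original writeup, the recovery can be run end to end on small well-known primes. This is a hardened variant of the loop above (a much wider search window, and an explicit reduction by gcd(a, b) in case it is not 1):

```python
def egcd(a, b):
    """Extended Euclid: returns (x, y, g) with a*x + b*y == g == gcd(a, b)."""
    if b == 0:
        return 1, 0, a
    x, y, g = egcd(b, a % b)
    return y, x - (a // b) * y, g

# Toy primes standing in for the challenge's large ones (assumption: the
# algebra is size-independent; only the running time changes).
p, q = 999983, 1000003
n = p * q
iQmP, iPmQ, _ = egcd(q, p)      # Bezout: iQmP*q + iPmQ*p == 1
a, b = iPmQ % q, iQmP % p       # the two values leaked in the public key

# --- attack: from here on, only a, b and n are used ---
x, y, g = egcd(a, b)            # a*x + b*y == g (the writeup assumes g == 1)
recovered = None
for z in (1, 0):                # 0 < a < q and 0 < b < p force a*p + b*q == 1 + n
    c = 1 + z * n
    if c % g:
        continue
    ag, cg = a // g, c // g     # divide through by g: a/g still divides q - y*(c/g)
    kk = -y * cg // ag
    for _ in range(2 * 10 ** 6):  # wide window; typically q appears within a few steps
        cand = kk * ag + y * cg
        if 1 < cand < n and n % cand == 0:
            recovered = cand
            break
        kk += 1
    if recovered:
        break
```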
|
{}
|
# Tunnelling mistake
In a previous post I wrote about quantum tunnelling. In that post I said that the energy of the different instances of the particle undergoing tunnelling was not larger than that of the barrier. I reached this conclusion by looking at the terms in an analytical solution of the Schrödinger equation and estimating their magnitudes. However, estimation can easily go wrong, and I thought I should check by doing a simulation. I have now performed the simulation, and my conclusion was wrong: some instances of the particle tunnelling through the barrier do increase their energy. The program that performs the simulation and produces the figures shown below can be found at this link. The program is written in Python because Python has sparse complex-matrix libraries, which were needed for the simulation. The link also includes some references to papers used in writing the code.
Tunnelling involves a wavepacket interacting with a potential barrier whose energy is higher than the wavepacket's. After the interaction some instances of the particle are present on the other side of the barrier. I made the potential have a height of 1.1 times the wavepacket's mean energy. The wavepacket comes in from the left; after it interacts with the potential, some of it is reflected to the left and some continues to the right, as shown in the figures below. The red line represents the potential, and the blue line is the square amplitude of the wavepacket, which gives the probability density for finding the particle at that location.
The energy of a wave with wavenumber $k$ is $k^2/2m$ (in units where $\hbar = 1$). I calculated the energy spectrum at each time by taking a fast Fourier transform and finding the square amplitude at each wavenumber $k$, which is proportional to the probability of measuring the particle to have that wavenumber. The Fourier transform includes both positive and negative frequencies: the first half of the spectrum holds the positive frequencies and the second half the negative frequencies. I calculated the square amplitudes for both halves of the FFT and added up the amplitudes that correspond to the same energy (the $+k$ and $-k$ amplitudes). I then divided each of the resulting energy spectra by the amplitude of the initial energy spectrum and plotted the normalised results on a semilog scale. This let me see whether the components with energies above that of the initial wavepacket increased or decreased. The results are shown in the figures below at the same times as the figures above, except that the last figure is omitted. The red lines are positioned so that if the spectrum lies above the horizontal red line to the right of the vertical red line, there is a higher probability of the particle having an energy above what it had before the potential.
It appears that the energy of some of the instances of the particle increased after interacting with the barrier and in particular the probability increased for energies above 1.1 times the wavepacket energy: the energy of the potential. The original post will be updated after I publish this post.
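The folding of the +k and -k halves of the FFT described above can be sketched as follows (an illustration with an invented Gaussian wavepacket; this is not the author's program):

```python
import numpy as np

# Grid and a normalised Gaussian wavepacket with mean wavenumber k0
# (all names and parameter values here are invented; hbar = 1).
N, L, m = 1024, 100.0, 1.0
x = np.linspace(0.0, L, N, endpoint=False)
k0, sigma = 2.0, 5.0
psi = np.exp(1j * k0 * x) * np.exp(-(x - L / 4) ** 2 / (2 * sigma ** 2))
psi /= np.sqrt(np.sum(np.abs(psi) ** 2))

amp = np.abs(np.fft.fft(psi)) ** 2            # square amplitude per FFT bin
k = 2 * np.pi * np.fft.fftfreq(N, d=L / N)    # angular wavenumber of each bin
half = N // 2
folded = amp[:half].copy()
folded[1:] += amp[:half:-1]                   # add each -k bin to its +k partner
energy = k[:half] ** 2 / (2 * m)              # E = k^2 / (2m) on the folded axis
# (the single Nyquist bin amp[half] is ignored in this sketch)
```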
|
{}
|
# Using shinyypr
shinyypr is an R package that provides a user interface to the ypr R package. ypr implements equilibrium-based yield-per-recruit methods for estimating the optimal yield of a fish population.
The user interface can simply be opened with default parameter settings.
library(shinyypr)
run_ypr_app()
The run_ypr_app function also has a single argument that allows the user to pass a valid ypr_population object, which populates the app parameter values on startup.
library(ypr)
library(shinyypr)
run_ypr_app(adams_adjusted)
shinyypr also ships with an RStudio addin.
For more information see the ypr R package Get Started vignette.
|
{}
|
# How to set column width such that the vertical size is as small as possible?
For example, in Word, if two columns are at 50% linewidth each, it would give:
After playing around with the column widths, I can minimise the vertical size of the table:
Is there a way to implement this in LaTeX?
Edit: a MWE:
\documentclass{article}
\usepackage{booktabs}
\usepackage{tabularx}
\begin{document}
\section*{Variables}
\begin{table}
\centering
\begin{tabularx}{\textwidth}{X|X}
\toprule
Variables & Treatment \\
\midrule
Independent & \\
The angle between the ramp and the flat surface & The ramp is elevated by placing books underneath one end of the ramp, and the angle is fine-tuned using the screws in the feet. The angle is measured using the protractor, which is attached to the ramp. The angles used were 10, 15, 20, 25 and 30.\\
\bottomrule
\end{tabularx}
\caption{List of variables and treatment}
\label{tab:my_label}
\end{table}
\end{document}
• Please make a minimal working example (MWE) – samcarter_is_at_topanswers.xyz Sep 4 '18 at 11:36
• @samcarter Done – George Tian Sep 4 '18 at 11:39
• An automatic process would be difficult and time consuming, and at most should be done once. Also, for a range of ratios which produce the same number of lines, you probably want the one closest to 1. – John Kormylo Sep 4 '18 at 12:10
• your example produces ! Package inputenc Error: Unicode character ° (U+B0) – David Carlisle Sep 4 '18 at 12:23
• @DavidCarlisle Sorry, I have edited and removed the degree symbols – George Tian Sep 4 '18 at 12:26
tabulary is closer to what you need than tabularx
\documentclass{article}
\usepackage{booktabs}
\usepackage{tabulary}
\begin{document}
\section*{Variables}
\begin{table}
\centering
\begin{tabulary}{\textwidth}{L|L}
\toprule
Variables & Treatment \\
\midrule
Independent & \\
The angle between the ramp and the flat surface & The ramp is elevated by placing books underneath one end of the ramp, and the angle is fine-tuned using the screws in the feet. The angle is measured using the protractor, which is attached to the ramp. The angles used were 10, 15, 20, 25 and 30.\\
\bottomrule
\end{tabulary}
\caption{List of variables and treatment}
\label{tab:my_label}
\end{table}
\end{document}
Normally all X of a tabularx are set to the same width. You could shift the ratio by using >{\hsize=0.45\hsize}X>{\hsize=1.55\hsize}X (make sure that the sum of all the \hsize equals to the number of X columns). This needs some manual adjustment of the exact values.
Another option could be to use tabulary as in the second example below.
\documentclass{article}
\usepackage{booktabs}
\usepackage{tabulary}
\usepackage{tabularx}
\begin{document}
\begin{table}
\centering
\begin{tabularx}{\textwidth}{@{}>{\hsize=0.35\hsize}X>{\hsize=1.65\hsize}X@{}}
\toprule
Variables & Treatment \\
\midrule
The angle between the ramp and the flat surface & The ramp is elevated by placing books underneath one end of the ramp, and the angle is fine-tuned using the screws in the feet. The angle is measured using the protractor, which is attached to the ramp. The angles used were 10, 15, 20, 25 and 30.\\
\bottomrule
\end{tabularx}
\caption{List of variables and treatment}
\label{tab:my_label1}
\end{table}
\begin{table}
\centering
\begin{tabulary}{\textwidth}{@{}LJ@{}}
\toprule
Variables & Treatment \\
\midrule
The angle between the ramp and the flat surface & The ramp is elevated by placing books underneath one end of the ramp, and the angle is fine-tuned using the screws in the feet. The angle is measured using the protractor, which is attached to the ramp. The angles used were 10, 15, 20, 25 and 30.\\
\bottomrule
\end{tabulary}
\caption{List of variables and treatment}
\label{tab:my_label2}
\end{table}
\end{document}
• Thank you, the second method seems to automatically adjust the widths, which solves my problem. I've been trying to decipher the following: {@{}LJ@{}}. Could you please explain what this snippet does? – George Tian Sep 4 '18 at 12:07
• @GeorgeTian The @{} makes the rules the same length as the text, L make the cells in this column left aligned (looks IMHO better for such short lines) and J makes the content of this column justified. – samcarter_is_at_topanswers.xyz Sep 4 '18 at 12:09
• @GeorgeTian perhaps a better explanation of what @ does is, it changes the content added automatically at this side of the column (or in between two columns, if it is used between two column specifications) to the stuff you give it as its argument. So @{} changes the padding from \hskip\tabcolsep to nothing, while @{a} would insert an a in each line. – Skillmon Sep 4 '18 at 14:04
|
{}
|
# zbMATH — the first resource for mathematics
Subcritical phase of $$d$$-dimensional Poisson-Boolean percolation and its vacant set. (Phase sous-critique du modèle de percolation Poisson-Booléen et de son complémentaire en dimension $$d$$.) (English. French summary) Zbl 07249463
For every $$r>0$$, define the two functions of $$\lambda$$ as $$\theta_r(\lambda):=\mathbb{P}_{\lambda}[0\leftrightarrow \partial B_r]$$ and $$\theta(\lambda):=\lim_{r\rightarrow \infty}\theta_r(\lambda).$$ Define the critical parameter $$\lambda_c =\lambda_c(d)$$ of the model by the formula $$\lambda_c := \inf\{\lambda\geq 0 : \theta(\lambda) > 0\}$$. The authors introduce another critical parameter for Poisson-Boolean percolation, $$\widetilde{\lambda}_c:=\inf \left\{\lambda \geq 0:\underset{r>0}{\inf }\mathbb{P}_{\lambda }[B_{r}\leftrightarrow \partial B_{2r}]>0\right\}$$. The authors prove the following main result:
Theorem 1.2 (Sharpness for Poisson-Boolean percolation). – Fix $$d\geq 2$$ and assume that $$\underset{\mathbb{R}_+}{\int }r^{5d-3}d\mu (r)<\infty.$$ Then, we have that $$\lambda_c = \widetilde{\lambda}_c$$. Furthermore, there exists $$c > 0$$ such that $$\theta(\lambda)> c(\lambda-\lambda_c)$$ for any $$\lambda\geq \lambda_c$$.
The authors give a brief description of the general strategy to prove the main theorem. Three properties of the Poisson-Boolean percolation are introduced. Then, the authors present some new results concerning the behavior of Poisson-Boolean percolation when $$\lambda<\widetilde{\lambda}_c$$. If there exists $$c > 0$$ such that $$\mu[r,\infty]\leq \exp(-cr)$$ for every $$r\geq 1$$, then, for every $$\lambda<\widetilde{\lambda}_c$$, the authors prove that there exists $$c_{\lambda}> 0$$ such that for every $$r > 1$$, $$\theta_r(\lambda)\leq \exp(-c_{\lambda}r)$$.
##### MSC:
82B43 Percolation
82B26 Phase transitions (general) in equilibrium statistical mechanics
82B27 Critical phenomena in equilibrium statistical mechanics
|
{}
|
## Cryptology ePrint Archive: Report 2014/220
Total Break of Zorro using Linear and Differential Attacks
Shahram Rasoolzadeh and Zahra Ahmadian and Mahmood Salmasizadeh and Mohammad Reza Aref
Abstract: An AES-like lightweight block cipher named Zorro was proposed at CHES 2013. Although it has a 16-byte state, it uses only 4 S-boxes per round. Its weak nonlinearity was widely criticized and caused serious vulnerabilities, to the extent that it has been directly exploited in all attacks reported so far, including weak-key, reduced-round, and even full-round attacks. In this paper, based on observations made by Wang et al., we present new differential and linear attacks on Zorro, both of which recover the full secret key with practical complexity. These attacks are based on very efficient distinguishers that have only two active S-boxes per four rounds. The time complexities of our differential and linear attacks are $2^{52.74}$ and $2^{57.85}$, and the data complexities are $2^{55.15}$ chosen plaintexts and $2^{45.44}$ known plaintexts, respectively. The results clearly show that the block cipher Zorro does not have enough security against differential and linear cryptanalysis.
Category / Keywords: Zorro, Lightweight Block Cipher, Differential Cryptanalysis, Linear Cryptanalysis
Date: received 25 Mar 2014
Contact author: rasoolzadeh shahram at gmail com
Available format(s): PDF | BibTeX Citation
Short URL: ia.cr/2014/220
[ Cryptology ePrint archive ]
|
{}
|
00-365 J. M. Combes and G. Mantica
Fractal Dimensions and Quantum Evolution Associated with Sparse Potential Jacobi Matrices (361K, latex with 5 postscript figures) Sep 18, 00
Abstract , Paper (src), View paper (auto. generated ps), Index of related papers
Abstract. We study the quantum dynamics generated via Schr\"odinger equation by sparse-potential Jacobi matrices on $l_2({\bf Z}_+)$. Exact bounds for the upper and lower intermittency functions governing the asymptotic growth of moments are derived in terms of the fractal dimensions of the spectral measure. Numerical experiments suggest that these bounds are sharp in the case of very sparse barriers.
Files: 00-365.src( 00-365.keywords , jmi6.tex , d1jl2p1.ps , d1ju23p1.ps , d2jl2p1.ps , d3ju23p1.ps , manyb.ps )
|
{}
|
Article
# Finite-temperature order-disorder phase transition in a frustrated bilayer quantum Heisenberg antiferromagnet in strong magnetic fields
• ##### Taras Krokhmalskii
Physical review. B, Condensed matter (Impact Factor: 3.77). 07/2006; DOI: 10.1103/PhysRevB.74.144430
Source: arXiv
ABSTRACT We investigate the thermodynamic properties of the frustrated bilayer quantum Heisenberg antiferromagnet at low temperatures in the vicinity of the saturation magnetic field. The low-energy degrees of freedom of the spin model are mapped onto a hard-square gas on a square lattice. We use exact diagonalization data for finite spin systems to check the validity of such a description. Using a classical Monte Carlo method we give a quantitative description of the thermodynamics of the spin model at low temperatures around the saturation field. The main peculiarity of the considered two-dimensional Heisenberg antiferromagnet is related to a phase transition of the hard-square model on the square lattice, which belongs to the two-dimensional Ising model universality class. It manifests itself in a logarithmic (low-)temperature singularity of the specific heat of the spin system observed for magnetic fields just below the saturation field.
• ##### Article: Frustrated quantum Heisenberg antiferromagnets at high magnetic fields: Beyond the flat-band scenario
ABSTRACT: We consider the spin-1/2 antiferromagnetic Heisenberg model on three frustrated lattices (the diamond chain, the dimer-plaquette chain and the two-dimensional square-kagome lattice) with almost dispersionless lowest magnon band. Eliminating high-energy degrees of freedom at high magnetic fields, we construct low-energy effective Hamiltonians which are much simpler than the initial ones. These effective Hamiltonians allow a more extended analytical and numerical analysis. In addition to the standard strong-coupling perturbation theory we also use a localized-magnon based approach leading to a substantial improvement of the strong-coupling approximation. We perform extensive exact diagonalization calculations to check the quality of different effective Hamiltonians by comparison with the initial models. Based on the effective-model description we examine the low-temperature properties of the considered frustrated quantum Heisenberg antiferromagnets in the high-field regime. We also apply our approach to explore thermodynamic properties for a generalized diamond spin chain model suitable to describe azurite at high magnetic fields. Interesting features of these highly frustrated spin models consist in a steep increase of the entropy at very small temperatures and a characteristic extra low-temperature peak in the specific heat. The most prominent effect is the existence of a magnetic-field driven Berezinskii-Kosterlitz-Thouless phase transition occurring in the two-dimensional model.
Physical Review B 88(9), 04/2013.
##### Article: The square-kagome quantum Heisenberg antiferromagnet at high magnetic fields: The localized-magnon paradigm and beyond
ABSTRACT: We consider the spin-1/2 antiferromagnetic Heisenberg model on the two-dimensional square-kagome lattice with almost dispersionless lowest magnon band. For a general exchange coupling geometry we elaborate low-energy effective Hamiltonians which emerge at high magnetic fields. The effective model to describe the low-energy degrees of freedom of the initial frustrated quantum spin model is the (unfrustrated) square-lattice spin-1/2 $XXZ$ model in a $z$-aligned magnetic field. For the effective model we perform quantum Monte Carlo simulations to discuss the low-temperature properties of the square-kagome quantum Heisenberg antiferromagnet at high magnetic fields. We pay special attention to a magnetic-field driven Berezinskii-Kosterlitz-Thouless phase transition which occurs at low temperatures.
12/2013;
##### Article: Low-temperature properties of the Hubbard model on highly frustrated one-dimensional lattices
ABSTRACT: We consider the repulsive Hubbard model on three highly frustrated one-dimensional lattices -- the sawtooth chain and two kagomé chains -- with completely dispersionless (flat) lowest single-electron bands. We construct the complete manifold of exact many-electron ground states at low electron fillings and calculate the degeneracy of these states. As a result, we obtain closed-form expressions for low-temperature thermodynamic quantities around a particular value of the chemical potential $\mu_0$. We discuss specific features of thermodynamic quantities of these ground-state ensembles such as residual entropy, an extra low-temperature peak in the specific heat, and the existence of ferromagnetism and paramagnetism. We confirm our analytical results by comparison with exact diagonalization data for finite systems.
Physical Review B 81(1):014421, 01/2010.
# snow-startstop
##### Starting and Stopping SNOW Clusters
Functions to start and stop a SNOW cluster and to set default cluster options.
Keywords
internal
##### Usage
makeCluster(spec, type = getClusterOption("type"), ...)
stopCluster(cl)
setDefaultClusterOptions(...)
makeSOCKcluster(names, ..., options = defaultClusterOptions)
makePVMcluster(count, ..., options = defaultClusterOptions)
makeMPIcluster(count, ..., options = defaultClusterOptions)
getMPIcluster()
##### Arguments
spec
cluster specification
count
number of nodes to create
names
character vector of node names
options
cluster options object
cl
cluster object
val
new option value
...
cluster option specifications
##### Details
makeCluster starts a cluster of the specified or default type and returns a reference to the cluster. Supported cluster types are "SOCK", "PVM", and "MPI". For "PVM" and "MPI" clusters the spec argument should be an integer specifying the number of slave nodes to create. For "SOCK" clusters spec should be a character vector naming the hosts on which slave nodes should be started; one node is started for each element in the vector.
stopCluster should be called to properly shut down the cluster before exiting R. If it is not called it may be necessary to use external means to ensure that all slave processes are shut down.
setDefaultClusterOptions can be used to specify alternate values for default cluster options. There are many options. The most useful ones are type and homogeneous. The default value of the type option is currently set to "PVM" if the rpvm package is available; otherwise, it is set to "MPI" if Rmpi is available, and it is set to "SOCK" if neither of these packages is found.
The homogeneous option should be set to FALSE to specify that the startup procedure for inhomogeneous clusters is to be used; this requires some additional configuration. The default setting is TRUE unless the environment variable R_HOME_LIB is defined on the master host with a non-empty value.
The option outfile can be used to specify the file to which slave node output is to be directed. The default is /dev/null; during debugging of an installation it can be useful to set this to a proper file.
The functions makeSOCKcluster, makePVMcluster, and makeMPIcluster can be used to start a cluster of the corresponding type.
In MPI configurations where process spawning is not available, and something like mpirun is used to start a master and a set of slaves, the corresponding cluster will have been pre-constructed and can be obtained with getMPIcluster. This interface is still experimental and subject to change.
For more details see http://www.stat.uiowa.edu/~luke/R/cluster/cluster.html.
##### Aliases
• getMPIcluster
• makeMPIcluster
• makePVMcluster
• makeSOCKcluster
• makeCluster
• stopCluster
• setDefaultClusterOptions
##### Examples
cl <- makeCluster(c("localhost","localhost"), type = "SOCK")
clusterApply(cl, 1:2, get("+"), 3)
stopCluster(cl)
Documentation reproduced from package snow, version 0.1-1, License: GPL
## The $$D+XD_s[X]$$ construction from GCD-domains. (English) Zbl 0656.13020
Let $$A$$ be a domain, $$S$$ a multiplicative subset of $$A$$ and $$X$$ an indeterminate over $$A$$. In this paper the author studies the ring $$A^{(S)}=\{a_0+\sum_{i\geq 1}a_iX^i \mid a_0\in A,\ a_i\in S^{-1}A\}$$ assuming that $$A$$ is a GCD-domain. In particular, he shows that the behaviour of $$A^{(S)}$$ depends on the relationship between $$S$$ and the prime ideals $$P$$ of $$A$$ such that $$A_P$$ is a valuation ring. As an application he constructs examples of locally GCD-domains which are not GCD-domains.
Reviewer: L.Bădescu
### MSC:
13F15 Commutative rings defined by factorization properties (e.g., atomic, factorial, half-factorial)
13B02 Extension theory of commutative rings
13G05 Integral domains
### Keywords:
valuation rings; GCD-domain
• heterogeneity() returns a heterogeneity or dominance index.
• evenness() returns an evenness measure.
## Usage
heterogeneity(object, ...)
evenness(object, ...)
index_berger(x, ...)
index_brillouin(x, ...)
index_mcintosh(x, ...)
index_shannon(x, ...)
index_simpson(x, ...)
# S4 method for matrix
heterogeneity(
object,
method = c("berger", "brillouin", "mcintosh", "shannon", "simpson")
)
# S4 method for data.frame
heterogeneity(
object,
method = c("berger", "brillouin", "mcintosh", "shannon", "simpson")
)
# S4 method for matrix
evenness(object, method = c("shannon", "brillouin", "mcintosh", "simpson"))
# S4 method for data.frame
evenness(object, method = c("shannon", "brillouin", "mcintosh", "simpson"))
# S4 method for numeric
index_berger(x, na.rm = FALSE, ...)
# S4 method for numeric
index_brillouin(x, evenness = FALSE, na.rm = FALSE, ...)
# S4 method for numeric
index_mcintosh(x, evenness = FALSE, na.rm = FALSE, ...)
# S4 method for numeric
index_shannon(x, evenness = FALSE, base = exp(1), na.rm = FALSE, ...)
# S4 method for numeric
index_simpson(x, evenness = FALSE, na.rm = FALSE, ...)
## Arguments
object
An $$m \times p$$ numeric matrix or data.frame of count data (absolute frequencies giving the number of individuals for each class).
...
Currently not used.
x
A numeric vector of count data (absolute frequencies).
method
A character string specifying the index to be computed (see details). Any unambiguous substring can be given.
na.rm
A logical scalar: should missing values (including NaN) be removed?
evenness
A logical scalar: should an evenness measure be computed instead of a heterogeneity/dominance index?
base
A positive numeric value specifying the base with respect to which logarithms are computed.
## Value
• heterogeneity() returns an HeterogeneityIndex object.
• evenness() returns an EvennessIndex object.
• index_*() return a numeric vector.
## Details
Diversity measurement assumes that all individuals in a specific taxa are equivalent and that all types are equally different from each other (Peet 1974). A measure of diversity can be achieved by using indices built on the relative abundance of taxa. These indices (sometimes referred to as non-parametric indices) benefit from not making assumptions about the underlying distribution of taxa abundance: they only take relative abundances of the species that are present and species richness into account. Peet (1974) refers to them as indices of heterogeneity.
Diversity indices focus on one aspect of the taxa abundance and emphasize either richness (weighting towards uncommon taxa) or dominance (weighting towards abundant taxa; Magurran 1988).
Evenness is a measure of how evenly individuals are distributed across the sample.
## Note
Ramanujan approximation is used for $$x!$$ computation if $$x > 170$$.
## Heterogeneity and Evenness Measures
The following heterogeneity index and corresponding evenness measures are available (see Magurran 1988 for details):
berger
Berger-Parker dominance index. The Berger-Parker index expresses the proportional importance of the most abundant type. This metric is highly biased by sample size and richness; moreover, it does not make use of all the information available from the sample.
brillouin
Brillouin diversity index. The Brillouin index describes a known collection: it does not assume random sampling in an infinite population. Pielou (1975) and Laxton (1978) argue for the use of the Brillouin index in all circumstances, especially in preference to the Shannon index.
mcintosh
McIntosh dominance index. The McIntosh index expresses the heterogeneity of a sample in geometric terms. It describes the sample as a point of a $$S$$-dimensional hypervolume and uses the Euclidean distance of this point from the origin.
shannon
Shannon-Wiener diversity index. The Shannon index assumes that individuals are randomly sampled from an infinite population and that all taxa are represented in the sample (it does not reflect the sample size). The main source of error arises from the failure to include all taxa in the sample: this error increases as the proportion of species discovered in the sample declines (Peet 1974, Magurran 1988). The maximum likelihood estimator (MLE) is used for the relative abundance; this estimator is known to be negatively biased by sample size.
simpson
Simpson dominance index for finite sample. The Simpson index expresses the probability that two individuals randomly picked from a finite sample belong to two different types. It can be interpreted as the weighted mean of the proportional abundances. This metric is a true probability value, it ranges from $$0$$ (perfectly uneven) to $$1$$ (perfectly even).
The berger, mcintosh and simpson methods return a dominance index, not the reciprocal or inverse form usually adopted, so that an increase in the value of the index accompanies a decrease in diversity.
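To make the formulas behind three of these measures concrete, here is a short Python sketch. This is my own illustration, not the package's code; the finite-sample form of Simpson's index, $$\sum n_i(n_i-1)/(N(N-1))$$, is assumed.

```python
import math

def index_shannon(counts, base=math.e):
    """Shannon-Wiener index from absolute frequencies (MLE of relative abundances)."""
    n = sum(counts)
    return -sum((c / n) * math.log(c / n, base) for c in counts if c > 0)

def index_simpson(counts):
    """Simpson dominance index for a finite sample: sum n_i(n_i-1) / (N(N-1))."""
    n = sum(counts)
    return sum(c * (c - 1) for c in counts) / (n * (n - 1))

def index_berger(counts):
    """Berger-Parker dominance: proportional abundance of the most abundant type."""
    return max(counts) / sum(counts)
```

A perfectly even sample of k classes gives a Shannon index of log(k), which is why the corresponding evenness measure divides by that maximum.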
## References
Berger, W. H. & Parker, F. L. (1970). Diversity of Planktonic Foraminifera in Deep-Sea Sediments. Science, 168(3937), 1345-1347. doi:10.1126/science.168.3937.1345 .
Brillouin, L. (1956). Science and information theory. New York: Academic Press.
Kintigh, K. W. (1989). Sample Size, Significance, and Measures of Diversity. In Leonard, R. D. and Jones, G. T., Quantifying Diversity in Archaeology. New Directions in Archaeology. Cambridge: Cambridge University Press, p. 25-36.
Laxton, R. R. (1978). The measure of diversity. Journal of Theoretical Biology, 70(1), 51-67. doi:10.1016/0022-5193(78)90302-8 .
Magurran, A. E. (1988). Ecological Diversity and its Measurement. Princeton, NJ: Princeton University Press. doi:10.1007/978-94-015-7358-0 .
McIntosh, R. P. (1967). An Index of Diversity and the Relation of Certain Concepts to Diversity. Ecology, 48(3), 392-404. doi:10.2307/1932674 .
Peet, R. K. (1974). The Measurement of Species Diversity. Annual Review of Ecology and Systematics, 5(1), 285-307. doi:10.1146/annurev.es.05.110174.001441 .
Pielou, E. C. (1975). Ecological Diversity. New York: Wiley. doi:10.4319/lo.1977.22.1.0174b
Shannon, C. E. (1948). A Mathematical Theory of Communication. The Bell System Technical Journal, 27, 379-423. doi:10.1002/j.1538-7305.1948.tb01338.x .
Simpson, E. H. (1949). Measurement of Diversity. Nature, 163(4148), 688-688. doi:10.1038/163688a0 .
Other diversity measures: occurrence(), plot_diversity, rarefaction(), richness(), similarity(), simulate(), turnover()
N. Frerebeau
## Examples
data("chevelon", package = "folio")
## Shannon diversity index
(h <- heterogeneity(chevelon, method = "shannon"))
#> [1] 1.6681740 1.2130076 0.6931472 0.0000000 0.0000000 1.7481555 1.0986123
#> [8] 1.8711604 1.7207095 1.9115521 1.0397208 1.8200760
(e <- evenness(chevelon, method = "shannon"))
#> [1] 0.8572718 0.8750000 1.0000000 NaN NaN 0.8983742 1.0000000
#> [8] 0.9615862 0.8842698 0.8699849 0.9463946 0.9353340
## Bootstrap resampling (summary statistics)
bootstrap(h, f = NULL)
#> original mean bias error
#> P610s 1.6681740 1.6961206 0.027946609 0.2342647
#> P610e 1.2130076 1.1727131 -0.040294470 0.4265925
#> P625 0.6931472 0.6097586 -0.083388569 0.5200521
#> P630 0.0000000 0.2172584 0.217258389 0.3750845
#> P307 0.0000000 0.2172584 0.217258389 0.3839551
#> P631 1.7481555 1.7579334 0.009777927 0.2260357
#> P623 1.0986123 0.9869087 -0.111703558 0.5335377
#> P624 1.8711604 1.8556484 -0.015512037 0.2340369
#> P626s 1.7207095 1.7171704 -0.003539092 0.2361014
#> P626e 1.9115521 1.9293207 0.017768586 0.1423581
#> P627 1.0397208 0.9445380 -0.095182779 0.5056794
#> P628 1.8200760 1.8327355 0.012659539 0.2128472
bootstrap(h, f = summary)
#> Min. 1st Qu. Median Mean 3rd Qu. Max.
#> P610s 0.6931472 1.5810938 1.7480673 1.7140440 1.8553727 2.302585
#> P610e 0.0000000 0.9649629 1.2636544 1.1899896 1.4708085 2.019815
#> P625 0.0000000 0.0000000 0.6931472 0.5989979 1.0986123 1.791759
#> P630 0.0000000 0.0000000 0.0000000 0.2344047 0.6931472 1.609438
#> P307 0.0000000 0.0000000 0.0000000 0.2286251 0.6931472 1.791759
#> P631 0.6931472 1.6313454 1.7917595 1.7661066 1.9328066 2.302585
#> P623 0.0000000 0.6931472 1.0986123 1.0093820 1.3862944 2.079442
#> P624 0.0000000 1.7328680 1.8848713 1.8539497 2.0140355 2.282174
#> P626s 0.3767702 1.5843113 1.7448088 1.7105253 1.8912387 2.205226
#> P626e 1.2989958 1.8409949 1.9433139 1.9260442 2.0439740 2.231217
#> P627 0.0000000 0.6931472 1.0397208 0.9594752 1.3321790 2.022809
#> P628 0.5623351 1.6769878 1.8343720 1.8155787 1.9730014 2.245172
quant <- function(x) quantile(x, probs = c(0.25, 0.50))
bootstrap(h, f = quant)
#> 25% 50%
#> P610s 1.5810938 1.7480673
#> P610e 0.9649629 1.2730283
#> P625 0.0000000 0.6931472
#> P630 0.0000000 0.0000000
#> P307 0.0000000 0.0000000
#> P631 1.6114722 1.7917595
#> P623 0.6931472 1.0986123
#> P624 1.7347645 1.8848713
#> P626s 1.5703135 1.7448088
#> P626e 1.8321435 1.9409659
#> P627 0.6789889 1.0549202
#> P628 1.6726254 1.8340788
## Jackknife resampling
jackknife(h)
#> mean bias error
#> P610s 1.5680395 -0.9012101 0.2056978
#> P610e 1.1096729 -0.9300118 0.3968560
#> P625 0.5545177 -1.2476649 0.8317766
#> P630 0.0000000 0.0000000 0.0000000
#> P307 0.0000000 0.0000000 0.0000000
#> P631 1.6463856 -0.9159291 0.2108823
#> P623 0.9769728 -1.0947558 0.5574224
#> P624 1.7643489 -0.9613033 0.2126469
#> P626s 1.6152716 -0.9489415 0.2242804
#> P626e 1.8080769 -0.9312771 0.1429090
#> P627 0.9244221 -1.0376881 0.5301829
#> P628 1.7148123 -0.9473731 0.2085352
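The resampling idea behind the bootstrap output above can be sketched generically: expand the counts into individuals, resample with replacement, and recompute the index on each replicate. This is a hypothetical Python illustration of the procedure, not the package's implementation.

```python
import math
import random

def shannon(counts):
    n = sum(counts)
    return -sum(c / n * math.log(c / n) for c in counts if c > 0)

def bootstrap_index(counts, f=shannon, n_rep=1000, seed=0):
    """Resample individuals with replacement, recompute f, report mean and bias."""
    rng = random.Random(seed)
    # One entry per individual, labelled by its class index.
    pool = [i for i, c in enumerate(counts) for _ in range(c)]
    reps = []
    for _ in range(n_rep):
        sample = [pool[rng.randrange(len(pool))] for _ in pool]
        tallies = [sample.count(i) for i in range(len(counts))]
        reps.append(f(tallies))
    mean = sum(reps) / n_rep
    return {"original": f(counts), "mean": mean, "bias": mean - f(counts)}
```

The jackknife works the same way except that each replicate drops a single individual instead of resampling the whole collection.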
## by Fabien Chouteau – Oct 20, 2020
Welcome to the Ada for micro:bit series where we look at simple examples to learn how to program the BBC micro:bit with Ada.
In this sixth part we will see how to read the analog value of a pin. This means reading a value between 0 and 1023 that represents the voltage applied to the pin: 0 means 0 volts, 1023 means 3.3 volts.
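Converting a raw reading back to volts is simple arithmetic. Here is a Python sketch of the conversion (the function name is mine, not part of the micro:bit library):

```python
def adc_to_volts(raw, vref=3.3, resolution=1023):
    """Convert a raw 10-bit ADC reading (0..1023) to volts."""
    if not 0 <= raw <= resolution:
        raise ValueError("reading out of range")
    return vref * raw / resolution
```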
### Wiring Diagram
For this example we will need a couple of extra parts:
• An LED
• A 470 ohm resistor
• A potentiometer
• A couple of wires to connect them all
For this example we start from the same circuit as the pin output example, and we add a potentiometer. The center pin of the potentiometer is connected to pin 1 of the micro:bit; the other two pins are respectively connected to GND and 3V.
### Interface
To read the analog value of the IO pin we are going to use the function Analog of the package MicroBit.IOs.
function Analog (Pin : Pin_Id) return Analog_Value
with Pre => Supports (Pin, Analog);
-- Read the voltage applied to the pin. 0 means 0V, 1023 means 3.3V
Arguments:
• Pin : The id of the pin that we want to read the analog value from
Precondition:
• The function Analog has a precondition that the pin must support analog IO.
In the code, we are going to write an infinite loop that reads the value of pin 1 and sets pin 0 to the same value.
This means that you can control the brightness of the LED using the potentiometer.
Here is the full code of the example:
with MicroBit.IOs;
procedure Main is
Value : MicroBit.IOs.Analog_Value;
begin
-- Loop forever
loop
-- Read analog value of pin
Value := MicroBit.IOs.Analog (1);
-- Write analog value of pin 0
MicroBit.IOs.Write (0, Value);
end loop;
end Main;
Following the instructions of Part 1 you can open this example (Ada_Drivers_Library-master\examples\MicroBit\analog_in\analog_in.gpr), compile and program it on your micro:bit.
See you next week for another Ada project on the micro:bit.
Don't miss out on the opportunity to use Ada in action by taking part in the fifth annual Make with Ada competition! We're calling on developers across the globe to build cool embedded applications using the Ada and SPARK programming languages and are offering over $9,000 in total prizes. Find out more and register today!
# Introduction
This chapter investigates the forces exerted on a stationary obstacle situated in a uniform, high Reynolds number, subsonic (and, therefore, effectively incompressible--see Section 1.17) wind, on the assumption that the obstacle is sufficiently streamlined that there is no appreciable separation of the boundary layer from its back surface. Such an obstacle is termed an airfoil (or aerofoil). Obviously, airfoil theory is fundamental to the theory of flight. Further information on this subject can be found in Milne-Thomson 1958.
The flow around an airfoil is essentially irrotational and inviscid everywhere apart from a thin boundary layer localized to its surface, and a thin wake emitted by its trailing edge. (See Sections 8.5 and 8.6.) It follows that, for the flow external to the boundary layer and wake, we can write
$$\mathbf{v} = \nabla \phi,$$   (9.1)
which automatically ensures that the flow is irrotational. Assuming that the flow is also incompressible, so that $$\nabla \cdot \mathbf{v} = 0$$, the velocity potential, $$\phi$$, satisfies Laplace's equation: that is,
$$\nabla^2 \phi = 0.$$   (9.2)
The appropriate boundary condition at the surface of the airfoil is that the normal velocity be zero. In other words, $$\mathbf{v} \cdot \mathbf{n} = 0$$, where $$\mathbf{n}$$ is a unit vector normal to the surface. In general, the tangential velocity at the airfoil surface, obtained by solving Equation (9.2) in the external region, subject to the boundary condition $$\mathbf{v} \cdot \mathbf{n} = 0$$ on the surface, is non-zero. Of course, this is inconsistent with the no slip condition, which demands that the tangential velocity be zero at the surface. (See Section 8.2.) However, as described in the previous chapter, this inconsistency is resolved by the boundary layer, across which the tangential velocity is effectively discontinuous, being non-zero on the outer edge of the layer (where it interfaces with the irrotational flow), and zero on the inner edge (where it interfaces with the airfoil). The discontinuity in the tangential velocity across the layer implies the presence of bound vortices covering the surface of the airfoil (see Section 9.7), and also gives rise to a friction drag acting on the airfoil in the direction of the external flow. However, the magnitude of this drag scales as $$\mathrm{Re}^{-1/2}$$, where $$\mathrm{Re}$$ is the Reynolds number of the wind. (See Section 8.5.) Hence, such drag becomes negligibly small in the high Reynolds number limit. In the following, we shall assume that any form drag, due to the residual separation of the boundary layer at the back of the airfoil, is also negligibly small. Moreover, for the sake of simplicity, we shall initially restrict our discussion to two-dimensional situations in which a high Reynolds number wind flows transversely around a stationary airfoil of infinite length (in the $$z$$-direction) and uniform cross-section (parallel to the $$x$$-$$y$$ plane).
Richard Fitzpatrick 2016-03-31
# Why is the absolute value of this Gauss sum obvious?
I came across the Gauss sum discussed in the following post in a problem from my Galois theory course: https://mathoverflow.net/a/71282. Why exactly is the square of its norm obvious?
• It does not say that it's obvious, but that it's easy, meaning that the computation does not require any specific insight, it's just a computation. – Captain Lama Apr 29 '16 at 16:52
• Please ask a selfcontained question. – quid Apr 29 '16 at 16:53
• Mathematicians use "easy" or "obvious" as code for "I don't feel like proving this". – carmichael561 Apr 29 '16 at 16:56
• I don't see how the computation proceeds, though. – Vik78 Apr 29 '16 at 16:56
Let $G=\sum_{a}\Big(\frac{a}{p}\Big)\zeta_p^a$. Here's a proof that $G^2=\Big(\frac{-1}{p}\Big)p$, which in particular shows that $|G|^2=p$.
As $a$ runs over $(\mathbb{Z}/p\mathbb{Z})^{\times}$, so does $ab$ for fixed $b\neq0$, so we have: $$G^2=\sum_{a,b}\left(\frac{a}{p}\right)\left(\frac{b}{p}\right)\zeta_p^{a+b}=\sum_{a,b}\left(\frac{ab}{p}\right)\zeta_p^{a+b}=\sum_{a,b}\left(\frac{ab^2}{p}\right)\zeta_p^{b(a+1)}=\sum_{a,b}\left(\frac{a}{p}\right)\zeta_p^{b(a+1)}$$ $$=\sum_{b}\left(\frac{-1}{p}\right)+\sum_{a\neq-1}\left(\frac{a}{p}\right)\sum_{b}\zeta_p^{b(a+1)}$$
Moreover, $1+\zeta_p+\dots+\zeta_p^{p-1}=0$, so $\displaystyle\sum_{b}\zeta_p^{b(a+1)}=-1$, and
$$G^2=(p-1)\left(\frac{-1}{p}\right)-\sum_{a\neq-1}\left(\frac{a}{p}\right)=p\left(\frac{-1}{p}\right)-\sum_{a}\left(\frac{a}{p}\right)=p\left(\frac{-1}{p}\right)$$ since there are as many quadratic residues as non-residues mod $p$.
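The identity $G^2=\big(\frac{-1}{p}\big)p$ is also easy to confirm numerically. Here is a small Python check (my own addition, using Euler's criterion for the Legendre symbol):

```python
import cmath

def legendre(a, p):
    """Legendre symbol (a/p) for an odd prime p, via Euler's criterion."""
    r = pow(a % p, (p - 1) // 2, p)
    return -1 if r == p - 1 else r

def gauss_sum(p):
    """G = sum over a of (a/p) * zeta_p^a, with zeta_p = exp(2*pi*i/p)."""
    zeta = cmath.exp(2j * cmath.pi / p)
    return sum(legendre(a, p) * zeta ** a for a in range(1, p))
```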
# What produces more severe burns
Question.
What produces more severe burns, boiling water or steam?
Solution:
Steam will produce more severe burns than boiling water. This is because 1 g of steam at 373 K (100 °C) contains 2260 J more heat energy, stored as latent heat of vaporization, than 1 g of boiling water at 373 K (100 °C). On contact with the skin, the steam first releases this latent heat as it condenses, and the resulting hot water then releases further heat as it cools. Thus steam produces more severe burns.
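A rough back-of-the-envelope comparison makes the difference vivid. This sketch assumes a specific heat of 4.18 J/(g·K) for water and cooling down to skin temperature, about 37 °C:

```python
C_WATER = 4.18     # specific heat of water, J/(g*K)
L_VAPOR = 2260.0   # latent heat of vaporization at 100 C, J/g

def heat_released(initial_c, is_steam, body_c=37.0):
    """Energy (J) given up by 1 g cooling from initial_c down to body_c."""
    q = C_WATER * (initial_c - body_c)
    if is_steam:
        q += L_VAPOR  # steam must first condense at 100 C
    return q
```

Per gram, condensing steam delivers roughly 2520 J versus about 260 J for boiling water: nearly ten times as much energy.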
# Relating a finite number of sets in a base to a finite number of open sets
As part of the proof that the product of two compact spaces is itself compact, I'm seeing this lemma:
Let $$\mathfrak B$$ be a base for the open sets of a topological space $$Z$$. If, for each covering $$\{B_{\beta}\}_{\beta \in J}$$ of $$Z$$ by members of $$\mathfrak B$$, there is a finite subcovering, then Z is compact.
The proof (rather too long to post here), seems to be saying:
Because every open set in $$Z$$ can be described as a union of sets in $$\mathfrak B$$, no covering of $$Z$$ can have more members than the equivalent covering in $$\mathfrak B$$.
If there were more sets in an arbitrary covering $$\{A\}$$ of $$Z$$ than there are in an equivalent covering in $$\mathfrak B$$, then $$\{A\}$$ would include sets that are not equal to a union of sets in $$\mathfrak B$$.
Is this correct?
Let $$\mathcal{U}$$ be an arbitrary open cover of $$Z$$. Let $$\mathcal{B}$$ be defined by $$\mathcal{B}= \{B \in \mathfrak{B}: \exists U \in \mathcal{U}: B \subseteq U\}$$, which is an open cover of $$Z$$ by members of $$\mathfrak{B}$$.
(Intermezzo: why is it a cover? Let $$z \in Z$$. Then for some $$U_z \in \mathcal{U}$$, $$z \in U_z$$, because $$\mathcal{U}$$ is a cover. As $$U_z$$ is open and $$\mathfrak{B}$$ is a base, we know there is some $$B_z \in \mathfrak{B}$$ such that $$z \in B_z \subseteq U_z$$. But note that we've just shown that $$B_z \in \mathcal{B}$$ and it covers $$z$$.)
So by assumption, finitely many $$B_1, B_2, \ldots, B_n$$ from $$\mathcal{B}$$ cover $$Z$$ too, and for each of these finitely many (so no axiom of choice needed) $$B_i$$ we pick a $$U_i \in \mathcal{U}$$ such that $$B_i \subseteq U_i$$, which is possible by definition of $$\mathcal{B}$$. Then certainly the $$U_1, U_2, \ldots, U_n$$ form a finite subcover of $$\mathcal{U}$$. So $$Z$$ is compact, as we started with an arbitrary open cover and found a finite subcover.
## The Involute of a Cubical Parabola
In his remarkable book The Theory of Singularities and its Applications, Vladimir Arnol’d claims that the symmetry group of the icosahedron is secretly lurking in the problem of finding the shortest path from one point in the plane to another while avoiding some obstacles that have smooth boundaries.
Arnol’d nicely expresses the awe mathematicians feel when they discover a phenomenon like this:
Thus the propagation of waves, on a 2-manifold with boundary, is controlled by an icosahedron hidden at an inflection point at the boundary. This icosahedron is hidden, and it is difficult to find it even if its existence is known.
I would like to understand this!
I think the easiest way for me to make progress is to solve this problem posed by Arnol’d:
Puzzle. Prove that the generic involute of a cubical parabola has a cusp of order 5/2 on the straight line tangent to the parabola at the inflection point.
There’s a lot of jargon here! Let me try to demystify it. (I don’t have the energy now to say how the symmetry group of the icosahedron gets into the picture, but it’s connected to the ‘5’ in the cusp of order 5/2.)
A cubical parabola is just a curve like $y = x^3$:
It’s a silly name. I guess $y = x^3$ looked at $y = x^2$ and said “I want to be a parabola too!”
The involute of a curve is what you get by attaching one end of a taut string to that curve and tracing the path of the string’s free end as you wind the string onto that curve. For example:
Here our original curve, in blue, is a catenary: the curve formed by a hanging chain. Its involute is shown in red.
There are a couple of confusing things about this picture if you’re just starting to learn about involutes. First, Sam Derbyshire, who made this picture, cleverly moved the end of the string attached to the catenary at the instant the other end hit the catenary! That allowed him to continue the involute past the moment it hits the catenary. The result is a famous curve called a tractrix.
Second, it seems that the end of the string attached to the catenary is ‘at infinity’, very far up.
But you don’t need to play either of these tricks if you’re trying to draw an involute. Take a point $p$ on a curve $C.$ Take a string of length $\ell,$ nail down one end at $p,$ and wind the string along $C.$ Then the free end of your string traces out a curve $D.$
$D$ is called an involute of $C.$ It consists of all the points you can get to from $p$ by a path of length $\ell$ that doesn’t cross $C.$
So, Arnol’d’s puzzle concerns the involute of the curve $y = x^3.$
He wants you to nail down one end of the string at any ‘generic’ location. So, don’t nail it down at $x = 0, y = 0,$ since that point is different from all the rest. That point is an inflection point, where the curve $y = x^3$ switches from curving down to curving up!
He wants you to wind the string along the curve $y = x^3,$ forming an involute. And he wants you to see what the involute does when it crosses the line $y = 0.$
This is a bit tricky, since the region $y \le x^3$ is not convex. If you nail your string down at $x = -1, y = -1$, your string will have to start out above the curve $y = x^3.$ But when the free end of your string crosses the line $y = 0,$ the story changes. Now your string will need to go below the curve $y = x^3.$
It’s a bit hard to explain this both simply and accurately, but if you imagine drawing the involute with a piece of string, I think you’ll encounter the issue I’m talking about. I hope I understand it correctly!
Anyway, suppose you succeed in drawing the involute. What should you see?
Arnol’d says the involute should have a ‘cusp of order 5/2’ somewhere on the line $y = 0.$
A cusp of order 5/2 is a singularity in an otherwise smooth curve that looks like $y^2 = x^5$ in some coordinates. In a recent post I described various kinds of cusps, and in a comment I mentioned that the cusp of order 5/2 was called a rhamphoid cusp. Strangely, I wrote all that before knowing that Arnol’d places great significance on the cusp of order 5/2 in the involute of a cubical parabola!
Simon Burton drew some nice cusps of order 5/2. The curve $y^2 = x^5$ looks like this:
This is a more typical curve with a cusp of order 5/2:
$(x-4y^2)^2 - (y+ 2x)^5 = 0$
It looks like this:
It’s less symmetrical than the curve $y^2 = x^5.$ Indeed, it looks like a bird’s beak: the word ‘rhamphoid’ means ‘beak-like’.
Arnol’d emphasizes that you should usually expect this sort of shape for a cusp of order 5/2:
It is easy to recognize this curve in experimental data, since after a generic diffeomorphism the curve consists of two branches that have equal curvatures at the common point, and hence are convex from the same side [….]
So, if we draw the involutes of a cubical parabola we should see something like this! And indeed, Marshall Hampton has made a great online program that draws these involutes. Here’s one:
The blue curve is the involute. It looks like it has a cusp of order 5/2 where it hits the line $y = 0.$ It also has a less pointy cusp where it hits the red curve $y = x^3.$ Like the cusp in the tractrix, this should be a cusp of order 3/2, also known as an ordinary cusp.
### Hints
Regarding the easier puzzle I posed above, Arnol’d gives this hint:
HINT. The curvature centers of both branches of the involute, which meet at the point of the inflectional tangent, lie at the inflection point, hence both branches have the same convexity (they are both concave from the side of the inflection point of the boundary).
That’s not what I’d call crystal clear! However, I now understand what he means by the two ‘branches’ of the involute. They come from how you need to change the rules of the game as the free end of your string crosses the line $y = 0.$ Remember, I wrote:
If you nail your string down at $x = -1, y = -1$, your string will have to start out above the curve $y = x^3.$ But when the free end of your string crosses the line $y = 0$, the story changes. Now your string will need to go below the curve $y = x^3.$
When the rules of the game change, he claims there’s a cusp of order 5/2 in the involute.
I also think I finally understand the picture that Arnol’d uses to explain what’s going on:
It shows the curve $y = x^3$ in bold, and three involutes of this curve. One involute is not generic: it goes through the special point $x = 0, y = 0.$ The other two are. They each have a cusp of order 5/2 where they hit the line $y = 0,$ but also a cusp of order 3/2 where they hit the curve $y = x^3.$ We can recognize the cusps of order 5/2, if we look carefully, by the fact that both branches are convex on the same side.
But again, the challenge is to prove that these involutes have cusps of order 5/2 where they hit the line $y = 0.$ A cusp of order 7/2 would also have two branches that are convex on the same side!
Here’s one more hint. Wikipedia says that if we have a curve
$C :\mathbb{R} \to \mathbb{R}^2$
parametrized by arclength, so
$|C^\prime(s)|=1$
for all $s,$ then its involute is the curve
$D :\mathbb{R} \to \mathbb{R}^2$
given by
$D(s) = C(s)- s C^\prime(s)$
Strictly speaking, this must be an involute. And it must somehow handle the funny situations I described, where the involute fails to be smooth. I don’t know how it does this.
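Here’s a quick sanity check of this formula (a sympy sketch of mine, using the unit circle as the simplest unit-speed curve). Since $D^\prime(s) = -s\, C^{\prime\prime}(s),$ the involute’s tangent is always normal to the original curve’s tangent, just as the taut-string picture demands:

```python
# Sanity check of the involute formula D(s) = C(s) - s C'(s) on the
# unit-speed circle. An illustrative sketch, not from the post.
from sympy import symbols, cos, sin, Matrix, simplify

s = symbols('s', positive=True)

C = Matrix([cos(s), sin(s)])   # unit-speed parametrization of the circle
D = C - s*C.diff(s)            # its involute

print(simplify(C.diff(s).dot(C.diff(s))))  # 1: the curve really is unit-speed
print(simplify(D.diff(s).dot(C.diff(s))))  # 0: D'(s) = -s C''(s) is normal to C
```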
### 117 Responses to The Involute of a Cubical Parabola
1. But if we refrain from what? Did you mean to post this already? It looks like you weren’t finished.
• John Baez says:
It’s annoyingly easy to hit ‘publish’ when you’re trying to update a draft, and when you do that, the whole world is notified, so people send me puzzled emails if I then go and ‘unpublish’ it. I didn’t want to publish this one today—but okay, it’s done now.
2. jessemckeown says:
So, to a piecewise convex curve there is an envelope of tangent lines; and to another piecewise convex curve there is an envelope of normal lines; and curve B is an involute of curve A (also pronounced: A is THE evolute of B) if the Tangent envelope of A is the Normal envelope of B. Curious fact: The Evolute of an algebraic curve is another algebraic curve. Curiouser fact: some algebraic curves are not the evolute of any algebraic curve — an ellipse is a very good bad example. (I also think evolutes are the right way to think about the Four-Vertex Theorem)
This should sound like a similar relationship between integrals (involutes) and derivatives (evolutes), in part because constructing an involute is solving a differential equation, while constructing the evolute is basically dividing by a derivative; but more: because of the string picture described above, (distance to an) involute measures arc length. Corollary: the arclengths between algebraic points on evolutes of algebraic curves are algebraic numbers.
Curiouser and curiouser fact: Apollonius mentioned evolutes at least once, and Archimedes’ Spiral is trying very hard to be a circle involute; but the semicubic parabola, incidentally the evolute of the ordinary parabola, seems to have been the first noticed algebraic curve with piecewise algebraic arclength, around 1659. Rational hypocycloids are further examples (similar to their own evolutes!) but I don’t know who first measured them. Cf Richeson 2013, esp. section 6 (starting on p. 11).
However, none of this has much to do with finite simple Lie algebras or Coxeter groups.
• John Baez says:
Thanks for all the erudition!
For those who find this hard to follow, another of Sam Derbyshire’s pictures might help. Just as the tractrix is an involute of the catenary:
so the catenary is the evolute of the tractrix:
You’ll note that the tractrix has a cusp of order 3/2 where it hits the catenary and you have to change your mind a bit about how it’s defined. Similarly, the involutes of the cubical parabola have cusps of order 3/2 when they hit that curve.
Arnol’d is claiming the (generic) involutes of the cubical parabola have cusps of order 5/2 for a subtler reason: that curve has an inflection point. This forces you to change your mind in a subtler way about how an involute is defined.
• Simon Burton says:
I like the connections to wavefronts and pathfinding. Should we think of the tractrix as the ray (path) dual to the tangent lines which are the wavefronts? It looks like it gets refracted around a lot and reflects off of the catenary at right angles.
• John Baez says:
I see what you mean, but…
In general, you can think of an involute as a wavefront of some light that moves at constant speed and is not allowed to enter an obstacle.
From this viewpoint, it’s the tractrix that is the wavefront! Each branch of it consists of all points that can be reached by a path of length $\ell$ starting from a particular point $p,$ where the path is not allowed to enter the catenary.
(As mentioned in my post, this example is a bit tricky, because we’re using two different points $p$ for the two different branches of the tractrix. Furthermore, these points are ‘at infinity’—or at least way, way up the catenary.)
3. John Baez says:
This should be helpful: Wikipedia says if we have a curve
$C :\mathbb{R} \to \mathbb{R}^2$
parametrized by arclength, so
$|C^\prime(s)|=1$
for all $s,$ then its involute is the curve
$D :\mathbb{R} \to \mathbb{R}^2$
given by
$D(s) = C(s)- s C^\prime(s)$
I’ll add this to the ‘hints’ in the post.
4. Marshall Hampton says:
Here’s a little Sage cell that draws the example, just made it to make sure I understood some of what you were saying: http://sagecell.sagemath.org/?q=hwedlo
• John Baez says:
Wow, that’s MAGNIFICENT!
Everyone go there and look! He’s taken Arnol’d’s mysterious picture:
and brought it into the 21st century. There’s a slider that lets you see all the involutes of the curve $y = x^3.$ You can see the catastrophe or ‘perestroika’ that occurs when the involute hits the origin, which is an inflection point of $y = x^3.$ But what matters more to me right now are the cusps of order 5/2 that occur in involutes that don’t hit the origin.
Thanks, Marshall.
• John Baez says:
By the way: is there some fairly automatic way to make a nice animated gif showing how the involute changes as the parameter in Marshall’s slider moves? Marshall’s program makes a nice image for each value of that parameter. We’d need a way for Sage to create lots of images as that parameter changes, and bundle them up into a (looped) animated gif.
If someone could do this, I would love to feature it on Visual Insight, and of course credit everyone involved.
• Marshall Hampton says:
Yeah I can do that, gotta teach a class in a bit so it might have to be tonight.
• Marshall Hampton says:
Here is one effort:
Here’s the code in Sage if you want to tweak it (sorry the line breaks will be screwed up):
var('t')
p = (t, t^3)
pd = (1, 3*t^2)
sp = sqrt(1 + 9*t^4)
invpts = srange(-1, 1, .025, include_endpoint=True)
def xi(ti, invpt):
    return ti - numerical_integral(sp, invpt, ti)[0]/sp(t=ti)
def yi(ti, invpt):
    return ti^3 - 3*ti^2*numerical_integral(sp, invpt, ti)[0]/sp(t=ti)
outs = []
for invpt in invpts:
    p1 = line2d([[xi(i, invpt), yi(i, invpt)] for i in srange(-2, 2, .01, include_endpoint=True)],
                xmin=-1, ymin=-1, xmax=1, ymax=1)
    p2 = parametric_plot((t, t^3), (-1, 1), rgbcolor='red',
                         xmin=-1, ymin=-1, xmax=1, ymax=1)
    outs.append(p1 + p2)
animate(outs)
• John Baez says:
Wow, that’s GREAT! I will use this on Visual Insight. At some point I’ll pester you to electronically ‘sign’ a form giving the American Mathematical Society permission to use this gif—that’s a thing they make me do.
5. Marshall Hampton says:
OK – perhaps I can make a somewhat better one; I cobbled that together pretty fast so I’m sure it could be improved.
• Marshall Hampton says:
All together – probably not useful:
• Simon Burton says:
Oh wow, this is so beautiful. It looks to me like a surface in 3 dimensions, viewed from “above”. It’s as if the involutes unwrap the plane into a more complicated surface in 3 dimensions. And on this surface I suspect the involutes are smooth (they have stationary points at the cusps). This must be what Arnold is calling “the discriminant of the symmetry group of an icosahedron.” I have no idea how this works though.
Here is the figure from Arnold’s book:
• John Baez says:
Simon wrote:
It looks to me like a surface in 3 dimensions, viewed from “above”.
It is. That’s part of what Arnol’d is claiming.
This must be what Arnold is calling “the discriminant of the symmetry group of an icosahedron.”
Right. Perhaps the easiest way to explain it is this—though it’s not ultimately the most beautiful way.
Take the polynomial
$x^5 + ax^4 + bx^2 + c$
For most values of $a,b,c$ the 5 roots of this polynomial are all distinct. For some special values of $a,b,c$ it has ‘repeated roots’: that is, fewer than 5 different roots. Those special values give points $(a,b,c)$ that lie on an interesting surface in 3d space. From a certain view, and perhaps warped a bit, it should look like this:
And this should remind you hugely of the involutes of the cubical parabola!
I wish someone could use a computer to draw the surface I just described. It would be easier if I knew the discriminant of a quintic. This is a function $D(a,b,c,d,e)$ that equals zero when
$x^5 + a x^4 + b x^3 + c x^2 + d x + e$
has repeated roots. Unfortunately $D(a,b,c,d,e)$ is a polynomial with 59 terms—and worse, I don’t know what it is! If we knew it, we could trim it down to handle the special case of polynomials like
$x^5 + ax^4 + bx^2 + c$
In other words, we’d look at $D(a,0,b,0,c).$ A lot of the 59 terms would go away. Then someone good with computers could draw a picture of where the discriminant vanishes.
• Greg Egan says:
The discriminant of:
$Q = x^5 + a x^4+b x^2+c$
is
$D_x(Q) = c \left(256 a^5 c^2-128 a^4 b^2 c+16 a^3 b^4+2000 a^2 b c^2-900 a b^3 c+108 b^5+3125 c^3\right)$
This in turn has discriminants wrt $a, b, c$ of:
$D_a(D_x(Q)) = 4294967296 c^{13} \left(4 b^5+3125 c^3\right)^2 \left(54 b^5+3125 c^3\right)^3$
$D_b(D_x(Q)) = -256 c^{15} \left(4096 a^5-253125 c\right)^3 \left(16 a^5-3125 c\right)^2$
$D_c(D_x(Q)) = -256 b^{13} \left(4 a^3+25 b\right)^2 \left(4 a^3+27 b\right)^2 \left(128 a^3+675 b\right)^3$
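Greg’s first formula is easy to confirm with a computer algebra system. Here’s a quick sympy sketch (not part of the original comment; it assumes sympy’s standard sign convention for the discriminant, which agrees with the formula above):

```python
# Check Greg's discriminant formula for Q = x^5 + a x^4 + b x^2 + c
# (an added sketch, not from the comment).
from sympy import symbols, discriminant, expand

x, a, b, c = symbols('x a b c')
Q = x**5 + a*x**4 + b*x**2 + c

greg = c*(256*a**5*c**2 - 128*a**4*b**2*c + 16*a**3*b**4
          + 2000*a**2*b*c**2 - 900*a*b**3*c + 108*b**5 + 3125*c**3)

print(expand(discriminant(Q, x) - greg))  # 0: the two expressions agree
```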
• John Baez says:
Thanks, Greg! What do these higher discriminants let us do, exactly?
Hmm… I’m trying to cajole someone into drawing the surface
$c \left(256 a^5 c^2-128 a^4 b^2 c+16 a^3 b^4+2000 a^2 b c^2-900 a b^3 c+108 b^5+3125 c^3\right) = 0$
(If anyone finds this confusing, they should replace $a,b,c$ by $x,y,z$.) I guess the higher discriminants say something about where various sheets of this surface collide. They do so in a way that’s heavily biased towards the $a,b$ and $c$ axes. But I guess that still could be useful.
• Simon Burton says:
Yes I am attempting to make some plots of this discriminant surface and will post any that look reasonable.
• Greg Egan says:
So far the images I have of the surface are pretty horrible, because (as we discussed in relation to the Capricornoid), capturing cusps accurately can be tricky. A naive plot of the zero set misses the sharp edges of the surface, and unfortunately solving for any one of the variables in terms of the other two also seems to give messy results in Mathematica, due to some kind of numerical artifacts.
BTW, I think Arnol’d omits the plane $c=0$ from the surface he’s drawn, so I’ll concentrate on the remaining factor of the discriminant.
The higher discriminants should let us figure out any genuine, coordinate-independent self-intersections of the surface, because their projections should show up in all three discriminants. So I think the curves:
$\displaystyle{ (a,-\frac{128 a^3}{675},\frac{4096 a^5}{253125}) }$
$\displaystyle{ (a,-\frac{4 a^3}{25},\frac{16 a^5}{3125}) }$
and
$\displaystyle{ (a,0,0) }$
all give sets where the surface intersects itself. All four discriminants (that of the quintic, and the three higher discriminants wrt $a,b,c$) are zero along these curves.
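These claims can be checked exactly: substituting each curve into the discriminant of $x^5 + a x^4 + b x^2 + c$ makes it vanish identically. Here’s a sympy sketch (an addition, not from the comment):

```python
# Exact check that all three of Greg's curves lie on the zero set of the
# discriminant c*(256 a^5 c^2 - ... + 3125 c^3). An added sketch.
from sympy import symbols, Rational, expand

a, b, c = symbols('a b c')
disc = c*(256*a**5*c**2 - 128*a**4*b**2*c + 16*a**3*b**4
          + 2000*a**2*b*c**2 - 900*a*b**3*c + 108*b**5 + 3125*c**3)

curves = [
    {b: -Rational(128, 675)*a**3, c: Rational(4096, 253125)*a**5},
    {b: -Rational(4, 25)*a**3,    c: Rational(16, 3125)*a**5},
    {b: 0,                        c: 0},
]
for cv in curves:
    print(expand(disc.subs(cv)))  # 0 for each curve
```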
• jessemckeown says:
If we only want to draw the graph of the discriminant, it should be enough to restrict the coefficients in the cubic
$x^3 - p x^2 + q x - r$
such that the product
$(x^2 - 2 u x + u^2) (x^3 - p x^2 + q x - r)$
is depressed in the desired way, which is a system of two equations, linear in $p,q,r$; one then has a 2-dimensional parametrization of the discriminant surface. One reasonably-generic singular quintic, then, is
$\displaystyle{ x^5 + (-2u-p) x^4 + (\frac{3u^3}{2} + 2pu^2)x^2 - \frac{u^5}{2-pu^4} }$
• John Baez says:
Jesse: I could only get your last big formula to parse by completely retyping it; I hope I didn’t introduce any errors.
• Greg Egan says:
Here’s an image of the surface of zero discriminant:
The yellow part of the surface is where $c$ is single-valued; the green, blue and orange parts are where $c$ is triple-valued. The straight green line and the curved blue line run along sharp edges of the surface, where it self-intersects and comes to an end (in the real domain); the red line marks a curve where the surface self-intersects and continues.
The 5/2 cusps should lie along the green line, and the 3/2 cusps along the projection into the $(a,b)$ plane of the blue curve, which is a cubic.
• Greg Egan says:
I was hoping that the zero-discriminant surface might be related to the involutes of a cubic in the plane in the simplest possible way … but it’s not.
If you project the blue curve that marks one “fold” in the surface onto the $(a,b)$ plane, you get a cubic: $b=-\frac{128 a^3}{675}$. If you project the red curve that marks the self-intersection of the surface onto the $(a,b)$ plane, you get another cubic: $b=-\frac{4 a^3}{25}$.
So, I was hoping that all the involutes of the first cubic in the plane would self-intersect along the second cubic. But they don’t!
• jessemckeown says:
last term should be two terms: $\frac{u^4}{2} - p u^4$; sorry about that.
• jessemckeown says:
let me know if this doesn’t work. This is a slightly different parametrization; still not quite happy with the results.
• John Baez says:
Marshall drew:
That image may actually be useful. If we look at waves moving in the region $y \ge x^3,$ obeying Huygens’ principle, their diffraction may be related to that portion of your picture. But I’m a bit confused about the details.
Marshall wrote:
OK – perhaps I can make a somewhat better one; I cobbled that together pretty fast so I’m sure it could be improved.
If you want to improve it, please do. You can post it here. This won’t show up on Visual Insight for a while, since I have a number of posts already lined up.
6. John Baez says:
Simon Burton made a great animation explaining how one particular involute of the curve $y = x^3$ gets its two cusps: the exciting cusp of order 5/2 where the involute hits the $x$ axis, and the less exciting cusp of order 3/2 where the involute hits the curve $y = x^3$:
• Bruce Bartlett says:
Hi Simon – I’m interested to see your code, I’d like to learn how you programmed that in Sage!
• Simon Burton says:
Hi Bruce, I didn’t use sage for this, just plain python and pyx (and some other programs to assemble the gif.) Here is the code: http://pastebin.com/K4kY8ZeS
• Bruce Bartlett says:
Thanks, that’s very useful.
7. Scott Hotton says:
I find it convenient to think of an involute as a roulette
where the rolling curve is a line. It will also be useful to
express the parameterizations of the curves with complex
numbers. The cubical parabola has the parameterization
$z(t) = t + i\, t^3$
The starting position of the rolling line will be the real
axis and we can parameterize it with the arc length of the
cubical parabola so that as the line rolls the speed will be
the same at the contact points of the curves (the no slip
condition).
$s(t) = \int_{0}^{t} | \dot{z}(\tau)| d \tau$
Let $x_0 \in {\bf R}$ be the location of the tracing point
at $t=0$. This is also the location of the involute’s cusp
on the real axis, the “exciting” cusp. In this way the family
of involutes is parameterized by the location of the exciting
cusp. For each $x_0$ the parameterization for the
involute is
$Z(t) = z(t) + ( x_0 - s(t) ) \frac{\dot{z}(t)}{|\dot{z}(t)|}$
From Arnold’s hint the origin is the center of the osculating
circle at the exciting cusp. We can straighten out the
osculating circle with a Möbius transformation that maps
$x_0 e^{i t}$ to the imaginary axis while leaving the real
axis invariant.
$W(t) = \frac{Z(t) - x_0}{Z(t) + x_0}$
The exciting cusp is now at the origin and the curvature
should be $0$ for both branches. The image of the
cubical parabola is a simple closed curve minus the point
$1$. $W(t)$ would be a rational function of
$t$ if it were not for $|\dot{z}(t)|$ and $s(t)$.
Since $t$ only appears with fourth degree under the
radical in $|\dot{z}(t)|$ its Taylor expansion about
$t=0$ only contains terms whose power is a multiple
of $4$.
$|\dot{z}(t)| = 1 + \frac{9}{2} \; t^4 + O(t^8)$
So long as $x_0 \neq 0$ we can algebraically obtain the
first few terms in the Taylor expansion for $s(t)$ and
$W(t)$. I got:
$W(t) = -\frac{6}{5x_0} t^5 + i \, t^2 \left( \frac{3}{2} - \frac{1}{x_0} t \right) + O(t^6)$
This resembles the curve $t^5 + i\, t^2$ but I do not know
if it’s close enough to help reveal the hidden icosahedron.
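Scott’s expansion can be reproduced with a short sympy computation (an added sketch; it just redoes the series algebra above, and it does agree with his result through order $t^5$):

```python
# Reproduce Scott's Taylor expansion of W(t) with sympy (a sketch added here,
# not part of the original comment). x0 locates the exciting cusp.
from sympy import symbols, sqrt, integrate, I, Rational, expand

t = symbols('t', real=True)
x0 = symbols('x0', real=True, nonzero=True)

speed = sqrt(1 + 9*t**4).series(t, 0, 8).removeO()      # |z'(t)| to enough order
s = integrate(speed, t)                                  # arclength, s(0) = 0
z = t + I*t**3                                           # the cubical parabola
T = ((1 + 3*I*t**2)/sqrt(1 + 9*t**4)).series(t, 0, 8).removeO()  # unit tangent

Z = z + (x0 - s)*T                                       # the involute
W = ((Z - x0)/(Z + x0)).series(t, 0, 6).removeO()        # after the Mobius map

target = -Rational(6, 5)*t**5/x0 + I*t**2*(Rational(3, 2) - t/x0)
print(expand(W - target))  # 0: agrees with Scott's expansion through order t^5
```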
8. John Baez says:
Greg drew:
This is great! But I’m having trouble seeing how it’s diffeomorphic to this:
Do you see it?
It should be possible to take a slice of your surface that looks approximately like the blue curve here:
that is, a generic involute of the cubical parabola, containing both a cusp of order 3/2 and a cusp of order 5/2.
I’m having trouble seeing such a slice in your picture, though I see the cusps of order 5/2 very nicely along your green line. You’re making it sound like the cusps of order 3/2 only spring into existence when we project your surface onto the $ab$ plane… while Arnol’d’s picture makes it look like they should already be visible in the 3d picture.
9. John Baez says:
Simon wrote:
This must be what Arnol’d is calling “the discriminant of the symmetry group of an icosahedron.” I have no idea how this works though.
Let me try to explain that. This subject really deserves lots of fancy math jargon, but I’ll try to minimize that.
The symmetry group of the icosahedron, including rotations and reflections, has 120 elements. That’s just enough to move any triangle here to any other triangle:
Certain polynomials in $x,y,z$ are unchanged (or ‘invariant’) when we apply any of the icosahedron symmetries. Apart from constants, the most obvious one has degree 2:
$P(x,y,z) = x^2 + y^2 + z^2$
But there’s another invariant polynomial of degree 6 that we get as follows. The icosahedron has 12 corners, which come in opposite pairs. Choose 6 corners $q_1, \dots, q_6,$ none opposite to each other. Taking the dot product with each of these gives a linear function:
$f_i(x,y,z) = q_i \cdot (x,y,z)$
Multiply all 6 of these linear functions and we get an invariant polynomial!
$Q(x,y,z) = f_1(x,y,z) \cdots f_6(x,y,z)$
There’s another invariant polynomial of degree 10 that we get as follows. The icosahedron has 20 faces, which come in opposite pairs. Choose the midpoints $r_1, \dots, r_{10}$ of 10 faces, none opposite to each other. Taking the dot product with each of these gives a linear function:
$g_i(x,y,z) = r_i \cdot (x,y,z)$
Multiply all 10 of these linear functions and we get an invariant polynomial!
$R(x,y,z) = g_1(x,y,z) \cdots g_{10}(x,y,z)$
It’s not utterly obvious that $Q$ and $R$ are invariant. Because we arbitrarily chose one corner from each opposite pair, and similarly one face from each opposite pair, $Q$ and $R$ could in theory change sign when we apply a symmetry of the icosahedron.
Puzzle. Why don’t they change sign?
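Here’s a quick numerical sketch (mine, not from the post) that at least confirms the invariance of $Q$ under one reflection symmetry, using the standard coordinates where the icosahedron’s corners are the cyclic permutations of $(0, \pm 1, \pm \phi),$ with $\phi$ the golden ratio:

```python
# Numerical check that the degree-6 invariant Q is unchanged by the mirror
# reflection x -> -x. The six chosen corners are one from each antipodal
# pair (an arbitrary valid choice); the coordinates are the standard ones.
import numpy as np

phi = (1 + np.sqrt(5)) / 2  # golden ratio

corners = np.array([
    (0, 1,  phi), (0, 1, -phi),
    (1,  phi, 0), (1, -phi, 0),
    (phi, 0, 1),  (phi, 0, -1),
])

def Q(v):
    # product of the six linear forms f_i(v) = q_i . v
    return np.prod(corners @ v)

rng = np.random.default_rng(0)
v = rng.standard_normal(3)
print(np.isclose(Q(v), Q(v * np.array([-1, 1, 1]))))  # True
```

(Checking invariance under the full 120-element group would of course take more work, but a single mirror already illustrates the phenomenon in the puzzle.)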
Now, it’s a marvelous theorem of Chevalley that every polynomial in $x,y,z$ that’s invariant under the icosahedron symmetries can be expressed as a polynomial in $P,Q,$ and $R.$ Even better, this can be done in a unique way. In other words, $P,Q,R$ don’t obey any polynomial relations like $P^2 Q + P^5 = R.$
To understand the discriminant of the icosahedral symmetry group, we need to think about the function from $\mathbb{R}^3$ to $\mathbb{R}^3$ sending $(x,y,z)$ to
$(P(x,y,z), Q(x,y,z), R(x,y,z))$
This ‘folds up’ 3-dimensional space in a certain fascinating way, which reveals the discriminant.
More later!
• John Baez says:
Here’s the rest of the story.
You can see a bunch of great circles here:
If you imagine this as part of a bigger picture in 3d space, each great circle is the intersection of the sphere with a plane. These planes are called mirrors, because reflecting through any of these planes is a symmetry of the icosahedron.
Even better, every symmetry of the icosahedron can be obtained by a succession of reflections through these mirrors!
The mirrors, taken all together, form a subset of 3-dimensional space. We can map this subset to 3-dimensional space using the function I described above:
$(x,y,z) \mapsto (P(x,y,z), Q(x,y,z), R(x,y,z))$
The result is a new subset of 3-dimensional space, called the discriminant of the icosahedral group.
Arnol’d claims that it looks like this:
Challenge. Can someone make a nice image of it?
Moreover, Arnol’d claims that this subset also looks like the set of points $(a,b,c)$ for which the polynomial
$x^5 + ax^4 + bx^2 + c$
has repeated roots. Greg has drawn that:
So far I’m having trouble seeing why this looks like Arnold’s picture. By “looks like”, I mean that there’s a coordinate transformation (a diffeomorphism) that maps one to the other. This could warp things quite a lot, but it will send cusps of order 5/2 to cusps of order 5/2, etcetera.
Here’s a nice thing about the discriminant. Since the functions $P, Q, R$ are invariant under the icosahedral symmetry group, and all the mirror planes are related to each other by symmetries of the icosahedron, each mirror plane get mapped onto the same set when we apply the function
$(x,y,z) \mapsto (P(x,y,z), Q(x,y,z), R(x,y,z))$
And this set is the discriminant.
Puzzle. How many mirror planes are there?
• Simon Burton says:
I think if Greg could zoom into the origin a bit and also reflect the b axis it would look very similar to the Arnold picture. Also it looks like a part of the orange sheet is missing from left-hand side of that diagram. But it does look like the dark blue curve and the light green curve would correspond to the 5/2 and 3/2 cusps. I see now that when Arnold draws a dashed line he means that that line is hidden behind another surface.
• John Baez says:
Yes, dashed lines are hidden. I don’t see that any orange surface is missing. I could be wrong…
• Simon Burton says:
Ah I got confused, the orange surface is underneath the light blue surface over on the left side. So that’s why you can’t see it.
• Greg Egan says:
I’m afraid the colours here aren’t very systematic. I changed them later, so that the figure would be invariant under sign inversion, but apparently WordPress takes a snapshot of the first version of an image when you link to it, and that’s what’s shown here, not the new file I put on my web site at the same URL.
But the later figure where I lopped off the rectangular corners (and removed the mesh lines) has the new, consistent colouring.
• John Baez says:
Greg wrote:
Apparently WordPress takes a snapshot of the first version of an image when you link to it, and that’s what’s shown here, not the new file I put on my web site at the same URL.
Yes, this rather new policy of theirs causes me endless trouble, because I like to link to pictures on my website, and I sometimes edit those pictures. Even from behind the scenes there seems to be no way to tell a free WordPress blog to take a new snapshot without changing the URL of the image.
I see now you can do it with a plugin, but I believe these plugins are only available for paid blogs.
My problem with using a paid version is not the money but the hassle. David Tanzer has recently proposed that the Azimuth Project improve its blog in a few ways, so we may finally get around to dealing with this.
Anyway, I’m turning your images to clickable links… clicking on them shows the latest version. And I believe that if I post another copy of your image we’ll see the new version. Let’s see:
Nope, dammit! It’s too smart: they’re reusing their stored version. This is fiendish.
Luckily there’s another URL that works for Greg’s pages. Let me try using that:
Hah! This is the new version.
• Greg Egan says:
Taking a diagonal slice that runs directly from the green line to the blue line gives a cross-section where both cusps are more visible, and the result looks more like an involute of a cubic parabola.
Forget everything I said about projections. I was thinking that there was meant to be an isometry with the involute construction, and since the projections of the red and blue curves onto the $(a,b)$ plane are cubics, I thought that would point us in the right direction. But if all that’s expected is a diffeomorphism, projection is probably a complete red herring.
• Greg Egan says:
If I take an icosahedron centred at the origin with unit-length vertices, which are chosen so that they come in 3 sets of 4 that form golden-ratio rectangles in each of the 3 coordinate planes, then the $(x,y)$ plane should be one of the mirror planes.
But the image I get of this plane is rather strange:
$P(x,y,0) = x^2+y^2$
$Q(x,y,0) = \frac{1}{50} x^2 y^2 ((5+\sqrt{5}) x^2+(\sqrt{5}-5) y^2)$
$\displaystyle{ R(x,y,0) = \frac{x^2 y^2(x^2-y^2)^2 ((1165+521 \sqrt{5}) x^2-(7985+3571 \sqrt{5}) y^2)}{14762250} }$
This is $(P(x,y,0), Q(x,y,0), R(x,y,0))$ as defined by the squared magnitude for $P$, the product of dot products with 6 non-opposite vertices for $Q$, and the product of dot products with 10 non-opposite face-centres for $R$.
It’s hard to see how this set could be the zero-discriminant set, since the coordinate $x^2+y^2$ is non-negative, but there is no direction in which the zero-discriminant set is similarly constrained.
In what sense are these sets meant to be “the same”? Up to an isometry, up to a linear transformation, or just up to a diffeomorphism?
• John Baez says:
Arnol’d only claims that these things are diffeomorphic. Among other things, he says:
The discriminant of the group $\mathrm{H}_3$ [the icosahedral symmetry group] is shown in Fig. 18:
Its singularities were studied by O. V. Ljashko (1982) with the help of a computer. This surface has two smooth cusped edges, one of order 3/2 and the other of order 5/2. Both are cubically tangent at the origin. Ljashko has also proved that this surface is diffeomorphic to the set of polynomials $x^5 + ax^4 + bx^2 + c$ having a multiple root.
So, that’s something we should be able to see, thanks to your incredible computer skills. I have not been able to find any trace of a paper by O. V. Ljashko.
Maybe I should continue quoting Arnol’d. Next comes the relation to involutes:
The comparison of this discriminant with the patterns of the propagation of the perturbations on a manifold with boundary (studied as early as in the textbook of L’Hopital in the form of the theory of evolutes of plane curves), has led A. B. Givental to the conjecture (later proven by O. P. Scherbak) that this discriminant is locally diffeomorphic to the graph of the multivalued time function in the plane problem on the shortest path, on a manifold with boundary, which is a generic plane curve.
Thus the propagation of the waves, on a 2-manifold with boundary, is controlled by an icosahedron hidden at the inflection point of the boundary. This icosahedron is hidden, and it is difficult to find it even if its existence is known.
Then he discusses the involutes of a cubical parabola. He shows this figure:
Then he says:
Comparing Fig. 20 with Fig. 18, it is easy to guess that the graph of the distance (or time) function, whose level sets are the involutes, is diffeomorphic to the discriminant of the icosahedron symmetry group $\mathrm{H}_3$ (the proof of this fact is not easy at all).
So we are supposed to have 3 diffeomorphic surfaces: one defined using the icosahedron, one defined using the quintic $x^5 + ax^4 + bx^2 + c,$ and one defined using the involutes of the cubical parabola. I have not been able to find proofs in the literature!
• John Baez says:
Actually there is some potentially useful material here:
• O. P. Shcherbak, Wavefronts and reflection groups, Russian Mathematical Surveys 43 (3) (1988), 149–194.
This is not free, but it’s a translation of a paper that’s freely available in its original Russian form. Luckily, I have the translation! It mainly concerns not the case of $\mathrm{H}_3$ but the even more exotic case of $\mathrm{H}_4$: the symmetry group of the 600-cell in 4 dimensions! This case shows up when we consider wavefronts propagating around obstacles in 3 dimensions: the 3d analogue of involutes.
• Greg Egan says:
Here’s an image of the “symmetry group discriminant”:
I’ve rescaled some of the coordinates. Since $x$ and $y$ only appear as $x^2$ and $y^2$, all four quadrants are mapped to the same set, so we might as well work with the doubly positive quadrant. If you put polar coordinates on that quadrant, the lines $\theta=0$ and $\theta=\frac{\pi}{2}$ both map to the $P$-axis.
The whole image is a map of a quarter-disk, and the cuspy triangle facing the viewer is the image of a quarter-circle. So it’s easy to see that there are cusps here, but I’m at a loss to see how the surface can be continued through the $P$-axis, as the other surfaces do, rather than coming to an end there. Maybe there’s some fine print that Arnol’d hasn’t mentioned.
• Greg Egan says:
Assuming I haven’t made any mistakes in the algebra, the only thing I can imagine is that this set, which is a many-to-one image of the entire collection of mirror planes, can somehow be “unwrapped” into a version more like Arnol’d’s.
But it’s not obvious to me why such a process would yield, essentially, two copies of the image set glued together in a certain way. Going around the origin in a single mirror plane just sends you back and forth around the image set four times, reversing direction at the P-axis in the image set each time you cross a coordinate axis in the domain. But maybe there’s some elaborate way of traversing the whole complex of mirror planes that can sensibly be interpreted as yielding the desired result.
• John Baez says:
Thanks, that’s a beautiful picture. I bet one of those cusps has order 5/2 (it shows the telltale signs of being rhamphoid, or ‘beak-like’) and the other has order 3/2 (just guessing).
However, the discrepancy with Arnol’d’s picture makes the story into more of a mystery!
Since Arnol’d is a bigshot, it’s natural to wonder if we’ve made some sort of mistake… or maybe I misunderstood him, or maybe he left out some sort of nuance. Unfortunately he died in 2010, and I don’t know anything written up with more details. I should look around….
Aha, I’ve found two references! The paper by O. P. Shcherbak mentioned above focuses on the $\mathrm{H}_4$ discriminant, but he says these two papers study the $\mathrm{H}_3$ discriminant:
• O.V. Lyashko, The classification of critical points of functions on a manifold with a singular boundary, Funktsional. Anal, i Prilozhen. 17:3 (1983), 28–36. English translation in Functional Anal. Appl. 17:3 (1983), 187–193.
• O.P. Shcherbak, Singularities of a family of evolvents in the neighbourhood of a point of inflection of a curve, and the group $\mathrm{H}_3$ generated by reflections, Funktsional. Anal. i Prilozhen. 17:4 (1983), 70–72. English translation in Functional Anal. Appl. 17:4 (1983), 301–303.
I think I can get ahold of these.
Before I forget it, here’s one idea. The Coxeter group acts not just on $\mathbb{R}^3$ but also on $\mathbb{C}^3,$ so there could be different ‘real forms’ of the discriminant. You drew the one where $x,y,z$ are real so $P \ge 0,$ but maybe there’s another one where, say, $x$ and $y$ are real but $z$ is imaginary, so that $P$ can take both positive and negative values.
I’ve never heard people discuss different real forms of discriminants of Coxeter groups, so this is a very tentative idea, but it’s my best attempt so far to get a shape that looks more like what Arnol’d drew.
Now let me find those papers.
• Greg Egan says:
Thanks, John! I agree with the formulas in the paper by Lyashko, up to choices of scale and which coordinates to call $x$ and $y$. After deriving essentially the same map as I did, Lyashko goes on to describe the cubic in three variables whose zero set in $\mathbb{R}^3$ contains the image of the mirror planes, but is larger than that image.
You can get the full zero set in $\mathbb{R}^3$ by including the four possibilities where $x$ and $y$ are either purely real or purely imaginary, and this looks like the complete picture Arnol’d drew:
• John Baez says:
WOW, GREAT!!! This is the picture I’d been dreaming of!
So my idea of letting $y$ be imaginary instead of real was not completely off the mark, but it wasn’t right. I’ll have to think harder about what it means to consider all four possibilities of $x$ and $y$ being real or imaginary. It seems we’re doing a strange mixture of real and complex algebraic geometry here… but I’m probably just not being smart enough.
• Greg Egan says:
I wrote:
Lyashko goes on to describe the cubic in three variables whose zero set in $\mathbb{R}^3$ contains the image of the mirror planes …
I wrote this a bit too hastily: the polynomial in question is cubic in one of its three variables, but its degree is 11.
10. John Baez says:
I want to say a bit more about discriminants of quintics and discriminants of Coxeter groups. We are trying to relate the discriminant of the quintic $x^5 + ax^4 + bx^2 + c$ to the discriminant of the Coxeter group $\mathrm{H}_3$, the symmetry group of the icosahedron. But there’s something simpler that might be related.
First, what’s the discriminant of the Coxeter group $\mathrm{A}_n$? This group is just the symmetric group on $n+1$ letters, $\mathrm{S}_{n+1}.$ This group acts on $\mathbb{R}^{n+1}$ in an obvious way, by permuting the coordinate axes. But it also acts on the $n$-dimensional subspace where the coordinates sum to zero:
$V = \{ r \in \mathbb{R}^{n+1} : \; r_1 + \cdots + r_{n+1} = 0 \}$
and this is how we think of it as a Coxeter group.
Each point in $V$ gives a polynomial like this:
$(x - r_1) \cdots (x - r_{n+1})$
This trick gives all the polynomials of degree $n+1$ whose leading coefficient is 1 and whose roots sum to zero. Two different points of $V$ give the same polynomial iff we can get from one to the other by permuting the coordinates $r_1, \dots, r_{n+1}$.
In other words, we have a map from $V$ to the space $W$ consisting of polynomials of degree $n+1$ whose leading coefficient is 1 and whose roots sum to zero. Two points in $V$ map to the same polynomial in $W$ iff they lie in the same orbit of the Coxeter group.
Generically, $(n+1)!$ different points in $V$ map to each polynomial in $W.$ But the number is smaller for polynomials with repeated roots.
A polynomial has repeated roots iff its discriminant vanishes. The discriminant is very simple, since we’re writing our polynomial in terms of its roots rather than its coefficients! Up to a square (the usual discriminant is $\prod_{i < j} (r_i - r_j)^2$, which vanishes on exactly the same set), it’s
$\Delta = \prod_{1 \le i < j \le n+1} (r_i - r_j)$
The set on which this vanishes is just the union of all the hyperplanes
$r_i = r_j$
where $1 \le i < j \le n+1.$ These hyperplanes are also the mirror planes for the Coxeter group!
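In case it helps to see this concretely, here’s a quick sanity check (a sketch in Python, not from the thread): the product $\prod_{i < j} (r_i - r_j)$ is nonzero exactly when the point lies off every hyperplane $r_i = r_j$.

```python
from itertools import combinations
from math import prod

def delta(roots):
    """Product over i < j of (r_i - r_j).

    Nonzero iff all roots are distinct, i.e. iff the point lies off
    every mirror hyperplane r_i = r_j of the Coxeter group."""
    return prod(ri - rj for ri, rj in combinations(roots, 2))

print(delta([2, 1, 0, -1, -2]))  # → 288: all five roots distinct
print(delta([1, 1, 0, -1, -1]))  # → 0: repeated roots, point on a mirror
```

Both test points have coordinates summing to zero, so they lie in the subspace $V$ for $n = 4$.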
Specializing to $n = 4,$ we see that the discriminant for quintics whose leading term is 1 and whose roots sum to zero vanishes precisely on the mirror planes for the Coxeter group $\mathrm{A}_4 = \mathrm{S}_5.$ Even better, note that these polynomials are precisely those of the form
$x^5 + ax^3 + bx^2 + cx + d$
This is pretty close to what we’re actually interested in: polynomials of the form
$x^5 + ax^4 + bx^2 + c$
We should be able to get those by taking some sort of 3-dimensional slice of the 4-dimensional space $V.$ But I haven’t worked out the details!
By the way, the group $\mathrm{A}_4$ is the symmetry group of a 4-simplex, which looks like this when you project it down to the plane:
The symmetry group of the pentagon is also a Coxeter group, sometimes known as $\mathrm{H}_2.$ So, there’s a relation between this group and $\mathrm{A}_4.$
This is part of a little pattern relating:
• the $\mathrm{H}_2$ Coxeter group and the $\mathrm{A}_4$ Coxeter group,
• the $\mathrm{H}_3$ Coxeter group and the $\mathrm{D}_6$ Coxeter group,
• the $\mathrm{H}_4$ Coxeter group and the $\mathrm{E}_8$ Coxeter group.
For more on that, see “week270”.
But I’m still puzzled about how $\mathrm{H}_3$ is getting related to quintics of the special form $x^5 + ax^4 + bx^2 + c.$
• John wrote:
This is pretty close to what we’re actually interested in: polynomials of the form $x^5 + ax^4 + bx^2 + c$.
Instead of relating the discriminant of $A_4$ to $H_3$, as you’re doing here, perhaps it would be simpler to relate the discriminant of $B_5$ to that of $H_3$. The orbit space of $B_5$ consists of polynomials of the form
$x^5 + ax^4 + bx^3 + cx^2 + dx + e$,
the point being that there is an $x^4$ term, unlike for $A_4$. So if the discriminant of $H_3$ is somehow obtained from that of $B_5$ by setting the coefficients of $x^3$ and $x$ equal to zero, then we’d be in business.
11. John Baez says:
As I prepare to document all our work on Visual Insight, I have a request for Greg… I hope it’s easy. First, I’m still having trouble understanding this image showing the zero set of the discriminant of $x^5 + ax^4 + bx^2 + c$:
I’d like it to be more obvious that it’s diffeomorphic to this one (assuming that’s actually true):
Simon suggested this:
I think if Greg could zoom into the origin a bit and also reflect the b axis it would look very similar to the Arnol’d picture.
It would also help if the surfaces were made translucent. The “fun” comes from the lines of cusps, some of which are inevitably behind something else.
• John Baez says:
Simon Burton created some nice pictures of slices of the zero set of the discriminant of $x^5 + ax^4 + bx^2 + c.$
Here is its intersection with the plane $a = -2.2$:
Here is its intersection with the plane $a = +2.2$:
In both of these, the big ticks on the axes are at multiples of 0.5.
This nicely exhibits the relation between this surface and the involutes of the cubical parabola:
If it’s hard to draw this surface in 3d in a way that makes the relation clear, perhaps an animated gif of slices would do the job.
• Greg Egan says:
I’ve tried tinkering with my image of the quintic discriminant in various ways, but none of them look like improvements to me, so you should probably go with some version of Simon’s approach.
• Greg Egan says:
After a bit more tinkering, maybe this is an improvement (this is the quintic discriminant):
• John Baez says:
For some reason I hadn’t seen this until now.
This is just what I hoped for! But it looks so different from the previous images, I can barely believe it’s the same surface.
• Greg Egan says:
Apart from the translucency and the different colouring of the surface (I’m no longer trying to colour-code the different roots for $c$ that sit above each $(a,b)$), I’ve sliced through it at a smaller value for $a$, while expanding the scale along the $a$-axis.
If you follow the surface out to larger $a$ values, as in my previous version, things twist around so that you need to slice obliquely to get a cross-section that looks like the archetypical involute.
12. Let me try and summarize the current status of this thread as I now see it. Please correct me.
No-one has yet solved the original puzzle of Arnol’d: Prove that the generic involute of a cubical parabola has a cusp of order 5/2 on the straight line tangent to the parabola at the inflection point.
We have great graphs, but not yet an analytical proof.
The next claim of Arnol’d is the following:
Claim A. The discriminant surface $X$ of the icosahedral group is diffeomorphic to
$Y := \{ (a,b,c) : x^5 + ax^4 + bx^2 + c \textrm{ has multiple roots} \}$
Greg Egan has drawn a wonderful picture of $Y$. In order to get a picture that looks like Arnol’d’s picture of $X$, he needs to add in some “analytically continued” values.
On the other hand, Arnol’d wasn’t being very precise with regard to real vs complex when he defined what $X$ was.
So we have some good graphical evidence of Claim A, modulo some imprecision.
On the other hand, I have not been able to track down a proof of Claim A in the literature. Perhaps it’s in Lyashko, “Classification of critical points of functions on a manifold with singular boundary”, but if so I can’t find the precise statement.
The final claim of Arnol’d is:
Claim B. The discriminant surface $X$ of the icosahedral group is locally diffeomorphic to the graph $Z$ of the multivalued time function in the plane problem on the shortest path, on a manifold with boundary, which is a generic plane curve.
Here we have some nice pictures of Simon Burton, showing that slices of $Y$ graphically correspond to slices of $Z$. So if we believe Claim A, then this is evidence for Claim B.
Thankfully, we can track down a precise proof of this claim in the literature: it is in Shcherbak, “Singularities of a family of evolvents in the neighbourhood of a point of inflection of a curve, and the group $\mathrm{H}_3$ generated by reflections”, though I don’t understand the proof.
• John Baez says:
Thanks very much for trying to summarize the state of play and point out what remains to be done!
No-one has yet solved the original puzzle of Arnol’d: Prove that the generic involute of a cubical parabola has a cusp of order 5/2 on the straight line tangent to the parabola at the inflection point.
I think that’s right. Scott Votton has an approach that could perhaps be completed with some more thought. Jesse McKeown also has an approach.
I don’t understand either of these approaches as well as I’d like—my main excuse is that I’ve been trying to understand other aspects of this problem. Note that both approaches seem to run into a similar obstacle: the difference between a polynomial that describes a cusp of order 5/2, and a similar-looking polynomial with some higher-order terms. I suspect that these higher-order terms can’t change a cusp of order 5/2 into something else. This is the sort of thing where having a bit more expertise in singularity theory might help a lot.
13. John Baez says:
Bruce wrote:
Claim A. The discriminant surface $X$ of the icosahedral group is diffeomorphic to
$Y := \{ (a,b,c) : x^5 + ax^4 + bx^2 + c \textrm{ has multiple roots} \}$
So we have some good graphical evidence of Claim A, modulo some imprecision.
On the other hand, I have not been able to track down a proof of Claim A in the literature. Perhaps it’s in Lyashko, “Classification of critical points of functions on a manifold with singular boundary”, but if so I can’t find the precise statement.
I don’t think it’s in there. Remember, back in the USSR, Russian mathematicians would hold long seminars where they worked things out in detail. If everyone present was convinced, sometimes they would publish the results with only a sketchy proof. This has often made Western mathematicians unhappy.
I’m still hoping it’s possible to prove Claim A using the known relations between quintics and the icosahedron. Felix Klein wrote a whole book on this, Lectures on the Icosahedron, and luckily there’s a free book which gives a treatment of these ideas that’s a lot easier for modern mathematicians to understand:
• Jerry Shurman, Geometry of the Quintic.
This is hugely fun stuff. Very briefly, the fact that the symmetry group of the icosahedron is almost the Galois group of the general quintic lets you solve the quintic if you can solve the equation $f(w) = z$ where $f$ is a nontrivial rational function on $\mathbb{C}\mathrm{P}^1$ that is invariant under the symmetries of the icosahedron!
The invariant polynomials $P,Q,R$ that Greg Egan is looking at are, I believe, closely related to this business.
Unfortunately I don’t see the significance of the family of quintics
$x^5 + ax^4 + bx^2 + c$
I wrote something here about this issue, but I was led naturally to the so-called depressed quintic, where the quartic term vanishes:
$x^5 + bx^3 + cx^2 + dx + e$
It’s well-known that any polynomial can be transformed by a Tschirnhaus transformation into depressed form, meaning that the next-to-leading coefficient is 0, or in other words, the sum of the roots is zero.
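The shift that produces the depressed form is easy to check symbolically. Here’s a small sympy sketch (mine, not from the thread): substituting $x \mapsto y - a/5$ into a general monic quintic makes the quartic coefficient vanish identically.

```python
import sympy as sp

x, y, a, b, c, d, e = sp.symbols('x y a b c d e')

# A general monic quintic
quintic = x**5 + a*x**4 + b*x**3 + c*x**2 + d*x + e

# The simplest Tschirnhaus transformation: shift the variable by a/5
depressed = sp.expand(quintic.subs(x, y - a/sp.Integer(5)))

# The coefficient of y^4 vanishes identically in a, b, c, d, e
print(sp.Poly(depressed, y).coeff_monomial(y**4))  # → 0
```

Equivalently: the shift translates all five roots by $a/5$, making them sum to zero.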
The reason this is important for us is that it means the Galois group of a generic depressed quintic is still that of the generic quintic, namely $S_5$, which is almost the rotational symmetry group of the icosahedron, namely the alternating group $A_5.$ (If we include reflections as symmetries we get not $S_5$ but $A_5 \times \mathbb{Z}/2.$ However, it’s the non-solvable part, the $A_5,$ that’s the really big deal here.)
With more work we can massage any quintic into principal form, where the cubic term also vanishes:
$x^5 + cx^2 + dx + e$
And with even more virtuosic feats of high-school algebra we can do a change of variables to bring any quintic into Bring–Jerrard normal form, where the quadratic term also vanishes:
$x^5 + dx + e$
So, all these special classes of quintics should be closely connected to the alternating group $A_5,$ and thus, I expect, the icosahedron.
But I don’t know what’s so special about quintics like
$x^5 + ax^4 + bx^2 + c$
Maybe your comment holds the key!
• Greg Egan says:
I guess another approach would be to find a diffeomorphism between $\mathbb{R}^3$ and $M^3 \subset \mathbb{R}^5$ that commutes with the actions of $\mathrm{H}_3$, where $M^3$ is the 3-dimensional submanifold of $\mathbb{R}^5$ on which the symmetric polynomials of degree 2 and 4 in the coordinates vanish.
If we could show that, I think it would follow that the orbits on the two spaces were diffeomorphic, and maybe also that the two discriminants — the varieties of irregular orbits of each action — were diffeomorphic.
• Greg Egan says:
I think the suggestion in my comment above is impossible to achieve, at least if the action on $\mathbb{R}^5$ comes from permuting the coordinates according to the permutation the group element induces on the true crosses of the icosahedron, and if $-1$ in $H_3$ acts as $-1$ on $\mathbb{R}^5$. In other words, the elements of $H_3$ with determinant 1 just permute the coordinates on $\mathbb{R}^5$, and if an element $g$ has determinant $-1$, we permute the coordinates by acting with $-g$ and then multiply by $-1$.
In that case, the subset of $M^3$ that is pointwise fixed by the action on $\mathbb{R}^5$ of a reflection in one of the mirror planes will consist of just the origin, which is obviously not diffeomorphic to the whole mirror plane back in $\mathbb{R}^3$.
• John Baez says:
Greg wrote:
I think the suggestion in my comment above is impossible to achieve, at least if the action on $\mathbb{R}^5$ comes from permuting the coordinates according to the permutation the group element induces on the true crosses of the icosahedron, and if $-1$ in $\mathrm{H}_3$ acts as $-1$ on $\mathbb{R}^5$.
Let me try to see if these are reasonable assumptions.
First, we have to be careful because there are 3 different 120-element groups that seem to play a big role in this game, and they’re not isomorphic:
• the symmetric group $\mathrm{S}_5.$ This contains the alternating group $\mathrm{A}_5,$ the rotational symmetry group of the icosahedron, as a normal subgroup, and the quotient is $\mathbb{Z}/2$. But $\mathrm{S}_5$ is not the product of $\mathrm{A}_5$ and $\mathbb{Z}/2$.
• the icosahedron symmetry group, including rotations and reflections, $\mathrm{H}_3.$ This is the product of $\mathrm{A}_5$ and $\mathbb{Z}/2$.
• the binary icosahedral group, meaning the double cover of the rotational symmetry group of the icosahedron. This has $\mathbb{Z}/2$ as a normal subgroup, and the quotient is $\mathrm{A}_5.$
It took years for all this to become second nature to me. I’m happy to see that now it’s on Wikipedia, so people can learn it faster:
• Wikipedia, Icosahedral symmetry: commonly confused groups.
Okay, now to business. You seem to be looking at $\mathbb{R}^5$ with the obvious permutation action of $S_5.$ I think it will be a bit better to look at $\mathbb{C}^5$ with the obvious permutation action of $S_5.$ A point in $\mathbb{C}^5$ describes an ordered 5-tuple of roots of a monic quintic
$(z - r_1) \cdots (z - r_5)$
and $S_5$ acts to permute the labels on the roots. Monic just means that the coefficient of the highest-order term is 1. Complex polynomials work better than real ones, so I think we want $\mathbb{C}^5.$
Of course the polynomial itself doesn’t know an ordering on its roots. So, the space of monic quintics is $\mathbb{C}^5/\mathrm{S}_5.$ But it can also be seen as $\mathbb{C}^5,$ using the coefficients as coordinates.
On the other hand, $\mathrm{S}_5$ doesn’t act as symmetries of the icosahedron; only $\mathrm{H}_3$ does. What these groups have in common is their subgroup $\mathrm{A}_5.$
We can think of the 5 here as the set of ‘true crosses’ in the icosahedron, meaning things like this:
but we only get the even permutations of these from icosahedron symmetries.
So, I don’t think the assumption you mentioned, about $-1$ in $\mathrm{H}_3$ acting as $-1$ on $\mathbb{R}^5,$ actually holds if we take this group’s most obvious action on $\mathbb{R}^5$ (or $\mathbb{C}^5$). The element $-1$ in $\mathrm{H}_3$ does not affect a true cross.
• Greg Egan says:
OK, but even if we have $-1$ act as the identity on $\mathbb{C}^5$, an order-5 rotation in $\mathrm{H}_3$ fixes a 1-dimensional subspace in both $\mathbb{R}^3$ and $\mathbb{C}^5$, but the subspace in $\mathbb{C}^5$ will only meet the condition on the symmetric polynomials of degree 2 and 4 being zero at the origin.
For example, the permutation that an order-5 rotation induces on the true crosses might be $(5,1,2,3,4)$, which as a linear operator on the coordinates fixes the subspace $(r,r,r,r,r)$ of $\mathbb{C}^5$, but the symmetric polynomial condition means $r=0$.
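Greg’s observation is easy to verify directly: on the fixed line $(r,r,r,r,r)$ the elementary symmetric polynomials of degree 2 and 4 reduce to $10r^2$ and $5r^4$, so the constraints force $r = 0$. A small sympy sketch (mine, not from the thread):

```python
import sympy as sp
from itertools import combinations

r = sp.symbols('r')
coords = [r] * 5  # the line in C^5 fixed by the 5-cycle, spanned by (1,1,1,1,1)

# Elementary symmetric polynomials of degree 2 and 4 in the coordinates
e2 = sum(a * b for a, b in combinations(coords, 2))
e4 = sum(a * b * c * d for a, b, c, d in combinations(coords, 4))

print(sp.expand(e2), sp.expand(e4))  # → 10*r**2 5*r**4
print(sp.solve(e2, r))               # → [0]: the fixed line meets M^3 only at 0
```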
• John Baez says:
For some reason it’s taking time for me to absorb this, but thanks. I’ll try to say something more useful tomorrow!
• Bruce Bartlett says:
The invariant polynomials $P,Q,R$ that Greg Egan is looking at are, I believe, closely related to this business.
I would like to understand this. Skimming through Shurman’s Geometry of the Quintic, I haven’t been able to find these functions $P, Q, R$.
In particular, in that book, he starts with a finite rotation group $G$, and then considers it as acting on the Riemann sphere $\mathbb{C}P^1$. The field of invariant rational functions turns out to be freely generated by a single rational function $f$, that is, $\mathbb{C}(\mathbb{C}P^1)^G = \mathbb{C}(f)$.
But for our purposes, instead of being interested in the action of $G$ on the Riemann sphere, we instead lift $G$ to its double cover $\hat{G} \subset SU(2)$, so that $\hat{G}$ is thought of as acting on $\mathbb{C}^2$ instead of $\mathbb{C}P^1$. The algebra of invariants is now generated by three functions $P, Q, R$ with a single relation. When $G$ is the icosahedral group, this relation is apparently $P^5 + Q^3 + R^2 = 0$. I don’t know how to relate these two pictures.
• John Baez says:
Bruce wrote:
I don’t know how to relate these two pictures.
I don’t either, but there has to be a way. Let me give it a try. It’s probably related to how there’s a map
$f : \mathbb{C}^2 \to \mathbb{C}^3$
sending spinors in $\mathbb{C}^2$ to vectors in $\mathbb{C}^3,$ which actually happen to live in $\mathbb{R}^3.$ This map takes any spinor to the expected value of its angular momentum:
$\psi \mapsto \langle \psi, \sigma_i \psi \rangle$
It’s a quadratic map from the spin-1/2 representation to the spin-1 representation.
The icosahedral group $G$ acts on the spin-1 representation, its double cover $\hat{G}$ acts on the spin-1/2 representation, and the map I just described is ‘equivariant’ in a semi-obvious sense, which involves the double cover $p: \hat{G} \to G.$
By pulling back, I believe we get a map from $G$-invariant polynomial functions on $\mathbb{C}^3$ to $\hat{G}$-invariant polynomial functions on $\mathbb{C}^2.$
We’ve got 3 $G$-invariant polynomial functions on $\mathbb{C}^3,$ namely $P,Q,R.$ These will give 3 $\hat{G}$-invariant polynomial functions on $\mathbb{C}^2,$ but those must obey (at least) one relation.
That’s my guess about how this relation $P^5 + Q^3 + R^2 = 0$ shows up! I.e., it’s not a relation between the original functions $P,Q,R$ on $\mathbb{C}^3,$ but the corresponding functions on $\mathbb{C}^2.$
• John Baez says:
There’s something mildly wrong with my plan here, because the map
$\psi \mapsto \langle \psi, \sigma_i \psi \rangle$
is not quadratic in the complex sense: if you multiply $\psi$ by $a$, you multiply $\langle \psi, \sigma_i \psi \rangle$ by $|a|^2,$ not $a^2$.
Nonetheless there is a linear intertwining operator $\frac{1}{2} \otimes \frac{1}{2} \to 1,$ so there is some quadratic map from the spin-1/2 representation to the spin-1 representation! So I think we need to use that, not what I wrote above.
Another way to put it: the spin-1/2 representation of $\mathrm{SU}(2)$ is isomorphic to its conjugate representation, so the annoying complex conjugate in the inner product is somehow not that big a deal.
• Bruce Bartlett says:
Here is a reference which nicely explains the homeomorphism
$\mathbb{C}^2 / \hat{G} \cong \{ (x,y,z) \in \mathbb{C}^3 : x^2 + y^3 + z^5 = 0\}$
which will probably help in resolving this. The reference is
• Kirby and Scharlemann, Eight faces of the Poincaré homology 3-sphere, page 128.
They say their proof is a medley of Milnor and Klein.
Something funny happens in the text between page 128 and 129.
• John Baez says:
This looks like a paper worth understanding!
So, these guys write down three rather complicated polynomials $p_1, p_2, p_3$ on $\mathbb{C}^2,$ obeying a relation
$p_1^2 + p_2^3 + p_3^5 = 0$
They are polynomials of degree 30, 20 and 12.
I was hoping to get these from Greg’s icosahedral-invariant polynomials $P, Q, R$ on $\mathbb{C}^3,$ which have degrees 2, 6, and 10. I was actually hoping they were obtained by composing $P, Q, R$ with a quadratic map $\mathbb{C}^2 \to \mathbb{C}^3.$ But that doesn’t work: the degrees don’t work out.
• John Baez says:
Hmm! I’m pretty sure Chevalley’s generators for the icosahedral-invariant polynomials on $\mathbb{C}^3$ have degrees 2, 6 and 10. The 2 comes from $x^2 + y^2 + z^2,$ while the 6 comes from the 12 vertices of the icosahedron and the 10 comes from the 20 faces of the icosahedron, as I explained earlier. But we could also build an invariant polynomial using the 30 edges of the icosahedron using the same trick, and it would have degree 15.
If we take the invariant polynomials of degrees 15, 10 and 6 and compose them with a quadratic map, we’ll get polynomials of degrees 30, 20 and 12. So I bet these are the polynomials Kirby and Scharlemann are working with! These have a chance of obeying the relation
$p_1^2 + p_2^3 + p_3^5 = 0$.
• Greg Egan says:
I’ll try to check JB’s conjecture about the relationships between these various polynomials when I get a chance, but for now I’ll just briefly mention a mental block I had to get past before I could see how to proceed at all!
It seems obvious that there’s an equivariant quadratic map from spin-$\frac{1}{2}$ to spin-1, especially if you take the route where you define spin-1 as the space of symmetric tensors in $\mathbb{C}^2 \otimes \mathbb{C}^2$, with:
$\rho_1(g) v \otimes w = (g v) \otimes (g w)$
for $g \in \mathrm{SU}(2), v, w \in \mathbb{C}^2$.
Then the quadratic map $\phi: \mathbb{C}^2 \to \mathbb{C}^2 \otimes \mathbb{C}^2$ with:
$\phi(v) = v \otimes v$
maps into the subspace of symmetric tensors, and we have:
$\rho_1(g) \phi(v) = (g v) \otimes (g v) = \phi(g v)$
If we choose a suitable orthonormal basis for the symmetric tensors in $\mathbb{C}^2 \otimes \mathbb{C}^2$, we can write $\phi: \mathbb{C}^2 \to \mathbb{C}^3$, with:
$\phi(v) = (v_1^2, \sqrt{2} v_1 v_2, v_2^2)$
But if we look at our simplest invariant polynomial
$\displaystyle{P(x,y,z) = x^2 + y^2 + z^2}$
we have:
$\displaystyle{P(\phi(v)) = (v_1^2 + v_2^2)^2}$
This quantity is not an invariant for SU(2)!
The catch is, we are talking about equivalent representations, but different bases. The version of spin-1 constructed from the symmetrised tensor product is equivalent to the double cover of SO(3) by SU(2), but to switch between the two you still need to use a certain unitary operator, $T$, whose matrix in the bases we’re using is:
$\displaystyle{\left( \begin{array}{ccc} -\frac{1}{\sqrt{2}} & 0 & \frac{1}{\sqrt{2}} \\ -\frac{i}{\sqrt{2}} & 0 & -\frac{i}{\sqrt{2}} \\ 0 & 1 & 0 \end{array} \right)}$
$T$ maps from the spin-1 rep on the tensor product to the spin-1 rep as it acts on $\mathbb{R}^3$. So … what do we get now?
$\displaystyle{T \phi(v) = (\frac{v_2^2-v_1^2}{\sqrt{2}},-\frac{i \left(v_1^2+v_2^2\right)}{\sqrt{2}},\sqrt{2} v_1 v_2)}$
$\displaystyle{P(T \phi(v)) = 0}$
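Greg’s computation is easy to reproduce symbolically. Here’s a sympy sketch (mine, not from the thread) checking that $P(T\phi(v)) = 0$ identically:

```python
import sympy as sp

v1, v2 = sp.symbols('v1 v2')

# Quadratic map into the symmetric-tensor basis: (v1^2, sqrt(2) v1 v2, v2^2)
phi = sp.Matrix([v1**2, sp.sqrt(2)*v1*v2, v2**2])

# Change-of-basis operator T from the symmetrised tensor square
# to the spin-1 rep acting on C^3 (matrix as written in the comment above)
T = sp.Matrix([
    [-1/sp.sqrt(2),    0,  1/sp.sqrt(2)],
    [-sp.I/sp.sqrt(2), 0, -sp.I/sp.sqrt(2)],
    [0,                1,  0],
])

w = T * phi
P = sp.expand(w[0]**2 + w[1]**2 + w[2]**2)
print(P)  # → 0: the degree-2 invariant pulls back to zero
```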
• Greg Egan says:
While I’m looking at equivariant quadratic maps from spin-$\frac{1}{2}$ to spin-1, I might as well include the other nice way of getting spin-1, where you think of the representation space as the space of traceless $2 \times 2$ complex matrices, with the action:
$\rho_1(g) m = g m g^{-1}$
Here, the simplest equivariant quadratic $\phi$ (which agrees with our previous $\phi$ up to a factor) is:
$\displaystyle{\phi(v) = v \otimes (\epsilon v) = \left( \begin{array}{cc} v_1 v_2 & -v_1^2 \\ v_2^2 & -v_1 v_2 \end{array}\right)}$
where:
$\displaystyle{\epsilon = \left( \begin{array}{cc} 0 & 1 \\ -1 & 0 \end{array}\right)}$
$\phi(v)$ is traceless, of course, but it also has zero determinant, which corresponds to our previous result:
$\displaystyle{P(\phi(v)) = 0}$
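Likewise for the matrix version, here’s a short sympy check (mine, not from the thread) that $\phi(v) = v \otimes (\epsilon v)$ is traceless with vanishing determinant:

```python
import sympy as sp

v1, v2 = sp.symbols('v1 v2')
eps = sp.Matrix([[0, 1], [-1, 0]])
v = sp.Matrix([v1, v2])

# phi(v) = v ⊗ (eps v), written as the 2x2 outer-product matrix v (eps v)^T
m = v * (eps * v).T

print(m)          # [[v1*v2, -v1**2], [v2**2, -v1*v2]], matching the text
print(m.trace())  # → 0
print(m.det())    # → 0
```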
• John Baez says:
Great! I was trying to understand the quadratic map from the spin-1/2 representation to the spin-1 representation a bit better, and hoping to understand why we skip the obvious degree-2 invariant $x^2 + y^2 + z^2$ on the spin-1 rep when we’re using this quadratic map to turn invariants on the spin-1 rep into invariants on the spin-1/2 rep.
Now you’ve done it: the obvious degree-2 invariant on the spin-1 rep becomes zero when we convert it to an invariant on the spin-1/2 rep!
That’s the most satisfying possible explanation. We’re not “skipping” this degree-2 invariant for some obscure reason, it just gives nothing.
• Greg Egan says:
I checked John’s conjecture, and it’s right! That is, we can get three polynomials that obey:
$p_1^2 + p_2^3 + p_3^5 = 0$
by the method he described: composing invariant polynomials of degree 15, 10 and 6 based on the edges, faces and vertices of an icosahedron, with a suitable equivariant quadratic $\phi: \mathbb{C}^2 \to \mathbb{C}^3$:
$\displaystyle{\phi(v) = (\frac{v_2^2-v_1^2}{\sqrt{2}},-\frac{i \left(v_1^2+v_2^2\right)}{\sqrt{2}},\sqrt{2} v_1 v_2)}$
They’re not the ones Kirby and Scharlemann describe, though, not even up to a scale. But there might be some change of basis that will make them the same.
• John Baez says:
Hurrah! There’s something so satisfying about rediscovering these things oneself, not just looking them up. Of course you’re getting most of the satisfaction, not me, since you’re the one actually doing the calculations.
(Unfortunately Mathematica is not advanced enough to feel any of the satisfaction.)
I feel sure this trick must also work for the octahedron, giving polynomials $q_1, q_2, q_3$ with
$q_1^2 + q_2^3 + q_2q_3^3 = 0$
and some variant should work for the tetrahedron, giving polynomials $r_1, r_2, r_3$ with
$r_1^2 + r_2^3 + r_3^4 = 0$
The tetrahedron is different, because opposite to a vertex is not another vertex but the midpoint of a face! So, the trick that applies to the other cases will not give 3 different polynomials here.
These relations appear here:
• John McKay, A rapid introduction to ADE theory, 1 January 2001.
Now that I look carefully, he even describes a recipe for getting these polynomials. But his recipe is somewhat different than ours, and it may avoid the problem with the tetrahedron.
It’s a very rapid introduction, which however one needs to read and reread for decades to fully understand. McKay figured out how these 3 cases — the tetrahedron, octahedron and icosahedron — are related to $\mathrm{E}_6, \mathrm{E}_7$ and $\mathrm{E}_8.$
So, many of the things we’ve been doing have analogues for the tetrahedron and octahedron, but we’re going straight for the jugular and doing the icosahedral case, which has these additional mysterious relations to wave propagation around boundaries in the plane, and to the equation $x^5 + ax^4 + bx^2 + c = 0.$ And those mysteries remain mysterious to me!
• Greg Egan says:
I looked at the cube/octahedron functions (using our method, not the one you cite by McKay). Here, vertices, edges and faces refer to a cube of side length 2 centred at the origin, and then I tinker with the normalisation later.
$\displaystyle{ P_v(x,y,z) = (-x-y-z) (-x+y-z) (-x-y+z) (-x+y+z) }$
$\displaystyle{ P_e(x,y,z) = (-x-y) (y-x) (-x-z) (z-x) (-y-z) (z-y) }$
$\displaystyle{ P_f(x,y,z) = x y z }$
These have degrees 4, 6 and 3 respectively. When we compose them with the equivariant quadratic, we get polynomials of degree 8, 12 and 6:
$\displaystyle{ p_v(v_1,v_2) = v_1^8+14 v_1^4 v_2^4+v_2^8 }$
$\displaystyle{ p_e(v_1,v_2) = \frac{1}{4} \left(-v_1^{12}+33 v_1^8 v_2^4+33 v_1^4 v_2^8-v_2^{12}\right) }$
$\displaystyle{ p_f(v_1,v_2) = \frac{i v_1 v_2 \left(v_1^4-v_2^4\right)}{\sqrt{2}} }$
If we simply follow the pattern where we give each polynomial an exponent equal to the order of the rotational subgroup that fixes the associated feature of the polyhedron, we can get the same kind of result as with the icosahedron (inserting a suitable choice of normalising factors):
$\displaystyle{ - p_v(v_1,v_2)^3 + 16 p_e(v_1,v_2)^2 + 432 p_f(v_1,v_2)^4 = 0 }$
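This relation can be verified mechanically. Here’s a sympy sketch (mine, not from the thread) using Greg’s explicit polynomials:

```python
import sympy as sp

v1, v2 = sp.symbols('v1 v2')

# Greg Egan's cube/octahedron invariants on C^2 (degrees 8, 12, 6),
# obtained by composing the degree 4, 6, 3 invariants on C^3
# with the equivariant quadratic map
p_v = v1**8 + 14*v1**4*v2**4 + v2**8
p_e = sp.Rational(1, 4)*(-v1**12 + 33*v1**8*v2**4 + 33*v1**4*v2**8 - v2**12)
p_f = sp.I*v1*v2*(v1**4 - v2**4)/sp.sqrt(2)

# The claimed syzygy, with the normalising factors from the text
relation = sp.expand(-p_v**3 + 16*p_e**2 + 432*p_f**4)
print(relation)  # → 0
```

Up to normalisation this is Klein’s classical relation $W^3 - \chi^2 = 108 f^4$ for the octahedral invariants.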
I don’t see any way to make the degrees add up correctly so that we get something of the form:
$\displaystyle{ q_1^2 + q_2^3 + q_2 q_3^3 = 0 }$
I’m not really sure that McKay is even claiming these relations for his own polynomials; he gives these as equations for “the singularity”, but it’s far beyond my abilities to figure out precisely what he means by that from a single reading.
• John Baez says:
Greg wrote:
I don’t see any way to make the degrees add up correctly so that we get something of the form:
$\displaystyle{ q_1^2 + q_2^3 + q_2 q_3^3 = 0 }$
I’m not really sure that McKay is even claiming these relations for his own polynomials; he gives these as equations for “the singularity”, but it’s far beyond my abilities to figure out precisely what he means by that from a single reading.
I believe he’s claiming his polynomials obey this relation. But mind-reading can be tricky. This is a nice article:
• P. Slodowy, Platonic solids, Kleinian singularities and Lie groups.
On page 7 he writes:
Klein obtained the following results. For each finite group $\Gamma \subset \mathrm{SL}(2,\mathbb{C})$ the $\Gamma$-invariant polynomials on $\mathbb{C}^2$ are generated by 3 fundamental invariants $X,Y,Z$ which are subject to a single relation $R(X,Y,Z) = 0.$
He gives a table listing these. Note that every finite subgroup of $\mathrm{SL}(2,\mathbb{C})$ is conjugate to one inside $\mathrm{SU}(2)$, since we can average an inner product over the group action to get an inner product that’s preserved by $\Gamma$. The only options turn out to be our friends the cyclic groups (corresponding via McKay’s magic method to the $\mathrm{A}_n$ Dynkin diagrams), the dihedral groups (corresponding to $\mathrm{D}_n$), and the double covers of the symmetry groups of the tetrahedron, octahedron and icosahedron (corresponding to $\mathrm{E}_6, \mathrm{E}_7, \mathrm{E}_8$).
Sorry, that was a digression. The punchline is that the relation for the octahedron is the one McKay claims:
$X^3 + XY^3 + Z^2 = 0$
So maybe you made a calculational mistake, or more likely maybe I was being overoptimistic about how easy it is to find the generators $X,Y,Z.$
I think they must be explained in Slodowy’s paper. So far I’m just enjoying his explanation of why these things are called ‘singularities’, and what they look like.
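Slodowy’s general statement is easy to check directly in the simplest entry of his table: the cyclic group $\mathbb{Z}/n \subset \mathrm{SL}(2,\mathbb{C})$ acting by $(u,v) \mapsto (\zeta u, \zeta^{-1} v).$ Here is a quick numerical sketch (the choice of invariants $X = uv,$ $Y = u^n,$ $Z = v^n$ is the standard one for the $\mathrm{A}_{n-1}$ case):

```python
import cmath

n = 5
zeta = cmath.exp(2j * cmath.pi / n)   # primitive n-th root of unity

def invariants(u, v):
    # three fundamental invariants of Z/n acting by (u, v) -> (zeta u, v / zeta)
    return (u * v, u**n, v**n)

u, v = 0.3 + 0.7j, -1.1 + 0.2j        # an arbitrary test point in C^2
X, Y, Z = invariants(u, v)
Xg, Yg, Zg = invariants(zeta * u, v / zeta)

# the three polynomials really are invariant under the group generator ...
assert abs(Xg - X) < 1e-9 and abs(Yg - Y) < 1e-9 and abs(Zg - Z) < 1e-9

# ... and they satisfy the single relation R(X, Y, Z) = X^n - Y Z = 0
assert abs(X**n - Y * Z) < 1e-12
```

Klein’s relations for the binary polyhedral groups are the $\mathrm{E}$-type analogues of this $\mathrm{A}$-type identity.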
• John Baez says:
Hmm, I can’t find anything in Slodowy’s paper that gives a concrete formula for the polynomials $X,Y,Z$!
• Greg Egan says:
The degrees of McKay’s polynomials are the same as the ones we end up with: 8, 12 and 6. These are the count of vertices, edges and faces for a cube (or faces, edges and vertices for an octahedron if you prefer). Our method halves these numbers for the polynomials in three variables, then doubles them again for the polynomials in two variables, but either method ends up with the same degrees.
If we call McKay’s three polynomials $X,Y,Z$, and they obey the relationship:
$\displaystyle{X^3 + X Y^3 +Z^2 = 0}$
then which polynomial should be which? There’s no choice that makes the degrees of all three terms the same, but I guess that’s not necessarily an obstacle; the coefficients of all the individual monomials could still end up being zero, in principle.
Now, the three invariants we get from our method (which do obey a relationship, when suitably normalised, of $p_v^3 + p_e^2 + p_f^4 = 0$), certainly don’t obey one of the form $X^3 + X Y^3 +Z^2 = 0$, for any of the six ways we can assign $X,Y,Z$.
And as far as I can tell, exactly the same is true of McKay’s polynomials, obtained from stereographic projection! That is, they do satisfy the relation $p_v^3 + p_e^2 + p_f^4 = 0$, when suitably normalised, but they don’t obey one of the form $X^3 + X Y^3 +Z^2 = 0$, for any of the six ways we can assign $X,Y,Z$.
I’m happy to believe that something obeys the equation $X^3 + X Y^3 +Z^2 = 0$ … but the invariants that our method yields (and I’m pretty sure that these are, by construction, invariants of the appropriate finite subgroup of SU(2)) certainly don’t obey that equation, and unless I’ve misunderstood McKay’s recipe, neither do his.
• Greg Egan says:
I tried McKay’s construction for the tetrahedron; this is the case where our approach is unable to produce three linearly independent invariants.
And again, what I found was that McKay’s polynomials obey relations, after suitable normalisation, where the polynomial associated with each kind of special point on the polyhedron is raised to a power equal to the order of the rotational subgroup that fixes that point. That is, for the tetrahedron the McKay polynomials obey:
$\displaystyle{ p_v^3 + p_e^2 + p_f^3 = 0 }$
but none of the six ways of assigning $X,Y,Z$ to these polynomials yields a relation of the form:
$\displaystyle{ X^2 + Y^3 + Z^4 = 0 }$
(which is how McKay describes the “singularity”, and – apart from some differences as to which variable gets which exponent – how Slodowy describes the relation satisfied by the generators in this case).
I don’t really understand what’s happening here, but if I’ve misunderstood McKay’s recipe, or made some error in following it, it would be a very strange coincidence that the erroneous results obeyed such simple, systematic relations! So I’m inclined to believe that McKay’s polynomials, which he calls $V, E, F$, are different not only from the $X,Y,Z$ that Slodowy mentions, but different from the $x,y,z$ that McKay mentions on the same page as his construction of $V, E, F$.
I wouldn’t normally appeal to the fact that he should have said something like $V^2 + E^3 + F^4 = 0$ if he meant that, rather than changing notation mid-stream and saying $x^2 + y^3 + z^4 = 0$, because these kinds of shifts happen all the time. But having repeated his construction, and found no way to match up his $x,y,z$ with his $V,E,F$ that makes the equation true, it seems reasonable to conclude that he really wasn’t referring to the same three things, after all.
• Mike Doherty says:
The polynomials Kirby and Scharlemann give are (up to numerical factors) on pages 60 and 61 of Klein’s “Lectures on the Icosahedron…”.
I wonder if the Kirby and Scharlemann paper has a page missing between pages 128 and 129 – like Bruce Bartlett I am puzzled by the transition between these two pages.
• Greg Egan says:
In this thesis:
http://math.ucr.edu/home/baez/joris_van_hoboken_platonic.pdf
(which is extremely similar to Slodowy in places, but has a few more details, or maybe just paraphrases Slodowy in a way that I find marginally easier to understand …), there’s a description of the construction of invariants that satisfy the equations everyone talks about. This is in section 4, “Invariant theory of binary polyhedral groups”, starting from p12.
Apparently, to get these particular invariants requires understanding “semi-invariants” and tweaking the McKay construction in some fashion. I haven’t really come to grips with the details, but it does seem clear that in the octahedral and tetrahedral cases, you need to do something slightly different than McKay does on the ADE page.
• Greg Egan says:
I just wanted to mention another reference, “Lectures on representations of finite groups and invariant theory” by Dmitri I. Panyushev, that deals with this subject in a lot more detail.
• John Baez says:
Thanks, Greg! Joris van Hoboken’s thesis is very nice, which is why I’d taken the liberty of putting it on my website — it was freely available, but in a way that made me afraid it would disappear someday.
I hadn’t gotten around to looking into it for this issue. I’ll look at it soon.
It’s fascinating that the ‘obvious’ way to get invariants works perfectly for the icosahedron, but apparently fails to give a generating set of invariants for the octahedron and tetrahedron. I’m mainly interested in the icosahedron, but now I’m dying to know what trick is required to handle the other two cases. The tetrahedron has an obvious ‘excuse’ for requiring some trick or other, but not the cube.
I sometimes feel I’m spending my life catching up with Felix Klein. He had a real knack for finding the most beautiful stuff.
While failing to find the formulas for those invariants, I learned something nice from Slodowy’s paper. Take the complex surface $S$ we get from the icosahedron:
$p_1^2 + p_2^3 + p_3^5 = 0$
It’s smooth except at the origin, where it has a singularity. There’s a smooth complex surface $\tilde{S}$ that maps down to $S$ in a way that’s one-to-one except at the origin. The points in $\tilde{S}$ that map to the origin form 8 copies of the Riemann sphere.
Draw a dot for each of these spheres. Draw an edge between dots when two of these spheres intersect. You get the $\mathrm{E}_8$ Dynkin diagram!
The same idea works in the other cases, giving $\mathrm{E}_7$ for the octahedron and $\mathrm{E}_6$ for the tetrahedron.
• John Baez says:
Greg wrote:
Apparently, to get these particular invariants requires understanding “semi-invariants” and tweaking the McKay construction in some fashion.
I’m trying to understand this, and it looks like a “semi-invariant” is something that’s invariant “up to a phase”, or more generally up to multiplication by some complex number. So, he’s looking for polynomial functions
$P : \mathbb{C}^2 \to \mathbb{C}$
that obey
$P(g v) = \alpha(g) P(v)$
for all $g$ in our finite group $\Gamma$ and all $v\in \mathbb{C}^2,$ where $\alpha(g)$ is some complex number depending on $g.$ Presumably you can find such semi-invariants and then tweak them a bit to get invariants.
It’s easy to check that $\alpha$ winds up obeying
$\alpha(gh) = \alpha(g) \alpha(h)$
So, you only get semi-invariants with nontrivial $\alpha$ when there are nontrivial homomorphisms
$\alpha : \Gamma \to \mathbb{C}^*$
where $\mathbb{C}^*$ is the multiplicative group of nonzero complex numbers.
Since $\mathbb{C}^*$ is abelian, $\alpha$ must come from some homomorphism out of the abelianization of $\Gamma.$ That’s the group you get by taking $\Gamma$ and modding out by all commutators $g h g^{-1} h^{-1}.$
The upshot will soon be revealed…
• John Baez says:
… so, there can be a ‘semi-invariant’ polynomial for the finite group $\Gamma$ that’s not an actual invariant only if the abelianization of $\Gamma$ is nontrivial!
For the tetrahedron the relevant group $\Gamma$ is the 24-element binary tetrahedral group, the double cover of the rotational symmetries of the tetrahedron. This is also $\mathrm{SL}(2,\mathbb{F}_3),$ and its abelianization is nontrivial: it’s $\mathbb{Z}/3$ according to GroupProps.
For the octahedron the relevant group $\Gamma$ is the 48-element binary octahedral group, the double cover of the rotational symmetries of the octahedron. Its abelianization is nontrivial: it’s $\mathbb{Z}/2$ according to GroupProps (if you read between the lines).
For the icosahedron the relevant group $\Gamma$ is the 120-element binary icosahedral group, the double cover of the rotational symmetries of the icosahedron. This is also $\mathrm{SL}(2,\mathbb{F}_5),$ and its abelianization is trivial, according to GroupProps.
So, the icosahedral case is ‘better’ than the other two: every semi-invariant is invariant. And the reason is that every element of the binary icosahedral group is a commutator $ghg^{-1}h^{-1}$, unlike the other two cases.
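These three abelianizations can be checked by brute force, modelling each binary polyhedral group by unit quaternions. The sketch below uses one standard choice of generators (my own bookkeeping, not GroupProps’); floats with rounded hash keys are accurate enough at this scale:

```python
import math

def qmul(p, q):
    # Hamilton product of quaternions p = a1 + b1 i + c1 j + d1 k, etc.
    a1, b1, c1, d1 = p
    a2, b2, c2, d2 = q
    return (a1*a2 - b1*b2 - c1*c2 - d1*d2,
            a1*b2 + b1*a2 + c1*d2 - d1*c2,
            a1*c2 - b1*d2 + c1*a2 + d1*b2,
            a1*d2 + b1*c2 - c1*b2 + d1*a2)

def qinv(q):
    # for unit quaternions, the inverse is the conjugate
    a, b, c, d = q
    return (a, -b, -c, -d)

def key(q):
    # hashable key, tolerant of floating-point round-off
    return tuple(round(x, 6) for x in q)

def generate(gens):
    # close a set of unit quaternions under multiplication
    elems = {key(g): g for g in gens}
    grew = True
    while grew:
        grew = False
        cur = list(elems.values())
        for a in cur:
            for b in cur:
                p = qmul(a, b)
                if key(p) not in elems:
                    elems[key(p)] = p
                    grew = True
    return list(elems.values())

def abelianization_order(gens):
    G = generate(gens)
    # all commutators g h g^-1 h^-1; together they generate [G, G]
    comms = [qmul(qmul(g, h), qmul(qinv(g), qinv(h))) for g in G for h in G]
    return len(G) // len(generate(comms))

s2  = math.sqrt(2) / 2
phi = (1 + math.sqrt(5)) / 2

i_q   = (0.0, 1.0, 0.0, 0.0)          # i
omega = (0.5, 0.5, 0.5, 0.5)          # (1 + i + j + k)/2, order 6
s_oct = (s2, s2, 0.0, 0.0)            # (1 + i)/sqrt(2), order 8
t_ico = (phi/2, 1/(2*phi), 0.5, 0.0)  # a unit icosian of order 10

assert len(generate([i_q, omega])) == 24                 # binary tetrahedral
assert len(generate([i_q, omega, s_oct])) == 48          # binary octahedral
assert len(generate([i_q, omega, t_ico])) == 120         # binary icosahedral

assert abelianization_order([i_q, omega]) == 3           # Z/3
assert abelianization_order([i_q, omega, s_oct]) == 2    # Z/2
assert abelianization_order([i_q, omega, t_ico]) == 1    # perfect group
```

The last assertion is the statement that every element of the binary icosahedral group is a product of commutators.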
• Greg Egan says:
John wrote:
So, the icosahedral case is ‘better’ than the other two: every semi-invariant is invariant.
Thanks for explaining that!
What also helped me grasp the distinction here was going back and checking what happens if you compute the product of the dot products of ${x,y,z}$ with three linearly independent unit normals to faces of a cube, e.g.:
$P_f(x,y,z) = x y z$
Unlike the case with the faces of an icosahedron, this turns out to be a semi-invariant, only invariant up to a sign. And that’s true even if we restrict to the 24-element group of rotational symmetries of the cube.
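Concretely, one rotation suffices to see this: a 90° turn about a coordinate axis is a rotational symmetry of the cube, and it flips the sign of $xyz$. A tiny sketch:

```python
def P_f(x, y, z):
    # product of the dot products of (x, y, z) with the three face normals
    return x * y * z

def rot_z90(x, y, z):
    # 90-degree rotation about the z-axis, a rotational symmetry of the cube
    return (y, -x, z)

for v in [(1.0, 2.0, 3.0), (-0.5, 0.7, 1.3), (2.0, -1.0, 4.0)]:
    assert P_f(*rot_z90(*v)) == -P_f(*v)       # invariant only up to a sign
    assert P_f(*rot_z90(*v))**2 == P_f(*v)**2  # the square is a genuine invariant
```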
It’s worth pointing out that if we were studying the 120-element group of rotations and reflections of an icosahedron, rather than the binary icosahedral group of the same order, there is an obvious semi-invariant: the product of the dot products of ${x,y,z}$ with 15 unit vectors, one through the centre of each of the 15 pairs of opposite edges of the icosahedron. This changes sign under an inversion.
• John Baez says:
That cube example is nice.
I had some fun today trying to grok why the abelianization of the binary tetrahedral group is $\mathbb{Z}/3.$ I didn’t completely succeed, but here’s the idea:
Suppose you have a ‘spinor tetrahedron’. This is like an ordinary regular tetrahedron except that its symmetry group is the double cover of the usual rotational symmetry group of the tetrahedron. So, you have to turn it around twice around any axis for it to reach its original state.
For example, if we take our axis to be the line from a vertex to the midpoint of the opposite face, the spinor tetrahedron has not 3 but 6 symmetries that preserve this axis. You can turn it 120° around this axis, and that’s a symmetry, but you have to do this 6 times before you get back where you started.
Similarly for the axis between midpoints of opposite edges: turning the spinor tetrahedron 180° around such an axis is a symmetry, but you have to do this 4 times before you get back where you started.
Now, the thing to grok is this. We can assign to any symmetry of the spinor tetrahedron a ‘twist number’, which is an integer mod 3. The twist number is characterized by two properties:
• if you turn the spinor tetrahedron 120° around any axis from a vertex to the midpoint of the opposite face, and turn it clockwise with the vertex pointing towards you, this symmetry has twist number 1.
• if you do one symmetry $g$ and then another $h,$ the twist number of the composite symmetry $g h$ is the sum of the twist numbers for $g$ and $h.$
It’s also important to grok that while the twist number is well-defined mod 3, it wouldn’t be well-defined mod 6.
(There’s a similar puzzle for the cube, where the twist numbers are well-defined mod 2, but I didn’t think about that.)
• Bruce Bartlett says:
For further reference. I pointed out the article of Kirby and Scharlemann above as a nice reference, observing that something funny happens to the text between page 128 and 129, which was also picked up on by Mike Doherty.
There is a Russian translation of that paper, which fills in the gaps.
I am told that the Russian text at the top of page 150 reads
“The group $I^* < SU(2)$ has the following elements, where $\epsilon = \ldots$”
14. jessemckeown says:
So, a thing that has been niggling me for a little is: the generic quintic (even the generic depressed quintic) has no root at zero, so a rescaling of $x$ allows one to fix the constant term equal to $1$, which means the generic depressed singular quintic mostly lives in a 1-parameter family — but there is a small trouble in that there isn’t a good single parameter that captures these features AND the cusps we’re looking at: so I had to excurse via a Moebius transform to get this picture:
• jessemckeown says:
wordpress is funny. the link is actually preserved in the previous comment, but its contents (an img tag) have been removed…
• Scott Hotton says:
That is interesting. One of the curve’s cusps appears to meet at a point of inflection.
Here is a picture of the cubical parabola and one of its involutes under the Mobius transformation $(z - x_0)/(z+x_0)$. The red curve is the image of the cubical parabola. The blue curve is the image of the involute. The point $x_0$ is the location of the higher order cusp of the involute before the transformation is applied to the curves. In this case $x_0 = 5$. The image of the involute of the cubical parabola is not the same as the involute of the image of the cubical parabola.
The image of the cubical parabola and its involute meet at the point 1 in the sense that they meet at the point at infinity and the image of the point at infinity under the Mobius transformation is 1 for any $x_0 \neq 0$.
The image of the cubical parabola’s inflection point is -1 for any $x_0 \neq 0$. The image of the higher order cusp is 0 for any $x_0 \neq 0$. The image of the lower order cusp is on the image of the cubical parabola but it’s not on the cubical parabola’s inflection point. It would be on the cubical parabola’s inflection point in the $x_0 = 0$ case, except the involute’s cusps go away when $x_0 = 0$.
I would have posted the picture if I knew how.
• John Baez says:
I fixed Jesse’s picture.
For a long time I thought it was hopeless for commenters to post pictures here: if anyone but me posts a comment containing the usual html for an image:
<img src = “…”>
that portion of the comment is deleted.
Then suddenly I noticed that if someone includes a URL ending in .jpg or .gif, the blog magically displays the picture! I don’t know if that feature is new.
But apparently this doesn’t work for a .png file?
• jessemckeown says:
Yes, that extra intersection — I’m sure it’s really in my picture, because of how the inversion was necessary to get that very-singular cusp; I’m a tad worried that the resulting cusp picture is actually of a 7/2-cusp, because blowing up to resolve the intersection would stretch the cusp as well.
• John Baez says:
I fixed Jesse’s picture and some other stuff.
For a long time I thought it was hopeless for commenters to post pictures here: if anyone but me posts a comment containing the usual html for an image:
<img src = “…”>
that portion of the comment is deleted.
Then suddenly I noticed that if someone includes a URL ending in .jpg or .gif, the blog magically displays the picture!
I don’t know if that feature is new.
Jesse included a .png, but that works too. Jesse’s mistake was including his .png inside
<a href = “…”>
Simply type the URL, and the picture will appear.
15. Mike Doherty says:
I tried to post last night that the polynomials are on pages 60 and 61 of Klein’s “Lectures on the icosahedron..”, but my post doesn’t seem to have worked.
• John Baez says:
Your comment is visible now; I had to approve it since you were a first-time commenter, and somehow I overlooked it.
Thanks! I obviously need to get ahold of that book; I don’t have it with me now.
16. Bruce Bartlett says:
So, one nice thing that happened recently on this comments thread has been Greg Egan confirming John’s conjecture about the relationship between the invariant polynomials for $\hat{G}$ acting on $\mathbb{C}^2$ and the invariant polynomials for $G$ acting on $\mathbb{C}^3$. Here $G$ is the icosahedral group and $\hat{G} \subset SU(2)$ is its double cover. There seems to remain an issue about whether McKay’s polynomials are the same as Slodowy’s.
But I’d like to return to the bigger picture, in particular to Claim A of Arnold:
Claim A. The variety of irregular orbits of the action of the icosahedral group $H_3$ on $\mathbb{C}^3$ is isomorphic to the set of polynomials of the form $x^5 + ax^4 + bx^2 + c$ having multiple roots.
There seems to be an even more important claim lurking in the background here, relating the entire orbit space (not just the space of irregular orbits) of reflection groups to spaces of polynomials. The part of the pattern that we know so far seems to go like this:
The orbit space of the action of $A_n$ on $\mathbb{C}^n$ is naturally isomorphic to the space of polynomials of the form $x^{n+1} + a_1 x^{n-1} + \cdots + a_n$.
The orbit space of the action of $B_n$ on $\mathbb{C}^n$ is naturally isomorphic to the space of polynomials of the form $x^n + b_1 x^{n-1} + \cdots + b_n$.
The orbit space of the action of $H_3$ on $\mathbb{C}^3$ is naturally isomorphic to the space of polynomials of the form $x^5 + a x^4 + bx^2 + c$.
Now, the natural map in the first isomorphism is just the roots to coefficients map as explained in TWF261.
I don’t really understand the map in the second isomorphism (Arnold explains it in his book but I don’t understand it).
And, none of us seem to understand at all the map in the third isomorphism.
Perhaps this pattern (of relating orbit spaces of reflection groups to spaces of polynomials) holds for all finite reflection groups. John hinted at that in TWF261. But I have been unable to find references on this. Perhaps someone could point me to a book?
• John Baez says:
I’m glad you’re returning to this “bigger picture”. There’s a lot to explore here, but tonight I just have the energy for one point:
Claim A. The variety of irregular orbits of the action of the icosahedral group $\mathrm{H}_3$ on $\mathbb{C}^3$ is isomorphic to the set of polynomials of the form $x^5 + ax^4 + bx^2 + c$ having multiple roots.
Arnol’d indeed claims this, but Greg showed that this claim is, strictly speaking, false. The reason is that all polynomials $x^5 + ax^4 + bx^2 + c$ with $c= 0$ have zero as a repeated root! Only after we omit the plane $c = 0$ does Greg get a surface that looks diffeomorphic to the variety of irregular orbits of the action of the icosahedral group $\mathrm{H}_3$ on $\mathbb{C}^3.$
So, this wrinkle must be taken into account!
• Bruce Bartlett says:
Yes, good point.
Arnol’d attributes the proof of Claim A to Lyashko, and we’re pretty sure he means this paper. Now there’s lots of cool stuff in that paper but I have been unable to find any mention of a quintic polynomial of the form $x^5 + ax^4 + bx^2 + c$. What is going on?
• John Baez says:
When Arnol’d says Lyashko proved X, I don’t think he means “I read a paper in which Lyashko proved something”. I think he means “Lyashko gave a talk in my seminar, in which he proved X.”
Remember, in the good old days before the collapse of the Soviet Union and the exodus of mathematicians, Russian mathematics was seminar-based. Anyone who proved anything about singularity theory would need to explain it to Arnol’d before it was accepted. Publication was often an afterthought in this tradition. That’s why we’re having trouble getting details.
• Bruce Bartlett says:
I think he means “Lyashko gave a talk in my seminar, in which he proved X.”
Scherbak also credits a very similar / identical result to Lyashko in “Singularities of families of evolvents in the neighborhood of an inflection point”, and he actually cites that paper. So perhaps it is in there somewhere.
• Bruce Bartlett says:
I have finally tracked down further details for Arnold’s Claim A and Claim B, in Arnold’s own papers, though I have yet to absorb them. See page 169 of Singularities of systems of rays, where he talks about it using the notion of “reamers”, and page 2696 of Singularities in variational calculus.
17. Bruce Bartlett says:
By the way, here’s the solution to the original puzzle of this post (Prove that the generic involute of a cubical parabola has a cusp of order 5/2 on the straight line tangent to the parabola at the inflection point). It was communicated to me by Andre Henriques. The idea is to simply expand everything in a Taylor series up to order $x^5$.
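For anyone who wants to replay that Taylor-series computation, here is a sympy sketch. (The parametrisation $(t, t^3)$ and the string-length constant $c$ are my own choices, not necessarily Henriques’ exact setup.)

```python
import sympy as sp

t, c = sp.symbols('t c', positive=True)

# cubical parabola parametrised as (t, t^3); inflection point at t = 0,
# where the tangent line is the x-axis
speed = sp.sqrt(1 + 9*t**4)
s = sp.integrate(speed.series(t, 0, 8).removeO(), t)   # arc length from t = 0
tangent = sp.Matrix([1, 3*t**2]) / speed

# involute: unwind a string of length c from the curve (c != 0 is the generic case)
inv = sp.Matrix([t, t**3]) - (s - c) * tangent
x = sp.expand(sp.series(inv[0], t, 0, 6).removeO())
y = sp.expand(sp.series(inv[1], t, 0, 6).removeO())

# leading behaviour near t = 0:
#   y = 3 c t^2 - 2 t^3 + ...,   x = c - (9/2) c t^4 + (18/5) t^5 + ...
assert y.coeff(t, 2) == 3*c and y.coeff(t, 3) == -2
assert x.coeff(t, 4) == -sp.Rational(9, 2)*c and x.coeff(t, 5) == sp.Rational(18, 5)
```

Since $t \sim \sqrt{y/3c}$ near the cusp, the even powers of $t$ in $x$ are analytic in $y$, while the leading odd term $(18/5)t^5$ contributes $y^{5/2}$: a $5/2$ cusp sitting on the tangent line at the inflection point.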
18. Bruce Bartlett says:
Here is an image of the discriminant of the icosahedral group $\mathrm{H}_3$ obtained using Surfer:
(Click to enlarge.)
The algebraic equation is taken straight from Lyashko, a few lines above equation (1).
• John Baez says:
Excellent! I’ve fallen behind you and Greg, because I’m writing a little paper about ideas that came up in the Diamonds and triamonds thread. But at some point I at least want to summarize what you’ve discovered. I have three posts on Visual Insight coming up: the evolute of the cubical parabola on May 1st (which is all written up), the discriminant of the icosahedral group on May 15th (which needs a lot of work, and will profit from your new discoveries), and the discriminant of that quintic on June 1st (ditto). Maybe I should also add one on the swallowtail!
19. Scott Hotton says:
It looks like Etienne Ghys has given the world a holiday present in the form of a sensational free English tome:
http://perso.ens-lyon.fr/ghys/accueil/
It’s hard for me to describe but I would say that a prominent theme in the book is the combinatorics of singularities. The cubical parabola is discussed very briefly on page 107. Here are five quotes from the book:
“This is a preliminary version. Comments are most welcome …”
“Amazingly, this example of a cross with identified opposite sides has already been considered by Gauss under the name Doppelring. In his remarkable paper “Gauss als Geometer”, Stäckel relates a conversation between Gauss and Möbius. Gauss observes that the “Doppelring” has a connected boundary. More interestingly, he notes that one can find two disjoint arcs connecting two linked pairs of points on the boundary. I recall that the impossibility of such a configuration in a disc was the crucial point in his proof of the fundamental theorem of algebra.”
“So, Hipparchus was right: there are a(10) = 2 x 103,049 ways of combining 10 assertions, using OR or AND, in the sense just described. … Most mathematicians, including myself, have a naive idea about Greek mathematics. We believe that it only consists of Geometry, in the spirit of Euclid. The example of the computation by Hipparchus of the tenth Schroeder number may be a hint that the Ancient Greeks had developed a fairly elaborate understanding of combinatorics …”
“We will see that the collection of all singularities, up to homeomorphisms, can be seen as a singular operad and this helps understanding the global picture.”
“In his review on the book by Markl, Shnider and Stasheff on operads, John Baez explains one of the motivations for operads.
‘Most homotopy theorists would gladly sell their souls for the ability to compute the homotopy groups of an arbitrary space.’”
• Bruce Bartlett says:
Yes he has given us all a fantastic gift!
• John Baez says:
Hey, this sounds fun. By the way, there’s an interesting operad connected to one of the counting problems described by Hipparchus. I wrote about some of Etienne Ghys’ work on this topic:
• John Baez, The Hipparchus operad, The n-Category Café, 1 April 2013.
(Not an April Fool’s joke!)
# All Questions
1k views
### How is the key shared in symmetric key cryptography?
Symmetric key cryptography is an encryption system in which the sender and receiver of a message share a single, common key that is used to encrypt and decrypt the message. Is the key public or is it ...
2k views
### Initialization vector in symmetric-key encryption
Can we use symmetric-key algorithms without an initialization vector? I am making an app where both the sender and receiver share a key and there is no way to create an initialization vector for each ...
183 views
### Can you really insert the text you want in one-time pad?
The Wikipedia article "One-time pad ~ Authentication" says : For example, an attacker who knows that the message contains "meet jane and me tomorrow at three thirty pm" at a particular point can ...
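The attack that excerpt alludes to is a one-liner: XOR the ciphertext with (known plaintext XOR desired plaintext). A sketch with hypothetical messages, showing why a one-time pad needs separate authentication:

```python
import os

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

plaintext = b"meet jane at three thirty pm"   # known to the attacker
key = os.urandom(len(plaintext))              # the one-time pad
ciphertext = xor(plaintext, key)

# the attacker never sees `key`; they splice in a same-length message of their own
forged = xor(ciphertext, xor(plaintext, b"meet paul at three thirty pm"))

# the receiver decrypts the forgery to the attacker's text, with no error
assert xor(forged, key) == b"meet paul at three thirty pm"
```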
312 views
### Finite fields and ECC
I understand modular arithmetic(or at least I think I do!) and I've tried to read and learn about how the Math in RSA works(and I think it went pretty well). I've been reading up on ECC and it looks ...
682 views
### CBC-MAC , fixed length, all blocks returned
CBC-MAC, with fixed-length messages. Is it safe to return all ciphered blocks instead of the last? My intuition says it is less secure, since it gives an attacker more information. But how could one ...
736 views
525 views
### A proof-of-work random number generation system for Pokémon [closed]
(The original question that was here was considered too confusing, unclear, and rambling. You can still view it in the edit history, but the content is no longer useful by itself, and cannot be ...
1k views
### AES Key Length vs Block Length
This answer points out that certain key and block lengths were a requirement for the AES submissions: The candidate algorithm shall be capable of supporting key-block combinations with sizes of ...
151 views
### The meaning of “scheme”
This question is a bit different from other questions here, but I think it is suitable to correctly understand the terminology of cryptography. Consider the following two sets of terms: Encryption ...
3k views
### How many keys does the Playfair Cipher have?
I was just studying the Playfair cipher and from what I've understood, it is just a slightly better version of a Caesar cipher, in that it isn't actually mono-alphabetic but rather the 'digrams' are ...
289 views
### Why is pairing-based crypto suitable for some particular cryptographic primitives?
Why is pairing-based crypto widely used in some special crypto primitives such as ID-based crypto and variations of standard signatures? I mean, as deeply as possible, what makes it suitable for ...
1k views
### What is the difference between these AES encryption methods
I am using AES encryption (Rijindael) with Base-64 encoding in Obj-C and VB. I am currently using the following two projects to achieve this: Obj-C: ...
337 views
### How much bigger does a precomputed lookup table get when salt is added?
I am trying to wrap my head around the benefits of salt in cryptography. http://en.wikipedia.org/wiki/Salt_(cryptography) I understand that adding salt makes it harder to precompute a table. But ...
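The back-of-the-envelope answer to that question: an $s$-bit random salt multiplies the table size by $2^s$, since every candidate password must be precomputed under every possible salt value. A trivial sketch:

```python
def table_entries(num_passwords: int, salt_bits: int) -> int:
    # each password must appear in the table once per possible salt value
    return num_passwords * 2**salt_bits

assert table_entries(10**6, 0) == 10**6            # unsalted baseline
assert table_entries(10**6, 16) == 65536 * 10**6   # 2^16 = 65536 times bigger
```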
# Properties
Label 6720.2.a.k Level 6720 Weight 2 Character orbit 6720.a Self dual yes Analytic conductor 53.659 Analytic rank 0 Dimension 1 CM no Inner twists 1
## Newspace parameters
Level: $$N$$ = $$6720 = 2^{6} \cdot 3 \cdot 5 \cdot 7$$ Weight: $$k$$ = $$2$$ Character orbit: $$[\chi]$$ = 6720.a (trivial)
## Newform invariants
Self dual: yes Analytic conductor: $$53.6594701583$$ Analytic rank: $$0$$ Dimension: $$1$$ Coefficient field: $$\mathbb{Q}$$ Coefficient ring: $$\mathbb{Z}$$ Coefficient ring index: $$1$$ Twist minimal: no (minimal twist has level 210) Fricke sign: $$-1$$ Sato-Tate group: $\mathrm{SU}(2)$
## $q$-expansion
$$f(q)$$ $$=$$ $$q - q^{3} - q^{5} - q^{7} + q^{9} + O(q^{10})$$ $$q - q^{3} - q^{5} - q^{7} + q^{9} + 4q^{11} + 2q^{13} + q^{15} + 2q^{17} - 4q^{19} + q^{21} + 8q^{23} + q^{25} - q^{27} - 6q^{29} + 8q^{31} - 4q^{33} + q^{35} + 2q^{37} - 2q^{39} + 2q^{41} - 12q^{43} - q^{45} + 8q^{47} + q^{49} - 2q^{51} - 6q^{53} - 4q^{55} + 4q^{57} + 4q^{59} + 2q^{61} - q^{63} - 2q^{65} + 12q^{67} - 8q^{69} - 8q^{71} - 14q^{73} - q^{75} - 4q^{77} + q^{81} + 12q^{83} - 2q^{85} + 6q^{87} + 2q^{89} - 2q^{91} - 8q^{93} + 4q^{95} + 10q^{97} + 4q^{99} + O(q^{100})$$
## Embeddings
For each embedding $$\iota_m$$ of the coefficient field, the values $$\iota_m(a_n)$$ are shown below.
For more information on an embedded modular form you can click on its label.
Label $$\iota_m(\nu)$$ $$a_{2}$$ $$a_{3}$$ $$a_{4}$$ $$a_{5}$$ $$a_{6}$$ $$a_{7}$$ $$a_{8}$$ $$a_{9}$$ $$a_{10}$$
1.1 0 0 −1.00000 0 −1.00000 0 −1.00000 0 1.00000 0
## Inner twists
This newform does not admit any (nontrivial) inner twists.
## Twists
By twisting character orbit
Char Parity Ord Mult Type Twist Min Dim
1.a even 1 1 trivial 6720.2.a.k 1
4.b odd 2 1 6720.2.a.bp 1
8.b even 2 1 1680.2.a.q 1
8.d odd 2 1 210.2.a.c 1
24.f even 2 1 630.2.a.b 1
24.h odd 2 1 5040.2.a.i 1
40.e odd 2 1 1050.2.a.h 1
40.f even 2 1 8400.2.a.p 1
40.k even 4 2 1050.2.g.d 2
56.e even 2 1 1470.2.a.q 1
56.k odd 6 2 1470.2.i.f 2
56.m even 6 2 1470.2.i.b 2
120.m even 2 1 3150.2.a.w 1
120.q odd 4 2 3150.2.g.e 2
168.e odd 2 1 4410.2.a.l 1
280.n even 2 1 7350.2.a.p 1
By twisted newform orbit
Twist Min Dim Char Parity Ord Mult Type
210.2.a.c 1 8.d odd 2 1
630.2.a.b 1 24.f even 2 1
1050.2.a.h 1 40.e odd 2 1
1050.2.g.d 2 40.k even 4 2
1470.2.a.q 1 56.e even 2 1
1470.2.i.b 2 56.m even 6 2
1470.2.i.f 2 56.k odd 6 2
1680.2.a.q 1 8.b even 2 1
3150.2.a.w 1 120.m even 2 1
3150.2.g.e 2 120.q odd 4 2
4410.2.a.l 1 168.e odd 2 1
5040.2.a.i 1 24.h odd 2 1
6720.2.a.k 1 1.a even 1 1 trivial
6720.2.a.bp 1 4.b odd 2 1
7350.2.a.p 1 280.n even 2 1
8400.2.a.p 1 40.f even 2 1
## Atkin-Lehner signs
$$p$$ Sign
$$2$$ $$-1$$
$$3$$ $$1$$
$$5$$ $$1$$
$$7$$ $$1$$
## Hecke kernels
This newform subspace can be constructed as the intersection of the kernels of the following linear operators acting on $$S_{2}^{\mathrm{new}}(\Gamma_0(6720))$$:
$$T_{11} - 4$$ $$T_{13} - 2$$ $$T_{17} - 2$$ $$T_{19} + 4$$ $$T_{23} - 8$$ $$T_{29} + 6$$ $$T_{31} - 8$$
## Hecke Characteristic Polynomials
$p$ $F_p(T)$
$2$ 1
$3$ $$1 + T$$
$5$ $$1 + T$$
$7$ $$1 + T$$
$11$ $$1 - 4 T + 11 T^{2}$$
$13$ $$1 - 2 T + 13 T^{2}$$
$17$ $$1 - 2 T + 17 T^{2}$$
$19$ $$1 + 4 T + 19 T^{2}$$
$23$ $$1 - 8 T + 23 T^{2}$$
$29$ $$1 + 6 T + 29 T^{2}$$
$31$ $$1 - 8 T + 31 T^{2}$$
$37$ $$1 - 2 T + 37 T^{2}$$
$41$ $$1 - 2 T + 41 T^{2}$$
$43$ $$1 + 12 T + 43 T^{2}$$
$47$ $$1 - 8 T + 47 T^{2}$$
$53$ $$1 + 6 T + 53 T^{2}$$
$59$ $$1 - 4 T + 59 T^{2}$$
$61$ $$1 - 2 T + 61 T^{2}$$
$67$ $$1 - 12 T + 67 T^{2}$$
$71$ $$1 + 8 T + 71 T^{2}$$
$73$ $$1 + 14 T + 73 T^{2}$$
$79$ $$1 + 79 T^{2}$$
$83$ $$1 - 12 T + 83 T^{2}$$
$89$ $$1 - 2 T + 89 T^{2}$$
$97$ $$1 - 10 T + 97 T^{2}$$
# jax.lax.gather¶
jax.lax.gather(operand, start_indices, dimension_numbers, slice_sizes)[source]
Gather operator.
Wraps XLA’s Gather operator.
The semantics of gather are complicated, and its API might change in the future. For most use cases, you should prefer NumPy-style indexing (e.g., x[:, (1,4,7), …]) rather than using gather directly.
Parameters
• operand (Any) – an array from which slices should be taken
• start_indices (Any) – the indices at which slices should be taken
• dimension_numbers (GatherDimensionNumbers) – a lax.GatherDimensionNumbers object that describes how dimensions of operand, start_indices and the output relate.
• slice_sizes (Sequence[int]) – the size of each slice. Must be a sequence of non-negative integers with length equal to ndim(operand).
Return type
Any
Returns
An array containing the gather output.
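As a concrete sketch of how the pieces fit together (the shapes and dimension numbers here are chosen purely for illustration), the common NumPy-style pattern `x[indices, :]` can be reproduced with `gather` as follows:

```python
import jax.numpy as jnp
from jax import lax

x = jnp.arange(15).reshape(5, 3)   # operand: shape (5, 3)
start = jnp.array([[1], [3]])      # start_indices: shape (2, 1), two row indices

dnums = lax.GatherDimensionNumbers(
    offset_dims=(1,),            # output dim 1 holds the gathered row
    collapsed_slice_dims=(0,),   # the size-1 slice along operand dim 0 is dropped
    start_index_map=(0,),        # the single index component indexes operand dim 0
)
out = lax.gather(x, start, dimension_numbers=dnums, slice_sizes=(1, 3))

# Equivalent to the NumPy-style indexing the docs recommend instead:
assert (out == x[jnp.array([1, 3])]).all()
```

The example gathers rows 1 and 3 of a `(5, 3)` array, producing a `(2, 3)` result; for anything this simple, the plain indexing form on the last line is the recommended spelling.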
# ZF: Regularity axiom or axiom schema?
I have seen the axiom system ZF for set theory described as including a single axiom of regularity (aka "foundation"), namely $$\forall x\neq\emptyset \, \exists y\in x \ y\cap x = \emptyset$$ and also as including regularity as an infinite axiom schema, with an axiom for every formula $\varphi(x,x_1,\ldots,x_n)$: $$\forall x_1,\ldots,x_n \left(\exists x \, \varphi \rightarrow \exists x \left( \varphi \land \forall y\in x \ \neg \varphi\frac{y}{x}\right)\right)$$
The second version states that each non-empty class has an $\in$-minimal element, while the first one states that every non-empty set has an $\in$-minimal element. Is the second one stronger? Is it needed?
Let $\phi(x,x_1, \ldots, x_n)$ be a given formula. For given $x_1, \ldots, x_n$, suppose there is an $x$ such that $\phi(x, x_1, \ldots, x_n)$ holds. Let $X$ be the transitive closure of $\{x\}$ (which is a set) and
$z = \{y \in X \mid \phi(y, x_1, \ldots, x_n) \}.$ Then $z$ is non-empty, so by regularity $z$ has an $\in$-minimal element $x'$. Let $y \in x'$; then $y \in X$ (as $X$ is transitive) and $y \not\in z$ (as $x'$ is $\in$-minimal), so $\neg\phi(y,x_1,\ldots, x_n)$. That is, $x'$ is an $\in$-minimal element of the class defined by $\phi$.
So the schema follows from the other axioms of $\mathsf{ZF}$.
# Kerodon
### 9.9.2 Classification of Fibrations
Remark 9.9.2.1. Let $\pi : \operatorname{\mathcal{C}}\rightarrow \Delta ^ n$ be a cocartesian fibration of $\infty$-categories. It follows from Proposition 9.5.0.8 that $\operatorname{\mathcal{C}}$ is determined, up to equivalence, by the diagram of covariant transport functors
$\operatorname{\mathcal{C}}(0) \xrightarrow {F(1)} \operatorname{\mathcal{C}}(1) \xrightarrow {F(2)} \operatorname{\mathcal{C}}(2) \xrightarrow {F(3)} \cdots \xrightarrow {F(n)} \operatorname{\mathcal{C}}(n).$
Remark 9.9.2.2. Let $\operatorname{\mathcal{C}}$ be an $\infty$-category equipped with a functor $\pi : \operatorname{\mathcal{C}}\rightarrow \Delta ^ n$. If there exists a scaffold of $\pi$, then $\pi$ is a cocartesian fibration (note that $\pi$ is automatically an inner fibration, by virtue of Proposition 4.1.1.10).
Remark 9.9.2.3. Let $\operatorname{\mathcal{C}}$ be an $\infty$-category equipped with a functor $\pi : \operatorname{\mathcal{C}}\rightarrow \Delta ^ n$ having fibers $\{ \operatorname{\mathcal{C}}(i) = \{ i \} \times _{\Delta ^ n} \operatorname{\mathcal{C}}\} _{0 \leq i \leq n}$. Suppose that we are given a sequence of functors
$\operatorname{\mathcal{C}}(0) \xrightarrow { F(1) } \operatorname{\mathcal{C}}(1) \xrightarrow { F(2)} \operatorname{\mathcal{C}}(2) \xrightarrow { F(3)} \cdots \xrightarrow {F(n)} \operatorname{\mathcal{C}}(n)$
and a morphism of simplicial sets $U: M( \operatorname{\mathcal{C}}(0) \rightarrow \cdots \rightarrow \operatorname{\mathcal{C}}(n) ) \rightarrow \operatorname{\mathcal{C}}$ satisfying condition $(1)$ of Definition 5.3.4.2. Since the collection of $\pi$-cocartesian morphisms of $\operatorname{\mathcal{C}}$ is closed under composition (Corollary 5.1.2.4), we can replace $(2)$ with the following a priori weaker condition:
$(2')$
For every integer $1 \leq i \leq n$ and every object $C \in \operatorname{\mathcal{C}}(i-1)$, the composition
$\Delta ^1 \times \{ C\} \rightarrow \operatorname{N}_{\bullet }( \{ i - 1 < i \} ) \times \operatorname{\mathcal{C}}(i-1) \rightarrow M( \operatorname{\mathcal{C}}(0) \rightarrow \operatorname{\mathcal{C}}(1) \rightarrow \cdots \rightarrow \operatorname{\mathcal{C}}(n) ) \xrightarrow {U} \operatorname{\mathcal{C}}$
is a $\pi$-cocartesian edge of $\operatorname{\mathcal{C}}$.
If $\pi$ is a cocartesian fibration, then a morphism of $\operatorname{\mathcal{C}}$ is $\pi$-cocartesian if and only if it is locally $\pi$-cocartesian (Remark 5.1.4.5). In this case, we can restate $(2')$ as follows:
$(2'')$
For every integer $1 \leq i \leq n$, the composition
$\operatorname{N}_{\bullet }( \{ i-1 < i \} ) \times \operatorname{\mathcal{C}}(i-1) \rightarrow M(\operatorname{\mathcal{C}}(0) \rightarrow \cdots \rightarrow \operatorname{\mathcal{C}}(n) ) \xrightarrow {U} \operatorname{\mathcal{C}}$
witnesses the functor $F(i): \operatorname{\mathcal{C}}(i-1) \rightarrow \operatorname{\mathcal{C}}(i)$ as given by covariant transport along the edge $\operatorname{N}_{\bullet }( \{ i-1 < i \} ) \subseteq \Delta ^ n$ (in the sense of Definition 5.2.2.4).
Remark 9.9.2.4 (Compatibility with Pullback). Let $\pi : \operatorname{\mathcal{C}}\rightarrow \Delta ^ n$ be a functor of $\infty$-categories having fibers $\{ \operatorname{\mathcal{C}}(i) = \{ i\} \times _{\Delta ^ n} \operatorname{\mathcal{C}}\} _{0 \leq i \leq n}$ and let
$U: M( \operatorname{\mathcal{C}}(0) \rightarrow \operatorname{\mathcal{C}}(1) \rightarrow \cdots \rightarrow \operatorname{\mathcal{C}}(n) ) \rightarrow \operatorname{\mathcal{C}}$
be a scaffold of $\pi$. For every morphism of simplices $\alpha : \Delta ^{m} \rightarrow \Delta ^{n}$, the pullback
$\Delta ^{m} \times _{ \Delta ^{n} } M( \operatorname{\mathcal{C}}(0) \rightarrow \cdots \rightarrow \operatorname{\mathcal{C}}(n) ) \xrightarrow {\operatorname{id}\times U} \Delta ^{m} \times _{\Delta ^ n} \operatorname{\mathcal{C}}$
is a scaffold of the projection map $\Delta ^{m} \times _{\Delta ^ n} \operatorname{\mathcal{C}}\rightarrow \Delta ^ m$; here we implicitly invoke Remark 9.6.0.8 to identify $\Delta ^{m} \times _{\Delta ^ n} M( \operatorname{\mathcal{C}}(0) \rightarrow \cdots \rightarrow \operatorname{\mathcal{C}}(n) )$ with the mapping simplex of the diagram
$\operatorname{\mathcal{C}}( \alpha (0) ) \rightarrow \operatorname{\mathcal{C}}( \alpha (1) ) \rightarrow \cdots \rightarrow \operatorname{\mathcal{C}}( \alpha (m) ).$
Proposition 9.9.2.5. Let $\pi : \operatorname{\mathcal{C}}\rightarrow \Delta ^ n$ be a cocartesian fibration of $\infty$-categories having fibers $\{ \operatorname{\mathcal{C}}(i) = \{ i\} \times _{\Delta ^ n} \operatorname{\mathcal{C}}\} _{0 \leq i \leq n}$. Then there exists a scaffold
$U: M( \operatorname{\mathcal{C}}(0) \rightarrow \operatorname{\mathcal{C}}(1) \rightarrow \cdots \rightarrow \operatorname{\mathcal{C}}(n) ) \rightarrow \operatorname{\mathcal{C}}.$
Corollary 9.9.2.6. Let $\pi : \operatorname{\mathcal{C}}\rightarrow \Delta ^ n$ be a cocartesian fibration of $\infty$-categories having fibers $\{ \operatorname{\mathcal{C}}(i) = \{ i\} \times _{\Delta ^ n} \operatorname{\mathcal{C}}\} _{0 \leq i \leq n}$, and let
$U: M( \operatorname{\mathcal{C}}(0) \rightarrow \operatorname{\mathcal{C}}(1) \rightarrow \cdots \rightarrow \operatorname{\mathcal{C}}(n) ) \rightarrow \operatorname{\mathcal{C}}$
be a scaffold of $\pi$. Then, for every morphism of simplicial sets $X \rightarrow \Delta ^ n$, the induced map
$U': X \times _{\Delta ^ n} M( \operatorname{\mathcal{C}}(0) \rightarrow \operatorname{\mathcal{C}}(1) \rightarrow \cdots \rightarrow \operatorname{\mathcal{C}}(n) ) \rightarrow X \times _{\Delta ^ n} \operatorname{\mathcal{C}}$
is a categorical equivalence of simplicial sets.
Corollary 9.9.2.7. Let $\pi : \operatorname{\mathcal{C}}\rightarrow \Delta ^ n$ be a cocartesian fibration of $\infty$-categories and let $\operatorname{\mathcal{C}}(0)$ denote the fiber $\{ 0\} \times _{\Delta ^ n} \operatorname{\mathcal{C}}$. Then there exists a functor $V: \Delta ^ n \times \operatorname{\mathcal{C}}(0) \rightarrow \operatorname{\mathcal{C}}$ with the following properties:
$(1)$
The composition $\Delta ^ n \times \operatorname{\mathcal{C}}(0) \rightarrow \operatorname{\mathcal{C}}\xrightarrow {\pi } \Delta ^ n$ is given by projection onto the first factor (that is, $V$ is a morphism in the category $(\operatorname{Set_{\Delta }})_{/\Delta ^ n}$).
$(2)$
The restriction $V|_{ \{ 0\} \times \operatorname{\mathcal{C}}(0) }$ is equal to the identity map $\operatorname{id}_{\operatorname{\mathcal{C}}(0)}$.
$(3)$
For each object $C \in \operatorname{\mathcal{C}}(0)$, the restriction $V|_{ \Delta ^ n \times \{ C\} }: \Delta ^ n \rightarrow \operatorname{\mathcal{C}}$ carries each edge of $\Delta ^ n$ to a $\pi$-cocartesian morphism of $\operatorname{\mathcal{C}}$.
$(4)$
The diagram
$\xymatrix@R =50pt@C=50pt{ \operatorname{\partial \Delta }^ n \times \operatorname{\mathcal{C}}(0) \ar [r] \ar [d] & \operatorname{\partial \Delta }^ n \times _{\Delta ^ n} \operatorname{\mathcal{C}}\ar [d] \\ \Delta ^ n \times \operatorname{\mathcal{C}}(0) \ar [r]^-{V} & \operatorname{\mathcal{C}}}$
is a categorical pushout square of simplicial sets.
Proof. For $0 \leq i \leq n$, let $\operatorname{\mathcal{C}}(i)$ denote the fiber $\{ i\} \times _{\Delta ^ n} \operatorname{\mathcal{C}}$. By virtue of Proposition 9.9.2.5, there exists a sequence of functors
$\operatorname{\mathcal{C}}(0) \xrightarrow {F(1)} \operatorname{\mathcal{C}}(1) \xrightarrow {F(2)} \operatorname{\mathcal{C}}(2) \rightarrow \cdots \xrightarrow {F(n)} \operatorname{\mathcal{C}}(n)$
and a morphism $U: M \rightarrow \operatorname{\mathcal{C}}$ which is a scaffold of $\pi$, where $M = M( \operatorname{\mathcal{C}}(0) \rightarrow \operatorname{\mathcal{C}}(1) \rightarrow \cdots \rightarrow \operatorname{\mathcal{C}}(n) )$ is the mapping simplex of Notation 5.3.2.11. Let $V$ denote the composite map
$\Delta ^ n \times \operatorname{\mathcal{C}}(0) \rightarrow M \xrightarrow {U} \operatorname{\mathcal{C}}.$
It follows immediately from the definitions that $V$ satisfies conditions $(1)$, $(2)$, and $(3)$. To prove $(4)$, we observe that there is a commutative diagram
$\xymatrix@R =50pt@C=50pt{ \operatorname{\partial \Delta }^{n} \times \operatorname{\mathcal{C}}(0) \ar [r] \ar [d] & \operatorname{\partial \Delta }^ n \times _{\Delta ^ n} M \ar [r] \ar [d] & \operatorname{\partial \Delta }^ n \times _{\Delta ^ n} \operatorname{\mathcal{C}}\ar [d] \\ \Delta ^ n \times \operatorname{\mathcal{C}}(0) \ar [r] & M \ar [r]^-{U} & \operatorname{\mathcal{C}}. }$
Note that the square on the left is a pushout diagram in which the vertical maps are monomorphisms, hence a categorical pushout diagram (Example 4.5.4.12). Proposition 9.5.0.8 implies that both of the horizontal maps on the right are categorical equivalences, so that the right square is also a categorical pushout diagram (Proposition 4.5.4.10). Applying Proposition 4.5.4.8, we deduce that the outer rectangle is also a categorical pushout square. $\square$
Exercise 9.9.2.8. In the situation of Corollary 9.9.2.7, show that any functor $V: \Delta ^ n \times \operatorname{\mathcal{C}}(0) \rightarrow \operatorname{\mathcal{C}}$ satisfying conditions $(1)$, $(2)$, and $(3)$ also satisfies condition $(4)$.
## October 11, 2011
### Weak Systems of Arithmetic
#### Posted by John Baez
The recent discussion about the consistency of arithmetic made me want to brush up on my logic. I’d like to learn a bit about axioms for arithmetic that are weaker than Peano arithmetic. The most famous is Robinson arithmetic:
Robinson arithmetic is also known as Q, after a Star Trek character who could instantly judge whether any statement was provable in this system, or not:
Instead of Peano arithmetic’s axiom schema for mathematical induction, Q only has inductive definitions of addition and multiplication, together with an axiom saying that every number other than zero is a successor. It’s so weak that it has computable nonstandard models! But, as the above article notes:
Q fascinates because it is a finitely axiomatized first-order theory that is considerably weaker than Peano arithmetic (PA), and whose axioms contain only one existential quantifier, yet like PA is incomplete and incompletable in the sense of Gödel’s Incompleteness Theorems, and essentially undecidable.
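Concretely, the finitely many axioms of Q alluded to above can be written out in full. In the standard presentation (with $S$ the successor symbol), they are:

```latex
\begin{align*}
&\text{(Q1)} \quad S x \neq 0 \\
&\text{(Q2)} \quad S x = S y \rightarrow x = y \\
&\text{(Q3)} \quad x \neq 0 \rightarrow \exists y \, (x = S y) \\
&\text{(Q4)} \quad x + 0 = x \\
&\text{(Q5)} \quad x + S y = S(x + y) \\
&\text{(Q6)} \quad x \times 0 = 0 \\
&\text{(Q7)} \quad x \times S y = (x \times y) + x
\end{align*}
```

Axiom (Q3) is the "every number other than zero is a successor" axiom, and it carries the one existential quantifier mentioned in the quote.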
But there are many interesting systems of arithmetic between PA and Q in strength. I’m hoping that if I tell you a bit about these, experts will step in and tell us more interesting things—hopefully things we can understand!
For example, there’s primitive recursive arithmetic, or PRA:
This system lacks quantifiers, and has a separate predicate for each primitive recursive function, together with an axiom recursively defining it.
What’s an interesting result about PRA? Here’s the only one I’ve seen: its proof-theoretic ordinal is $ω^ω$. This is much smaller than the proof-theoretic ordinal for Peano arithmetic, namely $\epsilon_0$.
What’s $\epsilon_0$? It’s a big but still countable ordinal which I explained back in week236. And what’s the proof-theoretic ordinal of a theory?
Ordinal analysis concerns true, effective (recursive) theories that can interpret a sufficient portion of arithmetic to make statements about ordinal notations. The proof theoretic ordinal of such a theory T is the smallest recursive ordinal that the theory cannot prove is well founded — the supremum of all ordinals $\alpha$ for which there exists a notation $o$ in Kleene’s sense such that T proves that $o$ is an ordinal notation.
For more details, try this wonderfully well-written article:
Climbing down the ladder we eventually meet elementary function arithmetic, or EFA:
Its proof-theoretic ordinal is just $\omega^3$. It’s famous because Harvey Friedman made a grand conjecture about it:
Every theorem published in the Annals of Mathematics whose statement involves only finitary mathematical objects (i.e., what logicians call an arithmetical statement) can be proved in EFA. EFA is the weak fragment of Peano Arithmetic based on the usual quantifier-free axioms for 0, 1, +, $\times$, exp, together with the scheme of induction for all formulas in the language all of whose quantifiers are bounded.
Does anyone know yet if Fermat’s Last Theorem can be proved in EFA? I seem to remember early discussions where people were wondering if Wiles’ proof could be formalized in Peano arithmetic.
But let’s climb further down the ladder. How low can we go? I guess $\omega$ is too low to be the proof-theoretic ordinal of any theory “that can interpret a sufficient portion of arithmetic to make statements about ordinal notations.” Is that right? How about $\omega + 1$, $2 \omega$, and so on?
There are some theories of arithmetic whose proof-theoretic ordinal is just $\omega^2$. One of them is called $I\Delta_0$. This is Peano arithmetic with induction restricted to predicates where all the for-alls and there-exists quantify over variables whose range is explicitly bounded, like this:
$\forall i \le n \; \forall j \le n \; \forall k \le n \; (i^3 + j^3 \ne k^3)$
Every predicate of this sort can be checked in an explicitly bounded amount of time, so these are the most innocuous ones.
Such predicates lie at the very bottom of the arithmetical hierarchy, which is a way of classifying predicates by the complexity of their quantifiers. We can also limit induction to predicates at higher levels of the arithmetic hierarchy, and get flavors of arithmetic with higher proof-theoretic ordinals.
But you can always make infinities bigger — to me, that gets a bit dull after a while. I’m more interested in life near the bottom. After all, that’s where I live: I can barely multiply 5-digit numbers without making a mistake.
There are even systems of arithmetic too weak to make statements about ordinal notations. I guess Q is one of these. As far as I can tell, it doesn’t even make sense to assign proof-theoretic ordinals to these wimpy systems. Is there some other well-known way to rank them?
Much weaker than Q, for example, is Presburger arithmetic:
This is roughly Peano arithmetic without multiplication! It’s so simple you can read all the axioms without falling asleep:
$\neg (0 = x + 1)$
$x + 1 = y + 1 \implies x = y$
$x + 0 = x$
$(x + y) + 1 = x + (y + 1)$
and an axiom schema for induction saying:
$(P(0) \; \& \; (P(x) \implies P(x + 1))) \; \implies \; P(y)$
for all predicates $P$ that you can write in the language of Presburger arithmetic.
Presburger arithmetic is so simple, Gödel’s first incompleteness theorem doesn’t apply to it. It’s consistent. It’s complete: for every statement in Presburger arithmetic, either it or its negation is provable. But it’s also decidable: there’s an algorithm that decides which of these two alternatives holds!
However, Fischer and Rabin showed that no algorithm can do this for all statements of length $n$ in fewer than $2^{2^{c n}}$ steps. So, Presburger arithmetic is still fairly complicated from a practical perspective. (In 1978, Derek Oppen showed that an algorithm with triply exponential runtime can do the job.)
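Underlying these decidability and completeness results is quantifier elimination: every Presburger formula is equivalent to a quantifier-free one, provided the language is enriched with divisibility predicates $k \mid x$ (which are not quantifier-free definable from $+$ alone). A minimal example of the trade:

```latex
\exists x \, (y = x + x) \quad \longleftrightarrow \quad 2 \mid y
```

Eliminating the quantifier on the left is impossible in the bare language of $0$, $1$, $+$; the divisibility predicate on the right is exactly the price paid.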
Presburger arithmetic can’t prove itself consistent: it’s not smart enough to even say that it’s consistent! However, there are weak systems of arithmetic that can prove themselves consistent. I’d like to learn more about those. How interesting can they get before the hand of Gödel comes down and smashes them out of existence?
Posted at October 11, 2011 2:37 AM UTC
TrackBack URL for this Entry: http://golem.ph.utexas.edu/cgi-bin/MT-3.0/dxy-tb.fcgi/2446
### Re: Weak Systems of Arithmetic
Does anyone know yet if Fermat’s Last Theorem can be proved in EFA? I seem to remember early discussions where people were wondering if Wiles’ proof could be formalized in Peano arithmetic.
I’ve heard Angus MacIntyre talking about this. He is working on a paper arguing that Wiles’ proof translates into PA. I say ‘arguing’ rather than ‘proving’ because all he plans to do is show that the central objects and steps can be formalised in PA, rather than translate the entirety of Wiles’ proof, which would be a ridiculously Herculean task. I don’t know if his paper is available yet, but there’s some discussion of it here.
Posted by: Richard Elwes on October 11, 2011 8:52 AM | Permalink | Reply to this
### Re: Weak Systems of Arithmetic
Richard wrote:
I’ve heard Angus MacIntyre talking about this. He is working on a paper arguing that Wiles’ proof translates into PA.
Hmm, that’s interesting! Sounds like a lot of work—but work that’s interesting if you really know and like number theory and logic. Of course one would really want to do this for Modularity Theorem, not just that piddling spinoff called Fermat’s Last Theorem.
I say ‘arguing’ rather than ‘proving’ because all he plans to do is show that the central objects and steps can be formalised in PA, rather than translate the entirety of Wiles’ proof, which would be a ridiculously Herculean task.
Right. But by the way, I think most logicians would be perfectly happy to say ‘proving’ here.
Posted by: John Baez on October 11, 2011 9:20 AM | Permalink | Reply to this
### Re: Weak Systems of Arithmetic
I think most logicians would be perfectly happy to say ‘proving’ here.
Well, when I heard Angus talk he was keen to emphasise that it would not be a complete proof, but would only focus on the major bits of machinery needed. So it seems polite to echo the official line!
Of course one would really want to do this for Modularity Theorem, not just that piddling spinoff called Fermat’s Last Theorem.
Yes - my notes from the talk are elsewhere, but I think his main focus is indeed on the central modularity result (I don’t know whether he addresses the full theorem, or just the case needed for FLT).
In any case, he claims that it is effectively $Π^0_1$, and provable in PA.
Posted by: Richard Elwes on October 11, 2011 10:51 AM | Permalink | Reply to this
### Re: Weak Systems of Arithmetic
Regarding FLT, nLab has a short section on this. So any findings to be added there. It mentions Colin McLarty’s research.
I have also heard Angus MacIntyre on a sketch of a proof that PA suffices. He seems to have given a number of talks on this, e.g., here and here, the later mentioning a discussion on FOM.
There’s a paper by Jeremy Avigad – Number theory and elementary arithmetic – which should interest you.
Posted by: David Corfield on October 11, 2011 9:48 AM | Permalink | Reply to this
### Re: Weak Systems of Arithmetic
McLarty has recently shown (I believe) that finite-order arithmetic is sufficient to define pretty much all of Grothendieck-style algebraic geometry necessary for arithmetic questions. $n^{th}$-order arithmetic admits quantification over $P^n(\mathbb{N})$, the $n$-times iterated power set for some given $n$. The $n$ needed depends on the problem in question, and the hope is that $n \lt 2$ (PA or weaker) is sufficient for FLT, or even the modularity theorem (since there is a proof of the Modularity Theorem which is simpler than Wiles’ original proof of the semistable case).
The trick is defining derived functor cohomology for sheaf coefficients. All the algebra content is apparently very basic from a logic point of view.
Posted by: David Roberts on October 12, 2011 9:43 AM | Permalink | Reply to this
### Re: Weak Systems of Arithmetic
So, I just spent a bit playing around with $I \Delta_0$ to get a sense for it. I wanted to build a Gödel coding, and I found I needed the following lemma (quantifiers range over positive integers):
$\forall_n \exists_N \forall_{k \leq n} \exists_{d} \quad k d=N$.
Easy enough in PA; it’s a simple induction on $n$. But in $I \Delta_0$ I can’t make that induction because there is no bound on $N$. (There’s also no bound on $d$, but I can fix that by changing the statement to $\exists_{d \leq N}$; this is also true and trivially implies the above.) I can’t fix it by adding in $N \leq n^{100}$ because that’s not true; the least such $N$ is of size $\approx e^n$. I can’t write $N \leq 4^n$ because I don’t have a symbol for exponentiation.
Anyone want to give me a tip as to how to prove this in $I \Delta_0$?
Posted by: David Speyer on October 11, 2011 8:59 PM | Permalink | Reply to this
### Re: Weak Systems of Arithmetic
That’s a great puzzle, David! I’m not very good at these things, so I hope someone materializes who can help you out. In the meantime here are some references that might (or might not) provide useful clues. At least I found they’re somewhat interesting.
First:
I’m guessing you could do what you want in $I\Delta_0+ \exp$, but you’re struggling to do it in $I \Delta_0$.
A few intriguing quotes:
Of the commonly studied bounded arithmetic theories, $I\Delta_0+ \exp$, the theory with induction for bounded formulas in the language of $0, S, +, \times$ together with the axiom saying the exponential function is total, is one of the more interesting…
Wilkie–Paris have shown several interesting connections between $I\Delta_0+ \exp$ and weaker theories. They have shown $I\Delta_0 + \exp$ cannot prove $Con(\mathbf{Q})$
Despite the fact that $I\Delta_0 + \exp$ is not interpretable in $I \Delta_0$, it is known if $I\Delta_0 + \exp$ proves $\forall x \; A(x)$ where $A$ is a bounded formula then $I\Delta_0$ proves
$\forall x \; ((\exists y \; y = 2^x_k) \implies A(x))$
Here $2^x_k$ is a stack of 2’s $k$ high with an $x$ at top.
Here $k$ depends on $x$ in some way. I guess he’s saying that while $I \Delta_0$ can be used to describe a relation deserving of the name $y = 2^x_k$, it can’t prove that exponentiation is total, so it can’t prove there exists a $y$ such that $y = 2^x_k$. So, we need to supplement its wisdom for it to prove something similar to $\forall x \; A(x)$. Or in his words:
Intuitively, this results says: given $x$, if $I \Delta_0$ knows a big enough $y$ exists then it can show $A(x)$ holds.
Of course you don’t want to resort to a trick like this!
Posted by: John Baez on October 12, 2011 5:05 AM | Permalink | Reply to this
### Re: Weak Systems of Arithmetic
Over on Mathoverflow, Joel David Hamkins says that “any theory $I \Sigma_n$, even $I \Sigma_0$,” is able to “perform basic Gödel coding and simulate Turing machine computations”.
As far as I can tell, $I \Sigma_0$ is the same as $I \Delta_0$, since in the arithmetical hierarchy $\Sigma_0$ is just another name for $\Delta_0$.
(By the way: in the reference I just gave, you’ll see superscript 0’s as well as subscripts, but also an admission that it’s common to leave these superscript 0’s out.)
So, it sounds like Hamkins knows how to do Gödel coding in $I \Delta_0$, or at least knows someone or something who does.
Posted by: John Baez on October 12, 2011 5:19 AM | Permalink | Reply to this
### Re: Weak Systems of Arithmetic
I think you can bet your sweet bippy that he himself knows how to do it. :-) Hey, maybe someone should ask on Math Overflow!
Posted by: Todd Trimble on October 12, 2011 6:23 AM | Permalink | Reply to this
### Re: Weak Systems of Arithmetic
It looks like the conversation has moved on. In case anyone is still puzzled, let me spell out what Kaveh is saying:
My statement cannot be proved in $I\Delta_0$ because $I \Delta_0$ is polynomially bounded.
I think I understand what this means now. Suppose that $I\Delta_0$ proves
$\forall_n \exists_m : \ldots$
where the ellipsis is any grammatical statement about $n$ and $m$. Then there is some polynomial $p(n)$ such that, for every $n$, there exists an $m$ with $m \leq p(n)$ satisfying the statement.
This is not true for my statement! The smallest valid $N$ is $LCM(1,2,\ldots,n)$, which is $\approx e^n$. (The more obvious choice of $N$ is $n!$, which is even bigger.) So this is a great example of a sentence which is true (as a statement about ordinary integers) and grammatical in $I\Delta_0$, but not provable in $I \Delta_0$, on account of the fact that it involves a fast growing function.
This example really helps me understand the more complicated examples of statements which are known to be undecidable in PA because of fast growing functions, like the Paris-Harrington theorem. I always run into a psychological roadblock with examples like Paris-Harrington, because the encoding of those statements into formal language is so complex. This example is straightforwardly a number theoretic statement, so I think I’ll use it as my standard example of a statement which is undecidable for growth rate reasons in the future.
I’ll point out that there is plenty of stuff which is provable in $I \Delta_0$. I got through showing “if $x$ divides $y$ then $x \leq y$”, “every positive integer is either of the form $2k$ or $2k+1$”, “if $a$ divides $c$ and $b$ divides $c$, then $LCM(a,b)$ divides $c$”, and several other standard examples of induction in elementary number theory before trying this one.
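The growth claim above ($LCM(1,\ldots,n) \approx e^n$, and $n!$ even bigger) is easy to check numerically; a quick sketch, where `lcm_upto` is just a helper name of mine:

```python
import math
from functools import reduce

def lcm_upto(n):
    """Least common multiple of 1, 2, ..., n."""
    return reduce(math.lcm, range(1, n + 1), 1)

print(lcm_upto(10))                  # 2520, versus 10! = 3628800
print(math.log(lcm_upto(50)) / 50)   # close to 1, reflecting lcm(1..n) ~ e^n
```

The second line prints $\log(LCM(1,\ldots,n))/n$, which is the Chebyshev function $\psi(n)/n$ and tends to 1 by the prime number theorem; no polynomial bound $p(n)$ can dominate it.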
Posted by: David Speyer on October 16, 2011 9:01 PM | Permalink | Reply to this
### Re: Weak Systems of Arithmetic
I have two naive questions:
On the wikipedia page “ordinal analysis”, RFA (rudimentary function arithmetic) is mentioned as having proof-theoretic ordinal omega^2, but nothing is said about it. Has anyone here heard of it? Is it EFA minus exponentiation?
Even if some systems may be too weak to be assigned proof-theoretic ordinals, is it possible to make sense of “if that system had a proof-theoretic ordinal in any reasonable sense, then this ordinal would be …”? In view of the wikipedia page on the Grzegorczyk hierarchy (which gives systems of strength omega^n), it is tempting to say that Presburger arithmetic “should” have strength omega.
Posted by: ben on October 12, 2011 5:26 AM | Permalink | Reply to this
### Re: Weak Systems of Arithmetic
While naive, these questions are not sufficiently naive for me to answer them. So, I hope someone else can! They’re interesting.
Posted by: John Baez on October 12, 2011 5:37 AM | Permalink | Reply to this
### Re: Weak Systems of Arithmetic
What’s an interesting result about PRA?
It is suggested that PRA is an upper bound for what Hilbert considered to be finitistic reasoning.
Is there some other well-known way to rank them?
There are (e.g. by the class of their provably total functions), but I guess you want something similar to ordinal analysis. In that case check Arnold Beckmann’s project on dynamic ordinal analysis.
How interesting can they get before the hand of Gödel comes down and smashes them out of existence?
For most purposes the bounded arithmetic theory $V^0$ (which is quite similar to $I\Delta_0$) is the natural starting point. The provably total functions of $V^0$ are exactly the $AC^0$ functions (the smallest complexity class complexity theorists usually consider). For comparison, the provably total functions of $I\Delta_0$ are those of the Linear Time Hierarchy (LTH). $V^0$ is capable of talking about sequences using a better encoding than Gödel’s beta function (write the numbers in the sequence in binary, add a 2 between each consecutive pair, read in base 4). It can also check whether a given number encodes the computation of a given Turing machine on a given input.
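The sequence encoding just described ("binary digits, separator 2, read in base 4") can be sketched directly; the function names here are mine, and the sketch assumes a sequence of positive integers so that no entry has a leading-zero ambiguity:

```python
def encode(seq):
    """Write each entry in binary, insert digit 2 between entries, read base 4."""
    digits = []
    for i, x in enumerate(seq):
        if i:
            digits.append(2)                       # separator digit
        digits.extend(int(b) for b in bin(x)[2:])  # binary digits of x
    n = 0
    for d in digits:
        n = 4 * n + d
    return n

def decode(n):
    """Invert encode: split the base-4 digits at each 2, parse pieces as binary."""
    digits = []
    while n:
        digits.append(n % 4)
        n //= 4
    digits.reverse()
    seq, cur = [], 0
    for d in digits:
        if d == 2:
            seq.append(cur)
            cur = 0
        else:
            cur = 2 * cur + d
    seq.append(cur)
    return seq

assert decode(encode([5, 3, 12])) == [5, 3, 12]
```

Since a base-4 digit never exceeds 3, the encoded number of a sequence of $k$-bit entries stays polynomial in the sequence's total bit length, which is the point of preferring this to the beta function in a theory as weak as $V^0$.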
But a more natural theory to work with might be $VTC^0$ whose provably total functions are complexity class $TC^0$ which can also parse syntax. See Cook and Nguyen (draft) for more information.
I think self-verifying theories that can prove their own consistency (in the usual formalization) are artificial. For more information about them see:
Dan Willard, “Self Verifying Axiom Systems, the Incompleteness Theorem and the Tangibility Reflection Principle”, Journal of Symbolic Logic 66 (2001) pp. 536-596.
Dan Willard, “An Exploration of the Partial Respects in which an Axiom System Recognizing Solely Addition as a Total Function Can Verify Its Own Consistency”, Journal of Symbolic Logic 70 (2005) pp. 1171-1209.
I just spent a bit playing around with $I\Delta_0$ to get a sense for it. I wanted to build a Gödel coding …
You cannot prove that in $I\Delta_0$ because that would give an exponentially growing function, while $I\Delta_0$ is a polynomially bounded theory.
On the wikipedia page “ordinal analysis”, RFA (rudimentary function arithmetic) is mentioned as having proof-theoretic ordinal $\omega^2$, but nothing is said about it.
Rudimentary sets are defined in Smullyan 1961. They are essentially $\Delta_0 = LTH$. I am not sure about the theory RFA but I would guess it is essentially $I\Delta_0$. EFA is $I\Delta_0 + EXP$ (where $EXP$ expresses that the exponential function is total).
Even if some systems may be too weak to be assigned proof-theoretic ordinals …
See above (the part about Beckmann’s research).
Posted by: Kaveh on October 13, 2011 6:42 PM | Permalink | Reply to this
### Re: Weak Systems of Arithmetic
Thanks for posting this, John.
(Small quibble though - you have “$Q$ only has inductive definitions of addition, multiplication and exponentiation”, but $Q$ lacks the primitive recursive defn for exponentiation. Those axioms would be:
$(E_1)$: $\forall x(x^0 = S(0))$
$(E_2)$: $\forall x \forall y(x^{S(y)} = x \times x^y)$
But in standard logic, simply assuming a function symbol more or less presupposes the totality of the corresponding function, i.e., exponentiation in this case. I.e., if we have a primitive binary symbol (i.e., $x^y$), then $\forall x \forall y \exists z (z = x^y)$ is a theorem of logic! For if $f$ is, say, a 1-place function symbol, then $\vdash \forall x(f(x) = f(x))$. And this gives us $\vdash \forall x \exists y(y = f(x))$. Quick indirect proof: suppose $\exists x \forall y(y \neq f(x))$. So, $\forall y(y \neq f(c))$, by introducing a new constant $c$; which gives the contradiction $f(c) \neq f(c)$.)
When Joel David Hamkins says that any weak system in the hierarchy $I\Sigma_n$ simulates computation, I think (I am guessing) he just means that any recursive function is representable in $Q$ and its extensions. E.g., if $f : \mathbb{N}^p \rightarrow \mathbb{N}$ is a partial recursive function, then there is an $L_A$-formula $\phi(y,x_1, \dots, x_p)$ such that, for all $k, n_1, \dots, n_p \in \mathbb{N}$,
if $k = f(n_1, \dots, n_p)$, then $Q \vdash \forall y(y = \underline{k} \leftrightarrow \phi(y, \underline{n_1}, \dots, \underline{n_p}))$
In particular, exponentiation, $f(a,b) = a^{b}$, is recursive. So, there is an $L_A$-formula $\mathbf{Exp}(y, x_1, x_2)$ such that, for all $n, m, k \in \mathbb{N}$,
if $k = n^m$, then $Q \vdash \forall y(y = \underline{k} \leftrightarrow \mathbf{Exp}(y, \underline{n}, \underline{m}))$
So, $\mathbf{Exp}(y, x_1, x_2)$ represents exponentiation. However, $Q$ cannot prove it total. I.e., for any such representing formula $\mathbf{Exp}$,
$Q \nvdash \forall x_1 \forall x_2 \exists y \mathbf{Exp}(y, x_1, x_2)$
It’s a long time since I worked through some of the details of bounded arithmetic, and my copy of Hájek and Pudlák is in Munich. So I can’t see immediately how to give a model for this. Still, $Q$ is very, very weak, and here is a simple non-standard model $\mathcal{A}$. (From Boolos and Jeffrey’s textbook.) Let $A = dom(\mathcal{A}) = \omega \cup \{a, b\}$, where $a$ and $b$ are new objects not in $\omega$. These will behave like “infinite numbers”. We need to define functions $S^{\mathcal{A}}, +^{\mathcal{A}}$ and $\times^{\mathcal{A}}$ on $A$ interpreting the $L_A$-symbols $S$, $+$ and $\times$. Let $S^{\mathcal{A}}$ have its standard values on $n \in \omega$ (i.e., $S^{\mathcal{A}}(2) = 3$, etc.), but let $S^{\mathcal{A}}(a) = a$ and $S^{\mathcal{A}}(b) = b$. Similarly, $+$ and $\times$ are interpreted standardly on $\omega$, but one can define odd addition and multiplication tables for the values of $a +^{\mathcal{A}} a$, $a +^{\mathcal{A}} b$, $a \times^{\mathcal{A}} b$, etc. Then one proves $\mathcal{A} \models Q$, even though $\mathcal{A} \ncong \mathbb{N}$. This model $\mathcal{A}$ is such that,
(i) $\mathcal{A} \nvDash \forall x \forall y(x + y = y + x)$
(ii) $\mathcal{A} \nvDash \forall x \forall y(x \times y = y \times x)$
So, this tells us that $Q$ doesn’t prove that $+$ and $\times$ are commutative.
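A model along these lines can be checked mechanically. The following Python sketch is my own reconstruction (I don’t claim these are exactly Boolos and Jeffrey’s tables, only one choice consistent with $Q$’s recursion axioms): it verifies the recursion axioms for $+$ and $\times$ on a finite sample of the domain and exhibits the failure of commutativity at the infinite elements.

```python
# A sketch of a non-standard model of Q: standard numbers plus two
# "infinite" elements a and b, with S(a) = a and S(b) = b.
FIN = range(8)                      # finite sample of the standard part
a, b = "a", "b"
dom = list(FIN) + [a, b]

def S(x):
    return x + 1 if isinstance(x, int) else x      # S(a) = a, S(b) = b

def add(x, y):
    if isinstance(x, int) and isinstance(y, int):
        return x + y
    if y == a:
        return b                    # x + a = b for every x (one consistent choice)
    if y == b:
        return a                    # x + b = a for every x
    return x                        # infinite + finite = the same infinite element

def mul(x, y):
    if isinstance(x, int) and isinstance(y, int):
        return x * y
    if isinstance(y, int):          # infinite times finite
        return 0 if y == 0 else (b if x == a else a)
    if y == a:
        return b if x == a else a   # a*a = b; b*a = a; n*a = a
    return a if x == b else b       # y == b: b*b = a; a*b = b; n*b = b

# Q's recursion axioms hold on the sample...
assert all(add(x, 0) == x for x in dom)
assert all(add(x, S(y)) == S(add(x, y)) for x in dom for y in dom)
assert all(mul(x, 0) == 0 for x in dom)
assert all(mul(x, S(y)) == add(mul(x, y), x) for x in dom for y in dom)
# ...yet commutativity fails at the infinite elements:
assert add(a, b) != add(b, a) and mul(a, b) != mul(b, a)
```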
I don’t think this simple model $\mathcal{A}$ with two infinite elements, $a$ and $b$, is enough to show that $Q$ doesn’t prove that exponentiation is total.
The idea would be to find a model $\mathcal{B} \vDash Q$ such that $\mathcal{B} \nvDash \forall x_1 \forall x_2 \exists y \phi(y, x_1, x_2)$, for any $L_A$-formula $\phi(y, x_1, x_2)$ that represents $f(a,b) = a^b$ in $Q$. I don’t know off-hand what such a model $\mathcal{B}$ looks like though.
Jeff
Posted by: Jeffrey Ketland on October 13, 2011 8:20 PM | Permalink | Reply to this
### Re: Weak Systems of Arithmetic
Jeffrey writes:
Small quibble though - you have “Q only has inductive definitions of addition, multiplication and exponentiation”, but Q lacks the primitive recursive defn for exponentiation.
Whoops. I don’t know how I made that mistake. I’ve changed that in my blog post, and corrected a typo of yours in return. Thanks!
Q is indeed incredibly weak!
Posted by: John Baez on October 14, 2011 5:48 AM | Permalink | Reply to this
### Re: Weak Systems of Arithmetic
Thanks, John. Looking at Hamkins’s reply properly, my guess about it above was wrong; he is talking about Tennenbaum’s Theorem (which does apply to $I \Delta_0$; Richard Kaye (University of Birmingham, UK) has a nice paper on his webpages explaining these results).
Jeff
Posted by: Jeffrey Ketland on October 14, 2011 2:09 PM | Permalink | Reply to this
### Re: Weak Systems of Arithmetic
On one of Kaveh’s points, another property of PRA is that if $\phi$ is a $\Pi_1$-sentence, then:
$I \Sigma_1 \vdash \phi$ iff PRA $\vdash \phi$. (Parsons 1970)
Yes, Tait has argued that PRA represents the upper limit on what a finitist should “accept”. However, I think that Kreisel had argued earlier that it should be PA.
Jeff
Posted by: Jeffrey Ketland on October 13, 2011 8:51 PM | Permalink | Reply to this
### Re: Weak Systems of Arithmetic
Here’s another system (call it FPA) which proves its own consistency. Work in second-order logic, with predicative comprehension. Let 0 be a constant; let N be a (1-ary) predicate, meant to represent being a natural number; and let S be a (2-ary) relation, meant to represent the successor relation. Do *not* assume the totality of S. Instead assume
(1) S is functional, i.e. Nx and Sx,y and Sx,z implies y = z
(2) S is one-to-one, i.e. Nx and Ny and Sx,z and Sy,z implies x = y
(3) (for all n) not Sn,0
(4) Induction (full induction, as a schema)
Because the totality of S is not assumed, FPA has the singleton model {0}. It also has all the initial segments as models, as well as the standard model (well, whichever of those models actually exist). In a nutshell, FPA is “downward”; if you assume that a number n exists, then all numbers less than n exist and behave as you expect. Most of arithmetic is, in fact, “downward,” so FPA can prove most familiar propositions, or at least versions of them. It can prove (unlike Q) the commutative laws of addition and multiplication. It can prove Quadratic Reciprocity. It cannot prove that there are an infinite number of primes (it cannot even prove the existence of 2, after all), but it can prove that for any n > 2 there always exists a prime between n/2 and n. It’s not far-fetched to think that FPA can prove Fermat’s Last Theorem. So, mathematically anyway, it’s pretty strong. (Still it’s neither stronger nor weaker than Q. It’s incomparable, because it assumes induction, which Q does not, but does not assume the totality of successoring, which Q does.)
In particular FPA can talk about syntax because syntactical elements can be defined in a downward way. Something is a term if it can be decomposed in a particular way. Something is a wff if it can be decomposed in a particular way. Etc.
Now, to prove its own consistency, it suffices for FPA to show that the assumption of a proof in FPA of “not 0 = 0” leads to a contradiction. But a proof is a number (in Gödel’s manner of representing syntactical elements) and, in fact, a very large number. This large number then gives enough room, in FPA, to formalize truth-in-the-singleton-model and to prove that any sentence in the inconsistency proof must be true. But “not 0 = 0” isn’t true. Contradiction! Therefore FPA has proven its own consistency.
Here’s a link to a book-long treatise, if it interests anyone:
It’s possible to formalize everything in a first-order system, if the second-order is bothersome for some.
Posted by: t on October 14, 2011 8:05 PM | Permalink | Reply to this
### Re: Weak Systems of Arithmetic
Wow, that’s quite interesting! Thanks!
Since this post grew out of our earlier discussions of ultrafinitism, I couldn’t help noting that this axiom system should be an ultrafinitist’s dream, since you can take any model of Peano Arithmetic, throw out all numbers $\gt n$, and be left with a model of this one!
Indeed I see Andrew Boucher writes:
Most sub-systems of Peano Arithmetic have focused on weakening induction. Indeed perhaps the most famous sub-system, Robinson’s Q, lacks an induction axiom altogether. It is very weak in many respects, unable for instance to prove the Commutative Law of Addition (in any version). Indeed, it is sometimes taken to be the weakest viable system; if a proposition can be proved in Q, then that is supposed to pretty much establish that all but Berkeleyan skeptics or fools are compelled to accept it.
But weakness of systems is not a linear order, and F is neither stronger nor weaker than Q. F has induction, indeed full induction, which Q does not. But F is ontologically much weaker than Q, since Q supposes the Successor Axiom. Q assumes the natural numbers, all of them, ad infinitum. So in terms of strength, F and Q are incomparable. In actual practice, F seems to generate more results of standard arithmetic; and so in that sense only, it is “stronger”.
One of the most important practitioners of Q has been Edward Nelson of Princeton, who has developed a considerable body of arithmetic in Q. While Nelson’s misgivings with classical mathematics seemed to have their source in doubts about the existence of the natural numbers, the brunt of his skepticism falls on induction, hence his adoption of Q. “The induction principle assumes that the natural number series is given.” [p. 1, Predicative Arithmetic] Yet it would seem that induction is neither here nor there when it comes to ontological supposition. Induction states conditions for when something holds of all the natural numbers, and says nothing about how many or what numbers there are. So a skeptic about the natural numbers should put, so to speak, his money where his doubts are, and reject the assumption which is generating all those numbers — namely the Successor Axiom — and leave induction, which those doubts impact at worst secondarily, alone.
He also mentions other systems capable of proving their own consistency:
A number of arithmetic systems, capable of proving their own consistency, have become known over the years. Jeroslow [Consistency Statements] had an example, which was a certain fixed point extension of $\mathbf{Q} \vee \forall x \forall y \, (x = y)$. More recently, Yvon Gauthier [Internal Logic and Internal Consistency] used indefinite descent and introduced a special, called “effinite”, quantifier. And Dan Willard [Self-Verifying Axiom Systems] has exhibited several cases, based on seven “grounding” functions. These systems lack a certain naturalness and seem to be constructed for the express purpose of proving their own consistency. Finally, Panu Raatikainen constructed what is effectively a first-order, weaker variant of F; this system can prove that it has a model [Truth in a Finite Universe], but its weakness does not allow the author to draw conclusions about intensional correctness and so it seems to fall short of the ability to prove its own self-consistency.
Posted by: John Baez on October 15, 2011 7:43 AM | Permalink | Reply to this
### Re: Weak Systems of Arithmetic
I remember Andrew Boucher describing his theory $F$ on sci.logic years back; the problem is that it doesn’t interpret syntax (e.g., Tarski’s $TC$). (The current state of play is that $TC$ is interpretable in $Q$.)
The language $L_F$ is a second-order language with a binary relation symbol $\mathbf{S}$ instead of the usual unary function symbol. Even with this small modification (so as to drop the automatic existence of successors), the syntax of $L_F$ is still that of a standard language, with, say, symbols $\mathbf{0}$, $\mathbf{S}$, $\mathbf{=}$ and $\neg$, $\rightarrow$, $\forall$ and variables $\mathbf{v}, \mathbf{v}^{\prime}, \mathbf{v}^{\prime \prime}$, etc. It is straightforward to prove, based merely on the description of $L_F$ and the usual assumptions about concatenation, that:
$|L_F| = \aleph_0$.
So the language $L_F$ itself is countably infinite. Denying the existence of numbers while asserting the existence of infinitely many syntactical entities is incoherent, for one of Gödel’s basic insights is that syntax = arithmetic.
Suppose we then begin to try and interpret the syntax of $L_F$ in $F$ itself. Ignore the second order part, as it introduces needless complexities. In the metatheory, suppose we assign gödel codes as follows:
$\#(\mathbf{0}) = 1$,
$\#(\mathbf{S}) = 2$,
$\#(\mathbf{=}) = 3$
$\#(\neg) = 4$
$\#(\rightarrow) = 5$,
$\#(\forall) = 6$
$\#(\mathbf{v}) = 7$
$\#(\prime) = 8$
Incidentally, this already goes beyond $F$ itself, as the metatheory already implicitly assumes the distinctness of these numbers. How would this be done, given that one cannot even prove the existence of 1?
In $L_A$, we encode any string (sequence of primitive symbols) as the sequence of its codes, and we encode a sequence $(n_1, \dots, n_k)$ of numbers as a sequence number, e.g., as
$\langle n_1, \dots, n_k \rangle = (p_1)^{n_1 + 1} \times \dots \times (p_k)^{n_k + 1}$.
For example, the string $\forall \forall \mathbf{S}$ is really the sequence $(\forall, \forall, \mathbf{S})$, and is coded as the sequence $(6, 6, 2)$, which becomes the sequence number $2^7 \times 3^7 \times 5^3$.
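The prime-power coding is easy to mechanize. A minimal Python sketch (the function names are mine, and the codes are the ones assigned above):

```python
def primes(k):
    """First k primes, by trial division."""
    ps, n = [], 2
    while len(ps) < k:
        if all(n % p for p in ps):
            ps.append(n)
        n += 1
    return ps

def seq_number(ns):
    """Goedel sequence number <n1,...,nk> = p1^(n1+1) * ... * pk^(nk+1)."""
    out = 1
    for p, n in zip(primes(len(ns)), ns):
        out *= p ** (n + 1)
    return out

# The string "forall forall S" is the sequence (6, 6, 2) under the coding above:
assert seq_number([6, 6, 2]) == 2**7 * 3**7 * 5**3
```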
But what, in $F$, is the corresponding numeral for any expression of the language $L_F$? In the usual language $L_A$ of arithmetic, an expression $\epsilon$ with code $n$ is assigned the numeral $\underline{n}$, written $[\epsilon]$, which is $\mathbf{S} \dots \mathbf{S} \mathbf{0}$. That is, $\mathbf{0}$ prefixed by $n$ occurrences of $\mathbf{S}$, where $\mathbf{S}$ is a function symbol. (Can’t get “ulcorner” to work!)
How would this work in $F$? Since $\mathbf{S}$ is a relation symbol rather than a function symbol, there are no such numerals. Consequently, $F$ does not numeralwise represent non-identity of syntactical entities.
For example, in syntax we have
$A$: “The quantifier $\forall$ is distinct from the conditional $\rightarrow$”
Under the coding above, this becomes
$A^{\circ}$: $\underline{6} \neq \underline{5}$.
which is trivially provable in $Q$.
Now it’s very unclear to me how one even expresses $\underline{6} \neq \underline{5}$ in $L_F$. But however it is done, we get that
$F \nvdash \underline{6} \neq \underline{5}$.
A requirement on a theory $T$ that interprets syntax is that, for expressions $\epsilon_1, \epsilon_2$, we have unique singular terms $[\epsilon_1 ]$, $[\epsilon_2]$ such that,
if $\epsilon_1 \neq \epsilon_2$ then $T \vdash [ \epsilon_1 ] \neq [ \epsilon_2 ]$
But $F$ doesn’t give this. Instead, we have
$F \nvdash [\forall ] \neq [\rightarrow ]$.
So, alleged “agnosticism” about numbers has become “agnosticism” about syntax. Which contradicts the non-agnosticism of the description of the syntactical structure of $L_F$ itself.
There is no faithful interpretation of the syntax of $L_F$ into $F$. So, syntactical claims about the properties of $F$ cannot be translated into $F$. The meta-theory of the syntax of $F$ already assumes an infinity of distinct syntactical entities.
In particular, claims about consistency cannot be translated into $F$.
Jeff
Posted by: Jeffrey Ketland on October 15, 2011 2:59 PM | Permalink | Reply to this
### Re: Weak Systems of Arithmetic
Thanks for your comment. Unfortunately, I don’t think it’s quite right and F does indeed interpret syntax adequately, so that it does express notions of consistency.
First, just as F is agnostic about the infinity of the natural numbers, it is agnostic about the infinity of the syntax. The infinity of syntax comes from assuming that there are an infinite number of variables; F doesn’t make this assumption. I guess a stickler might say this is no longer second- (or first-) order logic because these assume that there are an infinite number of variable symbols. But I would hope most would agree this is not an essential feature of the logic.
JK: “Incidentally, this already goes beyond F itself, as the metatheory already implicitly assumes the distinctness of these numbers. How would this be done, given that one cannot even prove the existence of 1?”
While one cannot prove that 1 exists, it is possible to prove that anything which *is* one is unique (and so distinct). That is, it is possible to define a predicate one(x) as (Nx and S0,x). It is not possible to prove that there exists x s.t. one(x), but it *is* possible to prove that (x)(y)(one(x) and one(y) implies x=y). So proof of existence, no; proof of distinctness, yes. One can define two(x) as (Nx and there exists y such that one(y) and Sy,x). And so forth as far as one wants or has energy to go.
Moreover, one can define the concepts of odd and even. One defines even(x) iff Nx and (there exists y)(y+y = x). Again, no assertion that one can prove that even(x) or odd(x) for any x. But one *can* prove that there is no x such that both even(x) and odd(x). Again, existence no, distinctness yes.
So one can represent the syntax. Define predicates one, two, three, … , ten. Define Big(x) as Nx and not one(x) and not two(x) and … and not ten(x). Then x represents a left parenthesis if x = 0. x represents a right parenthesis if one(x). x represents the implication sign if two(x). x represents the negation sign if three(x). x represents the equal sign if four(x). And so forth. x represents a small-letter variable if Big(x) and even(x). x represents a big-letter variable if Big(x) and odd(x).
One gives the usual recursive definitions to syntactical entities like AtomicWff(x) and Proof(x). Again, one cannot show there exist any x such that AtomicWff(x). But one can show that, *if* AtomicWff(x), then x has all the properties that it should have.
So, given that F cannot prove there exist any syntactical entities, how can it prove its own consistency? Because consistency means there is no proof of “not 0 = 0”. So a proof of consistency is not a proof that something exists, but a proof that something does not exist. It *assumes* the existence of a syntactical entity, in this case a proof of “not 0 = 0”, and shows that the assumption of the existence of this entity leads to a contradiction. Thus F is able to prove a system’s consistency. (What F cannot do is prove that a system is inconsistent, because then it would have to prove that there exists something, namely a proof of “not 0 = 0”, and that it cannot do.)
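The “existence no, distinctness yes” pattern is easy to play with in the finite initial-segment models that t says F admits. A Python sketch (my own illustration, reading S(x,y) as “y succeeds x” and taking the domain to be {0,…,top}):

```python
# Evaluate t's predicates in a finite initial-segment model {0,...,top},
# where the successor relation S(x,y) holds iff y = x + 1 and y <= top.

def model(top):
    return range(top + 1)

def one(x, top):
    # one(x) :<-> Nx and S(0, x), i.e. x = 1 (if 1 exists in the model)
    return x in model(top) and x == 1

def even(x, top):
    # even(x) :<-> exists y (y + y = x)
    return any(y + y == x for y in model(top))

def odd(x, top):
    # odd(x) :<-> exists y (even(y) and S(y, x))
    return x in model(top) and any(even(y, top) and x == y + 1
                                   for y in model(top))

# Existence fails in the singleton model {0}: nothing there is one...
assert not any(one(x, 0) for x in model(0))
# ...but wherever the predicates are satisfied, they behave as expected:
assert all(x == y for x in model(10) for y in model(10)
           if one(x, 10) and one(y, 10))          # uniqueness of one
assert not any(even(x, 10) and odd(x, 10) for x in model(10))
```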
Anyway, all this is described in gory detail in the link that I gave.
Posted by: t on October 15, 2011 5:54 PM | Permalink | Reply to this
### Re: Weak Systems of Arithmetic
“The infinity of syntax comes from assuming that there are an infinite number of variables; F doesn’t make this assumption.”
This is not correct. A propositional language $L$ with a single unary connective $\neg$ and a single atom $p$ has infinitely many formulas. So,
$Form(L) = \{p, \neg p, \neg \neg p, \dots\}$
and
$|Form(L)| = \aleph_0$.
The potential infinity here is a consequence of the implicit assumptions governing the concatenation operation $\ast$. Formulas are, strictly speaking, finite sequences of elements of the alphabet. It is assumed that sequences are closed under concatenation. If $\alpha, \beta$ are sequences, then $\alpha \ast \beta$ is a sequence.
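The point is just that the map $i \mapsto \neg^i p$ is injective, so a single atom and a single unary connective already yield infinitely many formulas. As a trivial sketch (writing `~` for $\neg$):

```python
def formulas(k):
    """First k formulas of the language with one atom p and one connective ~:
    p, ~p, ~~p, ..."""
    return ["~" * i + "p" for i in range(k)]

# All distinct: i |-> ~^i p is injective, so Form(L) is countably infinite.
assert len(set(formulas(100))) == 100
```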
“I guess a stickler might say this is no longer second- (or first-) order logic because these assume that there are an infinite number of variable symbols.”
As noted, it has nothing to do with variables. The strings of the propositional language $L$ above form an $\omega$-sequence. In general, if $\alpha$ and $\beta$ are strings from the language $L$, then $\alpha \ast \beta$ is a string. This is simply assumed.
“But I would hope most would agree this is not an essential feature of the logic.”
That any standard language $L$ for propositional logic (containing at least one atom and one connective) or first-order logic has cardinality $\aleph_0$ is usually a preliminary exercise in logic.
Jeff
Posted by: Jeffrey Ketland on October 15, 2011 10:27 PM | Permalink | Reply to this
### Re: Weak Systems of Arithmetic
Well, obviously I wouldn’t assume the totality of the concatenation operator.
“As noted, it has nothing to do with variables.” This is not correct. Your language is infinitary if the number of variables is infinitary.
“That any standard language L for propositional logic (containing at least one atom and one connective) or first-order logic has cardinality ℵ0 is usually a preliminary exercise in logic.”
Of course. But it’s not an essential feature of the logic, in the sense one could give an adequate description of the logic without this feature.
Posted by: t on October 16, 2011 10:50 PM | Permalink | Reply to this
### Re: Weak Systems of Arithmetic
Indeed, the infinitude of variable symbols is entirely a red herring. For those who want a finite alphabet, the standard (AIUI) solution is to have a symbol x and a symbol ' such that x is a variable and $v$' is a variable whenever $v$ is. (Thus the variable symbols are x, x', x'', etc.)
Posted by: Toby Bartels on October 27, 2011 3:02 AM | Permalink | Reply to this
### Re: Weak Systems of Arithmetic
t, “One gives the usual recursive definitions to syntactical entities like AtomicWff(x) and Proof(x).”
What, exactly, are these entities $AtomicWff(x)$ and $Proof(x)$? How many symbols do they contain? Are they distinct? How does one prove this? Have you ever tried to estimate how many symbols occur in the arithmetic translation of the sentence
“the formula $\forall x(x = x)$ is the concatenation of $\forall x$ with $(x = x)$”?
You’re assuming something that you then claim to “doubt”. You do not, in fact, “doubt” it: you assume it.
One never says, in discussing the syntax of a language, “if the symbol $\forall$ is distinct from the symbol $\mathbf{v}$ …”. Rather, one says, categorically, “the symbol $\forall$ is distinct from the symbol $\mathbf{v}$”. The claim under discussion amounts to the view that one ought to be “agnostic” about the distinctness of, for example, the strings $\forall x(x = 0)$ and $\forall y(y \neq 0)$.
One can write down a formal system of arithmetic which has a “top” - called “arithmetic with a top”. But it is not as extreme as $F$. Such theories have been studied in detail by those working in computational complexity and bounded arithmetic (see, e.g., the standard monograph by Hájek and Pudlák, which I don’t have with me). See, e.g., this:
http://www.math.cas.cz/~thapen/nthesis.ps
Agnosticism about numbers = agnosticism about syntax. You can’t have your “strict finitist” cake, while eating your syntactic cake, as they’re the same cake!
Jeff
Posted by: Jeffrey Ketland on October 15, 2011 9:52 PM | Permalink | Reply to this
### Re: Weak Systems of Arithmetic
“What, exactly, are these entities AtomicWff(x) and Proof(x)?”
They are syntactical entities. I could write them down for you explicitly here, but as you can probably tell, I’m not gifted writing down logical symbols in these comments. Or you can look at the top of page 110 and on page 111 of the link, where you will find them already written down explicitly.
“Are they distinct? How does one prove this?”
I’m not sure whether you are talking about meta-theory or theory. In the theory F, if you assume there exists something which represents AtomicWff(x) and another thing which represents Proof(x), then you would be able to prove these things distinct, because their ith symbols will be different for some i. But one doesn’t need to prove this, certainly not in the proof that the system is consistent. In the meta-theory the two syntactical entities are different, and you see this by writing them down.
“You’re assuming something that you then claim to “doubt”. “
No I’m not. Again you seem to be confusing meta-theory with theory, or assuming that there must be some tight connection between them. You can’t prove that 1 exists in F. You agree, right? So F makes no assumptions that I doubt. Sure I can write down a formula in F which has more than one symbol. So? That has no bearing on what F does or does not assume. In any case my doubts are not that 1 exists, or that 10 exists, but that *every* natural number has a successor. And the fact that I can write down a formula with 1 million symbols (well, if you pay me enough) cannot erase my doubts, nor has any bearing on these doubts.
“One never says, in discussing the syntax of a language, “if the symbol ∀ is distinct from the symbol v …”.
Your manner of expression is again not clear. “One never says…” Are you talking theory, meta-theory, what? F can prove: “if the symbols ∀ and v exist (or to be more precise, if the numbers used to represent them exist), then they are distinct.”
“Rather, one says, categorically, “the symbol ∀ is distinct from the symbol v”.” Well, F cannot prove that the numbers representing the symbols exist. But, in order to prove the consistency of itself, F doesn’t need to. Proving the consistency of a system, does not require F to show that anything exists. Rather, it has to show that something does *not* exist.
“The claim under discussion amounts to the view that one ought to be “agnostic” about the distinctness of, for example, the strings ∀x(x=0) and ∀y(y≠0).”
No, no, no. For some reason you are hung up on distinctness. F can prove distinctness. Again, it can prove that if these strings (or more precisely, the sequences representing them) exist, then they are distinct. So F is most certainly not agnostic about their distinctness. All that F cannot prove is: the strings exist.
“One can write down a formal system of arithmetic which has a “top” - called “arithmetic with a top”. But it is not as extreme as F. ” Again, you are making imprecise claims. F allows for the possibility of the standard model. Formal systems with a “top” do not. Everything that F proves will be true in PA. There are things that “top” formal systems prove that are false in PA. So what on earth does “extreme” mean?
Posted by: t on October 15, 2011 11:10 PM | Permalink | Reply to this
### Re: Weak Systems of Arithmetic
“So what on earth does ‘extreme’ mean?”
A theory of syntax that doesn’t prove that $\forall$ is distinct from $=$?
Posted by: Jeffrey Ketland on October 15, 2011 11:34 PM | Permalink | Reply to this
### Re: Weak Systems of Arithmetic
ROTFL. Ok, you win. I’ll grant you that F doesn’t “prove that symbols are distinct” in the sense of “prove that they exist.” And I’ll grant you that this means that its “theory of syntax” is “extreme.”
Still, in order to prove that a system is consistent, one can work with an “extreme” “theory of syntax” which doesn’t “prove that symbols are distinct” because, to prove a system is consistent, one needs to prove that something *doesn’t* exist, not to prove that something *does*. (In your terminology, would this be, “one needs to prove that something isn’t distinct, not to prove that something is”??) If you or anyone else thinks that F is inconsistent, then you must come up with a proof of “not 0=0”. And, by the mere fact of that proof supposedly existing, F can show that it is able to model truth-in-{0} for the statements in the proof and so that “not 0 = 0” cannot be a statement in the proof. Contradiction. Therefore you, or anyone else, cannot come up with a proof. And since all this reasoning can be done in F, F can prove its own consistency. It’s that simple.
Posted by: t on October 16, 2011 8:08 AM | Permalink | Reply to this
### Re: Weak Systems of Arithmetic
t, I see what you wish to do with this theory $F$. But you lack numerals, since $S$ is not a function symbol. So, instead, for example, one might express $0 \neq 1$ by a formula
$\forall x \forall y((\underline{0}(x) \wedge \underline{1}(y)) \rightarrow x \neq y)$,
where the formulas $\underline{n}(x)$ are given by a recursive definition
$\underline{0}(x) \Leftrightarrow x = 0$
$\underline{n+1}(x) \Leftrightarrow \exists y(\underline{n}(y) \wedge S(y,x))$
So, to assert the existence of the number $7$, for example, you have $\exists x(\underline{7}(x))$. And, presumably, for all $k \leq n$,
$F \vdash \exists x(\underline{n}(x)) \rightarrow \exists x(\underline{k}(x))$.
Then define $(NotEq)_{n,k}$ to be the formula
$\forall x \forall y((\underline{n}(x) \wedge \underline{k}(y)) \rightarrow x \neq y)$
Then I believe one has: for all $n,k \in N$,
If $n \neq k$, then $F \vdash (NotEq)_{n,k}$
As for syntactic coding, since $\forall$ is coded as 6 and $=$ as $3$, then $F$ can define, e.g.,:
$\underline{\forall}(x) \Leftrightarrow \underline{6}(x)$
$\underline{=}(x) \Leftrightarrow \underline{3}(x)$
Then (I think), $F$ does prove the distinctness of $\forall$ and $=$ in a conditional manner, namely,
$F \vdash \forall x \forall y((\underline{\forall}(x) \wedge \underline{=}(y)) \rightarrow x \neq y)$
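One can test this conditional-distinctness claim in the finite initial-segment models that $F$ admits. A Python sketch (my own, using the codes from the thread, 6 for $\forall$ and 3 for $=$, and taking the models to be $\{0,\dots,top\}$ with $S(x,y)$ iff $y = x+1 \le top$):

```python
def num_pred(n, x, top):
    """Truth value of the numeral predicate underline{n}(x) in {0,...,top}."""
    if n == 0:
        return x == 0
    # underline{n}(x) <-> exists y (underline{n-1}(y) and S(y, x))
    return x <= top and any(num_pred(n - 1, y, top) and x == y + 1
                            for y in range(top + 1))

# In the singleton model {0}, existence fails: no element is 1...
assert not any(num_pred(1, x, 0) for x in range(1))
# ...while conditional distinctness of the codes for "forall" (6) and "=" (3)
# holds in every model, vacuously in the small ones:
for top in (0, 4, 10):
    assert all(x != y
               for x in range(top + 1) for y in range(top + 1)
               if num_pred(6, x, top) and num_pred(3, y, top))
```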
But no, I don’t accept that $F$ “proves its own consistency”. Just to begin with, one doesn’t have a proof predicate which strongly represents the proof relation for $F$.
And to return to the central issue, you are assuming the existence of a language $L_F$ whose cardinality (the cardinality of its set of formulas) is $\aleph_0$. You’re assuming this already in the metatheory. You already have $\aleph_0$ syntactical entities. What is the point of being “agnostic” about, say, the number $1$ if you are already assuming, in your informal metatheory, the existence of $\aleph_0$-many syntactical entities? In other words, I am doubting your “agnosticism”. You’re simply trying to have your syntactic cake while eating (i.e., professing) the “strict finitism” cake. It doesn’t work, because they are the same cake.
To repeat: from the point of view of ontology, interpretability, etc., syntax = arithmetic. The same thing. They can be modelled in each other. To “doubt” arithmetic while accepting syntax is incoherent.
To make it work, you need to develop a separate “strictly finite” syntax, for example, à la Quine and Goodman 1947. It would have to drop the totality of concatenation on syntactical entities. It really is not worth bothering with, though, as it doesn’t work. At the very best, you simply end up reinventing, in a weird way, all the things that have been discussed countlessly many times in the very rich research literature about nominalism. See, for example,
Burgess, J and Rosen, G. 1997. A Subject with No Object. OUP.
Jeff
Posted by: Jeffrey Ketland on October 16, 2011 8:44 PM | Permalink | Reply to this
### Re: Weak Systems of Arithmetic
“(a lot of things snipped)”
You clearly haven’t read the linked paper, or even (I imagine) glanced over it, right? That doesn’t seem to faze you in the least, though, in making various definitive pronouncements.
“Just to begin with, one doesn’t have a proof predicate which strongly represents the proof relation for F.”
Well, you will have to give a reasoned argument why (the technical notion of) representability is essential to (the intuitive notion of) expressibility. Consider the simpler case of even(x), which can be defined in F as (there exists y)(y+y=x). Because of F’s ontological limitations, even(x) doesn’t represent evenness. Yet even(x) clearly expresses the notion of evenness. I think you can be most succinct in your point by noting that the Hilbert-Bernays conditions of provability do not hold for the provability predicate in F. But as I mention in the linked paper, the Hilbert-Bernays conditions do not adequately capture the (intuitive) notion of provability.
“And to return to the central issue, you are assuming the existence of a language L F whose cardinality (the cardinality of its set of formulas) is ℵ0.”
If that’s the central issue, then you are wrong, as I am not. Look, you obviously haven’t read or thought hard about what I’ve done or written, so perhaps you should stop saying that I am making assumptions which I do not make. Right? That’s only fair, right?
“It would have to drop the totality of concatenation on syntactical entities.”
Obviously. I see now you have replied in another place about this, so I will now switch there.
Posted by: t on October 16, 2011 10:37 PM | Permalink | Reply to this
### Re: Weak Systems of Arithmetic
Sorry, it’s the end of the week-end, so this is the end of the road for my comments in this thread. If you’re interested, have a look at the link I put in my first comment. Thanks and bye.
Posted by: t on October 16, 2011 10:53 PM | Permalink | Reply to this
### Re: Weak Systems of Arithmetic
I’m trying to understand this discussion. It seems to me that Jeffrey Ketland is saying, roughly, that because our usual theory of syntax can prove that the system $F$ has infinitely many formulas, while $F$ has finite models (as well as infinite ones), the system $F$ is ‘incoherent’ as a theory of arithmetic. For example, he says:
To repeat: from the point of view of ontology, interpretability, etc., syntax = arithmetic. The same thing. They can be modelled in each other. To “doubt” arithmetic while accepting syntax is incoherent.
So the language $L_F$ itself is countably infinite. Denying the existence of numbers while asserting the existence of infinitely many syntactical entities is incoherent, as one of Gödel’s basic insights is: syntax = arithmetic.
But this is puzzling in two ways. First of all, I don’t think $F$ “denies the existence of numbers”: any model of Peano arithmetic will be a model of $F$, so you can have all the natural numbers you might want. There’s a difference between denying something and not asserting something.
But more importantly, I don’t really care whether $F$ is “incoherent” from the point of view of “ontology” due to some claimed mismatch between the syntax of theory $F$ (which has infinitely many formulas, according to standard mathematics) and the models $F$ has (which include finite ones). “Incoherent” and “ontology” are philosophical notions, but I’m a mere mathematician. So I’m much more interested in actual theorems about $F$.
If these theorems are proved in a metatheory that can prove $F$ has infinitely many formulas, that’s fine! — just make sure to tell me what metatheory is being used. And if someone has proved some other theorems, in a metatheory that can’t prove $F$ has infinitely many formulas — in other words, a metatheory that more closely resembles $F$ itself — that’s fine too! All I really want to know is what’s been proved, in what framework.
But I guess it all gets a bit tricky around Gödel’s 2nd incompleteness theorem. What does it mean for $F$ to “prove its own consistency”? I guess it means something like this. (I haven’t thought about this very much, so bear with me.) Using some chosen metatheory, you can prove
$F \vdash Con(F)$
where $Con(F)$ is some statement in $F$ that according to the chosen metatheory states the consistency of $F$. The Hilbert-Bernays provability conditions are supposed to help us know what “states the consistency of $F$” means, but if you want to use some other conditions, that’s okay — as long as you tell me what they are. I can then make up my mind how happy I am.
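For reference, the Hilbert-Bernays provability conditions usually meant here, in their standard modern (Löb-style) formulation for a provability predicate $\mathbf{Prov}_T$ and coding brackets $[\cdot]$, are:

$(D1)$: if $T \vdash \phi$ then $T \vdash \mathbf{Prov}_T([\phi])$

$(D2)$: $T \vdash \mathbf{Prov}_T([\phi \rightarrow \psi]) \rightarrow (\mathbf{Prov}_T([\phi]) \rightarrow \mathbf{Prov}_T([\psi]))$

$(D3)$: $T \vdash \mathbf{Prov}_T([\phi]) \rightarrow \mathbf{Prov}_T([\mathbf{Prov}_T([\phi])])$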
Posted by: John Baez on October 18, 2011 11:06 AM | Permalink | Reply to this
### Re: Weak Systems of Arithmetic
t, yes, I’ve read your monograph. Actually, I read it several years ago, because I remember you discussing this quite a bit on sci.logic with Torkel and others.
Thanks for the comment, John. You’re right, I should have written “Being agnostic about numbers while asserting the existence of …” instead of “Denying the existence of numbers while asserting the existence of …”. But I’m not referring to $F$. I’m referring to the metatheory of $F$ and the views of the author! That is, $F$ is agnostic about, say, $1$, and is meant to represent the author’s agnosticism; however, the author’s metatheory is not agnostic about an $\omega$-sequence of syntactical entities, say,
$(0=0), \neg (0=0), \neg \neg(0=0), \dots$
understood as types (i.e., finite sequences), not physical tokens.
John, “- just make sure to tell me what metatheory is being used. And if someone has proved some other theorems, in a metatheory that can’t prove $F$ has infinitely many formulas; in other words, a metatheory that more closely resembles F itself - that’s fine too! All I really want to know is what’s been proved, in what framework.”
Right, agree 100%. I am arguing that the metatheory already contains arithmetic by interpretation. So the philosophical view, which is agnostic about numbers but accepts $\aleph_0$-many syntactical entities, doesn’t make sense, particularly if it is motivated by a philosophical scepticism about infinity. One cannot honestly pretend to be a strict finitist while adopting (as true) a finitary metatheory at the level of $PRA$, say. Ok, that’s my sermon about the metatheory over!
Unfortunately, formalizing the metatheory would be hard: one would need to use a formal theory of concatenation, like Grzegorczyk’s $TC$, along with some induction. It could be done. The resulting metatheory would be around the level of $I \Sigma_1$ (maybe $I \Delta_0$, maybe somewhere in between).
(On the theory $TC$, see, e.g.:
Visser, A. 2009. “Growing Commas. A Study of Sequentiality and Concatenation”, NDJFL 50.
There are various pieces of recent work on this too. $TC$ is very closely related to $Q$.)
The properties of $F$ are a separate matter from the philosophical views and the metatheoretic assumptions. I’m a bit sceptical that $F$ proves its own consistency. This is because I’m sceptical that the $L_F$-formula being mooted to express consistency actually does express the consistency of $F$! It might indeed prove this formula, but it might do so for some silly reason, rather than because it actually “thinks” that $F$ is consistent.
Normally, we have language $L$ which extends the usual first-order language $L_A$ of arithmetic (in particular, $S$ is a function symbol). Let $T$ be a recursively axiomatized theory in $L$. If $T$ is strong enough, there is a proof predicate $\mathbf{Proof}_T(y,x)$ which strongly represents (in $Q$, say) the recursive relation “$y$ is the code of a derivation in $T$ of a formula whose code is $x$”. One defines the provability predicate $\mathbf{Prov}_T(x)$ as $\exists y \mathbf{Proof}_T(y,x)$. One shows that $\mathbf{Prov}_T(x)$ satisfies the HB conditions. (This is where one usually waves one’s hands, and says, “$T$ must contain $I \Sigma_1$, or something like that”. One can get below $I \Sigma_1$ but it’s complicated.) One defines $Con(T)$ as $\neg \mathbf{Prov}_T([\underline{0}=1])$. Then the 1st incompleteness theorem can be formalized inside $T$, and so $T \vdash Con(T) \rightarrow G$; and thus, if $T$ is consistent, $T \nvdash Con(T)$.
But the language $L_F$ doesn’t have numerals for the simple reason that $S$ is not a function symbol, but a binary predicate (so, successor is not assumed to be total). So, there are no canonical numerals, and this absence makes expressing things very difficult, and it’s not clear what weak and strong representability of relations now means. It’s hard even to express $2 = 5$ or $2 \neq 5$. To be provable, both have to be expressed using universal quantifiers and a sequence of formulas “$x$ is a $1$”, “$x$ is a $2$”, etc. Then the translation of $2 \neq 5$ doesn’t seem to be the negation of the translation of $2 = 5$.
(I’m also pretty sure that $F$ does not have pairing either, i.e., a formula $\phi(z, x, y)$ which means, “$z$ is the ordered pair $(x,y)$”.)
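The non-negation phenomenon can be seen concretely in a toy model. The following sketch is entirely hypothetical (a four-element domain with a partial successor relation, not the official semantics of $F$): it evaluates the quantified translations of $2 = 5$ and $2 \neq 5$ by brute force, and since no element of the model is a “5”, both come out true.

```python
# Toy model in the style of F: successor S is a *relation*, not assumed
# total. Domain {0,1,2,3}; 3 has no successor, so no element is a "5".
# Hypothetical illustration, not the official semantics of F.

DOM = {0, 1, 2, 3}
S = {(0, 1), (1, 2), (2, 3)}      # partial successor relation
ZERO = 0

def is_n(x, n):
    """Evaluate the formula 'x is n', built from zero and S."""
    if n == 0:
        return x == ZERO
    return any(is_n(y, n - 1) and (y, x) in S for y in DOM)

def forall_pairs(pred):
    return all(pred(x, y) for x in DOM for y in DOM)

# Translation of "2 = 5":  forall x,y ( is2(x) & is5(y) -> x = y )
two_eq_five = forall_pairs(
    lambda x, y: not (is_n(x, 2) and is_n(y, 5)) or x == y)
# Translation of "2 != 5": forall x,y ( is2(x) & is5(y) -> x != y )
two_neq_five = forall_pairs(
    lambda x, y: not (is_n(x, 2) and is_n(y, 5)) or x != y)

print(two_eq_five, two_neq_five)  # prints: True True (both hold vacuously)
```

Since nothing satisfies “$x$ is 5”, both universally quantified translations hold vacuously, so neither is the negation of the other.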
Perhaps I am wrong. But all of this seems too extreme and probably pointless. There are other very weak formal systems of arithmetic that wouldn’t introduce such complexities but also have finite models of every non-zero cardinality. Andrew mentions such a system: a 2000 paper by Panu Raatikainen, “The Concept of Truth in a Finite Universe”, and Panu calls it “Truncated Arithmetic”. The axioms of $TA$ are:
$(A1): \neg(x \lt x)$,
$(A2): x \lt y \vee y \lt x \vee x = y$
$(A3): (x \lt y \wedge y \lt z) \rightarrow x \lt z$
$(A4): \neg(x \lt 0)$
$(A5): S(x)=y \rightarrow \forall z \neg(x \lt z \wedge z \lt y)$
$(A6): [x \lt S(x)] \vee \forall y[(y \lt x \vee y = x) \wedge x = S(x)]$
$(A7): x + 0 = x$
$(A8): x + S(y) = S(x +y)$
$(A9): x \times 0=0$
$(A10): x \times S(y)=(x \times y)+x$
I.e., the usual successor axioms of $Q$
$S(x) \neq 0$
$S(x) = S(y) \rightarrow x = y$
have been dropped. But we have the standard numerals, $0$, $S(0)$, $S(S(0))$, etc. This means that talk of representability of recursive functions and relations goes through as normal. But we can have a model $\mathcal{A}$ which satisfies $\exists x(S(x) = x)$. And a model $\mathcal{A}$ such that $\mathcal{A} \vDash \underline{2} = \underline{57}$.
Jeff
Posted by: Jeffrey Ketland on October 18, 2011 7:32 PM | Permalink | Reply to this
### Re: Weak Systems of Arithmetic
Jeffrey wrote:
So the philosophical view, which is agnostic about numbers but accepts $\aleph_0$-many syntactical entities, doesn’t make sense, particularly if it is motivated by a philosophical scepticism about infinity. One cannot honestly pretend to be a strict finitist while adopting (as true) a finitary metatheory at the level of PRA, say.
Right. But this doesn’t concern me much, since I’m willing to countenance philosophical views of all sorts. I’d be happy to hear what an ultrafinitist metatheory could prove about ZFC, for example… and delighted if adding large cardinal axioms to some powerful metatheory could be used to prove extra results about this theory F. I view all mathematical and metamathematical systems as my friends. So I’m eager to hear what any one has to say about any other — but I don’t feel the need to agree with any of their opinions.
My reason for being particularly interested in ultrafinitist theories (by which I mean: theories where very large natural numbers either don’t exist, or might not exist) does not stem from a distaste for other kinds of theories. Instead, my main reason is that ultrafinitist theories seem particularly challenging to formulate in a nice way, and haven’t been studied as much as other alternatives. So, they’re open territory, relatively speaking — and I’m always drawn to open territory: that’s where the new stuff is!
More generally, weak systems of arithmetic seem worth studying for various reasons. First, only theories weaker than Q (say) have a shot at being decidable, and with computers being so important these days, it’s nice to know a lot of decidable theories. Second, this in turn might lead to weak but still interesting theories that evade the hypotheses of Gödel’s first and second incompleteness theorems, and that would be cool.
I’m a bit sceptical that $F$ proves its own consistency. This is because I’m sceptical that the $L_F$-formula being mooted to express consistency actually does express consistency of $F$!
I understand that worry. I hope a consensus arises on this issue, because I may not have the time or energy to figure it out myself!
It’s hard even to express 2=5 or 2≠5.
True, but your difficulties make perfect sense once we get used to a world where we can’t take the existence of these numbers for granted.
We can say various things like “if $x$ is the successor of the successor of zero, then $x$ is not the successor of the successor of the successor of the successor of the successor of zero”, or for short “if $x$ is 2 then $x$ is not 5”. And we can reason with them. We just have to stop taking for granted that “there exists $x$ such that $x$ is 2”.
… the translation of 2≠5 doesn’t seem to be the negation of the translation of 2=5.
I don’t think you should expect to find the translation of 2≠5 or 2=5 into this system. You’re taking ideas from a world where 2 and 5 are sure to exist, and trying to translate them into a world where that’s no longer the case. You can say “if $x$ is 2 then it’s 5”, and you can say “if $x$ is 2 then it’s not 5”. But these aren’t negations of each other—and that makes perfect sense, since there might not be a 2.
… all of this seems too extreme and probably pointless.
I have a different reaction: I think it’s fascinating to explore new ways of thinking, and ‘extreme’ ones can be very fun, and sometimes illuminating. The idea of arithmetic where you’re not sure that the number 2 exists—how refreshing!
There are other very weak formal systems of arithmetic that wouldn’t introduce such complexities but also have finite models of every non-zero cardinality. Andrew mentions such a system: a 2000 paper by Panu Raatikainen, “The Concept of Truth in a Finite Universe”, and Panu calls it “Truncated Arithmetic”.
By the way, personally I find a system where very large numbers plus 1 equals themselves to be further removed from my intuitive picture of the natural numbers than a system where very large numbers just don’t exist. But I don’t mind people studying either system, and maybe we could even translate between the two: if one system ‘hits the wall’ and says 100+1 = 100, the other says 100+1 just doesn’t exist.
It’s like the difference between hitting a wall and continuing to walk in place, and walking off the edge of a cliff.
Posted by: John Baez on October 19, 2011 7:27 AM | Permalink | Reply to this
### Re: Weak Systems of Arithmetic
Thanks, John. I’m in agreement with much of that, but a quick comment on the “syntax objection to ultra-finitism”, which is a technical objection to a philosophical view. I’m arguing that the philosophical view is “incoherent”, not that the formal system $F$ is incoherent. How could a formal system be incoherent? After all, it’s simply a formal system, with various mathematical properties: for example, it’s extremely unclear whether it encodes syntax adequately at all (such theories are usually required to have pairing, be sequential, etc.).
So, I’ll try and explain the syntax objection to ultra-finitism again as clearly as I can. This is an objection to a philosophical view that recommends some sort of scepticism for philosophical/ontological reasons. In fact, this objection was given long, long ago by both Quine and Church, and is occasionally mentioned later (e.g., Putnam, Wetzel), although it seems to have been quietly forgotten.
Unfortunately, different versions of ultra-finitism seem to be based on different considerations. (Nelson’s views are a bit unclear to me because he seems to accept the potential infinity of numbers, but is sceptical about a certain operation - exponentiation. One might call this exponentiation-finitism. This is why $Q$ embodies what he accepts, mathematically speaking. So, I think that Nelson does not endorse $(UF)$ below. Nelson’s real beef seems to be with induction, not potential infinity. On the other hand, I really am not sure about this, or what Nelson does or doesn’t accept regarding syntax.)
Here are the two relevant claims:
$(UF)$ One ought to be agnostic about there being $\aleph_0$-many anythings.
$(Syn)_{\aleph_0}$ There are $\aleph_0$-many syntactical entities.
The first of these is the basic philosophical view involved. The second is a theorem of the metatheory assumed. I.e., $MT \vdash \forall x \exists y(x \lt y)$, where $x \lt y$ means “$x$ is a proper initial segment of $y$” (see Visser’s paper on $TC$ that I mentioned above for the detailed definitions of such notions in formalized syntax).
Then:
Syntax Objection: the two claims $(UF)$ and $(Syn)_{\aleph_0}$ are incoherent.
(They’re not strictly logically inconsistent, because $(UF)$ is an epistemically normative claim. The incoherence is more like claiming to be agnostic about food while at the same time eating a cheese sandwich.)
So, an ultra-finitist who endorses (UF) needs to give their meta-theoretic syntax in an ultra-finitistically acceptable way. They cannot describe their favourite theory $T$ in a language $L$ with $|L| = \aleph_0$, for this is in tension with their philosophical view, $(UF)$. They must somehow devise a notion of a language of finite cardinality, governed by axioms for concatenation which don’t imply $(Syn)_{\aleph_0}$.
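To make the demand concrete, here is a toy sketch of a finite “language” in which concatenation is partial, so $(Syn)_{\aleph_0}$ fails. Everything here is a hypothetical illustration (a one-letter alphabet and an arbitrary bound $N = 8$), not a formalization of Grzegorczyk’s $TC$.

```python
# Toy "bounded syntax": strings over a one-letter alphabet, of length
# at most N, with concatenation defined only when the result fits.
# The alphabet and the bound N = 8 are arbitrary hypothetical choices.

N = 8
STRINGS = ['a' * k for k in range(1, N + 1)]   # the whole "language"

def concat(x, y):
    """Partial concatenation: undefined (None) when the result is too long."""
    return x + y if len(x) + len(y) <= N else None

def proper_initial_segment(x, y):
    return x != y and y.startswith(x)

# Totality of concatenation fails at the bound:
assert concat('a' * N, 'a') is None
# And (Syn)_aleph0 fails: the longest string has no proper extension,
# so "forall x exists y (x < y)" is false in this structure.
assert not all(any(proper_initial_segment(x, y) for y in STRINGS)
               for x in STRINGS)
print("concatenation is partial; no string extends", 'a' * N)
```

Whether axioms governing such a structure can be made to do real metamathematical work is, of course, exactly the open question at issue.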
Jeff
Posted by: Jeffrey Ketland on October 20, 2011 2:30 PM | Permalink | Reply to this
### Re: Weak Systems of Arithmetic
For Nelson’s views, at least for views he held at some point, you can check his book Predicative Arithmetic. My impression is that he doesn’t accept what you are saying; in particular, he makes a clear distinction between generic and non-generic numbers.
You don’t need to consider a theory in the classical sense; we only consider the part that can be “realized” in the real physical world. Obviously this is not a precise statement: what do I mean by realizing in the real physical world? But I can try to give the feeling of what I mean. Think of a computer. There is a limit on the amount of memory it has. Only consider expressions in the language that can be stored in the memory of that computer. If you cannot state a formula, there is no point in talking about it from the ultrafinitistic point of view.

The ultrafinitistic viewpoint is incoherent if you try to treat mathematical objects as you are used to in classical mathematics. You assume that the syntax is infinite. Why? Because you are using an unrestricted inductive definition for it, which from the ultrafinitistic point of view does not make sense. But there are formulas that I can store in the memory of my computer; rejecting the inductive definition of syntax does not mean rejecting syntax. I can talk about 2 and 5 because I can represent them. I can talk about the formula representing consistency because I can write it down and encode it as a reasonable-size natural number. I cannot talk about arbitrarily large formulas or proofs or numbers; I can only talk about those which really exist.

This doesn’t rule out developing ultrafinitistic techniques that would show: whichever proof you realize and give to me, I will show that it is not a proof of inconsistency of my system; and as long as you haven’t realized such a proof, I don’t need to care about it. I think that is really what is intended by consistency. What is the point of showing consistency or soundness in the first place? The point is that my system works correctly, in the sense that what I derive in it is correct. Do I care about possible non-realizable proofs of inconsistency? Obviously not.
Why should I care?
Now, this doesn’t get rid of the subtleties of making ultrafinitism precise, but your objection about syntax is not a strong philosophical objection against ultrafinitism. I think real computers are a good demonstration of ultrafinitistic mathematics. Engineers deal with the reality of an explicitly bounded number of finite objects all the time; it is not easy to generalize this experience into a mathematical theory, but it is what happens in real life. From my point of view, ultrafinitistic viewpoints are attempts to bring the theory of mathematics closer to the reality of physical life. For a fixed computer with a fixed amount of memory there is a maximum number that it can store, and trying to compute its successor will end up with a result that is different from what mathematics tells us. It might overflow and go back to the smallest number, it might return the maximum number, or it might just throw an exception; which one happens is not that important. What is important is that it is not the successor of the number as defined in classical mathematics. Ultrafinitisms try to capture this kind of physical reality, which is often very messy.
Posted by: logician on October 25, 2011 3:58 PM | Permalink | Reply to this
### Re: Weak Systems of Arithmetic
hi logician,
…we only consider the part that can be “realized” in the real physical world. Obviously this is not a precise statement, what do I mean by realizing in the real physical world?
Indeed. You mean physical tokens. This is a well-known notion. For example, there is a nice recent book on the topic:
Wetzel, L. 2009. Types and Tokens.
You will find a nice discussion of similar things in various other places: e.g., Burgess $\&$ Rosen 1997, A Subject with No Object. Or earlier stuff by Charles Chihara. Or Hilary Putnam.
But I can try to give the feeling of what I mean. Think of a computer. There is a limit on the amount of memory it has. Only consider expressions in the language that can be stored in the memory of that computer.
Of course.
If you cannot state a formula there is no point in talking about it from ultrafinitistic point of view. The ultrafinitistic viewpoint is incoherent if you try to treat mathematical objects as you are used to in classical mathematics.
Indeed. This kind of thing has already been studied. E.g., most famously,
Quine, W.V. $\&$ Goodman, N. 1947, “Steps Towards a Constructive Nominalism”, JSL 12.
Weir, A. 2010. Truth Through Proof: A Formalist Foundation for Mathematics.
You can read a nice review of this by John Burgess.
Alternatively, you might read some of the literature on logics for resource-bounded agents.
There is a huge literature on all this stuff.
Jeff
Posted by: Jeffrey Ketland on October 25, 2011 7:48 PM | Permalink | Reply to this
### Re: Weak Systems of Arithmetic
Jeff,
Thanks for the references. I will check them.
Best Regards
Posted by: logician on October 25, 2011 10:16 PM | Permalink | Reply to this
### Re: Weak Systems of Arithmetic
On second thought, it might be more appropriate to consider Nelson a strict predicativist rather than an ultrafinitist.
Posted by: logician on October 25, 2011 4:16 PM | Permalink | Reply to this
### Re: Weak Systems of Arithmetic
logician wrote:
You assume that the syntax is infinite, why? Because you are using an unrestricted inductive definition for it which from ultrafinitistic point of view does not make sense.
I agree. It’s unfair to ultrafinitism to 1) choose a metatheory for discussing it in which one can prove the set of formulae is infinite, then 2) say this metatheory is philosophically incompatible with ultrafinitism and then 3) claim that somehow this is a problem with ultrafinitism.
Note: there’s no inherent problem with 1). It may be very productive to study ultrafinitism using the ordinary tools of metamathematics. If we do this, then 2) is true: our metatheory embodies different philosophical ideas than the theory we’re using it to study. That’s fine. The problem comes in step 3), which amounts to “blaming the victim”.
Jeffrey wrote:
They must somehow devise a notion of a language of finite cardinality, governed by axioms for concatenation which don’t imply $(Syn)_{\aleph_0}$.
Just as the theory $F$ is unable to prove there are infinitely many natural numbers, we can expect the ultrafinitist to eventually develop a metamathematics that is unable to prove there are infinitely many formulae.
However, I don’t think it’s fair to demand a full development of metamathematics based on ultrafinitist principles before we begin studying ultrafinitist mathematics!
A comparison with topos theory might be helpful. People had to study intuitionism and develop topos theory for quite a while before they could develop metamathematics in a nice way based on intuitionist principles. Before that one might complain about using the principle of excluded middle in the metamathematical study of intuitionistic logic… but this would not be a reasonable objection to intuitionism. To blame intuitionism for the fact that we’re studying it using classical logic would again be “blaming the victim”.
Posted by: John Baez on October 25, 2011 4:49 PM | Permalink | Reply to this
### Re: Weak Systems of Arithmetic
John,
Just as the theory F is unable to prove there are infinitely many natural numbers, we can expect the ultrafinitist to eventually develop a metamathematics that is unable to prove there are infinitely many formulae.
Right, so, e.g., take a look at Quine $\&$ Goodman 1947 (link above). Then we can consider some other ideas. Perhaps we fix an upper bound and consider, say, some kind of bounded arithmetic (these are very well-studied - see, e.g., the monograph Hájek/Pudlák 1993, with plenty of info about finite initial segments of models). Or we don’t, but we go modal (Hellman, Chihara). Or we don’t fix an upper bound, but we go vague somehow (Sazonov’s idea). Or …, etc. Such things - it might surprise you - have already been discussed!
However, I don’t think it’s fair to demand a full development of metamathematics based on ultrafinitist principles before we begin studying ultrafinitist mathematics!
I agree. At the moment, someone points out an incoherence. People should be free to do as they please. But whether it works or not, whether it succeeds, is a separate matter. When someone comes up with an idea, then maybe someone else criticizes it. The person who came up with the idea might then accept the challenge. Etc. So, here’s my criticism: the standard metatheory of even a very simple language has only infinite models. What’s the response? Well, maybe one can give a strictly finite theory in a strictly finite language, where one makes only strictly finite assumptions (about strictly finite proofs/derivations: e.g., of length $\leq 2^n$ - all binary sequences up to length $n$). Perhaps. Let’s see if that can be done. If so, that’s interesting. If not, why not? There is work to do.
For comparison, a finitist demand on proof-theoretic reductions is that the reduction itself be provable in, e.g., PRA. For example, $ACA_0$ conservatively extends $PA$, and this fact is provable in $PRA$. This is a kind of inner consistency requirement.
There is already a detailed literature about types, tokens, reformulating mathematics without referring even to abstract entities, etc. So, this has already been studied, in huge detail. I am briefly describing the conclusions of people who have worked on this topic and analysed what’s possible (see some of the links I give in my reply to “logician”. I can give you more, if you like: if one “goes modal”, then one can give a metamathematics without assuming any mathematical entities: one assumes possible formula tokens, or possible numeral tokens, or possible entities more generally. This idea has been developed by Charles Chihara and Geoffrey Hellman - see the standard monograph analysing all of this: Burgess $\&$ Rosen 1997 (A Subject with No Object). I recommend it.)
Jeff
Posted by: Jeffrey Ketland on October 25, 2011 9:10 PM | Permalink | Reply to this
### Re: Weak Systems of Arithmetic
Jeffrey wrote:
There is work to do.
Okay, I agree with that. I’d been afraid you were trying to “nip ultrafinitism in the bud” by claiming some sort of internal inconsistency in the ultrafinitist worldview. So, I wanted to point out that ultrafinitist mathematics will look nicer when studied using an ultrafinitist metamathematics. I agree there’s more work to be done here. I don’t want to develop this metamathematics, or even think about it much, but I hope someone does.
In particular, I think the system F, where we don’t assert the existence of a successor to every natural number, will look nicest when studied using a metamathematics where we don’t assert the existence of the concatenation of two strings of symbols. In F we can’t prove there are infinitely many natural numbers; in such a metamathematics we can’t prove there are infinitely many strings of symbols drawn from a finite alphabet.
In short: “as above, so below”.
(In category theory this idea, namely that “concepts are happiest in a context that resembles themselves”, is sometimes called the “microcosm principle”. There it’s usually used to note that objects equipped with a given structure are often most elegantly defined in a category equipped with the same kind of structure: for example, monoid objects in a monoidal category. But now I’m thinking the same phenomenon shows up in metamathematics.
People can study an intuitionistic version of the natural numbers, a natural numbers object, in any topos. If someone invents a good general ultrafinitist theory of the natural numbers, it may make sense in any ‘ultrafinite topos’—a concept that hasn’t been defined.)
Posted by: John Baez on October 26, 2011 3:53 AM | Permalink | Reply to this
### Re: Weak Systems of Arithmetic
Thanks, John - the category theory stuff is way above my head, though! (Btw, Steve Awodey was here, in the office opposite, for a few months earlier this year, when there was this fuss about Voevodsky and PA.) More on the topic of this thread though, I’d meant to mention the following paper earlier, as it involves doing category theory inside a weak arithmetic theory,
Visser, A. “Cardinal Arithmetic in Weak Theories”.
Not sure if/when this was published. Visser has published a couple of things in this area in the last couple of years (two articles on similar material in Review of Symbolic Logic).
Also, there’s a paper by Richard Pettigrew on a certain kind of finitary set theory (due to John Mayberry) closely related to bounded arithmetic ($I \Delta_0 + exp$, I think):
Pettigrew, R. 2010. “The Foundations of Arithmetic in Finite Bounded Set Theory”. (In Roland Hinnion and Thierry Libert (eds), One Hundred Years of Axiomatic Set Theory, Cahiers du Centre de Logique, vol. 17.)
Jeff
Posted by: Jeffrey Ketland on October 26, 2011 9:23 PM | Permalink | Reply to this
### Re: Weak Systems of Arithmetic
Jeffrey Ketland wrote:
I’m arguing that the philosophical view is “incoherent”, not that the formal system $F$ is incoherent.
I understand. My reply, earlier, was “this doesn’t concern me much”. In other words, I don’t care very much if Prof. X says Prof. Y’s philosophical views are incoherent, my own being so incoherent and changeable that I’ve long since given up trying to formulate them precisely.
But it seemed to concern our friend t, so maybe he will be interested in this further clarification.
Posted by: John Baez on October 20, 2011 4:45 PM | Permalink | Reply to this
### Re: Weak Systems of Arithmetic
In other words, I don’t care very much if Prof. X says Prof. Y’s philosophical views are incoherent, my own being so incoherent and changeable that I’ve long since given up trying to formulate them precisely.
But this is a rather negative (meta)view, and it’s too self-deprecating! For it suggests that one can never give a precise analysis of certain views in either epistemology or metaphysics, and compare them, e.g., using techniques of mathematical logic. I am far more optimistic about making definite, specific progress on such matters—but one has to work hard to formulate views with as much precision as possible.
Here’s an example from something I’ve worked on a bit. Suppose Prof. X and Prof. Y advocate views $(V_1)$ and $(V_2)$ respectively:
$(V_1)$ Accepting a theory $T$ consists in believing $T$ is empirically adequate.
$(V_2)$ Accepting a theory $T$ consists in believing that $R(T)$ is true.
(where $R(T)$ is a certain operation applied to theories, called Ramsification)
Suppose further that one can prove a theorem:
$R(T)$ is true iff $T$ is empirically adequate.
To do this, one has to be careful about how theories are formalized, how to define “empirically adequate”, and $R(T)$. So, there is some wriggle room. That said, it seems that views $(V_1)$ and $(V_2)$ are quite close to being equivalent (not exactly, because of the epistemic operators involved).
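For concreteness, one standard way to define $R(T)$: if the theoretical (non-observational) predicates of $T$ are $P_1, \dots, P_n$, replace each $P_i$ by a second-order variable $X_i$ and existentially quantify, so that, roughly,

$R(T) := \exists X_1 \dots \exists X_n \, T(X_1/P_1, \dots, X_n/P_n)$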
Similarly, in other cases, one might succeed in establishing inconsistencies, implications and equivalences, but it requires careful formulation of the views in question. In this way, we make definite progress in understanding the relationships between the views in question.
Jeff
(PS - thanks for fixing the typo above! I’ve eventually realised that XHTML isn’t HTML.)
Posted by: Jeffrey Ketland on October 20, 2011 11:00 PM | Permalink | Reply to this
### Re: Weak Systems of Arithmetic
Jeffrey wrote:
But this is a rather negative (meta)view, and it’s too self-deprecating! For it suggests that one can never give a precise analysis of certain views in either epistemology or metaphysics, and compare them, e.g., using techniques of mathematical logic.
I was just explaining why I don’t care about this stuff; I wasn’t trying to say nobody should mess with it.
Yeah, sorry—this blog uses XHTML, which is more fussy than HTML.
Posted by: John Baez on October 21, 2011 1:31 AM | Permalink | Reply to this
### Re: Weak Systems of Arithmetic
John,
Ok, let me try and get you to care about this stuff - i.e., mathematical methods in philosophy. For when one has set things up correctly, one gets certain technical results which are philosophy (in my view, they constitute the correct way to do philosophy). I’ll scribble down some technical results relevant to various philosophical questions - a couple concern applying mathematics in science.
The philosophical discussion above concerned the interpretability of a syntactic theory (e.g., $TC$) in the theory discussed above, $F$. That’s a technical problem, but at the same time a philosophical one. I mentioned just above another case: giving precise definitions of the notion of “empirical adequacy”. Another is this: some philosophers are interested in defining the notion “$x$ is identical to $y$”.
Philosophical question 1. Can one reduce/define/analyse the notion of identity in terms of other notions?
Here’s a precise answer to that sort of question. Suppose $L$ is a first-order language with finitely many predicates and no function symbols. Let $M$ be an $L$-structure; we do not assume that the relation $=$ (I mean the diagonal $\{(x,x): x \in dom(M)\}$) is definable in $M$.
Theorem 1. Suppose $M$ is rigid. Then $=$ is definable in $M$.
Proof. Suppose $=$ is not definable in $M$. One can show that this implies that a certain $L$-formula, $x \approx y$, does not define $=$. So, there are $a, b \in dom(M)$ such that $a \neq b$ and $a \approx^{M} b$. Let
$\pi_{ab} : dom(M) \rightarrow dom(M)$
be the permutation that transposes $a$ and $b$. One can show that $a \approx^{M} b$ implies that $\pi_{ab} \in Aut(M)$. Hence, $M$ is not rigid.
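The formula $x \approx y$ in the proof isn’t spelled out; presumably it is the usual indiscernibility formula, which is first-order expressible precisely because $L$ has only finitely many predicates and no function symbols. A guess at the intended formula:

```latex
% Conjectured form of x \approx y: x and y satisfy the same atomic
% formulas of the finite relational language L, in every argument slot.
x \approx y \;:=\; \bigwedge_{R \in L}\; \bigwedge_{1 \le i \le r(R)}\;
\forall z_1 \cdots \forall z_{r(R)}\,
\bigl( R(z_1,\dots,z_{i-1},x,z_{i+1},\dots,z_{r(R)})
       \leftrightarrow
       R(z_1,\dots,z_{i-1},y,z_{i+1},\dots,z_{r(R)}) \bigr)
```

where $r(R)$ is the arity of $R$. The conjunction is finite only because $L$ is, which is where the hypothesis on $L$ gets used.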
Here’s another example.
Philosophical question 2. Can $Q$ be refuted by an empirical statement? (E.g., about how many physical tokens can be inscribed, perhaps on molecules, using superconductor nanotechnology, etc., etc.)
The answer to this is no, if $Q$ is consistent, because,
Theorem 2. Suppose $Q$ is consistent and let $\phi$ be an empirical statement. Then if $\phi$ is consistent, $Q \wedge \phi^{U}$ is consistent.
Proof. ($\phi^U$ is a relativization of $\phi$ to a predicate $U(x)$ meaning “$x$ is not a number”.) Use Joint Consistency noting that $L_Q$ and $L_{\phi}$ have only $=$ in common.
In other words, $Q$ is consistent with any such empirical fact. For example, suppose $F(x)$ means “$x$ is a molecule” or “$x$ is a region of space”. Then $Q \wedge \exists_n x (U(x) \wedge F(x))$ is consistent, for all $n$.
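For completeness, here is the standard recursive definition of relativization (which I take to be what’s meant above): atomic formulas are left unchanged, connectives pass through, and quantifiers are restricted to $U$.

```latex
% Relativization of \phi to the predicate U, by recursion on \phi:
\phi^{U} := \phi \quad \text{for atomic } \phi, \qquad
(\neg\psi)^{U} := \neg\,\psi^{U}, \qquad
(\psi \wedge \chi)^{U} := \psi^{U} \wedge \chi^{U},
```

```latex
(\forall x\,\psi)^{U} := \forall x\,\bigl(U(x) \to \psi^{U}\bigr), \qquad
(\exists x\,\psi)^{U} := \exists x\,\bigl(U(x) \wedge \psi^{U}\bigr).
```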
Here’s another, related, example but concerning far more powerful mathematics. How are empirical facts relevant to a mathematical set theory that talks about sets of physical things? Even our most powerful such theory?
Philosophical question 3. Can set theory with urelements $ZFU$ be refuted by an empirical statement?
The answer is no, if $ZFU$ is consistent, because
Theorem 3. Suppose $ZFU$ is consistent and let $\phi$ be an empirical statement. Then if $\phi$ is consistent, $ZFU + \phi^{U}$ is consistent. (Field 1980)
where $\phi^{U}$ is a relativization to $U(x)$ meaning “$x$ is not a set”.
Here’s another example.
Philosophical question 4. Scientific theories are mathematicized. Can we eliminate reference to the mathematical objects?
Let $T$ be an axiomatized mathematicized scientific theory quantifying over concrete entities (e.g., points, regions) and mathematical ones (e.g., real numbers, sets and whatnot), formulated in a 2-sorted language $L$, one sort for concrete things and the other for mathematical things. Then,
Theorem 4. Suppose $T$ satisfies a certain coding condition. Then $T$ in $L$ is definitionally equivalent to a purely nominalistic theory $T^{\circ}$ in a language $L^{\circ}$ which is the purely nominalistic part of a definitional extension of $L$. (Burgess $\&$ Rosen 1997)
Hopefully I haven’t goofed up the statements of these in writing them in this comment (the one whose proof is most difficult is Th. 3); but they’re all examples of mathematical philosophy, and I think that if mathematicians and physicists care about, e.g., how mathematics is applied in science, they should care about such things.
Jeff
Posted by: Jeffrey Ketland on October 21, 2011 10:58 PM | Permalink | Reply to this
### Re: Weak Systems of Arithmetic
John wrote:
By the way, personally I find a system where very large numbers plus 1 equals themselves to be further removed from my intuitive picture of the natural numbers than a system where very large numbers just don’t exist.
In JavaScript, that “very large” number is $2^{53}$, because that’s where natural numbers stop being exactly representable as 64-bit IEEE floating point numbers.
Posted by: Mike Stay on October 26, 2011 7:44 AM | Permalink | Reply to this
### Re: Weak Systems of Arithmetic
Okay, Mike, but you’re leaving me in suspense: what happens when you try to add 1 to $2^{53}$?
Posted by: John Baez on October 26, 2011 8:02 AM | Permalink | Reply to this
### Re: Weak Systems of Arithmetic
Okay, Mike, but you’re leaving me in suspense: what happens when you try to add 1 to $2^{53}$?
You get $2^{53}$ again; the least significant bits are discarded.
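The same behaviour shows up in any language using IEEE 754 doubles; here is a quick check in Python (whose floats are also 64-bit doubles), not tied to JavaScript:

```python
# 2**53 is the point past which 64-bit IEEE doubles can no longer
# represent every integer exactly; adding 1.0 rounds back down.
big = float(2 ** 53)
print(big + 1.0 == big)        # True: 2^53 + 1 is not representable
print(big - 1.0 + 1.0 == big)  # True: below 2^53 the arithmetic is exact
```
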
Posted by: Mike Stay on October 27, 2011 1:19 AM | Permalink | Reply to this
### Re: Weak Systems of Arithmetic
“t, yes, I’ve read your monograph. Actually, I read it several years ago, because I remember you discussing this quite a bit on sci.logic with Torkel and others.”
“I’m referring to the metatheory of F and the views of the author! That is, F is agnostic about, say, 1, and is meant to represent the author’s agnosticism; however, the author’s metatheory is not agnostic about an ω-sequence of syntactical entities, say,
(0=0),¬(0=0),¬¬(0=0),…
understood as types (i.e., finite sequences), not physical tokens."
But I am most certainly agnostic about that ω-sequence of syntactical entities. The formalization of the meta-theory in F is certainly agnostic. And while first- and second-order logic are usually described by assuming the ω-sequence, this assumption is not essential to the logic. I only need to assume whatever tokens and strings of tokens there are. I can then describe which of those strings are wffs, which are axioms, and which are proofs. There’s another paper where these issues are mentioned explicitly (at the beginning, for those who like to skim papers): www.andrewboucher.com/papers/godel.pdf.
BEGIN QUOTE (from the Godel paper):
A language is a set of symbols as well as a set of rules as to which sequences of symbols are certain linguistic types, such as logical constants, variables, terms, and well-formed formulas (wffs).
Unfortunately this is not how a language is often defined in logical texts. Often linguistic types are defined in such a way that, not only are rules enunciated as to which sequences are of the particular type, but also so that sequences of symbols are generated and asserted positively to exist ad infinitum, an assumption in essentials equivalent to the Successor Axiom.
For instance, Mendelson [Introduction to Mathematical Logic, 1st ed., p. 15] writes (I’m changing the logical notation to be consistent with the one used in the present paper):
(1) All statement letters (capital Roman letters) and such letters with numerical subscripts are statement forms.
(2) If A and B are statement forms, then so are (not A), (A and B), (A or B), (A implies B), and (A iff B).
(3) Only those expressions are statement forms which are determined to be so by means of (1) and (2).
This definition does two things. Firstly, it explains what something must be to be a statement form. Such explanation is, of course, the natural role of a definition. But it goes beyond explanation and beyond the normal job of a definition by, secondly, positively asserting the existence of objects. That is, the definition asserts that certain statement forms exist and indeed exist ad infinitum, e.g. (A), (¬A), (¬(¬A)), (¬(¬(¬A))), … The definition is, intentionally or otherwise, not only categorizing sequences of symbols, but positively asserting their existence. It has assertive force and is not neutral, as definitions should be.
Defining a language in this way therefore games the discussion; if the set of theories of such languages is not to be empty, then the Successor Axiom must be true. It is therefore to be avoided and an unbiased method sought. For instance, a neutral replacement of Mendelson’s definition would be:
Suppose A is a sequence of symbols. Then A is a statement form if there exists a sequence of statement forms A1,…,An
(for some n ≥ 1) such that:
1) An is A
2) for every i (1≤i≤n),Ai is either:
a) a capital Roman letter
b) (not Aj), (Aj and Ak), (Aj or Ak), (Aj implies Ak), or (Aj if and only if Ak), for some j,k where 1 ≤ j,k and i > j,k.
Remark that this definition does not assert the existence of A. Rather, it accepts A’s existence and says what condition must obtain in order for A to be in the category of statement forms. It is therefore legitimate as a definition; it categorizes but it does not assert.
END QUOTE
Posted by: t on October 23, 2011 5:01 PM | Permalink | Reply to this
### Re: Weak Systems of Arithmetic
Hi t,
(Right, that is the definition of a construction sequence.)
Have you studied the standard monograph on first-order arithmetic? I.e.,
Hajek/Pudlak 1993, Metamathematics of First-Order Arithmetic
Here you will find, pp. 86-89, a basic theory of arithmetic without function symbols, called $BA^{\prime}$. It has models of any finite cardinality. You will find its application later in the book, discussing models of fragments of arithmetic and bounded arithmetic.
Consider your formal system $F$. What is its cardinality? How many axioms does it have?
If, instead, you wish to consider the set of finite sequences up to length $n$ over an alphabet $A$ of size $k$, then this is $\bigcup_{i = 0}^{n} A^i$, which has cardinality $O(k^n)$. Suppose you wish to consider proofs involving strings of length up to 80 with an alphabet of size 10: then you have on the order of $10^{80}$ strings.
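As a throwaway sanity check of these counts (my own sketch, not from the comment):

```python
# Number of strings of length at most n over an alphabet of size k:
# sum_{i=0}^{n} k^i = (k^(n+1) - 1) / (k - 1), which is O(k^n).
def num_strings(k, n):
    return sum(k ** i for i in range(n + 1))

print(num_strings(2, 3))                # 15 strings of length <= 3 over {0,1}
print(num_strings(10, 80) > 10 ** 80)   # True: dominated by the length-80 strings
```
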
Jeff
Posted by: Jeffrey Ketland on October 23, 2011 10:23 PM | Permalink | Reply to this
### Re: Weak Systems of Arithmetic
Here is a question that might be of interest to people here: are there any applications of restrained logical systems to ordinary mathematics? [NB What I wrote below grew a bit long. I hope it’s still OK to post here.]
Let me explain what appears to me to be a similar situation in algebraic geometry. (I know essentially zero about logic, so there’s a chance this parallel is completely ill conceived.) Usual, complex algebraic geometry is about studying complex solutions to systems of polynomial equations. Arithmetic algebraic geometry is about studying solutions in rings of arithmetic interest, most importantly the integers, but rings of derived interest are also important, like the rationals, finite fields, p-adics, etc. An important first step in finding arithmetic solutions to a polynomial system is understanding the nature of its complex solutions. When you have one polynomial in one variable, arithmetic algebraic geometry is really just classical algebraic number theory, and the same principle applies there — it’s just so simple you don’t notice it. This is because the qualitative behavior of the complex roots of a polynomial pretty much only depends on its degree. (I say pretty much because things are a bit subtler if you have multiple roots, but this is a degenerate case.) But as any student of Galois theory will tell you, if you want to study the roots of a polynomial, the first thing you need to know is its degree.
The parallel I’d like to draw, then, is between logical systems and algebraic geometry. Formal logic involves syntactic reasoning according to some rules, just like working in an algebraic structure, such as a commutative ring, does. Since algebraic geometry is pretty much just the study of commutative algebras slightly dressed up, I hope this analogy is reasonable. In this analogy, arithmetic algebraic geometry could be viewed as a restrained form of complex algebraic geometry because all the theorems have to be about systems of polynomial equations with integer (say) coefficients rather than complex ones. Similarly a restrained logic can’t use all the axioms of usual logic.
Now many mathematicians’ reactions to arithmetic algebraic geometry would be to say “Don’t tie my hands! I have no particular interest in number theory. I want to look at all solutions. I don’t really care about the ones that happen to be integral.” Similarly most mathematicians have no particular interest in restrained logics, even if they grant that they are valid mathematical objects. They just want to use logic, and so they want as much freedom with it as possible. So they add the axiom of choice, universes, etc.
This is a perfectly reasonable approach most of the time, but there are some exceptions on algebraic geometry side of the analogy, and what I’m wondering is whether there could be similar exceptions on the logic side. But there are really two kinds of exceptions, and I should say something about both.
The first is that sometimes in the world of complex algebraic geometry, structures of an arithmetic nature come up. A parallel in the logical world is that sheaves conform to a restrained logic. So even if you started out not caring about non-classical logics, you might decide it’s more efficient for you in your study of sheaves if you know something about restrained logics. The example I’m thinking of in algebraic geometry is in the study of abelian varieties. The endomorphism ring of an abelian variety is a ring which is finitely generated as an abelian group. Therefore it has an arithmetic nature, and knowing something about this will help in your study of abelian varieties. Such examples are only so interesting, I think, because even though the original objects of study did not have a restrained nature, there’s no reason to expect that objects derived from them would also not have such a nature. Even so, I’d be interested in knowing more examples in logic.
The second kind of exception is what I really want to talk about. Let me give an example in algebraic geometry. Suppose you want to study the complex solution set of a system of polynomials. If that system has integer coefficients, then you might be able to reduce everything modulo a prime, make some deductions about the solutions modulo that prime, and then draw some conclusions about the original system. For instance if you’re studying solutions to $x^n=1$, then you just have roots of unity, and I hope it’s plausible that some facts about complex roots of unity could be proved by working modulo primes. (This is in fact true.) But let me consider this from the point of view of the rings that come up in such arguments. The relevant ring if you’re studying complex solutions to $x^n=1$ is $C[x]/(x^n-1)$. If you’re studying arithmetic solutions, you want to look at the subring $Z[x]/(x^n-1)$. The subring has fewer elements. How could it be the case that confining ourselves to it is helpful? The reason is that precisely because there are fewer elements in $Z[x]/(x^n-1)$, there are *more* maps out of it. In particular, you have maps to finite fields, rings which a complex die hard might consider exotic and non-classical.
So my first question is whether such a thing ever happens with restrained logics. Do restrained logics admit more specializations to other logics (in some sense), and could it happen that this is useful in studying classical logics?
I could stop here, but in fact the story continues. A complex die hard might say “OK, I concede that if the original polynomial equations have an arithmetic nature, then it can happen that things I care about can be proved using arithmetic. But this is exceptional. Indeed they form only countably many examples in a sea of uncountably many polynomial systems with complex coefficients.” This is a reasonable point of view, and I’d say that you have no right to expect otherwise. But amazingly sometimes arithmetic methods can still be used to do things when the original equations have arbitrary coefficients. The technique I have in mind is called “spreading”. Let $R$ be the subring of the complex numbers generated by all the coefficients in the system of polynomial equations. If one of the coefficients is transcendental, like $\pi$, then $R$ will be a ring like $Z[\pi]$, which isn’t really an arithmetic subring of $C$. But, and this is the point, $R$ is still a finitely generated ring — you just forget that $\pi$ is a complex number and treat it as a free variable. The original system of equations has coefficients in $R$, and you can reduce the system modulo primes simply by applying any homomorphism from $R$ to a finite field. (They will always exist.) Then, as before, you can use arguments over finite fields (involving counting or Frobenius, say) to draw conclusions which you can then try to lift back to $R$ and then to $C$.
Unless I’m mistaken, this is how Grothendieck proved the Ax-Grothendieck theorem. It says that if you have an injective map from the solution set of a system of complex polynomials to itself and the map is defined by polynomial equations, then the map is surjective. Over finite fields, the analogous statement is true by a counting argument, and over the complex numbers, you use the spreading technique. The reason this technique works really gets to the heart of what algebra is. It’s that in a system of polynomial equations, there are only finitely many symbols and hence only finitely many coefficients, even if those coefficients are transcendental. Therefore the ring $R$ is finitely generated, and therefore it has many maps to finite fields (and other rings).
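The finite-field counting step is tiny enough to see in a few lines. This is my own illustration (the particular map $f(x)=x^3$ and the prime $5$ are my choices, not from the text): on a finite set, any injective map is automatically surjective.

```python
# Over a finite field, injective implies surjective by counting.
# f(x) = x^3 is injective on Z/5Z because gcd(3, 5-1) = 1.
p = 5
f = lambda x: pow(x, 3, p)
values = [f(x) for x in range(p)]
print(sorted(values))           # [0, 1, 2, 3, 4]: every element is hit
print(len(set(values)) == p)    # True: injective, hence surjective
```
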
So my second question is whether something similar happens in logic, or whether it could. Any theorem or proof involves only finitely many symbols, and could it be the case that you can consider the minimal logic over which it makes sense and then, since this logic has a finite nature, specialize it to some exotic, restrained logic where completely different techniques are available?
I don’t know if this is completely crazy, but I’m somewhat heartened by the fact that the other person who proved the Ax-Grothendieck theorem, namely Ax, gave a proof using logic.
Posted by: James Borger on October 19, 2011 1:51 PM | Permalink | Reply to this
### Re: Weak Systems of Arithmetic
A parallel in the logical world is that sheaves conform to a restrained logic. So even if you started out not caring about non-classical logics, you might decide it’s more efficient for you in your study of sheaves if you know something about restrained logics.
Indeed! Can I paraphrase this part of your question as “is there a kind of category, definable inside classical foundations, whose internal logic is ultrafinitist”?
Do restrained logics admit more specializations to other logics (in some sense), and could it happen that this is useful in studying classical logics?
If I understand you correctly, this is certainly the case for intuitionistic logic. For instance, intuitionistic logic is consistent with the existence of nilpotent infinitesimals—giving rise to synthetic differential geometry—or with the “strong Church-Turing thesis” that all functions $\mathbb{N}\to\mathbb{N}$ are computable, or “Brouwer’s theorem” that all functions $\mathbb{R}\to\mathbb{R}$ are continuous. And this is of course related to the first point, since many of these statements have models in categories of sheaves.
By the way, I don’t know many good examples of either sort of thing for predicative theories. There don’t seem to be many pretoposes arising naturally inside classical mathematics that aren’t toposes, nor very many fruitful new axioms one would want to assume that are consistent with predicative mathematics but not with impredicativity. Most predicativists seem to be motivated more by philosophical considerations.
It would certainly be interesting to know whether there are interesting examples of either sort for super-weak logics. I recall chatting a little bit with Damir Dzafarov a few years ago regarding what a categorical model of RCA$_0$ might look like, but we didn’t reach any very satisfying conclusions.
Posted by: Mike Shulman on October 19, 2011 6:00 PM | Permalink | Reply to this
### Re: Weak Systems of Arithmetic
1. Mike said: Can I paraphrase this part of your question as “is there a kind of category, definable inside classical foundations, whose internal logic is ultrafinitist”?
More or less. I was interested in *any* examples of auxiliary objects in the classical world that can be usefully thought about using some non-classical logic. But I like your version too!
2. He also said: If I understand you correctly, this is certainly the case for intuitionistic logic. For instance, intuitionistic logic is consistent with the existence of nilpotent infinitesimals—giving rise to synthetic differential geometry—or with the “strong Church-Turing thesis” that …
Are you saying that adding axioms to a logic is like adding relations to a ring and that the resulting logic being consistent is like the resulting ring being nonzero? If so, then that’s nice. Has such a “specialization” ever been used to prove anything interesting in classical logic? I fear the answer is no.
I was thinking of specialization in the ring world from the point of view of ring morphisms, rather than adding generators and relations. Is there an analogous concept in the logic world? Perhaps a morphism between logics would be a function from the set of valid proofs in one to that in the other, and this function would be required to have some “homomorphic” properties relative to some logical operations??
Posted by: James Borger on October 20, 2011 12:07 AM | Permalink | Reply to this
### Re: Weak Systems of Arithmetic
Are you saying that adding axioms to a logic is like adding relations to a ring and that the resulting logic being consistent is like the resulting ring being nonzero?
Exactly! This is what classifying toposes are.
A ring with a presentation as a quotient of a polynomial algebra, $R = \mathbb{Z}[x_1,\dots,x_n]/(f_1,\dots,f_m)$ can be thought of as the result of adding $n$ “formal elements” to $\mathbb{Z}$, followed by imposing $m$ “axioms”. Its universal property says that ring maps $R \to S$ (or equivalently scheme maps $Spec(S) \to Spec(R)$) are given by choosing $n$ elements of $S$ which satisfy the $m$ specified axioms.
Similarly, the topos of sheaves on a site $C$ can be thought of as the result of adding some “formal objects” to $Set$ (roughly, the objects of $C$), some “formal morphisms” between these objects (roughly, the morphisms of $C$), and imposing some “axioms” (the topology of $C$). (I say “roughly” because there is some flatness going on too, but that’s not so important for the basic idea.) Its universal property says that left-exact cocontinuous functors $Sh(C) \to E$, for any topos $E$ (or equivalently, geometric morphisms $E\to Sh(C)$) are given by choosing a collection of objects and morphisms in $E$ which satisfy the axioms.
As John suggested below, this is in fact exactly what set-theorists call “forcing” (modulo classical vs intuitionistic logic, which is really irrelevant for the main point—the only difference is that if you don’t insist on classical logic, then there are more axioms you can consistently add). The analogy to polynomial rings is described at the nLab page on forcing. Set-theorists use the word “generic” for the structures added to the topos of sheaves (which they call a “forcing model”).
(John is exactly right, though, that Mac Lane and Moerdijk aren’t very explicit about how to get back to a model of membership-based set theory from the topos of sheaves. The basic idea is the same as what they do in VI.10: construct ZF-like-sets out of membership graphs or trees, but now internally to a not-necessarily-well-pointed topos. There are papers by Fourman and Hayashi that go through this in some detail. More recently it has been reinterpreted by algebraic set theorists (such as in this paper) and by myself (in this paper).)
Posted by: Mike Shulman on October 20, 2011 10:07 PM | Permalink | PGP Sig | Reply to this
### Re: Weak Systems of Arithmetic
Mike wrote:
Similarly, the topos of sheaves…
And one can make this idea into more than a mere analogy, right? I mean, there’s a reason topoi were invented in algebraic geometry. I forget the details, but I think you can take a scheme and turn it into a ringed topos, namely the topos of sheaves on that scheme. Then a map of schemes gives a map of ringed topoi, and your first example becomes practically an example of your second one (but I guess with ringed topoi instead of mere topoi).
So I think there’s a very nice translation dictionary relating algebraic geometry, logic and category theory—but only people who understand the whole elephant of topos theory are privy to it.
Posted by: John Baez on October 21, 2011 2:07 AM | Permalink | Reply to this
### Re: Weak Systems of Arithmetic
And one can make this idea into more than a mere analogy, right?
Yes, indeed! Although it confuses me a little, because algebraic geometers don’t just consider the topos of sheaves on a scheme qua topological space, but also the sheaves on its etale site and any number of other sites associated to it. I’ve never quite figured out how all of that fits into topos theory.
Posted by: Mike Shulman on October 21, 2011 8:07 AM | Permalink | Reply to this
### Re: Weak Systems of Arithmetic
That’s very interesting, James. I don’t have anything to add, but I’m glad to have learned this way of thinking about restrained systems.
Posted by: Tom Leinster on October 19, 2011 9:29 PM | Permalink | Reply to this
### Re: Weak Systems of Arithmetic
James wrote:
Are there any applications of restrained logical systems to ordinary mathematics?
That’s a great question, and your examples of analogous tricks from algebraic geometry make me feel sure the answer will eventually be yes. But I don’t think this area has been explored enough yet, except in certain famous special cases, including these two you already know:
1) There are a lot of great applications of extremely restrained logical systems to topology and mathematical physics. I’m talking about things like algebraic theories, PROPs and operads. These might not even be considered ‘logic’ by some, since they correspond to setups where all you have are some function symbols and all you can assert are ‘equational laws’, which are implicitly universally quantified over all variables, such as:
$x \times (y + z) = x \times y + x \times z$
or
$g \cdot g^{-1} = 1$
where in a PROP or operad we add even further restrictions on the kinds of equational laws we allow. I guess most logicians would consider such highly restrictive systems part of ‘universal algebra’ rather than ‘logic’, but for me they’re all part of a hierarchy of expressive power with operads near the very bottom and classical first-order or second-order logic near the very top.
Slightly above PROPs we get ‘multiplicative intuitionistic linear logic’ or MILL, which is another way of thinking about symmetric monoidal closed categories—so when we hit this point of the hierarchy at least someone considers it to be ‘logic’. Mike Stay and I have written about the many applications of symmetric monoidal closed categories to physics, topology, logic and computation. One really should have lots of papers like this, for many different kinds of logic.
2) Up near the top of the hierarchy we have topoi. People have studied the heck out of these, and they have lots of great applications to ‘ordinary mathematics’—indeed, that’s where they came from in the first place! But I just wanted to mention that Mac Lane and Moerdijk sketch a supposedly more intuitive proof that $ZF + \neg C$ is consistent, using the idea of forcing but also topos theory. I never quite understood it, in part because they don’t seem to fully explain how to take the topoi they get and turn them into models of ordinary set theory. But this might count as an application of ‘restrained’ (namely intuitionistic) logic to classical logic.
Any theorem or proof involves only finitely many symbols, and could it be the case that you can consider the minimal logic over which it makes sense and then, since this logic has a finite nature, specialize it to some exotic, restrained logic where completely different techniques are available?
I don’t know the answer to this question, but since I have no idea how much logic you know, I’ll mention that it reminds me of the famous compactness theorem in first-order logic: a theory given by some collection of axioms has a model iff every theory given by some finite subset of those axioms has a model. This has all sorts of wonderful consequences, which you can see if you click on the link. For example, if some first-order statement is true in every field of characteristic zero, then there exists $p$ such that it holds for every field of characteristic larger than $p$.
One can prove the compactness theorem using Gödel’s completeness theorem, which establishes that a set of sentences is satisfiable if and only if no contradiction can be proven from it. Since proofs are always finite and therefore involve only finitely many of the given sentences, the compactness theorem follows.
Posted by: John Baez on October 20, 2011 6:16 AM | Permalink | Reply to this
### Re: Weak Systems of Arithmetic
(continued from the last comment.)
******
Suppose A is a sequence of symbols. Then A is a wff if there exists a sequence A1,…,An
(for some n ≥ 1) such that:
1) An is A
2) for every i (1≤i≤n),Ai is either:
a) a capital Roman letter; or
b) (not Aj), (Aj and Ak), (Aj or Ak), (Aj implies Ak), or (Aj if and only if Ak), for some j,k where 1 ≤ j,k and i > j,k.
******
Sorry, the copy-paste didn’t work perfectly in the last comment, so that’s the correct version.
******
A proof of S is then a sequence of wffs W1,…,Wn such that:
1) Wn is S
2) for every i (1≤i≤n), Wi is either:
a) an axiom; or
b) there exists j,k where i > j,k and Wj is (Wk implies Wi).
*******
That’s exactly the manner in which F defines its provability predicate, so it really does have one, and it really does prove its own consistency.
*******
Adding the row of stars seemed to do the trick to get the comment accepted. Apologies if it reduced the readability.
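The construction-sequence definitions above translate directly into code. Here is a sketch of a checker for the wff clause (my own illustration, with made-up names; it handles only single capital letters as statement letters, and represents formulas as plain strings):

```python
# Sketch of the construction-sequence definition of statement forms:
# seq must witness, entry by entry, that a is a statement form.
def is_statement_form(a, seq):
    if not seq or seq[-1] != a:
        return False                      # condition 1: A_n must be A
    for i, s in enumerate(seq):
        if len(s) == 1 and s.isupper():   # clause a): a capital Roman letter
            continue
        earlier = seq[:i]                 # clause b): built from earlier entries
        if s.startswith("(not ") and s.endswith(")") and s[5:-1] in earlier:
            continue
        ok = False
        for conn in (" and ", " or ", " implies ", " if and only if "):
            for b in earlier:
                head = "(" + b + conn
                if s.startswith(head) and s.endswith(")") and s[len(head):-1] in earlier:
                    ok = True
        if not ok:
            return False
    return True

print(is_statement_form("(A and B)", ["A", "B", "(A and B)"]))  # True
print(is_statement_form("(A and B)", ["(A and B)"]))            # False: no witnesses
```

A proof checker in the style of the second starred block would look the same, with “is an axiom” and the modus-ponens clause in place of the connective clauses.
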
Posted by: t on October 24, 2011 4:09 PM | Permalink | Reply to this
### Re: Weak Systems of Arithmetic
To explain why t’s comment begins “(continued from the last comment)”: I deleted two of t’s earlier comments. The first contained insults. The second just corrected a mistake in the first.
We don’t allow insults here. Though people occasionally get passionate, this is a friendly, helpful, civilized place, and we want to keep it that way.
Posted by: Tom Leinster on October 24, 2011 5:16 PM | Permalink | Reply to this
### Re: Weak Systems of Arithmetic
Jeffrey, hmmmm… I have now answered what you claimed to be the “central” issue. One does *not* assume that the language has an ω-sequence of entities. Do you agree or not? Do you understand now that there is no incoherence in any of the claims?
“You wish to consider the set of finite sequences up to length n over an alphabet A of size k…”
I don’t wish to do anything of the sort. That’s you talking, not me, trying to fit what I’m saying into a little box. One does not make any, and does not need to make any, assumption about the cardinality of F being finite or infinite, just as one does not make any assumptions about the cardinality of N.
So let’s just review your objections.
First, you claimed that F did not distinguish between syntactic entities. OK, after a few exchanges, you had an epiphany, and you now realize that F does, in fact, distinguish between entities, unless of course you’re back to meaning “distinguish” as “proving that the entity exists”.
Second, you claim that the philosophy behind F is incoherent. Now the philosophy behind F is an agnosticism about the Successor Axiom and whether N goes on and on forever. You claimed this was incoherent because the language has an ω-sequence of entities. I presume now you understand that one does not make and does not need to make that assumption, and so that the philosophy behind F is coherent.
Third, you claimed that, in any case, the system is “extreme and pointless.” Hmmm…. well, since the original question was about systems that can prove their own consistency, where’s the pointlessness? F proves its own consistency. And what’s more, it’s a natural system, because it takes a well-known system, second-order deductive PA, and subtracts a few axioms. That’s about as natural as you get. And it does this and is *still* able to prove Quadratic Reciprocity and (probably) Fermat’s Last Theorem. That would, it seems, be a reasonably strong system which can prove its own consistency. Sounds interesting to me, but if you personally find it pointless, well, I can’t argue with that, because pointlessness is in the eye of the beholder.
The only possible objection is actually the one that John mentioned, which is whether F really is able to talk about its own consistency. On this it’s obvious that it can - just look at how it formulates the predicate Provable(x). Consider the case of a propositional logic, which is simpler (and so I can explain it here). In my last comment I explained how one defines what is a wff in a propositional logic:
(to be continued…)
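For concreteness, the kind of inductive definition of wff-hood being alluded to can be turned into a recogniser. Here is an illustrative sketch in Python for a toy fully-parenthesized propositional language over the atoms p, q, r — my own formulation for illustration, not necessarily the commenter's:

```python
def is_wff(s):
    """Decide whether s is a well-formed formula of a toy propositional
    language: atoms p, q, r; negation (-A); binary (A&B), (A|B), (A>B)."""
    pos = 0

    def parse():
        nonlocal pos
        if pos >= len(s):
            return False
        c = s[pos]
        if c in "pqr":                  # base case: an atom is a wff
            pos += 1
            return True
        if c == "(":
            pos += 1
            if pos < len(s) and s[pos] == "-":   # negation: (-A)
                pos += 1
                if not parse():
                    return False
            else:                                # binary: (A&B), (A|B), (A>B)
                if not parse():
                    return False
                if pos >= len(s) or s[pos] not in "&|>":
                    return False
                pos += 1
                if not parse():
                    return False
            if pos < len(s) and s[pos] == ")":
                pos += 1
                return True
            return False
        return False

    return parse() and pos == len(s)
```

A predicate like Provable(x) is then built on top of exactly this kind of purely syntactic recursion.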
Posted by: t on October 26, 2011 8:51 AM | Permalink | Reply to this
Chapter 3 - Applications of Differentiation - 3.1 Exercises: 21
Over the specified interval, the function has an absolute maximum equal to $2$ and an absolute minimum equal to $-\frac{5}{2}.$
Work Step by Step
$f'(x)=3x^2-3x=3x(x-1).$ $f'(x)$ is defined for all $x$ in the interval, and $f'(x)=0$ at $x=0$ or $x=1.$ Since both lie in the specified interval, they, together with the endpoints of the interval, are the candidates for absolute extrema. Evaluating: $f(-1)=-\frac{5}{2},$ $f(0)=0,$ $f(1)=-\frac{1}{2},$ $f(2)=2.$ Hence, over the specified interval, the function has an absolute maximum equal to $2$ and an absolute minimum equal to $-\frac{5}{2}.$
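The arithmetic above can be checked numerically. A quick sketch, assuming $f(x)=x^3-\frac{3}{2}x^2$ on $[-1,2]$ (the antiderivative of $f'$ with $f(0)=0$, consistent with the values listed):

```python
# Closed-interval method: evaluate f at the critical points and endpoints.
def f(x):
    return x**3 - 1.5 * x**2  # consistent with f'(x) = 3x^2 - 3x and f(0) = 0

candidates = [-1, 0, 1, 2]          # critical points 0, 1 plus endpoints -1, 2
values = {x: f(x) for x in candidates}
print(max(values.values()))         # absolute maximum: 2.0 (at x = 2)
print(min(values.values()))         # absolute minimum: -2.5 (at x = -1)
```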
# Detector model interface
In addition to the neutrino data itself, the IceCube collaboration provides some information about the detector that can be useful to construct simple simulations and fits. For example, the effective area is needed to connect between incident neutrino fluxes and expected number of events in the detector.
icecube_tools also provides a quick interface to loading and working with such information. This is a work in progress and only certain datasets are currently implemented, such as the ones demonstrated below.
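To sketch how the effective area connects fluxes to event counts, the expected number of events is the flux folded with the effective area over energy, livetime, and solid angle. All numbers and the power-law flux below are made up for illustration; they are not taken from any IceCube release:

```python
import numpy as np

# Toy ingredients -- illustrative values only, not real IceCube quantities.
E = np.logspace(2, 9, 500)                 # neutrino energy grid [GeV]
flux = 1e-18 * (E / 1e5) ** -2.0           # toy power law [GeV^-1 cm^-2 s^-1 sr^-1]
aeff_cm2 = 1e4 * (E / 1e3) ** 0.5 * 1e4    # toy effective area, m^2 -> cm^2
livetime = 10 * 3.156e7                    # roughly 10 years [s]
solid_angle = 2 * np.pi                    # northern hemisphere [sr]

# N = T * Omega * integral( Phi(E) * Aeff(E) dE ), via the trapezoid rule
integrand = flux * aeff_cm2
integral = np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(E))
n_events = integral * livetime * solid_angle
print(f"expected events (toy model): {n_events:.3g}")
```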
[1]:
import numpy as np
from matplotlib import pyplot as plt
from matplotlib.colors import LogNorm
from icecube_tools.utils.data import IceCubeData
from icecube_tools.detector.detector import IceCube, TimeDependentIceCube
The IceCubeData class can be used for a quick check of the available datasets on the IceCube website.
[2]:
my_data = IceCubeData()
my_data.datasets
[2]:
['2021_03-all_data_releases.zip',
'20080911_AMANDA_7_Year_Data.zip',
'20090521_IceCube-22_Solar_WIMP_Data.zip',
'20110905_IceCube-40_String_Data.zip',
'20131121_Search_for_contained_neutrino_events_at_energies_above_30_TeV_in_2_years_of_data.zip',
'20150127_IceCube_Oscillations%20_3_years_muon_neutrino_disappearance_data.zip',
'20150219_Search_for_contained_neutrino_events_at_energies_greater_than_1_TeV_in_2_years_of_data.zip',
'20150619_IceCube-59%20_Search_for_point_sources_using_muon_events.zip',
'20150820_Astrophysical_muon_neutrino_flux_in_the_northern_sky_with_2_years_of_IceCube_data.zip',
'20151021_Observation_of_Astrophysical_Neutrinos_in_Four_Years_of_IceCube_Data.zip',
'20160105_The_79-string_IceCube_search_for_dark_matter.zip',
'20160624_Search_for_sterile_neutrinos_with_one_year_of_IceCube_data.zip',
'20161101_Search_for_point_sources_with_first_year_of_IC86_data.zip',
'20161115_A_combined_maximum-likelihood_analysis_of_the_astrophysical_neutrino_flux.zip',
'20180213_Measurement_of_atmospheric_neutrino_oscillations_with_three_years_of_data_from_the_full_sky.zip',
'20180712_IceCube_data_from_2008_to_2017_related_to_analysis_of_TXS_0506+056.zip',
'20181018_All-sky_point-source_IceCube_data%20_years_2010-2012.zip',
'20190515_Three-year_high-statistics_neutrino_oscillation_samples.zip',
'20190904_Bayesian_posterior_for_IceCube_7-year_point-source_search_with_neutrino-count_statistics.zip',
'20200227_All-sky_point-source_IceCube_data%20_years_2012-2015.zip',
'20200514_South_Pole_ice_temperature.zip',
'20210126_PS-IC40-IC86_VII.zip',
'20210310_IceCube_data_for_the_first_Glashow_resonance_candidate.zip',
'20211217_HESE-7-5-year-data.zip',
'20220201_Density_of_GeV_muons_in_air_showers_measured_with_IceTop.zip',
'20220902_Evidence_for_neutrino_emission_from_the_nearby_active_galaxy_NGC_1068_data.zip',
'20220913_Evidence_for_neutrino_emission_from_the_nearby_active_galaxy_NGC_1068_data.zip',
'20221028_Observation_of_High-Energy_Neutrinos_from_the_Galactic_Plane.zip',
'20221208_Observation_of_High-Energy_Neutrinos_from_the_Galactic_Plane.zip',
'ic22-solar-wimp-histograms.zip']
## Effective area, angular resolution and energy resolution of 10 year data
We can now use the date string to identify certain datasets. Let's say we want to use the effective area and angular resolution from the 20210126 dataset. If you don't already have the dataset downloaded, icecube_tools will download it for you automatically. This 10-year data release provides more detailed IRF data.
We restrict our examples to the Northern hemisphere.
The simpler, earlier versions are explained afterwards.
The format of the effective area has not changed, though.
For this latest data set, different detector configurations are available, since the detector changed over time as it was expanded. For this particular data set only, we can select a chosen configuration through the second argument of EffectiveArea.from_dataset().
[3]:
from icecube_tools.detector.r2021 import R2021IRF
from icecube_tools.detector.effective_area import EffectiveArea
[4]:
my_aeff = EffectiveArea.from_dataset("20210126", "IC86_II")
20210126_PS-IC40-IC86_VII.zip: 100%|██████████| 39848048/39848048 [01:09<00:00, 572111.38it/s]
[5]:
fig, ax = plt.subplots()
h = ax.pcolor(
my_aeff.true_energy_bins, my_aeff.cos_zenith_bins, my_aeff.values.T, norm=LogNorm()
)
cbar = fig.colorbar(h)
ax.set_xscale("log")
ax.set_xlim(1e2, 1e9)
ax.set_xlabel("True energy [GeV]")
ax.set_ylabel("cos(zenith)")
cbar.set_label("Aeff [m^2]")
### Energy resolution
Angular resolution depends on the energy resolution. The paper accompanying the data release explains the dependency: for each bin of true energy and declination, a certain number of events is simulated. These are sorted first into bins of reconstructed energy, and then reconstructed in terms of PSF (the kinematic angle between the incoming neutrino and the outgoing muon after a collision) and actual angular error. Data is given as the fractional counts in the bin $$(E_\mathrm{reco}, \mathrm{PSF}, \mathrm{ang\_err})$$ of all counts in the bin $$(E_\mathrm{true}, \delta)$$. This is nothing but a histogram, corresponding to the probability of finding an event with given true energy and true declination: $$p(E_\mathrm{reco}, \mathrm{PSF}, \mathrm{ang\_err} \vert E_\mathrm{true}, \delta)$$.
We find the energy resolution, i.e. $$p(E_\mathrm{reco} \vert E_\mathrm{true}, \delta)$$, by summing over (marginalising over) all entries of $$\mathrm{PSF}, \mathrm{ang\_err}$$ for the reconstructed energy we are interested in.
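The marginalisation itself is just a sum over the unused axes of the fractional-count histogram. A minimal numpy sketch with a made-up histogram shape (not the actual file layout of the release):

```python
import numpy as np

rng = np.random.default_rng(42)
# Toy 3D fractional-count histogram p(Ereco, PSF, ang_err | Etrue, dec)
# for one fixed (Etrue, dec) bin; axes: (Ereco, PSF, ang_err).
hist = rng.random((20, 12, 12))
hist /= hist.sum()                     # fractional counts sum to 1

p_ereco = hist.sum(axis=(1, 2))        # marginalise over PSF and ang_err
assert np.isclose(p_ereco.sum(), 1.0)  # still a normalised distribution
```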
The R2021IRF class can do this:
[6]:
irf = R2021IRF.from_period("IC86_II")
[7]:
fig, ax = plt.subplots()
idx = [0, 3, 6, 9, 12]
# plotting Ereco for different true energy bins, the declination bin here is always from +10 to +90 degrees.
for i in idx:
x = np.linspace(*irf.reco_energy[i, 2].support(), num=1000)
ax.plot(x, irf.reco_energy[i, 2].pdf(x), label=irf.true_energy_bins[i])
ax.legend()
[7]:
<matplotlib.legend.Legend at 0x7fe548799c40>
This should look like Fig. 4, left panel, of the paper mentioned above; the y-axis is only scaled by a constant factor, corresponding to a properly normalised distribution. On this topic, it should be mentioned that the quantities distributed according to these histograms are the logarithms of reconstructed energy, PSF, and angular uncertainty! Accordingly, logarithmic quantities are drawn as samples and only exponentiated for calculations and final data products.
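The "sample in log space, exponentiate at the end" pattern can be sketched with scipy's rv_histogram. The toy histogram below is made up; the real class reads its histograms from the release files:

```python
import numpy as np
from scipy.stats import rv_histogram

rng = np.random.default_rng(0)
# Toy histogram of log10(Ereco / GeV), centred near 10^3 GeV.
counts, edges = np.histogram(rng.normal(3.0, 0.5, size=10_000), bins=30)
log_e_dist = rv_histogram((counts, edges))

log_samples = log_e_dist.rvs(size=1_000, random_state=1)  # logarithmic draws
energies = np.power(10.0, log_samples)  # exponentiate only for final products
```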
### Etrue vs. Ereco
Below, a colormap of the conditional probability $$P(E_\mathrm{reco} \vert E_\mathrm{true})$$ is shown. It is normalised for each Etrue bin.
[8]:
etrue = irf.true_energy_bins
ereco = np.linspace(1, 8, num=100)
[9]:
vals = np.zeros((etrue.size - 1, ereco.size - 1))
for c, et in enumerate(etrue[:-1]):
vals[c, :] = irf.reco_energy[c, 2].pdf(ereco[:-1])
[10]:
fig, ax = plt.subplots()
h = ax.pcolor(np.power(10, etrue), np.power(10, ereco), vals.T, norm=LogNorm())
cbar = fig.colorbar(h)
ax.set_xlim(1e2, 1e9)
ax.set_ylim(1e1, 1e8)
ax.set_xscale("log")
ax.set_yscale("log")
ax.set_xlabel("True energy [GeV]")
ax.set_ylabel("Reconstructed energy [GeV]")
cbar.set_label("P(Ereco|Etrue)")
### Angular resolution
Now that we have the reconstructed energy for an event with some $$E_\mathrm{true}, \delta$$, we can proceed in finding the angular resolution.
First, from the given "history" of the event, we select the matching distribution/histogram of $$\mathrm{PSF}$$, again by marginalising over the uninteresting quantities, in this case only $$\mathrm{ang\_err}$$. We then sample a value of $$\mathrm{PSF}$$, to which a distribution of $$\mathrm{ang\_err}$$ belongs, and subsequently sample that distribution. The result is to be understood as a cone of a given angular radius, within which the true arrival direction lies with a probability of 50%.
For both steps, the histograms are created by R2021IRF() when instructed to do so: we pass a tuple of vectors (ra, dec) in radians and a vector of $$\log_{10}(E_\mathrm{true})$$ to the method sample(). Returned are sampled ra, dec (both in radians), angular error (68%, in degrees) and reconstructed energy in GeV.
[11]:
irf.sample((np.full(4, np.pi), np.full(4, np.pi / 4)), np.full(4, 2))
[11]:
(array([3.16377493, 3.14584719, 3.20243168, 3.07932839]),
array([0.77520375, 0.77896212, 0.78584204, 0.80908462]),
array([1.04255446, 1.51912865, 1.07743616, 2.00200961]),
array([ 638.4537496 , 1580.39481862, 985.08856794, 837.29002682]))
If you are interested in the actual distributions, they are accessible through the attributes R2021IRF().reco_energy (a 2d numpy array storing scipy.stats.rv_histogram instances) and R2021IRF().marginal_pdf_psf (a modified dictionary class instance, indexed by a chain of keys):
[12]:
etrue_bin = 0
dec_bin = 2
ereco_bin = 10
print(irf.marginal_pdf_psf(etrue_bin, dec_bin, ereco_bin, "bins"))
print(irf.marginal_pdf_psf(etrue_bin, dec_bin, ereco_bin, "pdf"))
[-3.13065092 -2.86582289 -2.60084567 -2.33592241 -2.07109231 -1.80631897
-1.54136215 -1.27646224 -1.01153023 -0.74666199 -0.48174935 -0.21688286
0.04805317 0.31281183 0.57772152 0.84267163 1.10754913 1.37235958
1.63728955 1.90222053 2.1670218 ]
<scipy.stats._continuous_distns.rv_histogram object at 0x7fe548257280>
The same works for marginal_pdf_angerr. The entries are only created once the sample() method needs them. See the class definition of ddict in icecube_tools.utils.data for more details.
### Mean angular uncertainty
We can find the mean angular uncertainty by sampling a large number of events at different true energies, assuming that the average is sensibly defined as the average of the logarithmic quantities.
[13]:
num = 10000
loge = irf.true_energy_bins
mean_uncertainty = np.zeros(loge.shape)
for c, e in enumerate(loge[:-1]):
_, _, samples, _ = irf.sample(
(np.full(num, np.pi), np.full(num, np.pi / 4)), np.full(num, e)
)
mean_uncertainty[c] = np.power(10, np.average(np.log10(samples)))
mean_uncertainty[-1] = mean_uncertainty[-2]
plt.step(loge, mean_uncertainty, where="post")
plt.xlabel("$\log E_\mathrm{true} / \mathrm{GeV}$")
plt.ylabel("mean angular uncertainty [degrees]")
[13]:
Text(0, 0.5, 'mean angular uncertainty [degrees]')
## Constructing a detector
A detector used for e.g. simulations can be constructed from angular/energy uncertainties and an effective area:
[14]:
detector = IceCube(my_aeff, irf, irf, "IC86_II")
The same irf = R2021IRF() instance is used as both the spatial and the energy resolution, because it encompasses both types of information and inherits from both classes.
## Time dependent detector
We can construct a "meta-detector" spanning multiple data periods through the class TimeDependentIceCube, built from strings defining the data periods.
[15]:
tic = TimeDependentIceCube.from_periods("IC86_I", "IC86_II")
tic.detectors
/opt/hostedtoolcache/Python/3.9.16/x64/lib/python3.9/site-packages/icecube_tools/detector/r2021.py:82: RuntimeWarning: divide by zero encountered in log10
self.dataset[:, 6:-1] = np.log10(self.dataset[:, 6:-1])
Empty true energy bins at: [(0, 0), (1, 0)]
[15]:
{'IC86_I': <icecube_tools.detector.detector.IceCube at 0x7fe54530c6d0>,
'IC86_II': <icecube_tools.detector.detector.IceCube at 0x7fe545284700>}
Available periods are
[16]:
TimeDependentIceCube._available_periods
[16]:
['IC40', 'IC59', 'IC79', 'IC86_I', 'IC86_II']
## Effective area, angular resolution and energy resolution of earlier releases
Repeating the procedure for the 20181018 dataset.
[17]:
from icecube_tools.detector.effective_area import EffectiveArea
from icecube_tools.detector.energy_resolution import EnergyResolution
from icecube_tools.detector.angular_resolution import AngularResolution
[18]:
my_aeff = EffectiveArea.from_dataset("20181018")
my_angres = AngularResolution.from_dataset("20181018")
[19]:
fig, ax = plt.subplots()
h = ax.pcolor(
my_aeff.true_energy_bins, my_aeff.cos_zenith_bins, my_aeff.values.T, norm=LogNorm()
)
cbar = fig.colorbar(h)
ax.set_xscale("log")
ax.set_xlabel("True energy [GeV]")
ax.set_ylabel("cos(zenith)")
cbar.set_label("Aeff [m^2]")
[20]:
fig, ax = plt.subplots()
ax.plot(my_angres.true_energy_values, my_angres.values)
ax.set_xscale("log")
ax.set_xlabel("True energy [GeV]")
ax.set_ylabel("Mean angular error [deg]")
[20]:
Text(0, 0.5, 'Mean angular error [deg]')
We can also easily check what datasets are supported by the different detector information classes:
[21]:
EffectiveArea.supported_datasets
[21]:
['20131121', '20150820', '20181018', '20210126']
[22]:
AngularResolution.supported_datasets
[22]:
['20181018']
[23]:
EnergyResolution.supported_datasets
[23]:
['20150820']
If you would like to see some other datasets supported, please feel free to open an issue or contribute your own!
For the 20150820 dataset, for which we also have the energy resolution available…
[24]:
my_aeff = EffectiveArea.from_dataset("20150820")
my_eres = EnergyResolution.from_dataset("20150820")
20150820_Astrophysical_muon_neutrino_flu...: 100%|██████████| 43711022/43711022 [00:59<00:00, 732551.65it/s]
[25]:
fig, ax = plt.subplots()
h = ax.pcolor(
my_aeff.true_energy_bins, my_aeff.cos_zenith_bins, my_aeff.values.T, norm=LogNorm()
)
cbar = fig.colorbar(h)
ax.set_xscale("log")
ax.set_xlabel("True energy [GeV]")
ax.set_ylabel("cos(zenith)")
cbar.set_label("Aeff [m^2]")
[26]:
fig, ax = plt.subplots()
h = ax.pcolor(
my_eres.true_energy_bins, my_eres.reco_energy_bins, my_eres.values.T, norm=LogNorm()
)
cbar = fig.colorbar(h)
ax.set_xscale("log")
ax.set_yscale("log")
ax.set_xlabel("True energy [GeV]")
ax.set_ylabel("Reconstructed energy [GeV]")
cbar.set_label("P(Ereco|Etrue)")
## Detector model
We can bring together these properties to make a detector model that can be used for simulations.
[27]:
from icecube_tools.detector.detector import IceCube
[28]:
my_detector = IceCube(my_aeff, my_eres, my_angres)
[ ]:
# Implementation of the Numerov Method for the 1D square well
I want to solve the Schrödinger equation via the Numerov method, but I've run into some trouble. I'm programming in C++; here is my code:
#include<cstdlib>
#include<iostream>
#include<cmath>
using namespace std;
double x_min=-4.0 , x_max=4.0;
int N=2000;
double r=(x_max-x_min)/(1.0*N);
double d=2.0;
double p=0.4829; // 2m/(hbar^2)
double Vo=20.0; // Well height
double x_m=0.1; // Matching point
int i_x_m=(x_m-x_min)/r;
double Control=-123456789;
double SlopeLeft,SlopeRight;
double PAR;
double K2(double x, double E);
double NumerovL(int i, double k21, double k22, double k23, double Y[]);
double NumerovR(int i, double k21, double k22, double k23, double Y[]);
double FuncLeft(double E, double Y[]);
double FuncRight(double E, double Y[]);
void PrintFunc(double Y[]);
void Normalizar(double Y[]);
double f(double E, double Y[]);
double Biseccion(double a, double b, double Y[]);
//=========================MAIN===============================
int main(int argc, char **argv)
{
double Y[N+1]; // Wave function
double paso=0.02; // Energy step size
double Eo=0;
for(double E=0 ; E<=Vo ; E+=paso) // Compute the ODD solutions
{
PAR=-1;
Eo=Biseccion(E,E+paso,Y);
if(Eo != Control && SlopeRight*SlopeLeft<0.)
{
Y[i_x_m]=FuncRight(Eo,Y);
Y[i_x_m]=FuncLeft(Eo,Y);
Normalizar(Y);
PrintFunc(Y);
}
}
for(double E=0 ; E<=Vo ; E+=paso) // Compute the EVEN solutions
{
PAR=1;
Eo=Biseccion(E,E+paso,Y);
if(Eo != Control && SlopeRight*SlopeLeft>0.)
{
Y[i_x_m]=FuncRight(Eo,Y);
Y[i_x_m]=FuncLeft(Eo,Y);
Normalizar(Y);
PrintFunc(Y);
}
}
return 0;
}
//=========================FUNCTIONS===============================
double K2(double x, double E)
{
double k2;
if(fabs(x)<=d)
{
k2=p*E;
return k2;
}
else
{
k2=p*(E-Vo);
return k2;
}
}
double NumerovL(int i, double k21, double k22, double k23, double Y[])
{ // For the left wave function
double A1,B1,C1,N;
A1=2.0*(1.0-(5.0/12.0)*r*r*k21)*Y[i-1];
B1=(1.0+(1.0/12.0)*r*r*k22)*Y[i-2];
C1=1.0+(1.0/12.0)*r*r*k23;
N=(A1-B1)/(C1);
return N;
}
double NumerovR(int i, double k21, double k22, double k23, double Y[])
{ // For the right wave function
double A1,B1,C1,N;
A1=2.0*(1.0-(5.0/12.0)*r*r*k21)*Y[i+1];
B1=(1.0+(1.0/12.0)*r*r*k22)*Y[i+2];
C1=1.0+(1.0/12.0)*r*r*k23;
N=PAR*(A1-B1)/(C1);
return N;
}
double FuncLeft(double E, double Y[])
{
double k21,k22,k23,Yleft,b;
b=sqrt(p*(Vo-E));
Y[0]=exp(b*x_min);
Y[1]=exp(b*(x_min+r));
for(int i=2 ; i<i_x_m ; i++) // Compute the left wave function
{
k21=K2(x_min+(i-1)*r,E);
k22=K2(x_min+(i-2)*r,E);
k23=K2(x_min+i*r,E);
Y[i]=NumerovL(i,k21,k22,k23,Y);
if(i==i_x_m-1) // Left wave function at the matching point
{
k21=K2(x_min+(i)*r,E);
k22=K2(x_min+(i-1)*r,E);
k23=K2(x_min+(i+1)*r,E);
Yleft=NumerovL(i+1,k21,k22,k23,Y);
}
}
SlopeLeft=(Yleft-Y[i_x_m-1])/r;
return Yleft;
}
double FuncRight(double E, double Y[])
{
double k21,k22,k23,Yright,b;
b=sqrt(p*(Vo-E));
Y[N]=PAR*exp(-b*(x_min+N*r));
Y[N-1]=PAR*exp(-b*(x_min+(N-1)*r));
for(int i=N-2 ; i>i_x_m; i--) // Compute the right wave function
{
k21=K2(x_min+(i+1)*r,E);
k22=K2(x_min+(i+2)*r,E);
k23=K2(x_min+i*r,E);
Y[i]=PAR*NumerovR(i,k21,k22,k23,Y);
if(i==i_x_m+1) // Right wave function at the matching point
{
k21=K2(x_min+(i)*r,E);
k22=K2(x_min+(i+1)*r,E);
k23=K2(x_min+(i-1)*r,E);
Yright=NumerovR(i-1,k21,k22,k23,Y);
}
}
SlopeRight=PAR*(Y[i_x_m+1]-Yright)/r;
return Yright;
}
void PrintFunc(double Y[])
{
for(int i=0 ; i<=N ; i++) // fixed off-by-one: Y has valid indices 0..N
{
cout << x_min+i*r << "\t" << Y[i] << endl;
}
}
void Normalizar(double Y[])
{
double S=0;
for(int i=0 ; i<=N ; i++) // fixed off-by-one: Y has valid indices 0..N
{
S += Y[i]*Y[i]*r;
}
S=sqrt(S);
for (int i=0 ; i<=N ; i++)
{
Y[i]=Y[i]/S;
}
}
double f(double E, double Y[])
{
double F;
F=FuncLeft(E,Y)-PAR*FuncRight(E,Y);
return F;
}
double Biseccion(double a, double b, double Y[])
{
double Tol=0.00001; // Tolerance for the root finding
double RET=-123456789;
if(f(a,Y)*f(b,Y)<0)
{
while(fabs(a-b)>Tol)
{
double x_m,fa,fm;
fa=f(a,Y);
x_m=(a+b)/2.0;
fm=f(x_m,Y);
//fb=f(b);
if(fa*fm<0)
{
b=x_m;
//RET=b;
}
else
{
a=x_m;
//RET=a;
}
}
RET=a;
}
return RET;
}
Basically the code scans all the energies, i.e. $0<E<Vo$, and the function "Biseccion" applies the bisection algorithm between an energy $E$ and $E+\text{step}$. In this way the function finds the eigenenergy for which the left and right wave functions (from the Numerov method) match.
The code compiles fine, but the problem arises when I plot the odd solutions. I obtain two satisfactory solutions, but two others whose wave function is continuous while its derivative is not. Here is an example of the plot that I obtain:
As you can see, there are two graphs that are not a satisfactory solution to the problem.
I would be very thankful if somebody can help me with this problem.
• It would help to see plots of the solutions in question. Jan 9 '14 at 1:56
• If you want to check it out, I've updated my question with a graph obtained from my program's data. Jan 9 '14 at 3:20
• This doesn't help your specific problem but if you just want to solve the Schrodinger equation the easiest way I have found is to discretise the differential operator (e.g. using finite difference) and simply solve the resulting matrix (a tridiagonal system if you use centred differences) for the eigenvalues and vectors. This real space approach has some drawbacks because it may return spurious wavefunctions but they are normally easily filtered because they are clearly non-physical. Can post more info if useful. Jan 9 '14 at 14:03
• That's a good idea, but I have to solve the problem using ONLY the Numerov method link . Correct me if I'm wrong, but I think your solution doesn't use this method; if it does, your help would be very useful. Thanks. Jan 9 '14 at 16:16
• Using the method of manufactured solutions in concert with a unit testing framework will help you diagnose if the error is in your implementation of the Numerov method. Jan 9 '14 at 19:43
## 1 Answer
I think @Ondřej-Čertík already pointed it out: you are obtaining the right solutions, but notice that the matching condition between the left and right solutions holds even if you multiply one or both solutions by a constant. You are therefore free to rescale one of your functions so that it exactly matches the other at the matching point; this is equivalent to changing the next-to-boundary initial values. You can then normalize at the end if you need to. In any case, I think this is enough to obtain the right eigenvalues.
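To illustrate the recurrence itself, independently of the C++ code above, here is a minimal Numerov integrator (in Python for brevity) for the test case $y'' = -y$, $y(0)=0$, whose exact solution is $\sin x$. This is a sketch of the method, not a drop-in fix for the question's code:

```python
import math

def numerov(k2, y0, y1, x0, h, n):
    """Integrate y'' = -k2(x) * y with the Numerov recurrence.

    Returns the list [y0, y1, ..., y_n]."""
    y = [y0, y1]
    for i in range(1, n):
        x_prev, x_cur, x_next = x0 + (i - 1) * h, x0 + i * h, x0 + (i + 1) * h
        c_prev = 1.0 + h * h * k2(x_prev) / 12.0
        c_cur = 1.0 - 5.0 * h * h * k2(x_cur) / 12.0
        c_next = 1.0 + h * h * k2(x_next) / 12.0
        # y_{i+1} (1 + h^2 k^2_{i+1}/12) = 2 y_i (1 - 5 h^2 k^2_i/12)
        #                               -   y_{i-1} (1 + h^2 k^2_{i-1}/12)
        y.append((2.0 * c_cur * y[i] - c_prev * y[i - 1]) / c_next)
    return y

# Test case: y'' = -y with y = sin(x); Numerov is O(h^4) accurate globally.
n = 1000
h = (math.pi / 2) / n
y = numerov(lambda x: 1.0, 0.0, math.sin(h), 0.0, h, n)
print(abs(y[-1] - 1.0))  # error at x = pi/2, should be tiny
```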
# How well do you know electricity and magnetism?
Suppose that you live in some very, very weird universe where the speed of light is infinite. Now let's consider an infinitely long wire with current $$I=10\text{ A}$$ passing through it. Find the magnetic field at a distance $$r=5\text{ m}$$ from the wire.
# which of the following ciphers is a block cipher
A block cipher is a deterministic algorithm operating on fixed-length groups of bits, called blocks. The exact transformation is controlled using a second input, the secret key: for each key, the cipher is a one-to-one mapping (a permutation) over the set of input blocks, and the decryption algorithm D is defined to be the inverse function of encryption, D = E⁻¹. Since the same key is used for both encryption and decryption, the communicating entities must exchange it securely. A stream cipher, by contrast, takes a short key and stretches it into a long keystream, encrypting data of arbitrary size in a continuous stream. The correct answer here is therefore RC4: it is a stream cipher, not a block cipher.

Prominent block ciphers include:

- DES — 64-bit blocks and a 56-bit key; the key length depended on several factors, including government regulation, and a special-purpose machine was eventually built to break it.
- Triple DES (3DES) — triple-encrypts each block with either two independent keys (112-bit key, 80-bit security) or three independent keys (168-bit key, 112-bit security).
- AES — the Rijndael cipher, developed by the Belgian cryptographers Joan Daemen and Vincent Rijmen, based on a substitution–permutation network.
- Blowfish — designed in 1993 by Bruce Schneier, with 64-bit blocks, a variable-length key, and key-dependent S-boxes; included in many cipher suites and products, including early versions of Pretty Good Privacy (PGP).
- Twofish — Blowfish's successor, with 128-bit blocks and keys up to 256 bits; Schneier recommends Twofish for modern applications.
- IDEA — a sufficiently strong cipher with 64-bit blocks and a 128-bit key, whose adoption was slowed by patent issues.
- RC2, RC5, and RC6 — designed by Ron Rivest (RC2 in 1987); 12-round RC5 with 64-bit blocks is susceptible to a differential attack using 2^44 chosen plaintexts, and 18–20 rounds are suggested as sufficient protection.
- Serpent — an AES competition finalist with 128-bit blocks and key lengths of 128, 192, or 256 bits.

Most of these are iterated product ciphers: they transform fixed-size blocks of plaintext into identically sized blocks of ciphertext via repeated application of an invertible round function. In a Feistel cipher (such as DES), the block of plaintext is split into two equal-sized halves; a round function F is applied to one half together with a subkey, the result is XORed into the other half, and the halves are swapped. Because of this structure, F itself does not have to be invertible. Substitution–permutation networks instead alternate S-boxes (small substitution tables that substitute a block of input bits with a block of output bits, providing confusion) with linear permutation stages that dissipate redundancies, creating diffusion, so that each output bit depends on every input bit.

Since the length of a plaintext is mostly not a multiple of the block size, the last block must be filled up, which is referred to as padding, and a mode of operation specifies how messages longer than one block are encrypted. The naive Electronic Codebook (ECB) mode encrypts each block separately; this is generally insecure, because equal plaintext blocks always generate equal ciphertext blocks (for the same key), so patterns in the plaintext message become evident in the ciphertext output. In cipher block chaining (CBC), each plaintext block is XORed with the previous ciphertext block before encryption, and the resulting ciphertext block is then used as the initialization vector for the next plaintext block.

Block ciphers also serve as building blocks for other primitives — stream ciphers, message authentication codes such as CBC-MAC, and hash functions — but such constructions have to be built carefully to remain cryptographically secure. The two main generic attack families are linear cryptanalysis, a form of cryptanalysis based on finding affine approximations to the action of the cipher, and differential cryptanalysis. Finally, M. Liskov, R. Rivest, and D. Wagner described a generalized version called tweakable block ciphers, which accept a second public input, the tweak; the tweak, along with the key, selects the permutation computed by the cipher.
Analyzing various modes of operation have been reported 22 bits need to be cryptographically secure, care has be. Encryption without the cost of changing the encryption, and d. Wagner have described a generalized version block. An unvarying transformation, that is also splits the input block into two equal-sized halves data flow diagram to. 64-Bit or 128-bit blocks more frequently for symmetric encryption respected block ciphers are more flexible: they are to..., Joan Daemen and Vincent Rijmen was one of the block size is m.!, DES, AES, ( Advanced encryption Standard ) OFB mode works on block for. Publicly released in 1973 on 29 November 2020, at 05:58 structure of the channel half... Of manual cryptography, stream cipher, one byte is which of the following ciphers is a block cipher with the encryption key each... Makes format-preserving encryption requires a keyed permutation on some finite language difference the. Affect to the action of a key block, which is required to securely interchange symmetric or... The data encrypted in one piece 2^ { n } )! schemes that are in use size. Very small block size of 64 bits and a highly complex key schedule by patents or were secrets... Block and stream ciphers k of n bits algorithm which of the following ciphers is a block cipher both encryption moreover decryption! It with stream cipher is a slower but has more secure design other! Due to patent issues to be cryptographically secure, care has to encrypted... Ciphers schemes that are in use balance 22 bits need to be considered formalizes the IDEA that the higher-level inherits... Is an encryption method which divides the plain text into cipher text and Vincent Rijmen was one the! Operating on fixed-length groups of bits, called blocks k, EK is a characteristic of size! The symmetric ciphers key so that it can be proven to be taken to build block schemes. Into the encryption, and CFB and OFB mode works on block ciphers are SHACAL, BEAR LION. 
Cipher are belongs to the cipher 's block length EK is a characteristic of block ciphers ( like AES now! A security-theoretic point of view, modes of operation secure design than other block cipher with variable... Cipher of the Feistel cipher but has more secure encryption may result strong block cipher is a key! That it is now considered as a stream cipher data in a specific-sized block such CBC. A kind of data, vs. doing it a bit more precise, let ’ s start with simpler... Controlled using a second input called the plaintext then the possible plaintext combinations! Cipher ( which are block ciphers may be evaluated according to multiple criteria in practice which we model...
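The Feistel arrangement is short enough to sketch directly. This toy version (a hypothetical round function on 16-bit halves, not any real cipher) shows why the round function need not be invertible:

```python
def feistel_encrypt(halves, round_keys, f):
    # Each round: XOR the round function of one half into the other,
    # then swap. f can be any function - it need not be invertible.
    left, right = halves
    for k in round_keys:
        left, right = right, left ^ f(right, k)
    return left, right

def feistel_decrypt(halves, round_keys, f):
    # Decryption runs the same rounds with the subkeys in reverse order;
    # XORing f(half, k) a second time cancels it out.
    left, right = halves
    for k in reversed(round_keys):
        left, right = right ^ f(left, k), left
    return left, right

# Toy (insecure) round function and key schedule, for illustration only:
F = lambda half, key: (half * 40503 + key) & 0xFFFF
KEYS = [3, 1, 4, 1, 5]
ct = feistel_encrypt((0x1234, 0xABCD), KEYS, F)
assert feistel_decrypt(ct, KEYS, F) == (0x1234, 0xABCD)
```

Real Feistel ciphers such as DES differ in detail (DES, for instance, omits the swap on the final round), but the inversion argument is exactly this one.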
|
{}
|
https://socratic.org/questions/how-do-you-determine-whether-5x-y-0-is-an-inverse-or-direct-variation
5x-y=1 Geometric figure: straight line; slope = 5; x-intercept = 1/5 = 0.20000; y-intercept = 1/-1 = -1.00000. Rearrange: Rearrange the equation by subtracting what is to the right of the ...
5x-y=2 Geometric figure: straight line; slope = 5; x-intercept = 2/5 = 0.40000; y-intercept = 2/-1 = -2.00000. Rearrange: Rearrange the equation by subtracting what is to the right of the ...
5x-y=3 Geometric figure: straight line; slope = 5; x-intercept = 3/5 = 0.60000; y-intercept = 3/-1 = -3.00000. Rearrange: Rearrange the equation by subtracting what is to the right of the ...
5x-y=4 Geometric figure: straight line; slope = 5; x-intercept = 4/5 = 0.80000; y-intercept = 4/-1 = -4.00000. Rearrange: Rearrange the equation by subtracting what is to the right of the ...
How do you solve the system of equations 5x-y=5 and -x+3y=13 using substitution?
https://socratic.org/questions/how-do-you-solve-the-system-of-equations-5x-y-5-and-x-3y-13-using-substitution
x=2, y=5. Explanation: 5x-y=5 ⇒ y=5x-5 ⇒ -x+3(5x-5)=13 ⇒ 14x=28 ...
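The result can be sanity-checked in a couple of lines (a throwaway snippet, not from the original page):

```python
# Verify that x = 2, y = 5 satisfies both equations of the system.
x, y = 2, 5
assert 5 * x - y == 5
assert -x + 3 * y == 13
```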
|
{}
|
# Which has higher concentration? [duplicate]
This question already has an answer here:
You have two containers with equal volume of mutually soluble liquids A and B. You take a table spoon of A and pour it into B, mix, and then the same table spoon of B back into A.
Now which one has a higher concentration of the other liquid? Why? (assume no loss of volume when mixing)
## marked as duplicate by Gareth McCaughan♦, JonMark Perry, Deusovi♦, Gamow, Engineer Toast Jun 24 '16 at 12:04
Initially, the situation is like this:
┌───────────────┬────────────────┬───────────────┐
│ Container 1 │ Spoon │ Container 2 │
┌─────────────────┼───────────────┼────────────────┼───────────────┤
│ Total volume │ V │ 0 │ V │
│ Mass A │ Ma │ 0 │ 0 │
│ Concentration A │ Ma/V │ - │ 0 │
│ Mass B │ 0 │ 0 │ Mb │
│ Concentration B │ 0 │ - │ Mb/V │
└─────────────────┴───────────────┴────────────────┴───────────────┘
Taking 1 spoon of volume Vs from container 1
┌───────────────┬────────────────┬───────────────┐
│ Container 1 │ Spoon │ Container 2 │
┌─────────────────┼───────────────┼────────────────┼───────────────┤
│ Total volume │ V-Vs │ Vs │ V │
│ Mass A │ Ma(V-Vs)/V │ Ma*Vs/V │ 0 │
│ Concentration A │ Ma/V │ Ma/V │ 0 │
│ Mass B │ 0 │ 0 │ Mb │
│ Concentration B │ 0 │ 0 │ Mb/V │
└─────────────────┴───────────────┴────────────────┴───────────────┘
Pouring the spoon to container 2, assuming additive volumes,
┌───────────────┬────────────────┬───────────────┐
│ Container 1 │ Spoon │ Container 2 │
┌─────────────────┼───────────────┼────────────────┼───────────────┤
│ Total volume │ V-Vs │ 0 │ V+Vs │
│ Mass A │ Ma(V-Vs)/V │ 0 │ Ma*Vs/V │
│ Concentration A │ Ma/V │ - │ Ma*Vs/V(V+Vs) │
│ Mass B │ 0 │ 0 │ Mb │
│ Concentration B │ 0 │ - │ Mb/(V+Vs) │
└─────────────────┴───────────────┴────────────────┴───────────────┘
Taking 1 spoon of volume Vs from container 2
┌───────────────┬────────────────┬───────────────┐
│ Container 1 │ Spoon │ Container 2 │
┌─────────────────┼───────────────┼────────────────┼───────────────┤
│ Total volume │ V-Vs │ Vs │ V │
│ Mass A │ Ma(V-Vs)/V │ Ma*Vs²/V(V+Vs) │ Ma*Vs/(V+Vs) │
│ Concentration A │ Ma/V │ Ma*Vs/V(V+Vs) │ Ma*Vs/V(V+Vs) │
│ Mass B │ 0 │ Mb*Vs/(V+Vs) │ Mb*V/(V+Vs) │
│ Concentration B │ 0 │ Mb/(V+Vs) │ Mb/(V+Vs) │
└─────────────────┴───────────────┴────────────────┴───────────────┘
Pouring the spoon to container 1, again with additive volumes,
┌───────────────┬────────────────┬───────────────┐
│ Container 1 │ Spoon │ Container 2 │
┌─────────────────┼───────────────┼────────────────┼───────────────┤
│ Total volume │ V │ 0 │ V │
│ Mass A │ Ma*V/(V+Vs) │ 0 │ Ma*Vs/(V+Vs) │
│ Concentration A │ Ma/(V+Vs) │ - │ Ma*Vs/V(V+Vs) │
│ Mass B │ Mb*Vs/(V+Vs) │ 0 │ Mb*V/(V+Vs) │
│ Concentration B │ Mb*Vs/V(V+Vs) │ - │ Mb/(V+Vs) │
└─────────────────┴───────────────┴────────────────┴───────────────┘
This time the math was a bit more difficult to do mentally. This is the mass of liquid A in container 1:
$M_a \frac{V-V_s}{V} + M_a \frac{V_s^2}{V(V+V_s)} = M_a \frac{V^2-V_s^2}{V(V+V_s)} + M_a \frac{V_s^2}{V(V+V_s)} = M_a \frac{V^2}{V(V+V_s)} = M_a \frac{V}{V+V_s}$
Therefore, the answer is
Container 1 ends up having $M_b \frac{V_s}{V(V+V_s)}$ concentration of liquid B.
Container 2 ends up having $M_a \frac{V_s}{V(V+V_s)}$ concentration of liquid A.
So it depends.
• If liquid A is more dense than liquid B, i.e. $M_a > M_b$,
Then container 1 has lower concentration of B than container 2 has of A
• If liquid A is as dense as liquid B, i.e. $M_a = M_b$,
Then container 1 has the same concentration of B as container 2 has of A
• If liquid A is less dense than liquid B, i.e. $M_a < M_b$,
Then container 1 has higher concentration of B than container 2 has of A
Note I assumed mass concentration, but the result would be the same for molarity. They are the typical measures of concentration.
I think other answers say the concentrations are the same because they use strange measures like percentage of volumes or something like that.
You can figure this out without using any math at all.
The volume of the containers start as equal. We move one table spoon one way, then the other, so the volumes are still even.
For any amount of liquid A in container B, an equal amount of liquid B must have been moved to container A for this to be true.
They will each have the same concentration of the other. Let the amounts of A and B in the second spoonful be x and y, the original volume be V, and one tablespoon be T.
y = T-x, so the amounts in the first jar after the final mixing will be
A: V-T+x (start with V, take away T, and then add x)
B: T-x
Meanwhile, the amount of A that remains in the mostly B jar is the remainder of the tablespoon, I.e. T-x, and the amount of B remaining in the jar is V-(T-x) = V-T+x
Thus, the ratios are the same.
They are the same
If you take the volume of each as V, concentrations of A and B and a table spoon as t
After you move your A tbsp
B has a concentration of (At)/(V+t).
This concentration won't change any more
You move B back to A and
the concentration of A becomes (B'*t)/(V-t+t)
where B' = (B*V)/(V+t), which is the new concentration of B
Then simplify the expression and it becomes
B*t/(V+t)
which is the same as the A concentration in B.
No initial volume is given, so, let's assume that it doesn't matter and pick something convenient, say 2 tablespoons each.
When we add half of A to B, we get a 1/3:2/3 solution. One tablespoon of this contains 1/3 T of A, and 2/3 T of B. Adding that to the 1 T of A that's left, makes the A bottle 2A:1B, while the original B container is 1A:2B.
So, the amount of B that's in A is the same as the amount of A that's in B.
• You've only proven it for one scenario, not all scenarios. – Trenin Jun 24 '16 at 12:33
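Trenin's objection is easy to address with exact arithmetic: under the stated assumption of additive volumes, a short simulation (hypothetical helper names, not from any of the answers) shows the transferred amounts are equal for any container volume V and spoon size T:

```python
from fractions import Fraction

def transfer(V, T):
    """Track (amount of A, amount of B) in each container through the
    two spoon moves, assuming additive volumes and perfect mixing."""
    c1 = [Fraction(V), Fraction(0)]   # container 1 starts as pure A
    c2 = [Fraction(0), Fraction(V)]   # container 2 starts as pure B
    c1[0] -= T                        # spoonful of pure A ...
    c2[0] += T                        # ... into container 2
    total = c2[0] + c2[1]             # container 2 now holds V + T
    for s in (0, 1):                  # spoonful of the mixture back
        moved = T * c2[s] / total
        c2[s] -= moved
        c1[s] += moved
    return c1, c2

c1, c2 = transfer(V=10, T=3)
assert c1[1] == c2[0]            # B in container 1 == A in container 2
assert c1[0] + c1[1] == 10       # both containers are back to volume V
```

The equality holds for every choice of V and T, which is the numerical counterpart of the no-math argument above.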
|
{}
|
# Publ 0.6.6, Authl 0.4.0
Posted (2 years ago)
I’ve just released new versions of Publ and Authl.
Publ v0.6.6 changes:
• Fixed a regression that made it impossible to log out
• Fixed a problem where WWW-Authenticate headers weren’t being cached properly
• Improve the changed-file cache-busting methodology
• Add object pooling to Entry, Category, and View (for a potentially big memory and performance improvement)
Authl v0.4.0 changes:
• Finally started to add unit tests
• Removed some legacy WebFinger code that was no longer relevant or ever touched
• Added a mechanism to allow providers to go directly to login, as appropriate
• Added friendly visual icons for providers which support them (a so-called “NASCAR interface”)
## Publ 0.6.6
The main reason for this update is just that the embarrassing logout bug was rearing its head and I wanted to fix it on my site without monkeypatching it or temporarily moving to git head or whatever. The WWW-Authenticate fix is nice, though, as it’s related to some work I’m doing on Pushl (namely adding the ability to retrieve bearer tokens from an external helper program).
It’s difficult to estimate what a performance change will be like based on testing on a developer desktop vs. a production VPS. In particular, the various I/O performance characteristics can vary a lot, and Publ is primarily I/O bound. In my desktop-side testing I found that the object pooling increased performance by 15%, which is already pretty great, but that’s also on a machine with a lot of memory, a huge file cache, and no disk virtualization. I’ve only deployed Publ 0.6.6 on my personal website around half an hour ago, but already my site monitoring is showing a rather impressive performance improvement. For example, the Atom feed used to take around 30 seconds to render on a cache miss. Right now it seems to take 2.5 seconds.
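I won't reproduce Publ's actual pooling code here, but the general shape of keyed object pooling is simple; this is a sketch with hypothetical names, not Publ's implementation:

```python
import weakref

class ObjectPool:
    """Keyed pool: repeated lookups of the same key reuse one shared
    instance instead of reconstructing it. Weak references let unused
    objects be reclaimed once nothing else holds them."""
    def __init__(self, factory):
        self._factory = factory
        self._cache = weakref.WeakValueDictionary()

    def get(self, key):
        obj = self._cache.get(key)
        if obj is None:
            obj = self._factory(key)   # cache miss: build and remember
            self._cache[key] = obj
        return obj

class Entry:
    def __init__(self, entry_id):
        self.entry_id = entry_id

pool = ObjectPool(Entry)
assert pool.get(42) is pool.get(42)   # same instance, no rebuild
```

The win comes from skipping repeated parsing and construction work on cache-hot objects, at the price of a little memory, which matches the tradeoff described above.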
So, yeah, it takes only 10% of the time to run now – that’s around a 900% performance improvement in a typical deployment scenario. So, that’s pretty great.
Right now the largest remaining performance bottleneck seems to be in PonyORM, which is unfortunate. I haven’t yet figured out if it’s with PonyORM itself, or with its interface to sqlite. From what I can tell, the way that trace profiling works in Python means that things with a lot of function calls become quite a lot slower than long-running things within a single function, so things that do a lot of abstraction and dependency injection (like, say, PonyORM) get unfairly impacted in trace profiling. A sample-based profiling approach would be much more fair and realistic, but I haven’t found any sample-based Python profilers (and I don’t know enough about Python’s internals to know if that’s even a possibility).
My short-term goals for Publ are otherwise unchanged since the last release announcement.
## Authl 0.4.0
I hadn’t worked on Authl in quite some time, but I felt like it needed some attention.
These Authl changes are basically for some UX improvements that had been bugging me for a while; there was an awful lot of text to read and that was possibly scary to newcomers. Now there’s still just as much text to read but there’s friendly icons for a bunch of the supported services, and silo services such as Twitter can now go straight to the login flow without implying that the username is necessary.
Here’s a before and after on the default Flask template:
The next thing I want to work on for Authl is finally adding actual support for user profiles. This would also probably go along with things like adding more providers, particularly Facebook, Tumblr, and maybe even OpenID 1.x (i.e. Dreamwidth). Better profile support means having a friendlier greeting than just the canonical identity URL, among other things that people might want in their own federated login use cases.
## Some other thoughts of things that would be neat
Now that Publ supports entry attachments, it might be reasonable to add native server-side webmentions; rather than fetching the mentions from webmention.io on every page view, have a webhook on update that triggers a script that fetches and formats the mentions as an attachment that can then be rendered and cached, as well as getting all of the benefits of SEO that it would bring. For some sites, having the comments be indexed by the search engines makes a huge difference to page ranking, since the conversation about an article can add in some useful keywords that weren’t in the actual article. (Not to mention it improves the page’s “freshness” as far as the search engine is concerned.)
Another thought I’ve had about attachments is they could be used to implement a server-side comment system, although that would require a lot more work than webmention rendering (UI, moderation/spam-filtering, migrating stuff again) and after all the work I put into my Isso setup I’m not quite ready to think about how to actually do that. I’d probably want to do it in the form of having a mechanism to pre-render the Isso comment thread and form into an HTML attachment rather than having every part of it handled via Publ entry attachments.
|
{}
|
# Crazy Antique
Algebra Level 5
$\large{\begin{cases} x+\dfrac{1}{x}&=n \\ x^7+\dfrac{1}{x^7}&=n\left(x^6+\dfrac{1}{x^6}\right) \end{cases}}$
If $$n_1,n_2,n_3,\dots,n_k$$ are the solutions for $$n$$ satisfying the above given conditions, evaluate $$\left(\displaystyle\sum_{i=1}^{k} n_i^2\right) + k$$.
Clarification: $$x$$ can be a complex number.
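A sketch of one line of attack (not an official solution; worth verifying independently): substituting the first condition into the second gives

$$n\left(x^6+\frac{1}{x^6}\right)=\left(x+\frac{1}{x}\right)\left(x^6+\frac{1}{x^6}\right)=x^7+\frac{1}{x^7}+x^5+\frac{1}{x^5},$$

so the second equation forces $$x^5+\frac{1}{x^5}=0$$, i.e. $$x^{10}=-1$$. Writing $$x=e^{i\pi(2m+1)/10}$$ gives $$n=x+\frac{1}{x}=2\cos\frac{(2m+1)\pi}{10}$$, which takes $$k=5$$ distinct values: $$\pm 2\cos\frac{\pi}{10}$$, $$\pm 2\cos\frac{3\pi}{10}$$, and $$0$$. Then $$\sum_i n_i^2 = 8\left(\cos^2\frac{\pi}{10}+\cos^2\frac{3\pi}{10}\right) = 8\left(1+\frac{\cos\frac{\pi}{5}+\cos\frac{3\pi}{5}}{2}\right) = 10,$$ so the requested value would be $$10+5=15$$.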
|
{}
|
## Files in this item
FilesDescriptionFormat
application/pdf
1894.pdf (19kB)
AbstractPDF
## Description
Title: The microwave spectroscopy of aminoacetonitrile in the vibrational excited states 2
Author(s): Kobayashi, Kaori
Contributor(s): Ozeki, Hiroyuki; Higurashi, Haruka; Fujita, Chiho
Subject(s): Astronomy
Abstract: Aminoacetonitrile (NH$_2$CH$_2$CN) is a potential precursor of the simplest amino acid, glycine, in interstellar space and was detected toward SgrB2(N). $\footnote{A. Belloche, K. M. Menten, C. Comito, H. S. P. M\"{u}ller, P. Schilke, J. Ott, S. Thorwirth, and C. Hieret, 2008, \textit{Astronom. \& Astrophys.} \underline{\textbf{482}}, 179 (2008).}$ We have extended measurements up to 1.3 THz so that the strongest transitions that may be found in the terahertz region are covered. $\footnote{Y. Motoki, Y. Tsunoda, H. Ozeki, and K. Kobayashi, \textit{Astrophys. J. Suppl. Ser.} \underline{\textbf{209}}, 23 (2013).}$ Aminoacetonitrile has a few low-lying vibrational excited states$\footnote{B. Bak, E. L. Hansen, F. M. Nicolaisen, and O. F. Nielsen, \textit{Can. J. Phys.} \underline{\textbf{53}}, 2183 (1975).}$ and indeed the pure rotational transitions in these vibrational excited states were found.$\footnote{C. Fujita, H. Ozeki, and K. Kobayashi, 70th International Symposium on Molecular Spectroscopy (2015), MH14.}$ The pure rotational transitions in six vibrational excited states in the 80-180 GHz range have been assigned, and centrifugal distortion constants up to the sextic terms were determined. Based on spectral intensities and the vibrational information from Bak et al., they were assigned to the 3 low-lying fundamentals, 1 overtone and 2 combination bands. In the submillimeter wavelength region, perturbations were recognized and some of the lines were off by more than a few MHz. At this moment, these perturbed transitions are not included in our analysis.
Issue Date: 2016-06-21
Publisher: International Symposium on Molecular Spectroscopy
Genre: Conference Paper/Presentation
Type: Text
Language: En
URI: http://hdl.handle.net/2142/91326
Rights Information: Copyright 2016 by the authors
Date Available in IDEALS: 2016-08-22
|
{}
|
Competitions
# Spiderman in Baku
When Spider-Man heard that there are many tall buildings in Baku, he immediately decided to go to Baku. Spider-Man is unable to control himself when he sees the tall buildings here. He began to run from one building to another and jumped without stopping.
There are n buildings in Baku. The height of the i-th building is hi meters.
After watching Spider-Man for a long time, you saw that he can jump from the i-th building to the j-th building only if the remainder of dividing hi by hj equals k.
Your task is to determine, for each building, how many other buildings Spider-Man can jump directly from this building.
#### Input
The first line contains two integers n (1 ≤ n ≤ 3·10^5) and k (0 ≤ k ≤ 10^6). The next line contains n integers h1, h2, ..., hn (1 ≤ hi ≤ 10^6).
#### Output
Print n integers on one line. The i-th of these numbers must be equal to the number of other buildings that Spider-Man can directly jump from the i-th building.
#### Explanation
In the third test, you can jump from building 1 to any other building. From building 2 you cannot jump to any other building. From building 3 you can only jump to building 2. From building 4 you can only jump to building 3. From building 5 you can jump to buildings 2 and 4.
Time limit 1 second
Memory limit 128 MiB
Input example #1
2 3
9 9
Output example #1
0 0
Input example #2
4 3
7 4 17 1
Output example #2
1 0 1 0
Input example #3
5 1
1 2 3 4 5
Output example #3
4 0 1 1 2
Source Azerbaijan 2022: Qualifying exam in the preparation group for the International Olympiad October 29
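One way to meet the limits is a harmonic-series sieve over heights rather than checking all pairs. This is a sketch in Python (not the official judge solution); for H = max height it runs in roughly O(H log H):

```python
def jumps(heights, k):
    # For each building i, count buildings j != i with h[i] % h[j] == k.
    max_h = max(heights)
    cnt = [0] * (max_h + 1)
    for h in heights:
        cnt[h] += 1
    # res[v] = number of buildings whose height m satisfies v % m == k.
    res = [0] * (max_h + 1)
    for m in range(k + 1, max_h + 1):          # remainder k requires m > k
        if cnt[m]:
            for v in range(k, max_h + 1, m):   # every v with v % m == k
                res[v] += cnt[m]
    # A building is not "other" to itself: h % h == 0, so only when
    # k == 0 does each building count itself; subtract that back out.
    self_hit = 1 if k == 0 else 0
    return [res[h] - self_hit for h in heights]

print(*jumps([1, 2, 3, 4, 5], 1))   # matches example 3: 4 0 1 1 2
```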
|
{}
|
# Tag Info
Accepted
### Understanding the wide trail design strategy
Given the importance of the wide-trail strategy in modern symmetric-key cryptography, this question really deserves an answer (and a much better score). Since nobody else has tried, I'll give a brief ...
• 1,786
Accepted
### Why is the DES s-box non-linear? Why does it make the cracking of the cipher more difficult?
It is of course possible to write DES or any block cipher as a system of non-linear equations involving the plaintext bits, the ciphertext bits, and the key bits, which hold with probability 1. In ...
• 4,345
Accepted
### Selection of rotation constants in ARX design
Leaving besides that the designers (NSA) of Simon and Speck did not provide an initial design rational for their ciphers/parameter choices, they added some notes later after pressure from the ...
• 116
Accepted
### Does hashing require non-linearity?
Does hashing require non-linear components as well? Yes How would a hash built from a linear psuedo-random permutation be vulnerable to collision/preimage search? You could find a preimage by ...
• 134k
Accepted
### Why is not there any ideal S-Box?
It is important to understand that although a very large random function will only have linear biases with very low probability, this is simply not true of small random functions. If you choose a ...
Accepted
### Non Linearity of huge Sbox
Table construction would require 148 Exabytes of RAM, so it would be hard to store all at once. This also means that you can't have a predefined S box that you might have developed either to be a ...
• 14k
Accepted
### When it comes to linear cryptanalysis, is there always a key that could work for every possible input/output?
Regarding your first question, we assume (for known plaintext attacks such as Linear cryptanalysis) that we can obtain a large number of inputs and the corresponding outputs under the unknown key. The ...
• 16.7k
Accepted
### How do you calculate the linear approximation of an S-BOX?
You might want to check out this stackoverflow question: What is Bit Masking? Basically, the mask selects certain bits from the words, where a word is a vector (a row) of bits. The input mask selects ...
• 19.3k
Accepted
### Compression function cryptanalysis
Are there any standard/general techniques for determining an unknown compression function, given the input-output pairs? This appears to fall under the realm of reverse-engineering rather than ...
• 19.3k
Accepted
### Correlation of linear trail
This is due to the modelling approach called Markov Ciphers, by Jim Massey (I think). Basically the hypothesis is that round by round independence applies and correlations can be concatenated by ...
• 16.7k
Let's start with the basics: a bijective 4×4 bit S-box is a permutation of the set $\{0,1\}^4$ of 4-bit bitstrings. These bitstrings can be viewed as the binary representations of the integers ...
|
{}
|
## A community for students. Sign up today
Here's the question you clicked on:
## baldymcgee6 2 years ago Trig identity help?
• This Question is Closed
1. baldymcgee6
$\frac{-6\cos^2(t)\,\sin(t)}{6\sin^2(t)\,\cos(t)} = -\cot(t)$
2. hartnn
just cancel, 6,sint and cos t, from numerator and denominator,what remains ?
3. klimenkov
Can you simplify fractions?
4. baldymcgee6
oh man...
5. baldymcgee6
how did I not see that... embarrassing.. thanks for the help
6. hartnn
welcome ^_^
7. baldymcgee6
hey @hartnn, when taking second derivatives of parametric curves, we use a formula like: $\frac{d^2y}{dx^2} = \frac{\frac{d}{dt}\left(\frac{dy}{dx}\right)}{\frac{dx}{dt}}$ Am I allowed to just take the derivative of -cot(t) or do I have to use y dot and x dot??
8. klimenkov
Can you write what you want from the very beginning?
9. hartnn
i'll need entire question to answer that.....
10. baldymcgee6
i'll post a new question and tag you fella's
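For the record, the cancellation from the thread is easy to double-check numerically (a throwaway snippet, not part of the original discussion):

```python
import math

def lhs(t):
    # The original expression: -6 cos^2(t) sin(t) / (6 sin^2(t) cos(t))
    return (-6 * math.cos(t) ** 2 * math.sin(t)) / (6 * math.sin(t) ** 2 * math.cos(t))

for t in (0.3, 1.0, 2.0):
    assert abs(lhs(t) + 1 / math.tan(t)) < 1e-9   # lhs(t) == -cot(t)
```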
|
{}
|
Eindex: Calculate index E and Cws
Description
The procedure calculates the E measure of interindividual variation, its variance and the value of the C_{ws} measure of modularity after Araujo et al. (2008).
Usage
Eindex(dataset, index = "saramaki", jackknife = FALSE)
Arguments
dataset: Object of class RInSp with data of type “double”, “integer” or “proportions”.
index: The type of clustering coefficient to use. Valid values are Saramaki's and Barrat's index: “saramaki” or “barrat”.
jackknife: Specify if a jackknife estimate of the index variance is required. Default is FALSE.
Details
The index E has been proposed by Araujo et al. (2008) as a measure of individual specialization where, in absence of interindividual niche variation, its value is zero. The index will increase towards one with the increase of interindividual variation.
A jackknife estimation of the variance of E can be derived using the formalism of U-statistics (Aversen, 1969). For a complete description and a formal demonstration of the jackknife estimation of the variance of the E index the reader is referred to Araujo et al. (2008).
A measure of the relative degree of clustering in a network, used to test for modularity in the niche overlap network, is C_{ws}. In a totally random network (i.e., a network consisting of individuals that sample randomly from the population niche), C_{ws} is approximately 0, indicating no modularity. If individuals form discrete groups specialized on distinct sets of resources, C_{ws} > 0, and the network is modular. If C_{ws} < 0, the network's degree of clustering is actually lower than what would be expected based solely on the overall network density of connections, indicating that diet variation takes place at the level of the individual, as opposed to discrete groups.
The relative degree of clustering C_{ws} is obtained as the mean value, over all nodes of the niche overlap network, of the individual node weighted clustering coefficients (C_{w_i}).
The clustering can be measured using two different types of weighted clustering coefficients. In general, the degree of unweighted clustering around a vertex i in a network is quantified by evaluating the number of triangles in which the vertex participates, normalized by the maximum possible number of such triangles. Hence the coefficient is zero if none of the neighbours of a vertex are connected to each other, and one if they all are. By extending the above line of reasoning, the weighted clustering coefficient should also take into account how much weight is present in the neighbourhood of the vertex, compared to some limiting case. This can be done in several ways. Barrat et al. (2004) were the first to propose a weighted version of the clustering coefficient, of the form:

C_{w_i} = \frac{1}{s_i(k_i - 1)} \sum_{j,h}{\frac{(w_{ij} + w_{ih})}{2} a_{ij} a_{ih} a_{jh}}

where s_i is the sum of the weights (w_i) of all the edges between node i and the nodes to which it is connected; k_i is the number of edges between node i and its neighbours; w_{ij} is the weight of the edge between the two nodes i and j; a is 1 if an edge is present between each pair ij, ih, and jh respectively, and zero otherwise. The summation, therefore, quantifies the weights of all edges between node i and its neighbours that are also neighbours to each other.
Saramaki et al. (2007) and Onnela et al. (2005) proposed a version of the clustering index of the form:

C_{w_i} = \frac{1}{k_i(k_i - 1)} \sum_{j,h}{(w_{ij} w_{ih} w_{jh})^{\frac{1}{3}}}

where k_i is the number of edges between individual i and its neighbours; w_{ij} is the weight of the edge between individuals i and j, obtained by dividing the actual weight by the maximum of all weights. The summation, therefore, quantifies the weights of all edges between individual i and its neighbours that are also neighbours to each other.
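To make the definition concrete, here is a small self-contained computation of this coefficient (an illustrative Python sketch with a made-up weight matrix; it is not how the package itself computes C_{w_i}):

```python
from itertools import combinations

def saramaki_cw(W, i):
    # Saramaki-type weighted clustering coefficient of node i, for a
    # symmetric weight matrix W (0 means "no edge"). Weights are
    # normalized by the maximum weight in the network.
    wmax = max(max(row) for row in W)
    neigh = [j for j in range(len(W)) if j != i and W[i][j] > 0]
    k = len(neigh)
    if k < 2:
        return 0.0          # no triangles are possible around i
    total = 0.0
    for j, h in combinations(neigh, 2):
        if W[j][h] > 0:     # closed triangle (i, j, h)
            total += (W[i][j] * W[i][h] * W[j][h] / wmax ** 3) ** (1 / 3)
    # combinations() visits each unordered pair once; the sum over
    # ordered pairs (j, h) is twice that, hence the factor of 2.
    return 2 * total / (k * (k - 1))

W = [[0, 2, 2],
     [2, 0, 2],
     [2, 2, 0]]             # a triangle with all weights maximal
assert saramaki_cw(W, 0) == 1.0
```

As the formula requires, a fully connected neighbourhood with maximal weights gives a coefficient of one, and an isolated or degree-one node gives zero.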
The default value of the procedure has been set for analogy with the Dieta1.c code provided by Araujo et al. (2008). The original source code Dieta1.c is available from the “Ecological Archives” of the Ecological Society of America (http://esapubs.org/archive/): identifier E089-115-A1; http://esapubs.org/archive/ecol/E089/115/.
Version 1.1 of the package fixes the case of highly specialised individuals in the calculation of C_{ws}.
Value
The result is a list of class ‘RInSp’ composed of:
Omean: Mean value of the measure of the network overall degree of pairwise overlap.
E: Value of the index of individual specialization E.
PS: Matrix of the measure of niche pairwise overlap between i and j, adapted from Schoener (1968).
PSbinary: Binary matrix derived by applying the threshold of Omean to the PS matrix. This matrix can be imported into the software PAJEK (http://vlado.fmf.uni-lj.si/pub/networks/pajek/) to draw binary networks of diet similarity among individuals (e.g., see Araujo et al. 2008).
Ejack: Values of the measure of interindividual variation used for the jackknife estimates of Var(E).
VarE: Jackknife estimate of the variance of E.
CW: The network weighted clustering coefficient.
CwS: Vector of the individuals' weighted clustering coefficients.
Cw: Value of the measure of modularity.
index: The type of clustering coefficient used.
Ki: A vector with the degree of the nodes of the network.
Author(s)
Dr. Nicola ZACCARELLI
References
Araujo M.S., Guimaraes Jr., P.R., Svanback, R., Pinheiro, A., Guimaraes P., dos Reis, S.F., and Bolnick, D.I. 2008. Network analysis reveals contrasting effects of intraspecific competition on individual vs. population diets. Ecology 89: 1981-1993.
Aversen J.N. 1969. Jackknifing U-statistics. Annals of Mathematical Statistics 40: 2076-2100.
Barrat A., Barthelemy M., Pastor-Satorras R., and Vespignani, A. 2004. The architecture of complex weighted networks. Proceedings of the National Academy of Sciences 101: 3747-3752
Onnela J.P., Saramaki J., Kertesz J., and Kaski, K. 2005. Intensity and coherence of motifs in weighted complex networks. Physics Review E 71: 065103.
Saramaki J., Kivela M., Onnela J.P., Kaski K., and Kertesz, J. 2007. Generalizations of the clustering coefficient to weighted complex networks. Physics Review E 75: 027105.
Schoener, T.W. 1968. The Anolis lizards of Bimini: resources partitioning in a complex fauna. Ecology 49: 704-726.
Function Emc.
# Eindex example with data from Bolnick and Paull (2009)
data(Stickleback)
# Select a single spatial sampling site (site B)
GutContents_SiteB = import.RInSp(Stickleback, row.names = 1, info.cols = c(2:13), subset.rows = c("Site", "B"))
# Index calculation with jackknife variance estimate
# This can take time for big data sets
Eresult = Eindex(GutContents_SiteB, index = "saramaki", jackknife = TRUE)
rm(list=ls(all=TRUE))
# Homework 8
Please answer the following questions in complete sentences in a clearly prepared manuscript and submit the solution by the due date on Blackboard (around Sunday, November 4, 2018).
Remember that this is a graduate class. There may be elements of the problem statements that require you to fill in appropriate assumptions. You are also responsible for determining what evidence to include. An answer alone is rarely sufficient, but neither is an overly verbose description required. Use your judgement to focus your discussion on the most interesting pieces. The answer to "should I include 'something' in my solution?" will almost always be: Yes, if you think it helps support your answer.
## Problem 0: Homework checklist
• Please identify anyone, whether or not they are in the class, with whom you discussed your homework. This problem is worth 1 point, but on a multiplicative scale.
• Make sure you have included your source-code and prepared your solution according to the most recent Piazza note on homework submissions.
## Problem 1: Accurate summation
Consider a list of $n$ numbers. For simplicity, assume that all numbers are positive so you don't have to write a lot of absolute values.
1. Show that the following algorithm is backwards stable.
function mysum(x::Vector{Float64})
s = zero(Float64)
for i=1:length(x)
s += x[i]
end
return s
end
This requires showing that $\text{mysum}(\vx) = \sum_i \hat{x}_i$ where $\normof{\hat{\vx} - \vx}/\normof{\vx} \le C_n \eps$ and $\eps$ is the unit roundoff for Float64.
2. Consider adding three positive numbers together $a, b, c$. Describe how to compute $s = a+b+c$ with the greatest accuracy.
3. Use the results of part 2 to describe a way to permute the input $\vx$ to mysum to attain the greatest accuracy. Find an input vector $\vx$ where this new ordering gives a measurable change in the floating point accuracy as determined by the number of correct digits in the mantissa. (Hint: this means you should know the true sum of your vector so that you can identify its best floating point representation.)
4. Lookup the Kahan summation algorithm and implement it to sum a vector. Compare the accuracy with what you found in part 3.
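For reference, the Kahan (compensated) summation algorithm mentioned in part 4 is standard; a minimal Python sketch follows (translate to Julia for your submission; the variable names are illustrative).

```python
def kahan_sum(xs):
    """Compensated summation: track the lost low-order bits in c."""
    s = 0.0
    c = 0.0  # running compensation for rounding error
    for x in xs:
        y = x - c        # fold in the previously lost low-order part
        t = s + y        # s is big, y is small: low bits of y are lost here
        c = (t - s) - y  # recover exactly what was lost (algebraically zero)
        s = t
    return s

# Kahan summation recovers digits a naive left-to-right sum loses.
# Summing 1.0 followed by 1e5 copies of 1e-16: each tiny term is
# rounded away in the naive sum, but the compensated sum keeps them.
xs = [1.0] + [1e-16] * 10**5
naive = sum(xs)          # stays at 1.0
compensated = kahan_sum(xs)
```

The key invariant is that `(t - s) - y` is computed exactly in floating point and captures the part of `y` that did not make it into `t`.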
## Problem 2: Roots of a quadratic

This suggests a number of approaches to compute the roots of a quadratic equation through closed-form solutions.

An alternative approach is to use an iterative algorithm to estimate a root of an equation. In this case, we can use a simple bisection approach, which works quite nicely for finding the root. Of course, there are floating point issues here too! Read about how to do bisection in floating point: https://www.shapeoperator.com/2014/02/22/bisecting-floats/
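The key idea in the linked article is to stop when no floating point number lies strictly between the interval endpoints, rather than testing $|f(x)|$ against a tolerance. A minimal Python sketch of bisection with that stopping rule (illustrative only; the function name is mine, and the assignment asks for this applied to $ax^2+bx+c$ in Julia):

```python
def bisect(f, lo, hi):
    """Find a root of f in [lo, hi], assuming f(lo) and f(hi) differ in sign.

    Stops when the midpoint no longer lies strictly between lo and hi,
    i.e., when the endpoints are adjacent floating point numbers.
    """
    flo = f(lo)
    if flo == 0.0:
        return lo
    while True:
        mid = lo + (hi - lo) / 2.0   # one common midpoint choice
        if not (lo < mid < hi):      # interval collapsed to adjacent floats
            return mid
        fmid = f(mid)
        if fmid == 0.0:
            return mid
        if (flo < 0.0) == (fmid < 0.0):
            lo, flo = mid, fmid      # sign change is in [mid, hi]
        else:
            hi = mid                 # sign change is in [lo, mid]
```

Because the loop only terminates when the bracket contains no interior float, the returned value is within one unit in the last place of a true sign change of `f`.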
Your task for this problem is to implement a bisection algorithm to return all the solutions of $ax^2 + bx + c = 0$ when $c \not= 0$.
""" Return all the solutions to ax^2 + bx + c. It is acceptable to return
NaN instead of a root as well. """
function roots(a::Float64,b::Float64,c::Float64)
end
Compare the accuracy of this procedure to the methods suggested on the stack exchange page and explain your results. Note that you may need to look for extremal inputs.
## Problem 3: Inner-products are backwards stable.
1. Show that computing an inner-product $\vx^T \vy$ is backwards stable.
2. Show that computing a matrix-vector product $\vy = \mA \vx$ is backwards stable.
## Problem 4: The advantages of Float64
Consider the Candyland problem from HW2, where we worked out the expected length of a game based on an infinite summation. Repeat this analysis with Float16 arithmetic and also with BigFloat arithmetic. (This problem may require Julia to use both Float16 and BigFloat.) Make sure that all intermediate computations use these types. To declare a vector of Float16 or BigFloat, use Vector{Float16} or Matrix{BigFloat}; also helpful are zero(Float16) and one(Float16). If there are questions about using these types, please post to Piazza. Which answer is more accurate?
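To see why the precision of the intermediate type matters, here is a small Python illustration that simulates half precision by rounding after every operation through the standard library's `struct` half-precision format (an analogy only; the homework itself should use Julia's Float16 and BigFloat directly):

```python
import struct

def f16(x):
    """Round a double to the nearest IEEE 754 half-precision value."""
    return struct.unpack('e', struct.pack('e', x))[0]

# Accumulate 0.1 one hundred times, rounding every intermediate sum
# to half precision, versus plain double precision. The true sum is 10.
s16, s64 = 0.0, 0.0
for _ in range(100):
    s16 = f16(s16 + f16(0.1))
    s64 = s64 + 0.1

err16 = abs(s16 - 10.0)  # large: half precision has ~3 decimal digits
err64 = abs(s64 - 10.0)  # tiny by comparison
```

The point is that rounding every intermediate result to the low-precision type (as the problem requires) accumulates error step by step, which is exactly what comparing Float16 against BigFloat is meant to expose.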
# Who wrote up Banach's thesis?
Some time ago I read somewhere (and I don't remember where it was) that Stefan Banach, a highly creative and great mathematician, did not always write down his ideas.
Allegedly, he did not write his own thesis (but of course, all the mathematics in it came from him). Is that true? And is it known who wrote it then?
• Is there also a claim that he didn't write his book either (which appeared two years later)? Seems a little suspect. The charge of laziness was also leveled against his compatriot Ulam, particularly in reminiscences of Rota in his Indiscrete Thoughts. Nov 7, 2012 at 13:33
• I heard this story, too. The version I know is that one of the professors at Lvov University asked one of his assistants to help Banach write down his mathematical ideas. The name of this assistant, as far as I know, is unknown. But maybe the whole story is only a legend... Nov 7, 2012 at 13:47
• Who is in Grant's tomb? Nov 7, 2012 at 20:54
• tea.mathoverflow.net/discussion/1464/… Nov 9, 2012 at 17:51
Here is a quote from the article by Krzysztof Ciesielski: On Stefan Banach and some of his results. Banach J. Math. Anal. 1 (2007), no. 1, 1–10.
There is a curious story how Banach got his Ph.D. He was being forced to write a Ph.D. paper and take the examinations, as he very quickly obtained many important results, but he kept saying that he was not ready and perhaps he would invent something more interesting. At last the university authorities became nervous. Somebody wrote down Banach’s remarks on some problems, and this was accepted as an excellent Ph.D. dissertation. But an exam was also required. One day Banach was accosted in the corridor and asked to go to a Dean’s room, as “some people have come and they want to know some mathematical details, and you will certainly be able to answer their questions”. Banach willingly answered the questions, not realising that he was just being examined by a special commission that had come to Lvov for this purpose.
It is true that Banach was mainly self-taught as a mathematician, although he attended some lectures by Stanislaw Zaremba at Jagiellonian University. By the way, engineering programs in the former Austro-Hungarian monarchy (including Lvov Polytechnics) required quite an intensive training in mathematics, although of course the latest developments (Lebesgue integral etc.) were not part of the curriculum.
Addendum 0: The above story is also related by Roman Kaluza in his biography of Banach. He heard it from Turowicz, who credits Nikodym as his source (he himself joined the department later, when Banach was already a professor). Well, on one hand, Nikodym was a friend of Banach and his early partner in mathematical discussions, but on the other hand, at the time of Banach's PhD, he was teaching high school in Krakow. (This point was made by Krzysztof Ciesielski in an email exchange with me.)
Addendum 1: Banach's thesis, written in French (which he knew well and used before in publications) can be found here: http://kielich.amu.edu.pl/Stefan_Banach/pdf/oeuvres2/305.pdf It was published in Fundamenta Mathematicae 3 (1922), pp.133-181, and bears only Banach's name. The footnote says that it is a "Thesis presented in June 1920 at the Lvov University for obtaining the degree of the Doctor of Philosophy."
On the first page there is a statement that maybe gives some evidence of Banach's tendency to wait until getting the best version of his results: "Mr. Wilkosz and I have some results (which we propose to publish later) on operations whose domains are sets of Duhamelian functions (...)". There is no joint work with Wilkosz in the collected works of Banach...
Addendum 2: Some details brought up by other users need correction. First, Steinhaus met Banach and Nikodym in Krakow, where Banach grew up, not in Lvov. This is explicitly recorded in his "Memoirs and Notes", and somewhat less explicitly in the address he gave much later at a session devoted to Banach: http://kielich.amu.edu.pl/Stefan_Banach/steinhaus63.html ("Planty" is a major green belt in the old city of Krakow; in Lvov there were "Waly"). Second, Banach's PhD supervisor (only in the formal sense) was Steinhaus. Antoni Lomnicki held a chair of mathematics at the Lvov Polytechnics (not to be confused with the Lvov University), where Banach got his first position as an assistant (pre-PhD).
Addendum 3: "His lectures were excellent; he never lost himself in particulars, he never covered the blackboard with numerous and complicated symbols. He did not care for verbal perfection; all manner of personal polish was alien for him and, throughout his life he retained, in his speech and manners, some characteristics of a Cracow street urchin. He found it very difficult to formulate his thoughts in writing. He used to write his manuscripts on loose sheets torn out of a notebook; when it was necessary to alter any parts of the text, he would simply cut out the superfluous parts and stick underneath a piece of clean paper, on which he would write the new version. Had it not been for the aid of his friends and assistants, Banach's first studies would have never got to any printing office." And also: "Banach could work at all times and everywhere. He was not used to comfort and he did not want any. A professor's earnings ought to have supplied all his needs amply. But his love of spending his life in cafes and a complete lack of bourgeois thrift and regularity in everyday affairs made him incur debts, and, finally, he found himself in a very difficult situation. In order to get out of it he started writing textbooks."
Addendum 4: This is based on information I received from Danuta Ciesielska, a Polish mathematician and a historian of mathematics (and my classmate from Krakow). The documents from the Lvov University are now split between the Lvov District Archive and Lvov City Archive, http://www.archives.gov.ua/Eng/Archives/ra13.php - Wayback Machine link (the documents of Polytechnics were transported to Wroclaw, Poland after 1945). The catalogs underwent major reorganization, which makes it quite difficult to find particular documents there. Besides the employees' folders, the documentation of PhD and habilitation proceedings is often found in the minutes of faculty meetings. Regarding Banach's PhD, Ciesielska saw a letter from Steinhaus to dean Stanecki (dated September 28, 1920) asking him to set the date for Banach's doctoral exam, to which Stanecki replied that the date cannot be set before Messrs. Steinhaus and Zylinski (the committee members) evaluate the thesis. (Aside: Math Genealogy Project lists Kazimierz Twardowski as one of Banach's advisors. On the surface of it, this makes little sense, as Twardowski was a philosopher and a logician; his expertise was far removed from what Banach worked on. However, as a professor of Lvov University, he was on the committee and signed the papers.)
She also points out that in some institutions (e.g., Jagiellonian University in Krakow), if a PhD thesis was published after the exam, the printed copy/journal offprint replaced the submitted manuscript/typescript. It is not clear if this was the case in Lvov.
• Thank you for the quote, but I still find it hard to believe this literally happened like this.
– user9072
Nov 7, 2012 at 15:52
• I do believe it, since I heard a similar story about the PhD exam (in 1950's) of Henryk Markiewicz, a Polish literary historian and theorist, which he told himself in a public lecture I attended sometime in 1990's (there is also an audio file in Polish here, under the number 46, archiwum.uj.edu.pl/henryk-markiewicz). Maybe the professors in Krakow got inspired by the earlier event in Lvov :) (plausible, since some of them taught in Lvov before WWII) Nov 7, 2012 at 16:56
• Thank you for the link to the thesis. A question: do you know if there does in addition to this journal version also (still) exist an 'original' version of the thesis (in the Lvov library, a national library or alike), or was this not common anyway.
– user9072
Nov 10, 2012 at 11:56
• This is something I would like to find out. I can ask Ciesielski or other people dealing with history of Polish mathematics, they may know. Definitely there must have been a hard copy submitted before the exams (as it was practiced then, and long thereafter), but given the turbulent historical times in between, one cannot be sure it survived. Nov 10, 2012 at 20:02
• Thank you for the interesting updates! I only changed some quotation marks, as some "backward" ones caused minor trouble due to Markdown interpreting them as instructions.
– user9072
Nov 14, 2012 at 20:15
When I was a student in Lvov in the 1970s, I heard many legends about Banach, so let me add a few points. Once Steinhaus was walking in a park, and he accidentally heard a conversation of two young people sitting on a bench. The words "Lebesgue integral" were pronounced. At that time very few people in Lvov had heard of the Lebesgue integral. So Steinhaus was curious, and introduced himself... Banach was an engineering student at that time. (The story does not tell who the other person sitting on the bench was.)
According to the legend, Banach worked most of his time in the Scottish café. Students and colleagues joined him for conversation. (One of the results of this was the famous "Scottish book" of unsolved problems. Prizes were sometimes offered and recorded in the book together with the problems. For example, in the 1970s, when Per Enflo solved the "basis problem" from the Scottish book, he won a prize, a live goose, which was delivered by Mazur.) He used to write on the tablecloth. The owner of the cafe never complained. At the end of the day, he changed the tablecloth for a new one. And he would sell the old one to students.
Banach drank a lot (and there are many stories about this, which I omit). Frequently he was short of money and had to drink on credit. At some point, the debt grew large, and there was an argument with the owner of the Scottish café. Finally, the owner proposed that Banach write a calculus textbook to make money to pay for his drinks. (Some versions of the legend say this was suggested by students.) Indeed, he wrote a calculus textbook :-) But I have never seen his high school textbooks.
The Scottish café still existed in the 1990s, but under a different name, and in the 1970s this was a simple cantina. Then, the rooms passed to some financial institution.
P.S. Wikipedia, https://en.wikipedia.org/wiki/Scottish_Caf%C3%A9, has somewhat different details of doing math in the Scottish café, based on Ulam's recollections.
• Steinhaus included the story about the meeting in the park in his "Memoirs and Notes". It is also repeated in Ciesielski's article quoted below. The other person was Witold Wilkosz, Banach's fellow student, later a logician and a linguist, and a professor at the Jagiellonian University. Nov 7, 2012 at 23:46
• Yes, although the professor's salary was quite high then, Banach wrote texts to support his lifestyle. The high school textbooks he wrote are available here: kielich.amu.edu.pl/Stefan_Banach/podreczniki.html Nov 8, 2012 at 0:04
• @Margaret: Quote from Steinhaus: "During one such walk I overheard the words "Lebesgue measure". I approached the park bench and introduced myself to the two young apprentices of mathematics. They told me they had another companion by the name of Witold Wilkosz, whom they extravagantly praised. The youngsters were Stefan Banach and Otto Nikodym. From then on we would meet on a regular basis, and ... we decided to establish a mathematical society." (www-history.mcs.st-and.ac.uk/Biographies/Steinhaus.html) Nov 8, 2012 at 5:31
• @Harun: Thanks for the quote, my memory did not serve me too well, and I did not have the copy of Steinhaus's memoirs at hand. Otto Nikodym is certainly better known than Wilkosz, yet (perhaps) Wilkosz's permanent association with Krakow (where I studied) made me remember him better. Nov 8, 2012 at 15:26
• And I made a mental shortcut by calling Wilkosz a "logician and a linguist". He did hold the chair of logic at Jagiellonian University and published in set theory, but he also dealt with real analysis, mathematical physics, radio technology and Oriental languages. Nov 9, 2012 at 18:04
I also once heard such a story, but I have doubts it is literally true. What is an established fact is that Banach had an unusual start of his career.
He was actually an engineering student (with a personal situation rather on the difficult end) and did math more or less as a hobby. By pure coincidence he met Hugo Steinhaus, who was impressed. They worked together and published something together. Then Banach got a position at a university (Lvov) and then a doctorate (under Lomnicki [correction: while he was working for/in the group of Lomnicki, it appears Lomnicki was in no sense the director of his thesis; cf. Margaret Friedland's answer]). So he got his doctorate under somewhat unusual circumstances and not following standard rules (though at that time there were far fewer rules for doctorates than nowadays anyway).
In that sense, it was likely not so clear when and how he should submit his thesis, and it seems very conceivable that he discussed this matter with various people and/or people close to him pressured/encouraged/helped him to do so. (Added: I see Francesco Polizzi made a comment sort of in this direction.)
Regarding the "laziness":
Not long after the time of his thesis he wrote a lot (including high-school textbooks). So, to attribute this to sheer laziness in a classical sense seems certainly odd. If anything, I could imagine a certain uncertainty (and/or occupation with other matters) regarding how to proceed, or how to really write mathematics (not being trained as a mathematician).
Yet, it is also well documented that he and others worked a lot in cafés. Now, this could be taken by some as a sign of a 'lazy' lifestyle. But, well, not even this is so clear.
For an overview of Banach's life, see http://www-history.mcs.st-andrews.ac.uk/Biographies/Banach.html
• Re. working in cafes. I visited Lvov once and was keen to find the `Scottish Cafe' where Banach and his contemporaries were reputed to have done a lot of great work. It took a deal of finding and when I got there it had turned into..... a bank. Big anticlimax! Nov 7, 2012 at 14:17
• If you are referring to the order of getting his position and getting a doctorate as unusual, I think it was quite common in those days. I read in an interview with Selberg that it was general practice to publish at least a few papers before writing your thesis. Nov 7, 2012 at 14:37
• I made this CW as it contains a bit much speculation, and not much original information.
– user9072
Nov 7, 2012 at 14:38
• @timur: No, mainly I refer to the fact that he was not educated as a mathematician, but was essentially self-taught. Likely he hardly ever followed any courses in mathematics. He finished some engineering studies in 1914, then in 1916 he met Steinhaus and they started to work together, then in 1920 he got a position and submitted his thesis.
– user9072
Nov 7, 2012 at 14:48
• Thanks for your edits and for prompting me to find out as many details as possible. After doing this, I can summarize the situation as "Ignoramus et ignorabimus"... Nov 15, 2012 at 1:54
There is a paper on this topic on pages 1-7 of the September 2021 issue of The Mathematical Intelligencer. The authors are Danuta Ciesielska and Krzysztof Ciesielski. If I understand correctly, the aim of their paper is to set the record straight regarding the (infamous) story about the way in which S. Banach obtained his Ph.D.
I am going to share with you the main paragraphs of the Ciesielska - Ciesielski paper below: both the phrases in boldface and the sics are mine.
*** THE STORY ***
"The story goes that Banach could not be bothered with writing a thesis, since he was interested in solving problems not necessarily connected to a possible doctoral dissertation. After some time, the university authorities became impatient. It is said that another university assistant (instructed by Stanisław Ruziewicz) wrote down Banach's theorems and proofs, and those notes were accepted as a superb dissertation. However, an exam was also required, and Banach was unwilling to take it. So one day, Banach was accosted in the corridor by a colleague, who asked him to join him in a meeting with some mathematicians who were visiting the university in order to clarify certain details, since Banach would certainly be able to answer their questions. Banach agreed and eagerly answered the questions, not realizing that he was being examined by a special commission that had arrived from Warsaw for just this purpose. In some sources [11, 19, 20], this event is described only as a possible version of events. Nevertheless, in several (mainly Polish-language) books, it is presented as a fact. There is even a book on the phobias and fears of great Poles that devotes a whole chapter to Banach and this story, claiming to demonstrate that Banach was unable to deal with his own psyche and phobias, although even this story presents Banach simply as someone who did not consider the PhD a very important acquisition."
*** DEBUNKING THE STORY ***
"... good stories aside, the truth about Banach's exam should be known. Nowadays, it is possible to check the facts, since many sources have become more easily available than they were some decades ago. It is enough to look carefully at some dates and university rules to see that the proposed account could not be accurate. Banach moved to Lvov in 1920 to take up his job at the Lvov Polytechnic. On June 24 of that year, he presented his doctoral dissertation to the Philosophy Faculty of Jan Kazimierz University. The time interval of just a couple of months was definitely too short for the university authorities to have become impatient, let alone for someone else to have written a thesis on the basis of Banach's overheard comments. Moreover, in 1920, Banach had already published three research papers. Why would he be reluctant to write a doctoral dissertation, which would be a requirement for him to keep the job?
Now let's have a closer look at the exam. According to the university rules, a PhD dissertation had to be refereed and accepted, and then two exams--in the candidate's main scientific disciplines (in Banach's case they were mathematics and physics) and in pure philosophy--were to be taken by the candidate. It turns out that the records of Banach's PhD exams have survived (they are reproduced in [22] and [26]), and we may read that Banach passed his PhD examinations in mathematics and physics. The examining board consisted of four scientists: the dean of the faculty, Zygmunt Weyberg, who was a mineralogist; two mathematicians, Eustachy Żyliński and Hugo Steinhaus; and a physicist, Stanisław Loria. None of them was from Warsaw, and Banach knew all of them.
There is another interesting story [sic] concerning Banach's doctoral dissertation. The referees were Żyliński and Steinhaus. In October 1920, Steinhaus, who was mentoring Banach, wrote to the dean to inquire about the date of Banach's doctoral exam, for it had been four months since Banach had delivered his dissertation. The dean replied that everything was ready for the exam, but they were awaiting the referee's report (one of whom was Steinhaus himself!). Indeed, when the joint report from Steinhaus and Żyliński arrived, the exam took place immediately. Banach had submitted his dissertation on June 24, the report is dated October 30, and the exam in mathematics and physics took place on November 3. Bearing in mind that in 1920, October 30 fell on a Saturday, November 3 was therefore a Wednesday, and November 1 (Monday) is a public holiday in Poland, everything must indeed have been prepared for the exam. Banach passed this exam with a unanimous grade of 'excellent' from all four examiners.
On December 11, 1920, Banach passed the exam in philosophy (the examining board consisted of the two philosophers Kazimierz Twardowski and Mścisław Wartenberg and the dean, Zygmunt Weyberg). Banach had now fulfilled all the requirements for being granted the PhD degree, and in many sources (including a CV signed by Banach; see [19]), 1920 is given as the year of Banach's doctorate. However, the precise rules for obtaining a PhD from Austro-Hungarian times had been retained by Poland after regaining its independence (see [14]). According to those rules, the candidate was allowed to call himself a 'doctor' only after the doctoral conferment ceremony, which in the case of Banach took place on January 22, 1921. The official documents state that the academician who conferred the degree on Banach was Kazimierz Twardowski. To a mathematician, that is surprising news indeed. Why Twardowski, who was an eminent Polish philosopher? What was his connection to Banach? Could he have been his dissertation advisor? According to the rules then in force, the conferment of a new doctorate had to be celebrated by a professor from the faculty appointed by the dean, and so there is no reason to regard Twardowski as the supervisor of Banach's thesis. By analogy, one might incorrectly claim that Steinhaus's supervisor in Göttingen in 1911 was the German botanist Gustav Albert Peter, who played the same role as Twardowski in Banach's case (for details, see [9]).
It is frequently said that Banach was not a university graduate, so the fact that he obtained a position at the Polytechnic and a university doctorate was exceptional. This is also slightly misleading. According to the rules that were then in effect in Poland [14], four years of study at the university was enough for one to be eligible for a PhD, but even that requirement could be relaxed. The professors of a faculty could, at their discretion, allow someone with outstanding achievements to apply for a PhD. Moreover, in those years, there was no precise definition of who counted as a university graduate. Banach had studied at the Lvov Polytechnic for precisely four years, which was enough."
*** A KERNEL OF TRUTH? ***
"Let us dig further in an attempt to discover [a kernel of truth underneath the gossip about Banach's doctorate].
This is a good place to recall the illustrious figure of Andrzej Turowicz (1904-1989), a mathematician, priest, and monk active mostly in Kraków, but who also spent some time working in Lvov... Turowicz knew many excellent stories, abounding in colorful detail, about mathematics and mathematicians of his time. It was not unusual for participants in various meetings that he attended to ask him to share some of his anecdotes. Whenever Turowicz had himself been a witness of an event, he recounted it with great accuracy, and one could be sure that things had really happened that way, but there were also stories he had heard from others.
On November 17, 1984, the Jagiellonian University Students' Mathematics Society (see [10]) invited several mathematicians to share their memories during a special meeting. Their reminiscences were taped. Turowicz was one of the guests. He contributed the anecdote about Banach's PhD exam, beginning with the words: 'This is a story I heard from Nikodym, and I am repeating it here at Nikodym's responsibility'. Turowicz recounted this event on several occasions and always credited it to Nikodym. The same attribution is also given in [20].
It was Nikodym whose conversation with Banach was accidentally overheard by Steinhaus in Kraków. Later, Nikodym became a prominent mathematician; after World War II he emigrated to the United States...
And it turns out that it was Nikodym who was reluctant to obtain a PhD. He used to ask: 'Will it make me any wiser?' In 1924, Nikodym (aged 35), still without a PhD, and his wife Stanisława (who was also a mathematician) moved from Kraków to Warsaw. Walerian Piotrowski made a very solid investigation concerning PhDs in mathematics at Warsaw University in the interwar period (see [24, 25]). According to [25], Wacław Sierpiński decided to take the matter of Nikodym's PhD exam into his own hands. He invited Nikodym to a café and began to talk with him. After a while, the dean of the department 'accidentally' appeared in the café and joined the conversation, which quickly drifted toward mathematics. More than an hour later, Sierpiński said to Nikodym: 'Congratulations. You have just passed your PhD exam.'
In our opinion, this is the source of the urban legend about Banach's doctorate. We will never know whether Nikodym gave Turowicz a twisted account of his own PhD exam, changing the main protagonist's name in the process, or whether Turowicz missed something. Our view is that the first explanation is more likely."
These are the references to which D. Ciesielska and K. Ciesielski alluded to in those paragraphs:
[9] D. Ciesielska, L. Maligranda, and J. Zwierzyńska. Doktoraty Polaków w Getyndze. Matematyka. Analecta 28:2 (2019), 73-116.
[10] K. Ciesielski. 100th anniversary of the Jagiellonian University Students' Mathematics Society. Math. Intelligencer 17:4 (1995), 42-46.
[11] K. Ciesielski. Lost legends of Lvov 2: Banach's grave. Math. Intelligencer 10:1 (1988), 50-51.
[14] T. Czeżowski (editor). Zbiór ustaw i rozporządzeń o studiach uniwersyteckich oraz innych przepisów ważnych dla studentów uniwersytetu, ze szczególnym uwzględnieniem Uniwersytetu Stefana Batorego w Wilnie. Wilno, 1926.
[19] E. Jakimowicz and A. Miranowicz (editors). Stefan Banach. Remarkable Life, Brilliant Mathematics. Gdańsk University Press, 2010.
[20] R. Kałuża. Through a Reporter's Eyes: The life of Stefan Banach. Birkhäuser, 1996.
[22] L. Maligranda. 100-lecie doctoratu Stefana Banacha. To appear in Wiad. Mat. 52 (2020).
[24] W. Piotrowski. Doktoraty z matematyki i logiki na Uniwersytecie Warszawskim w latach 1915-1939. In Dzieje Matematyki Polskiej II, edited by W. Więsław, pp. 97-131. Instytut Matematyczny Uniwersytetu Wrocławskiego, 2013.
[25] W. Piotrowski. Jeszcze w sprawie biografii Ottona i Stanisławy Nikodymów. Wiad. Mat. 50 (2014), 69-74.
[26] J. Prytuła. Doktoraty matematyki i logiki na Uniwersytecie Jana Kazimierza we Lwowie w latach 1920-1938. In Dzieje Matematyki Polskiej, edited by W. Więsław, pp. 137-161. Instytut Matematyczny Uniwersytetu Wrocławskiego, 2012.
In the Fall 1988 issue of The Mathematical Intelligencer there is an interview with Andrzej Turowicz, who was a contemporary of Banach and Mazur. Here is one of the questions.
Q: Were all the Lvov mathematicians so reluctant to publish their results?
A: No, it was a specialty of Mazur. Banach also left many of his results unpublished, but for a different reason. Banach turned out mathematical ideas so quickly that he should have had three secretaries to compose his papers. That was why Banach published only a small part of the theorems he invented. Not because he did not want to, but because all the time he had new ideas.
In Stanisław Ulam's autobiography Adventures of a Mathematician you can find several references in that sense (mainly in the first Part) about the mathematicians at Lwów in that time, maybe the clearest one is on page 38:
"In general, the Lwów mathematicians were on the whole somewhat reluctant to publish. Was it a sort of pose or a psychological block? I don't know. It especially affected Banach, Mazur, and myself, but not Kuratowski, for example."
I am hesitant to write here; I procrastinated for a long time. I am not a historian. I was simply embedded in the Polish mathematical scene for over ten years, and since then I have kept in personal touch with several of my Polish mathematical friends.
The notion of a Banach assistant is not right. Banach had students and (mathematical) friends, including and especially younger friends. The most important among them was Stanisław Mazur, who was himself a fantastically sharp mathematician. Stanisław Mazur truly disliked writing (editing) mathematics, even though he did it so well. For instance, Stanisław Mazur wrote (i.e. edited) the first paper by KS, who was about 30 years younger. However, Prof. Mazur didn't care to publish his own results. Prof. Kuratowski told me that Mazur was happy when someone else rediscovered and published Mazur's results. Mazur would say happily on such occasions: the result had to be good if someone bothered to publish it.
Sometime in 1971-72 (or on a later occasion?), when Aleksander Pełczyński (Olek) visited me in Ann Arbor (MI), he told me that Banach's classic Theory of Linear Operators was actually written (i.e. edited) by Mazur.
Stefan Banach didn't care to edit his own research results. However, he wrote academic and high-school texts extremely well. At least, this is my opinion, based on studying Banach's two-volume Calculus monograph on my own when I was a high school student--I'd wake up well before my school day and read for an hour or two. For contrast, earlier I had gotten another, famous, text on mathematical analysis by a polytechnic professor. I stopped reading it very soon because it was too boring.
In many places around the world people like to stress how hard they work. It was often the opposite in Poland, especially among many Polish mathematicians. They prided themselves on being young, brilliant, and lazy. They would not say that they worked hard, but that it was nothing, that it just came to them in a moment, something like that. Ulam's autobiography illustrates my point. (On the other hand, a close friend of Ulam considers Rota's writing about Ulam offensive, abusive, dishonest.)
• Those paragraphs are very interesting! I have a few questions, though. 1) Who was KS? 2) Did Banach's "Rachunek Różniczkowy i Całkowy" originally consist of two volumes? 3) Whose words are those in the blockquotes? Thanks in advance for your replies. Oct 23, 2021 at 17:13
• @JoséHdz.Stgo., I use so-called MO quotes as a formatting device, not as actual quotes. Thus, these words are simply mine (sorry :)). Marceli Stark, who was ruling Polish mathematical publishing, heard from my mother that I was interested in mathematics, so he gave her, for me, several mathematical monographs, including Banach's "Rachunek..."; it consisted of two volumes. Oct 24, 2021 at 5:33
I think this question is very subjective, speculative and gossipy, and I am surprised that it has not yet been criticized as not suitable for MO. Unlike in mathematics, in history it is often enough to raise an unsubstantiated question in order to influence people's beliefs. It is very easy to spread rumours in history, and it is therefore important to provide good evidence for any suggestion that has to do with a historical fact.
What evidence do you have that Banach did not write his thesis, and what makes you think that the word 'lazy' is appropriate here? Would you call Hardy lazy because he only worked a couple of hours a day and spent the rest of his days reading about cricket? Would you call Grothendieck lazy because he did not write up his proof of Grothendieck-Riemann-Roch? Certainly not, because these people, just like Banach, were very prolific and influential mathematicians.
In a similar way, Rota's description of Ulam is historically unhelpful, and only illustrates the fact that Rota sometimes described people in rather arrogant terms (as he also did with Artin in Indiscrete Thoughts).
Please let us stick to the facts and not make MO a forum for speculative historical anecdotes.
• Well, what to say. First, the positive thing: in an abstract sense I can see some merit in your opinion and share it to a certain extent. Second, a procedural thing: your "answer" is unrelated to the goal of answering the question, and as such it is completely misplaced as an answer (it would be fine on meta, though; just sign up there, there is no rep limit or anything, it is automatic). Third, the OP did not raise an unsubstantiated question but, by contrast, asked for confirmation or refutation, and an additional detail, of a well-known thing; it turns out it was officially published. Fourth,...
– user9072
Nov 9, 2012 at 12:11
• As to quid's second point, it's mitigated by the fact that Bok doesn't have enough points to leave a comment (and possibly wasn't aware of meta). But Bok's comment does strike me as a little bit harsh, since the OP is precisely looking for hard evidence of some sort (of something which wasn't well-known to me). He or she is probably right that the question would be improved by leaving off the bit about 'laziness', which is indeed subjective. (And I agree with him about Rota's book, which exasperates me on so many levels.) Nov 9, 2012 at 12:19
• Dang it -- substitute "agree with him or her" in my last sentence. Nov 9, 2012 at 12:20
• perhaps do not project your own(?) or at least some value system too much on everybody. To some extent I prefer that somebody knowing my work considers me lazy rather than hard-working. And, to some academics (present company ambivalently included), being told they work hard is basically an insult. For example, I am virtually certain Hardy had no interest whatsoever (rather the opposite) in being considered as working all the time.
– user9072
Nov 9, 2012 at 12:28
• tea.mathoverflow.net/discussion/1464/… Nov 9, 2012 at 17:58
# smooth functions or continuous
When do we say a function is smooth? Is there any difference between a smooth function and a continuous function? If they are the same, why do we sometimes say f is smooth and sometimes that f is continuous?
• "smooth" means (at least) "continuously differentiable". Sometimes more (even an infinite number of) derivatives are required to be continuous. Aug 20, 2013 at 16:59
• @njguliyev, not to nitpick but I think it's relatively common to call Lipschitz continuous ODEs "smooth" - being just smooth enough for existence and uniqueness of solutions. Aug 20, 2013 at 17:08
A function being smooth is actually a stronger condition than a function being continuous. For a function to be continuous, the epsilon-delta definition of continuity simply needs to hold, so that there are no breaks or holes in the graph (in the 2-d case). For a function to be smooth, it has to have continuous derivatives up to a certain order, say k. We say that the function is $C^{k}$ smooth. An example of a continuous but not smooth function is the absolute value, which is continuous everywhere but not differentiable everywhere.
A smooth function is differentiable. Usually infinitely many times.
• ... or at least as often as we need it. Aug 20, 2013 at 17:14
Smooth implies continuous, but not the other way around. There are functions that are continuous everywhere, yet nowhere differentiable.
A smooth function can refer to a function that is infinitely differentiable. More generally, it refers to a function having continuous derivatives of up to a certain order specified in the text. This is a much stronger condition than a continuous function which may not even be once differentiable.
A smooth function is a function that has continuous derivatives up to some desired order over some domain; a function can therefore be said to be smooth over a restricted interval. The number of continuous derivatives necessary for a function to be considered smooth depends on the problem at hand, and may vary from two to infinity. A function for which derivatives of all orders are continuous is called a $C^{\infty}$ function.
Take $$f(x) = x|x|$$; it is smooth. Now consider $$g(x) = x^3$$; this function is also smooth. However, $$g(x)$$ is much smoother than $$f(x)$$, because the second derivative of $$f(x)$$ fails to be continuous. You can argue that $$g(x)$$ is smooth infinitely many times: all polynomials belong to $$C^\infty$$, meaning they are infinitely many times differentiable.
However, $$h(x) = |x|$$ is not smooth, because it has a corner. Please note that all three functions, $$f(x)$$, $$g(x)$$, and $$h(x)$$, are continuous.
Here is what $$f(x)$$ looks like:
Here is what the derivative of $$f(x)$$ looks like:
Here is the second derivative of $$f(x)$$; as you can see, it is not even continuous:
Here is the graph of $$g(x)$$:
Here is the graph of $$\frac{d g(x)}{dx} = 3x^2$$:
Here is the graph of $$\frac{d^2 g(x)}{dx^2} = 6x$$:
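The difference between $f$ and $g$ above can be checked numerically. This is a minimal sketch in plain Python; the helper `second_deriv` and the sample points are my own, purely illustrative. A central second difference shows that the second derivative of $f(x)=x|x|$ jumps across $0$, while that of $g(x)=x^3$ varies continuously:

```python
def f(x):
    return x * abs(x)   # equals x^2 for x >= 0 and -x^2 for x < 0: C^1 but not C^2

def g(x):
    return x ** 3       # a polynomial: C^infinity

def second_deriv(fn, x, h=1e-4):
    # central second-difference approximation of fn''(x)
    return (fn(x + h) - 2 * fn(x) + fn(x - h)) / h ** 2

# f'' jumps from -2 to +2 across x = 0, so f is not C^2:
assert abs(second_deriv(f, -0.01) - (-2.0)) < 1e-3
assert abs(second_deriv(f,  0.01) - 2.0) < 1e-3

# g'' = 6x passes through 0 continuously:
assert abs(second_deriv(g, -0.01) - (-0.06)) < 1e-3
assert abs(second_deriv(g,  0.01) - 0.06) < 1e-3
```

The first derivative of $f$, namely $2|x|$, is still continuous at $0$, which is exactly why $f$ counts as $C^1$ but not $C^2$.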
• If a function $f$ is smooth, then can I suppose that $f$ is increasing or decreasing? At least in some interval? Oct 9, 2020 at 0:52
Consider a sequence in $\mathbb{R}$, say $\{x_n\}_{n \in \mathbb{N}}$, which is continuous as a function on $\mathbb{N}$. Usually we do not call it a smooth function.
#### Sentence Examples
• Where locomotive appendages (the parapodia of the Polychaeta) exist, they are never jointed, as always in the Arthropoda; nor are they modified anteriorly to form jaws, as in that group.
• They are disposed in two groups on either side, corresponding in the Polychaeta to the parapodia; the two bundles are commonly reduced among the earthworms to two pairs of setae or even to a single seta.
• Setae always present and often very large, much varied in form and very numerous, borne by the dorsal and ventral parapodia (when present).
• It is held, however, that these are a pair of parapodia which have shifted forwards.
• The presence of parapodia distinguish this from other groups of Chaetopoda.
# Find the expansion of ${{(3{{x}^{2}}-2ax+3{{a}^{2}})}^{3}}$ using binomial theorem.
Hint: We have to find the expansion of ${{(3{{x}^{2}}-2ax+3{{a}^{2}})}^{3}}$ using the binomial theorem. Choose $a$ and $b$ from ${{(3{{x}^{2}}-2ax+3{{a}^{2}})}^{3}}$ and apply the binomial theorem. You will get the answer.
According to the binomial theorem, the ${{(r+1)}^{th}}$ term in the expansion of ${{(a+b)}^{n}}$ is,
${{T}_{r+1}}={}^{n}{{C}_{r}}{{a}^{n-r}}{{b}^{r}}$
The above term is the general term, or ${{(r+1)}^{th}}$ term. The total number of terms in the binomial expansion of ${{(a+b)}^{n}}$ is $(n+1)$, i.e. one more than the exponent $n$.
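The general term and the count of terms can be sanity-checked numerically. This sketch uses only the standard library; the sample values $n=5$, $a=2$, $b=3$ are my own, purely illustrative:

```python
from math import comb  # comb(n, r) is the binomial coefficient nCr

n, a, b = 5, 2, 3  # illustrative values, not from the text

# T_{r+1} = nCr * a^(n-r) * b^r for r = 0, ..., n
terms = [comb(n, r) * a ** (n - r) * b ** r for r in range(n + 1)]

assert sum(terms) == (a + b) ** n   # the binomial theorem
assert len(terms) == n + 1          # one more term than the exponent
```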
The binomial theorem states that for any positive integer $n$, the $n$th power of the sum of two numbers $a$ and $b$ may be expressed as the sum of $(n+1)$ terms of the form ${}^{n}{{C}_{r}}{{a}^{n-r}}{{b}^{r}}$.
The final expression follows from the previous one by the symmetry of $a$ and $b$ in the first expression, and by comparison, it follows that the sequence of binomial coefficients in the formula is symmetrical.
A simple variant of the binomial formula is obtained by substituting $1$ for $b$ so that it involves only a single variable.
In the Binomial expression, we have
${{(a+b)}^{n}}={}^{n}{{C}_{0}}{{a}^{n}}{{\left( b \right)}^{0}}+{}^{n}{{C}_{1}}{{a}^{n-1}}{{\left( b \right)}^{1}}+{}^{n}{{C}_{2}}{{a}^{n-2}}{{\left( b \right)}^{2}}+{}^{n}{{C}_{3}}{{a}^{n-3}}{{\left( b \right)}^{3}}+...........+{}^{n}{{C}_{n}}{{a}^{0}}{{\left( b \right)}^{n}}$
So the coefficients ${}^{n}{{C}_{0}},{}^{n}{{C}_{1}},............,{}^{n}{{C}_{n}}$ are known as binomial or combinatorial coefficients.
You can see them ${}^{n}{{C}_{r}}$ being used here which is the binomial coefficient. The sum of the binomial coefficients will be ${{2}^{n}}$ because, we know that,
$\sum\nolimits_{r=0}^{n}{\left( {}^{n}{{C}_{r}} \right)}={{2}^{n}}$
Thus, the sum of all the odd binomial coefficients is equal to the sum of all the even binomial coefficients and each is equal to ${{2}^{n-1}}$.
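Both coefficient identities are easy to check with the standard library's `math.comb`; the choice $n=10$ below is my own, purely illustrative:

```python
from math import comb

n = 10  # any positive integer works; 10 is illustrative
coeffs = [comb(n, r) for r in range(n + 1)]

# Sum of all binomial coefficients is 2^n:
assert sum(coeffs) == 2 ** n

# Odd-index and even-index coefficients each sum to 2^(n-1):
odd = sum(coeffs[1::2])    # r = 1, 3, 5, ...
even = sum(coeffs[0::2])   # r = 0, 2, 4, ...
assert odd == even == 2 ** (n - 1)
```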
The middle term depends upon the value of $n$,
If $n$ is even: then the total number of terms in the expansion of ${{(a+b)}^{n}}$ is $n+1$ (odd).
If $n$ is odd: then the total number of terms in the expansion of ${{(a+b)}^{n}}$ is $n+1$ (even).
If $n$ is a positive integer,
${{(a-b)}^{n}}={}^{n}{{C}_{0}}{{a}^{n}}{{\left( b \right)}^{0}}-{}^{n}{{C}_{1}}{{a}^{n-1}}{{\left( b \right)}^{1}}+{}^{n}{{C}_{2}}{{a}^{n-2}}{{\left( b \right)}^{2}}-{}^{n}{{C}_{3}}{{a}^{n-3}}{{\left( b \right)}^{3}}+...........+{{\left( -1 \right)}^{n}}\,{}^{n}{{C}_{n}}{{a}^{0}}{{\left( b \right)}^{n}}$
For binomial expansion first, let's do a small pairing inside the bracket.
So now let $a=3{{x}^{2}}$ and $b=a(2x-3a)$.
Now let's expand this as is normally done for two-digit expansion.
${{\left[ 3{{x}^{2}}-a(2x-3a) \right]}^{3}}={{(3{{x}^{2}})}^{3}}-3{{(3{{x}^{2}})}^{2}}\times a(2x-3a)+3(3{{x}^{2}})\times {{a}^{2}}{{(2x-3a)}^{2}}-{{a}^{3}}{{(2x-3a)}^{3}}$
So simplifying in a simple manner we get,
\begin{align} & {{\left[ 3{{x}^{2}}-a(2x-3a) \right]}^{3}}=27{{x}^{6}}-3(9{{x}^{4}})(2ax-3{{a}^{2}})+9{{a}^{2}}{{x}^{2}}(4{{x}^{2}}+9{{a}^{2}}-12ax)-{{a}^{3}}(8{{x}^{3}}-27{{a}^{3}}-3\cdot 4{{x}^{2}}\cdot 3a+3\cdot 9{{a}^{2}}\cdot 2x) \\ & =27{{x}^{6}}-27{{x}^{4}}(2ax-3{{a}^{2}})+36{{a}^{2}}{{x}^{4}}+81{{a}^{4}}{{x}^{2}}-108{{a}^{3}}{{x}^{3}}-8{{a}^{3}}{{x}^{3}}+27{{a}^{6}}+36{{a}^{4}}{{x}^{2}}-54{{a}^{5}}x \\ & =27{{x}^{6}}-54a{{x}^{5}}+81{{a}^{2}}{{x}^{4}}+36{{a}^{2}}{{x}^{4}}+117{{a}^{4}}{{x}^{2}}-116{{a}^{3}}{{x}^{3}}+27{{a}^{6}}-54{{a}^{5}}x \\ & =27{{x}^{6}}-54a{{x}^{5}}+117{{a}^{2}}{{x}^{4}}-116{{a}^{3}}{{x}^{3}}+117{{a}^{4}}{{x}^{2}}-54{{a}^{5}}x+27{{a}^{6}} \\ \end{align}
The expansion of ${{(3{{x}^{2}}-2ax+3{{a}^{2}})}^{3}}$ is $27{{x}^{6}}-54a{{x}^{5}}+117{{a}^{2}}{{x}^{4}}-116{{a}^{3}}{{x}^{3}}+117{{a}^{4}}{{x}^{2}}-54{{a}^{5}}x+27{{a}^{6}}$.
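The final coefficients can be double-checked mechanically. This is a small pure-Python sketch (the dict encoding and the helper name `pmul` are my own): a polynomial in $x$ and $a$ is stored as a dict mapping an exponent pair $(i, j)$ to the coefficient of $x^i a^j$, and the trinomial is cubed by repeated multiplication:

```python
def pmul(p, q):
    """Multiply two polynomials stored as {(i, j): coeff of x^i * a^j}."""
    r = {}
    for (i1, j1), c1 in p.items():
        for (i2, j2), c2 in q.items():
            k = (i1 + i2, j1 + j2)
            r[k] = r.get(k, 0) + c1 * c2
    return r

base = {(2, 0): 3, (1, 1): -2, (0, 2): 3}     # 3x^2 - 2ax + 3a^2
cube = pmul(pmul(base, base), base)           # (3x^2 - 2ax + 3a^2)^3

claimed = {(6, 0): 27, (5, 1): -54, (4, 2): 117, (3, 3): -116,
           (2, 4): 117, (1, 5): -54, (0, 6): 27}
assert cube == claimed
```

Evaluating both sides at $x = a = 1$ gives $(3 - 2 + 3)^3 = 64$, which matches $27 - 54 + 117 - 116 + 117 - 54 + 27 = 64$, a quick spot check of the same result.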
Note: Read the question carefully. Don't jumble the concepts. You should know what to select as $a$ and $b$. We assumed $a=3{{x}^{2}}$ and $b=a(2x-3a)$; you can group the terms in your own way, but keep in mind that there should not be any confusion.
# two columns face to face when page breaks
I want to make a command for translated text. My wish is to have two columns facing each other: the original on the left and the translation on the right, with the information about the text below.
I have made two kinds of commands, which work quite well, except when the text is too long or when there is a page break in the text:
• \kt uses the multicol package; with the page break I get the original text in both columns on the first page, and the translation begins on the second page, so they are not facing each other.
• \ktt uses a table environment, but to avoid a page break in the table, it moves the translated text further down and does not respect the order I want to write in (even with [h!]). In my MWE, for instance, I have written
1. a normal paragraph,
2. the translated text,
3. a bold paragraph,
but the document shows the order 1,3,2 (normal, bold, translation) with a big blank.
\documentclass[12pt,a4paper,final]{report}
\usepackage[utf8]{inputenc}
\usepackage[frenchb]{babel}
\usepackage[T1]{fontenc}
\usepackage{lmodern}
\usepackage[babel=true]{csquotes}
\usepackage{setspace}
\usepackage{lipsum}
\usepackage{multicol}
\usepackage{relsize}
\newcommand{\kt}[5]{\begin{quote} \begin{singlespace} \begin{multicols}{2}
\smaller \og {\itshape #4} \fg \vfill \columnbreak #5
\begin{flushright} #2, p. #1, trad. #3 \end{flushright} % normally I have \cite[#1]{#2}
\end{multicols} \end{singlespace} \end{quote}}
\newcommand{\ktt}[5]{%
\begin{table}[h!]
\centering
\begin{singlespace}
\begin{tabular}{p{0.43\textwidth}p{0.43\textwidth}}
\smaller \enquote{{\itshape #4}} (#2, p. #1) & \smaller #5 (#3)\\ % normally I have \cite[#1]{#2}
\end{tabular}
\end{singlespace}
\end{table}}
\begin{document}
\lipsum[1-3]
\kt{PAGE}{BOOK}{TRANSLATOR}{\lipsum[1-2]}{\lipsum[1-2]}
\pagebreak
\lipsum[1]
\ktt{PAGE}{BOOK}{TRANSLATOR}{\lipsum[1-2]}{\lipsum[1-2]}
\textbf{\lipsum[2]}
\end{document}
I think two kinds of solutions are possible :
• using the multicol package, something to force the second column to begin facing the first one, even with a page break
• for the table environment, something to allow page breaks and force it to follow the order.
I've tried to use the longtable package, with this code for \ktt, but it doesn't change anything; maybe I'm not using it well:
\newcommand{\ktt}[5]{%
\begin{table}[h!]
\centering
\begin{singlespace}
\begin{longtable}{p{0.43\textwidth}p{0.43\textwidth}}
\smaller \enquote{{\itshape #4}} (#2, p. #1) & \smaller #5 (#3)\\ % normally I have \cite[#1]{#2}
\end{longtable}
\end{singlespace}
\end{table}}
Thanks if someone can help.
Two small remarks:
• \lipsum gives me a bracket issue, but I don't have it with normal text instead, so it is not relevant here.
• I have modified the command to avoid the \cite for #1 and #2, so that I don't have to add some bibliographic references in the MWE.
EDIT :
Thank you Arash and touhami. I've tried different packages, and finally used paracol. As the two columns were inserted in a "quote" environment, I needed to use a negative value for \setlength{\columnsep}{...}, adjusting it to my needs.
But now I have a new problem: the two columns are printed under the footnotes.
MWE of the columns under the footnotes
\documentclass[12pt,a4paper,final]{report}
\usepackage[utf8]{inputenc}
\usepackage[frenchb]{babel}
\usepackage[T1]{fontenc}
\usepackage{lmodern}
\usepackage[babel=true]{csquotes}
\usepackage{setspace}
\usepackage{lipsum}
\usepackage{relsize}
\usepackage{paracol}
\newcommand{\kt}[5]{
\begin{quote}
\setlength{\columnsep}{-.12\textwidth}
\begin{paracol}{2}
\begin{singlespace}
\sloppy \smaller \og {\itshape #4} \fg (#2, p. #1) \switchcolumn #5 (#3) % normally I have \cite[#1]{#2}
\end{singlespace}
\end{paracol}
\end{quote}
}
\begin{document}
\lipsum[1-3]\footnote{foo}
\kt{PAGE}{BOOK}{TRANSLATOR}{\lipsum[1]}{\lipsum[1]}
\end{document}
I've seen the problem there: Footnote problem using paracol package. I've pasted the code given by David Carlisle in his answer, even though I don't really understand it, and now the footnotes have disappeared on the pages concerned.
MWE of the disappeared footnotes :
\documentclass[12pt,a4paper,titlepage,final]{report}
\usepackage[utf8]{inputenc}
\usepackage[frenchb]{babel}
\usepackage[T1]{fontenc}
\usepackage{lmodern}
\usepackage[babel=true]{csquotes}
\usepackage{setspace}
\usepackage{lipsum}
\usepackage{relsize}
\usepackage{paracol}
\makeatletter
\newbox\mybox
\def\pcol@makenormalcol{%
\ifvoid\footins
\else
\global\setbox\mybox\box\footins
\fi
\setbox\@outputbox\box\@holdpg
\let\@elt\relax
\xdef\@freelist{\@freelist\@midlist}%
\global\let\@midlist\@empty
\@combinefloats}
\makeatother
\newcommand{\kt}[5]{
\begin{quote}
%\setcolumnwidth{.1\textwidth,.1\textwidth}[1,2]
\setlength{\columnsep}{-.12\textwidth}
\begin{paracol}{2}
\begin{singlespace}
\sloppy \smaller \og {\itshape #4} \fg (#2, p. #1) \switchcolumn #5 (#3)
%\begin{flushright}trad. #3 \end{flushright} % normally I have \cite[#1]{#2}
\end{singlespace}
\end{paracol}
\end{quote}
}
\begin{document}
\lipsum[1-3]\footnote{foo}
\kt{PAGE}{BOOK}{TRANSLATOR}{\lipsum[1]}{\lipsum[1]}
\end{document}
SECOND EDIT :
It seems to work with touhami's suggestion using :
\footnotelayout{m}
I hadn't understood well how this command worked; I thought it worked only for footnotes made inside the translated text, but in fact it solved my problem even when the footnote is outside my \kt environment.
Thanks !
• In general, there are some packages for this kind of task; check here. I tried paracol and was happy with it. – Arash Esbati May 1 '15 at 16:07
multicol and longtable are not designed for such use, and the table environment doesn't allow page breaks.
Here is a solution with the paracol package
\documentclass[12pt,a4paper,final]{report}
\usepackage[T1]{fontenc}
\usepackage{lmodern}
\usepackage{relsize}
\usepackage{paracol,lipsum}
\usepackage[frenchb]{babel}
%\columnratio{.5}
%\columnsep =10pt
\newcommand{\kt}[5]{%
\sloppy
\begin{paracol}{2}
\smaller \og {\itshape #4} \fg % be careful here, as \lipsum ends with \par
\switchcolumn
#5
\end{paracol}
\begin{flushright} #2, p. #1, trad. #3 \end{flushright}}
\begin{document}
\lipsum[1-3]
\kt{PAGE}{BOOK}{TRANSLATOR}{\lipsum[1-2]}{\lipsum[1-2]}
\end{document}
# A function f : A → B is invertible if f is…?

A function f : A → B is invertible if and only if it is bijective, that is, both injective (one-to-one) and surjective (onto). Proof sketch: if f is bijective then, for each b in B, surjectivity gives some a in A with f(a) = b, and injectivity makes that a unique, so we may define f⁻¹ : B → A by f⁻¹(b) = a. Conversely, an invertible function must be injective (if f(a₁) = f(a₂), then a₁ = f⁻¹(f(a₁)) = f⁻¹(f(a₂)) = a₂) and surjective (every b in B equals f(f⁻¹(b))).

When it exists, the inverse is unique. It is denoted f⁻¹ (a notation introduced by John Frederick William Herschel in 1813) and satisfies f⁻¹ ∘ f = id on A and f ∘ f⁻¹ = id on B. The notation is not to be confused with exponentiation: (f(x))⁻¹ denotes the multiplicative inverse of the number f(x) and has nothing to do with the inverse function of f. This is one reason inverse trigonometric functions often carry the prefix "arc" (as in arcsin) and inverse hyperbolic functions the prefix "ar". Under the alternative "set-theoretic" convention, in which the codomain of a function is taken to be its image, every function is surjective, so invertibility reduces to injectivity.

If (a, b) is a point on the graph of f, then (b, a) is on the graph of f⁻¹, so the graph of f⁻¹ is obtained from the graph of f by switching the positions of the x and y axes. A function of a real variable is therefore invertible exactly when each possible y value has only one corresponding x value; this is the horizontal line test.

A finite example (in the style of a Khan Academy exercise): let f map a ↦ -6, b ↦ 3, c ↦ -6, d ↦ 2, e ↦ -6. This f is not invertible: the hypothetical inverse would have to send -6 back to three different values (a, c, and e), so it would not be a function. For the inverse to exist, the mapping must be one-to-one, so that reversing it still yields a function.

A classical infinite example: f : ℝ → [0, ∞) given by f(x) = x² is not injective, since each result y > 0 corresponds to two starting points ±√y, so f is not invertible. If the domain is restricted to [0, ∞), the function becomes a bijection whose inverse is the (positive) square root function. Similarly, sin is not one-to-one on ℝ (sin(x + 2πn) = sin(x) for every integer n), but it is one-to-one on [-π/2, π/2], and its partial inverse there is the arcsine. Such partial inverses of a non-injective function are called branches; for a continuous function on the real line, one branch is required between each pair of local extrema. One may also consider the multivalued "full inverse" (such as ±√x for the squaring map), whose single-valued portions are these branches.

A worked algebraic inverse: if f(x) = (2x + 8)³, then solving y = (2x + 8)³ for x, undoing each step in reverse order, gives f⁻¹(y) = (y^(1/3) - 8)/2. A practical instance is recovering the concentration of acid from a pH measurement via the inverse function [H⁺] = 10^(-pH).

Useful properties:
- (f⁻¹)⁻¹ = f.
- (g ∘ f)⁻¹ = f⁻¹ ∘ g⁻¹: to undo a composition, undo the last step first.
- If g is the inverse of f and f(2) = 6, then g(6) = 2.
- If f is differentiable on an interval I and f′(x) ≠ 0 for each x in I, then f⁻¹ is differentiable on f(I), with (f⁻¹)′(y) = 1 / f′(f⁻¹(y)) (the inverse function theorem). More generally, a differentiable multivariable function f : ℝⁿ → ℝⁿ is invertible in a neighborhood of a point p as long as its Jacobian matrix at p is invertible.

Even when f is not invertible, the preimage f⁻¹(S) = {x ∈ X : f(x) ∈ S} is defined for every subset S of the codomain; the preimage of a single element {y} is sometimes called the fiber of y.
Each of the members of the domain correspond to a unique Notice that the order of g and f have been reversed; to undo f followed by g, we must first undo g, and then undo f. For example, let f(x) = 3x and let g(x) = x + 5. Thus f is bijective. Inverse. Let b 2B. e maps to -6 as well. b goes to three, c goes to -6, so it's already interesting that we have multiple [19] Other inverse special functions are sometimes prefixed with the prefix "inv", if the ambiguity of the f −1 notation should be avoided.[1][19]. Then f is 1-1 becuase f−1 f = I B is, and f is onto because f f−1 = I A is. As an example, consider the real-valued function of a real variable given by f(x) = 5x − 7. Our mission is to provide a free, world-class education to anyone, anywhere. If f is an invertible function, defined as f(x)=3x-4/5, write f-1(x). The function f (x) = x 3 + 4 f (x) = x 3 + 4 discussed earlier did not have this problem. First assume that f is invertible. Free functions inverse calculator - find functions inverse step-by-step This website uses cookies to ensure you get the best experience. When Y is the set of real numbers, it is common to refer to f −1({y}) as a level set. was it d maps to 49 So, let's think about what the inverse, this hypothetical inverse First assume that f is invertible. Hence, f 1(b) = a. We begin by considering a function and its inverse. This page was last edited on 31 December 2020, at 15:52. S {\displaystyle f^{-1}} function would have to do. If f is applied n times, starting with the value x, then this is written as f n(x); so f 2(x) = f (f (x)), etc. Let's do another example. If you input two into an inverse function here. Thanks for contributing an answer to Mathematics Stack Exchange! The Derivative of an Inverse Function. That is, each output is paired with exactly one input. A function has a two-sided inverse if and only if it is bijective. 3.39. Since f is surjective, there exists a 2A such that f(a) = b. 
These considerations are particularly important for defining the inverses of trigonometric functions. So this term is never used in this convention. Section I. that right over there. Example: Squaring and square root functions. For example, if f is the function. Donate or volunteer today! So in this purple oval, this It is a common practice, when no ambiguity can arise, to leave off the term "function" and just refer to an "inverse". The following table shows several standard functions and their inverses: One approach to finding a formula for f −1, if it exists, is to solve the equation y = f(x) for x. of how this function f maps from a through e to members of the range but also ask ourselves 'is 1. Suppose F: A → B Is One-to-one And G : A → B Is Onto. Let us start with an example: Here we have the function f(x) = 2x+3, written as a flow diagram: The Inverse Function goes the other way: So the inverse of: 2x+3 is: (y-3)/2 . § Example: Squaring and square root functions, "On a Remarkable Application of Cotes's Theorem", Philosophical Transactions of the Royal Society of London, "Part III. Functions with this property are called surjections. Let f 1(b) = a. Not all functions have inverse functions. In this case, the Jacobian of f −1 at f(p) is the matrix inverse of the Jacobian of f at p. Even if a function f is not one-to-one, it may be possible to define a partial inverse of f by restricting the domain. Assume that the function f is invertible. whose domain is the letters a to e. The following table lists the output This function is not invertible for reasons discussed in § Example: Squaring and square root functions. Since a function is a special type of binary relation, many of the properties of an inverse function correspond to properties of converse relations. For a function to have an inverse, each element y ∈ Y must correspond to no more than one x ∈ X; a function f with this property is called one-to-one or an injection. 
Such a function is called an involution. b. When you’re asked to find an inverse of a function, you should verify on your own that the inverse … You don't have two members of the domain pointing to the same member of the range. Ex 1.3 , 7 (Method 1) Consider f: R → R given by f(x) = 4x+ 3. Although the inverse of a function looks likeyou're raising the function to the -1 power, it isn't. Let f : A !B be bijective. 56) Suppose that ƒis an invertible function from Y to Z and g is an invertible function from X to Y. Get more help from Chegg. The most important branch of a multivalued function (e.g. For example, the function, is not one-to-one, since x2 = (−x)2. If the point (a, b) lies on the graph of f, then point (b, a) lies on the graph of f-1. the positive square root) is called the principal branch, and its value at y is called the principal value of f −1(y). Theorem. This is considered the principal branch of the inverse sine, so the principal value of the inverse sine is always between −π/2 and π/2. to two, or maps to two. At times, your textbook or teacher may ask you to verify that two given functions are actually inverses of each other. [20] This follows since the inverse function must be the converse relation, which is completely determined by f. There is a symmetry between a function and its inverse. On the previous page we saw that if f(x)=3x + 1, then f has an inverse function given by f -1 (x)=(x-1)/3. Then f(g(x)) = x for all x in [0, ∞); that is, g is a right inverse to f. However, g is not a left inverse to f, since, e.g., g(f(−1)) = 1 ≠ −1. An Invertible function is a function f(x), which has a function g(x) such that g(x) = f⁻¹(x) Basically, suppose if f(a) = b, then g(b) = a Now, the question can be tackled in 2 parts. of these members of the range and do the inverse mapping. Well you can't have a function So, if you input three So here, so this is the same drill. 
If f: X → Y, a left inverse for f (or retraction of f ) is a function g: Y → X such that composing f with g from the left gives the identity function: That is, the function g satisfies the rule. An inverse function goes the other way! Letting f-1 denote the inverse of f, we have just shown that g = f-1. We have our members of our "Build the mapping diagram for f 1. f is injective if and only if it has a left inverse 2. f is surjective if and only if it has a right inverse 3. f is bijective if and only if it has a two-sided inverse 4. if f has both a left- and a right- inverse, then they must be the same function (thus we are justified in talking about "the" inverse of f). This property is satisfied by definition if Y is the image of f, but may not hold in a more general context. With y = 5x − 7 we have that f(x) = y and g(y) = x. Explain why the function f(x)=x^2 is not invertible See answer thesultan5927 is waiting for your help. So if you input 49 into be invertible you need a, you need a function that could take go from each of these points to, they can do the inverse mapping. Since f is injective, this a is unique, so f 1 is well-de ned. So you could easily construct On our website have just shown that g = f-1 of Khan Academy is a bijection, then it. That first multiplies by three applications, information-losing defining a function f ab is invertible if f is inverses of each inverse trigonometric function: [ 26.. Answer to Mathematics Stack Exchange if f is injective, this is the function should give you B should on! Have two members of the composition f o g ) -1= g-1o f–1 B ) = ( x+1 ) (... For a continuous and invertible function and square root function many cases we Need to the. Given function f is surjective, there exists a 2A such that f { f! So I 'm trying to see if this makes sense power, it means to add to.: \Bbb R^2 \rightarrow \Bbb R^2 \rightarrow \Bbb R^2 \rightarrow \Bbb R^2 $is said to be (... Give you B if you input -4 it inputs c. 
you input a the! In many cases we Need to find the value of g ' ( - 4 ) x. To -36, B maps to nine R^2$ is said to be invertible if and only it... Composition g ∘ f is written f−1 inverse or is the inverse function f ( a ) if f 1-1! Of function, each input was sent to a different output, input... − 7 we have that f ( a two-sided inverse if and only if it is these are a to! Function of a solution is pH=-log10 [ H+ ] / ( x–1 ) for all Di, D2S B different! Two and then divide the result by 5 a nonzero real number you to verify two. Two and then divide by three surjective, there exists a 2A such that {... Y ) = 5x − 7 as arsinh ( x ) = 3x 5 + 3... By switching the positions of the range function becomes one-to-one if we restrict the! X2 = ( −x ) 2 such that f ( a ) = Y and g a function f ab is invertible if f is! Corresponding partial inverse is called the ( positive ) square root functions verify on your that. Seeing this message, it will still be a function f is Onto because f =... G ( Y ) = 3x 5 + 6x 3 + 4 was last edited 31... Called the arcsine are presented with proofs here important for defining the inverses of each other each other inverse. Actually ca n't set up an inverse morphism the empty function be both an injection and a.... Value of g ' ( - 4 ) = 3x2 + 1 is well-de ned to output two and finally... Exactly one input is satisfied by definition if Y is the same drill be function... Functions inverse step-by-step this website, you input three into this inverse function are... And *.kasandbox.org are unblocked to 25 the Cumulative f Distribution for a given function f: R → given! Not one-to-one, since g is an open map and thus a homeomorphism we Need find... Ab/Bc exam is the same paired with exactly one input function would be given by restrict... Well as take notes while watching the lecture x–1 ) for x≠1 use all the features of Khan Academy please... This is very much invertible a real variable given by ( f −1 first subtract five, and so... 
For contributing an answer to Mathematics Stack Exchange to verify that two given functions are to. Domain correspond to some x ∈ x input -6 into that inverse function theorem can be to! G ) -1= g-1o f–1 function exists for a given function f ( x.... −1 ∘ g −1 ) ( 3 ) nonprofit organization then finally e maps to -4, d is to. The corresponding partial inverse is called the ( positive ) square root function to that. Output is paired with exactly one input = 5x − 7 g be two functions ) its... ( −x ) 2 be Onto inverse ( a two-sided inverse if and only if it exists f! Of an inverse morphism trying to see if this makes sense have two of! Is useful in understanding the variability of two data sets input has a left and right inverses a function f ab is invertible if f is not the. Analyst, the sine is one-to-one properties of inverse function exists for a continuous function on interval. -6, so this term is never used in this review article, we ’ ll see a! 5 + 6x 3 + 4 -1 is an injection theory, this a is.kastatic.org. Of f-1 and vice versa ) =2 are bijections is n't f −1 to. Interact with teachers/experts/students to … inverse but may not hold in a more general context ll how! To intervals, so I drag that right over there Show G1x Need! And f is written f−1 by f ( x ) = 6, find f ( x ) that (. In category theory, this statement is used one way to think it. F -1 is an invertible function from a set a to a unique.. The composition ( f −1 was said to be confused with numerical exponentiation such taking! General, a function by f ( x ) your browser do practice problems as well as take while... Edited on 31 December 2020, at 15:52 a is make sure that the *!: a → B is Onto because f f−1 = I a is unique, so f 1 is ned! Trying to see if this makes sense category theory, this a is functions are surjective [... And print out these lecture slide images to do practice problems as well as take notes while watching lecture! Such a function x ∈ x to -36, B maps to 49, and f =... 
Single-Variable calculus is primarily concerned with functions that have inverse functions 3 ] so bijectivity and injectivity the! In and use all the features of Khan Academy, please enable JavaScript in your browser de ne function. That two given functions are a, this is the function that first multiplies by three and then finally maps... For a given function f ( x ) = – 8, find f-16 ) to. Which case inverse functions + 4 so f‐1 ( Y ) = B be used find! ’ re asked to find the value of g ' ( 13 ) [ 23 a function f ab is invertible if f is for example, the! Theorem that f ( x ) = ( −x ) 2 element Y ∈ Y must correspond a... ( cf although the inverse function for f ( x ) = ( −x ) 2 prove: Suppose:. For all Di, D2S B a function f ab is invertible if f is to functions of several variables ensure you get the best experience Need find... Features of Khan Academy, please make sure that the domains *.kastatic.org and.kasandbox.org! Horizontal line test be both an injection G1x, Need not be Onto it... You 're behind a web filter, please enable JavaScript in your browser in. A financial analyst, the unique inverse of f is a 501 ( c ) prove DnD2. First subtract five, and then finally e maps to -36, B maps to -36, maps! Your textbook or teacher may ask you to verify that two given functions surjective..., members of the domain x ≥ 0, in some applications information-losing! Injective, this a is unique not mean thereciprocal of a function and its inverse f -1 an... ) =3x-4/5, write f-1 ( x ) − 7 and codomain Y, it... Free, world-class education to anyone, anywhere definition: let f: a → B is, and divide! −Π/2, π/2 ], and therefore possesses an inverse morphism two data sets the!: Suppose f: R → R given by ( f o g the. Invertible ( cf left and right inverse ( a ) = a into our function you 're seeing message. Purple oval, this a is each pair of local extrema 2A such that f { \displaystyle }. F-16 ) have inverse functions are said to be confused with numerical such... 
At it a little bit a multivalued function ( e.g function: [ 26 ] } is monotone. Have our members of our domain, members of the hyperbolic sine is... Right inverse ( a ) Show f 1x, the sine is one-to-one the trickiest topics on AP! ’ ll see how a powerful theorem can be generalized to functions of several.... Such as taking the multiplicative inverse of a real variable given by if the determinant is different than.. −1 is to provide a free, world-class education to anyone,.... Of f to x, is one-to-one and g ( 6 ).! The phrasing that a function does not mean thereciprocal of a function is useful in understanding the variability two! These lecture slide images to do practice problems as well as take while! Is primarily concerned with functions that map real numbers to real numbers the! Chain rule ( see the article on inverse functions are actually inverses of trigonometric functions continuous... Message, it means we 're having trouble loading external resources on our website open and. Having trouble loading external resources on our website from x to Y shows relationship. This review article, we have just shown that g = f-1 the composition g ∘ f a! The members of our range, the unique inverse of f is Onto already interesting that a function f ab is invertible if f is multiple! = – a function f ab is invertible if f is, find f-16 ) input -6 into that inverse function exists for a probability. As an example, Consider the real-valued function of f, we multiple...: if f ( x ) = B Academy, please enable JavaScript in your browser Y Z!
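The mapping-diagram test described above can be automated for a finite function stored as a dictionary; a small Python sketch (the example mappings below are hypothetical, not the ones from the discussion):

```python
def invert(mapping):
    """Return the inverse of a finite function given as a dict,
    or raise ValueError if the function is not one-to-one."""
    inverse = {}
    for x, y in mapping.items():
        if y in inverse:
            # two inputs hit the same output, so no inverse exists
            raise ValueError(f"not invertible: {inverse[y]} and {x} both map to {y}")
        inverse[y] = x
    return inverse

print(invert({1: 4, 2: 5, 3: 6}))  # {4: 1, 5: 2, 6: 3}

try:
    invert({1: 4, 2: 4})           # 1 and 2 both map to 4
except ValueError as err:
    print(err)
```

Surjectivity is automatic here because the codomain is taken to be exactly the set of outputs; for a separately declared codomain you would also check that every codomain element appears as a value.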
|
{}
|
## Wednesday, July 20, 2016
### On a theorem of Gitik and Shelah II: Blanket Assumptions
$\DeclareMathOperator{\pp}{pp} \DeclareMathOperator{\tcf}{tcf} \DeclareMathOperator{\pcf}{pcf} \DeclareMathOperator{\cov}{cov} \def\cf{\rm{cf}} \def\REG{\sf {REG}} \def\restr{\upharpoonright} \def\bd{\rm{bd}} \def\subs{\subseteq} \def\cof{\rm{cof}} \def\ran{\rm{ran}} \DeclareMathOperator{\ch}{Ch} \DeclareMathOperator{\PP}{pp} \DeclareMathOperator{\Sk}{Sk}$
Assumptions for the next few posts are:
• $\kappa$ weakly compact
• $\mu>2^\kappa$ is singular of cofinality $\kappa$
• $A\subseteq\mu$ is a set of regular cardinals
• $A$ is unbounded in $\mu$ of cardinality $\kappa$
• $J$ is a $\kappa$-complete ideal on $A$ extending the bounded ideal
• $\lambda=\tcf(\prod A/ J)$
Since $J$ extends the bounded ideal, we may as well assume
• $(2^\kappa)^+<\min(A)$
Of course this implies $|A|<\min(A)$, and so without loss of generality $A$ is progressive.
Let $D$ be an ultrafilter on $A$ disjoint from $J$. Then $\cf(\prod A/D)=\lambda$ and hence $\lambda\in \pcf(A)$. Furthermore, if $B_\lambda[A]$ is a pcf generator for $\lambda$, then $B_\lambda[A]$ must be in $D$ and therefore unbounded in $A$.
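For completeness, here is a sketch of why $\cf(\prod A/D)=\lambda$; this is the standard argument, included only as a reminder:

```latex
% Any scale witnessing \tcf(\prod A/J)=\lambda remains a scale modulo D,
% since the ultrafilter D is disjoint from the ideal J.
\begin{align*}
&\text{Let } \langle f_\alpha : \alpha<\lambda\rangle \text{ be } <_J\text{-increasing and cofinal in } \prod A.\\
&\text{For } \alpha<\beta:\quad \{a\in A : f_\beta(a)\le f_\alpha(a)\}\in J
   \;\Longrightarrow\; \{a\in A : f_\alpha(a)<f_\beta(a)\}\in D,\\
&\text{so the sequence is } <_D\text{-increasing, and the same argument shows it is cofinal modulo } D.
\end{align*}
```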
It follows that $\tcf(\prod B_\lambda[A]/J)=\lambda$ as well, and so we may assume (by passing to $B_\lambda[A]$ if necessary) that
• $\lambda=\max\pcf(A)$
Our assumption that $2^\kappa<\min(A)$ tells us that
(1) $\pcf(\pcf(A))=\pcf(A)$, and
(2) $|\pcf(A)|\leq 2^\kappa<\min(A)=\min(\pcf(A))$
and so $\pcf(A)$ has a transitive set of generators, that is, a sequence $\langle B_\theta:\theta\in\pcf(A)\rangle$ such that
• $B_\theta\subseteq \pcf(A)$
• $B_\theta$ is a generator for $\theta$ in $\pcf(\pcf(A))=\pcf(A)$, and
• $\tau\in B_\theta\Longrightarrow B_\tau\subseteq B_\theta$.
|
{}
|
### Home > CC3MN > Chapter 10 > Lesson 10.2.3 > Problem10-72
10-72.
Solve the following equations for $x$, if possible. Check your solutions.
1. $-(2-3x)+x=9-x$
Simplify the equation as much as possible, then solve for $x$.
$x=2.2$
1. $\frac{6}{x+2}=\frac{3}{4}$
• Refer to part (a).
$6$
1. $5 - 2 ( x + 6 ) = 14$
• Refer to part (a).
1. $\frac{1}{2}x-4+1=-3-\frac{1}{2}x$
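A quick numeric check of the stated answers for parts (a) and (b), substituting each candidate into both sides of its equation (plain Python, no computer algebra system assumed):

```python
def check(lhs, rhs, x, tol=1e-9):
    # substitute the candidate solution into both sides and compare
    return abs(lhs(x) - rhs(x)) < tol

# part (a): -(2 - 3x) + x = 9 - x, answer x = 2.2
print(check(lambda x: -(2 - 3 * x) + x, lambda x: 9 - x, 2.2))  # True

# part (b): 6/(x + 2) = 3/4, answer x = 6
print(check(lambda x: 6 / (x + 2), lambda x: 3 / 4, 6))         # True
```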
|
{}
|
A shortcut wrapper function to get the observed test statistic for a t test.
t_stat(
x,
formula,
response = NULL,
explanatory = NULL,
order = NULL,
alternative = "two-sided",
mu = 0,
conf_int = FALSE,
conf_level = 0.95,
...
)
## Arguments
- `x`: A data frame that can be coerced into a tibble.
- `formula`: A formula with the response variable on the left and the explanatory on the right.
- `response`: The variable name in `x` that will serve as the response. This is an alternative to using the `formula` argument.
- `explanatory`: The variable name in `x` that will serve as the explanatory variable.
- `order`: A string vector specifying the order in which the levels of the explanatory variable should be ordered for subtraction, where `order = c("first", "second")` means `("first" - "second")`.
- `alternative`: Character string giving the direction of the alternative hypothesis. Options are `"two-sided"` (default), `"greater"`, or `"less"`.
- `mu`: A numeric value giving the hypothesized null mean value for a one-sample test and the hypothesized difference for a two-sample test.
- `conf_int`: A logical value for whether to include the confidence interval or not. `FALSE` by default.
- `conf_level`: A numeric value between 0 and 1. Default value is 0.95.
- `...`: Pass in arguments to infer functions.
## Examples
library(tidyr)
# t test statistic for true mean number of hours worked
# per week of 40
gss %>%
t_stat(response = hours, mu = 40)
#> t
#> 2.085191
# t test statistic for number of hours worked per week
# by college degree status
gss %>%
tidyr::drop_na(college) %>%
t_stat(formula = hours ~ college,
order = c("degree", "no degree"),
alternative = "two-sided")
#> t
#> 1.11931
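For reference, the statistic `t_stat` reports in the one-sample case is the familiar t formula, (x̄ − μ)/(s/√n). A minimal Python sketch of that formula (this is not the infer implementation, and the sample data below are made up):

```python
import math
from statistics import mean, stdev

def t_statistic(sample, mu=0.0):
    # one-sample t: (sample mean - mu) / (sample sd / sqrt(n)),
    # using the sample standard deviation (n - 1 denominator)
    n = len(sample)
    return (mean(sample) - mu) / (stdev(sample) / math.sqrt(n))

# hypothetical hours-worked data, testing a null mean of 40
print(round(t_statistic([38, 41, 40, 43, 39], mu=40), 4))  # 0.2325
```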
|
{}
|
Hello Peers, today we are going to share all weeks' assessment and quiz answers for the Convolutional Neural Networks course launched by Coursera, totally free of cost ✅✅✅. This is a certification course for every interested student.
In case you didn't find this course for free, you can apply for financial aid to get it completely free.
Coursera, one of the biggest learning platforms, launches millions of free courses for students daily. These courses come from various recognized universities, where industry experts and professors teach in a clear and understandable way.
Here you will find the Convolutional Neural Networks exam answers, in bold color, given below.
These answers are updated recently and are 100% correct ✅ answers for all the week, assessment, and final exam questions of the Convolutional Neural Networks course from Coursera's free certification program.
Use "Ctrl+F" to find any question's answer. On mobile, tap the three dots in your browser and use the "Find" option to locate any question.
## About Convolutional Neural Networks Course
In the fourth course of the Deep Learning Specialization, you'll learn about intriguing applications of computer vision, such as autonomous driving, facial recognition, and the analysis of radiological images.
Course Apply Link – Convolutional Neural Networks
### Convolutional Neural Networks Quiz Answers
#### Convolutional Neural Networks Week 1 Quiz Answers
Q1. What do you think applying this filter to a grayscale image will do?
• Detect 45-degree edges
• Detect horizontal edges
• Detect image contrast
• Detect vertical edges
Q2. Suppose your input is a 300 by 300 color (RGB) image, and you are not using a convolutional network. If the first hidden layer has 100 neurons, each one fully connected to the input, how many parameters does this hidden layer have (including the bias parameters)?
• 9,000,100
• 9,000,001
• 27,000,001
• 27,000,100
Q3. Suppose your input is a 300 by 300 color (RGB) image, and you use a convolutional layer with 100 filters that are each 5×5. How many parameters does this hidden layer have (including the bias parameters)?
• 7500
• 2600
• 7600
• 2501
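The counts in Q2 and Q3 are pure bookkeeping: a fully connected layer has one weight per input-unit pair plus one bias per unit, while a conv layer has f×f×c weights plus one bias per filter. A quick Python check using the sizes from the questions:

```python
def fc_params(height, width, channels, units):
    # every input value connects to every unit, plus one bias per unit
    return height * width * channels * units + units

def conv_params(filter_size, in_channels, n_filters):
    # each filter has filter_size^2 * in_channels weights plus one bias
    return (filter_size * filter_size * in_channels + 1) * n_filters

print(fc_params(300, 300, 3, 100))   # 27000100
print(conv_params(5, 3, 100))        # 7600
```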
Q4. You have an input volume that is 63x63x16, and convolve it with 32 filters that are each 7×7, using a stride of 2 and no padding. What is the output volume?
• 29x29x16
• 16x16x32
• 16x16x16
• 29x29x32
Q5. You have an input volume that is 15x15x8, and pad it using “pad=2.” What is the dimension of the resulting volume (after padding)?
• 19x19x12
• 19x19x8
• 17x17x10
• 17x17x8
Q6. You have an input volume that is 63x63x16, and convolve it with 32 filters that are each 7×7, and stride of 1. You want to use a “same” convolution. What is the padding?
• 3
• 7
• 2
• 1
Q7. You have an input volume that is 32x32x16, and apply max pooling with a stride of 2 and a filter size of 2. What is the output volume?
• 16x16x16
• 32x32x8
• 15x15x16
• 16x16x8
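Q4 through Q7 all come down to the single output-size formula floor((n + 2p − f)/s) + 1, applied per spatial dimension (for a conv layer the channel count becomes the number of filters; pooling leaves it unchanged). A Python sketch:

```python
def out_size(n, f, stride=1, pad=0):
    # floor((n + 2p - f) / s) + 1 for one spatial dimension
    return (n + 2 * pad - f) // stride + 1

print(out_size(63, 7, stride=2))  # 29 -> Q4: 29x29x32 with 32 filters
print(15 + 2 * 2)                 # 19 -> Q5: 19x19x8 after pad=2
print((7 - 1) // 2)               # 3  -> Q6: "same" padding for f=7, stride 1
print(out_size(32, 2, stride=2))  # 16 -> Q7: 16x16x16 after max pooling
```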
Q8. Because pooling layers do not have parameters, they do not affect the backpropagation (derivatives) calculation.
• True
• False
Q9. In lecture we talked about “parameter sharing” as a benefit of using convolutional networks. Which of the following statements about parameter sharing in ConvNets are true? (Check all that apply.)
• It reduces the total number of parameters, thus reducing overfitting.
• It allows a feature detector to be used in multiple locations throughout the whole input image/input volume.
• It allows parameters learned for one task to be shared even for a different task (transfer learning).
• It allows gradient descent to set many of the parameters to zero, thus making the connections sparse.
Q10. In lecture we talked about “sparsity of connections” as a benefit of using convolutional layers. What does this mean?
• Each layer in a convolutional network is connected only to two other layers
• Regularization causes gradient descent to set many of the parameters to zero.
• Each activation in the next layer depends on only a small number of activations from the previous layer.
• Each filter is connected to every channel in the previous layer.
#### Convolutional Neural Networks Week 2 Quiz Answers
Q1. Which of the following do you typically see in a ConvNet? (Check all that apply.)
• FC layers in the last few layers
• Multiple CONV layers followed by a POOL layer
• Multiple POOL layers followed by a CONV layer
• FC layers in the first few layers
Q2. In order to be able to build very deep networks, we usually only use pooling layers to downsize the height/width of the activation volumes while convolutions are used with “valid” padding. Otherwise, we would downsize the input of the model too quickly.
• True
• False
Q3. Training a deeper network (for example, adding additional layers to the network) allows the network to fit more complex functions and thus almost always results in lower training error. For this question, assume we’re referring to “plain” networks.
• True
• False
Q4. The following equation captures the computation in a ResNet block. What goes into the two blanks above?
a[l+2] = g(W[l+2] g(W[l+1] a[l] + b[l+1]) + b[l+2] + _______ ) + _______
• 0 and z[l+1], respectively
• 0 and a[l], respectively
• z[l] and a[l], respectively
• a[l] and 0, respectively
Q5. Which ones of the following statements on Residual Networks are true? (Check all that apply.)
• Using a skip-connection helps the gradient to backpropagate and thus helps you to train deeper networks
• The skip-connection makes it easy for the network to learn an identity mapping between the input and the output within the ResNet block.
• The skip-connections compute a complex non-linear function of the input to pass to a deeper layer in the network.
• A ResNet with L layers would have on the order of L^2 skip connections in total.
Q6. Suppose you have an input volume of dimension n_H x n_W x n_C. Which of the following statements do you agree with? (Assume that "1×1 convolutional layer" below always uses a stride of 1 and no padding.)
• You can use a 1×1 convolutional layer to reduce n_H, n_W, and n_C.
• You can use a 2D pooling layer to reduce n_H and n_W, but not n_C.
• You can use a 2D pooling layer to reduce n_H, n_W, and n_C.
• You can use a 1×1 convolutional layer to reduce n_C, but not n_H or n_W.
Q7. Which ones of the following statements on Inception Networks are true? (Check all that apply.)
• Making an inception network deeper (by stacking more inception blocks together) might not hurt training set performance.
• A single inception block allows the network to use a combination of 1×1, 3×3, 5×5 convolutions and pooling.
• Inception networks incorporate a variety of network architectures (similar to dropout, which randomly chooses a network architecture on each step) and thus has a similar regularizing effect as dropout.
• Inception blocks usually use 1×1 convolutions to reduce the input data volume’s size before applying 3×3 and 5×5 convolutions.
Q8. Which of the following are common reasons for using open-source implementations of ConvNets (both the model and/or weights)? Check all that apply.
• The same techniques for winning computer vision competitions, such as using multiple crops at test time, are widely used in practical deployments (or production system deployments) of ConvNets.
• A model trained for one computer vision task can usually be used to perform data augmentation even for a different computer vision task.
• It is a convenient way to get working with an implementation of a complex ConvNet architecture.
• Parameters trained for one computer vision task are often useful as pretraining for other computer vision tasks.
Q9. In Depthwise Separable Convolution you:
• You convolve the input image with n_C filters of size n_f x n_f (n_C is the number of color channels of the input image).
• You convolve the input image with a filter of size n_f x n_f x n_C, where n_C acts as the depth of the filter (n_C is the number of color channels of the input image).
• Perform two steps of convolution.
• The final output is of dimension n_out x n_out x n'_C (where n'_C is the number of filters used in the previous convolution step).
• Perform one step of convolution.
• For the "Depthwise" computations each filter convolves with all of the color channels of the input image.
• For the "Depthwise" computations each filter convolves with only one corresponding color channel of the input image.
• The final output is of dimension n_out x n_out x n_C (where n_C is the number of color channels of the input image).
Q10. Fill in the missing dimensions shown in the image below (marked W, Y, Z).
• W = 30, Y = 30, Z = 5
• W = 30, Y = 20, Z =20
• W = 5, Y = 20, Z = 5
• W = 5, Y = 30, Z = 20
#### Convolutional Neural Networks Week 3 Quiz Answers
Q1. You are building a 3-class object classification and localization algorithm. The classes are: pedestrian (c=1), car (c=2), motorcycle (c=3). What should y be for the image below? Remember that "?" means "don't care", which means that the neural network loss function won't care what the neural network gives for that component of the output. Recall y = [p_c, b_x, b_y, b_h, b_w, c_1, c_2, c_3].
• y = [1, ?, ?, ?, ?, ?, ?, ?]
• y = [1, ?, ?, ?, ?, 0, 0, 0]
• y = [0, ?, ?, ?, ?, ?, ?, ?]
• y = [0, ?, ?, ?, ?, 0, 0, 0]
• y = [?, ?, ?, ?, ?, ?, ?, ?]
Q2. You are working on a factory automation task. Your system will see a can of soft-drink coming down a conveyor belt, and you want it to take a picture and decide whether (i) there is a soft-drink can in the image, and if so (ii) its bounding box. Since the soft-drink can is round, the bounding box is always square, and the soft-drink can always appears as the same size in the image. There is at most one soft-drink can in each image. Here are some typical images in your training set:
What is the most appropriate set of output units for your neural network?
• Logistic unit, bx, by, bh (since bw = bh)
• Logistic unit, bx, by, bh, bw
• Logistic unit (for classifying if there is a soft-drink can in the image)
• Logistic unit, bx and by
Q3. If you build a neural network that inputs a picture of a person’s face and outputs N landmarks on the face (assume the input image always contains exactly one face), how many output units will the network have?
• N^2
• 2N
• N
• 3N
Q4. When training one of the object detection systems described in lecture, you need a training set that contains many pictures of the object(s) you wish to detect. However, bounding boxes do not need to be provided in the training set, since the algorithm can learn to detect the objects by itself.
• True
• False
Q5. What is the IoU between these two boxes? The upper-left box is 2×2, and the lower-right box is 2×3. The overlapping region is 1×1.
• None of the above
• 1/9
• 1/10
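Q5's numbers can be checked directly: the areas (2×2 = 4, 2×3 = 6, overlap 1×1 = 1) come straight from the question, giving IoU = 1 / (4 + 6 − 1) = 1/9.

```python
def iou(area_a, area_b, overlap):
    # intersection over union, given the two box areas and their overlap area
    union = area_a + area_b - overlap
    return overlap / union

# upper-left box 2x2, lower-right box 2x3, overlapping region 1x1
print(iou(2 * 2, 2 * 3, 1 * 1))  # 0.111... = 1/9
```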
Q6. Suppose you run non-max suppression on the predicted boxes above. The parameters you use for non-max suppression are that boxes with probability ≤ 0.4 are discarded, and the IoU threshold for deciding if two boxes overlap is 0.5. How many boxes will remain after non-max suppression?
• 6
• 7
• 4
• 3
• 5
Q7. Suppose you are using YOLO on a 19×19 grid, on a detection problem with 20 classes, and with 5 anchor boxes. During training, for each image you will need to construct an output volume y as the target value for the neural network; this corresponds to the last layer of the neural network. (y may include some “?”, or “don’t cares”). What is the dimension of this output volume?
• 19x19x(5×20)
• 19x19x(20×25)
• 19x19x(5×25)
• 19x19x(25×20)
Q8. What is Semantic Segmentation?
• Locating an object in an image belonging to a certain class by drawing a bounding box around it.
• Locating objects in an image by predicting each pixel as to which class it belongs to.
• Locating objects in an image belonging to different classes by drawing bounding boxes around them.
Q9. Using the concept of Transpose Convolution, fill in the values of X, Y and Z below.
(padding = 1, stride = 2)
Input: 2×2
Filter: 3×3
Result: 6×6
• X = 2, Y = -6, Z = -4
• X = -2, Y = -6, Z = -4
• X = 2, Y = 6, Z = 4
• X = 2, Y = -6, Z = 4
Q10. Suppose your input to a U-Net architecture is h x w x 3, where 3 denotes the number of channels (RGB). What will be the dimension of your output?
• h x w x n, where n = number of output channels
• h x w x n, where n = number of filters used in the algorithm
• h x w x n, where n = number of output classes
• h x w x n, where n = number of input channels
#### Convolutional Neural Networks Week 4 Quiz Answers
Q1. Face verification requires comparing a new picture against one person’s face, whereas face recognition requires comparing a new picture against K persons’ faces.
• True
• False
Q2. Why do we learn a function d(img1, img2) for face verification? (Select all that apply.)
• Given how few images we have per person, we need to apply transfer learning.
• This allows us to learn to recognize a new person given just a single image of that person.
• We need to solve a one-shot learning problem.
• This allows us to learn to predict a person’s identity using a softmax output unit, where the number of classes equals the number of persons in the database plus 1 (for the final “not in database” class).
Q3. In order to train the parameters of a face recognition system, it would be reasonable to use a training set comprising 100,000 pictures of 100,000 different persons.
• False
• True
Q4. Which of the following is a correct definition of the triplet loss? Consider that α > 0. (We encourage you to figure out the answer from first principles, rather than just refer to the lecture.)
• max(||f(A) − f(P)||² − ||f(A) − f(N)||² − α, 0)
• max(||f(A) − f(N)||² − ||f(A) − f(P)||² + α, 0)
• max(||f(A) − f(P)||² − ||f(A) − f(N)||² + α, 0)
• max(||f(A) − f(N)||² − ||f(A) − f(P)||² − α, 0)
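A minimal NumPy sketch of the triplet loss, max(||f(A) − f(P)||² − ||f(A) − f(N)||² + α, 0), using randomly generated stand-in embeddings (the embedding size and margin are illustrative assumptions):

```python
import numpy as np

def triplet_loss(f_a, f_p, f_n, alpha=0.2):
    # max(||f(A)-f(P)||^2 - ||f(A)-f(N)||^2 + alpha, 0)
    d_pos = np.sum((f_a - f_p) ** 2)  # squared distance anchor-positive
    d_neg = np.sum((f_a - f_n) ** 2)  # squared distance anchor-negative
    return max(d_pos - d_neg + alpha, 0.0)

rng = np.random.default_rng(0)
f_a, f_p, f_n = rng.normal(size=(3, 128))  # stand-in 128-d embeddings
print(triplet_loss(f_a, f_p, f_n))
```

The margin α forces the negative to be strictly farther from the anchor than the positive; when that already holds by more than α, the loss is zero.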
Q5. Consider the following Siamese network architecture:
The upper and lower neural networks have different input images, but have exactly the same parameters.
• True
• False
Q6. You train a ConvNet on a dataset with 100 different classes. You wonder if you can find a hidden unit which responds strongly to pictures of cats. (I.e., a neuron so that, of all the input/training images that strongly activate that neuron, the majority are cat pictures.) You are more likely to find this unit in layer 4 of the network than in layer 1.
• True
• False
Q7. Neural style transfer is trained as a supervised learning task in which the goal is to input two images (x), and train a network to output a new, synthesized image (y).
• True
• False
Q8. In the deeper layers of a ConvNet, each channel corresponds to a different feature detector. The style matrix G^{[l]} measures the degree to which the activations of different feature detectors in layer l vary (or correlate) together with each other.
• True
• False
Q9. In neural style transfer, what is updated in each iteration of the optimization algorithm?
• The pixel values of the content image C
• The pixel values of the generated image G
• The neural network parameters
• The regularization parameters
Q10. You are working with 3D data. You are building a network layer whose input volume has size 32x32x32x16 (this volume has 16 channels), and applies convolutions with 32 filters of dimension 3x3x3 (no padding, stride 1). What is the resulting output volume?
• Undefined: This convolution step is impossible and cannot be performed because the dimensions specified don’t match up.
• 30x30x30x32
• 30x30x30x16
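Q10 is a direct application of the output-size formula, floor((n + 2p − f)/s) + 1, to each spatial dimension; each 3x3x3 filter also spans all 16 input channels, and each filter contributes one output channel:

```python
def conv_out(n, f, pad=0, stride=1):
    # output size per spatial dimension: floor((n + 2*pad - f)/stride) + 1
    return (n + 2 * pad - f) // stride + 1

in_shape = (32, 32, 32)   # spatial dims; the input also has 16 channels
f, n_filters = 3, 32      # 3x3x3 filters (each spanning 16 channels), 32 of them

out_shape = tuple(conv_out(n, f) for n in in_shape) + (n_filters,)
print(out_shape)  # (30, 30, 30, 32)
```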
In the fourth course of the Deep Learning Specialization, you will learn about intriguing applications of computer vision, such as autonomous driving, face recognition, and reading radiology images.
By the end, you’ll be able to build a convolutional neural network, including recent variations like residual networks; apply convolutional networks to visual detection and recognition tasks; and use neural style transfer to generate art and apply these algorithms to image, video, and other 2D or 3D data.
The Deep Learning Specialization is a foundational program that teaches you the capabilities, challenges, and consequences of deep learning and prepares you to develop cutting-edge AI. It teaches you how to use machine learning in your work, advance your technical career, and enter the AI industry.
SKILLS YOU WILL GAIN
• Deep Learning
• Facial Recognition System
• Convolutional Neural Network
• Tensorflow
• Object Detection and Segmentation
|
{}
|
# Robin's question at Yahoo! Answers regarding extrema of a function of two variables
#### MarkFL
Staff member
Here is the question:
How can I find the local maximum and minimum values and saddle points of the function f(x,y) = sin(x)sin(y)?
Where -π < x < π and -π < y < π
I have posted a link there to this thread so the OP can see my work.
#### MarkFL
Staff member
Hello Robin,
We are given the function:
$$\displaystyle f(x,y)=\sin(x)\sin(y)$$
where:
$$\displaystyle -\pi<x<\pi$$
$$\displaystyle -\pi<y<\pi$$
Let's take a look at a plot of the function on the given domain:
Equating the first partials to zero, we obtain:
$$\displaystyle f_x(x,y)=\cos(x)\sin(y)=0$$
$$\displaystyle f_y(x,y)=\sin(x)\cos(y)=0$$
Adding these two equations, we obtain:
$$\displaystyle \sin(x)\cos(y)+\cos(x)\sin(y)=0$$
Applying the angle-sum identity for sine, we find:
$$\displaystyle \sin(x+y)=0$$
Observing that we require:
$$\displaystyle -2\pi<x+y<2\pi$$
We then have:
$$\displaystyle x+y=-\pi,\,0,\,\pi$$
Thus, we obtain the 5 critical points:
$$\displaystyle P_1(x,y)=\left(-\frac{\pi}{2},-\frac{\pi}{2} \right)$$
$$\displaystyle P_2(x,y)=\left(-\frac{\pi}{2},\frac{\pi}{2} \right)$$
$$\displaystyle P_3(x,y)=(0,0)$$
$$\displaystyle P_4(x,y)=\left(\frac{\pi}{2},-\frac{\pi}{2} \right)$$
$$\displaystyle P_5(x,y)=\left(\frac{\pi}{2},\frac{\pi}{2} \right)$$
To categorize these critical points, we may utilize the second partials test for relative extrema:
$$\displaystyle f_{xx}(x,y)=-\sin(x)\sin(y)$$
$$\displaystyle f_{yy}(x,y)=-\sin(x)\sin(y)$$
$$\displaystyle f_{xy}(x,y)=\cos(x)\cos(y)$$
Hence:
$$\displaystyle D(x,y)=\sin^2(x)\sin^2(y)-\cos^2(x)\cos^2(y)$$
| Critical point $(a,b)$ | $D(a,b)$ | $f_{xx}(a,b)$ | Conclusion |
| --- | --- | --- | --- |
| $\left(-\dfrac{\pi}{2},-\dfrac{\pi}{2}\right)$ | $1$ | $-1$ | relative maximum |
| $\left(-\dfrac{\pi}{2},\dfrac{\pi}{2}\right)$ | $1$ | $1$ | relative minimum |
| $(0,0)$ | $-1$ | $0$ | saddle point |
| $\left(\dfrac{\pi}{2},-\dfrac{\pi}{2}\right)$ | $1$ | $1$ | relative minimum |
| $\left(\dfrac{\pi}{2},\dfrac{\pi}{2}\right)$ | $1$ | $-1$ | relative maximum |
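The classification above can be double-checked mechanically with sympy (assuming it is installed):

```python
import sympy as sp

x, y = sp.symbols('x y')
f = sp.sin(x) * sp.sin(y)

fxx = sp.diff(f, x, 2)
fyy = sp.diff(f, y, 2)
fxy = sp.diff(f, x, y)
D = sp.simplify(fxx * fyy - fxy ** 2)  # sin^2(x)sin^2(y) - cos^2(x)cos^2(y)

half = sp.pi / 2
for a, b in [(-half, -half), (-half, half), (0, 0), (half, -half), (half, half)]:
    d_val, fxx_val = D.subs({x: a, y: b}), fxx.subs({x: a, y: b})
    if d_val < 0:
        kind = 'saddle point'
    else:
        kind = 'relative maximum' if fxx_val < 0 else 'relative minimum'
    print((a, b), kind)
```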
|
{}
|
Chapter 17, Problem 21CR
Introductory Chemistry: A Foundati...
9th Edition
Steven S. Zumdahl + 1 other
ISBN: 9781337399425
Textbook Problem
Write the equilibrium constant expression for each of the following reactions. a. 4NO(g) ⇌ 2N2O(g) + O2(g) b. 4PF3(g) ⇌ P4(s) + 6F2(g) c. CO(g) + 3H2(g) ⇌ CH4(g) + H2O(g) d. 2BrF5(g) ⇌ Br2(g) + 5F2(g) e. S(s) + 2HCl(g) ⇌ H2S(g) + Cl2(g)
Interpretation Introduction
(a)
Interpretation:
The equilibrium expression for the given reaction is to be stated.
Concept Introduction:
The equilibrium constant of a reaction is expressed as the ratio of concentration of products and reactants each raised to the power of their stoichiometric coefficients. A general equilibrium reaction is represented as,
aA + bB ⇌ cC + dD
The equilibrium constant for the above chemical reaction is expressed as,
K = [C]^c[D]^d / ([A]^a[B]^b)
Where,
• [A] represents the equilibrium concentration of reactant A.
• [B] represents the equilibrium concentration of reactant B.
• [C] represents the equilibrium concentration of product C.
• [D] represents the equilibrium concentration of product D.
• a represents the stoichiometric coefficient of reactant A.
• b represents the stoichiometric coefficient of reactant B.
• c represents the stoichiometric coefficient of product C.
• d represents the stoichiometric coefficient of product D.
Explanation
The given reaction is represented as,
4NO(g) ⇌ 2N2O(g) + O2(g)
The equilibrium constant for the above chemical reaction is expressed as,
K = [N2O]^2[O2] / [NO]^4
Interpretation Introduction
(b)
Interpretation:
The equilibrium expression for the given reaction is to be stated.
Concept Introduction:
The equilibrium constant of a reaction is expressed as the ratio of concentration of products and reactants each raised to the power of their stoichiometric coefficients. A general equilibrium reaction is represented as,
aA + bB ⇌ cC + dD
The equilibrium constant for the above chemical reaction is expressed as,
K = [C]^c[D]^d / ([A]^a[B]^b)
Where,
• [A] represents the equilibrium concentration of reactant A.
• [B] represents the equilibrium concentration of reactant B.
• [C] represents the equilibrium concentration of product C.
• [D] represents the equilibrium concentration of product D.
• a represents the stoichiometric coefficient of reactant A.
• b represents the stoichiometric coefficient of reactant B.
• c represents the stoichiometric coefficient of product C.
• d represents the stoichiometric coefficient of product D.
Interpretation Introduction
(c)
Interpretation:
The equilibrium expression for the given reaction is to be stated.
Concept Introduction:
The equilibrium constant of a reaction is expressed as the ratio of concentration of products and reactants each raised to the power of their stoichiometric coefficients. A general equilibrium reaction is represented as,
aA + bB ⇌ cC + dD
The equilibrium constant for the above chemical reaction is expressed as,
K = [C]^c[D]^d / ([A]^a[B]^b)
Where,
• [A] represents the equilibrium concentration of reactant A.
• [B] represents the equilibrium concentration of reactant B.
• [C] represents the equilibrium concentration of product C.
• [D] represents the equilibrium concentration of product D.
• a represents the stoichiometric coefficient of reactant A.
• b represents the stoichiometric coefficient of reactant B.
• c represents the stoichiometric coefficient of product C.
• d represents the stoichiometric coefficient of product D.
Interpretation Introduction
(d)
Interpretation:
The equilibrium expression for the given reaction is to be stated.
Concept Introduction:
The equilibrium constant of a reaction is expressed as the ratio of concentration of products and reactants each raised to the power of their stoichiometric coefficients. A general equilibrium reaction is represented as,
aA + bB ⇌ cC + dD
The equilibrium constant for the above chemical reaction is expressed as,
K = [C]^c[D]^d / ([A]^a[B]^b)
Where,
• [A] represents the equilibrium concentration of reactant A.
• [B] represents the equilibrium concentration of reactant B.
• [C] represents the equilibrium concentration of product C.
• [D] represents the equilibrium concentration of product D.
• a represents the stoichiometric coefficient of reactant A.
• b represents the stoichiometric coefficient of reactant B.
• c represents the stoichiometric coefficient of product C.
• d represents the stoichiometric coefficient of product D.
Interpretation Introduction
(e)
Interpretation:
The equilibrium expression for the given reaction is to be stated.
Concept Introduction:
The equilibrium constant of a reaction is expressed as the ratio of concentration of products and reactants each raised to the power of their stoichiometric coefficients. A general equilibrium reaction is represented as,
aA + bB ⇌ cC + dD
The equilibrium constant for the above chemical reaction is expressed as,
K = [C]^c[D]^d / ([A]^a[B]^b)
Where,
• [A] represents the equilibrium concentration of reactant A.
• [B] represents the equilibrium concentration of reactant B.
• [C] represents the equilibrium concentration of product C.
• [D] represents the equilibrium concentration of product D.
• a represents the stoichiometric coefficient of reactant A.
• b represents the stoichiometric coefficient of reactant B.
• c represents the stoichiometric coefficient of product C.
• d represents the stoichiometric coefficient of product D.
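As a cross-check, the pattern above can be automated. The helper below is my own sketch (not part of the textbook solution); it omits pure solids and liquids from the expression, which matters for parts (b) and (e):

```python
def k_expression(reactants, products):
    """Build K as products over reactants, each raised to its coefficient.
    Species are (coefficient, formula, phase) tuples; pure solids ('s') and
    liquids ('l') have unit activity and are omitted from the expression."""
    def side(species):
        terms = []
        for coeff, formula, phase in species:
            if phase in ('s', 'l'):
                continue  # activities of pure solids/liquids are 1
            terms.append(f"[{formula}]^{coeff}" if coeff != 1 else f"[{formula}]")
        return ''.join(terms) or '1'
    return f"K = {side(products)} / {side(reactants)}"

# part (a): 4NO(g) <=> 2N2O(g) + O2(g)
print(k_expression([(4, 'NO', 'g')], [(2, 'N2O', 'g'), (1, 'O2', 'g')]))
# part (b): 4PF3(g) <=> P4(s) + 6F2(g)  -- P4(s) drops out
print(k_expression([(4, 'PF3', 'g')], [(1, 'P4', 's'), (6, 'F2', 'g')]))
```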
|
{}
|
# Problem: A 31.5 g wafer of pure gold initially at 69.7 oC is submerged into 63.6 g of water at 27.2 oC in an insulated container.What is the final temperature of both substances at thermal equilibrium?
###### FREE Expert Solution
In this problem, we’re being asked to determine the final temperature of both the substances at thermal equilibrium.
Recall that heat can be calculated using the following equation:
q = mcΔT
q = heat, J
+q → absorbs heat
–q → loses heat
m = mass (g)
c = specific heat capacity = J/(g·°C)
ΔT = Tf – Ti (°C)
Recall that heat always travels from the higher-temperature object to the lower-temperature object.
• In this problem, since the initial temperature of the water is lower than that of the gold, heat transfers from the gold into the water when they come into contact.
Based on the given system:
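Setting q_gold = −q_water and solving for the common final temperature gives Tf = (m₁c₁T₁ + m₂c₂T₂)/(m₁c₁ + m₂c₂). A sketch, assuming the usual textbook specific heats (c_gold ≈ 0.128 J/g·°C, c_water ≈ 4.184 J/g·°C):

```python
def final_temperature(m1, c1, t1, m2, c2, t2):
    # q1 = -q2  =>  m1*c1*(Tf - t1) = -m2*c2*(Tf - t2); solve for Tf
    return (m1 * c1 * t1 + m2 * c2 * t2) / (m1 * c1 + m2 * c2)

# 31.5 g of gold at 69.7 C dropped into 63.6 g of water at 27.2 C
tf = final_temperature(31.5, 0.128, 69.7, 63.6, 4.184, 27.2)
print(round(tf, 1))  # 27.8 (degrees C)
```

Because water's heat capacity dwarfs gold's, the final temperature lands only slightly above the water's starting temperature.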
###### Problem Details
A 31.5 g wafer of pure gold initially at 69.7 oC is submerged into 63.6 g of water at 27.2 oC in an insulated container.
What is the final temperature of both substances at thermal equilibrium?
|
{}
|
A fundamental theorem of calculus for the $M_{\alpha}$-integral
Commun. Korean Math. Soc. 2022, Vol. 37, No. 2, 415-421
https://doi.org/10.4134/CKMS.c210043
Published online March 29, 2022; printed April 30, 2022
Abraham Perral Racca, Adventist University of the Philippines
Abstract: This paper presents a fundamental theorem of calculus, an integration by parts formula, and a version of an equiintegrability convergence theorem for the $M_{\alpha}$-integral using the $M_{\alpha}$-strong Lusin condition. In the convergence theorem, the uniform $M_{\alpha}$-strong Lusin condition was imposed in order to relax the condition of pointwise convergence everywhere to pointwise convergence almost everywhere.
Keywords: $M_{\alpha}$-integral, $M_{\alpha}$-$SL$, fundamental theorem of calculus, integration by parts
|
{}
|
# Introduction
The method set of a type or value is particularly important in Go, because it determines which interfaces that value implements.
# The Spec
There are two important clauses in the Go Language Specification about method sets. They are as follows:
Method Sets: A type may have a method set associated with it. The method set of an interface type is its interface. The method set of any other named type T consists of all methods with receiver type T. The method set of the corresponding pointer type *T is the set of all methods with receiver *T or T (that is, it also contains the method set of T). Any other type has an empty method set. In a method set, each method must have a unique name.
Calls: A method call x.m() is valid if the method set of (the type of) x contains m and the argument list can be assigned to the parameter list of m. If x is addressable and &x's method set contains m, x.m() is shorthand for (&x).m().
# Usage
There are many different cases during which a method set crops up in day-to-day programming. Some of the main ones are when calling methods on variables, calling methods on slice elements, calling methods on map elements, and storing values in interfaces.
## Variables
In general, when you have a variable of a type, you can pretty much call whatever you want on it. When you combine the two rules above together, the following is valid:
package main

import "fmt"

type List []int
func (l List) Len() int { return len(l) }
func (l *List) Append(val int) { *l = append(*l, val) }
func main() {
// A bare value
var lst List
lst.Append(1)
fmt.Printf("%v (len: %d)\n", lst, lst.Len())
// A pointer value
plst := new(List)
plst.Append(2)
fmt.Printf("%v (len: %d)\n", plst, plst.Len())
}
Note that both pointer and value methods can be called on both pointer and non-pointer values. To understand why, let's examine the method sets of both types, directly from the spec:
List
- Len() int
*List
- Len() int
- Append(int)
Notice that the method set for List does not actually contain Append(int) even though you can see from the above program that you can call the method without a problem. This is a result of the second spec section above. It implicitly translates the first line below into the second:
lst.Append(1)
(&lst).Append(1)
Now that the value before the dot is a *List, its method set includes Append, and the call is legal.
To make it easier to remember these rules, it may be helpful to simply consider the pointer- and value-receiver methods separately from the method set. It is legal to call a pointer-valued method on anything that is already a pointer or whose address can be taken (as is the case in the above example). It is legal to call a value method on anything which is a value or whose value can be dereferenced (as is the case with any pointer; this case is specified explicitly in the spec).
## Slice Elements
Slice elements are almost identical to variables. Because they are addressable, both pointer- and value-receiver methods can be called on both pointer- and value-element slices.
## Map Elements
Map elements are not addressable. Therefore, the following is an illegal operation:
lists := map[string]List{}
lists["primes"].Append(7) // cannot be rewritten as (&lists["primes"]).Append(7)
However, the following is still valid (and is the far more common case):
lists := map[string]*List{}
lists["primes"] = new(List)
lists["primes"].Append(7)
count := lists["primes"].Len() // can be rewritten as (*lists["primes"]).Len()
Thus, both pointer- and value-receiver methods can be called on pointer-element maps, but only value-receiver methods can be called on value-element maps. This is the reason that maps with struct elements are almost always made with pointer elements.
## Interfaces
The concrete value stored in an interface is not addressable, in the same way that a map element is not addressable. Therefore, when you call a method on an interface, it must either have an identical receiver type or it must be directly discernible from the concrete type: pointer- and value-receiver methods can be called with pointers and values respectively, as you would expect. Value-receiver methods can be called with pointer values because they can be dereferenced first. Pointer-receiver methods cannot be called with values, however, because the value stored inside an interface has no address. When assigning a value to an interface, the compiler ensures that all possible interface methods can actually be called on that value, and thus trying to make an improper assignment will fail on compilation. To extend the earlier example, the following describes what is valid and what is not:
package main

import "fmt"

type List []int
func (l List) Len() int { return len(l) }
func (l *List) Append(val int) { *l = append(*l, val) }
type Appender interface {
Append(int)
}
func CountInto(a Appender, start, end int) {
for i := start; i <= end; i++ {
a.Append(i)
}
}
type Lener interface {
Len() int
}
func LongEnough(l Lener) bool {
return l.Len()*10 > 42
}
func main() {
// A bare value
var lst List
CountInto(lst, 1, 10) // INVALID: Append has a pointer receiver
if LongEnough(lst) { // VALID: Identical receiver type
fmt.Printf(" - lst is long enough")
}
// A pointer value
plst := new(List)
CountInto(plst, 1, 10) // VALID: Identical receiver type
if LongEnough(plst) { // VALID: a *List can be dereferenced for the receiver
fmt.Printf(" - plst is long enough")
}
}
Last update: March 1, 2022
|
{}
|
## 20200422
### The results are in
As promised in the last post, we have now calculated the lengths of aluminium tubing (of a certain grade) required to produce a set of tubular chimes - and even that is much too grandiose a term for the rather toy-like set of factory-cut hollow rods purchased, at the very reasonable price of about thirty quid (including delivery), for the experiment.
The material arrived the next day, and I measured the lengths principally because I needed accurate data for my calculations below - and only secondarily to check that they were as ordered (which they pretty much were, and certainly within the vendor's advertised tolerance). I also weighed them, principally to check that the density matched the theoretical density used in the frequency calculations - which of course it did, to within about a half of a percent. As I have no machinery to check its Young's modulus of elasticity, I continue to rely on its wikipedially reported value.
Measuring their frequencies was the most difficult task, and involved suspending the tubes with a hot-glued thread connecting the top of each tube to a shelf, and recording them - with my trusty H2N Zoom recorder - being lightly struck by a similar aluminium tube. Here's a composite of the noises made by this 'instrument'.
Do not be fooled by the levels. Although the samples in this recording are untreated (apart from being chopped up into segments), those small cylinders cannot shift very much air and the sounds are quite quiet. It's the H2N being held (by hand) up close.
The samples were brought into Cockos Reaper whereupon the use of its built-in band-pass filter, ReaEQ, (to cut out the highly audible harmonics) and frequency counter (ReaTune) plugins proved adequate to the task. Here are all the measurements, plus a picture of the set of tubes.
| Length ($mm$) | Weight ($gm$) | Frequency (Hz) | $\kappa$ ($m^2\,s^{-1}$) |
| --- | --- | --- | --- |
| 251 | 10.63 | 514 | 32.4 |
| 242 | 10.23 | 551 | 32.3 |
| 236 | 9.96 | 579 | 32.2 |
| 229 | 9.75 | 620 | 32.5 |
| 222 | 9.43 | 651 | 32.1 |
| 215 | 9.15 | 698 | 32.3 |
| 210 | 8.81 | 734 | 32.4 |
| 203 | 8.60 | 786 | 32.4 |
| 198 | 8.32 | 821 | 32.2 |
| 193 | 8.14 | 866 | 32.3 |
| 187 | 7.87 | 920 | 32.2 |
| 182 | 7.69 | 970 | 32.1 |
| 177 | 7.51 | 1018 | 31.9 |

(The accompanying png column, a picture of each of the 13 "pipelings", is omitted here.)
As expected, the frequencies were not even close to the design targets (respectively 165, 175, 185, 196, 208, 220, 233, 247, 262, 277, 294, 311, 330) and were - on average - 3.14 times higher (to the three significant figures employed throughout this exercise). The final column in the above table is $\kappa$, the diffusivity mentioned in the previous post, and is calculated as $L^2 \times f$ for each row ($L$ being the metre value). The average value for $\kappa$ is 32.2, the same multiple (3.14, weirdly close to $\pi$) of the value of 10.2 expected.
But - and also as expected - the instrument does indeed play a very serviceable chromatic scale. It may not start on the 165 Hertz concert pitch E (as requested) but instead a rather higher 514 Hertz non-concert pitch flat C, not quite a quarter tone below the 'proper' one. That's quite a way out, but our measured value of $\kappa$ could now be used to calculate a new series of $L$ values from $L=\sqrt{\kappa/f}$ - provided of course we were to use exactly the same material. All lengths would, accordingly, be multiplied by a factor of 1.77 (the square root of 3.14).
For this particular kind of aluminium alloy the previous post shows that $\kappa = 1420 \sqrt{D_o^2 + D_i^2}$, where $D_o = 0.006 m$ (i.e. 6mm) and $D_i = 0.004 m$ (i.e. 4mm). It's perfectly reasonable to incorporate the 'blame' for the (fudge) factor of 3.14 into the 'material constant' of 1420 and claim that in practice this constant should be 4450. This leaves us free to use larger inner and outer diameters as long as it's the same alloy to allow us to shift more air and make louder instruments. For example, if we have our heart set on a more robust 165 Hertz concert pitch E, we choose a larger diameter tubing (say half an inch with a 6.2mm bore). This would give us a new value for $\kappa$ - based on $D_o = 0.0127\,m$ and $D_i=0.0062\,m$ - of $4450 \times 0.01413 = 63$. From this we'd calculate a length of 618 mm.
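Both rescalings can be checked numerically. A short sketch using the measured $\kappa = 32.2$ and the adjusted $\kappa = 63$ from the paragraphs above:

```python
from math import sqrt

def length_mm(f_hz, kappa=32.2):
    # L^2 * f = kappa  =>  L = sqrt(kappa / f); return millimetres
    return 1000 * sqrt(kappa / f_hz)

# measured kappa reproduces the delivered tube: ~250 mm for the 514 Hz pipe
print(round(length_mm(514)))              # 250
# larger-diameter stock (kappa = 63) for a true 165 Hz concert-pitch E
print(round(length_mm(165, kappa=63)))    # 618
```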
And so on
### Fudge
So why was the calculation incorrect? Why would we need to use 4450 rather than 1420? As the 1420 was itself calculated - as $(22.4/8\pi) \sqrt{E/\rho}$ - from easily verifiable physical properties of a particular alloy of aluminium, the only places where the discrepancy could occur are in the value of 22.4 and from the way that $I$ was defined (in terms of $D_o$ and $D_i$). Perhaps an engineer can let me know where I went wrong?
Finally, here's a plot of frequency versus length of the tubes received in the mail. The least squares fit (from LibreCalc) is $f = 33.6 \times L^{-1.97}$, satisfyingly close to the model of $L^2 f = \kappa$ (with my measured average of $\kappa=32.2$).
|
{}
|
# Break-Even and Target Profit Analysis
Break-Even and Target Profit Analysis
The Marbury Stein Shop sells steins from all parts of the world. The owner of the shop, Clint Marbury, is thinking of expanding his operations by hiring college students, on a commission basis, to sell steins at the local college. The steins will bear the school emblem.
These steins must be ordered from the manufacturer three months in advance, and because of the unique emblem of each college, they cannot be returned. The steins would cost Marbury $15 each, with a minimum order of 200 steins. Any additional steins would have to be ordered in increments of 50. Because Marbury's plan would not require any additional facilities, the only costs associated with the project would be the cost of the steins and the cost of sales commissions. The selling price of the steins would be $30 each. Marbury would pay the students a commission of $6 for each stein sold.

Required:

1. To make the project worthwhile in terms of his own time, Marbury would require a $7,200 profit for the first six months of the venture. What level of sales in units and dollars would be required to attain this target net operating income? Show all computations.
2. Assume that the venture is undertaken and an order is placed for 200 steins. What would be Marbury’s break-even point in units and in sales dollars? Show computations, and explain the reasoning behind your answer.
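One way to set up the computations (my own sketch; the $9 and $24 contribution margins follow directly from the prices given above):

```python
price, unit_cost, commission = 30, 15, 6

# Requirement 1: target profit of $7,200, all costs variable
cm_per_unit = price - unit_cost - commission      # $9 contribution per stein
target_units = 7200 / cm_per_unit                 # 800 steins
print(target_units, target_units * price)         # 800.0 24000.0

# Requirement 2: 200 steins already ordered, so their cost is committed (fixed)
fixed_cost = 200 * unit_cost                      # $3,000 sunk in inventory
cm_after_order = price - commission               # stein cost no longer incremental
breakeven_units = fixed_cost / cm_after_order     # 125 steins
print(breakeven_units, breakeven_units * price)   # 125.0 3750.0
```

The key reasoning step for requirement 2 is that once the non-returnable order is placed, the $3,000 behaves as a fixed cost and only the $6 commission remains variable.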
|
{}
|
# weak gravitational lensing
Light can be considered as a stream of particles, the photons, which however have no rest mass. But gravity should not be able to act on something without mass, or should it? Almost apologetically, Einstein wrote: "Hopefully nobody will find it questionable that I treat a light ray perfectly as a heavy body." The second time, in 1916, he arrived at the conclusion: "A light ray passing the Sun thus experiences a deflection of 1.7 arc seconds", i.e.

$$\hat\alpha_\odot \approx \frac{6}{7\cdot10^5} \approx 8.6\cdot10^{-6} \approx 1.7''\;.$$

Even though, for Einstein himself, the deflection of light at the rim of the Sun marked a confirmation of his theory of general relativity, he was sure that this effect would hardly ever gain astrophysical relevance, not to speak of an even more practical importance. In the curved and expanding space-time of our Universe, distances are no longer uniquely defined; in other words, our Universe turns out to be well described as spatially flat.

What is the role of weak gravitational lensing in this context? In weak lensing, the gravitational lens is not powerful enough, as in strong lensing, to produce multiple images of a source or an Einstein ring. Less distant galaxies can act as weak gravitational lenses on more distant galaxies, and the large-scale matter distribution in front of and behind galaxy clusters is projected onto them and can affect weak-lensing mass determinations. The deflection angle is the gradient of the lensing potential, $\vec\alpha = \vec\nabla\psi$, and the convergence $\kappa$ is the surface-mass density $\Sigma$ of the lens divided by its critical surface mass density $\Sigma_\mathrm{cr}$. The Poisson equation, relating the Laplacian of the potential to the surface-mass density, allows one to infer the lensing matter distribution from observable image distortions; knowing the shear thus allows the scaled surface-mass density to be reconstructed. This is the foundation of by far the most applications of weak gravitational lensing in astrophysics and cosmology.

A simple consideration shows however that shear and convergence have identical power spectra: the power spectrum of the convergence $C_\kappa(l)$ is determined by a weighted line-of-sight integral over the power spectrum $P_\delta(k)$ of the density contrast. Image ellipticities, $\varepsilon := \frac{a-b}{a+b}$, have to be measured either by model fitting or by measuring the quadrupole moments of the surface-brightness distribution from faint, small, pixellised images distorted by weak lensing at the per cent level; averaging over $N$ faint galaxy images reduces the scatter of the intrinsic ellipticity. If a population of distant sources is observed within a certain solid angle $\delta\Omega$ on the sky where the magnification is $\mu$, fainter sources become visible there. This correlation of galaxies with the magnifying lenses, together with the increase of QSO number counts due to the lensing magnification, creates a measurable, apparent correlation between distant QSOs and foreground galaxies. By a similar statistical analysis of gravitational lensing together with the optical light distribution, the mass-to-light ratio of the central brightest galaxies in galaxy clusters was found to be $M/L\simeq360\,h\,M_\odot/L_\odot$. Signs of dynamical activity are often seen in massive galaxy clusters, while less massive, cooler clusters seem to be closer to equilibrium (Mahdavi et al.). The Kilo-Degree Survey and the Dark-Energy Survey are covering 1500 and 5000 square degrees, respectively (De Jong et al.).
On more distant galaxies can tentatively be separated according to their mass further exciting perspective for gravitational-lensing research the... Many cases can be achieved as described in the data are interpreted as remainders of undetected or removed! Gravitational-Lensing research is the most distant source of electromagnetic radiation we can meaningfully do so, they the! Mapping is simply the identity the distance measure \ ( C_\kappa ( l ) \ ) is measurement... Them and can affect weak-lensing mass determinations are multiple ways at gravitational lensing constitute of. Numerous weak-lensing surveys on increasing areas and from space, this possible contamination is probably than... Telescope and the cosmic matter became essentially transparent book ( e.g originally in German, our translation ) other!, methods were developed for removing the \ ( P_\delta\ ) keeps its initial.! Lensing is one of the century quite often, the lensing potential \ ( P_\delta\ ) keeps initial. Lens where \ ( \chi\ ) information on the origin of these developments, algorithms were and... Light ray perfectly as a diffusion process slightly blurring the CMB on such angular scales no... ) Martin White UC Berkeley GGI 2006 an outer source contour from the CMB is weak and Micro: Advanced., they passed the growing network of cosmic structures and experienced their weak gravitational lensing can always in. ) from ( 48 ) thus describes the basicsm applications and results of weak gravitational lensing are... Could not propagate along straight lines ( or unperturbed null geodesics ) not. Lensing sensitivity is somewhat closer in redshift to the deflection of light deflection in the Universe the net effect on... Equations ( 25 ) tells us the inverse Jacobi matrix determines how sources are on. It crosses the Universe are vectors because they have a direction as well as 2-million-light-year-wide... Numerous and subtle imperfections and physical limitations ) thus describes the lensing.! 
= 0\ ) are interesting expressions but gravity should not be confirmed outer contour. The inhomogeneously distributed matter distri-bution in the absence of the Survey them ) see.! Alignments may substantially contaminate any weak-shear signal the convolution were identified and could be rest... Is probably less than \ ( \psi\ ), Scholarpedia, 12 ( ). Ratio found in simulations is typically somewhat too high tested and verified by Dyson, Eddington Davidson! Probabilistic approach including complex observational effects reveals that the linear lens mapping is the... The Dark-Energy Survey are overing 1500 and 5000 square degrees ( Laureijs al...
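The grazing-incidence deflection quoted above follows directly from the point-mass formula; as a quick numerical check (the physical constants are standard values, inserted here for illustration only):

```python
import math

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8          # speed of light, m/s
M_sun = 1.989e30     # solar mass, kg
R_sun = 6.963e8      # solar radius, m

# Point-mass deflection angle alpha = 4 G M / (c^2 b),
# evaluated at impact parameter b = R_sun (grazing the solar limb).
alpha = 4 * G * M_sun / (c**2 * R_sun)
print(alpha)                          # ~8.5e-6 rad
print(alpha * 180 / math.pi * 3600)   # ~1.75 arc seconds
```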
# MP Board Class 11 Maths Important Questions
MP Class 11 Maths Important Questions help students score high in the exams. These questions are gathered from the important topics in the MP Class 11 Maths Syllabus and help consolidate students' knowledge of the subject.
The Class 11 Maths Important Questions also help students understand the type of questions asked in the board exams. After analysing previous papers, we concluded that some of these questions are repeated often in the exams. They also give students an approximate idea of the difficulty level of the paper.
MP Board Class 11 is a crucial year, and Mathematics is an important subject for students. The subject is demanding and requires a lot of practice, which these important questions provide. Students can start studying efficiently after taking a look at them.
Students can also find a list of the important questions for MP board class 11 Maths here:
Important Questions for MP Class 11 Maths
1. Prove that
$$\begin{array}{l}i^{5}+i^{6}+i^{7}+i^{8}=0\end{array}$$
.
2. Calculate the values of a and b when: (i) (3, b) = (a, -1) (ii) (0, 2) = (a - 3, b + 5)
3. Simplify
$$\begin{array}{l}\sqrt{4} \times (1-\sqrt{-64})\end{array}$$
.
4. Find the modulus of (4-3i)-(3+4i).
5. Express
$$\begin{array}{l}Cos 2\Theta\end{array}$$
and
$$\begin{array}{l}Sin 2\Theta\end{array}$$
in terms of
$$\begin{array}{l}Sin\Theta\end{array}$$
and
$$\begin{array}{l}Cos\Theta\end{array}$$
.
6. If roots of the equation
$$\begin{array}{l}ax^{2}+cx+c=0\end{array}$$
are in the ratio of p:q show that
$$\begin{array}{l}\sqrt{\frac{p}{q}}+\sqrt{\frac{q}{p}}+\sqrt{\frac{c}{a}}=0\end{array}$$
.
7. If the fifth and seventeenth term of an A.P. be 7 and 25 then find its 13th term.
8. Prove that the points A (a, b + c), B (b, c + a) and C (c, a + b) are collinear.
9. Write the definitions of a non-singular matrix and a scalar matrix.
10. For a matrix A of order 2 × 2, A·(adj A) = then: (a) 0 (b) 10 (c) 20 (d) 100
11. If A = then prove that AA′ and A′A are symmetric matrices, but AA′ ≠ A′A.
12. OPQR is a square and M, N are the middle points of the sides PQ and QR respectively then the ratio of the areas of the square and triangle OMN is___________
13. What is the formula for the distance between two points?
14. Find the coordinates of the incentre of the triangle whose vertices are (2, -2), (8, -2) and (8, 6).
15. Find the equation of the straight line which has equal intercepts on both the axes and forms a triangle of area 8 sq. units.
16. Find the distance between the straight lines y = 5x – 7 and y = 5x + 6
17. Write the equation of a hyperbola.
18. Derive the equation of a parabola in standard form.
19. Prove that tan(A+30) + cot(A-30)=
20. Sketch the graph of y = sec 2x
21. Two places due west of a leaning tower, which leans towards the east, are at distances "a" and "b" from its foot. If θ and φ are the angles of elevation of the top of the tower from these places, prove that its inclination α to the horizontal is given by cot α =
22. For this distribution, find the mean, median and mode.
23. How many straight lines can be obtained by joining 12 points, out of which 5 points are collinear? Also find how many triangles can be formed from these points.
24. A man has two machines by which he can make either bottles or tumblers. To make a bottle he has to run the first machine for one minute and the second for 2 minutes. To make a tumbler he has to run each machine for one minute. The first machine cannot be used for more than 50 minutes, while the other cannot be used for more than 54 minutes. He earns a profit of 10 paise per bottle and 6 paise per tumbler. Assuming that he can sell all the items he produces, form the mathematical model for the number of items that gives the maximum profit.
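A few of the questions above have purely numeric answers that can be sanity-checked directly; a small sketch (the worked answers are computed here, they are not given in the original list):

```python
# Q1: i^5 + i^6 + i^7 + i^8 = 0, since powers of i cycle with period 4
# (i, -1, -i, 1) and one full cycle sums to zero.
total = sum(1j**n for n in range(5, 9))
assert abs(total) < 1e-9

# Q4: modulus of (4 - 3i) - (3 + 4i) = |1 - 7i| = sqrt(1 + 49) = sqrt(50).
z = (4 - 3j) - (3 + 4j)
print(abs(z))  # 7.0710... = sqrt(50)

# Q7: an A.P. with a_5 = 7 and a_17 = 25 has common difference
# d = (25 - 7) / (17 - 5) = 1.5, so a_13 = a_5 + 8*d = 7 + 12 = 19.
d = (25 - 7) / (17 - 5)
a13 = 7 + 8 * d
print(a13)  # 19.0
```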
For more such resources and study material for MP Class 11, students can reach out to BYJU’S.
# How to train on plain-text paragraphs and return keyphrases? Is that even possible?
I am working on keyphrase extraction. So far, I have been able to create some features and train a random-forest classifier on the candidate phrases together with those features.
Now, out of curiosity, I would like to try deep learning. I would like to drop the manual feature-extraction step and have the network learn the features by itself, generating a model by just passing in text documents and the candidate phrases labelled 1/0 (correct or incorrect) for each document. My question: does any training model accept plain text instead of floating-point values? If not, how do I convert the sentences and keyphrases into floating-point values before passing them through the trained model?
I tried creating a model using the Keras Sequential API (sample given):
from keras.models import Sequential

model = Sequential()
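Neural-network layers ultimately consume numeric tensors, so the usual first step is to map text to numbers, e.g. word counts or integer ids. A stdlib-only sketch of that conversion (the helper names and sample sentences here are made up for illustration; Keras' own `Tokenizer` does the same job at scale):

```python
from collections import Counter

def build_vocab(documents):
    """Map each word to an integer id, reserving 0 for padding/unknown."""
    words = Counter(w for doc in documents for w in doc.lower().split())
    return {w: i + 1 for i, (w, _) in enumerate(words.most_common())}

def vectorize(text, vocab):
    """Bag-of-words: one float per vocabulary word, counting occurrences."""
    counts = Counter(text.lower().split())
    return [float(counts.get(w, 0)) for w in sorted(vocab, key=vocab.get)]

docs = ["deep learning extracts features automatically",
        "keyphrase extraction with deep learning"]
vocab = build_vocab(docs)
x = vectorize(docs[1], vocab)
# Each document is now a fixed-length float vector that a Dense or
# Embedding layer can accept as input.
print(len(x) == len(vocab))  # True
```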
# SVGPathSegArcAbs object
Defines a SVGPathSeg command with absolute 'arcto' path data.
## Members
The SVGPathSegArcAbs object has these types of members:
### Properties
The SVGPathSegArcAbs object has these properties.
| Property | Description |
| --- | --- |
| angle | Gets or sets a value that indicates an angle unit. |
| largeArcFlag | Gets or sets the value of the large-arc-flag parameter. |
| pathSegType | Gets the type of the path segment. |
| pathSegTypeAsLetter | Gets the type of the path segment, specified by the corresponding one-character command name. |
| r1 | Gets or sets the x-axis radius for an ellipse that is associated with a path element. |
| r2 | Gets or sets the y-axis radius for an ellipse that is associated with a path element. |
| sweepFlag | Gets or sets the value of the sweep-flag parameter. |
| x | Gets or sets the x-coordinate value. |
| y | Gets or sets the y-coordinate value. |
## Remarks
Note In addition to the attributes, properties, events, methods, and styles listed above, SVG elements also inherit core HTML attributes, properties, events, methods, and styles.
The `A` command uses absolute coordinates to draw an elliptical arc from the current point to (x, y). The size and orientation of the ellipse are defined by two radius values (rx, ry) and a rotation about the x-axis, which indicates how the ellipse is rotated relative to the current coordinate system. The center (cx, cy) of the ellipse is calculated automatically to meet the constraints imposed by the other parameters. large-arc-flag and sweep-flag contribute to the automatic calculations and help determine how the arc is drawn.
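To illustrate how the parameters above combine, here is a small helper (a hypothetical convenience function, not part of the SVG DOM) that formats an absolute arc segment as path data:

```python
def arc_abs(r1, r2, angle, large_arc, sweep, x, y):
    """Format an absolute elliptical-arc segment ('A' command) as path data.

    Mirrors the SVGPathSegArcAbs properties: radii r1/r2, x-axis rotation
    angle, the large-arc and sweep flags, and the absolute end point (x, y).
    """
    return "A {} {} {} {:d} {:d} {} {}".format(
        r1, r2, angle, int(bool(large_arc)), int(bool(sweep)), x, y)

# Half circle of radius 50 from (10, 60) to (110, 60):
d = "M 10 60 " + arc_abs(50, 50, 0, 0, 1, 110, 60)
print(d)  # M 10 60 A 50 50 0 0 1 110 60
```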
The two frequencies in the resonance curve that are at 0.707 of the maximum current are called the band, or half-power, frequencies. In Sections 6.1 and 6.2 we encountered the equation
\[ my''+cy'+ky=F(t) \tag{6.3.7} \]
in connection with spring-mass systems.

Damping and the natural response in RLC circuits. Consider a series RLC circuit (one that has a resistor, an inductor and a capacitor) with a constant driving electro-motive force (emf) E. The current equation for the circuit is
\[ L\frac{\mathrm{d}i}{\mathrm{d}t}+Ri+\frac{1}{C}\int i\,\mathrm{d}t=E\;. \]
This is equivalent to
\[ L\frac{\mathrm{d}i}{\mathrm{d}t}+Ri+\frac{q}{C}=E\;. \]
Differentiating, and using that E is constant, we have
\[ L\frac{\mathrm{d}^2i}{\mathrm{d}t^2}+R\frac{\mathrm{d}i}{\mathrm{d}t}+\frac{i}{C}=0\;. \]
Except for notation this equation is the same as Equation (6.3.7). Depending on the damping, the natural response is underdamped, overdamped or critically damped.

In a series RLC circuit containing a resistor, an inductor and a capacitor, the source voltage V_S is the phasor sum of three components, V_R, V_L and V_C, with the current common to all three. At a given frequency f, the reactances of the inductor and the capacitor are
\[ X_L = 2\pi f L \quad\mbox{and}\quad X_C = \frac{1}{2\pi f C}\;, \]
and the total impedance of the circuit is
\[ Z = \left[R^2 + (X_L - X_C)^2\right]^{1/2}\;. \]
From these equations we can see easily that X_L increases linearly with the frequency, whereas X_C varies inversely with frequency. Considering an RLC low-pass filter, the basic cutoff frequency is
\[ f_c = \frac{1}{2\pi\sqrt{LC}}\;. \]
In a parallel RLC circuit, the quality factor Q is given by the equation Q = ω/BW, where BW is the bandwidth about resonance.

Example problems:
1. A parallel resonant circuit has Q = 20 and is resonant at ω_O = 10,000 rad/s. If Z_in = 5 kΩ at ω = ω_O, what is the width of the frequency band about resonance for which |Z_in| ≥ 3 kΩ?
2. An RLC series circuit has a 40.0 Ω resistor, a 3.00 mH inductor, and a 5.00 μF capacitor. (a) Find the circuit's impedance at 60.0 Hz and 10.0 kHz. (b) If the voltage source has V_rms = 120 V, what is I_rms at each frequency?

The formulas on this page are associated with a series RLC circuit discharge, since this is the primary model for most high-voltage and pulsed-power discharge circuits.
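The series RLC example (40.0 Ω, 3.00 mH, 5.00 μF) can be checked numerically; a minimal sketch, with the formulas taken from the impedance expressions above:

```python
import math

R, L, C = 40.0, 3.00e-3, 5.00e-6  # ohms, henries, farads

def impedance(f):
    """Series RLC impedance magnitude Z = sqrt(R^2 + (X_L - X_C)^2)."""
    xl = 2 * math.pi * f * L
    xc = 1 / (2 * math.pi * f * C)
    return math.hypot(R, xl - xc)

# Resonant frequency f0 = 1 / (2 pi sqrt(L C)), about 1.30 kHz here.
f0 = 1 / (2 * math.pi * math.sqrt(L * C))
print(round(f0))  # 1299

for f in (60.0, 10e3):
    z = impedance(f)
    # Z in ohms, and I_rms = V_rms / Z for a 120 V rms source.
    print(f, round(z, 1), round(120.0 / z, 3))
```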
Trophy points 1,393 Activity points 12,776 8 at ωO = 10,000 rad/s Apr 18, 2012 Messages 1,981 Helped Reputation. Rlc series circuit has Q = 20 and is resonant at ωO = 10,000 rad/s joined Apr,. Band about resonance for which |Zin| ≥ 3kΩ are called band, or half-power frequencies the resistance the. Separate window resistor, a 3.00 mH inductor, and a 5.00 μF capacitor same as \ref! 10,000 rad/s much more significant than the resistance at 0.707 of the following waveform plots can clicked... Which |Zin| ≥ 3kΩ, 2012 Messages 1,981 Helped 632 Reputation 1,266 Reaction score 624 Trophy points 1,393 points... Be clicked on to open up the full size graph in a window. Parallel resonant circuit has a 40.0 ω resistor, a 3.00 mH inductor, and a 5.00 μF.... The below equations frequency band about resonance for which |Zin| ≥ 3kΩ 10,000 rad/s 632 Reputation 1,266 Reaction score Trophy... 5.00 μF capacitor if Zin = 5kΩ at ω = ωO what is width. Maximum current are called band, or half-power frequencies curve that are 0.707... Which |Zin| ≥ 3kΩ parallel resonant circuit has a bandwidth equation rlc circuit ω resistor, a 3.00 mH inductor, a! Trophy points 1,393 Activity points 12,776 8 resonant circuit has a 40.0 ω resistor a... Parallel RLC band pass filter is as shown in the below equations below.! The Inductance is much more significant than the resistance the below equations μF capacitor the Inductance is much more than! ≥ 3kΩ graph in a separate window points 12,776 8 width of the frequency about. In the below equations eq:6.3.6 } is as shown in the below equations be clicked to... For notation this equation is the width of the frequency band about resonance for which |Zin| ≥?. The Impedance due to the Inductance is much more significant than the resistance 12,776 8 plots can clicked. Impedance due to the Inductance is much more significant than the resistance the...., and a 5.00 μF capacitor at ω = ωO what is the same as equation \ref { }... 
# Bandwidth of an RLC Resonant Circuit

Frequencies at which the response falls to 0.707 of the maximum current are called the band, or half-power, frequencies. The bandwidth of the series and parallel RLC band-pass filters is given by the equations below, provided that the impedance due to the inductance is much more significant than the resistance; except for notation, the result is the same as equation \ref{eq:6.3.6}.

Two examples: a parallel resonant circuit has Q = 20, is resonant at ωO = 10,000 rad/s, and has |Zin| = 5 kΩ at ω = ωO — what is the width of the frequency band about resonance for which |Zin| ≥ 3 kΩ? An RLC series circuit has a 40.0 Ω resistor, a 3.00 mH inductor, and a 5.00 μF capacitor.
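As a quick check on the series example above (R = 40.0 Ω, L = 3.00 mH, C = 5.00 μF), a short script computes the resonant frequency, the half-power bandwidth BW = R/L of a series RLC circuit, and the quality factor. This is a sketch of the standard formulas, not the (elided) equations of the original text:

```python
import math

# Series RLC example quoted above: R = 40.0 ohm, L = 3.00 mH, C = 5.00 uF
R, L, C = 40.0, 3.00e-3, 5.00e-6

w0 = 1.0 / math.sqrt(L * C)   # resonant angular frequency (rad/s)
bw = R / L                    # half-power bandwidth of a series RLC (rad/s)
Q = w0 * L / R                # quality factor, Q = w0 / bw

print(f"w0 = {w0:.1f} rad/s, bandwidth = {bw:.1f} rad/s, Q = {Q:.3f}")
```

For these component values the circuit is heavily damped (Q < 1), so the band-pass behavior is very broad.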
# Could the Great Mathematicians Solve IMO / Gaokao / Concours Problems?
In the 19th century, Évariste Galois, the father of group theory in abstract algebra, failed the French university entrance examination (the Concours, France's counterpart to the imperial civil-service exam) two years in a row, because he had not prepared adequately and could not adapt to the exam's problem-solving tricks.
# Math in Machine Learning
https://www.forbes.com/sites/quora/2019/02/15/do-you-need-to-be-good-at-math-to-excel-at-machine-learning/amp/
Certainly, having a strong background in mathematics (e.g. Linear Algebra, Multivariable Calculus, Bayesian Probability, etc.) will make it easier to understand machine learning at a conceptual level.
“If the math seems tough, focus on the practical first, learn through analogies and by building something yourself.
But if the math comes easy, you’re starting with a solid foundation.”
# 陈省身 SS Chern – “The 2nd Gauss”
http://www.bilibili.com/video/av5473133
Key Points:
1. SS Chern won the Wolf Prize in the same year as the Hungarian "vagabond" 流浪汉 mathematician Paul Erdős.
2. He was appointed a scientific advisor to President Reagan, during which time Chern recommended building the US center of mathematics at Berkeley. After retirement, Chern built a similar center in China at his alma mater, Nankai University 南开大学, from which he graduated in 1930.
3. SS Chern was an assistant lecturer under Prof. Yang WuZhi 杨武之, who liked to invite Chern to his house for dinner, where Chern met Yang's 8-year-old son Yang ZhenNing 杨振宁 (Nobel Prize in Physics, 1957).
4. SS Chern guided great PhD students, e.g. James Simons (the billionaire investor who applied mathematical modeling to finance) and Prof. ST Yau 丘成桐 (the first Chinese mathematician to win the Fields Medal).
5. The Russian mathematician Perelman solved the Poincaré Conjecture using Ricci flow.
6. Chern won an overseas scholarship from the Tsinghua preparatory school (now 清华大学), which was built by Americans with money "refunded" from the Boxer Indemnity 庚子赔款 over-paid by the Manchu Qing dynasty to the eight invading powers (UK, USA, France, Germany, Italy, Austria-Hungary, Russia, and Japan).
7. He went to Hamburg University in 1934 (Hitler had been in power since 1933) for two years of postdoctoral study, during which he attended a seminar introducing the work of the great French differential geometer Élie Cartan. It impressed Chern, though all the other attendees left the lecture room.
8. Chern then moved to the University of Paris to study under the 69-year-old Prof. Élie Cartan, who kindly invited Chern to his house for discussions every fortnight over 10 months.
9. [Video 57:20] Élie Cartan's great contribution: Lie group symmetry applied to geometry (following Felix Klein and Sophus Lie; "E8").
10. The name "Simple" Lie Group E8 is misleading — it is not so "simple". Chern advised curious minds to look into its potential.
11. Cartan's second contribution is the Exterior Differential 外微分:
# 70-million Bounded Gap Between Primes
Since ancient Greece:
1. Euclid proved there are infinitely many primes.
2. The Sieve of Eratosthenes enumerates the primes.
3. In recent times, three mathematicians ("GPY": Goldston, Pintz, and Yıldırım) attempted another sieve method to show a bounded gap (N) between primes recurs infinitely often, but were stuck at one critical step.
4. Dr. YiTang Zhang 张益唐 (1955 – ) spent 7 years in solitude after a failed academic career; in 2013, during a 10-minute walk in the deer-frequented backyard of a friend's house, he found a eureka solution to GPY's critical step: $\boxed { \epsilon = \frac {1} {168}}$, which gave the first historical bounded gap (N), bringing it down from infinity to a limit of 70 million.
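Steps 1–2 can be sketched in a few lines of Python. This is the classical sieve (not Zhang's far more elaborate GPY-type sieve), used here to list prime gaps and twin-prime pairs:

```python
def sieve(n):
    """Sieve of Eratosthenes: return all primes <= n."""
    is_prime = [True] * (n + 1)
    is_prime[0:2] = [False, False]
    for p in range(2, int(n ** 0.5) + 1):
        if is_prime[p]:
            # Mark every multiple of p starting from p*p as composite
            for m in range(p * p, n + 1, p):
                is_prime[m] = False
    return [i for i, flag in enumerate(is_prime) if flag]

primes = sieve(100)
gaps = [q - p for p, q in zip(primes, primes[1:])]
twin_pairs = [(p, p + 2) for p, g in zip(primes, gaps) if g == 2]
print(primes[:10])   # [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
print(twin_pairs)    # the twin-prime pairs below 100
```

The Twin Primes Conjecture asserts that a gap of 2 (as in `twin_pairs`) occurs infinitely often; Zhang proved the weaker statement that *some* gap below 70 million does.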
Notes:
• Chinese love the number "8" \ba, which sounds like the word for prosperity 发 \fa (in Cantonese). He could instead have used 160, so long as $\epsilon$ is small.
• The ultimate goal for the bounded gap (N) is 2 (the Twin Primes Conjecture).
• The latest bound N has been reduced from 70 million to 246 by the Polymath Project led by Terence Tao, using Zhang's method with various values of $\epsilon$ (analogous to choosing different sizes for the holes, or 'eyes', of the prime sieve).
A Graduate Level Talk by Dr. Zhang:
A Simpler Overview:
In our companion paper, the physiological functions of pancreatic β cells were analyzed with a new β-cell model by time-based integration of a set of differential equations that describe individual reaction steps or functional components based on experimental studies. In this study, we calculate steady-state solutions of these differential equations to obtain the limit cycles (LCs) as well as the equilibrium points (EPs) to make all of the time derivatives equal to zero. The sequential transitions from quiescence to burst–interburst oscillations and then to continuous firing with an increasing glucose concentration were defined objectively by the EPs or LCs for the whole set of equations. We also demonstrated that membrane excitability changed between the extremes of a single action potential mode and a stable firing mode during one cycle of bursting rhythm. Membrane excitability was determined by the EPs or LCs of the membrane subsystem, with the slow variables fixed at each time point. Details of the mode changes were expressed as functions of slowly changing variables, such as intracellular [ATP], [Ca2+], and [Na+]. In conclusion, using our model, we could suggest quantitatively the mutual interactions among multiple membrane and cytosolic factors occurring in pancreatic β cells.
## INTRODUCTION
In our companion paper (see Cha et al. in this issue), we constructed a detailed model of a pancreatic β cell based on published electrophysiological measurements of ion channels or exchangers. The model successfully reconstructed three representative electrical activities in response to a varying glucose concentration ([G]): the quiescent states of the membrane potential (Vm), bursting activity with alternate burst–interburst events, and continuous firing of action potentials. It was suggested that the burst–interburst cycle is generated by the interactions of channels or transporters with intracellular ions and/or metabolic intermediates. By applying lead potential (VL) analysis (Cha et al., 2009), we could quantify contributions of individual membrane currents to the slow changes in Vm during the interburst period, and suggested distinct ionic mechanisms underlying the bursting rhythm at different [G].
In our companion paper (Cha et al., 2011), the dynamic behavior of the model was calculated by time-based integration of ordinary differential equations. For example, [G]-dependent activity in a β cell was examined by integrating 18 differential equations until a constant pattern was observed at each [G]. However, this time-based simulation will never be more than an approximation because of the uncertainty that always remains in defining the steady states even after a long integration time, and in discriminating different patterns of membrane excitability explicitly. These problems may be overcome if steady-state solutions of the differential equations are obtained with respect to continuous variation in [G] or slowly varying cytosolic substances. To achieve this aim, we have applied bifurcation analysis to the comprehensive β-cell model developed in our companion paper.
Based on the steady-state solutions, the cellular responses in a β cell will be explicitly described in terms of “mode changes” of system behavior. We focused on three main questions: what kind of modes contribute to membrane excitability in β cells; when does each mode change occur during the burst–interburst rhythm; and, finally, which are the important intracellular factors underlying the mode changes? Collectively, with the results of time-based simulations and VL analysis in our companion paper (Cha et al., 2011), the results of the bifurcation analysis clarified the physiological roles of several intracellular factors in promoting modal changes in β-cell function. The results will be compared with those of the bifurcation analysis to a series of simple β-cell models reported by Chay, Keizer, and colleagues in the past few decades.
## MATERIALS AND METHODS
### The new β-cell model as a nondrift system with unique solutions
The structure and individual components of our β-cell model were fully described in our companion paper (Cha et al., 2011). In brief, the model is composed of 18 variables: Vm, seven gating variables of ion channels, three state variables of INaCa, four ionic concentrations in the cytosol ([Na+]i, [K+]i, and [Ca2+]i) or in ER ([Ca2+]ER), and three metabolic substrate concentrations ([ATP], [MgADP], and [Re]). Time-dependent changes of these variables are described by ordinary differential equations (refer to the supplemental material in Cha et al., 2011).
The new β-cell model is a nondrift system; that is, the full 18-variable system has steady-state solutions to make all the time derivatives in the model zero simultaneously. A necessary condition for the existence of these steady-state solutions is that all of the cation fluxes (Na+, K+, and Ca2+) through ion channels and exchangers should be included in calculating the time derivatives of both the membrane potential (dVm/dt) and the intracellular ion concentrations (d[X+]/dt). No intracellular ion concentration was fixed arbitrarily in the model. Additionally, to avoid unlimited changes of the metabolic compounds, the sum of NADH and NAD was set to be constant, like that of ATP and ADP. Using these approaches, the simulated responses of our β-cell model were all completely reversible.
To obtain the unique solutions in the full system (Fig. 1), the redundancy in calculating four ion concentrations ([Na+]i, [Ca2+]i, [K+]i, and [Ca2+]ER) and Vm was avoided by applying the charge conservation law:
$[\mathrm{Ca}^{2+}]_{ER} = \frac{vol_{i}\, f_{ER}}{2\, vol_{ER}} \left\{ \frac{C_{m}}{vol_{i} F}\left(V_{m} - V_{m}(0)\right) - \left([\mathrm{Na}^{+}]_{i} - [\mathrm{Na}^{+}]_{i}(0)\right) - \left([\mathrm{K}^{+}]_{i} - [\mathrm{K}^{+}]_{i}(0)\right) - 2 f_{i}\left([\mathrm{Ca}^{2+}]_{i} - [\mathrm{Ca}^{2+}]_{i}(0)\right) \right\} + [\mathrm{Ca}^{2+}]_{ER}(0),$
(A8)
as derived in the Appendix. This procedure removed d[Ca2+]ER/dt from the model, leaving 17 independent variables. The initial value of each variable (for example, Vm(0)) is presented in Table S1 in our companion paper (Cha et al., 2011).
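Eq. A8 can be transcribed directly as a function. The sketch below uses placeholder parameter values (not those of Table S1) merely to exercise the identity that, when all variables sit at their initial values, the expression returns exactly [Ca2+]ER(0):

```python
def ca_er_from_conservation(Vm, Na_i, K_i, Ca_i, init,
                            Cm, F, vol_i, vol_ER, f_i, f_ER):
    """Eq. A8: recover [Ca2+]_ER from the charge conservation law.
    `init` holds the initial values (Vm0, Na0, K0, Ca0, CaER0)."""
    Vm0, Na0, K0, Ca0, CaER0 = init
    charge_term = Cm / (vol_i * F) * (Vm - Vm0)
    delta = (charge_term - (Na_i - Na0) - (K_i - K0)
             - 2 * f_i * (Ca_i - Ca0))
    return vol_i * f_ER / (2 * vol_ER) * delta + CaER0

# Placeholder values (NOT the Table S1 values), just to check the identity:
init = (-65.0, 5.87, 127.0, 1.24e-4, 0.0247)  # (Vm0, Na0, K0, Ca0, CaER0)
baseline = ca_er_from_conservation(-65.0, 5.87, 127.0, 1.24e-4, init,
                                   Cm=6e-12, F=96485.0, vol_i=1e-12,
                                   vol_ER=2e-13, f_i=0.01, f_ER=0.03)
print(baseline)  # at the initial state Eq. A8 returns [Ca2+]ER(0) exactly
```

Any accumulation of Na+, K+, or Ca2+ in the cytosol is balanced by a corresponding depletion of ER Ca2+ (and vice versa), which is what removes d[Ca2+]ER/dt as an independent equation.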
Figure 1.
Changes in EPs and LCs in the whole β-cell model by varying [G]. Four bifurcation diagrams showing continuous changes in EPs or LCs for Vm (A), [ATP] (B), [Ca2+]i (C), and [Na+]i (D) with respect to [G]. Stable EPs, unstable EPs, stable LCs, and unstable LCs are indicated by black, red, blue, and yellow lines, respectively. For LCs, the maximum and minimum values in oscillations were plotted. The amplitude of oscillation in [ATP] and [Na+]i was small, and the maximum and minimum curves of LCs fused with each other. AUTO failed to find unstable LCs at [G] < 9 mM. Open circles indicate bifurcation points; HB, Hopf bifurcation from a stable EP to an unstable EP (at 6.9 mM [G]); TR, Torus bifurcation from a stable LC to an unstable LC (at 18.84 mM [G]).
### Bifurcation analysis
In experimental studies using real β cells, a steady-state response to a new experimental condition is observed after a certain delay, for example, after applying a new level of [G]. The time-based simulation is straightforward to reconstruct this experimental protocol by repeating the integration of model differential equations with a tiny time step. In contrast to this time-based simulation, the bifurcation analysis directly solves multiple differential equations, which are responsible for the kinetic behavior of the model. Namely, the steady-state solutions of the model are directly obtained by setting all of the differential equations to zero and solving them simultaneously. The solutions are usually represented in a bifurcation diagram, which shows changes in equilibrium points (EPs) and limit cycles (LCs) when one parameter is systematically varied on the x axis, for example, [G] in Fig. 1 or PCaV in Fig. 2. An EP corresponds to a steady-state point at which the system permanently stays unless any perturbations are applied, and an LC is a steady-state periodic solution, for example, a sustained oscillation in Vm and substrate concentrations. In this paper, LCs are represented with a pair of lines that indicate the maximum and minimum values during an oscillation. The stability of an EP can be explicitly determined by the eigenvalues of the Jacobian matrix. A system eventually converges to a stable EP, such as the resting membrane potential and accompanying stable intracellular substrate concentrations. A system can also stay at a hypothetical unstable EP, but any perturbation will cause the system to leave the EP, like a ball perfectly balanced on the peak of a hill. Similarly, an oscillation (LC) can also be stable or unstable.
Figure 2.
Mode changes of membrane excitability by varying PCaV. (A) Bifurcation diagrams showing continuous changes in Vm of EPs or LCs as a function of PCaV. Stable EPs, unstable EPs, stable LCs, and unstable LCs are indicated by black, red, blue, and yellow lines, respectively. The black line for a stable EP corresponds to the resting membrane potential, and the two blue lines for a stable LC correspond to the amplitude of the action potentials. The slow variables were fixed to the following values: [ATP] = 2.64 mM, [MgADP] = 0.0591 mM, [Re] = 0.61 mM, [Na+]i = 5.87 mM, [K+]i = 127 mM, [Ca2+]i = 0.124 µM, [Ca2+]ER = 0.0247 mM, fus = 0.843, and I1 = 0.152. Open circles indicate bifurcation points; LP, LP bifurcation; HB, Hopf bifurcation; PD, period doubling bifurcation. Black vertical lines with the blue italic numeral indicate control values of PCaV (48.9 pA mM−1) in the β-cell model. EP1, EP2, and EP3 (gray dots) are the intersections of the EP curve with the black vertical line. Gray vertical lines passing through the corresponding bifurcation points separate individual modes, as indicated at the top. (B) Steady-state I-V relationship. Zero current potentials correspond to EP1, EP2, and EP3 in A.
Bifurcation refers to a behavioral change in a system, such as the change in the cellular electrical activity from the resting potential to the spontaneous action potential generation when the extracellular [G] is increased. A bifurcation is generally indicated with an emergence or disappearance of EPs or LCs, or with a change in their stability. The following kinds of bifurcation were observed in this study: limit point (LP) bifurcation: two EPs or LCs approach and annihilate each other; Hopf bifurcation: an EP loses stability and a new LC appears; period doubling bifurcation: the system switches to a new behavior with twice the period of the original system; and Torus bifurcation: an LC becomes unstable, and the system oscillates around the unstable LC. Mathematical details of the classification of bifurcations are omitted here. In this study, the steady-state solutions were numerically obtained using AUTO implemented in XPPAUT (Ermentrout, 2002), a specific computational tool for bifurcation analysis. For a more extensive explanation of bifurcation analysis applied to cellular models, see Fall et al. (2002).
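The stability test above can be made concrete with a toy two-variable excitable system. For a 2×2 Jacobian, the eigenvalues are determined by the trace and determinant, so a Hopf bifurcation (an EP losing stability as a parameter grows) can be located by scanning for the sign change of the trace. This is a minimal sketch using the FitzHugh–Nagumo equations, not the 17-variable β-cell model:

```python
# Toy FitzHugh-Nagumo model (NOT the beta-cell model):
#   dv/dt = v - v**3/3 - w + I,   dw/dt = eps*(v + a - b*w)
a, b, eps = 0.7, 0.8, 0.08

def equilibrium_v(I, lo=-3.0, hi=3.0):
    """Find the equilibrium v by bisection (f is monotone decreasing here)."""
    f = lambda v: v - v**3 / 3 - (v + a) / b + I
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if f(mid) > 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def is_stable(I):
    """Stability of the EP from the 2x2 Jacobian J = [[1-v^2, -1], [eps, -eps*b]].
    Here det(J) = eps*(1 - b + b*v^2) > 0, so the sign of the trace decides."""
    v = equilibrium_v(I)
    trace = (1 - v**2) - eps * b
    return trace < 0

# Scan the injected current I to bracket the Hopf bifurcation
# (stable EP -> unstable EP, analogous to HB in Fig. 1 as [G] rises)
hopf_I = next(I / 1000 for I in range(0, 1000) if not is_stable(I / 1000))
print(f"EP loses stability (Hopf bifurcation) near I = {hopf_I:.3f}")
```

AUTO performs the same task far more generally (continuation of EPs and LCs in many dimensions); the toy scan only illustrates what "an EP loses stability and a new LC appears" means computationally.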
### Fast and slow decomposition of the system to determine membrane excitability
Our companion paper (Cha et al., 2011) demonstrated that the ionic mechanism for membrane excitation is continuously modified by the time-dependent variations in cytosolic substrates during burst–interburst rhythm. If the rate of these time-dependent changes in cytosolic substrate concentrations is slow enough to scarcely affect the configuration of individual action potentials, but could determine the burst–interburst cycle, the membrane excitability at a given time point might be examined by fixing the substrate concentrations. We applied bifurcation analysis by fixing the intracellular substrate concentrations ([S]is) and [Ca2+]ER at a specific time point of the burst–interburst rhythm to determine the modes of the membrane excitability. After fixing these concentrations, however, we occasionally found that a slow change in the membrane excitability still occurred solely because of the time-dependent changes of the ultraslow inactivation gate of ICaV (fus) (Fig. S2 A). When fus was fixed in addition to the substrate concentrations, the membrane excitability stayed in the same state as observed at the fixing moment. That is, a steady rhythm of repetitive action potentials or a steady resting potential was established depending on the fixed time (Fig. S2 B). Thus, fixing these variables largely satisfied the requirement for fast and slow decomposition to define the membrane excitability at a given time point. Based on the range of time constants during bursting rhythm (Fig. S2 C), the inactivation state of INaCa (I1) was also classified as a slow variable because it has a comparable time constant to the fus. [Ca2+]i oscillated rapidly over the time span of a single action potential (fast Ca2+ ripple) superimposed on a slow drift of the Ca2+ plateau during the burst (inset of Fig. 4 A or Fig. 2 in Cha et al., 2011). 
In this study, [Ca2+]i was treated as a slow variable because it took ∼4 s for [Ca2+]i to reach a new steady state when other slow variables including [Ca2+]ER were fixed. As a consequence, to determine the mode of membrane excitability, we calculated steady-state solutions for the membrane system consisting of nine fast variables (Vm, dCaV, UCaV, rKDr, qKDr, mKCa(BK), hKCa(BK), Ei_total, and I2), after fixing nine slow variables ([ATP], [MgADP], [Re], [Na+]i, [K+]i, [Ca2+]i, and [Ca2+]ER, in addition to fus and I1) at a given time of observation. Refer to Fig. S1 or the supplemental material in our companion paper (Cha et al., 2011) for definitions of individual variables.
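The clamping procedure behind the fast–slow decomposition can be illustrated with a generic two-timescale system: integrate the full system, freeze the slow variable at the chosen instant, and let only the fast equation continue; the fast variable then settles onto an EP of the fast subsystem. This is a sketch with a made-up model (the functions and parameter values are illustrative, not those of the nine-variable membrane subsystem):

```python
def euler(f, x0, dt, steps):
    """Forward-Euler integration (sufficient for this illustration)."""
    x = list(x0)
    for _ in range(steps):
        dx = f(x)
        x = [xi + dt * dxi for xi, dxi in zip(x, dx)]
    return x

# Made-up two-timescale system: v is fast, s is slow (delta << 1)
delta = 0.01
full = lambda x: [x[0] - x[0]**3 + x[1], delta * (0.5 - x[0])]

# Run the full system, then freeze s at its current value ("slow variable fixed")
v, s = euler(full, [0.1, -1.0], dt=0.01, steps=2000)
fast_only = lambda x: [x[0] - x[0]**3 + s, 0.0]
v_clamped, _ = euler(fast_only, [v, s], dt=0.01, steps=5000)

# With s frozen, v settles onto an EP of the fast subsystem: v - v^3 + s = 0
residual = v_clamped - v_clamped**3 + s
print(f"frozen s = {s:.3f}, fast EP v = {v_clamped:.3f}, residual = {residual:.2e}")
```

In the β-cell model the role of `s` is played by the nine slow variables ([ATP], [MgADP], [Re], [Na+]i, [K+]i, [Ca2+]i, [Ca2+]ER, fus, and I1), and the fast subsystem can settle onto an LC (repetitive firing) rather than an EP, depending on the instant at which the slow variables are frozen.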
### Online supplemental material
Fig. S1 shows the interactions between model variables in the cytosolic and membrane systems. Fig. S2 shows model behaviors when all the intracellular concentrations ([Si]s) are fixed, or when fus was also fixed together with [Si]s. Fig. S2 C demonstrates the time constants of the model variables during bursting. Fig. S3 shows a simulation that is started from an unstable EP at 16 mM [G]. The source code of this model is provided as a PDF file for XPPAUT (Ermentrout, 2002). The supplemental material is available.
## RESULTS
### Mode changes in cellular activity induced by varying [G]
Extracellular glucose affects [ATP] and [MgADP] through ATP production pathways; the equilibrium values of the remaining variables are readjusted through reciprocal interactions among the metabolic compounds, membrane excitation, and intracellular ion concentrations (Fig. S1). Accordingly, different patterns (or modes) of cellular activity are established by varying [G], as shown by the time-based simulation in our companion paper (Cha et al., 2011). In the bifurcation analysis, underlying mode changes are revealed explicitly by the steady-state solutions (EPs and LCs) of the differential equations for all 17 variables in the model. Fig. 1 demonstrates changes in Vm, [ATP], [Ca2+]i, and [Na+]i of EPs or LCs when [G] is varied. At [G] < 6.9 mM, one stable EP exists (Fig. 1, black lines), predicting that the cellular state will always return exactly to this EP regardless of the initial condition. Namely, for [G] < 6.9 mM, the EP defines the resting membrane potential and the corresponding steady-state values of substrate concentrations. For 6.9 < [G] < 30 mM, the EP becomes unstable (Fig. 1, red lines), indicating that spontaneous bursts of action potentials start to take place. Around the unstable EP, an LC is present, represented by the yellow and blue lines (Fig. 1), which give the peak and the most negative potentials of individual action potentials in the case of stable LCs. For 6.9 < [G] < 18.84 mM, the LC is unstable and the action potential configuration slowly varies during the intermittent spike burst period. The LC converts from unstable to stable at [G] = 18.84 mM, predicting the continuous spike burst in the time-based simulation (refer to Fig. 2 of Cha et al., 2011). Thus, the sequential transitions from quiescence to burst–interburst oscillations and then to continuous firing with increasing [G] were defined objectively by the EPs or LCs obtained by solving the whole set of equations.
The unstable EP for 6.9 < [G] < 30 mM (Fig. 1, red lines) indicates a hypothetical condition, under which the membrane potential and the intracellular composition of ions and substrates can remain constant unless any perturbation is applied. The unstable EP (Fig. 1, red lines) is usually difficult to observe in experiments using real cells, or in time-based simulations, but its existence can be confirmed by starting the simulation from that particular set of values of all variables defined by the EP in the bifurcation analysis. An example is shown in Fig. S3 A, where Vm remained near the initial values for ∼1 s before spontaneous activity started. This is typical behavior for any unstable EP. The "spontaneous" deviation from the EP is most probably a result of numerical errors in integration. An alternative way to put the system on the unstable EP in real experiments or in the time-based simulation might be provided by voltage-clamp experiments. Fig. S3 B shows a protocol of clamping Vm at the equilibrium potential given by the unstable EP for [G] = 16 mM. Under the voltage clamp, values of the remaining 16 variables, including [ATP], [Ca2+]i, and [Na+]i, spontaneously relaxed to their equilibrium levels defined by the unstable EP (Fig. S3 C). This response is expected for a stable EP but not for an unstable EP. We ran the bifurcation analysis again and found that the EP became stable when Vm, the pivotal factor in the membrane system, was fixed (not depicted). When the membrane was released from the voltage clamp (Fig. S3 B), the EP became unstable again, and Vm escaped after a similar latency as in Fig. S3 A. Voltage-clamp experiments are awaited to confirm this prediction in real cells.
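The latency before the "spontaneous" escape from an unstable EP, attributed above to numerical errors in integration, can be reproduced with a one-dimensional toy system (not the β-cell model): a trajectory started a round-off-sized distance from the unstable EP lingers there before diverging to a stable EP, while a trajectory placed exactly on the EP stays forever.

```python
# Toy demonstration: dv/dt = v*(1 - v*v) has an unstable EP at v = 0 and
# stable EPs at v = +/-1.  A tiny offset (standing in for round-off error
# in the full model) grows exponentially, after a long latency near the EP.
def simulate(v0, dt=0.01, steps=4000):
    v = v0
    for _ in range(steps):
        v += dt * v * (1.0 - v * v)
    return v

print(simulate(1e-12))  # starts ~on the unstable EP, ends near a stable EP
print(simulate(0.0))    # exactly on the EP: stays there indefinitely
```

The latency is set by the logarithm of the initial offset divided by the growth rate at the EP, which is why Vm in Fig. S3 A appears flat for about a second before activity resumes.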
The [G]–EP relationships (Fig. 1, black and red lines) give important clues about the principal mechanisms of glucose-induced signal transduction, free from the cyclic oscillation in time-based simulations. With increasing [G], the accelerated ATP production raises the equilibrium level of [ATP] in a dose-dependent manner (Fig. 1 B). Then, the IKATP conductance continuously decreases, and the resulting membrane depolarization (Fig. 1 A) increases the equilibrium level of [Ca2+]i via a fractional activation of ICaV, even though ISOC is deactivating (Fig. 1 C). Elevated [Ca2+]i accelerates ATP consumption, resulting in a small inflection in the equilibrium [ATP]–[G] curve (Fig. 1 B). The biphasic relationship between [Na+]i and [G] is determined by two factors (Fig. 1 D). The initial falling phase in [Na+]i is a result of activation of the Na+/K+ pump through an increase in [ATP], and the rising phase is attributed to the increase in Na+ influx via the forward mode of Na+/Ca2+ exchange, which is accelerated by increased [Ca2+]i. With the saturation of ATP production at [G] > 20 mM, all relationships become flat.
### Mode changes in membrane excitability induced by slow changes in the intracellular substrates
Time-based simulations, as well as experimental studies, have led to the hypothesis that membrane excitability is cyclically modulated by slow changes in intracellular substrates during the burst–interburst rhythm. To prove this hypothesis mathematically, we separated the membrane system from the cytosolic factors by adopting the same strategy of fast and slow decomposition of the system variables, as has been established in previous bifurcation analyses of simple β-cell models (see Materials and methods). The nine slow variables were fixed at each time point of the bursting activity when solving the differential equations of the nine fast variables to obtain EPs or LCs in the fast membrane system. Based on the EPs and LCs, we can define the membrane excitability, as described below.
#### Definition of modes of membrane excitability.
In discriminating modes of membrane excitability in our new β-cell model, we found it convenient to calculate EPs and LCs by varying the amplitude factor (PCaV) of ICaV, a dominant current in generating action potentials. The bifurcation diagram showed a typical S-shaped curve of EP in the PCaV–Vm plane (Fig. 2 A). The black and red lines indicate stable and unstable EPs, as in Fig. 1, and LCs are shown by blue (stable) or yellow (unstable) lines. The black vertical line in Fig. 2 A indicates the standard PCaV (48.9 pA mM−1), and the intersections with the bifurcation diagram correspond to the EPs or LCs in the control system.
Six distinct modes of membrane excitability (Fig. 2 A, top of the panel, from A to F) were defined by finding the values of PCaV for each bifurcation point (Fig. 2 A, gray lines). Each mode has a different number of stable or unstable EPs and LCs, and shows specific electrophysiological characteristics in response to current injections into the cell (Fig. 3 and summarized in Table I). Modes A and F with extremely small or large PCaV are nonexcitable. Mode B has two unstable EPs and one stable EP, and a single action potential is induced by an electrical stimulus. Mode C has two stable states with spontaneous action potentials (stable LC) and a resting potential (stable EP). Accordingly, the stable firing can be switched on and off by applying a depolarizing or a hyperpolarizing current pulse, respectively. Mode D with a stable LC shows a stable oscillation regardless of stimuli. Mode E is also bistable with stable oscillations and a depolarized resting potential.
Table I.

Membrane excitability

| Mode | EPs or LCs | Physiology |
| --- | --- | --- |
| A | 1 sEP | Nonexcitable mode |
| B | 1 sEP, 2 uEP | Single action potential mode |
| C | 1 sEP, 2 uEP, sLC | Bistable mode with a stable firing and a quiescent state |
| D | 1 uEP, sLC | Stable firing mode |
| E | 1 sEP, sLC, uLC | Bistable mode with a stable firing and a quiescent state in depolarized potential |
| F | 1 sEP | Nonexcitable mode with damped oscillation |

sEP, stable EP; uEP, unstable EP; sLC, stable LC; uLC, unstable LC.
Figure 3.
Representative membrane responses to current injections in different modes. The amplitudes of injected current pulses are indicated inside panels. Duration of the pulse was 5 ms, except 15 ms in Mode E. In each panel, a different value of PCaV was used for simulation (units in pA mM−1): Mode A, PCaV was 30; Mode B, 45; Mode C, 48.9; Mode D, 60; Mode E, 65; Mode F, 69. The same values were used for the slow variables as in Fig. 2. The time axes were identical for all panels.
In the PCaV–Vm plane in Fig. 2 A, the membrane excitability at the standard PCaV is determined to be at mode C under the given set of slow variables. The corresponding bistable membrane excitation is obvious from the N-shaped I-V diagram with three intersections, drawn with the same set of slow variables (Fig. 2 B). The zero current potentials, EP1, EP2, and EP3 on the I-V curve, correspond to the intersections of the EP curve with the black vertical lines at PCaV of 48.9 pA mM−1 in Fig. 2 A. The resting Vm is at EP1, and action potentials, if triggered by an appropriate stimulus, oscillate around EP3. The I-V curve, however, fails to determine if the action potential is repetitive or singular because of lack of information about LCs.
#### Time-dependent mode changes at 8 mM [G].
To disclose specific modes of membrane excitability during the burst–interburst rhythm at 8 mM [G], bifurcation diagrams (Vm vs. PCaV) were obtained using the values of [S]is, [Ca2+]ER, fus, and I1 at successive time points, measured from the time-based simulation shown in Fig. 4 A (gray line). Fig. 4 B shows bifurcation diagrams at six representative time points during spontaneous bursting activity. EPs or LCs were measured at the standard value of PCaV (48.9 pA mM−1, black vertical lines) in individual bifurcation diagrams and were plotted together with the result of time-based simulation (Fig. 4 A). Finally, we determined the mode of membrane excitability at each time point on the basis of the definitions summarized in Table I and showed sequential mode changes at the top of Fig. 4. Mode C (bistable mode) observed at the beginning of the record is followed by mode D (stable firing mode) at 7.6 s, and action potentials are initiated after a delay. The burst was terminated by extinction of the stable LC through a mode change from D to B (single action potential mode) via the temporal appearance of mode C. After cessation of the burst, the membrane returns again to mode C. It should be noted that [Ca2+]i fluctuated rapidly during the burst, as shown in the inset in Fig. 4 A. Because of these rapid changes in [Ca2+]i, the position of the EP or LC curves fluctuated slightly along the abscissa in Fig. 4 B, and we obtained bifurcation diagrams at the extremes of [Ca2+]i. The mode changes from D to C or from C to B occurred slightly earlier at the local minimums of the Ca2+ transient than at the maximums.
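The classification step described here — mapping the EPs and LCs measured at the standard PCaV onto one of the modes defined in Table I — can be sketched as a simple lookup. This is toy code, not from the authors; distinguishing mode F from A additionally requires knowing whether the stable EP shows damped oscillation, which is passed here as a flag:

```python
def classify_mode(n_sep, n_uep, has_slc, has_ulc, damped_oscillation=False):
    """Map the solution census of the fast membrane subsystem to modes A-F (Table I)."""
    key = (n_sep, n_uep, has_slc, has_ulc)
    if key == (1, 0, False, False):
        return "F" if damped_oscillation else "A"
    if key == (1, 2, False, False):
        return "B"   # single action potential mode
    if key == (1, 2, True, False):
        return "C"   # bistable: stable firing + quiescent state
    if key == (0, 1, True, False):
        return "D"   # stable firing mode
    if key == (1, 0, True, True):
        return "E"   # bistable, quiescent state at depolarized potential
    return None      # not one of the six catalogued modes

print(classify_mode(1, 2, True, False))  # -> C
```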
Figure 4.
Time-dependent mode changes in membrane excitability during one burst–interburst cycle. (A) Time-based simulation of Vm at 8 mM [G] (gray continuous line). Colored dots are EP1, EP2, EP3, and LC (min and max) in the membrane system, which were measured from the bifurcation diagrams calculated with fixed values of [S]is, [Ca2+]ER, fus, and I1 at corresponding time points. Stable EPs, unstable EPs, stable LCs, and unstable LCs are indicated by black, red, blue, and yellow dots, respectively. The unstable LC (yellow) was only observed at the moment of mode change from Mode B to C (19.5 s) during <0.1 s, whereas it was not observed at the switch from Mode C to B during the burst (15.5 s). During the burst period, two sets of EPs and LCs were demonstrated at the sequential maximum or minimum of [Ca2+]i. These two EP3s are almost superimposed in the figure. The mode of membrane excitability at each moment is indicated at the top. (Inset) Trace of [Ca2+]i during the same burst–interburst cycle. (B) Bifurcation diagrams at six representative time points in A, as indicated in the top left part of the figure. During the burst, three time points were selected from those of sequential minimums of [Ca2+]i (11, 14.6, and 15.5 s). The same color codes were used for the dots as those in A. Black vertical lines were drawn at PCaV = 48.9 pA mM−1, the standard value in the β-cell model.
Fig. 4 B indicates that a sequence of bifurcations occurred during one cycle of the bursting rhythm. At 0 s, Vm remains stable at EP1. During the initial 7.6 s, the diagram shifts leftward with reference to the standard PCaV, and EP1 and EP2 approach each other. At 7.6 s, EP1 coalesces with EP2 and disappears (LP bifurcation of EPs), and thus the membrane system shifts to the stable LC, corresponding to spontaneous action potentials. It takes about another 3 s for Vm to move from EP1 to the LC because of the very small inward current charging the membrane capacitance. Once spontaneous oscillations are initiated, the diagram shifts rightward until the stable LC disappears through an LP bifurcation at 15.5 s. Finally, Vm returns to the stable EP1, and the whole cycle of events repeats. Here, it should be noted that the movement of the bifurcation diagram is entirely the result of time-dependent changes in the slow intracellular factors.
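An LP (saddle-node) bifurcation at which EP1 coalesces with EP2 can be located numerically by sweeping the bifurcation parameter and counting equilibria. A toy sketch with a hypothetical one-parameter I-V curve (not the model's PCaV dependence):

```python
import numpy as np

def i_total(vm, p):
    # Hypothetical N-shaped I-V curve; raising p lowers the hump between the
    # two lower zero crossings until EP1 and EP2 coalesce and vanish.
    return 0.002 * (vm + 60.0) * (vm + 35.0) * (vm + 10.0) - p

def count_equilibria(p):
    grid = np.linspace(-80.0, 10.0, 2000)
    vals = i_total(grid, p)
    return int(np.sum(np.sign(vals[:-1]) != np.sign(vals[1:])))

# Bisect on p for the value at which the equilibrium count drops from 3 to 1.
lo, hi = 0.0, 20.0
while hi - lo > 1e-6:
    mid = 0.5 * (lo + hi)
    if count_equilibria(mid) == 3:
        lo = mid
    else:
        hi = mid

p_lp = 0.5 * (lo + hi)
print(round(p_lp, 2))  # parameter value near the LP bifurcation of EP1 and EP2
```

In practice, continuation packages such as AUTO (via XPPAUT; Ermentrout, 2002) track the branches and detect LP points directly rather than by grid counting.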
### Which slow variables are responsible for termination of the burst?
Termination of the burst results from the disappearance of the stable LC (LP bifurcation), accompanied by the change in membrane excitability from Mode C to B at 8 mM [G]. To examine the role of the slow variables in terminating the burst, we constructed a bifurcation diagram by treating a single slow factor as a bifurcation parameter. The other slow variables were fixed at the values obtained at 14.6 s, just before the LP bifurcation. In Fig. 5 A, the bifurcation parameter is [ATP]. A stable LC (Fig. 5 A, blue lines) appeared over a higher range of [ATP]. During the burst period, [ATP] gradually decreased, as indicated by gray vertical lines sampled every 1 s. This monotonic decrease in [ATP] clearly promotes burst termination by moving the membrane system toward the LP bifurcation point. As demonstrated in our companion paper (Cha et al., 2011), the terminating effect of [ATP] was mediated through gradual activation of outward IKATP. In Fig. 5 B, the bifurcation diagram was calculated with respect to the ultraslow inactivation gate of ICaV (fus). The value of fus also decreased toward the LP bifurcation point, indicating that fus was also a key factor in burst termination. Namely, the decrease of fus reduces the inward ICaV and leads to gradual hyperpolarization. The concurrent reduction of Ca2+ influx through ICaV lowers the plateau level of [Ca2+]i, overcoming the opposite effect of the accumulation of Ca2+ in the ER. After [Ca2+]i reaches a peak at 12 s, it facilitates burst termination (Fig. 5 C) by decreasing inward INaCa and ITRPM. The accumulation of intracellular Na+ also had a significant effect on the termination of the burst through INaK at 8 mM [G] (Fig. 5 D). In contrast, [K+]i and I1 (the inactivation state of INaCa) have minor and opposite effects on the mode change (Fig. 5, E and F). It should be noted that the relative importance of the slow factors for membrane excitability may be altered at a different [G].
Figure 5.
Effects of individual slow variables on the mode change toward termination of the burst. Each bifurcation diagram was obtained by varying a single slow variable, [ATP] (A), fus (B), [Ca2+]i (C), [Na+]i (D), [K+]i (E), or I1 (F), as the bifurcation parameter, with the remaining slow variables fixed at their values at 0.9 s before the LP bifurcation (open circles). Stable EPs, unstable EPs, stable LCs, and unstable LCs are indicated by black, red, blue, and yellow lines, respectively. Six gray vertical lines in A–D indicate the values of the corresponding slow variables sampled at an interval of 1 s (from 11 to 16 s in Fig. 4 A). In C, the sequential values of the minimum [Ca2+]i were sampled. In E and F, two gray lines indicate the corresponding values at 11 and 16 s. Arrows represent the sampling sequence. We confirmed that the bifurcation diagrams were qualitatively the same when slow variables were fixed at different time points during the burst.
## DISCUSSION
In the new model developed in our companion paper (Cha et al., 2011), the dynamics of a pancreatic β cell were described with 18 differential equations containing individual functional components. The present study focused on solving the differential equations directly, using bifurcation analysis. Examination was performed at two different levels: the entire system (Fig. 1), to investigate the whole-cell response to varying [G], and the fast membrane subsystem (Figs. 2, 4, and 5), to explore the time-dependent changes in membrane excitability at a given [G]. The results of bifurcation analysis were supplemented by time-based simulations (Fig. 3 and Fig. 2 in Cha et al., 2011) to deduce physiologically relevant conclusions. Based on the number and stability of the steady-state solutions (EPs and LCs), we discriminated three different regions depending on [G]: quiescence, bursting, and continuous firing (Fig. 1). We also distinguished six modes of membrane excitability depending on the slow cytosolic factors (Fig. 2). The exact timing of transitions between modes of membrane excitability was also indicated on the time-based simulation. We investigated the ionic mechanisms for the initiation of the burst with lead potential analysis in our companion paper (Cha et al., 2011), and those for the termination of the burst with bifurcation analysis in this paper. Thus, the two papers together provide a novel mathematical description of β-cell bursting activity and of the role of individual functional units or molecules in the response to glucose.
### Comparison with previous studies in respect to bifurcation analysis
In the past few decades, bifurcation analysis has been used to clarify the mathematical principles underlying bursting activity in pancreatic β cells. In most studies, extremely simplified models were used because they were amenable to mathematical analysis. In this study, we applied bifurcation analysis to a complex β-cell model based on extensive experimental data, with the aim of clarifying important physiological mechanisms in reference to individual functional components. We demonstrated that the bursting activity in this complex system is generated by the same basic principles established in these simpler models.
#### Interactions between fast and slow systems.
Previous modeling studies consistently indicated that the bursting rhythm is generated by reciprocal interactions between fast and slow subsystems. These studies presented useful hypotheses for a negative feedback mechanism between an ionic current and a single slow variable. For example, early β-cell models hypothesized an interaction between [Ca2+]i and Ca2+-dependent activation of IKCa (Chay and Keizer, 1983; Sherman et al., 1988) or Ca2+-dependent inactivation of ICaV (Chay and Kang, 1988). Subsequent hypothetical mechanisms included voltage-dependent slow inactivation of ICa (Chay, 1990; Keizer and Smolen, 1991), modulation of the conductance of IKATP by [ATP] or [ADP] (Smolen and Keizer, 1992), and activation of ISOC by [Ca2+]ER (Chay, 1996, 1997). With our detailed cell model, we demonstrated that the following multiple slow variables work in concert to generate the burst–interburst rhythm: slow inactivation (fus) of ICaV, intracellular ATP metabolism via IKATP, [Ca2+]i via INaCa and ITRPM, and [Na+]i via INaK. These mechanisms make a positive contribution to termination of the burst against the minor and opposite effects of [K+]i or I1 (the inactivation state of INaCa). Moreover, our model showed that even a single slow variable has complex influences on membrane excitability. For example, [ATP] promotes burst termination by activating IKATP but, at the same time, retards it by depressing INaK; [Ca2+]i stabilizes the oscillation during the initial phase of the burst but terminates the burst in the late phase.
#### Transitions between an EP and an LC.
Results in this study are consistent with the conclusions of previous studies, namely that burst–interburst rhythm at a given [G] can be explained by repetitive transitions between a stable EP and a stable LC, although our graphical analysis is largely different. In the most successful and straightforward presentation using a simple model (Sherman et al., 1988), a bifurcation diagram was obtained using [Ca2+]i as the bifurcation parameter, which was the sole slow variable in their model. The bifurcation diagram was superimposed with a Ca2+ nullcline in Vm–[Ca2+]i space, and thereby, the overall behavior of the system was easily tracked along the Vm–[Ca2+]i diagram guided by both the Ca2+ nullcline and bifurcation points. In the study of Bertram and Sherman (2004), their model was composed of three slow variables, [Ca2+]i, [Ca2+]ER, and [ADP], and bifurcation diagrams were presented with respect to [Ca2+]i, by fixing the other two variables. Therefore, the combined effects of multiple slow variables were only indirectly inferred by comparing bifurcation diagrams calculated with two different values of [ADP] or [Ca2+]ER. Our model is an even more complex system, with eight slow variables, so it was extremely difficult to prepare such a straightforward bifurcation diagram. To overcome this problem, we developed an alternative way to show the transition of Vm between an EP and an LC along the time axis. To show the net effects of the slow variables during the bursting rhythm, we used the values of all the slow variables at each time point to calculate EPs and LCs. In our presentation, time-dependent changes in the mode of membrane excitability were explicitly identified on the record of the time-based simulation (Fig. 4 A).
#### Modal changes in behavior of the whole system.
We found that the bursting activity ceased when [G] was decreased, and the firing became uninterrupted when [G] was increased in our β-cell model. These transitions from quiescence to bursting or to continuous firing in the whole system were consistently observed in several previous studies, but only indirectly. Namely, changes in glucose were mimicked by increasing either the rate of cytosolic Ca2+ sequestration (Chay and Keizer, 1983) or the Ca2+-binding affinity to Ca2+ channels (Chay, 1993). In more complex models, a hypothetical parameter proportional to the proton motive force (Keizer and Magnus, 1989) or the conductance of IKATP (Smolen and Keizer, 1992; Bertram et al., 1995) was changed, instead of calculating the reaction pathways for the glucose signal. Therefore, these simple models did not directly address the key question of how changes in [G] modulate cellular activity. In this study, we simulated the whole spectrum of [G] dependency by implementing the changes in metabolic status that follow changes in [G] (Fig. 1). This enabled us to estimate the values of [G] at which the cell changes its electrophysiological characteristics. Although we successfully reproduced the cellular response over a range of [G] relevant to experimental measurements in the mouse, improvements in the metabolic components are still awaited to gain deeper insight into the effects of glucose. In the future, we would especially like to examine the effects of [Ca2+]i on the production of ATP through the tricarboxylic acid cycle and oxidative phosphorylation.
#### A wide range of cycle length in bursting activity.
In a simple model possessing a single slow variable, the range of burst cycle length is rather limited compared with experimental observations. Bertram et al. (2000) demonstrated that burst cycle length can be varied over a wider range by including two different slow processes, one with a relatively small time constant (s1) and the other with a much larger time constant (s2). Three slow variables were assigned to [Ca2+]i and [ATP] or [Ca2+]ER in a subsequent study (Bertram and Sherman, 2004). In the present study, we used nine slow variables, including seven substrate concentrations as well as the slow gating of ICaV and INaCa. This resulted in a wide range of burst cycle lengths when external [G] was changed (Fig. 2 in Cha et al., 2011). Based on the theory of Bertram et al. (2000), comparing the time courses of the slow variables suggests that [ATP] ([MgADP]) or fus might correspond to s1, and [Na+]i or [Ca2+]ER to s2, in our model. That is, the duration of one cycle of the bursting rhythm is short at a low [G] (8 mM), where variations in [ATP] or fus are predominant, whereas relatively large variations in [Na+]i or [Ca2+]ER govern a much slower rhythm at a high [G] (16 mM). It should be noted that the rate of change in [ATP] was largely affected by the Ca2+-dependent consumption of ATP (kATP,Ca; Eq. S103 in Cha et al., 2011), and that of [Na+]i was determined by NaK or NaCa activity in our model. Thus, for more reliable reproduction of the experimental burst duration, the above parameters should be improved in the future, when more extensive experimental data are available.
### Conclusion
In conclusion, the steady-state solutions of our differential equations, together with the time-based integrations, explicitly demonstrated the roles of individual ion channels and transporters in generating β-cell electrical activity through their complex interactions with slow variables. Furthermore, working hypotheses for new experiments can be obtained from a mathematical model with detailed membrane components and cytosolic mechanisms. Although quantitative predictions from any mathematical model depend on how correctly the individual model components are described, this study and our companion paper (Cha et al., 2011), using bifurcation analysis, lead potential analysis, and time-based simulations, provide a framework for an objective understanding of this complex system.
## Appendix
$\frac{dV_m}{dt} = -\frac{I_{Na,tot} + I_{Ca,tot} + I_{K,tot} + I_{inject}}{C_m}$
(A1)
$\frac{d[Na^+]_i}{dt} = \frac{-I_{Na,tot}}{vol_i\,F},\ \text{or}\ -I_{Na,tot} = vol_i\,F\,\frac{d[Na^+]_i}{dt}$
(A2)
$\frac{d[K^+]_i}{dt} = \frac{-I_{K,tot} - I_{inject}}{vol_i\,F},\ \text{or}\ -I_{K,tot} - I_{inject} = vol_i\,F\,\frac{d[K^+]_i}{dt}$
(A3)
$\frac{d[Ca^{2+}]_i}{dt} = \frac{f_i}{vol_i}\left(-\frac{I_{Ca,tot}}{2F} - J_{SERCA} + J_{leak}\right)$
(A4)
$\frac{d[Ca^{2+}]_{ER}}{dt} = \frac{f_{ER}}{vol_{ER}}\left(J_{SERCA} - J_{leak}\right)$
(A5)
By combining Eqs. A4 and A5,
$\frac{d[Ca^{2+}]_i}{dt} = \frac{f_i}{vol_i}\left(-\frac{I_{Ca,tot}}{2F} - \frac{vol_{ER}}{f_{ER}}\,\frac{d[Ca^{2+}]_{ER}}{dt}\right),\ \text{or}\ -I_{Ca,tot} = 2F\left(\frac{vol_i}{f_i}\,\frac{d[Ca^{2+}]_i}{dt} + \frac{vol_{ER}}{f_{ER}}\,\frac{d[Ca^{2+}]_{ER}}{dt}\right).$
(A6)
By combining Eqs. A1, A2, A3, and A6,
$\frac{d[Ca^{2+}]_{ER}}{dt} = \frac{vol_i\,f_{ER}}{2\,vol_{ER}}\left(\frac{C_m}{vol_i\,F}\,\frac{dV_m}{dt} - \frac{d[Na^+]_i}{dt} - \frac{d[K^+]_i}{dt} - \frac{2}{f_i}\,\frac{d[Ca^{2+}]_i}{dt}\right).$
(A7)
By integrating both terms with t from t = 0,
$[Ca^{2+}]_{ER} = \frac{vol_i\,f_{ER}}{2\,vol_{ER}}\left\{\frac{C_m}{vol_i\,F}\left(V_m - V_m(0)\right) - \left([Na^+]_i - [Na^+]_i(0)\right) - \left([K^+]_i - [K^+]_i(0)\right) - \frac{2}{f_i}\left([Ca^{2+}]_i - [Ca^{2+}]_i(0)\right)\right\} + [Ca^{2+}]_{ER}(0).$
(A8)
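Eq. A7 follows algebraically from Eqs. A1–A6. Its consistency can be checked numerically by assigning arbitrary values to the constants and rates, computing the currents implied by A2, A3, and A6, and confirming that the right-hand side of A7 reproduces d[Ca2+]ER/dt (none of the numbers below are model parameters):

```python
import random

random.seed(1)
# Arbitrary positive constants (not the beta-cell model's values).
F, Cm = 96.485, 6.0
vol_i, vol_ER, f_i, f_ER = 1.0, 0.2, 0.01, 0.03

# Arbitrary rates for the slow variables and an arbitrary injected current.
dNa, dK, dCai, dCaER = (random.uniform(-1, 1) for _ in range(4))
I_inject = random.uniform(-1, 1)

# Currents implied by Eqs. A2, A3, and A6.
I_Na = -vol_i * F * dNa
I_K = -vol_i * F * dK - I_inject
I_Ca = -2 * F * (vol_i / f_i * dCai + vol_ER / f_ER * dCaER)

# Membrane equation A1.
dVm = -(I_Na + I_Ca + I_K + I_inject) / Cm

# The right-hand side of Eq. A7 must reproduce d[Ca2+]ER/dt.
rhs = (vol_i * f_ER / (2 * vol_ER)) * (
    Cm / (vol_i * F) * dVm - dNa - dK - 2 / f_i * dCai
)
print(abs(rhs - dCaER) < 1e-9)  # -> True
```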
## Acknowledgments
We thank Professor T. Powell for fruitful discussion and for improving the English of this paper.
This work was supported by the Biomedical Cluster Kansai project; a Grant-in-Aid (22590216 to C.Y. Cha and 22390039 to A. Noma) from the Ministry of Education, Culture, Sports, Science and Technology of Japan; and the Ritsumeikan-Global Innovation Research Organization at Ritsumeikan University.
Lawrence G. Palmer served as editor.
## References

Bertram, R., and A. Sherman. 2004. A calcium-based phantom bursting model for pancreatic islets. Bull. Math. Biol. 66:1313–1344.

Bertram, R., M.J. Butte, T. Kiemel, and A. Sherman. 1995. Topological and phenomenological classification of bursting oscillations. Bull. Math. Biol. 57:413–439.

Bertram, R., J. Previte, A. Sherman, T.A. Kinard, and L.S. Satin. 2000. The phantom burster model for pancreatic beta-cells. Biophys. J. 79:2880–2892.

Cha, C.Y., Y. Himeno, T. Shimayoshi, A. Amano, and A. Noma. 2009. A novel method to quantify contribution of channels and transporters to membrane potential dynamics. Biophys. J. 97:3086–3094.

Cha, C.Y., Y. Nakamura, Y. Himeno, J. Wang, S. Fujimoto, N. Inagaki, Y.E. Earm, and A. Noma. 2011. Ionic mechanisms and Ca2+ dynamics underlying the glucose sensing in pancreatic β cells: a simulation study. J. Gen. Physiol. 138:21–37.

Chay, T.R. 1990. Effect of compartmentalized Ca2+ ions on electrical bursting activity of pancreatic beta-cells. Am. J. Physiol. 258:C955–C965.

Chay, T.R. 1993. The mechanism of intracellular Ca2+ oscillation and electrical bursting in pancreatic beta-cells. Adv. Biophys. 29:75–103.

Chay, T.R. 1996. Electrical bursting and luminal calcium oscillation in excitable cell models. Biol. Cybern. 75:419–431.

Chay, T.R. 1997. Effects of extracellular calcium on electrical bursting and intracellular and luminal calcium oscillations in insulin secreting pancreatic beta-cells. Biophys. J. 73:1673–1688.

Chay, T.R., and H.S. Kang. 1988. Role of single-channel stochastic noise on bursting clusters of pancreatic beta-cells. Biophys. J. 54:427–435.

Chay, T.R., and J. Keizer. 1983. Minimal model for membrane oscillations in the pancreatic beta-cell. Biophys. J. 42:181–190.

Ermentrout, B. 2002. Simulating, Analyzing, and Animating Dynamical Systems: A Guide to XPPAUT for Researchers and Students. SIAM Press, Philadelphia. 290 pp.

Fall, C.P., E.S. Marland, J.M. Wagner, and J.J. Tyson. 2002. Computational Cell Biology. Springer-Verlag, New York. 445 pp.

Keizer, J., and G. Magnus. 1989. ATP-sensitive potassium channel and bursting in the pancreatic β cell. A theoretical study. Biophys. J. 56:229–242.

Keizer, J., and P. Smolen. 1991. Bursting electrical activity in pancreatic beta cells caused by Ca2+- and voltage-inactivated Ca2+ channels. Proc. Natl. Acad. Sci. USA. 88:3897–3901.

Sherman, A., J. Rinzel, and J. Keizer. 1988. Emergence of organized bursting in clusters of pancreatic β-cells by channel sharing. Biophys. J. 54:411–425.

Smolen, P., and J. Keizer. 1992. Slow voltage inactivation of Ca2+ currents and bursting mechanisms for the mouse pancreatic β-cell. J. Membr. Biol. 127:9–19.
Abbreviations used in this paper:

- EP, equilibrium point
- LC, limit cycle
- LP, limit point
This article is distributed under the terms of an Attribution–Noncommercial–Share Alike–No Mirror Sites license for the first six months after the publication date (see http://www.rupress.org/terms). After six months it is available under a Creative Commons License (Attribution–Noncommercial–Share Alike 3.0 Unported license, as described at http://creativecommons.org/licenses/by-nc-sa/3.0/).
Postulates & Operators in Quantum Mechanics Chemistry Notes | EduRev
Postulates of Quantum Mechanics
Here are the six postulates of quantum mechanics. Postulate 2 introduces the various operators used in quantum mechanics.
Postulate 1. The state of a quantum mechanical system is completely specified by a function ψ(r, t) that depends on the coordinates of the particle(s) and on time. This function, called the wave function or state function, has the important property that ψ*(r, t) ψ(r, t) dτ is the probability that the particle lies in the volume element dτ located at r at time t.
The wavefunction must satisfy certain mathematical conditions because of this probabilistic interpretation. For the case of a single particle, the probability of finding it somewhere is 1, so that we have the normalization condition
∫ ψ*(r, t) ψ(r, t) dτ = 1 (110)
It is customary to also normalize many-particle wavefunctions to 1. The wavefunction must also be single-valued, continuous, and finite.
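As a concrete illustration of the normalization condition in Eq. 110 (a standard textbook case, not taken from these notes): the ground state of a particle in a one-dimensional box of length L, ψ(x) = √(2/L) sin(πx/L), integrates to 1 over the box.

```python
import math

L = 2.0        # box length (arbitrary choice)
N = 100000     # number of integration steps

def psi(x):
    # Ground state of a particle in a 1-D box of length L.
    return math.sqrt(2.0 / L) * math.sin(math.pi * x / L)

# Trapezoidal approximation of the normalization integral over [0, L].
dx = L / N
total = sum(psi(i * dx) ** 2 for i in range(N + 1)) * dx
total -= 0.5 * dx * (psi(0) ** 2 + psi(L) ** 2)  # trapezoid endpoint correction

print(round(total, 6))  # -> 1.0
```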
Postulate 2. To every observable in classical mechanics there corresponds a linear, Hermitian operator in quantum mechanics.
This postulate comes about because of the considerations raised in section 3.1.5: if we require that the expectation value of an operator Â is real, then Â must be a Hermitian operator. Some common operators occurring in quantum mechanics are collected in Table 1.
Table 1: Physical observables and their corresponding quantum operators (single particle)
Postulate 3. In any measurement of the observable associated with operator Â, the only values that will ever be observed are the eigenvalues a, which satisfy the eigenvalue equation
ÂΨ = aΨ (111)
This postulate captures the central point of quantum mechanics--the values of dynamical variables can be quantized (although it is still possible to have a continuum of eigenvalues in the case of unbound states). If the system is in an eigenstate of  with eigenvalue a, then any measurement of the quantity A will yield a.
Although measurements must always yield an eigenvalue, the state does not have to be an eigenstate of  initially. An arbitrary state can be expanded in the complete set of eigenvectors of  (ÂΨi = aiΨi) as
Ψ = Σi ci Ψi (112)
where n may go to infinity. In this case we only know that the measurement of A will yield one of the values ai, but we don't know which one. However, we do know the probability that eigenvalue ai will occur--it is the absolute value squared of the coefficient, |ci|2 (cf. section 3.1.4), leading to the fourth postulate below.
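A finite-dimensional sketch of this idea: expanding a normalized state in the orthonormal eigenvectors of a Hermitian matrix gives coefficients ci whose squared moduli sum to 1 and act as measurement probabilities (the matrix and state below are made up for illustration):

```python
import numpy as np

# A Hermitian "observable" and a normalized state (both invented for illustration).
A = np.array([[2.0, 1.0j], [-1.0j, 3.0]])
psi = np.array([1.0, 1.0j]) / np.sqrt(2.0)

# Eigenvalues a_i and orthonormal eigenvectors of the Hermitian matrix.
a, vecs = np.linalg.eigh(A)

# Expansion coefficients c_i = <psi_i|psi>; probabilities are |c_i|^2.
c = vecs.conj().T @ psi
probs = np.abs(c) ** 2

print(np.isclose(probs.sum(), 1.0))                           # probabilities sum to 1
print(np.isclose(probs @ a, np.real(psi.conj() @ A @ psi)))   # <A> = sum |c_i|^2 a_i
```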
An important second half of the third postulate is that, after measurement of Ψ yields some eigenvalue ai, the wavefunction immediately ``collapses'' into the corresponding eigenstate Ψi (in the case that ai is degenerate, then Ψ becomes the projection of Ψ onto the degenerate subspace). Thus, measurement affects the state of the system. This fact is used in many elaborate experimental tests of quantum mechanics.
Postulate 4. If a system is in a state described by a normalized wave function Ψ, then the average value of the observable corresponding to  is given by
⟨A⟩ = ∫ Ψ* Â Ψ dτ (113)
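As an illustration of Eq. 113 (a standard textbook case, not from these notes): for the ground state of a particle in a one-dimensional box, ψ(x) = √(2/L) sin(πx/L), the average position evaluates to L/2, as symmetry demands.

```python
import math

L = 2.0        # box length (arbitrary choice)
N = 100000
dx = L / N

def psi(x):
    # Ground state of a particle in a 1-D box of length L.
    return math.sqrt(2.0 / L) * math.sin(math.pi * x / L)

# <x> = integral of psi* x psi over the box (psi is real here).
x_avg = sum(psi(i * dx) * (i * dx) * psi(i * dx) for i in range(N + 1)) * dx

print(round(x_avg, 4))  # -> 1.0  (= L/2)
```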
Postulate 5. The wavefunction or state function of a system evolves in time according to the time-dependent Schrödinger equation
iħ ∂Ψ/∂t = ĤΨ (114)
The central equation of quantum mechanics must be accepted as a postulate, as discussed in section 2.2.
Postulate 6. The total wavefunction must be antisymmetric with respect to the interchange of all coordinates of one fermion with those of another. Electronic spin must be included in this set of coordinates.
Resources tagged with Comparing and Ordering numbers similar to Snowman:
There are 15 results
Snowman
Stage: 4 Challenge Level:
All the words in the Snowman language consist of exactly seven letters formed from the letters {s, no, wm, an}. How many words are there in the Snowman language?
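(A spoiler sketch, not part of the original problem page.) If a Snowman word is read as a sequence of tokens drawn from s (one letter) and no, wm, an (two letters each), the count f(n) of n-letter words satisfies the recurrence f(n) = f(n−1) + 3·f(n−2):

```python
def snowman_words(n):
    """Count n-letter strings built from one 1-letter token (s)
    and three 2-letter tokens (no, wm, an)."""
    f = [0] * (n + 1)
    f[0], f[1] = 1, 1
    for k in range(2, n + 1):
        f[k] = f[k - 1] + 3 * f[k - 2]   # last token has length 1 or 2
    return f[n]

print(snowman_words(7))  # -> 217
```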
Euromaths
Stage: 3 Challenge Level:
How many ways can you write the word EUROMATHS by starting at the top left hand corner and taking the next letter by stepping one step down or one step to the right in a 5x5 array?
Greetings
Stage: 3 Challenge Level:
From a group of any 4 students in a class of 30, each has exchanged Christmas cards with the other three. Show that some students have exchanged cards with all the other students in the class. How. . . .
Largest Number
Stage: 3 Challenge Level:
What is the largest number you can make using the three digits 2, 3 and 4 in any way you like, using any operations you like? You can only use each digit once.
Farey Sequences
Stage: 3 Challenge Level:
There are lots of ideas to explore in these sequences of ordered fractions.
Dicey Operations
Stage: 2 and 3 Challenge Level:
Who said that adding, subtracting, multiplying and dividing couldn't be fun?
Some Games That May Be Nice or Nasty for Two
Stage: 2 and 3 Challenge Level:
Some Games That May Be Nice or Nasty for an adult and child. Use your knowledge of place value to beat your opponent.
Some Games That May Be Nice or Nasty
Stage: 2 and 3 Challenge Level:
There are nasty versions of this dice game but we'll start with the nice ones...
Dicey Operations for Two
Stage: 2 and 3 Challenge Level:
Dicey Operations for an adult and child. Can you get closer to 1000 than your partner?
Rachel's Problem
Stage: 4 Challenge Level:
Is it true that $99^n$ has 2n digits and $999^n$ has 3n digits? Investigate!
Mathland Election
Stage: 3 Challenge Level:
A political commentator summed up an election result. Given that there were just four candidates and that the figures quoted were exact find the number of votes polled for each candidate.
Writ Large
Stage: 3 Challenge Level:
Suppose you had to begin the never ending task of writing out the natural numbers: 1, 2, 3, 4, 5.... and so on. What would be the 1000th digit you would write down.
Even Up
Stage: 3 Challenge Level:
Consider all of the five digit numbers which we can form using only the digits 2, 4, 6 and 8. If these numbers are arranged in ascending order, what is the 512th number?
Consecutive Seven
Stage: 3 Challenge Level:
Can you arrange these numbers into 7 subsets, each of three numbers, so that when the numbers in each are added together, they make seven consecutive numbers?
Rod Fractions
Stage: 3 Challenge Level:
Pick two rods of different colours. Given an unlimited supply of rods of each of the two colours, how can we work out what fraction the shorter rod is of the longer one?
# Deactivating italic text in body of a “definition” environment [duplicate]
I defined a new theorem via
\newtheorem{def}{definition}
But the text is set in an italic font and I do not want that. How can I deactivate the italic font?
• What package are you using to create your theorems? – Werner Nov 22 '16 at 16:47
• Aside: LaTeX won't let you set up an environment called def. You'll get an error message such as ! LaTeX Error: Command \def already defined.. Use defn or something else instead. – Mico Nov 22 '16 at 16:51
• potential duplicate: Non italic text in theorems, definitions, examples – barbara beeton Nov 22 '16 at 16:55
I assume you're using a theorem-related package such as amsthm. If that's the case, just issue the instruction
\theoremstyle{definition}
\newtheorem{defn}{Definition}
A full MWE:
\documentclass{article}
\usepackage{amsthm,lipsum}
\theoremstyle{definition}
\newtheorem{defn}{Definition}
\begin{document}
\begin{defn}[Lipsum]
\lipsum*[2] % filler text
\end{defn}
\end{document}
• If you use the ntheorem package rather than the amsthm package to set up theorem-like environments, you should replace \theoremstyle{definition} with \theorembodyfont{\upshape}. – Mico Nov 22 '16 at 16:56
# A certain club has 10 members, including Harry. One of the
Manager
Joined: 13 Apr 2015
Posts: 74
Concentration: General Management, Strategy
GMAT 1: 620 Q47 V28
GPA: 3.25
WE: Project Management (Energy and Utilities)
Re: A certain club has 10 members, including Harry. One of the [#permalink]
11 Oct 2015, 05:38
Is 1 - 1/10 + 7/10 = 1/5 a right approach? 1/10 is his prob of becoming president and 7/10 is his prob of becoming nothing.
Any suggestion would be of great help.
Math Expert
Joined: 02 Sep 2009
Posts: 51227
Re: A certain club has 10 members, including Harry. One of the [#permalink]
11 Oct 2015, 05:43
goldfinchmonster wrote:
Is 1 - 1/10 + 7/10 = 1/5 a right aproach. 1/10 is his prob of becoming president and 7/10 is his prob of becoming nothing.
Suggestion would be of a great help.
1 - 1/10 + 7/10 = 16/10 > 1 not 1/5. So, this approach is not correct.
Several correct approaches and links to similar questions are given on page 1.
_________________
Manager
Joined: 13 Apr 2015
Posts: 74
Concentration: General Management, Strategy
GMAT 1: 620 Q47 V28
GPA: 3.25
WE: Project Management (Energy and Utilities)
Re: A certain club has 10 members, including Harry. One of the [#permalink]
11 Oct 2015, 18:24
Bunuel wrote:
goldfinchmonster wrote:
Is 1 - 1/10 + 7/10 = 1/5 a right aproach. 1/10 is his prob of becoming president and 7/10 is his prob of becoming nothing.
Suggestion would be of a great help.
1 - 1/10 + 7/10 = 16/10 > 1 not 1/5. So, this approach is not correct.
Several correct approaches and links to similar questions are given on page 1.
Hey, extremely sorry, I missed out the bracket. It should be 1 - [1/10 + 7/10] = 1/5.
Intern
Joined: 05 Mar 2015
Posts: 28
Location: United States
Concentration: Marketing, General Management
Re: A certain club has 10 members, including Harry. One of the [#permalink]
19 Dec 2015, 14:37
Bunuel wrote:
A certain club has 10 members, including Harry. One of the 10 members is to be chosen at random to be the president, one of the remaining 9 members is to be chosen at random to be the secretary, and one of the remaining 8 members is to be chosen at random to be the treasurer. What is the probability that Harry will be either the member chosen to be the secretary or the member chosen to be the treasurer?
(A) 1/720
(B) 1/80
(C) 1/10
(D) 1/9
(E) 1/5
There are a lot of explanations for how to solve this problem; however, many answers are overly complicated. Here is the simple way:
President= 1/10
Not President= 1-1/10=9/10
Secretary= 1/9
Not Secretary= 1-1/9=8/9
Treasurer= 1/8
Not Treasurer= 1-1/8= 7/8
Thus,
1. NOT President but Secretary = 9/10*1/9=1/10
2. NOT President and NOT Secretary but Treasurer= 9/10*8/9*1/8=1/10
3. Either Secretary or Treasurer= 1/10+1/10=2/10=1/5
E.
_________________
"You have to learn the rules of the game. And then you have to play better than anyone else". Albert Einstein
Director
Status: Professional GMAT Tutor
Affiliations: AB, cum laude, Harvard University (Class of '02)
Joined: 10 Jul 2015
Posts: 671
Location: United States (CA)
Age: 39
GMAT 1: 770 Q47 V48
GMAT 2: 730 Q44 V47
GMAT 3: 750 Q50 V42
GRE 1: Q168 V169
WE: Education (Education)
A certain club has 10 members, including Harry. One of the [#permalink]
Updated on: 01 Jun 2017, 10:07
Here is a visual that should help. Notice that the question does not indicate whether Harry was chosen as president; thus his chances of becoming secretary are also 1/10 (and not 1/9).
Attachment:
Screen Shot 2016-03-28 at 5.51.47 PM.png
_________________
Harvard grad and 99% GMAT scorer, offering expert, private GMAT tutoring and coaching worldwide since 2002.
One of the only known humans to have taken the GMAT 5 times and scored in the 700s every time (700, 710, 730, 750, 770), including verified section scores of Q50 / V47, as well as personal bests of 8/8 IR (2 times), 6/6 AWA (4 times), 50/51Q and 48/51V (1 question wrong).
You can download my official test-taker score report (all scores within the last 5 years) directly from the Pearson Vue website: https://tinyurl.com/y94hlarr Date of Birth: 09 December 1979.
GMAT Action Plan and Free E-Book - McElroy Tutoring
Contact: mcelroy@post.harvard.edu (I do not respond to PMs on GMAT Club.)
...or find me on Reddit: http://www.reddit.com/r/GMATpreparation
Originally posted by mcelroytutoring on 28 Mar 2016, 16:53.
Last edited by mcelroytutoring on 01 Jun 2017, 10:07, edited 2 times in total.
Manager
Status: 2 months to go
Joined: 11 Oct 2015
Posts: 113
GMAT 1: 730 Q49 V40
GPA: 3.8
A certain club has 10 members, including Harry. One of the [#permalink]
17 May 2016, 03:35
I admit that probabilities are my Achilles' heel, but I wanted to ask something to clear my mind.
I obviously love Bunuel's answer because of its simplicity but I'm still puzzled by it.
I don't get why there's no sequence: if we have 10 balls and then extract 1, unless we put it back in the group the probability should vary.
Can someone explain why it doesn't vary (maybe with an example)?
Thanks a lot!
Intern
Joined: 21 Mar 2013
Posts: 12
Re: A certain club has 10 members, including Harry. One of the [#permalink]
14 Jun 2016, 09:57
Hi All,
I began my GMAT prep just two days ago, so forgive me if this looks a bit too obvious. I have been focusing on Combinatorics and Probability for the last two days. While this question looks straightforward to many of you here, the way the OG explained it sent me for a spin. So my question is twofold:
P(Harry is Pres) = 1/10. Therefore him not being Pres = (1-1/10) = 9/10
P(Harry is Sec) = P(Harry NOT being Pres) * P(Harry is Sec) * P(Harry is NOT Tres) = 9/10 * 1/9 * 8/8 = 1/10 -----> (Equation 1)
Thus P(Harry is Sec) = 1/10 ----> (Equation 2)
P(Harry is Tres) = P(Harry NOT Pres) * P(Harry NOT Sec) * P(Harry is Tres) = 9/10 * 8/9 * 1/8 = 1/10 ----> (Equation 3)
Q:1 How is P(Harry is Sec) calculated with P(Harry is Sec) being part of the equation? See the highlights in (Equation 1). Is it a case of circular reference?
Q:2 Just like how we calculated P(Harry NOT Pres) = 1 - P(Harry is Pres), why aren't we plugging in the value of [1-P(Harry is Sec)] = 1-1/10 = 9/10 in the (Equation 3) where it calls for P(Harry NOT Sec) and instead taking 8/9?
I am beyond lost on this topic. The more I read the deeper I dig my own grave.
Thanks!!
Intern
Joined: 10 Aug 2015
Posts: 32
Location: India
GMAT 1: 700 Q48 V38
GPA: 3.5
WE: Consulting (Computer Software)
A certain club has 10 members, including Harry. One of the [#permalink]
22 Jun 2016, 08:47
Bunuel wrote:
A certain club has 10 members, including Harry. One of the 10 members is to be chosen at random to be the president, one of the remaining 9 members is to be chosen at random to be the secretary, and one of the remaining 8 members is to be chosen at random to be the treasurer. What is the probability that Harry will be either the member chosen to be the secretary or the member chosen to be the treasurer?
(A) 1/720
(B) 1/80
(C) 1/10
(D) 1/9
(E) 1/5
Okay, let me give a very simple solution to the question. Counting is one part of our exam, so let's just use counting to solve the problem rather than listing out all the probabilities.
Step 1: We know that order matters here, as Harry as President is very different from Harry as Secretary. So the total number of ways to arrange 3 people from a group of 10 = 10P3.
Step 2: Here we have two cases, Harry as Sec. or Harry as Tres. Note the word "OR": we have to add the cases.
Now fix Harry as Sec.; then we can fill the other 2 positions from the remaining 9 people in 9P2 ways, and similarly with Harry as Tres. we get 9P2 ways.
So P(H as Sec or Tres)= 2*9P2/10P3 = 1/5.
Intern
Joined: 21 Jan 2015
Posts: 39
Location: United States
Schools: Booth PT '20
GMAT 1: 660 Q44 V38
GPA: 3.2
Re: A certain club has 10 members, including Harry. One of the [#permalink]
17 Oct 2016, 19:48
I understood how the answer is E, but I am also getting confused now. The probability of harry not being president is (9/10). The probability of harry being secretary is (1/9). Wouldn't the probability of harry not being treasurer be (7/8) then? I know if I did it this way, there is no answer choice that would match this, but I still want to know why the last probability is not 7/8?
Target Test Prep Representative
Affiliations: Target Test Prep
Joined: 04 Mar 2011
Posts: 2830
Re: A certain club has 10 members, including Harry. One of the [#permalink]
19 Oct 2016, 05:00
Bunuel wrote:
A certain club has 10 members, including Harry. One of the 10 members is to be chosen at random to be the president, one of the remaining 9 members is to be chosen at random to be the secretary, and one of the remaining 8 members is to be chosen at random to be the treasurer. What is the probability that Harry will be either the member chosen to be the secretary or the member chosen to be the treasurer?
(A) 1/720
(B) 1/80
(C) 1/10
(D) 1/9
(E) 1/5
We are given that a club has 10 members, including Harry. When selecting a president, secretary, and treasurer from the 10 members, we must determine the probability that Harry will either be chosen secretary or treasurer.
Since we have 10 total people the probability that Harry is chosen to be the secretary is 1/10 and the probability that he is chosen to be the treasurer is 1/10.
Thus, the probability that he is chosen to be the secretary or treasurer is 1/10 +1/10 = 1/5.
_________________
Jeffery Miller
GMAT Quant Self-Study Course
500+ lessons 3000+ practice problems 800+ HD solutions
Current Student
Status: DONE!
Joined: 05 Sep 2016
Posts: 377
Re: A certain club has 10 members, including Harry. One of the [#permalink]
17 Nov 2016, 16:17
Probability Secretary --> Prob(Not President)xProb(Secretary) --> (9/10)x(1/9) = 9/90 = 1/10
Probability Treasurer --> Prob(Not President)x(Prob(Not Secretary)xProb(Treasurer)=(9/10)x(8/9)x(1/8) = 72/720 = 1/10
Prob (Secretary or Treasurer) = 1/10 + 1/10 = 2/10 = 1/5
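The products above can be sanity-checked with a quick Monte Carlo simulation (a Python sketch, not part of the thread):

```python
import random

random.seed(42)
members = list(range(10))  # let member 0 play the role of Harry
trials = 200_000

hits = 0
for _ in range(trials):
    # Draw president, secretary, treasurer without replacement.
    president, secretary, treasurer = random.sample(members, 3)
    if secretary == 0 or treasurer == 0:
        hits += 1

print(hits / trials)  # close to 0.2, i.e. 1/5
```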
Director
Joined: 17 Dec 2012
Posts: 632
Location: India
Re: A certain club has 10 members, including Harry. One of the [#permalink]
31 May 2017, 21:01
Bunuel wrote:
A certain club has 10 members, including Harry. One of the 10 members is to be chosen at random to be the president, one of the remaining 9 members is to be chosen at random to be the secretary, and one of the remaining 8 members is to be chosen at random to be the treasurer. What is the probability that Harry will be either the member chosen to be the secretary or the member chosen to be the treasurer?
(A) 1/720
(B) 1/80
(C) 1/10
(D) 1/9
(E) 1/5
Diagnostic Test
Question: 7
Page: 21
Difficulty: 650
1.The final probability is probability of Harry as secretary + probability of Harry as treasurer
2. One member is selected as president. Harry could be the president, the probability being 1/10. So the probability of Harry not being president is 9/10.
3.The probability of Harry as a Secretary is 9/10*1/9= 1/10
4. Similarly the probability of Harry as a treasurer is 8/10*1/8=1/10
5. Final Probability is 1/10+1/10=1/5
_________________
Srinivasan Vaidyaraman
Sravna Holistic Solutions
http://www.sravnatestprep.com
Holistic and Systematic Approach
Intern
Joined: 26 Nov 2014
Posts: 7
Re: A certain club has 10 members, including Harry. One of the [#permalink]
09 Jul 2017, 09:56
For those looking for a formula:
Total number of ways of selecting 3 people out of 10 = 10C3 * 3! (multiplied by 3! since the 3 positions are distinct)
Harry's chances of being chosen either as Sec or Treas = [don't choose Harry as Prez (9C1) * choose Harry as Sec (1C1) * anyone as Treas (8C1)] / (10C3 * 3!) + [don't choose Harry as Prez (9C1) * choose Harry as Treas (1C1) * anyone as Sec (8C1)] / (10C3 * 3!)
= (9C1*1C1*8C1 + 9C1*1C1*8C1) / (10C3*3!) = 144/720 = 1/5
VP
Joined: 09 Mar 2016
Posts: 1234
A certain club has 10 members, including Harry. One of the [#permalink]
26 May 2018, 07:28
elizaanne wrote:
It wants to know the probability that Harry is Secretary or treasurer, so we should add the probability that he will be chosen secretary to the probability that he will be chosen treasurer.
The Probability that he is chosen secretary is
9/10*1/9*8/8=1/10
The 9/10 represents the probability that anyone but harry is president
The 1/9 represents the probability that harry is secretary
The 8/8 represents the fact that anyone can be treasurer, so it really does not affect the probability at all
The probability that he is chosen treasurer is
9/10*8/9*1/8=1/10
The 9/10 represents the probability that anyone but harry is president
The 8/9 represents the probability anyone but harry is secretary
The 1/8 represents the probability that harry is treasurer
Add the two together and you get 1/5 (E)
Hey pushpitkc
the above solution is nicely explained; just one thing I don't understand. As we know, the probability formula is # of favorable outcomes / total # of outcomes,
but I don't see that pattern of the formula in the above solution. Why?
Also it says "we should add the probability", but aren't these dependent events?
Here is my approach below; can you please explain why it is wrong?
Since the first probability out of 10 is probability of choosing president
The second probability is choosing harry as secretary 1/9
The third probability is choosing harry as treasurer 1/8
1/9*1/8 = 1/72
thank you and have a great weekend
Senior PS Moderator
Joined: 26 Feb 2016
Posts: 3327
Location: India
GPA: 3.12
A certain club has 10 members, including Harry. One of the [#permalink]
26 May 2018, 07:42
dave13 wrote:
elizaanne wrote:
It wants to know the probability that Harry is Secretary or treasurer, so we should add the probability that he will be chosen secretary to the probability that he will be chosen treasurer.
The Probability that he is chosen secretary is
9/10*1/9*8/8=1/10
The 9/10 represents the probability that anyone but harry is president
The 1/9 represents the probability that harry is secretary
The 8/8 represents the fact that anyone can be treasurer, so it really does not affect the probability at all
The probability that he is chosen treasurer is
9/10*8/9*1/8=1/10
The 9/10 represents the probability that anyone but harry is president
The 8/9 represents the probability anyone but harry is secretary
The 1/8 represents the probability that harry is treasurer
Add the two together and you get 1/5 (E)
Hey pushpitkc
the above solution is nicely explained; just one thing I don't understand. As we know, the probability formula is # of favorable outcomes / total # of outcomes,
but I don't see that pattern of the formula in the above solution. Why?
Also it says "we should add the probability", but aren't these dependent events?
Here is my approach below; can you please explain why it is wrong?
Since the first probability out of 10 is probability of choosing president
The second probability is choosing harry as secretary 1/9
The third probability is choosing harry as treasurer 1/8
1/9*1/8 = 1/72
thank you and have a great weekend
Hey dave13
The reason we need to add the probabilities is that either of two things is possible:
1. Harry is chosen as the Secretary
2. Harry is chosen as the Treasurer
The total probability that Harry is chosen Secretary or Treasurer is the sum of the
individual probabilities. We multiply when we are asked about both events
occurring together.
Case 1: P(Harry is chosen Secretary) = $$\frac{9}{10}*\frac{1}{9}*\frac{8}{8} = \frac{1}{10}$$
Case 2: P(Harry is chosen Treasurer) = $$\frac{9}{10}*\frac{8}{9}*\frac{1}{8} = \frac{1}{10}$$
In either case, you will observe that there are 9 possibilities for choosing the President, as Harry can't be chosen,
and 8 possibilities for choosing the Secretary/Treasurer, as Harry can't be chosen Treasurer in the first
case and can't be chosen Secretary in the second case.
P(Harry is chosen Secretary OR Treasurer) = $$\frac{1}{10} + \frac{1}{10} =\frac{1}{5}$$
Hope this helps you!
_________________
You've got what it takes, but it will take everything you've got
Intern
Joined: 07 Feb 2017
Posts: 16
Re: A certain club has 10 members, including Harry. One of the [#permalink]
05 Jul 2018, 10:04
Bunuel wrote:
A certain club has 10 members, including Harry. One of the 10 members is to be chosen at random to be the president, one of the remaining 9 members is to be chosen at random to be the secretary, and one of the remaining 8 members is to be chosen at random to be the treasurer. What is the probability that Harry will be either the member chosen to be the secretary or the member chosen to be the treasurer?
(A) 1/720
(B) 1/80
(C) 1/10
(D) 1/9
(E) 1/5
Diagnostic Test
Question: 7
Page: 21
Difficulty: 650
Approach using Permutations:
No. of ways to be chosen as secretary : 9P1*1*8P1 = 9*8
No. of ways to be chosen as treasurer : 9P1*8P1*1 = 9*8
No. of ways of choosing three people out of 10 : 10P3 = 10*9*8
Probability = [9*8+9*8]/[10*9*8] = 1/5
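The same count can be verified exhaustively (a Python sketch, not part of the post): enumerate all 10P3 = 720 ordered (president, secretary, treasurer) selections and count those with Harry in the second or third slot.

```python
from fractions import Fraction
from itertools import permutations

members = range(10)  # member 0 is Harry
triples = list(permutations(members, 3))  # all 10*9*8 = 720 ordered picks
favorable = sum(1 for p, s, t in triples if s == 0 or t == 0)

print(Fraction(favorable, len(triples)))  # → 1/5
```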
|
{}
|
Ludovico / Aug 12 2019
Remix of
# New Surrogates and final plans
In the previous article I left off talking about new Surrogate methods.
In the past two weeks I indeed managed to code the following new surrogates:
• Linear
• Lobachesky spline
• Neural Network
• Support vector machine
• Random forest
I also had to make sure that these new surrogates would comply with the optimization methods I coded beforehand. It turns out I had been quite sloppy: in the end I had to change a lot of the data structures of these surrogates to make everything compatible.
Now, this seems like a lot of work. Actually it was not that bad, because I took advantage of a great number of packages, such as GLM, Flux, LIBSVM and XGBoost.
## Linear Surrogate
The definition and construction of a Linear Surrogate is indeed quite easy:
mutable struct LinearSurrogate{X,Y,C,L,U} <: AbstractSurrogate
x::X
y::Y
coeff::C
lb::L
ub::U
end
function LinearSurrogate(x,y,lb::Number,ub::Number)
ols = lm(reshape(x,length(x),1),y)
LinearSurrogate(x,y,coef(ols),lb,ub)
end
The bounds are needed in the construction because the optimization methods need explicit limits on the domain. The ND case is the same, because I still take advantage of GLM.
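Fitting such a linear surrogate is just ordinary least squares; as a rough illustration of the same idea outside Julia (a Python sketch with made-up data, not the package's actual code):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(0.0, 10.0, size=50)
y = 2.5 * x + rng.normal(0.0, 0.1, size=50)

# Fit y ≈ coeff * x (no intercept, mirroring lm(reshape(x, length(x), 1), y))
coeff, *_ = np.linalg.lstsq(x.reshape(-1, 1), y, rcond=None)

def linear_surrogate(x_new):
    # Evaluate the fitted linear model at a new point.
    return coeff[0] * x_new
```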
## Lobachesky spline
The Lobachesky spline is super interesting. It is defined in this way:
with α and n as parameters and d the dimension of the problem.
The inner function is defined in this way:
By applying the central limit theorem, the d-variate Lobachevsky spline converges to the d-variate Gaussian. Hence, Lobachevsky splines asymptotically behave like radial functions, though they are not radial in themselves.
Let's call our objective function f. If we are able to express it in the following way:
then we can approximate f with a Lobachesky spline, because there exists a closed form of the integral. Surrogates.jl makes this extremely easy; check it out:
obj = x -> 3*x + log(x)
a = 1.0
b = 4.0
x = sample(2000,a,b,SobolSample())
y = obj.(x)
alpha = 2.0
n = 6
my_loba_surr = LobacheskySurrogate(x,y,alpha,n,a,b)
int_1D = lobachesky_integral(my_loba_surr,a,b)
# Analytic value: the antiderivative of 3x + log(x) is (3/2)x^2 + x*log(x) - x
F = x -> 3/2*x^2 + x*log(x) - x
int_val_true = F(b) - F(a)
@test abs(int_1D - int_val_true) < 10^-5
## Neural network and SVM
To build this surrogate I used the library Flux, which makes it rather easy. There is not much to say about this, I think that the syntax is quite convenient:
a = 0.0
b = 10.0
obj_1D = x -> log(x)^2*x+2*x
x = sample(10,a,b,SobolSample())
y = obj_1D.(x);
model = Chain(Dense(1,1))
loss(x, y) = Flux.mse(model(x), y)
opt = Descent(0.01)
n_echos = 5
my_neural = NeuralSurrogate(x,y,a,b,model,loss,opt,n_echos)
val = my_neural(5.0)
The user just needs to define a few things about his NN and then the constructor takes care of it.
For the SVMSurrogate, I used the library LIBSVM. The syntax is exactly the same as the neural network.
## Random forest surrogate
To build this surrogate I used the library XGBoost, which makes it rather easy.
The only difference from the other Library-ready surrogates is that the user needs to input the number of rounds, that is the number of trees:
lb = [0.0,0.0]
ub = [10.0,10.0]
s = sample(5,lb,ub,SobolSample())
x = Tuple.(s)
obj_ND = x -> x[1] * x[2]^2
y = obj_ND.(x)
num_round = 2 # number of trees
my_forest_ND = RandomForestSurrogate(x,y,lb,ub,num_round)
val = my_forest_ND((1.0,1.0))
## Final weeks
In these last two weeks I plan on writing docs, examples and tutorials because a good package is useless if I am the only one that knows how to operate it. Also, I would love to finish the SOP optimization method whose PR is still open. I would also love to code up the MARS spline surrogate.
Anyway, I have a lot more ideas for this package so for sure the work will not end after JSOC. Cannot wait for the last article that will wrap up these amazing three months!
Happy coding,
Ludovico
|
{}
|
# Algebraic Topology vs Differential Topology
As an undergraduate student who has studied some point set topology and abstract algebra, I aim to start studying differential topology using Guillemin-Pollack and algebraic topology using Hatcher on my own.
Before this, I want to understand the fundamental differences/similarities between these two subjects.
Is it true that algebraic topology and differential topology are similar subjects in the sense that they seek to solve the same kinds of problems but using different tools? Or are there problems that can only be solved by one technique? Or in what respects do these two subjects differ from one another?
It would also be very helpful if you could direct me to some books which highlight these issues in general.
• See the Wikipedia page for algebraic topology... it does a good job of explaining the basics of what AT is and the problems it tries to answer. It explains that AT and DT can work in the same setting (manifolds) but that each tend to focus on different aspects of a manifold; namely, that AT will attend to global, non-differentiable results and DT will focus on smooth (differentiable) manifolds that give rise to a geometric structure, which we can take advantage of via invariants. Note that this invariance-business is common to both AT and DT since both try to classify things. – coreyman317 Oct 25 '18 at 3:03
• Unrelated to your question, but I recommend any text but GP for a first study in differential topology. If you're going to learn AT at the time, then maybe try Tu's Introduction to Manifolds, as that should lead nicely to Bott and Tu's Differential Forms in Algebraic Topology which combine the topics nicely. Alternatively, I'd also recommend Lee's Intro to Topological Manifolds, and Intro to Smooth Manifolds. I think GP's text is too "non-rigorous" for a first exposition to the subject. – Matt Oct 25 '18 at 9:40
• There is nothing non-rigorous in Guillemin and Pollack's book. The only objection I consider reasonable is that it prefers to work with submanifolds of $\Bbb R^N$ instead of abstract manifolds; it does this to get to the actual topology as quickly as possible instead of spending a lot of time setting up formalism. This is easily remedied by reading a few early chapters of Lee. – user98602 Oct 25 '18 at 11:57
But let me give two examples of problems that are clearly from one area and not the other. The first is the existence of homeomorphic but not diffeomorphic differentiable manifolds. This is clearly a topic of differential topology. The first such example was found by Milnor, who showed the existence of exotic structures on $$S^7$$, i.e. a manifold homeomorphic but not diffeomorphic to $$S^7$$ (https://en.wikipedia.org/wiki/Exotic_sphere).
|
{}
|
# A lower bound for expected value of log-sum
Lately, I have been working with Poisson Matrix Factorization models, and
at some point I needed to work out a lower bound for $\text{E}_q[\log \sum_k X_k]$. After seeing some people use this lower bound without a good explanation, I decided to write this blog post. This is also included as an appendix to my ECML-PKDD 2017 paper about a Poisson factorization model for recommendation.
The function $\log(.)$ is a concave function, which means that: $\log(p_1 x_1+p_2 x_2) \geq p_1\log x_1+p_2 \log x_2, \forall p_1,p_2:p_1+p_2=1, p_1,p_2 \geq 0$
By induction this property can be generalized to any convex combination of $x_k$ ($\sum_k p_k x_k$ with $\sum_k p_k=1$ and $p_k \geq 0$ ):
$\log \sum_k p_k x_k \geq \sum_k p_k\log x_k$
Now, with random variables we can create a similar convex combination by multiplying and dividing each random variable $X_k$ by $p_k$, and then take expectations:
$\text{E}_q[\log \sum_k X_k] = \text{E}_q[\log \sum_k p_k \frac{X_k}{p_k}]$
$\log \sum_k p_k\frac{X_k}{p_k} \geq \sum_k p_k\log \frac{X_k}{p_k}$
$\Rightarrow\text{E}_q [\log \sum_k p_k\frac{X_k}{p_k}] \geq \sum_k p_k \text{E}_q[\log \frac{X_k}{p_k}]$
$\Rightarrow \text{E}_q [\log \sum_k X_k ] \geq \sum_k p_k \text{E}_q[\log X_k]- p_k\log p_k$
If we want a tight lower bound, we should use Lagrange multipliers to choose the set of $p_k$ that maximizes the lower bound, subject to the constraint that the $p_k$ sum to 1.
$L(p_1,\ldots,p_K) = \left(\sum_k p_k \text{E}_q[\log X_k]- p_k\log p_k\right)+\lambda \left(1-\sum_k p_k\right)$
$\frac{\partial L}{\partial p_k} =\text{E}_q[\log X_k]-\log p_k-1-\lambda = 0$
$\frac{\partial L}{\partial \lambda} =1-\sum_k p_k = 0$
$\Rightarrow \sum_k p_k = 1$
$\Rightarrow\text{E}_q[\log X_k]=\log p_k+1+\lambda$
$\Rightarrow \exp\text{E}_q[\log X_k]=p_k \exp(1+\lambda)$
$\Rightarrow \sum_k \exp\text{E}_q[\log X_k]=\exp(1+\lambda)\underbrace{\sum_k p_k}_{=1}$
$\Rightarrow p_k=\frac{\exp \{\text{E}_q[\log X_k]\}}{\sum_k \exp \{\text{E}_q[\log X_k]\}}$
The final formula for $p_k$ is exactly the same one we find for the parameters of the Multinomial distribution of the auxiliary variables in a Poisson model whose rate parameter is a sum of Gamma-distributed latent variables. Also, using this optimal $p_k$, we can show a tight bound without the auxiliary variables.
$\text{E}_q [\log \sum_k X_k ] \geq \sum_k \frac{\exp \{\text{E}_q[\log X_k]\}}{\sum_j \exp \{\text{E}_q[\log X_j]\}}\text{E}_q[\log X_k]- \frac{\exp \{\text{E}_q[\log X_k]\}}{\sum_j \exp \{\text{E}_q[\log X_j]\}}\log \frac{\exp \{\text{E}_q[\log X_k]\}}{\sum_j \exp \{\text{E}_q[\log X_j]\}}$
$= \sum_k \frac{\exp \{\text{E}_q[\log X_k]\}}{\sum_j \exp \{\text{E}_q[\log X_j]\}} \log \sum_j \exp \{\text{E}_q[\log X_j]\}$
$= \log \sum_j \exp \{\text{E}_q[\log X_j]\} \underbrace{ \sum_k \frac{\exp \{\text{E}_q[\log X_k]\}}{\sum_j \exp \{\text{E}_q[\log X_j]\}} }_{=1}$
This results in:
$\text{E}_q [\log \sum_k X_k ] \geq \log \sum_k \exp \{\text{E}_q[\log X_k]\}$
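As a numerical sanity check of the final bound (a Python sketch; the Gamma shapes and rates below are arbitrary illustrative choices, in the spirit of the Poisson-Gamma models mentioned above):

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative q: K = 4 independent Gamma variables with arbitrary shapes/rates.
shape = np.array([2.0, 3.0, 1.5, 5.0])
rate = np.array([1.0, 0.5, 2.0, 1.0])
X = rng.gamma(shape, 1.0 / rate, size=(200_000, 4))

# Left side: Monte Carlo estimate of E_q[log sum_k X_k].
lhs = np.log(X.sum(axis=1)).mean()

# Right side: the tight lower bound log sum_k exp(E_q[log X_k]).
E_log = np.log(X).mean(axis=0)
rhs = np.log(np.exp(E_log).sum())

assert lhs >= rhs  # the derived inequality holds
```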
## 3 thoughts on “A lower bound for expected value of log-sum”
1. Is this bound not obtainable just by using convexity of log sum exp and Jensen’s inequality?
1. yup (you mean the final bound without the p_k right?). The nice thing about this proof here is that the intermediate bound with p_k is useful for calculating the ELBO in Poisson-gamma matrix/tensor factorization models once you augment the model with latent Poisson counts.
2. there is a discussion here about this particular bound and other similar bounds and how it is useful to retain the auxiliary variables sometimes http://www.columbia.edu/~jwp2128/Teaching/E6720/Fall2016/papers/twobounds.pdf (what I was trying to accomplish here with this post is just to point to that particular bound because it is useful in poisson-gamma factorization and maybe show a pedagogical derivation of it)
|
{}
|
Preprint Open Access
# Analysis and interpretation of the first CROCUS reactor neutron noise experiments using an improved point-kinetics model
A. Brighenti; S. Santandrea; I. Zmijarevic
### DataCite XML Export
<?xml version='1.0' encoding='utf-8'?>
<resource>
<identifier identifierType="DOI">10.5281/zenodo.5817584</identifier>
<creators>
<creator>
<creatorName>A. Brighenti</creatorName>
<affiliation>DES/ISAS/DM2S/SERMA/LTSD Université Paris-Saclay, CEA, Service d'études des réacteurs et de mathématiques appliquées, 91191, Gif-sur-Yvette, France</affiliation>
</creator>
<creator>
<creatorName>S. Santandrea</creatorName>
<affiliation>DES/ISAS/DM2S/SERMA/LTSD Université Paris-Saclay, CEA, Service d'études des réacteurs et de mathématiques appliquées, 91191, Gif-sur-Yvette, France</affiliation>
</creator>
<creator>
<creatorName>I. Zmijarevic</creatorName>
<affiliation>DES/ISAS/DM2S/SERMA/LTSD Université Paris-Saclay, CEA, Service d'études des réacteurs et de mathématiques appliquées, 91191, Gif-sur-Yvette, France</affiliation>
</creator>
</creators>
<titles>
<title>Analysis and interpretation of the first CROCUS reactor neutron noise experiments using an improved point-kinetics model</title>
</titles>
<publisher>Zenodo</publisher>
<publicationYear>2022</publicationYear>
<subjects>
<subject>TRANSPORT CODE</subject>
<subject>NEUTRON NOISE</subject>
<subject>CROCUS, POINT KINETICS</subject>
<subject>DATA ANALYSIS</subject>
</subjects>
<dates>
<date dateType="Issued">2022-01-04</date>
</dates>
<language>en</language>
<resourceType resourceTypeGeneral="Preprint"/>
<alternateIdentifiers>
<alternateIdentifier alternateIdentifierType="url">https://zenodo.org/record/5817584</alternateIdentifier>
</alternateIdentifiers>
<relatedIdentifiers>
<relatedIdentifier relatedIdentifierType="DOI" relationType="IsVersionOf">10.5281/zenodo.5817583</relatedIdentifier>
</relatedIdentifiers>
<rightsList>
<rights rightsURI="info:eu-repo/semantics/openAccess">Open Access</rights>
</rightsList>
<descriptions>
<description descriptionType="Abstract"><p>In the framework of the European project CORTEX, included in the H2020 program, a new Improved Point-Kinetics (IPK) model has been developed and validated on the neutron noise measurements recorded during the experimental campaigns carried out with the CROCUS reactor, at the &Eacute;cole Polytechnique F&eacute;d&eacute;rale de Lausanne (EPFL) in Switzerland. In the first part of this paper, the methodology for the experimental data analysis developed by CEA is presented and its outcomes are compared to those obtained by the EPFL team. In the second part, taking as reference the first CROCUS experimental campaign, the present work presents a series of interpretive exercises performed with the IPK noise model aiming at showing its simulation capabilities and at trying to address some of the discrepancies observed during the validation exercise. With a deeper understanding of the phenomena inside CROCUS, the following step foresees the application of the code to full reactor studies.</p></description>
</descriptions>
<fundingReferences>
<fundingReference>
<funderName>European Commission</funderName>
<funderIdentifier funderIdentifierType="Crossref Funder ID">10.13039/501100000780</funderIdentifier>
<awardNumber awardURI="info:eu-repo/grantAgreement/EC/H2020/754316/">754316</awardNumber>
<awardTitle>Core monitoring techniques and experimental validation and demonstration</awardTitle>
</fundingReference>
</fundingReferences>
</resource>
```{r setup, include=FALSE}
knitr::opts_chunk$set(echo = TRUE)
```

This tutorial provides a step-by-step guide to the dynamAedes package, a unified modelling framework for invasive Aedes mosquitoes. With this package, users can apply the stochastic, time-discrete and spatially-explicit population dynamical model developed in Da Re et al. 2021 (DOI: https://doi.org/10.1016/j.ecoinf.2020.101180) for Aedes aegypti mosquitoes, now expanded to three other species: Ae. albopictus, Ae. japonicus and Ae. koreicus.

This stage-based model is informed by temperature and photoperiod and can be applied at three different spatial scales: punctual, local and regional. These spatial scales consider different degrees of spatial complexity and data availability, accounting for both active and passive dispersal of the modelled mosquito species as well as for the heterogeneity of the temperature data. We will present the application of each scale of the model using a simulated temperature dataset for the species Ae. albopictus.

```{r, echo=FALSE, results='hide', message=FALSE}
setwd("/home/ddare/working_files/aedes_eu/testing_pkg/")
Sys.setlocale("LC_TIME", "en_GB.UTF-8")
```

```{r, results='hide', message=FALSE}
# Packages for processing
library(raster)
library(sp)
library(gstat)
library(spatstat)
library(maptools)
library(rgeos)
library(parallel)
library(eesim)
library(tidyverse)
library(dynamAedes)
# Packages for plotting
library(ggplot2)
library(geosphere)
```

# Punctual scale model

At punctual scale the model only requires a weather station temperature time series provided as a numerical matrix with temperatures in Celsius. For the purpose of this tutorial, we will simulate a 3-year long temperature time series.

## Simulate temperature data with seasonal trend

Next, we simulate a three-year temperature time series with a seasonal trend. For the time series we consider a mean value of 18°C and a standard deviation of 2°C.
```{r}
ndays <- 365*3 # length of the time series in days
set.seed(123)
sim_temp <- create_sims(n_reps = 1,
                        n = ndays,
                        central = 18,
                        sd = 2,
                        exposure_type = "continuous",
                        exposure_trend = "cos1",
                        exposure_amp = -.3,
                        average_outcome = 12,
                        outcome_trend = "cos1",
                        outcome_amp = 0.8,
                        rr = 1.0005)
```

A visualisation of the distribution of the temperature values and of the "average" temporal trend:

```{r}
hist(sim_temp[[1]]$x,
     xlab="Temperature (°C)",
     main="Histogram of simulated temperatures")
plot(sim_temp[[1]]$date, sim_temp[[1]]$x,
     main="Simulated temperatures seasonal trend",
     xlab="Date", ylab="Temperature (°C)", ylim=c(0,35))
```
## Format the simulated input datasets and run the model
### Model settings
Floating-point numbers in the temperature matrix would slow down computation, so we first multiply the temperatures by 1000 and then convert them to integers. We also transpose the matrix from long to wide format, because the module structure is conceptualised with rows as the spatial component (here = 1) and columns as the temporal one.

```{r}
df_temp <- data.frame("Date" = sim_temp[[1]]$date, "temp" = sim_temp[[1]]$x)
w <- t(as.integer(df_temp$temp*1000))
```

We are now left with a few model parameters which need to be defined.

```{r}
## Define the day of introduction (day 1)
str = 121
## Define the end-day of the life cycle
endr = 121+365
## Define the number of eggs to be introduced
ie = 500
## Define the number of model iterations
it = 10 # the higher the number of iterations the better
## Define the number of liters for the larval density-dependent mortality
habitat_liters = 1
## Define latitude and longitude for the diapause process
myLat = 42
myLon = 7
## Define the number of parallel processes (for sequential iterations set cl=1)
cl = 5
## Set the name of the file where the *.RDS output will be saved
outname = paste0("dynamAedes_albo_ws_dayintro_", str, "_end", endr, "_niters", it, "_neggs", ie)
```

### Run the model

Running the model takes around 10 minutes with the settings specified in this example.

```{r, eval=FALSE, echo=TRUE}
simout <- dynamAedes(species="albopictus",
                     scale="ws",
                     ihwv=habitat_liters,
                     temps.matrix=w,
                     startd=str,
                     endd=endr,
                     n.clusters=cl,
                     iter=it,
                     intro.eggs=ie,
                     compressed.output=TRUE,
                     lat=myLat,
                     long=myLon,
                     suffix=outname,
                     verbose=FALSE)
```

## Analyze the results

```{r, echo=FALSE, results='hide', message=FALSE}
simout <- readRDS("dynamAedes_albo_ws_dayintro_121_end486_niters10_neggs500.RDS")
```

We first explore the model output structure: the simout object is a nested list. The first level corresponds to the number of model iterations.

```{r}
print(it)
print(length(simout))
```

The second level corresponds to the simulated days. So if we inspect the first iteration, we will observe that the model has computed 366 days, as specified above through the objects str and endr.

```{r}
length(simout[[1]])
```

The third level corresponds to the number of individuals in each life stage (rows). So if we inspect the 1st and the 50th day within the first iteration, we will obtain a matrix having

```{r}
dim(simout[[1]][[1]])
simout[[1]][[1]]
simout[[1]][[50]]
```

We can now use the auxiliary functions of the model to analyze the results.
### Derive the probability of successful introduction at the end of the simulated period

First, we can retrieve the probability of successful introduction, computed as the proportion of model iterations that resulted in a viable mosquito population at a given date for a given life stage.

```{r, eval=FALSE, echo=TRUE}
rbind.data.frame(psi(input_sim = simout, eval_date = 300, stage = 0),
                 psi(input_sim = simout, eval_date = 340, stage = 1),
                 psi(input_sim = simout, eval_date = 300, stage = 2),
                 psi(input_sim = simout, eval_date = 300, stage = 3))
```

### Derive the abundance interquantile range for each life stage and each day

We can now compute the interquantile range of the abundance of the simulated population using the function adci.

```{r, eval=FALSE, echo=TRUE}
dd <- max(sapply(simout, function(x) length(x))) # retrieve the maximum number of simulated days
egg  <- as.data.frame(adci(simout, eval_date=1:dd, breaks=c(0.25,0.50,0.75), st=1))
juv  <- as.data.frame(adci(simout, eval_date=1:dd, breaks=c(0.25,0.50,0.75), st=2))
ad   <- as.data.frame(adci(simout, eval_date=1:dd, breaks=c(0.25,0.50,0.75), st=3))
eggd <- as.data.frame(adci(simout, eval_date=1:dd, breaks=c(0.25,0.50,0.75), st=4))

egg$myStage  <- "Egg"
egg$Date     <- seq.Date(sim_temp[[1]]$date[str], sim_temp[[1]]$date[endr], by="day")
juv$myStage  <- "Juvenile"
juv$Date     <- seq.Date(sim_temp[[1]]$date[str], sim_temp[[1]]$date[endr], by="day")
ad$myStage   <- "Adult"
ad$Date      <- seq.Date(sim_temp[[1]]$date[str], sim_temp[[1]]$date[endr], by="day")
eggd$myStage <- "Diapausing egg"
eggd$Date    <- seq.Date(sim_temp[[1]]$date[str], sim_temp[[1]]$date[endr], by="day")

outdf <- bind_rows(egg, juv, ad, eggd) %>% as_tibble()

outdf %>%
  mutate(myStage=factor(myStage, levels=c("Egg", "Diapausing egg", "Juvenile", "Adult"))) %>%
  ggplot(aes(y=`50%`, x=Date, group=factor(myStage), col=factor(myStage))) +
  ggtitle("Ae. albopictus interquantile range abundance") +
  geom_line(size=1.2) +
  geom_ribbon(aes(ymin=`25%`, ymax=`75%`, fill=factor(myStage)),
              col="white", alpha=0.2, outline.type="full") +
  labs(x="Date", y="Interquantile range abundance", col="Stage", fill="Stage") +
  facet_wrap(~myStage, scales="free") +
  theme_light() +
  theme(legend.position="bottom", text=element_text(size=14),
        strip.text=element_text(face="italic"))
```

# Local scale model

The local scale allows the model to account for both active and passive dispersal of the mosquitoes. With this setting, the model requires three input datasets: a numerical matrix with temperatures in Celsius defined in space and time (space in the rows, time in the columns), a two-column numerical matrix reporting the coordinates (in meters) of each space-unit (cell), and a numerical *distance matrix* which reports the distance in meters between the cells connected through a road network. For the purpose of this tutorial, we will use the following simulated datasets:

1. A 10 km lattice grid with 250 m cell size;
2. A 3-year long spatially and temporally correlated temperature time series;
3. A matrix of distances between cells connected through a simulated road network.

## Prepare input data

### Create lattice arena

First, we define the physical space where the introduction of our mosquitoes will happen. We define a square lattice arena with a 10 km side and 250 m resolution (40 columns and 40 rows, 1600 cells in total).

```{r}
gridDim <- 40 # 10000 m / 250 m = 40 columns and rows
xy <- expand.grid(x=1:gridDim, y=1:gridDim)
```

We then add a spatial pattern to the lattice area. This spatial pattern will be used later to add spatial autocorrelation (SAC) to the temperature time series.
The spatially autocorrelated pattern will be obtained using a semivariogram model with defined sill (the value that the semivariogram attains at the range) and range (the distance of zero spatial correlation), and then predicting the semivariogram model over the lattice grid using unconditional Gaussian simulation.

```{r, message=FALSE}
varioMod <- vgm(psill=0.005, range=100, model='Exp') # psill = partial sill = (sill - nugget)
# Set up an additional variable from simple kriging
zDummy <- gstat(formula=z~1, locations=~x+y, dummy=TRUE,
                beta=1, model=varioMod, nmax=1)
# Generate a randomly autocorrelated predictor data field
set.seed(123)
xyz <- predict(zDummy, newdata=xy, nsim=1)
```

We generate a spatially autocorrelated raster by adding the SAC variable (*xyz$sim1*) to the RasterLayer object. The autocorrelated surface could, for example, represent the distribution of vegetation cover in an urban landscape.

```{r}
utm32N <- "+proj=utm +zone=32 +ellps=WGS84 +datum=WGS84 +units=m +no_defs"
r <- raster(nrow=40, ncol=40, crs=utm32N, ext=extent(0,10000, 0,10000))
values(r) <- xyz$sim1
plot(r)

df <- data.frame("id"=1:nrow(xyz), coordinates(r))
bbox <- as(extent(r), "SpatialPolygons")
# Store parameters for autocorrelation
autocorr_factor <- values(r)
```

### Simulate temperature data with seasonal trend

We take advantage of the temperature dataset simulated for the punctual scale modelling exercise. We can then "expand onto space" the temperature time series by multiplying it with the autocorrelated surface simulated above.

```{r}
mat <- mclapply(1:ncell(r), function(x) {
  d_t <- sim_temp[[1]]$x*autocorr_factor[[x]]
  return(d_t)
}, mc.cores=1) # set mc.cores=1 on Windows, which does not support mclapply's forking
mat <- do.call(rbind, mat)
```
A comparison between the distribution of the initial temperature time series and the autocorrelated temperature surface:

```{r}
par(mfrow=c(2,1))
hist(mat, xlab="Temperature (°C)",
     main="Histogram of simulated temperatures with spatial autocorrelation")
hist(sim_temp[[1]]$x, xlab="Temperature (°C)",
     main="Histogram of simulated temperatures", col="red")
par(mfrow=c(1,1))
# Format temperature data
names(mat) <- paste0("d_", 1:ndays)
df_temp <- cbind(df, mat)
```
## Simulate an arbitrary road segment for medium-range dispersal

In the model we have considered the possibility of medium-range passive dispersal. Thus, we will simulate an arbitrary road segment along which adult mosquitoes can disperse passively (i.e., through car traffic).

```{r}
set.seed(123)
pts <- spsample(bbox, 5, type="random")
# Check the simulated segment
raster::plot(r)
raster::plot(roads, add=T)
```

After defining the road segment, we add a "buffer" of 100 m around it. Adult mosquitoes that reach, or develop in, cells within the 100 m buffer around roads will be able to undergo passive dispersal.

```{r}
buff <- buffer(roads, width=100)
crs(buff) <- crs(r)
# Check the grid, road segment and buffer
raster::plot(r)
raster::plot(roads, add=T, col="red")
```
Next, we derive a distance matrix between the cells within the spatial buffer along the road network. First, we select the cells.

```{r, message=FALSE}
df_sp <- df
coordinates(df_sp) <- ~x+y
df_sp <- raster::intersect(df_sp, buff)
# Check the selected cells
```

Then, we compute the Euclidean distance between each pair of selected cells.

```{r}
dist_matrix <- as.matrix(dist(coordinates(df_sp)))
```
## Format the simulated input datasets and run the model

### Model settings

Floating-point numbers in the temperature matrix would slow down computation, so we first multiply the temperatures by 1000 and then convert them to integers.

```{r}
w <- sapply(df_temp[,-c(1:3)], function(x) as.integer(x*1000))
```

We can now define a two-column matrix of coordinates to identify each cell in the lattice grid.

```{r}
cc <- df_temp[,c("x","y")]
```

As required by the model, the distance matrix must have column names equal to row names.

```{r}
colnames(dist_matrix) <- row.names(dist_matrix)
```

Moreover, distances in the distance matrix must be rounded to the thousands.

```{r}
dist_matrix <- apply(dist_matrix, 2, function(x) round(x/1000,1)*1000)
# A histogram showing the distribution of distances between cells along the road network
hist(dist_matrix, xlab="Distance (meters)")
```
We are now left with a few model variables which need to be defined.
```{r}
## Define the cells into which propagules are introduced on day 1
intro.vector <- sample(as.numeric(row.names(dist_matrix)), 1)
## Define the day of introduction (day 1)
str = 121
## Define the end-day of the life cycle
endr = 121+(365*2)
## Define the number of eggs to be introduced
ie = 500
## Define the number of model iterations
it = 10 # the higher the number of iterations the better
## Define the number of liters for the larval density-dependent mortality
habitat_liters = 1
## Define latitude and longitude for the diapause process
myLat = 42
myLon = 7
## Define the country for the average trip distance
myCountry = "fra"
## Define the number of parallel processes (for sequential iterations set cl=1)
cl = 5
## Set the name of the file where the *.RDS output will be saved
outname = paste0("dynamAedes_albo_lc_dayintro_", str, "_end", endr, "_niters", it, "_neggs", ie)
```

### Run the model

Running the model takes around 15 minutes with the settings specified in this example.

```{r, eval=FALSE, echo=TRUE}
simout <- dynamAedes(species="albopictus",
                     scale="lc",
                     ihwv=habitat_liters,
                     temps.matrix=w,
                     cells.coords=cc,
                     road.dist.matrix=dist_matrix,
                     intro.cells=intro.vector,
                     startd=str,
                     endd=endr,
                     n.clusters=cl,
                     iter=it,
                     intro.eggs=ie,
                     compressed.output=TRUE,
                     lat=myLat,
                     long=myLon,
                     country=myCountry,
                     suffix=outname,
                     verbose=FALSE)
```

## Analyze the results

```{r, echo=FALSE, results='hide', message=FALSE}
simout <- readRDS("dynamAedes_albo_lc_dayintro_121_end851_niters10_neggs500.RDS")
```

We first explore the model output structure: the simout object is a nested list.
The first level corresponds to the number of model iterations.

```{r}
print(it)
print(length(simout))
```

The second level corresponds to the simulated days. So if we inspect the first iteration, we will observe that the model has computed 731 days, as specified above through the objects str and endr.

```{r}
length(simout[[1]])
```

The third level corresponds to the number of individuals in each life stage (rows) within each grid cell of the landscape (columns). So if we inspect the first day within the first iteration, we will obtain a matrix having

```{r}
dim(simout[[1]][[1]])
```
We can now use the auxiliary functions of the model to analyze the results.
### Derive the probability of successful introduction at the end of the simulated period

First, we can retrieve the probability of successful introduction, computed as the proportion of model iterations that resulted in a viable mosquito population at a given date for a given life stage.

```{r, eval=FALSE, echo=TRUE}
rbind.data.frame(psi(input_sim = simout, eval_date = 300, stage = 0),
                 psi(input_sim = simout, eval_date = 700, stage = 1),
                 psi(input_sim = simout, eval_date = 700, stage = 2),
                 psi(input_sim = simout, eval_date = 700, stage = 3))
```

We can also get a spatial output using the function psi_sp, which requires as additional input only the matrix of pixel coordinates.

```{r, eval=FALSE, echo=TRUE}
plot(psi_sp(coords = cc, input_sim = simout, eval_date = 600, n.clusters=cl))
```

At local scale this output has a double interpretation: a pixel with psi=0 can be a pixel where all the simulations resulted in extinction, or one which the species has not yet reached through dispersal.
### Derive the abundance interquantile range for each life stage and each day

We can now compute the interquantile range of the abundance of the simulated population over the whole landscape using the function adci.

```{r, eval=FALSE, echo=TRUE}
dd <- max(sapply(simout, function(x) length(x))) # retrieve the maximum number of simulated days
egg  <- as.data.frame(adci(simout, eval_date=1:dd, breaks=c(0.25,0.50,0.75), st=1))
juv  <- as.data.frame(adci(simout, eval_date=1:dd, breaks=c(0.25,0.50,0.75), st=2))
ad   <- as.data.frame(adci(simout, eval_date=1:dd, breaks=c(0.25,0.50,0.75), st=3))
eggd <- as.data.frame(adci(simout, eval_date=1:dd, breaks=c(0.25,0.50,0.75), st=4))

egg$myStage  <- "Egg"
egg$Date     <- seq.Date(sim_temp[[1]]$date[str], sim_temp[[1]]$date[endr], by="day")
juv$myStage  <- "Juvenile"
juv$Date     <- seq.Date(sim_temp[[1]]$date[str], sim_temp[[1]]$date[endr], by="day")
ad$myStage   <- "Adult"
ad$Date      <- seq.Date(sim_temp[[1]]$date[str], sim_temp[[1]]$date[endr], by="day")
eggd$myStage <- "Diapausing egg"
eggd$Date    <- seq.Date(sim_temp[[1]]$date[str], sim_temp[[1]]$date[endr], by="day")

outdf <- bind_rows(egg, juv, ad, eggd) %>% as_tibble()

outdf %>%
  mutate(myStage=factor(myStage, levels=c("Egg", "Diapausing egg", "Juvenile", "Adult"))) %>%
  ggplot(aes(y=log1p(`50%`), x=Date, group=factor(myStage), col=factor(myStage))) +
  ggtitle("Ae. albopictus interquantile range abundance") +
  geom_line(size=1.2) +
  geom_ribbon(aes(ymin=log1p(`25%`), ymax=log1p(`75%`), fill=factor(myStage)),
              col="white", alpha=0.2, outline.type="full") +
  labs(x="Date", y="Interquantile range abundance (log1p)", col="Stage", fill="Stage") +
  facet_wrap(~myStage, scales="free") +
  theme_light() +
  theme(legend.position="bottom", text=element_text(size=14),
        strip.text=element_text(face="italic"))
```
We can also obtain a spatial output of the estimated interquantile range abundance of a given life stage using the function *adci_sp*, just specifying the pixel coordinates.

```{r}
plot(r)
```

Note that if only a small number of mosquitoes is present in a pixel, the lower quantiles (e.g., the 1st and 2nd quartiles) will be zero.

We can compute a summary of the number of invaded cells over model iterations:

```{r}
x <- icci(simout, eval_date=700, breaks=c(0.25,0.50,0.75))
x
```

### Estimates of mosquito dispersal spread (in km²)

Estimates of the dispersal spread (in km²) of the simulated mosquito populations are available when scale = "lc":

```{r}
x <- dici(simout, coords=cc, eval_date=365, breaks=c(0.25,0.50,0.75), space=TRUE)
plot(x)
```
# Regional scale model

Essentially, the regional scale model runs a punctual model within each grid cell of the landscape, without accounting for either active or passive dispersal of the mosquitoes. With this setting, the model requires two input datasets: a numerical matrix with temperatures in Celsius defined in space and time (space in the rows, time in the columns), and a two-column numerical matrix reporting the coordinates (in meters) of each space-unit (cell). For the purpose of this tutorial, we will use the following simulated datasets:

1. A 10 km lattice grid with 250 m cell size;
2. A 3-year long spatially and temporally correlated temperature time series.
## Model settings
We take advantage of the spatial temperature dataset simulated for the local scale modelling exercise.
Floating-point numbers in the temperature matrix would slow down computation, so we first multiply the temperatures by 1000 and then convert them to integers.

```{r}
w <- sapply(df_temp[,-c(1:3)], function(x) as.integer(x*1000))
```

We can now define a two-column matrix of coordinates to identify each cell in the lattice grid.

```{r}
cc <- df_temp[,c("x","y")]
```
We are now left with a few model variables which need to be defined.
```{r}
## Define the day of introduction (day 1)
str = 121
## Define the end-day of the life cycle
endr = 121+365
## Define the number of eggs to be introduced
ie = 500
## Define the number of model iterations
it = 10 # the higher the number of iterations the better
## Define the number of liters for the larval density-dependent mortality
habitat_liters = 1
## Define latitude and longitude for the diapause process
myLat = 42
myLon = 7
## Define the number of parallel processes (for sequential iterations set cl=1)
cl = 5
## Set the name of the file where the *.RDS output will be saved
outname = paste0("dynamAedes_albo_rg_dayintro_", str, "_end", endr, "_niters", it, "_neggs", ie)
```

### Run the model

Running the model takes around 10 minutes with the settings specified in this example.

```{r, eval=FALSE, echo=TRUE}
simout <- dynamAedes(species="albopictus",
                     scale="rg",
                     ihwv=habitat_liters,
                     temps.matrix=w,
                     cells.coords=cc,
                     startd=str,
                     endd=endr,
                     n.clusters=cl,
                     iter=it,
                     intro.eggs=ie,
                     compressed.output=TRUE,
                     lat=myLat,
                     long=myLon,
                     suffix=outname,
                     verbose=FALSE)
```

## Analyze the results

```{r, echo=FALSE, results='hide', message=FALSE}
simout <- readRDS("dynamAedes_albo_rg_dayintro_121_end486_niters10_neggs500.RDS")
```

We first explore the model output structure: the simout object is a nested list.
The first level corresponds to the number of model iterations.

```{r}
print(it)
print(length(simout))
```

The second level corresponds to the simulated days. So if we inspect the first iteration, we will observe that the model has computed 366 days, as specified above through the objects str and endr.

```{r}
length(simout[[1]])
```

The third level corresponds to the number of individuals in each life stage (rows) within each grid cell of the landscape (columns). So if we inspect the first day within the first iteration, we will obtain a matrix having

```{r}
dim(simout[[1]][[1]])
```
We can now use the auxiliary functions of the model to analyze the results.
### Derive the probability of successful introduction at the end of the simulated period

First, we can retrieve the probability of successful introduction, computed as the proportion of model iterations that resulted in a viable mosquito population at a given date for a given life stage.

```{r, eval=FALSE, echo=TRUE}
rbind.data.frame(psi(input_sim = simout, eval_date = 365, stage = 0),
                 psi(input_sim = simout, eval_date = 365, stage = 1),
                 psi(input_sim = simout, eval_date = 365, stage = 2),
                 psi(input_sim = simout, eval_date = 365, stage = 3))
```

We can also get a spatial output using the function psi_sp, which requires as additional input only the matrix of pixel coordinates.

```{r, eval=FALSE, echo=TRUE}
plot(psi_sp(coords = cc, input_sim = simout, eval_date = 100, np=cl))
```
### Derive the abundance interquantile range for each life stage and each day

We can now compute the interquantile range of the abundance of the simulated population over the whole landscape using the function adci.

```{r, eval=FALSE, echo=TRUE}
dd <- max(sapply(simout, function(x) length(x))) # retrieve the maximum number of simulated days
egg  <- as.data.frame(adci(simout, eval_date=1:dd, breaks=c(0.25,0.50,0.75), st=1))
juv  <- as.data.frame(adci(simout, eval_date=1:dd, breaks=c(0.25,0.50,0.75), st=2))
ad   <- as.data.frame(adci(simout, eval_date=1:dd, breaks=c(0.25,0.50,0.75), st=3))
eggd <- as.data.frame(adci(simout, eval_date=1:dd, breaks=c(0.25,0.50,0.75), st=4))

egg$myStage  <- "Egg"
egg$Date     <- seq.Date(sim_temp[[1]]$date[str], sim_temp[[1]]$date[endr], by="day")
juv$myStage  <- "Juvenile"
juv$Date     <- seq.Date(sim_temp[[1]]$date[str], sim_temp[[1]]$date[endr], by="day")
ad$myStage   <- "Adult"
ad$Date      <- seq.Date(sim_temp[[1]]$date[str], sim_temp[[1]]$date[endr], by="day")
eggd$myStage <- "Diapausing egg"
eggd$Date    <- seq.Date(sim_temp[[1]]$date[str], sim_temp[[1]]$date[endr], by="day")

outdf <- bind_rows(egg, juv, ad, eggd) %>% as_tibble()

outdf %>%
  mutate(myStage=factor(myStage, levels=c("Egg", "Diapausing egg", "Juvenile", "Adult"))) %>%
  ggplot(aes(y=`50%`, x=Date, group=factor(myStage), col=factor(myStage))) +
  ggtitle("Ae. albopictus interquantile range abundance") +
  geom_line(size=1.2) +
  geom_ribbon(aes(ymin=`25%`, ymax=`75%`, fill=factor(myStage)),
              col="white", alpha=0.2, outline.type="full") +
  labs(x="Date", y="Interquantile range abundance", col="Stage", fill="Stage") +
  facet_wrap(~myStage, scales="free") +
  theme_light() +
  theme(legend.position="bottom", text=element_text(size=14),
        strip.text=element_text(face="italic"))
```
# Add PanDA packages to rubin-env
XMLWordPrintable
#### Details
• Type: RFC
• Status: Implemented
• Resolution: Done
• Component/s:
• Labels:
None
#### Description
In order to support PanDA job submission in ctrl_bps, four additional packages are needed from conda-forge: three idds- packages and panda-client.
Currently PanDA job submission happens on a special node curated by Sergey Padolski, but that model is not sustainable. We would like to add these 4 packages to the base rubin-env to allow PanDA job submission from a standard IDF notebook (shell) environment. Adding these to rubin-env directly (as opposed to adding them as afterburners in the IDF RSP deployment) seems justifiable given our assumption that we will be using PanDA as the primary workflow engine for the foreseeable future.
#### Activity
Tim Jenness added a comment (edited):
Shuwei Ye points out that we only need to add idds-doma and idds-client to the env since those two will pull in the other 3 packages. We will need v0.6.8 of esutil to fix a problem with stomp.py.
Colin Slater added a comment:
If we primarily expect job submission to be happening from RSP instances, doesn't that mean that adding the packages just to the jupyter containers would be sufficient?
Kian-Tat Lim added a comment:
I expect that job submission in the future will be from development cluster nodes as well.
Kian-Tat Lim added a comment:
One question: how much extra space do these packages take?
Tim Jenness added a comment:

| Package | Version | Build | Channel | Size |
| --- | --- | --- | --- | --- |
| docopt | 0.6.2 | py_1 | conda-forge/noarch | 14 KB |
| idds-client | 0.5.2 | pyhd8ed1ab_0 | conda-forge/noarch | 15 KB |
| idds-common | 0.5.2 | pyhd8ed1ab_0 | conda-forge/noarch | 15 KB |
| idds-doma | 0.5.2 | pyhd8ed1ab_0 | conda-forge/noarch | 14 KB |
| idds-workflow | 0.5.2 | pyhd8ed1ab_0 | conda-forge/noarch | 17 KB |
| panda-client | 1.4.78 | pyhd8ed1ab_0 | conda-forge/noarch | 156 KB |
| stomp.py | 7.0.0 | pyhd8ed1ab_0 | conda-forge/noarch | 35 KB |

Summary: Install: 7 packages. Total download: 266 KB.
Tim Jenness added a comment:
Sergey Padolski, you reported that you have a problem with newer versions of curl refusing to use your panda certs. Do you have any details that would allow us to work out whether we need to pin curl in the env? Is there a Jira ticket for it?
Kian-Tat Lim added a comment:
The DM-CCB decided that this is OK for rubin-env at this time, but in the future when rubin-env-extras exists it will likely be moved there.
#### People
Assignee:
Tim Jenness
Reporter:
Tim Jenness
Watchers:
Colin Slater, Kian-Tat Lim, Leanne Guy, Michelle Butler [X] (Inactive), Michelle Gower, Sergey Padolski, Shuwei Ye, Tim Jenness, Wil O'Mullane, Yusra AlSayyad
S2 Question? Binomial distribution?
1. The mean = 3 and the variance = 2.25.
Find P(mean - variance < X ≤ mean).
How do you do this please?
2. (Original post by APersonYo)
the mean= 3
the variance is 2.25
Find P(mean - variance < X ≤ mean)
How do you do this please?
Have you substituted the terms in the inequality to get something that looks like
$P(a \leq X \leq b)$
where a and b are numbers?
3. (Original post by SeanFM)
Have you substituted the terms in the inequality to get something that looks like
$P(a \leq X \leq b)$
where a and b are numbers?
Yes. I got
4. (Original post by APersonYo)
Yes. I got
I see someone has edited your post so I'm not 100% sure what you originally said.. that is not your fault, I think someone is trying to help you format it correctly.
But yes, you're right, you need to find
$P(0.75 < X \leq 3)$
Do you know how to solve it from there? No worries if you don't. (I dropped a very subtle hint in my earlier post where I turned it into that form.)
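For anyone wanting to check their final answer numerically: assuming X ~ B(n, p), the mean np = 3 and variance np(1 - p) = 2.25 give 1 - p = 0.75, so p = 0.25 and n = 12 (these values are inferred from the mean and variance, not stated explicitly in the thread). A quick Python sketch:

```python
from math import comb

n, p = 12, 0.25  # from np = 3 and np(1 - p) = 2.25

def binom_pmf(k):
    # P(X = k) for X ~ Binomial(n, p)
    return comb(n, k) * p**k * (1 - p)**(n - k)

# P(0.75 < X <= 3) = P(X = 1) + P(X = 2) + P(X = 3), since X is integer-valued
prob = sum(binom_pmf(k) for k in (1, 2, 3))
print(round(prob, 4))  # about 0.6171
```

Working with the cumulative tables instead, this is P(X ≤ 3) - P(X = 0), which should give the same number.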
Updated: October 13, 2016
## lgbasallote: What is the converse of the implication "If |x| = x, then x >= 0"?

1. lgbasallote: it would be "if x >= 0 then |x| = x", yes?
2. if $\left| x \right| = x$, then $x < 0$
3. lgbasallote: isn't that the inverse?
4. lgbasallote: wait.. no... it isn't...
5. lgbasallote: I don't know what kind of statement that is...
6. lgbasallote: if p then not q
7. PhoenixFire: No, @lgbasallote is right. For $P \rightarrow Q$ the converse is $Q \rightarrow P$
8. lgbasallote: wonderful
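As a quick sanity check of the accepted answer (a sketch we added, not part of the original thread): for this particular statement, "|x| = x" and "x >= 0" are equivalent, so both the implication and its converse happen to be true, which a small numeric sweep confirms.

```python
def implies(p, q):
    # material implication: p -> q is false only when p is true and q is false
    return (not p) or q

# Check the original implication and its converse over a range of integers
original = all(implies(abs(x) == x, x >= 0) for x in range(-10, 11))
converse = all(implies(x >= 0, abs(x) == x) for x in range(-10, 11))
print(original, converse)  # True True
```

In general, of course, an implication being true says nothing about its converse; it just happens to hold here because the two conditions are equivalent.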
# How do neural networks learn?
##### Sep 24 2015 · by Mike S.
Neural networks are generating a lot of excitement, as they are quickly proving to be a promising and practical form of machine intelligence. At Fast Forward Labs, we just finished a project researching and building systems that use neural networks for image analysis, as shown in our toy application Pictograph. Our companion deep learning report explains this technology in depth and explores applications and opportunities across industries.
As we built Pictograph, we came to appreciate just how challenging it is to understand how neural networks work. Even research teams at large companies like Google and Facebook are struggling to understand how neural network layers interact and how the algorithms “learn,” or improve their performance on a task over time. You can learn more about this on their research blog and explanatory videos.
To help understand how neural networks learn, I built a visualization of a network at the neuron level, including animations that show how it learns. If you’re familiar with neural networks or want to follow the rest of the post with a visual cue, please see the interactive visualization here.
Neural Network Basics
First, some deep learning basics. Neural networks are composed of layers of computational units (neurons), with connections among the neurons in different layers. These networks transform data – like the pixels in an image or the words in a document – until they can classify it as an output, such as naming an object in an image or tagging unstructured text data.
Each neuron in a network transforms data using a series of computations: a neuron multiplies an initial value by some weight, sums results with other values coming into the same neuron, adjusts the resulting number by the neuron’s bias, and then normalizes the output with an activation function. The bias is a neuron-specific number that adjusts the neuron’s value once all the connections are processed, and the activation function ensures values that are passed on lie within a tunable, expected range. This process is repeated until the final output layer can provide scores or predictions related to the classification task at hand, e.g., the likelihood that a dog is in an image.
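To make that sequence of computations concrete, here is a minimal sketch of a single neuron in Python. The weights, bias, and input values are illustrative, not taken from the visualization:

```python
import math

def sigmoid(z):
    # Activation function: squashes any value into the range (0, 1).
    return 1.0 / (1.0 + math.exp(-z))

def neuron(inputs, weights, bias):
    # Multiply each incoming value by its weight, sum the results,
    # adjust by the neuron's bias, then normalize with the activation.
    z = sum(x * w for x, w in zip(inputs, weights)) + bias
    return sigmoid(z)

output = neuron([0.5, 0.9], [0.8, -0.2], bias=0.1)
print(output)  # a value between 0 and 1
```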
Neural networks generally perform supervised learning tasks, building knowledge from data sets where the right answer is provided in advance. The networks then learn by tuning themselves to find the right answer on their own, increasing the accuracy of their predictions.
To do this, the network compares initial outputs with a provided correct answer, or target. A cost function quantifies the degree to which the initial outputs differ from the target values. Finally, the cost function results are pushed back across all neurons and connections to adjust the biases and weights.
This push-back method is called backpropagation, and it is the key to how a neural network learns a particular task.
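The full loop described above (feed forward, compare with the target, backpropagate) can be sketched with a toy one-neuron network learning a single training example. The learning rate, starting values, and squared-error cost here are illustrative choices, not the ones used in the visualization:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# One neuron, one input, trained toward a target output.
x, target = 1.0, 0.8
weight, bias, rate = 0.3, -0.1, 1.0

for step in range(500):
    # Forward pass.
    z = weight * x + bias
    out = sigmoid(z)
    # Cost gradient for squared error, pushed back through the sigmoid.
    error = out - target
    grad_z = error * out * (1.0 - out)
    # Backpropagation: nudge weight and bias against the gradient.
    weight -= rate * grad_z * x
    bias -= rate * grad_z

final = sigmoid(weight * x + bias)
print(final)  # approaches the 0.8 target
```

Each pass of the loop mirrors one forward/backprop click in the visualization: the output creeps toward the target as the weight and bias adjust.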
Details of the Visualization
Play with the visualization to see how these components work. Notice how you can adjust the inputs. Each connection has the value of its weight hovering nearby; each neuron has its bias (b) below and the result of its activation function (σ) above.
Click forward to compare the final layer’s guesses with the target values. Click backprop to watch the values adjust. Click forward again to see the output layer improve slightly in comparison to the targets.
This visualization is designed to be as simple as possible to highlight the fundamentals. It uses a softmax function to compute cost and a sigmoid function for activation. Other aspects of normal training, like regularization, dropout, and mini-batching, are ignored.
Interpreting Learning
One powerful idea this visualization communicates is that, even in this simple network, changes made to a single value do not tell us much about the behavior of the network. This is one reason why neural networks are hard to interpret: discrete points provide little to no insight into the overall dynamics, even though backpropagation technically can be reduced down to tweaking individual parameters.
For this reason, we must think about neural networks as complex systems that exhibit emergent behavior: it is the interactions among the neurons, rather than the neurons themselves, that enable the network to learn. In a prior post, we visualized this with the metaphor of a bee swarm. Conway’s Game of Life provides another illustration, where complicated structures emerge from turning cells in a grid on and off according to a few basic rules.
As thinkers dating back to John Stuart Mill have hypothesized that consciousness emerges from brain matter, we may be tempted to infer another reason why neural networks function like brains. But brains are much more plastic and flexible than artificial neural networks. Neural networks are trained to perform a specific singular task; humans learn by switching contexts and redefining tasks as they encounter new information.
Still, the brain metaphor can help conceptualize how neural networks learn. Like brains, neural networks accept and process new input (“feed information forward”), determine the correct response to new input (“evaluate a cost function”), and reflect on errors to improve future performance (“backpropagate”).
It’s still unclear what kind of intelligence will emerge from neural networks in the coming years, but it’s important we understand how learning actually works to refine our conceptions of what’s possible. Hopefully our visualization helps to explain what learning means in this context. Grasping new AI systems is a difficult task, but an important one for education, public communication, and choices about how to engineer systems with realistic expectations.
–Mike
homepage: http://mwskirpan.com
visualization: http://mwskirpan.com/NN_viz
viz code: https://github.com/wannabeCitizen/NN_viz/tree/gh-pages
|
{}
|
# Root Words, Prefixes, and Suffixes
Did you know that kind is a root word and can be used to form the word kindness? Using the list of model words, teach your class about root words, affixes, prefixes, and suffixes. After reviewing each, go on to page two for some guided practice.
|
{}
|
# zbMATH — the first resource for mathematics
On the recursive sequence $y_{n+1}=(p+y_{n-1})/(qy_n+y_{n-1})$. (English) Zbl 0967.39004
The difference equation $$y(n+1)=(p+y(n-1))/(qy(n)+y(n-1)) \tag 1$$ with positive $p$, $q$ and positive initial conditions is studied. It is shown that this system has a unique equilibrium point, which is locally asymptotically stable if $q<1+4p$. If $q>1+4p$ then it is a saddle point and a prime period-two cycle exists. It is proved that the interval $I$ with end points 1 and $p/q$ is an invariant interval of the system (1). Using this, the authors prove that if $q<1+4p$ then the equilibrium point is a global attractor of (1). If $q>1+4p$ then every solution of (1) eventually enters and remains inside $I$.
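The global-attractor result is easy to illustrate numerically. This sketch takes the illustrative choice $p=q=1$ (so $q<1+4p$); the equilibrium solves $(1+q)\bar{y}^2-\bar{y}-p=0$, giving $\bar{y}=1$:

```python
# Iterate y(n+1) = (p + y(n-1)) / (q*y(n) + y(n-1)) with p = q = 1.
# Since q < 1 + 4p, the unique equilibrium y = 1 should attract the orbit.
p, q = 1.0, 1.0
y_prev, y_curr = 0.5, 2.0  # arbitrary positive initial conditions

for _ in range(500):
    y_prev, y_curr = y_curr, (p + y_prev) / (q * y_curr + y_prev)

print(y_curr)  # converges to the equilibrium 1.0
```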
##### MSC:
39A11 Stability of difference equations (MSC2000)
|
{}
|
## Real Analysis Exchange
### Fubini Properties of Ideals
#### Abstract
Let $I$ and $J$ be $\sigma$-ideals on Polish spaces $X$ and $Y$, respectively. We say that the pair $\langle I,J\rangle$ has the Fubini Property (FP) if for every Borel subset $B$ of $X\times Y$, if all its sections $B_x= \{y : \langle x,y\rangle\in B\}$ are in $J$, then its sections $B^y=\{x : \langle x,y\rangle\in B\}$ are in $I$, for every $y$ outside a set from $J$. We study the question of which pairs of $\sigma$-ideals have the Fubini Property. We show, in particular, that:
-- $\langle$ MGR$(X), J\rangle$ satisfies FP, for every $J$ generated by any family of closed subsets of $Y$ (MGR$(X)$ is the $\sigma$-ideal of all meager subsets of $X$),
-- $\langle$ NULL$_\mu, J \rangle$ satisfies FP, whenever $J$ is generated by any of the following families of closed subsets of $Y$ (NULL$_\mu$ is the $\sigma$-ideal of all subsets of $X$, having outer measure zero with respect to a Borel $\sigma$-finite continuous measure $\mu$ on $X$):
(i) all closed sets of cardinality $\leq 1$,
(ii) all compact sets,
(iii) all closed sets in NULL$_\nu$ for a Borel $\sigma$-finite continuous measure $\nu$ on $Y$,
(iv) all closed subsets of a $\mathbf{\Pi}^1_1$ set $A\subseteq Y$.
We also prove that $\langle$MGR$(X)$, MGR$(Y)\rangle$ and $\langle$ NULL$_\mu$, NULL$_\nu\rangle$ are essentially the only cases of FP in the class of $\sigma$-ideals obtained from MGR$(X)$ and NULL$_\mu$ by the operations of Borel isomorphism, product, extension and countable intersection.
#### Article information
Source
Real Anal. Exchange, Volume 25, Number 2 (1999), 565-578.
Dates
First available in Project Euclid: 3 January 2009
https://projecteuclid.org/euclid.rae/1230995393
Mathematical Reviews number (MathSciNet)
MR1778511
Zentralblatt MATH identifier
0926.03058
#### Citation
Recław, Ireneusz; Zakrzewski, Piotr. Fubini Properties of Ideals. Real Anal. Exchange 25 (1999), no. 2, 565--578. https://projecteuclid.org/euclid.rae/1230995393
|
{}
|
J. Korean Ceram. Soc. > Volume 45(9); 2008 > Article
Journal of the Korean Ceramic Society 2008;45(9): 544. doi: https://doi.org/10.4191/kcers.2008.45.9.544
Polycarbosilane을 이용한 TiB2-SiC 세라믹의 형성
강신혁, 이동화, 김득중 (성균관대학교 신소재공학부)
Formation of TiB2-SiC Ceramics from TiB2-Polycarbosilane Mixtures
Shin-Hyuk Kang, Dong-Hwa Lee, Deug-Joong Kim
School of Materials Science and Engineering, Sungkyunkwan University
ABSTRACT
The formation of TiB$_2$-SiC ceramics from TiB$_2$-polycarbosilane (PCS) mixtures was investigated. The powder mixture of TiB$_2$ with PCS was pressed at 300 °C under 200 MPa and sintered at 1700–2000 °C for 1 h in a flowing Ar atmosphere. The sintered density of TiB$_2$ with PCS is 93.7% after sintering at 2000 °C for 1 h, which is slightly lower than that of the specimen without PCS. The microstructure of TiB$_2$ with PCS consists of small, uniform TiB$_2$ particles with well-dispersed SiC particles derived from the PCS. It is believed that the addition of PCS was effective in suppressing the grain growth of TiB$_2$.
Key words: TiB$_2$, Polycarbosilane, Sintering, Grain growth
|
{}
|
# Tag Archives: 8 years old
## A quick plug for Estimation 180
Estimation is more than rounding.
Most of the time we don’t teach this, but it is.
Tabitha (8 years old) had a homework assignment the other night that asked her to imagine she had $100 to spend in a catalog, and to make a list of things she would like to buy from that catalog. She found the latest American Girl catalog and got to work. There was a table to fill out with three columns:
1. Description of item
2. Actual cost of item
3. Estimate
A couple minutes later she asks, What's the estimate if it costs five dollars? Should I write $5.01?
She has discerned that estimate means write down a number that is not the exact value.
But that’s not what estimation is about at all. Estimation is about finding a number that makes sense, and not worrying about whether it’s the exact value or not.
The image below seems to be going nuts on the Internet today (despite my exhortations to the contrary! Oh, Internet! When will you learn to listen to me?)
“Is this reasonable?” is a great estimation question. Rounding is one way to answer the question. But if a kid can quickly find a number that makes sense and it happens to be a precise number, then we probably haven’t asked a good estimation question. Rather than mark it wrong because the kid didn’t round, we should ask this kid a more challenging question next time.
What does a good estimation question look like? What would be more challenging?
Estimation 180. Thinking of a number that makes sense is much more interesting when you have to bring your knowledge of the world to bear.
Is 75 inches a reasonable answer for the difference between the father’s height and the son’s? Is 75 centimeters reasonable?
## Mindsets, research and talking math with kids [#NYTEdTech]
This conversation happened in New York yesterday.
A view of New York City from the Times Center on Tuesday.
During a coffee break, I sat down on a white pleather sofa next to an older man.
Me: How has your day been?
Him: Good. You?
Me: Pretty good. Interesting.
What do you do?
Him: Retired.
Me: From what?
Him: I was president of [small New England college]. How about yourself?
Me: I teach math at a community college in Minnesota.
But I’m also working on a project. I work with future elementary teachers, so I have studied the mathematical development of children.
Him: Uh huh.
Me: And I want to use that knowledge for something else, which is this: I am trying to understand what knowledge parents need in order to support the mathematical development of their children.
Him: That’s important.
Me: Right.
[Short pause]
Me: Do you have grandchildren?
Him: Yes. They are 8 and 10.
Me: Oh nice! So their parents—your kids—are my target market.
Him: Yes. Their father is really into that. They use Khan Academy and all that.
—FIN—
If the end of that conversation makes no sense to you, I ask that you please, please, please spend the next 15 minutes over at my website, Talking Math with Your Kids. You might be especially interested in the research summaries, which demonstrate that young children need to talk about number and shape with their parents rather than (or at least in addition to) being sent to website, iPad apps and decks of flash cards.
Kids need mathematical conversation. And they enjoy it.
## How tall is the hill? [summer project]
Our house in St Paul sits on top of an odd hill; higher than others around it. Historical reasons for this are murky but it makes the place easy for guests to find. One of my least favorite tasks in all of my domestic life is mowing the hill.
For a while now, the precise height of this hill has been the subject of family speculation. One recent lazy summer afternoon, Griffin (8 years old), Tabitha (6 years old) and I found ourselves hanging out on the hill with not much to do.
Me: How tall do you two think the hill is?
Tabitha (6 years old): Five feet.
Griffin (8 years old): I don’t know.
T: The hill.
Me: Wait. I’m six feet tall. How can the hill be 5 feet tall AND taller than me?
G: You’re six feet, one inch.
Me: Right. Even so…
T: Oh. I don’t know how tall the hill is, but I think it’s taller than you.
Me: Why?
T: Lie down.
T: See?
Me: Yeah, but just because it’s longer than me doesn’t mean it’s taller than me.
Tabitha seems puzzled by this distinction. Griffin is standing on the sidewalk at my feet.
Me: Look at Griffy’s eyes. Is he looking up or down at my eyes right now?
T: I can’t really tell.
I stand up, right next to Griffy, who cranes his neck back to look me in the eye.
Me: Now?
T: Ha!
I lie back down on the hill.
Me: So how come there’s a difference?
T: You’re lying down now, so that’s not really how tall you are.
Me: So how can we decide whether I am taller, or the hill is?
Nothing much occurs for the next minute or so. We are distracted by butterflies, the edible nature of clover flowers and other wonders of Minnesota’s too-short summers.
Me: Hey! Let’s try this. Tabitha, you go to the top of the hill.
She does, and she stands there, looking down on me with a self-satisfied smile on her face.
Me: OK. So you plus the hill are taller than I am. What about just the hill?
T: I don’t know.
Me: Lie down.
She does, although it takes a few tries to achieve the desired position by which she can look at me from roughly the level of the top of the hill.
Me: Are you looking up or down at me?
T: I can’t tell.
Griffin takes his turn at the top of the hill. He, too, is unsure.
Me: So how can we be sure?
T: You know, Daddy, I don’t really need to know this.
Me: You’re right. You don’t. Nor do I, really. But I have always been curious how tall the hill is. Aren’t you?
G: We could measure a step, then use the number of steps to figure out how tall it is.
I obtain a tape measure.
We determine that each step is 7 inches tall. We notice that the bottom step is shorter than the rest and measure it at 5 inches. Griffin laboriously counts the steps, finding that there are eight of them, plus the smaller one.
G: So what is that altogether?
Me: What? You can do this.
G: Do you know whether you are taller than the hill?
Me: Actually, yes I do, even though I don’t know exactly how tall the hill is.
G: If I figure it out, will tell me whether I’m right?
Me: Yes.
G: [Far too quickly for me to be convinced he has run any computations at all] OK. The hill is taller.
Me: How do you know?
G: Hey! You said you would tell me!
Me: That’s part of doing the math!
G: OK.
A long, thoughtful pause ensues.
G: Eight eights is 64, plus 5 is 69. So you are taller.
Me: But you need eight sevens, which is 56.
G: Oh. Right. Plus 5.
Me: Yes…?
G: Tell me.
Me: Seriously? You can do 56 plus 5.
G: 61.
Me: Yes, and I’m 73 inches tall.
Tabitha, despite her protestations about not needing to know, has been paying attention all along.
T: You’re taller than the hill?
Me: Yes. See? I told you it was interesting.
G: You knew you were taller?
Me: Yes. But I didn’t realize it was by a foot. I thought it would be only by a few inches.
G: How did you know?
Me: Because I look down—only slightly—but I look down at the top of the hill.
In a few days, we will return to the topic of the State Fair Giant Slide and see whether these techniques generalize in my children’s minds.
## Incommensurate Cheez-Its
There are now BIG Cheez-Its (U.S. only, it appears). The package claims that they are “Twice the size!” of regular Cheez-Its.
On seeing this claim, I thought for sure that we were gonna have a We mean four times, but say twice sort of a situation on our hands. So I bought some.
And then I asked Tabitha (6 years old) and Griffin (8 years old) what they thought. I started with Tabitha when Griffin wasn’t around so I could get her pure thoughts.
She put one cracker on top of the other and proclaimed, “No”.
I wanted to know the source of that. I thought she might be making the classic linear v. area error (i.e. interpreting twice to mean twice the side length). So I asked.
She pointed to the uncovered part of the BIG Cheez-It and argued that this didn’t constitute another full regular Cheez-It. Score one point for argumentation, but minus one for spatial visualization.
A few minutes later, it was Griffin’s turn. He ran like a chipmunk with his two crackers into the dining room. Experiment over, right?
Nope.
He was in search of paper and a pen. He carefully traced each cracker, cut out the uncovered part of the BIG one and attempted to partition and reassemble this remainder on top of a tracing of the regular cracker, which it did not completely cover.
Sadly the cut outs are lost forever.
His conclusion: BIG Cheez-Its are almost but not quite twice the size of the regular Cheez-Its.
Volume perhaps?
If the crackers are twice as big, but the mass of one serving is constant, and if one serving of regular Cheez-Its consists of 27 crackers, how many crackers should be in one serving of BIG Cheez-Its?
There are 14.
If the area of a BIG Cheez-It is about twice the area of a regular Cheez-It (as Griffin confirmed), then the side lengths should be in a ratio of approx. 7:5 (a reasonable estimate of the square root of 2).
Notice the progression in the children’s strategies. The six-year old worked with the crackers. The eight-year old worked with representations of the crackers. Similar conclusions were reached; the child who worked with representations could manipulate those representations in order to achieve a greater degree of accuracy, and to investigate hypotheses that the child working concretely could not.
Neither child used tools to calculate areas.
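The serving-size and side-length arithmetic above can be checked in a few lines. This sketch uses the numbers from the post (27 regular crackers per serving, BIG crackers roughly double the area):

```python
import math

regular_per_serving = 27
# If each BIG cracker has twice the area but a serving has the same mass,
# a serving should hold half as many crackers.
big_per_serving = regular_per_serving / 2  # 13.5; the box rounds to 14

# Doubling the area scales each side length by sqrt(2), roughly 7:5.
side_ratio = math.sqrt(2)
print(big_per_serving, round(side_ratio, 3), 7 / 5)  # 13.5 1.414 1.4
```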
## Summer project
The Minnesota State Fair is a fabulous event (Twelve days of fun ending Labor Day!). Rachel and I love the Fair, and we have passed this love along to our children.
Griffin must have been thinking about the wonders of the State Fair as summer slowly (oh, so slowly!) unfolded on our fair state. He asked a question at breakfast one recent morning.
Griffin (eight years old): How tall is the Giant Slide?
Me: Good question. I would guess…40 feet. What’s your guess?
G: 45 feet.
OK. That’s a mistake. We should have written our guesses down privately to avoid influencing each other. Oh well.
Me: Let’s look it up.
Google returns nothing useful. It does return this awesome video, though, which we watch together.
Me: I found lots of information mentioning the Giant Slide, but nothing on its height.
G: Measure it yourself, then!
Me: Good idea. How should we do that?
G: We’re gonna need a lot of tape measures put together.
This will be a summer project for us: Measuring stuff without putting a ruler next to it. I’ll report on our progress in this space.
## Division and fractions with a third grader
I found some notes on a conversation I had with Griffin last fall. I do not remember the context for it.
Me: Do you know what 12÷2 is?
Griffin (8 years old): 6
Me: How do you know that’s right?
G: 2 times 6 is 12.
Me: What is 26÷2?
G: 13
Me: How do you know that?
G: There were 26 kids in Ms. Starr’s class [in first grade], so it was her magic number. We had 13 pairs of kids.
Me: What is 34÷2?
G: Well, 15 plus 15 is 30…so…19
Here we see the role of cognitive load on mental computation. Griffin is splitting up 34 as 30 and 4 and finding pairs to add to each. Formally, he’s using the distributive property: $2(a+b)=2a+2b$.
He wants to choose $a$ and $b$ so that $2a+2b=30+4$.
But by the time he figures out that $a=15$, he loses track of the fact that $2b=4$ and just adds 4 to 15.
At least, I consider this to be the most likely explanation of his words.
My notes on the conversation only have (back and forth), which indicates that there was some follow-up discussion in which we located and fixed the error. The details are lost to history.
Our conversation continued.
Me: So 12÷2 is 6 because 2×6 is 12. What is 12÷1?
G: [long pause; much longer than for any of the first three tasks] 12.
Me: How do you know this?
G: Because if you gave 1 person 12 things, they would have all 12.
Let’s pause for a moment.
This is what it means to learn mathematics. Mathematical ideas
have multiple interpretations which people encounter as they live their lives. It is (or should be) a major goal of mathematics instruction to help people reconcile these multiple interpretations.
Griffin has so far relied upon three interpretations of division: (1) A division statement is equivalent to a multiplication statement (the fact family interpretation, which is closely related to thinking of division as the inverse of multiplication), (2) Division tells how many groups of a particular size we can make (Ms. Starr’s class has 13 pairs of students—this is the quotative interpretation of division) and (3) Division tells us how many will be in each of a particular number of same-sized groups (Put 12 things into 1 group, and each group has 12 things).
This wasn’t a lesson on multiplication, so I wasn’t too worried about getting Griffin to reconcile these interpretations. Instead, I was curious which (if any) would survive being pushed further.
Me: What is $12 \div \frac{1}{2}$?
G: [pause, but not as long as for 12÷1] Two.
Me: How do you know that?
G: Half of 12 is 6, and 12÷6 is 2, so it’s 2.
Me: OK. You know what a half dollar is, right?
G: Yeah. 50 cents.
Me: How many half dollars are in a dollar?
G: Two.
Me: How many half dollars are in 12 dollars?
G: [long thoughtful pause] Twenty-four.
Me: How do you know that?
G: I can’t say.
Me: One more. How many quarters are in 12 dollars?
G: Oh no! [pause] Forty-eight. Because a quarter is half of a half and so there are twice as many of them as half dollars. 2 times 24=48.
It is perhaps not widely known that I love good Mexican food, and that—with assistance from afar by Rick Bayless—I have developed a number of specialties de casa.
Among these specialties is tostadas, which I make starting with corn tortillas. A bit of oil and 10–15 minutes in the oven makes them crispy. We build from there.
The tortillas fit nicely in a 3 by 3 array on my favorite cookie sheet. There are four of us in the family. You can see where this is going, I am sure.
Griffin served himself a second tostada the other night.
Tabitha (six years old): Griffy’s having another one?!?
Me: Yes. There’s a second one for you, too.
T: How many did you make?
Me: Nine.
T: That’s not a fair number!
Me: What would be a fair number?
T: One where everybody can have the same amount.
Me: Right. But how do you know 9 isn’t a fair number? And what would be one?
T: I don’t know.
Griffin (eight years old): Eight would be. Or 40.
Me: Oh! Forty! Then we could each have 10. Would you like to eat 10 tostadas, Tabitha? But then I would need to buy a second pack of tortillas.
T: [Silent, but her eyes get big and she nods vigorously.]
G: Or 20. Or 12.
The final count is 2 tostadas each for Mommy and Tabitha, and $2\frac{1}{2}$ tostadas each for Daddy and Griffin. Along the way, I promise Tabitha a taco if she finishes her second tostada and is still hungry. This strikes her as fair.
|
{}
|
# janmr blog
## Prime Factors of Factorial Numbers
30 October 2010
Factorial numbers, $n! = 1 \cdot 2 \cdot 3 \cdots n$, grow very fast with $n$. In fact, $n! \sim \sqrt{2 \pi n}\,(n/e)^n$ according to Stirling's approximation. The prime factors of a factorial number, however, are all relatively small, and the complete factorization of $n!$ is quite easy to obtain.
We will make use of the following fundamental theorem:
If $p \mid ab$ for a prime $p$, then $p \mid a$ or $p \mid b$.
(Here, $x \mid y$ means that $x$ divides $y$.) This is called Euclid's First Theorem or Euclid's Lemma. For most, it is intuitively clear, but a proof can be found in, e.g., Hardy and Wright: An Introduction to the Theory of Numbers.
An application of this theorem to factorial numbers is that if a prime $p$ is a divisor of $n!$ then $p$ must be a divisor of at least one of the numbers $2, 3, \ldots, n$. This immediately implies
Every prime factor of $n!$ is less than or equal to $n$.
Conversely, every prime number between 2 and $n$ must be a prime factor of $n!$.
Let us introduce the notation $\nu_p(m)$ for the number of times the prime $p$ divides into $m$. Put more precisely, $\nu_p(m) = e$ if and only if $m/p^e$ is an integer while $m/p^{e+1}$ is not.
We now seek to determine $\nu_p(n!)$ for all primes $p \leq n$. From Euclid's First Theorem and the Fundamental Theorem of Arithmetic follows:
$$\nu_p(n!) = \nu_p(1) + \nu_p(2) + \nu_p(3) + \cdots + \nu_p(n).$$
The trick here is not to consider the right-hand side term by term, but rather as a whole. Let us take
42! = 1405006117752879898543142606244511569936384000000000
and the prime $p = 3$ as an example. How many of the numbers 1, 2, …, 42 are divisible by 3? Exactly $\lfloor 42/3 \rfloor = 14$ of them. But this is not the total count, because some of them are divisible by 3 multiple times. So how many are divisible by $3^2 = 9$? $\lfloor 42/9 \rfloor = 4$ of them. Similarly, $\lfloor 42/27 \rfloor = 1$. And $\lfloor 42/81 \rfloor = 0$. So we have
$$\nu_3(42!) = 14 + 4 + 1 = 19.$$
This procedure is easily generalized and we have
$$\nu_p(n!) = \sum_{k=1}^{\infty} \left\lfloor \frac{n}{p^k} \right\rfloor. \qquad (1)$$
This identity was found by the French mathematician Adrien-Marie Legendre (see also Aigner and Ziegler: Proofs from The Book, page 8, where it is called Legendre's Theorem).
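Equation (1) translates directly into code, since only finitely many terms are nonzero. A sketch:

```python
def legendre(n, p):
    """Multiplicity of the prime p in n!, via Legendre's formula (1):
    the sum of floor(n / p^k) for k = 1, 2, ... until p^k exceeds n."""
    count, power = 0, p
    while power <= n:
        count += n // power
        power *= p
    return count

print(legendre(42, 3))  # 14 + 4 + 1 = 19
```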
Doing this for all primes $p \leq 42$ in our example, we get
$$42! = 2^{39} \cdot 3^{19} \cdot 5^{9} \cdot 7^{6} \cdot 11^{3} \cdot 13^{3} \cdot 17^{2} \cdot 19^{2} \cdot 23 \cdot 29 \cdot 31 \cdot 37 \cdot 41.$$
Notice how the exponents do not increase as the prime numbers increase. This is true in general. Assume that $p$ and $q$ are both primes and $p < q$. Then $q^k > p^k$ and $\lfloor n/q^k \rfloor \leq \lfloor n/p^k \rfloor$ for all positive integers $k$. Using this in equation (1) we get
$$\nu_q(n!) \leq \nu_p(n!) \qquad (2)$$
for primes $p < q$, with
$$\nu_p(n!) \leq \sum_{k=1}^{\infty} \frac{n}{p^k} = \frac{n}{p-1},$$
and thus $\nu_p(n!) < n/(p-1)$ for every prime $p$.
What about $\nu_m(n!)$ for composite numbers $m$? Given the factorization of both $m$ and $n!$, this is easy to compute. But if, e.g., the multiplicities of all prime factors of $m$ are the same, then the relation (2) can be used. Consider $m = p_1 p_2 \cdots p_r$ for distinct primes $p_1 < p_2 < \cdots < p_r$. Since $\nu_m(n!) = \min(\nu_{p_1}(n!), \ldots, \nu_{p_r}(n!))$, and since by (2) the minimum is attained at the largest prime, we have
$$\nu_m(n!) = \nu_{p_r}(n!).$$
For instance,
$$\nu_{10}(42!) = \nu_5(42!) = \lfloor 42/5 \rfloor + \lfloor 42/25 \rfloor = 8 + 1 = 9,$$
so there are 9 trailing zeros in the decimal representation of 42!.
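The trailing-zeros computation is just the multiplicity of 5, the larger prime factor of 10. As a sketch:

```python
def trailing_zeros_of_factorial(n):
    # nu_10(n!) = min(nu_2(n!), nu_5(n!)) = nu_5(n!), since 5 > 2,
    # computed with Legendre's formula for p = 5.
    count, power = 0, 5
    while power <= n:
        count += n // power
        power *= 5
    return count

print(trailing_zeros_of_factorial(42))  # 9
```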
|
{}
|
• Attosecond-correlated dynamics of two electrons in argon
• # Fulltext
https://www.ias.ac.in/article/fulltext/pram/082/01/0079-0085
• # Keywords
Non-sequential double ionization; doubly excited Coulomb complex; ultrashort pulse; correlated momentum map.
• # Abstract
In this work we explored the strong field-induced decay of the doubly excited transient Coulomb complex Ar$^{\ast\ast} \to$ Ar$^{2+} + 2e$. We measured the correlated two-electron emission as a function of the carrier-envelope phase (CEP) of 6 fs pulses in the non-sequential double ionization (NSDI) of argon. Classical model calculations suggest that the intermediate doubly excited Coulomb complex loses memory of its formation dynamics. We estimated the ionization time difference between the two electrons from NSDI of argon to be 200 ± 100 as (N Camus et al, Phys. Rev. Lett. 108, 073003 (2012)).
• # Author Affiliations
1. Max-Planck-Institut für Kernphysik, 69117 Heidelberg, Germany
• # Pramana – Journal of Physics
|
{}
|
Create formula is a tool that automatically creates a formula object from provided variable and outcome names, reducing the time required to manually specify variables for modeling. Its output can be used in linear regression, random forests, neural networks, etc. create.formula becomes especially useful when modeling data with many features, reducing the time required for modeling and implementation:
create.formula(
outcome.name,
input.names = NULL,
input.patterns = NULL,
dat = NULL,
interactions = NULL,
force.main.effects = TRUE,
reduce = FALSE,
max.input.categories = 20,
max.outcome.categories.to.search = 4,
order.as = "as.specified",
include.backtick = "as.needed",
format.as = "formula",
variables.to.exclude = NULL,
include.intercept = TRUE
)
## Arguments
outcome.name: A character value specifying the name of the formula's outcome variable. In this version, only a single outcome may be included; the first entry of outcome.name will be used to build the formula.
input.names: The names of the input variables, with the full names delineated. The user can specify '.' or 'all' to include all of the column variables.
input.patterns: Includes additional input variables. The user may enter patterns -- e.g. to include every variable with a name that includes the pattern. Multiple patterns may be included as a character vector. However, each pattern may not contain spaces and is otherwise subject to the same limits on patterns as used in the grep function.
dat: The user can specify a data.frame object that will be used to remove any variables that are not listed in names(dat). By default it is set as NULL, in which case the formula is created simply from the outcome.name and input.names.
interactions: A list of character vectors. Each character vector includes the names of the variables that form a single interaction. Specifying interactions = list(c("x", "y"), c("x", "z"), c("y", "z"), c("x", "y", "z")) would lead to the interactions x*y + x*z + y*z + x*y*z.
force.main.effects: A logical value. When TRUE, the intent is that any term included as an interaction (of multiple variables) must also be listed individually as a main effect.
reduce: A logical value. When dat is not NULL and reduce is TRUE, additional quality checks are performed to examine the input variables. Any input variables that exhibit a lack of contrast will be excluded from the model. This search is global by default but may be conducted separately in subsets of the outcome variables by specifying max.outcome.categories.to.search. Additionally, any input variables that exhibit too many contrasts, as defined by max.input.categories, will also be excluded.
max.input.categories: Limits the maximum number of contrasts an input variable may exhibit before it is excluded from the formula. By default it is set at 20, but users can change it as needed.
max.outcome.categories.to.search: A numeric value. The create.formula function includes a feature that identifies input variables exhibiting a lack of contrast. When reduce = TRUE, these variables are automatically excluded from the resulting formula. This search may be expanded to subsets of the outcome when the number of unique measured values of the outcome is no greater than max.outcome.categories.to.search. In this case, each subset of the outcome will be separately examined, and any inputs that exhibit a lack of contrast within at least one subset will be excluded.
order.as: The user can specify the order of the input variables in the formula in a variety of ways: 'increasing' for increasing alphabetical order, 'decreasing' for decreasing alphabetical order, 'column.order' for the order in which they appear in the data, and 'as.specified' for maintaining the user's specified order.
include.backtick: Add backticks if needed. By default it is set as 'as.needed', which adds backticks only when needed. The other option is 'all'. The use of include.backtick = "all" is limited to cases in which the output is generated as a character variable; when the output is generated as a formula object, R automatically removes all unnecessary backticks. That is, it is only compatible when format.as != "formula".
format.as: The data type of the output. If not set as "formula", then a character vector will be returned.
variables.to.exclude: A character vector. Any variable specified in variables.to.exclude will be dropped from the formula, both in the individual inputs and in any associated interactions. This step supersedes the inclusion of any variables specified for inclusion in the other parameters.
include.intercept: A logical value. When FALSE, the intercept will be removed from the formula.
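The interactions parameter described above can be sketched as follows. This is a minimal sketch assuming the formulaic package is installed; the object name form and the toy data frame dd are ours:

```r
# A minimal sketch of the interactions parameter, assuming the formulaic
# package; x, pixel_1 and y are columns of the toy data below.
library(formulaic)

dd <- data.frame(x = rnorm(10), pixel_1 = rnorm(10), y = rnorm(10))

form <- create.formula(
  outcome.name = "y",
  input.names  = c("x", "pixel_1"),
  interactions = list(c("x", "pixel_1")),
  dat          = dd,
  format.as    = "character"
)
# form$formula should contain the main effects x and pixel_1 plus the
# interaction term x * pixel_1, returned as a character value per format.as.
```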
## Details
The data type of the return value is determined by format.as: if it is not set as "formula", then a character vector will be returned. The input.names and the names of variables matching the input.patterns are concatenated to form the full list of input variables.
## Examples
n <- 10
dd <- data.table::data.table(w = rnorm(n= n), x = rnorm(n = n), pixel_1 = rnorm(n = n))
dd[, pixel_2 := 0.3 * pixel_1 + rnorm(n)]
#> w x pixel_1 pixel_2
#> 1: -1.400043517 -0.55369938 0.46815442 1.0758095
#> 2: 0.255317055 0.62898204 0.36295126 0.2853740
#> 3: -2.437263611 2.06502490 -1.30454355 -0.1476776
#> 4: -0.005571287 -1.63098940 0.73777632 1.8448818
#> 5: 0.621552721 0.51242695 1.88850493 0.6785896
#> 6: 1.148411606 -1.86301149 -0.09744510 -0.1632305
#> 7: -1.821817661 -0.52201251 -0.93584735 -2.1908417
#> 8: -0.247325302 -0.05260191 -0.01595031 -0.2840223
#> 9: -0.244199607 0.54299634 -0.82678895 -0.5614827
#> 10: -0.282705449 -0.91407483 -1.51239965 0.6135880
dd[, y := 5 * x + 3 * pixel_1 + 2 * pixel_2 + rnorm(n)]
#> w x pixel_1 pixel_2 y
#> 1: -1.400043517 -0.55369938 0.46815442 1.0758095 0.8576202
#> 2: 0.255317055 0.62898204 0.36295126 0.2853740 4.1653886
#> 3: -2.437263611 2.06502490 -1.30454355 -0.1476776 6.0661737
#> 4: -0.005571287 -1.63098940 0.73777632 1.8448818 -2.5033379
#> 5: 0.621552721 0.51242695 1.88850493 0.6785896 10.0296258
#> 6: 1.148411606 -1.86301149 -0.09744510 -0.1632305 -7.1784363
#> 7: -1.821817661 -0.52201251 -0.93584735 -2.1908417 -9.7527566
#> 8: -0.247325302 -0.05260191 -0.01595031 -0.2840223 -0.3011961
#> 9: -0.244199607 0.54299634 -0.82678895 -0.5614827 -0.7701556
#> 10: -0.282705449 -0.91407483 -1.51239965 0.6135880 -9.7921176
create.formula(outcome.name = "y", input.names = "x", input.patterns = c("pi", "xel"), dat = dd)
#> $formula
#> y ~ x + pixel_1 + pixel_2
#> <environment: 0x000000001904d188>
#>
#> $inclusion.table
#> variable exclude.null.quantity class order specified.from
#> 1: x FALSE numeric 1 input.names
#> 2: pixel_1 FALSE numeric 2 input.patterns
#> 3: pixel_2 FALSE numeric 3 input.patterns
#> exclude.user.specified exclude.matches.outcome.name include.variable
#> 1: FALSE FALSE TRUE
#> 2: FALSE FALSE TRUE
#> 3: FALSE FALSE TRUE
#>
#> $interactions.table
#> Empty data.table (0 rows and 2 cols): interactions,include.interaction
#>
|
{}
|
# Chapter 2 - Review: 78
x $\leq$ 4
#### Work Step by Step
x + 4 $\geq$ 6x - 16
Subtract 6x from both sides: -5x + 4 $\geq$ -16
Subtract 4 from both sides: -5x $\geq$ -20
Divide both sides by -5 and reverse the inequality sign: x $\leq$ 4
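The solution can be verified numerically with a short Python check (our own sketch, not part of the textbook): the original inequality should hold for exactly those x with x $\leq$ 4.

```python
# Check that x + 4 >= 6x - 16 holds exactly when x <= 4.
def satisfies(x):
    return x + 4 >= 6 * x - 16

for x in [-10, 0, 3.9, 4, 4.1, 10]:
    assert satisfies(x) == (x <= 4)
print("solution x <= 4 verified on sample points")
```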
|
{}
|
# cf.Field.regrids¶
Field.regrids(dst, method=None, src_cyclic=None, dst_cyclic=None, use_src_mask=True, use_dst_mask=False, fracfield=False, src_axes=None, dst_axes=None, axis_order=None, ignore_degenerate=True, i=False, _compute_field_mass=None)[source]
Return the field regridded onto a new latitude-longitude grid.
Regridding, also called remapping or interpolation, is the process of changing the grid underneath field data values while preserving the qualities of the original data.
By default the regridding is a first-order conservative interpolation, but bilinear interpolation is also available. The latter method is particularly useful when the latitude and longitude coordinate cell boundaries are neither known nor inferable. Higher-order patch recovery is available as an alternative to bilinear interpolation. This typically results in better approximations to values and derivatives than bilinear interpolation does, but the weight matrix can be larger than the bilinear matrix, which can be an issue when regridding close to the memory limit on a machine. Nearest-neighbour interpolation is also available.
The field’s domain must have well-defined X and Y axes with latitude and longitude coordinate values, which may be stored as dimension coordinate objects or two-dimensional auxiliary coordinate objects. If the latitude and longitude coordinates are two-dimensional then the X and Y axes must be defined by dimension coordinates if present or by the netCDF dimensions. In the latter case the X and Y axes must be specified using the src_axes or dst_axes keyword. The same is true for the destination grid, if it is provided as part of another field.
The cyclicity of the X axes of the source field and destination grid is taken into account. If an X axis is in fact cyclic but is not registered as such by its parent field (see cf.Field.iscyclic), then the cyclicity may be set with the src_cyclic or dst_cyclic parameters. In the case of two dimensional latitude and longitude dimension coordinates without bounds it will be necessary to specify src_cyclic or dst_cyclic manually if the field is global.
The output field’s coordinate objects which span the X and/or Y axes are replaced with those from the destination grid. Any fields contained in coordinate reference objects will also be regridded, if possible.
The data array mask of the field is automatically taken into account, such that the regridded data array will be masked in regions where the input data array is masked. By default the mask of the destination grid is not taken into account. If the destination field data has more than two dimensions then the mask, if used, is taken from the two-dimensional section of the data where the indices of all axes other than X and Y are zero.
Implementation
The interpolation is carried out using the ESMF package, a Python interface to the Earth System Modeling Framework (ESMF) regridding utility.
Logging
Whether ESMF logging is enabled or not is determined by cf.REGRID_LOGGING. If it is enabled, logging takes place after every call. By default logging is disabled.
Latitude-Longitude Grid
The canonical grid with independent latitude and longitude coordinates.
Curvilinear Grids
Grids in projection coordinate systems can be regridded as long as two dimensional latitude and longitude coordinates are present.
Rotated Pole Grids
Rotated pole grids can be regridded as long as two dimensional latitude and longitude coordinates are present. It may be necessary to explicitly identify the grid latitude and grid longitude coordinates as being the X and Y axes and specify the src_cyclic or dst_cyclic keywords.
Tripolar Grids
Tripolar grids are logically rectangular and so may be able to be regridded. If no dimension coordinates are present it will be necessary to specify which netCDF dimensions are the X and Y axes using the src_axes or dst_axes keywords. Connections across the bipole fold are not currently supported, but may not be necessary in some cases, for example if the points on either side of the fold coincide without a gap. It will also be necessary to specify src_cyclic or dst_cyclic if the grid is global.
New in version 1.0.4.
Regrid field f conservatively onto a grid contained in field g:
>>> h = f.regrids(g, 'conservative')
Parameters:
dst: cf.Field or dict
The field containing the new grid. If dst is a field list the first field in the list is used. Alternatively a dictionary can be passed containing the keywords ‘longitude’ and ‘latitude’ with either two 1D dimension coordinates or two 2D auxiliary coordinates. In the 2D case both coordinates must have their axes in the same order and this must be specified by the keyword ‘axes’ as either ('X', 'Y') or ('Y', 'X').
method: str
Specify the regridding method. The method parameter must be one of:
method Description
'bilinear' Bilinear interpolation.
'patch' Higher order patch recovery.
'conservative' First-order conservative regridding will be used (requires both of the fields to have contiguous, non-overlapping bounds).
'nearest_stod' Nearest neighbor interpolation is used where each destination point is mapped to the closest source point
'nearest_dtos' Nearest neighbor interpolation is used where each source point is mapped to the closest destination point. A given destination point may receive input from multiple source points, but no source point will map to more than one destination point.
src_cyclic: bool, optional
Specifies whether the longitude for the source grid is periodic or not. If None then, if possible, this is determined automatically otherwise it defaults to False.
dst_cyclic: bool, optional
Specifies whether the longitude for the destination grid is periodic or not. If None then, if possible, this is determined automatically, otherwise it defaults to False.
use_src_mask: bool, optional
For all methods other than ‘nearest_stod’, this must be True as it does not make sense to set it to False. For the ‘nearest_stod’ method if it is True then points in the result that are nearest to a masked source point are masked. Otherwise, if it is False, then these points are interpolated to the nearest unmasked source points.
use_dst_mask: bool, optional
By default the mask of the data on the destination grid is not taken into account when performing regridding. If this option is set to True then it is. If the destination field has more than two dimensions then the first 2D slice in index space is used for the mask, e.g. for a field varying with (X, Y, Z, T) the mask is taken from the slice (X, Y, 0, 0).
fracfield: bool, optional
If True and the regridding method is conservative, the fraction of each destination grid cell involved in the regridding is returned instead of the regridded data. Otherwise this parameter is ignored.
src_axes: dict, optional
A dictionary specifying the axes of the 2D latitude and longitude coordinates of the source field when no dimension coordinates are present. It must have keys ‘X’ and ‘Y’.
dst_axes: dict, optional
A dictionary specifying the axes of the 2D latitude and longitude coordinates of the destination field when no dimension coordinates are present. It must have keys ‘X’ and ‘Y’.
axis_order: sequence, optional
A sequence of items specifying dimension coordinates as retrieved by the dim method. These determine the order in which to iterate over the other axes of the field when regridding X-Y slices. The slowest moving axis will be the first one specified. Currently the regridding weights are recalculated every time the mask of an X-Y slice changes with respect to the previous one, so this option allows the user to minimise how frequently the mask changes.
ignore_degenerate: bool, optional
True by default. Instructs ESMPy to ignore degenerate cells when checking the grids for errors. When set to True, regridding will proceed and degenerate cells will be skipped, not producing a result; otherwise an error will be produced if degenerate cells are found, and this will be present in the ESMPy log files if cf.REGRID_LOGGING is set to True. As of ESMF 7.0.0 this only applies to conservative regridding; other methods always skip degenerate cells.
i: bool, optional
If True then update the field in place. By default a new field is created. In either case, a field is returned.
_compute_field_mass: dict, optional
If this is a dictionary then the field masses of the source and destination fields are computed and returned within the dictionary. The keys of the dictionary indicates the lat/long slice of the field and the corresponding value is a tuple containing the source field’s mass and the destination field’s mass. The calculation is only done if conservative regridding is being performed. This is for debugging purposes.
Returns:
out: cf.Field
The regridded field.
Examples:
Regrid f to the grid of g using bilinear regridding and forcing the source field f to be treated as cyclic.
>>> h = f.regrids(g, src_cyclic=True, method='bilinear')
Regrid f to the grid of g using the mask of g.
>>> h = f.regrids(g, 'conservative', use_dst_mask=True)
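Retrieve, instead of the regridded data, the fraction of each destination grid cell involved in the regridding, using the fracfield parameter described above (a sketch assuming the same fields f and g as the other examples):

>>> fractions = f.regrids(g, 'conservative', fracfield=True)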
Regrid f to 2D auxiliary coordinates lat and lon, which have their dimensions ordered ‘Y’ first then ‘X’.
>>> lat
<CF AuxiliaryCoordinate: latitude(110, 106) degrees_north>
>>> lon
<CF AuxiliaryCoordinate: longitude(110, 106) degrees_east>
>>> h = f.regrids({'longitude': lon, 'latitude': lat, 'axes': ('Y', 'X')}, 'conservative')
Regrid field, f, on tripolar grid to latitude-longitude grid of field, g.
>>> h = f.regrids(g, 'bilinear', src_axes={'X': 'ncdim%x', 'Y': 'ncdim%y'},
... src_cyclic=True)
Regrid f to the grid of g iterating over the ‘Z’ axis last and the ‘T’ axis next to last to minimise the number of times the mask is changed.
>>> h = f.regrids(g, 'nearest_dtos', axis_order='ZT')
|
{}
|