Now showing items 1–10 of 21

Production of Σ(1385)± and Ξ(1530)0 in proton–proton collisions at √s = 7 TeV (Springer, 2015-01-10) The production of the strange and double-strange baryon resonances (Σ(1385)±, Ξ(1530)0) has been measured at mid-rapidity (|y| < 0.5) in proton–proton collisions at √s = 7 TeV with the ALICE detector at the LHC. Transverse ...

Forward-backward multiplicity correlations in pp collisions at √s = 0.9, 2.76 and 7 TeV (Springer, 2015-05-20) The strength of forward-backward (FB) multiplicity correlations is measured by the ALICE detector in proton-proton (pp) collisions at √s = 0.9, 2.76 and 7 TeV. The measurement is performed in the central pseudorapidity ...

Rapidity and transverse-momentum dependence of the inclusive J/$\psi$ nuclear modification factor in p-Pb collisions at $\sqrt{s_{\rm NN}}=5.02$ TeV (Springer, 2015-06) We have studied the transverse-momentum ($p_{\rm T}$) dependence of the inclusive J/$\psi$ production in p-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV, in three center-of-mass rapidity ($y_{\rm cms}$) regions, down to ...

Measurement of pion, kaon and proton production in proton–proton collisions at √s = 7 TeV (Springer, 2015-05-27) The measurement of primary π±, K±, p and p̄ production at mid-rapidity (|y| < 0.5) in proton–proton collisions at √s = 7 TeV performed with A Large Ion Collider Experiment (ALICE) at the Large Hadron Collider (LHC) is reported. ...

Two-pion femtoscopy in p-Pb collisions at $\sqrt{s_{\rm NN}}=5.02$ TeV (American Physical Society, 2015-03) We report the results of the femtoscopic analysis of pairs of identical pions measured in p-Pb collisions at $\sqrt{s_{\rm NN}}=5.02$ TeV. Femtoscopic radii are determined as a function of event multiplicity and pair ...

Measurement of charm and beauty production at central rapidity versus charged-particle multiplicity in proton-proton collisions at $\sqrt{s}$ = 7 TeV (Springer, 2015-09) Prompt D meson and non-prompt J/$\psi$ yields are studied as a function of the multiplicity of charged particles produced in inelastic proton-proton collisions at a centre-of-mass energy of $\sqrt{s}=7$ TeV. The results ...

Charged jet cross sections and properties in proton-proton collisions at $\sqrt{s}=7$ TeV (American Physical Society, 2015-06) The differential charged jet cross sections, jet fragmentation distributions, and jet shapes are measured in minimum bias proton-proton collisions at centre-of-mass energy $\sqrt{s}=7$ TeV using the ALICE detector at the ...

Centrality dependence of high-$p_{\rm T}$ D meson suppression in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV (Springer, 2015-11) The nuclear modification factor, $R_{\rm AA}$, of the prompt charmed mesons ${\rm D^0}$, ${\rm D^+}$ and ${\rm D^{*+}}$, and their antiparticles, was measured with the ALICE detector in Pb-Pb collisions at a centre-of-mass ...

Coherent $\rho^0$ photoproduction in ultra-peripheral Pb-Pb collisions at $\sqrt{s_{\rm NN}} = 2.76$ TeV (Springer, 2015-09) We report the first measurement at the LHC of coherent photoproduction of $\rho^0$ mesons in ultra-peripheral Pb-Pb collisions. The invariant mass and transverse momentum distributions for $\rho^0$ production are studied ...

Centrality dependence of particle production in p-Pb collisions at $\sqrt{s_{\rm NN}}$ = 5.02 TeV (American Physical Society, 2015-06) We report measurements of the primary charged particle pseudorapidity density and transverse momentum distributions in p-Pb collisions at $\sqrt{s_{\rm NN}}$ = 5.02 TeV, and investigate their correlation with experimental ...
In Babai's nearest plane algorithm (which solves an approximate version of CVP), given a basis as input, the first step is to compute a reduced basis (using the LLL reduction algorithm). Why is the reduced basis used for the further steps?

Short answer: If the basis is not reduced, then there are no guarantees on the distance between the target and the output, compared to the distance between the target and an actual closest vector. By LLL-reducing the basis you can bound this distance, and therefore Babai's nearest plane algorithm solves CVP to a certain approximation factor.

Take a look at the second step of Babai's nearest plane algorithm, where $t$ refers to the target vector:

$b \leftarrow t$
for $i = n$ down to $1$ do
$\quad b \leftarrow b - c_i b_i$, where $c_i = \lceil \langle b, \tilde b_i \rangle / \langle \tilde b_i, \tilde b_i \rangle \rfloor$
end
output $t-b$

Here the tilde denotes the Gram-Schmidt orthogonalization of the basis. Essentially, the algorithm looks for an integer combination of the basis vectors that is close to the target vector. Now, if $t \in \operatorname{span}(B)$, you can see that the squared distance between the output and $t$ is bounded by $\frac14 \sum_i \langle \tilde b_i, \tilde b_i \rangle$, since each rounding is off by at most $1/2$ along the direction of $\tilde b_i$. LLL gives us a bound on these Gram-Schmidt norms. For an arbitrary basis, these norms could be very large, meaning the second step could be very, very far off. In the case that $t \notin \operatorname{span}(B)$, you can use a similar argument on the projection of $t$ onto $\operatorname{span}(B)$. For the exact details of how the LLL properties of the basis are used to prove the correctness of the algorithm, see this lecture.
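The loop above can be made concrete with a minimal pure-Python sketch. The helper names (`gram_schmidt`, `nearest_plane`) are mine, and the LLL-reduction step is deliberately omitted: the basis `B` is assumed to be already reduced, which is exactly why reduction matters for the quality of the answer.

```python
def gram_schmidt(B):
    """Gram-Schmidt orthogonalization (without normalization) of the rows of B."""
    Bt = []
    for b in B:
        v = list(map(float, b))
        for u in Bt:
            mu = sum(x * y for x, y in zip(b, u)) / sum(x * x for x in u)
            v = [vi - mu * ui for vi, ui in zip(v, u)]
        Bt.append(v)
    return Bt

def nearest_plane(B, t):
    """Babai's nearest plane: returns a lattice vector of L(B) close to t.
    Python's round() (which rounds halves to even) plays the role of the
    nearest-integer rounding in the pseudocode."""
    Bt = gram_schmidt(B)
    b = list(map(float, t))
    for i in range(len(B) - 1, -1, -1):   # i = n down to 1
        c = round(sum(x * y for x, y in zip(b, Bt[i])) /
                  sum(x * x for x in Bt[i]))
        b = [bj - c * Bij for bj, Bij in zip(b, B[i])]
    return [tj - bj for tj, bj in zip(t, b)]
```

For example, for the basis $\{(2,0),(1,2)\}$ and target $(3.1, 2.2)$, this returns the lattice vector $(3, 2) = 1\cdot(2,0) + 1\cdot(1,2)$.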
From the last two positions, we see that $4\cdot(10R+E)\equiv 10E+R \pmod{100}$. This is equivalent to $39R\equiv6E\pmod{100}$, and further to $13R\equiv2E\pmod{100}$, since $\gcd(3,100)=1$. Hence $R$ has to be even. Checking $R=0,2,4,6,8$, we see that only $R=8$ works. We conclude that $R=8$, that $E=2$, and that the carry from the second column to the third column is $3$.

Since $4\cdot10^5\,S= 4\cdot S00000 < 4\cdot\mbox{SQUARE} = \mbox{NUMBER} <10^6$, we conclude $4S<10$. The case $S=0$ would make the leading digit zero, and the case $S=2$ would collide with $E=2$. We derive $S=1$.

We plug the detected values into the given equation and derive $4\cdot(1000+100Q+10U+A)+3=1000N+100U+10M+B$, which simplifies to
$4003 -10M+ 4A-B = 1000N -400Q +60U.~~~ (*)$
The left-hand side of equation $(*)$ is at least $4003-90+0-9=3904$ and at most $4003-0+36-0=4039$. Furthermore, the right-hand side is a multiple of $20$. Hence both sides lie between $3920$ and $4020$, which yields
$196\le 50N-20Q+3U\le 201.~~~ (**)$
Now we distinguish several cases on $U$ to derive the following from $(**)$; note that $U\in\{1,2,8\}$ is already excluded by $S=1$, $E=2$, $R=8$:
(1) If $U=0$, then $5N-2Q=20$. Then $N=6$ and $Q=5$.
(2) If $U=3$, then $5N-2Q=19$. Then $(N,Q)=(5,2)$ or $(7,8)$; both are collisions (with $Q=E=2$ and $Q=R=8$).
(3) If $U\in\{4,5\}$, then $181\le 50N-20Q\le 189$, but $50N-20Q$ is a multiple of ten; no solution.
(4) If $U\in\{6,7\}$, then $5N-2Q=18$. Then $(N,Q)=(4,1)$ or $(6,6)$; both are collisions (with $Q=S=1$ and $N=Q$).
(5) If $U=9$, then $5N-2Q=17$. Then $(N,Q)=(5,4)$, which is fine, or $(N,Q)=(7,9)$, which has the collision $Q=U=9$.
All in all, this leaves us with two possible cases:
Case X: $(N,Q,U)=(6,5,0)$
Case Y: $(N,Q,U)=(5,4,9)$
In case X, equation $(*)$ boils down to $10M- 4A+B = 3$. If $M\ge4$, then $10M-4A+B\ge40-4\cdot9+0=4>3$; contradiction. If $M=3$, then $4A-B=27$ implies $(A,B)=(7,1)$, $(8,5)$ or $(9,9)$; all three cases are collisions (with $B=S=1$, $B=Q=5$, $A=B=9$ respectively). The cases $M=1$ and $M=2$ collide with $S=1$ and $E=2$, and if $M=0$, we have the collision $M=U=0$.
In case Y, equation $(*)$ boils down to $10M- 4A+B =63$. If $M\le5$, then $10M- 4A+B\le 50+9=59<63$; contradiction.
If $M=6$, we get $B=4A+3$; this leaves $(A,B)=(1,7)$ with the collision $A=S=1$, and the good solution $(A,B)=(0,3)$. If $M=7$, we get $4A-B=7$. This leaves the cases $(A,B)=(2,1)$, $(3,5)$, $(4,9)$, which respectively collide with $B=S=1$, $B=N=5$, $A=Q=4$. The case $M=8$ collides with $R=8$. If $M=9$, we get $4A-B=27$. This leaves the cases $(A,B)=(7,1)$, $(8,5)$, $(9,9)$, which respectively collide with $B=S=1$, $B=N=5$, $A=B=9$. Summary: Only a single branch in this analysis has led to a solution: $R=8$, $E=2$, $S=1$; $(N,Q,U)=(5,4,9)$; $M=6$ and $(A,B)=(0,3)$. That is, $\mbox{SQUARE}=149082$ and $\mbox{NUMBER}=4\cdot149082=596328$.
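The derivation above corresponds to the alphametic $4\cdot\mbox{SQUARE}=\mbox{NUMBER}$. As a sanity check, a short brute-force search (my own sketch, not part of the original solution) confirms that the surviving branch is the only solution:

```python
def solve():
    """All SQUARE with 4 * SQUARE = NUMBER, one distinct digit per letter."""
    sols = []
    for square in range(100000, 250000):   # S != 0, and NUMBER must stay < 10^6
        number = 4 * square                # always six digits here, so N != 0
        d = {}
        ok = True
        for letter, digit in zip("SQUARENUMBER", str(square) + str(number)):
            if d.setdefault(letter, digit) != digit:
                ok = False                 # same letter mapped to two digits
                break
        if ok and len(set(d.values())) == len(d):   # distinct letters differ
            sols.append((square, number))
    return sols
```

The search returns exactly one pair, $(\mbox{SQUARE}, \mbox{NUMBER}) = (149082, 596328)$, matching the analysis.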
Music generated by AI

Overview Can AI generate music that we humans find beautiful, perhaps even moving? Let's find out! In this challenge, participants are tasked with building an AI model that learns from a large data set of music (in the form of MIDI files) and is then capable of producing its own music. Concretely, the model must produce a music piece in response to a short "seed" MIDI file that is given as input. There are two special aspects of this challenge, apart from the extremely interesting application. First, the results of the models will be evaluated by humans, with an ELO-style system where volunteers are given two randomly paired pieces of generated music and choose the one they like better. Second, the top five models will at the end each generate a piece of music that will be performed live on stage at the Applied Machine Learning Days!

Evaluation The grader expects a MIDI file with a total length of 3600 seconds (when played at 120 bpm). The MIDI file has to be a type 0 MIDI file (a single track); in the case of multiple tracks, only the first track will be considered. There are no challenge-specific restrictions on the number of channels used in the MIDI file. The grader splits the MIDI file into 120 chunks of approximately 30 seconds each, and each submission is represented by this pool of 120 chunks. During this post-processing step, all meta events are removed from the MIDI file except the PPQ meta event (ticks per beat); hence the only officially supported MIDI events are the note_on and note_off events, where a note_off event can optionally be replaced by a note_on event with a velocity of 0. All MIDI parsing is done using the MIDO library, and you are requested to ensure that your submitted file has an estimated length of 3600 +/- 10 seconds according to mido.MidiFile('your_file_path').length.
A separate evaluation interface is made available, where all the participants (and other external volunteers) can hear two randomly sampled chunks and then vote for the one they like better (more details on the sampling mechanism are provided in the following sections). These randomly sampled chunks will be played with the SoundFont of an acoustic grand piano at 120 bpm. These binary comparisons will be used to compute an individual score for every submission, which evolves over time as it receives more and more evaluations in the evaluation interface. The scoring mechanism follows the TrueSkill ranking system, and hence is modeled by $ \mu $ (a quantitative estimate of the preference of a general population towards a particular song) and $ \sigma $ (the confidence of the system in this estimate). The actual score on the leaderboard is computed by taking a conservative estimate of the modeled score, represented by $ \mu - k\sigma $, where $k$ is the ratio of the default $\mu$ and $\sigma$ values: $k = (\mu=25) / (\sigma=8.334)$. The submissions tab will report the values of $\mu$, $\sigma$ and the number of evaluations completed for every submission; the leaderboard will use the conservative estimate $\mu - k\sigma$ as the primary score, and $\mu$ as the secondary score. To ensure that the top-10 selected participants are not overfitting on the training set, the top-10 submissions at the end of the challenge will be divided into quantized chunks of $\tau(=5)$ seconds each (at 120 bpm) with a sliding window of stride $s$, and a normalised dynamic time warp (DTW) distance will be computed against $\tau(=5)$-second chunks from all the MIDI files listed in the Datasets.
With $DTW(x, y)$ representing the DTW distance between two $\tau(=5)$-second quantized chunks, the normalized DTW will be computed as: $NDTW(x,y) = \frac{127 \times T(\tau=5) - DTW(x,y) }{127 \times T(\tau=5)}$ where $T(\tau)$ represents the number of ticks in a time period of $\tau$ seconds. All matching chunk pairs with $NDTW < 0.3$ will be manually verified, and in case the chunks are found to be similar, the submissions will be disqualified. Given the subjective nature of the evaluation, the organisers reserve the right both to adjust the threshold of $0.3$ and to decide whether the flagged chunks are indeed similar because the model overfits, or because the said participant tried to cheat by stitching together MIDI snippets from the training data.

Starter Kit: A starter kit to help you get started on the submission procedure is made available at: https://github.com/crowdAI/crowdai-ai-generate-music-starter-kit.

Coming Soon: A Getting Started guide on music generation from MIDI files using LSTMs.

Rules
Participants are allowed at most 2 submissions per day.
By uploading a submission, participants grant crowdAI the right to host and play short clips of the submitted MIDI files publicly to human evaluators who may or may not be affiliated with crowdAI.
Participants are not allowed to make submissions which are hand written, generated using custom rules, or recorded.
Participants are expected to release their final code using any Open Source license of their choice to be eligible for the prizes.
Organizers reserve the right to make changes to the rules.

Resources
Some other projects to help you quickly get started on MIDI composition:
https://github.com/brannondorsey/midi-rnn
https://github.com/jisungk/deepjazz
Google Magenta: Performance RNN
MIDINet

Contact:
Technical issues: https://gitter.im/crowdAI/AI-Generated-Music-Challenge
Discussion Forum: https://www.crowdai.org/challenges/ai-generated-music-challenge/topics
We strongly encourage you to use the public channels mentioned above for communications between the participants and the organizers. In extreme cases, if there are any queries or comments that you would like to make through a private communication channel, you can send us an email at :
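For concreteness, the leaderboard formula and the NDTW normalisation described above can be restated in a few lines of Python. This is a sketch with hypothetical helper names, not the official grader code; it only transcribes the two stated formulas.

```python
MU0, SIGMA0 = 25.0, 8.334   # TrueSkill defaults quoted in the challenge text
K = MU0 / SIGMA0            # ratio used for the conservative estimate

def leaderboard_score(mu, sigma):
    """Primary leaderboard score: the conservative estimate mu - k * sigma."""
    return mu - K * sigma

def ndtw(dtw_distance, ticks):
    """Normalised DTW as defined above: `ticks` is T(tau), the number of
    ticks in a tau-second chunk, and 127 is the maximum MIDI note value.
    Identical chunks (DTW distance 0) give NDTW = 1."""
    full = 127 * ticks
    return (full - dtw_distance) / full
```

A brand-new submission with the default rating scores exactly 0, and a lower uncertainty $\sigma$ at equal $\mu$ always yields a higher leaderboard score.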
Heat is the total kinetic energy of all atoms of the system. When work is done on the system it means that a part of the system's kinetic energy is used to do the work, and this work makes the surroundings warmer. So "$\Delta U$" of the system is equal to "$Q$". And now, why do we use the work of the system in $\Delta U = Q + W$?

The question is a good one, and answering it winds up being central to a lot of key concepts in thermodynamics. A brief correction before starting, though I don't think it's central to the question: that should read "When work is done *by* the system, a part of the system's kinetic energy is used to do the work...". The disconnect between your understanding and that used in thermodynamics is that in thermodynamics, the system and its surroundings are separate. So, when our system does work on the surroundings, that energy won't come back to heat our system. If we do as you say and use our system to do work on the surroundings, turn that work into heat, and then apply that heat back into our system, the result is just as you say! The kinetic energy in our system isn't lost. Put another way, when we allow heat to flow in and out of our system like that, all of that heat ultimately winds up as kinetic energy and $\Delta U = Q$. Let's look at a couple of examples of putting $\pu{1 J}$ of heat into a system with different surroundings to see how this works out. Our system will be an insulated piston filled with an ideal gas (say helium), which can expand and push on one of 3 things in its surroundings: a mass, a vacuum, or a generator. Mass: This follows $\Delta U = Q - W$. Of the $\pu{1 J}$ of heat we apply, the fraction that goes into warming up our system and increasing internal kinetic energy is $C_v/(C_v + R)$ (the molar heat capacity $C_v$ is $1.5R$ for helium), whereas the fraction that instead goes into doing work on the surroundings is $R/(C_v + R)$.
So our system gains $$[\pu{1 J}]\cdot \frac{C_v}{C_v + R} = \pu{\frac{3}{5} J}$$ of thermal energy, and does $$[\pu{1 J}]\cdot \frac{R}{C_v + R} = \pu{\frac{2}{5} J}$$ of work on the surroundings by expanding and pushing up the mass. This $\pu{\frac{2}{5} J}$ of work energy won't return. These fractions $C_v/(C_v + R)$ and $R/(C_v + R)$ show up a lot later on in thermodynamics through their inverses, $k$ and $k/(k - 1)$, respectively. Vacuum: This follows $\Delta U = Q$. There are no surroundings to do work on, so all of the expansion energy goes into slamming the piston against its backstop, and after everything settles this ultimately results in the full $\pu{1 J}$ of heat becoming thermal energy inside our piston. Generator: This also follows $\Delta U = Q$. It's the same situation as the mass, with $\pu{\frac{3}{5} J}$ of thermal energy and $\pu{\frac{2}{5} J}$ of work done on the surroundings, but that $\pu{\frac{2}{5} J}$ of work can be turned into electricity and then used to heat our piston. Just like in the vacuum case, all $\pu{1 J}$ of heating ultimately winds up as thermal energy in the system, though it spent some time as work on the surroundings during the process. So, $\Delta U = Q - W + W.$ I think this final case is the closest to matching your question.
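The 3/5–2/5 split in the mass case is easy to check numerically; the snippet below assumes a monatomic ideal gas, as in the helium example:

```python
R = 8.314            # J/(mol*K), universal gas constant
Cv = 1.5 * R         # molar heat capacity of a monatomic ideal gas

Q = 1.0                         # 1 J of heat supplied
dU = Q * Cv / (Cv + R)          # portion staying in the gas as thermal energy
W = Q * R / (Cv + R)            # portion leaving as work that lifts the mass
```

Here `dU` is 3/5 J and `W` is 2/5 J, and together they account for the full joule of heat.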
In my specific case, I have a pdf that has no closed form, and I want to generate random values from this distribution. It depends on a summation that goes to infinity (coming from a Poisson process) and two Lebesgue integrals. Can anyone tell me how to choose a suitable method to generate random values from this distribution? Here is my pdf for a finite collection of points $r=(s_1,\dots,s_n)$: $p(Y_r)=\sum^{\infty}_{|N|=0}\frac{{\rm e}^{-\lambda\mu(S)}\left[\lambda \right]^{|N|}}{|N|!}\int_{S^{|N|}}\int_{\mathbb{R}^{|N|}}f_N(Y_N\mid S_N,N)\,p(Y_r\mid Y_N,S_N)\,\text{d}Y_N\,\text{d}S_N$, where $f_N(Y_N\mid S_N,N)$ is an $|N|$-dimensional normal distribution, and $p(Y_r\mid Y_N,S_N)$ is an $n$-dimensional conditional normal distribution. Here $S_N$ is the vector of locations of the points from my Poisson process ($N$). I want to generate values for $Y_r$.
Because the last two lectures were a review day and an exam, I am going to once again break from the mold a bit, and discuss both events in one entry.

The Review

Over the weekend, I produced a rough draft of a study guide, cribbed from that to create a rough exam, then pared the guide down a little and changed some wording to emphasize the material that (1) I felt was most important and (2) I was actually going to test them on (with some important red herrings thrown in to make sure that they studied those, as well). I handed out the study guide on Monday, and told them to prepare to go over it on Thursday. When Thursday rolled around, I had two quick corollaries to go over, and a quiz prepared. We were then able to spend the rest of class answering questions. Unfortunately, I did an embarrassingly poor job with the very first question, which I think spoiled the mood and morale for the day. The question was something like this: find the critical numbers of the function \(f(x) = |3x-5|\). Intuitively, this is the absolute value of a linear function, thus there should be one critical number, which occurs where the function is 0 (i.e. at the vertex of the absolute value function). In fact, if we know anything about transformations of graphs, it isn't difficult to see that the critical number occurs at \(x=\frac{5}{3}\). However, I wanted to work out the derivative, and demonstrate that it didn't exist at this point, hence it was a critical number. This was a catastrophe: I tried to do too many steps at once. It started out okay.
By definition, we have \[ f(x) = \begin{cases} 3x-5 & \text{if $3x-5\ge 0$,} \\ -3x+5 & \text{if $3x-5< 0$.} \\ \end{cases} \] Except that I was already trying to figure out the derivative, so I wrote \[ \eqalign{ \frac{d}{dx}f(x) &= \begin{cases} (3x-5)\frac{d}{dx}(3x-5) & \text{if $3x-5\ge 0$,} \\ (-3x+5)\frac{d}{dx}(3x-5) & \text{if $3x-5< 0$.} \\ \end{cases} \\ &= \begin{cases} 9x-15 & \text{if $3x-5\ge 0$,} \\ -9x+15 & \text{if $3x-5< 0$.} \\ \end{cases} \\ } \] Setting this equal to zero and solving, we get a critical point at \(x = \frac{15}{9} = \frac{5}{3}\), which is the correct answer, but for the wrong reasons. This looked wrong to me on the board, and felt wrong to the students (go them!), but for the life of me, I couldn't find the error, and ended up confusing the hell out of everyone (myself included) trying to track it down. Collectively, we eventually managed to go back to the beginning of the problem and work it out, but we ended up wasting a lot of time in (what I think was) an unproductive activity. By the end of it, everyone seemed somewhat dazed and demoralized, ready to go home. I pushed on until the end of class, and we did get some productive work done, but it was not a happy day. Of course, this comes back to my constant refrain: slow down, slow down, slow down. Do one step at a time.

The Exam

Again, it seems, the exam was too long. Half of the students took more than the 95 minutes allotted for the class. This despite the fact that the exam was far shorter (no multiple choice questions, two fewer true/false questions (six, compared to eight), and what I thought was a set of relatively straightforward free response computations). I will need to discuss the exam with the students this evening in lecture to confirm this, but I suspect that the biggest problems were the related rates and implicit differentiation problems in the free response section. The related rates problem involved two boats passing a buoy at different times.
The goal was to determine how fast they were converging/diverging at a specific time. We have done several such problems, but this example was slightly different, in that the boats did not pass the buoy at the same time. This difference made the set-up a little more difficult, and introduced a really subtle sign error. The set-up killed half the class, and the sign error wiped out almost all of the rest: out of 27 students who took the exam, there were only two correct answers to this question. In principle, this isn't a bad ratio: those two students are the A students, those that managed to solve it apart from the sign error are B students, those that got the set-up more or less correct are the C students, and those that made no progress fall into the D/F range. However, I had not intended this to be a discriminating question; I just wanted to know if they could solve a related rates problem. For that purpose, it probably was not the best possible question. The other difficult question involved an implicit differentiation, i.e. find the slope of the line tangent to the graph of \(x^{2/3} + y^{2/3} = 1\) at the point \(\left(\frac{3}{8}\sqrt{3},\frac{1}{8}\right)\). Most of the class managed the calculus, but spun their wheels on the algebraic simplification. I am sure it was quite frustrating for them, and I was disappointed by the results. The most annoying aspect of it is that they could have pulled out their calculators right at the beginning, forgone most of the algebra, and just given a decimal answer. In fact, that is what I told them to do. Overall, the distribution of scores was quite strange. There were four scores above 85%. These scores represented my A range. There were three scores between 80% and 85%, then a trickle of scores down to 70% without any obvious gaps or clusters.
In general, I would have liked to award some of those in the 70-80% range Bs, but there was no fair way to draw a line, so I went by the syllabus and gave Bs to those at 80% and above, and Cs to those that scored between 65% and 80% (actually, the lowest C ended up being at 63.3%). Ds and Fs were also awarded exactly as the syllabus describes. Thus there were a few students in the B and D ranges who were helped by the curve, but almost everyone was completely unaffected by it, and the overall distribution seemed heavily weighted toward the bottom (lots of low Cs, Ds, and Fs). I'm not entirely sure what to make of it.
A new Higgs tadpole cancellation condition reformulating the hierarchy problem

Strings 2013 [talks] is underway. The first hep-ph paper today probably got to that exclusive place because the authors were excited and wanted to grab the spot. Andre de Gouvea, Jennifer Kile, and Roberto Vega-Morales of Illinois chose the title \(H\rightarrow \gamma\gamma\) as a Triangle Anomaly: Possible Implications for the Hierarchy Problem. They point out a curious feature of the diagrams calculating the Higgs boson decay to two photons (yes, it's the process that seemed to have a minor excess at the LHC, but this excess went away): while the diagram is finite, one actually gets different results according to the choice of the regularization. In particular, the \(d=4\) direct calculation leads to a finite result, but it actually violates gauge invariance, so it can't be right. It should be disturbing for you that wrong results may arise from quantum field theory calculations even if you don't encounter any divergence. However, the right fix is known: work with a regularization, typically dimensional regularization, that automatically respects gauge invariance. Using dim reg, in \(d=4-\epsilon\) dimensions, one automatically gets the right result. However, it still disagrees with the wrong result computed directly in \(d=4\). While this episode doesn't mean that QFT is ill-defined or inconsistent (we actually know how to do things correctly), the finite-yet-wrong result in \(d=4\) surely sounds bizarre. The authors propose a new condition on quantum field theories: this strangeness shouldn't be there. In other words, the wrong, Ward-identity-violating terms in the \(d=4\) calculation should cancel. When they cancel, the \(d=4\) calculation will agree with the correct dim reg \(d=4-\epsilon\) calculation.
The paper suggests that this cancellation is a new general principle of physics that constrains the allowed spectra of particles and fields, and that it should be added next to the usual triangle-diagram gauge anomaly cancellation conditions in the Standard Model and similar gauge theories. Note that the triangle anomaly diagrams may be blamed on linear divergences in the integrals. Here, the new type of "anomaly" that should be canceled is also related to the linearly divergent part of certain integrals, because they behave differently under a shift of the momenta. So even the computational origin of their new "anomaly" resembles the case of the chiral anomaly. In some sense, the "new" anomaly only differs from the well-known triangle anomaly by the replacement of one external gauge boson with the Higgs boson. The diagrams that have to cancel are close to Higgs tadpole diagrams – Feynman diagrams you would use to compute the shift of the Higgs vacuum expectation value (vev). "Tadpole cancellation conditions" are well known to string theorists, but they hadn't really been discussed in the context of ordinary 4D quantum field theories before. I suppose that there should be a more natural way to phrase and justify the Higgs tadpole cancellation condition. The condition looks like their eqn (36): \[ 3ge^2M_W + \frac{e^2 g m_H^2}{2M_W} +\sum_{\rm scalars} 2\lambda_S v e_s^2-\!\!\sum_{\rm fermions}\!\! 2\lambda_f^2 v e_f^2 = 0. \] Supersymmetry seems to be the only known natural principle that cancels the new "anomaly". The authors have only checked it by an uninspiring brute-force calculation in the MSSM as a function of several parameters. I guess that there is a simple proof that supersymmetry – unbroken or broken at an arbitrary scale – cancels the new "anomaly". It's probably true, and they probably realize that the new condition is mostly equivalent to the usual unbearable lightness and naturalness of the Higgs' being.
However, if one could phrase the condition for naturalness as a version of an anomaly cancellation condition, it would probably be (or at least look) much more inevitable than the usual arguments discussing the hierarchy problem.
Polynomial equations are one of the major concepts of Mathematics, where the relation between numbers and variables is explained in a pattern. In Maths, we have studied a variety of equations formed with algebraic expressions. When we talk about polynomials, they are also a form of algebraic equation. What is a Polynomial Equation? Equations formed with variables, exponents and coefficients are called polynomial equations. A polynomial can have a number of different exponents, and the highest one is called the degree of the polynomial. We can solve polynomials by factoring them in terms of the degree and the variables present in the equation. A polynomial function is an equation which consists of a single independent variable, where the variable can occur in the equation more than once with different exponents. Students will also learn here to solve these polynomial functions. The graph of a polynomial function can also be drawn using turning points, intercepts, end behaviour and the Intermediate Value Theorem. Example of a polynomial function: \(f(x) = 3x^2 + 5x + 19\). Read More: Polynomial Functions Polynomial Equations Formula Usually, a polynomial equation is expressed in terms of powers \(a_n x^n\). Here \(a_n\) is the coefficient, \(x\) is the variable and \(n\) is the exponent. As we have already discussed in the introduction part, the value of the exponent should always be a non-negative integer. If we expand the polynomial equation we get: \(F(x) = a_nx^n + a_{n-1}x^{n-1} + a_{n-2}x^{n-2} + \dots + a_1x + a_0\). This is the general expression for a polynomial. It can also be expressed as \(F(x) = \sum_{k=0}^{n}a_{k}x^k\). An example of a polynomial equation is \(2x^2 + 3x + 1 = 0\), where \(2x^2 + 3x + 1\) is basically a polynomial expression which has been set equal to zero, to form a polynomial equation.
Types of Polynomial Equation Polynomial equations are basically of four types: Monomial Equations, Binomial Equations, Trinomial or Cubic Equations, and Quadratic Polynomial Equations. Monomial Equation: An equation which has only one variable term is called a monomial equation. This is also called a linear equation. It can be expressed in the algebraic form \(ax + b = 0\). For example: \(4x+1=0\); \(5y=2\); \(8z-3=0\). Binomial Equations: An equation which has two variable terms followed by one constant term is called a binomial equation. This takes the form of a quadratic equation. It can be expressed in the algebraic form \(ax^2 + bx + c = 0\). For example: \(2x^2 + 5x + 20 = 0\); \(3x^2 - 4x + 12 = 0\). Trinomial Equations: An equation which has three variable terms followed by a constant term is called a trinomial equation. This is also called a cubic equation. In other words, a polynomial equation which has degree three is called a cubic polynomial equation or trinomial polynomial equation. Since the power of the variable goes up to 3, we get three values for the variable, say \(x\). It is expressed as \(a_0x^3 + a_1x^2 + a_2x + a_3 = 0\), \(a_0 \neq 0\), or \(ax^3 + bx^2 + cx + d = 0\). For example: \(3x^3 + 12x^2 - 8x - 10 = 0\); \(9x^3 + 5x^2 - 4x - 2 = 0\). To get a value of \(x\), we generally use the trial and error method, in which we substitute values of \(x\) at random until the given expression evaluates to 0. If the polynomial evaluates to 0, then that value of \(x\) is considered one of the roots. After that we can find the other two values of \(x\). Let us take an example. Problem: \(y^3 - y^2 + y - 1 = 0\) is a cubic polynomial equation. Find its roots. Solution: \(y^3 - y^2 + y - 1 = 0\) is the given equation. By the trial and error method, start substituting values of \(y\). If \(y = -1\), then \((-1)^3 - (-1)^2 + (-1) - 1 = -1 - 1 - 1 - 1 = -4 \neq 0\). If \(y = 1\), then \(1^3 - 1^2 + 1 - 1 = 0\). Therefore, one of the roots is \(y = 1\), and \((y - 1)\) is one of the factors.
Now, dividing the given equation by \(y - 1\), we get \(y^3 - y^2 + y - 1 = (y-1)(y^2 + 1) = 0\). Therefore, the roots are \(y = 1\), which is a real number, while \(y^2 + 1 = 0\) gives complex or imaginary numbers. Quadratic Polynomial Equation A polynomial equation which has degree two is called a quadratic equation. The expression for the quadratic equation is \(ax^2 + bx + c = 0\), \(a \neq 0\). Here \(a\), \(b\) and \(c\) are real numbers. The roots of a quadratic equation give two values for the variable \(x\): \(x = \frac{-b\pm \sqrt{b^2-4ac}}{2a}\). Also Check: Polynomial Equation Solver Related Topics: Quadratic Formula & Quadratic Polynomial; Multiplying Polynomials; Polynomial Formula; Factorization Of Polynomials Frequently Asked Questions What are Quartic Polynomials? Polynomials of degree 4 are known as quartic polynomials. A quartic polynomial can have 0 to 4 real roots. How are Polynomial Equations Represented? A polynomial equation is represented in the form \(F(x) = a_nx^n + a_{n-1}x^{n-1} + a_{n-2}x^{n-2} + \dots + a_1x + a_0\). Can a Polynomial have no Real Zeroes? Yes, a polynomial can have no real zeroes. An example of a polynomial having no real zero is \(x^2 - 2x + 5\). Download BYJU'S – The Learning App to get engaging video lessons on various maths topics and make learning more interactive and effective.
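The trial-and-error step described above is easy to mechanise. Below is a small sketch (the helper name `integer_roots` is mine) that evaluates the polynomial at small integers by Horner's scheme and keeps the values where it vanishes:

```python
def integer_roots(coeffs, candidates=range(-10, 11)):
    """Trial and error: return integer values where the polynomial vanishes.
    `coeffs` lists the coefficients from the highest degree down."""
    def p(y):
        value = 0
        for c in coeffs:
            value = value * y + c   # Horner's scheme
        return value
    return [y for y in candidates if p(y) == 0]
```

For \(y^3 - y^2 + y - 1\) (coefficients `[1, -1, 1, -1]`) this returns only \(y = 1\); the remaining factor \(y^2 + 1\) has no real roots, matching the worked example.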
I was reading a proof on the evaluation of $\int_0^\infty e^{-x^2}\ dx$ without advanced techniques and stumbled upon two limits that I can't seem to crack: $$\lim_{m\to\infty}\left(\sqrt{m}\cdot\prod_{n=1}^m\frac{2n}{2n+1}\right)=\frac{\sqrt{\pi}}2$$ $$\lim_{m\to\infty}\left(\sqrt{m}\cdot\prod_{n=2}^m\frac{2n-3}{2n-2}\right)=\frac1{\sqrt{\pi}}$$ The proof does not go into detail on how these limits were obtained, and since I wanted to understand it completely, I thought this would be the best place to ask. I have not been exposed to infinite products (only summations) and therefore I do not know which rules to apply (I feel as if they are quite similar?). In both cases, I see that an indeterminate form $0\cdot\infty$ presents itself, therefore I am guessing L'Hôpital would be a nice approach? Any help is appreciated! Also, my calculus book does not tackle infinite products; any suggestions on books that might give me a general outlook on the subject? Wallis's formula: $$\frac{\pi}{2}=\prod_{n=1}^\infty \left[\frac{(2n)^2}{(2n+1)(2n-1)}\right].$$ Proof: Weierstrass factorization of $\sin$ (you can find Euler's semi-standard proof of this here): $$\sin(x)=x\prod_{n=1}^\infty\left(1-\frac{x^2}{n^2\pi^2}\right).$$ Plug in $x=\pi/2$ and play with the resulting fractions to get the desired result.
For your first product: \begin{align*} \prod_{n=1}^m\frac{2n}{2n+1}&=\frac{2\cdot 1}{2\cdot 1+1}\,\frac{2\cdot 2}{2\cdot 2+1}\,\frac{2\cdot 3}{2\cdot 3+1}\cdots \frac{2\cdot m}{2\cdot m+1}\\ &=2\cdot 1\,\frac{2\cdot 2}{2\cdot 1+1}\,\frac{2\cdot 3}{2\cdot 2+1}\cdots \frac{2\cdot m}{2\cdot (m-1)+1}\,\frac{1}{2m+1}\\ &=\frac{2}{2m+1}\prod_{n=2}^m\frac{2n}{2n-1} \end{align*} Thus: \begin{align*} \frac{\pi}{2}&=\lim_{m\rightarrow\infty}\prod_{n=1}^m \left[\frac{(2n)^2}{(2n+1)(2n-1)}\right]\\ &=\lim_{m\rightarrow\infty}\left(\prod_{n=1}^m\frac{2n}{2n+1}\right)\cdot 2\left(\prod_{n=2}^m\frac{2n}{2n-1}\right)\\ &=\lim_{m\rightarrow\infty}2\cdot\frac{2m+1}{2}\left(\prod_{n=1}^m\frac{2n}{2n+1}\right)^2 =\lim_{m\rightarrow\infty}\,(2m+1)\left(\prod_{n=1}^m\frac{2n}{2n+1}\right)^2. \end{align*} Now take the square root of both sides and notice that $\sqrt{m}/\sqrt{2m+1}\rightarrow 1/\sqrt{2}$, so that $\sqrt{m}\prod_{n=1}^m\frac{2n}{2n+1}\rightarrow \sqrt{\pi/2}/\sqrt{2}=\sqrt{\pi}/2$. For the second question, try a similar trick by shifting the index $n\rightarrow n+2$. Both of these can be obtained as a consequence of Stirling's approximation, by first rewriting all of the partial products in terms of factorials. This argument doesn't seem easier to me than the standard argument involving passing to a 2-dimensional integral. In general, a standard strategy for handling infinite products is to take their logarithms, producing infinite sums.
$\newcommand{\bbx}[1]{\,\bbox[8px,border:1px groove navy]{\displaystyle{#1}}\,} \newcommand{\braces}[1]{\left\lbrace\,{#1}\,\right\rbrace} \newcommand{\bracks}[1]{\left\lbrack\,{#1}\,\right\rbrack} \newcommand{\dd}{\mathrm{d}} \newcommand{\ds}[1]{\displaystyle{#1}} \newcommand{\expo}[1]{\,\mathrm{e}^{#1}\,} \newcommand{\ic}{\mathrm{i}} \newcommand{\mc}[1]{\mathcal{#1}} \newcommand{\mrm}[1]{\mathrm{#1}} \newcommand{\pars}[1]{\left(\,{#1}\,\right)} \newcommand{\partiald}[3][]{\frac{\partial^{#1} #2}{\partial #3^{#1}}} \newcommand{\root}[2][]{\,\sqrt[#1]{\,{#2}\,}\,} \newcommand{\totald}[3][]{\frac{\mathrm{d}^{#1} #2}{\mathrm{d} #3^{#1}}} \newcommand{\verts}[1]{\left\vert\,{#1}\,\right\vert}$ \begin{align} &\lim_{m \to \infty}\pars{\root{m}\prod_{n = 1}^{m}{2n \over 2 n + 1}} = \lim_{m \to \infty}{\root{m}\,m! \over \prod_{n = 1}^{m}\pars{n + 1/2}} = \lim_{m \to \infty}{\root{m}\,m! \over \pars{3/2}^{\overline{m}}} \\[5mm] = &\ \lim_{m \to \infty}{\root{m}\,m! \over \Gamma\pars{3/2 + m}/\Gamma\pars{3/2}} = \Gamma\pars{3 \over 2}\lim_{m \to \infty} {m^{1/2}\root{2\pi}m^{m + 1/2}\expo{-m} \over \root{2\pi}\pars{m + 1/2}^{m + 1}\expo{-\pars{m + 1/2}}} \\[5mm] = &\ {1 \over 2}\,\root{\pi}\lim_{m \to \infty}{\expo{1/2} \over \bracks{1 + \pars{1/2}/m}^{\,m + 1}} = \bbx{\ds{\root{\pi} \over 2}} \end{align} The other one follows the same pattern.
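As a sanity check, the first limit can also be verified numerically; here is a small Python sketch (not part of either answer above):

```python
import math

def wallis_limit(m):
    """sqrt(m) * prod_{n=1}^{m} 2n/(2n+1), which should tend to sqrt(pi)/2."""
    prod = 1.0
    for n in range(1, m + 1):
        prod *= 2 * n / (2 * n + 1)
    return math.sqrt(m) * prod

approx = wallis_limit(100000)
target = math.sqrt(math.pi) / 2
print(approx, target)  # both approximately 0.8862
```

The partial products converge slowly (the error behaves like a constant over m), which is consistent with the Stirling-based analysis: the correction factor $[1 + 1/(2m)]^{-(m+1)}e^{1/2}$ approaches 1 only at rate $1/m$.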
I have 1 right triangle of dimensions $\sqrt{75}$, $11$, $14$. I'd like to know how to quickly obtain the other right triangles with $\sqrt{75}$ as a leg, and two integers as the hypotenuse and the other leg (as per the Pythagorean theorem). It is my understanding that these triangles are all connected somehow geometrically and, consequently, algebraically. Are the necessary techniques for quickly obtaining them related to: https://en.wikipedia.org/wiki/Spiral_of_Theodorus and/or the proof using differential techniques shown here: https://en.wikipedia.org/wiki/Pythagorean_theorem ? Factor $75=3 \cdot 5^2$. This implies that $75$ has 6 divisors, namely $1,3,5,15,25,75$. Your problem is equivalent to finding all integer solutions of $$x^2+75=y^2$$ Now, write this equation as $$(y+x)(y-x)=75$$ so that you have to solve 6 different linear systems $$\left\{ \begin{matrix} y &+&x& =& a \\ y&-&x&=& 75/a\end{matrix} \right.$$ where $a$ is a divisor of $75$. For example, for $a=25$ you get the solution $x=11, y=14$. The other positive solutions are $(37,38)$ and $(5,10)$ (the remaining 3 systems give negative values of $x$, which must be discarded).
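The divisor-pairing argument translates directly into a short search; this sketch (the function name is illustrative) enumerates the factorizations $75 = (y-x)(y+x)$ with both factors of the same parity:

```python
def legs_with(square):
    """All positive integer pairs (x, y) with x^2 + square = y^2,
    found by factoring square = (y - x)(y + x)."""
    solutions = []
    for d in range(1, int(square ** 0.5) + 1):  # d = y - x, the smaller factor
        if square % d == 0:
            s = square // d                      # s = y + x, the larger factor
            if (s + d) % 2 == 0 and s > d:       # x, y integers need s + d even; x > 0 needs s > d
                solutions.append(((s - d) // 2, (s + d) // 2))
    return sorted(solutions)

print(legs_with(75))  # [(5, 10), (11, 14), (37, 38)]
```

Each pair $(x, y)$ gives a right triangle with legs $\sqrt{75}$ and $x$ and hypotenuse $y$, recovering the three triangles found in the answer.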
Suppose that $\limsup a_n$ is finite and $b_n \rightarrow b>0$ ($b\neq \infty$) as $n \rightarrow \infty$, and prove that $\limsup a_n b_n=(\limsup a_n)b$. Note in this problem $a_n$ can be unbounded below; I have already shown the result if $a_n$ is bounded. Here is my approach so far, please let me know if I am on the right track. We will show $(\limsup a_n)b=\sup E$ where $E$ denotes the set of all subsequential limits of $a_n b_n$ along with $+\infty, -\infty$. First we show $(\limsup a_n)b$ is an upper bound for $E$. So let $a_{n_k}b_{n_k}$ be a convergent subsequence of $a_n b_n$ with $n_1<n_2<\cdots$; then we have the following inequalities $$\lim_{k \to\infty}a_{n_k}b_{n_k}\leq \lim_{k \to\infty}\left(\sup_{i\geq k}{a_{n_i}}b_{n_k}\right)\leq \lim_{k\to\infty}\left(\sup_{n\geq k}{a_n}b_{n_k} \right)=\lim_{k\to\infty}\sup_{n\geq k}a_n\lim_{k\to\infty}{b_{n_k}}=b\limsup a_n$$ Hence $(\limsup a_n)b$ is an upper bound for $E$. Now we show that it is the least upper bound. Suppose there exists an $M<b\limsup a_n$ such that for all convergent subsequences $a_{n_k}b_{n_k}$ of $a_nb_n$ we have that $\lim_{k\to\infty}{a_{n_k}b_{n_k}}\leq M$. Contradiction, since by taking $a_n$ to be bounded we have that $\limsup a_nb_n=b\limsup a_n$.
My book is Connections, Curvature, and Characteristic Classes by Loring W. Tu (I'll call this Volume 3), a sequel to both Differential Forms in Algebraic Topology by Loring W. Tu and Raoul Bott (Volume 2) and An Introduction to Manifolds by Loring W. Tu (Volume 1). If $F : N \to M$ is a diffeomorphism and $< , >$ is a Riemannian metric on $M$, then (1.3) defines an induced Riemannian metric $< , >'$ on $N$. Here $N$ and $M$ are smooth manifolds that hopefully have dimensions. Note that the $F_*$ here indeed refers to the differential $F_{*,p}: T_pN \to T_{F(p)}M$ defined in Volume 1 Section 8.2 and not the latter half $F_*: TN \to TM$ of the bundle map $(F, F_*)$, where $F_*$ is what would be known as $\tilde{F}$ in Volume 1 Section 12.3. The following is my proof of Example 1.9. Question 1: Is this proof correct? Question 2: If this proof is correct, then is there a way to do this without relying on pushforwards from Volume 1 or without injectivity of $F$? I guess we can come up with a similar proof for an embedding, but embeddings are injective. So we'll have to go with investigating local diffeomorphisms, local diffeomorphisms onto image, immersions, etc. If this proof is incorrect, then why? Proof: Notation from Volume 1 Section 2.4: For a smooth manifold $N$, let $\mathfrak X (N)$ be the set of smooth vector fields on $N$, and let $C^{\infty}N$ be the set of smooth functions on $N$ (not germs). We must show that A. (Not interested in proving this part, but I'm stating what is to be proven for completeness) For all $p \in N$, the mapping $\langle , \rangle'_p: (T_pN)^2 \to \mathbb R$ is an inner product on $T_pN$, where $\langle , \rangle'_p$ is given as follows: Let $u,v \in T_pN$. Then $F_{*,p}u, F_{*,p}v \in T_{F(p)}M$. Let $\langle , \rangle_{F(p)}: (T_{F(p)}M)^2 \to \mathbb R$ be the inner product on $T_{F(p)}M$ given by the Riemannian metric $\langle , \rangle$ on $M$, at the point $F(p) \in M$.
Then $(\langle , \rangle'_p)(u,v) = \langle u, v \rangle'_p = \langle F_{*,p}u, F_{*,p}v \rangle_{F(p)}$.

B. $\langle X,Y\rangle' \in C^{\infty}N$ for all $X,Y \in \mathfrak X (N)$, where $\langle X,Y\rangle': N \to \mathbb R$, $\langle X,Y \rangle'(p)=\langle X_p,Y_p\rangle'_p$ $=\langle F_{*,p}X_p,F_{*,p}Y_p\rangle_{F(p)}$.

To prove B:

(1) Let $X,Y \in \mathfrak X (N)$.

(2) Then, by Volume 1 Example 14.15, $F_{*}X$ and $F_{*}Y$ are defined vector fields on $M$. Hopefully, $F_{*}X$ and $F_{*}Y$ are smooth, i.e. $F_{*}X,F_{*}Y \in \mathfrak X (M)$. (I ask about this step here.)

(3) $\langle A, B \rangle \in C^{\infty} M$ for all $A,B \in \mathfrak X(M)$, by definition of $\langle , \rangle$ for $M$ (Definition 1.5).

(4) $\langle F_{*}X,F_{*}Y \rangle \in C^{\infty}M$, from (2) and (3).

(5) $\langle X,Y\rangle' = \langle F_{*}X,F_{*}Y \rangle \circ F$, i.e. $\langle X,Y\rangle'$ is the pullback by $F$ of $\langle F_{*}X,F_{*}Y \rangle$.

(6) $\langle X,Y\rangle' \in C^{\infty}N$, by Volume 1 Proposition 6.9, by (4) and by smoothness of $F$.
Repunit cannot be Square

Theorem

No repunit with two or more digits is a square.

Proof

By definition, $m$ is odd, so if $m$ were a square then $m \equiv 1 \pmod 4$, since every odd square is congruent to $1$ modulo $4$. Now $m$ is of the form $\displaystyle \sum_{k \mathop = 0}^{r - 1} 10^k$ where $r$ is the number of digits. Thus for $r \ge 2$:

$m = 11 + 100 s$ for some $s \in \Z$
$\phantom{m} = \paren {2 \times 4} + 3 + 4 \times \paren {25 s}$
$\phantom{m} = 3 + 4 t$ for some $t \in \Z$

Hence: $m \equiv 3 \pmod 4$ and so cannot be square. $\blacksquare$

Sources

1980: David M. Burton: Elementary Number Theory (revised ed.) ... (previous) ... (next): Chapter $2$: Divisibility Theory in the Integers: $2.1$ The Division Algorithm: Problems $2.1$: $7$
1986: David Wells: Curious and Interesting Numbers ... (previous) ... (next): $1,111,111,111,111,111,111$
1997: David Wells: Curious and Interesting Numbers (2nd ed.) ... (previous) ... (next): $1,111,111,111,111,111,111$
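The two congruences in the proof are easy to spot-check computationally; this small sketch (the helper name is our own) confirms them for many cases:

```python
def repunit(r):
    """The repunit with r ones, e.g. repunit(4) == 1111."""
    return (10 ** r - 1) // 9

# every repunit with at least two digits is congruent to 3 mod 4 ...
assert all(repunit(r) % 4 == 3 for r in range(2, 50))
# ... while every odd square is congruent to 1 mod 4
assert all((2 * k + 1) ** 2 % 4 == 1 for k in range(50))
print("no repunit with >= 2 digits can be a perfect square")
```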
Say I have a portfolio of 3 stocks $A,B,C$ with $\mu_A = 5\%$, $\mu_B = 10\%$, $\mu_C = 15\%$ and volatility $\sigma_A = 10\%$, $\sigma_B = 15\%$, and $\sigma_C = 25\%$. Let us also say that correlations are $\rho_{AC} = 0.7$, $\rho_{AB} = 0.3$, and $\rho_{BC} = -0.1$. Say total portfolio value is 1 and it is composed of $A,B,C$ equally by value. How would I calculate the corresponding risk exposure that I have to each of the three underlying securities? Portfolio $\mu_{total} = \frac{1}{3} \times \mu_A + \frac{1}{3} \times \mu_B + \frac{1}{3} \times \mu_C$. Portfolio $\sigma_{total} = \sqrt{\frac{1}{9}(\sigma_A^2+\sigma_B^2+\sigma_C^2 + 2\rho_{AC}\sigma_A\sigma_C+2\rho_{AB}\sigma_A\sigma_B+2\rho_{BC}\sigma_B\sigma_C)}$ How would you divide up $\sigma_{total}$, or is it not possible?
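One common way to "divide up" the volatility is the Euler decomposition, in which each asset's risk contribution is its weight times its marginal contribution to portfolio volatility; the contributions then sum exactly to the total. A sketch using the numbers from the question (this is one standard convention among several, not the only way to attribute risk):

```python
import math

weights = [1/3, 1/3, 1/3]
mu = [0.05, 0.10, 0.15]
sigma = [0.10, 0.15, 0.25]
rho = [[1.0, 0.3, 0.7],
       [0.3, 1.0, -0.1],
       [0.7, -0.1, 1.0]]

# covariance matrix: Sigma_ij = rho_ij * sigma_i * sigma_j
cov = [[rho[i][j] * sigma[i] * sigma[j] for j in range(3)] for i in range(3)]

mu_total = sum(w * m for w, m in zip(weights, mu))
var_total = sum(weights[i] * cov[i][j] * weights[j]
                for i in range(3) for j in range(3))
sigma_total = math.sqrt(var_total)

# Euler decomposition: contribution_i = w_i * (cov @ w)_i / sigma_total,
# and the contributions sum to sigma_total
marginal = [sum(cov[i][j] * weights[j] for j in range(3)) for i in range(3)]
contrib = [weights[i] * marginal[i] / sigma_total for i in range(3)]

print(round(mu_total, 4), round(sigma_total, 4))
assert abs(sum(contrib) - sigma_total) < 1e-12
```

With these inputs the portfolio volatility comes out to roughly 12.1%, and the per-asset contributions show that C dominates the risk despite the equal value weights.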
$\displaystyle \large \lim_{x \,\to\, 0}{\normalsize \dfrac{\sin{x}}{x}} \,=\, 1$

The limit of the ratio of the sine of an angle to the angle, as the angle approaches zero, is equal to one. This standard result is used as a rule to evaluate the limit of a function in which sine is involved. $x$ is a variable and represents the angle of a right triangle. The sine function is written as $\sin{x}$ as per trigonometry. The limit of the quotient of $\sin{x}$ by $x$ as $x$ approaches zero appears often in calculus. $\displaystyle \large \lim_{x \,\to\, 0}{\normalsize \dfrac{\sin{x}}{x}}$ Actually, the limit of $\sin{(x)}/x$ as $x$ tends to $0$ is equal to $1$, and this standard trigonometric result is used as a formula throughout calculus. There are two ways to prove this limit property in mathematics. It can be derived on the basis of the close relation between the function $\sin{x}$ and the angle $x$ as $x$ approaches zero, or it can be derived from the Taylor (or Maclaurin) series expansion of $\sin{x}$.
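The Maclaurin-series route mentioned above also explains how fast the ratio approaches 1: since $\sin x = x - x^3/6 + \cdots$, we get $\sin(x)/x = 1 - x^2/6 + O(x^4)$. A quick numerical check:

```python
import math

# sin(x)/x -> 1 as x -> 0; the series gives sin(x)/x = 1 - x^2/6 + O(x^4),
# so the ratio and the two-term series estimate should agree closely.
for x in [0.1, 0.01, 0.001]:
    print(x, math.sin(x) / x, 1 - x * x / 6)
```

Each halving-by-ten of $x$ shrinks the gap from 1 by a factor of about 100, matching the $x^2/6$ leading term.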
Search Now showing items 1-1 of 1 Production of charged pions, kaons and protons at large transverse momenta in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV (Elsevier, 2014-09) Transverse momentum spectra of $\pi^{\pm}, K^{\pm}$ and $p(\bar{p})$ up to $p_T$ = 20 GeV/c at mid-rapidity, |y| $\le$ 0.8, in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV have been measured using the ALICE detector ...
Recently the question If $\frac{d}{dx}$ is an operator, on what does it operate? was asked on mathoverflow. It seems that some users there objected to the question, apparently interpreting it as an elementary inquiry about what kind of thing is a differential operator, and on this interpretation, I would agree that the question would not be right for mathoverflow. And so the question was closed down (and then reopened, and then closed again…. sigh). (Update 12/6/12: it was opened again, and so I’ve now posted my answer over there.) Meanwhile, I find the question to be more interesting than that, and I believe that the OP intends the question in the way I am interpreting it, namely, as a logic question, a question about the nature of mathematical reference, about the connection between our mathematical symbols and the abstract mathematical objects to which we take them to refer. And specifically, about the curious form of variable binding that expressions involving $dx$ seem to involve. So let me write here the answer that I had intended to post on mathoverflow: ————————- To my way of thinking, this is a serious question, and I am not really satisfied by the other answers and comments, which seem to answer a different question than the one that I find interesting here. The problem is this. We want to regard $\frac{d}{dx}$ as an operator in the abstract senses mentioned by several of the other comments and answers. In the most elementary situation, it operates on functions of a single real variable, returning another such function, the derivative. And the same for $\frac{d}{dt}$. The problem is that, described this way, the operators $\frac{d}{dx}$ and $\frac{d}{dt}$ seem to be the same operator, namely, the operator that takes a function to its derivative, but nevertheless we cannot seem freely to substitute these symbols for one another in formal expressions.
For example, if an instructor were to write $\frac{d}{dt}x^3=3x^2$, a student might object, “don’t you mean $\frac{d}{dx}$?” and the instructor would likely reply, “Oh, yes, excuse me, I meant $\frac{d}{dx}x^3=3x^2$. The other expression would have a different meaning.” But if they are the same operator, why don’t the two expressions have the same meaning? Why can’t we freely substitute different names for this operator and get the same result? What is going on with the logic of reference here? The situation is that the operator $\frac{d}{dx}$ seems to make sense only when applied to functions whose independent variable is described by the symbol “x”. But this collides with the idea that what the function is at bottom has nothing to do with the way we represent it, with the particular symbols that we might use to express which function is meant. That is, the function is the abstract object (whether interpreted in set theory or category theory or whatever foundational theory), and is not connected in any intimate way with the symbol “$x$”. Surely the functions $x\mapsto x^3$ and $t\mapsto t^3$, with the same domain and codomain, are simply different ways of describing exactly the same function. So why can’t we seem to substitute them for one another in the formal expressions? The answer is that the syntactic use of $\frac{d}{dx}$ in a formal expression involves a kind of binding of the variable $x$. Consider the issue of collision of bound variables in first order logic: if $\varphi(x)$ is the assertion that $x$ is not maximal with respect to $\lt$, expressed by $\exists y\ x\lt y$, then $\varphi(y)$, the assertion that $y$ is not maximal, is not correctly described as the assertion $\exists y\ y\lt y$, which is what would be obtained by simply replacing the occurrence of $x$ in $\varphi(x)$ with the symbol $y$. 
For the intended meaning, we cannot simply syntactically replace the occurrence of $x$ with the symbol $y$, if that occurrence of $x$ falls under the scope of a quantifier. Similarly, although the functions $x\mapsto x^3$ and $t\mapsto t^3$ are equal as functions of a real variable, we cannot simply syntactically substitute the expression $x^3$ for $t^3$ in $\frac{d}{dt}t^3$ to get $\frac{d}{dt}x^3$. One might even take the latter as a kind of ill-formed expression, without further explanation of how $x^3$ is to be taken as a function of $t$. So the expression $\frac{d}{dx}$ causes a binding of the variable $x$, much like a quantifier might, and this prevents free substitution in just the way that collision does. But the case here is not quite the same as the way $x$ is a bound variable in $\int_0^1 x^3\ dx$, since $x$ remains free in $\frac{d}{dx}x^3$, but we would say that $\int_0^1 x^3\ dx$ has the same meaning as $\int_0^1 y^3\ dy$. Of course, the issue evaporates if one uses a notation, such as the $\lambda$-calculus, which insists that one be completely explicit about which syntactic variables are to be regarded as the independent variables of a functional term, as in $\lambda x.x^3$, which means the function of the variable $x$ with value $x^3$. And this is how I take several of the other answers to the question, namely, that the use of the operator $\frac{d}{dx}$ indicates that one has previously indicated which of the arguments of the given function is to be regarded as $x$, and it is with respect to this argument that one is differentiating. In practice, this is almost always clear without much remark. For example, our use of $\frac{\partial}{\partial x}$ and $\frac{\partial}{\partial y}$ seems to manage very well in complex situations, sometimes with dozens of variables running around, without adopting the onerous formalism of the $\lambda$-calculus, even if that formalism is what these solutions are essentially really about. 
Meanwhile, it is easy to make examples where one must be very specific about which variables are the independent variable and which are not, as Todd mentions in his comment to David’s answer. For example, cases like $$\frac{d}{dx}\int_0^x(t^2+x^3)dt\qquad \frac{d}{dt}\int_t^x(t^2+x^3)dt$$ are surely clarified for students by a discussion of the usage of variables in formal expressions and more specifically the issue of bound and free variables.
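The distinction drawn above, between an operator that consumes a function and notation that binds a named variable, can be made concrete in a small numerical sketch (the names `D`, `cube_x`, `cube_t` are our own illustrations, and the central difference quotient is of course only an approximation to the true derivative):

```python
def D(f, h=1e-6):
    """The abstract derivative operator: it consumes a *function* and
    returns a function; no variable name appears anywhere."""
    return lambda u: (f(u + h) - f(u - h)) / (2 * h)

cube_x = lambda x: x ** 3   # "x |-> x^3"
cube_t = lambda t: t ** 3   # "t |-> t^3" -- exactly the same function

# Applied to functions, the operator cannot tell the two apart:
assert D(cube_x)(2.0) == D(cube_t)(2.0)

# d/dx x^3 = 3x^2, so the value at 2 is approximately 12
assert abs(D(cube_x)(2.0) - 12.0) < 1e-3
```

This mirrors the $\lambda$-calculus point at the end: once the binding of the independent variable has been performed (here by `lambda`), the operator acts on the resulting abstract function, and the choice of bound-variable name is invisible to it.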
A Harnack type inequality and a maximum principle for an elliptic-parabolic and forward-backward parabolic De Giorgi class

Dipartimento di Matematica "Tullio Levi Civita", Università di Padova, via Trieste 63, 35121 Padova, Italy

We define a homogeneous parabolic De Giorgi class of order 2 which suits a mixed type class of evolution equations whose simplest example is $\mu (x) \frac{\partial u}{\partial t} - \Delta u = 0$, where $\mu$ can be positive, null or negative. The functions belonging to this class are locally bounded and satisfy a Harnack type inequality. Interesting by-products are Hölder continuity, at least in the "evolutionary" part of $\Omega$ and in particular at the interface $I$ where $\mu$ changes sign, and an interesting maximum principle.

Keywords: Parabolic equations, elliptic equations, mixed type equations, weighted Sobolev spaces, Harnack's inequality, Hölder continuity, maximum principle.

Mathematics Subject Classification: Primary: 35M10, 35B65, 35B50; Secondary: 35K65, 35B45, 35J62, 35J70.

Citation: Fabio Paronetto. A Harnack type inequality and a maximum principle for an elliptic-parabolic and forward-backward parabolic De Giorgi class. Discrete & Continuous Dynamical Systems - S, 2017, 10 (4): 853-866. doi: 10.3934/dcdss.2017043
Let $A$ be the alphabet of the codes, with $|A| = D$, and codelengths $1 \leq l_1 \leq ... \leq l_n$. Those codelengths satisfy Kraft's inequality: $$\sum_{i=1}^n D^{-l_i} \leq 1$$ In how many ways can we choose codewords $c(w_i) \in A^*$ so that $c(w_i)$ has length $l_i$, and the code is a prefix code? ($A^*$ is the set of all finite strings of characters from the alphabet, and $w_i$ is just a word.) I don't exactly know where to start. It is not very difficult to check whether a code is a prefix code, but how can I find the number of ways we can choose the codewords $c(w_i)$? I hope somebody can help me with that.
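For tiny instances one can simply enumerate all assignments and count the prefix-free ones, which is a useful way to check any closed-form answer you derive; a brute-force sketch (the function name is our own):

```python
from itertools import product

def count_prefix_codes(D, lengths):
    """Brute-force count of ordered codeword assignments c(w_1),...,c(w_n)
    with the given lengths over an alphabet of size D, such that no
    codeword is a prefix of another.  Only feasible for tiny instances."""
    alphabet = [str(i) for i in range(D)]
    pools = [["".join(p) for p in product(alphabet, repeat=l)] for l in lengths]

    def prefix_free(words):
        return not any(i != j and words[j].startswith(words[i])
                       for i in range(len(words)) for j in range(len(words)))

    return sum(1 for choice in product(*pools)
               if len(set(choice)) == len(choice) and prefix_free(choice))

# binary alphabet, lengths 1 and 2: pick '0' or '1' for the first word, then
# one of the two length-2 words not extending it, giving 2 * 2 = 4 assignments
print(count_prefix_codes(2, [1, 2]))  # 4
```

The greedy structure visible here (each new codeword of length $l_i$ rules out $D^{l_i - l_j}$ strings for every earlier codeword of length $l_j$) suggests a product formula along the lines of $\prod_i \bigl(D^{l_i} - \sum_{j<i} D^{l_i - l_j}\bigr)$, which you can test against the enumerator on small cases.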
THEMATIC PROGRAMS September 23, 2019 Fall 2005 May 31, 1:30 p.m. ** (Note special time) Hui Guo (Fields Institute) Integrable Teichmuller spaces We introduce a new kind of subspaces of the universal Teichmuller space. Some characterizations of them are given in terms of univalent functions, Beltrami coefficients and quasisymmetric homeomorphisms of the boundary of the unit disc. May 1 Mitsuhiro Shishikura (Kyoto University and Fields Institute) Teichmuller contraction and renormalization. According to the Royden-Gardiner theorem, any holomorphic mapping between Teichmuller spaces does not expand the Teichmuller distance. (Just like the Schwarz-Pick theorem says that any holomorphic mapping between hyperbolic Riemann surfaces does not expand the Poincare metric.) In the theory of renormalizations, one often wants to show that the renormalization map (which is defined in a transcendental way) on the space of certain dynamical systems is hyperbolic or contracting. Therefore the Royden-Gardiner theorem is an obvious candidate of tools to obtain the contraction. This idea was first used by Sullivan in his work on generalized Feigenbaum type renormalization. In this talk, we will discuss two applications of Teichmuller theory to renormalizations: 1. Parabolic renormalization for parabolic fixed points and their perturbations. 2. Rigidity of real quadratic polynomials only using Yoccoz's combinatorial a priori bounds. For these cases, we deal with the Teichmuller spaces of a disk or a punctured disk and show that a certain inclusion map induces a contraction in Teichmuller distance. Apr. 17 Apr. 10 Mary Rees (Liverpool) A fundamental domain for V_{3}. Abstract: We consider the space V_{n} of quadratic rational maps with one named critical point of period n, quotiented by M\"obius conjugacy preserving named critical points. Thus, the space V_{1} is the space of all quadratic polynomials up to affine conjugacy. The parameter space V_{1} must be one of the most studied in all dynamics.
But V_{1} differs from most other parameter spaces in an important respect: there is a natural ``base map'' and, within the Mandelbrot set, natural paths, up to a natural homotopy, to any other map in the Mandelbrot set. The simplest parameter space for which this fails to be true is V_{3}. One can say (truthfully) that there is no canonical choice of fundamental domain for V_{3,m}, which is obtained from V_{3} by removing a natural dynamically defined puncture set. I shall exhibit a fundamental domain, using the dynamical planes of three maps within V_{3} and a theory known as the ``Resident's View''. This enables one to at least formulate an analogue of MLC for this family. Three parts of the fundamental domain are straightforward (although the proof in one of these cases is probably new). The structure of the fourth part is much more interesting, involving a spiral in the dynamical plane of the so-called ``aeroplane'' quadratic polynomial z\mapsto z^{2}+c for the (unique) real parameter c for which the critical point 0 has period 3. Apr. 3 Matilde Martinez (Fields Institute) Measures on hyperbolic surface laminations We will consider laminations by hyperbolic Riemann surfaces, and different measures that can be associated with these objects (holonomy-invariant measures, harmonic measures, measures invariant under the geodesic and horocycle flows). We will state some results that show how these measures are related. We will mainly focus on two families of examples: Riccati foliations and Hilbert modular foliations. Mar. 20 Peter Makienko Remarks on Ruelle Operator, Invariant Differentials and invariant line fields problem.Abstract: TBA Mar. 13th **12PM (Note special time**) Special Brown Bag Lunch Dynamical Systems Seminar Dierk Schleicher (IU Bremen), Johannes Rueckert (IU Bremen), Magnus Aspenberg (Fields), and Vladim Timorin (Stony Brook) + anyone who wants to can give input. Combinatorics of Rational maps Mar. 
6 N: C^2 \rightarrow C^2 for finding the four roots has very complicated dynamics: N is four-to-one and N has points of indeterminacy. Furthermore, high iterates of N have many points of indeterminacy. By restricting to parameters |1-B|>1, all of these points of indeterminacy are in the set X_l = Re(x) < 1/2, which is invariant under N. If one wants to consider the homotopy type of a basin of attraction, W(r_i), for one of the roots r_i \in X_l under N, one encounters a kind of ``topological indeterminacy.'' When studying the homotopy type of a loop \gamma in W(r_i), should one consider the homotopies that hit the points of indeterminacy of N^k or should one avoid them? Both seem reasonable. To avoid such questions, one can perform blow-ups at the points of indeterminacy of all iterates of N, obtaining a new space X_l^\infty from X_l on which all iterates of N are defined. We show how to make precise the notion of linking numbers in X_l^\infty, overcoming a different kind of indeterminacy that is a result of the fact that H_2(X_l^\infty) is infinitely generated. Having developed this technology, I will explain how we can use it to study the homotopy type of the basins of attraction W(r_i) within X_l^\infty. Feb. 27 Robert Devaney (Boston University) Rings around the McMullen Domain Feb. 13 Andrzej Bis (University of Illinois at Chicago) Dynamics of foliated spaces in codimension greater than one Abstract. Dynamics of foliated spaces can be characterized by the exceptional minimal sets and topological entropy. Dynamical theory of codimension one foliations was developed by Hector, Cantwell and Conlon, and others. We present some results on exceptional minimal sets, which are analogous to the attractors in a standard dynamical system, of foliated spaces in codimension greater than one. Feb. 
6 Yoel Feler (Fields Institute) Holomorphic endomorphisms of configuration spaces Abstract: The most traditional configuration space C(X,n) of a complex space X consists of all n point subsets ("configurations") Q in X. If X carries an additional geometric structure, it may be taken into account; say if X=CP^m or C^m, the space C(X,n;gp) of geometrically generic configurations consists of all n point configurations Q such that no hyperplane in X contains more than m points of Q. An automorphism T of X (preserving an additional geometric structure, whenever it is relevant) produces a holomorphic endomorphism f of the configuration space via f(Q)=TQ. If the automorphism group Aut X is a complex Lie group, one may take T=T(Q) depending analytically on a configuration Q and define the corresponding holomorphic endomorphism f by f(Q)=T(Q)Q. Such a map f is called tame. In the talk, we shall see that for every non-hyperbolic Riemann surface X all "non-degenerate" holomorphic endomorphisms of configuration spaces C(X,n) are tame. To some extent, this is true also for spaces of geometrically generic configurations. Jan. 30 Tomoki Kawahira (Fields Institute) Tessellation and Lyubich-Minsky laminations associated with quadratic maps. Abstract: In the 1990s, M. Lyubich and Y. Minsky introduced the hyperbolic 3-laminations associated with rational maps as an analogue of the hyperbolic 3-manifolds associated with Kleinian groups. In this talk I will present a new method to describe topological and combinatorial changes of laminations associated with hyperbolic-to-parabolic degeneration of quadratic maps. The method is based on tessellation of filled Julia sets, which gives a nice organization of the dynamics inside the filled Julia set like external rays outside. Jan.
23 Dierk Schleicher (International University Bremen) Dynamics of transcendental entire functions from the point of view of polynomials Abstract: In this talk, we will discuss some fundamentals of iterated entire functions and indicate why and how they differ from polynomial dynamics, with a special focus on the simplest representatives in both cases, and a view towards generalization. Jan. 16 Dmitrii V. Anosov (Steklov Mathematical Institute) A lemma about families of epsilon pseudo-trajectories revisited. In hyperbolic dynamics there are results related to the existence of a true trajectory near an epsilon pseudo-trajectory for sufficiently small epsilon (although, formally, some of these results are expressed rather differently.) Many years ago I found a lemma which covers most of these questions. The proof is rather involved and some famous mathematicians expressed complaints about the difficulty. In this talk, I will present a simplified version of the proof. Dec. 5 Viviane Baladi (Institut Mathematiques de Jussieu) Anisotropic spaces of distributions and dynamical zeta functions Abstract: (Joint work with M. Tsujii) The Ruelle transfer operator is a powerful tool in ergodic theory, which involves composition with the dynamics. Many relevant dynamical systems are hyperbolic, i.e. they involve contracting and expanding directions. Composition with a contraction improves regularity - but composing with an expanding map "worsens" regularity: It has been an open problem for many years to find a space of distributions on which composition by a hyperbolic diffeomorphism (of finite smoothness) can be well understood. Last year we constructed such a space and estimated the essential spectral radius of the transfer operator on this space. After recalling this result, we shall describe more recent progress including spectral interpretation of zeroes of dynamical determinants. Nov.
28 Kristian Bjerklov (University of Toronto) The dynamics of the quasi-periodic Schroedinger cocycle at the lowest energy of the spectrum Abstract: We will study properties of the quasi-periodic Schroedinger equation at the lowest energy of the spectrum. This will lead us into the study of phase transitions. Moreover, we will answer a question by M. Herman concerning the geometry of a certain minimal set - a non-chaotic strange attractor - of the projective Schroedinger cocycle. We study the case of large coupling constant and Diophantine frequency. Nov. 25, 1:10 p.m. **Note: special day and time Mitchell Feigenbaum (The Rockefeller University) Exponents in period doubling Nov. 24, Yulij Ilyashenko (Cornell University) Topological properties of polynomial and analytic foliations Abstract: Geometrical study of holomorphic foliations of the complex plane, both projective and affine, lies on the boundary of differential equations, topology and complex analysis. Foliations of \Bbb CP^2 have an algebraic origin: they are defined by polynomial vector fields, but their behavior is highly transcendental. Their properties are drastically different from those of real polynomial vector fields. Properties of density of leaves, absolute rigidity and existence of a countable number of limit cycles were discovered by different authors in the 1960s and 1970s. The talk will present these results together with a survey of the further development and open problems. Foliations of \Bbb C^2 have an analytic origin: they are defined by analytic vector fields. Generic properties of these fields were studied only very recently. The genericity of density of leaves and the existence of infinitely many complex limit cycles have recently been proved. Moreover, generic leaves of such foliations are either disks or cylinders. These results were obtained by the graduate students Firsova, Kutuzova and Volk. Nov.
21 Konstantin Khanin (University of Toronto) Minimizers for random Lagrangian systems Abstract: We shall discuss random Aubry-Mather theory and prove that for time-dependent random Lagrangian systems on compact manifolds there exists a unique global minimizer. In the one-dimensional case we show that the global minimizer corresponds to a hyperbolic invariant measure for the random Lagrangian flow. We also discuss dynamical properties of shocks and show that their global structure is quite rigid and reflects the topology of the configuration manifold. Nov. 14 ** talk at 4:10 p.m. Israel Sigal (University of Toronto) Spectral Renormalization Group and Theory of Radiation Abstract: Non-relativistic quantum electrodynamics describes the interaction of charged particles (electrons and nuclei) with the quantized electromagnetic field (photons). The key problem here is to describe the emission and absorption of radiation by systems of matter such as atoms and molecules. In this talk I will present some recent rigorous results on the problem of radiation and describe a novel renormalization group technique used in proving these results. I will not assume any prior knowledge of quantum field theory or quantum mechanics. Nov. 7 Marco Martens (University of Groningen) Henon renormalization (I) Abstract: This mini-course will introduce a renormalization operator for dissipative Henon-like maps. The fixed point of the one-dimensional renormalization operator will also be a hyperbolic fixed point of the Henon-renormalization operator. This corresponds to universal geometrical properties of the Cantor attractor of infinitely renormalizable Henon-like maps. However, the two-dimensional theory is richer than the unimodal case. In particular, the Cantor attractor is not rigid, does not lie on a smooth curve and generically does not have bounded geometry. The quantitative aspects of these phenomena are controlled by the average Jacobian.
The global topological properties of finitely renormalizable Henon-like maps in phase and parameter space are also controlled by the average Jacobian. In particular, density of hyperbolicity will be discussed in a neighborhood of the infinitely renormalizable maps. Oct. 31 Hans Koch (University of Texas, Austin) Renormalization of vector fields (I) Abstract: This mini-course covers some of the recent developments in the renormalization of flows - mainly Hamiltonian flows and skew flows. After stating some of the problems and describing alternative approaches, we focus on the definition and basic properties of a single renormalization step. A second part deals with the construction of conjugacies and invariant tori, including shearless tori, and non-differentiable tori for critical Hamiltonians. Then we discuss properties related to the spectrum of the linearized renormalization transformation, such as the accumulation rates for sequences of closed orbits. The last part describes extensions from "simple" to Diophantine rotation vectors. This involves sequences of renormalization transformations that are related to continued fraction expansions in one and more dimensions. Whenever appropriate, the discussion of details will be restricted to special cases where inessential technical complications can be avoided. Oct. 24 Michael Shub (U Toronto) Lower bounds for the entropy in several families of dynamical systems Abstract: Using soft techniques we prove lower bounds for the maximum entropy of a system in a family in terms of the entropy of a random product of the systems in the family. We accomplish this for two families of immersions of the circle. For one family this is joint work with Leonel Robert and Enrique Pujals, and for the other with Rafael de la Llave and Carles Simo.
Time permitting, we recall a two-dimensional analog whose entropy properties are still unknown, but for which partial results have been achieved in joint work with Francois Ledrappier, Carles Simo and Amie Wilkinson. Oct. 17 Oct. 10 Thanksgiving holiday Oct. 3 Charles Pugh (UC Berkeley) Smoothing Topological Manifolds Abstract: The Cairns-Whitehead Smoothing Theorem is proved by dynamical systems methods, namely the Invariant Section Theorem. Sept. 26 The two vertical lines x=0 and x=1 are invariant under N and super-attracting. Within these lines the ''circles'' Re(y) = 1/2 and Re(y) = (1-B)/2, respectively, are hyperbolically repelling with multiplier 2. In this talk we will prove that these circles have superstable manifolds of real dimension 3 using the technique of holomorphic motions. These manifolds extend to all points with Re(x) < 1/2 and Re(x) > 1/2 respectively and provide insight into the topology of the basins of attraction for the four roots. This work follows the ideas of John H. Hubbard and Sebastien Krief. Sept. 19 We will compare this approach with Yoccoz's and McMullen's renormalizations. We will also mention possible applications such as Buff-Cheritat's work toward positive measure Julia sets.
There are six basic properties of derivatives in differential calculus, and they are used as formulas in differentiation. Learn the following list of properties of derivatives, with proofs and example problems, to see how to use them when differentiating functions.

Sum rule: $\dfrac{d}{dx}{\, \Big(f{(x)}+g{(x)}\Big)}$ $\,=\,$ $\dfrac{d}{dx}{\, f{(x)}}$ $+$ $\dfrac{d}{dx}{\, g{(x)}}$

Difference rule: $\dfrac{d}{dx}{\, \Big(f{(x)}-g{(x)}\Big)}$ $\,=\,$ $\dfrac{d}{dx}{\, f{(x)}}$ $-$ $\dfrac{d}{dx}{\, g{(x)}}$

The product rule of derivatives is popularly written in two different forms.

$(1). \,\,\,$ $\dfrac{d}{dx}{\, \Big(f{(x)}.g{(x)}\Big)}$ $\,=\,$ ${f{(x)}}{\dfrac{d}{dx}{\, g{(x)}}}$ $+$ ${g{(x)}}{\dfrac{d}{dx}{\, f{(x)}}}$

$(2). \,\,\,$ $\dfrac{d}{dx}{\, \Big(u.v\Big)}$ $\,=\,$ $u.\dfrac{dv}{dx}+v.\dfrac{du}{dx}$

The quotient rule of differentiation is also popularly written in two different forms.

$(1). \,\,\,$ $\dfrac{d}{dx}{\, \Bigg(\dfrac{f{(x)}}{g{(x)}}\Bigg)}$ $\,=\,$ $\dfrac{{g{(x)}}{\dfrac{d}{dx}{f{(x)}}}-{f{(x)}}{\dfrac{d}{dx}{g{(x)}}}}{{g{(x)}}^2}$

$(2). \,\,\,$ $\dfrac{d}{dx}{\, \Bigg(\dfrac{u}{v}\Bigg)}$ $\,=\,$ $\dfrac{v.\dfrac{du}{dx}-u.\dfrac{dv}{dx}}{v^2}$

Constant multiple rule: $\dfrac{d}{dx}{\, \Big(k.f(x)\Big)} \,=\, k \times \dfrac{d}{dx}{\, f(x)}$

Chain rule: $\dfrac{d}{dx} {f[{g(x)}]} \,=\, {f'[{g(x)}]}.{g'{(x)}}$

See also the list of differentiation formulas, with proofs and example problems, to learn how to use some standard results as formulas in differentiating functions. Learn how to solve easy to difficult mathematics problems of all topics in various methods with a step by step process, and also maths questions for practising.
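As a quick sanity check, the product and quotient rules above can be verified numerically with central finite differences. This is only an illustrative sketch: the functions $f=\sin$, $g=\exp$ and the test point are arbitrary choices, not part of the rules themselves.

```python
# Numerical check of the product and quotient rules via central differences.
# f, g and the test point x are arbitrary illustrative choices.
import math

def derivative(func, x, h=1e-6):
    """Central finite-difference approximation of func'(x)."""
    return (func(x + h) - func(x - h)) / (2 * h)

f = math.sin
g = math.exp
x = 0.7

# Product rule: (f*g)' = f*g' + g*f'
lhs = derivative(lambda t: f(t) * g(t), x)
rhs = f(x) * derivative(g, x) + g(x) * derivative(f, x)
print(abs(lhs - rhs) < 1e-5)  # True

# Quotient rule: (f/g)' = (g*f' - f*g') / g**2
lhs = derivative(lambda t: f(t) / g(t), x)
rhs = (g(x) * derivative(f, x) - f(x) * derivative(g, x)) / g(x) ** 2
print(abs(lhs - rhs) < 1e-5)  # True
```

The same check works for the sum and difference rules; only the two composite rules are shown because they are the ones most often misremembered.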
$x$ is a variable that represents an angle of a right triangle, and the cosine function is written as $\cos{x}$ in trigonometry. The indefinite integral of $\cos{x}$ with respect to $x$ is written mathematically in the following form.

$\displaystyle \int{\cos{x} \,}dx$

Write the derivative of the sine function with respect to $x$ to express the differentiation of the sine function in mathematical form.

$\dfrac{d}{dx}{\, \sin{x}} \,=\, \cos{x}$

In differential calculus, the derivative of a constant is always zero. So, adding an arbitrary constant $c$ to the trigonometric function $\sin{x}$ does not change the derivative.

$\implies$ $\dfrac{d}{dx}{(\sin{x}+c)} \,=\, \cos{x}$

According to integral calculus, the collection of all primitives of the $\cos{x}$ function is called the indefinite integral of $\cos{x}$, and it can be expressed in the following mathematical form.

$\displaystyle \int{\cos{x} \,}dx$

Here, the primitive (or antiderivative) of the $\cos{x}$ function is $\sin{x}$, and $c$ is the constant of integration.

$\dfrac{d}{dx}{(\sin{x}+c)} = \cos{x}$ $\,\Longleftrightarrow\,$ $\displaystyle \int{\cos{x} \,}dx = \sin{x}+c$

$\therefore \,\,\,\,\,\,$ $\displaystyle \int{\cos{x} \,}dx = \sin{x}+c$

Therefore, it is proved that the indefinite integral (antiderivative) of the cosine function is equal to the sum of the sine function and the constant of integration.
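The antiderivative relation above can also be checked numerically: a midpoint Riemann sum of $\cos$ over $[0,b]$ should approach $\sin{b}-\sin{0}$. The endpoint $b$ and the number of subintervals below are arbitrary illustrative choices.

```python
# Numerical check that an antiderivative of cos(x) is sin(x):
# a midpoint Riemann sum of cos over [0, b] should approach sin(b) - sin(0).
import math

def integrate(func, a, b, n=100000):
    """Midpoint-rule approximation of the definite integral of func on [a, b]."""
    h = (b - a) / n
    return sum(func(a + (i + 0.5) * h) for i in range(n)) * h

b = 1.2
print(abs(integrate(math.cos, 0.0, b) - math.sin(b)) < 1e-8)  # True
```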
Let $\mathcal A$ be the class of those abelian groups embeddable in the multiplicative group of some field. Let $\mathcal B$ be the class of those abelian groups whose finite subgroups are cyclic. I claim that $\mathcal A=\mathcal B$. From this it follows that the answer to the question "When can an infinite abelian group be embedded in the multiplicative group of a field?" is: those infinite abelian groups whose finite subgroups are cyclic. This class of groups is axiomatized by universally quantified first-order sentences. The sentences needed are (i) some universally quantified sentences axiomatizing the class of abelian groups, together with (ii) for each $n$, a sentence $\sigma_n$ that says, for all $x, y$, if $x$ and $y$ both have exponent $n$, then $x$ is a power of $y$ or $y$ is a power of $x$. (I am considering abelian groups as multiplicative groups.) To be clear here, the sentence $\sigma_3$ is the following universally quantified first-order sentence: $$\forall x\forall y(((x^3=1)\wedge (y^3=1))\to ((x=1)\vee(x=y)\vee(x=y^2)\vee(y=1)\vee(y=x)\vee(y=x^2)))$$ To see that $\mathcal A=\mathcal B$, observe that $\mathcal A$ is a class of first-order structures that is closed under the formation of substructures and ultraproducts. [Reason: it is clear that $\mathcal A$ is closed under the formation of substructures. If $\{A_i\;|\;i\in I\}\subseteq \mathcal A$ is a family of abelian groups, each embeddable in the multiplicative group of a field, then each ultraproduct that can be formed from this family is embeddable in the multiplicative group of the corresponding ultraproduct of fields. This ultraproduct is itself a field.] (The preceding observation implies that $\mathcal A$ is axiomatizable by universally quantified first-order sentences.) $\mathcal B$ is a class of first-order structures that is axiomatizable by universally quantified first-order sentences. [Reason: the sentences I mentioned above work.
Recall that these sentences are the axioms for abelian groups together with all the $\sigma_n$.] $\mathcal A\subseteq \mathcal B$. [Reason: it is well known that a finite subgroup of the multiplicative group of a field is cyclic.] $\mathcal B\subseteq \mathcal A$. [Reason: it is a general fact about classes that are axiomatizable by universally quantified first-order sentences that if $\mathcal B\not\subseteq \mathcal A$, then there is a finitely generated $B\in \mathcal B-\mathcal A$. A finitely generated member of $\mathcal B$ has the form $\mathbb Z^k\oplus \mathbb Z_m$. So to establish this claim it suffices to show that groups of this form are embeddable in multiplicative groups of fields. To embed this group, choose algebraically independent elements $\alpha_1,\ldots, \alpha_k\in\mathbb C$ and let $\zeta\in\mathbb C$ be a primitive $m$-th root of unity. The multiplicative subgroup of $\mathbb C$ generated by $\{\alpha_1,\ldots,\alpha_k,\zeta\}$ is isomorphic to $\mathbb Z^k\oplus \mathbb Z_m$, so we are done.]
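The well-known fact used in the step $\mathcal A\subseteq \mathcal B$ (a finite subgroup of the multiplicative group of a field is cyclic) can be illustrated by brute force for the prime fields $\mathbb Z/p\mathbb Z$. This is a small sketch, not part of the original argument:

```python
# Every finite subgroup of the multiplicative group of a field is cyclic.
# Illustration: the whole multiplicative group (Z/pZ)^* of a prime field
# is cyclic, i.e. it has a generator.
def multiplicative_order(a, p):
    """Order of a in (Z/pZ)^* (a must be coprime to p)."""
    order, x = 1, a % p
    while x != 1:
        x = (x * a) % p
        order += 1
    return order

def find_generator(p):
    """Return a generator of (Z/pZ)^*; one exists because the group is cyclic."""
    for a in range(2, p):
        if multiplicative_order(a, p) == p - 1:
            return a
    return None

for p in (5, 7, 11, 13):
    print(p, find_generator(p))  # a generator exists for each prime, e.g. 7 -> 3
```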
Title Uniqueness for discontinuous ODE and conservation laws Publication Type Journal Article Year of Publication 1998 Authors Bressan, A, Shen, W Journal Nonlinear Analysis 34 (1998) 637-652 Abstract Consider a scalar O.D.E. of the form $\dot x=f(t,x)$, where $f$ is possibly discontinuous w.r.t. both variables $t,x$. Under suitable assumptions, we prove that the corresponding Cauchy problem admits a unique solution, which depends Hölder continuously on the initial data. Our result applies in particular to the case where $f$ can be written in the form $f(t,x)\doteq g\big(u(t,x)\big)$, for some function $g$ and some solution $u$ of a scalar conservation law, say $u_t+F(u)_x=0$. In turn, this yields the uniqueness and continuous dependence of solutions to a class of $2\times 2$ strictly hyperbolic systems, with initial data in $L^\infty$. URL http://hdl.handle.net/1963/3699 DOI 10.1016/S0362-546X(97)00590-7
Given two sets that have the same cardinal number, for example: \begin{align*}A & = \{1, 4\}\\B & = \{1, 2\}\end{align*} How would you prove that a function from $A$ to $B$ is injective if and only if it is surjective, i.e. that it cannot be injective but not surjective, or surjective but not injective? My proof: since the cardinal numbers of $A$ and $B$ are equal, $n(A) = n(B)$, we have inj$(\beta) \wedge$ surj$(\beta)$, and therefore, by definition, bij$(\beta)$. So a function from $A$ to $B$ is always bijective. Is this correct? Are there other ways to prove this?
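What actually holds for finite sets of equal size is that a function between them is injective if and only if it is surjective; an arbitrary function need not be either (a constant map is neither). A brute-force enumeration over the example sets, offered here as a sketch rather than a proof, checks both points:

```python
# For finite sets of equal size: injective <=> surjective, but an
# arbitrary function need not be a bijection (e.g. a constant map).
from itertools import product

A = (1, 4)
B = (1, 2)

injective_iff_surjective = True
some_non_bijection = False
for images in product(B, repeat=len(A)):      # enumerate every function A -> B
    f = dict(zip(A, images))
    injective = len(set(f.values())) == len(A)
    surjective = set(f.values()) == set(B)
    if injective != surjective:
        injective_iff_surjective = False
    if not injective:
        some_non_bijection = True

print(injective_iff_surjective)  # True
print(some_non_bijection)        # True: e.g. the constant map sending 1, 4 -> 1
```

The same enumeration works for any pair of equal-size finite sets; the pigeonhole principle is the general proof.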
Browse Dissertations and Theses - Mathematics by Title Now showing items 627-646 of 1147

(2017-05-03) Nakajima introduced a t-deformation of q-characters, (q,t)-characters for short, and their twisted multiplication through the geometry of quiver varieties. The Nakajima (q,t)-characters of Kirillov-Reshetikhin modules ...

(1993) The examples discussed in this paper are related to the atomic space problem: Is there an infinite dimensional space with no proper closed infinite dimensional subspace? This question is equivalent to one first posed by ...

(1987) In this thesis we introduce the class Δ(X,Y) of nearly representable operators from a Banach space X to a Banach space Y. These are the operators that map X-valued uniformly bounded martingales that are Cauchy in ...

(2018-07-19) Solving Satisfiability Modulo Theories (SMT) problems is a key piece in automating tedious mathematical proofs. It involves deciding satisfiability of formulas of a decidable theory, which can often be reduced to solving ...

(2019-01-14) Competition and mutualism are inevitable processes in ecology, and a central question is which and how many taxa will persist in the face of these interactions. Ecological theory has demonstrated that when direct, pairwise ...

(2004) In this work, the author studies genus zero constant mean curvature (CMC) surfaces, Alexandrov-embedded in R^3. In the first chapters, the work of Lawson, and also of Pinkall and Polthier, is extended and the first-order ...

(2012-06-27) Moving boundary problems arise in many areas of science and engineering and they are of great importance in the area of partial differential equations (PDEs), since they characterize phase change phenomena where a system ...
(1990) In Chapter I we shall prove a new upper bound in the linear sieve. Our purpose in Chapter II is to explain our method in greater detail than was done in Chapter I. Let x be a large number. We consider $\pi_2$(x)--the ...

Non commutative version of arithmetic geometric mean inequality and crossed product of ternary ring of operators (2017-07-11) This thesis is structured into two parts. In the first two chapters, we prove the non commutative version of the Arithmetic Geometric Mean (AGM) inequality (this is joint work with Mingyue Zhao and Marius Junge). We start ...

(1996) This thesis introduces a new set theory referred to as graph-isomorphism set theory (GST). GST does not satisfy the foundation axiom. Peter Aczel has presented several non-well-founded (NWF) set theories within a unified ...

(2010-08-20) Results from abstract harmonic analysis are extended to locally compact quantum groups by considering the noncommutative Lp-spaces associated with the locally compact quantum groups. Let G be a locally compact abelian ...

(1989) Consider the nonlinear, singularly perturbed, vector boundary value problem x$'$ = f(t,x,y,$\epsilon$), $\epsilon$y$'$ = g(t,x,y,$\epsilon$), L(x(0),y(0),$\epsilon$) = $\alpha_0$, R(x(1),y(1),$\epsilon$) ...

(2009) In this thesis, we apply model theory to Lie theory and geometric group theory. These applications of model theory come via nonstandard analysis. In Lie theory, we use nonstandard methods to prove two results. First, we ...

(2011-05-25) In this dissertation, a nonstandard approach to lifting theory developed by Bliedtner and Loeb is applied to liftings on topological measure spaces and group measure spaces. A further application to disintegrations of ...
(1989) Ideas and techniques from nonstandard theories of measure spaces and Banach spaces are brought together to develop a nonstandard theory of Banach space valued measures. In particular, constructions of countably additive ...

(1994) We describe an extension of the Bochner integral. Bochner integrable functions can be approximated by simple functions. Using Nonstandard Analysis, we investigate internal simple functions from an internal measure space ...
Does anyone here understand why he set the velocity of the center of mass = 0 here? He keeps setting the velocity of the center of mass, and the acceleration of the center of mass (on other questions), to zero, which I don't comprehend. @amanuel2 Yes, this is a conservation of momentum question. The initial momentum is zero, and since there are no external forces, after she throws the 1st wrench the sum of her momentum plus the momentum of the thrown wrench is zero, and the centre of mass is still at the origin. I was just reading a sci-fi novel where physics "breaks down". While of course fiction is fiction and I don't expect this to happen in real life, when I tried to contemplate the concept I found that I cannot even imagine what it would mean for physics to break down. Is my imagination too limited o... The phase-space formulation of quantum mechanics places the position and momentum variables on equal footing, in phase space. In contrast, the Schrödinger picture uses the position or momentum representations (see also position and momentum space). The two key features of the phase-space formulation are that the quantum state is described by a quasiprobability distribution (instead of a wave function, state vector, or density matrix) and that operator multiplication is replaced by a star product. The theory was fully developed by Hilbrand Groenewold in 1946 in his PhD thesis, and independently by Joe...
Not exactly identical, however. Also a typo: a wavefunction does not really have an energy; it is the quantum state that has a spectrum of energy eigenvalues. Since Hamilton's equation of motion in classical physics is $$\frac{d}{dt} \begin{pmatrix} x \\ p \end{pmatrix} = \begin{pmatrix} 0 & 1 \\ -1 & 0 \end{pmatrix} \nabla H(x,p) \, ,$$ why does everyone make a big deal about Schrodinger's equation, which is $$\frac{d}{dt} \begin{pmatrix} \text{Re}\Psi \\ \text{Im}\Psi \end{pmatrix} = \begin{pmatrix} 0 & 1 \\ -1 & 0 \end{pmatrix} \hat H \begin{pmatrix} \text{Re}\Psi \\ \text{Im}\Psi \end{pmatrix} \, ?$$ Oh, by the way, the Hamiltonian is a stupid quantity. We should always work with $H / \hbar$, which has dimensions of frequency. @DanielSank I think you should post that question. I don't recall many who have looked at the two Hamilton equations together in this matrix form before, which really highlights the similarities between them (even though, technically speaking, the Schroedinger equation is based on quantising Hamiltonian mechanics). And yes, you are correct about the $\nabla^2$ thing. I got too used to the position basis. @DanielSank The big deal is not the equation itself, but the meaning of the variables. The form of the equation itself just says "the Hamiltonian is the generator of time translation", but surely you'll agree that classical position and momentum evolving in time are a rather different notion than the wavefunction of QM evolving in time. If you want to make the similarity really obvious, just write the evolution equations for the observables. The classical equation is literally Heisenberg's evolution equation with the Poisson bracket instead of the commutator, no pesky additional $\nabla$ or what not. The big deal many introductory quantum texts make about the Schrödinger equation is due to the fact that their target audience are usually people who are not expected to be trained in classical Hamiltonian mechanics. No time remotely soon, as far as things seem.
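The real/imaginary-part form of the Schrodinger equation quoted above is easy to check numerically. A minimal sketch, not from the chat: with hbar = 1 and an arbitrary real symmetric 2x2 Hamiltonian, integrating du/dt = H v, dv/dt = -H u (so that psi = u + i v satisfies i dpsi/dt = H psi) conserves the norm |psi|^2.

```python
# Real/imaginary-part form of the Schrodinger equation (hbar = 1):
#   du/dt = H v,  dv/dt = -H u,  psi = u + i v.
# H below is an arbitrary real symmetric 2x2 matrix for illustration.
H = [[1.0, 0.5],
     [0.5, -1.0]]

def matvec(M, x):
    return [sum(M[i][j] * x[j] for j in range(2)) for i in range(2)]

def rhs(u, v):
    return matvec(H, v), [-w for w in matvec(H, u)]

def step(u, v, dt):
    """One RK4 step of the coupled real system."""
    k1u, k1v = rhs(u, v)
    k2u, k2v = rhs([u[i] + 0.5 * dt * k1u[i] for i in range(2)],
                   [v[i] + 0.5 * dt * k1v[i] for i in range(2)])
    k3u, k3v = rhs([u[i] + 0.5 * dt * k2u[i] for i in range(2)],
                   [v[i] + 0.5 * dt * k2v[i] for i in range(2)])
    k4u, k4v = rhs([u[i] + dt * k3u[i] for i in range(2)],
                   [v[i] + dt * k3v[i] for i in range(2)])
    u = [u[i] + dt * (k1u[i] + 2*k2u[i] + 2*k3u[i] + k4u[i]) / 6 for i in range(2)]
    v = [v[i] + dt * (k1v[i] + 2*k2v[i] + 2*k3v[i] + k4v[i]) / 6 for i in range(2)]
    return u, v

u, v = [1.0, 0.0], [0.0, 0.0]          # psi(0) = (1, 0)
for _ in range(1000):
    u, v = step(u, v, 0.01)
norm = sum(x * x for x in u) + sum(x * x for x in v)
print(abs(norm - 1.0) < 1e-6)  # True: |psi|^2 is conserved
```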
Just the amount of material required for an undertaking like that would be exceptional. It doesn't even seem like we're remotely near the advancement required to take advantage of such a project, let alone organize one. I'd be honestly skeptical of humans ever reaching that point. It's cool to think about, but so much would have to change that trying to estimate it would be pointless currently. (lol) Talk about raping the planet(s)... Re the Dyson sphere: solar energy is a simplified version, right? Which is advancing. What about orbiting solar-energy harvesting? Maybe not as far away. Kurzgesagt also has a video on a space elevator; it's very hard, but expect that to be built decades earlier, and if it doesn't show up, maybe there's no hope for a Dyson sphere... o_O BTW @DanielSank Do you know where I can go to wash off my karma? I just wrote a rather negative (though well-deserved, and as thorough and impartial as I could make it) referee report. And I'd rather it not come back to bite me on my next go-round as an author o.o
[1101.1650] The cosmological bulk flow: consistency with $\Lambda$CDM and $z\approx 0$ constraints on $\sigma_8$ and $\gamma$ Authors: Adi Nusser, Marc Davis Abstract: We derive estimates for the cosmological bulk flow from the SFI++ catalog of Tully-Fisher (TF) measurements of spiral galaxies. For a sphere of radius $40\,h^{-1}$Mpc centered on the Milky Way (MW), we derive a bulk flow of $333 \pm 38$ km/s towards Galactic $(l,b)=(276^\circ,14^\circ)$ within a $3^\circ$ $1\sigma$ error. Within $100\,h^{-1}$Mpc we get $257\pm 44$ km/s towards $(l,b)=(279^\circ, 10^\circ)$ within a $6^\circ$ error. These directions are at a $40^\circ$ angle with the Supergalactic plane, close to the apex of the motion of the Local Group (LG) of galaxies after correcting it for the Virgocentric infall (st10). Our findings are consistent with the $\Lambda$CDM model with the latest WMAP best-fit cosmological parameters. But the bulk flow allows independent constraints. For the WMAP-inferred Hubble parameter $h=0.71$ and baryonic mean density parameter $\Omega_b=0.0449$, the constraint from the bulk flow on the matter mean density $\Omega_m$, the normalization of the density power spectrum $\sigma_8$, and the growth index $\gamma$ can be expressed as $\sigma_8\Omega_m^{\gamma-0.55}(\Omega_m/0.266)^{0.28}=0.86\pm 0.11$ (for $\Omega_m\approx 0.266$). Fixing $\sigma_8=0.8$ and $\Omega_m=0.266$ as favored by WMAP, we get $\gamma=0.495\pm 0.096$. These local constraints are independent of the biasing relation between mass and galaxies. Our results are based on a method termed ACE (All Space Constrained Estimate), which reconstructs the bulk flow from an all-space three-dimensional peculiar velocity field constrained to match the TF measurements. For comparison, a maximum likelihood estimate (MLE) is found to lead to similar bulk flows, but with larger errors.
Discussion related to specific recent arXiv papers Post Reply 10 posts • Page 1 of 1 This paper studies the bulk flow of galaxies on scales up to 100[tex]h^{-1}[/tex] Mpc using a single survey of spiral galaxies, with distances determined from the Tully-Fisher relation; the sample has 2859 galaxies. (The authors say they use the inverse Tully-Fisher relation instead of the Tully-Fisher relation; I confess ignorance of the difference.) The velocities are reconstructed by generating a random basis of velocity fields from a [tex]\Lambda[/tex]CDM power spectrum and fitting the coefficients, demanding that on very large scales the result agrees with [tex]\Lambda[/tex]CDM. The results are found to be consistent with [tex]\Lambda[/tex]CDM. This might not be interesting, were it not for the contrary claim of 0911.5516, which argues that there are flows in significant excess of the [tex]\Lambda[/tex]CDM expectation on 100[tex]h^{-1}[/tex] Mpc scales. (There are other papers claiming large bulk flows on even larger scales.) The present authors hint that miscalibration between different catalogues in the composite used in 0911.5516 could be the origin of the bulk flow found there. On the other hand, the assumption of a vanilla [tex]\Lambda[/tex]CDM model seems to be heavily used in the present analysis, and it is not transparent how much this biases the results.
Posts: 14 Joined: September 27 2004 Affiliation: University of Canterbury Contact: In our paper we used a different approach http://arxiv.org/pdf/1010.4276 and we also found that the SFI++ catalogue was consistent with [tex]\Lambda[/tex]CDM; see Table 1. But we found that the SN peculiar velocity data were mildly inconsistent with [tex]\Lambda[/tex]CDM at the two-sigma level. I had missed that paper, thanks. Just to explain shortly what the inverse T-F relation is: you fit the two parameters "s" and "eta_0" in the relation eta = s * M + eta_0, where eta = log("line_width") is a measure of the galaxy's circular velocity and M is the absolute magnitude. In the original T-F relation it was rather M = a * eta + b. Now for the results of the paper. The interesting thing is the discrepancy with the results of Watkins et al. 2009 and Feldman et al. 2010. They also use the SFI++ catalog and present results for this catalog alone, together with other catalogs (including the "Composite"). Their bulk flow from SFI++ alone grows from ~20 Mpc/h, whereas it decreases for Nusser and Davis, although they use the same data. So what is the reason? I can see at least two: different methods are used to estimate the bulk flow; the data are handled differently. For example, it is interesting that Watkins et al. present results up to ~60 Mpc/h. Nusser and Davis use the same data and claim to have measured the bulk flow up to ~100 Mpc/h, although their sample is smaller! The bulk flow issue seems to be far from settled IMO.
So in the inverse relation you determine the circular velocity from the magnitude, and not vice versa? Not exactly :) The circular velocity is your observable, as is the observed magnitude and redshift. What you want is a mean relation between a measure of the circular velocity and absolute magnitude. The latter is calculated from the observed m and redshift. The question is what is your "x" in the f(x)=a*x+b fit. In the ITF, the "x" is absolute magnitude M. For more details, take a look at the recent Davis et al. paper http://arxiv.org/abs/1011.3114 I don't understand... To me the equations are the same, with some terms moved from one side to the other. They are in a sense... It's only the question of what you fit with your linear regression or whatever, i.e. what is your x and what is your y. But it's still the same Tully-Fisher relation that is dealt with :) Maybe someone else will make it clearer... The reason to use the inverse TF relation is that there is intrinsic scatter in the relation and there is also an apparent magnitude limit. If you aren't careful with how you treat the scatter you can introduce a Malmquist-type bias. Malmquist biases are more severe with a larger dispersion in the population (brightness of a not-exactly-standard candle, or scatter about a relation). Typical methods of fitting y on x effectively assign the scatter to y, the dependent variable.
If you make magnitude the dependent variable then your fitted relation will be biased - it will have biases in zeropoint and slope - because you're missing some fraction of low-luminosity galaxies and you're not missing an equal fraction at all velocities. If you make velocity the dependent variable, then the scatter is parallel to the selection limit, so to speak, and the fitted relation is less biased. Discussions of fitting relations with both observational errors and scatter can be found in the appendix of Weiner et al 2006, http://adsabs.harvard.edu/abs/2006ApJ...653.1049W and B. Kelly 2007, http://adsabs.harvard.edu/abs/2007ApJ...665.1489K. My paper is Tully-Fisher-oriented, although the fitting issues are general; Kelly's paper is more statistically sophisticated and correct. That makes it a little clearer, thanks.
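The fitting argument above can be illustrated with a small Monte Carlo sketch. All numbers below are invented for illustration and are not a real Tully-Fisher calibration: the intrinsic scatter is placed in eta, a magnitude limit is imposed in M, and the inverse fit (eta on M) is compared with the inverted forward fit (M on eta). Because the selection acts only on M, the inverse fit stays nearly unbiased while the forward fit does not.

```python
# Toy Malmquist-bias demonstration: scatter in eta, selection on M.
# All parameter values are invented for illustration only.
import random

random.seed(1)
s_true, c_true = -0.125, -2.0      # invented ITF slope/zero point: eta = s*M + c
sigma_eta = 0.05                   # intrinsic scatter, placed in eta
mag_limit = -19.5                  # keep only objects brighter than this

M, eta = [], []
for _ in range(50000):
    m = random.uniform(-23.0, -17.0)
    if m < mag_limit:              # magnitude-limited selection, acts on M only
        M.append(m)
        eta.append(s_true * m + c_true + random.gauss(0.0, sigma_eta))

def ols_slope(xs, ys):
    """Ordinary least-squares slope of ys regressed on xs."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    return sxy / sxx

inv_slope = ols_slope(M, eta)           # inverse fit: eta on M (selection-safe)
fwd_slope = 1.0 / ols_slope(eta, M)     # forward fit: M on eta, then inverted

print(abs(inv_slope - s_true) < abs(fwd_slope - s_true))  # True
```

In this toy setup the inverse fit recovers the input slope to a few parts in a thousand, while the inverted forward fit is pulled away from it by the scatter and the cut; that is the qualitative point of the posts above.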
There are six fundamental formulas for the integration of trigonometric functions.

$\int \sin{x}\,dx = -\cos{x}+c$

$\int \cos{x}\,dx = \sin{x}+c$

$\int \sec^2{x}\,dx = \tan{x}+c$

$\int \csc^2{x}\,dx = -\cot{x}+c$

$\int \sec{x}\tan{x}\,dx = \sec{x}+c$

$\int \csc{x}\cot{x}\,dx = -\csc{x}+c$
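Each formula is a derivative rule read backwards, so a quick numerical spot-check is to differentiate every right-hand side and compare it with the integrand (a sketch; the sample points are arbitrary values away from the poles of sec, csc and cot):

```python
import math

# For each pair (integrand f, antiderivative F), a central difference of F
# should reproduce f up to discretisation error.
sec = lambda x: 1/math.cos(x)
csc = lambda x: 1/math.sin(x)
cot = lambda x: math.cos(x)/math.sin(x)

pairs = [
    (math.sin,                     lambda x: -math.cos(x)),
    (math.cos,                     math.sin),
    (lambda x: sec(x)**2,          math.tan),
    (lambda x: csc(x)**2,          lambda x: -cot(x)),
    (lambda x: sec(x)*math.tan(x), sec),
    (lambda x: csc(x)*cot(x),      lambda x: -csc(x)),
]

def dF(F, x, h=1e-6):
    # symmetric difference quotient approximating F'(x)
    return (F(x + h) - F(x - h))/(2*h)

worst = max(abs(dF(F, x) - f(x))
            for f, F in pairs
            for x in (0.3, 0.7, 1.1))
print(worst)  # on the order of 1e-9 or smaller
```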
Anisotropic flow of inclusive and identified particles in Pb–Pb collisions at $\sqrt{{s}_{NN}}=$ 5.02 TeV with ALICE (Elsevier, 2017-11) Anisotropic flow measurements constrain the shear $(\eta/s)$ and bulk ($\zeta/s$) viscosity of the quark-gluon plasma created in heavy-ion collisions, as well as give insight into the initial state of such collisions and ...
$L^p$-$L^q$ estimates for the damped wave equation and the critical exponent for the nonlinear problem with slowly decaying data

1. Department of Mathematics, Faculty of Science and Technology, Keio University, 3-14-1, Hiyoshi, Kohoku-ku, Yokohama, 223-8522, Japan
2. Center for Advanced Intelligence Project, RIKEN, Japan
3. Department of Mathematics, Graduate School of Science, Osaka University, Toyonaka, Osaka 560-0043, Japan
4. Division of Mathematics and Physics, Faculty of Engineering, Shinshu University, 4-17-1 Wakasato, Nagano City 380-8553, Japan
5. Department of Engineering for Production and Environment, Graduate School of Science and Engineering, Ehime University, 3 Bunkyo-cho, Matsuyama, Ehime, 790-8577, Japan

The paper concerns the linear damped wave equation
$$\partial_{t}^2 u - \Delta u + \partial_t u = 0,$$
for which $L^p$-$L^q$ estimates with $1\le q \le p < \infty$ ($p\neq 1$) are established, and the associated nonlinear problem with data in $(H^s\cap H_r^{\beta}) \times (H^{s-1} \cap L^r)$, where $r \in (1,2]$, $s\ge 0$ and $\beta = (n-1)\left|\frac{1}{2}-\frac{1}{r}\right|$. The critical exponent for such slowly decaying data is $1+\frac{2r}{n}$, which formally reduces to the classical exponent $1+\frac{2}{n}$ at $r = 1$.

Mathematics Subject Classification: Primary: 35L71; Secondary: 35A01, 35B40, 35B44.

Citation: Masahiro Ikeda, Takahisa Inui, Mamoru Okamoto, Yuta Wakasugi. $L^p$-$L^q$ estimates for the damped wave equation and the critical exponent for the nonlinear problem with slowly decaying data. Communications on Pure & Applied Analysis, 2019, 18 (4): 1967-2008. doi: 10.3934/cpaa.2019090
Genetic algorithms (GAs) are stochastic search algorithms inspired by the basic principles of biological evolution and natural selection. GAs simulate the evolution of living organisms, in which the fittest individuals dominate over the weaker ones, by mimicking the biological mechanisms of evolution, such as selection, crossover and mutation. The R package GA provides a collection of general-purpose functions for optimization using genetic algorithms. The package includes a flexible set of tools for implementing genetic algorithm searches in both the continuous and the discrete case, whether constrained or not. Users can easily define their own objective function depending on the problem at hand. Several genetic operators are available and can be combined to explore the best settings for the current task. Furthermore, users can define new genetic operators and easily evaluate their performance. Local searches using general-purpose optimisation algorithms can be applied stochastically to exploit interesting regions. GAs can be run sequentially or in parallel, using either an explicit master-slave parallelisation or a coarse-grain islands approach. This document gives a quick tour of the functionality of GA (version 3.2). It was written in R Markdown, using the knitr package for production. Further details are provided in the papers Scrucca (2013) and Scrucca (2017). See also help(package="GA") for a list of available functions and methods.
Consider the function \(f(x) = (x^2+x)\cos(x)\) defined over the range \(-10 \le x \le 10\):

GA <- ga(type = "real-valued", fitness = f,
         lower = c(th = lbound), upper = ubound)
summary(GA)
## ── Genetic Algorithm ───────────────────
##
## GA settings:
## Type                  = real-valued
## Population size       = 50
## Number of generations = 100
## Elitism               = 2
## Crossover probability = 0.8
## Mutation probability  = 0.1
## Search domain =
##        th
## lower -10
## upper  10
##
## GA results:
## Iterations             = 100
## Fitness function value = 47.70562
## Solution               =
##            th
## [1,] 6.560761
plot(GA)

Consider the Rastrigin function, a non-convex function often used as a test problem for optimization algorithms because it is a difficult problem due to its large number of local minima. In two dimensions it is defined as \[f(x_1, x_2) = 20 + x_1^2 + x_2^2 - 10(\cos(2\pi x_1) + \cos(2\pi x_2)),\] with \(x_i \in [-5.12, 5.12]\) for \(i=1,2\). It has a global minimum at \((0,0)\) where \(f(0,0) = 0\). A GA minimisation search is obtained as follows (note the minus sign used in the definition of the local fitness function):

GA <- ga(type = "real-valued",
         fitness = function(x) -Rastrigin(x[1], x[2]),
         lower = c(-5.12, -5.12), upper = c(5.12, 5.12),
         popSize = 50, maxiter = 1000, run = 100)
summary(GA)
## ── Genetic Algorithm ───────────────────
##
## GA settings:
## Type                  = real-valued
## Population size       = 50
## Number of generations = 1000
## Elitism               = 2
## Crossover probability = 0.8
## Mutation probability  = 0.1
## Search domain =
##          x1    x2
## lower -5.12 -5.12
## upper  5.12  5.12
##
## GA results:
## Iterations             = 250
## Fitness function value = -5.803126e-08
## Solution               =
##                x1           x2
## [1,] 1.068976e-05 1.335054e-05
plot(GA)
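For readers without R, the same Rastrigin minimisation can be sketched with a minimal, self-contained genetic algorithm in Python. The operators and hyperparameters below (tournament selection, blend crossover, Gaussian mutation, single-elite survival) are illustrative stand-ins, not the operators implemented by the GA package:

```python
import math, random

def rastrigin(x1, x2):
    return 20 + x1**2 + x2**2 - 10*(math.cos(2*math.pi*x1) + math.cos(2*math.pi*x2))

def ga_minimise(f, lower, upper, pop_size=50, generations=300, seed=1):
    rng = random.Random(seed)
    dim = len(lower)
    pop = [[rng.uniform(lower[d], upper[d]) for d in range(dim)]
           for _ in range(pop_size)]
    best = min(pop, key=lambda v: f(*v))
    for _ in range(generations):
        nxt = [best[:]]                      # elitism: carry the best over
        while len(nxt) < pop_size:
            # tournament selection of two parents
            p1 = min(rng.sample(pop, 3), key=lambda v: f(*v))
            p2 = min(rng.sample(pop, 3), key=lambda v: f(*v))
            child = []
            for d in range(dim):
                a = rng.random()             # blend (convex) crossover
                g = a*p1[d] + (1 - a)*p2[d]
                if rng.random() < 0.1:       # Gaussian mutation
                    g += rng.gauss(0.0, 0.3)
                child.append(min(max(g, lower[d]), upper[d]))
            nxt.append(child)
        pop = nxt
        best = min(pop, key=lambda v: f(*v))
    return best, f(*best)

sol, val = ga_minimise(rastrigin, [-5.12, -5.12], [5.12, 5.12])
print(sol, val)   # a point near the global minimum at (0, 0)
```

As in the vignette's run, the fitness landscape's many local minima are what make elitism and mutation matter: crossover alone would only contract the population inside the convex hull of its parents.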
How to evaluate the following integral $$\int_0^\infty \frac{\sin{(\omega\tau)}\sin{(\omega y)}\sinh{(\omega x)}}{\sinh{(\omega a)}} \,\mathrm{d}\omega$$ where $a > 0$, $x \in (0, a)$, $y \in (0, \infty)$ and $\tau \in (0, \infty)$? The solution should be a function of $x, y, \tau, a$. Any clues? I heard that it has a closed form and can be expressed by elementary functions. Any idea will help! Thanks. Is it equivalent to $$\frac{\sin\left(\frac{\pi}{a} x\right)\sinh\left(\frac{\pi}{a}y\right)\sinh\left(\frac{\pi}{a} \tau\right)}{\sin^2\left(\frac{\pi}{a} x\right)\sinh^2\left(\frac{\pi}{a} y\right) \,+\, \left[\cos\left(\frac{\pi}{a} \tau\right)+\cos\left(\frac{\pi}{a} x\right)\cosh\left(\frac{\pi}{a} y\right)\right]^2}\;?$$
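One way to test any conjectured closed form is numerically. The sketch below compares direct quadrature with a candidate expression obtained from the product-to-sum identity $\sin(\omega\tau)\sin(\omega y)=\tfrac12[\cos(\omega(\tau-y))-\cos(\omega(\tau+y))]$ together with a tabulated result for $\int_0^\infty \cos(c\omega)\,\frac{\sinh(\omega x)}{\sinh(\omega a)}\,d\omega$; that table entry is an assumption here, and the sample parameter values are arbitrary:

```python
import math

# Assumed table entry (standard integral tables), for 0 < x < a:
#   I(c) = ∫_0^∞ cos(cw) sinh(xw)/sinh(aw) dw
#        = (pi/2a) * sin(pi x/a) / (cos(pi x/a) + cosh(pi c/a))
# Product-to-sum then gives the candidate closed form below.

def integrand(w, a, x, y, tau):
    if w == 0.0:
        return 0.0  # the sine factors vanish at w = 0
    return math.sin(w*tau)*math.sin(w*y)*math.sinh(w*x)/math.sinh(w*a)

def simpson(f, lo, hi, n):
    # composite Simpson rule, n even
    h = (hi - lo)/n
    s = f(lo) + f(hi)
    for i in range(1, n):
        s += (4 if i % 2 else 2)*f(lo + i*h)
    return s*h/3

def candidate(a, x, y, tau):
    p = math.pi/a
    term = lambda c: 1.0/(math.cos(p*x) + math.cosh(p*c))
    return (math.pi/(4*a))*math.sin(p*x)*(term(tau - y) - term(tau + y))

a, x, y, tau = 2.0, 1.0, 0.7, 1.3
# the integrand decays like exp(-w(a - x)), so truncating at w = 60 is safe
num = simpson(lambda w: integrand(w, a, x, y, tau), 0.0, 60.0, 60000)
print(num, candidate(a, x, y, tau))
```

The same harness can be pointed at the expression proposed in the question to see whether the two agree for sample parameter values.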
In classification one usually computes $$ C = \operatorname*{argmax}_k p(C=k\mid X) $$ where $p(C=k\mid X)$ is the posterior distribution. In a simple logistic regression setting with $C \in \{0, 1\}$ and $$ p(C=1\mid X)=\frac{\exp(\beta_0+\beta_1 x_i)}{1+\exp(\beta_0+\beta_1 x_i)} $$ and therefore $$ p(C=0\mid X)=\frac{1}{1+\exp(\beta_0+\beta_1 x_i)} $$ with $X=\{x_i\},\ i=1,\ldots,N$, we estimate the parameters $\beta_0, \beta_1$ via maximum likelihood estimation. To do so one has to compute the product of the likelihood contributions of all $N$ observations. So far, so normal. However, in all textbooks the authors plug in the posterior instead of the likelihood (e.g., Bishop, p. 206; Hastie et al., p. 120): \begin{align} \ell(\beta) &= \log\left(\prod_{i=1}^N p(C_i=k\mid x_i, \beta)\right) \\[8pt] &= \log\left(\prod_{i=1}^N p(C_i=1\mid x_i, \beta)^{C_i}(1-p(C_i=1\mid x_i, \beta))^{1-C_i}\right) \end{align} And even though these probabilities are now conditioned on $\beta$ as well, they still look like the posterior $p(C=k\mid X)$, not a likelihood. So how come we plug into the MLE just the posterior conditioned on the parameter $\beta$? And why is $p(C=k\mid X)$ a posterior anyway? To me a posterior is a distribution over a parameter given the observed data, but the class $C$ is not a parameter; it is a target, just like the observations $y_i$ in a linear regression setting.
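Whatever one calls it, the object being maximised is $\prod_i p(C_i \mid x_i, \beta)$ read as a function of $\beta$. A minimal sketch (hypothetical data and parameters, plain gradient ascent instead of the usual Newton/IRLS iteration):

```python
import math, random

def log_likelihood(b0, b1, xs, cs):
    # log prod_i p(C_i | x_i, beta) for the logistic model above
    ll = 0.0
    for x, c in zip(xs, cs):
        p1 = 1.0/(1.0 + math.exp(-(b0 + b1*x)))   # p(C=1 | x, beta)
        ll += c*math.log(p1) + (1 - c)*math.log(1 - p1)
    return ll

def fit(xs, cs, lr=0.1, steps=2000):
    # maximise the log-likelihood by plain gradient ascent
    b0 = b1 = 0.0
    n = len(xs)
    for _ in range(steps):
        g0 = g1 = 0.0
        for x, c in zip(xs, cs):
            r = c - 1.0/(1.0 + math.exp(-(b0 + b1*x)))  # residual C_i - p_i
            g0 += r
            g1 += r*x
        b0 += lr*g0/n
        b1 += lr*g1/n
    return b0, b1

# synthetic data from known (hypothetical) parameters beta = (-1, 2)
rng = random.Random(0)
xs = [rng.uniform(-3, 3) for _ in range(1000)]
cs = [1 if rng.random() < 1.0/(1.0 + math.exp(-(-1 + 2*x))) else 0 for x in xs]
b0, b1 = fit(xs, cs)
print(b0, b1, log_likelihood(b0, b1, xs, cs))  # b0, b1 roughly -1 and 2
```

Note that the code never needs a prior over $\beta$: it treats $p(C_i \mid x_i, \beta)$ as the likelihood of the observed labels, which is the textbook usage the question asks about.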
The ancient Greeks had a theory that the sun, the moon, and the planets move around the Earth in circles. This was soon shown to be wrong. The problem was that if you watch the planets carefully, sometimes they move backwards in the sky. So Ptolemy came up with a new idea - the planets move around in one big circle, but then move around a little circle at the same time. Think of holding out a long stick and spinning around, and at the same time on the end of the stick there's a wheel that's spinning. The planet moves like a point on the edge of the wheel. Well, once they started watching really closely, they realized that even this didn't work, so they put circles on circles on circles... Eventually, they had a map of the solar system that looked like this: This "epicycles" idea turns out to be a bad theory. One reason it's bad is that we now know that planets orbit in ellipses around the sun. (The ellipses are not perfect because they're perturbed by the influence of other gravitating bodies, and by relativistic effects.) But it's wrong for an even worse reason than that, as illustrated in this wonderful youtube video. In the video, by adding up enough circles, they made a planet trace out Homer Simpson's face. It turns out we can make any orbit at all by adding up enough circles, as long as we get to vary their sizes and speeds. So the epicycle theory of planetary orbits is a bad one not because it's wrong, but because it doesn't say anything at all about orbits. Claiming "planets move around in epicycles" is mathematically equivalent to saying "planets move around in two dimensions". Well, that's not saying nothing, but it's not saying much, either! A simple mathematical way to represent "moving around in a circle" is to say that positions in a plane are represented by complex numbers, so a point moving in the plane is represented by a complex function of time.
In that case, moving on a circle with radius $R$ and angular frequency $\omega$ is represented by the position $$z(t) = Re^{i\omega t}$$ If you move around on two circles, one at the end of the other, your position is $$z(t) = R_1e^{i\omega_1 t} + R_2 e^{i\omega_2 t}$$ We can then imagine three, four, or infinitely-many such circles being added. If we allow the circles to have every possible angular frequency, we can now write $$z(t) = \int_{-\infty}^{\infty}R(\omega) e^{i\omega t} \mathrm{d}\omega.$$ The function $R(\omega)$ is the Fourier transform of $z(t)$. If you start by tracing any time-dependent path you want through two-dimensions, your path can be perfectly-emulated by infinitely many circles of different frequencies, all added up, and the radii of those circles is the Fourier transform of your path. Caveat: we must allow the circles to have complex radii. This isn't weird, though. It's the same thing as saying the circles have real radii, but they do not all have to start at the same place. At time zero, you can start however far you want around each circle. If your path closes on itself, as it does in the video, the Fourier transform turns out to simplify to a Fourier series. Most frequencies are no longer necessary, and we can write $$z(t) = \sum_{k=-\infty}^\infty c_k e^{ik \omega_0 t}$$ where $\omega_0$ is the angular frequency associated with the entire thing repeating - the frequency of the slowest circle. The only circles we need are the slowest circle, then one twice as fast as that, then one three times as fast as the slowest one, etc. There are still infinitely-many circles if you want to reproduce a repeating path perfectly, but they are countably-infinite now. If you take the first twenty or so and drop the rest, you should get close to your desired answer. In this way, you can use Fourier analysis to create your own epicycle video of your favorite cartoon character. That's what Fourier analysis says. 
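The construction in the video can be sketched in a few lines: sample any closed path, estimate the $c_k$ by a Riemann sum, and redraw the path from only the slowest $2K+1$ circles (the square path below is just an illustrative choice, not the one from the video):

```python
import cmath

M = 400  # sample points along one period of the path

def square_path(t):
    # piecewise-linear tour of the unit square's corners, t in [0, 1)
    s = 4*(t % 1.0)
    if s < 1: return complex(s, 0)
    if s < 2: return complex(1, s - 1)
    if s < 3: return complex(3 - s, 1)
    return complex(0, 4 - s)

samples = [square_path(m/M) for m in range(M)]

def coeff(k):
    # c_k ≈ (1/M) Σ_m z(t_m) e^{-2πik t_m}  (Riemann sum for the Fourier coefficient)
    return sum(z*cmath.exp(-2j*cmath.pi*k*m/M) for m, z in enumerate(samples))/M

def max_error(K):
    # rebuild the path from circles k = -K..K and measure the worst deviation
    cs = [(k, coeff(k)) for k in range(-K, K + 1)]
    def rebuilt(m):
        return sum(c*cmath.exp(2j*cmath.pi*k*m/M) for k, c in cs)
    return max(abs(rebuilt(m) - z) for m, z in enumerate(samples))

print(max_error(3), max_error(15))  # more circles, smaller error
```

Each term $c_k e^{2\pi i k t}$ is literally one epicycle: a circle of (complex) radius $c_k$ traversed $k$ times per period.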
The questions that remain are how to do it, what it's for, and why it works. I think I will mostly leave those alone. How to do it - how to find $R(\omega)$ given $z(t)$ is found in any introductory treatment, and is fairly intuitive if you understand orthogonality. Why it works is a rather deep question. It's a consequence of the spectral theorem. What it's for has a huge range. It's useful in analyzing the response of linear physical systems to an external input, such as an electrical circuit responding to the signal it picks up with an antenna or a mass on a spring responding to being pushed. It's useful in optics; the interference pattern from light scattering from a diffraction grating is the Fourier transform of the grating, and the image of a source at the focus of a lens is its Fourier transform. It's useful in spectroscopy, and in the analysis of any sort of wave phenomena. It converts between position and momentum representations of a wavefunction in quantum mechanics. Check out this question on physics.stackexchange for more detailed examples. Fourier techniques are useful in signal analysis, image processing, and other digital applications. Finally, they are of course useful mathematically, as many other posts here describe.
$x$ is a variable, and it also represents the ratio of the length of the opposite side to the hypotenuse of a right triangle. The inverse sine function is written as $\arcsin{(x)}$ or $\sin^{-1}{(x)}$ in inverse trigonometric mathematics. In calculus, the limit of a function in the following form often appears, so it is considered a standard result and is also used as a formula in calculus. $\displaystyle \large \lim_{x \,\to\, 0}{\normalsize \dfrac{\arcsin{(x)}}{x}} \,\,\,$ or $\,\,\, \displaystyle \large \lim_{x \,\to\, 0}{\normalsize \dfrac{\sin^{-1}{(x)}}{x}}$ The limit of $\arcsin(x)$ divided by $x$ as $x$ approaches zero is equal to one. Let's learn how to derive this limit rule before using it as a formula in calculus. Convert the inverse sine function using the relationship between trigonometric and inverse trigonometric functions: take $y \,=\, \sin^{-1}{x}$; then $x \,=\, \sin{y}$. Therefore, the quotient of $\arcsin{x}$ by $x$ can be written as the ratio of $y$ to $\sin{y}$. $\implies$ $\dfrac{\sin^{-1}{x}}{x} \,=\, \dfrac{y}{\sin{y}}$ The limit has to be calculated as $x$ approaches zero, but the function is now expressed as a trigonometric function of $y$. We have taken $y \,=\, \sin^{-1}{x}$; therefore, if $x$ approaches zero, then $y$ tends to $\sin^{-1}{(0)}$. According to inverse trigonometry, the inverse sine of zero is equal to zero. Therefore, as the value of $x$ gets closer to $0$, the value of $y$ also approaches zero. $\therefore \,\,\,\,\,\,$ $\displaystyle \large \lim_{x \,\to\, 0}{\normalsize \dfrac{\sin^{-1}{x}}{x}}$ $\,=\,$ $\displaystyle \large \lim_{y \,\to\, 0}{\normalsize \dfrac{y}{\sin{y}}}$ There is a trigonometric limit rule in calculus. $\displaystyle \large \lim_{x \,\to\, 0}{\normalsize \dfrac{\sin{x}}{x}} \,=\, 1$ But our function is in reciprocal form.
$\displaystyle \large \lim_{y \,\to\, 0}{\normalsize \dfrac{y}{\sin{y}}}$ Now write this function in its reciprocal form. $\implies$ $\displaystyle \large \lim_{y \,\to\, 0}{\normalsize \dfrac{y}{\sin{y}}}$ $\,=\,$ $\displaystyle \large \lim_{y \,\to\, 0}{\normalsize \dfrac{1}{\dfrac{\sin{y}}{y}}}$ According to the quotient rule of limits, the limit of a quotient is equal to the quotient of the limits. $\implies$ $\displaystyle \large \lim_{y \,\to\, 0}{\normalsize \dfrac{y}{\sin{y}}}$ $\,=\,$ $\dfrac{\displaystyle \large \lim_{y \,\to\, 0}{\normalsize (1)}}{\displaystyle \large \lim_{y \,\to\, 0}{\normalsize \dfrac{\sin{y}}{y}}}$ According to the constant limit rule, the limit of one is always equal to one. $\implies$ $\displaystyle \large \lim_{y \,\to\, 0}{\normalsize \dfrac{y}{\sin{y}}}$ $\,=\,$ $\dfrac{1}{\displaystyle \large \lim_{y \,\to\, 0}{\normalsize \dfrac{\sin{y}}{y}}}$ The limit of $\sin{x}/x$ as $x$ approaches zero is equal to one. Therefore, the limit of $\dfrac{\sin{y}}{y}$ as $y$ tends to $0$ is also equal to one. $\implies$ $\displaystyle \large \lim_{y \,\to\, 0}{\normalsize \dfrac{y}{\sin{y}}}$ $\,=\,$ $\dfrac{1}{1}$ $\implies$ $\displaystyle \large \lim_{y \,\to\, 0}{\normalsize \dfrac{y}{\sin{y}}} \normalsize \,=\, 1$ Recall that $\displaystyle \large \lim_{x \,\to\, 0}{\normalsize \dfrac{\sin^{-1}{x}}{x}}$ $\,=\,$ $\displaystyle \large \lim_{y \,\to\, 0}{\normalsize \dfrac{y}{\sin{y}}}$ $\,\,\, \therefore \,\,\,\,\,\,$ $\displaystyle \large \lim_{x \,\to\, 0}{\normalsize \dfrac{\sin^{-1}{x}}{x}}$ $\,=\,$ $1$ Therefore, it is proved that the limit of the quotient of the inverse sine of a variable by that variable, as the variable approaches zero, is equal to one.
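The limit can also be checked numerically. Since $\arcsin(x) = x + x^3/6 + \cdots$, the ratio approaches $1$ like $x^2/6$, which the shrinking sample values below make visible:

```python
import math

# Numeric check of lim_{x -> 0} asin(x)/x = 1.
# The deviation from 1 shrinks roughly like x^2/6.
for x in (0.1, 0.01, 0.001):
    print(x, math.asin(x)/x)
```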
Memorizing... A course of trigonometry can be surprisingly confusing and somewhat counter-mathematical: an ever-increasing number of identities that sometimes seem unending. The question is how I can understand these formulas intuitively, the way a mathematician does, not based on artificial definitions and symbols, but by getting into the heart, the essence of the subject, a way of thinking that can make them trivial and give the feeling of why they work, aside from the way we prove them. For instance $$\cos \theta +\cos \varphi =2\cos \left({\frac {\theta +\varphi }{2}}\right)\cos \left({\frac {\theta -\varphi }{2}}\right)$$ can lead to several questions: why did we multiply by a factor of $2$? Why did we take the average of $\theta$ and $\varphi$? Of course the way we chose to define $\cos(x)$ is what led to this fuzzy discovery, but how? I believe, with no doubt, that there should be a solution, as Poincaré put it: "Mathematics is the art of giving the same name to different things." Relating these identities to other concepts in math (linear transformations and rotations, in the case of the matrix form of the sum and difference formulae) is one example. Any help would be appreciated. Thanks for your time. (1). Background: Let $E$ be the Euclidean plane. (i). If $f:E\to E$ is an isometry then $f$ is a bijection, so $f^{-1}:E\to E$ is also an isometry. (ii). If an isometry $f:E\to E$ has 3 non-collinear fixed points then $f=id_E.$ (iii). If isometries $f,g$ on $E$ agree on 3 non-collinear points then $f=g$, because the isometry $g^{-1}f$ fixes those 3 points, so $g^{-1}f=id_E.$ (2).
Choose an origin and orthogonal co-ordinate axes for $E.$ Any $2\times 2$ matrix $M(a)$ with top row $(\cos a, \;-\sin a)$ and bottom row $(\sin a,\;\cos a)$ can be regarded as a function on $E$ that maps $v=\binom {x}{y}\in E$ to $M(a)v\in E.$ The theorem of Pythagoras implies that $M(a)$ is an isometry. The isometry $R(a)$ rotates each point of $E$ through the angle $a$ counter-clockwise about $\binom {0}{0}.$ Note that $R(a)R(b)=R(a+b).$ Since $M(a)(v)= R(a)(v)$ when $v \in \{ \binom {0}{0},\binom {1}{0}, \binom {0}{1}\},$ we have $M(a)=R(a).$ $$\text {Therefore }\quad M(a)M(b)=R(a)R(b)=R(a+b)=M(a+b).$$ Comparing entries in the matrices $M(a)M(b)$ and $M(a+b),$ we obtain the trig angle-sum formulae as a necessary consequence of Part (1), (i)-(iii). Here is an example where some light can be shed on another important trigonometric formula: $$\tag{1}\cos^2(x)=\frac{1}{2}+\frac{1}{2}\cos(2x)=\frac{1}{2}\left(1+\cos(2x)\right)$$ Let us give the names $f : x \mapsto \cos^2(x)$ and $g : x \mapsto \cos(2x)$. $1$st approach: we already know the RHS of (1). $f$ and $g$ share two properties: they are even and periodic with period $\pi$, and it is evident that the graphical representation of $f$ is a shifted and reduced version of the graphical representation of $g$ (see figure). $2$nd (heuristic) approach, where the RHS is unknown. We need here the concept of Fourier expansions (have you already had a lecture on this important subject?). $f$, being even and periodic with period $\pi$, possesses a Fourier expansion $\sum_{k=0}^{\infty} a_k \cos(kx)$. A look at the graphical representation of $f$ shows that a good candidate for $a_0$ (the mean value) is $1/2$. Besides, the only frequency present in the spectrum of $f$ is twice the basic initial frequency (explaining the $2x$ instead of $x$), so there is a second term $a_2 \cos(2x)$. Because the maximum value of the RHS is 1, we necessarily have $a_2=1/2$. That's all. In fact (1) can be seen as a finite Fourier expansion.
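The matrix argument can be verified numerically in a few lines (the sample angles are arbitrary): comparing the entries of $M(a)M(b)$ with those of $M(a+b)$ is exactly the angle-sum formulas $\cos(a+b)=\cos a\cos b-\sin a\sin b$ and $\sin(a+b)=\sin a\cos b+\cos a\sin b$.

```python
import math

def M(a):
    # the 2x2 rotation matrix from the argument above
    return [[math.cos(a), -math.sin(a)],
            [math.sin(a),  math.cos(a)]]

def matmul(P, Q):
    return [[sum(P[i][k]*Q[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

a, b = 0.8, 1.7  # arbitrary sample angles
lhs = matmul(M(a), M(b))
rhs = M(a + b)
err = max(abs(lhs[i][j] - rhs[i][j]) for i in range(2) for j in range(2))
print(err)  # at floating-point round-off level
```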
Remark: one can, in almost the same manner, anticipate formulas of the same kind for any power $\cos^n(x)$ as a Fourier decomposition. If you recall Euler's formula $$e^{ix}=\cos x+i\sin x$$ then you will be able to derive basically all trig identities (using the rules of the exponential function).
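As a quick sanity check of that route (sample angles arbitrary): writing $\cos t = (e^{it}+e^{-it})/2$ and expanding the products of exponentials reproduces the sum-to-product identity quoted at the start, which the code below confirms numerically.

```python
import cmath

def cos_via_euler(t):
    # cos t = (e^{it} + e^{-it}) / 2
    return ((cmath.exp(1j*t) + cmath.exp(-1j*t))/2).real

worst = 0.0
for t, p in [(0.3, 1.1), (2.0, -0.7), (5.0, 3.3)]:
    lhs = cos_via_euler(t) + cos_via_euler(p)
    rhs = 2*cos_via_euler((t + p)/2)*cos_via_euler((t - p)/2)
    worst = max(worst, abs(lhs - rhs))
print(worst)  # at floating-point round-off level
```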
Let $X,Y$ be Banach spaces and $T:X\to Y$ be a bounded linear operator. It is required to show that there is a constant $m>0$ such that $\|T(x)\|\geq m\|x\|$ for all $x\in X$ if and only if $T$ is injective and $T(X)$ is closed. I proved the forward implication using the fact that $T^{-1}$ exists and is bounded if and only if there is $m>0$ such that $m\|x\|\leq\|T(x)\|$ for all $x\in X$, which I have already proved. However I cannot find a way to prove the other direction. I tried showing that $T(X)=Y$, which is just some wishful thinking so that I can invoke the above result. But I couldn't, and I am not sure if it really is the case that $T(X)=Y$. So could someone please give me a hint? Thanks. Edit: (-added later-) ($\Rightarrow$) $x_1\neq x_2\implies \|x_1-x_2\|>0\implies \|T(x_1)-T(x_2)\|=\|T(x_1-x_2)\|\geq m\|x_1-x_2\|>0\implies T(x_1)\neq T(x_2)$. Hence $T$ is injective. Let $y\in \overline{T(X)}$. Then choose $(y_n)=(T(x_n))$ in $T(X)$ such that $y_n\to y$. For each $j,k\in\mathbb{Z^+}$, $\|y_j-y_k\|=\|T(x_j)-T(x_k)\|=\|T(x_j-x_k)\|\geq m\|x_j-x_k\|$, and therefore, since $(y_n)$ is Cauchy, $(x_n)$ is Cauchy and hence converges to some $x\in X$ as $X$ is complete. Now by continuity of $T$ we have $y=T(x)\in T(X)$. Hence $T(X)$ is closed. ($\Leftarrow$) $T:X\to T(X)$ is a bijective bounded linear operator and $T(X)$ is complete as it is closed. By the open mapping theorem $T$ is an isomorphism, and hence there exists $m>0$ such that $\|T(x)\|\geq m\|x\|$ for each $x\in X$. @JohnMa Is the added later part alright?
I don't have access to Tarski's exposition, but the following arguments (see Sections 1-3 below) are all made in the same 'playground' in which Tarski developed his theory. I have no doubt that Tarski's definition of multiplication of the reals depends on using the Eudoxus theory of proportion (see this). The Eudoxus theory can be used to show that any two endomorphisms of the additive group of positive real numbers commute (under functional composition), and that is crucial to defining multiplication with endomorphisms in our sketched-out theory. Here is Definition 5 of Euclid's Book V: Magnitudes are said to be in the same ratio, the first to the second and the third to the fourth when, if any equimultiples whatever be taken of the first and third, and any equimultiples whatever of the second and fourth, the former equimultiples alike exceed, are alike equal to, or alike fall short of, the latter equimultiples respectively taken in corresponding order. Also from the wikipedia link, The Eudoxian definition of proportionality uses the quantifier, "for every ..." to harness the infinite and the infinitesimal, just as do the modern epsilon-delta definitions of limit and continuity. I can't say exactly how Tarski defines multiplication, but I'm about 99% confident in the following: There is one and only one binary operation of multiplication defined over $(\Bbb R, 0, 1, +, \le )$ satisfying $\quad$ $1 \times 1 = 1$ $\quad$ Multiplication is a commutative and associative operation $\quad$ Multiplication distributes over addition $\quad$ If $0 \lt a \lt b$ and $c \gt 0$ then $0 \lt ca \lt cb$ Section 1 With Tarski's axioms we start with $$ (\Bbb R, 0, 1, +, \le ) \quad \text{ the additive group of numbers on the line (extending in both directions)}$$ There is no multiplication yet, but $1 \gt 0$ is selected as the unit of measure. The ancient Greeks, Eudoxus/Euclid et al., worked with $(\Bbb R^{>0},1,+)$ as a system of magnitudes.
In the next section, we state three theorems, using modern mathematical terminology, in which some of their logic is employed. Theorem 3 is an immediate consequence of the first two theorems. In the last section we use this theory to define multiplication on $\Bbb R$, again stating theorems without proof. Section 2 Theorem 1: Every endomorphism $\phi: \Bbb R^{>0} \to \Bbb R^{>0}$ is completely determined by knowing the image of $1$ under $\phi$. Each of these endomorphisms, $$\tag 1 \phi_m:1 \mapsto m$$ is a bijective transformation, and so the inverse ${\phi_m}^{-1}$ can also be recast into a $\text{(1)}$ representation. Finally, to any $m \in \Bbb R^{>0}$ there corresponds a $\text{(1)-form }\phi_m$. These endomorphisms form a group under composition, denoted by $\mathcal G$. Theorem 2: The group $(\mathcal G, \circ)$ is commutative. Theorem 3: Corresponding to any choice of $1 \in (\Bbb R^{>0},+)$, the group $\mathcal G$ of endomorphisms can be put in a bijective correspondence with $\Bbb R^{>0}$. In this way a commutative binary operation, $$\tag 2 x \times y = [\phi_x \circ \phi_y]\, (1) = \phi_x(y) = \phi_y(x)$$ call it multiplication of $x$ with $y$, $xy$, can be defined on $\Bbb R^{>0}$. This operation distributes over addition $$\tag 3 x(y+z) = xy + xz$$ has a multiplicative identity $$\tag 4 1x = x1 = x$$ and associated with every $x \in \Bbb R^{>0}$ is a unique number $y \in \Bbb R^{>0}$ such that $$\tag 5 xy = yx = 1$$ Recall that we can write $y = x^{-1}$ or $x = y^{-1}$ when $\text{(5)}$ is true. Section 3 Proposition 4: Every endomorphism $\phi_m$ of $(\Bbb R^{>0},1,+)$ has one and only one extension to a (bijective) endomorphism on the abelian group $(\Bbb R,0,1,+)$. The collection $\mathcal P$ of these transformations forms a commutative group isomorphic to $\mathcal G$. Recall that we have the inversion endomorphism $\gamma: x \mapsto -x$ defined on the commutative group $(\Bbb R,0,1,+)$. Proposition 5: The inversion mapping $\gamma$ commutes with every endomorphism in $\mathcal P$.
Recall that we have the constant trivial endomorphism $\psi_0: x \mapsto 0$ defined on $(\Bbb R,0,1,+)$; it commutes with every other endomorphism on $(\Bbb R,0,1,+)$, and in particular with every morphism in $\mathcal P$. Proposition 6: The expression $$\tag 6 \mathcal A = \mathcal P \cup \{\gamma \circ \phi_m \, | \, \phi_m \in \mathcal P \} \cup \{\psi_0\}$$ represents a disjoint union of endomorphisms on $(\Bbb R,0,1,+)$. Proposition 7: The set $\mathcal A$ is closed under the operation of functional composition, and this operation is commutative. Every endomorphism $\phi: \Bbb R \to \Bbb R$ belonging to $\mathcal A$ is completely determined by knowing the image under $\phi$ of $1$. Except for the trivial $0\text{-endomorphism}$, each of these mappings, $$\tag 7 \phi_m:1 \mapsto m$$ is a bijective transformation with its inverse also belonging to $(\mathcal A,\circ)$. Finally, to any $m \in \Bbb R$ there corresponds a $\text{(7)-form }\phi_m$. So the trivial endomorphism $\psi_0$ on $\Bbb R$ can be written as $\phi_0$ and we can also write $$\tag 8 \mathcal A = \{ \phi_m \, | \, m \in \Bbb R\}$$ Theorem 8: The structure $(\Bbb R,0,1,+)$ can be put into a $1:1$ correspondence with $\mathcal A$. In this way a second binary operation, multiplication, can be defined over $(\Bbb R,0,1,+)$. The new algebraic structure, $(\Bbb R,0,1,+,\times)$, forms a field. Note: An outline for some of the above theory can be found in this article, $\quad$ Translating Tarski's Axiomatization/Logic of $\mathbb R$ to the Theory of Magnitudes
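A remark not in the original answer: given definition (2), the distributive law (3) and the identity law (4) are one-line consequences of additivity and Theorem 1, which is worth making explicit:

```latex
% Each \phi_x is an endomorphism of (\Bbb R^{>0}, +), i.e. additive, so
x(y+z) \;=\; \phi_x(y+z) \;=\; \phi_x(y) + \phi_x(z) \;=\; xy + xz .
% Moreover \phi_1 fixes 1, and by Theorem 1 an endomorphism is completely
% determined by its value at 1, hence \phi_1 = \mathrm{id} and
1 \times x \;=\; \phi_1(x) \;=\; x .
```

Commutativity of the operation itself is exactly Theorem 2, restated through the bijection $m \leftrightarrow \phi_m$.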
Let $X$ be a Hausdorff, locally compact but non-compact topological space. If the (Alexandroff) one-point compactification is connected, can $X$ have compact connected components? I think I proved the following Lemma Let $X$ be a Hausdorff space and $C \subset X$ have a compact neighbourhood $K$. Then $C$ is a component of $X$ if and only if $C$ is a component of $K$. in this answer. For the present problem this implies a negative answer, since a compact set in a locally compact Hausdorff space has a compact neighbourhood. Let $C$ be a compact component of $X$, and let $K$ be some compact neighbourhood of $C$ in $X$. Applying the lemma one way then shows that $C$ is a component of $K$, and then applying it the other way shows that $C$ is a component of the compactification, since $K$ remains a compact neighbourhood of $C$. Well, no. Assume $X=Y\sqcup C$, where $C$ is compact and both $Y$ and $C$ are open in $X$. Then the one-point compactification is $X\cup\{\infty\}$ with the topology: $U$ is open iff $U\subset X$ is open in the original topology, or $\infty\in U$ and $U^c$ is compact. Then $C = (Y\cup\{\infty\})^c$ is compact, and thus $Y\cup\{\infty\}$ is open. $C$ is also open (it was open in $X$). Thus the one-point compactification of $X$ is not connected.
[4] AMNV. I have already heard the M-name somewhere. Yes, of course I knew the main point when we wrote about the "axion weak gravity conjecture". That point – discussed in a paper by Banks, Dine, Fox, and Gorbatov (and in some lore I could have heard from Tom many years earlier, unless I told him) – had largely stimulated the research into the "normal" weak gravity conjecture itself. The conjecture says that the decay constant of an axion shouldn't be too high – in fact, its product with the action of the relevant instanton is smaller than one in Planck units. This is a generalization of the "normal" weak gravity conjecture because the instanton is a lower-dimensional generalization of the charged massive point-like particles (higher-dimensional ones exist as well) and its action is a generalization of the mass/tension of those objects. Our claim implies (the previous formulation by Banks et al.) that either the decay constant or the instanton action or both have to be small. And this condition has a nice implication: quantum gravity doesn't want to allow you to emulate flat potentials too closely, unless they're exactly flat, so the axion "wants" to be visible either because its decay constant is low or because the instanton corrections to its potential are sufficiently wiggly. This is one of the particular insights that indicates that string theory's predictivity always remains nonzero – string theory doesn't want you to approximate the effective field theory of one vacuum by another vacuum too closely. In the older Banks et al. formulation, the "axion weak gravity conjecture" was considered bad news because it indicated that some natural attempts to construct natural inflation were actually forbidden in quantum gravity. Fine, now the two Dutchpersons look at a sufficiently wide and rich class of string compactifications to test the "axion weak gravity conjecture" – type IIA string theory vacua on Calabi-Yau compactifications.
Note that type IIB has the "point-like in spacetime" instanton, the D(-1)-instanton, and similarly all the other odd ones. The Dutch paper looks at type IIA, so they need to look at the even D-brane instantons. OK, the "generic" Calabi-Yau has everything of order one. To make the decay constants and instanton actions parametrically large or small, so that you may study whether some inequalities are parametrically obeyed or violated, they need to study extreme shapes of Calabi-Yaus. They look at extreme corners of the complex structure moduli space. The analysis of these "extreme directions" is somewhat analogous to my and Banks' dualities vs singularities. And indeed, for every extreme direction in the Calabi-Yau complex structure moduli space, they find a tower of the D2-brane instantons that is predicted by the "axion weak gravity conjecture" – with the parametrically correct actions. That's quite a nice test of the conjecture. Curiously enough, to argue that the instantons exist, they need to use another swampland conjecture, the "swampland distance conjecture". Because the weak gravity conjectures should be counted as "swampland conjectures", they use one swampland conjecture to complete the partial proof of another one. I guess that a "swampland skeptic" could remain skeptical and call the proof circular. OK, Vengaboys are Dutch, too. At any rate, the "axion weak gravity conjecture" has passed a test (at least assuming that other conjectures hold) and it looks like a nontrivial test because the limits in the space of shapes of a Calabi-Yau aren't quite simple. The authors of the weak gravity conjectures arguably weren't idiots, it seems once again. The situation is really thought-provoking because the weak gravity conjectures may be motivated and formulated rather easily and have a "philosophical beauty, naturalness, and coherence" which is very important in theoretical physics. On the other hand, the proofs are partial, context-dependent, and very technical.
Cannot there be a universal proof of the "weak gravity conjecture(s)" that really unifies and clarifies all the partial proofs and that is as straightforward as the proof of Heisenberg's\[ \Delta x \cdot \Delta p \geq \frac{\hbar}{2} \] or the generalized uncertainty principle inequalities? And don't these weak gravity conjectures have some direct far-reaching philosophical consequences for quantum gravity – much like the uncertainty principle basically implies that probabilities must be predicted relative to an observer and from complex amplitudes? Well, let me give you another, more detailed hint about what you need to do to make a breakthrough analogous to the quantum mechanical one. In quantum mechanics, you first needed to realize that \(x,p\) from the inequality should be replaced with Hermitian operators. Here, we are talking about the values of parameters in effective actions of quantum gravity. So these parameters that enter the WGC-like conjectures must correspond to some objects, let's call them prdelators (because they're like operators but probably not quite), constructed within the full theory of quantum gravity or string/M-theory (which is more abstract than just an effective field theory). Your main task is to figure out what a "prdelator" is and why it has the property analogous to noncommutativity that is responsible for the swampland inequalities. And Czech readers must be warned that their partial understanding could be illusory.
I have a system and I've carried out a long molecular dynamics simulation over it. I would like to estimate the partition function $Z.$ Theoretically, one would compute: $$Z=\dfrac{1}{N!h^{3N}}\int \exp \left(-\beta\frac{p^2}{2m} \right) \exp\left(-\beta V(r) \right) \, dp \, dr, $$ but if the system I'm dealing with is, say, a protein in water or something like that, this integral is of course absolutely intractable. When thinking of how to estimate the partition function of such a system from a molecular dynamics simulation, I guess the first thing that comes to mind is the following: Consider all the snapshots obtained during your simulation, label them with the index $j$ and call $E_j$ the energy of each snapshot. For giant thermalized systems, I guess we can neglect the probability of a concrete configuration repeating, so I believe a nice try would be: $$\tilde{Z}=\frac{dV_{\text{ps}}}{N!h^{3N}}\sum_{j} \exp\left( -\beta E_j \right),$$ where $dV_{\text{ps}}$ is the volume of a small cube in phase space (its volume would have to be adjusted manually depending on how many snapshots one has taken). My question, briefly put, is: how can I estimate the error of estimating $Z$ by $\tilde{Z}$? It seems intuitive that typically $\tilde{Z}$ should be a good approximation to $Z,$ but one can easily think of situations where this isn't true: Imagine that my initial configuration lies in, so to speak, a valley in the energetic landscape, surrounded by a chain of "high mountains" (high-energy configurations), and that at the other side of these "mountains" lie low-energy configurations. It is clear that in such a case there could be a considerable error when estimating $Z$ from $\tilde{Z}.$ But since one does not know the aspect of the energetic landscape beforehand, it seems difficult to deal with this.
The question stops here, but just to provide context I'll explain briefly what I'm thinking about: I want to study a chemical reaction, and to that end I would like to know the free energy profile along a certain reaction coordinate. For that, an Umbrella Sampling technique is used: you drag the reaction coordinate along the reaction path, forcing it to be at certain positions at different steps (otherwise, the simulation wouldn't take you over the energetic barriers in short simulation times). I am aware of several methods to compute free energies, like FEP or WHAM, which is quite sophisticated, but I was guessing that if the technique I mentioned above to estimate the partition function $Z$ was correct, then the goal of getting the free energy profile would be fairly simple: you just drag along the reaction coordinate, perform molecular dynamics simulations for different restricted values of the reaction coordinate (you force the reaction coordinate to be at a concrete position by means of a harmonic potential), and at each value of the reaction coordinate you estimate the partition function as above. That's the idea, and that's why I am interested in having some idea of methods to estimate the error of approximating $Z$ by $\tilde{Z}.$ Note: this question was originally on Physics.stackexchange. It didn't receive any attention at all and I asked for it to be migrated to Chemistry.stackexchange, but the flag was ignored, so I've decided to delete my question on Physics.stackexchange and post it here.
Film Boiling Analysis in Porous Media From Thermal-FluidsPedia Revision as of 23:56, 31 May 2010 Film boiling of liquid saturated in a porous medium at an initial temperature of $T_\infty$ next to a vertical, impermeable heated wall at a temperature of $T_w > T_{sat}$ is analyzed (see Fig. 10.41; Cheng and Verma, 1981; Nield and Bejan, 1999). Vapor generated at the liquid-vapor interface flows upward due to buoyancy force. The liquid adjacent to the vapor layer is dragged upward by the vapor. The temperature at the liquid-vapor interface is at the saturation temperature. There are velocity and thermal boundary layers in the liquid phase adjacent to the vapor film. The solution of the film boiling problem requires solutions of vapor and liquid flow, as well as heat transfer in both the vapor and liquid phases. It is assumed that boundary layer approximations are applicable to the vapor film and to convection heat transfer in the liquid phase. It is further assumed that the vapor flow is laminar and two-dimensional, and that Darcy's law is applicable in both the vapor and liquid phases. The continuity, momentum, and energy equations in the vapor film are (10.255) (10.256) (10.257) where α is the thermal diffusivity of the porous medium saturated with the vapor.
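The equation images for (10.255)-(10.257) did not survive in this copy. Under the stated assumptions (boundary-layer approximation, laminar two-dimensional vapor flow, Darcy's law, buoyancy-driven vapor), the vapor-film equations typically take the following form; this is a reconstruction consistent with the surrounding text, not a copy of the originals:

```latex
% Continuity of the vapor flow in the film:
\frac{\partial u_v}{\partial x} + \frac{\partial v_v}{\partial y} = 0
% Darcy momentum equation, driven by the liquid-vapor density difference:
u_v = \frac{K\,(\rho_\ell - \rho_v)\,g}{\mu_v}
% Energy equation under the boundary-layer approximation:
u_v\,\frac{\partial T}{\partial x} + v_v\,\frac{\partial T}{\partial y}
  = \alpha_v\,\frac{\partial^2 T}{\partial y^2}
```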
The governing equations for the liquid boundary layer are (10.258) (10.259) (10.260) where $\alpha_\ell$ is the thermal diffusivity of the porous medium saturated with the liquid. The boundary conditions at the heated wall ($y = 0$) are (10.261) (10.262) It should be pointed out that $u_v$ is not equal to zero at the heating surface under Darcy's law, i.e., slip occurs at the surface. The boundary condition in the liquid that is far from the heated surface is $u_\ell = 0, \; y \to \infty$ (10.263) (10.264) The mass balance at the liquid-vapor interface is [see eq. (10.152)]: (10.265) The temperature at the liquid-vapor interface is equal to the saturation temperature: (10.266) The above film boiling problem can be solved using a similarity solution like that for film condensation in porous media discussed in Section 8.5.2. The results obtained by Cheng and Verma (1981) are shown in Fig. 10.42. The dimensionless parameters used in Fig. 10.42 are defined as (10.267) where the Jakob numbers $Ja_v$ and $Ja_\ell$ measure the degrees of superheat in the vapor and subcooling in the liquid. For all cases shown in Fig. 10.42, the effect of liquid subcooling on the heat transfer is insignificant. The effect of vapor superheat on heat transfer is significant when $Ja_v$ is less than 2. The following asymptotic result can be obtained from Fig. 10.42: (10.268) References Cheng, P., and Verma, A.K., 1981, "The Effect of Subcooling Liquid on Film Boiling about a Vertical Heated Surface in a Porous Medium," International Journal of Heat and Mass Transfer, Vol. 24, pp. 1151-1160. Nield, D.A., and Bejan, A., 1999, Convection in Porous Media, 2nd ed., Springer-Verlag, New York.
In statistics, the Kuder-Richardson Formula 20 (KR-20) is a measure of internal consistency reliability for measures with dichotomous choices, first published in 1937. It is analogous to Cronbach's α, except Cronbach's α is used for non-dichotomous (continuous) measures. [1] A high KR-20 coefficient (e.g., >0.90) indicates a homogeneous test. Values can range from 0.00 to 1.00 (sometimes expressed as 0 to 100), with high values indicating that the examination is likely to correlate with alternate forms (a desirable characteristic). The KR-20 is affected by difficulty, the spread in scores and the length of the examination. In the case when scores are not tau-equivalent (for example, when examination items are not homogeneous but rather of increasing difficulty), the KR-20 is an indication of the lower bound of internal consistency (reliability). $ \alpha={K\over{K-1}}\left[1-{\sum_{i=1}^K{p_{i}q_{i}}\over\sigma^{2}_{X}}\right] $ where $K$ is the number of items, $p_i$ is the proportion of correct responses to item $i$, and $q_i = 1 - p_i$. Note that the variance for KR-20 is $ \sigma^{2}_{X} = {\sum_{i=1}^N{(X_i-\bar{X})^{2}}\over{N}} $ where $N$ is the number of examinees and $X_i$ their total scores. If it is important to use unbiased estimators, then the Sum of Squares should be divided by the degrees of freedom ($N-1$) and the probabilities are multiplied by $ {N}\over{N-1} $. Since Cronbach's α was published in 1951, there has been no known advantage to KR-20 over it. KR-20 is seen as a special case of Cronbach's formula, which has the further advantage that it can handle both dichotomous and continuous variables. See also References ↑ Cortina, J.M., (1993). What Is Coefficient Alpha? An Examination of Theory and Applications. Journal of Applied Psychology, 78(1), 98-104.
Statistical analysis of multiple choice exams Quality of assessment chapter in Illinois State Assessment handbook (1995) This page uses Creative Commons Licensed content from Wikipedia (view authors).
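The KR-20 formula above is simple to compute directly; the following sketch (mine, not from the article; the function name and toy data are illustrative) takes item responses coded 0/1, one row per examinee, and uses the population variance of the total scores as in the formula:

```python
def kr20(responses):
    """KR-20 for a table of dichotomous item responses.

    responses: list of rows, one per examinee, each a list of 0/1
    item scores.  Uses the population variance of total scores."""
    n = len(responses)                  # number of examinees
    k = len(responses[0])               # number of items, K
    totals = [sum(row) for row in responses]
    mean = sum(totals) / n
    var = sum((t - mean) ** 2 for t in totals) / n     # sigma^2_X
    sum_pq = 0.0
    for i in range(k):
        p = sum(row[i] for row in responses) / n       # item difficulty p_i
        sum_pq += p * (1 - p)                          # p_i * q_i
    return (k / (k - 1)) * (1 - sum_pq / var)
```

For example, `kr20([[1, 1, 1], [1, 1, 0], [1, 0, 0], [0, 0, 0]])` (a perfect Guttman pattern over three items) evaluates to 0.75.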
Suppose I came to a programmer and asked him to write a function that returns the sum of numbers cubed up to a given number. That is, [math]sum\_cubes(n) = 1^3 + 2^3 + 3^3 + ... + n^3[/math] Being a Scheme programmer, his first solution might be something like this: (define (cube x) (* x x x)) (define (sum-cubes n) (if (= n 1) 1 (+ (cube n) (sum-cubes (- n 1))))) "What's sum-cubes(10)?", I ask. "3025," he responds, after waiting unnoticeably long. "What's sum-cubes(415)?", I ask. "7,451,142,400," he responds, again waiting unnoticeably long. "And sum-cubes(416)?" "Uh... Stack overflow?" He wrote it recursively, but he neglected to remember that deep non-tail recursion eventually exhausts the stack. "Hold on," he says. Meanwhile, programmers used to a C-like language are smirking, knowing exactly how to implement this thing in a for-loop that runs quite fast and doesn't ever have a stack overflow problem. "Okay, I'm back," says the Scheme programmer. (You see, Scheme doesn't have for or while loops.) (define (sum-cubes2 n) (define (sum-iter total n) (if (= n 0) total (sum-iter (+ total (cube n)) (- n 1)))) (sum-iter 0 n)) "And sum-cubes2(416) is 7,523,133,696. sum-cubes2(1000) is 250,500,250,000. I can do whatever you want!" "Okay, what's sum-cubes2(10 million)?" He types it in, and waits an annoyingly long period of time as his Core i7 processor cranks the loop (which is completely equivalent to a for or while loop, so don't get smug, other-language guys. There are no points taken off here for recursion; iteration is just a special case of recursion). "2500000500000025000000000000", he says after the time has passed. "And a billion?" "Look, I'm not going to waste my time calculating that!" And he shouldn't.
I laugh, and give him the following function: (define (square x) (* x x)) (define (sum-cubes3 n) (square (/ (* n (+ n 1)) 2))) (For those of you unable to grok Lisp well:) [math]sum\_cubes3(n) = \left(\frac{n * (n + 1)}{2} \right)^2 [/math] He calculates it for a few things, and sees that it produces the same output as his other functions for every n he tries. One billion isn't even a struggle, for this algorithm runs in constant time. He returns "250000000500000000250000000000000000" for one billion, but he just has to assume it's correct, since he's not going to check it against his other functions, which he knows will always give the right answer. "How do you know this works for any given input?" he asks. "Maybe it stops working after 200 million." "I can prove that it will work for any given input," I say. "How can you do that? You'd have to check it works for every positive number, and there are infinitely many positive numbers! Graham's Number is a positive integer, you know. Good luck checking your answer with a computer that can't even represent a number close to Graham's!" At this point it's time to introduce mathematical induction. Proving statements about infinitely many numbers, such as the positive integers, is what it was made for. The intuition pump is this: if you can prove that your statement, or proposition, P is true for a base case k, and that P being true for a number implies it is true for the next one, then the statement is true for every integer from k on. (Because having it true for k means k+1 is true, having it true for k+1 means k+2 is true, and so on, thus all numbers after k are true.) So let's prove our little equation up there. Induction is performed with two steps: the basis step and the inductive step. Definitions: Let P(n) be the statement we wish to prove is true for the set of all positive integers, namely: [math]1^3 + 2^3 + ... + n^3 = \left(\frac{n * (n + 1)}{2}\right)^2 [/math] Basis Step: Show that the base case, usually P(0) or P(1), is true.
(Note: you may have to show more than one base case, such as P(2) or P(3), and you don't have to start with 0 or 1 unless you're proving for all positive integers (which includes 1).) [math]\begin{eqnarray}P(1) &=& \left(\frac{1 * (1 + 1)}{2}\right)^2 = \left(\frac{2}{2}\right)^2 &=& 1^2 &=& 1 &=& 1^3\end{eqnarray}[/math] Check. Inductive Hypothesis: We are going to assume that P(k) is true. We already showed that P(k) is true when k = 1, so we're going to assume it's true for an arbitrary k and try to show that P(k+1) follows. (If P(k) implies P(k+1), then we don't have to check P(2), P(3), etc.) Inductive Step: [math]\begin{eqnarray} P(k) &=& 1 + 2^3 + 3^3 + ... + k^3 \\ &=& \left(\frac{k * (k + 1)}{2}\right)^2 \\ && is\ true\ by\ the\ inductive\ hypothesis,\ thus: \\ P(k + 1) &=& 1 + 2^3 + 3^3 + ... + k^3 + (k+1)^3 \\ &=& \left(\frac{ (k + 1) * (k + 2)}{2}\right)^2 \\ && Let's\ try\ to\ recover\ the\ right\ side\ with \\ && left\ side\ manipulations. \\ P(k + 1) &=& 1 + 2^3 + 3^3 + ... + k^3 + (k+1)^3 \\ &=& P(k) + (k+1)^3 \\ &=& \left(\frac{k * (k + 1)}{2}\right)^2 + (k+1)^3 \\ && Distribute\ the\ squaring: \\ &=& \frac{k^2 * (k + 1)^2}{4} + (k + 1)^3 \\ && Multiply\ the\ second\ term\ by\ \frac{4}{4}\ to \\ && combine\ fractions: \\ &=& \frac{k^2 * (k + 1)^2 + 4 * (k + 1)^3}{4} \\ && We\ can\ factor\ out\ a\ (k+1)^2\ and\ simplify: \\ &=& \frac{(k+1)^2 * (k^2 + 4 * (k + 1))}{4} \\ &=& \frac{(k+1)^2 * (k^2 + 4k + 4)}{4} \\ && Factor\ the\ polynomial: \\ &=& \frac{(k+1)^2 * (k + 2)^2}{4} \\ && Take\ out\ the\ common\ squaring: \\ &=& \left(\frac{(k+1) * (k + 2)}{2}\right)^2 \\ && QED \end{eqnarray} [/math] There's the right hand side. Pretty easy, no? A trickier problem is in proving that for any n >= 8, you can compose it as a sum of 3s and/or 5s. The base step for that is in showing P(8), P(9), and P(10) are true, then you can continue the inductive step. Another form of mathematical induction is Strong Induction, which is logically equivalent.
The base step is the same, but the inductive step must show this: [math][P(1)\ and\ P(2)\ and\ ...\ and\ P(k)] \to P(k+1).[/math] It is logically equivalent to normal induction, however, so it's simply a way to use induction that is sometimes more convenient with notation and such. (Such as in the last mentioned problem, or the part of the fundamental theorem of arithmetic that states that any integer greater than 1 is either prime or can be expressed as a product of primes.) How are they equivalent? Let P(n) be a statement we wish to prove with strong induction, and let Q(n) be the statement [P(1) and P(2) and ... and P(n)]. The base cases agree: Q(1) = P(1). Showing Q(k) implies Q(k+1) by normal induction is exactly showing that [P(1) and P(2) and ... and P(k)] implies P(k+1), which is the strong-induction step. Wikipedia's page on induction isn't bad, but I wouldn't call it good. Nevertheless, if you're further interested it's not a bad place to start. Posted on 2010-04-03 by Jach Permalink: https://www.thejach.com/view/id/85 Trackback URL: https://www.thejach.com/view/2010/4/mathematical_induction
The relationship between interpolation and separation properties of hypersurfaces in Bargmann–Fock spaces over $\mathbb{C}^{n}$ is not well understood except for $n=1$. We present four examples of smooth affine algebraic hypersurfaces that are not uniformly flat, and show that exactly two of them are interpolating. In the paper, the correspondence between a formal multiple power series and a special type of branched continued fraction, the so-called ‘multidimensional regular C-fractions with independent variables’, is analysed, providing an algorithm, based upon the classical algorithm, that enables us to compute, from the coefficients of the given formal multiple power series, the coefficients of the corresponding multidimensional regular C-fraction with independent variables. A few numerical experiments show, on the one hand, the efficiency of the proposed algorithm and, on the other, the power and feasibility of the method for numerically approximating certain multivariable functions from their formal multiple power series.
In this paper, we completely characterize the finite rank commutator and semi-commutator of two monomial-type Toeplitz operators on the Bergman space of certain weakly pseudoconvex domains. Somewhat surprisingly, there are not only plenty of commuting monomial-type Toeplitz operators but also non-trivial semi-commuting monomial-type Toeplitz operators. Our results are new even for the unit ball. The edge-of-the-wedge theorem in several complex variables gives the analytic continuation of functions defined on the poly upper half plane and the poly lower half plane, the set of points in $\mathbb{C}^{n}$ with all coordinates in the upper and lower half planes respectively, through a set in real space, $\mathbb{R}^{n}$. The geometry of the set in the real space can force the function to analytically continue within the boundary itself, which is qualified in our wedge-of-the-edge theorem. For example, if a function extends to the union of two cubes in $\mathbb{R}^{n}$ that are positively oriented with some small overlap, the function must analytically continue to a neighborhood of that overlap of a fixed size not depending on the size of the overlap. We present some fundamental properties of quasi-Reinhardt domains, in connection with Kobayashi hyperbolicity, minimal domains and representative domains. We also study proper holomorphic correspondences between quasi-Reinhardt domains. We investigate interesting connections between Mizohata type vector fields and microlocal regularity of nonlinear first-order PDEs, establishing results in Denjoy–Carleman classes and real analyticity results in the linear case. We prove the $L^{2}$ extension theorem for jets with optimal estimate following the method of Berndtsson–Lempert. For this purpose, following Demailly's construction, we consider Hermitian metrics on jet vector bundles.
We obtain explicit expressions for the genus 2 degenerate sigma-function in terms of the genus 1 sigma-function and elementary functions as solutions of a system of linear partial differential equations satisfied by the sigma-function. By way of application, we derive a solution for a class of generalized Jacobi inversion problems on elliptic curves, a family of Schrödinger-type operators on a line with common spectrum consisting of a point and two segments, and an explicit construction of a field of three-periodic meromorphic functions. Generators of a rank 3 lattice in $\mathbb{C}^{2}$ are given explicitly. The algebra of all Dirichlet series that are uniformly convergent in the half-plane of complex numbers with positive real part is investigated. When it is endowed with its natural locally convex topology, it is a non-nuclear Fréchet Schwartz space with basis. Moreover, it is a locally multiplicative algebra but not a Q-algebra. Composition operators on this space are also studied. Classically, Nevanlinna showed that functions from the complex upper half plane into itself which satisfy nice asymptotic conditions are parametrized by finite measures on the real line. Furthermore, the higher order asymptotic behaviour at infinity of a map from the complex upper half plane into itself is governed by the existence of moments of its representing measure, which was the key to his solution of the Hamburger moment problem. Agler and McCarthy showed that an analogue of the above correspondence holds between a Pick function f of two variables, an analytic function which maps the product of two upper half planes into the upper half plane, and moment-like quantities arising from an operator theoretic representation for f.
We apply their ‘moment’ theory to show that there is a fine hierarchy of levels of regularity at infinity for Pick functions in two variables, given by the Löwner classes and intermediate Löwner classes of order N, which can be exhibited in terms of certain formulae akin to the Julia quotient. Let $s\in \mathbb{R}$ and $0<p\leqslant \infty$. The fractional Fock–Sobolev spaces $F_{\mathscr{R}}^{s,p}$ are introduced through the fractional radial derivatives $\mathscr{R}^{s/2}$. We describe explicitly the reproducing kernels for the fractional Fock–Sobolev spaces $F_{\mathscr{R}}^{s,2}$ and then get the pointwise size estimate of the reproducing kernels. By using the estimate, we prove that the fractional Fock–Sobolev spaces $F_{\mathscr{R}}^{s,p}$ are identified with the weighted Fock spaces $F_{s}^{p}$ that do not involve derivatives. So, the study of the Fock–Sobolev spaces is reduced to that of the weighted Fock spaces. The discrepancy function measures the deviation of the empirical distribution of a point set in $[0,1]^{d}$ from the uniform distribution. In this paper, we study the classical discrepancy function with respect to the bounded mean oscillation and exponential Orlicz norms, as well as Sobolev, Besov and Triebel–Lizorkin norms with dominating mixed smoothness. We give sharp bounds for the discrepancy function under such norms with respect to infinite sequences. In this paper, we first give a description of the holomorphic automorphism group of a convex domain which is a simple case of the so-called generalised minimal ball. As an application, we show that any proper holomorphic self-mapping on this type of domain is biholomorphic. Let ${\mathcal{H}}ol(B_{d})$ denote the space of holomorphic functions on the unit ball $B_{d}$ of $\mathbb{C}^{d}$, $d\geq 1$. Given a log-convex strictly positive weight $w(r)$ on $[0,1)$, we construct a function $f\in {\mathcal{H}}ol(B_{d})$ such that the standard integral means $M_{p}(f,r)$ and $w(r)$ are equivalent for any $p$ with $0<p\leq \infty$.
We also obtain similar results related to volume integral means. We define the notion of $\Phi$-Carleson measures, where $\Phi$ is either a concave growth function or a convex growth function, and provide an equivalent definition. We then characterize $\Phi$-Carleson measures for Bergman–Orlicz spaces and use them to characterize multipliers between Bergman–Orlicz spaces. We use a generalised Nevanlinna counting function to compute the Hilbert–Schmidt norm of a composition operator on the Bergman space $L_{a}^{2}(\mathbb{D})$ and weighted Bergman spaces $L_{a}^{1}(\mathrm{d}A_{\alpha})$ when $\alpha$ is a nonnegative integer. Let $H^{2}$ be the Hardy space over the bidisk. It is known that Hilbert–Schmidt invariant subspaces of $H^{2}$ have nice properties. An invariant subspace which is unitarily equivalent to some invariant subspace whose continuous spectrum does not coincide with $\overline{\mathbb{D}}$ is Hilbert–Schmidt. We shall introduce the concept of splittingness for invariant subspaces and prove that they are Hilbert–Schmidt. We study the complete Kähler–Einstein metric of certain Hartogs domains ${\rm\Omega}_{s}$ over bounded homogeneous domains in $\mathbb{C}^{n}$. The generating function of the Kähler–Einstein metric satisfies a complex Monge–Ampère equation with Dirichlet boundary condition. We reduce the Monge–Ampère equation to an ordinary differential equation and solve it explicitly when we take the parameter $s$ for some critical value. This generalizes previous results when the base is either the Euclidean unit ball or a bounded symmetric domain. We prove that the extension problem from one-dimensional subvarieties with values in the Bergman space $H^{1}(D)$ on convex finite type domains can be solved by means of appropriate measures. We obtain also almost optimal results concerning the extension problem for other Bergman spaces and one-dimensional varieties.
I'm currently taking introduction to Calculus and I've been presented with this limit involving the greatest integer function (GIF): $$\lim_{x \to 2^-} \frac{\lfloor x \rfloor - 1}{\lfloor x \rfloor - x}$$ Now since $x \to 2^-$ I figured I could immediately evaluate the limits of the first terms of the numerator and denominator and replace each with 1. This makes the numerator 0 and the denominator will be $1 - x$. After evaluating the limit of the denominator I arrive at $\frac{0}{-1}$, which is basically 0. However, looking at the problem's answer key, it said that the limit should have been $-\infty$. I wasn't sure where I got it wrong so I thought that the key must be wrong for that item until I tried solving this next limit: $$\lim_{x \to 1^+} \frac{\lfloor x^2 \rfloor - \lfloor x \rfloor^2}{x^2 - 1}$$ Since $x \to 1^+$, I thought that $\lfloor x^2 \rfloor$ and $\lfloor x \rfloor$ would both resolve to 1, right? And that's what I did, but the limit of the entire function then becomes an indeterminate form of type $\frac{0}{0}$. Now I know that I should rewrite the function in order to get rid of the terms that would cause it to become $\frac{0}{0}$, and factoring the denominator gives me $(x + 1)(x - 1)$, which will become $(2)(0^+)$, but given that the numerator is 0, isn't that the same indeterminate form? The answer key says that the limit is 0. Am I missing something here? Is there a way to manipulate or rewrite these GIFs that simply hasn't been taught to us? EDIT: If you guys think that the question or the answer key must be wrong then please feel free to tell me :) I'm preparing for the exam tomorrow and I don't want to be bogged down thinking about these limits if they're wrong. I got most of the problems correct and it's only these two that have been weird for me; I've tried what I can but I'm just stumped. I'm currently googling ways to rewrite the GIF.
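Since $\lfloor x\rfloor$ is constant on each interval between consecutive integers, both quotients can be evaluated numerically just to the left of $2$ and just to the right of $1$. A quick sanity check in Python (my own sketch, not part of the original problem set):

```python
import math

def f(x):
    # (floor(x) - 1) / (floor(x) - x): for x in [1, 2), floor(x) = 1,
    # so the numerator is exactly 0 while the denominator tends to -1.
    return (math.floor(x) - 1) / (math.floor(x) - x)

def g(x):
    # (floor(x^2) - floor(x)^2) / (x^2 - 1): for x just above 1,
    # floor(x^2) = 1 = floor(x)^2, so the numerator is identically 0
    # (not merely tending to 0), and the quotient is 0 throughout.
    return (math.floor(x**2) - math.floor(x)**2) / (x**2 - 1)

print(f(1.9999))  # stays at 0 as x -> 2^-
print(g(1.0001))  # stays at 0 as x -> 1^+
```

Both values stay at 0 arbitrarily close to the limit points, which supports the key's second answer (0) and suggests the first key answer ($-\infty$) may indeed be a typo.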
The twisted cohomological equation over the geodesic flow
Department of Mathematics, Michigan State University, East Lansing, MI 48824, USA
We study the twisted cohomological equation over the geodesic flow on $SL(2, \mathbb{R})/\Gamma$. We characterize the obstructions to solving the twisted cohomological equation, construct smooth solutions and obtain tame Sobolev estimates for the solutions, i.e., there is finite loss of regularity (with respect to Sobolev norms) between the twisted coboundary and the solution. We also give tame splittings for non-homogeneous cohomological equations. The result can be viewed as a first step toward applying the KAM method to obtain differential rigidity for partially hyperbolic actions in products of rank-one groups in future works.
Keywords: Twisted geodesic flow, representation theory of $SL(2, \mathbb{R})$, orthogonal basis of $SL(2, \mathbb{R})$, twisted cohomological equation, Sobolev space.
Mathematics Subject Classification: 37A17, 37A20.
Citation: Zhenqi Jenny Wang. The twisted cohomological equation over the geodesic flow. Discrete & Continuous Dynamical Systems - A, 2019, 39 (7): 3923-3940. doi: 10.3934/dcds.2019158
Inertia In power systems engineering, "inertia" is a concept that typically refers to rotational inertia or rotational kinetic energy. For synchronous systems that run at some nominal frequency (e.g. 50 Hz or 60 Hz), inertia is the energy that is stored in the rotating masses of equipment electro-mechanically coupled to the system, e.g. generator rotors, flywheels, turbine shafts. Derivation Below is a basic derivation of power system rotational inertia from first principles, starting from the basics of circle geometry and ending at the definition of moment of inertia (and its relationship to kinetic energy). The length of a circle arc is given by: [math] L = \theta r [/math] where [math]L[/math] is the length of the arc (m), [math]\theta[/math] is the angle of the arc (radians) and [math]r[/math] is the radius of the circle (m). A point on the rim of a cylindrical body rotating about the axis of its centre of mass therefore has a tangential velocity of: [math] v = \frac{\theta r}{t} [/math] where [math]v[/math] is the tangential velocity (m/s) and [math]t[/math] is the time it takes for the point to travel the arc length L (s). Alternatively, this velocity can be expressed as: [math] v = \omega r [/math] where [math]\omega = \frac{\theta}{t} = \frac{2 \pi \times n}{60}[/math] is the angular velocity (rad/s) and [math]n[/math] is the speed in revolutions per minute (rpm). The kinetic energy of a rotating mass can be derived from the classical Newtonian expression for the kinetic energy of rigid bodies: [math] KE = \frac{1}{2} mv^{2} = \frac{1}{2} m(\omega r)^{2}[/math] where [math]KE[/math] is the rotational kinetic energy (in Joules, i.e. kg.m^2/s^2; often quoted in MW.s) and [math]m[/math] is the mass of the rotating body (kg). Alternatively, rotational kinetic energy can be expressed as: [math] KE = \frac{1}{2} J\omega^{2} [/math] where [math]J = mr^{2}[/math] is called the moment of inertia (kg.m^2).
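As a worked example of the formulas above (the machine figures are hypothetical, chosen only for illustration):

```python
import math

def angular_velocity(n_rpm):
    # w = 2 * pi * n / 60, in rad/s, from speed in rpm
    return 2 * math.pi * n_rpm / 60

def kinetic_energy(J, n_rpm):
    # KE = 0.5 * J * w^2, in Joules
    w = angular_velocity(n_rpm)
    return 0.5 * J * w**2

# Hypothetical machine: J = 9000 kg.m^2 spinning at 3000 rpm
# (a 2-pole machine on a 50 Hz system)
ke = kinetic_energy(9000, 3000)
print(f"{ke / 1e6:.1f} MW.s of stored rotational energy")
```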
Notes about the moment of inertia: In physics, the moment of inertia [math]J[/math] is normally denoted as [math]I[/math]. In electrical engineering, the convention is for the letter "i" to always be reserved for current, and it is therefore often replaced by the letter "j", e.g. the complex number operator i in mathematics is j in electrical engineering. Moment of inertia is also referred to as [math]WR^{2}[/math] or [math]WK^{2}[/math], where [math]WK^{2} = \frac{1}{2} WR^{2}[/math]. WR^2 literally stands for weight x radius squared, and is often used with imperial units of lb.ft^2 or slug.ft^2. Conversion factors: 1 lb.ft^2 = 0.04214 kg.m^2 and 1 slug.ft^2 = 1.356 kg.m^2. Normalised Inertia Constants The moment of inertia can be expressed as a normalised quantity called the inertia constant H, calculated as the ratio of the rotational kinetic energy of the machine at nominal speed to its rated apparent power (VA): [math]H = \frac{1}{2} \frac{J \omega_0^{2}}{S_{b}}[/math] where [math]H[/math] is the inertia constant (s), [math]\omega_{0} = 2 \pi \times \frac{n}{60}[/math] is the nominal mechanical angular frequency (rad/s), [math]n[/math] is the nominal speed of the machine (revolutions per minute) and [math]S_{b}[/math] is the rated apparent power of the machine (VA). Generator Inertia The moment of inertia for a generator is dependent on its mass and apparent radius, which in turn is largely driven by its prime mover type.
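The inertia constant definition can be turned into a one-line calculation; the moment of inertia and rating below are illustrative values, not data for any real machine:

```python
import math

def inertia_constant(J, n_rpm, S_rated):
    # H = 0.5 * J * w0^2 / S_b, in seconds: stored kinetic energy at
    # nominal speed divided by the machine's rated apparent power
    w0 = 2 * math.pi * n_rpm / 60  # nominal angular frequency (rad/s)
    return 0.5 * J * w0**2 / S_rated

# Hypothetical 100 MVA generator at 3000 rpm with J = 9000 kg.m^2
H = inertia_constant(J=9000, n_rpm=3000, S_rated=100e6)
print(f"H = {H:.2f} s")
```

The result (about 4.4 s) can be read as the number of seconds the machine could supply its full rating from stored kinetic energy alone.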
Based on actual generator data, the normalised inertia constants for different types and sizes of generators are summarised in the table below:

Machine type      | Samples | MVA rating (min / median / max) | Inertia constant H, s (min / median / max)
Steam turbine     | 45      | 28.6 / 389 / 904                | 2.1 / 3.2 / 5.7
Gas turbine       | 47      | 22.5 / 99.5 / 588               | 1.9 / 5.0 / 8.9
Hydro turbine     | 22      | 13.3 / 46.8 / 312.5             | 2.4 / 3.7 / 6.8
Combustion engine | 26      | 0.3 / 1.25 / 2.5                | 0.6 / 0.95 / 1.6

Relationship between Inertia and Frequency Inertia is the stored kinetic energy in the rotating masses coupled to the power system. Whenever there is a mismatch between generation and demand (either a deficit or excess of energy), the difference is made up from the system inertia. For example, suppose a generator suddenly disconnects from the network. In that instant, the equilibrium between generation and demand is broken and demand exceeds generation. Because energy must be conserved, the instantaneous deficit in energy is supplied by the system inertia. However, the kinetic energy in the rotating masses is finite, and as it is drawn down to supply demand, the rotating masses begin to slow down. In aggregate, the speed of rotation of these masses is roughly proportional to the system frequency, and so the frequency begins to fall. New generation must be added to the system to re-establish the equilibrium between generation and demand and restore system frequency, i.e. to put enough kinetic energy back into the rotating masses that they rotate at a speed corresponding to nominal frequency (50/60 Hz). The figure to the right illustrates this concept by way of a tank of water, where system demand is the flow of water out of the tap at the bottom and generation is a hose that tops up the water in the tank (here the system operator manages the tap, which determines how much water comes out of the hose). The system frequency is the water level and the inertia is the volume of water in the tank.
This analogy is instructive because it is easy to visualise that if the system inertia were very large, then the volume of water (and the tank itself) would also be very large. A deficit of generation would still cause the system frequency to fall, but at a slower rate than if the system inertia were small. Likewise, excess generation would fill up the tank and cause the frequency to rise, but at a slower rate when inertia is large. System inertia is therefore related to the rate at which frequency rises or falls whenever there is a mismatch between generation and load. The standard industry term for this rate is the Rate of Change of Frequency (RoCoF). Figure 4 shows the system frequency response to a generator trip at different levels of system inertia. It can be seen that the rate of frequency decline increases as the system inertia is decreased. Furthermore, the minimum frequency that the system falls to (called the frequency nadir) is also lower as system inertia is decreased.
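The qualitative link between inertia and RoCoF is commonly quantified with the aggregate swing equation, df/dt = dP x f0 / (2 x Ek), where Ek is the total stored kinetic energy of the system. A rough sketch with made-up figures (this is a textbook approximation of the initial RoCoF, not a full dynamic model):

```python
def rocof(power_imbalance_mw, f_nominal_hz, kinetic_energy_mws):
    # Initial rate of change of frequency (Hz/s) from the aggregate
    # swing equation: df/dt = dP * f0 / (2 * Ek), with Ek = H_sys * S_sys
    return power_imbalance_mw * f_nominal_hz / (2 * kinetic_energy_mws)

# A 500 MW generator trips on a 50 Hz system storing 20,000 MW.s:
print(rocof(500, 50, 20000))  # 0.625 Hz/s initial decline

# Doubling the stored energy halves the initial RoCoF, as the
# water-tank analogy suggests:
print(rocof(500, 50, 40000))  # 0.3125 Hz/s
```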
I'm reading a proof of the multivariate CLT using the Lindeberg theorem. Let $X_n = (X_{n1},\dots,X_{nk})$ be independent random vectors all having the same distribution. Suppose that $E[X_{nu}^2]<\infty$; let the vector of means be $c=(c_1,\dots,c_k)$, where $c_u=E[X_{nu}]$, and let the covariance matrix be $\Sigma = [\sigma_{uv}]$, where $\sigma_{uv}=E[(X_{nu} - c_u)(X_{nv} - c_v)]$. Put $S_n=X_{1}+\cdots+X_{n}$. Under these assumptions, the distribution of the random vector $(S_n - nc)/\sqrt{n}$ converges weakly to the centered normal distribution with covariance matrix $\Sigma$. The proof is as follows: Let $Y =(Y_1,\dots,Y_{k})$ be a normally distributed random vector with $0$ means and covariance matrix $\Sigma.$ For given $t=(t_1,\dots,t_k)$ let $Z_n=\displaystyle\sum_{u=1}^{k}t_u(X_{nu}-c_{u})$ and $Z=\displaystyle\sum_{u=1}^{k}t_uY_u.$ Then it suffices to prove that $n^{-1/2}\displaystyle\sum_{j=1}^{n}Z_j$ converges in distribution to $Z$ (for arbitrary $t$). But this is an immediate consequence of the Lindeberg–Lévy theorem. I'm stuck following this proof. I'm not sure if the Lindeberg condition is satisfied, i.e. $$\displaystyle\lim_{n\rightarrow\infty}\displaystyle\sum_{k=1}^{n}\frac{1}{s_{n}^{2}}\int_{\{|Z_{k}|>\epsilon s_{n}\}}|Z_{k}|^2 \, dP=0.$$ My idea is that $\{|Z_{k}|>\epsilon s_{n}\}$ decreases to $\emptyset$; that is why the integral converges to $0$, but what about the convergence or divergence of $s_{n}$ and the sum that tends to infinity? Any kind of help is appreciated in advance.
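One point that may help: the variance of the projected variable is $\operatorname{Var}(Z)=\sum_{u,v}t_u t_v\sigma_{uv}=t^{\mathsf{T}}\Sigma t$, which is exactly the variance the one-dimensional limit must have for the Cramér–Wold argument to recover the covariance matrix $\Sigma$. A small numeric check of that identity (the example matrix and vector are mine):

```python
def quadratic_form(t, sigma):
    # Var(sum_u t_u * (X_u - c_u)) = t' * Sigma * t for covariance Sigma
    k = len(t)
    return sum(t[u] * sigma[u][v] * t[v] for u in range(k) for v in range(k))

# Example: Sigma = [[2, 1], [1, 3]], t = (1, 2):
# t' Sigma t = 1*2*1 + 1*1*2 + 2*1*1 + 2*3*2 = 18
print(quadratic_form([1, 2], [[2, 1], [1, 3]]))  # 18
```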
Computer Science > Data Structures and Algorithms Title: Efficient average-case population recovery in the presence of insertions and deletions (Submitted on 12 Jul 2019) Abstract: Several recent works have considered the \emph{trace reconstruction problem}, in which an unknown source string $x\in\{0,1\}^n$ is transmitted through a probabilistic channel which may randomly delete coordinates or insert random bits, resulting in a \emph{trace} of $x$. The goal is to reconstruct the original string~$x$ from independent traces of $x$. While the best algorithms known for worst-case strings use $\exp(O(n^{1/3}))$ traces \cite{DOS17,NazarovPeres17}, highly efficient algorithms are known \cite{PZ17,HPP18} for the \emph{average-case} version, in which $x$ is uniformly random. We consider a generalization of this average-case trace reconstruction problem, which we call \emph{average-case population recovery in the presence of insertions and deletions}. In this problem, there is an unknown distribution $\cal{D}$ over $s$ unknown source strings $x^1,\dots,x^s \in \{0,1\}^n$, and each sample is independently generated by drawing some $x^i$ from $\cal{D}$ and returning an independent trace of $x^i$. Building on \cite{PZ17} and \cite{HPP18}, we give an efficient algorithm for this problem. For any support size $s \leq \smash{\exp(\Theta(n^{1/3}))}$, for a $1-o(1)$ fraction of all $s$-element support sets $\{x^1,\dots,x^s\} \subset \{0,1\}^n$, for every distribution $\cal{D}$ supported on $\{x^1,\dots,x^s\}$, our algorithm efficiently recovers ${\cal D}$ up to total variation distance $\epsilon$ with high probability, given access to independent traces of independent draws from $\cal{D}$.
The algorithm runs in time poly$(n,s,1/\epsilon)$ and its sample complexity is poly$(s,1/\epsilon,\exp(\log^{1/3}n))$. This polynomial dependence on the support size $s$ is in sharp contrast with the \emph{worst-case} version (when $x^1,\dots,x^s$ may be any strings in $\{0,1\}^n$), in which the sample complexity of the most efficient known algorithm \cite{BCFSS19} is doubly exponential in $s$. Submission history: From: Sandip Sinha [view email] [v1] Fri, 12 Jul 2019 21:39:43 GMT (35kb)
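To make the channel model concrete, here is a minimal sketch of generating traces through a pure deletion channel (my own illustration of the problem setup from the abstract, not the authors' algorithm, and ignoring insertions):

```python
import random

def trace(x, deletion_prob, rng=None):
    # One trace of the bit string x through a deletion channel: each
    # coordinate is independently deleted with probability deletion_prob
    rng = rng or random.Random()
    return "".join(bit for bit in x if rng.random() >= deletion_prob)

rng = random.Random(0)  # seeded for reproducibility
x = "1011001110001011"
for _ in range(3):
    print(trace(x, 0.25, rng))  # each trace is a random subsequence of x

# Edge cases: nothing deleted, everything deleted
assert trace(x, 0.0) == x
assert trace(x, 1.0) == ""
```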
Is there such a thing as a "trilinear inner product"? The definition of an inner product is: Let $H$ be a vector space over $\mathbb{K}\in \{\mathbb{R,C}\}$. An inner product is a map $\langle \cdot|\cdot\rangle: H^2 \to \mathbb{K}$ such that for all $x,y,z \in H$ and $\lambda \in \mathbb{K}$ the following properties hold: Linearity: $\langle x+\lambda y | z\rangle = \langle x|z\rangle + \lambda \langle y|z\rangle$ Complex conjugacy: $\overline{\langle y | x \rangle} = \langle x | y \rangle$ Positive definiteness: $||x||^2:=\langle x | x \rangle > 0$ if $x \neq 0$ Can this be modified to give a trilinear map $\langle \cdot |\cdot| \cdot \rangle : H^3 \to \mathbb{K}$? Would it for example be possible to make $L^3$ into a "trilinear inner product space" like $L^2$ is a "bilinear inner product space"? What is so special about the number $2$ in this context? Of course $2$ is the only number that is Hölder conjugate to itself, in the sense that $\frac{1}{2}+\frac{1}{2}=1$, so there would be no nice identification of this trilinear inner product space with its dual. I guess that there is no useful notion because the complex conjugacy can't be modified to get a trilinear inner product: $\mathbb{C}$ is a field extension of degree $2$ of $\mathbb{R}$, but there is no field extension of degree $3$ of the reals this inner product could be defined over. What if the "trilinear space" is solely defined over $\mathbb{R}$?
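To see the obstruction concretely, take the naive candidate $\langle x|y|z\rangle=\sum_i x_i y_i z_i$ on $\mathbb{R}^d$ (a hypothetical trilinear map, not a standard definition). It is linear in each slot and fully symmetric, but $\langle x|x|x\rangle$ is an odd function of $x$, so the positive-definiteness axiom has no analogue:

```python
def trilinear(x, y, z):
    # Symmetric trilinear form on R^d: sum_i x_i * y_i * z_i.
    # Linear in each argument, but T(x, x, x) flips sign under x -> -x,
    # so it cannot define a (squared) norm.
    return sum(a * b * c for a, b, c in zip(x, y, z))

x = [1.0, 2.0]
print(trilinear(x, x, x))        # 9.0 (= 1^3 + 2^3)
neg = [-a for a in x]
print(trilinear(neg, neg, neg))  # -9.0: not positive definite
```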
4:28 AM @MartinSleziak Here I am! Thank you for opening this chat room and for all your comments on my post, Martin. They are really good feedback for this project. @MartinSleziak Yeah, using a chat room to exchange ideas and feedback makes a lot of sense compared to leaving comments in my post. BTW, if anyone finds a \oint\frac{1}{1-z^2}dz expression in old posts, send it to me and I will investigate why this issue occurs. @MartinSleziak It is OK, don't feel bad. As long as there is a place that comes to people's mind when they want to report some issue with Approach0, I am willing to come to that place and discuss. I am really interested in pushing Approach0 forward. 4:57 AM Hi @WeiZhong, thanks for joining the room. I will write a bit more here when I have more time. For now, two minor things. I just want to make sure that you know that the answer on meta is community wiki, which means that various users are invited to edit it; you can see from the revision history who added what to the question. You can see in the revision history that this bullet point was added by Workaholic: "I searched for \oint $\oint$, but I only got results related to \int $\int$. I tried for \oint \frac{dz}{1-z^2} $\oint \frac{dz}{1-z^2}$ which is an integral that appears quite often but it did not yield any correct results." So if you want to make sure that this user is notified about your comments, you can simply add @Workaholic. Any of the editors can be pinged. And I noticed also this about one of the quizzes (I did not check whether some of the other quizzes have a similar problem.) I suppose that the quizzes are supposed to be chosen in such a way that Approach0 indeed helps to find the question. I.e., each quiz was created with some specific question in mind, which should be among the search results. Is that correct? I guess the quiz saying "Please list all positive integers $i,j,k$ such that $i^5 + j^6 = k^7$."
was made with this question in mind: Find all positive integers satisfying: $x^5+y^6=z^7$. However, when I try the query from this quiz, I get completely different results. I vaguely recall that I tried some quizzes, including this one, and they worked. (By which I mean that the answer to the question from the quiz could be found among the search results.) So is this perhaps due to some changes that were made since then? Or is it simply because when I tried the quiz last time, fewer questions were indexed? (And now that question is still somewhere among the results, but further down.) I was wondering whether to add the word to my last message, but it is probably not a bug. It is simply that the search results are not exactly as I would expect. My impression from the search results is that not only are x, y, z replaced by various variables, but also 5, 6, 7 are replaced by various numbers. 5:40 AM I think that this implicitly contains the question whether, when searching for $x^5+y^6=z^7$, the questions containing $x^2+y^2=z^2$ or $a^3+b^3=c^3$ should also be matches. For the sake of completeness I will copy here the part of the quiz list which is relevant to the quiz I mentioned above: "Q": "Please list all positive integers [imath]i,j,k[/imath] such that [imath]i^5 + j^6 = k^7[/imath]. ", Hmm, I should have posted this as a single multiline message. But now I see that it is already too late to delete the above messages. Sorry for the duplication: { /* 4 */ "Q": "Please list all positive integers [imath]i,j,k[/imath] such that [imath]i^5 + j^6 = k^7[/imath]. ", "hints": [ "This should be easy, the only thing I need to do is do some calculation...", "I can use my computer to enumerate...", "... (10 minutes after) ...", "OK, I give up. Why borther list them <b>all</b>?", "Is that possible to <a href=\"#\">search it</a> on Internet?"
], "search": "all positive integers, $i^5 + j^6 = k^7$" }, 8 hours later… 1:19 PM @MartinSleziak OK, I get it. So next time I will definitely reply to whoever actually made the revision. @MartinSleziak Yes, remember the first time we talked in a chat room? At that version of Approach0, when only a very limited number of posts had been indexed, you could actually get relevant posts on $i^5+j^6=k^7$. However, since I enlarged the index (now almost the entire MSE), that quiz (in fact, some quiz I selected earlier, like [this one]()) does not find relevant posts anymore. I had noticed that the "quiz" does not work, but I was really lazy and had not investigated it. Instead of changing that quiz, I agree it is better to investigate why that relevant result has gone. As far as I can guess, there can be two reasons: 1) the crawler missed that one (I did the crawling in China, the network condition is not always good; sometimes the crawler fails to fetch random posts and has to skip them) 2) there is a bug in Approach0 that I am not aware of In order to investigate this problem, I am trying to find the original post that you and I have seen (as you vaguely remember) which is relevant to the $i^5+j^6=k^7$ quiz; if you find that post, please send me the URL.
@MartinSleziak It can be a bug, but I need to know if my index does contain a relevant post, so first let us find the post we think is relevant. Then I will have a look at whether or not it is in my index; perhaps the crawler just missed that one. If it is in our index currently, then I should spend some time to find out the reason. @MartinSleziak As for your last question, I need to explain it a little more. Approach0 will first find expressions that are structurally relevant to the query. So $x^5+y^6=z^7$ will get you $x^2+y^2=z^2$ or $a^3+b^3=c^3$, because they (more specifically, their operator tree representations) are considered structurally identical. After filtering out these structurally relevant expressions, Approach0 will evaluate their symbolic relevance degree with regard to the query expression. Suppose $x^5+y^6=z^7$ gives you $x^2+y^2=z^2$, $a^3+b^3=c^3$ and also $x^5+y^6=z^7$; the expression $x^5+y^6=z^7$ will be ranked higher than $x^2+y^2=z^2$ and $a^3+b^3=c^3$, because $x^5+y^6=z^7$ has a higher symbolic score (in fact, since it has an identical symbol set to the query, it has the highest possible symbolic score). I am sorry, I should use "and" instead of "or". Let me repeat the message before the previous one below: As for your last question, I need to explain it a little more. Approach0 will first find expressions that are structurally relevant to the query. So $x^5+y^6=z^7$ will get you both $x^2+y^2=z^2$ and $a^3+b^3=c^3$, because they (more specifically, their operator tree representations) are considered structurally identical. Now the next thing for me to do is to investigate some "missing results" suggested by you. 1. Try to find a `\oint` expression in an old post (by old I mean at least 5 weeks old, so that it could possibly have been indexed)
2:23 PM Unfortunately, I failed to find any relevant old post in either case 1 or case 2 after a few tries (using MSE default search). So the only thing I can do now is an "integrated test" (see the new code I have just pushed to GitHub: github.com/approach0/search-engine/commit/…). An "integrated test" means I make a minimal index with a few specified math expressions, search a specified query, and see if the results are as expected. For example, the test case tests/cases/math-rank/oint.txt specifies the query $\oint \frac{dz}{1-z^2}$, and the entire index has just two expressions: $\oint \frac{dz}{1-z^2}$ and $\oint \frac{dx}{1-x^2}$; the expected search result is that both of these expressions are hits (i.e. they should appear in the search results). 10 hours ago, by Martin Sleziak I guess the quiz saying "Please list all positive integers $i,j,k$ such that $i^5 + j^6 = k^7$." was made with this question in mind: Find all positive integers satisfying: $x^5+y^6=z^7$. 2:39 PM For anyone interested, I post the screenshot of the integrated test results here: imgur.com/a/xYBD5 3:04 PM For example like this: chat.stackexchange.com/transcript/message/32711761#32711761 You get the link by clicking on the little arrow next to the message and then clicking on "permalink". I am mentioning this because (hypothetically) if Workaholic only sees your comment a few days later and then comes here to see the message you refer to, they might have problems finding it if there are plenty of newer messages. However, this room does not have that much traffic, so very likely this is not going to be a problem in this specific case. Another possible way to link to a specific set of messages is to go to the transcript and then choose a specific day, like this: chat.stackexchange.com/transcript/46148/2016/10/1 Or to bookmark a conversation.
This can be done from the room menu on the right. This question on meta.SE even has some pictures. This is also briefly mentioned in the chat help: chat.stackexchange.com/faq#permalink 3:25 PM @MartinSleziak Good to learn this. I just posted another comment with a permalink in that meta post for Workaholic to refer to. I just checked the index on the server: yes, that post is indeed indexed. (for my own reference, docID = 249331) 2 hours later… 5:13 PM Update: I have fixed that quiz problem. See: approach0.xyz/search/… That is not strictly a bug; it is because I put a restriction on the number of documents to be searched in one posting list (not trying to be very technical). I have pushed my new code to GitHub (see commit github.com/approach0/search-engine/commit/…); this change gets rid of that restriction and now that relevant post is shown as the 2nd search result. 2 hours later… 6:57 PM
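The structural-versus-symbolic matching discussed in this conversation can be sketched with a toy operator tree (a deliberately simplified illustration, not Approach0's actual index representation):

```python
def skeleton(expr):
    # Reduce a tiny operator-tree expression to its structural skeleton:
    # operators are kept, variable/number leaves become the placeholder "_"
    if isinstance(expr, tuple):        # (operator, child, child, ...)
        op, *children = expr
        return (op,) + tuple(skeleton(c) for c in children)
    return "_"                         # leaf: a variable or a number

# x^5 + y^6 = z^7   versus   a^3 + b^3 = c^3
q1 = ("eq", ("add", ("pow", "x", 5), ("pow", "y", 6)), ("pow", "z", 7))
q2 = ("eq", ("add", ("pow", "a", 3), ("pow", "b", 3)), ("pow", "c", 3))

# Structurally identical, so both are candidate hits; a symbolic score
# would then rank an exact symbol match such as x^5 + y^6 = z^7 highest.
print(skeleton(q1) == skeleton(q2))  # True
```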
Published in 2018 by Cambridge University Press, this book surveys many famous problems in the geometry of finite point sets in the plane, unifying them under the framework of properties that depend only on how triples of points are oriented and that behave monotonically as points are removed, and covering both mathematical and computational aspects of the subject. It is aimed at a range of readers from advanced undergraduates in mathematics and computer science to research professionals. Darren Glass, MAA Reviews, July 2018: “There is a lot to like about this book, as Eppstein does a good job of introducing the material to his readers. I will note that I think many mathematicians will find certain aspects of Eppstein’s writing style and notational conventions to be different from what we are used to, and while I found his choice of topics interesting I know that the book made me think of many more questions that he did not address. A reader who sticks with Eppstein will learn a lot about this exciting area that lies on the border of mathematics and computer science.” László Szabó, Mathematical Reviews, March 2019: “The book is a great read. It is a valuable addition to the library of any discrete or computational geometer. Moreover, it can also serve as an excellent textbook for an introductory course on point configurations.” Open Problem 3.14 (the maximum number of halving partitions) should have been credited to the paper that introduced this problem: Erdős, Paul, Lovász, László, Simmons, A., and Straus, Ernst G. 1973. Dissection graphs of planar point sets. Pages 139–149 of: A survey of combinatorial theory (Proc. Internat. Sympos., Colorado State Univ., Fort Collins, Colo., 1971). Amsterdam: North-Holland. Section 6.5 (property testing) should have credited the following paper, where the model of property testing used here originates: Czumaj, Artur, Sohler, Christian, and Ziegler, Martin. 2000. Property testing in computational geometry.
Pages 155–166 of: 8th European Symposium on Algorithms (ESA 2000). Lecture Notes in Computer Science, vol. 1879, Berlin: Springer. Theorem 7.3 states that, under the exponential time hypothesis, no algorithm with running time $n^{o(\sqrt{k})}$ can test whether a configuration of size $k$ is a subconfiguration of an input of size $n$. The proof uses convex embedding to reduce from finding cliques in graphs. The conclusion can be strengthened to running time $n^{o(k/\log k)}$ using a similar reduction from labeled subgraph isomorphism (this problem is described e.g. in Eppstein and Lokshtanov 2018). Open problem 7.6 (the existence of a finite set of obstacles such that finding a $k$-point set avoiding the obstacles is not fixed-parameter tractable) has been solved by the following paper: Eppstein, David, and Lokshtanov, Daniel. 2018. The parameterized complexity of finding point sets with hereditary properties. Pages 11:1–11:14 of: 13th International Symposium on Parameterized and Exact Computation (IPEC 2018). Leibniz International Proceedings in Informatics, vol. 115, Dagstuhl: Leibniz-Zentrum für Informatik. It shows that under standard complexity-theoretic assumptions there is no FPT algorithm for this general problem: for the three obstacles below, this problem is $\mathsf{W}[1]$-hard, and no algorithm can solve the problem in time $n^{o(k/\log k)}$ unless the exponential time hypothesis fails. Theorem 9.3 states in part that finding the largest general-position subset is $\mathsf{NP}$-hard and $\mathsf{APX}$-hard, and Theorem 9.5 states that it is fixed-parameter tractable in the size of the subset. The same results are proven in: Froese, Vincent, Kanj, Iyad, Nichterlein, André, and Niedermeier, Rolf. 2017. Finding points in general position. Int. J. Comput. Geom. Appl. 27(4), 277–296. Theorem 9.13 of Payne and Wood implies that every $n$-point set has a subset of size $\Omega(\sqrt{n/\log n})$ that is either collinear or in general position.
A new result of Hajnal and Szemerédi improves this to $\Omega(\sqrt{n\log\log n/\log n})$: Hajnal, Péter, and Szemerédi, Endre. 2018. Two geometrical applications of the semi-random method. Pages 188–199 of: New Trends in Intuitive Geometry. Bolyai Society Mathematical Studies, vol. 27, Berlin: Springer. Theorem 10.12 on point sets with no four in line and no general-position subset larger than $O(n^{5/6+\epsilon})$ is credited to a 2017 preprint by Balogh and Solymosi. Their paper has now been published. It is: Balogh, Jozsef, and Solymosi, Jozsef. 2018. On the number of points in general position in the plane. Discrete Analysis 2018(16), 20pp. Observation 11.7 gives asymptotic bounds on the largest convex polygon in a grid, and figure 11.3 gives an example of the largest polygon in a $5\times 5$ grid. More precise asymptotic bounds for the square grid were given by Acketa, Dragan M., and Žunić, Joviša D. 1995. On the maximal number of edges of convex digital polygons included into an $m\times m$-grid. J. Combin. Theory Ser. A 69(2), 358–368. Exact values were determined by Kisman, Derek, Guy, Richard, and Fink, Alex. 2009. Patulous pegboard polygons. Pages 59–68 of: Mathematical Wizardry for a Gardner. Wellesley, MA: A K Peters. The following paper considers similar problems for other shapes than squares: Bárány, Imre, and Prodromou, Maria. 2006. On maximal convex lattice polygons inscribed in a plane convex set. Israel J. Math. 154, 337–360. See also my blog post “big convex polygons in grids” for more on this topic. Open problem 11.10 (the sample size for property testing convex position) was already answered by Czumaj, Sohler, and Ziegler (2000). The optimal sample size is $\Theta(n^{2/3})$. One proof of this (not the one they give) is to consider any point $p$ of high Tukey depth, observe that logarithmically many samples are enough to have high probability of enclosing $p$, and consider the nonconvex quadruples of points formed by $p$ and three given points.
If linearly many quadruples are disjoint (except for $p$) then one is likely to be found by the remaining sample points; otherwise, deleting all of the points in a maximal disjoint family of quadruples shows that the input is near-convex.

Footnotes 6 and 7 of Section 13.4 give two references for sets of seven points in the plane, all at integer distances, with no three on a line and no four on a circle. Many more such sets with the additional property that all coordinates are integers (including the set with the smallest possible diameter) were found by:

Kurz, Sascha, Noll, Landon Curt, Rathbun, Randall, and Simmons, Chuck. 2014. Constructing 7-clusters. Serdica J. Comput. 8(1), 47–70.

Their paper provides several additional references to earlier work on this problem.

Footnote 4 at the start of Section 16.3 lists three papers on linear lower bounds for universal point sets. A new fourth paper improves them, but the bound is still linear:

Scheucher, Manfred, Schrezenmaier, Hendrik, and Steiner, Raphael. 2019. A note on universal point sets for planar graphs. 27th International Symposium on Graph Drawing and Network Visualization.

The proof of Theorem 16.13 (universal point sets cannot be covered by few lines) used the fact that certain graphs, the Apollonian networks, cannot be drawn with their points on a bounded number of lines. This was already known for a different class of planar graphs, the maximal planar graphs of small dual shortness exponent. See the following two papers:

Ravsky, Alexander, and Verbitsky, Oleg. 2011. On collinear sets in straight-line drawings. Pages 295–306 of: 37th International Workshop on Graph-Theoretic Concepts in Computer Science (WG 2011). Lecture Notes in Computer Science, vol. 6986, Berlin: Springer.

Chaplick, Steven, Fleszar, Krzysztof, Lipp, Fabian, Verbitsky, Oleg, and Wolff, Alexander. 2016. Drawing graphs on few lines and few planes. Pages 166–180 of: 24th International Symposium on Graph Drawing and Network Visualization.
Lecture Notes in Computer Science, vol. 9801, Berlin: Springer.

For recent developments on drawing planar graphs on few lines, see:

Eppstein, David. 2019. Cubic planar graphs that cannot be drawn on few lines. Pages 32:1–32:15 of: 35th International Symposium on Computational Geometry. Leibniz International Proceedings in Informatics (LIPIcs), vol. 129. Dagstuhl, Germany: Leibniz-Zentrum für Informatik.

Felsner, Stefan. 2019. 4-connected triangulations on few lines. 27th International Symposium on Graph Drawing and Network Visualization.

Biedl, Therese, Felsner, Stefan, Meijer, Henk, and Wolff, Alexander. 2019. Line and plane cover numbers revisited. 27th International Symposium on Graph Drawing and Network Visualization.

In Section 18.2, “size” is inappropriately bold.

In the references and index, the paper “Point sets with many $k$-sets” (Discrete Comput. Geom. 2001) is credited to Gabor Tóth. The correct author of the paper is Geza Tóth.

David Eppstein – 0xDE – @11011110
here. We'll start with the last example, Cartesian Joins. Recall the definition of a Cartesian Product: [math]X\times Y = \{\,(x,y)\mid x\in X \ \text{and} \ y\in Y\,\}.[/math] See Full Post and Comments

I wrote about how I prefer Python's map() to using its List Comprehensions feature, even if list comprehensions look and feel more Pythonic. The main reason is that by using map, it is simple to extend functional code to multiple cores or machines without changing the original, just by writing a clever version of the map function. Clojure has such a clever version built-in, called pmap. Basically, it works just like map but applies the mapper function to the input dataset in parallel. (Hence it really shines when the mapper function's running time dominates.) I just wanted to gush over how awesome it is. Clojure also includes a time macro that makes benchmarking easy. Check out the docs here for an example. That's it. See Full Post and Comments

...the real reality that quantum electrodynamics, quantum chromodynamics, and general relativity describe. I'm not making statements about the fundamental level that's the only real level, but about what reality kind of looks like at a bigger scale if you squint my way for a moment. Nature has tuned us to think heavily in Cause and Effect. A chain, one thing proceeding to the next. Sometimes human choice dictates the direction of that chain, but human choice contains its own cause-and-effect cycle of choice and consequence. Only a few smart thinkers in history have seen beyond this, and only for a moment. Consider this quote from George Santayana, circa 1905–1906, in The Life of Reason. (Emphasis mine.)
Progress, far from consisting in change, depends on retentiveness. When change is absolute there remains no being to improve and no direction is set for possible improvement: and when experience is not retained, as among savages, infancy is perpetual. Those who cannot remember the past are condemned to repeat it.

See Full Post and Comments

It seems like a rather obvious argument to me, but I don't think it's really that obvious to most people, especially people who wonder how atheists could have morality or morals at all. I think a background of programming makes me think of it as obvious: for a good programmer, indirection and recursion start to become natural. "Who created God?" "If this reality is a simulation, is the environment we're simulated in also a simulation?" The first thing we must realize is that even Divine Morality changes. The Bible has demonstrated that God can change His mind, and a pure historical account of the Catholic Church shows their positions on certain issues differ significantly from their founding views. I don't think this is very controversial, and I don't mean to imply morality can change into anything; it still must fall within certain bounds. See Full Post and Comments

I distinguish smartness from intelligence in the following way: intelligence, specifically human intelligence, is simply what the human species is and does. Every human has intelligence, and roughly the same as another, from the dumbest idiot to the brainiest genius, barring large amounts of brain damage. This is because we're all the same species, our brains are all more or less the same "hardware", our genes are more or less the same, etc. The difference in intelligence between a chimp and a human is staggering, even though we share about 95% or so of our DNA with a chimp. Put simply, the smartest chimp can't match the dumbest fully functioning human.
There are thoughts a chimp brain is literally incapable of holding due to its design that a human brain can hold. Yet there's clearly variation among humans. I call this smartness. Intelligence is a spectrum, with a minimum (a rock) and a maximum (AIXI with some modifications), with humans and chimps occupying points on the line very near each other. I hope we as a species will be able to build the next step up from human intelligence and create something not only smarter than us in every measurable way, but simply more intelligent. See Full Post and Comments

Aumann's agreement theorem, roughly speaking, says that two agents acting rationally (in a certain precise sense) and with common knowledge of each other's beliefs cannot agree to disagree. More specifically, if two people are genuine Bayesians, share common priors, and have common knowledge of each other's current probability assignments, then they must have equal probability assignments. - Less Wrong Wiki

Whenever someone says "well, we'll just have to agree to disagree", the parties involved in the disagreement have failed at presenting their cases. It means that all parties, or maybe just one, are ignorant of some piece of information the other is implicitly using. This happens a lot, unfortunately. The more your argument depends on, the harder it becomes to actually argue. From a distance such arguments look like a series of each person moving the goal posts of the argument, when in reality they're just trying to get across more prior information the other(s) don't have access to. See Full Post and Comments
Long answer part 1: is this to be solved by parsing or by algebra? If it's to be solved by parsing, we need a set of parsing rules, in other words a convention. Grade school teaches things like BEDMAS/PEMDAS, but that's a fairly complex rule operating on groups. Instead let's go with one particular way of computer-program parsing that is easy for a beginner programmer to write. The general algorithm goes like this: Read the first number until the operator is found. Create a tree leaf containing the operator, with a left branch containing the first number read, and a right branch that is empty. Read the next symbol: if it's a parenthesis, start over, with the right branch becoming a new "leaf" to hold the next operator. If it's another number, put it into the right branch. Now simplify by applying the leaf operator to both its branches, storing the result inside the leaf and clipping the branches. Read the next symbol; if it's an operator, create a leaf with a left branch containing the resulting value previously computed and a right branch containing nothing... repeat. See Full Post and Comments
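The procedure described above can be sketched in a few lines of Python (my own illustrative code, not the answerer's: single-digit numbers, the four basic operators, parentheses, and strictly left-to-right evaluation with no precedence, exactly as the described algorithm behaves):

```python
def evaluate(expr, i=0):
    """Evaluate expr strictly left to right (no precedence); return (value, next_index)."""
    def operand(i):
        # an operand is either a parenthesized subexpression or a single digit
        if expr[i] == '(':
            val, i = evaluate(expr, i + 1)
            return val, i + 1  # skip the closing ')'
        return int(expr[i]), i + 1

    value, i = operand(i)
    ops = {'+': lambda a, b: a + b, '-': lambda a, b: a - b,
           '*': lambda a, b: a * b, '/': lambda a, b: a / b}
    # repeatedly: read an operator, read the right branch, fold ("simplify the leaf") immediately
    while i < len(expr) and expr[i] in ops:
        op = expr[i]
        right, i = operand(i + 1)
        value = ops[op](value, right)
    return value, i

# strictly left to right: 2+3*4 is (2+3)*4 = 20, not 14
print(evaluate('2+3*4')[0])  # prints 20
```

This makes the convention question concrete: the same string yields 20 under this beginner-friendly scheme and 14 under BEDMAS/PEMDAS.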
I'll answer question 2, leaving the first as an exercise to the reader. I'll do this on intuitive grounds, rather than using explicit conditional probabilities. The adversary is free to compute $v_1\cdot v_2$ regardless of what we ask, therefore removing everything about that and $v_3$ does not change the problem, which reduces to: Say we have chosen some $a\in\mathbb F_p$. We draw a random uniform secret $z_1\in\mathbb F_p^*$ (so that its inverse ${z_1}^{-1}$ is well-defined) and a random uniform secret $z_2\in\mathbb F_p$, and compute and reveal $v_1=a\cdot z_1$ and $v_2={z_1}^{-1}\cdot z_2$ to the adversary; what does that reveal about $a$?

Lemma 1: for unknown $u\in\mathbb F_p^*$, drawing a random uniform secret $z\in\mathbb F_p$, and revealing $v=u\cdot z$, reveals nothing about $u$.

Lemma 2: for unknown $u\in\mathbb F_p^*$, drawing a random uniform secret $z\in\mathbb F_p^*$, and revealing $v=u\cdot z$, reveals nothing about $u$.

Both proofs follow from the fact that $z\to u\cdot z$ is a bijection of $\mathbb F_p$ (for lemma 1) or of $\mathbb F_p^*$ (for lemma 2), so $v$ is uniformly distributed whatever $u$ is.

Notice that neither $v_2$ nor $z_2$ is involved when we compute and reveal $v_1=a\cdot z_1$. Therefore, we can consider in isolation the part of the protocol where we draw a random uniform secret $z_2\in\mathbb F_p$ and reveal $v_2={z_1}^{-1}\cdot z_2$. We apply lemma 1 with $u={z_1}^{-1}$ (which belongs to $\mathbb F_p^*$), and conclude that revealing $v_2$ reveals nothing about ${z_1}^{-1}$, hence nothing about $z_1$. Thus the part of the protocol where we compute and reveal $v_2={z_1}^{-1}\cdot z_2$ has revealed nothing about any quantity in the part of the protocol where we compute and reveal $v_1=a\cdot z_1$. If our choice of $a$ was not zero, by lemma 2, that part of the protocol has revealed nothing about $a$. If our choice of $a$ was $0$, $v_1$ will be $0$. Hence the answer to question 2 is: the protocol reveals precisely whether $a=0$ or not. No other information about $a$ leaks.
With the statement disallowing $z_1=0$ (or if we neglect that case as having vanishing odds, since $p$ is large), it can be shown that no (or vanishingly little) information about $z_1$ in isolation leaks, and that the only (or almost the only) information that leaks about $z_2$ in isolation is whether $z_2=0$ holds.
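The conclusion can be checked by brute force over a small field. This is my own sanity-check sketch (the field size $p=7$ and the exhaustive enumeration are illustrative choices, not part of the protocol):

```python
from collections import Counter

p = 7  # small illustrative prime

def transcripts(a):
    """Multiset of transcripts (v1, v2) over all secrets z1 in F_p*, z2 in F_p."""
    c = Counter()
    for z1 in range(1, p):
        inv_z1 = pow(z1, p - 2, p)  # inverse of z1 via Fermat's little theorem
        for z2 in range(p):
            c[(a * z1 % p, inv_z1 * z2 % p)] += 1
    return c

base = transcripts(1)
# every nonzero a yields exactly the same transcript distribution...
assert all(transcripts(a) == base for a in range(2, p))
# ...and a = 0 is distinguishable: it is the only case where v1 = 0
assert all(v1 != 0 for (v1, _) in base)
assert all(v1 == 0 for (v1, _) in transcripts(0))
```

The first assertion is exactly the "reveals nothing beyond $a=0$ or not" claim, stated as equality of transcript multisets.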
(Re-posted from StackOverflow as suggested) I have the following problem. The functions $f(x),g(x)$ are defined as $$ f(x) = \begin{cases} f_1(x) & 0 \leq x \leq 10, \\ f_2(x) & 10 < x \leq 20, \\ 0 & \text{otherwise}, \end{cases} \qquad g(x) = \begin{cases} g_1(x) & 0 \leq x \leq 5, \\ g_2(x) & 5 < x \leq 20, \\ 0 & \text{otherwise}, \end{cases} $$ In addition, we require the constraints $$ \int_0^{20} f(x) dx \geq K, \quad \int_0^{20} g(x) dx \geq Q, \quad f(x)+g(x) \leq R \text{ for all $x$}. $$ where $K,Q,R$ are parameters. I assume there is quite some elaborate theory behind it, and was wondering if anybody could point me in the right direction to devise an algorithm that can generate $f_1(x), f_2(x), g_1(x), g_2(x)$? I would like to add that for a given $K$ and $Q$, the interest is to keep $R$ as low as possible.
Cost Elasticity

Cost elasticity (also called cost-output elasticity) measures the responsiveness of total cost to changes in output. It is calculated by dividing the percentage change in cost by the percentage change in output.

A cost elasticity value of less than 1 means that economies of scale exist. Economies of scale exist when an increase in output results in a decrease in unit cost while input costs are held constant. Such a reduction in average cost may occur, for example, when workers are able to specialize, which increases their productivity, or when the firm is able to negotiate more effectively with suppliers and receive volume discounts, etc.

Calculation

Cost elasticity is calculated by dividing the percentage change in total costs by the percentage change in output:

$$ \text{Cost Elasticity}\ =\ \frac{\%\ \text{Change in Total Costs}}{\%\ \text{Change in Output}} $$

Where ∆C is the change in total costs, the percentage change in total costs equals ∆C/C. Similarly, the percentage change in output is ∆Q/Q. It follows that:

$$ \text{Cost Elasticity}\ =\frac{\Delta \text{C}}{\text{C}}\div\frac{\Delta \text{Q}}{\text{Q}} $$

$$ \text{Cost Elasticity}\ =\frac{\Delta \text{C}}{\Delta \text{Q}}\times \frac{\text{Q}}{\text{C}} $$

A production process is said to exhibit economies of scale if the cost elasticity is less than 1 and diseconomies of scale if the cost elasticity is greater than 1. At a cost elasticity of exactly 1, neither economies nor diseconomies of scale exist.

A cost elasticity of less than 1 represents the existence of economies of scale because it means that the percentage change in costs (i.e. the numerator) is lower than the percentage change in output (the denominator). In other words, it shows that at a cost elasticity of less than 1, costs increase by a lower percentage than output.

Example

Using the data given below for three firms, advise each firm regarding production level.
                     Firm A    Firm B     Firm C
Old output            1,000     5,000     11,000
New output            1,200     6,000     12,000
Old total cost ($)   20,000    50,000    132,000
New total cost ($)   22,800    60,000    168,000

You need to calculate the cost elasticity for each firm and then see if there are economies of scale. Let’s calculate the cost elasticity for Firm A:

$$ \varepsilon _ \text{C}=\frac{\Delta \text{C}}{\Delta \text{Q}}\times \frac{\text{Q}}{\text{C}} \\=\frac{\text{\$22,800} - \text{\$20,000}}{\text{1,200} - \text{1,000}}\times \frac{\text{1,000}}{\text{\$20,000}}= \text{0.7} $$

Using the same formula, you can verify that the cost elasticities of Firms B and C are 1 and 3, respectively.

Since Firm A has a cost elasticity of less than 1, its production process exhibits economies of scale and it should increase production. Firm B has neither economies nor diseconomies of scale, while Firm C has diseconomies of scale and should reduce production.

by Obaidullah Jan, ACA, CFA and last modified on
Preprints (Red Series) of the Department of Mathematics

Year of publication: 1996

282

Let \(a_1,\dots,a_m\) be random points in \(\mathbb{R}^n\) that are independent and identically distributed with a spherically symmetric distribution. Moreover, let \(X\) be the random polytope generated as the convex hull of \(a_1,\dots,a_m\) and let \(L_k\) be an arbitrary \(k\)-dimensional subspace of \(\mathbb{R}^n\) with \(2\le k\le n-1\). Let \(X_k\) be the orthogonal projection image of \(X\) in \(L_k\). We call those vertices of \(X\) whose projection images in \(L_k\) are vertices of \(X_k\) as well shadow vertices of \(X\) with respect to the subspace \(L_k\). We derive a distribution-independent sharp upper bound for the expected number of shadow vertices of \(X\) in \(L_k\).

276

Let \(a_1,\dots,a_n\) be independent random points in \(\mathbb{R}^d\), spherically symmetrically but not necessarily identically distributed. Let \(X\) be the random polytope generated as the convex hull of \(a_1,\dots,a_n\) and for any \(k\)-dimensional subspace \(L\subseteq \mathbb{R}^d\) let \(Vol_L(X) := \lambda_k(L\cap X)\) be the volume of \(X\cap L\) with respect to the \(k\)-dimensional Lebesgue measure \(\lambda_k\), \(k=1,\dots,d\). Furthermore, let \(F^{(i)}(t) := \mathbf{Pr}(\Vert a_i \Vert_2\leq t)\), \(t \in \mathbb{R}^+_0\), be the radial distribution function of \(a_i\). We prove that the expectation functional \(\Phi_L(F^{(1)}, F^{(2)},\dots, F^{(n)}) := E(Vol_L(X))\) is strictly decreasing in each argument, i.e. if \(F^{(i)}(t) \le G^{(i)}(t)\), \(t \in \mathbb{R}^+_0\), but \(F^{(i)} \not\equiv G^{(i)}\), we show \(\Phi(\dots, F^{(i)}, \dots) > \Phi(\dots,G^{(i)},\dots)\). The proof is done in the more general framework of continuous and \(f\)-additive polytope functionals.
Worldly

Revision as of 19:02, 24 March 2014

Every inaccessible cardinal is worldly. Nevertheless, the least worldly cardinal is singular and hence not inaccessible. The least worldly cardinal has cofinality $\omega$. Indeed, the next worldly cardinal above any ordinal, if any exist, has cofinality $\omega$. Any worldly cardinal $\kappa$ of uncountable cofinality is a limit of $\kappa$ many worldly cardinals.

Degrees of worldliness

A cardinal $\kappa$ is $1$-worldly if it is worldly and a limit of worldly cardinals. More generally, $\kappa$ is $\alpha$-worldly if it is worldly and for every $\beta\lt\alpha$, the $\beta$-worldly cardinals are unbounded in $\kappa$. The cardinal $\kappa$ is hyper-worldly if it is $\kappa$-worldly. One may proceed to define notions of $\alpha$-hyper-worldly and $\alpha$-hyper${}^\beta$-worldly in analogy with the hyper-inaccessible cardinals. Every inaccessible cardinal $\kappa$ is hyper${}^\kappa$-worldly, and a limit of such kinds of cardinals.

The worldly cardinal terminology was introduced in lectures of J. D. Hamkins at the CUNY Graduate Center.
Some days ago I posted a question in MSE in order to correct a solution to the problem of proving that $[\mathbb{Q}(\sqrt{4+\sqrt{5}},\sqrt{4-\sqrt{5}}):\mathbb{Q}]=8$. After posting another question, I found a general argument for this type of extension. I think that the ideas in the solution of Bill Dubuque to this question could be used to solve the following problem:

Let $p$ and $q$ be distinct positive prime numbers such that $p+q$ is a perfect square. Then $[\mathbb{Q}(\sqrt{\sqrt{p+q}+\sqrt{q}},\sqrt{\sqrt{p+q}-\sqrt{q}}):\mathbb{Q}]=8.$

My attempt at a solution: Let $\alpha_1 = \sqrt{\sqrt{p+q}+\sqrt{q}}$ and $\alpha_2=\sqrt{\sqrt{p+q}-\sqrt{q}}.$ Let $\mathbb{K}=\mathbb{Q}(\alpha_1,\alpha_2)$. First observe that $$\alpha_1^2 = \sqrt{p+q}+\sqrt{q},$$ and $$\alpha_1 \alpha_2 = \sqrt{p}.$$ Let $\mathbb{L}=\mathbb{Q}(\alpha_1^2,\alpha_1 \alpha_2)=\mathbb{Q}(\sqrt{q},\sqrt{p}).$ We have that $[\mathbb{L}:\mathbb{Q}]=4,$ hence $\mathbb{L}$ is a 2-dimensional vector space over $\mathbb{Q}(\sqrt{q}),$ with basis $\{1,\sqrt{p}\}$.

We will now prove that $\alpha_1 \not\in \mathbb{L}$: Suppose that $\alpha_1 \in \mathbb{L}$ (this directly implies that $\alpha_2 \in \mathbb{L}$ too); then there exist unique $a,b \in \mathbb{Q}(\sqrt{q})$ with $$\alpha_1 = a + b\sqrt{p}.$$ Hence, $$\sqrt{p+q}+\sqrt{q} = a^2 + p b^2 + 2ab\sqrt{p},$$ or equivalently: $$2ab\sqrt{p} = \sqrt{p+q}+\sqrt{q} - a^2 - p b^2.$$ Since the right member of the equality is in $\mathbb{Q}(\sqrt{q}),$ it must be that $a=0$ or $b=0$.
If $a=0$ then $\alpha_1 = b\sqrt{p}=b\alpha_1 \alpha_2,$ hence $1=b\alpha_2$ and we conclude that $\alpha_2^{-1}=b \in \mathbb{Q}(\sqrt{q}).$ If $b=0$ then $\alpha_1=a \in \mathbb{Q}(\sqrt{q}).$ Both cases give a contradiction, since $\sqrt{\sqrt{p+q}\pm\sqrt{q}}\not\in\mathbb{Q}(\sqrt{q})$: If we suppose that $$\sqrt{\sqrt{p+q}\pm\sqrt{q}}\in\mathbb{Q}(\sqrt{q}),$$ then there exist unique $a,b \in \mathbb{Q}$ such that $$\sqrt{\sqrt{p+q}\pm\sqrt{q}}=a+b\sqrt{q}.$$ Hence $$\sqrt{p+q}\pm\sqrt{q} = a^2 + qb^2+2ab\sqrt{q},$$ and we must have $ab=\pm1/2$ and $\sqrt{p+q} = a^2 + qb^2.$ Solving for $a$ we get that $a$ is a root of the polynomial $$4x^4-4\sqrt{p+q}x^2+q.$$ Hence $a$ has one of the following four values: $$\pm\sqrt{\frac{\sqrt{p+q}}{2}\pm\frac{\sqrt{p}}{2}},$$ but none of these values is rational, since otherwise $$\bigg(\pm\sqrt{\frac{\sqrt{p+q}}{2}\pm\frac{\sqrt{p}}{2}}\bigg)^2=\frac{\sqrt{p+q}}{2}\pm\frac{\sqrt{p}}{2} \in \mathbb{Q}.$$ With this we conclude the proof and get the original claim. End.

The problem I posted some days ago is the special case with $p = 11$ and $q = 5$. Is this approach correct? I'm interested in reading Galois-type solutions since I think they are more "beautiful". Which are the pairs of distinct positive primes whose sum is a perfect square? I see the pairs $(11,5)$, $(23,2)$ and $(31,5)$, for example. Thanks to everyone.
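As a quick numeric sanity check of the identities used above, and of the question's last ask (my own sketch; the search bound 40 is arbitrary):

```python
import math
from itertools import combinations

def is_prime(n):
    return n > 1 and all(n % d for d in range(2, math.isqrt(n) + 1))

# pairs of distinct primes whose sum is a perfect square
pairs = [(p, q) for q, p in combinations(range(2, 40), 2)
         if is_prime(p) and is_prime(q) and math.isqrt(p + q) ** 2 == p + q]
# pairs includes (7, 2), (11, 5), (13, 3), (23, 2), (29, 7), (31, 5), ...

# numeric check of alpha1*alpha2 = sqrt(p) and alpha1^2 = sqrt(p+q)+sqrt(q)
p, q = 11, 5
a1 = math.sqrt(math.sqrt(p + q) + math.sqrt(q))
a2 = math.sqrt(math.sqrt(p + q) - math.sqrt(q))
assert abs(a1 * a2 - math.sqrt(p)) < 1e-12
assert abs(a1 ** 2 - (math.sqrt(p + q) + math.sqrt(q))) < 1e-12
```

This of course only checks the identities numerically, not the degree claim itself.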
This answer tries to give more connections between these two decompositions than differences. SVD actually stems from the eigenvalue decomposition of real symmetric matrices. If a matrix $A \in \mathbb{R}^{n \times n}$ is symmetric, then there exists a real orthogonal matrix $O$ such that $$A = O\text{diag}(\lambda_1, \ldots, \lambda_n)O', \tag{1}$$ where $\lambda_1, \ldots, \lambda_n$ are all real eigenvalues of $A$. In other words, $A$ is orthogonally similar to the diagonal matrix $\text{diag}(\lambda_1, \ldots, \lambda_n)$.

For a general (rectangular) real matrix $B \in \mathbb{R}^{m \times n}$, clearly $B'B$ is square, symmetric and positive semi-definite, thus all its eigenvalues are real and non-negative. By definition, the singular values of $B$ are the arithmetic square roots of the positive eigenvalues of $B'B$, say, $\mu_1, \ldots, \mu_r$. Since $B'B$ has the eigen-decomposition $$B'B = O\text{diag}(\mu_1^2, \ldots, \mu_r^2, 0, \ldots, 0)O',$$ it can be shown (with a little clever algebra) that there exist orthogonal matrices $O_1 \in \mathbb{R}^{m \times m}$ and $O_2 \in \mathbb{R}^{n \times n}$ such that $B$ has the following Singular Value Decomposition (SVD): $$B = O_1 \text{diag}(\text{diag}(\mu_1, \ldots, \mu_r), 0)O_2, \tag{2}$$ where the $0$ in the diagonal matrix is a zero matrix of size $(m - r) \times (n - r)$. One sometimes expresses $(2)$ by saying that $B$ is orthogonally equivalent to the diagonal matrix $\text{diag}(\text{diag}(\mu_1, \ldots, \mu_r), 0)$.

In view of $(1)$ and $(2)$, both the eigen-decomposition (in its narrow sense, for symmetric matrices only) and the SVD are trying to look for representative elements under some relations. In detail, the eigen-decomposition $(1)$ states that under the orthogonal similarity relation, all symmetric matrices can be classified into equivalence classes, and for each equivalence class, the representative element can be chosen to be the simple diagonal matrix $\text{diag}(\lambda_1, \ldots, \lambda_n)$.
It can be further shown that the set of eigenvalues $\{\lambda_1, \ldots, \lambda_n\}$ is the maximal invariant under the orthogonal similarity relation. By comparison, the SVD $(2)$ states that under the orthogonal equivalence relation, all $m \times n$ matrices can be classified into equivalence classes, and for each equivalence class, the representative element can also be chosen to be a diagonal matrix $\text{diag}(\text{diag}(\mu_1, \ldots, \mu_r), 0)$. It can be further shown that the set of singular values $\{\mu_1, \ldots, \mu_r\}$ is the maximal invariant under the orthogonal equivalence relation.

In summary, given a matrix $M$ to be decomposed, both the eigen-decomposition and the SVD aim to find a simplified profile of it. This is not much different from seeking a representative basis under which a linear transformation has its simplest coordinate expression. Moreover, the above (incomplete) arguments show that the eigen-decomposition and the SVD are closely related; in fact, one way to derive the SVD is entirely from the eigen-decomposition.
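A tiny worked instance of that derivation (my own illustrative numbers; pure Python, and $2\times 2$ so the eigenvalues of $B'B$ come from the quadratic formula):

```python
import math

# Singular values of B are the square roots of the eigenvalues of B'B.
B = [[3.0, 0.0],
     [4.0, 5.0]]

# B'B = transpose(B) @ B, via a hand-rolled matrix multiply
BtB = [[sum(B[k][i] * B[k][j] for k in range(2)) for j in range(2)]
       for i in range(2)]
# BtB == [[25, 20], [20, 25]]: symmetric and positive semi-definite

# eigenvalues of a symmetric 2x2 matrix from the characteristic polynomial
a, b, c, d = BtB[0][0], BtB[0][1], BtB[1][0], BtB[1][1]
tr, det = a + d, a * d - b * c
disc = math.sqrt(tr * tr - 4 * det)
eigs = sorted([(tr + disc) / 2, (tr - disc) / 2], reverse=True)  # [45, 5]

singular_values = [math.sqrt(e) for e in eigs]
print(singular_values)  # ≈ [6.7082, 2.2361], i.e. [3*sqrt(5), sqrt(5)]
```

As a cross-check, $\mu_1\mu_2 = |\det B| = 15$ and $\mu_1^2+\mu_2^2 = \|B\|_F^2 = 50$ both hold here.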
ok, suppose we have the set $U_1=[a,\frac{a+b}{2}) \cup (\frac{a+b}{2},b]$ where $a,b$ are rational. It is easy to see that there exists a countable cover which consists of intervals that converge towards $a$, $b$ and $\frac{a+b}{2}$. Therefore $U_1$ is not compact. Now we can construct $U_2$ by taking the midpoint of each half-open interval of $U_1$, and we can similarly construct a countable cover that has no finite subcover. By induction on the naturals, we eventually end up with the set $\Bbb{I} \cap [a,b]$. Thus this set is not compact.

I am currently working under the Lebesgue outer measure, though I did not know we cannot define any measure where subsets of rationals have nonzero measure.

The above workings are basically trying to compute $\lambda^*(\Bbb{I}\cap[a,b])$ more directly without using the fact $(\Bbb{I}\cap[a,b]) \cup (\Bbb{I}\cap[a,b]) = [a,b]$, where $\lambda^*$ is the Lebesgue outer measure; that is, trying to compute the Lebesgue outer measure of the irrationals using only the notions of covers, topology and the definition of the measure.

What I hope to get from such a more direct computation is deeper rigorous and intuitive insight into what exactly controls the value of the measure of some given uncountable set, because MSE and Asaf taught me it has nothing to do with connectedness or the topology of the set.

Problem: Let $X$ be some measurable space and $f,g : X \to [-\infty, \infty]$ measurable functions. Prove that the set $\{x \mid f(x) < g(x) \}$ is a measurable set.

Question: In a solution I am reading, the author just asserts that $g-f$ is measurable and the rest of the proof essentially follows from that. My problem is, how can $g-f$ make sense if either function could possibly take on an infinite value?
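One standard way to settle that last question (a sketch of the usual textbook argument, not the quoted author's solution) avoids forming $g-f$ altogether, so the $\infty-\infty$ worry never arises:

```latex
\{x \mid f(x) < g(x)\}
  \;=\; \bigcup_{q \in \mathbb{Q}} \Bigl( \{x \mid f(x) < q\} \cap \{x \mid q < g(x)\} \Bigr)
```

Indeed $f(x) < g(x)$ holds iff some rational $q$ fits strictly between the two values, and this works verbatim for $[-\infty,\infty]$-valued functions. Each $\{f < q\}$ and $\{q < g\}$ is measurable because $f$ and $g$ are, and a countable union of finite intersections of measurable sets is measurable.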
@AkivaWeinberger For $\lambda^*$ I can think of simple examples like: If $\frac{a}{2} < \frac{b}{2} < a, b$, then I can always add some $\frac{c}{2}$ to $\frac{a}{2},\frac{b}{2}$ to generate the interval $[\frac{a+c}{2},\frac{b+c}{2}]$, which will fulfill the criteria. But if you are interested in some $X$ that are not intervals, I am not very sure.

We then manipulate the $c_n$ for the Fourier series of $h$ to obtain a new $c_n$, but expressed w.r.t. $g$. Now, I am still not understanding why, by doing what we have done, we're logically showing that this new $c_n$ is the $d_n$ which we need. Why would this $c_n$ be the $d_n$ associated with the Fourier series of $g$?

$\lambda^*(\Bbb{I}\cap [a,b]) = \lambda^*(C) = \lim_{i\to \aleph_0}\lambda^*(C_i) = \lim_{i\to \aleph_0} (b-q_i) + \sum_{k=1}^i (q_{n(i)}-q_{m(i)}) + (q_{i+1}-a)$. Therefore, computing the Lebesgue outer measure of the irrationals directly amounts to computing the value of this series. Therefore, we first need to check it is convergent, and then compute its value.

The above workings are basically trying to compute $\lambda^*(\Bbb{I}\cap[a,b])$ more directly without using the fact $(\Bbb{I}\cap[a,b]) \cup (\Bbb{I}\cap[a,b]) = [a,b]$, where $\lambda^*$ is the Lebesgue outer measure. What I hope to get from such a more direct computation is deeper rigorous and intuitive insight into what exactly controls the value of the measure of some given uncountable set, because MSE and Asaf taught me it has nothing to do with connectedness or the topology of the set.

Alessandro: and typo for the third $\Bbb{I}$ in the quote, which should be $\Bbb{Q}$

(cont.) We first observed that the above countable sum is an alternating series.
Therefore, we can use some machinery for checking the convergence of an alternating series. Next, we observed that the terms in the alternating series are monotonically increasing and bounded from above and below by $b$ and $a$ respectively. Each term in brackets is also nonnegative by the Lebesgue outer measure of open intervals, and together, let the differences be $c_i = q_{n(i)} - q_{m(i)}$. These form a series that is bounded from above and below. Hence (also a typo in the subscript just above): $$\lambda^*(\Bbb{I}\cap [a,b])=\sum_{i=1}^{\aleph_0}c_i$$

Consider the partial sums of the above series. Note every partial sum is telescoping, since in a finite series addition associates and thus we are free to cancel out. By the construction of the cover $C$, every rational $q_i$ that is enumerated is ordered such that they form expressions $-q_i+q_i$. Hence for any partial sum, by moving through the stages of the construction of $C$, i.e. $C_0,C_1,C_2,\dots$, the only surviving term is $b-a$. Therefore, the countable sequence is also telescoping and:

@AkivaWeinberger Never mind. I think I figured it out alone. Basically, the value of the definite integral for $c_n$ is actually the value of the definite integral of $d_n$. So they are the same thing but re-expressed differently.

If you have a function $f : X \to Y$ between two topological spaces $X$ and $Y$ you can't conclude anything about the topologies; if, however, the function is continuous, then you can say stuff about the topologies.

@Overflow2341313 Could you send a picture or a screenshot of the problem?

nvm I overlooked something important. Each interval contains a rational, and there are only countably many rationals.
This means at the $\omega_1$ limit stage, there are uncountably many intervals that contain neither rationals nor irrationals, thus they are empty and do not contribute to the sum. So there are only countably many disjoint intervals in the cover $C$.

@Perturbative Okay, similar problem, if you don't mind guiding me in the right direction. If a function $f$ exists, with the same setup $(X, t) \to (Y, S)$, that is 1-1, open, and continuous but not onto, construct a topological space which is homeomorphic to the space $(X, t)$. Simply restrict the codomain so that it is onto? Making it bijective and hence invertible.

hmm, I don't understand. While I do start with an uncountable cover and use the axiom of choice to well order the irrationals, the fact that the rationals are countable means I eventually end up with a countable cover of the rationals. However the telescoping countable sum clearly does not vanish, so this is weird... In a schematic, we have the following. I will try to figure this out tomorrow before moving on to computing the Lebesgue outer measure of the Cantor set:

@Perturbative Okay, last question. Think I'm starting to get this stuff now.... I want to find a topology $t$ on $\Bbb{R}$ such that $f: (\Bbb{R}, U) \to (\Bbb{R}, t)$ defined by $f(x) = x^2$ is an open map, where $U$ is the "usual" topology: $V \in U$ iff every $x \in V$ has an interval with $x \in (a,b) \subseteq V$. To do this... the smallest $t$ can be is the trivial topology on $\Bbb{R}$, namely $\{\emptyset, \Bbb{R}\}$. But, we required that everything in $U$ be in $t$ under $f$?
@Overflow2341313 Also for the previous example, I think it may not be as simple (contrary to what I initially thought), because there do exist functions which are continuous and bijective but do not have a continuous inverse I'm not sure if adding the additional condition that $f$ is an open map will make any difference For those who are not very familiar with this interest of mine: besides the maths, I am also interested in the notion of a "proof space", that is, the set or class of all possible proofs of a given proposition and their relationships An element of a proof space is a proof, which consists of steps and forms a path in this space For that I have a postulate: given two paths A and B in proof space with the same starting point and a proposition $\phi$, if $A \vdash \phi$ but $B \not\vdash \phi$, then there must exist some condition that makes the path $B$ unable to reach $\phi$, or that $\phi$ is unprovable along $B$ under the current formal system Hi. I believe I have numerically discovered that $\sum_{n=0}^{K-c}\binom{K}{n}\binom{K}{n+c}z_K^{n+c/2} \sim \sum_{n=0}^K \binom{K}{n}^2 z_K^n$ as $K\to\infty$, where $c=0,\dots,K$ is fixed and $z_K=K^{-\alpha}$ for some $\alpha\in(0,2)$. Any ideas how to prove that?
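The conjectured asymptotic equivalence at the end of the chat can at least be probed numerically. The sketch below is my own check (not from the chat): it compares the two sums for a moderate $K$, choosing $c=2$ and $\alpha=1$ so that $z_K^{c/2}=z_K$ is an exact rational and everything can be done with exact arithmetic (floating point would underflow badly here).

```python
from fractions import Fraction
from math import comb

def lhs(K, c, z):
    # sum_{n=0}^{K-c} C(K,n) C(K,n+c) z^{n + c/2}; c even keeps z^{c/2} rational
    assert c % 2 == 0
    return sum(comb(K, n) * comb(K, n + c) * z ** (n + c // 2)
               for n in range(K - c + 1))

def rhs(K, z):
    # sum_{n=0}^{K} C(K,n)^2 z^n
    return sum(comb(K, n) ** 2 * z ** n for n in range(K + 1))

K = 1000
z = Fraction(1, K)            # z_K = K^{-alpha} with alpha = 1
ratio = lhs(K, 2, z) / rhs(K, z)
print(float(ratio))           # expected to drift toward 1 as K grows
```

At $K=1000$ the ratio is already close to 1; rerunning with larger $K$ (or other $\alpha\in(0,2)$, with a float or high-precision variant for odd $c$) gives a feel for the rate of convergence, though of course this proves nothing.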
Newspace parameters
Level: \( N = 2016 = 2^{5} \cdot 3^{2} \cdot 7 \)
Weight: \( k = 1 \)
Character orbit: \([\chi]\) = 2016.l (of order \(2\) and degree \(1\))

Newform invariants
Self dual: Yes
Analytic conductor: \(1.00611506547\)
Analytic rank: \(0\)
Dimension: \(1\)
Coefficient field: \(\mathbb{Q}\)
Coefficient ring: \(\mathbb{Z}\)
Coefficient ring index: \( 1 \)
Projective image: \(D_{2}\)
Projective field: Galois closure of \(\Q(\sqrt{2}, \sqrt{-7})\)
Artin image size: \(8\)
Artin image: $D_4$
Artin field: Galois closure of 4.0.14112.1

Character values
We give the values of \(\chi\) on generators for \(\left(\mathbb{Z}/2016\mathbb{Z}\right)^\times\).
\(n\): \(127\), \(577\), \(1765\), \(1793\)
\(\chi(n)\): \(1\), \(-1\), \(-1\), \(1\)

Embeddings
For each embedding \(\iota_m\) of the coefficient field, the values \(\iota_m(a_n)\) are shown below. For more information on an embedded modular form you can click on its label.
Label 433.1: \( a_{2}, \ldots, a_{10} \) = 0, 0, 0, 0, 0, 1.00000, 0, 0, 0

Self twists
Char. orbit / Parity / Mult. / Self Twist / Proved
1.a / Even / 1 / trivial / yes
7.b / Odd / 1 / CM by \(\Q(\sqrt{-7}) \) / yes
8.b / Even / 1 / RM by \(\Q(\sqrt{2}) \) / yes
56.h / Odd / 1 / CM by \(\Q(\sqrt{-14}) \) / yes

This newform can be constructed as the kernel of the linear operator \(T_{11} \) acting on \(S_{1}^{\mathrm{new}}(2016, [\chi])\).
Order of Real Numbers is Dual of Order Multiplied by Negative Number

Theorem
$\forall x, y, z \in \R: x > y, z < 0 \implies x \times z < y \times z$

Proof
Let $z < 0$. Then $-z > 0$ and so:
\(\displaystyle x > y\)
\(\displaystyle \leadsto \ \ x \times \paren {-z} > y \times \paren {-z}\) (Real Number Axioms: $\R O2$: compatibility with multiplication)
\(\displaystyle \leadsto \ \ -\paren {x \times z} > -\paren {y \times z}\) (Multiplication by Negative Real Number)
\(\displaystyle \leadsto \ \ x \times z < y \times z\) (Order of Real Numbers is Dual of Order of their Negatives)
$\blacksquare$
Lower attic
From Cantor's Attic
Revision as of 06:57, 27 July 2013 (minor edit: removing superfluous bullet points)

Welcome to the lower attic, where the countably infinite ordinals climb ever higher, one upon another, in an eternal self-similar reflecting ascent.
$\omega_1$, the first uncountable ordinal, and the other uncountable cardinals of the middle attic stable ordinals The ordinals of infinite time Turing machines, including admissible ordinals and relativized Church-Kleene $\omega_1^x$ Church-Kleene $\omega_1^{ck}$, the supremum of the computable ordinals the Bachmann-Howard ordinal the large Veblen ordinal the small Veblen ordinal the Feferman-Schütte ordinal $\Gamma_0$ $\epsilon_0$ and the hierarchy of $\epsilon_\alpha$ numbers the omega one of chess, $\omega_1^{\mathfrak{Ch}}$, $\omega_1^{\mathfrak{Ch}_{\!\!\!\!\sim}}$ indecomposable ordinal the small countable ordinals, such as $\omega,\omega+1,\ldots,\omega\cdot 2,\ldots,\omega^2,\ldots,\omega^\omega,\ldots,\omega^{\omega^\omega},\ldots$ up to $\epsilon_0$ Hilbert's hotel and other toys in the playroom $\omega$, the smallest infinity down to the parlour, where large finite numbers dream
Problem 3 (a) (+) Prove that the order of a cycle of length $k$ is $k$. (b) (+) Prove that the order of the product of two disjoint cycles in $S_n$ ($n\geq 2$) is the least common multiple of the lengths of the cycles. Deduce that the order of a product of $m$ disjoint cycles is the least common multiple of their lengths. (c)(*) Write out the possible disjoint cycle structures of $S_{7}$. (For ease of notation, let ($\underline{n}$) denote a cycle of length $n$, so for example, ($\underline{4}$)($\underline{2}$)($\underline{1}$) is one possible structure, which will have order $4$.) Determine the possible orders of elements of $S_{7}$. (d) (+) Find the orders of the following elements of $S_7$: (a) $(135)$ (b) $(24)(163)$ (d) $(124)(3576)$ (e) $(1234)(175)$ Solution (a) Let $\sigma \in S_n$ be a cycle of length $k$; say, $\sigma = (a_1 a_2 \cdots a_k)$. Then note $\sigma(a_i) = a_{i+1}$ for $i=1, \dots, k-1$ and $\sigma(a_k) = a_1 = a_{k+_k 1}$, where $+_k$ denotes addition modulo $k$. In general, we can see that $\sigma^n(a_i) = a_{i+_kn}$. So, for $0<i<k$, $\sigma^i(a_j) \neq a_j$, and $\sigma^k(a_j)=a_j$ for $j=1, \dots, k$. Thus, $\sigma^k =(1)$ and $|\sigma| = k$. (b) Let $\sigma, \tau \in S_n$ be disjoint cycles of length $k$ and $l$, respectively. Let $m = \textrm{lcm}(k,l)$ and note that $(\sigma\tau)^m=(1)$. Indeed, since $\sigma$ and $\tau$ are disjoint, they commute, so $(\sigma\tau)^m=\sigma^m\tau^m=(1)(1)=(1)$, as $k$ and $l$ both divide $m$. Now, for $0<n<m$, we can use the division algorithm to write $n=q_1k+r_1$ and $n=q_2l+r_2$, where $0\leq r_1<k$ and $0 \leq r_2<l$. So, $(\sigma\tau)^n = \sigma^n\tau^n = \sigma^{q_1 k + r_1}\tau^{q_2 l + r_2} = \sigma^{r_1}\tau^{r_2}$. Now, since $n<\textrm{lcm}(k,l)$, either $r_1\neq 0$, or $r_2\neq 0$. In particular, by part (a) and disjointness, we have $\sigma^{r_1}\tau^{r_2}\neq (1)$. So $m$ is the smallest positive integer such that $(\sigma\tau)^m = (1)$, i.e. $m=\textrm{lcm}(k,l)=|\sigma\tau|$. A similar argument will show that the order of a disjoint product of $m$ cycles is the LCM of their lengths.
(c)(*) First let's list the possible disjoint cycle structures (the partitions of $7$): ($\underline{7}$); ($\underline{6}$)($\underline{1}$); ($\underline{5}$)($\underline{2}$); ($\underline{5}$)($\underline{1}$)($\underline{1}$); ($\underline{4}$)($\underline{3}$); ($\underline{4}$)($\underline{2}$)($\underline{1}$); ($\underline{4}$)($\underline{1}$)($\underline{1}$)($\underline{1}$); ($\underline{3}$)($\underline{3}$)($\underline{1}$); ($\underline{3}$)($\underline{2}$)($\underline{2}$); ($\underline{3}$)($\underline{2}$)($\underline{1}$)($\underline{1}$); ($\underline{3}$)($\underline{1}$)($\underline{1}$)($\underline{1}$)($\underline{1}$); ($\underline{2}$)($\underline{2}$)($\underline{2}$)($\underline{1}$); ($\underline{2}$)($\underline{2}$)($\underline{1}$)($\underline{1}$)($\underline{1}$); ($\underline{2}$)($\underline{1}$)($\underline{1}$)($\underline{1}$)($\underline{1}$)($\underline{1}$); ($\underline{1}$)($\underline{1}$)($\underline{1}$)($\underline{1}$)($\underline{1}$)($\underline{1}$)($\underline{1}$). Now the possible orders of elements of $S_{7}$ will be the least common multiples of the lengths of the cycles in the possible disjoint cycle structures. Then the possible orders are $7, 6, 10, 5, 12, 4, 3, 2, 1$. (d) (+) Find the orders of the following elements of $S_7$: (a) $|(135)|=3$ (b) $|(24)(163)|=6$ (d) $|(124)(3576)|=12$ (e) $|(1234)(175)|=|(175234)|=6$
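Part (d) is easy to sanity-check by brute force. The sketch below is not part of the original solution: it represents each permutation of $\{1,\dots,7\}$ as a dict, composes the given cycles (rightmost applied first), and finds the order by repeated composition. Note that the last element's cycles are not disjoint, so part (b) does not apply directly and the product must be composed first.

```python
from math import lcm

def cycle_perm(cycle, n=7):
    """Permutation of {1..n} (as a dict) for a single cycle (a1 a2 ... ak)."""
    p = {i: i for i in range(1, n + 1)}
    k = len(cycle)
    for i, a in enumerate(cycle):
        p[a] = cycle[(i + 1) % k]
    return p

def compose(f, g):
    """(f o g)(x) = f(g(x))."""
    return {x: f[g[x]] for x in f}

def perm(cycles, n=7):
    """Product of cycles, rightmost cycle applied first."""
    p = {i: i for i in range(1, n + 1)}
    for c in cycles:
        p = compose(p, cycle_perm(c, n))
    return p

def order(p):
    identity = {x: x for x in p}
    q, m = dict(p), 1
    while q != identity:
        q = compose(q, p)
        m += 1
    return m

print(order(perm([(1, 3, 5)])))                 # 3
print(order(perm([(2, 4), (1, 6, 3)])))         # 6 = lcm(2, 3), disjoint
print(order(perm([(1, 2, 4), (3, 5, 7, 6)])))   # 12 = lcm(3, 4), disjoint
print(order(perm([(1, 2, 3, 4), (1, 7, 5)])))   # 6, cycles NOT disjoint
```

The disjoint cases agree with the lcm rule of part (b); the last case happens to give a single 6-cycle, matching the solution's $(175234)$.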
@egreg It does this: "I just need to make use of the standard hyphenation function of LaTeX, except "behind the scenes", without actually typesetting anything." (if "not typesetting" includes typesetting in a hidden box) but it doesn't address the use case that he said he wanted it for @JosephWright ah yes, unlike the hyphenation-near-box question, I guess that makes sense, basically can't just rely on lccode anymore. I suppose you don't want the hyphenation code in my last answer by default? @JosephWright anyway if we rip out all the auto-testing (since mac/windows/linux come out the same anyway) but leave in the .cfg possibility, there is no actual loss of functionality if someone is still using a vms tex or whatever I want to change the tracking (space between the characters) for a sans serif font. I found that I can use the microtype package to change the tracking of the smallcaps font (\textsc{foo}), but I can't figure out how to make \textsc{} a sans serif font. @DavidCarlisle -- if you write it as "4 May 2016" you don't need a comma (or, in the u.s., want a comma). @egreg (even if you're not here at the moment) -- tomorrow is international archaeology day: twitter.com/ArchaeologyDay , so there must be someplace near you that you could visit to demonstrate your firsthand knowledge. @barbarabeeton I prefer May 4, 2016, for some reason (don't know why actually) @barbarabeeton but I have another question maybe better suited for you please: if a member of a conference scientific committee writes a preface for the special issue, can the signature say John Doe \\ for the scientific committee, or is there a better wording? @barbarabeeton the overrightarrow answer will have to wait, need time to debug \ialign :-) (it's not the \smash that did it) on the other hand if we mention \ialign enough it may interest @egreg enough to debug it for us. @DavidCarlisle -- okay. are you sure the \smash isn't involved?
i thought it might also be the reason that the arrow is too close to the "M". (\smash[t] might have been more appropriate.) i haven't yet had a chance to try it out at "normal" size; after all, \Huge is magnified from a larger base for the alphabet, but always from 10pt for symbols, and that's bound to have an effect, not necessarily positive. (and yes, that is the sort of thing that seems to fascinate @egreg.) @barbarabeeton yes I edited the arrow macros not to have relbar (ie just omit the extender entirely and just have a single arrowhead) but it still overprinted when in the \ialign construct, but I'd already spent too long on it at work so stopped, may try to look this weekend (but it's uktug tomorrow) if the expression is put into an \fbox, it is clear all around. even with the \smash. so something else is going on. put it into a text block, with \newline after the preceding text, and directly following before another text line. i think the intention is to treat the "M" as a large operator (like \sum or \prod, but the submitter wasn't very specific about the intent.) @egreg -- okay. i'll double check that with plain tex. but that doesn't explain why there's also an overlap of the arrow with the "M", at least in the output i got. personally, i think that that arrow is horrendously too large in that context, which is why i'd like to know what is intended. @barbarabeeton the overlap below is much smaller, see the righthand box with the arrow in egreg's image, it just extends below and catches the serifs on the M, but the overlap above is pretty bad really @DavidCarlisle -- i think other possible/probable contexts for the \over*arrows have to be looked at also. this example is way outside the contexts i would expect. and any change should work without adverse effect in the "normal" contexts. @DavidCarlisle -- maybe better take a look at the latin modern math arrowheads ... @DavidCarlisle I see no real way out.
The CM arrows extend above the x-height, but the advertised height is 1ex (actually a bit less). If you add the strut, you end up with too big a space when using other fonts. MagSafe is a series of proprietary magnetically attached power connectors, originally introduced by Apple Inc. on January 10, 2006, in conjunction with the MacBook Pro at the Macworld Expo in San Francisco, California. The connector is held in place magnetically so that if it is tugged (for example, by someone tripping over the cord) it will pull out of the socket without damaging the connector or the computer power socket, and without pulling the computer off the surface on which it is located. The concept of MagSafe is copied from the magnetic power connectors that are part of many deep fryers... has anyone converted from LaTeX -> Word before? I have seen questions on the site but I'm wondering what the result is like... and whether the document is still completely editable etc after the conversion? I mean, if the doc is written in LaTeX, then converted to Word, is the Word file editable? I'm not familiar with Word, so I'm not sure if there are things there that would just get goofed up or something. @baxx never use Word (have a copy just because, but I don't use it ;-) but have helped enough people with things over the years; these days I'd probably convert to html with latexml or tex4ht then import the html into Word and see what comes out You should be able to cut and paste mathematics from your web browser to Word (or any of the Microsoft Office suite). Unfortunately at present you have to make a small edit, but any text editor will do for that. Given x=\frac{-b\pm\sqrt{b^2-4ac}}{2a} Make a small html file that looks like <!... @baxx all the convertors that I mention can deal with document \newcommand to a certain extent. if it is just \newcommand\z{\mathbb{Z}} that is no problem in any of them, if it's half a million lines of tex commands implementing tikz then it gets trickier.
@baxx yes, but those are extremes; the thing is you just never know: you may see a simple article-class document that uses no hard-looking packages, then get halfway through and find \makeatletter and several hundred lines of tricky TeX macros copied from this site that are over-writing LaTeX format internals.
Contact Info Pure Mathematics University of Waterloo 200 University Avenue West Waterloo, Ontario, Canada N2L 3G1 Departmental office: MC 5304 Phone: 519 888 4567 x33484 Fax: 519 725 0160 Email: puremath@uwaterloo.ca Anton Mosunov, Department of Pure Mathematics, University of Waterloo "Generalizations of the Gap Principle and the Thue-Siegel Principle, With Applications to Diophantine Equations" We develop generalizations of two well-known principles from the theory of Diophantine approximation, namely the gap principle and the Thue-Siegel principle. Our results find their applications in the theory of Diophantine equations. Let $\alpha$ be an algebraic number over $\mathbb Q$ and let $F(X, Y)$ be the homogenization of the minimal polynomial of $\alpha$. In the special case when $\mathbb Q(\alpha)/\mathbb Q$ is a Galois extension of degree at least seven, we establish absolute bounds on the number of solutions of certain equations of Thue and Thue-Mahler type, which involve $F(X, Y)$. Consequently, we give theoretical evidence in support of Stewart's conjecture (1991). More generally, if every conjugate $\beta$ of $\alpha$ is such that the degree of $\beta$ over $\mathbb Q(\alpha)$ is small relative to the degree of $\alpha$ over $\mathbb Q$, we establish bounds of the form $C\gamma$, where $C$ is an absolute constant and $\gamma$ is a natural parameter associated to $\alpha$ that does not exceed the degree of $\alpha$ over $\mathbb Q$. We expect this parameter to be small, perhaps even bounded by an absolute constant. MC 2009
Prove that \(\displaystyle{D}\) is dense in \(\displaystyle{X}\) if, and only if, for each continuous function \(\displaystyle{f:X\longrightarrow \mathbb{R}}\) the following holds: \(\displaystyle{f(x)=0\,,\forall\,x\in D\implies f=\mathbb{O}}\). Now assume the converse. A different definition of density in a metric space is the following: "$D$ is dense iff every open set in $X$ intersects $D$ non-trivially". So assume $D$ is not dense and pick an open set not intersecting $D$. Since we're working in a metric space, there exist $x$ and $\epsilon>0$ such that $B_{\epsilon} (x) \cap D = \emptyset $. How can we use this information? Things like Urysohn's lemma come to mind... Indeed, Urysohn gives a continuous function with $f(X -B_{\epsilon} (x)) = 0$ and $f (\bar{B}_{\epsilon/2}(x) ) = 1 $, and we are done. (Every metric space is normal.) However, things here are much easier! Just define $A= \bar{B}_{\epsilon/2}(x)$ and $B= B_{\epsilon}(x) ^{\mathsf{c}}$. Notice $D \subset B$ and let $$f(t)= \frac{\operatorname{dist}(t,B)} {\operatorname{dist}(t,A) +\operatorname{dist}(t,B)}$$ This $f$ does the job, so we get home without any heavy machinery. Nikos Here is another proof: Suppose that \(\displaystyle{D}\) is not dense in \(\displaystyle{\left(X,d\right)}\), that is, \(\displaystyle{\overline{D}\neq X}\). Then, there exists \(\displaystyle{y\in X}\) such that \(\displaystyle{d(y,D)>0}\). The function \(\displaystyle{f:X\longrightarrow \mathbb{R}\,,f(x)=d(x,D)}\) is continuous and \(\displaystyle{f(x)=0\,,\forall\,x\in D\subseteq \overline{D}}\). According to the hypothesis, \(\displaystyle{f=\mathbb{O}}\), a contradiction, since \(\displaystyle{f(y)>0}\).
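The explicit distance-quotient function in the first proof is easy to visualize on the real line. The sketch below is my own illustration (not from the thread): take $X=\mathbb{R}$, center $x_0=0$, $\epsilon=1$, so $A=[-1/2,1/2]$ and $B=\{t : |t|\ge 1\}$, and check that $f$ is $1$ on $A$, $0$ on $B$, and strictly between otherwise.

```python
def dist_A(t):
    # distance to A = [-0.5, 0.5]
    return max(0.0, abs(t) - 0.5)

def dist_B(t):
    # distance to B = (-inf, -1] U [1, inf)
    return max(0.0, 1.0 - abs(t))

def f(t):
    # denominator never vanishes: dist_A and dist_B cannot both be zero,
    # since A and B are disjoint closed sets a positive distance apart
    return dist_B(t) / (dist_A(t) + dist_B(t))

print(f(0.0), f(0.4), f(0.75), f(2.0))  # 1.0 1.0 0.5 0.0
```

So $f$ vanishes on $B\supseteq D$ yet is not identically zero, exactly as the proof needs.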
Droop Control

Droop control is a control strategy commonly applied to generators for primary frequency control (and occasionally voltage control) to allow parallel generator operation (e.g. load sharing).

Background

Physical Intuition

TBA

Generic Formulation

A more generic formulation of the droop control concept stems from the coupling between active power and frequency, and similarly between reactive power and voltage. Recall that the active and reactive power transmitted across a lossless line are: [math]P = \frac{V_{1} V_{2}}{X} \sin\delta [/math] [math]Q = \frac{V_{2}}{X} (V_{2} - V_{1} \cos\delta) [/math] Since the power angle [math]\delta \,[/math] is typically small, we can simplify this further by using the approximations [math]\sin\delta \approx \delta \,[/math] and [math]\cos\delta \approx 1 \,[/math]: [math]\delta \approx \frac{PX}{V_{1} V_{2}} [/math] [math](V_{2} - V_{1}) \approx \frac{QX}{V_{2}} [/math] From the above, we can see that active power has a large influence on the power angle and reactive power has a large influence on the voltage difference. Restated, by controlling active and reactive power, we can also control the power angle and voltage. We also know from the swing equation that frequency in synchronous power systems is related to the power angle, so by controlling active power, we can therefore control frequency.

Droop Equations

Per-Unit Droop Equations

The coupling of active power to frequency and reactive power to voltage forms the basis of frequency and voltage droop control, where active and reactive power are adjusted according to linear characteristics, based on the following control equations: [math]f = f_{0} - r_{p} (P - P_{0}) \, [/math] ... Eq. 1 [math]V = V_{0} - r_{q} (Q - Q_{0}) \, [/math] ... Eq.
2 where [math]f \, [/math] is the system frequency (in per unit) [math]f_{0} \, [/math] is the base frequency (in per unit) [math]r_{p} \, [/math] is the frequency droop control setting (in per unit) [math]P \, [/math] is the active power of the unit (in per unit) [math]P_{0} \, [/math] is the base active power of the unit (in per unit) [math]V \, [/math] is the voltage at the measurement location (in per unit) [math]V_{0} \, [/math] is the base voltage (in per unit) [math]Q \, [/math] is the reactive power of the unit (in per unit) [math]Q_{0} \, [/math] is the base reactive power of the unit (in per unit) [math]r_{q} \, [/math] is the voltage droop control setting (in per unit) These two equations are plotted in the characteristics below: The frequency droop characteristic above can be interpreted as follows: when frequency falls from [math]f_{0}[/math] to [math]f[/math], the power output of the generating unit is allowed to increase from [math]P_{0}[/math] to [math]P[/math]. A falling frequency indicates an increase in loading and a requirement for more active power. Multiple parallel units with the same droop characteristic can respond to the fall in frequency by increasing their active power outputs simultaneously. The increase in active power output will counteract the reduction in frequency and the units will settle at active power outputs and frequency at a steady-state point on the droop characteristic. The droop characteristic therefore allows multiple units to share load without the units fighting each other to control the load (called "hunting"). The same logic above can be applied to the voltage droop characteristic. Alternative Droop Equations The basic per-unit droop equations in Eq. 1 and Eq. 
2 above can be expressed in natural quantities and in terms of deviations as follows: [math]r_{p} = \frac{\Delta f}{\Delta P} \times \frac{P_{n}}{f_{n}} [/math] [math]r_{q} = \frac{\Delta V}{\Delta Q} \times \frac{Q_{n}}{V_{n}} [/math] where [math]\Delta f \, [/math] is the frequency deviation (in Hz) [math]f_{n} \, [/math] is the nominal frequency (in Hz), e.g. 50 or 60 Hz [math]\Delta P \, [/math] is the active power deviation (in kW or MW) [math]P_{n} \, [/math] is the rated active power of the unit (in kW or MW) [math]r_{p} \, [/math] is the frequency droop control setting (in per unit) [math]\Delta V \, [/math] is the voltage deviation at the measurement location (in V) [math]V_{n} \, [/math] is the nominal voltage (in V) [math]\Delta Q \, [/math] is the reactive power deviation (in kVAr or MVAr) [math]Q_{n} \, [/math] is the rated reactive power of the unit (in kVAr or MVAr) [math]r_{q} \, [/math] is the voltage droop control setting (in per unit) Droop Control Setpoints Droop settings are normally quoted in % droop. The setting indicates the percentage amount the measured quantity must change to cause a 100% change in the controlled quantity. For example, a 5% frequency droop setting means that for a 5% change in frequency, the unit's power output changes by 100%. This means that if the frequency falls by 1%, the unit with a 5% droop setting will increase its power output by 20%. Limitations of Droop Control Frequency droop control is useful for allowing multiple generating units to automatically change their power outputs based on dynamically changing loads. However, consider what happens when there is a significant contingency such as the loss of a large generating unit. If the system remains stable, all the other units would pick up the slack, but the droop characteristic allows the frequency to settle at a steady-state value below its nominal value (for example, 49.7Hz or 59.7Hz). 
Conversely, if a large load is tripped, then the frequency will settle at a steady-state value above its nominal value (for example, 50.5 Hz or 60.5 Hz). Other controllers, called secondary and tertiary frequency controllers, are therefore necessary to bring the frequency back to its nominal value (i.e. 50 Hz or 60 Hz).
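The per-unit frequency droop characteristic (Eq. 1) and the worked 5% droop example can be captured in a few lines. This is a minimal sketch of the steady-state characteristic only, with no dynamics; the function name and defaults are my own choices, not from the article.

```python
def power_from_droop(f, f0=1.0, p0=0.0, r_p=0.05):
    """Frequency droop, Eq. 1: f = f0 - r_p * (P - P0), solved for P.
    All quantities in per unit; r_p = 0.05 is a 5% droop setting."""
    return p0 + (f0 - f) / r_p

# 5% droop: a 1% frequency fall raises output by 20% of rated power
dP = power_from_droop(0.99) - power_from_droop(1.00)
print(dP)  # approximately 0.2 per unit
```

The same function with other droop settings reproduces the general rule: percentage power change = percentage frequency change divided by the droop setting.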
Which of the following has higher boiling points? Alkanes, alkenes, or alkynes? And why? Disclaimer: All of this "jazz" will be about reaching a mere rule of thumb. You can't just compare whole families of organic compounds with each other. There are more factors to consider than those below, mostly based on isomerism notions. However, as most of the A-grade exams emphasize the lighter aliphatic compounds, we can understand each other here. :) It's all about polarizability. Polarizability is the ability of a molecule to be polarized. When determining (aka comparing) the boiling points of different molecular substances, intermolecular forces known as London Dispersion Forces are at work here. Which means, these are the forces that are overcome when the boiling occurs. (See here for example) London forces get stronger with an increase in volume, and that's because the polarizability of the molecule increases. (See the answer to this recent question) Alkanes vs. Alkenes In their simplest form (where no substitution etc. has occurred), alkanes tend to have very close boiling points to alkenes. The boiling point of each alkene is very similar to that of the alkane with the same number of carbon atoms. Ethene, propene and the various butenes are gases at room temperature. All the rest that you are likely to come across are liquids. The boiling points of alkenes depend on molecular mass (chain length). The more molecular mass is added, the higher the boiling point. The intermolecular forces of alkenes get stronger with increase in the size of the molecules.
\begin{array}{|c|c|}\hline \text{Compound} & \text{Boiling point / }^\circ\mathrm{C} \\ \hline \text{Ethene} & -104 \\ \hline \text{Propene} & -47 \\ \hline \textit{trans}\text{-2-Butene} & 0.9 \\ \hline \textit{cis}\text{-2-Butene} & 3.7 \\ \hline \textit{trans}\text{-1,2-dichlorobutene} & 155 \\ \hline \textit{cis}\text{-1,2-dichlorobutene} & 152 \\ \hline \text{1-Pentene} & 30 \\ \hline \textit{trans}\text{-2-Pentene} & 36 \\ \hline \textit{cis}\text{-2-Pentene} & 37 \\ \hline \text{1-Heptene} & 115 \\ \hline \text{3-Octene} & 122 \\ \hline \text{3-Nonene} & 147 \\ \hline \text{5-Decene} & 170 \\ \hline \end{array} In each case, the alkene has a boiling point which is a small number of degrees lower than the corresponding alkane. The only attractions involved are Van der Waals dispersion forces, and these depend on the shape of the molecule and the number of electrons it contains. Each alkene has 2 fewer electrons than the alkane with the same number of carbons. Alkanes vs. Alkynes As explained, since there is a bigger volume to an alkane than its corresponding alkyne (i.e. with the same number of carbons), the alkane should have a higher boiling point. However, there's something else in play here: alkynes have a TRIPLE bond! I currently can think of two things that happen as a result of this: London Dispersion Forces vary with distance; usually, this relation is $r^{-6}$. (See here) The triple bond allows two alkynes to get closer. The closer they are, the more the electron densities are polarised, and thus the stronger the forces are. Electrons in $\pi$ bonds are more polarizable$^{10}$. These two factors overcome the slight difference of volume here. As a result, you have higher boiling points for alkynes than alkanes, generally.
\begin{array}{|c|c|}\hline\text{Compound} & \text{Boiling point / }^\circ\mathrm{C} \\ \hline\text{Ethyne} & -84^{[1]} \\ \hline\text{Propyne} & -23.2^{[2]} \\ \hline\text{2-Butyne} & 27^{[3]} \\ \hline\text{1,4-Dichloro-2-butyne} & 165.5^{[4]} \\ \hline\text{1-Pentyne} & 40.2^{[5]} \\ \hline\text{2-Heptyne} & 112\text{–}113^{[6]} \\ \hline\text{3-Octyne} & 133^{[7]} \\ \hline\text{3-Nonyne} & 157.1^{[8]} \\ \hline\text{5-Decyne} & 177\text{–}178^{[9]} \\ \hline\end{array} 1: http://en.wikipedia.org/wiki/Acetylene 2: http://en.wikipedia.org/wiki/Propyne 3: http://en.wikipedia.org/wiki/2-Butyne 4: http://www.lookchem.com/1-4-Dichloro-2-butyne/ 5: http://en.wikipedia.org/wiki/1-Pentyne 6: http://www.chemsynthesis.com/base/chemical-structure-17405.html 7: http://www.chemspider.com/Chemical-Structure.76541.html 8: http://www.thegoodscentscompany.com/data/rw1120961.html 9: http://www.chemsynthesis.com/base/chemical-structure-3310.html 10: https://chemistry.stackexchange.com/a/27531/5026 Conclusion: We can't fully determine the boiling points of the whole class of alkanes, alkenes and alkynes. However, for the lighter hydrocarbons, comparing the boiling points, you get:$$\text{Alkynes > Alkanes > Alkenes}$$
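For one chain length, the conclusion can be checked directly against the tabulated values above (1-pentene 30 °C, 1-pentyne 40.2 °C). The n-pentane value (about 36 °C) is a literature figure I have added for comparison; it is not in the tables above.

```python
# Boiling points in Celsius; n-pentane's value is a literature figure (~36 C),
# the other two are from the tables in this answer
bp = {"1-pentyne": 40.2, "n-pentane": 36.0, "1-pentene": 30.0}

ranking = sorted(bp, key=bp.get, reverse=True)
print(ranking)  # ['1-pentyne', 'n-pentane', '1-pentene']
```

The ordering reproduces Alkynes > Alkanes > Alkenes for the C5 chain, consistent with the rule of thumb.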
It's a very interesting observation, and I would imagine it's certainly not a meaningless coincidence. I'm sure someone else can give you a better description, but I think the "rolling ball" perspective suggests why such a relationship exists. It turns out it's convenient to think of the change of position of the particle in the $x$ direction as coming from two sources: the motion of the particle with respect to the center of the circle, and the motion of the center of the circle with respect to a stationary observer (see the image below; these are the two green arrows). So, we add these two speeds together, and we have $\frac{dx}{dt}$. The center of the rolling ball is given by $(rt,r)$ and is always contributing velocity $\langle r, 0\rangle$ to the particle. Now, it will be nice to have the angular speed, so we can find the linear speed of the point (all of this with respect to the center of the circle). Of course, when $t = 2\pi$, the $y$-coordinate is back to zero, so the angular speed is simply $1,$ and in turn the linear speed is simply $r$ (I'm calling the radius $r$, but it appears that you're using $a$). To simplify matters, I'll pretend $r=1$ from here on out. The linear velocity is tangent to the circle, and we have this picture: It's not too hard to label relevant pieces of the black, central triangle. Of course the radius (hypotenuse) is just $1,$ while the signed length of the vertical leg is $-\cos t$. This is because the top endpoint of the vertical leg has the same $y$-coordinate as the particle, which is $1-\cos t$. Thus we have broken the $y$-coordinate into two pieces: the radius of the ball, and the length of the vertical leg. This, in turn, helps us start labeling pieces of the red triangle. Since the linear velocity is perpendicular to the radius of the ball and has magnitude $1,$ the red triangle is congruent to the black triangle. Thus, we see that the $x$-component of the linear velocity is also $-\cos t$. 
Thus, adding the linear speed of the point (with respect to the center of the ball), $-\cos t$, to the linear speed of the center of the ball, $1,$ we get exactly what you want: $\frac{dx}{dt} = 1 -\cos t = y$. I'm sure there's a more elegant argument, but it really is quite believable (staring at the rolling ball image) that the change in $x$ is, at the very least, proportional to the height of the point: when $y$ is near zero, the point moves mostly vertically, and when $y$ is near its maximum, the point moves mostly horizontally. Great question, I hadn't seen this before, and it was quite fun figuring out what's going on! Rolling cycloid image courtesy of Wikipedia. If I've committed some sort of image faux-pas (animated gif, or using the image directly), feel free to edit this post.
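The identity $\frac{dx}{dt} = y$ for the cycloid $x = r(t-\sin t)$, $y = r(1-\cos t)$ can also be confirmed numerically. The sketch below is my own check using a central finite difference, not part of the answer above.

```python
import math

def x(t, r=1.0):
    # cycloid, consistent with center at (r*t, r) and unit angular speed
    return r * (t - math.sin(t))

def y(t, r=1.0):
    return r * (1.0 - math.cos(t))

h = 1e-6
for t in (0.5, 1.7, 3.0, 5.2):
    dxdt = (x(t + h) - x(t - h)) / (2 * h)  # central difference for dx/dt
    assert abs(dxdt - y(t)) < 1e-8
print("dx/dt matches y at all sampled t")
```

The central difference has error of order $h^2$, far below the $10^{-8}$ tolerance used here.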
DT Fourier Series with a single MATLAB command! Calculating Fourier series by hand can often become time consuming and error prone. MATLAB has an easy and fast built-in function for computing discrete-time Fourier series coefficients. Unfortunately, this won't help you on exams, but it might save you considerable time on homework assignments. The command is ifft. It takes in a vector representing your signal and produces a vector of the Fourier series coefficients. Two examples are provided below: Example 1: The signal is represented by the graph below and is periodic for all time: This signal can be represented by a vector. Each element in the vector corresponds to the value that the signal takes at each time interval. At time 0, the value is 2. At time 1 the value is 1, and for times 2 and 3 the value is 0. This can be represented by the vector below: [2,1,0,0] To find the Fourier series coefficients, we would use the following MATLAB code: signal = [2,1,0,0]; fouriercoefs = ifft(signal) The output gives: fouriercoefs = 0.7500 0.5000 + 0.2500i 0.2500 0.5000 - 0.2500i This means that the Fourier series coefficients are: a0 = .75, a1 = .5+.25j, a2 = .25, a3 = .5-.25j. Example 2: The signal is represented by the graph below and is periodic for all time: This signal can be represented by a vector like before. You may recognize this signal as x(t) = t for 0<=t<=2 in continuous time. Since we're in discrete time, the vector below represents the signal: [0,1,2] Note that this is NOT the same as [0,.5,1,1.5,2]. In that case, the continuous-time example would be x(t) = .5t for 0<=t<=4. To find the Fourier series coefficients, we would use the following MATLAB code: signal = [0,1,2]; fouriercoefs = ifft(signal) The output gives: fouriercoefs = 1.0000 -0.5000 - 0.2887i -0.5000 + 0.2887i This means that the Fourier series coefficients are: a0 = 1, a1 = -.5 - .2887j, a2 = -.5 + .2887j. Hopefully this will help you take DT Fourier series in MATLAB easier and faster. 
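As an outside cross-check (mine, not from the original page), Example 1 can be reproduced without MATLAB by implementing the synthesis sum that ifft computes, $a_k = \frac{1}{N}\sum_{n=0}^{N-1} x[n]\,e^{j 2\pi k n / N}$, directly in Python:

```python
import cmath

def dt_fourier_coeffs(x):
    """DT Fourier series coefficients, matching MATLAB's ifft convention:
    a_k = (1/N) * sum_n x[n] * exp(+j*2*pi*k*n/N)."""
    N = len(x)
    return [sum(x[n] * cmath.exp(2j * cmath.pi * k * n / N) for n in range(N)) / N
            for k in range(N)]

coeffs = dt_fourier_coeffs([2, 1, 0, 0])
for a in coeffs:
    print(round(a.real, 4), round(a.imag, 4))
# a0 = 0.75, a1 = 0.5 + 0.25j, a2 = 0.25, a3 = 0.5 - 0.25j
```

The recovered coefficients match the ifft output of Example 1 to floating-point precision.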
These are examples which you can easily verify by hand.

comments:

... --shaun.p.greene.1, Wed, 26 Sep 2007 22:46:19 Good find in the functions library. Although I'm not sure that ifft() does exactly what we want, I tried it on the homework, and it gives me pretty much the same answer, so I definitely think it's close to what we need. I found a function called dftmtx() that will generate the k and n matrix that is needed for finding the ak values. I apologize in advance because I'm not good in LaTeX. We have $ \displaystyle a_k = \frac{1}{N}\cdot \sum_{n=0}^{N-1} x[n]\, e^{\jmath\frac{2\pi}{N} k n} $ The dftmtx() command should give you the matrix that has your exp(......). Then, all you have left to do to find the aks is ak = signal * dftmtx(N); which gives a matrix of ak values that is almost identical to what the ifft() command gives you. Thanks again for the good start with the ifft function, it got me moving on this homework.

... --shaun.p.greene.1, Wed, 26 Sep 2007 22:43:05 sry, that paragraph is really hard to read. help?

--andrew.c.daniel.1, Thu, 27 Sep 2007 00:04:44 If you used ifft() in the homework and it didn't work, you could try transposing the [1xn] vector wavread gives you to a [nx1] column vector. MATLAB syntax: A_transposed = A';

... --tom.l.moffitt.1, Thu, 27 Sep 2007 00:15:22 If you do it on the homework, make sure you take it over one period. Since with voice recordings each period won't be exactly the same, it's a good idea to just do it over one.

A note about fft vs ifft --ross.a.howard.1, Thu, 27 Sep 2007 09:31:01 If you look at the help fft page it gives the equations it represents with fft and ifft. There are some differences between them and our book. fft has the correct sign in the complex exponential, but is not multiplied by 1/N. ifft finds the conjugate of the aks (which, when you plot abs(ak), does not matter) and has the 1/N term.
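The matrix approach described in the comment can be sketched in Python/NumPy (names here are illustrative; note that MATLAB's actual dftmtx uses the opposite exponent sign, which is one reason its raw output only "almost" matches ifft):

```python
import numpy as np

signal = np.array([2, 1, 0, 0])
N = len(signal)

# Build the exponential matrix with entries e^{+j*2*pi*k*n/N}; scaling the
# matrix-vector product by 1/N then reproduces exactly what ifft returns.
n = np.arange(N)
W = np.exp(2j * np.pi * np.outer(n, n) / N)

ak = (signal @ W) / N
print(np.allclose(ak, np.fft.ifft(signal)))  # True
```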
For finding the aks using fft: aks = aks./length(aks); One should also note the helpful function fftshift. It is useful because the vector returned from fft starts at index 1 but contains a0, index 2 contains a1, and so on until it reaches the highest k, then it starts counting down. This means that when you plot the aks, they will not be in the right order. The fftshift function correctly puts the negative aks to the left of a0. (aks = fftshift(aks);) Note: the same reordering is needed if you used ifft to find the aks. Hope this helps. :)

The Best Way The best way to reverse a vector is probably as follows: Say we have a vector, test = [1,2,3,4,5], and we want to reverse the numbers to provide us with a new vector: newVector = [5,4,3,2,1] To accomplish this without using a while loop or for loop, we can simply use the keyword 'end', which is predefined in MATLAB: newVector = test(end:-1:1) What this statement says is: take the last element of test ('end'), place that in the first element of newVector, then take the next-to-last element of test ('end - 1') and put it into the second element of newVector, and so on, until newVector is a reversed copy of test. This method works with both row and column vectors.

Other Ways Use the function flipud(vector) to reverse a column vector quickly. >> flipud([1;2;3]) ans = 3 2 1 Use the function fliplr(vector) to reverse a row vector quickly. >> fliplr([1,2,3]) ans = 3 2 1

This is some MATLAB code to check your Fourier ak's. All you have to change is the expression for ak, and any particular ak's that are not in the formula (usually a0 is one of these). The other thing you must change is, in the expression for sum, the last number (in this case 6), which is the period (T). Note: This is what I got for problem 3.22(e).
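A quick NumPy illustration of the same reordering issue (MATLAB's fftshift behaves the same way):

```python
import numpy as np

# Coefficients come out ordered a0, a1, ..., then the negative-k terms.
aks = np.fft.ifft([0.0, 1.0, 2.0, 3.0])

# fftshift moves the negative-k half to the left of a0, the order you want for plotting.
shifted = np.fft.fftshift(aks)

print(np.allclose(shifted[2], aks[0]))  # True: a0 now sits in the middle
```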
t = -6:.001:6;
k = -40:40;
ak = 0; f = 0; sum = 0;
ak = 1./(j.*k.*pi).*(cos(2.*pi./3.*k) - cos(pi./3.*k));
ak(41) = 0;
for num = -40:40
    sum = ak(num+41).*exp(j*num*t*2*pi/6);
    f = f + sum;
end
plot(t,f)

Recording sound: WAVRECORD(N,FS,CH) records N samples at frequency FS. CH is the number of channels; use 1 for the homeworks. Returns a vector of length N containing the samples. WAVPLAY(Y,FS) plays audio stored in vector Y at a sampling rate of FS Hz. plot(Y) plots vector Y (useful to visualize an audio signal).

Plotting at different frequencies: x = y(1:N:length(y)); will create a vector x, which is the signal y at 1/N of its original sampling rate (takes every Nth element of y and puts it in x).

plot(x,y,z) will plot x versus y. If y is omitted, the function will plot x versus its index. The z parameter changes the format of the plot; it is a character string. For example, z = 'b*' will plot using blue stars. For all possible options, check the help page for plot. semilogx(x,y,z) creates a plot with a logarithmic scale for x. semilogy(x,y,z) creates a plot with a logarithmic scale for y. loglog(x,y,z) creates a plot with logarithmic scales for both x and y. plot(x1,y1,z1,x2,y2,z2,...,xn,yn,zn) will plot multiple curves on one graph, where each x, y, and z triple corresponds to one individual curve. subplot(y,x,z) allows you to plot many graphs at once. x and y are the number of graphs in each row and column, respectively. z is the current graph that the plot() command will affect. To change which plot to modify, reuse the subplot command with the same x and y, and change the z value to the plot you wish to modify. xlabel('label'), ylabel('label') change the labels for the x and y axes on the current graph. title('label') changes the title of the current graph. grid on / grid off / grid minor turns a grid on or off; grid minor adds more lines to the plot.

Writing functions in MATLAB can save you the trouble of typing the same lines of code over and over again. Functions are written in m-files.
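The same coefficient-checking idea can be sketched in Python/NumPy (a minimal sketch using the earlier [2,1,0,0] signal rather than problem 3.22(e)): compute the aks with ifft, then rebuild the signal from the synthesis sum. Because ifft returns the conjugate-convention coefficients, the synthesis exponent carries a minus sign:

```python
import numpy as np

signal = np.array([2.0, 1.0, 0.0, 0.0])
N = len(signal)

ak = np.fft.ifft(signal)  # analysis: conjugate-convention DTFS coefficients

# Synthesis under this convention: x[n] = sum_k ak[k] * e^{-j*2*pi*k*n/N}.
n = np.arange(N)
rebuilt = sum(ak[k] * np.exp(-2j * np.pi * k * n / N) for k in range(N))

print(np.allclose(rebuilt, signal))  # True
```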
Open MATLAB, then go to File -> New -> M-File. Let's begin with an empty function named test that does nothing. function test() That was easy. Now, save the file as test.m. Every function you write must have the same name as the name of the m-file that contains it. A function foo() would be in foo.m, bar() would be in bar.m, and so on. Go to the MATLAB prompt and type test() to execute your function. Nothing happens, since we only made an empty function. Now, let's try and make our function do something. Let's modify our file like so: function test() fprintf('Hello World\n') Now save the function and execute it again. As you can see, anything that happens within your function will execute every time you call it from the main window. What's the point of this? Let's give it an input, and get an output instead. We'll add two numbers: function x = test(a,b) x = a+b; As you can see by running this code, you can give your functions inputs, and they will give you outputs. You can give your functions as many inputs as you want. In the prompt, try typing y = test(5,6). You will see that y is now equal to 11. The function will return the value of x as whatever it is equal to when the function finishes. What if you want multiple outputs, you ask? Easy. Let's change our code to return a number of outputs. You can return an arbitrary number of outputs from your function, just like inputs. function [x, y, z] = test(a,b) x = a+b; y = a-b; z = a*b; Go back to the main prompt, and type [a,b,c] = test(6,4). You will see that now a = 10, b = 2, and c = 24.
Alladi Ramakrishnan Hall Bases for root spaces of Borcherds-Kac-Moody algebras R. Venkatesh IIT Madras Let $G$ be a Borcherds-Kac-Moody (BKM) algebra. We consider the roots of $G$ of the form $\sum_{i\in I} k_i\alpha_i$ with $k_i\leq 1$ for real simple roots $\alpha_i$. We prove that the root multiplicities of these roots have a close relationship with the (multi-)chromatic polynomial of the graph of $G$. Using this relationship, we construct bases of the root spaces of $G$ associated to these roots. This is joint work with G. Arunkumar and Deniz Kus.
Total Factor Productivity Total factor productivity (TFP) is a measure of productivity calculated by dividing economy-wide total production by the weighted average of inputs, i.e., labor and capital. It represents growth in real output which is in excess of the growth in inputs such as labor and capital. Productivity is a measure of the relationship between outputs (total product) and inputs, i.e., factors of production (primarily labor and capital). It equals output divided by input. There are two measures of productivity: (a) labor productivity, which equals total output divided by units of labor, and (b) total factor productivity, which equals total output divided by the weighted average of the inputs. $$ \text{TFP}=\frac{\text{Total Product}}{\text{Weighted Average of Inputs}} $$ The most widely used production function is the Cobb-Douglas function, which is as follows: $$ \text{Q}=\text{A}\times \text{K}^\alpha\times \text{L}^\beta $$ Where Q is total product, K is capital, α is the output elasticity of capital, L is labor and β is the output elasticity of labor. Q is the total product and the product $\text{K}^\alpha\times \text{L}^\beta$ is the weighted average of inputs. If we rearrange the Cobb-Douglas function, we get the following formula for total factor productivity: $$ \text{TFP}=\text{A}=\frac{\text{Total Product}}{\text{Weighted Average of Inputs}}=\frac{\text{Q}}{\text{K}^\alpha\times \text{L}^\beta} $$ TFP represents the increase in total production which is in excess of the increase that results from an increase in inputs. It results from intangible factors such as technological change, education, research and development, synergies, etc. It is more useful to look at productivity increase over a period instead of the absolute value of total factor productivity.
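As a minimal sketch (the numbers below are illustrative, not data from the text), TFP can be computed directly as the Cobb-Douglas residual:

```python
def tfp(Q, K, L, alpha, beta):
    """Total factor productivity A = Q / (K^alpha * L^beta)."""
    return Q / (K ** alpha * L ** beta)

# With K = L = 1 the weighted average of inputs is 1, so A equals output.
print(tfp(Q=100.0, K=1.0, L=1.0, alpha=0.3, beta=0.7))  # 100.0
```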
The following growth accounting equation gives us the relationship between growth in total product, growth in labor and capital, and growth in TFP: $$ \frac{\Delta \text{Q}}{\text{Q}} = \alpha\times \frac{\Delta \text{K}}{\text{K}}+\beta\times \frac{\Delta \text{L}}{\text{L}}+\frac{\Delta \text{A}}{\text{A}} $$ Example Consider the following production function for the mining industry in Andalusia: $$ \text{Q}=\text{A}\times \text{K}^{\text{0.70}}\times \text{L}^{\text{0.45}} $$ If the growth in total output is 3% in a period in which capital and labor grew by 1.5% and 2%, determine the growth that is attributable to total factor productivity. We need to isolate the increase in total product that is not explained by the increase in inputs, i.e., capital and labor. Let's just punch the available data into the growth accounting equation above: $$ \text{3%}=\text{0.70}\times\text{1.5%}+\text{0.45}\times\text{2%}+\frac{\Delta \text{A}}{\text{A}} $$ $$ \frac{\Delta \text{A}}{\text{A}}=\text{3%}- \text{0.70}\times \text{1.5%}-\text{0.45}\times \text{2%}=\text{3%} - \text{1.95%} = \text{1.05%} $$ by Obaidullah Jan, ACA, CFA
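The worked example can be checked in a few lines (a sketch of the growth-accounting rearrangement, using the numbers given in the example):

```python
def tfp_growth(gQ, gK, gL, alpha, beta):
    """Residual TFP growth: dA/A = dQ/Q - alpha*dK/K - beta*dL/L."""
    return gQ - alpha * gK - beta * gL

dA = tfp_growth(gQ=0.03, gK=0.015, gL=0.02, alpha=0.70, beta=0.45)
print(round(dA, 4))  # 0.0105, i.e. 1.05%
```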
Research articles for the 2019-04-21 arXiv In this paper we apply a Markovian approximation of the fractional Brownian motion (BM), known as the Dobric-Ojeda (DO) process, to the fractional stochastic volatility model where the instantaneous variance is modelled by a lognormal process with drift and fractional diffusion. Since the DO process is a semi-martingale, it can be represented as an Itô diffusion. It turns out that in this framework the process for the spot price $S_t$ is a geometric BM with stochastic instantaneous volatility $\sigma_t$, the process for $\sigma_t$ is also a geometric BM with stochastic speed of mean reversion and time-dependent volatility of volatility, and the supplementary process $\mathcal{V}_t$ is an Ornstein-Uhlenbeck process with time-dependent coefficients, and is also a function of the Hurst exponent. We also introduce an adjusted DO process which provides a uniformly good approximation of the fractional BM for all Hurst exponents $H \in [0,1]$ but requires a complex measure. Finally, the characteristic function (CF) of $\log S_t$ in our model can be found in closed form by using asymptotic expansion. Therefore, pricing options and variance swaps (by using a forward CF) can be done via FFT, which is much easier than in rough volatility models. arXiv Disagreement is an essential element of science and life in general. The amount of disagreement is often quantified by highly abstract entropic measures such as the Rényi divergence. Despite their widespread use in science and engineering, such quantities lack numerical intuition and their axiomatic definitions contain no practical insight as to how the disagreement can be resolved. An economic approach addresses both of these problems by transforming disagreements into tangible investment opportunities. The Rényi divergence appears connected to the optimized performance of such investments.
Optimization around individual opinions provides a social mechanism by which funds flow naturally to support a more accurate view. Such social mechanisms can help to resolve difficult disagreements (e.g., financial arguments concerning future climate). SSRN Although the Capital Asset Pricing Model is very convenient for estimating the Cost of Capital for long-term investments, it requires the determination and use of a value for the equity risk premium (ERP). Using Prospect Theory introduced by Kahneman and Tversky and assuming a Brownian motion for any volatile asset, it seems possible to estimate such a premium from two market parameters (volatility and the risk-free rate) and from the estimates of two human characteristics: the multiple of the valuation of the pain produced by losses in comparison to the satisfaction extracted from the alternative gains, plus the non-linearity of the risk-aversion/risk-loving curve. Under common values for these parameters, the ERP should be 7% p.a. Therefore, it seems that Mehra and Prescott estimated an insufficient premium, because they considered only that non-linearity, and not the gains-losses asymmetry. arXiv Connected and automated vehicles (CAVs) are expected to yield significant improvements in safety, energy efficiency, and time utilization. However, their net effect on energy and environmental outcomes is unclear. Higher fuel economy reduces the energy required per mile of travel, but it also reduces the fuel cost of travel, incentivizing more travel and causing an energy "rebound effect." Moreover, CAVs are predicted to vastly reduce the time cost of travel, inducing further increases in travel and energy use. In this paper, we forecast the induced travel and rebound from CAVs using data on existing travel behavior.
We develop a microeconomic model of vehicle miles traveled (VMT) choice under income and time constraints; then we use it to estimate elasticities of VMT demand with respect to fuel and time costs, with fuel cost data from the 2017 United States National Household Travel Survey (NHTS) and wage-derived predictions of travel time cost. Our central estimate of the combined price elasticity of VMT demand is -0.4, which differs substantially from previous estimates. We also find evidence that wealthier households have more elastic demand, and that households at all income levels are more sensitive to time costs than to fuel costs. We use our estimated elasticities to simulate VMT and energy use impacts of full, private CAV adoption under a range of possible changes to the fuel and time costs of travel. We forecast a 2-47% increase in travel demand for an average household. Our results indicate that backfire - i.e., a net rise in energy use - is a possibility, especially in higher income groups. This presents a stiff challenge to policy goals for reductions in not only energy use but also traffic congestion and local and global air pollution, as CAV use increases. arXiv We present some indications of inefficiency of the Brazilian stock market based on the existence of strong long-time cross-correlations with foreign markets and indices. Our results show a strong dependence on foreign markets indices as the S\&P 500 and CAC 40, but not to the Shanghai SSE 180, indicating an intricate interdependence. We also show that the distribution of log-returns of the Brazilian BOVESPA index has a discrete fat tail in the time scale of a day, which is also a deviation of what is expected of an efficient equilibrated market. As a final argument of the inefficiency of the Brazilian stock market, we use a neural network approach to forecast the direction of movement of the value of the IBOVESPA future contracts, with an accuracy allowing financial returns over passive strategies. 
SSRN The paper develops a price discovery model for commodity futures markets that accounts for two forms of limits to arbitrage caused by transaction costs and noise trader risk. Four market regimes are identified: (1) effective arbitrage, (2) transaction costs but no noise trader risk, (3) no transaction costs but noise trader risk and (4) both transaction costs and noise trader risk. It is shown that commodity prices are driven by both market fundamentals and speculative trader positions under the latter two regimes. Further, speculative effects spill over to the cash market under regime (3) but are confined to the futures market under regime (4). The model is empirically tested using data from six grain and soft commodity markets. While regime (4) is rare and short lived, regime (3) with some noise trader risk and varying elasticity of arbitrage prevails. SSRN The close relationship between commodity future and cash prices is critical for the effectiveness of risk management and the functioning of price discovery. However, in recent years, commodity futures prices, across the board, have appeared increasingly detached from prices on physical markets. This paper argues that while various factors, identified in previous literature, which introduced limits to arbitrage have facilitated non-convergence, the actual extent of non-convergence in these markets is caused by essential differences in the mechanisms of price formation on physical and derivative markets. With reference to the particular case of the CBOT wheat market, the paper shows that the size of the spread between futures and cash prices can be theoretically and empirically linked to the increasing inflow of financial investment into commodity futures markets. arXiv Value-at-risk (VaR) has been playing the role of a standard risk measure since its introduction. In practice, the delta-normal approach is usually adopted to approximate the VaR of portfolios with option positions. 
Its effectiveness, however, substantially diminishes when the portfolios concerned involve a high dimension of derivative positions with nonlinear payoffs; the lack of closed-form pricing solutions for these potentially highly correlated, American-style derivatives further complicates the problem. This paper proposes a generic simulation-based algorithm for VaR estimation that can be easily applied to any existing procedures. Our proposal leverages cross-sectional information and applies variable selection techniques to simplify the existing simulation framework. Asymptotic properties of the new approach demonstrate faster convergence due to the additional model selection component introduced. We have also performed sets of numerical experiments that verify the effectiveness of our approach in comparison with some existing strategies. SSRN The function of independent directors has been extensively documented, but the general question of how they are appointed remains insufficiently explored. We find that the likelihood of the appointment of candidates is higher when those candidates are professionally affiliated with departing independent directors, and this is more pronounced when there are personal ties between predecessors and insiders, an entirely compliant record of voting on the part of candidates or predecessors, and particularly in firms with higher-concentrated ownership and that are located in areas with a weak market environment. Moreover, the appointment of independent directors affiliated with their predecessors results in fewer dissenting votes, more related-party transactions, and a higher incidence and greater severity of violations. Our research shows that predecessor-candidate affiliation helps construct a reciprocity norm between successors and insiders, leading to weak board independence. arXiv The effect of proportional transaction costs on systematically generated portfolios is studied empirically.
The performance of several portfolios (the index tracking portfolio, the equally-weighted portfolio, the entropy-weighted portfolio, and the diversity-weighted portfolio) in the presence of dividends and transaction costs is examined under different configurations involving the trading frequency, constituent list size, and renewing frequency. Moreover, a method to smooth transaction costs is proposed. arXiv Inspired by Strotz's consistent planning strategy, we formulate the infinite horizon mean-variance stopping problem as a subgame perfect Nash equilibrium in order to determine time consistent strategies with no regret. Equilibria among stopping times or randomized stopping times may not exist. This motivates us to consider the notion of liquidation strategies, which lets the stopping right be divisible. We then argue that the mean-standard deviation variant of this problem makes more sense for this type of strategy in terms of time consistency. It turns out that an equilibrium liquidation strategy always exists. We then analyze whether optimal equilibrium liquidation strategies exist and whether they are unique, and observe that neither may hold. SSRN The increasing inflow of institutional investors replicating broad based indices into commodity futures markets has been linked to excessive calendar spreads and anomalies in futures curves. At the same time, these investors have been welcomed as liquidity providers. This paper hypothesises that this apparent dissent can be reconciled by considering the relative size of index positions to hedging positions, rather than the presence of index traders alone. The hypothesis is tested empirically for three soft commodity markets: cocoa, coffee, and cotton.
By use of factor decomposition, the paper shows empirically that (a) index and hedging positions have inverse and offsetting effects on futures curves, and (b) index positions, net of hedging positions, are associated with upward sloping and peaked futures curves and occasionally wave-like shapes linked to roll effects. The paper concludes that index traders are welcome liquidity providers but can become "too much of a good thing" if they exceed hedgers' demand for counterparties.
this is a mystery to me, despite having changed computers several times, despite the website rejecting the application, the very first sequence of numbers I entered into its search window which returned the same prompt to submit them for publication appears every time, I mean I've got hundreds of them now, and it's still far too much rope to give a person like me sitting alone in a bedroom the capacity to freely describe any such sequence and their meaning if there isn't any already there my maturity levels are extremely variant in time, that's just way too much rope to give me considering it's only me the pursuits matter to, who knows what kind of outlandish crap I might decide to spam in each of them but still, the first one from well, almost a decade ago shows up as the default content in the search window 1,2,3,6,11,23,47,106,235 well, now there is a bunch of stuff about them pertaining to "trees" and "nodes" but that's what I mean by too much rope you can't just let a lunatic like me start inventing terminology as I go oh well "what would Cotton Mather do?" the chat room unanimously ponders lol i see Secret had a comment to make, is it really a productive use of our time censoring something that is most likely not blatant hate speech? that's the only real thing that warrants censorship, even still, it has its value, in a civil society it will be ridiculed anyway? or at least inform the room as to whom is the big brother doing the censoring? No?
just suggestions trying to improve site functionality good sir relax im calm we are all calm A104101 is a hilarious entry as a side note, I love that Neil had to chime in in the comment section after the big promotional message in the first part to point out the sequence is totally meaningless as far as mathematics is concerned, just to save face for the website's integrity after plugging a TV series with a reference But seriously @BalarkaSen, some of the most arrogant of people will attempt to play the most innocent of roles and accuse you of arrogance yourself in the most diplomatic way imaginable, if you still feel that your point is not being heard, persist until they give up the farce please very general advice for any number of topics for someone like yourself sir assuming gender because you should hate text-based adam long ago if you were female or etc if its false then I apologise for the statistical approach to human interaction So after having found the polynomial $x^6-3x^4+3x^2-3$ we can just apply Eisenstein to show that this is irreducible over $\mathbb{Q}$, and since it is monic, it follows that this is the minimal polynomial of $\sqrt{1+\sqrt[3]{2}}$ over $\mathbb{Q}$? @MatheinBoulomenos So, in Galois fields, if you have two particular elements you are multiplying, can you necessarily discern the result of the product without knowing the monic irreducible polynomial that is being used to generate the field? (I will note that I might have my definitions incorrect. I am under the impression that a Galois field is a field of the form $\mathbb{Z}/p\mathbb{Z}[x]/(M(x))$ where $M(x)$ is a monic irreducible polynomial in $\mathbb{Z}/p\mathbb{Z}[x]$.)
(which is just the product of the integer and its conjugate) Note that $\alpha = a + bi$ is a unit iff $N\alpha = 1$ You might like to learn some of the properties of $N$ first, because this is useful for discussing divisibility in these kinds of rings (Plus I'm at work and am pretending I'm doing my job) Anyway, particularly useful is the fact that if $\pi \in \Bbb Z[i]$ is such that $N(\pi)$ is a rational prime then $\pi$ is a Gaussian prime (easily proved using the fact that $N$ is totally multiplicative) and so, for example, $5 \in \Bbb Z$ is prime, but $5 \in \Bbb Z[i]$ is not prime because it is the norm of $1 + 2i$ and this is not a unit. @Alessandro in general if $\mathcal O_K$ is the ring of integers of $\Bbb Q(\alpha)$, then $\Delta(\Bbb Z[\alpha])=\Delta(\mathcal O_K)\,[\mathcal O_K:\Bbb Z[\alpha]]^2$, I'd suggest you read up on orders, the index of an order and discriminants for orders if you want to go into that rabbit hole also note that if the minimal polynomial of $\alpha$ is $p$-Eisenstein, then $p$ doesn't divide $[\mathcal{O}_K:\Bbb Z[\alpha]]$ this together with the above formula is sometimes enough to show that $[\mathcal{O}_K:\Bbb Z[\alpha]]=1$, i.e. $\mathcal{O}_K=\Bbb Z[\alpha]$ the proof of the $p$-Eisenstein thing even starts with taking a $p$-Sylow subgroup of $\mathcal{O}_K/\Bbb Z[\alpha]$ (just as a quotient of additive groups, that quotient group is finite) in particular, from what I've said, if the minimal polynomial of $\alpha$ is $p$-Eisenstein wrt every prime $p$ that divides the discriminant of $\Bbb Z[\alpha]$ at least twice, then $\Bbb Z[\alpha]$ is a ring of integers that sounds oddly specific, I know, but you can also work with the minimal polynomial of something like $1+\alpha$ there's an interpretation of the $p$-Eisenstein results in terms of local fields, too. If the minimal polynomial of $\alpha$ is $p$-Eisenstein, then it is irreducible over $\Bbb Q_p$ as well.
Now you can apply the Führerdiskriminantenproduktformel (yes, that's an accepted English terminus technicus) @MatheinBoulomenos You once told me a group cohomology story that I forget, can you remind me again? Namely, suppose $P$ is a Sylow $p$-subgroup of a finite group $G$; then there's a covering map $BP \to BG$ which induces chain-level maps $p_\# : C_*(BP) \to C_*(BG)$ and $\tau_\# : C_*(BG) \to C_*(BP)$ (the transfer hom), with the corresponding maps in group cohomology $p : H^*(G) \to H^*(P)$ and $\tau : H^*(P) \to H^*(G)$, the restriction and corestriction respectively. $\tau \circ p$ is multiplication by $|G : P|$, so if I work with $\Bbb F_p$ coefficients that's an injection. So $H^*(G)$ injects into $H^*(P)$. I should be able to say more, right? If $P$ is normal abelian, it should be an isomorphism. There might be easier arguments, but this is what pops to mind first: By the Schur-Zassenhaus theorem, $G = P \rtimes G/P$ and $G/P$ acts trivially on $P$ (the action is by inner auts, and $P$ doesn't have any), there is a fibration $BP \to BG \to B(G/P)$ whose monodromy is exactly this action induced on $H^*(P)$, which is trivial, so we run the Lyndon-Hochschild-Serre spectral sequence with coefficients in $\Bbb F_p$. The $E^2$ page, $E_2^{p,q} = H^p(G/P; H^q(P; \Bbb F_p))$, is zero away from the leftmost column, since $H^p(G/P; M) = 0$ for $p > 0$ when $M$ is an $\Bbb F_p$-module, by order reasons, and that column is $H^q(P; \Bbb F_p)^{G/P} = H^q(P; \Bbb F_p)$ since the action is trivial. This means the spectral sequence degenerates at $E^2$, which gets us $H^*(G; \Bbb F_p) \cong H^*(P; \Bbb F_p)$. @Secret that's a very lazy habit you should create a chat room for every purpose you can imagine take full advantage of the website's functionality as I do and leave the general purpose room for recommending art related to mathematics @MatheinBoulomenos No worries, thanks in advance. Just to add the final punchline, what I wanted to ask is what's the general algorithm to recover $H^*(G)$ back from $H^*(P; \Bbb F_p)$'s where $P$ runs over Sylow $p$-subgroups of $G$?
Bacterial growth is the asexual reproduction, or cell division, of a bacterium into two daughter cells, in a process called binary fission. Provided no mutational event occurs, the resulting daughter cells are genetically identical to the original cell. Hence, bacterial growth occurs. Both daughter cells from the division do not necessarily survive. However, if the number surviving exceeds unity on average, the bacterial population undergoes exponential growth. The measurement of an exponential bacterial growth curve in batch culture was traditionally a part of the training of all microbiologists... As a result, there does not exist a single group which lived long enough to belong to, and hence one continues to search for new groups and activities eventually, a social heat death occurred, where no groups will generate creativity and other activity anymore Had this kind of thought when I noticed how many forums etc. have a golden age, and then died away, and at the more personal level, all people who first knew me generate a lot of activity, and then are destined to die away and grow distant roughly every 3 years Well i guess the lesson you need to learn here champ is online interaction isn't something that was inbuilt into the human emotional psyche in any natural sense, and maybe it's time you saw the value in saying hello to your next door neighbour Or more likely, we will need to start recognising machines as a new species and interact with them accordingly so covert operations AI may still exist, even as domestic AIs continue to become widespread It seems more likely sentient AI will take similar roles as humans, and then humans will need to either keep up with them with cybernetics, or be eliminated by evolutionary forces But neuroscientists and AI researchers speculate it is more likely that the two types of races are so different we end up complementing each other that is, until their processing power becomes so strong that they can outdo human thinking But, I am not
worried of that scenario, because if the next step is a sentient AI evolution, then humans would know they will have to give way However, the major issue right now in the AI industry is not that we will be replaced by machines, but that we are making machines quite widespread without really understanding how they work, and they are still not reliable enough given the mistakes still made by them and their human owners That is, we have become over-reliant on AI, and are not putting enough attention on whether they have interpreted the instructions correctly That's an extraordinary number of unreferenced rhetorical statements i could find anywhere on the internet! When my mother disapproves of my proposals for subjects of discussion, she prefers to simply hold up her hand in the air in my direction for example i tried to explain to her that my inner heart chakras tell me that my spirit guide suggests that many females i have intercourse with are easily replaceable and this can be proven from historical statistical data, but she won't even let my spirit guide elaborate on that premise i feel as if its an injustice to all child mans that have a compulsive need to lie to shallow women they meet and keep up a farce that they are either fully grown men (if sober) or an incredibly wealthy trust fund kid (if drunk) that's an important binary class dismissed Chatroom troll: A person who types messages in a chatroom with the sole purpose to confuse or annoy. I was just genuinely curious How does a message like this come from someone who isn't trolling: "for example i tried to explain to her that my inner heart chakras tell me that my spirit guide suggests that many ...
with are easily replaceable and this can be proven from historical statistical data, but she wont even let my spirit guide elaborate on that premise"

Anyway, feel free to continue, it just seems strange. @Adam I'm genuinely curious what makes you annoyed or confused. Yes, I was joking in the line that you referenced, but surely you can't assume me to be a simpleton of one definitive purpose that drives me each time I interact with another person? Does your mood or experiences vary from day to day? Mine too! So there may be particular moments that fit your declared description, but only a simpleton would assume that to be the one and only facet of another's character, wouldn't you agree?

So, there are some weakened forms of associativity, such as flexibility ($(xy)x=x(yx)$) or "alternativity" ($(xy)y=x(yy)$). Though, is there a place a person could look for an exploration of the way these properties inform the nature of the operation? (In particular, I'm trying to get a sense of how a "strictly flexible" operation would behave, i.e. $a(bc)=(ab)c\iff a=c$.)

@RyanUnger You're the guy to ask for this sort of thing, I think: if I want to compute $\langle R(\partial_1,\partial_2)\partial_2,\partial_1\rangle$ by hand, then I just want to expand out $R(\partial_1,\partial_2)\partial_2$ in terms of the connection, then use linearity of $\langle -,-\rangle$ and then use Koszul's formula? Or is there a smarter way?

I realized today that the possible $x$ inputs to $\operatorname{Round}(x^{1/2})$ cover $x^{1/2+\epsilon}$. In other words, we can always find an $\epsilon$ (small enough) such that $x^{1/2} \neq x^{1/2+\epsilon}$ but at the same time $\operatorname{Round}(x^{1/2})=\operatorname{Round}(x^{1/2+\epsilon})$. Am I right?

We have the following Simpson method $$y^{n+2}-y^n=\frac{h}{3}\left (f^{n+2}+4f^{n+1}+f^n\right ), \quad n=0, \ldots , N-2, \\ y^0, y^1 \text{ given.} $$ Show that the method is implicit and state the stability definition of that method. How can we show that the method is implicit?
Do we have to try to solve for $y^{n+2}$ as a function of $y^{n+1}$ and $y^n$?

@anakhro The energy of a graph is something studied in spectral graph theory. You set up an adjacency matrix for the graph, find the eigenvalues of that matrix, and then sum the absolute values of those eigenvalues. For a simple graph, the energy is defined as exactly this sum of the absolute values of the adjacency eigenvalues.
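The adjacency-eigenvalue recipe just described can be sketched in a few lines (a minimal illustration of my own, not from the chat; the function name and the $K_3$ example are hypothetical):

```python
import numpy as np

def graph_energy(edges, n):
    """Energy of a simple graph on n vertices: sum of |eigenvalues| of its adjacency matrix."""
    A = np.zeros((n, n))
    for i, j in edges:
        A[i, j] = A[j, i] = 1.0  # simple, undirected graph: symmetric 0/1 matrix
    eigenvalues = np.linalg.eigvalsh(A)  # symmetric matrix, so eigenvalues are real
    return float(np.sum(np.abs(eigenvalues)))

# Complete graph K_3 (a triangle): eigenvalues are 2, -1, -1, so the energy is 4.
print(graph_energy([(0, 1), (1, 2), (0, 2)], 3))  # → 4.0
```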
Gauss-Bonnet Theorem

Theorem

Let $\kappa$ be the Gaussian curvature of $M$. Let $k_g$ be the geodesic curvature of $\partial M$. Then:

$\displaystyle \int_M \kappa \, \mathrm d A + \int_{\partial M} k_g \, \mathrm d s = 2 \pi \chi\left({M}\right)$

where:

$\mathrm d A$ is the element of area of the surface
$\mathrm d s$ is the line element along $\partial M$
$\chi\left({M}\right)$ is the Euler characteristic of $M$.

Proof

Source of Name

It was Pierre Ossian Bonnet who first published, in $1848$, a special case.
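As a quick numerical sanity check (my own addition, not part of the entry above), consider the simplest case: a sphere of radius $R$ with no boundary. Here $\kappa = 1/R^2$ is constant, the area is $4\pi R^2$, and $\chi = 2$, so the theorem predicts the curvature integral equals $4\pi$:

```python
import math

# Gauss-Bonnet on a sphere of radius R (no boundary, so the k_g term vanishes):
# constant curvature kappa = 1/R^2 times area 4*pi*R^2 should equal 2*pi*chi.
R = 2.0                      # any radius works: curvature and area scale inversely
kappa = 1.0 / R**2
area = 4.0 * math.pi * R**2
total = kappa * area         # the integral of kappa over the sphere
chi = 2                      # Euler characteristic of the sphere
assert abs(total - 2 * math.pi * chi) < 1e-12
print(total)  # → 4*pi ≈ 12.566
```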
J/ψ production and nuclear effects in p-Pb collisions at √sNN=5.02 TeV (Springer, 2014-02) Inclusive J/ψ production has been studied with the ALICE detector in p-Pb collisions at the nucleon–nucleon center of mass energy √sNN = 5.02TeV at the CERN LHC. The measurement is performed in the center of mass rapidity ... Suppression of ψ(2S) production in p-Pb collisions at √sNN=5.02 TeV (Springer, 2014-12) The ALICE Collaboration has studied the inclusive production of the charmonium state ψ(2S) in proton-lead (p-Pb) collisions at the nucleon-nucleon centre of mass energy √sNN = 5.02TeV at the CERN LHC. The measurement was ... Event-by-event mean pT fluctuations in pp and Pb–Pb collisions at the LHC (Springer, 2014-10) Event-by-event fluctuations of the mean transverse momentum of charged particles produced in pp collisions at √s = 0.9, 2.76 and 7 TeV, and Pb–Pb collisions at √sNN = 2.76 TeV are studied as a function of the ... Centrality, rapidity and transverse momentum dependence of the J/$\psi$ suppression in Pb-Pb collisions at $\sqrt{s_{NN}}$=2.76 TeV (Elsevier, 2014-06) The inclusive J/$\psi$ nuclear modification factor ($R_{AA}$) in Pb-Pb collisions at $\sqrt{s_{NN}}$=2.76TeV has been measured by ALICE as a function of centrality in the $e^+e^-$ decay channel at mid-rapidity |y| < 0.8 ... Multiplicity Dependence of Pion, Kaon, Proton and Lambda Production in p-Pb Collisions at $\sqrt{s_{NN}}$ = 5.02 TeV (Elsevier, 2014-01) In this Letter, comprehensive results on $\pi^{\pm}, K^{\pm}, K^0_S$, $p(\bar{p})$ and $\Lambda (\bar{\Lambda})$ production at mid-rapidity (0 < $y_{CMS}$ < 0.5) in p-Pb collisions at $\sqrt{s_{NN}}$ = 5.02 TeV, measured ...
Multi-strange baryon production at mid-rapidity in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV (Elsevier, 2014-01) The production of ${\rm\Xi}^-$ and ${\rm\Omega}^-$ baryons and their anti-particles in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV has been measured using the ALICE detector. The transverse momentum spectra at ... Measurement of charged jet suppression in Pb-Pb collisions at √sNN = 2.76 TeV (Springer, 2014-03) A measurement of the transverse momentum spectra of jets in Pb–Pb collisions at √sNN = 2.76TeV is reported. Jets are reconstructed from charged particles using the anti-kT jet algorithm with jet resolution parameters R ... Two- and three-pion quantum statistics correlations in Pb-Pb collisions at √sNN = 2.76 TeV at the CERN Large Hadron Collider (American Physical Society, 2014-02-26) Correlations induced by quantum statistics are sensitive to the spatiotemporal extent as well as dynamics of particle-emitting sources in heavy-ion collisions. In addition, such correlations can be used to search for the ... Exclusive J/ψ photoproduction off protons in ultraperipheral p-Pb collisions at √sNN = 5.02TeV (American Physical Society, 2014-12-05) We present the first measurement at the LHC of exclusive J/ψ photoproduction off protons, in ultraperipheral proton-lead collisions at √sNN=5.02 TeV. Events are selected with a dimuon pair produced either in the rapidity ... Measurement of quarkonium production at forward rapidity in pp collisions at √s=7 TeV (Springer, 2014-08) The inclusive production cross sections at forward rapidity of J/ψ, ψ(2S), Υ(1S) and Υ(2S) are measured in pp collisions at √s=7 TeV with the ALICE detector at the LHC. The analysis is based on a data sample corresponding ...
I shall speak for the Oxford and Cambridge Club, in a joint event hosted by the Maths and Science Group and the Military History Group: an evening (6 June 2019) with dinner and talks on the theme of Enigma and code breaking. Abstract: I shall describe Alan Turing’s transformative philosophical analysis of the nature of computation, including his argument that some mathematical questions must inevitably remain beyond our computational capacity to answer. The talk will highlight ideas from Alan Turing’s phenomenal 1936 paper on computable numbers.

This will be a talk for the Theory Seminar of the theory research group in Theoretical Computer Science at Queen Mary University of London. The talk will be held on 4 June 2019 at 1:00 pm, ITL first floor. Abstract. Curious, often paradoxical instances of self-reference inhabit deep parts of computability theory, from the intriguing Quine programs and Ouroboros programs to more profound features of the Gödel phenomenon. In this talk, I shall give an elementary account of the universal algorithm, showing how the capacity for self-reference in arithmetic gives rise to a Turing machine program $e$, which provably enumerates a finite set of numbers, but which can in principle enumerate any finite set of numbers, when it is run in a suitable model of arithmetic. In this sense, every function becomes computable, computed all by the same universal program, if only it is run in the right world. Furthermore, the universal algorithm can successively enumerate any desired extension of the sequence, when run in a suitable top-extension of the universe. An analogous result holds in set theory, where Woodin and I have provided a universal locally definable finite set, which can in principle be any finite set, in the right universe, and which can furthermore be successively extended to become any desired finite superset of that set in a suitable top-extension of that universe.
Abstract: Potentialism can be seen as a fundamentally model-theoretic notion, in play for any class of mathematical structures with an extension concept, a notion of substructure by which one model extends to another. Every such model-theoretic context can be seen as a potentialist framework, a Kripke model whose modal validities one can investigate. In this talk, I’ll explain the tools we have for analyzing the potentialist validities of such a system, with examples drawn from the models of arithmetic and set theory, using the universal algorithm and the universal definition.

Abstract. What does it mean to make existence assertions in mathematics? Is there a mathematical universe, perhaps an ideal mathematical reality, that the assertions are about? Is there possibly more than one such universe? Does every mathematical assertion ultimately have a definitive truth value? I shall lay out some of the back-and-forth in what is currently a vigorous debate taking place in the philosophy of set theory concerning pluralism in the set-theoretic foundations, concerning whether there is just one set-theoretic universe underlying our mathematical claims or whether there is a diversity of possible set-theoretic worlds.

This will be a talk for the CUNY Set Theory seminar, Friday, March 22, 2019, 10 am in room 6417 at the CUNY Graduate Center. Abstract. I shall discuss recent joint work with Victoria Gitman and Asaf Karagila, in which we proved that Kelley-Morse set theory (which includes the global choice principle) does not prove the class Fodor principle, the assertion that every regressive class function $F:S\to\text{Ord}$ defined on a stationary class $S$ is constant on a stationary subclass. Indeed, it is relatively consistent with KM for any infinite $\lambda$ with $\omega\leq\lambda\leq\text{Ord}$ that there is a class function $F:\text{Ord}\to\lambda$ that is not constant on any stationary class.
Strikingly, it is consistent with KM that there is a sequence of classes $A_n$, each containing a class club, but the intersection of all $A_n$ is empty. Consequently, it is relatively consistent with KM that the class club filter is not $\sigma$-closed. I am given to understand that the talk will be streamed live online. I’ll post further details when I have them.

Abstract. An old argument, heard perhaps at a good math tea, proceeds: “there must be some real numbers that we can neither describe nor define, since there are uncountably many real numbers, but only countably many definitions.” Does it withstand scrutiny? In this talk, I will discuss the phenomenon of pointwise definable structures in mathematics, structures in which every object has a property that only it exhibits. A mathematical structure is Leibnizian, in contrast, if any pair of distinct objects in it exhibit different properties. Is there a Leibnizian structure with no definable elements? Must indiscernible elements in a mathematical structure be automorphic images of one another? We shall discuss many elementary yet interesting examples, eventually working up to the proof that every countable model of set theory has a pointwise definable extension, in which every mathematical object is definable.

This will be a talk for the Jowett Society on 8 February, 2019. The talk will take place in the Oxford Faculty of Philosophy, 3:30 – 5:30pm, in the Lecture Room of the Radcliffe Humanities building.

Abstract. Potentialism is the view, originating in the classical dispute between actual and potential infinity, that one’s mathematical universe is never fully completed, but rather unfolds gradually as new parts of it increasingly come into existence or become accessible or known to us.
Recent work emphasizes the modal aspect of potentialism, while decoupling it from arithmetic and from infinity: the essence of potentialism is about approximating a larger universe by means of universe fragments, an idea that applies to set-theoretic as well as arithmetic foundations. The modal language and perspective allow one to distinguish precisely various natural potentialist conceptions in the foundations of mathematics, whose exact modal validities are now known. Ultimately, this analysis suggests a refocusing of potentialism on the issue of convergent inevitability in comparison with radical branching. I shall defend the theses, first, that convergent potentialism is implicitly actualist, and second, that we should understand ultrafinitism in modal terms as a form of potentialism, one with surprising parallels to the case of arithmetic potentialism.

Abstract. We investigate the senses in which set-theoretic forcing can be seen as a computational process on the models of set theory. Given an oracle for the atomic or elementary diagram of a model of set theory $\langle M,\in^M\rangle$, for example, we explain senses in which one may compute $M$-generic filters $G\subset P\in M$ and the corresponding forcing extensions $M[G]$. Meanwhile, no such computational process is functorial, for there must always be isomorphic alternative presentations of the same model of set theory $M$ that lead by the computational process to non-isomorphic forcing extensions $M[G]\not\cong M[G']$. Indeed, there is no Borel function providing generic filters that is functorial in this sense. This is joint work with Russell Miller and Kameryn Williams.

Abstract. The Riemann rearrangement theorem asserts that a series $\sum_n a_n$ is absolutely convergent if and only if every rearrangement $\sum_n a_{p(n)}$ of it is convergent, and furthermore, any conditionally convergent series can be rearranged so as to converge to any desired extended real value.
How many rearrangements $p$ suffice to test for absolute convergence in this way? The rearrangement number, a new cardinal characteristic of the continuum, is the smallest size of a family of permutations, such that whenever the convergence and value of a convergent series is invariant by all these permutations, then it is absolutely convergent. The subseries number is defined similarly, as the smallest number of subseries whose convergence suffices to test a series for absolute convergence. The exact values of the rearrangement and subseries numbers turn out to be independent of the axioms of set theory. In this talk, I shall place the rearrangement and subseries numbers into a discussion of cardinal characteristics of the continuum, including an elementary introduction to the continuum hypothesis and an account of Freiling’s axiom of symmetry. This talk is based in part on joint work with Andreas Blass, Jörg Brendle, Will Brian, myself, Michael Hardy and Paul Larson.

@ARTICLE{BlassBrendleBrianHamkinsHardyLarson:TheRearrangementNumber,
  author = {Andreas Blass and Jörg Brendle and Will Brian and Joel David Hamkins and Michael Hardy and Paul B. Larson},
  title = {The rearrangement number},
  journal = {ArXiv e-prints},
  year = {2016},
  note = {manuscript under review},
  url = {http://jdh.hamkins.org/the-rearrangement-number},
  eprint = {1612.07830},
  archivePrefix = {arXiv},
  primaryClass = {math.LO},
  keywords = {under-review},
}

This will be a talk for the Logic Oberseminar at the University of Münster, January 11, 2019. Abstract. I shall present a new proof, with new applications, of the amazing extension theorem of Barwise (1971), which shows that every countable model of ZF has an end-extension to a model of ZFC + V=L.
This theorem is at once (i) a technical culmination of Barwise’s pioneering methods in admissible set theory and the admissible cover and (ii) one of those rare mathematical results saturated with significance for the philosophy of set theory. The new proof uses only classical methods of descriptive set theory, and makes no mention of infinitary logic. The results are directly connected with recent advances on the universal $\Sigma_1$-definable finite set, a set-theoretic version of Woodin’s universal algorithm.

I’ll be back in New York from Oxford, and this will be a talk for the CUNY Logic Workshop, December 14, 2018. Abstract. I shall present a new proof, with new applications, of the amazing extension theorem of Barwise (1971), which shows that every countable model of ZF has an end-extension to a model of ZFC + V=L. This theorem is at once (i) a technical culmination of Barwise’s pioneering methods in admissible set theory and the admissible cover and (ii) one of those rare mathematical results saturated with significance for the philosophy of set theory. The new proof uses only classical methods of descriptive set theory, and makes no mention of infinitary logic. The results are directly connected with recent advances on the universal $\Sigma_1$-definable finite set, a set-theoretic version of Woodin’s universal algorithm.

The Oxford Graduate Philosophy Conference will be held at the Faculty of Philosophy November 10-11, 2018, with graduate students from all over the world speaking on their papers, with responses and commentary by Oxford faculty. I shall be the faculty respondent to the delightful paper, “Paradoxical Desires,” by Ethan Jerzak of the University of California at Berkeley, offered under the following abstract.

Ethan Jerzak (UC Berkeley): Paradoxical Desires. I present a paradoxical combination of desires. I show why it’s paradoxical, and consider ways of responding to it.
The paradox saddles us with an unappealing disjunction: either we reject the possibility of the case by placing surprising restrictions on what we can desire, or we revise some bit of classical logic. I argue that denying the possibility of the case is unmotivated on any reasonable way of thinking about propositional attitudes. So the best response is a non-classical one, according to which certain desires are neither determinately satisfied nor determinately not satisfied. Thus, theorizing about paradoxical propositional attitudes helps constrain the space of possibilities for adequate solutions to semantic paradoxes more generally.

The conference starts with coffee at 9:00 am. This session runs 11 am to 1:30 pm on Saturday 10 November in the Lecture Room.

Abstract. In light of the comparative success of membership-based set theory in the foundations of mathematics, since the time of Cantor, Zermelo and Hilbert, it is natural to wonder whether one might find a similar success for set-theoretic mereology, based upon the set-theoretic inclusion relation $\subseteq$ rather than the element-of relation $\in$. How well does set-theoretic mereology serve as a foundation of mathematics? Can we faithfully interpret the rest of mathematics in terms of the subset relation to the same extent that set theorists have argued (with whatever degree of success) that we may find faithful representations in terms of the membership relation? Basically, can we get by with merely $\subseteq$ in place of $\in$? Ultimately, I shall identify grounds supporting generally negative answers to these questions, concluding that set-theoretic mereology by itself cannot serve adequately as a foundational theory. This is joint work with Makoto Kikuchi, and the talk is based on our joint articles:

J. D. Hamkins and M. Kikuchi, Set-theoretic mereology, Logic and Logical Philosophy, special issue “Mereology and beyond, part II”, pp. 1-24, 2016.
This will be a talk for the Logic Seminar in Oxford at the Mathematics Institute in the Andrew Wiles Building on October 9, 2018, at 4:00 pm, with tea at 3:30.

Abstract. The universal algorithm is a Turing machine program $e$ that can in principle enumerate any finite sequence of numbers, if run in the right model of PA, and furthermore, can always enumerate any desired extension of that sequence in a suitable end-extension of that model. The universal finite set is a set-theoretic analogue, a locally verifiable definition that can in principle define any finite set, in the right model of set theory, and can always define any desired finite extension of that set in a suitable top-extension of that model. Recent work has uncovered a $\Sigma_1$-definable version that works with respect to end-extensions. I shall give an account of all three results, which have a parallel form, and describe applications to the model theory of arithmetic and set theory.

This will be a talk for the Mathematics Colloquium at the University of Warwick, to be held October 19, 2018, 4:00 pm in Lecture Room B3.02 at the Mathematics Institute. I am given to understand that the talk will be followed by a wine and cheese reception.

Abstract. The Riemann rearrangement theorem asserts that a series $\sum_n a_n$ is absolutely convergent if and only if every rearrangement $\sum_n a_{p(n)}$ of it is convergent, and furthermore, any conditionally convergent series can be rearranged so as to converge to any desired extended real value. How many rearrangements $p$ suffice to test for absolute convergence in this way? The rearrangement number, a new cardinal characteristic of the continuum, is the smallest size of a family of permutations, such that whenever the convergence and value of a convergent series is invariant by all these permutations, then it is absolutely convergent. The exact value of the rearrangement number turns out to be independent of the axioms of set theory.
In this talk, I shall place the rearrangement number into a discussion of cardinal characteristics of the continuum, including an elementary introduction to the continuum hypothesis and an account of Freiling’s axiom of symmetry. This talk is based in part on joint work with Andreas Blass, Jörg Brendle, Will Brian, myself, Michael Hardy and Paul Larson.

@ARTICLE{BlassBrendleBrianHamkinsHardyLarson:TheRearrangementNumber,
  author = {Andreas Blass and Jörg Brendle and Will Brian and Joel David Hamkins and Michael Hardy and Paul B. Larson},
  title = {The rearrangement number},
  journal = {ArXiv e-prints},
  year = {2016},
  note = {manuscript under review},
  url = {http://jdh.hamkins.org/the-rearrangement-number},
  eprint = {1612.07830},
  archivePrefix = {arXiv},
  primaryClass = {math.LO},
  keywords = {under-review},
}
I read in Stewart's "Single Variable Calculus", page 83, that the limit $$\lim_{x\to 0}{1/x^2}$$ does not exist. How precise is this statement, given that this limit is $\infty$? I thought saying the limit does not exist was reserved for functions with no limiting behaviour at all, like $$\lim_{x\to \infty}{\cos x}.$$

According to some presentations of limits, it is proper to write "$\lim_{x\to 0}\frac{1}{x^2}=\infty$." This does not commit one to the existence of an object called $\infty$. The sentence is just an abbreviation for "given any real number $M$, there is a real number $\delta$ (which will depend on $M$) such that $\frac{1}{x^2}\gt M$ for all $x$ such that $0\lt |x| \lt \delta$." It turns out that we often wish to write sentences of this type, because they have important geometric content. So having an abbreviation is undeniably useful.

On the other hand, some presentations of limits forbid writing "$\lim_{x\to 0}\frac{1}{x^2}=\infty$." It is a matter of taste, a pedagogical choice. The main reason for choosing to forbid it is that careless manipulation of the symbol $\infty$ all too often leads to wrong answers.

A limit $$\lim_{x\to a} f(x)$$ exists if and only if it is equal to a number, and $\infty$ is not a number. For example, $\lim_{x\to 0} \frac{1}{x^{2}} = \infty$, so the limit does not exist.

When a function approaches infinity, the limit technically doesn't exist by the proper definition, which demands that it work out to be a number; we merely extend our notation in this particular instance. The point is that the limit may not be a number, but it is somewhat well behaved, and asymptotes are usually worth noting. The term "infinite limit" is actually an oxymoron, like "jumbo shrimp" or "unbiased opinion": true limits are finite.
However, it is okay to write down "lim f(x) = infinity" or "lim g(x) = -infinity" if the given function approaches either plus infinity or minus infinity from BOTH sides of whatever x is approaching, especially to distinguish this from the situation in which it approaches plus infinity on ONE side and minus infinity on the OTHER side, in which case the ONLY correct answer would be "the limit does not exist".

Note that, working in the affinely extended real numbers with the induced order topology, this limit exists and equals infinity, unambiguously. We also don't need a "special" definition for infinite limits with this approach, which is convenient.
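The $M$-$\delta$ reading of "$\lim_{x\to 0} 1/x^2=\infty$" quoted in the first answer can be checked numerically (a small sketch of my own; the witness $\delta = 1/\sqrt{M}$ is one choice that works, since $0<|x|<\delta$ then forces $1/x^2 > M$):

```python
import math

# For any bound M, delta = 1/sqrt(M) witnesses the M-delta definition:
# whenever 0 < |x| < delta, we have x^2 < 1/M, hence 1/x^2 > M.
def delta_for(M):
    return 1.0 / math.sqrt(M)

for M in [10.0, 1e6, 1e12]:
    delta = delta_for(M)
    x = delta / 2          # any sample point with 0 < |x| < delta
    assert 1.0 / x**2 > M  # the function has exceeded the bound M
print("bound verified for sample values of M")
```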
By Shiv Shankar. In this tutorial, I will explain how to display math equations in Windows 8 store applications using MathJax. MathJax is an open-source JavaScript display engine for mathematics that works in all modern browsers; it enables your web site or application to display mathematical equations. The scope of this tutorial is limited to Windows 8 store applications written in HTML/CSS/JavaScript. I'll cover the following topics in detail in the following sections:

1. How to set up your development environment?
2. How to make MathJax work in IE10 and Windows 8 apps?
3. How to reduce the size of MathJax by removing unnecessary files?
4. How to configure MathJax to ensure it works properly?
5. How to render math content at runtime?
6. How to prepare and publish a Windows 8 app that has math content?

1. How to set up your development environment?

I have used, and recommend, the following development environment:

Mac with Parallels Desktop 8.
Windows 8 Enterprise Evaluation Edition (valid for 90 days).
Visual Studio Express 2012 for Windows 8.
MathJax 2.1 beta or above.

We are also going to use some Unix-specific tools like iconv. If you don't have access to Unix-like machines, you may want to set up GNU tools on your Windows machine.

2. How to make MathJax work in IE10 and Windows 8 apps?

A Windows 8 application written in JavaScript runs in the embedded version of IE10 in Windows. Our first step is to ensure that we have a version of MathJax that runs properly in the IE10 browser. At the time of this writing, MathJax 2.1 beta is available and works in IE10. As a first step, from your Windows IE10 visit http://www.mathjax.org to check whether the math equations are displayed correctly. Download the latest version of MathJax into a separate folder outside the Visual Studio project. Do not try to add the whole of MathJax to Visual Studio; this will make Visual Studio hang!
We will be using the unpacked version of MathJax, stored in the unpacked directory. Windows apps are not allowed access to window.clipboardData and will get a runtime error: "access denied". We are going to fix this problem first. Edit the file MathJax.js in the unpacked directory and replace the following line, near line #2332:

isMSIE: (window.ActiveXObject != null && window.clipboardData != null)

with:

isMSIE: (window.ActiveXObject != null )

3. How to reduce the size of MathJax by removing files?

MathJax has around 30,000 files and occupies around 30MB of disk space. It is essential to reduce the size and the number of files. The core strategy is to remove the fonts that are not used by IE10. Further optimizations can be done by deleting unused files in the output directory. Through some trial and error, I kept only the following files and directories:

MathJax-2.1/images/*
MathJax-2.1/fonts/HTML-CSS/TeX/woff/*
MathJax-2.1/unpacked
MathJax-2.1/unpacked/MathJax.js
MathJax-2.1/unpacked/config/default.js
MathJax-2.1/unpacked/extensions/*
MathJax-2.1/unpacked/jax/element/mml/*
MathJax-2.1/unpacked/jax/input/*
MathJax-2.1/unpacked/jax/output/HTML-CSS/*

Total size: 3.5MB. Further optimizations are possible. Once you have done the optimizations, copy the above folders/files to your Visual Studio project and add the top-level folder (MathJax-2.1) to the solution.

4. How to configure MathJax to ensure it works properly?

Standard configurations work without any issues. Typically, in an app environment, math content is fetched using AJAX and displayed dynamically.

<script src="plugins/MathJax-2.1/unpacked/MathJax.js?config=default"> </script>

Note: You need to include all the required files in Visual Studio. You cannot dynamically include scripts from an external web site.

5. How to render math content at runtime?
Once you receive the AJAX response, you dynamically populate the HTML content and call the processMath function:

<div id="mymathcontent"> LaTeX: <span class="dynamic_content"> $$J_\alpha(x) = \sum\limits_{m=0}^\infty \frac{(-1)^m}{m! \, \Gamma(m + \alpha + 1)} {\left({\frac{x}{2}}\right)}^{2 m + \alpha}$$ </span> </div>

window.processMath = function (id) { MathJax.Hub.Queue(["Typeset", MathJax.Hub, document.getElementById(id)]); };
processMath("mymathcontent");

6. How to prepare and publish the app that has math content?

You need to convert the MathJax JavaScript files to UTF-8 encoding with a BOM (Byte Order Mark). Otherwise, the Windows self-certification test will fail and your app submission will be rejected. We need a couple of scripts to do the conversion. The PHP script outputbom.php emits the BOM header:

<?php echo chr(239) . chr(187) . chr(191); ?>

The C-shell script toutf8.csh converts a JavaScript file to UTF-8 encoding with a BOM header:

#!/bin/csh
echo Converting $1
mv $1 $1.orig
php ~/tools/outputbom.php > $1
iconv -t UTF8 $1.orig >> $1

Store both of these files in a tools directory under your home directory. The following find command does the bulk conversion:

% find unpacked -name "*.js" -exec ~/tools/toutf8.csh {} \;

You can test whether the conversion worked by building the app, running it, and running the Windows self-certification test. Delete the .orig files after the tests.

Closing

I hope you enjoyed the tutorial. Please send your questions or feedback to shiv@learnhive.net.

Acknowledgements

Thanks to Davide Cervone and Peter Krautzberger for their feedback and corrections.
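For readers without csh, PHP, or iconv, the same bulk conversion can be sketched in Python (my own cross-platform alternative, not part of the original tutorial; it assumes the MathJax sources are already valid UTF-8, which the shipped files are):

```python
# Rewrite every .js file under a directory as UTF-8 with a BOM, in place.
# Unlike the csh/iconv pair above, this is idempotent: files that already
# start with a BOM are skipped, so it is safe to run twice.
import codecs
from pathlib import Path

def add_bom(root):
    for path in Path(root).rglob("*.js"):
        data = path.read_bytes()
        if data.startswith(codecs.BOM_UTF8):
            continue  # already has a BOM, leave it alone
        path.write_bytes(codecs.BOM_UTF8 + data)

# Usage (hypothetical path, matching the tutorial's layout):
# add_bom("unpacked")
```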
Free keywords: General Relativity and Quantum Cosmology, gr-qc; Astrophysics, Cosmology and Extragalactic Astrophysics, astro-ph.CO; Astrophysics, High Energy Astrophysical Phenomena, astro-ph.HE; Astrophysics, Instrumentation and Methods for Astrophysics, astro-ph.IM

Abstract: We employ gravitational-wave radiometry to map the stochastic gravitational-wave background expected from a variety of contributing mechanisms, and to test the assumption of isotropy, using data from Advanced LIGO's first observing run. We also search for persistent gravitational waves from point sources with only minimal assumptions over the 20 - 1726 Hz frequency band. Finding no evidence of gravitational waves from either point sources or a stochastic background, we set limits at 90% confidence. For broadband point sources, we report upper limits on the gravitational wave energy flux per unit frequency in the range $F(f, \Theta) < (0.1 - 56) \times 10^{-8}$ erg cm$^{-2}$ s$^{-1}$ Hz$^{-1}$ (f/25 Hz)$^{\alpha-1}$ depending on the sky location $\Theta$ and the spectral power index $\alpha$. For extended sources, we report upper limits on the fractional gravitational wave energy density required to close the Universe of $\Omega(f,\Theta) < (0.39-7.6) \times 10^{-8}$ sr$^{-1}$ (f/25 Hz)$^\alpha$ depending on $\Theta$ and $\alpha$. Directed searches for narrowband gravitational waves from astrophysically interesting objects (Scorpius X-1, Supernova 1987 A, and the Galactic Center) yield median frequency-dependent limits on strain amplitude of $h_0 <$ (6.7, 5.5, and 7.0) $\times 10^{-25}$ respectively, at the most sensitive detector frequencies between 130 - 175 Hz. This represents a mean improvement of a factor of 2 across the band compared to previous searches of this kind for these sky locations, considering the different quantities of strain constrained in each case.
Production of charged pions, kaons and protons at large transverse momenta in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV (Elsevier, 2014-09) Transverse momentum spectra of $\pi^{\pm}, K^{\pm}$ and $p(\bar{p})$ up to $p_T$ = 20 GeV/c at mid-rapidity, |y| $\le$ 0.8, in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV have been measured using the ALICE detector ...
I will start off by addressing Jon's comment above. Yes, the Bohr model is flawed. I think it is still worth learning about it just from a historical standpoint, to see how we discovered the quantum mechanical description of the electron, but when you are studying it you absolutely have to remember that all of what he said was effectively rubbish. Now on to your questions. Is there a relationship between an electron's energy and its distance from the nucleus? The first thing that should be said is that in QM, unlike in the Bohr model, there is no well-defined "distance from the nucleus". Electrons exist everywhere (except at nodes) and the probability density is given by $\lvert\psi\rvert^2$. However, it is possible to calculate the "most probable distance from the nucleus" using the radial distribution function, $r^2 R^2$, where $R(r)$ is the radial part of the wavefunction. Essentially, you take the derivative of this with respect to $r$ and set it to 0 - you can read more here. For a 1s electron in hydrogen, you will find that the most probable radius is approximately $52.9 \text{ pm}$, a quantity called the Bohr radius and denoted $a_0$. (As an aside: oddly enough, this correlates with the flawed Bohr model, which said that the radius of the 1s electron in hydrogen was $52.9 \text{ pm}$. However, what Bohr said was that the electron was always at this distance, which is wrong. QM says that the electron is most likely to be found at this distance.) Anyway, you will find that there is some degree of correlation between the most probable radius and the energy of the electron. If we consider the 2s orbital of hydrogen, you can perform exactly the same calculation as you did for the 1s orbital, and you will find that the most probable radius is $(3+\sqrt{5})a_0 \approx 5.24a_0$, the outer maximum of the 2s radial distribution function (note this is not the Bohr-model value of $4a_0$ for $n = 2$). Also, it is often mentioned that an atom is 99.99% empty space etc... That's just a simplified view presented in popular science (or elementary chemistry classes) that ignores the quantum mechanical model.
No serious chemistry book would write something like that. I don't know exactly how it's calculated, but I can make a good guess. You take any atom you like, let's say hydrogen. You look up the atomic radius - Google tells you it's $52.9 \text{ pm}$. (No surprises there.) So you can think of the atom as a sphere with that radius, and calculate the volume of the sphere: $V = \frac{4}{3}\pi r^3$. Then you go and look up the volume of an electron and a proton (a physicist would get a headache from just reading the phrase "volume of an electron"), and add it up, and you find that the "atom" is "99.99% empty space". As Ivan succinctly said - with the QM description of the atom, there is no longer such a thing as free space since the electron "exists everywhere". Apart from that, just like how the distance from the nucleus is no longer well-defined in QM, the "volume" of an atom is also no longer well-defined. We pretty much threw that view out of the window.
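The "take the derivative and set it to 0" step from the answer above can be checked symbolically. Here is a minimal SymPy sketch for the 1s case, using $R(r)\propto e^{-r/a_0}$ with normalisation constants dropped, since they don't move the maximum:

```python
# Most probable radius of the hydrogen 1s electron: maximise r^2 R(r)^2
# with R(r) proportional to exp(-r/a0); normalisation is omitted.
import sympy as sp

r, a0 = sp.symbols("r a0", positive=True)
P = r**2 * sp.exp(-2*r/a0)               # radial distribution function r^2 R^2
critical = sp.solve(sp.diff(P, r), r)    # stationary points of P(r)
print(critical)                          # the Bohr radius a0 is among them
```

The same recipe applied to the 2s and higher radial parts picks out their outer maxima as well.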
I often see the term white noise appearing when reading about different statistical models. I must however admit that I am not completely sure what this means. It is usually abbreviated as $WN(0,σ^2)$. Does that mean it's normally distributed, or could it follow any distribution? TL;DR The answer is NO, it doesn't have to be normal; YES, it can follow other distributions. Colors of the noise Let's talk about the colors of noise. The noise that an infant makes during air travel is not white. It has color. The noise that an airplane engine makes is also not white, but it's not as colored as the kid's noise. It's whiter. The noise that an ocean or a forest produces is almost white. If you use noise-cancelling headphones, you know that #1 is impossible to cancel. It'll pierce through any headphone with ease. #2 will be cancelled very well. As to #3, why would you cancel it? Origin of the term "color" What's the distinction between these three noises? It comes from spectral analysis. As you know from high school, you can send white light through a prism and it will split into all the different colors. That's what we call white: all colors in approximately the same proportion. No color dominates. A color is light of a certain frequency, or equivalently an electromagnetic wave of a certain wavelength. Red light has a lower frequency than blue; equivalently, red has a longer wavelength of almost 800 nm compared to the blue wavelength of 450 nm. Spectral Analysis If you take noise, whether acoustic, radio or other, and send it through a spectral analysis tool such as the FFT, you get its spectral decomposition. You'll see how much of each frequency is in the noise, like shown in the next picture from Wikipedia. It's clear that this is not white noise: it has clear peaks at 50 Hz, 40 Hz, etc. If a narrow frequency band sticks out, then the noise is called colored, as in not white.
So, white noise is just like white light: it has a wide range of frequencies in approximately the same proportion, like shown in the next figure from this site. The top chart shows the recording of the amplitude, and the bottom shows the spectral decomposition. No frequency sticks out. So the noise is white. Perfect sine Now, why does a sequence of independent, identically distributed (iid) random numbers generate white noise? Let's think of what makes a signal colored. It's the waves of a certain frequency sticking out from the others. They dominate the spectrum. Consider a perfect sine wave: $\sin(2\pi t)$. Let's see what the covariance is between any two points $\phi=1/2$ seconds apart. Since $\sin(2\pi(t+1/2)) = -\sin(2\pi t)$, $$E[\sin(2\pi t) \times \sin(2\pi (t+1/2))]=-E[\sin^2 (2\pi t)]=-\frac 1 2$$ So, in the presence of the sine wave, we get autocorrelation in the time series: all observations half a second apart will be perfectly negatively correlated! Now, saying that our data is i.i.d. implies that there is no autocorrelation whatsoever. This means that there are no waves in the signal. The spectrum of the noise is flat. Imperfect Example Here's an example I created on my computer. I first recorded my tuning fork, then I recorded the noise from the computer's fans. Then I ran the following MATLAB code to analyze the spectra:

[y,Fs] = audioread(filew);
data = y(1000:5000,1);
plot(data)
figure
periodogram(data,[],[],Fs);
[pxx,f] = periodogram(data,[],[],Fs);
[pm,i] = max(pxx);
f(i)

Here's the signal and the spectrum of the tuning fork. As expected, it has a peak at around 440 Hz. The tuning fork must produce a nearly ideal sine wave signal, like in my theoretical example earlier. Next I did the same for the noise. As expected, no frequency sticks out. Obviously this is not exactly white noise, but it gets quite close to it. I think there must be a very high-pitched frequency; it bothers me a little bit, and I need to change the fan soon. However, I don't see it in the spectrum.
Maybe that's because my microphone is poor, or the sampling frequency is not high enough. Distribution doesn't matter The important part is that in the random sequence the numbers are not autocorrelated (or, even stronger, independent). The exact distribution is not important. It could be Gaussian or gamma, but as long as the numbers do not correlate in the sequence, the noise will be white. White noise simply means that the sequence of samples is uncorrelated with zero mean and finite variance. There is no restriction on the distribution from which the samples are drawn. Now if the samples happen to be drawn from a normal distribution, you have a special type of white noise called Gaussian white noise.
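As a quick sanity check that the distribution doesn't matter, here is a small NumPy sketch: iid samples from a uniform (decidedly non-Gaussian) distribution, whose sample autocorrelations at small lags come out near zero.

```python
# White noise from a uniform distribution: iid, zero mean, finite variance.
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(-1.0, 1.0, 100_000)   # iid but not Gaussian
x -= x.mean()                          # centre at zero

# sample autocorrelation at lags 1..5; for white noise these are ~0
acf = [float(np.dot(x[:-k], x[k:]) / np.dot(x, x)) for k in range(1, 6)]
print(acf)
```

Swapping `rng.uniform` for `rng.gamma` or any other finite-variance distribution leaves the picture unchanged: the sequence stays white.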
I have a bivariate normal distribution$$(X, Y)\sim N(\mu_{x}, \mu_{y}, \sigma_{x}^2, \sigma_{y}^2, \rho)$$ My question is: when $X > k$ ($k$ is a constant), how do I get the distribution of $Y$? Can anyone tell me how to solve it? For example, let $$(X, Y) \sim N(0, 0, 1, 1, 0.7)$$ When $X > 1$, what is the distribution of $Y$? Using the usual notation, the conditional (truncated) distribution $Y\mid X>k$ for some fixed $k$ is given by \begin{align} f_{Y\mid X>k}(y)&=\int_k^\infty\frac{f_{X,Y}(x,y)}{P(X>k)}\,dx \\\\&=\frac{1}{P(X>k)}\int_k^\infty f_{Y\mid X=x}(y\mid x)f_X(x)\,dx\qquad,\,y\in\mathbb R \end{align} You can now find this density explicitly given any joint distribution $(X,Y)$.
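For the worked example $(X,Y)\sim N(0,0,1,1,0.7)$ with $X>1$, the conditional mean has the closed form $E[Y\mid X>1]=\rho\,\varphi(1)/(1-\Phi(1))\approx 1.07$. A Monte-Carlo sketch to check it (sample size and seed are arbitrary choices of mine):

```python
# Monte-Carlo check of E[Y | X > 1] for (X, Y) ~ N(0, 0, 1, 1, 0.7).
import math
import numpy as np

rho = 0.7
rng = np.random.default_rng(1)
n = 1_000_000
x = rng.standard_normal(n)
y = rho * x + math.sqrt(1 - rho**2) * rng.standard_normal(n)  # Corr(X, Y) = rho

phi1 = math.exp(-0.5) / math.sqrt(2 * math.pi)     # standard normal pdf at 1
Phi1 = 0.5 * (1 + math.erf(1 / math.sqrt(2)))      # standard normal cdf at 1
theory = rho * phi1 / (1 - Phi1)                   # inverse Mills ratio times rho

empirical = y[x > 1].mean()
print(empirical, theory)
```

The full truncated density of $Y$ can be tabulated the same way, by histogramming `y[x > 1]`.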
Impact Factor 2019: 0.808 The journal Asymptotic Analysis fulfills a twofold function. It aims at publishing original mathematical results in the asymptotic theory of problems affected by the presence of small or large parameters on the one hand, and at giving specific indications of their possible applications to different fields of natural sciences on the other hand. Asymptotic Analysis thus provides mathematicians with a concentrated source of newly acquired information which they may need in the analysis of asymptotic problems. Authors: Orsina, Luigi Article Type: Research Article Citation: Asymptotic Analysis, vol. 34, no. 3,4, pp. 187-198, 2003 Article Type: Research Article Abstract: According to the Aharonov–Bohm effect, magnetic potentials have a direct significance to the motion of particles in quantum mechanics. We study this quantum effect through the scattering by several point-like magnetic fields at large separation in two dimensions. We derive the asymptotic formula for scattering amplitudes as the distances between the centers of the fields go to infinity. The result depends heavily on the location of the centers. A special emphasis is placed on the case of scattering by fields with centers on an even line. The obtained formula depends on the fluxes of the fields and on the ratios of distances between adjacent centers. We …figure the approximate values of differential cross sections to see how the pattern of interferences changes with the flux parameters. Citation: Asymptotic Analysis, vol. 34, no. 3,4, pp. 199-240, 2003 Article Type: Research Article Citation: Asymptotic Analysis, vol. 34, no. 3,4, pp.
241-259, 2003 Authors: Petrini, Milena Article Type: Research Article Abstract: We study an initial value problem related to a viscoelasticity model in a periodic medium, with oscillating data at t=0, looking for the homogenization limit of the associated energy density. This limit is determined in a specific case in which the elastic coefficient is proportional to the coefficient in the viscosity part; in this case the energy density decays exponentially in time. Citation: Asymptotic Analysis, vol. 34, no. 3,4, pp. 261-273, 2003 Authors: Colin, Mathieu Article Type: Research Article Abstract: In this article, we study the nonlinear plasma wave equation \[-\varepsilon^{2}\dfrac{\partial^{2}u_{\varepsilon}}{\partial t^{2}}+2\mathrm{i}\dfrac{\partial u_{\varepsilon}}{\partial t}+\Delta u_{\varepsilon}=\Big(\dfrac{1}{\sqrt{1+|u_{\varepsilon}|^{2}}}-1\Big)u_{\varepsilon}+\dfrac{\Delta\big(\sqrt{1+|u_{\varepsilon}|^{2}}\big)}{\sqrt{1+|u_{\varepsilon}|^{2}}}u_{\varepsilon}\] with initial data $u_{\varepsilon}(\cdot,0)=u_{0}^{\varepsilon}(\cdot)\in H^{8}(\mathbb{R}^{2})$, $\partial_{t}u_{\varepsilon}(\cdot,0)=u_{1}^{\varepsilon}(\cdot)\in H^{7}(\mathbb{R}^{2})$. We show that the Cauchy problem is locally well-posed on an interval [0,T] where the time T is independent of $\varepsilon$ if $u_{1}^{\varepsilon}$ is small enough. Then, we demonstrate the strong convergence of $u_{\varepsilon}$ towards the solution u of a nonlinear relativistic Schrödinger equation as $\varepsilon$ goes to 0. Citation: Asymptotic Analysis, vol. 34, no. 3,4, pp. 275-309, 2003 Article Type: Research Article Abstract: We consider the linearized equations of slightly compressible single fluid flow through a highly heterogeneous random porous medium, consisting of two types of material. Due to the high heterogeneity of the two materials, the ratio of their permeability coefficients is of order $\varepsilon^2$, where $\varepsilon$ is the characteristic scale of the heterogeneities.
Supposing that the matrix block set of the porous medium consists of random stationary inclusions, and assuming positive definiteness of the effective permeability tensor associated to the corresponding Neumann problem for the random fracture system, we obtain the homogenized problem for a random version of the double porosity model used in geohydrology. It includes as a particular case the periodic setting, already studied by homogenization theory methods (see, for example, [1,7]). The homogenized problem is obtained by using stochastic two-scale convergence in the mean, and by means of convergence results specially adapted to our a priori estimates and to the random geometry, which do not require extension of solutions to the matrix part. Citation: Asymptotic Analysis, vol. 34, no. 3,4, pp. 311-332, 2003 Article Type: Research Article Citation: Asymptotic Analysis, vol. 34, no. 3,4, pp. 333-358, 2003 Inspirees International (China Office) Ciyunsi Beili 207 (CapitaLand), Bld 1, 7-901, 100025, Beijing, China Free service line: 400 661 8717 Fax: +86 10 8446 7947 china@iospress.cn For editorial issues, like the status of your submitted paper or proposals, write to editorial@iospress.nl If you need help with publishing or have any suggestions, write to: editorial@iospress.nl
Warning: my background is mostly in probability and analysis, and not in logic. When reading or writing a complex proposition, with long chains of "for all... there exists... for all...", I tend to understand the structure of the sentence of quantifiers as a way to describe how some parameters depend on other parameters. For instance, Poincaré's inequality in $\mathbb{R}^n$ reads: For all $p \in [1, +\infty]$, for all piecewise Lipschitz bounded open subsets $\Omega$, there exists a constant $C>0$ such that, for all $u \in W^{1, p} (\Omega)$, $$\|u - \mathbb{E} (u)\|_{\mathbb{L}^p (\Omega)} \leq C \|\nabla u\|_{\mathbb{L}^p (\Omega)}.$$ Let me denote by $A$ the set of piecewise Lipschitz bounded open subsets of $\mathbb{R}^n$. The same proposition can be understood as: There exists a function $C : [1, +\infty] \times A \to \mathbb{R}_+^*$ such that, for all $p \in [1, +\infty]$, for all piecewise Lipschitz bounded open subsets $\Omega$, for all $u \in W^{1, p} (\Omega)$, $$\|u - \mathbb{E} (u)\|_{\mathbb{L}^p (\Omega)} \leq C_{p, \Omega} \|\nabla u\|_{\mathbb{L}^p (\Omega)}.$$ So, in some sense, the order of the quantifiers encodes the parameters some functions are allowed to depend on. This becomes unwieldy when the sets of parameters the functions can depend on are not well ordered by inclusion. Let $A$, $B$, $C$ and $D$ be four sets and $P$ be a proposition with three free variables in $D$. We could imagine something like: There exist functions $f : A \times B \to D$, $g : B \times C \to D$, $h : C \times A \to D$ such that, for all $a \in A$, for all $b \in B$, for all $c \in C$, $$P (f(a, b), g(b, c), h(c, a)).$$ Is there a nice way to encode such a dependence on parameters into the way the proposition is built, as can be done e.g. for Poincaré's inequality?
$L_1$ convergence (which means $E(|X_n-X|)\to 0$) and almost sure convergence are incomparable conditions: neither implies the other. Yours must be an example where almost sure convergence doesn't imply $L_1$ convergence. There are simpler examples, like $$ X_n = \left\{\begin{array}{ll}0&\mbox{with probability $1-1/n^2$}\\n^3&\mbox{with probability $\frac{1}{n^2}$}\end{array}\right.$$ Essentially, what can happen is that the variable is usually close to some value (the value it converges to a.s.), but very rarely very far away. Even if these excursions are rare enough that the variable is guaranteed to be far away only finitely often (so that convergence is almost sure; here this follows from the Borel–Cantelli lemma, since $\sum 1/n^2 < \infty$), the variable can be so large in the rare event when it is far away that the expected value is pushed away from the value to which it converges almost surely. There can also be cases where the sequence strays infinitely often, so as not to converge, but nonetheless doesn't stray very far, so the mean is unaffected. For instance, if the $X_n$ are independent with $$ X_n = \left\{\begin{array}{ll}0&\mbox{with probability $1-1/n$}\\1&\mbox{with probability $\frac{1}{n}$}\end{array}\right.$$ then $X_n$ almost surely does not converge (by the second Borel–Cantelli lemma, since $\sum 1/n$ diverges), but $E(|X_n|)\to 0.$
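The first example can be checked by direct computation rather than simulation. A small sketch confirming that $E|X_n| = n^3/n^2 = n$ diverges while $\sum_n P(X_n \neq 0) = \sum_n 1/n^2$ stays bounded, which is exactly what lets Borel–Cantelli give almost sure convergence:

```python
# E|X_n| = n^3 * (1/n^2) = n diverges, yet sum P(X_n != 0) converges.
from fractions import Fraction

expectations = [Fraction(n**3, n**2) for n in range(1, 6)]   # exact values of E|X_n|
prob_sum = sum(Fraction(1, n**2) for n in range(1, 10_000))  # partial sum of 1/n^2

print(expectations)      # grows like n, unbounded
print(float(prob_sum))   # stays below pi^2/6
```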
This assignment is purely optional! Due: November 27th at 11:59pm In this assignment you will get your hands dirty with theano, which is a framework that has been the basis of a lot of work in deep learning. Writing code in theano is very different from what we are accustomed to. In class you had a taste of it, where we saw how to program logistic regression. Your task for this assignment is to implement ridge regression (again!), and explore some variants of it. Recall that ridge regression is the regularized form of linear regression, and is the linear function $$h(\mathbf{x}) = \mathbf{w}^{T} \mathbf{x} + b$$ that minimizes the cost function $$ \frac{1}{N}\sum_{i=1}^N (h(\mathbf{x}_i) - y_i)^2 + \lambda ||\mathbf{w}||_2^2.$$ The first term is the average loss incurred over the training set, and the second term is the regularization term. The regularization we considered thus far uses the so-called L2 norm, $||\cdot||_2^2$. As discussed in class (see the second slide set that discusses SVMs), there are other options, the primary one being the L1 penalty, $||\mathbf{w}||_1 = \sum_{i=1}^d |w_i|$. L1 regularization often leads to very sparse solutions, i.e. a weight vector with many coefficients that are zero (or very close to it). However, gradient descent does not work in this case, since the L1 penalty is not differentiable. A simple solution to that is to use a smooth approximation of the L1 norm defined by: $$\sum_{i=1}^d \sqrt{w_i^2 + \epsilon}.$$ This function converges to the L1 norm as $\epsilon$ goes to 0, and is a useful surrogate which can be used with gradient descent. In this assignment we will also explore using a different loss function. As discussed in class, the squared loss $(h(\mathbf{x}) - y)^2$ has the issue of being sensitive to outliers. The Huber loss is an alternative that combines a quadratic part for its smoothness, and a linear part for resistance to outliers.
We'll consider a simpler loss function: $$\log \cosh\big(h(\mathbf{x}) - y\big),$$ called the Log-Cosh loss. Recall that $\cosh(z) = \frac{\exp{(z)} + \exp{(-z)}}{2}$. What you need to do for this assignment: Implement a class RegularizedRegression that provides regularized linear regression and has the options of two loss functions ('square' or 'logcosh') and two regularizers ('L1' or 'L2'). The class should provide both regular gradient descent and stochastic gradient descent for optimizing the weight vector and bias, and should use theano for its computations. To help you with the implementation, here's a theano symbolic expression that implements the squared loss: squared_loss = T.mean(T.sqr(prediction - y)) In your code, follow the standard interface we have used in coding classifiers; the code I have shown for logistic regression gives you much of what you need for the coding part of this assignment. Submit your report via Canvas. Python code can be displayed in your report if it is short and helps explain what you have done. The sample LaTeX document provided in assignment 1 shows how to display Python code. Submit the Python code that was used to generate the results as a file called assignment6.py (you can split the code into several .py files; Canvas allows you to submit multiple files). Typing $ python assignment6.py should generate all the tables/plots used in your report. A few general guidelines for this and future assignments in the course: We will take off points if these guidelines are not followed. Grading sheet for assignment 6 (50 points): Correct implementation of regularized ridge regression (20 points): Exploration of gradient descent vs stochastic gradient descent (30 points): Exploration of loss and regularization term
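The assignment asks for a theano implementation; as a reference for what the gradients look like, here is a plain NumPy sketch (not the required theano version) of gradient descent on the log-cosh loss with the smooth-L1 penalty. All names, hyperparameters, and the toy data below are my own choices, not part of the assignment.

```python
# NumPy sketch: gradient descent for log-cosh loss + smooth L1 penalty.
# Key derivatives: d/dz log cosh(z) = tanh(z);
#                  d/dw_i sqrt(w_i^2 + eps) = w_i / sqrt(w_i^2 + eps).
import numpy as np

def fit(X, y, lam=0.01, eps=1e-4, lr=0.2, steps=2000):
    n, d = X.shape
    w, b = np.zeros(d), 0.0
    for _ in range(steps):
        r = X @ w + b - y                                       # residuals
        gw = X.T @ np.tanh(r) / n + lam * w / np.sqrt(w**2 + eps)
        gb = np.tanh(r).mean()
        w, b = w - lr * gw, b - lr * gb
    return w, b

# toy data: y depends only on the first feature
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 2))
y = 2.0 * X[:, 0]
w, b = fit(X, y)
print(w, b)    # w[0] near 2, w[1] and b near 0
```

In theano the same objective would be built symbolically, in the style of the `squared_loss` expression above, and the gradients obtained with `T.grad` rather than written by hand.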
Relative Permeability at Near-Critical Conditions Authors S.M.P. Blom (Delft U. of Technology) | Jacques Hagoort (Delft U. of Technology) | D.P.N. Soetekouw (Delft U. of Technology) DOI https://doi.org/10.2118/62874-PA Document ID SPE-62874-PA Publisher Society of Petroleum Engineers Source SPE Journal Volume 5 Issue 02 Publication Date June 2000 Document Type Journal Paper Pages 172 - 181 Language English ISSN 1086-055X Copyright 2000. Society of Petroleum Engineers Disciplines 4.3.4 Scale, 5.3.2 Multiphase Flow, 5.2.1 Phase Behavior and PVT Measurements, 5.4.1 Waterflooding, 4.1.9 Tanks and storage systems, 5.5.2 Core Analysis, 4.1.2 Separation and Treating, 5.3.1 Flow in Porous Media, 5.4.7 Chemical Flooding Methods (e.g., Polymer, Solvent, Nitrogen, Immiscible CO2, Surfactant, Vapex), 4.6 Natural Gas, 5.8.8 Gas-condensate reservoirs, 4.1.5 Processing Equipment, 5.6.8 Well Performance Monitoring, Inflow Performance, 5.2 Reservoir Fluid Dynamics Summary We have measured a series of two-phase drainage relative permeability curves at near-critical conditions by means of the displacement method. As a fluid system we have used the model system methanol/n-hexane, which exhibits a critical point at ambient conditions. In the measurements we have varied the interfacial tension and the flow rate. Our results show a clear trend from immiscible relative permeability functions to miscible relative permeability lines with decreasing interfacial tension and increasing superficial velocity. The relative permeability measurements show that the controlling parameter is the ratio of viscous to capillary forces on a pore scale, denoted by the capillary number $N_{c} = k\|\nabla\Phi\|/(\phi\sigma)$.
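For orientation, the capillary number from the summary, $N_c = k\|\nabla\Phi\|/(\phi\sigma)$, can be evaluated for order-of-magnitude inputs. The values below are illustrative assumptions of mine, not data from the paper:

```python
# Illustrative capillary-number calculation N_c = k * |grad Phi| / (phi * sigma).
# All values are hypothetical, order-of-magnitude inputs.
k = 1e-13          # permeability, m^2 (about 100 mD)
grad_phi = 1.0e5   # flow-potential gradient, Pa/m
phi = 0.2          # porosity, dimensionless
sigma = 1.0e-3     # interfacial tension, N/m (low, as at near-critical conditions)

Nc = k * grad_phi / (phi * sigma)
print(Nc)          # dimensionless ratio of viscous to capillary forces
```

Lowering the interfacial tension or raising the potential gradient raises $N_c$, which is the direction in which the paper reports the shift toward miscible-like relative permeability curves.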
To demonstrate the significance of using the proper relative permeability functions, we have calculated the well impairment due to liquid drop-out in a model gas condensate reservoir, for four different rock types showing four different relations between relative permeability and the capillary number. The calculations show that near-miscible relative permeability functions come into play in the vicinity of the well bore. This is contrary to what happens if the relative permeability would be a function of interfacial tension alone. In addition, the results show that well impairment by condensate drop-out may be significantly overestimated if the dependence of relative permeability on the capillary number is ignored. Introduction As hydrocarbon exploration moves to deeper geological formations, volatile oil and gas condensate reservoirs become increasingly important. At initial reservoir conditions, the hydrocarbon fluids in these reservoirs are often found at near-critical conditions. As a consequence, the physical properties of the oil phase and the gas phase are very similar and the interfacial tension between oil and gas is very low. The latter may have an important bearing on the multiphase flow characteristics in the reservoir during the production phase. An example of an important multiphase fluid problem at near-critical conditions is condensate drop-out in the vicinity of wells in a gas condensate reservoir. 1 This drop-out causes an apparent skin resistance at the well bore that impairs the production capacity of the well. Conventionally, multiphase flow in porous media is described by means of the concept of relative permeability functions, empirical relationships for the decrease in effective permeability to a flowing fluid phase as a function of the fluid saturation. At conditions far from the critical point, multiphase flow is, when viewed on a pore scale, dominated by capillary forces relative to viscous and gravitational forces. 
Hence, relative permeability functions may be considered constant, i.e., independent of flow rate and interfacial tension. The constant functions are commonly referred to as immiscible relative permeability functions. At the other extreme, in the limit of zero-interfacial tension, relative permeability curves reduce to linear functions of the fluid saturation. These straight lines are referred to as miscible relative permeability functions. In the following, we use the term "near-miscible relative permeability functions" to denote the curves found in the region between the immiscible limit and the miscible limit. A review of the literature 2 reveals that there is no consensus on how near miscibility changes relative permeability curves and which parameters are controlling this change. Some investigators have found that the relative permeability to the nonwetting phase is affected more easily, 3-5 whereas others observed a greater increase of the relative permeability to the wetting phase compared with the relative permeability to the nonwetting phase. 6,7 Other authors did not find an effect of interfacial tension at all. 8,9 Equally contradicting are the reports on the effect of flow velocity on near-miscible relative permeability. Some investigators find no effect, 10,11 whereas others do. 4,12 In addition, Henderson et al. 6 have reported that relative permeability is only affected by the flow velocity if the fluids enter the porous medium as a single, homogeneous phase, and subsequently, are allowed to separate into two phases inside the pores. There appear to be two conflicting views on which mechanism controls the change in relative permeability. Many authors argue that a low interfacial tension affects relative permeability through the ratio between viscous forces and capillary forces, as denoted by the capillary number. 
3-5,12-15 Most of these authors, however, suggest that there is a threshold interfacial tension below which the capillary-number dependence becomes important. 3-5,14,15 Other investigators interpret their relative permeability data in terms of the interfacial tension alone. 6,11,16-19 In two cases, this was done in view of the fact that a transition from partial wetting to complete wetting, as predicted by Cahn, 20 may affect the mobility of both phases. 11,16 The influence of such a transition cannot be described in terms of the capillary number, because it is directly induced by a change in the interfacial tension between the near-miscible phases. The objective of the work presented in this paper is twofold. First, we want to provide conclusive experimental evidence of the effect of interfacial tension and flow velocity on near-miscible relative permeability. For this purpose, we have measured relative permeability curves of a near-miscible fluid system at varying interfacial tension and varying flow rate. Second, we want to demonstrate the significance of using proper relative permeability curves for the evaluation of well impairment in gas condensate reservoirs. To this end, we have extended the steady-state method of Jones and Raghavan 21 by incorporating near-miscible relative permeability functions. Using this method, we have calculated the pressure profile in a model reservoir for our relative permeability data set and three other data sets on sandstone samples. Experimental Method Fluid System. Because of the universal behavior of near-critical thermodynamic quantities, 22 phenomena evoked by the vicinity of a critical point will occur both in gas/liquid equilibria and in liquid/liquid equilibria. Consequently, a near-miscible binary liquid system can be used as a model for a near-miscible gas/liquid system.
As a fluid system, we have selected the binary liquid mixture methanol/n-hexane as a model for a near-critical gas/condensate or gas/volatile oil system. The methanol/hexane system exhibits a critical solution temperature at atmospheric pressure, at a temperature of 33.5°C. Below this temperature, the mixture may segregate into a methanol-rich liquid phase in equilibrium with a hexane-rich liquid phase. File Size 331 KB Number of Pages 10
Reciprocal of Positive Real Number is Positive

Theorem

Let $a \in \R$ such that $a > 0$. Then $a^{-1} = \dfrac 1 a > 0$. It follows directly that $a < 0 \implies a^{-1} < 0$.

Proof

Aiming for a contradiction, suppose $a > 0$ but $a^{-1} \le 0$. Then:

$a^{-1} \le 0$
$\leadsto \quad a \times a^{-1} \le a \times 0$ (Real Number Ordering is Compatible with Multiplication)
$\leadsto \quad 1 \le 0$

This contradicts $1 > 0$, so the result follows from Proof by Contradiction. $\blacksquare$
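The same statement can be sketched in Lean 4 with Mathlib, where the lemma `inv_pos` packages the contradiction argument above in one step. This is a sketch assuming current Mathlib names:

```lean
import Mathlib

-- Sketch: Mathlib's `inv_pos : 0 < a⁻¹ ↔ 0 < a` gives the theorem directly.
example (a : ℝ) (ha : 0 < a) : 0 < a⁻¹ := inv_pos.mpr ha
```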
Authors: Lions, J.-L. Article Type: Research Article Abstract: Homogenization theory allows one to "replace" a "complicated" operator with rapidly varying coefficients by a "simple" one. This procedure applies to evolution operators of hyperbolic type or of Petrowsky's type. One can study for these operators the exact controllability problem, in particular using HUM (Hilbert Uniqueness Method) introduced in [5]. A general program is to study the following question: what happens to exact controllability during the homogenization procedure? The present paper is a first (and small) result in this program. DOI: 10.3233/ASY-1988-1102 Citation: Asymptotic Analysis, vol. 1, no. 1, pp. 3-11, 1988 Authors: Perthame, B. Article Type: Research Article Abstract: We consider the impulse control of a reflected diffusion. It has been proved that the long run average cost for this problem solves the ergodic Quasi-Variational inequality (Q.V.I.)
\begin{equation}\left\{\begin{array}{l}\int_{\Omega}\nabla u_{k}\cdot\nabla(v-u_{k})\,dx\geq \int_{\Omega}(f-\lambda_{k})(v-u_{k})\,dx,\\ \forall v\in H^{1}(\Omega),\ v\leq k+Mu_{k}\ \mathrm{a.e.};\quad u_{k}\in H^{1}(\Omega),\ u_{k}\leq k+Mu_{k}\ \mathrm{a.e.},\quad \int_{\Omega}u_{k}\,dx=0,\end{array}\right.\end{equation} where $Mu(x)=\operatorname{ess\,inf}\{c_0(\xi)+u(x+\xi):\ \xi\geq 0,\ x+\xi\in\Omega\}$. We prove uniform bounds in $H^1\cap L^\infty$ on $u_k$ and we show that, extracting a subsequence if necessary, $(u_k,\lambda_k)$ converge as $k\to 0$ to a solution of $(1)_0$. We also study the uniqueness of $(u_0,\lambda_0)$, and we prove that it is false in general, although the complete sequence $\lambda_k$ converges to the maximal $\lambda_0$ such that $(1)_0$ admits a solution. DOI: 10.3233/ASY-1988-1103 Citation: Asymptotic Analysis, vol. 1, no. 1, pp. 13-21, 1988 Article Type: Research Article Abstract: It is known that for dissipative evolution equations, the long time behavior of the solutions is generally described by a compact attractor to which all solutions converge, while such a result is not true for conservative equations of Hamiltonian type. In this paper we consider a partly dissipative system corresponding to the equations of slightly compressible fluids and investigate the long time behavior of their solutions. Despite the lack of compactness and smoothing effect for the pressure variable, the existence of a global attractor is shown and its fractal dimension is estimated. DOI: 10.3233/ASY-1988-1104 Citation: Asymptotic Analysis, vol. 1, no. 1, pp. 23-49, 1988 Article Type: Research Article Abstract: It is shown that the continuation by zero for families of Sobolev type spaces $H_{(s),\varepsilon}$ of vectorial order $s=(s_1,s_2,s_3)$ is a continuous linear mapping uniformly with respect to the parameter $\varepsilon\in(0,1]$, provided that $|s_2|<\tfrac12$, $|s_2+s_3|<\tfrac12$ and the boundary $\partial U$ of the open set $U$ where the functions are defined is a $C^\infty$-manifold. This result is needed in the theory of pseudodifferential coercive (elliptic) singular perturbations.
DOI: 10.3233/ASY-1988-1105 Citation: Asymptotic Analysis, vol. 1, no. 1, pp. 51-60, 1988 Authors: Wendt, W.D. Article Type: Research Article Abstract: The reduced problem is formulated for singularly perturbed pseudodifferential equations with unknown potentials and with general boundary conditions. Several classes of problems of this type have previously been investigated in [2,5,6,7]. The stability theory [5,7] and the asymptotic analysis [6,7] for coercive singularly perturbed problems without potentials are extended to the above class of problems. Systems of singular integral equations on the half line were investigated in [8], using Wiener–Hopf factorization. Wiener–Hopf matrix operators without small parameter have been introduced and investigated in [10,12]. DOI: 10.3233/ASY-1988-1106 Citation: Asymptotic Analysis, vol. 1, no. 1, pp. 61-93, 1988
What Murray and von Neumann did was to show that there is an infinite-dimensional generalization of the following fact. If $\mathcal H$ is finite-dimensional and $\mathcal M\subset\mathcal B(\mathcal H)$ is a von Neumann algebra, it is a basic exercise that we can see $\mathcal B(\mathcal H)$ as $M_n(\mathbb C)$ for $n=\dim\mathcal H$. In that situation, $\mathcal M$ is isomorphic to $$\tag{*}\bigoplus_{k=1}^{\ell} M_{n(k)}(\mathbb C),$$ where the blocks are given by the minimal central projections (i.e., each block is $P\mathcal M P$, with $P$ a minimal central projection). When $\mathcal H$ is infinite-dimensional, the same idea works. The catch is that the centre may no longer have minimal projections, but what they proved is that there exist a Borel space $X$ and a Borel measure $\mu$ such that $$\mathcal M=\int_X^{\oplus} \mathcal M_\lambda\,d\mu(\lambda),$$ where the function $\lambda\longmapsto \mathcal M_\lambda$ is factor-valued a.e. In the particular case when $X$ is finite, $\mu$ is the counting measure and the $\mathcal M_\lambda$ are finite-dimensional, one recovers $(*)$.
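A concrete instance may help (a standard example, not part of the original argument): for an abelian $\mathcal M$, say the multiplication algebra of a measure space, every fiber of the decomposition is the one-dimensional factor $\mathbb C$.

```latex
% Abelian case: M = L^infty(X, mu), acting by multiplication on L^2(X, mu).
% The direct integral decomposition then has one-dimensional fibers:
\[
\mathcal M \;=\; L^\infty(X,\mu)
\;=\; \int_X^{\oplus} \mathcal M_\lambda \, d\mu(\lambda),
\qquad \mathcal M_\lambda = \mathbb C \ \ \text{for a.e.\ } \lambda.
\]
% Each fiber C = B(C) is a type I_1 factor, so an abelian von Neumann
% algebra sits at the opposite extreme from a single factor.
```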
Let $\varphi: A \rightarrow B$ be an integral ring homomorphism. Show that the induced morphism $\tilde{\varphi}:\mathrm{Spec}\,B \rightarrow \mathrm{Spec}\,A$ is closed. My idea: Let $I$ be an ideal of $B$. We have $\tilde{\varphi}(V(I)) \subset V(\varphi^{-1}(I))$, since $\tilde{\varphi}(\mathfrak{q}) = \varphi^{-1}(\mathfrak{q}) \in \mathrm{Spec}\,A$ and $\varphi^{-1}(\mathfrak{q}) \supset \varphi^{-1}(I)$ for any prime $\mathfrak{q} \in \mathrm{Spec}\,B$. Now suppose $\mathfrak{p} \in V(\varphi^{-1}(I))$: we have to prove there exists $\mathfrak{q} \in V(I)$ such that $\mathfrak{q} \cap A=\mathfrak{p}$. Since $\mathfrak{p} \supset \varphi^{-1}(I) \supset \varphi^{-1}(0) = \mathrm{Ker}\,\varphi$, by the Lying Over Theorem there is a prime ideal $\mathfrak{q} \subset B$ such that $\mathfrak{q} \cap A=\mathfrak{p}$, i.e. $\tilde{\varphi}(\mathfrak{q})=\mathfrak{p}$, and it follows that $\tilde{\varphi}(V(I)) = V(\varphi^{-1}(I))$, so $\tilde{\varphi}$ is a closed mapping. The detail I'm worried about is why $\mathfrak{q} \in V(I)$. Is my reasoning correct?
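One hedged way to settle the worried detail (a sketch, assuming the Lying Over Theorem for injective integral extensions): apply lying over not to $\varphi$ itself but to the induced map on quotients, which builds the containment $\mathfrak{q} \supseteq I$ in from the start.

```latex
% The induced map is injective (its kernel is phi^{-1}(I)/phi^{-1}(I) = 0)
% and integral (since B is integral over A), so Lying Over applies to it:
\[
\bar\varphi \colon A/\varphi^{-1}(I) \hookrightarrow B/I,
\qquad
\bar{\mathfrak p} := \mathfrak p/\varphi^{-1}(I) \in
\operatorname{Spec}\bigl(A/\varphi^{-1}(I)\bigr).
\]
% Lying Over produces a prime of B/I over p-bar; its preimage q in B
% contains I by construction and pulls back to p:
\[
\exists\, \bar{\mathfrak q} \in \operatorname{Spec}(B/I) \ \text{with}\
\bar\varphi^{-1}(\bar{\mathfrak q}) = \bar{\mathfrak p}
\;\Longrightarrow\;
\mathfrak q \supseteq I, \quad \varphi^{-1}(\mathfrak q) = \mathfrak p,
\quad\text{so}\ \mathfrak q \in V(I).
\]
```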
I have recently been reading about the interpretation of the Aharonov-Bohm effect via Feynman's path integral (see viXra:1403.0950). I do not know whether I am missing something, but I do not understand why when evaluating the action they have to take into account the potential even if the electron does not "feel" the field. As far as I understand Feynman's theory, the action is a classical entity which relies on local information (in this case the path the electron follows). Why then does one have to take into account the potential? The key here is that, just like the infinite walls of the potential well, where the wave function is identically zero outside the well, the wave function in the Aharonov-Bohm effect vanishes in the region where $\mathbf{B} \neq 0$. Further, this impenetrable zone is excluded for all paths. No path that passes through the forbidden zone contributes to the path integral. In the case of a cylinder with an enclosed field, then, all paths pass over or under the cylinder, none through. Sakurai tackles this problem in Modern Quantum Mechanics, first starting on p. 136 by showing the shift in the energy eigenvalues in the bound-state problem (concentric cylinders), where he states that The wave function is required to vanish on the inner and outer walls. referring to the walls of the concentric cylinders. He then goes on to solve the unbounded problem via Feynman's path integral approach, starting on p. 138.
On the following page, he concludes that [T]he interference effect discussed here is purely quantum mechanical...the Lorentz force is identically zero in all regions where the particle wave function is finite. Yet there is a striking interference pattern that depends on the presence or absence of a magnetic field inside the impenetrable cylinder. This point has led some people to conclude that in quantum mechanics it is $\mathbf{A}$ rather than $\mathbf{B}$ that is fundamental. Indeed, the reason this effect appears is that even though the magnetic field $\mathbf{B}$ vanishes in all regions where the particle may be found, the vector potential $\mathbf{A}$ is nonzero. In this sense, the particle does "feel" the magnetic field (specifically the vector potential), inasmuch as its canonical momentum is affected (through the additional term $\frac{e}{c}\frac{d\mathbf{x}}{dt}\cdot \mathbf{A}$ in the Lagrangian), its wave function experiences a phase shift, and accordingly the probabilities associated with finding it in various regions are affected.
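For completeness, here is the standard phase computation (in Gaussian units, consistent with the Lagrangian term quoted above; this is textbook material, not part of the original answer). Each path contributes a phase factor, and the two classes of paths around the cylinder differ by a flux-dependent phase:

```latex
% Phase carried by a single path in the path integral:
\[
\exp\!\left(\frac{ie}{\hbar c}\int_{\text{path}} \mathbf A \cdot d\mathbf x\right).
\]
% Paths passing on opposite sides of the cylinder differ by a loop
% integral, which Stokes' theorem converts to the enclosed flux Phi_B:
\[
\frac{e}{\hbar c}\oint \mathbf A \cdot d\mathbf x
\;=\; \frac{e}{\hbar c}\int \mathbf B \cdot d\mathbf S
\;=\; \frac{e\,\Phi_B}{\hbar c}.
\]
```

So the interference pattern depends on $\Phi_B$ even though $\mathbf B = 0$ along every contributing path.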
Today represents the half-way point of the class, both in terms of the calendar and in terms of the quantity of material that we are going to cover. On the one hand, it seems like the course is going by incredibly quickly (and it is). On the other hand, that first week of instruction feels so very long ago. So today, we not only worked with new material, but integrated some of the first ideas that we studied into the current section. What I Taught I started by presenting a doomsday device: a hedgehog is sitting on the top of a ladder that is leaning against a wall. The bottom of the ladder is being pulled away from the wall at some constant rate, thus the hedgehog is sliding down the wall toward the ground. When the hedgehog reaches the ground, it will impart its kinetic energy into the ground. The amount of energy depends on how fast the hedgehog is moving, so we want to compute that value. This is a standard related rates problem. We developed the tools to deal with it in the previous class, so we started by trying to model the situation. The ladder is of fixed length, the bottom is firmly attached to the ground, and the top is firmly attached to the wall, so we use the Pythagorean Theorem to get the height of the hedgehog, take derivatives, and solve for the velocity of the hedgehog at any given time. Unfortunately, when the hedgehog reaches the ground, our solution involves a division by zero, which is a problem. No worries—we have a theory of limits which allows us to evaluate functions “near” a point that is not in the domain. Remembering back to the second day of lecture, we eventually determined that the hedgehog is moving infinitely fast when it hits the ground! Thus, if we mount a hedgehog on the top of a ladder and then slide the bottom of the ladder along the ground, we end up with a hedgehog slamming into the ground at infinite velocity, destroying the universe. Hence we have modeled a doomsday device.
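The computation above, in symbols (with \(L\) the fixed ladder length, \(x\) the distance from the wall to the base, and \(y\) the hedgehog's height; the notation is mine, not the textbook's):

```latex
% Related rates for the sliding ladder: differentiate the constraint in t.
\[
x^2 + y^2 = L^2
\;\Longrightarrow\;
2x\,\frac{dx}{dt} + 2y\,\frac{dy}{dt} = 0
\;\Longrightarrow\;
\frac{dy}{dt} = -\frac{x}{y}\,\frac{dx}{dt}.
\]
% With dx/dt a positive constant, y -> 0^+ forces dy/dt -> -infinity:
% the model predicts infinite speed at impact, hence the "doomsday device."
```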
There are three reasons to bring up this example: (1) it gives students practice with related rates problems (the current topic), (2) it requires students to recall facts about limits (particularly infinite limits), and (3) it demonstrates how important it is to take physical reality into account when modeling real-world phenomena. After discussing the hedgehog of the apocalypse, we moved on to new material. Specifically, we started talking about linear approximation. I tried to motivate the idea by showing that as we zoom in on a point where a function is differentiable, the function starts to look linear. In fact, near any point where a function is differentiable, we can approximate the function by a tangent line. We did some algebra to determine the equation of that line, then stated the idea formally. If \(f\) is differentiable near \(a\), then \[ f(x) \approx L(x) = f(a) + f'(a)(x-a), \] where this approximation is the equation of the tangent line at \(a\), derived from the point-slope equation for a line. I justified this kind of approximation by noting that some functions are easy to evaluate at some points but not others (for instance \(\sqrt{4}\) is easy, \(\sqrt{3.97}\) not so much), then worked several examples. I finished the class by introducing the notion of differentials and showing how the differential \(dy\) can be useful for estimating the error between an actual value and a measured value. What Worked The hedgehog of the apocalypse was fun. I think that the problem itself managed to maintain engagement: I advertised it as a doomsday device at the beginning of the discussion, and students wanted to see what I was on about. I had the attention of nearly every student, and got good input from one or two normally quiet students (though I did have to instruct my regulars to quiet down for a minute). As an added bonus, we got to spend some time working with the textbook.
I think that students often don’t really know how to read a textbook, and even more specifically, I don’t think they know how to read a mathematics textbook. Because we had to work with limits again, we had to familiarize ourselves with the index so that we could figure out how to find \[ \lim_{x\to 10} \frac{x}{\sqrt{100-x^2}}. \] (and for an unrelated problem, we had to find out about the reference pages at the front and back of the text, which was an added bonus). While the lack of an \(\varepsilon\)–\(\delta\) definition for the limit was a little frustrating, the appropriate sections of the text did (eventually) prove helpful. I am moderately proud of myself for (mostly) shutting up and (mostly) letting the students run that part of the show. What Didn’t Work I am finding myself increasingly frustrated with a couple of students. These students probably aren’t really well prepared for calculus (they require a lot of hand holding for basic algebra problems), often arrive to class late (and expect me to catch them up while everyone else is waiting), and prove to be a constant drag on my flow. I think that I am going to have to have a word with these guys… I was also stymied by the technology. It seems that IT has been reimaging hard drives recently, and upgrading software. Unfortunately, this means that Maple didn’t work. I had a whole document prepared and ready to go, and when it didn’t work, I got flustered. On the bright side, I managed to get back up and running again with WolframAlpha, but it was less than ideal. Of course, it is entirely my own damn fault—while I prepared a Maple document ahead of time, I did not check to make sure that Maple was actually working before I needed it. Finally, I don’t think that I did a very good job of explaining absolute error vs relative error. 
To me, this seems like a really easy concept—absolute error is measured in units, and tells you the difference between an actual value and a measured value; while relative error is unitless, and gives the error as a proportion of the actual value. This seemed to be a really hard concept for the students, and I think that my attempts to explain just added to the confusion. The right thing to do would have been to stop trying to explain, and just work some more examples (not that there was much time for this, but I could have done one or two).
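The distinction really is two lines of arithmetic. A minimal sketch in Python (not something we used in class; the function names and the board-length numbers are my own invented illustration):

```python
def absolute_error(actual, measured):
    """Difference between the actual and measured values, in the units of the data."""
    return abs(actual - measured)

def relative_error(actual, measured):
    """Unitless: the absolute error expressed as a proportion of the actual value."""
    return absolute_error(actual, measured) / abs(actual)

# Example: a board that is actually 2.00 m long but is measured as 1.97 m.
actual, measured = 2.00, 1.97
print(absolute_error(actual, measured))  # about 0.03 (meters)
print(relative_error(actual, measured))  # about 0.015, i.e. 1.5%
```

Relative error is just the absolute error rescaled by the true value, which is exactly why the units cancel.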