Mathematics CIIT: LaTeX Resources

This page contains the CIIT Mathematics LaTeX templates for MSc projects and MS theses.

Templates

Download a zip file given below and extract it by right-clicking on the file. The latest templates are available at http://www.mathcity.org/cui

Installing LaTeX

Three separate pieces of software need to be installed to run LaTeX on Windows:

MiKTeX: available at http://www.miktex.org/download
A PDF viewer (especially Sumatra PDF; it works best with LaTeX)
A LaTeX editor (especially TeXstudio, our choice; you can choose any other)

After installing the above software, configure Sumatra PDF and TeXstudio for forward and inverse search.

Tips & Tricks

To activate forward and inverse search, open TeXstudio, go to Options > Configure TeXstudio > Commands > External PDF Viewer, and replace the command with the following code:

For a 32-bit operating system: "C:\PROGRA~1\SumatraPDF\SumatraPDF.exe" -reuse-instance -forward-search "?c:am.tex" @ -inverse-search """""C:\PROGRA~1\TeXstudio\texstudio.exe"""" """%%f""" -line %%l" "?am.pdf"

For a 64-bit operating system (if 64-bit SumatraPDF is installed): "C:\PROGRA~1\SumatraPDF\SumatraPDF.exe" -reuse-instance -forward-search "?c:am.tex" @ -inverse-search """""C:\PROGRA~2\TeXstudio\texstudio.exe"""" """%%f""" -line %%l" "?am.pdf"

For a 64-bit operating system (if 32-bit SumatraPDF is installed): "C:\PROGRA~2\SumatraPDF\SumatraPDF.exe" -reuse-instance -forward-search "?c:am.tex" @ -inverse-search """""C:\PROGRA~2\TeXstudio\texstudio.exe"""" """%%f""" -line %%l" "?am.pdf"

The code above assumes that Windows is installed on the C drive; if it is installed on a different drive, use that drive letter instead of C. It also assumes that the programs are installed in their default locations. In the Build settings, select “PDF Viewer: External PDF Viewer”.

LaTeX Codes

To write an inline equation, use a $ (dollar) sign to enclose equations or symbols within statements or sentences, e.g.
Let $I$ be an interval in $\mathbb{R}$ and $f:I\to \mathbb{R}$ be a function.

To write a displayed equation, use double dollar signs ($$), e.g.,

$$ \sin^2 \theta + \cos^2 \theta =1 $$

If you wish the equation number to appear automatically, write it in the following way:

\begin{equation} \sin^2 \theta + \cos^2 \theta =1 \end{equation}

For more tips and tricks related to TeXstudio, please visit: http://www.texstudio.org/#features Last modified: 14 months ago by Administrator
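Putting the pieces above into a minimal compilable document (a sketch; the amsmath and amssymb packages are assumed to be available, as they are in any standard MiKTeX installation):

```latex
\documentclass{article}
\usepackage{amsmath}  % for the equation environment
\usepackage{amssymb}  % for \mathbb
\begin{document}
% Inline math between single dollar signs:
Let $I$ be an interval in $\mathbb{R}$ and $f:I\to\mathbb{R}$ be a function.
% Displayed, unnumbered equation:
$$\sin^2 \theta + \cos^2 \theta = 1$$
% Automatically numbered equation:
\begin{equation}
  \sin^2 \theta + \cos^2 \theta = 1
\end{equation}
\end{document}
```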
Let us now take a closer look at the hex-fractal we sliced last week. Chopping a level 0, 1, 2, and 3 Menger sponge through our slanted plane gives the following: This suggests an iterative recipe to generate the hex-fractal. Any time we see a hexagon, chop it into six smaller hexagons and six triangles as illustrated below. Similarly, any time we see a triangle, chop it into a hexagon and three triangles like this: In the limit, each hexagon and triangle in the above image becomes a hex-fractal or a tri-fractal, respectively. The final hex-fractal looks something like this (click for larger image): Now we are in a position to answer last week’s question: how can we compute the Hausdorff dimension of the hex-fractal? Let d be its dimension. Like last week, our computation will proceed by trying to compute the “d-dimensional volume” of our shape. So, start with a “large” hex-fractal and tri-fractal, each of side-length 1, and let their d-dimensional volumes be h and t respectively. [1] Break these into “small” hex-fractals and tri-fractals of side-length 1/3, so these have volumes \(h/3^d\) and \(t/3^d\) respectively (this is how “d-dimensional stuff” scales). Since $$\begin{gather*}(\text{large hex}) = 6(\text{small hex})+6(\text{small tri}) \quad \text{and}\\ (\text{large tri}) = (\text{small hex})+3(\text{small tri}),\end{gather*}$$ we find that \(h=6h/3^d + 6t/3^d\) and \(t=h/3^d+3t/3^d\). Surprisingly, this is enough information to solve for the value of \(3^d\). [2] We find \(3^d = \frac{1}{2}(9+\sqrt{33})\), so $$d=\log_3\left(\frac{9+\sqrt{33}}{2}\right) = 1.8184\ldots,$$ as claimed last week. As a final thought, why did we choose to slice the Menger sponge on this plane? Why not any of the (infinitely many) others?
Even if we only look at planes parallel to our chosen plane, a mesmerizing pattern emerges: More Information It takes a bit more work to turn the above computation of the hex-fractal’s dimension into a full proof, but there are a few ways to do it. Possible methods include mass distributions [3] or similarity graphs [4]. This diagonal slice through the Menger sponge has been proposed as an exhibit at the Museum of Math. Sebastien Perez Duarte seems to have been the first to slice a Menger sponge in this way (see his rendering), and his animated cross section inspired my animation above. Thanks for reading! Notes We’re assuming that the hex-fractal and tri-fractal have the same Hausdorff dimension. This is true, and it follows from the fact that a scaled version of each lives inside the other. [↩] There are actually two solutions, but the fact that h and t are both positive rules one out. [↩] Proposition 4.9 in: Kenneth Falconer. Fractal Geometry: Mathematical Foundations and Applications. John Wiley & Sons: New York, 1990. [↩] Section 6.6 in: Gerald Edgar. Measure, Topology, and Fractal Geometry (Second Edition). Springer: 2008. [↩]
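The two-equation system for h and t can be checked numerically. Eliminating t from \(h=6h/3^d + 6t/3^d\) and \(t=h/3^d+3t/3^d\) gives a quadratic in \(s = 3^d\):

```python
import math

# Write s = 3**d. The system  s*h = 6*h + 6*t  and  s*t = h + 3*t
# gives (s - 6)*h = 6*t and (s - 3)*t = h; substituting one into the
# other yields (s - 6)*(s - 3) = 6, i.e.  s**2 - 9*s + 12 = 0.
s = (9 + math.sqrt(33)) / 2   # positive root (the other would make h, t negative)
d = math.log(s, 3)            # Hausdorff dimension of the hex-fractal

print(round(d, 4))  # -> 1.8184
```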
[Sophie wrote](https://forum.azimuthproject.org/discussion/comment/18107/#Comment_18107): > > in other words, we have \\((x, y) \le (x', y')\\) iff \\(x \le x' \land y \le y'\\) > > I assume you used \\(\land\\) to mean "and" and not "join". Is that correct? I can definitely see how that's ambiguous! I did mean "and". (Though technically, "and" is a join on some lattice of propositions, so it's not an _incorrect_ reading, as long as the right parentheses are inferred.) (EDIT: Also, lest I fall back into bad habits, I think \\(\wedge\\) is the symbol for “meet”, the greatest lower bound. I do find it more natural sometimes to think of “true” as the informationless bottom element though.) > Second I am still unclear about what ordering you are getting on \\(\mathbb R ^2 \\) from product. You wrote: > > this lifts to the product straightforwardly > > Any chance you could make that explicit? Sure thing. Generally when working with products, things "lift" componentwise. So we say \\((x, y) \le (x', y')\\) iff both \\(x \le x'\\) and \\(y \le y'\\). The lifting I'm referring to in the quote is that the fact that \\(+\\) respects the order on \\(\mathbb{R}\\) lifts, in the “usual” componentwise sense, to the fact that \\(+\\) respects the order on \\(\mathbb{R}^2\\). Explicitly, if \\((x, y) \le (x', y')\\) and we have some \\((r, s) \in \mathbb{R}^2\\), we know that \\((x, y) + (r, s) \le (x', y') + (r, s)\\) because \\(x + r \le x' + r\\) and \\(y + s \le y' + s\\) individually.
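The componentwise lifting can be made concrete in a few lines (a throwaway sketch; the helper names are my own, not from the thread):

```python
def leq(p, q):
    """Product order on R^2: (x, y) <= (x', y') iff x <= x' and y <= y'."""
    return p[0] <= q[0] and p[1] <= q[1]

def add(p, q):
    """Componentwise addition on R^2."""
    return (p[0] + q[0], p[1] + q[1])

# + respects the order in each component, so it respects the product order:
p, q, r = (1, 2), (3, 2), (10, -5)
assert leq(p, q) and leq(add(p, r), add(q, r))

# Note the product order is only partial: (0, 1) and (1, 0) are incomparable.
assert not leq((0, 1), (1, 0)) and not leq((1, 0), (0, 1))
```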
I think there are two legitimate sources of complaint. For the first, I will give you the anti-poem that I wrote in complaint against both economists and poets. A poem, of course, packs meaning and emotion into pregnant words and phrases. An anti-poem removes all feeling and sterilizes the words so that they are clear. The fact that most English-speaking humans cannot read this assures economists of continued employment. You cannot say that economists are not bright. Live Long and Prosper: An Anti-Poem May you be denoted as $k\in{I},I\in\mathbb{N}$, such that $I=1\dots{i}\dots{k}\dots{Z}$ where $Z$ denotes the most recently born human. $\exists$ a fuzzy set $Y=\{y^i:\text{Human Mortality Expectations}\mapsto{y^i},\forall{i\in{I}}\},$ may $y^k\in\Omega,\Omega\in{Y}$ and $\Omega$ is denoted as "long" and may $U(c)$, where c is the matrix of goods and services across your lifetime $U$ is a function of $c$, where preferences are well-defined and $U$ is qualitative satisfaction, be maximized $\forall{t}$, $t$ denoting time, subject to $w^k=f'_t(L_t),$ where $f$ is your production function across time and $L$ is the time vector of your amount of work, and further subject to $w^i_tL^i_t+s^i_{t-1}=P_t^{'}c_t^i+s^i_t,\forall{i}$ where $P$ is the vector of prices and $s$ is a measure of personal savings across time. May $\dot{f}\gg{0}.$ Let $W$ be the set $W=\{w^i_t:\forall{i,t}\text{ ranked ordinally}\}$ Let $Q$ be the fuzzy subset of $W$ such that $Q$ is denoted "high". Let $w_t^k\in{Q},\forall{t}$ The second is mentioned above, which is the misuse of math and statistical methods. I would both agree and disagree with the critics on this. I believe that most economists are not aware of how fragile some statistical methods can be. To provide an example, I did a seminar for the students in the math club as to how your probability axioms can completely determine the interpretation of an experiment.
I proved using real data that newborn babies will float out of their cribs unless nurses swaddle them. Indeed, using two different axiomatizations of probability, I had babies clearly floating away and obviously sleeping soundly and securely in their cribs. It wasn't the data that determined the result; it was the axioms in use. Now any statistician would clearly point out that I was abusing the method, except that I was abusing the method in a manner that is normal in the sciences. I didn't actually break any rules; I just followed a set of rules to their logical conclusion in a way that people do not consider because babies don't float. You can get significance under one set of rules and no effect at all under another. Economics is especially sensitive to this type of problem. I do believe that there is an error of thought in the Austrian school, and maybe the Marxists, about the use of statistics in economics that is based on a statistical illusion. I am hoping to publish a paper on a serious math problem in econometrics that nobody has seemed to notice before, and I think it is related to the illusion. This image is the sampling distribution of Edgeworth's Maximum Likelihood estimator under Fisher's interpretation (blue) versus the sampling distribution of the Bayesian maximum a posteriori estimator (red) with a flat prior. It comes from a simulation of 1000 trials each with 10,000 observations, so they should converge. The true value is approximately .99986. Since the MLE is also the OLS estimator in this case, it is also Pearson and Neyman's MVUE. Note how relatively inaccurate the frequency-based estimator is compared to the Bayesian. Indeed, the relative efficiency of $\hat{\beta}$ under the two methods is 20:1. Although Leonard Jimmie Savage was certainly alive when the Austrian school left statistical methods behind, the computational ability to use them didn't exist. The first element of the illusion is inaccuracy.
The second part can better be seen with a kernel density estimate of the same graph. In the region of the true value, there are almost no examples of the maximum likelihood estimator being observed, while the Bayesian maximum a posteriori estimator closely covers .999863. In fact, the average of the Bayesian estimators is .99987, whereas the frequency-based solution is .9990. Remember this is with 10,000,000 data points overall. Frequency-based estimators are averaged over the sample space. The implication often missed is that such an estimator is unbiased on average over the entire sample space, but possibly biased for any specific value of $\theta$. You also see this with the binomial distribution. The effect is even greater on the intercept. The red is the histogram of Frequentist estimates of the intercept, whose true value is zero, while the Bayesian is the spike in blue. The impact of these effects is worsened with small sample sizes because large samples pull the estimator to the true value. I think the Austrians were seeing results that were inaccurate and didn't always make logical sense. When you add data mining into the mix, I think they were rejecting the practice. The reason I believe the Austrians are incorrect is that their most serious objections are solved by Leonard Jimmie Savage's personalistic statistics. Savage's Foundations of Statistics fully covers their objections, but I think the split had effectively already happened, and so the two have never really met up. Bayesian methods are generative methods, while frequency methods are sampling-based methods. While there are circumstances where it may be inefficient or less powerful, if a second moment exists in the data, then the t-test is always a valid test for hypotheses regarding the location of the population mean. You do not need to know how the data was created in the first place. You need not care. You only need to know that the central limit theorem holds.
Conversely, Bayesian methods depend entirely on how the data came into existence in the first place. For example, imagine you were watching English-style auctions for a particular type of furniture. The high bids would follow a Gumbel distribution. The Bayesian solution for inference regarding the center of location would not use a t-test, but rather the joint posterior density of each of those observations with the Gumbel distribution as the likelihood function. The Bayesian idea of a parameter is broader than the Frequentist one and can accommodate completely subjective constructions. As an example, Ben Roethlisberger of the Pittsburgh Steelers could be considered a parameter. He would also have parameters associated with him, such as pass completion rates, but he could have a unique configuration, and he would be a parameter in a sense similar to Frequentist model comparison methods. He might be thought of as a model. The complexity rejection isn't valid under Savage's methodology and indeed cannot be. If there were no regularities in human behavior, it would be impossible to cross a street or take a test. Food would never be delivered. It may be the case, however, that "orthodox" statistical methods can give pathological results that have pushed some groups of economists away.
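As an illustration of the generative point above (all numbers here are hypothetical; it assumes, for simplicity, a Gumbel likelihood with known scale 1 and a flat prior on the location, so the posterior mode is just the likelihood mode):

```python
import math
import random

random.seed(0)
mu_true, beta = 3.0, 1.0
# Simulate high bids by inverting the Gumbel CDF: x = mu - beta*ln(-ln(u)).
data = [mu_true - beta * math.log(-math.log(random.random()))
        for _ in range(2000)]

def log_posterior(mu):
    # Gumbel log-density summed over observations; with a flat prior the
    # posterior is proportional to the likelihood.
    zs = [(x - mu) / beta for x in data]
    return sum(-z - math.exp(-z) for z in zs) - len(data) * math.log(beta)

# Grid search for the maximum a posteriori location.
grid = [2.0 + 0.01 * k for k in range(201)]   # mu in [2.0, 4.0]
mu_map = max(grid, key=log_posterior)
```

With a couple of thousand observations the MAP estimate lands close to the true location, precisely because the Gumbel shape of the data was built into the likelihood.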
Does this series converge absolutely, converge conditionally, or diverge? $\sum_{n=2}^{\infty} (-1)^n \cdot \frac{\sqrt{n}}{(-1)^n + \sqrt{n}} \cdot \sin(\frac{1}{\sqrt{n}})$ I can use the facts that $\lim_{x\to0}\frac{\sin x}{x} =1$ and that $\sin(\frac{1}{\sqrt{n}})$ is monotone decreasing. My problem is the alternating $(-1)^n$ in the denominator; I don't know how to deal with it, though the fraction looks similar to an $e$-based limit.
This feat is notorious for its poor wording. The “+100%” phrasing is completely unique within D&D 3.5e as far as I know, for example. Ultimately, I can’t imagine any other interpretation here than adding again the number subtracted from your attack rolls, and it does have the nice feature of specifying the “normal” damage from Power Attack which means that features like the frenzied berserker’s supreme power attack that already give one-handed weapons 2:1 returns don’t get doubled to 4:1, but instead go to the 3:1 you would normally expect from D&D’s multiplication rules. But then there is the line you haven’t quoted: If you use this tactic with a two-handed weapon, you instead triple the extra damage from Power Attack. No bizarre “+100%” in sight! But also we have lost the useful reference to “normal” and now it is multiplying “the extra damage from Power Attack,” whatever that is for you. This is going to get us in trouble, you can just tell already. So you are tripling the extra damage—not tripling the penalty applied. The problem here, well the first problem here, is that “the extra damage from Power Attack” is “twice the number subtracted from your attack rolls” when attacking two-handed. Worse, since “the extra damage from Power Attack” is calculated as twice the penalty, but isn’t itself subject to any multiplier, arguably the repeated-multiplication rules don’t apply, and that gets you a 2×3=6 rather than 1+(2−1)+(3−1)=4. So instead of 2:1 returns on Power Attack, you get 6:1 returns on Power Attack. Or maybe you get 5:1; it’s impossible to say since it’s worded so poorly. Plus, ya know, I suspect what they meant to do was give you 3:1 returns, but of course they didn’t say that. And that would combine quite nicely with, say, the supreme power attack feature of the frenzied berserker, who was getting 4:1 returns to begin with. Now they’re arguably getting 8:1. On top of those issues, this is only the Power Attack bonus damage. 
The result is added to the rest of your damage, and that gets you your full damage... which might be multiplied again, e.g. with valorous. This effectively multiplies your multiplier, which is exactly what the multiplication rules try to avoid, but since two different things are being multiplied, the multiplication rules don’t actually come into play. So for the example: 2d6+1 damage from the weapon itself, +6 for Strength, and the −6 attack penalty for maximum Power Attack results in double that for +12 damage from Power Attack without Leap Attack. Thus 2d6+19 is the baseline for all interpretations, and valorous doubles that for 4d6+38. With the 6:1 returns, we are instead looking at a Power Attack bonus of +36 (six times the penalty, triple “the extra damage from Power Attack” which would have been +12). Using 5:1 brings that down to +30, which is somewhat better, but not, ya know, great, when what they probably meant was +18. Note that +36 is nearly what valorous was giving the entire attack before. Now with valorous, we’re looking at a total of 4d6+66—of which, 52 comes from Power Attack. It may not be a bad idea to try to eliminate the multiplication of a multiplier here through houserule, but note that the Power Attack bonus damage isn’t the only case of this: the bonus damage due to Strength also has a multiplier, +1½×, which is also being doubled by valorous. This, unlike Leap Attack, has strong precedent in the rules.
The “fix” would be to apply the multiplication rule individually to all sources of damage, like so: \begin{array}{r}2 \times ( && 2\text{d}6 && +1 && +1\tfrac{1}{2}\times 4 && +3\times 2\times 6 & ) \\= && 2\times 2\text{d}6 && + 2\times 1 && + 2\times 1\frac{1}{2}\times 4 && + 2\times 3\times 2\times 6 \\= & [1 \\ && +\left(2-1\right) \\ & ] & \times 2\text{d}6 & +[1 \\ && && +\left(2-1\right) \\ && & ] & \times 1 & +[1 \\ && && && +\left(2-1\right) \\ && && && +\left(1\frac{1}{2}-1\right) \\ && && & ] & \times 4 & +[1 \\ && && && && +\left(2-1\right) \\ && && && && +\left(3-1\right) \\ && && && && +\left(2-1\right) \\ && && && & ] & \times 6 \\= && 2\times 2\text{d}6 && +2\times 1 && +2\frac{1}{2}\times 4 && +5\times 6 \\= && 4\text{d}6 && +2 && +10 && +30 \\= && && && && 4\text{d}6+42 \\\end{array} But this is very definitely a houserule, and I’m not convinced that it is good (I mean, good luck calculating that for every attack!), even though it “enforces” the idea that you’re not supposed to get to multiply multipliers.
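The additive stacking rule used throughout the computation above (×2 and ×3 combine to ×4, not ×6) is easy to state compactly; this is a throwaway sketch of the arithmetic, not official rules text:

```python
def stack(multipliers):
    """D&D 3.5e multiplier stacking: each extra multiplier adds its
    increment rather than compounding, so x2 and x3 combine as
    1 + (2-1) + (3-1) = x4, not 2*3 = x6."""
    return 1 + sum(m - 1 for m in multipliers)

assert stack([2, 3]) == 4        # the 1+(2-1)+(3-1)=4 from the answer
assert stack([2, 3, 2]) == 5     # the x5 applied to the Strength 6 above
assert stack([2, 1.5]) == 2.5    # valorous x2 with the 1.5x Strength bonus
```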
Team Member Jaehyun Bae Contents Motivation This project is motivated by the DARPA Warrior Web program. There has been a lot of research into exoskeletons over the years to alleviate the heavy loads that soldiers carry, but strapping a person into a robotic outfit just isn't practical in a combat zone yet. DARPA's Warrior Web program aims to build a lightweight suit that improves a soldier's endurance and overall effectiveness, while preventing injuries. The main goals of the Warrior Web program are: To prevent and reduce musculoskeletal injuries. To augment positive work done by the muscles and reduce the physical burden. You can see the details of this project at http://www.darpa.mil/Our_Work/DSO/Programs/Warrior_Web.aspx. Harvard Exosuit In order to develop an under-suit that doesn’t hinder the wearer’s free movement, researchers are trying to make it soft and deformable, but still capable of applying force to body joints. The Harvard exosuit is an example of this new approach to creating an under-suit in a soft and deformable manner. Experimental data showed that it can help loaded walking by reducing metabolic cost. This suit applies force to lower limb joints through cables driven by actuators in the backpack. Challenges As the exosuit tries to assist human gait with a deformable structure, there are many challenges in its development. The challenges are: It is difficult to analyze the effectiveness of the suit. It is difficult to find the optimal input force for actuators to reduce the metabolic cost. It is difficult to identify the effect of changes in design parameters. The reasons for the challenges are: The suit is deformable and closely attached to the body. We cannot predict how external actuation assists muscles during loaded walking, as the human body is highly redundant. Experimental data is inconsistent from case to case. Goals Evaluate the effectiveness of wearing active actuators on metabolic cost reduction. Explain how the exosuit can help loaded gait.
Verify the impact of changes in design parameters. Find optimal control inputs for active actuators. Strategy Experimental data Two sets of data were collected from the same subject: One gait cycle of loaded walking (from left toe-off to next left toe-off). One gait cycle of unloaded walking (again, from left toe-off to next left toe-off). The subject didn't wear a suit and walked freely. Walking speeds were identical. Mass of the load was 38 kg. Modeling To simulate the movement of the exosuit wearer, the first thing to do is to create a model which can replicate a real subject as realistically as possible. Before I created the simulation model with active actuators on it, I used the generic gait model to go through the basic steps in the OpenSim simulation pipeline. By doing so, I could make my model dynamically consistent with the experimental data. I then added actuators and metabolic cost probes to the model. Modeling a subject wearing an active actuator The diagram above describes the procedure used to simulate the generic gait model. The first three steps comprise the basic modeling procedure in OpenSim to make a model dynamically and kinematically consistent with the experimental data. For more information, see the following pages on Confluence: After step 3 was complete, probes for calculating metabolic cost were added to the model. For more information about how to add the probes, see Simulation-Based Design to Reduce Metabolic Cost. Finally, I added active actuators to the model. Here, I used the PathActuator class to simulate the active actuators of the exosuit, as they are cable-driven actuators. If you are interested in how the PathActuator works, see the OpenSim::PathActuator class. Sample models Here are the figures of sample simulation models. I modified the RRA-adjusted model to create several different types of models for comparison.
Loaded gait models Three types of models were created for loaded gait simulations: A model without any additional actuators A model with path actuators supporting plantarflexion A model with path actuators supporting hip extension The path actuator supporting plantarflexion is attached to the heel and tibia, and the path actuator supporting hip extension is attached to the backpack and femur. For simplicity, the loaded mass was added directly to the torso. Unloaded gait models The same types of models were created for unloaded gait simulations. Optimization methodology The idea for optimizing the control input force for the actuators is to take advantage of the Computed Muscle Control (CMC) tool. The main reason we use CMC in OpenSim is to find the most suitable excitations for muscles to create body movement while minimizing muscle activations. To see how it works, see How CMC Works. In this project, I make different use of the optimization process in CMC in order to optimize the control input force for active actuators. The CMC procedure is a static optimization process, and it minimizes the cost function J, which can be represented as \[ J = \sum_{i=1}^{n_x} x_i^2 + \sum_{j=1}^{n_q} w_j \left( \ddot{q}_j\,^* - \ddot{q}_j \right)^2\] When we add active actuators to an OpenSim model, the activation term in the cost function becomes \[ x = \begin{bmatrix} x_{\mathrm{muscle}} \\ x_{\mathrm{actuator}} \end{bmatrix}\] where \(x_{\mathrm{muscle}}\) is the muscle control and \(x_{\mathrm{actuator}}\) is the actuator control. As \(x_{\mathrm{actuator}}\) is part of the activation states, it is also adjusted during the optimization process. Now, if we diminish the influence of \(x_{\mathrm{actuator}}\) on J and run CMC, the optimizer tries to find the \(x_{\mathrm{actuator}}\) that minimizes muscle activations. We know that minimizing muscle activation corresponds to minimizing metabolic cost, so we can conclude that the actuator input force resulting from CMC, when the influence of \(x_{\mathrm{actuator}}\) on the objective function J is diminished, is the optimal actuator input for metabolic cost reduction. The actuator force is given by the equation \(F_{\mathrm{actuator}} = F_{\mathrm{actuator}}^{\mathrm{max}} \cdot x_{\mathrm{actuator}}\). If we assign a large maximum force to each actuator, the actuator control \(x_{\mathrm{actuator}}\) decreases, so the influence of the actuator on J is decreased. Using this idea, I found an optimal input for each actuator, and also found the metabolic cost reduction after running CMC with a model where active actuators were added. Results and Discussion Metabolic cost change when active actuators are added to the model I investigated how much metabolic cost is reduced when optimal input force is applied to a model by active actuators. I simulated both loaded and unloaded walking cases, and I compared the influence of the hip and ankle actuators on metabolic cost. I used 10,000 N as the maximum active actuator force (\(F_{\mathrm{actuator}}^{\mathrm{max}}\)) for these simulations. Loaded walking Unloaded walking Metabolic cost reduction when active actuators are added to the loaded gait model: Ankle actuator: 10.35% Hip actuator: 6.62% Metabolic cost reduction when active actuators are added to the unloaded gait model: Ankle actuator: 10.62% Hip actuator: 1.04% Things to notice: The metabolic cost is much lower during unloaded walking than loaded walking. Unloaded walking costs only 75% of the metabolic energy spent during loaded walking. The ankle actuator works better at reducing metabolic cost than the hip actuator when we can apply the optimal input force. The hip actuator is not assistive during unloaded gait. Therefore, we can say that the ankle actuator helps reduce metabolic cost better than the hip actuator if we have ideal actuators with no maximum force limitation.
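The cost function J described above can be sketched in plain Python; cmc_cost here is a hypothetical helper for illustration, not part of the OpenSim API:

```python
def cmc_cost(controls, qdd_desired, qdd_model, weights):
    """J = sum_i x_i^2 + sum_j w_j * (qdd*_j - qdd_j)^2 -- the CMC
    static-optimization objective: a sum of squared controls plus a
    weighted acceleration-tracking error."""
    activation_term = sum(x * x for x in controls)
    tracking_term = sum(w * (qd - qm) ** 2
                        for w, qd, qm in zip(weights, qdd_desired, qdd_model))
    return activation_term + tracking_term

# With a large F_max, the control x_actuator needed to produce a given force
# shrinks (F = F_max * x), so its squared contribution to J becomes negligible
# and the optimizer is free to use the actuator to unload the muscles.
```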
Optimal actuator input force Loaded walking Unloaded walking Ankle actuator Hip actuator Optimal input force for the ankle actuator: Actuation begins right after toe-off of the foot on the opposite side, and the peak force happens between heel-strike of the foot on the opposite side and toe-off of the foot on the same side. The same actuation strategy is valid for both loaded gait and unloaded gait. The force signal is clear and easy to implement in the real world. However, the maximum actuation force is about 2500 N, which is too high to achieve in reality. Optimal input force for the hip actuator: The hip actuator can reduce the metabolic cost with lower maximum force than the ankle actuator. However, it is hard to identify how the actuator assists walking. Also, it is difficult to implement the optimal control input for the hip actuator in the real world. How the ankle actuator assists loaded gait We can explain how the optimal actuation input for the ankle actuator helps loaded gait by investigating the change of plantarflexor muscle forces. The gastrocnemius muscle forces barely change. Other plantarflexor muscle forces, including soleus muscle forces, are significantly decreased. If we compare the active actuator input force with the sum of the baseline uniarticular forces, we can see that the active actuator force follows the sum of the baseline uniarticular muscle forces. Optimal input force for ankle actuator when actuation force is limited to 400 N (i.e., realistic actuation limit) From the previous results, we found that the optimal input force for the hip actuator is small enough to be achieved by a real actuator, while the optimal input force for the ankle actuator is not realistic (> 2000 N). Therefore, I tried different types of input forces for ankle actuators up to 400 N, and compared the results to the optimal control input case. My initial guess was to saturate the optimal input force that I found earlier at 400 N.
I generated a new input force which is identical to the optimal input force up to 400 N, and saturated once the optimal input force exceeded 400 N. The second input force I tried was a new result from my CMC simulations. The new CMC result was acquired by assigning 4000 N to the maximum actuation force and bounding the control input between 0 and 0.1. In other words, \[ F_{\mathrm{actuator}}^{\mathrm{max}} = 4000\ \mathrm{N}, \quad 0 \leq x_{\mathrm{actuator}} \leq 0.1\] According to the formula \(F_{\mathrm{actuator}} = F_{\mathrm{actuator}}^{\mathrm{max}} \cdot x_{\mathrm{actuator}}\), the new CMC result also has a maximum force of 400 N. As \(x_{\mathrm{actuator}}\) is bounded between 0 and 0.1 and \(x_{\mathrm{muscle}}\) is chosen between 0 and 1, the influence of \(x_{\mathrm{actuator}}\) on the objective function of the CMC procedure is relatively lower than that of \(x_{\mathrm{muscle}}\), so we can use this idea to create an optimal input for the ankle actuator when the maximum actuation force is limited. When we compare the saturated optimal input and the result from the new CMC procedure, we find that they are similar. Now, let's compare the metabolic cost reduction when each control input is applied to ankle actuators. Metabolic cost reduction Loaded walking Unloaded walking Metabolic cost reduction when active actuators are added to the loaded gait model: Optimal: 10.35% reduction Saturated: 1.84% reduction New CMC: 2.68% reduction Metabolic cost reduction when active actuators are added to the unloaded gait model: Optimal: 10.62% reduction Saturated: 3.46% reduction New CMC: 3.82% reduction The result from the new CMC procedure reduces metabolic cost more efficiently. However, the reduction is not significant, and it is much lower than the optimal case. The interesting thing is that the realistic actuation input force works better in the unloaded walking case than the loaded walking case. It makes sense because we require lower force to assist unloaded walking than to assist loaded walking.
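A quick sanity check of the bounded-control idea, using the actuator force equation and the 4000 N / 0.1 numbers from the text:

```python
def actuator_force(f_max, x):
    """F_actuator = F_actuator^max * x_actuator."""
    assert 0.0 <= x <= 1.0, "controls live in [0, 1]"
    return f_max * x

# Bounding x_actuator to [0, 0.1] with F_max = 4000 N caps the force at 400 N,
# while keeping x_actuator small relative to muscle controls in [0, 1], so its
# squared term contributes little to the CMC objective.
assert actuator_force(4000.0, 0.1) == 400.0
```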
Biarticular actuator Now that we know both the ankle actuator and the hip actuator can reduce metabolic cost during loaded walking, a natural progression is to test actuators which can affect both ankle plantarflexion and hip extension. To reduce the number of actuators, I added single-degree-of-freedom biarticular actuators affecting ankle plantarflexion and hip extension to the legs on both sides, and investigated the metabolic cost. The main idea in creating a biarticular actuator is to let the path actuator go through the axis of ankle joint rotation. I chose the attachment points of the ankle and hip actuators as the via points and end points of the biarticular actuator line, and also set the origin of the ankle joint rotation as one of the via points. By doing so, I created a biarticular actuator which combines the effects of the ankle and hip actuators. Simulation result I ran CMC on the loaded gait model with the biarticular actuator. I set the maximum force of the biarticular actuator to be 10,000 N in order to see the optimal input force and the best possible metabolic cost reduction. Optimal input Metabolic cost reduction The metabolic cost reduction when biarticular actuators are added to the loaded gait model is 3.12% from baseline. It is much lower than the reduction observed using either the ankle or hip actuator. The control input is complex, which makes it hard to realize. The biarticular actuator is not as effective as the uniarticular actuators in terms of metabolic cost reduction. Conclusions Three types of active actuators are evaluated in terms of metabolic cost reduction and controllability: If we can apply a sufficient amount of force, it is better to apply force to the ankle joint. If not, the hip actuator is a good alternative, though it is difficult to control. The biarticular actuator doesn’t assist loaded walking very well, and the force input is not consistent.
Active actuators offer greater assistance for loaded walking than for unloaded walking when we can apply a sufficient amount of force. The optimal input for the ankle actuator is consistent with gait cycle and muscle force data, in contrast to the optimal inputs for the hip and biarticular actuators. The optimal input force for the ankle actuator when its maximum force is bounded is similar to the general optimal input force saturated at that maximum force.

Limitations

The experimental data was obtained from a subject without an exosuit. The exosuit may change the kinematics of a subject as well as the ground-reaction forces. The simulation methodology of using CMC as an optimization tool works, but more improvement is needed: the CMC process doesn't minimize metabolic cost; instead, it minimizes the 2-norm of activation. The experimental data involved only one gait cycle. More realistic actuator simulations are needed (e.g., a combination of passive and active actuators).

Source Code

You can find the simulation models that I created on my Simtk project page.
ä is in the extended latin block and n is in the basic latin block, so there is a transition there, but you would have hoped \setTransitionsForLatin would not have inserted any code at that point, as both those blocks are listed as part of the latin block. But apparently not... — David Carlisle @egreg you are credited in the file, so you inherit the blame :-) @UlrikeFischer I was leaving it for @egreg to trace, but I suspect the package makes some assumptions about what is safe. It offers the user "enter" and "exit" code for each block, but xetex only has a single insert, the interchartoken at a boundary; the package isn't clear about what happens at a boundary if the exit of the left class and the entry of the right are both specified, nor whether anything is inserted at boundaries between blocks that are contained within one of the meta blocks like latin. Why do we downvote to a total vote of -3 or even lower? Weren't we a welcoming and forgiving community with the convention to only downvote to -1 (except for some extreme cases, like e.g. worsening the site design in every possible aspect)? @Skillmon people will downvote if they wish, and given that the rest of the network regularly downvotes, lots of new users will not know of or agree with a "-1" policy. I don't think it was ever really that regularly enforced; just that a few regulars regularly voted for bad questions to top them up if they got a very negative score. I still do that occasionally if I notice one. @DavidCarlisle well, when I was new there was like never a question downvoted to more than (or less?) -1. And I liked it that way. My first question on SO got downvoted to -infty before I deleted it and fixed my issues on my own. @DavidCarlisle I meant the total. Still, the general principle applies: when you're new and your question gets downvoted too much, this might cause the wrong impression.
@DavidCarlisle oh, subjectively I'd downvote that answer 10 times, but objectively it is not a good answer and might get a downvote from me, as you don't provide any reasoning for it, and I think that there should be a bit of reasoning with opinion-based answers, some objective arguments for why this is good. See for example the other Emacs answer (still subjectively a bad answer); that one is objectively good. @DavidCarlisle and that other one got no downvotes. @Skillmon yes, but many people just join for a while and come from other sites where downvoting is more common, so I think it is impossible to expect that there is no multiple downvoting; the only way to have a -1 policy is to get people to upvote bad answers more. @UlrikeFischer even harder to get than a gold tikz-pgf badge. @cis I'm not in the US but... "Describe", while it does have a technical meaning close to what you want, is almost always used more casually to mean "talk about". I think I would say "Let k be a circle with centre M and radius r". @AlanMunn definitions.net/definition/describe gives a Webster's definition: to represent by drawing; to draw a plan of; to delineate; to trace or mark out; as, to describe a circle by the compasses; a torch waved about the head in such a way as to describe a circle. If you are really looking for alternatives to "draw" in "draw a circle", I strongly suggest you hop over to english.stackexchange.com, create an account there and ask ... at least the number of native speakers of English will be bigger there, and the gamification aspect of the site will ensure someone rushes to help you out. Of course there is also a chance that they will repeat the advice you got here: to use "draw". @0xC0000022L @cis You've got identical responses here from a mathematician and a linguist. And you seem to have the idea that because a word is informal in German, its translation in English is also informal. This is simply wrong.
And formality shouldn't be an aim in and of itself in any kind of writing. @0xC0000022L @DavidCarlisle Do you know the book "The Bronstein" in English? I think that's a good example of archaic mathematician language. But it is still possibly harder. Probably depends heavily on the translation. @AlanMunn I am very well aware of the differences in word use between languages (and of my limitations in regard to my knowledge and use of English as a non-native speaker). In fact, words in different (related) languages sharing the same origin are kind of a hobby of mine. Needless to say, more than once the contemporary meanings didn't match 100%. However, your point about formality is well made. A book - in my opinion - is first and foremost a vehicle to transfer knowledge. No need to complicate matters by trying to sound ... well, overly sophisticated (?) ... The following MWE with showidx and imakeidx:
\documentclass{book}
\usepackage{showidx}
\usepackage{imakeidx}
\makeindex
\begin{document}
Test\index{xxxx}
\printindex
\end{document}
generates the error:
! Undefined control sequence.
<argument> \ifdefequal{\imki@jobname }{\@idxfile }{}{...
@EmilioPisanty ok I see. That could be worth writing to the arXiv webmasters, as this is indeed strange. However, it's also possible that the publishing of the paper got delayed; AFAIK the timestamp is only added later to the final PDF. @EmilioPisanty I would imagine they have frozen the epoch settings to get reproducible pdfs - not necessarily that helpful here, but... - anyway, it is better not to use \today in a submission, as you want the authoring date, not the date it was last run through tex. and yeah, it's better not to use \today in a submission, but that's beside the point - a whole lot of arXiv eprints use the syntax and they're starting to get wrong dates @yo' it's not that the publishing got delayed.
arXiv caches the pdfs for several years, but at some point they get deleted, and when that happens they only get recompiled when somebody asks for them again; and, when that happens, they get imprinted with the date at which the pdf was requested, which then gets cached. Does any of you on linux have issues running for foo in *.pdf ; do pdfinfo $foo ; done in a folder with suitable pdf files? My box says pdfinfo does not exist, but it clearly does when I run it on a single pdf file. @EmilioPisanty that's a relatively new feature, but I think they have a new enough tex; but not everyone will be happy if they submit a paper with \today and it comes out with some arbitrary date like 1st Jan 1970 @DavidCarlisle add \def\today{24th May 2019} in the INITEX phase and recompile the format daily? I agree, too much overhead. They should simply add "do not use \today" to these guidelines: arxiv.org/help/submit_tex @yo' I think you're vastly over-estimating the effectiveness of that solution (and it would not solve the problem with 20+ years of accumulated files that do use it) @DavidCarlisle sure. I don't know what the environment looks like on their side so I won't speculate. I just want to know whether the solution needs to be on the side of the environment variables, or whether there is a tex-specific solution @yo' that's unlikely to help with prints where the class itself reads the date from the system time. @EmilioPisanty well, the environment vars do more than tex (they affect the internal id in the generated pdf or dvi and so produce reproducible output), but you could, as @yo' showed, redefine \today or the \year, \month, \day primitives on the command line @EmilioPisanty you can redefine \year, \month and \day, which catches a few more things, but same basic idea @DavidCarlisle could be difficult with inputted TeX files. It really depends on at which phase they recognize which TeX file is the main one to proceed.
And as their workflow is pretty unique, it's hard to tell which way is even compatible with it. "beschreiben" (Engl. "describe") comes from the mathematical technical language of the 16th century, that is, from Middle High German, and means roughly "construct". That in turn comes from the original meaning of describe: "making a curved movement". In the literary style of the 19th and 20th centuries, and in the GDR, this usage appears. You have that in English too: scribe (verb): to score a line with a pointed instrument, as in metalworking (https://www.definitions.net/definition/scribe). @cis Yes, as @DavidCarlisle pointed out, there is a very technical mathematical use of 'describe' which is what the German version means too, but we both agreed that people would not know this use, so using 'draw' would be the most appropriate term. This is not about trendiness, just about making language understandable to your audience. Plan figure. The barrel circle over the median $s_b = |M_b B|$, which subtends the angle $\alpha$, also contains an isosceles triangle $M_b P B$ with base $|M_b B|$ and the angle $\alpha$ at the point $P$. The altitude to the base of the isosceles triangle bisects both $|M_b B|$ at $M_{s_b}$ and the angle $\alpha$ at the apex. \par The centroid $S$ divides the medians in the ratio $2:1$, with the longer part lying on the side of the vertex. The point $A$ lies on the barrel circle and on a circle $\bigodot(S,\frac23 s_a)$ described about $S$ of radius…
What is the fluorescence lifetime? The fluorescence lifetime - the average decay time of a fluorescent molecule's excited state - is a quantitative signature which can be used to probe structure and dynamics at the micro- and nanoscale. FLIM (Fluorescence Lifetime Imaging Microscopy) is used as a routine technique in cell biology to map the lifetime within living cells, tissues and whole organisms. The fluorescence lifetime is affected by a range of biophysical phenomena, and hence the applications of FLIM are many: from ion imaging and oxygen imaging to studying cell function and cell disease in quantitative cell biology using FRET. For simple fluorescent molecules the temporal decay can be described by an exponential probability density function: \[ P_{\mathrm{decay}}(t)=\frac{1}{\tau}\, e^{-t/\tau},\quad t\gt 0 \] where \(t\) is time and \(\tau\) is the excited-state lifetime. More complex fluorophores can be described using a multi-exponential probability density function: \[ P_{\mathrm{multi}}(t)=\sum_{i=1}^{N}\alpha_i\cdot P_{\tau_i}(t),\quad t\gt 0 \] where \(P_{\tau_i}\) is the single-exponential density with lifetime \(\tau_i\) and \(\alpha_i\) is the relative contribution of each component. Why measure fluorescence lifetime? A key advantage of the fluorescence lifetime is that it is a basic physical parameter that does not change with variations in local fluorophore concentration and is independent of the fluorescence excitation. Hence the lifetime is a direct quantitative measure, and its measurement - in contrast to, e.g., the recorded fluorescence intensity - does not require detailed calibrations. Excited-state lifetimes are also independent of the optical path of the microscope, photobleaching (at least to first order), and the local fluorescence detection efficiency.
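As a sanity check on these formulas, a small sketch (with illustrative lifetimes, not values from the text) that integrates the multi-exponential density numerically: the density integrates to one, and the mean decay time equals \(\sum_i \alpha_i \tau_i\):

```python
import numpy as np

def multi_exp_pdf(t, taus, alphas):
    """Multi-exponential decay density: sum_i alpha_i * (1/tau_i) * exp(-t/tau_i)."""
    taus = np.asarray(taus, dtype=float)
    alphas = np.asarray(alphas, dtype=float)
    return np.sum(alphas[:, None] / taus[:, None] * np.exp(-t[None, :] / taus[:, None]), axis=0)

# two-component decay with illustrative lifetimes (nanoseconds), not values from the text
taus, alphas = [0.5, 3.0], [0.6, 0.4]
t = np.linspace(0.0, 30.0, 30001)
pdf = multi_exp_pdf(t, taus, alphas)
dt = t[1] - t[0]

total = pdf.sum() * dt           # the density integrates to ~1
mean_tau = (t * pdf).sum() * dt  # mean decay time, analytically sum(alpha_i * tau_i) = 1.5
```

Fitting measured FLIM decays amounts to recovering the \(\tau_i\) and \(\alpha_i\) that best reproduce such a curve.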
The fluorescence lifetime does change when the molecules undergo de-excitation through processes other than fluorescence, such as dynamic quenching through molecular collisions with small soluble molecules like ions or oxygen (Stern-Volmer quenching), or energy transfer to a nearby molecule through FRET. As a result the fluorophores (in the excited state) lose their energy at a higher rate, causing a distinct decrease in the fluorescence lifetime. The measured rate of fluorescence decay is actually a summation of all of the rates of de-excitation. In this way the fluorescence lifetime mirrors any process in the micro-environment that quenches the fluorophores, and spatial differences in the amount of quenching reveal themselves as contrast in a lifetime image.
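The rate-summation picture can be made concrete; a minimal sketch with assumed, illustrative rate constants (not values from the text):

```python
def lifetime(*rates):
    """The measured decay rate is the sum of all de-excitation rates;
    the lifetime is the reciprocal of that sum."""
    return 1.0 / sum(rates)

k_rad = 2.5e8       # 1/s, assumed radiative rate -> unquenched lifetime of 4 ns
tau0 = lifetime(k_rad)

# adding a quenching pathway (e.g. collisions with oxygen) shortens the lifetime:
kq = 1e10           # 1/(M*s), assumed bimolecular quenching constant
Q = 0.025           # M, assumed quencher concentration
tau = lifetime(k_rad, kq * Q)  # total rate doubles, so the lifetime halves to 2 ns
```

This is exactly why lifetime contrast maps quencher concentration: each extra de-excitation channel adds its rate to the denominator.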
It's hard to say just from the sheet music; I don't have an actual keyboard here. The first line seems difficult; I would guess that the second and third are playable. But you would have to ask somebody more experienced. Having a few experienced users here: do you think that limsup could be a useful tag? I think there are a few questions concerned with the properties of limsup and liminf. Usually they're tagged limit. @Srivatsan it is unclear what is being asked... Is the inner or outer measure of $E$ meant by $m\ast(E)$? (Then the question whether it works for non-measurable $E$ has an obvious negative answer, since $E$ is measurable if and only if $m^\ast(E) = m_\ast(E)$ assuming completeness, or the question doesn't make sense.) If the ordinary measure is meant by $m\ast(E)$ then the question doesn't make sense. Either way: the question is incomplete and not answerable in its current form. A few questions where this tag would (in my opinion) make sense: http://math.stackexchange.com/questions/6168/definitions-for-limsup-and-liminf http://math.stackexchange.com/questions/8489/liminf-of-difference-of-two-sequences http://math.stackexchange.com/questions/60873/limit-supremum-limit-of-a-product http://math.stackexchange.com/questions/60229/limit-supremum-finite-limit-meaning http://math.stackexchange.com/questions/73508/an-exercise-on-liminf-and-limsup http://math.stackexchange.com/questions/85498/limit-of-sequence-of-sets-some-paradoxical-facts I'm looking for the book "Symmetry Methods for Differential Equations: A Beginner's Guide" by Haydon. Is there some ebook site (to which I hope my university has a subscription) that has this book? ebooks.cambridge.org doesn't seem to have it. Not sure about uniform continuity questions, but I think they should go under a different tag. I would expect most "continuity" questions to be in general-topology and "uniform continuity" in real-analysis. Here's a challenge for your Google skills...
can you locate an online copy of: Walter Rudin, Lebesgue's first theorem (in L. Nachbin (Ed.), Mathematical Analysis and Applications, Part B, in Advances in Mathematics Supplementary Studies, Vol. 7B, Academic Press, New York, 1981, pp. 741-747)? No, it was an honest challenge which I myself failed to meet (hence my "what I'm really curious to see..." post). I agree. If it is scanned somewhere, it definitely isn't OCR'ed, or it is so new that Google hasn't stumbled over it yet. @MartinSleziak I don't think so :) I'm not very good at coming up with new tags. I just think there is little sense in preferring one of liminf/limsup over the other, and every term encompassing both would most likely lead to us having to do the tagging ourselves, since beginners won't be familiar with it. Anyway, my opinion is this: I did what I considered the best way: I've created [tag:limsup] and mentioned liminf in the tag wiki. Feel free to create a new tag and retag the two questions if you have a better name. I do not plan on adding other questions to that tag until tomorrow. @QED You do not have to accept anything. I am not saying it is a good question; but that doesn't mean it's not acceptable either. The site's policy/vision is to be open towards "math of all levels". It seems hypocritical to me to declare this if we downvote a question simply because it is elementary. @Matt Basically, the a priori probability (the true probability) is different from the a posteriori probability after part (or the whole) of the sample point is revealed. I think that is a legitimate answer. @QED Well, the tag can be removed (if someone decides to do so). The main purpose of the edit was that you can retract your downvote. It's not a good reason for editing, but I think we've seen worse edits... @QED Ah. Once, when it was snowing at Princeton, I was heading toward the main door to the math department, about 30 feet away, and I saw the secretary coming out of the door.
Next thing I knew, I saw the secretary looking down at me asking if I was all right. OK, so chat is now available... but it has been suggested that for Mathematics we should have TeX support. The current TeX processing has some non-trivial client impact. Before I even attempt trying to hack this in, is this something that the community would want / use? (this would only apply ... So in between doing phone surveys for CNN yesterday I had an interesting thought. For $p$ an odd prime, define the truncation map $$t_{p^r}:\mathbb{Z}_p\to\mathbb{Z}/p^r\mathbb{Z}:\sum_{l=0}^\infty a_lp^l\mapsto\sum_{l=0}^{r-1}a_lp^l.$$ Then primitive roots lift to $$W_p=\{w\in\mathbb{Z}_p:\langle t_{p^r}(w)\rangle=(\mathbb{Z}/p^r\mathbb{Z})^\times\}.$$ Does $\langle W_p\rangle\subset\mathbb{Z}_p$ have a name or any formal study? > I agree with @Matt E, as almost always. But I think it is true that a standard (pun not originally intended) freshman calculus course does not provide any mathematically useful information or insight about infinitesimals, so thinking about freshman calculus in terms of infinitesimals is likely to be unrewarding. – Pete L. Clark In mathematics, in the area of order theory, an antichain is a subset of a partially ordered set such that any two elements in the subset are incomparable. (Some authors use the term "antichain" to mean strong antichain, a subset such that there is no element of the poset smaller than 2 distinct elements of the antichain.) Let S be a partially ordered set. We say two elements a and b of a partially ordered set are comparable if a ≤ b or b ≤ a. If two elements are not comparable, we say they are incomparable; that is, x and y are incomparable if neither x ≤ y nor y ≤ x. A chain in S is a... @MartinSleziak Yes, I almost expected the subnets debate. I was always happy with the order-preserving + cofinal definition and never felt the need for the other one. I haven't thought about Alexei's question really.
When I look at the comments on Norbert's question, it seems that the comments together already give a sufficient answer to his first question - and they came very quickly. Nobody said anything about his second question. Wouldn't it be better to divide it into two separate questions? What do you think, t.b.? @tb About Alexei's question, I spent some time on it. My guess was that it doesn't hold, but I wasn't able to find a counterexample. I hope to get back to that question. (But there are already too many questions which I would like to get back to...) @MartinSleziak I deleted part of my comment since I figured out that I never actually proved that in detail, but I'm sure it should work. I needed a bit of summability in topological vector spaces, but it's really no problem at all. It's just a special case of nets written differently (as series are a special case of sequences).
Energy balances are used to account for the energy needed to run a process. There are several different forms of energy. Independently of its form, energy is defined as the capacity to perform work or transfer heat. The unit of measure of energy is the joule (symbol: $\si{\joule}$, pronounced "jool"). Thus, standard units for energy are: To describe an energy balance you should understand the meaning of: Temperature. The temperature of a material is the result of the state of vibration of all the atoms and molecules composing that material. Heat is a form of energy. Like work, a body cannot contain heat: heat is always energy in transit, passing from a hot body to a cold body. There are two types of heat. Heat is often referred to a unit of mass. In that case, the heat content divided by the unit of mass is called specific enthalpy, which is the quantity of energy that a system of unit mass is able to exchange with the environment. It is measured in joules per kilogram (J/kg). Any time two materials have a difference in temperature, the hot material loses heat to the cold material. This energy in transit, which changes the temperature, is defined as sensible heat. Sensible heat can be calculated as: $$q = m \cdot C_p \cdot \Delta{T}$$ where the specific heat $C_p$ is the amount of energy required to raise the temperature of 1 kg of a substance by 1 kelvin (or Celsius degree); its units of measure are thus J/(kg·℃). Example. Determine the sensible heat required to bring 5 kg of water from 20 to 80℃. Solution: $$q = 5\ kg \cdot 4.19\ \frac{kJ}{kg \cdot ℃} \cdot (80 - 20)℃ = 1257\ kJ$$ Latent heat is a form of energy that is gained or lost during a phase transition. For instance, to bring water to a boil, you first need to provide sensible heat to raise the water temperature to 100℃. Afterwards, you need heat to transform the water into steam. Such heat is called the latent heat of vaporization.
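The sensible-heat formula and the worked example translate directly into code; a minimal sketch:

```python
def sensible_heat(m_kg, cp_kj, delta_t):
    """q = m * Cp * dT, in kJ when Cp is given in kJ/(kg*degC)."""
    return m_kg * cp_kj * delta_t

# worked example from the text: 5 kg of water, Cp = 4.19 kJ/(kg*degC), from 20 to 80 degC
q = sensible_heat(5, 4.19, 80 - 20)
print(round(q))  # 1257
```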
Note that this form of heat is exchanged without any change in temperature.

| Temperature (℃) | Pressure (kPa) | Specific volume ($\si{\meter^3\per\kilo\gram}$) | Enthalpy, liquid ($\si{\kilo\joule\per\kilo\gram}$) | Enthalpy, steam ($\si{\kilo\joule\per\kilo\gram}$) | Latent heat ($\si{\kilo\joule\per\kilo\gram}$) |
|---|---|---|---|---|---|
| 15 | 1.7051 | 77.026 | 62.99 | 2528.90 | 2465.91 |
| 30 | 4.246 | 32.894 | 125.79 | 2556.30 | 2430.51 |
| 45 | 9.593 | 15.258 | 188.45 | 2583.20 | 2394.75 |
| 60 | 19.940 | 6.671 | 251.13 | 2609.60 | 2358.47 |
| 75 | 38.58 | 4.131 | 313.93 | 2635.30 | 2321.37 |
| 90 | 70.14 | 2.361 | 376.92 | 2660.10 | 2283.18 |
| 100 | 101.325 | 1.6729 | 419.04 | 2676.10 | 2257.06 |
| 110 | 143.27 | 1.2102 | 461.30 | 2691.50 | 2230.20 |
| 120 | 198.53 | 0.8919 | 503.71 | 2706.30 | 2202.59 |
| 130 | 270.1 | 0.6685 | 546.31 | 2720.50 | 2174.19 |
| 140 | 316.3 | 0.5089 | 589.13 | 2733.90 | 2144.77 |

When a fluid is heated, we need to provide sensible heat. When there is a phase change, we also need to provide the latent heat: $$q = m \cdot C_p \cdot \Delta{T} + m \cdot \lambda$$ Example. Determine the heat required to transform 5 kg of water at 20℃ into steam at 1 bar of pressure. Solution: $$q = 5\ kg \cdot 4.19\ \frac{kJ}{kg \cdot ℃} \cdot (100 - 20)℃ + 5\ kg \cdot 2257\ \frac{kJ}{kg} = (1676 + 11285)\ kJ \approx 13\ MJ$$
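The combined sensible-plus-latent calculation, with the worked example's numbers for water at 1 bar:

```python
def heat_to_steam(m_kg, cp=4.19, t0=20.0, t_boil=100.0, latent=2257.0):
    """Sensible heat up to the boiling point plus latent heat of vaporization, in kJ.
    Defaults describe water at 1 bar; the latent heat is the 100-degC table value."""
    return m_kg * cp * (t_boil - t0) + m_kg * latent

q = heat_to_steam(5)
print(round(q))  # 12961, i.e. about 13 MJ as in the worked example
```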
This is inspired by ... an article by G. A. Krasinsky and V. A. Brumberg, "Secular Increase of Astronomical Unit from Analysis of the Major Planet Motions, and its Interpretation." Pitjeva and Pitjev (1) provide a simple explanation of the very large secular change in the astronomical unit found by Krasinsky and Brumberg: "In the paper by Krasinsky and Brumberg the au change was determined simultaneously with all other parameters, specifically, with the orbital elements of planets and the value of the au astronomical unit itself. However, at present it is impossible to determine simultaneously two parameters: the value of the astronomical unit, and its change. In this case, the correlation between au and its change $\dot{au}$ reaches 98.1 %, and leads to incorrect values of both of these parameters". What is the rate at which the Earth-Sun distance is changing? The first article cited in DavePhD's answer, by E. V. Pitjeva, is based on the refereed article by Pitjeva and Pitjev (1). Both articles provide rates of change of the Earth-Sun distance (specifically, the Earth-Sun semimajor axis length $a$) and of the 1976 definition of the astronomical unit $au$ as $$\begin{aligned}\frac{\dot a} a &= (1.35\pm0.32)\cdot10^{-14}/\text{century} \\\frac{\dot{au}}{au} &= (8\pm21)\cdot10^{-12}/\text{century}\end{aligned}$$ (Note: the latter is based on the published value of $\dot{au} = 1.2\pm 3.2$ cm/yr.) The reason for the nearly three orders of magnitude difference between these two figures is that the astronomical unit is not the distance between the Sun and the Earth. While that is how the astronomical unit was originally defined, the two concepts have been effectively divorced from one another since the end of the 19th century, when Simon Newcomb published his Tables of the Motion of the Earth on its Axis and Around the Sun.
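The conversion from the published cm/yr value to the quoted fractional rate per century is easy to verify; a quick sketch:

```python
AU_CM = 1.495978707e13  # one astronomical unit in centimeters

au_dot = 1.2            # published central value of d(au)/dt, in cm/yr
rate_per_century = au_dot * 100 / AU_CM
# ~8.0e-12 per century, matching the quoted fractional rate
```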
The divorce was made official in 1976, when the International Astronomical Union redefined the astronomical unit to be the unit of length that makes the Gaussian gravitational constant $k$ have a numerical value of 0.017202098950000 when expressed in the astronomical system of units (the unit of length is one astronomical unit, the unit of mass is one solar mass, and the unit of time is 86400 seconds (one day)). Who has done this analysis, and what do they find? Also, what methods are used? There are three key groups: The Institute of Applied Astronomy of the Russian Academy of Sciences, which produces the Ephemerides of Planets and the Moon series (EPMxxxx) of ephemerides (1); The Jet Propulsion Laboratory of NASA, which produces the Development Ephemeris series (DExxx) of ephemerides (2), and also ephemerides for small solar system bodies; and The Institute of Celestial Mechanics and of the Calculation of Ephemerides of the Paris Observatory (L'institut de mécanique céleste et de calcul des éphémérides, IMCCE), which produces the Integration Numerique Planetaire de l'Observatoire de Paris series (INPOPxx) of ephemerides (3). All three numerically solve the equations of motion for the solar system using a first-order post-Newtonian expansion, given a set of states at some epoch time. This integration of course will not match the several hundred thousand observations that have been collected over time. All three use highly specialized regression techniques to update the epoch states so as to somehow minimize the errors between estimates and observations. All three carefully address highly correlated state elements, something that Krasinsky and Brumberg did not do. All three share observational data, sometimes cooperate (joint papers, IAU committees, ...), and sometimes compete ("our technique is better than yours (at least for now)"). For example, is radar really that precise? Regarding radar, the distance to the Sun was never measured directly via radar.
Unless massively protected with filters, pointing a telescope of any sort directly at the Sun is generally a bad idea. If massively protected with filters, a radio antenna would not see the weak radar return. Those 1960s radar measurements were of Mercury, Venus, and Mars. There's no compelling reason to ping those planets now that humanity has sent artificial satellites into orbit about them. Sending an artificial satellite into orbit about a planet (as opposed to flying by it) provides significantly higher quality measurements than do radar pings. References: (1) E. V. Pitjeva and N. P. Pitjev, "Changes in the Sun's mass and gravitational constant estimated using modern observations of planets and spacecraft," Solar System Research 46.1 (2012): 78-87. (2) E. Myles Standish and James G. Williams, "Orbital ephemerides of the Sun, Moon, and planets," Explanatory Supplement to the Astronomical Almanac (2012): 305-346. (3) A. Fienga, et al., "INPOP: evolution, applications, and perspectives," Proceedings of the International Astronomical Union 10.H16 (2012): 217-218.
Given the sample mean $\bar{x}$ and the sample variance $s^2$ of a random variable $X$, is it possible to estimate the shape $\sigma^2$ and log-scale $\mu$ of the log-normal distribution, with probability density function $$f_X(x;\mu,\sigma^2) =\frac{1}{x \sqrt{2 \pi \sigma^2}}\, \mathrm{exp}\left( {-\frac{(\ln x - \mu)^2}{2\sigma^2}}\right), \text{ if } x>0,$$ and if so, how are any resulting formulae derived? Suppose that $X = \ln(Y)$ follows a normal distribution with mean $\mu$ and variance $\sigma^2$ ($N(\mu,\sigma^2)$). Using $$ E(g(X)) = \int_{-\infty}^{+\infty}g(x)p(x)dx$$ (where $p(x)$ is the pdf of the normal distribution), we have that $$E(Y) = E(\exp(X)) = \int_{-\infty}^{+\infty}\exp(x)p(x)dx=\exp(\sigma^2/2+\mu)$$ $$E(Y^2) = E(\exp(X)^2) = \int_{-\infty}^{+\infty}\exp(x)^2p(x)dx=\exp(2\sigma^2+2\mu)$$ from which we conclude that the variance is $$V(Y) = \exp(2\mu+2\sigma^2)-\exp(2\mu+\sigma^2)$$ This is not directly an answer to your question, but if you can instead get the mean and standard deviation of $\log X$ then you should be able to reuse the existing estimators for the normal distribution. That seems to be the simplest method, actually, unless there is additionally a location parameter.
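Inverting the two moment equations gives closed-form method-of-moments estimates: since $s^2/\bar{x}^2 = e^{\sigma^2}-1$, one gets $\sigma^2 = \ln(1 + s^2/\bar{x}^2)$ and $\mu = \ln\bar{x} - \sigma^2/2$. A small round-trip sketch:

```python
import math

def lognormal_from_moments(xbar, s2):
    """Method-of-moments estimates (mu, sigma2) from the sample mean and variance."""
    sigma2 = math.log(1.0 + s2 / xbar**2)
    mu = math.log(xbar) - sigma2 / 2.0
    return mu, sigma2

# round-trip check: compute exact moments from known parameters, then recover them
mu, sigma2 = 0.5, 0.25
mean = math.exp(mu + sigma2 / 2.0)
var = (math.exp(sigma2) - 1.0) * math.exp(2.0 * mu + sigma2)
mu_hat, sigma2_hat = lognormal_from_moments(mean, var)
```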
Shapes of constant width Yes - there are shapes of constant width other than the circle. No - you can't drill square holes. But saying this was not just an attention catcher. As the applet below illustrates, you can drill holes that are almost square - drilled holes whose border includes straight line segments! Now then, let us define the subject of our discussion. First we need a notion of width. Let there be a bounded shape. Pick two parallel lines so that the shape lies between the two. Move each line towards the shape, all the while keeping it parallel to its original direction. After both lines have touched the figure, measure the distance between them. This is called the width of the shape in the direction of the two lines. A shape is of constant width if its (directional) width does not depend on the direction. This unique number is called the width of the figure. For the circle, the width and the diameter coincide. The curvilinear triangle above is built the following way. Start with an equilateral triangle. Draw three arcs, each with radius equal to the side of the triangle and centered at one of the vertices. The figure is known as the Reuleaux triangle. Convince yourself that the construction indeed results in a figure of constant width. Starting with this we can create more. A rotating Reuleaux triangle covers most of the area of the enclosing square: the covered fraction is $\displaystyle A=2\sqrt{3}+\frac{\pi}{6}-3=0.98770039073605346013\ldots,$ which is pretty close to $1,$ the area of the square. Now generalize the construction a little by extending the sides of the triangle the same distance beyond its vertices. This creates three $60^{\circ}$ angles external to the triangle. In each of these angles draw an arc with the center at the nearest vertex. There are many other shapes of constant width. Can you think of any? There are in fact curves of constant width that include no circular arcs, however small.
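One way to convince yourself numerically that the construction has constant width: sample the three arcs of a Reuleaux triangle and measure the directional width (maximum minus minimum projection) in many directions. A sketch:

```python
import numpy as np

w = 1.0  # side of the equilateral triangle = width of the Reuleaux triangle
V = np.array([[0.0, 0.0], [w, 0.0], [w / 2, w * np.sqrt(3) / 2]])

# each arc is centered at one vertex and spans the 60 degrees facing it
arcs = [(V[0], 0, 60), (V[1], 120, 180), (V[2], 240, 300)]
pts = np.vstack([
    c + w * np.column_stack([np.cos(th), np.sin(th)])
    for c, a0, a1 in arcs
    for th in [np.radians(np.linspace(a0, a1, 400))]
])

widths = []
for theta in np.linspace(0, np.pi, 36, endpoint=False):
    u = np.array([np.cos(theta), np.sin(theta)])
    proj = pts @ u
    widths.append(proj.max() - proj.min())

max_dev = max(abs(wd - w) for wd in widths)  # tiny: constant width up to discretization
```

In every sampled direction the width comes out equal to the triangle's side, up to the arc-sampling resolution.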
Here are some problems concerning shapes of constant width.

1. For the Reuleaux triangle, find its area. Does it exceed the area of the circle of the same width (diameter)?
2. For the Reuleaux triangle, derive a formula linking the length of the perimeter to its width. Does it remind you of anything? (Check Barbier's Theorem.)
3. The angle between two intersecting curves is defined as the angle between their tangents at the point of intersection. Find the internal angles of the Reuleaux triangle.
4. Show that for every point on the boundary of a figure of constant width there exists another boundary point with the distance between the two equal to the width of the shape.
5. Show that the distance between any two points inside a shape of constant width never exceeds its width.
6. Assume, in the diagram above, the triangle's side is 50 while each side was extended 10 units in each direction. What is the width of the resulting shape?
7. Show that the length of the boundary of a shape of constant width depends only on its width.

Note: A visitor to the CTK Exchange who only identified himself as Giorgis left this message (Jul 12, 2008): You have an attention grabber by stating "One can even drill square holes" and later you state with emphasis "No - you can't drill square holes" so I rectify it for you.
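The first two problems (the area and the perimeter of the Reuleaux triangle) have well-known closed forms, which can be checked quickly; a sketch with width $w = 1$ assumed:

```python
import math

w = 1.0  # width (assumed 1 for the check)

# area: equilateral triangle plus three circular segments -> (pi - sqrt(3))/2 * w^2
area_reuleaux = (math.pi - math.sqrt(3)) / 2.0 * w**2  # ~0.7048
area_circle = math.pi * w**2 / 4.0                     # ~0.7854, so the circle is larger

# perimeter: three arcs of radius w, each subtending pi/3 -> pi * w,
# the same as a circle of diameter w, as Barbier's theorem predicts
perimeter = 3.0 * w * math.pi / 3.0
```

So the Reuleaux triangle has a smaller area than the circle of the same width, while (by Barbier's theorem) every curve of constant width $w$ shares the same perimeter $\pi w$.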
All you need is a drill bit, an almost stock-standard round drill bit!! Make sure it has a $90$ degree cone at the bottom. Take a piece of shim, fold it in half. Clamp it in a vice edge-wise. Drill a hole no deeper than the height of the cone. Take the shim out of the vice, unfold ... Square hole with sharp corners with a round drill bit no less !! While this approach does illustrate good "thinking out of the box", it may be hard to hold the drill steady, for the drill will make contact with a very narrow surface, an edge actually. I think it could be easier to use a pair of cutters to cut off a $45$-$90$-$45$ triangle. The result is the same.
Mensuration (Perimeter & Area, Review of Earlier Concepts) Category : 6th Class FUNDAMENTALS (a) Perimeter of a square \[=4\times\] side. Elementary question-1: Find the perimeter of a square kabaddi field each of whose sides is 20 metres. Ans. Perimeter \[=4\times 20=80\,\,m\] (b) Perimeter of a rectangle \[=2\times\] (length + breadth) units. (c) Area of a square = side \[\times\] side. (d) Area of a rectangle = length \[\times\] breadth. (e) Area of a parallelogram = base \[\times\] height sq. units. (f) Area of a triangle \[=\frac{1}{2}\] (Area of the parallelogram generated from it) \[=\frac{1}{2}\times\] base \[\times\] height sq. units. Circumference of a circle \[=2\pi r=\pi d,\] where \[r=\] radius and \[d=\] diameter. Here \[\pi \,\,(Pi)\] is a constant, approximately equal to \[3.14\].
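The formulas above translate directly into code. A minimal sketch (the function names are mine, and $\pi\approx 3.14$ as in the text):

```python
def square_perimeter(side):
    return 4 * side                    # (a) 4 x side

def rectangle_perimeter(length, breadth):
    return 2 * (length + breadth)     # (b) 2 x (length + breadth)

def square_area(side):
    return side * side                # (c) side x side

def rectangle_area(length, breadth):
    return length * breadth           # (d) length x breadth

def parallelogram_area(base, height):
    return base * height              # (e) base x height

def triangle_area(base, height):
    return 0.5 * base * height        # (f) half the parallelogram

def circumference(r):
    return 2 * 3.14 * r               # 2*pi*r, with pi taken as 3.14

print(square_perimeter(20))  # 80, the kabaddi-field example
```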
In my population genetics book (see reference at bottom) they define them as: Nucleotide polymorphism (θ): the proportion of nucleotide sites that are expected to be polymorphic in any sample of size 5 from this region of the genome. $\hat{θ}$ equals the proportion of polymorphic nucleotide sites observed in the sample (S) divided by $a_1 = \sum_{i=1}^{n-1} \frac{1}{i}$. If n = 5, $a_1 = \frac{1}{1}+ \frac{1}{2}+ \frac{1}{3}+ \frac{1}{4}=2.083$ Nucleotide diversity (π) is the average proportion of nucleotide differences between all possible pairs of sequences in the sample. In R, I came up with this code, which is in accordance with what is in the book. data # Matrix of the polymorphic sites only (rows = samples, columns = sites) total.snp # Total number of sites examined (e.g. 16 might be polymorphic over the 500 sites that we've looked at, so 484 sites are monomorphic) n = nrow(data) # Number of samples pwcomp = n*(n-1)/2 # Number of pairwise comparisons temp = c() # Pairwise differences per site for(i in 1:ncol(data)){ # Compute the number of differences at each polymorphic site t.v = as.vector(table(data[,i])) z = outer(t.v,t.v,'*') temp = c(temp,sum(z[lower.tri(z)])) } pi.hat = sum(temp)/(pwcomp*total.snp) # Nb of differing pairwise comparisons/(nb of all pairwise comparisons * total nb of loci (polymorphic or not)) In the book, they say: On the other hand, there is a theoretical relation between θ and π that is expected under the simplifying assumption that the alleles are invisible to natural selection. In summary, θ = π with this assumption and large sample sizes. So what is the difference between the two? Why are we calculating both (what could they tell us)? Hartl, D. L., & Clark, A. G. (1997). Principles of Population Genetics (3rd ed.). Sinauer Associates Incorporated.
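A compact way to compute both estimators side by side — a sketch in Python rather than R, on a small hypothetical alignment. Note that here π is the mean number of pairwise differences over the whole alignment; divide by the number of sites for the per-site version used in the book:

```python
from itertools import combinations

# Hypothetical sample of n = 5 sequences, 10 sites each
seqs = [
    "AATGCCATGG",
    "AATGCCATGG",
    "ACTGCCATGG",
    "AATGCGATGG",
    "AATGCCATGC",
]
n = len(seqs)
L = len(seqs[0])

# S: number of segregating (polymorphic) sites
S = sum(1 for j in range(L) if len({s[j] for s in seqs}) > 1)

# Watterson's estimator: theta_hat = S / a1, with a1 = sum_{i=1}^{n-1} 1/i
a1 = sum(1 / i for i in range(1, n))
theta_hat = S / a1

# Nucleotide diversity: mean pairwise differences over all n*(n-1)/2 pairs
diffs = [sum(a != b for a, b in zip(s1, s2)) for s1, s2 in combinations(seqs, 2)]
pi_hat = sum(diffs) / len(diffs)

print(S, round(a1, 3), round(theta_hat, 3), round(pi_hat, 3))
```

Both estimate the same population parameter under neutrality, but they weight variants differently: θ only counts whether a site is polymorphic, while π also reflects allele frequencies, which is why their comparison (e.g. Tajima's D) is informative about selection and demography.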
First let me say that what is intuitive to a physicist may not be so to a geometer, and vice versa. To many physicists a connection is the potential of a field satisfying a gauge invariance. For this point of view I refer to vol. 1, Chap. 6, sect. 41 of the three-volume book by Dubrovin, Fomenko and Novikov: Modern Geometry - Methods and Applications. I find this point of view less intuitive only because I was trained as a mathematician. The notion of covariant derivative appears naturally when one tries to solve the following problem. Suppose that $E\to M$ is a smooth vector bundle over a smooth manifold $M$. For example, $E$ could be the tangent bundle of $M$. We seek a notion of parallel transport that will allow us to compare vectors situated in different fibers of the bundle. More precisely, this is a correspondence that associates to each smooth path $$\gamma: [a,b]\to M$$ a linear map $T_\gamma$ from the fiber of $E$ over the initial point of $\gamma$ to the fiber of $E$ over the final point of $\gamma$, $$T_\gamma: E_{\gamma(a)}\to E_{\gamma(b)}.$$ The map $T_\gamma$ is called the parallel transport along the path $\gamma$. The assignment $\gamma\mapsto T_\gamma$ should satisfy two natural conditions. (a) $T_\gamma$ should depend smoothly on $\gamma$. (The precise meaning of this smoothness is a bit technical to formulate, but in the end it means what your intuition tells you it should mean.) (b) If $\gamma_0: [a,b]\to M$ and $\gamma_1:[b,c]\to M$ are two smooth paths such that the initial point of $\gamma_1$ coincides with the final point of $\gamma_0$, then we obtain by concatenation a path $\gamma:[a,c]\to M$, and we require that $$T_\gamma= T_{\gamma_1}\circ T_{\gamma_0}. $$ Suppose we have a concept of parallel transport. Given a smooth path $\gamma:[0,1]\to M$ and a section $\boldsymbol{u}(t)\in E_{\gamma(t)}$, $t\in [0,1]$, of $E$ over $\gamma$, we can define a concept of derivative of $\boldsymbol{u}$ along $\gamma$.
More precisely $$ \nabla_{\dot{\gamma}} \boldsymbol{u}|_{t=t_0}=\lim_{\varepsilon \to 0} \frac{1}{\varepsilon} \left( T^{t_0,t_0+\varepsilon}_\gamma \boldsymbol{u}(t_0+\varepsilon)- \boldsymbol{u}(t_0)\right), $$ where $ T^{t_0,t_0+\varepsilon}_\gamma$ denotes the parallel transport along $\gamma$ from the fiber of $E$ over $\gamma(t_0+\varepsilon)$ to the fiber of $E$ over $\gamma(t_0)$. The left-hand side of the above equality is called the covariant derivative of $\boldsymbol{u}$ along the vector field $\dot{\gamma}$ determined by the parallel transport. Thus, a choice of parallel transport leads to a concept of covariant derivative. Conversely, a covariant derivative $\nabla$ leads to a parallel transport. Given a smooth path $\gamma:[0,1]\to M$, the parallel transport $$T_{\gamma}: E_{\gamma(0)}\to E_{\gamma(1)} $$ is defined as follows. Fix $u_0\in E_{\gamma(0)}$. Then there exists a unique section $\boldsymbol{u}(t)$ of $E$ over $\gamma$ satisfying $$ \boldsymbol{u}(0)=u_0,\;\;\nabla_{\dot{\gamma}}\boldsymbol{u}(t)=0,\;\;\forall t\in [0,1].$$ We then set $\newcommand{\bu}{\boldsymbol{u}}$ $$T_\gamma \bu_0:= \boldsymbol{u}(1).$$ This construction allows us to define the covariant derivative $\nabla_X\bu$ of a section $\bu$ of $E$ along a vector field $X$ of $M$. It satisfies the rescaling property $$ \nabla_{fX}\bu=f\big(\nabla_X\bu\big),\;\;\forall f\in C^\infty(M). $$ A connection on $TM$ will then satisfy $$\nabla_{fX} Y=f\big(\nabla_X Y\big),$$ for any vector fields $X, Y$ and any smooth function $f$. On the other hand, the Lie derivative satisfies $$L_{fX} Y= fL_XY-(Yf)X, $$ so it cannot be a covariant derivative.
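As a minimal worked example (my illustration, not part of the answer above): take the trivial line bundle $E=[0,1]\times\mathbb{R}$ over a curve, with covariant derivative $\nabla_{d/dt}\,u=\dot u + a(t)\,u$ for some smooth function $a$. The parallel-transport ODE can then be solved explicitly:

```latex
% Parallel sections solve a first-order linear ODE:
\nabla_{\dot\gamma}\boldsymbol{u}=0
\iff \dot{u}(t)+a(t)\,u(t)=0
\implies u(t)=u_0\exp\!\Big(-\int_0^t a(s)\,ds\Big),
% so the transport map is multiplication by a positive scalar:
\qquad T_\gamma u_0=\exp\!\Big(-\int_0^1 a(s)\,ds\Big)\,u_0 .
```

In particular $T_\gamma$ is linear in $u_0$, and concatenating paths multiplies the exponential factors, which is exactly the composition condition (b).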
Fussy spacing issues These are issues that won't affect the correctness of formulas, but might make them look significantly better or worse. Beginners should feel free to ignore this advice; someone else will correct it for them, or more likely nobody will care. Don't use \frac in exponents or limits of integrals; it looks bad and can be confusing, which is why it is rarely done in professional mathematical typesetting. Write the fraction horizontally, with a slash: $$\begin{array}{cc}\mathrm{Bad} & \mathrm{Better} \\\hline \\e^{i\frac{\pi}2} \quad e^{\frac{i\pi}2}& e^{i\pi/2} \\\int_{-\frac\pi2}^\frac\pi2 \sin x\,dx & \int_{-\pi/2}^{\pi/2}\sin x\,dx \\\end{array}$$ The | symbol has the wrong spacing when it is used as a divider, for example in set comprehensions. Use \mid instead: $$\begin{array}{cc}\mathrm{Bad} & \mathrm{Better} \\\hline \\\{x|x^2\in\Bbb Z\} & \{x\mid x^2\in\Bbb Z\} \\\end{array}$$ For double and triple integrals, don't use \int\int or \int\int\int. Instead use the special forms \iint and \iiint: $$\begin{array}{cc}\mathrm{Bad} & \mathrm{Better} \\\hline \\\int\int_S f(x)\,dy\,dx & \iint_S f(x)\,dy\,dx \\\int\int\int_V f(x)\,dz\,dy\,dx & \iiint_V f(x)\,dz\,dy\,dx\end{array}$$ Use \, to insert a thin space before differentials; without this $\TeX$ will mash them together: $$\begin{array}{cc}\mathrm{Bad} & \mathrm{Better} \\\hline \\\iiint_V f(x)dz dy dx & \iiint_V f(x)\,dz\,dy\,dx\end{array}$$This post imported from StackExchange Mathematics Meta at 2014-04-16 16:08 (UCT), posted by SE-user MJD
Activity monitoring for seniors living alone Identifying Critical Events If a senior uses water within an observation period, but the water does not flow all the time, there is probably nothing to worry about. Critical cases are those in which no water flows or water flows for a long time (see cases A and B). To identify critical cases K, the behavior vector is normalized and a case distinction is performed (Listing 1). Listing 1 Identifying Critical Cases #LaTeX formula \begin{eqnarray*} \|v\| & = & \sqrt{v^2_1 + v^2_2 + \cdots + v^2_n} = \sqrt{\sum_{i=1}^n{v^2_i}}\\ K & = & \begin{cases} 1 & \textrm{if } \|v\| = 0 \\ 1 & \textrm{if } \|v\| = \sqrt{n} \\ 0 & \textrm{otherwise} \end{cases} \end{eqnarray*} At K=1, Seheiah queries the database for each value in the behavior vector v, as it appeared at that time in the past. The tolerance is taken into account, as is whether events need to be checked against weekdays or weekends/holidays. All events found in the period t±(n×l) seconds are summed and divided by the number of days monitored. These steps are used to form the Laplace probability P(e_t), as shown in Listing 2. Listing 2 Calculating Laplace Probability \begin{eqnarray*} P(e_t) & = & \frac{number\:of\:events\:at\:time\:t \pm (n \ast l)\:sec}{number\:of\:recorded\:days}\\ & = & \frac{\sum_{i=1}^D{e_{i_{t \pm (n \ast l)}}}}{D} \end{eqnarray*} If you are dealing with an "ideal" senior, who is at home every day and consumes water at the same time within the tolerance period, the probability is 100 percent [P(e_t)=1]. Because retirees and their lives are usually not ideal in the mathematical sense, the probability will be lower in many cases. Thus, a threshold u for fairly safe behavior (thresholdProbability = u) is set; this value also takes into account that the subject might have a lazy day from time to time, or that the regular daily routine might differ occasionally.
If a probability below this threshold is determined, none of the related events are taken into account, and the probability is set to [P(e_t)=0]. This approach also prevents infrequent events (e.g., a visitor uses the bathroom, asks for a glass of water, etc.) from being evaluated. The probability values are entered into the historical behavior vector h, which comes from the same vector space as v. The vectors v and h are then compared for cosine similarity \cos(\varphi), where \varphi is the angle between v and h (Listing 3). Listing 3 Comparing Historical Behavior Vectors \begin{eqnarray*} \cos(\varphi) & = & \frac{v \cdot h}{\|v\| \ast \|h\|}\\ & = & \frac{\sum_{i=1}^n{v_i \ast h_i}}{\sqrt{\sum_{i=1}^n{v^2_i}} \ast \sqrt{\sum_{i=1}^n{h^2_i}}} \end{eqnarray*} In this example, \cos(\varphi)=1 would mean a complete match, whereas v and h are orthogonal to one another at \cos(\varphi)=0 and do not resemble one another one iota. To detect unusual behavior, a threshold value s is also set for the cosine similarity (thresholdCosSimilarity = s), depending on the number of intervals observed (n). For example, s=0.7 represents a good initial threshold for the default value of n=3. If \cos(\varphi)< s occurs several times in succession (3n), the alarm cascade is triggered (Figure 2). The system also can be informed about periods of absence. If the senior politely says "Seheiah bye bye" when leaving the apartment, critical events are not evaluated. When water is used later, the system is automatically re-enabled. Alarm Cascade In case of an alarm, the alarm cascade is responsible for notifying family members or caregivers about a suspected fall. The alarm cascade communicates with the activity monitor and voice recognition via a Unix socket. All messages received there (ALARM, UNEXPECTED BEHAVIOR, FINE, WATER FLOW) are interpreted. If the activity detector identifies deviant behavior, it sends the message UNEXPECTED BEHAVIOR to the alarm cascade.
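Listing 3 amounts to only a few lines of code. A sketch (not Seheiah's actual implementation; the vectors `v` and `h` hold hypothetical values):

```python
import math

def cosine_similarity(v, h):
    # cos(phi) = (v . h) / (||v|| * ||h||), as in Listing 3
    dot = sum(a * b for a, b in zip(v, h))
    norm_v = math.sqrt(sum(a * a for a in v))
    norm_h = math.sqrt(sum(b * b for b in h))
    return dot / (norm_v * norm_h)

s = 0.7                 # thresholdCosSimilarity for n = 3 intervals, as in the text
v = [1, 0, 0]           # observed behavior vector (water in first interval only)
h = [0.9, 0.8, 0.7]     # historical Laplace probabilities (hypothetical)
print(cosine_similarity(v, h) < s)  # True: below threshold, counts toward an alarm
```

Repeated below-threshold comparisons (3n in a row) would then trigger the alarm cascade described next.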
This event triggers playback of an audio file ( <path_to_seheiah>/mp3s/unexpected_behavior.mp3), in which the senior is asked to confirm her well-being (message FINE). She has two minutes in which to comply. False alarms can occur relatively frequently, depending on the values set for intervals, behavior vectors or thresholds, or behavior changes, such as sleeping longer in the morning. Assuming that the senior really is feeling fine, she can cancel the alarm by a simple voice command ("Seheiah alarm off"). For the senior, a false alarm is no more than an annoying error that costs significantly less than a fall that goes unnoticed. If the senior does not confirm her well-being within this period or if the alarm cascade receives an ALARM message, an alarm is triggered. To do this, the webcam grabs a snapshot and then mails it – along with the request to take care of the senior – to a list of recipients (Figure 3).
Chapter 03: General Theorems, Indeterminate Forms Rolle's theorem Geometrical interpretation of Rolle's theorem The mean value theorems Another form of the mean value theorem Increasing and decreasing functions Cauchy's mean value theorem Extended mean value theorem Indeterminate forms The form $\frac{0}{0}$ The form $\frac{\infty}{\infty}$ The form $0\times \infty$ (or $\infty \times 0$) The form $\infty-\infty$ The forms $0^0, 1^\infty, \infty^0$ Use of expansions Solutions We are very thankful to Muhammad Idrees for providing notes of Exercise 3.3 bsc/notes_of_calculus_with_analytic_geometry/ch03_general_theorem_intermediate_forms Last modified: 3 weeks ago by Dr. Atiq ur Rehman
NCERT Solutions For Class 7 Maths Chapter 5 Lines and Angles are given here in a simple and detailed way. These solutions are provided to help students clear the concepts covered in chapter 5 and understand each question and its solution in an easy manner. The solutions are prepared by our experts, taking into consideration the Class 7 CBSE syllabus for Maths. The NCERT solutions are prepared in such a way that students in the 7th standard are able to grasp the concepts of lines and angles easily and prepare themselves well for exams to score higher marks. These materials can also be used by students appearing for competitive exams at the state or national level, such as olympiads or talent-search exams to be conducted in 2019, which test students' logical and reasoning capabilities. Class 7 Maths NCERT Solutions – Lines and Angles In chapter 5 of the NCERT class 7 maths book, a lot of questions are included which will serve as the basics and will help you to solve questions of higher standards. Therefore, the solutions for the chapter Lines and Angles cover the important topics of the NCERT book, such as: Complementary angles and supplementary angles Adjacent angles and vertically opposite angles Intersecting lines Transversal and angles made by a transversal Transversal of parallel lines, etc. NCERT Solutions For Class 7 Maths Chapter 5 Exercises NCERT Solutions For Class 7 Maths Chapter 5 Lines and Angles Exercise 5.1 NCERT Solutions For Class 7 Maths Chapter 5 Lines and Angles Exercise 5.2 NCERT Solutions For Class 7 Maths Chapter 5 Exercises-5.1 Q.1. Find the complement of each of the following angles: Ans: The sum of the measures of two complementary angles is 90°. (i) 40° Complement = 90° – 40° = 50° (ii) 53° Complement = 90° – 53° = 37° (iii) 67° Complement = 90° – 67° = 23° Q2.
Find the supplement of each of the following angles: Ans: The sum of the measures of two supplementary angles is 180°. (i) 95° Supplement = 180° – 95° = 85° (ii) 83° Supplement = 180° – 83° = 97° (iii) 155° Supplement = 180° – 155° = 25° Q3: Find out which of the following pairs of angles are supplementary and which are complementary. (i) 60°, 120° (ii) 67°, 23° (iii) 108°, 72° (iv) 150°, 30° (v) 50°, 40° (vi) 72°, 18° If the sum of the measures of two angles is 90°, they are complementary angles. If the sum of the measures of two angles is 180°, they are known as supplementary angles. Ans: (i) 60°, 120° Sum of the above angles = 60° + 120° = 180° The above angles are supplementary. (ii) 67°, 23° Sum of the above angles = 67° + 23° = 90° The above angles are complementary. (iii) 108°, 72° Sum of the above angles = 108° + 72° = 180° The above angles are supplementary. (iv) 150°, 30° Sum of the above angles = 150° + 30° = 180° The above angles are supplementary. (v) 50°, 40° Sum of the above angles = 50° + 40° = 90° These angles are complementary. (vi) 72°, 18° Sum of the above angles = 72° + 18° = 90° These angles are complementary. Q4: Determine the angle that is equal to its own complement. Ans: Let the angle be a, and let its complement also be a. Since the sum of the measures of two complementary angles is 90°: a + a = 90° 2a = 90° a = 90°/2 = 45° Q5. Determine the angle that is equal to its own supplement. Ans: Let the angle be a, and let its supplement also be a. Since the sum of the measures of two supplementary angles is 180°: a + a = 180° 2a = 180° a = 90° Q6. In the below given figure, \(\angle 1\) and \(\angle 2\) are supplementary angles. If \(\angle 1\) is decreased, what should be the change in \(\angle 2\) so that the pair remains supplementary? \(\angle 1\) and \(\angle 2\) are supplementary angles.
Ans: If \(\angle 1\) is decreased, then \(\angle 2 \) should be increased by the same value so that this angle pair remains supplementary. Q7: Can 2 angles be supplementary if both the angles are: (i) Obtuse? (ii) Acute? (iii) Right? Ans: (i) No. An obtuse angle is always greater than 90°, so the sum of two obtuse angles is always more than 180°. Thus, two obtuse angles cannot form a supplementary pair. (ii) No. An acute angle is always less than 90°. Even if we add two of the largest acute angles (89° each), the sum cannot reach 180°. So, two acute angles cannot make a supplementary pair. (iii) Yes. The value of a right angle is 90°, and the sum of two right angles is 180°. So, 2 right angles together make a supplementary pair. Q8: Find out whether the complement of an angle greater than 45° is less than 45°, greater than 45°, or equal to 45°. Ans: Let X and Y be two angles which make a complementary pair, with X greater than 45°. X + Y = 90° Y = 90° – X Therefore, Y will be less than 45°. Q9. In the following figure: (a) Is \(\angle 1\) adjacent to \(\angle 2\)? (b) Is \(\angle AOC \) adjacent to \(\angle AOE \)? (c) Do \(\angle COE \) and \(\angle EOD \) form a linear pair? (d) Are \(\angle BOD \) and \(\angle DOA \) supplementary? (e) Is \(\angle 1\) vertically opposite to \(\angle 4\)? (f) Which angle is vertically opposite to \(\angle 5\)? Ans: (a) Yes. They have the vertex O in common and also the arm OC in common. Also, their non-common arms, OA and OE, are on either side of the common arm. (b) No. They have the vertex O in common and also the arm OA in common. However, their non-common arms, OC and OE, are on the same side of the common arm. So, these are not adjacent to each other. (c) Yes. They have the vertex O in common and the arm OE in common. Also, their non-common arms, OC and OD, are opposite rays. (d) Yes.
Since \(\angle BOD\) and \(\angle DOA\) have the vertex O in common and their non-common arms are opposite to each other. (e) Yes. These angles are formed by the intersection of two straight lines, namely AB and CD. (f) \(\angle COB\) is the vertically opposite angle of \(\angle 5\), as these are formed by the intersection of the two straight lines AB and CD. Q10. Looking at the given diagram below, identify the following pairs of angles: (i) Linear pairs. (ii) Vertically opposite angles. Ans: (i) \(\angle 2\) and \(\angle 1\), \(\angle 1\) and \(\angle 5\), as these have a common vertex and have their non-common arms opposite to each other. (ii) \(\angle 2\) and \(\angle 5\), \(\angle 1\) and \(\angle 1 + \angle 4\) are vertically opposite angles, because these are formed by the intersection of two straight lines. Q11. In the below figure, is \(\angle 1\) adjacent to \(\angle 2\)? Ans: \(\angle 1\) and \(\angle 2\) are not adjacent angles because they don't have a common vertex. Q12.
Find the values of angles A, B and C in each of the following: Ans: (i) Since \(\angle A\) and \(55^{\circ}\) are vertically opposite, \(\angle A = 55^{\circ}\) \(\angle A + \angle B = 180^{\circ}\) [Linear pair] \(55^{\circ} + \angle B = 180^{\circ}\) \(\angle B = 180^{\circ} - 55^{\circ} = 125^{\circ}\) \(\angle B = \angle C\) (Vertically opposite angles) \(\angle C = 125^{\circ}\) (ii) \(\angle C = 40^{\circ}\) (Vertically opposite angles) \(\angle B + \angle C = 180^{\circ}\) \(\angle B = 180^{\circ} - 40^{\circ} = 140^{\circ}\) \(40^{\circ} + \angle A + 25^{\circ} = 180^{\circ}\) \(65^{\circ} + \angle A = 180^{\circ}\) \(\angle A = 180^{\circ} - 65^{\circ} = 115^{\circ}\) Q13: Complete the following: (a) The sum of the measures of two supplementary angles is ________ (b) If 2 adjacent angles are supplementary, they form a ________ (c) The sum of the measures of two complementary angles is ________ (d) If two lines intersect at a point, and if one pair of vertically opposite angles are acute angles, then the other pair of vertically opposite angles are ________ (e) The vertically opposite angles are always ________, if 2 lines intersect at a point. (f) Two angles forming a linear pair are ________ Ans: (a) 180° (b) Linear pair (c) 90° (d) Obtuse angles (e) Equal (f) Supplementary Q14: Name the pairs of angles in the adjoining figure given below.
(i) Obtuse vertically opposite angles (ii) Adjacent complementary angles (iii) Equal supplementary angles (iv) Unequal supplementary angles (v) Adjacent angles that do not form a linear pair Ans: (i) \(\angle POS\), \(\angle QOR\) (ii) \(\angle TOP\), \(\angle POQ\) (iii) \(\angle TOQ\), \(\angle TOS\) (iv) \(\angle TOP\), \(\angle TOR\) (v) \(\angle POQ\) and \(\angle POT\), \(\angle POT\) and \(\angle TOS\), \(\angle TOS\) and \(\angle ROS\) NCERT Solutions For Class 7 Maths Chapter 5 Exercises-5.2 Q1: Describe the property that is used in each of the following statements: (i) If \(a||b,\; then\: \angle 1=\angle 5.\) (ii) If \(\angle 4=\angle 6,\, then\: a||b.\) (iii) If \(\angle 4+\angle 5=180^{\circ},\, then\: a||b.\) Ans: (i) Given \(a||b\), then \(\angle 1=\angle 5\) [Corresponding angles] If two parallel lines are cut by a transversal, each pair of corresponding angles is equal in measure. (ii) Given \(\angle 4=\angle 6\), then \(a||b\) [Alternate angles] When a transversal cuts two lines such that pairs of alternate interior angles are equal, the lines have to be parallel. (iii) Given \(\angle 4+\angle 5=180^{\circ}\), then \(a||b\) [Co-interior angles] When a transversal cuts two lines such that pairs of interior angles on the same side of the transversal are supplementary, the lines have to be parallel. Q2: In the given figure, identify: (i) The pairs of corresponding angles. (ii) The pairs of alternate interior angles. (iii) The pairs of interior angles on the same side of the transversal. (iv) The vertically opposite angles.
Ans: (i) The pairs of corresponding angles: \(\angle 1,\angle 5;\angle 2,\angle 6;\angle 4,\angle 8 \: and \: \angle 3,\angle 7\) (ii) The pairs of alternate interior angles: \(\angle 3,\angle 5 \: and\: \angle 2,\angle 8\) (iii) The pairs of interior angles on the same side of the transversal: \(\angle 3,\angle 8 \: and\: \angle 2,\angle 5\) (iv) The vertically opposite angles: \(\angle 1,\angle 3 ;\angle 2,\angle 4;\angle 6,\angle 8 \: and\: \angle 5,\angle 7\) Q3: In the adjoining figure, \(p||q\). Find the unknown angles. Ans: Given, \(p||q\), cut by a transversal line. \(\because 125^{\circ}+e=180^{\circ}\) [Linear pair] \(\therefore e=180^{\circ}-125^{\circ}=55^{\circ}\) ……(1) Now \(e=f=55^{\circ}\) [Vertically opposite angles] Also \(a=f=55^{\circ}\) [Alternate interior angles] \(a+b=180^{\circ}\) [Linear pair] \(\Rightarrow 55^{\circ}+b=180^{\circ}\) [From equation (1)] \(\Rightarrow b=180^{\circ}-55^{\circ}=125^{\circ}\) Now \(a=c=55^{\circ}\) and \(b=d=125^{\circ}\) [Vertically opposite angles] Thus, \(a=55^{\circ},b=125^{\circ},c=55^{\circ},d=125^{\circ},e=55^{\circ} \: and \: f=55^{\circ}\) Q4: Find the value of x in each of the following figures if \(l||m\). Ans: (i) Given, \(l||m\) and t is a transversal line. \(\therefore\) The interior angle between lines l and t, vertically opposite to the given angle, is \(110^{\circ}\). \(\therefore 110^{\circ}+x=180^{\circ}\) [Supplementary angles] \(\Rightarrow x=180^{\circ}-110^{\circ}=70^{\circ}\) (ii) Given, \(l||m\) and t is a transversal line. \(x+2x=180^{\circ}\) [Co-interior angles] \(\Rightarrow 3x=180^{\circ}\Rightarrow x=\frac{180^{\circ}}{3}=60^{\circ}\) (iii) Given, \(l||m\) and \(a||b\). \(x=100^{\circ}\) [Corresponding angles] Q5: In the given figure, the arms of two angles are parallel.
If \(\angle ABC=70^{\circ}\), then find: (i) \(\angle DGC\) (ii) \(\angle DEF\) Ans: (i) From the figure, \(AB\parallel DE\) and BC is a transversal line, and \(\angle ABC=70^{\circ}\). \(\because \angle ABC=\angle DGC\) (corresponding angles) \(\therefore \angle DGC=70^{\circ}\) ………(i) (ii) From the figure, \(BC\parallel EF\) and DE is a transversal line, and \(\angle DGC=70^{\circ}\). \(\because \angle DGC=\angle DEF\) (corresponding angles) \(\therefore \angle DEF=70^{\circ}\) ………(ii) Q6: In the given figure below, decide whether \(l\) is parallel to \(m\). Ans: (i) \(126^{\circ}+44^{\circ}=170^{\circ}\) Here \(l\parallel m\) does not hold, because the sum of the interior angles on the same side of the transversal is not equal to \(180^{\circ}\). (ii) \(75^{\circ}+75^{\circ}=150^{\circ}\) Here \(l\parallel m\) does not hold, because the sum of the interior angles on the same side of the transversal is not equal to \(180^{\circ}\). (iii) \(57^{\circ}+123^{\circ}=180^{\circ}\) Here \(l\parallel m\) holds, because the sum of the interior angles on the same side of the transversal is equal to \(180^{\circ}\). (iv) \(98^{\circ}+72^{\circ}=170^{\circ}\) Here \(l\parallel m\) does not hold, because the sum of the interior angles on the same side of the transversal is not equal to \(180^{\circ}\). From this chapter, students will be able to recognize types of angles, such as complementary or supplementary, by themselves. They will also learn how to increase or decrease the value of a certain angle so that a pair remains supplementary. Students can use these solutions as worksheets, prepare notes from them to keep as a reference, and even download the Lines and Angles class 7 PDF for free. Apart from these solutions for chapter 5, Lines and Angles, students can also get the NCERT Solutions For Class 7 Maths for all the chapters, exercise-wise, only at BYJU'S. And to have a better learning experience with us, download BYJU'S – The Learning App, where you can learn from video lessons on the lines and angles topic.
For this univariate linear regression model $$y_i = \beta_0 + \beta_1x_i+\epsilon_i$$ given the data set $D=\{(x_1,y_1),...,(x_n,y_n)\}$, the coefficient estimates are $$\hat\beta_1=\frac{\sum_ix_iy_i-n\bar x\bar y}{\sum_ix_i^2-n\bar x^2}$$ $$\hat\beta_0=\bar y - \hat\beta_1\bar x$$ Here is my question: according to the book and Wikipedia, the standard error of $\hat\beta_1$ is $$s_{\hat\beta_1}=\sqrt{\frac{\sum_i\hat\epsilon_i^2}{(n-2)\sum_i(x_i-\bar x)^2}}$$ How and why? 3rd comment above: I already understand how it comes about. But still a question: in my post, the standard error has $(n-2)$, where according to your answer, it doesn't. Why? In my post, it is found that $$ \widehat{\text{se}}(\hat{b}) = \sqrt{\frac{n \hat{\sigma}^2}{n\sum x_i^2 - (\sum x_i)^2}}. $$ The denominator can be written as $$ n \sum_i (x_i - \bar{x})^2 $$ Thus, $$ \widehat{\text{se}}(\hat{b}) = \sqrt{\frac{\hat{\sigma}^2}{\sum_i (x_i - \bar{x})^2}} $$ With $$ \hat{\sigma}^2 = \frac{1}{n-2} \sum_i \hat{\epsilon}_i^2 $$ i.e. the Mean Square Error (MSE) in the ANOVA table, we end up with your expression for $\widehat{\text{se}}(\hat{b})$. The $n-2$ term accounts for the loss of 2 degrees of freedom in the estimation of the intercept and the slope. Another way of thinking about the $n-2$ df: we use 2 estimated means to obtain the slope coefficient (the means of Y and X). On degrees of freedom, from Wikipedia: "...In general, the degrees of freedom of an estimate of a parameter are equal to the number of independent scores that go into the estimate minus the number of parameters used as intermediate steps in the estimation of the parameter itself."
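A quick numerical sanity check of these formulas — a sketch on simulated data, using `np.polyfit` only to cross-check the slope estimate:

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.arange(20.0)
y = 2.0 + 3.0 * x + rng.normal(0.0, 1.0, size=20)  # true intercept 2, slope 3

n = len(x)
xbar, ybar = x.mean(), y.mean()

# Closed-form OLS estimates
b1 = (np.sum(x * y) - n * xbar * ybar) / (np.sum(x ** 2) - n * xbar ** 2)
b0 = ybar - b1 * xbar

# Standard error of the slope with the (n - 2)-df variance estimate
resid = y - (b0 + b1 * x)
sigma2_hat = np.sum(resid ** 2) / (n - 2)          # MSE
se_b1 = np.sqrt(sigma2_hat / np.sum((x - xbar) ** 2))

# np.polyfit returns [slope, intercept] for degree 1 -- same least-squares fit
slope = np.polyfit(x, y, 1)[0]
print(np.isclose(b1, slope))  # True
```

With the noise level used here, `b1` lands within a few standard errors of the true slope 3.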
I am taking a course on Monte Carlo methods and we learned the Rejection Sampling (or Accept-Reject Sampling) method in the last lecture. There are a lot of resources on the web which show the proof of this method but somehow I am not convinced by them. So, in Rejection Sampling, we have a distribution $f(x)$ which is hard to sample from. We choose an easy-to-sample distribution $g(x)$ and find a coefficient $c$ such that $f(x) \leq cg(x)$. Then we sample from $g(x)$ and for each draw, $x_i$, we also sample a $u$ from a standard uniform distribution $U(u|0,1)$. The sample $x_i$ is accepted if $cg(x_i)u \leq f(x_i)$ and rejected otherwise. The proofs which I came across usually just show that $p(x|Accept) = f(x)$ and stop there. What I think about this process is that we have a sequence of variables $x_1,Accept_1,x_2,Accept_2,...,x_n,Accept_n$, where an $(x_i,Accept_i)$ pair corresponds to our $i$-th sample ($x_i$) and whether it is accepted ($Accept_i$). We know that each $(x_i,Accept_i)$ pair is independent of the others, such that: $P(x_1,Accept_1,x_2,Accept_2,...,x_n,Accept_n) = \prod\limits_{i=1}^n P(x_i,Accept_i)$ For an $(x_i,Accept_i)$ pair we know that $P(x_i) = g(x_i)$ and $P(Accept_i|x_i) = \frac{f(x_i)}{cg(x_i)}$. We can readily calculate $p(x_i|Accept_i)$ but I don't understand how it suffices as a proof. We need to show that the algorithm works, so I think a proof should show that the empirical distribution of the accepted samples converges to $f(x)$ as $n\rightarrow\infty$. I mean, with $n$ being the number of all accepted and rejected samples: $\frac{Number \hspace{1mm} of \hspace{1mm} accepted \hspace{1mm} samples \hspace{1mm} with \hspace{1mm} (A \leq x_i \leq B)}{Number \hspace{1mm} of \hspace{1mm} accepted \hspace{1mm} samples} \rightarrow \int_A^B f(x)dx$ as $n\rightarrow\infty$. Am I wrong with this thought pattern? Or is there a connection between the common proof of the algorithm and this? Thanks in advance
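For what it's worth, the algorithm as described is easy to check empirically. A sketch of mine targeting a Beta(2,2) density with a uniform proposal, where $c = 1.5 = \max f$:

```python
import random

random.seed(42)

def f(x):
    # Target density: Beta(2, 2) on [0, 1], f(x) = 6x(1 - x), max value 1.5 at x = 0.5
    return 6.0 * x * (1.0 - x)

c = 1.5  # f(x) <= c * g(x) with g = Uniform(0, 1), so g(x) = 1

def rejection_sample(n):
    out = []
    while len(out) < n:
        x = random.random()      # draw from the proposal g
        u = random.random()      # independent Uniform(0, 1)
        if u * c <= f(x):        # accept iff c*g(x)*u <= f(x)
            out.append(x)
    return out

samples = rejection_sample(20000)
mean = sum(samples) / len(samples)
print(abs(mean - 0.5) < 0.02)    # True: Beta(2, 2) has mean 1/2
```

Histogramming `samples` against `f` (or comparing more moments) shows the empirical distribution of accepted draws converging to the target, which is exactly the frequency statement in the question; the standard proof of $p(x \mid Accept) = f(x)$ plus the law of large numbers applied to the i.i.d. accepted draws delivers it.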
@0celouvskyopoulo7 Anyways, I don't think so: the initial topology $\mathfrak{T}_{\lVert \cdot\rVert}$ on a Banach space is finer than the weak topology $\sigma(X,X')$, and therefore $f\in C((X,\mathfrak{T}_{\lVert \cdot\rVert}),T)$ may very well be discontinuous in $(X,\sigma(X,X'))$, and almost surely not even semicontinuous. I understand partitions, just imposing restrictions like "r = some constant" in a space with coordinates $(r,\theta)$, and then you're left with different sets of points for different r that, if unioned, give you the whole space. But I still don't understand equivalence relations @Slereah Here's the problem. Let $M^n$ be a compact $\partial$-manifold, $1\le p<\infty$ and $f\in C_c^\infty(M)$ (that is, $f|_{\partial M}=0$). Is there a constant $C>0$, independent of $f$, such that $$\int |f|^p\le C \int |\nabla f|^p?$$ He confirms that the star operator is indeed a relation and that $A \star B \neq A$ means indeed $A \star B$ and $B \neq A$ He also adds "Anyway, I would discourage you from wasting your time on reading this part of the paper. All that was worthy was extracted from it, reformulated, clarified, and published in arxiv.org/abs/1408.6813" My favorite sentence in a Davis document: "Puthoff's polarizable vacuum general relativity model is the only alternative theory of gravity that has been successfully applied to explain the physical, anti-physical and physiological characteristics & performances of UFOs" @Koolman Binding energy can be a bit confusing. Remember that the binding energy is the energy released when the nucleus is formed. That is, if we take a proton and a neutron and bind them together to make a deuteron, then it releases about 1 MeV per nucleon. Can anyone please just help me with one thing?
Would the equivalence relation that gives the partition $D$ of a space, where $D$ is the set of all circles, be $E = \{(x,y) \in A \times A \mid x \text{ and } y \text{ are the same distance from the origin}\}$? @JohnRennie I'm teaching thermal physics this semester and the parts I am most successful at teaching seem to be the bits that made me cry in the misty days of yore. The few bits that came easily have been the ones I've struggled to convey clearly.
Terms sourced from: http://iupac.org/publications/pac/68/12/2223/ "Glossary of terms used in photochemistry (IUPAC Recommendations 1996)", Verhoeven, J.W., Pure and Applied Chemistry 1996, 68(12), 2223 absorbance \(A\)absorptance \(\alpha\)absorption absorption coefficient absorption cross-section \(\sigma\)absorptivity actinometer action spectrum adiabatic electron transfer adiabatic photoreaction annihilation antimony–xenon lamp (arc) apparent lifetime argon ion laser attenuance \(D\)attenuance filter auxochrome avoided crossing ADMR anti-Stokes shift back electron transfer bandgap energy \(E_{\text{g}}\)bandpass filter Barton reaction bathochromic shift (effect) Beer–Lambert law bioluminescence bleaching blue shift carbon dioxide laser cavity dumping charge hopping charge recombination charge separation charge shift charge-transfer transition to solvent charge-transfer (CT) state charge-transfer (CT) transition chemical laser chemiexcitation chemiluminescence chromophore Chemically Induced Dynamic Electron Polarization Chemically Induced Dynamic Nuclear Polarization Chemically Initiated Electron Exchange Luminescence coherent radiation collision complex conduction band configuration (electronic) configuration interaction conversion spectrum copper vapour laser correlation diagram correlation energy crystal field splitting CT current yield cut-off filter CW cadium–helium laser critical quenching radius \(r_{0}\) dark photochemistry (photochemistry without light) Davydov splitting deactivation delayed fluorescence depth of penetration Dexter excitation transfer diabatic electron transfer diabatic photoreaction diode laser dipolar mechanism doublet state driving force (affinity) of a reaction \(A\)dye laser DEDMR DFDMR dynamic quenching effectiveness efficiency \(\eta\)efficiency spectrum einstein electrogenerated chemiluminescence electron correlation electron exchange excitation transfer electron transfer electron transfer photosensitization electronic energy 
migration (or hopping) electronically excited state electrophotography emission emission spectrum emittance \(\varepsilon\)encounter complex encounter pair energy migration energy storage efficiency \(\eta\)energy transfer energy transfer plot enhancer excimer excimer laser exciplex excitation spectrum excitation transfer excited state exciton exitance external heavy atom effect exterplex extinction extinction coefficient electrochemiluminescence electrochromic effect electroluminescence energy pooling ESCA fnumber Fermi level \(E_{\text{F}}\)flash photolysis fluence \(F\),\(\varPsi \),\(H_{0}\)Förster cycle Förster excitation transfer Fourier transform spectrometer Franck–Condon principle free electron laser free-running laser frequency \(f\),\(\nu \)frequency doubling FWHM (Full Width at Half Maximum) factor-group splitting Franck–Condon state half-width Hammond–Herkstroeter plot harmonic frequency generation harpoon mechanism heavy atom effect helium–cadmium laser helium–neon laser Herkstroeter plot heteroexcimer high-pressure mercury lamp (arc) hole burning hole transfer hot ground state reaction hot quartz lamp hot state reaction Hush model hyperchromic effect hypochromic effect hypsochromic shift Hund rules imaging (photoimaging) incoherent radiation inner-sphere electron transfer integrating sphere intended crossing intensity interferometer internal conversion intersystem crossing intervalence charge transfer inverted region (for electron transfer) isoabsorption point isoclinic point inner filter effect Lambert law lamp Laporte rule laser lasing latent image ligand field splitting ligand to ligand charge transfer (LLCT) transition ligand to metal charge transfer (LMCT) transition light source Lorentzian band shape luminescence lumiphore Marcus inverted region (for electron transfer) Marcus–Hush relationship merry-go-round reactor metal to ligand charge transfer (MLCT) transition metal to metal charge transfer (MMCT) transition mode-locked laser multiphoton 
absorption multiphoton process multiplicity n → π* state n → π* transition n → σ* transition neodymium laser nitrogen laser non-linear optical effect non-radiative decay nonadiabatic electron transfer nonadiabatic photoreaction optically detected magnetic resonance optical density optical filter oscillator strength \(f_{ij}\)outer-sphere electron transfer oxa-di-π-methane rearrangement π – π* state π → π* transition Paterno–Büchi reaction phosphorescence photo-Fries rearrangement photoacoustic effect photoacoustic spectroscopy photoassisted catalysis photochemical hole burning photochemistry photoconductivity photocrosslinking photocurrent yield photodegradation photodetachment photodynamic effect photoelectrical effect photoelectrochemical cell photoelectrochemical etching photoelectrochemistry photoelectron spectroscopy photoexcitation photogalvanic cell photoinduced electron transfer photoinduced polymerization photoinitiation photoionization photon photon flow \(\mathit{\Phi} _{\text{p}}\)photooxidation photooxygenation photophysical processes photopolymerization photoreduction photoresist photosensitization photosensitizer photostationary state photothermal effect photothermography photovoltaic cell piezoluminescence population inversion precursor complex predissociation primary photochemical process primary (photo)process primary (photo)product Q-switched laser quantum counter quantum efficiency quantum quartet state quencher quenching constant radiant energy \(Q\)radiationless transition radiative transition radical pair radioluminescence red shift relative spectral responsivity resonance absorption technique resonance fluorescence resonance fluorescence technique resonance line resonance radiation rovibronic state ruby laser Rydberg orbital Rydberg transition π → σ* transition σ → σ* transition sacrificial acceptor sacrificial donor scintillators selection rule self-absorption self-quenching sensitization sensitizer simultaneous pair transitions singlet 
state singlet-singlet annihilation singlet-singlet energy transfer singlet-triplet energy transfer solar conversion efficiency solid state lasers solvatochromism solvent shift sonoluminescence spectral irradiance \(E_{\lambda}\)spectral overlap spectral (photon) effectiveness spectral photon exitance \(M_{\text{p}\lambda}\)spectral photon flow \(\varPhi_{\text{p}\lambda}\)spectral photon flux \(E_{\text{p}\lambda}\)spectral photon radiance \(L_{\text{p}\lambda}\)spectral radiance \(L_{\lambda}\)spectral radiant exitance \(M_{\lambda}\)spectral radiant flux spectral radiant intensity \(I_{\lambda}\)spectral radiant power \(P_{\lambda}\)spectral responsivity spectral sensitization spherical radiance spherical radiant exposure spin conservation rule spin-allowed electronic transition spin-orbit coupling spin-orbit splitting spin–spin coupling spontaneous emission Stark effect state crossing state diagram Stern–Volmer kinetic relationships stimulated emission Stokes shift superexchange interaction superradiance surface crossing thermal lensing thermally activated delayed fluorescence thermochromism thermoluminescence through-bond electron transfer through-space electron transfer TICT emission TICT state time-correlated single photon counting time-resolved spectroscopy transient spectroscopy transition polarization transmittance \(T\),\(\tau\)triboluminescence triplet state triplet-triplet annihilation triplet-triplet energy transfer triplet-triplet transitions tunnelling two-photon excitation two-photon process Vavilov rule vibrational redistribution vibrational relaxation vibronic coupling vibronic transition
Assume $$X\sim \frac{e^{-\beta Nf(x)}}{Z_{\beta}}$$ where $$ f(x) = -hx -x^2 + \frac{1}{\beta N}\left[(1-x)\ln(1-x) + (1+x)\ln(1+x)\right] $$ and $Z_{\beta}$ is the appropriate normalization factor. The support of $X$ is the lattice $\Gamma_N = \{-1,-1+2/N,\ldots,1-2/N,1\}$, $N$ is some large positive integer, and $\beta>1$. I would like to know if there's any way to characterize the first three moments of $X$, that is $$ \mathbb{E}[X],\ \mathbb{E}[X^2],\ \mathbb{E}[X^3]$$ as functions of $\beta$ and whatever else comes out, but in particular I'm interested in the overall behaviour when $\beta$ grows large. If you are familiar with statistical physics, this is the Curie–Weiss model (or a mean-field Ising model).
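While a closed form is the real question, the moments are easy to evaluate numerically for any fixed $\beta, h, N$, which at least shows the large-$\beta$ behaviour. A minimal sketch (assuming the grouping of the entropy term as written, and the convention $0\ln 0 = 0$ at the lattice endpoints):

```python
import numpy as np

def curie_weiss_moments(beta, h, N):
    """First three moments of X on the lattice Gamma_N = {-1, -1+2/N, ..., 1}."""
    x = np.linspace(-1.0, 1.0, N + 1)
    # (1 -/+ x) ln(1 -/+ x) -> 0 at the endpoints; nan_to_num enforces 0*log(0) = 0
    with np.errstate(divide="ignore", invalid="ignore"):
        s = np.nan_to_num((1 - x) * np.log(1 - x)) + np.nan_to_num((1 + x) * np.log(1 + x))
    f = -h * x - x**2 + s / (beta * N)
    w = np.exp(-beta * N * (f - f.min()))   # subtract min(f) for numerical stability
    p = w / w.sum()                          # normalisation plays the role of Z_beta
    return tuple(float((p * x**k).sum()) for k in (1, 2, 3))

m1, m2, m3 = curie_weiss_moments(beta=2.0, h=0.1, N=200)
```

For $\beta$ well above 1 and $h>0$ the weight concentrates near the positive minimiser of $f$, so all three moments approach 1, consistent with the ferromagnetic phase of the Curie–Weiss model.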
I am asked to prove that an O(1)-pass randomized streaming algorithm that solves the s-t connectivity problem in a simple directed graph $G=(V,E)$ with $|V|=n$ vertices, with success probability $>\frac{1}{2}$, has a space lower bound of $\Omega(n^2)$ bits, using techniques from communication complexity theory. I believe what the problem intends me to do is to reduce some problem with a well-established lower bound in communication complexity to the s-t connectivity problem. Based on their known lower bounds, I guess I have to start with one of the disjointness, indexing, or inner-product problems for two boolean vectors of length $n^2$, i.e. Alice has an $n^2$-length boolean vector $a$, with $a[i\times n+j]=1$ if and only if arc $(i,j) \in E$. This way, if I can complete the reduction to the s-t connectivity problem, I can prove the $\Omega(n^2)$ bound, because all three of these problems have randomized communication complexity linear in their instance size. However, I've been thinking for days and still have no clue how to move forward from here... I would greatly appreciate some ideas or hints.
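To make the encoding idea concrete, here is a hypothetical sketch of the usual INDEX-style gadget (not necessarily the intended solution): Alice's bit $a[i\cdot n+j]$ becomes an arc from left node $i$ to right node $j$ of a bipartite graph, and Bob, holding the index $(i,j)$, appends arcs $s\to i$ and $j\to t$. Because every Alice arc goes left-to-right, $s$ reaches $t$ if and only if $a[i\cdot n+j]=1$, so a small-space streaming connectivity algorithm would give a low-communication one-way protocol for INDEX:

```python
import collections

def reachable(edges, s, t):
    """BFS reachability over a directed edge list."""
    adj = collections.defaultdict(list)
    for u, v in edges:
        adj[u].append(v)
    seen, queue = {s}, collections.deque([s])
    while queue:
        u = queue.popleft()
        if u == t:
            return True
        for v in adj[u]:
            if v not in seen:
                seen.add(v)
                queue.append(v)
    return False

def index_via_connectivity(a, n, i, j):
    """Recover a[i*n + j] from one s-t reachability query.

    Left nodes are 0..n-1, right nodes are n..2n-1; Alice's bit a[u*n+v]
    becomes the arc u -> n+v, and Bob adds s -> i and n+j -> t.
    """
    s, t = 2 * n, 2 * n + 1
    edges = [(u, n + v) for u in range(n) for v in range(n) if a[u * n + v]]
    edges += [(s, i), (n + j, t)]
    return reachable(edges, s, t)

a = [0, 1, 0,
     0, 0, 1,
     1, 0, 0]                               # bits of a hypothetical 3x3 instance, row-major
bit = index_via_connectivity(a, 3, 0, 1)    # is arc (0, 1) present?
```

Note this only directly gives a one-way (one-pass) lower bound from INDEX; handling O(1) passes takes a little more care with which communication problem you start from.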
The formula you quote gives the Gibbs energy of a system [a single liquid droplet plus vapor of the same chemical species]. You say "we are not at constant pressure". This is true in the sense that the pressure of the liquid inside the droplet is different from the pressure of the vapor just above the droplet. The formula you quote actually takes these differences of pressure into account, through $\Delta g$. What the formula gives is not the basic Gibbs energy as defined in textbooks ($U-TS+PV$), but something related and more technical: it is the ordinary Gibbs energy of the system minus the ordinary Gibbs energy the system would have if it were all pure vapor at the same $T,P$, minus some constant of no consequence. The ordinary Gibbs energy of the liquid can be expressed as $$G_L = \mu_{liquid} N_{liquid} = g_{liquid}V_{liquid}$$ and similarly for the vapor. Thus, we can regard the Gibbs energy as having a density, either per molecule ($\mu$) or per volume ($g$). There is also Gibbs energy per unit boundary surface, so we have a separate contribution $$G_{s} = \sigma S .$$ The chemical potential $\mu$ is a function of $T,P$, but we have a different function for the liquid ($\mu_L(T,P)$) and a different function for the vapor ($\mu_V(T,P)$). And we have the same temperature $T$ everywhere (by assumption), but different pressures for liquid $P_L$ and vapor $P_V$ (due to the curved surface of the droplet). Now, with this, we can understand how the different pressures enter the expression for $\Delta G$: $$\Delta G(T,P_V) = g_L(T,P_L) V_L + g_V(T,P_V)V_V + \sigma(T) S$$ Because $P_V$ is easy to measure and control directly (by putting in more vapor), it is taken as "the pressure" that $\Delta G$ is regarded as a function of, but this is just an arbitrary choice; we could use $P_L$ instead. The difference in pressures enters non-trivially on the right-hand side: the Gibbs energy densities $g_L,g_V$ are to be taken at different pressures. 
We can assume that all evaporation/condensation happens inside a big container of constant volume $V$, so $V_V = V - V_L$ and rewrite this as $$\Delta G(T,P_V) = g_L(T,P_L) V_L + g_V(T,P_V)V - g_V(T,P_V)V_L + \sigma(T) S$$ This is almost the original formula, but the second term is getting in the way. However, provided the condensation or evaporation happens while $T$ and $P_V$ do not change (big container so it acts as fixed reservoir), this term does not change and we can drop it and define $\Delta G'$: $$\Delta G'(T,P_V) = \left( g_L(T,P_L) - g_V(T,P_V)\right) V_L + \sigma(T) S.$$ So we see that the original $\Delta g$ in the equation actually takes into account different pressures. It may be convenient to express it as function of vapor pressure only (and $T$ and radius $r$): $$\Delta g(T,P_V) = g_L(T,P_V+\frac{2\sigma}{r}) - g_V(T,P_V).$$
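Plugging $V_L = \frac{4}{3}\pi r^3$ and $S = 4\pi r^2$ into the $\Delta G'$ above gives the familiar nucleation barrier. A quick numeric sketch, with made-up, water-like values for $\sigma$ and $\Delta g$ (the $g_L - g_V$ density; it is negative when the vapor is supersaturated, so liquid is favored in bulk):

```python
import math

sigma = 0.072        # J/m^2, surface tension (assumed value, for illustration only)
delta_g = -1.0e8     # J/m^3, g_L - g_V (assumed; negative for supersaturated vapor)

def delta_G(r):
    """Delta G'(r) = delta_g * V_L + sigma * S for a droplet of radius r."""
    return delta_g * (4.0 / 3.0) * math.pi * r**3 + sigma * 4.0 * math.pi * r**2

# Setting d(Delta G')/dr = 4*pi*delta_g*r^2 + 8*pi*sigma*r = 0 gives the critical radius:
r_star = -2.0 * sigma / delta_g   # here 1.44e-9 m
```

Droplets smaller than `r_star` lower $\Delta G'$ by shrinking, larger ones by growing, which is why $\Delta G'(r)$ has a maximum at the critical radius.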
Introduction
Built at the Jet Propulsion Laboratory by an Investigation Definition Team (IDT) headed by John Trauger, WFPC2 was the replacement for the first Wide Field and Planetary Camera (WF/PC-1) and includes built-in corrections for the spherical aberration of the HST Optical Telescope Assembly (OTA). The WFPC2 was installed in HST during the First Servicing Mission in December 1993 and removed during Servicing Mission 4 in 2009. Early IDT report of the WFPC2 on-orbit performance: Trauger et al. (1994, ApJ, 435, L3). A more detailed assessment of its capabilities: Holtzman et al. (1995, PASP, 107, pages 156 and 1065). The WFPC2 was used to obtain high resolution images of astronomical objects over a relatively wide field of view and a broad range of wavelengths (1150 to 11,000 Å). WFPC2 data can be found on the MAST Archive.
ISRs
Recent Instrument Science Reports: 2010-04: The Dependence of WFPC2 Charge Transfer Efficiency on Background Illumination; 2010-01: WFPC2 Standard Star CTE
Optical Configuration
While it was in operation, the WFPC2 field of view was located at the center of the HST focal plane. The central portion of the f/24 beam coming from the OTA would be intercepted by a steerable pick-off mirror attached to the WFPC2 and diverted through an open port entry into the instrument. The beam would then pass through a shutter and interposable filters. An assembly of 12 filter wheels contained a total of 48 spectral elements and polarizers. The light would then fall onto a shallow-angle, four-faceted pyramid, located at the aberrated OTA focus. Each face of the pyramid was a concave spherical surface, dividing the OTA image of the sky into four parts. After leaving the pyramid, each quarter of the full field of view would then be relayed by an optically flat mirror to a Cassegrain relay that would form a second field image on a charge-coupled device (CCD) of 800 x 800 pixels. 
Each of these four detectors was housed in a cell sealed by a MgF2 window, which is figured to serve as a field flattener. The aberrated HST wavefront was corrected by introducing an equal but opposite error in each of the four Cassegrain relays. An image of the HST primary mirror would then be formed on the secondary mirrors in the Cassegrain relays. The spherical aberration from the telescope's primary mirror would be corrected on these secondary mirrors, which were extremely aspheric; the resulting point spread function was quite close to that originally expected for WF/PC-1.
Field of View
The U2,U3 axes were defined by the "nominal" Optical Telescope Assembly (OTA) axis, which was near the center of the WFPC2 FOV. The readout direction was marked with an arrow near the start of the first row in each CCD; note that it rotated 90 degrees between successive chips. The x,y arrows mark the coordinate axes for any POS TARG commands that may have been specified in the proposal. POS TARG, an optional special requirement in HST observing proposals, places the target at an offset (in arcsec) from the specified aperture.
Camera Configurations
PC (PC1): 800 x 800 pixels, 36" x 36" field of view, 0.0455" per pixel, f/28.3
WF2, 3, 4: 800 x 800 pixels, 80" x 80" field of view, 0.0996" per pixel, f/12.9
A Note about HST File Formats
Data from WFPC2 are made available to observers as files in Multi-Extension FITS (MEF) format, which is directly readable by most PyRAF/IRAF/STSDAS tasks. All WFPC2 data are now available in either waivered FITS or MEF format; the user may specify either format when retrieving the data from the HDA. WFPC2 data, in either Generic Edited Information Set (GEIS) or MEF format, can be fully processed with STSDAS tasks. The figure below provides a physical representation of the typical data format. 
Resources
Charge Traps
There are about 30 pixels in WFPC2 that are "charge traps" which do not transfer charge efficiently during readout, producing artifacts that are often quite noticeable. Typically, charge is delayed into successive pixels, producing a streak above the defective pixel. In the worst cases, the entire column above the pixel can be rendered useless. On blank sky, these traps will tend to produce a dark streak; however, when a bright object or cosmic ray is read through them, a bright streak will be produced. Here, we show streaks (a) in the background sky, and (b) in stellar images produced by charge traps in the WFPC2. Individual traps have been cataloged and their identifying numbers are shown.
Warm Pixels and Annealing
Decontaminations (anneals), during which the instrument is warmed up to about 22 °C for a period of six hours, were performed about once per month. These procedures are required in order to remove the UV-blocking contaminants which gradually build up on the CCD windows (thereby restoring the UV throughput) as well as to fix warm pixels. Examples of warm pixels are presented in the figure below.
Calibration (procedure: estimated accuracy; notes)
Bias subtraction: 0.1 DN rms; unless a bias jump is present
Dark subtraction: 0.1 DN/hr rms; error larger for warm pixels; absolute error uncertain because of dark glow
Flat fielding: <1% rms large scale (visible, near UV); 0.3% rms small scale (visible, near UV); ~10% for F160BW, however significant noise reduction is achieved with use of correction flats
Relative Photometry (procedure: estimated accuracy; notes)
Residuals in CTE correction: <3% for the majority (~90%) of cases; up to 1-% for extreme cases (e.g., very low backgrounds)
Long vs. short anomaly (uncorrected): <5%; magnitude errors <1% for well-exposed stars but may be larger for fainter stars. Some studies have failed to confirm the effect. 
(see Chapter 5 of IHB for more details)
Aperture correction: 4% rms focus dependence (1 pixel aperture), can (should) be determined from data; <1% focus dependence (>5 pixel aperture), can (should) be determined from data; 1-2% field dependence (1 pixel aperture), can (should) be determined from data
Contamination correction: 3% rms max (28 days after decon; F160BW); 1% rms max (28 days after decon; filters bluer than F555W)
Background determination: 0.1 DN/pixel (background > 10 DN/pixel); may be difficult to exceed, regardless of image S/N
Pixel centering: <1%
Absolute Photometry (procedure: estimated accuracy)
Sensitivity: <2% rms for standard photometric filters; 2% rms for broad and intermediate filters in the visible; <5% rms for narrow-band filters in the visible; 2-8% rms for UV filters
Astrometry (procedure: estimated accuracy; notes)
Relative: 0.005" rms (after geometric and 34th-row corrections); same chip
Relative: 0.1" (estimated); across chips
Absolute: 1" rms (estimated)
Photometric Systems Used for WFPC2 Data
The WFPC2 flight system is defined so that stars of color zero in the Johnson-Cousins UBVRI system have color zero between any pair of WFPC2 filters and have the same magnitude in V and F555W. This system was established by Holtzman et al. (1995b). The zeropoints in the WFPC2 synthetic system, as defined in Holtzman et al. (1995b), are determined so that the magnitude of Vega, when observed through the appropriate WFPC2 filter, would be identical to the magnitude Vega has in the closest equivalent filter in the Johnson-Cousins system.
\(m_{AB} = -48.60-2.5\log f_\nu \)
\(m_{ST} = -21.10-2.5\log f_\lambda\)
Photometric Corrections
A number of corrections must be made to WFPC2 data to obtain the best possible photometry. Some of these, such as the corrections for UV throughput variability, are time dependent, and others, such as the correction for the geometric distortion of WFPC2 optics, are position dependent. 
Finally, some general corrections, such as the aperture correction, are needed as part of the analysis process. Here we provide examples of factors affecting photometric corrections: Cool Down on April 23, 1994; PSF Variations; 34th Row Defect; Gain Variation; Pixel Centering; Possible Variation in Methane Quad Filter Transmission.
Polarimetry
WFPC2 has a polarizer filter which can be used for wide-field polarimetric imaging from about 200 through 700 nm. This filter is a quad, meaning that it consists of four panes, each with the polarization angle oriented in a different direction, in steps of 45°. The panes are aligned with the edges of the pyramid, thus each pane corresponds to a chip. However, because the filters are at some distance from the focal plane, there is significant vignetting and cross-talk at the edges of each chip. The area free from vignetting and cross-talk is about 60" square in each WF chip, and 15" square in the PC. It is also possible to use the polarizer in a partially rotated position. Accurate calibration of WFPC2 polarimetric data is rather complex, due to the design of both the polarizer filter and the instrument itself. WFPC2 has an aluminized pick-off mirror with a 47° angle of incidence, which rotates the polarization angle of the incoming light, as well as introducing a spurious polarization of up to 5%. Thus, both the HST roll angle and the polarization angle must be taken into account. In addition, the polarizer coating on the filter has significant transmission of the perpendicular component, with a strong wavelength dependence.
Astrometry
Astrometry with WFPC2 means primarily relative astrometry. The high angular resolution and sensitivity of WFPC2 make it possible, in principle, to measure precise positions of faint features with respect to other reference points in the WFPC2 field of view. 
On the other hand, the absolute astrometry that can be obtained from WFPC2 images is limited by the positions of the guide stars, usually known to about 0.5" rms in each coordinate, and by the transformation between the FGS and the WFPC2, which introduces errors on the order of 0.1". Because WFPC2 consists of four physically separate detectors, it is necessary to define a coordinate system that includes all four detectors. For convenience, sky coordinates (right ascension and declination) are often used; in this case, they must be computed and carried to a precision of a few mas, in order to maintain the precision with which the relative positions and scales of the WFPC2 detectors are known. It is important to remember that the coordinates are not known with this accuracy. The absolute accuracy of the positions obtained from WFPC2 images is typically 0.5" rms in each coordinate and is limited primarily by the accuracy of the guide star positions.
I know ooids form in high-energy water, by rolling and building layers around a nucleation site, but I don't understand what limits their size. What keeps these such a regular, small size? I was unaware of a 0.5 mm maximum rule for ooid size. The Sam Boggs Jr. book Principles of Sedimentology and Stratigraphy indicates that ooids can grow up to 2 mm; grains larger than this are pisoids. The size limitation may depend on how you define an ooid. To me, an ooid is a carbonate grain with a nucleus and concentric layers of calcite or aragonite. The size is likely a function of the availability of calcium and carbonate ions in the water (calcite or aragonite saturation). I think that microbial action also contributes to large grain size, but if microbes are involved, then you might call it an oncoid instead. The similarity of ooid size is a function of similarities in the environment of deposition. Microfacies of Carbonate Rocks by Erik Flugel is a great book that discusses these features in detail. As already noted by Inkenbrandt's answer, the size limitation of ooids to 2 mm seems rather arbitrary; from Richter (1983): Though most ooids are smaller than 2 mm, it is not meaningful to postulate a grain size boundary between "ooids" and "pisoids" at a diameter of 2 mm as suggested by Choquette (1978) and Donahue (1978), based on older usage, because the ooids of several occurrences exceed this limit without structural and mineralogical changes. Most Phanerozoic ooids are smaller than 2 mm, but there are some Neoproterozoic ooids which are considerably larger, up to a diameter of 16 mm. This might be because the conditions on carbonate platforms were different. 
Controlling factors for the growth of ooids are:
Nuclei supply: Ooids stop growing as soon as they are buried; when there is a large influx of new nuclei, small ooids get buried before they are 'fully grown'.
Growth rate: This depends on chemical conditions; note also that the growth rate is proportional to the surface area of the ooid: $$ \frac{\Delta m_{growth}}{t} \propto 4 \pi r^2 $$
Agitation: For ooids to grow, they need to be mobile. When the velocity of the water is not sufficient to keep them moving, they stop growing.
Abrasion: When the growing ooids impact each other in high-energy conditions, the mass loss during each impact depends on their mass: $$ \frac{\Delta m_{abrasion}}{t} \propto r^3 $$ The larger the ooids, the quicker they lose mass when colliding with others.
If there were only growth rate and abrasion, one could simply calculate the radius at which ooids stop growing, but the other factors also have to be considered. The authors from whom I compiled the list of factors (Sumner and Grotzinger (1993)) argue that for the growth of large ooids there must be a low nuclei supply, so that ooids have time to grow before getting buried, and the environment has to be energetic enough that large ooids still move. One reason for low nuclei supply might be that common sources of nuclei like skeletal fragments and fecal pellets were not available in Proterozoic time. The second factor could be that the large ooids grew on carbonate ramps or rimmed carbonate platforms where a high-energy environment is given.
Richter, D. K. (1983). "Calcareous ooids: a synopsis". In: Coated Grains. Springer, pp. 71-99.
Sumner, D. Y. and J. P. Grotzinger (1993). "Numerical modeling of ooid size and the problem of Neoproterozoic giant ooids". Journal of Sedimentary Research 63.5.
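Taking the two rate laws above at face value, with hypothetical proportionality constants $k_g$ and $k_a$ (not from the sources cited), the stationary radius in the growth-and-abrasion-only case follows from setting the rates equal:

```latex
k_g \, 4\pi r^2 = k_a \, r^3
\quad\Longrightarrow\quad
r^{*} = \frac{4\pi k_g}{k_a}
```

Below $r^{*}$ growth outpaces abrasion and the ooid grows; above it abrasion wins, so $r^{*}$ acts as a stable size, which is consistent with the narrow size distribution the question asks about.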
12.3.3 - Hypothesis Testing We can use statistical inference (i.e., hypothesis testing) to draw conclusions about how the population of \(y\) values relates to the population of \(x\) values, based on the sample of \(x\) and \(y\) values. The equation \(Y=\beta_0+\beta_1 x\) describes this relationship in the population. Within this model there are two parameters that we use sample data to estimate: the \(y\)-intercept (\(\beta_0\) estimated by \(b_0\)) and the slope (\(\beta_1\) estimated by \(b_1\)). We can use the five step hypothesis testing procedure to test for the statistical significance of each separately. Note, typically we are only interested in testing for the statistical significance of the slope because that tells us that \(\beta_1 \neq 0\) which means that \(x\) can be used to predict \(y\). When \(\beta_1 = 0\) then the line of best fit is a straight horizontal line and having information about \(x\) does not change the predicted value of \(y\); in other words, \(x\) does not help us to predict \(y\). If the value of the slope is anything other than 0, then the predicted value of \(y\) will differ across values of \(x\) and having \(x\) helps us to better predict \(y\). We are usually not concerned with the statistical significance of the \(y\)-intercept unless there is some theoretical meaning to \(\beta_0 \neq 0\). Below you will see how to test the statistical significance of the slope and how to construct a confidence interval for the slope; the procedures for the \(y\)-intercept would be the same. The assumptions of simple linear regression are linearity, independence of errors, normality of errors, and equal error variance. You should check all of these assumptions before proceeding. Research Question Is the slope in the population different from 0? Is the slope in the population positive? Is the slope in the population negative? 
Null Hypothesis, \(H_{0}\): \(\beta_1 = 0\) in all three cases.
Alternative Hypothesis, \(H_{a}\): \(\beta_1 \neq 0\) (two-tailed, non-directional); \(\beta_1 > 0\) (right-tailed, directional); \(\beta_1 < 0\) (left-tailed, directional).
Minitab Express will compute the \(t\) test statistic: \(t=\frac{b_1}{SE(b_1)}\) where \(SE(b_1)=\sqrt{\frac{\frac{\sum (e^2)}{n-2}}{\sum (x- \overline{x})^2}}\) Minitab Express will compute the p-value for the non-directional hypothesis \(H_a: \beta_1 \neq 0 \). If you are conducting a one-tailed test you will need to divide the p-value in the Minitab Express output by 2. If \(p\leq \alpha\), reject the null hypothesis. If \(p>\alpha\), fail to reject the null hypothesis. Based on your decision in Step 4, write a conclusion in terms of the original research question.
12.3.4 - Confidence Interval for Slope We can use the slope that was computed from our sample to construct a confidence interval for the population slope (\(\beta_1\)). This confidence interval follows the same general form that we have been using: General Form of a Confidence Interval \(sample\ statistic\pm(multiplier)\ (standard\ error)\) Confidence Interval of \(\beta_1\) \(b_1 \pm t^\ast (SE_{b_1})\) \(b_1\) = sample slope \(t^\ast\) = value from the \(t\) distribution with \(df=n-2\) \(SE_{b_1}\) = standard error of \(b_1\) Example: Confidence Interval of \(\beta_1\) Section Below is the Minitab Express output for a regression model using Test 3 scores to predict Test 4 scores. Let's construct a 95% confidence interval for the slope.
Term | Coef | SE Coef | T-Value | P-Value | VIF
Constant | 16.37 | 12.40 | 1.32 | 0.1993 |
Test 3 | 0.8034 | 0.1360 | 5.91 | <0.0001 | 1.00
From the Minitab Express output, we can see that \(b_1=0.8034\) and \(SE(b_1)=0.1360\) We must construct a \(t\) distribution to look up the appropriate multiplier. There are \(n-2\) degrees of freedom. \(df=26-2=24\) \(t_{24,\;.05/2}=2.064\) \(b_1 \pm t \times SE(b_1)\) \(0.8034 \pm 2.064 (0.1360) = 0.8034 \pm 0.2807 = [0.523,\;1.084]\) We are 95% confident that \(0.523 \leq \beta_1 \leq 1.084 \) In other words, we are 95% confident that in the population the slope is between 0.523 and 1.084. For every one point increase in Test 3 the predicted value of Test 4 increases between 0.523 and 1.084 points.
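The hand computation above can be reproduced in a few lines of Python (the course uses Minitab Express; this SciPy sketch is just an alternative way to look up the multiplier and form the interval):

```python
from scipy import stats

# Values from the regression output above
b1, se_b1, n = 0.8034, 0.1360, 26

# Multiplier from the t distribution with df = n - 2 for 95% confidence
t_star = stats.t.ppf(1 - 0.05 / 2, df=n - 2)

lower = b1 - t_star * se_b1
upper = b1 + t_star * se_b1
```

This reproduces the interval \([0.523,\;1.084]\) computed above (up to rounding of the 2.064 multiplier).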
So the first time I tried to recolor this insertion case from memory, I ended up with the right-hand side recoloring; of course the left recoloring is more efficient as the loop ends at this point. If, however, we check in the right case whether the grandparent of a is the root (color it black) and otherwise continue the loop from that node, I read that it makes the recoloring not O(log(n)) anymore; why is that? It still seems to me to be O(log(2n)) at worst, even if the number of rotations performed is not O(1) anymore.
We have a load balancer that implements SSL interception for servers with TLS client authentication; how does the client certificate travel through this SSL mess?
Say I have: page size = 2^12 bytes, physical memory = 2^24 bytes, logical address space = 2^32 bytes. How many page-table entries are needed in case the program uses the whole logical address space?
I'm trying to get the regular expression of an automaton, but one state has a form that I don't know how to solve. In its simplest form: $q_{2} = 1q_{2} \cup 0q_{2}$. What's the procedure to solve this form? Thanks in advance.
Are there any resources, articles, research documents or case studies that explain how companies or apps succeed in moving users from social media to their website? In other words: creating motivation and building a transitional road map from social media to a website. Example: an Instagram account sells goods and wants to move to an e-commerce website, but is currently exploring how to do that (fear of the idea that its business is currently based on being a social-based store). Context: I'm currently doing secondary research for a UX project and I want to explore this specific area in order to get some assumptions to be validated.
I have a dynamic IP address and I am trying to find a solution to limit access to wp-admin. It seems that the only option is to block access by domain name using a Dynamic DNS Manager instead of an IP address. Are there any alternatives? 
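For the paging question above, the arithmetic is direct: with a single-level table and one entry per virtual page (the usual textbook assumption), the entry count depends only on the logical address space and the page size, while physical memory only determines the width of the frame-number field:

```python
# Assuming a single-level page table with one entry per virtual page
page_size = 2 ** 12          # bytes per page
logical_space = 2 ** 32      # bytes of virtual address space
physical_memory = 2 ** 24    # bytes of physical memory

entries = logical_space // page_size       # 2^(32-12) = 2^20 page-table entries
frames = physical_memory // page_size      # 2^(24-12) = 2^12 physical frames
```

So 2^20 entries are needed, each holding a 12-bit frame number plus control bits.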
In CLRS there is an exercise for the direct-addressing data structure: Suppose that a dynamic set S is represented by a direct-address table T of length m. Describe a procedure that finds the maximum element of S. What is the worst-case performance of your procedure? For this question I have seen a lot of websites indicating the worst-case possibility is O(m), by starting at the beginning, examining every non-null slot up to the end, and then returning the maximum index, but I think it can be done in a faster way than that. If a direct-address table has m slots, then obviously the highest value should be at the end (I thought keys are treated as indexes in direct addressing; correct me if I am wrong), so if we start iterating from the end then the worst case would be O(m-n), where n is the number of elements examined by our iterator before reaching a non-null value from the end of the direct-address table. Is something wrong with my approach? Can anyone clear this topic up? Good afternoon! The way I typed it in the title is obviously wrong, but it is meant to convey what I am after. Normally a switch case matches a single integer, but what if I want a case that matches a range of possible integers for the switch variable, for example switch(edad()) case (edad()>=1)? Here is a simple code example of what I mean:

switch (edadGato()) {
    case 1:
        cout << "The cat is due for the rabies vaccine";
        break;
    case 3:
        cout << "The cat is due for the panleukopenia vaccine";
        break;
    default:
        cout << "Based on its age, the cat is not due for any vaccine";
        break;
}

If the age were 2, it would not detect that a vaccine is already pending for being more than one year old (case 1); hence my question about whether something like case (range of numbers) exists. Note: I know this is easier with if, but if it is possible with switch, please share it with me. Thanks in advance. The user of my app has access to two tools within my app.
Each tool comprises multiple tables that the user has to fill with information. The app is more complex, but that’s roughly what it’s about. This is the UML diagram that I have designed based on the app: I am wondering whether the use case diagram should be designed otherwise. Maybe I should design the use case diagram for each tool independently. That way, it will be less cumbersome. But this will mean that each tool represents its own system, and I don’t know if, from a UML viewpoint, that is the right thing to do. Consider the following Python implementation of the Heap Sort algorithm:

def heapsort(lst):
    length = len(lst) - 1
    leastParent = length // 2
    for i in range(leastParent, -1, -1):
        moveDown(lst, i, length)
    for i in range(length, 0, -1):
        if lst[0] > lst[i]:
            swap(lst, 0, i)
            moveDown(lst, 0, i - 1)

def moveDown(lst, first, last):
    largest = 2 * first + 1
    while largest <= last:
        # right child is larger than left
        if (largest < last) and (lst[largest] < lst[largest + 1]):
            largest += 1
        # right child is larger than parent
        if lst[largest] > lst[first]:
            swap(lst, largest, first)
            # move down to largest child
            first = largest
            largest = 2 * first + 1
        else:
            return  # exit

def swap(lst, i, j):
    tmp = lst[i]
    lst[i] = lst[j]
    lst[j] = tmp

I have been able to formally prove that the worst case is in $\Theta(n \log n)$ and that the best case is in $\Theta(n)$ (some might argue the best case is in $\Theta(n \log n)$ as well, since that’s what most searches on the internet will return, but just think of what happens with an input list where all of the elements are the same number). I have shown both upper and lower bounds of the worst and best case by realizing that the route taken by the moveDown function depends on the height of the heap/tree and on whether the elements in the list are distinct or all the same number. I have not been able to prove the average case of this algorithm, which I know is also in $\Theta(n \log n)$.
I do know, however, that I am supposed to consider an input set or family of lists, all of length $n$, and I am allowed to make an assumption such as that all of the elements in the list are distinct. I confess that I am not good at average-case analysis and would really appreciate it if someone could give a complete and thorough proof (including the exact expressions, especially of the number of inputs), as it would help me understand the concept a great deal.
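As an empirical sketch (not a proof, and not part of the question), one can count comparisons over uniformly random permutations and watch the average grow like $n \log n$; the instrumented version below mirrors the implementation above, with helper names of my own choosing:

```python
import math
import random

def heapsort_comparisons(lst):
    """Heap sort as in the question, instrumented to count element comparisons."""
    count = 0

    def move_down(first, last):
        nonlocal count
        largest = 2 * first + 1
        while largest <= last:
            if largest < last:            # compare left and right children
                count += 1
                if lst[largest] < lst[largest + 1]:
                    largest += 1
            count += 1                    # compare child with parent
            if lst[largest] > lst[first]:
                lst[first], lst[largest] = lst[largest], lst[first]
                first = largest
                largest = 2 * first + 1
            else:
                return

    length = len(lst) - 1
    for i in range(length // 2, -1, -1):  # build the heap
        move_down(i, length)
    for i in range(length, 0, -1):        # repeatedly move the max to the end
        count += 1
        if lst[0] > lst[i]:
            lst[0], lst[i] = lst[i], lst[0]
            move_down(0, i - 1)
    return count

random.seed(0)
for n in (256, 512, 1024):
    avg = sum(heapsort_comparisons(random.sample(range(n), n)) for _ in range(20)) / 20
    # the ratio to n*log2(n) settles near a constant, consistent with Theta(n log n)
    print(n, round(avg / (n * math.log2(n)), 2))
```

This only suggests the growth rate for uniformly random distinct inputs; it does not replace a formal average-case argument.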
For a student of the Karnataka Board 1st Year PUC class, mathematics can be a challenging subject, as it deals with a wide range of topics from sets, relations and functions to complex numbers and quadratic equations, sequences and series, mathematical reasoning and so on. If students need a good foundation in Class 11 mathematics, they would need to do well in class and in the exams. To prepare for these exams, students can access the Karnataka 1st PUC Maths Important Questions to get practice. This will be good supplementary study material for students, apart from their Karnataka Board 1st PUC question papers, 1st PUC textbooks and so on. Students can also get an overview of the list of important questions here: Karnataka Board 1st PUC Maths Important Questions List 1. If \(\frac{a}{b}, \frac{c}{d}, \frac{e}{f}\) are proper fractions, then the LCM is given by ___________ 2. Three vessels can hold 9, 15 and 24 litres of water. Find the least quantity of water which can be filled by these vessels an exact number of times. 3. Find the H.C.F. of 165, 225 and 435. 4. The cost of a chair is ₹600 and the cost of a table is ₹900. Find the least sum of money that a person must possess in order to purchase a whole number of chairs or tables. 5. Express each of the following rational numbers as decimals: (a) \(\frac{3}{4}\) (b) \(\frac{15}{8}\) (c) \(\frac{8}{125}\) 6. Without actually performing the long division, state whether the following rational numbers will have a terminating decimal expansion or a non-terminating repeating decimal expansion: (a) \(\frac{17}{3125}\) (b) \(\frac{13}{8}\) 7. Prove that \(2 + 3\sqrt{5}\) is an irrational number. 8. Represent the following sets in both roster form and rule method: (a) the set of factors of 20 (b) the set of all prime numbers less than 10. 9. Out of 250 people, 160 drink coffee, 90 drink tea, 85 drink milk, 45 drink coffee and tea, 35 drink tea and milk, and 20 drink all three. How many will drink coffee and milk? 10.
If A = {5, 6, 7}, find P(A). 11. If (2x, x + y) = (8, 4), find x and y. 12. R is a relation on the set of natural numbers N standing for "x is related to y if x = 3y". Find R. 13. Find the domain and range of the function \(F(x)=\frac{x^{2}-2x+1}{x^{2}-9x+13}\), where \(x\in N\). 14. Simplify \(2(3^{-2})+\left(\frac{1}{3}\right)^{-3}+3^{2}\). 15. Prove that \(2\log\frac{3}{7} + \log\frac{49}{9}=0\). 16. If log 5 = 0.6990, find the number of digits in the integral part of \(5^{23}\). 17. Find the 11th term of the A.P. 3, 5, 7, 9, .... 18. Imrez buys a used car for ₹1,50,000; he pays ₹1,00,000 cash and agrees to pay the balance in annual installments of ₹5,000 plus 8% interest on the unpaid amount. How much will the car cost him? 19. If \(n(\cup)=700\), \(n(A)=200\), \(n(B)=300\) and \(n(A\cap B)=100\), then find \(n(A'\cap B')\). 20. Prove that \(\frac{1}{1+\cos A} +\frac{1}{1-\cos A}= 2\csc^{2}A\). Why Solve Karnataka 1st PUC Maths Important Questions? These important questions cover almost all the important topics that come under the Class 11 Maths syllabus. Hence, they also help students to learn the subject properly. Other benefits are given here: They help students get proper information about, and expectations of, the exam pattern. They help students self-analyze their failings and knowledge gaps in the subject. If they find any question tougher, they can spend more time on that topic. They are a good tool for revision. They give practice doing mathematical problems. Students can also come back to BYJU'S for more such interesting resources and study material like question papers or textbooks of Karnataka 1st PUC. Stay tuned and get more updates about the Karnataka PU Board.
$$\sec2\theta=\csc2\theta$$ My attempt: $$\begin{align} \cos2\theta &= \sin2\theta \tag{1}\\ \cos^2\theta+\sin^2\theta-2\cos\theta\sin\theta &=0 \tag{2}\\ (\cos\theta-\sin\theta)^2 &=0 \tag{3}\\ \cos\theta-\sin\theta &=0 \tag{4}\\ \cos\theta &=\sin\theta \tag{5}\\ \tan\theta &=1 \tag{6}\\ \theta &=180^\circ n+45^\circ \quad\text{??} \tag{7} \end{align}$$ But the answer was $90^\circ n+22.5^\circ$ and I'm not sure why. I've searched up the question online, and someone has proposed a solution where it is not factored; instead, the equation turns into $\tan2\theta=1$ on line $(2)$, and this allows you to get the correct solution. What's wrong with factoring it though?
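A quick numeric check (mine, not from the post) confirms the book's family and shows what the factored route misses: every $\theta = 90^\circ n + 22.5^\circ$ satisfies the equation, while the values $\theta = 180^\circ n + 45^\circ$ make $2\theta$ an odd multiple of $90^\circ$, where $\sec 2\theta$ is undefined.

```python
import math

def sec(x): return 1 / math.cos(x)
def csc(x): return 1 / math.sin(x)

# the book's answer: theta = 90n + 22.5 degrees
for n in range(-2, 3):
    t = math.radians(90 * n + 22.5)
    assert abs(sec(2 * t) - csc(2 * t)) < 1e-9   # all satisfy sec(2θ) = csc(2θ)

# the factored attempt: theta = 180n + 45 degrees
for n in range(-2, 3):
    t = math.radians(180 * n + 45)
    # here cos(2θ) = 0, so sec(2θ) is undefined: these are not solutions at all
    assert abs(math.cos(2 * t)) < 1e-9
```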
Now showing items 1-5 of 5 Forward-backward multiplicity correlations in pp collisions at √s = 0.9, 2.76 and 7 TeV (Springer, 2015-05-20) The strength of forward-backward (FB) multiplicity correlations is measured by the ALICE detector in proton-proton (pp) collisions at √s = 0.9, 2.76 and 7 TeV. The measurement is performed in the central pseudorapidity ... Rapidity and transverse-momentum dependence of the inclusive J/$\mathbf{\psi}$ nuclear modification factor in p-Pb collisions at $\mathbf{\sqrt{\textit{s}_{NN}}}=5.02$ TeV (Springer, 2015-06) We have studied the transverse-momentum ($p_{\rm T}$) dependence of the inclusive J/$\psi$ production in p-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV, in three center-of-mass rapidity ($y_{\rm cms}$) regions, down to ... Measurement of charm and beauty production at central rapidity versus charged-particle multiplicity in proton-proton collisions at $\sqrt{s}$ = 7 TeV (Springer, 2015-09) Prompt D meson and non-prompt J/$\psi$ yields are studied as a function of the multiplicity of charged particles produced in inelastic proton-proton collisions at a centre-of-mass energy of $\sqrt{s}=7$ TeV. The results ... Coherent $\rho^0$ photoproduction in ultra-peripheral Pb-Pb collisions at $\mathbf{\sqrt{\textit{s}_{\rm NN}}} = 2.76$ TeV (Springer, 2015-09) We report the first measurement at the LHC of coherent photoproduction of $\rho^0$ mesons in ultra-peripheral Pb-Pb collisions. The invariant mass and transverse momentum distributions for $\rho^0$ production are studied ... Inclusive, prompt and non-prompt J/ψ production at mid-rapidity in Pb-Pb collisions at √sNN = 2.76 TeV (Springer, 2015-07-10) The transverse momentum (p_T) dependence of the nuclear modification factor R_AA and the centrality dependence of the average transverse momentum 〈p_T〉 for inclusive J/ψ have been measured with ALICE for Pb-Pb collisions ...
11.2.1.2 - Cards (Equal Proportions) Research question Section When randomly selecting a card from a deck with replacement, are we equally likely to select a heart, diamond, spade, and club? I randomly selected a card from a standard deck 40 times with replacement. I pulled 13 hearts, 8 diamonds, 8 spades, and 11 clubs. Let's use the five-step hypothesis testing procedure: \(H_0: p_h=p_d=p_s=p_c=0.25\) \(H_a:\) at least one \(p_i\) is not as specified in the null We can use the null hypothesis to check the assumption that all expected counts are at least 5. \(Expected\;count=n (p_i)\) All \(p_i\) are 0.25. \(40(0.25)=10\), thus this assumption is met and we can approximate the sampling distribution using the chi-square distribution. \(\chi^2=\Sigma \frac{(Observed-Expected)^2}{Expected} \) All expected values are 10. Our observed values were 13, 8, 8, and 11. \(\chi^2=\frac{(13-10)^2}{10}+\frac{(8-10)^2}{10}+\frac{(8-10)^2}{10}+\frac{(11-10)^2}{10}\) \(\chi^2=\frac{9}{10}+\frac{4}{10}+\frac{4}{10}+\frac{1}{10}\) \(\chi^2=1.8\) Our sampling distribution will be a chi-square distribution. \(df=k-1=4-1=3\) We can find the p-value by constructing a chi-square distribution with 3 degrees of freedom to find the area to the right of \(\chi^2=1.8\) The p-value is 0.614935 \(p>0.05\) therefore we fail to reject the null hypothesis. There is not sufficient evidence that the proportions of hearts, diamonds, spades, and clubs that are randomly drawn from this deck are different.
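For reference, this goodness-of-fit test can be reproduced outside Minitab; here is a sketch using scipy (assumed available), whose `chisquare` function defaults to equal expected counts:

```python
from scipy import stats

observed = [13, 8, 8, 11]  # hearts, diamonds, spades, clubs in 40 draws
# with no f_exp given, chisquare assumes equal expected counts (10 each here)
chi2, p = stats.chisquare(observed)
print(f"chi-square = {chi2:.1f}, p-value = {p:.6f}")  # chi-square = 1.8
```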
There are two prominent uses of the term "average" in algorithm analysis. Average-case as a special case of expected costs Here, "average case" just means "expected case w.r.t. the uniform distribution", since we usually analyse with uniform inputs in mind (everything else is hard, and there's not much reason to prefer one distribution over another in most cases). Example: the average-case running-time cost of sorting algorithms is often analyzed w.r.t. uniformly-random permutations. Average cost in the classic sense. When analyzing data structures, we can look at average costs across the contained elements for a fixed instance -- no probability distribution here (well, you could...). That is, we may still have/want to consider a worst-, average- or best-case instance. Example: Consider BSTs. The average search cost of a given tree is the total cost for searching all contained elements (one after the other) divided by the number of contained elements. This is a classic quantity in AofA called the internal path length. Note: There are situations where average and expected do not usually mean the same thing. For instance, the expected (also: average-case) height of BSTs is in $O(\log n)$ but the average height is in $\Theta(\sqrt{n})$. That is because "expected" is implicitly (by tradition) meant w.r.t. uniformly-random permutations of insertion operations, whereas "average" means the average over all BSTs of a given size. The two distributions are not the same, and apparently significantly so! Recommendation: Whenever you use "expected" or "average-case", be very clear about which quantities are random w.r.t. which distribution. The specific sentence you quote is indeed not clear if read in isolation -- if you ignore that CLRS specifies exactly what "simple uniform hashing" means on the very same page. There are two potentially random variables here: 1) the content of the hashtable itself, and 2) the key searched for. Simple uniform hashing is a simple way of specifying both.
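The "average cost for a fixed instance" idea can be made concrete with a tiny sketch (the representation and helper names are mine): build a specific BST and divide the total search cost over all contained keys by the number of keys.

```python
def insert(tree, key):
    """Insert a key into a BST represented as nested dicts (None = empty tree)."""
    if tree is None:
        return {"key": key, "left": None, "right": None}
    side = "left" if key < tree["key"] else "right"
    tree[side] = insert(tree[side], key)
    return tree

def total_search_cost(tree, depth=1):
    """Nodes examined when searching every contained key once:
    1 for the root, 2 for its children, ... (internal path length + n)."""
    if tree is None:
        return 0
    return (depth + total_search_cost(tree["left"], depth + 1)
                  + total_search_cost(tree["right"], depth + 1))

def size(tree):
    return 0 if tree is None else 1 + size(tree["left"]) + size(tree["right"])

balanced = None
for k in [2, 1, 3]:                    # a perfectly balanced 3-node tree
    balanced = insert(balanced, k)
print(total_search_cost(balanced) / size(balanced))      # (1+2+2)/3 ≈ 1.67

degenerate = None
for k in [1, 2, 3]:                    # a "linked list" shaped tree
    degenerate = insert(degenerate, k)
print(total_search_cost(degenerate) / size(degenerate))  # (1+2+3)/3 = 2.0
```

Note that no randomness appears anywhere: this is the classic per-instance average, computed for one best-case-shaped and one worst-case-shaped instance.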
We abstract from sequences of insertions¹ and just assume that every one of the $n$ elements we inserted independently hashed to each of the $m$ addresses with probability $1/m$. We assume that the searched key hashes to each address with probability $1/m$. That's how the proof works: our search hits each list with probability $1/m$ (cf. 2), and they all have the same expected length of $n/m$ (via 1). Hence, the expected cost (under this specific model) for searching for $x$ not in the table is proportional to $\qquad\displaystyle\begin{align*} T_u(x,n,m) &= 1 + \sum_{i=1}^m \operatorname{Pr}[h(x) = i] \cdot \mathbb{E}[\operatorname{length}(T[i])] \\ &\overset{1,2}{=} 1 +\sum_{i=1}^m \frac{1}{m} \cdot \frac{n}{m} \\ &= 1 + \frac{n}{m}.\end{align*}$ The "$+1$" is there to account for computing $h(x)$ and accessing $T[h(x)]$; the sum represents the cost of searching along the list. That is fair since the sequence of insertions does not have as much impact on the resulting structure as for, say, BSTs. The hash function shakes everything up. We don't want to talk about the precise interaction of sequence and hash function, so we just assume that the result of both is independently uniform -- that's something we can work with. It may not represent reality, of course!
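The model can also be simulated directly; the sketch below (my own helper, tracking only chain lengths) estimates the expected unsuccessful-search cost and lands near $1 + n/m$:

```python
import random

def avg_unsuccessful_cost(n, m, trials=2000, seed=42):
    """Simple uniform hashing: n keys land independently and uniformly in m
    slots; an unsuccessful search costs 1 (hash + slot access) plus the
    length of the probed chain, which is also chosen uniformly."""
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        lengths = [0] * m
        for _ in range(n):
            lengths[rng.randrange(m)] += 1       # random variable 1: table content
        total += 1 + lengths[rng.randrange(m)]   # random variable 2: searched key
    return total / trials

print(avg_unsuccessful_cost(n=50, m=20))  # close to 1 + 50/20 = 3.5
```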
11.2.1.3 - Roulette Wheel (Different Proportions) Research Question Section An American roulette wheel contains 38 slots: 18 red, 18 black, and 2 green. A casino has purchased a new wheel and they want to know if there is any evidence that the wheel is unfair. They spin the wheel 100 times and it lands on red 44 times, black 49 times, and green 7 times. If the wheel is fair then \(p_{red}=\frac{18}{38}\), \(p_{black}=\frac{18}{38}\), and \(p_{green}=\frac{2}{38}\). All of these proportions combined equal 1. \(H_0: p_{red}=\frac{18}{38},\;p_{black}=\frac{18}{38}\;and\;p_{green}=\frac{2}{38}\) \(H_a: At\;least\;one\;p_i\;is \;not\;as\;specified\;in\;the\;null\) In order to conduct a chi-square goodness of fit test all expected values must be at least 5. For both red and black: \(Expected \;count=100(\frac{18}{38})=47.368\) For green: \(Expected\;count=100(\frac{2}{38})=5.263\) All expected counts are at least 5 so we can conduct a chi-square goodness of fit test. \(\chi^2=\Sigma \frac{(Observed-Expected)^2}{Expected} \) In the first step we computed the expected values for red and black to be 47.368 and for green to be 5.263. \(\chi^2= \frac{(44-47.368)^2}{47.368}+\frac{(49-47.368)^2}{47.368}+\frac{(7-5.263)^2}{5.263} \) \(\chi^2=0.239+0.056+0.573=0.868\) Our sampling distribution will be a chi-square distribution. \(df=k-1=3-1=2\) We can find the p-value by constructing a chi-square distribution with 2 degrees of freedom to find the area to the right of \(\chi^2=0.868\) The p-value is 0.647912 \(p>0.05\) therefore we should fail to reject the null hypothesis. There is not sufficient evidence that this roulette wheel is unfair.
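Unlike the equal-proportions example, the expected counts here are unequal, so they must be passed explicitly; a scipy sketch (assumed available):

```python
from scipy import stats

observed = [44, 49, 7]                             # red, black, green in 100 spins
expected = [100 * 18/38, 100 * 18/38, 100 * 2/38]  # 47.368, 47.368, 5.263
chi2, p = stats.chisquare(observed, f_exp=expected)
print(f"chi-square = {chi2:.3f}, p-value = {p:.6f}")
```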
Now showing items 1-10 of 27 Production of light nuclei and anti-nuclei in $pp$ and Pb-Pb collisions at energies available at the CERN Large Hadron Collider (American Physical Society, 2016-02) The production of (anti-)deuteron and (anti-)$^{3}$He nuclei in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV has been studied using the ALICE detector at the LHC. The spectra exhibit a significant hardening with ... Forward-central two-particle correlations in p-Pb collisions at $\sqrt{s_{\rm NN}}$ = 5.02 TeV (Elsevier, 2016-02) Two-particle angular correlations between trigger particles in the forward pseudorapidity range ($2.5 < |\eta| < 4.0$) and associated particles in the central range ($|\eta| < 1.0$) are measured with the ALICE detector in ... Measurement of D-meson production versus multiplicity in p-Pb collisions at $\sqrt{s_{\rm NN}}=5.02$ TeV (Springer, 2016-08) The measurement of prompt D-meson production as a function of multiplicity in p–Pb collisions at $\sqrt{s_{\rm NN}}=5.02$ TeV with the ALICE detector at the LHC is reported. D$^0$, D$^+$ and D$^{*+}$ mesons are reconstructed ... Measurement of electrons from heavy-flavour hadron decays in p–Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV (Elsevier, 2016-03) The production of electrons from heavy-flavour hadron decays was measured as a function of transverse momentum ($p_{\rm T}$) in minimum-bias p–Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV with ALICE at the LHC for $0.5 ... Direct photon production in Pb-Pb collisions at $\sqrt{s_{NN}}$=2.76 TeV (Elsevier, 2016-03) Direct photon production at mid-rapidity in Pb-Pb collisions at $\sqrt{s_{\rm NN}} = 2.76$ TeV was studied in the transverse momentum range $0.9 < p_{\rm T} < 14$ GeV/$c$. Photons were detected via conversions in the ALICE ...
Multi-strange baryon production in p-Pb collisions at $\sqrt{s_\mathbf{NN}}=5.02$ TeV (Elsevier, 2016-07) The multi-strange baryon yields in Pb--Pb collisions have been shown to exhibit an enhancement relative to pp reactions. In this work, $\Xi$ and $\Omega$ production rates have been measured with the ALICE experiment as a ... $^{3}_{\Lambda}\mathrm H$ and $^{3}_{\bar{\Lambda}} \overline{\mathrm H}$ production in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV (Elsevier, 2016-03) The production of the hypertriton nuclei $^{3}_{\Lambda}\mathrm H$ and $^{3}_{\bar{\Lambda}} \overline{\mathrm H}$ has been measured for the first time in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV with the ALICE ... Multiplicity dependence of charged pion, kaon, and (anti)proton production at large transverse momentum in p-Pb collisions at $\sqrt{s_{\rm NN}}$= 5.02 TeV (Elsevier, 2016-09) The production of charged pions, kaons and (anti)protons has been measured at mid-rapidity ($-0.5 < y < 0$) in p-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV using the ALICE detector at the LHC. Exploiting particle ... Jet-like correlations with neutral pion triggers in pp and central Pb–Pb collisions at 2.76 TeV (Elsevier, 2016-12) We present measurements of two-particle correlations with neutral pion trigger particles of transverse momenta $8 < p_{\mathrm{T}}^{\rm trig} < 16 \mathrm{GeV}/c$ and associated charged particles of $0.5 < p_{\mathrm{T}}^{\rm ... Centrality dependence of charged jet production in p-Pb collisions at $\sqrt{s_\mathrm{NN}}$ = 5.02 TeV (Springer, 2016-05) Measurements of charged jet production as a function of centrality are presented for p-Pb collisions recorded at $\sqrt{s_{\rm NN}} = 5.02$ TeV with the ALICE detector. Centrality classes are determined via the energy ...
The best intuitive analogy I've heard is with classical sound waves. Consider a musical instrument playing a pure sine wave of frequency $\nu$ and amplitude $A$, and no other harmonic frequencies at all. Graphing this in frequency-amplitude space ($x$-axis=frequency, $y$=amplitude) gives you a $\delta$-function-like point function with value $y=A$ at $x=\nu$, and zero everywhere else. That represents your exact knowledge of the note's frequency. But at what time was the note played? A pure sine wave extends from $-\infty<t<\infty$. Any attempt to play a shorter note necessarily introduces additional components/harmonics in its Fourier decomposition. And the shorter the interval $t_0<t<t_1$ you want, the broader your frequency spectrum has to become. Indeed, imagine an instantaneous sound. Neither your ear, nor any apparatus, can say anything about its frequency at all -- you'd have to sense some finite portion of the waveform to analyse its shape/components, but "instantaneous" precludes that. So, you can't simultaneously know both a note's frequency and the time it's played, due to the Fourier conjugate nature of frequency/time. The better you know one, the worse you know the other. And, as @annav mentioned, that's analogous to the nature of conjugate quantum observables. Edit: to address @sanchises remark about some "crude MSPaint drawings"... For simplicity (i.e., my own simplicity generating the following "crude drawings"), I'm illustrating an almost-square wave below, rather than a sine wave. Suppose you wanted to produce a sound wave with a one-cycle duration, looking something like, So the "tails" are zero in both directions, indicating the sound's finite duration. But if we try generating that with just two fourier components, we can't get those zero-tails. Instead, it looks like, As you see, we can't "localize" the sound's duration with just two frequencies. 
To get a better approximation, four components looks like, And that still fails to accomplish much by way of "localization". Next, eight components looks like, And that's beginning to exhibit the behavior we're looking for. Sixteen looks like, And I could go on. The initial illustration above was generated with 99 components, and looks pretty much like the intended square wave. Comment: you guys coincidentally stepped into one of my little programs when mentioning drawings. See http://www.forkosh.com/onedwaveeq.html for a discussion, although not about uncertainty. To get the above illustrations, I used the following parameters in that "Solver Box" at top, nrows=100&ncols=256&ncoefs=99&fgblue=135&f=0,0,0,0,0,0,1,1,1,1,1,-1,-1,-1,-1,-1,0,0,0,0,0,0,0&timestep=1&bigf=1 Just change the ncoefs=99 to generate the corresponding drawings above.
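The same picture can be reproduced without that page; a small numpy sketch (parameters are mine, not the Solver Box's) sums the odd harmonics of a square wave and shows the "localization" error away from the jumps shrinking as components are added:

```python
import numpy as np

def square_wave_partial_sum(t, n_terms):
    """Fourier partial sum of the odd square wave: (4/pi) * sum of sin(k t)/k
    over the first n_terms odd harmonics k = 1, 3, 5, ..."""
    s = np.zeros_like(t)
    for k in range(1, 2 * n_terms, 2):
        s += (4 / np.pi) * np.sin(k * t) / k
    return s

# sample points away from the discontinuities at 0 and pi, where the wave is +1
t = np.linspace(0.1, np.pi - 0.1, 200)
for n_terms in (2, 4, 8, 99):
    err = np.max(np.abs(square_wave_partial_sum(t, n_terms) - 1.0))
    print(n_terms, round(float(err), 3))  # error shrinks as components are added
```

Right at the jumps the overshoot never disappears (the Gibbs phenomenon), which is why the error is measured slightly away from them.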
The geodesic equation is $$ {d^2 x^\mu \over {ds}^2}+\Gamma^\mu {}_{\alpha \beta}{d x^\alpha \over ds}{d x^\beta \over ds}=0\ $$ for some scalar parameter of motion s and connection coefficients of the second kind $\Gamma^\mu {}_{\alpha \beta}$. It's straightforward to show this reduces to $$ \frac{d^2 x^n}{{dt}^2} = -\Gamma^n{}_{00} $$ for n in [1, 2, 3] when rewritten in terms of $t\equiv x^0$ and in the limit of velocities $\ll c$. This negative makes sense when substituting $\Gamma^\mu {}_{00}=-\frac 1 2 g^{\mu\sigma}\partial_\sigma g_{00}$ (static field limit), and then $g_{00}=1+2\phi$ (weak field limit, $\eta_{\mu\nu}=\operatorname{diag}(1, -1, -1, -1)$). But if we forget that, and instead consider that $\Gamma^n {}_{00}=\frac{\partial \mathbf{e}_0}{\partial x^0}\cdot \mathbf{e}^n$, then intuitively why is this negative? Shouldn't a test particle moving in $\mathbf{e}_0$ accelerate in the direction $\mathbf{e}_0$ changes? Edit: It was the dual. It makes more sense to consider $\Gamma_{kij}= \frac{\partial \mathbf{e}_{i}}{\partial x^j} \cdot \mathbf{e}^{m} g_{mk}= \frac{\partial \mathbf{e}_{i}}{\partial x^j} \cdot \mathbf{e}_{k}$, whose sign is flipped by $g^{nn}$.
Given a set of $n$ points in $R^d$, the goal is to cover them with (finitely many) unit balls such that the following conditions are satisfied: 1) the number of balls required to cover all the points is minimized; 2) the inter-cluster separation between clusters is maximized, where it is defined as follows: for clusters $C_i$ and $C_j$ (with centroids $c_i$ and $c_j$), their inter-cluster separation $R_{ij}$ is $$ R_{ij}=\frac{d_{ij}}{(S_i+S_j)},$$ where $S_i=\Sigma_{p\in C_i} dist(c_i,p)$ and $d_{ij}=dist(c_i,c_j)$. (Here, $dist$ is the usual $l_2$-norm.) An exact algorithm for the problem should output the optimal centers $c_1, c_2, \ldots$ of the clusters. I have the following two questions regarding the problem: 1) What is the inherent complexity of the problem? 2) Can the above problem be solved in sublinear query/time (in a similar spirit to the learning problem in the PAC model)? More precisely, by querying $poly(\frac{1}{\epsilon}, \log n)$ many positions from the input, the algorithm should output approximated clusters, i.e. $\Sigma_i dist(c_i, c_{opt_i})<\epsilon$. (Here $c_{opt_i}$ is the center of the $i^{th}$ optimal cluster.) Please let me know if I have stated the question clearly. Thanks!
Developed in 1945 by the statistician Frank Wilcoxon, the signed rank test was one of the first "nonparametric" procedures developed. It is considered a nonparametric procedure because we make only two simple assumptions about the underlying distribution of the data, namely that: (1) the random variable \(X\) is continuous, and (2) the probability density function of \(X\) is symmetric. Then, upon taking a random sample \(X_1, X_2, \ldots, X_n\), we are interested in testing the null hypothesis: \(H_0 : m=m_0\) against any of the possible alternative hypotheses: \(H_A : m > m_0\) or \(H_A : m < m_0\) or \(H_A : m \ne m_0\). As we often do, let's motivate the procedure by way of example. Let \(X_i\) denote the length, in centimeters, of a randomly selected pygmy sunfish. Given the sample: 5.0 3.9 5.2 5.5 2.8 6.1 6.4 2.6 1.7 4.3 can we conclude that the median length of pygmy sunfish differs significantly from 3.7 centimeters? Solution. We are interested in testing the null hypothesis \(H_0: m = 3.7\) against the alternative hypothesis \(H_A: m \ne 3.7\). In general, the Wilcoxon signed rank test procedure requires five steps. We'll introduce each of the steps as we apply them to the data in this example.
Step #1. In general, calculate the deviations \(X_i - m_0\).
Step #2. In general, calculate the absolute values \(|X_i - m_0|\).
Step #3. Determine the rank \(R_i\) of each absolute deviation, from smallest to largest.
Step #4. Determine the value of \(W\), the Wilcoxon signed rank test statistic: \[ W=\sum_{i=1}^{n}Z_i R_i\] where \(Z_i\) is an indicator variable with \(Z_i = 1\) if \(X_i - m_0 > 0\) and \(Z_i = 0\) otherwise. And, therefore, \(W\) equals 40: \[ W=(1)(5)+(1)(1)+ \cdots +(0)(8)+(1)(2) =5+1+6+7+9+10+2=40\]
Step #5. Determine if the observed value of \(W\) is extreme in light of the assumed value of the median under the null hypothesis. That is, calculate the P-value associated with \(W\), and make a decision about whether to reject or not to reject.
Whoa, nellie! We're going to have to take a break from this example before we can finish, as we first have to learn something about the distribution of \(W\).
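The five steps can be carried out in a few lines of Python (a sketch of my own, not part of the lesson); for the sunfish data it reproduces $W = 40$:

```python
data = [5.0, 3.9, 5.2, 5.5, 2.8, 6.1, 6.4, 2.6, 1.7, 4.3]
m0 = 3.7

deviations = [x - m0 for x in data]                    # Step 1
# Steps 2-3: rank the absolute deviations, smallest (rank 1) to largest (rank n)
order = sorted(range(len(data)), key=lambda i: abs(deviations[i]))
ranks = [0] * len(data)
for rank, i in enumerate(order, start=1):
    ranks[i] = rank

# Step 4: W = sum of the ranks attached to positive deviations (Z_i = 1)
W = sum(r for d, r in zip(deviations, ranks) if d > 0)
print(W)  # 40
```

This simple ranking assumes no ties among the absolute deviations, which holds for these data.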
As is always the case, in order to find the distribution of the discrete random variable W, we need: (1) to find the range of possible values of W, that is, we need to specify the support of W, and (2) to determine the probability that W takes on each of the values in the support. Let's tackle the support of W first. Well, the smallest that \(W=\sum_{i=1}^{n}Z_i R_i\) could be is 0. That would happen if each observation fell below the value of the median \(m_0\) specified in the null hypothesis, thereby causing \(Z_i = 0\) for all \(i\). The largest that \(W=\sum_{i=1}^{n}Z_i R_i\) could be is \(\frac{n(n+1)}{2}\). That would happen if each observation fell above the value of the median \(m_0\) specified in the null hypothesis, thereby causing \(Z_i = 1\) for all \(i\), and therefore W reduces to the sum of the integers from 1 to n: \[W=\sum_{i=1}^{n}Z_i R_i=\sum_{i=1}^{n}i=\frac{n(n+1)}{2}\] So, in summary, W is a discrete random variable whose support ranges between 0 and \(n(n+1)/2\). Now, if we have a small sample size n, such as we do in the above example, we could use the exact probability distribution of W to calculate the P-value. There we have it. We're just about done with finding the exact probability distribution of W when n = 3. All we have to do is recognize that under the null hypothesis, each of the above eight arrangements (columns) is equally likely. Therefore, we can use the classical approach to assigning the probabilities. That is: And, just to make sure that we haven't made an error in our calculations, we can verify that the sum of the probabilities over the support 0, 1, ..., 6 is indeed 1/8 + 1/8 + ... + 1/8 = 1. Hmmm. That was easy enough. Let's do the same thing for a sample size of n = 4. Well, in that case, the possible values of W are the integers 0, 1, 2, ..., 10.
Now, each of the four data points would be assigned a rank \(R_i\) of either 1, 2, 3, or 4, and its rank would be included in W or not depending on whether the data point fell above or below the hypothesized median \(m_0\). Again, under the null hypothesis, each of the above 16 arrangements is equally likely, so we can use the classical approach to assigning the probabilities: Do you want to do the calculation for the case where n = 5? Here's what the enumeration of possible outcomes looks like: After having worked through finding the exact probability distribution of W for the cases where n = 3, 4, and 5, we should be able to make some generalizations. First, note that, in general, there are \(2^n\) total ways to make signed rank sums, and therefore the probability that W takes a particular value w is: \[P(W=w)=f(w)=\frac{c(w)}{2^n}\] where \(c(w)\) = the number of possible ways to assign a + or a − to the first n integers so that \(\sum_{i=1}^{n}Z_i R_i=w\). Okay, now that we have the general idea of how to determine the exact probability distribution of W, we can breathe a sigh of relief when it comes to actually analyzing a set of data. That's because someone else has done the dirty work for us for sample sizes n = 3, 4, ..., 12, and published the relevant results in a statistical table of W [1]. (Our textbook authors chose not to include such a table in our textbook.) By relevant, I mean the probabilities in the "tails" of the distribution of W. After all, that's what P-values generally are, that is, probabilities in the tails of the distribution under the null hypothesis. As the table of W suggests, our determination of the probability distribution of W when n = 4 agrees with the results published in the table, because both we and the table claim that: \[P(W \le 0)=P(W \ge 10)=0.062\] and: \(P(W \le 1)=P(W =0)+P(W =1)=0.062+0.062=0.125\) \(P(W \ge 9)=P(W =9)+P(W =10)=0.062+0.062=0.125\) Okay, it should be pretty obvious that working with the exact distribution of W is going to be pretty limiting when it comes to large sample sizes.
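The "dirty work" is easy to redo by brute force nowadays: enumerating all $2^n$ equally likely sign assignments recovers the tabled values for $n = 4$ (a sketch, with names of my own):

```python
from collections import Counter
from itertools import product

def exact_distribution(n):
    """Tally W = sum of Z_i * i over all 2^n equally likely sign vectors."""
    counts = Counter(sum(z * r for z, r in zip(signs, range(1, n + 1)))
                     for signs in product((0, 1), repeat=n))
    total = 2 ** n
    return {w: c / total for w, c in sorted(counts.items())}

dist = exact_distribution(4)
print(dist[0], dist[10])   # both 1/16 = 0.0625, the table's 0.062
print(dist[0] + dist[1])   # P(W <= 1) = 0.125
```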
In that case, we do what we typically do when we have large sample sizes, namely use an approximate distribution of W. Specifically, under the null hypothesis, \[W'=\frac{W - \frac{n(n+1)}{4}}{\sqrt{\frac{n(n+1)(2n+1)}{24}}}\] follows an approximate standard normal distribution. Proof. Because the Central Limit Theorem is at work here, the approximate standard normal distribution part of the theorem is trivial. Our proof therefore reduces to showing that the mean and variance of W are: \(E(W)=\frac{n(n+1)}{4}\) and \(Var(W)=\frac{n(n+1)(2n+1)}{24}\) respectively. To find E(W) and Var(W), note that \(W=\sum_{i=1}^{n}Z_i R_i\) has the same distribution as \(U=\sum_{i=1}^{n}U_i\) where each \(U_i\) takes the value 0 or \(i\), each with probability \(\frac{1}{2}\). In case that claim was less than obvious, consider this intuitive, hand-waving kind of argument: under the null hypothesis, each rank \(i\) is equally likely to be attached to a positive or a negative deviation. At any rate, we therefore have: \[E(W)=E(U)=\sum_{i=1}^{n}E(U_i)=\sum_{i=1}^{n}\left[0\left(\frac{1}{2}\right)+i\left(\frac{1}{2}\right) \right]=\frac{1}{2}\sum_{i=1}^{n}i=\frac{1}{2}\times\frac{n(n+1)}{2}=\frac{n(n+1)}{4} \] and: \[Var(W) =Var(U)=\sum_{i=1}^{n}Var(U_i)\] because the \(U_i\)'s are independent under the null hypothesis. Now: \[Var(U_i) = E(U_{i}^{2})-(E(U_i))^2 = \left[0^2\left(\frac{1}{2}\right)+i^2\left(\frac{1}{2}\right) \right]-\left(\frac{i}{2}\right)^2 = \frac{i^2}{2}-\frac{i^2}{4} = \frac{i^2}{4}\] and therefore: \[Var(W)=\sum_{i=1}^{n}Var(U_i)=\sum_{i=1}^{n}\frac{i^2}{4}=\frac{1}{4}\sum_{i=1}^{n}i^2=\frac{1}{4}\times\frac{n(n+1)(2n+1)}{6}=\frac{n(n+1)(2n+1)}{24} \] Therefore, in summary, under the null hypothesis, we have that: \[W'=\frac{\sum_{i=1}^{n}Z_i R_i - \frac{n(n+1)}{4}}{\sqrt{\frac{n(n+1)(2n+1)}{24}}} \] follows an approximate standard normal distribution, as was to be proved. Let's return to our example now to complete our work. Let \(X_i\) denote the length of a randomly selected pygmy sunfish. Given the sample: 5.0 3.9 5.2 5.5 2.8 6.1 6.4 2.6 1.7 4.3 can we conclude that the median length of pygmy sunfish differs significantly from 3.7 centimeters? Solution. Recall that we are interested in testing the null hypothesis \(H_0: m = 3.7\) against the alternative hypothesis \(H_A: m \ne 3.7\).
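As an aside, the two moment formulas derived above can be checked by brute-force enumeration for small $n$ (my sketch, not the textbook's):

```python
from itertools import product

def mean_var(n):
    """Exact mean and variance of W over all 2^n equally likely sign vectors."""
    values = [sum(z * r for z, r in zip(signs, range(1, n + 1)))
              for signs in product((0, 1), repeat=n)]
    mean = sum(values) / len(values)
    var = sum((v - mean) ** 2 for v in values) / len(values)
    return mean, var

for n in (3, 6, 10):
    mean, var = mean_var(n)
    assert mean == n * (n + 1) / 4
    assert abs(var - n * (n + 1) * (2 * n + 1) / 24) < 1e-9
print("E(W) and Var(W) match the formulas for n = 3, 6, 10")
```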
The last time we worked on this example, we got as far as determining that W = 40 for the given data set. Now, we just have to use what we know about the distribution of W to complete our hypothesis test. Well, in this case, with n = 10, our sample size is fairly small, so we can use the exact distribution of W. The upper and lower percentiles of the Wilcoxon signed rank statistic when n = 10 are: Therefore, our P-value is 2 × 0.116 = 0.232. Because our P-value is large, we cannot reject the null hypothesis. There is insufficient evidence at the 0.05 level to conclude that the median length of pygmy sunfish differs significantly from 3.7 centimeters. A couple of notes are worth mentioning before we take a look at another example: (1) Our textbook authors define \(W=\sum_{i=1}^{n}R_i\) as the sum of all of the signed ranks, as opposed to just the sum of the positive ranks. That is perfectly fine, but not the most typical way of defining W. (2) W is based on the ranks of the deviations from the hypothesized median m_0, not on the deviations themselves. In the above example, W = 40 whether x_7 = 6.4 or 10000 (now that's a pretty strange sunfish), because its rank would be unchanged. It is in this sense that W protects against the effect of outliers. Now for that last example. The median age of the onset of diabetes is thought to be 45 years. The ages at onset of a random sample of 30 people with diabetes are: 35.5 44.5 39.8 33.3 51.4 51.3 30.5 48.9 42.1 40.3 46.8 38.0 40.1 36.8 39.3 65.4 42.6 42.8 59.8 52.4 26.2 60.9 45.6 27.1 47.3 36.6 55.6 45.1 52.2 43.5 Assuming the distribution of the age of the onset of diabetes is symmetric, is there evidence to conclude that the median age of the onset of diabetes differs significantly from 45 years? Solution. We are interested in testing the null hypothesis H_0: m = 45 against the alternative hypothesis H_A: m ≠ 45.
We can use Minitab's calculator and statistical functions to do the dirty work for us: Then, summing the last column, we get: Because we have a large sample ( n = 30), we can use the normal approximation to the distribution of W. In this case, our P-value is defined as two times the probability that W ≤ 200. Therefore, using a half-unit correction for continuity, our transformed signed rank statistic is: \[W'=\frac{200.5 - \left(\frac{30(31)}{4}\right)}{\sqrt{\frac{30(31)(61)}{24}}}=-0.6581 \] Therefore, upon using a normal probability calculator (or table), we get that our P-value is: \[P \approx 2 \times P(W' < -0.66)=2(0.2546) \approx 0.51 \] Because our P-value is large, we cannot reject the null hypothesis. There is insufficient evidence at the 0.05 level to conclude that the median age of the onset of diabetes differs significantly from 45 years. By the way, we can even be lazier and let Minitab do all of the calculation work for us. Under the Stat menu, if we select Nonparametrics, and then 1-Sample Wilcoxon, we get:
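A sketch of my own (not Minitab output) of the normal-approximation computation, using the continuity-corrected statistic from the example, where W = 200 falls below its mean:

```python
from math import erf, sqrt

def wilcoxon_normal_approx(w, n):
    """Two-sided p-value for W via the normal approximation with a
    half-unit continuity correction (lower-tail case, w below the mean)."""
    mean = n * (n + 1) / 4
    var = n * (n + 1) * (2 * n + 1) / 24
    w_prime = (w + 0.5 - mean) / sqrt(var)
    phi = 0.5 * (1 + erf(w_prime / sqrt(2)))  # standard normal CDF at w_prime
    return w_prime, 2 * phi

w_prime, p = wilcoxon_normal_approx(200, 30)
# w_prime ≈ -0.6581 and p ≈ 0.51, as in the diabetes example above.
```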
Symbols:Abbreviations/P/PGF Abbreviation: PGF or p.g.f. Let $p_X$ be the probability mass function for $X$. The probability generating function for $X$, denoted $\map {\Pi_X} s$, is the formal power series defined by: $\displaystyle \map {\Pi_X} s := \sum_{n \mathop = 0}^\infty \map {p_X} n s^n \in \R \left[\left[{s}\right]\right]$ Sources 2014: Christopher Clapham and James Nicholson: The Concise Oxford Dictionary of Mathematics (5th ed.): Entry: p.g.f.
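As an illustration not found in the source entry, the defining series can be evaluated numerically for a concrete pmf (here a fair die, my own example); \(\map {\Pi_X} 1 = 1\) always, and the derivative at s = 1 recovers E(X):

```python
def pgf(pmf, s, terms=200):
    """Evaluate Pi_X(s) = sum_n p_X(n) * s**n by truncating the series."""
    return sum(pmf(n) * s ** n for n in range(terms))

die = lambda n: 1 / 6 if 1 <= n <= 6 else 0.0  # pmf of a fair six-sided die
total = pgf(die, 1.0)                # = 1, since the probabilities sum to one
h = 1e-6
mean = (pgf(die, 1.0) - pgf(die, 1.0 - h)) / h  # Pi'(1) ≈ E(X) = 3.5
```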
Let $X_n(\Bbb{Z})$ be the simplicial complex whose vertex set is $\Bbb{Z}$ and such that the vertices $v_0,...,v_k$ span a $k$-simplex if and only if $|v_i-v_j| \le n$ for every $i,j$. Prove that $X_n(\Bbb{Z})$ is $n$-dimensional... no kidding, my maths is foundations (basic logic, but not pedantic), calc 1, which I'm pretty used to working with, analytic geometry and basic linear algebra (by basic I mean matrices and systems of equations only) Anyway, I would assume that if removing $ fixes it, then you probably have an open math expression somewhere before it, meaning you didn't close it with $ earlier. What's the full expression you're trying to get? If it's just the frac, then your code should be fine This is my first time chatting here on Math Stack Exchange. So I am not sure if this is frowned upon, but just a quick question: I am trying to prove that a proper subgroup of $\mathbb{Z}^n$ is isomorphic to $\mathbb{Z}^k$, where $k \le n$. So we must have $\operatorname{rank}(A) = \operatorname{rank}(\mathbb{Z}^k)$, right? For four proper fractions $a, b, c, d$, X writes $a+ b + c >3(abc)^{1/3}$. Y also added that $a + b + c> 3(abcd)^{1/3}$. Z says that the above inequalities hold only if $a, b, c$ are positive. (a) Both X and Y are right but not Z. (b) Only Z is right (c) Only X is right (d) Neither of them is absolutely right. Yes, @TedShifrin the order of $GL(2,p)$ is $p(p+1)(p-1)^2$. But I found this in a classification of groups of order $p^2qr$. There the order of $H$ should be $qr$ and it is present as $G = C_{p}^2 \rtimes H$. I want to know whether we can determine the structure of $H$. Can we think $H=C_q \times C_r$ or something like that from the given data? When we say it embeds into $GL(2,p)$, does that mean we can say $H=C_q \times C_r$? or $H=C_q \rtimes C_r$? or should we consider all possibilities? When considering finite groups $G$ of order $|G|=p^2qr$, where $p,q,r$ are distinct primes, let $F$ be a Fitting subgroup of $G$.
Then $F$ and $G/F$ are both non-trivial and $G/F$ acts faithfully on $\bar{F}:=F/\phi(F)$ so that no non-trivial normal subgroup of $G/F$ stabilizes a series through $\bar{F}$. And when $|F|=pr$: in this case $\phi(F)=1$ and $\operatorname{Aut}(F)=C_{p-1} \times C_{r-1}$. Thus $G/F$ is abelian and $G/F \cong C_{p} \times C_{q}$. In this case how can I write $G$ using notations/symbols? Is it like $G \cong (C_{p} \times C_{r}) \rtimes (C_{p} \times C_{q})$? First question: Then it is $G= F \rtimes (C_p \times C_q)$. But how do we write $F$? Do we have to think of all the possibilities of $F$ of order $pr$ and write as $G= (C_p \times C_r) \rtimes (C_p \times C_q)$ or $G= (C_p \rtimes C_r) \rtimes (C_p \times C_q)$ etc.? As a second case we can consider the case where $C_q$ acts trivially on $C_p$. So then how to write $G$ using notations? There it is also mentioned that we can distinguish between 2 cases. First, suppose that the Sylow $q$-subgroup of $G/F$ acts non-trivially on the Sylow $p$-subgroup of $F$. Then $q \mid (p-1)$ and $G$ splits over $F$. Thus the group has the form $F \rtimes G/F$.
A presentation $\langle S\mid R\rangle$ is a Dehn presentation if for some $n\in\Bbb N$ there are words $u_1,\cdots,u_n$ and $v_1,\cdots, v_n$ such that $R=\{u_iv_i^{-1}\}$, $|u_i|>|v_i|$, and for every word $w$ in $(S\cup S^{-1})^\ast$ representing the trivial element of the group, one of the $u_i$ is a subword of $w$ If you have such a presentation there's a trivial algorithm to solve the word problem: take a word $w$, check if it has some $u_i$ as a subword, in that case replace it by $v_i$, and keep doing so until you hit the trivial word or find no $u_i$ as a subword There is good motivation for such a definition here So I don't know how to do it precisely for hyperbolic groups, but if $S$ is a surface of genus $g \geq 2$, to get a geodesic representative for a class $[\alpha] \in \pi_1(S)$ where $\alpha$ is an embedded loop, one lifts it to $\widetilde{\alpha}$ in $\Bbb H^2$ by the locally isometric universal covering, and then the deck transformation corresponding to $[\alpha]$ is an isometry of $\Bbb H^2$ which preserves the embedded arc $\widetilde{\alpha}$ It has to be an isometry fixing a geodesic $\gamma$ with endpoints at the boundary being the same as the endpoints of $\widetilde{\alpha}$. Consider the homotopy of $\widetilde{\alpha}$ to $\gamma$ by straight-line homotopy, with the straight lines being hyperbolic geodesics. This is $\pi_1(S)$-equivariant, so it projects to a homotopy of $\alpha$ and the image of $\gamma$ (which is a geodesic in $S$) downstairs, and you have your desired representative I don't know how to interpret this coarsely in $\pi_1(S)$ @anakhro Well, they print in bulk, and on really cheap paper, almost transparent and very thin, and an offset machine is really cheaper per page than a printer, you know, but you should be printing in bulk, it's all economy of scale. @ParasKhosla Yes, I am Indian, and trying to get into some good masters program in math.
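The word-problem procedure described in this exchange is only a few lines of code. A toy sketch of my own, using made-up rules that just perform free reduction on one generator (so not a genuine Dehn presentation, but the same greedy shortening loop):

```python
def dehn_reduce(word, rules):
    """Greedy Dehn-style rewriting: while some u_i occurs as a subword,
    replace it with the strictly shorter v_i. Each step shortens the
    word, so the loop terminates."""
    changed = True
    while changed:
        changed = False
        for u, v in rules:
            i = word.find(u)
            if i != -1:
                word = word[:i] + v + word[i + len(u):]
                changed = True
                break
    return word

# Toy rules: free reduction in the free group on one generator,
# writing A for the inverse of a.
rules = [("aA", ""), ("Aa", "")]
# "aAAa" represents the identity, so it reduces to the empty word.
```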
Algebraic graph theory is a branch of mathematics in which algebraic methods are applied to problems about graphs. This is in contrast to geometric, combinatoric, or algorithmic approaches. There are three main branches of algebraic graph theory, involving the use of linear algebra, the use of group theory, and the study of graph invariants. The first branch involves the study of graphs in connection with linear algebra; especially, it studies the spectrum of the adjacency matrix, or the Lap... I can probably guess that they are using symmetries and permutation groups on graphs in this course. For example, orbits and studying the automorphism groups of graphs. @anakhro I have heard really good things about Palka. Also, if you do not worry about a little sacrifice of rigor (e.g. counterclockwise orientation based on your intuition, rather than on winding numbers, etc.), Howie's Complex Analysis is good. It is teeming with typos here and there, but you will be fine, I think. Also, this book contains all the solutions in an appendix! Got a simple question: I gotta find the kernel of the linear transformation $F(P)=xP''(x) + (x+1)P'''(x)$ where $F: \mathbb{R}_3[x] \to \mathbb{R}_3[x]$, so I think it would be just $\ker (F) = \{ ax+b : a,b \in \mathbb{R} \}$, since only polynomials of degree at most 1 would give the zero polynomial in this case @chandx you're looking for all the $G = P''$ such that $xG + (x+1)G' = 0$; if $G \neq 0$ you can solve the DE to get $G'/G = -x/(x+1) = -1 + 1/(x+1) \implies \ln G = -x + \ln(1+x) + c \implies G = C(1+x)e^{-x}$, which is obviously not a polynomial, so $G = 0$ and thus $P = ax + b$ could you suppose that $\operatorname{deg} P \geq 2$ and show that you wouldn't have nonzero polynomials? Sure.
Please assume that this graph is a highly magnified section of the derivative of some function, say $F(x)$. Let's denote the derivative by $f(x)$. Let's denote the width of a sample by $h$ where $h\rightarrow0$. Now, for finding the area under the curve between the bounds $a$ and $b$ we can a... @Ultradark You can try doing a finite difference to get rid of the sum and then compare term by term. Otherwise I am terrible at anything to do with primes; I don't know the identities of $\pi(n)$ well @Silent No, take for example the prime 3. 2 is not a residue mod 3, so there is no $x\in\mathbb{Z}$ such that $x^2-2\equiv 0 \pmod 3$. However, you have two cases to consider. The first where $\left(\frac{2}{p}\right)=-1$ and $\left(\frac{3}{p}\right)=-1$ (in which case what does $\left(\frac{6}{p}\right)$ equal?) and the case where one or the other of $\left(\frac{2}{p}\right)$ and $\left(\frac{3}{p}\right)$ equals 1. Also, probably something useful for congruences, if you didn't already know: if $a_1\equiv b_1 \pmod p$ and $a_2\equiv b_2 \pmod p$, then $a_1a_2\equiv b_1b_2 \pmod p$ Is there any book or article that explains the motivations of the definitions of group, ring, field, ideal, etc. of abstract algebra and/or gives a geometric or visual representation of Galois theory? Jacques Charles François Sturm ForMemRS (29 September 1803 – 15 December 1855) was a French mathematician. Life and work: Sturm was born in Geneva (then part of France) in 1803. The family of his father, Jean-Henri Sturm, had emigrated from Strasbourg around 1760, about 50 years before Charles-François's birth. His mother's name was Jeanne-Louise-Henriette Gremay. In 1818, he started to follow the lectures of the academy of Geneva. In 1819, the death of his father forced Sturm to give lessons to children of the rich in order to support his own family. In 1823, he became tutor to the son... I spent my career working with tensors. You have to be careful about defining multilinearity, domain, range, etc.
Typically, tensors of type $(k,\ell)$ involve a fixed vector space, not so many letters varying. UGA definitely grants a number of masters to people wanting only that (and sometimes admitted only for that). You people at fancy places think that every university is like Chicago, MIT, and Princeton. hi there, I need to linearize a nonlinear system about a fixed point. I've computed the Jacobian matrix, but one of the elements of this matrix is undefined at the fixed point. What is a better approach to solve this issue? The element is (24*x_2 + 5*cos(x_1)*x_2)/abs(x_2). The fixed point is x_1 = 0, x_2 = 0 Consider the following integral: $\int \frac{1}{4}\cdot\frac{1}{1+(u/2)^2}\,dx$ Why does it matter if we put the constant 1/4 in front of the integral versus keeping it inside? The solution is $\frac{1}{2}\arctan{(u/2)}$. Or am I overlooking something? *it should be du instead of dx in the integral **and the solution is missing a constant C of course Is there a standard way to divide radicals by polynomials? Stuff like $\frac{\sqrt a}{1 + b^2}$? My expression happens to be in a form I can normalize to that, just the radicand happens to be a lot more complicated. In my case, I'm trying to figure out how to best simplify $\frac{x}{\sqrt{1 + x^2}}$, and so far, I've gotten to $\frac{x \sqrt{1+x^2}}{1+x^2}$, and it's pretty obvious you can move the $x$ inside the radical. My hope is that I can somehow remove the polynomial from the bottom entirely, so I can then multiply the whole thing by a square root of another algebraic fraction. Complicated, I know, but this is me trying to see if I can skip calculating Euclidean distance twice going from atan2 to something in terms of asin for a thing I'm working on. "... and it's pretty obvious you can move the $x$ inside the radical" To clarify this in advance, I didn't mean literally move it verbatim, but via $x \sqrt{y} = \operatorname{sgn}(x) \sqrt{x^2 y}$. (Hopefully, this was obvious, but I don't want to confuse people on what I meant.) Ignore my question.
I'm coming to the realization that it's just not working how I would've hoped, so I'll just go with what I had before.
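For what it's worth, the $\frac{x}{\sqrt{1+x^2}}$ simplification discussed above can be sanity-checked numerically: it equals $\sin(\arctan x)$, which is exactly the atan-to-asin kind of shortcut being sought. A quick sketch of my own:

```python
from math import atan, sin, sqrt, copysign

def f(x):
    return x / sqrt(1 + x * x)

for x in (-3.0, -0.5, 0.0, 0.25, 2.0, 10.0):
    # x/sqrt(1+x^2) equals sin(atan(x)) ...
    assert abs(f(x) - sin(atan(x))) < 1e-12
    # ... and also sgn(x)*sqrt(x^2/(1+x^2)), the "move x inside the radical" form.
    assert abs(f(x) - copysign(sqrt(x * x / (1 + x * x)), x)) < 1e-12
```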
We consider two different methods to estimate the parameters of a factor model: Principal Component Method Maximum Likelihood Estimation A third method, principal factor method, is also available but not considered in this class. Principal Component Method Section Let \(X_i\) be a vector of observations for the \(i^{th}\) subject: \(\mathbf{X_i} = \left(\begin{array}{c}X_{i1}\\ X_{i2}\\ \vdots \\ X_{ip}\end{array}\right)\) \(\mathbf{S}\) denotes our sample variance-covariance matrix and is expressed as: \(\textbf{S} = \dfrac{1}{n-1}\sum\limits_{i=1}^{n}\mathbf{(X_i - \bar{x})(X_i - \bar{x})'}\) We have p eigenvalues for this variance-covariance matrix as well as corresponding eigenvectors for this matrix. Eigenvalues of \(\mathbf{S}\): \(\hat{\lambda}_1, \hat{\lambda}_2, \dots, \hat{\lambda}_p\) Eigenvectors of \(\mathbf{S}\): \(\hat{\mathbf{e}}_1, \hat{\mathbf{e}}_2, \dots, \hat{\mathbf{e}}_p\) Recall that the variance-covariance matrix can be re-expressed in the following form as a function of the eigenvalues and the eigenvectors: Spectral Decomposition of \(Σ\) Section \[\Sigma = \sum_{i=1}^{p}\lambda_i \mathbf{e_ie'_i} \cong \sum_{i=1}^{m}\lambda_i \mathbf{e_ie'_i} = \left(\begin{array}{cccc}\sqrt{\lambda_1}\mathbf{e_1} & \sqrt{\lambda_2}\mathbf{e_2} & \dots & \sqrt{\lambda_m}\mathbf{e_m}\end{array}\right) \left(\begin{array}{c}\sqrt{\lambda_1}\mathbf{e'_1}\\ \sqrt{\lambda_2}\mathbf{e'_2}\\ \vdots\\ \sqrt{\lambda_m}\mathbf{e'_m}\end{array}\right) = \mathbf{LL'}\] The idea behind the principal component method is to approximate this expression. Instead of summing from 1 to p, we now sum from 1 to m, ignoring the last p - m terms in the sum and obtain the third expression. We can rewrite this as shown in the fourth expression, which is used to define the matrix of factor loadings \(\mathbf{L}\), yielding the final expression in matrix notation. Note!If standardized measurements are used, we replace \(\mathbf{S}\) by the sample correlation matrix \(\mathbf{R}\). 
This yields the following estimator for the factor loadings: \(\hat{l}_{ij} = \hat{e}_{ji}\sqrt{\hat{\lambda}_j}\) where \(\hat{e}_{ji}\) is the \(i^{th}\) element of the \(j^{th}\) eigenvector. These estimates form the matrix \(\mathbf{L}\) of factor loadings in the factor analysis. To estimate the specific variances, recall that our factor model for the variance-covariance matrix is \(\boldsymbol{\Sigma} = \mathbf{LL'} + \boldsymbol{\Psi}\) in matrix notation. \(\boldsymbol{\Psi}\) is then equal to the variance-covariance matrix minus \(\mathbf{LL'}\): \( \boldsymbol{\Psi} = \boldsymbol{\Sigma} - \mathbf{LL'}\) This in turn suggests that the specific variances, the diagonal elements of \(\boldsymbol{\Psi}\), are estimated with this expression: \(\hat{\Psi}_i = s^2_i - \sum\limits_{j=1}^{m}\hat{\lambda}_j \hat{e}^2_{ji}\) That is, we take the sample variance for the \(i^{th}\) variable and subtract the sum of the squared factor loadings (i.e., the communality).
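A minimal numerical sketch of the principal component estimators, using a toy 2×2 covariance matrix of my own (not from the notes) so the eigendecomposition has a closed form:

```python
from math import sqrt

def eig2(a, b, c):
    """Largest eigenvalue and its unit eigenvector for the symmetric
    2x2 matrix [[a, b], [b, c]] (assumes b != 0)."""
    d = sqrt((a - c) ** 2 + 4 * b * b)
    lam1 = (a + c + d) / 2
    v = (b, lam1 - a)                      # eigenvector for lam1
    norm = sqrt(v[0] ** 2 + v[1] ** 2)
    return lam1, (v[0] / norm, v[1] / norm)

a, b, c = 2.0, 1.0, 2.0                    # toy S = [[2, 1], [1, 2]]
lam1, e1 = eig2(a, b, c)                   # lam1 = 3, e1 = (1, 1)/sqrt(2)
# One-factor (m = 1) loadings l_i = e_i * sqrt(lam1), and specific
# variances psi_i = s_ii - l_i**2, as in the estimators above.
loadings = [sqrt(lam1) * e for e in e1]    # each ≈ 1.2247
psi = [a - loadings[0] ** 2, c - loadings[1] ** 2]  # each ≈ 0.5
```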
I was asked to calculate the limit: $$\lim_{(x,y)\to (0,0)} \frac{x^3 \sin y}{x^6+2y^3} $$ I believe it has no limit, so I tried plugging in some paths that go to $(0,0)$ and showing the quotient doesn't go to $0$ along them (I found some paths along which it does go to zero). I tried $y=x^2$ and found that along it the quotient goes to infinity; is that good enough? I've attached an image explaining it better. Thanks
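A numerical check of the path argument (my own sketch): along $y=x^2$ the quotient behaves like $1/(3x)$ near the origin, while along $y=x$ it tends to $0$, so no single limit can exist:

```python
from math import sin

def f(x, y):
    return x ** 3 * sin(y) / (x ** 6 + 2 * y ** 3)

# Along y = x**2 the values grow without bound as x -> 0+,
# while along y = x they tend to 0, so the two-variable limit cannot exist.
along_parabola = [f(x, x ** 2) for x in (0.1, 0.01, 0.001)]
along_line = [f(x, x) for x in (0.1, 0.01, 0.001)]
```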
Example 12-1: Continued... Section The communalities for the \(i^{th}\) variable are computed by taking the sum of the squared loadings for that variable. This is expressed below: \(\hat{h}^2_i = \sum\limits_{j=1}^{m}\hat{l}^2_{ij}\) To understand the computation of communalities, recall the table of factor loadings: Factor Variable 1 2 3 Climate 0.286 0.076 0.841 Housing 0.698 0.153 0.084 Health 0.744 -0.410 -0.020 Crime 0.471 0.522 0.135 Transportation 0.681 -0.156 -0.148 Education 0.498 -0.498 -0.253 Arts 0.861 -0.115 0.011 Recreation 0.642 0.322 0.044 Economics 0.298 0.595 -0.533 Let's compute the communality for Climate, the first variable. We square the factor loadings for Climate (given in bold-face in the table above), then add the results: \(\hat{h}^2_1 = 0.28682^2 + 0.07560^2 + 0.84085^2 = 0.7950\) Using SAS The communalities of the 9 variables can be obtained from page 4 of the SAS output as shown below: Final Communality Estimates: Total = 5.616885 Climate housing health crime trans educate arts recreate econ 0.79500707 0.51783185 0.72230182 0.51244913 0.50977159 0.56073895 0.75382091 0.51725940 0.72770402 The value 5.616885 (located just above the individual communalities) is the "Total Communality". Using Minitab View the video below to see how to get the communalities using the Minitab statistical software application. In summary, the communalities are placed into a table: Variable Communality Climate 0.795 Housing 0.518 Health 0.722 Crime 0.512 Transportation 0.510 Education 0.561 Arts 0.754 Recreation 0.517 Economics 0.728 Total 5.617 You can think of these values as multiple \(R^{2}\) values for regression models predicting the variables of interest from the 3 factors. The communality for a given variable can be interpreted as the proportion of variation in that variable explained by the three factors.
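The communality computation is a one-liner to verify; a sketch of my own using the loadings quoted in the worked example and the table above:

```python
# Factor loadings from the table (Climate uses the full-precision values
# quoted in the worked example; Housing uses the 3-decimal table values).
loadings = {
    "Climate": (0.28682, 0.07560, 0.84085),
    "Housing": (0.698, 0.153, 0.084),
}

def communality(l):
    """h_i^2 = sum of squared loadings across the m factors."""
    return sum(x * x for x in l)

h2_climate = communality(loadings["Climate"])  # ≈ 0.795, as computed above
```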
In other words, if we perform multiple regression of climate against the three common factors, we obtain an \(R^{2} = 0.795\), indicating that about 79% of the variation in climate is explained by the factor model. The results suggest that the factor analysis does the best job of explaining variation in climate, the arts, economics, and health. One assessment of how well this model performs can be obtained from the communalities. We want to see values that are close to one. This indicates that the model explains most of the variation for those variables. In this case, the model does better for some variables than it does for others. The model explains Climate the best, and is not bad for other variables such as Economics, Health and the Arts. However, for other variables such as Crime, Recreation, Transportation and Housing the model does not do a good job, explaining only about half of the variation. The sum of all communality values is the total communality value: \(\sum\limits_{i=1}^{p}\hat{h}^2_i = \sum\limits_{i=1}^{m}\hat{\lambda}_i\) Here, the total communality is 5.617. The proportion of the total variation explained by the three factors is \(\dfrac{5.617}{9} = 0.624\) This is the percentage of variation explained in our model. This could be considered an overall assessment of the performance of the model. However, this percentage is the same as the proportion of variation explained by the first three eigenvalues, obtained earlier. The individual communalities tell how well the model is working for the individual variables, and the total communality gives an overall assessment of performance. These are two different assessments. Because the data are standardized, the variance for the standardized data is equal to one. 
The specific variances are computed by subtracting the communality from the variance as expressed below: \(\hat{\Psi}_i = 1-\hat{h}^2_i\) Recall that the data were standardized before analysis, so the variances of the standardized variables are all equal to one. For example, the specific variance for Climate is computed as follows: \(\hat{\Psi}_1 = 1-0.795 = 0.205\) The specific variances are found in the SAS output as the diagonal elements in the table on page 5 as seen below: Climate Housing Health crime Trans Educate Arts Recreate Econ Climate 0.20499 -0.00924 -0.01476 -0.06027 -0.03720 0.18537 -0.07518 -0.12475 0.21735 Housing -0.00924 0.48217 -0.02317 -0.28063 -0.12119 -0.04803 -0.07518 -0.04032 0.04249 Health -0.01476 -0.02317 0.27770 0.05007 -0.15480 -0.11537 -0.00929 -0.09108 0.06527 Crime -0.06027 -0.28063 0.05007 0.48755 0.05497 0.11562 0.00009 -0.18377 -0.10288 Trans -0.03720 -0.12119 -0.15480 0.05497 0.49023 -0.14318 -0.05439 0.01041 -0.12641 Educate 0.18537 -0.04803 -0.11537 0.11562 -0.14318 0.43926 -0.13515 -0.05531 0.14197 Arts -0.07518 -0.07552 -0.00929 0.00009 -0.05439 -0.13515 0.24618 -0.01926 -0.04687 Recreate -0.12475 -0.04032 -0.09108 -0.18377 0.01041 -0.05531 -0.01926 0.48274 -0.18326 Econ 0.21735 0.04249 0.06527 -0.10288 -0.12641 0.14197 -0.04687 -0.18326 0.27230 For example, the specific variance for housing is 0.482. This model provides an approximation to the correlation matrix. We can assess the model's appropriateness with the residuals obtained from the following calculation: \(s_{ij}- \sum\limits_{k=1}^{m}l_{ik}l_{jk}; i \ne j = 1, 2, \dots, p\) This is basically the difference between R and LL', or the correlation between variables i and j minus the expected value under the model. Generally, these residuals should be as close to zero as possible. For example, the residual between Housing and Climate is -0.00924 which is pretty close to zero. However, there are some that are not very good. 
The residual between Climate and Economy is 0.217. These values give an indication of how well the factor model fits the data. One disadvantage of the principal component method is that it does not provide a test for lack-of-fit. We can examine these numbers and determine if we think they are small or close to zero, but we really do not have a test for this. Such a test is available for the maximum likelihood method.
In the paper "A Duality Web in 2+1 Dimensions and Condensed Matter Physics" https://arxiv.org/abs/1606.01989 on page 34, the action for the electromagnetic field in Lorentzian signature is given by $$S=\int d^{4}x\sqrt{-g}\left(\frac{-1}{4g^{2}}F_{\mu\nu}F^{\mu\nu}+\frac{\theta}{32\pi^{2}}\epsilon^{\mu\nu\alpha\beta}F_{\mu\nu}F_{\alpha\beta}\right)$$ I am confused by the metric factor in the second term, because I used to think that the $\theta$-term should be topological. The action should be $$S=-\frac{1}{2}\int F\wedge\star F+\frac{\theta}{8\pi^{2}}\int F\wedge F,$$ where there should be no metric dependence in the second term. Am I misunderstanding anything, or is that a mistake in the paper? I also posted my question at Physics Stack Exchange: https://physics.stackexchange.com/questions/391685/the-action-for-electric-magnetic-duality
An infinite series is a sum of infinitely many terms. The most important basic example is the geometric series $\sum_{n=0}^{\infty}r^{n}$, which converges exactly when $|r| < 1$. In that case its sum is given by, \[\large \sum_{n=0}^{\infty}r^{n}=\frac{1}{1-r}\] Solved Examples Question 1: Evaluate $\sum_{n=0}^{\infty }(\frac{1}{2})^{n}$. Solution: The series can be written as $\sum_{n=0}^{\infty }(\frac{1}{2})^{n}$ = $(\frac{1}{2})^{0}$+$(\frac{1}{2})^{1}$+$(\frac{1}{2})^{2}$+$(\frac{1}{2})^{3}$+… Here the common ratio is r = $\frac{1}{2}$, and since $|r| < 1$ the infinite series formula applies: $\sum_{n=0}^{\infty }r^{n}$ = $\frac{1}{1-r}$ = $\frac{1}{1-\frac{1}{2}}$ = 2
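A quick numerical illustration (my own, not part of the source page): the partial sums of the geometric series approach $\frac{1}{1-r}$ when $|r| < 1$:

```python
def geometric_partial_sum(r, n):
    """Partial sum 1 + r + r**2 + ... + r**(n-1) of the geometric series."""
    return sum(r ** k for k in range(n))

# For |r| < 1 the partial sums approach 1/(1 - r); with r = 1/2 the limit is 2.
approx = geometric_partial_sum(0.5, 60)
```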
Through Riemann sums we approximate, and in the limit obtain exactly, the total area under a curve on a graph, commonly known as the integral. The Riemann sum gives a precise definition of the integral as the limit of sums over finer and finer partitions. Approximating the area under a function's graph is the most common application of the Riemann sum formula. The idea of calculating the sum is to divide the region into known shapes, such as rectangles, that together form a region similar to the region being measured, and then add up the areas of all of those pieces. The four methods in Riemann summation for finding the area are: 1. Right and left methods: these use the right and left endpoints of the subintervals, respectively. 2. Maximum and minimum methods: these use the largest and smallest values of the function on each subinterval. Notation: $[a,b]$ = closed interval divided into n subintervals; $f(x)$ = continuous function on the interval; $x_{i}$ = point belonging to the interval $[a,b]$; $f(x_{i})$ = value of the function at $x = x_{i}$. \[\large S_{n}=\sum_{i=1}^{n} f(x_{i})(x_{i}-x_{i-1})\] Solved Examples Question 1: Estimate the area under the curve $f(x) = x^2 + 1$, $-3 \leq x \leq 1$, by using a right-endpoint Riemann sum with 4 intervals. Solution: Given that a = -3, b = 1 and $f(x) = x^2 + 1$: $\Delta x$ = $\frac{b - a}{n}$ = $\frac{1 + 3}{4}$ = 1 Area $\approx \sum_{i} f(x_{i})\Delta x$ = $\Delta x$ (f(-2) + f(-1) + f(0) + f(1)) = 1(5 + 2 + 1 + 2) = 10
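The worked example can be verified, and refined, with a short routine (my own sketch) implementing the right-endpoint Riemann sum:

```python
def right_riemann_sum(f, a, b, n):
    """Right-endpoint Riemann sum of f on [a, b] with n equal subintervals."""
    dx = (b - a) / n
    return sum(f(a + i * dx) for i in range(1, n + 1)) * dx

f = lambda x: x ** 2 + 1
coarse = right_riemann_sum(f, -3.0, 1.0, 4)       # = 10, the worked example
fine = right_riemann_sum(f, -3.0, 1.0, 100000)    # -> 40/3, the exact integral
```

Increasing n shows the sums converging to $\int_{-3}^{1}(x^2+1)\,dx = \frac{40}{3} \approx 13.33$, so the 4-interval estimate of 10 is quite rough.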
The Hénon equation with a critical exponent under the Neumann boundary condition Department of Mathematical Sciences, KAIST, 291 Daehak-ro, Yuseong-gu, Daejeon 34141, Republic of Korea $ -\Delta u + u = |x|^{\alpha}u^{p}, \; u > 0 \;\text{in}\; \Omega, \quad \frac{\partial u}{\partial \nu} = 0 \;\text{ on }\; \partial\Omega,$ where $\Omega \subset B(0,1) \subset \mathbb{R}^n$, $n \geq 3$, $\alpha \geq 0$ and $\partial^*\Omega \equiv \partial\Omega \cap \partial B(0,1) \ne \emptyset$. Keywords: Least energy solutions, Hénon equation, limiting profile, Neumann boundary condition, weighted equations. Mathematics Subject Classification: Primary: 35B33, 35B40, 35J15, 35J25, 35J61. Citation: Jaeyoung Byeon, Sangdon Jin. The Hénon equation with a critical exponent under the Neumann boundary condition. Discrete & Continuous Dynamical Systems - A, 2018, 38 (9) : 4353-4390. doi: 10.3934/dcds.2018190
I'm trying to solve the one-dimensional heat equation using Fourier series: $$\frac{\partial T}{\partial t}=\alpha\frac{\partial^2T}{\partial x^2}$$ I know that: $$T = k_n\cos{(nx-\varphi_n)}e^{-\alpha n^2t}$$ is a solution that satisfies the PDE, but doesn't necessarily satisfy the initial or boundary conditions. However, we also know that we may write any function as a sum of sines and cosines, which is what a Fourier series is. For a function $f(x)$, we may write: $$f(x)=\frac{a_0}{2} + \sum_{n=1}^{\infty}{a_n\cos(nx)+b_n\sin(nx)}$$ Now, let me define $k_n$ by: $$k_n^2=a_n^2+b_n^2$$ Thus, we can write: $$a_n=k_n\cos{\varphi_n}$$ and $$b_n=k_n\sin{\varphi_n}$$ Substituting this above, we get: $$f(x)=\frac{a_0}{2} + \sum_{n=1}^{\infty}{k_n\cos{(nx-\varphi_n)}}$$ I made this change so that instead of restricting ourselves to sines or cosines, we can have a single cosine wave shifted by a phase angle. Initial condition: If we take the above $f(x)$ as the initial condition of our heat distribution, i.e. $$T(x,t=0) = f(x)$$ we immediately get the PDE solution: $$T(x,t)=\frac{a_0}{2} + \sum_{n=1}^{\infty}{k_n\cos{(nx-\varphi_n)}e^{-\alpha n^2t}}$$ Boundary condition: This is where I am stuck. We can have a whole variety of boundary conditions at the two end points of the rod. We can either maintain the temperature of the rod at the ends constant, or the heat flow through the rod at the ends constant, or the rate of heat flow, etc. All of these boundary conditions can be described in terms of the function values $T(0,t)$ or $T(L,t)$, or their higher-order derivatives. At first I thought that we must tweak $k_n$ or $\varphi_n$ to satisfy the boundary conditions, but that doesn't seem to be the case. After careful inspection, I realize that it's the definition of $f(x)$ that we must consider for the boundary condition: $f(x)$ is a periodic extension of the heat distribution of the rod at initial time.
I am aware of the half-range extensions, where we may extend using just sines or just cosines, but I'm sure the options aren't limited to those. I believe we should be able to extend the initial condition in any way we like, such that the extension satisfies both boundary conditions. I'm not sure how to proceed. Any help would be appreciated.
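The amplitude-phase rewrite used in the question is easy to sanity-check numerically. A small NumPy sketch (the coefficients $a_n$, $b_n$ below are arbitrary made-up values, not from the question):

```python
import numpy as np

# arbitrary made-up Fourier coefficients for a single mode n
a, b, n = 0.8, -1.3, 3
x = np.linspace(0.0, 2.0 * np.pi, 200)

k = np.hypot(a, b)        # k_n^2 = a_n^2 + b_n^2
phi = np.arctan2(b, a)    # so that a_n = k_n cos(phi_n) and b_n = k_n sin(phi_n)

# a_n cos(nx) + b_n sin(nx) = k_n cos(nx - phi_n), for every x
assert np.allclose(a * np.cos(n * x) + b * np.sin(n * x),
                   k * np.cos(n * x - phi))
```

The identity follows from the cosine difference formula, since $k_n\cos\varphi_n = a_n$ and $k_n\sin\varphi_n = b_n$.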
In Classical Mechanics, to describe symmetries like translations and rotations we use diffeomorphisms on the configuration manifold. In Quantum Mechanics we use unitary operators on state space. We impose unitarity because we don't want the action of the symmetry to interfere with the normalization of the states. Examples of this are translations and rotations, which are defined directly in terms of the classical ones. In the case of rotations, if $\mathcal{E}$ is the state space of a spinless particle in three dimensions and if $R\in SO(3)$ is a rotation in three dimensions, we have the induced rotation $\hat{R}\in \mathcal{L}(\mathcal{E})$ given by $$\langle \mathbf{r} | \hat{R}|\psi\rangle = \langle R^{-1}\mathbf{r}|\psi\rangle.$$ Now, in Classical Mechanics we often want to talk about the infinitesimal version of a symmetry, which is known as its generator. In Quantum Mechanics the same idea is quite important. The generators of rotations, for example, are the angular momentum operators. The whole point of generators is that they can be interpreted as the infinitesimal version of a symmetry. In analogy with Lie groups, if $A_\alpha$ is a one-parameter family of symmetries and $\xi$ is its generator, we should be able to write $$A_\alpha = \exp(i\alpha \xi),$$ where $\alpha$ is a parameter characterizing the extent to which we apply the symmetry. The generators $\xi$ are then Hermitian operators. These are facts that I know only in an informal, non-rigorous manner. What I want is to make the idea of generators of a symmetry in QM precise. One problem we already have is that the exponential might not converge, since some of the operators involved are unbounded. In any case: how do we precisely define the generator of a one-parameter family of operators, how do we show it exists, and how do we write the operators as exponentials of their generator in a rigorous manner?
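A minimal finite-dimensional sketch of the exponential relation, where none of the convergence issues arise: a spin-1/2 rotation generated by the Hermitian matrix $\sigma_x/2$. The sign convention $U = e^{-i\alpha\xi}$ and the names below are my choices, not from the question; the genuinely hard unbounded case is not captured here.

```python
import numpy as np

# Pauli matrix sigma_x; the Hermitian generator is xi = sigma_x / 2
sx = np.array([[0, 1], [1, 0]], dtype=complex)
alpha = 0.7

# U(alpha) = exp(-1j * alpha * sigma_x / 2), computed via the spin-1/2
# closed form, valid because (sigma_x / 2)^2 = I / 4
U = np.cos(alpha / 2) * np.eye(2) - 1j * np.sin(alpha / 2) * sx

# Hermitian generator  =>  unitary symmetry (state norms are preserved)
assert np.allclose(U.conj().T @ U, np.eye(2))

# one-parameter group law: U(alpha) U(alpha) = U(2 * alpha)
assert np.allclose(U @ U, np.cos(alpha) * np.eye(2) - 1j * np.sin(alpha) * sx)
```

The rigorous infinite-dimensional version of exactly this correspondence (strongly continuous one-parameter unitary group ↔ self-adjoint generator) is what the question is asking about.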
We know that, by locality and causality, two fields must commute at spacelike separation: $\left[\phi_l^k(x) , \phi_m^{k'}(y)\right] = 0$ for $(x-y)^2<0$. In the canonical quantization of the Dirac field, $b_\alpha(k)$ is the annihilation operator and $b^\dagger_\alpha(k)$ the creation operator for a particle of 4-momentum $k$, with $\left[b_\alpha(k), b^\dagger_\beta(q)\right] = (2\pi)^3\frac{\omega_\mathbf k}{m} \delta^{(3)}(\mathbf{k}-\mathbf{q})\delta_{\alpha\beta}$, and $\psi^{(+)}(x) = e ^{-ikx}u(k)$ is a positive-energy solution while $\psi^{(-)}$ is a negative-energy one. If we use commutators all the way through, the result $\left[\psi_\xi(x) , \overline\psi_\eta(y)\right] = (i\not\partial_x+m)\int{\frac{d^3k}{(2\pi)^3}\frac{1}{2\omega_\mathbf k}\left[e^{-ik(x-y)} + e^{+ik(x-y)}\right]|_{k=(\omega_k,\mathbf k )}}$ does not vanish for spacelike separations, which amounts to a violation of causality. How is this problem overcome, and why isn't it a problem?
I'm attempting to solve the 1D Schrodinger Equation, approaching a potential barrier defined as follows: $$V(x) = \begin{cases}-V_0&\quad\text{for}\quad x<0 \\0&\quad\text{for}\quad x>0\end{cases}$$ where the inbound particle, approaching from the left, has energy $-V_0 < E < 0$. I've started the solution as I normally would for a barrier defined as: $$V(x) = \begin{cases}0&\quad\text{for}\quad x<0 \\V_0&\quad\text{for}\quad x>0\end{cases}$$ But I've got a feeling that I may be wrong: does the vertical shift of the barrier (and of the energy of the incoming particle) actually change anything? Do I instead need to solve: $$\left(\frac{-\hbar^2}{2m} \frac{d^2}{dx^2} -V_0\right)\psi=E\psi$$ for the region $x<0$?
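One quick numerical way to see that a common vertical shift of $V$ and $E$ changes nothing physical: the local wavenumber in a constant-potential region depends only on the difference $E - V$. A sketch assuming units with $\hbar = m = 1$ (my assumption, not from the question):

```python
import math

def k_local(E, V):
    """Local wavenumber sqrt(2m(E - V))/hbar, with hbar = m = 1 (assumed units)."""
    return math.sqrt(2.0 * (E - V))

V0, E = 1.0, -0.4   # satisfies -V0 < E < 0, as in the question

# original form: V = -V0 on the left; shifted form: V = 0 on the left, energy E + V0
assert math.isclose(k_local(E, -V0), k_local(E + V0, 0.0))
```

Since every wavenumber (and decay constant) is built from $E - V(x)$, the two formulations give identical wavefunctions on each side; only the zero of energy moves.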
The integrability of negative powers of the solution of the Saint Venant problem. 2014 (English). In: Annali della Scuola Normale Superiore di Pisa (Classe Scienze), Serie V, ISSN 0391-173X, E-ISSN 2036-2145, Vol. XIII, no. 2, p. 465-531. Article in journal (Refereed), Published. Abstract [en]: We initiate the study of the finiteness condition $\int_\Omega u(x)^{-\beta}\,dx \le C(\Omega,\beta) < +\infty$, where $\Omega\subseteq\mathbb{R}^n$ is an open set and $u$ is the solution of the Saint Venant problem $\Delta u = -1$ in $\Omega$, $u = 0$ on $\partial\Omega$. The central issue which we address is that of determining the range of values of the parameter $\beta>0$ for which the aforementioned condition holds under various hypotheses on the smoothness of $\Omega$ and demands on the nature of the constant $C(\Omega,\beta)$. Classes of domains for which our analysis applies include bounded piecewise $C^1$ domains in $\mathbb{R}^n$, $n\ge 2$, with conical singularities (in particular polygonal domains in the plane), polyhedra in $\mathbb{R}^3$, and bounded domains which are locally of class $C^2$ and which have (finitely many) outwardly pointing cusps. For example, we show that if $u_N$ is the solution of the Saint Venant problem in the regular polygon $\Omega_N$ with $N$ sides circumscribed by the unit disc in the plane, then for each $\beta\in(0,1)$ the following asymptotic formula holds: $$\int_{\Omega_N}u_N(x)^{-\beta}\,dx=\frac{4^\beta\pi}{1-\beta}+\mathcal{O}(N^{\beta-1})\quad\text{as}\ N\to\infty.$$ One of the original motivations for addressing the aforementioned issues was the study of sublevel set estimates for functions $v$ satisfying $v(0)=0$, $\nabla v(0)=0$ and $\Delta v\ge c>0$. Place, publisher, year, edition, pages: Scuola Normale Superiore, 2014. Vol. XIII, no. 2, p. 465-531.
First, note that the growth rate of $\mu$ is defined as $\dot\mu = \frac{ d\mu }{ \mu }$. Therefore, you will have to take the total differential of the equation and divide by $\mu$. For the first step, as in your previous question, simply use the rules of total differentials, specifically the product rule that states that for variables $X$ and $Y$, the total differential of $XY$ is given by $$d(XY) = Y \, dX + X \, dY$$ Assuming that $s_\pi$, $s_W$ and $\zeta$ are (constant) parameters (so that by the constant rule, $ds_\pi = ds_W = d\zeta = 0$), we can apply this to your equation: $$\begin{align}d\mu & = d \left[ s_\pi - v (s_\pi - s_W) + \zeta \right] \\& = ds_\pi - d \left[ v (s_\pi - s_W) \right] + d\zeta \\& = -\left[ (s_\pi - s_W) \, dv + v \, d(s_\pi - s_W) \right] \\& = -(s_\pi - s_W) \, dv\end{align}$$ Now divide both sides by $\mu$, noting that since $\dot v = \frac{ dv }v$, we also have $dv = v \dot v$: $$\begin{align}\dot\mu = \frac{ d\mu }\mu & = -\frac{ dv }\mu (s_\pi - s_W) \\& = -\frac v\mu (s_\pi - s_W) \dot v\end{align}$$ which is the desired result. As before, see chapter 8 of Chiang and Wainwright, Fundamental Methods of Mathematical Economics, 4th ed., McGraw-Hill 2005, for more on total differentials. Alternatively, see this Q&A post of mine explaining the rules of total differentials and showing another example of how to apply them to an equation.
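The derivation above can be checked symbolically. A sketch with sympy, treating $s_\pi$, $s_W$ and $\zeta$ as constants and introducing a symbol for the differential $dv$:

```python
import sympy as sp

s_pi, s_W, zeta, v, dv = sp.symbols('s_pi s_W zeta v dv')
mu = s_pi - v * (s_pi - s_W) + zeta

# total differential: with the parameters constant, d(mu) = -(s_pi - s_W) dv
d_mu = sp.diff(mu, v) * dv
assert sp.simplify(d_mu - (-(s_pi - s_W) * dv)) == 0

# growth-rate form: mu_dot = d(mu)/mu = -(v/mu)(s_pi - s_W) * (dv/v)
mu_dot = d_mu / mu
v_dot = dv / v
assert sp.simplify(mu_dot - (-(v / mu) * (s_pi - s_W) * v_dot)) == 0
```

Both assertions confirm the two displayed steps term by term.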
The rigorous way to solve this would be to solve the differential equation, get an expression for $P(t)$ and then take the limit $\lim_{t \to \infty}P(t)$. But if you start out by assuming the limit exists, then at the limit, $\frac{dP}{dt}=0$ (i.e. loosely speaking, "P stops changing"). So solve $3P(4-P) = 0$ giving $P=4$ as the limiting value (ignore the other root $P=0$ because $P(0) = 2$ and $P$ is always increasing). Note that what I did is a "quick and dirty" solution. If you're asked this in homework, a test or an examination, you should start by solving the differential equation. It's a separable first order differential equation. You start by separating the variables: $$\frac{dP}{3P(4-P)} = dt$$ Then integrate both sides. I find it easier to set the initial conditions as the lower bound of a definite integral. $$\int_2^{P(t)}\frac{dP}{3P(4-P)} = \int_0^t \,dt$$ The integrand on the left hand side can be resolved quickly with partial fractions, and the rest is algebra.
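A quick numerical check of the limiting value, integrating the stated initial value problem with scipy's `solve_ivp` (the time horizon and tolerances are my choices):

```python
from scipy.integrate import solve_ivp

# dP/dt = 3P(4 - P), P(0) = 2, integrated far enough to settle at equilibrium
sol = solve_ivp(lambda t, P: 3 * P * (4 - P), (0.0, 5.0), [2.0],
                rtol=1e-10, atol=1e-12)
P_end = sol.y[0, -1]

assert abs(P_end - 4.0) < 1e-6   # matches the limiting value found above
```

The exact solution is $P(t) = 4/(1+e^{-12t})$, which indeed gives $P(0)=2$ and tends to 4.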
To find the maximum likelihood a generic approach is to write down the likelihood function as a starting point. In this example, $Y_i\sim\mathcal{N}(\mu,\sigma_i^2)$ means that$$L(\mu,\sigma_1,\ldots,\sigma_n) = \prod_{i=1}^n \dfrac{\exp\{-(y_i-\mu)^2/2\sigma_i^2\}}{\sqrt{2\pi}\sigma_i}$$or$$\ell(\mu,\sigma_1,\ldots,\sigma_n) =\log L(\mu,\sigma_1,\ldots,\sigma_n) = \sum_{i=1}^n -(y_i-\mu)^2/2\sigma_i^2 - \log \sigma_i + \text{cst}$$ Now that you have your (log-)likelihood function, you have to find its maximum in $(\mu,\sigma_1,\ldots,\sigma_n)$. A standard approach at this stage is to look at the derivatives against every component of the parameter:\begin{align*}\frac{\partial \ell}{\partial \mu}(\mu,\sigma_1,\ldots,\sigma_n)&= \sum_{i=1}^n (y_i-\mu)/\sigma_i^2 \\\frac{\partial \ell}{\partial \sigma_i}(\mu,\sigma_1,\ldots,\sigma_n)&= (y_i-\mu)^2/\sigma_i^3 - 1/\sigma_i \qquad i=1,\ldots,n\end{align*}If you look at the solutions $(\hat\mu,\hat\sigma_1,\ldots,\hat\sigma_n)$ of$$\frac{\partial \ell}{\partial \mu}(\mu,\sigma_1,\ldots,\sigma_n) =0,\qquad\frac{\partial \ell}{\partial \sigma_i}(\mu,\sigma_1,\ldots,\sigma_n) =0,\qquad i=1,\ldots,n,$$you find\begin{align*}\sum_{i=1}^n \hat\sigma_i^{-2} y_i - \hat\mu \sum_{i=1}^n \hat\sigma_i^{-2} &= 0\\(y_i-\hat\mu)^2/\hat\sigma_i^3 - 1/\hat\sigma_i &= 0 \qquad i=1,\ldots,n\end{align*}which leads to $$\hat\sigma_i^2 = (y_i-\hat\mu)^2 \qquad i=1,\ldots,n$$hence to$$\sum_{i=1}^n (y_i-\hat\mu)/\hat\sigma_i^2 = \sum_{i=1}^n (y_i-\hat\mu)^{-1}=0$$This equation does not have a closed form expression in $\hat\mu$. This is the generic way to derive the likelihood equations and their solution(s). However, in some cases like the current one, the maximum of the likelihood function might occur at the boundary of the parameter space. Take indeed $\mu=y_1$.
Then the log-likelihood function is equal to$$-\log(\sigma_1)-\sum_{i=2}^n (y_i-\mu)^2/2\sigma_i^2 -\sum_{i=2}^n \log \sigma_i + \text{cst}$$and if we let $\sigma_1$ go to zero, the log-likelihood goes to infinity. The same property holds for $\mu=y_2,\ldots,y_n$. The maximum of the likelihood is thus at the boundary of the parameter space$$\mathbb{R}\times\mathbb{R}^*_+\times\cdots\times\mathbb{R}^*_+$$(excluding $0$ as possible value for the $\sigma_i$) and therefore there is no maximum likelihood estimator.
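The unboundedness is easy to see numerically. A sketch with made-up data points, fixing $\mu = y_1$ and shrinking $\sigma_1$ while holding the other $\sigma_i$ at 1:

```python
import math

y = [1.3, 0.2, -0.7]   # made-up observations, for illustration only

def loglik(mu, sigmas):
    # log-likelihood up to the additive constant, as in the answer above
    return sum(-(yi - mu) ** 2 / (2.0 * s ** 2) - math.log(s)
               for yi, s in zip(y, sigmas))

# with mu = y_1, the first term contributes only -log(sigma_1), which blows up
vals = [loglik(y[0], [s1, 1.0, 1.0]) for s1 in (1.0, 1e-2, 1e-4, 1e-8)]
assert vals == sorted(vals)   # the log-likelihood keeps climbing: no finite maximum
```

This mirrors the argument in the answer: at $\mu=y_1$ the quadratic term for $i=1$ vanishes, leaving $-\log\sigma_1 \to +\infty$ as $\sigma_1 \to 0$.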
I have a recollection of seeing an identity connecting polylogarithm and polygamma functions of arguments $\frac14$ and $\frac34$. But I don't remember the details, and searching my books and the Internet for an hour did not give any results. Could you please remind me of this identity (if it exists)? I have not found a source, but I think I reconstructed it correctly: $$\psi^{(n)}\!\left(\tfrac34\right)-\psi^{(n)}\!\left(\tfrac14\right)=(-1)^n\,4^{n+1}\,n!\,\,\Im\operatorname{Li}_{n+1}(i),\ n\in\mathbb N.$$ In the source where I saw it, the imaginary part was probably written as a difference of two polylogarithm terms. It can be proved using: http://mathworld.wolfram.com/PolygammaFunction.html, (5); http://dlmf.nist.gov/25.11, 25.11.31; Gradshteyn—Ryzhik, 3.523; http://mathworld.wolfram.com/LegendresChi-Function.html, (2), (4)
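The reconstructed identity can be checked numerically from the series representations $\psi^{(n)}(x)=(-1)^{n+1}n!\sum_{k\ge0}(x+k)^{-(n+1)}$ and $\Im\operatorname{Li}_s(i)=\beta(s)=\sum_{m\ge0}(-1)^m(2m+1)^{-s}$ (the Dirichlet beta function). A plain-Python sketch, with arbitrary truncation lengths:

```python
import math

def polygamma(n, x, terms=100000):
    """psi^(n)(x) for n >= 1 via its series; truncation length is arbitrary."""
    return ((-1) ** (n + 1) * math.factorial(n)
            * sum(1.0 / (x + k) ** (n + 1) for k in range(terms)))

def beta_dirichlet(s, terms=100000):
    """Dirichlet beta(s) = Im Li_s(i)."""
    return sum((-1) ** m / (2 * m + 1) ** s for m in range(terms))

for n in (1, 2, 3):
    lhs = polygamma(n, 0.75) - polygamma(n, 0.25)
    rhs = (-1) ** n * 4 ** (n + 1) * math.factorial(n) * beta_dirichlet(n + 1)
    assert abs(lhs - rhs) < 1e-6
```

For $n=1$ both sides reduce to $-16G$, where $G$ is Catalan's constant, consistent with $\psi'(1/4)=\pi^2+8G$ and $\psi'(3/4)=\pi^2-8G$.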
I'd appreciate some help in squaring this thermodynamic circle (which may exist only in my mind, but I digress ...) Let's consider pure water in a two-phase (liquid/vapour) state. Within the vapour dome, the density of water can be determined from mass and volumetric balances according to $$\rho = \Bigg[\frac{x}{\rho_V(P)}+\frac{1-x}{\rho_L(P)}\Bigg]^{-1}$$where $x$ is the vapour mass (not volume) fraction (which we knuckle-dragging engineers call the "steam quality"), $P$ is the pressure, and where $\rho_L(P)$ and $\rho_V(P)$ are the saturated liquid and saturated vapour densities to the left and right of the critical point (apex of the vapour dome) respectively. Both $\rho_L$ and $\rho_V$ are functions of pressure $P$, alone. The classical definition of the compressibility (really the isenthalpic compressibility within the vapour dome since the vapour mass fraction is a proxy for enthalpy here) is $$ \beta=-\frac{1}{V}\frac{\partial V}{\partial P}=\frac{1}{\rho}\frac{\partial \rho}{\partial P} $$ Now along the saturated liquid curve $x=0$ (to the left of the vapour dome apex), the density drops with pressure (because the dominant effect is that of the increase in the boiling point with an increase in pressure), so that for a saturated liquid, the compressibility is $$ \beta_{x=0}=\frac{1}{\rho_L}\frac{\partial \rho_L}{\partial P} $$ which would be negative. This defies not only the physics, but is also inconsistent with the (positive) compressibility that would be evaluated from a simple finite difference between a point on the saturated liquid line and a point at the same pressure in the subcooled region just above and to the left of the vapour dome boundary. I suspect that I'm missing some key concept here. Any advice? Thanks
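A tiny sketch of the mixture-density formula quoted above, just to pin down its limiting behaviour (the saturated densities here are illustrative made-up numbers, not steam-table data):

```python
def mixture_density(x, rho_L, rho_V):
    """Two-phase density from the mass/volume balance in the question."""
    return 1.0 / (x / rho_V + (1.0 - x) / rho_L)

rho_L, rho_V = 900.0, 5.0   # kg/m^3 -- illustrative values only

assert abs(mixture_density(0.0, rho_L, rho_V) - rho_L) < 1e-9   # all liquid
assert abs(mixture_density(1.0, rho_L, rho_V) - rho_V) < 1e-9   # all vapour
```

Any intermediate quality gives a density between the two saturated values, dominated by the vapour term because $1/\rho_V \gg 1/\rho_L$.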
11.3 - Chi-Square Test of Independence The chi-square (\(\chi^2\)) test of independence is used to test for a relationship between two categorical variables. Recall that if two categorical variables are independent, then \(P(A) = P(A \mid B)\). The chi-square test of independence uses this fact to compute expected values for the cells in a two-way contingency table under the assumption that the two variables are independent (i.e., the null hypothesis is true). Even if two variables are independent in the population, samples will vary due to random sampling variation. The chi-square test is used to determine if there is evidence that the two variables are not independent in the population using the same hypothesis testing logic that we used with one mean, one proportion, etc. Again, we will be using the five step hypothesis testing procedure: The assumptions are that the sample is randomly drawn from the population and that all expected values are at least 5 (we will see what expected values are later). Our hypotheses are: \(H_0:\) There is not a relationship between the two variables in the population (they are independent) \(H_a:\) There is a relationship between the two variables in the population (they are dependent) Note: When you're writing the hypotheses for a given scenario, use the names of the variables, not the generic "two variables." Chi-Square Test Statistic \(\chi^2=\sum \frac{(Observed-Expected)^2}{Expected}\) Expected Cell Value \(E=\frac{row\;total \; \times \; column\;total}{n}\) The p-value can be found using Minitab Express. Look up the area to the right of your chi-square test statistic on a chi-square distribution with the correct degrees of freedom. Chi-square tests are always right-tailed tests. Degrees of Freedom: Chi-Square Test of Independence \(df=(number\;of\;rows-1)(number\;of\;columns-1)\) If \(p \leq \alpha\) reject the null hypothesis. If \(p>\alpha\) fail to reject the null hypothesis. 
Write a conclusion in terms of the original research question.
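The notes compute the p-value with Minitab Express; the same steps can be sketched in Python using the formulas above (the 2×2 table below is made-up illustrative data, not from the notes):

```python
import numpy as np
from scipy.stats import chi2

observed = np.array([[20, 30],
                     [30, 20]])   # made-up 2x2 contingency table

n = observed.sum()
row = observed.sum(axis=1, keepdims=True)   # row totals
col = observed.sum(axis=0, keepdims=True)   # column totals

expected = row @ col / n                    # E = (row total x column total) / n
stat = ((observed - expected) ** 2 / expected).sum()
df = (observed.shape[0] - 1) * (observed.shape[1] - 1)
p = chi2.sf(stat, df)                       # right-tailed, as stated above
```

For this table every expected count is 25 (all at least 5, so the assumption is met), the statistic is 4 with 1 degree of freedom, and the p-value is about 0.046.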
Mathematics - Part 2 This tutorial builds on the basic foundations presented in the previous tutorial. If you often include a lot of maths in your documents, then you will probably find that you wish to have slightly more control over presentation issues. Some of the topics covered make writing equations more complex --- but who said typesetting mathematics was easy?! Adding text to equations I doubt it will be every day that you will need to include some text within an equation. However, sometimes it needs to be done. Just sticking the text straight into the maths environment won't give you the results you want. For example: \begin{equation} 50 apples \times 100 apples = lots of apples \end{equation} There are two noticeable problems. Firstly, there are no spaces between the numbers and the text, nor spaces between multiple words. Secondly, the words don't look quite right --- the letters are more spaced out than normal. Both issues are simply artifacts of maths mode, in that it doesn't expect to see words. Any spaces that you type in maths mode are ignored, and LaTeX spaces elements according to its own rules. It is assumed that any characters represent variable names. To emphasise that each symbol is an individual, they are not positioned as closely together as in normal text. There are a number of ways that text can be added properly. The typical way is to wrap the text with the \mbox{...} command. This command hasn't been introduced before; its job is basically to create a text box just wide enough to contain the supplied text. Text within this box cannot be broken across lines. Let's see what happens when the above equation code is adapted: The text looks better. However, there are no gaps between the numbers and the words.
Unfortunately, you are required to add these explicitly. There are many ways to add spaces between maths elements; however, for the sake of simplicity, I find it easier, in this instance at least, just to literally add the space character in the affected \mbox(s) itself (just before the text.) \begin{equation} 50 \mbox{ apples} \times 100 \mbox{ apples} = \mbox{lots of apples} \end{equation} Formatted text Using the \mbox is fine and gets the basic result. Yet, there is an alternative that offers a little more flexibility. You may recall from the Formatting Tutorial the introduction of font formatting commands, such as \textrm, \textit, \textbf, etc. These commands format the argument accordingly, e.g., \textbf{bold text} gives bold text. These commands are equally valid within a maths environment for including text. The added benefit here is that you can have better control over the font formatting, rather than the standard text achieved with \mbox. \begin{equation} 50 \textrm{ apples} \times 100 \textbf{ apples} = \textit{lots of apples} \end{equation} However, as is the case with LaTeX, there is more than one way to skin a cat! There is a set of formatting commands very similar to the font formatting ones just used, except that they are aimed specifically at text in maths mode. So why bother showing you \textrm and co if there are equivalents for maths? Well, that's because they are subtly different. The maths formatting commands are: \mathrm{...} (Roman), \mathit{...} (Italic), \mathbf{...} (Bold), \mathsf{...} (Sans serif), \mathtt{...} (Typewriter), \mathcal{...} (Calligraphy). The maths formatting commands can be wrapped around the entire equation, and not just the textual elements: they only format letters, numbers, and uppercase Greek, and the rest of the maths syntax is ignored. So, generally, it is better to use the specific maths commands if required. Note that the calligraphy example gives rather strange output.
This is because, for letters, it requires upper case characters; lower case letters are mapped to other special symbols. Changing text size of equations Probably a rare event, but there may be a time when you would prefer to have some control of the size. For example, using text-mode maths, by default a simple fraction will look like this: where as you may prefer to have it displayed larger, like when in display mode, but still keeping it inline, like this: . A simple approach is to utilise the predefined sizes for maths elements: \displaystyle (size for equations in display mode), \textstyle (size for equations in text mode), \scriptstyle (size for first sub/superscripts), \scriptscriptstyle (size for subsequent sub/superscripts). A classic example to see these in use is typesetting continued fractions. The following code provides an example. \begin{equation} x = a_0 + \frac{1}{a_1 + \frac{1}{a_2 + \frac{1}{a_3 + a_4}}} \end{equation} As you can see, as the fractions continue, they get smaller (although they will not get any smaller than in this example, as they have reached the \scriptscriptstyle limit). If you wanted to keep the size consistent, you could declare each fraction to use the display style instead, e.g., \begin{equation} x = a_0 + \frac{1}{\displaystyle a_1 + \frac{1}{\displaystyle a_2 + \frac{1}{\displaystyle a_3 + a_4}}} \end{equation} Another approach is to use the \DeclareMathSizes command to select your preferred sizes. You can only define sizes for \displaystyle, \textstyle, etc. One potential downside is that this command sets the global maths sizes, as it can only be used in the document preamble. However, it's fairly easy to use: \DeclareMathSizes{ds}{ts}{ss}{sss}, where ds is the display size, ts is the text size, etc. The values you input are assumed to be point (pt) sizes. In the example document (mathsize2.pdf), the maths sizes have been made much larger than necessary to illustrate that the changes have taken place.
NB the changes only take place if the value in the first argument matches the current document text size. It is therefore common to see a set of declarations in the preamble, in the event of the main font being changed. E.g., \DeclareMathSizes{10}{18}{12}{8} % For size 10 text \DeclareMathSizes{11}{19}{13}{9} % For size 11 text \DeclareMathSizes{12}{20}{14}{10} % For size 12 text Multi-lined equations ( eqnarray environment) Imagine that you have an equation that you want to manipulate, for example, to simplify it. Often this is done over a number of steps to help the reader understand how to get from the original equation to the final result. This should be a relatively simple task, but as we shall see, the skills for displaying mathematics from the previous tutorial are not adequate. Using what we know so far: This clearly looks rather ugly. One way to get the various elements neatly aligned is to use a table and place the equations inline. Let's have a go: \begin{tabular}{ r l } \(10xy^2+15x^2y-5xy\) & \(= 5\left(2xy^2+3x^2y-xy\right)\) \\ & \(= 5x\left(2y^2+3xy-y\right)\) \\ & \(= 5xy\left(2y+3x-1\right)\) \end{tabular} That doesn't look too bad. Perhaps the extra space between the equation on the left and the equals sign, being larger than the gap on the right, doesn't look perfect. That could be rectified by adding an extra column and putting the equals sign in the central one: \begin{tabular}{ r c l } \(10xy^2+15x^2y-5xy\) & \(=\) & \(5\left(2xy^2+3x^2y-xy\right)\) \\ & \(=\) & \(5x\left(2y^2+3xy-y\right)\) \\ & \(=\) & \(5xy\left(2y+3x-1\right)\) \end{tabular} Looking better. Another issue is that the vertical space between the rows makes the equations on the right-hand side look a little crowded. It would be nice to add a little space --- make the rows in the table a little taller. (You need to add \usepackage{array} to your preamble for this to work.) % add some extra height. Needs the array package. See preamble.
\setlength{\extrarowheight}{0.3cm} However, by this stage, we've had to do quite a bit of extra leg work, and there is still one important disadvantage at the end of it: you can't add equation numbers. Once you are within the tabular environment, you can only use the inline type of maths display. For equation numbers, you need to use the display-mode equation environment, but you can't in this instance. This is where eqnarray becomes extremely useful. eqnarray, as the name suggests, borrows from the array package, which is basically a simplified tabular environment. The array package was introduced in the Mathematics 1 tutorial for producing matrices. Here is how to use eqnarray for this example: \begin{eqnarray} 10xy^2+15x^2y-5xy & = & 5\left(2xy^2+3x^2y-xy\right) \\ & = & 5x\left(2y^2+3xy-y\right) \\ & = & 5xy\left(2y+3x-1\right) \end{eqnarray} As you can see, everything is laid out nicely and as expected. Well, except that each row within the array has been assigned its own equation number. Whilst this feature is useful in some instances, it is not required here --- just the one number will do! To suppress equation numbers for a given row, add a \nonumber command just before the end-of-row command ( \\). \begin{eqnarray} 10xy^2+15x^2y-5xy & = & 5\left(2xy^2+3x^2y-xy\right) \nonumber \\ & = & 5x\left(2y^2+3xy-y\right) \nonumber \\ & = & 5xy\left(2y+3x-1\right) \end{eqnarray} If you don't care for equation numbers at all, then rather than adding \nonumber to every row, use the starred version of the environment, i.e., \begin{eqnarray*} ... \end{eqnarray*}. NB There is a limit of 3 columns in the eqnarray environment. If you need any more flexibility, you are best advised to seek out the AMS maths packages. Breaking up long equations Let us first see an example of a long equation. (Note, when you view the example document for this topic, you will see that this equation is in fact wider than the text width. It's not as obvious on this webpage!)
LaTeX doesn't break long equations to make them fit within the margins as it does with normal text. It is therefore up to you to format the equation appropriately (if it overruns the margin.) This typically requires some creative use of an eqnarray to get elements shifted to a new line and aligned nicely. E.g., \begin{eqnarray*} \left(1+x\right)^n & = & 1 + nx + \frac{n\left(n-1\right)}{2!}x^2 \\ & & + \frac{n\left(n-1\right)\left(n-2\right)}{3!}x^3 \\ & & + \frac{n\left(n-1\right)\left(n-2\right)\left(n-3\right)}{4!}x^4 \\ & & + \ldots \end{eqnarray*} It may just be that I'm more sensitive to these kinds of things, but you may notice that from the 2nd line onwards, the space between the initial plus sign and the subsequent fraction is (slightly) smaller than normal. (Observe the first line, for example.) This is due to the fact that LaTeX deals with the + and - signs in two possible ways. The most common is as a binary operator. When two maths elements appear either side of the sign, it is assumed to be a binary operator, and as such, LaTeX allocates some space either side of the sign. The alternative way is as a sign designation, stating whether a mathematical quantity is positive or negative. This is common for the latter, as in maths such elements are assumed to be positive unless a - is prefixed to them. In this instance, you want the sign to appear close to the appropriate element to show their association. It is this interpretation that LaTeX has opted for in the above example.
To add the correct amount of space, you can add an invisible character using {}, as illustrated here: \begin{eqnarray*} \left(1+x\right)^n & = & 1 + nx + \frac{n\left(n-1\right)}{2!}x^2 \\ & & {} + \frac{n\left(n-1\right)\left(n-2\right)}{3!}x^3 \\ & & {} + \frac{n\left(n-1\right)\left(n-2\right)\left(n-3\right)}{4!}x^4 \\ & & {} + \ldots \end{eqnarray*} Alternatively, you could avoid this issue altogether by leaving the + at the end of the previous line rather than at the beginning of the current line: There is another convention for writing long equations that LaTeX supports. This is the way I see long equations typeset in books and articles, and admittedly it is my preferred way of displaying them. Sticking with the eqnarray approach, using the \lefteqn{...} command around the content before the = sign gives the following result: \begin{eqnarray*} \lefteqn{\left(1+x\right)^n = } \\ & & 1 + nx + \frac{n\left(n-1\right)}{2!}x^2 + \\ & & \frac{n\left(n-1\right)\left(n-2\right)}{3!}x^3 + \\ & & \frac{n\left(n-1\right)\left(n-2\right)\left(n-3\right)}{4!}x^4 + \\ & & \ldots \end{eqnarray*} Notice that the first line of the eqnarray contains the \lefteqn only, and within this command there are no column separators (&). The reason this command displays things as it does is that \lefteqn prints its argument but tells LaTeX that its width is zero. This results in the first column being empty, with the exception of the inter-column space, which is what gives the subsequent lines their indentation. Controlling horizontal spacing LaTeX is obviously pretty good at typesetting maths --- it was one of the chief aims of the core TeX system that LaTeX extends. However, it can't always be relied upon to accurately interpret formulae in the way you intended. It has to make certain assumptions when there are ambiguous expressions. The result tends to be slightly incorrect horizontal spacing.
In these events, the output is still satisfactory, yet any perfectionists will no doubt wish to fine-tune their formulae to ensure the spacing is correct. These are generally very subtle adjustments. There are other occasions where LaTeX has done its job correctly, but you just want to add some space, maybe to add a comment of some kind. For example, in the following equation, it is preferable to ensure there is a decent amount of space between the maths and the text. \[f(n) = \left\{ \begin{array}{l l} n/2 & \quad \mbox{if $n$ is even}\\ -(n+1)/2 & \quad \mbox{if $n$ is odd}\\ \end{array} \right. \] LaTeX has defined two commands that can be used anywhere in documents (not just maths) to insert some horizontal space: \quad and \qquad. A \quad is a space equal to the current font size. So, if you are using an 11pt font, then the space provided by \quad will also be 11pt (horizontally, of course.) The \qquad gives twice that amount. As you can see from the code of the above example, \quads were used to add some separation between the maths and the text. OK, so back to the fine-tuning mentioned at the beginning of this section. A good example would be displaying the simple equation for the indefinite integral of y with respect to x: If you were to try this, you may write: \[ \int y \mathrm{d}x \] However, this doesn't give the correct result. LaTeX doesn't respect the whitespace left in the code to signify that the y and the dx are independent entities; instead, it lumps them all together. A \quad would clearly be overkill in this situation --- what is needed are some small spaces for use in this type of instance, and that's what LaTeX provides: \, small space (3/18 of a quad), \: medium space (4/18 of a quad), \; large space (5/18 of a quad), \! negative space (-3/18 of a quad). NB you can use more than one command in a sequence to achieve a greater space if necessary.
So, to rectify the current problem:

\int y\, \mathrm{d}x
\int y\: \mathrm{d}x
\int y\; \mathrm{d}x

The negative space may seem like an odd thing to use; however, it wouldn't be there if it didn't have some use! Take the following example: the matrix-like expression for representing binomial coefficients is too padded. There is too much space between the brackets and the actual contents within. This can easily be corrected by adding a few negative spaces after the left bracket and before the right bracket.

\[\left(\!\!\!
\begin{array}{c}
n \\
r
\end{array}
\!\!\!\right) = {^n}C_r = \frac{n!}{r!(n-r)!} \]

Summary

As you can begin to see, typesetting maths can be tricky at times. However, because LaTeX provides so much control, you can get professional-quality mathematics typesetting for relatively little effort (once you've had a bit of practice, of course!). It would be possible to keep going and going with maths topics because the subject seems potentially limitless. However, with these two tutorials, you should be able to get along sufficiently.

Useful resources: User's Guide for the amsmath Package [pdf]

Last updated: December 2, 2011
Write a program which can encode text to avoid reusing characters, and convert back. Both normal and encoded forms are restricted to a particular character set: the space character with code point 32, the tilde character ~ with code point 126, and all characters between. This is 95 total characters, a printable subset of ASCII. The encoded text can be up to 95 characters long. The normal text can be up to 75 characters long, because above 75 it would be impossible to uniquely encode all messages.

Encoding

There are some requirements for the encoding:

- Uses only characters from the character set described above
- Uses each character at most once
- Uniquely encodes all valid messages

Other than that, you're free to pick any variant.

Input

Input will consist of 2 items:

- A boolean: false to encode, true to decode
- A string: the message to be encoded/decoded

How you receive the input is up to you. The boolean can be given as a boolean or as an integer in the range [-2, 2] in your language's format. If the esoteric language doesn't have booleans or integers, use the closest analogue. I'm not doing the "or any 2 distinct values" rule because then you could use an encoding program and a decoding program as the two values and evaluate them.

Output

Output should consist of only the encoded/decoded string. Leading and trailing newlines are okay.

Scoring

Your score is the length of your program; shorter is better. I still encourage you to try harder languages even if they won't win.

Testing

Testing will generally be done with Try it Online!. For convenience, please provide a link to your program with some test input already filled in. These are the steps to test a program:

1. Pick a valid normal string.
2. Encode it using the program.
3. Check that the encoded form is valid.
4. Decode it using the program.
5. Check that the final result matches the original string.

This can be repeated with other test strings for confidence. If you show a working case and no failing cases have been found, we will assume your program works.
Reference Implementation

I am providing a reference implementation that shows one possible method, in hopes of making this problem more approachable. I used it to generate the examples. I recommend you try the problem yourself before looking at how I did it; that way you may independently come up with a better mapping.

Examples

Since you can choose a different encoding, as well as different input/output methods, your program might not match these examples.

Empty
Input: 0
Output: (none)
Input: 1
Output: (none)

Small
Input: 0 Hello, world!
Output: #mvcD_YnEeX5?
Input: 1 #mvcD_YnEeX5?
Output: Hello, world!

Max Size
Input: 0 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Output: C-,y=6Js?3TOISp+975zk}.cwV|{iDjKmd[:/]$Q1~G8vAneoh #&<LYMPNZB_x;2l*r^(4E'tbU@q>a!\WHuRFg0"f%X`)
Input: 1 C-,y=6Js?3TOISp+975zk}.cwV|{iDjKmd[:/]$Q1~G8vAneoh #&<LYMPNZB_x;2l*r^(4E'tbU@q>a!\WHuRFg0"f%X`)
Output: ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Extras

How did I get the 75-character limit? I could have brute-force checked it of course, but there's a slightly more mathematical method for counting the number of possible encoded messages for a given character set:

$$ F(n)=\sum_{k=0}^{n}{\frac{n!}{(n-k)!}}=e\Gamma(n+1,1)\approx en! $$

Here are the relevant bounds. There are plenty more valid encoded messages than normal messages, but still enough normal messages that you'll need a variable-length encoding.

$$ F(95)\approx 2.81\times 10^{148} \\ {95}^{75}\approx 2.13\times 10^{148} \\ 95!\approx 1.03\times 10^{148} $$

The problem was inspired by a phenomenon on the Discord chat platform. In Discord, you can "react" to chat messages, which adds a little emoji underneath. You can react many times on a single chat message. Each emoji can only be added once. They remember the order they were added and display oldest to newest, left to right. You can add some non-emojis too.
People sometimes write short messages in reactions, and when their message has repeat characters they're forced to get creative; usually they opt for lookalike characters.
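The counting bound from the Extras section is easy to check numerically. The following Python sketch (not a golf submission, just a sanity check of the counting argument) computes $F(n)$ exactly with integer arithmetic:

```python
from math import factorial

def count_encoded_messages(n):
    """F(n): strings over an n-character alphabet that use each
    character at most once, i.e. sum over lengths k of n!/(n-k)!."""
    return sum(factorial(n) // factorial(n - k) for k in range(n + 1))

# All messages of length <= 75 over 95 characters fit into the
# encoded space, but messages of length <= 76 would not.
f95 = count_encoded_messages(95)
print(f95 > sum(95**k for k in range(76)))  # True
print(f95 < sum(95**k for k in range(77)))  # True
```

Python's arbitrary-precision integers make the comparison exact, with no floating-point approximation of the factorials.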
Here we will only answer OP's first two questions (v1). Yes, Newton's shell theorem generalizes to General Relativity as follows. Birkhoff's theorem states that a spherically symmetric vacuum solution is static, and that a (not necessarily thin) vacuum shell (i.e. a region with no mass/matter) corresponds to a radial branch of the Schwarzschild solution $$\tag{1} ds^2~=~-\left(1-\frac{R}{r}\right)c^2dt^2 + \left(1-\frac{R}{r}\right)^{-1}dr^2 +r^2 d\Omega^2$$ in some radial interval $r \in I:=[r_1, r_2]$. Here the constant $R$ is the Schwarzschild radius, and $d\Omega^2$ denotes the metric of the angular $2$-sphere. Since there is no mass $M$ at the center of OP's internal hollow region $r \in I:=[0, r_2]$, the Schwarzschild radius $R=\frac{2GM}{c^2}=0$ vanishes. Hence the metric (1) in the hollow region is just flat Minkowski space in spherical coordinates.
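Setting $R=0$ explicitly in (1) makes the flatness manifest; in the same coordinates as above, the line element in the hollow region reduces to

```latex
$$ds^2~=~-c^2dt^2 + dr^2 + r^2 d\Omega^2,$$
```

which is the Minkowski metric in spherical coordinates. A test particle anywhere inside the cavity therefore experiences no gravitational acceleration from the shell, in exact analogy with the Newtonian shell theorem.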
Tutorial: K Nearest Neighbors in Python

In this post, we'll be using the k-nearest neighbors algorithm to predict how many points NBA players scored in the 2013-2014 season. Along the way, we'll learn about Euclidean distance and figure out which NBA players are the most similar to Lebron James. If you want to follow along, you can grab the dataset in csv format here.

A look at the data

Before we dive into the algorithm, let's take a look at our data. Each row in the data contains information on how a player performed in the 2013-2014 NBA season. Here are some selected columns from the data:

player — name of the player
pos — the position of the player
g — number of games the player was in
gs — number of games the player started
pts — total points the player scored

There are many more columns in the data, mostly containing information about average player game performance over the course of the season. See this site for an explanation of the rest of them. We can read our dataset in and figure out which columns are present:

import pandas

with open("nba_2013.csv", 'r') as csvfile:
    nba = pandas.read_csv(csvfile)

print(nba.columns.values)  # The names of all the columns in the data.

['player' 'pos' 'age' 'bref_team_id' 'g' 'gs' 'mp' 'fg' 'fga' 'fg.' 'x3p' 'x3pa' 'x3p.' 'x2p' 'x2pa' 'x2p.' 'efg.' 'ft' 'fta' 'ft.' 'orb' 'drb' 'trb' 'ast' 'stl' 'blk' 'tov' 'pf' 'pts' 'season' 'season_end']

KNN overview

The k-nearest neighbors algorithm is based around the simple idea of predicting unknown values by matching them with the most similar known values. Let's say that we have 3 different types of cars.
We know the name of the car, its horsepower, whether or not it has racing stripes, and whether or not it's fast:

car,horsepower,racing_stripes,is_fast
Honda Accord,180,False,False
Yugo,500,True,True
Delorean DMC-12,200,True,True

Let's say that we now have another car, but we don't know how fast it is:

car,horsepower,racing_stripes,is_fast
Chevrolet Camaro,400,True,Unknown

We want to figure out if the car is fast or not. In order to predict if it is with k nearest neighbors, we first find the most similar known car. In this case, we would compare the horsepower and racing_stripes values to find the most similar car, which is the Yugo. Since the Yugo is fast, we would predict that the Camaro is also fast. This is an example of 1-nearest neighbors — we only looked at the most similar car, giving us a k of 1.

If we performed a 2-nearest neighbors, we would end up with 2 True values (for the Delorean and the Yugo), which would average out to True. The Delorean and Yugo are the two most similar cars, giving us a k of 2. If we did 3-nearest neighbors, we would end up with 2 True values and a False value, which would average out to True.

The number of neighbors we use for k-nearest neighbors (k) can be any value less than the number of rows in our dataset. In practice, looking at only a few neighbors makes the algorithm perform better, because the less similar the neighbors are to our data, the worse the prediction will be.

Euclidean distance

Before we can predict using KNN, we need to find some way to figure out which data rows are "closest" to the row we're trying to predict on. A simple way to do this is to use Euclidean distance. The formula is

\(\sqrt{(q_1-p_1)^2 + (q_2-p_2)^2 + \cdots + (q_n-p_n)^2}\)

Let's say we have these two rows (True/False has been converted to 1/0), and we want to find the distance between them:

car,horsepower,is_fast
Honda Accord,180,0
Chevrolet Camaro,400,1

We would first only select the numeric columns.
Then the distance becomes \(\sqrt{(180-400)^2 + (0-1)^2}\), which is about equal to 220.

We can use the principle of Euclidean distance to find the most similar NBA players to Lebron James.

import math

# Select Lebron James from our dataset
selected_player = nba[nba["player"] == "LeBron James"].iloc[0]

# Choose only the numeric columns (we'll use these to compute euclidean distance)
distance_columns = ['age', 'g', 'gs', 'mp', 'fg', 'fga', 'fg.', 'x3p', 'x3pa', 'x3p.', 'x2p', 'x2pa', 'x2p.', 'efg.', 'ft', 'fta', 'ft.', 'orb', 'drb', 'trb', 'ast', 'stl', 'blk', 'tov', 'pf', 'pts']

def euclidean_distance(row):
    """ A simple euclidean distance function """
    inner_value = 0
    for k in distance_columns:
        inner_value += (row[k] - selected_player[k]) ** 2
    return math.sqrt(inner_value)

# Find the distance from each player in the dataset to lebron.
lebron_distance = nba.apply(euclidean_distance, axis=1)

Normalizing columns

You may have noticed that horsepower in the cars example had a much larger impact on the final distance than racing_stripes did. This is because horsepower values are much larger in absolute terms, and thus dwarf the impact of racing_stripes values in the euclidean distance calculations. This can be bad, because a variable having larger values doesn't necessarily make it better at predicting what rows are similar.

A simple way to deal with this is to normalize all the columns to have a mean of 0 and a standard deviation of 1. This will ensure that no single column has a dominant impact on the euclidean distance calculations. To set the mean to 0, we have to find the mean of a column, then subtract the mean from every value in the column. To set the standard deviation to 1, we divide every value in the column by the standard deviation. The formula is \(x=\frac{x-\mu}{\sigma}\).
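As a quick standard-library illustration of this normalization before the pandas version (the numbers are the horsepower values from the toy car example, and the helper name is ours, not part of the tutorial):

```python
import statistics

def z_scores(values):
    """Rescale a column to mean 0 and (sample) standard deviation 1."""
    mu = statistics.mean(values)
    sigma = statistics.stdev(values)  # pandas' .std() is also the sample stdev
    return [(x - mu) / sigma for x in values]

# Horsepower column from the car example: 180, 500, 200.
normalized = z_scores([180, 500, 200])
print(normalized)  # values now centred on 0 with unit spread
```

After this rescaling, a column measured in hundreds no longer dominates a 0/1 column in the distance calculation.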
# Select only the numeric columns from the NBA dataset
nba_numeric = nba[distance_columns]

# Normalize all of the numeric columns
nba_normalized = (nba_numeric - nba_numeric.mean()) / nba_numeric.std()

Finding the nearest neighbor

We now know enough to find the nearest neighbor of a given row in the NBA dataset. We can use the distance.euclidean function from scipy.spatial, a much faster way to calculate Euclidean distance.

from scipy.spatial import distance

# Fill in NA values in nba_normalized
nba_normalized.fillna(0, inplace=True)

# Find the normalized vector for lebron james.
lebron_normalized = nba_normalized[nba["player"] == "LeBron James"]

# Find the distance between lebron james and everyone else.
euclidean_distances = nba_normalized.apply(lambda row: distance.euclidean(row, lebron_normalized), axis=1)

# Create a new dataframe with distances.
distance_frame = pandas.DataFrame(data={"dist": euclidean_distances, "idx": euclidean_distances.index})
distance_frame.sort_values("dist", inplace=True)

# Find the most similar player to lebron (the lowest distance to lebron is lebron, the second smallest is the most similar non-lebron player)
second_smallest = distance_frame.iloc[1]["idx"]
most_similar_to_lebron = nba.loc[int(second_smallest)]["player"]

Generating training and testing sets

Now that we know how to find the nearest neighbors, we can make predictions on a test set. We'll try to predict how many points a player scored using the 5 closest neighbors. We'll find neighbors by using all the numeric columns in the dataset to generate similarity scores.

First, we have to generate test and train sets. In order to do this, we'll use random sampling. We'll randomly shuffle the index of the nba dataframe, and then pick rows using the randomly shuffled values. If we didn't do this, we'd end up predicting and training on the same data set, which would overfit. We could also do cross-validation, which would be slightly better, but slightly more complex.
import math
from numpy.random import permutation

# Randomly shuffle the index of nba.
random_indices = permutation(nba.index)

# Set a cutoff for how many items we want in the test set (in this case 1/3 of the items)
test_cutoff = math.floor(len(nba)/3)

# Generate the test set by taking the first 1/3 of the randomly shuffled indices.
test = nba.loc[random_indices[:test_cutoff]]

# Generate the train set with the rest of the data.
train = nba.loc[random_indices[test_cutoff:]]

Using sklearn for k nearest neighbors

Instead of having to do it all ourselves, we can use the k-nearest neighbors implementation in scikit-learn. Here's the documentation. There's a regressor and a classifier available, but we'll be using the regressor, as we have continuous values to predict on. Sklearn performs the distance finding automatically and lets us specify how many neighbors we want to look at (note that it does not normalize the columns for us, so in practice the features should be scaled first when their ranges differ widely).

# The columns that we will be making predictions with.
x_columns = ['age', 'g', 'gs', 'mp', 'fg', 'fga', 'fg.', 'x3p', 'x3pa', 'x3p.', 'x2p', 'x2pa', 'x2p.', 'efg.', 'ft', 'fta', 'ft.', 'orb', 'drb', 'trb', 'ast', 'stl', 'blk', 'tov', 'pf']

# The column that we want to predict.
y_column = ["pts"]

from sklearn.neighbors import KNeighborsRegressor

# Create the knn model, looking at the five closest neighbors.
knn = KNeighborsRegressor(n_neighbors=5)

# Fit the model on the training data.
knn.fit(train[x_columns], train[y_column])

# Make point predictions on the test set using the fit model.
predictions = knn.predict(test[x_columns])

Computing error

Now that we have point predictions, we can compute the error involved with our predictions. We can compute mean squared error. The formula is \(\frac{1}{n}\sum_{i=1}^{n}(\hat{y_{i}} - y_{i})^{2}\).
# Get the actual values for the test set.
actual = test[y_column]

# Compute the mean squared error of our predictions.
mse = (((predictions - actual) ** 2).sum()) / len(predictions)

Next Steps

For more on k nearest neighbors, you can check out our six-part interactive machine learning fundamentals course, which teaches the basics of machine learning using the k nearest neighbors algorithm.

Vik is the CEO and Founder of Dataquest.
2019-01-23 09:13
nuSTORM at CERN: Feasibility Study / Long, Kenneth Richard (Imperial College (GB))
The Neutrinos from Stored Muons, nuSTORM, facility has been designed to deliver a definitive neutrino-nucleus scattering programme using beams of $\bar{\nu}_e$ and $\bar{\nu}_\mu$ from the decay of muons confined within a storage ring. The facility is unique, it will be capable of storing $\mu^\pm$ beams with a central momentum of between 1 GeV/c and 6 GeV/c and a momentum spread of 16%. [...]
CERN-PBC-REPORT-2019-003.- Geneva : CERN, 2019 - 150.

2018-12-18 14:08
Physics Beyond Colliders at CERN: Beyond the Standard Model Working Group Report / Beacham, J. (Ohio State U., Columbus (main)) ; Burrage, C. (U. Nottingham) ; Curtin, D. (Toronto U.) ; De Roeck, A. (CERN) ; Evans, J. (Cincinnati U.) ; Feng, J.L. (UC, Irvine) ; Gatto, C. (INFN, Naples ; NIU, DeKalb) ; Gninenko, S. (Moscow, INR) ; Hartin, A. (U. Coll. London) ; Irastorza, I. (U. Zaragoza, LFNAE) et al.
The Physics Beyond Colliders initiative is an exploratory study aimed at exploiting the full scientific potential of the CERN's accelerator complex and scientific infrastructures through projects complementary to the LHC and other possible future colliders. These projects will target fundamental physics questions in modern particle physics. [...]
arXiv:1901.09966; CERN-PBC-REPORT-2018-007.- Geneva : CERN, 2018 - 150 p.
2018-12-17 18:05
PBC technology subgroup report / Siemko, Andrzej (CERN) ; Dobrich, Babette (CERN) ; Cantatore, Giovanni (Universita e INFN Trieste (IT)) ; Delikaris, Dimitri (CERN) ; Mapelli, Livio (Universita e INFN, Cagliari (IT)) ; Cavoto, Gianluca (Sapienza Universita e INFN, Roma I (IT)) ; Pugnat, Pierre (Lab. des Champs Magnet. Intenses (FR)) ; Schaffran, Joern (Deutsches Elektronen-Synchrotron (DE)) ; Spagnolo, Paolo (INFN Sezione di Pisa, Universita' e Scuola Normale Superiore, Pisa (IT)) ; Ten Kate, Herman (CERN) et al.
Goal of the technology WG set by PBC: Exploration and evaluation of possible technological contributions of CERN to non-accelerator projects possibly hosted elsewhere: survey of suitable experimental initiatives and their connection to and potential benefit to and from CERN; description of identified initiatives and how their relation to the unique CERN expertise is facilitated.
CERN-PBC-REPORT-2018-006.- Geneva : CERN, 2018 - 31.

2018-12-14 16:17
AWAKE++: The AWAKE Acceleration Scheme for New Particle Physics Experiments at CERN / Gschwendtner, Edda (CERN) ; Bartmann, Wolfgang (CERN) ; Caldwell, Allen Christopher (Max-Planck-Institut fur Physik (DE)) ; Calviani, Marco (CERN) ; Chappell, James Anthony (University of London (GB)) ; Crivelli, Paolo (ETH Zurich (CH)) ; Damerau, Heiko (CERN) ; Depero, Emilio (ETH Zurich (CH)) ; Doebert, Steffen (CERN) ; Gall, Jonathan (CERN) et al.
The AWAKE experiment reached all planned milestones during Run 1 (2016-18), notably the demonstration of strong plasma wakes generated by proton beams and the acceleration of externally injected electrons to multi-GeV energy levels in the proton driven plasma wakefields. During Run 2 (2021 - 2024) AWAKE aims to demonstrate the scalability and the acceleration of electrons to high energies while maintaining the beam quality. [...]
CERN-PBC-REPORT-2018-005.- Geneva : CERN, 2018 - 11.

2018-12-14 15:50
Particle physics applications of the AWAKE acceleration scheme / Wing, Matthew (University of London (GB)) ; Caldwell, Allen Christopher (Max-Planck-Institut fur Physik (DE)) ; Chappell, James Anthony (University of London (GB)) ; Crivelli, Paolo (ETH Zurich (CH)) ; Depero, Emilio (ETH Zurich (CH)) ; Gall, Jonathan (CERN) ; Gninenko, Sergei (Russian Academy of Sciences (RU)) ; Gschwendtner, Edda (CERN) ; Hartin, Anthony (University of London (GB)) ; Keeble, Fearghus Robert (University of London (GB)) et al.
The AWAKE experiment had a very successful Run 1 (2016-8), demonstrating proton-driven plasma wakefield acceleration for the first time, through the observation of the modulation of a long proton bunch into micro-bunches and the acceleration of electrons up to 2 GeV in 10 m of plasma. The aims of AWAKE Run 2 (2021-4) are to have high-charge bunches of electrons accelerated to high energy, about 10 GeV, maintaining beam quality through the plasma and showing that the process is scalable. [...]
CERN-PBC-REPORT-2018-004.- Geneva : CERN, 2018 - 11.

2018-12-13 13:21
Summary Report of Physics Beyond Colliders at CERN / Jaeckel, Joerg (CERN) ; Lamont, Mike (CERN) ; Vallee, Claude (Centre National de la Recherche Scientifique (FR))
Physics Beyond Colliders is an exploratory study aimed at exploiting the full scientific potential of CERN's accelerator complex and its scientific infrastructure in the next two decades through projects complementary to the LHC, HL-LHC and other possible future colliders. These projects should target fundamental physics questions that are similar in spirit to those addressed by high-energy colliders, but that require different types of beams and experiments. [...]
arXiv:1902.00260; CERN-PBC-REPORT-2018-003.- Geneva : CERN, 2018 - 66 p.
The Binomial Formula

The binomial distribution is a discrete probability distribution of the successes in a sequence of [latex]\text{n}[/latex] independent yes/no experiments.

Learning Objectives
Employ the probability mass function to determine the probability of success in a given number of trials

Key Takeaways
Key Points
The probability of getting exactly [latex]\text{k}[/latex] successes in [latex]\text{n}[/latex] trials is given by the probability mass function.
The binomial distribution is frequently used to model the number of successes in a sample of size [latex]\text{n}[/latex] drawn with replacement from a population of size [latex]\text{N}[/latex].
The binomial distribution is the discrete probability distribution of the number of successes in a sequence of [latex]\text{n}[/latex] independent yes/no experiments, each of which yields success with probability [latex]\text{p}[/latex].

Key Terms
central limit theorem: a theorem which states that, given certain conditions, the mean of a sufficiently large number of independent random variables, each with a well-defined mean and well-defined variance, will be approximately normally distributed
probability mass function: a function that gives the probability that a discrete random variable is exactly equal to some value

In probability theory and statistics, the binomial distribution is the discrete probability distribution of the number of successes in a sequence of [latex]\text{n}[/latex] independent yes/no experiments, each of which yields success with probability [latex]\text{p}[/latex]. The binomial distribution is the basis for the popular binomial test of statistical significance. The binomial distribution is frequently used to model the number of successes in a sample of size [latex]\text{n}[/latex] drawn with replacement from a population of size [latex]\text{N}[/latex].
If the sampling is carried out without replacement, the draws are not independent and so the resulting distribution is a hypergeometric distribution, not a binomial one. However, for [latex]\text{N}[/latex] much larger than [latex]\text{n}[/latex], the binomial distribution is a good approximation, and widely used.

In general, if the random variable [latex]\text{X}[/latex] follows the binomial distribution with parameters [latex]\text{n}[/latex] and [latex]\text{p}[/latex], we write [latex]\text{X} \sim \text{B}(\text{n}, \text{p})[/latex]. The probability of getting exactly [latex]\text{k}[/latex] successes in [latex]\text{n}[/latex] trials is given by the probability mass function:

[latex]\displaystyle \text{f}(\text{k}; \text{n}, \text{p}) = \text{P}(\text{X}=\text{k}) = {{\text{n}}\choose{\text{k}}}\text{p}^\text{k}(1-\text{p})^{\text{n}-\text{k}}[/latex]

for [latex]\text{k} = 0, 1, 2, \dots, \text{n}[/latex], where

[latex]\displaystyle {{\text{n}}\choose{\text{k}}} = \frac{\text{n}!}{\text{k}!(\text{n}-\text{k})!}[/latex]

is the binomial coefficient (hence the name of the distribution) "n choose k," also denoted [latex]\text{C}(\text{n}, \text{k})[/latex] or [latex]_\text{n}\text{C}_\text{k}[/latex]. The formula can be understood as follows: we want [latex]\text{k}[/latex] successes ([latex]\text{p}^\text{k}[/latex]) and [latex]\text{n}-\text{k}[/latex] failures ([latex](1-\text{p})^{\text{n}-\text{k}}[/latex]); however, the [latex]\text{k}[/latex] successes can occur anywhere among the [latex]\text{n}[/latex] trials, and there are [latex]\text{C}(\text{n}, \text{k})[/latex] different ways of distributing [latex]\text{k}[/latex] successes in a sequence of [latex]\text{n}[/latex] trials.

One straightforward way to simulate a binomial random variable [latex]\text{X}[/latex] is to compute the sum of [latex]\text{n}[/latex] independent 0−1 random variables, each of which takes on the value 1 with probability [latex]\text{p}[/latex].
This method requires [latex]\text{n}[/latex] calls to a random number generator to obtain one value of the random variable. When [latex]\text{n}[/latex] is relatively large (say, at least 30), the central limit theorem implies that the binomial distribution is well-approximated by the corresponding normal density function with parameters [latex]\mu = \text{np}[/latex] and [latex]\sigma = \sqrt{\text{npq}}[/latex].

Binomial Probability Distributions

This chapter explores Bernoulli experiments and the probability distributions of binomial random variables.

Learning Objectives
Apply Bernoulli distribution in determining success of an experiment

Key Takeaways
Key Points
A Bernoulli (success-failure) experiment is performed [latex]\text{n}[/latex] times, and the trials are independent.
The probability of success on each trial is a constant [latex]\text{p}[/latex]; the probability of failure is [latex]\text{q}=1-\text{p}[/latex].
The random variable [latex]\text{X}[/latex] counts the number of successes in the [latex]\text{n}[/latex] trials.

Key Terms
Bernoulli Trial: an experiment whose outcome is random and can be either of two possible outcomes, "success" or "failure"

Many random experiments consist of counting the number of successes in a series of a fixed number of independently repeated trials, each of which may result in either success or failure. The distribution of the number of successes is a binomial distribution. It is a discrete probability distribution with two parameters, traditionally indicated by [latex]\text{n}[/latex], the number of trials, and [latex]\text{p}[/latex], the probability of success. Such a success/failure experiment is also called a Bernoulli experiment, or Bernoulli trial; when [latex]\text{n}=1[/latex], the binomial distribution is a Bernoulli distribution.
Named after Jacob Bernoulli, who studied them extensively in the 1600s, a well known example of such an experiment is the repeated tossing of a coin and counting the number of times "heads" comes up.

In a sequence of Bernoulli trials, we are often interested in the total number of successes and not in the order of their occurrence. If we let the random variable [latex]\text{X}[/latex] equal the number of observed successes in [latex]\text{n}[/latex] Bernoulli trials, the possible values of [latex]\text{X}[/latex] are [latex]0, 1, 2, \dots, \text{n}[/latex]. If [latex]\text{x}[/latex] successes occur, where [latex]\text{x}=0, 1, 2, \dots, \text{n}[/latex], then [latex]\text{n}-\text{x}[/latex] failures occur. The number of ways of selecting [latex]\text{x}[/latex] positions for the [latex]\text{x}[/latex] successes in the [latex]\text{n}[/latex] trials is:

[latex]\displaystyle {{\text{n}}\choose{\text{x}}} = \frac{\text{n}!}{\text{x}!(\text{n}-\text{x})!}[/latex]

Since the trials are independent and since the probabilities of success and failure on each trial are, respectively, [latex]\text{p}[/latex] and [latex]\text{q}=1-\text{p}[/latex], the probability of each of these ways is [latex]\text{p}^\text{x}(1-\text{p})^{\text{n}-\text{x}}[/latex]. Thus, the p.d.f. of [latex]\text{X}[/latex], say [latex]\text{f}(\text{x})[/latex], is the sum of the probabilities of these [latex]{{\text{n}}\choose{\text{x}}}[/latex] mutually exclusive events; that is,

[latex]\text{f}(\text{x})={{\text{n}}\choose{\text{x}}}\text{p}^\text{x}(1-\text{p})^{\text{n}-\text{x}}[/latex], [latex]\text{x}=0, 1, 2, \dots, \text{n}[/latex]

These probabilities are called binomial probabilities, and the random variable [latex]\text{X}[/latex] is said to have a binomial distribution.

Mean, Variance, and Standard Deviation of the Binomial Distribution

In this section, we'll examine the mean, variance, and standard deviation of the binomial distribution.
Learning Objectives
Examine the different properties of binomial distributions

Key Takeaways
Key Points
The mean of a binomial distribution with parameters [latex]\text{N}[/latex] (the number of trials) and [latex]\text{p}[/latex] (the probability of success for each trial) is [latex]\text{m}=\text{Np}[/latex].
The variance of the binomial distribution is [latex]\text{s}^2 = \text{Np}(1-\text{p})[/latex], where [latex]\text{s}^2[/latex] is the variance of the binomial distribution.
The standard deviation ([latex]\text{s}[/latex]) is the square root of the variance ([latex]\text{s}^2[/latex]).

Key Terms
variance: a measure of how far a set of numbers is spread out
mean: one measure of the central tendency either of a probability distribution or of the random variable characterized by that distribution
standard deviation: shows how much variation or dispersion exists from the average (mean), or expected value

As with most probability distributions, examining the different properties of binomial distributions is important to truly understanding the implications of them. The mean, variance, and standard deviation are three of the most useful and informative properties to explore. In this next section we'll take a look at these different properties and how they are helpful in establishing the usefulness of statistical distributions.

The easiest way to understand the mean, variance, and standard deviation of the binomial distribution is to use a real life example. Consider a coin-tossing experiment in which you tossed a coin 12 times and recorded the number of heads. If you performed this experiment over and over again, what would the mean number of heads be? On average, you would expect half the coin tosses to come up heads. Therefore, the mean number of heads would be 6.
In general, the mean of a binomial distribution with parameters [latex]\text{N}[/latex] (the number of trials) and [latex]\text{p}[/latex] (the probability of success for each trial) is:

[latex]\text{m}=\text{Np}[/latex]

where [latex]\text{m}[/latex] is the mean of the binomial distribution. The variance of the binomial distribution is:

[latex]\text{s}^2 = \text{Np}(1-\text{p})[/latex], where [latex]\text{s}^2[/latex] is the variance of the binomial distribution.

The coin was tossed 12 times, so [latex]\text{N}=12[/latex]. A coin has a probability of 0.5 of coming up heads. Therefore, [latex]\text{p}=0.5[/latex]. The mean and variance can therefore be computed as follows:

[latex]\text{m}=\text{Np}=12\cdot0.5 = 6[/latex]
[latex]\text{s}^2=\text{Np}(1-\text{p})=12\cdot0.5\cdot(1.0-0.5)=3.0[/latex]

Naturally, the standard deviation ([latex]\text{s}[/latex]) is the square root of the variance ([latex]\text{s}^2[/latex]).

Additional Properties of the Binomial Distribution

In this section, we'll look at the median, mode, and covariance of the binomial distribution.

Learning Objectives
Explain some results of finding the median in binomial distribution

Key Takeaways
Key Points
There is no single formula for finding the median of a binomial distribution.
The mode of a binomial [latex]\text{B}(\text{n}, \text{p})[/latex] distribution is equal to [latex]\lfloor (\text{n}+1)\text{p} \rfloor[/latex].
If two binomially distributed random variables [latex]\text{X}[/latex] and [latex]\text{Y}[/latex] are observed together, estimating their covariance can be useful.

Key Terms
floor function: maps a real number to the largest preceding integer
covariance: A measure of how much two random variables change together.
median: the numerical value separating the higher half of a data sample, a population, or a probability distribution, from the lower half mode: the value that appears most often in a set of data In general, there is no single formula for finding the median of a binomial distribution, and it may even be non-unique. However, several special results have been established: If [latex]\text{np}[/latex] is an integer, then the mean, median, and mode coincide and equal [latex]\text{np}[/latex]. Any median [latex]\text{m}[/latex] must lie within the interval [latex]\lfloor \text{np}\rfloor \leq \text{m} \leq \lceil \text{np}\rceil [/latex]. A median [latex]\text{m}[/latex] cannot lie too far away from the mean: [latex]|\text{m} - \text{np}| \leq \min \{ \ln 2, \max (\text{p},1 - \text{p} )\}[/latex]. The median is unique and equal to [latex]\text{m} = \text{round}(\text{np})[/latex] in cases where either [latex]\text{p} \leq 1 - \ln 2[/latex] or [latex]\text{p} \geq \ln 2[/latex] or [latex]|\text{m} - \text{np}| \leq \min(\text{p}, 1 - \text{p})[/latex] (except for the case when [latex]\text{p} = \frac{1}{2}[/latex] and [latex]\text{n}[/latex] is odd). When [latex]\text{p} = \frac{1}{2}[/latex] and [latex]\text{n}[/latex] is odd, any number [latex]\text{m}[/latex] in the interval [latex]\frac{1}{2} (\text{n} - 1) \leq \text{m} \leq \frac{1}{2} (\text{n} + 1)[/latex] is a median of the binomial distribution. If [latex]\text{p} = \frac{1}{2}[/latex] and [latex]\text{n}[/latex] is even, then [latex]\text{m} = \frac{\text{n}}{2}[/latex] is the unique median. There are also conditional binomials. If [latex]\text{X} \sim \text{B}(\text{n}, \text{p})[/latex] and, conditional on [latex]\text{X}[/latex], [latex]\text{Y} \sim \text{B}(\text{X}, \text{q})[/latex], then [latex]\text{Y}[/latex] is a simple binomial variable with distribution [latex]\text{Y} \sim \text{B}(\text{n}, \text{pq})[/latex]. The binomial distribution is a special case of the Poisson binomial distribution, which is a sum of [latex]\text{n}[/latex] independent non-identical Bernoulli trials [latex]\text{Bern}(\text{p}_i)[/latex]. If [latex]\text{X}[/latex] has the Poisson binomial distribution with [latex]\text{p}_1 = \ldots = \text{p}_n = \text{p}[/latex], then [latex]\text{X} \sim \text{B}(\text{n}, \text{p})[/latex].
Usually the mode of a binomial [latex]\text{B}(\text{n}, \text{p})[/latex] distribution is equal to [latex]\lfloor (\text{n} + 1)\text{p} \rfloor[/latex], where [latex]\lfloor \cdot \rfloor[/latex] is the floor function. However, when [latex](\text{n} + 1)\text{p}[/latex] is an integer and [latex]\text{p}[/latex] is neither 0 nor 1, then the distribution has two modes: [latex](\text{n} + 1)\text{p}[/latex] and [latex](\text{n} + 1)\text{p} - 1[/latex]. When [latex]\text{p}[/latex] is equal to 0 or 1, the mode will be 0 and [latex]\text{n}[/latex], respectively. If two binomially distributed random variables [latex]\text{X}[/latex] and [latex]\text{Y}[/latex] are observed together, estimating their covariance can be useful. Using the definition of covariance, in the case [latex]\text{n} = 1[/latex] (thus being Bernoulli trials) we have [latex]\text{Cov}(\text{X}, \text{Y}) = \text{E}(\text{XY}) - \mu_\text{X}\mu_\text{Y}[/latex]. The first term is non-zero only when both [latex]\text{X}[/latex] and [latex]\text{Y}[/latex] are one, and [latex]\mu_\text{X}[/latex] and [latex]\mu_\text{Y}[/latex] are equal to the two probabilities. Defining [latex]\text{p}_\text{B}[/latex] as the probability of both happening at the same time, this gives [latex]\text{Cov}(\text{X}, \text{Y}) = \text{p}_\text{B} - \text{p}_\text{X}\text{p}_\text{Y}[/latex], and for [latex]\text{n}[/latex] independent pairwise trials [latex]\text{Cov}(\text{X}, \text{Y}) = \text{n}(\text{p}_\text{B} - \text{p}_\text{X}\text{p}_\text{Y})[/latex]. If X and Y are the same variable, this reduces to the variance formula given above.
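The floor formula for the mode can be sanity-checked against a brute-force search over the pmf (a sketch; the function names are ours, and the test cases deliberately avoid the two-mode situation where (n+1)p is an integer):

```python
import math

def binom_pmf(k, n, p):
    """Probability of exactly k successes in Binomial(n, p)."""
    return math.comb(n, k) * p ** k * (1 - p) ** (n - k)

def binom_mode(n, p):
    """Mode located by direct search over the pmf."""
    return max(range(n + 1), key=lambda k: binom_pmf(k, n, p))

# Agreement with floor((n+1)p) when (n+1)p is not an integer:
for n, p in [(12, 0.5), (10, 0.3), (7, 0.2)]:
    assert binom_mode(n, p) == math.floor((n + 1) * p)
```

Direct search and the closed-form floor expression agree on each case.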
I'm trying to prove $NP$-membership for a problem from the following certificate. I have $n$ sets of integers: $$(S_i)_{i \in \{1,\dots,n\}}$$ Each set has a number $m_i$ of integers. I make a "combination" from those sets by taking at most one element from each set. A "combination" has between $0$ and $n$ elements (assuming the sets are not empty). A "combination" has a value: the sum of its elements. I have to compute the sum of every possible "combination" value, and check whether it is greater than a given value. An analytical formula would be nice, but I'm not sure it exists. Otherwise, do you think this sum is easily computable? $C_0=\{0\}$, $C_{k} = C_{k-1} \cup \{ x+y \mid x\in C_{k-1}, y\in S_k \}$. Let $x_{ij}$ be the $j^{th}$ element of $S_i$ and let $S$ denote the sum of all combinations. We consider $S$ as $$S = \sum_{i,j} x_{ij} \cdot w_{ij}$$ where $w_{ij}$ is the number of combinations that contain $x_{ij}$. Consider an arbitrary element $x$ of $S_1$. There are $(m_2+1) \cdot (m_3+1) \cdots (m_n+1)$ possible combinations that contain $x$ (i.e., pick any of the $m_2$ elements of $S_2$ or none, pick any of the $m_3$ elements of $S_3$ or none, etc.). So, in general, $w_{ij} = \left(\prod_{k=1}^n(m_k+1) \right)/(m_i+1)$. Note: This is assuming that the question is asking for the sum of the values of all combinations. If the real question is the sum of the values of all distinct combinations, this will not work.
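The weight argument in the answer translates directly into code (a sketch; both function names are ours, and the brute force represents "take nothing from this set" as adding 0):

```python
from itertools import product

def sum_of_combinations(sets):
    """Closed-form sum of the values of all combinations
    (take at most one element from each set)."""
    total = 1
    for s in sets:
        total *= len(s) + 1                 # (m_1+1)(m_2+1)...(m_n+1) combinations
    # each element of set s appears in total/(len(s)+1) combinations
    return sum((total // (len(s) + 1)) * sum(s) for s in sets)

def brute_force(sets):
    # Enumerate every combination; a skipped set contributes 0 to the value.
    return sum(sum(choice) for choice in product(*[[0, *s] for s in sets]))

print(sum_of_combinations([[1, 2], [3]]))  # 15
print(brute_force([[1, 2], [3]]))          # 15
```

The closed form runs in linear time in the total number of elements, while the brute force is exponential, which is exactly why the analytical formula matters for the NP-membership argument.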
In the book that I am using, Linear Algebra Done Right, the proof for the Steinitz exchange lemma (which can be found here) left me unconvinced. The proof refers to the linear independence lemma. This is where my problem lies: the proof says that 'one of the vectors in this list is in the span of the previous ones'. I believe this should be 'at least one', as the LDL reads 'there exists'. Related to this, why must that vector be one of the w's (the vectors that span V)? As I see it, it could be one of the u's too (the vectors that are linearly independent)! As an example, $((2,0,0),(0,2,0),(0,0,2),(1,1,1))$ certainly spans $\mathbb{R}^3$ and $((1,0,0),(0,1,0),(0,0,1))$ is certainly linearly independent in $\mathbb{R}^3$. We add $(1,0,0)$ to the first set and choose not to remove $(1,0,0)$, but a 'w', $(2,0,0)$ (I understand why the LDL is correct, so we can always do this). Now we have: $((1,0,0),(0,2,0),(0,0,2),(1,1,1))$. We then add $(0,1,0)$, but actually, even though $(1,0,0)$ and $(0,1,0)$ are linearly independent, $(0,1,0)$ is also in the span of the previous ones! So we could actually remove that one, so as I see it, what the proof says is incorrect. Can someone point out my misunderstanding? EDIT: So, I still find the formulation of the proof a bit sketchy. Krish's post inspired me to prove the lemma below formally. Hopefully it clears up things for some, as it did for me. Lemma: If $(u_1,\dots,u_n,w_1,\dots,w_m)$ is a linearly dependent list of vectors, with $(u_1,\dots,u_n)$ linearly independent, then for some $i \in \left\{1,\dots,m\right\}$, $w_i$ is in the span of the list that remains after removing $w_i$. Proof: Let $(u_1,\dots,u_n,w_1,\dots,w_j)$ be a list, where $j$ is chosen as large as possible such that the list is linearly independent. Then $(u_1,\dots,u_n,w_1,\dots,w_j,w_{j+1})$ is linearly dependent.
From linear dependence it follows that $a_1u_1+a_2u_2+\dots+a_nu_n+b_1w_1+\dots+b_{j+1}w_{j+1} = \mathbf{0}$, for some $a_1,\dots,a_n,b_1,\dots,b_{j+1}$ not all zero. If $w_{j+1} = \mathbf{0}$, then the desired result follows. If not, then $b_{j+1} \neq 0$: otherwise the equality could only be satisfied if $a_1,\dots,a_n,b_1,\dots,b_j$ all equalled zero (because $(u_1,\dots,u_n,w_1,\dots,w_j)$ is linearly independent), contradicting that not all of the coefficients are zero. Now we can get $ w_{j+1}= - \frac{a_1}{b_{j+1}} u_1 - \frac{a_2}{b_{j+1}} u_2 - \dots - \frac{a_n}{b_{j+1}} u_n - \frac{b_1}{b_{j+1}} w_1 - \dots - \frac{b_j}{b_{j+1}} w_j $. Thus $w_{j+1}$ can be written as a linear combination of $u_1,\dots,u_n,w_1,\dots,w_j$, hence $w_{j+1} \in \mathrm{span}(u_1,\dots,u_n,w_1,\dots,w_j) \subseteq \mathrm{span}(u_1,\dots,u_n,w_1,\dots,w_j,w_{j+2},\dots,w_m).$ Q.E.D.
This is not easy for general $G$, but we can immediately produce some crude bounds (on the minimal number $\mu(G)$ such that $G$ embeds into $S_{\mu(G)}$): If we have an embedding $G \hookrightarrow S_n$, we must trivially have $|G| \leq |S_n| = n!$. Of course, this is sharp in the sense that equality holds for $G = S_n$. On the other hand, the regular representation of $G$ determines an embedding $G \hookrightarrow S_{|G|}$, so $\mu(G) \leq |G|$. This latter bound is sharp: Theorem If $G$ is a group for which there is no embedding $G \hookrightarrow S_n$ for any $n < |G|$, then $G$ is a cyclic group $\Bbb Z_{p^m}$ of prime power order, a generalized quaternion $2$-group, or the Klein 4-group $\Bbb Z_2 \times \Bbb Z_2$. There are numerous results available for certain classes of $G$. For example, if $\gcd(|G_1|, |G_2|) = 1$, then $\mu(G_1 \times G_2) = \mu(G_1) + \mu(G_2)$. This and (1) above together resolve the problem for abelian groups: Theorem If $G$ is abelian, so that we can write it as $\Bbb Z_{a_1} \times \cdots \times \Bbb Z_{a_r}$ for (nontrivial) prime powers $a_1, \ldots, a_r$, then $$\mu(G) = a_1 + \cdots + a_r .$$ (Both of the Theorems here are given by D.L. Johnson and recorded in the reference below, which itself contains a bibliography of this topic.) The realization of $D_8$ as the group of symmetries of the square determines an embedding $D_8 \hookrightarrow S_4$, and it's not too hard to prove that $\mu(D_{2n}) = \mu(\Bbb Z_n)$ and $\mu(A_n) = n$ (both for $n > 2$).
These facts together determine $\mu(G)$ for all $G$ of order $<16$ except $Q_{12} \cong \Bbb Z_3 \rtimes \Bbb Z_4$: $S_1$: $\{ e \}$ $S_2$: $\Bbb Z_2$ $S_3$: $\Bbb Z_3$, $S_3$ $S_4$: $\Bbb Z_4$, $\Bbb Z_2 \times \Bbb Z_2$, $D_8$, $A_4$ $S_5$: $\Bbb Z_5$, $\Bbb Z_6$, $D_{10}$ $S_6$: $\Bbb Z_4 \times \Bbb Z_2$, $\Bbb Z_2 \times \Bbb Z_2 \times \Bbb Z_2$, $\Bbb Z_3 \times \Bbb Z_3$, $D_{12}$ $S_7$: $\Bbb Z_7$, $\Bbb Z_{10}$, $\Bbb Z_{12}$, $\Bbb Z_2 \times \Bbb Z_6$, $D_{14}$ $S_8$: $\Bbb Z_8$, $Q_8$, $\Bbb Z_{15}$ $S_9$: $\Bbb Z_9$, $\Bbb Z_{14}$ $S_{11}$: $\Bbb Z_{11}$ $S_{13}$: $\Bbb Z_{13}$ This question gives an explicit proof that $\mu(Q_8) = 8$. Lemieux, Stephan R., Minimal degree of faithful permutation representations of finite groups. (1999) Master's thesis.
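The abelian formula is easy to mechanize (a sketch; both helper names are ours, and the convention that the trivial group still needs one point, $\mu(\{e\})=1$, is our reading of the $S_1$ row above):

```python
def prime_power_decomposition(n):
    """Factor n into its prime-power parts: 12 -> [4, 3]."""
    parts, d = [], 2
    while d * d <= n:
        if n % d == 0:
            q = 1
            while n % d == 0:
                n //= d
                q *= d
            parts.append(q)
        d += 1
    if n > 1:
        parts.append(n)
    return parts

def mu_abelian(cyclic_orders):
    """mu(Z_{k_1} x ... x Z_{k_r}) via the theorem above: split each cyclic
    factor into prime-power factors (CRT), then sum them."""
    total = sum(q for k in cyclic_orders for q in prime_power_decomposition(k))
    return max(total, 1)  # the trivial group still acts faithfully on one point

print(mu_abelian([6]))      # 5: Z_6 = Z_2 x Z_3 embeds in S_5
print(mu_abelian([4, 2]))   # 6: Z_4 x Z_2 embeds in S_6
```

Both sample values match the table above ($\Bbb Z_6$ under $S_5$, $\Bbb Z_4 \times \Bbb Z_2$ under $S_6$).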
Mainly trying to determine where I'm going wrong. Here is what I have done so far: $$r = 1 + \cos(\theta)$$ We know that: $x = r\cos(\theta)$ and $y = r\sin(\theta)$ So: $$x = (1 + \cos(\theta))\cos(\theta)$$ $$x = \cos(\theta) + \cos^2(\theta)$$ $$\frac{dx}{d\theta} = -\sin(\theta)-2\cos(\theta)\sin(\theta)$$ Set $\frac{dx}{d\theta}$ to $0$: $$-\sin(\theta)-2\cos(\theta)\sin(\theta)=0$$ $$-\sin(\theta)(1+2\cos(\theta))=0$$ So $\theta = 0, \pi, \frac{2\pi}{3},\frac{4\pi}{3}$ $r = 2$ when $\theta = 0$, $r = 0$ when $\theta = \pi$, $r = \frac{1}{2}$ when $\theta = \frac{2\pi}{3},\frac{4\pi}{3}$ So the vertical tangent lines, when plugged into $x=r\cos(\theta)$, are: $x=2$, $x=0$, and $x=-\frac{1}{4}$ This answer is wrong though, and I'm not entirely sure why. Can anyone point out what I am doing wrong?
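A quick numerical check (a sketch, not a full answer) shows what is special about $\theta = \pi$: there both $dx/d\theta$ and $dy/d\theta$ vanish, so the pole is a cusp of the cardioid rather than a point with a vertical tangent:

```python
import math

def r(t): return 1 + math.cos(t)
def x(t): return r(t) * math.cos(t)
def y(t): return r(t) * math.sin(t)

def deriv(f, t, h=1e-6):
    """Central finite difference approximation of f'(t)."""
    return (f(t + h) - f(t - h)) / (2 * h)

# Inspect both derivatives at every root of dx/dtheta found above:
for t in (0.0, 2 * math.pi / 3, math.pi, 4 * math.pi / 3):
    print(f"theta={t:.4f}  dx/dtheta={deriv(x, t): .2e}  dy/dtheta={deriv(y, t): .2e}")
```

Roots of $dx/d\theta$ where $dy/d\theta \ne 0$ are genuine vertical-tangent candidates; a root where both derivatives are zero needs a closer look.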
I have some questions about homotopy. Before starting, here is a definition: A topological space $X$ is called contractible if $X$ is homotopy equivalent to a one-point space. Suppose $X$ is a topological space and $Y\subset X$. A retraction $r:X\rightarrow Y$ is a continuous function such that $r_{|Y}=id_Y$, thus $r(y)=y$ $\forall y\in Y$ and $r(x)\in Y\ \forall x\in X$. Now the questions: Suppose $X\equiv X'$ and $Y\equiv Y'$ are homotopy equivalences, prove that $X\times Y\equiv X'\times Y'$ Prove that every star-shaped subspace $X\subset\Bbb{R^n}$ is contractible $id_{S^{n-1}}$ not homotopic to a constant map (thus $S^{n-1}$ not contractible) $\Longrightarrow$ non-existence of retractions $D^n\rightarrow S^{n-1}$ Brouwer's Fixed Point Theorem $\Longrightarrow$ Poincaré-Miranda Theorem I have no idea how to solve the questions and implications ... Can someone maybe help me? Thank you :)
The Riemann Hypothesis is true if and only if Robin's Inequality is true for all $n>5040$. It has also been shown by Akbary and Friggstad that the smallest counterexample greater than 5040, if it exists, must be a Superabundant Number. These two facts suggest that Robin's Inequality might provide a more elementary way to show the Riemann Hypothesis is true, or help provide a counterexample in the form of a Superabundant Number which violates the inequality. Robin's Inequality is elegantly stated: $$f(n)=\frac{\sigma (n)}{e^{\gamma} n \log \log n}<1$$ Below is a plot of the LHS for the first 2000 Superabundant Numbers above 5040 from A004394. Looking at this plot and knowing how close the values get to 1 makes it seem plausible that RH could be false, but if RH is true then $f$ appears to be an asymptotically increasing function bounded above by 1. Examining the remaining terms provided, the function behavior looks similar, and for the largest term listed $f(a[1000000])\approx0.9998655$. No terms before the millionth cross the threshold of one and $a[1000000]\approx10^{103082}$. Is this the highest known verification of RH? It is my understanding that ZetaGrid and others working with the zeta function directly have only verified zeros up to $10^{13}$. Also, from my understanding of Briggs, he appears to have only verified completely up to $10^{154}$ using SA numbers?
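Robin's ratio is cheap to evaluate for moderate $n$ (a sketch using trial division; `sigma` and `robin_ratio` are our names):

```python
import math

EULER_GAMMA = 0.5772156649015329  # Euler-Mascheroni constant

def sigma(n):
    """Sum of divisors of n by trial division; fine for moderate n."""
    total = 0
    for d in range(1, math.isqrt(n) + 1):
        if n % d == 0:
            total += d
            if d != n // d:
                total += n // d
    return total

def robin_ratio(n):
    """f(n) = sigma(n) / (e^gamma * n * log log n) from Robin's Inequality."""
    return sigma(n) / (math.exp(EULER_GAMMA) * n * math.log(math.log(n)))

print(robin_ratio(5040))   # slightly above 1: 5040 itself violates the inequality
print(robin_ratio(10080))  # below 1
```

This confirms why the statement is phrased for $n > 5040$: the value 5040 is the largest known $n$ with $f(n) \ge 1$.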
There are $x^n$ possible lists of $n$ objects, where there are $x$ different possible objects. So on average, over all possible such lists, a (non-lossy, optimal) compression algorithm will give a file of size $n\log_2 x$ bits. Of course, many decent algorithms will either know which objects are common in general, and make those take less space, or have an overhead stating which objects were common in this specific list, and make those take less space. So the uncommon lists will in that case actually take up more space after compression. But $n\log_2 x$ is the best guess for a "general" size for an arbitrary list. Now presume we know that the list is sorted. Then there are only $\binom{x+n-1}{n}$ different possible lists. This is quite a lot smaller. How much smaller? If $x\gg n$ (i.e. most lists have only distinct objects), it's basically $\frac{x^n}{n!}$. This is the best case. The number of bits required on average to store a compressed list is, for the best case, $$n\log_2x-\log_2(n!)\approx n\log_2 x-n\log_2n+\frac{n}{\ln2}$$ using Stirling's approximation.
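The saving can be checked against the exact multiset count (a sketch; `math.comb` computes the binomial coefficient, and the function names are ours):

```python
import math

def bits_unsorted(n, x):
    """n log2 x: baseline bits for an arbitrary list."""
    return n * math.log2(x)

def bits_sorted_exact(n, x):
    """log2 of the number of sorted lists (multisets) of n items from x types."""
    return math.log2(math.comb(x + n - 1, n))

def bits_sorted_stirling(n, x):
    """The approximation n log2 x - n log2 n + n/ln 2 from the text."""
    return n * math.log2(x) - n * math.log2(n) + n / math.log(2)

n, x = 100, 10**6
print(bits_unsorted(n, x))        # ~1993.2 bits
print(bits_sorted_exact(n, x))    # ~1468 bits: sortedness saves roughly log2(n!) bits
print(bits_sorted_stirling(n, x))
```

The Stirling estimate lands within a few bits of the exact value (the missing $\tfrac{1}{2}\log_2(2\pi n)$ term accounts for the gap).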
Water, as simple as it might appear, has quite a few extraordinary things to offer. Much of it is not as it appears. Before diving deeper, a few cautionary words about hybridisation. Hybridisation is an often misconceived concept. It is only a mathematical interpretation, which explains a certain bonding situation (in an intuitive fashion). In a molecule the equilibrium geometry will result from various factors, such as steric and electronic interactions, and furthermore interactions with the surroundings like a solvent or external field. The geometric arrangement will not be formed because a molecule is hybridised in a certain way; it is the other way around, i.e. hybridisation is a result of the geometry, or, more precisely, an interpretation of the wave function for the given molecular arrangement. In molecular orbital theory linear combinations of all available (atomic) orbitals will form molecular orbitals (MO). These are spread over the whole molecule, or delocalised, and in a quantum chemical interpretation they are called canonical orbitals. Such a solution (approximation) of the wave function can be unitarily transformed to form localised molecular orbitals (LMO). The solution (the energy) does not change due to this transformation. These can then be used to interpret a bonding situation in a simpler theory. Each LMO can be expressed as a linear combination of the atomic orbitals, hence it is possible to determine the coefficients of the atomic orbitals and describe these also as hybrid orbitals. It is absolutely wrong to assume that there are only three types of $\mathrm{sp}^x$ hybrid orbitals. Therefore it is very well possible that there are multiple different types of orbitals involved in bonding for a certain atom. For more on this, read about Bent's rule on the network.
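Coulson's theorem, invoked below for the $\mathrm{sp}^4$ estimate, reduces for two equivalent hybrids $s + \lambda p$ to the orthogonality condition $1 + \lambda^2\cos\theta = 0$. A one-line sketch (the function name is ours):

```python
import math

def hybridisation_index(bond_angle_deg):
    """Coulson's orthogonality relation for two equivalent hybrids s + lambda*p:
    1 + lambda**2 * cos(theta) = 0, so lambda**2 = -1/cos(theta),
    which is the x in sp^x."""
    c = math.cos(math.radians(bond_angle_deg))
    if c >= 0:
        raise ValueError("the relation needs an inter-hybrid angle above 90 degrees")
    return -1.0 / c

print(f"sp^{hybridisation_index(104.5):.2f}")  # H-O-H angle of water: roughly sp^4
```

Plugging in the tetrahedral angle of about 109.47 degrees recovers $\mathrm{sp}^3$, as it should.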
[1] Let's look at water, Wikipedia is so kind to provide us with a schematic drawing: The bonding angle is quite close to the ideal tetrahedral angle, so one would assume that the involved orbitals are $\mathrm{sp}^3$ hybridised. There is also a connection between bond angle and hybridisation, called Coulson's theorem, which lets you approximate the hybridisation. [2] In this case the orbitals involved in the bonds would be $\mathrm{sp}^4$ hybridised. (Close enough.) Let us also consider the symmetry of the molecule. The point group of water is $C_\mathrm{2v}$. Because there are mirror planes, in the canonical bonding picture π-type orbitals [3] are necessary. We have an orbital with appropriate symmetry, which is the p-orbital sticking out of the bonding plane. This interpretation is not only valid, it is one that comes as the solution of the Schrödinger equation. [4] That leaves for the other orbital a hybridisation of $\mathrm{sp}^{(2/3)}$. If we make the reasonable assumption that the oxygen hydrogen bonds are $\mathrm{sp}^3$ hybridised, and the out-of-plane lone pair is a p orbital, then the maths is a bit easier and the in-plane lone pair is $\mathrm{sp}$ hybridised. [5] A calculation on the M06/def2-QZVPP level of theory gives us the following canonical molecular orbitals: (Orbital symmetries: $2\mathrm{A}_1$, $1\mathrm{B}_2$, $3\mathrm{A}_1$, $1\mathrm{B}_1$) [6,7] Since the interpretation with hybrid orbitals is equivalent, I used the natural bond orbital theory to interpret the results. This method transforms the canonical orbitals into localised orbitals for easier interpretation. Here is an excerpt of the output (core orbital and polarisation functions omitted) giving us the calculated hybridisations: (Occupancy) Bond orbital / Coefficients / Hybrids ------------------ Lewis ------------------------------------------------------ 2. (1.99797) LP ( 1) O 1 s( 53.05%)p 0.88( 46.76%)d 0.00( 0.19%) 3. (1.99770) LP ( 2) O 1 s( 0.00%)p 1.00( 99.69%)d 0.00( 0.28%) 4.
(1.99953) BD ( 1) O 1- H 2 ( 73.49%) 0.8573* O 1 s( 23.41%)p 3.26( 76.25%)d 0.01( 0.31%) ( 26.51%) 0.5149* H 2 s( 99.65%)p 0.00( 0.32%)d 0.00( 0.02%) 5. (1.99955) BD ( 1) O 1- H 3 ( 73.48%) 0.8572* O 1 s( 23.41%)p 3.26( 76.27%)d 0.01( 0.30%) ( 26.52%) 0.5150* H 3 s( 99.65%)p 0.00( 0.32%)d 0.00( 0.02%) ------------------------------------------------------------------------------- As we can see, that pretty much matches the assumption of $\mathrm{sp}^3$ oxygen hydrogen bonds, a p lone pair, and an $\mathrm{sp}$ lone pair. Does that mean that the lone pairs are non-equivalent? Well, that is at least one interpretation. And we only deduced all that from a gas phase point of view. When we go towards the condensed phase, things will certainly change. Hydrogen bonds will break the symmetry, dynamics will play an important role and in the end, both will probably behave quite similarly or even identically. Now let's get to the juicy part: Second, if so, does this have any significance in actual physical systems (i.e. is it a measurable phenomenon), and what is the approximate energy difference between the pairs of electrons? Well, the first part is a bit tricky to answer, because that is dependent on a lot more conditions. But the part in parentheses is easy. It is measurable with photoelectron spectroscopy. There is a nice orbital scheme correlated to the orbital ionisation potential on the homepage of Michael K. Denk for water. [8] Unfortunately I cannot find license information, or a reference to reproduce, hence I am hesitant to post it here. However, I found a nice little publication on the photoelectron spectroscopy of water in the bonding region. [9] I'll quote some relevant data from the article. $\ce{H2O}$ is a non-linear, triatomic molecule consisting of an oxygen atom covalently bonded to two hydrogen atoms.
The ground state of the $\ce{H2O}$ molecule is classified as belonging to the $C_\mathrm{2v}$ point group and so the electronic states of water are described using the irreducible representations $\mathrm{A}_1$, $\mathrm{A}_2$, $\mathrm{B}_1$, $\mathrm{B}_2$. The electronic configuration of the ground state of the $\ce{H2O}$ molecule is described by five doubly occupied molecular orbitals: $$\begin{align} \underbrace{(1\mathrm{a}_1)^2}_{\text{core}}&& \underbrace{(2\mathrm{a}_1)^2}_{\text{inner-valence orbital}}&& \underbrace{ (1\mathrm{b}_2)^2 (3\mathrm{a}_1)^2 (1\mathrm{b}_1)^2 }_{\text{outer-valence orbital}}&& \mathrm{X~^1A_1} \end{align}$$ [..] In addition to the three band systems observed in HeI PES of $\ce{H2O}$, a fourth band system in the TPE spectrum close to 32 eV is also observed. As indicated in Fig. 1, these band systems correspond to the removal of a valence electron from each of the molecular orbitals $(1\mathrm{b}_1)^{-1}$, $(3\mathrm{a}_1)^{-1}$, $(1\mathrm{b}_2)^{-1}$ and $(2\mathrm{a}_1)^{-1}$ of $\ce{H2O}$. As you can see, it fits quite nicely with the calculated data. From the image I would say that the difference between $(1\mathrm{b}_1)^{-1}$ and $(3\mathrm{a}_1)^{-1}$ is about 1-2 eV. TL;DR As you see, your hunch paid off quite well. Photoelectron spectroscopy of water in the gas phase confirms that the lone pairs are non-equivalent. Conclusions for condensed phases might be different, but that is a story for another day. Notes and References A π orbital has one nodal plane collinear with the bonding axis; it is antisymmetric with respect to this plane. A bit more explanation in my question What would follow in the series sigma, pi and delta bonds? Within the approximation that molecular orbitals are a linear combination of atomic orbitals (MO = LCAO).
The terminology we use for hybridisation actually is just an abbreviation:$$\mathrm{sp}^{x} = \mathrm{s}^{\frac{1}{x+1}}\mathrm{p}^{\frac{x}{x+1}}$$In theory $x$ can have any value; since it is just a unitary transformation the representation does not change, hence\begin{align} 1\times\mathrm{s}, 3\times\mathrm{p} &\leadsto 4\times\mathrm{sp}^3 \\ &\leadsto 3\times\mathrm{sp}^2, 1\times\mathrm{p} \\ &\leadsto 2\times\mathrm{sp}, 2\times\mathrm{p} \\ &\leadsto 2\times\mathrm{sp}^3, 1\times\mathrm{sp}, 1\times\mathrm{p} \\ &\leadsto \text{etc. pp.}\\ &\leadsto 2\times\mathrm{sp}^4, 1\times\mathrm{p}, 1\times\mathrm{sp}^{(2/3)}\end{align}There are virtually infinite possibilities of combination. This and the next footnote address a couple of points that were raised in a comment by DavePhD. While I already extensively answered that there, I want to include a few more clarifying points here. (If I do it right, the comments become obsolete.) What is the reason for concluding 2 lone pairs versus 1 or 3? For example Mulliken has in table V the b1 orbital being a definite lone pair (no H population) but the two a1 orbitals both have about 0.3e population on H. Would it be wrong to say only one of the PES energy levels corresponds to a lone pair, and the other 3 has some significant population on hydrogen? Are Mulliken's calculations still valid? – DavePhD The article Dave refers to is R. S. Mulliken, J. Chem. Phys. 1955, 23, 1833., which introduces Mulliken population analysis. In this paper Mulliken analyses wave functions on the SCF-LCAO-MO level of theory. This is essentially Hartree Fock with a minimal basis set. (I will address this in the next footnote.) We have to understand that this was state-of-the-art computational chemistry back then. What we take for granted nowadays, calculating the same thing in a few seconds, was revolutionary back then. Today we have a lot fancier methods. I used density functional theory with a very large basis set. 
The main difference between these approaches is that the level I use recovers a lot more of the electron correlation than the method of Mulliken. However, if you look closely at the results, it is quite impressive how well these early approximations perform. On the M06/def2-QZVPP level of theory the geometry of the molecule is optimised to have an oxygen hydrogen distance of 95.61 pm and a bond angle of 105.003°. This is quite close to the experimental results. The contributions to the orbitals are given as follows. I include the orbital energies (OE), too. The contributions of the atomic orbitals are normalised so that 1.00 is the total for each molecular orbital. Because the basis set has polarisation functions, the missing parts are attributed to these. The threshold for printing is 3%. (I also rearranged the Gaussian output for better readability.) Atomic contributions to molecular orbitals: 2: 2A1 OE=-1.039 is O1-s=0.81 O1-p=0.03 H2-s=0.07 H3-s=0.07 3: 1B2 OE=-0.547 is O1-p=0.63 H2-s=0.18 H3-s=0.18 4: 3A1 OE=-0.406 is O1-s=0.12 O1-p=0.74 H2-s=0.06 H3-s=0.06 5: 1B1 OE=-0.332 is O1-p=0.95 We can see that there is indeed some contribution by the hydrogens to the in-plane lone pair of oxygen. On the other hand we see that there is only one orbital where there is a large contribution by hydrogen. One could easily come up with a theory of one or three lone pairs of oxygen, depending on your own point of view. Mulliken's analysis is based on the canonical orbitals, which are delocalised, so we will never have a pure lone pair orbital. When we refer to orbitals as being of a certain type, we imply that this is the largest contribution. Often we also use visual aids like pictures of these orbitals to decide if they are of bonding or anti-bonding nature, or if their contribution is on the bonding axis. All these analyses are highly biased by your point of view. There is no right or wrong when it comes to separation schemes.
There is no hard evidence obtainable for any of these. These are mathematical interpretations that, in the best case, help us understand bonding better. Thus deciding whether water has one, two or three (or even four) lone pairs is somewhat playing with numbers until something seems to fit. Bonding is too difficult to transform into easy pictures. (That's why I am not an advocate for cautiously using Lewis structures.) The NBO analysis is another separation scheme, one that aims to transform the obtained canonical orbitals into a Lewis-like picture for a better understanding. This transformation does not change the wave function and in this way is as valid a representation as other approaches. What you lose by this approach are the orbital energies, since you break the symmetry of the wave function, but explaining this would go much too far here. In a nutshell, the localisation scheme aims to transform the delocalised orbitals into orbitals that correspond to bonds. From a quite general point of view, Mulliken's calculations (he actually only interpreted the results of others) and conclusions hold up to a certain point. Nowadays we know that his population analysis has severe problems, but within the minimal basis it still produces justifiable results. The popularity of this method comes mainly from it being very easy to perform. See also: Which one, Mulliken charge distribution and NBO, is more reliable? Mulliken used a SCF-LCAO-MO calculation by Ellison and Shull and was so kind as to include the main results in his paper. The oxygen hydrogen bond distance is 95.8 pm and the bond angle is 105°. I performed a calculation on the same geometry on the HF/STO-3G level of theory for comparison. It obviously does not match perfectly, but well enough for a little bit of further discussion.
NO SYM HF/STO-3G : N(O) N(H2) | Mulliken : N(O) N(H2) 1 1A1 -550.79 2.0014 -0.0014 | -557.3 2.0007 -0.0005 2 2A1 -34.49 1.6113 0.3887 | -36.2 1.688 0.309 3 1B2 -16.82 1.0700 0.9300 | -18.6 0.918 1.080 4 3A1 -12.29 1.6837 0.3163 | -13.2 1.743 0.257 5 1B1 -10.63 2.0000 0.0000 | -11.8 2.000 As a side note: I was completely unable to read the Mulliken analysis by Gaussian. I used MultiWFN instead. It is also not an equivalent approach because they expressed the hydrogen atoms with group orbitals. The results don't differ by much. The basic approach of Mulliken is to split the overlap population symmetrically between the orbitals of the two elements. That is a principal problem of the method, as the contributions to that MO can be quite different. Resulting problematic points are occupation values larger than two or smaller than zero, which clearly have no physical meaning. The analysis is especially ruined by diffuse functions. At the time, Mulliken certainly did not know what we are able to do today, nor under which conditions his approach breaks down; it is still funny to read such sentences today. Actually, very small negative values occasionally occur [...]. [...] ideally to the population of the AO [...] should never exceed the number 2.00 of electrons in a closed atomic sub-shell. Actually, [the orbital population] in some instances does very slightly exceed 2.00 [...]. The reason why these slight but only slight imperfections exist is obscure. But since they are only slight, it appears that the gross atomic populations calculated using Eq. (6') may be taken as representing rather accurately the "true" populations in various AOs for an atom in a molecule. It should be realized, of course, that fundamentally there is no such thing as an atom in a molecule except in an approximate sense. For much more on this I found an explanation of the Gaussian output along with the reference to F. Martin, H. Zipse, J. Comp. Chem.
2005, 26, 97 - 105, available as a copy. I have not read it though. Scroll down until the bottom of the page for the image, read for more information: CHEM 2070, Michael K. Denk: UV-Vis & PES. (University of Guelph) If dead: Wayback Machine S.Y. Truong, A.J. Yencha, A.M. Juarez, S.J. Cavanagh, P. Bolognesi, G.C. King, Chemical Physics 2009, 355 (2–3), 183-193. Or try this mirror.
EDIT There was an essential step left out in the proof. Thanks to Al Jebr for pointing it out. Since I've muddled the matter with my comments, I'll post an answer, although you only asked for a hint. I'll try to phrase it as a sequence of hints, so you can stop reading when you want. First, this seems to have something to do with Taylor polynomials. After all, they allow us to find a polynomial that satisfies the criteria for one of these tuples. In particular, there is a polynomial $$p_1(x)= \sum_{k=0}^n{\frac{a_k}{k!}(x-x_1)^k}$$ that satisfies the criteria for the first tuple, and furthermore every polynomial $q$ that satisfies these criteria is of the form $q(x)=p_1(x)+(x-x_1)^{n+1}p_2(x)$ for some polynomial $p_2$. Now the $a_k$ don't tell us any more. Any polynomial will work for $p_2,$ so we ask how the second tuple can help us find the coefficients of $p_2.$ By now, it's easy to guess that Taylor's theorem should help here, too. For convenience in what follows, I'll write $F(x) = (x-x_1)^{n+1},$ so that our formula becomes $q(x)=p_1(x)+F(x)p_2(x).$ Since all we know about is what happens at $x_2,$ it's plain that we must evaluate $q$ at $x_2$. Notice that $F^{(k)}(x_2) \ne 0$ for $0\le k \le n,$ since we are given $x_1 \ne x_2.$ EDIT This is where I left out a step. I said that we have to use Taylor's theorem a second time, but then I didn't do it! Now we look at the Taylor polynomial of $p_2$ at $x_2$: $$p_2(x)= \sum_{k=0}^n{\frac{c_k}{k!}(x-x_2)^k,}$$ for some constants $c_k$. By Leibniz's formula for the derivative of a product, $$q^{(i)}(x_2) = p_1^{(i)}(x_2) + \sum_{j=0}^i{\binom{i}{j}F^{(i-j)}(x_2)}p_2^{(j)}(x_2)\text{ for } i=0,...,n. $$ Since none of the relevant derivatives of $F$ vanish at $x_2$, and $p_2^{(j)}(x_2)=c_j$ (every other term of the Taylor expansion vanishes at $x_2$), we can compute the coefficients of $p_2$ one by one. Again, by Taylor's theorem, $p_2$ is unique up to terms of degree higher than $n.$
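The construction can be turned into a short numeric sketch (all names are ours; `math.perm` and `math.comb` require Python 3.8 or newer):

```python
import math

def shifted_deriv(coeffs, center, x, order):
    """order-th derivative at x of sum_k coeffs[k]/k! * (X - center)**k."""
    return sum(coeffs[k] / math.factorial(k - order) * (x - center) ** (k - order)
               for k in range(order, len(coeffs)))

def two_point_taylor(x1, a, x2, b):
    """Coefficients c_k of p2 where q = p1 + F*p2 and F = (X - x1)**(n+1),
    solved one at a time exactly as in the argument above."""
    n = len(a) - 1
    F = lambda m, t: math.perm(n + 1, m) * (t - x1) ** (n + 1 - m)
    c = []
    for i in range(n + 1):
        s = shifted_deriv(a, x1, x2, i)                               # p1^(i)(x2)
        s += sum(math.comb(i, j) * F(i - j, x2) * c[j] for j in range(i))
        c.append((b[i] - s) / F(0, x2))   # the coefficient of c_i is F(x2) != 0
    return c

def q_deriv(x1, a, x2, c, t, i):
    """i-th derivative of q at t, via Leibniz's formula applied to F*p2."""
    n = len(a) - 1
    F = lambda m, u: math.perm(n + 1, m) * (u - x1) ** (n + 1 - m)
    return shifted_deriv(a, x1, t, i) + sum(
        math.comb(i, j) * F(i - j, t) * shifted_deriv(c, x2, t, j)
        for j in range(i + 1))

# Prescribe derivatives 0..2 at both points and check the construction:
x1, x2, a, b = 0.0, 1.0, [1.0, 2.0, 3.0], [4.0, 5.0, 6.0]
c = two_point_taylor(x1, a, x2, b)
assert all(abs(q_deriv(x1, a, x2, c, x1, i) - a[i]) < 1e-9 for i in range(3))
assert all(abs(q_deriv(x1, a, x2, c, x2, i) - b[i]) < 1e-9 for i in range(3))
```

The conditions at $x_1$ hold automatically because every derivative of $F$ up to order $n$ vanishes there; the conditions at $x_2$ hold by the triangular solve for the $c_i$.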
Meshing Considerations for Linear Static Problems In this blog entry, we introduce meshing considerations for linear static finite element problems. This is the first in a series of postings on meshing techniques that is meant to provide guidance on how to approach the meshing of your finite element model with confidence. About Finite Element Meshing The finite element mesh serves two purposes. It first subdivides the CAD geometry being modeled into smaller pieces, or elements, over which it is possible to write a set of equations describing the solution to the governing equation. The mesh is also used to represent the solution field to the physics being solved. There is error associated with both the discretization of the geometry and the discretization of the solution, so let's examine these separately. Geometric Discretization Consider two very simple geometries, a block and a cylindrical shell: There are four different types of elements that can be used to mesh these geometries: tetrahedra (tets), hexahedra (bricks), triangular prisms, and pyramid elements: The grey circles represent the corners, or nodes, of the elements. Any combination of the above four elements can be used. (For 2D modeling, triangular and quadrilateral elements are available.) You can see by examination that both of these geometries could be meshed with as few as one brick element, two prisms, three pyramids, or five tets. As we learned in the previous blog post about solving linear static finite element problems, you will always arrive at a solution in one Newton-Raphson iteration. This is true for linear finite element problems regardless of the mesh. So let's take a look at the simplest mesh we could put on these structures. Here's a plot of a single brick element discretizing these geometries: The mesh of the block is obviously a perfect representation of the true geometry, while the mesh of the cylindrical shell appears quite poor.
In fact, it only appears that way when plotted. Elements are always plotted on the screen as having straight edges (this is done for graphics performance purposes), but COMSOL usually uses a second-order Lagrangian element to discretize the geometry (and the solution). So although the element edges always appear straight, they are internally represented as: The white circles represent the midpoint nodes of these second-order element edges. That is, the lines defining the edges of the elements are represented by three points, and the edges approximated via a polynomial fit. There are also additional nodes at the center of each of these quadrilateral faces and in the center of the volume for these second-order Lagrangian hexahedral elements (omitted for clarity). Clearly, these elements do a better job of representing the curved boundaries of the geometry. By default, COMSOL uses second-order elements for most physics, the two exceptions being problems involving chemical species transport and fluid flow. (Since those types of problems are convection dominated, the governing equations are better solved with first-order elements.) Higher-order elements are also available, but the default second-order elements usually represent a good compromise between accuracy and computational requirements. The figure below shows the geometric discretization error when meshing a 90° arc in terms of the number of first- and second-order elements: The conclusion that can be made from this is that at least two second-order elements, or at least eight first-order elements, are needed to reduce the geometric discretization error below 1%. In fact, two second-order elements introduce a geometric discretization error of less than 0.1%. Finer meshes will more accurately represent the geometry, but will take more computational resources.
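The first-order figure is easy to sanity-check by hand. For straight element edges on a unit-radius arc, the worst-case deviation between arc and chord is the sagitta, $1-\cos(\theta/2)$, for an element spanning angle $\theta$. A short Python sketch (this is my own chord-based estimate, not COMSOL's exact error measure; second-order edges use a quadratic fit and do considerably better):

```python
import math

def arc_chord_error(n_elements, arc_deg=90.0):
    """Relative geometric error (sagitta / radius) when a circular arc
    is approximated by n straight, first-order element edges."""
    half_angle = math.radians(arc_deg / n_elements) / 2.0
    return 1.0 - math.cos(half_angle)  # max arc-to-chord distance, R = 1

# eight first-order elements per 90-degree arc keep the error below 1%,
# while four are not quite enough
assert arc_chord_error(8) < 0.01
assert arc_chord_error(4) > 0.01
```

Running `arc_chord_error` for a range of element counts reproduces the qualitative trend of the figure: the error falls roughly quadratically with the number of elements.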
This gives us a couple of good practical guidelines:

- When using first-order elements, adjust the mesh such that there are at least eight elements per 90° arc
- When using second-order elements, use two elements per 90° arc

With these rules of thumb, we can now estimate the error we've introduced by meshing the geometry, and we can do so with some confidence before even having to solve the model. Now let's turn our attention to how the mesh discretizes the solution.

Solution Discretization

The finite element mesh is also used to represent the solution field. The solution is computed at the node points, and a polynomial basis is used to interpolate this solution throughout the element to recover the total solution field. When solving linear finite element problems, we are always able to compute a solution, no matter how coarse the mesh, but it may not be very accurate. To understand how mesh density affects solution accuracy, let's look at a simple heat transfer problem on our previous geometries: A temperature difference is applied to opposing faces of the block and the cylindrical shell. The thermal conductivity is constant, and all other surfaces are thermally insulated. The solution for the case of the square block is that the temperature field varies linearly throughout the block. So for this model, a single, first-order, hexahedral element would actually be sufficient to compute the true solution. Of course, you will rarely be that lucky! Therefore, let's look at a slightly more challenging case. We've already seen that the cylindrical shell model will have geometric discretization error due to the curved edges, so we would start this model with at least two second-order (or eight first-order) elements along the curved edges. If you look closely at the above plot, you can see that the element edges on the boundaries are curved, while the interior elements have straight edges.
Along the axis of the cylinder, we can use a single element, since the temperature field will not vary in this direction. However, in the radial direction, from the inside to the outside surface, we also need to have enough elements to discretize the solution. The analytic solution for this case goes as \ln(r) and can be compared against our finite element solution. Since the polynomial basis functions cannot perfectly describe this function, let's plot the error in the finite element solution for both the linear and quadratic elements: What you can see from this plot is that, as you increase the number of elements in the model, the error goes down. This is a fundamental property of the finite element method: the more elements, the more accurate your solution. Of course, there is also a cost associated with this. More computational resources, both time and hardware, are required to solve larger models. Now, you'll notice that there are no units on the x-axis of this graph, and that is on purpose. The rate at which error decreases with respect to mesh refinement will be different for every model, and depends on many factors. The only important point is that it will always go down, monotonically, for well-posed problems. You'll also notice that, after a point, the error starts to go back up. This will happen once the individual mesh elements start to get very small and we run into the limits of numerical precision. That is, the differences between numbers in our model become smaller than can be accurately resolved on a computer. This is an inherent problem with all computational methods, not just the finite element method; computers cannot represent all real numbers accurately. The point at which the error starts to go back up will be around \sqrt{2^{-52}} \approx 1.5 \times 10^{-8}, and to be on the safe and practical side, we often say that the minimal achievable error is 10^{-6}.
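We can mimic this convergence experiment in a few lines of Python by replacing the finite element solve with plain interpolation of the known \ln(r) profile on [1, 2]. This is a simplification of my own, not the blog's actual computation, but it shows the same qualitative behaviour: halving the element size cuts the linear-element error by roughly a factor of four, and quadratic elements are far more accurate at the same element count.

```python
import math

def interp_error(n_elem, order):
    """Max error when ln(r) on [1, 2] is interpolated elementwise with
    linear (order 1) or quadratic (order 2) Lagrange polynomials."""
    worst = 0.0
    for e in range(n_elem):
        a = 1.0 + e / n_elem
        b = 1.0 + (e + 1) / n_elem
        nodes = [a, b] if order == 1 else [a, (a + b) / 2.0, b]
        vals = [math.log(t) for t in nodes]
        for s in range(51):                  # sample densely in the element
            x = a + (b - a) * s / 50.0
            p = 0.0
            for i, xi in enumerate(nodes):   # Lagrange interpolation
                L = 1.0
                for j, xj in enumerate(nodes):
                    if i != j:
                        L *= (x - xj) / (xi - xj)
                p += vals[i] * L
            worst = max(worst, abs(p - math.log(x)))
    return worst

# refinement reduces the error; quadratic elements beat linear ones
assert interp_error(8, 1) < interp_error(4, 1)
assert interp_error(4, 2) < interp_error(4, 1)
```

The linear-element error scales like h², so doubling the element count drops the error by about 4x, which is the slope you would see on the blog's log-log convergence plot.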
Thus, if we integrate the scaled difference between the true and computed solutions over the entire model, we say that the error, \epsilon, can typically be made as small as 10^{-6} in the limit of mesh refinement. In practice, the inputs to our models will usually have much greater uncertainty than this anyway. Also keep in mind that in general we don't know the true solution; we will instead have to compare the computed solutions between different sized meshes and observe what values the solution converges toward.

Adaptive Mesh Refinement

I would like to close this blog post by introducing a better way to refine the mesh. The plots above show that error decreases as all of the elements in the model are made smaller. However, ideally you would only make the elements smaller in regions where the error is high. COMSOL addresses this via Adaptive Mesh Refinement, which first solves on an initial mesh, iteratively inserts elements into regions where the error is estimated to be high, and then re-solves the model. This can be continued for as many iterations as desired. This functionality works with triangular elements in 2D and tetrahedra in 3D. Let's examine this in the context of a simple structural mechanics problem — a plate under uniaxial tension with a hole, as shown in the figure below. Using symmetry, only one quarter of the model needs to be solved. The computed displacement fields, and the resultant stresses, are quite uniform some distance away from the hole, but vary strongly nearby. The figure below shows an initial mesh, as well as the results of several adaptive mesh refinement iterations, along with the computed stress field. Note how COMSOL preferentially inserts smaller elements around the hole. This should not be a surprise, since we already know there will be higher stresses around the hole. In practice, it is recommended to use a combination of adaptive mesh refinement, engineering judgment, and experience to find an acceptable mesh.
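A one-dimensional caricature of adaptive mesh refinement (my own sketch, not COMSOL's error estimator): compute a per-element error indicator, bisect the worst element, and repeat. For the \ln(r) profile from the previous section, the loop clusters small elements near r = 1, where the curvature is largest:

```python
import math

def element_error(a, b):
    # max deviation between ln(r) and its linear interpolant on [a, b]
    fa, fb = math.log(a), math.log(b)
    worst = 0.0
    for s in range(51):
        x = a + (b - a) * s / 50.0
        lin = fa + (fb - fa) * (x - a) / (b - a)
        worst = max(worst, abs(lin - math.log(x)))
    return worst

def refine(nodes, iterations):
    """Repeatedly bisect the element with the largest error indicator."""
    nodes = list(nodes)
    for _ in range(iterations):
        errs = [element_error(nodes[i], nodes[i + 1])
                for i in range(len(nodes) - 1)]
        k = errs.index(max(errs))                             # worst element
        nodes.insert(k + 1, (nodes[k] + nodes[k + 1]) / 2.0)  # gets bisected
    return nodes

nodes = refine([1.0, 1.25, 1.5, 1.75, 2.0], 6)
# refinement concentrates near r = 1, where ln(r) curves most
assert nodes[1] - nodes[0] < nodes[-1] - nodes[-2]
```

The same greedy refine-where-the-indicator-is-largest idea, with a proper a posteriori error estimator in place of `element_error`, is what the 2D/3D adaptive refinement described above does.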
Summary of Main Points

- You will always want to perform a mesh refinement study and compare results on different sized meshes
- Use your knowledge of geometric discretization error to choose as coarse a starting mesh as possible, and refine from there
- You can use adaptive mesh refinement, or your own engineering judgment, to refine the mesh
Now that I'm ridiculously behind in the Stanford Online Statistical Learning class, I thought it would be fun to try to reproduce the figure on page 36 of the slides from chapter 3 or page 81 of the book. The result is a curvaceous surface that slices neatly through the data set. I’m not sure what incurable chemical imbalance explains why such a thing sounds like fun to me. But let’s go get us some data, specifically the Advertising data set from the book's website: Advertising <- read.csv("http://www-bcf.usc.edu/~gareth/ISL/Advertising.csv", row.names = 1) Previously, we fooled around with linear models with one predictor. We now extend regression to the case with multiple predictors. \[ \hat{y} = \hat{\beta}_0 + \hat{\beta}_1 x_1 + \hat{\beta}_2 x_2 +···+ \hat{\beta}_p x_p \] Or, in matrix notation: \[ \hat{Y} = X \cdot \hat{\beta} + \hat{\beta_0} \] Where the dimensions of X are n by p, and \( \beta \) is p by 1 resulting in a Y of size n by 1, a prediction for each of the n samples. Find a fit for Sales as a function of TV and Radio advertising plus the interaction between the two. fit2 = lm(Sales ~ TV * Radio, data = Advertising) Before making the plot, define a helper function to evenly divide the ranges. This probably exists somewhere in R already, but I couldn't find it. evenly_divide <- function(series, r = range(series), n = 10) { c(r[1], 1:n/n * (r[2] - r[1]) + r[1]) } Generate a 2D grid over which we'll predict Sales. x = evenly_divide(Advertising$TV, n = 16) y = evenly_divide(Advertising$Radio, n = 16) Using the coefficients of the fit, create a function f that computes the predicted response. It would be nice to use predict for this purpose. From a functional programming point of view, it seems like there should be a function that takes a fit and returns a function equivalent to f. If such a thing exists, I'd like to find it. 
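In R, something like `function(x, y) predict(fit2, data.frame(TV = x, Radio = y))` comes close to that wished-for factory. As a language-neutral sketch of the same idea (my own toy implementation, not an R facility), here is a function that fits the interaction model by ordinary least squares and returns a closure equivalent to the `f` used below:

```python
def fit_interaction(xs, ys, zs):
    """Least-squares fit of z = b0 + b1*x + b2*y + b3*x*y, returning a
    closure that predicts at new (x, y) points."""
    rows = [[1.0, x, y, x * y] for x, y in zip(xs, ys)]
    m = 4
    # normal equations: (A^T A) beta = A^T z
    ata = [[sum(r[i] * r[j] for r in rows) for j in range(m)] for i in range(m)]
    atz = [sum(r[i] * z for r, z in zip(rows, zs)) for i in range(m)]
    # Gaussian elimination with partial pivoting
    for col in range(m):
        piv = max(range(col, m), key=lambda r: abs(ata[r][col]))
        ata[col], ata[piv] = ata[piv], ata[col]
        atz[col], atz[piv] = atz[piv], atz[col]
        for r in range(col + 1, m):
            fac = ata[r][col] / ata[col][col]
            for c in range(col, m):
                ata[r][c] -= fac * ata[col][c]
            atz[r] -= fac * atz[col]
    beta = [0.0] * m
    for r in range(m - 1, -1, -1):
        beta[r] = (atz[r] - sum(ata[r][c] * beta[c]
                                for c in range(r + 1, m))) / ata[r][r]
    return lambda x, y: beta[0] + beta[1] * x + beta[2] * y + beta[3] * x * y

# demo: noiseless data from known coefficients are recovered exactly
pts = [(float(x), float(y)) for x in range(4) for y in range(4)]
true = lambda x, y: 2.0 + 0.5 * x - 1.0 * y + 0.25 * x * y
f = fit_interaction([p[0] for p in pts], [p[1] for p in pts],
                    [true(*p) for p in pts])
assert abs(f(1.5, 2.5) - true(1.5, 2.5)) < 1e-8
```

The returned closure captures the fitted coefficients, which is exactly what one wants from a "takes a fit, returns a function" helper.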
beta = coef(fit2)
f <- function(x, y) beta[1] + beta[2] * x + beta[3] * y + beta[4] * x * y
z = outer(x, y, f)

I copied the coloring of the regression surface from the examples for persp. I'm guessing the important part is to create a color vector of the right size to cycle through the facets correctly.

nrz <- nrow(z)
ncz <- ncol(z)
nbcol <- 100
# Create color palette
palette <- colorRampPalette(c("blue", "cyan", "green"))
color <- palette(nbcol)
# Compute the z-value at the facet centres
zfacet <- z[-1, -1] + z[-1, -ncz] + z[-nrz, -1] + z[-nrz, -ncz]
# Recode facet z-values into color indices
facetcol <- cut(zfacet, nbcol)

OK, finally we get to the plotting. A call to persp sets up the coordinate system and renders the regression surface. With the coordinate system we got in the return value, we plot the line segments representing the residuals, using a higher transparency for points under the surface. We do the same for the actual points, so they look a bit like they're underneath the surface.

# Draw the perspective plot
res <- persp(x, y, z, theta = 30, phi = 20, col = color[facetcol],
             xlab = "TV", ylab = "Radio", zlab = "Sales")
# Draw the residual line segments
xy.true = trans3d(Advertising$TV, Advertising$Radio, Advertising$Sales, pmat = res)
xy.fit = trans3d(Advertising$TV, Advertising$Radio, predict(fit2, Advertising), pmat = res)
colors = rep("#00000080", nrow(Advertising))
colors[residuals(fit2) < 0] = "#00000030"
segments(xy.true$x, xy.true$y, xy.fit$x, xy.fit$y, col = colors)
# Draw the original data points
colors = rep("#cc0000", nrow(Advertising))
colors[residuals(fit2) < 0] = "#cc000030"
points(trans3d(Advertising$TV, Advertising$Radio, Advertising$Sales, pmat = res),
       col = colors, pch = 16)

To be honest, this doesn't look much like the figure in the book and there's reason to doubt the information-conveying ability of 3D graphics, but whatever – it looks pretty cool!
Introduction

In earlier parts we discussed the basics of integral equations and how they can be derived from ordinary differential equations. In the second part, we also solved a linear integral equation using the trial method. Now we are in a position to start the main job of solving integral equations. But before we go ahead with that mission, it will be better to learn how integral equations can be converted into differential equations.

Integral Equation ⇔ Differential Equation

The method of converting an integral equation into a differential equation is exactly opposite to what we did in the last part, where we converted boundary value differential equations into their respective integral equations. In that workout, initial value problems always ended up as Volterra integral equations and boundary value problems resulted in Fredholm integral equations. In the converse process we will get initial value problems from Volterra integral equations and boundary value problems from Fredholm integral equations. Also, just as in the earlier conversion we repeatedly integrated the differential equation within the given boundary values, here we will repeatedly differentiate the given integral equation and refine the results by putting in the constant integration limits. These instructions can be practically understood through the following two examples. The first problem involves the conversion of a Volterra integral equation into a differential equation and the second problem displays the conversion of a Fredholm integral equation into a differential equation.

Problem 1: Converting a Volterra Integral Equation into an Ordinary Differential Equation with initial values

Convert $$y(x) = - \int_{0}^x (x-t) y(t) dt$$ into an initial value problem. (See Problem 1 of Part 3)

Solution: We have, $$y(x) = - \int_{0}^x (x-t) y(t) dt \ldots (1)$$ Differentiating (1) with respect to $x$ will give $$y'(x) = -\frac{d}{dx} \int_{0}^x (x-t) y(t) dt$$ $$ \Rightarrow y'(x)=-\int_{0}^x y(t) dt \ldots (2)$$ Again differentiating (2) w.r.t.
$x$ will give $$ y''(x)=-\frac{d}{dx}\int_{0}^x y(t) dt$$ $$ \Rightarrow y''(x)=-y(x) \ldots (3')$$ $$ \iff y''(x)+y(x)=0 \ldots (3) $$ Putting the lower limit $x=0$ (i.e., the initial value) in equations (1) and (2) will give, respectively, the following: $$y(0) = - \int_{0}^0 (0-t) y(t) dt$$ $$y(0)=0 \ldots (4)$$ And, $$ y'(0)=-\int_{0}^0 y(t) dt$$ $$y'(0)=0 \ldots (5)$$ Equation (3) together with the initial values (4) and (5) forms the ordinary differential form of the given integral equation. $\Box$

Problem 2: Converting a Fredholm Integral Equation into an Ordinary Differential Equation with boundary values

Convert $$ y(x) =\lambda \int_{0}^{l} K(x,t) y(t) dt$$ into a boundary value problem, where $$ K(x,t)=\frac{t(l-x)}{l} \qquad \mathbf{0<t<x} $$ and $$ K(x,t)=\frac{x(l-t)}{l} \qquad \mathbf{x<t<l}$$

Solution: The given integral equation is $$ y(x) =\lambda \int_{0}^{l} K(x,t) y(t) dt \ldots (1)$$ or $$y(x) =\lambda \left(\int_{0}^{x} \frac{(l-x)t}{l} y(t) dt + \int_{x}^{l} \frac{x(l-t)}{l} y(t) dt\right) \ldots (2)$$ Differentiating (2) with respect to $x$ (the boundary terms from the two integrals cancel) will give $$ y'(x) = -\frac{\lambda}{l} \int_{0}^x t y(t) dt + \frac{\lambda}{l} \int_{x}^l (l-t) y(t) dt \ldots (3)$$ Differentiating (3) once more will give $$ y''(x) = -\lambda y(x)$$ That is, $$ y''(x) +\lambda y(x) =0 \ldots (4)$$ To get the boundary values, we put $x$ equal to each integration limit in (1) or (2). $x =0 \Rightarrow$ $$y(0)=0 \ldots (5)$$ $x=l \Rightarrow$ $$y(l)=0 \ldots (6)$$ The ODE (4) with boundary values (5) & (6) is the exact conversion of the given integral equation. $\Box$

Feel free to ask questions, send feedback and even point out mistakes. Great conversations start with just a single word. How to write better comments?
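As a check on Problem 2: the boundary value problem (4)-(6) has eigenfunctions $y(x)=\sin(n\pi x/l)$ with $\lambda=(n\pi/l)^2$, and these should satisfy the original Fredholm equation as well, since the kernel $K$ is the Green's function of $-d^2/dx^2$ with these boundary conditions. A short numerical verification in Python, taking $l=1$ and $n=1$ (the quadrature choice is mine):

```python
import math

def simpson(g, a, b, n=400):
    # composite Simpson rule (n must be even)
    h = (b - a) / n
    s = g(a) + g(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * g(a + i * h)
    return s * h / 3.0

def K(x, t, l=1.0):
    # the kernel of Problem 2: t(l-x)/l for t <= x, x(l-t)/l for t >= x
    return t * (l - x) / l if t <= x else x * (l - t) / l

lam = math.pi ** 2                   # lambda = (n pi / l)^2 with n = 1, l = 1
y = lambda t: math.sin(math.pi * t)  # corresponding eigenfunction
for x in (0.2, 0.5, 0.8):
    # split the integral at t = x, where the kernel has a kink
    rhs = lam * (simpson(lambda t: K(x, t) * y(t), 0.0, x)
                 + simpson(lambda t: K(x, t) * y(t), x, 1.0))
    assert abs(rhs - y(x)) < 1e-8
```

The residual is at quadrature accuracy, confirming that the conversion between the two forms is exact.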
I need to find the Fourier series of $$ f(x)=\begin{cases}a,& 0<x<\frac{\pi}{3}\\ 0,&\frac{\pi}{3}<x<\frac{2\pi}{3}\\-a,& \frac{2\pi}{3}<x<\pi\\ \end{cases} $$ $$ f(x)= a_0+\sum_{n=1}^{\infty}a_n\cos\frac{n\pi x}{L}+\sum_{n=1}^{\infty}b_n\sin\frac{n\pi x}{L}, $$ $$ a_0 = \frac{1}{2\pi} \int_{-\pi}^{\pi} f(x)\,dx $$ $$ a_n = \frac{1}{\pi} \int_{-\pi}^{\pi} f(x)\cos(nx)\,dx $$ $$ b_n = \frac{1}{\pi} \int_{-\pi}^{\pi} f(x)\sin(nx)\,dx $$ I found that $a_n$ and $a_0$ are both equal to $0$, which meant I only had $b_n$ to find: $$ b_n = \frac{1}{\pi} \int_{-\pi}^{\pi} f(x)\sin(nx)\,dx $$ $$ b_n = \frac{1}{\pi} \int_{0}^{\frac{\pi}{3}} a\sin(nx)\,dx + \frac{1}{\pi} \int_\frac{2\pi}{3}^{{\pi}} -a\sin(nx)\,dx $$ $$ b_n = \frac{a}{n\pi}\left[-\cos(nx)\right]_0^{\frac{\pi}3} + \frac{a}{n\pi}\left[\cos(nx)\right]_\frac{2\pi}3^{\pi} $$ When $n$ is odd, $$ b_n = 0 $$ When $n = 2, 4$, $$ b_n = \frac{a}{n\pi}\left[1-\cos\left(\frac{n\pi}{3}\right) + \cos(n\pi) -\cos\left(\frac{2n\pi}{3}\right)\right] $$ $$ b_n = \frac{3a}{n\pi} $$ When $n = 6$, $b_n = 0$. So this gives me a Fourier series of $$ f(x)=\frac{3a}{\pi}\left[\frac{1}{2}\sin(2x)+\frac{1}{4}\sin(4x)+\frac{1}{8}\sin(8x)\dots \right] $$ But the book has a solution of $$ f(x)=\frac{3a}{\pi}\left[\sin(2x)+\frac{1}{2}\sin(4x)+\frac{1}{4}\sin(8x)\dots \right] $$ Not sure where I'm going wrong!

The standard Fourier series is on an interval of length $2\pi$, whereas your problem is posed on $[0,\pi]$. $L$ has to figure into the coefficients, which you can see just by looking at the constant term. In other words, if you are going to expand $f$ on $[0,\pi]$, then the standard expansion will involve a constant, and $\sin(2nx)$, $\cos(2nx)$.
Assuming you have such an expansion $$ f(x) = a_{0} + \sum_{n=1}^{\infty} a_{n}\cos(2nx)+\sum_{n=1}^{\infty}b_{n}\sin(2nx), $$ then you multiply both sides by one of these functions, integrate over $[0,\pi]$, and use the fact that the integrals of unlike terms are zero, in order to obtain $$ \begin{align} \int_{0}^{\pi}f(x)dx & = a_{0}\int_{0}^{\pi}dx =\pi a_{0}\\ \int_{0}^{\pi}f(x)\cos(2nx)dx & = a_{n}\int_{0}^{\pi}\cos^{2}(2nx)\,dx=a_{n}\frac{\pi}{2},\;\;\; n > 0,\\ \int_{0}^{\pi}f(x)\sin(2nx)dx & = b_{n}\int_{0}^{\pi}\sin^{2}(2nx)\,dx=b_{n}\frac{\pi}{2}. \end{align} $$ The values of the integrals of $\sin^{2}$ and $\cos^{2}$ are easily determined: both types of integrals are over a full period and so must be equal; and $\sin^{2}(2nx)+\cos^{2}(2nx)=1$, which means that the integrals of $\sin^{2}$ and $\cos^{2}$ are half of the period (the period is $\pi$, so the integrals of $\sin^{2}$ and $\cos^{2}$ are $\pi/2$). So your series becomes $$ \begin{align} f(x) & \sim \frac{1}{\pi}\int_{0}^{\pi}f(x)\,dx + \\ & + \sum_{n=1}^{\infty}\left(\frac{2}{\pi}\int_{0}^{\pi}f(x)\cos(2nx)\,dx\right)\cos(2nx) \\ & +\sum_{n=1}^{\infty}\left(\frac{2}{\pi}\int_{0}^{\pi}f(x)\sin(2nx)\,dx\right)\sin(2nx). \end{align} $$ The normalization constants $\frac{2}{\pi}$ and $\frac{1}{\pi}$ come from dividing by the integrals of the squares of $1$, $\sin(2nx)$ and $\cos(2nx)$. Your function $f$ is $a$ times a part which is odd about $x=\pi/2$. The $\sin$ terms are odd about $x=\pi/2$ and the $\cos$ terms are even about $x=\pi/2$. That helps simplify. The cosine terms require integrating only the part of $f$ which is even about $x=\pi/2$, while the sine terms require integrating only the part of $f$ which is odd about $x=\pi/2$. So the $\cos$ terms drop out. In this particular case, you need only integrate the correct part over $[0,\pi/2]$ and double the value. So, $$ a_{n} = 0,\;\; b_{n}=\frac{2}{\pi}\cdot 2a\int_{0}^{\pi/3}\sin(2nx)\,dx=\frac{4a}{\pi}\int_{0}^{\pi/3}\sin(2nx)\,dx. $$
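A quick numerical cross-check of this (helper names are mine; `b(n)` computes $b_n=\frac{2}{\pi}\int_0^\pi f(x)\sin(2nx)\,dx$ directly) reproduces the book's coefficients $\frac{3a}{\pi},\ \frac{3a}{2\pi},\ 0,\ \frac{3a}{4\pi},\dots$:

```python
import math

def simpson(g, a, b, n=600):
    # composite Simpson rule (n must be even)
    h = (b - a) / n
    s = g(a) + g(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * g(a + i * h)
    return s * h / 3.0

amp = 1.0  # the amplitude a
# integrate piecewise so each subintegrand is smooth (f jumps at pi/3, 2pi/3)
pieces = [(0.0, math.pi / 3, amp),
          (math.pi / 3, 2 * math.pi / 3, 0.0),
          (2 * math.pi / 3, math.pi, -amp)]

def b(n):
    total = sum(simpson(lambda x, v=v: v * math.sin(2 * n * x), lo, hi)
                for lo, hi, v in pieces)
    return 2.0 / math.pi * total

assert abs(b(1) - 3 * amp / math.pi) < 1e-6
assert abs(b(2) - 3 * amp / (2 * math.pi)) < 1e-6
assert abs(b(3)) < 1e-6
assert abs(b(4) - 3 * amp / (4 * math.pi)) < 1e-6
```

This matches the closed form $b_n = \frac{2a}{n\pi}\bigl(1-\cos\frac{2n\pi}{3}\bigr)$ that follows from the last displayed integral.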
Sure, in principle it could be used to separate the levels of $\mathsf{PH}$...the key thing is to find polynomial families complete for the relevant classes (or, at least polynomial families $f, g$ such that $f$ is in the larger class and is conjectured to not be in the smaller class, and such that showing that $f$ is not a projection of $g$ suffices to show that $f$ is not in the smaller class) that have nice symmetry properties. In practice, however, we know of very few natural complete problems for levels of $\mathsf{PH}$ beyond the second or third level, let alone problems with nice symmetry properties. Q2: Indeed, the current focus of GCT and much of algebraic complexity is on permanent versus determinant, which is much closer to $\mathsf{\# P/poly}$ vs $\mathsf{FNC^2/poly}$ than it is to $\mathsf{NP}$ versus $\mathsf{P}$. As $\mathsf{\# P}$ is relatively close to $\mathsf{PSPACE}$ (and certainly above $\mathsf{PH}$), this is already in line with your suggestion. Q1: Separating higher levels of $\mathsf{PH}$ from one another is formally harder than separating $\mathsf{NP}$ from $\mathsf{P}$. I'm not sure that it's reasonable to assume that any technique showing $\mathsf{NP} \not\subseteq \mathsf{P/poly}$ would be related to one for showing $\mathsf{\Sigma_i P} \not\subseteq \mathsf{\Delta_i P/poly}$. Indeed, the relativization barrier suggests that this shouldn't be the case. But also, even moving between the first and second levels of $\mathsf{PH}$ is currently nontrivial... So I don't really buy the premise for this question. But, also, I don't see any reason that GCT shouldn't apply to separate higher levels of $\mathsf{PH}$, nor any way to turn this around to show that GCT can't separate $\mathsf{NP}$ from $\mathsf{P/poly}$... 
That all being said, if you look at Mulmuley's parallel lower bound in the PRAM model without bit operations, it not only shows that "$\mathsf{P} \neq \mathsf{NC}$" in that model, it also separates the levels of $\mathsf{NC}$ from one another in that model.
Now showing items 1-10 of 24 Production of Σ(1385)± and Ξ(1530)0 in proton–proton collisions at √s = 7 TeV (Springer, 2015-01-10) The production of the strange and double-strange baryon resonances (Σ(1385)±, Ξ(1530)0) has been measured at mid-rapidity (|y| < 0.5) in proton–proton collisions at √s = 7 TeV with the ALICE detector at the LHC. Transverse ... Forward-backward multiplicity correlations in pp collisions at √s = 0.9, 2.76 and 7 TeV (Springer, 2015-05-20) The strength of forward-backward (FB) multiplicity correlations is measured by the ALICE detector in proton-proton (pp) collisions at √s = 0.9, 2.76 and 7 TeV. The measurement is performed in the central pseudorapidity ... Inclusive photon production at forward rapidities in proton-proton collisions at $\sqrt{s}$ = 0.9, 2.76 and 7 TeV (Springer Berlin Heidelberg, 2015-04-09) The multiplicity and pseudorapidity distributions of inclusive photons have been measured at forward rapidities ($2.3 < \eta < 3.9$) in proton-proton collisions at three center-of-mass energies, $\sqrt{s}=0.9$, 2.76 and 7 ... Rapidity and transverse-momentum dependence of the inclusive J/$\mathbf{\psi}$ nuclear modification factor in p-Pb collisions at $\mathbf{\sqrt{\textit{s}_{NN}}}=5.02$ TeV (Springer, 2015-06) We have studied the transverse-momentum ($p_{\rm T}$) dependence of the inclusive J/$\psi$ production in p-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV, in three center-of-mass rapidity ($y_{\rm cms}$) regions, down to ... Measurement of pion, kaon and proton production in proton–proton collisions at √s = 7 TeV (Springer, 2015-05-27) The measurement of primary π±, K±, p and $\bar{p}$ production at mid-rapidity (|y| < 0.5) in proton–proton collisions at √s = 7 TeV performed with A Large Ion Collider Experiment (ALICE) at the Large Hadron Collider (LHC) is reported. ...
Two-pion femtoscopy in p-Pb collisions at $\sqrt{s_{\rm NN}}=5.02$ TeV (American Physical Society, 2015-03) We report the results of the femtoscopic analysis of pairs of identical pions measured in p-Pb collisions at $\sqrt{s_{\mathrm{NN}}}=5.02$ TeV. Femtoscopic radii are determined as a function of event multiplicity and pair ... Measurement of charm and beauty production at central rapidity versus charged-particle multiplicity in proton-proton collisions at $\sqrt{s}$ = 7 TeV (Springer, 2015-09) Prompt D meson and non-prompt J/$\psi$ yields are studied as a function of the multiplicity of charged particles produced in inelastic proton-proton collisions at a centre-of-mass energy of $\sqrt{s}=7$ TeV. The results ... Charged jet cross sections and properties in proton-proton collisions at $\sqrt{s}=7$ TeV (American Physical Society, 2015-06) The differential charged jet cross sections, jet fragmentation distributions, and jet shapes are measured in minimum bias proton-proton collisions at centre-of-mass energy $\sqrt{s}=7$ TeV using the ALICE detector at the ... Centrality dependence of high-$p_{\rm T}$ D meson suppression in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV (Springer, 2015-11) The nuclear modification factor, $R_{\rm AA}$, of the prompt charmed mesons ${\rm D^0}$, ${\rm D^+}$ and ${\rm D^{*+}}$, and their antiparticles, was measured with the ALICE detector in Pb-Pb collisions at a centre-of-mass ... K*(892)$^0$ and $\Phi$(1020) production in Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV (American Physical Society, 2015-02) The yields of the K*(892)$^0$ and $\Phi$(1020) resonances are measured in Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV through their hadronic decays using the ALICE detector. The measurements are performed in multiple ...
Now showing items 21-30 of 182 The new Inner Tracking System of the ALICE experiment (Elsevier, 2017-11) The ALICE experiment will undergo a major upgrade during the next LHC Long Shutdown scheduled in 2019–20 that will enable a detailed study of the properties of the QGP, exploiting the increased Pb-Pb luminosity ... Neutral meson production and correlation with charged hadrons in pp and Pb-Pb collisions with the ALICE experiment at the LHC (Elsevier, 2017-11) Among the probes used to investigate the properties of the Quark-Gluon Plasma, the measurement of the energy loss of high-energy partons can be used to put constraints on energy-loss models and to ultimately access medium ... Direct photon measurements in pp and Pb-Pb collisions with the ALICE experiment (Elsevier, 2017-11) Direct photon production in heavy-ion collisions provides a valuable set of observables to study the hot QCD medium. The direct photons are produced at different stages of the collision and escape the medium unaffected. ... Azimuthally differential pion femtoscopy relative to the second and third harmonic in Pb-Pb 2.76 TeV collisions from ALICE (Elsevier, 2017-11) Azimuthally differential femtoscopic measurements, being sensitive to spatio-temporal characteristics of the source as well as to the collective velocity fields at freeze-out, provide very important information on the ... Enhanced production of multi-strange hadrons in high-multiplicity proton-proton collisions (Nature Publishing Group, 2017) At sufficiently high temperature and energy density, nuclear matter undergoes a transition to a phase in which quarks and gluons are not confined: the quark–gluon plasma (QGP). Such an exotic state of strongly interacting ...
D-meson nuclear modification factor and elliptic flow measurements in Pb–Pb collisions at $\sqrt{s_{NN}}$ = 5.02 TeV with ALICE at the LHC (Elsevier, 2017-11) ALICE measured the nuclear modification factor ($R_{AA}$) and elliptic flow ($v_{2}$) of D mesons ($D^{0}$, $D^{+}$, $D^{*+}$ and $D_{s}^{+}$) in semi-central Pb–Pb collisions at $\sqrt{s_{NN}} =5.02$ TeV. The increased ... Charmonium production in Pb–Pb and p–Pb collisions at forward rapidity measured with ALICE (Elsevier, 2017-11) The ALICE collaboration has measured the inclusive charmonium production at forward rapidity in Pb–Pb and p–Pb collisions at $\sqrt{s_{NN}}=5.02$ TeV and $\sqrt{s_{NN}}=8.16$ TeV, respectively. In Pb–Pb collisions, the J/$\psi$ and $\psi$(2S) nuclear ... Net-baryon fluctuations measured with ALICE at the CERN LHC (Elsevier, 2017-11) First experimental results are presented on event-by-event net-proton fluctuation measurements in Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV, recorded by the ALICE detector at the CERN LHC. The ALICE detector is well ... Investigations of anisotropic collectivity using multi-particle correlations in pp, p-Pb and Pb-Pb collisions (Elsevier, 2017-11) Two- and multi-particle azimuthal correlations have proven to be an excellent tool to probe the properties of the strongly interacting matter created in heavy-ion collisions. Recently, the results obtained for multi-particle ... $\phi$ meson production in Pb-Pb collisions at $\sqrt s_{NN}$=5.02 TeV with ALICE at the LHC (Elsevier, 2017-11) Strangeness production is a key tool to understand the properties of the medium formed in heavy-ion collisions: an enhanced production of strange particles was early proposed as one of the signatures of the Quark-Gluon ...
The Lane-Riesenfeld algorithm subdivides the control polygon of a B-spline to create a new control polygon with the same limit spline. It's made up of two steps: first, duplicating all of the control points $P_i$ into $P^\prime_{2i}$ and $P^\prime_{2i+1}$; then, moving each point to the midpoint between it and the next point, so $P^\prime_i \rightarrow \frac{1}{2} P^\prime_i + \frac{1}{2}P^\prime_{i+1}$. These steps are illustrated in the figure: On the left is an initial control polygon. In the middle, I've duplicated the vertices. On the right, the vertices have moved halfway to the next vertex. Notice that only half of the vertices move in the first movement step: this is because $P^\prime_{2i} = P^\prime_{2i+1},$ but $P^\prime_{2i+1} \neq P^\prime_{2i+2}.$ These eight vertices are a refined control polygon for the linear B-spline defined by the initial four vertices. Now, we can do a second movement step (without another duplication): On the left, we've moved each vertex to the midpoint between it and its neighbour; note that all of the vertices move this time (since none are in the same position). On the right, we have drawn the polygon with these eight vertices. This is a refined control polygon for the quadratic B-spline defined by the initial four vertices. You may also recognize this as the same polygon you get by Chaikin corner-cutting, which also gets you the quadratic B-spline. Now we can perform more duplicate-move-move steps to further refine the polygon, and thus more closely approximate the quadratic B-spline curve: Here's an animation of this process: If, instead, we do a third movement step without a duplication step (that is, one duplication followed by three successive movements) we get a refined control polygon for the cubic B-spline defined by the initial four vertices: An animation of this process: In general, doing $k$ movement steps after each duplication gives us the refined polygon for the degree-$k$ (hence $C^{k-1}$) B-spline. EDIT: Added animations.
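The two steps are short enough to write out. A minimal Python sketch, assuming a closed control polygon of 2D points as in the figures (the function names are mine):

```python
def duplicate(points):
    """Step 1: double every control point."""
    return [p for p in points for _ in range(2)]

def average(points):
    """Step 2: move each point to the midpoint of itself and its successor
    (closed polygon, so the last point averages with the first)."""
    n = len(points)
    return [((points[i][0] + points[(i + 1) % n][0]) / 2.0,
             (points[i][1] + points[(i + 1) % n][1]) / 2.0)
            for i in range(n)]

def lane_riesenfeld(points, degree, rounds=1):
    """One duplication followed by `degree` averaging steps, repeated."""
    for _ in range(rounds):
        points = duplicate(points)
        for _ in range(degree):
            points = average(points)
    return points

# degree 2 reproduces Chaikin corner cutting: new points sit at 1/4 and
# 3/4 along each edge of the original polygon
square = [(0.0, 0.0), (4.0, 0.0), (4.0, 4.0), (0.0, 4.0)]
refined = lane_riesenfeld(square, degree=2)
assert refined[0] == (1.0, 0.0) and refined[1] == (3.0, 0.0)
```

Calling `lane_riesenfeld(square, degree=3)` gives the cubic refinement described above, and increasing `rounds` repeats the duplicate-and-smooth cycle to approach the limit curve.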
Now showing items 1-1 of 1 Production of charged pions, kaons and protons at large transverse momenta in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV (Elsevier, 2014-09) Transverse momentum spectra of $\pi^{\pm}, K^{\pm}$ and $p(\bar{p})$ up to $p_T$ = 20 GeV/c at mid-rapidity, |y| $\le$ 0.8, in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV have been measured using the ALICE detector ...
It sounds like both Griffiths and your lecturer are trying to come up with a mechanical rule for what you should say about states and measurement results instead of an explanation. I could be wrong about that since I have only your report to go on, but the appropriate explanation follows. Say you have a state $\psi = a\phi_1 + b\phi_2$, where $\phi_1,\phi_2$ are eigenstates of an observable $\hat{A}$. A measurement of $\hat{A}$ transfers information about $\hat{A}$ from the system being measured to a measuring instrument in some ready state $\alpha_0$. An interaction that does this would do the following $\phi_j\alpha_0 \to \phi_j\alpha_j$. The joint state of the measuring apparatus and system would then be$$a\phi_1\alpha_1 + b\phi_2\alpha_2.$$ After the measurement the measurement device is present in two different versions: one for each possible outcome. These two versions will be unable to undergo interference as a result of decoherence: https://arxiv.org/abs/1212.3245. As a result, each version of the measurement instrument only sees one result. We can say that the relative states of the measurement instrument and particle are $\phi_1\alpha_1$ for one version and $\phi_2\alpha_2$ for the other. We can also say that in each of those relative states the observable has the corresponding eigenvalue. This is commonly taken to mean you can then ignore quantum mechanics, but decoherent systems can carry quantum information, so this assumption is wrong, see http://arxiv.org/abs/quant-ph/9906007 and http://arxiv.org/abs/1109.6223. In a particular relative state, you can say the state of the system is $\phi_1$ say and that the measurement of the observable had a particular outcome. Saying that the measurement had a particular outcome or the system has a particular state outside that context is wrong. The lecturer and the textbook author are both wrong if they told you anything else.
Many users have tried to do a seasonal decomposition with a short time series, and hit the error "Series has less than two periods". The problem is that the usual methods of decomposition (e.g., decompose and stl) estimate seasonality using at least as many degrees of freedom as there are seasonal periods. So you need at least two observations per seasonal period to be able to distinguish seasonality from noise. However, it is possible to use a linear regression model to decompose a time series into trend and seasonal components, and then some smoothness assumptions on the seasonal component allow a decomposition with fewer than two full years of data. This problem came up on crossvalidated.com recently, with the following data set.

df <- ts(c(2735.869, 2857.105, 2725.971, 2734.809, 2761.314, 2828.224, 2830.284,
           2758.149, 2774.943, 2782.801, 2861.970, 2878.688, 3049.229, 3029.340,
           3099.041, 3071.151, 3075.576, 3146.372, 3005.671, 3149.381),
         start=c(2016,8), frequency=12)

We can approximate the seasonal pattern using Fourier terms with a few parameters.

library(forecast)
library(ggplot2)
decompose_df <- tslm(df ~ trend + fourier(df, 2))

Then the usual decomposition plot can be constructed.

trend <- coef(decompose_df)[1] + coef(decompose_df)['trend']*seq_along(df)
components <- cbind(
  data = df,
  trend = trend,
  season = df - trend - residuals(decompose_df),
  remainder = residuals(decompose_df))
autoplot(components, facet=TRUE)

Here the model is \[ y_t = \alpha + \beta t + \sum_{k=1}^K \left[ \gamma_k \sin\left(\textstyle\frac{2\pi t k}{m}\right) + \psi_k \cos\left(\textstyle\frac{2\pi t k}{m}\right) \right] + \varepsilon_t, \] where \(1 \le K \le m/2\). The larger the value of \(K\), the more complicated the seasonal pattern that can be estimated. If \(K=m/2\), then we are using \(m-1\) degrees of freedom for the seasonal component. (The last sin term will be dropped as \(\sin(\pi t)=0\) for all \(t\).)
For a short time series, we can use a small value for \(K\), which can be selected by minimizing the AICc or leave-one-out CV statistic (both computed using CV()). In the above example, the AICc is minimized with \(K=1\) and CV is minimized with \(K=2\). The trend could also be made nonlinear, by replacing trend with a polynomial or spline (although both will use up more degrees of freedom, and may not be justified with short time series). As with other methods of decomposition, it is easy enough to remove the seasonal component to get the seasonally adjusted data.

adjust_df <- df - components[,'season']
autoplot(df, series="Data") +
  autolayer(adjust_df, series="Seasonally adjusted")
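The regression-with-Fourier-terms idea is not tied to the forecast package. The following Python snippet (an illustrative re-implementation with numpy on synthetic data, not the R code above) builds the same design matrix — intercept, linear trend, and \(K\) pairs of Fourier terms — and recovers trend, season, and remainder by least squares:

```python
import numpy as np

def fourier_decompose(y, m=12, K=1):
    """Decompose y into linear trend + Fourier seasonal terms by least squares."""
    t = np.arange(len(y), dtype=float)
    cols = [np.ones_like(t), t]                      # alpha + beta*t
    for k in range(1, K + 1):                        # gamma_k, psi_k terms
        cols.append(np.sin(2 * np.pi * k * t / m))
        cols.append(np.cos(2 * np.pi * k * t / m))
    X = np.column_stack(cols)
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    trend = X[:, :2] @ beta[:2]
    season = X[:, 2:] @ beta[2:]
    remainder = y - trend - season
    return trend, season, remainder

# 20 months of synthetic data: linear trend plus one annual harmonic plus noise
rng = np.random.default_rng(0)
t = np.arange(20)
y = 2700 + 20 * t + 50 * np.sin(2 * np.pi * t / 12) + rng.normal(0, 5, 20)
trend, season, remainder = fourier_decompose(y, m=12, K=1)
```

With fewer than two full years of data the trend and seasonal columns are not orthogonal, but as long as the seasonal pattern is smooth (small \(K\)) the regression still separates them.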
Thursday, 30 March 2017 In a recent Science paper, Eric Loken and Andrew Gelman discuss an interesting statistical nuance relevant to measurement error. Loken and Gelman begin by pointing out that statisticians/econometricians/people with data and computers have the mindset that measurement error in one of our covariates biases our point estimates toward zero—the infamous attenuation bias. Pushing this attenuation-based mindset further, researchers infer that a statistically significant point estimate would be even larger in the absence of measurement error. Loken and Gelman argue that this second logical leap is not, in fact, logical. Why? While attenuation bias is a real thing, once we begin conditioning on statistically significant results, we add a second dimension that we must consider: power. In settings with low power—e.g., small \(N\) and substantial variance—in order to "achieve" a statistically significant result, the point estimate must be quite large. Thus, the attenuation effect may be reversed by a low-power effect. The big idea here is that conditioning on significant results—a very real thing in the presence of p-hacking and publication bias—changes the behavior of something we all thought we understood—measurement error and attenuation bias. Loken and Gelman's takeaway? A key point for practitioners is that surprising results from small studies should not be defended by saying that they would have been even better with improved measurement. I thought it would be fun to replicate the results in a simple simulation.
# Packages
library(data.table)
library(magrittr)
library(parallel)
library(ggplot2)
library(viridis)
# My ggplot2 theme
theme_ed <- theme(
  legend.position = "bottom",
  panel.background = element_rect(fill = NA),
  # panel.border = element_rect(fill = NA, color = "grey75"),
  axis.ticks = element_line(color = "grey95", size = 0.3),
  panel.grid.major = element_line(color = "grey95", size = 0.3),
  panel.grid.minor = element_line(color = "grey95", size = 0.3),
  legend.key = element_blank())
# Set seed
set.seed(12345)

One of Loken and Gelman's big points is that sample size (specifically, power) matters for attenuation bias, once we start conditioning on statistically significant results. So let's write a function that generates a sample of size n for a simple linear regression. The function will also take as inputs the true intercept alpha and the true slope coefficient beta. For simplicity, we will generate each variable from a standard normal distribution. Finally, the function will accept a third parameter gamma that dictates the degree of measurement error in our covariate. Formally, the population data-generating process is \[ y_i = \alpha + \beta x_i + \varepsilon_i \] but instead of observing \(x_i\), the researcher observes \[ w_i = x_i + \gamma u_i \] where \(\gamma u_i\) is the "noise" that we add to \(x_i\) to generate measurement error. Again, we will assume \(u_i\) comes from a standard normal distribution.

sim_data <- function(n, alpha, beta, gamma) {
  # Generate x, e (disturbance), and u
  gen_dt <- data.table(
    x = rnorm(n),
    e = rnorm(n),
    u = rnorm(n)
  )
  # Calculate w and y
  gen_dt[, `:=`(
    w = x + gamma * u,
    y = alpha + beta * x + e
  )]
  # Return the data
  return(gen_dt)
}

In the simulation, we will regress \(\mathbf{y}\) on \(\mathbf{w}\) (and a column of ones) and then calculate the standard errors, t statistics, and p-values. The base installation's lm() function works just fine in this context. We'll ignore inference/estimates for the intercept.
sim_reg <- function(data) {
  # Run the regression without measurement error
  reg_clean <- lm(y ~ x, data) %>%
    summary() %>% coef() %>%
    # Grab the second row
    extract(2, ) %>%
    # Force to matrix then to data.table
    matrix(nrow = 1) %>% data.table()
  # Run the regression with measurement error
  reg_noise <- lm(y ~ w, data) %>%
    summary() %>% coef() %>%
    # Grab the second row
    extract(2, ) %>%
    # Force to matrix then to data.table
    matrix(nrow = 1) %>% data.table()
  reg_dt <- rbindlist(list(reg_clean, reg_noise))
  # Set names
  setnames(reg_dt, c("est_coef", "se", "t_stat", "p_value"))
  # Add column for type of regression
  reg_dt[, type := c("clean", "noise")]
  # Return reg_dt
  return(reg_dt)
}

Now we'll make a wrapper function that applies sim_data() and sim_reg(), effectively running a single iteration of the simulation.

sim_one <- function(n, alpha, beta, gamma) {
  # Run the regression function on simulated data
  one_dt <- sim_reg(data = sim_data(n, alpha, beta, gamma))
  # Add the sample size
  one_dt[, n := n]
  # Return the results
  return(one_dt)
}

And now a function to run the simulation n_iter times for the given set of parameters (n_iter times for each sample size n).

sim_run <- function(n_iter, n, alpha, beta, gamma, n_cores = 4) {
  # Run n_iter
  sim_dt <- mclapply(
    X = rep(n, each = n_iter),
    FUN = sim_one,
    alpha = alpha, beta = beta, gamma = gamma,
    mc.cores = n_cores) %>%
    rbindlist()
  # Record the parameter values
  sim_dt[, `:=`(alpha = alpha, beta = beta, gamma = gamma)]
  # Return the simulation results
  return(sim_dt)
}

Now let's actually run the simulation. Let's start with three sample sizes: 30, 100, 1000.
the_sim <- sim_run(
  n_iter = 1e4,
  n = c(30, 100, 1e3),
  alpha = 10,
  beta = 1,
  gamma = 0.25,
  n_cores = 4)

A few quick changes for plotting/description:

# Add a variable for a significant result (0.05 level)
the_sim[, `:=`(
  sig = p_value < 0.05,
  n_fac = factor(n, ordered = T)
)] %>% invisible()

Examine the results:

# Compare means by sample size
the_sim[type == "noise", list(
  mean_coef = mean(est_coef),
  pct_sample = .N/1e4
), by = n]

##       n mean_coef pct_sample
## 1:   30 0.9436209          1
## 2:  100 0.9407640          1
## 3: 1000 0.9414561          1

# Compare means by sample size and significance
the_sim[type == "noise", list(
  mean_coef = mean(est_coef),
  pct_sample = .N/1e4
), by = .(n, sig)]

##       n   sig mean_coef pct_sample
## 1:   30  TRUE 0.9472367     0.9932
## 2:   30 FALSE 0.4155048     0.0068
## 3:  100  TRUE 0.9407640     1.0000
## 4: 1000  TRUE 0.9414561     1.0000

# Compare pct. of coef. > 1, by sample size and significance
the_sim[type == "noise", list(
  pct_exceed = mean(est_coef > 1),
  pct_sample = .N/1e4
), by = .(n, sig)]

##       n   sig pct_exceed pct_sample
## 1:   30  TRUE  0.3810914     0.9932
## 2:   30 FALSE  0.0000000     0.0068
## 3:  100  TRUE  0.2763000     1.0000
## 4: 1000  TRUE  0.0331000     1.0000

Let's plot the distribution of significant coefficient estimates.

ggplot(the_sim[(sig == T) & (type == "noise")], aes(est_coef)) +
  geom_density(aes(fill = n_fac), alpha = 0.7, size = 0.1, color = "grey50") +
  geom_vline(xintercept = 1, color = "grey30", linetype = 2) +
  xlab(expression(paste(widehat(beta), " (estimated coefficient)"))) +
  ylab("Density") +
  ggtitle("Distribution of estimated coefficients, by sample size",
          subtitle = "in the presence of measurement error") +
  theme_ed +
  scale_fill_viridis("Sample size:", discrete = T, direction = -1,
                     begin = 0.05, end = 0.90)

We can also make a nice comparison of the estimated coefficient compared with the "ideal" estimate (without measurement error), following Loken and Gelman's figure.
# Create a 'paired' data.table
the_sim[, id := rep(1:(.N/2), each = 2)] %>% invisible()
clean_dt <- the_sim[type == "clean"]
noise_dt <- the_sim[type == "noise"]
pair_dt <- merge(
  x = clean_dt,
  y = noise_dt,
  by = "id",
  all = T,
  suffixes = c("_clean", "_noise"))
# Clean up
rm(clean_dt, noise_dt)
gc() %>% invisible()
# Plot
ggplot(pair_dt[sig_noise == T], aes(x = est_coef_clean, est_coef_noise)) +
  geom_point(aes(color = n_fac_noise), size = 0.25, alpha = 0.75, shape = 1) +
  stat_function(fun = function(x) x, size = 0.4) +
  xlab("\"Ideal\" estimate (no measurement error)") +
  ylab("Estimate with measurement error") +
  ggtitle("Comparison of estimates with and without measurement error",
          subtitle = "by sample size") +
  theme_ed +
  scale_color_viridis("Sample size:", discrete = T, direction = -1,
                      begin = 0.05, end = 0.90) +
  guides(colour = guide_legend(
    override.aes = list(size = 4, stroke = 1)))

We again see Loken and Gelman's point: for smaller sample sizes, (conditional on a significant result) we frequently see point estimates from the "measured with error" regression that exceed the corresponding point estimates from regressions measured without error—not exactly the attenuation story we typically have in mind. Now let's run the simulation for a bunch of sample sizes. Note: You may want to drop the number of iterations per sample size to 1,000 (from 10,000) so it finishes in a reasonable amount of time.

big_sim <- sim_run(
  n_iter = 1e4,
  n = c(10, seq(50, 3e3, 50)),
  alpha = 10,
  beta = 1,
  gamma = 0.25,
  n_cores = 4)

To replicate Loken and Gelman's final figure, we need to generate a quick summary table (after pairing the results).
In this table, for each sample size, we calculate the percent of studies in which the estimate with measurement error exceeds the "ideal" (error-free) estimate.

# Add ID column
big_sim[, id := rep(1:(.N/2), each = 2)] %>% invisible()
# Separate the 'clean' and 'noise' observations
big_clean <- big_sim[type == "clean"]
big_noise <- big_sim[type == "noise"]
# Merge
big_paired <- merge(
  x = big_clean,
  y = big_noise,
  by = "id",
  all = T,
  suffixes = c("_clean", "_noise"))
# The summary
big_summary <- big_paired[, list(
  pct_exceed = mean(est_coef_noise > est_coef_clean)
), by = n_noise]
setnames(big_summary, "n_noise", "n")
# Clean up
rm(big_clean, big_noise, big_paired)
gc() %>% invisible()

Now we can create the figure.

ggplot(big_summary, aes(n, pct_exceed)) +
  geom_line(color = "grey50") +
  geom_point(aes(color = n), size = 2) +
  # geom_smooth() +
  ggtitle("The effect of sample size on attenuation bias",
          subtitle = paste("Percent of studies where measurement-error",
                           "based estimate exceeds ideal estimate")) +
  xlab("Sample size") +
  ylab("Percent") +
  theme_ed +
  theme("legend.position" = "none") +
  scale_color_viridis("Sample size:", direction = -1, end = 0.97)

This figure does not quite match the figure from Loken and Gelman. Why? One important observation that I don't think Loken and Gelman really discuss: their results vary with the parameters of the simulation: the treatment effect (\(\beta\)), the degree of measurement error \(\gamma\) (which maps to the variance of the measurement error), and the variance of the disturbance \(\varepsilon\). Loken and Gelman make a pretty interesting observation: attenuation bias may not have the same effect in small samples that it has in large samples.[1] I think the big picture is that some of our econometric/statistical intuition changes in the presence of p-hacking or publication bias—the theory behind our estimation and inference typically does not take into account p-hacking or publication bias.

[1] As discussed above, this result depends upon several parameters.
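The core mechanism also fits in a few lines of numpy (an illustrative sketch, not the R code above): regress y on the error-ridden covariate w, keep the slopes whose t-statistics clear an approximate 5% cutoff, and compare small and large samples.

```python
import numpy as np

def sim_slopes(n, iters=2000, beta=1.0, gamma=0.25, seed=42):
    """OLS slope of y on w = x + gamma*u and its t-statistic, vectorized over iterations."""
    rng = np.random.default_rng(seed)
    x = rng.normal(size=(iters, n))
    e = rng.normal(size=(iters, n))
    u = rng.normal(size=(iters, n))
    y = beta * x + e              # true model (intercept omitted for simplicity)
    w = x + gamma * u             # observed covariate with measurement error
    wc = w - w.mean(axis=1, keepdims=True)
    yc = y - y.mean(axis=1, keepdims=True)
    sxx = (wc ** 2).sum(axis=1)
    b = (wc * yc).sum(axis=1) / sxx
    resid = yc - b[:, None] * wc
    s2 = (resid ** 2).sum(axis=1) / (n - 2)
    t = b / np.sqrt(s2 / sxx)
    return b, t

b_small, t_small = sim_slopes(n=30)
b_big, t_big = sim_slopes(n=1000)

# Condition on significance (1.96 as an approximate 5% cutoff)
sig_small = b_small[np.abs(t_small) > 1.96]
sig_big = b_big[np.abs(t_big) > 1.96]

# Attenuation: the slope's plim is beta / (1 + gamma^2) ~ 0.941, below 1.
# Conditioning on significance, small samples overshoot 1 far more often.
pct_small = (sig_small > 1).mean()
pct_big = (sig_big > 1).mean()
```

The large-sample mean sits near the attenuated value \(\beta/(1+\gamma^2)\), while among significant small-sample estimates a sizable share exceeds the true \(\beta=1\) — the reversal the post describes.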
(Sorry, I was asleep at that time but forgot to log out, hence the apparent lack of response.) Yes you can (since $k=\frac{2\pi}{\lambda}$). To convert from path difference to phase difference, multiply by $k$; see this PSE post for details: http://physics.stackexchange.com/questions/75882/what-is-the-difference-between-phase-difference-and-path-difference
There have been a number of cases where efficient hidden subgroup algorithms have been found for specific non-Abelian groups with very specific structures. Why haven't we found any efficient quantum algorithm to find a hidden subgroup in a symmetric group even when the structure of the hidden subgroup is heavily restricted (e.g. the hidden subgroup could be a very special direct or semidirect product)? From what I understand, it is partially because we don't have any techniques currently that take advantage of structure of the hidden subgroup itself. Weak Fourier sampling solves the problem whenever the hidden subgroup is normal (but this is not a property of the subgroup itself - analogous to being a direct product etc - but rather how the subgroup sits within the larger group), but that's no help for the symmetric group because its only normal subgroup is the alternating group (for $n \geq 5$). From the examples of [1] - which shows that in the symmetric group strong Fourier sampling produces distributions for the hidden trivial group vs a hidden random involution that are exponentially close - and [2], it would seem that there may be some hope if the hidden subgroup is promised to be relatively large. But you would need to either be able to show that Fourier sampling succeeds in this case, or to come up with a different method that (likely) really took advantage of the largeness of the hidden subgroup. Moore, Russell, and Sniady [3] also showed that the other relatively successful method - a Kuperberg-style sieve - doesn't work efficiently in a group very close to the symmetric group (namely, $S_n \wr S_2 = (S_n \times S_n) \rtimes S_2$), to distinguish a hidden trivial group from a hidden involution. [1] Moore, C., A. Russell, and L. J. Schulman. The Symmetric Group Defies Strong Fourier Sampling. SIAM J. Comput. 37, p. 1842, 2008. Preliminary version in FOCS 2005. (arXiv version) [2] Grigni, M., L. J. Schulman, M. Vazirani, and U. Vazirani. 
Quantum Mechanical Algorithms for the Nonabelian Hidden Subgroup Problem. Combinatorica 24, p. 137, 2004. Preliminary version in STOC 2001. [3] Moore, Russell, and Sniady. On the impossibility of a quantum sieve algorithm for graph isomorphism. SIAM J. Comput. 39 (2010), no. 6, 2377–2396. Preliminary version in STOC 2007. Exact classical bounds are known (https://oeis.org/A186202): you only have to sample certain prime cycles, as they form a minimum dominating set on $S_n$ under a detection relation. This is smaller than $n!$, but still about the order of $p!$, where $p$ is the largest prime less than or equal to $n$.
I want to estimate the distribution/variance of the ratio $WTP=\frac{\gamma}{\alpha}$. The problem is, both $\gamma$ and $\alpha$ are random coefficients. In other words, both are random to begin with (please note that here their true values are random; this is different from the usual regression models where the true values are fixed constants but the estimators are random). For those interested, the model in mind is a random coefficients model (BLP). Assume $\gamma \sim F(\mu_{\gamma},\sigma_{\gamma})$ and $\alpha \sim G(\mu_{\alpha},\sigma_{\alpha})$. Assume that $\alpha$ is bounded away from zero so that we do not worry about 0 in the denominator. When we estimate the model, the estimation package produces $\hat{\mu}_{\gamma}$, $\hat{\sigma}_{\gamma}$, $\hat{\mu}_{\alpha}$, and $\hat{ \sigma}_{\alpha}$, plus their standard errors $\mbox{se}(\hat{\mu}_{\gamma})$, $\mbox{se}(\hat{\sigma}_{\gamma})$, $\mbox{se}(\hat{\mu}_{\alpha})$, and $\mbox{se}(\hat{\sigma}_{\alpha})$. For simplicity, assume all estimators follow normal distributions. If $\gamma$ and $\alpha$ are not random coefficients, once I estimate the parameters I can simply use the Delta method or the Krinsky-Robb method to obtain the variance of WTP. When they are random coefficients, things get complicated. To obtain the variance or distribution of WTP, Daly, Stephane Hess, and Train (2011) and Rischatsch (2009) seem to suggest taking random draws $\gamma \sim F(\hat{\mu}_{\gamma},\hat{\sigma}_{\gamma})$ and $\alpha \sim G(\hat{\mu}_{\alpha},\hat{\sigma}_{\alpha})$. But I think this is wrong, because this completely ignores the sampling error in the estimated parameters of the distributions. In other words, this procedure ignores $\mbox{se}(\hat{\mu}_{\gamma})$, $\mbox{se}(\hat{\sigma}_{\gamma})$, $\mbox{se}(\hat{\mu}_{\alpha})$, and $\mbox{se}(\hat{\sigma}_{\alpha})$ (by assuming they are all zeros). 
To me, a "correct" simulation approach would require some sort of double randomization:

1. Randomly draw $\mu_{\gamma}^m$ from $N(\hat{\mu}_{\gamma},\mbox{se}(\hat{\mu}_{\gamma}))$ and ${\sigma}_{\gamma}^m$ from $N(\hat{\sigma}_{\gamma},\mbox{se}(\hat{\sigma}_{\gamma}))$.
2. Randomly draw $\gamma_i$ from $F(\mu_{\gamma}^m,\sigma_{\gamma}^m)$. Do the same for $\alpha_i$.
3. Obtain the ratio.
4. Repeat this process to get a distribution of WTP.

The "typical" simulation approach to find the distribution of WTP seems to ignore step 1 and thus underestimates the variance of WTP by treating the estimated quantities as true ones. Can anybody shed light on whether my approach is right or wrong? And are the approaches in the two references right or wrong? Thanks in advance.
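The two procedures are easy to compare in a quick Monte Carlo. The sketch below uses made-up parameter values (placeholders, not real BLP output): the plug-in approach draws only from $F(\hat{\mu}_{\gamma},\hat{\sigma}_{\gamma})$ and $G(\hat{\mu}_{\alpha},\hat{\sigma}_{\alpha})$, while the double-randomization approach first redraws the estimated parameters from their sampling distributions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical estimates and standard errors (illustrative placeholders)
mu_g, sig_g, mu_a, sig_a = 2.0, 0.5, 4.0, 0.5
se_mu_g, se_sig_g, se_mu_a, se_sig_a = 0.3, 0.1, 0.3, 0.1

M, N = 400, 500  # outer (parameter) draws and inner (coefficient) draws

# Plug-in: treat the estimated distribution parameters as the truth
g = rng.normal(mu_g, sig_g, M * N)
a = np.abs(rng.normal(mu_a, sig_a, M * N)) + 1.0  # crude way to bound alpha away from 0
wtp_plugin = g / a

# Double randomization: redraw (mu, sigma) from their sampling distributions first
wtp_double = []
for _ in range(M):
    mg = rng.normal(mu_g, se_mu_g)
    sg = abs(rng.normal(sig_g, se_sig_g))
    ma = rng.normal(mu_a, se_mu_a)
    sa = abs(rng.normal(sig_a, se_sig_a))
    g = rng.normal(mg, sg, N)
    a = np.abs(rng.normal(ma, sa, N)) + 1.0
    wtp_double.append(g / a)
wtp_double = np.concatenate(wtp_double)
```

With nonzero standard errors the double-randomization distribution is strictly wider, which is exactly the question's point: the plug-in procedure sets the sampling error of the estimated parameters to zero.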
I am trying to use a finite difference scheme to numerically solve sixth order parabolic equations such as \begin{equation} u_t = u_{xxxxxx} \end{equation} with symmetry conditions \begin{equation} u_x(0,t) = u_{xxx}(0,t) = u_{xxxxx}(0,t) = 0. \end{equation} On the vast majority of grid points, the central difference scheme \begin{equation} u_{xxxxxx} \approx \frac{1}{h^6}\left(u(x+3h) -6u(x+2h) + 15u(x+h) - 20u(x)+15u(x-h) - 6u(x-2h)+u(x-3h)\right) \end{equation} is second order accurate ($O(h^2)$) and is perfectly fine for my purposes; however, I am unsure how to evaluate this approximate sixth derivative at the first three grid points, $u_1, u_2$ and $u_3$. Extending the domain with 'fictional' grid points and using them to impose the symmetry conditions, e.g. \begin{equation} u_x \approx \frac{1}{2h}(u_2 - u_0) = 0,\quad u_{xxx} \approx \frac{1}{2h^3}(u_3 -2u_2 + 2u_0 - u_{-1}) = 0,\quad u_{xxxxx} \approx \frac{1}{2h^5}(u_4 -4u_3 + 5u_2 -5u_0 +4u_{-1} -u_{-2})=0, \end{equation} allows us to substitute the relations $u_0 = u_2$, $u_{-1}=u_3$, $u_{-2}=u_4$ into the central difference approximation around $u_1$, giving \begin{equation} \left.u_{xxxxxx}\right|_{u_1} \approx \frac{1}{h^6}\left(2u_4 -12u_3 +30u_2 -20u_1\right). \end{equation} However, trying to make the same substitutions for the finite difference of $u_2$ and $u_3$ has thrown up some truly bizarre and pathological behaviour in my simulations around the origin. I have also tried ignoring any symmetry conditions at these two points and simply using sixth order forward difference approximations, with scarcely better results. How to proceed? Should I try somehow mixed finite difference approximations, and if so how do I factor the symmetry conditions into them? Thanks very much.
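As a sanity check on the boundary stencil at $u_1$, the following Python snippet (an illustrative verification, not part of the original question) evaluates it on $u(x)=\cos x$, which satisfies the symmetry conditions at $x=0$ and has $u^{(6)}(0)=-\cos(0)=-1$:

```python
import numpy as np

h = 0.05
x = np.arange(4) * h          # grid points u_1..u_4 at x = 0, h, 2h, 3h
u = np.cos(x)                 # even function: u_x(0) = u_xxx(0) = u_xxxxx(0) = 0

# Boundary stencil at u_1 after eliminating the ghost points via symmetry
# (u_0 = u_2, u_{-1} = u_3, u_{-2} = u_4):
d6_boundary = (2*u[3] - 12*u[2] + 30*u[1] - 20*u[0]) / h**6

# d6_boundary should approximate u^{(6)}(0) = -1 with O(h^2) error
```

A Taylor expansion of the stencil confirms the constant, \(h^2\), and \(h^4\) terms cancel exactly, so any residual comes from the \(O(h^2)\) truncation error plus cancellation rounding in the \(h^{-6}\) division; the same kind of check on candidate stencils at \(u_2\) and \(u_3\) is a cheap way to catch an algebra slip before blaming the time integrator.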
Note: While I am interested in typesetting the Euler constant raised to the power of a matrix, I am more interested in understanding why the second version inserts a square dot, and why the third doesn't compile. First, what works: I have adapted the answer here (defining a new command). Second, when I try incorporating more of the suggestions, I get an odd square dot below the "g" in the word "beginning." Third, using scalebox directly from this answer, my code fails to compile at all. I am missing something obvious, but would appreciate an explanation. [Removing the first \end{document} command reveals the compilation error. It wants math mode.] Code:

\documentclass[11pt]{amsart}
\usepackage{amsmath,amssymb}
\usepackage{graphicx}
\newcommand\scalemath[2]{\scalebox{#1}{\mbox{\ensuremath{\displaystyle #2}}}}
% The Euler number, `e'.
\newcommand{\eu}{\ensuremath{\mathrm{e}}}
\newcommand{\du}{\ensuremath{\mathop{}\!\textnormal{\slshape d}}} % the command \du gives a slant roman differential operator, distinguished from either an italic math variable d or a math operator with d.
\begin{document}
\noindent Here is what I get without messing with things:
$\eu^{\left[\begin{array}{cc} 1&2 \\ 3&4 \end{array}\right]}$. Obviously, not good.
The following two versions compile, and the matrix has dimensions similar to those
of the $\frac{\du}{\du x}$ operator from the previous term. Optically, however,
each looks horrible. \\
\[
  \eu^{\pi}, \eu^{\sqrt[3]{2}}, \eu^i, e^{3-4i}, \eu^{\frac{\du}{\du x}},
  {\eu^{\scalemath{0.4}{%
    {\left[ \begin{array}{cc} a&b \\ c&d \end{array} \right]}
  }}};
\]
[Inserting frames and making more adjustments looks like this, which I \emph{think}
improves the look of the matrix, but introduces a square dot at the beginning of
the line. I can't figure out how to get rid of it.]
\begin{frame} % inserts square dot that I don't understand
\footnotesize
\setlength{\arraycolsep}{2.5pt} % default: 5pt
\medmuskip = 1mu % default: 4mu plus 2mu minus 4mu
\[
  \eu^{\pi}, \eu^{\sqrt[3]{2}}, \eu^i, e^{3-4i}, \eu^{\frac{\du}{\du x}},
  {\eu^{\scalemath{0.4}{%
    {\left[ \begin{array}{cc} a&b \\ c&d \end{array} \right]}
  }}};
\]
\end{frame}
\end{document} % on removing this line, the last example fails to compile

Why does scalemath seem to work, but scalebox not?

\[
\eu^{\scalebox{0.4}{%
  \left[ \begin{array}{cc} a&b \\ c&d \end{array} \right]}
}
\]
\end{document}
This is not an answer, but maybe some information which could be useful: In another post, it has been also noticed that the commutators of the $R$-symmetry generators with supercharge generators are: $[R^a_b,Q^c_{\alpha}]=\delta^c_bQ^a_{\alpha}-\frac{1}{4}\delta^a_bQ^c_{\alpha}$ So, taking the trace (on $a,b$), with $\mathcal N=4$, gives a null commutator, so the $U(1)$ symmetry may be factored out. In "Pierre Ramond, Group Theory, A Physicist's Survey, Cambridge, p $218$", it is stated that the superalgebra $su(n|m)$ may be written in terms of hermitian matrices of the form: $$\begin {pmatrix}SU(n) &( \textbf n, \bar{\textbf m})\\( \bar {\textbf n}, \textbf m)&SU(m)\end {pmatrix} + \begin {pmatrix} n&0\\0&m\end {pmatrix}$$ where the diagonal matrix generates the $U(1)$ (so the even elements form the Lie algebra of $SU(n) \times SU(m) \times U(1)$). It is stated that: "when $n=m$, the supertrace of the diagonal matrix vanishes, and the $U(1)$ decouples from the algebra". This is also true for non-compact versions of the superalgebra, for instance $su(2,2|4)$ (here $n=m=4$). [EDIT] Correction: Following Olof's comment below, the $U(1)$ here is the (zero) central charge, so the symmetry group is really $psu(2,2|4)$ (see Olof's answer). It is not the $U(1)$ necessary to extend $su(2,2|4)$ to $u(2,2|4)$. This post imported from StackExchange Physics at 2014-03-07 16:30 (UCT), posted by SE-user Trimok
Stephen Merity, Bryan McCann, Richard Socher Motivation: Better baselines Fast and open baselines are the building blocks of future work ... yet rarely get any continued love. Given the speed of progress within our field, and the many places progress arises, revisiting these baselines is important. Baseline numbers are frequently copy + pasted from other papers, rarely being re-run or improved. Revisiting RNN regularization Standard activation regularization is easy, even part of the Keras API ("activity_regularizer"). Activation regularization has nearly no impact on train time and is compatible with fast black box RNN implementations (i.e. NVIDIA cuDNN) ... yet it hasn't been used recently? Activation Regularization (AR) Encourage small activations, penalizing any activations far from zero. For RNNs, simply add an additional loss, where $m$ is the dropout mask and $\alpha$ is a scalar. We found it's more effective when applied to the dropped output of the final RNN layer: $\alpha L_2(m \cdot h_t)$ Temporal Activation Regularization (TAR) The loss penalizes changes between hidden states, where $h$ is the RNN output and $\beta$ is a scalar. Our work only penalizes the hidden output $h$ of the RNN, leaving the internal memory $c$ unchanged: $\beta L_2(h_t - h_{t-1})$ Penn Treebank Results Note: these LSTMs have no recurrent regularization; all other results here used variational recurrent dropout. WikiText-2 Results Note: these LSTMs have no recurrent regularization; all other results here used variational recurrent dropout. Related work Independently, LSTM LM baselines are revisited in Melis et al. (2017) and Merity et al. (2017). Both, with negligible modifications to the LSTM, achieve strong gains over far more complex models on both Penn Treebank and WikiText-2. Melis et al.
2017 also investigate "hyper-parameter noise" / the experimental setup of recent SotA models. Summary Baselines should not be set in stone. As we improve the underlying fundamentals of our field, we should revisit and improve them. Fast, open, and easy-to-extend baselines should be first class citizens for our field. We'll be releasing our PyTorch codebase for this and follow-up SotA (PTB/WT-2) numbers soon, along with our follow-up work, Regularizing and Optimizing LSTM Language Models. Revisiting Activation Regularization for Language RNNs By smerity
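Both penalties are one-liners in practice. Here is a minimal framework-agnostic numpy sketch (an illustration using a mean-squared form of the L2 penalty; in a real setup these terms would simply be added to the training loss of the RNN):

```python
import numpy as np

def ar_penalty(h_dropped, alpha=2.0):
    """Activation Regularization: penalize large (dropped) RNN output activations."""
    return alpha * np.mean(h_dropped ** 2)

def tar_penalty(h, beta=1.0):
    """Temporal Activation Regularization: penalize changes between consecutive
    hidden states h_t - h_{t-1} along the time axis."""
    return beta * np.mean((h[1:] - h[:-1]) ** 2)

# Toy RNN outputs: (timesteps, batch, hidden)
rng = np.random.default_rng(0)
h = rng.normal(size=(10, 4, 8))
dropout_mask = (rng.random(h.shape) > 0.5).astype(float)

# AR is applied to the dropped output; TAR to the raw output sequence
loss_extra = ar_penalty(h * dropout_mask) + tar_penalty(h)
```

Note that TAR touches only the RNN output sequence, matching the slide's point that the internal memory $c$ is left unregularized.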
Today in class I showed some ways of dealing with the classical integral $\int_{0}^{2\pi}\frac{d\theta}{(A+B\cos\theta)^2}$ under the constraints $A>B>0$, including:

1. Symmetry and the tangent half-angle substitution;
2. Relating the integral to the area enclosed by an ellipse, via the polar form of an ellipse with respect to a focus and the formula $\text{Area}=\pi a b =\frac{1}{2}\int_{0}^{2\pi}\rho(\theta)^2\,d\theta$;
3. Computing the geometric-like series $\sum_{n\geq 0} r^n \sin(n\theta)$ and applying Parseval's identity to it;
4. Applying Feynman's trick (differentiation under the integral sign) to $\int_{0}^{2\pi}\frac{d\theta}{1-R\cos\theta}$, which is an elementary integral due to point 1.

I finished the lesson by remarking that point 4 allows one to compute $\int_{0}^{2\pi}\frac{d\theta}{\left(1-R\cos\theta\right)^3}$ almost without extra effort, while the $L^2$ machinery (point 3) does not seem to grant the same. Apparently I dug my own grave, since someone readily asked (with the assumption $R\in(-1,1)$): What is the asymptotic behavior of the coefficients $c_n$ in $$ \frac{1}{\left(1-R\cos\theta\right)^{3/2}}= c_0+\sum_{n\geq 1}c_n \cos(n\theta) $$ ? (Q2) Do we get something interesting by following the "unnatural" approach of applying Parseval's identity to such a Fourier (cosine) series? At the moment I replied that the Paley-Wiener theorem ensures exponential decay of the $c_n$s, and with just a maybe to the second question. Later I figured out a pretty technical proof (through hypergeometric functions) of $$ |c_n| \sim K_R\cdot\sqrt{n}\cdot\left(\frac{|R|}{1+\sqrt{1-R^2}}\right)^n \quad \text{as }n\to +\infty$$ and the fact that $c_n$ is given by a linear combination of complete elliptic integrals of the first and second kind. (Q1) I would like to know if there is a more elementary way of deriving the previous asymptotic behaviour. And the outcome of (Q2).
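The claimed decay rate is easy to probe numerically. The following Python snippet (an illustrative check, not a proof) computes the cosine coefficients $c_n$ by trapezoidal quadrature — spectrally accurate for smooth periodic integrands — and compares consecutive ratios with the predicted base $|R|/(1+\sqrt{1-R^2})$:

```python
import numpy as np

R = 0.5
N = 4096
theta = np.linspace(0.0, 2.0 * np.pi, N, endpoint=False)
f = (1.0 - R * np.cos(theta)) ** (-1.5)

def c(n):
    """Cosine coefficient c_n = (1/pi) * integral of f(theta) cos(n theta), n >= 1,
    via the (spectrally accurate) periodic trapezoid rule."""
    return 2.0 * np.mean(f * np.cos(n * theta))

base = abs(R) / (1.0 + np.sqrt(1.0 - R ** 2))   # predicted decay rate, ~0.268 for R = 0.5
ratio = abs(c(11) / c(10))                       # should be close to base (times sqrt((n+1)/n))
```

The $\sqrt{n}$ prefactor in the conjectured asymptotics only nudges the consecutive ratio by a factor $\sqrt{(n+1)/n}$, so already at moderate $n$ the ratio sits close to the predicted geometric base.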