Let $X_n(\Bbb{Z})$ be the simplicial complex whose vertex set is $\Bbb{Z}$ and such that the vertices $v_0,...,v_k$ span a $k$-simplex if and only if $|v_i-v_j| \le n$ for every $i,j$. Prove that $X_n(\Bbb{Z})$ is $n$-dimensional... no kidding, my maths is foundations (basic logic but not pedantic), calc 1 which I'm pretty used to working with, analytic geometry, and basic linear algebra (by basic I mean matrices and systems of equations only). Anyway, I would assume if removing $ fixes it, then you probably have an open math expression somewhere before it, meaning you didn't close it with $ earlier. What's the full expression you're trying to get? If it's just the frac, then your code should be fine. This is my first time chatting here in Math Stack Exchange. So I am not sure if this is frowned upon, but just a quick question: I am trying to prove that a proper subgroup of $\mathbb{Z}^n$ is isomorphic to $\mathbb{Z}^k$, where $k \le n$. So we must have $rank(A) = rank(\mathbb{Z}^k)$, right? For four proper fractions $a, b, c, d$, X writes $a + b + c > 3(abc)^{1/3}$. Y also added that $a + b + c > 3(abcd)^{1/3}$. Z says that the above inequalities hold only if $a, b, c$ are positive. (a) Both X and Y are right but not Z. (b) Only Z is right. (c) Only X is right. (d) Neither of them is absolutely right. Yes, @TedShifrin, the order of $GL(2,p)$ is $p(p+1)(p-1)^2$. But I found this in a classification of groups of order $p^2qr$. There, the order of $H$ should be $qr$, and it is presented as $G = C_{p}^2 \rtimes H$. I want to know whether we can determine the structure of $H$. Like, can we deduce $H=C_q \times C_r$ or something like that from the given data? When we say it embeds into $GL(2,p)$, does that mean we can say $H=C_q \times C_r$, or $H=C_q \rtimes C_r$, or should we consider all possibilities? When considering finite groups $G$ of order $|G|=p^2qr$, where $p,q,r$ are distinct primes, let $F$ be the Fitting subgroup of $G$.
Then $F$ and $G/F$ are both non-trivial, and $G/F$ acts faithfully on $\bar{F}:=F/\Phi(F)$ so that no non-trivial normal subgroup of $G/F$ stabilizes a series through $\bar{F}$. Now suppose $|F|=pr$. In this case $\Phi(F)=1$ and $Aut(F)=C_{p-1} \times C_{r-1}$. Thus $G/F$ is abelian and $G/F \cong C_{p} \times C_{q}$. In this case, how can I write $G$ using notation/symbols? Is it like $G \cong (C_{p} \times C_{r}) \rtimes (C_{p} \times C_{q})$? First question: then it is $G= F \rtimes (C_p \times C_q)$. But how do we write $F$? Do we have to think of all the possibilities for $F$ of order $pr$ and write $G= (C_p \times C_r) \rtimes (C_p \times C_q)$ or $G= (C_p \rtimes C_r) \rtimes (C_p \times C_q)$ etc.? As a second case we can consider the case where $C_q$ acts trivially on $C_p$. So then how do we write $G$ using notation? It is also mentioned there that we can distinguish between two cases. First, suppose that the Sylow $q$-subgroup of $G/F$ acts non-trivially on the Sylow $p$-subgroup of $F$. Then $q \mid (p-1)$ and $G$ splits over $F$. Thus the group has the form $F \rtimes G/F$.
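The order of $GL(2,p)$ quoted above is easy to sanity-check by brute force. A minimal sketch in plain Python (my own illustration, just counting $2\times 2$ matrices with nonzero determinant mod $p$):

```python
def gl2_order(p):
    """Count invertible 2x2 matrices over F_p by brute force:
    a matrix is invertible iff its determinant is nonzero mod p."""
    return sum(
        1
        for a in range(p) for b in range(p)
        for c in range(p) for d in range(p)
        if (a * d - b * c) % p != 0
    )

# Matches the closed form p(p+1)(p-1)^2 for small primes.
for p in (2, 3, 5):
    assert gl2_order(p) == p * (p + 1) * (p - 1) ** 2
```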
A presentation $\langle S\mid R\rangle$ is a Dehn presentation if for some $n\in\Bbb N$ there are words $u_1,\cdots,u_n$ and $v_1,\cdots, v_n$ such that $R=\{u_iv_i^{-1}\}$, $|u_i|>|v_i|$, and for all words $w$ in $(S\cup S^{-1})^\ast$ representing the trivial element of the group, one of the $u_i$ is a subword of $w$. If you have such a presentation there's a trivial algorithm to solve the word problem: take a word $w$, check whether it has some $u_i$ as a subword, in that case replace it by $v_i$, and keep doing so until you hit the trivial word or find no $u_i$ as a subword. There is good motivation for such a definition here. So I don't know how to do it precisely for hyperbolic groups, but if $S$ is a surface of genus $g \geq 2$, to get a geodesic representative for a class $[\alpha] \in \pi_1(S)$ where $\alpha$ is an embedded loop, one lifts it to $\widetilde{\alpha}$ in $\Bbb H^2$ by the locally isometric universal covering, and then the deck transformation corresponding to $[\alpha]$ is an isometry of $\Bbb H^2$ which preserves the embedded arc $\widetilde{\alpha}$. It has to be an isometry fixing a geodesic $\gamma$ whose endpoints at the boundary are the same as the endpoints of $\widetilde{\alpha}$. Consider the homotopy of $\widetilde{\alpha}$ to $\gamma$ by straight-line homotopy, with the straight lines being hyperbolic geodesics. This is $\pi_1(S)$-equivariant, so it projects to a homotopy downstairs between $\alpha$ and the image of $\gamma$ (which is a geodesic in $S$), and you have your desired representative. I don't know how to interpret this coarsely in $\pi_1(S)$. @anakhro Well, they print in bulk, and on really cheap paper, almost transparent and very thin, and an offset machine is really cheaper per page than a printer, you know, but you should be printing in bulk; it's all economies of scale. @ParasKhosla Yes, I am Indian, and trying to get into some good master's program in math.
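The word-problem algorithm for a Dehn presentation described above is short enough to sketch in code. This is only an illustration: the rewriting rules passed in below are hypothetical, and a free-reduction step is included because replacing $u_i$ by $v_i$ can create cancelling pairs.

```python
def free_reduce(w):
    """Cancel adjacent inverse pairs like 'aA' or 'Aa'
    (convention: a capital letter is the inverse of its lowercase generator)."""
    out = []
    for ch in w:
        if out and out[-1] != ch and out[-1].lower() == ch.lower():
            out.pop()
        else:
            out.append(ch)
    return ''.join(out)

def dehn_trivial(w, rules):
    """Dehn's algorithm: for a genuine Dehn presentation, w represents the
    identity iff repeatedly replacing some subword u_i by v_i (|u_i| > |v_i|)
    reaches the empty word. `rules` is a list of (u_i, v_i) pairs."""
    w = free_reduce(w)
    changed = True
    while changed:
        changed = False
        for u, v in rules:
            i = w.find(u)
            if i != -1:
                w = free_reduce(w[:i] + v + w[i + len(u):])
                changed = True
                break
    return w == ''
```

Note that termination is guaranteed because every replacement strictly shortens the word; the correctness guarantee, however, holds only when the rules actually come from a Dehn presentation.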
Algebraic graph theory is a branch of mathematics in which algebraic methods are applied to problems about graphs. This is in contrast to geometric, combinatoric, or algorithmic approaches. There are three main branches of algebraic graph theory, involving the use of linear algebra, the use of group theory, and the study of graph invariants. The first branch of algebraic graph theory involves the study of graphs in connection with linear algebra. Especially, it studies the spectrum of the adjacency matrix, or the Lap... I can probably guess that they are using symmetries and permutation groups on graphs in this course. For example, orbits and studying the automorphism groups of graphs. @anakhro I have heard really good things about Palka. Also, if you do not worry about a little sacrifice of rigor (e.g. counterclockwise orientation based on your intuition, rather than on winding numbers, etc.), Howie's Complex Analysis is good. It is teeming with typos here and there, but you will be fine, I think. Also, this book contains all the solutions in the appendix! Got a simple question: I gotta find the kernel of the linear transformation $F(P)=xP''(x) + (x+1)P'''(x)$ where $F: \mathbb{R}_3[x] \to \mathbb{R}_3[x]$, so I think it would be just $\ker (F) = \{ ax+b : a,b \in \mathbb{R} \}$ since only polynomials of degree at most 1 would give the zero polynomial in this case. @chandx you're looking for all the $G = P''$ such that $xG + (x+1)G' = 0$; if $G \neq 0$ you can solve the DE to get $G'/G = -x/(x+1) = -1 + 1/(x+1) \implies \ln G = -x + \ln(1+x) + C \implies G = C(1+x)e^{-x}$, which is not a polynomial for $C \neq 0$, so $G = 0$ and thus $P = ax + b$. Could you suppose that $\operatorname{deg} P \geq 2$ and show that you wouldn't have nonzero polynomials? Sure.
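The kernel claim in the chat can also be checked by just expanding $F$ on a general cubic. A small sketch under the stated definition of $F$ (coefficients computed by hand from $P = ax^3+bx^2+cx+d$):

```python
def F_coeffs(a, b, c, d):
    """Coefficients (x^2, x^1, x^0) of F(P) = x P'' + (x+1) P'''
    for P = a x^3 + b x^2 + c x + d in R_3[x]:
    P'' = 6a x + 2b,  P''' = 6a,
    so F(P) = 6a x^2 + (2b + 6a) x + 6a."""
    return (6 * a, 2 * b + 6 * a, 6 * a)

# F(P) = 0 forces a = 0 (x^2 or constant term) and then b = 0,
# while c and d stay free: ker F = { c x + d }.
assert F_coeffs(0, 0, 5, -7) == (0, 0, 0)
assert F_coeffs(1, 0, 0, 0) != (0, 0, 0)
```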
"Abstract: We prove, once and for all, that people who don't use superspace are really out of it. This includes QCDers, who always either wave their hands or gamble with lettuce (Monte Zuma calculations). Besides, all nonsupersymmetric theories have divergences which lead to problems with things like renormalons, instantons, anomalons, and other phenomenons. Also, they can't hide from gravity forever." Can a gravitational field possess momentum? A gravitational wave can certainly possess momentum just like a light wave has momentum, but we generally think of a gravitational field as a static object, like an electrostatic field. "You have to understand that compared to other professions such as programming or engineering, ethical standards in academia are in the gutter. I have worked with many different kinds of people in my life, in the U.S. and in Japan. I have only encountered one group more corrupt than academic scientists: the mafia members who ran Las Vegas hotels where I used to install computer equipment." So I've got a small bottle that I filled up with salt. I put it on the scale and its mass is 83 g. I've also got a jug that has 500 g of water. I put the bottle in the jug and it sank to the bottom. I have to figure out how much salt to take out of the bottle such that the weight force of the bottle equals the buoyancy force. For the buoyancy do I use: density of water * volume of water displaced * gravitational acceleration? So: mass of bottle * gravity = volume of water displaced * density of water * gravity? @EmilioPisanty The measurement operators that I suggested in the comments of the post are fine, but I would additionally like to control the width of the Poisson distribution (much like we can do for the normal distribution using the variance). Do you know whether this can be achieved while still maintaining the completeness condition $$\int A^{\dagger}_{C}A_C\,dC = 1?$$
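The buoyancy bookkeeping in the chat question can be sketched numerically. Note the bottle's volume is not given in the original problem, so `V_BOTTLE` below is a made-up placeholder purely for illustration:

```python
# Float condition from the chat: weight = buoyancy, i.e.
#   m * g = rho_water * V_displaced * g   =>   m = rho_water * V_bottle
# (a fully submerged, barely-floating bottle displaces its own volume).
RHO_WATER = 1.0    # g/cm^3
M_BOTTLE = 83.0    # g, from the problem
V_BOTTLE = 60.0    # cm^3 -- NOT given in the problem; assumed for illustration

m_target = RHO_WATER * V_BOTTLE          # mass at which the bottle just floats
salt_to_remove = M_BOTTLE - m_target     # grams of salt to take out
```

The point is that $g$ cancels on both sides, so only the density and displaced volume matter.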
As a workaround while this request is pending, there exist several client-side workarounds that can be used to enable LaTeX rendering in chat, including: ChatJax, a set of bookmarklets by robjohn to enable dynamic MathJax support in chat, commonly used in the Mathematics chat room. An altern... You're always welcome to ask. One of the reasons I hang around in the chat room is because I'm happy to answer this sort of question. Obviously I'm sometimes busy doing other stuff, but if I have the spare time I'm always happy to answer. Though as it happens I have to go now - lunch time! :-) @JohnRennie It's possible to do it using the energy method. We just need to carefully write down the potential function, which is $U(r)=\frac{1}{2}\frac{mg}{R}r^2$ with zero point at the center of the Earth. Anonymous Also, I don't particularly like this SHM problem because it causes a lot of misconceptions. The motion is SHM only under particular conditions :P I see with concern that the close queue has not shrunk considerably in the last week and is still at 73 items. This may be an effect of increased traffic but not increased reviewing, or something else; I'm not sure. Not sure about that, but the converse is certainly false :P Derrida has received a lot of criticism from experts in the fields he tried to comment on. I personally do not know much about postmodernist philosophy, so I shall not comment on it myself. I do have strong affirmative opinions on textual interpretation made disjoint from authorial intent, however, which is a central part of Deconstruction theory. But I think that dates back to Heidegger. I can see why a man of that generation would lean towards that idea. I do too.
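The energy-method route mentioned above can be sketched explicitly. Differentiating the conserved energy built from the quoted potential (this assumes the uniform-density Earth model, which is the "particular condition" that makes the motion SHM):

$$E = \tfrac{1}{2} m\dot r^2 + \tfrac{1}{2}\frac{mg}{R}r^2 = \text{const} \;\Rightarrow\; m\dot r\ddot r + \frac{mg}{R}r\dot r = 0 \;\Rightarrow\; \ddot r = -\frac{g}{R}r,$$

which is SHM with $\omega = \sqrt{g/R}$ and period $T = 2\pi\sqrt{R/g} \approx 84$ minutes.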
8.3.3.1 - Example: SAT Scores This example uses the dataset from Lesson 8.3.3 to walk through the five-step hypothesis testing procedure using the Minitab Express output. Research question: Do students score differently on the SAT-Math and SAT-Verbal tests? Because the sample size is large (\(n \ge 30\)), the t distribution may be used to approximate the sampling distribution. Null hypothesis \(H_{0}: \mu_d = 0\); alternative hypothesis \(H_{a}: \mu_d \ne 0\). Output: T-Value = 3.18, P-Value = 0.0017. The t test statistic is 3.18 and, from the output, the p-value is 0.0017. Since \(p \leq .05\), our decision is to reject the null hypothesis. There is evidence that in the population, on average, students' SAT-Math and SAT-Verbal scores are different.
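The paired-test statistic used above is simple to compute by hand. A sketch in plain Python; the score lists are made-up illustrations, not the Lesson 8.3.3 data:

```python
import math

def paired_t(before, after):
    """Paired t statistic for H0: mean difference = 0:
    t = dbar / (s_d / sqrt(n)), with s_d the sample SD of the differences."""
    d = [a - b for a, b in zip(after, before)]
    n = len(d)
    dbar = sum(d) / n
    s2 = sum((x - dbar) ** 2 for x in d) / (n - 1)
    return dbar / math.sqrt(s2 / n)

# Hypothetical SAT-Verbal vs SAT-Math scores for five students.
verbal = [500, 520, 480, 610, 550]
math_scores = [502, 523, 481, 614, 552]
t = paired_t(verbal, math_scores)
```

With real data one would then compare $t$ against a $t_{n-1}$ distribution to get the p-value, as Minitab does.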
Munapo (2016, American Journal of Operations Research, http://dx.doi.org/10.4236/ajor.2016.61001) purports to have a proof that binary linear programming is solvable in polynomial time, and hence that P=NP. Unsurprisingly, it does not really show this. Its results are based on a convex quadratic approximation to the problem with a penalty term whose weight $\ell$ needs to be infinitely large for the approximation to recover the true problem. My questions are the following: Is this an approximation which already existed in the literature (I rather expect it did)? Is this approximation useful in practice? For example, could one solve a mixed integer linear programming problem by homotopy continuation, gradually increasing the weight $\ell$? Note: After writing this question I discovered this related question: Time Complexity of Binary Linear Programming. The related question considers a specific binary linear programming problem, but mentions the paper above. I have a language $L= \{a^nb^nc^m : n, m \ge 0\}$. Now, I wanted to determine whether this language is linear or not. So, I came up with this grammar: $S \rightarrow A \mid Sc$, $A \rightarrow aAb \mid \lambda$. I'm pretty sure (not completely, however) that this grammar is linear and consequently the language is linear too. Now, when I use the pumping lemma for linear languages with $w$, $v$ and $y$ chosen as follows, I find that this language is not linear: $w = a^nb^nc^n$, $v = a^k$, $y=c^k$, $w_0 = a^{n-k}b^nc^{n-k}$; now $w_0 \notin L$ $(\because n_a \neq n_b)$. So, I'm unable to find whether the language is linear or not and what goes wrong in the above logic in either case. Please help.
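For the linear-language question, the grammar's behavior is easy to test mechanically. A small sketch; `derive` just mirrors the derivation $S \rightarrow Sc$ ($m$ times), then $A \rightarrow aAb$ ($n$ times), then $\lambda$:

```python
import re

def in_L(s):
    """Membership in L = { a^n b^n c^m : n, m >= 0 }."""
    m = re.fullmatch(r'(a*)(b*)(c*)', s)
    return m is not None and len(m.group(1)) == len(m.group(2))

def derive(n, m):
    """The word produced by S -> Sc (m times) -> A -> aAb (n times) -> λ."""
    return 'a' * n + 'b' * n + 'c' * m

assert all(in_L(derive(n, m)) for n in range(4) for m in range(4))
assert not in_L('aab')   # unbalanced a's and b's
```

The grammar is indeed linear (each production has at most one nonterminal on the right-hand side), so $L$ is a linear language. The flaw in the pumping argument is quantifier order: the pumping lemma only guarantees that *some* valid decomposition pumps within $L$, so to prove non-linearity one must rule out *all* decompositions, not pick $v$ and $y$ oneself.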
In my slides for my algorithms class, I have a method for finding the complexity of recursive functions called recurrence relation with "partizione bilanciata", which means "balanced partition". My course is in Italian, and looking online in English I could not find any trace of this method; I know of the recursion tree, induction, and the master theorem. Does anyone have an idea of what this is called in English? I don't like studying in Italian and I do not understand those PDFs. I'm attaching some images of the formulas. What exactly does "balanced partition" mean? Does it have something to do with, for example, Quick Sort, when you choose a middle pivot and the partitions are equal in size, which means it's balanced? Is it the same as the master theorem, maybe written in a different way? Edit: I can see the form of the recurrence relation is different from the master theorem's, since for the master theorem, instead of $c \cdot n^b$, we have $f(n)$. Thank you. Is there a structure to the solution of the following linear program? $\min_{x_{ij}} \sum_{i,j} x_{ij} \mu_{ij}$ s.t. $\forall j, \sum_{i} x_{ij} = \beta_j$ (row sum), $\forall i, \sum_{j} x_{ij} D_{ij} \leq 1$ (column sum). Question about computer science: whether a problem is O(1) or O(N). This was a thought experiment I came up with and I'm sure it's rather basic. But I wasn't sure how to look it up, so I apologize if this is already posted on here somewhere. But I was wondering … let's say we have a simple question: given a string of random integers, is there any number greater than a certain threshold value in the series, and would that be linear or constant time in Big O? Now the small twist is that the distribution of the input would be known. For example, let's say we want to look at a series of N numbers to see if one is at least N/2. If yes, boom, we are done. And let's say the numbers are positive integers bounded by N, so all n < N. Given we know this distribution, does it change whether it is O(1) or O(N)?
If we have a very long string of numbers then of course it is possible that no number meets the threshold, but this becomes a smaller and smaller possibility for a long series of such random numbers. Does it make a difference if the N/2 threshold is some constant integer value less than N? Linear programming can solve only problems with weak inequalities, such as "maximize $cx$ such that $Ax \leq b$". This makes sense, since problems with strict inequality often do not have a solution. For example, "maximize $x$ such that $x<5$" does not have a solution. But suppose we are interested in finding the supremum instead of the maximum. In this case, the above program does have a solution: the supremum is $5$. Given a linear program with strict inequalities and a supremum or infimum objective, is it possible to solve it by reduction to a standard linear program? The following theorem from Michael Sipser's book "Introduction to the Theory of Computation" states: $A_{\textrm{LBA}}= \{ \langle M, w \rangle \mid \text{$M$ is an LBA that accepts string $w$} \}$. THEOREM: $A_{\mathrm{LBA}}$ is decidable. On the proof part, it states: The idea for detecting when $M$ is looping is that as $M$ computes on $w$, it goes from configuration to configuration. If $M$ ever repeats a configuration, it would go on to repeat this configuration over and over again and thus be in a loop. I do not understand this: "If $M$ ever repeats a configuration, it would go on to repeat this configuration over and over again." What if $M$ repeats a configuration only once, and then halts? Suppose Min $2x+3y$ subject to $x \in \{2, 5, 7\}$ and $y \in \{5, 9\}$ is the intended program, where $x$ holds the value 2, 5, or 7 and $y$ holds the value 5 or 9. Then what is the correct formulation for the above problem? I want to write the following constraint: If A=1 and B <= m then C=1 (where A and C are binary, m is a constant, and B is continuous).
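For the last constraint ("if $A=1$ and $B \le m$ then $C=1$"), here is a standard big-$M$ sketch; the auxiliary binary $z$, the constant $M$, and the small tolerance $\epsilon$ are modeling choices of mine, not from the original question. First force $z=1$ whenever $B \le m$:

$$B \ge m + \epsilon - Mz,$$

where $M$ is any valid upper bound on $m + \epsilon - B$. Then link the indicators:

$$C \ge A + z - 1.$$

If $A = 1$ and $B \le m$, the first constraint forces $z = 1$ and the second then forces $C = 1$; in all other cases both constraints can be satisfied with slack. (The $\epsilon$ handles the usual caveat that MILP solvers cannot model the strict complement $B > m$ exactly.)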
I am new to machine learning and I am trying to develop my knowledge and skills with projects. While doing so, I encountered a problem where I didn't find a variable well correlated with the target; the highest correlation coefficient I found was 0.44. So I did a scatter plot to determine how the two variables behave, in order to choose between a polynomial regression model and a linear regression. It turned out like this, and I am clueless about what to do.
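One caution worth adding here: a Pearson correlation of 0.44 by itself does not decide between linear and polynomial models, because Pearson's $r$ only measures *linear* association. A minimal sketch in plain Python (the data is made-up):

```python
import math

def pearson(xs, ys):
    """Sample Pearson correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# A perfectly quadratic relationship can still have zero linear correlation:
xs = [-2, -1, 0, 1, 2]
ys = [x * x for x in xs]    # y = x^2: deterministic, but nonlinear
r = pearson(xs, ys)         # r = 0 here despite perfect dependence
```

So a low $r$ is consistent with a strong nonlinear relationship, which is exactly what the scatter plot is for.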
Globally, there is Lakner and Milanovic (2015)'s elephant graph; see also Hellebrandt and Mauro (2015). Thus, the two previous distributions look like bimodal log-normal distributions. Or CDFs, as in MacAskill's book Doing Good Better. I did not find something strictly related to wages; for most people, income may be a good proxy for wages. I do not believe that your suggested definitions will hold up. Since this is a forum for questions about economics, I do not see the point of proposing a new definition here. Who is going to see it? What you call "income generating wealth" is captured by "wealth" in its standard usage. However, what you exclude (such as bank deposits) will be included in "... Both distributions are often modelled as log-normal, with a substantial number of zeros. A Pareto distribution is also sometimes used (Piketty & Saez (2012), p. 32) for modelling the distribution of top incomes. Wealth distributions are also in general far more skewed than wage (or income) distributions. The total assets of all US commercial banks are about \$17.2 trillion. You'll have to be clearer about what you mean by "their own money". The total equity capital of US commercial banks is about \$1.9 trillion. This depends on the exact preferences, but usually the utility function $$U(x,y) = v(x) + y$$ is such that $$\lim_{x \to 0} |MRS(x,y)| = \lim_{x \to 0} \frac{\text{d}v(x)}{\text{d} x} = \infty.$$ In this case it is the consumption of an additional marginal unit of the nonlinear good $x$ that is infinitely useful compared to the consumption of an ... (Mostly) ignore money for this growth issue; it's by and large a red herring that's distracting you. Instead just think of technological progress, for instance. Assume everyone is washing their clothes by hand. That takes a fair bit of time. Now someone invents a washing machine. Everyone (who can get a washing machine) will then have more time on their ... The short answer is that money is not the same as wealth.
You yourself probably have less money than the value of all the stuff you own. When the economy grows, the total amount of wealth grows. (This is usually also accompanied by growth of money, but that is a separate matter.) A random variable $X$ has a Pareto distribution with Pareto exponent $\theta$ if $$\text{P}(X>x)=\begin{cases}\left(\frac{x_{min}}{x}\right)^\theta \quad \text{ if } x\geq x_{min}\\ \\ 1 \quad \quad \quad \text{ if } x<x_{min} \end{cases}$$ In this case, the Pareto exponent is $\theta = \alpha - 1$. Remember that $P(X>x)=\int_x^\infty p(x')\... The argument is correct. Social mobility and income inequality are simply two distinct phenomena. Mobility is arguably the more complex one, and there are many different ways to define it. As an example, absolute mobility refers to someone earning more income in absolute terms (\$ per year), while relative mobility refers to someone earning more in ... Is it that the consumer would not consume anything of the nonlinear good in case of insufficient wealth? No, it is exactly the other way around. The utility function in the two-goods case will have the following form: $$U(x,y)=u(x)+y$$ We assume that $u'(x)>0$ and $u''(x)<0$, or marginal utility of $x$ is decreasing in $x$. $MU_y$ meanwhile is ... Onurcanbektas, I really like your thought process. The problem with your mental model is that you've assumed economic output is exogenous. However, in the real world (most) jobs produce economic output. This output then increases the size of the pie, allowing people to be, on average, richer. However, from a societal perspective, I believe we will ... American economic theorist Henry George wrote about precisely this issue in his famous work Progress and Poverty, published in the late 1800s and based on his observations on the economic development of San Francisco during the gold rush. His argument was that increasing economic development primarily benefitted landowners, at the expense of both capital ...
The distinction between the two is not well specified. If I own an apartment, I can rent it out or I can live in it. If I own an art collection, I can hang it in my house or charge others to see it. Cash in bank accounts is lent out by banks to form investments in other firms and projects. Land might be used for long walks or used for farming, natural resource ... If we view "wealth" as the subjective experience of pleasure, then "wealth" can spring up ex nihilo: if people used to get two dollars' worth of pleasure from having a cup, but now they get ten dollars' worth, then in some sense eight dollars of wealth has come "out of nowhere". However, it's more likely that at least one party has misvalued the cup. ... Even if $u^b$ is finite, it can never be achieved. This is what is meant by "does not attain a maximum". Rather, $u(x)$ approaches $u^b$ from below as $x \to \infty$. This is because $u$ is strictly increasing. If we had $u(x_*) = u^b$ for some $x_*$, then we would have $u(x_* + 1) > u^b$ and $u^b$ could not be a bound. This is why $U$ is written as the ...
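The truncated tail computation in the Pareto answer above follows standard lines; a sketch of why $\theta = \alpha - 1$ when the density behaves like $p(x') \propto x'^{-\alpha}$ for $x' \ge x_{min}$:

$$P(X>x)=\int_x^\infty p(x')\,\mathrm{d}x' = \int_x^\infty \frac{\alpha-1}{x_{min}}\left(\frac{x_{min}}{x'}\right)^{\alpha}\mathrm{d}x' = \left(\frac{x_{min}}{x}\right)^{\alpha-1}, \qquad x \ge x_{min},$$

so the survival function decays with exponent $\theta = \alpha - 1$, matching the claim in that answer.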
Yeah, this software cannot be too easy to install; my installer is very professional looking, currently not tied into that code, but it directs the user how to search for their MiKTeX and/or install it, and does a test LaTeX rendering. Somebody like Zeta (on codereview) might be able to help a lot... I'm not sure if he does a lot of category theory, but he does a lot of Haskell (not that I'm trying to conflate the two)... so he would probably be one of the better bets for asking for a revision of code. He is usually on the 2nd monitor chat room. There are a lot of people on those chat rooms that help each other with projects. I'm not sure how many of them are adept at category theory, though... still, this chat tends to emphasize a lot of small problems and occasionally goes off on a tangent. Your project is probably too large for an actual question on codereview, but there is a lot of GitHub activity in the chat rooms. gl. In mathematics, the Fabius function is an example of an infinitely differentiable function that is nowhere analytic, found by Jaap Fabius (1966). It was also written down as the Fourier transform of $\hat{f}(z)=\prod_{m=1}^{\infty}(\cos\ldots$ Defined as the probability that $\sum_{n=1}^\infty 2^{-n}\zeta_n$ will be less than $x$, where the $\zeta_n$ are chosen randomly and independently from the unit interval. @AkivaWeinberger are you familiar with the theory behind Fourier series? Anyway, here's some food for thought: for $f : S^1 \to \Bbb C$ square-integrable, let $c_n := \displaystyle \int_{S^1} f(\theta) \exp(-i n \theta) (\mathrm d\theta/2\pi)$, and $f^\ast := \displaystyle \sum_{n \in \Bbb Z} c_n \exp(in\theta)$. It is known that $f^\ast = f$ almost everywhere. (a) Is $-^\ast$ idempotent? i.e. is it true that $f^{\ast \ast} = f^\ast$? @AkivaWeinberger You need to use the definition of $F$ as the cumulative distribution function of the random variables. $C^\infty$ was a simple step, but I don't have access to the paper right now so I don't recall it.
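The probabilistic definition of the Fabius function quoted above is easy to explore numerically. A Monte Carlo sketch of mine, truncating the series at a finite number of terms (which costs accuracy of at most $2^{-\text{terms}}$ in the argument):

```python
import random

def fabius_estimate(x, trials=100_000, terms=25, seed=0):
    """Monte Carlo estimate of F(x) = P( sum_{n>=1} 2^-n * zeta_n < x ),
    with zeta_n i.i.d. Uniform[0,1]; the series is truncated at `terms`."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        s = sum(rng.random() / 2 ** n for n in range(1, terms + 1))
        hits += s < x
    return hits / trials

# The random sum lives in [0, 1] and is symmetric about 1/2,
# so F(0) = 0, F(1/2) = 1/2, and F(1) = 1.
```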
> In mathematics, a square-integrable function, also called a quadratically integrable function, is a real- or complex-valued measurable function for which the integral of the square of the absolute value is finite. I am having some difficulties understanding the difference between simplicial and singular homology. I am aware of the fact that they are isomorphic, i.e. the homology groups are in fact the same (and maybe this doesn't help my intuition), but I am having trouble seeing where in the setup they d... Usually it is a great advantage to consult the notes, as they tell you exactly what has been done. A book will teach you the field, but not necessarily help you understand the style in which the prof. (who creates the exam) creates questions. @AkivaWeinberger having thought about it a little, I think the best way to approach the geometry problem is to argue that the relevant condition (the centroid is on the incircle) is preserved by similarity transformations, hence you're free to rescale the sides, and therefore the (semi)perimeter as well, so one may (for instance) choose $s=(a+b+c)/2=1$ without loss of generality. That makes a lot of the formulas simpler; e.g., the inradius is identical to the area. It is asking how many terms of the Euler-Maclaurin formula we need in order to compute the Riemann zeta function in the complex plane. $q$ is the upper summation index in the sum with the Bernoulli numbers. This appears to answer it in the positive: "By repeating the above argument we see that we have analytically continued the Riemann zeta-function to the right-half plane σ > 1 − k, for all k = 1, 2, 3, . . .."
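The Euler-Maclaurin setup being discussed can be made concrete. A sketch of mine (not from the quoted source): here $q$ counts the Bernoulli-number terms, matching the usage above, and the Bernoulli numbers are hard-coded for a few terms.

```python
from math import factorial

# Bernoulli numbers B_2, B_4, B_6, B_8, hard-coded for small q.
BERNOULLI = {1: 1 / 6, 2: -1 / 30, 3: 1 / 42, 4: -1 / 30}

def zeta_em(s, N=10, q=3):
    """Euler-Maclaurin approximation of the Riemann zeta function:

    zeta(s) ~ sum_{n=1}^{N-1} n^-s + N^(1-s)/(s-1) + N^-s/2
              + sum_{k=1}^{q} B_{2k}/(2k)! * s(s+1)...(s+2k-2) * N^(1-s-2k).

    The tail sum_{n>=N} n^-s is replaced by its Euler-Maclaurin expansion,
    which also analytically continues zeta to Re(s) > 1 - 2q (s != 1).
    """
    total = sum(n ** (-s) for n in range(1, N))
    total += N ** (1 - s) / (s - 1) + N ** (-s) / 2
    for k in range(1, q + 1):
        rising = 1.0
        for j in range(2 * k - 1):       # the product s(s+1)...(s+2k-2)
            rising *= s + j
        total += BERNOULLI[k] / factorial(2 * k) * rising * N ** (1 - s - 2 * k)
    return total
```

Even a modest $N$ and $q$ give many correct digits for real $s > 1$, and the same formula evaluated at, say, $s = -1$ reproduces the continued value $\zeta(-1) = -1/12$, illustrating the "repeating the above argument" continuation in the quote.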
In the comments, Charles Hudgins was kind enough to link me to Ying, J.H. Arch. Math (1973) 24: 561. This was a paper leading up to John Hsiao Ying's 1973 PhD thesis, Relations between subgroups and quotient groups of finite groups. As far as I can tell, the thesis is not available online anywhere; however, I was recently able to track it down from the library at State University of New York at Binghamton. I will summarize here what I learned from reading his research, and also aggregate some of the information from the help I've gotten from you lovely people. Shout out to Dr. Ying, if you're out there somewhere. There are still open questions littered throughout this answer and around this topic in general, which will hopefully draw some interest from all of you. Please feel free to ask new questions based on this one, or to edit this answer with updated information if you know something I don't. Definitions First of all, it turns out there is some existing terminology for studying the relationship between subgroups and quotients. As we have seen in the history of this question on MSE and MO, it is easy to confuse the condition defining these groups with a number of subtly different conditions, each of which has different consequences. The paper introduces the groups that are the topic of this question as those "satisfying condition (B)," but the thesis goes on to give them a name: Q-dual groups. There are several related definitions, the most relevant of which I will condense as follows. An S-dual group satisfies $\forall H\leq G,\:\exists N\unlhd G\text{ s.t. }H\cong G/N$ -- that is, each subgroup of $G$ is isomorphic to a quotient group of $G$. A Q-dual group satisfies $\forall N\unlhd G,\:\exists H\leq G\text{ s.t. }H\cong G/N$ -- that is, each quotient group of $G$ is isomorphic to a subgroup of $G$. A group which is S-dual and Q-dual is self-dual. Actually, there are two definitions for a self-dual group.
The one I've picked comes from Fuchs, Kertész, and Szele (1953). In the context of Abelian groups, there is an alternative definition, which Baer will roar at you about in these papers if you care. Examples and nonexamples So, what do we know about Q-dual groups? To begin with, Q-dual groups are rare. Here Derek Holt presents evidence that most groups are not Q-dual, where "most" is defined in the sense that if $g(n)$ denotes the fraction of isomorphism classes of finite groups of order $\leq n$ that are not Q-dual, $g(n)\to 1$ as $n\to \infty$. On the other hand, it's also important to note that S-dual groups are more rare than Q-dual groups, which is something that Ying points out. Let's look at some examples, some from the comments, some from papers. Here are some groups which are Q-dual: Finite simple groups Symmetric groups Hall-complemented groups (for each $H\leq G$, there is a $K\leq G$ with $H \cap K = \mathbf{1}$ and $G=HK$) Higman-Neumann-Neumann type universal groups (see Higman, Neumann, and Neumann (1949)) The semidirect product $A\rtimes \langle z \rangle$ of an abelian $p$-group $A$ with a power automorphism $z$ of prime power order. Some groups that are not Q-dual: Some simple examples: $C_3\rtimes C_4$, $C_4\rtimes C_4$, $C_5\rtimes C_4$ with a nontrivial center Any quasisimple group (that isn't simple) The commutator subgroup of a free group of rank $2$ Finitely generated linear groups over a field of characteristic $0$.
More generally, by Selberg's lemma and Malcev's theorem, any torsion-free hyperbolic group that is residually finite is not Q-dual (which may constitute all torsion-free hyperbolic groups; see here) Relating this to other properties, Q-dual groups may or may not be solvable and vice versa Q-dual groups may or may not be nilpotent and vice versa (this relationship discussed in a section below) Q-dual groups may or may not be $p$-groups and vice versa the Q-dual property is not subgroup closed (see example in last section) Reduction to smaller order It's often desirable to throw away irrelevant parts of a group when studying a property. Let $G$ be Q-dual. If there exists an element of prime order in the center that is not a commutator, then $G$ has a non-trivial, cyclic direct factor. This dovetails nicely with the next theorem. Let $G$ be Q-dual. If $G=H\times \langle x \rangle$, $H$ and $\langle x \rangle$ are Q-dual. This lets us reduce away cyclic direct factors. Building examples of Q-dual groups To build new Q-dual groups from old ones, take any Q-dual group $H$ that has a unique non-trivial minimal normal subgroup. Given a prime $p\not\mid |H|$, $H$ has a faithful irreducible representation on an elementary abelian $p$-group $C_p^{\; n}$, so we can build $G=C_p^{\; n}\rtimes H$, which is Q-dual by the theorem. You can keep going with that, tacking on as many primes as you like. Next we have a large class of easily constructed Q-dual groups. Let $A$ be an elementary abelian $p$-group and $\varphi\in\operatorname{Aut}(A)$ have prime order. Then $A\rtimes \langle \varphi \rangle$ is Q-dual. It's possible that this family comprises all nonabelian Q-dual $p$-groups of class $2$ with exponent $p$, but this is open. Relationship to nilpotency Ying's work focuses largely on nilpotent groups, based on the observation that the Q-dual condition is most pronounced in groups with many normal subgroups. 
Together with the (almost surely true) conjecture that most finite groups are nilpotent, this seems like a pretty good place to start. A nilpotent group $G$ is Q-dual if and only if all of its Sylow subgroups are Q-dual. (Actually this is proven in a paper by A.E. Spencer, which I have yet to get ahold of. I'll post a link when I do.) This is fair enough, and lets us reduce to studying $p$-groups. From here, he delves into nilpotency class. Let $G$ be an odd order Q-dual $p$-group of class $2$. Then $G'$ is elementary abelian. The additional hypotheses that $p$ be odd and the nilpotency class be $2$ are important. The counterexample given is the dihedral group of order $16$, whose commutator subgroup is cyclic of order $4$, and which has nilpotency class $3$. It's important to note that this is a $2$-group, however, and it may be that this is what causes the generalization to fail, not the higher class. In particular, it is still open (as of 1973!) whether odd $p$-groups of class greater than $2$, or $2$-groups of class $2$, have elementary abelian commutator subgroups. Let $G$ be an odd order Q-dual $p$-group of class $p$. Furthermore, suppose $\Omega_1(G)$ is abelian. Then $G=A\rtimes \langle z \rangle$ where $A$ is abelian, $z$ has order $p$, and $[a,z]=a^{\operatorname{exp}(A)/p}$ for all $a\in A$. When $\operatorname{exp}(G)>p^2$, this becomes an if and only if. Let $G$ be a $p$-group of class $2$ with $\operatorname{exp}(G)>p^2>4$. Then $G$ is Q-dual if and only if $G=A\rtimes \langle z \rangle$ where $A$ is abelian, $z$ has order $p$, and $[a,z]=a^{\operatorname{exp}(A)/p}$ for all $a\in A$. That is a pretty thorough characterization of this special case. When $\operatorname{exp}(G)\leq p^2$, things get more complicated. Example. Let $p$ be an odd prime, $|a|=p^2$, $|b|=|c|=p$, $[a,x]=a^p$, $[a,y]=b$, $[c,z]=a^p$, and all other commutators between $a,b,c,x,y$ and $z$ equal to $1$. 
Then $\left(\langle a\rangle\times \langle b\rangle\times \langle c\rangle\right)\rtimes \langle x,y,z\rangle$ is a finite Q-dual $p$-group of class $2$ and exponent $p^2$. This example shows that the Q-dual property is not subgroup closed, via the subgroup $\langle a,c,x,z\rangle$. It also shows that finite Q-dual $p$-groups of class $2$ need not contain an abelian maximal subgroup. It is still open whether there are counterexamples of this nature for odd primes $p$.
Please kindly refer to page 88 in the link below. $\oint_{S}\vec{E}\cdot d\vec{a}=\frac{Q_{enc}}{\epsilon_{0}}=\frac{\sigma A}{\epsilon_{0}}$ The pillbox has an area vector $\vec{A}=\pm \hat{z}A$: "$-$" if the area $A$ is facing in the negative $z$ direction and "$+$" if the area $A$ is facing in the positive $z$ direction. We know that the sheet produces its own electric field due to the surface charge $\sigma$. This electric field is in the $+z$ direction above the sheet and in the $-z$ direction below the sheet. Now, there is, although Griffiths does not make this explicit, an external electric field below the charge sheet. This electric field is in the $z$ direction, and the field lines from this external electric field 'pass' through the charge sheet. Here is where things get confusing: Griffiths asserts that $E^{\perp}_{above}-E^{\perp}_{below}=\frac{\sigma}{\epsilon_{0}}$. Are these $E^{\perp}_{above}, E^{\perp}_{below}$ a result due to the external electric field alone, or the net electric field due to both the sheet and the external field? While I'm not sure, I am inclined to say the first option cannot be the case, since $\frac{\sigma}{\epsilon_{0}}$ is due to the charge enclosed in the Gaussian pillbox. Someone please shed some light.
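One quick way to see that the discontinuity is a statement about the net field is superposition: the sheet alone contributes $\pm\sigma/(2\epsilon_0)$ on either side, while any smooth external field is continuous across the sheet and cancels out of the difference. A small numerical sketch (the field and charge values are made up for illustration):

```python
# Superposition check of the boundary condition E_above - E_below = sigma/eps0.
# The sheet alone gives +sigma/(2 eps0) above and -sigma/(2 eps0) below; a
# smooth external field E_ext is the same on both sides, so it drops out.
eps0 = 8.854e-12          # vacuum permittivity, F/m
sigma = 3.0e-6            # C/m^2, arbitrary illustrative surface charge
E_ext = 1.0e5             # V/m, arbitrary external field along +z

E_above = E_ext + sigma / (2 * eps0)   # net perpendicular field just above
E_below = E_ext - sigma / (2 * eps0)   # net perpendicular field just below

jump = E_above - E_below
print(jump, sigma / eps0)  # equal: the external field cancels in the jump
```

So the asserted relation holds for the net field, and the external contribution is invisible to it.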
Today we will advance our coverage toward quantum mechanics by looking at an unusual feature of daily life. We’ll be looking at an aspect of the world which doesn’t quite behave as expected; though it won’t be as counterintuitive as, say, the Heisenberg uncertainty relations, it does tend to make people blink a few times and say, “That’s not — well, I guess it is right.” Furthermore, poking into this area will motivate the development of some mathematical tools which will remarkably simplify our study of symmetry in quantum physics. Fortunately, then, I found an assistant to help me with the demonstrations. Please welcome my fellow physics enthusiast, here on an academic scholarship after a rough-and-tumble life in Bear City: Those of us who grew up, as I did, with the aftereffects of the “New Math” are familiar with the “commutative law.” At some tender age, we learned that “addition is commutative,” which we were taught meant that the order in which we do additions doesn’t matter. 22 + 17 is the same as 17 + 22, and in fact for any numbers real or complex, [tex]a + b = b + a.[/tex] When we’re talking about numbers which we use for counting things, this seems like the most unremarkable property in the world. How could it be false? My assistant will illustrate the process at work: Multiplication, which we define in terms of repeated additions, inherits the commutative property of addition, and as we build more types of numbers — irrational, real, complex — this nice and unsurprising character trait continues to hold. It seems so unremarkable that we’d be foolish not to wonder why it deserves a fancy name at all! Why make such a fuss about it and turn it into a grand thing. . . unless there were a place where it wasn’t true. Subtraction, we note, is not commutative. Six minus three gives one result, but three minus six gives another — a result which, as it happens, we can’t even use the “natural” or “counting” numbers to specify. 
But wait, isn’t subtraction just a special kind of addition — the addition of negatives? Aha: something must be going on with that “flip” which turns a number into its opposite, bearded twin. Thinking in terms of a number line, the expression a + b just means counting b units from some starting point a. Flipping the sign to get subtraction, a – b, means counting b units in the opposite direction from the same starting point. Addition of any number is a translation along the number line, and negation is a flip to the other side of zero, the origin. A translation followed by a flip does not give the same result as a flip followed by a translation. Starting from a number a, the former sequence of operations gives -(a + b), while the latter gives [tex](-a) + b = b – a.[/tex] Now, if addition is a shift to the side, then multiplication is a scaling. On the number line, twice a is the segment from 0 to a stretched out so that it extends from 0 to 2a. Repeated multiplications are successive scalings; b² is scaling by the amount b twice in succession, and the square root of b is that scaling which one must perform twice in order to scale by b. This viewpoint is useful for the insight it gives into the next question: what is the operation which, when performed twice, gives a flip? In more “numerical” terms, we’re asking if there exists a scaling operation — a multiplier — which, when we multiply it by itself, has the same effect as flipping, or negation. This is the geometric interpretation of asking what is the square root of negative one! Thinking geometrically, it is not so difficult to find the answer. If we imagine the number a represented by a line segment from the origin 0, sticking out in the right-hand direction, we can pivot the number a one quarter-turn clockwise or counterclockwise (your choice), to give a line segment pointing up (or down).
By repeating the same operation, we’ll get a line segment of length a pointing to the left — which is just the number (-a). The square root of -1 is a rotation by a quarter-turn! Well, we’ve worked ourselves right into the complex numbers. Instead of a number line, we’ve got a number plane, each number in which can be represented by a scaling (a shrinking or an expansion) and a turning. (Incidentally, we’re in a very good position now to understand why -i is just as good a square root of -1 as is i.) Starting with the number 1, we can rotate by a gradually increasing angle to trace out a full circle, on which the vertical coordinate is sin θ and the horizontal coordinate is cos θ. Rotation is just multiplication by a complex number, and the complex number which rotates by θ without scaling has a name; it’s called e^{iθ}. This is the geometric interpretation of Euler’s formula [tex]e^{i\theta} = \cos\theta + i\sin\theta[/tex] which we used a little while ago to prove some trigonometric identities. Whew! We’ve covered a fair bit of territory just thinking about translations, rotations and scalings. One important thing to notice is that successive rotations in the 2D plane commute. Intuitively, we feel pretty confident that twisting by 30 degrees, taking a breather and then twisting by 60 degrees will have the same result as turning by 60 degrees, pausing and turning by another 30. What can we say about geometric operations in three dimensions? Naturally we can translate shapes in three different directions, but what does the extra “room” mean for rotations? Instead of having one axis about which we can turn, we’ve got three independent ways we can spin and twist. In aerospace lingo, in order to represent an arbitrary rotation in 3D we have to give pitch, roll and yaw. (There are many different ways to represent the same rotation, but they all end up giving the same amount of information.)
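The "quarter-turn squared equals a flip" claim is literally just complex multiplication, which a few lines of code make concrete (a sketch, not part of the original lecture):

```python
import cmath
import math

# Multiplying by i is a quarter-turn; doing it twice is negation (a flip).
a = 5.0
print(1j * (1j * a))  # (-5+0j): two quarter-turns equal multiplication by -1

# Euler's formula: e^{i theta} rotates by theta without scaling.
theta = math.pi / 3
z = cmath.exp(1j * theta)
print(abs(z))                     # ~1.0: pure rotation, no scaling
print(z.real - math.cos(theta))   # ~0: horizontal coordinate is cos(theta)
print(z.imag - math.sin(theta))   # ~0: vertical coordinate is sin(theta)
```

This is exactly the geometric picture above: |e^{iθ}| = 1, so the point stays on the unit circle as θ sweeps around.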
To specify a scale factor in addition, we need one additional number, for a total of four. Therefore, whatever mathematical objects we employ to do for 3D space what the complex numbers do for 2D space, they must have four components. We can deduce another fact about the 3D analogues of complex numbers by looking at how successive rotations in 3D behave. Let’s pick a “zero point” somewhere in space to be our origin, and choose three perpendicular axes, which can be left-right, up-down and forward-back. We can rotate an object around any of these axes, by any amount we wish. To begin, we’ll consider rotations around the vertical and around the horizontal left-right axes, and we’ll rotate by one quarter-turn (90 degrees or π/2 radians) each time. My assistant will demonstrate a rotation around the vertical, followed by one around the horizontal: Now, surely, performing the same operations in the opposite order will have the same outcome, yes? It worked in two dimensions, didn’t it? Surprise! Rotations about different axes in three dimensions do not commute! This means that if we represent a rotation around the vertical by some “hypercomplex” number v, and one about the left-right axis by h, then whatever the specific form of v and h, we must have [tex] vh \neq hv.[/tex] Historically, this problem was approached in two different ways. One group of people followed the path to quaternions, while another took the other fork and developed vectors and matrices. Because we’re aiming for quantum mechanics, we’ll be taking the latter approach, though quaternions have interesting properties and practical applications too. (Mark Chu-Carroll and Tim Lambert wrote about them a few months ago.)
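The demonstration can be repeated with plain rotation matrices — pure Python, no libraries, and not tied to any particular aerospace convention:

```python
import math

# Quarter-turn (90 degree) rotation matrices about the z (vertical)
# and x (left-right) axes.
c, s = math.cos(math.pi / 2), math.sin(math.pi / 2)
Rz = [[c, -s, 0], [s, c, 0], [0, 0, 1]]   # rotation about the vertical axis
Rx = [[1, 0, 0], [0, c, -s], [0, s, c]]   # rotation about the left-right axis

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

AB = matmul(Rz, Rx)   # rotate about x first, then about z
BA = matmul(Rx, Rz)   # rotate about z first, then about x

different = any(abs(AB[i][j] - BA[i][j]) > 1e-12
                for i in range(3) for j in range(3))
print(different)  # True: 3D rotations about different axes do not commute
```

Exactly as the bear demonstrates: vh ≠ hv once you leave the plane.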
Sometimes it is possible to express a value of the $\Gamma$-function at a rational point through values of the $\Gamma$-function at rational points with smaller denominators, e.g. $$\Gamma\!\left(\tfrac{1}{10}\right)=\frac{\sqrt{5+\sqrt{5}}}{2^{7/10}\sqrt{\pi}}\Gamma\!\left(\tfrac{1}{5}\right)\Gamma\!\left(\tfrac{2}{5}\right).$$ Is it possible to do that with $\Gamma\!\left(\tfrac{1}{50}\right)$? By the duplication formula, $$\Gamma\left(\tfrac{1}{25}\right)=\Gamma\left(\tfrac{2}{50}\right) = C\cdot \Gamma\left(\tfrac{1}{50}\right)\Gamma\left(\tfrac{13}{25}\right),$$ hence $$\Gamma\left(\frac{1}{50}\right) = \frac{\Gamma\left(\frac{1}{25}\right)}{C\cdot \Gamma\left(\frac{13}{25}\right)}.$$ Making the constant $C=2^{-24/25}/\sqrt{\pi}$ explicit, $$\Gamma\left(\frac{1}{50}\right) = 2^{24/25}\sqrt{\pi}\, \frac{\Gamma\left(\frac{1}{25}\right)}{\Gamma\left(\frac{13}{25}\right)} = 49.44221\dots$$
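The resulting identity is easy to verify numerically with the standard library alone:

```python
import math

# Check Gamma(1/50) = 2^(24/25) * sqrt(pi) * Gamma(1/25) / Gamma(13/25),
# which follows from the duplication formula
# Gamma(z) * Gamma(z + 1/2) = 2^(1 - 2z) * sqrt(pi) * Gamma(2z) at z = 1/50.
lhs = math.gamma(1 / 50)
rhs = 2 ** (24 / 25) * math.sqrt(math.pi) * math.gamma(1 / 25) / math.gamma(13 / 25)

print(lhs, rhs)  # both approximately 49.44221...
```

The two sides agree to machine precision, consistent with the quoted value 49.44221…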
Simplify expression

Hi, I need to simplify the following. \(\cos\left(3\pi+\alpha\right)\) I am unsure of the steps to take to do this. Thanks

Quote: $\displaystyle \cos(x+y)=\cos{x}\cos{y}-\sin{x}\sin{y}$

Sorry, I'm still unsure what you mean. Do you mean... \(\cos\left(3\pi+\alpha\right)= \cos\left(3\pi\right)\cos\left(\alpha\right)-\sin\left(3\pi\right)\sin\left(\alpha\right)\) ??? I'm behind as an online student. Teacher and student interaction is at its minimum this semester. Haven't had a reply in my class forum for ages from the teacher.

That's correct. Now, do you know what \(\cos(3\pi)\) and \(\sin(3\pi)\) are?

Quote: I've been working with radians as well as degrees, so I assume \(3\pi\) would be the radian measure or \(540°\), but I don't need values. I'm looking on purplemath's website and I see the identity laws mentioned here. If I do \(\cos(3\pi)-\sin(3\pi)=-1\) I still have \(\cos(\alpha)-\sin(\alpha)\); would the answer be \(-\cos(\alpha)\)? I still don't understand how I can simplify this further.

$\displaystyle \cos(3\pi + \alpha) = \cos(3\pi)\cos(\alpha) - \sin(3\pi)\sin(\alpha)$ $\displaystyle \cos(3\pi + \alpha) = (-1) \cdot \cos(\alpha) - (0) \cdot \sin(\alpha) = -\cos(\alpha)$

-Dan
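A quick numerical spot-check of the final identity (not part of the original thread, just a sanity check):

```python
import math

# cos(3*pi + alpha) = -cos(alpha): spot-check at a handful of angles.
angles = [0.0, 0.3, 1.0, 2.5, -1.2]
for alpha in angles:
    lhs = math.cos(3 * math.pi + alpha)
    rhs = -math.cos(alpha)
    print(round(lhs - rhs, 12))  # ~0 each time
```

Every difference vanishes to rounding, confirming the simplification.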
A particle moves along the x-axis so that at time t its position is given by $x(t) = t^3-6t^2+9t+11$ during what time intervals is the particle moving to the left? so I know that we need the velocity for that and we can get that after taking the derivative but I don't know what to do after that the velocity would then be $v(t) = 3t^2-12t+9$ how could I find the intervals? Fix $c\in\{0,1,\dots\}$, let $K\geq c$ be an integer, and define $z_K=K^{-\alpha}$ for some $\alpha\in(0,2)$. I believe I have numerically discovered that $$\sum_{n=0}^{K-c}\binom{K}{n}\binom{K}{n+c}z_K^{n+c/2} \sim \sum_{n=0}^K \binom{K}{n}^2 z_K^n \quad \text{ as } K\to\infty$$ but cannot ... So, the whole discussion is about some polynomial $p(A)$, for $A$ an $n\times n$ matrix with entries in $\mathbf{C}$, and eigenvalues $\lambda_1,\ldots, \lambda_k$. Anyways, part (a) is talking about proving that $p(\lambda_1),\ldots, p(\lambda_k)$ are eigenvalues of $p(A)$. That's basically routine computation. No problem there. The next bit is to compute the dimension of the eigenspaces $E(p(A), p(\lambda_i))$. Seems like this bit follows from the same argument. An eigenvector for $A$ is an eigenvector for $p(A)$, so the rest seems to follow. Finally, the last part is to find the characteristic polynomial of $p(A)$. I guess this means in terms of the characteristic polynomial of $A$. Well, we do know what the eigenvalues are... The so-called Spectral Mapping Theorem tells us that the eigenvalues of $p(A)$ are exactly the $p(\lambda_i)$. Usually, by the time you start talking about complex numbers you consider the real numbers as a subset of them, since a and b are real in a + bi. But you could define it that way and call it a "standard form" like ax + by = c for linear equations :-) @Riker "a + bi where a and b are integers" Complex numbers a + bi where a and b are integers are called Gaussian integers.
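Returning to the velocity question at the top of this exchange: once $v(t)$ is factored, the sign analysis is mechanical. A quick sketch (my own factoring, not from the thread):

```python
# v(t) = 3t^2 - 12t + 9 = 3(t - 1)(t - 3), so v(t) < 0 (particle moving
# left) exactly on the interval 1 < t < 3.
def v(t):
    return 3 * t**2 - 12 * t + 9

roots = (1.0, 3.0)                   # zeros of v from the factored form
print([v(r) for r in roots])         # [0.0, 0.0]
print(v(2) < 0, v(0) > 0, v(4) > 0)  # True True True: left only on (1, 3)
```

Since the parabola opens upward, it is negative strictly between its roots, so the particle moves left for 1 < t < 3.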
I was wondering if it is easier to factor in a non-UFD than it is to factor in a UFD. I can come up with arguments for that, but I also have arguments in the opposite direction. For instance: it should be easier to factor when there are more possibilities (multiple factorizations in a non-UFD... Does anyone know if $T: V \to R^n$ is an inner product space isomorphism if $T(v) = (v)_S$, where $S$ is a basis for $V$? My book isn't saying so explicitly, but there was a theorem saying that an inner product isomorphism exists, and another theorem kind of suggesting that it should work. @TobiasKildetoft Sorry, I meant that they should be equal (accidentally sent this before writing my answer; writing it now). Isn't there this theorem saying that if $v,w \in V$ ($V$ being an inner product space), then $||v|| = ||(v)_S||$? (where the left norm is defined as the norm in $V$ and the right norm is the Euclidean norm) I thought that this would somehow result from isomorphism @AlessandroCodenotti Actually, such a $f$ in fact needs to be surjective. Take any $y \in Y$; the maximal ideal of $k[Y]$ corresponding to that is $(Y_1 - y_1, \cdots, Y_n - y_n)$. The ideal corresponding to the subvariety $f^{-1}(y) \subset X$ in $k[X]$ is then nothing but $(f^* Y_1 - y_1, \cdots, f^* Y_n - y_n)$. If this is empty, weak Nullstellensatz kicks in to say that there are $g_1, \cdots, g_n \in k[X]$ such that $\sum_i (f^* Y_i - y_i)g_i = 1$. Well, better to say that $(f^* Y_1 - y_1, \cdots, f^* Y_n - y_n)$ is the trivial ideal I guess. Hmm, I'm stuck again O(n) acts transitively on S^(n-1) with stabilizer at a point O(n-1) For any transitive G action on a set X with stabilizer H, G/H $\cong$ X set theoretically. In this case, as the action is a smooth action by a Lie group, you can prove this set-theoretic bijection gives a diffeomorphism
ä is in the extended latin block and n is in the basic latin block so there is a transition there, but you would have hoped \setTransitionsForLatin would have not inserted any code at that point as both those blocks are listed as part of the latin block, but apparently not.... — David Carlisle @egreg you are credited in the file, so you inherit the blame:-) @UlrikeFischer I was leaving it for @egreg to trace but I suspect the package makes some assumptions about what is safe, it offers the user "enter" and "exit" code for each block but xetex only has a single insert, the interchartoken at a boundary the package isn't clear what happens at a boundary if the exit of the left class and the entry of the right are both specified, nor if anything is inserted at boundaries between blocks that are contained within one of the meta blocks like latin. Why do we downvote to a total vote of -3 or even lower? Weren't we a welcoming and forgiving community with the convention to only downvote to -1 (except for some extreme cases, like e.g., worsening the site design in every possible aspect)? @Skillmon people will downvote if they wish, and given that the rest of the network regularly downvotes, lots of new users will not know or not agree with a "-1" policy. I don't think it was ever really that regularly enforced, just that a few regulars regularly voted for bad questions to top them up if they got a very negative score. I still do that occasionally if I notice one. @DavidCarlisle well, when I was new there was like never a question downvoted to more (or less?) than -1. And I liked it that way. My first question on SO got downvoted to -infty before I deleted it and fixed my issues on my own. @DavidCarlisle I meant the total. Still the general principle applies: when you're new and your question gets downvoted too much this might cause the wrong impressions.
@DavidCarlisle oh, subjectively I'd downvote that answer 10 times, but objectively it is not a good answer and might get a downvote from me, as you don't provide any reasoning for that, and I think that there should be a bit of reasoning with the opinion based answers, some objective arguments why this is good. See for example the other Emacs answer (still subjectively a bad answer), that one is objectively good. @DavidCarlisle and that other one got no downvotes. @Skillmon yes but many people just join for a while and come from other sites where downvoting is more common so I think it is impossible to expect there is no multiple downvoting, the only way to have a -1 policy is to get people to upvote bad answers more. @UlrikeFischer even harder to get than a gold tikz-pgf badge. @cis I'm not in the US but.... "Describe", while it does have a technical meaning close to what you want, is almost always used more casually to mean "talk about". I think I would say "Let k be a circle with centre M and radius r" @AlanMunn definitions.net/definition/describe gives a Webster's definition: "to represent by drawing; to draw a plan of; to delineate; to trace or mark out; as, to describe a circle by the compasses; a torch waved about the head in such a way as to describe a circle". If you are really looking for alternatives to "draw" in "draw a circle" I strongly suggest you hop over to english.stackexchange.com, create an account there and ask ... at least the number of native speakers of English will be bigger there and the gamification aspect of the site will ensure someone will rush to help you out. Of course there is also a chance that they will repeat the advice you got here: to use "draw". @0xC0000022L @cis You've got identical responses here from a mathematician and a linguist. And you seem to have an idea that because a word is informal in German, its translation in English is also informal. This is simply wrong.
And formality shouldn't be an aim in and of itself in any kind of writing. @0xC0000022L @DavidCarlisle Do you know the book "The Bronstein" in English? I think that's a good example of archaic mathematician language. But it is still possibly harder to read; probably depends heavily on the translation. @AlanMunn I am very well aware of the differences of word use between languages (and my limitations in regard to my knowledge and use of English as a non-native speaker). In fact, words in different (related) languages sharing the same origin is kind of a hobby. Needless to say, more than once the contemporary meaning didn't match 100%. However, your point about formality is well made. A book - in my opinion - is first and foremost a vehicle to transfer knowledge. No need to complicate matters by trying to sound ... well, overly sophisticated (?) ... The following MWE with showidx and imakeidx:

\documentclass{book}
\usepackage{showidx}
\usepackage{imakeidx}
\makeindex
\begin{document}
Test\index{xxxx}
\printindex
\end{document}

generates the error:

! Undefined control sequence.
<argument> \ifdefequal{\imki@jobname }{\@idxfile }{}{...

@EmilioPisanty ok I see. That could be worth writing to the arXiv webmasters as this is indeed strange. However, it's also possible that the publishing of the paper got delayed; AFAIK the timestamp is only added later to the final PDF. @EmilioPisanty I would imagine they have frozen the epoch settings to get reproducible pdfs, not necessarily that helpful here but..., anyway it is better not to use \today in a submission as you want the authoring date, not the date it was last run through tex and yeah, it's better not to use \today in a submission, but that's beside the point - a whole lot of arXiv eprints use the syntax and they're starting to get wrong dates @yo' it's not that the publishing got delayed.
arXiv caches the pdfs for several years but at some point they get deleted, and when that happens they only get recompiled when somebody asks for them again and, when that happens, they get imprinted with the date at which the pdf was requested, which then gets cached Does any of you on linux have issues running for foo in *.pdf ; do pdfinfo $foo ; done in a folder with suitable pdf files? My box says pdfinfo does not exist, but clearly it does when I run it on a single pdf file. @EmilioPisanty that's a relatively new feature, but I think they have a new enough tex, but not everyone will be happy if they submit a paper with \today and it comes out with some arbitrary date like 1st Jan 1970 @DavidCarlisle add \def\today{24th May 2019} in INITEX phase and recompile the format daily? I agree, too much overhead. They should simply add "do not use \today" in these guidelines: arxiv.org/help/submit_tex @yo' I think you're vastly over-estimating the effectiveness of that solution (and it would not solve the problem with 20+ years of accumulated files that do use it) @DavidCarlisle sure. I don't know what the environment looks like on their side so I won't speculate. I just want to know whether the solution needs to be on the side of the environment variables, or whether there is a tex-specific solution @yo' that's unlikely to help with prints where the class itself calls from the system time. @EmilioPisanty well the environment vars do more than tex (they affect the internal id in the generated pdf or dvi and so produce reproducible output), but you could as @yo' showed redefine \today or the \year, \month, \day primitives on the command line @EmilioPisanty you can redefine \year \month and \day which catches a few more things, but same basic idea @DavidCarlisle could be difficult with inputted TeX files. It really depends on at which phase they recognize which TeX file is the main one to proceed.
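The redefinitions discussed above would look something like this (a sketch combining the suggestions in the chat; whether it fits arXiv's actual pipeline is speculation, and the date shown is just the example from the chat):

```latex
% Freeze \today for reproducible output, e.g. in a wrapper file or on the
% command line:  pdflatex "\def\today{24th May 2019}\input{paper}"
\def\today{24th May 2019}
% Classes that read the date primitives directly need these as well:
\year=2019 \month=5 \day=24
```

This catches documents that call \today, and the primitive assignments catch classes that build their own date string; it does nothing for code that queries the system clock by other means.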
And as their workflow is pretty unique, it's hard to tell which way is even compatible with it. "beschreiben" (English "describe") comes from the mathematical technical language of the 16th century, that is, from Middle High German, and means roughly "construct"; that in turn comes from the original meaning of describe, "making a curved movement". This usage appears in the literary style of the 19th and 20th centuries and in the GDR. You have that in English too: scribe (verb), "score a line on with a pointed instrument, as in metalworking" https://www.definitions.net/definition/scribe @cis Yes, as @DavidCarlisle pointed out, there is a very technical mathematical use of 'describe' which is what the German version means too, but we both agreed that people would not know this use, so using 'draw' would be the most appropriate term. This is not about trendiness, just about making language understandable to your audience. Plan figure. The barrel circle over the median $s_b = |M_b B|$, which subtends the angle $\alpha$, also contains an isosceles triangle $M_b P B$ with the base $|M_b B|$ and the angle $\alpha$ at the point $P$. The altitude on the base of the isosceles triangle bisects both $|M_b B|$ at $M_{s_b}$ and the angle $\alpha$ at the apex. \par The centroid $S$ divides the medians in the ratio $2:1$, with the longer part lying on the side of the vertex. The point $A$ lies on the barrel circle and on a circle $\bigodot(S,\frac23 s_a)$ described about $S$ of radius…
An electron has angular momentum. Shouldn't it also have angular velocity? Ignoring the g-factor (just for the order of magnitude approximation) and the fact that an electron is not a sphere, the electron's angular velocity should be around: $$ \omega \approx \frac{\mu}{er^2} $$ or about $0.01$ to $10^{17}$ rad/s depending on whether the radius is the classical radius, the Compton wavelength, or the Planck length. Is there some "average" angular velocity that can be assigned to the electron?
8.3.2 - Hypothesis Testing

Below are the procedures for conducting a hypothesis test for two paired means. This is often referred to as a "paired means \(t\) test," "dependent means \(t\) test," or "matched pairs \(t\) test." Data must be paired. The difference between the two groups must be normally distributed in the population or the sample size must be at least 30. The possible combinations of null and alternative hypotheses are:

| Research Question | Is the mean difference different from 0? | Is the mean difference greater than 0? | Is the mean difference less than 0? |
| --- | --- | --- | --- |
| Null Hypothesis, \(H_{0}\) | \(\mu_d = 0\) | \(\mu_d = 0\) | \(\mu_d = 0\) |
| Alternative Hypothesis, \(H_{a}\) | \(\mu_d \neq 0\) | \(\mu_d > 0\) | \(\mu_d < 0\) |
| Type of Hypothesis Test | Two-tailed, non-directional | Right-tailed, directional | Left-tailed, directional |

where \(\mu_d\) is the mean difference in the population. The calculation of the test statistic for dependent samples is similar to the calculation you performed earlier in this lesson for a single sample mean. In this formula, \(\overline{x}_d\) is used in place of \(\overline{x}\) and \(s_d\) is used in place of \(s\):

Test Statistic for Dependent Means
\(t=\frac{\bar{x}_d-\mu_0}{\frac{s_d}{\sqrt{n}}}\)
\(\overline{x}_d\) = observed sample mean difference
\(\mu_0\) = mean difference specified in the null hypothesis
\(s_d\) = standard deviation of the differences
\(n\) = sample size (i.e., number of unique individuals)

Observed Sample Mean Difference
\(\overline{x}_d=\frac{\Sigma{x}_d}{n}\)
\(x_d\) = observed difference

Standard Deviation of the Differences
\(s_d=\sqrt{\frac{\sum (x_d-\overline{x}_d)^{2}}{n-1}}\)

When testing hypotheses about a mean difference, a \(t\) distribution is used to find the \(p\) value. The degrees of freedom are equal to \(n-1\) where \(n\) is the number of pairs. If \(p \leq \alpha\), reject the null hypothesis. If \(p>\alpha\), fail to reject the null hypothesis.
Based on your decision in Step 4, write a conclusion in terms of the original research question.
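The formulas above fit in a few lines of code. A sketch with made-up paired differences (illustrative data, not from the lesson):

```python
import math

# Paired-means t statistic for H0: mu_d = 0.
# Hypothetical observed differences (e.g. post - pre) for five subjects:
d = [2, 4, 6, 8, 10]
n = len(d)

x_bar = sum(d) / n                          # observed sample mean difference
s_d = math.sqrt(sum((x - x_bar) ** 2 for x in d) / (n - 1))  # std dev of diffs
t = (x_bar - 0) / (s_d / math.sqrt(n))      # test statistic, df = n - 1

print(x_bar, s_d, t)  # 6.0, sqrt(10) ~ 3.1623, t ~ 4.2426
```

The p value would then come from a \(t\) distribution with \(n-1 = 4\) degrees of freedom, exactly as described in the lesson.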
Study Radiofrequency Tissue Ablation Using Simulation Radiofrequency tissue ablation is a medical procedure that uses targeted heat for a variety of medical purposes, including killing cancerous cells, shrinking collagen, and alleviating pain. The process involves applying mid- to high-frequency alternating current directly to the tissue, raising the temperature in a focused region near the applicator. We can simulate this process with COMSOL Multiphysics and the AC/DC and Heat Transfer modules. In today’s blog post, we will go over some key concepts for modeling this procedure. What Is Radiofrequency Tissue Ablation? Whenever an alternating electric current (or a direct current, for that matter) is applied to living tissue, there will be heat generation and temperature rise due to Joule heating. The ability to target this heat to specific localized tissue areas is a key advantage of the radiofrequency tissue ablation technique. In one of many medical applications, a cancerous tumor is a localized target. Using heat, the temperature of the area is raised to kill the cancer cells. Alternating current is used (rather than direct) to avoid stimulating nerve cells and causing pain. When alternating current is used, and the frequency is high enough, the nerve cells are not directly stimulated. To understand how we can model this process, let’s examine the figures below, which show some of the key concepts of this technique. A tumor within healthy tissue. Capillaries perfuse blood through the tissue and tumor. When an undesirable tissue mass is identified, such as a tumor, a doctor can use either a monopolar or bipolar applicator to inject current into and around the tumor. The current comes from a generator and varies sinusoidally in time. Frequencies of 300 to 500 kHz are common, although the procedure can use much lower frequencies. 
There are a wide variety of electrode configurations ranging from flat plates and single needles to a cluster of needles, depending on the desired shape of the heated domain and how the doctor will access the tissue. One common class of applicator is deployed through the circulatory system by using a long, flexible catheter and then extending a set of needles from the distal end into the tissue to be heated. A monopolar applicator is made up of a needle and patch applicator, whereas a bipolar applicator consists of two needle electrodes. More than two applicators and other applicator configurations are also possible. By convention, one electrode is called the ground, or reference, electrode. The voltage applied at the other electrode is with respect to this ground. A monopolar radiofrequency applicator and a patch electrode on the skin’s surface. A bipolar applicator primarily heats the region between the electrodes. An engineer designing one of these devices has a complicated problem to solve. The shape of the heated tissue depends on the shape and number of electrodes; which part is insulated and which is not; and ultimately, the thermal energy absorption distribution of the nearby tissue over time. The sharp, pointed ends of the needle electrodes complicate the design process, since they lead to high current densities and thus uneven temperature rise along the needle. For the cancerous tumor application, the goal is to kill the undesirable tissue mass and leave the surrounding healthy tissue unharmed. For shrinking collagen, the goal is still to heat tissue, but to avoid any possibility of damaging cells. COMSOL Multiphysics simulation streamlines and shortens this process. To properly model this procedure, we must build a model of the electric current flow through the tissue as well as the heat generation and temperature rise. Let’s explore these steps. 
Analyzing Joule Heating and Current Flow

We begin by examining the typical material properties of both the applicator and living tissue and discuss how these materials behave at an operating frequency of 500 kHz. The table below shows the representative electrical conductivity, $\sigma$; relative permittivity, $\epsilon_r$; skin depth, $\delta$; and complex-valued conductivity, $(\sigma+j\omega \epsilon_0 \epsilon_r)$, at 500 kHz. Although there is variation in the electrical conductivity and relative permittivity of different tissues, for the purposes of this discussion, we will approximate the human body as having the properties of a weak saline solution. The actual properties of tissue do not vary by much more than one order of magnitude from this value, while the conductivity of the electrode and insulator are over five orders of magnitude larger or smaller.

| Material | Electrical Conductivity (S/m) | Relative Permittivity | Skin Depth at 500 kHz (m) | Complex Conductivity at 500 kHz (S/m) |
| --- | --- | --- | --- | --- |
| Metal Electrode | $10^{6}$ | 1 | $\sim 10^{-4}$ | $10^{6} + j\,4\times 10^{-6}$ |
| Polymer Insulator | $10^{-12}$ | 2 | $\sim 10^{10}$ | $10^{-12} + j\,9\times 10^{-5}$ |
| “Average” Human Tissue | 0.5 | 65 | 1 | $0.5 + j\,0.0003$ |

We compute the skin depth to decide if we need to compute the magnetic fields and any heating due to induced currents. At 500 kHz, the electrical skin depth of the human body is on the order of one meter, while the heated regions have a typical size on the order of a centimeter. Hence, we can make the approximation that heating due to induced currents in the tissue is negligible and need not be calculated. Note that this approximation will not be valid if some small pieces of metal exist within the tissue, such as a stent within a nearby blood vessel. We can also see from the magnitude of the complex conductivity in the above table that the electrodes are essentially perfect conductors when compared to tissue. Similarly, the polymer insulators can be well approximated as perfect insulators when compared to human tissue.
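The skin-depth estimate quoted for tissue can be reproduced with the standard good-conductor formula $\delta=\sqrt{2/(\omega\mu_0\sigma)}$ — a back-of-the-envelope sketch, not a COMSOL model:

```python
import math

# Skin depth delta = sqrt(2 / (omega * mu0 * sigma)) for "average" tissue
# at f = 500 kHz, using the conductivity from the table above.
mu0 = 4e-7 * math.pi        # vacuum permeability, H/m
f = 500e3                   # operating frequency, Hz
omega = 2 * math.pi * f
sigma = 0.5                 # S/m, "average" human tissue

delta = math.sqrt(2 / (omega * mu0 * sigma))
print(delta)  # ~1 m, much larger than the ~1 cm heated region
```

Since the heated region is two orders of magnitude smaller than the skin depth, neglecting induced-current (magnetic) heating is justified, as the text argues.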
This information lets us choose the form of our governing equation. Under the assumptions that magnetic fields and induction currents are negligible and that we are operating at a constant frequency, we can solve the frequency-domain form of the electric currents equation. Further assuming that the human body itself does not generate any significant currents, the governing equation is:

\nabla \cdot \left( \left( \sigma + j\omega \epsilon_0 \epsilon_r \right) \nabla V \right) = 0

which solves for the voltage field, V, throughout the modeling domain. The electric field is computed from the gradient of the voltage: \mathbf{E} = -\nabla V. The total current is \mathbf{J} = (\sigma+j\omega \epsilon_0 \epsilon_r) \mathbf{E} and the cycle-averaged Joule heating is Q = \frac{1}{2} \Re (\mathbf{J}^* \cdot \mathbf{E}). Since the conductors are essentially perfectly conducting compared to the tissue, we can omit these domains from our electrical model. That is, we can assume that all surfaces of the metal electrodes are equipotential. This is reasonable as long as the equivalent free-space wavelength (\lambda = c_0/f = 600 \text{ m}) is much larger than the model size. When using the AC/DC Module, we can use the Terminal boundary condition to fix the voltage on all surfaces of the electrode. The Terminal boundary condition can specify the applied voltage, total current, or total power fed into the boundaries. It is reasonable to ask why the conductor is omitted, for there is indeed some finite heat loss within the electrode itself. The heating within the electrode, however, is many orders of magnitude lower than in the surrounding tissue. Although the currents in the conductor can be quite high, the electric field (the variation in the voltage along the electrode) is quite small, hence the heating is negligible. Similarly, since the insulators are essentially perfect, these domains can also be eliminated from the electrical model. In the insulators, the electric fields may be quite high, but the current is essentially zero, which again means negligible heating.
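To make the electrical problem concrete, here is a minimal finite-difference sketch (not COMSOL, and entirely illustrative): a toy 2D domain with uniform tissue conductivity, a small square "electrode" held at 1 V in the center (playing the role of the Terminal condition), a grounded outer boundary, and a Jacobi iteration. The geometry and all values are assumptions for illustration only.

```python
import numpy as np

n = 41                  # grid points per side
sigma = 0.5             # tissue conductivity (S/m), from the table above
h = 0.001               # grid spacing (m): an illustrative 4 cm square domain
V = np.zeros((n, n))

# Terminal: a small square "electrode" in the center, held at 1 V
electrode = np.zeros((n, n), dtype=bool)
electrode[18:23, 18:23] = True
V[electrode] = 1.0

# Jacobi iteration: with uniform sigma, div(sigma grad V) = 0 reduces to
# Laplace's equation for V
for _ in range(5000):
    Vn = V.copy()
    Vn[1:-1, 1:-1] = 0.25 * (V[2:, 1:-1] + V[:-2, 1:-1] + V[1:-1, 2:] + V[1:-1, :-2])
    Vn[electrode] = 1.0                                  # re-impose Terminal condition
    Vn[0, :] = Vn[-1, :] = Vn[:, 0] = Vn[:, -1] = 0.0    # grounded outer boundary
    V = Vn

# E = -grad V, J = sigma*E (displacement part neglected for tissue),
# Q = 0.5 * Re(J* . E) = 0.5 * sigma * |E|^2 for a 1 V peak drive
Ey, Ex = np.gradient(-V, h)
Q = 0.5 * sigma * (Ex**2 + Ey**2)   # time-averaged Joule heating (W/m^3)
```

The heating is concentrated near the electrode, where the voltage gradient is steepest, which is the same qualitative behavior described for the needle tips above.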
The Electric Insulation boundary condition, \mathbf{n} \cdot \mathbf{J} = 0, can be applied on the boundaries of the insulators and implies that no current (neither conduction nor displacement current) passes through these boundaries. There is one caveat to this: If the electrodes are completely enclosed within the insulators, then there will be significant displacement currents in the insulators, and these domains should be included in the model. On the exposed surface of the skin, the Electric Insulation boundary condition is also appropriate. However, if there is an external electrode patch applied to the skin's surface, then current can pass through the skin to the electrode. The conductivity of skin is lower than that of the underlying tissue, and this should be modeled. However, we may not want to model the skin explicitly as a separate domain. In such cases, the Distributed Impedance boundary condition applies the condition \mathbf{n} \cdot \mathbf{J} = Z_s^{-1}(V-V_0), where V_0 is the external electrode voltage and Z_s is the equivalent computed impedance of the skin. A schematic of such a model is shown below, with representative material properties and boundary conditions. Now that the electrical model is addressed, let's move on to the thermal model.

A schematic of an electrical model of radiofrequency tissue ablation. Representative material properties are shown on the left. The modeling domain and governing equations are shown to the right.

Computing Temperature Rise in Human Tissue

The objective of the thermal model is quite straightforward: to compute the rise in tissue temperature over time due to the electrical heating and predict the size of the ablated region. The governing equation for temperature, T, is the Pennes bioheat equation:

\rho C_p \frac{\partial T}{\partial t} = \nabla \cdot \left( k \nabla T \right) + \rho_b C_{p,b} \omega_b \left( T_b - T \right) + Q_{met} + Q

where \rho and C_p are the density and specific heat of the tissue, k is its thermal conductivity, Q is the resistive heating computed above, and \rho_b and C_{p,b} are the density and specific heat of the blood perfusing through the tissue at a rate of \omega_b.
T_b is the arterial blood temperature and Q_{met} is the metabolic heat rate of the tissue itself. This equation is implemented within the Heat Transfer Module. If the last two terms are omitted, the above equation reduces to the standard transient heat transfer equation. It is also necessary to specify boundary conditions on the exterior of the modeling domain. The most conservative condition would be the Thermal Insulation boundary condition, which implies that the body is perfectly insulated. This would lead to the fastest rise in temperature over time. A more physically realistic boundary condition would be the Convective Heat Flux condition:

-\mathbf{n} \cdot \mathbf{q} = h \left( T_{ext} - T \right), \qquad \mathbf{q} = -k \nabla T

with a heat transfer coefficient of h = 5{-}10 \text{ W/m}^2\text{K} and an external temperature of T_{ext} = 20{-}25\,^{\circ}\text{C}. This reasonably approximates the free convective cooling from uncovered skin to ambient conditions. Along with the change in temperature, we also want to compute the tissue damage. The Heat Transfer Module offers two different methods for evaluating this:

- Time-at-temperature threshold analysis: If the tissue is heated above a specified damage temperature for a specified time (e.g., over 50°C for over 50 seconds), or if a peak temperature of necrosis is ever instantaneously exceeded (e.g., 100°C), then the tissue is considered irreversibly damaged. A tissue damage fraction is also computed based upon the damage temperature and time (e.g., over 50°C for 25 seconds would lead to 50% damage).
- Energy absorption analysis: Given a frequency factor and activation energy that are properties of the tissue being studied, the Arrhenius equation is used to compute the fraction of damaged tissue.

Along with these predefined damage integrals, it is also possible to implement a user-defined equation for damage analysis via the equation-based modeling capabilities of COMSOL Multiphysics.

Representative radiofrequency ablation results from a 2D axisymmetric model.
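As a toy illustration of the thermal model and the two damage measures (my own sketch, not the COMSOL implementation): a 1D explicit finite-difference march of the Pennes equation with a constant heating term near the middle of the domain, followed by both damage evaluations on the midpoint temperature history. The material values are representative textbook-style numbers, and the Arrhenius parameters are of the order commonly quoted for liver tissue; treat all of them as assumptions.

```python
import math
import numpy as np

# --- Pennes bioheat equation, 1D explicit finite differences (illustrative) ---
rho, cp, k = 1050.0, 3600.0, 0.5       # tissue density, specific heat, conductivity
rho_b, cp_b = 1000.0, 4180.0           # blood density and specific heat
omega_b = 0.004                        # blood perfusion rate (1/s), illustrative
T_b, Q_met = 37.0, 0.0                 # arterial blood temp (degC); metabolism neglected
nx, dx = 101, 0.5e-3                   # 5 cm of tissue
dt = 0.5 * rho * cp * dx**2 / (2 * k)  # stable explicit time step
Q = np.zeros(nx)
Q[45:56] = 8e5                         # resistive heating near the middle (W/m^3)

T = np.full(nx, 37.0)
history = []                           # midpoint temperature history (K)
for _ in range(2000):                  # roughly 16 minutes of heating
    lap = np.zeros(nx)
    lap[1:-1] = (T[2:] - 2 * T[1:-1] + T[:-2]) / dx**2
    T = T + dt * (k * lap + rho_b * cp_b * omega_b * (T_b - T) + Q_met + Q) / (rho * cp)
    T[0] = T[-1] = 37.0                # far boundaries held at body temperature
    history.append(T[nx // 2] + 273.15)

# --- Damage model 1: time-at-temperature threshold ---------------------------
# "over 50 degC for over 50 seconds", instant necrosis at 100 degC
time_above = sum(dt for TK in history if TK - 273.15 >= 50.0)
d_threshold = 1.0 if max(history) >= 373.15 else min(time_above / 50.0, 1.0)

# --- Damage model 2: Arrhenius integral --------------------------------------
# theta_d = 1 - exp(-integral A*exp(-Ea/RT) dt); A, Ea illustrative (liver-like)
R, A, Ea = 8.314, 7.39e39, 2.577e5
alpha = sum(A * math.exp(-Ea / (R * TK)) * dt for TK in history)
d_arrhenius = 1.0 - math.exp(-alpha)
```

With this heating level, the midpoint sits in the mid-50s °C for several minutes, and both damage measures saturate at complete necrosis, which is the intended outcome for the tumor application.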
Two insulated applicators are inserted into a tumor within the body to heat and kill the diseased tissue. The plotted results include the voltage field (top left), the resistive heating (bottom left), and the temperature and size of the completely damaged tissue at two different times (right).

Solving the Coupled Problem to Understand Radiofrequency Tissue Ablation

We have now developed a model that combines a frequency-domain electromagnetics problem with a transient thermal problem. COMSOL Multiphysics solves this coupled problem using a so-called frequency-transient study type. The frequency-domain problem is a linear stationary equation, since it is reasonable to assume that the electrical properties are linear with respect to electric field strength over one period of oscillation. Thus, COMSOL Multiphysics first solves for the voltage field using a stationary solver and then computes the resistive heating. This resistive heating term is then passed over to the transient thermal problem, which is solved with a time-dependent solver. This solver computes the change in temperature over time. The frequency-transient study type automatically accounts for material properties that change with temperature and with the tissue damage fraction. If the temperature rise or tissue damage changes the material properties enough to alter the magnitude of the resistive heating, then the electrical problem is automatically recomputed with updated material properties. This can also be described as a segregated approach to solving a multiphysics problem. In such thermal ablation processes, it is also common to vary the magnitude of the applied electrical heating, pulsing the load on and off at known times. In such situations, the Explicit Events interface can be used, as described in our earlier blog post on modeling periodic heat loads.
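The segregated, pulsed-load strategy can be caricatured in a toy 0D model (entirely illustrative, not COMSOL's solver): a single tissue "node" heated by Q = 0.5·σ(T)·|E|², with a made-up temperature-dependent conductivity σ(T) = σ₀(1 + α(T − T₀)), a 10 s on / 5 s off pulse schedule, and the "electrical solve" redone only when σ has drifted by more than 1% since the last solve. All numbers are assumptions for illustration.

```python
# Toy 0D caricature of the segregated frequency-transient strategy with a
# pulsed load. None of these values come from the post; all are illustrative.
rho_cp = 3.78e6        # volumetric heat capacity rho*Cp (J/m^3/K), tissue-like
h_loss = 5e3           # lumped heat-loss coefficient (W/m^3/K)
E = 1.0e3              # applied field magnitude while the pulse is on (V/m)
T0 = 37.0
sigma0, alpha = 0.5, 0.02   # sigma(T) = sigma0*(1 + alpha*(T - T0))

def electrical_solve(T, on):
    """The 'stationary electrical solve': returns sigma and the heating Q."""
    sigma = sigma0 * (1.0 + alpha * (T - T0))
    return sigma, (0.5 * sigma * E**2 if on else 0.0)

T, dt = T0, 0.1
sigma_used, Q = electrical_solve(T, True)
n_electrical_solves = 1
for step in range(3000):               # 300 s of treatment
    t = step * dt
    on = (t % 15.0) < 10.0             # pulsed load: 10 s on, 5 s off
    T += dt * (Q - h_loss * (T - T0)) / rho_cp
    sigma_now = sigma0 * (1.0 + alpha * (T - T0))
    # Re-solve the electrical problem at pulse edges, or when the
    # temperature-dependent conductivity has drifted by more than 1%
    if (Q > 0.0) != on or abs(sigma_now - sigma_used) / sigma_used > 0.01:
        sigma_used, Q = electrical_solve(T, on)
        n_electrical_solves += 1
```

The point of the sketch is the bookkeeping: the expensive electrical solve is repeated only at pulse edges or when the material properties have changed enough to matter, which is the essence of the segregated frequency-transient approach described above.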
If you instead want the heat load to change as a function of the solution itself, then the Implicit Events interface can be used to implement feedback, as described in our earlier blog post on implementing a thermostatic controller.

Explore More Resources for Radiofrequency Tissue Ablation Modeling

If you are interested in studying radiofrequency tissue ablation, there are several other resources worth exploring:

- If your electrodes have sharp edges and you are concerned about localized heating near these edges, consider adding fillets to your model, since a sharp edge leads to a locally inaccurate result for the heating. Keep in mind, however, that despite the local inaccuracy, the total heating will still be quite accurate with a sharp edge, and the temperature field away from the edge can still be quite accurate, so filleting is not always necessary.
- If there are any relatively thin layers of material with much higher or lower electrical conductivity than their surroundings, consider using the Electric Shielding or Contact Impedance boundary conditions for the electrical problem. Similar boundary conditions are available for thin layers in thermal models as well.
- If you are interested in modeling at much higher frequencies, such as in the microwave regime, then you need to consider an electromagnetic wave propagating through the tissue. In such cases, look to the RF Module and the Conical Dielectric Probe for Skin Cancer Diagnosis example in the Application Gallery.
- At even higher frequencies, in the optical regime, a range of modeling approaches are possible, as described in our blog post on modeling laser-material interactions.
- The heat source for your problem need not even be electrical. High-intensity focused ultrasound is another ablation technique and can be modeled, as described in the Focused Ultrasound Induced Heating in Tissue Phantom tutorial in the Application Gallery.
In closing, we have shown that COMSOL Multiphysics, in conjunction with the AC/DC Module and Heat Transfer Module, gives you the capability and flexibility to model radiofrequency tissue ablation procedures. If you are interested in using COMSOL Multiphysics for this type of modeling, or have any other questions, please contact us.
Now showing items 1-10 of 33:

- The ALICE Transition Radiation Detector: Construction, operation, and performance (Elsevier, 2018-02). The Transition Radiation Detector (TRD) was designed and built to enhance the capabilities of the ALICE detector at the Large Hadron Collider (LHC). While aimed at providing electron identification and triggering, the TRD ...
- Constraining the magnitude of the Chiral Magnetic Effect with Event Shape Engineering in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV (Elsevier, 2018-02). In ultrarelativistic heavy-ion collisions, the event-by-event variation of the elliptic flow $v_2$ reflects fluctuations in the shape of the initial state of the system. This allows to select events with the same centrality ...
- First measurement of jet mass in Pb–Pb and p–Pb collisions at the LHC (Elsevier, 2018-01). This letter presents the first measurement of jet mass in Pb-Pb and p-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV and 5.02 TeV, respectively. Both the jet energy and the jet mass are expected to be sensitive to jet ...
- First measurement of $\Xi_{\rm c}^0$ production in pp collisions at $\mathbf{\sqrt{s}}$ = 7 TeV (Elsevier, 2018-06). The production of the charm-strange baryon $\Xi_{\rm c}^0$ is measured for the first time at the LHC via its semileptonic decay into e$^+\Xi^-\nu_{\rm e}$ in pp collisions at $\sqrt{s}=7$ TeV with the ALICE detector. The ...
- D-meson azimuthal anisotropy in mid-central Pb-Pb collisions at $\mathbf{\sqrt{s_{\rm NN}}=5.02}$ TeV (American Physical Society, 2018-03). The azimuthal anisotropy coefficient $v_2$ of prompt D$^0$, D$^+$, D$^{*+}$ and D$_s^+$ mesons was measured in mid-central (30-50% centrality class) Pb-Pb collisions at a centre-of-mass energy per nucleon pair $\sqrt{s_{\rm NN}}$ ...
- Search for collectivity with azimuthal J/$\psi$-hadron correlations in high multiplicity p-Pb collisions at $\sqrt{s_{\rm NN}}$ = 5.02 and 8.16 TeV (Elsevier, 2018-05). We present a measurement of azimuthal correlations between inclusive J/$\psi$ and charged hadrons in p-Pb collisions recorded with the ALICE detector at the CERN LHC. The J/$\psi$ are reconstructed at forward (p-going, ...
- Systematic studies of correlations between different order flow harmonics in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV (American Physical Society, 2018-02). The correlations between event-by-event fluctuations of anisotropic flow harmonic amplitudes have been measured in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV with the ALICE detector at the LHC. The results are ...
- $\pi^0$ and $\eta$ meson production in proton-proton collisions at $\sqrt{s}=8$ TeV (Springer, 2018-03). An invariant differential cross section measurement of inclusive $\pi^{0}$ and $\eta$ meson production at mid-rapidity in pp collisions at $\sqrt{s}=8$ TeV was carried out by the ALICE experiment at the LHC. The spectra ...
- J/$\psi$ production as a function of charged-particle pseudorapidity density in p-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV (Elsevier, 2018-01). We report measurements of the inclusive J/$\psi$ yield and average transverse momentum as a function of charged-particle pseudorapidity density ${\rm d}N_{\rm ch}/{\rm d}\eta$ in p-Pb collisions at $\sqrt{s_{\rm NN}}= 5.02$ ...
- Energy dependence and fluctuations of anisotropic flow in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 5.02 and 2.76 TeV (Springer Berlin Heidelberg, 2018-07-16). Measurements of anisotropic flow coefficients with two- and multi-particle cumulants for inclusive charged particles in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 5.02 and 2.76 TeV are reported in the pseudorapidity range |η| < 0.8 ...
Hello, I have a problem calculating the position of a pendulum as a function of time. For example: $\theta (t)$ is a function of time which returns the angle made by the pendulum at a particular instant with respect to its equilibrium position. So, $$ T = \dfrac 12 m l^2 \dot \theta^2 $$ $$ U = - mgl \cos \theta $$ $$ L(\theta, \dot \theta) = \dfrac 12 m l^2 \dot \theta^2 + mgl \cos \theta $$ Using the Euler-Lagrange formula, $$ \dfrac d{dt} \left ( \dfrac{\partial L}{\partial \dot \theta}\right) - \dfrac{\partial L}{\partial \theta} = 0 $$ we get $$ \boxed{\ddot \theta =- \dfrac gl \sin \theta} $$ which is the equation of motion. But most of the derivations I've seen/read go this way: $$ \ddot \theta = -\dfrac gl \theta \quad \dots \quad (\text{as } \sin \theta \approx \theta \text{ for } \theta \rightarrow 0) \tag{*} $$ $$ \theta (t) = \cos \left ( \sqrt{\dfrac gl} t \right) $$ because it satisfies $(*)$. So, I have 2 questions here. Other solutions of this second-order differential equation exist, like $\theta (t) = e^{i \left( \sqrt{\frac gl} \right) t}$. So why do we choose only that one? One could argue that the cosine function oscillates the way the pendulum does, so it makes sense to accept the oscillating one. But in the general case, when we solve the Lagrangian and get the equation of motion in differential form, there are tons of complicated situations possible. How can you determine which kind of solution is needed? And how can we solve the second-order differential equation $\ddot \theta = - \dfrac gl \sin \theta$ exactly and get a closed formula for it? Thanks :)
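For reference, the exact equation can at least be integrated numerically (the exact closed-form solution involves Jacobi elliptic functions). A minimal RK4 sketch, with g, l, and the initial angle chosen arbitrarily, comparing the exact integration against the small-angle solution $\theta_0 \cos(\sqrt{g/l}\, t)$:

```python
import math

g, l = 9.81, 1.0
w = math.sqrt(g / l)      # small-angle angular frequency
theta0 = 0.05             # small initial angle (rad), released from rest

def deriv(state):
    theta, omega = state
    return (omega, -(g / l) * math.sin(theta))   # exact equation of motion

def rk4_step(state, dt):
    k1 = deriv(state)
    k2 = deriv((state[0] + dt/2*k1[0], state[1] + dt/2*k1[1]))
    k3 = deriv((state[0] + dt/2*k2[0], state[1] + dt/2*k2[1]))
    k4 = deriv((state[0] + dt*k3[0], state[1] + dt*k3[1]))
    return (state[0] + dt/6*(k1[0] + 2*k2[0] + 2*k3[0] + k4[0]),
            state[1] + dt/6*(k1[1] + 2*k2[1] + 2*k3[1] + k4[1]))

dt, t = 0.001, 0.0
state = (theta0, 0.0)
max_err = 0.0
while t < 2.0:
    state = rk4_step(state, dt)
    t += dt
    small_angle = theta0 * math.cos(w * t)       # linearized prediction
    max_err = max(max_err, abs(state[0] - small_angle))
```

At a 0.05 rad amplitude, the exact and small-angle solutions stay within a fraction of a percent of the amplitude over a couple of periods, which is why the linearized cosine solution is accepted in that regime.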
last week in chemistry

43 new preprints about chemistry in the last week. chemRxiv was the most active server in chemistry in the last week. Most popular topics mentioned by preprints:

- How to Stay out of Trouble in RIXS Calculations Within the Equation-of-Motion Coupled-Cluster Damped Response Theory Framework? Safe Hitchhiking in the Excitation Manifold by Means of Core-Valence Separation (chemRxiv)
- Relativistic many body theory of the electric dipole moment of $^{129}$Xe and its implications for probing new physics beyond the Standard Model (arXiv)
- Dyson Orbitals within the fc-CVS-EOM-CCSD Framework (chemRxiv)
- The Alkaliphilic side of Staphylococcus aureus: roles and catalytic properties of NhaC (NhaC1, NhaC2, and NhaC3), CPA1 (CPA1-1 and CPA1-2) and CPA2 family antiporters and how these antiporters crosstalk with Mnh1, a CPA3 family antiporter (bioRxiv)
- Parallel changes in gut microbiome composition and function in parallel local adaptation and speciation (bioRxiv)
- Basic Mechanism of Sweating and Its Role in Temperature and Appetite Regulation: An Ayurvedic Perspective (Preprints.org)
- Uncovering the Role of Key Active Site Side Chains in Catalysis: An Extended Brønsted Relationship for Substrate Deprotonation Catalysed by Wild-Type and Variants of Triosephosphate Isomerase (chemRxiv)
- Broad spectrum antibiotic-degrading metallo-β-lactamases are phylogenetically diverse and widespread in the environment (bioRxiv)
- Structural control of regioselectivity in an unusual bacterial acyl-CoA dehydrogenase (bioRxiv)
- Selective Growth of Al2O3 on Size-Selected Platinum Clusters by Atomic Layer Deposition (chemRxiv)
- A novel approach for deducing the mass composition of cosmic rays from lateral densities of EAS particles (arXiv)
- End-to-End Machine Learning for Experimental Physics: Using Simulated Data to Train a Neural Network for Object Detection in Video Microscopy (arXiv)
- Flow correlation as a measure of phase transition: results from a new hydrodynamic code (arXiv)
- A modified Coulomb's law for the tangential debonding of osseointegrated implants (arXiv)
- Giant temperature dependence of the Mott gap size in 1T-TaS$_2$: a molecular dynamics study (arXiv)
- Structure and dynamics of a lipid nanodisc by integrating NMR, SAXS and SANS experiments with molecular dynamics simulations (bioRxiv)
- Investigating dual Ca2+ modulation of the ryanodine receptor 1 by molecular dynamics simulation (bioRxiv)
- Observations and Theories of Quantum Effects in Proton Transfer Electrode Processes (chemRxiv)
- Towards Sustainable Synthesis of Polyesters: A QM/MM Study of the Enzymes CalB and AfEST (chemRxiv)
- Orthogonal Synthesis of Highly Porous Zr-MOFs Assembled from Simple Building Blocks for Oxygen Storage (chemRxiv)
- Predicting Macrocyclic Molecular Recognition with Machine Learning (chemRxiv)
- Spontaneous Formation of Autocatalytic Sets with Self-Replicating Inorganic Metal Oxide Clusters (chemRxiv)
- Tetranuclear Copper and Mononuclear Nickel Complex of the Schiff Base of (3Z)-3-Hydrazonobutan-2-One Oxime with Different Aromatic Carbonyl Compounds: Synthesis, Single Crystal X-Ray Structure, Catecholase Activity, Phenoxazinone Synthase Activity, Catalytic Study for the Homocoupling of Benzyl Amines (chemRxiv)
- Air-Stable Hexagonal Bipyramidal Dysprosium(III) Single-Ion Magnets with Nearly Perfect D6h Local Symmetry (chemRxiv)
- Quasi-Diabatic Scheme for Non-adiabatic On-the-fly Simulations (arXiv)
- Incorporating action and reaction into a particle interpretation for quantum mechanics -- Dirac case (arXiv)
- Operator approach in nonlinear stochastic open quantum physics (arXiv)
- N-Acetyl Cysteine Alleviates Coxsackievirus B-Induced Myocarditis by Suppressing caspase-1 (bioRxiv)
- A case study of bilayered spin-$1/2$ square lattice compound [VO(HCOO)$_2\cdot$(H$_2$O)] (arXiv)
- Efficient prediction of vitamin B deficiencies via machine-learning using routine blood test results in patients with intense psychiatric episode (medRxiv)
- Deep Learning and Machine Learning in Hydrological Processes, Climate Change and Earth Systems: A Systematic Review (Preprints.org)
- Constrained Multi-Objective Optimization for Automated Machine Learning (arXiv)
- A Method of Determining Excited-States for Quantum Computation (arXiv)
- Reaction Rate Of p14N --> 15Ogamma Capture To All Bound States In Potential Cluster Model (arXiv)
- One-proton and one-neutron knockout reactions from $N = Z = 28$ $^{56}$Ni to the $A = 55$ mirror pair $^{55}$Co and $^{55}$Ni (arXiv)
- Preparation, Characterization, and Application of a Lipophilic Coated Exfoliated Egyptian Blue for Near-Infrared Luminescent Latent Fingermark Detection (chemRxiv)
- Probing Protein Shelf Lives from Inverse Mean First Passage Times (chemRxiv)
- Thermodynamics of Amyloid Fibril Formation from Chemical Depolymerization (chemRxiv)
- Investigation of the impact of PTMs on the protein backbone conformation (arXiv)
- Discovery of Highly Polymorphic Organic Materials: A New Machine Learning Approach (chemRxiv)
- Toward an accurate strongly-coupled many-body theory within the equation of motion framework (arXiv)
- Resolving the 11B(p, ${\alpha_0}$) Cross Section Discrepancies between 0.5 and 3.5 MeV (arXiv)
- Measurement of the 3He Spin-Structure Functions and of Neutron (3He) Spin-Dependent Sum Rules (arXiv)
- Surface oscillations and breakup characteristics of a charged droplet in the presence of various time varying functions (arXiv)
- Addressing band-edge-property spatial variations and localized-state carrier trapping and recombination in solar cell numerical modeling (arXiv)
- Universal fluctuations in the bulk of Rayleigh-Bénard turbulence (arXiv)
- Orbit design and thruster requirement for various constant-arm space mission concepts for gravitational-wave observation (arXiv)
When to Use the D86 Beam Width Measurement Method

Introduction

Although the clip level method and the second moment method are the most popular methods of beam width measurement, other beam width measurement techniques, such as the D86 method, can be used with beam profiling cameras. Rather than using the 1D or 2D profile of the beam to determine the width, the D86 method uses the diameter of an 86.5% power enclosure as the beam width. In this blog post, we describe the D86 beam width measurement method and the applications for which it is most appropriate.

Explanation of the D86 Method

Figure 1: (a) Radially symmetric TEM\(_{00}\) Gaussian beam with the D86 beam enclosure (white line). The radius of the beam is given as the radius of the enclosure. (b) Elliptical TEM\(_{00}\) Gaussian beam with the D86 enclosure (white line). The major and minor widths are given from the elliptical enclosure region's major and minor axes.

The D86 method of beam width measurement uses a power enclosure, rather than the profile of the beam, to determine the beam width. To calculate the width of the beam with the D86 method, a circle (see Fig. 1a) or ellipse (see Fig. 1b) is first found which encloses 86.5% of the total power across the sensor. The D86 method is named after this percentage enclosure. Once the power enclosure has been found, the radius of the enclosure is used as the beam radius (in the case of the ellipse, the major and minor axes are used as the beam radii). Although the choice of included power can seem arbitrary, the 86.5% value is related to the fundamental TEM\(_{00}\) Gaussian beam's 1/e\(^2\) value (13.5%). Other beam measurement methods utilize the 1/e\(^2\) value as well. The clip level method (often used with scanning slit devices) finds the two points where the beam profile crosses 13.5% of the beam's peak power; the distance between these two points is then used as the width.
The second moment method, which is the most robust measurement method, calculates the beam width \(w(z)\) with the equation\begin{equation} w^2(z)=\frac{\displaystyle 2\iint r^2I(r, z)r \,dr\,d\theta}{\displaystyle \iint I(r, z)r \,dr\,d\theta}\text{,} \end{equation} where \(I(r, z)\) is the beam's intensity distribution. For a fundamental Gaussian beam, the width measured with the second moment method yields the 1/\(e^2\) value. Similarly, the D86 method returns the 1\(/e^2\) value for a fundamental Gaussian beam. Even though the various width determination methods return similar values for the fundamental Gaussian, their values diverge for higher-order beams. We generated two beams, a TEM\(_{00}\) Gaussian (see Fig. 2) and a TEM\(_{20}\) Laguerre-Gaussian (see Fig. 3), and used DataRay's software to measure the diameters of the two beams with both the D86 method and the second moment method (see Table 1). The results of the test showed that, as predicted, the two beam width measurement methods return the same value for a fundamental Gaussian, but significantly different values for the higher-order beam.

Figure 2: DataRay software measuring the width of a fundamental TEM\(_{00}\) Gaussian beam with the second moment method.

Figure 3: DataRay software measuring the width of a TEM\(_{20}\) Laguerre-Gaussian beam with the D86 method.

Table 1:

| | Second Moment | D86 |
|---|---|---|
| TEM\(_{00}\) Gaussian | 599 µm | 600 µm |
| TEM\(_{20}\) Laguerre-Gaussian | 1358 µm | 1270 µm |

D86 and M²

The ISO 11146 standard calls for using the second moment width when calculating M². If the D86 method is used instead, the resulting M² value will be inaccurate. The M² values for the TEM\(_{00}\) Gaussian beam and the TEM\(_{20}\) Laguerre-Gaussian beam were found using both the second moment method and the D86 method (see Table 2). Additionally, the theoretical M² values were calculated to compare the measured M² values against.
The D86 method returned comparable results to the second moment method for the fundamental Gaussian beam, but its accuracy for higher-order beams was significantly diminished. This is particularly problematic since the M² value given by the D86 method is closer to 1 than the actual value, falsely indicating higher beam quality. For this reason, DataRay software offers the D86 method to measure the width of a beam, but does not include an option to measure M² with the D86 method.

Table 2:

| | Theoretical | Second Moment | D86 |
|---|---|---|---|
| TEM\(_{00}\) Gaussian | 1 | 0.996 | 0.998 |
| TEM\(_{20}\) Laguerre-Gaussian | 5 | 5.08 | 4.46 |

Using D86 Mode in DataRay Software

To use the D86 method with DataRay software, first click Setup on the main menu. From the Setup menu, select DXX mode (see Fig. 4a). The buttons on the main screen should change to reflect the new mode (see Fig. 3). Next, set the included power percent by selecting Set Included Power Percentage Target from the Setup menu. A dialog box should appear which allows the user to enter the included power target in percent (see Fig. 4b).

Conclusion

While the D86 method is useful for certain applications, it should not be used as a replacement for the second moment method, due to the difference in beam width measurement values for higher-order beams. If you have any questions regarding beam measurement methods for your application, please contact us. We have years of experience in laser beam profiling and would love to discuss a solution for your system.
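The agreement of the two methods for a fundamental Gaussian can be checked numerically. A sketch (not DataRay's implementation): generate a synthetic TEM\(_{00}\) beam of 1/e\(^2\) radius w₀ = 1 on a grid, then compute the second-moment radius and the D86 radius; both should come out approximately equal to w₀.

```python
import numpy as np

# Synthetic TEM00 Gaussian, 1/e^2 radius w0 = 1, sampled out to 3*w0
w0 = 1.0
x = np.linspace(-3, 3, 601)
X, Y = np.meshgrid(x, x)
R2 = X**2 + Y**2
I = np.exp(-2 * R2 / w0**2)

# Second-moment radius about the centroid (at the origin here):
# w^2 = 2 * integral(r^2 I dA) / integral(I dA)
w_sm = np.sqrt(2 * (R2 * I).sum() / I.sum())

# D86 radius: sort pixels by radius, accumulate power until 86.5% is enclosed
order = np.argsort(R2, axis=None)
cum = np.cumsum(I.ravel()[order]) / I.sum()
r86 = np.sqrt(R2.ravel()[order][np.searchsorted(cum, 0.865)])
```

For the Gaussian, the power enclosed inside radius r is 1 − exp(−2r²/w₀²), which reaches 86.5% exactly at r = w₀, so both measures agree; for higher-order beams they diverge, as Table 1 shows.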
The strings do not attach to the space-time manifold; they move around on it as a background. Option 1 is not right. Option 2 is more like it, except that you are assuming that string theory as it is formulated in the string way builds up space-time from something more fundamental. This is not exactly true in the Polyakov formulation or in any of the string formulations (even string field theory). The string theory doesn't tell you how to build space-time from scratch; it is only designed to complete the positivist program of physics. It answers the question "if I throw a finite number of objects together at any given energy and momentum, what comes out?" This doesn't include every question of physics, since we can ask what happens to the universe as a whole, or ask what happens when there are infinitely many particles around constantly scattering, but it's close enough for practical purposes, in that the answer to this question informs you of the right way to make a theory of everything too, but it requires further insight. The 1980s string theory formulations are essentially incomplete in a greater way than the more modern formulations. The only thing 1980s string theory really answers (within the domain of validity of perturbation theory, which unfortunately doesn't include strong gravity, like neutral black hole formation and evaporation) is what happens in a spacetime that is already asymptotically given to you, when you add a few perturbing strings coming in from infinity. It then tells you how these extra strings scatter, that is, what comes out. The result is obtained by doing the string perturbation theory on the background, and it is completely specified within string perturbation theory by the theory itself. Option 3 is sort of the right qualitative picture, but I imagine you mean it as strings interacting with a quantum gravity field which is different from the strings, strings that deform space and then move in the deformed space.
This is not correct, because the deformation is part of the string theory itself; the string excitations themselves include deformations of space-time. This is the main point: if you start with the Polyakov action on a given background $$ S = \int d^2\sigma\, \sqrt{h}\, h^{\alpha\beta}\, g_{\mu\nu}\, \partial_\alpha X^\mu \partial_\beta X^\nu $$ and then change the background infinitesimally, $g\rightarrow g+\delta g$, this has the effect of adding an infinitesimal perturbation to the action: $$ \delta S = \int d^2\sigma\, \sqrt{h}\, h^{\alpha\beta}\, \delta g_{\mu\nu}\, \partial_\alpha X^\mu \partial_\beta X^\nu $$ with the obvious contractions. When you expand this out to lowest order, you see that the change in background is given by a superposition of insertions of vertex operators on the worldsheet at different propagation positions, and these insertions in the path-integral have the form $$ \partial X^\mu \partial X^\nu$$ These vertex operators are space-time symmetric tensors, and these are the ones that create an on-shell graviton (when you smear them properly to put them on shell). So the changing background can be achieved in two identical ways in string theory: You can change the background metric explicitly. Or you can keep the original background, and add a coherent superposition of gravitons as incoming states to the scattering which reproduce the infinitesimal change in background. The fact that any operator deforming the world-sheet shows up as an on-shell particle in the theory (this is the operator-state correspondence in string theory) tells you that every deformation of the background that is long-range and slow shows up as an allowed massless on-shell particle, which can coherently superpose to make this slow background change. Further, if you just do an infinitesimal coordinate transformation, the abstract path-integral for the string is unchanged, so these graviton vertex operators have to have the property that coordinate gravitons don't scatter; they don't exist as on-shell particles.
The reason this isn't quite "building space-time out of strings" is because the analysis is for infinitesimal deformations: it tells you how a change in background shows up perturbatively in terms of extra gravitons on that background. It doesn't tell you how the finite metric in space-time was built up out of a coherent condensation of strings. The question itself makes no sense within this formulation, because the formulation is not fully self-consistent; it's only an S-matrix perturbative expansion. This is why the insights of the 1990s were so important. But this is the way string theory includes the coordinate invariance of General Relativity. It is covered in detail in chapter 2 of Green, Schwarz and Witten. The Ward identity was discovered by Yoneya, followed closely by Scherk and Schwarz. The point is that the graviton is a string mode, a perturbation of the background is equivalent to a coherent superposition of gravitons, and graviton exchange in the theory includes the gravitational force you expect without adding anything by hand (you can't: the theory doesn't admit any external deformations, since the world-sheet operator algebra determines the spectrum of the theory). In the new formulations, AdS/CFT and matrix theory and related ideas, you can build up string theory spacetimes from various limits in such a way that you don't depend on perturbation theory; rather you depend on the asymptotic background being fixed during the process (so if it starts out flat, it stays mostly flat; if it starts out AdS, it stays AdS). This allows you to get a complete answer to the question of scattering on certain fixed backgrounds, and get different pictures of the same string-theory spectrum in terms of superficially completely unrelated gauge fields or matrix models.
But you asked in the Polyakov string picture, and this is only consistent for small deformations away from a fixed background that satisfies the string equations of motion for the classical background. This post imported from StackExchange Physics at 2014-03-22 17:22 (UCT), posted by SE-user Ron Maimon
Given a 2D curve arc $\mathscr{C}$, I would like to be able to easily compute a subset of $n$ points belonging to $\mathscr{C}$, so that the points are separated by equal-length curve arcs. For that purpose, I'm trying to find a convenient parametric representation of the curve $f : t \mapsto (x(t),y(t))$ (for my purposes, $t \in [0,1]$) such that for all $t$ and $t'>t$, the length of the arc between $f(t)$ and $f(t')$ is $v(t'-t)$ for some constant $v$. In other words, if $t$ is time, $f$ describes the trajectory of a material point moving at constant speed $v$ along $\mathscr{C}$. Thus, my initial problem is solved by the values of $f$ for equally-spaced values of $t$ in $[0,1]$; e.g. if $n = 3$, $\{ f(0), f(\frac{1}{2}), f(1) \}$ is the solution. In the case of a segment $[AB]$, such a parametric curve is easy to find: $f:t \mapsto A + t\,\vec{AB}$ works (with $v = AB$). Circle arcs are also pretty nice to me: $t \mapsto r(\cos(t), \sin(t))$ is such a parametric curve for the origin-centered $r$-radius circle, with $v = r$. I am well aware that the length of the arc between points $f(t_0)$ and $f(t_1)$ is $ L = \displaystyle \int_{t=t_{0}}^{t_{1}} \sqrt{\left(\frac{dx}{dt}\right)^2 + \left(\frac{dy}{dt}\right)^2}\,dt$, and my definition of a convenient parametric function implies that $\sqrt{\left(\frac{dx}{dt}\right)^2 + \left(\frac{dy}{dt}\right)^2}$ is constant and equals $v$. I'm happy to be able to solve my problem for circle arcs and segments, but I want more. Segments and circles are pretty simple; I would like to find other classes of curves for which the explicit form of their convenient parametric representation can be known... I've tried setting $x(t)$ to some polynomial function of $t$ and computing the corresponding $y(t)$; it turns out there's no such $y(t)$ when the degree of $x(t)$ is strictly greater than one... Any suggestions?
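When no closed-form constant-speed parametrization exists, the usual workaround is numerical: sample the curve finely, accumulate arc length, and invert $s(t)$ by interpolation. A sketch (using the parabola, one of the polynomial cases noted above to have no exact solution):

```python
import numpy as np

def equally_spaced_points(x, y, n, t):
    """Given a fine sampling (x(t), y(t)) of a curve, return n points
    separated by (approximately) equal arc length, by inverting the
    cumulative arc-length function numerically."""
    dx, dy = np.diff(x), np.diff(y)
    s = np.concatenate([[0.0], np.cumsum(np.hypot(dx, dy))])  # cumulative length
    targets = np.linspace(0.0, s[-1], n)                      # equal-length marks
    ts = np.interp(targets, s, t)                             # invert s(t)
    return np.interp(ts, t, x), np.interp(ts, t, y)

# Parabola y = x^2 on [0, 1]: no constant-speed polynomial parametrization
t = np.linspace(0.0, 1.0, 10001)
px, py = equally_spaced_points(t, t**2, 5, t)
```

The denser the initial sampling, the closer the returned points are to exact equal arc spacing.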
Disclaimer: I have no idea what the state of the art in environmental map sampling is. In fact, I have very little knowledge about this topic. So this will not be a complete answer, but I will formulate the problem mathematically and analyze it. I do this mainly for myself, to make it clear for myself, but I hope that OP and others will find it useful. $$\newcommand{\w}{\omega}$$ We want to calculate direct illumination at a point, i.e. we want to know the value of the integral $$I =\int_{S^2} f(\omega_i,\omega_o,n) \, L(\omega_i) \,(\omega_i\cdot n)^+ d\omega_i$$ where $f(\omega_i,\omega_o,n)$ is the BSDF (I explicitly state the dependence on the normal, which will be useful later), $L(\omega_i)$ is the radiance of the environmental map, and $(\omega_i \cdot n)^+$ is the cosine term together with the visibility (that is what the $+$ is for), i.e. $(\omega_i \cdot n)^+=0$ if $(\omega_i \cdot n)<0$. We estimate this integral by generating $N$ samples $\omega_i^1,\dots,\omega_i^N$ with respect to the probability density function $p(\omega_i)$; the estimator is $$I \approx \frac1N \sum_{k=1}^N \frac{f(\omega_i^k,\omega_o,n) \, L(\omega_i^k) \,(\omega_i^k\cdot n)^+ }{p(\omega_i^k)}$$ The question is: how do we choose the pdf $p$ such that we are able to generate the samples in acceptable time and the variance of the above estimator is reasonably small? Best method: pick $p$ proportional to the integrand $$p(\omega_i) \sim f(\omega_i,\omega_o,n) \, L(\omega_i) \,(\omega_i\cdot n)^+ $$ But most of the time it is very expensive to generate a sample according to this pdf, so it is not useful in practice. Methods suggested by OP: Method one: choose $p$ proportional to the cosine term $$p(\omega_i) \sim (\omega_i\cdot n)^+$$ Method two: choose $p$ proportional to the EM $$p(\omega_i) \sim L(\omega_i)$$ Based on the names of the mentioned papers I can partially guess what they do (unfortunately I do not have the time and energy to read them right now).
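As a tiny illustration of method one, consider the special case of a Lambertian BSDF, $f = \text{albedo}/\pi$, under a constant environment map $L(\omega_i)=L_0$. With cosine-weighted sampling, $p(\omega_i)=(\omega_i\cdot n)^+/\pi$, every sample weight equals $\text{albedo}\cdot L_0$ exactly, so the estimator has zero variance. This is a hypothetical minimal sketch, not production sampling code:

```python
import numpy as np

rng = np.random.default_rng(0)
albedo, L0, N = 0.8, 2.0, 1000

# Cosine-weighted hemisphere sampling: p(w) = cos(theta) / pi
u1 = rng.random(N)
cos_theta = np.sqrt(1.0 - u1)   # standard mapping giving p ~ cos(theta)
f = albedo / np.pi              # Lambertian BSDF, constant
L = L0                          # constant environment map
pdf = cos_theta / np.pi

# Each term f * L * cos / pdf collapses to albedo * L0: zero variance
estimate = np.mean(f * L * cos_theta / pdf)
print(estimate)  # exactly albedo * L0 = 1.6
```

With a non-constant $L$, the same pdf still works but the variance is no longer zero; that is exactly the gap methods like EM-proportional sampling try to close.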
But before discussing what they most probably do, let's talk about power series a little bit :D If we have a function of one real variable, e.g. $f(x)$, then if it is well behaved it can be expanded into a power series $$f(x) = \sum_{k=0}^\infty a_k x^k$$ where $a_k$ are constants. This can be used to approximate $f$ by truncating the sum at some step $n$: $$f(x) \approx \sum_{k=0}^n a_k x^k$$ If $n$ is sufficiently high then the error is really small. Now if we have a function of two variables, e.g. $f(x,y)$, we can expand it only in the first argument $$f(x,y) = \sum_{k=0}^\infty b_k(y) \, x^k$$ where $b_k(y)$ are functions only of $y$. It can also be expanded in both arguments $$f(x,y) = \sum_{k,l=0}^\infty c_{kl} x^k y^l$$ where $c_{kl}$ are constants. So a function with real arguments can be expanded as a sum of powers of those arguments. Something similar can be done for functions defined on the sphere. Now, let's have a function defined on the sphere, e.g. $f(\omega)$. Such a function can also be expanded in a similar fashion as a function of one real parameter: $$f(\omega) =\sum_{k=0}^\infty \alpha_k S_k(\omega)$$ where $\alpha_k$ are constants and $S_k(\omega)$ are spherical harmonics. Spherical harmonics are normally indexed by two indices and are written as functions in spherical coordinates, but that is not important here. The important thing is that $f$ can be written as a sum of some known functions. Now a function which takes two points on the sphere, e.g. $f(\omega,\omega')$, can be expanded only in its first argument $$f(\omega,\omega') = \sum_{k=0}^\infty \beta_k(\omega') \, S_k(\omega)$$ or in both its arguments $$f(\omega,\omega') = \sum_{k,l=0}^\infty \gamma_{kl} \, S_k(\omega)S_l(\omega')$$ So how is this all useful?
I propose the CMUNSM (Crazy Mental Useless No Sampling Method). Let's assume that we have expansions for all the functions, i.e. \begin{align}f(\omega_i,\omega_o,n) &= \sum_{k,l,m=0}^\infty \alpha_{klm} S_k(\omega_i)S_l(\omega_o) S_m(n) \\L(\omega_i ) &= \sum_{n=0}^\infty \beta_n S_n(\omega_i) \\(\omega_i\cdot n)^+ &= \sum_{p,q=0}^\infty \gamma_{pq} S_p(\omega_i)S_q(n)\end{align} If we plug this into the integral we get $$I = \sum_{k,l,m,n,p,q=0}^\infty \alpha_{klm} \beta_n \gamma_{pq} S_l(\omega_o) S_m(n) S_q(n) \int_{S^2} S_k(\omega_i) S_n(\omega_i) S_p(\omega_i) d\omega_i$$ Actually we no longer need Monte Carlo, because we can calculate the values of the integrals $\int_{S^2} S_k(\omega_i) S_n(\omega_i) S_p(\omega_i) d\omega_i$ beforehand and then evaluate the sum (actually approximate the sum: we would sum only the first few terms) and we get the desired result. This is all nice, but we might not know the expansions of the BSDF or the environmental map, or the expansions might converge very slowly, in which case we would have to take a lot of terms in the sum to get a reasonably accurate answer. So the idea is not to expand in all arguments. One method which might be worth investigating would be to ignore the BSDF and expand only the environmental map, i.e. $$L(\omega_i) \approx \sum_{n=0}^K \beta_n S_n(\omega_i)$$ This would lead to the pdf $$p(\omega_i) \sim \sum_{n=0}^K \beta_n S_n(\omega_i) (\omega_i \cdot n)^+$$ We already know how to do this for $K=0$; this is nothing but method one. My guess is that it is done in one of the papers for higher $K$. Further extensions: you can expand different functions in different arguments and do similar stuff as above. Another thing is that you can expand in a different basis, i.e. not use spherical harmonics but different functions. So this is my take on the topic, I hope you have found it at least a little bit useful, and now I'm off to GoT and bed.
It seems to me exaggerated to use the DCT here, because this is a deep theorem and the example is completely elementary. To be more precise, I would say that it is, of course, a perfect solution mathematically speaking, but perhaps not pedagogically speaking... Indeed, we have: $$\forall x\in[0,1],\ \lim_{n\to\infty}f_n(x)=f(x)\text{, where } f(x)=\begin{cases}0&\text{if }0\le x<1\\\frac12&\text{if }x=1\end{cases}$$ Hence: $$\int_0^1\lim_{n\to\infty}f_n(x)\,dx=0$$ On the other hand: $$\forall n\in\mathbb{N},\ \forall x\in[0,1],\quad 0\le f_n(x)\le x^n$$ which leads to: $$\forall n\in\mathbb{N},\quad 0\le\int_0^1f_n(x)\,dx\le\frac1{n+1}$$ and so: $$\lim_{n\to\infty}\int_0^1f_n(x)\,dx=0$$
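The sequence is not written out explicitly here, but the stated pointwise limit suggests $f_n(x)=x^n/(1+x^n)$ (an assumption on my part; it does satisfy $0\le f_n(x)\le x^n$). A quick numeric check of the bound $\int_0^1 f_n \le \frac1{n+1}$:

```python
import numpy as np

# f_n inferred from the stated pointwise limit (an assumption):
# f_n(x) = x^n / (1 + x^n), which satisfies 0 <= f_n(x) <= x^n on [0, 1]
x = np.linspace(0.0, 1.0, 200001)
results = []
for n in (1, 10, 100, 1000):
    integral = np.mean(x**n / (1.0 + x**n))  # Riemann estimate of the integral
    results.append((n, integral, 1.0 / (n + 1)))
    print(n, integral, 1.0 / (n + 1))  # integral stays below 1/(n+1), both -> 0
```

Both columns shrink toward zero as $n$ grows, matching the squeeze argument above.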
This is part of an old qual problem at my school. Assume $\{f_n\}$ is a sequence of nonnegative continuous functions on $[0,1]$ such that $\lim_{n\to\infty}\int_0^1 f_n(x)\,dx=0$. Is it necessarily true that there is a point $x_0\in[0,1]$ such that $\lim_{n\to\infty}f_n(x_0)=0$? I think that there should be some $x_0$. My intuition is that if the integrals converge to $0$, then the $f_n$ should start to be close to zero in most places in $[0,1]$. If $\lim_{n\to\infty}f_n(x_0)\neq 0$ for any $x_0$, then for each fixed $x_0$ the sequence $\{f_n(x_0)\}$ has to have terms bounded away from zero at arbitrarily large indices. Since there are only countably many functions, I don't think it's possible to do this while keeping $\lim_{n\to\infty}\int_0^1 f_n(x)\,dx=0$. Is there a proof or counterexample to the question?
I am designing a dashboard, and have a filters panel on the side. What is the best way to show the user that the filtering is an 'AND' combination and not an 'OR' combination? I am trying to prove that the language $$L=\{M \mid M \text{ is a TM and for every } x\in \Sigma^* \text{ with } |x|>2,\ M \text{ on } x \text{ runs at most } 4|x|^2 \text{ steps}\}$$ belongs to $\mathrm{coRE}$ but not to $\mathrm{R}$. Showing $\bar{L}\in \mathrm{RE}$ is pretty much straightforward, but I also want to show that $L\notin \mathrm{R}$. My idea was a reduction $\overline{H_{TM}}\le_m L$, but I struggle to figure out how to do it. Any help/guidance will be much appreciated. I have heard that the reason the image portion of ReCaptcha contains so many driving questions is that Google is using it to train AI for their self-driving cars. But to make it an effective security measure, wouldn't Google need to verify that the squares the user clicked on contained stop signs or street lights or whatever they asked for? That would mean they already know what is in each image, so how do they gain new information from it?
How can they verify the information is correct without knowing it beforehand? We are given an array and a number k. We need to partition the array into k parts so as to maximize the 'bit-wise and' of the sums (of the elements in each partition). Note that we need to find this maximum 'bit-wise and' value. For example: Example 1: Array: 30, 15, 26, 16, 21, 13, 25, 5, 27, 11; k: 3. Solution: We can partition the array into {30, 15}, {26, 16, 21, 13, 25}, {5, 27, 11} to get the result (30+15) & (26+16+21+13+25) & (5+27+11) = 33, where & represents the 'bit-wise and' operation. So the answer is 33. Let's have a look at another example. Example 2: Array: 2, 0, 1, 2, 0; k: 1. Solution: Since k=1, the array itself is the required partition. So the result is (2+0+1+2+0), i.e. 5. So the answer is 5. I have tried brute force but it is a slow algorithm and takes too long to produce results. The time limit is 1 second. In brute force I tried all sets of partitions, and for each set found the sums of the elements in all partitions and then took the bit-wise and of these sums. I am struggling to find an optimized solution. Please can anyone help me out? Suppose I want to have an integer program for handling the cases $$x_1>1\wedge x_2>1\wedge x_3>1\wedge\dots\wedge x_n>1\iff\delta=1$$ $$x_1>1\vee x_2>1\vee x_3>1\vee\dots\vee x_n>1\iff\delta=1$$ How many integer variables are needed to handle each case? Is it possible that at least one of them needs at most a constant number of binary variables?
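For the array-partition question above, a standard optimization (assuming parts are contiguous subarrays, as in both worked examples) is to build the answer greedily from the most significant bit, with a feasibility DP per candidate mask. A sketch, not the only possible approach:

```python
def max_and_partition(a, k):
    """Maximum bitwise-AND of part sums over all splits of `a` into k
    contiguous parts.  Greedy from the high bit: tentatively set a bit,
    keep it if some split makes every part-sum contain all kept bits."""
    prefix = [0]
    for v in a:
        prefix.append(prefix[-1] + v)

    def feasible(mask):
        # dp[i] = True if a[:i] splits into valid parts; iterate k times
        # so that exactly k non-empty parts are used.
        dp = [i == 0 for i in range(len(a) + 1)]
        for _ in range(k):
            ndp = [False] * (len(a) + 1)
            for i in range(1, len(a) + 1):
                for t in range(i):
                    if dp[t] and (prefix[i] - prefix[t]) & mask == mask:
                        ndp[i] = True
                        break
            dp = ndp
        return dp[len(a)]

    ans = 0
    for bit in range(sum(a).bit_length(), -1, -1):
        if feasible(ans | (1 << bit)):
            ans |= 1 << bit
    return ans

print(max_and_partition([30, 15, 26, 16, 21, 13, 25, 5, 27, 11], 3))  # 33
print(max_and_partition([2, 0, 1, 2, 0], 1))  # 5
```

The greedy is sound because a higher bit outweighs all lower bits combined; total cost is O(bits · k · n²), comfortably inside a 1-second limit for moderate n.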
I am trying to run the following query inside a Mongoose aggregate: $match: { $and: [ {$or: [{"person_id": 1}, {"person_id": 2}]}, {$not: {group_id: ""}} ] } but I am getting the following error: MongoError: unknown top level operator: $not. This is a follow-up to Copying data to another sheet based on the month of a date, with a different requirement: instead of one-way data transfer (from the Master sheet to others), two-directional transfer is needed. Background: I have a sheet (Master) which has rows of data regarding different types of projects and deliverables. One column in this sheet has a date value under the column header "closingDate". The deliverable dates extend through the year. For example, we may create an entry today in the Master for a project that is due to close in September of next year. We need to be able to see what the deliverables are for a particular month. The years don't matter: all projects closing in Jan 2016, 2017, 2018, etc. For reasons that I cannot state, we can't use filters or filter views in the Master sheet. An ideal solution for us would be to have sheets named Jan, Feb, March, April, etc. Each time a record is added to the Master, depending upon the month of the closingDate, a copy of the record is made in the sheet named after the month. Each time the record is edited in the Master sheet, the corresponding record is also updated. New aspect: While the filter works well, it has engineered another demand for me. Is it possible to create the subset (filtered by month) in a way that if we were to make changes in that subset view, those changes would cascade back to the Master? Would QUERY work? Or do I have to resort to a script? Or is it not possible?
I want to find tweets matching one of these three possibilities in the search results: "AAAA" and "BBBB", or "AAAA" and "CCCC", or all three of "AAAA", "BBBB" and "CCCC". I mean that AAAA should be in the tweet results, and one (or both) of BBBB and CCCC should be included too. How should AAAA, BBBB and CCCC be arranged using one AND operator and one OR operator to get the described result?
Uniqueness for the solution of semi-linear elliptic Neumann problems in $\mathbb R^3$ 1. School of Mathematics and Statistics, Wuhan University, Wuhan 430072, China $\Delta_0 u-\lambda u+f(u)=0,\ u>0,$ in $\Omega,\quad \frac{\partial u}{\partial\nu}=0$ on $\partial\Omega$, where $\Omega$ is convex and $f(u)$ is defined by (2). We prove that for $1< p_i < 5$, $i=1,\cdots, K$ and $\lambda$ small, the only solution to the above problem is constant. This can be seen as a generalization of Theorem 1 in [7]. Mathematics Subject Classification: Primary: 35J60; Secondary: 35C2. Citation: Guangyue Huang, Wenyi Chen. Uniqueness for the solution of semi-linear elliptic Neumann problems in $\mathbb R^3$. Communications on Pure & Applied Analysis, 2008, 7 (5): 1269-1273. doi: 10.3934/cpaa.2008.7.1269
A particle moves along the x-axis so that at time $t$ its position is given by $x(t) = t^3-6t^2+9t+11$. During what time intervals is the particle moving to the left? I know that we need the velocity for that, and we can get it by taking the derivative, but I don't know what to do after that. The velocity would then be $v(t) = 3t^2-12t+9$. How could I find the intervals? Fix $c\in\{0,1,\dots\}$, let $K\geq c$ be an integer, and define $z_K=K^{-\alpha}$ for some $\alpha\in(0,2)$. I believe I have numerically discovered that $$\sum_{n=0}^{K-c}\binom{K}{n}\binom{K}{n+c}z_K^{n+c/2} \sim \sum_{n=0}^K \binom{K}{n}^2 z_K^n \quad \text{ as } K\to\infty$$ but cannot ... So, the whole discussion is about some polynomial $p(A)$, for $A$ an $n\times n$ matrix with entries in $\mathbf{C}$, and eigenvalues $\lambda_1,\ldots, \lambda_k$. Anyway, part (a) is about proving that $p(\lambda_1),\ldots, p(\lambda_k)$ are eigenvalues of $p(A)$. That's basically routine computation. No problem there. The next bit is to compute the dimension of the eigenspaces $E(p(A), p(\lambda_i))$. It seems this bit follows from the same argument: an eigenvector for $A$ is an eigenvector for $p(A)$, so the rest seems to follow. Finally, the last part is to find the characteristic polynomial of $p(A)$. I guess this means in terms of the characteristic polynomial of $A$. Well, we do know what the eigenvalues are... The so-called Spectral Mapping Theorem tells us that the eigenvalues of $p(A)$ are exactly the $p(\lambda_i)$. Usually, by the time you start talking about complex numbers you consider the real numbers as a subset of them, since $a$ and $b$ are real in $a + bi$. But you could define it that way and call it a "standard form", like $ax + by = c$ for linear equations :-) @Riker "a + bi where a and b are integers" Complex numbers $a + bi$ where $a$ and $b$ are integers are called Gaussian integers.
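For the velocity question above: factor $v(t)$ and solve the inequality $v(t)<0$; the particle moves left exactly where the velocity is negative. A quick check with sympy:

```python
import sympy as sp

t = sp.symbols('t', real=True)
v = 3*t**2 - 12*t + 9               # v(t) = x'(t)
print(sp.factor(v))                 # v factors as 3*(t - 1)*(t - 3)
interval = sp.solve_univariate_inequality(v < 0, t, relational=False)
print(interval)                     # the open interval (1, 3)
```

So the particle moves to the left for $1 < t < 3$, between the two roots of the velocity.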
I was wondering if it is easier to factor in a non-UFD than it is to factor in a UFD. I can come up with arguments for that, but I also have arguments in the opposite direction. For instance: it should be easier to factor when there are more possibilities (multiple factorizations in a non-UFD)... Does anyone know if $T: V \to \mathbb{R}^n$ is an inner product space isomorphism if $T(v) = (v)_S$, where $S$ is a basis for $V$? My book isn't saying so explicitly, but there was a theorem saying that an inner product isomorphism exists, and another theorem kind of suggesting that it should work. @TobiasKildetoft Sorry, I meant that they should be equal (accidentally sent this before writing my answer; writing it now). Isn't there a theorem saying that if $v,w \in V$ ($V$ being an inner product space), then $||v|| = ||(v)_S||$? (where the left norm is defined as the norm in $V$ and the right norm is the Euclidean norm) I thought that this would somehow result from the isomorphism. @AlessandroCodenotti Actually, such an $f$ in fact needs to be surjective. Take any $y \in Y$; the maximal ideal of $k[Y]$ corresponding to it is $(Y_1 - y_1, \cdots, Y_n - y_n)$. The ideal corresponding to the subvariety $f^{-1}(y) \subset X$ in $k[X]$ is then nothing but $(f^* Y_1 - y_1, \cdots, f^* Y_n - y_n)$. If this is empty, the weak Nullstellensatz kicks in to say that there are $g_1, \cdots, g_n \in k[X]$ such that $\sum_i (f^* Y_i - y_i)g_i = 1$. Well, better to say that $(f^* Y_1 - y_1, \cdots, f^* Y_n - y_n)$ is the trivial ideal, I guess. Hmm, I'm stuck again. $O(n)$ acts transitively on $S^{n-1}$ with stabilizer at a point $O(n-1)$. For any transitive $G$-action on a set $X$ with stabilizer $H$, $G/H \cong X$ set-theoretically. In this case, as the action is a smooth action by a Lie group, you can prove this set-theoretic bijection gives a diffeomorphism.
The notation Producer=Actor=Director is wrong. Besides that, neither $\pi_{\text{Actor}}(\text{Film})$ nor $\pi_{\text{Producer}}(\text{Produce})$ has an attribute Director: the first projection has only the attribute Actor, the second projection has only the attribute Producer. In the second answer, what is meant by dividing two relations that have the same number of attributes? I think that does not make sense, because the result is a relation with no attributes. Your first relation Film(Title, Director, Actor) is very strange. It defines a relation between three attributes: actor, director and title. But from my understanding of a film there does not exist such a relation. There is a relation between actor and title, if an actor plays in a movie with this title. And there is a relation between a director and a title, if a person is a director of this movie. But I don't understand the tuple (actor, director, title). Does an actor have a director in a movie, and another actor another director in the same movie? If that is not the case, then you should better use two relations: a relation Acts(Person, Title) and a relation Directs(Person, Title). Here are the answers using only natural join, which makes them a little bit clumsy. The first question Which actors produced at least one film they directed?
has the following answer, using the notation from Wikipedia: $$\pi_{\text{Actor}}(\text{Film}) \bowtie \pi_{\text{Actor}}\left(\rho_{\text{Actor}/\text{Director}}\left(\pi_{\text{Director},\text{Title}}(\text{Film}) \bowtie \rho_{\text{Director}/\text{Producer}}(\text{Produce})\right)\right)$$ Here is equivalent SQL code (Oracle): select F2.Actor from Film F2 join ( Film F1 join Produce P on (F1.Director=P.Producer and F1.Title=P.Title) ) on F2.Actor=F1.Director Example: for the following data the result is Donald Duck.

Title              | Director    | Actor
-------------------+-------------+-------------
Mickey Mouse Revue | Donald Duck | Minnie Mouse
Mickey Mouse Revue | Donald Duck | Mickey Mouse
Duck Tales         | Walt Disney | Donald Duck

Producer    | Title
------------+-------------------
Donald Duck | Mickey Mouse Revue
Walt Disney | Duck Tales

The second question Which actors produced every film they directed additionally uses the set difference operator $\setminus$: $$\pi_{\text{Actor}}(\text{Film}) \bowtie \rho_{\text{Actor}/\text{Director}}\left(\pi_{\text{Director}}\left(\pi_{\text{Director},\text{Title}}(\text{Film}) \bowtie \rho_{\text{Director}/\text{Producer}}(\text{Produce})\right)\setminus \pi_{\text{Director}}\left(\pi_{\text{Director},\text{Title}}(\text{Film}) \setminus \rho_{\text{Director}/\text{Producer}}(\text{Produce})\right)\right)$$ These are all (Director, Title) pairs that were not produced by the director of the title, and therefore these are all directors that haven't produced at least one of their titles. Similarly, these are all directors that produced at least one of their titles. The difference gives the directors that produced all of the titles they directed. Now we have to join it with the set of all actors to keep only the directors that are also actors (though not necessarily in the films they directed).
Here is equivalent SQL code (Oracle):

Select F2.Actor
from Film F2
join (
  (Select F1.Director
   from Film F1
   join Produce P on (F1.Director = P.Producer and F1.Title = P.Title))
  minus
  (Select Director from
    (Select Director, Title from Film
     minus
     Select Producer, Title from Produce))
) F3 on (F2.Actor = F3.Director)
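The first query and its sample data can be checked end-to-end with an in-memory SQLite database (a sketch: SQLite stands in for Oracle here, and DISTINCT is added because relational algebra works on sets while SQL works on bags):

```python
import sqlite3

conn = sqlite3.connect(':memory:')
cur = conn.cursor()
cur.executescript("""
CREATE TABLE Film (Title TEXT, Director TEXT, Actor TEXT);
CREATE TABLE Produce (Producer TEXT, Title TEXT);
INSERT INTO Film VALUES
  ('Mickey Mouse Revue', 'Donald Duck', 'Minnie Mouse'),
  ('Mickey Mouse Revue', 'Donald Duck', 'Mickey Mouse'),
  ('Duck Tales', 'Walt Disney', 'Donald Duck');
INSERT INTO Produce VALUES
  ('Donald Duck', 'Mickey Mouse Revue'),
  ('Walt Disney', 'Duck Tales');
""")

# "Which actors produced at least one film they directed?"
rows = cur.execute("""
SELECT DISTINCT F2.Actor
FROM Film F2
JOIN Film F1 ON F2.Actor = F1.Director
JOIN Produce P ON F1.Director = P.Producer AND F1.Title = P.Title
""").fetchall()
print(rows)  # [('Donald Duck',)]
```

As in the worked example, only Donald Duck qualifies: he directed and produced Mickey Mouse Revue and also appears as an actor (in Duck Tales).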
A plank of length $L$ and mass $M$ is placed on the ground such that there is no friction between the plank and the ground. A small block with mass $m$ is placed on one side of the plank. The friction coefficient between the block and the plank is $\mu$. At time $t=0$ the block has some velocity $v_0$. What is the minimal $v_0$ such that the block can travel to the end of the plank? There is an image for easier understanding. There are two ways to solve this question. The first method is to look relative to the plank, so we must add an inertial force $F_{in}$ to the block. The second method is to look relative to the ground. I will explain the second method, so for now forget about $F_{in}$. From the image, we can create four equations from force vector addition. $$ma_1=-T\\0=N-mg\\Ma_2=T\\0=N_P-N-Mg$$ Here $N_P$ is the force with which the ground reacts on the plank, but it is not important. From the well-known relation $T=\mu N$ we have $$a_1=-\mu g\\a_2=-a_1\frac m M$$ Because both $a_1$ and $a_2$ are constant, we can apply the equation $$v_1^2=v_0^2+2a_1s_1$$ where $v_1$ is the velocity of the block at some point and $s_1$ is the distance the block traveled. Because $v_0$ should be minimal, the block stops at the end of the plank, so $v_1=0$ and $$s_1=-\frac{v_0^2}{2a_1}$$ Similarly, from $$v_1=v_0+a_1t$$ we find that the time required for traveling to the end of the plank is $$t=-\frac{v_0}{a_1}$$ This is all we need for the block. Now, for the plank we need to find the distance it traveled in time $t$: $$s_2=v_{0_P}t+\frac12a_2t^2\\s_2=-\frac mM\cdot\frac{v_0^2}{2a_1}$$ where $v_{0_P}$ is the velocity of the plank at $t=0$, which is $0$. Finally, we have the relation between the distances traveled by the block and the plank: $$s_1=s_2+L$$ It gives us $$v_0=\sqrt{\frac{2\mu gLM}{M-m}}$$ which is, for some reason, an incorrect solution. I simply cannot see where I made a mistake, but I am sure I missed something obvious. Did I simply miscalculate something in these equations?
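Redoing the algebra above symbolically (following exactly the equations in the post, so assuming the setup is as stated) reproduces the same $v_0$, which suggests the arithmetic is consistent and any error lies in the physical assumptions rather than in the manipulations:

```python
import sympy as sp

v0, mu, g, m, M, L = sp.symbols('v0 mu g m M L', positive=True)

a1 = -mu*g                        # block deceleration (from T = mu*N, N = m*g)
a2 = -a1*m/M                      # plank acceleration
s1 = -v0**2 / (2*a1)              # block travel, assuming it ends at rest (v1 = 0)
t  = -v0 / a1                     # time for the block to stop
s2 = sp.Rational(1, 2)*a2*t**2    # plank travel in that time (plank starts at rest)

sol = sp.solve(sp.Eq(s1, s2 + L), v0)
# each solution squares to 2*mu*g*L*M/(M - m), matching the post
print(sol)
```

Since the algebra checks out, the place to re-examine is the stopping condition: the derivation assumes the block's ground-frame velocity reaches zero, rather than the block and plank reaching a common velocity.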
In flat space we have $$\hat{p}_\mu=-i\hbar \partial_\mu . $$ Does this still hold in a curved spacetime (particularly Schwarzschild space)? A short-ish answer to this: $\hat{p}_a = -i\hbar \partial_a$ is not appropriate for the momentum operator, because it is not self-adjoint. As a reminder, $\hat{p}$ is Hermitian if $$\langle \psi, \hat{p}_a \phi \rangle = \langle \hat{p}_a\psi, \phi \rangle$$ In a curved space, that will be $$\langle \psi, \hat{p}_a \phi \rangle = \int \psi^* (-i\hbar \partial_a \phi) \sqrt{g}\ d^3 x$$ Integrating by parts, \begin{eqnarray} \langle \psi, \hat{p}_a \phi \rangle &=& -i\hbar [\psi^* \phi \sqrt{g}] + i\hbar \int \partial_a (\psi^* \sqrt{g})\, \phi\, d^3x\\ &=& -i\hbar [\psi^* \phi \sqrt{g}] + i\hbar \int \left[ \sqrt{g}\, \partial_a (\psi^*) + \psi^* \partial_a (\sqrt{g}) \right] \phi\, d^3x\\ &=& -i\hbar [\psi^* \phi \sqrt{g}] + \int \left(-i\hbar \left(\partial_a + \frac{\partial_a \sqrt{g}}{\sqrt{g}}\right) \psi\right)^* \phi\, \sqrt{g}\, d^3x \end{eqnarray} Strictly speaking wavefunctions need not tend to $0$ at infinity, but the boundary term can be neglected. We still have an extra term, though, which can be gotten rid of by using $$\hat{p}_a = -i\hbar\left(\partial_a + \frac{1}{2} \frac{\partial_a \sqrt{g}}{\sqrt{g}}\right)$$ From differential geometry, we also have that $$\frac{\partial_a \sqrt{g}}{\sqrt{g}} = {\Gamma^b}_{ab} = \Gamma_a$$ so $$\hat{p}_a = -i\hbar\left(\partial_a + \frac{1}{2} \Gamma_a\right)$$ This is both self-adjoint and obeys the canonical commutation relations. In the paper [1], p.111, it is said that the operator of the square of momentum on a curved $n$-dimensional pseudo-Riemannian manifold is: $$ P^2 = -\hbar^2 \left( \square - \frac{n-2}{4(n-1)}R \right) $$ Here, $R$ is the scalar curvature. This $P^2$ is the analog of the classical $g^{\alpha\beta}p_\alpha p_\beta = m^2 c^2$. The inclusion of the scalar curvature $R$ is required by the conformal invariance of a massless scalar field.
In the case where $n=4$, we have the analog of the Klein-Gordon equation on a curved space-time: $$ \left(\square - \frac16 R\right) \varphi + \left(\frac{mc}{\hbar}\right)^2\varphi = 0 $$ They say that this equation was considered by Penrose [3] in 1964. The "non-relativistic" limit of this should be the Schrödinger equation where $R$ appears: $$ \hat{H}\psi = -\frac{\hbar^2}{2m} \left(\Delta - \frac16 R\right)\psi $$ This equation can also be found via geometric quantization of the Hamiltonian responsible for the geodesic flow, see [2], eq. (7.114). Now, what is $\hat{p}_i$ on a curved space? According to geometric quantization (see again [2], eq. (7.42) and (7.82)) it is given, at least in the non-relativistic limit, by: $$ \hat{p}_i = -i\hbar \left(\partial_i + \frac12 \mathrm{Div}(\partial_i)\right) $$ Here, $\partial_i := \partial/\partial q^i$ and $\mathrm{Div}(\partial_i)=|\det[g]|^{-1/2}\partial_i(|\det[g]|^{1/2})=\Gamma^k_{ik}$ is the covariant divergence of the vector field $\partial_i$. Now, what is $\hat{p}_\mu$? I guess it would be $-i\hbar \left(\partial_\mu + \frac12 \mathrm{Div}(\partial_\mu)\right)$, but I'm not sure. So, to answer your question: Does $\hat{p}_\mu = -i\hbar\partial_\mu$ still hold in a curved spacetime? I would say no. There should be at least some divergence term $\mathrm{Div}(\partial_\mu)$ somewhere. Remark: According to the Wikipedia sign convention: $$\Gamma^k_{ij} := \frac12 g^{km}(\partial_i g_{jm} + \partial_j g_{im} - \partial_m g_{ij}) $$$$R^l_{kij} := \partial_i\Gamma^l_{jk} - \partial_j\Gamma^l_{ik} + \Gamma^l_{im}\Gamma^m_{jk} - \Gamma^l_{jm}\Gamma^m_{ik}$$$$R_{ij} := R^k_{ikj} = \partial_k \Gamma^k_{ij} - \partial_j \Gamma^k_{ik} + \Gamma^l_{ij}\Gamma^k_{kl} - \Gamma^l_{ik}\Gamma^k_{jl}$$$$R := g^{ij} R_{ij} $$ the "good" sign is $\square - R/6$ as in [2], p.180, and not $\square + R/6$ as in [1] and [3].
Also, even though [2], p.133, has a different definition of the Riemann curvature and the scalar curvature, at the end of the day the scalar curvature according to [2] is the same as the one written here. [1]: Quantum theory of scalar field in de Sitter space-time, N. A. Chernikov and E. A. Tagirov, 1968. [2]: Geometric quantization and quantum mechanics, J. Sniatycki, 1980. [3]: Conformal treatment of infinity, R. Penrose, 1964. This operator exists in flat spacetime because one has translational symmetry $x^\mu \to x^\mu + a^\mu$. This is no longer true in generic curved spacetimes. So no, there is no momentum operator in curved spacetimes. That being said, quantum mechanics/field theory is well-defined only in spacetimes which are stationary, i.e. those that have a global time-like Killing vector $\xi$ (with $\xi^2 < 0$). In such spacetimes, there is a time-translation symmetry, so there exists a Hamiltonian operator ${\hat H} = - i \hbar \xi^\mu \partial_\mu$. In Schwarzschild spacetime, $\xi = \partial_t$.
Consider a set $S \subseteq \Sigma^n$ where $\Sigma$ is a finite alphabet and $p : \Sigma \rightarrow [0,1]$ is a probability function. Let $T$ be a tree leaf-labeled by the elements of $S$. Consider the following random process which labels the nodes of $T$ in bottom-up order. Given an unlabeled node $x$ whose children $y,z$ have been labeled with words $w_y,w_z$, we assign to $x$ a word $w_x$ as follows: For each $i \in [n]$: (1) if $w_y[i], w_z[i]$ are equal to the same letter $a$, then set $w_x[i]$ to $a$; (2) if $w_y[i], w_z[i]$ are equal to different letters $a,b$, then set $w_x[i]$ to $a$ with probability $p(a)/(p(a)+p(b))$, and to $b$ otherwise. Let $w_r$ be the random word assigned to the root at the end of this process. Can we devise a polynomial-time strategy to maximize $Var(w_r) = \sum_{i \in [n]} Var(w_r[i])$? This could be done either 'globally' (by constructing the tree at once) or 'online' (by selecting a 'locally optimal' matching for each generation). The intuition is that a node $u$ represents a 'genetic group' whose profile is described by the distribution of $w_u$, and that we would try to mix groups at each generation in order to maximize the genetic diversity. This is probably NOT desirable in practice due to the possible effects of dominant/recessive mutations.
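To experiment with the process, here is a small Python sketch (the binary alphabet, the uniform $p$, and the random-pairing tree are all invented for illustration; the question allows arbitrary trees and alphabets):

```python
import random

def merge(wy, wz, p):
    """Label a parent from its two children, per rules (1) and (2)."""
    out = []
    for a, b in zip(wy, wz):
        if a == b:
            out.append(a)                                    # rule (1)
        else:
            out.append(a if random.random() < p[a] / (p[a] + p[b]) else b)  # rule (2)
    return ''.join(out)

def root_word(words, p):
    """Pair up the current generation at random until one word remains."""
    words = list(words)
    while len(words) > 1:
        random.shuffle(words)
        words = [merge(words[i], words[i + 1], p)
                 for i in range(0, len(words) - 1, 2)] + words[len(words) - len(words) % 2:]
    return words[0]

random.seed(0)
p = {'a': 0.5, 'b': 0.5}
S = ['aa', 'ab', 'ba', 'bb']
r = root_word(S, p)
same = merge('ab', 'ab', p)   # merging identical words is deterministic
print(r, same)
```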
What you're getting at is the difference between a vector space and an algebra over a field (see https://en.wikipedia.org/wiki/Algebra_over_a_field). I'll just assume our field is $\mathbb{R}$, the real numbers. The set of vectors of length $n$ with entries in $\mathbb{R}$, which we might also think about as $n \times 1$ matrices with entries in $\mathbb{R}$ forms a vector space over $\mathbb{R}$. What this means is that we can add any two vectors together, and we can multiply a vector by a real number (by multiplying each entry by that number) (this is called "scalar multiplication"), and scalar multiplication distributes over vector addition. That is, if $v,w$ are vectors and $a \in \mathbb{R}$,$$a(v+w) = av + aw$$The set of polynomials in the variable $x$ with coefficients in $\mathbb{R}$ also forms a vector space over $\mathbb{R}$. We can add two polynomials, and we can multiply a polynomial by a scalar, and this scalar multiplication distributes over polynomial addition. For example,$$2\Big((x^2 + 2x + 1) + (x^3 + 3) \Big) = 2(x^2 + 2x + 1) + 2(x^3 + 3)$$What you've noticed is that these polynomials also have an algebra structure. We can multiply two polynomials together, e.g.$$(x+1)(x+2) = x^2 + 3x + 2$$and this polynomial multiplication also distributes over polynomial addition. In fact, this multiplication includes scalar multiplication by real numbers as a special case, since the real number $a$ is a polynomial where all the coefficients of the $x^k$ terms are zero. What you seem hung up on is that there isn't a "natural" or "obvious" way to multiply two vectors. However, just because there isn't one "right" multiplication for vectors of length $n$ doesn't mean there aren't any good ones. In fact, there are a vast multitude of reasonable ways to define multiplication between two vectors to get another vector. The cross product on $\mathbb{R}^3$ is one example. 
The study of things like Lie algebras and Jordan algebras is all about classifying these kinds of multiplication operations, based on imposing some additional restrictions, because there are really too many to study all at once.
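As a concrete sketch of the two structures discussed above (vector-space operations versus algebra multiplications), using NumPy, whose `polynomial` module lists coefficients lowest degree first:

```python
import numpy as np
from numpy.polynomial import polynomial as P

# Polynomials form a vector space *and* an algebra: we can add, scale,
# and multiply them.  Coefficients are listed lowest degree first.
p = np.array([1.0, 1.0])        # x + 1
q = np.array([2.0, 1.0])        # x + 2
product = P.polymul(p, q)       # (x+1)(x+2) = x^2 + 3x + 2  ->  [2, 3, 1]

# Plain R^3 vectors only come with addition and scalar multiplication,
# but the cross product is one of many possible algebra structures:
e1, e2 = np.array([1.0, 0, 0]), np.array([0, 1.0, 0])
e3 = np.cross(e1, e2)           # [0, 0, 1]

print(product, e3)
```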
So the question states: Let $B = \{x = (x_1,x_2,x_3) \in \mathbb{R}^3: x_1^2 +x_2^2 +x_3^2 \leq 1 \}$ be the unit ball in $\mathbb{R}^3$. Compute the diameter of $B$ for each of the following metrics. Note: $diam(B) = \sup\{d(x,y): x,y \in B\}$ I know the diameter is 2, but I want to be able to do this in general for an arbitrary distance. I think seeing this one will help me do the others. Here is what I have so far using the Euclidean distance. Let $x,y \in B$; then \begin{eqnarray*} d(x,y) &=& ((x_1 - y_1)^2+(x_2 -y_2)^2 + (x_3-y_3)^2)^{1/2} \\ &=& ((x_1^2 +x_2^2+x_3^2)+(y_1^2+y_2^2+y_3^2) -2(x_1y_1+x_2y_2+x_3y_3))^{1/2} \\ &\leq& (1 + 1 - 2(x_1y_1+x_2y_2+x_3y_3))^{1/2} \\ &=& (2(1 -(x_1y_1+x_2y_2+x_3y_3)))^{1/2} \end{eqnarray*} I still haven't used the $\sup$, and I'm not sure how to move on from here. Do I need to use Lagrange multipliers or is there a better way to solve this?
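Before tackling the general argument, a numerical sanity check for the Euclidean case (a sketch; the sample size and seed are arbitrary) confirms that the supremum is $2$ and is attained by antipodal boundary points:

```python
import numpy as np

rng = np.random.default_rng(0)

# Sample points on the boundary sphere of the unit ball
pts = rng.normal(size=(400, 3))
pts /= np.linalg.norm(pts, axis=1, keepdims=True)

# All pairwise Euclidean distances; their max estimates the sup
diffs = pts[:, None, :] - pts[None, :, :]
dists = np.linalg.norm(diffs, axis=-1)
est = dists.max()

# Any antipodal pair x, -x attains distance exactly 2
x = pts[0]
exact = np.linalg.norm(x - (-x))
print(est, exact)
```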
Let $F$ be a field of characteristic zero, $\overline{F}$ be the algebraic closure of $F$. Let $\zeta_n$ be a primitive $n$-th root of unity in $\overline{F}$. Then it is well-known that $F(\zeta_n)$ is a finite Galois extension of $F$. Q. If $\sigma:F\rightarrow F$ is a field automorphism, then is it always possible to extend it to an automorphism of $F(\zeta_n)$? The question might be trivial, I do not know. But usually, in Galois theory, I have mostly seen extensions of the identity automorphism of a field to its finite (or even Galois) extensions. Here I am considering the problem of extending an arbitrary automorphism of $F$ to an automorphism of $F(\zeta_n)$.
Let $M$ be a smooth manifold. (1) A subset $S$ of $M$ that, with the subspace topology, is a topological manifold (with or without boundary), together with a differential structure that makes the inclusion $\iota:S\rightarrow M$ a smooth embedding (that is, a smooth map with constant rank equal to the dimension of $S$), is called a smooth submanifold with or without boundary. (2) Equivalently, you could ask for $S$ to satisfy the following condition: There is a fixed $k$ such that, for every $p$ in $S$, there is a smooth chart $(U,\varphi)$ of $M$ such that $\varphi(U\cap S)$ is a $k$-slice of $\varphi(U)$, where a $k$-slice of a set $U$ in $\mathbb{R}^n$ will be $\lbrace x\in U:x_k \geq 0$ and $x_{k+1}=\cdots=x_n=0\rbrace$. Now, if we instead let $M$ have a boundary, I think these two definitions aren't equivalent anymore. I see how the second definition still implies the first one, but I have problems with the other implication. So, does the first one imply the second one? That is, let $M$ be a smooth manifold with or without boundary, and $S\subseteq M$ as in (1). Does $S$ satisfy the condition stated in (2)? The difficulty is that when proving this for $M$ without boundary, you use the constant rank theorem, which (as I understand it) may not hold when the codomain has non-empty boundary.
I've translated the following from my German textbook, so please correct me if there is something wrong or strange. Definitions Let $X$ be a set and $\mathfrak{T} \subseteq \mathcal{P}(X)$ satisfy the following conditions: (i) $\emptyset, X \in \mathfrak{T}$ (ii) $\forall U_1, U_2 \in \mathfrak{T}: U_1 \cap U_2 \in \mathfrak{T}$ (iii) Let $I$ be an index set with $\forall i \in I: U_i \in \mathfrak{T} $. Then: $\bigcup_{i \in I} U_i \in \mathfrak{T}$ Then $\mathfrak{T}$ is called a topology and $(X, \mathfrak{T})$ is called a topological space. Let $(X, \mathfrak{T})$ be a topological space. $\mathcal{S} \subseteq \mathfrak{T}$ is called a subbasis of $\mathfrak{T} : \Leftrightarrow \forall U \in \mathfrak{T}: U$ is a union of finite intersections of elements of $\mathcal{S}$. Questions Can a finite subbasis generate an infinite topology? Or, more formally, is the following implication true: $$|\mathcal{S}| \in \mathbb{N} \Rightarrow |\mathfrak{T}| \in \mathbb{N}$$ Can a finite subbasis generate any topology for an infinite space $X$? So, formally: $$|\mathcal{S}| \in \mathbb{N} \Rightarrow |X| \in \mathbb{N}$$
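Regarding question 1, the construction itself bounds the topology by $2^{2^{|\mathcal{S}|}}$ elements (at most $2^{|\mathcal{S}|}$ basic intersections, then all unions of those), so a finite subbasis always generates a finite topology. A brute-force sketch (the finite stand-in for $X$ and the three-set subbasis are invented for illustration):

```python
from itertools import chain, combinations

def powerset(iterable):
    s = list(iterable)
    return chain.from_iterable(combinations(s, r) for r in range(len(s) + 1))

def topology_from_subbasis(X, subbasis):
    X = frozenset(X)
    subbasis = [frozenset(s) for s in subbasis]
    # Basis: all finite intersections of subbasis elements
    # (the empty intersection is X itself).
    basis = set()
    for sets in powerset(subbasis):
        inter = X
        for s in sets:
            inter = inter & s
        basis.add(inter)
    # Topology: all unions of basis elements (the empty union is the empty set).
    topology = set()
    for sets in powerset(list(basis)):
        union = frozenset()
        for s in sets:
            union = union | s
        topology.add(union)
    return topology

X = range(100)   # arbitrary finite stand-in, chosen for illustration
S = [range(0, 60), range(40, 100), range(30, 50)]
T = topology_from_subbasis(X, S)
print(len(T))    # finite: at most 2^(2^3) = 256, far fewer after deduplication
```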
I usually need to draw graphs with multiple edges. I would really appreciate it if someone could tell me how to write a multiple-edge command which will allow me to write one line instead of repeating many. Say the name of my edge command is \myedge[m], meaning that I will draw m edges between two nodes. I want to use something like the following whenever I need to draw a multiple edge (multiplicity is 5 in the following) \draw (a) \myedge[5] (b); Here is my actual code example... It is troublesome to keep using it in graphs where I have to keep drawing edges.

\documentclass{article}
\usepackage{tikz}
\begin{document}
\begin{center}
\begin{tikzpicture}
\node[circle,fill=black,inner sep=1.5pt,draw] (a) at (180:1cm) {};
\node[circle,fill=black,inner sep=1.5pt,draw] (b) at (0:1cm) {};
\draw[thick] (a) -- (b);
\end{tikzpicture}
\begin{tikzpicture}
\node[circle,fill=black,inner sep=1.5pt,draw] (a) at (180:1cm) {};
\node[circle,fill=black,inner sep=1.5pt,draw] (b) at (0:1cm) {};
\draw[thick] (a) edge[bend left=5] (b);
\draw[thick] (a) edge[bend right=5] (b);
\end{tikzpicture}
\begin{tikzpicture}
\node[circle,fill=black,inner sep=1.5pt,draw] (a) at (180:1cm) {};
\node[circle,fill=black,inner sep=1.5pt,draw] (b) at (0:1cm) {};
\draw[thick] (a) edge[bend left] (b);
\draw[thick] (a) edge (b);
\draw[thick] (a) edge[bend right] (b);
\end{tikzpicture}
\begin{tikzpicture}
\node[circle,fill=black,inner sep=1.5pt,draw] (a) at (180:1cm) {};
\node[circle,fill=black,inner sep=1.5pt,draw] (b) at (0:1cm) {};
\draw[thick] (a) edge[bend left=15] (b);
\draw[thick] (a) edge[bend left=5] (b);
\draw[thick] (a) edge[bend right=5] (b);
\draw[thick] (a) edge[bend right=15] (b);
\end{tikzpicture}
\begin{tikzpicture}
\node[circle,fill=black,inner sep=1.5pt,draw] (a) at (180:1cm) {};
\node[circle,fill=black,inner sep=1.5pt,draw] (b) at (0:1cm) {};
\draw[thick] (a) edge[bend left=16] (b);
\draw[thick] (a) edge[bend left=8] (b);
\draw[thick] (a) edge[bend right=8] (b);
\draw[thick] (a) edge[bend right=16] (b);
\draw[thick] (a) -- (b);
\end{tikzpicture}
\end{center}
\end{document}

Output: SITUATION: Here is what exactly I want to do in this new definition: Say I would like to bend perpendicular to the line connecting two nodes (u) and (v) with coordinates (a,b) and (c,d). This gives me the chance to use ((c-a)*0.2*\i,(d-b)*0.2*\i) instead of $(0,0.2*\i)$, and it is most general. However, I can see that the bigger trouble here is that in defining edge[me=<number>] you actually do not take into account the ends of the edge. Can we do that? This would certainly prevent the curly edges when edges are not on a horizontal line (it is worst when they are vertical, actually). On the other hand, depending on how big $r=\sqrt{(c-a)^2+(d-b)^2}$ is, we might need to replace 0.2 with a much smaller/larger number. Maybe it is a good idea even to replace 0.2 with $\frac{0.2}{r}$.
Arithmetic Sequences 3, 7, 11, 15, 19 is an arithmetic sequence with 4 being added to each term to get the next term. Given any arithmetic sequence we can find an expression for the \[n\]th term. If \[d\] is the number that is added each time (called the common difference) and \[a\] is the first term, then the \[n\]th term is \[a_n=a+(n-1)d\]. For the sequence above, \[a=3, \: d=4\]. Hence \[a_n=3+(n-1) \times 4=4n-1\] We can also find a formula for the sum \[S_n\] of the first \[n\] terms. \[S_n=a+(a+d)+...+(a+(n-2)d)+ (a+(n-1)d)\] Writing this sum backwards gives \[S_n=(a+(n-1)d)+(a+(n-2)d)+...+ (a+d)+a\] Now adding these two sums term by term gives \[\begin{equation} \begin{aligned} 2S_n &= \underbrace{(2a+(n-1)d)+(2a+(n-1)d)+...+ (2a+(n-1)d)+(2a+(n-1)d)}_{n \: terms} \\ &= n(2a+(n-1)d) \end{aligned} \end{equation}\] Hence \[S_n=\frac{n}{2}(2a+(n-1)d)\] For the sequence above the sum of the first 20 terms is \[S_{20}=\frac{20}{2} \times (2 \times 3+(20-1) \times 4)=820\]
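The two formulas can be checked with a short script:

```python
def nth_term(a, d, n):
    """n-th term of an arithmetic sequence: a_n = a + (n-1)d."""
    return a + (n - 1) * d

def partial_sum(a, d, n):
    """Sum of the first n terms: S_n = n/2 * (2a + (n-1)d)."""
    return n * (2 * a + (n - 1) * d) // 2

# The sequence 3, 7, 11, 15, 19: a = 3, d = 4
print(nth_term(3, 4, 5))      # 19
print(partial_sum(3, 4, 20))  # 820

# The closed form agrees with adding the terms one by one
assert partial_sum(3, 4, 20) == sum(nth_term(3, 4, k) for k in range(1, 21))
```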
I swear this was supposed to be Silly proofs three, but obviously my memories of having done two silly proofs are misleading. This proof isn't actually that silly. It's a proof of the [tex]L^2[/tex] version of the Fourier inversion theorem. We start by noting the following important result: [tex]\int_{-\infty}^{\infty} e^{itx} e^{-\frac{1}{2}x^2} dx = \sqrt{2 \pi} e^{-\frac{1}{2}t^2}[/tex] Thus if we let [tex]h_0 = e^{-\frac{1}{2}x^2}[/tex] then we have [tex]\hat{h_0} = h_0 [/tex] (where [tex]\hat{f}[/tex] denotes the Fourier transform of [tex]f[/tex]) Let [tex]h_n(x) = (-1)^n e^{x^2/2} \frac{d^n}{dx^n} e^{-x^2}[/tex] This satisfies: [tex]h_n' - xh_n = -h_{n+1}[/tex] So taking the Fourier transform we get [tex]ix \hat{h_n} - i \frac{d}{dx} \hat{h_n} = -\hat{h_{n+1}}[/tex] So [tex]h_n[/tex] and [tex]i^n \hat{h_n}[/tex] satisfy the same recurrence relation. Further [tex]\hat{h_0} = h_0[/tex] Hence we have that [tex]\hat{h_n} = (-i)^n h_n[/tex]. Now, the functions [tex]h_n[/tex] are orthogonal members of [tex]L^2[/tex], and so (after normalization) form an orthonormal basis for their closed span. On this span the map [tex]h \to \hat{h}[/tex] is an isometric linear map with each [tex]h_n[/tex] an eigenvector. Further [tex]\hat{\hat{h_n}} = (-1)^n h_n[/tex]. Thus the Fourier transform is a linear isometry from this space to itself. Now, [tex]h_n[/tex] is odd iff n is odd and even iff n is even, i.e. [tex]h_n(-x) = (-1)^n h_n(x)[/tex] Thus [tex]\hat{\hat{h_n}}(x) = h_n(-x)[/tex]. And hence [tex]\hat{\hat{h}}(x) = h(-x)[/tex] for any [tex]h[/tex] in the span. As both sides are continuous, it will thus suffice to show that the span of the [tex]h_n[/tex] is dense. Exercise: The span of the [tex]h_n[/tex] is precisely the set of functions of the form [tex]p(x) e^{- \frac{1}{2} x^2 }[/tex], where [tex]p[/tex] is some polynomial.
It will thus suffice to prove the following: Suppose [tex]f[/tex] is in [tex]L^2[/tex] and [tex]\int x^n e^{-\frac{1}{2}x^2 } f(x) dx = 0[/tex] for every [tex]n[/tex]. Then [tex]f = 0[/tex]. But this is just an application of the density of the polynomial functions in [tex]L^2[a, b] [/tex]: pick a big enough interval so that the integral of [tex]|f(x)|^2[/tex] over that interval is within [tex]\epsilon^2[/tex] of [tex]||f||^2[/tex], and this shows that the integral of [tex]|f(x)|^2[/tex] over that interval is [tex]0[/tex]. Thus [tex]||f||_2 < \epsilon[/tex], which was arbitrary, hence [tex]||f||_2 = 0[/tex]. (Note: When editing this for the new blog site I noticed that this proof is wrong. I haven't been able to fix it yet, but will update this when I do). I've dodged numerous details here, like how the [tex]L^2[/tex] Fourier transform is actually defined, but this really can be turned into a fully rigorous proof - nothing in this is wrong, just a little fudged. The problem as I see it is that - while the [tex]L^2[/tex] Fourier theory is very pretty and cool - this doesn't really convert well to a proof of the [tex]L^1[/tex] case, which is in many ways the more important one.
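Incidentally, the recurrence [tex]h_n' - xh_n = -h_{n+1}[/tex] can be machine-checked for small n; a quick SymPy sketch (independent of the proof itself):

```python
import sympy as sp

x = sp.symbols('x')

def h(n):
    # h_n(x) = (-1)^n e^{x^2/2} d^n/dx^n e^{-x^2}
    return sp.simplify((-1) ** n * sp.exp(x**2 / 2)
                       * sp.diff(sp.exp(-x**2), x, n))

# Check h_n' - x*h_n == -h_{n+1} for the first few n
residuals = [sp.simplify(sp.diff(h(n), x) - x * h(n) + h(n + 1))
             for n in range(4)]
print(residuals)  # [0, 0, 0, 0]
```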
Indeed, you have lost some sort of uniqueness of dimension, but not between the vector and the spinor representation: The vector representation of $\mathrm{SO}(1,3)$ is irreducible, while the four-dimensional Dirac-spinor representation is not - it is the sum of a left-chiral and a right-chiral Weyl representation. In general, the finite-dimensional representations of the (connected component of the) Lorentz group are in bijection with the finite-dimensional representations of $\mathfrak{su}(2)\oplus\mathfrak{su}(2)$. For the precise relation between $\mathrm{SO}(1,3)$ and $\mathfrak{su}(2)\oplus\mathfrak{su}(2)$, see this answer by Qmechanic. The representation theory of $\mathfrak{su}(2)$ is precisely that of spin as we know it, and therefore a finite-dimensional representation of the Lorentz group is labeled by two half-integers $(s_1,s_2)$. If one examines the way the $\mathfrak{su}(2)$ algebras actually relate to the Lorentz algebra, one finds that the total spin of such a representation should be $s_1+s_2$. The representation space associated to $(s_1,s_2)$ is just $\mathbb{C}^{2s_1 +1}\otimes\mathbb{C}^{2s_2+1}$, i.e. we tensor the spin-$s_i$ representations with each other. Of course, this shows you that the dimension of the space is no longer unique for a given total spin, even if the representation is irreducible. However, this has nothing to do with the non-compactness of the Lorentz group; it's simply because it is a little more complicated than the "easy" $\mathrm{SO}(3)$. For instance, the compact $\mathrm{SU}(2)\times\mathrm{SU}(2)$ shares the same finite-dimensional representation theory.
Alpha 1. INTRODUCTION Evaluating the return of an investment without properly accounting for the risks taken does not make a lot of sense. Thus in asset management we take a relative performance perspective, comparing the return on an investment to the return on investments with similar risk. Alpha is the average return in excess of such a benchmark (in other words, speaking of alpha without properly defining the benchmark is meaningless). It is important that the benchmark be tradable; furthermore, in practice it is usually a passive strategy (e.g. an index-tracking ETF) which requires little to no investment management and is available at a reasonable price. 2. MEASURING ALPHA Sticking to the Ang (2014) [1] notation, we write the excess return as the difference between the returns on the asset and the benchmark: $r^{ex}_t = r_t - r^{bmk}_t\tag{1}$ Alpha is the average excess return across $T$ observations: $\alpha=\dfrac{1}{T}\sum\limits_{t=1}^{T}{r^{ex}_t} = \dfrac{1}{T}\sum\limits_{t=1}^{T}{(r_t - r^{bmk}_t)}\tag{2}$ So far we have not looked at risk. Consider the example of benchmarking against the risk-free rate: alpha is then equivalent to the average risk premium $\alpha= \bar{r}_t - \bar{r}_{ft}$, and the Sharpe ratio is then $\frac{\alpha}{\sigma}$, where $\sigma$ is the risk premium's standard deviation (which is equal to the standard deviation of the asset return if the risk-free rate is constant). The information ratio generalizes the Sharpe ratio to any benchmark: $ IR=\dfrac{\bar{r}_t - \bar{r}^{bmk}_t}{\sigma(r_t - r^{bmk}_t)}=\dfrac{\alpha}{\bar{\sigma}}\tag{3}$ The denominator in (3) is called the tracking error; it measures the dispersion of asset returns around the benchmark. Just like the Sharpe ratio, equation (3) is interpreted as the excess return per unit of risk - in other words, how attractive the investment is, accounting for the risk taken. 3. ADJUSTING FOR FACTOR RISK Recall that an asset's return may be represented through a collection of factors that capture systematic risks.
In the case of the CAPM, the systematic component of excess return is driven by a single factor --- the market risk premium: $E[R_i]-r_f = \beta_i(E[R_M] - r_f)\tag{4}$ Rearranging equation (4), we can write the expected return of asset $i$ as the following linear combination of the risk-free rate and the market portfolio: $E[R_i] =r_f+\beta_i(E[R_M] - r_f)=(1-\beta_i)r_f+\beta_i E[R_M]\tag{5}$ The RHS of (5) is the replicating portfolio; it implies that holding $(1-\beta_i)$ dollars in the risk-free asset and $\beta_i$ dollars in the market portfolio gives the same expected return as investing \$1 in asset $i$. In practice we estimate factor-adjusted alpha by running the following regression: $ R_{it}-r_{ft} =\alpha_i +\beta_i(R_{Mt} - r_{ft})+\varepsilon_{it}\tag{6}$ Suppose we benchmark the performance of an asset against the market risk premium. In this case, alphas measured as the average excess return (equation (2)) and estimated from equation (6) are equal only if $\beta_i=1$. In other words, failing to adjust for factor risk is equivalent to assuming that the asset and benchmark share the same risk structure, which is: i.) not generally true; ii.) more importantly, misleading for evaluating an investment's performance. The same approach applies to a multiple-factor benchmark; for example, the Fama and French (2014) [2] 5-factor model nests the CAPM and Fama-French 3-factor models as special cases: $R_{it}-r_{ft} =\underbrace{\overbrace{\alpha_i +\beta_{M,i}(R_{Mt} - r_{ft})}^\textrm{CAPM}+\beta_{SMB,i}SMB_t+\beta_{HML,i}HML_t}_\textrm{FF 3-factor}+\beta_{RMW,i}RMW_t+\beta_{CMA,i}CMA_t+\varepsilon_{it}\tag{7} $ The RMW (robust-minus-weak) factor is constructed by buying (selling) stocks with robust (weak) profitability and reflects one dimension of quality. CMA (conservative-minus-aggressive) takes a long position in companies with low investment and shorts high-investment firms.
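The quantities in equations (2), (3) and (6) can be illustrated in a few lines of code; the sketch below generates synthetic monthly data with a known $\alpha$ and $\beta$ (all numbers invented for illustration), then computes the raw excess-return alpha, the information ratio, and the regression-based factor-adjusted alpha:

```python
import numpy as np

rng = np.random.default_rng(42)
T = 600

# Synthetic monthly data with known alpha and beta (invented numbers)
mkt_excess = rng.normal(0.005, 0.04, T)          # market risk premium
true_alpha, true_beta = 0.002, 1.3
asset_excess = true_alpha + true_beta * mkt_excess + rng.normal(0, 0.02, T)

# Equations (1)-(3): raw excess return over the benchmark, alpha, IR
excess = asset_excess - mkt_excess               # benchmark = market
raw_alpha = excess.mean()                        # equation (2)
tracking_error = excess.std(ddof=1)
ir = raw_alpha / tracking_error                  # equation (3)

# Equation (6): OLS of asset excess returns on a constant and the market
X = np.column_stack([np.ones(T), mkt_excess])
alpha_hat, beta_hat = np.linalg.lstsq(X, asset_excess, rcond=None)[0]

print(raw_alpha, ir, alpha_hat, beta_hat)
```

With $\beta_i \ne 1$, the raw excess-return alpha of equation (2) and the factor-adjusted alpha of equation (6) generally differ, which is exactly the point of section 3.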
As an illustration consider a 6-factor model (FF-5 plus momentum) for Berkshire Hathaway and two ETFs, namely SPDR S&P500 Value (SPYV) and SPDR S&P600 Small Growth (SLYG). The estimates of the regressions for monthly returns are displayed in Table 1; the sample is November 2000 - January 2015. The factors come from Kenneth French's page ( http://mba.tuck.dartmouth.edu/pages/faculty/ken.french/data_library.html). Moreover, we note that alpha is often expressed on an annualized basis. An annualized alpha is obtained from a monthly regression simply by multiplying by 12, from a weekly regression by 52, and so on. 4. GENERATING ALPHA Active fund management is expected to deliver alpha in return for the higher costs it typically charges. However, the average return for all investors cannot be better than the market return; after adding the costs of management, investors' average returns have to be below the market. Hence, active management is often described as a loser's game. Swedroe and Berkin (2015) [3] reflect on the possibility of generating alpha nowadays. The authors present four distinct reasons why they see higher hurdles to creating alpha in the future: Alpha is measured with respect to factors carrying positive risk premia; hence, to create alpha, fund managers need to exploit strategies beyond simply value, size, or momentum. Moreover, recent factor models incorporate more than just three factors, setting the hurdle even higher. Untalented investors are increasingly shifting to pure passive strategies due to their poor performance. As a consequence, the set of remaining active managers becomes harder to beat due to this “adverse” selection. Active managers follow an arms race by investing more in talent, data and technology. Developing new alpha-generating strategies requires more and more resources. The overall growth of assets implies that a fixed amount of money left on the table has to be divided into smaller pieces.
(even though this is hard to quantify). 5. SUMMARY Alpha is a mere reflection of a benchmark's factor composition. The benchmarks, however, should be tradable alternatives which are available to investors at low cost; so, for example, the CMA portfolio should not be included in the benchmark, since there are currently no ETFs tracking this factor. Furthermore, benchmarks should be adjusted for risk, otherwise alphas will overstate performance relative to the risk taken if the loading on the benchmark exceeds 1, and understate it in the opposite case. There are some additional issues that complicate performance evaluation. First, benchmarking against strategies with non-linear payoffs (think of derivatives) requires specific techniques. Second, factor loadings can vary over time, thus making proper adjustment for risk difficult --- so models allowing for dynamic loadings should be used (e.g. rolling regressions, dynamic conditional beta, or state space models; a short discussion on non-linear benchmarks and time-varying coefficients can be found in Ang (2014) [1] Ch.7).
An easy calculation is to start with the solar constant, the solar power (energy per unit time) per unit area at a distance of one astronomical unit. This is 1.361 kilowatts per square meter. The surface area of the Earth is $4\pi R^2$, where $R$ is the radius of the Earth, while the cross section of the Earth to solar radiation is $\pi R^2$. Thus, averaged over its entire surface, the Earth receives 1/4 of the solar constant. Assume a planet with an atmosphere that is transparent in the thermal infrared, with the same albedo as that of the Earth (0.306), rotating rapidly like the Earth, and orbiting at the same distance from the Sun as the Earth. The effective temperature of this planet is given by the Stefan-Boltzmann law:$$T = \left(\frac{(1-\alpha)\,I_\text{sc}}{4\sigma}\right)^{1/4}$$where $\alpha$ is the albedo (0.306), $I_\text{sc}$ is the solar constant (1.361 kW/m$^2$), $\sigma$ is the Stefan-Boltzmann constant (5.6704×10$^{-8}$ W/m$^2$/K$^4$), and the factor of 1/4 arises from the fact that the Earth is a rapidly rotating spherical object. The result is about 254 K, i.e. -19 °C.
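Plugging the numbers in (a direct transcription of the formula above):

```python
# Effective temperature of an Earth-like planet from the Stefan-Boltzmann law
albedo = 0.306
I_sc = 1361.0        # solar constant, W/m^2
sigma = 5.6704e-8    # Stefan-Boltzmann constant, W m^-2 K^-4

T = ((1 - albedo) * I_sc / (4 * sigma)) ** 0.25   # kelvin
T_celsius = T - 273.15
print(round(T, 1), round(T_celsius, 1))  # about 254.0 K, i.e. about -19 C
```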
Some useful discussion and links can be found on projectrho.com, I mentioned these in comments before the question was migrated but they were deleted in the migration, so I'll repost here. First of all, in the Space War page, at the top there are various links to posts on the "rocketpunk manifesto" blog which have good discussions of issues relating to space combat. And here are some other good pages from projectrho.com: Detection in Space Warfare (most relevant to your questions about stealth) Defenses in Space Warfare Introduction to Space Weapons (mostly just devoted to classification, but has a link to this site which has a lot of interesting ideas) Conventional Space Weapons Exotic Space Weapons Space Warship Designs Combat Theater Planetary Attack You can find some other semi-relevant pages if you google site:www.projectrho.com and "space war" (in quotes), but the other pages I saw are almost entirely devoted to describing how space war was depicted in various science fiction works rather than discussing how it would "realistically" work. (edited to add that I recently came across another good article about realistic space combat, The Physics of Space Battles) Since your question is mainly about whether it would be possible to "hide", definitely look through the "Detection in Space Warfare" page, the author is of the definite opinion that none of the proposed solutions would work. For example, here's the discussion of just channeling exhaust and waste heat in a narrow beam going in the opposite direction of where the enemy is located: Glancing at the above equation it is evident that the lower the spacecraft's temperature, the harder it is to detect. "Aha!" you say, "why not refrigerate the ship and radiate the heat from the side facing away from the enemy?" Ken Burnside explains why not. To actively refrigerate, you need power. So you have to fire up the nuclear reactor. 
Suddenly you have a hot spot on your ship that is about 800 K, minimum, so you now have even more waste heat to dump. This means a larger radiator surface to dump all the heat, which means more mass. Much more mass. It will be either a whopping two to three times the mass of your reactor or it will be so flimsy it will snap the moment you engage the thrusters. It is a bigger target, and now you have to start worrying about a hostile ship noticing that you occluded a star. Dr. John Schilling had some more bad news for would-be stealthers trying to radiate the heat from the side facing away from the enemy. "Besides, redirecting the emissions merely relocates the problem. The energy's got to go somewhere, and for a fairly modest investment in picket ships or sensor drones, the enemy can pretty much block you from safely radiating to any significant portion of the sky. "And if you try to focus the emissions into some very narrow cone you know to be safe, you run into the problem that the radiator area for a given power is inversely proportional to the fraction of the sky illuminated. With proportionate increase in both the heat leakage through the back surfaces, and the signature to active or semi-active (reflected sunlight) sensors. "Plus, there's the problem of how you know what a safe direction to radiate is in the first place. You seem to be simultaneously arguing for stealthy spaceships and complete knowledge of the position of enemy sensor platforms. If stealth works, you can't expect to know where the enemy has all of his sensors, so you can't know what is a safe direction to radiate. Which means you can't expect to achieve practical stealth using that mechanism in the first place. "Sixty degrees has been suggested here as a reasonably 'narrow' cone to hide one's emissions in.
As a sixty-degree cone is roughly one-tenth of a full sphere, a couple dozen pickets or drones are enough to cover the full sky so that there is no safe direction to radiate even if you know where they all are. The possibility of hidden sensor platforms, and especially hidden, moving sensor platforms, is just icing on the cake. "Note, in particular, that a moving sensor platform doesn't have to be within your emission cone at any specific time to detect you, it just has to pass through that cone at some time during the course of the pre-battle maneuvering. Which rather substantially increases the probability of detection even for very narrow emission cones. Then the page gives another quote from Ken Burnside: "The problem with directional radiation is that you have to know both where the enemy sensor platforms are, and you have to have a way of slowing down to match orbits that isn't the equivalent of swinging end for end and lighting up the torch. Furthermore, directing your waste heat (and making some part of your ship colder, a related phenomenon) requires more power for the heat pump - and every W of power generated generates 4 W of waste heat. It gets into the Red Queen's Race very quickly. "Imagine your radiators as being sheets of paper sticking edge out from the hull of your ship. You radiate from the flat sides. If you know exactly where the enemy sensors are, you can try and put your radiators edge on to them, and will "hide". You want your radiators to be 180 degrees apart so they're not radiating into each other. "Most configurations that radiate only to a part of the sky will be vastly inefficient because they radiate into each other. Which means they get larger and more massive, which reduces engine performance...and they still require that you know where the sensor is. "The next logical step is to make a sunshade that blocks your radiation from the sensor. 
This also requires knowing where the sensor is, and generates problems if the sensor blocker is attached to your ship, since it will slowly heat up to match the equilibrium temperature of your outer hull....and may block your sensors in that direction as well. Update: Some commenters have been asking about the possibility of having a sort of "heat battery" which absorbs waste heat generated by propulsion and other systems on the ship for the period of time where it needs to be stealthy, and is well-insulated so as not to give off detectable blackbody radiation, or to leak its energy to other parts of the ship as heat, so that from the outside the ship would not give off radiation due to heat. I found some useful equations relevant to the feasibility of this, so I thought I'd post them. Suppose we want to have enough fuel for some set of maneuvers during the period the rocket needs to be stealthy, such that, if the same amount of fuel were spent just accelerating the rocket continuously in one direction, the rocket's change in velocity would be $\Delta v$. Then if the final mass once all this fuel is spent is $m_1$ (which will include both the mass of the weapons and other useful systems, like life support if the rocket is manned and computers and sensors if it's not, as well as the mass of the heat battery), and the initial mass including fuel is $m_0$, and the effective exhaust velocity of the propellant is $v_e$, then the Tsiolkovsky rocket equation relates these quantities: $\Delta v = v_e \ln \frac{m_0}{m_1}$ A related equation is the amount of energy the fuel must supply to the rocket in order to achieve this $\Delta v$, given the effective exhaust velocity $v_e$ and the final mass $m_1$ that should be left over once the fuel is used up. 
As given in the "energy" section of the spacecraft propulsion article on wikipedia, "If the energy is produced by the mass itself, as in a chemical rocket", then the energy would be given by this formula: $E = \frac{1}{2}m_1 (e^{\Delta v / v_e} - 1)v_e^2$ The "internal efficiency" $\eta_{int}$ of a rocket is the ratio of the actual increase in linear kinetic energy delivered per unit time to the internal chemical energy used up per unit time, as explained here, so if the fuel delivered an amount of linear kinetic energy $E$ to the rocket while it was burned, the original chemical energy must have been a greater amount $E / \eta_{int}$, and thus the energy lost to heat must have been approximately $(E / \eta_{int}) - E = E( \frac{1}{\eta_{int}} - 1) = E\frac{1 - \eta_{int}}{\eta_{int}}$ (Note that this isn't exact, because some of the loss of efficiency is not due to energy lost to heat, but rather due to exhaust particles having some kinetic energy that isn't parallel to the direction the rocket is traveling. Also, I'm assuming below that the heat battery is somehow absorbing all energy lost to heat; the calculations would be somewhat different if heat couldn't be channeled away from the exhaust trail and the battery absorbed only the heat that would be added to the ship itself. See the chart here for estimates of about how much fuel energy is lost to each. Maybe the best way to be stealthy would be to avoid chemical rocketry with hot exhaust trails, and instead use something like a mass driver that could fling a stream of cooled pellets backwards at high velocity.) 
So using the above formula for $E$, the heat generated $Q$ would be approximately: $Q = ( \frac{1 - \eta_{int}}{\eta_{int}}) \frac{1}{2}m_1 (e^{\Delta v / v_e} - 1)v_e^2$ If the heat battery has mass $m_b$ and specific heat $c$, then rearranging the formula here, we can see that absorbing heat $Q$ will cause a temperature change $\Delta T$ of: $\Delta T = \frac{Q}{c m_b}$ And in the equation for $Q$, we can replace the final mass after fuel is expended, $m_1$, with $m_b + m_p$, where $m_b$ is again the heat battery mass and $m_p$ is the remaining payload mass (weapons etc.). Then combining the equations gives: $\Delta T = ( \frac{1 - \eta_{int}}{\eta_{int}}) \frac{1}{2 c m_b}(m_b + m_p) (e^{\Delta v / v_e} - 1)v_e^2$ With some algebra you can solve this for the ratio of the heat battery mass $m_b$ to the remaining payload mass $m_p$: $m_b / m_p = \frac{( \frac{1 - \eta_{int}}{\eta_{int}}) \frac{1}{2 c } (e^{\Delta v / v_e} - 1)v_e^2 }{\Delta T \, - \, [( \frac{1 - \eta_{int}}{\eta_{int}}) \frac{1}{2 c } (e^{\Delta v / v_e} - 1)v_e^2 ]}$ The part to note is the denominator, which goes to zero if $\Delta T = [( \frac{1 - \eta_{int}}{\eta_{int}}) \frac{1}{2 c } (e^{\Delta v / v_e} - 1)v_e^2 ]$, which would make $m_b$ infinite; and since $m_b$ can't be negative either, that means for a physically realistic solution you must satisfy $\Delta T > [( \frac{1 - \eta_{int}}{\eta_{int}}) \frac{1}{2 c } (e^{\Delta v / v_e} - 1)v_e^2 ]$, which can be rearranged as: $\Delta v < v_e \ln [(\Delta T (\frac{\eta_{int}}{1 - \eta_{int}}) \frac{2c}{v_e^2}) + 1]$ You can plug some numbers into this equation to get some sense of the limitations it puts on any such system. For example, say our heat battery starts off at 0 K, and its temperature can increase up to 1000 K before the insulation can no longer keep a system that hot hidden from the outside, so $\Delta T$ = 1000 K. 
And say the specific heat $c$ is 0.9 kJ/(kg K), the same as that of the tiles on the space shuttle at 400 K according to this, which converted into SI units becomes 900 J/(kg K). And suppose $\eta_{int}$ is 0.8, which would be extremely good according to the table here ($\eta_{int}$ = 1 would mean no energy lost to heat at all), which would make $(\frac{\eta_{int}}{1 - \eta_{int}})$ equal to 4. Finally, suppose the effective exhaust velocity $v_e$ is 2,500 m/s, about the same as a typical solid rocket according to the table in the "examples" section of the specific impulse wiki article. With these numbers, the formula tells us that $\Delta v$ cannot exceed 2500*ln(1000*4*(2*900)/(2500)^2 + 1), plugging that into the calculator here gives a maximum $\Delta v$ of about 1916 m/s, just slightly under the amount of fuel needed to achieve escape velocity from the moon, and equivalent in fuel use to about 196 seconds of 1G acceleration. That doesn't seem like nearly enough for hitting a target in space that may be making unpredictable changes in its own velocity to confound possible pursuers even if it can't see them yet, and with the distances involved being very large. You can change some of those numbers and plug the altered formula into the calculator to see the effects, though.
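The arithmetic above is easy to check directly. Here is a minimal Python sketch of my own, using the illustrative numbers assumed in the text ($\eta_{int}$ = 0.8, $c$ = 900 J/(kg K), $\Delta T$ = 1000 K, $v_e$ = 2500 m/s):

```python
import math

# Assumed values from the text (illustrative, not definitive)
eta = 0.8       # internal efficiency
c = 900.0       # specific heat of the heat battery, J/(kg K)
dT = 1000.0     # allowed temperature rise of the battery, K
ve = 2500.0     # effective exhaust velocity, m/s

# Maximum stealthy delta-v: dv < ve * ln(dT * (eta/(1-eta)) * 2c/ve^2 + 1)
dv_max = ve * math.log(dT * (eta / (1 - eta)) * 2 * c / ve**2 + 1)
print(round(dv_max))  # about 1916 m/s

def battery_to_payload_ratio(dv):
    """m_b / m_p from the derivation above; diverges as dv approaches dv_max."""
    A = ((1 - eta) / eta) * 0.5 * (math.exp(dv / ve) - 1) * ve**2 / c
    return A / (dT - A)
```

Plugging smaller $\Delta v$ values into `battery_to_payload_ratio` shows how quickly the battery mass grows as the budget approaches the limit.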
I’ve had the following question for a while: How do I create a mapping of keys to values where the keys are regular expressions, and two regular expressions are considered equivalent if they correspond to the same language? An example of why you might want to do this is e.g. when constructing a minimal deterministic finite automaton for a regular language you end up labelling states by regular expressions that represent the language matched when starting from that state. In order for the automaton to be minimal you need to have any two equivalent regular expressions correspond to the same state, so you need a way of taking a new regular expression and finding out which state it should go to. It’s easy (if potentially expensive) to test regular expression equivalence once you know how, so the naive implementation of this is just to do a linear scan of all the regular expressions you’ve found so far. It’s O(n) lookup but it’s at least an existence proof. In the past I’ve ended up implementing a crude hash function for regular languages and then just used a hash table. It works, but collisions are common unless you’re quite clever with your hashing, so it doesn’t work well. But it turns out that there is a better way! Rather than using hashed data structures you can use ordered ones, because it turns out that there is a natural and easy to compute (or at least not substantially harder than testing equivalence) total ordering over the set of regular languages. That way is this: If you have two regular languages \(L\) and \(M\) that are not equivalent, there is some \(x \in L \triangle M\), the symmetric difference. That is, we can find an \(x\) which is in one but not the other. Let \(x\) be the shortlex minimal such word (i.e. the lexicographically first word amongst those of minimal length). Then \(L < M\) if \(x \in L\), else \(M < L\). 
The work in the previous post on regular language equivalence is thus enough to calculate the shortlex minimal element of an inequivalent pair of languages (though I still don’t know if the faster of the two algorithms gives the minimal one. But you can use the fast algorithm for equivalence checking and then the slightly slower algorithm to get a minimal refutation), so we can readily compute this ordering between two regular expressions. This, combined with any sort of ordered collection type (e.g. a balanced binary tree of some sort) gives us our desired mapping. But why does this definition provide a total order? Well, consider the enumeration of all words in increasing shortlex order as \(w_0, \ldots, w_n, \ldots\). Let \(l_n = 1\) if \(w_n \in L\), else \(l_n = 0\). Define \(m_n\) similarly for \(M\). Then the above definition is equivalent to the reverse of the lexicographical ordering between \(l\) and \(m\)! If \(w_k\) is the smallest word in the symmetric difference then \(k\) is the first index at which \(l\) and \(m\) differ. If \(w_k \in L\) then \(l_k = 1\) and \(m_k = 0\), so \(l > m\), and vice versa. The lexicographical order is a total order, and the reverse of a total order is a total order, so the above definition is also a total order. This definition has a number of nice properties: Any language containing the empty word sorts before any language that doesn’t The function \(L \to \overline{L}\) is order reversing. \(L \cap M \leq L, M \leq L \cup M\) I originally thought that union was coordinate-wise monotonic, but it’s not. Suppose we have four words \(a < b < c < d\), and consider the languages \(L = \{a, d\}, M = \{b, c\}\). Then \(L < M\) because the deciding value is \(a\). But now consider \(P = \{a, b\}\). Then \(L \cup P > M \cup P\) because the deciding element now ends up being \(c\), which is in \(M\). 
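To make the ordering concrete, here is a small sketch of my own (not code from the post): treat each language as a membership predicate and enumerate words in shortlex order until the first disagreement. For genuinely regular languages the enumeration bound would come from the size of the product automaton, which guarantees any disagreement shows up among short words; here the bound is just a parameter.

```python
import re
from itertools import product

def language_cmp(in_L, in_M, alphabet, max_len):
    """Return -1 if L < M, 1 if L > M, 0 if no difference found up to max_len.
    L < M means the shortlex-least word of the symmetric difference lies in L."""
    for n in range(max_len + 1):
        # words of length n, generated in lexicographic order
        for letters in product(sorted(alphabet), repeat=n):
            w = "".join(letters)
            if in_L(w) != in_M(w):
                return -1 if in_L(w) else 1
    return 0

# Example: "a*" matches the empty word, "aa*" does not, so L("a*") sorts first
L = lambda w: re.fullmatch("a*", w) is not None
M = lambda w: re.fullmatch("aa*", w) is not None
print(language_cmp(L, M, "ab", 4))  # -1
```

A comparison function like this is exactly what a balanced search tree needs, which is the "ordered collection type" route described above.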
I’ve yet to try this in practice, so it might turn out that there are some interestingly pathological failure modes for this comparison function, but I don’t think there are likely to be any that aren’t also present in testing regular expression equivalence itself. Another open question here is which sorted map data structure to use? The comparison is relatively expensive, so it might be worth putting a bit of extra effort in to balance it. As such an AVL tree might be a fairly reasonable choice. I’m not sure. Want more blog posts like this? Join the 30 others who are supporting my writing on Patreon!
Let $N = {q^k}{n^2}$ be an odd perfect number given in Eulerian form (i.e., $q$ is prime with $\gcd(q,n)=1$ and $q \equiv k \equiv 1 \pmod 4$). (That is, $2N=\sigma(N)$ where $\sigma$ is the classical sum-of-divisors function.) Since $\gcd(q^k,\sigma(q^k))=1$, it follows that $q \mid \sigma(n^2)$. My question is this: Is the Euler prime $q$ of an odd perfect number a repunit, or otherwise? Is there a research work out there that tackles this particular question? Thanks!
Using the Ratio Test, I have to find whether $$ \sum_{n=1}^\infty \frac{\cos(n\pi/3)}{n!} $$ converges or diverges. The back of the book says that the sum is absolutely convergent. My work: $a_n = \dfrac{\cos(n\pi/3)}{n!}$, $a_{n+1} = \dfrac{\cos((n+1)\pi/3)}{(n+1)!}$ \begin{align} &\lim_{n\rightarrow \infty} \left|\frac{a_{n+1}}{a_n}\right| \\[6pt] \implies&\lim_{n\rightarrow \infty} \left|\frac{\dfrac{\cos((n+1)\pi/3)}{(n+1)!}}{\dfrac{\cos(n\pi/3)}{n!}}\right| \\[12pt] \implies&\lim_{n\rightarrow \infty} \left|\frac{\cos((n+1)\pi/3) \cdot n!}{\cos(n\pi/3)\cdot(n+1)!}\right| \\[6pt] \implies&\lim_{n\rightarrow \infty} \left|\frac{\cos((n+1)\pi/3)}{\cos(n\pi/3)\cdot(n+1)}\right| \\ \end{align} Now this is where I am stuck. I don't know how to find the limit for the $\cos$ terms. I tried using the identity $\cos(x+y) = \cos(x)\cos(y) - \sin(x)\sin(y)$ but it didn't yield anything useful (maybe, I should have tried harder?). I tried looking at this question but it didn't help much. Any hints would be appreciated. Thanks for your time!
I have a two-dimensional function represented by a set of $N$ data triples $f_i(\omega_i,\beta_i)=\psi_i$ with $i \in [1,N]$. How can I do the following: First, interpolate to a larger set of data points using Radial Basis Functions, i.e., increase the number of interpolating points so that $g_k \equiv f_i$ with $k \in [1,M]$, $M > N$. Then approximate $g$ using Radial Basis Functions or a similar algorithm for reconstructing the function.
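One way to approach the first step (a sketch of my own, assuming a Gaussian kernel; in practice a library routine such as SciPy's `RBFInterpolator` would be the usual choice): fit weights so the interpolant passes through the $N$ data triples, then evaluate it at as many new $(\omega, \beta)$ points as desired.

```python
import math

def solve(A, b):
    """Gaussian elimination with partial pivoting for the small RBF system."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def rbf_fit(points, values, eps=1.0):
    """Weights w_j so that g(x) = sum_j w_j * exp(-(eps*|x - p_j|)^2) interpolates."""
    phi = lambda p, q: math.exp(-(eps * math.dist(p, q)) ** 2)
    A = [[phi(p, q) for q in points] for p in points]
    return solve(A, values)

def rbf_eval(x, points, weights, eps=1.0):
    return sum(w * math.exp(-(eps * math.dist(x, p)) ** 2)
               for w, p in zip(weights, points))
```

Evaluating `rbf_eval` on a denser grid of $(\omega, \beta)$ points produces the enlarged data set $g_k$; the second step is then another fit of the same form to those $M$ points.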
This is Velleman's exercise 3.5.4: Suppose $A \cap C \subseteq B \cap C$ and $A \cup C\subseteq B \cup C$. Prove that $A \subseteq B$ This is the proof given by the book (which I understand completely): Proof. Suppose x ∈ A. We now consider two cases: Case 1. $x \in C$. Then $x ∈ A \cap C$, so since $A \cap C\subseteq B \cap C, x \in B \cap C$, and therefore $x \in B$. Case 2. $x \notin C$. Since $x \in A, x \in A \cup C$, so since $A \cup C\subseteq B \cup C$, $x \in B \cup C$. But $x \notin C$, so we must have $x \in B$. Thus, $x \in B$, and since x was arbitrary, $A \subseteq B$. I was wondering if one could write a proof like the one below: Proof. Let x be an arbitrary element of A. Then by $A \cup C \subseteq B \cup C$, we have either $x \in B$ or $x \in C$. Now we consider these two cases: Case 1. x is an element of B. Case 2. x is an element of C. Since in one of the cases $x \in B$ and since x was arbitrary, $A \subseteq B$.
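Not a substitute for either proof, but the statement itself is easy to sanity-check by brute force over all subsets of a small universe (my own illustration):

```python
from itertools import combinations

U = [0, 1, 2]
subsets = [set(c) for n in range(len(U) + 1) for c in combinations(U, n)]

# If A∩C ⊆ B∩C and A∪C ⊆ B∪C for some C, then A ⊆ B
for A in subsets:
    for B in subsets:
        for C in subsets:
            if A & C <= B & C and A | C <= B | C:
                assert A <= B
```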
CLAIM : Prove that LCM$(n/a , n/b) = n$ if $(a,b)=1$, where $a$ and $b$ are divisors of $n$. Let us assume the factorization $n = p_1^{\alpha_1} p_2^{\alpha_2} \cdots p_k^{\alpha_k}$, $a = q_1^{\beta_1} q_2^{\beta_2} \cdots q_k^{\beta_k}$ and $b = r_1^{\gamma_1} r_2^{\gamma_2} \cdots r_k^{\gamma_k}$. Please note that both $a$ and $b$ divide $n$. Case 1: if both $a$ and $b$ divide $n$, then I am getting that $n/a$ will divide $n$ and $n/b$ will also divide $n$, but why will it be the minimum multiple?
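A quick exhaustive check of the claim (my own sketch, not part of the question): for each prime $p$, coprimality gives $\min(v_p(a), v_p(b)) = 0$, so $\max(v_p(n/a), v_p(n/b)) = v_p(n)$, and the brute-force loop below confirms this for all coprime divisor pairs of small $n$.

```python
from math import gcd

def lcm(x, y):
    return x * y // gcd(x, y)

for n in range(1, 200):
    divisors = [d for d in range(1, n + 1) if n % d == 0]
    for a in divisors:
        for b in divisors:
            if gcd(a, b) == 1:
                # coprimality kills the min of the exponents, so the lcm is n
                assert lcm(n // a, n // b) == n
```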
Bayes’ Rule Section Consider any two events A and B. To find \(P(B|A)\), the probability that B occurs given that A has occurred, Bayes’ Rule states the following: \(P(B|A) = \dfrac{P(A \text{ and } B)}{P(A)}\) This says that the conditional probability is the probability that both A and B occur divided by the unconditional probability that A occurs. This is a simple algebraic restatement of a rule for finding the probability that two events occur together, which is \(P(A\ and\ B) = P(A)P(B|A)\). Bayes’ Rule Applied to the Classification Problem Section We are interested in \(P(\pi_{i} | \boldsymbol{x})\), the conditional probability that an observation came from population \(\pi_{i}\) given the observed values of the multivariate vector of variables \(\boldsymbol{x}\). We will classify an observation to the population for which the value of \(P(\pi_{i} | \boldsymbol{x})\) is greatest. This is the most probable group given the observed values of \(\boldsymbol{x}\). Suppose that we have \(g\) populations (groups) and that the \(i^{th}\) population is denoted as \(\pi_{i}\). Let \(p_{i}=P(\pi_{i})\) be the probability that a randomly selected observation is in population \(\pi_{i}\). Let \(f(\boldsymbol{x} | \pi_{i})\) be the conditional probability density function of the multivariate set of variables \(\boldsymbol{x}\), given that the observation came from population \(\pi_{i}\). Note! We have to be careful about the word probability in conjunction with our observed vector \(\mathbf{x}\). A probability density function for continuous variables does not give a probability, but instead gives a measure of “likelihood.” Using the notation of Bayes’ Rule above, event A = observing the vector \(\boldsymbol{x}\) and event B = observation came from population \(\pi_{i}\). Thus our probability of interest can be found as... 
\(P(\text{member of } \pi_i | \text{ we observed } \mathbf{x}) = \dfrac{P(\text{member of } \pi_i \text{ and we observe } \mathbf{x})}{P(\text{we observe } \mathbf{x})}\) The numerator of the expression just given is the likelihood that a randomly selected observation is both from population \(\pi_{i}\) and has the value \(\boldsymbol{x}\). This likelihood = \(p_{i}f(\boldsymbol{x}| \pi_{i})\). The denominator is the unconditional likelihood (over all populations) that we could observe \(\boldsymbol{x}\). This likelihood = \(\sum_{j=1}^{g} p_j f(\mathbf{x}|\pi_j)\) Thus the posterior probability that an observation is a member of population \(\pi_{i}\) is \(p(\pi_i|\mathbf{x}) = \dfrac{p_i f(\mathbf{x}|\pi_i)}{\sum_{j=1}^{g}p_j f(\mathbf{x}|\pi_j)}\) The classification rule is to assign observation \(\boldsymbol{x}\) to the population for which the posterior probability is the greatest. The denominator is the same for all posterior probabilities (for the various populations) so it is equivalent to say that we will classify an observation to the population for which \(p_{i}f(\boldsymbol{x} | \pi_{i})\) is greatest. Two Populations Section With only two populations we can express a classification rule in terms of the ratio of the two posterior probabilities. Specifically we would classify to population 1 when \(\dfrac{p_1 f(\mathbf{x}|\pi_1)}{p_2 f(\mathbf{x}|\pi_2)} > 1\) This can be rewritten to say that we classify to population 1 when \(\dfrac{ f(\mathbf{x}|\pi_1)}{ f(\mathbf{x}|\pi_2)} > \dfrac{p_2}{p_1}\) Decision Rule Section We are going to classify the sample unit or subject into the population \(\pi_{i}\) that maximizes the posterior probability \(p(\pi_i|\mathbf{x})\), that is, the population that maximizes \(f(\mathbf{x}|\pi_i)p_i\) We are going to calculate the posterior probabilities for each of the populations. Then we are going to assign the subject or sample unit to that population that has the highest posterior probability. 
Ideally that posterior probability is going to be greater than a half; the closer to 100%, the better! The denominator that appears above does not depend on the population because it involves summing over all the populations, so all we really need to do is assign the observation to the population with the largest value of the product \(f(\mathbf{x}|\pi_i)p_i\), or equivalently we can maximize the log of that product, \(\log\left(f(\mathbf{x}|\pi_i)p_i\right)\). A lot of times it is easier to work with the log.
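The decision rule above is easy to sketch in code. This is my own illustration (not from the notes), assuming univariate normal densities for \(f(x | \pi_i)\):

```python
import math

def normal_pdf(x, mu, sigma):
    return math.exp(-((x - mu) ** 2) / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))

def posterior(x, priors, params):
    """Posterior p(pi_i | x) = p_i f(x|pi_i) / sum_j p_j f(x|pi_j)."""
    scores = [p * normal_pdf(x, mu, s) for p, (mu, s) in zip(priors, params)]
    total = sum(scores)  # the common denominator, same for every group
    return [s / total for s in scores]

def classify(x, priors, params):
    # assign to the group with the largest posterior (equivalently, largest p_i f(x|pi_i))
    post = posterior(x, priors, params)
    return max(range(len(post)), key=post.__getitem__)

# Two groups: pi_1 ~ N(0, 1), pi_2 ~ N(3, 1), equal priors
priors = [0.5, 0.5]
params = [(0.0, 1.0), (3.0, 1.0)]
```

Since the denominator cancels in the argmax, `classify` would give the same answer if it compared the unnormalized scores directly, which is exactly the point made in the notes.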
How to Use the Beam Envelopes Method for Wave Optics Simulations In the wave optics field, it is difficult to simulate large optical systems in a way that rigorously solves Maxwell’s equations. This is because the waves that appear in the system need to be resolved by a sufficiently fine mesh. The beam envelopes method in the COMSOL Multiphysics® software is one option for this purpose. In this blog post, we discuss how to use the Electromagnetic Waves, Beam Envelopes interface and handle its restrictions. Comparing Methods for Solving Large Wave Optics Models In electromagnetic simulations, the wavelength always needs to be resolved by the mesh in order to find an accurate solution of Maxwell’s equations. This requirement makes it difficult to simulate models that are large compared to the wavelength. There are several methods for stationary wave optics problems that can handle large models. These methods include the so-called diffraction formulas, such as the Fraunhofer, Fresnel-Kirchhoff, and Rayleigh-Sommerfeld diffraction formulas, and the beam propagation method (BPM), such as paraxial BPM and the angular spectrum method (Ref. 1). Most of these methods use certain approximations to the Helmholtz equation. These methods can handle large models because they are based on the propagation method that solves for the field in a plane from a known field in another plane. So you don’t have to mesh the entire domain, you just need a 2D mesh for the desired plane. Compared to these methods, the Electromagnetic Waves, Beam Envelopes interface in COMSOL Multiphysics (which we will refer to as the Beam Envelopes interface for the rest of the blog post) solves for the exact solution of the Helmholtz equation in a domain. It can handle large models; i.e., the meshing requirement can be significantly relaxed if a certain restriction is satisfied. A beam envelopes simulation for a lens with a millimeter-range focal length for a 1-um wavelength beam. 
We discuss the Beam Envelopes interface in more detail below. Theory Behind the Beam Envelopes Interface Let’s take a look at the math that the Beam Envelopes interface computes “under the hood”. If you add this interface to a model and click the Physics Interface node and change Type of phase specification to User defined, you’ll see the following in the Equation section: Here, \bf E1 is the dependent variable that the interface solves for, called the envelope function. In the phasor representation of a field, \bf E1 corresponds to the amplitude and \phi_1 to the phase, i.e., The first equation, the governing equation for the Beam Envelopes interface, can be derived by substituting the second definition of the electric field into the Helmholtz equation. If we know \phi_1, the only unknown is \bf E1 and we can solve for it. The phase, \phi_1, needs to be given a priori in order to solve the problem. With the second equation, we assume a form such that the fast oscillation part, the phase, can be factored out from the field. If that’s true, the envelope \bf E1 is “slowly varying”, so we don’t need to resolve the wavelength. Instead, we only need to resolve the slow wave of the envelope. Because of this process, simulating large-scale wave optics problems is possible on personal computers. A common question is: “When do you want the envelope rather than the field itself?” Lens simulation is one example. Sometimes you may need the intensity rather than the complex electric field. Actually, the square of the norm of the envelope gives the intensity. In such cases, it suffices to get the envelope function. What Happens If the Phase Function Is Not Accurately Known? The math behind the beam envelope method introduces more questions: What if the phase is not accurately known? Can we use the Beam Envelopes interface in such cases? Are the results correct? To answer these questions, we need to do a little more math. 
1D Example Let’s take the simplest test case: a plane wave, Ez = \exp(-i k_0 x), where k_0 = 2\pi / \lambda_0 for wavelength \lambda_0 = 1 um, propagating in a rectangular domain of 20 um length. (We intentionally use a short domain for illustrative purposes.) The out-of-plane wave enters from the left boundary and exits through the right boundary without reflection. This can be simulated in the Beam Envelopes interface by adding a Matched boundary condition with excitation on the left and without excitation on the right, while adding a Perfect Magnetic Conductor boundary condition on the top and bottom (meaning we don’t care about the y direction). The correct setting for the phase specification is shown in the figure below. We have the answer Ez = \exp(-i k_0 x), knowing that the correct phase function is k_0 x or the wave vector is (k_0,0) a priori. Substituting the phase function in the second equation, we inversely get E1z = 1, the constant function. How many mesh elements do we need to resolve a constant function? Only one! (See this previous blog post on high-frequency modeling.) The following results show the envelope function \bf E1 and the norm of \bf E, ewbe.normE, which is equal to |{\bf E1}|. Here, we can see that we get the correct envelope function, constant one, if we give the exact phase function, for any number of mesh elements, as expected. For confirmation purposes, the phase of \bf E1z, arg(E1z), is also plotted. It is zero, also as expected. Now, let’s see what happens if our guess for the phase function is a little bit off — say, (0.95k_0,0) instead of the exact (k_0,0). What kind of solutions do we get? Let’s take a look: What we see here for the envelope function is the so-called beating. It’s obvious that everything depends on the mesh size. To understand what’s going on, we need a pencil, paper, and patience. We knew the answer was Ez = \exp(-i k_0 x), but we had “intentionally” given an incorrect estimate in the COMSOL® software. 
Substituting the wrong phase function in the second equation, we get \exp(-i k_0 x)={\bf E1z} \exp(-0.95i k_0 x). This results in {\bf E1z} = \exp(-0.05i k_0 x), which is no longer constant one. This is a wave with a wavelength of \lambda_b= 2\pi/0.05k_0 = 20 um, which is called the beat wavelength. Let’s take a look at the plot above for six mesh elements. We get exactly what is expected (red line), i.e., {\bf E1z} = \exp(-0.05i k_0 x). The plot automatically takes the real part, showing {\bf E1z} = \cos(-0.05 k_0 x). The plots for the lower resolutions still show an approximate solution of the envelope function. This is as expected for finite element simulations: a coarser mesh gives a more approximate result. This shows that if we make a wrong guess for the phase function, we get a wrong (beat-convoluted) envelope function. Because of the wrong guess, the envelope function acquires the beat phase (green line), which is -0.05 k_0 x. What about the norm of \bf E? Look at the blue line in the plots above. It looks like the COMSOL Multiphysics software generated a correct solution for ewbe.normE, which is constant one. Let’s calculate: Substituting both the wrong (analytical) phase function and the wrong (beat-convoluted) envelope function in the second equation, we get {\bf Ez} = \exp(-0.05i k_0 x) \times \exp(-0.95i k_0 x) = \exp(-i k_0 x), which is the correct fast field! If we take a norm of \bf E, we get a correct solution, constant one. This is what we wanted. Note that we can’t display \bf E itself because the domain can be too large, but we can find \bf E analytically and display the norm of \bf E with a coarse mesh. This is not a trick. Instead, we see that if the phase function is off, the envelope function will also be off, since it becomes beat-convoluted. However, the norm of the electric field can still be correct. 
Therefore, it is important that the beat-convoluted envelope function be correctly computed in order to get the correct electric field. The above plots clearly show that. The six-element mesh case gives the completely correct electric field norm because it fully resolves the beat-convoluted envelope function. The other meshes give an approximate solution to the beat-convoluted envelope function depending on the mesh size. They also do so for the field norm. This is a general consequence that holds true for arbitrary cases. No matter what phase function we use in COMSOL Multiphysics, we are okay as long as we correctly solve the first equation for \bf E1 and as long as the phase function is continuous over the domain. When there are multiple materials in a domain, the continuity of the phase function is also critical to the solution accuracy. We may discuss this in a future blog post, but it is also mentioned in this previous blog post on high-frequency modeling. 2D Example So far, we have discussed a scalar wave number. More generally, the phase function is specified by the wave vector. When the wave vector is not guessed correctly, it will have vector-valued consequences. Suppose we have the same plane wave from the first example, but we make a wrong guess for the phase, i.e., k_0(x \cos \theta + y \sin \theta) instead of k_0 x . In this case, the wave number is correct but the wave vector is off. This time, the beating takes place in 2D. Let’s start by performing the same calculations as the 1D example. We have \exp(-i k_0 x)= {\bf E1z}(x,y) \exp(-i k_0 (x \cos \theta+y \sin \theta) ) and the envelope function is now calculated to be {\bf E1z}(x,y) = \exp(-i k_0 (x (1-\cos \theta) -y \sin \theta) ) , which is a tilted wave propagating to direction (1-\cos \theta, -\sin \theta) , with the beat wave number k_b = 2 k_0 \sin (\theta/2) and the beat wavelength \lambda_b=\lambda_0/(2\sin (\theta/2)). 
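The beat wavelengths quoted in these examples follow directly from wave-vector differences. A small sketch of my own, using \lambda_0 = 1 um as in the text:

```python
import math

lam0 = 1.0                       # vacuum wavelength, um
k0 = 2 * math.pi / lam0

# 1D example: guessed phase 0.95*k0 instead of k0;
# the envelope oscillates at the difference wave number
lam_beat_1d = 2 * math.pi / abs(k0 - 0.95 * k0)   # 20 um, as in the text

# 2D example: correct wave number, but direction tilted by theta;
# |k_true - k_guess| = 2*k0*sin(theta/2) for a tilt of theta
theta = math.radians(15)
k_beat_2d = 2 * k0 * math.sin(theta / 2)
lam_beat_2d = 2 * math.pi / k_beat_2d             # = lam0 / (2*sin(theta/2))
```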
The following plots are the results for θ = 15° for a domain of 3.8637 um x 29.348 um for different max mesh sizes. The same boundary conditions are given as the previous 1D example case. The only difference is that the incident wave on the left boundary is {\bf E1z}(0,y) = \exp(i k_0 y \sin \theta) . (Note that we have to give the corresponding wrong boundary condition because our phase guess is wrong.) In the result for the finest mesh (rightmost), we can confirm that \bf E1z is computed just like we analyzed in the above calculation and the norm of \bf Ez is computed to be constant one. These results are consistent with the 1D example case. The electric field norm (top) and the envelope function (bottom) for the wrong phase function k_0(x \cos\theta +y \sin\theta ), computed for different mesh sizes. The color range represents the values from -1 to 1. Simulating a Lens Using the Beam Envelopes Interface The ultimate goal here is to simulate an electromagnetic beam through optical lenses in a millimeter-scale domain with the Beam Envelopes interface. How can we achieve this? We already discussed how to compute the right solution. The following example is a simulation for a hard-apertured flat top incident beam on a plano-convex lens with a radius of curvature of 500 um and a refractive index of 1.5 (approximately 1 mm focal length). Here, we use \phi_1 = k_0 x, which is not accurate at all. In the region before the lens, there is a reflection, which creates an interference. In the lens, there are multiple reflections. After the lens, the phase is spherical so that the beam focuses into a spot. So this phase function is far different from what is happening around the lens. Still, we have a clue. If we plot \bf E1z, we see the beating. Plot of \bf E1z. The inset shows the finest beat wavelength inside the lens. As can be seen in the plot, a prominent beating occurs in the lens (see the inset). 
Actually, the finest beat wavelength is \lambda_0/2 in front of the lens, which we can prove by performing the same calculations as in the previous examples. This finest beating is due to the interference between the incident beam and the reflected beam, but we can ignore it because it doesn't contribute to the forward propagation. We can see that the mesh doesn't resolve the beating before the lens, but let's ignore this for now. For n = 1.5, the beat wavelength in the lens is 2\lambda_0/5 for the backward beam and 2\lambda_0 for the forward beam, which we can also prove in the same way as in the previous examples. Again, we ignore the backward beam. In the plot, what's visible is the 2\lambda_0 beating for the forward beam. (The backward beam is only a fraction, approximately 4% of the incident beam for n = 1.5, so it's not visible.) The following figure shows the mesh resolving the beat inside the lens with 10 mesh elements. The beat wavelength inside the lens. The mesh resolves the beat with 10 mesh elements. Other than the beating for the propagating beam in the lens, the beating in the subsequent air domain is pretty large, so we can use a coarse mesh there. This may not hold for faster lenses, which have a more rapid quadratic phase and can have a very short beat wavelength. In this example, we only need a finer mesh in the lens domain to resolve the fastest beating. The computed field norm is shown at the top of this blog post. To verify the result, we can compute the field at the lens exit surface by using the Frequency Domain interface and then use the Fresnel diffraction formula to calculate the field at the focus. The result for the field norm agrees very well. Comparison between the Beam Envelopes interface and the Fresnel diffraction formula. The mesh resolves the beat inside the lens with 10 mesh elements. The following comparison shows the mesh size dependence.
We get a pretty good result with our standard recommendation, \lambda_b/6, which is equal to \lambda_0/3. This makes it easier to mesh the lens domain. Mesh size dependence of the field norm at the focus. As of version 5.3a of the COMSOL® software, the Fresnel Lens tutorial model includes a computation with the Beam Envelopes interface. Fresnel lenses are typically extremely thin (on the order of a wavelength). Even though there is diffraction in and around the lens surface discontinuities, the fine mesh around the lens part does not significantly impact the total number of mesh elements. Concluding Remarks In this blog post, we discussed what the Beam Envelopes interface does "under the hood" and how we can get accurate solutions for wave optics problems. Even if we get beating, the beat wavelength can be much longer than the wavelength, which makes it possible to simulate large optical systems. Although it seems tedious to check the mesh size needed to resolve beating, this is not extra work required only for the Beam Envelopes interface. When you use the finite element method, you always need to check the mesh size dependence to obtain accurately computed solutions. Next Steps Try it yourself: Download the file for the millimeter-range focal length lens by clicking the button below.
Discriminant analysis is a 7-step procedure. Step 1: Collect training data Training data are data with known group memberships. Here, we actually know which population contains each subject. For example, in the Swiss Bank Notes data, we actually know which of these are genuine notes and which are counterfeit examples. Step 2: Prior Probabilities The prior probability \(p_i\) represents the expected proportion of the population that belongs to population \(\pi_{i}\). There are three common choices: Equal priors: \(\hat{p}_i = \frac{1}{g}\) This is useful if we believe that all of the population sizes are equal Arbitrary priors selected according to the investigator's beliefs regarding the relative population sizes. Note! We require: \(\hat{p}_1 + \hat{p}_2 + \dots + \hat{p}_g = 1\) Estimated priors: \(\hat{p}_i = \dfrac{n_i}{N}\) where \(n_{i}\) is the number of training observations from population \(i\) and \(N\) is the total number of training observations. Step 3: Bartlett's test Use Bartlett's test to determine if the variance-covariance matrices are homogeneous for all populations involved. The result of this test determines whether to use Linear or Quadratic Discriminant Analysis: Case 1: Linear Linear discriminant analysis is for homogeneous variance-covariance matrices: \(\Sigma_1 = \Sigma_2 = \dots = \Sigma_g = \Sigma\) In this case the variance-covariance matrix does not depend on the population. Case 2: Quadratic Quadratic discriminant analysis is used for heterogeneous variance-covariance matrices: \(\Sigma_i \ne \Sigma_j\) for some \(i \ne j\) This allows the variance-covariance matrices to depend on the population. Note! We do not discuss testing whether the means of the populations are different. If they are not, there is no basis for discriminant analysis. Step 4: Estimate the parameters of the conditional probability density functions \(f ( \mathbf{X} |\pi_{i})\). Here, we shall make the following standard assumptions: The data from group \(i\) has common mean vector \(\boldsymbol{\mu_i}\) The data from group \(i\) has common variance-covariance matrix \(\Sigma\).
Independence: The subjects are independently sampled. Normality: The data are multivariate normally distributed. Step 5: Compute discriminant functions. This is the rule used to classify a new object into one of the known populations. Step 6: Use cross validation to estimate misclassification probabilities. As in all statistical procedures, it is helpful to use diagnostic procedures to assess the efficacy of the discriminant analysis. We use cross-validation to assess the classification probabilities. Typically you will have some prior rule as to what is an acceptable misclassification rate. Those rules might involve things like, "what is the cost of misclassification?" This could come up in a medical study where you might be able to diagnose cancer. There are really two alternative costs. The cost of misclassifying someone as having cancer when they don't: this could cause a certain amount of emotional grief! There is also the alternative cost of misclassifying someone as not having cancer when in fact they do have it. The cost here is obviously greater if early diagnosis improves cure rates. Step 7: Classify observations with unknown group memberships. The procedure described above assumes that the unit or subject being classified actually belongs to one of the considered populations. If you have a study where you look at two species of insects, A and B, and the insect to classify actually belongs to a third species C, then it will necessarily be misclassified as belonging to either A or B.
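Step 2 with estimated priors is simple enough to sketch in a few lines of Python; the group counts below are made up purely for illustration:

```python
# Step 2 with estimated priors p_i = n_i / N; hypothetical group sizes.
counts = {"a": 30, "b": 50, "c": 20}           # n_i per population
N = sum(counts.values())                        # total training sample size
priors = {g: n / N for g, n in counts.items()}
assert abs(sum(priors.values()) - 1.0) < 1e-9   # priors must sum to 1
print(priors)  # {'a': 0.3, 'b': 0.5, 'c': 0.2}
```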
Definition:Galois Connection Definition Let $\left({S, \preceq}\right)$, $\left({T, \precsim}\right)$ be ordered sets. Let $g: S \to T$, $d: T \to S$ be mappings. Then $\left({g, d}\right)$ is a Galois connection if and only if: $g$ and $d$ are increasing mappings and $\forall s \in S, t \in T: t \precsim g\left({s}\right) \iff d\left({t}\right) \preceq s$ $g$ is called the upper adjoint and $d$ the lower adjoint of the Galois connection. Source of Name This entry was named for Évariste Galois. Sources 1980: G. Gierz, K.H. Hofmann, K. Keimel, J.D. Lawson, M.W. Mislove and D.S. Scott: A Compendium of Continuous Lattices Mizar article WAYBEL_1:def 10
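As a quick illustration (my example, not from the source entry): for a fixed positive integer, the pair $d(t) = 3t$, $g(s) = \lfloor s/3 \rfloor$ on a finite chunk of the integers with the usual order forms a Galois connection, which a brute-force check of the definition confirms:

```python
# Example ordered sets: S = T = {0, ..., 49} with the usual <=.
# Claimed adjoints (for the fixed integer 3): d(t) = 3*t, g(s) = s // 3.
S = range(50)
T = range(50)
g = lambda s: s // 3   # candidate upper adjoint
d = lambda t: 3 * t    # candidate lower adjoint

# g and d are increasing mappings ...
assert all(g(s1) <= g(s2) for s1 in S for s2 in S if s1 <= s2)
assert all(d(t1) <= d(t2) for t1 in T for t2 in T if t1 <= t2)
# ... and for all s in S, t in T: t <= g(s) iff d(t) <= s.
assert all((t <= g(s)) == (d(t) <= s) for s in S for t in T)
print("(g, d) is a Galois connection on this finite sample")
```

The defining equivalence here boils down to the familiar integer fact that $t \le \lfloor s/3 \rfloor \iff 3t \le s$.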
I'm not strong with electrical engineering and I have prototyped a circuit which works; however, to my chagrin, I don't understand why. I'm using an Arduino digital out pin to power a relay. I have an NPN transistor to switch 5 volts through the relay coil, which is 125 ohms. At first, I just connected the base to the output pin and it worked fine. Then I noticed people tend to add a base resistor, so I stuck a 1K ohm resistor in there and it still worked fine. I've been trying to learn more about base resistors and I feel like I computed that I should have a larger resistor. Is the purpose of the base resistor just to lower the current to prevent wasting energy? Or to prevent generating heat? I don't fully understand why there's a need for a base resistor, and if there is, why the relay in my circuit worked fine without the resistor and also with a smaller resistor. This is the datasheet for my transistor: http://pdf1.alldatasheet.com/datasheet-pdf/view/21675/STMICROELECTRONICS/2N2222.html I don't have a datasheet for the relay (it's part of an old hobby kit . . . you know . . . one of the ones with the breadboard and springs). Here's my current setup: Here are my likely incorrect calculations. I computed the collector current as $$I_{c} = \frac{5V}{125\Omega} = 40mA$$ I felt like, based on the spec, a reasonable \$h_{FE}\$ for 40mA is 75. Thus, $$I_{b (sat)} = \frac{40mA}{75} \approx 0.5mA$$ Given that, I felt like the best fit was \$V_{BE (sat)} = 1.3V\$. Then, I thought I needed to drop 3.7V across the base resistor at 0.5mA. Thus, $$R_{base} = \frac{5V - 1.3V}{0.5mA} = 7400\Omega$$ So, if that's correct, what am I trying to achieve with this resistor? Am I trying to prevent energy loss or heat production? Or, am I trying not to exceed the max base saturation voltage? Given that 7k4 ohm is not a common resistance, should I drop to a 6k8 ohm resistor or go up to an 8k2 ohm resistor?
It seems like 8k2 ohm would keep the base voltage below the saturation max voltage and the 6k8 ohm would keep the base voltage above the max base emitter saturation voltage. Which am I after? Or, I suppose, alternatively, am I completely off track?
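For what it's worth, here is the asker's arithmetic as a small Python sketch, together with the usual rule-of-thumb overdrive check that explains why a 1 kOhm resistor also works. The 5-10x "forced beta" convention is my addition, not something stated in the question:

```python
# Values from the question.
V_supply = 5.0     # V, drives the relay coil and (via the Arduino pin) the base
R_coil = 125.0     # ohm relay coil
h_fe = 75.0        # assumed gain at ~40 mA, read off the datasheet
V_be_sat = 1.3     # V, the base-emitter voltage the asker picked

I_c = V_supply / R_coil            # collector current: 40 mA
I_b_min = I_c / h_fe               # minimum base current: ~0.53 mA
R_base_max = (V_supply - V_be_sat) / I_b_min
print(round(R_base_max))           # ~6938 ohm: largest resistor that still saturates

# Rule of thumb (my addition): for a solid saturated switch, overdrive the
# base by 5-10x ("forced beta" of roughly 10).  That is why 1 kOhm also works:
I_b_1k = (V_supply - 0.7) / 1000.0   # ~4.3 mA with Vbe around 0.7 V
assert I_b_1k > 5 * I_b_min
```

On this reading, the base resistor's job is to set a defined base current: large enough to saturate the transistor, small enough not to stress the driving pin (classic AVR Arduino pins have an absolute maximum around 40 mA) or waste power in the base.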
The hfe of an NPN BJT transistor is given by collector current / base current, so to calculate hfe you need to know the collector current and base current (which can be set using resistors). I have to ask, what is the point of doing this? You can simply calculate the emitter current using Kirchhoff's current rule. This seems pointless to me, since the transistor acts like a resistor between collector and emitter, and unless you know the resistance of that resistor you can't find the current. I see in transistor datasheets a maximum and minimum value of hfe. This leads me to believe that hfe is a constant, and I do not understand the point of a varying constant; I think for a single transistor only a single value of hfe exists, which must be measured with a multimeter, and since the collector current depends on the base current, the collector current can be measured. And also, if the collector current is entirely dependent on the base current (Ic = Ib*hfe), then what is the point of adding a resistor to the collector end of the transistor? Surely there must be some change in current if a greater resistance is added to the collector end. I checked this with a multimeter with a base resistance of 10k ohms and a collector resistance of 330 ohms and I found no change in current when I connect the multimeter in common emitter topology (I think; I connect the emitter to ground and tested the current flowing through the collector). I think there is a big hole in my understanding or a big misunderstanding. Please help. Thank you.
"... since the transistor acts like a resistor between collector and emitter ..." No, not really. The collector of a bipolar transistor acts like a current source (or sink) whose value is determined by the base current and the \$h_{FE}\$ of the device. However, the external circuit can limit the current to something less than this value, in which case the effective \$h_{FE}\$ is lower. "I see in transistor datasheets a maximum and minimum value of hfe." Yes. The actual value varies considerably from device to device, even within the same manufacturing batch, and it also varies somewhat with the operating parameters (voltage, temperature, etc.) of the device. You really can't depend on having a particular (or even a constant) value, so you design your circuits so that they work over a range of values. "... then what is the point of adding a resistor to the collector end of the transistor?" This is part of the circuit design. When you're creating a voltage amplifier, you use the collector current of the transistor to develop the desired voltage across the external resistor. This resistor is called the "load resistor", and it gives you a definite value of output impedance — the transistor by itself has a very high effective output impedance. Example: the collector supply voltage is 9 V, \$h_{FE}\$ = 100, the base supply voltage is 9 V, a resistor at the collector has a resistance of 330 ohms and one at the base has a resistance of 10k ohms; tell me the current at the collector, with steps.
OK, assuming you mean that 9V is applied to the base through a 10K resistor, 9V is applied to the collector through a 330Ω resistor, and that the emitter is grounded, the steps are as follows: The base current is \$I_B = \frac{V_{BB} - V_{BE}}{R_B} = \frac{9.00 V - 0.65 V}{10k \Omega} = 0.835 mA\$ Assuming the transistor is not saturated, the collector current is \$I_C = h_{FE} \cdot I_B = 100 \cdot 0.835 mA = 83.5 mA\$ The voltage across the collector resistor would then be \$I_C \cdot R_C = 83.5 mA \cdot 330 \Omega = 27.6 V\$ Since that value is higher than our supply voltage, the assumption made in the second step must be false — the transistor is saturated. Therefore, the collector current is determined entirely by the collector resistor and the collector supply voltage: \$I_C = \frac{V_{CC} - V_{CE(SAT)}}{R_C} = \frac{9.00 V - 0.3 V}{330 \Omega} = 26.4 mA\$ "This leads me to believe that hfe is a constant and I do not understand the point of a varying constant; I think for a single transistor only a single value of hfe exists" Hfe varies between individual transistors because they cannot all be made identical. Often the manufacturer will make a batch of transistors and then sort them according to gain, giving each group a different suffix or even a completely different code. A single transistor's Hfe will be 'constant' somewhere in the specified range, but also only under certain conditions. It reduces at very low or high current and when the collector voltage is low, and may vary quite strongly with temperature. To get a true picture you should examine the curve traces for Hfe vs Ic and Ic vs Vce. In the example below you can see that Hfe is never truly constant, but for a BC107 it is relatively flat in the 1-10mA range. The groups 'A', 'B', and 'C' are the 3 different gain suffixes. Any individual transistor will have a similar curve, closer to its group than the others.
In the right-hand graph you can see that collector current is almost independent of voltage, except at very low voltage when the transistor is saturated.
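The four steps in the worked answer above can be replayed in a few lines of Python (values taken directly from the example):

```python
# Values from the worked example above.
V_bb, V_cc = 9.0, 9.0
V_be, V_ce_sat = 0.65, 0.3
R_b, R_c, h_fe = 10_000.0, 330.0, 100.0

I_b = (V_bb - V_be) / R_b          # 0.835 mA
I_c_unsat = h_fe * I_b             # 83.5 mA if the transistor were active
V_rc = I_c_unsat * R_c             # ~27.6 V, more than the 9 V supply
assert V_rc > V_cc                 # contradiction: the transistor is saturated
I_c = (V_cc - V_ce_sat) / R_c      # so the resistor sets the current
print(round(I_c * 1000, 1), "mA")  # 26.4 mA
```

The `assert` is the saturation test: when the hypothetical active-mode collector drop exceeds the supply, the active-mode assumption fails and the external resistor takes over.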
We assume that in population \(\pi_{i}\) the probability density function of \(\boldsymbol{x}\) is multivariate normal with mean vector \(\boldsymbol{\mu}_{i}\) and variance-covariance matrix \(\Sigma\) (the same for all populations). As a formula, this is... \(f(\mathbf{x}|\pi_i) = \dfrac{1}{(2\pi)^{p/2}|\mathbf{\Sigma}|^{1/2}}\exp\left(-\frac{1}{2}\mathbf{(x-\mu_i)'\Sigma^{-1}(x-\mu_i)}\right)\) We classify to the population for which \(p _ { i } f ( \mathbf { x } | \pi _ { i } )\) is largest. Because the log transform is monotonic, this is equivalent to classifying an observation to the population for which \(\log( p _ { i } f ( \mathbf { x } | \pi _ { i } ))\) is largest. Linear discriminant analysis is used when the variance-covariance matrix does not depend on the population. In this case, our decision rule is based on the Linear Score Function, a function of the population means for each of our g populations, \(\boldsymbol{\mu}_{i}\), as well as the pooled variance-covariance matrix. Linear Score Function The Linear Score Function is: \(s^L_i(\mathbf{X}) = -\dfrac{1}{2}\mathbf{\mu'_i \Sigma^{-1}\mu_i + \mu'_i \Sigma^{-1}x}+ \log p_i = d_{i0}+\sum_{j=1}^{p}d_{ij}x_j + \log p_i\) where \(d_{i0} = -\dfrac{1}{2}\mathbf{\mu'_i\Sigma^{-1}\mu_i}\) \(d_{ij} = j\text{th element of } \mu'_i\Sigma^{-1}\) The far left-hand expression resembles a linear regression with intercept term \(d_{i0}\) and regression coefficients \(d_{ij}\). Linear Discriminant Function \(d^L_i(\mathbf{x}) = -\dfrac{1}{2}\mathbf{\mu'_i\Sigma^{-1}\mu_i + \mu'_i\Sigma^{-1}x} = d_{i0} + \sum_{j=1}^{p}d_{ij}x_j\) where \(d_{i0} = -\dfrac{1}{2}\mathbf{\mu'_i\Sigma^{-1}\mu_i}\) Given a sample unit with measurements \(x _ { 1 } , x _ { 2 } , \dots , x _ { p }\), we classify the sample unit into the population that has the largest Linear Score Function. This is equivalent to classifying to the population for which the posterior probability of membership is largest.
The linear score function is computed for each population, then we plug in our observation values and assign the unit to the population with the largest score. However, this is a function of the unknown parameters \(\boldsymbol{\mu}_{i}\) and \(\Sigma\), so these must be estimated from the data. Discriminant analysis requires estimates of: \(p_i = \text{Pr}(\pi_i);\) \(i = 1, 2, \dots, g\) \(\mathbf{\mu_i} = E(\mathbf{X}|\pi_i)\); \(i = 1, 2, \dots, g\) \(\Sigma = \text{var}(\mathbf{X}| \pi_i)\); \(i = 1, 2, \dots, g\) Typically, these parameters are estimated from training data, in which the population membership is known. Conditional Density Function Parameters Population means: each population mean \(\boldsymbol{\mu}_{i}\) is estimated by the corresponding sample mean vector \(\mathbf{\bar{x}}_i\). Variance-covariance matrix: let \(\mathbf{S}_i\) denote the sample variance-covariance matrix for population \(i\). Then \(\Sigma\) is estimated by the pooled variance-covariance matrix \(\mathbf{S}_p = \dfrac{\sum_{i=1}^{g}(n_i-1)\mathbf{S}_i}{\sum_{i=1}^{g}(n_i-1)}\) Substituting these estimates into the Linear Score Function, we obtain the estimated linear score function: \(\hat{s}^L_i(\mathbf{x}) = -\frac{1}{2}\mathbf{\bar{x}'_i S^{-1}_p \bar{x}_i +\bar{x}'_i S^{-1}_p x } + \log{\hat{p}_i} = \hat{d}_{i0} + \sum_{j=1}^{p}\hat{d}_{ij}x_j + \log{\hat{p}_i}\) where \(\hat{d}_{i0} = -\dfrac{1}{2}\mathbf{\bar{x}'_i S^{-1}_p \bar{x}_i} \) and \(\hat{d}_{ij} = j\)th element of \(\mathbf{\bar{x}'_iS^{-1}_p}\) This is a function of the sample mean vectors, the pooled variance-covariance matrix, and the prior probabilities for the g different populations. It is written in a form that looks like a linear regression formula: an intercept term plus a linear combination of the response variables, plus the natural log of the prior probability.
Decision Rule: Classify the sample unit into the population that has the largest estimated linear score function.
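A minimal pure-Python sketch of this decision rule for \(p = 1\) (so the pooled variance-covariance matrix reduces to a scalar), with made-up training data and equal priors:

```python
import math

# Hypothetical 1-D training data (p = 1, Sigma is a scalar) and equal priors.
data = {"a": [0.8, 1.0, 1.2], "b": [2.6, 3.0, 3.4]}
priors = {"a": 0.5, "b": 0.5}

means = {g: sum(xs) / len(xs) for g, xs in data.items()}

def sample_var(xs, m):
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

# Pooled variance: sum_i (n_i - 1) S_i / sum_i (n_i - 1).
num = sum((len(xs) - 1) * sample_var(xs, means[g]) for g, xs in data.items())
den = sum(len(xs) - 1 for xs in data.values())
s_pooled = num / den

def score(g, x):
    # s_i(x) = -mu_i^2 / (2 sigma^2) + mu_i x / sigma^2 + log p_i
    mu = means[g]
    return -mu * mu / (2 * s_pooled) + mu * x / s_pooled + math.log(priors[g])

x_new = 1.4
winner = max(data, key=lambda g: score(g, x_new))
print(winner)  # "a": the largest estimated linear score wins
```

With means 1.0 and 3.0 and pooled variance 0.1, the observation 1.4 scores far higher for population "a", so that is where it is classified.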
When an unknown specimen is classified according to any decision rule, there is always a possibility that it is wrongly classified. This is unavoidable and is part of the inherent uncertainty in any statistical procedure. One way to evaluate the discriminant rule is to classify the training data according to the developed discrimination rule. Because we know which unit comes from which population among the training data, this gives us some idea of the validity of the discrimination procedure. Method 1: Confusion Table Section The confusion table describes how the discriminant function classifies each observation in the data set. In general, the confusion table takes the following form, with rows giving the true population and columns the classified population:

Truth        1                2                \(\cdots\)    \(g\)            Total
1            \(n_{11}\)       \(n_{12}\)       \(\cdots\)    \(n_{1g}\)       \(n_{1\cdot}\)
2            \(n_{21}\)       \(n_{22}\)       \(\cdots\)    \(n_{2g}\)       \(n_{2\cdot}\)
\(\vdots\)   \(\vdots\)       \(\vdots\)                     \(\vdots\)       \(\vdots\)
\(g\)        \(n_{g1}\)       \(n_{g2}\)       \(\cdots\)    \(n_{gg}\)       \(n_{g\cdot}\)
Total        \(n_{\cdot 1}\)  \(n_{\cdot 2}\)  \(\cdots\)    \(n_{\cdot g}\)  \(n_{\cdot \cdot}\)

Rows 1 through g are the g populations to which the items truly belong; across the columns we see how they are classified. \(n_{11}\) is the number of insects correctly classified into species (1), while \(n_{12}\) is the number of insects incorrectly classified into species (2). In general, \(n_{ij}\) is the number belonging to population i classified into population j. Ideally this matrix would be diagonal; in practice we hope to see very small off-diagonal elements. The row totals give the number of individuals belonging to each of the populations or species in our training dataset; the column totals give the number classified into each of these species. The dot notation indicates summation: in the row totals we sum over the second subscript, whereas in the column totals we sum over the first subscript.
We will let \(p(i|j)\) denote the probability that a unit from population \(\pi_j\) is classified into population \(\pi_i\). These misclassification probabilities are estimated by taking the number of insects from population j that are misclassified into population i divided by the total number of insects in the sample from population j: \(\hat{p}(i|j) = \dfrac{n_{ji}}{n_{j.}}\) From the SAS output, we obtain the following confusion table:

Truth    \(a\)    \(b\)    Total
\(a\)    10       0        10
\(b\)    0        10       10
Total    10       10       20

Here, none of the insects were misclassified! The misclassification probabilities are all estimated to equal zero. Example 10-5: Insect Data Section The confusion table for the cross validation is:

Truth    \(a\)    \(b\)    Total
\(a\)    10       0        10
\(b\)    2        8        10
Total    12       8        20

Here, the estimated misclassification probabilities are \(\hat{p}(b|a) = \frac{0}{10} = 0.0\) for insects belonging to species A, and \(\hat{p}(a|b) = \frac{2}{10} = 0.2\) for insects belonging to species B. Method 2: Set Aside Method Section Step 1: Randomly partition the observations into two "halves". Step 2: Use one "half" to obtain the discriminant function. Step 3: Use the discriminant function from Step 2 to classify all members of the second "half" of the data, from which the proportion of misclassified observations is computed. Advantage: This method yields unbiased estimates of the misclassification probabilities. Problem: It does not make optimum use of the data, and so the estimated misclassification probabilities are not as precise as possible. Method 3: Cross Validation Section Step 1: Delete one observation from the data. Step 2: Use the remaining observations to compute a discriminant function. Step 3: Use the discriminant function from Step 2 to classify the observation removed in Step 1. Steps 1-3 are repeated for all observations; compute the proportion of observations that are misclassified.
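The estimator \(\hat{p}(i|j) = n_{ji}/n_{j\cdot}\) applied to the cross-validated insect table can be sketched as:

```python
# Cross-validated confusion table from the insect example:
# keys are (truth, classified) pairs, values are counts.
table = {("a", "a"): 10, ("a", "b"): 0,
         ("b", "a"): 2,  ("b", "b"): 8}
row_totals = {j: sum(n for (t, _), n in table.items() if t == j) for j in ("a", "b")}

def p_hat(i, j):
    """Estimated probability that a unit from population j is classified into i."""
    return table[(j, i)] / row_totals[j]

assert p_hat("b", "a") == 0.0   # no species-A insect is misclassified
assert p_hat("a", "b") == 0.2   # 2 of 10 species-B insects are misclassified
print(p_hat("a", "b"))
```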
Specifying Unequal Priors Section Suppose that we have information (from prior experience or from another study) suggesting that 90% of the insects belong to Ch. concinna. Then the score functions for the unidentified specimen are \begin{align} \hat{s}^L_a(\mathbf{x}) &= \hat{d}^L_a(\mathbf{x}) + \log{\hat{p}_a}\\[10pt] &= 203.052 + \log{0.9} \\[10pt] &= 202.946\end{align} and \begin{align} \hat{s}^L_b(\mathbf{x}) &= \hat{d}^L_b(\mathbf{x}) + \log{\hat{p}_b} \\[10pt] &= 205.912 + \log{0.1} \\[10pt] &= 203.609\end{align} In this case, we would still classify this specimen into Ch. heikertlingeri, with posterior probabilities \(p(\pi_a|\mathbf{x}) = 0.36\) and \(p(\pi_b|\mathbf{x}) = 0.64\). These priors can be specified in SAS by adding the "priors" statement priors ”a” = 0.9 ”b” = 0.1; following the var statement. However, it should be noted that when the "priors" statement is added, SAS includes \(\log p_i\) as part of the constant term. In other words, SAS outputs the estimated linear score function, not the estimated linear discriminant function.
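The arithmetic above is easy to verify (natural logs throughout):

```python
import math

# Discriminant-function values and priors from the example above.
d_a, d_b = 203.052, 205.912
p_a, p_b = 0.9, 0.1

s_a = d_a + math.log(p_a)   # 202.946...
s_b = d_b + math.log(p_b)   # 203.609...
assert abs(s_a - 202.946) < 1e-3
assert abs(s_b - 203.609) < 1e-3
assert s_b > s_a            # so the specimen is still classified as Ch. heikertlingeri
```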
In order to compare $T$ and $W$, one needs to express one in terms of the other. As explained in the diagram itself, since the block is in translational equilibrium, we have: $$\vec{T_1}+\vec{T_2}=\vec{W_b}$$ Projecting this relation onto the $Oy$ axis, the components of the tensions are both $T\sin\theta$ (because the angle between $T$ and $Oy$ is $\frac{\pi}{2}-\theta$ and the angle between $T$ and $Ox$ is $\theta$). Therefore: $$2T\sin\theta=W\implies T=\dfrac{W}{2\sin\theta}$$ The statement that the strings are almost horizontal is equivalent to $\theta\cong 0$. For very small angles, the sine is also extremely small, and almost equal to the angle $\theta$ itself. Take a look at the following examples (I've included them here solely to make sure that you understand why $\sin\theta\to 0$ as $\theta\to 0$): $\sin(0.01)\cong 0.00999983333333\cong 0.01$ $\sin(0.0001) \cong 0.00009999999983\cong 0.0001$ $\sin(0)=0$ So the denominator $2\sin\theta\cong0$ and therefore $T\to\infty$. As $W=mg$, clearly $W$ is a finite value (making the reasonable assumption that the mass is not infinite), and therefore clearly $T>W$.
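A tiny numerical sketch of the blow-up, using a unit weight (my choice of units, purely for illustration):

```python
import math

# T = W / (2 sin(theta)) with a unit weight W = 1.
W = 1.0
def T(theta):
    return W / (2 * math.sin(theta))

assert T(0.01) > T(0.1) > T(1.0)   # tension grows as the strings flatten
assert T(0.001) > 100 * W          # already about 500x the weight
print(T(0.001))                    # roughly 500
```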
ok, suppose we have the set $U_1=[a,\frac{a+b}{2}) \cup (\frac{a+b}{2},b]$ where $a,b$ are rational. It is easy to see that there exists a countable cover consisting of intervals that converge towards $a$, $b$ and $\frac{a+b}{2}$. Therefore $U_1$ is not compact. Now we can construct $U_2$ by taking the midpoint of each half open interval of $U_1$, and we can similarly construct a countable cover that has no finite subcover. By induction on the naturals, we eventually end up with the set $\Bbb{I} \cap [a,b]$. Thus this set is not compact I am currently working under the Lebesgue outer measure, though I did not know we cannot define any measure where subsets of rationals have nonzero measure The above working is basically trying to compute $\lambda^*(\Bbb{I}\cap[a,b])$ more directly without using the fact $(\Bbb{I}\cap[a,b]) \cup (\Bbb{Q}\cap[a,b]) = [a,b]$ where $\lambda^*$ is the Lebesgue outer measure that is, trying to compute the Lebesgue outer measure of the irrationals using only the notions of covers, topology and the definition of the measure What I hope from such a more direct computation is to get deeper rigorous and intuitive insight on what exactly controls the value of the measure of some given uncountable set, because MSE and Asaf taught me it has nothing to do with connectedness or the topology of the set Problem: Let $X$ be some measurable space and $f,g : X \to [-\infty, \infty]$ measurable functions. Prove that the set $\{x \mid f(x) < g(x) \}$ is a measurable set. Question: In a solution I am reading, the author just asserts that $g-f$ is measurable and the rest of the proof essentially follows from that. My problem is, how can $g-f$ make sense if either function could possibly take on an infinite value?
@AkivaWeinberger For $\lambda^*$ I can think of simple examples like: If $\frac{a}{2} < \frac{b}{2} < a, b$, then I can always add some $\frac{c}{2}$ to $\frac{a}{2},\frac{b}{2}$ to generate the interval $[\frac{a+c}{2},\frac{b+c}{2}]$ which will fulfill the criteria. But if you are interested in some $X$ that are not intervals, I am not very sure We then manipulate the $c_n$ for the Fourier series of $h$ to obtain a new $c_n$, but expressed w.r.t. $g$. Now, I am still not understanding why by doing what we have done we're logically showing that this new $c_n$ is the $d_n$ which we need. Why would this $c_n$ be the $d_n$ associated with the Fourier series of $g$? $\lambda^*(\Bbb{I}\cap [a,b]) = \lambda^*(C) = \lim_{i\to \aleph_0}\lambda^*(C_i) = \lim_{i\to \aleph_0} (b-q_i) + \sum_{k=1}^i (q_{n(i)}-q_{m(i)}) + (q_{i+1}-a)$. Therefore, computing the Lebesgue outer measure of the irrationals directly amounts to computing the value of this series. Therefore, we first need to check it is convergent, and then compute its value The above working is basically trying to compute $\lambda^*(\Bbb{I}\cap[a,b])$ more directly without using the fact $(\Bbb{I}\cap[a,b]) \cup (\Bbb{I}\cap[a,b]) = [a,b]$ where $\lambda^*$ is the Lebesgue outer measure What I hope from such a more direct computation is to get deeper rigorous and intuitive insight on what exactly controls the value of the measure of some given uncountable set, because MSE and Asaf taught me it has nothing to do with connectedness or the topology of the set Alessandro: and typo for the third $\Bbb{I}$ in the quote, which should be $\Bbb{Q}$ (cont.) We first observed that the above countable sum is an alternating series.
Therefore, we can use some machinery for checking the convergence of an alternating series Next, we observed that the terms in the alternating series are monotonically increasing and bounded from above and below by $b$ and $a$ respectively Each term in brackets is also nonnegative by the Lebesgue outer measure of open intervals, and together, let the differences be $c_i = q_{n(i)}-q_{m(i)}$. These form a series that is bounded from above and below Hence: $$\lambda^*(\Bbb{I}\cap [a,b])=\sum_{i=1}^{\aleph_0}c_i$$ Consider the partial sums of the above series. Note every partial sum is telescoping since in a finite series, addition associates and thus we are free to cancel out. By the construction of the cover $C$, every rational $q_i$ that is enumerated is ordered such that they form expressions $-q_i+q_i$. Hence for any partial sum, by moving through the stages of the construction of $C$, i.e. $C_0,C_1,C_2,...$, the only surviving term is $b-a$. Therefore, the countable sum is also telescoping and: @AkivaWeinberger Never mind. I think I figured it out alone. Basically, the value of the definite integral for $c_n$ is actually the value of the definite integral of $d_n$. So they are the same thing but re-expressed differently. If you have a function $f : X \to Y$ between two topological spaces $X$ and $Y$ you can't conclude anything about the topologies; if however the function is continuous, then you can say stuff about the topologies @Overflow2341313 Could you send a picture or a screenshot of the problem? nvm I overlooked something important. Each interval contains a rational, and there are only countably many rationals.
This means at the $\omega_1$ limit stage, there are uncountably many intervals that contain neither rationals nor irrationals, thus they are empty and do not contribute to the sum So there are only countably many disjoint intervals in the cover $C$ @Perturbative Okay, similar problem, if you don't mind guiding me in the right direction. If a function $f$ exists, with the same setup $(X, t) \to (Y, S)$, that is 1-1, open, and continuous but not onto, construct a topological space which is homeomorphic to the space $(X, t)$. Simply restrict the codomain so that it is onto? Making it bijective and hence invertible. hmm, I don't understand. While I do start with an uncountable cover and use the axiom of choice to well order the irrationals, the fact that the rationals are countable means I eventually end up with a countable cover of the rationals. However the telescoping countable sum clearly does not vanish, so this is weird... In a schematic, we have the following, I will try to figure this out tomorrow before moving on to computing the Lebesgue outer measure of the Cantor set: @Perturbative Okay, last question. Think I'm starting to get this stuff now.... I want to find a topology $t$ on $\Bbb{R}$ such that $f: (\Bbb{R}, U) \to (\Bbb{R}, t)$ defined by $f(x) = x^2$ is an open map, where $U$ is the "usual" topology generated by the open intervals $(a,b)$. To do this... the smallest $t$ can be is the trivial topology on $\Bbb{R}$, namely $\{\emptyset, \Bbb{R}\}$ But, we require that the image under $f$ of everything in $U$ be in $t$?
@Overflow2341313 Also for the previous example, I think it may not be as simple (contrary to what I initially thought), because there do exist functions which are continuous and bijective but do not have a continuous inverse. I'm not sure if adding the additional condition that $f$ is an open map will make any difference For those who are not very familiar with this interest of mine, besides the maths, I am also interested in the notion of a "proof space", that is, the set or class of all possible proofs of a given proposition and their relationships. An element of a proof space is a proof, which consists of steps forming a path in this space For that I have a postulate: given two paths A and B in proof space with the same starting point and a proposition $\phi$, if $A \vdash \phi$ but $B \not\vdash \phi$, then there must exist some condition that makes the path $B$ unable to reach $\phi$, or else $\phi$ is unprovable along $B$ under the current formal system Hi. I believe I have numerically discovered that $\sum_{n=0}^{K-c}\binom{K}{n}\binom{K}{n+c}z_K^{n+c/2} \sim \sum_{n=0}^K \binom{K}{n}^2 z_K^n$ as $K\to\infty$, where $c=0,\dots,K$ is fixed and $z_K=K^{-\alpha}$ for some $\alpha\in(0,2)$. Any ideas how to prove that?
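A quick numerical sanity check of the claimed asymptotic is easy to script. This is only supporting evidence, not a proof, and the parameter choices (K, c, α) below are illustrative:

```python
from math import comb

def shifted_sum(K, c, alpha):
    # sum_{n=0}^{K-c} C(K,n) C(K,n+c) z^(n + c/2), with z = K^(-alpha)
    z = K ** (-alpha)
    return sum(comb(K, n) * comb(K, n + c) * z ** (n + c / 2)
               for n in range(K - c + 1))

def central_sum(K, alpha):
    # sum_{n=0}^{K} C(K,n)^2 z^n, with z = K^(-alpha)
    z = K ** (-alpha)
    return sum(comb(K, n) ** 2 * z ** n for n in range(K + 1))

# if the asymptotic holds, the ratio should drift towards 1 as K grows
for K in (25, 50, 100):
    print(K, shifted_sum(K, 1, 1.0) / central_sum(K, 1.0))
```

For c = 0 the two sums coincide exactly, which is a useful correctness check on the script itself.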
Finite Subgroup Test/Proof 2 Theorem Let $\struct {G, \circ}$ be a group and let $H$ be a nonempty finite subset of $G$. Then: $H$ is a subgroup of $G$ if and only if: $\forall a, b \in H: a \circ b \in H$ Proof Necessary Condition Let $H$ be a subgroup of $G$. Then: $\forall a, b \in H: a \circ b \in H$ by definition of subgroup. $\Box$ Sufficient Condition Suppose that: $\forall a, b \in H: a \circ b \in H$ Let $x \in H$. Thus all elements of $\set {x, x^2, x^3, \ldots}$ are in $H$. But $H$ is finite, so these powers cannot all be distinct. Therefore it must be the case that: $\exists r, s \in \N: x^r = x^s$ for $r < s$. So we can write:
$$x^r \circ e = x^r = x^s = x^r \circ x^{s - r}$$
by the Definition of Identity Element and Powers of Group Elements: Sum of Indices. By the Cancellation Laws:
$$(1): \quad e = x^{s - r}$$
Since $s - r \ge 1$, the right hand side is a product of elements of $H$, so $e \in H$ as $H$ is closed under $\circ$. Then we have, starting from $(1)$:
$$e = x \circ x^{s - r - 1}$$
$$\leadsto \quad x^{-1} \circ e = x^{-1} \circ x \circ x^{s - r - 1}$$
$$(2): \quad x^{-1} = x^{s - r - 1}$$
by the Definition of Inverse Element and the Definition of Identity Element. But we have that:
$$r < s \ \leadsto \ s - r > 0 \ \leadsto \ s - r - 1 \ge 0 \ \leadsto \ x^{s - r - 1} \in \set {e, x, x^2, x^3, \ldots} \ \leadsto \ x^{s - r - 1} \in H$$
as all elements of $\set {e, x, x^2, x^3, \ldots}$ are in $H$ (where $x^0 = e$, shown above to lie in $H$). So from $(2)$: $x^{-1} = x^{s - r - 1}$ it follows that: $x^{-1} \in H$. Thus $H$ contains the identity and inverses and is closed under $\circ$, so $H$ is a subgroup of $G$. $\blacksquare$
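The content of the theorem can be checked exhaustively in a small finite group. A minimal Python sketch, using $\Bbb Z_{12}$ under addition mod 12 as an illustrative example, verifying that closure alone already implies the full subgroup axioms for every nonempty subset:

```python
from itertools import combinations

def is_closed(H, n):
    # closure of the subset H of Z_n under addition modulo n
    return all((a + b) % n in H for a in H for b in H)

def is_subgroup(H, n):
    # full subgroup axioms: identity, inverses, closure
    return 0 in H and all((-a) % n in H for a in H) and is_closed(H, n)

n = 12
for r in range(1, n + 1):
    for subset in combinations(range(n), r):
        H = set(subset)
        # Finite Subgroup Test: nonempty finite closed subset => subgroup
        if is_closed(H, n):
            assert is_subgroup(H, n)
print("verified for all nonempty subsets of Z_12")
```

For instance {0, 3, 6, 9} is closed and is indeed a subgroup, while {0, 1} fails closure (1 + 1 = 2 is outside it) and so the test says nothing about it.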
28.35 Relatively ample sheaves Let X be a scheme and \mathcal{L} an invertible sheaf on X. Then \mathcal{L} is ample on X if X is quasi-compact and every point of X is contained in an affine open of the form X_ s, where s \in \Gamma (X, \mathcal{L}^{\otimes n}) and n \geq 1, see Properties, Definition 27.26.1. We turn this into a relative notion as follows. Definition 28.35.1. Let f : X \to S be a morphism of schemes. Let \mathcal{L} be an invertible \mathcal{O}_ X-module. We say \mathcal{L} is relatively ample, or f-relatively ample, or ample on X/S, or f-ample if f : X \to S is quasi-compact, and if for every affine open V \subset S the restriction of \mathcal{L} to the open subscheme f^{-1}(V) of X is ample. We note that the existence of a relatively ample sheaf on X does not force the morphism X \to S to be of finite type. Lemma 28.35.2. Let X \to S be a morphism of schemes. Let \mathcal{L} be an invertible \mathcal{O}_ X-module. Let n \geq 1. Then \mathcal{L} is f-ample if and only if \mathcal{L}^{\otimes n} is f-ample. Proof. This follows from the corresponding lemma in Properties. \square Lemma 28.35.3. Let f : X \to S be a morphism of schemes. If there exists an f-ample invertible sheaf, then f is separated. Proof. Being separated is local on the base (see Schemes, Lemma 25.21.7 for example; it also follows easily from the definition). Hence we may assume S is affine and X has an ample invertible sheaf. In this case the result follows from Properties, Lemma 27.26.8. \square There are many ways to characterize relatively ample invertible sheaves, analogous to the equivalent conditions in Properties, Proposition 27.26.13. 
We will add these here as needed. Lemma 28.35.4. Let f : X \to S be a morphism of schemes and let \mathcal{L} be an invertible \mathcal{O}_ X-module. The following are equivalent: (1) The invertible sheaf \mathcal{L} is f-ample. (2) There exists an open covering S = \bigcup V_ i such that each \mathcal{L}|_{f^{-1}(V_ i)} is ample relative to f^{-1}(V_ i) \to V_ i. (3) There exists an affine open covering S = \bigcup V_ i such that each \mathcal{L}|_{f^{-1}(V_ i)} is ample. (4) There exists a quasi-coherent graded \mathcal{O}_ S-algebra \mathcal{A} and a map of graded \mathcal{O}_ X-algebras \psi : f^*\mathcal{A} \to \bigoplus _{d \geq 0} \mathcal{L}^{\otimes d} such that U(\psi ) = X and r_{\mathcal{L}, \psi } : X \longrightarrow \underline{\text{Proj}}_ S(\mathcal{A}) is an open immersion (see Constructions, Lemma 26.19.1 for notation). (5) The morphism f is quasi-separated and part (4) above holds with \mathcal{A} = f_*(\bigoplus _{d \geq 0} \mathcal{L}^{\otimes d}) and \psi the adjunction mapping. (6) Same as (4) but just requiring r_{\mathcal{L}, \psi } to be an immersion. Proof. It is immediate from the definition that (1) implies (2) and (2) implies (3). It is clear that (5) implies (4). Assume (3) holds for the affine open covering S = \bigcup V_ i. We are going to show (5) holds. Since each f^{-1}(V_ i) has an ample invertible sheaf we see that f^{-1}(V_ i) is separated (Properties, Lemma 27.26.8). Hence f is separated. By Schemes, Lemma 25.24.1 we see that \mathcal{A} = f_*(\bigoplus _{d \geq 0} \mathcal{L}^{\otimes d}) is a quasi-coherent graded \mathcal{O}_ S-algebra. Denote \psi : f^*\mathcal{A} \to \bigoplus _{d \geq 0} \mathcal{L}^{\otimes d} the adjunction mapping. The description of the open U(\psi ) in Constructions, Section 26.19 and the definition of ampleness of \mathcal{L}|_{f^{-1}(V_ i)} show that U(\psi ) = X. Moreover, Constructions, Lemma 26.19.1 part (3) shows that the restriction of r_{\mathcal{L}, \psi } to f^{-1}(V_ i) is the same as the morphism from Properties, Lemma 27.26.9 which is an open immersion according to Properties, Lemma 27.26.11. Hence (5) holds. 
Let us show that (4) implies (1). Assume (4). Denote \pi : \underline{\text{Proj}}_ S(\mathcal{A}) \to S the structure morphism. Choose V \subset S affine open. By Constructions, Definition 26.16.7 we see that \pi ^{-1}(V) \subset \underline{\text{Proj}}_ S(\mathcal{A}) is equal to \text{Proj}(A) where A = \mathcal{A}(V) as a graded ring. Hence r_{\mathcal{L}, \psi } maps f^{-1}(V) isomorphically onto a quasi-compact open of \text{Proj}(A). Moreover, \mathcal{L}^{\otimes d} is isomorphic to the pullback of \mathcal{O}_{\text{Proj}(A)}(d) for some d \geq 1. (See part (3) of Constructions, Lemma 26.19.1 and the final statement of Constructions, Lemma 26.14.1.) This implies that \mathcal{L}|_{f^{-1}(V)} is ample by Properties, Lemma 27.26.12 and the lemma in Properties on powers of ample invertible sheaves. Assume (6). By the equivalence of (1) - (5) above we see that the property of being relatively ample on X/S is local on S. Hence we may assume that S is affine, and we have to show that \mathcal{L} is ample on X. In this case the morphism r_{\mathcal{L}, \psi } is identified with the morphism, also denoted r_{\mathcal{L}, \psi } : X \to \text{Proj}(A) associated to the map \psi : A = \mathcal{A}(V) \to \Gamma _*(X, \mathcal{L}). (See references above.) As above we also see that \mathcal{L}^{\otimes d} is the pullback of the sheaf \mathcal{O}_{\text{Proj}(A)}(d) for some d \geq 1. Moreover, since X is quasi-compact we see that X gets identified with a closed subscheme of a quasi-compact open subscheme Y \subset \text{Proj}(A). By Constructions, Lemma 26.10.6 (see also Properties, Lemma 27.26.12) we see that \mathcal{O}_ Y(d') is an ample invertible sheaf on Y for some d' \geq 1. 
Since the restriction of an ample sheaf to a closed subscheme is ample (see the corresponding lemma in Properties), we conclude that the pullback of \mathcal{O}_ Y(d') is ample. Combining these results with the lemma in Properties on powers of ample invertible sheaves we conclude that \mathcal{L} is ample as desired. \square Lemma 28.35.5. Let f : X \to S be a morphism of schemes. Let \mathcal{L} be an invertible \mathcal{O}_ X-module. Assume S affine. Then \mathcal{L} is f-relatively ample if and only if \mathcal{L} is ample on X. Proof. Immediate from Lemma 28.35.4 and the definitions. \square Lemma 28.35.6. Let f : X \to S be a morphism of schemes. Then f is quasi-affine if and only if \mathcal{O}_ X is f-relatively ample. Proof. Follows from Properties, Lemma 27.27.1 and the definitions. \square Lemma 28.35.7. Let f : X \to Y be a morphism of schemes, \mathcal{M} an invertible \mathcal{O}_ Y-module, and \mathcal{L} an invertible \mathcal{O}_ X-module. If \mathcal{L} is f-ample and \mathcal{M} is ample, then \mathcal{L} \otimes f^*\mathcal{M}^{\otimes a} is ample for a \gg 0. If \mathcal{M} is ample and f quasi-affine, then f^*\mathcal{M} is ample. Proof. Assume \mathcal{L} is f-ample and \mathcal{M} ample. By assumption Y and f are quasi-compact (see Definition 28.35.1 and Properties, Definition 27.26.1). Hence X is quasi-compact. Pick x \in X. We can choose m \geq 1 and t \in \Gamma (Y, \mathcal{M}^{\otimes m}) such that Y_ t is affine and f(x) \in Y_ t. Since \mathcal{L} restricts to an ample invertible sheaf on f^{-1}(Y_ t) = X_{f^*t} we can choose n \geq 1 and s \in \Gamma (X_{f^*t}, \mathcal{L}^{\otimes n}) with x \in (X_{f^*t})_ s with (X_{f^*t})_ s affine. 
By Properties, Lemma 27.17.2 there exists an integer e \geq 1 and a section s' \in \Gamma (X, \mathcal{L}^{\otimes n} \otimes f^*\mathcal{M}^{\otimes em}) which restricts to s(f^*t)^ e on X_{f^*t}. For any b > 0 consider the section s'' = s'(f^*t)^ b of \mathcal{L}^{\otimes n} \otimes f^*\mathcal{M}^{\otimes (e + b)m}. Then X_{s''} = (X_{f^*t})_ s is an affine open of X containing x. Picking b such that n divides e + b we see \mathcal{L}^{\otimes n} \otimes f^*\mathcal{M}^{\otimes (e + b)m} is the nth power of \mathcal{L} \otimes f^*\mathcal{M}^{\otimes a} for some a and we can get any a divisible by m and big enough. Since X is quasi-compact a finite number of these affine opens cover X. We conclude that for some a sufficiently divisible and large enough the invertible sheaf \mathcal{L} \otimes f^*\mathcal{M}^{\otimes a} is ample on X. On the other hand, we know that \mathcal{M}^{\otimes c} (and hence its pullback to X) is globally generated for all c \gg 0 by Properties, Proposition 27.26.13. Thus \mathcal{L} \otimes f^*\mathcal{M}^{\otimes a + c} is ample (Properties, Lemma 27.26.5) for c \gg 0 and (1) is proved. Lemma 28.35.8. Let g : Y \to S and f : X \to Y be morphisms of schemes. Let \mathcal{M} be an invertible \mathcal{O}_ Y-module. Let \mathcal{L} be an invertible \mathcal{O}_ X-module. If S is quasi-compact, \mathcal{M} is g-ample, and \mathcal{L} is f-ample, then \mathcal{L} \otimes f^*\mathcal{M}^{\otimes a} is g \circ f-ample for a \gg 0. Lemma 28.35.9. Let f : X \to S be a morphism of schemes. Let \mathcal{L} be an invertible \mathcal{O}_ X-module. Let S' \to S be a morphism of schemes. Let f' : X' \to S' be the base change of f and denote \mathcal{L}' the pullback of \mathcal{L} to X'. If \mathcal{L} is f-ample, then \mathcal{L}' is f'-ample. Proof. 
By Lemma 28.35.4 it suffices to find an affine open covering S' = \bigcup U'_ i such that \mathcal{L}' restricts to an ample invertible sheaf on (f')^{-1}(U_ i') for all i. We may choose U'_ i mapping into an affine open U_ i \subset S. In this case the morphism (f')^{-1}(U'_ i) \to f^{-1}(U_ i) is affine as a base change of the affine morphism U'_ i \to U_ i (by the lemma on base change of affine morphisms). Thus \mathcal{L}'|_{(f')^{-1}(U'_ i)} is ample by Lemma 28.35.7. \square Lemma 28.35.10. Let g : Y \to S and f : X \to Y be morphisms of schemes. Let \mathcal{L} be an invertible \mathcal{O}_ X-module. If \mathcal{L} is g \circ f-ample and f is quasi-compact, then \mathcal{L} is f-ample. Proof. Assume f is quasi-compact and \mathcal{L} is g \circ f-ample. Let U \subset S be an affine open and let V \subset Y be an affine open with g(V) \subset U. Then \mathcal{L}|_{(g \circ f)^{-1}(U)} is ample on (g \circ f)^{-1}(U) by assumption. Since f^{-1}(V) \subset (g \circ f)^{-1}(U) we see that \mathcal{L}|_{f^{-1}(V)} is ample on f^{-1}(V) by Properties, Lemma 27.26.14. Namely, f^{-1}(V) \to (g \circ f)^{-1}(U) is a quasi-compact open immersion by Schemes, Lemma 25.21.14 as (g \circ f)^{-1}(U) is separated (Properties, Lemma 27.26.8) and f^{-1}(V) is quasi-compact (as f is quasi-compact). Thus we conclude that \mathcal{L} is f-ample by Lemma 28.35.4. \square
Student T Processes

Student T Processes (STP) can be viewed as a generalization of Gaussian Processes. In GP models we use the multivariate normal distribution to model noisy observations of an unknown function; likewise, for STP models we employ the multivariate student t distribution. Formally, a student t process is a stochastic process whose finite dimensional distributions are multivariate t. It is known that as \nu \rightarrow \infty, the MVT_{n}(\nu, \phi, K) distribution tends towards the multivariate normal distribution \mathcal{N}_{n}(\phi, K).

Regression with Student T Processes

The regression formulation for STP models is identical to the GP regression framework; to summarize, the posterior predictive distribution takes the following form. Suppose \mathbf{t} \sim MVT_{n_{tr} + n_t}(\nu, \mathbf{0}, K) is the process producing the data. Let [\mathbf{f_*}]_{n_{t} \times 1} represent the values of the function on the test inputs and [\mathbf{y}]_{n_{tr} \times 1} represent noisy observations made on the training data points.

STP models for a single output

To construct a single-output model you need:

The degrees of freedom \nu
A kernel/covariance instance to model correlation between values of the latent function at each pair of input features.
A kernel instance to model the correlation of the additive noise; generally the DiracKernel (white noise) is used.
Training data

    val trainingdata: Stream[(DenseVector[Double], Double)] = ...
    val num_features = trainingdata.head._1.length

    // Create an implicit vector field for the creation of the stationary
    // radial basis function kernel
    implicit val field = VectorField(num_features)

    val kernel = new RBFKernel(2.5)
    val noiseKernel = new DiracKernel(1.5)
    val model = new StudentTRegression(1.5, kernel, noiseKernel, trainingdata)

STP models for Multiple Outputs

You can use the MOStudentTRegression[I] class to create multi-output STP models.

    val trainingdata: Stream[(DenseVector[Double], DenseVector[Double])] = ...

    val model = new MOStudentTRegression[DenseVector[Double]](
      sos_kernel, sos_noise, trainingdata,
      trainingdata.length, trainingdata.head._2.length)

Tip Working with multi-output Student T models is similar to multi-output GP models. We need to create a kernel function over the combined index set (DenseVector[Double], Int). This can be done using the sum of separable kernels idea.

    val linearK = new PolynomialKernel(2, 1.0)
    val tKernel = new TStudentKernel(0.2)
    val d = new DiracKernel(0.037)

    val mixedEffects = new MixedEffectRegularizer(0.5)
    val coRegCauchyMatrix = new CoRegCauchyKernel(10.0)
    val coRegDiracMatrix = new CoRegDiracKernel

    val sos_kernel: CompositeCovariance[(DenseVector[Double], Int)] =
      (linearK :* mixedEffects) + (tKernel :* coRegCauchyMatrix)

    val sos_noise: CompositeCovariance[(DenseVector[Double], Int)] =
      d :* coRegDiracMatrix
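The sum-of-separable-kernels construction in the Tip is language-agnostic. A minimal Python sketch of a kernel on the combined index set (input vector, output index); the names, the 2x2 co-regionalization matrix B, and the noise value are illustrative assumptions, not the DynaML API:

```python
import numpy as np

def rbf(x, y, bandwidth=2.5):
    # stationary RBF kernel on the input features
    return float(np.exp(-np.sum((np.asarray(x) - np.asarray(y)) ** 2)
                        / (2 * bandwidth ** 2)))

# co-regionalization matrix coupling two outputs (illustrative values)
B = np.array([[1.0, 0.5],
              [0.5, 1.0]])

def sos_kernel(p, q, noise=0.037):
    # k((x,i),(y,j)) = k_in(x,y) * B[i,j]  +  noise * [i==j][x==y]
    (x, i), (y, j) = p, q
    k = rbf(x, y) * B[i, j]
    if i == j and np.allclose(x, y):
        k += noise
    return k

# Gram matrix over the combined index set (input, output index)
points = [(np.array([0.0]), 0), (np.array([0.0]), 1),
          (np.array([1.0]), 0), (np.array([1.0]), 1)]
G = np.array([[sos_kernel(p, q) for q in points] for p in points])
assert np.allclose(G, G.T)                  # symmetric
assert np.linalg.eigvalsh(G).min() > -1e-9  # positive semidefinite
```

The product of a positive semidefinite input kernel with a positive semidefinite co-regionalization matrix stays positive semidefinite, which is what makes this construction a valid covariance.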
Pseudo-prime

Traditionally, a composite natural number $n$ is called a pseudo-prime if $2^{n-1} \equiv 1$ modulo $n$, for it has long been known that primes have this property: this is Fermat's little theorem. (The term is apparently due to D.H. Lehmer.) There are infinitely many such $n$, the first five being $$ 341,\,561,\,645,\,1105,\,1387\ . $$ More recently, the concept has been extended to include any composite number that acts like a prime in some realization of a probabilistic primality test. That is, it satisfies some easily computable necessary, but not sufficient, condition for primality. Pseudo-primes in this larger sense include: 1) ordinary base-$b$ pseudo-primes, satisfying $b^{n-1}\equiv1$ modulo $n$; 2) Euler base-$b$ pseudo-primes, which satisfy $$ b^{(n-1)/2} \equiv \left({\frac{b}{n}}\right) \pmod n, $$ where $\left({\frac{b}{n}}\right) = \pm 1$ is the Jacobi symbol; 3) strong base-$b$ pseudo-primes, for which the sequence $b^{s\cdot2^i}$ modulo $n$ for $i=0,\ldots, r-1$ is either always $1$, or contains $-1$. (Here $n-1 = 2^r\cdot s$ with $s$ odd.) For each $b$, the implications 3)$\Rightarrow$2)$\Rightarrow$1) hold. A number $n$ that is an ordinary base-$b$ pseudo-prime for all $b$ prime to $n$ is called a Carmichael number. Analogous numbers for the other two categories do not exist. The concept of a pseudo-prime has been generalized to include primality tests based on finite fields and elliptic curves (cf. also Finite field; Elliptic curve). For reviews of this work, see [a3], [a5]. The complementary concept is also of interest. The base $b$ is called a (Fermat) witness for $n$ if $n$ is composite and not a base-$b$ pseudo-prime. Euler and strong witnesses are similarly defined. If $W(n)$, the smallest strong witness for $n$, grows sufficiently slowly, there is a polynomial-time algorithm for primality. It is known that $W(n)$ is not bounded [a2], but if an extended version of the Riemann hypothesis (cf. Riemann hypotheses) holds, then $W(n) \le 2(\log n)^2$ [a1].

References

[a1] E. 
Bach, "Analytic methods in the analysis and design of number-theoretic algorithms", MIT (1985)
[a2] W.R. Alford, A. Granville, C. Pomerance, "On the difficulty of finding reliable witnesses", Algorithmic Number Theory, First Internat. Symp., ANTS-I, Lecture Notes in Computer Science, 877, Springer (1994) pp. 1–16
[a3] F. Morain, "Pseudoprimes: a survey of recent results", Proc. Eurocode '92, Springer (1993) pp. 207–215
[a4] C. Pomerance, J.L. Selfridge, S.S. Wagstaff, Jr., "The pseudoprimes to $25\cdot10^9$", Math. Comp., 35 (1980) pp. 1003–1026. Zbl 0444.10007. DOI 10.2307/2006210
[a5] P. Ribenboim, "The book of prime number records", Springer (1989) (Edition: Second)
[a6] N.J.A. Sloane, S. Plouffe, "The encyclopedia of integer sequences", Acad. Press (1995)

How to Cite This Entry: Pseudo-prime. Encyclopedia of Mathematics. URL: http://www.encyclopediaofmath.org/index.php?title=Pseudo-prime&oldid=34357
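The notions above are easy to implement directly. A short Python sketch (trial division stands in for a real primality test) reproducing the first five base-2 pseudo-primes listed in the article:

```python
def is_prime(n):
    # naive trial division, fine for small n
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

def fermat_pseudoprime(n, b=2):
    # composite n with b^(n-1) = 1 (mod n)
    return not is_prime(n) and pow(b, n - 1, n) == 1

def strong_probable_prime(n, b=2):
    # write n - 1 = 2^r * s with s odd; check b^s = 1 or some b^(s*2^i) = -1
    r, s = 0, n - 1
    while s % 2 == 0:
        r += 1
        s //= 2
    x = pow(b, s, n)
    if x == 1 or x == n - 1:
        return True
    for _ in range(r - 1):
        x = x * x % n
        if x == n - 1:
            return True
    return False

pseudo = [n for n in range(3, 1400, 2) if fermat_pseudoprime(n)]
print(pseudo[:5])   # [341, 561, 645, 1105, 1387]
```

Note that 341 fails the strong test, illustrating that 3) is strictly stronger than 1); the smallest strong base-2 pseudo-prime is 2047 = 23 · 89.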
Production of charged pions, kaons and protons at large transverse momenta in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV (Elsevier, 2014-09) Transverse momentum spectra of $\pi^{\pm}, K^{\pm}$ and $p(\bar{p})$ up to $p_T$ = 20 GeV/c at mid-rapidity, |y| $\le$ 0.8, in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV have been measured using the ALICE detector ...
It would seem that one way of proving this would be to show the existence of non-algebraic numbers. Is there a simpler way to show this? As Steve D. noted, a finite dimensional vector space over a countable field is necessarily countable: if $v_1,\ldots,v_n$ is a basis, then every vector in $V$ can be written uniquely as $\alpha_1 v_1+\cdots+\alpha_n v_n$ for some scalars $\alpha_1,\ldots,\alpha_n\in F$, so the cardinality of the set of all vectors is exactly $|F|^n$. If $F$ is countable, then this is countable. Since $\mathbb{R}$ is uncountable and $\mathbb{Q}$ is countable, $\mathbb{R}$ cannot be finite dimensional over $\mathbb{Q}$. (Whether it has a basis or not depends on your set theory). Your further question in the comments, whether a vector space over $\mathbb{Q}$ is finite dimensional if and only if the set of vectors is countable, has a negative answer. If the vector space is finite dimensional, then it is a countable set; but there are infinite-dimensional vector spaces over $\mathbb{Q}$ that are countable as sets. The simplest example is $\mathbb{Q}[x]$, the vector space of all polynomials with coefficients in $\mathbb{Q}$, which is a countable set, and has dimension $\aleph_0$, with basis $\{1,x,x^2,\ldots,x^n,\ldots\}$. Added: Of course, if $V$ is a vector space over $\mathbb{Q}$, then it has countable dimension (finite or denumerable infinite) if and only if $V$ is countable as a set. So the counting argument in fact shows that not only is $\mathbb{R}$ infinite dimensional over $\mathbb{Q}$, but that (if you are working in an appropriate set theory) it is uncountably-dimensional over $\mathbb{Q}$. The cardinality argument mentioned by Arturo is probably the simplest. Here is an alternative: an explicit example of an infinite $\, \mathbb Q$-independent set of reals. 
Consider the set consisting of the logs of all primes $\, p_i.\,$ If $ \, c_1 \log p_1 +\,\cdots\, + c_n\log p_n =\, 0,\ c_i\in\mathbb Q,\,$ multiplying by a common denominator we can assume that all $\ c_i \in \mathbb Z\,$ so, exponentiating, we obtain $\, p_1^{\large c_1}\cdots p_n^{\large c_n}\! = 1\,\Rightarrow\ c_i = 0\,$ for all $\,i,\,$ by the uniqueness of prime factorizations. No transcendental numbers are needed for this question. Any set of algebraic numbers of unbounded degree spans a vector space of infinite dimension. Explicit examples of linearly independent sets of algebraic numbers are also relatively easy to write down. The set $\{2^{2^{-n}} : n > 0\} = \{\sqrt{2}, \sqrt{\sqrt{2}}, \dots\}$ is linearly independent over $\mathbb Q$. (Proof: Any expression of the $n$th iterated square root $a_n$ as a linear combination of earlier terms $a_i, i < n$ of the sequence could also be read as a rational polynomial relation of degree at most $2^{n-1}$ satisfied by $a_n$, and this contradicts the irreducibility of $X^m - 2$, here with $m=2^n$, which forces the minimal polynomial of $a_n$ to have degree $2^n$.) The square roots of the prime numbers are linearly independent over $\mathbb Q$. (Proof: this is immediate given the ability to extend the function "number of powers of $p$ dividing $x$" from the rational numbers to algebraic numbers. $\sqrt{p}$ is "divisible by $p^{1/2}$" while any finite linear combination of square roots of other primes is divisible by an integer power of $p$, i.e., is contained in an extension of $\mathbb Q$ unramified at $p$.) Generally any infinite set of algebraic numbers that you can easily write down and is not dependent for trivial reasons usually is independent. This is because the only algebraic numbers for which we have a simple notation are fractional powers, and valuation (order of divisibility) arguments work well in this case. 
Any set of algebraic numbers in which, among the elements ramified at a given prime $p$, the amount of ramification differs from element to element, will be linearly independent. (Proof: take the most ramified element in a given linear combination, express it in terms of the others, and compare valuations.) For the sake of completeness, I'm adding a worked-out solution due to F.G. Dorais from his post. We'll need two propositions from Grillet's Abstract Algebra, pages 335 and 640: Proposition: $[\mathbb{R}:\mathbb{Q}]=\mathrm{dim}_\mathbb{Q}{}\mathbb{R}=|\mathbb{R}|$ Proof: Let $(q_n)_{n\in\mathbb{N_0}}$ be an enumeration of $\mathbb{Q}$. For $r\in\mathbb{R}$, take $$A_r:=\sum_{q_n<r}\frac{1}{n!}\;\;\;\;\text{ and }\;\;\;\;A:=\{A_r;\,r\in\mathbb{R}\};$$ the series is convergent because $\sum_{q_n<r}\frac{1}{n!}\leq\sum_{n=0}^\infty\frac{1}{n!}=\exp(1)<\infty$ (recall that $\exp(x)=\sum_{n=0}^\infty\frac{x^n}{n!}$ for any $x\in\mathbb{R}$). To prove $|A|=|\mathbb{R}|$, assume $A_r=A_{s}$ and $r\neq s$. Without loss of generality $r<s$, hence $A_s=\sum_{q_n<s}\frac{1}{n!}=\sum_{q_n<r}\frac{1}{n!}+\sum_{r\leq q_n<s}\frac{1}{n!}=A_r+\sum_{r\leq q_n<s}\frac{1}{n!}$, so $\sum_{r\leq q_n<s}\frac{1}{n!}=0$, which is a contradiction, because each interval $(r,s)$ contains a rational number. To prove $A$ is $\mathbb{Q}$-independent, assume $\alpha_1A_{r_1}+\cdots+\alpha_kA_{r_k}=0\;(1)$ with $\alpha_i\in\mathbb{Q}$. We can assume $r_1>\cdots>r_k$ (otherwise rearrange the summands) and $\alpha_i\in\mathbb{Z}$ (otherwise multiply by the common denominator). Choose $n$ large enough that $r_1>q_n>r_2\;(2)$; we'll increase $n$ two more times. The equality $n!\cdot(1)$ reads $n!(\alpha_1\sum_{q_m<r_1}\frac{1}{m!}+\cdots+\alpha_k\sum_{q_m<r_k}\frac{1}{m!})=0$. 
Rearranged (via $(2)$ when $m=n$), it reads $$-\alpha_1\sum_{\substack{m<n\\q_m<r_1}}\frac{n!}{m!}-\cdots-\alpha_k\sum_{\substack{m<n\\q_m<r_k}}\frac{n!}{m!}-\alpha_1 =\alpha_1\sum_{\substack{m>n\\q_m<r_1}}\frac{n!}{m!}+\cdots+\alpha_k\sum_{\substack{m>n\\q_m<r_k}}\frac{n!}{m!}. \tag*{(3)}$$ The left hand side (LHS) of $(3)$ is an integer for any $n$. If $n$ is large enough that $(|\alpha_1|+\cdots+|\alpha_k|)\sum_{m=n+1}^\infty\frac{n!}{m!}<1$ holds (such $n$ can be found since $\sum_{m=n+1}^\infty\frac{n!}{m!}=\frac{1}{n+1}\sum_{m=n+1}^\infty\frac{1}{(n+2)\cdot\ldots\cdot m}\leq\frac{1}{n+1}\sum_{m=n+1}^\infty\frac{1}{(m-n-1)!}\leq\frac{1}{n+1}\exp(1)\rightarrow 0$ when $n\rightarrow\infty$), then the absolute value of the RHS of $(3)$ is $<1$, and yet an integer, hence $\text{RHS}(3)=0$. Thus $(3)$ reads $\alpha_1=-\sum_{i=1}^{k}\sum_{m<n,q_m<r_i}\alpha_i\frac{n!}{m!}\equiv 0\;(\mathrm{mod}\,n)$. If moreover $n>|\alpha_1|$, this means that $\alpha_1=0$. Repeat this argument to conclude that also $\alpha_2=\cdots=\alpha_k=0$. Since $A$ is a $\mathbb{Q}$-independent subset, by proposition 5.3 there exists a basis $B$ of $\mathbb{R}$ that contains $A$. Then $A\subseteq B\subseteq\mathbb{R}$ and $|A|=|\mathbb{R}|$ and the Cantor-Bernstein theorem imply $|B|=|\mathbb{R}|$, therefore $[\mathbb{R}:\mathbb{Q}]=\mathrm{dim}_\mathbb{Q}{}\mathbb{R}=|\mathbb{R}|$. $\quad\blacksquare$ Here is a simple proof that a basis $B$ of $[ \mathbb{R}:\mathbb{Q}]$ has cardinality $|\mathbb{R}|$. Clearly since $B$ is contained in $\mathbb{R}$, $|B| \le| \mathbb{R}|$. But also $\mathbb{R}=span(B)$ and thus $|\mathbb{R}| =|span(B)| \le |\mathbb{Q}^{(B)}|=|B|$, where $\mathbb{Q}^{(B)}$ denotes the set of finitely supported functions $B\to\mathbb{Q}$ (each element of the span is a finite linear combination). The last equality follows because $B$ is not finite (if it were, then $|\mathbb{R}|=|span(B)| \le |\mathbb{Q}^{(B)}|=|\mathbb{N}|$, a contradiction). Hence $ |\mathbb{R}| \le |B| \le| \mathbb{R}|$, so $|\mathbb{R}|=|B|$. Another simple proof: Take $P=X^n-p$ for a prime $p$. By Eisenstein's criterion, it is irreducible in $\mathbb Q[X]$. 
Therefore $[\mathbb Q(p^{1/n}):\mathbb Q]=n$ for every $n$, so the algebraic numbers have unbounded degree over $\mathbb Q$, and the set of algebraic numbers is of infinite dimension over $\mathbb Q$. Since $\mathbb R$ is bigger, it works for $\mathbb R$ too.
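The $A_r$ construction from Dorais's proof can be illustrated numerically. A Python sketch, truncated to the first 40 rationals of one particular enumeration (the enumeration itself is an arbitrary choice), showing that distinct values of $r$ already give distinct truncations of $A_r$:

```python
from fractions import Fraction
from math import factorial

def enumerate_rationals(N):
    # one concrete enumeration q_0, q_1, ... of the rationals:
    # sweep p/q for |p| <= k, 1 <= q <= k, skipping repeats
    seen, out, k = set(), [], 1
    while True:
        for p in range(-k, k + 1):
            for q in range(1, k + 1):
                x = Fraction(p, q)
                if x not in seen:
                    seen.add(x)
                    out.append(x)
                    if len(out) == N:
                        return out
        k += 1

def A(r, qs):
    # truncation of A_r = sum over {n : q_n < r} of 1/n!
    return sum(Fraction(1, factorial(n)) for n, q in enumerate(qs) if q < r)

qs = enumerate_rationals(40)
vals = [A(Fraction(i, 2), qs) for i in range(-4, 5)]
# injectivity of r -> A_r is already visible in the truncation,
# since some enumerated rational separates any two of these r values
assert all(v1 < v2 for v1, v2 in zip(vals, vals[1:]))
```

Exact `Fraction` arithmetic is used so the strict inequalities are not an artifact of floating-point rounding.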
The question is a little hard to define because, as pointed out in the comments, it's not clear what the "outside volume" is, or how to define it in curved space to compare it to regular space. Still, I think there is a sense in which the answer is "yes". A regular sphere has area $A = 4 \pi R^2$ and volume $V = 4 \pi R^3/3$, giving a relation $V = (1/6)\sqrt{A^3/\pi}$. We may ask whether there is some situation where a sphere would have a larger volume for a given area. If we imagine that someone standing on the surface of this sphere can only measure the area, they would infer the volume using the above formula, and they would be surprised to know that the interior volume is in fact larger. There is a situation in which this can happen. For many years, cosmologists thought that the most likely shape for our universe was that of a 3-sphere, which at a fixed cosmological time has a metric given by $$ds^2 = \frac{dr^2}{1-r^2} + r^2 (d\theta^2 + \sin^2 \theta d\varphi^2)$$ in suitable coordinates. The area of a sphere at a given radius is still $A = 4\pi R^2$, but following the standard methods of differential geometry the volume is $$V = 4\pi \int_0^R dr\ \frac{r^2}{\sqrt{1-r^2}} = 2\pi \left(\arcsin R - R \sqrt{1-R^2}\right).$$ You can see by plotting that for any $R$, this volume is larger than the one given by the usual formula, even though the area is the same. Edit in response to your edit: you ask whether it's possible to fit an elephant in a mouse sized box. Unless our conception of space radically changes (again), then the answer is clearly no. A mouse sized box is a box in which a mouse fits, i.e., its interior volume is that of a mouse. You're asking whether an object can have an interior volume larger than its interior volume; I hope it's clear that this is not possible. You can, however, fit an elephant in a box with the surface area of a mouse.
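The comparison at the end can be checked numerically. A small Python sketch using the closed form above, with a crude midpoint Riemann sum as an independent check of the integral:

```python
from math import asin, pi, sqrt

def flat_volume(R):
    # Euclidean ball: V = 4*pi*R^3/3
    return 4 * pi * R ** 3 / 3

def curved_volume(R):
    # V = 4*pi * integral_0^R r^2 / sqrt(1 - r^2) dr
    #   = 2*pi * (arcsin R - R * sqrt(1 - R^2))
    return 2 * pi * (asin(R) - R * sqrt(1 - R ** 2))

def curved_volume_numeric(R, steps=100_000):
    # midpoint Riemann sum for the same integral
    h = R / steps
    s = sum(((i + 0.5) * h) ** 2 / sqrt(1 - ((i + 0.5) * h) ** 2)
            for i in range(steps))
    return 4 * pi * h * s

for R in (0.1, 0.5, 0.9):
    # the curved sphere always encloses more volume for the same area
    assert curved_volume(R) > flat_volume(R)
```

Expanding the closed form for small $R$ gives $V = \frac{4}{3}\pi R^3 + \frac{2\pi}{5} R^5 + \cdots$, so the excess over the flat value appears at order $R^5$.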
8.2.2.1 - Formulas

Earlier in this lesson we considered confidence intervals for proportions, and the multiplier in our intervals was a value from the standard normal (i.e., \(z\)) distribution. But, what if our variable of interest is a quantitative variable and we want to estimate a population mean? We apply similar techniques when constructing a confidence interval for a mean, but now we are interested in estimating the population mean (\(\mu\)) by using the sample statistic (\(\overline{x}\)) and the multiplier is a \(t\) value. Similar to the \(z\) values that you used as the multiplier for constructing confidence intervals for population proportions, here you will use \(t\) values as the multipliers. Because \(t\) values vary depending on the number of degrees of freedom (df), you will need to use statistical software to look up the appropriate \(t\) value for each confidence interval that you construct. The degrees of freedom will be based on the sample size. Since we are working with one sample here, \(df=n-1\).

MinitabExpress – Finding t* Multipliers

To find the t* multiplier for a 98% confidence interval with 15 degrees of freedom:

On a PC: Select STATISTICS > Distribution Plot
On a Mac: Select Statistics > Probability Distributions > Distribution Plot
Select Display Probability
For Distribution, select \(t\)
For Degrees of freedom, enter 15
The default is to shade the area for a specified probability
Select Equal tails
For Probability, enter 0.02 (if there is 0.98 in the middle, then 0.02 is split equally between the left and right tails)
Click OK

This should result in output similar to the output below. Select your operating system below to see a step-by-step guide for this example. 
Let’s review some of the symbols and equations that we learned in previous lessons: Sample size \(n\) Population mean \(\mu=\frac{\sum X}{N}\) Sample mean \(\overline{x}= \frac{\sum x}{n}\) Standard error of the mean \(SE=\frac{s}{\sqrt{n}}\) Multiplier \(t^{*} \) Degrees of freedom (one group) \(df=n-1\) Recall the general form for a confidence interval: General Form of Confidence Interval \(sample\ statistic\pm(multiplier)\ (standard\ error)\) When constructing a confidence interval for a population mean the point estimate is the sample mean, \(\overline{x}\). The multiplier is taken from a \(t\) distribution. And, the standard error is equal to \(\frac{s}{\sqrt{n}}\). Confidence Interval for a Population Mean \(\overline{x} \pm t^{*} \frac{s}{\sqrt{n}}\) On the following pages we will walk through examples of constructing confidence intervals for population means by hand. Then, you will learn how to compute confidence intervals using Minitab Express.
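As a quick sketch of the formula above (Python; the data values are a hypothetical sample, and 2.131 is the standard 95% multiplier for \(df = 15\)):

```python
import math

def mean_ci(xs, t_star):
    # x-bar +/- t* * s / sqrt(n), where s is the sample standard deviation
    n = len(xs)
    xbar = sum(xs) / n
    s = math.sqrt(sum((x - xbar) ** 2 for x in xs) / (n - 1))
    se = s / math.sqrt(n)
    return xbar - t_star * se, xbar + t_star * se

# hypothetical sample of n = 16 measurements
data = [4.2, 5.1, 4.8, 5.0, 4.6, 5.3, 4.9, 4.7,
        5.2, 4.4, 5.0, 4.8, 4.5, 5.1, 4.9, 4.6]
lo, hi = mean_ci(data, 2.131)
print(lo, hi)
```

The interval is symmetric about \(\overline{x}\), as the formula requires.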
Let's step back and ask what the determinant is and why it is useful. Then we'll get to this particular property. Permit me to introduce a new product of vectors, called the wedge product. If $a, b$ are vectors, then their wedge product is $a \wedge b$. The result is not a vector, but instead, we interpret it geometrically as an oriented plane--exactly the plane perpendicular to $a \times b$, as a matter of fact--but we can continue wedging--for instance, forming $a \wedge b \wedge c$, which represents a volume. Of course, in 3d space, there is only one unit volume (you could call it $\hat x \wedge \hat y \wedge \hat z$, but let me call it $i$ for short). All other volumes are scalar multiples of this volume, at least in terms of magnitude and orientation (you may be thinking "orientation?" I submit that a coordinate system following the right-hand rule is oppositely oriented to one following a left-hand rule, and the unit volumes formed by wedging their unit vectors are oppositely oriented). It is for this reason that the volume elements are often called pseudoscalars, because they are only different from scalars in that they have orientations where scalars do not. How does this relate to matrices? Well, matrices are used to represent linear operators on vectors, but these operators can act on wedge products of vectors or pseudoscalars too. The "matrices" you use to represent these extensions of the original operator are different from the original matrix. We define the relationships by a simple rule. If $\underline T$ is a linear operator on a vector, then we define $\underline T(a \wedge b) = \underline T(a) \wedge \underline T(b)$, and so on (note that the cross product does not have this nice property except under rotations). But again, we said that there is only one unit pseudoscalar ($i$), so the action of a linear operator on $i$ must be some scalar multiple of $i$.
That is, if $\alpha$ is a scalar, $$\underline T(i) = \alpha i$$ We define this number $\alpha$ to be the determinant, telling us geometrically how the unit volume is shrunk or dilated (or changes orientation) under the action of a linear operator. You can find the determinant of a linear operator by writing down the matrix representation and wedging the vectors that appear there. This is perfectly well-founded. You just need to know that $a \wedge b = - b \wedge a$--vectors anticommute under the wedge--and that the wedge is associative. Knowing these properties makes it possible to do computations with the wedge. (If you're puzzled why wedging vectors gives the determinant, feel free to ask and I'll clarify this point.) So let's take three vectors $f, g, h$ that appear in the matrix representation of a linear operator and wedge them to find the determinant. Let $f = f^x e_x + f^y e_y + f^z e_z$ and so on. We can then write out the following: $$\begin{align*}f \wedge g \wedge h = f^x e_x \wedge (g \wedge h) + f^y e_y \wedge (g \wedge h) + f^z e_z \wedge (g \wedge h)\end{align*}$$ I've expanded the wedge product through linearity (the distributive property, which I failed to mention earlier but is also valid for the wedge). This is the foundation for the technique you've come across, which is called Laplace expansion, or cofactor expansion, or expansion by minors. The antisymmetry of the wedge means that we can write $f^x e_x \wedge (g \wedge h)$ as $$f^x e_x \wedge (g \wedge h) = f^x e_x \wedge (g^y e_y + g^z e_z) \wedge (h^y e_y + h^z e_z)$$ Why? Because $e_x \wedge e_x = 0$ always, so if any $e_x$ appeared in $g$ or $h$, it would be irrelevant. We can ignore them, and instead, we find the "determinant" of a linear operator on the $yz$-plane. Thus, the method of expansion by minors follows by recursively expanding a single vector through linearity to reduce the subsequent wedge products to a wedge product (a "determinant") you might already know.
The method has its roots in the relationship between the determinant and volumes: every term that makes up the determinant must contain one and only one component of some vector from each of the coordinate directions--no more, no less.
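The recursive expansion by minors described above can be written directly in code. This is a sketch of the textbook recursion (Python; `det` is an illustrative name, and this is the pedagogical algorithm, not an efficient one):

```python
def det(m):
    # Laplace (cofactor) expansion along the first row; m is a square list of rows
    n = len(m)
    if n == 1:
        return m[0][0]
    total = 0
    for j in range(n):
        # the minor drops row 0 and column j; the alternating sign is exactly
        # the antisymmetry e_x ^ e_y = -e_y ^ e_x of the wedge product
        minor = [row[:j] + row[j + 1:] for row in m[1:]]
        total += (-1) ** j * m[0][j] * det(minor)
    return total

print(det([[1, 2], [3, 4]]))                    # -2
print(det([[2, 0, 0], [0, 3, 0], [0, 0, 4]]))   # 24: the unit volume scaled by 2*3*4
```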
Suppose $1<p<\infty$ and $\Omega$ is an open bounded set in $\mathbb R^n$ with nice boundary (say Lipschitz or even better). Let $(f_j)_j \subset W^{1,p}(\Omega)$ s.t. $f_j \rightharpoonup f$ weakly in $W^{1,p}(\Omega)$. Is it true that $f_j \to f$ strongly in $L^p(\Omega)$? For sure it is true that $f_j \rightharpoonup f$ and $\nabla f_j \rightharpoonup\nabla f$. Moreover, we should have the strong convergence of a subsequence thanks to reflexivity: $(f_j)_j$ is bounded, hence it has a strongly convergent subsequence in $L^p(\Omega)$ because the embedding $W^{1,p} \to L^p$ is (always) compact. Thanks.
Kaon femtoscopy in Pb-Pb collisions at $\sqrt{s_{\rm{NN}}}$ = 2.76 TeV (Elsevier, 2017-12-21) We present the results of three-dimensional femtoscopic analyses for charged and neutral kaons recorded by ALICE in Pb-Pb collisions at $\sqrt{s_{\rm{NN}}}$ = 2.76 TeV. Femtoscopy is used to measure the space-time ... Anomalous evolution of the near-side jet peak shape in Pb-Pb collisions at $\sqrt{s_{\mathrm{NN}}}$ = 2.76 TeV (American Physical Society, 2017-09-08) The measurement of two-particle angular correlations is a powerful tool to study jet quenching in a $p_{\mathrm{T}}$ region inaccessible by direct jet identification. In these measurements pseudorapidity ($\Delta\eta$) and ... Online data compression in the ALICE O$^2$ facility (IOP, 2017) The ALICE Collaboration and the ALICE O2 project have carried out detailed studies for a new online computing facility planned to be deployed for Run 3 of the Large Hadron Collider (LHC) at CERN. Some of the main aspects ... Evolution of the longitudinal and azimuthal structure of the near-side peak in Pb–Pb collisions at $\sqrt{s_{\rm NN}}=2.76$ TeV (American Physical Society, 2017-09-08) In two-particle angular correlation measurements, jets give rise to a near-side peak, formed by particles associated to a higher $p_{\mathrm{T}}$ trigger particle. Measurements of these correlations as a function of ... J/$\psi$ elliptic flow in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 5.02 TeV (American Physical Society, 2017-12-15) We report a precise measurement of the J/$\psi$ elliptic flow in Pb-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV with the ALICE detector at the LHC. The J/$\psi$ mesons are reconstructed at mid-rapidity ($|y| < 0.9$) ...
Enhanced production of multi-strange hadrons in high-multiplicity proton-proton collisions (Nature Publishing Group, 2017) At sufficiently high temperature and energy density, nuclear matter undergoes a transition to a phase in which quarks and gluons are not confined: the quark–gluon plasma (QGP)1. Such an exotic state of strongly interacting ... K$^{*}(892)^{0}$ and $\phi(1020)$ meson production at high transverse momentum in pp and Pb-Pb collisions at $\sqrt{s_\mathrm{NN}}$ = 2.76 TeV (American Physical Society, 2017-06) The production of K$^{*}(892)^{0}$ and $\phi(1020)$ mesons in proton-proton (pp) and lead-lead (Pb-Pb) collisions at $\sqrt{s_\mathrm{NN}} =$ 2.76 TeV has been analyzed using a high luminosity data sample accumulated in ... Production of $\Sigma(1385)^{\pm}$ and $\Xi(1530)^{0}$ in p-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV (Springer, 2017-06) The transverse momentum distributions of the strange and double-strange hyperon resonances ($\Sigma(1385)^{\pm}$, $\Xi(1530)^{0}$) produced in p-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV were measured in the rapidity ... Charged–particle multiplicities in proton–proton collisions at $\sqrt{s}=$ 0.9 to 8 TeV, with ALICE at the LHC (Springer, 2017-01) The ALICE Collaboration has carried out a detailed study of pseudorapidity densities and multiplicity distributions of primary charged particles produced in proton-proton collisions, at $\sqrt{s} =$ 0.9, 2.36, 2.76, 7 and ... Energy dependence of forward-rapidity J/$\psi$ and $\psi(2S)$ production in pp collisions at the LHC (Springer, 2017-06) We present ALICE results on transverse momentum ($p_{\rm T}$) and rapidity ($y$) differential production cross sections, mean transverse momentum and mean transverse momentum square of inclusive J/$\psi$ and $\psi(2S)$ at ...
Production of light nuclei and anti-nuclei in $pp$ and Pb-Pb collisions at energies available at the CERN Large Hadron Collider (American Physical Society, 2016-02) The production of (anti-)deuteron and (anti-)$^{3}$He nuclei in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV has been studied using the ALICE detector at the LHC. The spectra exhibit a significant hardening with ... Forward-central two-particle correlations in p-Pb collisions at $\sqrt{s_{\rm NN}}$ = 5.02 TeV (Elsevier, 2016-02) Two-particle angular correlations between trigger particles in the forward pseudorapidity range ($2.5 < |\eta| < 4.0$) and associated particles in the central range ($|\eta| < 1.0$) are measured with the ALICE detector in ... Measurement of D-meson production versus multiplicity in p-Pb collisions at $\sqrt{s_{\rm NN}}=5.02$ TeV (Springer, 2016-08) The measurement of prompt D-meson production as a function of multiplicity in p–Pb collisions at $\sqrt{s_{\rm NN}}=5.02$ TeV with the ALICE detector at the LHC is reported. D$^0$, D$^+$ and D$^{*+}$ mesons are reconstructed ... Measurement of electrons from heavy-flavour hadron decays in p–Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV (Elsevier, 2016-03) The production of electrons from heavy-flavour hadron decays was measured as a function of transverse momentum ($p_{\rm T}$) in minimum-bias p–Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV with ALICE at the LHC for $0.5 ... Direct photon production in Pb-Pb collisions at $\sqrt{s_{NN}}$=2.76 TeV (Elsevier, 2016-03) Direct photon production at mid-rapidity in Pb-Pb collisions at $\sqrt{s_{\rm NN}} = 2.76$ TeV was studied in the transverse momentum range $0.9 < p_{\rm T} < 14$ GeV/$c$. Photons were detected via conversions in the ALICE ...
Multi-strange baryon production in p-Pb collisions at $\sqrt{s_\mathbf{NN}}=5.02$ TeV (Elsevier, 2016-07) The multi-strange baryon yields in Pb--Pb collisions have been shown to exhibit an enhancement relative to pp reactions. In this work, $\Xi$ and $\Omega$ production rates have been measured with the ALICE experiment as a ... $^{3}_{\Lambda}\mathrm H$ and $^{3}_{\bar{\Lambda}} \overline{\mathrm H}$ production in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV (Elsevier, 2016-03) The production of the hypertriton nuclei $^{3}_{\Lambda}\mathrm H$ and $^{3}_{\bar{\Lambda}} \overline{\mathrm H}$ has been measured for the first time in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV with the ALICE ... Multiplicity dependence of charged pion, kaon, and (anti)proton production at large transverse momentum in p-Pb collisions at $\sqrt{s_{\rm NN}}$= 5.02 TeV (Elsevier, 2016-09) The production of charged pions, kaons and (anti)protons has been measured at mid-rapidity ($-0.5 < y < 0$) in p-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV using the ALICE detector at the LHC. Exploiting particle ... Jet-like correlations with neutral pion triggers in pp and central Pb–Pb collisions at 2.76 TeV (Elsevier, 2016-12) We present measurements of two-particle correlations with neutral pion trigger particles of transverse momenta $8 < p_{\mathrm{T}}^{\rm trig} < 16 \mathrm{GeV}/c$ and associated charged particles of $0.5 < p_{\mathrm{T}}^{\rm ... Centrality dependence of charged jet production in p-Pb collisions at $\sqrt{s_\mathrm{NN}}$ = 5.02 TeV (Springer, 2016-05) Measurements of charged jet production as a function of centrality are presented for p-Pb collisions recorded at $\sqrt{s_{\rm NN}} = 5.02$ TeV with the ALICE detector. Centrality classes are determined via the energy ...
I am trying to understand the proof of Prop. 5.1 (page 64) from Fulton and Harris' representation theory. I am unable to prove one statement in the proof, which is: Let $V$ be an irreducible representation of a finite group $G$. Let $W$ be the restriction of $V$ to a normal subgroup $H$ of index $2$. Now $W = W' \oplus W''$, where $W', W''$ are irreducible and conjugate but not isomorphic. I am unable to understand why they are not self-conjugate. The proof says: since $W$ is self-conjugate, if $W', W''$ were self-conjugate then $V$ wouldn't be irreducible. I am trying to assume $W',W''$ are self-conjugate and produce a decomposition of $V$, but I couldn't. Please help me in filling in the details. UPDATE: My idea is: we know that $W', W''$ are different irreducible representations (if not, $\langle W,W\rangle=4$, a contradiction, as its value is $2$), and $W', W''$ are conjugate by an element of $G\setminus H$. For any $g \in G$, $g.W' \subset W'$; if not, the map $g$ gives an isomorphism between $W'$ and $W''$, but we started with two different representations.
As pointed out in the comments, you certainly can't do this for any dense linear order of cardinality $\kappa$. However, if $D$ is a dense linear order of cardinality $\kappa$ in which every interval has cardinality $\kappa$ (i.e. for all $a,b\in D$, $(a,b) = \{c\in D\mid a < c < b\}$ has cardinality $\kappa$), you can just take the equivalence relation to be the equality relation, and you're done. And it's easy to build such a dense linear order by hand, for any infinite $\kappa$: Start with any order $M_0$ of size $\kappa$. Enumerate the pairs $(a,b)$ with $a<b$; there are $\kappa$-many of these. For each such pair, add $\kappa$-many new elements between $a$ and $b$. After having done this for every pair, you have an order $M_1$ of cardinality $\kappa$ with the property that every interval with endpoints in $M_0\subseteq M_1$ has cardinality $\kappa$. Now repeat, getting orders $M_2$, $M_3$, etc. The union $\bigcup_{n\in \omega} M_n$ has the desired property. Great! Except that I see no connection at all between the question in your first paragraph and the goal in your second paragraph. The construction above will not help you get a "rich" model. (By the way, where did you find this terminology? I haven't seen it before.) Let's forget about the equivalence relation for a moment and just think about the linear order. A rich model of the theory of linear order must be a dense linear order without endpoints (a model of the theory $\text{DLO}$), since density and the lack of endpoints can be characterized by the ability to extend any partial isomorphism from a finite linear order by one more point (between two elements of the domain, greater than all elements of the domain, or less than all elements of the domain). And indeed, the unique countable dense linear order without endpoints is a rich model of the theory of linear orders. 
But when we move to uncountable linear orders of size $\kappa>\aleph_0$, your original question is about having large intervals (between any two points $a$ and $b$, there are $\kappa$-many points), while richness is about filling cuts: for any sets $A$ and $B$ with $|A\cup B|<\kappa$ and $a<b$ for all $a\in A$ and $b\in B$, there is some point $c$ with $a<c<b$ for all $a\in A$ and $b\in B$. Here $A$ and $B$ are the set of points in $M$ which are less than and greater than the new point $m$, respectively. It's much harder to fill all cuts than to have large intervals! Actually, we can rephrase richness in more model-theoretic terminology: since $\text{DLO}$ has quantifier elimination and every linear order embeds in a model of $\text{DLO}$, a model $M\models \text{DLO}$ is rich if and only if it is saturated. The unique countable model $(\mathbb{Q},\leq)$ is saturated, but you can't prove in $\text{ZFC}$ that $\text{DLO}$ has any uncountable saturated models. You can build a saturated model if you assume $2^\kappa = \kappa^+$ for some cardinal (an instance of GCH) or the existence of inaccessible cardinals. The situation is exactly the same for your theory $T_0$ of linear orders equipped with an equivalence relation. The finite models of $T_0$ form a Fraïssé class, and its Fraïssé limit $M$ is the unique countable rich model of $T_0$. More generally, a model of $T_0$ is rich if and only if it is a saturated model of the theory $T = \text{Th}(M)$, and we cannot prove in $\text{ZFC}$ that such models exist, other than the countable one.
Probably not. After some more consideration, I'm less confident that the proposal could work. There are two reasons: No guarantee of focusing at any one point, and an inability to control the parameters of the lens. 1. A gravitational lens has no focal point. The analogy to a traditional lens does eventually break down. One reason is that gravitational lenses do not have focal points! They behave differently than the lenses you might be used to dealing with in optics; instead, gravitational lenses have focus lines. This means that if there is deviation in the positions of your lasers, you are definitely going to have problems. The big issue is that the effects of lensing are not linear. A light ray twice as far from the lens will not experience twice the lensing. In optics, you do see linear dependence for lensing, which is what allows you to construct lenses with focal points. That's what makes things like glasses, cameras and telescopes possible and effective. If a laser is offset the wrong distance from the central axis (the large black line in the figure below), its beam will reach the focus line at a different point than that of a laser positioned correctly. This means that your lasers will not, in fact, combine constructively. There's even the possibility for some destructive interference. 2. You need just the right setup for your lens. I put together a diagram of a typical gravitational lensing scenario, assuming that the lensing object is a point mass: I'm using some standard gravitational lensing notation: $\eta$: The initial distance of the source from some line connecting the source and the lens. $\xi$: The distance where the beam of light passes closest to the lens. Normally, $\xi\neq\eta$. $D_L$: The distance from the target to the lens. $D_{LS}$: The distance from the lens to the source. The remaining three angles ($\alpha$, $\beta$, $\theta$) should be apparent. The path of the light is in yellow. 
Your question features a ring of lasers spaced at some common distance from the center (I assume) with some radius $a$. I'm also assuming that the line connecting the ring and the target is perpendicular to the plane of the ring, meaning the initial path of the light is parallel to the line. Therefore, we have a special case where $\xi=\eta$. One important quantity is $\tilde{\alpha}$, given by$$\tilde{\alpha}=\frac{4GM(\xi)}{c^2\xi}\tag{1}$$where $M(\xi)$ is the mass contained within $\xi$. If we assume that $a=\xi$ is much greater than the radius of this object, then $M(\xi)=M$, the mass of the object. The angle $\alpha$ itself is$$\alpha=\left(\frac{D_{LS}}{D_S}\right)\tilde{\alpha}\tag{2}$$where $D_S=D_L+D_{LS}$, the distance to the target, and $\theta$ is$$\theta=\beta+\alpha\tag{3}$$Trigonometry means that$$\tan(\beta)=\frac{\xi}{D_S},\quad\tan(\theta)=\frac{\eta}{D_L}=\frac{\xi}{D_L}$$We can assume that $\beta$ and $\theta$ are small because $D_S\gg\xi$ and $D_L\gg\xi$, and so, by the small-angle approximation, $\tan(x)\approx x$:$$\beta\approx\frac{\xi}{D_S},\quad\theta\approx\frac{\xi}{D_L}\tag{4}$$Inserting all of this into $\text{(3)}$ gives us, for our case where $\xi=\eta=a$,$$\boxed{\frac{a}{D_L}=\frac{a}{D_S}+\frac{D_{LS}}{D_S}\frac{4GM}{c^2a}}\tag{5}$$You just need to find an object with approximately the right parameters. I'd advise finding something relatively small. Galaxies are not good because we've assumed that all of these lasers point in the same direction (and don't spread out too much - though that's not likely). If a galaxy was the lensing object, you'd need $2a>d$, where $d$ is the diameter of the galaxy, and that's not really feasible in your scenario! If the lasers could be pointed independently, then maybe you'd have something, but it doesn't seem like that's the case - and anyway, if they could, you could then simply point them all at the target without using any lensing at all.
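As a sketch, equation $\text{(5)}$ can be solved numerically for $D_L$ (Python, SI units; the solar-mass lens and $a = 10^{11}\text{ m}$ ring are illustrative assumptions, not values from the question). For this $\xi=\eta=a$ special case the numerics confirm that $\text{(5)}$ rearranges to $D_L = c^2a^2/(4GM)$, independent of $D_{LS}$:

```python
import math

# illustrative assumptions: a solar-mass point lens and a 1e11 m laser ring
G = 6.674e-11
c = 2.998e8
M = 1.989e30          # lens mass (one solar mass)
a = 1.0e11            # ring radius, so xi = eta = a

def residual(D_L, D_LS):
    # left side minus right side of equation (5)
    D_S = D_L + D_LS
    return a / D_L - a / D_S - (D_LS / D_S) * 4 * G * M / (c**2 * a)

def solve_D_L(D_LS, lo=1e12, hi=1e24):
    # the residual is positive for small D_L and negative for large D_L,
    # so bisect (geometrically, since the bracket spans many decades)
    for _ in range(200):
        mid = math.sqrt(lo * hi)
        if residual(mid, D_LS) > 0:
            lo = mid
        else:
            hi = mid
    return math.sqrt(lo * hi)

# equation (5) reduces to D_L = c^2 a^2 / (4 G M), independent of D_LS
print(solve_D_L(1.0e20), c**2 * a**2 / (4 * G * M))
```

With these numbers the lens must sit at $D_L \sim 10^{18}\text{ m}$ (a couple hundred light-years), which shows how tightly the geometry constrains you.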
However, I believe that's not the case with the sort of beams you're describing. You're pretty limited by this method because you need three collinear points: The center of your array of lasers, the target, and the lens itself. We all know that Space is Big; more to the point, it's pretty empty when it comes to objects like stars. If you're using gravitational lensing, you really need to pick your targets wisely. $\text{(5)}$ is incredibly important (and, I think, accurate), because it must be satisfied for this to work. Addendum I'm a little concerned with the idea of giant space lasers, especially when used on the scale you're talking about. Let's assume that the beams are Gaussian (see also here), that is, the "radius" of the beam is given by$$w(z)=w_0\sqrt{1+\left(\frac{z}{z_R}\right)^2}\tag{6}$$where $z_R$ is a scale length called the Rayleigh range, calculated in terms of the wavelength of the beam, $\lambda$, by$$z_R=\frac{\pi w_0^2}{\lambda}\tag{7}$$I have no idea what $w_0$ and $\lambda$ should be. Let's make each laser really big and say that $w_0=10^3\text{ m}$ and $\lambda=500\text{ nm}$. This is a gigantic green space laser, kind of like the Death Star's. This means that $z_R\simeq6.28\times10^{12}\text{ m}$, roughly $42\text{ AU}$. Intergalactic space is really big, as are galaxies. Let's say that on the first test run, the civilization just wants to shoot the laser from one end of their galaxy to the other. If the galaxy is roughly the size of the Milky Way, then it's maybe $100,000$ light-years long, or about $9.461\times10^{20}\text{ m}$. Plugging this all into $\text{(6)}$, I get$$w\left(\text{Other side of galaxy}\right)=10^3\sqrt{1+\left(\frac{9.461\times10^{20}\text{ m}}{6.28\times10^{12}\text{ m}}\right)^2}\text{ m}=1.51\times10^{11}\text{ meters}$$which is about $1\text{ AU}$ - actually a lot smaller than I thought, though still very big.
At distances this large, by the way, the $1$ inside the square root is negligible, and roughly $w(z)\propto z$. Therefore, multiplying $z$ by, say, $42$ should multiply $w(z)$ by $42$. The nearest major galaxy, Andromeda, is about 2.5 million light-years away, on the order of $10$ times the Milky Way's diameter. Therefore, if the beam were sent from the Milky Way to Andromeda, it should have a $w\left(\text{Andromeda}\right)$ of perhaps $30\text{ AU}$. Strange. It would seem, then, that this sort of beam could not grow very large - at least, not to the size of the galaxy, using these parameters. That's fortunate, because if it grew that large, the intensity would likely be so small that it wouldn't do much good at all! Basically, only use your Death-Star-green-Gaussian-Nicoll-Dyson-beam-space-lasers on smaller targets - like maybe planets or other stars. Or Alderaan.
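The arithmetic in equations $\text{(6)}$ and $\text{(7)}$ is easy to check numerically (Python; the waist and wavelength are the illustrative values assumed above):

```python
import math

def beam_radius(z, w0, wavelength):
    # w(z) = w0 * sqrt(1 + (z / z_R)^2) with z_R = pi * w0^2 / lambda  (eqs. (6), (7))
    z_R = math.pi * w0**2 / wavelength
    return w0 * math.sqrt(1 + (z / z_R) ** 2)

w0 = 1.0e3              # 1 km waist
lam = 500e-9            # green light
milky_way = 9.461e20    # ~100,000 light-years in metres
print(beam_radius(milky_way, w0, lam))  # of order 10^11 m
```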
Ratio of Densities of Planet Core and Mantle Problem: A planet of mass \[M\] and radius \[R\] consists of two layers of material, each with a constant density. The core contains 60% of the planet's mass, but has a radius equal to 30% of the planet's radius. The mantle constitutes the rest of the mass. What is the ratio (density of core) : (density of mantle)? We can draw up the table below. Core: mass \[0.6M\], volume \[\frac{4}{3} \pi (0.3R)^3 = \frac{4}{3} \pi \times 0.027R^3\], density \[\frac{0.6M}{\frac{4}{3} \pi \times 0.027R^3}\]. Mantle: mass \[0.4M\], volume \[\frac{4}{3} \pi R^3 - \frac{4}{3} \pi (0.3R)^3 = \frac{4}{3} \pi \times 0.973R^3\], density \[\frac{0.4M}{\frac{4}{3} \pi \times 0.973R^3}\]. The ratio of densities is therefore \[\frac{0.6M}{\frac{4}{3} \pi \times 0.027R^3} : \frac{0.4M}{\frac{4}{3} \pi \times 0.973R^3}\], which simplifies to \[\frac{200}{9} : \frac{400}{973}\], roughly \[54 : 1\].
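The arithmetic in the table can be checked with exact fractions (a small Python sketch; the common factor \[\frac{4}{3}\pi R^3\] is dropped since it cancels in the ratio):

```python
from fractions import Fraction

# masses in units of M, volumes in units of R^3; the 4*pi/3 cancels in the ratio
core_mass, mantle_mass = Fraction(6, 10), Fraction(4, 10)
core_vol = Fraction(3, 10) ** 3      # (0.3)^3 = 0.027
mantle_vol = 1 - core_vol            # 1 - 0.027 = 0.973

rho_core = core_mass / core_vol
rho_mantle = mantle_mass / mantle_vol
print(rho_core, rho_mantle, rho_core / rho_mantle)  # 200/9 400/973 973/18
```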
A presentation $\langle S\mid R\rangle$ is a Dehn presentation if for some $n\in\Bbb N$ there are words $u_1,\cdots,u_n$ and $v_1,\cdots, v_n$ such that $R=\{u_iv_i^{-1}\}$, $|u_i|>|v_i|$, and every nonempty word $w$ in $(S\cup S^{-1})^\ast$ representing the trivial element of the group contains some $u_i$ as a subword. If you have such a presentation there's a trivial algorithm to solve the word problem: take a word $w$, check if it has some $u_i$ as a subword, in that case replace it by $v_i$, and keep doing so until you hit the trivial word or find no $u_i$ as a subword. There is good motivation for such a definition here. So I don't know how to do it precisely for hyperbolic groups, but if $S$ is a surface of genus $g \geq 2$, to get a geodesic representative for a class $[\alpha] \in \pi_1(S)$ where $\alpha$ is an embedded loop, one lifts it to $\widetilde{\alpha}$ in $\Bbb H^2$ by the locally isometric universal covering, and then the deck transformation corresponding to $[\alpha]$ is an isometry of $\Bbb H^2$ which preserves the embedded arc $\widetilde{\alpha}$. It has to be an isometry fixing a geodesic $\gamma$ with endpoints at the boundary being the same as the endpoints of $\widetilde{\alpha}$. Consider the homotopy of $\widetilde{\alpha}$ to $\gamma$ by straight-line homotopy, with the straight lines being hyperbolic geodesics. This is $\pi_1(S)$-equivariant, so it projects to a homotopy of $\alpha$ and the image of $\gamma$ (which is a geodesic in $S$) downstairs, and you have your desired representative. I don't know how to interpret this coarsely in $\pi_1(S)$. @anakhro Well, they print in bulk, and on really cheap paper, almost transparent and very thin, and an offset machine is really cheaper per page than a printer, you know, but you should be printing in bulk, it's all economy of scale. @ParasKhosla Yes, I am Indian, and trying to get into some good master's program in math.
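The trivial algorithm above is easy to sketch in code (Python; the single rewrite rule used here is a toy assumption for illustration, not a Dehn presentation of an actual hyperbolic group):

```python
def dehn_reduce(word, rules):
    # rules is a list of pairs (u, v) with |u| > |v|; repeatedly replace the
    # first occurrence of some u_i by v_i.  Each step shortens the word, so the
    # loop terminates; for a genuine Dehn presentation the result is the empty
    # word exactly when the input represents the trivial element.
    changed = True
    while changed:
        changed = False
        for u, v in rules:
            if u in word:
                word = word.replace(u, v, 1)
                changed = True
                break
    return word

rules = [("abab", "")]                 # toy rule u_1 = abab, v_1 trivial
print(dehn_reduce("abababab", rules))  # reduces all the way to ""
print(dehn_reduce("ababab", rules))    # gets stuck at "ab"
```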
Algebraic graph theory is a branch of mathematics in which algebraic methods are applied to problems about graphs. This is in contrast to geometric, combinatoric, or algorithmic approaches. There are three main branches of algebraic graph theory, involving the use of linear algebra, the use of group theory, and the study of graph invariants. The first branch of algebraic graph theory involves the study of graphs in connection with linear algebra. Especially, it studies the spectrum of the adjacency matrix, or the Lap... I can probably guess that they are using symmetries and permutation groups on graphs in this course. For example, orbits and studying the automorphism groups of graphs. @anakhro I have heard really good things about Palka. Also, if you do not worry about a little sacrifice of rigor (e.g. counterclockwise orientation based on your intuition, rather than on winding numbers, etc.), Howie's Complex Analysis is good. It is teeming with typos here and there, but you will be fine, I think. Also, this book contains all the solutions in the appendix! Got a simple question: I gotta find the kernel of the linear transformation $F(P)=xP^{''}(x) + (x+1)P^{'''}(x)$ where $F: \mathbb{R}_3[x] \to \mathbb{R}_3[x]$, so I think it would be just $\ker (F) = \{ ax+b : a,b \in \mathbb{R} \}$ since only polynomials of degree at most 1 would give the zero polynomial in this case @chandx you're looking for all the $G = P''$ such that $xG + (x+1)G' = 0$; if $G \neq 0$ you can solve the DE to get $G'/G = -x/(x+1) = -1 + 1/(x+1) \implies \ln G = -x + \ln(1+x) + C \implies G = C(1+x)e^{-x}$, which is obviously not a polynomial, so $G = 0$ and thus $P = ax + b$ could you suppose that $\operatorname{deg} P \geq 2$ and show that you wouldn't have nonzero polynomials? Sure.
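That kernel computation can be brute-force checked by applying $F$ to the basis of $\mathbb{R}_3[x]$ (plain Python, representing a polynomial by its coefficient list; all names are illustrative):

```python
def deriv(p):
    # p[i] is the coefficient of x^i
    return [i * c for i, c in enumerate(p)][1:] or [0]

def add(p, q):
    n = max(len(p), len(q))
    p, q = p + [0] * (n - len(p)), q + [0] * (n - len(q))
    return [a + b for a, b in zip(p, q)]

def F(p):
    # F(P) = x P''(x) + (x + 1) P'''(x)
    p2 = deriv(deriv(p))
    p3 = deriv(p2)
    x_p2 = [0] + p2              # multiply P'' by x
    xp1_p3 = add([0] + p3, p3)   # multiply P''' by (x + 1)
    return add(x_p2, xp1_p3)

# apply F to the basis 1, x, x^2, x^3 of R_3[x]
for k in range(4):
    print(k, F([0] * k + [1]))   # only k = 0 and k = 1 give the zero polynomial
```

Only $1$ and $x$ map to zero, confirming $\ker(F) = \{ax + b\}$.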
Keep in mind that existential and universal types are rather different. It is constructive logic, not classical logic, and in constructive logic $\forall$ and $\exists$ are not as related as they are in classical logic. $\forall x:A. B(x)$ is the type of programs that receive an object of type $A$ and return an object of type $B(x)$. The important thing here is that the type $B(x)$ depends on $x$ and is not the same for all $x$. It can vary depending on what $x$ is. For one input $x$ we might output an integer. For another one we might output a real number. For yet another one we might output a function over real numbers. If $B(x)$ doesn't vary with $x$ then you can use $A\to B$ in its place, which is the type of functions from $A$ to $B$. $\exists x:A. B(x)$ is the dependent version of (constructive) disjunction. You can think of the constructive disjunction $A \lor B$ of two types $A$ and $B$ as the disjoint union of $A$ and $B$. $\exists x:A.B(x)$ is the disjoint union of a collection of types $B(x)$ indexed by $x:A$. The fact that the type $B(x)$ can vary depending on the value of $x:A$ makes it a dependent type. Compare with the case where $B$ does not depend on $x:A$: $\exists x:A. B$. We are taking one copy of the same $B$ for each $x:A$. This is isomorphic to $A \times B$. Now you can ask why we need dependent product and sum types. Because they give us more expressive power. Alternatively, we could ignore the types completely and have untyped type theory/functional programming. But that removes the benefits of having types in the first place, e.g. you will not know if all programs will always terminate (strong normalization). See Lambda Cube and Dependent Type. I think a good way to understand dependent types well is to look at the rules for introducing and eliminating the dependent types in Martin-Löf's type theory. The main point of dependent types is: we want to remain inside a nice typed theory for various reasons (e.g.
avoiding bugs, automatic proof of termination, etc.).We don't want to go to something like untyped lambda calculus wherewe can make expressing like those you stated andway more powerful stuff.We can say that dependent types were invented to allow expressing more thingswhile still remaining inside a nice type theory.
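The $\Pi$/$\Sigma$ intuition above can be mimicked, very loosely, in an untyped language; this is only an illustration, not real dependent typing, and all names here (`B`, `dep_fn`, `make_pair`) are invented for the sketch:

```python
def B(n):
    # the "type family": B(n) = int for even n, str for odd n
    return int if n % 2 == 0 else str

def dep_fn(n):
    # an inhabitant of the Pi-type  forall n:Nat. B(n):
    # the type of the result varies with the input
    return n // 2 if n % 2 == 0 else "odd:" + str(n)

def make_pair(n, b):
    # an inhabitant of the Sigma-type  exists n:Nat. B(n):
    # a pair whose second component must live in B(n)
    if not isinstance(b, B(n)):
        raise TypeError("second component must have type B(n)")
    return (n, b)
```

With a constant family `B` this pair construction degenerates to the ordinary product $A \times B$, matching the isomorphism mentioned above.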
I am using the definition in "Introduction to Smooth Manifolds" by Lee. Defn: Suppose $M$ is a smooth manifold. An embedded submanifold of $M$ is a subset $S\subset M$ which is a manifold in the subspace topology, endowed with a smooth structure with respect to which the inclusion map $i:S\to M$ is a smooth embedding. My question has to do with showing that $\{(x,y):x=|y|=\sqrt{y^2}\}$ is not a smooth submanifold of $\mathbb{R}^2$, but more specifically with the inclusion map. I suppose a second part to this question is whether I am thinking about this the correct way, or if I am completely wrong. I want to show that the inclusion map isn't a smooth embedding (hopefully this is the correct approach). I am wondering if I can define $i:S\to M$ by $$i(x,y)=(x,y)=(|y|,y)$$ so that my derivative is $$\begin{pmatrix}1 & \frac{y}{\sqrt{y^2}}\\ 0 & 1\end{pmatrix}$$ My conclusion is that this doesn't exist at $(0,0)$, so the inclusion map isn't a smooth embedding. I understand that this question is similar to some from the past, but I haven't found any regarding how to define the inclusion map to break the definition. Thanks!
Yesterday, I got a math problem as follows. Determine with proof whether $\tan 1^\circ$ is an irrational or a rational number. My solution (Method A) I solved it in the following way. I can prove that $\tan 3^\circ$ is irrational; the proof will be given later because it takes much time to type. Let's assume that $\tan 1^\circ$ is a rational number. As a result, $$ \tan 2^\circ = \frac{2 \tan 1^\circ }{1-\tan^2 1^\circ}$$ becomes a rational number. Here we don't know whether $\tan 2^\circ$ is irrational or rational. Let's consider each case separately as follows: If $\tan 2^\circ$ is actually irrational then a contradiction appears in $\tan 2^\circ = \frac{2 \tan 1^\circ}{1 - \tan^2 1^\circ}$. Thus $\tan 1^\circ$ cannot be rational. If $\tan 2^\circ$ is actually rational then we can proceed to evaluate $$\tan 3^\circ = \frac{\tan 2^\circ + \tan 1^\circ}{1 - \tan 2^\circ \tan 1^\circ}$$ A contradiction again appears here because I know that $\tan 3^\circ$ is actually irrational. Thus $\tan 1^\circ$ cannot be rational. In both cases, whether $\tan 2^\circ$ is rational or not, we conclude that $\tan 1^\circ$ is irrational. End. My friend's solution (Method B) Assume that $\tan 1^\circ$ is a rational number. If $\tan n^\circ$ ($1\leq n\leq 88$, $n$ is an integer) is a rational number, then $$ \tan (n+1)^\circ = \frac{\tan n^\circ + \tan 1^\circ}{1-\tan n^\circ \tan 1^\circ}$$ becomes a rational number. Consequently, $\tan N^\circ$ ($1\leq N\leq 89$, $N$ is an integer) becomes a rational number. But that is a contradiction; for example, $\tan 60^\circ = \sqrt 3$, which is an irrational number. Therefore, $\tan 1^\circ$ is an irrational number. My friend's solution with shortened interval (Method C) Consider my friend's proof and assume that we only know that $\tan 45^\circ =1$, which is a rational number. Any $\tan n^\circ$ for ($1\leq n\leq 44$, $n$ is an integer) is unknown (by assumption).
Let's shorten his interval from ($1\leq n\leq 88$, $n$ is an integer) to ($1\leq n\leq 44$, $n$ is an integer) and use the rest of his proof as follows. For ($1\leq n\leq 44$, $n$ is an integer), $$ \tan (n+1)^\circ = \frac{\tan n^\circ + \tan 1^\circ}{1-\tan n^\circ \tan 1^\circ}$$ becomes a rational number. Consequently, $\tan N^\circ$ ($1\leq N\leq 45$, $N$ is an integer) becomes a rational number. Based on the assumption that we don't know whether $\tan n^\circ$ for ($1\leq n\leq 44$, $n$ is an integer) is rational or not, we cannot show a contradiction up to $45^\circ$. Questions: Can we conclude that $\tan 1^\circ$ is a rational number in Method C, as there seems to be no contradiction? Is proof by induction correctly used in Methods B and C? Is the proof in Method A the strongest?
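Method B's induction can be sanity-checked numerically: applying the tangent addition formula 59 times to $\tan 1^\circ$ must reproduce $\tan 60^\circ = \sqrt{3}$, and since the formula maps rationals to rationals, a rational $\tan 1^\circ$ would force $\sqrt{3}$ to be rational. A floating-point check of the iteration:

```python
import math

t1 = math.tan(math.radians(1))   # tan 1 degree
t = t1
for _ in range(59):              # builds tan 2, tan 3, ..., tan 60 degrees
    t = (t + t1) / (1 - t * t1)  # tangent addition formula

# t should now equal tan 60 degrees = sqrt(3)
```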
Where $\phi$ is the natural isomorphism yielding the adjunction and $\epsilon_A$ is the counit. Could anyone explain why this diagram commutes? Thanks very much! $\require{AMScd}$ For $f:A \to B$ in $\mathbf{A}$, consider the diagram $$\begin{CD} \mathbf{A}(FGA,A) @>{\phi}>> \mathbf{X}(GA,GA)\\ @VVV @VVV \\ \mathbf{A}(FGA,B) @>{\phi}>> \mathbf{X}(GA,GB) \end{CD}$$ which commutes by naturality of $\phi$. Then take $\epsilon \in \mathbf{A}(FGA,A)$, it goes to $Id_{GA}$ on the right and then goes down to $Gf \in \mathbf{X}(GA,GB)$. In the other direction, $\epsilon$ goes down to $f\circ \epsilon \in \mathbf{A}(FGA,B)$ and then also goes to $Gf \in \mathbf{X}(GA,GB)$ by commutativity of the diagram. This shows that your diagram commutes. The bijection $\phi_{GA,B}$ is defined by $(f:FGA\to B)\mapsto (Gf\circ \eta_{GA}:GA\to GB)$. Applying this formula to $f=g\circ \epsilon_A$ yields$$\phi_{GA,B}(g\circ \epsilon_A)=G(g\circ \epsilon_A)\circ \eta_{GA}=G(g)\circ (G\epsilon_A\circ \eta_{GA})=G(g),$$because of the triangular identity $G\epsilon_A\circ \eta_{GA}=Id_{GA}$.
Let $(X,d)$ be a compact, connected metric space. For every $\epsilon>0$ define an equivalence relation on $X$ by $x\sim_{\epsilon}y$ if and only if there exists a finite sequence $(x=x_0,x_1,\dots,x_n=y)$ such that $d(x_i,x_{i+1})<\epsilon$. Note that the space is connected if and only if for every $\epsilon>0$, the $\epsilon$-equivalence class of every point is the whole space. See this answer. My interest in this collection of equivalence relations is their properties when one restricts them to certain subsets of the space: For a set $A \subset X$ and $\epsilon>0$, define $a \sim_{\epsilon}^{A} b$ if and only if there exists an $\epsilon$-step sequence between $a$ and $b$ contained in $A$. The following property is something intuitive one could expect to hold in any compact, connected metric space: Let $U$ and $V$ be open disjoint subsets of $X$ and denote $K:=(U \cup V)^\complement$. Let $\epsilon>0$ and let $u \in U$ and $v \in V$ with $u\sim_{\epsilon}^{U \cup V} v$ through a finite sequence $S_{\epsilon}(u,v) = (u=x_0,x_1,\dots,x_n=v) \subset U \cup V$. Then there exists some $w \in S_{\epsilon}(u,v)$ with $d(w,K)<\epsilon$. The proof I managed to find relies on the additional assumption that every ball is a connected subset of the space. Assuming this, one can easily see that there exists some ball $B$ of radius $\epsilon$ intersecting $U$ and $V$, so assuming the ball is connected the proof is almost immediate. I couldn't find a counterexample to this property for compact, connected metric spaces that contain some disconnected ball. Yet, I couldn't prove it when I removed the assumption.
While designing another problem I came up with the following question: My guess is that the problem is not solvable (in the sense that one can construct multiple $h$ with the same $\alpha,\beta$ and $x$), but cannot see how to show this. So far I gave the problem a try by splitting the angle and combining some trigonometric identities but couldn't conclude. So far I only obtained the following \begin{align*} \tan(\alpha)=\frac{(h_1+h_2)(x+y)}{(x+y)^2-h_1h_2} \end{align*} and $$\tan(\beta)=\frac{(h_1+h_2)\cdot y}{y^2-h_1h_2}.$$ Many thanks in advance.
Central Tendency: The Mean Vector Throughout this course, we'll use the ordinary notation for the mean of a variable. That is, the symbol \(\mu\) is used to represent a (theoretical) population mean and the symbol \(\bar{x}\) is used to represent a sample mean computed from observed data. In the multivariate setting, we add subscripts to these symbols to indicate the specific variable for which the mean is being given. For instance, \(\mu_1\) represents the population mean for variable \(X_1\) and \(\bar{x}_{1}\) denotes a sample mean based on observed data for variable \(X_{1}\). The population mean is the measure of central tendency for the population. Here, the population mean for variable \(j\) is \[\mu_j = E(X_{j})\] The notation \(E\) stands for statistical expectation; here \(E(X_{j})\) is the mean of \(X_{j}\) over all members of the population, or equivalently, over all random draws from a stochastic model. For example, \(\mu_j = E(X_{j})\) may be the mean of a normal variable. The population mean \(\mu_j\) for variable \(j\) can be estimated by the sample mean \[\bar{x}_j = \frac{1}{n}\sum_{i=1}^{n}X_{ij}\] Note! The sample mean \(\bar{x}_{j}\), because it is a function of our random data, is itself random and so has a mean of its own. In fact, the population mean of the sample mean is equal to the population mean \(\mu_j\); i.e., \[E(\bar{x}_j) = \mu_j \] Therefore, \(\bar{x}_{j}\) is unbiased for \(\mu_j\). Another way of saying this is that the mean of \(\bar{x}_{j}\) over all possible samples of size \(n\) is equal to \(\mu_j\). Recall that the population mean vector is \(\boldsymbol{\mu}\), which collects the population means of the different variables: \(\boldsymbol{\mu} = \left(\begin{array}{c} \mu_1 \\ \mu_2\\ \vdots\\ \mu_p \end{array}\right)\) We can estimate this population mean vector, \(\boldsymbol{\mu}\), by \(\mathbf{\bar{x}}\).
This is obtained by collecting the sample means from each of the variables in a single vector. This is shown below. \(\mathbf{\bar{x}} = \left(\begin{array}{c}\bar{x}_1\\ \bar{x}_2\\ \vdots \\ \bar{x}_p\end{array}\right) = \left(\begin{array}{c}\frac{1}{n}\sum_{i=1}^{n}X_{i1}\\ \frac{1}{n}\sum_{i=1}^{n}X_{i2}\\ \vdots \\ \frac{1}{n}\sum_{i=1}^{n}X_{ip}\end{array}\right) = \frac{1}{n}\sum_{i=1}^{n}\textbf{X}_i\) Just as the sample means, \(\bar{x}\), for the individual variables are unbiased for their respective population means, the sample mean vector is unbiased for the population mean vector. \(E(\mathbf{\bar{x}}) = E\left(\begin{array}{c}\bar{x}_1\\\bar{x}_2\\ \vdots \\\bar{x}_p\end{array}\right) = \left(\begin{array}{c}E(\bar{x}_1)\\E(\bar{x}_2)\\ \vdots \\E(\bar{x}_p)\end{array}\right)=\left(\begin{array}{c}\mu_1\\\mu_2\\\vdots\\\mu_p\end{array}\right)=\boldsymbol{\mu}\)
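As a small concrete check, with a hypothetical data set of \(n=4\) observations on \(p=2\) variables, the sample mean vector is just the column-wise average:

```python
# rows = subjects, columns = variables (hypothetical data)
X = [[6, 3],
     [10, 4],
     [12, 7],
     [12, 6]]

n, p = len(X), len(X[0])
# sample mean vector: (1/n) * sum of the data vectors
xbar = [sum(row[j] for row in X) / n for j in range(p)]
```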
Dispersion: Variance, Standard Deviation Variance A variance measures the degree of spread (dispersion) in a variable's values. Theoretically, a population variance is the average squared difference between a variable's values and the mean for that variable. Population Variance The population variance for variable \(X_j\) is \(\sigma_j^2 = E(X_j-\mu_j)^2\) Note that the squared residual \((X_{j}-\mu_{j})^2\) is a function of the random variable \(X_{j}\). Therefore, the squared residual itself is random and has a population mean. The population variance is thus the population mean of the squared residual. We see that if the data tend to be far away from the mean, the squared residual will tend to be large, and hence the population variance will also be large. Conversely, if the data tend to be close to the mean, the squared residual will tend to be small, and hence the population variance will also be small. Sample Variance The population variance \(\sigma _{j}^{2}\) can be estimated by the sample variance \begin{align} s_j^2 &= \frac{1}{n-1}\sum_{i=1}^{n}(X_{ij}-\bar{x}_j)^2\\&= \frac{\sum_{i=1}^{n}X_{ij}^2- n \bar{x}_j^2 }{n-1} \\&=\frac{\sum_{i=1}^{n}X_{ij}^2-\left(\left(\sum_{i=1}^{n}X_{ij}\right)^2/n\right)}{n-1} \end{align} The first expression in this formula is most suitable for interpreting the sample variance. We see that it is a function of the squared residuals; that is, take the difference between the individual observations and their sample mean, and then square the result. Here, we may observe that if observations tend to be far away from their sample means, then the squared residuals and hence the sample variance will also tend to be large. If on the other hand, the observations tend to be close to their respective sample means, then the squared differences between the data and their means will be small, resulting in a small sample variance value for that variable.
The last part of the expression above gives the formula that is most suitable for computation, either by hand or by a computer! Since the sample variance is a function of the random data, the sample variance itself is a random quantity, and so has a population mean. In fact, the population mean of the sample variance is equal to the population variance: \[E(s_j^2) = \sigma_j^2\] That is, the sample variance \(s _{j}^{2}\) is unbiased for the population variance \(\sigma _{j}^{2}\). Our textbook (Johnson and Wichern, 6th ed.) uses a sample variance formula derived using maximum likelihood estimation principles. In this formula, the division is by \(n\): \[s_j^2 = \frac{\sum_{i=1}^{n}(X_{ij}-\bar{x}_j)^2}{n}\] Example 1-1: Pulse Rates Suppose that we have observed the following \(n =\) 5 resting pulse rates: 64, 68, 74, 76, 78. Find the sample mean, variance and standard deviation. The sample mean is \(\bar{x} = \dfrac{64+68+74+76+78}{5}=72\). The maximum likelihood estimate of the variance, the one consistent with our text, is \begin{align} s^2 &= \frac{(64-72)^2+(68-72)^2+(74-72)^2+(76-72)^2+(78-72)^2}{5}\\&=\frac{136}{5} \\&= 27.2 \end{align} The standard deviation based on this method is \(s=\sqrt{27.2}=5.215\). The more commonly used variance estimate, the one given by statistical software, would be \(\frac{136}{5-1}=34\). The standard deviation would be \(s = \sqrt{34}=5.83\).
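The pulse-rate arithmetic above is easy to reproduce; this snippet computes both the divide-by-\(n\) (maximum likelihood) estimate used in the text and the usual divide-by-\((n-1)\) estimate:

```python
import math

pulse = [64, 68, 74, 76, 78]
n = len(pulse)
xbar = sum(pulse) / n                      # sample mean: 72.0
ss = sum((x - xbar) ** 2 for x in pulse)   # sum of squared residuals: 136.0

var_mle = ss / n              # divide by n: 27.2
var_unbiased = ss / (n - 1)   # divide by n-1: 34.0
sd_mle = math.sqrt(var_mle)
sd_unbiased = math.sqrt(var_unbiased)
```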
A test statistic which has an F-distribution under the null hypothesis is called an F-test. It is used to compare statistical models fitted to a given data set. George W. Snedecor named the statistic in honour of Sir Ronald A. Fisher. F Value \[F = \frac{\text{variance } 1}{\text{variance } 2} = \frac{\sigma _{1}^{2}}{\sigma _{2}^{2}}\] The F-test formula is used to compare the variances of two different sets of values. To apply it under the null hypothesis, we first need to find the mean of each of the two given samples and then calculate their variances: \[\sigma^{2} = \frac{\sum (x - \overline{x})^{2}}{n-1}\] where \(\sigma^2\) is the variance, \(x\) denotes the values in a data set, \(\overline{x}\) is the mean of the data, and \(n\) is the total number of values.
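As a minimal illustration with made-up samples, the F-value is simply the ratio of the two sample variances computed with the \(n-1\) formula above (by convention the larger variance usually goes in the numerator):

```python
def sample_variance(xs):
    n = len(xs)
    m = sum(xs) / n
    return sum((x - m) ** 2 for x in xs) / (n - 1)

a = [18, 20, 22, 24, 26]   # hypothetical sample 1
b = [15, 16, 17, 18, 19]   # hypothetical sample 2

f_value = sample_variance(a) / sample_variance(b)   # 10.0 / 2.5 = 4.0
```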
I'm trying to prove the following theorem. Theorem (Zero-free region): There exists $C>0$ such that $\sigma > 1-\frac{C}{\log(\vert t \vert +4)}\Rightarrow \zeta(\sigma + i t)\neq 0$. In the proof I have a problem with one of the arguments, namely: If $0<\delta\leq 2$ then $\frac{-\zeta'}{\zeta}(1+\delta) = \frac{1}{1+\delta -1} + O(1)$. My question is simply why? I think it has something to do with the fact that $\zeta(s)$ has a simple pole at $s = 1$ but is otherwise analytic in the half-plane $\sigma >0$ (where we denote $s = \sigma + it$). Can someone clarify this?
Measures of Association: Covariance, Correlation Association is concerned with how each variable is related to the other variable(s). In this case, the first measure that we will consider is the covariance between two variables \(j\) and \(k\). The population covariance is a measure of the association between pairs of variables in a population. Population Covariance The population covariance between variables \(j\) and \(k\) is \(\sigma_{jk} = E\{(X_{ij}-\mu_j)(X_{ik}-\mu_k)\}\quad\mbox{for }i=1,\ldots,n\) Note that the product of the residuals \( \left( X_{ij} - \mu_{j} \right) \) and \( \left(X_{ik} - \mu_{k} \right) \) for variables \(j\) and \(k\), respectively, is a function of the random variables \(X_{ij}\) and \(X_{ik}\). Therefore, \(\left( X _ { i j } - \mu _ { j } \right) \left( X _ { i k } - \mu _ { k } \right)\) is itself random, and has a population mean. The population covariance is defined to be the population mean of this product of residuals. We see that if either both variables are greater than their respective means, or if they are both less than their respective means, then the product of the residuals will be positive. Thus, if the value of variable \(j\) tends to be greater than its mean when the value of variable \(k\) is larger than its mean, and if the value of variable \(j\) tends to be less than its mean when the value of variable \(k\) is smaller than its mean, then the covariance will be positive. Positive population covariances mean that the two variables are positively associated; variable \(j\) tends to increase with increasing values of variable \(k\). Negative association can also occur. If one variable tends to be greater than its mean when the other variable is less than its mean, the product of the residuals will be negative, and you will obtain a negative population covariance. Variable \(j\) will tend to decrease with increasing values of variable \(k\). The population covariance \(\sigma_{jk}\) between variables \(j\) and \(k\) can be estimated by the sample covariance.
Sample Covariance This can be calculated by \begin{align} s_{jk} &= \frac{1}{n-1}\sum_{i=1}^{n}(X_{ij}-\bar{x}_j)(X_{ik}-\bar{x}_k)\\&=\frac{\sum_{i=1}^{n}X_{ij}X_{ik}-(\sum_{i=1}^{n}X_{ij})(\sum_{i=1}^{n}X_{ik})/n}{n-1} \end{align} Just like in the formula for variance we have two expressions that make up this formula. The first half of the formula is most suitable for understanding the interpretation of the sample covariance, and the second half of the formula is used for calculation. Looking at the first half of the expression, the product inside the sum is the residual differences between variable j and its mean times the residual differences between variable k and its mean. We can see that if either both variables tend to be greater than their respective means or less than their respective means, then the product of the residuals will tend to be positive leading to a positive sample covariance. Conversely, if one variable takes values that are greater than its mean when the opposite variable takes a value less than its mean, then the product will take a negative value. In the end, when you add up this product over all of the observations, you will end up with a negative covariance. So, in effect, a positive covariance would indicate a positive association between the variables j and k. And a negative association is when the covariance is negative. For computational purposes, we will use the second half of the formula. For each subject, the product of the two variables is obtained, and then the products are summed to obtain the first term in the numerator. The second term in the numerator is obtained by taking the product of the sums of variable over the n subjects, then dividing the results by the sample size n. The difference between the first and second terms is then divided by n -1 to obtain the covariance value. Again, sample covariance is a function of the random data, and hence, is random itself. 
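Both halves of the sample covariance formula can be checked against each other on a small hypothetical data set; the definitional form and the computational form must agree:

```python
x1 = [6, 10, 12, 12]
x2 = [3, 4, 7, 6]
n = len(x1)
m1, m2 = sum(x1) / n, sum(x2) / n

# definitional form: sum of products of residuals, divided by n-1
s_def = sum((a - m1) * (b - m2) for a, b in zip(x1, x2)) / (n - 1)

# computational form: cross-product sum minus correction term, over n-1
s_comp = (sum(a * b for a, b in zip(x1, x2))
          - sum(x1) * sum(x2) / n) / (n - 1)
```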
As before, the population mean of the sample covariance \(s_{jk}\) is equal to the population covariance \(\sigma_{jk}\); i.e., \(E(s_{jk})=\sigma_{jk}\) That is, the sample covariance \(s_{jk}\) is unbiased for the population covariance \(\sigma_{jk}\). The sample covariance is a measure of the association between a pair of variables: \(s_{jk}\) = 0 This implies that the two variables are uncorrelated. (Note that this does not necessarily imply independence, we'll get back to this later.) \(s_{jk}\) > 0 This implies that the two variables are positively correlated; i.e., values of variable \(j\) tend to increase with increasing values of variable \(k\). The larger the covariance, the stronger the positive association between the two variables. \(s_{jk}\) < 0 This implies that the two variables are negatively correlated; i.e., values of variable \(j\) tend to decrease with increasing values of variable \(k\). The more negative the covariance, the stronger the negative association between the two variables. Recall that we collected all of the population means of the \(p\) variables into a mean vector. Likewise, the population variances and covariances can be collected into the population variance-covariance matrix, also known as the population dispersion matrix: Population variance-covariance matrix \(\Sigma = \left(\begin{array}{cccc}\sigma^2_1 & \sigma_{12} & \dots & \sigma_{1p}\\ \sigma_{21} & \sigma^2_{2} & \dots & \sigma_{2p}\\ \vdots & \vdots & \ddots & \vdots\\ \sigma_{p1} & \sigma_{p2} & \dots &\sigma^2_p\end{array}\right)\) Note that the population variances appear along the diagonal of this matrix, and the covariances appear in the off-diagonal elements. So, the covariance between variables \(j\) and \(k\) will appear in row \(j\) and column \(k\) of this matrix. The population variance-covariance matrix may be estimated by the sample variance-covariance matrix.
The population variances and covariances in the above population variance-covariance matrix are replaced by the corresponding sample variances and covariances to obtain the sample variance-covariance matrix: Sample variance-covariance matrix \( S = \left(\begin{array}{cccc}s^2_1 & s_{12} & \dots & s_{1p}\\ s_{21} & s^2_2 & \dots & s_{2p}\\ \vdots & \vdots & \ddots & \vdots \\ s_{p1} & s_{p2} & \dots & s^2_{p}\end{array}\right)\) Note that the sample variances appear along the diagonal of this matrix and the covariances appear in the off-diagonal elements. So the covariance between variables \(j\) and \(k\) will appear in the \(jk\)-th element of this matrix. Note! \(S\) (the sample variance-covariance matrix) is symmetric; i.e., \(s_{jk}\) = \(s_{kj}\). \(S\) is unbiased for the population variance-covariance matrix \(\Sigma\); i.e., \(E(S) = \left(\begin{array}{cccc} E(s^2_1) & E(s_{12}) & \dots & E(s_{1p}) \\ E(s_{21}) & E(s^2_{2}) & \dots & E(s_{2p})\\ \vdots & \vdots & \ddots & \vdots \\ E(s_{p1}) & E(s_{p2}) & \dots & E(s^2_p)\end{array}\right)=\Sigma\). Because this matrix is a function of our random data, the elements of this matrix are also going to be random, and the matrix, on the whole, is random as well. The statement '\(S\) is unbiased' means that the mean of each element of \(S\) is equal to the corresponding element of the population matrix \(\Sigma\). In matrix notation, the sample variance-covariance matrix may be computed using the following expressions: \begin{align} S &= \frac{1}{n-1}\sum_{i=1}^{n}(X_i-\bar{x})(X_i-\bar{x})'\\ &= \frac{\sum_{i=1}^{n}X_iX_i'-(\sum_{i=1}^{n}X_i)(\sum_{i=1}^{n}X_i)'/n}{n-1} \end{align} Just as we have seen in the previous formulas, the first half of the formula is used in interpretation, and the second half of the formula is what is used for calculation purposes. Looking at the second term you can see that the first term in the numerator involves taking the data vector for each subject and multiplying by its transpose.
The resulting matrices are then added over the n subjects. To obtain the second term in the numerator, first compute the sum of the data vectors over the n subjects, then take the resulting vector and multiply by its transpose; then divide the resulting matrix by the number of subjects n. Take the difference between the two terms in the numerator and divide by n - 1. Example 1-2 Suppose that we have observed the following n = 4 observations for variables \(x_{1}\) and \(x_{2}\): (\(x_1\), \(x_2\)) = (6, 3), (10, 4), (12, 7), (12, 6). The sample means are \(\bar{x}_1\) = 10 and \(\bar{x}_2\) = 5. The sample covariance (dividing by \(n-1\)) is \begin{align} s_{12}&=\dfrac{(6-10)(3-5)+(10-10)(4-5)+(12-10)(7-5)+(12-10)(6-5)}{4-1}\\&=\dfrac{8+0+4+2}{4-1}=4.67 \end{align} The positive value reflects the fact that as \(x_{1}\) increases, \(x_{2}\) also tends to increase. Note! The magnitude of the covariance value is not particularly helpful as it is a function of the magnitudes (scales) of the two variables. This quantity is a function of the variability of the two variables, and so, it is hard to tease out the effects of the association between the two variables from the effects of their dispersions. Note, however, that the covariance between variables \(i\) and \(j\) must lie between the product of the two standard deviations and the negative of that same product: \(-s_i s_j \le s_{ij} \le s_i s_j\) Example 1-3: Body Measurements (Covariance) In an undergraduate statistics class, n = 30 females reported their heights (inches), and also measured their left forearm length (cm), left foot length (cm), and head circumference (cm).
The sample variance-covariance matrix is the following:

           Height  LeftArm  LeftFoot  HeadCirc
Height      8.740    3.022     2.772     0.289
LeftArm     3.022    2.402     1.234     0.223
LeftFoot    2.772    1.234     1.908     0.118
HeadCirc    0.289    0.223     0.118     3.434

Notice that the matrix has four rows and four columns because there are four variables being considered. Also, notice that the matrix is symmetric. Here are a few examples of the information in the matrix: The variance of the height variable is 8.74. Thus the standard deviation is \(\sqrt{8.74} = 2.956\). The variance of the left foot measurement is 1.908 (in the 3rd diagonal element). Thus the standard deviation for this variable is \(\sqrt{1.908}=1.381\). The covariance between height and left arm is 3.022, found in the 1st row, 2nd column and also in the 2nd row, 1st column. The covariance between left foot and left arm is 1.234, found in the 3rd row, 2nd column and also in the 2nd row, 3rd column. All covariance values are positive so all pairwise associations are positive. But, the magnitudes do not tell us about the strength of the associations. To assess the strength of an association, we use correlation values. This suggests an alternative measure of association. Correlation Matrix Correlation The population correlation is defined to be equal to the population covariance divided by the product of the population standard deviations: Correlation \(\rho_{jk} = \dfrac{\sigma_{jk}}{\sigma_j\sigma_k}\) The population correlation may be estimated by substituting into the formula the sample covariances and standard deviations: \(r_{jk}=\dfrac{s_{jk}}{s_js_k}=\dfrac{\sum_{i=1}^{n}X_{ij}X_{ik}-(\sum_{i=1}^{n}X_{ij})(\sum_{i=1}^{n}X_{ik})/n}{\sqrt{\{\sum_{i=1}^{n}X^2_{ij}-(\sum_{i=1}^{n}X_{ij})^2/n\}\{\sum_{i=1}^{n}X^2_{ik}-(\sum_{i=1}^{n}X_{ik})^2/n\}}}\) It is very important to note that the population as well as the sample correlation must lie between -1 and 1.
\(-1 \le \rho_{jk} \le 1\) \(-1 \le r_{jk} \le 1\) Therefore: \(\rho_{jk}\) = 0 indicates, as you might expect, that the two variables are uncorrelated. \(\rho_{jk}\) close to +1 indicates a strong positive dependence. \(\rho_{jk}\) close to -1 indicates a strong negative dependence. Sample correlation coefficients have a similar interpretation. For a collection of p variables, the correlation matrix is a p × p matrix that displays the correlations between pairs of variables. For instance, the value in the \(j^{th}\) row and \(k^{th}\) column gives the correlation between variables \(x_{j}\) and \(x_{k}\). The correlation matrix is symmetric so that the value in the \(k^{th}\) row and \(j^{th}\) column is also the correlation between variables \(x_{j}\) and \(x_{k}\). The diagonal elements of the correlation matrix are all identically equal to 1. Sample Correlation Matrix The sample correlation matrix is denoted R. \(\textbf{R} = \left(\begin{array}{cccc} 1 & r_{12} & \dots & r_{1p}\\ r_{21} & 1 & \dots & r_{2p}\\ \vdots & \vdots & \ddots & \vdots \\ r_{p1} & r_{p2} & \dots & 1\end{array}\right)\) Example 1-4: Body Measurements (Correlations) The following covariance matrix shows the pairwise covariances for the height, left forearm, left foot and head circumference measurements of n = 30 female college students.

           Height  LeftArm  LeftFoot  HeadCirc
Height      8.740    3.022     2.772     0.289
LeftArm     3.022    2.402     1.234     0.223
LeftFoot    2.772    1.234     1.908     0.118
HeadCirc    0.289    0.223     0.118     3.434

Here are two examples of calculating a correlation coefficient: The correlation between height and left forearm is \(\dfrac{3.022}{\sqrt{8.74}\sqrt{2.402}}=0.66\). The correlation between head circumference and left foot is \(\dfrac{0.118}{\sqrt{3.434}\sqrt{1.908}}=0.046\).
The complete sample correlation matrix for this example is the following:

           Height  LeftArm  LeftFoot  HeadCirc
Height      1        0.66      0.68      0.053
LeftArm     0.66     1         0.58      0.078
LeftFoot    0.68     0.58      1         0.046
HeadCirc    0.053    0.078     0.046     1

Overall, we see moderately strong linear associations among the variables height, left arm and left foot, and quite weak (almost 0) associations between head circumference and the other three variables. In practice, use scatter plots of the variables to fully understand the associations between variables. It is not a good idea to rely on correlations without seeing the plots. Correlation values are affected by outliers and curvilinearity.
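The correlation matrix above can be reproduced from the covariance matrix by dividing each entry by the product of the corresponding standard deviations; a quick sketch:

```python
import math

# sample covariance matrix for Height, LeftArm, LeftFoot, HeadCirc
S = [[8.740, 3.022, 2.772, 0.289],
     [3.022, 2.402, 1.234, 0.223],
     [2.772, 1.234, 1.908, 0.118],
     [0.289, 0.223, 0.118, 3.434]]

# standard deviations are the square roots of the diagonal entries
sd = [math.sqrt(S[j][j]) for j in range(4)]

# r_jk = s_jk / (s_j * s_k)
R = [[S[j][k] / (sd[j] * sd[k]) for k in range(4)] for j in range(4)]
```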
Old Neural Net API Warning This API is deprecated since v1.4.2; users are advised to use the new neural stack API. Feed-forward Network To create a feedforward network we need three entities. The training data (type parameter D) A data pipe which transforms the original data into a data structure that is understood by the FeedForwardNetwork The network architecture (i.e. the network as a graph object) Network graph A standard feedforward network can be created by first initializing the network architecture/graph. val gr = FFNeuralGraph(num_inputs = 3, num_outputs = 1, hidden_layers = 1, List("logsig", "linear"), List(5)) This creates a neural network graph with one hidden layer, 3 input nodes, 1 output node and assigns sigmoid activation in the hidden layer. It also creates 5 neurons in the hidden layer. Next we create a data transform pipe which converts instances of the data input-output patterns to (DenseVector[Double], DenseVector[Double]); this is required in many data processing applications where the data structure storing the training data is not a breeze vector. Let's say we have data in the form trainingdata: Stream[(DenseVector[Double], Double)], i.e. we have input features as breeze vectors and scalar output values which help the network learn an unknown function. We can write the transform as val transform = DataPipe( (d: Stream[(DenseVector[Double], Double)]) => d.map(el => (el._1, DenseVector(el._2))) ) Model Building We are now in a position to initialize a feed forward neural network model. val model = new FeedForwardNetwork[ Stream[(DenseVector[Double], Double)] ](trainingdata, gr, transform) Here the variable trainingdata represents the training input-output pairs, which must conform to the type argument given in square brackets (i.e. Stream[(DenseVector[Double], Double)]).
Training the model using back propagation can be done as follows; you can set custom values for the backpropagation parameters like the learning rate, momentum factor, mini batch fraction, regularization and number of learning iterations. model.setLearningRate(0.09) .setMaxIterations(100) .setBatchFraction(0.85) .setMomentum(0.45) .setRegParam(0.0001) .learn() The trained model can now be used for prediction, by using either the predict() method or the feedForward() value member, both of which are members of FeedForwardNetwork (refer to the api docs for more details). val pattern = DenseVector(2.0, 3.5, 2.5) val prediction = model.predict(pattern) Sparse Autoencoder Sparse autoencoders are a feedforward architecture useful for unsupervised feature learning. They learn a compressed (or expanded) vector representation of the original data features. This process is known by various terms like feature learning, feature engineering, representation learning etc. Autoencoders are amongst several models used for feature learning. Other notable examples include convolutional neural networks (CNN), principal component analysis (PCA), Singular Value Decomposition (SVD) (a variant of PCA), the Discrete Wavelet Transform (DWT), etc. Creation Autoencoders can be created using the AutoEncoder class. Its constructor has the following arguments. import io.github.mandar2812.dynaml.models.neuralnets._ import io.github.mandar2812.dynaml.models.neuralnets.TransferFunctions._ import io.github.mandar2812.dynaml.optimization.BackPropagation //Cast the training data as a stream of (x,x), //where x are the DenseVector of features val trainingData: Stream[(DenseVector[Double], DenseVector[Double])] = ... val testData = ... val enc = new AutoEncoder( inDim = trainingData.head._1.length, outDim = 4, acts = List(SIGMOID, LIN)) Training The training algorithm used is a modified version of standard back-propagation.
The objective function can be seen as the sum of three terms:

- $\mathcal{L}(\mathbf{W}, \mathbf{X})$ is the least squares loss.
- $\mathcal{R}(\mathbf{W})$ is the regularization penalty, with parameter $\lambda$.
- $KL(\hat{\rho} \| \rho)$ is the Kullback-Leibler divergence between the average activation $\hat{\rho}$ (over all data instances $x \in \mathbf{X}$) of each hidden node and a specified value $\rho \in [0,1]$, which is also known as the sparsity weight.

```scala
//Set sparsity parameter for back propagation
BackPropagation.rho = 0.5

enc.optimizer
  .setRegParam(0.0)
  .setStepSize(1.5)
  .setNumIterations(200)
  .setMomentum(0.4)
  .setSparsityWeight(0.9)

enc.learn(trainingData.toStream)

val metrics = new MultiRegressionMetrics(
  testData.map(c => (enc.i(enc(c._1)), c._2)).toList,
  testData.length)
```
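For reference, the three terms can be collected into a single objective. This is a sketch of the standard sparse-autoencoder objective; the sparsity coefficient $\beta$ (presumably the `setSparsityWeight` value) and the per-hidden-node form of the KL term are my assumptions, not spelled out above:

```latex
J(\mathbf{W}) = \mathcal{L}(\mathbf{W}, \mathbf{X})
  + \lambda \, \mathcal{R}(\mathbf{W})
  + \beta \sum_{j \in \text{hidden}} KL\!\left(\hat{\rho}_j \,\middle\|\, \rho\right),
\qquad
KL(\hat{\rho}_j \| \rho)
  = \hat{\rho}_j \ln\frac{\hat{\rho}_j}{\rho}
  + (1-\hat{\rho}_j) \ln\frac{1-\hat{\rho}_j}{1-\rho}
```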
Yeah, this software cannot be too easy to install; my installer is very professional looking, currently not tied into that code, but it directs the user how to search for their MikTeX and/or install it and does a test LaTeX rendering Somebody like Zeta (on codereview) might be able to help a lot... I'm not sure if he does a lot of category theory, but he does a lot of Haskell (not that I'm trying to conflate the two)... so he would probably be one of the better bets for asking for revision of code. He is usually on the 2nd monitor chat room. There are a lot of people on those chat rooms that help each other with projects. I'm not sure how many of them are adept at category theory though... still, this chat tends to emphasize a lot of small problems and occasionally goes off tangent. Your project is probably too large for an actual question on codereview, but there is a lot of github activity in the chat rooms. gl. In mathematics, the Fabius function is an example of an infinitely differentiable function that is nowhere analytic, found by Jaap Fabius (1966). It was also written down as the Fourier transform of $\hat{f}(z)=\prod_{m=1}^{\infty}(\cos...$ Defined as the probability that $\sum_{n=1}^\infty2^{-n}\zeta_n$ will be less than $x$, where the $\zeta_n$ are chosen randomly and independently from the unit interval @AkivaWeinberger are you familiar with the theory behind Fourier series? anyway here's some food for thought for $f : S^1 \to \Bbb C$ square-integrable, let $c_n := \displaystyle \int_{S^1} f(\theta) \exp(-i n \theta) (\mathrm d\theta/2\pi)$, and $f^\ast := \displaystyle \sum_{n \in \Bbb Z} c_n \exp(in\theta)$. It is known that $f^\ast = f$ almost everywhere. (a) is $-^\ast$ idempotent? i.e. is it true that $f^{\ast \ast} = f^\ast$? @AkivaWeinberger You need to use the definition of $F$ as the cumulative function of the random variables. $C^\infty$ was a simple step, but I don't have access to the paper right now so I don't recall it.
> In mathematics, a square-integrable function, also called a quadratically integrable function, is a real- or complex-valued measurable function for which the integral of the square of the absolute value is finite. I am having some difficulties understanding the difference between simplicial and singular homology. I am aware of the fact that they are isomorphic, i.e. the homology groups are in fact the same (and maybe this doesn't help my intuition), but I am having trouble seeing where in the setup they d... Usually it is a great advantage to consult the notes, as they tell you exactly what has been done. A book will teach you the field, but not necessarily help you understand the style in which the prof. (who creates the exam) creates questions. @AkivaWeinberger having thought about it a little, I think the best way to approach the geometry problem is to argue that the relevant condition (centroid is on the incircle) is preserved by similarity transformations hence you're free to rescale the sides, and therefore the (semi)perimeter as well so one may (for instance) choose $s=(a+b+c)/2=1$ without loss of generality that makes a lot of the formulas simpler, e.g. the inradius is identical to the area It is asking how many terms of the Euler-Maclaurin formula do we need in order to compute the Riemann zeta function in the complex plane? $q$ is the upper summation index in the sum with the Bernoulli numbers. This appears to answer it in the positive: "By repeating the above argument we see that we have analytically continued the Riemann zeta-function to the right half-plane $\sigma > 1 - k$, for all $k = 1, 2, 3, \dots$"
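To make the Euler-Maclaurin question concrete, here is a sketch (my own, not from the chat) of the continuation for real $s$: a partial sum up to $N$ plus $q$ Bernoulli-number correction terms, with $q$ playing the role of the upper summation index mentioned above. The choices $N = 10$ and $q = 5$ are arbitrary.

```python
from math import pi

# Bernoulli numbers B_2, B_4, ..., B_12
BERNOULLI = [1 / 6, -1 / 30, 1 / 42, -1 / 30, 5 / 66, -691 / 2730]

def zeta_em(s, N=10, q=5):
    """Euler-Maclaurin approximation of zeta(s) (s real here, s != 1)."""
    total = sum(n ** -s for n in range(1, N))   # partial sum over n < N
    total += 0.5 * N ** -s                      # boundary term
    total += N ** (1 - s) / (s - 1)             # tail integral
    fact = 1.0                                  # running (2k)!
    poch = 1.0                                  # rising factorial s(s+1)...(s+2k-2)
    for k in range(1, q + 1):
        fact *= (2 * k - 1) * (2 * k)
        poch = s if k == 1 else poch * (s + 2 * k - 3) * (s + 2 * k - 2)
        total += BERNOULLI[k - 1] / fact * poch * N ** (-s - 2 * k + 1)
    return total

print(zeta_em(2.0))  # compare with pi**2 / 6 = 1.6449340668...
```

Already at $N = 10$, $q = 5$ the result agrees with $\pi^2/6$ to roughly twelve digits, which illustrates why only a handful of Bernoulli terms are needed.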
A particle moves along the x-axis so that at time t its position is given by $x(t) = t^3-6t^2+9t+11$. During what time intervals is the particle moving to the left? So I know that we need the velocity for that, and we can get that by taking the derivative, but I don't know what to do after that. The velocity would then be $v(t) = 3t^2-12t+9$; how could I find the intervals? Fix $c\in\{0,1,\dots\}$, let $K\geq c$ be an integer, and define $z_K=K^{-\alpha}$ for some $\alpha\in(0,2)$. I believe I have numerically discovered that$$\sum_{n=0}^{K-c}\binom{K}{n}\binom{K}{n+c}z_K^{n+c/2} \sim \sum_{n=0}^K \binom{K}{n}^2 z_K^n \quad \text{ as } K\to\infty$$but cannot ... So, the whole discussion is about some polynomial $p(A)$, for $A$ an $n\times n$ matrix with entries in $\mathbf{C}$, and eigenvalues $\lambda_1,\ldots, \lambda_k$. Anyways, part (a) is talking about proving that $p(\lambda_1),\ldots, p(\lambda_k)$ are eigenvalues of $p(A)$. That's basically routine computation. No problem there. The next bit is to compute the dimension of the eigenspaces $E(p(A), p(\lambda_i))$. Seems like this bit follows from the same argument. An eigenvector for $A$ is an eigenvector for $p(A)$, so the rest seems to follow. Finally, the last part is to find the characteristic polynomial of $p(A)$. I guess this means in terms of the characteristic polynomial of $A$. Well, we do know what the eigenvalues are... The so-called Spectral Mapping Theorem tells us that the eigenvalues of $p(A)$ are exactly the $p(\lambda_i)$. Usually, by the time you start talking about complex numbers you consider the real numbers as a subset of them, since a and b are real in a + bi. But you could define it that way and call it a "standard form" like ax + by = c for linear equations :-) @Riker "a + bi where a and b are integers" Complex numbers a + bi where a and b are integers are called Gaussian integers.
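Picking up the velocity question from earlier in this log, here is a short sketch (my own) of the standard sign analysis: factor $v(t)$, find its roots, and test the sign in between.

```python
import numpy as np

# v(t) = 3t^2 - 12t + 9 = 3(t - 1)(t - 3): x'(t) for x(t) = t^3 - 6t^2 + 9t + 11
coeffs = [3, -12, 9]
t1, t2 = sorted(np.roots(coeffs).real)   # times where the velocity is zero
print(t1, t2)                            # approximately 1.0 and 3.0

# the particle moves left where v(t) < 0; the parabola opens upward,
# so v is negative exactly between its roots -- test a point to confirm
assert np.polyval(coeffs, 2.0) < 0       # v(2) = -3, so leftward on (1, 3)
```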
I was wondering if it is easier to factor in a non-UFD than it is to factor in a UFD. I can come up with arguments for that, but I also have arguments in the opposite direction. For instance: it should be easier to factor when there are more possibilities (multiple factorizations in a non-UFD... Does anyone know if $T: V \to \mathbb{R}^n$ is an inner product space isomorphism if $T(v) = (v)_S$, where $S$ is a basis for $V$? My book isn't saying so explicitly, but there was a theorem saying that an inner product isomorphism exists, and another theorem kind of suggesting that it should work. @TobiasKildetoft Sorry, I meant that they should be equal (accidentally sent this before writing my answer. Writing it now) Isn't there this theorem saying that if $v,w \in V$ ($V$ being an inner product space), then $||v|| = ||(v)_S||$? (where the left norm is defined as the norm in $V$ and the right norm is the euclidean norm) I thought that this would somehow result from isomorphism @AlessandroCodenotti Actually, such an $f$ in fact needs to be surjective. Take any $y \in Y$; the maximal ideal of $k[Y]$ corresponding to that is $(Y_1 - y_1, \cdots, Y_n - y_n)$. The ideal corresponding to the subvariety $f^{-1}(y) \subset X$ in $k[X]$ is then nothing but $(f^* Y_1 - y_1, \cdots, f^* Y_n - y_n)$. If this is empty, the weak Nullstellensatz kicks in to say that there are $g_1, \cdots, g_n \in k[X]$ such that $\sum_i (f^* Y_i - y_i)g_i = 1$. Well, better to say that $(f^* Y_1 - y_1, \cdots, f^* Y_n - y_n)$ is the trivial ideal I guess. Hmm, I'm stuck again $O(n)$ acts transitively on $S^{n-1}$ with stabilizer at a point $O(n-1)$ For any transitive $G$ action on a set $X$ with stabilizer $H$, $G/H \cong X$ set theoretically. In this case, as the action is a smooth action by a Lie group, you can prove this set-theoretic bijection gives a diffeomorphism
* Make .ps file as normal
* Make a dummy .tex file doing the psfrag replacements, e.g.

Code: Select all
\documentclass{article}
\usepackage{epsfig,psfrag}
\pagestyle{empty}
\begin{document}
\begin{figure}
\begin{center}
\psfrag{CT}[][][1.2]{$l(l+1) C_l / 2\pi \mu K^2$}
\psfrag{l}[][][1.2]{$l$}
\psfig{figure=series_contribs.ps,angle=0,width = 12cm}
\end{center}
\end{figure}
\end{document}

* latex and dvips the file
* Open the new .ps file in gsview
* Use "PS to EPS" on the File menu to convert to a nicely tightly clipped .eps file.

This works with the Windows GSview program anyway. Alternatively, if you have ps2eps installed, use

Code: Select all
latex %1.tex
dvips %1
perl ps2eps.pl -f -g -l %1.ps

For Matlab fans, using the above recipe generates .eps files typically 5 times smaller than putting in the latex in matlab 7 using 'Interpreter','Latex' labels.
The following formula gives the critical coupling (more precisely the ratio of the spin-spin coupling over the temperature) for $O(n)$ models on a triangular lattice: $$\text{e}^{-2K}=\frac{1}{\sqrt{2+\sqrt{2-n}}}$$ with $K=\beta J$ Numerically, it says that: the Ising model (n = 1) has $K \approx 0.27$ the XY model (n = 2) has $K \approx 0.17$ Thus, the critical temperature for the XY model is higher than for the Ising model. I've been thinking about it but I can't come up with a reason why allowing the order parameter to take continuous values means that we need to go higher in temperature to destroy order. Is there a (semi) intuitive reason for that?
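A quick numerical check of the two quoted values, using only the formula above (solved for $K$):

```python
from math import log, sqrt

def critical_K(n):
    # invert e^{-2K} = 1 / sqrt(2 + sqrt(2 - n))  =>  K = ln(2 + sqrt(2 - n)) / 4
    return log(2 + sqrt(2 - n)) / 4

print(critical_K(1))  # Ising (n = 1): ~0.2747
print(critical_K(2))  # XY    (n = 2): ~0.1733
```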
tl;dr The single particle density matrix is directly related to NEGF as shown here. I wish to find a way to relate NEGF also to density matrices which describe the probability distribution of many body states, such as those encountered by solving Liouville-von Neumann based equations for many body problems. General question I'm wondering about the relation between Green functions and the density matrix. I wish to know if one can form a link between density matrix approaches based on the von Neumann equation, and its open system counterparts, i.e. Lindblad or Redfield equations. Motivation Master equation approaches (such as Lindblad or Pauli rate equations) require some approximations: Born, Markov, and secular. These aren't necessarily order consistent, and they have some well known subtle effects. I think I found a system in which some of these approximations yield undesirable side effects, and I'd like to try and perform the same calculation via NEGF for comparison. Even though I could compare expectation values for observables such as single-site occupations, which are directly obtained in the GF formalism, a more general comparison would be to compare to the raw results of Lindblad's equation. To that end I wish to find the density matrix which describes the probability distribution of Fock states via GF, as this is what I get from Lindblad. As can be seen in several sources, master equations can be obtained via diagrammatic expansion, which suggests to me that the diagrammatic technique and perhaps the NEGF formalism contains in some way the information I desire. For completeness such sources are: 1, 2, 3, 4, though I'm sure that there are more. Also the source I referenced below discusses the relation and shows that the relation between the two approaches is trivial when discussing a single particle problem, but how can one generalize it?
Main Question The intro above hinted that I'm interested in the reduced density matrix of a sub-system coupled to a thermal bath, which is out of equilibrium. Even though I'm actually interested in such a system, for the sake of simplicity let us now discuss a system with a finite number of sites at thermal equilibrium. The occupation probability of a specific site (and hence the single particle density matrix) can be written in terms of equal time lesser Green functions as shown here, by: $$ P_i=\langle d_i^\dagger (t) \;d_i (t)\rangle= -iG_{ii}^<(t,t) $$ I wish to know: Can I define a coherence (in the density matrix sense) in terms of lesser Green functions by $\rho_{ij}=\langle d_i^\dagger (t) \;d_j (t)\rangle= -iG_{ij}^<(t,t)$? Is this equivalent to the off-diagonals of the density matrix? Can these definitions be generalized to many particle states? The standard definition of the density matrix defines the probability distribution over the Hilbert space regardless of whether the states under discussion are single or many particle states. However I'm not sure how to generalize the Green function definition of probability to learn something about the occupation of many-particle states; it only teaches me about the occupation of a specific site. Can it be done, and if so how? Specific Example Let us discuss a specific example of a $N=3$ site system of fermions. The fact that I choose to discuss a fermionic system helps me as I cannot have more than 3 particles in my system and my Hilbert space is finite dimensional: $dim(V)=2^3=8$. Any linear operator which maps this space to itself can be expressed as an $8\times8$ matrix, and that includes the Green function for the system. As discussed in the comments this includes also the creation, annihilation, and projection operators onto states. First, in equilibrium I may just write $G^r(\omega)=(I(\omega+i\eta)-H)^{-1}$.
If on the other hand I wish to relate to NEGF for future use, I may write the equations of motion for the NEGF and solve them. At this point I'm already confused because the matrix form will yield an $8\times8$ matrix with 36 independent quantities (even though $G^r$ isn't hermitian, the entries above and below the diagonal aren't really independent). However if I think of it in terms of two point correlation functions: $G_{ij}^r(t)=-i\theta(t)\langle\{c_i(t),c_j^\dagger(0)\}\rangle$, I don't have that many options. What went wrong? As a side note, in response to comments (now in chat), the various Fock states can be written as products of creation operators acting on the vacuum $|\Omega\rangle$. For instance two of the 8 states will be: $$ |1,1,0\rangle=c_2^\dagger c_1^\dagger|\Omega\rangle \\ |0,0,1\rangle= c_3^\dagger|\Omega\rangle $$ and the projection operators onto these states would be: $$ P_{|1,1,0\rangle}=|1,1,0\rangle\langle1,1,0| \\ P_{|0,0,1\rangle}= |0,0,1\rangle\langle0,0,1| $$ If one wishes to write these projectors as matrices and chooses the basis outlined above as the standard basis, the trivial result is that the matrices are filled with $0$'s except for a single $1$ somewhere on the diagonal according to the specific projector.
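As a concrete companion to this example, here is a sketch (my construction, using a Jordan-Wigner-style tensor-product representation, which is not prescribed by the question) of the $N=3$ fermionic Fock space, checking the canonical anticommutation relations and the claim that the projectors are diagonal matrices with a single $1$:

```python
import numpy as np

# Jordan-Wigner construction of the three fermionic creation operators.
# The basis is the occupation-number (Fock) basis, so every operator below
# is an 8x8 matrix, matching dim(V) = 2^3 = 8 from the question.
N = 3
I2 = np.eye(2)
sz = np.diag([1.0, -1.0])
cr = np.array([[0.0, 0.0], [1.0, 0.0]])   # single-site raising operator |1><0|

def c_dag(i):
    """Creation operator on site i (0-based), with a sigma_z string for signs."""
    ops = [sz] * i + [cr] + [I2] * (N - i - 1)
    out = np.eye(1)
    for op in ops:
        out = np.kron(out, op)
    return out

c = [c_dag(i) for i in range(N)]

# canonical anticommutation relations: {c_i, c_j^dag} = delta_ij
for i in range(N):
    for j in range(N):
        anti = c[i].T @ c[j] + c[j] @ c[i].T
        assert np.allclose(anti, np.eye(8) if i == j else 0)

# the two Fock states from the question, built from the vacuum |Omega>
vac = np.zeros(8)
vac[0] = 1.0
s110 = c[1] @ c[0] @ vac     # |1,1,0> (up to a sign from operator ordering)
P110 = np.outer(s110, s110)  # projector |1,1,0><1,1,0|

# as stated: all zeros except a single 1 somewhere on the diagonal
assert np.allclose(P110, np.diag(np.diag(P110)))
assert np.isclose(np.trace(P110), 1.0)
```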
I'm swimming in the ocean and there's a thunderstorm. Lightning bolts hit ships around me. Should I get out of the water? In fresh water what makes lightning so dangerous to a swimmer is that most of the current travels on the surface of the water, so rather than getting a $1/r^2$ falloff in current density, you see a $1/r$ falloff. Obviously eventually it will be conducted down into the mass of the water, but this takes many meters. In salt water, this should happen much quicker. I'm not sure how the conductivity of the inside of your body compares to seawater. Even if it is less, some current would still flow through you. For normal dry skin, it takes considerable voltage to penetrate the skin (maybe a hundred volts); wet your skin with saltwater and you'll conduct electricity quite well! As a teenager playing with chemistry and water, that happened to me once: 12 volts AC and ionic solutions made for a pretty nasty shock. Normally 12 volts won't penetrate the skin, so I was unrealistically confident! I have a spark generator that makes roughly 20 kV sparks (from a capacitor); discharge it into water, and you see surface sparks spread from the point of entry in all directions. Here's a crude way to look at the problem: Suppose there are $N$ wires. Each has resistance $R$, common potential difference $V$ and they are connected in parallel. So the current through each wire is $I = \frac{V}{NR}$. Let's imagine a hypothetical wire formed by sea water which has a length $L$ and cross sectional area $S$. There are approximately $\frac{2\pi L^2}{S}$ of those wires in a hemisphere of radius $L$. The resistance of such a wire would be $\frac{\rho L}{S}$, where $\rho$ is the resistivity of sea water. The number of such wires that can be connected to your body (with area $A$) is $\frac{A}{S}$.
So the approximate current that will flow through your body is: $I = \frac{\frac{A}{S}V}{\frac{2\pi L^2}{S} \frac{\rho L}{S}} = \frac{AVS}{2\pi\rho L^3}$ Now assuming $L=100$, $\rho=0.25$, $A=1$, $V=100\,\text{MV}$, $S=10^{-2}$: $I \approx 0.6\ \text{A}$ The most important things to note are that $I \propto A$ and $I \propto \frac{1}{L^3}$. Either: it comes down very close to you, so the lightning will probably go through your head and fry you; or: it comes down some distance away and dissipates rather quickly. Salt water is probably even more conductive than your body, so the current might even flow right around you. I doubt it would make much of a difference. The danger from thunderstorms while you're in the water is really that your head is the most elevated thing in a large area, not the conductivity of water. While moving across a conductance, electricity tends to follow the more conductive (i.e. less resistive) path. In the salt-water case, it will, depending on the distance from you, mostly ignore you. However, if you are sufficiently close, since a lightning bolt has such a huge energy output, it can still be enough to fry you even though it mostly ignores you. I would feel uncomfortable swimming in a thunderstorm and you should too. Most probably the current just spreads in all directions and weakens quite fast (at least like $r^{-2}$, not counting resistance), so I don't think the hazard is much (in magnitude) larger than on land in similar conditions. EDIT: In what I found on the Internet, salty water has only 10 times better conductivity than wet soil; yet on land the wet soil layer lies on an insulating layer of dry soil, so the current is directed to rather "flood" than penetrate. My girlfriend and I were swimming in the ocean about 30 feet from a beach in Costa Rica as a storm was approaching. We saw occasional flashes of lightning, and I was counting the time it took to hear the thunder, which was at least 7-8 seconds at the fastest, and often 10 or more seconds.
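Plugging the assumed numbers into the estimate above confirms the arithmetic (reading $V = 100\,\text{MV}$ as $10^8$ volts):

```python
from math import pi

A = 1.0      # body contact area, m^2
V = 1e8      # lightning potential, volts (the "100 MV" above)
S = 1e-2     # cross-section of one hypothetical water wire, m^2
rho = 0.25   # resistivity of sea water, ohm * m
L = 100.0    # distance from the strike, m

I = A * V * S / (2 * pi * rho * L ** 3)
print(I)  # ~0.64 A, consistent with the ~0.6 A quoted above
```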
I figured as it was at least a mile or more away we were fine, but after a delay with no lightning or thunder, we had a sudden flash, that appeared closer, with thunder a split second later. I felt a slight buzz in my body and I almost didn't mention it to her as we scrambled to shore, but then she said "I just felt zapped!" I would describe it as similar to static shock of touching a door knob after walking across a dry rug, but more diffuse and softer.
Following this thread "Does a univariate random variable's mean always equal the integral of its quantile function?" I tried to do a similar thing for a conditional expectation. It seems like my stochastic skills are a bit rusty. For a continuous r.v. with support on the real line I think that it holds $ E[X|X<q_\theta] = \int_{-\infty}^\infty x f(x|x<q_\theta)dx = ... = \frac{1}{F(q_\theta)} \int_{-\infty}^{q_\theta} x f(x)dx = \frac{1}{\theta} \int_{0}^{\theta}F^{-1} (p) dp$ where $q_\theta$ is the $\theta$ quantile, $f(x)$ is the density, $F(x)$ is the cdf and $F^{-1}(x)$ is the quantile function. EDIT: My solution so far is $E[X|X<q_\theta]=\int xf(x|x<q_\theta)dx = \int x \frac{f(x)P(x<q_\theta|X=x)}{\int f(u)P(u<q_\theta|X=u)du}dx = \frac{1}{\int f(u) \mathbf{1}\{u<q_\theta\}du} \int x f(x) \mathbf{1}\{x<q_\theta\}dx= \frac{1}{F(q_\theta)} \int_{-\infty}^{q_\theta} xf(x)dx = \frac{1}{\theta} \int_{0}^{\theta}F^{-1} (p) dp$ using the relationship $f(x|B)=\frac{f(x)P(B|X=x)}{\int f(x)P(B|X=x)dx}$ Is this alright? Thanks a lot in advance for any suggestions!
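As a numerical sanity check of the claimed identity, here is a sketch using the standard exponential distribution (my choice of example, not from the post), where both integrals are easy to evaluate:

```python
import numpy as np

def trap(y, x):
    # simple trapezoidal rule (avoids version differences in numpy's own name)
    return float(np.sum((y[:-1] + y[1:]) * np.diff(x)) / 2)

# X ~ Exp(1): F(x) = 1 - e^{-x}, so F^{-1}(p) = -log(1 - p)
theta = 0.3
q = -np.log(1 - theta)                 # the theta-quantile

# left side: (1/F(q)) * integral_{-inf}^{q} x f(x) dx  (support starts at 0)
x = np.linspace(0.0, q, 200001)
lhs = trap(x * np.exp(-x), x) / theta

# right side: (1/theta) * integral_0^theta F^{-1}(p) dp
p = np.linspace(0.0, theta, 200001)
rhs = trap(-np.log(1 - p), p) / theta

print(lhs, rhs)   # both ~0.16776, so the identity checks out here
```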
The second matrix translates the eye [...] You don't do that in a projection matrix. You do that with your view matrix:

- Model (/Object) Matrix transforms an object into World Space
- View Matrix transforms all objects from world space to Eye (/Camera) Space (no projection so far!)
- Projection Matrix transforms from Eye Space to Clip Space

Therefore you don't do any matrix multiplications to get to a projection matrix. Those multiplications happen in the shader, where you compute the $Projection\cdot View \cdot Model$$^1$ matrix (for example in the vertex shader to specify your position output). Also, remember that a perspective projection changes angles (i.e. parallel lines won't be parallel anymore). As far as I can see, that is missing in your derivation. Edit As to how you actually derive it, I'll largely use the explanation by Etay Meiri. It has some additional information and illustration, so you may want to check it if something seems unclear. First, your camera is (as mentioned earlier) positioned in the origin. Now assume you will only project onto a plane at distance $d$, and that the plane you project onto is bounded by the aspect ratio $ar$ (that is, $ar = \frac{window\text{ }width}{window\text{ }height}$) in the positive and negative $x$ direction, and by $1$ in the positive and negative $y$ direction. You can now determine $d$ from your vertical field of view $\alpha$ (since, looking from the side, your camera, the center of the top of your projection plane and the center of the bottom of your projection plane build a triangle, and your vertical field of view is the angle at your camera): $\frac{1}{d} = \tan(\frac{\alpha}{2})\\\implies d = \frac{1}{\tan(\frac{\alpha}{2})}$ Now you go about calculating projected points. Assume you have an arbitrary point $v = (v_x, v_y, v_z)$, and you want to calculate the point on your plane $p = (p_x, p_y, d)$.
Looking at it from the side again, the triangle formed by your camera, the top center point of your projection plane, and the projected point is similar to the triangle formed by your camera, the top center point prolonged out to the same $z$ coordinate as your point $v$, and the point $v$ itself, i.e. they have the same angles$^2$. Therefore, you can now calculate the projected $p_x$ and $p_y$ coordinates: $\frac{p_y}{d} = \frac{v_y}{v_z} \implies p_y = \frac{v_y\cdot d}{v_z} = \frac{v_y}{v_z \cdot \tan(\frac{\alpha}{2})}\\\frac{p_x}{d} = \frac{v_x}{v_z} \implies p_x = \frac{v_x\cdot d}{v_z} = \frac{v_x}{v_z \cdot \tan(\frac{\alpha}{2})}$ To additionally take into account that the projection plane ranges from $-ar$ to $ar$ in the $x$ direction, you include a factor of $ar$ in the denominator so that the projected $x$ range becomes $\left[-1, 1\right]$: $p_x = \frac{v_x}{ar \cdot v_z \cdot \tan(\frac{\alpha}{2})}$ If you think about applying this to a matrix now, you need to take into account the division by $z$, which is different for any point with differing $z$ value. Therefore, the division by $z$ is deferred until after the projection matrix is applied (and is done without any code from you, i.e. for OpenGL you'd specify the gl_Position in Clip Space ($\implies$ after projection) and the division is done for you). The depth test (Z-Test) still needs the $z$ value though, so it needs to be saved from the $z$ divide. Therefore you copy your $z$ value to the $w$ component (which is what you got right in your assumption) and end up with the following matrix: $\left(\begin{array}{cccc}\frac{1}{ar\cdot\tan(\frac{\alpha}{2})}&0&0&0\\0&\frac{1}{\tan(\frac{\alpha}{2})}&0&0\\0&0&0&0\\0&0&1&0\end{array}\right)$ The next step is to not project onto a plane, but into a $z$ range of $\left[-1, 1\right]$ (this range is for correct clipping, and is the same range used when handling the $x$ and $y$ coordinates).
To be more specific, you don't want all points to end up in that range, but all points between your near and your far plane, so you map $[n, f]$ to $[-1, 1]$. The resulting range is $2$, so you first scale your near-to-far range to $2$. Then you move it to be $[-1, 1]$: $f(z) = A\cdot z + B$ And now, taking into account that we want to save the $z$ value from the $z$ divide, we get to $f(z) = A + \frac{B}{z}$ Mapping this scaling to $[-1, 1]$ is a little bit of leg work: you know that any point with $z = n$ (on your near plane) will be projected to $p_z = -1$ and any point with $z = f$ (on your far plane) will be projected to $p_z = 1$. This leads to the following equation system: $A + \frac{B}{n} = -1\\A + \frac{B}{f} = 1$ Solving this for $A$ and $B$ leads to$^3$: $A = \frac{-n-f}{n-f}\\B = \frac{2fn}{n-f}$ The third row of your projection matrix must produce the (undivided) projected $z$ value $p_z$. Therefore, you can now choose the individual elements of said row $(\begin{array}{cccc}a&b&c&d\end{array})$ such that the correct $z$ value is produced when multiplying with your point. The correct $p_z$ value is (as established earlier): $p_z = A\cdot z + B$ Therefore this must be the right hand side of your multiplication. Assume your point to project is $(x, y, z, w)$; then you must achieve the following: $a \cdot x + b \cdot y + c \cdot z + d \cdot w = A \cdot z + B$ Obviously, your point's $x$ and $y$ coordinates should not influence the projected $z$ coordinate, so you can set $a$ and $b$ to $0$. $c \cdot z + d \cdot w = A \cdot z + B$ Since you know that $w$ for any point is $1$, you're left with $c \cdot z + d = A \cdot z + B$ And thus, you have $c = A$ and $d = B$.
Now your matrix is complete for projection: $\left(\begin{array}{cccc}\frac{1}{ar\cdot\tan(\frac{\alpha}{2})}&0&0&0\\0&\frac{1}{\tan(\frac{\alpha}{2})}&0&0\\0&0&\frac{-n-f}{n-f}&\frac{2fn}{n-f}\\0&0&1&0\end{array}\right)$ Obviously, the tutorial works a little differently in projecting with the vertical field of view, so we can take a look at how you can additionally get to that (with the help of Eric Lengyel): Instead of projecting onto a plane at $z = d$, we will project onto the near plane, and thus get for a point $v = (v_x, v_y, v_z)$ the projected point $p = (p_x, p_y, n)$: $p_x = \frac{v_x \cdot n}{v_z}$ Additionally, you want to map any point with $l\leq x \leq r$ to $\left[-1, 1\right]$, as before. Thus you get $f(x) = (x-l) \frac{2}{r-l}-1$ Combine those two and you achieve $p_x = \frac{2n}{r-l}\left(-\frac{v_x}{v_z}\right)- \frac{r+l}{r-l}$ Now put this (and the term for $y$) into the matrix and you get to the first row, first column being $\frac{2n}{r-l}$, and the first row, third column being $\frac{r+l}{r-l}$. Since $r$ and $l$ may in general be different, here you can see why those two matrices differ. In the tutorial's matrix the projection is symmetric ($r = ar$, $l = -ar$), and therefore you get $r+l = ar-ar = 0$ in the numerator, making the first row, third column (and the second row, third column accordingly) $0$. $^1$ Assuming you have column-major matrices like in OpenGL $^2$ For the $x$ calculation, you will need to take a different point of your projection plane of course, but the idea is the same $^3$ The differences of the sign come from how you orient your camera: in your assumption it is along the negative $z$ axis, whereas in the tutorial it is along the positive $z$ axis
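The final matrix can be checked numerically. This sketch (mine, following the +z camera convention used in the derivation above) builds the matrix and verifies that points on the near and far planes land on $-1$ and $+1$ after the deferred $z$ divide:

```python
import numpy as np
from math import tan, radians

def perspective(fov_deg, ar, n, f):
    """Projection matrix exactly as derived above (camera looks along +z)."""
    t = tan(radians(fov_deg) / 2)
    A = (-n - f) / (n - f)
    B = 2 * f * n / (n - f)
    return np.array([
        [1 / (ar * t), 0,     0, 0],
        [0,            1 / t, 0, 0],
        [0,            0,     A, B],
        [0,            0,     1, 0],
    ])

P = perspective(90.0, 16 / 9, n=0.1, f=100.0)

for z in (0.1, 100.0):                 # points on the near and far plane
    clip = P @ np.array([0.0, 0.0, z, 1.0])
    ndc_z = clip[2] / clip[3]          # the deferred divide by w (= z)
    print(z, ndc_z)                    # near -> -1, far -> +1
```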
This question comes from Georgi, Lie Algebras in Particle Physics. Consider the algebra generated by $\sigma_a\otimes1$ and $\sigma_a\otimes \eta_1$ where $\sigma_a$ and $\eta_1$ are Pauli matrices (so $a=1,2,3$). He claims this is "semisimple, but not simple". To me, that means we should look for an invariant subalgebra (a two-sided ideal). The multiplication table is pretty easy to figure out: $[\sigma_a,\sigma_b]=i\epsilon_{abc}\sigma_c,$ $[\sigma_a,\sigma_b\otimes\eta_1]=i\epsilon_{abc}\sigma_c\otimes\eta_1$ $[\sigma_a\otimes\eta_1,\sigma_b\otimes\eta_1]=i\epsilon_{abc}\sigma_c\otimes1$ I'm dropping off the identity in all the places where it looks like it should be. So the only subalgebra is the $\mathfrak{su}(2)$ generated by $\sigma_a\otimes 1$, and that is not invariant from the second line above. So this looks like a simple algebra to me. Is there a typo somewhere I do not see? This post has been migrated from (A51.SE)
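The multiplication table can be verified numerically. A sketch (mine), noting that for bare Pauli matrices the structure constants carry a factor of $2$, i.e. $[\sigma_a,\sigma_b]=2i\epsilon_{abc}\sigma_c$; the table above absorbs it by normalizing the generators:

```python
import numpy as np

s = [np.array([[0, 1], [1, 0]], dtype=complex),      # sigma_1
     np.array([[0, -1j], [1j, 0]], dtype=complex),   # sigma_2
     np.array([[1, 0], [0, -1]], dtype=complex)]     # sigma_3
I2 = np.eye(2)
eta1 = s[0]   # eta_1: the first Pauli matrix acting on the second factor

def comm(A, B):
    return A @ B - B @ A

# Levi-Civita symbol
eps = np.zeros((3, 3, 3))
for (i, j, k), sign in {(0, 1, 2): 1, (1, 2, 0): 1, (2, 0, 1): 1,
                        (0, 2, 1): -1, (2, 1, 0): -1, (1, 0, 2): -1}.items():
    eps[i, j, k] = sign

for a in range(3):
    for b in range(3):
        # [sigma_a (x) eta_1, sigma_b (x) eta_1] = 2i eps_abc sigma_c (x) 1
        target = sum(2j * eps[a, b, c] * np.kron(s[c], I2) for c in range(3))
        assert np.allclose(comm(np.kron(s[a], eta1), np.kron(s[b], eta1)), target)
        # [sigma_a (x) 1, sigma_b (x) eta_1] = 2i eps_abc sigma_c (x) eta_1
        target2 = sum(2j * eps[a, b, c] * np.kron(s[c], eta1) for c in range(3))
        assert np.allclose(comm(np.kron(s[a], I2), np.kron(s[b], eta1)), target2)
```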
Consider the stage game: Let $\delta\in(0,1)$ be the discount factor. Let $G$ be the symmetric grim trigger strategy profile. The payoffs are then $$E_{A}(G) = E_{B}(G) = \sum_{i=0}^{\infty}3\delta^{i} = \frac{3}{1-\delta}. $$ If a player were to defect from $G$ at time $n$, they would be playing $N$ at time $n$, and every round after they would play $Y$ because that would maximize their stage game payoff given the other player is going to play $N$ forever. Let $G_{n}^{A}$ and $G_{n}^{B}$ be the strategy profiles where $A$ and $B$, respectively, defect from $G$ at time $n$ in this way. By symmetry, we have \begin{align*} E_{A}(G_{n}^{A}) = E_{B}(G_{n}^{B}) = \sum_{i=0}^{n-1}3\delta^{i} + 7\delta^{n} + \sum_{i=n+1}^{\infty}\delta^{i} &= 3\frac{1-\delta^{n}}{1-\delta} + 7\delta^{n} + \frac{1}{1-\delta} - \frac{1-\delta^{n+1}}{1-\delta} \\ &= \frac{3+\delta^{n}(4-6\delta)}{1 - \delta}. \end{align*} If $\delta \geq \frac{2}{3}$, then $E_{A}(G_{n}^{A}) \leq E_{A}(G)$ (and $E_{B}(G_{n}^{B}) \leq E_{B}(G)$) for all $n$, so it's not clear to me why the symmetric grim trigger strategy profile is not a Nash equilibrium in this case. My professor claims that $G$ is not a Nash equilibrium for any $\delta$. I do not believe it.
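The closed form above can be checked numerically (a sketch using the payoffs implied in the question: 3 per round on the equilibrium path, 7 in the deviation round, 1 per round thereafter):

```python
def coop_payoff(delta):
    # E(G) = sum of 3 * delta^i = 3 / (1 - delta)
    return 3 / (1 - delta)

def deviation_payoff(delta, n):
    # 3 per round before time n, 7 at time n, 1 per round afterwards
    on_path = 3 * (1 - delta ** n) / (1 - delta)
    return on_path + 7 * delta ** n + delta ** (n + 1) / (1 - delta)

def closed_form(delta, n):
    return (3 + delta ** n * (4 - 6 * delta)) / (1 - delta)

delta = 0.8   # any delta >= 2/3
for n in range(5):
    assert abs(deviation_payoff(delta, n) - closed_form(delta, n)) < 1e-9
    # for delta >= 2/3 the deviation never pays, as derived above
    assert deviation_payoff(delta, n) <= coop_payoff(delta)
```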
I have the following question: For the passive RC low pass filter shown below: $$V_S(t)=\cos(t)+\cos(100t)$$ $$V_0(t)=\alpha\cos(t+\theta)+\beta\cos(100t+\phi)$$ (where \$\alpha\$, \$\beta\$, \$\theta\$ and \$\phi\$ are constants) The value of \$\displaystyle \left|\frac{\alpha}{\beta}\right|\$ is? The answer is \$10\$ I have tried using the transfer function of an RC low pass filter: $$H(s)=\frac{1}{sRC+1}=\frac{10}{s+10}$$ $$\implies h(t)=10e^{-10t}u(t)$$ Then I tried using the two given equations with \$h(t)\$ as below: $$h(t)=\frac{V_0(t)}{V_S(t)}$$ After that I am stuck. Even after trying with the initial value theorem as \$h(0)=\lim_{s \to \infty}sH(s)\$, I end up with: $$\alpha\cos(\theta)+\beta\cos(\phi)=20$$ Can someone please tell me if I am missing something or if the question doesn't provide enough information?
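As a quick numerical check of the stated answer, one can evaluate the magnitude of the given transfer function on the imaginary axis at the two input frequencies (this checks the ratio only, not the time-domain approach attempted above):

```python
# H(s) = 10 / (s + 10), evaluated at s = j*omega
def H(w):
    return 10 / (1j * w + 10)

alpha = abs(H(1.0))      # gain seen by the cos(t) component
beta = abs(H(100.0))     # gain seen by the cos(100t) component
print(alpha / beta)      # ~10.0, since sqrt(10100) / sqrt(101) = sqrt(100)
```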