Query regarding a statement on information theory
Question: In the book "Information Theory, Coding and Cryptography" by R. Bose, it says that: "Any information source produces an output that is random in nature. If the source output had no randomness, that is, the output were well known exactly, then there would be no need to transmit it." I cannot grasp the meaning of this. What does it mean that an output is random? And why would there be no need to transmit the information if the output were known exactly? Please help. Answer: If you could foretell the message perfectly, then no knowledge would need to be sent to you to reconstruct the message. For example, suppose the message were one period of a pseudorandom sequence generated by a feedback shift register, or something like a (very similar idea) Mersenne Twister, and you knew the feedback links / recurrence relationship that would produce it. Then there would be no need to send the pseudorandom sequence to you - you could reconstruct it yourself. Actually, the quoted passage is giving you a definition of randomness and, in particular, of a random sequence, for the purposes of communication / information theory. Randomness is a property that is relative to a certain observer / receiver / witness: it is the property that the observer cannot foretell the message, and it arises from that observer's ignorance. The information that needs to be sent to the observer is precisely the knowledge that observer needs to reconstruct the message fully.
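The shift-register example can be sketched in a few lines. This is a minimal illustration, not from the answer: a shared seed stands in for knowing the feedback links, and Python's built-in generator stands in for the shift register / Mersenne Twister.

```python
import random

# Both sides agree on the generator and its seed in advance -- this
# stands in for knowing the shift register's feedback connections.
SHARED_SEED = 12345

def sender_message(n):
    """The 'random' message: n bits from a seeded generator."""
    rng = random.Random(SHARED_SEED)
    return [rng.getrandbits(1) for _ in range(n)]

def receiver_reconstruction(n):
    """The receiver never sees the sender's bits; it regenerates them."""
    rng = random.Random(SHARED_SEED)
    return [rng.getrandbits(1) for _ in range(n)]

# The sequences agree bit for bit, so zero bits of the message itself
# had to be transmitted -- the message has no randomness for this observer.
assert sender_message(100) == receiver_reconstruction(100)
```

To an observer who knows the seed and the recurrence, the sequence carries no information; to one who doesn't, every bit must be sent.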
{ "domain": "physics.stackexchange", "id": 36787, "tags": "computational-physics, information" }
Is Faraday's law of induction valid for a partial conducting loop?
Question: If a conductive loop is partial or incomplete (w.r.t. $A$), is Faraday's law of induction still valid? $$\varepsilon = -\frac{\mathrm{d} \Phi_B}{\mathrm{d} t}$$ Intuitively it seems possible to define the flux $\Phi_B$; however, I must take into account the geometric changes w.r.t. the area since it's partial: if it is half, as diagrammed, then it's logical to adjust the area to $\frac{A}{2}$ by a simple symmetry. Edit Note: The question is aimed mostly at a partial conducting loop. Answer: (a) Faraday's law applies to a complete loop. No parts of the loop need to be conducting. To say that there's an emf along a path in the form of a loop means that if a charge goes round (or is taken round) that loop (in the sense given by Lenz's law), work is done on it by the non-conservative electric field associated with the changing magnetic field. (b) In general it is meaningless to talk about the emf induced in an isolated segment of conductor in a changing magnetic field, because we are free to complete the loop in any way we please, and different loops will have different emfs induced in them. For example, in your left-hand diagram the boundary of the grey area will have twice the emf around it as an area bounded by your conducting path and a diameter of the grey area. (c) But suppose the loop has been defined, e.g. as the boundary of the grey circular patch in the left-hand diagram. It would be hard not to agree that, by symmetry, the emf in the conducting portion is half that induced in the whole loop. Likewise, I want to say that the emf in each side of the perimeter of the square is a quarter that induced in the whole perimeter. But that doesn't entitle us to say that we've now found the emf in the conducting part of either loop, viewed as an isolated conductor, because we're back to (b) again! [This answer has been extensively rewritten. The material now replaced is below... So in your diagrams we can talk about the emfs along the whole perimeters of the grey areas. 
Any partitioning of emfs into parts along the conducting bits and the non-conducting bits would be quite arbitrary. Take your left hand diagram; we could perfectly well complete the brown conducting path by going underneath the conducting part rather than above it. Then, supposing that the changing magnetic field extended below the conducting path, we'd have an emf in the opposite sense around the loop! This makes nonsense of attempts to assign an emf to an isolated segment of a conductor.]
{ "domain": "physics.stackexchange", "id": 55763, "tags": "electromagnetism, classical-electrodynamics" }
Corrosion of Aluminum by Baking soda
Question: An aqueous solution of sodium hydrogen carbonate corrodes aluminum foil quite readily. How do you explain this? Answer: Sodium aluminate is produced. The reaction is slower than the usual $\ce{NaOH + Al}$ reaction and liberates $\ce{CO2}$.
{ "domain": "chemistry.stackexchange", "id": 740, "tags": "acid-base" }
Do catalysts shift equilibrium constant towards 1?
Question: I want to be able to understand shifts in equilibrium from the Maxwell–Boltzmann distribution. One thing I cannot get my head around is the effect of catalysts on the equilibrium position - supposedly there is none at all. But consider these diagrams of the M–B distribution of both reactants and products of a reversible reaction. The top diagram shows the distribution of the products of the reaction which equilibrium favours. It has a bigger concentration of particles. The line dividing the shaded region on these diagrams represents the activation energy, and the $dE$ represents the shift in activation energy due to the catalyst. We can see that the shaded regions satisfy $A_1=A_2$ because forward and backward rates are the same at eqm. But clearly, the rate of the top reaction increases more, as a greater fraction of particles have energies above the activation energy. So adding a catalyst should shift equilibrium to the middle? Basically, catalysts work better on reactants with higher concentration. So shouldn't the side with the lower concentration be favoured? Answer: TL;DR Your Maxwell–Boltzmann diagram up there is not sufficient to describe the variation of rate with $E_\mathrm{a}$. Simply evaluating the shaded area alone does not reproduce the exponential part of the rate constant correctly, and therefore the shaded area should not be taken as a quantitative measure of the rate (only a qualitative one). There is a subtle issue with the way you've presented your drawing. However, we'll come to that slightly later. 
First, let's establish that the "proportion of molecules with sufficient energy to react" is given by $$P(\varepsilon) = \exp \left(-\frac{\varepsilon}{kT}\right) \tag{1}$$ Therefore, for a reaction $\ce{X <=> Y}$ with uncatalysed forward activation energy $E_\mathrm{f}$ and uncatalysed backward activation energy $E_\mathrm{b}$, the rate constants are given by $$k_\mathrm{f,uncat} = A_\mathrm{f} \exp \left(-\frac{E_\mathrm{f}}{kT}\right) \tag{2} $$ $$k_\mathrm{b,uncat} = A_\mathrm{b} \exp \left(-\frac{E_\mathrm{b}}{kT}\right) \tag{3} $$ The equilibrium constant of this reaction is given by $$K_\mathrm{uncat} = \frac{k_\mathrm{f,uncat}}{k_\mathrm{b,uncat}} = \frac{A_\mathrm{f}\exp(-E_\mathrm{f}/kT)}{A_\mathrm{b}\exp(-E_\mathrm{b}/kT)} \tag{4}$$ As you have noted, the change in activation energy due to the catalyst is the same for the forward and backward reactions. I would be a bit careful with using "$\mathrm{d}E$" as the notation for this, since $\mathrm{d}$ implies an infinitesimal change, and if the change is infinitesimal, your catalyst isn't much of a catalyst. So, I'm going to use $\Delta E$. We then have $$k_\mathrm{f,cat} = A_\mathrm{f} \exp \left(-\frac{E_\mathrm{f} - \Delta E}{kT}\right) \tag{5} $$ $$k_\mathrm{b,cat} = A_\mathrm{b} \exp \left(-\frac{E_\mathrm{b} - \Delta E}{kT}\right) \tag{6} $$ and the new equilibrium constant is $$\begin{align} K_\mathrm{cat} = \frac{k_\mathrm{f,cat}}{k_\mathrm{b,cat}} &= \frac{A_\mathrm{f}\exp[-(E_\mathrm{f} - \Delta E)/kT]}{A_\mathrm{b}\exp[-(E_\mathrm{b} - \Delta E)/kT]} \tag{7} \\[0.2cm] &= \frac{A_\mathrm{f}\exp(-E_\mathrm{f}/kT)}{A_\mathrm{b}\exp(-E_\mathrm{b}/kT)} \frac{\exp(\Delta E/kT)}{\exp(\Delta E/kT)} \tag{8} \\[0.2cm] &= \frac{A_\mathrm{f}\exp(-E_\mathrm{f}/kT)}{A_\mathrm{b}\exp(-E_\mathrm{b}/kT)} \tag{9} \end{align}$$ Equations $(9)$ and $(4)$ are the same, so there is no change in the equilibrium constant. The question then arises as to how eq. $(1)$ is obtained. 
The simplest way is to invoke a Boltzmann distribution, which almost by definition gives the desired form. However, since you have a Maxwell–Boltzmann curve, I guess I should talk about it a bit more. The fraction of molecules with energy $E_\mathrm{a}$ or greater is simply the shaded area under the curve, i.e. one can obtain it by integrating the curve over the desired range. $$P(\varepsilon) = \int_{E_\mathrm{a}}^\infty f(\varepsilon)\,\mathrm{d}\varepsilon \tag{10}$$ where the Maxwell–Boltzmann distribution of energies is given by (see Wikipedia) $$f(\varepsilon) = \frac{2}{\sqrt{\pi}}\left(\frac{1}{kT}\right)^{3/2} \sqrt{\varepsilon} \exp\left(-\frac{\varepsilon}{kT}\right) \tag{11}$$ At first glance, we would expect this to be directly proportional to the exponential part of the rate constant, i.e. $\exp(-E_\mathrm{a}/kT)$. Alas, it is not that simple. If you try to work out the integral $$\int_{E_\mathrm{a}}^{\infty} \frac{2}{\sqrt{\pi}}\left(\frac{1}{kT}\right)^{3/2} \sqrt{\varepsilon} \exp\left(-\frac{\varepsilon}{kT}\right) \,\mathrm{d}\varepsilon \tag{12}$$ you don't get anything close to the form of $\exp(-E_\mathrm{a}/kT)$. Instead, you get some "error function" rubbish, and some nasty square roots and exponentials. (You can use WolframAlpha to verify this.) Why is this so? Well, it turns out that there are other terms that also depend on $\varepsilon$ and therefore need to go inside that integral (they aren't constants and can't be taken out). The simplest example is that faster molecules tend to collide more often, so even though the right-hand tail of the diagram seems to contribute very little to the "proportion of molecules with sufficient energy", it actually contributes more significantly to the overall rate because these molecules collide more often. In collision theory this is described using the "relative velocity" of the particles $v_\mathrm{rel}$. 
There is also another complication, in that in the Maxwell–Boltzmann distribution the direction of the particles is not accounted for. (For more insight please refer to Levine, Physical Chemistry 6th ed., p 467.) Therefore, there has to be yet another term that takes into account the direction of movement of the particles. The idea is that a head-on collision between two molecules is more likely to overcome the activation barrier than a $90^\circ$ collision is. The term that compensates for this is the "collision cross-section" $\sigma$. If you go through the maths (and I don't really intend to type it out here, it's rather long, but I will give some references) then you will find that at the end you will recover the form $\exp(-\varepsilon/kT)$. Once you have arrived at this, it's very straightforward to see that the increases in rate of both the forward and backward reaction cancel each other out. Now, as for the promised references, Pilling and Seakins's Reaction Kinetics, pp 61-2, has a short outline of the proof. Atkins's Physical Chemistry 10th ed. has a slightly longer proof on pp 883-4.
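The cancellation of the catalyst's effect on the equilibrium constant is easy to check numerically. This is a small sketch with made-up values for $A_\mathrm{f}$, $A_\mathrm{b}$, $E_\mathrm{f}$, $E_\mathrm{b}$, and $\Delta E$ (none of these numbers come from the answer; they are purely illustrative):

```python
import math

k_B_T = 4.11e-21                # J, thermal energy at ~298 K
A_f, A_b = 1.0e13, 5.0e12       # illustrative pre-exponential factors, s^-1
E_f, E_b = 8.0e-20, 6.0e-20     # illustrative activation energies, J
dE = 2.0e-20                    # catalyst lowers BOTH barriers by the same amount

def rate_constant(A, Ea):
    """Arrhenius form: k = A exp(-Ea / kT)."""
    return A * math.exp(-Ea / k_B_T)

K_uncat = rate_constant(A_f, E_f) / rate_constant(A_b, E_b)
K_cat = rate_constant(A_f, E_f - dE) / rate_constant(A_b, E_b - dE)

# Both rate constants grow, but their ratio -- the equilibrium constant --
# is unchanged, exactly as equations (7)-(9) show.
assert rate_constant(A_f, E_f - dE) > rate_constant(A_f, E_f)
assert math.isclose(K_cat, K_uncat)
```

The exponential factors $\exp(\Delta E/kT)$ cancel in the ratio no matter how large $\Delta E$ is, which is the whole point of the derivation.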
{ "domain": "chemistry.stackexchange", "id": 7915, "tags": "physical-chemistry, equilibrium, energy, catalysis" }
Maxwell relation for battery undergoing reaction
Question: I am trying to solve Reif Problem 5.16, and seem to continue to run into a barrier. I here give the problem and solution from the solution manual, as well as my work. This is not a homework question. I am an engineering student self-studying this over the summer. We begin with the total differential of $E$, where we take as independent variables $\nu, V, T$ and we have used $dT=dV=0$ in the given process. $$dE = \left(\frac{\partial E}{\partial \nu} \right)_{V,T} d\nu$$ Thus it remains to express the derivative above in terms of known quantities. We can do this by taking $S$ in terms of the chosen independent variables, from which we obtain the expressions (by taking the total differential $dS$, using equation (2) from the problem statement and "dividing" by $dT$ and $d\nu$ at constant "whatever" to get the required partial derivative in the total differential) $$\left(\frac{\partial E}{\partial \nu} \right)_{V,T}=T\left(\frac{\partial S}{\partial \nu} \right)_{V,T}-zf\Upsilon$$ and $$\left(\frac{\partial S}{\partial T} \right)_{\nu,V}=\frac{\left(\frac{\partial E}{\partial T} \right)_{\nu,V}}{T}$$ It is the next step I do not understand. Which Maxwell relation is being used in the solution? I am trying to use the equality of mixed partials in the second equation to substitute away the $S$ derivative, but I don't see where this comes from. Answer: Remember that Maxwell relations come from the fact that some quantities are state functions, so you can permute the partial derivatives. In your case, you want to transform $\left(\frac{\partial S}{\partial \nu}\right)_T$, so you want to identify $S$ as the partial derivative of something depending on $\nu,T$ with respect to $T$. The natural candidate is the free energy. 
Using the differentials: $$ dE = TdS-zf\Upsilon d\nu\\ dF = -SdT-zf\Upsilon d\nu $$ (the second one because $F$ is a Legendre transform of $E$) you get: $$ \left(\frac{\partial S}{\partial \nu}\right)_T = \left(\frac{\partial (zf\Upsilon)}{\partial T}\right)_\nu \\ = zf\left(\frac{\partial \Upsilon}{\partial T}\right)_\nu \\ $$ as written in the solution. Note that I followed your convention, thinking in terms of fixed volume. However, if you look at the problem, it's at fixed pressure, so in the above reasoning you should substitute $E$ by $H$ (enthalpy) and $F$ by $G$ (free enthalpy or Gibbs potential). Hope this helps.
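The "permute the partial derivatives" step can be checked numerically with finite differences. A minimal sketch, assuming an arbitrary made-up smooth function standing in for the free energy $F(T,\nu)$ (the coefficients below are illustrative, not physical):

```python
import math

def F(T, nu):
    """Any smooth 'free energy' will do; this one is made up."""
    return -2.0 * T * math.log(T) - 0.7 * T * nu + 0.3 * nu**2

h = 1e-3   # central-difference step

def S(T, nu):     # S = -(dF/dT)_nu, from dF = -S dT - zfY dnu
    return -(F(T + h, nu) - F(T - h, nu)) / (2 * h)

def zfY(T, nu):   # zf*Upsilon = -(dF/dnu)_T
    return -(F(T, nu + h) - F(T, nu - h)) / (2 * h)

def dS_dnu(T, nu):   # (dS/dnu)_T
    return (S(T, nu + h) - S(T, nu - h)) / (2 * h)

def dzfY_dT(T, nu):  # (d(zf*Upsilon)/dT)_nu
    return (zfY(T + h, nu) - zfY(T - h, nu)) / (2 * h)

# Maxwell relation: mixed second derivatives of the state function F commute,
# so (dS/dnu)_T == (d(zf*Upsilon)/dT)_nu at any state point.
assert abs(dS_dnu(300.0, 1.5) - dzfY_dT(300.0, 1.5)) < 1e-4
```

Both sides are (minus) the same mixed second derivative of $F$, which is why identifying $S$ as $-(\partial F/\partial T)_\nu$ makes the relation immediate.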
{ "domain": "physics.stackexchange", "id": 88626, "tags": "thermodynamics, physical-chemistry, batteries, electrochemistry" }
How are sounds recorded and reproduced (played back)?
Question: One day I suddenly got to wondering how sounds are recorded and reproduced. I searched a little bit, but the information I found was composed of difficult words and content, so I couldn't understand it. According to what I've read, the reason sounds can be reproduced is a vibration of a diaphragm(?). But I think that doesn't make sense, because how can one thing reproduce various sounds having different waveforms? To me, it sounds like saying violins can produce the sounds of flutes. (Of course in this video a piano imitates a human voice, but what I'm talking about is a waveform, not a synthesis of waves. (I have totally no idea how a piano can make a human voice, but my guess is as follows: splitting a recorded sound over time → mimicking each short wave by using piano sounds.)) After thinking for a while, I realized that my question divides into 3 parts, and I would like to know how all of them are achieved in both analog and digital manners. (1) Capturing 'what' of sounds? (recording) // Maybe sound pressure? volume (amplitude)? but how? (2) How are the recorded data stored? (3) How can the recorded data be reproduced into real sound? I read that digital sound recording is achieved by sampling, quantization, and encoding, but I think only the first one is related to this question. As I commented on (1), sampling what and how? Currently I'm absolutely, seriously curious about it. I really would like to know. It would be appreciated if you share some of your knowledge with me. Thank you for reading this long question. (Please let me know if this is not the right place to ask this kind of question.) Answer: A big question. Please have a look at this first: https://en.wikipedia.org/wiki/Sound_recording_and_reproduction. Sound is highly related to vibration. Sound is generated by the vibration of a sound source, and you can hear a sound because of the vibration of your eardrum. 
Sound Recording Sound pressure is the local pressure deviation from the ambient (average or equilibrium) atmospheric pressure, caused by a sound wave, and often denoted by $p$, or more precisely, $p(x, y, z, t)$, which means that it is a function of position $(x, y, z)$ and time $t$. Sound pressure is a scalar quantity that has no direction. If we fix the location $(x, y, z)$ as constant, the sound pressure only depends on the time variable $t$, which makes it a simple $p(t)$. Therefore, we can plot the sound pressure varying against time: the x-axis is time and the y-axis is sound pressure. A pretty simple curve. Now we want to record this curve using a microphone. The microphone has a membrane which is affected by the force generated by the sound pressure ($F=pS$), resulting in a displacement. This vibration signal is then converted to an electrical signal (by electromagnetic induction), which is finally digitized and stored. Sound Reproduction Since we have successfully recorded a sound signal, we now send this digital signal into a digital-to-analog converter to get an analog electrical signal. This electrical signal is used to excite a loudspeaker, making the diaphragm vibrate and radiate a sound wave into the air. As you may already have noticed, the microphone and loudspeaker realize opposite energy-conversion processes - acoustic to electric and electric to acoustic - so they are both called electroacoustic transducers.
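The digitizing step - sampling and quantizing the pressure curve $p(t)$ - can be sketched in a few lines. The 1 kHz tone, 8 kHz sample rate, and 8-bit depth below are illustrative choices, not values from the answer:

```python
import math

SAMPLE_RATE = 8000   # samples per second (illustrative)
BITS = 8             # quantizer resolution (illustrative)
LEVELS = 2 ** BITS   # 256 distinct codes

def pressure(t):
    """A toy sound-pressure signal: a 1 kHz pure tone, normalized to [-1, 1]."""
    return math.sin(2 * math.pi * 1000 * t)

# Sampling: measure p(t) at regular instants (here, 10 ms of signal).
samples = [pressure(n / SAMPLE_RATE) for n in range(SAMPLE_RATE // 100)]

# Quantization: map each sample in [-1, 1] to one of 256 integer codes.
codes = [round((s + 1) / 2 * (LEVELS - 1)) for s in samples]

# Playback side: decode each code back to an approximate pressure value.
decoded = [c / (LEVELS - 1) * 2 - 1 for c in codes]

# The round trip loses at most one quantization step per sample.
max_err = max(abs(s - d) for s, d in zip(samples, decoded))
assert max_err <= 1 / (LEVELS - 1)
```

The stored integers are the "recorded data" of part (2); the decode step plus a loudspeaker is part (3). More bits per sample shrink the quantization error, and a higher sample rate extends the representable frequency range.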
{ "domain": "dsp.stackexchange", "id": 10845, "tags": "audio, sound, wave" }
Deciding whether a Turing machine decides a language $L$ in at most $n^2$ steps
Question: Let $L$ be a language for which there exists some Turing machine deciding it in at most $n^2$ steps. Is it decidable whether a given Turing machine $M$ decides $L$ and runs in at most $n^2$ steps? I expect the answer to always be "No", regardless of $L$, but I fail to see exactly how. Answer: This problem is indeed undecidable, assuming that $n$ is not a constant but refers to the length of the machine's input. Consider the problem $P$ of, given a Turing machine $\mathcal{M}$, deciding if it runs in at most $|x|$ steps on every input $x$. Problem $P$ is undecidable. For any constants $c,c' > 0$, let $Q_{L,c,c'}$ be the (promise) problem of: given a Turing machine $\mathcal{M}$ such that, for every $x$, either $\mathcal{M}$ halts on $x$ in at most $c\cdot |x|^2 + c'$ steps, or does not halt, decide if it runs in at most $c\cdot |x|^2 + c'$ steps on every input $x$ and decides $L$. Claim: There exist $c,c'>0$ such that problem $P$ reduces to problem $Q_{L,c,c'}$. Proof: Let $\mathcal{N_L}$ be the Turing machine which decides $L$ in at most $n^2$ steps. The reduction maps $\mathcal{M}$ to the machine $\mathcal{M}'$ which, on input $x$: first, it simulates $\mathcal{M}$ on input $x$ for $|x|$ steps; if the machine didn't stop, it loops forever; otherwise, it simulates $\mathcal{N}_L$ on $x$ for $|x|^2$ steps, and returns its output. Then $\mathcal{M}$ runs in at most $|x|$ steps for every input $x$ if and only if $\mathcal{M}'$ decides $L$ and runs in at most $c\cdot |x|^2 + c'$ steps for every input $x$, for some constants $c,c'$. Then, using the linear speedup theorem, you can show that the following problem is undecidable: given $\mathcal{M}$, does $\mathcal{M}$ decide $L$ and halt in at most \begin{cases} |x|^2 \text{ steps} & \text{on every input $x$ of size at least $n_0$}, \\ k \text{ steps} & \text{on every input $x$ of size less than $n_0$}, \\ \end{cases} for some constants $n_0, k \in \mathbb{N}$. 
This problem differs from yours only on finitely many inputs, and so your problem is undecidable.
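The shape of the reduction $\mathcal{M} \mapsto \mathcal{M}'$ can be sketched concretely. In this toy model (entirely illustrative, not a real TM simulator), a "machine" is a Python function taking an input string and a step budget and returning its answer, or None if the budget was exhausted; None also stands in for "loop forever":

```python
def build_M_prime(M, N_L):
    """The reduction from the answer: M'(x) simulates M on x for |x| steps;
    if M hasn't halted, M' diverges; otherwise M' runs N_L on x for |x|^2
    steps and returns its output (so M' decides L)."""
    def M_prime(x):
        if M(x, budget=len(x)) is None:   # M exceeded |x| steps
            return None                    # stand-in for "loop forever"
        return N_L(x, budget=len(x) ** 2)
    return M_prime

# Toy instances. N_L decides L = {strings of even length} within its budget.
def N_L(x, budget):
    return len(x) % 2 == 0

# A "fast" M that always halts within |x| steps, and a "slow" one that needs
# more than |x| steps on every input.
def M_fast(x, budget):
    return True

def M_slow(x, budget):
    return None if budget < 2 * len(x) + 1 else True

fast_prime = build_M_prime(M_fast, N_L)
slow_prime = build_M_prime(M_slow, N_L)

assert fast_prime("abcd") is True    # M fast on x  =>  M' decides L on x
assert fast_prime("abc") is False
assert slow_prime("abcd") is None    # M slow on x  =>  M' diverges on x
```

This mirrors the equivalence in the proof: deciding whether $\mathcal{M}'$ decides $L$ within the quadratic bound would decide whether $\mathcal{M}$ runs in at most $|x|$ steps, which is undecidable.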
{ "domain": "cs.stackexchange", "id": 21669, "tags": "turing-machines, undecidability, rice-theorem" }
Game: Predict answer based on 3 user inputs, 2 self created inputs
Question:

add9 = []
add9.append(int(input("Enter 1st 4-digit no.: ")))
print(f"The answer will be 2 and {str(add9[0])[0 :-1]} and {str(add9[0]-2)[-1]} = 2{add9[0]-2}")
add9.append(int(input("Enter 2nd 4-digit no.: ")))
add9.append(9999-add9[1])
print(f"The 3rd 4-digit is 9999 - {add9[1]}= {add9[2]}")
add9.append(int(input("Enter 4th 4-digit no.: ")))
add9.append(9999-add9[3])
print(f"The 5th 4-digit is 9999 - {add9[2]}= {add9[4]}")
print(f"""
So,
{add9[0]}+{add9[1]} = {add9[0]+add9[1]}
{add9[0]+add9[1]}+{add9[2]} = {add9[0]+add9[1]+add9[2]}
{add9[0]+add9[1]+add9[2]}+{add9[3]} = {add9[0]+add9[1]+add9[2]+add9[3]}
{add9[0]+add9[1]+add9[2]+add9[3]}+{add9[4]} = {add9[0]+add9[1]+add9[2]+add9[3]+add9[4]}
""")

gives

Enter 1st 4-digit no.: 9999
The answer will be 2 and 999 and 7 = 29997
Enter 2nd 4-digit no.: 2345
The 3rd 4-digit is 9999 - 2345= 7654
Enter 4th 4-digit no.: 6789
The 5th 4-digit is 9999 - 7654= 3210

So,
9999+2345 = 12344
12344+7654 = 19998
19998+6789 = 26787
26787+3210 = 29997

[Program finished]

The code works fine. It's for kids, to demo how to produce the output using 3 inputs from the user. The trick is to subtract the 2nd and 3rd user inputs from 9999. I have tried to use as little syntax as possible.

Edit: the 3rd and 5th numbers are 9999 minus an input:

Enter 1st 4-digit no.: 9990
The answer will be 2 and 999 and 8 = 29988
Enter 2nd 4-digit no.: 6667
The 3rd 4-digit is 9999 - 6667= 3332
Enter 4th 4-digit no.: 8888
The 5th 4-digit is 9999 - 3332= 1111

So,
9990+6667 = 16657
16657+3332 = 19989
19989+8888 = 28877
28877+1111 = 29988

[Program finished]

is working as expected. From a calculator:

9,990 + 6,667 = 16,657
16,657 + 3,332 = 19,989
19,989 + 8,888 = 28,877
28,877 + 1,111 = 29,988

Answer: To start, I wouldn't put everything in a list. There are benefits to putting all five numbers in a sequence, but that needn't happen until the end. You can expand your explanation for the "answer". 
First, it's unclear what you mean by "answer"; it's actually the fourth sum. Also, you can expand your explanation for each of the terms going into this fourth sum. Some of the notation I've shown is perhaps a little advanced for kids, so simplify it at your discretion. You should replace the expressions at the bottom with a loop. Where possible, you should phrase your operations as mathematical (mod, floor division) rather than string-based. I think you have an algorithmic problem? When I enter 9990 for the first number, the predicted and actual fourth sums diverge.

Suggested

a = int(input("Enter 1st 4-digit no.: "))
sum4 = int(f'2{a//10}{(a - 2)%10}')
print(
    f"The fourth sum will be the concatenation of:"
    f"\n  2"
    f"\n  ⌊{a}/10⌋ = {a//10}"
    f"\n  mod({a} - 2, 10) = {(a - 2)%10}"
    f"\n= {sum4}"
    f"\n"
)

b = int(input("Enter 2nd 4-digit no.: "))
c = 9999 - b
print(f"The 3rd 4-digit no. is 9999 - {b} = {c}")

d = int(input("Enter 4th 4-digit no.: "))
e = 9999 - d
print(f"The 5th 4-digit no. is 9999 - {c} = {e}")

nums = (a, b, c, d, e)
print('\nSo,')
for i in range(4):
    addend = sum(nums[:i+1])
    augend = nums[i+1]
    print(f'{addend} + {augend} = {addend+augend}')

Output

Enter 1st 4-digit no.: 9999
The fourth sum will be the concatenation of:
  2
  ⌊9999/10⌋ = 999
  mod(9999 - 2, 10) = 7
= 29997

Enter 2nd 4-digit no.: 2345
The 3rd 4-digit no. is 9999 - 2345 = 7654
Enter 4th 4-digit no.: 6789
The 5th 4-digit no. is 9999 - 7654 = 3210

So,
9999 + 2345 = 12344
12344 + 7654 = 19998
19998 + 6789 = 26787
26787 + 3210 = 29997
{ "domain": "codereview.stackexchange", "id": 41782, "tags": "python-3.x, mathematics" }
Why is the distance equation not working when calculating distance manually?
Question: I was wondering how the average velocity is the arithmetic mean of initial velocity and final velocity if acceleration is constant. So, the first part of my question: Can someone give me an algebraic proof of this? Okay, so I moved on and thought maybe it was more of an experimental result, so I tried to do this: Let's suppose that an object is at rest, so its initial velocity is $0\ \mathrm{m/s}$. And let's say it starts moving with a constant acceleration of $2\ \mathrm{m/s}$ per second. And let's manually calculate the distance it will cover in $5\ \mathrm{s}$. The result is as follows: Distance covered in $0\ \mathrm{s} = 0\ \mathrm{m}$. Distance covered in $1\ \mathrm{s} = 2\ \mathrm{m}$. Distance covered in $2\ \mathrm{s} = 4\ \mathrm{m}$. Distance covered in $3\ \mathrm{s} = 6\ \mathrm{m}$. Distance covered in $4\ \mathrm{s} = 8\ \mathrm{m}$. Distance covered in $5\ \mathrm{s} = 10\ \mathrm{m}$. So total distance covered = $(0 + 2 + 4 + 6 + 8 + 10)\ \mathrm{m} = 30\ \mathrm{m}$. But, if I use this equation instead: $$s = \frac{v + u}{2}t$$ I get a total distance covered of $25\ \mathrm{m}$. Why don't these results agree? Answer: In your first calculation, you're finding the distance travelled by an object that moves at a constant speed each second, and at the end of each second instantaneously accelerates to a speed that is 2 m/s faster. That's not the same motion as an object that accelerates at a constant, continuous rate of $2 ~\rm m/s/s$, and they won't go the same distance in the same time. Here is a graphical comparison of the two different motions: (here's a link to the live Desmos graph) You might be interested to know that the first method you used is known as the rectangle method for approximating an integral, while the second method is exact.
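The two calculations can be reproduced in a few lines. The stepwise tally treats the speed as constant during each whole second (jumping at the end of it, exactly as the answer describes), while the exact formula uses the average of initial and final velocity:

```python
u, a, t_total = 0, 2, 5   # initial velocity (m/s), acceleration (m/s^2), time (s)

# Rectangle-method tally: during second k the object is assumed to move at
# the speed it only actually reaches at the END of that second.
stepwise = sum(u + a * k for k in range(1, t_total + 1))   # 2+4+6+8+10 = 30

# Exact result for constant acceleration: s = (u + v)/2 * t
v = u + a * t_total
exact = (u + v) / 2 * t_total                              # (0+10)/2 * 5 = 25.0

# The rectangle method overestimates because each rectangle sits above the
# true straight-line velocity graph.
assert stepwise == 30 and exact == 25.0
```

Shrinking the time step (e.g. tallying every millisecond instead of every second) makes the rectangle sum approach 25 m, which is exactly the sense in which it approximates the integral.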
{ "domain": "physics.stackexchange", "id": 27496, "tags": "kinematics, acceleration" }
Does all fire only emit light on its outermost shell?
Question: This question is about light emission, which may overlap with physics, but I am most interested in combustion and types of flame (incandescence, petrochemical fuel flames, nuclear reactions). Consider this article about candle flames. In it they describe how a candle flame only emits light where it has contact with oxygen (emphasis mine): However, the reaction can only occur where the air meets the wax vapour. This cannot happen all the way through the flame, just around the outside. So all flames are hollow. How accurate is this? What parts of a flame emit light? If flames have complex scattering and absorption properties distinct from the burning substance, does this only apply to the outside regions, with the interior being governed by whatever fuel is burning? I suppose "fire" covers many different phenomena, so are the following flames also hollow? A: Campfire B: Rocket exhaust C: Oil platform explosion D: The Sun Answer: The answer is NO. The article you quote makes a completely unwarranted generalisation that "all flames are hollow". This is true of some flames, but only because the fuel that is burning is only able to burn when mixed with oxygen from the air. In those flames, the flame is "hollow" because the fuel can burn only in regions where air can mix with it, and the rate of mixing is dependent on the flow of fuel and the rate at which air can mix or diffuse into the fuel-rich region. Bunsen burner flames are like this when the air is not premixed with the fuel gas (because the air intake on the base is closed), and candle flames are like this because the wax has to evaporate to mix with air. A Bunsen flame when the air intake is open is not like that, as the fuel gas and air are pre-mixed and can burn throughout most of the region of the flame. Another factor worth considering is what causes the emission of light. 
In candles and many other poorly controlled flames, there is a lot of light from incandescence, not from the underlying burning reaction. Incandescence involves black-body emission from hot solid particles. Those particles are usually small particles of solid carbon produced from partial, incomplete burning of hydrocarbons. They emit light because they are hot. Candles are tuned to create a lot of these particles, as this gives far more light than complete combustion. In contrast, a well-controlled Bunsen flame emits very little light (the light is not black-body emission but comes from specific excited states in the reactants). As to how this applies to your specific examples, here are some analyses. 1. Campfires Campfires are a mixture of various phenomena. They have flames a little like candle flames. But they also have very hot solids which emit a lot of black-body radiation because they are hot. Some of the emitted gases have little emission of light, as they are like a Bunsen flame, but the overall effect is dominated by the hot solids. And, of course, some campfires have far better mixing of air and fuel than others, so it is hard to generalise. 2. Rocket exhaust Most rockets need to be as efficient as possible. So they tend to have very carefully designed mixing of fuel and oxidiser to achieve maximal energy output. They are not hollow and tend not to emit huge amounts of light (compared to the equivalent candle flame, anyway). 3. Oil platform explosion Impossible to generalise. But unintended explosions don't tend to be well controlled, so they will have some areas where mixing is good and others where it is not; some areas are like a candle and others like a Bunsen flame. Incidentally, movie explosions emit a lot more light than real explosions because, well, Hollywood likes visual spectacle, not realism (see this YouTube explanation and illustration of how movies deliberately add fuel to create poorly mixed, candle-like flames for more visually interesting explosions). 
4. The Sun The Sun does not emit light because of burning (at least not in the normal chemical sense). The Sun emits light because its surface is very hot. And this happens because of nuclear fusion deep in its core (not what a chemist would call burning, though a nuclear physicist might, but what do they know?). The light we see is mostly black-body emission because the surface is hot. It is certainly not hollow either.
{ "domain": "chemistry.stackexchange", "id": 15354, "tags": "heat, combustion, fuel, light, emission" }
Why is a transmon a charge qubit?
Question: The classic charge qubit is the Cooper pair box, which is a capacitor in series with a Josephson junction. In my understanding, by changing the gate voltage at the capacitor, one can create a superposition of $n$ and $n+1$ Cooper pairs on the 'island' in between the junction and capacitor. A transmon looks far more like a classic LC circuit. It is often depicted as a Josephson junction in parallel with a very large capacitor, and thus it is manipulated using microwave frequencies, not gate voltages. However, in all the literature I can find it is called a special case of a charge qubit. I cannot seem to make sense of these two ideas. How are they equivalent? Answer: There are two things to consider: What does the potential look like? Is the wave function of the qubit narrow in the flux or charge basis? Potential shape The Hamiltonian of the transmon (a junction in parallel with a capacitor) is $$H_{\text{charge qubit}} = - E_J \cos(\phi) + \frac{(-2en)^2}{2C}$$ where $E_J\equiv I_c \Phi_0 / 2\pi$, $I_c$ is the junction critical current, $\phi$ is the phase across the junction and $n$ is the number of Cooper pairs which have tunneled through the junction. For easier comparison to other qubits it's really useful to note that using the Josephson relation $$V = \frac{\Phi_0}{2\pi}\dot{\phi}$$ and noting that the magnetic flux $\Phi$ is the time integral of the voltage, we can write $$\Phi = \int dt V(t) = \Phi_0 \frac{\phi}{2\pi} \, .$$ Using this and the charge $Q = -2en$, the Hamiltonian becomes $$H_{\text{charge qubit}} = -E_J \cos(2\pi \Phi / \Phi_0) + \frac{Q^2}{2C} \, .$$ The $Q^2/2C$ term is the kinetic energy (notice the similarity to $p^2/2m$), and the part depending on $\Phi$ is the potential energy. Notice that, as with the charge qubit, this Hamiltonian's potential energy term is periodic. That is unlike the case with, e.g., the 
flux qubit, where the Hamiltonian is $$H_{\text{flux qubit}} = \frac{\Phi^2}{2L} - E_J \cos(2\pi \Phi/\Phi_0) + \frac{Q^2}{2C} \, .$$ This is one of the main differences: the transmon Hamiltonian (like the charge qubit Hamiltonian) is periodic in the flux basis, while the flux qubit Hamiltonian is not periodic in the flux basis. The physical reason for this difference is that the transmon (and charge) qubit do not have a dc path to ground. The junction and capacitor are in parallel with no e.g. inductor going to ground. The flux qubit has an inductance to ground; this inductance introduces the parabolic term in the Hamiltonian, making the potential non-periodic. This is the sense in which a transmon is like a charge qubit. Wave function widths As you noted, the transmon is nearly a harmonic oscillator. The reason for this is that although the potential is periodic, the wave function is narrow enough that it mostly sits localized in a single well of the potential. We can check this self-consistently in an easy way: let's just compute the width of the wave function of a harmonic oscillator which has the same parameters as a typical transmon. For a harmonic oscillator with Hamiltonian $$H = \frac{1}{2} \alpha u^2 + \frac{1}{2} \beta v^2 \qquad [u,v] = i \gamma $$ the mean square of $u$ in the ground state is $$\langle 0 | u^2 | 0 \rangle = (1/2) \gamma \sqrt{\beta / \alpha} \, . $$ The harmonic-oscillator Hamiltonian here is $$H = \frac{\Phi^2}{2L} + \frac{Q^2}{2C} \qquad [\Phi, Q] = i\hbar \, .$$ Therefore, we have $\alpha = 1/L$, $\beta = 1 / C$, and $\gamma = \hbar$, and our mean square fluctuation of $\Phi$ is $$\langle 0 | \Phi^2 | 0 \rangle = (1/2) \hbar \sqrt{\frac{L}{C}} \, .$$ The inductance of an (unbiased) Josephson junction is $L_{J_0} = \Phi_0 / (2 \pi I_c)$. For the transmon this comes out to about $L=10\,\text{nH}$. 
With $C\approx 85\,\text{fF}$ this gives us $$\sqrt{\langle 0 | \Phi^2 | 0 \rangle} \approx 0.06 \Phi_0 \, .$$ As one period of the cosine potential is $\Phi_0$ wide (corresponding to a change in $\phi$ of $2\pi$), this means that the transmon wave function is pretty narrow in the flux basis. In this sense, the transmon is very unlike the charge qubit, which has a wide wave function in the flux basis. So in the end, while the transmon and charge qubits share a certain theoretical similarity in the form of their Hamiltonians, for all practical purposes the transmon is actually more like a flux qubit with a large $C$ and biased so that the flux qubit only has one potential well. Note that the width of the wave function in the flux basis decreases as we increase $C$. The whole reason the transmon was invented was that narrowing the wave function by increasing $C$ leads to less sensitivity to charge noise. However, in all literature I can find it is called a special case of a charge qubit. That's largely historical. The folks who invented the transmon came from a charge qubit background, and the transmon was invented by trying to make the charge qubit less sensitive to charge noise. In fact, I have an amusing story about this. The problem with the charge qubit was that its sensitivity to charge noise gave it a low $T_2$. Charge noise is difficult to reduce so people looked for a way to make qubits which would just be less sensitive to it. Professor Rob Schoelkopf's idea was to add a dc shunt to ground to short circuit low frequency charge fluctuations; by making this shunt with a bit of transmission line, the shunt would be a short at dc but still have high impedance at the qubit's oscillation frequency, thus preserving the circuit's function as a qubit. Thinking of this as a TRANSmission line shunting a Josephson junction plasMON oscillation, they dubbed it the "transmon". However, in the end, the best design was to use a capacitor instead of a transmission line. 
So it should have been called the "capmon" :-)
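The quoted flux-fluctuation number is easy to sanity-check. The following is an illustrative back-of-envelope script (the constants are standard CODATA values; L and C are the "typical transmon" values assumed in the answer), not anything from the original post:

```python
import math

# Physical constants (CODATA values)
hbar = 1.054571817e-34   # J*s, reduced Planck constant
Phi0 = 2.067833848e-15   # Wb, magnetic flux quantum h/(2e)

# Typical transmon parameters quoted in the answer
L = 10e-9    # H  (junction inductance)
C = 85e-15   # F  (shunt capacitance)

# Ground-state flux fluctuation of the equivalent harmonic oscillator:
# <Phi^2> = (hbar/2) * sqrt(L/C)
phi_rms = math.sqrt(0.5 * hbar * math.sqrt(L / C))

print(phi_rms / Phi0)   # roughly 0.06-0.07, matching the ~0.06 Phi0 quoted
```

This confirms the wave function spans only a few percent of one cosine period, so the single-well (harmonic) approximation is self-consistent.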
{ "domain": "physics.stackexchange", "id": 22262, "tags": "quantum-mechanics, quantum-electrodynamics, superconductivity, quantum-computer" }
Is reversal of velocity always equivalent to reversal of time?
Question: Let us imagine there is a container full of small particles which are allowed to collide with each other and the container walls (in 2D). If I initialise the system with given velocities and positions at $t = 0$, and reverse the velocity of every particle at $t = 5$, then at $t = 10$ the particles should exactly return to their initial velocities and positions. However, when I simulate this experiment with simple 2D physics, the particles fail to return to the initial state if they are allowed to both collide with each other and container walls. So the question is Is my understanding of the physics wrong or is there something wrong with the simulation itself ? When only collision with container walls or collision with other particles is allowed, the particles more or less return to the same situation. However, when both are allowed they just don't. The only interactions in the simulation are 2D collisions between the particles which conserve momentum and collisions with the "walls" which just reverse the corresponding $x$ or $y$ component of the velocity. Answer: You are rediscovering the chaotic behavior of non-integrable systems. The reason the system does not recover exactly its original state is due to the unavoidable round-off which makes the final dynamic configuration slightly, but definitely different from the exact evolution. Even after a few steps. When velocities are reversed, since the system is not starting from the exact final state, after the same number of steps of the direct evolution, it does not go back to the original starting point. It is a direct check of the exponential divergence of chaotic trajectories. The simplest situations you may use to check the phenomenon are the so called chaotic billiards, i.e. two-dimensional surfaces exhibiting the same phenomenon of exponential divergences of close trajectories even for a single point-like particle experiencing elastic collisions. 
Probably the simplest are the "Sinai billiard" and the "Stadium billiard". Writing a computer code to exploit their chaotic dynamics is a simple but rewarding exercise.
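The exponential amplification of round-off can be seen without writing a full billiard. Here is a minimal stand-in (my own illustration, not the OP's simulation): the doubling map x → 2x mod 1, a textbook chaotic system with Lyapunov exponent ln 2, so a round-off-sized perturbation roughly doubles on every step, which is exactly the mechanism that ruins velocity-reversal in a chaotic particle simulation:

```python
# The doubling map x -> 2x mod 1 doubles any small perturbation each
# step, so an error of size ~1e-12 reaches order one after ~40 steps.

def doubling(x, steps):
    for _ in range(steps):
        x = (2.0 * x) % 1.0
    return x

x0 = 0.1
y0 = 0.1 + 1e-12          # "round-off"-sized perturbation

for n in (0, 10, 25, 40, 55):
    d = abs(doubling(x0, n) - doubling(y0, n))
    print(n, d)           # separation roughly doubles each step until O(1)
```

After about 40 iterations the two trajectories are completely decorrelated, which is why reversing the velocities in a chaotic simulation fails to retrace the path even though the underlying dynamics is time-reversible.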
{ "domain": "physics.stackexchange", "id": 60480, "tags": "kinematics, collision, simulations, time-reversal-symmetry, arrow-of-time" }
What do I need to know about algorithms?
Question: Drawing from Are algorithms (and efficiency in general) getting less important?, I get that algorithms are more important than ever, but do I need to know anything about them other than their complexities? I would expect to use algorithms out of a library of some sort that has been written by people that are much, much more knowledgeable than me, and have been thoroughly tested (most of the STL comes to mind here, but also boost, and probably every other library that someone would care to name). Thus, unless I am a library maintainer/implementer of some sort, do I need to know anything about these algorithms? Answer: If you do anything a bit complicated, you will almost certainly have to design your own algorithms. STL/boost are nice, amazing really, but it is useful to know how things work behind the scenes and be able to reproduce it. This will help you tackle problems that the STL/boost, generic enough as they are, cannot handle; or problems that are entirely different from the ones the STL/boost can help with. Many times, algorithms and data structures that I've designed were built on top of STL/boost. Thus, unless I am a library maintainer/implementer of some sort, do I need to know anything about these algorithms? Chances are, if you make any interesting programs, you are going to find it useful to design some algorithms and data structures. Sometimes, the most effective optimizations cannot beat a simple change in methodology or representation. Unless the algorithm you require/design is useful to a wider audience, it can sometimes be difficult to package it as a useful library. Perhaps you will find that something you made can be useful, and will publish it (github is chock full of these projects). Therefore, I think that if you program long enough, you will become a "library maintainer/implementer of some sort", even if you are the only one using it, for lack of publicity. 
Bottom line: I don't think libraries like the STL and Boost are making knowledge of algorithms less important at all; rather, they are putting more advanced algorithms within reach. While libraries such as these can sometimes make knowledge of their algorithms unnecessary for small programs, in practice it is important to know them if you intend to design anything complex.
{ "domain": "cs.stackexchange", "id": 1865, "tags": "algorithms, efficiency" }
Can weight of a body cause tipping (friction) about any point?
Question: In a problem regarding the figure below, the given body is tipping about point A. In textbooks, the examples given were always of tipping due to an external force. Is it possible for a body to tip due to its own weight? Please clarify mathematically. Answer: If the CCW torque about point (A) produced by the force (P) is greater than the CW torque produced by gravity, then the rod will tip up from the surface of the incline.
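As a minimal numeric sketch of that torque comparison (all numbers below are made up, since the original figure's dimensions aren't given):

```python
# Hypothetical geometry: a rod pivoting about its lower end A.
# P acts at perpendicular distance d_P from A (tends to tip, CCW);
# the weight W acts at perpendicular distance d_W from A (restores, CW).

W = 50.0      # N, weight of the rod           (assumed)
d_W = 0.4     # m, moment arm of the weight    (assumed)
P = 30.0      # N, applied force               (assumed)
d_P = 0.8     # m, moment arm of P             (assumed)

ccw = P * d_P       # counter-clockwise torque about A = 24 N*m
cw = W * d_W        # clockwise torque of gravity about A = 20 N*m

tips = ccw > cw
print(tips)         # True here, since 24 N*m exceeds 20 N*m
```

Tipping purely under self-weight works the same way: it happens whenever the weight's line of action falls outside the support, so that gravity itself supplies the net torque about the pivot.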
{ "domain": "physics.stackexchange", "id": 85372, "tags": "newtonian-mechanics, classical-mechanics, weight" }
Banking of roads
Question: When drawing a free body diagram for the banking of a road, why don't we resolve the weight of the car into its cos and sin components? It is seen that only the normal reaction force and the frictional force are resolved. Answer: Although the choice of axes is arbitrary in lots of cases, including this one, there is a choice which makes solution of a problem easier. In this problem you want a car to go around a corner without moving up or down vertically. So one of the axes is chosen to be vertically upward, so that the sum of the resolved forces in that direction is zero and there is no motion in the vertical direction, and the other axis is chosen to be horizontally inwards, so that the sum of the resolved forces in that direction provides the centripetal acceleration to enable the car to turn the corner. With that choice of axes you have to resolve the frictional force and the normal reaction into two components, but the weight of the car only has one component, in the vertically down direction.
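The axis choice can be checked numerically. This sketch uses the frictionless case with made-up numbers (mass, radius, and bank angle are assumptions, not from the question): the vertical equation fixes N, the horizontal equation then gives the cornering speed, and the weight indeed appears unresolved in only one equation.

```python
import math

# Frictionless banked turn, with the vertical/horizontal axis choice:
#   vertical:   N cos(theta) - m g = 0
#   horizontal: N sin(theta)       = m v^2 / r
# All numbers below are hypothetical.

m = 1200.0                 # kg   (assumed)
r = 60.0                   # m    (assumed)
theta = math.radians(20)   # bank angle (assumed)
g = 9.8                    # m/s^2

N = m * g / math.cos(theta)                   # from the vertical equation
v = math.sqrt(N * math.sin(theta) * r / m)    # from the horizontal equation

# Same result from the standard closed form v = sqrt(r g tan(theta))
v_check = math.sqrt(r * g * math.tan(theta))
print(v, v_check)   # both about 14.6 m/s
```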
{ "domain": "physics.stackexchange", "id": 47848, "tags": "newtonian-mechanics, forces, vectors, coordinate-systems, free-body-diagram" }
How does JWST position itself to see and resolve an exact target?
Question: Let's say the James Webb Space Telescope wants to move from observing the Andromeda galaxy millions of light years away to looking at the Trappist-1e planet some dozens of light years away; what actions are needed to change targets and find and resolve that target? I think I read it has jet propulsion for movement, but does it have a different system for fine movement? It seems like jetting around would be extremely finicky. I imagine Hubble has to do something similar but I never heard how that works either. Answer: It's true that James Webb carries fuel, and you're right that it is not used for positioning, at least not directly (see below). Fuel is used for maintaining its orbit around L2, and was also used three times on its journey to L2, as "correction burns". Note that the distance to the target is irrelevant. A nearby exoplanet and a distant galaxy are both "infinitely far away" for observing purposes (although of course in general more distant objects are fainter). Reaction wheels To acquire a target, James Webb (and other space telescopes) uses a number of reaction wheels, one for each "axis". At least three are needed, but James Webb has six; more allow for easier control, but are also heavier. These wheels rotate constantly, thus storing a large amount of angular momentum to keep the telescope steady. Changing the angular speed of one of the reaction wheels causes Webb to change its direction along that wheel's axis. Edit thanks to @KarlKastor: While James Webb observes, the photon pressure of the Sun's light exerts a torque on the telescope. To maintain its position, this is counteracted by adjusting the spin of the reaction wheels. This causes angular momentum to build up, which must occasionally be dumped by firing Webb's thrusters once per week or so (JWST Momentum Management). Gyroscopes Additionally, Webb has six gyroscopes which tell the telescope which direction it's currently pointing, and how fast it's turning. 
Unlike Hubble's gyroscopes, however, which are mechanical, Webb uses Hemispherical Resonator Gyroscopes, which have no moving parts susceptible to wear, instead measuring the precession of vibration patterns in a crystal. Star tracking Finally, to ensure a perfect pointing, one of Webb's four instruments, NIRISS, is equipped with a "Fine Guidance Sensor" which "locks" the telescope on a target by observing the exact position of a star in its field of view. Edit thanks to @David Hammen: In addition to the Fine Guidance Sensor, the JWST also has a few regular star trackers. To power its various moving parts, James Webb has its solar array, capable of providing 2 kW, twice the needed amount. You can read more about the positioning system at the NASA FAQ.
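The reaction-wheel mechanism is just conservation of angular momentum. A toy calculation (the inertia values below are invented for illustration and are not Webb's real figures) shows how spinning a wheel up makes the telescope counter-rotate:

```python
import math

# Angular momentum bookkeeping for a reaction-wheel slew.  The inertias
# are made-up illustrative values, not Webb's real numbers.
# Total angular momentum about one axis is conserved:
#   I_tel * w_tel + I_wheel * w_wheel = const

I_tel = 70_000.0   # kg*m^2, telescope moment of inertia (assumed)
I_wheel = 0.05     # kg*m^2, one reaction wheel           (assumed)

dw_wheel = 100.0                       # rad/s change in wheel speed
w_tel = -I_wheel * dw_wheel / I_tel    # induced telescope rotation, rad/s

print(math.degrees(w_tel) * 3600)      # about -15 arcsec/s slew rate
```

The same bookkeeping explains momentum dumping: solar photon torque forces the wheels to spin ever faster to hold the pointing, so the accumulated wheel momentum must periodically be transferred out of the system by firing thrusters.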
{ "domain": "astronomy.stackexchange", "id": 6504, "tags": "space-telescope, james-webb-space-telescope" }
Compile-time list in C++
Question: Everything is evaluated in compile-time. You will need a C++14 compiler. The compilation time is quite long, and with a bigger list input, you'll get some error messages like constexpr evaluation hit maximum step limit. #include <iostream> #include <limits> #include <initializer_list> template<typename T> class List { template<typename T2> friend std::ostream &operator<<(std::ostream &, const List<T2> &); public: constexpr List(); constexpr List(std::initializer_list<T>); constexpr T head() const; constexpr List<T> tail() const; constexpr List<T> add(T) const; constexpr List<T> merge(List<T>) const; constexpr List<T> reverse() const; template<typename Filter> constexpr List<T> filter(Filter) const; constexpr List<T> sort() const; constexpr T sum() const; private: int length; T array[std::numeric_limits<int>::max() >> 2]; }; template<typename T> constexpr List<T>::List() : length {0} , array {0} { } template<typename T> constexpr List<T>::List(std::initializer_list<T> l) : length {static_cast<int>(l.size())} , array {0} { for (auto it = l.begin(); it != l.end(); ++it) { array[it - l.begin()] = *it; } } template<typename T> constexpr T List<T>::head() const { return array[0]; } template<typename T> constexpr List<T> List<T>::tail() const { List<T> l; l.length = length - 1; for (int i = 0; i < l.length; ++i) { l.array[i] = array[i + 1]; } return l; } template<typename T> constexpr List<T> List<T>::add(T t) const { List<T> l {*this}; l.array[l.length++] = t; return l; } template<typename T> constexpr List<T> List<T>::merge(List<T> l) const { for (int i = l.length - 1; i >= 0; --i) { l.array[i + length] = l.array[i]; } for (int i = 0; i < length; ++i) { l.array[i] = array[i]; } l.length += length; return l; } template<typename T> constexpr List<T> List<T>::reverse() const { List<T> l; l.length = length; for (int i = 0; i < l.length; ++i) { l.array[i] = array[length - i - 1]; } return l; } template<typename T> template<typename Filter> constexpr List<T> 
List<T>::filter(Filter f) const { List<T> l; for (int i {0}; i < length; ++i) { if (f(array[i])) { l = l.add(array[i]); } } return l; } template<typename T> struct LT { T pivot; constexpr bool operator()(T t) const { return t < pivot; } }; template<typename T> struct GE { T pivot; constexpr bool operator()(T t) const { return t >= pivot; } }; template<typename T> constexpr List<T> List<T>::sort() const { if (length == 0) { return *this; } return tail().filter(LT<T> {head()}).sort().add(head()) .merge(tail().filter(GE<T> {head()}).sort()); } template<typename T> constexpr T List<T>::sum() const { if (length == 0) { return T {}; } return head() + tail().sum(); } template<typename T> std::ostream &operator<<(std::ostream &os, const List<T> &l) { os << '{'; for (int i {0}; i < l.length - 1; ++i) { os << l.array[i] << ", "; } return os << l.array[l.length - 1] << '}'; } inline constexpr List<int> range(int a, int b, int c = 1) { List<int> l; while (a < b) { l = l.add(a); a += c; } return l; } int main() { constexpr std::size_t n = range(0, 300).reverse().sort().sum(); std::cout << n << std::endl; } Answer: I confess to not being entirely clear as to the purpose of this code. It seems to me that any list known at compile time could more easily and more correctly be handled by other forms of preprocessing. With that said, though, here are some comments on this code. Think about real machines The code for the List class currently includes this data member: T array[std::numeric_limits<int>::max() >> 2]; On my machine, this attempts to allocate 2GB on the stack for an int-based List. That's a rather large amount of memory to attempt to allocate for every List! Perhaps this could be trimmed to some reasonable value. Reduce the number of requirements on template types The code requires a number of operations on the underlying T type. 
It requires both < and >= but that requirement can be relaxed somewhat by defining GE's operator like this: constexpr bool operator()(T t) const { return !(t < pivot); } Pass const references where practical Rather than passing by value, it would probably make more sense in the general case to pass by const reference. So for example, the previous function could be written like this instead: constexpr bool operator()(const T &t) const { return !(t < pivot); } In fact, one of the only places that can't be treated this way is the argument to merge() which must be passed by value. Try it with a non-primitive type Whenever you're creating templated code, carefully consider just how it might be used and what requirements it may demand of the underlying type. One technique I use is to wrap a primitive type inside a goofy minimalistic wrapper for testing: template <typename T> class Goofy { public: constexpr Goofy(T n = 0) : num(n) { } constexpr Goofy(const Goofy &g2) : num(g2.num) { } constexpr Goofy &operator=(const Goofy &g2) { num = g2.num; return *this; } constexpr Goofy &operator+=(const Goofy &g2) { num += g2.num; return *this; } constexpr bool operator<(const Goofy &g2) const { return num < g2.num; } friend std::ostream &operator<<(std::ostream &out, const Goofy &g2) { return out << g2.num; } private: T num; }; template <typename T> constexpr Goofy<T> operator+(const Goofy<T> &g1, const Goofy<T> &g2) { Goofy<T> result(g1); result += g2; return result; } This represents a minimal interface that can then be tested with your template: int main() { constexpr List<Goofy<float>> list{0, 5, 8, 13, 1, 7}; std::cout << list << std::endl; std::cout << list.reverse() << std::endl; std::cout << list.sort() << std::endl; constexpr auto m = list.sum(); std::cout << m << std::endl; } Sample output {0, 5, 8, 13, 1, 7} {7, 1, 13, 8, 5, 0} {0, 1, 5, 7, 8, 13} 34
{ "domain": "codereview.stackexchange", "id": 15509, "tags": "c++, c++14" }
Cost for Maintaining the Build Farm
Question: How much would it cost in terms of the build farm for not end-of-life-ing a particular distribution? Originally posted by David Lu on ROS Answers with karma: 10932 on 2014-08-24 Post score: 2 Original comments Comment by ahendrix on 2014-08-25: The EOL announcement for Groovy was sent out today: http://lists.ros.org/pipermail/ros-release/2014-August/004511.html Answer: The cost of supporting a specific platform for a specific duration is very hard to estimate. There are three main types of costs. The most obvious cost is the direct hosting costs for servers and bandwidth. The second type of cost is the man-hours of administration and maintenance of the buildfarm and management of the distribution. And the final type of cost is the developer effort to maintain all released packages to avoid bit-rot. Unfortunately, the ability to measure the costs is inversely related to the magnitude of the cost. The direct hosting costs are small compared to the cost of having a few people maintaining the distro. And that is a much smaller cost than the collective work required by all the maintainers. Other factors to keep in mind are startup costs vs maintenance. There's a very large effort to kick off a new distro up front to fix all the issues which occur from changing dependencies upstream. There's also correlation between distros: sometimes changes can easily port between similar architectures, but sometimes they need architecture-specific patches. Also, relating to extending support for EOL'd platforms: the cost goes up significantly as the upstream dependencies become EOL and no longer get updates and patches in parallel. (For example there's an issue with arm_navigation_experimental in groovy which only manifests on Quantal. To fix this will likely require a patch to work around the lack of updates upstream because Quantal is EOL.) 
Originally posted by tfoote with karma: 58457 on 2014-08-27 This answer was ACCEPTED on the original site Post score: 4 Original comments Comment by David Lu on 2014-08-28: Thanks for the thorough answer Tully. Is there a wiki page that specifies the sunsetting of the various distros? Comment by tfoote on 2014-08-28: The Distributions wiki page has the policy that we will support them while upstream targeted platforms are supported. And target_platforms are in REP 3 Though for non-LTS releases of ROS we may need to add a caveat. Comment by David Lu on 2014-08-29: Thanks. I had seen that, but was looking for the dates. Really, what I'd like is something like this: http://i.stack.imgur.com/ihPiQ.png Comment by tfoote on 2014-08-29: If you take REP3 and correlate to the two supported distros, you can use that graphic.
{ "domain": "robotics.stackexchange", "id": 19171, "tags": "ros-groovy" }
Is our data "Big Data" (Startup)
Question: I work at a startup/medium-sized company and I am concerned that we may be over-engineering one of our products. In essence, we will be consuming real-time coordinates from vehicles and users and performing analytics and machine learning on this incoming data. This processing can be rather intensive as we try to predict the ETAs of these entities matched to historical data and static paths. The approach they want to take is using the latest and most powerful technology stack, that being Hadoop, Storm, etc. to process these coordinates. Problem is that no-one in the team has implemented such a system and has only had the last month or so to skill up on it. My belief is that a safer approach would be to use NoSQL storage such as "Azure Table Storage" in an event-based system to achieve the same result in less time. To me it's the agile approach, as this is a system that we are familiar with. Then, if the demand warrants it, we can look at implementing Hadoop in the future. I haven't done a significant amount of research in this field, so would appreciate your input. Questions: How many tracking entities (sending coordinates every 10 seconds) would warrant Hadoop? Would it be easy to initially start off with a simpler approach such as "Azure Table Storage" and then move onto Hadoop at a later point? If you had to estimate, how long would you say a team of 3 developers would take to implement a basic Hadoop/Storm system? Is it necessary to invest in Hadoop from the get-go, as we will quickly incur major costs? I know these are vague questions, but I want to make sure we aren't going to invest unnecessary resources with a deadline coming up. Answer: Yes, this is a how-long-is-a-piece-of-string question. I think it's good to beware of over-engineering, while also making sure you engineer for where you think you'll be in a year. First I'd suggest you distinguish between processing and storage. Storm is a (stream) processing framework; NoSQL databases are a storage paradigm. 
These are not alternatives. The Hadoop ecosystem has HBase for NoSQL; I suspect Azure has some kind of stream processing story. The bigger difference in your two alternatives is consuming a cloud provider's ecosystem vs Hadoop. The upside to Azure, or AWS, or GCE, is that these services optimize for integrating with each other, with billing, machine management, etc. The downside is being locked in to the cloud provider; you can't run Azure stuff anywhere but Azure. Hadoop takes more work to integrate since it's really a confederation of sometimes loosely-related projects. You're investing in both a distribution, and a place to run that distribution. But, you get a lot less lock-in, and probably more easy access to talent, and a broader choice of tools. The Azure road is also a "big data" solution in that it has a lot of the scalability properties you want for big data, and the complexity as well. It does not strike me as an easier route. Do you need to invest in distributed/cloud anything at this scale? given your IoT-themed use case, I believe you will need to soon, if not now, so yes. You're not talking about gigabytes, but many terabytes in just the first year. I'd give a fresh team 6-12 months to fully productionize something based on either of these platforms. That can certainly be staged as a POC, followed by more elaborate engineering.
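The "many terabytes in the first year" claim is easy to reproduce as a back-of-envelope calculation. The fleet size and record size below are hypothetical placeholders (the question only fixes the 10-second reporting interval), so plug in your own numbers:

```python
# Back-of-envelope storage sizing for the coordinate stream.

entities = 50_000          # vehicles/users reporting        (assumed)
interval_s = 10            # one coordinate every 10 s (from the question)
record_bytes = 200         # JSON-ish record: id, ts, lat/lon (assumed)

per_day = entities * (86_400 // interval_s) * record_bytes
per_year = per_day * 365

print(f"{per_day / 1e9:.1f} GB/day, {per_year / 1e12:.1f} TB/year")
```

Even at these modest assumptions the raw stream is tens of GB per day, which is why the answer recommends engineering for where you expect to be in a year.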
{ "domain": "datascience.stackexchange", "id": 383, "tags": "machine-learning, data-mining, bigdata, statistics, apache-hadoop" }
Waveform independent frequency analysis
Question: If a musician (or a vocal coach) hears a human voice at a certain note, he/she can easily find the same note on piano. Not only for human voice, if the waveform is square, or created with a synth, they will be able to figure out which note it is. However, using DFT (or FFT) gives mixed results for any waveform other than sinusoidal. It makes sense, because we use sinusoids as base functions (or base vectors). Now, I want to learn if there is any way we can implement a signal processing algorithm in order to detect tones regardless of their waveform. Googling didn't help. Can you help me DSP SE? Answer: What you are asking about is typically called "pitch detection". There are 100s of articles written on the topic. A good starter would be this one: https://ccrma.stanford.edu/~pdelac/154/m154paper.htm If you need fairly high precision (for example a tuner application), I recommend a phase locked loop (PLL) or delay locked loop. There is a fundamental trade off between precision and tracking speed and with a PLL you can adjust this easily.
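As a concrete starting point (my own minimal sketch, not from the linked paper): a time-domain autocorrelation pitch detector. Unlike picking the largest FFT bin, which a square wave's strong harmonics can fool, the lag of the first autocorrelation peak tracks the period regardless of waveform:

```python
import numpy as np

# Autocorrelation-based pitch detection on an exactly periodic square
# wave: the autocorrelation is maximal at a lag of one full period.

fs = 8000                       # sample rate, Hz
period = 40                     # 40 samples -> f0 = 200 Hz
n = np.arange(4000)             # 0.5 s of signal
square = np.where(n % period < period // 2, 1.0, -1.0)   # exact square wave

min_lag, max_lag = 20, 400      # search fs/400 = 20 Hz .. fs/20 = 400 Hz
r = [np.dot(square[:-lag], square[lag:]) for lag in range(min_lag, max_lag)]
best_lag = min_lag + int(np.argmax(r))
f0 = fs / best_lag
print(f0)   # 200.0
```

Real recordings need refinements (windowing, peak interpolation for non-integer periods, voicing decisions), which is where the PLL approach from the answer comes in for high-precision tracking.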
{ "domain": "dsp.stackexchange", "id": 10217, "tags": "fft, audio, dft, audio-processing" }
Are mitochondrial genes to exclude in scRNA-seq such as ribosomal genes?
Question: In this answer, it is stated that ribosomal genes should be excluded prior to normalization in scRNA-seq as contaminants. Do mitochondrial genes have to be excluded as well? I plotted the top 50 expressed genes for a specific dataset and they tend to appear often (for example MT-ATP6). My assumption is that, given that they work for mitochondrial function and may be highly expressed, they can dilute the signal of genes differentially expressed across cell types but expressed at lower levels. Is this biologically sound? Additionally, in this course, mitochondrial genes are used to filter cells when they contribute above a certain ratio of total RNA of single cells. However, I could not find an explanation for this filtering. What is the underlying process this procedure is used to account for? Answer: According to Ilicic et al. (2016), on upregulation of mtRNA in broken cells: There is an extensive literature on the relationship between mtDNA, mitochondrially localized proteins, and cell death [34, 35]. However, upregulation of RNA levels of mtDNA in broken cells suggests losses in cytoplasmic content. In a situation where cell membrane is broken, cytoplasmic RNA will be lost, but RNAs enclosed in the mitochondria will be retained [...] According to this, I think that mtRNA levels relative to endogenous RNA can be used as a control for low quality (broken) cells, in a similar way to ERCC spike-ins. Ilicic T, Kim JK, Kolodziejczyk AA, Bagger FO, McCarthy DJ, Marioni JC, Teichmann SA. Classification of low quality cells from single-cell RNA-seq data. Genome Biology. 2016 Feb 17;17(1):29.
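In practice the filter is a per-cell fraction computed from the count matrix. A toy sketch (made-up counts; the 20% threshold is a judgment call and is tissue-dependent, with common choices in the 5-20% range):

```python
import numpy as np

# Toy mitochondrial-fraction QC: flag cells whose MT- gene counts make
# up too large a share of total counts (likely broken/low-quality cells).

genes = ["MT-ATP6", "MT-CO1", "ACTB", "GAPDH", "CD3E"]
#            cells:  c1   c2   c3
counts = np.array([[ 50, 400,  10],    # MT-ATP6
                   [ 30, 350,   5],    # MT-CO1
                   [900, 200, 700],    # ACTB
                   [800, 150, 600],    # GAPDH
                   [220, 100, 185]])   # CD3E

mito = np.array([g.startswith("MT-") for g in genes])
mito_frac = counts[mito].sum(axis=0) / counts.sum(axis=0)
print(mito_frac)                       # c2 is dominated by mito reads

keep = mito_frac < 0.2                 # threshold: assumed, tissue-dependent
print(keep)                            # c2 is filtered out
```

Here cell c2 has over half its counts from MT- genes, the signature of a broken cell that has lost cytoplasmic RNA, so it is dropped before normalization.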
{ "domain": "bioinformatics.stackexchange", "id": 657, "tags": "rna-seq, scrnaseq, quality-control" }
Kinematics contradicting conservation of energy?
Question: I was having some conceptual difficulty reconciling my intuitive understanding of kinematics with conservation of energy, so I made up a short problem that tested my intuitions: Suppose I define an initial point to be 20m above the ground. An object leaves this initial point along Path A, which is straight down. Another object leaves the initial point along Path B, which is parallel to the ground. Both objects have the same mass, and the same initial speed of $3 \frac{m}{s}$. What is the speed of each object as it hits the ground? My calculations for Path A: $$ y_{f} = y_{i} + v_{i}t + \frac{1}{2}a_{y}t^{2} $$ $$ 0 = 20m - 3\frac{m}{s}t - \frac{1}{2}(9.8\frac{m}{s^{2}})t^{2} $$ $$ t = \frac{3 + \sqrt{9 - 4(20)(\frac{1}{2})(-9.8)}}{40} = 0.576s $$ $$ v_{f}=v_{i}+at $$ $$ v_{f} = -3\frac{m}{s} - (9.8\frac{m}{s^{2}})(0.576s) = -8.64\frac{m}{s} $$ $$ s = 8.65 \frac{m}{s} $$ My calculations for Path B: $$ y_{f}=y_{i}+v_{i}t+\frac{1}{2}a_{y}t^{2} $$ $$ 0 = 20m+0t+\frac{1}{2}(-9.8\frac{m}{s^{2}})t^{2} $$ $$ t = \sqrt{\frac{20m}{\frac{1}{2}(9.8\frac{m}{s^{2}})}} = 2.02s $$ $$ v_{f} = v_{i}+at $$ $$ v_{fy} = 0-9.8\frac{m}{s^{2}}(2.02s) = -19.799\frac{m}{s} $$ $$ v_{fx} = 3\frac{m}{s} $$ $$ s = |\vec{v}| = \sqrt{(19.799\frac{m}{s})^{2} + (3\frac{m}{s})^{2}} = 20.02\frac{m}{s} $$ This result clearly defies conservation of energy. Both objects start at the same initial height, and so have the same potential energy. They both end their paths at the same height, and so end with the same potential energy. But they both have different velocities, and so different kinetic energies. The thing is, this is exactly what my intuitions would predict. How much an object accelerates in total is just a function of the time it spends accelerating. The object traveling along Path A has a fast downwards velocity, so it only has a short time to accelerate. Its final speed is just the sum of its initial speed and however much it accelerates in that time. 
The object traveling along Path B has no initial downwards velocity, and so has plenty of time to accelerate. The total final speed of this object is the vector sum of its initial velocity and however much it gained while accelerating downwards, which is a much larger number than the total acceleration of the first object. So what's going on? The calculations break conservation of energy, so I must be doing something wrong. And even if there's a flaw in my calculations and the numbers actually do come out the same, my intuition still says otherwise. Answer: The two particles have the same energy. It's easier to use the equation $$v^2 = u^2 + 2a\Delta y$$ For particle A, all motion is in the y-direction, so $$v^2 = v_y^2 = 3^2 + 2(9.8)(20)$$ $$v = 20.02 ms^{-1}$$ You went wrong in your application of the quadratic formula. $$t = \frac{-b \pm \sqrt{b^2-4ac}}{2a}$$ $$a = -0.5\cdot 9.8 = -4.9$$ $$b = -3$$ $$c = 20$$ Taking the negative sqrt solution and correcting $2a$, $t=1.74 s$ Plugging this into the $v = u + at$ formula yields $20.05 ms^{-1}$ for me, and the difference is just rounding error. For particle B, your calculation was correct. So now we've clarified that conservation of energy holds, but now to answer why. The thing is, this is exactly what my intuitions would predict. How much an object accelerates in total is just a function of the time it spends accelerating. Correct, but the kinetic energy of a particle is not dependent on how much it has accelerated. It only depends on the instantaneous velocity of the particle. Here the final velocity in the y-direction for particle A will be higher than that of particle B. The final velocity in the x-direction for particle B will be higher than that of particle A (equally so, in fact). Calculating the final speed of both would yield the same answer, and so the kinetic energy will remain equal.
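The corrected result can be verified numerically. This is a small check script of my own (values taken from the problem as stated): both kinematic routes and the energy argument give the same final speed:

```python
import math

g, h, u = 9.8, 20.0, 3.0    # gravity, drop height, launch speed

# Path A (straight down): all motion vertical, v^2 = u^2 + 2 g h
v_a = math.sqrt(u**2 + 2 * g * h)

# Path B (horizontal launch): vx stays u, vy^2 = 2 g h,
# final speed is the magnitude of the vector sum
v_b = math.sqrt(u**2 + (math.sqrt(2 * g * h))**2)

# Energy conservation: (1/2) v^2 = (1/2) u^2 + g h, independent of path
v_energy = math.sqrt(2 * (0.5 * u**2 + g * h))

print(v_a, v_b, v_energy)   # all about 20.02 m/s
```

The components differ between the two paths, but the squared speeds agree exactly: Path A gains the extra 9 m²/s² vertically, Path B keeps it horizontally.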
{ "domain": "physics.stackexchange", "id": 28680, "tags": "kinematics, energy-conservation" }
Basic binary tree manipulation in Rust
Question: The goal here is to implement a basic binary tree with values only on leaves, with depth, mirror, and in_order (traversal) operations. I have not used Rust before. Some particular questions I have: In a few places, I'm defining methods by passing self by reference and then matching on its dereferenced form. I then have to borrow the fields with ref in the destructuring. Is this the intended form? Is into_in_order using Vec properly/optimally? (I tried to use just lv.extend(&mut rv) but got an error that that particular method was still under churn and I should wait for "the dust to settle.") Am I using Boxes correctly? When should I use Boxes vs. * consts? Why do (*r).mirror() and r.mirror() do the same thing? I would have expected the latter to throw because Boxes don't have a mirror method. Is there a less verbose alternative to the Branch(Box::new(Branch(…))) syntax? #[derive(Debug, PartialEq, Clone)] enum BTree<T> { Leaf(T), Branch(Box<BTree<T>>, Box<BTree<T>>), } impl <T> BTree<T> { fn depth(&self) -> i32 { match *self { BTree::Leaf(_) => 1, BTree::Branch(ref l, ref r) => std::cmp::max(l.depth(), r.depth()) + 1, } } fn into_in_order(self) -> Vec<T> { match self { BTree::Leaf(val) => vec!(val), BTree::Branch(l, r) => { let mut lv = l.into_in_order(); let rv = r.into_in_order(); lv.extend(rv.into_iter()); lv } } } } impl <T : Clone> BTree<T> { fn mirror(&self) -> BTree<T> { match *self { BTree::Leaf(_) => (*self).clone(), BTree::Branch(ref l, ref r) => BTree::Branch(Box::new((*r).mirror()), Box::new((*l).mirror())), // // why does this work? 
// BTree::Branch(Box::new(r.mirror()), Box::new(l.mirror())), } } } #[test] #[allow(unused_variables)] fn test_btree_creation() { use BTree::*; let leaf: BTree<i32> = Leaf(10); let branch: BTree<i32> = Branch(Box::new(Leaf(15)), Box::new(Leaf(20))); let tree: BTree<i32> = Branch(Box::new(branch.clone()), Box::new(Leaf(30))); assert_eq!(branch, branch.clone()); } #[test] fn test_btree_depth() { use BTree::*; assert_eq!(Leaf(10).depth(), 1); let branch: BTree<i32> = Branch(Box::new(Leaf(15)), Box::new(Leaf(20))); assert_eq!(branch.depth(), 2); let tree: BTree<i32> = Branch(Box::new(branch.clone()), Box::new(Leaf(30))); assert_eq!(tree.depth(), 3); let other_tree: BTree<i32> = Branch( Box::new(branch.clone()), Box::new(branch.clone())); assert_eq!(other_tree.depth(), 3); } #[test] fn test_btree_mirror() { use BTree::*; assert_eq!(Leaf(10).mirror(), Leaf(10)); assert_eq!( Branch(Box::new(Leaf(10)), Box::new(Leaf(20))).mirror(), Branch(Box::new(Leaf(20)), Box::new(Leaf(10)))); assert_eq!( Branch( Box::new(Leaf(10)), Box::new(Branch(Box::new(Leaf(20)), Box::new(Leaf(30)))) ).mirror(), Branch( Box::new(Branch(Box::new(Leaf(30)), Box::new(Leaf(20)))), Box::new(Leaf(10)))); } #[test] fn test_btree_in_order() { use BTree::*; assert_eq!(Leaf(10).into_in_order(), vec!(10)); assert_eq!(Branch(Box::new(Leaf(10)), Box::new(Leaf(20))).into_in_order(), vec!(10, 20)); assert_eq!( Branch( Box::new(Leaf(10)), Box::new(Branch(Box::new(Leaf(20)), Box::new(Leaf(30)))) ).into_in_order(), vec!(10, 20, 30)); } I also have the following macro definitions (not all used above): macro_rules! assert_eq { ($actual:expr, $expected:expr) => ( if $expected != $actual { panic!("expected {:?}, but got {:?}", $expected, $actual); } ) } macro_rules! assert_neq { ($actual: expr, $expected_not: expr) => ( if $expected_not == $actual { panic!("expected {:?} not to equal {:?}", $expected_not, $actual); } ) } macro_rules! 
assert_approx { ($actual: expr, $expected: expr) => { if ($expected - $actual).abs() > 1e-3 { panic!("expected {:?} or similar, but got {:?}", $expected, $actual); } } } Answer: Overall, your code seems pretty spot-on. It was easy to read and understand. The most confusing to me was the mirror method, as that's not a common method for trees in my experience. You may want to consider adding some doc-comments, but I certainly wouldn't expect them in code at this level. ^_^ I do have some minor nits, of course! There's no space between impl and <. impl<T>, not impl <T>. For such a common and easily-understood function like max, I would go ahead and use the function to allow using it unqualified. For less common functions, I'd use the module, allowing you to skip typing the fully-qualified path. The vec! macro idiomatically uses []. vec![], not vec!(). There's no space between a generic type and : when defining constraints. T: Clone, not T : Clone. I prefer to use where clauses for constraints. They read better and are more flexible to modification. Not only can you say r.mirror() instead of (*r).mirror(), it's definitely preferred. Rust will automatically dereference for you. Instead of using allow(unused_variables), you can use a leading underscore _ to indicate that you are aware the variable is unused. You could also just not bind it to a variable at all. allow is a big hammer, and has the possibility of hiding variables you meant to use. Beyond the stylistic issues, there's one performance-related point. Your into_in_order method will allocate N vectors, one for each node in the tree. That will cause a lot of memory churn. Instead, I'd recommend breaking it up into two functions: one that allocates a vector and another that traverses the tree. This way, you can minimize allocations. Of course, benchmarking would be a good idea, but I didn't do that! 
^_^ It's also possible you might want to create an iterative version of this, as recursive code does have the possibility of hitting stack limits. Doing this would also allow you to write an Iterator implementation that traverses in-order. Then your method basically becomes a collect(). That might be fun to try next! Now for your questions: In a few places, I'm defining methods by passing self by reference and then matching on its dereferenced form. I then have to borrow the fields with ref in the destructuring. Is this the intended form? Yes, this is pretty typical. Is into_in_order using Vec properly/optimally? Ah, yes, good question. I think I touched on this above. Am I using Boxes correctly? Looks fine to me. When should I use Boxes vs. * consts? You almost never want to use a raw pointer. The compiler cannot help you whatsoever with raw pointers and reading from them requires unsafe code. These will generally come into play when you are writing FFI code or if you are writing the code that underlies a safe abstraction. Why do (*r).mirror() and r.mirror() do the same thing? I would have expected the latter to throw because Boxes don't have a mirror method. This was touched on above, but this specific case works because Box implements Deref: impl<T> Deref for Box<T> where T: ?Sized { type Target = T fn deref(&self) -> &T } Is there a less verbose alternative to the Branch(Box::new(Branch(…))) syntax? I was wondering the same thing as I read through your tests. They get pretty gnarly as they grow. I don't have any great suggestion, but I wonder if some kind of builder would help. Or perhaps some clever macro implementation that could reduce the visual clutter...
{ "domain": "codereview.stackexchange", "id": 16662, "tags": "beginner, tree, rust" }
Unusual triplet in 13C-NMR
Question: Why is there a triplet at 46.7 ppm in the following spectrum of 2,2,2-tribromoethan-1-ol? Answer: So, there actually isn't a triplet in the spectrum shown. This is a 13C{1H}, and so proton coupling to carbons is not visible. Hence, the two peaks at 79.9 and 46.7 ppm are singlets. If you read the question carefully, on the left hand side it indicates what the multiplicity of that peak is, i.e. how that peak would appear in the absence of proton decoupling. The peak at 46.7 ppm, in this case, would appear as a triplet. This information helps you work out how many protons are attached to a particular carbon centre. So for this example, one of your carbon centres will have no protons attached, and the other will have 2 protons attached, CH2. A CH2 carbon will split into a triplet, due to the normal 2nI+1 rule. Multiplicities used to be determined by running off-resonance decoupled experiments, so that multiplicity could be seen clearly for 1J coupling. These days, multiplicity would usually be determined via a DEPT experiment, or preferably a phase edited HSQC. Of course, that peak at ~77 ppm is a 1:1:1 triplet, and that is because it is CDCl3, and a proton-decoupled carbon experiment won't remove coupling to deuterium, and so the chloroform carbon splits into 3 lines, again due to the 2nI+1 rule (2H has spin=1). Important Edit I've just had a quick look at the solution on your page link, and it actually appears that there is an error in their assignment of the multiplicities. The -CBr3 peak will come at 46.7 ppm, and would be a singlet. The -CH2OH peak will come at 79.9 ppm, and this would be a triplet. They have these two multiplicities switched on the question page.
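The 2nI+1 counting used above is easy to verify for yourself. A small illustrative helper (not part of any NMR software):

```python
def multiplicity(n, spin):
    """Lines seen when coupling to n equivalent nuclei of a given spin (2nI + 1)."""
    return int(2 * n * spin + 1)

print(multiplicity(2, 0.5))  # CH2 carbon, two 1H (I = 1/2): 3 -> triplet
print(multiplicity(1, 1))    # CDCl3 carbon, one 2H (I = 1): 3 -> 1:1:1 triplet
print(multiplicity(3, 0.5))  # a CH3 carbon would give 4 -> quartet
```

Both "triplets" in the spectrum come out of the same rule, one from two spin-1/2 protons and one from a single spin-1 deuteron.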
{ "domain": "chemistry.stackexchange", "id": 5871, "tags": "spectroscopy, nmr-spectroscopy" }
Conversion accuracy from (latitude, longitude) to (x,y) by utm_odometry_node
Question: I wrote a node for subscribing to the odom topic (nav_msgs/Odometry) at which utm_odometry_node publishes results of the conversion from (latitude, longitude) (degrees) to (x,y) (meters). So my question is: is that conversion to (x,y) valid? (I use C Turtle. Code of the node that is subscribing to the odom topic is here, in the answer to my previous question: http://answers.ros.org/question/485/using-utm_odometry_node-to-convert-gps-mesurement) Example: That gives gpsd_client: (coordinates in Croatia, northwest) latitude: 46.3954 longitude: 16.448 That gives utm_odometry_node: x:611320 y:5.139e+06 I think that in the x coordinate one digit is missing. Well, if someone is working with utm_odometry_node and knows something about the conversion (accuracy, problems, etc.), please write an answer. Thanks Originally posted by Jurica on ROS Answers with karma: 97 on 2011-03-24 Post score: 4 Answer: That is probably a correct output. You have to remember that UTM is in reference to some fixed point on a grid. If you are significantly closer in the x direction, then it is totally reasonable that the x coordinate will be powers of 10 smaller than the y coordinate. As a quick sanity check, I use the Geotool: For your coordinates This gives you UTM coordinates: 33T 611320 5138999, which seems to be in line with the ones that you got with the UTM odometry node. The first digits represent the zone that you are in, which you can look up on a map like the following. The next numbers describe your position, in reference to a central meridian plus a "false northing" and "false easting" to ensure that negative numbers will never be a possibility. image description http://upload.wikimedia.org/wikipedia/commons/e/ed/Utm-zones.jpg You may want to read up a bit more on UTM, Wikipedia has a really good explanation of the reasoning behind UTM.
Originally posted by mjcarroll with karma: 6414 on 2011-03-24 This answer was ACCEPTED on the original site Post score: 6 Original comments Comment by Jurica on 2011-03-24: And thanks for useful tool geohack, i found that page with link https://jira.toolserver.org/browse/GEOHACK (http://toolserver.org/~geohack/). Comment by Jurica on 2011-03-24: Thanks for your answer. I looked at wikipedia UTM and i read about UTM coordinate system. I was using some version Gaus-Kruger conversions which gave me another results ( i was using for that geodetic GPS-RTK instead my GPS form Locosys) so i was thinking that conversions to UTM is false.
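The zone arithmetic can be sanity-checked without any geodesy library. A rough sketch (the northing estimate deliberately ignores the ellipsoid and the transverse-Mercator scale factor, so it is only good to about half a percent):

```python
lat, lon = 46.3954, 16.448

# UTM zones are 6-degree slices of longitude, numbered from 180W.
zone = int((lon + 180) // 6) + 1

# One degree of latitude is roughly 111.32 km, so the northing
# (metres north of the equator) should be in this ballpark:
approx_northing = lat * 111_320

print(zone)                    # 33, matching the "33T" from the geohack tool
print(round(approx_northing))  # ~5.16e6, close to the reported 5.139e6
```

So the y value in the millions is exactly what you should expect at that latitude, and the x value around 611 km is sensible for a point east of the zone's central meridian.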
{ "domain": "robotics.stackexchange", "id": 5203, "tags": "ros, gps-umd, gps-common" }
Column With Many Missing Values (36%)
Question: Hello, this is my first machine learning project. I have a dataset with 18,000 rows, and a column with 4244 values missing. I don't know why the values are missing, since when it's appropriate there's a 0 value in it. The dtype of the column is int64. I consider this column usable and would like to include it in the model. Could you please help me with how to deal with this problem, or lead me to a resource that teaches me how to deal with it? Answer: I don't know why the values are missing since when it's appropriate there's a 0 value in it. The first step is to check with some SME (Subject Matter Expert) or the Data Custodian. I can't tell you how many times I've built a model/started analysis just to figure out that the data was wrong. Try to figure out the reason behind the Nulls/0. Besides that, there are many ways to handle missing data; a few are below: Remove records with this missing value in your column. If this is an important column to your model it may be best to get rid of that record depending on the shape (rows x cols/features) of your dataset. Don't throw off the results of your model because there's some data that may throw it off (even if you use some of the methods below) Mean/Median/Mode Impute - A common method of handling missing data is to fill the missing values with the column's mean or median (rarely do you use the Mode). Fill the values to create a normal distribution - it depends on your data, but filling the values so you get normally distributed column data can be beneficial Try all these methods and more - When you start modeling you'll learn to "throw stuff at the wall" and see what sticks. Look at your model results, talk with SMEs, and think about what makes sense. Some ways of handling missing data will work better with different models/datasets. Experiment and have fun!
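To make the fill strategies concrete, here is a plain-Python sketch with made-up values (pandas' fillna/dropna do the same job on a real DataFrame):

```python
from statistics import mean, median

raw = [12, 0, None, 7, None, 3, 0, 25]  # None marks a missing entry

observed = [v for v in raw if v is not None]

# Strategy 1: drop records with a missing value.
dropped = observed

# Strategies 2/3: impute with the mean or the median of the observed values.
mean_filled = [v if v is not None else mean(observed) for v in raw]
median_filled = [v if v is not None else median(observed) for v in raw]

print(len(dropped), median_filled[2])
```

Note that mean imputation is sensitive to outliers (the 25 here pulls the fill value up), which is one reason the median is often preferred.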
{ "domain": "datascience.stackexchange", "id": 4943, "tags": "machine-learning, dataset, missing-data" }
How to find the expectation value of several circuits using Qiskit aqua operator logic?
Question: I am using the method from this SE answer to calculate the expectation value using qiskit aqua operator logic which works well for a single circuit. Here is a simplified code snippet to demonstrate how it works. import qiskit from qiskit import QuantumCircuit, QuantumRegister, ClassicalRegister, Aer import numpy as np from qiskit.aqua import QuantumInstance from qiskit.aqua.operators import PauliExpectation, CircuitSampler, StateFn, CircuitOp, CircuitStateFn qctl = QuantumRegister(1) psi = QuantumCircuit(qctl) psi = CircuitStateFn(psi) qctl = QuantumRegister(2) op = QuantumCircuit(qctl) op.z(0) op.ry(np.pi/4,0) op = CircuitOp(op) backend = Aer.get_backend('qasm_simulator') q_instance = QuantumInstance(backend, shots=1024) measurable_expression = StateFn(op, is_measurement=True).compose(psi) expectation = PauliExpectation().convert(measurable_expression) sampler = CircuitSampler(q_instance).convert(expectation) print('Expectation Value = ', sampler.eval()) However, I am applying it to a VQE application and for each iteration I have $x>1$ circuits. There is no issue when I run on my local machine because there is no queue time, but on the IBMQ machines I would like to submit a batch of the aqua expectation value circuits for each iteration to reduce queue times. For a list of circuits I can submit a batch using the IBMQJobManager() function, however, the method I am using does not store circuits as a list. My question is, can I use the jobmanager with the aqua expectation value to submit a batch of jobs to IBMQ? If not, is there a different way to submit a batch of the aqua expectation value circuits? Answer: There is no need to use IBMQJobManager as QuantumInstance will do the necessary work for you. Just add your CircuitOps to a ListOp then pass it to StateFn constructor: ops = [] # ... Construct your first circuit ... # Now, add it to the list: ops.append(op1) # ... Construct your second circuit ... 
# Now, add it to the list: ops.append(op2) measurable_expression = StateFn(ListOp(ops), is_measurement=True).compose(psi) This way, QuantumInstance will assemble these circuits into a single Qobj and pass it to the backend. Note that if the number of circuits is larger than backend.configuration().max_experiments, it will be split into multiple payloads.
{ "domain": "quantumcomputing.stackexchange", "id": 2852, "tags": "programming, qiskit, ibm-q-experience, vqe" }
Understanding the time dilation calculation in Brian Cox's Doctor Who lecture
Question: Background My son has just watched Brian Cox's fascinating "The Science of Doctor Who" lecture on space-time and was particularly intrigued by the part (at 22m:22s) where he said: Let us say that we catapult Jim off at 99.94% of the speed of light for five years, according to his watch. Then, we tell Jim to turn around and come back. It takes another five years to get back to the Earth, so for him the journey would take ten years. But for us, with our watches ticking faster than Jim's, 29 years would have passed. He wanted me to show him how to calculate this, but I need help! Question How do you calculate that a trip for 10 years at 99.94% of the speed of light results in 29 years being passed at Earth? What I've tried First we tried applying a relativistic distortion based on the rocket instantly moving to the speed of light, but this resulted in Earth dwellers having aged by 288 years. Second we tried assuming the rocket had a constant acceleration (in its frame of reference) and first accelerated to 99.94%, then decelerated to come to a stop, then accelerated backwards to 99.94%, and then decelerated to stop at the Earth. This results (according to our sums based on the wikipedia page on time dilation) in the people on the rocket aging 4 years, not 10. Perhaps we have made a mistake in the calculations (shown below)? Or perhaps there is another common model of acceleration that we should use? (Perhaps one that assumes we have more fuel at the start so our acceleration is slower initially?) Method Python code: from math import * c=299792458. v=c*99.94/100. year=365.25*24*60*60. 
print 'First assume we instantly accelerate to our final speed' rocket=10*year earth=rocket/sqrt(1-v**2/c**2) print 'On Earth people have aged by',earth/year,'years' print t=29*year/4 g = sqrt(v**2/(1-v**2/c**2))/t print 'Now assume a constant acceleration of',g,'metres per second per second' print 'After 29/4 years we accelerate to a speed of',g*t/sqrt(1+(g*t)**2/c**2) pt=(c/g)*asinh(g*t/c) print 'This takes',pt/year,'years of proper time' print 'so the complete journey will last',4*pt/year,'years for the spacemen' Answer: I agree with you. Brian Cox has obviously missed a factor of ten in the time on Earth or he meant the speed to be 0.94$c$ rather than 0.9994$c$. If you're interested, there are more details about the calculation for the relativistic rocket in John Baez's article. I haven't gone through your calculation in detail, but the approach of splitting the journey into four parts (acceleration then deceleration out then acceleration then deceleration back) is the correct way to do it.
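Both readings of the lecture's numbers follow from the standard gamma factor. A short check (instantaneous acceleration assumed, as in the asker's first attempt):

```python
import math

def earth_years(proper_years, beta):
    """Coordinate time elapsed on Earth while the traveller ages
    proper_years at constant speed beta = v/c (instant turnaround)."""
    gamma = 1.0 / math.sqrt(1.0 - beta**2)
    return proper_years * gamma

print(round(earth_years(10, 0.9994), 1))  # ~288.7 -- the asker's 288 years
print(round(earth_years(10, 0.94), 1))    # ~29.3  -- the quoted 29 years
```

At 0.9994c the Earth ages roughly 289 years, matching the asker's first calculation; 29 years on Earth corresponds to 0.94c, which supports the missing-digit explanation.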
{ "domain": "physics.stackexchange", "id": 97051, "tags": "special-relativity" }
Why samples per symbol $\geq 2$ in the GNURadio Constellation Modulator block?
Question: Let's say I want to emulate the design of a modulator I am building "in real life". I'd like the components, input data rate, and output bandwidth to be the same as in the real system. If I use the GNURadio Constellation Modulator block for this, it imposes a lower limit on samples per symbol. This results in a different bandwidth for the GNURadio graph than the real system. Why does this block impose this restriction? And, is there a canonical (or preferred) way around it, when emulating real systems? Answer: The GNURadio Constellation modulator enforces a minimum of 2 Samples Per Symbol mostly due to simplicity and some practical reasons. Theoretically, if you're just transmitting PSK without any pulse shaping, you could do 1 sample per symbol (note these are complex symbols). But typically we like filtered PSK, and the block does this with the RRC filter. However - a key point is that when filtering the signal, you have bandwidth expansion. So if you use a 35% excess bandwidth, your sample rate must be at least 1.35x larger, or another way to say it is that you need at least 1.35 samples per symbol to satisfy Nyquist. Similarly for 50% bandwidth you'd need 1.5 samples/symbol. The easiest way to deal with this is to just upsample by 2 (i.e. oversample) and filter with an RRC designed for 2 samples per symbol and then you can support the full range of excess BW values that the RRC can do. There is one other key idea here though - if you were to try doing a rational upsampling by 1.35x for a 35% excess BW RRC, your resulting signal would be fully occupying your bandwidth (i.e. you have no oversampling). This becomes a problem in multiple areas - for one, when you upsample by 2 later on, the image that must be filtered will show up right next to the one you want to keep, making your digital filtering requirements harder.
Additionally, if you instead plan to send this out via a D/A, the image in your 2nd Nyquist zone will be sitting right next to your desired image in zone 1, and again filtering with your analog reconstruction filter will be very difficult (e.g. a complex filter with a steep transition band). Also take a look at this very similar question asked by myself where Dan Boschen gave a great answer: Sample rates, Samples per Symbol, and Digital Pulse Shaping
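The bandwidth-expansion arithmetic is worth writing down explicitly. A small sketch (the 1 Msym/s symbol rate is an arbitrary example figure):

```python
def min_samples_per_symbol(alpha):
    """Nyquist minimum for an RRC-shaped signal with roll-off (excess BW) alpha."""
    return 1 + alpha

symbol_rate = 1e6  # 1 Msym/s, made up for illustration

for alpha in (0.35, 0.50, 1.00):
    occupied_bw = symbol_rate * (1 + alpha)  # Hz occupied after pulse shaping
    print(alpha, occupied_bw, min_samples_per_symbol(alpha))
```

At 2 samples/symbol the sample rate covers the occupied bandwidth for every roll-off the RRC can produce (alpha up to 1), which is one reason the block simply enforces sps >= 2.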
{ "domain": "dsp.stackexchange", "id": 9759, "tags": "gnuradio" }
What is the difference between Machine Learning model, algorithm and hypothesis?
Question: I'm fairly new to the Machine Learning field and still trying to grasp the basics, so this question may seem very stupid, but what is the difference between a Machine Learning model, algorithm and hypothesis? The terms are interrelated, but how exactly are they different from each other? They are used interchangeably, so I don't really understand them. If someone can help to explain these words in the simplest terms it would be very helpful. Answer: An algorithm is a sequence of instructions to tell the computer (or a human) what to do. The computer executes the algorithm and produces or not (may not halt) an output (e.g. prints a message). A Python program is an algorithm. Any program is an algorithm. So, think of an algorithm as a recipe that you use to cook/compute something. A model can have different meanings in different contexts, but they are related. Here I provide more details. For example, in supervised learning, a model can refer to a parametrized function (so kind of a set of functions) or a function. A good model is a model that approximates well some target function (e.g. a function that differentiates between cats and dogs). You need to define good and well, which depends on the context in practice. A hypothesis is roughly equivalent to the idea of a model. However, this term is used more in computational learning theory. Given your knowledge of ML, I'd ignore this term for now. Models that run on computers are, by definition, algorithms or part of algorithms. However, it's not always clear why a model (like a neural network) is a sequence of instructions. In fact, in many cases, that's not clear at all. But it turns out that algorithms can compute functions and models approximate, compute or are functions. If you want to think of a model as an algorithm, the best example I know of is a decision tree model.
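A toy example may help pin the two terms down (nothing here is a library API; the numbers are invented):

```python
# The "model": a parametrized family of functions f(x) = w*x + b.
def model(x, w, b):
    return w * x + b

# The "algorithm": a recipe -- here, repeated gradient-descent updates on
# squared error -- that picks one particular function out of that family.
def train(xs, ys, lr=0.01, epochs=2000):
    w, b = 0.0, 0.0
    for _ in range(epochs):
        for x, y in zip(xs, ys):
            err = model(x, w, b) - y
            w -= lr * err * x
            b -= lr * err
    return w, b

xs, ys = [0, 1, 2, 3], [1, 3, 5, 7]  # data generated by y = 2x + 1
w, b = train(xs, ys)
print(round(w, 2), round(b, 2))      # recovers roughly 2.0 and 1.0
```

The model is the function family; the algorithm is the procedure that searches it; the particular (w, b) the algorithm settles on is the fitted model (or, in learning-theory language, the chosen hypothesis).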
{ "domain": "ai.stackexchange", "id": 3973, "tags": "machine-learning, comparison, algorithm, models, hypothesis" }
Relation between hyperparameters and training set for an object detection model
Question: I have 2 instances of an object detection model. The only difference between these two models is the training data used: The first model was trained with a small training set The second model was trained on a larger training set than the first one The first model was trained on the following hyperparameters: Number of iterations: 250k Batch Size: 10 Learning Rate: warms up to 0.001 and decreases to 0.0002 after 150k iterations Since the second model has more training data, I assumed I need to change the hyperparameters a bit. So I tried training the second model on the following hyperparameters: Number of iterations: 600k Batch Size: 10 Learning Rate: warms up to 0.001 and decreases to 0.0002 after 400k iterations When I measure the mAP for both models on a testing set, the first model vastly outperforms the second model. model 1 mAP: 0.924 model 2 mAP: 0.776 This leads me to my question for you: How would the hyperparameters (batch size, learning rate etc.) change when the size of your training set increases? What factors need to be considered for this increase in training set size, in order to get the most optimal model possible? Any and all responses will be greatly helpful. Thank you :) Answer: A major difference between the first and the second model you trained is the size of the data, assuming that the model is not pretrained. Increased data, of course, needs increased epochs. Accordingly, the batch size must also increase. Batch Size: While training on the smaller dataset, a batch size of 10 yielded better results. The errors were averaged over 10 samples and then back-propagated through the model. But now for the larger dataset, the batch size remains the same and hence only little optimization occurs, as the error is averaged over 10 samples only (which is relatively small for a large dataset). Learning Rate: For the larger dataset, the number of epochs is increased.
The purpose of the learning rate is to scale the gradients of the loss with respect to the parameters. A smaller learning rate always helps as it prevents overshooting the minima of the loss function. I would suggest you increase the learning rate so that the optimization does not diminish, as we are having a larger number of epochs. Gradually decrease the learning rate as the loss approaches its minima. If you are training a popular architecture (like Inception, VGG, etc.), and that too on datasets like ImageNet or COCO with little modifications, you should definitely read the research papers published on various problems, as they would provide a better start to the training.
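One way to turn the advice above into numbers is the common "keep the number of epochs constant, scale the learning rate with the batch" heuristic. A sketch (the 50k/200k dataset sizes are assumptions, since the question doesn't give them, and this is a rule of thumb rather than a guarantee):

```python
def scaled_schedule(base_lr, base_batch, base_iters, base_size, new_size):
    """Heuristic only: grow the batch with the dataset, scale the learning
    rate linearly with the batch, and keep the number of epochs constant."""
    new_batch = max(1, round(base_batch * new_size / base_size))
    new_lr = base_lr * new_batch / base_batch
    new_iters = round(base_iters * (new_size / base_size) * (base_batch / new_batch))
    return new_batch, new_lr, new_iters

# Hypothetical sizes: 50k images before, 200k after.
print(scaled_schedule(0.001, 10, 250_000, 50_000, 200_000))  # batch 40, lr ~0.004, 250k iters
```

Whatever numbers you start from, validate them empirically; a learning-rate sweep on a held-out set is cheap insurance.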
{ "domain": "datascience.stackexchange", "id": 6894, "tags": "deep-learning, object-detection, hyperparameter-tuning" }
Predicting correct match of French to English food descriptions
Question: I have a training and test set of food description pairs (please see the example below). The first item in a pair is the name of a food in French and the second is that food's description in English. The training set also has a trans field that is True for correct descriptions and False for wrong descriptions. The task is to predict the trans field in a test set, in other words to predict which food description is correct and which is wrong. dishes = [{"fr":"Agneau de lait", "eng":"Baby milk-fed lamb", "trans": True}, {"fr":"Agrume", "eng":"Blackcurrants", "trans": False}, {"fr":"Algue", "eng":"Buttermilk", "trans": False}, {"fr":"Aligot", "eng":"potatoes mashed with fresh mountain cheese", "trans": False}, {"fr":"Baba au rhum", "eng":"Star anise", "trans": True}, {"fr":"Babeurre", "eng":"seaweed", "trans": False}, {"fr":"Badiane", "eng":"Sponge cake (often soaked in rum)", "trans": False}, {"fr":"Boeuf bourguignon", "eng":"Créole curry", "trans": False}, {"fr":"Carbonade flamande", "eng":"Beef Stew", "trans": True}, {"fr":"Cari", "eng":"Beef stewed in red wine", "trans": False}, {"fr":"Cassis", "eng":"citrus", "trans": False}, {"fr":"Cassoulet", "eng":"Stew from the South-West of France", "trans": True}, {"fr":"Céleri-rave", "eng":"Celery root", "trans": True}] df = pd.DataFrame(dishes) fr eng trans 0 Agneau de lait Baby milk-fed lamb True 1 Agrume Blackcurrants False 2 Algue Buttermilk False 3 Aligot potatoes mashed with fresh mountain cheese False 4 Baba au rhum Star anise True 5 Babeurre seaweed False 6 Badiane Sponge cake (often soaked in rum) False 7 Boeuf bourguignon Créole curry False 8 Carbonade flamande Beef Stew True 9 Cari Beef stewed in red wine False 10 Cassis citrus False 11 Cassoulet Stew from the South-West of France True 12 Céleri-rave Celery root True I am thinking of solving this as a text classification problem, where the text is a concatenation of the French name and English description embeddings. Questions: Which embeddings to use and how to concatenate them?
Any other ideas on approach to this problem? BERT? Update: How about the following approach: Translate (with BERT?) French names to English Use embeddings to create two vectors: v1 - translated English vector and v2 - English description vector (from data set) Compute v1 - v2 Create new data set with two columns: v1 - v2 and trans Train classifier on this new data set Update 2: It looks like cross-lingual classification may be the right solution for my problem: https://github.com/facebookresearch/XLM#iv-applications-cross-lingual-text-classification-xnli It is not clear yet from the description given on the page with the link above, where to fit my own training data set and how to run the classifier on my test set. Please help to figure this out. It would be ideal to find an end-to-end example / tutorial on cross-lingual classification. Answer: As you suspected, the best approach would be to take a massive multilingual pretrained language model and make use of the information about French and English that it has already learned. You can read about some good options here. The basic idea is to train a new, lightweight network to make predictions based on the output from the pretrained model; it's usual to just have a single-layer feed-forward network for this “fine-tuning”. Some implementations will already have this conveniently coded up for you, so check the documentation for whatever you decide to use! Your problem is specifically a sentence pair classification problem, and there is a tutorial for that here. Pay close attention to the data processing phase of the tutorial. Overall, the differences you need to apply to what the tutorial describes are: You need to use multilingual BERT You need to prepare your data exactly as the tutorial describes, but use your own snippets in place of the sentence pairs
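Whatever multilingual model you settle on, the data-preparation step boils down to packing each row into one tagged sentence pair. A schematic sketch (the [CLS]/[SEP] markers follow BERT's convention; in practice the library's tokenizer inserts them for you, so treat this as illustration only):

```python
def make_pair_example(fr, eng, trans):
    """Turn one row of the dishes table into a sentence-pair example."""
    return {"text": f"[CLS] {fr} [SEP] {eng} [SEP]", "label": int(trans)}

ex = make_pair_example("Agneau de lait", "Baby milk-fed lamb", True)
print(ex["text"])   # [CLS] Agneau de lait [SEP] Baby milk-fed lamb [SEP]
print(ex["label"])  # 1
```

The classifier head then reads the pooled pair representation and predicts the binary trans label directly; no explicit translation step is needed.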
{ "domain": "datascience.stackexchange", "id": 7289, "tags": "classification, nlp, word-embeddings, machine-translation" }
Subscribe to PointCloud2 in ROS 2
Question: Hello everyone, I get a depth image from my intel d453 camera in a PointCloud2 msg. My problem is now the following. I do not understand how I can access the individual points, i.e. XYZ coordinates. It is only possible to access the whole data structure. Unfortunately it is not possible to sort the points correctly. I need the following information for each point XYZ coordinates Color of the point Is it possible to access the individual points within the data structure in this way? Here is my Subscriber Callback: void callback_depth_image_d435 (const sensor_msgs::msg::PointCloud2::SharedPtr depth_points) { printf("Cloud: width = %u, height = %u\n", depth_points->width, depth_points->height); // Looking for Something like that // BOOST_FOREACH(const Point& pt, depth_points->points) // { // printf("\tpos = (%f, %f, %f), w = %u, normal = (%f, %f, %f)\n", // pt.pos.x, pt.pos.y, pt.pos.z, pt.w, // pt.normal[0], pt.normal[1], pt.normal[2]); // } //test std::cout << "data_size" << depth_points->data.size() << "\n"; std::cout << "point_step" << depth_points->point_step << "\n"; //std::cout << "data_size" << depth_points-> << "\n"; std::cout << "data_size" << depth_points->data.size() << "\n"; } Many thanks in advance! Originally posted by Schloern93 on ROS Answers with karma: 51 on 2020-12-22 Post score: 0 Answer: Here is a solution.
Install pcl_conversion (sudo apt install ros-eloquent-pcl-conversions) CMakeList find_package(PCL 1.3 REQUIRED) include_directories(${PCL_INCLUDE_DIRS}) link_directories(${PCL_LIBRARY_DIRS}) add_definitions(${PCL_DEFINITIONS}) target_link_libraries("target" ${PCL_LIBRARIES}) Subscriber Callback void callback_depth_image_d435 (const sensor_msgs::msg::PointCloud2::SharedPtr point_cloud2_msgs) { pcl::PointCloud<pcl::PointXYZ> point_cloud; pcl::fromROSMsg(*point_cloud2_msgs, point_cloud); BOOST_FOREACH(const pcl::PointXYZ& pt, point_cloud.points) { std::cout << "x: " << pt.x <<"\n"; std::cout << "y: " << pt.y <<"\n"; std::cout << "z: " << pt.z <<"\n"; std::cout << "---------" << "\n"; } } Originally posted by Schloern93 with karma: 51 on 2020-12-29 This answer was ACCEPTED on the original site Post score: 1 Original comments Comment by SimoneBenatti on 2021-04-06: I tried the inverse procedure (feeding data to pointcloud2) on Eloquent but I'm stuck since toROSMsg expects pcl::PointCloud and pcl::PCLPointCloud2. This is the one defined in pcl/ros/conversions.h
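For intuition about what pcl::fromROSMsg is doing: the message's data field is just tightly packed bytes that you could also walk by hand, one point_step stride per point. A Python sketch over a fabricated two-point buffer (the 16-byte point_step and the x/y/z-at-offset-0 layout are assumptions here; real code must read point_step and the fields array from the message):

```python
import struct

point_step = 16  # bytes per point: 3 x float32 for x, y, z plus 4 bytes padding

# Fabricate a buffer that looks like the data field of a two-point cloud.
data = b"".join(struct.pack("<fff4x", *p)
                for p in [(1.0, 2.0, 3.0), (4.0, 5.0, 6.0)])

# Walk it point by point, pulling x, y, z out of each 16-byte stride.
points = [struct.unpack_from("<fff", data, i * point_step)
          for i in range(len(data) // point_step)]
print(points)  # [(1.0, 2.0, 3.0), (4.0, 5.0, 6.0)]
```

Colored clouds work the same way, with an extra rgb field at some additional offset within each stride; the offsets come from the fields array in the message.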
{ "domain": "robotics.stackexchange", "id": 35899, "tags": "ros2, pointcloud" }
No point cloud in Turtlebot Gazebo gmapping
Question: I'm trying to follow this tutorial with Gazebo simulator: http://learn.turtlebot.com/2015/02/01/11/ In the tutorial, (real) Turtlebot is scanning the environment with Kinect point cloud. But when I run the examples in the turtlebot_gazebo package, it only scans with a fake laser. Probably because of this node, in the launch file of example: https://github.com/turtlebot/turtlebot_simulator/blob/indigo/turtlebot_gazebo/launch/turtlebot_world.launch#L25 Anyway, how can I add the Kinect point cloud scan to that example? Originally posted by Ahmet Sezgin Duran on ROS Answers with karma: 1 on 2015-03-05 Post score: 0 Answer: I think you can see the point cloud if you add PointCloud2 with the topic name: e.g. /camera/depth/points Originally posted by JollyGood with karma: 58 on 2015-03-06 This answer was ACCEPTED on the original site Post score: 0 Original comments Comment by Ahmet Sezgin Duran on 2015-03-08: Anyway without using script? I just wanted to see it in Rviz. Comment by BlitherPants on 2015-03-11: In RViz, you add a visualization by clicking the "add" button on the bottom left. A window will pop up, displaying different data types. Click "PointCloud2", hit "OK", and you will see a PointCloud2 on the left-hand side of RViz in the "Displays" area. Expand it, and then type the topic in.
{ "domain": "robotics.stackexchange", "id": 21054, "tags": "gazebo, navigation, kinect, turtlebot, gmapping" }
Find the maximum possible energy for a beta-particle decay-chain
Question: So I have this problem where I'm supposed to find the maximum possible energy for a $\beta$-particle in the following decay-chain: The first decay: $$^{90}Sr\rightarrow ^{90}Y + \beta^- + \bar{v}_e$$ The second decay: $$^{90}Y\rightarrow ^{90}Zr + \beta^- + \bar{v}_e$$ $^{90}Sr$ decays to the stable $^{90}Zr$ nucleus after two $\beta$-decays. For the first $\beta^-$-decay we have $E_{\beta^-, max}=0.5459$ MeV and for the second $\beta^{-}$-decay we have $E_{\beta^-, max}=2.2785$ MeV. Does this mean that the maximum possible energy for a $\beta$-particle in the given decay-chain is 2.2785 MeV ? Answer: Yes. (Some questions are easy to answer :) )
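A trivial numeric restatement of the answer (sketch; endpoint energies are the values quoted in the question, in MeV):

```python
# The chain's maximum possible single-beta energy is simply the larger of the
# two per-decay endpoint energies, since each beta comes from one decay.
e_max_first = 0.5459   # 90Sr -> 90Y
e_max_second = 2.2785  # 90Y -> 90Zr
chain_max = max(e_max_first, e_max_second)
print(chain_max)  # 2.2785
```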
{ "domain": "physics.stackexchange", "id": 90026, "tags": "nuclear-physics, atomic-physics, radiation, nuclear-engineering" }
Insertion sort implementation
Question:

import java.util.Scanner;
import java.lang.Comparable;

class InsertionSort {
    private InsertionSort() {}

    private static boolean less(Comparable v, Comparable w) {
        return (v.compareTo(w) < 0);
    }

    private static void exch(Object[] a, int i, int j) {
        Object swap = a[i];
        a[i] = a[j];
        a[j] = swap;
    }

    private static void sort(Comparable[] a) {
        int length = a.length;
        for (int i = 0; i < length; i++) {
            // Maintain Invariant
            for (int j = i; j > 0; j--) {
                if (less(a[j], a[j - 1])) {
                    // Invariant broken, fix it.
                    exch(a, j, j - 1);
                }
            }
        }
        // Postcondition: assert isSorted(a)
    }

    private static void show(Object[] a) {
        for (int i = 0; i < a.length; i++) {
            System.out.println(a[i]);
        }
    }

    private static Integer[] readAllInts() {
        Scanner scanner = new Scanner(System.in);
        int numberOfItems;
        Integer[] items;
        if (scanner.hasNext()) {
            numberOfItems = scanner.nextInt();
            items = new Integer[numberOfItems];
        } else {
            // maybe throw an error?
            return new Integer[0];
        }
        int i = 0;
        while (scanner.hasNext() && i < numberOfItems) {
            items[i] = scanner.nextInt();
            i++;
        }
        return items;
    }

    public static void main(String[] args) {
        Integer[] items = InsertionSort.readAllInts();
        InsertionSort.sort(items);
        InsertionSort.show(items);
    }
}

Above is my implementation of the insertion sort algorithm. I want to learn how to write algorithms as well as clean code. Below are the points which I have taken care of or am concerned about:

- Kept meaningful names.
- Declared variables near their usage.
- Suppose this code needs to be consumed by many clients; where and what exceptions should I handle and throw?
- Kept variable scopes as local as possible for easy GC.
- Most of the methods are static because I don't think they belong to a particular object.

That's it for now, I will keep updating the list if something comes to me later. Ref: Algorithms

Answer: readAllInts
This is a wishy washy name. What does it mean, "all ints"? In the universe? In my little black book of ints? Entered by the user on standard input? Aha!
It might be good to split this up into multiple methods:

- a method to read and return the number of ints from a scanner and handle errors: print a message in case of invalid input and try again, or simply throw an exception
- a method to read a specified number of ints from a scanner and return them as an array
- a method that calls the previous two and returns the array

That way each method will have a simpler, single responsibility. A bigger problem with this method is that it doesn't belong in the InsertionSort class. The way you read an array of integers has nothing to do with the sorting algorithm.

Type safety
By not using type tokens, you sacrificed type safety. For example a call less(4, "hello") would compile, which is undesirable. You should never use bare types, always specify type parameters. For example you can make less type safe like this:

private static <T extends Comparable<? super T>> boolean less(T v, T w) {
    return v.compareTo(w) < 0;
}

This effectively prevents less(4, "hello") at compile time. Correcting sort similarly:

public static <T extends Comparable<? super T>> void sort(T[] a) {
    int length = a.length;
    for (int i = 0; i < length; i++) {
        for (int j = i; j > 0; j--) {
            if (less(a[j], a[j - 1])) {
                // Invariant broken, fix it.
                exch(a, j, j - 1);
            }
        }
    }
}

And that's it, the other methods are fine, as they don't use bare types, and they are fine using Object[] for swapping and printing.

Naming
exch is not a great name. "exchange" would be better, but the most common name for this operation is "swap". less is also not great, as some might interpret it in the sense of "minus". I suggest "lessThan" instead, which makes the comparison logic perfectly clear. The variable a is the worst offender of all. "swap" is not great either.
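For comparison, here is a minimal Python sketch of the same algorithm (illustrative only, not part of the original review; it adds the standard early-exit in the inner loop, which the Java version omits):

```python
def insertion_sort(items):
    """In-place insertion sort, following the same swap-based structure."""
    for i in range(len(items)):
        # Invariant: items[0..i) is sorted at the start of each outer iteration.
        j = i
        while j > 0 and items[j] < items[j - 1]:
            # Invariant broken, fix it by swapping (the Java code's "exch");
            # unlike the Java inner loop, we stop once the element is in place.
            items[j], items[j - 1] = items[j - 1], items[j]
            j -= 1
    return items

print(insertion_sort([5, 2, 4, 6, 1, 3]))  # [1, 2, 3, 4, 5, 6]
```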
{ "domain": "codereview.stackexchange", "id": 14596, "tags": "java, algorithm, reinventing-the-wheel" }
Counting Sort Implementation in C#
Question: Here is my implementation. It is, I think, almost like the textbook algorithm.

public static uint[] CountingSort(uint[] A, uint max)
{
    uint[] B = new uint[A.Length];
    int[] c = new int[max + 1];
    for (int j = 0; j < A.Length; j++)
        c[A[j]]++;
    for (int i = 1; i < max + 1; i++)
        c[i] += c[i - 1];
    for (int j = A.Length - 1; j >= 0; j--)
        B[--c[A[j]]] = A[j];
    return B;
}

Answer: Var
Use implicit typing when the right-hand side of the declaration makes the type obvious.

uint[] B = new uint[A.Length];

should be

var B = new uint[A.Length];

Naming
Single character variable names are some of the least useful names you could give your variables. Your variables have a purpose. Name them appropriately using these purposes. B means nothing.

Spacing
Give your code some room to breathe, the compiler will cut it out anyway when it gets to the optimization stage, and there's no sense making it harder to read for maintenance programmers. Also consider surrounding your loop bodies with braces. This way if somebody came in and added a line to the loop body, it wouldn't break.

Design
Use foreach when iterating over every item in a collection, it more accurately conveys your meaning. Secondly, I cannot find a use for manually specifying max.
Instead you should calculate this using:

var max = A.Max();

All things combined, I recommend:

public static uint[] CountingSort(uint[] unsorted)
{
    var max = unsorted.Max();
    var sorted = new uint[unsorted.Length];
    var count = new int[max + 1];
    foreach (var unsortedValue in unsorted)
    {
        count[unsortedValue]++;
    }
    for (var i = 1; i < max + 1; i++)
    {
        count[i] += count[i - 1];
    }
    for (var j = unsorted.Length - 1; j >= 0; j--)
    {
        sorted[--count[unsorted[j]]] = unsorted[j];
    }
    return sorted;
}

Optionally
You could consider making this method an extension method of the uint[] type:

public static uint[] CountingSort(this uint[] unsorted)
{
    var max = unsorted.Max();
    var sorted = new uint[unsorted.Length];
    var count = new int[max + 1];
    foreach (var unsortedValue in unsorted)
    {
        count[unsortedValue]++;
    }
    for (var i = 1; i < max + 1; i++)
    {
        count[i] += count[i - 1];
    }
    for (var j = unsorted.Length - 1; j >= 0; j--)
    {
        sorted[--count[unsorted[j]]] = unsorted[j];
    }
    return sorted;
}

Then you can call the method like this:

var sorted = unsorted.CountingSort();

But this is optional and depends on your desired use cases. Secondly, consider using an interface such as IList to allow callers of your method more freedom when specifying data types:

public static IList<uint> CountingSort(this IList<uint> unsorted)
{
    var max = unsorted.Max();
    var sorted = new uint[unsorted.Count];
    var count = new int[max + 1];
    foreach (var unsortedValue in unsorted)
    {
        count[unsortedValue]++;
    }
    for (var i = 1; i < max + 1; i++)
    {
        count[i] += count[i - 1];
    }
    for (var j = unsorted.Count - 1; j >= 0; j--)
    {
        sorted[--count[unsorted[j]]] = unsorted[j];
    }
    return sorted;
}

Now you can call:

new List<uint>().CountingSort();
new uint[0].CountingSort();
new Collection<uint>().CountingSort();

With way more flexibility.
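The same stable counting-sort algorithm, sketched in Python for illustration (not part of the original answer):

```python
def counting_sort(unsorted):
    """Stable counting sort for non-negative integers.

    Follows the same three passes as the C# version: a histogram,
    prefix sums, then a reverse placement pass that keeps the sort stable.
    """
    if not unsorted:
        return []
    count = [0] * (max(unsorted) + 1)
    for value in unsorted:            # histogram
        count[value] += 1
    for i in range(1, len(count)):    # prefix sums -> final positions
        count[i] += count[i - 1]
    result = [0] * len(unsorted)
    for value in reversed(unsorted):  # reverse pass preserves stability
        count[value] -= 1
        result[count[value]] = value
    return result

print(counting_sort([4, 1, 3, 1, 0]))  # [0, 1, 1, 3, 4]
```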
{ "domain": "codereview.stackexchange", "id": 17144, "tags": "c#" }
Particle in a cylinder with a spring, sign convention in potential energy (Lagrangian multipliers)
Question: I'm trying to get the force of constraint. The problem I have is when defining the sign of the potential energy using cylindrical coordinates $(\rho,\phi,z)$. What I have is: $$ V=mgy-\frac{1}{2}k\left(\rho^2+R\rho \sin\phi+\left(\frac{R}{2}\right)^2+z^2\right)= $$ $$ =mg\rho\sin{\phi}-\frac{1}{2}k\left(\rho^2+R\rho \sin\phi+\left(\frac{R}{2}\right)^2+z^2\right). $$ But the solution in theory is: $$ V=-mg\rho\sin{\phi}-\frac{1}{2}k\left(\rho^2+R\rho \sin\phi+\left(\frac{R}{2}\right)^2+z^2\right). $$ I don't get why the $mg\rho\sin{\phi}$ term is negative; considering the axes in the frame of reference from the diagram below, shouldn't it be positive if the $y$-axis has the same direction as gravity? Answer: Since the $y$-axis is pointing downwards, the height will increase when the $y$ coordinate decreases, so the correct gravitational potential energy is $V_g=-mgy$, as it is greater at greater height. You can check it this way: the azimuthal angle $\phi$ is usually defined from the $x$-axis, so $V_g=-mgy=-mg\rho\sin\phi$ will be minimum when $\phi=\pi/2$ (the situation of the diagram) and maximum when $\phi=3\pi/2.$
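A quick numeric sanity check of the answer's claim (illustrative only; the values of m, g and rho are arbitrary):

```python
import math

# With the y-axis pointing down, V_g = -m*g*rho*sin(phi); the answer claims
# this is minimal at phi = pi/2 and maximal at phi = 3*pi/2 (i.e. 270 deg).
m, g, rho = 1.0, 9.81, 1.0
V = lambda phi: -m * g * rho * math.sin(phi)

degrees = range(360)
k_min = min(degrees, key=lambda k: V(math.radians(k)))
k_max = max(degrees, key=lambda k: V(math.radians(k)))
print(k_min, k_max)  # 90 270
```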
{ "domain": "physics.stackexchange", "id": 77416, "tags": "homework-and-exercises, classical-mechanics, lagrangian-formalism, potential-energy, spring" }
Can the FFT tell us the existence of same frequencies with different phases?
Question: So I know that applying the FFT to a time-domain signal shows which frequency components exist and what amplitude each frequency component has. My question is: suppose the signal contains two components with the same frequency and amplitude, but different phases. Can we tell this from the results of the FFT? Answer: I am thinking of two numbers. They add to the number 15. Tell me what the two numbers are.
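The point can be made concrete with NumPy (an illustrative sketch, not from the original answer): two tones at the same bin frequency but different phases collapse into a single complex FFT coefficient, so only their sum is recoverable.

```python
import numpy as np

n = 256
t = np.arange(n)
f = 10  # an integer number of cycles per window

x1 = np.cos(2 * np.pi * f * t / n)              # phase 0
x2 = np.cos(2 * np.pi * f * t / n + np.pi / 3)  # phase pi/3
X = np.fft.fft(x1 + x2)

# Only bin f (and its conjugate bin n - f) is non-negligible: the two
# components have merged into ONE amplitude and ONE phase (here pi/6),
# just like "two numbers that add to 15".
peaks = np.flatnonzero(np.abs(X) > 1e-6)
print(peaks)  # bins 10 and 246
```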
{ "domain": "dsp.stackexchange", "id": 10367, "tags": "fourier-transform, phase, fourier, amplitude, array-signal-processing" }
Is discrete math enough for computer science? Or are there other math topics that I should also learn with it?
Question: I want to learn computer science, so is discrete math enough for computer science? Or are there other math topics that I should also learn with it? I don't have a specific topic that I care more about than others (like cryptography), just CS. Answer: Discrete mathematics, linear algebra, calculus, and probability are all used pretty much everywhere in computer science. Basically, discrete maths is the basis of everything, while linear algebra and calculus are used in specific areas of computer science, and there are various places you will see probability. Probability is usually used to describe randomness in your algorithms, and therefore is used quite a lot. Basic knowledge of probability and the expected value of random variables is usually enough. Calculus is used mainly in optimization and machine learning problems (but you can still see it elsewhere). Linear algebra, and specifically matrices from it, are especially useful. You might not need the entire course for every application of it in computer science, but it will be very useful in machine learning. I know there are other mathematical topics that can be useful for computer science, like group theory, but I don't have enough experience with them to tell you what they are used for. Also, there is no need to learn everything at once! You will see some of the math topics appear only in more advanced courses.
{ "domain": "cs.stackexchange", "id": 17863, "tags": "combinatorics, discrete-mathematics, mathematical-foundations" }
colcon build doesn't generate executables
Question: I am trying to get started with ROS2 Humble and when running colcon build no executable files are created. My project structure looks like this:

examples
├── CMakeLists.txt
├── package.xml
└── src
    ├── CMakeLists.txt
    └── helloWorld.cpp

File contents:

examples/CMakeLists.txt:

cmake_minimum_required(VERSION 3.25)
project(examples LANGUAGES CXX)
set(CMAKE_CXX_STANDARD 20)
set(CMAKE_CXX_STANDARD_REQUIRED ON)
set(CMAKE_CXX_EXTENSIONS OFF)
find_package(ament_cmake REQUIRED)
add_subdirectory(src)
ament_package()

examples/src/CMakeLists.txt:

add_executable(helloWorld helloWorld.cpp)

examples/src/helloWorld.cpp:

#include <iostream>
using std::cout;
using std::endl;

int main() {
    cout << "Hello World" << endl;
}

Running colcon build in my workspace root succeeds but no executable helloWorld is generated. However, when I edit examples/CMakeLists.txt like this, it DOES work:

cmake_minimum_required(VERSION 3.25)
project(examples LANGUAGES CXX)
set(CMAKE_CXX_STANDARD 20)
set(CMAKE_CXX_STANDARD_REQUIRED ON)
set(CMAKE_CXX_EXTENSIONS OFF)
find_package(ament_cmake REQUIRED)
#add_subdirectory(src)
add_executable(helloWorld "src/helloWorld.cpp") #add executable directly instead
ament_package()

This successfully generates the executable and works with ros2 run. It looks like add_subdirectory doesn't work correctly... probably has something to do with ament... does anyone know how to do this correctly? I usually structure my CMake projects with add_subdirectory and want this to work with ROS as well.

Originally posted by user334478 on ROS Answers with karma: 3 on 2023-02-22
Post score: 0

Answer: You need to install the required targets.
For more about this see this link: https://docs.ros.org/en/foxy/Tutorials/Beginner-Client-Libraries/Writing-A-Simple-Cpp-Publisher-And-Subscriber.html#cmakelists-txt

add_executable(helloWorld src/helloWorld.cpp)
ament_target_dependencies(helloWorld)
install(TARGETS helloWorld DESTINATION lib/${PROJECT_NAME})

should do the trick for your simple example.

Originally posted by GeorgNo with karma: 183 on 2023-03-23
This answer was ACCEPTED on the original site
Post score: 1
{ "domain": "robotics.stackexchange", "id": 38287, "tags": "ros, ros2, ament, cmake" }
Scaling of the shear rate with pipe diameter
Question: In a laminar flow regime the flow profile in a pipe should be given by: $$ v(r) = \frac{\Delta p}{4 \eta l} (R^2-r^2)$$ If we assume that the radial shear rate is a meaningful indicator for the deformation of a macromolecule swimming in the pipe we get: $$ \dot{\gamma}(r) = \frac{\partial v}{\partial r} = -\frac{\Delta p \cdot 2r}{4 \eta l} $$ Evaluating this expression between $0$ and $R$ makes it seem like the shear rate increases with pipe diameter, which is counter-intuitive to me. More specifically, if we define something like an average shear rate: $$ \langle \dot{\gamma} \rangle = \int_{r=0}^{R} \int_{\phi=0}^{2 \pi} P(r,\phi) |\dot{\gamma}(r)| r dr d\phi $$ With a uniform distribution of the macromolecule $\frac{1}{R^2 \pi}$ we would get: $$ \langle \dot{\gamma} \rangle = \frac{\Delta p R}{3 \eta l}$$ which scales linearly with the pipe diameter. Now I know that the distribution of a particle in a pipe $P(r,\phi)$ is not uniform in the presence of flow (I think this is called inertial migration). However, when we did some simulations the distribution was similar when doubling the diameter $R \rightarrow 2R$, and we did indeed observe a doubling of the shear rate computed as an average like stated above, with the radial distribution $P(r)$ looking "a bit like gaussian" distributions depending on the pressure difference. It is completely counterintuitive to me that something would be affected by a higher shear rate when increasing the pipe diameter. Where did we go wrong? Context: This question has bugged me since my time in university. We were doing some investigations on the possibility of unfolding large globular polymers/proteins under shear, which seemed like a controversial topic. Answer: You are assuming that the pressure drop is constant, which it is not (at constant volumetric flow rate). For a given volumetric flow rate Q, the shear rate at the wall is $$\gamma_w=\frac{32Q}{\pi D^3}$$and you can see that this decreases with increasing diameter.
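The answer's scaling is easy to check numerically (a sketch; the values of Q and D are arbitrary):

```python
import math

def wall_shear_rate(Q, D):
    """Laminar wall shear rate at fixed volumetric flow rate: 32 Q / (pi D^3)."""
    return 32.0 * Q / (math.pi * D ** 3)

# At constant Q, doubling the diameter reduces the wall shear rate 8-fold,
# so shear *decreases* with pipe size -- matching the usual intuition.
ratio = wall_shear_rate(1.0, 1.0) / wall_shear_rate(1.0, 2.0)
print(ratio)  # 8.0
```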
{ "domain": "physics.stackexchange", "id": 74836, "tags": "fluid-dynamics, molecular-dynamics, soft-matter" }
Euler #4: Refined Palindrome in Haskell
Question: This is my attempt at Project Euler #4 in Haskell ("Find the largest palindrome made from the product of two 3-digit numbers.")

import Data.List

isPalindrome :: Show a => a -> Bool
isPalindrome n = l == reverse l
  where l = show n

maxPalindrome :: (Integral a, Show a) => a
maxPalindrome = maximum $ head . transpose $ allPalindrome <$> [999, 998 .. 1]
  where allPalindrome x = filter (isPalindrome) $ (x *) <$> [999, 998 .. x]

To my surprise I didn't see any such optimisation in the snippets I found (the head . transpose is there to only consider the highest of each pair). However, this still runs in about 0.5 seconds, which I find slow. Is there a way to make it run faster? I am aware of Project Euler #4 in Haskell; however, my question is not about the algorithm I use but about its implementation. Do you have any other recommendations about my code? Thank you very much in advance.

Answer: Your code is fine, but I would suggest some small changes. Instead of head . transpose, I would use concatMap (take 1). This captures your intent to take the first (and therefore largest) number from each allPalindrome. Next, I would use Int instead of (Integral a, Show a), since 999 * 999 is smaller than maxBound :: Int. Why? Because by default, Integer will be used for Integral types if they were not specified. Therefore, you end up with maxPalindrome handled as an Integer, which is slower than Int. And last, but not least, I would stop at 111, since 111 * 111 is a palindrome. We end up with:

isPalindrome :: Show a => a -> Bool
isPalindrome n = l == reverse l
  where l = show n

maxPalindrome :: Int
maxPalindrome = maximum $ concatMap (take 1) $ allPalindrome <$> [999, 998 .. 111]
  where allPalindrome x = filter isPalindrome $ (x *) <$> [999, 998 .. x]

main :: IO ()
main = print maxPalindrome

Note that you should compile your code if you want to check its performance. Alternatively, if you want to keep maxPalindrome's type, use :: Int at the call site:

maxPalindrome :: (Integral n, Show n) => n
maxPalindrome = maximum $ concatMap (take 1) $ allPalindrome <$> [999, 998 .. 111]
  where allPalindrome x = filter isPalindrome $ (x *) <$> [999, 998 .. x]

main :: IO ()
main = print (maxPalindrome :: Int)
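For cross-checking the result, here is an illustrative Python transcription of the same idea (descending search with an early cut-off; this is a sketch, not the original code):

```python
def max_palindrome_product():
    """Largest palindrome made from the product of two 3-digit numbers."""
    best = 0
    for x in range(999, 110, -1):        # stop near 111, as the answer suggests
        for y in range(999, x - 1, -1):  # y >= x avoids duplicate pairs
            p = x * y
            if p <= best:
                break                    # products only shrink as y decreases
            if str(p) == str(p)[::-1]:
                best = p
    return best

print(max_palindrome_product())  # 906609 (= 913 * 993)
```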
{ "domain": "codereview.stackexchange", "id": 25386, "tags": "programming-challenge, haskell, functional-programming" }
Why is $\rm SU(2)\times SU(2)$ spontaneously broken?
Question: Following the approach of Weinberg's book to discuss the chiral symmetry, at a certain point he says

If the $\rm SU(2)\times SU(2)$ symmetry is exact and unbroken, then this would require any one-hadron state $|h\rangle$ to be degenerate with another state $\vec{X}|h\rangle$ of opposite parity and equal spin, baryon number and strangeness. No such parity doubling is seen in the hadron spectrum, so we are forced to conclude that if the chiral symmetry is a good approximation at all, then it must be spontaneously broken to its isotopic spin $\rm SU(2)$ subgroup.

where $\{\vec{X}\}_i$ is the charge associated to the conserved axial current $\vec{A}^\mu = i\bar{q}\gamma^\mu\gamma_5\vec{t}q$. Can anyone explain better why the symmetry is broken, and why it is broken to the isotopic spin subgroup and not to another group? Or maybe someone can suggest some notes. Answer: In simpler terms, if you take the $SU(2)_L \times SU(2)_R$ symmetry, then $X$ is the charge associated with the $SU(2)_R$ part. So acting on a hadron state $|h\rangle$ (with only chiral left symmetry) with $X$ will lead to the creation of a new state $X|h\rangle$ which will have similar features for both the right-handed interaction as well as for the left-handed interaction. Now since this state is related by a symmetry, it has to be degenerate, so accordingly its mass spectrum will be the same as that of $|h\rangle$, which is not true experimentally. Therefore you need to give it a higher mass, and as a result you would need to break the symmetry spontaneously from $SU(2)_L \times SU(2)_R$ to $SU(2)_L$.
{ "domain": "physics.stackexchange", "id": 29035, "tags": "symmetry-breaking" }
Why does the branch cut of the self-energy begin at $2m$?
Question: Consider a scalar field $\phi(x)$, and let its two-point function be $$ \frac{1}{p^2-m^2-\Pi(p^2)}=\int_0^\infty\mathrm d\mu^2\ \rho(\mu^2)\ \frac{1}{p^2-\mu^2} $$ We usually have $\Pi(m^2)=0$, which implies that there is a one-particle state with squared mass $m^2$. In terms of the spectral density, this reads $$ \rho(\mu^2)=\delta(\mu^2-m^2)+\sigma(\mu^2) $$ where the support of $\sigma(\mu^2)$ is disconnected from $m^2$. On the other hand, the continuum contribution usually begins at $(2m)^2$, meaning that $\Pi(p^2)$ has a branch cut from $(2m)^2$ to $\infty$. Again, in terms of the spectral function, this is written as $\text{supp}(\sigma)=[(2m)^2,\infty)$. My question: I would expect the first bound state to lie close to $(2m)^2$, but slightly below, something around $(2m)^2-\mathcal O(\alpha^n)$, where $\alpha$ is the coupling constant and $n\in\mathbb N$. For example, in a simplified QED model, the bound state of an electron and a positron has binding energy $\sim 7\mathrm{eV}\sim m\alpha^2$, and so I would expect the branch point to be somewhere around $(2m)^2-m^2\alpha^4$. I know the value of the self-energy to one loop, and the branch point is indeed exactly at $(2m)^2$. I don't know where to find higher loop corrections, so I cannot conclude whether 1) at higher loops, the branch point moves a bit closer to $m^2$, or 2) it stays at $(2m)^2$. I would expect that the correct behaviour is 1), but this contradicts the fact that all books I've read always depict the threshold of pair production as a hyperboloid with its bottom sitting exactly at $(2m)^2$. On the other hand, if it turns out that the correct option is 2), I am led to ask why perturbation theory is unable to correctly predict the position of the branch point (I know this is a subtle issue because bound states are, to some extent, non-perturbative entities; but AFAIK, perturbative calculations do contain a lot of information about bound states).
Answer: First, remember that the spectral density is written for states with zero total momentum. That branch cut comes not from bound states but simply from unbound two-particle states that form a continuous spectrum (you can always take particles with arbitrary opposite momenta, which will give you arbitrary total energy). In contrast, two-particle bound states usually form a discrete spectrum. That means that they will contribute as poles below $(2m)^2$.
{ "domain": "physics.stackexchange", "id": 35020, "tags": "quantum-mechanics, quantum-field-theory, energy, perturbation-theory, propagator" }
Numpy ndarray holding string with unknown data type
Question: I ran the following code in a jupyter notebook cell

ndarrs = np.array(["1.2", "1.5", "1.6"], dtype=np.string_)
print(ndarrs.dtype)

It returned |S3 as shown below. Can someone help me understand the meaning of this symbol?

Answer: In Python 3, you shouldn't really specify the np.string_ dtype, as it is left there for backwards compatibility with Python 2. The S type you see using np.dtype is a map to the bytes_ type, a zero-terminated string buffer, which shouldn't be used. The S just means string and the number gives the number of bytes.

In [1]: s = ""  # start with an empty string

In [2]: for i in range(5):  # make the string larger
            s += str(i)
            a = np.array([s], dtype=np.string_)
            print(f"{a}\t{a.dtype}")
[b'0']      |S1
[b'01']     |S2
[b'012']    |S3
[b'0123']   |S4
[b'01234']  |S5

For Python 3 you should instead use np.unicode:

In [1]: a = np.array(["hello", "world"], dtype=np.unicode)

In [2]: type(a)
Out[2]: numpy.ndarray

In [3]: a.dtype
Out[3]: dtype('<U5')

< means little-endian
U means a unicode string
5 relates to the number of characters the dtype can hold per element. If you had really long strings, that number would then increase.

Have a look at this documentation, which states:

Note on string types: For backward compatibility with Python 2 the S and a typestrings remain zero-terminated bytes and np.string_ continues to map to np.bytes_. To use actual strings in Python 3 use U or np.unicode_.
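A small illustration of both dtype strings that avoids the legacy np.string_/np.unicode aliases (a sketch, not from the original answer):

```python
import numpy as np

# Byte strings: '|S<n>' means zero-terminated bytes, at most n bytes per item.
b = np.array([b"1.2", b"1.5", b"1.6"])
print(b.dtype)  # |S3

# Unicode strings: '<U<n>' means little-endian unicode, at most n characters
# per item (stored as 4 bytes per character).
u = np.array(["hello", "world"])
print(u.dtype)  # <U5
```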
{ "domain": "datascience.stackexchange", "id": 7309, "tags": "python, numpy" }
Is the Higgs 3/4 detected already?
Question: Can someone provide an expanded explanation of the statement that the Higgs field is already 3/4 detected? Link to ref (@nic, sorry I left it off, do a quick search on Higgs to find the right location on the page) Answer: The link you provided is a page by G. 't Hooft on an official site and is quite interesting reading. The statement you are wondering about is:

consider the scientific facts concerning the Standard Model. Fact is that the W+, W‾ and the Z boson each carry three spin degrees of freedom, whereas the Yang-Mills field quanta, which describe their interactions correctly in great detail, each carry only two. Those remaining modes come from the Higgs field. What this means is that three quarters of the field of the Higgs have already been found. The fourth is still missing, and if you calculate its properties, it is also clear why it is missing: it is hiding in the form of a particle that is difficult to detect.

He is stating that the Higgs' existence is based on the very strong verification of the Standard Model. There is strong experimental verification of the existence of the W+, W- and Z. The construction of the theory is such that, since the gauge bosons exist, one has 3/4 of the data needed to confirm the model, of which the existence of the Higgs is the remaining 1/4, counting up dimensions, of the necessary evidence/prediction. He is equating the existence of the Higgs to the validity of the SM in this statement.
{ "domain": "physics.stackexchange", "id": 1076, "tags": "standard-model, higgs" }
Class to DataTable or Datatable to class mapper
Question: My code goes from a class to DataTable, and back again. It populates the class's public properties, or creates DataColumns whose names and types match that of the the class's public properties. It also has methods that allow the developer to go from a query to a populated class, or from a class to an SQL script or SQL query to C# class code files. I still plan on adding more feature here and there, but overall I'm pretty satisfied with its functionality. I am submitting my code for review here, and I'm trying to polish it now for release. I welcome any and all criticism and comments, questions, etc. One area in particular that I have a question about is organization. Right now, I have the functions broke up into static classes that group them by their functionality. This Is the best I have thought of so far, and the naming is pretty intuitive. The current classes/groups are Map, Query, Script, Code, Convert and Helper. The name of this product (so far) is EntityJustWorks. I have kept all these classes under the namespace EntityJustWorks.SQL, because these are all (more or less) SQL specific and I think that I may want to add another library that deals with a different repository. Does this seem like a sound naming convention? Would this do good to hide behind an 'live record'? Again I welcome all and any criticisms/comments. If you would like to go straight to the code download, you can access my GitHub. I also keep a copy of my code, as well as comments and explanations of certain important sections on my C# programming blog. namespace EntityJustWorks.SQL { public static class Convert { /// <summary> /// Generates a C# class code file from a Database given an SQL connection string and table name. 
/// </summary> public static string SQLToCSharp(string ConnectionString, string TableName) { DataTable table = Query.QueryToDataTable(ConnectionString, "SELECT TOP 1 * FROM [{0}]", TableName); return Code.DatatableToCSharp(table); } /// <summary> /// Creates an SQL table from a class object. /// </summary> public static bool ClassToSQL<T>(string ConnectionString, params T[] ClassCollection) where T : class { string createTableScript = Script.CreateTable<T>(ClassCollection); return (Query.ExecuteNonQuery(ConnectionString, createTableScript) == -1); } } /// <summary> /// DataTable/Class Mapping Class /// </summary> public static class Map { /// <summary> /// Fills properties of a class from a row of a DataTable where the name of the property matches the column name from that DataTable. /// It does this for each row in the DataTable, returning a List of classes. /// </summary> /// <typeparam name="T">The class type that is to be returned.</typeparam> /// <param name="Table">DataTable to fill from.</param> /// <returns>A list of ClassType with its properties set to the data from the matching columns from the DataTable.</returns> public static IList<T> DatatableToClass<T>(DataTable Table) where T : class, new() { if (!Helper.IsValidDatatable(Table)) return new List<T>(); Type classType = typeof(T); IList<PropertyInfo> propertyList = classType.GetProperties(); // Parameter class has no public properties. 
if (propertyList.Count == 0) return new List<T>(); List<string> columnNames = Table.Columns.Cast<DataColumn>().Select(column => column.ColumnName).ToList(); List<T> result = new List<T>(); try { foreach (DataRow row in Table.Rows) { T classObject = new T(); foreach (PropertyInfo property in propertyList) { if (property != null && property.CanWrite) // Make sure property isn't read only { if (columnNames.Contains(property.Name)) // If property is a column name { if (row[property.Name] != System.DBNull.Value) // Don't copy over DBNull { object propertyValue = System.Convert.ChangeType( row[property.Name], property.PropertyType ); property.SetValue(classObject, propertyValue, null); } } } } result.Add(classObject); } return result; } catch { return new List<T>(); } } /// <summary> /// Creates a DataTable from a class type's public properties and adds a new DataRow to the table for each class passed as a parameter. /// The DataColumns of the table will match the name and type of the public properties. /// </summary> /// <param name="ClassCollection">A class or array of class to fill the DataTable with.</param> /// <returns>A DataTable who's DataColumns match the name and type of each class T's public properties.</returns> public static DataTable ClassToDatatable<T>(params T[] ClassCollection) where T : class { DataTable result = ClassToDatatable<T>(); if (Helper.IsValidDatatable(result, IgnoreRows: true)) return new DataTable(); if (Helper.IsCollectionEmpty(ClassCollection)) return result; // Returns and empty DataTable with columns defined (table schema) foreach (T classObject in ClassCollection) { ClassToDataRow(ref result, classObject); } return result; } /// <summary> /// Creates a DataTable from a class type's public properties. The DataColumns of the table will match the name and type of the public properties. 
/// </summary> /// <typeparam name="T">The type of the class to create a DataTable from.</typeparam> /// <returns>A DataTable who's DataColumns match the name and type of each class T's public properties.</returns> public static DataTable ClassToDatatable<T>() where T : class { Type classType = typeof(T); DataTable result = new DataTable(classType.UnderlyingSystemType.Name); foreach (PropertyInfo property in classType.GetProperties()) { DataColumn column = new DataColumn(); column.ColumnName = property.Name; column.DataType = property.PropertyType; if (Helper.IsNullableType(column.DataType) && column.DataType.IsGenericType) { // If Nullable<>, this is how we get the underlying Type... column.DataType = column.DataType.GenericTypeArguments.FirstOrDefault(); } else { // True by default, so set it false column.AllowDBNull = false; } // Add column result.Columns.Add(column); } return result; } /// <summary> /// Adds a DataRow to a DataTable from the public properties of a class. /// </summary> /// <param name="Table">A reference to the DataTable to insert the DataRow into.</param> /// <param name="ClassObject">The class containing the data to fill the DataRow from.</param> private static void ClassToDataRow<T>(ref DataTable Table, T ClassObject) where T : class { DataRow row = Table.NewRow(); foreach (PropertyInfo property in typeof(T).GetProperties()) { if (Table.Columns.Contains(property.Name)) { if (Table.Columns[property.Name] != null) { row[property.Name] = property.GetValue(ClassObject, null); } } } Table.Rows.Add(row); } } /// <summary> /// SQL Query Helper Class /// </summary> public static class Query { /// <summary> /// Runs a SQL query and returns the results as a List of the specified class /// </summary> /// <typeparam name="T">The type the result will be returned as.</typeparam> /// <param name="ConnectionString">The SQL connection string.</param> /// <param name="FormatString_Query">A SQL command that will be passed to string.Format().</param> /// <param 
name="FormatString_Parameters">The parameters for string.Format().</param> /// <returns>A List of classes that represent the records returned.</returns> public static IList<T> QueryToClass<T>(string ConnectionString, string FormatString_Query, params object[] FormatString_Parameters) where T : class, new() { IList<T> result = new List<T>(); DataTable tableQueryResult = QueryToDataTable(ConnectionString, string.Format(FormatString_Query, FormatString_Parameters)); if (Helper.IsValidDatatable(tableQueryResult)) { result = Map.DatatableToClass<T>(tableQueryResult); } return result; } /// <summary> /// Executes an SQL query and returns the results as a DataTable. /// </summary> /// <param name="ConnectionString">The SQL connection string.</param> /// <param name="FormatString_Query">A SQL command that will be passed to string.Format().</param> /// <param name="FormatString_Parameters">The parameters for string.Format().</param> /// <returns>The results of the query as a DataTable.</returns> public static DataTable QueryToDataTable(string ConnectionString, string FormatString_Query, params object[] FormatString_Parameters) { try { DataTable result = new DataTable(); using (SqlConnection sqlConnection = new SqlConnection(ConnectionString)) { sqlConnection.Open(); using (SqlCommand sqlCommand = sqlConnection.CreateCommand()) { sqlCommand.CommandText = string.Format(FormatString_Query, FormatString_Parameters); sqlCommand.CommandType = CommandType.Text; SqlDataAdapter sqlAdapter = new SqlDataAdapter(sqlCommand); sqlAdapter.Fill(result); } } return result; } catch { return new DataTable(); } } /// <summary> /// Executes a query, and returns the first column of the first row in the result set returned by the query. 
/// </summary> /// <typeparam name="T">The type the result will be returned as.</typeparam> /// <param name="ConnectionString">The SQL connection string.</param> /// <param name="FormatString_Query">The SQL query as string.Format string.</param> /// <param name="FormatString_Parameters">The string.Format parameters.</param> /// <returns>The first column of the first row in the result, converted and cast to type T.</returns> public static T QueryToScalarType<T>(string ConnectionString, string FormatString_Query, params object[] FormatString_Parameters) { try { object result = new object(); using (SqlConnection sqlConnection = new SqlConnection(ConnectionString)) { sqlConnection.Open(); using (SqlCommand sqlCommand = sqlConnection.CreateCommand()) { sqlCommand.CommandText = string.Format(FormatString_Query, FormatString_Parameters); sqlCommand.CommandType = CommandType.Text; result = System.Convert.ChangeType(sqlCommand.ExecuteScalar(), typeof(T)); } } return (T)result; } catch { return default(T); } } /// <summary> /// Executes a non-query SQL command, such as INSERT or DELETE /// </summary> /// <param name="ConnectionString">The connection string.</param> /// <param name="FormatString_Command">The SQL command, as a format string.</param> /// <param name="FormatString_Parameters">The parameters for the format string.</param> /// <returns>The number of rows affected, or -1 on errors.</returns> public static int ExecuteNonQuery(string ConnectionString, string FormatString_Command, params object[] FormatString_Parameters) { try { int rowsAffected = 0; using (SqlConnection sqlConnection = new SqlConnection(ConnectionString)) { sqlConnection.Open(); using (SqlCommand sqlCommand = sqlConnection.CreateCommand()) { string commandText = string.Format(FormatString_Command, FormatString_Parameters); sqlCommand.CommandText = commandText; sqlCommand.CommandType = CommandType.Text; rowsAffected = sqlCommand.ExecuteNonQuery(); } } return rowsAffected; } catch { return -1; }
} } /// <summary> /// SQL Script Generation Class /// </summary> public static class Script { /// <summary> /// Creates a SQL script that inserts the values of the specified classes' public properties into a table. /// </summary> public static string InsertInto<T>(params T[] ClassObjects) where T : class { DataTable table = Map.ClassToDatatable<T>(ClassObjects); return InsertInto(table); // We don't need to check IsValidDatatable() because InsertInto does } /// <summary> /// Creates a SQL script that inserts the cell values of a DataTable's DataRows into a table. /// </summary> public static string InsertInto(DataTable Table) { if (!Helper.IsValidDatatable(Table)) return string.Empty; StringBuilder result = new StringBuilder(); foreach (DataRow row in Table.Rows) { if (row == null || row.Table.Columns.Count < 1 || row.ItemArray.Length < 1) return string.Empty; string columns = Helper.RowToColumnString(row); string values = Helper.RowToValueString(row); if (string.IsNullOrWhiteSpace(columns) || string.IsNullOrWhiteSpace(values)) return string.Empty; result.AppendFormat("INSERT INTO [{0}] {1} VALUES {2}", Table.TableName, columns, values); } return result.ToString(); } /// <summary> /// Creates a SQL script that creates a table where the column names match the specified class's public properties. /// </summary> public static string CreateTable<T>(params T[] ClassObjects) where T : class { DataTable table = Map.ClassToDatatable<T>(ClassObjects); return Script.CreateTable(table); } /// <summary> /// Creates a SQL script that creates a table where the columns match those of the specified DataTable.
/// </summary> public static string CreateTable(DataTable Table) { if (!Helper.IsValidDatatable(Table, IgnoreRows: true)) return string.Empty; StringBuilder result = new StringBuilder(); result.AppendFormat("CREATE TABLE [{0}] ({1}", Table.TableName, Environment.NewLine); bool FirstTime = true; foreach (DataColumn column in Table.Columns.OfType<DataColumn>()) { if (FirstTime) FirstTime = false; else result.Append(","); result.AppendFormat("[{0}] {1} {2}NULL{3}", column.ColumnName, GetDataTypeString(column.DataType), column.AllowDBNull ? "" : "NOT ", Environment.NewLine ); } result.AppendFormat(") ON [PRIMARY]{0}GO", Environment.NewLine); return result.ToString(); } /// <summary> /// Returns the SQL data type equivalent, as a string for use in SQL script generation methods. /// </summary> private static string GetDataTypeString(Type DataType) { switch (DataType.Name) { case "Boolean": return "[bit]"; case "Char": return "[char]"; case "SByte": return "[tinyint]"; case "Int16": return "[smallint]"; case "Int32": return "[int]"; case "Int64": return "[bigint]"; case "Byte": return "[tinyint] UNSIGNED"; case "UInt16": return "[smallint] UNSIGNED"; case "UInt32": return "[int] UNSIGNED"; case "UInt64": return "[bigint] UNSIGNED"; case "Single": return "[float]"; case "Double": return "[double]"; case "Decimal": return "[decimal]"; case "DateTime": return "[datetime]"; case "Guid": return "[uniqueidentifier]"; case "Object": return "[variant]"; case "String": return "[nvarchar](250)"; default: return "[nvarchar](MAX)"; } } } /// <summary> /// Helper Functions. Conversion, Validation /// </summary> public static class Helper { /// <summary> /// Indicates whether a specified DataTable is null, has zero columns, or (optionally) zero rows.
/// </summary> /// <param name="Table">DataTable to check.</param> /// <param name="IgnoreRows">When set to true, the function will return true even if the table's row count is equal to zero.</param> /// <returns>False if the specified DataTable null, has zero columns, or zero rows, otherwise true.</returns> public static bool IsValidDatatable(DataTable Table, bool IgnoreRows = false) { if (Table == null) return false; if (Table.Columns.Count == 0) return false; if (!IgnoreRows && Table.Rows.Count == 0) return false; return true; } /// <summary> /// Indicates whether a specified Enumerable collection is null or an empty collection. /// </summary> /// <typeparam name="T">The specified type contained in the collection.</typeparam> /// <param name="Input">An Enumerator to the collection to check.</param> /// <returns>True if the specified Enumerable collection is null or empty, otherwise false.</returns> public static bool IsCollectionEmpty<T>(IEnumerable<T> Input) { return (Input == null || Input.Count() < 1) ? true : false; } /// <summary> /// Indicates whether a specified Type can be assigned null. /// </summary> /// <param name="Input">The Type to check for nullable property.</param> /// <returns>True if the specified Type can be assigned null, otherwise false.</returns> public static bool IsNullableType(Type Input) { if (!Input.IsValueType) return true; // Reference Type if (Nullable.GetUnderlyingType(Input) != null) return true; // Nullable<T> return false; // Value Type } /// <summary> /// Returns all the column names of the specified DataRow in a string delimited like and SQL INSERT INTO statement. 
/// Example: ([FullName], [Gender], [BirthDate]) /// </summary> /// <returns>A string formatted like the columns specified in an SQL 'INSERT INTO' statement.</returns> public static string RowToColumnString(DataRow Row) { IEnumerable<string> Collection = GetDatatableColumns(Row.Table).Select(c => c.ColumnName); return ListToDelimitedString(Collection, "([", "], [", "])"); } /// <summary> /// Returns all the values of the specified DataRow as a string delimited like an SQL INSERT INTO statement. /// Example: ('John Doe', 'M', '10/3/1981') /// </summary> /// <returns>A string formatted like the values specified in an SQL 'INSERT INTO' statement.</returns> public static string RowToValueString(DataRow Row) { IEnumerable<string> Collection = Row.ItemArray.Select(item => System.Convert.ToString(item)); return ListToDelimitedString(Collection, "('", "', '", "')"); } /// <summary> /// Enumerates a collection as a delimited collection of strings. /// </summary> /// <typeparam name="T">The Type of the collection.</typeparam> /// <param name="Collection">An Enumerator to a collection to populate the string.</param> /// <param name="Prefix">The string to prefix the result.</param> /// <param name="Delimiter">The string that will appear between each item in the specified collection.</param> /// <param name="Postfix">The string to postfix the result.</param> public static string ListToDelimitedString<T>(IEnumerable<T> Collection, string Prefix, string Delimiter, string Postfix) { if (IsCollectionEmpty<T>(Collection)) return string.Empty; StringBuilder result = new StringBuilder(); foreach (T item in Collection) { if (result.Length != 0) result.Append(Delimiter); // Add comma result.Append(EscapeSingleQuotes(item as String)); } if (result.Length < 1) return string.Empty; result.Insert(0, Prefix); result.Append(Postfix); return result.ToString(); } /// <summary> /// Returns an enumerator, which supports a simple iteration over a collection of all the DataColumns in a specified DataTable.
/// </summary> public static IEnumerable<DataColumn> GetDatatableColumns(DataTable Input) { if (Input == null || Input.Columns.Count < 1) return new List<DataColumn>(); return Input.Columns.OfType<DataColumn>().ToList(); } /// <summary> /// Returns an enumerator, which supports a simple iteration over a collection of all the DataRows in a specified DataTable. /// </summary> public static IEnumerable<DataRow> GetDatatableRows(DataTable Input) { if (!IsValidDatatable(Input)) return new List<DataRow>(); return Input.Rows.OfType<DataRow>().ToList(); } /// <summary> /// Returns a new string in which all occurrences of the single quote character in the current instance are replaced with a back-tick character. /// </summary> public static string EscapeSingleQuotes(string Input) { return Input.Replace('\'', '`'); // Replace with back-tick } } /// <summary> /// C# Code Generation Class /// </summary> public static class Code { /// <summary> /// Generates a C# class code file from a DataTable. /// </summary> public static string DatatableToCSharp(DataTable Table) { string className = Table.TableName; if (string.IsNullOrWhiteSpace(className)) { return "// Class cannot be created: DataTable.TableName must have a value to use as the name of the class"; } // Create the class CodeTypeDeclaration classDeclaration = CreateClass(className); // Add public properties foreach (DataColumn column in Table.Columns) { classDeclaration.Members.Add(CreateProperty(column.ColumnName, column.DataType)); } // Add Class to Namespace string namespaceName = new StackFrame(2).GetMethod().DeclaringType.Namespace;// "EntityJustWorks.AutoGeneratedClassObject"; CodeNamespace codeNamespace = new CodeNamespace(namespaceName); codeNamespace.Types.Add(classDeclaration); // Generate code string filename = string.Format("{0}.{1}.cs",namespaceName,className); CreateCodeFile(filename, codeNamespace); // Return filename return filename; } #region Private Members private static CodeTypeDeclaration 
CreateClass(string name) { CodeTypeDeclaration result = new CodeTypeDeclaration(name); result.Attributes = MemberAttributes.Public; result.Members.Add(CreateConstructor(name)); // Add class constructor return result; } private static CodeConstructor CreateConstructor(string className) { CodeConstructor result = new CodeConstructor(); result.Attributes = MemberAttributes.Public; result.Name = className; return result; } private static CodeMemberField CreateProperty(string name, Type type) { // This is a little hack. Since you can't create auto properties in CodeDOM, // we make the getter and setter part of the member name. // This leaves behind a trailing semicolon that we comment out. // Later, we remove the commented out semicolons. string memberName = name + "\t{ get; set; }//"; CodeMemberField result = new CodeMemberField(type,memberName); result.Attributes = MemberAttributes.Public | MemberAttributes.Final; return result; } private static void CreateCodeFile(string filename, CodeNamespace codeNamespace) { // CodeGeneratorOptions so the output is clean and easy to read CodeGeneratorOptions codeOptions = new CodeGeneratorOptions(); codeOptions.BlankLinesBetweenMembers = false; codeOptions.VerbatimOrder = true; codeOptions.BracingStyle = "C"; codeOptions.IndentString = "\t"; // Create the code file using (TextWriter textWriter = new StreamWriter(filename)) { CSharpCodeProvider codeProvider = new CSharpCodeProvider(); codeProvider.GenerateCodeFromNamespace(codeNamespace, textWriter, codeOptions); } // Correct our little auto-property 'hack' File.WriteAllText(filename, File.ReadAllText(filename).Replace("//;", "")); } #endregion } } Answer: Intro This is really a nice-to-have question. The class seems to be well structured and well commented. But it is a lot of code to review, so let us start. General Based on the naming guidelines, input parameters should be named using camelCase casing. You should also use braces {} for single-statement if blocks.
This will make your code less error-prone. If you don't want to use them, you should at least be consistent with your style; in your code you are using them sometimes, but most of the time you aren't. Comments should describe why something is done. What is done should be described by the code itself, by using meaningful names for methods, properties etc. So comments like // Create the class are just noise which should be removed. Convert It would be better to name the methods using the conventions of the .NET System.Convert class, like ToXXX() and/or FromXXX(). SQLToCSharp() returns a string, and by the name of the method one could assume they will get the string representation of a C# class, but this method will instead, on success, return the filename of the generated and written class. To solve this issue you should consider adding a class CSharpCode which is returned. This class should have a static Empty property to reflect the case that the returned object isn't a good one. public class CSharpCode { public string Name { get; private set; } public string NameSpace { get; private set; } public string Content { get; private set; } public CSharpCode(string name, string nameSpace, string content) { Name = name; NameSpace = nameSpace; Content = content; } private static CSharpCode instance = new CSharpCode(); public static CSharpCode Empty { get { return instance; } } private CSharpCode() { } public override bool Equals(object obj) { if (obj == null) return false; if (this.GetType() != obj.GetType()) return false; // safe because of the GetType check CSharpCode other = (CSharpCode)obj; if (!Object.Equals(Name, other.Name)) return false; if (!Object.Equals(NameSpace, other.NameSpace)) return false; if (!Object.Equals(Content, other.Content)) return false; return true; } public override int GetHashCode() { unchecked // Overflow is fine, just wrap { int hash = 17; if (Name != null) { hash = hash * 23 + Name.GetHashCode(); } if (NameSpace != null) { hash = hash * 23 +
NameSpace.GetHashCode(); } if (Content != null) { hash = hash * 23 + Content.GetHashCode(); } return hash; } } public override string ToString() { IList<string> values = new List<string>(); if (!String.IsNullOrWhiteSpace(NameSpace)) { values.Add(NameSpace); } if (!String.IsNullOrWhiteSpace(Name)) { values.Add(Name); } if (values.Count != 0) { return String.Join(".", values); } return base.ToString(); } } Now we can refactor the methods, but I prefer to pass a DataTable over a connectionstring and tablename. So, we will just do both. public static CSharpCode ToCSharpCode(string connectionString, string tableName) { DataTable table = Query.QueryToDataTable(connectionString, "SELECT TOP 1 * FROM [{0}]", tableName); return ToCSharpCode(table); } public static CSharpCode ToCSharpCode(DataTable dataTable) { return Code.DatatableToCSharp(dataTable); } The ClassToSQL() method does not belong to Convert class, because it doesn't convert the classCollection but saves them in a database. It would be better to change it to return a DataTable. This DataTable could then be saved using another method which should live inside the Query class. public static DataTable FromType<T>(params T[] classCollection) where T : class { return Map.ClassToDatatable<T>(classCollection); } Code CreateCodeFile() The creation of the CodeGeneratorOptions should be extracted to a separate method. This improves the readability of the CreateCodeFile() method. private static CodeGeneratorOptions GetDefaultOptions() { CodeGeneratorOptions codeOptions = new CodeGeneratorOptions(); codeOptions.BlankLinesBetweenMembers = false; codeOptions.VerbatimOrder = true; codeOptions.BracingStyle = "C"; codeOptions.IndentString = "\t"; return codeOptions; } if we add a string FromCodeNameSpace() method, we can simplify the CreateCodeFile() method and if we want to, we can just remove it. By using a MemoryStream instead of a FileStream we will speed up the creation of the code. 
private static string FromCodeNamespace(CodeNamespace codeNamespace) { // CodeGeneratorOptions so the output is clean and easy to read CodeGeneratorOptions codeOptions = GetDefaultOptions(); string code = String.Empty; using (MemoryStream memoryStream = new MemoryStream()) using (TextWriter textWriter = new StreamWriter(memoryStream, new UTF8Encoding(false, true))) using (CSharpCodeProvider codeProvider = new CSharpCodeProvider()) { codeProvider.GenerateCodeFromNamespace(codeNamespace, textWriter, codeOptions); code = Encoding.UTF8.GetString(memoryStream.ToArray()); } // Correct our little auto-property 'hack' return code.Replace("//;", ""); } now the CreateCodeFile() method is as simple as private static void CreateCodeFile(string filename, CodeNamespace codeNamespace) { string code = FromCodeNamespace(codeNamespace); File.WriteAllText(filename, code); } Next we will extract the creation of the CodeNamespace to a separate method. private static CodeNamespace ToCodeNameSpace(DataTable table) { CodeTypeDeclaration classDeclaration = CreateClass(table.TableName); foreach (DataColumn column in table.Columns) { classDeclaration.Members.Add(CreateProperty(column.ColumnName, column.DataType)); } string namespaceName = new StackFrame(2).GetMethod().DeclaringType.Namespace; CodeNamespace codeNamespace = new CodeNamespace(namespaceName); codeNamespace.Types.Add(classDeclaration); return codeNamespace; } which simplifies the DatatableToCSharp() method to public static string DatatableToCSharp(DataTable table) { string className = table.TableName; if (string.IsNullOrWhiteSpace(className)) { return "// Class cannot be created: DataTable.TableName must have a value to use as the name of the class"; } CodeNamespace codeNamespace = ToCodeNameSpace(table); // Generate code string filename = string.Format("{0}.{1}.cs", codeNamespace.Name, className); CreateCodeFile(filename, codeNamespace); // Return filename return filename; } but this wasn't the goal. 
We wanted to get a method which returns a CSharpCode object. So let us introduce a CSharpCode FromDataTable(DataTable table) method public static CSharpCode FromDataTable(DataTable table) { if (string.IsNullOrWhiteSpace(table.TableName)) { return CSharpCode.Empty; } string className = table.TableName; CodeNamespace codeNamespace = ToCodeNameSpace(table); string code = FromCodeNamespace(codeNamespace); return new CSharpCode(className, codeNamespace.Name, code); } Now the new CSharpCode Convert.ToCSharpCode() method will be refactored to public static CSharpCode ToCSharpCode(DataTable dataTable) { return Code.FromDataTable(dataTable); } which can be saved using private const string noTableName = "Class cannot be created: DataTable.TableName must have a value to use as the name of the class"; public static string ExportAsCodeFile(DataTable table) { CSharpCode csharpCode = Convert.ToCSharpCode(table); if (csharpCode == CSharpCode.Empty) { throw new ArgumentOutOfRangeException(noTableName); } String fileName = csharpCode.ToString() + ".cs"; System.IO.File.WriteAllText(fileName, csharpCode.Content); return fileName; } Map ClassToDataRow() This method does not need any ref parameter. Please read Jon Skeet's answer on StackOverflow. By inverting the conditions and using continue, as already explained in RubberDuck's answer, we can remove horizontal spacing. We will do this by extracting the checks to a separate method private static bool IsColumnByNameInvalid(DataColumnCollection columns, string propertyName) { return !columns.Contains(propertyName) || columns[propertyName] == null; } We should also check the state of the row before we add it. There is no sense in adding a row where no columns are filled.
private static void ClassToDataRow<T>(DataTable table, T classObject) where T : class { bool rowChanged = false; DataRow row = table.NewRow(); foreach (PropertyInfo property in typeof(T).GetProperties()) { if (IsColumnByNameInvalid(table.Columns, property.Name)) { continue; } rowChanged = true; row[property.Name] = property.GetValue(classObject, null); } if (!rowChanged) { return; } table.Rows.Add(row); }
{ "domain": "codereview.stackexchange", "id": 11660, "tags": "c#, entity-framework, automapper" }
How can I estimate confidence intervals around the forecasted temperatures for future days?
Question: Weather.com will happily give me temperature predictions for the next ten days, but no indication of how reliable those predictions are. So if I want to plan something for a week from today, or four days from today, I don't really know whether there's much information in the forecasts. My personal, unscientific (and certainly unreliable) experience says there's not, but there's plenty of variation in the forecasted temperatures, even between days 9 and 10, so the weathermen are not simply falling back to the long-term averages for the time of year. I'm hoping for a rule of thumb here (so my question is different from this theory-heavy discussion from two years ago). Conceivably something as simple as a fixed interval size for each number of days in the future could work, although I could imagine there being low-hanging fruit in, for example, atmospheric pressure or the proximity of the location being forecasted for to water. Answer: Look at ensemble forecasts. Although they do not exactly give confidence intervals, they do give the same kind of information you would use a confidence interval for. Weather is chaotic. So are the models. If initial conditions change slightly, outcome changes dramatically. Therefore, models are typically run a number of times, for example, ten runs. Where the runs coincide, the forecast is rather certain. Where they diverge, the forecast is not. Below is an example of such for Bucharest, Romania, run on 28 January 2014: As you can see, the 850 hPa temperature agrees fairly well the first couple of days, but by the end, they are “all over the place”. This kind of plot, which may also be on a map, is also referred to as a spaghetti plot. Although they are not difficult to understand, they do tend to focus on “expert” variables and do not directly translate to precipitation amounts or sunshine hours. They are direct model output. Below is an example of an NCEP run for the US and surroundings. 
At 0 hours, all the model runs agree fairly well (fortunately): After 24 hours, the picture still looks fine: At 240 hours, however, it's pretty much spaghetti. This means the forecast that weather.com gives you is pretty much useless: Through the NCEP website, you can also look at animations and maps of the standard deviations. There are various sources for such ensemble forecasts, and not all are free. The line graph above is from the German website Wetterzentrale. ZMAW links to meteogrammes for European cities. The maps are from NOAA ESRL PSD. Weather.gov also links to a number of sources. If you search the web for spaghetti diagram or ensemble forecast, you may find a lot more.
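A rough numerical sketch of how an ensemble turns into something like a confidence interval (toy numbers invented for illustration, not output from any real model): take the forecasts of all members at each lead time and compute an empirical percentile band. The growing spread between members is exactly what the spaghetti plots show visually.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical ensemble: 10 model runs forecasting temperature for lead times 0..9 days.
# The member-to-member spread is made to grow with lead time, mimicking chaotic divergence.
lead_days = np.arange(10)
members = 20.0 + rng.normal(0.0, 0.5 + 0.6 * lead_days, size=(10, lead_days.size))

# Empirical 90% band per lead time: narrow where the runs agree,
# wide where they turn into "spaghetti".
lo, hi = np.percentile(members, [5, 95], axis=0)
width = hi - lo
print(np.round(width, 1))
```

The width of such a band is the practical substitute for a confidence interval; in this toy setup it is several times wider at day 9 than at day 0.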
{ "domain": "physics.stackexchange", "id": 11572, "tags": "geophysics, atmospheric-science, weather, meteorology" }
What's the exact definition of the power spectral density function?
Question: I learned from my signal processing course that PSD is the Fourier transform of the autocorrelation function. $$\mathscr{F}\Big\{\mathrm{E}\big[x(t)x(t+\tau)\big]\Big\}$$ Today I took my statistic course but my professor introduced spectral density function, which is the fourier transform of autocovariance function. $$\mathscr{F}\Big\{ \mathrm{E}\left[\ \left( x(t)-\mathrm{E}\big[x(t)\big] \right)\left( x(t+\tau)-\mathrm{E}\big[x(t+\tau)\big]\right) \ \right] \Big\}$$ What is the exact definition of PSD? If it's the FT of autocorrelation, then what's the name for the FT of the autocovariance function? Answer: The problem is that the terms autocorrelation and autocovariance are sometimes used to mean different things. In statistics, they often use autocorrelation for what would be called autocovariance in the signal processing literature (cf. this post over at stats.SP). The power spectrum is the Fourier transform of what in the signal processing literature is called autocorrelation function: $$S_x(\omega)=\int_{-\infty}^{\infty}R_x(\tau)e^{-j\omega\tau}d\tau$$ with $$R_x(\tau)=E\big\{x(t)x(t+\tau)\big\}$$ where I've assumed that $x(t)$ is a wide-sense stationary (WSS) random process. The Fourier transform of the autocovariance $$C_x(\tau)=E\big\{[x(t)-\mu_x][x(t+\tau)-\mu_x]\big\}=R_x(\tau)-\mu_x^2$$ is sometimes used for processes with a non-zero mean $\mu_x$ to avoid the Dirac delta impulse in the power spectrum at $\omega=0$. The Fourier transform of the autocovariance is called the covariance spectrum of $x(t)$.
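The distinction is easy to see numerically. The following sketch uses a simple periodogram (not the exact expectation, just an estimate) as a stand-in for both spectra, for a white-ish process with an arbitrarily chosen nonzero mean of 3: subtracting the mean only changes the DC bin, which is the discrete analogue of the $\mu_x^2$ impulse at $\omega=0$.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 4096
x = 3.0 + rng.normal(0.0, 1.0, N)  # process with nonzero mean mu_x = 3

# Periodogram of x: plays the role of the power spectrum S_x
# (DFT of the circular autocorrelation estimate)
S = np.abs(np.fft.fft(x)) ** 2 / N

# Periodogram of the centred process: DFT of the autocovariance
# estimate, i.e. the "covariance spectrum"
C = np.abs(np.fft.fft(x - x.mean())) ** 2 / N

# The two agree everywhere except the DC bin, where the mean
# contributes the discrete analogue of the mu_x^2 impulse.
print(S[0], C[0])
```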
{ "domain": "dsp.stackexchange", "id": 12325, "tags": "power-spectral-density, autocorrelation, covariance" }
What is the creature with the lowest neuron count that demonstrates cognition beyond reflexes?
Question: I'm under the impression that nematode worms just perform the same scripted actions over and over again in response to specific stimuli. They have 302 neurons. Chimpanzees display problem-solving capabilities, as do other mammals with smaller brains (cats, for instance, with ~ 1 billion neurons). You might say that bees demonstrate something more than reflexes when they communicate the location of nectar sources to the hive (~ 960,000 neurons). What's the creature with the lowest number of neurons to demonstrate something more than mere scripted actions? I would imagine it's somewhere between a bee and a nematode worm... Answer: Indeed, C. elegans nematodes (which are the ones you are talking about) do not show cognitive responses. AFAIK, Drosophila melanogaster is also able to learn and display some quite complex behaviors, but no cognitive functions. I believe the "simplest" organism known to display what could be called "cognitive" functions is the honeybee (see for example this article). But as to your question: to demonstrate something more than mere scripted actions I would also ask "what is a scripted action?" and to what extent a complex, cognitive response from a monkey or a human isn't also scripted (though through many different interacting pathways)?
{ "domain": "biology.stackexchange", "id": 1873, "tags": "neuroscience, brain, intelligence" }
Is it possible to see a diffraction pattern from a BCC crystal made of 2 randomly-distributed species?
Question: Suppose I have a 50-50 mix of two different kinds of atoms in a crystal. The crystal is BCC and the atoms have different scattering factors $f$. I know that if the crystal is organized in the regular BCC way (say, like CsCl) then I can easily calculate the structure factor. This page shows what it would be for CsCl, for example. I understand this part. But what would happen if the crystal were still BCC but the atom locations were picked randomly? If you performed neutron diffraction on this crystal, would you be able to see any (h,k,l) planes? My thinking is either: a) you see no (h,k,l) planes (because the disorder means that you don't know the atom locations so the calculation for the structure factor is impossible) or b) you see all planes (because there are no interference effects so all planes are equally "visible")? I am trying to wrap my head around crystal diffraction and I'm finding this one tough to think about. This is partially a continuation of this question. Answer: Yes you will see the usual BCC pattern and I think it's fairly easy to see why. If you take some plane then this creates a bright reflection at a certain angle because along a line at that angle all the lattice points radiate in phase, so we get constructive interference. At other angles the lattice points don't radiate in phase so there is destructive interference and no reflection. Now imagine randomly removing some of the lattice points i.e. making the biggest decrease possible in the structure factor. The remaining lattice points still radiate in phase, so there's still a strong reflection at the same angle, but since there's fewer points radiating the intensity of the reflection is decreased. At the same time the destructive interference in other directions is less complete, so the background reflection goes up. But you'd still see a strong reflection at the same angle. 
As you removed more and more of the lattice points the contrast would decrease, and eventually you'd no longer see a pattern. However, even removing half the lattice points wouldn't make much difference, because the in-phase reflection is so much stronger than the background.
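This can be checked with a kinematic structure-factor sum over a small finite crystal (a sketch with made-up scattering lengths, not a real neutron-diffraction calculation): the BCC-allowed reflection survives a random 50-50 occupation essentially at full strength, while the superstructure reflection that CsCl-type ordering would produce washes out into weak diffuse background.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 6  # a small 6x6x6 block of conventional BCC cells (lattice constant a = 1)

corners = np.array([(i, j, k) for i in range(n) for j in range(n) for k in range(n)], float)
centres = corners + 0.5
sites = np.vstack([corners, centres])

fA, fB = 1.0, 3.0  # two hypothetical scattering lengths

# Ordered (CsCl-like): species A on corners, B on body centres
f_ordered = np.concatenate([np.full(len(corners), fA), np.full(len(centres), fB)])
# Disordered: random 50-50 assignment over all BCC sites
f_random = rng.choice([fA, fB], size=len(sites))

def intensity(f, hkl):
    # Kinematic diffraction: I(hkl) = |sum_j f_j exp(2*pi*i (hkl).r_j)|^2
    phase = np.exp(2j * np.pi * sites @ np.array(hkl, float))
    return abs(np.sum(f * phase)) ** 2

# (110) is a fundamental BCC reflection; (100) is BCC-forbidden and only
# appears as a superstructure peak when the two species are ordered.
for hkl in [(1, 1, 0), (1, 0, 0)]:
    print(hkl, intensity(f_ordered, hkl), intensity(f_random, hkl))
```

In the random case the (110) intensity stays large while the (100) intensity is down at the level of statistical fluctuations, which is the contrast-versus-background argument above in numbers.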
{ "domain": "physics.stackexchange", "id": 29055, "tags": "diffraction, crystals" }
Question on the inner product of wavefunctions
Question: When taking the inner product of a wavefunction $\Psi$ with itself, denoting the inner product as $(\Psi,\Psi)$, since $$\Psi(x)=\int \psi(x)\vec{x}dx$$ letting $$\overline\Psi(x')=\int \overline\psi(x')\vec{x'} dx'$$ would $$(\Psi,\Psi)= \int \overline\psi(x')\psi(x)\delta(x'-x)dx'dx$$ or would the $dx'$ not be included? If I am wholly wrong, how would I go about representing $(\Psi,\Psi)$? Answer: I don't know how exactly you went from $(1)$ and $(2)$ to $(3)$. But, here is a nice trick. In our Hilbert Space $\mathbb{V}$, the eigenbasis $|x\rangle$ satisfies: $$\hat{I} = \int_{\mathbb{R}} |x\rangle \langle x| \, dx$$ where $\hat{I}$ is the identity operator in $\mathbb{V}$. Thus, $$\langle \Psi | \Psi \rangle = \langle \Psi | \hat{I} | \Psi\rangle = \langle \Psi| \int_{\mathbb{R}} | x \rangle \langle x | \, dx \ |\Psi\rangle = \int_{\mathbb{R}} \langle \Psi | x \rangle \langle x|\Psi\rangle \, dx = \int_{\mathbb{R}} \psi^{*}(x) \psi(x) \, dx \Longrightarrow$$ $$\langle \Psi | \Psi \rangle = \int_{\mathbb{R}} |\psi(x)|^2 \, dx$$ Note that this is the same as: $$\langle \Psi | \Psi \rangle = \int_{\mathbb{R}} \int_{\mathbb{R}} \psi^{*}(x') \psi(x) \delta(x-x') \, dx' \, dx $$ but it is completely unnecessary to do that. 
You could have also arrived at the latter result through the following route: $$\left\langle \int_{\mathbb{R}} \psi(x) |x\rangle \, dx \ | \int_{\mathbb{R}} \psi(x') |x' \rangle \, dx' \right\rangle$$ By using the properties of the inner product (linearity in the second argument and anti-linearity in the first), we get: $$\left\langle \int_{\mathbb{R}} \psi(x) |x\rangle \, dx \ | \int_{\mathbb{R}} \psi(x') |x' \rangle \, dx' \right\rangle = \int_{\mathbb{R}} \psi(x') \left\langle \int_{\mathbb{R}} \psi(x) | x \rangle \, dx \ | x' \right\rangle dx' = $$ $$\int_{\mathbb{R}} \int_{\mathbb{R}} \psi(x') \psi^{*}(x) \langle x | x' \rangle \, dx' \, dx = \langle \Psi | \Psi \rangle = \int_{\mathbb{R}} \int_{\mathbb{R}} \psi^{*}(x') \psi(x) \delta(x-x') \, dx' \, dx = \int_{\mathbb{R}} |\psi(x)|^2 \, dx$$
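As an editorial illustration (not from the original answer), the point that inserting the delta function is redundant can be checked on a grid: discretizing $\delta(x-x')$ as a Kronecker delta divided by the grid spacing makes the double integral collapse term by term to the single integral of $|\psi|^2$. The trial wavefunction below is an arbitrary choice.

```python
import math

dx = 0.01
xs = [i * dx for i in range(-300, 301)]
# an arbitrary complex trial wavefunction psi(x) = (1 + i x) exp(-x^2)
psi = [(1 + 1j * x) * math.exp(-x * x) for x in xs]

# single integral:  integral of |psi(x)|^2 dx
norm_single = sum(abs(p) ** 2 for p in psi) * dx

# double integral:  integral of psi*(x') psi(x) delta(x - x') dx' dx,
# with delta(x - x') discretized as kronecker(i, j) / dx
norm_double = sum(
    psi[i].conjugate() * psi[j] * ((1.0 / dx) if i == j else 0.0)
    for i in range(len(xs)) for j in range(len(xs))
) * dx * dx
```

The off-diagonal terms vanish identically, so the two sums agree up to floating-point rounding, mirroring the exact statement $\int\int \psi^*(x')\psi(x)\delta(x-x')\,dx'\,dx = \int |\psi(x)|^2\,dx$.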
{ "domain": "physics.stackexchange", "id": 72072, "tags": "quantum-mechanics, hilbert-space, wavefunction" }
c# record parameter validation technique
Question: This comes from an answer I provided to a question on stackoverflow here: https://stackoverflow.com/a/71482194/3258131 To make c# record parameter validation concise and maintain all the benefits of the positional record definition I created the following solution. The first part is using a "dummy" private validation field and some extension methods to facilitate the validations. I created the following validation class (removed other validations for brevity): public static class Validation { public static bool IsValid<T>(this T _) { return true; } public static T NotNull<T>(T @value, [CallerArgumentExpression("value")] string? thisExpression = default) { if (value == null) throw new ArgumentNullException(thisExpression); return value; } public static string LengthBetween(this string @value, int min, int max, [CallerArgumentExpression("value")] string? thisExpression = default) { if (value.Length < min) throw new ArgumentOutOfRangeException(thisExpression, $"Can't have less than {min} items"); if (value.Length > max) throw new ArgumentOutOfRangeException(thisExpression, $"Can't have more than {max} items"); return value; } public static IComparable<T> RangeWithin<T>(this IComparable<T> @value, T min, T max, [CallerArgumentExpression("value")] string? thisExpression = default) { if (value.CompareTo(min) < 0) throw new ArgumentOutOfRangeException(thisExpression, $"Can't have less than {min} items"); if (value.CompareTo(max) > 0) throw new ArgumentOutOfRangeException(thisExpression, $"Can't have more than {max} items"); return value; } } Then you can use it with the following: // FirstName may not be null and must be between 1 and 5 // LastName may be null, but when it is defined it must be between 3 and 10 // Age must be positive and below 200 record Person(string FirstName, string? LastName, int Age, Guid Id) { private readonly bool _valid = Validation.NotNull(FirstName).LengthBetween(1, 5).IsValid() && (LastName?.LengthBetween(2, 10).IsValid() ?? 
true) && Age.RangeWithin(0, 200).IsValid(); } The ?? true is VERY important, it is to ensure validation continues in case the nullable LastName was indeed null, otherwise it would short-circuit. Perhaps it would be better (safer) to use another static AllowNull method to wrap the entire validation of that variable in, like so: public static class Validation { public static bool IsValid<T>(this T _) { return true; } public static bool AllowNull<T>(T? _) { return true; } public static T NotNull<T>(T @value, [CallerArgumentExpression("value")] string? thisExpression = default) { if (value == null) throw new ArgumentNullException(thisExpression); return value; } public static string LengthBetween(this string @value, int min, int max, [CallerArgumentExpression("value")] string? thisExpression = default) { if (value.Length < min) throw new ArgumentOutOfRangeException(thisExpression, $"Can't have less than {min} items"); if (value.Length > max) throw new ArgumentOutOfRangeException(thisExpression, $"Can't have more than {max} items"); return value; } public static IComparable<T> RangeWithin<T>(this IComparable<T> @value, T min, T max, [CallerArgumentExpression("value")] string? thisExpression = default) { if (value.CompareTo(min) < 0) throw new ArgumentOutOfRangeException(thisExpression, $"Can't have less than {min} items"); if (value.CompareTo(max) > 0) throw new ArgumentOutOfRangeException(thisExpression, $"Can't have more than {max} items"); return value; } } record Person(string FirstName, string? LastName, int Age, Guid Id) { private readonly bool _valid = Validation.NotNull(FirstName).LengthBetween(1, 5).IsValid() && Validation.AllowNull(LastName?.LengthBetween(2, 10)) && Age.RangeWithin(0, 200).IsValid(); } Any comments would be appreciated. I am still not very happy with either of the options for handling the nullable properties, any advice? 
Answer: Let me suggest a slightly different approach public class ValidatorBuilder<T> { private List<Exception> exceptions = new(); private List<Action<T>> validators = new(); public ValidatorBuilder<T> NotNull(Expression<Func<T, object>> propertySelector) { validators.Add((T input) => NotNull(propertySelector, input)); return this; } public ValidatorBuilder<T> LengthBetween(Expression<Func<T, string>> propertySelector, int min, int max) { validators.Add((T input) => LengthBetween(propertySelector, min, max, input)); return this; } public ValidatorBuilder<T> RangeWithin<P>(Expression<Func<T, P>> propertySelector, P min, P max) where P : IComparable { validators.Add((T input) => RangeWithin(propertySelector, min, max, input)); return this; } public bool IsValid(T input, bool shouldThrowException = false) { foreach (var validator in validators) validator(input); return exceptions.Count == 0 ? true : !shouldThrowException ? false : throw new AggregateException(exceptions); } private void NotNull(Expression<Func<T, object>> propertySelector, T input) { var propertyName = ((MemberExpression)propertySelector.Body).Member.Name; if (propertySelector.Compile()(input) == null) exceptions.Add(new ArgumentNullException(propertyName)); } private void LengthBetween(Expression<Func<T, string>> propertySelector, int min, int max, T input) { var propertyName = ((MemberExpression)propertySelector.Body).Member.Name; var value = propertySelector.Compile()(input); if (value.Length < min) exceptions.Add(new ArgumentOutOfRangeException(propertyName, $"Can't have less than {min} items")); if (value.Length > max) exceptions.Add(new ArgumentOutOfRangeException(propertyName, $"Can't have more than {max} items")); } private void RangeWithin<P>(Expression<Func<T, P>> propertySelector, P min, P max, T input) where P: IComparable { var propertyName = ((MemberExpression)propertySelector.Body).Member.Name; var value = propertySelector.Compile()(input); if (value.CompareTo(min) < 0) 
exceptions.Add(new ArgumentOutOfRangeException(propertyName, $"Can't have less than {min} items")); if (value.CompareTo(max) > 0) exceptions.Add(new ArgumentOutOfRangeException(propertyName, $"Can't have more than {max} items")); } } This approach follows the builder pattern (For the sake of simplicity I have combined the builder and the product. If you have more validation rules than these then I suggest separating the components) The IsValid is the build method of the builder The public methods allow you to register validation rules against any arbitrary property The validators collection contains the registered rules The private methods implement the actual validation logic If the rule is violated then we add a new exception to the exceptions collection The IsValid method receives the input to be validated and a flag indicating whether or not it should throw an exception If the flag is set then it will throw an AggregateException where the InnerExceptions collection will contain all violated rules The usage looks like this: record Person(string FirstName, string? LastName, int Age, Guid Id) { private readonly bool _valid = new ValidatorBuilder<Person>() .NotNull(p => p.FirstName) .LengthBetween(p => p.FirstName, 1, 5) .RangeWithin(p => p.Age, 0, 200) .IsValid(); } UPDATE #1 Fixing IsValid As it was pointed out, the IsValid has been called in my sample without passing the this (I did not try out my code, I just made it compile). Due to C# limitations this can be referenced inside properties, methods, or constructors. We can't use this inside a field initialiser. Also we can't use this inside a read-only property with initializer method. So, if you want to stick with the positional record then you have to do something like this: record Person(string FirstName, string?
LastName, int Age, Guid Id) { private bool _valid => new ValidatorBuilder<Person>() .NotNull(p => p.FirstName) .LengthBetween(p => p.FirstName, 1, 5) .RangeWithin(p => p.Age, 0, 200) .IsValid(this, true); public Person(string firstName, string? lastName, int age, Guid id, object validation) : this(firstName, lastName, age, id) => _ = _valid; } _valid becomes a read-only property There is a new ctor (an overload) which calls the positional (generated) ctor and enforces validation The drawback of this approach is that you cannot enforce the caller to call the overload new Person("John", "Doe", 1000, Guid.NewGuid(), default);
{ "domain": "codereview.stackexchange", "id": 43106, "tags": "c#, .net, validation" }
Equation for force needed to replace energy lost by a pendulum each swing
Question: I'm sure this is just a question of not knowing the right search terms, but I'm trying to find out how much energy I would need to put back into a pendulum in order to keep it swinging forever. Specifically, I'm toying with the idea of building something like one of those automatic swings that rocks a baby but mechanical, rather than electric and possibly large enough for a bigger kid or even an adult. Kind of like a giant grandfather clock with a human pendulum. I'm sure the amount of push required to keep the swing going indefinitely would be in proportion to the weight of the pendulum (or kid, or lazy grown up). While I've found dozens of websites with equations describing the period of the pendulum, I've found nothing describing how much force I need to push to keep it going. Can someone direct me to a resource or suggest different search terms? Answer: The force needed would be equal to the total air resistance of the pendulum, and the friction resistance of the bearing point. These will vary depending on the aerodynamics of the pendulum, and the type of bearing point and its lubrication, and the weight on the bearing point. A good set of steel roller bearings with thin oil lubricant will normally have very little resistance; the most resistance will normally be from air displacement, depending on air density and aerodynamics. The weight of the pendulum only affects the bearing point, as a pendulum's "weight" does not change its duration.
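One way to put a number on "energy to put back each swing" is to simulate a damped pendulum and measure how much energy it sheds over one period; the drive must resupply roughly that amount per cycle. This is an editorial sketch, not part of the original answer, and the damping coefficient b is a made-up stand-in: a real swing's air and bearing drag would have to be measured.

```python
import math

# Per-unit-mass energy of a pendulum: E = (1/2)(L*omega)^2 + g*L*(1 - cos(theta)).
# b is an ASSUMED linear damping coefficient, chosen only for illustration.
g, L, b = 9.81, 2.0, 0.05
theta, omega = 0.3, 0.0        # released from a 0.3 rad swing angle, at rest

def energy(theta, omega):
    return 0.5 * (L * omega) ** 2 + g * L * (1 - math.cos(theta))

e0 = energy(theta, omega)
period = 2 * math.pi * math.sqrt(L / g)   # small-angle period estimate
dt = 1e-4
for _ in range(int(period / dt)):         # semi-implicit Euler over one period
    omega += (-(g / L) * math.sin(theta) - b * omega) * dt
    theta += omega * dt

loss_per_cycle = e0 - energy(theta, omega)  # energy (per kg) the push must resupply
```

Dividing loss_per_cycle by the arc length traveled gives a rough average force per unit mass; for small damping the loss per cycle is a small, fixed fraction of the swing energy, which matches the intuition that only drag and bearing friction need replacing.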
{ "domain": "physics.stackexchange", "id": 59699, "tags": "newtonian-mechanics, momentum" }
Is bounded-error probabilistic computation sensitive to transition types?
Question: In the unbounded-error case, it is known that both realtime quantum and probabilistic finite automata can recognize some uncomputable languages if they are allowed to use arbitrary real numbers in their transitions (Rabin, 1963; Yakaryilmaz and Say, 2011). In the bounded-error case, we have a similar result for poly-time quantum Turing machines as well, i.e. the cardinality of $ \mathsf{BQP}_{\mathbb{C}} $ is uncountable (Adleman et al., 1997). My question is whether any bounded-error probabilistic space (time) class defined with unrestricted real numbers contains an uncomputable (or a recursively enumerable) language. It is also known that (Watrous, 2003) if we restrict ourselves to algebraic numbers ($\mathbb{A}$), for any space-constructible function $ s(n) \in \Omega(\log n) $, \begin{equation} \mathsf{PrQSPACE}_{\mathbb{A}}(s) \subseteq \mathsf{DSPACE}(s^2), \end{equation} where $\mathsf{PrQSPACE}$ stands for unbounded-error quantum space. Any partial answer (for the case of bounded-error probabilistic computation using non-algebraic transitions) violating this upper bound would also be nice. Answer: If you have a coin which has an uncomputable probability of landing heads, then you can estimate with bounded error the first $k$ bits of this probability using $O(4^{k})$ coin flips. This lets you construct a machine with bounded error that computes an uncomputable function. The uncomputable function is the $k$th bit of the probability. The computation is estimating the probability of landing heads by flipping the coin $4^k$ times. Considerations of running time don't matter when you're worrying about whether a function is computable or uncomputable, so it doesn't matter that it takes time exponential in $k$ to estimate the $k$th bit of the function; it's still an uncomputable function.
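The answer's construction can be sketched concretely. This editorial Python demo (not from the original answer) estimates the first k binary digits of an unknown bias p by flipping a simulated coin on the order of $4^k$ times; here a computable p stands in for the uncomputable real, and a constant factor is added to the flip budget purely so the demo is reliable.

```python
import random

def estimate_bits(p, k, budget=10, seed=1):
    """Estimate the first k binary digits of a coin's bias p by sampling.

    budget is an assumed constant multiplier on the O(4^k) flip count."""
    random.seed(seed)
    n = budget * 4 ** k                        # O(4^k) flips, as in the argument above
    heads = sum(random.random() < p for _ in range(n))
    p_hat = heads / n
    bits, frac = [], p_hat
    for _ in range(k):                         # read off binary digits of the estimate
        frac *= 2
        bit = int(frac)
        bits.append(bit)
        frac -= bit
    return p_hat, bits

p_true = 0.3                                   # computable stand-in for an uncomputable real
p_hat, bits = estimate_bits(p_true, k=3)
```

The estimate's standard deviation shrinks like $1/\sqrt{n} = 2^{-k}/\sqrt{\text{budget}}$, which is exactly what is needed for the $k$th binary digit to be correct with bounded error (ignoring biases that sit pathologically close to a dyadic boundary).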
{ "domain": "cstheory.stackexchange", "id": 2361, "tags": "reference-request, probabilistic-computation, quantum-computing" }
Objected-Oriented Hangman game In Python
Question: I'm still a beginner in python and made this simple program to practice OOP, I've ended up using the self keyword a lot and I'm not sure if this is normal or if I just misunderstood something. Any criticism on the program itself or good practices in python are also greatly appreciated. (Text File 'Words' is just 3000+ English words listed) Edit: Instance variable cont was only used in 1 function so it was switched to a local variable inside function play Hangman.py import random from phases import hang_phases class HangMan: def __init__(self, random_word, letters): self.word = random_word self.letters = letters self.excluded_letters = [] self.included_letters = [] self.original_word = random_word self.chances = 9 def remove_letter(self, guessed_letter): self.word = self.word.replace(guessed_letter, "") self.included_letters.append(guessed_letter) def guess_letter(self): print(hang_phases[self.chances]) guessed_letter = input("Guess A Letter: ").upper() if guessed_letter in self.excluded_letters: print("You Already Guessed This Letter") elif guessed_letter in self.word: number_of_letter = self.word.count(guessed_letter) if number_of_letter == 1: print(f"There is {number_of_letter} {guessed_letter}") else: print(f"There are {number_of_letter} {guessed_letter}'s") self.remove_letter(guessed_letter) else: print(f"There are no {guessed_letter}'s") self.excluded_letters.append(guessed_letter) self.chances -= 1 def play(self): cont = True while cont: if self.word == "": print(f"Congrats! The Word Was {self.original_word}") cont = False else: try: if self.chances == 0: print("The Man Has Been Hung! Better Luck Next Time :/") print(f"The Word Was {self.original_word}") print(hang_phases[0]) cont = False else: print() print("What Would You Like To Do?:\n*Please Enter The Integer Value Corresponding To The Action") print("1. Guess A Letter\n" "2. Check Included Guessed Letters\n" "3. Check Excluded Guessed Letters\n" "4. Get Length Of The Word\n" "5. 
Give Up\n") action = int(input("> ")) int(action) if action == 1: self.guess_letter() elif action == 2: if len(self.included_letters) == 0: print("You Have Not Guessed Any Letters That Are In The Word") else: print(self.included_letters) elif action == 3: if len(self.excluded_letters) == 0: print("You Have Not Guessed Any Letters That Are Not In The Word") else: print(self.excluded_letters) elif action == 4: print(len(self.original_word)) elif action == 5: print(f"The Word Is {self.original_word}") cont = False else: print("Pleas Enter A Value Between 1 & 5") except ValueError: print("Please Enter A Valid Integer") def main(): play = True while play: word_list = open('Words').read().split("\n") word = random.choice(word_list).upper() hm = HangMan(word, len(word)) hm.play() print("Play Again?") play_again = input("Y/n > ").upper() if play_again == "Y": print("Setting Up...") elif play_again == "N": print("Goodbye") play = False if __name__ == "__main__": main() phases.py hang_phases = [ ''' ______ rip lol. | | | O | \|/ | | | _/ \_ _|_ ''' , ''' ______ | | | O | \|/ | | | _/ \\ _|_ ''' , ''' ______ | | | O | \|/ | | | _/ _|_ ''' , ''' ______ | | | O | \|/ | | | / _|_ ''' , ''' ______ | | | O | \|/ | | | _|_ ''' , ''' ______ | | | O | \|/ | | _|_ ''' , ''' ______ | | | O | \| | | _|_ ''' , ''' ______ | | | O | | | | _|_ ''' , ''' ______ | | | O | | | _|_ ''' , ''' ______ | | | | | | _|_ ''' ] Answer: Simpler UI. A lot of your code is devoted to managing the user-interface, which is organized around five numbered choices (guess letter, see successful guesses, see failed guesses, see length of word, or give up). As a user, I find that process very tedious: type 1, press enter, type a letter, press enter, press 1, type letter, press enter, type 4, see length of word, etc etc etc. All of that hassle is unnecessary because you could easily display all of the relevant information all of the time. Furthermore, that's what most Hangman implementations do. 
The other thing that most Hangman games do is tell you where the successful guesses reside within the entire word. Here's a mock-up showing everything the user needs to know, followed by a prompt waiting for the next guess. And if the user just presses enter, you can interpret that as "give up", which would eliminate the need for a numbered UI. That relatively simple adjustment to your plan would substantially reduce the amount of code needed while also making life better for users: ______ | | | O | \|/ | | | _|_ Word T _ D I O U _ Misses A C F R Z > Strings are sequences with stealth members. Python strings are great because they are built on top of sequences, which means you can easily iterate over them character by character. However, strings are not merely character sequences, and naive membership queries can sometimes go awry. In your case, if a user guessing a letter just presses enter rather than typing a letter, your program incorrectly reports that the "letter" (in this case the empty string) appears in the word N + 1 times, where N is the length of the word. And for strings, that is technically correct: the empty string can be found before every letter and at the end of the string. That makes sense from a computer-sciency point of view of strings, but it's not intuitive if we view strings purely as a sequence of characters. Don't force the caller to do routine work. A user of your HangMan class is required to supply both a word and the number of letters in that word. A friendlier hangman would do that computation for the victim, free of charge. Boundaries: consider the purpose of the HangMan class. It's good to find projects to practice OOP. One part of that learning process is developing solid instincts about boundaries: what belongs in the class and what does not. These can be tricky questions and software engineers have a variety of opinions about how to draw such lines.
If it were me, I would probably treat HangMan as nothing more than a dataclass to keep basic facts about game state (the word, prior guesses) and a variety of derived or computable attributes (prior hits, prior misses, number of remaining guesses). In other words, the class would not collect user input or orchestrate the playing of the game. Such matters would be handled elsewhere, maybe in a different class, maybe in a static method of the class, or in an ordinary function (I would probably opt for the latter). Why that division? Because most of the algorithmic aspects of the code are questions about data: letters in the word, prior guesses, remaining guesses before death. I want algorithmic logic in data-oriented code that can be easily tested and debugged. I don't want such code to be burdened by the annoyances of user-interface. Meanwhile, I want the UI portions of the code to be very simple from an algorithmic/logical point of view so that I don't need to spend too much effort with testing and debugging. I want the inevitably annoying UI-oriented code to be as free of logical complexity as possible: just procedural step-by-step stuff whenever possible. Gary Bernhardt's talk on the subject of class boundaries is a classic in this line of thought.
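The suggested split can be sketched as follows. This is an editorial illustration of the reviewer's idea, with names of my own choosing: a pure-state dataclass whose derived attributes are properties, with no input() or print() inside, so the game logic is trivially testable.

```python
from dataclasses import dataclass, field

@dataclass
class HangmanState:
    """Pure game state: no I/O, just data and derived attributes."""
    word: str
    guesses: list = field(default_factory=list)
    max_misses: int = 9

    @property
    def misses(self):
        return [g for g in self.guesses if g not in self.word]

    @property
    def remaining(self):
        return self.max_misses - len(self.misses)

    @property
    def masked(self):
        return " ".join(c if c in self.guesses else "_" for c in self.word)

    @property
    def won(self):
        return all(c in self.guesses for c in self.word)

    @property
    def lost(self):
        return self.remaining <= 0

state = HangmanState("TEDIOUS")
for letter in "TAED":
    state.guesses.append(letter)
print(state.masked)   # T E D _ _ _ _
```

A thin UI loop elsewhere would then just read a letter, append it to state.guesses, and render state.masked, state.misses, and the gallows picture on every turn.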
{ "domain": "codereview.stackexchange", "id": 42612, "tags": "python, beginner, object-oriented, hangman" }
Why can two wavefunctions can be interchanged in integrals involving hermitian operators?
Question: In many textbooks, it is said that because operators in QM are hermitian, we can write: $$\int \psi^*\hat{A}\phi\,\mathrm d\tau = \int\phi^*\hat{A}\psi\,\mathrm d\tau$$ An operator $\hat{A}$ is called hermitian iff $\hat{A}^\dagger = \hat{A}$. Using this definition, I tried to prove the above property but I can't. I started from: $$\langle\psi|\hat{A}|\phi\rangle = {\langle\phi|\hat{A}|\psi\rangle}^\dagger$$ based on the fact that the operator is hermitian. But then I don't know how to continue. Any ideas? I have found the first relation in the following textbook: Quantum Chemistry and Molecular Interactions, on page 81. For any two wavefunctions $\psi_i$ and $\psi_j$ $$\int \psi_i^*\hat{H}\psi_j\,\mathrm d\tau = \int\psi_j^*\hat{H}\psi_i\,\mathrm d\tau$$ Answer: This must be false. As a simple counterexample, if this were true for any Hermitian operator $A$ and any set of basis functions $\phi,\psi$, it must also be true for the identity operator $I$ with $\phi$ a complex function and $\psi$ a real function. But clearly $$\int\phi^*\psi d\tau\ne\int\psi^*\phi d\tau$$ since only $\phi$ has an imaginary part. As others have stated in the comments, we can only say that $\langle\phi|A|\psi\rangle=\langle\psi|A|\phi\rangle^*$ for Hermitian operators, with the complex conjugation removed if $\phi$ and $\psi$ are known to be real functions and $A$ is a real symmetric operator.
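The correct relation $\langle\phi|A|\psi\rangle=\langle\psi|A|\phi\rangle^*$, and the failure of the stronger claim without the conjugate, can be verified in a finite-dimensional toy example. This editorial Python check (not from the original answer) uses an arbitrary 2x2 Hermitian matrix and two complex vectors.

```python
def inner(phi, A, psi):
    """<phi|A|psi> = sum_ij conj(phi_i) A_ij psi_j for finite-dimensional vectors."""
    n = len(phi)
    return sum(phi[i].conjugate() * A[i][j] * psi[j]
               for i in range(n) for j in range(n))

# A Hermitian matrix (equal to its conjugate transpose) and two complex vectors.
A = [[2 + 0j, 1 - 1j],
     [1 + 1j, 3 + 0j]]
phi = [1 + 0j, 1j]
psi = [2 + 0j, 1 + 0j]

lhs = inner(phi, A, psi)   # <phi|A|psi> = 7 - 6j
rhs = inner(psi, A, phi)   # <psi|A|phi> = 7 + 6j
# Hermiticity gives lhs == conj(rhs), but lhs != rhs since the result is complex.
```

This is exactly the counterexample structure of the answer: the two matrix elements agree only up to complex conjugation, and coincide outright only when both vectors (and the operator) are real.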
{ "domain": "chemistry.stackexchange", "id": 16212, "tags": "physical-chemistry, quantum-chemistry" }
Applying a function to columns of two matrices producing their combinations
Question: I made a short snippet for applying a function f that takes two arguments (vectors) to columns of two matrices producing all possible combinations of those columns. function V = cross_apply(f, V1, V2) % Apply function f to columns of V1 and V2 producing % a matrix where V(i, j) = f(V1(:,i), V2(:,j)) s1 = size(V1, 2); s2 = size(V2, 2); V = zeros(s1, s2); for i = 1:s1 for j = 1:s2 V(i, j) = f(V1(:, i), V2(:, j)); end end end An example of a use of this function could include approximating a function with a polynomial using a custom inner product (not a traditional dot product) here ip and defined as a kind of integral. close all f = @(x) log(x); P = @(x) x.^(0:3); ip = @(x, y) trapz(x.*y) / (size(x, 1)-1); t = linspace(1, 2, 10)'; V = P(t); % Inner product matrices A = cross_apply(ip, V, V); b = cross_apply(ip, V, f(t)); % Coefficients for the polynomial rr = rref([A, b]) coef = rr(:, end); % Plot ln(t) and polynomial fit t_ = linspace(1, 3, 1000)'; figure() plot(t_, P(t_)*coef, 'r--') grid on, hold on plot(t_, f(t_), 'b:') legend('fit', 'f(t)', 'Location', 'best') Not using this function, one would write a whole bunch of inner products (ip in the code) for matrices A and b. I didn't find a built-in function that does this, so I'm after two things: Is this already an implemented function in Matlab built-ins or some toolbox? Is there some improvement to be made to the function? Answer: Your code is clear and concise, there is not much to complain about it, except for one small thing: t = linspace(1, 2, 10)'; In MATLAB, ' is the complex conjugate transpose, whereas .' is the non-conjugate transpose, the simple re-ordering of dimensions. The array it is applied to here is real-valued, so there is no difference, but it is good practice to use .' always when the intention is to change the orientation of a vector. Bugs do appear at times because of the improper use of the ' operator. 
As I've said before, loops are no longer slow in MATLAB, but often it's still possible to remove them ("vectorize the code") for a time gain. If the vectorization doesn't involve creating intermediate copies of the data, and doesn't involve complex indexing, then vectorization is still profitable. In this particular case, vectorization involves calling a function one time vs calling it 10x10=100 times, and function calls still have a significant overhead in MATLAB. So cutting down on function calls will be beneficial. Producing all possible combinations of two values can be accomplished with the bsxfun function in MATLAB. But in this case, we want to produce all combinations of two columns, and there is no function for that. However, you could use it to generate the product of all columns (the argument to trapz in your function ip). This requires a large intermediate matrix to be generated, but in this case (100 vectors of length 4), this is a very mild amount of data, and well worth the price if it means not calling a function 100 times. Note that bsxfun is, since R2016b, no longer necessary. Everything you could previously do with bsxfun can now be done without it. For example, given x=1:10, you can do: y = x .* x.'; to produce a 10x10 matrix with the product of all combinations of elements of x. This implicit singleton expansion happens automatically when the two arguments to any operator have compatible dimensions. That means that their sizes are identical along each dimension, or one of them has size 1 along a dimension (a singleton dimension). That singleton dimension gets repeated virtually to match the size of the other matrix. Your function ip is written such that, given two input matrices x and y of compatible dimensions, it will compute your custom inner product along the first dimension, as long as that dimension has a size larger than 1. 
To make this function safer in use, you can explicitly tell trapz along which dimension to operate: ip = @(x, y) trapz(x.*y, 1) / (size(x, 1)-1); Now, ip(x,y) will compute many custom inner products at once, if x and y are matrices. However, ip(V, V) will compute the inner product of each column in V with itself, not all combinations of columns. For that, we need to do a little bit of reshaping of the inputs: function V = cross_apply2(f, V1, V2) s1 = size(V1, 2); s2 = size(V2, 2); V2 = reshape(V2, [], 1, s2); V = f(V1, V2); V = reshape(V, s1, s2); What this code does is convert the second input matrix of size NxM to a matrix of size Nx1xM (this can be done without copying the data). Now the function f (really ip the way you call it) will use implicit singleton expansion to compute the custom inner product of each combination of columns of V1 and V2. You can verify that cross_apply and cross_apply2 yield exactly the same result: A = cross_apply(ip, V, V); b = cross_apply(ip, V, f(t)); A2 = cross_apply2(ip, V, V); assert(isequal(A,A2)) b2 = cross_apply2(ip, V, f(t)); assert(isequal(b,b2)) And we can also see what the time savings are for calling the function ip once instead of 100 times: timeit(@()cross_apply(ip, V, V)) timeit(@()cross_apply2(ip, V, V)) On my computer this is 1.5016e-04 vs 2.9650e-05, a 5x speedup. But cross_apply2 is not as general as cross_apply, as it imposes a requirement on the function to be applied to be vectorized. trapz is a funny function. In your case, where it is called without an x input argument, it sets x to 1. The trapezoid rule now is simply the sum of the elements of the input vector, minus half the first and half the last element. I don't know what your application even is, but have you considered replacing this with the simple sum? Computing the true dot product is a lot faster. You can do edit trapz to see how trapz is implemented.
For the way you call it, it does: z = sum((y(1:end-1,:) + y(2:end,:)), 1)/2; This is the same, as I said before, as: z = sum(y(:,:)) - y(1,:)/2 - y(end,:)/2; But this second form is twice as fast for large arrays: The first form copies the data in y twice (minus one row), adds these results and sums them up, whereas the second form simply adds up all elements, then subtracts half of the first row and half the last row. For a random large array y: y = rand(1000,100,100); z1 = sum((y(1:end-1,:) + y(2:end,:)), 1)/2; z2 = sum(y(:,:)) - y(1,:)/2 - y(end,:)/2; max(abs(z1(:)-z2(:))) timeit(@()sum((y(1:end-1,:) + y(2:end,:)), 1)/2) timeit(@()sum(y(:,:)) - y(1,:)/2 - y(end,:)/2) This shows that the largest difference between the two results is rounding errors (4.8317e-13), and the timings are 0.0778 vs 0.0341.
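The trapezoid-rule identity used above (full sum minus half of each endpoint) is easy to check independently. Here is an editorial sketch in Python rather than MATLAB, comparing the pairwise-averaging form of unit-spacing trapz against the endpoint form.

```python
import random

def trapz_pairwise(y):
    """MATLAB-style trapz with unit spacing: sum of trapezoid areas."""
    return sum((y[i] + y[i + 1]) / 2 for i in range(len(y) - 1))

def trapz_endpoints(y):
    """Equivalent form: full sum minus half of each endpoint."""
    return sum(y) - y[0] / 2 - y[-1] / 2

random.seed(0)
y = [random.random() for _ in range(1000)]
# the two forms agree (up to rounding), but the second touches each element once
```

The endpoint form is cheaper for the same reason given in the review: it avoids building the two shifted copies y(1:end-1) and y(2:end) that the pairwise form needs.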
{ "domain": "codereview.stackexchange", "id": 32217, "tags": "matrix, matlab" }
Elliptic Orbit Solution based on initial conditions
Question: $$\ddot{\bf{r}}=-\frac{\bf{r}}{|r|}\frac{k}{|r|^2}$$ $k$ here is a constant dependent on the gravitational constant, and the masses of the two objects. If I transform it into cartesian coordinates: $$\ddot{X}(t)=-\frac{k X(t)}{\left(X(t)^2+Y(t)^2\right)^{3/2}}$$ $$\ddot{Y}(t)=-\frac{k Y(t)}{\left(X(t)^2+Y(t)^2\right)^{3/2}}$$ I cannot solve this system of equations. Perhaps it would require some special functions like the elliptic function etc. Maybe I should get rid of the time dependency and represent it as an implicit function but I do not know how. I realize that solving an elliptic orbit is known; I am curious about how one would solve it in the fundamental f=ma kind of way, without any other assumption. Answer: First of all, it is best to solve the equation in polar coordinates rather than Cartesian coordinates: this works well with the fact that the force vector - and therefore the acceleration vector - is always pointing in a radial direction. While it is possible to express the equations of motion as differential equations in time, it is not possible to obtain closed form solutions of the orbital position as a function of time, and so these time differential equations cannot be solved as they are. The strategy is to reformulate the time differential equations into "positional" differential equations, i.e. the derivative terms are differentiated with respect to the orbital angular position. Details for this are provided below. Formulation of polar equation of motion in time $r(t)$ and $\theta(t)$ are the polar coordinates. It is useful to be able to relate the acceleration vector to the polar coordinates and their derivatives.
The position vector can be represented as: $$\mathbf{r}(t) = \begin{bmatrix} r\cos{\theta}\\ r\sin{\theta} \end{bmatrix}$$ The acceleration is obtained by double differentiation in time: $$\ddot{\mathbf{r}}(t) = \begin{bmatrix} \ddot{r}\cos{\theta} - 2\dot{r}\dot{\theta}\sin{\theta} - r\dot{\theta}^2\cos{\theta} - r\ddot{\theta}\sin{\theta}\\ \ddot{r}\sin{\theta} + 2\dot{r}\dot{\theta}\cos{\theta} - r\dot{\theta}^2\sin{\theta} + r\ddot{\theta}\cos{\theta} \end{bmatrix} = \left(\ddot{r} - r\dot{\theta}^2\right) \begin{bmatrix} \cos{\theta}\\ \sin{\theta} \end{bmatrix} + \left(2\dot{r}\dot{\theta} + r\ddot{\theta}\right) \begin{bmatrix} -\sin{\theta}\\ \cos{\theta} \end{bmatrix} $$ Note how $\ddot{r} - r\dot{\theta}^2$ is the radial component of acceleration and $2\dot{r}\dot{\theta} + r\ddot{\theta}$ is the tangential component. However, we know that gravity only imparts a radial acceleration on the body, and so two simultaneous equations of motion are obtained: $$\ddot{r} - r\dot{\theta}^2 = -\frac{k}{r^2}$$ $$2\dot{r}\dot{\theta} + r\ddot{\theta} = 0$$ These equations cannot be immediately solved as they are, since both equations contain $r$ and $\theta$ terms together. However, it can be shown that the second equation amounts to the conservation of angular momentum. Multiply the second equation by $r$ to yield $$2 r \dot{r}\dot{\theta} + r^2\ddot{\theta} = 0$$ Noting that $\frac{\mathrm{d}}{\mathrm{d}t}r^2 = 2r\dot{r}$, the equation can be re-expressed as $$\frac{\mathrm{d}}{\mathrm{d}t}\left(r^2\right)\dot{\theta} + r^2\frac{\mathrm{d}}{\mathrm{d}t}\left(\dot{\theta}\right) = 0 \longrightarrow \frac{\mathrm{d}}{\mathrm{d}t}\left(r^2 \dot{\theta}\right) = 0$$ This means that $$r^2\dot{\theta} = \textrm{constant}$$ We can represent this constant quantity with $h = r^2\dot{\theta}$, and $h$ is the specific angular momentum.
Now we can substitute $\dot{\theta} = h/r^2$ into the first equation of motion, yielding $$\ddot{r} - \frac{h^2}{r^3} + \frac{k}{r^2} = 0$$ This is the polar equation of motion in time, but this form is still not sufficient for determining the shape of an orbit. Obtaining the "positional" equation of motion A useful step is to define a new quantity $q = \frac{1}{r}$ and substitute this into the equation of motion. In the process, we can convert time derivatives of $r(t)$ into positional derivatives of $q(\theta)$. Note how $\dot{r}$ can be expanded using the chain rule: $$\dot{r} = \frac{\mathrm{d}r}{\mathrm{d}t} = \frac{\mathrm{d}r}{\mathrm{d}\theta}\frac{\mathrm{d}\theta}{\mathrm{d}t} = \frac{\mathrm{d}}{\mathrm{d}\theta}\left(\frac{1}{q}\right)\dot{\theta} = -\frac{1}{q^2}\frac{\mathrm{d}q}{\mathrm{d}\theta}hq^2 = -h\frac{\mathrm{d}q}{\mathrm{d}\theta}$$ Likewise, $\ddot{r}$ becomes $$\ddot{r} = \frac{\mathrm{d}\dot{r}}{\mathrm{d}t} = \frac{\mathrm{d}\dot{r}}{\mathrm{d}\theta}\frac{\mathrm{d}\theta}{\mathrm{d}t} = \frac{\mathrm{d}}{\mathrm{d}\theta}\left(-h\frac{\mathrm{d}q}{\mathrm{d}\theta}\right)\dot{\theta} = -h^2 q^2\frac{\mathrm{d}^2 q}{\mathrm{d}\theta^2}$$ Therefore, the equation of motion can be re-expressed as: $$-h^2 q^2 \frac{\mathrm{d}^2 q}{\mathrm{d}\theta^2} - h^2 q^3 + k q^2 = 0$$ or $$\frac{\mathrm{d}^2 q}{\mathrm{d}\theta^2} + q = \frac{k}{h^2}$$ This equation can be solved, and is in fact a linear differential equation with the general solution: $$q(\theta) = A\cos{\theta} + B\sin{\theta} + \frac{k}{h^2}$$ Solving the equation of motion $A$ and $B$ are yet to be known, but can be determined with two boundary conditions. $h$ is also unknown, and can be determined with a third boundary condition. Let's say that $r_0$ is the initial distance of the body from the orbited planet, $u_0$ is the initial (outward) radial velocity, and $v_0$ is the initial (anticlockwise) tangential velocity.
With no loss of generality, we can assume the orbiting body is initially at $\theta = 0$. The first boundary condition is that $r = r_0$ at $t=0$. In terms of $q$, this becomes $q = \frac{1}{r_0}$ at $\theta = 0$. The second boundary condition involves the radial velocity: $\dot{r} = u_0$ at $t = 0$. This becomes $\mathrm{d}q/\mathrm{d}\theta = -u_0/h$ at $\theta = 0$. As for the final boundary condition, note that $h = r^2\dot{\theta}$. Then $h$ can be determined from the initial conditions, such that $h = r_0 v_0$. With these conditions, the expression for $q$ becomes: $$q(\theta) = \left(\frac{1}{r_0} - \frac{k}{r_0^2 v_0^2}\right)\cos{\theta} - \frac{u_0}{r_0 v_0}\sin{\theta} + \frac{k}{r_0^2 v_0^2}$$ Then, since $r = 1/q$: $$r(\theta) = \frac{r_0}{\left(1 - \frac{k}{r_0 v_0^2}\right)\cos{\theta} - \frac{u_0}{v_0}\sin{\theta} + \frac{k}{r_0 v_0^2}}$$ This equation is the polar equation of the orbit, and is the equation of an ellipse for certain choices of initial conditions $r_0$, $u_0$ and $v_0$. This equation will also account for parabolic and hyperbolic "orbits".
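The derivation above is easy to sanity-check numerically: integrate the Cartesian f = ma equations from the question directly, then compare the resulting position against the closed-form polar equation $r(\theta)$. This is only an illustrative sketch; the function names and the RK4 stepping are mine, not from the answer:

```python
import math

def closed_form_r(theta, r0, u0, v0, k):
    """Polar orbit equation derived above: r as a function of theta."""
    const = k / (r0 * v0**2)
    return r0 / ((1.0 - const) * math.cos(theta)
                 - (u0 / v0) * math.sin(theta) + const)

def integrate_orbit(r0, u0, v0, k, t_end, dt=1e-4):
    """Direct f = ma integration in Cartesian coordinates with RK4.
    State is (x, y, vx, vy); the body starts at theta = 0, i.e. (r0, 0)."""
    def deriv(s):
        x, y, vx, vy = s
        r3 = (x * x + y * y) ** 1.5
        return (vx, vy, -k * x / r3, -k * y / r3)
    s = (r0, 0.0, u0, v0)          # radial speed u0, tangential speed v0
    t = 0.0
    while t < t_end:
        k1 = deriv(s)
        k2 = deriv(tuple(si + 0.5 * dt * ki for si, ki in zip(s, k1)))
        k3 = deriv(tuple(si + 0.5 * dt * ki for si, ki in zip(s, k2)))
        k4 = deriv(tuple(si + dt * ki for si, ki in zip(s, k3)))
        s = tuple(si + dt / 6.0 * (a + 2 * b + 2 * c + d)
                  for si, a, b, c, d in zip(s, k1, k2, k3, k4))
        t += dt
    return s

# Compare the two on a mildly elliptic orbit (k = 1):
x, y, vx, vy = integrate_orbit(r0=1.0, u0=0.0, v0=1.1, k=1.0, t_end=2.0)
theta = math.atan2(y, x)
r_numeric = math.hypot(x, y)
r_formula = closed_form_r(theta, 1.0, 0.0, 1.1, 1.0)
```

The two radii agree to within the integrator's error, and the specific angular momentum $x\dot{y} - y\dot{x}$ stays at $r_0 v_0$ throughout, as the conservation argument requires.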
{ "domain": "physics.stackexchange", "id": 54555, "tags": "newtonian-mechanics, newtonian-gravity, orbital-motion, celestial-mechanics, differential-equations" }
Why do plano-convex lenses reduce spherical aberration?
Question: I keep reading this in comparisons of different kinds of lenses but I don't understand why the flat side would change the amount of spherical aberration if the focal length still has to be the same? Answer: Plano-convex lenses only reduce the spherical aberration if they are used in the appropriate position with respect to the conjugates. To understand this, one intuitive explanation is this: Snell's law is not linear, so it does not behave proportionately for large versus small angles. Looking at the two lenses below, for the left hand lens the angles of deviation for the marginal ray (the ray at the edge of the lens) are distributed between the front and back surfaces. For the right hand lens, the 2nd surface performs all of the work of refraction. It turns out the right hand lens will have much more spherical aberration than the left hand lens, when used in this manner. This is a simplified example, but more complicated lenses are designed by adjusting the amount of refraction at each surface, ideally so that aberrations on one surface are compensated by aberrations on another surface.
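The nonlinearity argument can be made concrete with a back-of-the-envelope check of Snell's law. Treating each surface as a simple glass-to-air refraction (a deliberate simplification; the index 1.5 is my assumption), the deviation grows faster than linearly with the incidence angle, so one surface doing all the bending deviates the marginal ray more than two surfaces sharing the work:

```python
import math

n = 1.5  # assumed glass refractive index

def deviation(i):
    """Ray deviation (radians) for a single glass-to-air refraction
    at incidence angle i, from Snell's law: sin(r) = n * sin(i)."""
    return math.asin(n * math.sin(i)) - i

# All the bending at one surface vs. the same total incidence split in two:
one_surface = deviation(math.radians(20))
two_surfaces = 2 * deviation(math.radians(10))
# one_surface > two_surfaces: the single surface bends the ray more,
# which is the seed of the extra spherical aberration.
```

This is only the nonlinearity ingredient; a real aberration comparison would trace rays through actual curved surfaces, but it shows why distributing the refraction helps.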
{ "domain": "physics.stackexchange", "id": 71465, "tags": "geometric-optics, lenses" }
How to find the max frequency at a certain dB in an FFT signal
Question: I am processing some audio files and I would like to find out the maximum frequency in the spectrum (FFT) given a certain magnitude range (in my case: maximum magnitude to -60 dB). How can I achieve that? I know how to calculate the peak frequency of an FFT signal: peak_index = np.argmax(np.abs(fft)) peak_freq = fft_freq[peak_index] * sample_rate How can I add the dB level range to this calculation? Answer: It's not quite clear what you're asking, but I think this does something like it. Regarding how magnitude relates to dB: decibels are a relative scale, so you need to decide what it's relative to. Since you agreed in the comments that it was dBFS, I took the largest value in the data (e.g. np.max(data)). With that in mind, then the dB scale is: fft_db = 20*np.log10(np.abs(fft)/np.max(np.abs(fft))) Python code below import numpy as np import matplotlib.pyplot as plt fft_db = np.random.random(100)*100 - 130 fft_db[80:100] = -130 fft_db_flip = np.flip(fft_db) requested_value = len(fft_db) - np.argmax(fft_db_flip > -60) - 1 plt.plot(fft_db) plt.plot(requested_value, fft_db[requested_value],'r.') plt.plot([0,100],[-60,-60],'g:') plt.show()
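Putting the answer's pieces together on real signal data, a minimal end-to-end sketch might look like this (the function name and the synthetic test signal are mine): compute the magnitude spectrum, convert to dB relative to the peak, and take the highest-frequency bin that stays above the -60 dB threshold:

```python
import numpy as np

def max_freq_within_db(signal, sample_rate, db_range=60.0):
    """Highest frequency whose spectral magnitude is within `db_range` dB
    of the spectrum's peak (i.e. above peak - db_range, dB re. the peak)."""
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sample_rate)
    # tiny offset avoids log10(0) on exactly-zero bins
    db = 20 * np.log10(spectrum / spectrum.max() + np.finfo(float).tiny)
    above = np.nonzero(db > -db_range)[0]
    return freqs[above[-1]]  # last bin above threshold = highest frequency

# Two tones: 50 Hz at full scale, 200 Hz at -40 dB (well inside the range).
fs = 1000
t = np.arange(fs) / fs
sig = np.sin(2 * np.pi * 50 * t) + 0.01 * np.sin(2 * np.pi * 200 * t)
```

Here `max_freq_within_db(sig, fs)` should return 200.0, since the weaker tone still sits above the -60 dB line.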
{ "domain": "dsp.stackexchange", "id": 10676, "tags": "fft, frequency" }
Reverse osmosis Water desalination
Question: Is it feasible to submerge a reverse osmosis system in the sea in order to use the increasing pressure to push water through the membrane, and, as the salt is removed, would the lighter fresh water rise on its own? Answer: It depends on the efficiency of the RO membrane. If the membrane conversion rate were below 20 percent, the answer in general would be yes. However, with the newer type membranes the conversion rates are quite high. In RO sea water desalinization, the normal pressure required to force the pure water molecules through the membrane is in the range of 800 to 1,000 psi, so the system must be over 2,200 feet below surface level. The pure water (flux) as it is collected is at a much lower pressure, so you will need to pump it back to the surface. If you don't, the pressure head will equalize on both sides of the membrane and the flux will end. Where savings may enter the process is that the pure water flow rate to pump will be much less than the initial flow entering the system. Of course there are many other factors to consider. Plant size, etc., all play a role. An example is as follows: If the RO system is to produce 1,000,000 gallons of pure water in an 8-hour day, and the RO membranes have a 30 percent conversion rate, then the total flow required to enter the system is: Total flow = 1,000,000/0.30 ≈ 3,333,333 gallons per 8-hour day, or about 6,944 gpm at 1,000-psi pressure. To determine the savings, calculate the cost to pump 6,944 gpm at 1,000 psi vs. 30 percent of that flow at 1,000 psi.
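Redoing the example's sizing arithmetic in a few lines, as a quick check (the numbers follow directly from the stated 30 percent conversion and 8-hour production day; function names are mine):

```python
def feed_flow_gpm(product_gal, conversion, hours):
    """Required feed flow (gpm) to make `product_gal` of permeate in `hours`
    at a given membrane conversion (recovery) fraction."""
    feed_gal = product_gal / conversion      # total water entering the system
    return feed_gal / (hours * 60)           # gallons per minute

feed = feed_flow_gpm(1_000_000, 0.30, 8)     # ~6,944 gpm of feed at ~1,000 psi
permeate = feed * 0.30                       # ~2,083 gpm to pump back to the surface
```

The comparison in the answer then amounts to pumping ~2,083 gpm of permeate back up instead of the full ~6,944 gpm of feed.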
{ "domain": "physics.stackexchange", "id": 30504, "tags": "experimental-physics, osmosis" }
Is this calculation assuming that the neutrino is massless?
Question: This is a calculation that I don't understand in Griffiths' Intro to Elementary Particles, page 105, second edition. This is the decay of a pion into a muon and a neutrino. From conservation of energy and momentum, the momentum four-vector $p_{neutrino} = p_{pion} - p_{muon}$ implies $p_{neutrino}^2 = p_{pion}^2 + p_{muon}^2 - 2 p_{pion} \cdot p_{muon}$, and $p_{neutrino}^2=0$, with $p_{pion}^2= m_{pion}^2 c^2$ and $p_{muon}^2 = m_{muon}^2 c^2$. Ok. Why is $p_{neutrino}^2=0$? Is it just because he's taking the mass of the neutrino to be 0? But it isn't 0. It's very small. Answer: In his "Formulas and Constants" table on XIII, Griffiths says, under the Lepton table, "Neutrino masses are extremely small, and for most purposes can be taken to be zero; for details see Chapter 11."
{ "domain": "physics.stackexchange", "id": 77697, "tags": "special-relativity, elementary-particles" }
Fraction of light that reaches one of the ends of a long glass cylinder with a point source emitting monochromatic light isotropically at the center
Question: What fraction of light reaches one of the ends of a long glass cylinder if there is a point source emitting monochromatic light isotropically at the midpoint of the cylinder's axis? Absorption in the glass can be considered negligible. I'm having difficulty gaining any sort of tractable solution. If we let theta be the angle made with respect to the axis normal to the surface of the cylinder (so that it is orthogonal to the cylinder's axis), then for angles greater than $\theta_c = 41.8$ degrees, total internal reflection will occur, so that all light will reach one of the ends of the cylinder. This is given by $\arcsin(\frac{n_2}{n_1})$, where $n_2 = 1.0$ and $n_1 = 1.5$. Below the critical angle, some light will be reflected, and some refracted. The reflected light will be reflected at the same angle at which it was incident. For this reason, it will be partially refracted every time it hits the cylinder's surface. As the cylinder was assumed to be long, it's safe to say it will undergo a fairly large number of reflections/refractions, the reflected light losing some of its intensity each time. Because of this, anything below the critical angle will be completely refracted out by the time it reaches the end of the cylinder. Anything contained within the cone of radius $R$ and height $R\tan(\theta_c)$ will make it to the end (here, I am calling the radius of the cylinder $R$). The volume of this cone is $\pi R^2 \frac{h}{3} = \pi \frac{R^3}{3} \tan(\theta_c) $, and the volume of the cylinder that contains it is $\pi R^2 h = \pi R^3 \tan(\theta_c)$, making the ratio $\frac{1}{3}$, regardless of the critical angle. Does my line of reasoning follow here, or is there a better way? Thank you in advance! Answer: I would be surprised if the fraction did not depend on the critical angle. Think of what would happen if the critical angle were very small or very large.
You need to consider the areas, at right angles to the direction of travel of the light, through which the light would pass, rather than volumes. It will help if you know the area of a spherical cap.
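Following the hint about spherical caps, here is one plausible reading worked out numerically (my own sketch, counting only meridional rays trapped by total internal reflection and ignoring end-face losses and skew rays): a ray making angle $\alpha$ with the axis meets the wall at incidence $90° - \alpha$, so it is guided whenever $\alpha < 90° - \theta_c$, and the guided fraction is the solid angle of the two corresponding caps:

```python
import math

def end_fraction(n_glass, n_outside=1.0):
    """Fraction of isotropically emitted light guided to the two ends by
    total internal reflection (meridional-ray approximation only)."""
    theta_c = math.asin(n_outside / n_glass)
    # Rays within half-angle (90 deg - theta_c) of the axis strike the wall
    # beyond the critical angle.  Each of the two cones (one per end)
    # subtends a spherical cap of solid angle 2*pi*(1 - cos(half_angle)).
    half_angle = math.pi / 2 - theta_c
    cap = 2 * math.pi * (1 - math.cos(half_angle))
    return 2 * cap / (4 * math.pi)

frac = end_fraction(1.5)
```

For $n = 1.5$ this evaluates to $1 - \sin\theta_c = 1 - 1/1.5 = 1/3$, coincidentally matching the volume-ratio guess in the question, though it arrives there by a different route and does depend on the critical angle.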
{ "domain": "physics.stackexchange", "id": 35513, "tags": "optics, waves, reflection" }
How to retrieve fasta sequence after local blast?
Question: I have created a Blast database using a reference genome. Then, I have performed a local blast search in command line using a gene of interest. I have obtained some hits with the usual Blasting information. Now, I want to extract the exact matching sequence from the blast search with their corresponding start and stop position. I have a .gff file for the genes as well for the reference genome. Can you give me any suggestions on how to do that? Thanks a lot. Answer: FWIW, In the Sequenceserver BLAST software, you can just click on "download alignment" to extract the aligning region.
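One way to get exactly the matching subsequence together with its coordinates is to re-run the search with BLAST+'s tabular output, requesting the subject start/end and aligned subject sequence columns (sstart, send and sseq are standard -outfmt 6 format specifiers), then parse the result. The helper below is only a sketch with made-up filenames:

```python
# Re-run the search asking BLAST+ for the aligned subject region directly, e.g.:
#   blastn -query gene.fa -db ref_db -outfmt "6 sseqid sstart send sseq" -out hits.tsv
# Each output line then already carries the matching sequence and its coordinates.

def parse_hits(tsv_text):
    """Parse '6 sseqid sstart send sseq' tabular BLAST output into records."""
    hits = []
    for line in tsv_text.strip().splitlines():
        sseqid, sstart, send, sseq = line.split("\t")
        hits.append({"id": sseqid, "start": int(sstart), "end": int(send),
                     "seq": sseq.replace("-", "")})   # drop alignment gaps
    return hits

example = "chr1\t1200\t1350\tATG-CGT\nchr2\t88\t40\tTTAACC"
hits = parse_hits(example)
```

Alternatively, blastdbcmd with -entry and -range can pull an arbitrary region straight out of the BLAST database once you know the coordinates.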
{ "domain": "bioinformatics.stackexchange", "id": 2016, "tags": "gene, ngs, blast" }
How do you encode Lamping's abstract algorithm using interaction combinators?
Question: Interaction combinators have been proposed as a compile target for the λ-calculus before. That paper implements the full λ-calculus. It is also known that it is possible to optimize interaction-net encodings of the λ-calculus for the subset of λ-terms that is EAL-typeable. That paper implements that subset of the λ-calculus by translating EAL-typeable λ-terms to interaction nets that are arguably more complex than interaction combinators, since they use an infinite alphabet of labels to group duplicators. I wonder if it is possible to combine both proposals. That is, is there any encoding for the abstract algorithm - that is, λ-terms that are EAL-typeable - as interaction combinators? Answer: I am not aware of any implementation of Lamping's algorithm directly in the interaction combinators. I do know that the presence of integer labels is a necessary feature of Lamping's algorithm, even for EAL-typable terms, because the labels reflect the nesting of so-called exponential boxes in proof nets, and Lamping's algorithm is essentially the execution of proof nets using the geometry of interaction, as first observed by Gonthier, Abadi and Lévy. So the question of implementing the algorithm in the interaction combinators boils down to representing exponential boxes in proof nets using the combinators. This is essentially what Mackie and Pinto did in their paper. Of course, Mackie and Pinto's encoding addresses all $\lambda$-terms, which use full linear logic boxes, whereas EAL-typable terms use elementary linear logic boxes, which are simpler (they are so-called functorial boxes). However, I do not believe that this simplification would have a notable impact on interaction combinator implementations.
This is because boxes are a global feature (they identify arbitrarily big subnets to be duplicated/erased), whereas the interaction combinators (as any interaction net system) are completely local (reduction only modifies bounded subnets), so the challenge is to represent such global features locally. Now, global duplication/erasing in EAL is identical to that in full linear logic, which is why I do not expect that an interaction combinator implementation of EAL would radically differ from the one proposed by Mackie and Pinto.
{ "domain": "cstheory.stackexchange", "id": 3380, "tags": "lambda-calculus, functional-programming, interaction-nets" }
Why can't using 2 separate measuring instruments to measure position and velocity work?
Question: According to the Heisenberg uncertainty principle, position and velocity cannot be measured accurately at the same time. What if we use 2 separate measuring instruments to measure position and velocity at the same time? One for measuring velocity and another one for measuring position. Would this solve the Heisenberg problem? I am sure there are smarter people who have thought of it, so I don't think it will work. May I ask why not? Answer: Quantum measurements are a bit different than classical measurements. In classical physics, we assume one can measure a quantity without affecting the subject of the measurement. In quantum mechanics, measurements affect the state of the particle. Thus, you can measure the position of a particle, followed by its velocity. However, the act of measuring the position of the particle affects the uncertainty of its velocity. You can then measure its velocity, but this is not the original velocity. Thus, you can never get both the exact position and exact velocity of a particle at the same time (which would permit you to extrapolate its state over time).
{ "domain": "physics.stackexchange", "id": 42178, "tags": "quantum-mechanics, heisenberg-uncertainty-principle" }
compiling rviz on debian
Question: I'm trying to compile rviz on fuerte running debian/amd64 sid. I downloaded the package from the wiki. Using rosmake leads to the following error: $ rosmake visualization [ rosmake ] rosmake starting... [ rosmake ] Packages requested are: ['visualization'] ... cannot find required resource: rviz Where can I download it ? Originally posted by Fabien R on ROS Answers with karma: 90 on 2013-02-24 Post score: 0 Answer: In Fuerte, rviz was part of the visualization stack, before version 1.9.x. Now, it is a separate package. To get the correct version for Fuerte, check out the fuerte branch from git://github.com/ros-visualization/visualization.git. Originally posted by joq with karma: 25443 on 2013-02-24 This answer was ACCEPTED on the original site Post score: 0 Original comments Comment by Fabien R on 2013-02-25: Finally, I found an older version and successfully compiled it, with slight modifications in some source files.
{ "domain": "robotics.stackexchange", "id": 13035, "tags": "ros, rviz, ros-fuerte, compilation, debian" }
Working with two ROS Versions on single system
Question: Hi All, I would like to install two versions of ROS --> Groovy and Hydro (or Groovy and Fuerte for example). Is it fine to work this way or should I have only one version of ROS installed? Has anyone tried this before? Thanks Murali Originally posted by MKI on ROS Answers with karma: 246 on 2013-11-20 Post score: 1 Answer: Yes this is possible. If you install from e.g. Ubuntu packages they will install into /opt/ros/groovy and opt/ros/hydro. Then you simply source the setup.bash file from the distribution you want to use. NB: Be careful not to mix up the versions when you are running your robot. Sometimes it works, but other times it gives you some strange behaviour. Originally posted by Karl Damkjær Hansen with karma: 139 on 2013-11-20 This answer was ACCEPTED on the original site Post score: 5 Original comments Comment by MKI on 2013-11-20: Thank you Karl, I am trying to install groovy, hydro and fuerte. your answer is right. Will try experimenting... regards Murali
{ "domain": "robotics.stackexchange", "id": 16215, "tags": "ros, ros-fuerte, ros-groovy, ros-hydro" }
Dirac Equation and the Klein-Gordon Equation
Question: I am trying to solve an exercise in Halzen and Martin's Quarks and Leptons book and got stuck on doing some math. The Dirac equation reads $$i \gamma^{\mu} \partial_{\mu} \psi - m\psi = 0.$$ Now, I want to act $\gamma^{\nu}\partial_{\nu}$ on both sides. What I have done was: $\gamma^{\nu}\partial_{\nu}$ $\left(i \gamma^{\mu} \partial_{\mu} \psi - m\psi \right)$ = 0 Apply linearity and the product rule to get $\gamma^{\nu} \partial_{\nu} \left(i \gamma^{\mu} \partial_{\mu} \psi\right) - \gamma^{\nu} \partial_{\nu} (m\psi) = 0$ $i\gamma^{\nu} \gamma^{\mu} \partial_{\nu}\left(\partial_{\mu} \psi\right) + i\gamma^{\nu}\partial_{\nu}(\gamma^{\mu})\partial_{\mu}\psi - m\gamma^{\nu} \partial_{\nu} \psi = 0$ I am stuck on simplifying the first two terms. As in the key, they have written it in the following fashion: I am puzzled as to how the quantity in the red box was derived. I do understand the process after this though, where they made use of the anti-commutator relation $\left\{ \gamma^{\mu}, \gamma^{\nu}\right\} = 2g^{\mu \nu}$. Can someone walk me through how to get from the last line of my work to the red box? Answer: It's simple if you think about how you change the summation variables. $\gamma^\nu \gamma^\mu\partial_\mu\partial_\nu=\frac{1}{2}(\gamma^\nu \gamma^\mu\partial_\mu\partial_\nu+\gamma^\nu \gamma^\mu\partial_\mu\partial_\nu)$ This is straightforward I didn't do anything fancy here. Now what you can do is in the second term relabel the variables like this $\mu \rightarrow \nu$ and $\nu\rightarrow \mu$ and then use the fact that partial derivatives commute, so finally one gets: $\gamma^\nu \gamma^\mu\partial_\mu\partial_\nu=\frac{1}{2}(\gamma^\nu \gamma^\mu+\gamma^\mu \gamma^\nu)\partial_\mu\partial_\nu$
{ "domain": "physics.stackexchange", "id": 98412, "tags": "homework-and-exercises, dirac-equation, klein-gordon-equation, dirac-matrices, clifford-algebra" }
wheel encoders, robot_localization and robot_pose_ekf
Question: Hey! I'm just testing out the robot_localization package with our robots. Loving the level of documentation :). However, I realized that it handles the data streams differently from robot_pose_ekf. For instance, robot_pose_ekf expected wheel odometry to produce position data that it then applied differentially, i.e. it took the position estimates at t_k-1 and t_k, transformed the difference to the odom frame, and applied it to the state estimate. However, as per the discussion about yaw velocities here, it seems robot_localization would rather just apply wheel velocities generated by the wheel odometry and generate position information itself. I know robot_localization has a "differential" tag on each dataset but that seems to be removing initial static offsets (i.e. subtracting the position estimate at t_0, not t_k-1). I have two questions: Are my assumptions above correct? Am I losing anything by not doing the integration myself and relying on robot_localization to do the integration? Originally posted by pmukherj on ROS Answers with karma: 21 on 2014-05-20 Post score: 1 Answer: Hi! Yes, you are correct. That parameter is currently a misnomer and I need to correct it (where "correct" means actually carry out differential integration, rather than the static offset treatment I'm currently giving it). It is my sincere hope that you're not losing anything by letting robot_localization carry out the integration, but if you'd prefer to have control over that, then the fix for (1) should facilitate that. Originally posted by Tom Moore with karma: 13689 on 2014-05-20 This answer was ACCEPTED on the original site Post score: 3 Original comments Comment by pmukherj on 2014-05-20: Sweet! No worries, and honestly, how robot_localization handles wheel odometry is probably how it was always "classically" done. It may not be necessary to make a "real" differential option available. Just a note on migration from robot_pose_ekf maybe enough!
Comment by Tom Moore on 2014-05-20: Noted. Perhaps I'll add the behavior of that parameter to the documentation for now and then see how other people feel about it. It sounds like I'm in need of a good migration tutorial.
{ "domain": "robotics.stackexchange", "id": 18006, "tags": "localization, navigation, encoders, robot-pose-ekf, robot-localization" }
Please explain the physics of a Cloud Chamber
Question: A friend of mine was telling me about building a cloud chamber while he was in graduate school. As I understand it, this allows you to "see" interactions caused by high energy particles going through the cloud chamber. This has fascinated me, and I would like to build one with my daughter, but I want to make sure I am able to explain it to her when the eventual questions come. Can someone help me out please? How would I explain a cloud chamber to someone who is a freshman in high school? Answer: Feynman used to say - if you can't explain something in simple words, such that a child could understand, then you don't understand it either. So here's my take: A cloud chamber is nothing more than a box where mist is about to form, but not quite yet. There's vapors of stuff (either alcohol, or water, or something else) in it, and the temperature is such that the vapors are almost about to produce mist (or "clouds"). Imagine wetlands or marshes on a cold autumn morning, it's kind of like that - fill a box with that kind of "cold wet air". Now a charged particle (such as Alpha radiation from a chunk of radioactive ore) zips through the chamber at high speed. It bumps into water (or alcohol) molecules and ionizes them - it creates a trail of ionized molecules marking its path. Now, the vapors are such that they really want to produce mist; any tiny disturbance is enough to push them over the edge. The trail of ionized molecules is enough to do that - the ions attract a bunch of molecules, the resulting clumps attract even more, and before you know it a droplet of water is formed, then another, and another. Voila, a trail of mist follows the particle. I could try to describe the construction, but this Instructables page will do it much better: http://www.instructables.com/id/Make-a-Cloud-Chamber-using-Peltier-Coolers/ Basically, you evaporate some alcohol and let it run over a very cold area (cooled by the Peltier elements). 
Like breath coming out of your mouth in the cold air of winter, the alcohol vapors will tend to produce mist, so some vapors will turn to mist anyway. But the process happens a lot faster when a charged particle zips through the chamber - so, if you place a tiny bit of radioactive material nearby, tiny white trails will seem to come from it and traverse the chamber, because mist tends to form that much better around the ionized trails left by the radiation in its wake. More designs: http://www.lns.cornell.edu/~adf4/cloud.html http://www.nothinglabs.com/cloudchamber/ http://www.bizarrelabs.com/cloud.htm
{ "domain": "physics.stackexchange", "id": 1614, "tags": "particle-physics" }
Voltage sensitive dyes technique: What is the underlying measure?
Question: I just discovered the voltage-sensitive dye technique: first of all, what imaging techniques are used? And I have seen that figures are labeled with ΔF/F0; what does it stand for? Answer: Since these are fluorescent dyes, the most likely method of use is with some sort of fluorescence (likely confocal) microscope. The various types of dyes have different hydrophobic tails, so they target different types of membranes (plasma membrane, mitochondrial membranes, etc.). An excitatory light source (typically a laser, with a single wavelength) is applied to the sample, and the $F_0$ or baseline fluorescence intensity at time zero is measured. When an event happens, such as the contraction of a muscle, the $\Delta F$ or change in fluorescence intensity is measured. This difference is then divided by the baseline, and called the fractional fluorescence (${\Delta F}/{F_0}$). It is plotted like so: Bu G et al. Biophys J. 2009 Mar 18;96(6):2532-46.
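As a concrete illustration of the ΔF/F0 quantity described above (a sketch; the choice of baseline window is my assumption, as labs differ in how F0 is estimated):

```python
import numpy as np

def fractional_fluorescence(trace, baseline_frames=10):
    """Compute dF/F0 for a fluorescence intensity trace.
    F0 is taken as the mean of the first `baseline_frames` samples,
    a common (but not universal) choice of baseline."""
    f0 = np.mean(trace[:baseline_frames])
    return (trace - f0) / f0

# Ten baseline frames at intensity 100, then a transient event:
trace = np.array([100.0] * 10 + [120.0, 150.0, 110.0])
dff = fractional_fluorescence(trace)
```

During the baseline dF/F0 sits at 0, and the event peaks at 0.5, i.e. a 50% increase over the resting fluorescence.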
{ "domain": "biology.stackexchange", "id": 5479, "tags": "neuroscience, lab-techniques, neurophysiology" }
How to calculate the refracted light path when refraction index continuously increasing?
Question: Suppose light is incident from vacuum ($n_1=1.0$) into some medium ($n_2=n_1+\mu\; x^2$) as in the figure below. How to calculate the refracted light path curve in closed form? Update: Try to set up an ordinary differential equation for the refracted light path per Snell's law. Suppose the curve is $y=y(x)$; Snell's law gives $n_i \sin\theta_i=\text{constant}=n_1\sin\alpha=\sin\alpha$. For any point $P:(x_0,y(x_0))$ on the path $y(x)$, we have: $$\tan(\theta_P)=\dfrac{\sin\theta_P}{\cos\theta_P}=y'(x)=\dfrac{\rm{d}y}{\rm{d}x},\quad \text{where }\theta_P \text{ is the incident/refracted angle}$$ Since $\theta_P$ is always an acute angle, we have: $$\dfrac{\sin^2\theta_P}{{1-\sin^2\theta_P}}=y'(x)^2\Rightarrow \sin\theta_P=\dfrac{\pm y'(x)}{\sqrt{1+y'(x)^2}}$$ Clearly $n_P\sin\theta_P=\sin\alpha$, where $n_P=1+\mu x^2$, so we have: $$\left(1+\mu x^2\right)\dfrac{\pm y'(x)}{\sqrt{1+y'(x)^2}}=\sin\alpha\quad\text{with } y(0)=5,\ y'(0)=\tan\alpha$$ Then it becomes how to solve the ODE with a boundary condition. Can the ODE be solved in closed form? Answer: This may (or may not) lead to the same answer as CuriousOne's suggestion above, but the most appropriate (and the longest) way of attempting a solution would be to employ Fermat's principle. The method's nicely described in the link, but in a nutshell, you would be led to a condition of the type $$\delta \int n ds = 0$$ where this $ds$ can be cast in terms of your 2D co-ordinates. Now, substitute for the spatial dependence of $n$ and arrive at $$\delta \int n(x,y) \sqrt{(1+(dy/dx)^2)} dx = 0$$ This is a sort of an ab-initio approach. I won't be surprised if there's a shorter method (maybe CuriousOne's suggestion.)
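For what it's worth, the questioner's ODE can at least be integrated numerically once solved for $y'$: from $(1+\mu x^2)\,y'/\sqrt{1+y'^2}=\sin\alpha$ one gets $y'(x)=\sin\alpha/\sqrt{n(x)^2-\sin^2\alpha}$ (taking the positive branch). A quick sketch; the function name and step count are mine:

```python
import math

def ray_path(alpha, mu, x_end, y0=5.0, steps=10_000):
    """Integrate dy/dx = sin(a) / sqrt(n(x)^2 - sin(a)^2), n(x) = 1 + mu*x^2,
    using the midpoint rule, with y(0) = y0 as in the question."""
    s = math.sin(alpha)
    dx = x_end / steps
    y = y0
    for i in range(steps):
        x_mid = (i + 0.5) * dx
        n = 1.0 + mu * x_mid * x_mid
        y += dx * s / math.sqrt(n * n - s * s)
    return y

# Sanity check: with mu = 0 the medium is uniform, so the ray is straight:
y_end = ray_path(alpha=math.radians(30), mu=0.0, x_end=1.0)
```

With $\mu = 0$ this reproduces the straight line $y = 5 + x\tan\alpha$, and for $\mu > 0$ the ray bends toward the axis of increasing index, ending at a smaller $y$, as expected.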
{ "domain": "physics.stackexchange", "id": 75158, "tags": "optics, refraction, variational-principle, geometric-optics" }
is there any distance measuring sensor in gazebo?
Question: I'm a beginner in Gazebo and robotics. Is there any sensor in Gazebo by which I can find the distance to an object? I used the Hokuyo laser range finder given in the tutorial http://gazebosim.org/wiki/Tutorials/1.3/control_robot/mobile_base_laser. As far as I understood, it sends out a broad laser scan so it detects any object in that area (circumference), but how can I know in which direction an obstacle is present, and how can I navigate my robot to avoid the obstacle? Originally posted by suvrat on Gazebo Answers with karma: 53 on 2013-08-01 Post score: 0 Answer: If you take a look into the hokuyo model's SDF you can see that the laser takes 640 samples (<samples>640</samples>). With the resolution (<resolution>1</resolution>) and the min angle and max angle (<min_angle>-2.26889</min_angle>, <max_angle>2.268899</max_angle>) you have everything you need. Then, for example in a plugin, go through the samples (for this you should take a look at the control model tutorial) and check the distances: if sample 0 detects a low range, then you know that from your sensor direction, at an angle of -2.26889, there is an object. Edit: perhaps it's easier to connect a simple ray sensor to your mobile robot. Then you can control things like angle and samples without editing the hokuyo model. Originally posted by Chris with karma: 67 on 2013-08-02 This answer was ACCEPTED on the original site Post score: 2 Original comments Comment by suvrat on 2013-08-02: thanks @chris i will try this .thank you very much for help. Comment by Chris on 2013-08-02: No problem. If you have further questions, let me know Comment by suvrat on 2013-08-03: thanks a lot @chris ,i solved my problem Comment by anonymous on 2014-11-21: Hey. I am also doing the same work of obstacle avoidance but when i did it with polaris_ranger_model this control model tutorial is not working . Polaris dont stop. Can you plz suggest how will i do it with polaris ranger model so that it should stop when it detects the object. Thanks!
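The index-to-angle bookkeeping from the answer can be written down in a few lines (a sketch using the SDF values quoted above; the function names are mine, not part of any Gazebo API):

```python
def sample_angle(i, samples=640, min_angle=-2.26889, max_angle=2.268899):
    """Beam direction (radians, in the sensor frame) of range sample i,
    assuming samples are spread evenly between min_angle and max_angle."""
    return min_angle + i * (max_angle - min_angle) / (samples - 1)

def nearest_obstacle(ranges):
    """Index, bearing and distance of the closest return in one scan."""
    i = min(range(len(ranges)), key=lambda j: ranges[j])
    return i, sample_angle(i, samples=len(ranges)), ranges[i]

# A fake 5-sample scan: the obstacle is dead ahead (the middle sample).
i, bearing, dist = nearest_obstacle([10.0, 10.0, 0.5, 10.0, 10.0])
```

An avoidance plugin would then steer away from `bearing` whenever `dist` drops below some safety threshold.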
{ "domain": "robotics.stackexchange", "id": 3411, "tags": "gazebo-tutorial" }
Algorithm: ordering non-overlapping intervals
Question: Assume we have a (multi)set of nontrivial intervals $\mathcal{I} = \{I_1,...,I_n\}$ and for any two $I_i, I_j \in \mathcal{I}$, we have that $I_i \cap I_j$ is trivial (that is: contains at most one point), or one of them contains the other. Just to clarify, an example would be $\mathcal{I} = \{[0,10], [2,6],[6,8], [2,4],[2,3],[5,6],[6,7]\}$. For the sake of simplicity, assume there is a longest interval that contains all other intervals, call that the root. We can arrange such (multi)sets as a tree where $I_i$ is a descendant of $I_j$ iff $I_i \subseteq I_j$ (if an interval has multiplicity $> 1$, then its multiple occurrences can form a path in the tree). What I want to do is find such a tree. It's clearly possible in $\mathcal{O}(n^2)$ time: sort the intervals by length in descending order and process them in that order. Whenever we insert the next interval $I_i$ into the tree, find the shortest interval already in the tree $I_j$ such that $I_i \subseteq I_j$; such an $I_j$ must exist since the root contains all other intervals. I would like to do this in $\mathcal{O}(n \log n)$ time and it seems to me that should be possible. Rough ideas include using interval trees and trying to construct the tree from the bottom up starting from its leaves, or figuring out how to annotate search trees to do something like this for me, but nothing successful so far. Answer: You can solve it in $O(n \lg n)$ time using a divide-and-conquer algorithm. Explaining the main idea: a cleaner variant To help me explain the main idea, I'm going to change the problem slightly. I'll assume we're given a set of intervals, with no interval appearing more than once in the set, and where for any pair of overlapping intervals $I,I'$, one of them contains the other (i.e., either $I \subset I'$ or $I' \subset I$). The goal is to output a forest (a collection of trees). 
This variant of the problem is slightly more general, in that we're not assuming we have a single root interval that all others are contained in. It's also slightly more specific in that I'm assuming we don't have the case of a trivial overlap containing a single point. I'll show later in my answer how to remove these extra assumptions, but for now, let's go with this variant of the problem, as it makes explaining the main ideas easier. Main idea: an algorithm for the cleaner variant We'll use divide-and-conquer. Let $x$ denote the median of the $2n$ endpoints of the intervals. This can be computed in $O(n)$ time. Use $x$ to partition the intervals into three collections: The intervals that are wholly to the left of $x$ (i.e., their right endpoint is strictly less than $x$). The intervals that are wholly to the right of $x$ (i.e., their left endpoint is strictly greater than $x$). The intervals that contain $x$. We'll first compute three forests, one forest for each of these three collections. Call these forests $\mathcal{L}$ (the forest of the intervals on the left), $\mathcal{R}$ (for the intervals on the right), and $\mathcal{M}$ (for the intervals that contain $x$). You can compute $\mathcal{L}$ and $\mathcal{R}$ by invoking the algorithm recursively. You can compute $\mathcal{M}$ directly by sorting the intervals that contain $x$ by decreasing width and building a path of the intervals by descending size (each node in $\mathcal{M}$ has exactly one child; it is the next-smallest interval). Now we need to join together the forests $\mathcal{L},\mathcal{R},\mathcal{M}$. Since $\mathcal{M}$ is a path, given an interval $I$, you can find the smallest interval $I' \in \mathcal{M}$ such that $I \subset I'$ by using binary search. Therefore, to join the forests, take the roots of $\mathcal{L}$ and $\mathcal{R}$ and use binary search in $\mathcal{M}$ to find the smallest interval containing it, and add a corresponding child relationship. 
This is all you need to do to join them into a single forest, as there can be no other child relationships between $\mathcal{L},\mathcal{R},\mathcal{M}$. Output the resulting forest. What's the running time of this procedure? Well, it takes $O(n)$ time to partition the intervals into these three collections. You can compute $\mathcal{M}$ in $O(n)$ time (if you've done a single up-front precomputation to sort the intervals by decreasing width). It will take at most $O(n \lg n)$ time to join $\mathcal{L}, \mathcal{R}, \mathcal{M}$ into a single forest, since each binary search operation can be done in $O(\lg n)$ time. With a little more cleverness, you can speed up the merging to run in $O(n)$ time (pre-sort the intervals by their left endpoint once, and by the right endpoint; now you can join $\mathcal{L}$ and $\mathcal{M}$ by doing a merge on the sorted list of intervals of $\mathcal{L}$ and the sorted list of intervals of $\mathcal{M}$, both sorted by their left endpoint; join $\mathcal{R}$ and $\mathcal{M}$ similarly). Therefore, the running time is $O(n)$ plus the time for the two recursive calls. Finally, note that each recursive call is on at most $n/2$ intervals, due to the way we chose $x$ (there can be at most $n/2$ intervals wholly to the left of $x$, as each such interval contributes 2 endpoints to the $2n$ endpoints, and at most $n/2$ intervals wholly to the right of $x$). Therefore, we get the recurrence relation $$T(n) = 2 T(n/2) + O(n),$$ whose solution is $T(n) = O(n \lg n)$. Solving your original problem Your original problem allows a pair of intervals to overlap in a single point, without either one containing the other. This doesn't fundamentally change the problem. You need to generalize the method for computing $\mathcal{M}$ slightly, as it is no longer guaranteed to be a simple path, but this is not hard to do. Also, now $\mathcal{M}$ might have two leaves. 
If so, for each root of $\mathcal{L},\mathcal{R}$, you need to compare it to both leaves to see whether it is contained in either one. This doesn't change the overall running time. Also, your original problem allows some intervals to appear in the input twice. This is easy to accommodate: remove duplicates, count how many times each interval appears in the input, run the algorithm above, and then adjust the tree to account for multiplicities (e.g., by duplicating nodes, or however you would like them to appear in the final output).
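The divide-and-conquer recursion described above can be sketched in Python for the clean variant (distinct intervals, any overlapping pair nested). This is an illustrative sketch with made-up names, not the answer's exact algorithm: it uses the plain binary-search join, so it runs in $O(n \lg^2 n)$ rather than the $O(n \lg n)$ achievable with the merge-based join.

```python
def build_forest(intervals, children=None):
    """Forest of intervals as (roots, children) for the clean variant:
    no duplicates, and any two overlapping intervals are nested."""
    if children is None:
        children = {}
    if not intervals:
        return [], children
    endpoints = sorted(p for iv in intervals for p in iv)
    x = endpoints[len(intervals)]                  # upper median of the 2n endpoints
    left = [iv for iv in intervals if iv[1] < x]   # wholly left of x
    right = [iv for iv in intervals if iv[0] > x]  # wholly right of x
    mid = [iv for iv in intervals if iv[0] <= x <= iv[1]]  # contain x
    # Intervals containing x are totally ordered by containment:
    # sort by decreasing width and chain them into a path.
    mid.sort(key=lambda iv: iv[0] - iv[1])
    for parent, child in zip(mid, mid[1:]):
        children.setdefault(parent, []).append(child)
    lroots, _ = build_forest(left, children)
    rroots, _ = build_forest(right, children)
    roots = [mid[0]]
    for r in lroots + rroots:
        # Binary search for the smallest mid interval containing r
        # (containment along the nested chain is monotone).
        lo, hi = 0, len(mid) - 1
        while lo < hi:
            j = (lo + hi + 1) // 2
            if mid[j][0] <= r[0] and r[1] <= mid[j][1]:
                lo = j
            else:
                hi = j - 1
        if mid[lo][0] <= r[0] and r[1] <= mid[lo][1]:
            children.setdefault(mid[lo], []).append(r)
        else:
            roots.append(r)
    return roots, children
```

On the input {(0,10), (1,4), (1,3), (5,9), (6,8)} this yields the root (0,10) with children (5,9) and (1,4), each carrying its nested child. The endpoint at the upper-median index always belongs to an interval containing x, so the mid chain is never empty and the recursion shrinks on both sides.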
{ "domain": "cs.stackexchange", "id": 4105, "tags": "algorithms, data-structures, trees, intervals, ordering" }
Why does pasta really boil over?
Question: I was making pasta, and I noticed the pasta boiling over. I thought about it some more, and I realized I had no idea why this was happening. When the lid is on, the foam rises. When the lid is off, the foam dies back. Clearly there's some surfactant at work allowing bubbles to form, and the bubbles are mostly full of steam. If you remove the lid, they cool and condense down to a tiny size. This article is honestly just not specific enough. Why is the starch important? (It also happens with potatoes.) How does a polysaccharide, a hydrophilic molecule, become a surfactant (if indeed it does)? Is it phospholipids from the cells that became pasta? Do starch molecules form a polymer and trap steam underneath like a balloon? Is it some kind of hybrid where bubbles are stabilized by viscosity? What's going on here? This question on Seasoned Advice tries to tackle it but gets surface tension wrong (I think), so that makes me a little wary of it. Surfactants like soap reduce the surface tension, allowing bubbles to form by letting water molecules spread out into thin films. They also keep the alveoli in your lungs from collapsing. If starches increased the surface tension, wouldn't that reduce the likelihood of forming bubbles by increasing the energetic penalty? High surface tension materials act like mercury. This answer is all over the internet but doesn't make any sense to me. You need less surface tension for bubbles, surely? If it's just phospholipids acting as a detergent, could I 'boil over' pasta in cold water with a whisk? (I tried this; you can't.) I considered putting this on Seasoned Advice, but I'm not interested in how to prevent boil-over (or really any practical results), and the supplied answer there lacks scientific rigor. Answer: The starch forms a loosely bonded network that traps water vapor and air into a foamy mass, which expands rapidly as it heats up.
Starch is made of glucose polymers (amylopectin is one of them, shown here): Some of the chains are branched, some are linear, but they all have $\ce{-OH}$ groups which can form hydrogen bonds with each other. Let's follow some starch molecules through the process and see what happens. In the beginning, the starch is dehydrated and tightly compacted - the chains are lined up in nice orderly structures with no water or air between them, maximizing the hydrogen bonds between starch polymers: As the water heats up (or as you let the pasta soak), water molecules begin to "invade" the tightly packed polymer chains, forming their own hydrogen bonds with the starch: Soon, the polymer chains are completely surrounded by water, and are free to move in solution (they have dissolved): However, the water/starch solution is not completely uniform. In the middle of the pot of water, the concentration of starch is low compared to water. There are lots and lots of water molecules available to surround the starch chains and to keep them apart. Near the surface, when the water is boiling, the water molecules escape as vapor. This means that near the surface, the local concentration of starch increases. It increases so much as the water continues to boil, that the starch can collapse back in on itself and hydrogen bond to other starch molecules again. However, this time the orderly structure is broken and there is too much thermal motion to line up. Instead, they form a loosely packed network of molecules connected by hydrogen bonds and surrounding little pockets of water and air (bubbles): This network is very weak, but it is strong enough to temporarily trap the air as it expands due to heating - thus, the bubbles puff up and a rapidly growing foam forms. Since they are very weak, it doesn't take much to disrupt them. 
Some oil in the water will inhibit the bubbles from breaking the surface as easily, and a wooden spoon across the top will break the network mechanically as soon as it touches it. Many biomolecules will form these types of networks under different conditions. For example, gelatin is a protein (amino acid polymer) that will form elastic hydrogen-bonded networks in hot water. As the gelatin-water mixture cools, the gel solidifies, trapping the water inside to form what is called a sol-gel, or more specifically, a hydrogel. Gluten in wheat is another example, although in this case the bonds are disulfide bonds. Gluten networks are stronger than hydrogen-bonded polysaccharide networks, and are responsible for the elasticity of bread (and of pasta). DISCLAIMER: Pictures are not remotely to scale; starch is usually several hundred glucose monomers long, and the relative size of the molecules and atoms isn't shown. There aren't nearly enough water molecules: in reality there would be too many to be able to see the polymer (1,000's). The starch molecules aren't "twisty" enough or showing things like branching; the real network structure and conformations in solution would be much more complicated. But, hopefully you get the idea!
{ "domain": "chemistry.stackexchange", "id": 4132, "tags": "everyday-chemistry, food-chemistry" }
How to avoid error when applying certain combinations of degree of freedom rotations using a quantum circuit?
Question: When applying each of the six degree of freedom rotations (or certain combinations of them) in an SO(4) using quantum gates, the results I expected are produced. For example, the following circuit in Craig Gidney's Quirk tool demonstrates rotations in three degrees of freedom, along with some displays of the resulting matrices expressed as percentages: However, when applying some combinations of rotations, such as the following, results I didn't expect are produced in the final matrix: In contrast, the results I am expecting are the following: $$ \begin{bmatrix} .73 & .07 & .13 & .07 \\ .00 & .73 & .15 & .13 \\ .13 & .07 & .73 & .07 \\ .15 & .13 & .00 & .73 \end{bmatrix} $$ For convenience, here is a link to the Quirk circuit with all six degree of freedom rotations, albeit with an unexpected final result. The results I expect are the following: $$ \begin{bmatrix} .62 & .01 & .08 & .29 \\ .11 & .80 & .01 & .08 \\ .13 & .07 & .80 & .01 \\ .15 & .13 & .11 & .62 \end{bmatrix} $$ I don't know enough about using ancilla bits and uncomputation techniques to apply them to this, but I suspect that it might explain part of the unexpected results. Any advice would be greatly appreciated. Answer: Since you haven't told us how you've tried to do the calculation, I don't know where you're making the mistake. (I'm also unfamiliar with Quirk, which seems to be using an unusual ordering of basis elements in the output matrix. If anything looks inconsistent in the following answer, try swapping the middle two rows/columns, and adding a transpose!) The first important thing is to not use the percentage values in the transition matrices. These correspond to probabilities, but to do any further work, we need to know about probability amplitudes. 
So, the unitary output of your first sequence of gates is $$ \left( \begin{array}{cccc} \frac{\sqrt{2+\sqrt{2}}}{2} & \frac{1}{4} \left(-2+\sqrt{2}\right) & 0 & -\frac{i}{2 \sqrt{2}} \\ 0 & \frac{1}{4} \left(2+\sqrt{2}\right) & -\frac{1}{2} i \sqrt{2-\sqrt{2}} & -\frac{i}{2 \sqrt{2}} \\ 0 & -\frac{i}{2 \sqrt{2}} & \frac{\sqrt{2+\sqrt{2}}}{2} & \frac{1}{4} \left(-2+\sqrt{2}\right) \\ -\frac{1}{2} i \sqrt{2-\sqrt{2}} & -\frac{i}{2 \sqrt{2}} & 0 & \frac{1}{4} \left(2+\sqrt{2}\right) \\ \end{array} \right) $$ Now we can apply the final sequence of gates; an $X$ on qubit 1, a controlled-$Y^{1/4}$ and another $X$ on qubit 1. You get the output unitary $$ \left( \begin{array}{cccc} \frac{1}{4} \left(2+\sqrt{2}\right) & -\frac{1}{2} \sqrt{1-\frac{1}{\sqrt{2}}} & -\frac{i}{2 \sqrt{2}} & -\frac{1}{2} i \sqrt{\frac{1}{2} \left(2-\sqrt{2}\right)} \\ 0 & \frac{1}{4} \left(2+\sqrt{2}\right) & -\frac{1}{2} i \sqrt{2-\sqrt{2}} & -\frac{i}{2 \sqrt{2}} \\ -\frac{i}{2 \sqrt{2}} & -\frac{1}{2} i \sqrt{\frac{1}{2} \left(2-\sqrt{2}\right)} & \frac{1}{4} \left(2+\sqrt{2}\right) & -\frac{1}{2} \sqrt{1-\frac{1}{\sqrt{2}}} \\ -\frac{1}{2} i \sqrt{2-\sqrt{2}} & -\frac{i}{2 \sqrt{2}} & 0 & \frac{1}{4} \left(2+\sqrt{2}\right) \\ \end{array} \right) $$ The mod-square of each element is then $$ \left( \begin{array}{cccc} \frac{1}{16} \left(2+\sqrt{2}\right)^2 & \frac{1}{8} \left(2-\sqrt{2}\right) & \frac{1}{8} & \frac{1}{8} \left(2-\sqrt{2}\right) \\ 0 & \frac{1}{16} \left(2+\sqrt{2}\right)^2 & \frac{1}{4} \left(2-\sqrt{2}\right) & \frac{1}{8} \\ \frac{1}{8} & \frac{1}{8} \left(2-\sqrt{2}\right) & \frac{1}{16} \left(2+\sqrt{2}\right)^2 & \frac{1}{8} \left(2-\sqrt{2}\right) \\ \frac{1}{4} \left(2-\sqrt{2}\right) & \frac{1}{8} & 0 & \frac{1}{16} \left(2+\sqrt{2}\right)^2 \\ \end{array} \right). 
$$ Numerically, these are the same as given in the question: $$ \left( \begin{array}{cccc} 0.729 & 0.0732 & 0.125 & 0.0732 \\ 0 & 0.729 & 0.146 & 0.125 \\ 0.125 & 0.0732 & 0.729 & 0.0732 \\ 0.146 & 0.125 & 0 & 0.729 \\ \end{array} \right) $$
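As a quick numerical check (an illustrative numpy sketch; the matrix below is the answer's final unitary typed in by hand), squaring the moduli of the amplitudes reproduces the probability matrix above:

```python
import numpy as np

s = np.sqrt(2)
a = (2 + s) / 4
b = -np.sqrt(1 - 1 / s) / 2
c = -1j / (2 * s)
d = -1j * np.sqrt((2 - s) / 2) / 2
e = -1j * np.sqrt(2 - s) / 2

# The final unitary from the answer, entered element by element
U = np.array([[a, b, c, d],
              [0, a, e, c],
              [c, d, a, b],
              [e, c, 0, a]])

print(np.round(np.abs(U)**2, 3))   # mod-squares, i.e. transition probabilities
```

This also makes the answer's first point concrete: the percentages are only the mod-squares, and further composition must be done on the amplitudes themselves.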
{ "domain": "quantumcomputing.stackexchange", "id": 165, "tags": "quantum-gate, circuit-construction, quirk" }
Reverse binary representation of an int (only significant bits)
Question: This is based on this question in StackOverflow. The accepted answer uses Convert.ToString(int, base) to get the binary string, reverses it and converts it back to int. Nice trick! I have very little experience with bit twiddling (using Flags is about it :p ) so I decided to try and code the solution to this problem without using Convert. I came up with the following solution. I'd like to know easier ways to do this as there are probably many much better than the one I found. public static IEnumerable<bool> ToBinary(this int n) { for (int i = 0; i < 32; i++) { yield return (n & (1 << i)) != 0; } } public static int ToInt(this IEnumerable<bool> b) { var n = 0; var counter = 0; foreach (var i in b.Trim().Take(32)) { n = n | (i ? 1 : 0) << counter; counter++; } return n; } private static IEnumerable<bool> Trim(this IEnumerable<bool> list) { bool trim = true; foreach (var i in list) { if (i) { trim = false; } if (!trim) { yield return i; } } } And now you'd use it like this: var reversed = n.ToBinary().Reverse().ToInt(); Answer: You could use a while loop instead of the for loop. Thus you don't need to trim zero bits, because it will continue to loop only while there is at least one significant bit in the value. Another advantage of this approach is that the result can be calculated using integer arithmetic only, without using any kind of collections. static int Reverse(int input) { uint x = unchecked((uint)input); uint y = 0; while (x != 0) { y <<= 1; // Shift accumulated result left y |= x & 1; // Set the least significant bit if it is set in the input value x >>= 1; // Shift input value right } return unchecked((int)y); } Usage example: int t = 0x103; Console.WriteLine(Convert.ToString(t, 2)); Console.WriteLine(Convert.ToString(Reverse(t), 2)); Result: 100000011 110000001
{ "domain": "codereview.stackexchange", "id": 15874, "tags": "c#, integer" }
What are $S$ and $L$?
Question: In a helium atom, the total spin of the electrons is 0, suggesting that the total spin quantum number S is the sum of the ms quantum numbers (1/2-1/2=0). However, many sources say that the total spin quantum number S is the sum of the s quantum numbers, so in helium S should be 1 (s is always 1/2 for electrons). In much the same way, I've seen some sources say that L is the sum of the ml quantum numbers and others say that L is the sum of the l quantum numbers. So, which is correct? Answer: Both $L$ and $S$ are quantum angular momenta. Angular momentum is different in quantum mechanics (QM). Almost everything is different in QM. In QM, the specification of an angular momentum is complete only if you give two numbers: the "angular momentum" and its "third component" $M$. So you give the pairs $|l,\ m_l\rangle$ and $|s,\ m_s\rangle$. So, any angular momentum is defined by those two numbers. But those numbers behave very differently. The third component is additive. You add $m$'s without problem: total $m = m_1+m_2$. On the other hand, the $l$'s behave very differently. When we add two angular momenta, the total angular momentum (let's call it $J$) is different: it can go from $|L-S|$ to $L+S$. And the thing is that both operations are called a "sum"; that's the reason for the confusion. When you add two angular momenta (vectors), $\vec{L}+\vec{S}=\vec{J}$. That's okay, but this is the same as saying $$|l,\ m_l\rangle \ + \ |s,\ m_s\rangle \ = \ |j,\ m_j\rangle $$ And the total $m$ adds normally: $m_j=m_l+m_s$, okay. But $j$ is different. $j$ can go from $|l-s|$ to $l+s$ in unit steps. In all angular momenta, as you know, $m$ can go from $-l$ to $+l$. And also $m_s$ from $-s$ to $+s$, and $m_j$ from $-j$ to $+j$. So, if you have $s=½$, you can have $m_s=\pm ½$. But if you add two spins, $s_1+s_2$, the resulting $j$ can be 0 or 1. For $j=0$, $m_j$ can only be 0 as well. But for $j=1$, $m_j$ can have three values: $m_j \in \{-1,0,1\}$. So, "summing two spins" can mean that the total spin is "J=1" but "M=0".
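The two different "sums" can be made concrete in a few lines of Python (hypothetical helper functions, not from the answer): the third components add arithmetically, while the allowed totals run from $|j_1-j_2|$ to $j_1+j_2$ in unit steps.

```python
def add_third_components(m1, m2):
    # m-values are additive: the total m is just the arithmetic sum
    return m1 + m2

def allowed_total_j(j1, j2):
    # Coupling two angular momenta j1 and j2: the allowed totals run
    # from |j1 - j2| up to j1 + j2 in unit steps.
    values = []
    j = abs(j1 - j2)
    while j <= j1 + j2:
        values.append(j)
        j += 1
    return values
```

For two electron spins, allowed_total_j(0.5, 0.5) gives [0.0, 1.0], the singlet and the triplet, while the helium ground state has total m_s = 0.5 + (-0.5) = 0.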
{ "domain": "physics.stackexchange", "id": 63991, "tags": "angular-momentum, atomic-physics, quantum-spin, terminology, notation" }
Ros1_bridge can't mapping my msgs..... HELP
Question: I installed ros1_bridge with apt install ros-dashing-ros1-bridge and referred to https://github.com/ros2/ros1_bridge/blob/master/doc/index.rst. I made 'my_mapping_rules.yaml' <my_mapping_rules.yaml> - ros1_package_name: 'tut_msgs' ros1_message_name: 'Num' ros2_package_name: 'tut_msgs' ros2_message_name: 'Num' fields_1_to_2: data: 'data' #/home/freesun/ros2_ws/src/tut_msgs/msg/Num.msg ros2_workspace #/home/freesun/catkin_ws/src/tut_msgs/msg/Num.msg ros1_workspace and placed it in /home/freesun/ros2_ws/src/tut_msgs. Finally, I modified the message package's CMakeLists.txt and package.xml. But it isn't working; I can't map my messages! What's the problem? Originally posted by Freesun on ROS Answers with karma: 3 on 2020-03-19 Post score: 0 Answer: installed ros1_bridge with apt install ros-dashing-ros1-bridge This is the reason. The bridge provided in ros-dashing-ros1-bridge has no knowledge of your messages. You need to build a custom bridge from source. If you follow the steps of the document you link to, https://github.com/ros2/ros1_bridge/blob/master/doc/index.rst#how-does-the-bridge-know-about-custom-interfaces, it will guide you through how to set up workspaces, build your messages, and build a bridge capable of bridging your messages. Originally posted by marguedas with karma: 3606 on 2020-03-19 This answer was ACCEPTED on the original site Post score: 0 Original comments Comment by Freesun on 2020-03-19: oh! i got it! i'll do that!! thx :) Comment by Freesun on 2020-03-19: umm.... i faced another problem! when i run all nodes, ros1_bridge suddenly shuts down.
here is error message : $ ros2 run ros1_bridge dynamic_bridge created 2to1 bridge for topic '/rosout' with ROS 2 type 'rcl_interfaces/msg/Log' and ROS 1 type 'rosgraph_msgs/Log' created 1to2 bridge for topic '/my_topic' with ROS 1 type 'tut_msgs/Num' and ROS 2 type 'tut_msgs/msg/Num' [INFO] [ros_bridge]: Passing message from ROS 1 tut_msgs/Num to ROS 2 tut_msgs/msg/Num (showing msg only once per type) realloc(): invalid pointer Comment by marguedas on 2020-03-19: this is unfortunate Are you using CycloneDDS by any chance ? Did you try looking for a similar issue on the ros1_bridge repository ? This looks pretty similar to your issue https://github.com/ros2/ros1_bridge/issues/244 Comment by Freesun on 2020-03-19: okay... i'll search more issues! thank u ;) Comment by Freesun on 2020-03-20: Hello!! I solved it when i install cyclone dds!! Comment by marguedas on 2020-03-20: glad you can now bridge your messages. As your problem is solved, could you please close the question or, if it answered your original question, accept the proposed answer by clicking the checkmark on the left. This will allow to remove your question from the unanswered list
{ "domain": "robotics.stackexchange", "id": 34613, "tags": "ros, ros2, yaml" }
Reference request for symmetry breaking Hartree-Fock
Question: Stationary mean-field solutions break symmetries of the many-body Hamiltonian in favour of lowering the energy, e.g. translational or rotational symmetry, despite $[H,P]=0$, or $[H,L_z]=0$, respectively. This is equally true for bosons - where the Hartree ansatz leads to the Gross-Pitaevskii equation - as it is for fermions. I can trace back symmetry breaking Hartree ansatz solutions for bosons to Comparison between the exact and Hartree solutions of a one-dimensional many-body problem, F. Calogero and A. Degasperis, Phys. Rev. A 11, 265 (1975). I am also well aware that the symmetry breaking GP solutions in that work were known before. But I am sure that there must be references from a lot earlier, say about 1930 for electrons in molecules, because Hartree-Fock was invented in 1927. So, since when has it been known in atomic/molecular physics that Hartree-Fock breaks (continuous or discrete) symmetries? Does anyone know a reference? Answer: It appears that the first explicit statement about the symmetry-breaking of Unrestricted Hartree-Fock (UHF) was by P. O. Löwdin in Discussion on The Hartree-Fock Approximation, P. Lykos and G. W. Pratt, Rev. Mod. Phys. 35, 496 – Published 1 July 1963. Löwdin writes: "I would like to comment on some peculiarities with respect to the symmetry properties. ... Confusion may arise from the fact that the exact eigenfunction $\Psi$ and the approximate eigenfunction in the form of a Slater determinant may have rather different properties. For instance, if $\Lambda$ is a normal constant of motion satisfying the relation $H\Lambda = \Lambda H$, then every eigenfunction to $H$ is automatically an eigenfunction to $\Lambda$ or (in the case of a degenerate energy level) may be chosen in that way, so that $H\Psi = E\Psi$ $\Lambda\Psi = \lambda\Psi$ ...
On the other hand, if one drops the symmetry constraint and considers only the relation $\delta \langle D\vert H\vert D\rangle=0$ one obtains a nonrestricted Hartree-Fock scheme, and the solution $D$ corresponding to the absolute minimum has now usually lost its eigenvalue property with respect to $\Lambda$, i.e., the corresponding Hartree-Fock functions are no longer symmetry-adapted.... In my opinion, the Hartree-Fock scheme based on a single Slater determinant $D$ is in a dilemma with respect to the symmetry properties and the normal constants of motion $\Lambda$. The assumption that $D$ should be symmetry-adapted or an eigenfunction to $H$ leads to an energy $\langle H\rangle $ high above the absolute minimum, and the energy difference amounts to at least 1 eV per electron pair and more. In the sense of Eckart's criterion [C. Eckart, Phys. Rev. 36, 877 (1930); B. A. Lengyel, J. Math. Analysis Appl. 5, 451 (1962)], the absolute minimum of $\langle D\vert H\vert D\rangle$ leads certainly to a better wave function $D$, but the symmetry properties are now lost and the determinant is a "mixture" of components of various symmetry types."
{ "domain": "physics.stackexchange", "id": 52613, "tags": "quantum-mechanics, resource-recommendations, symmetry-breaking, quantum-chemistry" }
What happens if a current flows through a wire wrapped around a magnetised material?
Question: A cylinder of radius $a$ has a uniform magnetisation along its axis, which results in zero bound volume current density and a bound surface current density $M\hat{\phi}$. A coil with $N$ turns, wire resistance $R$ and a given wire radius is wrapped around the cylinder in the form of a solenoid with radius $b>a$. When the magnetised cylinder is flipped, a charge $q$ flows through the wire in the coil. What happens in this situation, ignoring the self-inductance of the wire? (It was a question in my exam today asking to compute the new magnetisation of the cylinder that I couldn't do and am intrigued - before this was asked I had calculated the $\vec{H}$ and $\vec{B}$ fields everywhere.) My initial thoughts were that if there is a charge flowing into the wire then there is a current flowing through it, since the flow of charge is what current is. This can then be modelled as a solenoid with a magnetic field inside of $\vec{B}=\mu_0nI\hat{z}$. While the charge is flowing into the wire, would there then be a time-dependent current increasing with time? I was thinking of computing the change in flux and thus the induced emf inside the solenoid, and then somehow combining this with what I had already discovered about the magnetised cylinder. Could someone please take me through the physics of what is going on in this situation? Answer: Well, you are right in expecting an effect on the wire every time there is a change in the field. The magnetic induction phenomenon will occur with changes in the magnetic flux through the solenoid, which depends on both the intensity of the magnetic field and the time in which the change occurs. Now, when you say "flip the magnetised cylinder": if you mean a sudden change, it would be the unrealistic case in which infinite forces in the wire move the charges and rip it apart.
If you mean a change with time in a known way, then the forces inside the coil will change in proportion to the derivative of B (since the area of the solenoid remains constant), and the current in the coil behaves exactly as in a simple circuit driven by the corresponding time-dependent voltage, with R representing the coil resistance and L representing the inductance of the coil. Also, if the ends of the coil are not connected, charge will accumulate in them, and they will further behave like a capacitor C, in which case the system would be analogous to a simplified RLC circuit.
{ "domain": "physics.stackexchange", "id": 27758, "tags": "electromagnetism, magnetic-fields, inductance" }
Kirchhoff's law of thermal radiation states that if an object emits at a wavelength then it also absorbs at the same wavelength
Question: What if the object is completely red, then when I heat this object would it emit a blackbody spectrum except for the red color? Answer: The answer is yes - to the extent that the heated object approximates a blackbody at all. Thought experiment time. Suppose you have your "red" object - which I will take to be an object that reflects all red light that is incident upon it, but absorbs everything else. Now put it inside a blackbody cavity at some temperature and leave it to reach an equilibrium. Since it cannot absorb "red" radiation, it must achieve an equilibrium only through absorbing blackbody radiation in other parts of the spectrum. But if it reaches an equilibrium, it must radiate just as much energy in total as it absorbs. The obvious solution is that it emits blackbody radiation itself, at all wavelengths apart from red. But how could we test that - because another solution might be that it emits less than a blackbody at other-than-red wavelengths, whilst emitting some radiation in the red part of the spectrum? Simple - we wrap the object in a filter which blocks red light. This does NOT change the amount of radiation absorbed at all. With the filter around it, the object cannot lose any heat by radiating in the red part of the spectrum. Nevertheless it will reach a thermal equilibrium with the cavity - indicating that all the emitted radiation is coming out in the other-than-red wavelengths. OK, but maybe the emitted radiation isn't a blackbody spectrum? Simple - just try it again, but this time leave the red filter on and wrap another filter around it that blocks out some other waveband. This reduces the amount of absorbed radiation by exactly the amount of blackbody radiation in that filtered waveband. Nevertheless, the object will still reach thermal equilibrium with the cavity, indicating that the radiation it emits has also been reduced by exactly the same amount. i.e. 
The emitted radiation in any waveband is the same as the absorbed radiation. Kirchhoff's Law.
{ "domain": "physics.stackexchange", "id": 100490, "tags": "thermal-radiation" }
A proof of a statement of the parallel axis theorem
Question: Now the proof for the parallel axis theorem is fairly easy to follow, but I couldn't understand a part which says, $$\int \vec{r}_c \, dm = \vec{0}.$$ This sums up the product of each particle's mass and its perpendicular distance from the axis passing through the centre of mass. Can somebody please explain how this is true? Answer: Let us say you want to define the center of mass as the point such that, if a rigid body is rotating about it with some arbitrary rotational velocity $\boldsymbol{\omega}$, then the summation of the translational momentum of all the particles on the body is zero. Place the origin at the center of mass and follow these steps: The kinematics of each particle $i$ located at $\boldsymbol{r}_i$ is $$ \boldsymbol{v}_i = \boldsymbol{\omega} \times \boldsymbol{r}_i$$ The momentum of each particle with mass $m_i$ is $$ \boldsymbol{p}_i = \boldsymbol{\omega} \times m_i \boldsymbol{r}_i$$ The total momentum is $$ \boldsymbol{p} = \sum_i ( \boldsymbol{\omega} \times m_i \boldsymbol{r}_i ) = \boldsymbol{\omega} \times \sum_i m_i \boldsymbol{r}_i$$ To make the total momentum zero you need $ \sum_i m_i \boldsymbol{r}_i = 0 $. The interpretation of this definition of the center of mass is that it is located at the mass-weighted average of all the particle positions.
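The defining property $\sum_i m_i \boldsymbol{r}_i = 0$ (positions measured from the centre of mass) is easy to verify numerically; this is just an illustrative numpy sketch, not part of the proof:

```python
import numpy as np

rng = np.random.default_rng(0)
m = rng.uniform(1.0, 5.0, size=10)    # particle masses
r = rng.normal(size=(10, 3))          # particle positions (lab frame)

r_com = (m[:, None] * r).sum(axis=0) / m.sum()   # centre of mass
r_c = r - r_com                                  # positions relative to the COM

print((m[:, None] * r_c).sum(axis=0))  # ~ [0, 0, 0] up to round-off
```

Whatever the masses and positions, the mass-weighted sum of the COM-relative positions vanishes by construction, which is exactly the integral statement in the question.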
{ "domain": "physics.stackexchange", "id": 92007, "tags": "rotational-dynamics, reference-frames, integration, moment-of-inertia" }
Classical action of the simple harmonic oscillator
Question: I have been calculating the classical action of the harmonic oscillator; the problem I have is that I am only able to solve it if I set the integration limits of the action integral to be $t=T$ and $t=0$. I have looked online and this seems to be the solution. What I don't understand is why we can set $t=0$ as one of the limits; for a general treatment, do we not have to set the limits as an arbitrary $t_b$ and $t_a$? Answer: If there is no external force with explicit time dependence, then the harmonic oscillator contains no explicit time dependence. Then the system has time translation symmetry, i.e. the result can only depend on the difference $T = t_b - t_a$, not on $t_a$ and $t_b$ individually.
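For reference, carrying out the integral with general limits $t_a$ and $t_b$ gives the standard result (see e.g. Feynman and Hibbs), which indeed depends on the endpoints only through the difference $T$:

```latex
S_{\mathrm{cl}} = \frac{m\omega}{2\sin(\omega T)}
\left[ \left(x_a^2 + x_b^2\right)\cos(\omega T) - 2\, x_a x_b \right],
\qquad T = t_b - t_a .
```

Setting $t_a = 0$ is therefore no loss of generality; it just fixes the origin of time.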
{ "domain": "physics.stackexchange", "id": 17865, "tags": "lagrangian-formalism, symmetry, harmonic-oscillator, action" }
[rospack] couldn't find package [rviz]
Question: After following the install instructions here http://www.ros.org/wiki/openni_kinect rviz has disappeared! It worked before. :~$ rosrun rviz rviz [rospack] couldn't find package [rviz] :~$ roscd rviz roscd: No such package 'rviz' Originally posted by manos on ROS Answers with karma: 11 on 2011-05-09 Post score: 1 Answer: My guess is that your ROS_PACKAGE_PATH is missing the stacks directory. It should read something like: "other stuff":/opt/ros/diamondback/stacks Originally posted by dornhege with karma: 31395 on 2011-05-09 This answer was ACCEPTED on the original site Post score: 0
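A sketch of the fix (the exact install path may differ on your machine): append the stacks directory to the package path before running rviz.

```shell
# Append the diamondback stacks directory to the ROS package path
# (adjust the path if your install lives elsewhere)
export ROS_PACKAGE_PATH=$ROS_PACKAGE_PATH:/opt/ros/diamondback/stacks
```

Afterwards `roscd rviz` should resolve again; putting the export in your shell startup file makes it persistent.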
{ "domain": "robotics.stackexchange", "id": 5541, "tags": "rviz" }
Modeling Vehicle Acceleration - Simulink
Question: Hello, I am modeling a vehicle that has a separately excited DC motor as the power plant, utilizing the constant-torque and constant-power regions for traction. The acceleration is governed by the following equation $$ M \frac{dv}{dt} = F_t -(\beta_1+\beta_2v^{2}+\beta_3v) $$ The problem with my model is that the $\beta_1$ component, the rolling resistance, is a constant value and causes the velocity to go negative for the first 0.7 seconds when the model is executed. Any suggestion on how to fix this issue? Answer: If this is the case, the equation either needs a domain readjustment (so, valid for $t\in[0.7,\infty)$) or an equation redefinition. Using a piecewise function would look like: $$ M \frac{dv}{dt} = F_t -(\beta_1+\beta_2v^{2}+\beta_3v), \quad t\in[0.7,\infty) $$ $$ M \frac{dv}{dt} = 0, \quad t\in(0,0.7) $$ Friction is just a reactionary force. Rolling resistance can be modelled as an example of dynamic friction. Hence, until motion occurs, $$F_t = \beta_1$$ Then, you should get better-looking profiles. Another way to do this would be to have a criterion where the reactionary $\beta_1$ is at most as big as $F_t$. Therefore, $$ M \frac{dv}{dt} = F_t -(\beta_1+\beta_2v^{2}+\beta_3v), \quad t\in[0,\infty) $$ where $$ \beta_1 = F_t, \quad F_t\in[0,\beta_{max}] $$ $$ \beta_1 = \beta_{max}, \quad F_t\in[\beta_{max},\infty) $$ Here, $\beta_{max}$ is your initial value of $\beta_1$.
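The "reactionary $\beta_1$" idea can be sketched with a forward-Euler integration in Python (an illustrative stand-in for the Simulink block diagram; all parameter values are made up):

```python
import numpy as np

def simulate(F_t, M=1500.0, b1=200.0, b2=0.4, b3=5.0, dt=0.01, t_end=10.0):
    """Forward-Euler sketch of M dv/dt = F_t - (b1 + b2*v**2 + b3*v).
    The rolling resistance b1 is treated as reactionary: while the
    vehicle is at rest it can only be as large as the tractive force,
    so the velocity never goes negative. All parameter values are
    made up for illustration."""
    v, vs = 0.0, []
    for _ in np.arange(0.0, t_end, dt):
        roll = min(b1, F_t) if v <= 0.0 else b1   # reactionary friction cap
        a = (F_t - (roll + b2 * v**2 + b3 * v)) / M
        v = max(v + a * dt, 0.0)                  # clamp: no backward creep
        vs.append(v)
    return np.array(vs)
```

With a tractive force below the breakaway value the vehicle simply stays at rest; above it, the velocity rises monotonically toward its terminal value instead of dipping negative at start-up.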
{ "domain": "engineering.stackexchange", "id": 3892, "tags": "simulink" }
Factor of $1/2$ in $TT$-OPE
Question: I'm trying to calculate the TT OPE in a bosonic theory. I'm missing a factor of 1/2 in the least-singular term. We have (following Di Francesco) $$\langle \partial \phi(z) \partial \phi(0) \rangle = \frac{-1}{4 \pi g}\frac{1}{z^2}$$ and $$T(z) = -2 \pi g \, \colon \partial \phi \partial \phi \, \colon.$$ Performing Wick contractions, I'm able to obtain $$T(z) T(0) = \frac{1/2}{z^4} - \frac{4 \pi g}{z^2}\, \colon \partial\phi(z) \partial \phi(0) \, \colon .$$ Now I want to expand the last normal-ordered term around $0$, but whereas I would write $$\colon \partial\phi(z) \partial \phi(0) \, \colon = \, \colon \partial\phi(0) \partial \phi(0) \, \colon + z \, \partial \, \colon \partial\phi(0) \partial \phi(0) \, \colon$$ and obtain the incorrect OPE $$T(z) T(0) = \frac{1/2}{z^4} + \frac{2 T(0)}{z^2} + \frac{2 \partial T(0)}{z},$$ the correct answer is actually $$T(z) T(0) = \frac{1/2}{z^4} + \frac{2 T(0)}{z^2} + \frac{ \partial T(0)}{z}.$$ I must be Taylor-expanding the normal-ordered product incorrectly. Can someone walk me through the steps of doing it the right way? Answer: When you expand the normal ordered term, you have \begin{align} :\partial \phi(z) \partial \phi(0): &= :[ \partial \phi(0) + z \partial^2 \phi(0) ]\partial \phi(0): \\ &= : \partial \phi(0) \partial \phi(0): + z : \partial^2 \phi(0) \partial \phi(0): \\ &= : \partial \phi(0) \partial \phi(0): + \frac{z}{2} \partial \left( :\partial \phi(0) \partial \phi(0): \right) \\ &= T(0) + \frac{z}{2} \partial T(0). \end{align}
{ "domain": "physics.stackexchange", "id": 59714, "tags": "homework-and-exercises, operators, string-theory, conformal-field-theory, wick-theorem" }
Sharpen Defocused Image (Deconvolution / Image Restoration)
Question: Using OCR, I want to extract text from product packages using Google Glass. However, because of the fixed focus of the camera, the package pictures are blurred. Is there a way to sharpen the image? Currently I use unsharp masking to enlarge the gradients of the edges, which gives me OK results. Is there a better way to do this? I thought about taking a picture of a point-spread function and using this to deconvolve, but I doubt this gives me a good approximation of the distortion kernel. Answer: You're after an algorithm in the family of "deconvolution"; specifically, your case is called blind deconvolution. Yet if you have some assumptions about the blur, you can use a Wiener filter or Lucy-Richardson. Both of them are actually MMSE estimators, just with different assumptions about the noise. Both of these methods are effectively an "inverse filter" of a low-pass filter, which means they are high-pass filters, just like the unsharp filter you applied. The difference is that those methods are optimal per a given "blur model". Yet, if you go for blind deconvolution, today the best estimators are based on some prior on the image, something like the distribution of the image gradient. But if you are only after the text in the image, something simple like a Wiener filter + edge detection should do the job and give you most of the data.
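As a hedged sketch of the Wiener filter mentioned above: a 1-D NumPy toy example, assuming the blur kernel is known and using a hand-picked noise-to-signal constant K. In a real deblurring pipeline you would work in 2-D and estimate K or the noise spectrum rather than guess it.

```python
import numpy as np

def wiener_deconvolve(blurred, kernel, K=1e-3):
    """Frequency-domain Wiener filter: W = H* / (|H|^2 + K)."""
    n = len(blurred)
    H = np.fft.fft(kernel, n)
    G = np.fft.fft(blurred)
    W = np.conj(H) / (np.abs(H)**2 + K)
    return np.real(np.fft.ifft(W * G))

# demo: blur a step signal with a small Gaussian kernel, then restore it
x = np.zeros(64)
x[20:40] = 1.0
k = np.exp(-0.5 * (np.arange(-3, 4) / 1.5)**2)
k /= k.sum()
blurred = np.real(np.fft.ifft(np.fft.fft(k, 64) * np.fft.fft(x)))
restored = wiener_deconvolve(blurred, k)
```

The restored signal is much closer to the original step than the blurred one, which is the same sharpening effect that helps OCR on defocused text.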
{ "domain": "dsp.stackexchange", "id": 2569, "tags": "image-processing, convolution, deconvolution, ocr, inverse-problem" }
How does the decision tree implicitly do feature selection?
Question: I was talking with an ex-fellow worker and he told me that the decision tree implicitly applies a feature selection. He told me that the most important feature is higher in the tree because of the usage of information gain criteria. What does he mean by this, and how does it work? Answer: Consider a dataset $S \in \mathbb{R}^{N \times (M + 1)}$ with $N$ observations (or examples), where each observation $S_i \in \mathbb{R}^{M + 1}$ is composed of $M + 1$ elements: one value for each of the $M$ features (or independent variables), $f_1, \dots f_M$, and the corresponding target value $t_i$. A decision tree algorithm (DTA), such as the ID3 algorithm, constructs a tree, such that each internal node of this tree corresponds to one of the $M$ features, each edge corresponds to one value (or range of values) that such a feature can take on and each leaf node corresponds to a target. There are different ways of building this tree, based on different metrics to choose the features for each internal node and based on whether the problem is classification or regression (so based on whether the targets are classes or numeric values). For example, let's assume that the features and the target are binary, so each $f_k$ can take on only one of two possible values, $f_{k} \in \{0, 1\}, \forall k$, and $t_i \in \{0, 1\}$ (where the index $i$ corresponds to the $i$th observation, while the index $k$ corresponds to the $k$th column or feature of $S$). In this case, a DTA first chooses (based on some metric, for example, the information gain) one of the $M$ features, for example, $f_j$, to associate it with the root node of the tree. Let's call this root node $f_j$. Then $f_j$ will have two branches, one for each of the binary values of $f_j$. If $f_j$ were a ternary variable, then the node corresponding to $f_j$ would have three branches, and so on. The DTA recursively chooses one of the remaining features for each node of the child branches.
The DTA does this until all features have already been selected. In that case, we will have reached a leaf node, which will correspond to one value of the target variable. When the DTA chooses a feature for a node, all observations of the dataset $S$ that take on the first binary value of that feature will go in the branch corresponding to that value and all other observations will go in the other branch. So, in this way, the DTA splits the dataset based on the features. The following diagram represents a final decision tree built by a DTA. You can see that the first feature selected (for the root node) by the DTA is "Is it male?", which is a binary variable. If yes, then, on the left branch, we have another internal node, which corresponds to another feature and, at the same time, to all observations associated with a male. However, on the right branch, we have a leaf node, which corresponds to one value of the target, which, in this case, is a probability (or, equivalently, a numerical value in the range $[0, 1]$). The shape of the tree depends on the dataset and DTA algorithm. Therefore, different datasets and algorithms might result in different decision trees. So, yes, you can view a decision tree algorithm as a feature selection or, more precisely, feature splitting algorithm.
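The "most important feature ends up higher in the tree" claim can be made concrete with a small information-gain computation. This is a from-scratch sketch on a made-up toy binary dataset (not from the answer above): feature 0 predicts the target perfectly, feature 1 is pure noise, so an ID3-style DTA would pick feature 0 for the root.

```python
from collections import Counter
import math

def entropy(labels):
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def info_gain(rows, feature, target=-1):
    """Entropy of the targets minus the weighted entropy after splitting."""
    base = entropy([r[target] for r in rows])
    n = len(rows)
    gain = base
    for value in set(r[feature] for r in rows):
        subset = [r[target] for r in rows if r[feature] == value]
        gain -= len(subset) / n * entropy(subset)
    return gain

# rows are (f0, f1, target): f0 mirrors the target, f1 carries no signal
data = [(0, 0, 0), (0, 1, 0), (1, 0, 1), (1, 1, 1)]
root = max(range(2), key=lambda f: info_gain(data, f))
```

Features with near-zero gain never get chosen near the top (or at all), which is exactly the implicit feature selection the question asks about.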
{ "domain": "ai.stackexchange", "id": 1546, "tags": "machine-learning, decision-trees, feature-selection" }
High voltage motor control with arduino
Question: I'm trying to use an Arduino to control a motor that needs a higher voltage than an Arduino pin can source. I am trying to hook it up to a transistor. The battery pack is not supposed to be 4.8V, it's 6V (4 D batteries). Here is the Arduino code I'm trying to run: int motorpin = 2; void setup() { pinMode(motorpin, OUTPUT); } void loop() { digitalWrite(motorpin, HIGH); delay(500); digitalWrite(motorpin, LOW); delay(500); } The code gives me no errors, but no motor movement happens. What would make this work? Thanks. Answer: Your hardware configuration sounds wrong... you're trying to provide all the drive current from the Arduino. Given your description, you're using a bipolar (NPN or PNP) transistor; I'd wire it up as: Base: to Arduino. Collector: motor -ve (motor +ve to Vcc). Emitter: ground. Alternatively: Base: to Arduino. Collector: Vcc. Emitter: motor +ve (motor -ve to ground). This way, the processor is switching the current flow between Vcc and the motor, not supplying all the power.
{ "domain": "robotics.stackexchange", "id": 117, "tags": "arduino, motor" }
Is dihydrogen the only example of overlap between two s orbitals?
Question: I heard that $\ce{H2}$ is the only example of s–s orbital overlap. Can anyone give an example which contradicts this statement? Answer: You don't need particularly exotic things to get a counterexample. Metal–metal bonding in mercury(I) compounds is quite prevalent, where the electronic configuration is $\ce{Hg+}$: $\ce{[Xe](4f^14)(5d^10)(6s^1)}$, and the bonding involves 6s–6s overlap. The prototypical example is the $\ce{Hg2^2+}$ cation, which features in mercury(I) chloride $\ce{(Hg2^2+)(Cl^-)2}$; this cation is valence isoelectronic to $\ce{H2}$. Fundamentally, it is a relativistic contraction of the 6s orbital which makes such bonding common in Hg(I): see also Why does mercury form polycations?. The bonding in alkali metal dimers is qualitatively similar, but significantly weaker. If we play the "every bond has orbital overlap" card (i.e. not just covalent bonds, which I think is perfectly fair), then the delocalised bonding in Group 1 and 2 metals arises from overlap of ns orbitals on each atom to form bands, as has already been alluded to in the comments. Even simpler would be lithium hydride, which likely has some degree of 1s–2s orbital overlap, even though the bonding may be primarily ionic.
{ "domain": "chemistry.stackexchange", "id": 12117, "tags": "molecular-orbital-theory, valence-bond-theory" }
Group list items into pages, rows and columns
Question: I have a list of labels for printing. Now I have to go over the list and print the labels. The layout is specified by how many labels I can print per page and per row on a page. @{ var labels = new List<int> { 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17 }; var labelsPerPage = 10; var labelsPerRow = 2; var labelCount = labels.Count; } @for (var i = 0; i < labelCount; i++) { if (i % labelsPerPage == 0) { @Html.Raw("<table class=\"page\">") } if (i % labelsPerRow == 0) { @Html.Raw("<tr>") } <td class="label">@PrintLabel(labels[i])</td> if (i % labelsPerRow == (labelsPerRow - 1)) { @Html.Raw("</tr>") } if (i % labelsPerPage == (labelsPerPage - 1)) { @Html.Raw("</table>") } } @helper PrintLabel(int label) { @label } Is there a better method (e.g. using LINQ or nested for/foreach loops) for going through the list and doing something at the start and end of each page and each row? I would prefer a nested structure to a flat one so it can be used inside a Razor template. The flat version fails here because Razor requires HTML tags to be closed at the end of a code block. I'm not looking for performance here; I'd rather have more readable code and possibly get rid of any Html.Raw() calls. Answer: You're closing tags only when a row or table is full. Is the data always going to fit the table size? if (i % labelsPerRow == (labelsPerRow - 1)) { @Html.Raw("</tr>") } if (i % labelsPerPage == (labelsPerPage - 1)) { @Html.Raw("</table>") } You could make a functional helper and use that instead. Notice that after the loop, the row and table are ensured to be closed. The purpose is to allow the consumer to pick how to build a TAggregate from TSource items and observer methods. We're observing every item, table start/end and row start/end.
public static class GridRenderer { public static TAggregate Render<TSource, TAggregate>(IEnumerable<TSource> source, TAggregate seed, int pageSize, int rowSize, Func<TSource, TAggregate, TAggregate> itemObserver, Func<TAggregate, TAggregate> beginPageObserver, Func<TAggregate, TAggregate> endPageObserver, Func<TAggregate, TAggregate> beginRowObserver, Func<TAggregate, TAggregate> endRowObserver) { if (source == null) throw new ArgumentNullException(nameof(source)); if (pageSize <= 0) throw new ArgumentOutOfRangeException(nameof(pageSize)); if (rowSize > pageSize) throw new ArgumentOutOfRangeException(nameof(rowSize)); // .. check observers for null var items = source.ToList(); var result = seed; if (!items.Any()) return result; for (var i = 0; i < items.Count; i++) { if (i % pageSize == 0) { result = beginPageObserver(result); } if (i % rowSize == 0) { result = beginRowObserver(result); } result = itemObserver(items[i], result); if ((i + 1) % rowSize == 0) { result = endRowObserver(result); } if ((i + 1) % pageSize == 0) { result = endPageObserver(result); } } if (items.Count % rowSize != 0) { result = endRowObserver(result); } if (items.Count % pageSize != 0) { result = endPageObserver(result); } return result; } } Let's say your data does not fit the table size and row size. var renderer = new StringBuilder(); GridRenderer.Render(new[] {1, 2, 3, 4, 5}, renderer, 4, 2, (item, cur) => cur.AppendLine($" <td class=\"label\">{item}</td>"), cur => cur.AppendLine("<table class=\"page\">"), cur => cur.AppendLine("</table>"), cur => cur.AppendLine(" <tr>"), cur => cur.AppendLine(" </tr>")); var layout = renderer.ToString(); The grid still gets created correctly. <table class="page"> <tr> <td class="label">1</td> <td class="label">2</td> </tr> <tr> <td class="label">3</td> <td class="label">4</td> </tr> </table> <table class="page"> <tr> <td class="label">5</td> </tr> </table> Note I've used a StringBuilder, but you could also use any other class to render the output. 
You could also change the flow a bit if you want an empty table when no data is available.
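The page/row grouping itself is language-agnostic. As a compact sketch of the same idea in Python (nested chunking rather than the observer-based C# helper above):

```python
def chunk(seq, size):
    """Split seq into consecutive pieces of at most `size` items."""
    return [seq[i:i + size] for i in range(0, len(seq), size)]

def paginate(labels, per_page, per_row):
    """Nest labels into pages of rows; partial rows and pages close naturally."""
    return [chunk(page, per_row) for page in chunk(labels, per_page)]

# 17 labels, 10 per page, 2 per row: two pages, last row holds one label
pages = paginate(list(range(1, 18)), per_page=10, per_row=2)
```

Rendering then becomes three plain nested loops over pages, rows, and labels, with the open/close tags written once per loop level, so rows and tables are always closed and no Html.Raw() equivalents are needed.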
{ "domain": "codereview.stackexchange", "id": 35939, "tags": "c#, html, layout" }
When is a dipole-dipole interaction strongest?
Question: Are there any conditions that'll improve the interaction? Just like, I assume, hydrogen bonds are strongest when the difference in electronegativity is biggest. Answer: To some degree, hydrogen bonding can be thought of as a subset or type of dipole-dipole interaction. This Wikipedia article defines dipole-dipole interactions as: Dipole-dipole interactions are electrostatic interactions between permanent dipoles in molecules. These interactions tend to align the molecules to increase attraction (reducing potential energy). The same article states, regarding hydrogen bonding: The hydrogen bond is often described as a strong electrostatic dipole-dipole interaction. However, it also has some features of covalent bonding. So, the assumption/example in your question is generally correct: just as a large difference in electronegativity of two atoms within a hydrogen bonding molecule results in stronger hydrogen bonds, the same tends to hold for non-hydrogen bonding dipole-dipole interactions. And while there are other forces affecting dipole-dipole interactions (i.e. solvent effects, molecular geometry/steric effects, etc.) it is the strength of the permanent dipole moment resulting primarily from the bond length and the difference in electronegativity between the atoms at the permanent dipole location that determine how strong the intermolecular dipole-dipole interactions will be.
{ "domain": "chemistry.stackexchange", "id": 7868, "tags": "intermolecular-forces" }
WPF login screen and share username and access id across other forms
Question: Goal: The goal of this application is to sign in using a MySQL database and to access the username and access id across the other forms. The only reason I am sharing the username at the moment is to display a welcome message for the user. For the access id, depending on the id number, different dashboard buttons will be restricted. I'd like to hear some of your thoughts on my coding - and whether the structure is good. Login Screen.xaml.cs public partial class MainWindow : Window { public MainWindow() { InitializeComponent(); } //Send object from MySQL results to the User class User u = new User(); private void Button_Click(object sender, RoutedEventArgs e) { try { //Connection string string connString = ConfigurationManager.ConnectionStrings["Technical_Application.Properties.Settings.MySQL"].ConnectionString; using (var conn = new MySqlConnection(connString)) { conn.Open(); //MySQL command using (var cmd = new MySqlCommand("SELECT count(*), id, access, username from users where username = '"+ txtUsername.Text + "' and password = MD5('" + txtPassword.Password + "');", conn)) { cmd.ExecuteNonQuery(); //Place results into dataTable DataTable dt = new DataTable(); MySqlDataAdapter da = new MySqlDataAdapter(cmd); da.Fill(dt); conn.Close(); if (dt.Rows[0][0].ToString() == "1") { //Send results to class u.id = Int32.Parse(dt.Rows[0][1].ToString()); u.access = Int32.Parse(dt.Rows[0][2].ToString()); u.username = dt.Rows[0][3].ToString(); Dashboard dashboard = new Dashboard(); dashboard.Show(); this.Close(); } else { MessageBox.Show("Login Failed", "Technical Login Error", MessageBoxButton.OK, MessageBoxImage.Warning); } } } } catch (Exception ex) { MessageBox.Show(ex.ToString()); } } private void Window_Loaded(object sender, RoutedEventArgs e) { txtUsername.Focus(); } private void CloseButton_Click(object sender, RoutedEventArgs e) { Close(); } } Login Screen.xaml <Grid> <Grid> <Grid.RowDefinitions> <RowDefinition/> <RowDefinition/> </Grid.RowDefinitions> <Border
CornerRadius="10" Grid.RowSpan="2"> <Border.Background> <LinearGradientBrush> <GradientStop Color="White"/> </LinearGradientBrush> </Border.Background> </Border> <Button Name="CloseButton" Content="x" Margin="0,10,0,164" Foreground="Black" FontSize="30" VerticalAlignment="Center" HorizontalAlignment="Right" Height="51" BorderBrush="Transparent" Background="Transparent" Click="CloseButton_Click"></Button> <StackPanel VerticalAlignment="Center"> <Image Source="Resources/Logo.jpg" Width="80"/> </StackPanel> <StackPanel Grid.Row="1"> <StackPanel Orientation="Horizontal"> <TextBox FontFamily="Helvetica" x:Name="txtUsername" FontWeight="Light" HorizontalContentAlignment="Left" HorizontalAlignment="Center" Width="235" Foreground="Black" FontSize="18" Height="30" Margin="63,0,0,0" /> <iconPacks:PackIconControl Kind="{x:Static iconPacks:PackIconMaterialKind.Account}" Width="24" Height="24" HorizontalAlignment="Center" VerticalAlignment="Center" Margin="10,0,0,0"/> </StackPanel> <StackPanel Orientation="Horizontal" Margin="0,30,0,0"> <PasswordBox FontFamily="Helvetica" x:Name="txtPassword" FontWeight="Light" HorizontalContentAlignment="Left" HorizontalAlignment="Center" Width="235" Foreground="Black" FontSize="18" Height="30" Margin="63,0,0,0" /> <iconPacks:PackIconControl Kind="{x:Static iconPacks:PackIconMaterialKind.FormTextboxPassword}" Width="24" Height="24" HorizontalAlignment="Center" VerticalAlignment="Center" Margin="10,0,0,0"/> </StackPanel> <Button Margin="0,50,0,0" Width="200" Click="Button_Click" Foreground="White">Login</Button> </StackPanel> </Grid> </Grid> User.cs - This is where I store the info and shared later for other forms to access the information required. 
namespace Technical_Application { class User { public int id; public int access; public string username; public int ID { get { return id; } set { id = value; } } public int Access { get { return access; } set { access = value; } } public string Username { get { return username; } set { username = value; } } } } More detail: This application will not be cloud or web based. It's fully only for a desktop application and to be used internally. Answer: Some quick remarks: If you're doing WPF, you should be using MVVM. There is a learning curve, but your code will be vastly easier to maintain once you add more and more features. Much of the code inside Button_Click should be in separate classes (and arguably even separate layers). Your code is vulnerable to SQL injection. Also, avoid writing ADO.NET and instead use Dapper. Do not mix the UI and the back-end logic. Have your login logic return a custom class and display an error message depending on the contents of that class. Why are the fields of User set to public? Why even have such fields, when auto-implemented properties have been a thing for more than a decade? To me, Access feels like it should be an enum and not a meaningless int.
{ "domain": "codereview.stackexchange", "id": 38269, "tags": "c#, wpf" }
Is a normal-ordered product of free fields at a point a Wightman field?
Question: $:\!\!\hat\phi(x)^2\!\!:$, for example, constructed from the real Klein-Gordon quantum field. For a Wightman field, the Wightman function $\left<0\right|\hat\phi(x)\hat\phi(y)\left|0\right>$ is a distribution, which is certainly the case for the real Klein-Gordon quantum field --- call it $C(x-y)$ in this case. In contrast, the expected value $\left<0\right|:\!\!\hat\phi(x)^2\!\!:\,:\!\!\hat\phi(y)^2\!\!:\left|0\right>$ is a product of distributions, in fact for the real Klein-Gordon quantum field it's $2C(x-y)^2$, which seems to make this normal-ordered product of quantum fields not a Wightman field. $C(x-y)^2$ is well-enough behaved off the light-cone, but on the light-cone it has a $[\delta((x-y)^2)]^2$ component. If $:\!\!\hat\phi(x)^2\!\!:$ is not a Wightman field, then is it nonetheless in the Borchers' equivalence class of the free field? If so, why so? A (mathematically clear) citation would be nice! Finally, if $:\!\!\hat\phi(x)^2\!\!:$ is not in the Borchers' equivalence class of the free field, because it isn't a distribution, is it nonetheless empirically equivalent to the free field at the level of S-matrix observables, as is proved to be the case for Borchers' equivalence classes (Haag, Local Quantum Physics, $\S$ II.5.5), even though it is manifestly not empirically equivalent to the free field at the level of Wightman function observables? My reading of the Wightman fields literature of the late 1950s and 1960s is far from complete, which may be why I haven't so far found clear answers for these questions. Answer: If I am not wrong the product is well defined, because the 2 point funtion is boundary value of an analytic function in some tube (see Streater/Wightman) with certain bounds. 
More generally, there is a 1-1 correspondence between translation-invariant tempered distributions which satisfy the spectrum condition and analytic functions in this mentioned tube, and an easy way to define the product of the distributions is the product of the analytic functions, which lies in the same class and therefore gives a well-defined distribution on the boundary. Really easy is the massless case in, let's say, 4D: $W(x-y) \sim \frac1{(x-y+i\epsilon(x_0-y_0))^2}$ $W^2(x-y) \sim \frac1{(x-y+i\epsilon(x_0-y_0))^4}$ where the two-point function is just the boundary value of the analytic function $1/z^2$. In 1 dimension, which means a chiral field on a light ray, it is even more drastic: there the Wick (= normal-ordered) square of the free fermion is a free boson. edit The answer to the question "Is a normal-ordered product of free fields at a point a Wightman field?" is: YES! For convenience I give you a reference, which is "General Principles of Quantum Field Theory" by Bogolubov, Logunov, Oksak, Todorov (1990), p. 344, Ex. 8.16: Prove that the Wick monomials satisfy the Wightman axioms. edit Tim: I cannot comment on yours yet. I did not say you can multiply general distributions which are boundary values by multiplying the functions, just those of a certain class; maybe I was sloppy because I thought this was standard. The $\delta$ is not in this class, because it does not fulfill a "spectrum condition", i.e. its Fourier transform is supported in some convex cone with non-empty dual cone. The Fourier transform of $\delta(x)$ is constant, so it does not fall into this class. But distributions which have this property of their Fourier transform you can multiply: see the standard textbook Reed & Simon, Volume 2, page 92, Ex. 4. I hope my references clear up the confusion.
{ "domain": "physics.stackexchange", "id": 351, "tags": "quantum-field-theory" }
Python - ternary operator or if statement?
Question: Which of these two alternatives is more readable? token = map.get(token_str) if not token: token = Course(token_str) return token or token = map.get(token_str) return token if token else Course(token_str) Answer: get() takes an optional second parameter, for a value to use if the key isn't present: token = map.get(token_str, Course(token_str)) I think that one is clearly the best solution, unless you don't want to create and immediately throw away a Course object. In that case, I would use 'or': token = map.get(token_str) or Course(token_str) Both of these are only one line, that one line is quite readable, and there is no unnecessary repetition. There is a third possible situation, in which you don't want to unnecessarily construct a Course instance, and you also can't rely on the values that are present to evaluate as True. Then 'or' wouldn't work correctly. Then I would use an 'if' statement: token = map.get(token_str) if token is None: token = Course(token_str) but that's personal preference. Something using 'in' is also fine: token = map[token_str] if token_str in map else Course(token_str) although of course that looks up token_str in map twice, so if you need to be really time efficient then don't do that.
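The difference between the variants matters once falsy-but-present values are possible. A small demo with a stand-in Course class (the real one is not shown in the question):

```python
class Course:
    def __init__(self, name):
        self.name = name

cache = {"a": "token-a", "b": ""}   # "" is present but falsy

# `or` replaces ANY falsy value, including the stored ""
t1 = cache.get("b") or Course("b")

# the `is None` check only replaces a genuinely missing key
t2 = cache.get("b")
if t2 is None:
    t2 = Course("b")

# for a truly missing key, both approaches construct a Course
t3 = cache.get("c") or Course("c")
```

So 'or' silently discards the stored empty string, while the 'is None' version keeps it; which behavior is correct depends on whether falsy tokens are valid in your application.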
{ "domain": "codereview.stackexchange", "id": 5254, "tags": "python" }
Calculation of drag coefficient of a square plate oscillating in water
Question: I am currently reading up on fluid mechanics, and while reading about the drag force equation it came to my attention that I am not sure how one can find the drag coefficient of an object without knowing the drag force. My understanding so far is that the drag force equation is a purely experimentally derived one, and this is causing me confusion, as I cannot find the necessary information on what experiments were carried out to find this equation. An example in one of my textbooks gives a square plate oscillating up and down in what I assume to be water, and I was wondering how one can find the drag force on this square plate if the coefficient is not known. Which I believe you cannot. I have looked in various books etc., but with fluid mechanics I seem to feel like I am going around in circles; every time I research the drag coefficient I keep getting a rearrangement of the drag equation. I was just wondering if someone could expand on how it is possible to find the drag coefficient of a square plate if the drag force is not known. Answer: The Navier-Stokes equations describe the behavior of a viscous fluid in laminar flow (i.e., the pressure and velocity variations as functions of time and position). They can be used to solve your problem, but it would typically require numerical analysis. At very low fluid velocities and/or high fluid viscosities, the behavior reduces to Stokes creeping flow, in which the inertial terms in the equations can be neglected. As a first approximation, you would solve this problem using the Stokes flow approximation. Then you would add the inertial terms and the transient behavior. It would be a nasty problem at this level, but it can be done (using computational fluid dynamics).
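For the special case where an analytic result does exist (a sphere, not a plate, in Stokes creeping flow), the drag force is known without any measured coefficient, and the "coefficient" can then be read off by inverting the drag equation. A sketch with illustrative water-like values:

```python
import math

def stokes_drag(mu, R, v):
    """Stokes creeping-flow drag on a sphere: F = 6*pi*mu*R*v (valid for small Re)."""
    return 6 * math.pi * mu * R * v

def drag_coefficient(F, rho, v, A):
    """Invert the empirical drag equation F = 0.5 * Cd * rho * v^2 * A."""
    return 2 * F / (rho * v**2 * A)

rho, mu = 1000.0, 1.0e-3          # water: density [kg/m^3], viscosity [Pa s]
R, v = 1.0e-4, 1.0e-3             # 0.1 mm sphere moving at 1 mm/s
F = stokes_drag(mu, R, v)
Re = rho * v * (2 * R) / mu       # Reynolds number on the diameter
Cd = drag_coefficient(F, rho, v, math.pi * R**2)
# In this regime Cd equals 24/Re, the classic low-Re sphere result
```

For a plate, no such closed form exists, which is why the coefficient has to come from experiment or from numerically solving the Navier-Stokes equations, as the answer says.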
{ "domain": "physics.stackexchange", "id": 54199, "tags": "classical-mechanics, fluid-dynamics" }