Lowest Michaelis constant Km
Question: I want to find a lower limit of the Michaelis constant for some evaluations of Michaelis-Menten enzyme kinetics. What is the lowest $K_m$ you ever encountered? Is there a theoretical limit? Answer: Take this generic reaction representing Michaelis-Menten enzyme kinetics: $$E + S \underset{k_r}{\overset{k_f}{\rightleftharpoons}} ES \overset{k_{cat}}{\longrightarrow} E + P$$ From it you can derive the Michaelis-Menten constant $K_m = (k_r + k_{cat})/k_f$ (for the derivation see https://en.wikipedia.org/wiki/Michaelis%E2%80%93Menten_kinetics). The quasi-steady-state hypothesis was used, therefore the rates of formation and breakdown of $ES$ are the same; this is a good approximation if the enzyme concentration is much less than the substrate concentration $[S]$, or $K_m$, or both. That means that $K_m$ can be extremely small (almost zero) and the Michaelis-Menten equation remains valid, provided that the enzyme concentration is much less than $[S]$. $K_m$ is very small if $k_r$ and $k_{cat}$ are both very small compared to $k_f$, that is, if the complex $ES$ is formed very quickly compared to its breakdown. In this case the reaction proceeds at its maximum velocity $V_{max}=k_{cat}\,E_0$, with $E_0=[ES]+[E]$. About the theoretical limit on how small $K_m$ can be, there are two considerations: First, $k_f$ cannot be faster than the diffusion-controlled encounter of an enzyme and its substrate. This means that $k_f$ cannot be higher than $10^9 s^{-1} M^{-1}$ (ref. https://www.ncbi.nlm.nih.gov/books/NBK22430/#:~:text=This%20rate%20cannot%20be%20faster,s%2D1%20M%2D1.) Second, $k_{cat}$ cannot be too small, or the product $P$ is produced at a rate not suitable for the cell's survival. Generally, I haven't encountered a $K_m$ smaller than $10^{-6}M$ in biological enzymes (e.g. triosephosphate isomerase, TPI), but possibly there are some. In TPI, $k_f \sim 10^9 s^{-1} M^{-1}$, therefore $k_r \sim k_{cat} \sim 10^{3}s^{-1}$, which is still quite high. It would be very interesting to know if there are enzymes with an even lower $K_m$. Very interesting question.
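A quick numerical illustration of the point above: with the Michaelis-Menten rate law $v = V_{max}[S]/(K_m + [S])$, a very small $K_m$ drives the reaction to $V_{max}$ even at low substrate concentration. This is a sketch with arbitrary illustrative numbers, not data for any real enzyme.

```python
# Michaelis-Menten rate: v = Vmax * [S] / (Km + [S])
# Vmax, Km and [S] values below are illustrative, not measured.

def mm_rate(s, vmax, km):
    return vmax * s / (km + s)

vmax = 1.0   # arbitrary units
s = 1e-4     # substrate concentration, M

for km in (1e-3, 1e-6, 1e-9):
    v = mm_rate(s, vmax, km)
    print(f"Km = {km:.0e} M -> v/Vmax = {v / vmax:.4f}")
# Km = 1e-03 M -> v/Vmax = 0.0909
# Km = 1e-06 M -> v/Vmax = 0.9901
# Km = 1e-09 M -> v/Vmax = 1.0000
```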
{ "domain": "chemistry.stackexchange", "id": 14571, "tags": "kinetics, chemical-biology, enzymes, enzyme-kinetics" }
Identify bee in photo (found in Utah, USA)
Question: A friend took this photo of a bee in his back yard in central Utah (© Jon Mott, CC-BY 3.0). I have no formal training in systematics or entomology, so to me everything looks like the bees' poster child Apis mellifera. Can someone with a bit more experience confirm or refute this? Answer: Yes, this looks like Apis mellifera, which is also one of the most common bees you'll run into. If you have other pictures, you could check the identification tips at the bugguide page for the species. As a side note, in general to really confirm an identification a specialist would need to see a collected specimen (that is, a live or pinned bee) as the distinctive characters (things like size, wing veins, or in this case, long hairs on the eye) may not be visible in photographs.
{ "domain": "biology.stackexchange", "id": 4188, "tags": "entomology, species-identification" }
Physics Problem: Entropy
Question: A parachutist weighing $88~\rm kg$ falls vertically from a height of $460~\rm m$ at a constant speed. Calculate the increase in entropy produced by the parachutist, assuming that the air temperature is $21 ~^\circ \mathrm C.$ $$\Delta S = \frac{Q}{T}= \frac{W}{T} = \frac{mgh}{T}$$ So far I found that this is the correct answer, but I don't really understand why the heat is equal to the work done on the air. Also, if it is correct, how could the work be equal to $mgh\,?$ Answer: The parachutist starts out with a certain amount of potential energy, $mgh$. This energy needs to be dissipated before she reaches the ground - and this is done by doing work against the air (in essence, stationary air is put into motion by the parachute, and so the potential energy of the parachutist is turned into kinetic energy of the air; from there, it dissipates into "general thermal motion" of the air molecules - which we call "heat").
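Plugging the given numbers into $\Delta S = mgh/T$ (a quick sketch; $g \approx 9.8~\mathrm{m/s^2}$ is my assumption, as the problem doesn't state it):

```python
# Entropy increase dS = Q/T = mgh/T for the falling parachutist.
m = 88           # kg
g = 9.8          # m/s^2 (assumed value)
h = 460          # m
T = 21 + 273.15  # air temperature in kelvin

Q = m * g * h    # potential energy dissipated into the air, J
dS = Q / T
print(round(Q), round(dS, 1))  # 396704 1348.6
```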
{ "domain": "physics.stackexchange", "id": 34542, "tags": "homework-and-exercises, thermodynamics, entropy" }
Can we always transform a set of lines to a function?
Question: If I have n lines in a programming language like Python (globally or inside a function):

    ..
    ..  # from here
    ..
    ..
    ..
    ..  # to here
    ..
    ..

or

    def example():
        ...
        ...  # from here
        ...
        ...
        ...
        ...  # to here
        ...
        ...

can I always transform it to a function of the form s1, s2, s3 … = function(s1, s2, s3 ...), where: s1, s2, s3, etc. are the local/global variables created/updated after the n lines get executed, and function is almost the same code as the n lines above, except that it collects them and returns them? As an example, in the below function:

    def example():
        ...
        ...  # from here
        ...
        ...
        a = 'c'
        b = 'd'
        ...
        ...  # to here
        ...
        ...

I know I can do this:

    def example():
        ...
        ...
        a, b = transform('a', 'b')
        ...
        ...

But my question is whether I can extend this idea to any arbitrary n lines of code, abstracting them as an input-output block. As I see it, the only thing that happens after any n lines are executed is a state change (as far as the program is concerned), which can always be represented as a function with an input and an output. Am I missing something? Can someone please clarify? Answer: The short answer is yes. The longer answer is no. The short answer is yes: that's a fundamental computation operation and it's pretty much the definition of a function. The equivalence between

    def f(x):
        do something with x
    f(foo)

and

    do something with foo

is in fact the definition of a function, or more precisely of function application. This is so fundamental that it's the basis of the lambda calculus. In the lambda calculus, there are just three syntactic constructs: variables ($x$, $y$, …); applying a function to an argument ($F X$ where $F$ is a function and $X$ is the argument); and lambda abstraction $\lambda x. M$ where $x$ is the parameter name and $M$ is the function body. The lambda calculus gets its name from that $\lambda$ notation, and it's where Python got lambda.
In the lambda calculus, there is a single computation rule, called beta conversion (beta reduction when done from left to right, beta expansion from right to left): $$ (\lambda x. M) N \equiv M[x \leftarrow N] $$ where $M[x \leftarrow N]$ means to replace $x$ by $N$ in $M$. (Details omitted because it would take a book chapter or two.) That single rule is enough to express all possible computations, in the sense that the lambda calculus is Turing-complete. Beta conversion can be done in any language that has something that can reasonably be called a function. But you need to take care of the details, and there are some language features that require additional effort or make it impossible in certain cases. Pretty much any language that isn't purely functional has restrictions on when beta expansion is correct. In any language, the lines that you move to the new function must form a syntactic block. For example, if you have lines that are part of a multi-line construct like

    while condition():
        instruction1()
        instruction2()

then you can move the loop out as a whole, but you can't move out

    while condition():
        instruction1()

and keep instruction2() in place. One superficial but easily understood case where beta conversion changes the behavior is introspection features. For example, if the language exposes a way to identify the current function or the function call stack trace, such as traceback.extract_stack() in Python, beta conversion changes that trace. If you put a call to traceback.extract_stack in a new auxiliary function, it's going to return something different. To make a beta expansion that preserves the behavior, you'd need to modify calls to traceback.extract_stack to remove the new function from the trace.
Note that this includes calls that may be deeply nested (if a function called inside the moved code calls a function that calls a function that … that calls traceback.extract_stack), so doing a fully behavior-preserving beta expansion turns into a global program transformation. Another introspection feature of Python that breaks beta conversion is that it exposes local variables through locals(): locals()['x'] evaluates to the same value as x. If the code that you move calls locals(), you also need to pass the variables accessed through locals() as arguments to the new function and return their new values. So it isn't a purely syntactic transformation anymore. A more interesting interaction is with flow control features. If the instructions that you put in the new function have self-contained flow, meaning that they're executed by starting at the top and either finishing at the bottom or raising an exception, then beta expansion or beta conversion doesn't change anything. It's ok if the code has loops and function calls inside it. But if the block of code that you move contains a non-local exit, i.e. an instruction that makes the execution jump outside that block of code such as return or break, you can't just move it. Likewise if the block contains a jump target (in imperative languages that have goto). It's possible to get around this with a local transformation: make the auxiliary function take one more argument which indicates the entry point (if there's a way to jump into the middle of the code), and one more return value which indicates where to exit to (if there's a way to jump out to a place other than the end of the code block). For example:

    def outer_function(x):
        if x == 1:
            return 2       #
        else:
            x = x - 1      #
        return x

If you want to extract the two lines marked with # on the right into a function, you need to remember whether to return the 2 or continue on to return x.
    def new_auxiliary_function(x):
        if x == 1:
            return "RETURN", 2
        else:
            x = x - 1
        return "FALLTHROUGH", x

    def new_outer_function(x):
        tmp = new_auxiliary_function(x)  #
        if tmp[0] == "RETURN":
            return tmp[1]                #
        x = tmp[1]                       #
        return x

Your transformation also changes exactly when variables are modified. This can become an issue due to aliasing. Aliasing is not normally an issue in Python since there's no way for a variable to designate another variable, as opposed to designating the same object as another variable. I wouldn't swear that it's never an issue, but I can't think of a way to do it. So instead I'll give an example in C, where aliasing is common due to pointers.

    int x = 3;
    int *p = &x;
    *p = 2;             //
    printf("%d\n", x);  //

This code prints 2, since the pointer p points to x and the line *p = 2 therefore sets x to 2. Now let's create an auxiliary function for the part marked with //. Since C can't create compound values on the fly, we need to define a structure type for the return values, but that's just a cosmetic change compared with Python.

    #include <stdio.h>

    typedef struct { int x; int *p; } values;

    values new_function(int x, int *p) {
        *p = 2;
        printf("%d\n", x);
        return (values){ x, p };  /* hand the (local copies of the) variables back */
    }

    …
    int x = 3;
    int *p = &x;
    values tmp = new_function(x, p);
    x = tmp.x;
    p = tmp.p;

The line *p = 2 in new_function sets the outer variable x to 2, since that's where p points to. It does not change the variable x that is inside new_function. Therefore this program prints 3. When a compiler performs a beta reduction, it's called inlining. This is a common optimization, which typically makes the program run faster at the expense of larger code size. Compilers much more rarely do beta expansions. It's a worthwhile optimization when the same block of code (or more generally similar-enough blocks) appears more than once and code size is more important than execution speed, but it's difficult to detect worthwhile cases. Both transformations have limitations as to exactly when they're correct.
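The traceback point discussed above is easy to demonstrate: moving even a trivial block into a helper adds one stack frame that introspection can see. The function names below are mine, chosen for illustration.

```python
import traceback

def block_inline():
    # the "block of code" executed directly inside the caller
    return len(traceback.extract_stack())

def helper():
    # the same block after beta expansion into an auxiliary function
    return len(traceback.extract_stack())

def block_extracted():
    return helper()

# The extracted version sees exactly one extra stack frame:
print(block_extracted() - block_inline())  # 1
```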
{ "domain": "cs.stackexchange", "id": 14593, "tags": "programming-languages, functional-programming" }
Will neutral particles be affected by EM waves?
Question: Air molecules scatter sunlight and make the sky blue. Many books say that the air molecules are oscillated by the E field and so they become sources of EM waves. Is it because the air molecules have charges? I wonder, if air molecules or molecules of another medium (e.g. water) are completely neutral (i.e. no excess charges at all), will they still be oscillated by the E field and scatter light? Answer: A model for the interaction of light with atoms and molecules treats the charge distribution as an electric dipole, because the particles consist of separate positively and negatively charged particles that can be polarised to have a non-zero electric dipole moment. Neutral particles where no (internal) charge separation is possible should not be affected by EM waves. This electric dipole oscillates in response to the applied oscillatory electric field from an electromagnetic wave and can be thought of as a damped, driven harmonic oscillator. The oscillating dipole in turn generates electromagnetic waves, but these waves are not just emitted in the same direction as the incoming radiation. In other words, the power of the incoming radiation is scattered. This simple, classical analogy is able to explain the phenomenon and wavelength-dependence of Rayleigh scattering (which is why the sky is blue) and also offers a simple description of why atoms/molecules absorb and scatter light particularly well at resonant frequencies associated with the difference in energy between quantum states. However, the classical analogy fails when trying to describe spontaneous and stimulated emission.
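The $\lambda^{-4}$ dependence of Rayleigh scattering that falls out of this dipole picture is easy to sketch numerically (the blue/red wavelengths below are typical values I chose for illustration):

```python
# Rayleigh scattering intensity scales as 1/lambda^4, so shorter (blue)
# wavelengths are scattered much more strongly than longer (red) ones.
blue_nm = 450
red_nm = 700

ratio = (red_nm / blue_nm) ** 4  # scattered intensity, blue relative to red
print(round(ratio, 1))  # 5.9 -> blue light is scattered about 6x more
```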
{ "domain": "physics.stackexchange", "id": 23947, "tags": "electromagnetic-radiation, scattering" }
How can the Copenhagen interpretation possibly be redeemed of this contradiction?
Question: It seems like the Copenhagen interpretation is just self-contradictory. These two axioms are contradictory: Quantum mechanics describes all the particles in the universe. Measurement devices evolve superpositions into eigenstates. Suppose an electron is in a state $|\psi \rangle $ and all the particles of a measurement device are in a state $|m\rangle$. If we apply axiom #1 to the state $|\psi \rangle \otimes |m\rangle$, we can evolve it using the Schrödinger equation. The decoherence theorem, which is an application of the Schrödinger equation, says that the electron will evolve into a mixed state. The decoherence theorem applies because the measurement device has $10^{23}$ particles. If we apply axiom #2 to the state $|\psi\rangle \otimes |m\rangle$, it says that the electron will evolve into an eigenstate. A mixed state contains all eigenvectors, and not just one. Since a mixed state $\neq$ an eigenstate, we have a contradiction. What is the way out of this contradiction? Answer: The equations of physics are all time-reversal symmetric. But we know the Universe is in fact not (second law of thermodynamics). How can modern physics be redeemed of this contradiction? The truth is that all physical theories have limitations. What you wrote is true, but it only means that quantum theory as we know it now has a limited sphere of applicability. We have to live with the theories we have, be mindful of their shortcomings and be careful not to use them beyond the intended range of validity (here belong the Schrödinger cats and the like) until something more general (= wider range) comes about. In the meantime we try to come up with something more universal. This is what they call research.
{ "domain": "physics.stackexchange", "id": 91384, "tags": "quantum-mechanics, quantum-interpretations, measurement-problem" }
Differential form of the velocity equation in non-standard configuration
Question: I'm reading a text on special relativity ($^{\prime\prime}$Core Principles of Special and General Relativity$^{\prime\prime}$, by James H. Luscombe, Edition 2019), in which we start with the equation for composition of velocities in non-standard configuration. Frame $S^{\prime}$ is moving w.r.t. $S$ with constant velocity $\boldsymbol{\upsilon}$ and the velocity of a particle in $S$ is $\boldsymbol{u}$. Then the velocity of the particle in $S^{\prime}$ is \begin{equation} \boldsymbol{u^{\prime}=}\dfrac{\boldsymbol{u-\upsilon}}{1\boldsymbol{-\upsilon\cdot u}/c^2}\boldsymbol{+}\dfrac{\gamma}{c^2\left(1\boldsymbol{+}\gamma\right)}\dfrac{\boldsymbol{\upsilon\times}\left(\boldsymbol{\upsilon\times u}\right)}{\left(1\boldsymbol{-\upsilon\cdot u}/c^2\right)} \tag{3.26}\label{3.26} \end{equation} where \begin{equation} \gamma\boldsymbol{=}\left(1\boldsymbol{-}\dfrac{\upsilon^2}{c^2}\right)^{\boldsymbol{-\frac12}} \nonumber \end{equation} Then the text states that "differentiating" the above equation \eqref{3.26} gives us \begin{equation} \mathrm{d}\boldsymbol{u^{\prime}=}\dfrac{1}{\gamma\left(1\boldsymbol{-\upsilon\cdot u}/c^2\right)^2}\left[\mathrm{d}\boldsymbol{u-}\dfrac{\gamma}{c^2\left(1\boldsymbol{+}\gamma\right)}\left(\boldsymbol{\upsilon\cdot \mathrm{d}u}\right)\boldsymbol{\upsilon}\boldsymbol{+}\dfrac{1}{c^2}\boldsymbol{\upsilon\times}\left(\boldsymbol{u\times} \mathrm{d}\boldsymbol{u}\right) \right] \tag{3.32}\label{3.32} \end{equation} I'm struggling with proving this. 
Just to reduce some of the notational headache, if we denote \begin{equation} f\left(\boldsymbol{u}\right)\boldsymbol{=}\dfrac{1}{1\boldsymbol{-\upsilon\cdot u}/c^2} \tag{01}\label{01} \end{equation} then \begin{equation} \mathrm d f\left(\boldsymbol{u}\right)\boldsymbol{=}\dfrac{ f^2\left(\boldsymbol{u}\right)\left(\boldsymbol{\upsilon\cdot} \mathrm{d}{\boldsymbol{u}}\right)}{c^2} \tag{02}\label{02} \end{equation} Also let \begin{equation} K\boldsymbol{\equiv}\dfrac{\gamma}{c^2\left(1\boldsymbol{+}\gamma\right)} \tag{03}\label{03} \end{equation} Then the original equation \eqref{3.26} is: \begin{equation} \boldsymbol{u^{\prime}=}f\left(\boldsymbol{u}\right)\left(\boldsymbol{u-\upsilon}\right)\boldsymbol{+}K f\left(\boldsymbol{u}\right)\left[\boldsymbol{\upsilon\times}\left(\boldsymbol{\upsilon\times u}\right)\right] \tag{04}\label{04} \end{equation} Differentiating (writing $\,f\,$ without its argument for convenience), \begin{align} \mathrm{d}\boldsymbol{u^{\prime}}& \boldsymbol{=}\left(\boldsymbol{u-\upsilon}\right)\mathrm{d}f\boldsymbol{+}f\mathrm{d}\boldsymbol{u}\boldsymbol{+}K \mathrm{d}f\left[\boldsymbol{\upsilon\times}\left(\boldsymbol{\upsilon\times u}\right)\right]\boldsymbol{+}K f \left(\boldsymbol{\upsilon\cdot} \mathrm{d}{\boldsymbol{u}}\right)\boldsymbol{\upsilon}\boldsymbol{-}K f\upsilon^2 \mathrm{d}\boldsymbol{u} \nonumber\\ &\boldsymbol{=}\dfrac{ f^2\left(\boldsymbol{u-\upsilon}\right)\left(\boldsymbol{\upsilon\cdot} \mathrm{d}{\boldsymbol{u}}\right)}{c^2}\boldsymbol{+}f\mathrm{d}\boldsymbol{u}\boldsymbol{+}K \dfrac{ f^2\left(\boldsymbol{\upsilon\cdot} \mathrm{d}{\boldsymbol{u}}\right)}{c^2}\left[\boldsymbol{\upsilon\times}\left(\boldsymbol{\upsilon\times u}\right)\right]\boldsymbol{+}K f \left(\boldsymbol{\upsilon\cdot} \mathrm{d}{\boldsymbol{u}}\right)\boldsymbol{\upsilon}\boldsymbol{-}K f\upsilon^2 \mathrm{d}\boldsymbol{u} \nonumber\\ &\boldsymbol{=} f^2\Biggl[\dfrac{ \left(\boldsymbol{u-\upsilon}\right)\left(\boldsymbol{\upsilon\cdot} 
\mathrm{d}{\boldsymbol{u}}\right)}{c^2}\boldsymbol{+}\dfrac{\mathrm{d}\boldsymbol{u}}{f}\boldsymbol{+}K \dfrac{\left(\boldsymbol{\upsilon\cdot} \mathrm{d}{\boldsymbol{u}}\right)}{c^2}\left[\boldsymbol{\upsilon\times}\left(\boldsymbol{\upsilon\times u}\right)\right]\boldsymbol{+}\dfrac{K}{f} \left(\boldsymbol{\upsilon\cdot} \mathrm{d}{\boldsymbol{u}}\right)\boldsymbol{\upsilon}\boldsymbol{-}\dfrac{K}{f}\upsilon^2 \mathrm{d}\boldsymbol{u}\Biggr] \nonumber \end{align} Beyond this, I'm really not able to get to the final result despite trying a bunch of times. Not sure if I'm overcomplicating things or missing some magical identity that simplifies everything. Would appreciate any help. Answer: Hints : In the brackets of the last line of your equation replace all \begin{equation} \dfrac{1}{f} \quad \boldsymbol{\longrightarrow} \quad \left(1\boldsymbol{-}\dfrac{\boldsymbol{\upsilon\cdot u} }{c^2}\right) \tag{a-01}\label{a-01} \end{equation} In the brackets of the last line of your equation expand \begin{equation} \boldsymbol{\upsilon\times}\left(\boldsymbol{\upsilon\times u}\right) \quad \boldsymbol{\longrightarrow} \quad \left[\left(\boldsymbol{\upsilon\cdot u} \right)\boldsymbol{\upsilon}\boldsymbol{-}\upsilon^2\boldsymbol{u}\right] \tag{a-02}\label{a-02} \end{equation} Expand the last item in the rhs of equation \eqref{3.32} \begin{equation} \boldsymbol{\upsilon\times}\left(\boldsymbol{u\times} \mathrm{d}\boldsymbol{u}\right)\boldsymbol{=}\left(\boldsymbol{\upsilon\cdot} \mathrm{d}{\boldsymbol{u}}\right)\boldsymbol{u}\boldsymbol{-}\left(\boldsymbol{\upsilon\cdot u} \right)\mathrm{d}{\boldsymbol{u}} \tag{a-03}\label{a-03} \end{equation} Keep $\,K\,$ as it is until the end and don't replace it by its expression \eqref{03} in order to avoid lengthy equations In the next steps you must realize that \begin{equation} \left(1\boldsymbol{-}K\upsilon^2\right)\boldsymbol{=}\dfrac{1}{\gamma} \quad \text{and} \quad 
\left(K\boldsymbol{-}\dfrac{1}{c^2}\right)\boldsymbol{=-}\dfrac{1}{c^2\left(1\boldsymbol{+}\gamma\right)}\boldsymbol{=-}\dfrac{K}{\gamma} \tag{a-04}\label{a-04} \end{equation}
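As a sanity check, the claimed result (3.32) can be verified numerically against (3.26) with a central finite difference. This is a sketch in units where $c=1$; the test vectors below are arbitrary choices of mine, not from the text.

```python
import numpy as np

def gamma(v):
    return 1.0 / np.sqrt(1.0 - v @ v)

def u_prime(u, v):
    # Eq. (3.26) with c = 1
    g = gamma(v)
    denom = 1.0 - v @ u
    return (u - v) / denom + (g / (1.0 + g)) * np.cross(v, np.cross(v, u)) / denom

def du_prime(u, du, v):
    # Eq. (3.32) with c = 1
    g = gamma(v)
    denom = 1.0 - v @ u
    bracket = du - (g / (1.0 + g)) * (v @ du) * v + np.cross(v, np.cross(u, du))
    return bracket / (g * denom ** 2)

v = np.array([0.3, 0.1, 0.2])    # boost velocity (|v| < 1)
u = np.array([0.2, -0.1, 0.4])   # particle velocity in S
d = np.array([0.5, 0.3, -0.2])   # arbitrary direction for du
eps = 1e-6

# central difference of (3.26) compared against the differential (3.32)
fd = (u_prime(u + eps * d, v) - u_prime(u - eps * d, v)) / 2.0
print(np.allclose(fd, du_prime(u, eps * d, v)))  # True
```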
{ "domain": "physics.stackexchange", "id": 68700, "tags": "homework-and-exercises, special-relativity, inertial-frames, differentiation, calculus" }
What are the biochemical processes occurring when food spoils?
Question: Let's assume for a minute that microbes themselves and their direct toxic products (i.e. endotoxins) aren't toxic to humans. Let's also discount any innate immune responses the body mounts against the invading microbe (i.e. inflammation and production of cytokines). What happens to food molecules (mechanistically) as they spoil, and what deleterious effects do these "spoil products" have on the body if ingested? I'm looking for compounds that can result from the spontaneous breakdown of food or the byproducts of microbial metabolism (that is, NOT a "direct" toxin) that are harmful to the body. For example, do the proteins in food break down into some toxic nitrogenous substance? Answer: During putrefaction of animal tissue, lysine is decarboxylated into cadaverine and arginine is decarboxylated into putrescine. These compounds are deemed to be toxic. A serving of meat contains 8 g of protein, corresponding to 640 mg lysine and a little bit less of arginine. Let's go straight and say that a spoiled meat serving contains 640 mg cadaverine and a little bit less of putrescine. In rats, the acute oral toxicity for both polyamines is around 2000 mg/kg; let's assume that this is valid for humans also. According to these rough calculations, to have an acute toxic effect, a 70 kg man who is resistant to the direct toxic effects of microbes would have to eat 140 grams of cadaverine, corresponding to 218 smelly rotten meat servings. [composition and toxicity data taken from wikipedia]
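Writing the answer's arithmetic out explicitly (all figures are the estimates quoted above, taken there from Wikipedia; carrying the rat LD50 over to humans is the answer's own assumption):

```python
lysine_per_serving_mg = 640  # lysine in ~8 g of meat protein (answer's estimate)
ld50_mg_per_kg = 2000        # acute oral toxicity of cadaverine in rats
body_mass_kg = 70

toxic_dose_mg = ld50_mg_per_kg * body_mass_kg  # 140 g of cadaverine
servings = toxic_dose_mg / lysine_per_serving_mg
print(toxic_dose_mg / 1000, int(servings))  # 140.0 218
```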
{ "domain": "biology.stackexchange", "id": 185, "tags": "biochemistry, metabolism, digestive-system, food" }
What is the difference between A-normalization and K-normalization in compilers?
Question: Administrative normal form is a program intermediate representation in which each intermediate result has a name. It is used in GHC and OCaml. K-normal form is an intermediate representation in which each instruction consists of one assignment and one operation. It is used in MLKit, Min-Caml, and GoCaml. Both A-normalization and K-normalization involve generating a let expression with a continuation. A-normalization and K-normalization seem to be exactly the same transformation. What is the difference between them such that they deserve different names? Answer: As far as my search-foo led me: K-normal form is inspired by A-normal form, but instrumented for use in storage mode analysis, which is a static program analysis used for inferring memory management directives for functional programs. The term seems to originate from the following publication: L. Birkedal, M. Tofte, and M. Vejlstrup. From region inference to von Neumann machines via region representation inference. A copy of the publication can, at the time of writing, be obtained from L. Birkedal's faculty webpage under publications.
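Both forms boil down to naming every intermediate result with a let-binding. A toy sketch of that shared transformation (the AST encoding and helper names here are mine, not taken from any of the compilers mentioned):

```python
# Minimal A-normalization for a tiny expression language: every compound
# subexpression gets a fresh name, leaving only atomic operands.
import itertools

fresh = (f"t{i}" for i in itertools.count())

def anf(expr, bindings):
    """Return an atom for expr, appending let-bindings as needed."""
    if isinstance(expr, (int, str)):  # already atomic
        return expr
    op, left, right = expr            # e.g. ("+", ("*", "x", 2), 3)
    l = anf(left, bindings)
    r = anf(right, bindings)
    name = next(fresh)
    bindings.append((name, (op, l, r)))
    return name

bindings = []
result = anf(("+", ("*", "x", 2), 3), bindings)
for name, rhs in bindings:
    print(f"let {name} = {rhs}")
print(result)
# let t0 = ('*', 'x', 2)
# let t1 = ('+', 't0', 3)
# t1
```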
{ "domain": "cs.stackexchange", "id": 13964, "tags": "programming-languages, compilers" }
Why do OX5034 GM mosquitos require the presence of tetracycline to survive? What does the drug do in this case?
Question: I'm confused. Debug Fresno; why are the released mosquitos said to be sterile? from 2017 addresses male mosquitos released with a bacterium that will affect the fertility of females after mating. They are not genetically modified. CNN's 2021 article First-ever US release of genetically modified mosquitoes begins in Florida Keys describes the release of Oxitec's OX5034 GM mosquito Aedes aegypti. Curiously, these can be kept viable for reproduction when maintained in an environment that includes tetracycline, an antibiotic. Question: Without the bacterial infection, is it just a coincidence that the GM strain needs an antibiotic to survive? What does the tetracycline do in this case? Why was this drug chosen? Potentially helpful resources: https://www.oxitec.com/en/our-technology https://endmalaria.org/sites/default/files/Enca%20Martin-Rendon.pdf Answer: The other answers are correct and on the right track, but I will expand on them with the specific mechanism for these mosquitos. The following paper discusses the engineering behind strain OX513A, upon which the OX5034 strain is based: Phuc HK et al. 2007. Late-acting dominant lethal genetic systems and mosquito control. BMC Biol 5:11 Oxitec mosquitos carry a transgenic construct containing two genes. One encodes a fluorescent marker, DsRed2, for identifying genetically modified mosquitos, and the other encodes a protein called tTAV: The tTAV gene is under the control of a minimal hsp70 promoter flanked immediately upstream by a tetO site. The tTAV protein is a transcriptional activator that binds tetO and enhances expression. What that means in this construct is that expression of the tTAV gene creates more tTAV protein, which binds to tetO, which leads to more tTAV expression, etc. This is the positive feedback mechanism mentioned in the above image. Importantly, however, the tTAV protein can bind tetracycline and, when complexed, can no longer bind tetO and therefore cannot drive its own expression.
In other words, in the presence of tetracycline, there will be minimal tTAV expression. tTAV overexpression is lethal during development. Although the exact mechanism of lethality isn’t entirely clear, there are some leading hypotheses: Knudsen KE et al. 2020. Genetic Variation and Potential for Resistance Development to the tTA Overexpression Lethal System in Insects. G3 10(4): 1271–1281 tTA overexpression is thought to cause lethality due to “transcriptional squelching,” that is a general interference in gene expression (Gong et al. 2005). Consistent with this hypothesis, some genes identified by the GWAS were involved in gene silencing (Su(var)2-HP2) (Shaffer et al. 2002; Shaffer et al. 2006), chromatin binding (mamo) (Hira et al. 2013), chromatin remodeling (Hira) (Loppin et al. 2000) and alternative splicing (bru1)(Spletter et al. 2015), which could all influence the level of tTA expression. Other candidate genes were involved in defense response (PGRP-LC, Lmpt)(Jin et al. 2008), the septate junction (cora) (Tepass et al. 2001) and apoptosis (out) (Coffman 2003); all of which are systems that could potentially impact survival. Four genes; eff, tey, CG32085 and CG13085, encode proteins that are predicted to participate in protein ubiquitination and degradation (Thurmond et al. 2018; Gramates et al. 2017). For example, the Eff protein is a E2 ubiquitin-conjugating enzyme (Chen et al. 2009). It has been suggested that overexpression of the tTA protein could cause lethality due to interference with ubiquitin-dependent proteolysis (Gong et al. 2005) as ubiquitination of VP16 is required for activity and also signals destruction (Salghetti et al. 2001). To summarize, tTAV is a transcriptional activator and, in Oxitec mosquitos, it regulates its own expression in a positive feedback loop: tTAV protein binding to its response element tetO leads to more tTAV gene expression which leads to more tTAV protein. This overexpression of tTAV is lethal during development. 
However, tTAV can also bind tetracycline and, when it does, it can no longer bind tetO. In the presence of tetracycline, there is no positive feedback loop and only basal, non-lethal amounts of tTAV are expressed. Image from the Oxitec website.
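As an illustration of the feedback logic (not a quantitative model: all rates below are arbitrary toy values, and tetracycline is modeled as simply switching the self-activation term off):

```python
# Toy simulation of the tTAV positive feedback loop described above.
# The Hill-type activation term is a generic stand-in for tetO-driven
# self-activation; none of the parameters are measured values.

def simulate(tetracycline_present, steps=200, dt=0.1):
    t_tav = 0.1  # small basal amount of tTAV protein
    for _ in range(steps):
        if tetracycline_present:
            activation = 0.0  # tTAV-tetracycline complex cannot bind tetO
        else:
            activation = 5.0 * t_tav / (1.0 + t_tav)  # self-activation
        t_tav += dt * (0.05 + activation - 0.5 * t_tav)
    return t_tav

print(simulate(tetracycline_present=True))   # settles near the basal level
print(simulate(tetracycline_present=False))  # runs up to a high (lethal) level
```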
{ "domain": "biology.stackexchange", "id": 11368, "tags": "molecular-biology, reproduction, infectious-diseases, pest-control" }
Aligning many long sequences
Question: I'm faced with having to align many (some 100s of) bacterial genomes, where the genome length is in the millions. Obviously, this is beyond normal alignment techniques, and it's unclear to me what the best practice is for such circumstances: conventional alignment on a very powerful computer with lots of memory; breaking up the genome into smaller fragments and aligning them individually; some exotic different procedure. What possible avenues of attack are there? (I've attempted to use Mafft and Clustal with little success.) Answer: Whole-genome alignment can be done using Progressive Mauve, LAST or MUMmer. For bacteria I used Mauve, since it also has a very nice visualisation engine. A very new tool is Minimap2, a super fast mapper that, besides read mapping, is supposed to be able to handle reference-vs-reference alignment. However, I do not know how its performance compares to the tools mentioned above. If you are interested in a rough idea of the shared genome regions, you can use bevel. Bevel is not really an aligner, it is more like a dot-plot, but it is super fast (even for mammalian-sized genomes).
{ "domain": "bioinformatics.stackexchange", "id": 97, "tags": "sequence-alignment, genome" }
Learning physics online?
Question: I'm thinking of following some kind of education in physics online. I have a master's degree in Computer Science and reasonably good knowledge of physics. I would like a program of 1-2 years, and I'm most interested in particle physics. Is there any good online program that offers something similar? Answer: Since you say you're mostly interested in learning physics, not in a degree, there are lots of options. You say you have good knowledge of physics; if you want to learn particle physics, you need a good undergrad physics background plus some additional knowledge. I tend to think that reading textbooks is still the best way to learn the basics; if you want to learn particle physics, you might start with the book by Griffiths. If you find that you don't have enough background knowledge for it, then you would need to read other, more basic textbooks. Once you pass some threshold of background knowledge, though, you can learn a lot online. You could try searching arxiv.org for introductory lectures on various topics. In particle physics, for instance, you might search for the word "TASI", which is a summer school for graduate students to learn more about the field (the lectures are often written up and posted online). Most of this will assume you already know quantum field theory, though. You can get a prepublication draft of Mark Srednicki's QFT textbook from his website (PDF file). The Perimeter Institute has a one-year Master's degree program and videotapes all of the lectures. You can watch the videos from their archive.
{ "domain": "physics.stackexchange", "id": 65756, "tags": "soft-question, resource-recommendations, education" }
Async database helper function in TypeScript
Question: I have made an async function in TypeScript that responds to Events and returns an object with methods that return a promise. I would like help making it better and prettier. :) My interface files:

config.model.ts

    export interface IDBUConfigModel {
        version: number;
        dbName: string;
        storeNames: string[];
        keyPath?: string;
    }

IDBUtility.model.ts

    export interface IDBUtility {
        add: (storeName: string, value: {}) => Promise<string | {}>;
        put: (storeName: string, value: {}) => Promise<string | {}>;
        update: (storeName: string, keyValue: string, value: {}) => Promise<string | {}>;
        get: (storeName: string, keyValue: string) => Promise<any>;
        remove: (storeName: string, keyValue: string) => Promise<{} | void>;
    }

My main function:

    import { IDBUConfigModel } from '../models/config.model';
    import { IDBUtility } from '../models/idb-utility.model';

    export async function openIDB(config: IDBUConfigModel): Promise<IDBUtility> {
        if (!window.indexedDB) {
            // console.log("Your browser doesn't support a stable version of IndexedDB. IndexedDB will not be available.");
            return void 0;
        }
        return new Promise<IDBUtility>((resolve, reject) => {
            const request = indexedDB.open(config.dbName, config.version);

            request.onerror = (evt: ErrorEvent | any) => {
                reject(request.result);
            };

            request.onupgradeneeded = (evt: IDBVersionChangeEvent | any): void => {
                const nextDb = evt.target.result;
                if (config.keyPath) {
                    config.storeNames.forEach((storeName: string) => {
                        nextDb.createObjectStore(storeName, { keyPath: config.keyPath });
                    });
                } else {
                    config.storeNames.forEach((storeName: string) => {
                        nextDb.createObjectStore(storeName, { autoIncrement: true });
                    });
                }
            };

            request.onsuccess = (evt) => {
                const db = request.result;
                resolve({
                    async add(storeName: string, value: {}): Promise<string | {}> {
                        return new Promise((res, rej) => {
                            const request = db.transaction([storeName], 'readwrite')
                                .objectStore(`${storeName}`)
                                .add(value);
                            request.onsuccess = (evt) => { res(request.result); };
                            request.onerror = () => { rej(request.result); };
                        });
                    },
                    async put(storeName: string, value: {}): Promise<string | {}> {
                        return new Promise((res, rej) => {
                            const request = db.transaction([storeName], 'readwrite')
                                .objectStore(storeName)
                                .put(value);
                            request.onsuccess = () => { res(request.result); };
                            request.onerror = () => { rej(request.result); };
                        });
                    },
                    async update(storeName: string, key: string, value: ({} | any[])): Promise<string | {}> {
                        return new Promise((res, rej) => {
                            const transaction = db.transaction([storeName], 'readwrite');
                            const getRequest = transaction.objectStore(storeName).get(key);
                            transaction.onerror = () => { rej(request.result); };
                            getRequest.onsuccess = () => {
                                const currentValue = getRequest.result;
                                const updatedValue = mergeDeep(currentValue, value);
                                const delRequest = transaction.objectStore(storeName).delete(key);
                                delRequest.onsuccess = () => {
                                    const addRequest = transaction.objectStore(storeName).add(updatedValue);
                                    addRequest.onsuccess = () => { res(addRequest.result); };
                                };
                            };
                        });
                    },
                    async remove(storeName: string, keyValue: string): Promise<any> {
                        return new Promise((res, rej) => {
                            const delRequest = db.transaction([storeName], 'readwrite')
                                .objectStore(storeName)
                                .delete(keyValue);
                            delRequest.onsuccess = () => { res(delRequest.result); };
                            delRequest.onerror = () => { rej(delRequest.result); };
                        });
                    },
                    async get(storeName: string, key: string): Promise<{}> {
                        return new Promise((res, rej) => {
                            const request = db.transaction([storeName])
                                .objectStore(storeName)
                                .get(key);
                            request.onsuccess = () => { res(request.result); };
                            request.onerror = () => { rej(request.result); };
                        });
                    }
                });
            };
        });
    }

    function mergeDeep(target, source) {
        if (typeof target == "object" && typeof source == "object") {
            for (const key in source) {
                if (source[key] === null && (target[key] === undefined || target[key] === null)) {
                    target[key] = null;
                } else if (source[key] instanceof Array) {
                    if (!target[key]) target[key] = [];
                    target[key] = target[key].concat(source[key]);
                } else if (typeof source[key] == "object") {
                    if (!target[key]) target[key] = {};
                    this.mergeDeep(target[key], source[key]);
                } else {
                    target[key] = source[key];
                }
            }
        }
        return target;
    }

Answer: I consider myself no expert in the field, but I think I can give you some small tips :). Here you go: Writing a function as async makes it return a Promise. As you already wrote the return type Promise<IDBUtility>, declaring the function as async makes it redundant. In Typescript, an async function can include await expressions to simplify the Promise behaviour. MDN I recommend looking up some NPM packages to simplify your tasks. There is no need to reinvent the wheel. I believe these might be helpful for you: https://www.npmjs.com/package/idb https://www.npmjs.com/package/idb-keyval The union types are very useful in Typescript, but they don't add any information to the type when you join something with any, as the joint type is already covered by any.
Therefore, in evt: ErrorEvent | any, evt can be an ErrorEvent or anything else. This makes it not type safe, as you will have to type-cast it anyway. If you need to cover more kinds of events than just ErrorEvent, you could maybe use the class it extends (evt: Event) or make your own custom events by extending Event. function add(storeName: string, value: {}): Promise<string | {}> { return new Promise((res, rej) => { const request = db.transaction([storeName], 'readwrite') .objectStore(`${storeName}`) .add(value); request.onsuccess = (evt) => { res(request.result); }; request.onerror = () => { rej(request.result); }; }); } you could rewrite it to something like this: async function add(storeName: string, value: any): Promise<any> { const request = await db.transaction([storeName], 'readwrite') .objectStore(storeName).add(value); if (request.isValid()) { return Promise.resolve(request.result); } return Promise.reject(`Add transaction failed: ${request.error}`); } It's 3 LoC shorter and easier to understand what's going on (although, keep in mind that I made it up!). There are three significant instructions that can be caught at a glance: Async transaction Return valid transaction Return transaction error You will probably want to add some error handling to make your database connection tolerant to errors. With the approach I'm trying to explain, it shouldn't be hard to extend your code and still leave it readable and maintainable. I suggest you study how to create clean, readable and easy-to-maintain code. The function you provided has way too many indentation levels for a function of that complexity (which it shouldn't have either!). A 100-line function might be easy to write, but it's hard to understand for a person who hasn't been following your code development (or even yourself after a certain amount of time). I strongly suggest Clean Code by Robert C. Martin. Related question. Good luck! :)
{ "domain": "codereview.stackexchange", "id": 29219, "tags": "database, event-handling, promise, async-await, typescript" }
Kepler's third law: the equations $\frac{T^2}{\langle r\rangle^3}=\text{constant}$ and $\frac{T^2}{a^3}=\text{constant}$ are equivalent?
Question: Kepler's third law, or law of periods, affirms that: "The squares of the times that the planets use to cover their orbits are proportional to the cubes of their average distances from the Sun". Source: as an example, https://it.wikipedia.org/wiki/Leggi_di_Keplero (the first definition), and the English book PHYSICS, James Walker, 5th edition. I write $r=\mathrm{d}(\text{Planet,Sun})$, and the $r_i$ for $i=1,\ldots,n$ are the radius vectors of the planet as it moves during its period of revolution around the Sun. I have written only $r_1, r_2$ and $r_3$. Considering that the starting definition speaks of average distances, is it possible to write $$\frac{T^2}{\langle r\rangle^3}=\text{constant}\tag 1$$ where $\langle r\rangle$ indicates the arithmetic average of the distances of a planet from the Sun as it travels its elliptical orbit? For example, take the equation of a canonical ellipse, $$\frac{x^2}{a^2}+\frac{y^2}{b^2}=1$$ where $a$ is the semimajor axis and $b$ the semiminor axis, with $a>b>0$. Supposing we keep the numerator constant in $(1)$, if I take just three distances $r_1$, $r_2$ and $r_3$ and consider, using for example Geogebra with a drawing, $$\langle r \rangle=\frac{r_1+r_2+r_3}{3}\approx a \tag 2$$ If this approach is meaningful, then I can also write, with good approximation, that $$\frac{T^2}{a^3}=\text{constant}\tag 3$$ So $(3)$ is justified by $(1)$. But in almost all Italian high-school books, the first definition is not given; instead it is written that The ratio between the square of the revolution period and the cube of the semimajor axis of the orbit is the same for all planets. My question is: Is there a correlation of average distances $\langle r \rangle$ with the $a$ or $\langle r \rangle\equiv a$? Any answer is welcome, and I hope for a serene discussion. Answer: Is there a correlation of average distances $\langle r \rangle$ with the $a$ or $\langle r \rangle\equiv a$? 
Since $r$ changes continuously, most people would assume that $\langle r \rangle$ means either a continuous average over all angles $\theta$ around the ellipse, $$\langle r \rangle_\theta\equiv\frac{1}{2\pi}\int_0^{2\pi}r(\theta)d\theta,\tag1$$ or a continuous time average over one period $T$ of the orbit, $$\langle r \rangle_t\equiv\frac{1}{T}\int_0^T r(t)dt.\tag2$$ Let's calculate these two averages. The elliptical orbit is given by $$r(\theta)=\frac{a(1-e^2)}{1-e\cos\theta}\tag3$$ where $a$ is the semimajor axis and $e$ the eccentricity. Substituting this into (1) and doing the integral gives $$\langle r \rangle_\theta\equiv\frac{a(1-e^2)}{2\pi}\int_0^{2\pi}\frac{d\theta}{1-e\cos\theta}=\frac{a(1-e^2)}{2\pi}\frac{2\pi}{\sqrt{1-e^2}}=a\sqrt{1-e^2}.\tag4$$ So the angular average is not equal to $a$; it is less than $a$. To compute the time average, it is easiest to turn it into another integral over $\theta$ by writing it as $$\langle r \rangle_t=\frac{1}{T}\int_0^{2\pi}\frac{r(\theta)d\theta}{\dot\theta}.\tag5$$ where the overdot means a time derivative. To evaluate this, use Kepler's Second Law, which says that $$\frac{dA}{dt}=\frac12r^2\dot\theta=\text{const}=\frac{A}{T}=\frac{\pi ab}{T}=\frac{\pi a^2\sqrt{1-e^2}}{T}\tag6$$ (here $b=a\sqrt{1-e^2}$ is the semiminor axis) so $$\dot\theta=\frac{2\pi a^2\sqrt{1-e^2}}{T}\frac{1}{r^2}.\tag7$$ Putting (7) into (5), we get $$\langle r \rangle_t=\frac{1}{2\pi a^2\sqrt{1-e^2}}\int_0^{2\pi}r(\theta)^3d\theta.\tag8$$ Putting (3) into (8) and doing the integral, we get $$\begin{align}\langle r \rangle_t&=\frac{a(1-e^2)^{5/2}}{2\pi}\int_0^{2\pi}\frac{d\theta}{(1-e\cos\theta)^3}=\frac{a(1-e^2)^{5/2}}{2\pi}\frac{(2+e^2)\pi}{(1-e^2)^{5/2}}\\&=a\left(1+\frac12e^2\right).\tag9\end{align}$$ So the time average of $r$ is not equal to $a$; it is greater than $a$. Thus neither the continuous angular average of $r$ nor the continuous time average of $r$ is equal to $a$. 
The way in which to understand $a$ as an "average" distance is simply as a discrete average of $r$ at two particular points on the orbit, namely aphelion and perihelion: $$a=\frac12(r_\text{max}+r_\text{min}).\tag{10}$$ P.S. I did the two integrals with Mathematica. One way to do them by hand is to turn them into contour integrals around the unit circle in the complex plane and evaluate them using residues.
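The two continuous averages derived in (4) and (9) are easy to check numerically. The sketch below is my own addition, not part of the original answer: it samples one orbit with assumed values $a=1$, $e=0.5$, and compares the angular and time averages against the closed forms $a\sqrt{1-e^2}$ and $a(1+\tfrac12 e^2)$; the time average weights each angular sample by $r^2$, per Kepler's second law (7).

```python
import numpy as np

a, e = 1.0, 0.5
# uniform grid over one full orbit, endpoint excluded so that the mean
# of a periodic function over the grid equals its continuous average
theta = np.linspace(0.0, 2.0 * np.pi, 200000, endpoint=False)
r = a * (1 - e**2) / (1 - e * np.cos(theta))

# angular average: (1/2pi) * integral of r dtheta
r_avg_theta = r.mean()

# time average: dt is proportional to r^2 dtheta (Kepler's second law),
# so <r>_t = integral(r^3 dtheta) / integral(r^2 dtheta)
r_avg_time = (r**3).mean() / (r**2).mean()

print(r_avg_theta)  # ~ a*sqrt(1-e^2) = 0.8660...
print(r_avg_time)   # ~ a*(1+e^2/2) = 1.125
```

Neither average equals $a=1$, matching the answer's conclusion that $a$ is only the discrete average of perihelion and aphelion distances.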
{ "domain": "physics.stackexchange", "id": 72042, "tags": "newtonian-mechanics, newtonian-gravity, orbital-motion" }
Sending turtlesim to Goal Location
Question: When I was studying the TF tutorials, I noticed that a calculation is performed in the TF listener to send turtle2 to the location of turtle1. It is demonstrated below. vel_msg.angular.z = 4.0 * atan2(transform.getOrigin().y(), transform.getOrigin().x()); vel_msg.linear.x = 0.5 * sqrt(pow(transform.getOrigin().x(), 2) + pow(transform.getOrigin().y(), 2)); Is there anybody who could explain this computation? I know what it computes: the orientation angle and the distance between the two turtles. Turtle2 is assumed to be the reference point, and the pose of turtle1 is computed with respect to the frame of turtle2. What I cannot understand is how this mathematical calculation enables turtle2 to approach turtle1 perfectly. There has to be a sensible proof of this. Besides, it is not clear how the constants 4 and 0.5 are chosen. For example, before seeing this computation, I thought that there had to be a P controller or PID controller in this TF listener node. However, there is nothing about those controllers. Instead, there is a distance computation, and I cannot understand how it really works. Is there anyone who can explain this? Originally posted by gktg1514 on ROS Answers with karma: 67 on 2019-11-04 Post score: 0 Answer: The lines of code you refer to act something like a very simple proportional controller. Consider the first line, which sets the angular velocity. The transform.getOrigin() call represents a vector from turtle2 to turtle1 expressed in the turtle2 frame. By grabbing just the x and y components of this vector and feeding them into atan2, we are computing a signed angle between the turtle2 x-axis and the vector pointing to turtle1 -- when this angle is zero, turtle2 is pointing straight towards turtle1. We multiply this by a positive gain (arbitrarily chosen to be 4.0) to turn this into a proportional controller for the orientation of turtle2. 
If we only used this line, this controller would always work to rotate turtle2 to point towards turtle1. The second line is a proportional controller for the linear velocity of turtle2. Here we compute the distance and then multiply it by an arbitrary gain to set the linear velocity. As turtle2 gets closer to turtle1, the velocity tends to zero. Note that this system is not really the type of system on which you should expect your classic PID-style controller to work well. It's nonlinear, has multiple inputs and outputs, and is nonholonomic (not to mention the fact that the actual implementation of the turtlesim responds to a single command input for a finite amount of time and then automatically zeros the command). The system is often called a differential drive or unicycle model, and there is a ton of research on stable position and trajectory tracking controllers for it. This simple implementation is just a rough way of getting turtle2 to follow turtle1. Originally posted by jarvisschultz with karma: 9031 on 2019-11-04 This answer was ACCEPTED on the original site Post score: 1
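To see that these two proportional laws really do drive the follower onto the target, you can simulate the unicycle model directly. The sketch below is my own illustration, not from the tutorial: the gains 4.0 and 0.5 are the tutorial's, while the Euler time step, target position, and step count are assumptions. It expresses the target in the follower's frame (the role transform.getOrigin() plays in the tutorial), applies the two control laws, and integrates the kinematics.

```python
import math

K_ANG, K_LIN = 4.0, 0.5   # the tutorial's arbitrarily chosen proportional gains

def step(x, y, heading, tx, ty, dt=0.01):
    """One Euler step of a unicycle chasing a fixed target (tx, ty)."""
    dx, dy = tx - x, ty - y
    # target position expressed in the follower's body frame
    lx = math.cos(heading) * dx + math.sin(heading) * dy
    ly = -math.sin(heading) * dx + math.cos(heading) * dy
    w = K_ANG * math.atan2(ly, lx)      # angular command: bearing error
    v = K_LIN * math.hypot(lx, ly)      # linear command: distance to target
    heading += w * dt
    x += v * math.cos(heading) * dt
    y += v * math.sin(heading) * dt
    return x, y, heading

x, y, heading = 0.0, 0.0, 0.0
for _ in range(2000):                    # 20 simulated seconds
    x, y, heading = step(x, y, heading, 3.0, 2.0)

dist = math.hypot(3.0 - x, 2.0 - y)
print(dist)  # close to zero: the follower has converged on the target
```

Once the bearing error is regulated to zero, the distance obeys roughly d' = -0.5 d, so it decays exponentially rather than overshooting — which is why the turtle appears to home in "perfectly" without an explicit PID loop.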
{ "domain": "robotics.stackexchange", "id": 33970, "tags": "ros-kinetic, transform" }
What causes XYY Syndrome?
Question: It's obvious how a nondisjunction can result in Klinefelter's syndrome (XXY), but I don't see how this can result in XYY syndrome. Your parents have a collective total of only one Y chromosome, so how can any nondisjunction result in a child with two Y chromosomes? Answer: It comes from a Y nondisjunction in the father. Anaphase nondisjunction of the Y (the sister chromatids failing to separate during meiosis II) yields a YY sperm, which goes with one X from the mother to give XYY.
{ "domain": "biology.stackexchange", "id": 868, "tags": "genetics" }
Stokes's theorem in tensor field
Question: On pg 73 of "Tensors, Relativity and Cosmology" the generalized Stokes's theorem in arbitrary $N$-dimensional space is given by: $$\int_c A_mdx^m=\frac{1}{2}\int_S F_{mn}dS^{mn} \tag{1}$$ where $F_{mn}$ is the curl tensor of the vector $A_m$, $F_{mn}=A_{n,m}-A_{m,n}$ (a comma denotes covariant differentiation here) and $dS^{mn}$ is the contravariant tensor of an infinitesimal element of the surface $S$ ($dS^{mn}=dx^m \wedge dx^n$). In three-dimensional metric space the RHS of (1) is equivalent to the ordinary curl A definition. I tried to expand the RHS of (1) to obtain $$\frac{1}{2} \left(\frac{\partial A_n}{\partial x^m}-\frac{\partial A_m}{\partial x^n} \right)dx^m \wedge dx^n$$ since the Christoffel symbols vanish in three-dimensional Euclidean metric space. It appears that $$\frac{\partial A_n}{\partial x^m} - \frac{\partial A_m}{\partial x^n}$$ is the definition of curl A, but how do I convert $\frac{1}{2}dx^m \wedge dx^n$ into dS to obtain Stokes's theorem in the ordinary vector notation? Answer: Here, you still have the curl in antisymmetric tensor form, not in vector form. Once you are in three dimensions and there are no peculiarities with the metric, you have the correspondence: $$(\operatorname{curl}{\bf A})_i=\frac12\epsilon_{ijk} \left(\frac{\partial A_{k}}{\partial x^j}-\frac{\partial A_{j}}{\partial x^k}\right) $$ where $\epsilon_{ijk}$ is the Levi-Civita tensor. The inverse of this expression is: $$\epsilon_{imn}(\operatorname{curl}{\bf A})_i= \left(\frac{\partial A_{n}}{\partial x^m}-\frac{\partial A_{m}}{\partial x^n}\right) $$ When you use this on your expression, you get $$\frac12\epsilon_{imn} dx^m \wedge dx^n$$ which is just the cross product you require to get to $dS_i$. Note that the antisymmetric property of $\wedge$ goes together nicely with the antisymmetric Levi-Civita symbol and the one-half in front. 
In terms of differential geometry, you convert a 2-form into a 1-form with the Hodge star operator, which is a generalized and more formally correct version of what we did above.
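The index contraction above can be sanity-checked numerically. The sketch below is my addition (not part of the answer): it uses the standard convention $(\operatorname{curl}\mathbf A)_i=\epsilon_{ijk}\,\partial A_k/\partial x^j$, 0-based indices, central finite differences, and an arbitrary made-up test field, and verifies that contracting the curl with the Levi-Civita symbol reproduces the antisymmetrized derivative $\partial_m A_n-\partial_n A_m$ at a sample point.

```python
import math
import itertools

def A(p):
    """An arbitrary smooth test field A(x, y, z)."""
    x, y, z = p
    return (y * z, math.sin(x) * z, x * x * y)

def dA(comp, wrt, p, h=1e-6):
    """Central-difference estimate of dA_comp/dx_wrt at point p."""
    hi = list(p); hi[wrt] += h
    lo = list(p); lo[wrt] -= h
    return (A(hi)[comp] - A(lo)[comp]) / (2 * h)

def eps(i, j, k):
    """Levi-Civita symbol for 0-based indices 0..2."""
    return (i - j) * (j - k) * (k - i) // 2

p = (0.3, -1.2, 0.7)

# standard curl: (curl A)_i = eps_ijk dA_k/dx_j
curl = [sum(eps(i, j, k) * dA(k, j, p) for j in range(3) for k in range(3))
        for i in range(3)]

# the contraction eps_imn (curl A)_i should equal dA_n/dx_m - dA_m/dx_n
ok = all(
    abs(sum(eps(i, m, n) * curl[i] for i in range(3))
        - (dA(n, m, p) - dA(m, n, p))) < 1e-6
    for m, n in itertools.product(range(3), repeat=2)
)
print(ok)  # True
```

This is exactly the delta-identity step $\epsilon_{imn}\epsilon_{ijk}=\delta_{mj}\delta_{nk}-\delta_{mk}\delta_{nj}$, checked component by component.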
{ "domain": "physics.stackexchange", "id": 59126, "tags": "differential-geometry, tensor-calculus" }
Creating a scraper using multithreading
Question: I've written a script in Python using the "threading" module to scrape two sites simultaneously. It parses the two sites flawlessly. Any insight as to how I can improve this script will be appreciated. Here is what I did: import requests ; from lxml import html import threading ; import time Yp_link = "https://www.yellowpages.com/search?search_terms=coffee&geo_location_terms=Los%20Angeles%2C%20CA&page=2" Tuts_link = "http://www.wiseowl.co.uk/videos/" def create_links(url): response = requests.get(url).text tree = html.fromstring(response) for title in tree.cssselect("div.info"): name = title.cssselect("a.business-name span[itemprop=name]")[0].text street = title.cssselect("span.street-address")[0].text phone = title.cssselect("div[itemprop=telephone]")[0].text if title.cssselect("div[itemprop=telephone]") else "" time.sleep(1) print(name, street, phone) def process_links(link): response = requests.get(link).text tree = html.fromstring(response) for titles in tree.xpath("//p[@class='woVideoListDefaultSeriesTitle']"): title = titles.xpath('.//a')[0] time.sleep(1) print(title.text, title.attrib['href']) th1 = threading.Thread(target=create_links, args=(Yp_link,)) th2 = threading.Thread(target=process_links, args=(Tuts_link,)) th1.start() th2.start() th1.join() th2.join() Answer: First of all, I think you are putting the time.sleep() calls in the wrong places - you are putting them inside the loops where you iterate over the extracted elements. The elements are already extracted and no requests are issued at that point - add delays between requests - at the end of your functions. I would also improve the naming - Yp_link and Tuts_link can be renamed to the more explicit YELLOW_PAGES_URL and WISEOWL_URL - note that I think these two need to be defined as proper constants - in upper case. And I would also switch to CSS selector locators for the process_links() function as well. 
As far as imports go, just don't put them on the same line - put each import on its own line, as per the PEP 8 import guidelines.
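A minimal sketch of the restructuring suggested above — constants in upper case, one import per line, the polite delay after each request rather than inside the element loop, and the same two-thread layout. The URLs are the originals, but the fetch/parse work is replaced by a stub (and the delay shortened) so the example runs offline; a real version would call requests.get() where indicated.

```python
import threading
import time

YELLOW_PAGES_URL = "https://www.yellowpages.com/search?search_terms=coffee"
WISEOWL_URL = "http://www.wiseowl.co.uk/videos/"
REQUEST_DELAY = 0.01   # would be ~1 second against a real site

results = []

def scrape(url, pages=3):
    for page in range(pages):
        # a real scraper would call requests.get(...) and parse the tree here
        results.append((url, page))
        time.sleep(REQUEST_DELAY)   # delay between requests, not per element

threads = [threading.Thread(target=scrape, args=(url,))
           for url in (YELLOW_PAGES_URL, WISEOWL_URL)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(len(results))  # 6: three stub "pages" per site
```

Note that list.append is safe to call from both threads here thanks to CPython's GIL, but anything more elaborate than appending would warrant a lock or a queue.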
{ "domain": "codereview.stackexchange", "id": 27370, "tags": "python, python-3.x, multithreading, web-scraping" }
Copying 80 bytes as fast as possible
Question: I am running a math-oriented computation that spends a significant amount of its time doing memcpy, always copying 80 bytes from one location to the next, an array of 20 32-bit ints. The total computation takes around 4-5 days using both cores of my i7, so even a 1% speedup results in about an hour saved. By using the memcpy in this paper by Intel, I was able to speed it up by about 25%, and dropping the size argument and simply declaring it inside the function also seems to have a small effect. However, I feel I am not utilising the fact that my copying operations are always the same size. That said, I can't come up with a better way. void *memcpyi80(void* __restrict b, const void* __restrict a){ size_t n = 80; char *s1 = b; const char *s2 = a; for(; 0<n; --n)*s1++ = *s2++; return b; } Some other things that may be useful for optimization: I use an Intel Core i7-2620M, based on Sandy Bridge. I don't care about portability at all. I only care about the 16 least significant bits of every int. The other 16 are useless to me and are permanently zeroed out. Even though I copy 20 32-bit ints per memcpy invocation, I only care about the first 17. I have added 3 as it helps with alignment and therefore speed. I use GCC 4.6 on Windows 7. Any ideas? 
UPDATE: I think this is the assembly output (never done this before, there may be more than you need): memcpyi80: pushq %r12 .seh_pushreg %r12 pushq %rbp .seh_pushreg %rbp pushq %rdi .seh_pushreg %rdi pushq %rsi .seh_pushreg %rsi pushq %rbx .seh_pushreg %rbx .seh_endprologue movq %rdx, %r9 movq %rcx, %rax negq %r9 andl $15, %r9d je .L165 movzbl (%rdx), %ecx leaq -1(%r9), %r10 movl $79, %esi andl $7, %r10d cmpq $1, %r9 movl $79, %ebx leaq 1(%rdx), %r8 movl $1, %r11d movb %cl, (%rax) leaq 1(%rax), %rcx jbe .L159 testq %r10, %r10 je .L160 cmpq $1, %r10 je .L250 cmpq $2, %r10 je .L251 cmpq $3, %r10 je .L252 cmpq $4, %r10 je .L253 cmpq $5, %r10 je .L254 cmpq $6, %r10 je .L255 movzbl (%r8), %r8d movl $2, %r11d movb %r8b, (%rcx) leaq 2(%rax), %rcx leaq 2(%rdx), %r8 .L255: movzbl (%r8), %ebx addq $1, %r11 addq $1, %r8 movb %bl, (%rcx) addq $1, %rcx .L254: movzbl (%r8), %r10d addq $1, %r11 addq $1, %r8 movb %r10b, (%rcx) addq $1, %rcx .L253: movzbl (%r8), %edi addq $1, %r11 addq $1, %r8 movb %dil, (%rcx) addq $1, %rcx .L252: movzbl (%r8), %ebp addq $1, %r11 addq $1, %r8 movb %bpl, (%rcx) addq $1, %rcx .L251: movzbl (%r8), %r12d addq $1, %r11 addq $1, %r8 movb %r12b, (%rcx) addq $1, %rcx .L250: movzbl (%r8), %ebx addq $1, %r8 movb %bl, (%rcx) movq %rsi, %rbx addq $1, %rcx subq %r11, %rbx addq $1, %r11 cmpq %r11, %r9 jbe .L159 .p2align 4,,10 .L160: movzbl (%r8), %r12d movb %r12b, (%rcx) movzbl 1(%r8), %ebp movb %bpl, 1(%rcx) movzbl 2(%r8), %edi movb %dil, 2(%rcx) movzbl 3(%r8), %ebx movb %bl, 3(%rcx) leaq 7(%r11), %rbx addq $8, %r11 movzbl 4(%r8), %r10d movb %r10b, 4(%rcx) movq %rsi, %r10 movzbl 5(%r8), %r12d subq %rbx, %r10 movq %r10, %rbx movb %r12b, 5(%rcx) movzbl 6(%r8), %ebp movb %bpl, 6(%rcx) movzbl 7(%r8), %edi addq $8, %r8 movb %dil, 7(%rcx) addq $8, %rcx cmpq %r11, %r9 ja .L160 .L159: movl $80, %r12d subq %r9, %r12 movq %r12, %rsi shrq $4, %rsi movq %rsi, %rbp salq $4, %rbp testq %rbp, %rbp je .L161 leaq (%rdx,%r9), %r10 addq %rax, %r9 movl $1, %r11d leaq -1(%rsi), 
%rdi vmovdqa (%r10), %xmm0 movl $16, %edx andl $7, %edi cmpq $1, %rsi vmovdqu %xmm0, (%r9) jbe .L256 testq %rdi, %rdi je .L162 cmpq $1, %rdi je .L244 cmpq $2, %rdi je .L245 cmpq $3, %rdi je .L246 cmpq $4, %rdi je .L247 cmpq $5, %rdi je .L248 cmpq $6, %rdi je .L249 vmovdqa 16(%r10), %xmm3 movl $2, %r11d movl $32, %edx vmovdqu %xmm3, 16(%r9) .L249: vmovdqa (%r10,%rdx), %xmm4 addq $1, %r11 vmovdqu %xmm4, (%r9,%rdx) addq $16, %rdx .L248: vmovdqa (%r10,%rdx), %xmm5 addq $1, %r11 vmovdqu %xmm5, (%r9,%rdx) addq $16, %rdx .L247: vmovdqa (%r10,%rdx), %xmm0 addq $1, %r11 vmovdqu %xmm0, (%r9,%rdx) addq $16, %rdx .L246: vmovdqa (%r10,%rdx), %xmm1 addq $1, %r11 vmovdqu %xmm1, (%r9,%rdx) addq $16, %rdx .L245: vmovdqa (%r10,%rdx), %xmm2 addq $1, %r11 vmovdqu %xmm2, (%r9,%rdx) addq $16, %rdx .L244: vmovdqa (%r10,%rdx), %xmm3 addq $1, %r11 vmovdqu %xmm3, (%r9,%rdx) addq $16, %rdx cmpq %r11, %rsi jbe .L256 .p2align 4,,10 .L162: vmovdqa (%r10,%rdx), %xmm2 addq $8, %r11 vmovdqu %xmm2, (%r9,%rdx) vmovdqa 16(%r10,%rdx), %xmm1 vmovdqu %xmm1, 16(%r9,%rdx) vmovdqa 32(%r10,%rdx), %xmm0 vmovdqu %xmm0, 32(%r9,%rdx) vmovdqa 48(%r10,%rdx), %xmm5 vmovdqu %xmm5, 48(%r9,%rdx) vmovdqa 64(%r10,%rdx), %xmm4 vmovdqu %xmm4, 64(%r9,%rdx) vmovdqa 80(%r10,%rdx), %xmm3 vmovdqu %xmm3, 80(%r9,%rdx) vmovdqa 96(%r10,%rdx), %xmm2 vmovdqu %xmm2, 96(%r9,%rdx) vmovdqa 112(%r10,%rdx), %xmm1 vmovdqu %xmm1, 112(%r9,%rdx) subq $-128, %rdx cmpq %r11, %rsi ja .L162 .L256: addq %rbp, %rcx addq %rbp, %r8 subq %rbp, %rbx cmpq %rbp, %r12 je .L163 .L161: movzbl (%r8), %edx leaq -1(%rbx), %r9 andl $7, %r9d movb %dl, (%rcx) movl $1, %edx cmpq %rbx, %rdx je .L163 testq %r9, %r9 je .L164 cmpq $1, %r9 je .L238 cmpq $2, %r9 je .L239 cmpq $3, %r9 je .L240 cmpq $4, %r9 je .L241 cmpq $5, %r9 je .L242 cmpq $6, %r9 je .L243 movzbl 1(%r8), %edx movb %dl, 1(%rcx) movl $2, %edx .L243: movzbl (%r8,%rdx), %esi movb %sil, (%rcx,%rdx) addq $1, %rdx .L242: movzbl (%r8,%rdx), %r11d movb %r11b, (%rcx,%rdx) addq $1, %rdx .L241: movzbl (%r8,%rdx), 
%r10d movb %r10b, (%rcx,%rdx) addq $1, %rdx .L240: movzbl (%r8,%rdx), %edi movb %dil, (%rcx,%rdx) addq $1, %rdx .L239: movzbl (%r8,%rdx), %ebp movb %bpl, (%rcx,%rdx) addq $1, %rdx .L238: movzbl (%r8,%rdx), %r12d movb %r12b, (%rcx,%rdx) addq $1, %rdx cmpq %rbx, %rdx je .L163 .p2align 4,,10 .L164: movzbl (%r8,%rdx), %r9d movb %r9b, (%rcx,%rdx) movzbl 1(%r8,%rdx), %r12d movb %r12b, 1(%rcx,%rdx) movzbl 2(%r8,%rdx), %ebp movb %bpl, 2(%rcx,%rdx) movzbl 3(%r8,%rdx), %edi movb %dil, 3(%rcx,%rdx) movzbl 4(%r8,%rdx), %r10d movb %r10b, 4(%rcx,%rdx) movzbl 5(%r8,%rdx), %r11d movb %r11b, 5(%rcx,%rdx) movzbl 6(%r8,%rdx), %esi movb %sil, 6(%rcx,%rdx) movzbl 7(%r8,%rdx), %r9d movb %r9b, 7(%rcx,%rdx) addq $8, %rdx cmpq %rbx, %rdx jne .L164 .L163: popq %rbx popq %rsi popq %rdi popq %rbp popq %r12 ret .L165: movq %rdx, %r8 movl $80, %ebx jmp .L159 .seh_endproc .p2align 4,,15 .globl memcpyi .def memcpyi; .scl 2; .type 32; .endef .seh_proc memcpyi UPDATE: By building on Peter Alexander's solution and combining it with ideas from around the thread, I have produced this: void memcpyi80(void* __restrict b, const void* __restrict a){ __m128 *s1 = b; const __m128 *s2 = a; *s1++ = *s2++; *s1++ = *s2++; *s1++ = *s2++; *s1++ = *s2++; *s1++ = *s2++; } The speedup is small but measurable (about 1%). Now I guess my next temptation is to find how to use __m256 AVX types so I can do it in 3 steps rather than 5. UPDATE: The __m256 type requires alignment on the 32-bit barrier, which makes things slower, so it seems __m128 is a sweet spot. Answer: The fastest way to do this would be to align your data on 16-byte boundaries, then the entire copy just becomes 5 copies through XMM registers. This is over twice as fast as your version on my machine. 
Store your data like this: #include <xmmintrin.h> struct Data { union { int i[20]; __m128 v[5]; }; }; Then the copy function is just: void memcpyv5(__m128* __restrict b, const __m128* __restrict a) { __m128 t0 = a[0]; __m128 t1 = a[1]; __m128 t2 = a[2]; __m128 t3 = a[3]; __m128 t4 = a[4]; b[0] = t0; b[1] = t1; b[2] = t2; b[3] = t3; b[4] = t4; } // Example Data dst, src; memcpyv5(dst.v, src.v); Assembly output: __Z8memcpyv5PU8__vectorfPKS_: LFB493: pushq %rbp LCFI2: movq %rsp, %rbp LCFI3: movaps 16(%rsi), %xmm3 movaps 32(%rsi), %xmm2 movaps 48(%rsi), %xmm1 movaps 64(%rsi), %xmm0 movaps (%rsi), %xmm4 movaps %xmm4, (%rdi) movaps %xmm3, 16(%rdi) movaps %xmm2, 32(%rdi) movaps %xmm1, 48(%rdi) movaps %xmm0, 64(%rdi) leave ret
{ "domain": "codereview.stackexchange", "id": 25586, "tags": "optimization, c, bitwise, sse" }
How to get angles optimized by classical optimizer from QisKit's QAOA Module?
Question: I have the following simple optimization in QAOA: from qiskit_optimization.algorithms import MinimumEigenOptimizer # from qiskit_aer import Aer from qiskit.algorithms.minimum_eigensolvers import QAOA from qiskit.algorithms.optimizers import COBYLA from qiskit.primitives import Sampler n_qubits = len(G.nodes()) problem = QuadraticProgram() _ = [problem.binary_var("x{}".format(i)) for i in range(n_qubits)] problem.maximize( linear=nx.adjacency_matrix(G).dot(np.ones(n_qubits)), quadratic=-nx.adjacency_matrix(G), ) meo = MinimumEigenOptimizer(QAOA(sampler=Sampler(), optimizer=COBYLA(maxiter=100))) result = meo.solve(problem) print(result.prettyprint()) print("\ndisplay the best 5 solution samples") for sample in result.samples[:5]: print(sample) I want to get the actual angles found by QAOA, the $\beta, \gamma$. How do I get these from the results of this algorithm? I'm not seeing it in the docs: https://qiskit.org/documentation/stubs/qiskit.algorithms.minimum_eigensolvers.QAOA.html#qiskit.algorithms.minimum_eigensolvers.QAOA Answer: To get the optimized parameters from QAOA you can also do this. The MinimumEigenOptimizer returns a MinimumEigenOptimizationResult, which has a field min_eigen_solver_result which, as the linked API reference states, is the result obtained from the underlying algorithm. Now QAOA extends SamplingVQE and provides an identical result object, a SamplingVQEResult. The final β,γ can be found in the field optimal_point, which is just the list of floats the optimizer was working with, or in optimal_parameters, which is a dictionary mapping the β,γ parameters to their values. Your code sample did not run; I edited it to add a G taken from the Optimization MaxCut tutorial and the other imports. 
from qiskit_optimization import QuadraticProgram from qiskit_optimization.algorithms import MinimumEigenOptimizer from qiskit.algorithms.minimum_eigensolvers import QAOA from qiskit.algorithms.optimizers import COBYLA from qiskit.primitives import Sampler import numpy as np import networkx as nx n = 4 G = nx.Graph() G.add_nodes_from(np.arange(0, n, 1)) elist = [(0, 1, 1.0), (0, 2, 1.0), (0, 3, 1.0), (1, 2, 1.0), (2, 3, 1.0)] G.add_weighted_edges_from(elist) n_qubits = len(G.nodes()) problem = QuadraticProgram() _ = [problem.binary_var("x{}".format(i)) for i in range(n_qubits)] problem.maximize( linear=nx.adjacency_matrix(G).dot(np.ones(n_qubits)), quadratic=-nx.adjacency_matrix(G), ) meo = MinimumEigenOptimizer(QAOA(sampler=Sampler(), optimizer=COBYLA(maxiter=100))) result = meo.solve(problem) print(result.prettyprint()) print("\ndisplay the best 5 solution samples") for sample in result.samples[:5]: print(sample) # Print the final QAOA parameters print(result.min_eigen_solver_result.optimal_point) print(result.min_eigen_solver_result.optimal_parameters) the extra prints I added printed this for me [5.60276426 4.22978775] {ParameterVectorElement(β[0]): 5.602764261565667, ParameterVectorElement(γ[0]): 4.229787753395598}
{ "domain": "quantumcomputing.stackexchange", "id": 4808, "tags": "qiskit, qaoa" }
Why do we use equal sign?
Question: In these two lines, the author has used the equal sign inside the parentheses. self.vel_pub = rospy.Publisher(name=self.cmd_vel_topic, data_class=Twist, queue_size=10) and self.front_topic_subscriber = rospy.Subscriber(name=self.front_sensor_topic, data_class=Range, callback=self.front_callback) for example: name=self.cmd_vel_topic Why did he do that? Originally posted by RoboTBiLL on ROS Answers with karma: 5 on 2022-07-12 Post score: 0 Answer: The author is telling the function (in this case the constructor for the rospy Subscriber class) which particular argument each value they are passing in is for. If you look at the documentation for the Subscriber class you will notice the __init__() function has name, data_class, and callback as arguments in the function definition. You don't necessarily have to do that, but then you need to be careful about the order in which you pass the values to the function. Basically, the author is explicitly telling the function which argument each value they pass in relates to. Docs: Subscriber Originally posted by Stkr22 with karma: 36 on 2022-07-12 This answer was ACCEPTED on the original site Post score: 2
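The same mechanism can be seen with any plain Python function. The function below is a hypothetical stand-in (not the real rospy constructor) that merely reuses the same parameter names: keyword arguments bind each value to a parameter by name, so their order stops mattering.

```python
def subscriber(name, data_class, callback=None, queue_size=10):
    """Hypothetical stand-in with the same parameter names as rospy.Subscriber."""
    return (name, data_class, callback, queue_size)

def on_range(msg):
    pass

# positional: values are matched to parameters strictly by order
a = subscriber("/front_sensor", float, on_range)

# keyword: each value is explicitly tied to a parameter, so order is free
b = subscriber(callback=on_range, name="/front_sensor", data_class=float)

print(a == b)  # True
```

Keyword arguments also let you skip defaulted parameters (queue_size above) without padding the call with placeholders, which is why they are common with constructors that take many optional settings.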
{ "domain": "robotics.stackexchange", "id": 37845, "tags": "ros, python3" }
What are pros and cons of using a multi-head neural network versus a single neural network for multi-label classification?
Question: I haven't been able to find a good discussion specifically comparing the two (only one describing a classification and regression problem). I am training a classifier to learn both age and gender based on genomic data. Every sample has a known age and known gender (20 classes in total). Currently, I am using a single neural network with a sigmoid activation in the last layer and a binary_crossentropy loss. This works fine. However, I also see people using multi-head neural networks where, for example, a set of shared layers would split into either two additional dense layers or two final layers for classification – each with an independent loss (in my case likely a categorical_ce). What I am unsure of, though, are the advantages and disadvantages between the two (maybe advantages and disadvantages are not the right words – the actual differences between the two, and when one might use one over the other, might be more appropriate). I want to be able to calculate the usual metrics – TP, FP, etc. – after training; presumably it would be easier with two heads at the end of the network, as you can work with two independent sets of predictions to calculate these? Answer: If I understood things correctly: You have a task in which you need to estimate two values, gender and age. Your question revolves around the difference between networks which share layers for both tasks, and whether the shared layers should be followed by independent linear layers. Firstly, using shared layers in the networks of two related tasks may be useful to create more general latent representations in the network's hidden layers. It can also speed up training: the shared layers will learn useful features more quickly than if there were two separate networks, one for each task. 
Some examples which demonstrate the potential benefit of shared networks can be found in the papers for two RL algorithms, A3C and PPG (PPG adds some extra tricks to the shared layers): http://proceedings.mlr.press/v48/mniha16.html https://arxiv.org/abs/2009.04416v1 Whether the shared layers should be followed by many separate linear layers or a single one, at least for me, isn't something easy to deduce. Intuitively, having a single linear mapping after the shared layers will help prevent over-fitting because the shared layers induce more general features in those layers. Having many separate layers, on the other hand, may be useful if there is some complex non-linear mapping between the final output and the latent features from the shared layers. I think the best way to find out is just to experiment with it and see which gives the best results. A little bit anecdotal, but, an example from my experience: Shared layers are commonly used in RL for actor-critic algorithms. A network takes an image as input and outputs an action and a value (the output for the actor and the critic, respectively). Generally a single linear mapping from the shared layers works just fine, even better than more complex networks. =========== Edit In pseudo-code, these are the networks that came to my mind:

# Network 1
shared_layer = Linear(input_dim, latent_dim)
output_layer = Linear(latent_dim, m + n)

# Network 2
shared_layer = Linear(input_dim, latent_dim)
output1_layer = Linear(latent_dim, m)
output2_layer = Linear(latent_dim, n)

And, in the question you mention using BCE with Network 1 and changing to CE for one of the outputs of Network 2. The networks themselves are equal to one another. One implementation might be more practical than the other, but they are the same. Depending on the framework you use, you can use either BCE or CE loss in Network 1 and 2. 
In Network 1 this would mean taking the output of the last layer, slicing the outputs for age and gender into two variables, and applying a loss function to each of them. That said, I would expect to see a difference between using CE or BCE for age classification. When training with BCE it's possible that one estimator turns out to be more 'optimistic' or 'pessimistic' and gives overall high/low probabilities (this will depend on factors such as whether there is class imbalance, whether it is taken into account in the training procedure, etc...). And that will mean that when you take the maximum probability of the age outputs there will be some bias. CE seems to me to be a more appropriate choice for age classification. Using CE will not prevent bias if there is class imbalance in your data, but with it, it is more straightforward to handle these issues.
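The slicing idea above can be sketched in a few lines of plain Python (the logits are made-up numbers standing in for a model's last-layer output): the single 20-way output vector is split into a 2-way gender head and an 18-way age head, and a softmax normalizes each slice independently.

```python
import math

def softmax(logits):
    m = max(logits)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

# Made-up joint output of a single network: 2 gender logits + 18 age logits.
joint_logits = [0.2, 1.1] + [0.05 * i for i in range(18)]

# Slice the single output vector into two "heads" and normalize each
# slice separately, giving two independent probability distributions.
gender_probs = softmax(joint_logits[:2])
age_probs = softmax(joint_logits[2:])
```

Each head then yields its own set of predictions, from which per-task metrics (TP, FP, etc.) can be computed separately.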
{ "domain": "ai.stackexchange", "id": 2862, "tags": "binary-classification, multi-label-classification, binary-crossentropy" }
Project Euler #35 in Common Lisp
Question: To start with Common Lisp I am doing Project Euler using this language. Usually I manage to solve problems but I am quite sure that my code is not as efficient as it could be in Common Lisp. That is why I need a review from experienced lispers. This is my code for problem 35. Please offer any improvements.

(defun prime-p (n)
  (cond ((= n 1) nil)
        ((= n 2) t)
        ((evenp n) nil)
        (t (loop for i from 3 to (isqrt n) by 2
                 never (zerop (mod n i))))))

(defun list->num (lst)
  (loop for i in lst
        for p = (- (length lst) 1) then (- p 1)
        sum (* i (expt 10 p))))

(defun num->list (n)
  (loop for c across (write-to-string n)
        collect (parse-integer (string c))))

(defun rotate (lst)
  (append (last lst) (butlast lst)))

(defun number-rotations (n)
  (let* ((digits (num->list n))
         (digits-count (length digits)))
    (loop repeat digits-count
          for rotated = digits then (rotate rotated)
          collect (list->num rotated))))

(defun problem-35 (limit)
  (let ((hash-primes (make-hash-table)))
    (loop for n from 1 to limit
          if (prime-p n)
          do (setf (gethash n hash-primes) t))
    (loop for p being the hash-keys in hash-primes
          if (loop for n in (number-rotations p)
                   always (gethash n hash-primes))
          collect p)))

Answer: In list->num you can count down with something like for i downfrom n.

(defun num->list (n)
  (loop for c across (write-to-string n)
        collect (parse-integer (string c))))

In the above function you can just collect (digit-char-p c). The function returns the digit value as a number.
{ "domain": "codereview.stackexchange", "id": 31391, "tags": "programming-challenge, lisp, common-lisp" }
Size of the universe 13 billion years ago
Question: When we look at the sky in opposite directions, we can see early galaxies that were formed about 13 billion years ago. At that time, the distance between two such galaxies at the opposite ends of the universe was only about 45 million light years. That implies that most of the billions of galaxies that we see today were inside such a small volume. Is my conclusion correct? If it is correct, I cannot make sense of it. If we were able to pack all galaxies in such a volume, assuming a diameter per galaxy of 100000 light years, then we could only fit 125 million galaxies there. How is this possible? Were galaxies much more dense and smaller then? Answer: The first galaxies formed at redshifts of 10-20. Such galaxies are now at proper distances (comoving radial distance) of 30-35 billion light years and thus would be at proper distances of 1.7-3 billion light years when they formed. Thus I think the premise of your question is flawed; although the first galaxies probably were considerably smaller than the big spirals and ellipticals in the present universe that are likely built from many mergers.
{ "domain": "physics.stackexchange", "id": 92190, "tags": "cosmology, space-expansion, estimation, big-bang" }
What does it mean to apply a creation or annihilation operator to a free field, e.g. $\langle 0|a(p)\varphi(x)| 0 \rangle$?
Question: I am self studying Quantum Field Theory, and I am starting to get a little lost. So far, I have studied free fields and some basic computations involving them, such as creation and annihilation operators. My understanding is that the free field, $\varphi$, models a collection of particles that do not interact. Hence if $a$, $a^\dagger$ respectively, represent the annihilation and creation operators, then $a\varphi$ lowers the number of particles in $\varphi$ by 1 and similarly for the creation operator. In Lemma 13.7.2 of Talagrand's book What Is a Quantum Field Theory?, he gives the formulas $$\langle 0|a(p)a^\dagger(p')| 0 \rangle = (2\pi)^3\delta^{(3)}(p-p') \\ \langle 0|a(p)\varphi(x)| 0 \rangle = \frac{1}{\sqrt{2 \omega_p}}\exp(i(x,p)) \\ \langle 0|\varphi(x)a^\dagger(p)| 0 \rangle = \frac{1}{\sqrt{2 \omega_p}}\exp(-i(x,p)). $$ He provides a proof of these statements, but I am trying to interpret their physical meaning (if they have one at all). For example, in the first case it seems we are calculating the probability that we annihilate a particle of momentum $p$ that has been created with momentum $p'$? For the last two I have no guess on what they physically mean. Part of my confusion could be due to the fact that I am a mathematics student, and hence bra-ket notation is very unfamiliar to me. Answer: When you hit $|0\rangle$ with $\hat{a}^{\dagger}(p)$, it injects a particle into the system moving with momentum $p$. Likewise, when you hit something with $\hat{a}(p)$, it removes a particle that already is moving with that momentum. 
Note that you can then represent a state containing a single particle with general momental wave function $\psi_p$ via $$|\psi\rangle := \int_{p=-\infty}^{\infty} \psi_p(p) [\hat{a}^{\dagger}(p) |0\rangle ]\ dp$$ Think about it like using a paintbrush: you "paint" the wave function onto the vacuum by sweeping it over the whole $p$-space while weighting with the weight of the function you want to make (how hard your "paintbrush" pushes in; with this brush you can push in "complexly hard", which ordinary paintbrushes can't). If you took that state, and tried to annihilate any single momentum value via the corresponding annihilator $\hat{a}(p)$, it would only make it impossible for it to have that exact value of momentum. The field would still contain 1 particle. To erase it all, you'd have to do this: $$|\text{0-equivalent}\rangle := \int_{p=-\infty}^{\infty} \hat{a}(p) |\psi\rangle\ dp$$ think like you're "cleaning up" the particle's wave function like using an eraser or mop. No probability is left anywhere for it to be after this, so the particle has been completely removed. (The "equivalent" label is because I think there will be some constant factors out front; so the actual state in strict terms, which is a ray or even better an [extremal] density operator, is the same, even if the ket vector is not) As for inner products, $\langle \phi | \psi\rangle$, the meaning is the same as for regular QM: if you tried to measure the field state $|\phi\rangle$ for whether it was the field state $|\psi\rangle$, what would be the probability of obtaining "yes". When we put a pair of operators in there like $$\langle 0|\hat{a}(p_2) \hat{a}^{\dagger}(p_1)|0\rangle$$ you have to be a bit careful: you say "is it the probability to annihilate a particle with one momentum after creating it with another". 
No, because you have to remember that in $$\langle \phi | \psi\rangle$$ the thing on the left is actually a bra - it's a dual vector, living in the Hilbert dual space $H^{*}$. That is, $\langle \phi|$, is acting, in its linear-functional way, on the ket vector $|\psi\rangle$. And that little $\dagger$ notation there on the creation and annihilation operator is not just for show: that actually literally means that the creation operator is the Hermitian conjugate of the annihilation operator, thus it follows that when it comes to the dual space, $\hat{a}(p)$ actually acts as creation operator, and $\hat{a}^{\dagger}(p)$ acts as annihilation operator. That is to say, the roles are exactly reversed! Hence the inner product you give actually means "what is the probability [better: quantum amplitude] I will observe that a particle which has been created with momentum definitively equal to $p_1$, to actually instead be one created with a momentum equal to $p_2$?", because $\hat{a}(p_2)$, even though it lacks a dagger, creates a particle when the dual vector $\langle 0|$ hits it coming in from the left. And the expression on the right then should make perfect sense: it is probability zero so long as $p_1 \ne p_2$, because that just can't happen! Exercise: tell me what $$\langle 0| \hat{a}^{\dagger}(p_1) \hat{a}^{\dagger}(p_2) |0\rangle$$ means. Leave a comment. (It may not be what you first think!) FWIW, insofar as the field operator $\hat{\phi}(x)$ ... that's different. That's a Hermitian operator; and it can and does actually belong to the observable algebra. $\hat{\phi}(x)$ is the value the quantum field takes at $x$, understood as a quantum observable just like any other. Hence hitting $|0\rangle$ with it alone makes about as much sense as trying to interpret what $\hat{p}|\psi\rangle$ "physically means" in non-relativistic quantum mechanics. We use the operators for their algebraic properties, not their actual "action". 
Thus, we can say that when we are thinking of $\hat{a}^{\dagger}(p)$ and $\hat{a}(p)$, we are thinking of particles, in momentum space representation. When we think of $\hat{\phi}(x)$, we are thinking of fields that fill position space. The really cool bit is how the two go together! Finally, regarding particles in position space ... position is funny, and you'd get a number of views on it, because there isn't just one mathematical way to relate to position space. You'll see the Newton-Wigner operator, and then you'll see it has caveats (in particular, perfectly "localized" Newton-Wigner positions are not orthogonal, i.e. there is probability to measure one "localized" particle "at" position $x_1$ as being "at" position $x_2 \ne x_1$!), versus many who say "just forget about position at all". That's probably the best summary insofar as "consensus" goes. (Nonetheless, I don't really like that :D The way I personally like to think about it may be a bit unusual, and I am not even entirely sure it truly works, so take this with a grain of salt. I won't call it "original", just "unusual", because especially H. Nikolic pretty much gave the gist [we have to talk of a "probability density function in space-time"], just not the precise details I've laid out, and moreover he was working in the context of Bohmian mechanics. Others [Stueckelberg? iirc] seem to "point" at it with brief footnotes about "promoting time to an operator", but then back off from it. And because I have nobody to bounce the ideas off, I have no way to know if or how valid my specific approach is, so I'd rather not just post dilettante junk here, hehe.)
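For what it's worth, the first of the quoted formulas follows in one line, assuming the commutation relation $[\hat{a}(p), \hat{a}^{\dagger}(p')] = (2\pi)^3\delta^{(3)}(p-p')$ that matches this normalization, together with $\hat{a}(p)|0\rangle = 0$ and $\langle 0|0\rangle = 1$:

$$\langle 0|\hat{a}(p)\,\hat{a}^{\dagger}(p')|0\rangle = \langle 0|\,[\hat{a}(p),\hat{a}^{\dagger}(p')]\,|0\rangle + \langle 0|\hat{a}^{\dagger}(p')\,\hat{a}(p)|0\rangle = (2\pi)^3\delta^{(3)}(p-p'),$$

since the second term vanishes on the vacuum.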
{ "domain": "physics.stackexchange", "id": 92659, "tags": "quantum-field-theory, hilbert-space, operators, fourier-transform, vacuum" }
Dispersion relation for electron plasma waves at large and small wavelengths
Question: I am currently reading F. Chen's Introduction to Plasma Physics and Controlled Fusion. On page 83, he derives the dispersion relation for the electron plasma wave: $$ \omega^2 = \omega^2_p + \frac{3}{2}k^2v_{\text{th}}^2 $$ where $v_{\text{th}} = \sqrt{\frac{2k_B T_e}{m_e}}$ represents the thermal velocity. Note that $k_B$ is the Boltzmann constant and $k$ is the wave number. We can then derive the expression for the group velocity by: $$ v_g = \frac{d\omega}{dk} = \frac{3}{2} \frac{k}{\omega} v_{\text{th}}^2 $$ This is the velocity at which information is carried by the electron plasma wave. He then proceeds to state the following: (a) At large $k$ (small $\lambda$), information travels essentially at the thermal velocity (b) At small $k$ (large $\lambda$), information travels more slowly than $v_{\text{th}}$ even though the phase velocity $v_\phi = \frac{\omega}{k}$ is greater than $v_{th}$. This is because the density gradient is small at large $\lambda$ and thermal motions carry very little net momentum into the adjacent layers. For (a), how does he arrive at the conclusion that: $$ v_{g} = \frac{3}{2} \frac{k}{\omega} v_{\text{th}}^2 \approx v_{th} $$ for large $k$? For (b), I do not really understand the argument regarding the density gradient and require some explanation, if $v_\phi = \frac{\omega}{k}$ is large because $k$ is large, can't the conclusion that $v_g << v_{th}$ be deduced from: $$ v_{g} = \frac{3}{2} \frac{k}{\omega} v_{\text{th}}^2 << v_{th} $$ without referring to the density gradient? 
Answer: First, keep the derivative in terms of the wavenumber, $k$, and the thermal speed, $v_{th}$ to see that the group speed is given by: $$ \frac{ \partial \omega }{ \partial k } = \frac{ 3 \ k \ v_{th}^{2} }{ 2 \sqrt{ \omega_{pe}^{2} + \tfrac{ 3 }{ 2 } k^{2} \ v_{th}^{2} } } \tag{0} $$ (a) At large $k$ (small $\lambda$), information travels essentially at the thermal velocity Take the limit of Equation 0 above as $k \rightarrow \infty$ and you will get a result proportional to $v_{th}$. (b) At small $k$ (large $\lambda$), information travels more slowly than $v_{th}$ even though the phase velocity $v_{\phi} = \tfrac{ \omega }{ k }$ is greater than $v_{th}$. This is because the density gradient is small at large $\lambda$ and thermal motions carry very little net momentum into the adjacent layers. Take the limit of Equation 0 above as $k \rightarrow 0$ and you will get a result that asymptotically approaches zero. For (b), I do not really understand the argument regarding the density gradient and require some explanation... It's a relative statement. When the wavelength is large, the density oscillation of the Langmuir waves are spread out over large distances, thus the density gradients will be small (i.e., since gradients are the change in some quantity over distance). It's another way of saying that the change in density is gradual relative to some other physically relevant parameter (e.g., gyroradius). ...if $v_{\phi} = \tfrac{ \omega }{ k }$ is large because $k$ is large... I think you have this backwards. In the limit of large $k$ for constant $\omega$, the phase speed will go to zero. ...can't the conclusion that $v_{g} \ll v_{th}$ be deduced from... without referring to the density gradient? Okay, I think you meant to say small $k$ here but no matter. Physically, the limit of large $k$ (small $\lambda$) is just saying that the wavelengths asymptotically approach some lower boundary. 
In a plasma, the smallest physically meaningful wavelength is roughly $2 \pi \ \lambda_{De}$, where $\lambda_{De}$ is the Debye length. Generally Langmuir wave wavelengths approach the electron skin depth, which tends to be much much larger than the Debye length in most plasmas. Since Langmuir waves are really just longitudinal thermal oscillations, it makes physical sense that information would not exceed the local thermal speed of the electrons. In the opposite limit (i.e., small $k$, large $\lambda$), the electrons can only oscillate so fast and so far before they would decouple from the local ions, i.e., you can't arbitrarily pull electrons out of a plasma without affecting the ions and the local quasi-neutral system. Eventually the electric fields will always do work to eliminate themselves. Even so, if the wavelength gets extremely large, the electrons have to cover larger distances per unit time than the local thermal speed would allow, thus preventing the net transfer of matter/energy. The end result is a mode with a finite phase speed but zero group speed.
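The two limits of Equation 0 are easy to check numerically. A quick sketch in normalized units ($\omega_{pe} = v_{th} = 1$); note the large-$k$ limit is $\sqrt{3/2}\,v_{th} \approx 1.22\,v_{th}$, i.e. "essentially" the thermal speed, while the small-$k$ limit goes to zero:

```python
import math

def v_group(k, w_pe=1.0, v_th=1.0):
    """Equation 0: d(omega)/dk for omega^2 = w_pe^2 + (3/2) k^2 v_th^2."""
    return 3.0 * k * v_th**2 / (2.0 * math.sqrt(w_pe**2 + 1.5 * k**2 * v_th**2))

large_k = v_group(1e6)   # approaches sqrt(3/2) * v_th: "essentially" thermal
small_k = v_group(1e-6)  # approaches zero: finite phase speed, vanishing group speed
```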
{ "domain": "physics.stackexchange", "id": 79968, "tags": "fluid-dynamics, waves, plasma-physics, dispersion" }
Is my friend right about omitting $c^2$ in world famous tiny equation?
Question: I know $E = mc^2$ says that inertial mass of a system is equal to the total energy content of a system in its rest frame. My friend told me the $c^2$ can be omitted from this equation because that's just an `artifact' when measuring inertia and energy in different units. Is he right? Answer: This is basically a philosophical question, but I'm going to take what will probably be an unpopular position that your friend's reason is basically wrong in the context of an introduction to special relativity. Sure, you can work in units where c = 1, and then the equation $E = m c^2$ reduces to $E = m$. But that fact alone is kind of vacuous: you can also work in units $v = 1$, where $v$ equals 1 m/s, and then $E = m c^2$ reduces to the technically equally legitimate equation $E = (9 \times 10^{16}) m$. But this clearly seems like a "less right" thing to do. In many contexts $c$ is the natural velocity scale to set to 1. But that because it's a highly physically privileged speed in special relativity, and in order to understand why, you need to understand a bunch of facts like $E = m c^2$. So (I would argue that) it's subtly misrepresenting the causation to say that $E = m$ "because" $E = m c^2$ and $c = 1$. I would instead say that $c = 1$ "because" $E = m c^2$ (and several other closely related facts). The danger of setting $c = 1$ too early when first learning special relativity is that it hides the fact that $c$ does have a physical value, with a unique physical significance. It's not just a convenient simplification, like doing a mechanics problem where you assume that a car is traveling at unit speed. But once you're comfortable with special relativity at an intuitive level, then yes, you can absolutely say that $E = m$ and everyone will know what you mean.
{ "domain": "physics.stackexchange", "id": 68147, "tags": "special-relativity, mass-energy, dimensional-analysis, physical-constants, absolute-units" }
What can be the minimum time for a somewhat stable twin star orbit to collapse on being affected by a third body?
Question: The third 'invading' celestial body passing by or crashing into one of the stars could be a possible reason for the orbit collapse. Something that throws the stars* off-course speeding up their death dance. Is an orphaned/rogue planet capable of this? If so how long would it take? If I were to consider extreme events, how 'fast' can this process occur?** *Main Sequence stars **Say from the moment the third body starts to affect the system significantly till the first collision of the main bodies of the stars. I hope this question is not off-topic. This is my first time asking on this site. Answer: Binary stars are usually in stable orbits due to energy conservation: since they do not lose energy, they stay in the same orbit. What can happen over their lifespan is that one of them becomes a red giant and the extended atmosphere transfers mass to the other, changing their orbits, or (more importantly for this question) that tidal forces and gas drag in the envelope causes them to spiral in. Over very long timescales they may also lose energy due to gravitational radiation. However, these are slow processes. There is a lot of energy and angular momentum to somehow get rid of. For a semi-major axis $a$ the energy is $$E=-\frac{GM_1M_2}{2a}.$$ Interlopers with energy comparable to this could disrupt the system, but the question is instead about causing a merger. To get a merger the closest distance has to be within $r_{min}=R_1+R_2$ where $R$ is the stellar radius. $r_{min}=(1-e)a$, where $e$ is the eccentricity, which needs to be driven up near 1 so $e> 1 - (R_1+R_2)/a$. For $a=$ 1 AU and $R_1=R_2=$ 1 solar radius, $e>0.9907$ is required. The eccentricity is $e=\sqrt{1+2\epsilon h^2/\mu^2}$ where $\epsilon$ is the total energy divided by the reduced mass $M_1M_2/(M_1+M_2)$, $h$ the angular momentum divided by the reduced mass, and $\mu=G(M_1+M_2)$. 
If we want to boost it to a high eccentricity we need to change things so $1+2(\epsilon+\Delta \epsilon)(h+\Delta h)^2/\mu^2 = e^2$. If we start with $e=0$ we have $1 +2\epsilon h^2/\mu^2=0$, or $\mu^2/2=-\epsilon h^2$ (remember that the energy is negative for bound orbits). So to first order, $2\Delta \epsilon (h^2 + 2h \Delta h) \approx e^2 \mu^2$ or for $e\approx 1$ $\Delta \epsilon \approx \mu^2/h^2$ if we ignore the change in angular momentum (we set $\Delta h=0$). This is essentially a change on the order of $E$. So if you want a rogue planet to hit and cause a merger, it needs to have about as much kinetic energy as one of the stars - not an easy task, since even Jupiter is 1/1000 of a solar mass, so it would need to move at about 200 km/s to equal the kinetic energy of a solar mass star in a 1 AU orbit. Were it to happen the time until the merger would be the free-fall timescale $$\tau = \frac{\pi a^{3/2}}{\sqrt{2G(M_1+M_2)}},$$ which in this case is 0.25 year, or about 91 days.
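The final free-fall estimate can be reproduced in a few lines (approximate SI values for the constants assumed):

```python
import math

G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2 (approximate)
M_sun = 1.989e30   # solar mass, kg
AU = 1.496e11      # astronomical unit, m

a = 1.0 * AU
M1 = M2 = M_sun

# tau = pi * a^(3/2) / sqrt(2 G (M1 + M2))
tau = math.pi * a**1.5 / math.sqrt(2.0 * G * (M1 + M2))
tau_days = tau / 86400.0   # roughly 91 days, matching the answer
```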
{ "domain": "astronomy.stackexchange", "id": 5932, "tags": "binary-star, impact, rogue-planet, orbital-migration" }
How to Properly Orient Kinect Data from Turtlebot in RVIZ?
Question: This has been bugging me for a while now, and it's starting to make things quite difficult. The point cloud data (and, if I remember correctly, stuff like odometry data) from the Turtlebot come in with 'depth' oriented along the z-axis, while 'depth' relative to the Turtlebot lies in the x/y-plane. How do I get this to display and work properly together, especially in rviz? Currently in rviz, the point cloud emanates from the 'top' of the Turtlebot, and points up in the air. I'm sure there's got to be some sort of quick solution to this, so I figured I'd better ask before I go writing all of my own nodes and stuff to try to compensate for it. My best guess is that there's some simple setup stuff I'm not doing properly or whatever. If there isn't actually a 'solution' to this because it's not a problem for you --if it 'just works' for you-- would you kindly walk me through how you set up/run your Turtlebot? I'm pretty sure I'm following the Turtlebot bringup tutorials and everything properly... but they don't talk about making it work with rviz, from what I can tell. Thanks in advance! Originally posted by Yo on ROS Answers with karma: 183 on 2012-01-11 Post score: 1 Original comments Comment by mmwise on 2012-01-27: When the TurtleBot runs out of the box in rviz you should see the kinect pointcloud in front of the TurtleBot as shown in this video http://www.ros.org/wiki/turtlebot/Tutorials/Looking%20at%20Camera%20Data . It might be that your fixed frame is messed up somehow is it set to base_link? Comment by patrick_hammer on 2012-02-19: Works out of the box for me - does your robot model look correct in rviz? Answer: I haven't used the turtlebot yet, but as a hack you could change the kinect frame/link in the URDF file, which contains the model description of the turtlebot. Just change the orientation according to your needs. 
But as you already hinted, there might be a better solution, since the other turtlebot users seem to have no complaints about this ... Originally posted by bit-pirate with karma: 2062 on 2012-01-11 This answer was ACCEPTED on the original site Post score: 1 Original comments Comment by Yo on 2012-01-12: Right, thanks. I'd definitely like a nicer solution though -- I'm not all that experienced with ROS or anything, so even the 'simple' solution of modifying the URDF file would be a bit difficult for me, I'm afraid. >_< That's why I'm guessing there's just something simple I'm missing. Thanks though!
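As a sketch of the URDF hack (a hypothetical fragment; the actual link and joint names in the turtlebot description may differ), the fix amounts to setting a fixed joint's origin rpy so the camera's optical z-axis (depth) points along the robot's forward x-axis rather than up:

```xml
<!-- Hypothetical URDF fragment: rotate the camera optical frame so that
     z (depth) points along the robot's forward x-axis.
     rpy = (-pi/2, 0, -pi/2) is the conventional body-frame to optical-frame rotation. -->
<joint name="camera_rgb_optical_joint" type="fixed">
  <origin xyz="0 0 0" rpy="-1.5708 0 -1.5708"/>
  <parent link="camera_rgb_frame"/>
  <child link="camera_rgb_optical_frame"/>
</joint>
```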
{ "domain": "robotics.stackexchange", "id": 7856, "tags": "kinect, rviz, turtlebot" }
Time derivative of vector in rotating frame with angular velocity about a rotating axis
Question: In general, I know that if you have a vector $\vec{F}$ in a rotating frame, and the frame has an angular velocity $\vec{\Omega}$ that the time derivative of $\vec{F}$ in a fixed frame would be $$\frac{d\vec{F}}{dt}=\left(\frac{d\vec{F}}{dt}\right)_r+\vec{\Omega}\times\vec{F}.$$ However, I'm confused how or if this would change if there are multiple angular velocities attached to a rotating axis. Let's say our rotating frame is as below. This angular velocity $\vec{\Omega_{z'}}$ has its own angular velocity $\vec{\Omega_y}$. My original thoughts are to simply combine the angular velocities into a single vector $\vec{\Omega_T}=\vec{\Omega_y}+\vec{\Omega_{z'}}$, but since the axis $z'$ is moving I'm not sure if it's that simple. Answer: As was mentioned in the comments, there is only one angular velocity $\vec{\Omega}_T=\Omega_y\hat{y}+\Omega_{z'}\hat{z'}$. This is confirmed here from some MIT lecture notes. It seems my intuition was correct. EDIT: If you want to use this to find the velocity of a vector, you need to cast this into the global frame first.
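The transport theorem itself is easy to sanity-check numerically. A small sketch (plain Python, made-up values) for a vector held fixed in a frame rotating at rate $\Omega$ about $z$, so $(d\vec{F}/dt)_r = 0$ and the fixed-frame derivative should equal $\vec{\Omega}\times\vec{F}$:

```python
import math

def cross(a, b):
    return [a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0]]

omega = 0.7                   # made-up rotation rate about z
F0 = [1.0, 0.0, 0.0]          # vector fixed in the rotating frame

def F(t):
    # F expressed in the fixed frame: rotate F0 by angle omega*t about z
    c, s = math.cos(omega * t), math.sin(omega * t)
    return [c*F0[0] - s*F0[1], s*F0[0] + c*F0[1], F0[2]]

t, h = 0.3, 1e-6
# central finite difference of F in the fixed frame
numeric = [(F(t + h)[i] - F(t - h)[i]) / (2.0 * h) for i in range(3)]
# transport theorem with (dF/dt)_r = 0
analytic = cross([0.0, 0.0, omega], F(t))
```

The same check works with a composite $\vec{\Omega}_T$, provided all vectors are expressed in the same (fixed) frame first, which is the point of the edit above.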
{ "domain": "physics.stackexchange", "id": 50825, "tags": "newtonian-mechanics, vectors, rotational-kinematics, differentiation, angular-velocity" }
ROS Fuerte in Multiple machines
Question: Hello, in ROS Fuerte I can't communicate from the slave to the master! Can someone help me? I have this in the bash script for the master:

export ROS_IP=192.168.55.112
export ROS_MASTER_URI=http://192.168.55.112:11311

and the same at the slave. On the slave I run rostopic list and I see all publications, but when I run a node on the slave PC that should publish something, the master can't hear anything! (If I run the publisher on the master and the subscriber on the slave it works; the inverse does not!!!!) If I run the same node on the master everything runs OK! What could be the problem? Originally posted by Filipe Santos on ROS Answers with karma: 346 on 2012-05-15 Post score: 0 Answer: The slave should not export its IP to be the same as the master. It should export its own IP. Originally posted by DimitriProsser with karma: 11163 on 2012-05-15 This answer was ACCEPTED on the original site Post score: 2 Original comments Comment by Filipe Santos on 2012-05-15: thanks DimitriProsser, it was that mistake! :) Comment by tfoote on 2012-05-15: @Filipe Santos Please accept the answer so others know your question is solved. (Click the checkmark next to the answer.)
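In other words, the environment on each machine should look like this (the slave address 192.168.55.113 is made up for illustration; use the slave's real IP):

```shell
# On the master (192.168.55.112):
export ROS_IP=192.168.55.112
export ROS_MASTER_URI=http://192.168.55.112:11311

# On the slave (its own address, e.g. 192.168.55.113):
export ROS_IP=192.168.55.113
export ROS_MASTER_URI=http://192.168.55.112:11311   # still points at the master
```

ROS_MASTER_URI is the same everywhere (it locates the one roscore), while ROS_IP must be the address of the machine it is set on, so other nodes can connect back to it.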
{ "domain": "robotics.stackexchange", "id": 9405, "tags": "ros-fuerte" }
"Cage-like" Formazine particles, how are molecules arranged?
Question: Question: What would the "cage-like" arrangement of Formazine molecules look like in a Formazine particle while it is still suspended in water? Would it be totally random, or have some kind of imperfect but recognizable regularity? Formazine is used as an easily defined and prepared calibration or reference standard for turbidity measurements. above x2: Formazine from PubChem. From Wikipedia: Formazine (formazin) is a heterocyclic polymer produced by reacting hexamethylenetetramine with hydrazine sulfate. The hexamethylenetetramine tetrahedral cage-like structure, similar to adamantane, serve as molecular building block to form a tridimensional polymeric network. From Formazine: The readings shown by turbidimeters are not scaled in the measured light intensity, but in concentration of a reference suspension. Because the accuracy of this calibration solution determines the reliability of subsequent turbidity measurements, it is of crucial importance. The internationally established turbidity standard for calibration is formazine. This standard can be reproduced at any time with the recipe taken from standard ISO 7027 (Water quality - Determination of turbidity). It is extremely important to observe the prescribed preparation temperature, because it affects the particle size of the formazine particles perceptibly. The following Figure illustrates this. Errors caused by temperature variations are on the order of 1..2% per °C. Consequently, Sigrist keeps the preparation temperature constant to within ± 1°C. above: "Particle size distributions of formazine at various preparation temperatures" from here. above: Enlarged from TurbidityStandards. above: Slide #10 from What is Turbidity? John Daly, ISA NorCal President, South Fork Instruments, Inc. Click for full size. 
Answer: First, note that the formazine structure attached from PubChem is a different compound; note that there are also the related formazan $\ce{HN=N-CH=N-NH2}$ and “formaldazine” $\ce{CH2=N-N=CH2}$ (dimethylidenehydrazine); formazin (the polymer) does not necessarily have to have a cage-like structure, as this characterization belongs to hexamethylenetetramine in the quoted citation. Hexamethylenetetramine (urotropine, $\ce{(CH2)6N4}$) can be prepared from formaldehyde and ammonia. In water it also somewhat decomposes to its precursors. The formazin (also called formalazine) structure might still be unknown, as a related publication from 1976 states:[1] … This substance, an insoluble condensation product of uncertain composition, is prepared by mixing solutions of hexamethylenetetramine and hydrazine sulfate. … Sometimes an empirical-like formula $\ce{(C2H4N2)_n}$ is claimed.[2] Sometimes a claimed structure like the following one is presented:[3,4] It was hypothesized (along with several other, less likely structures; well, rather a more complicated one, with some ether linkages $\ce{-CH2-O-CH2\bond{-}}$ and $\ce{-NH\bond{-}}$ groups present) based on spectroscopic studies.[5,6] References: Rice, E. W. The preparation of formazin standards for nephelometry. Analytica Chimica Acta 1976, 87 (1), 251–253. Ziegler, A. Issues Related to Use of Turbidity Measurements as a Surrogate for Suspended Sediment. Turbidity and Other Sediment Surrogates Workshop; Reno, NV, USA, 2002. Kaur, N.; Kishore, D. An Insight into Hexamethylenetetramine: A Versatile Reagent in Organic Synthesis. J. Iran. Chem. Soc. 2013, 10 (6), 1193–1228. Sadar, M. J. Stabilized Formazin Composition. US Patent 5,777,011, July 1998. Mashima, M. The Infrared Absorption Spectra of the Condensation Products of Formaldehyde with Hydrazine. Bull. Chem. Soc. Jpn. 1966, 39 (3), 504–506. Bondybey, V. E.; Nibler, J. W. Infrared and Raman Spectra of Formaldazine. Spectrochimica Acta Part A: Molecular Spectroscopy 1973, 29 (4), 645–658.
{ "domain": "chemistry.stackexchange", "id": 11176, "tags": "polymers, molecular-structure" }
Cancelling function execution with a ContinuationError
Question: Although throwing exceptions for control flow is a controversial topic, there are some quite popular examples of using this anti-pattern, like C#'s async routines throwing OperationCanceledException to cancel a task or Python throwing StopIteration to control iterators. I thought I'd try to use such an exception with my logger decorator package (GitHub). I call it ContinuationError. In general the decorator handles logging of such states as: started before entering a function; completed when a function successfully executed; canceled when a function exited prematurely; faulted when something unexpected occurred. The exception supports the canceled state by replacing spammy logging with an error: Before: logger.canceled(reason="No luck!") return 5 After: raise ContinuationError("No luck!", 5) It expects a reason for why the cancellation was necessary, optionally a return value if the function is expected to return something, and also optionally other arguments that are later rendered into a JSON message. class ContinuationError(Exception): """Raise this error to gracefully handle a function cancellation.""" def __new__(cls, *args, **details) -> Any: instance = super().__new__(cls) instance.details = details | dict(reason=args[0]) if len(args) > 1: instance.result = args[1] return instance def __init__(self, message: str, result: Optional[Any] = None, **details): super().__init__(message) The decorator takes care of handling it by checking whether a result was provided and returns it if necessary. The decorator also provides a lambda for creating the started details and another lambda for logging the result.
def telemetry(on_started: Optional[OnStarted] = None, on_completed: Optional[OnCompleted] = None, **kwargs): """Provides flow telemetry for the decorated function.""" on_started = on_started or (lambda _: {}) on_completed = on_completed or (lambda _: {}) def factory(decoratee): @contextlib.contextmanager def logger_scope() -> Logger: logger = Logger( module=inspect.getmodule(decoratee).__name__, scope=decoratee.__name__, attachment=kwargs.pop("attachment", None), parent=_scope.get() ) token = _scope.set(logger) try: yield logger except Exception: logger.faulted() raise finally: _scope.reset(token) def inject_logger(logger: Logger, d: Dict): """ Injects Logger if required. """ for n, t in inspect.getfullargspec(decoratee).annotations.items(): if t is Logger: d[n] = logger def params(*decoratee_args, **decoratee_kwargs) -> Dict[str, Any]: # Zip arg names and their indexes up to the number of args of the decoratee_args. arg_pairs = zip(inspect.getfullargspec(decoratee).args, range(len(decoratee_args))) # Turn arg_pairs into a dictionary and combine it with decoratee_kwargs. 
return {t[0]: decoratee_args[t[1]] for t in arg_pairs} | decoratee_kwargs if asyncio.iscoroutinefunction(decoratee): @functools.wraps(decoratee) async def decorator(*decoratee_args, **decoratee_kwargs): with logger_scope() as scope: inject_logger(scope, decoratee_kwargs) scope.started(**on_started(params(*decoratee_args, **decoratee_kwargs))) try: result = await decoratee(*decoratee_args, **decoratee_kwargs) scope.completed(**on_completed(result)) return result except ContinuationError as e: if hasattr(e, "result"): scope.canceled(**(on_completed(e.result) | e.details)) return e.result else: scope.canceled(**e.details) else: @functools.wraps(decoratee) def decorator(*decoratee_args, **decoratee_kwargs): with logger_scope() as scope: inject_logger(scope, decoratee_kwargs) scope.started(**on_started(params(*decoratee_args, **decoratee_kwargs))) try: result = decoratee(*decoratee_args, **decoratee_kwargs) scope.completed(**on_completed(result)) return result except ContinuationError as e: if hasattr(e, "result"): scope.canceled(**(on_completed(e.result) | e.details)) return e.result else: scope.canceled(**e.details) decorator.__signature__ = inspect.signature(decoratee) return decorator return factory Later one of the logging APIs checks for the exception and decides whether to log a normal message or an actual error: def _log(self, **kwargs): status = inspect.stack()[1][3] details = Logger.serialize_details(**kwargs) with _create_log_record( functools.partial(_set_module_name, name=self.module), functools.partial(_set_func_name, name=self.scope) ): # Ignore the ContinuationError as an actual error. 
is_error = all(sys.exc_info()) and sys.exc_info()[0] is not ContinuationError self._logger.log(level=self._logger.level, msg=None, exc_info=is_error, extra={ "parent": self.parent.id if self.parent else None, "node": self.id, "status": status, "elapsed": self.elapsed, "details": details, "attachment": self.attachment }) Internally the package is using Python's standard logging library. Example I use it like this: import wiretap.src.wiretap as wiretap # because it's from the test environment @wiretap.telemetry(on_started=lambda p: {"value": p["value"], "bar": p["bar"]}, on_completed=lambda r: {"count": r}) def foo(value: int, logger: wiretap.Logger = None, **kwargs) -> int: logger.running(test=f"{value}") raise wiretap.ContinuationError("No luck!", 0, foo="bar") return 3 if __name__ == "__main__": print(foo(1, bar="baz")) # <-- prints: 0 What do you think of this idea? I guess I probably should check if the decorated function is expected to return something and throw an invalid operation exception when a return value wasn't provided. Answer: Your decorator has almost no type hints. You can use ParamSpec to hint your code.
Here's an example typed closure wrapper decorator: import functools from typing import Callable, ParamSpec, TypeVar P = ParamSpec("P") TRet = TypeVar("TRet") def typed_decorator() -> Callable[[Callable[P, TRet]], Callable[P, TRet]]: def wrapper(fn: Callable[P, TRet]) -> Callable[P, TRet]: @functools.wraps(fn) def inner(*args: P.args, **kwargs: P.kwargs) -> TRet: return fn(*args, **kwargs) return inner return wrapper def untyped_decorator(): def wrapper(fn): @functools.wraps(fn) def inner(*args, **kwargs): return fn(*args, **kwargs) return inner return wrapper @typed_decorator() def foo(bar: str) -> int: return 1 @untyped_decorator() def bar(bar: str) -> int: return 1 reveal_type(foo) reveal_type(bar) $ mypy --strict foo.py foo.py:36: note: Revealed type is "def (bar: builtins.str) -> builtins.int" foo.py:37: note: Revealed type is "Any" As you can see, we basically can just say the parameters are P and the return type is just TRet. Next let's get the DI working. I'll be copying and modifying your inject_logger code. Since I'm focusing on type hints at the moment, I'm going to have to ask you to ignore the contents of the function. The function does the same thing yours does: DI the logger. I'm going to change foo to take Logger as an argument, with a default. We need to provide a default for the type hints to work as intended. Otherwise you'll attempt to call the function without a logger and get an error. The logger has some nonsense code just to make determining which logger we're interacting with easier to reason about.
import functools import inspect from typing import TYPE_CHECKING, Any, Callable, ParamSpec, TypeVar P = ParamSpec("P") TRet = TypeVar("TRet") TArgs = TypeVar("TArgs", bound=tuple) # type: ignore TKwargs = TypeVar("TKwargs", bound=dict) # type: ignore class Logger: ID = 0 id: int def __init__(self) -> None: self.id = self.ID type(self).ID += 1 def __repr__(self) -> str: return f"<{type(self).__name__} {self.id}>" def inject_logger( fn: Callable[..., Any], logger: Logger, args: TArgs, kwargs: TKwargs, ) -> tuple[TArgs, TKwargs]: """ Injects Logger if required.""" sig = inspect.signature(fn) bound = sig.bind_partial(*args, **kwargs) for param in sig.parameters.values(): if issubclass(Logger, param.annotation): bound.arguments.setdefault(param.name, logger) bound.apply_defaults() return bound.args, bound.kwargs # type: ignore def typed_decorator() -> Callable[[Callable[P, TRet]], Callable[P, TRet]]: def wrapper(fn: Callable[P, TRet]) -> Callable[P, TRet]: @functools.wraps(fn) def inner(*args: P.args, **kwargs: P.kwargs) -> TRet: args, kwargs = inject_logger(fn, Logger(), args, kwargs) print(args, kwargs) return fn(*args, **kwargs) inner.__signature__ = fn.__signature__ = inspect.signature(fn) # type: ignore return inner return wrapper @typed_decorator() def foo(bar: str, logger: Logger = Logger(), /) -> int: return 1 if TYPE_CHECKING: reveal_type(foo) if __name__ == "__main__": foo("test") $ python foo.py ('test', <Logger 1>) {} $ mypy --strict foo.py foo.py:58: note: Revealed type is "def (builtins.str, foo.Logger =) -> builtins.int" We can avoid the need to specify a default value by using Concatenate. The limitation with Concatenate is that the syntax only works with arguments on the left side of the function.
def typed_decorator() -> Callable[[Callable[Concatenate[Logger, P], TRet]], Callable[P, TRet]]: def wrapper(fn: Callable[Concatenate[Logger, P], TRet]) -> Callable[P, TRet]: @functools.wraps(fn) def inner(*args: P.args, **kwargs: P.kwargs) -> TRet: return fn(Logger(), *args, **kwargs) fn.__signature__ = sig = inspect.signature(fn) # type: ignore params = iter(sig.parameters.values()) next(params, None) inner.__signature__ = inspect.Signature(list(params), return_annotation=sig.return_annotation) # type: ignore return inner return wrapper @typed_decorator() def foo(logger: Logger, bar: str) -> int: return 1 if TYPE_CHECKING: reveal_type(foo) $ mypy --strict foo.py foo.py:56: note: Revealed type is "def (bar: builtins.str) -> builtins.int" We can support both of the options by using a Protocol with overload. As you can possibly see from the increased # type: ignore comments the approach is somewhat janky internally. def inject_logger( fn: Callable[..., Any], logger: Logger, args: TArgs, kwargs: TKwargs, ) -> tuple[TArgs, TKwargs]: """ Injects Logger if required.""" sig = inspect.signature(fn) if ((param := next(iter(sig.parameters.values()), None)) is not None and issubclass(param.annotation, Logger) ): args = (logger,) + args # type: ignore bound = sig.bind_partial(*args, **kwargs) for param in sig.parameters.values(): if issubclass(param.annotation, Logger): bound.arguments.setdefault(param.name, logger) bound.apply_defaults() return bound.args, bound.kwargs # type: ignore class PTypedDecorator(Protocol): @overload def __call__(self, /, fn: Callable[Concatenate[Logger, P], TRet]) -> Callable[P, TRet]: ... @overload def __call__(self, /, fn: Callable[P, TRet]) -> Callable[P, TRet]: ... 
def typed_decorator() -> PTypedDecorator: def wrapper(fn: Callable[Concatenate[Logger, P], TRet] | Callable[P, TRet]) -> Callable[P, TRet]: @functools.wraps(fn) def inner(*args: P.args, **kwargs: P.kwargs) -> TRet: args, kwargs = inject_logger(fn, Logger(), args, kwargs) print(args, kwargs) return fn(*args, **kwargs) fn.__signature__ = sig = inspect.signature(fn) # type: ignore params = iter(sig.parameters.values()) if ((param := next(params, None)) is not None and issubclass(param.annotation, Logger) ): sig = inspect.Signature(list(params), return_annotation=sig.return_annotation) inner.__signature__ = sig # type: ignore return inner return wrapper @typed_decorator() def foo(logger: Logger, bar: str, /) -> int: return 1 @typed_decorator() def bar(bar: str, *, logger: Logger = Logger()) -> int: return 1 if TYPE_CHECKING: reveal_type(foo) reveal_type(bar) if __name__ == "__main__": print(inspect.signature(foo)) foo("test") print(inspect.signature(bar)) bar("test") $ python foo.py (bar: str, /) -> int (<Logger 1>, 'test') {} (bar: str, *, logger: __main__.Logger = <Logger 0>) -> int ('test',) {'logger': <Logger 2>} $ mypy --strict foo.py foo.py:73: note: Revealed type is "def (bar: builtins.str) -> builtins.int" foo.py:74: note: Revealed type is "def (bar: builtins.str, *, logger: foo.Logger =) -> builtins.int" Review Your existing code doesn't really support type hints. @wiretap.telemetry(on_started=lambda p: {"value": p["value"], "bar": p["bar"]}, on_completed=lambda r: {"count": r}) def foo(value: int, logger: wiretap.Logger = None, **kwargs) -> int: The only reason logger: wiretap.Logger = None would type correctly is if you're not running your static analysis tool in strict mode. By default mypy runs in non-strict mode. The mode is useful for showing glaring typing issues, like passing int to a function for str. The benefit is that, by hiding some of the more pedantic static analysis issues, users with large Python code bases can ease into static Python.
You have a lot of functions in your factory closure which could just be global; look at my inject_logger. Your current inject_logger cannot support positional-only parameters. To add the logger you mutate the kwargs; however, the logger can be provided before the / as a positional-only argument. Neither on_started nor on_completed is properly typed. I can appreciate you may think using * and ** in a lambda can be ugly for on_started. So having an untyped interface may be preferable. >>> (lambda *_, foo, **__: foo)(foo="foo") 'foo' >>> (lambda *_, foo, **__: foo)(bar="foo") Traceback (most recent call last): File "<stdin>", line 1, in <module> TypeError: <lambda>() missing 1 required keyword-only argument: 'foo' If you ignore typing, your code is good. Seems to do the job, doesn't seem to be poorly designed - from what I've seen. As you may be aware, in Python using exceptions for control flow is expected. However, I can't really comment on whether using exceptions for control flow is good here. I would need to see real world usage of your library to see if another option would be better. However, your use of exceptions here seems to be a good way to get what you want to achieve with decorators. An alternative would be to return, say, a LogObject which has the exact same interface as ContinuationError.
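The LogObject alternative mentioned above could look roughly like this; a minimal sketch, where LogObject, handle and foo are illustrative names rather than part of the wiretap package:

```python
from typing import Any, Optional


class LogObject:
    """Return this (instead of raising ContinuationError) to signal a
    graceful cancellation. Same surface as the exception: a reason,
    an optional result, and extra details for the log message."""

    def __init__(self, reason: str, result: Optional[Any] = None, **details: Any):
        self.reason = reason
        self.result = result
        self.details = details | {"reason": reason}


def handle(outcome: Any) -> Any:
    """Sketch of what the decorator would do with a returned LogObject:
    log 'canceled' with the details, then pass the result through."""
    if isinstance(outcome, LogObject):
        # here the real decorator would call scope.canceled(**outcome.details)
        return outcome.result
    return outcome


def foo(value: int) -> Any:
    if value < 0:
        return LogObject("No luck!", 0, foo="bar")
    return 3


print(handle(foo(1)))   # 3
print(handle(foo(-1)))  # 0
```

The trade-off is that every call site (or the decorator) must now inspect the return value, whereas the exception propagates automatically through nested calls.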
{ "domain": "codereview.stackexchange", "id": 44605, "tags": "python, python-3.x, error-handling, logging, decorator-pattern" }
Path integral quantization of the EM field in Peskin and Schroeder
Question: I'm studying path integral quantization of the electromagnetic field using Peskin and Schroeder, section 9.4. We want to compute the functional integral $$\tag{9.50} \int \mathcal{D}A\,e^{iS[A]}.$$ We use the method by Faddeev and Popov: let $$G(A)=\partial^\mu A_\mu(x)-\omega(x)$$ be the function that we wish to set to zero, and use the following equation $$\tag{9.53} 1=\int\mathcal{D}\alpha(x)\,\delta(G(A^\alpha))\text{det}\bigg(\frac{\delta G(A^\alpha)}{\delta\alpha}\bigg).$$ We stick 9.53 into 9.50, then integrate over $\omega(x)$ with respect to the Gaussian weight $\exp[-i\int d^4x \frac{\omega^2}{2\xi}]$; this shows that (9.50) is given by $$\tag{9.56} N(\xi)\det\bigg(\frac{1}{e}\partial^2\bigg)\bigg(\int\mathcal{D}\alpha\bigg)\int\mathcal{D}A e^{iS[A]}\exp\bigg[-i\int d^4x\frac{1}{2\xi}(\partial^\mu A_\mu)^2\bigg].$$ Peskin and Schroeder then claim that we have worked out the denominator of $$\langle \Omega|T\mathcal{O}(A)|\Omega\rangle=\lim_{T\rightarrow\infty}\frac{\int \mathcal{D}A\,\mathcal{O}(A)\exp[i\int_{-T}^Td^4x \mathcal{L}]}{\int \mathcal{D}A\exp[i\int_{-T}^Td^4x \mathcal{L}]}$$ We can write a similar expression for the numerator; then Peskin and Schroeder claim that the "awkward constant factors in (9.56) cancel" and we find for the correlation function $$\tag{9.57}\langle \Omega|T\mathcal{O}(A)|\Omega\rangle=\lim_{T\rightarrow\infty}\frac{\int \mathcal{D}A\,\mathcal{O}(A)\exp[i\int_{-T}^Td^4x (\mathcal{L}-\frac{1}{2\xi}(\partial^\mu A_\mu)^2)]}{\int \mathcal{D}A\exp[i\int_{-T}^Td^4x (\mathcal{L}-\frac{1}{2\xi}(\partial^\mu A_\mu)^2)]}.$$ My questions are: In (9.57), by "awkward constant factors have canceled", do we mean $N(\xi)\det(\frac{1}{e}\partial^2)(\int\mathcal{D}\alpha)$? That is, are we treating $\int\mathcal{D}\alpha$ as a constant factor? Peskin and Schroeder say we need $\mathcal{O}(A)$ to be gauge invariant; what is an example of a gauge-invariant $\mathcal{O}(A)$?
I don't think expressions like $A(x_1)A(x_2)$ which we use for scalar fields work here. Peskin and Schroeder then claim that the method by Faddeev and Popov shows that the correlation function is independent of the choice of $\xi$. But how? In (9.57) we clearly still have $\xi$ in it. Answer: Yes and yes. E.g. $\mathcal{O}(A)=F_{\mu\nu}$ is gauge invariant. Independence of the gauge-fixing choice is e.g. discussed in this related Phys.SE post.
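For the second question, the gauge invariance of $F_{\mu\nu}$ can be verified in one line (a standard check, added here for completeness): under a gauge transformation $A_\mu \to A_\mu + \partial_\mu\alpha$, $$\delta F_{\mu\nu} = \partial_\mu(\partial_\nu\alpha) - \partial_\nu(\partial_\mu\alpha) = 0,$$ since partial derivatives commute; hence any $\mathcal{O}(A)$ built from $F_{\mu\nu}$, such as $F_{\mu\nu}(x_1)F^{\mu\nu}(x_2)$, is gauge invariant.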
{ "domain": "physics.stackexchange", "id": 90683, "tags": "quantum-field-theory, quantum-electrodynamics, path-integral, gauge, quantization" }
Minimum spanning tree of a connected induced subgraph
Question: I'm doing an online course in which I'm struggling with the following multiple-choice question: Suppose $ T $ is a minimum spanning tree of the connected graph $ G $. Let $ H $ be a connected induced subgraph of $ G $. (I.e., $ H $ is obtained from $ G $ by taking some subset $ S \subseteq V $ of vertices, and taking all edges of $ E $ that have both endpoints in $ S $. Also, assume $ H $ is connected.) Which of the following is true about the edges of $ T $ that lie in $ H $? You can assume that edge costs are distinct, if you wish. [Choose the strongest true statement.] For every $ G $ and $ H $, these edges form a minimum spanning tree of $ H $ For every $ G $ and $ H $, these edges are contained in some minimum spanning tree of $ H $ For every $ G $ and $ H $ and spanning tree $ T_H $ of $ H $, at least one of these edges is missing from $ T_H $ For every $ G $ and $ H $, these edges form a spanning tree (but not necessarily minimum-cost) of $ H $ I don't understand why option 4 is not correct; the hint given is as follows: Suppose G is a triangle and H is an edge. Suppose that G is a triangle with nodes 1, 2, and 3, all connected, and we choose the subgraph H from nodes 1 and 2, thus including only the edge (1,2). That edge then forms a minimum spanning tree of those two nodes, no? Incidentally, the answer For every G and H, these edges form a minimum spanning tree of H is also incorrect. Answer: Suppose that $G$ is the triangle on $\{1,2,3\}$ (with arbitrary edge weights), that $T$ is $\{\{1,2\},\{1,3\}\}$ (without loss of generality), and consider $H = \{\{2,3\}\}$, which is induced by $S = \{2,3\}$. No edges of $T$ lie in $H$, and in particular these edges do not constitute a spanning tree of $H$.
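The counterexample can be checked mechanically; a small sketch (not from the original post) using plain Python sets:

```python
# Triangle G on {1,2,3}; T = {(1,2),(1,3)} is a spanning tree of G
# (with suitable weights it is the MST, WLOG).
G_edges = {(1, 2), (1, 3), (2, 3)}
T = {(1, 2), (1, 3)}

# H is induced by S = {2,3}: keep the edges with both endpoints in S.
S = {2, 3}
H_edges = {(u, v) for (u, v) in G_edges if u in S and v in S}

edges_of_T_in_H = T & H_edges
print(H_edges)          # {(2, 3)}
print(edges_of_T_in_H)  # set() -- no edge of T lies in H,
                        # so these edges cannot span H
```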
{ "domain": "cs.stackexchange", "id": 12834, "tags": "graphs, minimum-spanning-tree" }
A good textbook on GMO
Question: I am interested in learning about GMO. The topic is so wrapped in controversy that it's hard to find a good book that introduces the basic concepts involved. I went through various university websites, but couldn't find any lecture notes on the topic. I don't know, is it too broad a topic? Is there even a book called Introduction to GMO? Maybe that's the problem. I am interested in a book that tells you about how gene manipulation is done. How do biology students get introduced to this concept? I hope the question is not too general. Answer: I too had difficulty finding any textbooks or notes that focused solely on genetic engineering. However, after some rather intense looking, I did come across several textbooks that may be helpful. I wasn't sure how basic of a text you were looking for, but I'm hoping college level is okay because that is all I have been able to find. The first book was An Introduction to Genetic Engineering by Dr. Desmond S.T. Nicholl. Another was Principles of Gene Manipulation and Genomics by Sandy B. Primrose. If you need a more basic biology text for reference, I would recommend Miller & Levine Biology by Prentice Hall (Pearson Prentice Hall). Also, you could do a quick study of genetics on Khan Academy - Crash Course: Biology and Ecology. I hope this information was helpful for you!
{ "domain": "biology.stackexchange", "id": 3503, "tags": "genetics, book-recommendation" }
Is this definition of mole correct?
Question: Chemical engineers define one mole as the amount of a substance which possesses as many entities as $12\ \mathrm g$ of $\ce{^{12}C}$. The number of atoms in $12\ \mathrm g$ of $\ce{^{12}C}$ is $6.022 \times 10^{23}$, which is a constant by its definition. Now consider the relevant definition given in my textbook, which defines one mole of a substance as the atomic mass, molecular mass or formula mass in grams. Is this definition correct? Answer: The mole is a base unit as specified in the Système international d’unités (SI) by the Bureau international des poids et mesures. Its decisive definition is that published in French: La mole est la quantité de matière d’un système contenant autant d’entités élémentaires qu’il y a d’atomes dans 0,012 kilogramme de carbone 12 ; son symbole est « mol ». Lorsqu’on emploie la mole, les entités élémentaires doivent être spécifiées et peuvent être des atomes, des molécules, des ions, des électrons, d’autres particules ou des groupements spécifiés de telles particules. La mole est une unité de base du Système international d’unités. The proposal was brought forth by the International Union of Pure and Applied Physics (IUPAP), the International Union of Pure and Applied Chemistry (IUPAC) and the International Organisation for Standardization (ISO). As with all SI texts, the decisive French version has a semi-official English translation: The mole is the amount of substance of a system which contains as many elementary entities as there are atoms in $0.012$ kilogram of carbon 12; its symbol is “mol”. When the mole is used, the elementary entities must be specified and may be atoms, molecules, ions, electrons, other particles, or specified groups of such particles. (The third point is not translated.) This definition is more or less identical with the one in your first paragraph. In practice and well within experimental error, this means that your later definition will hold true for any substance. I.e.
take $1~\mathrm{mol}$ of an entity and the combined mass of that mole will be the same numerical value in grams as a single entity has in atomic mass units ($\mathrm{u}$). It is not the correct definition, and for any entity that is not carbon-12 the masses will differ slightly (but well within margin of your macroscopic experimental error) but it is good enough for most contexts. A redefinition of the SI units is being discussed and will likely be adopted at the 26th General Conference of Weights and Measures in autumn 2018. This would redefine the mole in a way that the Avogadro constant is defined to be numerically exactly $6.02214 \cdot 10^{23}~\mathrm{mol^{-1}}$ (with a few further digits appended to the end of the number that yet need agreement). This would mean that the new definition of the mole would be along the lines of: The mole, mol, is the unit of amount of substance of a specified elementary entity, which may be an atom, molecule, ion, electron, any other particle or a specified group of such particles; its magnitude is set by fixing the numerical value of the Avogadro constant to be equal to exactly $6.02214X \cdot 10^{23}$ when it is expressed in the unit $\mathrm{mol^{-1}}$. Currently, the Avogadro constant must be measured experimentally giving a value of $6.022140857(74)~\mathrm{mol^{-1}}$; the digits in brackets express the numerical uncertainty. This will mean that $1~\mathrm{mol}$ of carbon-12 atoms will no longer have the mass of exactly $12~\mathrm{g}$ (but again, it will be well within experimental error for everybody not practising theoretical physics). To answer the follow-up question you asked in the comments: The coefficients in chemical equations such as $$\ce{Zn + 2 HCl -> ZnCl2 + H2}$$ are always and exclusively to be understood as ratio coefficients. Thus, instead of thinking one atom or one mole of zinc, think amount $n$ of zinc and amount $2n$ of $\ce{HCl}$.
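The practical claim above, that the numerical value in atomic mass units carries over to grams per mole, can be checked directly from the constants; a sketch with CODATA values hardcoded for illustration:

```python
N_A = 6.02214076e23         # Avogadro constant, 1/mol (exact since the 2019 SI)
u_in_g = 1.66053906660e-24  # atomic mass unit in grams (CODATA 2018)

# mass of one mole of entities, each of mass m atomic mass units,
# is m * (N_A * u) grams
grams_per_mole_per_u = N_A * u_in_g
print(grams_per_mole_per_u)  # ~1.0000000, but not exactly 1 anymore

# e.g. carbon-12 is exactly 12 u per atom, so ~12 g/mol (no longer exact)
print(12 * grams_per_mole_per_u)
```

The product deviates from 1 only in the tenth digit, which is why the "atomic mass in grams" shortcut is good enough for most contexts, exactly as the answer says.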
{ "domain": "chemistry.stackexchange", "id": 4593, "tags": "stoichiometry, terminology, mole" }
Why inverse square not inverse cube law?
Question: So as I understand, the inverse-square law which shows up in a variety of physical laws (Newton's universal law of gravitation, Coulomb's law, etc.) is a mathematical consequence of a point-like particle emanating a certain physical quantity in all directions in the form of a sphere; the density of that quantity is inversely proportional to the surface area of the sphere over which that physical quantity gets spread out at a certain distance (radius), and since the surface area of a sphere is directly proportional to its radius squared, the density of that physical quantity is inversely proportional to the distance squared. My question is: consider the specific example of a point-like particle with a certain gravitational mass. Now if we pictured the gravitational field of that particle through gravitational flux that emanates out of it isotropically, the density of those gravitational lines would be inversely proportional to the volume of the sphere at a certain given distance (radius), and since the volume of a sphere is directly proportional to the radius cubed, the force of gravity (or the electrostatic force or whatever) would be inversely proportional to the distance cubed. What is wrong with this analysis? Answer: Flux is proportional to the area of the sphere, not the volume of the sphere. It is evident from the definition of the flux $\Phi_\mathbf{B}$ of some quantity $\mathbf{B}$, which is defined in the following way, $$\Phi_\mathbf{B}= \iint\mathbf{B} \cdot \mathrm d \mathbf{A} $$ Therefore the flux is proportional to the area of the sphere and hence the $1/r^2$ dependency. Note that the detailed treatment of the $1/r^2$ dependence of Coulomb's law and Newton's law needs Maxwell's theory of EM and GR respectively. Furthermore these laws are experimentally heavily tested and they perfectly agree with experiment. Therefore they must be correct as far as the experimental evidence is concerned.
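The bookkeeping can also be seen numerically: spread a fixed total flux over spheres of growing radius and the surface density falls as $1/r^2$, not $1/r^3$ (a sketch, not from the answer):

```python
import math

total_flux = 1.0  # whatever quantity the point source emits in total

products = []
for r in [1.0, 2.0, 4.0]:
    area = 4 * math.pi * r**2    # flux spreads over the sphere's *surface*
    density = total_flux / area  # field strength ~ surface density
    products.append(density * r**2)
    print(r, density)

# density * r^2 is the same for every radius, i.e. the field falls as 1/r^2;
# dividing by the sphere's volume instead would wrongly give a 1/r^3 law,
# but flux is defined through a surface integral, so the area is what matters
print(products)
```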
{ "domain": "physics.stackexchange", "id": 21946, "tags": "newtonian-gravity, gauss-law" }
Rust way of the generic quicksort implementation
Question: I have implemented the classical Hoare algorithm, but I think that the implementation is not readable enough. I have tried to refactor it in the way that I am used to from C#, but now I have got compiler errors. Let me show some details. This is the first implementation: pub fn quicksort<T>(array: &mut Vec<T>) where T: Ord { partition(array, 0, array.len() as isize - 1); } fn partition<T>(array: &mut Vec<T>, start: isize, end: isize) where T: Ord { let length = end - start + 1; if length == 0 || length == 1 { return; } let mut left = start; let mut right = end; loop { let pivot = &array[start as usize]; while array[left as usize] < *pivot { left += 1 } while array[right as usize] > *pivot { right -= 1 } if left < right { array.swap(left as usize, right as usize); left += 1; right -= 1; } else { break; } } partition(array, start, right); partition(array, right + 1, end); } It seems to work; at least my tests pass. fn main() { test_arrays(&mut vec![], &vec![]); test_arrays(&mut vec![1], &vec![1]); test_arrays(&mut vec![1, 2, 3, 4, 5], &vec![1, 2, 3, 4, 5]); test_arrays(&mut vec![5, 4, 3, 2, 1], &vec![1, 2, 3, 4, 5]); test_arrays(&mut vec![2, 3, 1, 5, 4], &vec![1, 2, 3, 4, 5]); } fn test_arrays(source: &mut Vec<i32>, target: &Vec<i32>) { quicksort(source); assert!(source.iter().eq(target.iter())); } Next are my questions. First, I would rather declare pivot as a value, not a reference: loop { let pivot = array[start as usize]; while array[left as usize] < pivot { left += 1 } while array[right as usize] > pivot { right -= 1 } . . . But when I do it, I get the error: | 29 | let pivot = array[start as usize]; | ^^^^^^^^^^^^^^^^^^^^^ | | | move occurs because value has type `T`, which does not implement the `Copy` trait | help: consider borrowing here: `&array[start as usize]` I get what it means, but I don't want to limit the question to the Copy trait.
Next, I would rather move pivot's assignment out of the loop: let pivot = &array[start as usize]; loop { while array[left as usize] < *pivot { left += 1 } while array[right as usize] > *pivot { right -= 1 } . . . When I do it, I get another error: error[E0502]: cannot borrow `*array` as mutable because it is also borrowed as immutable --> src\main.rs:39:13 | 27 | let pivot = &array[start as usize]; | ----- immutable borrow occurs here ... 30 | while array[left as usize] < *pivot { | ------ immutable borrow later used here ... 39 | array.swap(left as usize, right as usize); | ^^^^^ mutable borrow occurs here As I understand it, the compiler has protected me from the case where the pivot changes after the swap. But I don't like that the pivot is recalculated in every iteration of the loop although it keeps the same value in most cases. I may keep the code with recalculation, or I may add the Copy trait. Finally, I think that the partition function is too large. I want to extract the body of the loop into another function. I can do it if I add the Copy trait to the quicksort and partition implementations: pub fn quicksort<T>(array: &mut Vec<T>) where T: Ord+Copy { partition(array, 0, array.len() as isize - 1); } fn partition<T>(array: &mut Vec<T>, start: isize, end: isize) where T: Ord+Copy { let length = end - start + 1; if length == 0 || length == 1 { return; } let mut left = start; let mut right = end; loop { swap_next_unordered_elements(array, &mut left, &mut right, array[start as usize]); if left >= right { break; } } partition(array, start, right); partition(array, right + 1, end); } fn swap_next_unordered_elements<T>(array: &mut Vec<T>, left: &mut isize, right: &mut isize, pivot: T) where T: Ord+Copy { while array[*left as usize] < pivot { *left += 1 } while array[*right as usize] > pivot { *right -= 1 } if *left < *right { array.swap(*left as usize, *right as usize); *left += 1; *right -= 1; } } To me, this code looks simpler.
Unfortunately, quicksort now has the Copy bound, which was not needed in the "complicated" version of the code. So, what is the best solution? Should I keep the first ("complicated") version of the code without Copy? Or should I make the simpler implementation with Copy? Or maybe there is another way to make a simple generic quicksort? Answer: When you start using references and arrays in Rust you tend to run into difficulties with the borrow checker. The borrow checker isn't smart enough to understand the invariants of the algorithm, and thus has to be overly cautious. There are three solutions: Use Indexes In this approach, you simply never take a reference, always pass around an index. So instead of: let pivot = array[start as usize]; while array[left as usize] < pivot { left += 1 } while array[right as usize] > pivot { right -= 1 } You would do: let pivot = start as usize; while array[left as usize] < array[pivot] { left += 1 } while array[right as usize] > array[pivot] { right -= 1 } If you follow this approach consistently, you'll find that the borrow checker is happy. However, it does mean re-evaluating array[pivot] and similar a lot. After optimizations are applied, this isn't as big a deal as you might think. Split Slices Slices, and by extension Vec, have methods that split them into distinct slices. For example, you can do let (pivot, rest) = array.split_first_mut().unwrap(); Now you have independent references to the pivot and the rest of the array. The borrow checker will understand that mutations to rest can't affect the pivot. There is also a split_at_mut that splits at a particular index, which you can use to divide a slice into two slices, perhaps to sort each independently. Use Unsafe You can use the unsafe function as_mut_ptr to get a pointer to the vec contents, and then use pointer logic to implement the algorithm without the borrow checker watching over your shoulder. Generally, it's not worth doing this, but for some low level code it can make sense.
You can also use get_unchecked to access the elements of the array without bounds checking. If you look at the code used in the standard library's implementation of quicksort, it uses some combination of all three.
{ "domain": "codereview.stackexchange", "id": 42232, "tags": "rust" }
What are the initial conditions for solving Schwarzschild geodesic equations?
Question: I am trying to solve the Schwarzschild geodesic equations and trying to plot them. I am new to the subject, so I am struggling with the initial conditions that I need to feed my computer. For reference, I have this system of differential equations whose solution I want to plot: $$\dot{\phi} = \frac{l}{r^2}$$ $$\dot{t} = \frac{e}{1-\frac{2GM}{rc^2}}$$ $$\dot{r}^2 = e^2- \left( 1+\frac{l^2}{r^2} \right) \left(1-\frac{2GM}{rc^2} \right)$$ Since I am considering the equatorial plane ($\theta = \frac{\pi}{2}$), what initial values of angular momentum and energy (or range) should I choose to get valid orbits of particles around the spacetime? Initially, I want to feed valid energy and angular momentum values, which should give some consistent solutions. Once I am confident with my model, I can feed arbitrary values as well. Answer: The specific energy $\mathcal{E}$ and angular momentum $\mathcal{L}$ for bound geodesics in Schwarzschild are given by $$\mathcal{E}= \frac{\sqrt{(p-2)^2-4e^2}}{\sqrt{p(p-3-e^2)}}, $$ and $$ \mathcal{L}= \frac{p}{\sqrt{p-3-e^2}}, $$ where $e$ is the eccentricity and $p$ is the semi-latus rectum. There are stable orbits only for $p > 6+2e$.
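These closed forms are easy to turn into initial data for the integrator. A minimal sketch in Python (geometric units $G=c=M=1$ are assumed, so $p$ and $e$ are dimensionless; the function name is mine):

```python
import math

def schwarzschild_EL(p, e):
    """Specific energy and angular momentum of a bound Schwarzschild
    geodesic with semi-latus rectum p and eccentricity e (G = c = M = 1)."""
    if p <= 6 + 2 * e:
        raise ValueError("no stable orbit: need p > 6 + 2e")
    energy = math.sqrt((p - 2) ** 2 - 4 * e ** 2) / math.sqrt(p * (p - 3 - e ** 2))
    ang_mom = p / math.sqrt(p - 3 - e ** 2)
    return energy, ang_mom

# A safely bound orbit: p = 10, e = 0.3 sits well above p = 6 + 2e = 6.6
E, L = schwarzschild_EL(10.0, 0.3)
```

Starting the radial coordinate at periastron, $r = p/(1+e)$, with these values of $\mathcal{E}$ and $\mathcal{L}$ then gives a valid bound orbit; note that $\mathcal{E} < 1$, as expected for a bound trajectory.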
{ "domain": "physics.stackexchange", "id": 82799, "tags": "general-relativity, black-holes, spacetime, orbital-motion, geodesics" }
Understanding the units of cosmic string number density
Question: I am reading this old paper: https://arxiv.org/pdf/1309.6637.pdf and trying to work out the units in equation 63. It gives the number density of cosmic strings in the radiation era as $$ \frac{n(\ell,t)}{a^3(t)}\approx\frac{0.18}{t^{3/2}(\ell+\Gamma G\mu t)^{5/2}} $$ with $n$ the number density, $a$ the scale factor, $t$ coordinate time, $\ell$ the loop size, $\Gamma$ the ratio of power radiated between GWs and EM and $G\mu$ the characteristic string tension. I expect the LHS to have units of $1/m^3$, as the scale factor is unitless. Making the substitution $t\rightarrow ct$ and $G\mu\rightarrow\frac{G\mu}{c^2}$ in order to convert $s$ to $m$ and $\frac{m^2}{s^2}$ to unitless turns the units of the RHS into a quantity with units $1/m^4$, which doesn't reconcile with the LHS. What mistake am I making converting back to SI? Answer: $$n=\frac{\mathrm{d}^2 N}{\mathrm{d}V\mathrm{d}\ell};$$ integrate over $\ell$ to get something with number density units.
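To see the unit counting explicitly: $n$ here is a density per volume and per loop length, so integrating out $\ell$ leaves a plain number density. A quick numeric check of the closed-form integral (Python sketch; geometric units $G=c=1$, parameter values purely illustrative):

```python
# n(l, t) = 0.18 / (t**1.5 * (l + GGmu * t)**2.5) in G = c = 1 units.
# Integrating over loop size l gives 0.18 * (2/3) / (t**1.5 * (GGmu*t)**1.5),
# which scales as 1/t**3: an inverse volume once ct converts time to length.
Gamma_G_mu = 1e-7    # Gamma * G * mu (illustrative value)
t = 1.0e3            # coordinate time (illustrative value)

a = Gamma_G_mu * t
dl = a / 200.0
# midpoint-rule integral of n over l, cut off at 1000 * a
numeric = sum(0.18 / (t ** 1.5 * ((i + 0.5) * dl + a) ** 2.5) * dl
              for i in range(200_000))
closed_form = 0.18 * (2.0 / 3.0) / (t ** 1.5 * a ** 1.5)
```

The integrated density falls off as $t^{-3}$; substituting $t \to ct$ then yields $1/m^3$ on both sides, resolving the apparent $1/m^4$.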
{ "domain": "physics.stackexchange", "id": 93675, "tags": "cosmology, units, si-units, absolute-units, cosmic-string" }
Is there an intuitive explanation for how adaptive beamformers work?
Question: Recently I've been learning about and implementing some adaptive beamforming schemes (particularly the SMI/Capon beamformer, and the Robust Capon Beamformer). I understand the mathematical derivations for them, but I'm struggling with the intuition behind how they work. The SMI beamformer is the most basic, with array weights $$w=R^{-1}a^H$$ Where $R$ is the covariance matrix and $a$ is your steering vector. $a^H$ represents the conjugate transpose of $a$. For the purposes of this, I'm considering the ideal case where $R$ is calculated using sampled noise and interferer data, and not the desired signal. The beamformer is designed to have unit gain in the "look direction", the direction you think the source is in. It also has the effect of nulling the beam response in the direction of the interferers. My question is: how does this work? I realise that information about the interferers (particularly the time of arrival at each sensor) is wrapped up inside the covariance matrix, but why does multiplying its inverse by the steering vector have this nulling effect? I've tried a few approaches to get some better intuition about this. These include working through the derivation of the beamformer, playing with the equations (in particular, considering the alternate form $a=Rw$) and producing beampatterns for various cases. I can see exactly what the beamformer is doing, and I understand each step of deriving the equations, but I can't explain in a satisfying way why this process nulls the interfering signals. Answer: A basic two-element array suffices to explain the general case. Also, the receiving behaviour is the reciprocal of that of the transmitting. Consider two receivers separated by $d$. A plane harmonic (sinusoidal) electromagnetic wave arrives (from a large enough distance) with an angle of incidence $\theta$, to both receivers whose outputs (denoted $x_1(t)$, $x_2(t)$) are superposed to produce $s(t) = a_1 x_1(t) + a_2 x_2(t)$. 
$$ x_1(t) = A \sin( \omega_0 t) \tag{1}$$ $$ x_2(t) = A \sin( \omega_0 (t- t_d)) \tag{2}$$ where the delay in the second receiver (caused by the inclined arrival path) is $$t_d = d \sin(\theta) / c \tag{3}$$ where $c$ is the speed of light, $\omega_0 = 2 \pi c / \lambda$ is the (angular) frequency, and $\lambda$ is the wavelength. Eqs. 1 and 2 can be written as: $$ x_1(t) = A \sin( \omega_0 t) \tag{4}$$ $$ x_2(t) = A \sin( \omega_0 t - \phi) \tag{5}$$ where $ \phi = 2 \pi ~(d/\lambda) \sin(\theta) $. Trigonometric reasoning yields that for different values of the phase difference $\phi$, you will get constructive or destructive interference at the summation output $s(t) = a_1 x_1(t) + a_2 x_2(t)$. Since, for fixed values of $d$ and $\lambda$, the value of $\phi$ is a function of $\theta$, the receiving (or transmitting) pattern associated with the array will also be a function of it, resulting in nulls and peaks. For larger arrays, a similar interference phenomenon yields a more selective directivity pattern, determined by the element spacing $d$ and the array weights $a_k$.
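The two-receiver interference above is easy to reproduce numerically with complex phasors in place of the time-domain sinusoids. A small sketch (the half-wavelength spacing $d/\lambda = 0.5$ and the weight choices are illustrative):

```python
import numpy as np

d_over_lam = 0.5                                  # element spacing d / wavelength
theta = np.linspace(-np.pi / 2, np.pi / 2, 181)   # arrival angles
phi = 2 * np.pi * d_over_lam * np.sin(theta)      # inter-element phase difference

# Equal weights (a1 = a2 = 1): constructive interference at broadside (theta = 0)
pattern_sum = np.abs(1 + np.exp(-1j * phi))
# Opposite weights (a1 = 1, a2 = -1): the broadside peak becomes a null
pattern_diff = np.abs(1 - np.exp(-1j * phi))
```

With equal weights the response peaks at $\theta = 0$ (where $\phi = 0$); flipping the sign of one weight places a null there instead, with the peaks pushed toward endfire. Choosing weights so that a null lands on an interferer's $\phi$ is exactly the mechanism an adaptive beamformer automates.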
{ "domain": "dsp.stackexchange", "id": 12182, "tags": "beamforming" }
Why don't you see multiple images of an object?
Question: Consider the ray model of light. Let's say an object such as a pencil is illuminated, and consider one point on that pencil. Since there could be many rays of light bouncing off the same point on the pencil, one ray could hit the left side of your eye and another ray from the same point could hit the right side of that same eye. Why don't you see that point in 2 different places then? Thanks! Answer: I believe that the answer to this question involves multiple parts. I will try and hit all of them. Since you mentioned the ray model, I will assume you are relatively familiar with geometric optics. First, we do see different images of the same object at times! Or rather, we see a blurry image rather than a sharp one. If you bring a pencil so close to your eyes that you cannot focus on it using your lens (more on that later), then you will see a blurry image. This happens for exactly the reason you mention. Rays from the same point of the object take different paths to your retina. If the path taken to your lens makes a large angle compared to the path that passes straight through the lens, then in general the rays will not recombine at a single point, but will have some spread. As the object moves closer to your eye, the angles increase, so the spread becomes larger and you see a blurry image. Your lens, whose shape is adjusted by the ciliary muscle, is very important for creating sharp images. Generally, the distance between your lens and your retina is fixed. The distance between the object and your lens is not fixed, but we would like to be able to resolve detail for some range of object distances, rather than just a single one. So, the only parameter your body can control is the focal length. By contracting and relaxing, the ciliary muscle changes the curvature of your lens. The change in curvature leads to a change in focal length. So, whatever object you are attempting to focus on, the ciliary muscle adjusts the lens so that the object is beyond its focal length.
This ensures that the rays will converge toward the retina and produce an image. However, even with the object beyond the focal length you still get a blurry image if the rays make a large angle with the axis that is perpendicular to your eye/lens/retina (as discussed before), and this is one reason why you have a pupil. The pupil only allows rays that are approximately all parallel to each other to fall on your lens. It effectively acts as a collimator. So now, you have an object at or beyond the focal length and a set of approximately parallel rays that fall on your lens and are focused. This leads to a relatively sharp image on your retina (assuming that the object is far enough away that the pupil can do its job). The final piece of the puzzle is your brain. Even though your ciliary muscle, lens, and pupil do what they can to create a sharp image for your retina, it is still imperfect. There are still a number of aberrations in the image that falls on your retina. Your visual cortex and related support areas in the brain do all of the processing and reconstructing that leads to you perceiving a sharp image.
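The fixed lens-retina distance argument can be made quantitative with the thin-lens relation $1/f = 1/d_o + 1/d_i$. A toy calculation in Python (the 17 mm lens-retina distance and the object distances are assumed, illustrative figures, not anatomical claims):

```python
d_i = 0.017  # fixed lens-to-retina distance in metres (illustrative)

def focal_length_needed(d_o):
    """Focal length that images an object at distance d_o onto the retina."""
    return 1.0 / (1.0 / d_o + 1.0 / d_i)

f_far = focal_length_needed(10.0)    # distant object
f_near = focal_length_needed(0.25)   # object at reading distance
# f_near < f_far < d_i: focusing on nearer objects needs a shorter focal
# length, i.e. a more strongly curved ("stronger") lens
```

This is the accommodation described above: with $d_i$ fixed, only $f$ can change, so the lens must become more curved as the object approaches.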
{ "domain": "physics.stackexchange", "id": 7190, "tags": "optics, visible-light, geometric-optics" }
GC normalization ATAC seq data
Question: Is it really required to do GC normalization for ATAC-seq data? One of the papers with ATAC-seq data did the normalisation for GC bias after peak calling. When I looked for the library and the reason for the GC bias, I found this: Could it be true that genes with higher GC content are higher expressed? It has been suggested that genes that are either extremely high or extremely low expressed are under some form of selection leading to “extreme” GC content. What CQN does, is making the effect of GC content comparable across samples, and we show in [1] that this leads to improved inference. It also flattens the effect of GC content on gene expression, but we believe this is better than having the effect of GC content depend on the sample. https://www.bioconductor.org/packages/release/bioc/vignettes/cqn/inst/doc/cqn.pdf Now, that scenario is for RNA-seq; would it be logical to do the same for ATAC-seq? Answer: The only reason to normalize for GC content in RNA-seq is if it differs notably between samples/groups. If that's not the case and you aren't trying to compare genes within samples, then you have no reason to try to account for GC content. The same goes for ATAC-seq, though there the danger with trying to normalize for GC content is that you then mask changes in global accessibility. In short, normalize for things that should cause problems with your analysis and nothing else.
{ "domain": "bioinformatics.stackexchange", "id": 1666, "tags": "atac-seq" }
Unity3D script for controlling a character
Question: This is a Unity3D script I wrote for my game to control a character. I don't consider myself a highly-skilled coder, so I come here to ask for suggestions. Is my code clean enough? How can I improve my coding style and pattern? using UnityEngine; public enum OnWallStatus { None = 0, OnLeft = 1, OnRight = 2 } public enum OnGroundStatus { OnGround = 0, InAir = 1 } public class MoveControl : MonoBehaviour { private const float minSpeed = 1f; private const float maxSpeed = 5f; private const float minJump = 1f; private const float maxJump = 10f; [SerializeField] private float speedModifier; [SerializeField] private float jumpModifier; private new Rigidbody2D rigidbody2D; private new Transform transform; private OnGroundStatus _onGroundStatus; private OnWallStatus _onWallStatus; public OnGroundStatus onGroundStatus { get { return _onGroundStatus; } } public OnWallStatus onWallStatus { get { return _onWallStatus; } } private bool isHorizontalStill { get { return (rigidbody2D.velocity.x == 0); } } private bool isVerticalStill { get { return (rigidbody2D.velocity.y == 0); } } void Awake () { speedModifier = Mathf.Clamp(speedModifier, minSpeed, maxSpeed); jumpModifier = Mathf.Clamp(jumpModifier, minJump, maxJump); rigidbody2D = GetComponent<Rigidbody2D>(); transform = GetComponent<Transform>(); _onGroundStatus = OnGroundStatus.OnGround; _onWallStatus = OnWallStatus.None; } public void MoveHorizontal(float speed) { speed = Mathf.Clamp(speed, -1f, 1f); rigidbody2D.AddForce(new Vector2(speed * speedModifier * 10f, 0f)); Vector2 velocity = rigidbody2D.velocity; velocity.x = Mathf.Clamp(velocity.x, -1f * speedModifier, 1f * speedModifier); rigidbody2D.velocity = velocity; } public void MoveJump(float speed) { if (_onWallStatus == OnWallStatus.None && _onGroundStatus == OnGroundStatus.InAir) return; speed = Mathf.Clamp(speed, 0f, 1f); if (_onGroundStatus == OnGroundStatus.OnGround) { rigidbody2D.velocity = new Vector2(rigidbody2D.velocity.x, speed * jumpModifier); } else 
if(_onWallStatus == OnWallStatus.OnLeft) { rigidbody2D.velocity = new Vector2(1f * speedModifier, speed * jumpModifier); } else if (_onWallStatus == OnWallStatus.OnRight) { rigidbody2D.velocity = new Vector2(-1f * speedModifier, speed * jumpModifier); } } public void MoveLand(float speed) { if (_onWallStatus != OnWallStatus.None || _onGroundStatus != OnGroundStatus.InAir) return; speed = Mathf.Clamp(speed, -1f, 0f); rigidbody2D.velocity = new Vector2(rigidbody2D.velocity.x, speed * jumpModifier); } void OnCollisionEnter2D(Collision2D collision) { if (collision.gameObject.tag != "Platform") return; if (collision.contacts[0].normal.y > 0) { _onGroundStatus = OnGroundStatus.OnGround; } else { if(collision.contacts[0].normal.x < 0) { _onWallStatus = OnWallStatus.OnRight; } else { _onWallStatus = OnWallStatus.OnLeft; } } } void OnCollisionExit2D(Collision2D collision) { if (collision.gameObject.tag != "Platform") return; if (collision.contacts[0].normal.y > 0) { _onGroundStatus = OnGroundStatus.InAir; } else { _onWallStatus = OnWallStatus.None; } } } Answer: You don't need this directive in the code you have given us. using System.Collections; It's conventional to use Pascal casing to name private constants in a file, so private const float minSpeed = 1f; private const float maxSpeed = 5f; private const float minJump = 1f; private const float maxJump = 10f; should be: private const float MinSpeed = 1f; private const float MaxSpeed = 5f; private const float MinJump = 1f; private const float MaxJump = 10f; In C# it's convention that private instance fields start with an underscore. [SerializeField] private float _speedModifier; [SerializeField] private float _jumpModifier; I would also consider writing 'getter' and 'setter' methods for each field instead of serializing them. I believe public properties also use Pascal case conventionally. 
public OnGroundStatus onGroundStatus { get { return _onGroundStatus; } } public OnWallStatus onWallStatus { get { return _onWallStatus; } } If you preferred you could use auto-properties to set these instead. public OnGroundStatus OnGroundStatus { get; private set; } public OnWallStatus OnWallStatus { get; private set; } I don't see a usage of these properties in your code. private bool isHorizontalStill { get { return (rigidbody2D.velocity.x == 0); } } private bool isVerticalStill { get { return (rigidbody2D.velocity.y == 0); } }
{ "domain": "codereview.stackexchange", "id": 18315, "tags": "c#, design-patterns, unity3d" }
Experimental evidence for 3 generations of quark?
Question: I know that looking at the invisible decay width of the $Z$-boson at the LEP collider at CERN provides evidence for the existence of three (light) neutrino generations, but I can't find any information on the experimental evidence for 3 quark generations. Thus: what experimental evidence suggests the existence of three quark generations? Answer: The data that provides a direct measurement of the charges of high-mass quarks also exhibits the existence of those heavier states. By comparing the cross sections for $$ e^+ + e^- \longrightarrow \mu^+ + \mu^-$$ with that for $$ e^+ + e^- \longrightarrow q + \bar{q} \longrightarrow \text{hadrons} $$ we get a fairly direct measurement of $$ \sum_\text{accessible quark flavours} q^2_\text{quark} \;.$$ All by itself that's pretty strong evidence, but when combined with the group structure of the hadronic multiplets it's about as close to iron-clad as evidence comes in the particle physics world.
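That charge-squared sum is what shows up as steps in the famous $R$ ratio. A quick sketch of the leading-order textbook prediction (including the colour factor of 3; the flavour sets correspond to the charm and bottom thresholds):

```python
from fractions import Fraction

# Quark electric charges in units of e
charges = {'u': Fraction(2, 3), 'd': Fraction(-1, 3), 's': Fraction(-1, 3),
           'c': Fraction(2, 3), 'b': Fraction(-1, 3)}

def R(flavors):
    """Leading-order R = sigma(e+e- -> hadrons) / sigma(e+e- -> mu+mu-),
    with the colour factor of 3, for the kinematically accessible flavors."""
    return 3 * sum(charges[q] ** 2 for q in flavors)

R_uds = R('uds')       # below the charm threshold
R_udsc = R('udsc')     # above the charm threshold
R_udscb = R('udscb')   # above the bottom threshold
```

The measured jumps of $R$ from 2 to 10/3 near the charm threshold and on to 11/3 near the bottom threshold (up to QCD corrections) are the direct evidence for the heavier quark states.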
{ "domain": "physics.stackexchange", "id": 39313, "tags": "particle-physics, standard-model, quarks" }
Why does an egg boiler require more water to cook fewer eggs?
Question: I got an egg boiler machine, and the instructions state: Less water is used when cooking more eggs. My thermodynamics understanding cannot figure this out yet. Why would I need less water for more eggs? The machine beeps when the water has evaporated, so the eggs are ready. In that case, why will less water cook more eggs? Answer: Presumably, the rate of the steam escaping the cooker depends on the "resistance" of the steam path: from the opening in the bottom, where the steam enters the dome, to the opening on the side of the dome, from where the steam escapes. The more eggs in the cooker, the narrower the path, the slower the flow. Also, as relatively slowly moving steam makes contact with more eggs, it is more likely to condense and make its way back to the water at the bottom of the cooker, which further reduces its escape rate. So, with more eggs in the cooker, a smaller amount of water will last about as long as a greater amount of water with fewer eggs, resulting in a similar degree of cooking.
{ "domain": "physics.stackexchange", "id": 52105, "tags": "thermodynamics, everyday-life" }
Conservation Vs Non-conservation Forms of conservation Equations
Question: I understand mathematically how one can obtain the conservation equations in both the conservative $${\partial\rho\over\partial t}+\nabla\cdot(\rho \textbf{u})=0$$ $${\partial\rho{\textbf{u}}\over\partial t}+\nabla\cdot(\rho \textbf{u}\otimes\textbf{u})+\nabla p=0$$ $${\partial E\over\partial t}+\nabla\cdot(\textbf{u}(E+p))=0$$ and non-conservative forms. However, I am still confused: why do we call them conservative and non-conservative forms? Can anyone explain from a physical and mathematical point of view? Many off-site threads deal with this question (here and here), but none of them provides a good enough answer for me! If anyone can provide some hints, I will be very grateful. Answer: What does it mean? The reason they are conservative or non-conservative has to do with the splitting of the derivatives. Consider the conservative derivative: $$ \frac{\partial \rho u}{\partial x} $$ When we discretize this, using a simple numerical derivative just to highlight the point, we get: $$ \frac{\partial \rho u}{\partial x} \approx \frac{(\rho u)_i - (\rho u)_{i-1}}{\Delta x} $$ Now, in non-conservative form, the derivative is split apart as: $$ \rho \frac{\partial u}{\partial x} + u \frac{\partial \rho}{\partial x} $$ Using the same numerical approximation, we get: $$ \rho \frac{\partial u}{\partial x} + u \frac{\partial \rho}{\partial x} = \rho_i \frac{u_i - u_{i-1}}{\Delta x} + u_i \frac{\rho_i - \rho_{i-1}}{\Delta x} $$ So now you can see (hopefully!) there are some issues. While the original derivative is mathematically the same, the discrete form is not the same. Of particular difficulty is the choice of the terms multiplying the derivative. Here I took it at point $i$, but is $i-1$ better? Maybe at $i-1/2$? But then how do we get it at $i-1/2$? Simple average? Higher order reconstructions? Those arguments just show that the non-conservative form is different, and in some ways harder, but why is it called non-conservative?
For a derivative to be conservative, it must form a telescoping series. In other words, when you add up the terms over a grid, only the boundary terms should remain and the artificial interior points should cancel out. So let's look at both forms to see how those do. Let's assume a 4 point grid, ranging from $i=0$ to $i=3$. The conservative form expands as: $$ \frac{(\rho u)_1 - (\rho u)_0}{\Delta x} + \frac{(\rho u)_2 - (\rho u)_1}{\Delta x} + \frac{(\rho u)_3 - (\rho u)_2}{\Delta x} $$ You can see that when you add it all up, you end up with only the boundary terms ($i = 0$ and $i = 3$). The interior points, $i = 1$ and $i = 2$ have canceled out. Now let's look at the non-conservative form: $$ \rho_1 \frac{u_1 - u_0}{\Delta x} + u_1 \frac{\rho_1 - \rho_0}{\Delta x} + \rho_2 \frac{u_2 - u_1}{\Delta x} + u_2 \frac{\rho_2 - \rho_1}{\Delta x} + \rho_3 \frac{u_3 - u_2}{\Delta x} + u_3 \frac{\rho_3 - \rho_2}{\Delta x} $$ So now, you end up with no terms canceling! Every time you add a new grid point, you are adding in a new term and the number of terms in the sum grows. In other words, what comes in does not balance what goes out, so it's non-conservative. You can repeat the analysis by playing with altering the coordinate of those terms outside the derivative, for example by trying $i-1/2$ where that is just the average of the value at $i$ and $i-1$. How to choose which to use? Now, more to the point, when do you want to use each scheme? If your solution is expected to be smooth, then non-conservative may work. For fluids, this is shock-free flows. If you have shocks, or chemical reactions, or any other sharp interfaces, then you want to use the conservative form. There are other considerations. Many real world, engineering situations actually like non-conservative schemes when solving problems with shocks. The classic example is the Murman-Cole scheme for the transonic potential equations. 
It contains a switch between a central and upwind scheme, but it turns out to be non-conservative. At the time it was introduced, it got incredibly accurate results. Results that were comparable to the full Navier-Stokes results, despite using the potential equations which contain no viscosity. They discovered their error and published a new paper, but the results were much "worse" relative to the original scheme. It turns out the non-conservation introduced an artificial viscosity, making the equations behave more like the Navier-Stokes equations at a tiny fraction of the cost. Needless to say, engineers loved this. "Better" results for significantly less cost!
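The telescoping argument above can be checked in a few lines of plain Python (the grid size and the smooth $\rho$, $u$ profiles are arbitrary illustrative choices):

```python
# Pointwise profiles on a small 1-D grid (choices are illustrative)
N = 11
x = [i / (N - 1) for i in range(N)]
rho = [1.0 + xi for xi in x]           # smooth "density"
u = [1.0 + xi ** 2 for xi in x]        # smooth "velocity"
f = [r * v for r, v in zip(rho, u)]    # conservative flux rho*u

# Conservative differences telescope: only the boundary fluxes survive
cons_sum = sum(f[i] - f[i - 1] for i in range(1, N))     # f[-1] - f[0] = 3

# The non-conservative split leaves interior terms behind:
# rho_i*(u_i - u_{i-1}) + u_i*(rho_i - rho_{i-1})
noncons_sum = sum(rho[i] * (u[i] - u[i - 1]) + u[i] * (rho[i] - rho[i - 1])
                  for i in range(1, N))                  # 3 + leftover terms
```

The two sums differ by exactly $\sum_i \Delta\rho_i\,\Delta u_i$, the interior residue that refuses to cancel; for smooth fields it shrinks as the grid is refined, but across a discontinuity it does not, which is why conservation matters there.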
{ "domain": "physics.stackexchange", "id": 53861, "tags": "fluid-dynamics, conservation-laws, continuum-mechanics" }
How can anions exist?
Question: Consider an nitrogen atom with 7 protons and 7 electrons. How can an nitrogen anion $\ce{N^-}$ exist? Shouldn't the 7 electrons in valence shell repel the extra one? What force does hold the extra one electron in the valence shell? There isn't an extra proton in the $\ce{N^-}$ that would hold the extra electron. Answer: One reason cations and anions exist is due to the stability of a full or half-full valence shell. The stability from those electronic configurations means that the atom or molecule does not require protons to "hold" the extra electron. Recall also that nitrogen has three (or five) valence electrons, rather than seven. The 1s shell is full and is not considered part of its valency. The three 2p electrons are the valence electrons although they hybridize with the 2s electrons to produce the trigonal pyramidal structure of ammonia with its lone pair. The single anion $\ce{N^-}$ could exist, but would not be stable because it puts four electrons in the p shell. The p shell would prefer to have three electrons as it does in the nitrogen atom or no electrons as it does in the $\ce{N^3+}$ cation.
{ "domain": "chemistry.stackexchange", "id": 16662, "tags": "valence-bond-theory" }
What is optical path length? Does it make sense that the optical path length is variable or zero?
Question: I've encountered an exercise where L(A,B) = sum of other OPLs that are parts of itself, and at the end we got that L(A,B) = 0?! But the definition is that L(A,B) = OPL = sum(...) = n×AB, so that means that either n is zero or AB is zero, where both are impossible. I'm missing something here, or I didn't really get the idea of OPL, which as I understand it corresponds to the distance L(A,B) in vacuum equivalent to the distance AB traversed in the medium of index n at the same time t. And what does L(A,B) = 0 or constant or variable mean? (Like, how will rays of light behave in all these conditions?) And does L(A,B) = 0 or constant define a stigmatic optical system? Answer: The optical path between two points $A$ and $B$ is related to the phase difference of the wave between these two points: $L_{AB}=\frac{c}{\omega}(\varphi(B)-\varphi(A))$ With this relation, the optical path appears as an algebraic quantity which is positive if one goes in the direction of the light, negative in the opposite direction. For the virtual extension of a ray, we consider that the light goes in the same direction as on the real part. For example, for a plane mirror, with object $A$ and image $A'$ we have: $L=(AI)+(IA')=AI - IA' = 0$: the optical path is indeed zero in this case.
{ "domain": "physics.stackexchange", "id": 83130, "tags": "definition, refraction, geometric-optics" }
From the standpoint of a tachyonic frame of reference, is the universe perceived as a black hole?
Question: I am familiar with the limitations imposed by special relativity related to the existence of tachyonic observers. Still, since many experiments directed towards the detection of tachyons have been conducted in the past, I think it is legitimate to consider tachyonic frames of reference. For an observer very close to (but under) the speed of light, radiation from the universe would emanate from a single point in the direction of travel, and all radiation would be Doppler shifted to gamma-ray wavelengths. In this direction, this link was sent to me by Thomas Fritsch, related to another question. When we go into the FTL domain (for the tachyonic observer), that point will disappear (observables take imaginary values), and the Doppler-shifted radiation will have such high frequencies that a black hole will be formed. In other words, for a tachyonic observer, the whole universe will be a black hole (or a naked singularity?). Related to the possibility that extremely high frequency radiation could create a black hole, check this link. This could be an explanation of why the existence of tachyons has never been confirmed in experiments (or a way to design better experiments). For the status of these searches, check this link. Question: From the standpoint of a tachyonic frame of reference, is the universe perceived as a black hole? Answer: It is a misunderstanding that SR says that nothing can travel faster than c, so this question does have some merit (theoretically). As per SR and GR (in vacuum, when measured locally): whatever travels slower than c (has rest mass) will always travel slower than c; whatever travels at speed c will always travel at speed c; whatever travels faster than c will always travel faster than c. Now what gives merit to the question is the third point: it is not commonly known, and not really stated too much on this site though.
Now to understand what the universe would look like to an observer, we have to clarify: as per SR, anything with rest mass does have a reference frame; as per SR, anything with no rest mass does not have a reference frame; as per SR, anything that travels faster than c might or might not have rest mass (theoretically, there is no model for this). Since your question is about the observations of an observer traveling faster than c, this does not contradict the fact that, as per SR, massless particles, traveling at speed c, do not have a reference frame. It does not make sense to talk about what the observations of an observer traveling at speed c (a photon) would be, because as per SR it does not have a reference frame. This does not mean that anything traveling faster than c does not have a reference frame, so theoretically there is no contradiction with SR, though, experimentally, we cannot prove that anything could ever travel faster than c. Let's disregard that, and say that we start with the frame of a neutrino, which is as close to the speed of light as possible. Now a neutrino would see its own clock tick normally, but as soon as it compares its clock to the clocks in the universe at rest (relative to itself), it will see that those other clocks tick almost infinitely fast. The neutrino will see the whole 13.8 billion years (assuming it has been flying around since then) as just a moment in time on its own clock. Like a fast-forward of the whole movie called Universe, lasting 13.8 billion years, watched in a moment. Now there is no sense in going even faster, but you get the idea: the photon would see (again, there is no reference frame of the photon) the whole 13.8 billion years in an infinitely small amount of time (assuming the photon has been flying around since then); the photon could not even experience these 13.8 billion years, since for the photon, emission and absorption are at a spacetime distance of 0, that is, it is a lightlike path.
The photon could not even watch this movie called Universe; it would be so short for it. Now let's go even faster, theoretically, as in your question: a tachyon. It would see the 13.8-billion-year movie backwards (theoretically). This is not the same as traveling back in time. This is just that the frame of the tachyon (theoretically) would make the whole movie called Universe be played in reverse. How fast it would play in the frame of the tachyon is just a matter of how much faster than c the tachyon is. But for a tachyon that has been flying around for 13.8 billion years, this movie hasn't even started yet (in our frame at rest relative to the tachyon). It will start seeing the movie whenever it decays (or whenever the universe ends). Of course, in the frame of the tachyon the movie has already started (playing backwards from the decay or the end of the universe); that is why they say that (theoretically) the tachyon is foreseeing the future. That is not correct to say, but theoretically it is watching the universe backwards, from its ending to the Big Bang.
{ "domain": "physics.stackexchange", "id": 58704, "tags": "general-relativity, special-relativity, black-holes, tachyon" }
A few (basic) doubts about motion concepts
Question: I'm a beginner studying mechanics for the first time. I am a homeschooler, stuck with an incomplete and vague course textbook, and largely self-taught, so please bear with me; these are painfully basic questions, but I need clarification of concepts. I've attached an image of the question. It has multiple parts. My questions are: For part (iii): The answer key uses one of the constant acceleration kinematics formulas to solve this, given initial velocity $5.5 m/s$ and time $4$ seconds to find $s$. It keeps using the same acceleration (-0.25 m/s^2). Why? Doesn't acceleration change when the box is "struck"? For part (iv): It stops using the acceleration -0.25 m/s^2. Why? Also, we can't use t = 4 for time any more. Why? In part (v), we can't assume the speed of the rebound is the same as the speed of arrival at the boundary board. Why? In the mechanics sense of things, I mean; what is the force exerted by the boundary board upon impact equal to? Not the force at which the box hits the board? If possible, can I get the "whys" for everything? I'm set to give a super difficult exam soon and need to know the whys. Answer: The answers for all the whys: Part (iii): By "struck", the question indicates an impulsive force acting on the block and giving it speed. So yes, it accelerates because of this force, but only for the time the force exists. But the question asks for the distance travelled after it gains the speed of $5.5 m/sec$, i.e. the question starts after the striking force is removed. So now you are left with some velocity and a friction force opposing that velocity, and thus the acceleration has a value of $-0.25m/sec^2$. Part (iv): After rebounding, the block moves towards A, but its initial velocity is less than the one it gained when struck at A. So the acceleration is again due to friction and its value is the same. The book might use the signs differently, but the magnitude of that acceleration is the same, i.e. $0.25m/s^2$.
We can't use the same time $t=4$ for the backward motion because the initial speed is less by a value of $2$. So the block will take more time to reach $A$ from $B$ than it took to reach $B$ from $A$. Part (v): the speed after the rebound can't be equal to the speed of arrival at the boundary, because when the block hits the boundary it loses some of its kinetic energy, mainly in the form of heat; you can feel this heat if you touch that boundary after the block rebounds. So there exists a normal force (again impulsive, since a large force acts for a very small interval of time). This force first decelerates the incoming body and then gives it some velocity in the direction opposite to its earlier motion. Note: if you are new to impulsive forces, just remember when you played with a ball by throwing it down on the floor: it rebounds off the floor and comes back to you. Here also the ball experiences a great force in a very small time, and in such situations the force is regarded as an impulsive force. Hope it helps ☺️.
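The whys above can be checked numerically with the constant-acceleration formulas $v = u + at$, $s = ut + \frac{1}{2}at^2$ and $v^2 = u^2 + 2as$. A minimal sketch (the only values quoted here are $u = 5.5\ m/s$, $a = -0.25\ m/s^2$ and $t = 4\ s$; the rebound speed of $3.5\ m/s$ is an assumption based on "less by a value of 2"):

```python
u, a, t = 5.5, -0.25, 4.0          # initial speed, friction deceleration, travel time A->B

# part (iii): distance covered A->B under constant deceleration
s_ab = u * t + 0.5 * a * t**2      # 22.0 - 2.0 = 20.0 m
v_b = u + a * t                    # speed on arrival at B: 4.5 m/s

# part (iv): return trip with an assumed rebound speed of 3.5 m/s;
# the friction magnitude is unchanged, so the deceleration is the same
u_back = 3.5
v_a_sq = u_back**2 + 2 * a * s_ab  # v^2 = u^2 + 2as: 12.25 - 10.0 = 2.25
v_a = v_a_sq ** 0.5                # speed on reaching A: 1.5 m/s
t_back = (u_back - v_a) / (-a)     # (3.5 - 1.5) / 0.25 = 8.0 s, longer than 4 s
```

With these assumed numbers the return trip indeed takes longer than 4 s, which is the point of part (iv).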
{ "domain": "physics.stackexchange", "id": 72017, "tags": "homework-and-exercises, newtonian-mechanics, forces, kinematics" }
Green functions of interacting Bose gas
Question: It is a well-known fact that the appearance of the superconducting state in a superconductor is related to the existence of a pole in the two-particle Green function. Does a similar fact exist for Bose condensation? I mean the following: can one compute a Green function of an interacting Bose gas and see that a singularity of this function corresponds to the appearance of the condensate state? Answer: Bose-Einstein condensation is not caused by interaction: the ideal Bose gas undergoes Bose-Einstein condensation. In contrast, superfluidity is due to interaction, and one should see its signature in the Green functions as a pole associated with phonons.
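The answer's first point, that condensation needs no interaction, is visible in the textbook result for the ideal 3D Bose gas, whose condensate fraction below the critical temperature is $N_0/N = 1 - (T/T_c)^{3/2}$. A quick sketch of that formula:

```python
def condensate_fraction(t_over_tc):
    """Ideal 3D Bose gas: N0/N = 1 - (T/Tc)^(3/2) below Tc, zero above."""
    return max(0.0, 1.0 - t_over_tc ** 1.5)

condensate_fraction(0.0)   # 1.0: fully condensed at T = 0
condensate_fraction(0.5)   # ≈ 0.65 of the particles in the condensate
condensate_fraction(1.5)   # 0.0: no condensate above Tc
```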
{ "domain": "physics.stackexchange", "id": 71935, "tags": "quantum-field-theory, statistical-mechanics, greens-functions" }
Searching for bad Minecraft questions on Gaming.SE
Question: To preface this post a little bit, I'll explain a little bit of the "backstory" here. The Stack Exchange site, Gaming.SE, also known as Arqade, often has a large influx of bad questions about the popular video game, Minecraft. Many of these questions are either easy to find on a search engine, horribly unclear, have been asked before, are treating Arqade like a forum, or are asking for things like account support, which only the company that produces the game can provide. A few examples might include: what is the best biome to spawn in minecraft Can't Log In To Minecraft Account After 1.9 No .Minecraft folder at school What I need to make TNT? How to Spawn Herobrine without Mods? The query essentially searches in a manner like this: It looks for questions with tags that are like %minecraft% or %minecraft-%%. It checks to see if it has a score below -1. It checks to make sure that the post hasn't already been deleted. It checks to see if it's either been closed, or not closed. I'd like to know the following things: Is there a way to make this shorter and less repetitive? Are there any redundant checks that aren't needed? Are there any checks that could cause problems? Anything else? Here's the code, and the query link:

SELECT Id AS [Post Link]
     , OwnerUserId AS [User Link]
     , Score
FROM Posts
WHERE DeletionDate IS NULL
  AND Score <= -1
  AND ( Tags LIKE '%minecraft-%%' OR Tags LIKE '%minecraft%' )
  AND ( ClosedDate IS NULL OR ClosedDate IS NOT NULL );

SELECT COUNT(*)
FROM Posts
WHERE DeletionDate IS NULL
  AND Score <= -1
  AND ( Tags LIKE '%minecraft-%%' OR Tags LIKE '%minecraft%' )
  AND ( ClosedDate IS NULL OR ClosedDate IS NOT NULL );

Answer: The use of the Tags field on the Post is a poor choice for the query. It relies on a table scan, which is slow, because it has to check each question. Note, you can use the < and > characters to identify tag start and end values in the Tags column, but, as I say, don't use that column.
Instead, you should do a join with the PostTags and Tags tables. Using a CTE to simplify that query makes sense. I really dislike the all-caps style of SQL, but realize that there is no standard. If you can change standards, please use lower-case for keywords... it makes the column and table names much easier to see (and those are the important parts). Still, you get +1 for being consistent in your capitalization. Consistency is really important, and you nailed it. Note that the Posts table has no deleted posts, none. There's no need to check the DeletedDate at all. So, joining Tags to PostTags gets you minecraft-tagged posts. Joining that to Posts gets you the ones you can test scores and closures on. Here are unclosed, low-scoring, minecraft-related posts....:

WITH mctags AS (
    SELECT Id AS tid
    FROM Tags
    WHERE TagName LIKE 'minecraft%'
), mcqs AS (
    SELECT DISTINCT PostId
    FROM PostTags
    INNER JOIN mctags ON TagId = tid
)
SELECT Id AS [Post Link],
       OwnerUserId AS [User Link],
       Tags,
       Score
FROM Posts
INNER JOIN mcqs ON Id = PostId
WHERE Score <= -1
  AND ClosedDate IS NULL
  AND PostTypeId = 1

Note, I have tried to continue using your all-caps keywords.... despite myself ;-) Hmmm, some additional notes: your text says you check for questions "below -1", but your code checks for questions "below 0" ... there is a difference between < and <=. It does not check the closed-status of the post. If you want to see if a currently-open post was closed in the past, you need to check the PostHistory table.
{ "domain": "codereview.stackexchange", "id": 16167, "tags": "sql, sql-server, stackexchange, t-sql" }
Where do the three negative oxygens in a phosphate group get their missing electron from?
Question: A video I watched showed the construction of a phosphate group using a Lewis structure. The oxygen with the double bond fills its octet. The three other oxygens each lack an electron. The teacher merely pointed out this lack and added the missing electrons (shown in red). Where do these missing electrons come from? Are they stolen from passing hydrogens? Answer: Well! A neutral substance like $\ce{PO4}$ does not exist. It only exists as an ion with three negative charges: $\ce{PO4^{3-}}$. These charges can come from three atoms of the first column, like hydrogen $\ce{H}$ or sodium $\ce{Na}$. These atoms become the cations $\ce{H^+}$ and $\ce{Na^+}$ after this electron "gift". So the phosphate ion is always surrounded by three ions like $\ce{H+}$ or $\ce{Na+}$. This is why phosphate ions may be obtained by dissolving neutral substances like $\ce{Na3PO4}$ or $\ce{H3PO4}$ in water. They will never exist alone.
{ "domain": "chemistry.stackexchange", "id": 14474, "tags": "electrons, lewis-structure" }
Language Recognition Devices and Language Generators
Question: I have a few CS textbooks with me which discuss languages — well, actually 2, plus old course notes supplied a few years ago. I have been searching the web too and only seem to come up with vague responses, just like the textbooks I have. My question is about language recognizers versus generators. I get the overriding principle of a recognizer: it analyses a language and is able to determine nay or yay if a string belongs to a language. This is at least what I have picked up from the books and notes. However, it's much more complex than that, is it not? A tokeniser and syntax analyzer (which I assume to be recognizers) do not just say yes or no; they also say where and sometimes what, don't they...? However, regarding language generators, I cannot find a very clear explanation of what they are. The typical description I get is, for example in Sebesta's Concepts of Programming Languages: 'A language generator is a device that can be used to generate the sentences of a language. We can think of a generator as a push button that produces a sentence of a language every time it is pressed.' Seriously? That's it?? You're kidding, right.... I read that Regex is an example of a generator; then why, when people talk of generators, do they not talk of the inputs? For example, Regex has a target string, and the Regex itself defines both the accepted alphabet and its grammar rules. Can someone provide for me a clearer distinction of what a recognizer is? Answer: I assume that, by definition, a language is a set of finite strings. The first distinction to be made is between operational and denotational concepts. A denotational definition of a language is a precise specification of the language through various (a priori) non-computational mathematical devices such as, for example: a specification with a property of strings that characterises the strings of the language.
$\{ x^n \mid n$ is a prime integer$\}$
- an algebraic characterisation from other languages and strings: $L= (L_1 \cup L_2).abac$ where $L_1$ and $L_2$ are languages already defined. Note that $abac$ may also be viewed as a language with a single string. Regular expressions are an example of such a definition, when they are interpreted as defining regular languages. (Discussion of their pragmatic use for string matching in programming languages, under the name of Regex or Regexp, would require a specific presentation.)
- a system of language equations, of which the relevant language is a solution with specific properties, for example the smallest solution, if that is well defined. An example of such an equation is $A = A B a$ where $a$ is a symbol, and $A, B$ are variables ranging over languages over an alphabet containing $a$. A pair of languages $(\hat A,\hat B)$ is a solution if $\hat A$ is equal to the concatenation of itself with $\hat B$ and a final symbol $a$.
- any mix of the above

Then operational concepts give you ways of answering algorithmic questions about the language or performing algorithmic tasks related to the language. Among these operational concepts, two are classical ones:
- if you are given a string, can you decide whether it is in the language or not? Such an algorithm providing a decision procedure is a recognizer.
- can you enumerate all the strings in the language? If such an algorithm exists, the language is said to be recursively enumerable. There are many other names. Such an enumeration algorithm is a generator of the language.

A generator provides a semi-decision procedure that can tell you whether a string is in the language, but may not terminate if it is not in the language. A recognizer can always be used to obtain a generator (assuming your alphabet is denumerable, which is a general assumption in operational definitions). You just enumerate all the strings on the alphabet, and use the recognizer to determine whether each is in the language.
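The recognizer-to-generator construction just described can be sketched in a few lines. A minimal example (the language $\{a^n b^n : n \ge 0\}$ and the alphabet are arbitrary illustrative choices):

```python
from itertools import count, product

def recognizer(s):
    """Decide membership in L = { a^n b^n : n >= 0 }."""
    n = len(s) // 2
    return s == "a" * n + "b" * n

def all_strings(alphabet):
    """Enumerate every string over the alphabet, in order of length."""
    for n in count(0):
        for tup in product(alphabet, repeat=n):
            yield "".join(tup)

def generator(recognize, alphabet):
    """Turn a recognizer into a generator by filtering the full enumeration."""
    return (s for s in all_strings(alphabet) if recognize(s))

gen = generator(recognizer, "ab")
first_three = [next(gen) for _ in range(3)]  # ['', 'ab', 'aabb']
```

Note the asymmetry the answer describes: the generator built this way will happily keep producing members forever, but asking it whether a given string is absent from the language never terminates.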
Both a generator and a recognizer may be used to define a language, operationally. It is often the case that several "views" or "definitions" of the same language have to be used. It is then essential to prove the consistency of the different views, to establish that they do define the same language. Very often, general theorems will provide that. For example, there are theorems that establish that some context-free recognizer building techniques (e.g., LR(k)) are consistent with the generator view of the CF grammars. The same formalism may sometimes be read as fitting any of the above concepts. For example, a context-free grammar defines the language as the smallest solution to a system of language equations. It may also be read as a string rewriting system that can generate the language. And it can be used in a fairly direct way to decide whether a string belongs to the language (without building any specific pushdown machine). Note that there are many other things one may want to do with a language, such as associating a structure with strings (e.g., a parse tree or derivation tree, which need not be the same, depending on the kind of grammar considered). One may also want to associate semantics with the strings. But that is yet another type of problem, which may be very dependent on the kind of language considered. Note following a remark by user @Vor: It seems that the concept of recognition, and probably of recognizer, is not the same in various sub-communities of the field. Two articles of Wikipedia seem to have differing views on this: Formal grammar a grammar is usually thought of as a language generator. However, it can also sometimes be used as the basis for a "recognizer"—a function in computing that determines whether a given string belongs to the language or is grammatically incorrect.
Recursively enumerable language In mathematics, logic and computer science, a formal language is called recursively enumerable (also recognizable, partially decidable, semidecidable or Turing-acceptable) if it is a recursively enumerable subset in the set of all possible words over the alphabet of the language, i.e., if there exists a Turing machine which will enumerate all valid strings of the language. It is not exactly the same use of the concept of recognition, applied in one case to a string and in the other to the language. Nevertheless, it seems that the terminology is somewhat inconsistent.
{ "domain": "cs.stackexchange", "id": 2487, "tags": "formal-languages, terminology, automata" }
Implement a square root of SWAP gate using qiskit
Question: Does the √SWAP gate have a ready-to-use function in the Qiskit circuit library? If not, how can it be implemented? Answer: I don't think the √SWAP gate has a ready-to-use function in Qiskit. But it is easy to implement. The √SWAP unitary is: $$\sqrt{swap} = \left( {\begin{array}{*{20}{c}} 1&0&0&0 \\ 0&\frac{1}{2}(1 + i)&\frac{1}{2}(1 - i)&0 \\ 0&\frac{1}{2}(1 - i)&\frac{1}{2}(1 + i)&0 \\ 0&0&0&1 \end{array}} \right)$$ It is equivalent to: So, it can be implemented as follows:

from qiskit import QuantumCircuit

circ = QuantumCircuit(2)
circ.cx(0, 1)
circ.csx(1, 0)
circ.cx(0, 1)

To verify:

from qiskit.quantum_info.operators import Operator
from qiskit.visualization import array_to_latex

op = Operator(circ)
array_to_latex(op)

And to use it as a gate:

sr_swap = circ.to_gate(label='√SWAP')
qc = QuantumCircuit(2)
qc.append(sr_swap, [0, 1])
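The CX-CSX-CX decomposition can also be checked without Qiskit by multiplying the three gate matrices directly. A self-contained sketch (matrices written in Qiskit's little-endian basis $|q_1 q_0\rangle$; since √SWAP is symmetric under qubit exchange, the ordering convention does not change the target matrix):

```python
# Multiply CX(0,1) · CSX(1,0) · CX(0,1) by hand and compare against the
# sqrt(SWAP) matrix quoted in the answer.
a, b = (1 + 1j) / 2, (1 - 1j) / 2

def matmul(A, B):
    """4x4 complex matrix product."""
    return [[sum(A[r][k] * B[k][c] for k in range(4)) for c in range(4)] for r in range(4)]

# CX with control q0, target q1: swaps basis states |01> and |11> (indices 1 and 3)
CX = [[1, 0, 0, 0], [0, 0, 0, 1], [0, 0, 1, 0], [0, 1, 0, 0]]
# controlled-sqrt(X) with control q1, target q0: sqrt(X) acts on the q1 = 1 block
CSX = [[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, a, b], [0, 0, b, a]]

U = matmul(CX, matmul(CSX, CX))
SQRT_SWAP = [[1, 0, 0, 0], [0, a, b, 0], [0, b, a, 0], [0, 0, 0, 1]]
matches = all(abs(U[r][c] - SQRT_SWAP[r][c]) < 1e-12 for r in range(4) for c in range(4))
```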
{ "domain": "quantumcomputing.stackexchange", "id": 3225, "tags": "quantum-gate, circuit-construction" }
Bridge full wave rectifier: smoother output signal
Question: This is a full wave bridge rectifier: The waveform of the full wave bridge rectifier is this: But I want a smoother signal, one less reminiscent of AC. If I put an inductor in series with the resistor instead of only a resistor, will it work? Answer: If I put an inductor in series with the resistor instead of only a resistor, will it work? While it's not entirely clear from your question, I believe you propose to place an inductor in series with the load $R_L$ (and, from your comment, you're not interested in using a shunt capacitor for filtering). A series inductor (choke) would, in principle, act to reduce the ripple voltage across the load. In practice, however, it may not be practical; e.g., the inductance required might be impractically large. Questions about how to calculate the required inductance should, I think, be asked at Electrical Engineering. You might find the following helpful: Rectifiers, Clippers, and Clampers
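To see why the required inductance can get impractically large, treat the choke and load as a voltage divider at the dominant ripple frequency (twice the mains frequency for a full-wave rectifier). A rough sketch, assuming 50 Hz mains and arbitrary example component values:

```python
import math

def ripple_surviving(r_load, inductance, f_ripple):
    """Fraction of the ripple voltage surviving across the load when a
    series inductor feeds the load resistor: |H| = R / sqrt(R^2 + (2*pi*f*L)^2)."""
    xl = 2 * math.pi * f_ripple * inductance
    return r_load / math.sqrt(r_load**2 + xl**2)

# full-wave rectified 50 Hz mains ripples mainly at 100 Hz
ripple_surviving(100, 1.0, 100)   # ≈ 0.16: a 1 H choke still leaves ~16% of the ripple
ripple_surviving(100, 10.0, 100)  # ≈ 0.016: a tenfold reduction needs a bulky 10 H choke
```

This is only the fundamental ripple component; the rectified waveform's higher harmonics are attenuated more strongly, but the overall conclusion (large L for modest smoothing) stands.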
{ "domain": "physics.stackexchange", "id": 69333, "tags": "electromagnetism, electricity, electric-circuits, electric-current" }
Visualize the runtimes of list algorithms
Question: I wrote a small script to visualize the runtimes of list algorithms. The script works by creating a list of lists with sampled lengths from 1 to max_size. Next, I map the given algorithm over each list element and replace the list with the resulting runtime. The runtimes of each list are plotted against the length of each list to generate a runtime plot. My questions are: Does my code deviate from best practices? Could my code get a performance increase by using co-routines or some other type of concurrency (like asyncio)?

from time import time
from random import randrange
from concurrent.futures import ThreadPoolExecutor
from functools import partial
from matplotlib import pyplot as plt

def func_timer(func, *params, verbose=False):
    """Takes in a function and some parameters to the function and returns the execution time"""
    start = time()
    func(*params)
    t = time() - start
    if verbose:
        print('function {func_name} took {time} seconds to complete given parameters'.format(
            func_name=func.__name__, time=t))
    return t

def list_generator(max_size, num_samples, n=10, upper_num=100):
    """Generates random integer lists with (sampled) lengths from range 1 to max_size.
    The difference between lengths (length spacing) of each list is delta, where delta
    is defined as the floor division of max_size and num_samples.

    max_size: int
        The max length of a generated list
    num_samples: int
        The number of lists to generate

    returns: lst_of_rand_lsts: list of lists
    """
    def get_rand_list(n):
        """Returns a list of random numbers."""
        return [randrange(upper_num) for x in range(n)]

    assert max_size > num_samples
    delta = max_size // num_samples  # uses the floor function to get difference between each sample.
    lst_lengths = [delta*x for x in range(1, num_samples + 1)]  # Shift range by 1 to avoid making an empty list.
    return (get_rand_list(x) for x in lst_lengths)

def runtime_generator(func, lists):
    """Maps func over each list in a list of lists and replaces each list element with the function runtime."""
    partial_func_timer = partial(func_timer, func)
    with ThreadPoolExecutor(max_workers=5) as executor:
        res = executor.map(partial_func_timer, lists)

def rtviz(func, *args, max_size=1000, num_samples=500, viz=True, verbose=True):
    """Takes in a function that receives an iterable as input. Returns a plot of the
    runtimes over iterables of random integers of increasing length.

    func: a function that acts on an iterable"""
    def subplot_generator(lst_lengths, func_times_lst):
        plt.plot(lst_lengths, func_times_lst, 'bo')
        plt.xlabel('length of random lists')
        plt.ylabel('function runtime (sec)')
        plt.title('Runtime of function {}'.format(func.__name__))
        return

    lst_of_lsts = list(list_generator(max_size, num_samples))
    lsts_len = [len(elem) for elem in lst_of_lsts]
    start = time()
    runtime_gen = runtime_generator(func, lst_of_lsts)
    t = time() - start
    if verbose == True:
        print('%s took %0.3fms.' % (func.__name__, t*1000.))
    subplot_generator(lsts_len, list(runtime_gen))
    if viz == True:
        plt.show()
    return

if __name__ == "__main__":
    rtviz(sorted)

Answer: An even more vital question than whether it deviates from best practices is whether it actually does what you intend for it to do! Does it accurately time execution of the function you are testing? I believe not, for the following reasons: Using time on a single run is at best inaccurate. Any code, when run only once, is kind of a random indicator as to how long it'll take on average. The shorter the time it takes to execute, the more repeats you should have. You might also run into issues related to the current platform you are on, and imprecision in the time.time() call, as the documentation only promises a precision of a second. You're setting up the time testing method within the actual time taking.
You're timing on the outside of the runtime_generator, and do functools.partial and ThreadPoolExecutor within that function. In the best situations this only skews all of your data; in a more likely situation your method disappears in the time it takes to set up the test run. This effect will be worse the faster your tested function is. Let's say that test setup takes 100 ms, and your method 1 ms for 1 item, and 10 ms for 1000 items. Your total time will vary from 101 ms to 110 ms, so you'll likely conclude that it is almost constant time, as 101 ms ~ 110 ms, but the real situation is that it has increased tenfold, 1 ms ~ 10 ms. Best practice in Python seems to be to use the timeit module for timing executions. This module can set up your runs, execute them multiple times, and eliminate some of these caveats. Although you really gotta keep your head straight even when using this module, as timing execution depends on a lot of sub-parameters like: operating system, Python interpreter, other load on the testing platform, memory issues, code issues, side effects of memoisation or other programming paradigms, and the list goes on.
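A minimal timeit sketch of the kind recommended above (timing sorted on a 1000-element random list; the sizes and repeat counts are arbitrary examples):

```python
import timeit

# setup runs once per repeat and is excluded from the measured time
setup = "from random import randrange; data = [randrange(100) for _ in range(1000)]"

# run the statement 1000 times, repeat the whole measurement 5 times,
# and keep the best repeat: the least noise-contaminated estimate
times = timeit.repeat("sorted(data)", setup=setup, repeat=5, number=1000)
per_call = min(times) / 1000  # seconds per sorted() call
```

Taking the minimum of the repeats, rather than the mean, is the usual convention: slower runs reflect interference from the rest of the system, not the code being measured.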
{ "domain": "codereview.stackexchange", "id": 16710, "tags": "python, performance, functional-programming, concurrency, benchmarking" }
Why is it that all metals do not become superconductors by lowering the ambient temperature?
Question: Aluminium becomes a superconductor at a temperature below $1.91$K. But I am quite certain that not all metals exhibit superconductivity even when the temperature is lowered to a nanokelvin or below. Why do not all metals become superconductors? Answer: This is actually an unsolved and very interesting problem. Consider for example the case of the three best known conventional conductors: gold, silver, and copper. None of them have been shown to superconduct. This is one of the reasons for the controversy surrounding this paper. A few potential reasons are:
- Within a BCS framework, the phonon-electron coupling is too weak to lead to a discernible $T_c$.
- The Fermi surface is too complex/asymmetric and leads to strong mixing of the Landau parameters. So a strong onsite Coulomb repulsion (which leads to strong repulsion in the s-wave channel) would affect all angular momentum channels, and suppress non-BCS mechanisms like Kohn-Luttinger.
- (not applicable to the aforementioned elemental metals) The metal is too dilute, which leads to a small density of states at the Fermi level, thereby suppressing superconductivity.
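The first reason (coupling too weak for a discernible $T_c$) can be made concrete with the weak-coupling BCS estimate $T_c \approx 1.13\,\Theta_D\, e^{-1/\lambda}$, where $\Theta_D$ is the Debye temperature and $\lambda$ the dimensionless electron-phonon coupling. A sketch with illustrative numbers, not fitted to any particular metal:

```python
import math

def bcs_tc(theta_debye_K, lam):
    """Weak-coupling BCS estimate T_c ≈ 1.13 * Θ_D * exp(-1/λ), in kelvin."""
    return 1.13 * theta_debye_K * math.exp(-1.0 / lam)

bcs_tc(300, 0.25)   # ≈ 6 K: modest coupling, easily measurable
bcs_tc(300, 0.05)   # ≈ 7e-7 K: weak coupling pushes T_c below anything measurable
```

The exponential dependence on $-1/\lambda$ is the point: a coupling a few times weaker does not lower $T_c$ by a few times, it lowers it by many orders of magnitude, so "no observed superconductivity" and "$T_c$ is astronomically small" can be experimentally indistinguishable.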
{ "domain": "physics.stackexchange", "id": 52625, "tags": "condensed-matter, solid-state-physics, superconductivity, phase-transition" }
Is the ACh receptor more permeable to sodium ions?
Question: The AChR is permeable to sodium and potassium ions only and has a reversal potential of 0 mV. However, the Nernst potentials for sodium and potassium ions are ~ +60mV and -88mV respectively. Taking a simple average of these two potentials would give an expected AChR reversal potential of -14mV; but that is not what we see. My question is whether the reversal potential is pulled up to zero because of a greater conductivity to sodium than potassium ions, and if so, how the AChR selectivity filter achieves this. Thanks! Answer: Short answer: No. There are differences in Na+ versus K+ permeability, but you have it backward: potassium is actually slightly more permeable; however, these differences are not the only factors influencing the reversal potential, and also, the reversal potential for a nAChR need not be exactly 0 mV. tl;dr of the long answer: You need to sum before taking ratios, rather than take a difference of ratios, when computing a reversal potential for multiple ions together. Before we get into the weeds... In most sources you will hear something like the reversal potential for a nonspecific ligand-gated channel like nAChR is "about 0 mV" - this is going to vary in different cell types, in different conditions, with different recent activity levels, with different versions of the receptor, etc. What's important to remember in all this is that -14 mV is also about 0 mV... In the context of excitable cells, what is most important is the reversal potential relative to the threshold for firing a spike or activating other channels. That said, I'm not actually saying the reversal potential for nAChR is actually -14 mV; there are a couple of other factors missing... What you are missing: The big one you are missing is that the concentrations of the ions matter, not just their ratios. For the reversal potential of a single ion, all we really care about is the ratio inside and out, which is found in the Nernst equation.
Averaging the reversal potential for two major ion species, like you have done, gives you a pretty good ballpark estimate (after all, -14 mV is also about 0 mV!). To get the actual reversal, however, you'll need the Goldman equation. We can simplify the Goldman equation for just Na+ and K+ in fairly standard conditions to something like: $$61.5\ \mathrm{mV} \times \log_{10}\left(\frac{P_{Na} [Na^+_{out}] + P_{K} [K^+_{out}]}{P_{Na} [Na^+_{in}] + P_{K} [K^+_{in}]}\right)$$ If the permeabilities are both equal to 1, and the concentrations of Na and K are equal and opposite, then indeed you will get zero. However, if that were true, you would also have equal and opposite reversal potentials for Na and K alone in the cell: in the numbers you gave, that isn't the case. I'm not sure what numbers you used exactly, but I'll use my own example. If potassium is 5 mM out, 140 mM in, the reversal potential will be -85.6 mV for potassium. If sodium is 140 mM out, 12 mM in, the reversal potential will be 63.1 mV for sodium (you can check my math with the Nernst equation). If you combine these numbers in the Goldman equation above, you will get -1.2 mV: closer to zero than your -14 mV estimate. They are so close because the ratio (140+5)/(12+140) is a lot smaller than the difference of the ratios (5/140) and (140/12): order of operations matters. This is assuming identical permeabilities. However, the permeabilities aren't actually identical. There are many papers measuring relative permeabilities in nAChRs; I just chose one randomly as an example (it happened to be the first Google Scholar result). Nutter and Adams 1995 found the relative permeability of several monovalent and divalent cations, with Na as a reference. Therefore, potassium is actually slightly more permeable than sodium.
However, there are also other important ions that could contribute, even if they are found at lower concentrations than either sodium or potassium, and the example ion concentrations you find in a textbook might not apply in the systems you are studying. A source that says nAChRs are only permeable to sodium and potassium really just means that they are not permeable to another other important high-concentration ion: chloride. They are also simplifying things a bit for you, which is okay. If I use the Goldman equation quickly with these data, I get a reversal potential (with just Na and K, ignoring other ion species) of -5.9 mV: still "about 0 mV" and still closer than your original -14 mV estimate. But how are different ions with the same charge showing different permeabilities? This is better off as a separate question, and probably easier to address with a more extreme case, like in selective sodium or potassium channels. However, like in those channels, in channels that we call "non-specific", the particular arrangement of amino acid residues (especially charged ones) at the pore can favor particular ions because those ions differ slightly in size. Nutter, T. J., & Adams, D. J. (1995). Monovalent and divalent cation permeability and block of neuronal nicotinic receptor channels in rat parasympathetic ganglia. The Journal of general physiology, 105(6), 701-723.
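The Nernst and Goldman arithmetic above can be reproduced directly. A sketch using the 61.5 mV prefactor from the displayed equation and the example concentrations above (the answer's quoted figures use a slightly smaller, lower-temperature prefactor, so they differ by a few mV):

```python
import math

RTF = 61.5  # mV; (RT/F) * ln(10), temperature-dependent

def nernst(c_out, c_in, z=1):
    """Single-ion equilibrium potential, in mV."""
    return (RTF / z) * math.log10(c_out / c_in)

def goldman_na_k(na_out, na_in, k_out, k_in, p_na=1.0, p_k=1.0):
    """Reversal potential for a channel permeable to Na+ and K+ only:
    sum the permeability-weighted concentrations BEFORE taking the ratio."""
    num = p_na * na_out + p_k * k_out
    den = p_na * na_in + p_k * k_in
    return RTF * math.log10(num / den)

e_k = nernst(5, 140)                    # ≈ -89 mV
e_na = nernst(140, 12)                  # ≈ +66 mV
e_rev = goldman_na_k(140, 12, 5, 140)   # ≈ -1.3 mV: "about 0 mV"
```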
{ "domain": "biology.stackexchange", "id": 8734, "tags": "muscles, electrophysiology, neurotransmitter, protein-structure, electromuscular" }
Mathematical formalism to show that an atom casts a small shadow in the photon field that illuminates it
Question: This question regards the relationship between photon absorption and the spatial mode of light. In the question I have some physical intuition which I think I understand and which is borne out by experiment, sprinkled throughout. However, the mathematical formalism I have to tackle the question at hand seems to fall short of being able to describe the physical situation I am concerned with, and the formalism also raises causality issues for me. Because of all of this I spend most of the text in this post laying out the mathematical formalism as I understand it, in hopes of gaining further understanding of this formalism or being pointed towards a more sophisticated formalism which can address my concerns. Background In quantum optics the electric field can be quantized as $$ \hat{\boldsymbol{E}}(\boldsymbol{x}, t) = i\sqrt{\frac{\hbar}{2\epsilon_0 V}} \sum_{\boldsymbol{k}, s}\sqrt{\omega_{\boldsymbol{k}}}\left(\boldsymbol{f}_{\boldsymbol{k}, s}(\boldsymbol{x})\hat{a}_{\boldsymbol{k},s}(t) - \boldsymbol{f}_{\boldsymbol{k}, s}^*(\boldsymbol{x})\hat{a}_{\boldsymbol{k},s}^{\dagger}(t)\right) $$ Bold symbols represent vector quantities. This is an equation for the quantum electric field in space and time. We sum over all wavevectors $\boldsymbol{k}$ which have, by the Helmholtz equation, related temporal frequencies $\omega_{\boldsymbol{k}} = c|\boldsymbol{k}|$. $s$ is a polarization index and takes on the values 1 or 2. $\boldsymbol{f}_{\boldsymbol{k},s}(\boldsymbol{x})$ is a dimensionless vector-valued spatial mode function which is determined by the boundary conditions*. For example, commonly, if we consider quantization in a box of volume $V$ the mode functions are given by $$ \boldsymbol{f}_{\boldsymbol{k}, s}(\boldsymbol{x}) = \boldsymbol{\epsilon}_{\boldsymbol{k},s} e^{i \boldsymbol{k}\cdot\boldsymbol{x}} $$ Here $\boldsymbol{\epsilon}_{\boldsymbol{k},s}$ is the polarization vector.
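As a quick numerical sanity check of the plane-wave modes just defined, one can verify their orthogonality on a discretized 1D interval of length $L$ with allowed wavevectors $k = 2\pi n / L$ (a sketch; the box size, grid resolution, and mode indices are arbitrary choices):

```python
import cmath

L, N = 1.0, 1000
dx = L / N

def mode(n):
    """Sampled plane-wave mode f_k(x) = exp(i k x) with k = 2*pi*n/L."""
    k = 2 * cmath.pi * n / L
    return [cmath.exp(1j * k * m * dx) for m in range(N)]

def overlap(f, g):
    """Riemann-sum approximation of the integral of f * conj(g) over the box."""
    return sum(u * v.conjugate() for u, v in zip(f, g)) * dx

same = overlap(mode(3), mode(3))  # equals the "mode volume" L for identical modes
diff = overlap(mode(3), mode(5))  # vanishes for distinct allowed modes
```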
Note that this is only one possible choice for the complete set of modes arising from solving the Helmholtz equation. The $\boldsymbol{f}_{\boldsymbol{k},s}(\boldsymbol{x})$ could also be, for example, Hermite-Gaussian or Laguerre-Gaussian modes, as may be helpful to consider for this problem. The mode volume or quantization volume is related to the spatial modes by** $$ \int d\boldsymbol{x}\boldsymbol{f}_{\boldsymbol{k}, s}(\boldsymbol{x})\cdot\boldsymbol{f}_{\boldsymbol{k}',s'}^*(\boldsymbol{x}) = \delta_{\boldsymbol{k}\boldsymbol{k}'}\delta_{ss'}V $$ The $\hat{a}_{\boldsymbol{k},s}(t)$ and $\hat{a}^{\dagger}_{\boldsymbol{k},s}(t)$ are the bosonic, photonic annihilation and creation operators. These operators are related to the number of photons occupying a single mode. We see that the quantum statistical properties of $\hat{\boldsymbol{E}}$ depend on the quantum statistical properties of the $\hat{a}_{\boldsymbol{k},s}$. If we remove the hats from this expression we can see that the $a_{\boldsymbol{k},s}(t)$ are time-dependent coefficients of the spatial mode decomposition of the electric field. Putting the hats back in, we see that these mode coefficients, $\hat{a}_{\boldsymbol{k},s}(t)$, are now quantum random variables rather than fixed amplitudes. Shining Laser on a Screen First a thought experiment. Suppose we have a light source which outputs, say, a Gaussian beam*** which is focused down to a spot size $w_0$ at a certain location. Suppose we are able to arbitrarily tune the power of this source. Suppose for the sake of argument that it outputs coherent states of light. In one mode (high power) the output can be tuned so that the coherent state flux is composed of many, many photons per second (like in a usual laser that we think of), or in another mode (low power) it can be tuned so that the output is less than one photon per second. In one experiment we put a screen at the location of the focus and shine the laser beam at high power onto the screen.
We will of course see a spot on the screen with a Gaussian shape. In another experiment we put the screen at the same location as the focus, but we now turn the laser down to low power. Now if we look at the screen we will not see a brightly illuminated spot. What we will see, as time goes on, is little**** spots appearing on the screen one at a time (the temporal spacing between the appearance of spots will be statistical but related to the photon flux). If we keep track of all of the spots that we see, then over time the distribution of the spots will look exactly like the Gaussian spot we had for high power. This sort of story is familiar to those who know about Young's double slit experiment. Now imagine we put a little disk in front of the screen, say a few optical wavelengths in front of the screen. In the high power case we will just see a shadow of the disk. In the low power case we will see the shadow of the disk when we look at the distribution of bright spots. Shadow of a Single Atom Now imagine that instead of a disk in front of the screen we place a single atom which has a transition resonant with the frequency of the laser beam. The atom can absorb a little bit of light and thus cast a shadow. The question sort of goes like this: 1) What does the shadow look like? Actually I know the answer to this question thanks to Absorption Imaging of Single Atom. The answer is that a small shadow of size $\approx \lambda \approx 1\text{ $\mu$m}$ will appear on the screen. Note that $w_0\gg \lambda$. 2) My question is how to describe this in the formalism laid out in the background section?
We can consider the (dipole)***** coupling between an atom and light of the form $H = -\boldsymbol{E}\cdot \boldsymbol{d}$ and we will see something like \begin{align} \hat{H}_{AF} = \sum_{\boldsymbol{k},s} \hbar g_{\boldsymbol{k},s} \hat{\sigma}^{\dagger}\hat{a}_{\boldsymbol{k},s} + \hbar g_{\boldsymbol{k},s}^*\hat{\sigma} \hat{a}_{\boldsymbol{k},s}^{\dagger} \end{align} Here $\hat{\sigma} = |G\rangle\langle E|$ is the atomic lowering operator which takes the atom from the excited to the ground state. The coupling operator for each mode is given by \begin{align} g_{\boldsymbol{k},s} = \sqrt{\frac{\omega}{2\hbar \epsilon_0 V}}d^{GE}_{\boldsymbol{k},s} \end{align} Here \begin{align} d^{GE}_{\boldsymbol{k},s} = \langle G|e\boldsymbol{x}\cdot \boldsymbol{\epsilon}_{\boldsymbol{k},s}|E\rangle \end{align} $e$ is the electron charge. Note that if we consider, for example, an $s\rightarrow p$ atomic transition there are actually multiple excited states, which makes the coupling of the atom to the different optical modes isotropic. That is, the total coupling is the same for light coming from all directions. My thinking would be that the answer to how the shadow is formed is that the atom preferentially absorbs modes with certain wavevectors but not others. As a result, the mode decomposition for light "after" the atom is different from the decomposition "before" the atom. This means the optical field will look different, i.e. it can have a shadow in it. However, the fact that the coupling is isotropic seems to put a wrench in this hope. The question itself A) If the coupling of light to all spatial modes is the same then wouldn't the effect of the atom on the field be to suppress the transmitted amplitude of the ENTIRE optical pattern by the same amount? Thus dimming the whole pattern rather than creating a shadow?
B) Of course, if the proposition in A is correct (I don't think it is, especially given the cited reference above) then there seem to be some serious locality issues. How can the presence of the atom in the center of the Gaussian beam affect the transmitted intensity near the edge of the beam when they are separated by many many wavelengths? C) This sort of raises a general question for me about the locality of atom-light interactions. Viewed in this way $\hat{a}_{\boldsymbol{k},s}$ is the quantum amplitude of an entire extended, non-local spatial mode with spatial pattern $\boldsymbol{f}_{\boldsymbol{k},s}(\boldsymbol{x})$. If one photon is emitted into or absorbed from this field by the atom then it seems like the atom is doing something highly non-local in this mathematical description. That is, the atom occupies a very very small subwavelength volume of the field but in this mathematical description it can affect the amplitude of the field millions of wavelengths away instantaneously by absorbing or emitting a photon. Is there a more sophisticated mathematical formalism for treating this physical situation that would clarify these issues? Footnotes *Boundary conditions are assumed to be finite, like a large but finite box. I don't know exactly how to treat what I am asking in the case of infinite space and I think that this might be implicated in the answer to my question. **Note that other normalizations for mode volume are possible but this is the one that I take. Note that in this setup all modes have the same mode volume. ***For what follows, even though the light is a Gaussian mode I will consider $\boldsymbol{f}_{\boldsymbol{k},s}(\boldsymbol{x})$ to be plane waves. This means that the optical field coming out of the laser is actually composed of many plane wave modes with different wavevectors. That is, the field is in a (quantum) superposition of occupying many different modes. ****How little actually?
I guess in principle as little as whatever is absorbing or scattering the light on the screen, so perhaps atomic scale; though because of the diffraction limit the spots would appear upon imaging to be about the size of an optical wavelength, $\lambda$. *****I wonder if part of the answer to my question has to do with higher-order multipole coupling terms? I don't think so. We can suppose there are no nearby transitions with the appropriate selection rules so that these higher-order couplings play no role. Answer: Upon reading the question carefully, I believe that the problems of the OP have nothing to do with the quantum nature of the interaction, but simply with the understanding of how modes work. To see this, let us simply write the interaction term in a different form which is in fact also mentioned in the question. Putting in the relevant functional dependences $$ \hat{H}_{AF} = -\hat{\mathbf{E}}(\mathbf{r_a}, t) \cdot \hat{\mathbf{d}}, $$ where $\mathbf{r}_a$ is the position of the atom. This interaction is the starting point for deriving the modal picture which is given by the OP. It comes from the minimal coupling prescription and involves for example the dipole approximation and fixing the gauge appropriately. So let us look at this problem on a conceptual level. What we have is an electric field operator (an operator valued function of space and time) which is coupled to the atom. The field operator is governed by the operator version of Maxwell's equations. The atomic operators are governed by the standard Hamiltonian for whatever level structure you have in the atom. The Hamiltonian makes these two operator evolution equations coupled. Your task is now to start with a certain initial condition for the electric field operator (or density matrix) and solve these evolution equations.
By this we can at least answer C) Answer to C): There is nothing non-local here; the coupling to the electric field is only at the position of the atom (this assumes the dipole approximation of course). Solving these operator equations is of course difficult. But as far as I understand, the question is about conceptual issues, not about how to solve this problem in a certain context. This makes it clear that the only problem is the mode decomposition. Let's work backwards and first look at question B): B) How can the presence of the atom in the center of the Gaussian beam affect the transmitted intensity near the edge of the beam when they are separated by many many wavelengths? The answer is simple: light couples to the atom, which causes a local change of the quantum field, which then propagates according to the propagation equations. Nothing difficult here either. Here we can already see why this changes in the modal picture. The modes themselves are a non-local basis in some sense. That is, you do not work in position space. If you want to describe how a localized field behaves, you therefore have to look at superpositions and cannot consider the modes individually. This prepares us for A): A) If the coupling of light to all spatial modes is the same then wouldn't the effect of the atom on the field be to suppress the transmitted amplitude of the ENTIRE optical pattern by the same amount? Thus dimming the whole pattern rather than creating a shadow? Well, the coupling constant may be the same, but the population of each of the modes is not. If you are looking at linear scattering, you can simply imagine replacing the atom with a little refractive sphere, which is entirely equivalent for linear scattering. What would happen then is exactly the classical intuition which the OP described by the examples in the question, just that the atom is refractive instead of a fully absorbing material.
{ "domain": "physics.stackexchange", "id": 62614, "tags": "optics, photons, atomic-physics, quantum-electrodynamics, cavity-qed" }
What is the standing genetic variation?
Question: I am reading this review. In the first part, the author introduces Standing Genetic Variation, described as: STANDING GENETIC VARIATION Allelic variation that is currently segregating within a population; as opposed to alleles that appear by new mutation events Does it mean that in adaptation the allelic variation is already present (not created by mutations) and in adaptation it prevails over the others? Thanks Answer: Standing genetic variation is when there is more than one allele at a locus in the population at the time-point in question. When an allele goes to fixation there is no standing genetic variation at the locus until new mutations occur. Loci where alleles are not fixed are described as having standing genetic variation. "Standing genetic variation: the presence of more than one allele at a locus in a population." from Barrett and Schluter 2007
{ "domain": "biology.stackexchange", "id": 1962, "tags": "genetics, mutations, adaptation" }
Join large list of pairs
Question: I have a list of millions of pairs of strings, and I want to join all of the pairs that have matching members into lists without duplicates. Example input: [["A", "B"], ["A", "D"], ["M", "Q"], ["A", "F"], ["D", "E"], ["Q", "Z"]] Example output: [["A", "B", "D", "E", "F"], ["M", "Q", "Z"]] Does anyone know of an efficient algorithm for this? I'm somewhat constrained by memory. Anything that would square the memory from the input would not be an option. Answer: You can use a two-pass approach: In the first pass, identify all the different strings appearing in your input. (This can be done in various ways, e.g. hashing, trie, BST) For the second pass, initialize a Disjoint-set data structure with the strings found in the first pass and perform a union operation for each pair in the input.
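A minimal sketch of that two-pass approach in Python (the function name and the plain-dict parent table are illustrative choices, not from the answer; memory is linear in the number of distinct strings):

```python
def merge_pairs(pairs):
    # Pass 1: identify every distinct string; each starts as its own root.
    parent = {}
    for a, b in pairs:
        parent.setdefault(a, a)
        parent.setdefault(b, b)

    def find(x):
        # Walk to the root, compressing the path as we go.
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    # Pass 2: union the two members of each pair.
    for a, b in pairs:
        ra, rb = find(a), find(b)
        if ra != rb:
            parent[ra] = rb

    # Collect members under their final root.
    groups = {}
    for x in parent:
        groups.setdefault(find(x), []).append(x)
    return [sorted(g) for g in groups.values()]

pairs = [["A", "B"], ["A", "D"], ["M", "Q"], ["A", "F"], ["D", "E"], ["Q", "Z"]]
merged = merge_pairs(pairs)
# sorted(merged) -> [["A", "B", "D", "E", "F"], ["M", "Q", "Z"]]
```

For millions of pairs you would also want union by size or rank, which keeps the trees shallow and the total cost effectively linear.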
{ "domain": "cs.stackexchange", "id": 2967, "tags": "algorithms, data-structures, lists" }
differences between camera plugins in GAZEBO
Question: Hello all. I am looking to install a camera sensor on my robot in a GAZEBO simulation. I encountered several different plugins: gazebo_ros_depth_camera gazebo_ros_prosilica gazebo_ros_camera Is there a difference between these plugins? Also, I would be happy to know where I can find some documentation about the plugins (there is nothing inside the packages...). Thanks. Originally posted by dmeltz on ROS Answers with karma: 192 on 2012-09-12 Post score: 2 Answer: The GazeboRosProsilica plugin strives to provide ROS topic and service interfaces similar to those provided by the Prosilica Camera hardware on PR2. And as trinighost mentioned, the GazeboRosCamera and GazeboRosDepthCamera plugins provide ROS interfaces similar to those offered by the wge100 camera. The only difference between the two is that GazeboRosCamera maps to the Gazebo Camera Sensor, whereas GazeboRosDepthCamera maps to the Gazebo Depth Camera Sensor. GazeboRosDepthCamera is able to publish a 3D pointcloud using the camera's z-buffer. Recently, GazeboRosDepthCamera has evolved into (been duplicated by) GazeboRosOpenniKinect (replicating the ROS interface offered by openni_camera), and I am looking to retire one of the two in a way that hopefully will not break anyone depending on either one of these plugins. Please let me know if this is the information you were looking for... I'd like to update the documentation on the ROS wiki with more detail; your input is very valuable, thanks! Originally posted by hsu with karma: 5780 on 2012-09-12 This answer was ACCEPTED on the original site Post score: 4 Original comments Comment by Gazer on 2013-07-04: Hi. I am looking to simulate swissranger_camera in Gazebo; I found that the previous model used "gazebo block laser" as a camera plugin. We will need pointcloud2 as an input source for PCL_Ros, which produces the image. But the "block laser plugin" only publishes pointcloud1. I am wondering Comment by Gazer on 2013-07-04: will ROSDepthCamera suit our needs?
Comment by tasneem2000 on 2023-06-17: I used GazeboRosOpenniKinect in my URDF to generate a 3D pointcloud, but the z values are always either 4.0 or 0.0. This means it cannot see all z levels, right? If yes, please recommend a way to generate a 3D pointcloud to determine the heights of the obstacles.
{ "domain": "robotics.stackexchange", "id": 10991, "tags": "camera" }
$(\frac{1}{2},\frac{1}{2})$ representation of $SU(2)\otimes SU(2)$
Question: The representation $(\frac{1}{2},\frac{1}{2})$ of the Lorentz group corresponds to a four-vector or a spin-one object. Right? Does it imply that any four-vector is identical to a spin-one object or any scalar is identical to a spin-0 object? This can't be correct, right? Because although $A^\mu$ is a four-vector and a spin-one object at the same time (which is the photon), there is no concept of spin associated with $p^\mu$ or $J^\mu$. I'm confused by the terminology of representations. Edit- How can I show that $A^\mu$ represents a spin-1 object? Answer: The problem here is with the identification of the $(A,B)$ values of a representation with spin. $A$ and $B$ do not correspond to spin (they are not even Hermitian!), they just happen to obey $SU(2)$ Lie algebras, and as such they add up in the same way that spins do. When we say that $A_\mu,J_\mu,p_\mu,...$ are all in the $(\frac{1}{2},\frac{1}{2})$ representation of the Lorentz group we mean that they transform as a four-vector, that's all. People may get lazy and say they are spin 1 objects, but what they really mean is $(A,B)$ spin 1 objects.
{ "domain": "physics.stackexchange", "id": 11921, "tags": "quantum-field-theory, special-relativity, group-theory, group-representations, lorentz-symmetry" }
Does frame dependency of events result in entirely different worlds?
Question: Suppose there are two seeds kept at equal distances from a light source which emits one photon in each of two opposite directions. A seed germinates when a photon falls on it. According to the rest frame both seeds receive their photons simultaneously, but according to a moving frame one of the seeds receives a photon earlier than the other seed. The second seed dies off due to the delay, so there is only one plant in the moving frame but two plants in the rest frame. Is my line of reasoning correct? What happens when I bring the moving observer to rest so that he should agree with whatever observations were made by the observer who was at rest? Answer: No, your reasoning is based on a misunderstanding. There is only one reality. Observers in two different frames will assign different time coordinates to exactly the same events. In every frame, the two seeds will germinate and survive, but one seed might germinate at an earlier or later time coordinate than the other, depending on the direction of motion of the frame. You will find, if you think about it, that your supposition that one seed might die is based on the assumption that both seeds were planted at the same time. However, if one seed receives the photon later than the other in a given frame, it was also planted later than the other, so it goes without light for the same length of time as the other seed. If you want to understand relativity, it is essential that you anchor your thinking on the principle that an event is an event, so that if it happens in one frame it also happens in any other.
{ "domain": "physics.stackexchange", "id": 97812, "tags": "special-relativity, optics, inertial-frames, observers" }
Does sodium cyanide react with water, and can it cause an explosion?
Question: According to some conspiracy theory, the explosion that happened last year in Tianjin (China) wasn't caused by sodium cyanide. Can sodium cyanide react with water, and can it cause an explosion if you have 700 tons of it? Answer: Usually, I dislike conspiracy theories.1 However, this one (if it even is one) is true. Looking at the list presented on Wikipedia of the chemicals that were stored in that warehouse, why would you even suspect sodium cyanide as being a cause of the explosion? It is a stable ionic compound that can be put on the shelf, wet with water, dissolved in water, and won’t react with air with any measurable half-life. (Except that you possibly shouldn’t put it on a shelf because it is slightly volatile and highly poisonous.) What was also in the warehouse and is an explosive was ammonium nitrate. In the 1921 explosion in Oppau (now part of Ludwigshafen), BASF workers tried to use dynamite to loosen a plaster of ammonium nitrate (it is hygroscopic and baked together). Mixtures containing at least $40~\%$ ammonium sulphate were considered stable and the mixture in question was a $50~\%$ mixture of the two according to the production data. After approximately 20,000 uses of dynamite where nothing happened, the entire 4,500 tonne lot exploded, killing some 500 or 600 people and leaving thousands injured. The theory $40~\% = \text{safe}$ was later disproven in careful examinations. Apparently, the explosion could be heard in Munich, some $300~\mathrm{km}$ away. Another substance stored there that is dangerous is calcium carbide $\ce{CaC2}$. When calcium carbide gets wet, it produces acetylene gas (and calcium hydroxide). The acetylene is flammable and can also explode in certain acetylene–oxygen mixtures.2 Luckily, no acetylene or carbide explosions are known enough to make it onto Wikipedia.
So, far more likely than sodium cyanide having any primary effect would be the following chain of events: (1) a fire starts (something must have ignited, because the compounds I read there aren’t self-flammable); (2) the fire brigade attempts to extinguish the fire with water; (3) the water reacts with calcium carbide, liberating acetylene; (4) the acetylene burns; (5) the critical heat is reached and the ammonium nitrate explodes; (6) toxic sodium cyanide is liberated into the environment, causing further severe health risks for man and animal alike. 1: Okay, that was a lie. I love listening to them and making fun of them. 2: A teacher at school talked about his time at uni and mentioned a professor that let a balloon of acetylene explode — he made a fuse long enough to ignite it from outside of the lecture hall (nobody was allowed inside), had the doors closed and my teacher said ‘it was the loudest bang I ever heard.’
{ "domain": "chemistry.stackexchange", "id": 4860, "tags": "inorganic-chemistry, ionic-compounds, explosives" }
Could an internally tethered rocket cause external movement?
Question: Imagine a scenario where a large box is floating freely in space. There is a vacuum both inside and outside of this box. Also imagine that a dormant rocket is inside this box, but it is loosely tethered to all sides of the box. When the rocket fires and the tethers are pulled taut, will the box also move? Answer: Briefly, yes. The box would speed up momentarily, until the propellant from the rocket hits the back end of the box and slows it back down again. Once the propellant is exhausted, the box will again be motionless (neglecting the minuscule jiggling caused by thermal motion of the propellant gas inside). The controlling principle here is that the total momentum of the system is conserved; since in the reference frame you described it starts out at zero, it remains forever zero. A consequence of that is that the center of mass of your closed system cannot move. Thus, once your rocket runs out of propellant, even though the box might have shifted over to the right a bit, the propellant on average will have shifted over to the left a bit, so the overall center of mass remains in the same place. By the way, you might find it beneficial in thinking about this to eliminate the tether, and just affix the nose of the rocket to the right side of the box. It's a pretty similar situation, and conceptually simpler.
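A small numerical check of the center-of-mass argument (the 1-D masses and displacement below are made-up numbers, not from the answer):

```python
def center_of_mass(masses, positions):
    """Weighted average position of a collection of point masses."""
    return sum(m * x for m, x in zip(masses, positions)) / sum(masses)

# Hypothetical values: box 10 kg, rocket 2 kg, propellant 1 kg,
# everything initially at rest at x = 0.
m_box, m_rocket, m_prop = 10.0, 2.0, 1.0
com_before = center_of_mass([m_box, m_rocket, m_prop], [0.0, 0.0, 0.0])

# Suppose the box + rocket assembly ends up shifted right by d once the
# propellant is spent. Conservation of momentum (total zero) pins the
# center of mass in place, which tells us where the propellant must sit
# on average:
d = 0.05
x_prop = -(m_box + m_rocket) * d / m_prop

com_after = center_of_mass([m_box, m_rocket, m_prop], [d, d, x_prop])
# com_after equals com_before: the center of mass has not moved.
```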
{ "domain": "physics.stackexchange", "id": 24257, "tags": "forces" }
Pressure and Tall Buildings
Question: Does living in tall buildings somehow exert more pressure on our bodies? I remember reading something about this once but I'm not sure if it's true. Answer: Tall buildings are subject to the stack effect. Because the interior is held at a different temperature than outside (due to indoor temperature control), the vertical pressure gradient is different. Typically it is a reasonable assumption that the pressure inside the building is continuous from one floor to the next via the stairwell (if nothing else). Typically, it is also a reasonable assumption that the ground floor is the closest to maintaining equilibrium with the outdoor pressure because people obviously walk in and out. The result of the higher indoor temperature is that it loses less pressure with increasing altitude than the cold outdoor air. That means that the top floors of the building have a higher air pressure than the air outside the windows. If a window is opened, a large amount of air will flow. If allowed to continue, this will continue as a pump, moving air up the building from the bottom and out the top. This is natural circulation, and is powered by the heat input to the building air temperature control. The people who pay the utility bills will desire to minimize the flow as much as possible. However, going back to the assumption of equalized pressure on the ground floor, you are not at all at a higher pressure inside the building than before you entered. The pressure is lower the higher up you are. In fact, pressure might have a small drop as you go through the door itself since natural circulation is trying to suck air up.
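To put rough numbers on the stack effect, here is a sketch using the isothermal barometric formula for each air column, with made-up conditions (a 100 m building held at 21 °C inside while it is 0 °C outside, both columns equalized at ground level):

```python
import math

def pressure_at_height(p_ground, temp_kelvin, height_m):
    """Isothermal barometric formula: pressure in an air column of
    uniform temperature, anchored to p_ground at height zero."""
    molar_mass = 0.02896  # kg/mol, dry air
    g = 9.81              # m/s^2
    gas_const = 8.314     # J/(mol K)
    return p_ground * math.exp(
        -molar_mass * g * height_m / (gas_const * temp_kelvin)
    )

p_ground = 101325.0  # Pa, shared by both columns at the ground floor
p_top_inside = pressure_at_height(p_ground, 294.15, 100.0)   # 21 C column
p_top_outside = pressure_at_height(p_ground, 273.15, 100.0)  # 0 C column

excess = p_top_inside - p_top_outside
# The warm indoor column loses less pressure with altitude, so the top
# floor sits roughly 90 Pa above the outdoor air: a small difference,
# but enough to drive a strong flow when a window is opened.
```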
{ "domain": "physics.stackexchange", "id": 19701, "tags": "newtonian-gravity, pressure" }
rosinstall (electric) fails on Mac OS X 10.6.7
Question: Hello, I am trying to install ROS on Mac OS X Snow Leopard. I am getting this error message, when trying to install the ros base (or any other configuration) e.g. by executing rosinstall ~/ros "http://packages.ros.org/cgi-bin/gen_rosinstall.py?rosdistro=electric&variant=ros-base&overlay=no" Bootstrapping ROS build Detected ros_comm bootstrapping it too. Rospack failed to build Traceback (most recent call last): File "/opt/local/bin/rosinstall", line 5, in <module> pkg_resources.run_script('rosinstall==0.5.22', 'rosinstall') File "/opt/local/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/pkg_resources.py", line 499, in run_script self.require(requires)[0].run_script(script_name, ns) File "/opt/local/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/pkg_resources.py", line 1235, in run_script execfile(script_filename, namespace, namespace) File "/opt/local/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/rosinstall-0.5.22-py2.6.egg/EGG-INFO/scripts/rosinstall", line 679, in <module> sys.exit(not rosinstall_main(sys.argv)) File "/opt/local/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/rosinstall-0.5.22-py2.6.egg/EGG-INFO/scripts/rosinstall", line 670, in rosinstall_main subprocess.check_call("source %s && rosmake ros%s --rosdep-install%s" % (os.path.join(options.path, 'setup.sh'), ros_comm_insert, rosdep_yes_insert), shell=True, executable='/bin/bash') File "/opt/local/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/subprocess.py", line 488, in check_call raise CalledProcessError(retcode, cmd) subprocess.CalledProcessError: Command source /Users/shoefer/ros/setup.sh && rosmake ros ros_comm --rosdep-install returned non-zero exit status 255 I done everything according to the installation manual, I have installed all necessary packages via MacPorts, activated the MacPorts python 2.6 version etc. 
I have also checked that I have YAML and boost installed. Any help would be appreciated! Sebastian Originally posted by sebastianh on ROS Answers with karma: 70 on 2011-10-16 Post score: 0 Original comments Comment by Wim on 2011-10-17: I just followed the same instructions on Mac OS X 10.5.8, and I did not run into any problems. Answer: Ok I have found the source of the problem: rosmake was unable to build rospack and rosstack. When invoking make manually in /Users/shoefer/ros/ros/tools/rospack/ the following error occurs: macbook:/Users/shoefer/ros/ros/tools/rospack/ shoefer$ make /Users/shoefer/ros/ros/tools/rospack/rospack.cpp: In member function 'void rospack::ROSPack::crawl_for_packages(bool)': /Users/shoefer/ros/ros/tools/rospack/rospack.cpp:1933: error: 'PATH_MAX' was not declared in this scope /Users/shoefer/ros/ros/tools/rospack/rospack.cpp:1935: error: 'tmp_cache_dir' was not declared in this scope /Users/shoefer/ros/ros/tools/rospack/rospack.cpp:1945: error: 'tmp_cache_path' was not declared in this scope This is a problem of my gcc version which is gcc-mp-4.4 (installed via Macports). The default gcc shipped with XCode (on my machine it is gcc-4.2) does actually define PATH_MAX. So to be safe one should better use a different gcc version, but a quick hack actually also did the job, namely defining PATH_MAX manually in rospack.cpp and rosstack.cpp: #define PATH_MAX 256 Maybe it would be a good idea to check if PATH_MAX is defined to prevent other users from getting into the same trouble. Any comments on that? Originally posted by sebastianh with karma: 70 on 2011-10-19 This answer was ACCEPTED on the original site Post score: 2
{ "domain": "robotics.stackexchange", "id": 6985, "tags": "rosinstall, macos-snowleopard, osx" }
How can I calculate what a recording would sound like if it was played through my speakers and mic?
Question: I am trying to isolate the sound of someone speaking while music plays back in a noisy environment. I have multiple speakers and a microphone in my system. I am able to play back just music and record it, then play back music while speaking and record it. After that, I can look at the difference in the signals and get the meaningful information I need. In order for this to work in real-time, I would need to calculate what that first recording would sound like. If I have the impulse+magnitude response or some other data about the system, is it possible to predict what the microphone will detect? EDIT: I suspect that some kind of transform could be derived from the impulse response that will give a decent prediction of what the mic will hear. To be clear, this doesn't need to be incredibly accurate. I just want a general idea of what the mic will pick up. Answer: You can use convolution to hear a recording as it would sound through a specific speaker. The linked page describes how to get an impulse response and convolve a signal with it. https://www.mathworks.com/help/audio/examples/measure-impulse-response-of-an-audio-system.html
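As a sketch of that idea (assuming you have already measured an impulse response of the speaker-to-mic path, e.g. as in the linked MathWorks example), the predicted recording is the dry playback signal convolved with that impulse response:

```python
import numpy as np

def predict_mic_signal(dry, impulse_response):
    """Predict what the mic will pick up by convolving the dry playback
    signal with the measured impulse response of the playback path.
    This assumes the path is linear and time-invariant."""
    wet = np.convolve(dry, impulse_response, mode="full")
    # Trim the reverberant tail so the result stays sample-aligned
    # with the dry signal for subtraction.
    return wet[: len(dry)]

# Toy check with a hypothetical impulse response that is a pure
# 3-sample delay: the prediction is just the delayed dry signal.
dry = np.array([1.0, 0.0, 0.5, 0.0, 0.0, 0.0])
h = np.array([0.0, 0.0, 0.0, 1.0])
predicted = predict_mic_signal(dry, h)
# predicted -> [0., 0., 0., 1., 0., 0.5]
```

Subtracting this prediction from the live recording then leaves (approximately) the speech plus noise; how well that works depends on how stationary the room, the levels, and the mic placement are.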
{ "domain": "dsp.stackexchange", "id": 6658, "tags": "audio, impulse-response, real-time" }
Difference between live and dead body from atomic perspective?
Question: Sorry if this is a dumb question; recently I have been interested in the question of how "life" should be defined and got a specific question: if we compare the live body of a person and the dead body of someone who just died, what is the difference in terms of their molecules? Say, for example, when the heart stops beating, the blood won't cycle through the body, but if we focus on individual cells, say the blood cell, which has hemoglobin molecules, it stops carrying oxygen when the person dies. But if we dig further, the hemoglobin molecule is defined as: A hemoglobin molecule is made up of four polypeptide chains, two alpha chains of 141 amino acid residues each and two beta chains of 146 amino acid residues each. A bit further we would reach the microscopic world, where there are atoms, protons, electrons, etc. AFAIK, these things would still obey the rules of thermal motion, although the motion may be less active due to the reduced temperature of the dead body. So from the atom's perspective, what is the difference between a live person and a dead body? Thanks and sorry again for this dumb question! Answer: None. Say you were to take a skin sample from a living individual, and a skin sample from a deceased person. If you were then to somehow extract and isolate a single atom from each sample, they would be indistinguishable. In other words, if you were to mix up the two atoms, there would be no way of telling which came from the deceased individual.
{ "domain": "biology.stackexchange", "id": 12116, "tags": "cell-biology, life" }
Loading content in page via jQuery
Question: I'm doing a website in CodeIgniter and I'm loading the content page via jQuery, but I don't know if this is a good practice or not. Could I improve it? jQuery: // Another question here: how can I group the repeated code in a function: $(function() { var control = '../Indoamericana/Intranet/index/Proceso/'; var loadPage = function(page){ $(".page-content").load(page); }; $("#gestion_directiva").click(function(event) { var process = 1; loadPage(control + process); }); $("#gestion_calidad").click(function(event) { var process = 2; loadPage(control + process); }); $("#gestion_academica").click(function(event) { var process = 3; loadPage(control + process); }); }); Controller: /** * index(); * * function that loads the process information on the intranet * @param String $process Identifier of the process to load * @return View of the indicated process */ public function index($process, $idProcess){ $data['data'] = $this->intranet_model->getProcessInfo($idProcess); $data['member'] = $this->intranet_model->getMembers($idProcess); $data['proceso'] = 'GI7'; $route = '/Modules/Intranet/'; $route .= $process; $this->load->view($route, $data); } View: <div class="row"> <div class="col-md-12"> <h3 class="page-title"> <?php foreach ($data as $row): ?> <?php echo $row->name ?> <?php endforeach ?> </h3> <ul class="page-breadcrumb breadcrumb"> <li> <i class="icon-home"></i> <a href="index.html">Página Principal</a> <i class="icon-angle-right"></i> </li> <li> <a href="#">Intranet</a> <i class="icon-angle-right"></i> </li> <li> <a href="#"> <?php foreach ($data as $row): ?> <?php echo $row->name ?> <?php endforeach ?> </a> </li> </ul> </div> </div> Answer: You might already know this, but you can add a library class called 'template'; it allows you to make a kind of masterpage where you can load different php files, for example your header - content - footer or anything else, separately into one page.
Your controller will look like this for example: public function example() { $partials = array('header' => 'header', 'navigation' => 'navigation', 'content' => 'content'); $this->template->load('masterpage.php', $partials); } Read more at http://ellislab.com/codeigniter%20/user-guide/libraries/parser.html It's really easy to use once you know how it's done. For example, you could use the same navigation on every page but a different index and header. The masterpage looks like this: <!DOCTYPE html> <html lang="en"> <head> </head> <body> <?php echo $navigation; ?> <?php echo $header; ?> <?php echo $content; ?> <div id="footer"> <div class="container"> </div> </div> </body> </html> The last thing you need to do is search for the library file on the net, upload it, and autoload the library in your configs (file: autoload) or just in the constructor of your controller. I recommend autoloading the template library because you normally use it at the end of every controller to build the full view.
{ "domain": "codereview.stackexchange", "id": 5977, "tags": "javascript, php, jquery, controller, codeigniter" }
Graph embedding which maximizes minimum angle
Question: Given a planar graph, one can embed it crossing-free in linear time into an $n \times n$ grid. I am interested in whether any efficient algorithms are known to straight-line embed a planar graph crossing-free into an $n^c \times n^c$ grid, for some small $c$, such that the minimum angle between two edges is maximized? Answer: I don't think any such algorithm is known. The results I know about maximizing the minimum angle in straight line drawings of planar graphs are: Every planar graph has a (possibly nonplanar) drawing in which the minimum angle is inversely proportional to the maximum degree. For the main proof idea and some references, see http://11011110.livejournal.com/230133.html There exist planar graphs of degree d such that the minimum angle in any straight line planar drawing is $O(\sqrt{(\log d)/d^3})$. This result is due to Garg and Tamassia, "Planar drawings and angular resolution: algorithms and bounds", ESA '94. They also show that achieving near-optimal angles with a grid drawing may require a grid of exponential area. Every planar graph has a planar drawing in which the minimum angle is bounded by a function of its degree. This can be shown using the Koebe-Andreev-Thurston circle packing theorem. For a reference to a slightly stronger version of this result (showing that every planar graph of bounded degree has a planar drawing with a bounded number of edge slopes) see http://11011110.livejournal.com/205447.html
{ "domain": "cstheory.stackexchange", "id": 1304, "tags": "reference-request, graph-theory, graph-drawing" }
Question on dimensions of CFT operators (ref: MAGOO, hep-th/9905111)
Question: Right now I am having this silly difficulty from the following: BTW, Conformal dimension/scaling dimension is -ve of mass dimension ..right? In p-63 of Magoo, after 3.15 eq, they said a.) $\phi$ is dimensionless..why? b.) They, after getting length dimension of boundary field as $\Delta - d$, referred to 3.13 to comment that O operator has conformal dimension $\Delta$. I know it's right. But as the exponent in the lhs of 3.13 must be dimensionless, it seems O should have length or conformal dimension $-\Delta$.. I know I am making a silly mistake here..Thanks for answering. Answer: Just to be sure, MAGOO is the AdS/CFT Bible http://arxiv.org/abs/hep-th/9905111 whose authors are usually listed alphabetically. Concerning your questions, Conformal dimension or scaling dimension or mass dimension are meant to be the same thing in the context of conformal field theories in more than 2 dimensions. In 2 dimensions, one may distinguish the left-moving (holomorphic) and right-moving (antiholomorphic) dimensions or their sum - but the separate chiral "dimensions" are usually called "weights", anyway. The dimension pretty much counts the exponent of "kilogram" in the unit of the corresponding operator - except that in quantum mechanics, the exponent is often fractional. The dimensions are usually positive - they get mapped to the energy which is positive as well. Operators have positive dimensions of mass. If your cryptic "-ve" meant that there is a minus sign, then there is no minus sign. If you prefer to use units of length, then their powers are negative, but you should switch to masses as the base whose powers count the dimension. a) The field $\phi$, as opposed to $\phi_0$, is dimensionless because it is an actual field in the AdS space (the bulk), unlike $\phi_0$ that is defined on the boundary.
They mean that it is dimensionless under the conformal transformations of the boundary - and that's true because the conformal transformations are realized as simple isometries in the bulk, so they can't rescale the AdS field $\phi$ - at most, they move it to another point. Also, on that page, one deals with particular finite modified boundary conditions for $\phi$ at the boundary of the AdS space so it must be finite even at $z=0$. b) In equation (3.15), the left hand side is dimensionless. The right hand side has a power of $\epsilon$ whose dimension is $length^{d-\Delta}$ because $\epsilon$ has units of length (on boundary), and $\phi_0$ whose dimension must be $length^{\Delta-d}$ as a consequence, to get a dimensionless product. In equation (3.13), the exponent on the left hand side has $d^4x$ whose dimension is $length^d$ - note that $d=4$ for $AdS_5$; $\phi_0$ whose dimension is $length^{\Delta-d}$ as I said in the previous sentence - note that the $d$ term in the exponent cancels; and $O$ whose dimension must therefore be $length^{-\Delta}$ which means $mass^{\Delta}$. As I said, the standard base for counting dimensions is mass, so we also say that $O$ has dimension $\Delta$. So I suspect that your sign error is simply caused by the point 1) - namely by your incorrect assumption that dimensions refer to the powers of length. When we say "dimension" without extra specifications, we mean the power of the mass, not length. It's a healthy convention because the operators end up having non-negative dimensions. You may want to remember the dimensions of basic operators in 4 dimensions. The identity operator is always dimensionless (and $x$-independent): the dimension is 0. Bosonic scalar fields $\phi_0$ and potentials $A_\mu$ have dimension 1, their field strength has dimension $2$, fermions like the Dirac $\psi$ have dimension 3/2, and all terms in the Lagrangian density have dimension $4$. 
These are classical dimensions; each derivative adds $1$ to the dimension. Products of these operators have dimensions that are sums of the dimensions of the factors - except that quantum mechanics also adds "anomalous dimensions" to this classical form of the dimension, so that the total dimensions may have fractional terms that are proportional to powers of $g$ etc.
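The bookkeeping in (a) and (b) fits in one line. Using length $L$ as the base (so mass is $M = L^{-1}$) and demanding that the exponent in (3.13) be dimensionless:

```latex
[\mathrm{d}^dx]\,[\phi_0]\,[O]
  = L^{d}\cdot L^{\Delta-d}\cdot L^{-\Delta}
  = L^{0},
\qquad\text{hence}\qquad
[O] = L^{-\Delta} = M^{\Delta}.
```

This is the sign convention the answer insists on: quoting dimensions as powers of mass makes $\Delta$ come out positive.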
{ "domain": "physics.stackexchange", "id": 306, "tags": "string-theory, conformal-field-theory" }
A functional binary heap implementation
Question: I've implemented a binary heap in F#. It's pure and uses zippers for tree modification. To test it out I have implemented heap sort using it but it takes 10 seconds to sort a list of 100 000. Regular List.sort is instant and since heap sort should have the same complexity I'm wondering what I can do to improve my implementation. Profiling revealed most of the time (above 50%) is spent in the bubble down, which is to be expected since sort is basically just doing a bunch of removes, but nothing in the method is really exceptionally slow (that I can see). #nowarn "25" namespace FSharpExt module Heap = type HeapNode<'a, 'b when 'a : comparison> = | Full of 'a * 'b * HeapNode<'a, 'b> * HeapNode<'a, 'b> | Half of 'a * 'b * HeapNode<'a, 'b> | Leaf of 'a * 'b | Empty let (|KeyValue|) zipper = match zipper with | Full(k, v, _, _) | Half(k, v, _) | Leaf(k, v) -> (k, v) | Empty -> failwith "List is empty" let cut node = match node with | Leaf(k, v) -> Empty | Half(k, v, _) -> Leaf(k, v) | Full(k, v, left, _) -> Half(k, v, left) type Direction = Left | Right type Pointer = Direction list let rec next pointer = match pointer with | [] -> [Left] | x :: xs when x = Left -> Right :: xs | x :: xs -> Left :: next xs let rec previous pointer = match pointer with | [Left] -> [] | x :: xs when x = Right -> Left :: xs | x :: xs -> Right :: previous xs type Zipper<'a, 'b when 'a : comparison> = Zipper of HeapNode<'a, 'b> * (HeapNode<'a, 'b> * Direction) list let moveLeftZipper (Zipper((Full(_, _, left, _) | Half(_, _, left)) as node, path)) = Zipper(left, (node, Left) :: path) let moveRightZipper (Zipper(Full(_, _, _, right) as node, path)) = Zipper(right, (node, Right) :: path) let moveDirectionZipper direction zipper = match direction with | Left -> moveLeftZipper zipper | Right -> moveRightZipper zipper let moveAlongPathZipper path zipper = List.fold (Func.flip moveDirectionZipper) zipper (List.rev path) let moveUpZipper (Zipper(current, (last, dir) :: path)) = match last,
dir with | Full(k, v, _, right), Left -> Zipper(Full(k, v, current, right), path) | Half(k, v, _), Left -> Zipper(Half(k, v, current), path) | Full(k, v, left, _), Right -> Zipper(Full(k, v, left, current), path) let rec toRootZipper (Zipper(current, path) as zipper) = match path with | [] -> zipper | x :: xs -> zipper |> moveUpZipper |> toRootZipper let keyValue (Zipper(KeyValue(k, v), _)) = (k, v) let modifyCurrentZipper (k, v) (Zipper(current, path)) = match current with | Full(_, _, left, right) -> Zipper(Full(k, v, left, right), path) | Half(_, _, left) -> Zipper(Half(k, v, left), path) | _ -> Zipper(Leaf(k, v), path) let appendLeafZipper (k, v) (Zipper(current, path)) = match current with | Half(ck, cv, left) -> Zipper(Full(ck, cv, left, Leaf(k, v)), path) |> moveRightZipper | Leaf(ck, cv) -> Zipper(Half(ck, cv, Leaf(k, v)), path) |> moveLeftZipper | Empty -> Zipper(Leaf(k, v), []) let removeLeafZipper zipper = match zipper with | Zipper(_, (node, dir) :: path) -> Zipper(cut node, path) | _ -> Zipper(Empty, []) let rec bubbleUpZipper (Zipper(KeyValue(k, v), path) as zipper) = match path with | (KeyValue(pk, pv), _) :: rest when pk > k -> modifyCurrentZipper (pk, pv) zipper |> moveUpZipper |> modifyCurrentZipper (k, v) |> bubbleUpZipper | _ -> zipper let rec bubbleDownZipper (Zipper(current, _) as zipper) = let move fn (ak, av) (bk, bv) = modifyCurrentZipper (bk, bv) zipper |> fn |> modifyCurrentZipper (ak, av) |> bubbleDownZipper let right = move moveRightZipper let left = move moveLeftZipper match current with | Full(k, v, KeyValue(lk, lv), KeyValue(rk, rv)) when k > lk || k > rk -> if lk > rk then right (k, v) (rk, rv) else left (k, v) (lk, lv) | Half(k, v, KeyValue(lk, lv)) when k > lk -> left (k, v) (lk, lv) | _ -> zipper let toRoot (zipper, pointer) = (toRootZipper zipper, pointer) let appendLeaf (k, v) (zipper, pointer) = (moveAlongPathZipper (List.tail pointer) zipper |> appendLeafZipper (k, v), next pointer) let removeLeaf (zipper, pointer) = 
(moveAlongPathZipper (previous pointer) zipper |> removeLeafZipper, previous pointer) let bubbleUp (zipper, pointer) = (bubbleUpZipper zipper, pointer) let bubbleDown (zipper, pointer) = (bubbleDownZipper zipper, pointer) type Heap<'a, 'b when 'a : comparison> = Heap of Zipper<'a, 'b> * Pointer let (|Root|) (Heap(Zipper(root, _), _)) = root let insert (k, v) (Heap(zipper, pointer)) = appendLeaf (k, v) (zipper, pointer) |> bubbleUp |> toRoot |> Heap let min (Heap(zipper, pointer)) = keyValue zipper let remove (Heap(zipper, pointer)) = (modifyCurrentZipper (moveAlongPathZipper (previous pointer) zipper |> keyValue) zipper, pointer) |> removeLeaf |> toRoot |> bubbleDown |> toRoot |> Heap let pop heap = (min heap, remove heap) let tryPop heap = match heap with | Root(Empty) -> None | _ -> pop heap |> Some let singleton (k, v) = Heap(Zipper(Leaf(k, v), []), [Left]) let empty = Heap(Zipper(Empty, []), []) let ofList list = match list with | [] -> empty | first :: tail -> List.fold (fun h x -> insert x h) (singleton first) tail let ofValues list fn = match list with | [] -> empty | first :: tail -> List.fold (fun h x -> insert (fn x, x) h) (singleton (fn first, first)) tail let sort list = ofValues list id |> Seq.unfold tryPop |> Seq.map snd |> List.ofSeq The source is also available on GitHub. Feel free to clone it if you want to profile it or something. Answer: First of all I'm not surprised that your algorithm is significantly slower than List.sort: List.sort is implemented using Array.Sort, which uses introsort. Normal (array-based) heapsort is one of the slower sorting algorithms. When compared with normal heapsort, you're using a separate object for each item in the collection. When compared with mutable tree-based heapsort, you're also creating garbage when "mutating" the tree. All these put together mean that functional heapsort is never going to be anywhere near List.sort in terms of performance. I don't see why you're using zippers here at all. 
After each operation, the zipper is positioned at the root of the tree, so you might as well use just the binary tree (plus a pointer to the next leaf position). All it does is make your code more complicated (and probably less efficient). type HeapNode<'a, 'b when 'a : comparison> = 'a and 'b are pretty bad names. All names you use should be meaningful. | Full of 'a * 'b * HeapNode<'a, 'b> * HeapNode<'a, 'b> | Half of 'a * 'b * HeapNode<'a, 'b> | Leaf of 'a * 'b | Empty I don't understand why you have both Empty and a special case for an inner node with one child empty. let (|KeyValue|) zipper = This is very confusing, since zipper is not a Zipper, it's a HeapNode. let rec next pointer = match pointer with | [] -> [Left] | x :: xs when x = Left -> Right :: xs | x :: xs -> Left :: next xs I don't see any reason to use when here, you could write this as: let rec next pointer = match pointer with | [] -> [Left] | Left :: xs -> Right :: xs | Right :: xs -> Left :: next xs Or even (if you're okay with not naming the parameter): let rec next = function | [] -> [Left] | Left :: xs -> Right :: xs | Right :: xs -> Left :: next xs let rec toRootZipper (Zipper(current, path) as zipper) = match path with | [] -> zipper | x :: xs -> zipper |> moveUpZipper |> toRootZipper You don't need to name variables you're not going to use. Here, you could just write _ instead of x :: xs. let ofList list = match list with | [] -> empty | first :: tail -> List.fold (fun h x -> insert x h) (singleton first) tail You could start folding from empty instead of singleton first. That way, you wouldn't even need a special case for the empty list: let ofList list = List.fold (fun h x -> insert x h) empty list
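To make the performance comparison concrete, here is a sketch (in Python, not F#) of the array-based sift-down that imperative heapsort uses: each swap is two writes into a flat array, whereas the zipper-based bubbleDownZipper allocates fresh nodes along the whole path on every swap.

```python
# Sketch: array-based min-heap sift-down and heapsort, allocation-free
# per swap, for contrast with the zipper-based bubble down above.
def sift_down(heap, i):
    n = len(heap)
    while True:
        smallest = i
        for child in (2 * i + 1, 2 * i + 2):
            if child < n and heap[child] < heap[smallest]:
                smallest = child
        if smallest == i:
            return
        heap[i], heap[smallest] = heap[smallest], heap[i]
        i = smallest

def heapsort(items):
    heap = list(items)
    for i in range(len(heap) // 2 - 1, -1, -1):  # bottom-up heapify, O(n)
        sift_down(heap, i)
    out = []
    while heap:
        heap[0], heap[-1] = heap[-1], heap[0]  # move current min to the end
        out.append(heap.pop())
        sift_down(heap, 0)
    return out

print(heapsort([5, 1, 4, 2, 3]))  # -> [1, 2, 3, 4, 5]
```

This is the kind of layout the answer's "normal (array-based) heapsort" refers to; it is still slower than introsort, but it avoids the per-node object and garbage costs entirely.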
{ "domain": "codereview.stackexchange", "id": 8539, "tags": "optimization, f#, heap" }
Interface of a variant-like class
Question: I have created an NBT format reader (NBT is a binary serialization format that Minecraft uses to store its data). I made a node-like class that can be a data entry or the root of a subtree (that's how NBT is defined). The public interface of it currently looks like this: class Tag { Tag(); void read(std::istream& stream); void write(std::ostream& stream); void print(int indent = 0); int8_t byteValue() const; int16_t shortValue() const; int32_t intValue() const; int64_t longValue() const; float floatValue() const; double doubleValue() const; ByteArray byteArrayValue() const; IntArray intArrayValue() const; UTF8String stringValue() const; std::vector<Tag> children() const; UTF8String name() const; }; Is this the right design though? Originally I wanted to do the *Value methods with templates, but the problem is that the value of a Tag is dynamic, so that does not play nicely with the compile-time genericity. Is it a blasphemy not to make the various types derived classes? I couldn't really take advantage of a virtual value method because the return types would need to differ. Is there a better way to implement a data structure like this? The aim is to make it easy to use and fast. Answer: Couple of things I noticed: The serialize members: void read(std::istream& stream); void write(std::ostream& stream); I don't particularly mind them but I also would prefer to have operator<< and operator>> defined (Note you can simply make these call write/read respectively). With the stream iterators defined this becomes less useful. void print(int indent = 0); The following seem very similar.
int8_t byteValue() const; int16_t shortValue() const; int32_t intValue() const; int64_t longValue() const; float floatValue() const; double doubleValue() const; ByteArray byteArrayValue() const; IntArray intArrayValue() const; UTF8String stringValue() const; I think we can replace all of them with: template<typename T> T const& get() const; I like this better as we do not need to guess that byte maps to int8_t or short maps to int16_t as we can explicitly get the type we want: int8_t val = t.get<int8_t>(); But there is already something very similar defined in boost: boost::any
{ "domain": "codereview.stackexchange", "id": 1478, "tags": "c++, api" }
Port of NLTK tokenizing code from Python to Rust
Question: I'm working on a port of NLTK to Rust. I am fairly new to Rust, so I wanted to post a small file to check if I was writing idiomatic Rust. I've included the original Python. The Python has docstrings that describes usage, so I didn't include comments for brevity in the Rust. This file passes all of the tests we've written, so we're certain that it mostly works. util.py from re import finditer def string_span_tokenize(s, sep): r""" Return the offsets of the tokens in *s*, as a sequence of ``(start, end)`` tuples, by splitting the string at each occurrence of *sep*. >>> from nltk.tokenize.util import string_span_tokenize >>> s = '''Good muffins cost $3.88\nin New York. Please buy me ... two of them.\n\nThanks.''' >>> list(string_span_tokenize(s, " ")) [(0, 4), (5, 12), (13, 17), (18, 26), (27, 30), (31, 36), (37, 37), (38, 44), (45, 48), (49, 55), (56, 58), (59, 73)] :param s: the string to be tokenized :type s: str :param sep: the token separator :type sep: str :rtype: iter(tuple(int, int)) """ if len(sep) == 0: raise ValueError("Token delimiter must not be empty") left = 0 while True: try: right = s.index(sep, left) if right != 0: yield left, right except ValueError: if left != len(s): yield left, len(s) break left = right + len(sep) def regexp_span_tokenize(s, regexp): r""" Return the offsets of the tokens in *s*, as a sequence of ``(start, end)`` tuples, by splitting the string at each successive match of *regexp*. >>> from nltk.tokenize import WhitespaceTokenizer >>> s = '''Good muffins cost $3.88\nin New York. Please buy me ... 
two of them.\n\nThanks.''' >>> list(WhitespaceTokenizer().span_tokenize(s)) [(0, 4), (5, 12), (13, 17), (18, 23), (24, 26), (27, 30), (31, 36), (38, 44), (45, 48), (49, 51), (52, 55), (56, 58), (59, 64), (66, 73)] :param s: the string to be tokenized :type s: str :param regexp: regular expression that matches token separators :type regexp: str :rtype: iter(tuple(int, int)) """ left = 0 for m in finditer(regexp, s): right, next = m.span() if right != 0: yield left, right left = next yield left, len(s) def spans_to_relative(spans): r""" Return a sequence of relative spans, given a sequence of spans. >>> from nltk.tokenize import WhitespaceTokenizer >>> from nltk.tokenize.util import spans_to_relative >>> s = '''Good muffins cost $3.88\nin New York. Please buy me ... two of them.\n\nThanks.''' >>> list(spans_to_relative(WhitespaceTokenizer().span_tokenize(s))) [(0, 4), (1, 7), (1, 4), (1, 5), (1, 2), (1, 3), (1, 5), (2, 6), (1, 3), (1, 2), (1, 3), (1, 2), (1, 5), (2, 7)] :param spans: a sequence of (start, end) offsets of the tokens :type spans: iter(tuple(int, int)) :rtype: iter(tuple(int, int)) """ prev = 0 for left, right in spans: yield left - prev, right - left prev = right util.rs extern crate regex; use regex::Regex; pub fn string_span_tokenize(s: &str, sep: &str) -> Result<Vec<(usize, usize)>, String> { if sep.len() == 0 { Err(String::from("Error! Separator has a length of 0!")) } else { // TODO: we'll likely want to do some error checking // to ensure s.len() and str.len() don't exceed usize::MAX let strlen = s.len(); let seplen = sep.len(); let mut result: Vec<(usize, usize)> = Vec::new(); let mut left = 0; let mut r_idx; loop { let right = s[left..].find(sep); // TODO: Will this work on unicode? 
match right { Some(right_idx) => { if right_idx != 0 { result.push((left, right_idx)); } r_idx = right_idx; }, None => { if left != strlen { result.push((left, strlen)); } break; } } left = r_idx + seplen; } return Ok(result); } } pub fn regexp_span_tokenize(s: &str, regexp: &regex::Regex) -> Vec<(usize, usize)> { let mut result: Vec<(usize, usize)> = Vec::new(); let strlen = s.len(); let mut left = 0; for (right, next) in regexp.find_iter(s) { if right != 0 { result.push((left, right)); } left = next } result.push((left, strlen)); return result; } pub fn spans_to_relative(spans: Vec<(usize, usize)>) -> Vec<(usize, usize)> { let mut prev = 0; let mut result: Vec<(usize, usize)> = Vec::new(); for (left, right) in spans { result.push((left - prev, right - left)); prev = right; } return result; } We decided to return Vecs instead of using generators because generators are verbose in Rust. Answer: You didn't include any of your Rust tests, so I can't make any guarantees that I preserved the semantics of your code. Additionally, these tests would have greatly helped me understand what your code was trying to do. passes all of the tests we've written You will want to double-check your tests, then. Running string_span_tokenize("hello world", "l") leads to an infinite loop. Because of this (and the lack of documentation and tests), I can't tell what behavior you actually want for this method, so I just guessed. You have inconsistent handling of zero-width spans. string_span_tokenize prevents zero-width spans at the beginning and end, but regexp_span_tokenize only prevents them at the beginning. Neither method prevents zero-width spans in the middle of the string. For my rewrite, I allow zero-width spans everywhere. I'd expect you could add a filter call to remove all zero-width spans before collecting if that is needed. You have inconsistent handling of zero-width separators. The string method complains if you pass "", but the regex version can accept an empty regex. 
You use (usize, usize) to mean both absolute and relative spans. This is an invitation to confuse the two and pass one where the other is expected. I did not make this change, but you should consider creating newtypes like AbsoluteSpan(usize, usize) and RelativeSpan(usize, usize). You can have adapter methods to-and-from a tuple and it would be a place to hang the conversion methods. ensure s.len() and str.len() don't exceed usize::MAX Beyond the fact that you cannot exceed usize::MAX (not sure how you would exceed the maximum), remember that usize represents the native machine width (32-bit or 64-bit). If you could somehow exceed this, that means you'd have a string that was over 4GiB or 16EiB. It is likely you will run into other problems before you reach this point. Seeing many push calls should usually give you pause. There are often higher-level functions like map that are easier to understand and can actually be faster! For example, the Vec implementation of collect uses size_hint to pre-allocate the array. Avoid slicing strings and slices when possible and still understandable. These have a tiny amount of overhead as they have to do out-of-bounds checks. Instead, try to use iterators as the bounds checks can be avoided, giving you a little bit of a speedup. Don't top-load all your lets. Instead, define them as close as possible to where they are used. This helps keep their scope smaller which reduces the chance of programmer error and also tends to make refactoring easier in the long run. In Rust specifically, it often means you can avoid making things mutable. Avoid functions that are complete if-else statements, especially if one branch is much longer than the other. Use an early return called a guard clause for the short branch and dedent the happy path. It's very rare to need a Vec<T> as an argument. Instead, accept a &[T]. It's more general as anything that looks like a slice can be used. For example, you can pass a reference to an array. 
Don't use explicit return statements at the end of methods. Don't use explicit types on variables unless the compiler tells you to add them. Specifically, your Vec::new calls don't need them. Use is_empty instead of len == 0. It's easier to read and shorter. extern crate regex; use regex::Regex; pub fn string_span_tokenize(s: &str, sep: &str) -> Result<Vec<(usize, usize)>, String> { if sep.is_empty() { return Err(String::from("Error! Separator has a length of 0!")); } let seplen = sep.len(); let mut left = 0; let spans = s.split(sep).map(|piece| { let right = left + piece.len(); let span = (left, right); left = right + seplen; span }).collect(); Ok(spans) } pub fn regexp_span_tokenize(s: &str, regexp: &regex::Regex) -> Vec<(usize, usize)> { let mut left = 0; let mut spans: Vec<_> = regexp.find_iter(s).map(|(right, next)| { let span = (left, right); left = next; span }).collect(); spans.push((left, s.len())); spans } pub fn spans_to_relative(spans: &[(usize, usize)]) -> Vec<(usize, usize)> { let mut prev = 0; spans.iter().map(|&(left, right)| { let span = (left - prev, right - left); prev = right; span }).collect() } fn main() { println!("{:?}", string_span_tokenize("hello world", "l")); // Ok([(0, 2), (3, 3), (4, 9), (10, 11)]) println!("{:?}", regexp_span_tokenize("hello world", &regex::Regex::new("l").unwrap())); // [(0, 2), (3, 3), (4, 9), (10, 11)] println!("{:?}", spans_to_relative(&[(1,2), (3,4), (100, 200)])); // [(1, 1), (1, 1), (96, 100)] }
{ "domain": "codereview.stackexchange", "id": 17095, "tags": "parsing, regex, rust, natural-language-processing" }
How does one derive the Lamb shift for the Hydrogen atom?
Question: I've been perusing my copies of Srednicki and Peskin & Schroeder, and I can't seem to find an explanation of how one derives the Lamb shift that I can follow. How does one derive the Lamb shift? What order in perturbation theory do you have to go up to? Does one use a path integral or canonical quantization approach? Answer: A derivation is here: http://en.wikipedia.org/wiki/Lamb_shift#Derivation or in Landau-Lifshitz. Bethe's original derivation is found e.g. in Matt Schwartz's Harvard lecture here http://isites.harvard.edu/fs/docs/icb.topic792163.files/20-LambShift.pdf The leading contribution to the Lamb shift is the one-loop level (the first non-classical correction) but of course, the effect receives corrections at every higher order, too. One may derive it in the operator approach or path integral approach, much like pretty much everything in physics. These are just equivalent languages to do physics. The Lamb shift has to deal with an atom which is not quite an elementary particle. So the usual perturbative rules of QED have to be "generalized" to deal with the composite object. However, otherwise it's about a virtual photon emitted and reabsorbed by the atom. If the atom were elementary, it would be a simple "photon loop" correction to the atom's propagator. A divergent term has to be removed – equivalently, one has to find out sensible limits of the integral – and what is left is some "truncated logarithmic divergence" that produces those 1,000 MHz for the relevant levels.
{ "domain": "physics.stackexchange", "id": 4946, "tags": "quantum-field-theory, lamb-shift" }
calling an available package(APIs) from another package
Question: Hello All, I am working on kuka youbot. I have to use API functions for operating the base of youbot. To do this, I need to create a user application based on the ROS package which calls the youbot API functions from the Library/package of those APIs. Does anyone know how to do this procedure? Any ideas or tutorial links would be helpful. Originally posted by A.M Dynamics on ROS Answers with karma: 93 on 2014-11-24 Post score: 0 Answer: This may help: http://docs.ros.org/indigo/api/catkin/html/howto/format2/system_library_dependencies.html As may this: http://answers.ros.org/question/118231/how-to-add-non-cmake-based-libraries-to-a-catkin-build/ Originally posted by Tom Moore with karma: 13689 on 2014-11-24 This answer was ACCEPTED on the original site Post score: 0 Original comments Comment by A.M Dynamics on 2014-12-26: @Tom Moore. Thanks for the help. My problem was solved.
{ "domain": "robotics.stackexchange", "id": 20148, "tags": "ros, package, youbot" }
Total Multiplicity of Einstein Solids
Question: I am reading Problem 2.8 in Schroeder's Introduction to Thermal Physics. The question asks for certain pieces of information regarding two Einstein solids, A and B, each containing 10 oscillators, sharing a total of 20 units of energy. Part (b) asks for the total microstates in this system. Now, when we were looking for the total microstates in a set of fair coins, we could use 2^(# of coins). Is there an equation like this for the system of two Einstein solids? If not, then I suspect that the only way to determine the total number of microstates is to find the microstates for each macrostate and then add them up. Answer: There is a direct equation. Since the energy units can move freely between A and B, the combined system is itself just one Einstein solid with $N = N_A + N_B$ oscillators sharing $q$ units of energy, and the usual multiplicity formula $\Omega(N, q) = \binom{q+N-1}{q}$ counts every way of distributing the energy across all the oscillators at once. That single number equals the sum of the multiplicities over all macrostates, so you don't need to add them up one by one.
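For part (b) this can be checked numerically: summing the per-macrostate products reproduces the single combined-solid count (math.comb requires Python ≥ 3.8):

```python
from math import comb

def multiplicity(N, q):
    # Microstates of an Einstein solid: ways to distribute q energy
    # units among N oscillators ("stars and bars").
    return comb(q + N - 1, q)

N_A = N_B = 10  # oscillators in solids A and B
q = 20          # total energy units shared between them

# Sum over macrostates: q_A units in A, the remaining q - q_A in B.
by_macrostates = sum(multiplicity(N_A, qA) * multiplicity(N_B, q - qA)
                     for qA in range(q + 1))

# Same count in one shot: treat A and B as a single 20-oscillator solid.
total = multiplicity(N_A + N_B, q)
print(total, by_macrostates == total)  # -> 68923264410 True
```

So for this problem the total is $\binom{39}{20} = 68{,}923{,}264{,}410$ microstates.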
{ "domain": "physics.stackexchange", "id": 60494, "tags": "thermodynamics" }
What controls the Python search path, and how?
Question: In my shell I first activate a Python virtual environment and then the Catkin workspace which results in this sys.path, which searches for Python modules in this order. [ '/home/user/project/devel/lib/python3/dist-packages', '/opt/ros/noetic/lib/python3/dist-packages', '/home/user/project/.venv/lib/python37.zip', '/home/user/project/.venv/lib/python3.7', '/home/user/project/.venv/lib/python3.7/lib-dynload', '/usr/lib/python3.7', '/home/user/project/.venv/lib/python3.7/site-packages', '/home/user/.local/lib/python3.7/site-packages', '/usr/local/lib/python3.7/dist-packages', '/usr/lib/python3/dist-packages' ] Yet when I run roslaunch from this shell, which in turn executes a Python node, this is the sys.path it sees. My virtual environment is no longer active. [ '/home/user/project/src/pkg/src', '/home/user/project/devel/.private/pkg/lib/pkg', '/home/user/project/devel/lib/python3/dist-packages', '/opt/ros/noetic/lib/python3/dist-packages', '/usr/lib/python37.zip', '/usr/lib/python3.7', '/usr/lib/python3.7/lib-dynload', '/home/user/.local/lib/python3.7/site-packages', '/usr/local/lib/python3.7/dist-packages', '/usr/lib/python3/dist-packages' ] I would expect it to add the first two entries (so that the ROS package's own Python modules take priority), but keep the rest which include my .venv directories. Originally posted by rgov on ROS Answers with karma: 130 on 2022-04-21 Post score: 0 Original comments Comment by rgov on 2022-04-21: It appears that it is due to the Python interpreter /usr/bin/python3 being executed instead of /home/user/project/.venv/bin/python3 even though the shebang line is #!/usr/bin/env python3 and the PATH (as the node sees it) should prioritize the one from my virtual environment. 
Comment by rgov on 2022-04-21: When roslaunch calls execve() to run my node, it is executing (via some symlinks) /home/user/project/devel/.private/pkg/lib/pkg/node which is a script that goes: #!/usr/bin/python3 # -*- coding: utf-8 -*- # generated from catkin/cmake/template/script.py.in Which rather than just execute the program as any other executable, reads the entire thing and then executes it... Answer: Basically, my nodes were being launched with a different python3 executable than I expected. This is because roslaunch actually launches a trampoline script, based on this template, that provides a different shebang line which is fixed at build time. (Aside: This seems like very unusual behavior from Catkin. Why not just write out a file with #!/whatever/python3 pathtofile.py instead of this? Why the circuitousness?) To get the right shebang line in this trampoline, the virtual environment needs to be active while building. Originally posted by rgov with karma: 130 on 2022-04-21 This answer was ACCEPTED on the original site Post score: 1
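The first-match rule behind all of this is easy to demonstrate. The sketch below (the module name which_env is invented for the demo) shows that whichever sys.path entry comes first wins, which is exactly why the interpreter baked into the trampoline's shebang — and the path set it implies — decides whether the virtual environment's packages are seen:

```python
import os
import sys
import tempfile

# Sketch: Python imports whichever copy of a module appears first on
# sys.path -- the same first-match rule that decides venv vs. system
# packages above. "which_env" is a made-up module name.
def write_module(directory, tag):
    with open(os.path.join(directory, "which_env.py"), "w") as f:
        f.write("VALUE = %r\n" % tag)

with tempfile.TemporaryDirectory() as venv_dir, \
     tempfile.TemporaryDirectory() as system_dir:
    write_module(venv_dir, "venv")
    write_module(system_dir, "system")
    # venv_dir first, like .venv/.../site-packages before /usr/lib/...
    sys.path[:0] = [venv_dir, system_dir]
    import which_env
    print(which_env.VALUE)  # -> venv
```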
{ "domain": "robotics.stackexchange", "id": 37589, "tags": "ros, python3" }
Defining Grammar for Given Language
Question: I'm attempting to practice for an exam and I'm having some trouble on one of the practice problems. The problem asks to identify a variety of languages as regular grammar, context-free grammar, context-sensitive grammar, or unrestricted grammar. It also asks that if the grammar is regular or context-free, to write out the exact grammar. I'm not having trouble with two out of the 4 pieces of language. For instance, the easiest one is as follows: $\{a^n$ where $n\ge0$, $n\pmod 3 \not= 1\}$ can be described by the regular grammar $A \rightarrow aB \mid \varepsilon$, $B \rightarrow aC$, $C \rightarrow aA \mid \varepsilon$ (one nonterminal per residue of $n$ modulo 3). However, the language I am struggling with is: $$\{a^n b^m \text{ where } n>1, m\ge1, n>m\}$$ and $$\{a^{2n} b^{3n}\text{ where }n\ge1\}$$ I believe that the first language is context-free because I know that the language $a^nb^n$ is context-free from prior examples and can be described by the grammar $A \rightarrow aAb \mid ab$, however, in this version, $b$ is taken to the $m$ power rather than the $n$ and the bounds for $m$ and $n$ are different, and I'm not sure how that affects the grammar that describes it. Frankly, I'm not sure where to start with the latter piece of language... I don't know how to determine what type of grammar describes it, let alone the grammar itself if it is context-free or regular. Could anyone help, or at least point me in the right direction?
According to the lemma, it can be written as $xyz$, where $|xy| \leq p$ (so $xy$ consists solely of $a$'s) and $y \neq \epsilon$. But then $xy^0z$ has at most $p$ $a$'s followed by exactly $p$ $b$'s, so the number of $a$'s no longer exceeds the number of $b$'s and the word is not in your language. The second language is even simpler, and left to you.
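A quick way to convince yourself that the first grammar matches $\{a^n b^m : n > 1, m \ge 1, n > m\}$ is to brute-force all short derivations and compare against a direct membership test. The sketch below is mine, not from the answer (the function names and the length bound 12 are arbitrary):

```python
from itertools import product

def derive(limit=12):
    """Collect all terminal strings derivable from S -> aS | aSb | aab
    whose length does not exceed `limit` (breadth-first expansion)."""
    rules = ["aS", "aSb", "aab"]
    results, frontier = set(), {"S"}
    while frontier:
        nxt = set()
        for form in frontier:
            if len(form) > limit:
                continue  # every production grows the form, so prune
            if "S" not in form:
                results.add(form)
                continue
            for rhs in rules:
                nxt.add(form.replace("S", rhs, 1))
        frontier = nxt
    return results

def in_language(w):
    """Direct membership test for { a^n b^m : n > 1, m >= 1, n > m }."""
    n = len(w) - len(w.lstrip("a"))
    m = len(w) - n
    return w == "a" * n + "b" * m and n > 1 and m >= 1 and n > m

generated = derive(12)
brute = {"".join(p) for k in range(2, 13)
         for p in product("ab", repeat=k) if in_language("".join(p))}
assert generated == brute  # sound and complete up to length 12
print(len(generated), "strings agree up to length 12")
```

The same harness, with `in_language` swapped for a test of $a^{2n}b^{3n}$, checks the second grammar as well.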
{ "domain": "cs.stackexchange", "id": 16074, "tags": "formal-grammars, parsers" }
Small building game logic
Question: I am planning to create a small game using C#, so I've decided to create a small logic for that game. I immediately opened Visual Studio and started to create some classes to hold the player's data, so I came up with something like this. However, I don't know whether there are better ways to achieve this. The Player class public class Player { /// <summary> /// Player name. /// </summary> public string Name { get; set; } /// <summary> /// Buildings the player already has... /// </summary> public Building[] Buildings { get; set; } /// <summary> /// Player money. /// </summary> public int Money { get; set; } } The Building class public class Building { /// <summary> /// The building type, Academy, Barracks, Tower, GlassHall... etc /// </summary> public BuildingType Type { get; set; } /// <summary> /// Requirements for that building. /// </summary> public Building[] Requirements { get; set; } /// <summary> /// Needed money for that building. /// </summary> public int NeededMoney { get; set; } /// <summary> /// The building's level... should be updated after the building is already built. /// </summary> public int Level { get; set; } } The Building types public enum BuildingType { Academy, Barracks, Tower, GlassHall } Main() static void Main() { var player = new Player { Name = "SweetPlayer...", Money = 50000, //Buildings the player already has... Buildings = new[] { new Building {Level = 5, Type = BuildingType.GlassHall}, new Building {Level = 5, Type = BuildingType.Barracks}, new Building {Level = 5, Type = BuildingType.Tower} } }; var buildingTobuild = new Building { Level = 1, NeededMoney = 40000, Type = BuildingType.Tower, //Buildings the player needs in order to establish this building...
Requirements = new[] { new Building {Level = 5, Type = BuildingType.GlassHall}, new Building {Level = 5, Type = BuildingType.Academy}, new Building {Level = 5, Type = BuildingType.Barracks} } }; //Check whether the player has enough money. if (player.Money >= buildingTobuild.NeededMoney) { //We have enough money for the building so let's check if the player meets the requirements... var neededBuildings = new ArrayList(); foreach (var requirement in buildingTobuild.Requirements) { //The player already has this requirement... so continue... if (player.Buildings.ToList().Find(element => element.Type == requirement.Type && element.Level >= requirement.Level) != null) continue; neededBuildings.Add(requirement); } //if the list contains more than 0 elements, the player does not meet the requirements... Print the required things for the building. if (neededBuildings.Count > 0) { //a small counter to print the number of the element. int counter = 0; Console.WriteLine("Sorry you need those buildings in order to build the \"{0}\" : ", buildingTobuild.Type); Console.WriteLine(); foreach (var neededBuilding in neededBuildings) { Console.WriteLine("{0} - {1}", counter, ((Building) neededBuilding).Type); counter++; } } else { player.Money = player.Money - buildingTobuild.NeededMoney; Console.WriteLine("Successfully built the \"{0}\" your money now = {1}", buildingTobuild.Type ,player.Money); } } else { Console.WriteLine("Sorry the \"{0}\" needs {1} money to be built... You need {2} more", buildingTobuild.Type, buildingTobuild.NeededMoney, buildingTobuild.NeededMoney - player.Money); } Console.Read(); } I have commented everything so you know what is going on. Answer: It's hard to review code like this, because it's very obvious it's not "real" code. But I'll try: All your "classes" are basically just C structs. You should probably add some constructors and make some of the setters private.
You should also move some of the logic from Main() to instance methods on those classes. It feels wrong that you need to have a Building to know how much it costs and what its requirements are. It might make sense to have two separate classes for that, e.g. BuildingBlueprint and Building, or something like that, especially if you can have more than one building of the same type in the game. Array is a type that's natural for computers, but it often doesn't make much sense to have in your object model. If you want a mutable collection, use IList<T> (or at least List<T>). If you want a collection that can't be changed from the outside, use IEnumerable<T> (or IReadOnlyList<T>). If you continue working on this, you'll probably need specific behavior for each kind of Building. Be prepared to use derived classes for that. Don't use ArrayList, it's there just for backwards compatibility with .Net 1.0. Use List<T> (in your case List<Building>) instead. The loop could be written using LINQ as something like: var neededBuildings = buildingTobuild.Requirements.Where( required => !player.Buildings.Any( built => built.Type == required.Type && built.Level >= required.Level)) .ToList(); This is still O(n^2), but that probably doesn't matter. In any case, you should use better variable names, something specific like built is much better than the overly general element. Instead of using foreach with a counter, I think a for loop is usually better. You usually shouldn't print the value of an enum (like GlassHall) directly to the user, you should instead print it using normal English rules (like Glass hall). If you want to subtract some value from a property, you can use -=. If you separated your code into smaller methods (possibly on your classes, see #1), you could use early return to avoid deep nesting and code that's difficult to follow. 
E.g.: if (player.Money < buildingTobuild.NeededMoney) { Console.WriteLine("Sorry, the \"{0}\" needs {1} money to be built.", …); return; } // for the rest of the method, you know there is enough money
{ "domain": "codereview.stackexchange", "id": 4127, "tags": "c#, game" }
Precipitation of Cu2+ and Pb2+ ions
Question: Hi chemistry enthusiasts! I am working on my scholarship exam practice. I believe this exam assumes high school + first year university knowledge, although the standard may vary across the world. And I'm not quite sure what I did wrong. Could you please have a look? There is an aqueous solution containing Cu2+ and Pb2+ ions. The most suitable reagent to precipitate one of the two ions from the solution is (1) nitric acid (2) sodium carbonate (3) sulfuric acid (4) hydrogen sulfide I picked answer (4) hydrogen sulfide because I think that it will react with copper (II) ions and will precipitate out. However, the correct answer is (3) sulfuric acid. How is this answer correct, and what did I do wrong with hydrogen sulfide? Your advice will be much appreciated! Answer: If you read the question carefully, they wish to precipitate only one ion, either copper or lead. The facts that must be in your mind are: (i) Lead sulfate is insoluble in water, but copper sulfate is soluble. (ii) Both lead sulfide and copper sulfide are insoluble in water. So sulfuric acid precipitates only the lead ions, while hydrogen sulfide would precipitate both; that is why (4) fails the requirement to precipitate only one of the two ions.
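The selection logic can be encoded as a tiny lookup: a reagent is suitable exactly when it precipitates one of the two ions. A sketch (the sulfate and sulfide rows restate the answer's facts; the nitrate and carbonate rows are standard solubility-rule assumptions of mine, not stated in the answer):

```python
# True = adding the reagent precipitates that ion.
precipitates = {
    "nitric acid":      {"Cu2+": False, "Pb2+": False},  # all nitrates soluble (assumption)
    "sodium carbonate": {"Cu2+": True,  "Pb2+": True},   # both carbonates insoluble (assumption)
    "sulfuric acid":    {"Cu2+": False, "Pb2+": True},   # PbSO4 insoluble, CuSO4 soluble
    "hydrogen sulfide": {"Cu2+": True,  "Pb2+": True},   # both sulfides insoluble
}

def selective_reagents(table):
    """Reagents that precipitate exactly one of the two ions."""
    return [r for r, ions in table.items() if sum(ions.values()) == 1]

print(selective_reagents(precipitates))
```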
{ "domain": "chemistry.stackexchange", "id": 11811, "tags": "precipitation" }
Finding optimal decision threshold for binary comm system
Question: I'm trying to solve the following exercise: Given a system that transmits bit $b$ with probability $p_b$ and $-b$ with $p_{-b}$ and the transmission is degraded by AWGN $N(0,\sigma^2)$. What is the optimal decision threshold $\mu$ to minimize the probability of error? Use Q functions to express the probability of error. My approach: $P_e = P_b P(-b|b) + P_{-b} P(b|-b)$, where $P(-b|b)$ means probability of detecting $-b$ given that $b$ was transmitted. $P(-b|b)$ is the left tail of the distribution ($\Phi$) since I need to receive a value smaller than $\mu$ after sending $b$, but since the exercise asks for $Q$ -> $Q(-(\frac{\mu-b}{\sigma}))$ $P(b|-b)$ is already the right tail as I need to get something greater than $\mu$ after sending $-b$, so $Q(\frac{\mu+b}{\sigma})$. To find the optimal $\mu$, I took the derivative of $P_e$ with respect to $\mu$ and set it equal to 0. To facilitate notation, I will use $X = \frac{-(\mu-b)}{\sigma}$ and $Y = \frac{\mu+b}{\sigma}$. Now I have $P_1 e^{-\frac{X^2}{2}} = P_{-1} e^{-\frac{Y^2}{2}}$. After some manipulation and taking $\ln$ on both sides: $\ln(\frac{p_1}{p_{-1}}) = \frac{Y^2}{2} - \frac{X^2}{2} = \frac{2\mu b}{\sigma^2}$. So, $\mu = \ln(\frac{p_1}{p_{-1}}) \frac{\sigma^2}{2b}$ but the solutions I have don't have the $\sigma^2$ term, so I cannot find where I made a mistake. As for the derivative of the $Q$ function, it should be $-\Phi'$, right? Which ends up being the pdf up to sign, so $e^{-\frac{z^2}{2}}$ (omitting the normalization factor as they will cancel out on this particular question) Answer: Let $P(0)$ ($P(1)$) be the probability of transmitting bit zero (one); $P(e|0)$ ($P(e|1)$) be the probability of error when detecting bit zero (one). The probability of erroneous detection is $$ P(e) = P(e|0)P(0) + P(e|1)P(1) \tag {1} $$ Let $V_0$ ($V_1$) be the nominal signal voltage of the bit zero (one) signal at the transmitter.
$$ P(e|0) = \int_T^{\infty}{{\frac {1} {\sigma \sqrt{2\pi}}}\exp\left(-(\nu-V_0)^2/2 {\sigma}^2\right) d\nu } \\ P(e|1) = \int_{-\infty}^T{{\frac {1} {\sigma \sqrt{2\pi}}}\exp\left(-(\nu-V_1)^2/2 {\sigma}^2\right) d\nu } $$ where $T$ is the detection threshold (OP's $\mu$). Differentiating $P(e)$ of eq.1 w.r.t. $T$, we arrive at $$ -P(0){{\frac {1} {\sigma \sqrt{2\pi}}}\exp\left(-(T-V_0)^2/2 {\sigma}^2\right)} + P(1){{\frac {1} {\sigma \sqrt{2\pi}}}\exp\left(-(T-V_1)^2/2 {\sigma}^2\right)} \tag {2} $$ To find the optimal threshold, we equate the expression in eq.2 to zero: $$ P(0)\exp\left(-{\frac {(T-V_0)^2} {2 {\sigma}^2}}\right) = P(1)\exp\left(-{\frac {(T-V_1)^2} {2 {\sigma}^2}}\right) \\ T = {\frac {V_0+V_1} {2}} + {\frac {{\sigma}^2} {V_0-V_1}}\ln{\left(\frac {P(1)} {P(0)}\right)} $$ With $V_1 = b$ and $V_0 = -b$ this is $T = \frac{\sigma^2}{2b}\ln(P(0)/P(1))$, that is, the OP's result up to the sign of the log ratio (which just reflects which bit is labeled which). The OP's reference source may compute the optimal threshold for a system in which the probabilities of zero/one bit transmissions are equal. The variance ${\sigma}^2$ multiplies the log-ratio term, and that term disappears from the expression for the optimal threshold when the zero/one transmission probabilities are equal ($P(0)=P(1)=1/2$), leaving $T = (V_0+V_1)/2$.
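A numeric sanity check of the MAP threshold $T = \frac{V_0+V_1}{2} + \frac{\sigma^2}{V_0-V_1}\ln\frac{P(1)}{P(0)}$ (the signal level, noise level and priors below are illustrative values of mine, and the grid search is a brute-force stand-in, not part of the derivation):

```python
import math

def Q(x):
    """Gaussian tail probability Q(x) = P(N(0,1) > x)."""
    return 0.5 * math.erfc(x / math.sqrt(2))

def p_error(T, V0, V1, p0, p1, sigma):
    # Decide "1" when the received sample exceeds T (assumes V1 > V0).
    return p0 * Q((T - V0) / sigma) + p1 * Q((V1 - T) / sigma)

# Illustrative numbers: antipodal signalling at +/- b with unequal priors.
b, sigma, p0, p1 = 1.0, 0.5, 0.7, 0.3
V0, V1 = -b, b

closed_form = (V0 + V1) / 2 + sigma**2 / (V0 - V1) * math.log(p1 / p0)

# Brute-force minimum over a fine grid of candidate thresholds.
grid = [-3 + k * 1e-4 for k in range(60001)]
best = min(grid, key=lambda T: p_error(T, V0, V1, p0, p1, sigma))

print(f"closed form: {closed_form:.4f}  grid minimum: {best:.4f}")
```

Because the likelier symbol here is $-b$, the optimal threshold shifts toward $+b$, as the formula predicts.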
{ "domain": "dsp.stackexchange", "id": 10014, "tags": "digital-communications, noise, gaussian, self-study, thresholding" }
What causes an arrow to rotate?
Question: My intuition: In a system without air resistance, I would expect an arrow shot at an angle with its head pointing upwards to follow a ballistic trajectory without rotating relative to the horizon, because gravity can't induce torque about the body's center of mass. So the arrow will impact the ground on its tail, not its head. Is this correct? If yes, what causes the same arrow to impact the ground on its head when there is air resistance? Answer: Your intuition in your first paragraph seems correct. As to your question, if you think about the design of an arrow, the back of the arrow is much lighter (therefore more susceptible to being pushed around by air resistance) and designed such that the shaft of the arrow lines up with the tangent line of the arc drawn by the head of the arrow as it flies through the air. If you imagine freezing time at some point while the arrow is on its trajectory, the orientation of the tail that equalizes the air resistance forces on it is the one pointing directly backwards from the head of the arrow. Then note that the "backwards" direction changes as the arrow flies through the air under gravity, with the tail of the arrow following this change in direction. When the head of the arrow is on its way down towards the ground, the "backwards" direction points slightly up, depending on the angle at which the arrow was fired, hence the head lands before the tail.
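The rotation of the "backwards" direction can be made quantitative without modelling drag at all: if the drag-stabilized arrow simply tracks its velocity direction, its pitch follows the velocity angle of a drag-free point mass. A small sketch (launch numbers are arbitrary illustrations of mine):

```python
import math

# Assume the arrow's shaft tracks the velocity direction, as the
# drag-stabilization argument suggests; follow a drag-free point mass.
v0, launch_deg, g = 50.0, 45.0, 9.81
vx = v0 * math.cos(math.radians(launch_deg))
vy0 = v0 * math.sin(math.radians(launch_deg))
t_flight = 2 * vy0 / g  # time to return to launch height

for frac in (0.0, 0.25, 0.5, 0.75, 1.0):
    t = frac * t_flight
    vy = vy0 - g * t
    pitch = math.degrees(math.atan2(vy, vx))
    print(f"t = {t:5.2f} s  arrow pitch = {pitch:6.1f} deg")
```

The nose starts 45 degrees above the horizon and, by symmetry, comes back down pointing 45 degrees below it, which is why the head arrives first.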
{ "domain": "physics.stackexchange", "id": 98180, "tags": "newtonian-mechanics, torque, projectile, drag, aerodynamics" }
Why does water follow the spoon's surface?
Question: Why does water follow the spoon's surface? Is the reason surface tension, viscosity effects, or a pressure gradient? Answer: The relevant forces are adhesion (the attractive force between dissimilar molecules) and cohesion (the attractive force between similar molecules). Adhesion sticks the water molecules close to the metal-water interface to the metal. Cohesion sticks the water molecules far from the metal-water interface to the water molecules close to the metal-water interface. Surface tension (the bulk force difference between the air side of the air-water interface, where there is no cohesion force, and the water side of the air-water interface, where there is a cohesion force) plays a minor role: it keeps the stream roughly dome-shaped.
{ "domain": "physics.stackexchange", "id": 83084, "tags": "fluid-dynamics, flow, surface-tension, adhesion" }
Searching and comparing a record
Question: I am still a beginner in C++ and I am always curious about good/best coding methods. Let's say I have a program which allows a user to edit the salary of an employee. The system will prompt the user to key in an employee's name first. The system will then check whether or not the username exists. If the username exists, the system will then allow the user to change the employee's salary. The salary and the name of the person are stored in a text file. employeeInfo.txt (formatted by name and salary) john 1000 mary 2000 bob 3000 user.h #ifndef user_user_h #define user_user_h #include <iostream> class user { public: user(std::string userName,std::string salary); std::string getUserName(); std::string getSalary(); void setUserName(std::string userName); void setSalary(std::string salary); private: std::string userName,salary; }; #endif user.cpp #include "user.h" #include <iostream> #include <string> using namespace std; user::user(string userName,string salary) { setUserName(userName); setSalary(salary); }; string user::getUserName() { return userName; } string user::getSalary() { return salary; } void user::setUserName(std::string userName) { this->userName = userName; } void user::setSalary(std::string salary) { this->salary = salary; } main.cpp #include "user.h" #include <iostream> #include <string> #include <fstream> #include <sstream> #include <vector> #include <stdio.h> #include <string.h> using namespace std; int main(){ vector<user> userDetails; string line; string userName; string salary; ifstream readFile("employeeInfo.txt"); while(getline(readFile,line)) { stringstream iss(line); iss >> userName >> salary; //constructor user employeeDetails(userName, salary ); userDetails.push_back(employeeDetails); } readFile.close(); string name; cout << "Please enter a user name\n"; cin >> name; for (int i =0; i<userDetails.size(); i++) { //search if there's a match if (userDetails[i].getUserName() == name) { string newSalary; cout << "Please enter a new
salary" << endl; cin >> newSalary; userDetails[i].setSalary(newSalary); } } //display to see if the salary gets updated for (int i=0; i<userDetails.size(); i++) { cout << userDetails[i].getSalary() << "\n"; } } I am not really sure if my code (shown below) is the worst method to search for a record in a vector and match it against the input of the user. I would like to know whether there is any way to improve my code. I'd also like to gather some tips and ideas from you all to determine if this is the worst method to search for a record in a vector. //search for existing username for (int i =0; i<userDetails.size(); i++) { if (userDetails[i].getUserName() == name) { //do stuff } } Answer: Your question is surprisingly hard to answer, largely because you ask one question in your words, but a different question in your code. Your written question is whether looking up a value by sequentially scanning a vector is the best or worst way to do it. And the answer to that question is nuanced. I'll talk more about this in a bit. However your code shows a full scenario in which you read a file, look up and edit your single user, and display the full vector's contents. In this fuller context, the performance questions of the sequential scan almost definitely do not matter, as reading and displaying the whole vector will cost more time than you could save by having a smarter look-up. A more typical program might let you perform multiple edits, and then the time it took to look up each person might be more important. So that's another improvement to consider. Scanning a Vector Scanning a vector with a hand-rolled for loop is not generally considered the best way to do things. It's often much better to find an algorithm that does what you want, and use it. In this case we don't know anything particularly useful about the contents of your vector, so the best bet is going to be find or find_if.
To use find, you'll have to overload operator== in something, and pass it around; to use find_if you can instead create a helper function or overload operator(), or in C++11 you can pass a lambda function. Let's examine the middle option: struct user_by_name { user_by_name(string name) : name(name) {} bool operator()(user& user) { return user.getUserName() == name; } string name; }; : : : vector<user>::iterator iterFound = std::find_if(userDetails.begin(), userDetails.end(), user_by_name(name)); if (iterFound != userDetails.end()) { : : : } The nice thing about this approach is you can create ways to find users by other criteria, such as a user_by_salary, and substitute it in with almost no code change. The downside is that without making other changes, you'll never get better than "linear" performance - on average you'll always have to look through half of the items in your vector to find the one you need. Scanning faster If there's a natural ordering for the items in your vector, you have two main options. Both of them require implementing an operator<, and thus do change your user struct's usage patterns. And both of them scan faster by being able to skip past some items while finding the one you want. You can keep the items in the vector, but sort them by this ordering. This allows you to use an algorithm such as lower_bound to find them in a fraction of the time. You can store the items in a different data structure such as a std::map which makes similar use of the ordering to give you the same performance assistance that lower_bound does on the sorted vector. Choosing between these cases depends on how the data will be used. Again, with just a single edit in your main program, this is all overkill. But if you are going to have a long-running multi-edit scenario, especially one with thousands of employees being looked up by name and updated, it might be worth examining these options.
Other options If you were storing a realistic amount of data about each employee, had a realistic number of employees, and had a lot of tasks you wanted to perform on them, chances are you'd find storing their information in a database would make a lot more sense. Databases solve a lot of problems for you; not only do they support fast look-ups, they handle persistence of the data (loading and saving it as necessary) and, for the right scenarios, save you a lot of time and effort. Obviously for learning how to do some of the C++ programming that you show here, using a database may help you less (although learning how to use a database isn't a bad skill either). Other comments Finally I wanted to end this review with a list of more generic comments about your actual code. A lot of these things are details you don't need to care about yet, or that may have occurred just in trying to post your code here, but they may help you form better habits as you advance your C++ programming skills. using namespace std is frowned upon. It's utter anathema in a header file (which you didn't do), but it's also a risk in your cpp file. You're better off either adding the std:: prefixes, or adding using std::string; using std::vector; etc. However it's unlikely to matter in code like you show here. Your use of whitespace is inconsistent. You don't always put spaces after commas, and sometimes use newlines instead. In some places you use more blank lines than I would consider helpful. Also not going to really matter in code this short. Your property accessors (getUserName, getSalary) should be const as they should not visibly modify the object, and probably should return a const string& instead of a mutable copy. Your header should include <string> instead of <iostream>, and user.cpp can similarly drop its include of <iostream>; you don't appear to use anything from iostream in there.
Your main.cpp similarly has a lot of unused includes, but I would expect it's more of a testing file so won't harp on it. Your search loop will find multiple matches. If you have multiple employees named bruce, your code will stop at each of them and ask for a new salary. If this is not intentional, you can use break; to avoid this. Note that most of my commentary above assumes you will want to only update a single employee at a time. If there's no match, the program directly displays the output without any explanatory messages. That seems a little surprising to me, but perhaps it's intentional. There's not a lot of self-documentation in your code. You'll note the last two bullets I talk about what's intentional, and in particular that I'm uncertain what you meant for your code to do. Finding the right balance between variable names, function names, and comments will help resolve that in the future. This can start as simply as adding the description you gave in your post as a comment in your main.cpp. But all that said, I think you're starting off on the right foot. You show some solid understanding, and you also show interest in refining your skills. Congratulations on a good first post here!
{ "domain": "codereview.stackexchange", "id": 5565, "tags": "c++, beginner, search" }
Find and select image files from webpage
Question: For some reason, I feel like this is a bit messy and could be cleaner. Any suggestions? I'm selecting any image files ending in .png or .jpg, and removing any image source files that contains avatar (meaning it's a header image or an avatar). def self.images(url) proxy_addr = 'http://localhost:' proxy_port = 8080 doc = Nokogiri::HTML(open(url), proxy: "#{proxy_addr}#{proxy_port}") images = doc.css('img[src$="jpg"], img[src$="png"]').select do |uri| uri['src'] =~ %r{^http://(\d+|media)} end images.map { |uri| uri['src'] }.reject { |uri| uri =~ /avatar/ } end Answer: There is no point in hard-coding proxy_addr and proxy_port separately, then concatenating them using interpolation. On the other hand, it would be useful to make the proxy configurable. One way is to use a default parameter: def self.images(url, proxy='http://localhost:8080') doc = … end (Personally, I'd choose to have proxy default to nil, but the formulation above preserves compatibility with your existing code.) You appear to be using the open function that is provided by OpenURI. You have a bug: the proxy needs to be specified as an option to open rather than to Nokogiri#HTML. Furthermore, opening the stream like that makes it impossible to close it properly. I suggest using a block instead: def self.images(url, proxy=nil) open(url, proxy: proxy) do |io| doc = Nokogiri::HTML(io) … end end Your regular expression should be case-insensitive when it looks for http:, as recommended by RFC 3986 Sec 3.1: An implementation should accept uppercase letters as equivalent to lowercase in scheme names (e.g., allow "HTTP" as well as "http") for the sake of robustness but should only produce lowercase scheme names for consistency. Furthermore, hostnames are case-insensitive, as stated in RFC 1035 Sec 2.3.3 (though clarified by RFC 4343 Sec 2): For all parts of the DNS that are part of the official protocol, all comparisons between character strings (e.g., labels, domain names, etc.) 
are done in a case-insensitive manner. Also consider allowing URLs that start with https:. The parameter to your two blocks should not be named uri, but rather img, after the <img> tags that your CSS-style selector requested. The way you sequence select, map, and reject is awkward. I suggest doc.css('img[src$="jpg"], img[src$="png"]').map do |img| img['src'] end.select do |uri| uri =~ %r{^https?://(\d+|media)}i and uri !~ /avatar/ end
{ "domain": "codereview.stackexchange", "id": 8826, "tags": "ruby, web-scraping" }
Does the first step in electrophilic aromatic substitution follow equilibrium? If so, can equilibrium constant be calculated?
Question: Using chlorine (a Lewis base) and ferric chloride (a Lewis acid) as an example, the first step I saw in the textbook looks like this: The picture above depicts Step 1 in the EAS mechanism using chlorine as an example. I noticed that this step has two reversible arrows. I think it means chlorine, ferric chloride, the molecular complex and the ion pair will all be present in the system and their concentration values follow chemical equilibrium. From high school knowledge (I am a freshman), we can calculate an equilibrium constant using concentrations. Can we still do it in this case? If so, how? Answer: Let me put it this way: Every compound exists in a 3N-dimensional configuration space (these may be simple XYZ coordinates of the system or internal coordinates, depending on your choice), and based on the configuration, its potential energy is defined. The potential energy surface (PES) can be calculated quantum mechanically or classically using some force field. The probability of the system being found at a given set of coordinates is given by the distribution function defined according to statistical mechanics. For a canonical ensemble, the formula is given as (Z is the partition function): $$P(\vec{r}_{3N})= \frac{1}{Z}e^{-E(\vec{r}_{3N})/k_BT}$$ Now imagine this: a reaction between compounds containing M and N atoms is actually an exploration of the potential energy surface in 3(M+N) dimensions. Reactants, products and intermediates correspond to local minima of the PES, while a transition state is a maximum along one dimension and a minimum along all other dimensions. In other words, for a stable state (reactant, product, intermediate, etc.) all the frequencies (square roots of the second derivatives of the PES) are real, while for a transition state one of the frequencies is imaginary.
Now coming to your question: You will need to calculate the energetics of each step of the reaction considering the presence of solvent (this reaction probably happens in solvent), plug your values into the probability function of the appropriate statistical ensemble (if it is in solvent without release of gaseous products, the canonical ensemble should be good enough), and you will get an idea of the percentage of each intermediate you get. In most high school textbooks, a reaction marked with double arrows simply means the reactant, product (and in your case the intermediate) are not too different in terms of potential energy. For a single-headed arrow, it means that the difference between the potential energies of the minima (reactant and product) is significant and most of the compounds (in an ensemble) will be trapped in the lower minimum. In case you want to read more about it, I suggest the following books: Statistical Mechanics by McQuarrie Physical Chemistry by Atkins Physical Chemistry by Levine For more details about reaction dynamics, you may refer to: Molecular Reaction Dynamics by R. D. Levine
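As a toy numerical illustration of that recipe (the free-energy values below are invented, not measured data for any real Cl2/FeCl3 system): Boltzmann-weight the minima to get relative populations, and convert a free-energy difference for one step into an equilibrium constant via Delta G = -RT ln K.

```python
import math

R = 8.314      # gas constant, J/(mol K)
T = 298.15     # temperature, K

# Hypothetical free energies (kJ/mol) of the Step 1 species, relative
# to Cl2 + FeCl3. Purely illustrative numbers.
G = {"Cl2 + FeCl3": 0.0, "molecular complex": -5.0, "ion pair": -2.0}

def boltzmann_populations(G_kJ):
    """Relative populations of the listed minima in a canonical ensemble."""
    w = {s: math.exp(-1000 * g / (R * T)) for s, g in G_kJ.items()}
    Z = sum(w.values())  # partition function restricted to these minima
    return {s: wi / Z for s, wi in w.items()}

def equilibrium_constant(dG_kJ):
    """K for one elementary step, from Delta G = -RT ln K."""
    return math.exp(-1000 * dG_kJ / (R * T))

pops = boltzmann_populations(G)
K_complex = equilibrium_constant(G["molecular complex"] - G["Cl2 + FeCl3"])
print(pops)
print(f"K for complex formation ~ {K_complex:.2f}")
```

The deeper the minimum, the larger its population, which is the quantitative version of the single-arrow versus double-arrow convention described above.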
{ "domain": "chemistry.stackexchange", "id": 13927, "tags": "organic-chemistry, equilibrium, electrophilic-substitution" }
Decide whether a matrix's kernel contains any non-zero vector all of whose entries are -1, 0, or 1
Question: Given an $m$ by $n$ binary matrix $M$ (entries are $0$ or $1$), the problem is to determine if there exist two binary vectors $v_1 \ne v_2$ such that $Mv_1 = Mv_2$ (all operations performed over $\mathbb{Z}$). Is this problem NP-hard? It is clearly in NP as you can give two vectors as witnesses. Equivalently: Given $M$, is there a non-zero vector $v\in \{-1,0,1\}^n$ such that $Mv=0$? Equivalently: Given $n$ vectors $X=\{x_1,\dots,x_n\}$ over $\{0,1\}^m$, are there two different subsets $A,B \subseteq X$ such that $\sum_{x \in A} x = \sum_{x \in B} x$? Answer: I use the user17410 equivalent formulation: Input: $m$ vectors $X = \{ x_1, \dots, x_m \}$ over $\{0,1\}^n$; $m$ and $n$ are part of the input Question: Are there two different subsets $A,B \subseteq X$ such that $$\sum_{x \in A} x = \sum_{x \in B} x$$ The hardness proof involves many intermediate reductions that follow the same "chain" used to prove the hardness of the standard EQUAL SUBSET SUM problem: X3C $\leq$ SUBSET SUM $\leq$ PARTITION $\leq$ EVEN-ODD PARTITION $\leq$ EQUAL SUBSET SUM (I'm still checking it so it may be wrong :) STEP 1 The following problem (0-1 VECTOR SUBSET SUM) is NP-complete: given $X = \{ x_1, \dots, x_m \}$, $x_i$ vectors over $\{0,1\}^n$ and a target sum vector $t$, decide if there is $A \subseteq X$ such that $$\sum_{x \in A} x = t$$ Proof: Direct reduction from EXACT COVER BY 3-SETS (X3C): given a set of $n$ elements $Y = \{y_1,...,y_n\}$ and a collection $C$ of $m$ three-element subsets $C = \{C_1,...,C_m\}$, we build the corresponding 0-1 VECTOR SUBSET SUM instance setting $x_i[j] = 1$ if and only if element $j$ is included in $C_i$; $t = [1,1,...1]$. STEP 2 Finding two equal-sum subsets $A,B$ among $m$ 0-1 vectors over $\{0,1\}^n$ is equivalent to finding two equal-sum subsets $A,B$ of vectors $x_1 ... x_m$ with entries of bounded size, where $\max\{x_i\} = O((mn)^k)$ for fixed $k$.
For example the set of vectors: x1 2 1 0 1 x2 1 2 3 1 is equivalent to the 0-1 vectors: x1 1 1 0 1 1 0 0 0 0 1 0 0 0 0 1 0 0 0 0 0 0 0 1 1 0 0 0 ^ ^ +-- 0 elsewhere x2 1 1 1 1 0 0 1 0 0 0 1 1 0 0 0 0 1 0 0 0 1 0 0 0 0 0 1 0 0 0 0 0 0 1 1 1 ^ ^ ^ +-- 0 elsewhere Informally the 0-1 vectors are grouped (if you select one vector of the x2 group and add it to subset $A$, then you are forced to include in $A$ the other two and put the last in subset $B$) and the sums are done in unary (this is the reason why the corresponding non-binary vectors must contain elements that are polynomially bounded with respect to $mn$). So the following problem is NP-complete. STEP 3 The following problem (0-1 VECTOR PARTITION) is NP-complete: given $B = \{ x_1, \dots, x_m \}$, $x_i$ vectors over $\{0,1\}^n$, decide if $B$ can be partitioned into two subsets $B_1, B_2$ such that $$\sum_{x \in B_1} x = \sum_{x \in B_2} x$$ Proof: Reduction from 0-1 VECTOR SUBSET SUM: given $X = \{ x_1, \dots, x_m \}$ and the target sum vector $t$; let $S = \sum x_i$, we add to $X$ the following vectors: $b' = -t + 2S$ and $b'' = t + S\;$: $B = X \cup \{b',b''\}$. ($\Rightarrow$) Suppose that there exists $A \subseteq X$ such that $\sum_{x \in A} x= t$; we set $B_1 = A \cup \{b'\}$ and $B_2 =B \setminus B_1 = X \setminus \{A\} \cup \{b''\}$; we have $$\sum_{x \in B_1} x = b'+\sum_{x \in A} x = -t + 2S + t = 2S$$ $$\sum_{x \in B_2} x = b'' + \sum_{x \in X\setminus A} x = b'' + S - \sum_{x \in A} x=2S$$ ($\Leftarrow$) Suppose that $B_1$ and $B_2$ have equal sum. $b', b''$ cannot both belong to the same set (otherwise their sum is $\geq 3S$ and cannot be "balanced" by the elements in the other set). Suppose that $b' = -t + 2S \in B_1$; we have: $$-t +2S+ \sum_{x \in B_1 \setminus\{b'\}} x = t + S + \sum_{x \in B_2 \setminus\{b''\}} x$$ Hence we must have $\sum_{x \in B_1 \setminus\{b'\}} x = t$ and $B_1 \setminus\{b'\}$ is a valid solution for the 0-1 VECTOR SUBSET SUM.
We only allow 0-1 vectors in the set $B$, so vectors $b', b''$ must be "represented in unary" as shown in STEP 2. STEP 4 The problem is still NP-complete if the vectors are numbered $x_1,...,x_{2n}$ and the two subsets $X_1,X_2$ must have equal size and we require that $X_1$ contains exactly one of $x_{2i-1},x_{2i}$ for $1 \leq i \leq n$ (so, by the equal size constraint, the other element of the pair must be included in $X_2$) (0-1 VECTOR EVEN-ODD PARTITION). Proof: The reduction is from 0-1 VECTOR PARTITION and is similar to the reduction from PARTITION to EVEN-ODD PARTITION. If $X = \{x_1,...,x_m\}$ are $m$ vectors over $\{0,1\}^n$, replace each vector with two vectors over $\{0,1\}^{2n+2m}$: 1 2 n -------------------- x_i b_1 b_2 ... b_n becomes: 1 2 ... 2i ... 2m -------------------------- x'_2i-1 0 0 ... 1 ... 0 b_1 b_2 ... b_n 0 0 ... 0 x'_2i 0 0 ... 1 ... 0 0 0 ... 0 b_1 b_2 ... b_n Due to the $2i$ element, the vectors $x'_{2i-1}$ and $x'_{2i}$ cannot be contained in the same subset; and a valid solution to the 0-1 VECTOR EVEN-ODD PARTITION corresponds to a valid solution of the original 0-1 VECTOR PARTITION (just pick elements 2m+1..2m+n of each vector of the solution, discarding vectors that contain all zeros in those positions). STEP 5 0-1 VECTOR EQUAL SUBSET SUM (the problem in the question) is NP-complete: reduction from 0-1 VECTOR EVEN-ODD PARTITION similar to the reduction from EVEN-ODD PARTITION to EQUAL SUBSET SUM, as proved in Gerhard J. Woeginger, Zhongliang Yu, On the equal-subset-sum problem: given an ordered set $A = \{x_1,...,x_{2m}\}$ of $2m$ vectors over $\{0,1\}^n$, we build a set $Y$ of $3m$ vectors over $\{0,1\}^{2m+n}$. For every vector $x_{2i-1}, 1 \leq i \leq m$ we build a vector $y_{2i-1}$ over $\{0,1\}^{2m+n}$ in this way: 1 2 ... i i+1 ... m m+1 m+2 ... m+i ... 2m 2m+1 ... 2m+n ------------------------------------------------------ 0 0 ... 2 0 ...
0 0 0 1 0 x_{2i-1} For every vector $x_{2i}, 1 \leq i \leq m-1$ we build a vector $y_{2i}$ over $\{0,1\}^{2m+n}$ in this way: 1 2 ... i i+1 ... m m+1 m+2 ... m+i ... 2m 2m+1 ... 2m+n ------------------------------------------------------ 0 0 ... 0 2 ... 0 0 0 1 0 x_{2i} We map element $x_{2m}$ to 1 2 ... ... m m+1 m+2 ... . 2m 2m+1 ... 2m+n ------------------------------------------------------ 2 0 ... ... 0 0 0 1 x_{2m} Finally we add $m$ dummy elements: 1 2 ... ... m m+1 m+2 ... ... 2m 2m+1 ... 2m+n ------------------------------------------------------ 4 0 ... ... 0 0 0 0 0 ... 0 0 4 ... ... 0 0 0 0 0 ... 0 ... 0 0 ... ... 4 0 0 0 0 ... 0 Note again that vectors containing values $> 1$ can be represented in "unary" using a group of 0-1 vectors like showed in STEP 2. $Y$ has two disjoint $Y_1,Y_2$ subsets having equal sum if and only if $X$ has an even-odd partition.
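As a sanity check (not part of the proof, and with names of my own choosing), the STEP 3 reduction can be exercised by brute force on a tiny instance; note that here $b'$ and $b''$ are kept as integer vectors rather than re-encoded in unary:

```python
from itertools import combinations

def subset_with_sum(vecs, target):
    # Brute force: find an index set whose vectors sum componentwise to target.
    for r in range(len(vecs) + 1):
        for comb in combinations(range(len(vecs)), r):
            if all(sum(vecs[i][d] for i in comb) == target[d]
                   for d in range(len(target))):
                return set(comb)
    return None

def to_partition_instance(X, t):
    # The reduction: B = X + [b', b''] with b' = -t + 2S and b'' = t + S.
    S = [sum(col) for col in zip(*X)]
    b1 = [2 * s - ti for ti, s in zip(t, S)]
    b2 = [ti + s for ti, s in zip(t, S)]
    return X + [b1, b2]

# A 0-1 VECTOR SUM instance: the subset {x_0, x_2} hits the target t.
X = [[1, 0], [0, 1], [1, 1]]
t = [2, 1]
B = to_partition_instance(X, t)
total = [sum(col) for col in zip(*B)]   # componentwise sum = 4S
half = [v // 2 for v in total]          # = 2S, the balanced sum
sol = subset_with_sum(B, half)          # an equal-sum partition of B exists
```

As the proof predicts, any equal-sum partition found this way splits $b'$ and $b''$, and stripping $b'$ from its side leaves a subset of $X$ summing to $t$.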
{ "domain": "cstheory.stackexchange", "id": 2424, "tags": "cc.complexity-theory, np-hardness, linear-algebra" }
What kind of bug is this on my curtain?
Question: These bugs are frequently found hanging on a curtain next to my window. Answer: This is a carpet beetle, although identifying the exact species is difficult. Compare for example the varied carpet beetle, Anthrenus verbasci: from https://en.wikipedia.org/wiki/Varied_carpet_beetle#/media/File:Dermestidae_-_Anthrenus_verbasci.JPG
{ "domain": "biology.stackexchange", "id": 11444, "tags": "species-identification, entomology" }
Why is it nearly impossible to have an approximation algorithm for the Maximum Clique problem?
Question: I read a theorem which states that: If there exists a polynomial time approximation algorithm for solving the Maximum Clique problem (or the Maximum Independent Set problem) for any constant performance ratio r, then NP = P. But I never understood the reasoning behind this!! Answer: In fact, something stronger is true: if you can approximate maximum clique within $n^{1-\epsilon}$ for some $\epsilon > 0$, then P = NP. This is because for every $\epsilon > 0$ there is a polytime reduction $f_\epsilon$ that takes an instance $\varphi$ of SAT and returns an instance $(G,cn)$ of maximum clique such that: If $\varphi$ is satisfiable then $G$ has a $cn$-clique. If $\varphi$ is not satisfiable then $G$ has no $cn^{1-\epsilon}$-clique. If you could approximate maximum clique within $n^{1-\epsilon}$ you would be able to distinguish the two cases (exercise), and so decide whether $\varphi$ is satisfiable or not. The reduction uses the PCP theorem as a first ingredient. Given the PCP theorem it is not hard to give a similar reduction with a constant gap, and with some effort to give a reduction with a gap of $n^\epsilon$ for some $\epsilon > 0$. The reduction claimed above, which has a gap of $n^{1-\epsilon}$ for every $\epsilon>0$, is much harder. See lecture notes of Guruswami and O'Donnell for the constant gap, and lecture notes of Scheideler for the $n^\epsilon$ gap.
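To make the "(exercise)" step concrete, here is a sketch of the threshold argument in code; the function names and the simulated numbers are mine, and `alg` stands in for the hypothetical $n^{1-\epsilon_0}$-approximation run on the gap instance:

```python
def decide_sat(alg, n, c, eps0):
    # Run a hypothetical n^(1-eps0)-approximation `alg` on the gap
    # instance produced by the reduction f_eps with eps = 1 - eps0/2.
    eps = 1 - eps0 / 2
    clique = alg(n, c, eps)
    # satisfiable:   OPT >= c*n, so clique >= c*n / n**(1-eps0) = c * n**eps0
    # unsatisfiable: clique <= OPT < c * n**(1-eps) = c * n**(eps0/2)
    # The two ranges are disjoint for n > 1, so threshold anywhere in between:
    return clique >= c * n ** (0.75 * eps0)

# Simulated extremes of what such an approximation could legally return:
n, c, eps0 = 10**6, 0.5, 0.5
worst_sat  = lambda n, c, eps: c * n / n ** (1 - eps0)    # ratio met exactly
best_unsat = lambda n, c, eps: 0.99 * c * n ** (1 - eps)  # just under the gap
```

With these values, `decide_sat(worst_sat, ...)` answers satisfiable and `decide_sat(best_unsat, ...)` answers unsatisfiable, which is exactly how an approximation with too good a ratio would collapse SAT into P.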
{ "domain": "cs.stackexchange", "id": 5706, "tags": "complexity-theory, np-complete, approximation" }
Simplistic flash card web-app
Question: Feedback I would love to hear: Since this is really my first real JavaScript app, being that I can never find a project I want to actually work on, I want to know how I can improve my JS techniques. Am I doing a good job separating logic? Am I using good techniques? Should I completely forget about vanilla JS and just go straight into using jQuery or other libraries? Is it good that I used the card manager as a literal? Anything and everything! I'd even appreciate some feedback on how I set up the HTML or if you would set it up a different way! Anyhow, this is a snippet of what I have written so far, and you can find the source at https://ide.c9.io/lemony_andrew/flashcardapp Here's the working demonstration: https://flashcardapp-lemony-andrew.c9.io/flashcard.html

    <body>
        <div id="newCards">
            <center>
                Front: <input type="text" id="newFront" name="front"/>
                Back: <input type="text" id="newBack" name="back"/>
                <input value="Add" type="button" onclick="userEnter();"/>
            </center>
        </div>
        <center><h1>French Demonstration</h1></center>
        <div id="cardButton"><p id="cardText"></p></div>
        <p align="center">
            <input type="button" id="prevCard" value="previous" onclick="cardsHandle.cardMove(-1);"/>
            <span id="positionIndex">0/0</span>
            <input type="button" id="nextCard" value="next" onclick="cardsHandle.cardMove(1);"/>
        </p>
        <script>
            String.prototype.isEmpty = function() { // Returns if a string has only whitespace
                return (this.length === 0 || !this.trim());
            };

            function Card(front, back) {
                /* A card is just a container that holds a front and back value!
                   - You can get either back or front by displaying it */
                this.frontVal = front;
                this.backVal = back;
                this.display = function(side) {
                    if (side === 0) {
                        return this.frontVal;
                    } else {
                        return this.backVal;
                    }
                };
            }

            var cardsHandle = {
                cards: [],
                cardInd: 0,
                cardButton: document.getElementById("cardButton"),
                cardText: document.getElementById("cardText"),
                cardTPosition: document.getElementById("positionIndex"),
                cardSide: 0,
                cardAdd: function(back, front) {
                    this.cards.push(new Card(back, front));
                },
                cardUpdate: function() {
                    var curCard = this.cards[this.cardInd];
                    this.cardText.innerHTML = curCard.display(this.cardSide);
                    this.cardTPosition.innerHTML = (this.cardInd + 1) + "/" + this.cards.length;
                },
                cardFlip: function() {
                    this.cardSide = (this.cardSide + 1) % 2;
                },
                cardMove: function(moveBy) {
                    this.cardInd += moveBy;
                    if (this.cardInd < 0) {
                        this.cardInd += this.cards.length;
                    }
                    this.cardInd = this.cardInd % this.cards.length;
                    this.cardSide = 0; // Set back to front
                    this.cardUpdate();
                },
                cardTap: function() {
                    this.cardFlip();
                    this.cardUpdate(); // Display card
                }
            };

            cardsHandle.cardAdd("Hello or Good bye", "Salut");
            cardsHandle.cardAdd("Hello or Good Morning", "Bonjour");
            cardsHandle.cardAdd("Good Night", "Bonne nuit!");
            cardsHandle.cardUpdate();

            var userEnter = function() {
                var nFront = document.getElementById("newFront"),
                    nBack = document.getElementById("newBack");
                if (nFront.value.isEmpty() || nBack.value.isEmpty()) return;
                cardsHandle.cardAdd(nFront.value, nBack.value);
                nFront.value = "";
                nBack.value = "";
                cardsHandle.cardUpdate();
            };

            cardsHandle.cardButton.addEventListener('click', function() {
                cardsHandle.cardTap();
            });
        </script>
    </body>

Program Overview: Basically my program has three parts. The first is the container that just holds and displays the front and back values of a card. The second is the card handler: its purpose is to manage what is put on the screen and how the index of cards is sorted/manipulated.
The third functionality is simply having the user be able to add in cards. Why I'm making this: I'm working on making a completely free and completely open-source flash-card web-app. The reason I'm making it is because I find that most flash card apps are not simple enough (just not enjoyable to work with), don't have a good folder (organizing) system, and are just not free. This is just the beginning of my application, and it will essentially be my first real JavaScript work. I don't normally use JavaScript, in fact it's been nearly 4 months since I touched it, so this is a good refresher for me and will hopefully be a part of my future portfolio. (I also hope it will be my foot in the door for bigger contributor communities like GitHub.) Answer: HTML:

Don't use the center element or the align attribute. Both are obsolete in HTML5. Use CSS instead (e.g., text-align: center;).

Use label elements for your form fields (except for the submit buttons), e.g.:

    <label for="newFront">Front:</label>
    <input type="text" id="newFront" name="newFront">

You could (i.e., it's not required) use a fieldset to group the elements of the form for creating new flashcards, and give them a name (e.g., "Create new flashcard").

You could use article for each flashcard.

Accessibility:

Currently it doesn't seem possible to use this web app with keyboard only. While you can create new flashcards and navigate existing cards, you can't reveal the back of a card. Possible solution: Add a "Flip" (or similar) button.
{ "domain": "codereview.stackexchange", "id": 11798, "tags": "javascript, html, quiz" }
What's the difference between spatial and temporal resolution?
Question: I am trying to understand how super-resolution works. But I think I have not correctly understood the difference between the optical resolution (spatial resolution?) and the resolution I know from a simple signal. I mean, I can use a higher sampling rate to improve resolution. Does this also affect images in the same way? And what about aliasing? Is aliasing in images different than in a 1-D signal? Here is a nice paper from Google where they apply super-resolution on the Google Pixel camera. They write: Super-resolution techniques reconstruct a high-resolution signal from multiple lower resolution representations. They use multiple pictures taken at once, and use pixel shifts resulting from hand motion; the pixel shifts and aliasing in the signal are used with kernel regression and other algorithms to reconstruct a higher resolution. "The input must contain multiple aliased images, sampled at different subpixel offsets. This will manifest as different phases of false low frequencies in the input frames" Is there a difference in applying this technique with aliasing and multiple frames on 1-D signals? How can I understand subpixel shifts applied to a 1-D signal? Answer: A good 1-D example of this is the foundation of the FFT algorithm, in how an $N$ length DFT can be created from two $N/2$ length DFTs. If you look under the hood of this, we are increasing the resolution through multiple copies of a time domain signal, each sampled at a different offset, with each copy containing the low frequency content as well as the aliasing of the high frequencies. The beauty is in the combining, such that we can recover the low frequencies by adding the two FFTs and the high frequencies by subtracting the two FFTs (with an appropriate phase adjustment in frequency of one of the two before combining, to compensate for the 1 sample shift in the time domain).
Let me demonstrate with formulas and graphics. Given the general formula for an $N$ length DFT:

$$ X[k] = \sum_{n=0}^{N-1}x[n]W_N^{nk}$$

where $W_N^{nk}$ are the "roots of unity" phase rotations on a unit circle, $e^{-j2\pi nk/N}$. As further detailed in Cooley and Tukey's famous 1965 paper https://www.ams.org/journals/mcom/1965-19-090/S0025-5718-1965-0178586-1/ the equation above for an $N$ point DFT can be calculated from two $N/2$ point DFTs as:

$$ X[k] = \sum_{r=0}^{N/2-1}x[2r]W_{N/2}^{rk} + W_N^k\sum_{r=0}^{N/2-1}x[2r+1]W_{N/2}^{rk}$$

Here is what you can observe about each of the two DFTs and how it relates to the OP's question:

Point 1: The first DFT is a DFT of all the even samples in x[n].

Point 2: The second DFT is a DFT of all the odd samples in x[n].

Point 3: The frequency response of $W_N^k$ has a magnitude of 1 for all frequencies and a phase that increases negatively from 0 to $2\pi$ as the frequency goes from 0 to $F_s$, where $F_s$ is the sampling rate (normalized radian frequency of $2\pi$, or normalized frequency in cycles/sample of 1). This is exactly the same as the frequency response of $z^{-1}$, a unit delay of one sample (at the sample rate of x[n])!

First, consider points 1 and 2 with the resulting even and odd samples aligned in time. What we have done is decimate the sequence by two in the even case; in the odd case we have (non-causally) advanced x[n] by one sample and repeated the same decimate-by-two operation. The transfer function of $z^{+1}$ has a magnitude of 1 for all frequencies, but notably it advances the phase linearly from $0$ to $2\pi$ as we advance through all frequencies up to the sampling rate. Next consider a digital spectrum so we can see how aliasing is handled in this case. The graphic below depicts a real spectrum where different symbols are used to differentiate the low frequency from the high frequency components in the first Nyquist zone extending from $0$ to $F_s/2$.
The DFT of $N$ samples would return the block extending from $0$ to $F_s$ (with $mF_s$ actually cyclically repeating as bin 0 for all integers $m$). Note what occurs to this spectrum when we compare the spectrum directly to the spectrum after it has gone through the $z^{+1}$ operation (we could equally say the bottom is the direct path and the top one goes through $z^{-1}$, since that could actually be implemented, but this is consistent with us aligning the output of the DFT result with the input without regard to processing delay, and it will then be consistent with the final formula; so at this point it is just math). The bottom plot doesn't quite show it since I couldn't draw a 3d spiral, but the phase shift is such that at the halfway point it will be $\pi$ or 180°, representing a complete inversion of the spectrum such that it will be completely out of phase with the upper one; then as it extends to the upper end it has spun around 360°, so the spectrum at that point is back in perfect phase alignment with the upper one.

Next we see what happens when we decimate by 2 and how aliasing is created. When we sample any signal, all the spectrum around $mF_s$ for any integer $m$ is mapped via aliasing to $F=0$. So if we resample (which occurs when we select every other sample or decimate by 2), the same thing occurs; we have just created a new sampling rate. Most of the time when we decimate properly we are sure to low-pass filter the signal first to eliminate anything in the middle of the spectrum that would alias (so true decimation is low-pass filtering and down-sampling; here we are only down-sampling). In our case we want those images, as we will be able to separate them with proper recombining. The graphic below shows the recombining, where we recover the low frequency portion by summing the two and the high frequency portion by subtracting.
The phase rotator $W_N^k$ in the summation undoes the rotation that was innate in the even/odd FFT processing, and because of the difference in the rotation of the upper spectrum, the aliasing can be isolated through adding and subtracting as depicted in the graphics. The subtraction occurs since each DFT here is only $N/2$ long and $W_N^k = -W_N^{k+N/2}$. So note, specific to your question, that this was a 1-D example of two lower-resolution samplings of the same data set with an offset between the sampling instants, and with them we are able to create the higher-resolution data set (simply by interleaving the even and odd samples of course, but I believe this FFT view helps us see how the aliases are impacted and used to help create the higher frequency components).
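The even/odd recombination described above is easy to verify numerically. A minimal NumPy sketch (my own illustration, mirroring the radix-2 formula; not code from the referenced paper):

```python
import numpy as np

rng = np.random.default_rng(7)
N = 16
x = rng.standard_normal(N)      # the "high resolution" sequence

# Two low-resolution captures: the even samples, and the odd samples
# (a one-sample, subpixel-style offset). Each N/2-point DFT contains the
# low frequencies plus the aliased high frequencies.
E = np.fft.fft(x[0::2])
O = np.fft.fft(x[1::2])

# Recombine: the twiddle W_N^k compensates the one-sample shift; the sum
# recovers the low half of the spectrum, the difference the high half.
k = np.arange(N // 2)
W = np.exp(-2j * np.pi * k / N)
X_rec = np.concatenate([E + W * O, E - W * O])

# X_rec matches the direct N-point DFT of the full-rate signal.
```

The reconstruction agrees with `np.fft.fft(x)` to floating-point precision, which is the 1-D analogue of combining aliased, subpixel-shifted frames into one higher-resolution result.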
{ "domain": "dsp.stackexchange", "id": 8618, "tags": "sampling, nyquist, image-registration, superresolution" }
Why is building a heap $\mathcal O(n)$ and not $\theta(n)$?
Question: From what I see online, all seem to suggest that heapifying takes $\mathcal O(n)$ time, but it seems like it should always take $\Theta(n)$ time, even in the best case. Is something wrong with my pseudocode? Is there a more optimized way to do heapification?

    Heapification(H[1...n])
        for i <- floor(n/2) downto 1 do
            k <- i; v <- H[k]
            heap <- False
            while !heap && 2k <= n do
                j <- 2k
                if j <= n
                    if H[j] < H[j+1]
                        j <- j+1
                if H[j] > v
                    H[k] <- H[j]
                    k <- j
                else
                    heap <- true
            H[k] <- v

Answer: Time complexity of algorithms is usually given as big O. There are many reasons for this, including tradition, but one pertinent reason is that in some cases the time complexity depends on the input. For example, a sorting algorithm might run in $O(n\log n)$ in the worst case, but in $O(n)$ in the best case, and so it is not correct to say that it runs in $\Theta(n\log n)$. In your particular case, your algorithm runs not only in $O(n)$ but also in $\Theta(n)$. Stating that it runs in $O(n)$ doesn't rule out a matching lower bound; it just doesn't tell the complete story.
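For reference, here is a runnable version of the question's pseudocode (with the inner bound tightened to `j < n` so that `H[j+1]` is never read out of range, which is the one bug in the pseudocode as written). Note that the outer loop always executes floor(n/2) iterations whatever the input, which is the intuition behind the matching Omega(n) lower bound:

```python
def heapify(H):
    # Bottom-up max-heapify; H[0] is unused, so children of i are 2i and 2i+1.
    n = len(H) - 1
    for i in range(n // 2, 0, -1):      # always n//2 iterations -> Omega(n)
        k, v = i, H[i]
        done = False
        while not done and 2 * k <= n:
            j = 2 * k
            if j < n and H[j] < H[j + 1]:
                j += 1                   # pick the larger child
            if H[j] > v:
                H[k] = H[j]              # sift the hole down
                k = j
            else:
                done = True
        H[k] = v
    return H

H = heapify([None, 3, 1, 4, 1, 5, 9, 2, 6])
# Afterwards every internal node dominates its children (max-heap property).
```

The upper bound is the interesting half: summing the sift-down costs level by level gives $\sum_h \lceil n/2^{h+1}\rceil \, O(h) = O(n)$, so together with the trivial lower bound the running time is $\Theta(n)$.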
{ "domain": "cs.stackexchange", "id": 14877, "tags": "asymptotics, heaps, big-o-notation" }
Nearest point using 2D tree, static variable for the closest point so far
Question: This is a problem from a Coursera class: Write a data type to represent a set of points in the unit square (all points have x- and y-coordinates between 0 and 1) using a 2D tree to support efficient nearest neighbor search (find a closest point to a query point). I implemented the class with the pre-set API. The problem for me was to efficiently realize the pruning rule: To find a closest point to a given query point, start at the root and recursively search in both subtrees using the following pruning rule: if the closest point discovered so far is closer than the distance between the query point and the rectangle corresponding to a node, there is no need to explore that node (or its subtrees). So I need to store the best tuple (closest point, minimum distance) found so far. For this purpose I think a static class variable is not the best choice (it is needed only for this one method). I also don't like my solution for this: I made a nested static class, pointDist, with static variables. What is a more optimal/elegant solution?

    public class KdTree {
        private Node root;
        private int size;

        public KdTree() {
            root = null;
            size = 0;
        }

        // ... other class methods

        private static final class pointDist {
            private static Point2D p; // closest point
            private static double d;  // min distance

            private static void begin() {
                p = null;
                d = Double.POSITIVE_INFINITY;
            }
        }

        private void nearest(Point2D p, Node x) {
            if (x == null) return;
            // code with p, x, pointDist.p and pointDist.d variables
            // ...
        }

        public Point2D nearest(Point2D p) {
            assert size() > 0;
            pointDist.begin();
            nearest(p, root);
            return pointDist.p;
        }

        private static class Node {
            private Point2D p;   // the point
            private RectHV rect; // the axis-aligned rectangle corresponding to this node
            private Node lb;     // the left/bottom subtree
            private Node rt;     // the right/top subtree

            public Node(Point2D pp, Node prevN, int caseCut) {
                p = pp;
                lb = null;
                rt = null;
                // root case
                if (caseCut == -1) rect = new RectHV(0, 0, 1, 1);
                else {
                    RectHV prevR = prevN.rect;
                    // the rest of the code here is to define rect based on values of
                    // caseCut variable: 0,1,2,4 - cuts from left, right, bottom and top
                    // ...
                }
            }
        }
    }

Answer: You're right in sensing there has to be a better way to do this. Right now, you're setting static fields from within instance methods, which is rarely a good idea. Luckily, your code is already sufficiently structured that fixing this requires very little work. Explicitly pass an instance containing a Point2D and double as a parameter in the nearest method. This keeps your algorithm as it is, but it limits the scope of changes so that they stay within local execution:

    private static final class PointDistance {
        // note that these are not static
        Point2D p;
        double d = Double.POSITIVE_INFINITY; // start with "no point found yet"
    }

    private void nearest(Point2D p, Node x, PointDistance closest) {
        // replace pointDist with closest here
    }

    public Point2D nearest(Point2D p) {
        final PointDistance closest = new PointDistance();
        nearest(p, root, closest);
        return closest.p;
    }
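Language aside (the review itself is about Java), the pruning rule is compact when the (closest point, min distance) pair is threaded through the recursion as a value rather than stored statically. A Python sketch of the idea (a simplified kd-tree that prunes on the distance to the splitting plane instead of the full rectangle; all names are mine):

```python
import math

def build(points, depth=0):
    # Build a kd-tree node: (point, splitting axis, left subtree, right subtree).
    if not points:
        return None
    axis = depth % 2
    points = sorted(points, key=lambda p: p[axis])
    mid = len(points) // 2
    return (points[mid], axis,
            build(points[:mid], depth + 1),
            build(points[mid + 1:], depth + 1))

def nearest(node, q, best=(None, float("inf"))):
    # best is the (closest point, min distance) pair, passed and returned
    # as a value instead of living in a static field.
    if node is None:
        return best
    point, axis, left, right = node
    d = math.dist(point, q)
    if d < best[1]:
        best = (point, d)
    near, far = (left, right) if q[axis] < point[axis] else (right, left)
    best = nearest(near, q, best)
    # prune: explore the far side only if the splitting plane is closer
    # to the query than the best distance found so far
    if abs(q[axis] - point[axis]) < best[1]:
        best = nearest(far, q, best)
    return best

pts = [(0.1, 0.2), (0.8, 0.9), (0.5, 0.5), (0.3, 0.7), (0.9, 0.1)]
tree = build(pts)
p, d = nearest(tree, (0.6, 0.4))
```

Returning the pair (as here) and passing a mutable holder (as in the Java answer) are two flavors of the same fix; both keep the search state local to one call of `nearest`.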
{ "domain": "codereview.stackexchange", "id": 10770, "tags": "java, beginner, algorithm, coordinate-system" }
In which direction is the acceleration directed in a non-uniform circular motion?
Question: Acceleration is directed towards the center of the circle in a uniform circular motion. Is it the same for non-uniform circular motion? Answer: Assuming that by non-uniform circular motion you mean moving in a circle but at changing speed, then this does not conserve angular momentum, so it cannot happen in a central field. There must be some component of the force tangential to the radius, and therefore some component of the acceleration that is not central. This happens for satellites orbiting the Earth, and indeed the Moon (as the GRAIL satellites have found), because the Earth, the Moon and presumably the vast majority of other bodies are not spherically symmetric, so the field felt by an object orbiting close to them is not central.
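For completeness, the standard decomposition (not spelled out in the answer above): for motion on a circle of radius $r$ at speed $v(t)$, the acceleration has a centripetal component $v^2/r$ toward the center and a tangential component $dv/dt$ along the velocity; only for uniform motion does the tangential part vanish. A small numeric check, using the trajectory $\theta(t) = t^2$ as my own example (so $v = 2Rt$ and the derivatives below are worked out by hand from $p = R(\cos\theta, \sin\theta)$):

```python
import numpy as np

R = 2.0                              # circle radius
t = 1.3                              # sample time
theta = t ** 2                       # angle grows quadratically -> speed changes
s, c = np.sin(theta), np.cos(theta)

# velocity and acceleration of p = R*(cos(t^2), sin(t^2))
vel = np.array([-2 * R * t * s, 2 * R * t * c])
acc = np.array([-2 * R * s - 4 * R * t**2 * c,
                 2 * R * c - 4 * R * t**2 * s])

v = np.linalg.norm(vel)              # speed = 2*R*t
t_hat = vel / v                      # tangential unit vector
n_hat = np.array([-c, -s])           # inward radial unit vector

a_tan = acc @ t_hat                  # equals dv/dt = 2*R (the speed-up)
a_rad = acc @ n_hat                  # equals v**2 / R (centripetal part)
```

Both components are nonzero here, so the total acceleration points somewhere between "toward the center" and "along the motion", which is exactly the non-central behavior the answer describes.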
{ "domain": "physics.stackexchange", "id": 6117, "tags": "classical-mechanics" }