Why doesn't modulo-2 arithmetic over n bits produce a single-bit result?
Question: I was studying CRC and came across modulo-2 arithmetic. When we add two 1-bit numbers like 1 + 1 or 0 + 1, the result is the sum modulo 2, which is the same as XORing the two bits. My doubt is: when this is extended to multiple bits, why is the result not the sum modulo 2, i.e. a single-bit result, either 0 or 1? Instead it is defined as the XOR of the two n-bit numbers to be added. Modulo-2 addition should be the sum modulo 2, right? Answer: There is a difference between applying the modulus to the entire integer and applying it to the individual bits. The first would have the effect you describe here: $1010_b + 0101_b \equiv 1_b \pmod 2$. However, that is not always very useful. Bitwise application of the modulus can give you arithmetic in finite fields. In particular, bitwise addition of $k$-bit binary integers can be extended into the finite field $\mathbb{F}_{2^k}$. Much of the theory of cryptography and coding theory relies on finite fields, both because arithmetic in them tends to be more computationally challenging (for crypto, this means we can design encryption methods that are hard to decrypt; for coding theory, it means there is something complicated to study ;) ), especially compared with infinite fields such as the real numbers, $\mathbb{R}$. If all this went over your head, just remember that there is a difference between the bitwise and the ordinary modulus, and that the bitwise modulus has a well-known and solid theoretical foundation as well.
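The two readings of "modulo 2" can be shown in a few lines of Python (a toy sketch, not part of the original answer):

```python
# Whole-integer modulus: reduce the entire sum mod 2, leaving one bit.
a, b = 0b1010, 0b0101
whole = (a + b) % 2      # 10 + 5 = 15, and 15 mod 2 = 1

# Bitwise modulus: each bit position is summed mod 2 independently,
# which is exactly carry-free addition, i.e. XOR.
bitwise = a ^ b          # 1010 XOR 0101 = 1111

print(whole)    # 1
print(bitwise)  # 15, i.e. binary 1111
```

Note that bitwise addition of a value to itself always gives 0 ($x \oplus x = 0$), which is the characteristic-2 behavior the finite-field constructions above build on.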
{ "domain": "cs.stackexchange", "id": 19026, "tags": "modular-arithmetic" }
Why is potential difference across the ends of a conductor equal to the product of electric field and length of the conductor?
Question: Why is the potential difference across the ends of a conductor equal to the product of the electric field and the length of the conductor? Explain this considering I am in high school. Answer: This basically comes down to the definition of potential difference. Potential difference is the work it would take to move a test charge from one point to another, per unit of charge. And you probably know that the work done when moving something is equal to the force you exerted on it times the distance you moved it. In symbols $$W = F\ell$$ where $W$ is work done, $F$ is force, and $\ell$ is the distance you moved it. The electric field tells you the force per unit charge on a test charge, so the force on a charge $q$ is $F = Eq$. Then, for an electric charge moving in an electric field: $$W = Eq\ell$$ Since the potential energy you added to the system is equal to the work you did on it, you can express the definition of potential difference as $$V = \frac{W}{q}$$ so $$V = E\ell$$
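As a quick numeric sanity check of this chain of reasoning (the field strength, length, and charge below are made-up values):

```python
E = 200.0    # electric field in V/m (hypothetical value)
l = 0.05     # conductor length in m (hypothetical value)
q = 1.6e-19  # test charge in C

F = E * q    # force on the test charge, F = Eq
W = F * l    # work done moving it the length of the conductor, W = F*l
V = W / q    # potential difference, V = W/q

# The charge cancels out, leaving V = E*l = 10 V here.
assert abs(V - E * l) < 1e-9
```

The cancellation of $q$ is the whole point: the potential difference depends only on the field and the length, not on which test charge you imagine moving.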
{ "domain": "physics.stackexchange", "id": 69529, "tags": "electricity, electric-current" }
Has there been research done regarding processing speech then building a "speaker profile" based off the processed speech?
Question: Has there been research done regarding processing speech then building a "speaker profile" based off the processed speech? Things like matching the voice with a speaker profile and matching speech patterns and wordage for the speaker profile would be examples of building the profile. Basically, building a model of an individual based solely off speech. Any examples of this being implemented would be greatly appreciated. Answer: Yes, there is. A quick search found this: Multimodal Speaker Identification Based on Text and Speech (2008). In the abstract, they write This paper proposes a novel method for speaker identification based on both speech utterances and their transcribed text. The transcribed text of each speaker’s utterance is processed by the probabilistic latent semantic indexing (PLSI) that offers a powerful means to model each speaker’s vocabulary employing a number of hidden topics, which are closely related to his/her identity, function, or expertise. Melfrequency cepstral coefficients (MFCCs) are extracted from each speech frame and their dynamic range is quantized to a number of predefined bins in order to compute MFCC local histograms for each speech utterance, which is time-aligned with the transcribed text. Two identity scores are independently computed by the PLSI applied to the text and the nearest neighbor classifier applied to the local MFCC histograms. It is demonstrated that a convex combination of the two scores is more accurate than the individual scores on speaker identification experiments conducted on broadcast news of the RT-03 MDE Training Data Text and Annotations corpus distributed by the Linguistic Data Consortium. Under figure 2, they write Identification rate versus Probe ID when 44 speakers are employed. Average identification rates for (a) PLSI: 69%; (b) MFCCs: 66%; (c) Both: 67%. 
In section 4, they write To demonstrate the proposed multimodal speaker identification algorithm, experiments are conducted on broadcast news (BN) collected within the DARPA Efficient, Affordable, Reusable Speech-to-Text (EARS) Program in Metadata Extraction (MDE). If you need more related papers, you could use a tool like https://the.iris.ai/ to find them.
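The fusion step the abstract describes (a convex combination of the two identity scores) is simple to sketch; the scores, weight, and speaker count below are made up for illustration:

```python
import numpy as np

def fuse_scores(text_scores, audio_scores, alpha=0.5):
    """Convex combination of two per-speaker identity score vectors.

    alpha weighs the text-based (PLSI) scores and (1 - alpha) the
    speech-based (MFCC-histogram) scores.
    """
    text_scores = np.asarray(text_scores, dtype=float)
    audio_scores = np.asarray(audio_scores, dtype=float)
    return alpha * text_scores + (1.0 - alpha) * audio_scores

# Toy example with three candidate speakers: text alone prefers
# speaker 0, audio alone prefers speaker 2.
text = [0.5, 0.3, 0.2]   # hypothetical PLSI identity scores
audio = [0.1, 0.2, 0.7]  # hypothetical MFCC identity scores
fused = fuse_scores(text, audio, alpha=0.5)
print(int(np.argmax(fused)))  # 2
```

In the paper, the weight would be tuned so that the combined score beats either modality alone.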
{ "domain": "ai.stackexchange", "id": 156, "tags": "natural-language-processing, reference-request, speech-recognition, voice-recognition" }
Simple Bayesian classification with Laplace smoothing question
Question: I'm having a hard time getting my head around smoothing, so I've got a very simple question about Laplace/Add-one smoothing based on a toy problem I've been working with. The problem is a simple Bayesian classifier for periods ending sentences vs. periods not ending sentences, based on the word immediately before the period. I'm collecting the following counts in training: number of periods, number of sentence-ending periods (for the prior), words and counts for words before sentence-ending periods, and words and counts for words before not-sentence-ending periods. With add-one smoothing, I understand that $$P(w|\text{ending}) = \frac{\text{count}(w,\text{ending}) + 1}{\text{count}(w) + N},$$ where $P(w|\text{ending})$ is the conditional probability for word $w$ appearing before a sentence-ending period, $\text{count}(w,\text{ending})$ is the number of times $w$ appeared in the training text before a sentence-ending period, $\text{count}(w)$ is the number of times $w$ appeared in the training text (or should that be the number of times it appeared in the context of any period?), and $N$ is the "vocabulary size". The question is, what is $N$? Is it the number of different words in the training text? Is it the number of different words that appeared in the context of any period? Or just in the context of a sentence-ending period? Answer: The correct formula is $$P(w|\text{ending}) = \frac{\text{count}(w,\text{ending}) + 1}{\text{count}(w) + N},$$ where $N$ is the number of possible values of $w$. Here $w$ ranges over the set of all words that you'll ever want to estimate $P(w|\text{ending})$ for: this includes all the words in the training text, as well as any other words you might want to compute a probability for. For instance, if you limit yourself to only computing probabilities for English words in a particular dictionary, $N$ might be the number of words in that dictionary. 
The intuition/idea behind add-one smoothing is: in addition to every occurrence of a word in the training set, we imagine that we see one artificial "occurrence" of each possible word too -- i.e., we augment the training set by adding these artificial occurrences (exactly one per possible word), and then compute probabilities using the ordinary unsmoothed formula on this augmented training set. That's why we get $+1$ in the numerator and $+N$ in the denominator; $N$ is the number of artificial occurrences we've added, i.e., the number of possible words.
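The augmented-training-set idea is easy to sketch in Python. Everything below is a toy (the words and counts are invented), and note that the denominator used here is the total word count observed in the class plus $N$, matching the artificial-occurrences picture above:

```python
from collections import Counter

def add_one_probs(class_counts, vocab):
    """Add-one smoothed P(w | class) for every word in the vocabulary.

    One artificial occurrence of each possible word is added to the
    class, so the denominator is (total occurrences in the class) + N,
    where N is the vocabulary size.
    """
    n = len(vocab)
    total = sum(class_counts.values())
    return {w: (class_counts.get(w, 0) + 1) / (total + n) for w in vocab}

# Hypothetical counts of words seen before sentence-ending periods.
ending_counts = Counter({"etc": 2, "Inc": 1, "done": 5})
vocab = {"etc", "Inc", "done", "Dr", "vs"}  # N = 5 possible words

probs = add_one_probs(ending_counts, vocab)
print(probs["done"])  # (5 + 1) / (8 + 5) = 6/13
print(probs["Dr"])    # unseen, but still gets (0 + 1) / 13
assert abs(sum(probs.values()) - 1.0) < 1e-12
```

The final assertion is the point of the construction: with one artificial occurrence per possible word, the smoothed probabilities over the whole vocabulary still sum to 1.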
{ "domain": "cs.stackexchange", "id": 5874, "tags": "machine-learning, classification, bayesian-statistics" }
Finite temperature correlation functions in QFT
Question: Suppose that we want to calculate this imaginary time-ordered correlation function for an interacting system (in the Heisenberg picture): $$\langle \mathscr{T} A(\tau_A)B(\tau_B) \rangle =\frac{1}{Z} Tr\{\mathscr{T}(e^{-\beta H} e^{H\tau_A}A(0)e^{-H\tau_A}e^{H\tau_B}B(0)e^{-H\tau_B})\}$$ Assuming that $\tau_A > \tau_B$, we can drop the time-ordering operator $\mathscr{T}$, and using the definition of the imaginary-time evolution operator in the interaction picture: $$\langle \mathscr{T} A(\tau_A)B(\tau_B) \rangle = \frac{1}{Z}Tr\{ e^{-\beta H_0}S(\beta,\tau_A)A_I(\tau_A)S(\tau_A,\tau_B)B_I(\tau_B)S(\tau_B,0)\}$$ In AGD’s “Methods of quantum field theory”, it's said that we can now write the above relation as: $$\langle \mathscr{T} A(\tau_A)B(\tau_B) \rangle = \frac{1}{Z}Tr\{e^{-\beta H_0} \mathscr{T} (A_I(\tau_A)B_I(\tau_B)S(\beta,0))\}$$ But this is only true when $\beta > \tau_A,\tau_B$. In fact, if we consider a correlation function of some operators, each at an arbitrary time, then the finite-temperature version of the Gell-Mann–Low theorem is valid only if the time difference between operators is smaller than the characteristic thermal time scale of the system. My question is: how can we solve this problem and find the correlation function perturbatively? Or maybe it's not a problem, and it's natural that we can't find any correlation between two (or more) quantities in a system in thermal equilibrium when their time separation is so long that thermal fluctuations kill any correlation between them. And even if the latter statement is true, it doesn't say that we can't find the correlation; it just says we must use another method, for example solving the Hamiltonian exactly. Now suppose that we could do this. What physical behavior would we expect it to have (maybe some kind of exponential decrease as the time difference increases, with an exponential factor of $e^{-\frac{|\tau_A-\tau_B|}{\beta}}$)?
Or maybe I'm misled completely because imaginary time has nothing to do with real time. Answer: OP's question is about the long-time behavior of imaginary-time correlation functions in general. But in fact, the correlation function is ill-defined if the time difference $|\tau_A - \tau_B|$ is larger than the inverse temperature $\beta$. To see this, suppose that the Hamiltonian $H$ has eigenstates $\{ | n \rangle \}$ with associated eigenvalues $\{ E_{n} \}$, and assume $\tau_A > \tau_B$. Then, \begin{equation} \begin{split} &\mathrm{Tr}\big\{\mathscr{T}[e^{-\beta H}e^{\tau_{A}H} \, A(0) \, e^{-\tau_{A}H}e^{\tau_{B}H} \, B(0) \, e^{-\tau_{B}H}]\big\}\\ &= \sum_{n,n^\prime} \langle n| \, e^{-\beta H}e^{\tau_{A}H} \, A(0) \, |n^\prime\rangle\langle n^\prime| e^{-\tau_{A}H}e^{\tau_{B}H} \, B(0) \, e^{-\tau_{B}H}| n\rangle\\ &=\sum_{n,n^\prime}e^{-(\beta-\tau_A+\tau_B)E_n} e^{-(\tau_A-\tau_B)E_{n^\prime}}\langle n| \, A(0) \, |n^\prime\rangle\langle n^\prime| \, B(0) \, |n\rangle. \end{split} \end{equation} The energy eigenvalues $\{E_{n}\}$ of a physical system are bounded below and unbounded above. Hence, for both $e^{-(\beta-\tau_A+\tau_B)E_n}$ and $e^{-(\tau_A-\tau_B)E_{n^\prime}}$ to be well behaved for all $n$ and $n^\prime$, we must have $0\le\tau_A - \tau_B\le\beta$. Similarly, if $\tau_A < \tau_B$, we have $0\le\tau_B - \tau_A\le\beta$. Combining the two cases $\tau_A > \tau_B$ and $\tau_A < \tau_B$, it follows that the imaginary-time correlation function is well-defined only if \begin{equation} |\tau_A - \tau_B| <\beta. \end{equation}
{ "domain": "physics.stackexchange", "id": 27137, "tags": "quantum-field-theory, perturbation-theory, correlation-functions" }
Force per velocity per time squared signifies what?
Question: Anyone know what force per velocity per time squared signifies? It has to do with magnetic fields. I was doing dimensional analysis and it popped up. (Charge is assigned units of force and the rest just follows) $ E \quad = \quad \frac{q_1}{d^2} = \quad \frac{F}{d^2} $ $\frac{1}{\epsilon_0} \quad = \quad \frac{d^2}{F}$ $F \quad = \quad \frac{1}{\epsilon_0} \cdot q_2 \cdot E \quad = \quad \frac{d^2}{F} \frac{q_2 q_1}{d^2} \quad = \quad \frac{d^2}{F} \frac{F F}{d^2}$ $B \quad = \quad \frac{I_1}{d} \quad = \quad \frac{q_1}{t d} \quad = \quad \frac{F}{t d} \quad = \quad \frac{F / v}{t^2}$ $\mu \quad = \quad \frac{t^2}{F}$ $F \quad = \quad \mu \cdot I_2 \cdot B \cdot d \quad = \quad \frac{t^2}{F} \frac{q_2}{t} \frac{q_1}{t d} d \quad = \quad \frac{t^2}{F} \frac{F}{t} \frac{F}{t d} d$ $c \quad = \quad \sqrt{\frac{1}{\epsilon_0 \mu_0}} \quad = \quad \sqrt{\frac{d^2}{F} \frac{F}{t^2}} \quad = \quad \sqrt{\frac{d^2}{t^2}} = \frac{d}{t}$ https://tok.fandom.com/wiki/Template:Electromagnetism_header Answer: A charged particle moving in a magnetic field experiences a force that is proportional to its velocity. So units of force per velocity make sense. But I have no idea where time squared comes from. Rate of change of current gives "per time squared"
{ "domain": "physics.stackexchange", "id": 74390, "tags": "dimensional-analysis" }
Is my PHP script/embed remover robust?
Question: The goal of this question: Your goal here is to find a security hole in my code which allows a user to create input that contains a script doing anything they want, without that script being stopped by my code. Please bear in mind that that is the ONLY goal of this post. I am not so much here to talk about how good the algorithm is, only whether it works. However, that being said, I am still open to ideas on how to improve the algorithm, make it faster, etc., but please as comments. I will upvote any good suggestions :) With that out of the way, my code: As you may have read above, you are trying to trick my code into letting a script through. What does that mean exactly? Well, I have created a PHP script which attempts to parse HTML and remove scripts and embeds which are not from trusted websites. Yes.... Using some regex... I know, but I do have a good reason -- You see, the only reasons I have found and can think of NOT to use regex are: HTML is recursive, regex is not. (Hence all this talk about context-free vs regular. I am sure there are more reasons for HTML being "above regex", but the recursion is the best one I can personally come up with.) However, the interesting thing about my problem is this: Scripts are not recursive! This means that WHENEVER I come across a script tag, everything between that and the next end tag will NEVER be HTML! Thinking of HTML as a bunch of random letters, with every once in a while an open-script tag and a close-script tag, actually brings HTML down to a level that regex can handle. And that's almost exactly what I did... The other reason is memory constraints. I've used a lot of non-capturing groups with my regex, so that should not be a problem. I also don't use 100% regex; I only use it as a generic way to detect tags. Actual handling of the tags is done with my own code (only with a bit more regex to select src attributes). The reason I decided to push so hard for regex is this: DOMDocument only handles CLEAN HTML.
I will be receiving HTML from potentially malicious users. Clean HTML will very likely NOT be provided. PHP HTML tidying libraries are not an option as they require installation to the backend, which I have no control over whatsoever. Writing my own 100% substring-indexof interpreter would become a mess, especially when having to deal with both embed and script tags. Regex expresses all of the same logic but much more concisely, which is why I chose to go with it instead. So, there you have my reasons. If you attack my code for the regex, I would rather you not attack it without at least reading that section of the question above. The question: With all of that said, I will clearly define what I need: I need to know if the following code has security holes. I.e., is there a way that a user could cause a script within the inputted text to be ignored and passed through? For testing, I consider a script that alerts the text "haxored" to be acceptable. You will find some example input at the bottom.
<?php
header('Content-type: text/plain');

function dbStr($string) {
    $ranges = dbStr_GetRanges($string);
    return dbStr_FilterStringWithRanges($string, $ranges);
}

// Removes each [offset, length] range from $string, merging overlaps.
function dbStr_FilterStringWithRanges($string, $ranges) {
    $offset = 0;
    $maxidx = 0;
    foreach ($ranges as $range) {
        if ($range[0] + $range[1] <= $maxidx) continue;
        if ($range[0] < $maxidx) {
            $orig = $range[0];
            $range[0] = $maxidx;
            $range[1] -= $range[0] - $orig;
        }
        $string = substr_replace($string, '', $range[0] - $offset, $range[1]);
        if ($range[0] + $range[1] > $maxidx) $maxidx = $range[0] + $range[1];
        $offset += $range[1];
    }
    return $string;
}

// Finds the [offset, length] ranges of script/embed tags to strip.
// With PREG_OFFSET_CAPTURE, $value[0][0] is the matched text and
// $value[0][1] is its offset in $string.
function dbStr_GetRanges($string) {
    preg_match_all(
        "#<(/){0,1}?\s*?(?:script|embed)"
        ."[^'\"/]*?(?:[^'\"/]*?[\"'](?:(?:\\\\\"|\\\\'|[^\"'])*?)['\"][^'\"/]*?)*?[^'\"/]*?"
        ."(/){0,1}?>#imsSX",
        $string, $matches, PREG_SET_ORDER|PREG_OFFSET_CAPTURE
    );
    $ranges = array();
    foreach ($matches as $key => $value) {
        if (!in_array($value, $matches)) continue;
        $type = get_dbStrMatchType($value);
        $possiblesave = null;
        if ($type == 1) { // opening tag: look for the matching closing tag
            $idx = strlen($string) - 1;
            $len = 0;
            $protectkey = null;
            foreach ($matches as $key2 => $value2) {
                if ($key2 < $key) continue;
                $type2 = get_dbStrMatchType($value2);
                if ($type2 == 2) {
                    $idx = $value2[0][1];
                    $len = strlen($value2[0][0]);
                    $protectkey = $key2;
                    break;
                }
            }
            $substrstart = $value[0][1] + strlen($value[0][0]);
            $content = substr($string, $substrstart, $idx - $substrstart);
            if (preg_match("#[^\s]#imsSX", $content)) {
                // Non-empty body: strip the whole tag pair and its contents.
                $ranges[] = array($value[0][1], ($idx + $len) - $value[0][1]);
            } else {
                if (isset($protectkey)) {
                    $possiblesave = $protectkey;
                }
                $type = 3; // empty body: treat like a self-closing tag
            }
        }
        if ($type == 2) { // stray closing tag
            $ranges[] = array($value[0][1], strlen($value[0][0]));
        } else if ($type == 3) { // self-closing (or empty) tag: check src
            preg_match_all(
                "#src=[\"']((\\\\\"|\\\\'|[^\"'])*?)['\"]#imsSX",
                $value[0][0], $submatches, PREG_SET_ORDER|PREG_OFFSET_CAPTURE
            );
            if (count($submatches) != 1 || !approve_dbStrSrc($submatches[0][1][0])) {
                $ranges[] = array($value[0][1], strlen($value[0][0]));
            } else {
                if ($possiblesave !== null) {
                    unset($matches[$possiblesave]);
                }
            }
            $possiblesave = null;
        }
    }
    return $ranges;
}

function get_dbStrMatchType($val) {
    if (count($val) == 3 && strcmp($val[2][0], "/") == 0) {
        return 3; // self-closing tag
    } else if (count($val) == 2 && strcmp($val[1][0], "/") == 0) {
        return 2; // closing tag
    } else {
        return 1; // opening tag
    }
}

function approve_dbStrSrc($src) {
    $dbStrTrusted = array(
        "http://www.youtube.com",
        "http://youtube.com",
        "http://widgets.twimg.com/",
        "http://www.twiigs.com/",
        "http://twiigs.com/",
        "http://twitter.com/",
        "http://www.twitter.com/",
        "http://picasaweb.google.com",
        "http://www.flickr.com",
        "http://flickr.com",
        "http://static.pbsrc.com/",
    );
    foreach ($dbStrTrusted as $trusted) {
        if (strpos($src, $trusted) === 0) {
            return true;
        }
    }
    return false;
}

echo "test" . dbStr('
This will be passed:
<embed type="application/x-shockwave-flash" src="http://picasaweb.google.com/s/c/bin/slideshow.swf" width="288" height="192" flashvars="host=picasaweb.google.com&amp;hl=en_US&amp;feat=flashalbum&amp;RGB=0x000000&amp;feed=http%3A%2F%2Fpicasaweb.google.com%2Fdata%2Ffeed%2Fapi%2Fuser%2F109941697484668010012%2Falbumid%2F5561383933745906193%3Falt%3Drss%26kind%3Dphoto%26authkey%3DGv1sRgCN2H88H41qeT6AE%26hl%3Den_US" pluginspage="http://www.macromedia.com/go/getflashplayer"></embed>
This will not:
<script type="text/javascript"> alert("U R HAXORED"); </script>
');
?>
If the full version interests you (e.g. the version filled with notes about how the algorithm works) then check this out: http://pastebin.com/jYZH07cK EDIT: Here is an online demo of the code: http://www.geiodo.com/g-cont/mars3/MarsSecurity.php Answer: The filter doesn't block inline JavaScript. Example 1: <body onscroll=alert(1)><br><br><br><br><br><br>...<br><br><br><br><input autofocus> Example 2: <form id="test"></form><button form="test" formaction="javascript:alert(1)">X</button> Also, it doesn't encode the HTML, thus this will break your filter.
</textarea> Example: Inserts HTML: </textarea><marquee><h1>I'm a bug</h1></marquee><textarea> Inserts Script: </textarea><script>alert('I\'m a bug')</script><textarea> If you're trying to prevent XSS attacks, then your goal should be not to allow ANY HTML to render. One way to do this would be to replace the < and > symbols with their respective HTML entities, &lt; and &gt;.
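The fix recommended at the end (escaping rather than filtering) can be sketched in Python; the standard library's html.escape does the entity replacement:

```python
import html

# Instead of trying to strip <script> tags, escape ALL markup so that
# none of it can render. html.escape replaces &, <, > and, by default,
# both kinds of quotes with entities.
payload = '</textarea><script>alert("haxored")</script><textarea>'
safe = html.escape(payload)

print(safe)
# &lt;/textarea&gt;&lt;script&gt;alert(&quot;haxored&quot;)&lt;/script&gt;&lt;textarea&gt;
assert "<script>" not in safe
```

Escaped output is safe to echo into a page body because the browser renders the entities as literal characters rather than parsing them as tags.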
{ "domain": "codereview.stackexchange", "id": 2346, "tags": "javascript, php, html, parsing, regex" }
Can ovulation and menstruation occur simultaneously?
Question: Is it possible that ovulation happens at the time of actual menstrual haemorrhage in human females - say in a case where a woman has a 21-day cycle and her menstrual period is 7 days? If yes, does it mean that in this particular case the woman has lower chances to conceive? Answer: Ovulation and menstruation don’t happen at the same time in normally cycling women. A basic outline of the hormonal cycle that triggers these events will make this clear. Proliferative (a.k.a. follicular*) phase Beginning after menses (when the endometrium is thinned), the hypothalamus produces GnRH, which stimulates the anterior pituitary to produce LH and FSH. These in turn stimulate ovarian follicles to develop. A dominant follicle produces estradiol, which causes the endometrium to thicken (proliferate). At a certain level of estrogen (actually, a certain estrogen/progesterone ratio), the feedback on the hypothalamus flips from a negative to a positive feedback loop. Thus, there is a GnRH surge followed by an LH surge. The latter triggers ovulation. Note that at this time the endometrium is stable due to relatively high estrogen levels. Secretory (a.k.a. luteal*) phase After ovulation, the high levels of LH trigger formation of a corpus luteum from the tissues left behind after ovulation. The corpus luteum makes progesterone. This hormone triggers a change in the endometrium from a proliferative to a secretory state. Progesterone also provides negative feedback to the hypothalamus and anterior pituitary, maintaining low levels of GnRH, LH, and FSH, so no new dominant follicles develop at this time. If pregnancy does not occur, the corpus luteum will eventually (10-12 days) degenerate and stop producing progesterone. It is this abrupt drop in progesterone that triggers the sloughing of the endometrial lining.
You can see these hormone shifts in an illustration like this: The decline of the corpus luteum is correlated with a decline in serum levels of ovarian hormones including progesterone, estradiol, and inhibin A. Release from negative feedback provided by these hormones at the level of the hypothalamus and pituitary permits FSH to rise, and the cycle begins again. You should now be able to see that: Around the time of ovulation, the uterine lining is not fully developed and is stable due to the hormonal milieu. Menstruation does not occur. Around the time of menstruation, FSH and LH are suppressed in a way that is not conducive to ovulation. In theory, yes, of course there would be a lower chance of initiating a viable pregnancy (implantation rather than conception is the most obvious problem) were the endometrial lining to be unstable at the time of ovulation. The problem of luteal phase deficiency is along these lines. In this condition, the corpus luteum does not produce adequate progesterone during the luteal phase to develop the endometrial lining in such a way as to support a healthy pregnancy. However, ovulation and menstruation are still time-separated events for the reasons outlined above. *Note that the first term is with respect to the endometrium; the second is with respect to the ovary. Abbreviations: GnRH - Gonadotropin Releasing Hormone; LH - Luteinizing Hormone; FSH - Follicle Stimulating Hormone References 1. Anatomy & Physiology, Connexions Web site. Illustration is also from here. 2. Jerome Strauss, Robert Barbieri. Yen & Jaffe's Reproductive Endocrinology. September, 2013. Saunders.
{ "domain": "biology.stackexchange", "id": 2869, "tags": "reproduction, endocrinology, pregnancy, ovulation" }
The colour of a complex light wave?
Question: https://physics.stackexchange.com/q/259208 First off, as is discussed in the above question, light waves can be superpositions of various sinusoidal waves... Now I am no physicist, but as far as I know, the colour of light depends on its frequency. Thus the colour of a sinusoidal wave is pretty easily understood. Now suppose I mix two colours of light, say red and blue. Considering the wave nature of light, I should get a complicated-looking wave, which has a purple colour. But there are normal sinusoidal purple light waves also... So how exactly does the waveform change the sensation of light? How are the two waves different? I am aware that the waveform of a sound wave influences what is known as the quality of sound... but I have never heard of a quality of light... (My apologies if this is a noob question) Answer: Your perception of color has more to do with the biology of your eye than physics in this case. You have three receptors in your eye for color, one each for red, green, and blue. These are doing a Fourier transform of sorts, breaking down any complex wave into a weighted sum of red, green, and blue simple waves. So if you shine a simple wave with the value of purple (actually hard to do, because no one frequency really makes purple), or if you shine a little red and a little blue, the human eye can't tell the difference. This is how your computer screen works: it produces all colors with just red, green, and blue pixels.
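The point about the eye reducing any spectrum to three receptor signals can be sketched numerically with made-up Gaussian cone sensitivities (all peaks, widths, wavelengths, and intensities below are toy values, not real colorimetric data):

```python
import numpy as np

# Toy Gaussian sensitivity curve for a cone type peaking at `peak` nm.
def cone_response(wavelength_nm, peak, width=40.0):
    return np.exp(-((wavelength_nm - peak) / width) ** 2)

def responses(spectrum):
    """Map a spectrum {wavelength: intensity} to (S, M, L) cone signals."""
    peaks = (440.0, 540.0, 570.0)  # S, M, L peak wavelengths (toy values)
    return np.array([sum(i * cone_response(w, p) for w, i in spectrum.items())
                     for p in peaks])

# A single sinusoidal wave: monochromatic 'yellow' light at 560 nm...
mono = responses({560.0: 1.0})

# ...versus a mixture of red (650 nm) and green (540 nm): solve for the
# two intensities that excite the M and L cones identically.
A = np.column_stack([responses({650.0: 1.0})[1:], responses({540.0: 1.0})[1:]])
red_i, green_i = np.linalg.solve(A, mono[1:])
mix = responses({650.0: red_i, 540.0: green_i})

# Two physically different waveforms, same M and L cone signals:
# on those channels the eye cannot tell them apart.
assert np.allclose(mix[1:], mono[1:])
```

With real cone-sensitivity data, the same construction is why an RGB screen can reproduce most perceived colors from just three primaries.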
{ "domain": "physics.stackexchange", "id": 89755, "tags": "optics, waves" }
Quantum tunnelling in space vs. time
Question: In the Gamow model of alpha decay they use the WKB approximation to find the magnitude of the stationary-state wavefunction of an alpha particle with a given fixed energy $Q$ that has tunnelled through the potential barrier of the nucleus, then say the rate of alpha decay is $R = \frac{1}{\tau} = fe^{-2G}$, where $e^{-2G}$ is the probability of tunnelling to a certain position and $f$ is the frequency of the alpha particle 'hitting the side of the nucleus'. My question is, why does this model work so well? In reality we have an alpha-particle wavefunction that is a superposition of many energy eigenstates and that dynamically flows from being located within the nucleus to being an approximately free state. The dynamic evolution will depend on the relative phase evolution of the coefficients of each of these energy eigenstates. Why would some parameter $f$ multiplied by the probability of a stationary state being detected outside the nucleus give a similar result to a time-dependent calculation? This question may relate to how, in general, we can use information from time-independent stationary states to derive information about the time evolution of a dynamic wavefunction. Answer: TL;DR: the two approaches give the same answer in the long-time limit. When dealing with time-independent perturbations, there is a great deal of similarity between Fermi-golden-rule-like calculations and scattering theory. To be more precise, scattering theory calculates the transition probability under the assumption of adiabatic switching (i.e., the perturbation is absent at $t\rightarrow -\infty$), whereas the Fermi golden rule usually assumes sudden switching (and is usually presented only for time-dependent perturbations and to finite order in perturbation theory). Both, however, assume that we look at the result in the long-time limit (i.e., a long time after switching on the perturbation, in the case of the golden rule), which makes them mathematically indistinguishable.
Specifically, if we were to consider the spread of a wave packet initially localized at the nucleus, we would expand its wave function in terms of the true (outgoing) eigenstates of the Hamiltonian: $$ |\varphi(0)\rangle = \sum_nc_n(0)|\phi_n\rangle, $$ and apply the time evolution to the expansion coefficients: $$ c_n(t)=c_n(0)\,e^{-iE_n t/\hbar}. $$ At small times the behavior will be very different from exponential decay, but at long times it will indeed resemble a series of exponential terms. Furthermore, out of these exponential terms we keep only the one with the smallest exponent $\Gamma$, since the others decay faster. Let me further point out that the value of Gamow's equation is mainly empirical, in predicting the correct functional form of the decay. In many practical situations the coefficients $f$ and $G$ may not be calculated but instead taken from experiment, to better fit the observations. As supplementary reading I could recommend the papers by Shmuel Gurvitz, who has been extensively using this approach for describing tunneling in semiconductor quantum dots, e.g., this one. (I recommend looking through more of his papers, since some of them may contain more relevant details.) Remark: Let me bring up a seemingly very different classical problem: the diffusive escape of a particle from a potential well due to noise/Brownian motion (the Kramers rate). Although the setting and the methods of solution (e.g., using the Fokker-Planck equation) may appear very different, the long-time approximation resulting in exponential decay is virtually the same. Gamow's formula is essentially the extension of this result from chemical reactions to the nuclear domain.
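The statement that at small times the behavior differs from exponential decay can be checked numerically: for any expansion in energy eigenstates, the survival probability falls off quadratically at first, $1-P(t)\approx(\Delta E)^2 t^2/\hbar^2$. A toy numpy sketch with a made-up spectrum (units with $\hbar=1$):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy spectrum E_n and expansion coefficients c_n(0) of the initial state.
E = rng.uniform(0.0, 5.0, size=50)
c = rng.normal(size=50) + 1j * rng.normal(size=50)
c /= np.linalg.norm(c)  # normalize the state

def survival(t):
    """P(t) = |<psi(0)|psi(t)>|^2 with c_n(t) = c_n(0) exp(-i E_n t)."""
    amplitude = np.sum(np.abs(c) ** 2 * np.exp(-1j * E * t))
    return np.abs(amplitude) ** 2

p = np.abs(c) ** 2
var_E = np.sum(p * E ** 2) - np.sum(p * E) ** 2  # energy variance (Delta E)^2

t = 1e-3
# Quadratic, not exponential, decay at short times:
assert np.isclose(1.0 - survival(t), var_E * t ** 2, rtol=1e-2)
```

The exponential form only emerges in the long-time limit, which is exactly the regime where the stationary-state (Gamow) calculation applies.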
{ "domain": "physics.stackexchange", "id": 79270, "tags": "quantum-mechanics, particle-physics, nuclear-physics, approximations, semiclassical" }
Is there an analog to polar coordinates for Minkowski 4-vectors?
Question: Is there a way to represent a Minkowski 4-vector using a 4D polar coordinate system, i.e. a single radial coordinate and 3 angular coordinates? I know this can be done in Euclidean 4-space with spherical coordinates. And I know that 3D spherical coordinates can be used for the spacelike part of a Minkowski 4-vector. But I'm not sure how to combine the spacelike and timelike parts into one set of polar coordinates. Answer: One sometimes useful decomposition of four-vectors uses two Euclidean angles and what is called the rapidity, a sort of unitless measure of energy. You can imagine starting with a four-vector in its rest frame, $p=(m,0,0,0)$, where $p^2 = m^2$. Then you can boost in the $z$ direction with rapidity parameter $\eta \in [0,\infty)$, giving you $p = m(\cosh\eta,0,0,\sinh\eta)$. You can check that this left the Minkowski norm invariant. Finally, you can do a spatial rotation to make the three-vector part point any direction you want. $p = m(\cosh\eta,\sinh\eta \sin\theta\cos\phi,\sinh\eta\sin\theta\sin\phi,\sinh\eta\cos\theta)$. For physical interpretation, note that $E = m\cosh\eta$, so $\cosh\eta$ is the same as the Lorentz factor $\gamma$, and $\sinh\eta = \beta\gamma$. Note this form is only for time-like vectors. For lightlike vectors or spacelike vectors you have to start with different initial four-vectors, like $m(1,0,0,1)$ or $m(0,0,0,1)$.
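A quick numerical check of this parametrization (a toy sketch, not part of the original answer): the Minkowski norm should come out to $m^2$ for any choice of rapidity and angles.

```python
import numpy as np

def timelike_four_vector(m, eta, theta, phi):
    """p = m*(cosh eta, sinh eta * n_hat) from mass m, rapidity eta,
    and spherical angles (theta, phi) for the spatial direction."""
    sh = np.sinh(eta)
    return np.array([m * np.cosh(eta),
                     m * sh * np.sin(theta) * np.cos(phi),
                     m * sh * np.sin(theta) * np.sin(phi),
                     m * sh * np.cos(theta)])

def minkowski_norm_sq(p):
    """p^2 with signature (+, -, -, -)."""
    return p[0] ** 2 - p[1] ** 2 - p[2] ** 2 - p[3] ** 2

# The norm is independent of eta, theta, phi: always m^2, because
# cosh^2 - sinh^2 = 1 and the unit direction vector drops out.
m = 2.0
for eta, theta, phi in [(0.0, 0.0, 0.0), (1.3, 0.7, 2.1), (3.0, 2.5, 5.0)]:
    p = timelike_four_vector(m, eta, theta, phi)
    assert np.isclose(minkowski_norm_sq(p), m ** 2)
```

As the answer notes, $p_0 = m\cosh\eta$ recovers $E = \gamma m$, so the rapidity directly plays the role of the boost parameter.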
{ "domain": "physics.stackexchange", "id": 40113, "tags": "special-relativity, coordinate-systems" }
Libfreenect not finding the Kinect
Question: This is sort of a follow up to this about libfreenect. I successfully got ROS installed on RaspberryPi (ARM) and compiled freenect_stack from @piyushk. I complete all of the steps, but it can't find the Kinect: [kevin@raspberrypi freenect_stack]$ rosrun freenect_camera freenect_node [ INFO] [1351981388.025719933]: Initializing nodelet with 1 worker threads. [ INFO] [1351981396.302127503]: No devices connected.... waiting for devices to be connected [ INFO] [1351981399.320659202]: No devices connected.... waiting for devices to be connected [ INFO] [1351981402.339445895]: No devices connected.... waiting for devices to be connected [ INFO] [1351981405.356383634]: No devices connected.... waiting for devices to be connected [ INFO] [1351981408.375364319]: No devices connected.... waiting for devices to be connected [ INFO] [1351981411.394149012]: No devices connected.... waiting for devices to be connected [ INFO] [1351981414.413055699]: No devices connected.... waiting for devices to be connected [ INFO] [1351981417.431225405]: No devices connected.... waiting for devices to be connected [ INFO] [1351981420.453722998]: No devices connected.... waiting for devices to be connected [ INFO] [1351981423.472487689]: No devices connected.... waiting for devices to be connected [ INFO] [1351981426.493123329]: No devices connected.... waiting for devices to be connected [ INFO] [1351981429.516694892]: No devices connected.... waiting for devices to be connected [ INFO] [1351981432.538032515]: No devices connected.... waiting for devices to be connected Is there something else I am missing? 
Originally posted by Kevin on ROS Answers with karma: 2962 on 2012-11-03 Post score: 2 Original comments Comment by MarkyMark2012 on 2012-11-07: Hi Kevin - any suggestions around a couple of problems I'm having building this: libfreenect: No definition of [libxmu-dev] for OS [debian] freenect_launch: Missing resource pcl (I've installed libxmu-dev and freenect_camera: No definition of [log4cxx] for OS version [] where to get log4cxx? Comment by MarkyMark2012 on 2012-11-07: and finally - MSG: gencfg_cpp on:Test.cfg CMake Error at cmake/cfgbuild.cmake:66 (string): string sub-command REPLACE requires at least four arguments. Thanks Mark Comment by MarkyMark2012 on 2012-11-07: lsusb gives me - "Bus 001 Device 008: ID 045e:02b0 Microsoft Corp. Xbox NUI Motor" on the Pi Answer: I don't have any experience with the RaspberryPi. I am not sure how well libfreenect supports it. Can you check to see if lsusb gives a suitable output: lsusb lsusb should give an output similar to: Bus 001 Device 004: ID 045e:02b0 Microsoft Corp. Xbox NUI Motor Bus 001 Device 005: ID 045e:02ad Microsoft Corp. Xbox NUI Audio Bus 001 Device 006: ID 045e:02ae Microsoft Corp. Xbox NUI Camera Also, I am not sure which binaries inside libfreenect can be run on the pi, but hopefully at least tiltdemo should work: rosrun libfreenect tiltdemo Let me know if the above work. Originally posted by piyushk with karma: 2871 on 2012-11-04 This answer was ACCEPTED on the original site Post score: 1 Original comments Comment by Kevin on 2012-11-10: No, I only see the Motor; the Audio and Camera don't show up. There appear to be others having the same issue, so it has more to do with the RaspberryPi and nothing to do with your code. Some people are thinking it has to do with the RPi's USB implementation. Comment by MarkyMark2012 on 2012-11-20: Kevin - should you find a solution to this please let me know - am looking into it myself and have covered/built from scratch all suggestions I've found thus far..... 
Thanks Mark Comment by Kevin on 2012-11-20: doing more research, many have built libfreenect for RPi, but I cannot find anyone that has really gotten it to work. RPi can see the kinect motor but not the camera and audio. I think this is an RPi issue and not anything else. RPi probably has a bandwidth issue on the USB bus. Still looking at it. Comment by MarkyMark2012 on 2012-11-20: Yep agreed - I'm getting the same issues as you...shall keep looking Comment by MarkyMark2012 on 2012-11-25: Kevin - by way of an update. Received new 512Mb board this w/e and updated the firmware. Now I'm able to see motor, audio and camera. A quick test of the tiltdemo also works. How did you get freenect_stack to build? Comment by MarkyMark2012 on 2012-11-25: I get the following error when I rosmake: MSG: gencfg_cpp on:Test.cfg CMake Error at cmake/cfgbuild.cmake:66 (string): string sub-command REPLACE requires at least four arguments. Call Stack (most recent call first): cmake/cfgbuild.cmake:87 (gencfg_cpp) CMakeLists.txt:24 (include) Comment by Kevin on 2012-11-25: Glad to hear you can see the camera now. I had no errors ... try opening a new question and posting the errors.
{ "domain": "robotics.stackexchange", "id": 11617, "tags": "ros, kinect, raspberrypi, libfreenect, raspbian" }
Molecule with a quadrupole moment in an electric field
Question: How does an uncharged non-polar molecule that has a quadrupole moment (such as carbon dioxide) behave in an electric field? I know that in a homogeneous electric field, ions travel while dipoles orient along the field (rotate) and non-polar molecules are not affected. What kind of electrical field, if any, would exert a force on a molecule with a quadrupole moment? Answer: Different moments of a charge distribution couple to different components of the external electric field. In the case of the quadrupole moment, the coupling is to the gradient of the electric field (EFG). Such an interaction is for instance relevant in NMR of quadrupolar nuclei, NQR (not to be confused with naked quad run, according to the wikipedia - o tempora, o mores) and Mössbauer spectroscopy, although those techniques consider the nuclear quadrupole moment. In the case of atoms and molecules without permanent electric monopoles or dipoles, EFG interactions with permanent quadrupoles are the leading field-multipole interaction energy term (ignoring dispersion terms, ie induced dipole-induced dipole). In some cases such interactions can be of particular importance, for instance in aromatic compounds (see Kocman et al. cited below). The quadrupole moment has been measured in $\ce{CO_2}$, see eg Chetty cited below, which includes experimental methods. References Electric quadrupole moment of graphene and its effect on intermolecular interactions M. Kocman, M. Pykal and P. Jurecka Physical Chemistry Chemical Physics Vol. 16, 2014 N. Chetty and V.W. Couling Measurement of the electric quadrupole moments of CO2 and OCS Molecular Physics Vol. 109 (5), 2011, 655–666
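The "couples to the gradient" statement can be illustrated with a toy linear quadrupole (+q, -2q, +q on a line; this model and the names below are mine, not from the answer): in a uniform field its interaction energy vanishes, while a field with a constant gradient gives a nonzero energy proportional to that gradient.

```python
def interaction_energy(charges, phi):
    """Energy of point charges (q, z) in an external potential phi(z)."""
    return sum(q * phi(z) for q, z in charges)

q, a = 1.0, 0.1
quad = [(q, a), (-2.0 * q, 0.0), (q, -a)]  # zero net charge, zero dipole moment

E0 = 5.0   # uniform field E(z) = E0, potential phi = -E0*z
U_uniform = interaction_energy(quad, lambda z: -E0 * z)

k = 3.0    # field with constant gradient, E(z) = k*z, phi = -k*z**2/2
U_gradient = interaction_energy(quad, lambda z: -k * z * z / 2.0)

print(U_uniform)   # ~0: no coupling to the field itself
print(U_gradient)  # ~-0.03 = -q*a^2*k: couples only to the field gradient
```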
{ "domain": "chemistry.stackexchange", "id": 11312, "tags": "molecular-structure" }
Finding common elements in two arrays
Question: I just had this question in an interview. I had to write some code to find all the common elements in two arrays. This is the code I wrote. I could only think of a 2-loop solution, but something tells me there must be a way to accomplish this with only 1 loop. Any ideas? public List<Integer> findCommonElements(int[] arr1, int[] arr2) { List<Integer> commonElements = new ArrayList<>(); for (int i = 0; i < arr1.length; i++) { for (int j = 0; j < arr2.length; j++) { if (arr1[i] == arr2[j]) { commonElements.add(arr1[i]); break; } } } return commonElements; } Answer: You use a hashset; this means you have 2 loops, but not nested. You have a boolean hashset in this case where all values start at false, of size k, where k (this is typically what is used in Big O notation) is the number of possible integer values. You loop over your first array and for each value you go hashset[firstArray[i]] = true; once you have done this you loop over your second array, going if(hashset[secondArray[i]]) commonElements.add(secondArray[i]);. This is O(2n), which then becomes simply O(n) due to getting rid of the constants; your solution was O(n^2). Although it should be noted the storage required for using a hashset is considerably more.
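The answer's pseudocode is Java-flavoured; here is the same two-pass idea as a Python sketch (using a hash set keyed by value instead of a boolean array indexed by value, and deduplicating the output, which the original nested-loop version does not):

```python
def find_common_elements(arr1, arr2):
    """O(n + m) expected time: one pass to hash arr1, one pass to scan arr2."""
    seen = set(arr1)     # first loop (building the hash set)
    common = []
    for x in arr2:       # second loop -- not nested inside the first
        if x in seen:
            common.append(x)
            seen.discard(x)   # emit each common value once
    return common

print(find_common_elements([1, 2, 3, 4], [3, 5, 4, 3]))  # [3, 4]
```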
{ "domain": "codereview.stackexchange", "id": 29788, "tags": "java, algorithm, array, interview-questions" }
A partition problem with order constraints
Question: In the OrderedPartition problem, the input is two sequences of $n$ positive integers, $(a_i)_{i\in [n]}$ and $(b_i)_{i\in [n]}$. The output is a partition of the indices $[n]$ into two disjoint subsets, $I$ and $J$, such that: $\sum_{i\in I} a_i = \sum_{j\in J} a_j$ For all $i\in I$ and for all $j\in J$: $b_i\leq b_j$. In other words, we have to first order the indices on a line such that the $b_i$ are weakly increasing, and then cut the line such that the sum of the $a_i$ on both sides is the same. If all $b_i$ are the same, then condition 2 is irrelevant and we have an instance of the NP-hard Partition problem. On the other hand, if all $b_i$ are different, then condition 2 imposes a single ordering on the indices, so there are only $n-1$ options to check, and the problem becomes polynomial. What happens in between these cases? To formalize the question, denote by OrderedPartition[n,d], for $1\leq d\leq n$, the problem restricted to instances of size $n$, in which the largest subset of identical $b_i$-s is of size $d$. So the easy case, when all $b_i$-s are different, is OrderedPartition[n,1], and the hard case, when all $b_i$-s are identical, is OrderedPartition[n,n]. More generally: For any $n$ and $d$, in any OrderedPartition[n,d] instance, the number of possible partitions respecting condition 2 is $O(n 2^d)$. Hence, if $d\in O(\log{n})$, then OrderedPartition[n,d] is still polynomial in $n$. On the other hand, for any $n$ and $d$, we can reduce from a Partition problem with $d$ integers to OrderedPartition[n,d]. Let $p_1,\ldots,p_d$ be an instance of Partition. Define an instance of OrderedPartition[n,d] by: For each $i\in \{1,\ldots,d\}$, let $a_i := 2n\cdot p_i$ and $b_i := 1$. For each $i\in \{d+1,\ldots,n\}$, let $a_i := 1$ and $b_i := i$ [if $n-d$ is odd, make $a_n:=2$ such that the sum will be even]. Hence, if $d\in\Omega(n^{1/k})$, for any integer $k\geq 1$, then OrderedPartition[n,d] is NP-hard. 
QUESTION: What happens in the intermediate cases, in which $d$ is super-logarithmic but sub-polynomial in $n$? Answer: Intuitively, the intermediate cases should be neither in P nor NP-hard. Perhaps it depends exactly on what we mean by "intermediate case". Here is one interpretation for which we can prove something. Note: The Exponential-Time Hypothesis, or ETH, is that it is not the case that, for every constant $\epsilon>0$, SAT has an algorithm running in time $2^{n^{\epsilon}}$. See also this cs.stackexchange discussion. As far as we know, ETH is true. Define OP$_c$ to be the restriction of the OrderedPartition problem to instances where $d \le \log^c n$. Equivalently, to instances where $n \ge 2^{d^{1/c}}$. Here we intend OP$_c$ to capture what the post means by "intermediate instances". We show that these instances are not likely to be in P, nor NP-hard. Lemma 1. If OP$_c$ is in P for all $c$, then ETH fails. Proof. Suppose OP$_c$ is in P for all $c$. That is, for some function $f$, OP$_c$ has an algorithm running in time $n^{f(c)}$. SAT inputs of size $n$ reduce (via Partition as described in the post) to OrderedPartition$[2^{n^{b/c}}, n^b]$, for some constant $b$ and any constant $c>0$. So, SAT inputs of size $n$ reduce to OP$_c$ instances of size $2^{n^{b/c}}$, which can be solved in time $2^{f(c)n^{b/c}}$ via the algorithm for OP$_c$. For any $\epsilon>0$, taking, say, $c=2b/\epsilon$, SAT can be solved in time $2^{c' n^{\epsilon/2}} \le 2^{n^{\epsilon}}$ (for large $n$), violating ETH.$~~~~\Box$ Note: It seems likely to me that even OP$_2$ is not in P, but showing something like that would be similar to showing, say, that SAT has no algorithm running in time $2^{\sqrt n}$. Lemma 2. If OP$_c$ is NP-hard for some $c$, then ETH fails. Proof. Suppose OP$_c$ is NP-hard for some $c$. Then SAT inputs of size $n$ reduce to OP$_c$ in time $O(n^b)$ for some $b$. That is, to instances of OrderedPartition$[n^b, d]$ where $d\le \log^c (n^b)$. 
As observed in the post such instances can be solved in time $n^{O(1)} 2^d = n^{O(1)} 2^{\log^c (n^b)}$, (strongly!) violating ETH.$~~~~\Box$ Probably something cleaner or stronger can be shown. If I had to guess, I'd define NP$_{d(n)}$ to be the complexity class comprised of those languages that have a non-deterministic poly-time algorithm that, on any input of size $n$, uses at most $d(n)$ non-deterministic guesses. (Here $d$ could be any function.) Then OrderedPartition$[n, d(n)]$ is in NP$_{d(n)}$. Perhaps it is complete for that class under poly-time reductions? A natural guess for a problem that should be complete for the class would be: given a circuit of size $n$ with $d$ input gates, is there an input that makes the circuit output True? Or something like that. (I wonder how this compares to, say, defining SAT$_{p(n)}$ to consist of SAT instances, padded with $p(n)$ useless bits to make the input larger. When $p(n)$ is super-polynomial but sub-exponential, the problem should be neither NP-hard nor in P.) p.s. See also Consequences of sub-exponential proofs/algorithms for SAT .
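For concreteness, the "easy" regime from the question (all $b_i$ distinct, so condition 2 fixes the order and only $n-1$ cut points need checking) can be sketched as follows; the function name is mine:

```python
def ordered_partition_distinct_b(a, b):
    """OrderedPartition when all b_i are distinct: sort indices by b and try
    each of the n-1 cut points, comparing the prefix sum with the total."""
    order = sorted(range(len(a)), key=lambda i: b[i])
    total = sum(a)
    prefix = 0
    for cut in range(1, len(order)):
        prefix += a[order[cut - 1]]
        if 2 * prefix == total:
            # every b_i for i in I is <= every b_j for j in J by construction
            return order[:cut], order[cut:]
    return None

print(ordered_partition_distinct_b([1, 2, 3, 2], [4, 1, 3, 2]))  # ([1, 3], [2, 0])
```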
{ "domain": "cstheory.stackexchange", "id": 4771, "tags": "np-hardness, partition-problem" }
Testing an action returns a View with a ViewModel using xUnit and Moq
Question: This is my first test, I think it's testing what I need it to test but wanted to get some feedback. I wanted to test to make sure the controller action returns a view with a certain ViewModel. The code that it will be testing: Controller: public class UserController : Controller { private readonly ILogger<UserController> _logger; private readonly IViewModelService _vmService; public UserController(ILogger<UserController> logger, IViewModelService vmService) { _logger = logger; _vmService = vmService; } public async Task<IActionResult> Index() { return View(await _vmService.GetIndexVM()); } } ViewModel public class UserListVM { public IQueryable<DimUser> UserList { get; set; } } ViewModelService public async Task<UserListVM> GetIndexVM() { return new UserListVM() { UserList = await _userRepo.GetUserList() }; } The Test: public class UserControllerTests { [Fact] public async Task Index_ReturnsAViewResult_WithUserListVM() { // Arrange var logger = new Mock<ILogger<UserController>>(); var vmService = new Mock<IViewModelService>(); var myList = new List<DimUser> { new DimUser() { UserId = 1, FirstName = "Test", Surname = "TestSur", RefNumber = "ABC1111111", DateOfBirth = DateTime.Now } }; vmService.Setup(x => x.GetIndexVM()).ReturnsAsync(new UserListVM() { UserList = myList.AsQueryable() }); var userController = new UserController(logger.Object, vmService.Object); // Act var result = await userController.Index(); var viewResult = Assert.IsType<ViewResult>(result); var model = Assert.IsType<UserListVM>(viewResult.ViewData.Model); // Assert Assert.IsType<UserListVM>(model); } } Answer: Let's review each code segment one by one: public class UserController : Controller { private readonly ILogger<UserController> _logger; private readonly IViewModelService _vmService; public UserController(ILogger<UserController> logger, IViewModelService vmService) { _logger = logger; _vmService = vmService; } public async Task<IActionResult> Index() { return View(await 
_vmService.GetIndexVM()); } } I would generally recommend avoiding code like this: return View(await _vmService.GetIndexVM()); It is hard to add proper error handling, add transformation logic, add conditional branching, etc. A better approach would be to separate these two operations: var indexViewModel = await _vmService.GetIndexVM(); return View(indexViewModel); public async Task<UserListVM> GetIndexVM() { return new UserListVM() { UserList = await _userRepo.GetUserList() }; } First of all, the same applies here as above: do not mix object creation logic and async calls. Secondly, in this simple code you have repeated the underlying collection type four times, which is an implementation detail. If you need to change that implementation detail in the lower layer that would propagate through several layers. Remember that hiding implementation details will help you minimize the scope of a change. A better approach would be: public async Task<UsersVM> GetIndexVM() { return new UsersVM() { Users = await _userRepo.GetUsers() }; } There is another thing: this service now has two responsibilities: 1) Retrieve data via a lower layer 2) Transform data to the presentation layer In other words, this layer is an adapter between your presentation layer and repository layer. Generally speaking the service layer is the place where your business logic should reside. Because there is no business logic here, it acts as an adapter. I have seen the following two approaches regarding object mapping: Each layer transforms its objects to the lower layer's object model Each layer accepts the upper layer's object model and transforms it into its own model The first one fits nicely into the n-layer architecture model where each layer only knows about the layer directly beneath it. So, presentation layer knows about service layer. Service layer knows about repository layer. The second approach violates this rule. 
Service layer knows about repository layer and knows about presentation layer's domain model. It is not bad, but the first approach (in my opinion) separates the concerns better. public class UserListVM { public IQueryable<DimUser> UserList { get; set; } } Here your naming and data type do not match. With this name you are stating that it should contain a List, which implies that you could use operations like Add, Remove, etc. IQueryable does not provide such an API. IQueryable is a type which is used for deferred execution. In other words it indicates that this is just a query, not the materialised form of the query. The problem with this is that it will execute the query when you somehow iterate through it (via foreach or calling .Count, etc.). If you do this in your view then your repository's datacontext might already be disposed. A better approach would be to expose it like this: public class UsersVM { public IList<DimUser> Users { get; set; } } Your test's Arrange part looks good, so I will spend some thought on the rest: // Act var result = await userController.Index(); var viewResult = Assert.IsType<ViewResult>(result); var model = Assert.IsType<UserListVM>(viewResult.ViewData.Model); // Assert Assert.IsType<UserListVM>(model); Your Act section should consist only of the call of the Index function of the userController. The assertions should go under the Assert section. I would also consider using IsAssignableFrom<T> instead of IsType<T>, because the former supports inheritance as well.
{ "domain": "codereview.stackexchange", "id": 38384, "tags": "c#, asp.net-core, moq, xunit" }
Calculating the intensity of an emission spectrum line
Question: I'm writing a program which generates the emission spectrum of an element with atomic number $Z$. To do this, I have used the equation: $$\frac{1}{\lambda} = R_{\infty}Z^2\left(\frac{1}{n_2^2}-\frac{1}{n_1^2}\right)$$ Where $n_1$ is the number of the original energy level and $n_2$ is the number of the final energy level, and where $n_1 > n_2$. However, this does not calculate the intensity of the lines. Is there any way to model the intensity using $n_1$, $n_2$ and/or $Z$, assuming that the brightest line has an intensity of 1? Answer: The formula that you exhibit gives the correct values for possible wavelengths for any "hydrogen-like" atom—one with exactly one electron, meaning neutral hydrogen, singly ionized helium, doubly ionized lithium and so on. There is no formula for the wavelengths of lines associated with atoms that have more than one electron present (though there are computational results to high precision for a number of relatively light atoms). Nor is it possible to address the question of line strength without knowing something about the environment around the atoms, because an atom in splendid isolation doesn't have an emission spectrum: it has already decayed to the ground state and it just sits there. So you must invoke an understanding of the environment to work out how strongly various lines show up (or don't show up). In cool environments most atoms don't get excited and therefore don't emit. In hot enough environments they may tend to be fully ionized and it is the free-interaction spectrum that you see most. In between you get a range of different emission spectra from the same atom.
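For the wavelength part (valid only for single-electron species, as the answer stresses), a short sketch, with the level difference ordered so that $\lambda$ comes out positive for $n_{\text{initial}} > n_{\text{final}}$:

```python
R_INF = 1.0973731568e7  # Rydberg constant in 1/m

def emission_wavelength(z, n_initial, n_final):
    """Wavelength (m) of a line for a hydrogen-like species, n_initial > n_final."""
    inv_lam = R_INF * z**2 * (1.0 / n_final**2 - 1.0 / n_initial**2)
    return 1.0 / inv_lam

# Balmer H-alpha (3 -> 2) and Lyman-alpha (2 -> 1) for hydrogen (Z = 1):
print(emission_wavelength(1, 3, 2) * 1e9)  # ~656 nm
print(emission_wavelength(1, 2, 1) * 1e9)  # ~122 nm
```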
{ "domain": "physics.stackexchange", "id": 40612, "tags": "electromagnetic-radiation, radiation, simulations, spectroscopy, photon-emission" }
Information carried by single photon
Question: In Quantum Information we can use photons for quantum bits (qubits). What I often read is that each photon can carry one unit of information, i.e. using the polarization state of a single photon. I have two questions: 1) I read in this article that it is possible to send 1.63 bits of information per photon; what does that mean? http://www.newscientist.com/article/dn13522-twisting-light-packs-more-information-into-one-photon.html#.Uy1cKBx22QE 2) If I can take a single photon state as a tensor product of its polarization state and its orbital angular momentum state (we could also add a frequency state), can I say that I am sending 2 (or 3) qubits of information in one single photon? Thank you Answer: 1) Basically, it seems that they implemented a version of the superdense coding theorem. What's that about? Well, the idea is the following: As always, Alice wants to send bits to Bob. Before they start, however, they already establish a connection by sharing a maximally entangled state $|\Phi\rangle=\frac{|00\rangle+|11\rangle}{\sqrt{2}}$. So, in principle, they take a two-photon state and each of them has one photon. Alice can now act on her photon and produce (via local transformation) any of the four Bell states that are available. She can then send her photon to Bob and Bob can measure the two-photon state, obtaining one of the four Bell states (depending on what Alice did). So during the protocol, Alice sent ONE photon but Bob obtains TWO bits of information (four possible outcomes). So it seems that you can actually send two bits of information with one photon. Now it seems that in the experiments, this is not really possible (you cannot measure every Bell state for example, but only three of them) and this is how the number 1.63 comes about. However, this does in no way contradict what you heard, because we didn't really send two bits of information with one photon. 
What we did is, we sent TWO photons - the first one before the real sending of a message started and the other one during the protocol (the entangled pair will be created by either Alice or Bob and one of these photons has to be sent to the other party). 2) Speaking about the bound, this also gives you an answer to the second question: There is actually nothing saying that a photon can only ever contain one bit of information. What the theorem says is that a qubit (i.e. a two-level quantum system) can only be used to send one classical bit - if we assume no prior entanglement (otherwise, we can use the protocol above). However, if your physical system has more than two levels, you can use that to convey more information. The polarization states of a photon form a two-level quantum system, hence they can be used as a qubit. If, in your system, there are more degrees of freedom available, you can use them for additional storage of information.
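The counting argument in part 1 can be made concrete in a few lines of plain Python (real arithmetic suffices; global phases are ignored): starting from $|\Phi^+\rangle$, Alice's four local operations $I, X, Z, XZ$ produce the four Bell states, which are mutually orthogonal, so a full Bell measurement by Bob recovers two bits per transmitted photon.

```python
from math import sqrt

def kron2(U, V):
    """Kronecker product of two 2x2 matrices (as a 4x4 list of lists)."""
    return [[U[i][k] * V[j][l] for k in range(2) for l in range(2)]
            for i in range(2) for j in range(2)]

def apply(M, v):
    return [sum(M[r][c] * v[c] for c in range(4)) for r in range(4)]

def dot(u, v):
    return sum(x * y for x, y in zip(u, v))

I2 = [[1, 0], [0, 1]]
X = [[0, 1], [1, 0]]
Z = [[1, 0], [0, -1]]
XZ = [[0, -1], [1, 0]]  # X applied after Z

phi_plus = [1 / sqrt(2), 0, 0, 1 / sqrt(2)]  # (|00> + |11>)/sqrt(2)

# Alice acts on her qubit only (U tensor I); each choice encodes 2 bits.
bell = [apply(kron2(U, I2), phi_plus) for U in (I2, X, Z, XZ)]

# Gram matrix: identity, i.e. four orthonormal, perfectly distinguishable states.
gram = [[round(dot(u, v), 12) for v in bell] for u in bell]
print(gram)
```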
{ "domain": "physics.stackexchange", "id": 12671, "tags": "photons, quantum-information" }
Axioms necessary for theoretical computer science
Question: This question is inspired by a similar question about applied mathematics on mathoverflow, and that nagging thought that important questions of TCS such as P vs. NP might be independent of ZFC (or other systems). As a little background, reverse mathematics is the project of finding the axioms necessary to prove certain important theorems. In other words, we start at a set of theorems we expect to be true and try to derive the minimal set of 'natural' axioms that make them so. I was wondering if the reverse mathematics approach has been applied to any important theorems of TCS, in particular to complexity theory. With deadlock on many open questions in TCS it seems natural to ask "what axioms have we not tried using?". Alternatively, have any important questions in TCS been shown to be independent of certain simple subsystems of second-order arithmetic? Answer: Yes, the topic has been studied in proof complexity. It is called Bounded Reverse Mathematics. You can find a table containing some reverse mathematics results on page 8 of Cook and Nguyen's book, "Logical Foundations of Proof Complexity", 2010. Some of Steve Cook's previous students have worked on similar topics, e.g. Nguyen's thesis, "Bounded Reverse Mathematics", University of Toronto, 2008. Alexander Razborov (as have other proof complexity theorists) has some results on the weak theories needed to formalize the circuit complexity techniques and prove circuit complexity lower bounds. He obtains some unprovability results for weak theories, but the theories are considered too weak. All of these results are provable in $RCA_0$ (Simpson's base theory for Reverse Mathematics), so AFAIK we don't have independence results from strong theories (and in fact such independence results would have strong consequences, as Neel has mentioned; see Ben-David's work (and related results) on the independence of $\mathbf{P}$ vs. $\mathbf{NP}$ from $PA_1$, where $PA_1$ is an extension of $PA$).
{ "domain": "cstheory.stackexchange", "id": 651, "tags": "cc.complexity-theory, lo.logic, proof-complexity" }
Two extremely naive questions about the Kronecker problem from Geometric Complexity Theory
Question: I was reading the GCT IV paper (http://arxiv.org/pdf/cs/0703110v4.pdf) and while the representation theory is clear enough (by which I do not mean to say 'easy'!) the relation to complexity theory as stated in the introduction confuses me. I quote: The flip suggests that separating the classes P and NP will require solving difficult positivity problems in algebraic geometry and representation theory. A central positivity problem arising here is the following fundamental problem in the representation theory of the symmetric group. Let $S_r$ denote the symmetric group on $r$ letters and let $M_\nu$ denote the $S_r$-irreducible corresponding to the partition $\nu$. Given three partitions λ, µ, ν of r, the Kronecker coefficient $g_{\lambda \mu \nu}$ is defined to be the multiplicity of $M_\nu$ in the tensor product $M_\lambda \otimes M_\mu$. (...) Problem 1.1 (Kronecker problem). Find a positive combinatorial formula for the Kronecker coefficients $g_{\lambda \mu \nu}$. There are two precise related problems in complexity theory that arise in the flip: (1) find a (positive) #P formula for Kronecker coefficients, and harder, (2) find a polynomial time algorithm to determine whether a Kronecker coefficient is zero. My questions are: A) When they say polynomial time in question (2), what is the input size here? I found it easy to find a $O(d^3)$ algorithm where $d$ is the product of the dimensions of the vector spaces $M_\lambda, M_\mu, M_\nu$, so apparently they are thinking of something smaller than that. But what? The number $r$? How do we know that the dimensions are not polynomial in $r$? (It seems to me they are polynomial in $r$ but with an exponent that depends on the number of elements in the partition, so maybe that doesn't count as truly polynomial?) B) How can question (2) be harder than question (1) when, obviously, any solution to question (1) gives you a solution to question (2). 
(At least I find it hard to imagine a world where an algorithm to compute a number doesn't give you an algorithm to tell if that number is zero.) The $P$ in $\#P$ also means 'polynomial time', right? It looks like I am missing something really obvious here but what? Answer: A) The input here is the triple of partitions $(\lambda, \mu, \nu)$, represented as sequences of numbers in binary. The dimension of the irreducible representation $M_{\lambda}$ can actually be exponential in $r$; take, for example, the hook partition $n + 1, 1, 1, \dots, 1 \vdash 2n + 1$. Using the hook formula for the dimension we get $\frac{(2n)!}{(n!)^2}$. B) The $P$ in $\# P$ means polynomial time, but the $\#$ in $\#P$ means "count computation paths of a nondeterministic machine". An example of a $\#P$ problem is $\#SAT$ (count the number of satisfying assignments for a given formula), and the corresponding decision problem is $SAT$, which is probably not in $P$.
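The dimension claim in part A is easy to check with the hook length formula (implementation mine): for the hook partition $(n+1,1,\dots,1)\vdash 2n+1$ the formula gives $\binom{2n}{n}=\frac{(2n)!}{(n!)^2}$, which grows exponentially in $n$.

```python
from math import factorial

def hook_dimension(partition):
    """Dimension of the S_r irreducible for a partition of r, by hook lengths."""
    r = sum(partition)
    hooks = 1
    for i, row in enumerate(partition):
        for j in range(row):
            arm = row - j - 1                                  # cells to the right
            leg = sum(1 for lower in partition[i + 1:] if lower > j)  # cells below
            hooks *= arm + leg + 1
    return factorial(r) // hooks

n = 3
hook_partition = (n + 1,) + (1,) * n       # (4, 1, 1, 1), a partition of 7
print(hook_dimension(hook_partition))       # 20 == (2n)!/(n!)^2 for n = 3
```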
{ "domain": "cstheory.stackexchange", "id": 3608, "tags": "polynomial-time, gct" }
Why is there no dipole gravitational wave?
Question: I have read that "thanks to conservation of momentum" there is no dipole gravitational radiation. I am confused about this, since I cannot see the difference with e.m. radiation. Is this due to the non-existence of positive and negative gravi-charges? Is this due to the non-linearity of Einstein equations? How does the conservation of momentum enter here? An example of my confusion is below. Q: Why can't I shake a single mass, producing dipole gravi-radiation? A: You need another mass to shake it. Q: Isn't it the same with electromagnetism? Answer: The simple Newton-like explanation of the non-existence of dipole gravitational radiation is the following. The gravitational analog of the electric dipole moment is $$ \mathbf d = \sum_{\text{particles}}m_{p}\mathbf r_{p} $$ The first time derivative is $$ \dot{\mathbf d} =\sum_{\text{particles}}\mathbf p_{p}, $$ while the second one is $$ \ddot{\mathbf d} = \sum_{\text{particles}}\dot{\mathbf p}_{p} = 0, $$ indeed due to momentum conservation. "Magnetic" dipole gravitational radiation is analogously impossible due to the conservation law of angular momentum. Indeed, by definition it is the sum of cross products of positions with the corresponding momenta: $$ \mathbf{M} = \sum_{\text{particles}}\mathbf r_{p}\times \mathbf{p}_{p} = \sum_{\text{particles}}\mathbf{J}_{p} \Rightarrow \dot{\mathbf M} = \ddot{\mathbf M} = 0 $$ What about general relativity? As you know, the propagation of gravitational waves is described by the linearized Einstein equations for the perturbed metric $h_{\mu \nu}$, and in this limit they coincide with the EOM for helicity-2 massless particles in the presence of the stress-energy pseudotensor $\tau_{\mu \nu}$: $$ \square h_{\mu \nu} = -16 \pi \tau_{\mu \nu}, \quad \partial_{\mu}h^{\mu \nu} = 0, \quad \partial_{\mu}\tau^{\mu \nu} = 0 $$ Since $\tau^{\mu \nu}$ is conserved, this protects $h_{\mu \nu}$ from contributions from monopole or dipole moments of sources as well as from additional helicities. 
Formally, the deep difference between gravitational and EM radiation is that we associate the general relativity symmetry $g_{\mu \nu} \to g_{\mu \nu} + D_{(\mu}\epsilon_{\nu )}$ (the infinitesimal version of the transformation of $g_{\mu \nu}(x)$ under $x \to x + \epsilon$) with covariant stress-energy tensor conservation (indeed, tensor current conservation, from which we can extract conservation of the 4-momentum vector current), while EM gauge symmetry is associated with vector current conservation (from which we can extract the conservation of electric charge, a scalar quantity). So the corresponding conservation laws constrain different quantities; the nature of radiation in the EM and GR cases is different, with the first governed primarily by Maxwell's equations (and hence conservation of charge plays the key role), while the second is governed by the linearized Einstein equations (and hence momentum conservation is what matters). For example, heuristically speaking, due to conservation of EM charge, EM monopole radiation is impossible (it is expressed through the time derivative of charge), but nothing restricts dipole radiation. In GR, due to conservation of the momentum vector, which is related to the metric (and so to gravitational waves, in the sense I've shown above), dipole radiation is impossible. This, as anna v said in the comments, is connected with the fact that the EM field represents helicity-1 particles, while the linearized gravitational field coincides with the field which represents helicity-2 particles. As you see, such thinking doesn't require the presence of plus and minus masses.
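The Newtonian part of the argument ($\ddot{\mathbf d}$ vanishes by momentum conservation) can be checked numerically for an isolated self-gravitating system: internal forces come in action-reaction pairs, so $\sum_i \mathbf F_i = \ddot{\mathbf d} = 0$. A 2D toy sketch (not from the answer):

```python
def pairwise_gravity(positions, masses, G=1.0):
    """Net Newtonian gravitational force on each particle (2D for brevity)."""
    n = len(positions)
    forces = [[0.0, 0.0] for _ in range(n)]
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            dx = positions[j][0] - positions[i][0]
            dy = positions[j][1] - positions[i][1]
            r3 = (dx * dx + dy * dy) ** 1.5
            forces[i][0] += G * masses[i] * masses[j] * dx / r3
            forces[i][1] += G * masses[i] * masses[j] * dy / r3
    return forces

# d = sum_i m_i r_i  =>  d-ddot = sum_i F_i, which cancels pairwise:
pos = [[0.0, 0.0], [1.0, 0.0], [0.3, 0.8]]
m = [3.0, 1.0, 2.0]
F = pairwise_gravity(pos, m)
d_ddot = [sum(f[0] for f in F), sum(f[1] for f in F)]
print(d_ddot)  # ~[0, 0]: no mass-dipole radiation from an isolated system
```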
{ "domain": "physics.stackexchange", "id": 45454, "tags": "general-relativity, quantum-spin, gravitational-waves, dipole, multipole-expansion" }
How does the wax layer of a leaf get damaged by acid rain?
Question: I know that wax is not soluble in water so I think it doesn't react with or dissolve in water-based solvents. Some sources I have consulted say that it does react but slowly. So, confused by these differing opinions, I have no idea what the truth is. Answer: Being insoluble in water does not mean something cannot react with water or water-based solvents. Plant waxes are made up of a number of constituents; among them esters, alcohols, alkenes, fatty acids and carbonyls. Especially the esters can be hydrolysed by acidic conditions, creating a more hydrophilic area which water (and more acid) can use to penetrate the layer. Another probable reason is the fact that acidic proton transfers often involve the release of heat which will make the wax layer less viscous. All these things taken together mean that slowly but surely, the wax layer will be destroyed given enough acid rain.
{ "domain": "chemistry.stackexchange", "id": 6446, "tags": "acid-base, everyday-chemistry" }
How we can manipulate the momentum of a particle?
Question: Is there any way to affect a particle's momentum value? Answer: You apply a (net) force (i.e. push it). Recall that the generalized version of Newton's 2nd law is that force is equal to the rate of change in momentum: $$ \vec{F} = \frac{\mathrm{d} \vec{p}}{\mathrm{d}t} \,,$$ or in the language of impulse ($J$) $$ \vec{J} = \Delta\vec{p} = \langle \vec{F} \rangle \Delta t \,,$$ with $\langle \rangle$ meaning the time-average of the enclosed quantity. Written this way the law is completely valid in special relativity as well as in Newtonian mechanics, which is nice because much of particle physics occurs at relativistic velocities.
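The impulse–momentum relation above can be checked numerically. A minimal sketch follows; the force profile, duration, and mass are made-up illustrative values, not part of the original answer:

```python
# Minimal numeric check of the impulse-momentum relation J = delta_p = <F>*dt.
# Force profile F(t) = 6t(1-t) over 1 second and m = 2 kg are arbitrary choices.

N = 100_000                      # time steps
dt = 1.0 / N                     # the push lasts 1 second total
m = 2.0                          # kg
v = 0.0                          # particle starts at rest

forces = []
for k in range(N):
    t = k * dt
    F = 6.0 * t * (1.0 - t)      # a smooth push that ramps up and down (N)
    v += (F / m) * dt            # Newton's 2nd law: dv = (F/m) dt
    forces.append(F)

delta_p = m * v                            # change in momentum
impulse = sum(F * dt for F in forces)      # J = integral of F dt ~ sum F*dt
avg_force = impulse / 1.0                  # <F> over the 1 s interval

print(delta_p, impulse, avg_force)         # delta_p and impulse agree
```

Both `delta_p` and `impulse` come out to about 1.0 kg·m/s, since the momentum gained is exactly the accumulated impulse, regardless of the shape of F(t).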
{ "domain": "physics.stackexchange", "id": 13358, "tags": "momentum, subatomic" }
Why are Newman Projections drawn with 120 degrees when actually sp3 carbons are 109.5 degrees?
Question: Title says the question, pretty much. For example, in the Newman projection of ethane, the carbon hydrogen bond angles are $120$ degrees, not $109.5$ degrees, as they actually are in an $\text{sp}^3$ hybridized center. Why is this so, and does it have any effect on the arguments traditionally derived from Newman projections? Answer: It is a projection. Depending on the orientation of a water molecule with respect to the projection plane, the projected angle can be anywhere from zero to 180 degrees. Similarly, the angles in ethane projected to a 2D plane will be different from 109.5 degrees unless the three atoms making up the angle are in the plane: Other projections (Fischer projection) make the angles appear as right angles. In either case, the 3D angles remain at 109.5 degrees.
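The projection argument can be verified with a little vector geometry: assuming ideal tetrahedral C–H bonds (H–C–C angle of 109.47°) viewed down the C–C axis, the projected H–C–H separations come out to exactly 120°, which is why Newman projections draw them that way. A small sketch:

```python
import math

# Place a methyl carbon at the origin with the C-C bond along +z.
# Each C-H bond makes the ideal tetrahedral angle acos(-1/3) ~ 109.47 deg
# with the C-C axis; the three H's sit at azimuths 0, 120, 240 degrees.

theta = math.acos(-1/3)          # H-C-C angle in radians
chs = []
for phi in (0.0, 2*math.pi/3, 4*math.pi/3):
    chs.append((math.sin(theta)*math.cos(phi),
                math.sin(theta)*math.sin(phi),
                math.cos(theta)))

def angle(u, v):
    dot = sum(a*b for a, b in zip(u, v))
    nu = math.sqrt(sum(a*a for a in u))
    nv = math.sqrt(sum(a*a for a in v))
    return math.degrees(math.acos(dot/(nu*nv)))

angle3d = angle(chs[0], chs[1])          # true H-C-H angle in 3D
angle2d = angle(chs[0][:2], chs[1][:2])  # projected onto the Newman plane

print(round(angle3d, 1), round(angle2d, 1))   # 109.5  120.0
```

The 3D angle stays at 109.5°, but dropping the z-component (looking straight down the C–C bond) leaves three vectors 120° apart.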
{ "domain": "chemistry.stackexchange", "id": 13064, "tags": "molecular-structure, conformers" }
Meaning/veracity of "each state has [..] or two outgoing ϵ transitions" in Thompson's construction
Question: The dragon book lists properties of an NFA N(r) created using Thompson's construction, in particular: Each state of N(r) other than the accepting state has either one outgoing transition on a symbol in Σ or two outgoing transitions, both on ϵ. However, the same book shows NFAs created using Thompson's construction that seem to contradict that very statement, like this one: Here, nodes 3 and 5, which are not accepting states, have only one outgoing ϵ transition each. What am I missing? Answer: Thanks @Highheath in the comments, you are right. At first I found it hard to believe that I should be the first one to have found such an error in the 36+ years since this - apparently very popular - book has been out. It has been in its second edition since 2008, but I would have expected this part to have been in the first edition as well. Be that as it may, the error shows up in the errata:
{ "domain": "cs.stackexchange", "id": 21292, "tags": "automata, finite-automata, regular-expressions" }
Disposable Heroes
Question: I had a bit of an issue with my last piece of code, having to do with cleaning up resources. I needed a way to ensure the database connection was always properly closed, even if there still were instances of an IPresenter or IRepository still floating around. IDisposable class module Option Explicit Public Sub Dispose() End Sub So I made a minor tweak to my UnitOfWork class, and implemented this IDisposable interface: UnitOfWork class module Option Explicit Private Const CONNECTION_STRING As String = "DRIVER={MySQL ODBC 5.1 Driver};UID=;PWD=;SERVER=;DATABASE=;PORT=;" Private repositories As New Dictionary Private adoConnection As New ADODB.Connection Private disposed As Boolean Implements IUnitOfWork Implements IDisposable Private Sub Class_Initialize() adoConnection.ConnectionString = CONNECTION_STRING adoConnection.Open adoConnection.BeginTrans End Sub Private Sub Class_Terminate() If Not disposed Then Dispose End Sub Public Sub Dispose() Set repositories = Nothing If Not adoConnection Is Nothing Then If adoConnection.State = adStateOpen Then adoConnection.RollbackTrans 'rollback any uncommitted changes adoConnection.Close End If Set adoConnection = Nothing End If disposed = True End Sub Private Sub IDisposable_Dispose() If Not disposed Then Dispose End Sub Public Sub AddRepository(ByVal key As String, ByRef repo As IRepository) repo.SetConnection adoConnection repositories.Add key, repo End Sub Public Property Get Repository(ByVal key As String) As IRepository Set Repository = repositories(key) End Property Public Sub Commit() adoConnection.CommitTrans adoConnection.BeginTrans End Sub Public Sub Rollback() adoConnection.RollbackTrans adoConnection.BeginTrans End Sub Private Sub IUnitOfWork_AddRepository(ByVal key As String, ByRef repo As IRepository) AddRepository key, repo End Sub Private Sub IUnitOfWork_Commit() Commit End Sub Private Sub IUnitOfWork_Dispose() IDisposable_Dispose End Sub Private Property Get IUnitOfWork_Repository(ByVal key As 
String) As IRepository Set IUnitOfWork_Repository = Repository(key) End Property Private Sub IUnitOfWork_Rollback() Rollback End Sub Now, in an ideal world, I would have made my presenters implement IDisposable too. But vba isn't very inheritance-friendly (i.e. I can't have IPresenter "extend" IDisposable, and implementing more than 1 interface on a class is possible, but a pain in the neck for the client code), and so I modified the IPresenter interface as follows - basically that's just for documentation purposes, all that really matters is that there's a Public Sub Dispose() method on the IPresenter interface: IPresenter class module Option Explicit Implements IDisposable ... Public Sub Dispose() End Sub Private Sub IDisposable_Dispose() Dispose End Sub I don't need IPresenter to implement IDisposable - in fact, IDisposable_Dispose will never even be called: this just being for documentation purposes, so I'm wondering, does it actually help, or does it make things more confusing than they need to be? Should I put a comment there? 
The IPresenter implementations have been modified as follows (this code is identical in all implementations, if that's a smell): Private Sub Class_Terminate() Dispose End Sub Public Sub Dispose() If Not View Is Nothing Then Unload View If Not this.UnitOfWork Is Nothing Then this.UnitOfWork.Dispose If Not this.DetailsPresenter Is Nothing Then this.DetailsPresenter.Dispose Set this.UnitOfWork = Nothing Set this.View = Nothing Set this.DetailsPresenter = Nothing End Sub Private Sub IPresenter_Dispose() Dispose End Sub These disposable heroes make this code possible: Public Sub MaintainCustomerGroups() On Error GoTo ErrHandler If IsBusy Then Exit Sub Dim presenter As New CustomerGroupsPresenter Dim childPresenter As New CustomerGroupDetailsPresenter Dim selectorPresenter As New CustomerGroupSelectorPresenter Dim uow As New UnitOfWork uow.AddRepository "CustomerGroups", New CustomerGroupsRepository uow.AddRepository "Customers", New CustomerRepository Set presenter.UnitOfWork = uow Set childPresenter.UnitOfWork = uow Set selectorPresenter.UnitOfWork = uow SetupSimplePresenter presenter, _ GetResourceString("CustomerGroupsTitle"), _ GetResourceString("CustomerGroupsInstructionsText"), _ CRUD + ShowDetails SetupSimplePresenter childPresenter, _ GetResourceString("CustomerGroupCustomersTitle"), _ GetResourceString("CustomerGroupCustomersInstructionsText"), _ ShowDetails + Refresh SetupSelectorPresenter selectorPresenter, _ GetResourceString("CustomerGroupSelectorTitle"), _ GetResourceString("CustomerGroupSelectorInstructionsText") Set presenter.DetailsPresenter = childPresenter Set childPresenter.DetailsPresenter = selectorPresenter presenter.Show CleanExit: presenter.Dispose Exit Sub ErrHandler: MsgBox Err.description, vbCritical, GENERIC_ERR_MSG Resume CleanExit Resume End Sub Notice CleanExit is only calling Dispose on the presenter instance - that's all that needs to happen for the UnitOfWork to close its connection, and I don't need to be doing this: Set 
presenter.UnitOfWork = Nothing Set presenter.DetailsPresenter = Nothing Set childPresenter.UnitOfWork = Nothing Set childPresenter.DetailsPresenter = Nothing Set selectorPresenter.UnitOfWork = Nothing Set presenter = Nothing Set childPresenter = Nothing Set selectorPresenter = Nothing Set uow = Nothing ...just for Class_Terminate() to be called in the UnitOfWork class, and my database connection properly closed. So, have I cleaned things up, or made a mess? Answer: Type checking/casting is a bit of a pain in VBA, but with a little static helper proper implementation gets much simpler: VERSION 1.0 CLASS BEGIN MultiUse = -1 'True END Attribute VB_Name = "Disposable" Attribute VB_GlobalNameSpace = False Attribute VB_Creatable = False Attribute VB_PredeclaredId = True Attribute VB_Exposed = False Option Explicit Public Sub Dispose(ByRef obj As Object) Dim instance As IDisposable If obj Is Nothing Then Exit Sub If Not TypeOf obj Is IDisposable Then Exit Sub Set instance = obj instance.Dispose End Sub This simple class allows you to get rid of the Public Sub Dispose() methods on IPresenter and IUnitOfWork interfaces, and allows you to do this: Option Explicit Private Type tPresenter UnitOfWork As IUnitOfWork DetailsPresenter As IPresenter View As IView End Type Private this As tPresenter Implements IPresenter Implements IDisposable ...and implement the IDisposable interface cleanly (i.e. without exposing a Dispose() instance member, and without leaking that method into the IPresenter & IUnitOfWork interfaces). In other words, it turns a leaky abstraction into a good one. 
The Dispose method in IPresenter implementations can be made Private, and go from this: Public Sub Dispose() If Not View Is Nothing Then Unload View If Not this.UnitOfWork Is Nothing Then this.UnitOfWork.Dispose If Not this.DetailsPresenter Is Nothing Then this.DetailsPresenter.Dispose Set this.UnitOfWork = Nothing Set this.View = Nothing Set this.DetailsPresenter = Nothing End Sub To that: Private Sub Dispose() If Not View Is Nothing Then Unload View Disposable.Dispose this.UnitOfWork Disposable.Dispose this.DetailsPresenter Set this.UnitOfWork = Nothing Set this.View = Nothing Set this.DetailsPresenter = Nothing End Sub And the calling code can now clean up like this: CleanExit: Disposable.Dispose presenter Exit Sub And that leaves you with segregated interfaces that don't pretend to be inheriting one another.
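For comparison, the whole dance that VBA needs the IDisposable interface, the disposed flag, and Class_Terminate for is what Python expresses natively with the context-manager protocol. The sketch below is a hypothetical analogue, not a port of the actual VBA classes:

```python
# Hypothetical Python analogue of the UnitOfWork disposal pattern above.
# `with UnitOfWork(conn):` guarantees dispose() even on error, which is the
# role the CleanExit/ErrHandler labels play in the VBA procedure.

class UnitOfWork:
    def __init__(self, connection):
        self.connection = connection
        self.repositories = {}
        self._disposed = False

    def add_repository(self, key, repo):
        repo.connection = self.connection
        self.repositories[key] = repo

    def dispose(self):
        if self._disposed:
            return                      # safe to call twice, like the VBA flag
        self.connection.rollback()      # roll back anything uncommitted
        self.connection.close()
        self.repositories.clear()
        self._disposed = True

    def __enter__(self):
        return self

    def __exit__(self, exc_type, exc, tb):
        self.dispose()
        return False                    # don't swallow exceptions
```

The idempotence guard mirrors the `If Not disposed Then` checks in the VBA version; the `__exit__` hook plays the part of deterministic cleanup that VBA has to simulate with Class_Terminate plus explicit Dispose calls.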
{ "domain": "codereview.stackexchange", "id": 9281, "tags": "design-patterns, vba, repository" }
Check if given number can be expressed as sum of 2 prime numbers
Question: Similar problem: Sum Of Prime I've done the given problem in roughly \$\mathcal{O}(n\sqrt{n})\$. How can I improve this algorithm? #include <stdio.h> int isPrime(long long int n) { long long int i; for(i=2;i*i<=n;i++) { if(n%i==0) return 0; } return 1; } int main(void) { int i, t, prime; long long int n, start, end; scanf("%d", &t); for(i=0;i<t;i++) { prime = 0; scanf("%lld", &n); if(n<4) printf("No\n"); else { if(n%2==0) printf("Yes\n"); else { start = 2; end = n-2; while(start<=end) { if(isPrime(start) && isPrime(end)) { printf("Yes\n"); prime = 1; break; } else { start++; end--; } } if(prime==0) printf("No\n"); } } } return 0; } Answer: Some general remarks: Use more horizontal space, for(i=0;i<t;i++) is difficult to read. Use true and false from <stdbool.h> for boolean values, not the integers 1 and 0. Declare variables at the nearest scope where they are used, e.g. in loops: for (long long i = 2; i * i <= n; i++) { ... } Always use braces { } in if-statements, even if there is only a single statement to be executed in the if- or else-case. Braces are easily forgotten if you add more statements later. long long int is identical to long long. I prefer the latter, but that is a matter of taste (or your coding style guidelines). Now to your code: The isPrime() function treats n = 1 as a prime number, which it isn't. Separate the computation from the I/O. That makes the code more organized and allows you to add test cases easily. So the program should look like this: #include <stdio.h> #include <stdbool.h> bool isPrime(long long n) { if (n < 2) { return false; } for (long long i = 2; i * i <= n; i++) { if (n % i == 0) { return false; } } return true; } bool isSumOfTwoPrimes(long long n) { // ... check if `n` is the sum of two prime numbers ...
} int main(void) { int t; scanf("%d", &t); for (int i = 0; i < t; i++) { long long n; scanf("%lld", &n); if (isSumOfTwoPrimes(n)) { printf("Yes\n"); } else { printf("No\n"); } } return 0; } Printing the result can also be done with a single statement: puts(isSumOfTwoPrimes(n) ? "Yes" : "No"); You return true for all even numbers, which is correct because Goldbach's conjecture has already been verified for all even numbers in your range 1<=n<=1000000. For odd numbers, the calculation can be simplified considerably. If n = p + q is odd then p is even and q is odd, or vice versa. Since 2 is the only even prime number, this reduces to checking whether n - 2 is prime: bool isSumOfTwoPrimes(long long n) { if (n < 4) { return false; } if (n % 2 == 0) { return true; } return isPrime(n - 2); } Note another advantage of the separate function: You can "early-return", and that makes the "state variable" int prime from your code obsolete. If the program is run with a large number of test cases, then a further improvement would be to pre-compute all prime numbers in the given range, for example with the Sieve of Eratosthenes.
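The simplified logic from the answer can be cross-checked against the brute-force definition for small n; a Python sketch (the brute-force helper is added here for verification, it is not part of the answer):

```python
# The answer's simplification: even n >= 4 always works (Goldbach, verified
# far beyond this range); odd n works only if n - 2 is prime.

def is_prime(n):
    if n < 2:
        return False
    i = 2
    while i * i <= n:
        if n % i == 0:
            return False
        i += 1
    return True

def is_sum_of_two_primes(n):
    if n < 4:
        return False
    if n % 2 == 0:
        return True
    return is_prime(n - 2)

# Brute-force cross-check straight from the definition
def brute(n):
    return any(is_prime(p) and is_prime(n - p) for p in range(2, n - 1))

assert all(is_sum_of_two_primes(n) == brute(n) for n in range(2, 500))
```

The odd case collapses because two odd primes sum to an even number, so an odd sum must use 2 as one of the two primes — exactly the n − 2 check.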
{ "domain": "codereview.stackexchange", "id": 21553, "tags": "c, primes" }
Can a system perform work on a reversible work source while maintaining constant volume?
Question: From Exercise 4.5-10 of Thermodynamics and an Introduction to Thermostatistics by Callen: Two identical bodies each have heat capacities (at constant volume) of $$C(T)=a / T$$ The initial temperatures are $ T_{10} $ and $ T_{20} $, with $ T_{20}>T_{10} $. The two bodies are to be brought to thermal equilibrium with each other (maintaining both volumes constant) while delivering as much work as possible to a reversible work source. What is the final equilibrium temperature and what is the maximum work delivered to the reversible work source? I have been suggested using $$\bigg(\frac{\partial S}{\partial T}\bigg)_V = \bigg(\frac{\partial U}{\partial T}\bigg)_V \bigg(\frac{\partial S}{\partial U} \bigg)_V = \frac{C(T)}{T},$$ calculating $$\Delta S = \int_{T_i}^{T_f} \frac{\partial S}{\partial T} dT = \int_{T_i}^{T_f} \frac{C(T)}{T} dT,$$ and setting $\Delta S = 0$ as by the theorem of maximal work. But I can't get comfortable with the fact that the system is supposed to perform work while simultaneously maintaining constant volume. In other words, if $P dV = 0$, how can $\Delta U$ be nonzero if the system is thermally isolated? Answer: If we apply the first and second laws of thermodynamics to the combined system consisting of the two bodies and the engine, we obtain: $$\Delta U=\Delta U_H+\Delta U_C+\Delta U_E=-W_E\tag{1}$$and$$\Delta S=\Delta S_H+\Delta S_C+\Delta S_E=0\tag{2}$$ where the subscripts H, C, and E refer to the hot body, the cold body, and the engine working fluid, respectively. Since the engine is assumed to be working in a cycle, we must have that $\Delta U_E=0$ and $\Delta S_E=0$, so Eqns. 
1 and 2 reduce to: $$\Delta U_H+\Delta U_C=-W_E\tag{3}$$and$$\Delta S_H+\Delta S_C=0\tag{4}$$If $T_f$ is the final temperature of the hot and cold bodies when no more work can be done, for the specified heat capacity equation, we have that:$$\Delta U_H=\int_{T_{20}}^{T_f}{CdT}=a\ln{(T_f/T_{20})}\tag{5a}$$$$\Delta U_C=\int_{T_{10}}^{T_f}{CdT}=a\ln{(T_f/T_{10})}\tag{5b}$$$$\Delta S_H=\int_{T_{20}}^{T_f}{\frac{C}{T}dT}=a\left(\frac{1}{T_{20}}-\frac{1}{T_f}\right)\tag{6a}$$and$$\Delta S_C=\int_{T_{10}}^{T_f}{\frac{C}{T}dT}=a\left(\frac{1}{T_{10}}-\frac{1}{T_f}\right)\tag{6b}$$Eqns. 6 together with Eqn. 4 determines the final temperature of the bodies. This can then be substituted into Eqns. 5 and 3 to determine the maximum work done by the engine on the surroundings.
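Carrying the algebra one step further (not spelled out in the answer): Eqns. 4, 6a and 6b give $2/T_f = 1/T_{10} + 1/T_{20}$, i.e. $T_f$ is the harmonic mean of the initial temperatures, and Eqns. 3, 5a and 5b then give $W_E = -a\ln\!\big(T_f^2/(T_{10}T_{20})\big) \ge 0$. A numeric check with arbitrary illustrative values:

```python
import math

# Check: T_f from dS_H + dS_C = 0 is the harmonic mean, entropy balances,
# and the delivered work is non-negative. a, T10, T20 are arbitrary values.

a, T10, T20 = 5.0, 200.0, 400.0

Tf = 2 * T10 * T20 / (T10 + T20)                      # harmonic mean

dS = a * (1/T20 - 1/Tf) + a * (1/T10 - 1/Tf)          # Eqns 6a + 6b
dU = a * math.log(Tf / T20) + a * math.log(Tf / T10)  # Eqns 5a + 5b
W = -dU                                               # Eqn 3

print(Tf, dS, W)    # Tf ~ 266.67 K, dS = 0, W > 0
```

W is positive because the harmonic mean never exceeds the geometric mean, so $T_f^2 \le T_{10}T_{20}$ and the logarithm is negative.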
{ "domain": "physics.stackexchange", "id": 62708, "tags": "homework-and-exercises, thermodynamics, work" }
Problems with complexity between P and NP that have NP-complete generalizations
Question: Can anyone list some well-known problems that satisfies the following conditions: 1. has a generalization problem that is known to be NP-complete 2. has not been proved to be NP-complete nor has a known polynomial time solution. Answer: Most famously: Graph Isomorphism, and Dominating Set on Tournaments. Generalizations are natural.
{ "domain": "cstheory.stackexchange", "id": 1472, "tags": "cc.complexity-theory, np-hardness" }
Using PHP goto Labels for code folding
Question: I've been using PHP labels in my Controllers for code folding. PHP labels are actually used with goto statements. But I'm using labels only because it's so much easier to fold my code in different IDEs using labels + curly braces, and I can use labels as titles for different sections of the code. Here's an example. Take a look at SettingCartFields label. <?php namespace App\Http\Controllers; use App\Cart; use Illuminate\Http\Request; use App\Http\Controllers\Controller; class CartController extends Controller { public function create(Request $request) { $cart = new Cart; SettingCartFields: { $cart->field1 = $request->field1; $cart->field2 = $request->field2; $cart->field3 = $request->field3; $cart->field4 = $request->field4; $cart->field5 = $request->field5; $cart->field6 = $request->field6; } $cart->save(); } } I have a few questions, are there any performance issues with this? Am I introducing some kind of overhead by adding a lot of labels in my code? Is there any better way to do this? Answer: The main issue I have with your approach is that it is non-standard and purely for the way that you work. Others looking at your code may be a bit confused as to what you are doing and why. In your example, there is (IMHO) little benefit in being able to fold the code as there is not really enough code to need to fold the code up, but in larger code segments there may be a use for it. For me - it looks as though you are in the initial stages of identifying logically grouped functionality within your code, something that could then be extended by extracting these logical groups into new class methods. The exact approach I would take depends on how much other processing you have with the cart either pre or post this main chunk of code. If there is a lot of other processing around what the cart does, then you could just pass the cart and the request to a new method and get something like... 
class CartController extends Controller { public function create(Request $request) { $cart = new Cart; $this->setCartFields ( $cart, $request ); $cart->save(); } private function setCartFields ( Cart $cart, Request $request ) { $cart->field1 = $request->field1; $cart->field2 = $request->field2; $cart->field3 = $request->field3; $cart->field4 = $request->field4; $cart->field5 = $request->field5; $cart->field6 = $request->field6; } } If the data from the request forms the basis of the data for the cart, then this could instead create its own cart, initialise the data and return the newly created cart for further processing... class CartController extends Controller { public function create(Request $request) { $cart = $this->createCart ( $request ); $cart->save(); } private function createCart ( Request $request ) : Cart { $cart = new Cart; $cart->field1 = $request->field1; $cart->field2 = $request->field2; $cart->field3 = $request->field3; $cart->field4 = $request->field4; $cart->field5 = $request->field5; $cart->field6 = $request->field6; return $cart; } } The problem being that you could eventually end up with all of the processing in the newly created method and rather than relieve the problem you have just moved it. This is only something you can decide on a per instance basis.
{ "domain": "codereview.stackexchange", "id": 36875, "tags": "php, laravel, slim, symfony4" }
What components of safety should be included in a chemistry laboratory experiment conclusion?
Question: The focus of my question here is this: In a laboratory there is a Bunsen burner, a hot plate, hydrochloric acid, and concentrated ammonia. What would you mention about safety precautions? As students of chemistry and science, we often need to write detailed conclusions about our laboratory experiments. This eventually becomes second-nature, but having an idea of which safety measures to include is useful to ensure that we have not left anything out. I believe that the following are some of the most important components of any well written conclusion in a lab entry, something that your lab instructor will read and grade you on. Purpose: Explain the goal and purpose of the experiment in a clear and concise manner. Findings: Present a reasonable interpretation of, and logical explanation for, all findings pertaining to the problem and stated purpose. Discussion: Discuss possible sources of error in detail, including their effect on the results and ways of avoiding them in the future. Reference experimental findings and explain the known/expected results we were looking for. Mention and discuss reasons for trends, if any. In particular my instructors last year were often interested in: Safety: This is often what I always got marked down for. I would explain that hydrochloric acid is a caustic substance and should be treated with care so it does not get on your skin, by wearing clothing that does not expose skin and closed-toe shoes, and to always carry out the experiment with safety goggles securely fastened. It never seems to be enough for them, even if I mention eye flush and chemical shower in case of emergencies. Should I mention to not snort or freebase it? /end sarcasm Have I missed any safety components that should be included in a "complete" conclusion for the average experiment? What else might I have left out that instructors tend to look for?
Answer: Perhaps your instructors want more specific information about the hazard itself, rather than generic safety advice. Some examples: 1) HCl: Strong acid, highly corrosive (nitpick: caustic is usually reserved for strong bases, corrosive for acids), emits corrosive and toxic fumes (not all acids do) 2) 29% NH4OH: Caustic, emits strong fumes (lachrymator--makes your eyes water), work in a fume hood, outdoors or in a well-ventilated area 3) Bunsen burner: No open flames in the presence of flammable vapors or liquids with high vapor pressure (much bigger hazard than just burning yourself) 4) Hotplate: Hotplate looks the same whether hot or cold, so assume it is hot, take care with flammable liquids with low flash points Again, these are just examples, but I tried to tailor the answer to the item in question.
{ "domain": "chemistry.stackexchange", "id": 176, "tags": "experimental-chemistry, safety" }
Dual of the TDSE
Question: Quite a quick and hopefully simple question. The TDSE takes the form $$i\hbar\frac{\partial\lvert\psi\rangle}{\partial t}=H\lvert\psi\rangle$$ and so if we take the dual of this to find the time evolution of a bra, we find $$-i\hbar\frac{\partial\langle\psi\lvert}{\partial t}=\langle\psi\lvert H$$ which is all pretty obvious to me, apart from the time derivative. If we want to take the dual of $\frac{\partial\lvert\psi\rangle}{\partial t}$ why are we simply allowed to bypass the derivative operator and take the dual of the ket? On the right hand side we had to reverse the order of operator and ket and then take their respective duals (H is hermitian). Why doesn't that apply here, so we get $\langle\psi\lvert\frac{\partial}{\partial t}$ instead? Answer: Why doesn't that apply here, so we get $\langle\psi\lvert\frac{\partial}{\partial t}$ instead? For one thing, the expression: $$ \langle\psi\lvert\frac{\partial}{\partial t} $$ makes no sense because the derivative is not operating on anything... To explain further: The quantity $$ \frac{\partial|\psi\rangle}{\partial t}\;, $$ is itself a ket (you could call it, e.g., "$|\chi\rangle$") because it is the limit of the difference of two kets $$ \frac{\partial|\psi\rangle}{\partial t}\equiv\lim_{\delta\to0}\frac{|\psi(t+\delta)\rangle-|\psi(t)\rangle}{\delta}\equiv|\chi\rangle\;. $$ And so, the dual is $$ \langle\chi|\equiv\lim_{\delta\to0}\frac{ \langle\psi(t+\delta)|-\langle\psi(t)|}{\delta}\equiv\frac{\partial\langle\psi|}{\partial t}\;. $$
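The dual equation can also be sanity-checked numerically in a finite-dimensional toy model: with a Hermitian H and $|\psi(t)\rangle = e^{-iHt/\hbar}|\psi_0\rangle$, the finite-difference derivative of the bra should satisfy $-i\hbar\,\partial_t\langle\psi| = \langle\psi|H$. The 2×2 Hamiltonian and initial state below are arbitrary illustrative choices (ħ = 1):

```python
import numpy as np

# Toy check of -i * d<psi|/dt = <psi| H (hbar = 1) for a 2x2 Hermitian H.

H = np.array([[1.0, 0.5], [0.5, -1.0]])            # Hermitian by construction
w, V = np.linalg.eigh(H)

def U(t):                                           # U(t) = exp(-i H t)
    return V @ np.diag(np.exp(-1j * w * t)) @ V.conj().T

psi0 = np.array([1.0, 0.0], dtype=complex)
t, dt = 0.7, 1e-6

bra = lambda s: (U(s) @ psi0).conj()                # components of <psi(s)|
dbra_dt = (bra(t + dt) - bra(t - dt)) / (2 * dt)    # central difference

lhs = -1j * dbra_dt
rhs = bra(t) @ H
print(np.max(np.abs(lhs - rhs)))                    # tiny: the two sides agree
```

The agreement holds to finite-difference accuracy, mirroring the limit-of-differences argument in the answer.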
{ "domain": "physics.stackexchange", "id": 20575, "tags": "quantum-mechanics, mathematics" }
Highest frequency of collected force signal
Question: I have collected force at the footrest during paddling on two occasions. During one occasion I accidentally collected the force signals at 150 Hz instead of 1500 Hz. During the other time the data was collected at 1500 Hz. I now want to calculate the highest frequency of the signal collected at 1500 Hz to be able to know if the data collected at 150 Hz is within the Nyquist criterion. How do I calculate the highest frequency in my signal? Answer: In order to determine the spectral content of your signal you can compute and display its Fourier transform. Assuming you have $N$ samples of the signal $x[n]$ sampled at $F_s = 1500$ Hz, then you can simply determine its frequency content by the following Matlab command: Xk = fft(x, N); Then you can plot its spectrum to see if there is significant energy in the frequency region above $150/2 = 75$ Hz by the following code: figure, plot ( linspace(0,Fs,N) , abs(Xk) ) ; xlabel('Frequency [Hz]') title('DFT Spectrum Magnitude |X[k]| of x[n]'); If there is any significant energy above $75$ Hz then there will be significant aliasing. However note that the sensor might have applied a $75$ Hz lowpass filter before sampling the signal, so that you can still avoid aliasing, but your signal will be distorted, nevertheless, as it would have been sampled from a lowpass filtered version of the true signal.
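A Python/NumPy equivalent of the Matlab check, turned into a single number: the fraction of spectral energy above 75 Hz. The synthetic "paddling" signal below is a stand-in for the real force data (stroke frequency, harmonics and noise level are made-up values):

```python
import numpy as np

# Estimate how much of the signal's energy lies above 75 Hz, the Nyquist
# frequency of the accidental 150 Hz recording.

Fs = 1500                                   # sampling rate of the good trial
t = np.arange(0, 10, 1 / Fs)                # 10 s of data
# toy signal: 1.2 Hz stroke + a harmonic + a little broadband noise
x = (np.sin(2 * np.pi * 1.2 * t) + 0.4 * np.sin(2 * np.pi * 3.6 * t)
     + 0.01 * np.random.default_rng(0).standard_normal(t.size))

X = np.fft.rfft(x)                          # one-sided spectrum
f = np.fft.rfftfreq(x.size, d=1 / Fs)       # frequency axis in Hz

energy = np.abs(X) ** 2
frac_above_75 = energy[f > 75].sum() / energy.sum()
print(f"energy above 75 Hz: {frac_above_75:.2%}")
```

If that fraction is negligible, the 150 Hz recording satisfies the Nyquist criterion in practice; if not, the 150 Hz data is aliased.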
{ "domain": "dsp.stackexchange", "id": 5880, "tags": "matlab, fft, frequency, nyquist" }
Why are radicals unstable?
Question: Why are unpaired electrons especially reactive? Does pairing electrons decrease the reactivity of the electrons? But then doesn't forcing two electrons into the same orbital cost energy? Answer: Free radicals are usually very reactive. If we bring two free radicals or a free radical and a molecule with electrons available for bonding together a bond will usually form. Yes, it does cost energy to bring those electrons close together. But look at the payoff, a new chemical bond is formed and this is usually quite exothermic (stabilizing), usually more than enough energy to offset the energy cost of bringing those electrons close enough to form a new bond. Because of the exothermicity of bond formation the reaction profile usually has a small activation energy. Translation: a small activation energy means a fast reaction. A caveat, at the outset I said that "free radicals are usually reactive". There are many known free radicals that are quite stable. This is usually due to steric factors (the reactive free radical center is surrounded by bulky groups and just not sterically accessible for reaction) or electronic factors (some free radicals exist in very large delocalized systems, hence the spin density at any one atom in the system is so small that reaction is unlikely). One further point, most of what I've said relates to free radicals, which was the title of your question, however you did mention "unpaired electrons" in the beginning of your post. Molecules with unpaired electrons (note the plural, electrons) are different than free radicals with (usually) just one unpaired electron. Some molecules like $\ce{O2}$ are quite stable even though they have two unpaired electrons.
{ "domain": "chemistry.stackexchange", "id": 1367, "tags": "bond, radicals" }
How is silica gel made?
Question: How is silica gel prepared from sand (after extracting it). How many ways can it be used? Is there a way to make it without an oven? Answer: Sand may be dissolved in concentrated and hot solutions of NaOH. The dissolution is slow, and pressure may help this action. At the end of the reaction, a viscous solution of so-called sodium silicate is obtained. The formula of the solute is not well defined, and can be described as $(Na_2SiO_3)_n$. It is made of a long polymeric chain similar to the formula $HO-Si-O-(Si-O)_n-Si-O-Si-OH$, with supplementary negatively charged Oxygen atoms above and under each Silicon atoms. The whole chain is negatively charged. And of course, there are Sodium ions $Na^+$ in the vicinity of each negatively charged Oxygen atoms. To make the description more complete, one should mention that from time to time there are side branches of Si-O-Si-O chains, which get fixed on the Si atoms of the main chain. Furthermore there are no $Si=O$ bonds in this formula. If you add some $HCl$ solution, you produce the precipitation of an insoluble gel whose formula has the same skeleton as the previous solution $HO-Si-O-(Si-O)_n-Si-O-Si-OH$. But the Oxygen atoms above and under the main chain are now holding a H atom, and the whole chain is neutral. This gel can be filtrated and washed with large amounts of water. This removes the ions $Na^+$ and $Cl^-$. The result is a gel containing more water than silica. Usually this gel is then heated and the water is evaporated. An amorphous white substance is obtained with a formula between $H_2SiO_3$ and $SiO_2$. This is the common "silicagel" used in the labs for dehydrating purposes, because it has a tendency to adsorb water from the atmosphere. The fresh gel can also be dried in a vacuum, instead of by heating. In this case, the silica gel is transformed into an extremely light substance, lighter than snow, which has a tendency to float in the air. 
It is a dangerous substance if you breathe it in, because it will stay in the nose and the lungs without being able to get out. It may induce cancers, like asbestos.
{ "domain": "chemistry.stackexchange", "id": 13319, "tags": "experimental-chemistry" }
Why is it so dark during a solar eclipse?
Question: At a total solar eclipse the sun is barely covered, like right after sunset. So why is it much darker than right after sunset (which allows us to see the corona)? Answer: The most significant difference is that in a total eclipse the moon obstructs the sun's light outside of earth's atmosphere whereas at sunset, the light is obstructed by the horizon within the atmosphere. With the sun just below the horizon, sunlight still hits the atmosphere above the horizon and even above you, and it's scattered all around. Even around the total eclipse's trail there's partial shadowing, considerably reducing the amount of sunlight into the atmosphere that could be scattered. (image from Wikipedia)
{ "domain": "physics.stackexchange", "id": 85274, "tags": "visible-light, astronomy, atmospheric-science, moon, eclipse" }
Given Newton's third law, why are things capable of moving?
Question: Given Newton's third law, why is there motion at all? Should not all forces even themselves out, so nothing moves at all? When I push a table using my finger, the table applies the same force onto my finger like my finger does on the table just with an opposing direction, nothing happens except that I feel the opposing force. But why can I push a box on a table by applying force ($F=ma$) on one side, obviously outbalancing the force the box has on my finger and at the same time outbalancing the friction the box has on the table? I obviously have the greater mass and acceleration as for example the matchbox on the table and thusly I can move it, but shouldn't the third law prevent that from even happening? Shouldn't the matchbox just accommodate to said force and applying the same force to me in opposing direction? Answer: I think it's a great question, and enjoyed it very much when I grappled with it myself. Here's a picture of some of the forces in this scenario.$^\dagger$ The ones that are the same colour as each other are pairs of equal magnitude, opposite direction forces from Newton's third law. (W and R are of equal magnitude in opposite directions, but they're acting on the same object - that's Newton's first law in action.) While $F_\text{matchbox}$ does press back on my finger with an equal magnitude to $F_\text{finger}$, it's no match for $F_\text{muscles}$ (even though I've not been to the gym in years). At the matchbox, the forward force from my finger overcomes the friction force from the table. Each object has an imbalance of forces giving rise to acceleration leftwards. The point of the diagram is to make clear that the third law makes matched pairs of forces that act on different objects. Equilibrium from Newton's first or second law is about the resultant force at a single object. $\dagger$ (Sorry that the finger doesn't actually touch the matchbox in the diagram. 
If it had, I wouldn't have had space for the important safety notice on the matches. I wouldn't want any children to be harmed because of a misplaced force arrow. Come to think of it, the dagger on this footnote looks a bit sharp.)
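The force bookkeeping in the answer can be made concrete with a few lines of arithmetic. All the numbers below are invented for illustration; the point is only that the reaction force acts on the finger, so it never enters the matchbox's own force balance:

```python
# Illustrative numbers only (not from the answer). Newton's third law
# pairs act on DIFFERENT objects; each object accelerates according to
# the net force on it alone.
f_finger_on_box = 0.5    # N, finger pushes the matchbox (assumed)
f_friction_on_box = 0.1  # N, table friction on the matchbox (assumed)
m_box = 0.01             # kg, matchbox mass (assumed)

# The reaction (box on finger) is also 0.5 N, but it acts on the finger,
# so it does not appear in the matchbox's force balance below.
net_force_on_box = f_finger_on_box - f_friction_on_box
a_box = net_force_on_box / m_box
print(round(a_box, 6))  # 40.0 m/s^2: the box moves despite the third law
```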
{ "domain": "physics.stackexchange", "id": 83826, "tags": "newtonian-mechanics, forces, everyday-life, free-body-diagram, faq" }
Quadruped Robot Visualization and TF Problems In towr and xpp
Question: Hello. I try to visualize my own special robot with towr. I have already added the kinematic and dynamic models of my robot. But when I try to visualize in Rviz, the joints of the robot occur at very meaningless angles. It will be more descriptive in the video. When I look at the TF tree, I see the TF data of two robots at the same time. My Robot (It appears in the video as Vira) and HyQ robot. I'm working on the guide on towr and xpp but I don't know clearly where to approach the problem. Thank you for your help. Originally posted by Murat on ROS Answers with karma: 55 on 2020-04-25 Post score: 0 Answer: My first guess would be that the inverse kinematics function might have some bugs. This needs to be supplied by the user to xpp. You can reuse the one from the xpp HyQ example, but only if your kinematic tree is similar and you adjust the frames and link parameters. So I would check that function, the conversion from cartesian endeffector positions to joint angles. Hope this helps, best of luck :) Originally posted by Alexander Winkler with karma: 26 on 2020-04-25 This answer was ACCEPTED on the original site Post score: 0 Original comments Comment by Murat on 2020-04-26: I applied simple motion planning to my own robot by working on monoped_pub in the xpp guide. As a result, the robot's feet (kinematic structure) didn't follow endeffectors. As you said, there must be a bug with ik then converting ee positions in to joint angles should be main focus point. It has a slightly different kinematic structure. I will concentrate on inverse kinematics, taking into account what you say. Thank you so much!
{ "domain": "robotics.stackexchange", "id": 34834, "tags": "ros-melodic, transform" }
Text interpretation in Griffith's intro to QM
Question: It says in Griffith's chapter 2.1, that: $$\tag{2.14} \Psi(x,t)~=~\sum_{n=1}^{\infty}c_n\,\psi_n(x) e^{(-iE_n t/\hbar)}$$ It so happens that every solution to the (time-dependent) Schrodinger equation can be written in this form [...]. By "solution" does he mean solutions of the separable form $\tag{2.1} \Psi(x,t)~=~\psi(x)f(t),$ which he stated to begin with? Answer: No, by a solution he means any solution, i.e. a function $\Psi(x,t)$ that satisfies Schrödinger's equation and that will typically not be writable in the $a(x)b(t)$ separated form. The claim that any solution may be written in the summation form you quoted is the claim that the solutions in the separated form are "sufficient" because the most general solution may be written as their superposition. There isn't really any ambiguity in the text.
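As a sanity check on the expansion (2.14), here is a small numerical sketch (my own construction, not from Griffiths): superpose two stationary states of the infinite square well and verify that the norm of $\Psi(x,t)$ is the same at every time, even though $\Psi$ itself is not separable.

```python
import numpy as np

# Sketch, not from the book's text: infinite square well of width L with
# psi_n(x) = sqrt(2/L) sin(n pi x/L) and E_n = (n pi hbar)^2 / (2 m L^2).
# Build the superposition (2.14) from two stationary states with assumed
# coefficients c_n and check that its norm is independent of time.
hbar = m = L = 1.0
x = np.linspace(0.0, L, 2001)
dx = x[1] - x[0]
c = [0.6, 0.8]                       # |c_1|^2 + |c_2|^2 = 1

def E(n):
    return (n * np.pi * hbar) ** 2 / (2.0 * m * L ** 2)

def Psi(t):
    return sum(cn * np.sqrt(2.0 / L) * np.sin(n * np.pi * x / L)
               * np.exp(-1j * E(n) * t / hbar)
               for n, cn in enumerate(c, start=1))

for t in (0.0, 0.7, 3.1):
    print(round(float(np.sum(np.abs(Psi(t)) ** 2) * dx), 4))  # ~1.0 each time
```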
{ "domain": "physics.stackexchange", "id": 8145, "tags": "quantum-mechanics, wavefunction, schroedinger-equation" }
Is the rate of nucleation of bubbles in beer dependent on the temperature of the liquid?
Question: Since there's more energy in the fluid, I believe that you will have a higher nucleation rate in a hotter liquid (assuming all other relevant variables remain unchanged), is this the case? How can I calculate it? Answer: This is hard to calculate, for the following reasons. Although the solubility of CO2 in water solutions goes down in a predictable way with increasing temperature, it is difficult for CO2 to exsolve on its own: a supersaturated solution of CO2 in water can exist in a metastable state for hours after the pressure urging the CO2 to remain in solution has been released. Prompt equilibration requires the presence of seed nuclei to kickstart the exsolvation process. These nuclei are most commonly provided by cracks, pits or scratches in the walls of the beer glass which have tiny amounts of air stuck in them. Any model of the rate of bubble nucleation in beer must contain a model of the number and size distribution of the nuclei present, otherwise it will not furnish realistic results. This effect is so strong that when a beer video is shot for an advertisement and a brand-new beer glass is used, there are very few rising bubble cascades present when the beer is poured into the glass for the camera to capture. The videographer has to drop a handful of ball bearings into the glass first, shake the glass so as to scratch it up with the balls, remove the balls and then pour in the beer in order to get plenty of bubble cascades rising up through the beer.
{ "domain": "physics.stackexchange", "id": 54121, "tags": "thermodynamics, kinetic-theory" }
Deconvolution of a 1D Signal with Known Kernel (Square Wave)
Question: I have a signal measured from a radiation detector in a narrow beam of radiation. The peaks I get are quasi-gaussian in shape, see attached picture. The signal is not a function of time, rather a function of distance. The x-axis is in mm and the y-axis is in arbitrary detector response. The detector used to measure this signal had a finite width, which is contributing to a broadening of the peaks. What I want to do is deconvolve a square wave of width equal to that of the detector from the signal, hence removing some of the broadening effect. I am hoping to do this in matlab, however I am having trouble using the deconv function due to each data set being two vectors, x and y, and the fact that each data set is a function of linear distance not time. Any ideas on how to go about this? Answer: General Solution We have a Deconvolution problem with known operator. One way to define the objective function is: $$ \arg \min_{x} \frac{1}{2} {\left\| h \ast x - y \right\|}_{2}^{2} = \arg \min_{x} f \left( x \right) $$ There are 3 methods to solve this: Use Gradient Descent by deriving the gradient of $ f \left( \cdot \right) $. Write the problem in Matrix Form and either use a direct solver (one could use a Toeplitz System Solver) for this Least Squares problem or again use an iterative solver in Matrix Form. Solve this in Fourier Frequency Domain. I previously solved similar problems using the Matrix Form (See my answer to Deconvolution of 1D Signals Blurred by Gaussian Kernel) so this time we'll solve it in the convolution form. The Gradient Clearly the derivative of Inner Product is related to the adjoint of the operator. In the case of Convolution the Adjoint is the Correlation Operator. Let's define the Convolution Operation $ \ast $ with MATLAB Code.
So $ y = h \ast x $ will be in MATLAB: vY = conv(vX, vH, 'valid'); Defining $ f \left( \cdot \right) $ in MATLAB would be: hObjFun = @(vX) 0.5 * sum((conv(vX, vH, 'valid') - vY) .^ 2); With the Derivative being: vG = conv2((conv2(vX, vH, 'valid') - vY), vH(end:-1:1), 'full'); When we convolve with the flipped version of a vector we're basically doing correlation. Solution in MATLAB So, one must pay attention that since we use valid convolution the sizes of $ y $ and $ x $ are different (depending on the size of $ h $). Usually we choose $ h $ to have an odd number of elements so that its radius is defined. Then the MATLAB solution would be: vX = zeros(size(vY, 1) + (2 * kernelRadius), 1); vObjVal(1) = hObjFun(vX); for ii = 1:numIterations vG = conv2((conv2(vX, vH, 'valid') - vY), vH(end:-1:1), 'full'); vX = vX - (stepSize * vG); vObjVal(ii + 1) = hObjFun(vX); end Specific Solution In your case you asked for $ h $ to be Box Blur. So I did the above with this model. Let's go through the results. First, let's see if the solution we got to is a real solution. In order to examine that, we will convolve the result of the optimization with $ h $ and compare the result to the input data. We expect it to be similar to the input. As can be seen above it is, indeed, a perfect reconstruction of the input data. We can see this by the objective value as well: Yet the result is not good: So, why is the result so bad? There are 2 options: The SNR is not good enough to solve the inverse problem as is. It requires additional regularization like Wiener / Tikhonov (They are the same). I implemented it in the code so you have Wiener Filter to play with and understand its working. The model of Box Blur doesn't fit. So in my code I implemented Wiener / Tikhonov Regularization (See paramLambda in my code). The full code is available on my StackExchange Signal Processing Q51460 GitHub Repository (Look at the SignalProcessing\Q51460 folder).
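For readers without MATLAB, the same gradient-descent scheme ('valid' convolution forward, flipped-kernel 'full' convolution for the adjoint) can be sketched in Python/NumPy. The synthetic signal, kernel radius, step size and iteration count below are all assumptions, not the asker's data:

```python
import numpy as np

# Python/NumPy sketch of the answer's gradient descent (synthetic data;
# kernel radius, step size and iteration count are assumed values).
kernel_radius = 3
h = np.ones(2 * kernel_radius + 1) / (2 * kernel_radius + 1)  # box kernel

x_true = np.zeros(120)
x_true[40:45] = 1.0           # a narrow plateau
x_true[80] = 2.0              # a spike
y = np.convolve(x_true, h, mode='valid')   # "measured", broadened signal

x = np.zeros(y.size + 2 * kernel_radius)   # unknown has the full length
step = 0.5
for _ in range(2000):
    r = np.convolve(x, h, mode='valid') - y    # residual h*x - y
    g = np.convolve(r, h[::-1], mode='full')   # correlation = adjoint
    x -= step * g                              # gradient step

sse = float(np.sum((np.convolve(x, h, mode='valid') - y) ** 2))
print(sse < 1e-2)   # True: the data misfit is driven essentially to zero
```

Note the size bookkeeping matches the MATLAB code: with a 'valid' forward convolution, the unknown is longer than the measurement by twice the kernel radius. As in the answer, fitting the data well does not by itself guarantee a clean estimate; regularization is still needed in low-SNR cases.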
{ "domain": "dsp.stackexchange", "id": 6641, "tags": "matlab, deconvolution, inverse-problem" }
Java mergesort with generics
Question: The following is my implementation of mergesort with Java generics. I chose to use the iteration variables a_iter and b_iter to prevent modifying the lists while merging. Any suggestions for improvement? public static <T extends Comparable<T>> List<T> mergesort(List<T> list){ // base case int len = list.size(); if(len<=1) return list; // recursively sort two halves List<T> left = mergesort(list.subList(0,len/2)); List<T> right = mergesort(list.subList(len/2, len)); // merge two halves List<T> combined = new ArrayList<>(); // vars to keep place in iteration (used to prevent ConcurrentModificationException) int a_iter = 0; int b_iter = 0; while(a_iter<left.size() || b_iter<right.size()){ // if left exhausted if(a_iter>=left.size()){ combined.add(right.get(b_iter)); b_iter++; continue; } // if right exhausted if(b_iter>=right.size()){ combined.add(left.get(a_iter)); a_iter++; continue; } T a = left.get(a_iter); T b = right.get(b_iter); if(a.compareTo(b)<0){ combined.add(a); a_iter++; } else{ combined.add(b); b_iter++; } } return combined; } Answer: The first thing I would do is split up the while loop into 3 parts: - handling both not empty - add all remaining from left - add all remaining from right No wait, the first thing I did is paste the code in IntelliJ and have it autoformat to comply with the java coding conventions. Since it's just some minor points like spacing I'm not going to list them here; overall your code looks decent already. I also renamed a_iter and b_iter to leftIndex and rightIndex. Both because a variable shouldn't have an _ in its name and because it better conveys what the variable is. Same for renaming a and b to leftElement and rightElement. Now back to splitting up the while loop. I changed the || in the while condition to an &&, removed the "when xxx exhausted" parts and added the 2 new while loops after.
This is the result: int leftIndex = 0; int rightIndex = 0; while (leftIndex < left.size() && rightIndex < right.size()) { T leftElement = left.get(leftIndex); T rightElement = right.get(rightIndex); if (leftElement.compareTo(rightElement) < 0) { combined.add(leftElement); leftIndex++; } else { combined.add(rightElement); rightIndex++; } } // At least one of the sub lists is empty at this point. // Just add all remaining elements from the other sub list. while(leftIndex < left.size()){ combined.add(left.get(leftIndex++)); } while(rightIndex < right.size()){ combined.add(right.get(rightIndex++)); } Or if you want, replace the last while loops with an addAll combined.addAll(left.subList(leftIndex, left.size())); combined.addAll(right.subList(rightIndex, right.size())); The biggest problem with this implementation is that it creates a lot of temporary lists. Taking a quick glance at how the built in list sort in java is implemented shows that they turn it into an array first, sort the array and then update the given list: public static <T extends Comparable<? super T>> void sort(List<T> list) { Object[] a = list.toArray(); Arrays.sort(a); ListIterator<T> i = list.listIterator(); for (int j=0; j<a.length; j++) { i.next(); i.set((T)a[j]); } } Note that this doesn't return the list. It actually sorts the list you pass into it. If you're looking for more improvements to the merge algorithm itself you could look up how it's implemented in java. (Although recently it's changed into a Tim sort which is a combination of merge sort and insertion sort). As a final note: if your intention was to actually use your code in a project other than studying how merge sort is implemented or practicing your java skills I would advise against it. Just use the built-in functions if available.
{ "domain": "codereview.stackexchange", "id": 26470, "tags": "java, sorting, generics, mergesort" }
reaction coordinate, kinetics, equilibrium in example
Question: In this special reaction coordinate diagram with two reaction mechanisms, I tried to analyze it in two other ways, one with kinetics, another with equilibrium. A. kinetics, I tried to find reaction rate of forward, and reverse. By the diagram, rate determining step of forward reaction is the first one (because activation energy is the highest and let's say it is one and only factor for RDS in this case). so Vf=k1[A][B]. RDS in reverse reaction is second step of the reaction judging by the diagram activation energy (M->A+B). So I used pre-equilibrium approximation for that. That's Vr in the picture. B. equilibrium, I wrote all the equilibrium constants and simply added the reactions. now here is the question. Expression of the rate of forward and reverse reaction in dynamic equilibrium is quite different from each case, analyzing in a way of kinetics and equilibrium constant, as you see in the picture below. Why is it different? What's the right expression? What am I missing here? Answer: You have chosen an unusual scheme in that A appears to be an intermediate as well as a reactant. If you use the simpler scheme things are easier and so rather than try to follow yours I have examined the scheme below which is nonetheless very similar $$RX\underset{k_{-1}} {\stackrel{k_1}{\leftrightharpoons}}R+X$$ $$R+Y\underset{k_{-2}} {\stackrel{k_2}{\leftrightharpoons}}RY$$ where species RX loses X and is replaced by Y. The intermediate species is R. The overall reaction is $RX+Y=RY+X$ and the equilibrium constants for the two steps are $$K_1=\frac{k_1}{k_{-1}}=\frac{\mathrm{[R]_e[X]_e}}{\mathrm{[RX]_e}}$$ $$K_2=\frac{k_2}{k_{-2}}=\frac{\mathrm{[RY]_e}}{\mathrm{[R]_e[Y]_e}}$$ and the overall equilibrium constant $$K=K_1K_2=\frac{k_{1}k_{2}}{k_{-1}k_{-2}}=\frac{\mathrm{[RY]_e[X]_e} }{\mathrm{[RX]_e[Y]_e }}$$ and all the concentrations in square brackets with subscript $e$ are equilibrium values.
[This type of equation is true if there are many equilibria one after the other, $\displaystyle K=K_1K_2K_3K_4\cdots=\frac{k_1k_2k_3k_4}{k_{-1}k_{-2}k_{-3}k_{-4}}\cdots$]. In a rate equation approach we can apply a steady state approach to the intermediate species R. The steady state assumes that the rate of change of R is zero; $$\frac{d[R]}{dt}= k_1[RX]-k_{-1}[R][X]-k_2[R][Y]+k_{-2}[RY]=0$$ from which $$ [R]_{ss}= \frac{k_{-2}[RY]+k_1[RX]}{k_{-1}[X]+k_2[Y]}$$ The rate $r$ can be given by $$\begin{align} r=-\frac{d[Y]}{dt}=k_2[R][Y]-k_{-2}[RY] &= \frac{k_2(k_{-2}[RY]+k_1[RX])[Y]-k_{-2}[RY](k_{-1}[X]+k_2[Y])}{k_{-1}[X]+k_2[Y]}\\&=\frac{k_1k_2[RX][Y]-k_{-1}k_{-2}[RY][X]}{k_{-1}[X]+k_2[Y]}\\ \end{align}$$ and the numerator is zero when equilibrium concentrations are used, making the rate equal to zero also. So this connects the equilibrium to the rate equations. Initially just after the reactants are mixed the amount of product is very small and $k_1k_2[RX][Y]\gg k_{-1}k_{-2}[RY][X]$ and the rate is $$r=\frac{k_1k_2[RX][Y]}{k_{-1}[X]+k_2[Y]}$$ which can be confirmed by experiment. Hope this helps.
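The key claim, that the steady-state rate vanishes exactly when the concentrations satisfy $K = k_1k_2/(k_{-1}k_{-2})$, is easy to check numerically. The rate constants and concentrations below are arbitrary assumed values:

```python
# Numeric check of the answer's result (all values are assumed):
# r = (k1*k2*[RX][Y] - km1*km2*[RY][X]) / (km1*[X] + k2*[Y])
# vanishes exactly when [RY][X]/([RX][Y]) equals K = k1*k2/(km1*km2).
k1, km1, k2, km2 = 2.0, 1.0, 3.0, 1.0
K = (k1 * k2) / (km1 * km2)            # overall equilibrium constant = 6

RX, Y, RY, X = 1.0, 1.0, 2.0, 3.0      # chosen so [RY][X]/([RX][Y]) = 6 = K
assert abs((RY * X) / (RX * Y) - K) < 1e-12

r = (k1 * k2 * RX * Y - km1 * km2 * RY * X) / (km1 * X + k2 * Y)
print(r)            # 0.0 -- no net rate at equilibrium

# Away from equilibrium (excess reactants) the net rate is positive:
RX, Y, RY, X = 2.0, 2.0, 0.1, 0.1
r_forward = (k1 * k2 * RX * Y - km1 * km2 * RY * X) / (km1 * X + k2 * Y)
print(r_forward > 0)   # True
```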
{ "domain": "chemistry.stackexchange", "id": 10012, "tags": "equilibrium, kinetics" }
Translation of Counter-free automata into Linear Temporal Logic
Question: There is a well-known equivalence between counter-free automata and Linear Temporal Logic (which is cited for example by [1]). However, I cannot find a concrete way to obtain an LTL formula from a counter-free automaton. Is there any reference that shows such a translation? [1] Wolfgang Thomas, Safety- and Liveness-properties in Propositional Temporal Logic: Characterisation and Decidability, Mathematical Problems in Computation Theory, Volume 21, 1988 Answer: As mentioned in the comments, the translation is shown in: Volker Diekert and Paul Gastin. "First-order definable languages." (2008) http://www.lsv.fr/Publis/PAPERS/PDF/DG-WT08.pdf And it goes via a characterization of $LTL$ as $FO[<]$.
{ "domain": "cstheory.stackexchange", "id": 5299, "tags": "reference-request, automata-theory, linear-temporal-logic" }
Please explain what a gene isoform is in lay terms?
Question: I am a physicist by training, however I am now doing computational biology research. I know what genes, DNA, proteins, enzymes, introns and exons are. I sort of understand how DNA is used to create RNA, which is used by ribosomes to manufacture proteins. That is the extent of my biology and genetics knowledge. So, could someone explain what a gene isoform is in lay terms? There is a related question here and wikipedia has an explanation. However, there is a significant jargon barrier for me. Answer: Gene isoforms are all the different RNAs that can be synthesised from a single gene. You may be referring to splice variants, which are systems of getting different mRNAs from the same sequence using different combinations of introns and exons. (source: riken.jp) Translating your wikipedia article: Gene isoforms are the different RNA products obtainable from a gene that can vary on the place where the translation starts (protein synthesis), that have different sequences in the coding region (thus giving different proteins, sometimes with different functions) or by having different untranslated regions, UTR (the overhangs of the sequence that are not translated in the mRNA). This last difference, the UTR, normally gives rise to different stability within the cell as UTR secondary structures usually control the speed of degradation of an RNA. Hope it helps
{ "domain": "biology.stackexchange", "id": 4395, "tags": "gene" }
ROS Answers SE migration: Using actions
Question: Hello, I tried to modify one of those tutorials of actionlib and when i do catkin_make the output is this: -- Using these message generators: gencpp;genlisp;genpy -- Generating .msg files for action control/Control /home/robot/Desktop/gamma/src/control/action/Control.action Traceback (most recent call last): File "/opt/ros/hydro/share/actionlib_msgs/cmake/../../../lib/actionlib_msgs/genaction.py", line 136, in <module> Generating for action Control if __name__ == '__main__': main() File "/opt/ros/hydro/share/actionlib_msgs/cmake/../../../lib/actionlib_msgs/genaction.py", line 97, in main raise ActionSpecException("%s: wrong number of pieces, %d"%(filename,len(pieces))) __main__.ActionSpecException: /home/robot/Desktop/gamma/src/control/action/Control.action: wrong number of pieces, 6 CMake Error at /opt/ros/hydro/share/catkin/cmake/safe_execute_process.cmake:11 (message): execute_process(/home/robot/Desktop/gamma/build/catkin_generated/env_cached.sh "/usr/bin/python" "/opt/ros/hydro/share/actionlib_msgs/cmake/../../../lib/actionlib_msgs/genaction.py" "/home/robot/Desktop/gamma/src/control/action/Control.action" "-o" "/home/robot/Desktop/gamma/devel/share/control/msg") returned error code 1 Call Stack (most recent call first): /opt/ros/hydro/share/actionlib_msgs/cmake/actionlib_msgs-extras.cmake:67 (safe_execute_process) control/CMakeLists.txt:57 (add_action_files) The action file contains this: #Twist defined in action float64 x_angular --- float64 y_angular --- float64 z_angular --- float64 x_linear --- float64 y_linear --- float64 z_linear What could be happening here? Thanks beforehand! Originally posted by pexison on ROS Answers with karma: 82 on 2015-02-09 Post score: 0 Answer: Your action file must have exactly 3 sections separated by --- for goal, result and feedback. You have 6 sections. If you want to include another message type include that type instead of repeating its content, i.e. use geometry_msgs/Twist twist as an entry. 
Originally posted by dornhege with karma: 31395 on 2015-02-09 This answer was ACCEPTED on the original site Post score: 1 Original comments Comment by pexison on 2015-02-09: Sorry, it was a beginner fault
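For reference, a version of Control.action with exactly three ----separated sections, as the answer prescribes, might look like the sketch below (the result and feedback fields are invented placeholders; the goal could equally be a single geometry_msgs/Twist field, as the answer suggests):

```
# goal: the commanded twist components
float64 x_linear
float64 y_linear
float64 z_linear
float64 x_angular
float64 y_angular
float64 z_angular
---
# result (placeholder field)
bool success
---
# feedback (placeholder field)
float64 progress
```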
{ "domain": "robotics.stackexchange", "id": 20828, "tags": "ros, action, actionlib, compilation, msg" }
What exactly are Quantum XOR Games?
Question: I have done some research & found a few different papers that discuss xor games (classic & quantum). I am curious if someone could give a concise introductory explanation as to what exactly xor games are & how they are or could be used/useful in quantum computing. Answer: Quantum xor games are a method of greatly simplifying the ideas behind Bell's theorem, which states that no physical theory of local hidden variables can ever reproduce all of the predictions of quantum mechanics. Basically, when two qubits are entangled, measurements on them appear correlated even if they are vastly far apart. The question then is whether the qubits decided how they would collapse at time of entanglement (thus carrying "local hidden variables" with them) or decided how they would collapse at time of measurement (thus requiring some kind of instantaneous "spooky action at a distance"). Bell's theorem, and xor games, come down firmly on the side of the latter. Xor games generally have the format of two people (Alice and Bob) given some random bits, and without communication outputting some other bits with the goal of making true a logical formula. For example with the original xor game, the CHSH game, Alice is given random bit $X$ and Bob random bit $Y$. Alice then outputs a chosen bit $a$ and Bob outputs a chosen bit $b$. They want to satisfy the equation $X \cdot Y = a \oplus b$. Of course, since they cannot communicate, they can only win some of the time; they want to choose a strategy to maximize the probability of winning. The best possible classical strategy is for Alice and Bob to both always output $0$, which will result in a win 75% of the time. However if Alice and Bob share an entangled qubit pair, they can come up with a strategy to win 85% of the time! 
The conclusion is this disproves the existence of local hidden variables, because if the qubits contained a local hidden variable (some string of bits) then Alice and Bob could have pre-shared that same string of bits to employ in their classical strategy to also get an 85% chance of winning; since no string of bits enables them to do this, that means the entangled qubits cannot be relying on a shared string of bits (local hidden variable) and something spookier is happening. You can see an implementation of the CHSH game in Microsoft's Q# samples (with expanded explanation) here. The best explanation of the CHSH game is from Professor Vazirani in this video. He claims something interesting (possibly rhetorically), which is that if Einstein had had access to the simplified presentation of xor games, he'd have avoided wasting the last three decades of his life searching for a hidden variable-based theory of quantum mechanics! I have also written a blog post detailing the CHSH game here. One application of xor games is self-testing: when running algorithms on an untrusted quantum computer, you can use xor games to verify that the computer isn't corrupted by an adversary trying to steal your secrets! This is useful in device-independent quantum cryptography.
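Both numbers quoted above are easy to verify: brute-force enumeration of all 16 deterministic classical strategies gives 75%, and the optimal quantum value is cos²(π/8) ≈ 85.36%. (The enumeration below is my own sketch, not from the answer.)

```python
import itertools, math

# A deterministic classical strategy is a pair of functions a(X), b(Y).
# Enumerate all 4 x 4 = 16 of them and take the best winning probability
# over the four equally likely inputs (X, Y).
best = 0.0
for a in itertools.product([0, 1], repeat=2):      # a[X]
    for b in itertools.product([0, 1], repeat=2):  # b[Y]
        wins = sum((x & y) == (a[x] ^ b[y])
                   for x in (0, 1) for y in (0, 1))
        best = max(best, wins / 4.0)
print(best)                    # 0.75 -- the classical bound

# Sharing an entangled pair lifts the optimum to cos^2(pi/8) (Tsirelson)
quantum = math.cos(math.pi / 8) ** 2
print(round(quantum, 4))       # 0.8536
```

Shared randomness does not help beyond this: any probabilistic classical strategy is a mixture of the 16 deterministic ones, so its winning probability is at most 0.75 as well.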
{ "domain": "quantumcomputing.stackexchange", "id": 374, "tags": "foundations, nonlocal-games" }
Missing ROS_format.xml
Question: I was setting up Eclipse on a fresh Ubuntu install and forgot to save my copy of ROS_format.xml (the C++ auto-formatting for ROS). I went on the site to pull a new copy down (http://www.ros.org/wiki/IDEs) but the link to the XML file seems to be broken. Anyone know where I can get a copy of the XML file? Originally posted by rtoris288 on ROS Answers with karma: 1173 on 2011-11-10 Post score: 3 Answer: Thanks we're working on it. There appears to have been a regression in the attachment macro on the wiki. You can still get it directly from the wiki attachment with the full url http://www.ros.org/wiki/IDEs?action=AttachFile&do=get&target=ROS_format.xml Originally posted by tfoote with karma: 58457 on 2011-11-10 This answer was ACCEPTED on the original site Post score: 2 Original comments Comment by Boris on 2012-09-19: Link is still (or again) broken. The difference with provided here is the parameter "do=view" instead of "do=get".
{ "domain": "robotics.stackexchange", "id": 7253, "tags": "eclipse" }
Does limit $\hbar \rightarrow 0$ in Quantum Mechanics mean anything?
Question: Assuming that I learn Quantum Mechanics first, and then I approach Classical Mechanics as a special case of Quantum Mechanics, I will definitely find the relationship between Quantum Mechanics and Classical Mechanics very confusing. I don't know how to make sense of what happens when $\hbar \rightarrow 0$. For one, you can't recover classical mechanics from quantum theory by setting $\hbar \rightarrow 0$. However, it is possible to recover classical mechanics from Schrodinger equations. So, does limit $\hbar \rightarrow 0$ in Quantum Mechanics mean anything? How should we interpret it? Or does the above contradiction reveal yet another flaw in the fundamentals of Quantum Mechanics? Answer: The question duplicates $\hbar \rightarrow 0$ in QM where it has been soundly answered. The most direct bridge is through deformation quantization, the phase-space formulation of QM, where operator observables are mapped injectively onto their Wigner transforms, c-number phase-space functions, just like their classical counterparts. It is then evident that QM laws are deformations of the classical laws, and include those in their $O(\hbar^0)$ pieces. So, for example, the Wigner transforms of the density matrix, the Wigner functions, go to Dirac deltas in phase space in that $\hbar \rightarrow 0$ limit. These δs, in turn, transcribe to the Liouville theorem for single particles, and hence Hamilton's dynamics, out of Heisenberg's equations of motion. The Ehrenfest theorem structures of course replicate and parameterize this injection. Note this is a mere shift in point of view: $\hbar \rightarrow 0$ means considering (macroscopic) phenomena at much larger scales, which thus dwarf the scale of $\hbar$ and make it insignificant. It is mere sloppy shorthand of the emergence of our classical world out of QM at large scales (the correspondence principle). 
For the limit to make sense, however, you must always consider angular momenta and actions S much larger than $\hbar$: it makes no sense to consider $\hbar \rightarrow 0$ limits for a single particle of small actions and spins---$\hbar$ is a dimensionful quantity, and its scale matters. I gather there is no interest in microscopic physics without $\hbar$. The least tortured limit then, is, as above, consideration of the $O(\hbar^0)$ part of macroscopic quantities. For details, see, e.g., Ref. 1. For instance, for a Freshman lab oscillator with maximum oscillation amp 10cm, m=10g, and ω=2Hz, the characteristic action is S=E/ω= $10^{-4}$ Js and so $\hbar / S= 10^{-30}$, suppressing any and all quantum terms and validating the limit. (One might be interested in thinking about the Wigner function for the nth excited state for n ~ $10^{30}$, a very spikey cookie-cutter function, indeed!) References: Thomas L. Curtright, David B. Fairlie, & Cosmas K. Zachos, A Concise Treatise on Quantum Mechanics in Phase Space, World Scientific, 2014. The PDF file is available here.
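The oscillator estimate in the last paragraph takes a few lines to reproduce (reading ω = 2 Hz as the angular frequency, which is what the text's arithmetic S = E/ω = 10⁻⁴ J s implies):

```python
# Freshman-lab oscillator from the answer: A = 10 cm, m = 10 g, omega = 2.
hbar = 1.0545718e-34          # J s
m, A, omega = 0.010, 0.10, 2.0

E = 0.5 * m * omega ** 2 * A ** 2   # oscillation energy, J
S = E / omega                       # characteristic action, J s
print(round(S, 6))                  # 0.0001 J s, i.e. S = 1e-4 as quoted
print(hbar / S < 1e-29)             # True: quantum terms utterly negligible
```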
{ "domain": "physics.stackexchange", "id": 25252, "tags": "quantum-mechanics, classical-mechanics" }
Is stellar ignition all-or-nothing?
Question: The boundary between brown dwarfs and stars is around 80 Jupiter masses. Only stars generate self-sustaining hydrogen fusion, although brown dwarfs sometimes fuse lithium and deuterium. Is hydrogen ignition all or nothing? If so, there should be a mass overlap. For example, a 79 Jupiter mass star may achieve ignition by forming faster and thus with a higher core temperature, or by having a bit more deuterium or lithium kindling than normal. Conversely, a slowly-forming 81 Jupiter mass brown dwarf may stay below ignition temperatures. The heaviest brown dwarf we know has about 90 Jupiter masses, while the lightest star has about 73 Jupiter masses, which suggests a mass overlap, although these masses are approximate values. Is there any estimate of how big this mass overlap is? Edit: To clarify my question: Stars above about 0.35 solar masses have a radiative envelope so end their life with some unburnt hydrogen. Stars below 0.35 solar masses burn all of their hydrogen b/c they are fully convective. Brown dwarfs (say of 0.05 solar masses) don't burn any light hydrogen. Are there individual objects (presumably around 80 Jupiter masses) that are between 0.05 and 0.35 solar masses and are destined to one day end their life with 10% of their hydrogen burnt? Or with 25%? Or with 50%? This is of course barring any outside catastrophic event like being torn apart by a black hole. Answer: No, fusion isn't all or nothing. Given the same chemical composition of constituents, there will be a smooth ramping down of the nuclear fusion rate as the mass decreases. The lower mass objects will also take longer to contract and heat up, and so at any given mass, an older object will have more nuclear fusion. So there is some blurriness to the definition of the boundary.
You could try to define it as the mass where the radius contraction ceases at some point and the luminosity is provided by nuclear fusion, but even in objects with slightly lower mass that continue to contract there will be some nuclear fusion going on. Fortunately this is of little actual consequence. The interior structures and observational properties of a "brown dwarf" just below the boundary and a "star" just above the boundary are quite close until they have lived for many billions of years, although the brown dwarf will be smaller and have a higher surface gravity at a similar age. One issue that does inject a significant amount of blurring to the boundary is initial chemical composition. It is predicted that the boundary between stars and brown dwarfs will be at higher masses for lower metallicity objects. The classic work on this is Chabrier & Baraffe (1997). They define the hydrogen-burning minimum mass to be the lowest mass where thermal equilibrium is attained and the luminosity of the object is provided by nuclear reactions. The HBMM is about 0.072-0.075 solar masses at solar metallicity but 0.083 solar masses for metallicities about 30 times lower and could reach 0.092 solar masses for brown dwarfs born from primordial gas with no metals (Burrows et al. 2001). Thus if you have a spread of metallicities in a sample then the boundary between stars and brown dwarfs will be blurred by this. Note that initial abundances and subsequent burning of deuterium and lithium have almost no effect on the HBMM because all D and Li is burned before the onset of hydrogen (protium) burning. i.e. Whatever it started with, an object close to the HBMM will have no D or Li by the time it contracts close to the H-burning temperature, so it has no bearing on the question. In fact, Li-burning is energetically negligible and D-burning only delays the contraction by some ~10 million years (compared to a timescale for possible H ignition of more than a billion years).
Edit: Let me answer your new question. Once hydrogen has ignited, it doesn't stop until all the available fuel is consumed. The increasing mean particle mass in the core means it will contract and get hotter in order to maintain hydrostatic equilibrium, which increases the fusion rate. Stars with mass less than 0.35 of the Sun are fully convective and thoroughly mixed. If they are massive enough to begin nuclear fusion, then the hydrogen will burn to completion. The mass will determine how long that will take. As the OP accurately summarises: That means that each individual object is either a star or not a star with no in-between "the fire got started but it burned out early" objects? However, the mass-cutoff is fuzzy and there is a mass overlap between "star" and "not-a-star" objects and there is no clear way to tell them apart without waiting billions of years? – Kevin Kostlan
{ "domain": "astronomy.stackexchange", "id": 6250, "tags": "star, stellar-evolution, brown-dwarf, fusion" }
rosmake vs make
Question: Is there a difference between rosmake and make? The way I understand it, rosmake is for building and updating the dependencies of a package, where these dependencies could be other ros packages. rosdep is for installing dependencies outside the realm of ROS... More like system dependencies for building a ros package. Then what is the purpose of running "make" in the directory of a ros package? Thanks Originally posted by cassander on ROS Answers with karma: 121 on 2012-02-23 Post score: 0 Answer: This is a duplicate of http://answers.ros.org/question/10614/rosmake-vs-make#15661 Originally posted by Mac with karma: 4119 on 2012-02-23 This answer was ACCEPTED on the original site Post score: 2
{ "domain": "robotics.stackexchange", "id": 8353, "tags": "rosmake, make" }
rostest passes the test if a node crashes
Question: If a node in a test case crashes, rostest doesn't care and still passes the test case. For example: $ rostest somepkg basic.test ... logging to /home/parallels/.ros/log/rostest-parallels-vm-6224.log [ROSUNIT] Outputting test results to /home/parallels/.ros/test_results/somepkg/rostest-test_basic.xml Traceback (most recent call last): File "/home/parallels/catkin_ws/src/somepkg/src/fail.py", line 3, in <module> print 1/0 ZeroDivisionError: integer division or modulo by zero [Testcase: testbasic_test] ... ok [ROSTEST]----------------------------------------------------------------------- [clever.rosunit-basic_test/test_state][passed] SUMMARY * RESULT: SUCCESS * TESTS: 1 * ERRORS: 0 * FAILURES: 0 rostest log file is in /home/parallels/.ros/log/rostest-parallels-vm-6224.log How can I make rostest fail the test if a node crashes? Originally posted by okalachev on ROS Answers with karma: 11 on 2019-07-01 Post score: 0 Answer: Actually, making the nodes "required" is kind of a solution. Originally posted by okalachev with karma: 11 on 2019-07-04 This answer was ACCEPTED on the original site Post score: 0
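"Required" here refers to roslaunch's required attribute on a <node> tag: when a required node exits, roslaunch tears the whole launch down, and the rostest run is reported as failed rather than passed. A hypothetical basic.test sketch (package and script names taken from the question's output, not verified):

```xml
<launch>
  <!-- If this node dies, roslaunch shuts everything down and the rostest run fails -->
  <node pkg="somepkg" type="fail.py" name="fail" required="true" output="screen" />

  <!-- The actual test node; test-name here is a guess at the original file's contents -->
  <test test-name="basic_test" pkg="somepkg" type="basic_test.py" />
</launch>
```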
{ "domain": "robotics.stackexchange", "id": 33306, "tags": "rostest, ros-kinetic" }
What's it like inside a natural gas cavern on Earth?
Question: I suppose underground natural gas caverns on Earth have substantial volume and that the gas is in gaseous form there. As such, I wonder what it would look like inside such a cavern (with artificial light, of course). Would one see a rocky sky at a great distance? What is the actual volume of such caverns? Is it correct to talk about methane oceans or seas? Are they empty, or is the gas mixed with solid material? Answer: A cavern filled with natural gas would look like an ordinary cavern. However, natural gas is rarely abundant in caverns. Most natural gas reservoirs occur in the pore spaces of sandstones. These pore spaces are generally up to about 1 mm in diameter. Here's a photomicrograph of a sandstone — the green-blue colour is the pore space; Q, F, L, and M are quartz, feldspar, lithic fragments and mica respectively: The pores of reservoir rocks also contain 'formation' water in various proportions. The relationship between the water and the hydrocarbons can be quite complex, depending on the fluid composition, the reservoir temperature and pressure, and the mineralogy of the rock. So what about caverns? Trace amounts of methane probably occur in many, if not most, caverns — we're talking here about abundant methane. This is much less common, but it happens. In certain limestones pore spaces can be quite large — up to several metres in diameter. This is called vuggy porosity. If the limestone is charged with natural gas, the vugs will fill with gas, just like the other pores. The space would look like any other limestone cavity. Very large ones are rare, based on a typical power law size distribution. There are (copyright and unlicensed) images of such super-pores here and here. Recently Mattey et al. (2013; EPSL 374) wrote about methane in a karst in Gibraltar. I'd estimate that less than 1% of the world's gas is held in vuggy reservoirs. There's another kind of cavern in which you can find natural gas: man-made ones. 
Natural gas is often stored underground, usually in porous rocks, but sometimes in caverns dissolved out of salt. Around a quarter of natural gas stock is stored this way in the US. Here's a video from such a salt cavern. Last thing: methane seas can only occur at low temperature. As far as I know, liquefied natural gas or LNG is only stored above ground or on ships in tanks. Lots of natural gas reservoirs have very light hydrocarbon liquids in them — but again the liquid would be in tiny pore spaces, not caverns.
{ "domain": "earthscience.stackexchange", "id": 471, "tags": "geology, gas, cavern" }
Are long oxygen molecules possible?
Question: Are large oxygen containing molecules possible? Either large rings, or chains with hydrogen atoms at the ends. Like this: $\ce{H-O-O-O-O-O-O-O-O-O-$\cdots$-O-O-O-O-O-H}$ Answer: Yes, they are possible; several have even been detected. They all have the general formula $\ce{H2O_{n}}$ and belong to the class of hydrogen polyoxides. Of course, the $n=1$ member of the series is water $(\ce{H2O})$, and the $n=2$ member is hydrogen peroxide $(\ce{H2O2})$. The $n=3$ member is trioxidane $(\ce{H2O3})$, and it has the following structure: It can be prepared in a number of ways as outlined in the linked article, such as the reaction of ozone and hydrogen peroxide. Trioxidane is also an active antimicrobial ingredient. Trioxidane has a half-life around 16 minutes in organic solvents, but only milliseconds in water. Hydrogen tetroxide ($n=4$, $\ce{H2O4}$) is also a known compound as is hydrogen pentoxide ($n=5$, $\ce{H2O5}$). Hydrogen pentoxide is formed as a byproduct in the preparation of trioxidane. This abstract of a computational study provides bond dissociation energies for the hydrogen polyoxides. Starting from water, they steadily decrease (the $\ce{O-O}$ bonds become weaker) to $\ce{H2O6}$ and then start to increase again. Abstracts from other computational studies dealing with the expected structures and stabilities of these molecules can be found here and here.
{ "domain": "chemistry.stackexchange", "id": 4042, "tags": "molecules" }
Kerr Effect; Electro-refraction
Question: First post here. Is it possible to observe the Kerr effect in transparent materials at low voltages (such as 3.8 V)? If so, can you refract light enough to distort an image, in a way similar to that of a magnifying glass / glass lens? I read the post here (Is Kerr effect in glass observable?) but it doesn't directly answer whether it is possible in materials other than glass. I'm thinking there may be an equivalent of 'superconductors' that display the Kerr effect at very low applied electric fields. Answer: First of all, nonlinear effects are threshold-less, i.e. they happen at any magnitude of electric field; however, the smaller the field, the higher the accuracy of measurement has to be. Here I would like to introduce an effect that I think relates to your question of: can you refract light enough to distort an image, in a way similar to that of a magnifying glass / glass lens I should say that this effect happens in optical filaments. When a femtosecond optical pulse with a certain power, called the ''critical power'', travels in air or another transparent medium, its high electric field produces a positive refractive-index change due to the Kerr effect that causes the beam to self-focus (I can elaborate more about the filament if you are interested). But the point here is that two filaments propagating in parallel and close to each other can sometimes merge with or repel each other, which is essentially the same refraction that you are asking about. However, the electric field responsible for this effect is still relatively large.
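To put a number on "relatively large" (these standard nonlinear-optics figures are added for context and are not from the original answer): the Kerr effect adds an intensity-dependent term to the refractive index, $$ n = n_0 + n_2 I $$ and self-focusing overcomes diffraction only above the critical power, commonly quoted for a Gaussian beam (Marburger's formula) as $$ P_{\mathrm{cr}} \approx \frac{3.77\,\lambda^2}{8\pi n_0 n_2}. $$ For air at $\lambda = 800$ nm, with $n_2$ of order $10^{-19}\ \mathrm{cm^2/W}$, this works out to a few gigawatts of peak power, which is why femtosecond pulses are needed to reach the filamentation regime.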
{ "domain": "physics.stackexchange", "id": 27491, "tags": "energy, visible-light, refraction" }
Kinect sensor gazebo-ros
Question: I am trying to add a kinect sensor to a simulation I have already running using a hokuyo laser as sensor for navigation. I want to add the kinect sensor in order to simulate navigation using point clouds. I have all of this already running in the real robot. I am using ros groovy and Gazebo 1.6 The important part of the .sdf I use is at the end of the question. I have gone through the tutorials but I am still not sure about how sensors and plugins properly work. For what I understand just adding the camera depth sensor should get the topic from gazebo listed in a gztopic list, but I do not see this happening. And for what I understand adding the plugin besides adding new functionalities should make the bridge to ros available so that it would publish in the same topics as a depth camera does in ros. I am pretty stuck so I would appreciate any help. Thanks for our time. <link name="kinect::link"> <pose>0.2 0 0.265 0 0 0</pose> <inertial> <mass>0.1</mass> </inertial> <collision name="collision"> <geometry> <box> <size>0.073000 0.276000 0.072000</size> </box> </geometry> </collision> <visual name="visual"> <geometry> <mesh> <uri>model://kinect/meshes/kinect.dae</uri> </mesh> </geometry> </visual> <sensor name="camera" type="depth"> <pose>0.2 0 0.265 0 0 0</pose> <update_rate>20</update_rate> <camera> <horizontal_fov>1.047198</horizontal_fov> <image> <width>640</width> <height>480</height> <format>R8G8B8</format> </image> <clip> <near>0.05</near> <far>3</far> </clip> </camera> <plugin name="camera" filename='libDepthCameraPlugin.so'> <alwaysOn>1</alwaysOn> <updateRate>10.0</updateRate> <image_topic_name>image_raw</image_topic_name> <point_cloud_topic_name>points</point_cloud_topic_name> <camera_info_topic_name>camera_info</camera_info_topic_name> <cameraName>depth_cam</cameraName> <frameName>/base_link</frameName> <point_cloud_cutoff>0.001</point_cloud_cutoff> <distortionK1>0.00000001</distortionK1> <distortionK2>0.00000001</distortionK2> 
<distortionK3>0.00000001</distortionK3> <distortionT1>0.00000001</distortionT1> <distortionT2>0.00000001</distortionT2> </plugin> </sensor> </link> <joint name="kinect_joint" type="revolute"> <child>kinect::link</child> <parent>chassis</parent> <axis> <xyz>0 0 1</xyz> <limit> <upper>0</upper> <lower>0</lower> </limit> </axis> </joint> Originally posted by agonzamart on Gazebo Answers with karma: 3 on 2013-06-21 Post score: 0 Answer: The <alwaysOn> should be a child of <sensor>, not <plugin>. This should force the sensor to be on all the time, and you should see a gztopic. Originally posted by nkoenig with karma: 7676 on 2013-06-27 This answer was ACCEPTED on the original site Post score: 0 Original comments Comment by agonzamart on 2013-06-30: Thanks, I had already solved it but I didn't know exactly what had made it work. I appreciate it! :) Comment by agonzamart on 2013-07-04: nkoenig I was checking out the config I used for the .sdf to make the kinect work and I realized I didn't use the tag at all. But I still get ros topics published. I can add an image viewer to rviz and I see the image from the simulated camera /depth_cam/depth/image_raw. I even used a cloudthrottle to make a laserscan and it seems to work properly. But if I try to add the tag as a child of sensor I get an error from the .sdf parser Comment by agonzamart on 2013-07-04: Error [parser.cc:719] XML Element[alwaysOn], child of element[sensor] not defined in SDF. 
Ignoring.[sensor] Error [parser.cc:710] Error reading element Error [parser.cc:710] Error reading element Error [parser.cc:710] Error reading element Error [parser.cc:710] Error reading element Error [parser.cc:369] Unable to read element Error [Server.cc:253] Unable to read sdf file Comment by agonzamart on 2013-07-04: I also see here: http://gazebosim.org/wiki/Tutorials/1.9/ROS_Motor_and_Sensor_Plugins in the Skid Steering Drive paragraph for example that the tag is used inside the plugin, I see no difference between using the tag inside the plugin or not using it so I assume that it is not being used. On the other hand when I use a gztopic I can not see wich one is from the camera, but gazebo must be publishing this topic if I can read it trough the ros topic. I think I might not getting something Comment by agonzamart on 2013-07-04: I really appreciate your help and time
{ "domain": "robotics.stackexchange", "id": 3347, "tags": "gazebo" }
Class accepting data input by variable, function pointer or lambda
Question: Background I am writing a library that takes some data from the user and works with it. I was experimenting with ways to allow users to provide the data by the following methods: As a (global) variable (taken by a const &) As a function pointer As a lambda (taken by value) I am aware that I can just wrap the variable or function in a lambda to achieve the same result, but I think I can learn something by doing it the "hard" way. I am targeting C++20. Code #include <cassert> #include <cstdint> #include <functional> template <class T, bool = std::is_invocable_v<T>> struct GetDataType { using type = typename std::invoke_result_t<T>; }; template <class T> struct GetDataType<T, false> { using type = T; }; template <class T, bool = std::is_invocable_v<T>> struct GetSourceType { using type = T; }; template <class T> struct GetSourceType<T, false> { using type = typename std::add_lvalue_reference_t<std::add_const_t<T>>; }; template <typename DataSource> class DataGetter { public: using DataType = GetDataType<DataSource>::type; using DataSourceType = GetSourceType<DataSource>::type; DataGetter(const DataSource &f) : m_dataSource(f) {} [[nodiscard]] bool dataMatches(DataType refData) const { return refData == getData(); } private: DataSourceType m_dataSource; [[nodiscard]] DataType getData() const { if constexpr (std::invocable<DataSource>) { return std::invoke(m_dataSource); } else { return m_dataSource; } } }; volatile uint32_t g_Data = 3; uint32_t getData() { return g_Data + 1; } int main(int argc, char **) { DataGetter fromLocal(argc); DataGetter fromGlobal(g_Data); DataGetter fromLambda([&argc]() { return argc + 1; }); DataGetter fromFunctionPointer(&getData); assert(fromLocal.dataMatches(1)); assert(fromGlobal.dataMatches(3)); assert(fromLambda.dataMatches(2)); assert(fromFunctionPointer.dataMatches(4)); argc++; g_Data = g_Data + 1; assert(fromLocal.dataMatches(2)); assert(fromGlobal.dataMatches(4)); assert(fromLambda.dataMatches(3)); 
assert(fromFunctionPointer.dataMatches(5)); return 0; } Compiler Explorer Answer: Your initial traits can be simplified using std::conditional: template <typename T> using GetDataType = std::conditional_t<std::is_invocable_v<T>, std::invoke_result<T>, std::type_identity<T>>::type; template <typename T> using GetSourceType = std::conditional_t<std::is_invocable_v<T>, std::type_identity_t<T>, std::add_lvalue_reference_t<std::add_const_t<T>>>; (Based on usage, I would use const T& for the non-invocable case of GetDataType.) Then template <typename DataSource> class DataGetter { public: using DataType = GetDataType<DataSource>; using DataSourceType = GetSourceType<DataSource>; DataGetter(const DataSource &f) : m_dataSource(f) {} [[nodiscard]] bool dataMatches(DataType refData) const { return refData == getData(); } private: DataSourceType m_dataSource; [[nodiscard]] DataType getData() const { if constexpr (std::invocable<DataSource>) { return std::invoke(m_dataSource); } else { return m_dataSource; } } }; Demo
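Though the question is about C++ compile-time dispatch, the run-time behaviour the traits encode (accept either a plain value or a zero-argument callable and compare uniformly) can be sketched in a few lines of Python for comparison; this is an illustrative analogue, not part of the reviewed code:

```python
class DataGetter:
    """Accepts a plain value or a zero-argument callable as its data source."""

    def __init__(self, source):
        self._source = source

    def _get_data(self):
        # Dispatch on callability, the dynamic analogue of std::is_invocable_v
        return self._source() if callable(self._source) else self._source

    def data_matches(self, ref):
        return ref == self._get_data()

# A value source and a callable source behave identically to callers
assert DataGetter(3).data_matches(3)
assert DataGetter(lambda: 4).data_matches(4)
```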
{ "domain": "codereview.stackexchange", "id": 42887, "tags": "c++, template, lambda, c++20" }
Rigid body dynamics of tossing of a coin
Question: While tossing a coin, it is commonly experienced that you get a head if you toss it up with the head side up, and a tail if you toss with the tail side up. Is there a mathematical proof of this using classical mechanics? I would like to see a simple model of the coin as a symmetric top, and consider the precession of the body axis of symmetry about the angular momentum. Answer: I will give it a shot. Spoiler: I did this in the body frame so that the moment of inertia is time independent, before you get excited... Starting with Euler's equations: $$ I_i\dot{\Omega}_i+(I_j - I_k)\Omega_j \Omega_k = 0 $$ and taking cyclic permutations of $i,j,k$ to get the three of them; and in the absence of torques (I ignore air friction). It's a symmetric top so $I=I_1=I_2 \neq I_3$ so write $$ \dot{\Omega}_1 = -\frac{(I_3-I)}{I}\Omega_2 \Omega_3 $$ $$ \dot{\Omega}_2 = -\frac{(I - I_3)}{I}\Omega_1\Omega_3 $$ $$ \dot{\Omega}_3=0 \implies \Omega_3=\text{const} $$ Now for this problem the coin is spinning about one of the first two symmetric axes. I chose 1. Then consider small variations on the other two angular velocities from zero: $\Omega_2 = \delta\Omega_2$, $\Omega_3 = \delta\Omega_3$, and $\Omega_1 \rightarrow \Omega_1$. So we make small changes in how the coin is rotating about a line through its center perpendicular to the coin, and about the other symmetric axis. In other words, it was spinning ideally like a coin would, then we changed the ideal to a little weird spinning. 
Making the changes, and ignoring second order in perturbations: $$ \dot{\Omega}_1=0 \implies \Omega_1 = k_1 $$ $$ \frac{d}{dt}(\delta\Omega_2)=-\frac{(I-I_3)}{I}\Omega_1 (\delta\Omega_3) $$ $$ \frac{d}{dt}(\delta\Omega_3)=0 \implies \delta\Omega_3 = k_2 $$ Then we can write $$ \frac{d}{dt}(\delta\Omega_2)=-\frac{(I-I_3)}{I}k_1 k_2 $$ Everything on the r.h.s. is a number so $$ \delta\Omega_2 = -\left( \frac{(I-I_3)}{I}k_1 k_2 \right) t $$ so how big $I$ is compared to $I_3$ determines how $\delta\Omega_2$ changes during the flip. If one uses a radius of $r=0.014$ m and $h=0.0015$ m for the height of the coin, one gets a moment of inertia tensor like the following: $$ I=M(0.0000491875) \quad I_3 = M(0.000098) $$ which tells me that the variations are unstable... which I don't really believe since I have seen a coin in real life. So look this over. But I can't find anything wrong so I'm going with it, and thinking that I can't really see a coin in real life up close while it's spinning... Hope this helps.
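As a quick numerical cross-check of the quoted tensor components, assuming the coin is modelled as a uniform solid cylinder (which is what the numbers suggest; mass $M$ factored out):

```python
# Moments of inertia of a uniform solid cylinder, per unit mass:
#   I3 (symmetry axis, perpendicular to the faces): r^2 / 2
#   I = I1 = I2 (axis through a diameter):          r^2 / 4 + h^2 / 12
r, h = 0.014, 0.0015  # radius and height in metres, as in the answer

I3 = r**2 / 2             # ~9.8e-05, matching the answer
I = r**2 / 4 + h**2 / 12  # ~4.91875e-05, matching the answer

assert abs(I - 4.91875e-5) < 1e-12
assert abs(I3 - 9.8e-5) < 1e-12
assert I < I3  # the comparison the answer's stability argument turns on
```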
{ "domain": "physics.stackexchange", "id": 5703, "tags": "rigid-body-dynamics, classical-mechanics" }
What does the quantum part of the quantum support vector machine actually do?
Question: I'm implementing a quantum support vector machine on qiskit and I was wondering what the quantum part of the algorithm actually does. I'm aware that it's a feature map that executes the kernel function but what is the significance of using quantum computers for that if the kernel trick is already effective classically? And how does the quantum feature map work? Answer: The basic idea of how the quantum feature map works is that you're using a quantum computer to map each input datapoint $x$ from your training domain $\mathcal{X}$ into a quantum state $|\phi(x)\rangle = U(x)|0\rangle$ in the (presumably) high dimensional quantum state space, and then evaluating a set of kernel functions: $$ k_Q(x_i, x_j) = |\langle 0|U(x_j)^\dagger U(x_i)|0\rangle|^2 $$ for all pairs $x_i,x_j \in \mathcal{X}$. The Support Vector Machine (SVM) that classifies $\mathcal{X}$ can then be treated as a black box that takes in the kernel matrix $K_{ij} = k_Q(x_i, x_j)$ and returns a model $f_Q: \mathcal{X} \rightarrow \{0,1\}$. The "kernel trick" is the substitution that allows us to use the SVM (ordinarily a linear classifier on $\mathcal{X}$) to classify the data using $K$ to achieve non-linear decision boundaries. But regardless of whether $K$ is generated by a classical or a quantum computer, this doesn't guarantee an effective classifier. An example of a kernel that can be shown to fail is the quadratic polynomial kernel $$ k_C(x_i, x_j) = (\langle x_i, x_j \rangle + b)^2 $$ which will be generally incapable of classifying data that is labelled by a function of degree 3 or higher. So if you can find a quantum kernel $k_Q$ that results in a kernelized SVM that successfully classifies data labeled by those functions for which $k_C$ fails, you've found at least some evidence to support the use of your quantum feature map. More generally, the motivation to use quantum feature maps is that they might be more expressive than some classical counterparts. 
For instance (Schuld, 2020) uses Fourier analysis to connect the spectrum of a classifier $f_Q$ to the number of local rotations in the circuit $U(x)$. But justifying the use of quantum kernels also requires finding a feature map that is inefficient to compute classically, otherwise you would just simulate the unitaries $U(x) \forall x$ to evaluate your kernels$^\dagger$. Some recent work (Huang, 2020) takes steps towards evaluating the power of quantum kernels compared to some classical counterparts but overall this is still a very open question. $^\dagger$ keep in mind that if you can simulate $U(x)$ efficiently then you can do "one-shot" evaluation of the kernel matrix so that the number of circuit simulations is only $O(n)$ instead of the $O(n^2)$ needed to evaluate $k_Q(x, x')$ on hardware. This raises the bar for demonstrating speedup using this kind of quantum SVM.
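As an illustration of the kernel definition above, here is a purely classical simulation with a toy one-qubit feature map $U(x) = R_Y(x)$, chosen only for this sketch (real quantum-kernel methods use much richer, classically hard circuits, which is the whole point of the discussion above):

```python
import math

def phi(x):
    # |phi(x)> = RY(x)|0> = (cos(x/2), sin(x/2)): a toy one-qubit feature map
    return (math.cos(x / 2), math.sin(x / 2))

def quantum_kernel(xi, xj):
    # k(xi, xj) = |<0| U(xj)^dagger U(xi) |0>|^2; the amplitudes here are
    # real, so the inner product needs no conjugation
    a, b = phi(xi), phi(xj)
    return (a[0] * b[0] + a[1] * b[1]) ** 2

# Kernel matrix K_ij that would be handed to a classical SVM as a
# precomputed kernel
data = [0.0, 0.5, 1.0]
K = [[quantum_kernel(x, y) for y in data] for x in data]

# Sanity checks: unit diagonal and symmetry, as any fidelity kernel must have
assert all(abs(K[i][i] - 1.0) < 1e-12 for i in range(3))
assert all(abs(K[i][j] - K[j][i]) < 1e-12 for i in range(3) for j in range(3))
```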
{ "domain": "quantumcomputing.stackexchange", "id": 2357, "tags": "quantum-algorithms, machine-learning, quantum-enhanced-machine-learning" }
Adding action id to a history repository
Question: In my system I have a history repository. My history class is big, with a lot of information about operations. My warehouse needs, for some actions, to know which histories were caused by the same action. My first idea was to create an overloaded version of each method that writes histories, and make all the clients using those services create and supply a GUID if they want to specify an action. As my system has a lot of services (some of which use other services) and a lot of clients that may need to specify an action, that solution sounds bad: complicated, error prone, and it would pollute code that has no interest in knowing anything about this. My solution: I created an ActionIdContext class that my clients can use to specify a group action, and my HistoryBuilder uses a static property to get the current ActionId, if any. This way no services or overloads have to be created or changed. Do you like this design, or do you have another idea? public class ActionIdContext : IDisposable { private static Guid? actionId; private readonly bool isActionIdSet; public ActionIdContext() { if (actionId == null) { actionId = Guid.NewGuid(); isActionIdSet = true; } } public static Guid? 
ActionId { get { return actionId; } set { actionId = value; } } public void Dispose() { if (isActionIdSet) { actionId = null; } } } public class HistoryBuilder { private string text; public HistoryBuilder SetText(string value) { this.text = value; return this; } public History Build() { return new History() { Text = text, ActionId = ActionIdContext.ActionId }; } } public class Client { private readonly ISomeService someService; public Client(ISomeService someService) { this.someService = someService; } public void DoSomeAction() { someService.DoSomething("Flying in the sky"); } public void DoSomeGroupAction() { using (new ActionIdContext()) { someService.DoSomething("Driving in the corner"); someService.DoSomething("Reading Hulk!"); someService.DoSomething("Doing math"); } } } Answer: The problem is that the action id is static so it will be shared with all instances of the ActionIdContext class. That means that 2 separate pieces of work executing at the same time could log everything with the same action id. var client1 = new Client(someService); var client2 = new Client(someService); var client3 = new Client(someService); Task.Run(() => client1.DoSomeGroupAction()); Task.Run(() => client2.DoSomeGroupAction()); Task.Run(() => client3.DoSomeGroupAction()); You have no idea how this code will log the actions - it could log them all with the same action id, it could log some with no action id and some with a common action id. It could also log some with one action id and others with another. The point I'm trying to make is, although a nice idea in theory, you can't locally instantiate state then share it via a static property and expect it to work. Can't you add a LogHistories(Guid actionId, params History[] histories) method or something? You should implement the full IDisposable.Dispose pattern; public static Guid? ActionId { get { return actionId; } set { actionId = value; } } would be better as an auto property. 
HistoryBuilder is a bit weird - Builders are generally used with immutables or objects with very complex constructors (in my experience).
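One way around the shared-static problem the reviewer points out is to store the ambient id per logical context rather than in a static field (in C# that would be AsyncLocal<Guid?>). Here is a sketch of that shape in Python using contextvars, as a language-agnostic illustration rather than a drop-in fix for the C# code:

```python
import contextvars
import uuid

# Ambient action id, stored per logical context (compare C#'s AsyncLocal<T>):
# concurrent flows each see their own value instead of one shared static.
_action_id = contextvars.ContextVar("action_id", default=None)

class ActionIdContext:
    """Scope within which all built histories share one action id."""

    def __enter__(self):
        self._token = _action_id.set(uuid.uuid4())
        return self

    def __exit__(self, *exc):
        _action_id.reset(self._token)

def build_history(text):
    # Stand-in for HistoryBuilder.Build(): picks up the ambient id, if any
    return {"text": text, "action_id": _action_id.get()}

with ActionIdContext():
    a = build_history("Driving in the corner")
    b = build_history("Reading Hulk!")
c = build_history("Flying in the sky")

assert a["action_id"] == b["action_id"] and a["action_id"] is not None
assert c["action_id"] is None  # outside the scope, no group id
```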
{ "domain": "codereview.stackexchange", "id": 14345, "tags": "c#, design-patterns" }
Quantum computing in finance - list of articles
Question: I am trying to find applications of quantum computing in finance. So far I have found these papers: Quantum computing for finance: Overview and prospects Quantum computational finance: quantum algorithm for portfolio optimization Efficient quantum algorithm for solving travelling salesman problem: An IBM quantum experience Towards Pricing Financial Derivatives with an IBM Quantum Computer Quantum computational finance: Monte Carlo pricing of financial derivatives Quantum Risk Analysis Credit Risk Analysis using Quantum Computers My question is: do you know of other possible applications of quantum computing in finance, banking, corporate finance management, or the general business area? Answer: Here are two more papers that may be of interest: A Quantum Algorithm For Linear PDEs Arising In Finance Dynamic Portfolio Optimization with Real Datasets Using Quantum Processors and Quantum-Inspired Tensor Networks
{ "domain": "quantumcomputing.stackexchange", "id": 3072, "tags": "quantum-algorithms, resource-request, quantum-computing-for-finance" }
Expression with fastest growth in lambda-calculus
Question: A well-known example of a divergent expression in the lambda calculus is the big-Omega combinator, defined as (λf. f f)(λf. f f). Although big-Omega is divergent, its size stays stable: every beta-reduction yields (λf. f f)(λf. f f) again. The big-Omega combinator can be extended to grow faster; for instance, the expression (λf. f f f)(λf. f f f) grows linearly, gaining one (λf. f f f) term per beta-reduction. The growth rate can be increased significantly, to exponential, using the following lambda expression: (λf.λu. f f (u u))(λf.λu. f f (u u))(λx.x), which doubles (λx.x) every two beta-reductions. Of course, by doubling (u u) in the (λf.λu. f f (u u)) term the growth can be increased again, but it remains in the exponential class. Other mechanisms can be applied to increase growth further, and so on. But the question is: which lambda expression has the largest theoretically known growth rate, and what is that rate? For example, in various mathematical fields there are huge numbers, like TREE(3) or BigFoot. Is something equivalent known for the lambda calculus: a divergent expression with maximal growth rate? Thanks! Answer: Ok, I want to interpret this question as: what is the biggest growth rate a term $t$ of size $|t| = C$ can have, after $n$ $\beta$-reductions (as a function of $n$). There are several different interpretations for the question, as well as within the question: for example, am I allowed to pick which $\beta$-reduction I want at each step, or do I want the "worst case" (smallest increase possible)? All these questions are interesting! It's pretty easy to bound the growth from above by $C^{2^n}$ (double exponential): a redex $(\lambda x.t)\ u$ has at most $C$ occurrences of $x$ and the size of $u$ is also bounded by $C$, which gives $C^2$ as the maximum size for $t[u/x]$. Iterating this gives $(({C^2})^2)^\ldots$, that is, $C^{2^n}$. But is this upper bound attained? I suspect not. 
As you note, there is an exponential lower bound, which leaves quite a gap.
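The growth rates claimed in the question are easy to check mechanically. Below is a small normal-order beta-reducer over De Bruijn terms, written for this answer (not a known library), which confirms that Ω keeps a constant size while the ternary variant grows linearly, gaining 7 nodes per reduction:

```python
# Terms are tuples: ('var', i) | ('lam', body) | ('app', fun, arg)

def shift(t, d, cutoff=0):
    """Add d to every free variable index >= cutoff."""
    if t[0] == 'var':
        return ('var', t[1] + d) if t[1] >= cutoff else t
    if t[0] == 'lam':
        return ('lam', shift(t[1], d, cutoff + 1))
    return ('app', shift(t[1], d, cutoff), shift(t[2], d, cutoff))

def subst(t, u, j=0):
    """Capture-avoiding substitution of u for variable j in t."""
    if t[0] == 'var':
        if t[1] == j:
            return shift(u, j)
        return ('var', t[1] - 1) if t[1] > j else t
    if t[0] == 'lam':
        return ('lam', subst(t[1], u, j + 1))
    return ('app', subst(t[1], u, j), subst(t[2], u, j))

def step(t):
    """One leftmost-outermost beta step, or None if t is a normal form."""
    if t[0] == 'app':
        f, a = t[1], t[2]
        if f[0] == 'lam':
            return subst(f[1], a)
        r = step(f)
        if r is not None:
            return ('app', r, a)
        r = step(a)
        return ('app', f, r) if r is not None else None
    if t[0] == 'lam':
        r = step(t[1])
        return ('lam', r) if r is not None else None
    return None

def size(t):
    return 1 if t[0] == 'var' else (1 + size(t[1]) if t[0] == 'lam'
                                    else 1 + size(t[1]) + size(t[2]))

# Omega = (lf. f f)(lf. f f): 9 nodes, invariant under reduction
L2 = ('lam', ('app', ('var', 0), ('var', 0)))
omega = ('app', L2, L2)
assert step(omega) == omega

# (lf. f f f)(lf. f f f): grows linearly, +7 nodes per step
L3 = ('lam', ('app', ('app', ('var', 0), ('var', 0)), ('var', 0)))
t, sizes = ('app', L3, L3), []
for _ in range(5):
    sizes.append(size(t))
    t = step(t)
assert sizes == [13, 20, 27, 34, 41]
```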
{ "domain": "cs.stackexchange", "id": 21571, "tags": "computability, lambda-calculus" }
Drawing a sudoku board to a canvas
Question: I have been looking around for a good place to get my code reviewed for some time. I just stumbled on this site and I was hoping some people could tell me if I'm progressing in the right direction. This is a snippet from a sudoku game I'm making. The full source code can be viewed at http://www.lesshardtofind.com/Sudoku/sudoku.js and a working example (only the user input segment) at http://www.lesshardtofind.com/Sudoku/main.html function Cell(X, Y){ // Object that contains all the data needed for a sudoku cell this.Size = CELLSIZE; // This object's size this.Value = 0; // Displayed value 1-9 this.Color = '#DDDDFF'; // I wanted to use CellColor but a glitch caused some of them to turn black this.BorderColor = 'black' this.Xloc = X; // Xcoordinate value on the canvas this.Yloc = Y; // Ycoordinate value on the canvas this.Draw = function(){ // The function to draw the cell on the screen var CNV = Get(CANVASID); // setup the context var CTX = CNV.getContext('2d'); CTX.fillStyle = this.Color; // setup and draw the rectangle CTX.fillRect(this.Xloc, this.Yloc, this.Size, this.Size); CTX.moveTo(this.Xloc, this.Yloc); // draw the boarder CTX.lineTo(this.Xloc+this.Size, this.Yloc); CTX.lineTo(this.Xloc+this.Size, this.Yloc+this.Size); CTX.lineTo(this.Xloc, this.Yloc+this.Size); CTX.lineTo(this.Xloc, this.Yloc); CTX.strokeStyle = this.BorderColor; CTX.lineWidth = 1; CTX.stroke(); if(this.Value){ // if the value isn't 0 draw the current value to the screen CTX.fillStyle = 'black'; CTX.font = "30px kaushan_scriptregular"; CTX.fillText(this.Value, this.Xloc+14, this.Yloc+33); } } } function Board(){ // the main game board object definition this.Rows = new Array; // the array to hold the cells of the board this.Born = false; // boolean to say if this is the initiation of the object this.Setup = function(){ // setup the board and initiate all variables to appropriate values and instantiate objects var CurX = 280; // The starting X coordinate var CurY = 50; // The 
starting Y coordinate for(var x = 0; x < 9; x++) // Set all the Rows array data position as arrays creating a 9x9 grid this.Rows[x] = new Array; for(var x = 0; x < 9; x++){ // loop through all 81 cells and create them at the right location for(var y = 0; y < 9; y++){ this.Rows[x][y] = new Cell(CurX, CurY); this.Rows[x][y].Value = 0; CurX += this.Rows[x][y].Size; } CurY += this.Rows[x][0].Size; CurX = 280; } this.Born = true; // set the flag that the board was initiated } this.Draw = function(){ // draw method to handle all objects within the board for(var r = 0; r < 9; r++){ // loop through all cells and call their draw method for(var c = 0; c < 9; c++){ this.Rows[r][c].Draw(); } } } } Answer: I don't seem to see the need for the methods to be declared inside the constructor. You aren't making any property "private". Also, declaring methods this way makes copies of the methods for each instance. I suggest you move them out to the prototype instead. That way, they are declared once, and shared across instances, saving memory. function Cell(x,y){ this.size = CELLSIZE; //properties here } //Methods out here Cell.prototype = { constructor : Cell, //refer back the constructor draw : function(){...}, //more methods for Cell here } I also notice CELLSIZE and CANVASID. If they are globals, then I suggest you not make them globals. Lastly, I suggest you use a canvas framework instead of drawing the shapes by hand. Learning the basics is good, but most developers don't code from scratch (unless there's a compelling need to do so). I suggest using PaperJS or KineticJS for drawing your shapes.
{ "domain": "codereview.stackexchange", "id": 4450, "tags": "javascript, sudoku, canvas" }
Hi, I want to make a program that gives the position of a red ball in 2D with rviz
Question: I already have code in OpenCV for the detection and it works; I just don't know how to proceed with rviz. I can see the camera in rviz by subscribing to the topic, but after that I don't know how to see it on the grid. Can anyone help me? Originally posted by ANDRO on ROS Answers with karma: 1 on 2017-10-22 Post score: 0 Original comments Comment by l4ncelot on 2017-10-23: You need to tell RVIZ the position of your camera. You can do it with simple markers. But I would recommend doing it by publishing tf transforms of your camera (you'll probably need that anyway in the future). Comment by l4ncelot on 2017-10-23: Have a look at tf_broadcaster tutorial. Answer: Here you have a very rudimentary solution. Of course you will need to add calibration procedures and take extra things into account, like using the blob area to get the distance value, but I hope this gives you a starting point. Essentially you need to publish a Marker topic with a Sphere type of colour red. This marker will have to be referenced to your camera link or similar. I've made this video as an example of how it could be done: Video The core of solving this is creating a script that retrieves your detection data and publishes a Marker with it. 
Here you have the script I've used to publish it: #!/usr/bin/env python import rospy from cmvision.msg import Blobs, Blob from visualization_msgs.msg import Marker from geometry_msgs.msg import Point from sensor_msgs.msg import CameraInfo class MarkerBasics(object): def __init__(self): self.marker_objectlisher = rospy.Publisher('/marker_redball', Marker, queue_size=1) self.rate = rospy.Rate(1) self.init_marker(index=0,z_val=0) def init_marker(self,index=0, z_val=0): self.marker_object = Marker() self.marker_object.header.frame_id = "/camera_link" self.marker_object.header.stamp = rospy.get_rostime() self.marker_object.ns = "mira" self.marker_object.id = index self.marker_object.type = Marker.SPHERE self.marker_object.action = Marker.ADD my_point = Point() my_point.z = z_val self.marker_object.pose.position = my_point self.marker_object.pose.orientation.x = 0 self.marker_object.pose.orientation.y = 0 self.marker_object.pose.orientation.z = 0.0 self.marker_object.pose.orientation.w = 1.0 self.marker_object.scale.x = 0.05 self.marker_object.scale.y = 0.05 self.marker_object.scale.z = 0.05 self.marker_object.color.r = 1.0 self.marker_object.color.g = 0.0 self.marker_object.color.b = 0.0 # This has to be otherwise it will be transparent self.marker_object.color.a = 1.0 # If we want it for ever, 0, otherwise seconds before desapearing self.marker_object.lifetime = rospy.Duration(0) def update_position(self,position): self.marker_object.pose.position = position self.marker_objectlisher.publish(self.marker_object) class BallDetector(object): def __init__(self): self.rate = rospy.Rate(1) self.save_camera_values() rospy.Subscriber('/blobs', Blobs, self.redball_detect_callback) self.markerbasics_object = MarkerBasics() def save_camera_values(self): data_camera_info = None while data_camera_info is None: data_camera_info = rospy.wait_for_message('/mira/mira/camera1/camera_info', CameraInfo, timeout=5) rospy.loginfo("No Camera info found, trying again") self.cam_height_y = 
data_camera_info.height self.cam_width_x = data_camera_info.width rospy.loginfo("CAMERA INFO:: Image width=="+str(self.cam_width_x)+", Image Height=="+str(self.cam_height_y)) def redball_detect_callback(self,data): if(len(data.blobs)): for obj in data.blobs: if obj.name == "RedBall": rospy.loginfo("Blob <"+str(obj.name)+"> Detected!") redball_point = Point() # There is a diference in the axis from blobs and the camera link frame. # We convert to percent of the screen # TODO: Take into account the Depth distance and camera cone. rospy.loginfo("self.cam_width_x="+str(self.cam_width_x)) rospy.loginfo("self.cam_width_x="+str(self.cam_height_y)) rospy.loginfo("obj.x="+str(obj.x)) rospy.loginfo("obj.y="+str(obj.y)) middle_width = float(self.cam_width_x)/2.0 middle_height = float(self.cam_height_y)/2.0 redball_point.x = (obj.x - middle_width) / float(self.cam_width_x) redball_point.z = (obj.y - middle_height) / float(self.cam_height_y) redball_point.y = 0.6 rospy.loginfo("blob is at Point="+str(redball_point)) self.markerbasics_object.update_position(position=redball_point) else: rospy.logwarn("No Blobs Found") def start_loop(self): # spin() simply keeps python from exiting until this node is stopped rospy.spin() if __name__ == '__main__': rospy.init_node('redball_detections_listener_node', anonymous=True) redball_detector_object = BallDetector() redball_detector_object.start_loop() It defines two classes, one to publish a RedSphere (MarkerBasics) and the other retrieves and converts the blob data into something compatible with space localization ( BallDetector) . Its here where all the calibration and adjustments will have to be made to have a precise representation. Because in this example the depth axis ( Y in this case ) is hardcoded, but could be calculated based on the Blob Area. 
Another thing is that the conversion of the x,y values of the BlobTracker to meters is done here in a very simple way; in reality this has to be calculated based on the focal distance and other camera details, so bear that in mind. Hope it was useful. Originally posted by RDaneelOlivaw with karma: 281 on 2017-10-23 This answer was ACCEPTED on the original site Post score: 0
{ "domain": "robotics.stackexchange", "id": 29165, "tags": "rviz" }
Which (Fourier transform?) processing method to use for time variant audio processing? (newbie)
Question: Edit: I've since been able to understand how the relevant parts of the Fourier transform work and how they relate to my problem / what I want to do, and have come to the conclusion that the Fourier transform is not best suited to my problem and that I have an alternative way to do what I want that is much simpler to control / behaves the way I want in practice. Since I'm new here, I don't know what's the right thing to do with my question on this site. Just in case anybody was following this / was preparing an answer, I'm leaving this question up for one more day with this edit and after that I'll delete the question. I've developed a basic algorithm for processing audio. I have some programming experience but none with Fourier transforms (but will dedicate all of 2020 to learning this full time if need be). My question relates to how best to program my algorithm in practice. What I need to do is the following: Analyze the frequency content at exact locations of a 44.1kHz PCM audio stream / file. Detect and modify the amplitude of specific frequencies at specific time locations (I do not need to modify phase or have knowledge of phase to do this). Then turn this modified version back into audio without audible artifacts from processing errors/limitations. As I said, I currently do not know much about the Fourier transform. So far my understanding is that an FFT (with windowing) will allow me to analyze the frequency spectrum of a block and detect and modify the amplitude of specific frequencies, but not modify the amplitude of a specific frequency at one specific time in that block while leaving the same frequency untouched at a different time in that same block? For instance, for a 44.1kHz sample rate and block size 8192, as far as I understand it a normal FFT will not allow me to detect two separate 10kHz transients in that block as separate events and modify only one of them? If my above understanding is correct, can anybody suggest a proper (Fourier-transform-like?) 
processing method(s) to use / investigate to achieve my goal? I've thought of a rough way to do what I want with FFT/IFFT by dividing the frequency spectrum (5Hz to Nyquist, for instance) into separate octave bands, all processed separately: for instance, a first band of 11.025kHz to 22.05kHz at a 44.1kHz sample rate with block size 4 (not sure of the right block size, just an estimate, so I get the transients in one block and still sufficient frequency resolution for that frequency band), then a 5.5125kHz to 11.025kHz band at a 22.05kHz sample rate (downsampled), also with block size 4 as an estimate, etc., with a total of 12 bands down to about 5Hz. But this will be a lot of work and will take me a lot of time to develop, and surely someone must have done something similar and smarter a long time ago to do the type of processing I wish to do :) If anybody can point me in the right direction (and correct me if I got some assumptions wrong) I will be very thankful! Answer: I believe the answer to your question is the Short-time Fourier Transform (also called the Short-term Fourier Transform). There is the Wikipedia article. I tried to show the essential math in this answer. Both the phase vocoder and sinusoidal modeling are built on the STFT in some form or another; the two methods sort of merge in concept at the STFT level. Remember that the analysis window for the STFT need not be the same as the synthesis window used in constructing the processed audio for output.
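To make the STFT idea concrete, here is a minimal sketch of exactly the kind of time-and-frequency-selective edit the question asks about: two identical 10 kHz bursts, with only one of them attenuated. This is my own illustration, not from the answer, and it uses SciPy's stft/istft with their default Hann window and 50% overlap:

```python
import numpy as np
from scipy.signal import stft, istft

fs = 44100
t = np.arange(fs) / fs  # 1 second of audio
# A 1 kHz tone throughout, plus two 50 ms bursts of 10 kHz at 0.2 s and 0.7 s
x = np.sin(2 * np.pi * 1000 * t)
for start in (0.2, 0.7):
    idx = (t >= start) & (t < start + 0.05)
    x[idx] += np.sin(2 * np.pi * 10000 * t[idx])

# Analysis: rows of Z are frequency bins, columns are time frames
f, times, Z = stft(x, fs=fs, nperseg=1024)

# Attenuate 10 kHz only around t = 0.2 s, leaving the burst at 0.7 s intact
freq_mask = (f > 9000) & (f < 11000)
time_mask = (times > 0.15) & (times < 0.3)
Z[np.ix_(freq_mask, time_mask)] = 0.0

# Synthesis: overlap-add back to a time-domain signal
_, y = istft(Z, fs=fs, nperseg=1024)
```

Re-analyzing `y` shows the band around 10 kHz nearly empty near 0.2 s but still full near 0.7 s, which a plain FFT of the whole block could not do.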
{ "domain": "dsp.stackexchange", "id": 8240, "tags": "fft, audio" }
Effect of d-orbital electron shielding on atomic radius
Question: In a standard book it is given: "Atomic radius of Gallium is less than that of Aluminium. This can be understood from the variation in the inner core of the electronic configuration. The presence of additional 10 d-electrons offer only poor screening effect for the outer electrons from the increased nuclear charge in gallium. Consequently, the atomic radius of gallium (135 pm) is less than that of aluminium (143 pm)." In another chapter, it is given: "As we move along the period in 3d series, we see that nuclear charge increases from scandium to zinc but electrons are added to the orbital of inner sub shell, i.e., 3d orbitals. These 3d electrons shield the 4s electrons from the increasing nuclear charge somewhat more effectively than the outer shell electrons can shield one another. Therefore, the atomic radii decrease less rapidly." In the first passage it is said that the d-electrons shield poorly, but the opposite is said in the second, and it is unlikely that the book is wrong. So what am I missing? Answer: The two passages do not contradict each other; they compare different things. 3d electrons give worse shielding of 4s/4p electrons than the 1s-3s and 2p-3p electrons do. 3d electrons give better shielding of 4s/4p electrons than the 4s/4p electrons give one another (mutually).
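As a rough quantitative illustration (my own addition, not from the answer), Slater's rules, a standard back-of-the-envelope screening model, give the outer p electron of gallium a noticeably larger effective nuclear charge than aluminium's, which is the trend behind the smaller radius:

```python
# Slater's-rules estimate of Z_eff for the outermost p electron of Al and Ga,
# illustrating the poor screening provided by the ten 3d electrons.

def zeff_outer_p(Z, same_group, n_minus_1, inner):
    """Z_eff = Z - sigma, with Slater screening constants:
    0.35 per other electron in the same (ns, np) group,
    0.85 per electron in the n-1 shell,
    1.00 per deeper electron."""
    sigma = 0.35 * same_group + 0.85 * n_minus_1 + 1.00 * inner
    return Z - sigma

# Al: [Ne] 3s2 3p1 -> 2 others in (3s,3p), 8 electrons in n=2, 2 in n=1
zeff_al = zeff_outer_p(13, 2, 8, 2)    # 3.50
# Ga: [Ar] 3d10 4s2 4p1 -> 2 others in (4s,4p), 18 in n=3, 10 deeper
zeff_ga = zeff_outer_p(31, 2, 18, 10)  # 5.00
```

The jump from about 3.5 to 5.0 happens because the 18 electrons of the n=3 shell (including the ten 3d electrons) screen at only 0.85 each rather than fully.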
{ "domain": "chemistry.stackexchange", "id": 15750, "tags": "inorganic-chemistry, orbitals, periodic-trends" }
c# best way to declare a class-level value in base class and set it at derived classes
Question: I have a root class and I'd like to add an indicator to it (property, static field, readonly field, virtual method or something like that), that should give me the same string value for every instance of that class, I would rather prefer to get that value without instantiating that class. GetObjType() virtual method is my current implementation of it but I am thinking there may a better way of doing this. Can you think of a better way of doing this, like class level singleton fields or something like that. public abstract class CoreEntity { public int Id { get; set; } public abstract string GetObjType(); } public class DepartmanBase : CoreEntity { public override string GetObjType() {return "DEPARTMAN";} public string DepartmanName { get; set; } public int ManagerId { get; set; } //Obj Type should be set as "DEPARTMAN" here and should be same in both derived classes } public class Departman : DepartmanBase { } public class DepartmanView : DepartmanBase { public string ManagerName { get; set; } } public class Personnel : CoreEntity { public override string GetObjType() {return "PERSONEL";} public string PersonnelName { get; set; } } public class Repository<T> where T : CoreEntity , new() { public void DoSomething() { string objType = new T().GetObjType(); Console.WriteLine("Object Type for: " + typeof(T).Name + " is " + objType); } } void Main() { var d = new Repository<Departman>(); var dv = new Repository<DepartmanView>(); var p = new Repository<Personnel>(); d.DoSomething(); dv.DoSomething(); p.DoSomething(); } Answer: What I've guessed from your code (and I might be wrong) is that you need to get some string from a type used in a persistence layer. The string might be a table name, or a value for a certain column. I've decided to call this a discriminator. Multiple types can have the same discriminator but each type can only have one. 
I think it makes sense to introduce an inheritable attribute here, let's do that: [AttributeUsage(AttributeTargets.Class, Inherited = true, AllowMultiple = false)] sealed class DiscriminatorAttribute : Attribute { readonly string discriminator; public DiscriminatorAttribute(string discriminator) { this.discriminator = discriminator; } public string Discriminator { get { return discriminator; } } } We'll also add a class which takes a type and gets us our string (if we have one). public static class DiscriminatorService { public static string LookupDiscriminatorForType(Type t) { var result = (from a in t.GetCustomAttributes(true) where a is DiscriminatorAttribute select a).SingleOrDefault() as DiscriminatorAttribute; return result != null ? result.Discriminator : null; } } This will allow you to change your repository to the following: public class Repository<T> where T : CoreEntity { public void DoSomething() { var objType = DiscriminatorService.LookupDiscriminatorForType(typeof(T)); Console.WriteLine("Object Type for: " + typeof(T).Name + " is " + objType); } } Some important things to note: You no longer need to create an instance to get the string you can remove the new() generic constraint Obviously you need to decorate your classes with the new attribute: [Discriminator("DEPARTMAN")] public class DepartmanBase : CoreEntity { public string DepartmanName { get; set; } public int ManagerId { get; set; } } public class Departman : DepartmanBase { // Inherits discriminator from DepartmanBase } public class DepartmanView : DepartmanBase { // Inherits discriminator from DepartmanBase public string ManagerName { get; set; } } [Discriminator("PERSONEL")] public class Personnel : CoreEntity { public string PersonnelName { get; set; } } Here's the full code in .Net Fiddle
{ "domain": "codereview.stackexchange", "id": 10594, "tags": "c#, object-oriented" }
The deSitter group vs. the Poincare group for a non-zero cosmological constant?
Question: The present experimental value for the cosmological constant is tiny, but nonzero: $\Lambda \approx 1.19 \cdot 10^{-52}\ \mathrm{m}^{-2}$. The Poincare group is the contraction of the deSitter group in the limit $\Lambda \rightarrow 0$, analogous to how the Galilean group is the contraction of the Poincare group in the limit $c\rightarrow \infty$. Doesn't a non-zero $\Lambda$ mean that the exact spacetime symmetry group is the deSitter group and the Poincare group is only a good approximation because $\Lambda$ is so small? Answer: Yes, if our universe is indeed deSitter, then the correct group to seek unitary representations of in quantum field theory is indeed the deSitter group $\mathrm{O}(1,n)$ instead of the Poincaré group $\mathrm{SO}(1,n-1)\ltimes\mathbb{R}^n$. As shown in "Contractions of representations of de Sitter groups" by Mickelsson and Niederle (see also their references to Wigner and Inonu), it is fortunately the case that at least all the massive representations of the Poincaré group can be obtained by contracting representations of the deSitter group, meaning the approximation is also consistent on this level. What exactly the mass parameter itself means in deSitter space is a somewhat open question. A longer overview of the Wigner classification for $\mathrm{O}(1,4)$, i.e. 4D deSitter space, and the physical meaning of the different possible representations is given in "Group theory and de Sitter QFT" by Boers.
{ "domain": "physics.stackexchange", "id": 36262, "tags": "particle-physics, cosmology, group-theory, beyond-the-standard-model" }
When does phenolphthalein containing solution change colour?
Question: We were estimating the amount of dissolved $\ce{CO2}$ in water using the American Public Health Association method. It was a titrimetric method using phenolphthalein indicator. The titrant used was $\ce{NaOH}$ and the analyte was sample water. The following questions came to my mind: Why does the phenolphthalein $\ce{[HIn]}$ solution change colour at pH 8.2-10, when the addition of a little titrant $\ce{NaOH}$ could already produce some $\ce{In-}$ ions, the ions that give the colour to the solution? Why does it show a colour change only when there's $10^{-5.8}$ to $10^{-4}$ mol/L of free $\ce{OH-}$ in the water? Is the source of this resulting basic pH the titrant $\ce{NaOH}$? I think so, because all the $\ce{OH-}$ formed from the reaction of the titrant $\ce{NaOH}$ and the analyte $\ce{CO2}$ reacts with the $\ce{H+}$ ions of phenolphthalein. So it's only possible that the shift from pH 7 to pH 8.2 is due to the $\ce{OH-}$. Answer: I am not absolutely sure I understood the question completely, but I will try to shed some light on the colour change of an indicator. As a rule of thumb, the human eye can make out a change in colour when there is a 10:1 ratio of the components. Since phenolphthalein, $\ce{HIn}$, is without colour, and its deprotonated form $\ce{In-}$ is pink, this means that a solution with $\displaystyle \frac{n(\ce{HIn})}{n(\ce{In-})} = \frac{10}{1}$ will appear clear, while $\displaystyle \frac{n(\ce{HIn})}{n(\ce{In-})} = \frac{1}{10}$ will appear pink. Since phenolphthalein is a weak acid, you will see that the colour change will theoretically happen around the equilibrium point, i.e. $\mathrm{pH}=\mathrm{p}K_\mathrm{a}$. (Which is also a simplification, because phenolphthalein has multiple acidic protons.) At that point you will probably not notice significant changes. Phenolphthalein is a bit of an exception here, though, because it has only one colour: the human eye can probably see a smaller surplus of the colourful component. 
(It is much more difficult for an indicator like methyl red.) If you substitute the above fractions into the Henderson-Hasselbalch equation, you can approximate the range in which the colour change will happen: \begin{align} \mathrm{pH} &= \mathrm{p}K_\mathrm{a} - \lg\left(\frac{n(\ce{HIn})}{n(\ce{In-})}\right)\\ &=% \begin{cases} \mathrm{p}K_\mathrm{a} - \lg\left(\frac{10}{1}\right); & \text{clear}\\ \mathrm{p}K_\mathrm{a} - \lg\left(\frac{1}{10}\right); & \text{pink} \end{cases}\\ \mathrm{pH} &= \mathrm{p}K_\mathrm{a} \pm 1 \end{align} When you are estimating the amount of carbon dioxide dissolved in water, the following equilibrium will be present: $$\ce{CO2 + H2O <=> H2CO3 <=> HCO3- + H+}$$ By adding hydroxide ions you shift this equilibrium to the right, essentially deprotonating the acid. Once you have (to a first approximation) fully converted it to hydrogen carbonate, you start deprotonating the indicator until you notice the colour change.
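As a quick numeric illustration of that $\pm 1$ window (my own sketch; the value of about 9.3 for phenolphthalein's main transition is an assumed textbook figure, not from the answer):

```python
def in_to_hin_ratio(pH, pKa):
    """[In-]/[HIn] ratio from the Henderson-Hasselbalch equation."""
    return 10 ** (pH - pKa)

pKa = 9.3  # assumed pKa for phenolphthalein's main transition

# One pH unit below pKa the indicator is ~10:1 protonated (clear);
# one unit above it is ~10:1 deprotonated (pink).
ratio_low = in_to_hin_ratio(pKa - 1, pKa)   # 0.1 -> solution looks clear
ratio_high = in_to_hin_ratio(pKa + 1, pKa)  # 10  -> solution looks pink
```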
{ "domain": "chemistry.stackexchange", "id": 6780, "tags": "analytical-chemistry, ph, titration" }
Is it possible to use the stars to determine the passage of time?
Question: I'm writing a science fiction short story which involves a group of people being in suspended animation for a very long period of time, on the order of thousands of years. My question is, would an astronomer on waking be able to determine that a significant period of time has passed purely by making observations in the night sky? How accurate would they be? For the purposes of the story, I'm more interested in what an astronomer could observe unaided or with a primitive telescope of the sort Galileo might have had early in his career (around 3X magnification). Thanks! Answer: In the night sky in 10,000 years, two things will have changed in relation to the stars. First, the rotational axis of the Earth will have changed, shifting the celestial sphere. Second, the stars themselves will have moved a bit relative to each other due to proper motion. So, the night sky will be quite different in 10,000 years, but still recognizable, particularly the constellations, which will have changed somewhat but will still be identifiable. I would posit that as long as the person or persons were at least casual star gazers or astronomy enthusiasts (not even necessarily professionals), they could estimate how much time has passed in the course of their slumber, I'd say with a margin of error of about +/-2000 years. Should they be trained astronomers, that margin of error should fall.
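To put rough numbers on those two effects (my own back-of-the-envelope sketch, using commonly quoted values: an axial precession period of about 25,772 years, and published proper motions for two fast-moving stars):

```python
years = 10_000

# Axial precession: the celestial pole traces a circle every ~25,772 yr,
# so after 10,000 yr it has swung well over a third of the way around.
precession_deg = 360 * years / 25_772   # ~140 degrees

# Proper-motion drift, converted from arcsec/yr to degrees over 10,000 yr.
barnard_deg = 10.36 * years / 3600   # Barnard's Star (telescopic), ~29 degrees
arcturus_deg = 2.28 * years / 3600   # Arcturus (naked eye), ~6 degrees
```

Drifts of several degrees for bright stars like Arcturus would visibly distort familiar constellations, consistent with the answer's claim that the sky would be different but still recognizable.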
{ "domain": "astronomy.stackexchange", "id": 838, "tags": "telescope, observational-astronomy, star-gazing" }
What is the formula for the intensity of light, and how are amplitude, frequency and number of photons considered?
Question: One formula for light intensity is$$ I = \frac{nfh}{At} \,,$$where: $n$ is the number of photons; $h$ is Planck's constant; $f$ is the frequency; $A$ is the incident area; $t$ is time. Another formula describes intensity as a function of the magnitude of the electric field squared: $$I\left(t\right) \propto \left|E\left(t\right)\right|^2$$ $$I=\left|S\right|=\frac{\left|E\right|^2}{Z_0}$$ How do I reconcile these two formulas? Answer: Classical/Wave Model An electromagnetic wave is composed of oscillating electric and magnetic fields, which are orthogonal. Our field equations might be described by $$\mathbf{E}(x,t) = {E_0}\sin\left(kx-\omega t\right)\mathbf{\hat x}$$ and $$\mathbf{B}(x,t) = {B_0}\sin\left(kx-\omega t\right)\mathbf{\hat y}.$$ Here the frequency is given by $f = \frac{\omega}{2\pi}$ and the wavelength by $\lambda = \frac{2\pi}{k}$. The amplitudes are given by $E_0$ and $B_0$. These equations form a plane wave whose total intensity, at any point in time, is given by the Poynting vector $$ \mathbf{S} = \frac{1}{\mu_0}\left(\mathbf{E} \times \mathbf{B}\right). $$ The time-average of the Poynting vector turns out to be $$ I(t) = \left< \mathbf{S}(t) \right> = \frac{1}{2c\mu_0} E_0^2.$$ This is the equation you mention. There are no photons to be counted in this paradigm, since classical electrodynamics describes light as waves, not particles. Particle/Quantum Model In the high-energy limit, photons act more like particles than waves. The intensity is defined as power per unit area, and power is defined as energy per unit time. Thus: $$I = \frac{P}{A} = \frac{E}{\Delta t} \frac{1}{A}.$$ The energy of a photon is $E = hf$, so the total intensity for $n$ photons is $$I = n \cdot \frac{hf}{A\Delta t}. $$ In this model, photons are only counted, and not seen as waves. Thus there is no amplitude to be considered.
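The two pictures can be tied together numerically: compute the classical intensity from a field amplitude, then ask how many photons per unit area per second would carry that same intensity. This is my own illustrative sketch; the field amplitude and frequency are values I picked, while the constants are standard:

```python
import math

c = 2.998e8            # speed of light, m/s
mu0 = 4e-7 * math.pi   # vacuum permeability, T*m/A
h = 6.626e-34          # Planck constant, J*s

def intensity_from_field(E0):
    """Time-averaged plane-wave intensity, I = E0^2 / (2 c mu0)."""
    return E0 ** 2 / (2 * c * mu0)

def photon_flux(I, f):
    """Photons per square metre per second: n/(A t) = I / (h f)."""
    return I / (h * f)

E0 = 100.0   # V/m, illustrative field amplitude
f = 5.0e14   # Hz, roughly green light
I = intensity_from_field(E0)   # ~13.3 W/m^2
flux = photon_flux(I, f)       # ~4e19 photons per m^2 per s
```

The enormous photon flux for even a modest field amplitude is why the classical, continuous-wave description works so well at everyday intensities.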
{ "domain": "physics.stackexchange", "id": 49111, "tags": "visible-light, electric-fields, frequency, intensity" }
Ways of producing light through neutral particles
Question: My question is whether light can be produced in some way through neutral particles. Usually I hear about light being produced by oscillating charged particles. Can it be produced by neutral particles by any means, without involving charged particles? I've heard of light being produced by smashing neutral particles into charged particles, but can it be produced by the decay of only neutral particles, without them being smashed by charged particles? Is it theoretically possible to produce light by neutral particles? I mean possible according to conservation laws, satisfying conservation of lepton number and all other laws? Answer: A neutral particle can decay to a charged particle/antiparticle pair, which can then annihilate to two photons. For example, the Higgs boson can decay to two photons in this way. However, in the Standard Model of particle physics, photons directly couple only to charged particles. In some extensions of the Standard Model, photons couple directly to the neutral Higgs and simultaneously to the neutral $Z$ boson, without any charged particles involved. In quantum gravity theories, photons also couple directly to gravitons, which are neutral. Rapidly expanding spacetime can theoretically create photons without first creating charged particles.
{ "domain": "physics.stackexchange", "id": 68250, "tags": "energy, particle-physics, visible-light" }
Where is the energy transfer from a metal ball falling from a magnet?
Question: A ball falls from a magnet, but the magnet still exerts an upward force against gravity, yet the ball falls anyway. However, the ball slows down, and thus the sound when it hits the floor is quieter, signalling that some energy has been lost during its descent. What I'm wondering is where that energy has gone. Has the magnet gained magnetic energy? Or has the Earth gained energy? Or has the ball not lost energy, but its remaining energy just wasn't turned into sound upon contact with the ground? Or is it something else? Answer: If the ball is made of iron You put "magnetic" potential energy into the system when you brought the metal ball up to the magnet. The sign of that potential energy is negative, indicating an attractive force. When you drop the ball, the ball loses gravitational potential energy, but gains magnetic potential energy and kinetic energy. Therefore less energy was available for kinetic energy and the corresponding impact speed was lower. edit: We can write this as an energy relationship: $$ \begin{align} \Delta E &= \Delta K + \Delta U \\ 0 &= (K_f-K_i) + \Delta U_G + \Delta U_{M} \\ K_f &= -\Delta U_G - \Delta U_M \end{align} $$ where $K_i = \Delta E = 0$. In the situation where the ball falls, the gravitational potential energy decreases ($\Delta U_G < 0$) but the magnetic potential energy increases ($\Delta U_M > 0$), since the metal ball has moved further from the magnet. The force two magnets experience is a conservative force (since it is path independent), and so it makes sense to talk about a magnetic potential energy. The ball would need to be iron (or one of its alloys, like steel) because in terms of everyday materials, iron is the only one that can be temporarily (induced) magnetized. For most other materials, $\Delta U_M \approx 0$.
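A small numerical sketch of that bookkeeping (all values here are invented for illustration; the magnetic term in particular depends on the magnet, the ball, and the drop height):

```python
g = 9.81      # m/s^2
m = 0.05      # kg, assumed ball mass
h = 0.5       # m, assumed drop height

dU_gravity = -m * g * h   # gravitational PE decreases as the ball falls
dU_magnetic = 0.05        # J, assumed *increase* in magnetic PE as the ball leaves the magnet

# Energy balance from the answer: K_f = -dU_G - dU_M
K_final = -dU_gravity - dU_magnetic
v_with_magnet = (2 * K_final / m) ** 0.5
v_free_fall = (2 * g * h) ** 0.5   # same drop with no magnet, for comparison
```

The impact speed with the magnet present comes out lower than the free-fall speed, which is why the landing is quieter: the "missing" kinetic energy is stored as magnetic potential energy of the ball-magnet system.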
{ "domain": "physics.stackexchange", "id": 54855, "tags": "electromagnetism, gravity" }
Python aiohttp extension
Question: I made my first open-source project and wonder if I could do it better. I'll give a brief description, ask some questions and then post the whole code. Open to any opinions and suggestions, thank you. So, aiohttp is an asynchronous HTTP client for Python. You can find its documentation here. However, all you need to know is that it has a ClientSession class for making requests. This class is designed around the async-with approach. The developers suggest using it like this: async def fetch(session, url): async with session.get(url) as response: return await response.text() async with aiohttp.ClientSession() as session: html = await fetch(session, 'http://python.org') print(html) I made an extension to aiohttp for retries. It's a common idea to retry a request in case it failed. I tried to keep this async-with approach. So, here are two classes: RetryClient - client for retries It takes the same params as the aiohttp ClientSession and does the same things. The only difference: methods like 'get' take additional params, like the number of attempts. _RequestContext - async-with context for handling a request It contains all the retry logic and tries to do the requests. When it does a request, it sends it to aiohttp and then handles the response in the proper way. In my opinion the main problem here is the heavy usage of **kwargs. All params are stored there. I tested that it will break on a wrong param, but a smart IDE like PyCharm won't suggest anything to you. However, I don't know how to fix it. 
You can find the whole project here: https://github.com/inyutin/aiohttp_retry The whole code: import asyncio import logging from aiohttp import ClientSession, ClientResponse from typing import Any, Callable, Optional, Set, Type # Options _RETRY_ATTEMPTS = 3 _RETRY_START_TIMEOUT = 0.1 _RETRY_MAX_TIMEOUT = 30 _RETRY_FACTOR = 2 class _RequestContext: def __init__(self, request: Callable[..., Any], # Request operation, like POST or GET url: str, # Just url retry_attempts: int = _RETRY_ATTEMPTS, # How many times we should retry retry_start_timeout: float = _RETRY_START_TIMEOUT, # Base timeout time, then it exponentially grow retry_max_timeout: float = _RETRY_MAX_TIMEOUT, # Max possible timeout between tries retry_factor: float = _RETRY_FACTOR, # How much we increase timeout each time retry_for_statuses: Optional[Set[int]] = None, # On which statuses we should retry retry_exceptions: Optional[Set[Type]] = None, # On which exceptions we should retry **kwargs: Any ) -> None: self._request = request self._url = url self._retry_attempts = retry_attempts self._retry_start_timeout = retry_start_timeout self._retry_max_timeout = retry_max_timeout self._retry_factor = retry_factor if retry_for_statuses is None: retry_for_statuses = set() self._retry_for_statuses = retry_for_statuses if retry_exceptions is None: retry_exceptions = set() self._retry_exceptions = retry_exceptions self._kwargs = kwargs self._current_attempt = 0 self._response: Optional[ClientResponse] = None def _exponential_timeout(self) -> float: timeout = self._retry_start_timeout * (self._retry_factor ** (self._current_attempt - 1)) return min(timeout, self._retry_max_timeout) def _check_code(self, code: int) -> bool: return 500 <= code <= 599 or code in self._retry_for_statuses async def _do_request(self) -> ClientResponse: try: self._current_attempt += 1 response: ClientResponse = await self._request(self._url, **self._kwargs) code = response.status if self._current_attempt < self._retry_attempts and 
self._check_code(code): retry_wait = self._exponential_timeout() await asyncio.sleep(retry_wait) return await self._do_request() self._response = response return response except Exception as e: retry_wait = self._exponential_timeout() if self._current_attempt < self._retry_attempts: for exc in self._retry_exceptions: if isinstance(e, exc): await asyncio.sleep(retry_wait) return await self._do_request() raise e async def __aenter__(self) -> ClientResponse: return await self._do_request() async def __aexit__(self, exc_type: Any, exc_val: Any, exc_tb: Any) -> None: if self._response is not None: if not self._response.closed: self._response.close() class RetryClient: def __init__(self, logger: Any = None, *args: Any, **kwargs: Any) -> None: self._client = ClientSession(*args, **kwargs) self._closed = False if logger is None: logger = logging.getLogger("aiohttp_retry") self._logger = logger def __del__(self) -> None: if not self._closed: self._logger.warning("Aiohttp retry client was not closed") @staticmethod def _request(request: Callable[..., Any], url: str, **kwargs: Any) -> _RequestContext: return _RequestContext(request, url, **kwargs) def get(self, url: str, **kwargs: Any) -> _RequestContext: return self._request(self._client.get, url, **kwargs) def options(self, url: str, **kwargs: Any) -> _RequestContext: return self._request(self._client.options, url, **kwargs) def head(self, url: str, **kwargs: Any) -> _RequestContext: return self._request(self._client.head, url, **kwargs) def post(self, url: str, **kwargs: Any) -> _RequestContext: return self._request(self._client.post, url, **kwargs) def put(self, url: str, **kwargs: Any) -> _RequestContext: return self._request(self._client.put, url, **kwargs) def patch(self, url: str, **kwargs: Any) -> _RequestContext: return self._request(self._client.patch, url, **kwargs) def delete(self, url: str, **kwargs: Any) -> _RequestContext: return self._request(self._client.delete, url, **kwargs) async def close(self) -> None: 
await self._client.close() self._closed = True async def __aenter__(self) -> 'RetryClient': return self async def __aexit__(self, exc_type: Any, exc_val: Any, exc_tb: Any) -> None: await self.close() Answer: I think RetryClient is a bad name, as it doesn't highlight that you're wrapping a session. Taking lots of retry_* parameters screams to me that you should make a class. For example, popular libraries like requests and urllib3 use a Retry class. You can pass a tuple to isinstance to check against multiple types in one call. You can use a tuple with except to filter to only those exceptions, removing the need for isinstance at all. You can make _exponential_timeout a function that builds a full-fledged iterable. Making the Retry class take an iterable of how long each delay should be allows for easy customization from users. Want a 3-second gap 20 times? Easy: Retry(timeouts=[3] * 20) Users can also make it so it retries infinitely. class InfiniteOnes: def __iter__(self): while True: yield 1 I would prefer to see an explicit while True and iteration over recursion. I would prefer Retry to be designed in a way in which it works with any function that returns a ClientResponse. A part of me that likes to make things as generic as possible would prefer changing statuses to a callback that checks if the return is valid. However, this doesn't make too much sense in a bespoke library. All in all, I think this drastically simplifies _RequestContext. 
Note: untested

import asyncio
import logging
from aiohttp import ClientSession, ClientResponse
from typing import Any, Callable, Iterable, Optional, Set, Tuple, Type

# Options
_RETRY_ATTEMPTS = 3
_RETRY_START_TIMEOUT = 0.1
_RETRY_MAX_TIMEOUT = 30
_RETRY_FACTOR = 2

def exponential(
    attempts: int = _RETRY_ATTEMPTS,
    start: float = _RETRY_START_TIMEOUT,
    maximum: float = _RETRY_MAX_TIMEOUT,
    factor: float = _RETRY_FACTOR,
) -> Iterable[float]:
    return [
        min(maximum, start * (factor ** i))
        for i in range(attempts)
    ]

class Retry:
    def __init__(
        self,
        timeouts: Iterable[float] = exponential(),
        statuses: Optional[Set[int]] = None,
        exceptions: Optional[Tuple[Type, ...]] = None,
    ) -> None:
        self._timeouts = timeouts
        self._statuses = statuses or set()
        self._exceptions = exceptions or ()

    def _is_retry_status(self, code: int) -> bool:
        return 500 <= code <= 599 or code in self._statuses

    async def retry(
        self,
        callback: Callable[..., Any],
        *args: Any,
        **kwargs: Any,
    ) -> ClientResponse:
        timeouts = iter(self._timeouts)
        while True:
            try:
                response = await callback(*args, **kwargs)
                if not self._is_retry_status(response.status):
                    return response
                try:
                    retry_wait = next(timeouts)
                except StopIteration:
                    return response
            except self._exceptions as e:
                try:
                    retry_wait = next(timeouts)
                except StopIteration:
                    raise e from None
            await asyncio.sleep(retry_wait)

class _RequestContext:
    def __init__(
        self,
        request: Callable[..., Any],
        url: str,
        retry: Retry,
        **kwargs: Any
    ) -> None:
        self._request = request
        self._url = url
        self._kwargs = kwargs
        self._retry = retry
        self._response: Optional[ClientResponse] = None

    async def __aenter__(self) -> ClientResponse:
        self._response = await self._retry.retry(self._request, self._url, **self._kwargs)
        return self._response

    async def __aexit__(self, exc_type: Any, exc_val: Any, exc_tb: Any) -> None:
        if self._response is not None:
            if not self._response.closed:
                self._response.close()
{ "domain": "codereview.stackexchange", "id": 37258, "tags": "python, python-3.x" }
Retrieving a csv header
Question: I have two separate calls that return a .csv file. I need to extract a couple of headers from the files depending on which one is called. The headers do not always sit at the same index, and the CourseId name changes to Asset_ID in one call. How can I refactor the code to make it more maintainable? Also, I think the naming is not really good.

public static class CsvHeader
{
    public struct Index
    {
        public string Email { get; set; }
        public string CompletionDate { get; set; }
        public string CourseId { get; set; }
    }

    public static Index GetCsvHeaderIndex(string[] headers)
    {
        string emailIndex = null;
        string completionDateIndex = null;

        var email = headers.FirstOrDefault(_ => _.Contains("Email"));
        var completionDate = headers.FirstOrDefault(_ => _.Contains("Completion"));
        var courseId = headers.FirstOrDefault(_ => _.Contains("CourseID"));

        if (!string.IsNullOrEmpty(email))
            emailIndex = headers.First(_ => _.Contains("Email")).Split('(', ')')[1]; /*3*/

        if (!string.IsNullOrEmpty(completionDate))
            completionDateIndex = headers.First(_ => _.Contains("Completion")).Split('(', ')')[1]; /*30*/

        var courseIdIndex = string.IsNullOrEmpty(courseId)
            ? headers.First(_ => _.Contains("Asset_ID")).Split('(', ')')[1]
            : headers.First(_ => _.Contains("CourseID")).Split('(', ')')[1];

        return new Index
        {
            Email = emailIndex,
            CompletionDate = completionDateIndex,
            CourseId = courseIdIndex
        };
    }
}

Answer: You have some duplicated code here: for instance, if email is not null, you can get the emailIndex by just splitting the email field you already have. The same is true for completionDate and courseId.

This

Email = emailIndex,
CompletionDate = completionDateIndex,
CourseId = courseIdIndex

does not look that good. You should rename the method variables xxxIndex to something better (shown in the code below). I would also like to encourage you to use braces {}, although they are optional and although you are placing the instruction on the same line as the if. The intent just gets clearer.
public static Index GetCsvHeaderIndex(string[] headers)
{
    string email = null;
    string completionDate = null;

    var emailHeader = headers.FirstOrDefault(_ => _.Contains("Email"));
    var completionDateHeader = headers.FirstOrDefault(_ => _.Contains("Completion"));
    var courseIdHeader = headers.FirstOrDefault(_ => _.Contains("CourseID"));

    if (!string.IsNullOrEmpty(emailHeader))
    {
        email = emailHeader.Split('(', ')')[1]; /*3*/
    }

    if (!string.IsNullOrEmpty(completionDateHeader))
    {
        completionDate = completionDateHeader.Split('(', ')')[1]; /*30*/
    }

    var courseId = string.IsNullOrEmpty(courseIdHeader)
        ? headers.First(_ => _.Contains("Asset_ID")).Split('(', ')')[1]
        : courseIdHeader.Split('(', ')')[1];

    return new Index
    {
        Email = email,
        CompletionDate = completionDate,
        CourseId = courseId
    };
}
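The remaining duplication (the repeated Split('(', ')')[1] calls) could be factored into a single lookup helper that tries candidate header names in order. A language-neutral sketch of that idea in Python, using the header shapes from the question (the helper name and sample headers are made up):

```python
def header_index(headers, *names):
    # First header containing any of the given names wins;
    # "Email(3)" -> "3".  Returns None when nothing matches.
    for name in names:
        for header in headers:
            if name in header:
                return header.split('(')[1].rstrip(')')
    return None

headers = ['Email(3)', 'Completion Date(30)', 'Asset_ID(7)']
```

The CourseID/Asset_ID fallback then becomes a single call: header_index(headers, 'CourseID', 'Asset_ID').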
{ "domain": "codereview.stackexchange", "id": 17445, "tags": "c#, csv" }
Why wasn't the Stipa-Caproni plane efficient in its flight?
Question: The Stipa-Caproni was an experimental Italian plane design. Though it has a very peculiar shape, it seems at first glance like it would have a pretty good aerodynamic profile, since its reference area looks rather small. However, its Wikipedia page states:

Unfortunately, the "intubed propeller" design also induced so much aerodynamic drag that the benefits in engine efficiency were cancelled out.

What, then, causes all the drag on this plane?

Answer: I guess Stipa didn't realize that a wimpy-looking two-blade prop was not going to make the air flow do what he hoped. Even modern ducted fans are less efficient than conventional aircraft propellers in cruise conditions, though they are much more efficient at generating thrust at low speeds and hence useful for hovercraft, airships, VTOL applications, etc.

Trying to get more thrust by making the duct tapered is never going to work unless the flow at the throat of the duct becomes choked (and is therefore at Mach 1), which is far beyond the capabilities of the technology Stipa was using. To do that you need an afterburner, not a propeller!

One cause of "all the extra drag" is simply the air flow over the inside surface of the whole of the duct. Air has viscosity. The frontal area of the structure is probably bigger than that of a conventional design as well. The engine alone (inside the duct) has a similar frontal area to the nose of a conventional plane design. The frontal area of the cockpit is also bigger, because the bottom half (containing the pilot's seat etc.) is not directly behind the engine and duct, but on top of it, adding more frontal area.
{ "domain": "physics.stackexchange", "id": 56189, "tags": "fluid-dynamics, aerodynamics, aircraft" }
Complexity of the coset intersection problem
Question: Given the symmetric group $S_n$, two subgroups $G, H\leq S_n$, and $\pi\in S_n$, does $G\pi\cap H=\emptyset$ hold? As far as I know, this is known as the coset intersection problem. I am wondering what the complexity is. In particular, is this problem known to be in coAM? Moreover, if $H$ is restricted to be abelian, what does the complexity become?

Answer: Moderately exponential time, and $\mathsf{coAM}$ (for the opposite of the problem as stated: Coset Intersection is typically considered to have a "yes" answer if the cosets intersect, the opposite of how it's stated in the OQ).

Luks 1999 (free author's copy) gave a $2^{O(n)}$-time algorithm, while Babai (see his 1983 Ph.D. thesis, also Babai-Kantor-Luks FOCS 1983, and a to-appear journal version) gave a $2^{\tilde{O}(\sqrt{n})}$-time algorithm, which remains the best known to date. Since graph isomorphism reduces to quadratic-sized coset intersection, improving this to $2^{\tilde{O}(n^{1/4-\epsilon})}$ would improve the state of the art for graph isomorphism.
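None of the bounds above are needed just to state the problem computationally. For intuition, here is a brute-force check for tiny $n$, with permutations represented as tuples; this is exponential in general and only illustrates the definition (all names are mine):

```python
def compose(p, q):
    # (p o q)[i] = p[q[i]] -- apply q first, then p
    return tuple(p[i] for i in q)

def generate(gens, n):
    # Subgroup of S_n generated by `gens`, via naive closure
    group = {tuple(range(n))}
    frontier = set(group)
    while frontier:
        frontier = {compose(g, h) for g in gens for h in group} - group
        group |= frontier
    return group

def coset_intersects(G, H, pi):
    # True iff the right coset G.pi meets H
    return any(compose(g, pi) in H for g in G)

G = generate([(2, 1, 0)], 3)   # subgroup generated by the transposition (0 2)
H = generate([(1, 0, 2)], 3)   # subgroup generated by the transposition (0 1)
```

For example, with $\pi$ the identity the cosets trivially intersect (both subgroups contain the identity), while for the 3-cycle $\pi = (2, 0, 1)$ the coset $G\pi$ misses $H$ entirely.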
{ "domain": "cstheory.stackexchange", "id": 2883, "tags": "cc.complexity-theory, graph-isomorphism, gr.group-theory" }
RosJava issue 70 fix problem
Question: In the fix located here: http://code.google.com/p/rosjava/issues/detail?id=70

The new Android.xml file doesn't have a name or as attribute, which causes me to generate this error when rosmake'ing android_gingerbread:

BUILD FAILED
/home/craig/ros_workspace/rosjava.android/android_gingerbread/build.xml:4: The following error occurred while executing this line:
can't include build file file:/home/craig/ros_workspace/rosjava_core/rosjava_bootstrap/android.xml, no as attribute has been given and the project tag doesn't specify a name attribute

So, trying to solve my own problem even though I have relatively no knowledge in this area, I plugged in name="android" and tried making it again. This caused me to receive:

BUILD FAILED
/home/craig/ros_workspace/rosjava.android/android_gingerbread/build.xml:4: The following error occurred while executing this line:
/home/craig/ros_workspace/rosjava_core/rosjava_bootstrap/android.xml:64: taskdef class com.android.ant.NewSetupTask cannot be found using the classloader AntClassLoader[]

Sorry to be asking so many questions; I really am trying to get this working on my own. I'm just very inexperienced at this stuff and am having trouble figuring it all out. Thanks.

Originally posted by mouser58907 on ROS Answers with karma: 45 on 2011-11-10
Post score: 0

Answer: Look at the android_tutorial_pubsub/build.xml for an example of how your build.xml should look. Also, make sure you've installed the latest Android SDK and ADT.

Originally posted by damonkohler with karma: 3838 on 2011-11-10
This answer was ACCEPTED on the original site
Post score: 2

Original comments
Comment by mouser58907 on 2011-11-14: Updating and re-downloading files seems to have fixed this issue. I'm getting very close, thanks for your help, Damon.
{ "domain": "robotics.stackexchange", "id": 7258, "tags": "rosjava, android" }
Colcon build error : 0 packages finished
Question: I am using ROS2 Humble on Windows 10, and it is installed successfully. Now I am going through this tutorial for creating a workspace: https://docs.ros.org/en/humble/Tutorials/Beginner-Client-Libraries/Creating-A-Workspace/Creating-A-Workspace.html

But while I am using the “colcon build --merge install” command I face the following error message:

Failed <<< turtlesim [9.41s, exited with code 1]
Summary: 0 packages finished [9.75s]
1 package failed: turtlesim
1 package had stderr output: turtlesim
WNDPROC return value cannot be converted to LRESULT
TypeError: WPARAM is simple, so must be an int object (got NoneType)

You can see the full error message here: however, it creates the “build, install and log” folders in my workspace, and “ros2 run turtlesim turtlesim_node” also opens the turtlesim. It seems that the turtlesim package comes pre-built along with the ROS2 installation, and it is not calling the turtlesim package in my ros2_ws at all. So, when I modify the overlay like this: https://docs.ros.org/en/humble/Tutorials/Beginner-Client-Libraries/Creating-A-Workspace/Creating-A-Workspace.html#modify-the-overlay nothing happens. I have been stuck on this step for about a week now; could anyone help me in this regard? I should mention that I installed “em” and “empty”, and neither of them fixed the problem.

Answer: I solved this problem by uninstalling "Python 3.12, 64-bit version", which had been installed on my system, because I found the CMake error refers to a Python 3.12 on my system and I didn't know why it should have existed there (during ROS2 installation I had installed Python 3.8, not 3.12). So, I called the following command in the command prompt and confirmed that an extra Python version was installed on my system:

> py -0

which returned:

Installed Pythons found by py Launcher for Windows
 -3.8-64 *
 -3.12-64

I decided to uninstall version 3.12 so that ROS2 just uses Python 3.8. Fortunately, it fixed my issue.
{ "domain": "robotics.stackexchange", "id": 38798, "tags": "ros, ros2, ros-humble, colcon, windows" }
Which mechanical device could repeatedly present an ID tag to a card-reader
Question: I'm trying to build a test-automation robotic arm which can repeatedly present an ID tag (such as an RFID or NFC card or fob) to a card reader. I suspect our reader fails either (a) after hundreds of presentations or due to fast presentations, or (b) at a specific moment in the reader duty cycle.

The tag needs to move in a well-controlled manner:

1. Quickly present the card,
2. Pause (mark)
3. Quickly remove the card,
4. Pause (space)
5. Repeat at 1.

I'm calling the present/remove sequence the mark-space ratio for simplicity. The tests I want to perform involve varying (a) the frequency and (b) the mark-space ratio, to (a) stress-test and (b) boundary-test the re-presentation guard times built into the reader to debounce presentations. The guard times are around 400ms, response around 100ms, so I need something that can move in and out of a 5-10cm range quickly and repeat within those sorts of timescales. The distance the card needs to move depends on the reader model, as they have different field ranges. I want to get through the edge of the field quickly to avoid any inconsistencies in testing.

I'm able to do any programming (professional) and simple electromechanical design and build (ex-professional, now hobbyist). I only need to build one; it doesn't have to be particularly robust, but it does need to be fairly accurate with regard to the timings to do the second test.

What I've done so far: I've built one version already using a Raspberry Pi, GPIO, and a stepper motor with an aluminium arm screwed to a wheel. It works, but it's a bit jerky and too slow, even with a 30cm arm to amplify the motion. It will probably do for the repeat test, but it's not time-accurate enough to do the timing tests.

My other design ideas were:

Servo (are these also slow?)
Solenoid (fast, but too limited range? and might cause EM?)
Motor (too uncontrollable, and will require too much mechanical work for me)
Rotating drum (fast, stable, but cannot control mark-space ratio)

I'm not an electro-mechanical design expert, so I'm wondering if I'm missing an electrical device or mechanical design which can do this more easily.

Answer: I'd go for a linear axis. The cheapest version could be from a printer, or you could buy a small belt and two gears. A brushless DC motor would, however, fit better in this scenario. Such systems are easily faster than a normal user and easy to control.
{ "domain": "robotics.stackexchange", "id": 1067, "tags": "robotic-arm, raspberry-pi, stepper-motor, industrial-robot, automation" }
Hydrostatic pressure is equal everywhere, however, the answer I'm getting is incorrect
Question: A hydraulic car lift has a pump piston with radius 0.015 m and a resultant piston with radius 0.120 m. The combined weight of the car and the plunger is 2500 N. Assume that the height of the piston and plunger are the same. What amount of force (in N) is required on the pump piston to stabilize the car?

My try at it: Since the heights of the piston and plunger are the same, the pressure on the car lift and on the resultant piston will be the same. Therefore,

Pressure on the car = Pressure on the resultant piston
=> F/A = F/A
=> 2500 N / 0.015 = x N / 0.120

Calculating this, I got x = 20,000 N; however, it is incorrect. What am I doing wrong? Thanks.

Answer: Here the radius of the pump piston, according to your question, is 0.015 m, so the area of the pump piston is $A = \pi r^2 = \pi (0.015)^2$. If a force of $x$ N is required, the pressure we create on that piston is $P = F/A = x/\big(\pi (0.015)^2\big)$. Hence
$$\frac{x}{\pi (0.015)^2} = \frac{2500}{\pi (0.12)^2},$$
which gives $x = 39.06$ N. (Note that the pressure balance involves the areas, not the radii: you divided by the radii directly, which is why you got 20,000 N.)
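Carrying the arithmetic through (the factors of $\pi$ cancel, so only the squared radius ratio survives), a quick numeric check:

```python
import math

r_pump, r_plunger = 0.015, 0.120   # piston radii in metres
weight = 2500.0                    # N supported by the large piston

# Equal pressure on both pistons: F / (pi r_pump^2) = W / (pi r_plunger^2)
force = weight * (math.pi * r_pump ** 2) / (math.pi * r_plunger ** 2)
```

The ratio 0.015/0.120 = 1/8, so the required force is 2500/64 = 39.0625 N, matching the answer.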
{ "domain": "physics.stackexchange", "id": 17530, "tags": "homework-and-exercises, fluid-dynamics" }
Why does the classical Noether charge become the quantum symmetry generator?
Question: It is often said that the classical charge $Q$ becomes the quantum generator $X$ after quantization. Indeed this is certainly the case for simple examples of energy and momentum. But why should this be the case mathematically? For clarity let's assume we are doing canonical quantization, so that Poisson brackets become commutators. I assume that the reason has something to do with the relationship between classical Hamiltonian mechanics and Schrodinger's equation. Perhaps there's a simple formulation of Noether's theorem in the classical Hamiltonian setting which makes the quantum analogy precisely clear? Any hints or references would be much appreciated! Mathematical Background In classical mechanics a continuous transformation of the Lagrangian which leaves the action invariant is called a symmetry. It yields a conserved charge $Q$ according to Noether's theorem. $Q$ remains unchanged throughout the motion of the system. In quantum mechanics a continuous transformation is effected through a representation of a Lie group $G$ on a Hilbert space of states. We insist that this representation is unitary or antiunitary so that probabilities are conserved. A continuous transformation which preserves solutions of Schrodinger's equation is called a symmetry. It is easy to prove that this is equivalent to $[U,H] = 0$ for all $U$ representing the transformation, where $H$ is the Hamiltonian operator. We can equivalently view a continuous transformation as the conjugation action of a unitary operator on the space of Hermitian observables of the theory $$A \mapsto UAU^\dagger = g.A$$ where $g \in G$. This immediately yields a representation of the Lie algebra on the space of observables $$A \mapsto [X,A] = \delta A$$ $$\textrm{where}\ \ X \in \mathfrak{g}\ \ \textrm{and} \ \ e^{iX} = U \ \ \textrm{and} \ \ e^{i\delta A} = g.A$$ $X$ is typically called a generator. 
Clearly if $U$ describes a symmetry then $X$ will be a conserved quantity in the time evolution of the quantum system.

Edit: I've had a thought that maybe it's related to the 'Hamiltonian vector fields' for functions on a symplectic manifold. Presumably after quantization these can be associated to the Lie algebra generators, acting on wavefunctions on the manifold. Does this sound right to anyone?

Answer: The canonical quantization after Dirac should fulfill the following axioms:

Q1: The map $f \to \hat f$ that assigns an operator to every function on the phase space is linear, and the constant 1-functions get mapped to the 1-operator.
Q2: The Poisson bracket maps to the commutator decorated with $\hbar$.
Q3: A complete system of functions in involution maps to a complete system of commuting operators.

It is the last condition which ensures that $G$ is a symmetry on the quantum side (the assignment $f \to \hat f$ needs to be an irreducible representation of the symmetry generators). But the no-go theorems of Groenewold and van Hove show that a quantization satisfying Q1-Q3 for all observables is not possible. The two major solutions are:

Weaken Q2 and only require that it holds up to first order in $\hbar$; this leads to deformation quantization. On the other hand, geometric quantization modifies Q3 in the sense that it should hold only for some reasonable subalgebra of functions (e.g. one which contains momentum etc.).
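As a toy numerical illustration of the question's "commuting generator implies conserved quantity" statement, take a two-level system with $H = \sigma_z$: $\langle\sigma_z\rangle$ stays fixed under time evolution (since $[\sigma_z, H] = 0$) while $\langle\sigma_x\rangle$ does not. The specific state and units below are my own choices:

```python
import cmath

def evolve(psi, t):
    # U(t) = exp(-i H t) with H = sigma_z = diag(1, -1) acts diagonally
    a, b = psi
    return (cmath.exp(-1j * t) * a, cmath.exp(1j * t) * b)

def expect_sz(psi):
    # <sigma_z> = |a|^2 - |b|^2
    a, b = psi
    return abs(a) ** 2 - abs(b) ** 2

def expect_sx(psi):
    # <sigma_x> = 2 Re(a* b)
    a, b = psi
    return (a.conjugate() * b + b.conjugate() * a).real

psi0 = (0.7 ** 0.5, 0.3 ** 0.5)   # an arbitrary normalized state
```

Here $\langle\sigma_z\rangle = 0.4$ at every time, while $\langle\sigma_x\rangle$ oscillates as $2\sqrt{0.21}\cos 2t$.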
{ "domain": "physics.stackexchange", "id": 9160, "tags": "quantum-mechanics, quantum-field-theory, classical-mechanics, symmetry, noethers-theorem" }
Tkinter Minesweeper Clone V2
Question: Intro Having spent quite a bit of time learning Python, I thought it was time to work on something. However, an Google search for "python projects 2022" came up with projects I had already completed. Looking through my projects, however, I came across this. (link). I decided to test my abilities and rewrote the code. Please review, thanks! What I am looking for Any best practices or performance flaws in my code. Proper separation of code into functions. Proper usage of type hinting in my functions. Code formating, spacing, and logical breaks in my code. A better emoji for my mine: currently using (maybe switch to images?) The code import random import tkinter as tk import tkinter.messagebox as msgbox from typing import List, Literal, Tuple # TODO # improve astethics and change colors # add colors of numbers class Main: def __init__(self): self.status_bar_on = True self.game_started = False self.gameover = False self.cols = 30 self.rows = 16 self.flags = 80 self.original_flags = self.flags self.time = 0 self.widget_board = [[0 for _ in range(self.cols)] for _ in range(self.rows)] self.setupTkinter() self.generateBoard() def run(self) -> Literal[None]: '''Run the GUI application: call tk.Tk.mainloop.''' self.window.after(200, self.updateTimer) self.window.mainloop() def gameOver(self, won: bool, bad: bool = False) -> Literal[None]: ''' GUI game over function. Called by Main.checkWon. Shows a popup message and also displays all mines: self.showBombs. ''' self.showBombs() if bad: msgbox.showinfo(title = 'Minesweeper', message = f'You lost!') self.gameover = True return if won: msgbox.showinfo(title = 'Minesweeper', message = f'You won in {self.timer} seconds!') self.gameover = True else: self.window.after(100, self.gameOver(won, bad = True)) def checkWon(self) -> Literal[None]: ''' Checks the board for a wins. Uses the following formula: (amount of opened cells) + (mines) == (total tiles) to check. Calls Main.gameOver when a win is detected. 
''' opened_cells = 0 for widget in self.board_frame.winfo_children(): if widget.cget('text').isnumeric(): opened_cells += 1 if (self.rows * self.cols) == opened_cells + self.original_flags: self.gameOver(won = True) def showBombs(self) -> Literal[None]: ''' Iterates through the game frame's widgets: (winfo_children) and finds widgets that correspond to mines. When one is found, if it was correctly marked with a flag, it is marked with a green background. Otherwise, with a red background. ''' for row_index, row in enumerate(self.game_board): for item_index, item in enumerate(row): if item == '\N{BOMB}': widget = self.widget_board[row_index][item_index] if widget.cget('text') == ' ': widget.config(text = '\N{BOMB}', bg = 'red') elif widget.cget('text') == '\u2691': widget.config(bg = 'dark green') for widget in self.board_frame.winfo_children(): if widget.cget('text') == '\u2691': if widget.cget('bg') != 'dark green': widget.config(bg = 'red') def setupTkinter(self) -> Literal[None]: ''' An encapsulating function with Tkinter setup and layouting. Generates the main window along with the status widgets, and frames. 
''' self.window = tk.Tk() self.window.geometry('1125x590') self.window.title('Minesweeper') self.window.config(bg = 'blue') self.status_bar = tk.Frame(self.window, height = 50) self.status_bar.config(bg = 'blue') self.board_frame = tk.Frame(self.window) self.board_frame.config(bg = 'black') self.time_variable = tk.StringVar(self.status_bar, value = '⌛ 0') self.flags_variable = tk.StringVar(self.status_bar, value = f' {self.flags}') self.time_label = tk.Label(self.status_bar, textvariable = self.time_variable, font = ('Courier', 30), bg = 'blue') self.flags_label = tk.Label(self.status_bar, textvariable = self.flags_variable, font = ('Courier', 30), bg = 'blue') self.generateBoardWidgets(self.board_frame) if self.status_bar_on: self.status_bar.pack(fill = tk.BOTH, expand = tk.NO, pady = 5) self.board_frame.pack(fill = tk.BOTH, expand = tk.YES) self.time_label.pack(side = tk.LEFT, padx = 5, expand = tk.YES) self.flags_label.pack(side = tk.RIGHT, padx = 5, expand = tk.YES) def generateBoardWidgets(self, board_frame: tk.Frame) -> Literal[None]: ''' Generates the tk.Label widgets in the board frame (board_frame). Iterates through required rows and columns to generate tk.Label widgets. Also binds the three buttons to the respective functions. ''' for row in range(self.rows): for col in range(self.cols): widget = tk.Label( board_frame, text = ' ', fg = 'white', width = 1, font = ('Courier', 25), bg = 'gray') widget.bind('<Button-1>', self.handleLeftClick) widget.bind('<Button-2>', self.handleRightClick) widget.bind('<Button-3>', self.handleMiddleClick) widget.grid(row = row, column = col, padx = 2, pady = 2, sticky = tk.NSEW) tk.Grid.rowconfigure(board_frame, row, weight=1) tk.Grid.columnconfigure(board_frame, col, weight=1) self.widget_board[row][col] = widget def floodfill(self, location, first = False) -> Literal[None]: ''' Flood fills the tiles where an 0 is clicked on. This opens all connected groups of zero, and a border around this. 
Uses the neighbors from Main.getNeighbors and calls openNeighbors to actually open the tiles. ''' if first: self.stack = [] neighbors = self.getNeighbors(*location, [self.rows, self.cols]) self.openNeighbors(*location) for neighbor_loc in neighbors: if neighbor_loc in self.stack: continue item = self.game_board[neighbor_loc[0]][neighbor_loc[1]] if item == 0: self.stack.append(neighbor_loc) self.floodfill(neighbor_loc) def openNeighbors(self, y, x) -> Literal[None]: ''' Opens the neighbors of an tile. Used by the middle click function (Main.handleMiddleClick) and the floodfill (Main.floodfill) ''' neighbors = self.getNeighbors(y, x, [self.rows, self.cols]) for neighbor_location in neighbors: item = self.game_board[neighbor_location[0]][neighbor_location[1]] widget = self.widget_board[neighbor_location[0]][neighbor_location[1]] if item == '\N{BOMB}' and widget.cget('text') != '\u2691': self.gameOver(won = False) if widget.cget('text') != ' ': continue widget.config(text = str(item)) if item == 0: self.floodfill(neighbor_location, first = True) def handleLeftClick(self, event = None) -> Literal[None]: ''' Tkinter event handling for left mouse button. Will regenerate boards on the first move if the first move is on a mine. Also floodfills the first move. ''' if self.gameover or event.widget.cget('text') != ' ': return widget = event.widget row, col = widget.grid_info()['row'], widget.grid_info()['column'] game_board_item = self.game_board[row][col] if not self.game_started: while game_board_item != 0: self.generateBoard() game_board_item = self.game_board[row][col] self.floodfill([row, col], first = True) widget.config(text = str(game_board_item)) if type(game_board_item) != int: self.gameOver(won = False) self.game_started = True self.checkWon() def handleMiddleClick(self, event = None) -> Literal[None]: ''' Tkinter event handling for middle click (mouse). Calls Main.openNeighbors. 
''' if self.gameover: return widget = event.widget row, col = widget.grid_info()['row'], widget.grid_info()['column'] self.openNeighbors(row, col) def handleRightClick(self, event = None) -> Literal[None]: ''' Tkinter event handling for right mouse button. Toggles an flag on the tile if it is not opened. ''' if self.gameover or event.widget.cget('text') not in (' ', '\u2691') or self.flags <= 0: return widget = event.widget if event.widget.cget('text') == '\u2691': widget.config(text = ' ') self.flags += 1 else: widget.config(text = '\u2691') self.flags -= 1 self.flags_variable.set(f' {self.flags}') def updateTimer(self) -> Literal[None]: ''' An function that calls itself to update an timer. Stops once the game is over, and starts on the first click. ''' if self.gameover: return if self.game_started: self.time += 1 self.time_variable.set(f'⌛ {self.time}') self.window.after(1000, self.updateTimer) def generateBoard(self) -> Literal[None]: ''' Generates the Minesweeper game board. Places mines, and updates the numbers of neighbors by +1. Can be called multiple times to regenerate boards. ''' self.game_board = [[0 for _ in range(self.cols)] for _ in range(self.rows)] all_locations = [] for x in range(16): for y in range(30): all_locations.append((x, y)) sample = random.sample(all_locations, self.flags) for location in sample: self.setBomb(*location) def setBomb(self, y, x) -> Literal[None]: ''' Sets a spot on the game board to a bomb and increments the neighboring values. ''' self.game_board[y][x] = '\N{BOMB}' neighbors = self.getNeighbors(y, x, [self.rows, self.cols]) for neighbor_location in neighbors: item = self.game_board[neighbor_location[0]][neighbor_location[1]] if type(item) == int: self.game_board[neighbor_location[0]][neighbor_location[1]] += 1 def getNeighbors(self, y, x, board_shape) -> List[Tuple[int, int]]: ''' Returns a list of the indexes of neighboring tiles. 
(diagonals included) ''' neighbors = list() neighbors.append((y, x - 1)) neighbors.append((y, x + 1)) neighbors.append((y - 1, x)) neighbors.append((y - 1, x - 1)) neighbors.append((y - 1, x + 1)) neighbors.append((y + 1, x)) neighbors.append((y + 1, x + 1)) neighbors.append((y + 1, x - 1)) neighbors = [i for i in neighbors if i[0] in range(board_shape[0]) and i[1] in range(board_shape[1])] return neighbors if __name__ == '__main__': app = Main() app.run() Answer: The choice of your function boundaries is actually pretty reasonable, but the same is not true of your class bounds. You've made a classic god class. Separate this into a business logic class that only deals with the inner representation of the board and game state, and a presentation class that only deals with tkinter. All of your member variables (and their types, when ambiguous) should be first set in the constructor. Literal[None] is an interesting choice. I'd even say it's more technically true than what people usually do, which is -> None. If you wanted to be pedantic, import NoneType from types; but this is not necessary and you should instead simply write -> None. Formal validators like mypy will understand it. __init__ returns -> None. Use lower_snake_case for your method names. It's more typical to triple-double-quote """ rather than triple-single-quote ''' for docstrings. You have a mix of \u and literal Unicode emojis sprinkled through your strings. Some of them render on my IDE font and some don't (like the triangular flag), so you're better off being consistent and declaring all Unicode constants like BLACK_FLAG = '\u2691' In getNeighbors, delete all of your append()s and write one list literal.
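Picking up the reviewer's last point: beyond a list literal, the eight offsets and the bounds filtering collapse into a single comprehension. A sketch with the same semantics as getNeighbors (minus self; the free-function form is mine):

```python
def get_neighbors(y, x, rows, cols):
    # All eight surrounding cells (diagonals included), kept in-bounds
    return [
        (y + dy, x + dx)
        for dy in (-1, 0, 1)
        for dx in (-1, 0, 1)
        if (dy, dx) != (0, 0)
        and 0 <= y + dy < rows
        and 0 <= x + dx < cols
    ]
```

Direct range comparisons (0 <= y + dy < rows) also avoid building a range object per candidate, as the original membership test does.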
{ "domain": "codereview.stackexchange", "id": 43859, "tags": "python, python-3.x, game, tkinter, minesweeper" }
How to work with kinect frames in RGB color format using cvbridge
Question: I am trying to process color kinect frames following the tutorial from the link working with cv bridge. I have a training image juice_color.png which I can visualize as RGB. I specified the encoding as cv_img.encoding = "rgb8"; Going by the same logic, I used rgb8 for the incoming kinect frames, but the display window imshow( WINDOW2, cv_ptr_frames->image ); shows the kinect frames in gray scale. How do I work with RGB kinect frames? How do I prevent the color format change to gray scale? Thank you.

static const std::string OPENCV_WINDOW1 = "Good Matches";
static const std::string WINDOW1 = "Training Image ";
static const std::string WINDOW2 = "Test Frames ";

class ImageConverter
{
  ros::NodeHandle nh_;
  image_transport::ImageTransport it_;
  image_transport::Subscriber image_sub_;
  image_transport::Publisher image_pub_;

public:
  ImageConverter()
    : it_(nh_)
  {
    // Subscribe to input video feed and publish output video feed
    image_sub_ = it_.subscribe("/camera/rgb/image_rect", 1,
      &ImageConverter::imageCb, this);
    image_pub_ = it_.advertise("/image_converter/output_video", 1);
    cv::namedWindow(OPENCV_WINDOW1);
  }

  ~ImageConverter()
  {
    cv::destroyWindow(OPENCV_WINDOW1);
  }

  void imageCb(const sensor_msgs::ImageConstPtr& msg)
  {
    cv_bridge::CvImage cv_img;           // training image
    cv_bridge::CvImage cv_img_proc;
    cv_bridge::CvImagePtr cv_ptr_frames; // kinect frames

    cv::Mat object = imread( "juice_color.png" );
    imshow( WINDOW1, object ); // Displayed in RGB color

    cv_img.header.stamp = ros::Time::now();
    cv_img.header.frame_id = msg->header.frame_id;
    cv_img.encoding = "rgb8";
    cv_img.encoding = sensor_msgs::image_encodings::RGB8;
    cv_img.image = object;
    sensor_msgs::Image im;
    cv_img.toImageMsg(im);

    try
    {
      cv_ptr_frames = cv_bridge::toCvCopy(msg, sensor_msgs::image_encodings::RGB8);
    }
    catch (cv_bridge::Exception& e)
    {
      ROS_ERROR("cv_bridge exception: %s", e.what());
      return;
    }

    imshow( WINDOW2, cv_ptr_frames->image );
  }

Originally posted by SM on ROS Answers with karma: 15 on 2013-11-07
Post score: 0

Answer: Typically OpenCV highgui functions (imread, imwrite, imshow etc.) use the BGR8 color model, so your image is probably in BGR8, not RGB8. Pushing it through cv_bridge with the wrong color model may cause trouble, so try BGR8.

Originally posted by Wolf with karma: 7555 on 2014-01-17
This answer was ACCEPTED on the original site
Post score: 1
{ "domain": "robotics.stackexchange", "id": 16087, "tags": "kinect, opencv, cv-bridge" }
The total time required for a pipeline to execute
Question: The book says that the total time required for a pipeline with k stages to execute n instructions is as follows:

$T_{k,n} =[pqnk+(1-pq)(k+n-1)] \tau$

p is the probability of encountering a branch instruction. q is the probability that execution of a branch instruction I causes a jump to a nonconsecutive address. (Each jump requires the pipeline to be cleared.)

However, I cannot understand why this makes sense. The formula indicates that "if there is a branch instruction and it is taken, the number of stage-cycles is nk, and in the remaining cases, it is (k+n-1)". I can understand the second case, but why is the count nk in the first case? I think that the result of nk cycles never occurs unless every instruction is a branch instruction, so that the pipeline is cleared every time each instruction is executed. Instructions after a branch instruction which is taken can be executed in the pipeline, so it must be less than nk. What am I missing?

Answer: The exact time could be obtained as $\tau\sum p_i\,c_i$ where $p_i$ is the probability of taking $c_i$ cycles. Let's admit that by virtue of linearity, it ends up being equivalent to $\tau\sum p_i\,C_i$ with two terms, $p_0=pq$, thus $p_1=(1-pq)$. The two terms $C_0$ and $C_1$ must make the formula correct for the extreme cases $pq=1$ and $pq=0$. The $C_0=nk$ term (the question's first case) is the number of cycles it would take to execute the $n$ instructions with each requiring $k$ cycles (because each jumps to a non-consecutive location and requires refilling the $k$ stages). The $C_1=k+n-1$ term is the number of cycles it would take for linear code (either because there is no branch instruction, or none jumps to a non-consecutive address). The $n$ term is such that each additional instruction adds one cycle, and the other terms are such that for $n=1$, the outcome is the $k$ cycles necessary to fill the $k$ stages of pipeline.
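The two extreme cases in the answer are easy to check numerically; a small sketch of the cycle count $T_{k,n}/\tau$ (the example numbers are arbitrary):

```python
def cycles(k, n, p, q):
    # p*q*n*k + (1 - p*q)*(k + n - 1): expected cycle count, tau factored out
    return p * q * n * k + (1 - p * q) * (k + n - 1)

linear = cycles(5, 100, 0.0, 0.0)   # no taken branches: k + n - 1
worst = cycles(5, 100, 1.0, 1.0)    # every instruction flushes: n * k
mixed = cycles(5, 100, 0.2, 0.5)    # somewhere in between
```

With $k=5$ and $n=100$, linear code takes $5 + 100 - 1 = 104$ cycles, the all-flush extreme takes $100 \times 5 = 500$, and intermediate $pq$ interpolates linearly between the two.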
{ "domain": "cs.stackexchange", "id": 11538, "tags": "cpu-pipelines" }
How to apply multiple filters in one statement
Question: While trying to grasp the potential of the collection features in F#, I encountered this one: let needSpecialDocument country = (not country.IsEea) && (not country.IsInSchengen) // could also be: not (country.IsEea || country.IsInSchengen) let wantToMoveTo country = country.Name = "CC" let potentialDestinations = countries |> Seq.filter needSpecialDocument |> Seq.filter wantToMoveTo Is there a way to combine the two filters into one? I managed the one below, but it feels like there is a more fsharpish way to do it. |> Seq.filter (fun c -> (needSpecialDocument c) && (wantToMoveTo c)) Test data: type Country = {Name : string; IsEea : bool; IsInSchengen : bool} let uk = {Name = "UK"; IsEea = false; IsInSchengen = false} let ir = {Name = "IR"; IsEea = true; IsInSchengen = false} let fr = {Name = "FR"; IsEea = true; IsInSchengen = true} let ch = {Name = "CH"; IsEea = false; IsInSchengen = true} let cc = {Name = "CC"; IsEea = false; IsInSchengen = false} let countries = [uk;ir;fr;ch;cc] Answer: Nope, that's pretty much the way. If you really, really want to make this shorter and less cluttered, and if you face similar situations multiple times in your codebase, you could make yourself a special operator for combining predicates: let (.&&.) f g c = f c && g c Then you can use this operator in your filter: |> Seq.filter (needSpecialDocument .&&. wantToMoveTo) But based on my own experience I wouldn't advise this. You make the program slicker and cooler looking, but you're losing some readability. Now whoever reads your program will have to look up the meaning of .&&.. Programs are read more often than they are written.
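For readers coming from other languages: the combinator in the answer is not F#-specific. Here is the same idea sketched in Python (the field and function names are mine, mirroring the test data above):

```python
def all_of(*preds):
    """Combine unary predicates with logical AND -- the .&&. operator above."""
    return lambda x: all(p(x) for p in preds)

countries = [
    {"name": "UK", "eea": False, "schengen": False},
    {"name": "FR", "eea": True,  "schengen": True},
    {"name": "CC", "eea": False, "schengen": False},
]

def need_special_document(c):
    return not (c["eea"] or c["schengen"])

def want_to_move_to(c):
    return c["name"] == "CC"

destinations = list(filter(all_of(need_special_document, want_to_move_to), countries))
```

The same readability trade-off applies here: whoever reads this code must first look up what `all_of` means.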
{ "domain": "codereview.stackexchange", "id": 40519, "tags": "f#" }
Minimum number of edges to add such that there are no bridges on a tree
Question: Any edge of a tree is a bridge. What is the minimum number of edges that I will need to add so that there are no more bridges in the tree? I have seen a solution on the internet where the answer is $\frac{|V|}{2}$, where $|V|$ is the number of vertices in the tree. How can I prove it? Answer: Let me describe the following algorithm. The basic idea is to add edges such that for any pair of vertices $v_i$ and $v_j$ there are at least two different simple paths between $v_i$ and $v_j$. 1. Initially all edges are unmarked. 2. Select a simple path $v_i\dots v_j$ containing at least two unmarked edges such that $(v_i,v_j) \notin E$ (vertices $v_i$ and $v_j$ are not adjacent). If there is no path containing two unmarked edges, then select a path containing one unmarked edge (this means we are done). 3. Connect $v_i$ and $v_j$ (thereby creating a cycle). 4. Mark all edges of the (newly created) cycle $v_i\dots v_j$. 5. If there is an unmarked edge, go to step 2. 6. Halt. Claim 1: The algorithm adds at most $\frac{|V|}{2}$ edges. Proof: At step 2 we select a path whose length is at least $2$ (i.e., it has at least two edges) and that contains at least two unmarked edges. Such a path exists as long as we have two unmarked edges, since there is always a simple path between any two vertices $v_i$ and $v_j$ in a tree. So, at each execution of step 2 we decrease the number of unmarked edges by at least $2$. Since the tree initially has $|V|-1$ unmarked edges, the algorithm adds at most $\big\lceil{\frac{|V|-1}{2}}\big\rceil$ edges. But $\big\lceil{\frac{|V|-1}{2}}\big\rceil \leq \big\lfloor\frac{|V|}{2}\big\rfloor \leq \frac{|V|}{2}$. Claim 2: The resulting graph created by the algorithm has no bridges. Proof: Consider a partition $(V',V- V')$. Let $v_i \in V'$ and $v_j \in V-V'$ be such that $(v_i, v_j)$ is an edge of the initial tree. Such an edge obviously exists since the tree is connected.
At some point the algorithm selects a path containing the edge $(v_i, v_j)$ and creates a simple cycle which includes the edge $(v_i, v_j)$. So there are two different paths between vertices $v_i$ and $v_j$, and hence there are at least two edges connecting the partitions $V'$ and $V-V'$. This algorithm does not compute the optimal number of edges for all possible input instances. For example, for a tree which is a path it is enough to add a single edge connecting the two endpoints to transform the tree into a bridgeless graph. My goal is to establish the least upper bound on the number of edges one must add to a tree to transform it into a bridgeless graph. A star-like tree with an even number of vertices is a worst case, in which we need exactly $\frac{|V|}{2}$ edges.
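The claims are easy to check mechanically. A small Python sketch (bridge detection via the standard DFS low-link technique; the path and star examples are the ones discussed above):

```python
from collections import defaultdict

def find_bridges(n, edges):
    """Return the bridges of an undirected graph using one DFS (low-link)."""
    adj = defaultdict(list)
    for i, (u, v) in enumerate(edges):
        adj[u].append((v, i))
        adj[v].append((u, i))
    disc, low = [0] * n, [0] * n
    seen = [False] * n
    timer = [1]
    bridges = []

    def dfs(u, parent_edge):
        seen[u] = True
        disc[u] = low[u] = timer[0]
        timer[0] += 1
        for v, ei in adj[u]:
            if ei == parent_edge:      # don't walk back along the tree edge
                continue
            if seen[v]:                # back edge: can reach an ancestor
                low[u] = min(low[u], disc[v])
            else:
                dfs(v, ei)
                low[u] = min(low[u], low[v])
                if low[v] > disc[u]:   # v's subtree cannot bypass (u, v)
                    bridges.append((u, v))

    for s in range(n):
        if not seen[s]:
            dfs(s, -1)
    return bridges

# A path on 5 vertices: every edge is a bridge, yet one added edge suffices.
path = [(0, 1), (1, 2), (2, 3), (3, 4)]
path_fixed = find_bridges(5, path + [(0, 4)])

# A star on 6 vertices is a worst case: |V|/2 = 3 added edges are needed.
star = [(0, i) for i in range(1, 6)]
added = [(1, 2), (3, 4), (5, 1)]
star_fixed = find_bridges(6, star + added)
```

The path example shows why $\frac{|V|}{2}$ is only an upper bound, and the star example shows the bound is attained.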
{ "domain": "cs.stackexchange", "id": 9921, "tags": "graphs, trees" }
What types of oil reservoirs are applicable to radio/microwave heating and advantageous to SAGD?
Question: I am doing a project looking into the advantages of radio/microwave heating of oil reservoirs. I've seen research indicating that RF/microwave heating can be used for environments such as shallow, tight, high-permeability, or fractured zones. What are the disadvantages of applying techniques such as SAGD in these zones? Answer: Heating oil makes it less viscous, which makes it flow more easily and thus easier to pump. RF/microwave heating is an easy and relatively safe way of reducing the viscosity of the oil in a reservoir. Additionally, the greater the permeability of a rock mass, the more easily a fluid will flow through it. A rock mass with large pore spaces, such as sandstone, will be more permeable. For rock with small/tight pore spaces the permeability can be increased by fracturing the rock mass. SAGD (steam-assisted gravity drainage) is another way to reduce the viscosity of the oil. The effectiveness of SAGD relies on how precisely the horizontal steam holes are drilled relative to each other. Rock masses have varying properties, and there may be softer and harder portions within a rock mass. This can affect the path of the drill bit and drill steel when the holes are drilled. Consequently, the holes may not always be exactly where they are required for efficient SAGD recovery of the oil, and some parts of the reservoir may be cooler or hotter than needed. Applying additional heat via RF/microwave heating will increase the likelihood of more of the oil being heated and having its viscosity reduced, thus potentially increasing the recovery of oil from the reservoir.
{ "domain": "engineering.stackexchange", "id": 1732, "tags": "geotechnical-engineering, rf-electronics, petroleum-engineering" }
Use of beta-2 microglobulin staining to assess frozen tissue sample integrity
Question: In a Russian paper I'm reading, the authors use staining for beta-2 microglobulin (B2M) to make sure that a variety of frozen tissue samples still express antigens on their surface, so that these samples could be used to assess the activity of an antibody they are researching. The authors write that "since virtually each cell expresses B2M on its surface, this assay ensures that other antigens have remained on the surface of the examined cells as well". Is there a special term for this kind of check? Is this anti-B2M staining a routine procedure, and are there papers dedicated to this procedure? I haven't found relevant papers. Answer: I'd say it is not common practice to look for B2M, although I guess there will be heavy users who would disagree. Without seeing the paper I cannot be sure, but I guess they are using B2M as a housekeeping gene, that is, a molecule whose presence and quantity are largely unregulated in living cells, and which is commonly used in labs as a reference. Biological samples vary greatly in their content. It is hard to tell if a molecule is made or destroyed in response to a treatment by merely measuring it and finding more or less of it. But if the ratio of the aforementioned molecule to an unregulated molecule goes up, you can be more sure it responded to the treatment. That is called an internal control. You may be familiar with "normalization to actin" or to GAPDH in immunoblot and qRT-PCR. The use you are describing diverges from the typical use of housekeeping genes in a lab, and is not optimal for what they were trying to show. Since it's there in pretty much any tissue measured by immunoblot, B2M is expected to be present in any tissue assayed by microscopy. If a tissue slide lacks B2M, it is likely spoilt. But the presence of B2M doesn't make it 100% certain that other, scarcer, more unstable molecules have been preserved as well.
It would be ideal if they looked for the specific molecule they were studying; but if many of their samples were supposedly devoid of the latter, they had to replace the ideal with something more practical. Note that I am using "molecule" in an ambiguous manner. With antigens in the referenced paper, it's obvious B2M is used as a reference protein. But housekeeping genes, including B2M, are also used as internal control for mRNA measurements ("gene expression").
{ "domain": "biology.stackexchange", "id": 5883, "tags": "literature" }
Turtlebot calibration returns absolute parameters not multipliers?
Question: The tutorial says that the results of calibration by "roslaunch turtlebot_calibration calibrate.launch" are to be used to multiply the current parameters. But in my experience with my 4 turtlebots, multiplying makes things worse. Instead, replacing the parameters with the returned values tends to give better results. Has there been a change here, or am I doing something wrong? Thank you. Originally posted by 130s on ROS Answers with karma: 10937 on 2011-11-28 Post score: 0 Answer: Make sure that you restart your turtlebot_node between runs or use dynamic reconfigure's reconfigure_gui to change the values of the parameters in the node before running. Otherwise they will not be reread by the driver. The default values are close to 1, so the corrections are close to the absolute values when you start from the default parameters. Originally posted by tfoote with karma: 58457 on 2011-11-28 This answer was ACCEPTED on the original site Post score: 1 Original comments Comment by Nick Armstrong-Crews on 2011-12-27: After changing the params (with dynamic reconfigure or the "rosparam set" command-line tool), you can "rosnode kill turtlebot_node" and it will respawn automatically with the new params. This way, no reboot is needed. Comment by 130s on 2011-11-29: Using dynamic reconfigure, multiplying the current values by the result values improves calibration. I also updated the wiki page to clarify that rebooting the node is needed. Thanks!
{ "domain": "robotics.stackexchange", "id": 7444, "tags": "ros, turtlebot, turtlebot-calibration" }
Dynamic array stack and bag implementation
Question: I have methods for a stack and bag using a dynamic array in C. As far as I can tell everything is implemented correctly, but I'm wondering what I can do to improve this code. dynArray.h /* dynArr.h : Dynamic Array implementation. */ #ifndef DYNAMIC_ARRAY_INCLUDED #define DYNAMIC_ARRAY_INCLUDED 1 # ifndef TYPE # define TYPE int # define TYPE_SIZE sizeof(int) # endif # ifndef LT # define LT(A, B) ((A) < (B)) # endif # ifndef EQ # define EQ(A, B) ((A) == (B)) # endif typedef struct DynArr DynArr; /* Dynamic Array Functions */ DynArr *createDynArr(int cap); void deleteDynArr(DynArr *v); DynArr* newDynArr(int cap); int sizeDynArr(DynArr *v); void addDynArr(DynArr *v, TYPE val); TYPE getDynArr(DynArr *v, int pos); void putDynArr(DynArr *v, int pos, TYPE val); void swapDynArr(DynArr *v, int i, int j); void removeAtDynArr(DynArr *v, int idx); /* Stack interface. */ int isEmptyDynArr(DynArr *v); void pushDynArr(DynArr *v, TYPE val); TYPE topDynArr(DynArr *v); void popDynArr(DynArr *v); /* Bag Interface */ int containsDynArr(DynArr *v, TYPE val); void removeDynArr(DynArr *v, TYPE val); #endif dynamicArray.c #include <assert.h> #include <stdlib.h> #include "dynArray.h" struct DynArr { TYPE *data; /* pointer to the data array */ int size; /* Number of elements in the array */ int capacity; /* capacity of the array */ }; /* ************************************************************************ Dynamic Array Functions ************************************************************************ */ void initDynArr(DynArr *v, int capacity) { assert(capacity > 0); assert(v!= 0); v->data = (TYPE *) malloc(sizeof(TYPE) * capacity); assert(v->data != 0); v->size = 0; v->capacity = capacity; } DynArr* newDynArr(int cap) { assert(cap > 0); DynArr *r = (DynArr *)malloc(sizeof( DynArr)); assert(r != 0); initDynArr(r,cap); return r; } void freeDynArr(DynArr *v) { if(v->data != 0) { free(v->data); /* free the space on the heap */ v->data = 0; /* make it point to null */ } v->size =
0; v->capacity = 0; } void deleteDynArr(DynArr *v) { freeDynArr(v); free(v); } void _dynArrSetCapacity(DynArr *v, int newCap) { DynArr *temp; temp = newDynArr(newCap); for(int i = 0; i < v->size; i++) { temp->data[i] = v->data[i]; } v = temp; } int sizeDynArr(DynArr *v) { return v->size; } void addDynArr(DynArr *v, TYPE val) { if(v != NULL) { if(v->size == v->capacity) { _dynArrSetCapacity(v, v->capacity*2); } v->data[v->size+1] = val; v->size++; } } TYPE getDynArr(DynArr *v, int pos) { TYPE value; if(v != NULL && v->size != 0 && v->size >= pos) { value = v->data[pos]; } return value; } void putDynArr(DynArr *v, int pos, TYPE val) { if(v != NULL && v->size != 0 && v->size >= pos) { v->data[pos] = val; } } void swapDynArr(DynArr *v, int i, int j) { if(v != NULL && v->size != 0 && v->size >= i && v->size >= j && i >= 0 && j >= 0) { TYPE temp; temp = v->data[i]; v->data[i] = v->data[j]; v->data[j] = temp; } } void removeAtDynArr(DynArr *v, int idx) { if(v != NULL && v->size != 0 && v->size >= idx && idx >= 0) { for(int i=0; i < v->size-1; i++) { v->data[idx+i] = v->data[idx+i+1]; } v->size--; } } /* ************************************************************************ Stack Interface Functions ************************************************************************ */ int isEmptyDynArr(DynArr *v) { if(v->size == 0) { return 1; } else { return 0; } } void pushDynArr(DynArr *v, TYPE val) { addDynArr(v, val); } TYPE topDynArr(DynArr *v) { TYPE top; if(v != NULL && v->size != 0) { top = v->data[v->size-1]; } return top; } void popDynArr(DynArr *v) { if(v != NULL && v->size != 0) { v->size = v->size-1; } } /* ************************************************************************ Bag Interface Functions ************************************************************************ */ int containsDynArr(DynArr *v, TYPE val) { int contains = 0; if(v != NULL && v->size != 0) { for(int i = 0;i < v->size; i++) { if(v->data[i] == val) { contains = 1; } } } return contains; } void 
removeDynArr(DynArr *v, TYPE val) { if(v != NULL && v->size != 0) { for(int i = 0;i < v->size-1; i++) { if(v->data[i] == val) { removeAtDynArr(v, i); break; } } } } Answer: Avoid name-space pollution You have carefully suffixed all your function names with the name of the type they operate on. This is good. On the other hand, your header file spams macro definitions for LT, EQ and TYPE into each file that #includes it. This is evil. TYPE is a far too generic name that might legitimately be used otherwise. LT and EQ are even worse. Besides, you're never using them. I suggest you get rid of all three macros as they are not needed. If you do need to #define macros in public header files, prefix them with your package name, for example, DYN_ARRAY_TYPE. Declare helper functions static You have some helper functions that are only needed inside the dynamicArray.c file. You have suffixed them with an underscore to indicate this. It would be better to (additionally) declare them static. This way, they will be truly private to your implementation file and cannot clash with other functions in other files. It might also make the code smaller and faster. The function createDynArr is never defined. On the other hand, there is initDynArr which is not declared in the header. This is good, because the function is not needed by clients. You should delete the declaration of createDynArr from the header file and make initDynArr a static function. Consider putting the type name first: sizeDynArr → DynArr_size As discussed above, putting the type name into the function name is good because it avoids name clashes with functions that operate on other types that might also provide similar operations. I prefer prefixing the type name, though. This makes the textual appearance of the function names more cohesive and supports auto-completion. I find it more readable to put an underscore between type name and function name. 
Avoid confusing names You have the following functions: newDynArr createDynArr / initDynArr (see above) freeDynArr deleteDynArr Without looking at the code, it is hard to guess what the functions do. I recommend you seek for pairs of matching names. For example: DynArr_new – allocates memory for a DynArr and initializes it DynArr_init – initializes an already allocated DynArr DynArr_fini – deinitializes a DynArr DynArr_del – deinitializes a DynArr and deallocates it Since DynArr is an opaque type, the init and fini functions need not / should not be exposed by the header file. You might be interested in my answer to this question on Stack Overflow about initializing (non) opaque types. Consider bool instead of int for logical values C99 introduced <stdbool.h> which provides true, false and bool. Using these makes your intent clearer than using 1, 0 and int. Consider size_t (or ptrdiff_t) instead of int Some people say that when a value must not be negative, an unsigned integer type should be used. Others argue that unsigned arithmetic is a confusing source of bugs and it was a mistake to chose unsigned integer types for array sizes. Anyway, this choice is now baked so tightly into the language that it will never change. Having to switch between signed and unsigned types combines the worst of both worlds so you might as well accept it and use size_t for your array sizes and indices consistently. If you still want to use a signed integer, use ptrdiff_t as it is guaranteed to be large enough. int is only required to be at least 16 bit wide although it will be 32 bit on almost any modern platform. size_t and ptrdiff_t are provided by the <stddef.h> header. Be const correct If a function does not modify the object, it should take it by pointer-to-const. For example, DynArr_size certainly shouldn't alter the DynArr object, so declare it like this. size_t DynArr_size(const DynArr * self); // ^^^^^ What is a TYPE? 
Your header file #defines the macro TYPE (which should really be named DYN_ARRAY_TYPE) to int. It would be handy if I could #define DYN_ARRAY_TYPE double before #includeing your header and instead get a dynamic array of doubles. Except, it doesn't work like this. While the code will compile and (unfortunately) probably also link happily, it will invoke tons of undefined behavior when executed. The problem is that your implementation file is still compiled for int. If you don't want your users to #define the element type, just spell out int or use a typedef in your header file that cannot be changed from the outside. (But please pick a more specific name than type.) If you want to be generic, it can be done even in C but it is ugly. First, you have to put all your code into the header file. That's not too bad but don't forget to declare all functions as inline. Second, you have to change your names according to the type via some macro magic. For example, instead of size_t DynArr_size(const DynArr * self); you would write #define DYN_ARRAY_CONCAT_R(FIRST, SECOND) FIRST ## SECOND #define DYN_ARRAY_CONCAT(FIRST, SECOND) DYN_ARRAY_CONCAT_R(FIRST, SECOND) #define DYN_ARRAY_SELF DYN_ARRAY_CONCAT(DynArr_, DYN_ARRAY_TYPE) inline size_t DYN_ARRAY_CONCAT(DYN_ARRAY_SELF, _size)(const DYN_ARRAY_SELF * self); #undef DYN_ARRAY_SELF After pre-processing with -DDYN_ARRAY_TYPE=float, you will get this. inline size_t DynArr_float_size(const DynArr_float *self); Which is a proper unambiguous declaration yet without a definition. But you still cannot have DynArrs for different types in the same translation unit because of your #include guards. If this sounds like black magic to you, then because it is. Just hard-code int or learn C++. Even writing a place-holder like @TYPE@ in your code and manually stamping out concrete versions by running $sed 's,@TYPE@,float,g' dyn_array.h.in > dyn_array_float.h can be simpler and less frustrating. 
Document your contracts and report contract violations loudly You were careful to detect certain erroneous conditions, such as passing NULL as the pointer to the DynArr or asking to pop an element off an empty stack. Writing defensive code is a Good Thing. I'm not a fan of the way you respond to the error conditions, though. The first thing you should do is document the contract of your functions. There are two ways to specify contracts. Functions with a wide contract allow for them to be called with nonsensical arguments. If they detect a problem, they report an error and do nothing. Functions with a narrow contract put the responsibility to only call them with valid arguments on the caller. If they are called with invalid arguments, their behavior is undefined and arbitrary bad things might happen. Both types of contracts have their place. However, wide contracts are much harder to implement and are not always as useful as they might appear at first. Let's look at element lookup as an example. It is pretty obvious that it can only perform a meaningful action when the position is non-negative and less than the size of the array. You have implemented it like this. TYPE getDynArr(DynArr *v, int pos) { TYPE value; if(v != NULL && v->size != 0 && v->size >= pos) { value = v->data[pos]; } return value; } Is the contract of this function narrow or wide? Okay, v->size >= pos must be pos < v->size, the check v->size != 0 is redundant and you should also check pos >= 0. But apart from that, what does the function do if it detects invalid arguments? It simply returns an uninitialized value. This itself is undefined behavior but if TYPE is int, chances are good that it won't crash your program and instead produce some garbage value. So, even though the function verifies its arguments, its behavior is undefined when called with invalid arguments. You could easily make the behavior well-defined by initializing value with 0. But would that be useful?
When getDynArr gives me 0, how can I know whether it is an indication of failure or whether there happened to be the value 0 at the position? So, implementing DynArr_get with a proper wide contract could look like this. /** * Retrieves the value of an element at a given index. * * If `self == NULL` or if `pos` is not a valid index for the given array, * `false` is `return`ed and `*result` is not touched. Otherwise, `*result` * is set to the value at index `pos` and `true` is `return`ed. * * @param self * `DynArr` to operate on * * @param pos * index of the element to retrieve * * @param result * address to store the result at * * @returns * whether a valid value was stored at `*result` * */ bool DynArr_get(const DynArr *const self, const size_t pos, int *const result) { // I don't have to check `(pos < 0)` because `size_t` is unsigned. if ((self == NULL) || (pos >= self->size)) { return false; } *result = self->data[pos]; return true; } It can be used like this. int value; if (DynArr_get(array, 2, &value)) { printf("The value is: %d\n", value); } else { fprintf(stderr, "Oh no, I made a mistake!\n"); } I would say that this is convoluted and not useful. Whoever is using a DynArr had better make sure they only ask for elements at valid indices. If in doubt, they can always ask for the size of the array before they ask for the element. So let's implement DynArr_get with a narrow contract instead. /** * Retrieves the value of an element at a given index. * * If `self == NULL` or if `pos` is not a valid index for the given array, * the behavior is undefined. * * @param self * `DynArr` to operate on * * @param pos * index of the element to retrieve * * @returns * value of the element at index `pos` * */ int DynArr_get(const DynArr *const self, const size_t pos) { return self->data[pos]; } This is a perfectly valid and reasonable implementation. If the function is called with invalid arguments, it will invoke undefined behavior but that's just what its documentation says. 
However, you can do your users a favor by crashing their application loudly when you detect a contract violation. int DynArr_get(const DynArr *const self, const size_t pos) { assert((self != NULL) && (pos < self->size)); return self->data[pos]; } Triggering an assertion failure is a valid form of undefined behavior so we still fulfill our contract but we give the programmer a clear hint what to fix in their code. On the other hand, the assert will vaporize away when the NDEBUG macro is #defined so we don't force users to pay the overhead for the argument validation if they don't want to. If you still haven't had enough of design-by-contract and defensive programming, I recommend you watch John Lakos' talk “Defensive Programming Done Right” (part 1, part 2) from CppCon '14. Some of it is specific to C++ but the general concepts are language agnostic. Alisdair Meredith's talk “Details Matter” (video, slides) at C++ Now '15 covers some of the same ideas. Don't handle out-of-memory with assert The assert macro should be used to detect contract violations and verify assumptions, that is, to unveil bugs. It is not an appropriate tool for handling general run-time errors. Even a bug-free program can run out of memory. Apart from that, not every failure of malloc should immediately terminate the program. It might be an appropriate reaction for many applications but a library type like your DynArr should not force this on its users. So, make the functions that need to allocate memory react gracefully to out-of-memory conditions and report the error back to their caller. DynArr_new could just return NULL and do nothing if it fails to allocate memory and DynArr_add could return a bool that indicates whether the operation succeeded and not modify the array when it cannot allocate memory but the current size is at the capacity limit. Note that while modern computers have plenty of memory, users might artificially constrain it for certain applications.
For example, I might want to set the memory limit for a server process that handles simple requests to 10 MiB to prevent denial of service attacks that send maliciously crafted queries that exploit worst-case characteristics of my algorithm. Now imagine what happens when your code asserts that malloc succeeds and the program was compiled with NDEBUG. The allocation will fail but instead of having defeated the DoS attack, we have provided the attacker with a way to corrupt memory! Consider a “constructor” that creates a zero-initialized non-empty array If I want a zero-filled DynArr of size n, I currently have to do this. DynArr * array = DynArr_new(n); while (DynArr_size(array) != n) { DynArr_push(array, 0); } This is awkward and inefficient. On the other hand, why do I have to specify an explicit capacity when I create a DynArr? I would consider the following interface more useful. /** * Allocates a zero-initialized dynamic array. * * If `n == 0`, no internal storage is allocated yet. * * @param n * initial size for the array * * @returns * a pointer to the array or `NULL` if allocation failed * */ DynArr * DynArr_new(size_t n); /** * Reserves internal memory without changing the size. * * If the current capacity is already equal to or greater than `n`, this * function does nothing. If allocation fails, the `return`ed new capacity * will be less than `n`. * * @param self * `DynArr` to operate on * * @param n * minimum capacity to ensure * * @returns * new capacity * */ size_t DynArr_reserve(DynArr * self, size_t n); It can do everything your current interface can do and more but is simpler to use and more efficient. Why are there both add and push? The functions addDynArr and pushDynArr do exactly the same thing. I don't think that you need both. Just stick with push for the stack semantics. Make removing elements more convenient Suppose I have a DynArr and want to remove all occurrences of 42 from it. How do you code it?
while (DynArr_contains(array, 42)) { DynArr_remove(array, 42); } This is not only awkward to write but also hilariously inefficient. I would recommend the following improvement to your interface. /** * Removes the first occurrence of a value from the array and shifts the * remaining elements towards the front. * * If the value is not found, then this function has no effect. * * @param self * `DynArr` to operate on * * @param value * value to remove * * @returns * whether the value was found and removed * */ bool DynArr_remove(DynArr * self, int value); /** * Removes all occurrences of a value from the array, shifting the remaining * elements towards the front. * * @param self * `DynArr` to operate on * * @param value * value to remove * * @returns * number of removed occurrences * */ size_t DynArr_removeAll(DynArr * self, int value); If you implement it wisely, removeAll can be much more efficient than calling remove repetitively. Think about shrinking your backing storage again You have implemented a strategy of doubling the capacity of your array when needed, which is good. However, your capacity never shrinks again when elements are removed from the DynArr. This is a valid choice but not the only possible one. If you decide to shrink, be sure to do it like this: If the size drops below 1/4 of the capacity, reduce the capacity to twice the size. If you shrink more, your operations can become very inefficient. Be lazy or else do useful stuff Your containsDynArr function always loops over the entire array even if it has already found the value. It should either return true immediately or else count the elements that are equal to the target value and return that count. (Of course, such a function should then be named DynArr_count, not DynArr_contains.) Keep interfaces small Are all your functions really needed in this interface?
For example, the algorithms to remove elements, check whether they are present or swap their values can all be implemented via the public interface to query the array size and access elements by index. If you guarantee that your internal storage is contiguous, I can even take the address of the first element and use it as a pointer to a raw array. C is a language that doesn't encourage generic programming but you can still draw ideas from it. There is a bug in _dynArrSetCapacity The statement v = temp in _dynArrSetCapacity has no effect outside the function, which means that the entire function does basically nothing. You should assign *v = *temp and then free(temp) itself. Before the assignment, you should call DynArr_fini(v) or you will leak the memory for the old array. Don't cast the result of malloc C has the infamous rule that void * is implicitly convertible to any pointer type. So you can just write int * p = malloc(100 * sizeof(int)); and it is perfectly correct. Casting explicitly to int * brings no benefit and might hide other bugs. See this post on Stack Overflow for a more detailed discussion. Avoid uninitialized variables You have a lot of code of the following form. int foo; // … foo = something(); This is an unnecessary potential entry hole for bugs. If you only declare the variable in the same statement that initializes it int foo = something(); the code will become shorter and you cannot accidentally use foo uninitialized. Trust boolean logic This logic if (v->size == 0) { return 1; } else { return 0; } is convoluted. Truth is not how you define it. Just use return (v->size == 0); instead. The parentheses are not needed but I prefer to parenthesize expressions with logical comparisons for readability. Consider using memcpy and memmove instead of manual loops The standard library has highly optimized routines available to copy memory: memcpy if the regions don't overlap for sure and memmove if the regions might overlap.
Consider using them in your code for simplicity and performance.

Consider using realloc to grow or shrink your storage

The standard library already provides the realloc function to grow or shrink an allocated buffer while keeping its contents (those that were valid before and are still valid afterwards) intact. Consider using it for performance and simplicity.
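To illustrate the "implement it wisely" remark above: removeAll can compact the array in a single pass instead of calling remove in a loop. A minimal sketch, assuming a DynArr that keeps int elements in a contiguous buffer with a size field (the struct here is an illustrative stand-in, not the OP's actual definition):

```c
#include <stddef.h>

/* Illustrative stand-in for the reviewed DynArr; field names are assumptions. */
typedef struct {
    int *data;
    size_t size;
} DynArr;

/* Single-pass removeAll: keep every element that is not `value`, compacting
 * towards the front. O(n) total, versus O(n*k) for k repeated single removes. */
size_t DynArr_removeAll(DynArr *self, int value)
{
    size_t kept = 0;
    for (size_t i = 0; i < self->size; ++i) {
        if (self->data[i] != value) {
            self->data[kept++] = self->data[i];
        }
    }
    size_t removed = self->size - kept;
    self->size = kept;
    return removed;
}
```

Each surviving element is examined and moved at most once, which is why this beats the repeated-remove loop asymptotically.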
{ "domain": "codereview.stackexchange", "id": 19568, "tags": "c, stack" }
Understanding non-faulting and faulting software prefetches
Question: What is the difference between a faulting and a non-faulting software prefetch? I have read some material on Google but can't understand it deeply. How do we know if a software prefetch is faulting or non-faulting in practice?

Answer: A faulting prefetch is one which (as gnasher729's answer notes) generates any translation (e.g., invalid page table entry), permission, or other fault (e.g., ECC failure, data watchpoint match) associated with the access. Such a prefetch acts as if the memory location was accessed normally. For data accesses, such a prefetch can be provided by software on architectures that lack prefetch instructions. (For instruction accesses, a return operation would need to be placed in the cache block, reducing the amount of useful code that would fit in the block.)

By acting as a normal access, a faulting prefetch effectively guarantees that the chunk of memory will be cached (a non-faulting prefetch may be dropped under high memory use or even a TLB miss) and that its address translation will be cached. This behavior may be preferred if the address is known to be accessed in the near future, especially if a timing-critical section is about to be entered. In some RISC ISAs, a faulting data read prefetch could be provided by a simple load to the zero register. Other ISAs could perform a normal load at the cost of temporary use of a register.

Non-faulting prefetches are purely hints to hardware, allowing hardware to drop the access if it is perceived as too expensive. By avoiding faults, such prefetches can be used without concern about whether the address is valid; prefetching from a null pointer or past the end of an array is safe. A non-faulting prefetch is purely speculative. (Some ISAs provide support for speculative, non-faulting loads. In such an ISA, non-faulting prefetches could be implemented using such a load.
Since the speculative load value may be used, hardware will typically treat such a load as an actual access and merely suppress any exceptions.)
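The "safe even past the end of an array" property of non-faulting prefetches is what compiler builtins such as GCC/Clang's __builtin_prefetch rely on. A small sketch (the builtin emits a non-faulting prefetch instruction where the target supports one, or nothing at all; the look-ahead distance of 16 is an arbitrary illustrative choice):

```c
#include <stddef.h>

/* Sum an array while hinting the cache a few elements ahead. The hint
 * address may point past a[n-1]; this is harmless precisely because the
 * emitted instruction is a non-faulting hint, not a real load. */
long sum_with_prefetch(const long *a, size_t n)
{
    long total = 0;
    for (size_t i = 0; i < n; ++i) {
        __builtin_prefetch(a + i + 16, 0 /* read */, 3 /* high locality */);
        total += a[i];
    }
    return total;
}
```

A faulting prefetch, by contrast, could be emulated with an ordinary load whose result is discarded; that variant must only ever be given valid addresses.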
{ "domain": "cs.stackexchange", "id": 7137, "tags": "computer-architecture, virtual-memory, memory-access" }
When is the AKS primality test actually faster than other tests?
Question: I am trying to get an idea of how the AKS primality test should be interpreted as I learn about it, e.g., as a corollary for proving that PRIMES ⊆ P, or as an actually practical algorithm for primality testing on computers. The test has polynomial runtime but with high degree and possibly high constants. So, in practice, at which $n$ does it surpass other primality tests? Here, $n$ is the number of digits of the prime, and "surpass" refers to the approximate runtime of the tests on typical computer architectures. I am interested in functionally comparable algorithms, that is, deterministic ones that do not need conjectures for correctness. Additionally, is using such a test over the others practical given the test's memory requirements?

Answer: Quick answer: never, for practical purposes. It is not currently of any practical use.

First, let's separate out "practical" compositeness testing from primality proofs. The former is good enough for almost all purposes, though there are different levels of testing people feel is adequate. For numbers under 2^64, no more than 7 Miller-Rabin tests, or one BPSW test, is required for a deterministic answer. This will be vastly faster than AKS and be just as correct in all cases. For numbers over 2^64, BPSW is a good choice, with some additional random-base Miller-Rabin tests adding some extra confidence for very little cost. Almost all of the proof methods will start out (or they should) with a test like this because it is cheap and means we only do the hard work on numbers which are almost certainly prime.

Moving on to proofs. In each case the resulting proof requires no conjectures, so these may be functionally compared. The "gotcha" of APR-CL is that it isn't quite polynomial, and the "gotcha" of ECPP/fastECPP is that there may exist numbers that take longer than expected.
In the graph, we see two open source AKS implementations -- the first being from the v6 paper, the second including improvements from Bernstein and Voloch and a nice r/s heuristic from Bornemann. These use binary segmentation in GMP for the polynomial multiplies, so are pretty efficient, and memory use is a non-issue for the sizes considered here. They produce nice straight lines with a slope of ~6.4 on the log-log graph, which is great. But extrapolating out to 1000 digits arrives at estimated times in the hundreds of thousands to millions of years, vs. a few minutes for APR-CL and ECPP. There are further optimizations which could be done from the 2002 Bernstein paper, but I don't think this will materially change the situation (though until implemented this isn't proven). Eventually AKS beats trial division.

The BLS75 theorem 5 (e.g., the n-1 proof) method requires partial factoring of n-1. This works great at small sizes, and also when we're lucky and n-1 is easy to factor, but eventually we'll get stuck having to factor some large semiprime. There are more efficient implementations, but it really doesn't scale past 100 digits regardless. We can see that AKS will pass this method. So if you asked the question in 1975 (and had the AKS algorithm back then) we could calculate the crossover for where AKS was the most practical algorithm. But by the late 1980s, APR-CL and other cyclotomic methods were the correct comparison, and by the mid 1990s we'd have to include ECPP.

The APR-CL and ECPP methods both have open source implementations. Primo (free but not open source ECPP) will be faster for larger digit sizes and I'm sure has a nicer curve (I haven't done new benchmarking yet). APR-CL is non-polynomial but the exponent has a factor $\log \log \log n$ which as someone quipped "goes to infinity but has never been observed to do so". This leads us to believe that in theory the lines would not cross for any value of n where AKS would finish before our sun burned out.
ECPP is a Las Vegas algorithm, in that when we get an answer it is 100% correct, we expect a result in conjectured $O(\log^{5+\epsilon}(n))$ (ECPP) or $O(\log^{4+\epsilon}(n))$ ("fastECPP") time, but there may be numbers that take longer. So our expectation is that standard AKS will always be slower than ECPP for almost all numbers (it certainly has shown itself so for numbers up to 25k digits). AKS may have more improvements waiting to be discovered that makes it practical. Bernstein's Quartic paper discusses an AKS-based randomized $O(\log^{4+\epsilon}(n))$ algorithm, and Morain's fastECPP paper references other non-deterministic AKS-based methods. This is a fundamental change, but shows how AKS opened up some new research areas. However, almost 10 years later I have not seen anyone use this method (or even any implementations). He writes in the introduction, "Is the $(\lg n)^{4+o(1)}$ time for the new algorithm smaller than the $(\lg n)^{4+o(1)}$ time to find elliptic-curve certificates? My current impression is that the answer is no, but that further results [...] could change the answer." Some of these algorithms can be easily parallelized or distributed. AKS very easily (each 's' test is independent). ECPP isn't too hard. I'm not sure about APR-CL. ECPP and the BLS75 methods produce certificates which can be independently and quickly verified. This is a huge advantage over AKS and APR-CL, where we just have to trust the implementation and computer that produced it.
{ "domain": "cs.stackexchange", "id": 2632, "tags": "algorithms, efficiency, primes" }
What conditions to use to crystallise Equine Myoglobin?
Question: What are the crystallisation conditions for Equine Myoglobin? I've tried repeating some from the PDB but no success. Answer: Does the 1958 Nature paper by Kendrew include any useful details? (Sorry, I'm behind a paywall right now).
{ "domain": "chemistry.stackexchange", "id": 831, "tags": "crystal-structure, structural-biology" }
Preventing unwanted access
Question: I have some code that is helping to protect some online surveys. The surveys have an incentive in the form of an Amazon voucher or cash (which is manually/semi-automatically distributed post-survey), so as you can imagine, we get a lot of people attempting to get multiple vouchers, or undesirable respondents mainly from China. Of course the main way to prevent this is to send out unique links to a set of pre-recruited people, however this isn't always an option and hence the need for some kind of protection. Here is the code; I've removed 2 SQL insert queries to reduce the size of this post:

$countryCode = sanitise($_SERVER["HTTP_CF_IPCOUNTRY"]);
$allowedCountries = getAllowedCountries($projectID); // array('GB', 'US', 'SE');

if (in_array($countryCode, $allowedCountries)) {
    // Count records with matching ip for this project, >=1 TRUE, 0 FALSE
    $isBlocked = checkForBlockedIP($projectID, $ip);
    if ($isBlocked == TRUE) {
        header("Location: error.php?blocked");
    } else {
        // Count records with matching ip for this project, >=1 TRUE, 0 FALSE
        $isDuplicate = checkForDuplicateResponse($projectID, $ip);
        if ($isDuplicate == TRUE) {
            //<Removed>
            // Store identifying data into dup_attempts table
            // and block ip from further attempts
            header("Location: error.php?duplicate");
        } else {
            // Create redirect link and send user to it
            $redirectLink = $projectLink . $userID;
            header("refresh:3; url=" . $redirectLink . "");
        }
    }
}

I'm posting this here because I'm not a professional developer and I don't know if I'm taking the right approach here. Perhaps my approach as a whole is wrong. I'm allowing only certain countries to participate, checking for a duplicate IP (per project), and if one is found, then I'm storing it in a table for duplicate attempts and adding the IP to a blacklist for that project.

Answer: I agree with the comment above about IP-based restriction perhaps not being the best approach here, or at least not using it as the only means to detect duplicates.
I think GeoIP is reasonable for blocking at a country level (though certainly not foolproof). I think IP address might be a reasonable component of duplicate detection. You could also add other duplicate detection mechanisms, including:

- User ID (which is something you seem to have but are not using - I would think this would be the most authoritative piece of information if there is authentication/login attached to this user ID)
- setting a cookie
- inspecting the user agent
- setting a value in browser localStorage and then having the browser add it to the request (localStorage is a little harder for a typical user to clear out than cookies)

With regards to the code itself, you should get in the practice of inverting conditions when you have large blocks of code nested in a conditional (or really just in general, to provide clear, early exit paths from a section of code). For example:

if (!in_array($countryCode, $allowedCountries)) {
    // set location header and exit
}

if (checkForBlockedIP($projectID, $ip)) {
    // set location header and exit
}

if (checkForDuplicateResponse($projectID, $ip)) {
    // set location header and exit
}

// happy path code follows

There is no reason for your else blocks at all here from what I can tell. As a general rule of thumb, the less nested code and the fewer code branches you have, the less prone your code is to bugs, so you should actively look to design away such constructs where possible.
{ "domain": "codereview.stackexchange", "id": 25010, "tags": "php" }
Convert message const ptr to non-const ptr
Question: What is the PROPER way to convert the ConstPtr that I receive in a message callback to a non-const ptr so that I can modify the data? What I want to do is to store a buffer of the messages my node receives and then do some additional manipulation. I know I can manually allocate a copy the message information and store it locally, but this removes the whole point of using classes and becomes a pain when a class has member data which is another class. The autogenerated messages do not have a specific copy constructor and I could not find any good way of adding one (I don't wanna do it manually in the generated header as it will disappear if I change the message). Using the default C++ copy constructor will get me in trouble as it makes only shallow copies of complex data structures which are part of the member data of the message class and when the shared_ptr count for the message goes to 0 I will get a segfault. Originally posted by naikin on ROS Answers with karma: 21 on 2017-02-22 Post score: 1 Answer: I don't know about converting (as that would seem to violate semantics), but according to wiki/roscpp/Overview - Publishers and Subscribers - Subscribing to a Topic - Callback Signature, non-const callbacks are also fully supported: You can also request a non-const message, in which case a copy will be made if necessary (i.e. 
there are multiple subscriptions to the same topic in a single node):

void callback(const boost::shared_ptr<std_msgs::String>&);
void callback(boost::shared_ptr<std_msgs::String>);
void callback(const std_msgs::StringPtr&);
void callback(const std_msgs::String::Ptr&);
void callback(std_msgs::StringPtr);
void callback(std_msgs::String::Ptr);
void callback(const ros::MessageEvent<std_msgs::String>&);

Originally posted by gvdhoorn with karma: 86574 on 2017-02-23
This answer was ACCEPTED on the original site
Post score: 2

Original comments
Comment by naikin on 2017-02-23: Thanks, that indeed works in some cases, but I cannot make it work with actionlib because it expects a ConstPtr.
Comment by gvdhoorn on 2017-02-24: So this is basically an xy-problem then?
Comment by naikin on 2017-02-24: Partially, yes. Sorry about that. I failed to mention it is about actionlib in particular cause I was not aware there is a difference. Again, what I really want is to end up with non-const ptr to the message when I receive it with actionlib.
{ "domain": "robotics.stackexchange", "id": 27104, "tags": "ros, shared-ptr, roscpp" }
How to compile rmf_core using colcon build in ROS2 eloquent?
Question: On running the build, the error stated that the fcl package was not found when building the package rmf_traffic. Source from GitHub: https://github.com/osrf/rmf_core

Starting >>> rmf_utils
Starting >>> rmf_dispenser_msgs
Starting >>> rmf_fleet_msgs
Starting >>> rmf_traffic_msgs
Starting >>> rmf_door_msgs
Starting >>> rmf_lift_msgs
Starting >>> cpp_pubsub
Starting >>> rmf_workcell_msgs
Starting >>> turtlesim
Finished <<< cpp_pubsub [0.28s]
Finished <<< turtlesim [0.95s]
Finished <<< rmf_utils [2.69s]
Starting >>> rmf_traffic
--- stderr: rmf_traffic
CMake Error at /usr/share/cmake-3.10/Modules/FindPkgConfig.cmake:415 (message):
  A required package was not found
Call Stack (most recent call first):
  /usr/share/cmake-3.10/Modules/FindPkgConfig.cmake:593 (_pkg_check_modules_internal)
  CMakeLists.txt:26 (pkg_check_modules)
---
Failed <<< rmf_traffic [ Exited with code 1 ]
Aborted <<< rmf_door_msgs
Aborted <<< rmf_lift_msgs
Aborted <<< rmf_dispenser_msgs
Aborted <<< rmf_workcell_msgs
Aborted <<< rmf_fleet_msgs
Aborted <<< rmf_traffic_msgs

Summary: 3 packages finished [26.3s]
  1 package failed: rmf_traffic
  6 packages aborted: rmf_dispenser_msgs rmf_door_msgs rmf_fleet_msgs rmf_lift_msgs rmf_traffic_msgs rmf_workcell_msgs
  1 package had stderr output: rmf_traffic
  3 packages not processed

Originally posted by webvenky on ROS Answers with karma: 117 on 2020-02-07
Post score: 0

Answer: Normally, when trying to build ROS packages from source (which includes ROS 1 and ROS 2), you'd follow a certain workflow, one of the steps of which would be to make sure all package dependencies are present. An example workflow is described in #q252478 (ignore the specific versions of Ubuntu and ROS: it's a generic workflow). The important step that takes care of dependencies is the rosdep invocation. It would appear however that rmf_traffic does not state the dependency on fcl anywhere in its package.xml. Without that information, rosdep would not be aware of the dependency and cannot help you.
In this particular case, you'd have to investigate the CMakeLists.txt of each package, identify the used packages/libraries (ie: see which find_package(..) lines there are) and manually install the corresponding Ubuntu/Debian packages. In addition: it would be nice if you could report this on the issue tracker and it would be even nicer if, after you've identified all dependencies, you could update the package manifests with this information and then submit a Pull Request to osrf/rmf_core that would fix this for everyone. Refer to the catkin documentation » How to do common tasks » Package format 2 (recommended) / Overview / Resolving dependencies section in the Catkin documentation for information on which particular sections would need to be added to the manifests. Originally posted by gvdhoorn with karma: 86574 on 2020-02-07 This answer was ACCEPTED on the original site Post score: 1 Original comments Comment by marguedas on 2020-02-07: :+1: to everything said above by @gvdhoorn Looking at the issue tracker of the package can also give you hints. For example it looks like this issue has been reported and has a solution listed: https://github.com/osrf/rmf_core/issues/70#issuecomment-580602947
{ "domain": "robotics.stackexchange", "id": 34399, "tags": "ros2, compile, colcon, build, fcl" }
Chaos implies Nonlinearity?
Question: Why, for finite dimensions, is nonlinearity a precondition for chaos? This article (Linear Chaos? by Nathan S. Feldman) offers an example of an infinite-dimensional chaotic map which is linear. While it also offers a hint (in section 2) about why finite-dimensional linear maps cannot be chaotic, the reason is not immediately clear to me. Could one maybe relate nonlinearity to the fulfilment of the 3 criteria for chaos (mentioned in the paper)? Moreover, could arguments made for a map/discrete dynamical system be naturally carried over to continuous dynamical systems/differential equations? I assume one would begin by looking at the flow function, but I have no idea how to proceed.

Answer: Yes, it does. My take is that, without nonlinearity, folding is missing. One of the main mechanisms behind classical chaos is the so-called stretch and fold. It can be visualized as a blob of initial conditions being stretched and then folded over itself by the mapping: stretching leads to a divergence of close trajectories (the hallmark of chaos), while folding keeps them bounded (and dense). A linear system may be able to produce stretching, but this alone corresponds to a trivial behavior, divergence. But, in the spaces we're considering, a linear system cannot produce folding:

[none of the] orbits of a linear operator in finite dimensions [...] are dense in the space

Why can't linear functions produce folding? The reason is that folding means that an interval is mapped onto itself twice, and that requires a non-monotonic function (linear functions are monotonic), such as the logistic map's parabola. Notice, though, that topology can provide the folding that a linear transformation alone can't; see, e.g., Arnold's cat map, which is a linear map on a torus. You ask about condition 3, but we don't need to address it directly: if $f$ has a dense set of periodic points and is transitive, then $f$ must have sensitive dependence on initial conditions.
Hence only the first two conditions of the definition of chaos need to be verified when showing that a particular function $f$ is chaotic. As for differential equations, any linear ODE, $\dot{\mathbf{x}}=\mathbf{A}\mathbf{x}$, can be solved. More pictorially, the understanding of maps can be somewhat transfered to flows by means of Poincaré maps and, especially, Poincaré recurrence plots. See, e.g., Carroll's A review of return maps for Rössler and the complex Lorenz or Crutchfield's slides from the lecture Example Dynamical Systems.
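The "any linear ODE can be solved" remark can be made concrete. A short sketch (standard material, not from the linked paper) of why a finite-dimensional linear flow cannot combine bounded orbits with sustained stretching:

```latex
\dot{\mathbf{x}} = \mathbf{A}\mathbf{x}
\;\Longrightarrow\;
\mathbf{x}(t) = e^{\mathbf{A}t}\mathbf{x}_0 ,
\qquad
\boldsymbol{\delta}(t) = \mathbf{x}_1(t) - \mathbf{x}_2(t) = e^{\mathbf{A}t}\,\boldsymbol{\delta}_0 .
```

The separation of two trajectories obeys the same linear equation as the trajectories themselves. Decomposing $\boldsymbol{\delta}_0$ along the (generalized) eigenvectors of $\mathbf{A}$: if some eigenvalue has $\operatorname{Re}\lambda > 0$, separations grow exponentially, but so do the orbits themselves, so the motion is unbounded; if all $\operatorname{Re}\lambda \le 0$, orbits stay bounded but there is no sustained exponential divergence. Either way, stretching and boundedness cannot coexist, because no folding is available to re-inject diverging trajectories.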
{ "domain": "physics.stackexchange", "id": 48869, "tags": "chaos-theory, non-linear-systems" }
Relative vibration damping relating to viscous damping
Question: I would like to understand better the information provided by a flexible coupling manufacturer (see this document, page 42), and how it relates to viscous damping. Specifically, the manufacturer provides the 'Relative Torsional Vibration Damping', which is a unitless number defined as follows:

"Relative Torsional Vibration Damping $\Psi$: The relative damping $\Psi_{\text{nominal}}$ is the ratio of the damping energy, converted into heat during a vibration cycle, to the flexible strain energy."

However, I struggle to understand how this $\Psi$ relates to the viscous damping coefficient $C$ which comes in the form of $T=C\dot{x}$ (where $T$ is the torque due to damping and $\dot{x}$ is the rotational speed). I am tempted to assume that $\Psi$ as defined there is equivalent to the well-known damping ratio $\zeta$, however I am not too sure. Any clarification/insight would be appreciated.

Answer: These "rubber damper" materials do not behave like viscous dampers with a constant coefficient $T = C\dot x$. For a given amplitude $x$, they dissipate a constant amount of energy in each vibration cycle, independent of the frequency. If you want to include them in the usual equations for steady-state forced response, you model them as a complex stiffness term, not as a damping term, i.e.

$$-M\omega^2 x(\omega) + K(1 + i \eta)x(\omega) = F(\omega)$$

where $\eta$ is the (constant) damping coefficient. Google for "hysteretic damping" or "structural damping" for more details of the theory if you haven't seen this before. Maybe the specification document gives you the value of $\eta$ somewhere. (I'm not going to search through 200+ pages of multi-lingual text to try and find it for you!) Otherwise, work out the loss of energy in one cycle and compare it with the total energy in the system.
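The manufacturer's $\Psi$ can be tied to the loss factor $\eta$ of the hysteretic model and, at resonance, to an equivalent viscous damping ratio. A sketch, under the assumption that $\Psi$ means energy dissipated per cycle divided by the peak strain energy (as in the quoted definition), for harmonic motion of amplitude $X$:

```latex
\Delta W = \pi \eta K X^{2},
\qquad
U = \tfrac{1}{2} K X^{2}
\;\Longrightarrow\;
\Psi = \frac{\Delta W}{U} = 2\pi\eta ,
\qquad
\eta = \frac{\Psi}{2\pi} .
```

Matching the per-cycle loss of a viscous damper, $\Delta W_{\mathrm{visc}} = \pi C \omega X^{2}$, gives a frequency-dependent equivalent coefficient $C_{\mathrm{eq}} = \eta K / \omega$, and evaluating at resonance $\omega = \sqrt{K/M}$ (with $M$ the inertia) yields $\zeta_{\mathrm{eq}} \approx \eta/2 = \Psi/(4\pi)$. So $\Psi$ is not the damping ratio $\zeta$ itself, but the two are simply related near resonance.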
{ "domain": "physics.stackexchange", "id": 82935, "tags": "newtonian-mechanics, rotational-dynamics" }
add_message_files() directory not found:
Question: I'm an absolute beginner in Ubuntu and ROS. I'm using 12.04 LTS (VirtualBox in Windows 7) and Groovy. I'm using catkin to build my workspace. This is what I did:

1- I did create a workspace for catkin.
2- I did create a "catkin package".
3- I did build and use the catkin package in the workspace.
4- Now I want to use the example WritingPublisherSubscriber (c++).

I'm getting the following error:

CMake Error at /opt/ros/groovy/share/genmsg/cmake/genmsg-extras.cmake:64 (message):
  add_message_files() directory not found:
  /home/croco/catkin_ws/src/beginner_tutorials/msg
Call Stack (most recent call first):
  beginner_tutorials/CMakeLists.txt:8 (add_message_files)

-- Configuring incomplete, errors occurred!
Invoking "cmake" failed

I'm sorry I can't post links; however, I'm following the tutorials literally.

Originally posted by CroCo on ROS Answers with karma: 155 on 2013-07-01
Post score: 1

Answer: The tutorial seems to need an update. Please try commenting out the following three lines in your CMakeLists.txt:

# add_message_files(FILES Num.msg)
# add_service_files(FILES AddTwoInts.srv)
# generate_messages(DEPENDENCIES std_msgs)

Those are not necessary since this tutorial does not create any custom messages or services.

Originally posted by Dirk Thomas with karma: 16276 on 2013-07-01
This answer was ACCEPTED on the original site
Post score: 6

Original comments
Comment by CroCo on 2013-07-01: @Dirk Thomas, Thank you so much. It worked.
Comment by Cássio on 2013-07-24: Thank you, It worked.
Comment by sina.cb on 2014-02-13: Thanks a lot! I was searching for this problem for about 2 hours. You saved my day! :)
Comment by jason on 2017-02-14: thank you!
{ "domain": "robotics.stackexchange", "id": 14779, "tags": "ros" }