If we cloned an extinct animal, what would become of its gut biome?
Question: If we cloned an extinct animal like the mammoth, what would become of its gut biome? Answer: The gut biome would be populated from the environment of the newly "cloned" animal. Presumably the baby animal would be born live from a host mother, which would be the likely source for its microbiota, similar to other animals.
{ "domain": "biology.stackexchange", "id": 12131, "tags": "cloning, microbiome, speculative" }
What are the differences between a $\psi$-epistemic ontological model and a $\psi$-ontic model of quantum mechanics, exactly?
Question: I am confused about the differences between a $\psi$-epistemic ontological model of quantum mechanics and a $\psi$-ontic one. The way I understand it, a $\psi$-epistemic model says that every quantum state does not correspond to a physical state, and I understand this is to be interpreted as saying that quantum mechanics isn't a complete description of reality. However, I also understand that an ontological model assumes that there is a way to assign a complete (deterministic) description of reality, and this seems to point to a $\psi$-ontic model, if I understand that one correctly. Where am I going wrong here, and how exactly should one interpret these terms? Answer: You seem to be rather confused regarding those terms. The terms $\psi$-epistemic and $\psi$-ontic are mutually exclusive when describing an interpretation of quantum mechanics. Both of these terms are possible characteristics of an ontological model: a description of the set $\Lambda$ of possible "complete descriptions of reality", which are often denoted by $\lambda$ and termed ontic states. In other words, an ontological model is a description of the things that "exist" in the real world. Within that framework, there are two types: $\psi$-ontic models, where the wavefunction "exists", and $\psi$-epistemic models, where it doesn't. More concretely: In $\psi$-ontic models, the wavefunction is a physical property of the "real" state of the world. That is, if I were to obtain a complete description of reality for the state of the system, then I could deduce the wavefunction $\psi$ from this state. Graphically, such models look like this: Note, however, that this does not fully rule out interpretations of $\psi$ as a statistical quantity: it can still be a distribution over a set of real states, with each wavefunction corresponding to a distinct set of states. 
As a subset of these models, if the real state of the system turns out to be fully informationally equivalent to the wavefunction, the model is called $\psi$-complete. In such a model, if I know the wavefunction, then I know all that there is to know about the system. This rules out e.g. hidden variables. In $\psi$-epistemic models, the wavefunction is not a physical property, but rather a statistical quantity and really just a description of our state of knowledge about the system. More concretely, a model is called $\psi$-epistemic if it allows the existence of two different wavefunctions that are consistent with the same "real" state of the system. In particular, this means that you cannot deduce the wavefunction from the ontic state of the world. As for how you phrased it in the question (that $\psi$-epistemic models should be interpreted as saying quantum mechanics isn't a complete description of reality): that is correct, but not the whole story. In $\psi$-epistemic models QM isn't a complete description of reality, but that is also the case in $\psi$-ontic models that are not $\psi$-complete. For more details, see the paper that (to my knowledge) introduced these terms with precise definitions: Einstein, incompleteness, and the epistemic view of quantum states. N. Harrigan and R. W. Spekkens. Found. Phys. 40, 125 (2010), arXiv:0706.2661. Mathematica source for the graphics: Import["http://goo.gl/NaH6rM"]["https://i.stack.imgur.com/vtA9o.png"].
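For reference, the distinction can be stated compactly in the framework of the Harrigan and Spekkens paper cited above. This is a loose paraphrase of their definitions (glossing over measure-zero caveats), not part of the original answer:

```latex
\[
\text{Preparing } \psi \text{ yields an ontic state } \lambda \in \Lambda
\text{ sampled from a distribution } \mu_\psi(\lambda).
\]
\[
\psi\text{-ontic:}\quad \psi \neq \phi \;\Longrightarrow\;
\operatorname{supp}\mu_\psi \cap \operatorname{supp}\mu_\phi = \emptyset
\quad (\text{so } \lambda \text{ determines } \psi).
\]
\[
\psi\text{-epistemic:}\quad \exists\, \psi \neq \phi :\;
\operatorname{supp}\mu_\psi \cap \operatorname{supp}\mu_\phi \neq \emptyset
\quad (\text{some } \lambda \text{ is consistent with both}).
\]
```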
{ "domain": "physics.stackexchange", "id": 35118, "tags": "quantum-mechanics, wavefunction, quantum-interpretations, determinism" }
jQuery functions to toggle navbar menus
Question: This is the first JS/JQuery script I had written for my website, few months ago. It toggles/hide/show menus (and some sub-menus/interactions) when a user clicks on specific items from the navbar. Right now I have 6 main menus, and I believe my code is far from being optimized. $(document).ready(function(){ $(document).click(function(event) { $("#website-header #tab-lang").hide("slide", { direction: "right" }, 300); $("#website-header #tab-apps").hide("slide", { direction: "right" }, 300); $("#website-header #tab-apps #apps-reveal").delay(300).hide(0); $("#website-header #tab-dashboard").hide("slide", { direction: "right" }, 300); $("#website-header #tab-login").hide("slide", { direction: "right" }, 300); $("#website-footer #tab-about").hide("slide", { direction: "down" }, 300); $("#website-footer #tab-about #credits-reveal").delay(300).hide(0); $("#website-footer #tab-contact").hide("slide", { direction: "down" }, 300); }); // lang tab $("#website-header #tab-lang").hide().click(function(event) { event.stopPropagation(); }); $("#website-header #nav-lang").click(function(event) { event.stopPropagation(); $("#website-header #tab-lang").toggle("slide", { direction: "right" }, 300); $("#website-header #tab-apps").hide("slide", { direction: "right" }, 300); $("#website-header #tab-apps #apps-reveal").delay(300).hide(0); $("#website-header #tab-dashboard").hide("slide", { direction: "right" }, 300); $("#website-header #tab-login").hide("slide", { direction: "right" }, 300); $("#website-footer #tab-about").hide("slide", { direction: "down" }, 300); $("#website-footer #tab-about #credits-reveal").delay(300).hide(0); $("#website-footer #tab-contact").hide("slide", { direction: "down" }, 300); }); // apps tab $("#website-header #tab-apps").hide().click(function(event) { event.stopPropagation(); }); $("#website-header #nav-apps").click(function(event) { event.stopPropagation(); $("#website-header #tab-lang").hide("slide", { direction: "right" }, 300); 
$("#website-header #tab-apps").toggle("slide", { direction: "right" }, 300); $("#website-header #tab-apps #apps-reveal").delay(300).hide(0); $("#website-header #tab-dashboard").hide("slide", { direction: "right" }, 300); $("#website-header #tab-login").hide("slide", { direction: "right" }, 300); $("#website-footer #tab-about").hide("slide", { direction: "down" }, 300); $("#website-footer #tab-about #credits-reveal").delay(300).hide(0); $("#website-footer #tab-contact").hide("slide", { direction: "down" }, 300); }); $("#website-header #apps-toggle").click( function() { $("#website-header #tab-apps #apps-reveal").slideToggle(300); } ); // dashboard tab $("#website-header #tab-dashboard").hide().click(function(event) { event.stopPropagation(); }); $("#website-header #nav-dashboard").click(function(event) { event.stopPropagation(); $("#website-header #tab-lang").hide("slide", { direction: "right" }, 300); $("#website-header #tab-apps").hide("slide", { direction: "right" }, 300); $("#website-header #tab-apps #apps-reveal").delay(300).hide(0); $("#website-header #tab-dashboard").toggle("slide", { direction: "right" }, 300); $("#website-header #tab-login").hide("slide", { direction: "right" }, 300); $("#website-footer #tab-about").hide("slide", { direction: "down" }, 300); $("#website-footer #tab-about #credits-reveal").delay(300).hide(0); $("#website-footer #tab-contact").hide("slide", { direction: "down" }, 300); }); // login tab $("#website-header #tab-login").hide().click(function(event) { event.stopPropagation(); }); $("#website-header #nav-login").click(function(event) { event.stopPropagation(); $("#website-header #tab-lang").hide("slide", { direction: "right" }, 300); $("#website-header #tab-apps").hide("slide", { direction: "right" }, 300); $("#website-header #tab-apps #apps-reveal").delay(300).hide(0); $("#website-header #tab-dashboard").hide("slide", { direction: "right" }, 300); $("#website-header #tab-login").toggle("slide", { direction: "right" }, 300); 
$("#website-footer #tab-about").hide("slide", { direction: "down" }, 300); $("#website-footer #tab-about #credits-reveal").delay(300).hide(0); $("#website-footer #tab-contact").hide("slide", { direction: "down" }, 300); }); // about tab $("#website-footer #tab-about").hide().click(function(event) { event.stopPropagation(); }); $("#website-footer #nav-about").click(function(event) { event.stopPropagation(); $("#website-header #tab-lang").hide("slide", { direction: "right" }, 300); $("#website-header #tab-apps").hide("slide", { direction: "right" }, 300); $("#website-header #tab-apps #apps-reveal").delay(300).hide(0); $("#website-header #tab-dashboard").hide("slide", { direction: "right" }, 300); $("#website-header #tab-login").hide("slide", { direction: "right" }, 300); $("#website-footer #tab-about").toggle("slide", { direction: "down" }, 300); $("#website-footer #tab-about #credits-reveal").delay(300).hide(0); $("#website-footer #tab-contact").hide("slide", { direction: "down" }, 300); }); $("#website-footer #credits-toggle").click( function() { $("#website-footer #tab-about #credits-reveal").slideToggle(300); } ); // contact tab $("#website-footer #tab-contact").hide().click(function(event) { event.stopPropagation(); }); $("#website-footer #nav-contact").click(function(event) { event.stopPropagation(); $("#website-header #tab-lang").hide("slide", { direction: "right" }, 300); $("#website-header #tab-apps").hide("slide", { direction: "right" }, 300); $("#website-header #tab-apps #apps-reveal").delay(300).hide(0); $("#website-header #tab-dashboard").hide("slide", { direction: "right" }, 300); $("#website-header #tab-login").hide("slide", { direction: "right" }, 300); $("#website-footer #tab-about").hide("slide", { direction: "down" }, 300); $("#website-footer #tab-about #credits-reveal").delay(300).hide(0); $("#website-footer #tab-contact").toggle("slide", { direction: "down" }, 300); $("#website-footer #tab-contact 
#contact-callback").hide().css('visibility','visible'); $("#website-footer #tab-contact #callback-message").hide().css('visibility','visible'); $("#website-footer #tab-contact #callback-dismiss").hide().css('visibility','visible'); }); }) Basically I'm repeating the same thing: when a user clicks on a navbar item, I close every other menu and open the one the user requested (and reset the sub-menus if there were any). When the user clicks on the document (or anywhere that is outside the menu they opened) I also close everything. I'm afraid my code, even though it works fine, might be a bit repetitive and somewhat resource-greedy (does the $(document).click(function(event) handler trigger even when there is nothing to close?). I'm looking for some recommendations, ideas, or models from more experienced devs than I am. Answer: Don't query the same nodes One good practice is to store nodes you use repeatedly. Every time you click anywhere on the page you query the same 8 items, which is expensive. Query them once and store them: var tabLang = $("#website-header #tab-lang"); then you can use the variable tabLang in the events: tabLang.hide("slide", { direction: "right" }, 300); Querying IDs IDs are supposed to be unique. It is a little faster (and shorter) to query the ID directly than to establish its ancestry, so: $("#website-header #tab-lang"); should be: $("#tab-lang"); Repetition Seeing as most of your functions close the tabs and toggle the clicked one, you could create a function that closes all tabs and call it from within the click event. 
Making sure the proper tab gets toggled instead of closed could be accomplished by checking the tab's state before it gets hidden, and hiding or showing accordingly: tabLang.click(function ( event ) { if (tabLang.is(':visible')) { hideTabs(); } else { hideTabs(); // we want the tab to be visible so we stop the running animation tabLang.stop(true, true); // and tell it to show itself tabLang.show("slide", { direction: "right" }, 300) } }); You would obviously need to repeat this for every clickable tab. Further improvements Seeing as you already have a click event on the document, you could move the logic for the tab clicks into that event and discriminate between the tabs by checking event.target. Keep in mind though that event.target is not a jQuery object. You could check equality like this: $(event.target).is(tabLang) // returns Boolean With this in mind you could also reduce the size of each click event if you handled the hide / show exception in the hideTabs() function, by calling it with the clicked element as an argument and doing some comparisons there. (Though that won't improve performance, just file size, which should not be necessary for your purposes.)
{ "domain": "codereview.stackexchange", "id": 24268, "tags": "javascript, jquery" }
Help me understand the phase of an FIR filter
Question: I'm going over an exercise of designing a simple FIR filter; I have the solution in front of me, but I struggle to understand some parts of it. I'm asked to design an FIR filter with only 3 coefficients. I have reached this form: $$ H(\theta ) = (2\alpha_0 \cos(\theta)+\alpha_1)e^{-j\theta} $$ where $\theta$ is the frequency variable (DTFT). The magnitude is easy: $$ \left | H(\theta ) \right |=\left | 2\alpha_0 \cos(\theta)+\alpha_1 \right | $$ But I struggle with the phase. It seems easy: $\phi(H(\theta))=-\theta$, but I guess this is wrong? Why? The solution that is given is: $$\phi(H(\theta)) = -\theta +\beta$$ $$\beta = 0 \text{ for } 2\alpha_0 \cos(\theta) + \alpha_1 > 0$$ $$\beta = \pi \text{ for } 2\alpha_0 \cos(\theta) + \alpha_1 < 0$$ EDIT: I think I just figured it out: if that expression is negative we can write it as positive with a phase of $\pi$. What happens if $\alpha_1$ is larger than $4\alpha_0$, so the expression is always positive? Does $\beta = 0$? Answer: Your frequency response is of the form $$H(\theta)=Ae^{-j\theta}=Me^{j\phi(\theta)}$$ where $A$ is a real-valued number that can take either a positive or a negative sign, $M\ge 0$ is the magnitude, and $\phi(\theta)$ is the phase. For positive $A$ you have $A=M$, and consequently $\phi(\theta)=-\theta$. However, when $A$ is negative you have $A=-M$ and $$H(\theta)=-Me^{-j\theta}=Me^{-j(\theta-\pi)}$$ because $e^{\pm j\pi}=-1$. So the additional value of $\pi$ in the phase occurs because of the sign changes in $A$. If $A$ satisfied $A\ge 0$, your solution would be correct. The latter is the case if $\alpha_1>2|\alpha_0|$.
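The sign-flip behaviour of the phase is easy to check numerically. A minimal sketch (the coefficient values are illustrative, chosen so the amplitude changes sign; they are not from the exercise):

```python
import numpy as np

# Illustrative coefficients (not from the exercise), chosen so the real
# amplitude A(theta) = 2*a0*cos(theta) + a1 changes sign on the grid.
a0, a1 = 1.0, 0.5
theta = np.linspace(0.1, np.pi - 0.1, 500)

A = 2 * a0 * np.cos(theta) + a1      # real amplitude
H = A * np.exp(-1j * theta)          # H(theta) = A * e^{-j*theta}
phase = np.angle(H)

# Phase should be -theta where A > 0 and -theta + pi where A < 0.
expected = np.where(A > 0, -theta, -theta + np.pi)
diff = np.angle(np.exp(1j * (phase - expected)))   # compare modulo 2*pi
assert np.allclose(diff, 0, atol=1e-9)
```

With $\alpha_0 = 1$, $\alpha_1 = 0.5$ the amplitude crosses zero near $\theta = \arccos(-1/4)$, and the computed phase jumps by exactly $\pi$ there, matching the given solution.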
{ "domain": "dsp.stackexchange", "id": 2418, "tags": "filters, linear-phase" }
Decidability of directed strongly connected graphs
Question: Consider the problem of determining if a directed graph is strongly connected. How can one phrase it as a language and prove that it's decidable? My Thoughts: To think of decidability, given a graph I could run DFS on each node and see that all the nodes are reachable from the original node. So it can be determined if the graph is strongly connected in polynomial time. But I am confused about the formal automata definition of decidability, which says a language is decidable if there is a Turing machine such that this language is the one exactly accepted by the Turing machine. Now, given the above logic of DFS, is that enough to prove decidability, or does one have to build a Turing machine? Answer: Yes, the DFS is an algorithm, and hence has an equivalent Turing machine. The definition of decidability, as you have said, is for the language to be exactly accepted by some Turing machine. The formal definition of the Turing machine accepting the language of strongly connected graphs is probably way too hard to write explicitly, so I guess your teacher meant for you to show an algorithm and state that an algorithm is equivalent to a Turing machine. Btw, as a side note, you can run DFS only twice in total instead of $n$ times (once per node). If you have free time, it's a good exercise in algorithms :)
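The twice-only DFS trick the answer's side note alludes to can be sketched as follows (a standard construction; the function names are my own): a directed graph is strongly connected iff every node is reachable from some fixed node $s$, and every node reaches $s$, i.e. every node is reachable from $s$ in the reversed graph.

```python
def reachable(adj, s):
    """Iterative DFS: the set of nodes reachable from s in adjacency-list graph adj."""
    seen, stack = {s}, [s]
    while stack:
        u = stack.pop()
        for v in adj.get(u, ()):
            if v not in seen:
                seen.add(v)
                stack.append(v)
    return seen

def is_strongly_connected(nodes, adj):
    nodes = list(nodes)
    if not nodes:
        return True
    s = nodes[0]
    if reachable(adj, s) != set(nodes):
        return False
    # Reverse every edge and search once more from the same node.
    radj = {}
    for u, vs in adj.items():
        for v in vs:
            radj.setdefault(v, []).append(u)
    return reachable(radj, s) == set(nodes)

# A 3-cycle is strongly connected; dropping one edge breaks it.
assert is_strongly_connected([0, 1, 2], {0: [1], 1: [2], 2: [0]})
assert not is_strongly_connected([0, 1, 2], {0: [1], 1: [2]})
```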
{ "domain": "cs.stackexchange", "id": 18119, "tags": "graphs, turing-machines, automata, connected" }
Prove finding a near clique is NP-complete
Question: An undirected graph is a near clique if adding an additional edge would make it a clique. Formally, a graph $G = (V,E)$ contains a near clique of size $k$ (where $k$ is a positive integer) if there exist $S \subseteq V$ with $|S| = k$ and $u,v \in S$ with $(u,v) \not\in E$, such that $S$ forms a clique in $(V,E \cup \{(u,v)\})$. How can I show finding a near clique of size $k$ in $G$ is NP-complete? Answer: I think the following reduction works (unless I missed something): Given $G=(V,E)$ and $k$, output $G'$, which is obtained from $G$ by adding $2$ new vertices $\{x,y\}$ that are each connected to all the vertices in $V$ (but not to each other). Finally, the output is to search for a $k+2$ near clique. Clearly this is polynomial (linear). If there exists a $k+2$ near clique in $G'$, then if $k+1$ of its vertices are contained in $V$, then $G$ has a $k$-clique. If fewer than $k+1$ vertices are from $V$, then at least $k$ of them are, since $x$ and $y$ are not connected to each other. But then, the $k$ remaining vertices in $G$ must form a clique. Conversely, if $G$ has a $k$-clique, then just add the two new vertices $x,y$ to get a $(k+2)$-near clique.
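The construction can be illustrated on a small instance with a brute-force checker. This sketch is mine (the helper names are hypothetical); it only demonstrates the reduction on toy inputs, not a polynomial-time decision procedure:

```python
from itertools import combinations

def has_near_clique(nodes, edges, k):
    """Brute force: does some S with |S| = k miss exactly one edge (a clique once it's added)?"""
    E = {frozenset(e) for e in edges}
    for S in combinations(nodes, k):
        missing = [p for p in combinations(S, 2) if frozenset(p) not in E]
        if len(missing) == 1:
            return True
    return False

def reduce_clique_to_near_clique(nodes, edges, k):
    """CLIQUE(G, k) -> NEAR-CLIQUE(G', k + 2): add x, y adjacent to all of V but not to each other."""
    x, y = "x_new", "y_new"  # fresh vertices (assumes these labels are unused in G)
    new_edges = list(edges) + [(x, v) for v in nodes] + [(y, v) for v in nodes]
    return list(nodes) + [x, y], new_edges, k + 2

# K3 on {0, 1, 2} has a 3-clique, so the reduced instance has a 5-near-clique.
nodes, edges = [0, 1, 2], [(0, 1), (0, 2), (1, 2)]
assert has_near_clique(*reduce_clique_to_near_clique(nodes, edges, 3))
```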
{ "domain": "cs.stackexchange", "id": 1167, "tags": "complexity-theory, graphs, np-complete" }
How do the intercept and slope calculated in linear regression relate to the output of lm?
Question: I have been looking at how to calculate the coefficients by hand, and the example produces $Y = 1,383.471380 + 10.62219546 * X$. However, the output shown for lm does not show these values anywhere. How do I reconcile the results of calculating B0 and B1 by hand with the output of summary(model)? Answer: The reason that the values you get from manually calculating the coefficients do not show up in the output from lm is that lm is using a different dataset (Anscombe's Quartet) than the one used for manually calculating the coefficients. In addition, the regression formula also differs between the two. If you use the exact same dataset and regression formula, the values in the Estimate column should match the coefficients you get from calculating them manually.
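The reconciliation is easy to demonstrate: compute B0 and B1 with the hand formulas and with a least-squares fit on the same data, and they agree to machine precision. A generic sketch with made-up data (not the Anscombe set the answer refers to):

```python
import numpy as np

# Made-up data, only for illustration
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.1, 4.3, 6.2, 8.4, 10.1])

# Hand formulas: B1 = sum((x - xbar)*(y - ybar)) / sum((x - xbar)^2),  B0 = ybar - B1*xbar
b1 = np.sum((x - x.mean()) * (y - y.mean())) / np.sum((x - x.mean()) ** 2)
b0 = y.mean() - b1 * x.mean()

# The same model fitted by least squares (what lm's Estimate column reports)
b0_ls, b1_ls = np.polynomial.polynomial.polyfit(x, y, 1)

assert np.isclose(b0, b0_ls) and np.isclose(b1, b1_ls)
```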
{ "domain": "datascience.stackexchange", "id": 11180, "tags": "linear-regression, linear-models" }
Why does Light get caught by Gravity, when both are travelling at the Speed of Light?
Question: Gravitational waves travel at the speed of light waves. So then, why do light waves get caught by gravitational waves (eg, black holes)? And if it is about the strength of the photon field, then why are light waves from a stronger light source (eg, a quasar) still caught by the gravity of black holes, instead of passing by? Answer: Let's get things straight. Gravity waves travel at the speed of light waves. Gravity waves are different from gravitational waves. It is gravitational waves you must mean. Gravitational waves: They are disturbances in the curvature of spacetime, generated by accelerated masses, that propagate as waves outward from their source at the speed of light. You ask: So then, why does Light waves get caught by Gravity waves (eg. Black holes)? Black holes are not gravity waves (see the link above for the definition of gravity waves). They do not emit gravitational waves unless accelerating, as happens with the merging of black holes. And if it is about the strength of Photon field, Photons are elementary particles in the standard model of particle physics. The photon field is part of the quantum field theory that describes elementary particle interactions. then why are Light waves from a stronger light source (eg. Quasar) still caught by gravity of Black holes on the way? If by "caught by gravity" you mean the gravitational lensing observed on light from stars passing close to heavy masses, it is explained by General Relativity. In general relativity, light follows the curvature of spacetime, hence when light passes around a massive object, it is bent. This means that the light from an object on the other side will be bent towards an observer's eye, just like an ordinary lens. In General Relativity the speed of light depends on the gravitational potential (aka the metric) and this bending can be viewed as a consequence of the light traveling along a gradient in light speed. 
You further ask: Why does Light get caught by Gravity, when both are travelling at Speed of Light? Gravity is not traveling at the speed of light. If the final quantization of gravity has gravitons, the force of gravity would still depend on the exchange of virtual gravitons, whereas the photons in the light are real particles interacting with the gravitational field, so gravity is not traveling at any speed. Light does not get caught; it interacts with the gravitational field it finds on the way, which field, in terms of General Relativity, defines the curvature of spacetime.
{ "domain": "physics.stackexchange", "id": 64342, "tags": "gravity, electromagnetic-radiation, speed-of-light, gravitational-waves" }
Random uniqueID for each ID SQL
Question: How can I improve this code? I wanted to know if there's a possibility to remove the for loop, and UPDATE a different uniqueID for each ID in my database. <?php $reponse = $bdd->prepare("SELECT MAX(ID) FROM tuto WHERE title=''"); $reponse->execute(); $IDnow = $reponse->fetch(PDO::FETCH_ASSOC); $IDnow = $IDnow['MAX(ID)']; for ($i=2 ; $i <= $IDnow ; $i++){ $reponse = $bdd->prepare("UPDATE tuto SET uniqueID=:uniqueID WHERE uniqueID='' AND title='' AND ID=:i"); $uniqueID = $donnees['uniqueID']; $int = rand(0,51); $a_z = "abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ"; $uniqueID = time().$a_z[$int]; $reponse->execute([':uniqueID' => $uniqueID, 'i' => $i]); } ?> Answer: Finally, I used RAND() to generate a "UniqueID" (and we could append some letters after it, or something else). Thanks to NerdyDev and Bart Friederichs for the solutions. <?php $reponse = $bdd->prepare("UPDATE tuto SET uniqueID=ROUND(RAND() * :tp) WHERE uniqueID='' AND title=''"); $tp = time(); $reponse->execute([':tp' => $tp]); ?>
{ "domain": "codereview.stackexchange", "id": 17976, "tags": "php, sql, mysql, random" }
Reversible tax calculator class
Question: I have a class that contains 2 properties of the same type. decimal NetAmount and decimal GrossAmount I would like to initialize it using either GrossAmount or NetAmount and based on the one specified calculate the second one. Which way is the most elegant and why? (parameter validation is omitted for brevity) 1 public class TaxedPrice { private TaxedPrice() { } public decimal NetAmount { get; private set; } public decimal GrossAmount { get; private set; } public static TaxedPrice FromNet(decimal netAmount, decimal taxRate) { return new TaxedPrice { NetAmount = decimal.Round(netAmount, 2, MidpointRounding.AwayFromZero), GrossAmount = decimal.Round(netAmount.ApplyTax(taxRate), 2, MidpointRounding.AwayFromZero) }; } public static TaxedPrice FromGross(decimal grossAmount, decimal taxRate) { return new TaxedPrice { GrossAmount = decimal.Round(grossAmount, 2, MidpointRounding.AwayFromZero), NetAmount = decimal.Round(grossAmount.RemoveTax(taxRate), 2, MidpointRounding.AwayFromZero) }; } } 2 public class TaxedPrice { public TaxedPrice(decimal netAmount, decimal grossAmount, decimal taxRate) { if (grossAmount != default) { GrossAmount = decimal.Round(grossAmount, 2, MidpointRounding.AwayFromZero); NetAmount = decimal.Round(grossAmount.RemoveTax(taxRate), 2, MidpointRounding.AwayFromZero); } else if (netAmount != default) { NetAmount = decimal.Round(netAmount, 2, MidpointRounding.AwayFromZero); GrossAmount = decimal.Round(netAmount.ApplyTax(taxRate), 2, MidpointRounding.AwayFromZero); } else { throw new InvalidOperationException($"Either {nameof(netAmount)} or {grossAmount} must be set."); } } public decimal NetAmount { get; } public decimal GrossAmount { get; } } 3 public class TaxedPrice { public enum Type { Gross, Net } public TaxedPrice(decimal amount, Type type, decimal taxRate) { if (type == Type.Gross) { GrossAmount = decimal.Round(amount, 2, MidpointRounding.AwayFromZero); NetAmount = decimal.Round(amount.RemoveTax(taxRate), 2, 
MidpointRounding.AwayFromZero); } else if (type == Type.Net) { NetAmount = decimal.Round(amount, 2, MidpointRounding.AwayFromZero); GrossAmount = decimal.Round(amount.ApplyTax(taxRate), 2, MidpointRounding.AwayFromZero); } } public decimal NetAmount { get; } public decimal GrossAmount { get; } } 4 public class TaxedPrice { public TaxedPrice(decimal amount, bool fromGross, decimal taxRate) { if (fromGross) { GrossAmount = decimal.Round(amount, 2, MidpointRounding.AwayFromZero); NetAmount = decimal.Round(amount.RemoveTax(taxRate), 2, MidpointRounding.AwayFromZero); } else { NetAmount = decimal.Round(amount, 2, MidpointRounding.AwayFromZero); GrossAmount = decimal.Round(amount.ApplyTax(taxRate), 2, MidpointRounding.AwayFromZero); } } public decimal NetAmount { get; } public decimal GrossAmount { get; } } How it looks like from caller's side: // 1 var taxedPrice = TaxedPrice.FromNet(2.123m, 0.23m); // 2 var taxedPrice = new TaxedPrice(2.123m, default, 0.23m); // uses the first one to calculate the second one var taxedPrice2 = new TaxedPrice(2.123m, 1.11m, 0.23m); // uses the first one to calculate the second one var taxedPrice3 = new TaxedPrice(default, 1.11m, 0.23m); // uses the second one to calculate the first one // 3 var taxedPrice = new TaxedPrice(2.123m, TaxedPrice.Type.Net, 0.23m); // 4 var taxedPrice = new TaxedPrice(2.123m, false, 0.23m); Extensions for tax: public static class TaxExtensions { public static decimal ApplyTax(this decimal netPrice, decimal taxRate) { return netPrice * (taxRate + 1); } public static decimal RemoveTax(this decimal grossPrice, decimal taxRate) { return grossPrice / (taxRate + 1); } } Mapping perspective In my upper layer I pass those prices in POCOs/DTOs like: public class PriceDTO { public decimal NetAmount { get; set; } public decimal GrossAmount { get; set; } } And I have to check there as well which one was passed to decide from which to calculate. 
So in case of 1 the mapping would look like: if (priceDto.GrossAmount != default) return TaxedPrice.FromGross(priceDto.GrossAmount, taxRate); else if (priceDto.NetAmount != default) return TaxedPrice.FromNet(priceDto.NetAmount, taxRate); else // error In case of 2 (no need to check in the mapping code) return new TaxedPrice(priceDto.NetAmount, priceDto.GrossAmount, taxRate) 3 - there's a check as well 4 - same as 1 and 3 And I agree this could be a struct instead. Answer: I'd personally go for option one as it's by far the simplest, as the names FromNet and FromGross give a clear indication of what the code is doing. That clarity is lost in the other examples. However, there is room to improve things. Both methods perform the same basic calculation, resulting in four near-identical expressions. So DRY can be applied here: public class TaxedPrice { private TaxedPrice() { } public decimal NetAmount { get; private set; } public decimal GrossAmount { get; private set; } public static TaxedPrice FromNet(decimal netAmount, decimal taxRate) { return new TaxedPrice { NetAmount = ApplyRounding(netAmount), GrossAmount = ApplyRounding(netAmount.ApplyTax(taxRate)) }; } public static TaxedPrice FromGross(decimal grossAmount, decimal taxRate) { return new TaxedPrice { GrossAmount = ApplyRounding(grossAmount), NetAmount = ApplyRounding(grossAmount.RemoveTax(taxRate)) }; } private static decimal ApplyRounding(decimal rawValue) => decimal.Round(rawValue, 2, MidpointRounding.AwayFromZero); }
{ "domain": "codereview.stackexchange", "id": 32182, "tags": "c#, comparative-review, finance" }
Finding the item with the minimal value in a sequence
Question: This is an algorithm to find the item with the minimal value in a sequence. I've written a function called minimal to implement this algorithm. Here is my script: def minimal(*args): """Returns the item with the minimal value in a sequence. """ if len(args) == 1: sequence = args[0] else: sequence = args index = 0 successor = index + 1 while successor < len(sequence): if sequence[index] > sequence[successor]: index = successor successor += 1 return sequence[index] Answer: Redundant use of indexing Since you only need the value, not the index, you could store the current minimal value instead of its index. This would allow a for loop, like for item in sequence: if item < current_minimum: current_minimum = item You ditch the overhead of array access and of tracking the index, and, most importantly, the code looks cleaner (although that's subjective). No support for generators Generators provide items one at a time, instead of wasting memory on all of them at once. Your code would force generation of all items before the first of them is processed. This is not really good. Well, if you don't use indexing then you don't need len, and generators would work just fine; your if len(args) == 1: clause is good in this regard. Confusing behaviour with a single argument However, the special treatment for a lone argument is not obvious from the signature, and that's not really good. If one does not know about this behaviour, one can easily use stuff like min_element = minimal(*some_list) which would generally work but will do an unexpected thing if some_list had only one element. Yes, it seems extremely useful, but also confusing. Maybe limit it to one argument only and expect users to do stuff like minimal((1,2))? I'm really not sure. It seems useful enough that the built-in min works like that, but generally speaking decisions like this should not be taken lightly. Also, take a closer look at what min does: there's also the option of using a key function and providing a default value.
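The rewrite the answer describes might look like the following (one possible sketch; it keeps a single-sequence signature, as the answer suggests, so generators work):

```python
def minimal(sequence):
    """Return the smallest item, consuming the sequence once (so generators work too)."""
    iterator = iter(sequence)
    try:
        current_minimum = next(iterator)   # first item seeds the minimum
    except StopIteration:
        raise ValueError("minimal() arg is an empty sequence") from None
    for item in iterator:
        if item < current_minimum:
            current_minimum = item
    return current_minimum

assert minimal([3, 1, 2]) == 1
assert minimal(x * x for x in range(-3, 4)) == 0   # generator input
```

Raising ValueError on an empty input mirrors what the built-in min does without a default argument.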
{ "domain": "codereview.stackexchange", "id": 21683, "tags": "python, beginner, algorithm, python-3.x, reinventing-the-wheel" }
Do high/low pass lenses exist?
Question: For an experiment I will hopefully be soon conducting at Johns Hopkins I need two different lenses. The first needs to allow all wavelengths above 500 nm to pass (thus a high pass filter) and cut off everything else. The second needs to allow all wavelengths below 370 nm to pass (thus a low pass filter) and cut off everything else. My knowledge of optics is middling. I know that good old glass cuts off UV light, but I was hoping for something more specific. Does anyone know of the theory necessary to "tune" materials to make such filters? Truth be told, I'm an experimentalist, so simply giving me a retail source that has such lenses would get me to where I need to go! But learning the theory would be nice as well. Thanks, Sam Answer: There are many ways to do this. Which option you choose depends on what degree of performance you require, and how much money you're willing to spend. First of all, you should understand that, while you could apply a wavelength-selective coating to a lens, this would much more commonly and cheaply be done with a wavelength filter separate from the lens itself. Now, the cheapest way to do this would be with a glass filter which absorbs short or long wavelengths selectively. There is a wide range of these available from Schott Glass. I use these commonly, and I've never had trouble. Most vendors will be happy to produce custom shapes, thicknesses, etc. The downside of filter glass is that it doesn't have a particularly sharp cutoff between the passband and the stopband. For that, you will need a dielectric coating engineered to your specifications. For that, I would look at CVI or Newport, although there are other vendors out there. There may be something you can use in their catalogs, but custom orders are normal for the optical manufacturing industry, so don't hesitate to call up a sales engineer. 
In my experience, sales people in this industry are very well educated on their products, or will direct you to an engineer who can tell you exactly what they can produce for you. Again, there are other vendors you could look at, but these are the ones I would go to first. At the very least, looking at their catalogs will give you an idea of what it is you are really looking for.
{ "domain": "physics.stackexchange", "id": 6366, "tags": "optics, experimental-physics" }
Before, while, and after lifting an object to some height above the ground, what happens to its kinetic energy?
Question: If you were to lift an object above the ground at a constant speed, you would have to exert a force equal in magnitude and opposite in direction to gravity onto it. However, wouldn't these forces cancel out and thus not affect the object's position at all? Also, if this object was elevated at a constant speed, I understand that it would gain in gravitational potential energy since its height above the ground is increasing. But what about its kinetic energy? Although there is no net force in the direction of the object's motion (constant speed, so equilibrium), the object is still moving, so wouldn't it possess some amount of kinetic energy? If so, what becomes of it once the object stops at some height, h? Answer: To understand why the answer they give in the physics textbook makes sense, there's one slightly counter-intuitive thing to remember: if you try to lift something up "at a constant speed" the force you apply is not constant. We humans lift objects at a constant speed so intuitively that it is easy for us to get the illusion that we're applying a constant force, but that's not actually the case. If an object is stationary, and you apply a force exactly equal to the force of gravity but in the upwards direction, the object will not move. This should make sense if you think about it. The "normal force" from the ground holding up the object is exactly equal to the force of gravity if the object is lying there on the ground. To lift it off the ground, you have to apply more upward force than the force of gravity so that there is a net upward force. Now while you're doing this, the object is not moving at a constant speed. It is accelerating, exactly as predicted by F=ma. After a short while, its upward speed is exactly the "constant speed" you wanted, and you intuitively decrease the force you are exerting on the object until the force you are applying matches that of gravity exactly. 
In most intuitive cases, this acceleration occurs quickly, so we get tempted to ignore it, but it's absolutely there. In other situations, it's not so easily ignored. Consider the case of large cranes: When you have a crane lifting something as massive as this, it turns out that it's not effective to have the operator's joystick controlling the position of the crane up and down. They actually control the force that the crane applies when lifting the object. They have to be aware of the acceleration phases that we often intuitively overlook. They will actually put more force on the object to get it moving upward and then decrease their force so that it doesn't accelerate up out of control. When you reach your height, h, you stop. To stop, what you actually do is decrease your force to less than that of gravity, and it decelerates. Like before, the process is not instantaneous, but on human scales it often occurs fast enough that we don't pay attention to it. Once it has stopped moving upward, we once again apply a force exactly equal to that of gravity to keep it stationary. So during the initial acceleration period, your object is gaining kinetic energy because its velocity is increasing. During the middle period, where your force matches gravity's exactly, its velocity stays the same, so its kinetic energy remains the same. Any work you are doing (force * distance) is going into increasing its potential energy. Near the top, the object decelerates back to motionless. During this time, all of the kinetic energy you built up during the acceleration period is converted into potential energy (along with any potential energy coming from the fact that you're still applying force while it's decelerating, it's just less force than before).
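The energy bookkeeping described above can be checked numerically. The sketch below (Python; the mass, force offsets, and phase durations are illustrative assumptions, not values from the answer) drives a 1 kg object through the three phases — extra force to accelerate, matched force at constant speed, reduced force to decelerate — and confirms that the work done by the lifter equals the kinetic plus potential energy gained:

```python
# Illustrative assumed numbers: m = 1 kg, g = 9.8 m/s^2, and a +/- 2 N
# offset from gravity during the acceleration/deceleration phases.
g, m, dt = 9.8, 1.0, 1e-4

def applied_force(t):
    if t < 0.5:        # phase 1: push harder than gravity -> accelerate
        return m * g + 2.0
    elif t < 2.0:      # phase 2: balance gravity exactly -> constant speed
        return m * g
    elif t < 2.5:      # phase 3: push less than gravity -> decelerate
        return m * g - 2.0
    return m * g       # afterwards: hold stationary at the final height

t, v, h, work = 0.0, 0.0, 0.0, 0.0
while t < 2.5:
    F = applied_force(t)
    work += F * v * dt           # power delivered by the hand = F * v
    v += (F - m * g) / m * dt    # acceleration = net force / mass
    h += v * dt
    t += dt

kinetic = 0.5 * m * v**2         # ~0: the deceleration phase removed it all
potential = m * g * h            # ~all of the work ends up here
```

At the end of phase 3 the object is (numerically) at rest at about h = 2 m, so essentially all of the roughly 19.6 J of work has become potential energy, exactly as the answer argues.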
{ "domain": "physics.stackexchange", "id": 33267, "tags": "energy, kinematics" }
What's the difference between life expectancy of cigarette smoker and general population?
Question: Some say that smoking cigarettes will shorten lifespan. By how many years is the lifespan of a typical smoker shortened? What are the common cause(s) of death among smokers? Are there any known statistics for cigarette smoking and life expectancy? Answer: The difference in life expectancy between smokers and non-smokers appears to be at least 10 years on average, in a survey of American adults between 1997 and 2004. The same paper lists causes of death (higher among smokers than non-smokers, as measured by hazard ratio), although this is not exhaustive: lung cancer, other cancers, ischemic heart disease, stroke, other vascular disease, and respiratory disease have an adjusted hazard ratio of 1.7 or more, with 94% of lung cancer deaths attributable to smoking among female smokers, and 93% to smoking among male smokers. If you have access, you will probably find Table 2 (hazard ratios) of interest. Figure 2 shows the survival probability:
{ "domain": "biology.stackexchange", "id": 916, "tags": "epidemiology, smoking" }
a tiny rpg game using Phaser 3
Question: I wrote a tiny game using Phaser 3, here is the code. var total_attack = 0; var isMagicReady = false; class BootScene extends Phaser.Scene { constructor() { super({ key: 'BootScene' }); console.log(total_attack); } preload() { this.load.path = 'https://raw.githubusercontent.com/albert10jp/web_rpg/main/btn_cast_spell/assets/'; this.load.spritesheet('cooldown_sheet', 'cooldown_sheet.png', { frameWidth: 48, frameHeight: 48 }); this.load.image('magicAttack', 'magicAttack.png'); this.load.atlas('bolt', 'bolt_atlas.png', 'bolt_atlas.json'); // load brawler (player) this.load.spritesheet('brawler', 'brawler48x48.png', { frameWidth: 48, frameHeight: 48 }); // load frog (enemies) this.load.path = 'https://raw.githubusercontent.com/albert10jp/web_rpg/main/btn_cast_spell/assets/frog/'; for (var i = 1; i < 3; i++) { this.load.image("frog_attack" + i, "frog" + i + ".gif"); } } setupAnimation() { this.anims.create({ key: 'frog_idle', frames: [ { key: 'frog_attack1' }, { key: 'frog_attack2' }, { key: 'frog_attack1' }, ], frameRate: 2, repeat: 1 }); this.anims.create({ key: 'player_attack', frames: this.anims.generateFrameNumbers( 'brawler', { frames: [30, 31, 30] }), frameRate: 2, repeat: 0, }); this.anims.create({ key: 'magic_effect', frames: this.anims.generateFrameNames('bolt', { // frames: [0, 1, 2, 3, 4, 5, 6, 7] // prefix: 'bolt_ball_', start: 1, end: 10, zeroPad: 4 prefix: 'bolt_sizzle_', start: 1, end: 10, zeroPad: 4 }), frameRate: 12, repeat: 0, }); this.anims.create({ key: 'cooldown_animation', frames: this.anims.generateFrameNumbers('cooldown_sheet', { start: 0, end: 15 }), frameRate: 6, repeat: 0, repeatDelay: 2000 }); } animComplete(animation) { if (animation.key === 'player_attack') { this.bolt.visible = true; this.bolt.play('magic_effect'); this.magicAttackBtn.play('cooldown_animation'); } } setupBtn(btn_x, btn_y) { this.add.image(btn_x, btn_y, 'magicAttack').setScale(.5) this.magicAttackBtn = this.add.sprite(btn_x, btn_y, 
'cooldown_sheet').setFlipX(false).setScale(1) this.magicAttackBtn.on('animationcomplete', () => { isMagicReady = true; if (total_attack < 3) { this.noti.setText('Magic is ready now!'); } }); let circle2 = this.add.circle(btn_x, btn_y, 150, 0x000000, 0).setScale(.1) // circle.lineStyle(10, 0xffffff, 0.9); circle2.setStrokeStyle(50, 0x000000, 1); this.magicAttackBtn.setInteractive(); this.magicAttackBtn.on('pointerdown', () => { if (total_attack < 3) { if (isMagicReady) { this.player.play('player_attack'); this.player.on('animationcomplete', this.animComplete, this); this.noti.setText(''); isMagicReady = false; // this.perform(); total_attack++; } else { this.noti.setText('Magic is not ready yet!'); } } }, this); this.noti = this.add.text(this.magicAttackBtn.x + 30, this.magicAttackBtn.y - 10, ''); this.noti.setFontSize(12); } setupBattle() { this.magicAttackBtn.play('cooldown_animation'); this.enemies.forEach((enemy) => { enemy.play('frog_idle', true); enemy.setAlpha(1); enemy.setTexture('frog_attack1'); }); // this.noti.setText(''); total_attack = 0; isMagicReady = false; } createEnemy(x, y) { let enemy = this.add.sprite(x, y, 'frog_attack1').setOrigin(0, 0); enemy.setScale(.6); enemy.flipX = true; this.enemies.push(enemy); } setupBolt() { this.bolt.setScale(2); this.bolt.visible = false; this.bolt.on('animationcomplete', () => { this.bolt.visible = false; this.enemies.forEach((item) => { item.stop(); item.setTexture('frog_attack1'); if (total_attack === 1) { item.setAlpha(.7); } else if (total_attack === 2) { item.setAlpha(.3); } else if (total_attack === 3) { item.setAlpha(0); this.noti.setText('Congratulations!\nMission complete!'); } }); }); } create() { var offsetX = 300 / 2.5; var offsetY = 220 / 2.5 - 15; var incrementX = 25; var incrementY = 15; let btn_x = 20, btn_y = 180 this.setupBtn(btn_x, btn_y); this.player = this.physics.add.sprite(100 / 2.5, offsetY, 'brawler', 30).setOrigin(0); this.player.setScale(.6); this.player.setFlipX(true); this.enemies = 
[]; this.setupAnimation(); for (let y = 0; y < 3; y++) { for (let x = 0; x < 5; x++) { let posx = x * incrementX + offsetX; let posy = y * incrementY + offsetY; this.createEnemy(posx, posy); } } this.bolt = this.add.sprite(offsetX + 3 * incrementX, offsetY + 2 * incrementY, 'bolt'); this.setupBolt(); this.setupBattle(); } } var config = { width: 320, height: 240, zoom: 2.5, // zoom:2, physics: { default: 'arcade', arcade: { gravity: { y: 0 }, debug: false // set to true to view zones } }, backgroundColor: 0x000000, scene: [BootScene] } var game = new Phaser.Game(config); <script src="https://cdn.jsdelivr.net/npm/phaser@3.55.2/dist/phaser.js"></script> The playable code above construct a battle scene where a hero fights against a bunch of enemies. The hero only has one action to choose, cast a spell. Once the hero casts a spell, a cool-down animation starts. Only the cool-down animation finishes, the hero is allowed to cast the spell again. The code works as expected though I'm not sure I use the component efficiently. Answer: Review The code looks okay. Indentation looks consistent. The code doesn't seem very redundant. Some of the methods are a bit on the long side - e.g. setupAnimation(), setupBtn(), etc. There are just a few suggestions - see below. global variables The variables total_attack and isMagicReady are used globally. While it may seem unlikely, it may be useful someday to have multiple BootScene instances - in that case those variables could be made instance variables, or else static variables on BootScene so they wouldn't collide with variables in other namespaces. It is recommended to avoid global variables. config variables In the declaration of config there is: var config = { width: 320, height: 240, zoom: 2.5, And then within BootScene::create(): var offsetX = 300 / 2.5; var offsetY = 220 / 2.5 - 15; Perhaps instead of using 2.5 it makes sense to use this.scale.zoom or a similar variable. 
let vs const It is wise to default to using const, unless you know re-assignment is necessary - then use let. This will help avoid accidental re-assignment and other bugs.
{ "domain": "codereview.stackexchange", "id": 42837, "tags": "javascript, game, ecmascript-6, phaser.io" }
With a list of opcodes for a virtual machine, is it possible to assign a number to every possible program of size n that can be run on that machine?
Question: Say you have some virtual machine with < 100 numbered opcodes that fully specify its behaviour. Is there an easy way to order and enumerate all possible programs of less than n instructions, the result being that a particular number refers to a particular program? Preferably with natural numbers and short programs at the beginning. As this list would be combinatorially large, it would be helpful to assign these numbers without running the programs or loading them into memory. Answer: There is no need to assign a number. The program already is a number. If you have $m$ opcodes and your program has $n$ instructions, then it can be interpreted as an $n$-"digit" number written in base $m$.
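A sketch of this in Python (one assumption beyond the answer: bijective base-$m$ numbering is used so that variable-length programs — "less than n instructions" — map one-to-one onto the natural numbers, with shorter programs receiving smaller numbers):

```python
def program_to_number(program, m):
    """Encode a sequence of opcodes (each in 0..m-1) as a natural number
    using bijective base-m: the empty program is 0, the m one-instruction
    programs are 1..m, the m^2 two-instruction programs follow, and so on."""
    n = 0
    for op in program:
        n = n * m + op + 1   # the +1 makes the mapping bijective
    return n

def number_to_program(n, m):
    """Inverse of program_to_number: recover the opcode list from a number."""
    program = []
    while n > 0:
        n, op = divmod(n - 1, m)
        program.append(op)
    return program[::-1]
```

With plain (non-bijective) base-$m$ the answer's observation holds for a fixed length $n$, but programs of different lengths starting with opcode 0 would collide; the +1 offset above avoids that while keeping short programs at the start of the enumeration.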
{ "domain": "cs.stackexchange", "id": 18083, "tags": "programming-languages" }
What happens to the food you accidentally aspirate?
Question: I'm well aware of the health effects of aspirating solid food and liquids, but I'm interested in the reaction of the body, on the biological level, to the foreign body in our lungs. After I almost aspirated corn, I started to wonder: what does the body do when food gets into our lungs? Will it be eventually absorbed? Destroyed by our white blood cells? Or just lie there forever until it fully decomposes? The body has mechanisms to prevent food from getting into the lungs, so the body is aware that eventually some food will get into the lungs. As a result, it makes sense to believe that our body would have a mechanism to deal with such an issue if all other mechanisms fail (coughing, etc.), yet I couldn't find anything on Google. Answer: People can drown because of aspirated food. If they don't, it can still cause diseases, for example pneumonia. In extreme cases a tree can grow in the lungs. There are other aspiration/inhalation-related diseases like silicosis or asbestos lung cancer. So it depends on the composition of the object (or liquid or powder) and other factors whether it causes a disease or not. I did not find anything about what exactly happens with these objects in the lungs. Probably the lung tries to get rid of them mechanically; if there is no success in that, then they cause a local inflammation, which can lead to diseases if it becomes chronic and/or the object contains pathogens. Common presenting symptoms (information available in 36 cases) included dyspnea (14), fever (9), and cough (6). A history of recurrent pneumonia was present in 9. 2007 - Pulmonary Disease due to Aspiration of Food and Other Particulate Matter: A Clinicopathologic Study of 59 Cases Diagnosed on Biopsy or Resection Specimens The annual overall inpatient cost associated with pediatric bronchial foreign-body aspiration is approximately $12.8 million. Combined, the rate of death or anoxic brain injury associated with pediatric foreign body is approximately 4%. 
2014 - The national cost burden of bronchial foreign body aspiration in children Gastric aspiration is a high-risk condition for lung injury. Consequences range from subclinical pneumonitis to respiratory failure, with fibrosis development in some patients. Little is known about how the lung repairs aspiration-induced injury. 2015 - Resolution of Lung Injury after a Single Event of Aspiration: A Model of Bilateral Instillation of Whole Gastric Fluid Aspiration is a common but underrecognized clinicopathologic entity, with varied radiographic manifestations. Aspiration represents a spectrum of diseases, including diffuse aspiration bronchiolitis, aspiration pneumonitis, airway obstruction by foreign body, exogenous lipoid pneumonia, interstitial fibrosis, and aspiration pneumonia with or without lung abscess formation. Many patients who aspirate do not present with disease, suggesting that pathophysiology is related to a variety of factors, including decreased levels of consciousness, dysphagia, impaired mucociliary clearance, composition of aspirate, and impaired host defenses. 2014 - Aspiration-Related Lung Diseases 2012 - Aspiration and Infection in the Elderly 2015 - Pediatric foreign body aspiration: A nidus for Aspergillus colonization 2012 - All that wheezes is not asthma: a 6-year-old with foreign body aspiration and no suggestive history
{ "domain": "biology.stackexchange", "id": 4637, "tags": "human-biology, food, lungs" }
Determining the velocity function for a particle on a rough inclined plane
Question: This is a problem from the problem book by IE Irodov (Problem 1.106). We have a rough inclined plane with coefficient of friction $\mu = \tan\alpha$. A particle is kept on the incline and projected with an initial velocity $v_0$ 'sideways' to the incline, that is into the plane of the figure shown below. A coordinate system is set up in the problem with the x-axis pointing down the incline and the y-axis along the initial velocity vector (the origin is at the initial position of the particle). We need to figure out the magnitude of the velocity as a function of the angle $\phi$ its vector makes with the x-axis. I did the problem in the following manner: We may fix the velocity vector at the origin and resolve components of the forces acting on the particle as shown: $v$ is the velocity at an arbitrary time, $F_2$ is the frictional force ($\mu mg\cos\alpha=mg\sin\alpha$), and $F_1$ is the component of gravity along the incline ($mg\sin\alpha$). Note that $\angle DAB$ is $\phi$. Now, we can figure out the tangential acceleration, $$-\frac{dv}{dt} = g\sin\alpha(1-\cos\phi)\tag1$$ and the radial acceleration $$v\frac{d\phi}{dt}=-g\sin\alpha\sin\phi\tag2$$ We can introduce a change of variables using $(2)$, $$\frac{dv}{dt} = \frac{dv}{d\phi}\cdot\frac{d\phi}{dt} = \frac{-g\sin\alpha\sin\phi}{v}\cdot\frac{dv}{d\phi}$$ We can substitute this into $(1)$ to get, $$\frac{dv}{v} = (\csc\phi-\cot\phi)d\phi$$ Integrating yields a solution of the form, $$\boxed{v=v_0e^{f(\phi)}}$$ where $f$ is a function of $\phi$ But, when I saw the solution, the problem was solved quite simply: we can see that the acceleration along the x-axis is $g\sin\alpha(1-\cos\phi)$, which is equal in magnitude to the tangential acceleration. Thus, $v$ and $v_x$ will at any point differ by the same constant value $C$ $$v-v_x = C$$ This $C$ can be easily determined from the initial conditions. This yields, $$v = \frac{v_0}{1+\cos\phi}$$ Where did I go wrong? 
EDIT: This is another figure I'm including just to clarify the situation presented in the problem. Here are different views of the incline for an arbitrary $\phi$. Answer: I don't think you went wrong anywhere; you just didn't find the solution as quickly as the book. Using an integral table, you see that $$ \int\cot\phi\,d\phi= \ln|\sin\phi|+C \quad,\quad\int\csc\phi\,d\phi= \ln|\csc\phi-\cot\phi|+C.$$ So, integrating $\frac{dv}{v}=(\csc\phi-\cot\phi)d\phi$ and ignoring the absolute values gives $$ v=Ce^{\ln (\csc\phi-\cot\phi)-\ln(\sin\phi)}=C\frac{\csc\phi-\cot\phi}{\sin\phi} $$ $$= C\frac{1-\cos\phi}{\sin^2\phi}= C\frac{1-\cos^2\phi}{(1+\cos\phi)\sin^2\phi}=C\frac{1}{1+\cos\phi}.$$ Your solution is equivalent to the one in the book; you just need some simplification steps which weren't quite obvious.
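One can also confirm numerically that both routes land on the same function: integrating $\frac{dv}{d\phi}=v(\csc\phi-\cot\phi)$ from the initial condition $\phi_0=\pi/2$, $v=v_0$ (the initial velocity perpendicular to the x-axis, so $C=v_0$) reproduces $v_0/(1+\cos\phi)$. A quick sketch in Python with a fixed-step RK4 integrator:

```python
import math

def dv_dphi(phi, v):
    # dv/dphi = v * (csc(phi) - cot(phi)) = v * (1 - cos(phi)) / sin(phi),
    # valid for 0 < phi < pi where sin(phi) > 0
    return v * (1.0 - math.cos(phi)) / math.sin(phi)

def v_numeric(phi_end, v0=1.0, steps=2000):
    """Integrate the ODE from phi = pi/2 (sideways launch) to phi_end."""
    phi, v = math.pi / 2.0, v0
    h = (phi_end - phi) / steps
    for _ in range(steps):           # classical fourth-order Runge-Kutta
        k1 = dv_dphi(phi, v)
        k2 = dv_dphi(phi + h / 2, v + h * k1 / 2)
        k3 = dv_dphi(phi + h / 2, v + h * k2 / 2)
        k4 = dv_dphi(phi + h, v + h * k3)
        v += h * (k1 + 2 * k2 + 2 * k3 + k4) / 6
        phi += h
    return v

def v_closed(phi, v0=1.0):
    """The book's closed-form answer with phi0 = pi/2, i.e. C = v0."""
    return v0 / (1.0 + math.cos(phi))
```

For any $\phi$ between $\pi/2$ and $\pi$ the two agree to high accuracy, which is just the asker's $v_0 e^{f(\phi)}$ in disguise.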
{ "domain": "physics.stackexchange", "id": 20728, "tags": "homework-and-exercises, newtonian-mechanics" }
a simple implementation of unix2dos for windows
Question: On linux there is the utility called unix2dos which converts UNIX EOLs(\n) to DOS EOLs(\r\n). However on windows there is no such tool so as a result I decided to make one. unix2dos.c: #include <windows.h> #include <stdint.h> #include <stddef.h> #define chunksize (1 << 13) #define nullptr ((void *)0) uint8_t buffer[chunksize + 1] = { 0 }; int64_t newline_count(HANDLE filehandle) { DWORD bytes_read = 0; int64_t result = 0; do { if (ReadFile(filehandle, buffer + 1, chunksize, &bytes_read, nullptr) == 0) { WriteConsoleW(GetStdHandle(STD_ERROR_HANDLE), L"Error: could not read file", 26, nullptr, nullptr); ExitProcess(GetLastError()); } if (SetFilePointerEx(filehandle, (LARGE_INTEGER) { .QuadPart = -1 }, nullptr, SEEK_CUR) == 0) { WriteConsoleW(GetStdHandle(STD_ERROR_HANDLE), L"Error: could not read file", 26, nullptr, nullptr); ExitProcess(GetLastError()); } if (ReadFile(filehandle, buffer, 1, nullptr, nullptr) == 0) { WriteConsoleW(GetStdHandle(STD_ERROR_HANDLE), L"Error: could not read file", 26, nullptr, nullptr); ExitProcess(GetLastError()); } if (SetFilePointerEx(filehandle, (LARGE_INTEGER) { .QuadPart = -1 }, nullptr, SEEK_CUR) == 0) { WriteConsoleW(GetStdHandle(STD_ERROR_HANDLE), L"Error: could not read file", 26, nullptr, nullptr); ExitProcess(GetLastError()); } for (uint8_t *start = buffer + 1; start != buffer + 1 + (int64_t)bytes_read; ++start) { if (start[0] == '\n' && start[-1] != '\r') ++result; } } while (bytes_read == chunksize); return result; } void unix2dos1(wchar_t const *const src, wchar_t const *const dst) { HANDLE const dst_file = CreateFileW(dst, GENERIC_ALL, 0, nullptr, OPEN_ALWAYS, FILE_ATTRIBUTE_NORMAL, nullptr); if (dst_file == INVALID_HANDLE_VALUE) { WriteConsoleW(GetStdHandle(STD_ERROR_HANDLE), L"Error: could not open ", 22, nullptr, nullptr); WriteConsoleW(GetStdHandle(STD_ERROR_HANDLE), dst, lstrlenW(dst), nullptr, nullptr); ExitProcess(GetLastError()); } HANDLE const src_file = CreateFileW(src, GENERIC_READ, 0, nullptr, 
OPEN_EXISTING, FILE_ATTRIBUTE_NORMAL, nullptr); if (src_file == INVALID_HANDLE_VALUE) { WriteConsoleW(GetStdHandle(STD_ERROR_HANDLE), L"Error: could not open ", 22, nullptr, nullptr); WriteConsoleW(GetStdHandle(STD_ERROR_HANDLE), src, lstrlenW(src), nullptr, nullptr); ExitProcess(GetLastError()); } int64_t invalid_newline_count = newline_count(src_file); LARGE_INTEGER end_locaition = { 0 }; if (GetFileSizeEx(src_file, &end_locaition) == 0) { CloseHandle(src_file); CloseHandle(dst_file); WriteConsoleW(GetStdHandle(STD_ERROR_HANDLE), L"Error: could not get the size of ", 33, nullptr, nullptr); WriteConsoleW(GetStdHandle(STD_ERROR_HANDLE), src, lstrlenW(src), nullptr, nullptr); ExitProcess(GetLastError()); } if (SetFilePointerEx(dst_file, (LARGE_INTEGER) { .QuadPart = invalid_newline_count + end_locaition.QuadPart }, &end_locaition, FILE_BEGIN) == 0) { CloseHandle(src_file); CloseHandle(dst_file); WriteConsoleW(GetStdHandle(STD_ERROR_HANDLE), L"Error: could not resize ", 24, nullptr, nullptr); WriteConsoleW(GetStdHandle(STD_ERROR_HANDLE), dst, lstrlenW(dst), nullptr, nullptr); ExitProcess(GetLastError()); } if (SetEndOfFile(dst_file) == 0) { CloseHandle(dst_file); CloseHandle(src_file); WriteConsoleW(GetStdHandle(STD_ERROR_HANDLE), L"Error: could not resize ", 24, nullptr, nullptr); WriteConsoleW(GetStdHandle(STD_ERROR_HANDLE), dst, lstrlenW(dst), nullptr, nullptr); ExitProcess(GetLastError()); } HANDLE const dst_memory_mapped_file = CreateFileMappingW( dst_file, nullptr, PAGE_READWRITE, 0, 0, nullptr ); if (dst_memory_mapped_file == nullptr) { CloseHandle(src_file); CloseHandle(dst_file); WriteConsoleW(GetStdHandle(STD_ERROR_HANDLE), L"Error: could not create file mapping object for ", 48, nullptr, nullptr); WriteConsoleW(GetStdHandle(STD_ERROR_HANDLE), dst, lstrlenW(dst), nullptr, nullptr); ExitProcess(GetLastError()); } HANDLE const src_memory_mapped_file = CreateFileMappingW( src_file, nullptr, PAGE_READONLY, 0, 0, nullptr ); if (src_memory_mapped_file == nullptr) 
{ CloseHandle(dst_memory_mapped_file); CloseHandle(src_file); CloseHandle(dst_file); WriteConsoleW(GetStdHandle(STD_ERROR_HANDLE), L"Error: could not create file mapping object for ", 48, nullptr, nullptr); WriteConsoleW(GetStdHandle(STD_ERROR_HANDLE), src, lstrlenW(src), nullptr, nullptr); ExitProcess(GetLastError()); } uint8_t *const src_file_buffer = MapViewOfFile(src_memory_mapped_file, FILE_MAP_READ, 0, 0, end_locaition.QuadPart - invalid_newline_count); if (src_file_buffer == nullptr) { CloseHandle(dst_memory_mapped_file); CloseHandle(src_memory_mapped_file); CloseHandle(src_file); CloseHandle(dst_file); WriteConsoleW(GetStdHandle(STD_ERROR_HANDLE), L"Error: could not map view of ", 29, nullptr, nullptr); WriteConsoleW(GetStdHandle(STD_ERROR_HANDLE), src, lstrlenW(src), nullptr, nullptr); ExitProcess(GetLastError()); } uint8_t *const dst_file_buffer = MapViewOfFile(dst_memory_mapped_file, FILE_MAP_ALL_ACCESS, 0, 0, end_locaition.QuadPart); if (dst_file_buffer == nullptr) { UnmapViewOfFile(src_file_buffer); CloseHandle(dst_memory_mapped_file); CloseHandle(src_memory_mapped_file); CloseHandle(src_file); CloseHandle(dst_file); WriteConsoleW(GetStdHandle(STD_ERROR_HANDLE), L"Error: could not map view of ", 29, nullptr, nullptr); WriteConsoleW(GetStdHandle(STD_ERROR_HANDLE), dst, lstrlenW(dst), nullptr, nullptr); ExitProcess(GetLastError()); } uint8_t *start1 = src_file_buffer; uint8_t *start2 = dst_file_buffer; end_locaition.QuadPart -= invalid_newline_count; for (; end_locaition.QuadPart; ++start1, ++start2, --end_locaition.QuadPart) { if (start1[0] == '\n') { if (start1 - 1 <= src_file_buffer || start1[-1] != '\r') { *start2++ = '\r'; } } start2[0] = start1[0]; } UnmapViewOfFile(src_file_buffer); UnmapViewOfFile(dst_file_buffer); CloseHandle(dst_memory_mapped_file); CloseHandle(src_memory_mapped_file); CloseHandle(src_file); CloseHandle(dst_file); } void unix2dos2(const wchar_t *const filepath) { HANDLE const file = CreateFileW(filepath, GENERIC_ALL, 0, 
nullptr, OPEN_EXISTING, FILE_ATTRIBUTE_NORMAL, nullptr); if (file == INVALID_HANDLE_VALUE) { WriteConsoleW(GetStdHandle(STD_ERROR_HANDLE), L"Error: could not open ", 22, nullptr, nullptr); WriteConsoleW(GetStdHandle(STD_ERROR_HANDLE), filepath, lstrlenW(filepath), nullptr, nullptr); ExitProcess(GetLastError()); } int64_t invalid_newline_count = newline_count(file); if (invalid_newline_count == 0) { CloseHandle(file); return; } LARGE_INTEGER end_locaition = { 0 }; if (SetFilePointerEx(file, (LARGE_INTEGER) { .QuadPart = invalid_newline_count }, &end_locaition, FILE_END) == 0) { CloseHandle(file); WriteConsoleW(GetStdHandle(STD_ERROR_HANDLE), L"Error: could not resize ", 24, nullptr, nullptr); WriteConsoleW(GetStdHandle(STD_ERROR_HANDLE), filepath, lstrlenW(filepath), nullptr, nullptr); ExitProcess(GetLastError()); } if (SetEndOfFile(file) == 0) { CloseHandle(file); WriteConsoleW(GetStdHandle(STD_ERROR_HANDLE), L"Error: could not resize ", 24, nullptr, nullptr); WriteConsoleW(GetStdHandle(STD_ERROR_HANDLE), filepath, lstrlenW(filepath), nullptr, nullptr); ExitProcess(GetLastError()); } HANDLE const memory_mapped_file = CreateFileMappingW( file, nullptr, PAGE_READWRITE, 0, 0, nullptr ); if (memory_mapped_file == nullptr) { CloseHandle(file); WriteConsoleW(GetStdHandle(STD_ERROR_HANDLE), L"Error: could not create file mapping object for ", 48, nullptr, nullptr); WriteConsoleW(GetStdHandle(STD_ERROR_HANDLE), filepath, lstrlenW(filepath), nullptr, nullptr); ExitProcess(GetLastError()); } uint8_t *const file_buffer = MapViewOfFile(memory_mapped_file, FILE_MAP_ALL_ACCESS, 0, 0, end_locaition.QuadPart); if (file_buffer == nullptr) { CloseHandle(file); CloseHandle(memory_mapped_file); WriteConsoleW(GetStdHandle(STD_ERROR_HANDLE), L"Error: could not map view of ", 29, nullptr, nullptr); WriteConsoleW(GetStdHandle(STD_ERROR_HANDLE), filepath, lstrlenW(filepath), nullptr, nullptr); ExitProcess(GetLastError()); } uint8_t *start1 = file_buffer + end_locaition.QuadPart - 
invalid_newline_count - 1; uint8_t *start2 = file_buffer + end_locaition.QuadPart - 1; for (; start1 - file_buffer >= 0; --start1, --start2) { start2[0] = start1[0]; if (start1[0] == '\n') { if (start1 - 1 <= file_buffer || start1[-1] != '\r') { *--start2 = '\r'; } } } /* cleanup */ UnmapViewOfFile(file_buffer); CloseHandle(memory_mapped_file); CloseHandle(file); } void __cdecl mainCRTStartup() { int argc; wchar_t **const argv = CommandLineToArgvW(GetCommandLineW(), &argc) + 1; --argc; enum mode { mode_overwrite = 0x0, mode_create_file = 0x1, } current_mode = { mode_overwrite }; for (int i = 0; i < argc; ++i) { if (lstrcmpW(argv[i], L"-o") == 0) { current_mode = mode_overwrite; } else if (lstrcmpW(argv[i], L"-n") == 0) { current_mode = mode_create_file; } else { switch (current_mode) { case mode_overwrite: unix2dos2(argv[i]); break; case mode_create_file: if (lstrcmpW(argv[i], argv[i + 1]) != 0) { unix2dos1(argv[i], argv[i + 1]); } else { unix2dos2(argv[i]); } ++i; break; } } } /* free memory and exit */ LocalFree(argv - 1); ExitProcess(0); } to build the code use cl.exe -nologo -Oi -GS -Gs9999999 unix2dos.c -link -subsystem:console -nodefaultlib kernel32.lib shell32.lib -stack:0x100000,0x100000 Answer: Overall design Code makes heavy use of data as a file with a known size. I'd favor a stream approach where the conversion is done as data arrives and then written, negating the need for any large buffers. mode_overwrite design In my opinion, re-writing a file should not destroy the original until after the new file is completely written. I'd favor writing to a temporary new file first, rename files and then destroy the original. Should an error occur in the process, far easier to still have the original file around for recovery. Memory mapping The use of CreateFileMappingW() after walking the entire file with newline_count() reduces the benefit of mapping. It would make more sense to map the file and then read it for CR/LF. 
Logic error In newline_count(), there is no need for the 2nd SetFilePointerEx(). newline_count() is also amiss in re-reading the last character of the buffer into buffer[0]. What should be in buffer[0] is the last value from the previous block read. Pointer computation error start1 - 1 is invalid (UB) to compute when start1 == src_file_buffer. Instead // start1 - 1 <= src_file_buffer start1 <= src_file_buffer + 1 Access is UB start1[-1] is UB when start1 == src_file_buffer. Confusing error message SetFilePointerEx() may report "Error: could not read file", yet the error is not in reading, but in seeking. Avoid error-prone magic numbers Rather than ..., L"Error: could not resize ", 24, ... wchar_t err[] = L"Error: could not resize "; ... err, sizeof err / sizeof err[0],... Or other self-calculating code. Potential out of range access argv[i + 1] is attempted without knowing i + 1 < argc. Minor locaition --> location
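To make the suggested stream approach concrete, here is a minimal sketch (in Python rather than Win32 C, purely for brevity): the transformation only ever needs one byte of lookbehind, so it requires neither the file size up front nor memory mapping.

```python
import io

def unix2dos_stream(src, dst, chunk_size=1 << 13):
    """Copy src to dst, rewriting bare LF as CRLF while leaving existing
    CRLF pairs untouched. src and dst are binary file-like objects; only
    one chunk plus one byte of lookbehind is held in memory at a time."""
    prev = 0  # last byte seen, carried across chunk boundaries (0 = none)
    while True:
        chunk = src.read(chunk_size)
        if not chunk:
            break
        out = bytearray()
        for byte in chunk:
            if byte == 0x0A and prev != 0x0D:  # LF not preceded by CR
                out.append(0x0D)               # insert the missing CR
            out.append(byte)
            prev = byte
        dst.write(bytes(out))
```

The same structure would work with ReadFile/WriteFile in the original C, writing to a temporary file and renaming on success so that the source survives a failure mid-run.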
{ "domain": "codereview.stackexchange", "id": 39326, "tags": "c, reinventing-the-wheel, windows" }
Are there algorithms for proving two finite state machines are equivalent?
Question: Suppose we call two finite state machines equivalent if they "perform the same computation" - they accept the same language, or they produce the same output for any given input. Is there an algorithm for checking if two finite state machines are equivalent? Answer: Given two deterministic finite automata $A_1,A_2$, you can construct the product automaton $A_1 \times A_2$ using the product construction. Let a mixed state be one which is accepting in $A_1$ and rejecting in $A_2$, or vice versa. The two automata are equivalent iff no mixed state is reachable from the initial state.
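A sketch of that check in Python (the representation — transition dicts keyed by (state, symbol) — is an assumption for illustration; any total transition function works): breadth-first search over reachable product states, failing as soon as a mixed pair is found.

```python
from collections import deque

def dfa_equivalent(delta1, start1, accept1, delta2, start2, accept2, alphabet):
    """Decide language equivalence of two DFAs via the product construction.
    deltaN maps (state, symbol) -> state; acceptN is the set of accepting
    states. Returns False iff some reachable product state is 'mixed'."""
    seen = {(start1, start2)}
    queue = deque(seen)
    while queue:
        q1, q2 = queue.popleft()
        if (q1 in accept1) != (q2 in accept2):  # mixed: accepted by one only
            return False
        for a in alphabet:
            pair = (delta1[q1, a], delta2[q2, a])
            if pair not in seen:
                seen.add(pair)
                queue.append(pair)
    return True
```

This runs in O(|Q1| |Q2| |Σ|) time; as a sanity check, a two-state "even number of 1s" DFA comes out equivalent to a redundant four-state automaton for the same language, and inequivalent once the second automaton's accepting set is perturbed.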
{ "domain": "cs.stackexchange", "id": 11250, "tags": "finite-automata" }
How do black holes in merging galaxies find each other to merge?
Question: In many accounts of galaxy mergers, the prompt merging of their central black holes, if any, is stated seemingly as too obvious to need further explanation. While I don't dispute that this may indeed be the inevitable and prompt outcome, it doesn't seem self-evident, even if both black holes are near the centre of mass of their respective galaxies (and even if so, who is to say the centres of mass meet rather than in effect forming an orbit and the merged galaxy having a new resultant centre of mass?) After all, from a distance a black hole is no different gravitationally from any other mass distribution and, even if they end up fairly close, gravitational waves seem too feeble to degrade their mutual orbit in any reasonable time unless they are extremely close (by cosmic standards - a few diameters apart, say). Answer: You're quite correct that two isolated black holes would simply orbit each other until, over a very long timescale, gravitational-wave radiation would cause them to coalesce. But galactic black holes can change their momentum by flinging stars about. The net result isn't that different from the way energy is dissipated in a viscous liquid. The black holes will interact with the stars around them and lose energy in the process until they merge. There are lots of videos modelling galaxy mergers on YouTube. I particularly like this one. Note how stars are flung away from the galaxies as they collide.
{ "domain": "physics.stackexchange", "id": 4777, "tags": "general-relativity, cosmology, black-holes" }
Grabbing the previous 4 quarters
Question: I'm curious if there is a way to slim down this code either with SQL or with C#. -- VARS declare @iagent_id int declare @quarter int declare @prev_quarter_1 int declare @prev_quarter_2 int declare @prev_quarter_3 int declare @year int declare @prev_quarter_year_1 int declare @prev_quarter_year_2 int declare @prev_quarter_year_3 int set @quarter = datepart(QQ, getdate()) - 1 set @year = datepart(year, getdate()) set @prev_quarter_year_1 = @year set @prev_quarter_year_2 = @year set @prev_quarter_year_3 = @year if @quarter = 0 begin set @quarter = 4 set @year = @year - 1 set @prev_quarter_1 = 3 set @prev_quarter_year_1 = @year set @prev_quarter_2 = 2 set @prev_quarter_year_2 = @year set @prev_quarter_3 = 1 set @prev_quarter_year_3 = @year end else begin if @quarter = 3 begin set @prev_quarter_1 = 2 set @prev_quarter_2 = 1 set @prev_quarter_3 = 4 set @prev_quarter_year_3 = @year - 1 end else begin if @quarter = 2 begin set @prev_quarter_1 = 1 set @prev_quarter_2 = 4 set @prev_quarter_year_2 = @year - 1 set @prev_quarter_3 = 3 set @prev_quarter_year_3 = @year - 1 end else begin if @quarter = 1 begin set @prev_quarter_1 = 4 set @prev_quarter_year_1 = @year - 1 set @prev_quarter_2 = 3 set @prev_quarter_year_2 = @year - 1 set @prev_quarter_3 = 2 set @prev_quarter_year_3 = @year - 1 end end end end select @quarter as 'Quarter', @year as 'Year', @prev_quarter_1 as 'pq1', @prev_quarter_year_1 as 'pqy1', @prev_quarter_2 as 'pq2', @prev_quarter_year_2 as 'pqy2', @prev_quarter_3 as 'pq3', @prev_quarter_year_3 as 'pqy3' Answer: Your code is very procedural, with branching and variables and things we see in code... and bad queries. SQL likes sets/tables. You're lucky your fiscal quarters line up with the "normal calendar" - every company I worked for had different periods for their fiscal calendar. 
Since I worked with a data warehouse, one of the first things I do when I come on board in a company where I need to work with time data, is look at their databases to see if they have a table somewhere that contains their fiscal calendars. And if they don't have one, I simply create it: create table dbo.FiscalCalendars ( _Id int not null identity(1,1) ,_DateInserted datetime not null ,_DateUpdated datetime null ,CalendarDate date not null ,CalendarDayOfWeek int not null ,CalendarDayOfMonth int not null ,CalendarDayOfYear int not null ,CalendarWeekOfYear int not null ,CalendarMonthOfYear int not null ,CalendarYear int not null ,FiscalDayOfWeek int not null ,FiscalDayOfMonth int not null ,FiscalDayOfQuarter int not null ,FiscalDayOfYear int not null ,FiscalWeekOfMonth int not null ,FiscalWeekOfQuarter int not null ,FiscalWeekOfYear int not null ,FiscalMonthOfQuarter int not null ,FiscalMonthOfYear int not null ,FiscalQuarterOfYear int not null ,FiscalYear int not null ,constraint PK_FiscalCalendars primary key clustered (_Id asc) ,constraint NK_FiscalCalendars unique (CalendarDate) ); With a table that stores everything you've always wanted to know about every date you ever need to know anything about, selecting the last 4 quarters becomes... simple: declare @referenceDate as date; set @referenceDate = getdate(); with quarters as ( select t.FiscalYear ,t.FiscalQuarterOfYear ,min(t.CalendarDate) StartDate ,max(t.CalendarDate) EndDate from dbo.FiscalCalendars t group by t.FiscalYear ,t.FiscalQuarterOfYear ) select top 4 q.FiscalYear ,q.FiscalQuarterOfYear ,q.StartDate ,q.EndDate from quarters q where q.StartDate <= @referenceDate order by q.EndDate desc The hardest part is ...populating that time table, not querying it. Now, to make the above return a column for each quarter, you'll need to pivot it. But if you're working (and thinking) in sets (LINQ works off IEnumerable<T> after all), you won't need to do this. Speaking of LINQ... 
assuming you have the time table mapped to FiscalCalendar entities, you could query it like this: var referenceDate = DateTime.Today; var quarters = context.FiscalCalendars .GroupBy(t => new { t.FiscalYear, t.FiscalQuarterOfYear }) .Select(g => new { FiscalYear = g.Key.FiscalYear, FiscalQuarterOfYear = g.Key.FiscalQuarterOfYear, StartDate = g.Min(q => q.CalendarDate), EndDate = g.Max(q => q.CalendarDate) }) .Where(q => q.StartDate <= referenceDate) .OrderByDescending(q => q.EndDate) //.Skip(1) // skip current quarter? .Take(4); That would give you an IQueryable<T> where T is an anonymous type with int FiscalYear, int FiscalQuarterOfYear, DateTime StartDate and DateTime EndDate properties.
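If a calendar table is not an option, the branching in the original script can also be collapsed into modular arithmetic over a single quarter index. A language-neutral sketch in Python (translating this to T-SQL or C# is mechanical; it assumes quarters align with the standard calendar, as in the question):

```python
from datetime import date

def last_quarters(today=None, count=4):
    """Return the `count` most recent *completed* quarters as
    (year, quarter) pairs, newest first."""
    today = today or date.today()
    # 0-based index of the current quarter counted from year 0
    idx = today.year * 4 + (today.month - 1) // 3
    out = []
    for k in range(1, count + 1):
        q = idx - k  # step back k quarters; divmod recovers year and quarter
        out.append((q // 4, q % 4 + 1))
    return out

print(last_quarters(date(2024, 1, 15)))
# [(2023, 4), (2023, 3), (2023, 2), (2023, 1)]
```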
{ "domain": "codereview.stackexchange", "id": 14212, "tags": "c#, sql" }
Question about Majorana spinor's property
Question: I am reading the BBS, Exercise 5.1 This exercise is nothing but showing that two Majorana spinors $\Theta_1$ and $\Theta_2$ satisfy \begin{align} \bar{\Theta}_1 \Gamma_{\mu} \Theta_2 = -\bar{\Theta}_2 \Gamma_\mu \Theta_1 \end{align} In BBS, there are solutions to the exercises that I did not fully understand. First, I know that \begin{align} \bar{\Theta}_1 \Gamma_{\mu} \Theta_2 = \Theta_1^{\dagger} \Gamma_0 \Gamma_{\mu} \Theta_2 = \Theta_1^T C \Gamma_{\mu} \Theta_2 \end{align} They said that this can be written in the form \begin{align} - \Theta_2^T \Gamma_{\mu}^T C^T \Theta_1 = - \Theta_2^T C \Gamma_{\mu}\Theta_1 = -\bar{\Theta}_2 \Gamma_{\mu} \Theta_1 \end{align} What I don't understand is why \begin{align} \Theta_1^T C \Gamma_{\mu} \Theta_2=- \Theta_2^T \Gamma_{\mu}^T C^T \Theta_1 \end{align} cf) I know that the gamma matrices for Majorana spinors satisfy \begin{align} C\Gamma_{\mu}= - \Gamma_\mu^T C \end{align} which is related to the above equation. References: [BBS] Becker, Becker, Schwarz, "String theory and M-theory: A modern Introduction". Answer: The relation you ask about is just a reshuffling of the components. Writing out the indices we have $$ \Theta_1^T C \, \Gamma_{\mu} \Theta_2 = (\Theta_1^T)_a C_{ab} \, (\Gamma_{\mu})_{bc} (\Theta_2)_c = - (\Theta_2)_c (\Gamma_{\mu})_{bc} C_{ab} (\Theta_1^T)_a = - (\Theta_2^T)_c (\Gamma_{\mu}^T)_{cb} (C^T)_{ba} (\Theta_1)_a $$ where the minus sign in the second step came from switching the order of the two fermions. Removing the indices again we then have $$ \Theta_1^T C \, \Gamma_{\mu} \Theta_2 = - \Theta_2^T\Gamma_{\mu}^T C^T \Theta_1 $$ which is what you asked about. Depending on your spinor conventions you might need to be more careful about the placement of the spinor indices than I was above, but the general idea should be the same.
{ "domain": "physics.stackexchange", "id": 23223, "tags": "homework-and-exercises, spinors, majorana-fermions, dirac-matrices" }
Birkhoff Method for Harmonic Oscillator Perturbation
Question: Problem: Given Hamiltonian $$H = \frac12 (p^{2}+q^{2})+q^{3}-3qp^{2}$$ make a perturbative canonical transformation $(q,p) \rightarrow (Q,P)$ such that the new Hamiltonian, apart from terms of degree greater than 4 in $Q$ and $P$, is of the form $$\bar{H}(Q,P)= \frac12 (P^{2}+Q^{2})+c(P^{2}+Q^{2})^{2}$$ by using Birkhoff's version of canonical perturbation theory in which one first makes a canonical transformation to complex coordinates $a$ and $a^{\dagger}$, where $a= \frac{(q-ip)}{\sqrt{2}}, a^{\dagger}=\frac{(p-iq)}{\sqrt{2}}.$ My Issue: In my previous use of the Birkhoff perturbation method (which I see more commonly referred to as Birkhoff normal form) the goal is to get the Hamiltonian into a power series of the action, $bb^{\dagger}$, where $(a,a^{\dagger})\rightarrow(b,b^{\dagger})$, and I don't see how to get the following transformation to give the Hamiltonian in the desired form. Work So Far: After substituting $(q,p) \rightarrow (a,a^{\dagger})$, where $q=\frac{(a+ia^{\dagger})}{\sqrt{2}}, p=\frac{(ia+a^{\dagger})}{\sqrt{2}}$, $$H(a,a^{\dagger})=iaa^{\dagger} + \sqrt{2}(a^{3}-ia^{\dagger 3})$$ I then chose a generating function $F(a,b^{\dagger}) = ab^{\dagger}+S(a,b^{\dagger})$, where $S(a,b^{\dagger})$ is a cubic polynomial. Then $$a^{\dagger} = \frac{\partial F}{\partial a} = b^{\dagger}+\frac{\partial S(b,b^{\dagger})}{\partial b} + \text{higher order terms}$$ $$b = \frac{\partial F}{\partial b^{\dagger}} = a+\frac{\partial S(b,b^{\dagger})}{\partial b^{\dagger}} + \text{higher order terms}$$ Then I think I'm doing something wrong with keeping track of the higher order terms here. I'd really appreciate any guidance in understanding this problem, thanks for your time. Answer: Your coordinate transformation, from $(q,p)\to(a,a^{\dagger})$ seems a little funky monkey. 
I would let $$a=\frac{1}{\sqrt{2}}(q+ip); \quad \quad a^{\dagger}=\frac{1}{\sqrt{2}}(q-ip),$$ so that $$q=\frac{1}{\sqrt{2}}(a+a^{\dagger}); \quad p=-\frac{i}{\sqrt{2}}(a-a^{\dagger}).$$ Next, we let $$a=\epsilon b+\epsilon^2 d; \quad a^{\dagger}=\epsilon b^{\dagger}+\epsilon^2 d^{\dagger},$$ where $\epsilon$ is a small parameter. We put this into the Hamiltonian and keep all terms to $\mathcal{O}(\epsilon^4)$. The point is to get rid of the third order interaction terms. This constrains the third order condition: $$\epsilon^3: \quad \sqrt{2}b^3+\sqrt{2}b^{\dagger3}+b^{\dagger}d+bd^{\dagger}=0.$$ This implies $$d=-\sqrt{2}b^{\dagger 2},$$ with $d^{\dagger}$ following trivially. Therefore, the Hamiltonian becomes $$H=bb^{\dagger} +c b^2b^{\dagger 2},$$ (with c=-10). The coordinates $(P,Q)$ can then be easily written in terms of $(b,b^{\dagger})$.
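The cancellation at $\epsilon^3$ and the value $c=-10$ can be checked mechanically with sympy, treating $b$ and $b^{\dagger}$ as commuting classical variables. The Hamiltonian $H = aa^{\dagger} + \sqrt{2}\,(a^3 + a^{\dagger 3})$ used below is the one implied by the standard transformation above (it is not written out explicitly in the answer):

```python
import sympy as sp

eps, b, bd = sp.symbols('epsilon b b_dagger')

# second-order pieces fixed by the epsilon^3 condition: d = -sqrt(2) b_dagger^2
d = -sp.sqrt(2) * bd**2
dd = -sp.sqrt(2) * b**2

a = eps * b + eps**2 * d
ad = eps * bd + eps**2 * dd

H = sp.expand(a * ad + sp.sqrt(2) * (a**3 + ad**3))

print(H.coeff(eps, 3))  # 0: the cubic terms cancel
print(H.coeff(eps, 4))  # -10*b**2*b_dagger**2, i.e. c = -10
```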
{ "domain": "physics.stackexchange", "id": 27081, "tags": "homework-and-exercises, classical-mechanics, harmonic-oscillator, hamiltonian-formalism, perturbation-theory" }
Interpolation on marching cubes algorithm
Question: I'm currently learning how isosurfaces are extracted from volumetric data. I already understood the original marching cubes algorithm, which is based on 3D voxel data that stores only values of either 1 or 0. For learning purposes I implemented the easier version of it for 2D volumetric data, called the marching squares algorithm. Results are shown below: After all I read that the marching cubes algorithm can benefit from float values in the voxels instead of binary values, in terms of interpolating the vertices. However, I do not understand how this should work, simply because I'm missing the rules for which values may be assumed by the voxels and also how the interpolation based on them would be done. Could someone give me a fundamental view of how interpolation with the marching cubes algorithm is supposed to work? An example in 2D would be fine, but showing how it works in 3D would be even nicer. PS: For references, I read such stuff here. Answer: I worked it out on my own. I'm trying to write it down correctly so it's readable for newcomers. Introduction First of all, interpolation itself requires float values rather than binary values. This makes it possible to place edge-crossing vertices at varying positions along each edge; how that works is explained below. The interpolation needed here is linear interpolation. For understanding purposes I'll explain it here too. General Interpolation After all, what is interpolation? Well, good question. Interpolation is the name for a family of algorithms that estimate new points between given data points (with n dimensions). Put simply, points are computed between the input points for a given interpolation value. First, the points need a base, so it is clear where in-between points are searched for. After that, a specific interpolation algorithm has to be chosen. There are different kinds of interpolation algorithms out there, each for a specific application.
The best known is linear interpolation, which, for a given value, yields a point on the line through two input points. The formula for linear interpolation (2D here, simple maths) is as follows: $$ y = y_{0} + (value - x_{0} ) * \frac{y_{1} - y_{0} }{x_{1} - x_{0}} $$ with $$ value \in \mathbb{R} \wedge x_{0} <= value <= x_{1} $$ With this formula it is possible to apply interpolation to the vertices of the marching cubes algorithm, which requires the following rules. The voxels must hold a floating-point value instead of a binary value. The values of the voxels must lie between zero and one. A boundary value must be declared which describes whether a voxel is supposed to lie inside the surface. Looking back at the blog, this boundary value is called the iso-surface value. As explained, it determines whether a voxel is assumed to be inside the surface, which is what makes the edge crossings well defined. This is important to know, because without it you couldn't tell where an edge-crossing computation is even needed. The interpolation is done on each coordinate of the positions of the two voxels between which an edge-crossing vertex lies. The implementation is quite easy and looks like this (extracted from here): /* Linearly interpolate the position where an isosurface cuts an edge between two vertices, each with their own scalar value */ XYZ VertexInterp(isolevel,p1,p2,valp1,valp2) double isolevel; XYZ p1,p2; double valp1,valp2; { double mu; XYZ p; if (ABS(isolevel-valp1) < 0.00001) return(p1); if (ABS(isolevel-valp2) < 0.00001) return(p2); if (ABS(valp1-valp2) < 0.00001) return(p1); mu = (isolevel - valp1) / (valp2 - valp1); p.x = p1.x + mu * (p2.x - p1.x); p.y = p1.y + mu * (p2.y - p1.y); p.z = p1.z + mu * (p2.z - p1.z); return(p); } Conclusion The interpolation of the edge-crossing vertices is a nice feature for smoothing the presentation of voxels. However, the interpolation relies on the data already containing floating-point values, which means that it cannot smooth your existing binary data.
For such cases different algorithms and approaches are needed.
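For experimentation, the same edge interpolation ports directly to Python (a transcription of the C routine quoted above, generalized to tuples of any dimension):

```python
def vertex_interp(isolevel, p1, p2, val1, val2, eps=1e-5):
    """Linearly interpolate where the isosurface at `isolevel` crosses the
    edge between points p1 and p2 (coordinate tuples) whose voxels carry
    scalar values val1 and val2."""
    if abs(isolevel - val1) < eps:
        return p1
    if abs(isolevel - val2) < eps:
        return p2
    if abs(val1 - val2) < eps:
        return p1  # degenerate edge: both ends at (nearly) the same value
    mu = (isolevel - val1) / (val2 - val1)
    return tuple(a + mu * (b - a) for a, b in zip(p1, p2))

# Edge from (0, 0) to (1, 0); voxel values 0.25 and 0.75, iso-level 0.5:
print(vertex_interp(0.5, (0.0, 0.0), (1.0, 0.0), 0.25, 0.75))  # (0.5, 0.0)
```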
{ "domain": "cs.stackexchange", "id": 8494, "tags": "algorithms" }
Ros2 import regular python modules from where
Question: Ubuntu 20.04 ros2 foxy This may be a dumb question but I have not yet run into it so I feel the need to ask. If I am writing a python node and I want some modules from pip (or conda I suppose) such as pyserial, where do I install the module in order to be able to use it in a ros2 node? If I wanted to use pyserial in a regular python script for a project, I would go to my venv that I use for misc projects and pip install pyserial from that environment, then use it. Am I supposed to run python3 -m pip install -U pyserial just for my system python? Because I've always made some sort of venv for my projects because gumming up the system with random modules usually results in me breaking something or causing some sort of package conflict. I know I can run limited ros functions from a venv (done it through an anaconda env once before to be able to run tensorflow2 in python3 in ros melodic) but I can't see that being something that is regular practice. For C++ I can just put the header/source files in a library type folder inside the ros package so that isn't an issue. Originally posted by seandburke99 on ROS Answers with karma: 38 on 2020-11-15 Post score: 1 Answer: There are several options available for using third-party python packages in your ROS nodes. Based on this question (and several similar ones), we have added some documentation to cover these use cases: https://index.ros.org/doc/ros2/Tutorials/Using-Python-Packages/ Originally posted by mjcarroll with karma: 6414 on 2020-12-22 This answer was ACCEPTED on the original site Post score: 0
{ "domain": "robotics.stackexchange", "id": 35761, "tags": "ros2" }
Resolving horizontal and vertical components on an inclined plane
Question: Here is a very simple mechanics question related to inclined planes: We assume that the body is at equilibrium or moving at constant speed. To work out the force 'R', I resolve the 100N force due to weight into its vertical component (relative to the inclined plane) like so: The final answer of 115N is incorrect. The reason this answer is incorrect is that the 90 degree angle has been placed next to the line of force of the weight. The correct working is as follows: As you can see, I have now placed the 90 degree angle next to the line of force of 'R'. The answer of 87N is now correct. My question is, why is placing the 90 degree angle on the line of force 'R' and making it the adjacent instead of the hypotenuse so essential to getting the correct answer? Answer: Here is the diagram you need to draw for yourself: The force of gravity (red) can be thought of as the sum of two (green) forces - one perpendicular to the incline (the normal force) and one along it. Since the red vector is the hypotenuse of a triangle, the normal force must be $mg\cos\theta$.
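Numerically, the two constructions differ only in whether the weight is treated as the hypotenuse or as the adjacent side of the triangle. A quick check, assuming an incline angle of 30° (the angle is not stated in the question, but 30° reproduces both quoted answers):

```python
import math

W = 100.0                  # weight, in newtons
theta = math.radians(30)   # assumed incline angle

# Correct: the weight is the hypotenuse, so R is the adjacent side.
R_correct = W * math.cos(theta)   # ~86.6 N, rounds to 87 N
# Incorrect: treating R as the hypotenuse instead of the adjacent.
R_wrong = W / math.cos(theta)     # ~115.5 N, rounds to 115 N

print(round(R_correct), round(R_wrong))  # 87 115
```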
{ "domain": "physics.stackexchange", "id": 73246, "tags": "homework-and-exercises, newtonian-mechanics, forces, vectors, free-body-diagram" }
Electron Pushing
Question: I have a problem with this graphic, specifically the right hand side. Why a "sextet" atom (an atom with 6 valence electrons)? I thought only a select few elements (C, N, and O) actually follow the octet rule. Answer: Where did you find this picture? It is very misleading. When a sextet is mentioned, the better term would be lone-pair acceptor or Lewis acid. You simply need one empty molecular orbital on the atom to be able to do all the steps in the picture. In the case of the common B, C, N and O, they happen to prefer a sextet. However, for heavier elements, this is wrong. If you change the $BF_{3}$ in the picture to $FeCl_{3}$, the reaction will still be valid. But the iron center is not a sextet; it is $3s^{2}3p^{6}3d^{5}$ in its outer electron shell.
{ "domain": "chemistry.stackexchange", "id": 1185, "tags": "organic-chemistry, erratum" }
Question concerning subset sum problem: split into 3 equal subsets
Question: Task: Given an array $arr[a_1, a_2, \dots, a_n]$ of integers, let $A = \sum\limits _{i\in \{1, 2, \dots, n\}}a_i$. Determine whether it is possible to split $arr[]$ into 3 subsequences of equal sum, i.e. if $s_1 =s_2 = s_3 =\dfrac{A}{3}$ where $s_1 ,s_2 , s_3$ denote the sums of the three parts. My thoughts: I will first examine whether there exists some sequence of numbers that sums up to $\dfrac{A}{3}$ via dp, then I will backtrack those numbers, "throw them out" (meaning I won't consider them anymore), and proceed with the remaining numbers of the array. After doing this a second time I examine whether the numbers left sum up to $\dfrac{A}{3}$ and return true if this is the case. Even though this sounds valid to me I somehow doubt the correctness of this. Recurrence of DP: $dp[i][j] = dp[i-1][j-a_i] \text{ OR } dp[i-1][j]$ $dp[]$ is a boolean array of dimension $n \times\dfrac{A}{3}$. Answer: Here is my solution, a three-dimensional DP that tracks two of the subset sums; the third subset implicitly receives whatever is left: boolean threePartition(int arr[]) { if (arr == null) return false; int total = Arrays.stream(arr).sum(); if (total % 3 != 0) return false; int n = arr.length; int target = total / 3; boolean dp[][][] = new boolean[n + 1][target + 1][target + 1]; dp[0][0][0] = true; // empty prefix: both tracked sums are zero for (int i = 1; i <= n; i++) { for (int j = 0; j <= target; j++) { for (int k = 0; k <= target; k++) { dp[i][j][k] = dp[i - 1][j][k] // arr[i-1] goes to the third subset || (j >= arr[i - 1] && dp[i - 1][j - arr[i - 1]][k]) // to the first || (k >= arr[i - 1] && dp[i - 1][j][k - arr[i - 1]]); // to the second } } } return dp[n][target][target]; }
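The same three-way DP can be written compactly in Python for checking against small cases (a sketch over reachable sum pairs rather than a full boolean table):

```python
def can_split_three(arr):
    """True iff arr can be partitioned into 3 disjoint subsets of equal sum."""
    total = sum(arr)
    if total % 3 != 0:
        return False
    target = total // 3
    # states: set of (j, k) pairs of achievable sums for two of the groups;
    # the third group implicitly receives every element assigned to neither.
    states = {(0, 0)}
    for x in arr:
        new_states = set(states)  # x may go to the (implicit) third group
        for j, k in states:
            if j + x <= target:
                new_states.add((j + x, k))
            if k + x <= target:
                new_states.add((j, k + x))
        states = new_states
    return (target, target) in states

print(can_split_three([1, 1, 1, 2, 2, 2, 3, 3, 3]))  # True: three {1,2,3} groups
print(can_split_three([1, 1, 2]))                    # False: sum not divisible by 3
```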
{ "domain": "cs.stackexchange", "id": 17680, "tags": "dynamic-programming" }
Check existence of user identification number
Question: I was tasked with writing a python script to check if a passed ID number exists within a file. The file I was give contains roughly 25,000 unique ID numbers. An ID number is a unique 9 digit number. I have a couple specific things for you to comment on if you decide to review this: As it stands, I convert the user_id passed in the id_exists to a str. I do this because I need the ID number to be the same type when I read the ID's from the file. Is there an easier way to accomplish this? This is my first time using a PEP 484 type hint, Generator. Am I using it correctly? This is also my first time utilizing a generator in this fashion. I used a generator here because instead of loading the entire file into memory, storing in a list, then iterating over the list, I can read each line and can exit early if an ID is found. Is it good that I'm using it in this way, or is there an easier way to do this? Using a helper function is a new concept to me. Am I utilizing this helper function in the most pythonic way? Of course, any constructive criticism beyond these points is welcome. Thanks in advance. """ Ensures id number matches any within the ID file. An ID number is a unique 9 digit number. """ import random import time from typing import Union, Generator def id_exists(user_id: Union[int, str]) -> bool: """ Determines if the passed ID exists within the ID file. The ID passed must be of length 9 exactly, and only contain numbers. :param Union[int, str] user_id: User ID to check for existence :return bool: True if ID exists in file, False otherwise """ # Generator[yield_type, send_type, return_type] # def get_ids() -> Generator[str, None, None]: """ Helper function for `id_exists`. Yields each line in the ID file. :return Generator[str, None, None]: Lines in ID file. 
""" with open("ids.txt", "r") as file: for line in file: yield line.strip() # Convert user_id to <class 'str'> if not already # if type(user_id) == int: user_id = str(user_id) # Ensure user_id matches specifications # if len(user_id) != 9 or not user_id.isdigit(): raise ValueError("ID should be 9 characters long and all digits.") # Check ID against file and return result # return any(user_id == stored_id for stored_id in get_ids()) Answer: One of: Only take a string or a number. Use str either way. Work against a bespoke class. Yes. But I find it better to just use Iterator[str], unless you're using the coroutine aspects of it. Yeah that's fine. But: You can just use in rather than any(...). You don't need the function get_ids you can just use any, like you are now. Don't use type(...) == int, use isinstance. Validating user_id should probably happen somewhere else. I'll make a class as an example to show this. class UserId: __slots__ = ('id',) id: int def __init__(self, id: int) -> None: if not self.valid_id(id): raise ValueError("ID should be 9 characters long and all digits.") self.id = id @staticmethod def valid_id(id: int) -> bool: if not isinstance(id, int): return False str_id = str(id) return ( len(str_id) == 9 and str_id.isdigit() ) def exists(self) -> bool: user_id = str(self.id) with open('ids.txt') as f: return any(user_id == line.strip() for line in f) print(UserId(123456789).exists())
{ "domain": "codereview.stackexchange", "id": 37257, "tags": "python, python-3.x, file, generator" }
Andromeda & Milky Way Merger: Gravitational Waves
Question: When the Andromeda galaxy and Milky Way merge in the future, the super-massive black holes at their respective galactic centers will likely eventually merge. Similarly to the gravitational waves detected by LIGO on 14th September 2015, I assume that the merger of these super-massive black holes will generate gravitational waves. Whilst the gravitational waves detected by LIGO in 2015 were caused by a merger of two black holes approximately 1.3 billion light years away, the merger of black holes inside our own future galaxy (Milky Way & Andromeda merged together) will be much closer. Also, the size of the black holes merging will be bigger. Question: The waves detected by LIGO caused a space-time distortion of the order of a fraction of an atomic radius. What is the size of the spacetime distortion likely to be caused by the merger of the super-massive black holes at the galactic centers of Milky Way & Andromeda? Will the gravitational wave generated be of any threat to humanity on Earth (if humanity still exists at that point in time)? Answer: The peak strain of GW150914 was about $10^{-21}$. Strain scales linearly with the total mass of the system, and inversely proportional to the distance. A merger of the two supermassive black holes at the centers of the two galaxies would be (give or take an order of magnitude) a million times more massive and a million times closer than GW150914, giving a strain of $O(10^{-9})$. Across the size of the Earth this would still only translate to a few millimeters. This might cause measurable seismic activity across the globe, but would hardly be catastrophic. Add: The peak frequency would be somewhere in the mHz regime, i.e. the correct regime for eigenmodes of the Earth's crust. Consequently, the gravitational waves can couple to seismic activity relatively effectively. UPDATE: In the comments it was questioned whether any seismic activity would exceed the typical background activity on Earth.
So let's be a bit more precise. Sagittarius A* has a mass of $4\cdot 10^6 M_\odot$. The black hole at the center of Andromeda is much more massive, weighing in at about $1.2\cdot 10^8 M_\odot$. This gives a mass-ratio of about 1/32, so we can use one of the simulations from https://arxiv.org/abs/2006.04818 as a model. Our distance to the merger would be highly uncertain in the newly merged galaxy. For now let us assume a round 1 kpc. This would lead to a peak strain of $2.52\cdot 10^{-10}$ at a frequency of 0.434 mHz. Applied to the Earth, this translates to a peak power spectral density of $2.12\cdot 10^{-16} (\mathrm{m}/\mathrm{s}^2)^2/\mathrm{Hz}$. The ambient background seismic activity on Earth is given by the NLNM (new low noise model). At 0.434 mHz, this gives $1.63\cdot 10^{-17} (\mathrm{m}/\mathrm{s}^2)^2/\mathrm{Hz}$. Consequently, the signal would come in just above this noise floor, meaning it might be just measurable by sensitive seismic monitoring stations at quiet locations. Some caveats: As mentioned, the distance to the merger would be highly uncertain. Increasing the distance by a factor of 10 (well possible) would reduce the power spectral density by a factor of 100 and put it well below the ambient seismic background. The viewing angle of the merger can affect the observed strain by a factor of a few, which at the given margins could make the difference between being detectable, and not. The above assumed that the black holes were not spinning. A significant amount of spin on the heavier component could lead to a significantly higher peak strain at a higher frequency. All that being said, the effect would be orders of magnitude smaller than the type of seismic events that happen on a daily basis, and would not pose any sort of threat to anything on Earth.
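The linear scaling quoted at the top of the answer can be sanity-checked against these more careful numbers. A rough sketch (the GW150914 reference values are round numbers, and the 1 kpc distance is the assumption made above):

```python
# Order-of-magnitude check: peak strain scales as (total mass) / (distance).
h_ref = 1e-21     # GW150914 peak strain
M_ref = 65.0      # GW150914 total mass, in solar masses (rounded)
d_ref = 4.0e5     # GW150914 distance ~400 Mpc, expressed in kpc

M = 1.2e8 + 4e6   # Andromeda's SMBH + Sgr A*, in solar masses
d = 1.0           # assumed distance to the merger, in kpc

h = h_ref * (M / M_ref) * (d_ref / d)
print(f"peak strain ~ {h:.1e}")  # a few times 10^-10, matching the estimate above
```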
{ "domain": "physics.stackexchange", "id": 72094, "tags": "black-holes, gravitational-waves, estimation, galaxies, structure-formation" }
Does light accelerate as it nears a black hole?
Question: As light is affected by gravity (gravitational lensing and black holes), it would seem that gravity causes acceleration. Acceleration has two parts: direction and magnitude. It is clearly evident that gravity affects the former of the two. It would seem that it does not affect the latter component because light is already at the universal 'speed limit' (not counting tachyons). What is going on here? Is the magnitude component affected by gravity? Answer: Light always travels at the same speed. What happens as photons move through changes in gravitational potential is that they gain and lose energy, which manifests itself as red shift or blue shift. In the specific case of light moving towards a black hole, where an object with a rest mass would gain kinetic energy as it fell, a photon is blue-shifted, carrying more energy (and momentum) for each photon - in classical terms, the light is shifted to a higher frequency i.e. a shorter wavelength. For the reverse situation, light coming from deep within a potential well to an observer far away, the light is red shifted to a lower frequency (longer wavelength), with lower energy per photon.
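To put a number on the shift, the standard Schwarzschild result for a static observer at radius $r$ receiving light emitted far from the black hole is a blueshift factor of $1/\sqrt{1 - r_s/r}$. A small sketch (this formula is textbook general relativity, not stated in the answer itself):

```python
import math

def blueshift_factor(r_over_rs):
    """Frequency measured by a static observer at radius r = r_over_rs * r_s
    (Schwarzschild radii), for light emitted far from the black hole,
    relative to the emitted frequency."""
    return 1.0 / math.sqrt(1.0 - 1.0 / r_over_rs)

print(blueshift_factor(10.0))  # modest blueshift well outside the horizon
print(blueshift_factor(1.1))   # grows without bound as r approaches the horizon
```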
{ "domain": "physics.stackexchange", "id": 29004, "tags": "gravity, visible-light, speed-of-light, acceleration, faster-than-light" }
Temperature-induced wavelength shift of optical coatings?
Question: Optical coatings designed for reflection or anti-reflection are made of many thin layers which will expand when heated. What will the effect be on the wavelengths the coating will reflect when the coating is heated? For a first guess, magnesium fluoride has a thermal expansion coefficient of about 10 µm/m/K , so the affected wavelengths could be red-shifted by 1E-5/K? But this doesn't take into account refractive index changes. Does anyone know of experimental data on actual coatings? Answer: I have since found this pdf from CVI Melles Griot giving a temperature coefficient of 0.016 nm/°C at 400 nm, increasing to 0.027 nm/°C at 820 nm. This will vary between coating types but it is enough to get started.
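The quoted coefficients convert directly into fractional shifts for comparison with the 1E-5/K expansion-only guess (simple arithmetic, assuming each coefficient applies at its stated design wavelength):

```python
# Fractional temperature-induced wavelength shift, from the CVI figures above
for wavelength_nm, coeff_nm_per_C in [(400, 0.016), (820, 0.027)]:
    frac = coeff_nm_per_C / wavelength_nm
    print(f"{wavelength_nm} nm: {frac:.1e} per degree C")
```

Both come out at a few times 1E-5/K, somewhat larger than the thermal-expansion-only estimate, consistent with the refractive-index change contributing a comparable amount.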
{ "domain": "physics.stackexchange", "id": 10582, "tags": "thermodynamics, optics, visible-light, refraction, optical-materials" }
Weight of a tensor density
Question: Is there any freedom in choosing the weight of a tensor density? I have seen in some papers that they introduce a tensor density made from the metric with a special weight. There is a tensor density with weight $-\frac{1}{2}$: $$\overset{\sim}{g_{ab}} = \left(\sqrt{\det g}\right)^{-\frac{1}{2}}\, g_{ab}$$ In the paper The geometry of free fall and light propagation by Ehlers and his colleagues (Gen. Relativ. Gravit. 44 no. 6, pp. 1587–1609 (2012)), on page 1599 there is an example. You can also look at relation $2.4$ of this article. I don't understand why this special weight is chosen for the tensor density constructed from the metric. Answer: In these particular cases, the authors are interested in the conformal structure, i.e. lightcone structure, of the manifold. A conformal structure can be defined by an equivalence class of metrics, all of which are related to each other by a conformal transformation, $$g_{ab}\sim e^{\omega(x)}\bar{g}_{ab}$$ A nice way to characterize a conformal structure is to say it is just the tensor density of weight $-1/2$ that you mention. Under a conformal transformation, the metric determinant changes by $$g\mapsto e^{4\omega}\bar{g},$$ so you can see that the density weight is chosen so that the tensor density $\tilde{g}_{ab}$ is invariant under the conformal transformation. If you were in dimension other than $4$, you would choose the weight to be $-2/d$, to maintain this invariance under conformal transformations.
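The invariance can be spot-checked numerically for a diagonal metric in $d=4$, where the density is $(\det g)^{-1/4} g_{ab}$ (a toy check, not a proof):

```python
import math
import random

random.seed(0)
g = [random.uniform(0.5, 2.0) for _ in range(4)]  # diagonal metric entries, d = 4

def tilde(entries):
    """(det g)^(-1/4) * g_ab for a diagonal metric: the weight -1/2 density."""
    det = math.prod(entries)
    return [det ** -0.25 * x for x in entries]

w = 0.7                                  # arbitrary conformal factor exponent
g_conf = [math.exp(w) * x for x in g]    # conformally related metric e^w g_ab

# det picks up e^{4w}, the -1/4 power cancels the e^w on g_ab:
print(max(abs(a - b) for a, b in zip(tilde(g), tilde(g_conf))))  # ~0 (float noise)
```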
{ "domain": "physics.stackexchange", "id": 15474, "tags": "general-relativity, differential-geometry, tensor-calculus, metric-tensor" }
Compute the generating functional for the $bc$ theory
Question: I need the generating functional for the $bc$ CFT, which has $$L=\frac{1}{2\pi}(b\bar{\partial}c + b\partial\bar{c}),$$ so I can compute the correlation function $$\langle b(z_1)c(z_2)\rangle =\frac{1}{z_{12}}.$$ I know that for a scalar field theory, the generating functional is defined as $$Z[J] = \int D\phi \exp\left\{i\int d^4 x[L+J(x)\phi(x)]\right\}$$ which can be put in a more explicit form by making a change of variables and completing the square. But I can't figure out how to get an analogous expression for the $bc$ theory. I think there must be two sources $J_b$ and $J_c$. Then I have something like $$Z[J] = \int DcDb \exp\left\{i\int d^4 x[L+J_b(x)b(x)+J_c(x)c(x)]\right\}$$ and I know at some point I'm supposed to complete the square in the exponential to get it in a useful form, but I can't see how to do that here since there are no quadratic terms. Answer: The action is a quadratic form, just one consisting of off-diagonal terms. So you can formally complete the square by writing \begin{align} b\bar{\partial}c + bJ_b + J_cc = (b + J_c \bar{\partial}^{-1}) \bar{\partial} (c + \bar{\partial}^{-1} J_b) - J_c \bar{\partial}^{-1} J_b. \end{align} This is essentially the same manipulation by which one derives the propagator of the Dirac field.
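Carrying the completion of the square through, the integral over the shifted fields is source-independent and factors out, leaving (schematically, suppressing normalization and convention-dependent factors of $2\pi$) $$Z[J_b, J_c] = Z[0]\, \exp\left\{-i\int d^2x\; J_c\, \bar{\partial}^{-1} J_b\right\}.$$ Functional derivatives with respect to $J_b$ and $J_c$ then pull down the kernel of $\bar{\partial}^{-1}$; in two dimensions $\bar{\partial}_z (1/z) \propto \delta^2(z,\bar{z})$, so this kernel is proportional to $1/(z_1 - z_2)$, and the $1/2\pi$ normalization of the action fixes the coefficient so that $\langle b(z_1) c(z_2)\rangle = 1/z_{12}$.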
{ "domain": "physics.stackexchange", "id": 87395, "tags": "homework-and-exercises, conformal-field-theory, correlation-functions, propagator, ghosts" }
can hector_slam be used to create 3D models?
Question: Hello, does hector_slam create only 2D maps, or can it be used to create 3D models as well? My data comes from a Velodyne LiDAR and I need to create a 3D model of the object that is being scanned. I am looking for the right tool to use and was not sure if hector_slam could create 3D models. If it can't and you can recommend something else, please do so :) Thank you. Originally posted by chukcha2 on ROS Answers with karma: 89 on 2016-02-16 Post score: 1 Answer: hector_slam performs localization only in 2D, but if you combine that information with IMU data (for roll and pitch) and additional data about the z position (from prior knowledge, odometry or other sensors), it can be used for 3D mapping. This approach is used for building a 3D octomap-based map via mapping with known poses using an RGB-D sensor with our autonomous USAR robots, as visible in this video. Depending on your accuracy requirements, sensor motion, and scale of environment, other solutions might be worth a look. See for instance this recent video made using a Velodyne HDL-64e. ethzasl_icp_mapping also uses libpointmatcher and you could try that. Originally posted by Stefan Kohlbrecher with karma: 24361 on 2016-02-17 This answer was ACCEPTED on the original site Post score: 2 Original comments Comment by chukcha2 on 2016-02-17: Thank you Stefan. It is not immediately clear from the video the quality of the 3D model, the resolution, etc., but I will look into libpointmatcher and ethzasl_icp_mapping. I am specifically interested in mounting a VLP-16 on a drone and scanning a small building with a couple cm or better resolution. Comment by jodafo on 2016-02-29: Hi, I'm doing something similar to what you are up to. I first tried using lpm/ethzasl_icp_mapping but was disappointed by the high percentage of points that you have to filter out to get it running in realtime. I found configuring it quite hard.
Please let me know if you're better at tweaking params :) Comment by jodafo on 2016-02-29: Also, I implemented a point-to-plane algo that doesn't rely on kd-trees for surface normal approximation and source/target point association. It seems to be a lot faster (>20 iter, 100% pts) than lpm and doesn't deteriorate with map size. I'll try to write up the approach and release code end of next month. Comment by chukcha2 on 2016-03-02: @jodafo, I haven't had a chance to try it yet. But please let me know of your results :) Comment by Clack on 2016-03-03: Our team is also working on 3D SLAM by modifying some existing 2D SLAM algorithms such as hector_slam, gmapping and MRPT icp-SLAM. We are not sure whether we can successfully modify hector_slam because it seems hard to add IMU data to it, and the same goes for gmapping. Hope to see any of your progress :) Comment by hashim on 2016-04-10: Hi Clack, how much work have you done? Have you been successful in modifying hector_slam to get a 3D SLAM? Comment by Clack on 2016-04-10: We've mainly focused on building a 2D SLAM system recently. However one of our teammates planned to convert 2D point clouds to 3D using data obtained by the IMU. He will have this done in maybe one month. I will inform you as soon as we have progress. And we are also curious about your idea :) Comment by Clack on 2016-04-10: I mean the 3D point clouds are based on 2D ones obtained by the laser scanner. And IMU data are used to build a transform relationship between 2D and 3D. Comment by ateator on 2016-06-22: Clack, do you have any updates? That is exactly what I am trying to do. Comment by Clack on 2016-06-24: We recently encountered problems with 3D point cloud distortion caused by the IMU and we are looking for papers and other resources to figure out how to fix that. If we have this done we will inform you ASAP :) Comment by ateator on 2016-06-24: Thank you. Do you have any hints of things I could look into to try to get started? 
I have a 2D LaserScan working with octomap and I have ordered the Razor 9DOF IMU. I'm wondering if the third dimension is integrated automatically by ROS once I have the IMU node? Comment by Clack on 2016-06-24: I think so, the standard IMU message type in ROS is 3-dimensional, that is, the node will output 3D poses representing the pose of the IMU. But if you want to use those messages to convert your 2D LIDAR points to 3D, you need to write a new node, which is exactly what our group is working on. Comment by ateator on 2016-06-28: I look forward to hearing about your progress :) Comment by ricot on 2017-05-02: Hello, I am new to ROS and trying to do the exact same thing. @Clack what were the main hurdles you went through and were you able to get the 3D points out of the 2D Lidar? Are there guidelines somewhere that I could follow and/or is your code public? Comment by Clack on 2017-05-02: Hello, ricot. Actually I'm not working on this topic any more. However one of the members in our lab is now using a kind of framework called LOAM to build a 3D map using LiDAR and IMU. The performance is quite impressive. Hope this will help :)
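Several comments above ask how a 2D scan plus IMU orientation turns into 3D points. As a rough illustration of the geometry only (this is not code from hector_slam or any ROS package; the frame and sign conventions below are assumptions), one can rotate each planar scan point by the sensor's roll and pitch and offset it by its height:

```python
import math

def scan_to_3d(ranges, angle_min, angle_increment, roll, pitch, z=0.0):
    """Project a planar laser scan into 3D given the sensor's roll/pitch
    (e.g. from an IMU) and height z. Returns a list of (x, y, z) points."""
    pts = []
    cr, sr = math.cos(roll), math.sin(roll)
    cp, sp = math.cos(pitch), math.sin(pitch)
    for i, r in enumerate(ranges):
        a = angle_min + i * angle_increment
        # point in the sensor plane
        x, y, zz = r * math.cos(a), r * math.sin(a), 0.0
        # rotate about x (roll), then about y (pitch)
        y, zz = cr * y - sr * zz, sr * y + cr * zz
        x, zz = cp * x + sp * zz, -sp * x + cp * zz
        pts.append((x, y, zz + z))
    return pts

# level sensor: points stay in the z=0 plane
flat = scan_to_3d([1.0, 1.0], 0.0, math.pi / 2, roll=0.0, pitch=0.0)
print(flat)  # beams along +x and +y
```

A real pipeline would use the full IMU quaternion (via tf) rather than separate roll/pitch angles, and would also correct for motion during the scan, which is the point cloud distortion one commenter mentions.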
{ "domain": "robotics.stackexchange", "id": 23794, "tags": "ros, lidar, 3d-slam, hector-slam, 3dslam" }
Obtain a signal's peak value if it's frequency lies between two bin centers
Question: Please suppose the following:

- The frequency of a signal's fundamental has been estimated using FFT and some frequency estimation methods and is lying between two bin centers
- The sampling frequency is fixed
- Computational effort is not an issue

Knowing the frequency, what is the most accurate way to estimate the corresponding peak value of the signal's fundamental? One way might be to zero-pad the time signal to increase FFT resolution such that the bin center will be closer to the estimated frequency. In this scenario, one point I am not sure about is whether I can zero-pad as much as I want or whether there are some drawbacks in doing so. Another is which bin center I should select after zero padding as the one I am obtaining the peak value from (because one may not hit the frequency of interest exactly, even after zero padding). However, I am also wondering whether there is another method which may deliver better results, e.g. an estimator which uses the peak values of the surrounding two bin centers to estimate the peak value at the frequency of interest. Answer: The first algorithm that springs to mind is the Goertzel Algorithm. That algorithm usually assumes that the frequency of interest is an integer multiple of the fundamental frequency. However, this paper applies the (generalized) algorithm to the case you are interested in. Another problem is that the signal model is incorrect. It uses 2*%pi*(1:siglen)*(Fc/siglen). It should use 2*%pi*(0:siglen-1)*(Fc/siglen) for the phase to come out correctly. I also think there is a problem with the frequency Fc=21.3 being very low. Low-frequency real-valued signals tend to exhibit bias when it comes to phase/frequency estimation problems. I also tried a coarse grid search for the phase estimate, and it gives the same answer as the Goertzel algorithm. Below is a plot that shows the bias in both estimates (Goertzel: blue, Coarse: red) for two different frequencies: Fc=21.3 (solid) and Fc=210.3 (dashed).
As you can see the bias for the higher frequency is much less. The plot $x$-axis is the initial phase changing from 0 to $2\pi$.
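To make the Goertzel idea concrete: for a non-integer bin index, the generalized approach amounts to evaluating the DTFT at that single frequency. A rough plain-Python sketch follows (the direct-sum form rather than the recursive Goertzel filter; a small leakage bias from the negative-frequency image remains, especially at low frequencies, consistent with the bias discussed above):

```python
import math

def amplitude_at(x, k):
    """Estimate the amplitude of a real sinusoid at a (possibly non-integer)
    bin index k by evaluating the DTFT of x at that single frequency."""
    n = len(x)
    re = sum(x[t] * math.cos(2 * math.pi * k * t / n) for t in range(n))
    im = -sum(x[t] * math.sin(2 * math.pi * k * t / n) for t in range(n))
    # A real sinusoid splits its energy between +k and -k, hence the factor 2
    return 2.0 * math.hypot(re, im) / n

n = 1024
k_true = 21.3   # frequency between two bin centers
amp_true = 1.7
x = [amp_true * math.cos(2 * math.pi * k_true * t / n + 0.4) for t in range(n)]
print(amplitude_at(x, k_true))  # close to 1.7 (small leakage bias remains)
```

For an exactly bin-centered frequency the negative-frequency image sums to zero over the window, so the estimate is exact; for in-between frequencies the residual bias shrinks as the frequency moves away from DC, which matches the solid-vs-dashed behavior of the plots described in the answer.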
{ "domain": "dsp.stackexchange", "id": 1303, "tags": "fft, discrete-signals, signal-analysis, estimation, peak-detection" }
Sam C21 Xplained Pro - Linux toolchain
Question: I recently got an ATSAMC21N Xplained Pro dev board. I was planning to use avr-g++ and avrdude to do development (I’m running Linux so Atmel Studio isn’t an option). I’ve used avr-gcc and avrdude in the past for Atmel development but the SAM C21 chip doesn’t seem to be on the supported target list. Does anyone have an idea about what toolchain I could use to build and deploy the program to the board? Answer: The SAM chips have ARM cores, so you should look at compiling with arm-none-eabi-gcc and flashing/debugging with openocd instead. OpenOCD is a program that communicates with debugging probes for a number of processor/microcontroller architectures, analogous to avrdude for AVR. You will have to figure out how to configure openocd to talk to your board. A quick google search brought me this guide, which might point you in the right direction, although I don't have a SAM breakout board to verify it with. OpenOCD is definitely worth learning if you're going to be working with ARM chips, but you could probably also flash your code using Atmel's IDE, Atmel Studio. I shy away from using proprietary IDEs, but if you need to get up and running quickly, you could probably use it to both build and flash your project. Hope this helps. edit: I glossed over the word "Linux" in the title of your question. I believe Atmel Studio is only available for Windows, so unfortunately you're stuck using the much more versatile open source tools which are available for Linux.
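To make the suggested workflow concrete, here is a rough, untested build/flash sketch. Everything below is illustrative: the linker script and startup file names are placeholders you would take from the device support packs, and the OpenOCD config names should be checked against your own install (the Xplained Pro's on-board EDBG debugger speaks CMSIS-DAP, and the SAM C21 core is a Cortex-M0+).

```make
# Illustrative Makefile sketch -- file names and OpenOCD configs are
# placeholders; adjust them for your toolchain and OpenOCD version.
CC      = arm-none-eabi-gcc
CFLAGS  = -mcpu=cortex-m0plus -mthumb -Os -ffunction-sections -Wall
LDFLAGS = -T samc21_flash.ld -nostartfiles -Wl,--gc-sections

firmware.elf: main.o startup_samc21.o
	$(CC) $(CFLAGS) $(LDFLAGS) -o $@ $^

%.o: %.c
	$(CC) $(CFLAGS) -c -o $@ $<

flash: firmware.elf
	openocd -f interface/cmsis-dap.cfg -f target/at91samdXX.cfg \
	        -c "program firmware.elf verify reset exit"
```

The `target/at91samdXX.cfg` script shown is the SAM D-family script shipped with OpenOCD, whose flash driver also covers SAM C parts in recent versions — verify this against your OpenOCD's tcl/target directory before relying on it.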
{ "domain": "robotics.stackexchange", "id": 2212, "tags": "microcontroller, avr" }
Read Clustal file in Python
Question: This question was also asked on StackOverflow. I have a multiple sequence alignment (MSA) file derived from mafft in clustal format which I want to import into Python and save into a PDF file. I need to import the file and then highlight some specific words. I've tried to simply import the pdf of the MSA but then the highlight command doesn't work. I need to print the file like this:

Multi.txt :

CLUSTAL format alignment by MAFFT FFT-NS-i (v7.453)

Consensus --------------------------------------acgttttcgatatttatgccat
AMP       tttatattttctcctttttatgatggaacaagtctgcgacgttttcgatatttatgccat
                                                **********************
Consensus atgtgcatgttgtaaggttgaaagcaaaaatgaggggaaaaaaaatgaggtttttaataa
AMP       atgtgcatgttgtaaggttgaaagcaaaaatgaggggaaaaaaaatgaggtttttaataa
          ************************************************************
Consensus ctacacatttagaggtctaggaaataaaggagtattaccatggaaatgtatttccctaga
AMP       ctacacatttagaggtctaggaaataaaggagtattaccatggaaatgtaattccctaga
          ************************************************** *********
Consensus tatgaaatattttcgtgcagttacaacatatgtgaatgaatcaaaatatgaaaaattgaa
AMP       tatgaaatattttcgtgcagttacaacatatgtgaatgaatcaaaatatgaaaaattgaa
          ************************************************************
Consensus atataagagatgtaaatatttaaacaaagaaactgtggataatgtaaatgatatgcctaa
AMP       atataagagatgtaaatatttaaacaaagaaactgtggataatgtaaatgatatgcctaa
          ************************************************************
Consensus ttctaaaaaattacaaaatgttgtagttatgggaagaacaaactgggaaagcattccaaa
AMP       ttctaaaaaattacaaaatgttgtagttatgggaagaacaaactgggaaagcattccaaa
          ************************************************************
Consensus aaaatttaaacctttaagcaataggataaatgttatattgtctagaaccttaaaaaaaga
AMP       aaaatttaaacctttaagcaataggataaatgttatattgtctagaaccttaaaaaaaga
          ************************************************************
Consensus agattttgatgaagatgtttatatcattaacaaagttgaagatctaatagttttacttgg
AMP       agattttgatgaagatgtttatatcattaacaaagttgaagatctaatagttttacttgg
          ************************************************************
Consensus gaaattaaattactataaatgttttattataggaggttccgttgtttatcaagaattttt
AMP       gaaattaaattactataaatgttttattataggaggttccgttgtttatcaagaattttt
          ************************************************************
Consensus agaaaagaaattaataaaaaaaatatattttactagaataaatagtacatatgaatgtga
AMP       agaaaagaaattaataaaaaaaatatattttactagaataaatagtacatatgaatgtga
          ************************************************************
Consensus tgtattttttccagaaataaatgaaaatgagtatcaaattatttctgttagcgatgtata
AMP       tgtattttttccagaaataaatgaaaatgagtatcaaattatttctgttagcgatgtata
          ************************************************************
Consensus tactagtaacaatacaacattgga----------------------------------
AMP       tactagtaacaatacaacattggattttatcatttataagaaaacgaataataaaatg
          ************************

How can I import the alignment and print it in the new PDF with the right alignment of the sequences? Thanks

Answer: Ok, figured out a way, not sure it's the best one. You need to install fpdf2 (pip install fpdf2):

from io import StringIO
from Bio import AlignIO   # Biopython 1.80
from fpdf import FPDF     # pip install fpdf2

alignment = AlignIO.read("Multi.txt", "clustal")
stri = StringIO()
AlignIO.write(alignment, stri, 'clustal')
# print(stri.getvalue())
stri_lines = [i for i in stri.getvalue().split('\n')]
# print(stri_lines)

pdf = FPDF(orientation="P", unit="mm", format="A4")
# Add a page
pdf.add_page()
pdf.add_font('FreeMono', '', 'FreeMono.ttf')
pdf.set_font("FreeMono", size=8)
for x in stri_lines:
    pdf.cell(0, 5, txt=x, border=0, new_x="LMARGIN", new_y="NEXT", align='L', fill=False)
    # print(len(x))
pdf.output("out.pdf")

output pdf out.pdf :

Not sure why the file header is changed; I think it is something within Biopython (!!! ???). 
You can check by adding:

with open('file_output.txt', 'w') as filehandler:
    AlignIO.write(alignment, filehandler, 'clustal')

I had to place a ttf FreeMono font (a monospaced font) in my script directory, see pdf.add_font('FreeMono', '', 'FreeMono.ttf'); otherwise the alignment won't be printed in the correct way (see Which fonts have the same width for every character?). Attached is a png of my pdf. Note that you can highlight it while printing, using:

pdf.set_fill_color(255, 255, 0)
filling = False
for x in stri_lines:
    if 'Consensus' in x:
        filling = True
    else:
        filling = False
    pdf.cell(0, 5, txt=x, border=0, new_x="LMARGIN", new_y="NEXT", align='L', fill=filling)

or something similar:
{ "domain": "bioinformatics.stackexchange", "id": 2350, "tags": "python, sequence-alignment, phylogenetics, multiple-sequence-alignment" }
Best way to procedurally generate an orbit given mass and eccentricity
Question: As stated here, I had badly expressed my problem and made what we call an XY problem (the question was well answered nonetheless), so I restate the question. Sorry for the inconvenience.

I have a 2D orbit simulator using the Euler-Cromer method for now. What I'm trying to do is to generate a central body with a given mass and a body orbiting it, with the orbit having:

- a procedurally generated eccentricity in the range $\mathbf{0≤e<1}$
- a procedurally generated semi-major axis

What I'm trying to achieve is to get the initial velocity for the simulator that produces any orbit corresponding to this mass, eccentricity and semi-major axis - no matter the plane of the orbit nor the direction. For simplification, let's state that everything happens in the same plane and goes clockwise. So far, thanks to @ConnorGarcia 's answer, I guess that I can:

- use the eccentricity to find $\mathbf{c}$ - the distance between the focus and the center of the ellipse - like this: $\mathbf{c = a \cdot e}$
- place the body at a distance from the orbited body equal to the distance between one focus and apogee, using $\mathbf{a + c}$
- use the vis-viva equation to get the speed, and set the velocity vector using the speed as magnitude, making its direction perpendicular to the semi-major axis.

Does it feel right? If I were building this generator in 3D, would I just have to add the plane to make it correct? Answer: ANSWER I would put the massive object at the origin. Then, I would calculate the velocity at periapsis, when $r=a(1-e)$ as $$v = \sqrt{GM\frac{1}{a}\left(\frac{1+e}{1-e}\right)}$$ (stolen from Uhoh's answer to Calculating object velocity at perihelion ). Put the periapsis on the positive y axis. Then the velocity vector at periapsis is $\vec{v}=[v,0]$. If you want the periapsis at a counter-clockwise angle $\omega$ from the positive y-axis, then multiply by your standard angle rotation matrix.
EXAMPLE Let the mass of the central body be twice a solar mass $m=2M_{\odot}$ with a planet with semi-major axis $a=1$ AU and highly eccentric with $e=0.5$. Then periapsis is at $r=0.5$ AU. Here, the gravitational constant is $$G \approx 887.352 \frac{AU}{M_{\odot}}(km/s)^2$$ Plugging these values into Uhoh's equation for velocity above, we get $|\vec{v}| \approx 72.97$ km/s. So, when the orbiting body's position at periapsis is at $\vec{p} = [0,0.5]$ AU, its velocity is $\vec{v} = [72.97,0]$ km/s. Want to do this in 3D? Then $\vec{p} = [0,0.5,0]$, $\vec{v} = [72.97,0,0]$, with an argument of periapsis $\omega$, inclination $i$, and longitude of ascending node $\Omega$, then we can calculate new position and velocity vectors as $$\vec{p}' = R_z(\Omega)R_x(i)R_z(\omega)\vec{p}$$ and $$\vec{v}' = R_z(\Omega)R_x(i)R_z(\omega)\vec{v}$$ where $R_x(\theta)$ and $R_z(\theta)$ are the 3D rotation matrices around the x and z axes respectively. Note that if $i=\omega=\Omega=0$ then all of the rotation matrices are the identity matrix so no rotations take place.
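The example numbers above are easy to verify, and the 2D recipe fits in a few lines of plain Python (using the constant $G \approx 887.352$ in the answer's units):

```python
import math

G = 887.352          # AU * (km/s)^2 per solar mass, as quoted in the answer
M = 2.0              # central mass in solar masses
a, e = 1.0, 0.5      # semi-major axis (AU) and eccentricity

# speed at periapsis from the vis-viva-derived formula in the answer
v = math.sqrt(G * M * (1.0 / a) * ((1 + e) / (1 - e)))
print(round(v, 2))   # ~72.97 km/s, matching the worked example

# periapsis on the +y axis, velocity along +x, then rotate the orbit
# in-plane by an argument of periapsis omega (2D rotation of both vectors)
pos = (0.0, a * (1 - e))
vel = (v, 0.0)

def rot2d(p, theta):
    c, s = math.cos(theta), math.sin(theta)
    return (c * p[0] - s * p[1], s * p[0] + c * p[1])

omega = math.radians(30.0)
pos_r, vel_r = rot2d(pos, omega), rot2d(vel, omega)
# rotation preserves r and |v|, so the orbit's shape is unchanged
assert math.isclose(math.hypot(*pos_r), a * (1 - e))
assert math.isclose(math.hypot(*vel_r), v)
```

The 3D version replaces `rot2d` with the product $R_z(\Omega)R_x(i)R_z(\omega)$ of rotation matrices applied to the 3-vectors, exactly as written in the answer.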
{ "domain": "astronomy.stackexchange", "id": 5829, "tags": "orbit, n-body-simulations, eccentricity" }
communicate between ros and non-ros program
Question: I want a ROS program (the kind we usually write) and a non-ROS program (one that doesn't include the header file ros.h or anything else related to ROS) to communicate; the two programs should be able to send and receive data from each other. Are there any methods that can do this, or some ROS API that can be used to make the work easier? If you know anything about this, please tell me. Thank you. Originally posted by cros on ROS Answers with karma: 3 on 2015-05-16 Post score: 0 Answer: How does the other program communicate now? A ROS program has access to all the ROS infrastructure, but it's just a process like any other and can communicate with any other process using whatever other means you have at your disposal. If you just have some ROS-agnostic classes that you want to use and they don't need to be an independent process, then include them, link against them, and use them as you would for any other program. If you want to communicate with some non-ROS process from a ROS node, then you have any number of options available to you, e.g., ActiveMQ, shared memory, etc. Originally posted by Tom Moore with karma: 13689 on 2015-05-16 This answer was ACCEPTED on the original site Post score: 1 Original comments Comment by cros on 2015-05-16: Thank you. I want to communicate with some non-ROS process from a ROS node — is there some built-in ROS API I can use? Comment by Tom Moore on 2015-05-16: I don't think so. Is this an existing process/program, or just a non-ROS class? You can always wrap your non-ROS classes in ROS if you want to pass data to them.
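To make the last option concrete: a common low-tech choice is a plain TCP socket between the ROS node and the non-ROS process. The sketch below shows only the socket side, with made-up framing — it is not a ROS API, and in a real node the client call would live inside a subscriber callback:

```python
import socket
import threading

def non_ros_server(host="127.0.0.1", port=0):
    """Echo-style server standing in for the non-ROS process."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind((host, port))
    srv.listen(1)

    def serve():
        conn, _ = srv.accept()
        with conn:
            data = conn.recv(1024)
            conn.sendall(b"ack:" + data)   # pretend to process the data

    threading.Thread(target=serve, daemon=True).start()
    return srv.getsockname()[1]            # OS-assigned port

def send_from_node(port, payload):
    """What a ROS subscriber callback could do with each message."""
    with socket.create_connection(("127.0.0.1", port)) as sock:
        sock.sendall(payload)
        return sock.recv(1024)

port = non_ros_server()
print(send_from_node(port, b"scan:1.0,2.0"))   # b'ack:scan:1.0,2.0'
```

The same shape works for shared memory or a message broker like ActiveMQ; only the transport inside the two helper functions changes.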
{ "domain": "robotics.stackexchange", "id": 21707, "tags": "ros" }
Does burning always produce a flame and if so why?
Question: I have been looking at thermodynamics and I tried to find the answer on the internet, but nothing of relevance came up and even this site did not have the answer. Answer: Does burning always produce a flame and if so why? No. And the reason generally involves the character of the fuel (chemical and physical properties) as well as the availability of sufficient oxidizer (commonly oxygen). Combustion (the more technical term for burning) does not necessarily involve a flame. Three broad categories (there are several intermediate forms) are: smoldering combustion, glowing combustion, and flaming combustion. Smoldering combustion is the slow, lower temperature, flameless form of combustion. We see this in the end stages of fires where materials can smolder for long periods of time without being noticed. Smoldering combustion involves solid fuel materials such as coal, wood, cellulose, tobacco, cotton and some synthetic polymers. It occurs primarily in the interior of porous combustible materials. A familiar example is a lit cigarette sitting in an ashtray. Glowing combustion, a.k.a. surface burning, is a reaction with oxygen (or other oxidizer) at the surface of a solid fuel in which heat and light are produced, but no flame. In contrast to flaming combustion, fuel oxidation occurs in the solid phase of the fuel, rather than the gas phase. A familiar example is when a cigarette glows when puffed. It should be noted that both smoldering and glowing combustion can rapidly transition to flaming combustion, particularly when given an enriched supply of oxygen. An example of intentionally doing so is the use of a bellows to increase the rate of combustion of a fire. Bedding fires due to smoking in bed, involving mattresses made before newer safety standards were promulgated, are another example: smoldering mattresses can suddenly burst into flames due to aeration (resulting from the movement of the sleeper). Flaming combustion occurs in the gaseous phase of fuels. 
Generally, flaming combustion occurs most readily in gaseous fuels, followed by liquids and lastly solids. Flaming combustion occurs most easily in gaseous fuels simply because the fuel is already in the gaseous phase. Liquid fuels are classified or grouped as being either flammable or combustible, but both can involve flaming combustion once ignited. The main difference is the temperature at which the rate of vaporization above the liquid is sufficient to be ignited. For flammable fuels, ignition can occur at or below normal working temperatures. A common example is gasoline. Combustible liquids have higher temperature flashpoints. The actual categorizations can vary, but typically begin over 100 F. They generally require some preheating or atomizing in order to obtain a sufficient rate of vaporization to ignite. A common example is kerosene. Solid fuels (with some exceptions) are generally the most difficult to ignite and flame. Generally this is because they need to be heated and decomposed (a process called pyrolysis) in order to produce volatile gases. Generally, flaming occurs in the vapor phase and not in the solid itself. There's a lot more to this stuff than I have covered in this limited forum. Hope it helps.
{ "domain": "physics.stackexchange", "id": 60128, "tags": "thermodynamics, combustion" }
Rosjava gradle build java classpath
Question: Hi, I've spent the last week learning gradle with little success. It seems overly complex and unnecessary for rosjava, but I will try to put my complaints aside. No matter what I do, after a gradle build (either using gradlew installApp or gradle build), my java class path is not set. I check my class path with echo $CLASSPATH and it always spits out a blank line. I even tried this from the pubsub tutorial and got the same result. This leads me to believe that it is normal behavior. However, if this is the case, how do you include dependencies? I follow the gradle default set up with my source code in main/src/main/java and my external .jar in src/main/resources, which gradle seems to use as the default (and I specify it in the gradle build script anyway for redundancy). Could someone please help?

src located in: /src/main/java/org/ros/rosd where org/ros/rosd is the package.
.jar in: /src/main/resources/d.jar

I know the jar runs fine, and after building I can manually run the java code:

roscore &
./build/install/rosd/bin/rosd org.ros.rosd.talker &
./build/install/rosd/bin/rosd org.ros.rosd.listener &

However, whenever I try to initialize a constructor from something in the .jar, it fails to recognize it as an actual resource. For example, adding the following to talker.java: new ds(); throws a compile-time error. Though, I can manually run the jar with java -jar d.jar

edit: I should note that within talker.java I import edu.wpi.d; and it doesn't find the package, which is in d.jar. Your help would be greatly appreciated! --James Originally posted by jforkey on ROS Answers with karma: 82 on 2013-02-26 Post score: 2 Answer: I've solved this problem. If you want to define an external jar file located on your own system within gradle, it's as simple as adding: compile files('libs/d.jar') to the dependencies.
So your dependencies look something like:

dependencies {
    compile 'ros.rosjava_core:rosjava:0.0.0-SNAPSHOT'
    compile files('libs/d.jar')
}

An entire week on that tiny bit. At least I understand gradle dependencies quite well now. Cheers, James Originally posted by jforkey with karma: 82 on 2013-02-26 This answer was ACCEPTED on the original site Post score: 1 Original comments Comment by Yeison Rodriguez on 2013-03-24: Thanks James. I'm going to use this to add the slf4j logger to my classpath. I'm having some trouble building the rosjava build. Comment by uzair on 2014-01-20: I created another folder called lib in my package and put my jar in that lib folder. I added this line in my build.gradle: "compile files('lib/findtask-ros-1.0.0-SNAPSHOT.jar')" and so my dependencies look like this:

dependencies {
    compile project(':rosjava')
    compile files('lib/findtask-ros-1.0.0-SNAPSHOT.jar')
}

Is this all I need to do? Do I also need to include an import statement for the packages in this jar in my code?
{ "domain": "robotics.stackexchange", "id": 13065, "tags": "ros, gradle, ros-groovy, rosjava" }
Variation of Torsion-Free Spin Connection
Question: In the book 'Supergravity' by Freedman and van Proeyen, in exercise (7.27) it is written: To calculate [the variation $\delta\omega_{\mu ab}$ of the torsion-free spin connection], consider the variation of the Cartan structure equation (7.81) without torsion: $d\delta e^a+\omega^a{}_b\wedge\delta e^b+\delta\omega^a{}_b\wedge e^b=0$. This $2$-form equation is equivalent to the component relation \begin{align} D_{[\mu}\delta e_{\nu]}^a+(\delta\omega_{[\mu}{}^{ab})e_{\nu]b}=0 \quad\text{with}\quad D_\mu\delta e_\nu^a\equiv \partial_\mu\delta e_\nu^a+\omega_\mu{}^{ab}\delta e_{b\nu}\,. \tag{7.95} \end{align} With the structure of (7.92) in mind, you should be able to derive \begin{align} e_\nu^a\, e_\rho^b\,\delta\omega_{\mu ab}=(D_{[\mu}\delta e_{\nu]}^a)e_{\rho a}-(D_{[\nu}\delta e_{\rho]}^a)e_{\mu a}+(D_{[\rho}\delta e_{\mu]}^a)e_{\nu a}\,. \tag{7.96} \end{align} Eq. (7.92) is \begin{align} \omega_{\mu[\nu\rho]}(e)=\tfrac12(\Omega_{[\mu\nu]\rho}-\Omega_{[\nu\rho]\mu}+\Omega_{[\rho\mu]\nu})=\omega_{\mu ab}(e)\,e_\nu^a\, e_\rho^b \tag{7.92} \end{align} where, as per eq. (7.89) \begin{align} \Omega_{[\mu\nu]\rho}=(\partial_\mu e_\nu^a-\partial_\nu e_\mu^a)\,e_{a\rho}\,. \tag{7.89} \end{align} I multiply eq.
(7.95) with the Minkowski metric $\eta_{ac}$ to get \begin{align} &\partial_{[\mu}\delta(\eta_{ac}e_{\nu]}^a)+\omega_{[\mu|cb}\,\delta e_{\nu]}^b+(\delta\omega_{[\mu|cb})e_{\nu]}^b=0\quad (\because \partial_\mu\eta_{ac}=0\,\, \& \,\, \delta\eta_{ac}=0\, \text{as}\,\,\eta_{ac}\,\text{is constant}) \\ \Rightarrow\,\,& \partial_{[\mu}\delta e_{\nu] a}+\omega_{[\mu|ab}\delta e_{\nu]}^b+(\delta\omega_{[\mu|ab}) e_{\nu]}^b=0 \\ \Rightarrow\,\,& D_{[\mu}\delta e_{\nu] a}+\tfrac12[(\delta\omega_{\mu ab}) e_{\nu}^b-(\delta\omega_{\nu ab}) e_{\mu}^b]=0 \\ \Rightarrow\,\,& (\delta\omega_{\mu ab}) e_{\nu}^b=2D_{[\mu}\delta e_{\nu] a}+(\delta\omega_{\nu ab}) e_{\mu}^b \end{align} Relabelling $\nu$ as $\rho$ and then multiplying the above equation with $e_\nu^a$, we get, \begin{align} e_{\nu}^a\, e_{\rho}^b\,(\delta\omega_{\mu ab})=2\,(D_{[\mu}\delta e_{\rho] a})e_\nu^a+(\delta\omega_{\rho ab}) e_{\mu}^b e_\nu^a\,. \tag{c1} \end{align} We can see that this equation is different from eq. (7.96). The book says that to go from eq. (7.95) to eq. (7.96), eq. (7.92) has been used. Does it mean that Freedman and Proeyen derived the expression for $\delta\omega_{\rho ab}$ appearing on the right side of eq. (c1), by varying eq. (7.92)? If yes, then why not simply derive the variation of the spin connection directly by varying eq. (7.92) instead of starting from the variation of the Cartan structure equation, which is eq. (7.95)? Can someone show a derivation of eq. (7.96)? Answer: I was looking for the same answer and I stumbled upon your question. It's very late but I guess leaving a response is probably gonna be useful for someone in our situation in the future! I'll just answer your last question, by showing a simple derivation of eq.(7.96). Consider the eq.(7.95) \begin{equation} D_{[\mu}\delta e^{a}_{\rho]}e_{\nu a}=\dfrac{1}{2}(\delta\omega_{\mu ab}e^{a}_{\nu}e^{b}_{\rho}-\delta\omega_{\rho ab}e^{a}_{\nu}e^{b}_{\mu}).
\end{equation} Now, let's permute the indices to obtain the two expressions: \begin{align} D_{[\nu}\delta e^{a}_{\mu]}e_{\rho a}&=\dfrac{1}{2}(\delta\omega_{\nu ab}e^{a}_{\rho}e^{b}_{\mu}-\delta\omega_{\mu ab}e^{a}_{\rho}e^{b}_{\nu})\\ D_{[\rho}\delta e^{a}_{\nu]}e_{\mu a}&=\dfrac{1}{2}(\delta\omega_{\rho ab}e^{a}_{\mu}e^{b}_{\nu}-\delta\omega_{\nu ab}e^{a}_{\mu}e^{b}_{\rho}). \end{align} By summing the first two and subtracting the third equation, using $\omega_{\mu (ab)}=0$, you easily get your (7.96).
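For completeness, the cancellation in that last step can be spelled out. In the combination (first) + (second) − (third), the $\delta\omega_{\nu}$ and $\delta\omega_{\rho}$ terms drop out purely by the antisymmetry $\delta\omega_{\mu(ab)}=0$; schematically:

```latex
% The delta-omega_rho terms appear in the combination
-\tfrac12\,\delta\omega_{\rho ab}\left(e^{a}_{\nu}e^{b}_{\mu}+e^{a}_{\mu}e^{b}_{\nu}\right)=0,
% which vanishes because relabelling a <-> b in the second summand gives
\delta\omega_{\rho ab}\,e^{a}_{\mu}e^{b}_{\nu}
   =\delta\omega_{\rho ba}\,e^{b}_{\mu}e^{a}_{\nu}
   =-\delta\omega_{\rho ab}\,e^{a}_{\nu}e^{b}_{\mu}.
% The delta-omega_nu terms cancel the same way, while the two
% delta-omega_mu terms add up:
\tfrac12\,\delta\omega_{\mu ab}\left(e^{a}_{\nu}e^{b}_{\rho}-e^{a}_{\rho}e^{b}_{\nu}\right)
   =\delta\omega_{\mu ab}\,e^{a}_{\nu}e^{b}_{\rho}.
```

Only $\delta\omega_{\mu ab}\,e^a_\nu e^b_\rho$ survives on the right-hand side, which is why the cyclic sum-and-subtract trick isolates the variation of the spin connection.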
{ "domain": "physics.stackexchange", "id": 98071, "tags": "differential-geometry, differentiation, calculus, spinors, supergravity" }
Warning using the Package Robot_localization
Question: Hi everybody, just a question: is the following WARNING normal when using the robot_localization package?

WARNING The following node subscriptions are unconnected:
 /ekf_localization:
   /set_pose

Regards Originally posted by Fenix on ROS Answers with karma: 5 on 2016-01-29 Post score: 0 Answer: 'ekf_localization' is subscribed to '/set_pose', but nothing is publishing it. I think that's a ROS-level warning, and doesn't originate within 'ekf_localization_node' itself. Originally posted by Tom Moore with karma: 13689 on 2016-02-05 This answer was ACCEPTED on the original site Post score: 0 Original comments Comment by Fenix on 2016-02-10: Thank you so much Tom.
{ "domain": "robotics.stackexchange", "id": 23588, "tags": "navigation, robot-localization" }
Efficient parsing of FASTQ
Question: FASTQ is a notoriously bad format. This is because it uses the same @ character for the id line as it does for quality scores. Deciding what is a quality score and what is an id is a tricky endeavor with many pitfalls. I'd like your opinion of my while 1 approach in the function read_fastq. This works, but I'd like your tips and ideas on ways of improving it. def read_fastq(fileH): """ takes a fastq file as input yields idSeq, sequence and score for each fastq entry """ #initialize the idSeq, sequence, score and index idSeq, sequence, score = None, None, None """ main loop structure: An outer while loop will run until the file runs out of lines. If the line starts with @ and score exists, yield the id, sequence and score. this is where the yielding happens in our loop, it could be considered where each round of the loop ends The first id is recorded on the first iteration. If there is no sequence, begin an inner while loop, read lines into sequence until we hit a + character, and break out of inner while loop if there is no score, begin an inner while loop where we will increment score until we have the same number of characters as sequence, this prevents interpretting a header as a score """ while 1: line = fileH.readline() #break if we hit the end of the file if not line: break if line.startswith('@') and score: #before yielding the sequence, remove non-ascii characters sequence = legalChar(sequence) yield idSeq, sequence, score #reset to default values sequence = None score = None idSeq = line.rstrip() elif not idSeq: #get our first idSeq idSeq = line.rstrip() continue elif not sequence: sequence = "" while not line.startswith('+'): sequence += line.rstrip().replace(' ', '') line = fileH.readline() elif not score: score = [] #begin collecting our score, only collect as many chars as in our sequence while 1: score += line.rstrip().replace(' ', '') if len(score) >= len(sequence): break else: line = fileH.readline() #yield our final idSeq, sequence and 
score yield idSeq, sequence, score Answer: 1. Bugs All the bugs in this section are to do with handling of bad input; see §2.1 below for more about this issue. If there are no sequences in the input, your function wrongly yields an output sequence: >>> from StringIO import StringIO >>> list(read_fastq(StringIO(''))) [(None, None, None)] If the input is wrongly formatted, your program can go into an infinite loop waiting for a line starting with a plus sign: >>> list(read_fastq(StringIO('@ID\nSEQUENCE\n'))) ^C Traceback (most recent call last): File "<stdin>", line 1, in <module> File "cr32897.py", line 155, in read_fastq sequence += line.rstrip().replace(' ', '') KeyboardInterrupt If the input is wrongly formatted, your program can go into an infinite loop waiting for the end of the quality data: >>> list(read_fastq(StringIO('@ID\nSEQUENCE\n+ID\nSCORE'))) ^C Traceback (most recent call last): File "<stdin>", line 1, in <module> File "cr32897.py", line 162, in read_fastq if len(score) >= len(sequence): KeyboardInterrupt 2. Other comments on your code You don't detect or report problems in the input. You just ignore or suppress them. This leads to outright bugs, as noted in §1 above, but it also means that you don't detect other problems with the input, like these: missing sequence id; junk lines before the first sequence or between sequences; sequence id on + line fails to match sequence id on @ line; length of quality data fails to match length of sequence data; quality data contains characters outside the correct range. You shouldn't ignore or suppress errors in the input, because you need to know about these errors in order to discover problems with your processing pipeline. What if you are being fed the wrong input? What if the input is in the wrong format (for example, FASTA instead of FASTQ)? What if there is a bug in the code that generated the input? 
If you don't detect errors then you will end up generating bogus output and causing trouble for whatever process is next in your processing pipeline. When writing code that parses input data, the code for detecting and reporting errors typically constitutes the majority of the code. See §3 below for how I would go about writing this kind of parser.

The first item of each output tuple (the sequence id) starts with the @ sign, but the @ sign is not part of the id.

Your claim, "Deciding what is a quality score and what is an id is a tricky endeavor with many pitfalls" is a bit exaggerated. As far as I can see there is just one pitfall, namely that you might incorrectly stop reading the quality data when you encounter a line starting with @. If there are other pitfalls, you should probably mention them in a comment in the code so that we can check that they are all avoided.

It's not clear to me what the H means in the variable name fileH. Does it stand for "handle"? If that's what you mean, you should write it out for clarity: file_handle. But really there's no reason not just to use the name file here.

Your code contains two docstrings, but only the first is actually available to a user via the help function. The second docstring should be a comment instead — or better still, break it up into small comments that appear next to the relevant bits of code.

The actual comments could be better. There is really no need to write comments like this:

    line = fileH.readline()
    #break if we hit the end of the file
    if not line:
        break

You can expect Python programmers to know that file.readline returns an empty string to indicate end-of-file; or if not, they can find out by running help(file.readline).

Some of the comments are wrong, for example here:

    #get our first idSeq
    idSeq = line.rstrip()

This only gets the "first" idSeq if there was no previous line starting with an @.
For an infinite loop, you write:

    while 1:

but it's conventional to write:

    while True:

In Python, a file object is also an iterator that yields the lines in the file, so instead of

    line = fileH.readline()
    #break if we hit the end of the file
    if not line:
        break

you can write:

    line = next(fileH)

This raises StopIteration when there are no more lines.

You build up the sequence string by repeated addition:

    sequence = ""
    while not line.startswith('+'):
        sequence += line.rstrip().replace(' ', '')
        line = fileH.readline()

In Python this is an anti-pattern that results in quadratic space and time consumption. The efficient way to build up a string from components is to put the components in a list and then call str.join:

    sequence_lines = []
    while not line.startswith('+'):
        sequence_lines.append(line.rstrip().replace(' ', ''))
        line = fileH.readline()
    sequence = ''.join(sequence_lines)

You build up the score in the form of a list rather than a string:

    score = []
    while 1:
        score += line.rstrip().replace(' ', '')
        ...

which means that your results come out like this:

    ('@ID', 'GATTTGGG', ['!', "'", "'", '*', '(', '(', '(', '('])

Is that really what you want? I would have expected score to come out as a string, like this:

    ('ID', 'GATTTGGG', "!''*((((")

3. Revised code

Here's some revised code that fixes the problems above, and is much more robust in its detection and reporting of errors in the input.

    class Error(Exception):
        pass

    class Line(str):
        """A line of text with associated filename and line number."""
        def error(self, message):
            """Return an error relating to this line."""
            return Error("{0}({1}): {2}\n{3}"
                         .format(self.filename, self.lineno, message, self))

    class Lines(object):
        """Lines(filename, iterator) wraps 'iterator' so that it yields Line
        objects, with line numbers starting from 1. 'filename' is used in
        error messages.

        """
        def __init__(self, filename, iterator):
            self.filename = filename
            self.lines = enumerate(iterator, start=1)

        def __iter__(self):
            return self

        def __next__(self):
            lineno, s = next(self.lines)
            line = Line(s)
            line.filename = self.filename
            line.lineno = lineno
            return line

        # For compatibility with Python 2.
        next = __next__

    def read_fastq(filename, iterator):
        """Read FASTQ data from 'iterator' (which may be a file object or
        any other iterator that yields strings) and generate tuples
        (sequence name, sequence data, quality data). 'filename' is used
        in error messages.

        """
        # This implementation follows the FASTQ specification given here:
        # <http://nar.oxfordjournals.org/content/38/6/1767.full>
        import re
        at_seqname_re = re.compile(r'@(.+)$')
        sequence_re = re.compile(r'[!-*,-~]*$')
        plus_seqname_re = re.compile(r'\+(.*)$')
        quality_re = re.compile(r'[!-~]*$')

        lines = Lines(filename, iterator)
        for line in lines:
            # First line of block is @<seqname>.
            m = at_seqname_re.match(line)
            if not m:
                raise line.error("Expected @<seqname> but found:")
            seqname = m.group(1)
            try:
                # One or more lines of sequence data.
                sequence = []
                for line in lines:
                    m = sequence_re.match(line)
                    if not m:
                        break
                    sequence.append(m.group(0))
                if not sequence:
                    raise line.error("Expected <sequence> but found:")

                # The line following the sequence data consists of a plus
                # sign and an optional sequence name (if supplied, it must
                # match the sequence name from the start of the block).
                m = plus_seqname_re.match(line)
                if not m:
                    raise line.error("Expected +[<seqname>] but found:")
                if m.group(1) not in ['', seqname]:
                    raise line.error("Expected +{} but found:".format(seqname))

                # One or more lines of quality data, containing the same
                # number of characters as the sequence data.
                quality = []
                n = sum(map(len, sequence))
                while n > 0:
                    line = next(lines)
                    m = quality_re.match(line)
                    if not m:
                        raise line.error("Expected <quality> but found:")
                    n -= len(m.group(0))
                    if n < 0:
                        raise line.error("<quality> is longer than <sequence>:")
                    quality.append(m.group(0))
                yield seqname, ''.join(sequence), ''.join(quality)
            except StopIteration:
                raise line.error("End of input before sequence was complete:")
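As an aside, the one pitfall named in §2 (a quality line that begins with '@') is avoided by any parser that reads quality data by length rather than by sentinel characters. Below is a minimal, self-contained sketch of that idea; it is an illustration, not the revised parser above, and it assumes the simpler four-line record convention with no line wrapping:

```python
import io

def parse_fastq_records(handle):
    # Minimal four-line-record FASTQ reader (assumes no line wrapping).
    # Reading the quality line by position, rather than scanning for '@',
    # avoids misreading a quality string that happens to start with '@'.
    while True:
        header = handle.readline().rstrip()
        if not header:
            return
        if not header.startswith('@'):
            raise ValueError("expected @<seqname>, got: " + header)
        sequence = handle.readline().rstrip()
        plus = handle.readline().rstrip()
        if not plus.startswith('+'):
            raise ValueError("expected +[<seqname>], got: " + plus)
        quality = handle.readline().rstrip()
        if len(quality) != len(sequence):
            raise ValueError("quality length does not match sequence length")
        yield header[1:], sequence, quality

# The quality string '@!!!' starts with '@' and is still read correctly.
records = list(parse_fastq_records(io.StringIO('@ID\nGATT\n+\n@!!!\n')))
```

Because the quality line is identified by its position in the record rather than by its first character, no special-casing of '@' is needed at all.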
{ "domain": "codereview.stackexchange", "id": 4818, "tags": "python, parsing, bioinformatics" }
Check ROS Version in CMake File with Catkin
Question: Is there a way to check the version of ROS being used in the CMakeLists.txt file? i.e., are there any variables or macros to query the version?

Originally posted by rtoris288 on ROS Answers with karma: 1173 on 2014-04-04
Post score: 2

Answer: The question is what you mean by "ROS version". Each package has its own version number and they evolve independently. After calling find_package() to find a catkin package foo you can access the version variables for it (as is the recommended CMake standard):

    ${foo_VERSION}
    ${roscpp_VERSION_MAJOR}
    ${roscpp_VERSION_MINOR}
    ${roscpp_VERSION_PATCH}

Originally posted by Dirk Thomas with karma: 16276 on 2014-04-04
This answer was ACCEPTED on the original site
Post score: 3

Original comments

Comment by rtoris288 on 2014-04-07: Great, this will do the trick!
{ "domain": "robotics.stackexchange", "id": 17536, "tags": "ros, catkin, build, cmake" }
SQLite database for a micro/tumble blog application
Question: I'm creating a personal website where, among other things, there is a blog, or a "microblog" (I'm still not sure how to define the two), where I will be writing articles and having users, who will create accounts, write comments on the posts. In addition, the blog posts will be tagged to allow for easier searching of specific blog posts. I was also planning on having the users' comments and posts be listed on their profile page, so that it is possible to view all comments or all posts made by a specific user (and reach said comment or post from their profile). The blog will be built using Python 3, Flask, and SQLAlchemy for the backend. The structure for the website will be a list of Users, which has a list of Posts that they've created (and are related to in a one-to-many relationship). Each Post will have only one author (one-to-one), but can be tagged a variety of Tags (many-to-many since many posts can be tagged with many tags). Each post will also have a list of comments (multiple comments), but each comment can only be linked to one Post and one User (the author). Does the models code that I have below accurately describe and implement the design of my database that I'm trying to achieve? 
    from hashlib import md5
    from app import lm, db
    from flask.ext.sqlalchemy import SQLAlchemy
    from flask.ext.login import LoginManager, UserMixin


    class User(UserMixin, db.Model):
        __tablename__ = 'users'
        id = db.Column(db.Integer, primary_key=True)
        join_date = db.Column(db.DateTime)
        username = db.Column(db.String(64), index=True, unique=True,
                             nullable=False)
        realname = db.Column(db.String(128), index=True)
        email = db.Column(db.String(128), index=True)
        role = db.Column(db.String(64))
        about_me = db.Column(db.String(500))
        last_seen = db.Column(db.DateTime)
        posts = db.relationship('Post', backref='author', lazy='dynamic')
        comments = db.relationship('Comment', backref='author', lazy='dynamic')

        def avatar(self, size):
            gravatar_url = 'http://www.gravatar.com/avatar/%s?d=identicon&s=%d&r=pg'
            return gravatar_url % md5(self.email.encode('utf-8')).hexdigest()

        def __repr__(self):
            _repr = '<models.User instance; ID: {}; username: {}>'
            return _repr.format(self.id, self.username)


    @lm.user_loader
    def load_user(id):
        return User.query.get(int(id))


    class Post(db.Model):
        __tablename__ = 'posts'
        id = db.Column(db.Integer, primary_key=True)
        author_id = db.Column(db.Integer, db.ForeignKey('users.id'))
        tag_id = db.Column(db.Integer, db.ForeignKey('tags.id'))
        title = db.Column(db.String(128))
        body = db.Column(db.String(20000))
        create_date = db.Column(db.DateTime)
        comments = db.relationship('Comment', backref='tagged_post',
                                   lazy='dynamic')

        def __repr__(self):
            _repr = '<models.Post instance; ID: {}; title: {}>'
            return _repr.format(self.id, self.title)


    class Tag(db.Model):
        __tablename__ = 'tags'
        id = db.Column(db.Integer, primary_key=True)
        title = db.Column(db.String(64))
        tagged = db.relationship('Post', backref='tag', lazy='dynamic')

        def __repr__(self):
            _repr = '<models.Tag instance; ID: {}; title: {}>'
            return _repr.format(self.id, self.title)


    class Comment(db.Model):
        __tablename__ = 'comments'
        id = db.Column(db.Integer, primary_key=True)
        post_id = db.Column(db.Integer, db.ForeignKey('posts.id'))
        author_id = db.Column(db.Integer, db.ForeignKey('users.id'))
        body = db.Column(db.String(1500))
        create_date = db.Column(db.DateTime)

        def __repr__(self):
            _repr = '<models.Comment instance; ID: {}; post_title: {}>'
            return _repr.format(self.id, self.post_id.title)

Here is a link to the (current) full source of the website.

Answer: Everything is good by this point, but I'd change the relation between posts and tags from one-to-many to many-to-many. You can do this with this code:

    tags_to_posts = db.Table(
        "tags_and_posts",
        db.Column("post_id", db.Integer, db.ForeignKey("posts.id")),
        db.Column("tag_id", db.Integer, db.ForeignKey("tags.id"))
    )


    class Post(db.Model):
        # ...
        tags = db.relationship(
            "Tag",
            secondary=tags_to_posts,
            backref=db.backref("posts", lazy="dynamic"),
            passive_deletes=True,
        )

Then tags will be just a list. Assuming my_tag and my_post exist, you'll be able to do something like this:

    # Add tag to post
    my_post.tags.append(my_tag)
    db.session.add(my_post)
    db.session.add(my_tag)
    db.session.commit()

    # Remove tag from post
    # You need to be sure that my_tag is in my_post.tags
    my_post.tags.remove(my_tag)
    db.session.add(my_post)
    db.session.add(my_tag)
    db.session.commit()

I hope it helps.
{ "domain": "codereview.stackexchange", "id": 21568, "tags": "python, database, sqlite, flask, sqlalchemy" }
What issue is there, when training this network with gradient descent?
Question: Suppose we have the following fully connected network made of perceptrons with a sign function as the activation unit. What issue arises when trying to train this network with gradient descent?

Answer: What issue arises when trying to train this network with gradient descent?

The activation function is the sign (signum) function, slightly modified. Its derivative is 0 at every point where it is defined, so gradient descent won't be able to make progress in updating the weights and backpropagation will fail.
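This is easy to check numerically. The following NumPy sketch (an illustration, not part of the original answer) estimates the derivative of sign(x) by finite differences at points away from the origin: it is zero everywhere, so a gradient-descent weight update has no effect.

```python
import numpy as np

# Finite-difference derivative of sign(x) at points away from 0:
# the function is piecewise constant, so every estimate is exactly 0.
eps = 1e-6
points = np.array([-2.0, -0.5, 0.3, 1.7])
grads = (np.sign(points + eps) - np.sign(points - eps)) / (2 * eps)

# A gradient-descent step w -= lr * grad therefore never moves the weights.
w = np.array([0.4, -1.2])
w_after = w - 0.1 * grads[:2]
```

Smooth activations such as tanh or the logistic sigmoid avoid this problem, which is one reason they replaced the hard threshold in trainable multi-layer networks.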
{ "domain": "datascience.stackexchange", "id": 9243, "tags": "gradient-descent, backpropagation, perceptron" }
Nyquist Frequency Confusion
Question: 1- If I have a sine wave with period of $T$, I need to sample at least every $T/2$ to be able to reconstruct the sine wave. Let's look at this: This way I'd get a series of $0$s and all information seems to be lost. Where am I wrong? $x(t)=\sin(2\pi ft)$ Let $f_s=2f$; then $$x[n]=\sin(2n\pi f/2f) = \sin(n\pi)=0.$$ 2- Assume I have sine wave of frequency say $100\text{ Hz}$ , and I sample it at $101\text{ Hz}$, this would cause aliasing as we know. My question is, what happens in the frequency domain? Answer: If you look carefully at the statement of the sampling theorem, you'll notice that it states that the sampling frequency $f_s$ has to be larger than twice $f_m$, the maximum frequency in the signal: $$f_s > 2f_m.$$ Sampling at exactly $2f_m$ is not guaranteed to work, just as you demonstrated. Regarding aliasing: See the section on folding in Wikipedia. The $101\text{ Hz}$ signal will alias to a $1\text{ Hz}$ signal.
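Both points can be verified numerically. This NumPy sketch (illustrative, not part of the original answer) reproduces the all-zero samples at exactly $f_s = 2f$, and shows that a 100 Hz sine sampled at 101 Hz yields samples identical to those of a (sign-flipped) 1 Hz sine, i.e. the tone folds down to $|100 - 101| = 1\text{ Hz}$:

```python
import numpy as np

f = 100.0               # signal frequency in Hz
n = np.arange(32)       # sample indices

# Sampling sin(2*pi*f*t) at exactly fs = 2f lands on the zero crossings:
# x[n] = sin(pi * n) = 0 for every n, so the signal seems to vanish.
x_critical = np.sin(2 * np.pi * f * n / (2 * f))

# Sampling the same 100 Hz sine at fs = 101 Hz: the samples coincide with
# those of a 1 Hz sine (sign-flipped), the folded/aliased frequency.
fs = 101.0
x_sampled = np.sin(2 * np.pi * f * n / fs)
x_folded = -np.sin(2 * np.pi * 1.0 * n / fs)
```

The algebra behind the second block: $\sin(2\pi \cdot 100 n/101) = \sin(2\pi n - 2\pi n/101) = -\sin(2\pi n/101)$, which is exactly a 1 Hz sine sampled at 101 Hz.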
{ "domain": "dsp.stackexchange", "id": 8252, "tags": "sampling" }
Number of photons in a range of wavelengths
Question: I need to calculate the number of photons in a beam of light of power $P$. I know that it has constant power $P$ across the range of wavelengths $[\lambda_1,\lambda_2]$. So, for calculating this, I've used a formula that was given in another SE question: $$N=\frac{1}{h} \int_{\nu_1}^{\nu_2} \frac{1}{\nu} \frac{dE}{d\nu} d\nu $$ It's all fine, and using this I came up with $N=\ln(\nu_2/\nu_1) $. But I'm not convinced completely on that formula because I'm not able to derive it from $E=N(\nu)h\nu$. The answer I get from the formula seems right, but I need proof for that. Source for the equation: Number of photons

Answer: Power is the amount of energy conveyed per second, so you won't be able to compute the number of photons. Instead, you will compute the number of photons per second. I take $P$ to mean the total beam power within the frequency range from $\nu_1$ to $\nu_2$. The number of photons per second in a small spectral interval $\delta\nu$ is going to depend on the ratio of beam power in that spectral interval to the energy per photon in the spectral interval. The power of the beam is equal to the number of photons per second multiplied by the energy per photon.

The photons have a range of frequencies, $\nu_1$ through $\nu_2$. The problem states that the power is the same for each frequency within that range. Let $N$ be the total number of photons per second conveyed by the beam. Let's pick a small frequency range from $\nu_i$ to $\nu_i + \delta\nu$. We can pretend all the photons in that small range have the same frequency, $\nu_i$. So the number of photons per second in that range is $\delta\nu\frac{dP/d\nu}{h\nu_i}$. But $dP/d\nu$ is a constant: $$dP/d\nu = P/(\nu_2-\nu_1)$$ To find the total number of photons per second in the whole range, we need to add up all the contributions from all the small ranges: $$N\ \text{(total photons/sec)} = \frac{P}{\nu_2-\nu_1}\sum_i\frac{\delta\nu}{h\nu_i}$$ over all $\nu_i$ in the range.
That's just the integral $$N= \frac{P}{\nu_2-\nu_1}\int_{\nu_1}^{\nu_2} \frac{1}{h\nu} d\nu$$ where $N$ is the number of photons per second within the range from $\nu_1$ to $\nu_2$. (Hopefully I haven't made any errors in the math. I'm very clumsy with MathJax.)
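Carrying out that integral gives $N = \frac{P}{h(\nu_2-\nu_1)}\ln(\nu_2/\nu_1)$, which matches the questioner's logarithmic result up to constant factors. A small Python sketch of this formula (the numbers are illustrative, not from the question):

```python
import math

h = 6.62607015e-34   # Planck constant in J*s

def photons_per_second(power, nu1, nu2):
    # N = P / (h * (nu2 - nu1)) * ln(nu2 / nu1), from integrating
    # (dP/dnu) / (h * nu) over the band [nu1, nu2].
    return power / (h * (nu2 - nu1)) * math.log(nu2 / nu1)

# Sanity check: for a very narrow band around nu, every photon carries
# roughly the same energy h*nu, so the rate should approach P / (h * nu).
nu = 5e14                                    # roughly 600 nm light
rate = photons_per_second(1.0, nu, nu * 1.0001)
single_energy_rate = 1.0 / (h * nu)
```

The narrow-band limit is a useful check on any formula of this kind: when $\nu_2 \to \nu_1$, the logarithm expands to $\Delta\nu/\nu_1$ and the count collapses to the familiar $P/(h\nu)$.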
{ "domain": "physics.stackexchange", "id": 55787, "tags": "photons, photoelectric-effect, photon-emission" }
Are there anti-greenhouse gases?
Question: The Greenhouse Effect can be described semi-formally as follows: With no atmosphere, virtually all of the heat received from the Sun would be radiated back into space at night. This can be seen on the moon, for example. Atmospheric gases have a "greenhousivity" property that traps heat and holds it in. Some gases have more greenhousivity than others. Let the value of the weighted average of the greenhousivity values for the mixture of gases commonly known as "air" be arbitrarily defined as 100. There exist certain gases whose greenhousivity value is so much higher than 100 that adding even a few parts per million to our atmosphere--far below any threshold that would make the air unsafe to breathe--can raise the heat retention of our atmosphere by a non-trivial amount. These are known as "greenhouse gases". The two best-known examples are carbon dioxide and methane. With this understanding, an interesting question arises. Because the value of 100 for the greenhousivity of standard air is an average for a mixture of gases, this necessarily implies the existence of gases with a value less than 100. "Anti-greenhouse gases," if you will. So what's stopping us from pumping a few parts per million of anti-greenhouse gases--some of which are normal parts of air, that we're already well-equipped to breathe--into the atmosphere to counteract the influence of greenhouse gases? Answer: First things first, the value of 100 for the greenhousivity of standard air is an average for a mixture of gases, this necessarily implies the existence of gases with a value less than 100. This isn't a good way to look at the greenhouse effect. I mean no offense, but it's simply not accurate. To get a more realistic understanding, you have to look at reflection and wavelength, and/or at the Earth's energy balance.
Most of our heat comes from the sun, but a smaller share comes from the internal heat of the earth; the internal heat from inside the earth is reasonably consistent, though, and can be ignored for the sake of this argument. Stored heat is also important, and that, for example, explains why the lowest sunshine day of the year (December 21) isn't the coldest day of the year, but to keep this reasonably short, let's ignore stored heat too.

The heat in / heat out balance works kind of like this (from the link above): 71% of the energy we get from the sun warms the earth, 29% is directly reflected back into space. Of that 71, 59 is returned to space from the atmosphere and just 12 is returned to space from the surface. That's the atmospheric blanket effect in a sense. Most of the heat, in the form of infra-red light, has to travel through the atmosphere to leave the earth, and the greenhouse effect reflects that light, just like colored dye in water reflects light, where clear water mostly lets light pass through it. Atmospheric circulation plays a role too, so do clouds, but let's ignore that for now as well.

The greenhouse effect is caused by CO2 or another greenhouse gas, H2O or CH4 (H2O in clouds is different, that's tiny ice particles), but water vapor in the air is invisible to our eyes. These gases work in essentially the same way as putting colored dye in water: the colored water absorbs and reflects more light than clear water, and the sky, to infrared light, is opaque with greenhouse gas. That opaqueness can only be removed by reducing the amount of greenhouse gas. It can't be removed by adding other gas, so the greenhouse effect is essentially, directly, tied to the amount of greenhouse gas in the atmosphere.

What greenhouse gas does is affect the 12% and 59%, and the rate and way that heat leaves the earth. If heat leaves the earth more slowly, the earth gradually warms. What "global cooling" gases do doesn't undo what greenhouse gas does.
That's not possible, any more than it's possible to make a dye in water stop being opaque by adding another color. Global cooling gases do exist, but they work in a different way, by affecting the 71%-29% ratio. Volcanic gas, for example, raises the 29% of immediate reflection, and that cools the earth. Volcanic cooling however is quite temporary, lasting a few years at most. That's basically the gist of your question: what gases that are reflective to visible light can be used to raise the 29%, so that the thicker blanket of greenhouse gas is counteracted, and that's a perfectly valid question.

There are 3 problems. The first is, when you talk a few parts per million in the entire atmosphere, you're talking billions and billions of tons. Inserting billions and billions of tons of gas into the atmosphere is no simple task.

The 2nd problem is atmospheric half-life. CO2 has a very long half-life in the atmosphere. Your average CO2 molecule that enters the atmosphere takes over 100 years to recycle back, either by photosynthesis or by oceanic absorption. It stays in the atmosphere a long time. CH4 has a much shorter half-life, of just a few years, and that's the case for many gases, so if you attempt to cool the earth using a global cooling gas, you'd need to replenish it every few years (billions of tons every few years); that gets expensive.

The 3rd problem is that many "global cooling" gases are gases we don't want in the atmosphere, like SO2, which creates acid rain. I don't think anyone wants to shoot tens or hundreds of millions of tons of SO2 (you might not need billions) into the upper atmosphere every few years just to fight climate change. That would be both expensive and it would cause other problems. Freon has a slight cooling effect because it reduces ozone, but that's even worse. (I looked but couldn't find a list of gases that cool the earth, though I've seen them in the past. If somebody can find one, feel free to post.)
Other methods of increasing the 29% reflected: we could perhaps build orbiting giant mirrors, or put reflective surfaces over land, or large floating disks in the ocean, but none of those methods are easy. An article on some of the problems with trying to increase the earth's reflection: http://www.theguardian.com/environment/2014/nov/26/geoengineering-could-offer-solution-last-resort-climate-change

It's not talked about as much as reducing CO2, but increasing reflection of solar energy is being explored and studied as a possible or partial solution to man-made climate change. Hope that wasn't too long or wordy. I'll try to clean it up a little. Corrections welcome.
{ "domain": "earthscience.stackexchange", "id": 710, "tags": "climate-change, geoengineering" }
Increasingly Long Runtime for Macro
Question: My code works, but the problem is that it is taking an increasingly long time to run, with the time required to complete calculations increasing every time I use the macro. I've tried a variety of variations and modifications with the syntax, but as I'm pretty new to VBA, I haven't made a whole lot of progress. Here's the code I'm running (note: it runs as a subset, and ScreenUpdating = False):

    Public Sub deleteRows()
        Dim lastRow As Long
        Dim rng As Range

        With ActiveSheet
            .AutoFilterMode = False
            lastRow = .Cells(.Rows.Count, 2).End(xlUp).Row

            '~~> Set the range of interest, no need to include the entire data range
            With .Range("B2:F" & lastRow)
                .AutoFilter Field:=2, Criteria1:="=0.000", Operator:=xlFilterValues
                .AutoFilter Field:=5, Criteria1:="=0.000", Operator:=xlFilterValues
            End With

            .Range("B1:F" & lastRow).SpecialCells(xlCellTypeVisible).EntireRow.Delete
            .AutoFilterMode = False

            Rows("1:1").Select
            Selection.Insert Shift:=xlDown, CopyOrigin:=xlFormatFromLeftOrAbove
        End With

        MsgBox Format(Time - start, "hh:mm:ss")
    End Sub

This code basically removes zero-valued results from the data by deleting an entire row. Initially, it ran in about 12 seconds, but that soon became 55 seconds, which has progressed into increasingly long runtimes, with a 'fast' run now being in the 5 minute range.
Below is a spreadsheet with the recorded runtimes and corresponding changes made:

    Runtime  Changes
    6:30     None
    7:50     None
    5:37     Manually stepped through code
    7:45     Run with .cells instead of .range("B1:B" & lastRow)
    5:21     Run with .Range(B:B) instead of .range("B1:B" & lastRow)
    9:20     Run with application.calculation disabled/enabled, range unchanged
    5:35     Run with application.enableEvents disabled/enabled, range unchanged
    11:08    Run with application.enableEvents disabled/enabled, Range(B:B)
    5:12     None
    7:57     Run with alternative code (old code)
    5:45     Range changed to .Range(Cells(2,2), Cells(lastRow,2))
    10:25    Range changed to .Range(Cells(2,2), Cells(lastRow,2)), Application.Calculation disabled/enabled
    5:34     Range set to rngB for .Delete portion (range assigned to variable)
    9:59     Range set as rng("B1:F" & lastRow)
    5:58     Changed system settings for Excel to "High Priority", code reverted to original
    9:41     Rerun of old code for comparison
    9:26     Rerun with change in old code criteria to "0.000"
    0:10     Moved SpecialCells...Delete into 2nd With/End With
    5:15     Rerun SpecialCells...Delete into 2nd With/End With
    11:31    Rerun SpecialCells...Delete into 2nd With/End With
    11:38    Excel restart; rerun SpecialCells...Delete into 2nd With/End With
    5:18     Excel restart; rerun SpecialCells...Delete into 2nd With/End With
    6:49     Removed 2nd With 'loop'; all data put into first With statement

I did some research online, and it looks like this might be a known issue with Excel when working with large datasets, and as mine is ~51k rows, I can see how this might be the case.

...A macro that required several seconds to complete in an earlier version of Excel may require several minutes to complete in a later version of Excel. Alternatively, if you run a macro a second time, the macro may take twice as long to run as it did the first time.

Source

Is there any way to make this run faster, like it initially did? Why is this happening?
Answer: This won't solve the long run times of your code if the issue is the EntireRow.Delete method. I assume you still want those rows deleted, and Stack Overflow would be the place to get a workaround or solution. That being said, your code should be reviewed.

    Rows("1:1").Select
    Selection.Insert Shift:=xlDown, CopyOrigin:=xlFormatFromLeftOrAbove

Never use Select unless you are making some macro to find a range for the user. Use Rows("1:1").Insert .... Making these concatenations should be priority #1 after recording a macro. Also you are not declaring the parent of Rows. ActiveSheet is implicitly the parent, but Rows is already nested in a With statement. Just put the period, .Rows("1:1")..., to make it explicit.

    With ActiveSheet

As you said, this is being called as a sub function. If your project grows to include more sheets, assuming ActiveSheet is the sheet you want to manipulate won't be safe. Any Sub not meant to be an outermost Sub should not use ActiveSheet, ActiveBook, ActiveCell, or Selection. Sheets, books, and ranges should be passed as an argument.

This variable isn't being used. Delete it.

    Dim rng As Range

You get the last row by going to the bottom of the sheet and iterating up until you find the bottom of your data.

    lastRow = .Cells(.Rows.Count, 2).End(xlUp).Row

Instead get the bottom of the range containing data:

    lastRow = .UsedRange.Rows.Count

Now for the general steps that your macro takes. It operates like user interaction, which is usually not the best way to approach it. The steps you take:

1. AutoFilter the range so only rows with matching values in columns "C" and "F" are visible
2. Delete all visible rows in the range

It makes perfect sense from a user-interface side, but from a programming side it is unnecessarily indirect. Rows to delete are marked by remaining visible and then deleted. What if something else has marked rows as invisible? I would suggest iterating over all rows in the range and deleting those that should be.
    Public Sub DeleteRows()
        ' ActiveSheet or Range("B2:F" & lastRow) should be passed
        Dim sheet As Worksheet
        Set sheet = ActiveSheet

        Dim lastRow As Long
        lastRow = sheet.UsedRange.Rows.Count

        Dim table As Range
        Set table = sheet.Range("B2:F" & lastRow)

        Dim l As Long
        For l = lastRow To 1 Step -1
            If ShouldBeDeleted(table.Rows(l)) Then
                table.Rows(l).EntireRow.Delete Shift:=xlUp
            End If
        Next l

        ' These should be in the outside Sub
        sheet.Range("1:1").Insert Shift:=xlDown, CopyOrigin:=xlFormatFromLeftOrAbove
        MsgBox Format(Time - start, "hh:mm:ss")
    End Sub

    Function ShouldBeDeleted(row_range As Range) As Boolean
        ShouldBeDeleted = (row_range.Cells(1, 2) = 0 And row_range.Cells(1, 5) = 0)
    End Function

Pulling bits out like ShouldBeDeleted might seem verbose, but there is a better name for it. There is something special about those rows, and I would rename the function IsX where X describes what those rows are. Also, columns "C" and "F" seem to be special for your worksheet. If they are so, declare them as constants:

    Const IMPORTANT_COL_1 As String = "C"
    Const IMPORTANT_COL_2 As String = "F"

Sorting Optimizations

If sorting the rows of the table is allowed and 0 is the minimum, then you could sort the table and only iterate over the rows that you need to delete.

    Public Sub DeleteRows(table As Range)
        Dim lastRow As Long
        lastRow = table.Rows.Count

        With table.Parent.Sort
            .SortFields.Clear
            .SortFields.Add key:=table.Range(IMPORTANT_COL_1 & ":" & IMPORTANT_COL_1)
            .SortFields.Add key:=table.Range(IMPORTANT_COL_2 & ":" & IMPORTANT_COL_2)
            .SetRange table
            .Apply
        End With

        ' Find the first row that should be kept; rows 1 to botRow - 1
        ' are the ones to delete.
        Dim botRow As Long
        botRow = 1
        While ShouldBeDeleted(table.Rows(botRow))
            botRow = botRow + 1
        Wend
        If botRow > 1 Then
            table.Range("1:" & botRow - 1).Delete Shift:=xlUp
        End If
    End Sub
{ "domain": "codereview.stackexchange", "id": 8934, "tags": "optimization, vba, excel" }
What's the significance of the difference between the quantum numbers, $\ell$ and $m_{\ell}$?
Question: I know that $m_{\ell}$ is associated with the projection of the angular momentum vector onto the $z$ axis and $\ell$ is associated with the length of the angular momentum vector. To me this implies that the electron doesn't orbit in a disk like fashion, it precesses. Is this correct? Is there any further significance? Also, what's the total angular momentum $J$? How is this related to $\ell$? Why doesn't $\ell$ give the total angular momentum?

Answer: The value $m_{l}$ is the eigenvalue of the operator $L_{z}$, determined by seeing the action of this operator on the eigenstate $|~l,m_{l}>$, or in other words $$ L_{z} |~l,m_{l}> = m_{l} |~l,m_{l}> $$ while $l$ is related to the total angular momentum operator $L$, which acts on the same eigenstate giving you $$ L^2 |~l,m_{l}> = l(l+1) |~l,m_{l}> $$ The relations between the two operators are given by $$L^2 = L_x^2 + L_y^2 + L_z^2 ~~~~\mbox{and}~~~~ \vec{L} = (L_x, L_y, L_z)$$ The eigenvalues are also related, and the relation that you can find in almost any textbook on quantum mechanics is $$ - l \leq m_l \leq l ~~~~\mbox{all of them interspaced by unity}$$

The idea that the electron is moving around the atomic nucleus is a simplification. The electron is not localized; all the information you can have about its position is expressed in a probabilistic form: the probability of finding the electron at the position $(r,\theta,\phi)$ is given by $|<r,\theta,\phi~|~\psi>|^2$, assuming that the wavefunction $|~\psi>$ is normalized. In the case of the hydrogen atom we have $|~\psi> \propto|~l,m_{l}> $, or in the coordinate representation $$(<r,\theta,\phi~|~\psi>= \psi(r,\theta,\phi) ) \propto (<r,\theta,\phi~|~l,m_{l}>= Y_l^m(\theta,\phi)) $$ What you can see on the left hand side of this equation is nothing but the square root of the probability I stated above, and this tells you that the probability depends on the spherical harmonics $Y_l^m(\theta,\phi)$, which also depend on the $m_l$ value.
So you can imagine that the $m_l$ value tells you the shape of the spherical harmonics, which gives you the probability of finding the electron at some point in space. That's the best notion of "orbits" you can get.

The operator $J$ is the total angular momentum, which is expressed as $J = L + S$; that's just a definition that comes from the fact that particles also have spin angular momentum. The relation of the eigenvalues of $J$ (which are just $j$) with $l$ and $s$ has a quite long derivation that you can find, for instance, in the book Quantum Mechanics by Auletta, Fortunato and Parisi. Anyway the relation is $$ |l - s| \leq j \leq l+s $$ all of them interspaced by unity.
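The bookkeeping behind these inequalities is easy to enumerate. The following sketch (an illustration, not from the cited textbook) lists the allowed $m_l$ for a given $l$, and the allowed $j$ values for given $l$ and $s$, each spaced by unity:

```python
from fractions import Fraction

def m_values(l):
    # Allowed L_z eigenvalue labels: -l, -l + 1, ..., +l (spaced by unity),
    # giving 2l + 1 projections in total.
    return [-l + k for k in range(2 * l + 1)]

def j_values(l, s):
    # Allowed total angular momentum labels: |l - s| <= j <= l + s,
    # again spaced by unity. Fractions handle half-integer spin exactly.
    lo, hi = abs(Fraction(l) - Fraction(s)), Fraction(l) + Fraction(s)
    values = []
    j = lo
    while j <= hi:
        values.append(j)
        j += 1
    return values

# A d electron (l = 2) has 2l + 1 = 5 projections m_l = -2..2, and coupling
# to its spin s = 1/2 allows j = 3/2 or j = 5/2.
ms = m_values(2)
js = j_values(2, Fraction(1, 2))
```

Using exact rationals rather than floats keeps the half-integer arithmetic (e.g. $|2 - \tfrac12| = \tfrac32$) free of rounding issues.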
{ "domain": "physics.stackexchange", "id": 20950, "tags": "quantum-mechanics, angular-momentum, lie-algebra, representation-theory" }
Would the centrifugal effect of the Earth orbiting around the sun cause the weight of an object to change?
Question: If you weighed an object at midday and then again at midnight, would the object's weight change (ever so slightly) due to the centrifugal effect of the earth travelling around the sun? What I am trying to understand is: if the object is weighed at midday, the centrifugal effect, however slight, would press the object against the scales, whereas 12 hours later, when the same object is on the far side of the earth, the centrifugal effect would do the opposite and marginally push the object away from the scales, causing it to weigh a little less? Answer: No, there is no effect of the type you're imagining. The earth is free-falling through the gravitational fields of the sun and the moon, and therefore we experience apparent weightlessness with respect to our weight in those fields. This is similar to the apparent weightlessness of astronauts aboard the ISS. Just as those astronauts can't tell by any experiment, without looking outside, the difference between up and down in the earth's gravitational field, neither can people on earth tell by gravitational experiments the difference between the sunward and anti-sunward directions. Apparent weightlessness occurs because you and the object you're using for reference (earth or ISS) are free-falling together, with the same acceleration. Cf. Weightlessness for astronauts We can detect the (fictitious) centrifugal force of the earth's rotation, but this is constant in time, so it just causes a variation of the earth's gravitational field (measured relative to the earth's surface) with latitude. The only time variation is due to tidal effects. These have a period of 12 hours (not the 24 hours you were imagining), and are quite small. Tidal effects slightly decrease your weight when the moon or sun is overhead and underfoot. They arise because you and the earth are at different distances from the moon or sun, so you accelerate slightly differently. 
The lunar effect is about $10^{-6}~\mathrm{m/s^2}$, which is about the same as the effect due to changing your elevation by 30 cm.
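A quick order-of-magnitude check of that figure, as a Python sketch (my illustration with rounded constants, not part of the original answer): the leading lunar tidal term is $2GM_{moon}R/d^3$, and the near-surface gravity gradient is $2g/R$, which gives the equivalent elevation change.

```python
G = 6.674e-11       # gravitational constant, m^3 kg^-1 s^-2
M_moon = 7.35e22    # lunar mass, kg
d = 3.84e8          # Earth-Moon distance, m
R = 6.371e6         # Earth radius, m
g = 9.81            # surface gravity, m/s^2

a_tidal = 2 * G * M_moon * R / d**3   # leading-order lunar tidal acceleration
dh = a_tidal / (2 * g / R)            # elevation change with the same weight effect
print(f"tidal: {a_tidal:.2e} m/s^2, equivalent elevation change: {dh * 100:.0f} cm")
```

This lands near $1.1 \times 10^{-6}~\mathrm{m/s^2}$ and roughly 35 cm of elevation, consistent with the answer.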
{ "domain": "physics.stackexchange", "id": 41479, "tags": "newtonian-mechanics, newtonian-gravity, reference-frames, orbital-motion, centrifugal-force" }
Generating a sequence of MAC addresses
Question: The following code will generate a sequence of N MAC addresses based on the input from the user. The input must be at least one octet to generate the sequence, i.e.: input: A0, 5 output: A0:00:00:00:00:01 - A0:00:00:00:00:05 My actual code is working fine and I wanted to make it optimized if it could be: public static List<String> getMacSequences(final String macAddress, final Integer count) { List<String> macList = new ArrayList<String>(); final List<Integer> macOctets = new ArrayList<Integer>(); String [] macSplit = macAddress.split(":"); int incPos = macAddress.split(":").length; // need to throw exception when the length is 0 if (incPos >= 1) { for (int bp=0; bp <6; bp++) { if (bp >= incPos) { macOctets.add(0); } else { macOctets.add(Integer.parseInt(macSplit[bp], 16)); } } for (int i = 0 ; i<count; i++) { int lastOctet = macOctets.get(5); macOctets.set(5, ++lastOctet); for (int j = 5; j >= 0; j--) { if (j == 0 && macOctets.get(j) > 255) { Collections.fill(macOctets, 0); } if (macOctets.get(j) > 255) { macOctets.set(j, 0); Integer macValue = macOctets.get(j - 1); macOctets.set(j - 1, ++macValue); } else{ break; } } macList.add(String.format("%02X:%02X:%02X:%02X:%02X:%02X", macOctets.toArray())); } } return macList; } I have done micro-benchmarking to determine the performance for generating 100 MAC addresses in sequence, and the average values for 10 samplings (100 iterations/sampling) are as follows (in ms): 7.62 7.43 6.87 7.4 7.24 7.21 7.16 7.59 7.67 8.13 and the overall sampling average is: 7.432 ms Answer: Your code defines in the second loop: for (int j = 5; j >= 0; j--) and later you use j in this statement: macOctets.set(j - 1, ++macValue); If j==0, however, you will get an IndexOutOfBoundsException, as -1 is not a valid index. You avoid the exception by checking explicitly for j==0 and setting the value to 0, though this check will be done for every segment of the MAC address. 
This check therefore can be refactored out of the loop and to a later position. Next, you split the input string twice. Why? Something like String [] macSplit = macAddress.split(":"); int incPos = macAddress.split(":").length; // need to throw exception when the length is 0 if (incPos >= 1) { ... } can be replaced with this: String[] macSplit = macAddress.split(":"); if (macSplit.length >= 1) { ... } This will avoid splitting the input string multiple times. I'd also initially fill the macOctets array with 0 values and then simply set the respective defined MAC value like this: for(int i=0; i<macSplit.length; i++) { macOctets[i] = Integer.parseInt(macSplit[i], 16); } If I execute the below code and compare it to your original code: public static List<String> getMacSequences(final String macAddress, final Integer count) { List<String> macList = new ArrayList<String>(); Integer[] macOctets = new Integer[] { 0, 0, 0, 0, 0, 0}; String[] macSplit = macAddress.split(":"); if (macSplit.length > 0) { for (int i=0; i<macSplit.length; i++) { macOctets[i] = Integer.parseInt(macSplit[i], 16); } } for (int i=0 ; i<count; i++) { macOctets[5]++; for (int j=5; j>0; j--) { if (macOctets[j] > 255) { macOctets[j] = 0; macOctets[j-1]++; } else { break; } } if (macOctets[0] > 255) { macOctets[0] = 0; } macList.add(String.format("%02X:%02X:%02X:%02X:%02X:%02X", macOctets)); } return macList; } I'll get the following times for 100000 iterations on 260 generated MAC addresses: Original - Total: 71341.440878 ms, Average: 0.71341440878 ms Modified - Total: 70220.091388 ms, Average: 0.70220091388 ms While not a big speed improvement, it is at least a little faster than the original one. Further speed improvements: I used Integer[] macOctets = new Integer[] { ... } over a simple int[] macOctets = new int[] { ... } in the sample code above, as the string formatter has some issues with int arrays for some reason with my version of Java (Oracle Java 8 Update 101). 
I then implemented my own version of a MAC string converter private static String toHexString(int[] macOctets) { StringBuilder sb = new StringBuilder(); for (int i=0; i<macOctets.length; i++) { sb.append(Integer.toHexString(0x100 | macOctets[i]).substring(1).toUpperCase()); if (i != macOctets.length-1) { sb.append(":"); } } return sb.toString(); } and changed the initialization of the macOctets array to a simple int[] macOctets = new int[6]; which internally will initialize each element with its default value, which is 0. When re-running the comparison test described above, the following output is returned: Original - Total: 71901.357327 ms, Average: 0.71901357327 ms Modified - Total: 10287.282422 ms, Average: 0.10287282422 ms The custom method makes use of a tiny hack (OR-ing with 0x100, then dropping the leading digit) in order to print each MAC segment as exactly two hex digits. It seems that the algorithm spends most of its time in the array-to-hex-string formatting. I'd therefore highly suggest changing from the String formatter to a custom one, as it only requires a fraction of the time the native Java function requires. For completeness I'll post the complete performance test main method so you can compare for yourself public static void main(String ... 
args) { int repetitions = 100000; String initialSequene = "A0"; int macsToGenerate = 260; long startTime = System.nanoTime(); for (int i=0; i<repetitions; i++) { getMacSequences(initialSequene, macsToGenerate); } long endTime = System.nanoTime(); long duration = (endTime - startTime); double totalDurationInMs = (duration / 1000000.); System.out.println("Original - Total: " + totalDurationInMs + " ms, Average: " + (totalDurationInMs / repetitions) + " ms"); startTime = System.nanoTime(); for (int i=0; i<repetitions; i++) { getMacSequences2(initialSequene, macsToGenerate); } endTime = System.nanoTime(); duration = (endTime - startTime); totalDurationInMs = (duration / 1000000.); System.out.println("Modified - Total: " + totalDurationInMs + " ms, Average: " + (totalDurationInMs / repetitions) + " ms"); } where getMacSequences(...) is your version and getMacSequences2(...) is my modified version.
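For comparison, the same incrementing-octet idea (pad the prefix to six octets, then bump the last octet and propagate the carry leftwards) can be written compactly. This is my Python sketch of the algorithm under review, not code from the thread:

```python
def mac_sequence(prefix, count):
    """Generate `count` MAC addresses following `prefix` (at least one hex octet)."""
    octets = [int(p, 16) for p in prefix.split(":")]
    octets += [0] * (6 - len(octets))        # pad missing octets with zeros
    out = []
    for _ in range(count):
        octets[5] += 1
        i = 5
        while i > 0 and octets[i] > 255:     # propagate the carry leftwards
            octets[i] = 0
            octets[i - 1] += 1
            i -= 1
        octets[0] &= 0xFF                    # wrap around at the first octet
        out.append(":".join(f"{o:02X}" for o in octets))
    return out

print(mac_sequence("A0", 3))
# ['A0:00:00:00:00:01', 'A0:00:00:00:00:02', 'A0:00:00:00:00:03']
```

The carry loop is the same logic as the reviewed Java inner loop, with the j==0 wrap handled after the loop instead of inside it.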
{ "domain": "codereview.stackexchange", "id": 21384, "tags": "java, performance" }
Streaming int support
Question: This recent question Print Consecutive numbers by comparing two parameters frustrated me because I could not find a convenient way in Java 8 to support the conditions that are required. I answered it in a somewhat cumbersome way for Java 8... (the non-stream part of my answer is "OK" in terms of usability). To reiterate the problem (as I interpreted it): print the values in the range from a to b, where b may be less than a. e.g. from 1 to 3 will print 1, 2, 3 and from 3 to 1 will print 3, 2, 1. What I really wanted was something like this: IntStream.loop(a, b, a < b ? ++ : --).forEach(System.out::println); (yes, the above code makes no sense, treat it as pseudocode...). Basically, stream from a value to another value, either incrementing or decrementing the loop variable. To make a real implementation of that, I figured the stream needs a seed, a terminal condition, and a stepping function. I implemented it like: int a = 10; int b = 5; IntUnaryOperator op = a < b ? i -> i + 1 : i -> i - 1; ForIntStream.until(a, v -> v == b, op).forEach(System.out::println); The semantics there are: start from a, go until a == b, and use the operator to change the value. While I was doing that, I also implemented a similar loop that, instead of running until a condition, runs while a condition is true: int a = 10; int b = 5; IntUnaryOperator op = a < b ? i -> i + 1 : i -> i - 1; ForIntStream.of(a, v -> v != b, op).forEach(System.out::println); Note that the difference there is that the 'of' stream terminates and does not include the terminating value. Here is the code that implements the above features. 
Any suggestions, or alternatives are welcome import java.util.NoSuchElementException; import java.util.PrimitiveIterator; import java.util.Spliterator; import java.util.Spliterators; import java.util.function.IntConsumer; import java.util.function.IntPredicate; import java.util.function.IntUnaryOperator; import java.util.stream.IntStream; import java.util.stream.StreamSupport; public class ForIntStream { private static final class WhileIterator implements PrimitiveIterator.OfInt { private int next; private final IntPredicate proceed; private final IntUnaryOperator step; private boolean terminated; public WhileIterator(int next, IntPredicate proceed, IntUnaryOperator step) { this.next = next; this.proceed = proceed; this.step = step; this.terminated = !proceed.test(next); } @Override public void forEachRemaining(IntConsumer action) { while (!terminated) { action.accept(next); next = step.applyAsInt(next); terminated = !proceed.test(next); } } @Override public boolean hasNext() { return !terminated; } @Override public Integer next() { return nextInt(); } @Override public int nextInt() { if (terminated) { throw new NoSuchElementException("Iterated beyond terminal condition."); } int ret = next; next = step.applyAsInt(next); terminated = !proceed.test(next); return ret; } } private static final class UntilIterator implements PrimitiveIterator.OfInt { private int next; private final IntPredicate until; private final IntUnaryOperator step; private boolean terminated; public UntilIterator(int next, IntPredicate until, IntUnaryOperator step) { this.next = next; this.until = until; this.step = step; } @Override public void forEachRemaining(IntConsumer action) { while (!terminated) { action.accept(next); if (until.test(next)) { terminated = true; } else { next = step.applyAsInt(next); } } } @Override public boolean hasNext() { return !terminated; } @Override public Integer next() { return nextInt(); } @Override public int nextInt() { if (terminated) { throw new 
NoSuchElementException("Iterated beyond terminal condition."); } int ret = next; terminated = !until.test(next); next = step.applyAsInt(next); return ret; } } public static IntStream of(int seed, IntPredicate allow, IntUnaryOperator step) { PrimitiveIterator.OfInt it = new WhileIterator(seed, allow, step); return StreamSupport.intStream(Spliterators.spliterator(it, Long.MAX_VALUE, Spliterator.ORDERED), false); } public static IntStream until(int seed, IntPredicate terminator, IntUnaryOperator step) { PrimitiveIterator.OfInt it = new UntilIterator(seed, terminator, step); return StreamSupport.intStream(Spliterators.spliterator(it, Long.MAX_VALUE, Spliterator.ORDERED), false); } } if you want to experiment with it, the following will give some hints: public static void main(String[] args) { ForIntStream.of(1, i -> i != 10, i -> i + 1).forEach(System.out::println); ForIntStream.until(1, i -> i == 10, i -> i + 1).forEach(System.out::println); int a = 10; int b = 5; IntUnaryOperator op = a < b ? i -> i + 1 : i -> i - 1; ForIntStream.until(a, i -> i == b, op).forEach(System.out::println); } For another example of how this can be used... to print a collatz conjecture sequence (hailstone problem), you can stream like: ForIntStream.until(10, i -> i == 1, i -> i % 2 == 0 ? i / 2 : (3 * i + 1) ) .forEach(System.out::println); Answer: ForIntStream.of(1, i -> i != 10, i -> i + 1).forEach(System.out::println); ForIntStream.until(1, i -> i == 10, i -> i + 1).forEach(System.out::println); This is going to be subjective, but I'll just put in my two cents anyways... Looking at the statements above, I will roughly read them as (starting with index) of 1, (loop while) i != 10, (and increment by) i + 1 until 1... what? I'll suggest switching the first two parameters for until(), which I think can improve its readability in this way: ForIntStream.until(i -> i == 10, 1, i -> i + 1).forEach(System.out::println); until i == 10, (starting with index of) 1, (and increment by) i + 1
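The while/until semantics from the question are easy to mirror with generators, which may make the intended contract clearer. A Python sketch (my names, assuming the same semantics as ForIntStream.of and ForIntStream.until):

```python
from itertools import takewhile

def iterate(seed, step):
    """Infinite stream: seed, step(seed), step(step(seed)), ..."""
    x = seed
    while True:
        yield x
        x = step(x)

def stream_while(seed, proceed, step):
    """Like ForIntStream.of: yield values while the predicate holds (terminal excluded)."""
    return takewhile(proceed, iterate(seed, step))

def stream_until(seed, terminate, step):
    """Like ForIntStream.until: yield values up to AND including the first terminal one."""
    for x in iterate(seed, step):
        yield x
        if terminate(x):
            return

print(list(stream_while(10, lambda i: i != 5, lambda i: i - 1)))  # [10, 9, 8, 7, 6]
# Collatz (hailstone) sequence from 10, mirroring the last example in the question:
print(list(stream_until(10, lambda i: i == 1,
                        lambda i: i // 2 if i % 2 == 0 else 3 * i + 1)))
```

The only difference between the two variants is whether the first value failing the condition is emitted before the stream stops, which is exactly the of/until distinction the question draws.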
{ "domain": "codereview.stackexchange", "id": 13566, "tags": "java, stream, iterator" }
Why is a 7-endo radical cyclisation favoured over a 6-exo in this synthesis?
Question: Normally, according to Baldwin's rules, 3-exo to 7-exo cyclisations are preferred over endo cyclisations. But in the following example, a 7-endo-trig reaction is favoured over a 6-exo-trig and I am just not able to understand why. It's a real synthesis example (taken from Eur. J. Org. Chem. 2015, 2015 (15), 3240–3250). Please explain to me why the 7-endo product is favoured here. Answer: Before making any suggestion about the sought cyclization mechanism, let's look at some rules and guidelines set by peers in their early studies on radical cyclizations. Specifically, Beckwith and coworkers have set a few guidelines for radical reactions, including intramolecular addition reactions (Ref.1). During their kinetic studies on various enyl radicals (Ref.2), they have shown that the hept-6-enyl radical (1: n = 4; Scheme 1) is able to give both 6-exo-trig and 7-endo-trig products (2 & 3; n = 4; Scheme 1), although 6-exo-trig is the favored product of the two ($k_{exo}/k_{endo} = 5.8$). On the other hand, the hex-5-enyl radical (1: n = 3; Scheme 1) almost exclusively gives the 5-exo-trig product (2: n = 3; Scheme 1) over the 6-endo-trig product (3: n = 3; Scheme 1), although 6-endo-trig is the more stable of the two ($k_{exo}/k_{endo} = 48.4$). Beckwith and Schiesser have performed a theoretical study on the relative rates and the regio- and stereo-chemistry of ring closure of a variety of alkenyl, alkenylaryl, alkenylvinyl, and similar radicals using a method that involves the application of MM2 force-field calculations to model transition structures (Ref.3). The results show excellent qualitative and satisfactory quantitative agreement with experimental data. The experimental and theoretical values for the hept-6-enyl radical are shown to be remarkably close for 6-exo-trig (2: n = 4; Scheme 1) and 7-endo-trig closure (3: n = 4; Scheme 1). 
Based on this experimental evidence, it is safe to say that, under certain conditions, the 6-endo-trig product can predominate over its 5-exo-trig counterpart. For example, substituents at the substituted position disfavor cyclization there (e.g., Scheme 2) (Ref.1). In summary, Beckwith has given a few useful rules for radicals: (1) In intramolecular addition under kinetic control when $\mathrm{n} \le 5$, cyclization occurs preferentially in the exo mode (Scheme 1; the thermodynamic preference for the secondary radical is overridden by a kinetic preference based on the orbital alignment required for cyclization); (2) Substituents disfavor cyclization at the substituted position (Scheme 2); and (3) Homolytic bond cleavage is favored when the bond concerned lies close to the plane of an adjacent semi-occupied, filled non-bonding, or $\pi$-orbital (Ref. 1). Now we can look at examples where the cyclization goes wild, breaking those rules. As part of the generation of radicals in heteroaromatic systems, Dobbs et al. have studied reactions of radicals at the C-7 position of indole (Ref.4). The synthesis and radical cyclisation of 7-bromoindoles carrying an unsaturated N-alkyl group (7; Scheme 3) has been performed for these studies. The radical cyclisation was carried out using tributyltin hydride and AIBN as the radical initiator in refluxing toluene: the cyclisation of N-allyl-7-bromoindole (7a) was expected to involve the potential 5-exo-trig cyclisation from the indole C-7 position onto the allyl chain to give 8a exclusively. Yet, a mixture of products was obtained, comprising the 6-endo cyclisation product (9a) and the reduction product (10a) in the ratio of 1:2, but no 5-exo cyclisation product (8a) was obtained. The authors stated that the reason for this mixture is believed to be the constraints enforced on the system by the geometry of the indole ring. 
Although generally favored under kinetic conditions, the 5-exo cyclisation is very difficult in this case, as there is considerable strain and distortion in the five-membered ring product and hence presumably in the transition state. Thus, a significant amount of direct reduction of the indolyl radical is observed in this case (the aryl radical initially generated is comparatively unstable, and trapping it by $\ce{n-Bu3SnH}$ is comparatively fast (Ref.5)). Although the 6-endo reaction is usually much less favorable in any other situation, the bond angles involved in this situation clearly favor the 6-endo cyclisation pathway (Ref.4). Nonetheless, on extending the chain length by a $\ce{-CH2-}$ group, N-(but-3'-ene)-7-bromoindole (7b) gave only the expected mixture of the 6-exo cyclisation product (8b) and the reduced product (10b) in a 1:1.2 ratio, without giving a trace of the 7-endo cyclisation product (9b). Now, it is clear that the sought radical cyclization in Ref. 6 can be directed by circumstances. The authors envisioned the cyclization scheme depicted in Scheme 4. They expected that compound 5 (Ref.6) would be formed without problem under the radical cyclization conditions indicated in the Scheme, according to the accepted Baldwin's rules. However, only the 7‐endo‐trig adduct, 13, was obtained, in 68% yield. None of the expected spiro adduct 5 (Ref.6) was observed under a wide variety of radical cyclization conditions, in addition to $\ce{n-Bu3SnH}$. The authors hypothesized that this outcome may be attributed to the smaller torsional stress of the benzyl methylene group. However, if we look at the structural features more carefully, we can argue that Beckwith's 2nd rule (substituents disfavor cyclization at the substituted position) may also have been playing a role, favoring the 7-endo-trig addition, whose attacked position carries no additional substituents. Moreover, a larger ring size also has the ability to adjust its conformation in a favorable way (Ref.3). 
The authors have described their effort to obtain the expected 6‐exo‐trig product as follows: Calculations (ref) have indicated that the 2‐oxo group has a great influence on the endo/exo selectivity of radical cyclization reactions. To overcome this unfavorable substrate control, a more torsionally strained $sp^2$ carbonyl group was introduced into compound 12, which we expected would favor a 6‐exo‐trig free‐radical cyclization. As shown in Scheme 5, $\alpha$‐bromination (ref) of aryl ketone 14 followed by coupling with enamide 10 gave highly functionalized enamide 15 in 75% yield over two steps. Fortunately, radical cyclization of compound 15 under the conditions used above gave 6‐exo‐trig product 16 in 72% yield. They really have said "fortunately", indicating more luck than calculation. Yet, there is more evidence in the literature (see the list of references in Ref.6) supporting their argument, including Ref.4 listed here. References: A. L. J. Beckwith, C. J. Easton, A. K. Serelis, "Some guidelines for radical reactions," J. Chem. Soc., Chem. Commun. 1980, 482-483 (DOI:10.1039/C39800000482). A. L. J. Beckwith, G. Moad, "Intramolecular addition in hex-5-enyl, hept-6-enyl, and oct-7-enyl radicals," J. Chem. Soc., Chem. Commun. 1974, 472-473 (DOI:10.1039/C39740000472). A. L. J. Beckwith, C. H. Schiesser, "Regio- and stereo-selectivity of alkenyl radical ring closure: A theoretical study," Tetrahedron 1985, 41(19), 3925-3941 (https://doi.org/10.1016/S0040-4020(01)97174-1). A. P. Dobbs, K. Jones, K. T. Veal, "Radical cyclisation reactions of 7-bromoindoles," Tetrahedron Letters 1997, 38(30), 5379-5382 (https://doi.org/10.1016/S0040-4039(97)01177-5). W. P. Neumann, "Tri-n-butyltin Hydride as Reagent in Organic Synthesis," Synthesis 1987, (8), 665-683 (DOI: 10.1055/s-1987-28044). M. He, C. Qu, B. Ding, H. Chen, Y. Li, G. Qiu, X. Hu, X. Hong, "Total Synthesis of (±)‐8‐Oxo‐erythrinine, (±)‐8‐Oxo‐erythraline, and (±)‐Clivonine," Eur. J. Org. Chem. 
2015, (15), 3240-3250 (https://doi.org/10.1002/ejoc.201500265).
{ "domain": "chemistry.stackexchange", "id": 14413, "tags": "organic-chemistry, radicals, regioselectivity, stereoelectronics" }
Trajectory Rollout paper
Question: Hello, On the wiki page for base_local_planner, there's a link link to a paper discussing TR. The link's broken; can anyone point me to a copy of this paper? Thanks, Rick Originally posted by Rick Armstrong on ROS Answers with karma: 567 on 2014-08-18 Post score: 0 Answer: CiteSeerX seems to have a cached copy available for download: http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.330.2120 Originally posted by jarvisschultz with karma: 9031 on 2014-08-18 This answer was ACCEPTED on the original site Post score: 0 Original comments Comment by Rick Armstrong on 2014-08-18: Ok, now I feel really dumb: I had actually found that page but didn't notice the "cached" link. Thanks!
{ "domain": "robotics.stackexchange", "id": 19096, "tags": "ros" }
A command-line application that manages my lending accounts
Question: I'm learning Python and tried to solve a problem that I have. That problem is "managing" the people who owe me money. Basically, my project goal is: v0.1 - command line application v0.1.1 - command line application that works on persistent data v1.0 - a (non web-base) GUI application that works on persistent data I think I'm close to finishing v0.1 so I'm looking for feedback before moving on. Here is the Github Repo. From file structure, Python conventions, or even terms that I misused in the program itself, please do share your comments. Project Structure . ├── LICENSE.txt ├── README.md ├── main.py └── modules ├── Account.py ├── Borrower.py ├── __init__.py └── helpers.py main.py #!/usr/bin/python3.5 ''' Main program ''' import os import sys from datetime import date from modules.Borrower import Borrower from modules.helpers import clear_delay, press_enter def main(): ''' The main function that allows the user to interact with the rest of the program. ''' # Setup os.system("clear") database = {} # Greet the user. print("Good day! Welcome to our lending company!") clear_delay(1) # Ask for their name. print("What is your name? ", end='') name = input() clear_delay(1) # Name validation. if name in database: print("Welcome back, {}".format(name)) clear_delay(1) elif name not in database: print("Name not found. Do you want to register? (Y/N)") user_wants_to_register = input().upper() == 'Y' clear_delay(2) if user_wants_to_register: database[name] = Borrower(name) print("You are now registered to our services, {}.".format(name)) clear_delay(2) else: print("See you soon!") clear_delay(2) sys.exit() # Interface loop. choice = None while choice != 'Q': print("How may we help you?") print("[0]: Create a new account on my name.") print("[1]: Check accounts under my name.") print("[2]: Pay an existing account.") print("[Q]: Exit the program.") print("\nI want to: ", end='') choice = input().upper() clear_delay(1) # Account creation. 
if choice == '0': # Prompt for amount. print("How much will you borrow?") print("Amount: ", end='') amt = input() clear_delay(1) # Validate amount. if not amt.isdigit(): print("Invalid value.") clear_delay(2) continue amt = float(amt) # Ask for a confirmation. print("A 5% interest rate will be applied weekly on this account.") print("Enter \"YES\" to confirm: ", end='') agree = input().upper() == "YES" clear_delay(1) if agree: database[name].open_account(amt, date.today(), 5) print("Account created! Summary: ") database[name].show_credits(-1) press_enter() else: continue # Check accounts. elif choice == '1': # Check if the user has no existing accounts. if not database[name].accounts: print("You have no accounts under your name.") clear_delay(2) else: print("Enter account id (leave blank to show all acounts): ", end='') acc_id = int(input()) # Validate acc_id if acc_id == "": database[name].show_credits() elif acc_id < 0 or acc_id >= len(database[name].accounts): print("Invalid account id") clear_delay(2) continue else: database[name].show_credits(int(acc_id)) press_enter() # Pay account(s) elif choice == '2': # Check if the user has no existing accounts. if not database[name].accounts: print("You have no accounts under your name.") clear_delay(2) else: print("Here are your accounts:") database[name].show_credits() acc_id = int(press_enter("Enter account id to pay: ")) # Validate acc_id if acc_id < 0 or acc_id >= len(database[name].accounts): print("Invalid account id") clear_delay(2) continue else: # Make payment print("Enter amount to pay: ", end='') amt = int(input()) database[name].pay(acc_id, amt) clear_delay(2) # Show feedback print("Payment succeeded!") database[name].show_credits(acc_id) press_enter() print("Thank you for using our services. See you soon!") main() modules/Account.py ''' Account module used for creating accounts. ''' import datetime class Account: ''' Class used for Account objects. 
''' balance = 0 interest = 0 og_amount = 0 def __init__(self, name, base_amount, start_date, int_rate): self.name = name self.og_amount = self.base_amount = base_amount self.start_date = start_date self.int_rate = int_rate * 0.01 self.update_interest() self.update_balance() def show_info(self): ''' Prints the account information. ''' print("Account: {}".format(self.name)) print("Opened: {}".format(self.start_date)) print("Statement: {} for {}% weekly interest.".format(self.og_amount, self.int_rate * 100)) print("Interest: {}".format(self.interest)) print("Current balance: {}".format(self.balance)) def update_balance(self): ''' Calculates the balance of the account. ''' self.balance = self.base_amount + self.interest def update_interest(self): ''' Calculates and updated the interest of the account. ''' weeks_due = (datetime.date.today() - self.start_date).days // 7 self.interest = self.base_amount * self.int_rate * weeks_due def pay(self, amount): ''' Reduces the account balance by amount an amount ''' if amount > self.interest: self.interest, overflow = 0, self.interest - amount self.base_amount += overflow else: self.interest -= amount self.update_balance() modules/Borrower.py ''' Borrower module used for creating borrower objects. ''' from modules.Account import Account class Borrower: ''' Borrower class creates a borrower object that can hold multiple accounts. ''' accounts = [] def __init__(self, name, credit_status=True): self.name = name self.credit_status = credit_status def open_account(self, base_amount, start_date, int_rate): ''' Adds an account object to the borrower's accounts list. ''' if self.credit_status: self.accounts.append(Account(self.name, base_amount, start_date, int_rate)) else: print("Bad credit status") def show_credits(self, index=None): ''' Shows the account information of the given index. Shows an error message if no accounts are opened. Shows all the accounts otherwise. 
''' if index is None: for account in self.accounts: print("-" * 10) print("ID: {}".format(self.accounts.index(account))) account.show_info() elif not self.accounts: print("No accounts opened") else: print("-" * 10) print("ID: {}".format(self.accounts.index(self.accounts[-1]))) self.accounts[index].show_info() def pay(self, index, amount): ''' Pays an amount to the account at index. ''' self.accounts[index].pay(amount) modules/helpers.py ''' Utility functions ''' import os import time def clear_delay(secs): ''' Delays execution of the program and then clears the screen afterwards. ''' time.sleep(secs) os.system("clear") def press_enter(msg="Press enter to continue"): ''' Delays execution of the program and then clears the screen when enter is pressed. An optional msg can be passed as an argument to customize the output. The keyboard input is returned. ''' print("-" * 10) print(msg, end='') output = input() os.system("clear") return output Answer: That's more about a better user interface than clean code, but why ask the user whether they are already registered if you would register them or close the program anyway? I've never liked that on websites. Signing in or registering should be one button. def signin_or_register(users): print('Signin or register: ') name = input() if name in users: print("Welcome back, {}".format(name)) else: print("You are now registered to our services, {}.".format(name)) As to the code, I'll review it a bit later, but this long if/else should be refactored into a list or a dictionary of possible action functions like [action1, action2, action3]. def create_account(accounts, user): try: amt = prompt('How much will you borrow?\nAmount: ', int) except ValueError: print('Invalid Value') return # Ask for a confirmation. print('A 5% interest rate will be applied weekly on this account.') agree = prompt('Enter "YES" to confirm', lambda i: i.upper() == 'YES') if agree: accounts[user].open_account(amt, date.today(), 5) print('Account created! 
Summary: ') accounts[user].show_credits(-1) press_enter() def prompt(string, as_type=None, delay=1): print(string, end='') inp = input() clear_delay(delay) if as_type and inp == '': return None return as_type(inp) def check_accounts(accounts, user): if not accounts[user].accounts: print('You have no accounts under your name.') clear_delay(2) else: acc_id = prompt('Enter account id (leave blank to show all accounts): ', int) if acc_id is None: accounts[user].show_credits() else: try: accounts[user].show_credits(acc_id) except Exception: print('Invalid account id') press_enter() def exit(): print("Thank you for using our services. See you soon!") sys.exit() actions = ( ('0', (create_account, 'Create a new account on my name')), ('1', (check_accounts, 'Check accounts under my name')), ('Q', (exit, 'Exit the program')), ) actions = OrderedDict(actions) choices_prompt = '\n'.join(f'[{key}]: {hint}' for key, (action, hint) in actions.items()) while True: choice = prompt('How may we help you?\n' + choices_prompt, str.upper) action, hint = actions[choice] action(accounts, user) Notice also the prompt function refactoring. This code may not work, but it shows the idea.
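The dispatch-table refactoring suggested in the answer can be reduced to a small runnable core. This is my cleaned-up sketch of the idea (names illustrative), decoupled from the I/O of the original program:

```python
def create_account():
    return "account created"

def check_accounts():
    return "accounts listed"

# Map each menu key to an (action, hint) pair instead of a long if/else chain.
ACTIONS = {
    "0": (create_account, "Create a new account on my name"),
    "1": (check_accounts, "Check accounts under my name"),
}

def menu_text():
    """Build the menu lines from the table, so menu and dispatch never drift apart."""
    return "\n".join(f"[{key}]: {hint}" for key, (_, hint) in ACTIONS.items())

def dispatch(choice):
    """Look up and run the action for a menu key (case-insensitive)."""
    action, _ = ACTIONS[choice.upper()]
    return action()

print(menu_text())
print(dispatch("0"))  # account created
```

Adding a new menu entry then means adding one line to the table rather than another elif branch, which is the point of the suggested refactoring.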
{ "domain": "codereview.stackexchange", "id": 25563, "tags": "python, console, finance" }
Arrange these compounds: CO2, CH3OH, RbF, CH3Br in order of increasing boiling points
Question: Arrange these compounds: $\ce{CO2}$, $\ce{CH3OH}$, $\ce{RbF}$, $\ce{CH3Br}$ in order of increasing boiling points. I think I should consider the forces between them, that is: $\ce{CO2}$: dispersion forces $\ce{RbF}$: dispersion and ionic forces $\ce{CH3OH}$: Dipole-dipole interactions, hydrogen bonding and dispersion forces $\ce{CH3Br}$: Dipole-dipole interactions and dispersion forces It is obvious that $\ce{CO2}$ is the smallest one, and because $\ce{CH3OH}$ has stronger interactions than $\ce{CH3Br}$, it will have the higher boiling point. But how do I arrange the rest? Or how do I compare ionic forces with dipole forces? Answer: You know $\ce{CO_2}$ is gaseous at room temperature, so let's put that at the bottom. Methanol forms hydrogen bonds, so that will be above bromomethane which does not. Finally we have rubidium fluoride, which is a salt. Salts generally have a very high boiling point (> 1000 °C, much higher than molecular structures) because of the ionic (electrostatic) interaction between the ions, so that one will be at the top. Ionic forces can in a certain way be seen as extreme dipoles; there is a grey area where, when the electronegativity difference becomes large enough, a compound can be seen either as a molecular structure or an ionic structure. Consulting online information about the boiling points of these compounds (i.e. just check Wikipedia or some MSDS site) confirms the theory.
{ "domain": "chemistry.stackexchange", "id": 2708, "tags": "intermolecular-forces, boiling-point" }
Does $\mathrm{tr}(A \otimes B) = \mathrm{tr} (A) \otimes \mathrm{tr}(B)$ hold for partial trace?
Question: I was reading this question from this site answered by DaftWullie. I would like to request you to read the question there. The answer says However, in this particular case, the calculation is much simply. Let $|\phi\rangle$ be the Bell pair such that $$ |\psi\rangle=|\phi_{12}\rangle|\phi_{34}\rangle. $$ Because there's a separable partition between (1,2) and (3,4), this is not changed by the partial trace. Thus $$ \text{Tr}_B|\psi\rangle\langle\psi|=\left(\text{Tr}_2|\phi\rangle\langle\phi|\right)\otimes \left(\text{Tr}_3|\phi\rangle\langle\phi|\right). $$ I don't understand where this expression comes from. There shouldn't be any $\otimes$ in there, right? Can anybody derive it step by step? Because trace is applied over $|\psi\rangle\langle\psi|$ or $|\phi_{12}\rangle|\phi_{34}\rangle\langle\phi_{34}|\langle\phi_{12}|$ or $(|\phi_{12}\rangle \otimes |\phi_{34}\rangle)( \langle\phi_{34}|\otimes \langle\phi_{12}|)$. This is matrix multiplication of $(|\phi_{12}\rangle \otimes |\phi_{34}\rangle)$ and $( \langle\phi_{34}|\otimes \langle\phi_{12}|)$. Answer: DaftWullie's answer is correct. The key identity they are using is $$ \mathrm{tr}_B(\rho_A\otimes\sigma_{BC}) = \rho_A \otimes (\mathrm{tr}_B\sigma_{BC})\tag1 $$ which says that we can pull out tensor factors that do not act on the system being traced over. 
Using $(1)$ and the symbols defined in the linked question and answer, we have $$ \begin{align} \mathrm{tr}_B |\psi\rangle\langle\psi| &= \mathrm{tr}_2 \mathrm{tr}_3 |\psi\rangle\langle\psi| \\ &= \mathrm{tr}_2 \mathrm{tr}_3\left(|\phi_{12}\rangle\langle\phi_{12}| \otimes |\phi_{34}\rangle\langle\phi_{34}|\right) \\ &= \mathrm{tr}_2 \left(|\phi_{12}\rangle\langle\phi_{12}| \otimes \mathrm{tr}_3|\phi_{34}\rangle\langle\phi_{34}|\right) \\ &= (\mathrm{tr}_2 |\phi_{12}\rangle\langle\phi_{12}|) \otimes (\mathrm{tr}_3|\phi_{34}\rangle\langle\phi_{34}|) \end{align} $$ where in the first equality we use $\mathrm{tr}_B = \mathrm{tr}_2 \circ \mathrm{tr}_3$ since Bob has qubits $2$ and $3$, in the second equality we use the definition $|\psi\rangle = |\phi_{12}\rangle|\phi_{34}\rangle$, in the third we use $(1)$ with $A=12$, $B=3$ and $C=4$ and in the fourth we use $(1)$ once again with $A=4$, $B=2$ and $C=1$. Remark on operator domains Note that $\mathrm{tr}_2 |\phi_{12}\rangle\langle\phi_{12}|$ is an operator acting on the Hilbert space $\mathcal{H}_1$ of subsystem $1$ and $\mathrm{tr}_3|\phi_{34}\rangle\langle\phi_{34}|$ is an operator acting on the Hilbert space $\mathcal{H}_4$ of subsystem $4$. Moreover, $\mathrm{tr}_B |\psi\rangle\langle\psi|$ is supposed to be an operator acting on the Hilbert space $\mathcal{H}_1 \otimes \mathcal{H}_4$ of the two qubits owned by Alice. This confirms that there should be $\otimes$ between the factors in the formula from DaftWullie's answer. Proof of $(1)$ Recall that $\mathrm{tr}_B\rho = \sum_i\langle i_B|\rho|i_B\rangle$ where $|i_B\rangle$ is an orthonormal basis of the Hilbert space of the subsystem $B$. Calculate $$ \begin{align} \mathrm{tr}_B(\rho_A\otimes\sigma_{BC}) &= \sum_i \langle i_B|\rho_A\otimes\sigma_{BC}|i_B\rangle \\ &= \sum_i \rho_A\otimes \langle i_B|\sigma_{BC}|i_B\rangle \\ &= \rho_A\otimes \sum_i \langle i_B|\sigma_{BC}|i_B\rangle \\ &= \rho_A \otimes (\mathrm{tr}_B\sigma_{BC}). \end{align} $$
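The identity $(1)$ can also be checked numerically. Below is a small sketch of my own (not part of the original answer) using NumPy; `partial_trace` and the chosen test matrices are ad-hoc illustrations:

```python
import numpy as np

def partial_trace(rho, dims, traced):
    """Trace out subsystem `traced` from a density matrix on the tensor product of spaces of sizes `dims`."""
    n = len(dims)
    t = rho.reshape(dims + dims)                 # one bra and one ket index per subsystem
    t = np.trace(t, axis1=traced, axis2=traced + n)
    keep = int(np.prod([d for i, d in enumerate(dims) if i != traced]))
    return t.reshape(keep, keep)

# rho_A on one qubit, sigma_BC a Bell state on two qubits (any trace-one matrices work)
rho_A = np.diag([0.25, 0.75])
v = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2)
sigma_BC = np.outer(v, v)

lhs = partial_trace(np.kron(rho_A, sigma_BC), [2, 2, 2], traced=1)   # tr_B (rho_A ⊗ sigma_BC)
rhs = np.kron(rho_A, partial_trace(sigma_BC, [2, 2], traced=0))      # rho_A ⊗ (tr_B sigma_BC)
assert np.allclose(lhs, rhs)
```

The assertion passing confirms that the tensor factor not acted on by the trace pulls straight out, as in $(1)$.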
{ "domain": "quantumcomputing.stackexchange", "id": 2450, "tags": "quantum-state, entanglement, linear-algebra" }
Animal choir simulator
Question: I got a PHP developer interview test to solve for a company. I didn't get the job but I would like to know where I was wrong. Did I understand the test completely? /** * Create an Animal Choir simulator * * The task constraints are: * * There must be 3 different choir member animals * (i.e. dogs, cats, mice) * * Every animal must have a sing method that returns a string representation of a voice * (i.e. 'bark', 'meow', 'squeak') * * Every animal must have a loudness property which can have 3 settings, * depending on which the sing method result will be rendered as * lowercase, first letter uppercase and uppercase. * * Singer groups are groups of animals with the same loudness property value. * Singer group song is represented with a CSV of the group singer sing result in random order. * * The choir simulator must implement the following methods: * crescendo - the choir start singing from the least loud singer group, and then are being joined * by more and more loud singer groups until they are singing all together. * The joining is represented with a new line. 
* Example: * meow, squeak, bark * Meow, bark, squeak, Bark, meow * bark, Meow, MEOW, squeak, BARK, meow, Bark * * arpeggio - the choir singer groups of the same loudness start singing one by one from * the least loud to the loudest * Example: * meow, squeak, bark * Meow, Bark * MEOW, BARK * */ //TODO: Describe the class hierarchy //Choir class class Choir{ public $line_ending = ''; public $line_separator = ''; public $voices = ''; public function crescendo(){ $crescendo_song = ''; $animals = new Animal(); //We define which animal voices will be included in song $animals->voices = $this->voices; //First we start with the silent $animals->loudness = 'silent'; //Call the function $silent = $animals->sing(); //echo the result $crescendo_song .= $this->stringForm($silent); //Then we continue with the normal, but also merge with silent $animals->loudness = 'normal'; $normal = array_merge($animals->sing(),$silent); $crescendo_song .= $this->stringForm($normal); //And in the end we merge loud with silent and normal $animals->loudness = 'loud'; $loud = array_merge($animals->sing(),$silent,$normal); $crescendo_song .= $this->stringForm($loud); return $crescendo_song; } public function arpeggio(){ $arpeggio_song = ''; $animals = new Animal(); //We define which animal voices will be included in song $animals->voices = $this->voices; //First we start with the silent $animals->loudness = 'silent'; $arpeggio_song .= $this->stringForm($animals->sing()); //Then normal $animals->loudness = 'normal'; $arpeggio_song .= $this->stringForm($animals->sing()); //And then the loud $animals->loudness = 'loud'; $arpeggio_song .= $this->stringForm($animals->sing()); return $arpeggio_song; } private function stringForm($array){ //Randomize the array shuffle($array); //Predefine a song and a separator $song = ''; $comma = ''; //Form song string foreach ($array as $slog) { $song .= $comma.$slog; $comma = $this->line_separator.' 
'; }; return $song.$this->line_ending; } } //Animal class class Animal{ public $loudness = ''; public function sing(){ $song = array(); foreach ($this->voices as $voices) { switch ($this->loudness) { case 'silent': array_push($song,strtolower($voices)); break; case 'normal': array_push($song,ucfirst($voices)); break; case 'loud': array_push($song,strtoupper($voices)); break; default: array_push($song,$voices); break; } } return $song; } } $choir = new Choir(); $choir->line_ending = PHP_EOL; //$choir->line_ending = '<br>'; //For cleared viewing in browser use <br> tag //Define the line separator for CSV $choir->line_separator = ','; //Define the voices of animals $choir->voices = array('bark','meow','squeak'); //Call and echo the functions echo $choir->crescendo(); echo $choir->arpeggio(); Answer: Let's start with the class hierarchy. The choir is made up of the animals. The Choir class should have a list of its Animal members: $felix = new Cat(); $rex = new Dog(); $bernard = new Mouse(); $choir = new Choir(array($felix, $rex, $bernard)); Then each animal has a different way of singing, but they all sing. That is classic polymorphism. (In fact they sing in such a similar way that the sing() method goes into the Animal class.) class Dog extends Animal { public function __construct() { $this->noise = 'bark'; .... class Animal { private $loudness; private $loudnessLevels = array('silent', 'normal', 'screaming'); private $noise; public function __set($name, $value) { if($name == 'loudness') { if(!in_array($value, $this->loudnessLevels)) { throw new Exception('Unknown loudness'); } $this->loudness = $value; ... The value of getters/setters in PHP is a discussion of its own (coming from a C++/Python background I'm pleased never having to do PHP OOP). Now for some low-level criticism (note: I didn't check for bugs): use getters and setters; and in your sing() function there is the default case - you just output the string in this case, but actually it is an undefined case and should throw an exception.
{ "domain": "codereview.stackexchange", "id": 8674, "tags": "php, object-oriented, interview-questions" }
How to prove that a language is not regular?
Question: We learned about the class of regular languages $\mathrm{REG}$. It is characterised by any one concept among regular expressions, finite automata and left-linear grammars, so it is easy to show that a given language is regular. How do I show the opposite, though? My TA has been adamant that in order to do so, we would have to show for all regular expressions (or for all finite automata, or for all left-linear grammars) that they can not describe the language at hand. This seems like a big task! I have read about some pumping lemma but it looks really complicated. This is intended to be a reference question collecting usual proof methods and application examples. See here for the same question on context-free languages. Answer: Proof by contradiction is often used to show that a language is not regular: let $P$ be a property true of all regular languages; if your specific language does not satisfy $P$, then it's not regular. The following properties can be used: The pumping lemma, as exemplified in Dave's answer; Closure properties of regular languages (set operations, concatenation, Kleene star, mirror, homomorphisms); A regular language has a finite number of prefix equivalence classes (Myhill–Nerode theorem). To prove that a language $L$ is not regular using closure properties, the technique is to combine $L$ with regular languages by operations that preserve regularity in order to obtain a language known to be not regular, e.g., the archetypical language $I= \{ a^n b^n \mid n \in \mathbb{N} \}$. For instance, let $L= \{a^p b^q \mid p \neq q \}$. Assume $L$ is regular; as regular languages are closed under complementation, so is $L$'s complement $L^c$. Now take the intersection of $L^c$ with the regular language $a^\star b^\star$: we obtain $I$, which is not regular. The Myhill–Nerode theorem can be used to prove that $I$ is not regular. For $p \geq 0 $, $I/a^p= \{ a^{r}b^rb^p\mid r \in \mathbb{N} \}=I.\{b^p\}$. 
All these classes are different, and there are countably infinitely many of them. Since a regular language must have a finite number of classes, $I$ is not regular.
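The Myhill–Nerode argument can be made concrete with a few lines of code. The sketch below (my own illustration, not part of the original answer) shows that the suffix $b^p$ distinguishes the prefixes $a^p$ and $a^q$ whenever $p \neq q$, so no two such prefixes can share an equivalence class:

```python
def in_I(w):
    """Membership test for I = { a^n b^n : n >= 0 }."""
    n = len(w) // 2
    return len(w) % 2 == 0 and w == 'a' * n + 'b' * n

# a^p and a^q (p != q) are distinguished by the suffix b^p:
for p in range(1, 20):
    for q in range(1, 20):
        if p != q:
            assert in_I('a' * p + 'b' * p)       # a^p . b^p is in I
            assert not in_I('a' * q + 'b' * p)   # a^q . b^p is not
# Hence I has infinitely many Nerode classes and cannot be regular.
```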
{ "domain": "cs.stackexchange", "id": 14007, "tags": "formal-languages, regular-languages, proof-techniques, reference-question" }
Can physical spaces 'resist' certain sound frequencies?
Question: I whistle a lot, and I'm fairly decent at it. Recently I was walking up the stairs in a house while whistling. As I whistled, I found it difficult to hit a specific note in the song that I would normally be able to whistle. I found this strange, so I stopped in that exact spot on the stairs, and tried whistling that note. It felt as though there was some kind of resistance when I whistled that exerted pressure on the pitch either up or down to a different note at which there was no resistance. It seemed really weird to me, but I stood there for a few minutes testing different pitches, and it really seemed as though there was a certain pitch that didn't resonate well in that space, and not only that, but I felt some kind of pressure on the note that I was whistling to move to a different note. Is this kind of like the opposite of resonance? Or am I crazy? Answer: This sounds like destructive interference interacting with your attempt at driving a resonance. Normally when whistling or singing in the shower or a stairwell, certain pitches get nicely amplified by resonance. These room modes are standing waves where an integer number of half-wavelengths of the pitch fits between the reflecting walls. The sound source corresponds to a pressure amplitude maximum, but if it is a quarter wavelength from the wall the reflected pressure wave will exactly cancel it. So for each pitch there should be "dead" spots in at least some places. (An interesting question is why we rarely observe them, while shower singing easily finds resonances where our voices fill out well. It might just be that we unconsciously modulate the pitch to where it resonates rather than keeping the same pitch and moving.)
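As a rough back-of-the-envelope check (my own numbers, not from the answer), the dead spot for a whistled note sits about a quarter wavelength from a reflecting wall:

```python
# Speed of sound in air at room temperature (approximate)
c = 343.0   # m/s
f = 1000.0  # Hz, a typical whistled pitch (assumed value)

wavelength = c / f          # wavelength of the note, in metres
dead_spot = wavelength / 4  # distance from the wall where cancellation occurs
print(f"quarter-wavelength distance: {dead_spot * 100:.1f} cm")
```

So for a ~1 kHz whistle the cancellation point is only about 9 cm from the wall, which is why moving your head slightly (or bending the pitch slightly) is enough to escape it.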
{ "domain": "physics.stackexchange", "id": 62722, "tags": "acoustics, interference, wavelength, superposition, resonance" }
What is a "Born diagram"?
Question: In the introduction of this article, the following statement is made regarding the partonic picture for hadronic scattering amplitudes: To leading order in $\alpha_S (Q^2)$, the "hard-scattering amplitude" $T_H$ is the sum of all Born diagrams for $\gamma^*+3q\rightarrow 3q$ in perturbative QCD. In the above, $T_H$ is the scattering amplitude for $\gamma^*+3q\rightarrow 3q$ where each quark $q$ is specified. Of course, it is given by the sum of all Feynman diagrams contributing to it. At $O(\alpha)$ we only have tree-level diagrams, and therefore I believe "Born diagrams" should mean either "tree-level Feynman diagrams" or "tree-level partonic Feynman diagrams". Are either of these guesses correct? What exactly is a "Born diagram"? I would appreciate any references. Answer: You're right. "Born" and "tree level" are the same thing. It's not very common to say Born anymore, but the reason why they call it that in the reference is likely due to the more standard quantum mechanical definition. In nonrelativistic quantum mechanics one usually calls the approximation at lowest order in the small potential the "Born approximation". You can see this in pretty much every quantum mechanics book (say, Weinberg's "Lectures on Quantum Mechanics") where the scattering off a potential is treated.
{ "domain": "physics.stackexchange", "id": 82690, "tags": "quantum-field-theory, terminology, definition, feynman-diagrams" }
Filtering detail table based on selection in summary table in Orange 3.11
Question: I'm rather new to Orange, so apologies in advance if I'm missing something obvious or am trying to fit a square peg in a round hole. I have two files, Summary and Detail, that I'm importing into an Orange workflow. Summary has one row per Contact, and Detail has multiple rows per Contact, each row corresponding to a particular action they've taken. Contacts are identified in both files by Contact ID. I've done a bunch of analysis to identify Contacts in Summary that I'm interested in using Orange's machine learning algorithms. I'd like to be able to step through each row in Summary (that is, each Contact) to see what actions in Detail that the Contact took. I thought I'd be able to use the Merge Data widget to do this by hooking up Selected Data from Summary as the Data and Data from Detail as the Extra Data using a Find Matching Rows join type. However, since each unique Contact ID appears multiple times in Detail, I'm not able to select it in the Merge Data widget. I know it's nonsensical, but I did try to wire Data from Detail as the Data and Selected Data from Summary as the Extra Data with the only Merge Data option that worked (Append columns from Extra Data). This showed me all the data from Detail, but appended with columns from Summary when the appropriate row was selected in Summary. Based on my understanding of Merge Data, this is predicted but (in my use case) unhelpful behavior. In database terms, I'm sort of trying to set up an inner join with a one-to-many relationship between Summary and Detail and, by selecting a row in Summary, to be able to see the related rows in Detail. So, the bottom line is, is this sort of behavior doable in Orange, or am I trying to paint a house with a chainsaw? Is there another widget that would be more appropriate for the task? (I also tried the Select Rows widget, but couldn't figure out how to populate the filters based on the row that I had selected in Summary.) 
Answer: A workflow like this might do the trick, although it is static (does not adjust to varying actions). Load the two files, Summary and Details. I suppose "Details" has columns such as "Action" and some value. In my example, Details will contain "Subject" and "Grade" for 3 high-school subjects. "Select columns" is optional (slim down unused columns). You need a "Select rows" for each subject/action, filtering that specific one. Every "Edit Domain" is used to rename the generic feature name to a specific one (like "Grade" to "Grade_Math"). Results are then merged step-wise into the Summary (which in my example contains students; your summary contains some other people). If the number of actions is limited, the time needed is reasonable. If this is a one-time job, you might as well save the output (CSV) and use that in a different project. If it needs to be more dynamic and adaptive, you'll have to dig into Python and use the Python code widget to create a list of actions, create a list of columns based on that, and then populate those with matching values.
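If you do end up in Python, the one-to-many lookup itself is a single merge. Here is a minimal pandas sketch (my own illustration — the column name "Contact ID" and the toy data are assumed from the question; the Orange widget wiring is up to you):

```python
import pandas as pd

# Hypothetical stand-ins for the Summary selection and the Detail table
summary_selected = pd.DataFrame({'Contact ID': [2], 'Score': [0.9]})
detail = pd.DataFrame({'Contact ID': [1, 2, 2, 3],
                       'Action': ['open', 'click', 'buy', 'open']})

# Inner join: keep only the Detail rows whose Contact ID appears in the selection
matched = detail.merge(summary_selected[['Contact ID']], on='Contact ID', how='inner')
print(matched)  # the two Detail rows for contact 2
```

This is exactly the one-to-many inner join described in the question: selecting a different row in Summary and re-running yields that contact's actions.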
{ "domain": "datascience.stackexchange", "id": 2927, "tags": "orange" }
What is the expression for current density of a ring in spherical coordinates
Question: I am not understanding the following expression for the current density of a ring of radius $a$ : $$J_{\phi}= I \sin\theta'\delta (\cos\theta')\delta(r'-a)/a.$$ It is dimensionally correct. I do not know how the normalization was done to get the total current. This is Jackson Eq. (5.33) in Section 5.5. I am sorry to ask a homework-like question. Answer: If your ring of current $I$ is at the parallel $r=r_0$ and $\theta=\theta_0$, then: $$ j = I\delta(\theta-\theta_0)\frac{\delta(r-r_0)}{r_0}e_\phi $$ This is dimensionally correct, like your expression. Indeed, when looking at the current flowing through a half plane $\phi=\text{const}$, using: $$ \int_0^\pi d\theta\int_0^\infty dr r \delta(\theta-\theta_0)\frac{\delta(r-r_0)}{r_0} = 1 $$ you do get a total current $I$. You need to use the 2D area element (same as polar coordinates): $$ d\theta\, r\,dr $$ In general, if it is not located at a singular point of the coordinate system, you just need to divide by the 2D Jacobian at that point. Hope this helps.
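The normalization can be sanity-checked numerically by smearing each delta into a narrow Gaussian (entirely my own sketch; $r_0$, $\theta_0$ and the width $s$ are arbitrary choices):

```python
import numpy as np

def gauss(x, x0, s):
    """Narrow normalized Gaussian as a stand-in for a delta function."""
    return np.exp(-(x - x0)**2 / (2 * s**2)) / (s * np.sqrt(2 * np.pi))

r0, theta0, s = 1.0, np.pi / 2, 0.01
theta = np.linspace(0.0, np.pi, 20001)
r = np.linspace(0.0, 2 * r0, 20001)

# Current through a half-plane phi = const, with area element r dr dtheta:
# integral of delta(theta - theta0) * delta(r - r0)/r0 * r  over r and theta
I_theta = np.sum(gauss(theta, theta0, s)) * (theta[1] - theta[0])
I_r = np.sum(r * gauss(r, r0, s)) * (r[1] - r[0]) / r0
print(I_theta * I_r)  # close to 1, so the prefactor I is indeed the total current
```

The factor of $1/r_0$ is exactly what cancels the $r$ in the area element at the ring, leaving a unit integral.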
{ "domain": "physics.stackexchange", "id": 96140, "tags": "homework-and-exercises, electromagnetism, electric-current, dimensional-analysis" }
Solve the recurrence $\ T(n) = \sqrt{n} \cdot T(\sqrt{n}) + 1 $
Question: $$\ T(n) = \sqrt{n} \cdot T(\sqrt{n}) + 1 $$ I've found so many similar questions but I couldn't understand any of the answers' explanations. When I try to draw a recurrence tree, I see that each 'level' has as many operations as nodes (because of only $\ 1 $ operation in each node) so the first level has $\ 1 $ node, then $\ \sqrt{n} $ nodes, then $\ \sqrt{\sqrt{n}} $ nodes and so forth to $\ n^{\frac{1}{2^k}} $ on the lowest level of the tree. I get the same answer when unrolling it: $\ T(n) = n^\frac{1}{2} T(n^{\frac{1}{2}}) + 1 = n^{\frac{1}{2}}(n^{\frac{1}{4}} T(n^{\frac{1}{4}}) + 1) + 1= n^{\frac{1}{2}} ( n^{\frac{1}{4}}(n^{\frac{1}{8}} T(n^{\frac{1}{8}}) + 1 ) + 1) + 1 = ... $ But it is really inconvenient working with this form. I also tried using substitution as mentioned here and then applying the master theorem, but I can't understand how to make the transition back. There is also a similar question here, but no further explanation in the answers. I would rather use the recurrence tree to solve it, but substitution and the master theorem are also fine. Answer: Let us assume that $n$ is of the form $2^{2^k}$, and furthermore assume a base case of $T(2) = 1$. Applying the substitution method, \begin{align} T(2^{2^k}) &= 1 + 2^{2^{k-1}} T(2^{2^{k-1}}) \\ &= 1 + 2^{2^{k-1}} + 2^{2^{k-1}+2^{k-2}} T(2^{2^{k-2}}) \\ &= 1 + 2^{2^{k-1}} + 2^{2^{k-1}+2^{k-2}} + 2^{2^{k-1}+2^{k-2}+2^{k-3}} T(2^{2^{k-3}}) \\ &= \cdots \\ &= 1 + 2^{2^{k-1}} + 2^{2^{k-1}+2^{k-2}} + 2^{2^{k-1}+2^{k-2}+2^{k-3}} + \cdots + 2^{2^{k-1} + \cdots + 2^0} T(2^{2^0}) \\ &= 1 + 2^{2^{k-1}} + 2^{2^{k-1}+2^{k-2}} + 2^{2^{k-1}+2^{k-2}+2^{k-3}} + \cdots + 2^{2^{k-1} + \cdots + 2^0} \\ &= 2^{2^{k}} \left[ 2^{-2^k} + 2^{-2^{k-1}} + 2^{-2^{k-2}} + 2^{-2^{k-3}} + \cdots + 2^{-2^0} \right] \\ &\approx 2^{2^k} \sum_{\ell=0}^\infty \frac{1}{2^{2^\ell}}. \end{align} The infinite series converges, and we conclude that $T(2^{2^k}) = \Theta(2^{2^{k}})$.
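The conclusion $T(n) = \Theta(n)$ for $n = 2^{2^k}$ is easy to check numerically. A quick sketch of my own, with the base case $T(2)=1$ assumed above:

```python
def T(n):
    """Exact recurrence T(n) = sqrt(n)*T(sqrt(n)) + 1 for n of the form 2^(2^k)."""
    if n <= 2:
        return 1
    r = round(n ** 0.5)  # square root is exact for n = 2^(2^k)
    return r * T(r) + 1

for k in range(1, 6):
    n = 2 ** (2 ** k)
    print(k, n, T(n), T(n) / n)
# The ratio T(n)/n converges to sum over l>=0 of 2^(-2^l), about 0.8164,
# confirming T(n) = Theta(n) = Theta(2^(2^k)).
```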
{ "domain": "cs.stackexchange", "id": 17220, "tags": "algorithm-analysis, recurrence-relation" }
Confused by proof of correctness of Majority
Question: I have been studying a streaming algorithm to determine if there is a majority element in a stream. But am confused by a proof for it. The algorithm works as follows. You keep one counter $c$ and a store for one item called $a^*$. When a new item arrives, first you check if $c == 0$. If so you set $c=1$ and $a^*$ stores the arriving item. Otherwise, if $c>0$ and the arriving item is the same as $a^*$ you increment $c$ or decrement $c$ if not. If there is a majority element then it will be stored in $a^*$ at the end. In the notes from http://www.cs.toronto.edu/~bor/2420f17/L11.pdf there is a proof of this fact (called simpler proof). I can see that if there is a majority item then $c'$ will be positive at the end of the stream. But: How do we know that $a^*$ will hold the majority item at the end? Does $c'$ being positive imply that $c$ will be positive too? Answer: At each step, if $c == 0$, set $c=1$. Otherwise, if $c>0$, either increment $c$ or decrement $c$. Since $c$ is 0 initially, it is always true that $c\ge0$. That means, $-c\le0$ all the time. Suppose, at some moment, $c'$ is positive. The definition of $c'$ says $c'=c$ when $a^*=v$ and $c'=-c$ otherwise. Since $-c\le0$, $c'\not=-c$. So, the situation cannot be "otherwise". The situation must be other than "otherwise", i.e, $a^*=v$ and, hence, $c'=c$. We just proved that for any moment, if $c'$ is positive, then $a^*=v$ and $c=c'$ is also positive. Now just apply the above conclusion to the very last moment.
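For reference, the streaming algorithm itself (the Boyer–Moore majority vote) is only a few lines; this is a straightforward Python transcription of the description in the question:

```python
def majority_candidate(stream):
    """Return a*; if the stream has a strict majority element, this is it."""
    c, a_star = 0, None
    for v in stream:
        if c == 0:
            a_star, c = v, 1   # adopt the arriving item as the candidate
        elif v == a_star:
            c += 1
        else:
            c -= 1
    return a_star

# 1 occurs 5 times out of 8 (a strict majority), so it must be returned:
print(majority_candidate([1, 2, 1, 1, 3, 1, 2, 1]))
```

Note that if no majority element exists, $a^*$ is still some element of the stream, so a second pass is needed to verify the candidate.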
{ "domain": "cs.stackexchange", "id": 16780, "tags": "streaming-algorithm" }
How safe is this assumption for turbulence modelling in porous media?
Question: I am studying the performance of volumetric solar receivers (specifically ceramic foams) versus the change of many parameters: solar flux, surface temperature, porosity, pore diameter, flow pressure and flow velocity. I have seen two ways in the literature to model porous media: Drawing a complex 3D structure that is analogous to the porous zone. While this might be a very problem-specific solution, turbulence modelling in that case is not a problem. Averaging the governing partial differential equations to account for the nature of porous media and introducing new terms to the equations (porosity $\epsilon$ and superficial velocity). And this is the approach I am taking, which is used in porous models in many CFD solvers. However, while reading about the treatment of turbulence in porous media in the FLUENT user guide I found that turbulence is not exactly modelled: turbulence in the medium is treated as though the solid medium has no effect on the turbulence generation or dissipation rates. This assumption may be reasonable if the medium's permeability is quite large and the geometric scale of the medium does not interact with the scale of the turbulent eddies. In other instances, however, you may want to suppress the effect of turbulence in the medium. I find "medium's permeability is quite large" quite ambiguous (what values are considered large?) and don't know whether my model will be badly affected by this treatment or not, and to what extent. So has anyone come across this before? Or am I overcomplicating things for a parametric study? Answer: Short answer: The FLUENT approach is trivial. 
Much information is lost due to the space-averaging process - over a representative elementary volume - of the governing equations (which is the essence of every porous model that tries to avoid the complexity of the real geometry of the porous media), so the goal of any turbulence model is not to "reproduce the fine structure dynamics of the flow but to take into account information embedded in smaller scale for large scale modelization."[1]. The FLUENT approach simply ignores this fact, or even assumes that there is no turbulence energy dissipation or generation due to the porous zone (a similar approach was taken by Antohe and Lage to develop a new turbulence model that led to trivial solutions ($k = 0$ or $\epsilon = 0$)). I found that the STAR-CCM+ code follows a similar approach in its turbulence treatment: The effect of a porous region on turbulent flow depends on the internal structure of the porous medium. Where turbulence is present, the turbulence scales are determined from the geometric structure of the porous medium. As it is not possible for STAR-CCM+ to predict the turbulence scales directly, you specify the appropriate values on the porous region. Turbulence quantities in fluid leaving the porous region are constructed from the user-defined values; they are not transported from the upstream side of the porous region. So, I think if you are pragmatic enough to have a model that can sacrifice the accuracy of results of the turbulence model you use, these assumptions are the best you can get. [1] $k–\epsilon$ Macro-scale modeling of turbulence based on a two scale analysis in porous media - Francois Pinson, Olivier Gregoire, and Olivier Simonin. [2] A new turbulence model for porous media flows. Part I: Constitutive equations and model closure - Federico E. Teruel and Rizwan-uddin.
{ "domain": "engineering.stackexchange", "id": 603, "tags": "cfd, turbulence, porous-medium" }
Increasing distance between Earth and Moon
Question: I have a problem where a planet's rate of rotation is decreasing due to tidal effects of the moon. I know that the angular momentum of the system will be conserved. So, in order to conserve that, the moon will recede away from the planet. $$ L = mvr = m\omega r^2.$$ I'm not sure how to convert/translate this loss in angular momentum of the planet to the rate of recession of the moon. Answer: The key is, as you indicate, in the conservation of angular momentum. First, a comment on the cause: the tides. The tidal effect (the flows) causes a slowdown in the angular velocity of the Earth; that is to say, the Earth is slowed down by the tidal effects caused by the Moon (the Sun also causes tides, but the effect is much smaller). What happens? Let us consider the isolated Earth-Moon system. The total angular momentum must be conserved. The angular velocity of the Earth decreases because of the tidal effects, so the period of rotation is increasing. That is to say, the angular momentum of the Earth decreases. As a result, if the angular momentum of the Earth decreases, the angular momentum of the Moon has to increase so that the total angular momentum is conserved; it is the only option. Therefore, to increase the angular momentum of the Moon, the orbit of the Moon becomes higher: gradually, the Moon's orbit becomes larger, i.e., the distance to the Earth increases. Let me sketch an approach. From the dynamics of circular motion $$ G \frac {M_e M_m}{d^2} = M_m w^2 d $$ where $w = \sqrt {G \frac{M_e}{d^3}}$, and $M_e$ and $M_m$ are the masses of the Earth and Moon respectively. 
Now, we consider the isolated Earth-Moon system; the total angular momentum $L$ must be conserved over time, that is: $$ L = L_e + L_m = L'_e + L'_m = L' $$ The moment of inertia of a sphere of mass $M_e$ and radius $R$ with respect to the axis of rotation that passes through its center is $2M_eR^2/5$ (obviously, for simplicity, we are considering the Earth to be spherical), and the moment of inertia of a particle of mass $M_m$ at a distance $d$ from the axis of rotation is $M_m d^2$. Another consideration: the mass of the Earth is considerably greater than the mass of the Moon, and the distance between their centers is much greater than either of their radii, so we can treat the Moon as a particle of mass $M_m = 7.35\times 10^{22}\ \mathrm{kg}$ which describes a circular orbit of radius $d$ around the Earth. As a last simplification, we consider the axis of rotation of the Earth to be perpendicular to the plane of the Moon's orbit. Also, you know for the Earth: $M_e = 5.98 \times 10^{24}\ \mathrm{kg}$, $R_e = 6.37 \times 10^6\ \mathrm{m}$. After all these considerations, we have: $$ \frac 25 M_e R^2 \Omega + M_m d^2 w = \frac 25 M_e R^2 \Omega_1 + M_m d_1^2 w_1 $$ I have to leave now; anyway, I hope you understand my explanation. If you need, I can continue with the derivation, but I think you'll be able to do that.
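Plugging rough present-day numbers into the two angular-momentum terms gives a feel for the budget (my own back-of-the-envelope sketch; the values and the spherical/circular simplifications are assumptions):

```python
import math

G = 6.674e-11       # m^3 kg^-1 s^-2
M_e = 5.98e24       # kg, Earth
R_e = 6.37e6        # m, Earth radius
M_m = 7.35e22       # kg, Moon
d = 3.84e8          # m, present Earth-Moon distance
Omega = 2 * math.pi / 86164.0    # Earth's sidereal spin rate, rad/s

w = math.sqrt(G * M_e / d**3)    # Moon's orbital angular velocity
L_e = 0.4 * M_e * R_e**2 * Omega # spin angular momentum, (2/5) M R^2 Omega
L_m = M_m * d**2 * w             # Moon's orbital angular momentum

print(f"L_earth = {L_e:.2e}, L_moon = {L_m:.2e}")
# As L_e decreases through tidal braking, L_m (and hence d) must grow.
```

The orbital term is already a few times larger than the spin term, and the spin term is the reservoir being drained by tidal braking.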
{ "domain": "physics.stackexchange", "id": 45074, "tags": "newtonian-mechanics, newtonian-gravity, angular-momentum, moon, tidal-effect" }
Motion of the center of mass of a falling rod
Question: I have drawn a slender rod released from rest. According to Newton's 2nd law, the horizontal displacement of the center of mass, which is located at the centroid, must remain constant as there are no forces acting on it horizontally. So why, when I sketch the rod at different times, is it very clear that the horizontal displacement of the COM is changing? I'm very confused; what is my mistake? Answer: Do a free body diagram of the falling rod. There are three cases to consider, and as the body falls it will transition from one to another. There are 5 parameters to consider. The position of point A along the horizontal $x_A$ and its derivatives, the position of point A away from the ground $y_A$ and its derivatives, the angle of the rod $\theta$ from vertical, the horizontal reaction force $A_x$ at A and the vertical reaction $A_y$ also at A. For each scenario, there should be 3 unknowns to be solved from the 3 equations of motion. Fixed End Friction on A causes that point to remain fixed in space and the center of mass to move to the right. This ends when $ A_x \gt \mu A_y$. $$ \begin{array}{r|l} \mbox{variable} & \mbox{state} \\ \hline \theta & \mbox{unknown} \\ y_A & \mbox{fixed at 0} \\ x_A & \mbox{fixed at 0} \\ A_y & \mbox{unknown} \\ A_x & \mbox{unknown} \\ \end{array}$$ Sliding End Friction at A is overcome, and the end slides along the horizontal axis. Sliding friction causes the center of mass to continue moving to the right, but the location of point A is no longer known. $$ \begin{array}{r|l} \mbox{variable} & \mbox{state} \\ \hline \theta & \mbox{unknown} \\ y_A & \mbox{fixed at 0} \\ x_A & \mbox{unknown}\\ A_y & \mbox{unknown} \\ A_x & \mbox{dependent, } A_x = \mu A_y \\ \end{array}$$ Flying End The rotation of the rod is high enough to lift the end of the rod. The center of mass moves in a projectile motion at this point, under the influence of gravity only. 
$$ \begin{array}{r|l} \mbox{variable} & \mbox{state} \\ \hline \theta & \mbox{unknown} \\ y_A & \mbox{unknown} \\ x_A & \mbox{unknown}\\ A_y & \mbox{fixed at 0} \\ A_x & \mbox{fixed at 0} \\ \end{array}$$ If the plane is frictionless then the rod transitions from Case 1 to Case 2 immediately and $A_x=0$.
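The frictionless limit is simple enough to check numerically. The sketch below is my own illustration (not part of the answer): with no horizontal force, the centre of mass of a uniform rod tipping on a frictionless floor falls straight down, so only the tilt angle is a true unknown, and its equation of motion follows from a one-variable Lagrangian.

```python
import math

def rod_tip_frictionless(L=1.0, g=9.81, theta0=0.05, dt=1e-5, steps=40000):
    """Tilt angle of a uniform rod falling on a frictionless floor.

    With no horizontal force, x_cm is constant and y_cm = (L/2) cos(theta).
    The Lagrangian equation of motion for theta (measured from vertical) is
        theta'' * ((L^2/4) sin^2(theta) + L^2/12)
            = g (L/2) sin(theta) - (L^2/4) sin(theta) cos(theta) theta'^2
    Returns the (theta, omega) history from a semi-implicit Euler integration.
    """
    theta, omega = theta0, 0.0
    history = [(theta, omega)]
    for _ in range(steps):
        s, c = math.sin(theta), math.cos(theta)
        alpha = (g * (L / 2) * s - (L**2 / 4) * s * c * omega**2) \
                / ((L**2 / 4) * s**2 + L**2 / 12)
        omega += alpha * dt            # update velocity first (semi-implicit)
        theta += omega * dt
        history.append((theta, omega))
        if theta >= math.pi / 2:       # rod has reached the floor
            break
    return history
```

Since the end of the rod sits at $x_A = x_{cm} - (L/2)\sin\theta$, successive sketches show the end sliding sideways even though $x_{cm}$ never moves: the apparent sideways motion lives entirely in the rotation.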
{ "domain": "physics.stackexchange", "id": 50009, "tags": "newtonian-mechanics" }
Using Bohr's postulate find the relation for electron velocity of lithium atom
Question: Using Bohr's postulates, derive the formula for the velocity of the electron in the 4th orbit of a doubly ionized lithium atom $_{3}Li^{7}$. Using $$mvr=n\hbar$$ and $$\frac{mv^2}{r}=\frac{1}{4\pi\epsilon_{0}}\frac{Ze^2}{r^2}$$ I obtain: $$v=\frac{Ze^2}{4\pi\epsilon_{0}\hbar}\frac{1}{n}$$ For the particular case $Z=3$ and $n=4$, but I am not sure how this double ionization affects the result. Answer: Bohr's model is only valid for atoms with a single electron, as it neglects any electron-electron interaction. One might think that the only atom with a single electron is hydrogen. However, ionized helium ($\mathrm{He}^+$) and doubly ionized lithium ($\mathrm{Li}^{2+}$) also fulfill this condition. (Similarly for even heavier atoms...) So in your exercise, you need not make any changes to accommodate the fact that the lithium is doubly ionized. It is just a requirement for the model to be applicable.
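As a numeric sanity check (my own sketch, not part of the answer), the Bohr-model speed $v = Ze^{2}/(4\pi\epsilon_{0}\hbar n)$ for $Z=3$, $n=4$ comes out to roughly $0.0055c$, comfortably non-relativistic:

```python
import math

# Rounded CODATA constants (SI units)
e = 1.602176634e-19      # elementary charge, C
eps0 = 8.8541878128e-12  # vacuum permittivity, F/m
hbar = 1.054571817e-34   # reduced Planck constant, J*s
c = 2.99792458e8         # speed of light, m/s

def bohr_speed(Z, n):
    """Electron speed in the n-th Bohr orbit of a one-electron ion."""
    return Z * e**2 / (4 * math.pi * eps0 * hbar * n)

v = bohr_speed(Z=3, n=4)  # doubly ionized lithium, 4th orbit
# Equivalent form: v = (Z/n) * alpha * c, with alpha the fine-structure constant
```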
{ "domain": "physics.stackexchange", "id": 32267, "tags": "homework-and-exercises, atomic-physics" }
Is the transit technique for exoplanet detections part of the "Wide-Field Precision Photometry Revolution"?
Question: In an exoplanet-focused lecture I was informed that the two main techniques for the detection of exoplanets were: radial velocity (RV) and transit. These were very briefly explained to us. When watching a presentation by Dr. Bender, it is stated that the RV technique was the most prominent technique up to the late 00's, but at around 2010 there was a "Wide-Field Precision Photometry Revolution", which was characterized by its fainter limiting magnitude. Is the transit technique categorised as one of the techniques in this wide-field precision photometry revolution? Answer: Yes. By “precision photometry” we mean “measuring just exactly how bright this star is.” The transit technique looks for variations in the brightness of the star. As your photometry becomes more precise, you are able to distinguish smaller variations in stellar brightness, which allows you to detect transits by smaller planets, more rapid transits, or both. The “wide field” part is because, rather than concentrating on a single star, the Kepler mission (and follow-ups) was able to rapidly and precisely measure the brightness of every star in its field of view. The transit technique has a fainter limiting magnitude than spectroscopic techniques because you need less light to say “there is some light here” than you do to analyze that light’s spectrum and identify a periodic Doppler shift.
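The precision needed scales with the transit depth, which to first order is just the ratio of the projected disk areas, $\delta \approx (R_p/R_\star)^2$. A quick illustration (my numbers, using rounded radii, not from the talk):

```python
def transit_depth(r_planet_km, r_star_km):
    """Fractional dip in stellar flux during a central transit,
    to first order: the ratio of the projected disk areas."""
    return (r_planet_km / r_star_km) ** 2

R_SUN = 695_700                                 # km, rounded
depth_jupiter = transit_depth(69_911, R_SUN)    # ~1%: detectable from the ground
depth_earth = transit_depth(6_371, R_SUN)       # ~0.008%: needs Kepler-class photometry
```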
{ "domain": "physics.stackexchange", "id": 87278, "tags": "astrophysics, astronomy, exoplanets, transit" }
Can we create a W state of n qubits with constant circuit depth using mid circuit measurements?
Question: I saw this post where the GHZ state with many qubits (here 6) can be made by performing some mid-circuit measurements in constant circuit depth: Depth circuit optimization for 6-qubits GHZ state I was wondering if mid-circuit measurements could help with creating a $W$ state in constant circuit depth. I'd like to restrict it to quantum computers that have limited connectivity, like a chain of qubits or some tiling layout like IBM quantum computers. To give an example, for $n=4$ up to normalization, we have: $$|W_4\rangle = |1000\rangle + |0100\rangle + |0010\rangle + |0001\rangle \,.$$ Answer: Section 4.2 of this paper seems to answer your question positively. Section 4.3 of the same paper also shows how to do this for Dicke-states, the generalization of W-states.
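For reference, the target state itself is easy to write down classically. The sketch below (plain Python, my own illustration — it builds the amplitude vector, and says nothing about circuit depth) constructs $|W_n\rangle$ over the $2^n$ computational basis states:

```python
import math

def w_state(n):
    """Amplitude vector of the n-qubit W state (length 2**n).

    The nonzero entries sit at indices with exactly one bit set:
    1, 2, 4, ..., 2**(n-1), each with amplitude 1/sqrt(n).
    """
    psi = [0.0] * (2 ** n)
    for k in range(n):
        psi[1 << k] = 1.0 / math.sqrt(n)
    return psi
```

For $n=4$ this reproduces the state in the question: equal amplitudes on $|0001\rangle, |0010\rangle, |0100\rangle, |1000\rangle$.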
{ "domain": "quantumcomputing.stackexchange", "id": 5209, "tags": "quantum-circuit, state-preparation" }
What would be the correct terminology for soft cover on crust?
Question: Earth's crust is usually not exposed. The above-water portion is covered with soil, and underwater portions are covered with various deposits or sediments. Oceans contain pelagic sediments (I don't know whether this underwater sediment is considered soil or not). The soft cover is probably not considered part of the crust. Some sources call it the pedosphere (soil-sphere), but with pictures of only terrestrial soil. So it is not clear to me: exactly how much of this is called the pedosphere? Is it the whole skin of Earth (including underwater deposits)? If not, then what is the correct term for the complete soft layer on Earth? And is there any term for the part of the skin of Earth which is beneath water? Answer: What is the correct term for the complete soft layer on Earth? Sediments. Is there any term for the part of the skin of Earth which is beneath water? Marine sediments. The soft cover is probably not considered part of the crust — It is definitely part of the crust. When discussing mantle-crust issues, then crust is usually in the context of crystalline rocks, but it depends on context. But it is not clear to me: exactly how much of this is called the pedosphere? Is it the whole skin of Earth (including underwater deposits)? Pedosphere refers to terrestrial soil and sediments, and it is usually discussed in context with other "spheres" such as the atmosphere and lithosphere. Sediments (and marine sediments) is a neutral term. As always, in Earth Sciences, there are several terms for the same thing. There is no "correct" term, just "better" when discussing said thing in various circles.
{ "domain": "earthscience.stackexchange", "id": 857, "tags": "soil, crust" }
Efficient pathfinding without heuristics?
Question: Part of my program is a variable-sized set of Star Systems randomly linked by Warp Points. I have an A* algorithm working rather well in the grid, but the random warp point links mean that even though the systems have X,Y coordinates for where they're located on a galactic map, a system at 2,3 isn't always linked directly to a system at 2,4 and so the shortest path may actually lead away from the target before it heads back towards it. I think this limitation eliminates A* since there's almost no way to get a good heuristic figured out. What I've done instead is a recursive node search (I believe this specific pattern is a Depth-First Search), and while it gets the job done, it also evaluates every possible path in the entire network of systems and warp points, so I'm worried it will run very slowly on larger sets of systems. My test data is 11 systems with 1-4 warp points each, and it averages over 700 node recursions for any non-adjacent path. My knowledge of search/pathfinding algorithms is limited, but surely there's a way to not search every single node without needing to calculate a heuristic, or at least is there a heuristic here I'm not seeing? 
Here's my code so far: private int getNextSystem(StarSystem currentSystem, StarSystem targetSystem, List<StarSystem> pathVisited) { // If we're in the target system, stop recursion and // start counting backwards for comparison to other paths if (currentSystem == targetSystem) return 0; // Arbitrary number higher than maximum count of StarSystems int countOfJumps = 99; StarSystem bestSystem = currentSystem; foreach (StarSystem system in currentSystem.GetConnectedStarSystems() .Where(f=>!pathVisited.Contains(f))) { // I re-create the path list for each node-path so // that it doesn't modify the source list by reference // and mess up other node-paths List<StarSystem> newPath = new List<StarSystem>(); foreach (StarSystem s in pathVisited) newPath.Add(s); newPath.Add(system); // recursive call until current == target int jumps = getNextSystem(system, targetSystem, newPath); // changes only if this is better than previously found if (jumps < countOfJumps) { countOfJumps = jumps; bestSystem = system; } } // returns 100 if current path is a dead-end return countOfJumps + 1; } Answer: Complete and Incomplete Algorithms Search algorithms can be classed into two categories: complete and incomplete. A complete algorithm will always succeed in finding what your searching for. And not surprisingly an incomplete algorithm may not always find your target node. For arbitrary connected graphs, without any a priori knowledge of the graph topology a complete algorithm may be forced to visit all nodes. But may find your sought for node before visiting all nodes. A* is a complete best-first method with a heuristic to try to avoid searching unlikely parts of the graph unless absolutely necessary. So unfortunately you can not guarantee that you will never visit all nodes whatever algorithm you choose. But you can reduce the likelihood of that happening. 
Without pre-processing If you cannot consider pre-processing your graph then you're stuck with a myriad of on-line algorithms such as depth-first, breadth-first, A* and greedy-best-first. Out of the bunch I'd bet on A* in most cases if the heuristic is even half good and the graphs are non-trivial. If you expect all routes to be short, a breadth-first algorithm with cycle detection and duplicate node removal may outperform A* with a poor heuristic. I wouldn't bet on it though, you need to evaluate. With pre-processing In your case I'd see if I could pre-process the graph, even if you need to repeatedly re-do the pre-processing when the graph changes as long as you do sufficiently many searches between pre-processing it would be worth it. You should look up Floyd-Warshall (or some derivative) and calculate the pairwise cost/distance/jumps between all nodes and use this table as a heuristic for your A*. This heuristic will not only be admissible, it will be exact and your A* search will complete in no-time. Unless you of course modify the algorithm to store all pairwise routes as they are detected in a matrix of vectors, then you have O(1) run time at the cost of memory.
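Since every warp jump costs the same, the breadth-first option amounts to a few lines. A sketch (in Python rather than the question's C#; the names are mine) that returns the minimum jump count while visiting each system at most once:

```python
from collections import deque

def fewest_jumps(start, target, neighbours):
    """Breadth-first search over an unweighted warp-point graph.

    `neighbours` maps each system to the systems it links to.  BFS expands
    systems in order of distance and marks each as seen exactly once, so
    the first time `target` is dequeued we have the minimum number of
    jumps.  Returns None if `target` is unreachable.
    """
    seen = {start}
    queue = deque([(start, 0)])
    while queue:
        system, jumps = queue.popleft()
        if system == target:
            return jumps
        for nxt in neighbours[system]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, jumps + 1))
    return None
```

Unlike the recursive depth-first version, this never re-explores a system, so the work is linear in the number of systems plus warp links.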
{ "domain": "codereview.stackexchange", "id": 6630, "tags": "c#, recursion, search, pathfinding" }
How to separate paracetamol and caffeine?
Question: How do you separate paracetamol and caffeine using high-school laboratory equipment? I was thinking about using the fact that paracetamol has a lower melting point (169 °C) compared to caffeine (around 235 °C): I would heat them up till paracetamol melts while caffeine is still solid. What I can't figure out is how you filter the caffeine from the liquid paracetamol (filter paper would burn at such a high temperature). I would like to know how you do the filtration at high temperature; or, if you have a better method, please let me know! Answer: The answer that my teachers had in mind was column chromatography. Essentially we use the van der Waals forces of silica gel to do an elution of the caffeine / paracetamol solution dissolved in water. More details can be found here: http://www.chemguide.co.uk/analysis/chromatography/column.html
{ "domain": "chemistry.stackexchange", "id": 8853, "tags": "melting-point, mixtures" }
Need advice on URDF axis rotations
Question: Although I am running Ubuntu 14.04 LTS and testing using ROS-Fuerte, I believe that this question is independent of these facts. Furthermore, I am only using Fuerte because there are working examples that I can use. My plan is to port things to Indigo as soon as I have something that seems correct. I am trying to build a URDF definition for a hexapod robot using xacro. I have the coordinate system arranged such that the two center side legs are aligned along the X-axis. The other legs are at 40 or 50 degrees off-axis. Since all legs are identical I planned on using a single macro to define a leg and expand it as needed to create them all. Each leg has 3 joints. The first, where the leg attaches to the body, rotates parallel to the Z-axis. The other two joint axes always remain parallel to the body's X-Y plane. I was originally thinking that I would use the X-axis as the length dimension, because that is most natural for the center legs and it looks similar to the tutorial. However, I am now realizing that with zero rotation the left leg extends in the negative X direction whereas the right runs positive. Either way I need to rotate the other four legs so that the length runs parallel to one axis. The choice of the rotation will then define which axis the other two joints rotate about. In general what I am wondering is what is the best way to arrange the rotations when defining the joints? Is there any reason that I should choose X versus Y axis to run parallel to the leg length definition? Originally posted by flb on ROS Answers with karma: 30 on 2015-01-01 Post score: 1 Original comments Comment by David Lu on 2015-01-03: Can you update this with pictures? Answer: Actually, as it turns out I decided that aligning the Y-axis with the legs would be best because using the X-axis would require one of them to use a 180 degree rotation. Aligning on Y does not require any rotations that large. 
Originally posted by flb with karma: 30 on 2015-01-03 This answer was ACCEPTED on the original site Post score: 0 Original comments Comment by David Lu on 2015-01-06: Are you all set on this question then? Comment by flb on 2015-01-06: Yes, thanks
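A minimal xacro fragment in the spirit of that choice might look like the following — all names, numbers, and limits here are hypothetical placeholders for illustration, not the poster's actual file:

```xml
<!-- Hypothetical leg-mount sketch: the hip joint rotates about body Z,
     and mount_angle places each of the six legs around the body so that
     the leg's length can run along its local Y axis with no 180-degree
     flip for the opposite side. -->
<xacro:macro name="leg" params="name mount_angle">
  <joint name="${name}_hip" type="revolute">
    <parent link="body"/>
    <child link="${name}_coxa"/>
    <origin xyz="0 0 0" rpy="0 0 ${mount_angle}"/>
    <axis xyz="0 0 1"/>
    <limit lower="-1.0" upper="1.0" effort="10" velocity="1.0"/>
  </joint>
</xacro:macro>
```

Expanding the macro six times with mount angles of 0, ±40° (or ±50°), and so on then reuses one leg definition for all six legs, which was the goal of the question.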
{ "domain": "robotics.stackexchange", "id": 20465, "tags": "urdf" }
Any problem solved by a finite automaton is in P
Question: After my Theory of Computation class today this question popped into my mind: If a problem can be solved by a finite automaton, does this problem belong to P? I think it's true, since automata recognize very simple languages, so all these languages should have polynomial-time algorithms to solve them. Thus, is it true that any problem solved by a finite automaton is in P? Answer: Yes, it is true. In terms of complexity classes, $$ \text{REG} \subseteq \text{P}, $$ where $\text{REG}$ is the class of regular languages (i.e., problems that can be solved by a finite automaton). More specifically, $$ \text{REG} \subseteq \text{DTIME}(n), \tag{*} $$ and $\text{DTIME}(n)$ is a strict subset of $\text{P}$ by the time hierarchy theorem. The proof of (*) is as follows: for any problem in $\text{REG}$, there is a DFA which solves it. Convert that DFA to a Turing machine with the same states and transition function, which always moves to the right until it sees a blank, and then accepts or rejects. This Turing machine always halts in time exactly $n$. It's also worth mentioning that $$ \text{REG} = \text{DSPACE}(0) = \text{DSPACE}(k)$$ for any fixed constant $k$.
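The linear-time claim is easy to see in code. A small sketch of my own (not from the answer) that simulates a DFA in a single left-to-right pass — $O(n)$ time and constant extra space:

```python
def dfa_accepts(delta, start, accepting, word):
    """Run a DFA over `word` in one pass: exactly len(word) transitions."""
    state = start
    for symbol in word:
        state = delta[(state, symbol)]
    return state in accepting

# Example: binary strings with an even number of 1s (a two-state DFA).
EVEN_ONES = {('even', '0'): 'even', ('even', '1'): 'odd',
             ('odd', '0'): 'odd',  ('odd', '1'): 'even'}
```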
{ "domain": "cs.stackexchange", "id": 8734, "tags": "complexity-theory, automata, finite-automata, polynomial-time" }
Can Schwarzschild black holes evaporate?
Question: I recently saw this question, and came across a claim from Anixx that a Schwarzschild black hole cannot evaporate because it is static: @HDE 226868 Schwartzshield solution is a static one, which does not change with time. Yet here there exists a detailed derivation of how a Schwarzschild black hole can emit Hawking radiation. Is Wikipedia right? Can Schwarzschild black holes evaporate? Answer: I don't think the answer is too exciting. The Schwarzschild solution is a static solution to the Einstein field equations. The Einstein field equations alone don't take into account quantum effects. Taking quantum effects into account will give you a modification of the solution, and the result that the Schwarzschild 'solution' is no longer static (and so could hardly be called a 'solution' any more).
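For scale, the semiclassical correction is tiny for astrophysical masses. The standard Hawking temperature $T_H = \hbar c^3/(8\pi G M k_B)$ (a textbook formula, not derived in the answer above) gives, in a rough Python check:

```python
import math

# Rounded SI constants
hbar = 1.054571817e-34   # reduced Planck constant, J*s
c = 2.99792458e8         # speed of light, m/s
G = 6.67430e-11          # gravitational constant, m^3 kg^-1 s^-2
k_B = 1.380649e-23       # Boltzmann constant, J/K
M_SUN = 1.989e30         # solar mass, kg

def hawking_temperature(mass_kg):
    """Semiclassical black-hole temperature: T = hbar c^3 / (8 pi G M k_B)."""
    return hbar * c**3 / (8 * math.pi * G * mass_kg * k_B)

T = hawking_temperature(M_SUN)   # ~6e-8 K: far colder than the CMB
```

A solar-mass hole is therefore colder than the cosmic microwave background and currently absorbs more than it radiates; the departure from the static Schwarzschild solution is, for now, utterly negligible for such objects.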
{ "domain": "physics.stackexchange", "id": 18156, "tags": "black-holes, hawking-radiation" }
Filling A Linked List With Data From File And Handling User Status Messages
Question: I have completed a university assignment in C. While the code is fully functional based on the specifications of the exercise, I like high-quality code and would like to ask for opinions on how certain parts of it can be improved. Be advised that I cannot change any function signature. The first part is the below load() function whose job is to create and fill a linked list based on entries of a file. If the specified file doesn't exist then a new file must be created. Unfortunately, due to the structure of the signature, I am forced to fclose() the file (opening / closing files are expensive operations), as I don't have any way to store the file pointer. I don't quite like the double fopen() call there. Also, while I've found a way to detect when a file is blank or when it has the wrong format, I still have the feeling that there are cases where undefined behavior can occur. studentList* load(char *filename) { studentList *list = newList(); FILE *f = fopen(filename, "r"); if (f == NULL) { f = fopen(filename, "w"); fclose(f); printf("File %s not found. A new blank file has been created.\n",filename); return list; } fseek(f, 0, SEEK_END); if (ftell(f) != 0) { rewind(f); while (!feof(f)) { student st; int res = fscanf(f, "%d%s", &st.id, st.name); if (res == 2) add(list, st); } fflush(f); fclose(f); } printf("File %s loaded successfully.\n",filename); return list; } The second part is more generic and has to do with the way status messages are printed to the console for the user. Excluding load(), where I don't have any other option due to the nature of the signature, the other functions that provide basic functionality of a linked list, like add(), delete(), find(), etc., don't contain any printf() or puts(), as I consider that method a dirty code style and, besides that, I want to have full control of when a status message is printed.
Instead, these functions return an int status code, and another function then takes that code and prints the corresponding status message to the console. I have not found a better method to achieve this result. Any ideas on this part would be highly appreciated. Example of my code: int res = add(list, st); printAddStudentResult(res); Answer: The first thing that strikes me is that you're writing error and status messages to the standard output stream, when usually these go to stderr, the standard error stream. I'm not sure it makes much sense to attempt to create an empty file if the file opening fails - why not leave this until it's time to save? What if the reason it failed is because the file doesn't have read permission for the process? And how do you know whether creating a new file succeeded or not? I'd argue that it's better to return a null pointer if the file opening fails, so that client code can distinguish this from a successful load of an empty list: if (f == NULL) { /* NULL return signifies nothing could be read */ return NULL; } studentList *list = newList(); I'm assuming newList() is an allocation function that returns a null pointer if the allocation fails; that means we shouldn't attempt to use it until we've checked it's a valid pointer. while (!feof(f)) is a common anti-pattern; what we want to do instead is more like while (fscanf(...) == 2) - and that removes the need to measure the file size before reading the contents. This line is clearly wrong fflush(f); f is an input stream, so fflush(f) is Undefined Behaviour. Just remove this call.
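The status-code pattern in the question maps naturally onto one lookup table plus a single reporting function. A sketch of that separation (shown in Python for brevity; the codes and messages are invented placeholders), with messages going to stderr as the review suggests:

```python
import sys

# Core operations return one of these codes; only report() ever prints.
OK, DUPLICATE_ID, NOT_FOUND = 0, 1, 2

MESSAGES = {
    OK: "Student added successfully.",
    DUPLICATE_ID: "A student with that ID already exists.",
    NOT_FOUND: "No student with that ID was found.",
}

def report(status, out=sys.stderr):
    """Translate a status code into a user-facing message on stderr."""
    print(MESSAGES.get(status, "Unknown status."), file=out)
```

Keeping all message text in one table means adding a new status touches one place, and the core list operations stay free of I/O — the same goal the asker's printAddStudentResult() achieves in C.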
{ "domain": "codereview.stackexchange", "id": 34912, "tags": "c, file, console, user-interface" }
Why so many methods of computing PSD?
Question: Welch's method has been my go-to algorithm for computing power spectral density (PSD) of evenly-sampled timeseries. I noticed that there are many other methods for computing PSD. For example, in Matlab I see: PSD using Burg method PSD using covariance method PSD using periodogram PSD using modified covariance method PSD using multitaper method (MTM) PSD using Welch's method PSD using Yule-Walker AR method Spectrogram using short-time Fourier transform Spectral estimation What are the advantages of these various methods? As a practical question, when would I want to use something other than Welch's method? Answer: I have no familiarity with the Multitaper method. That said, you've asked quite a question. In pursuit of my MSEE degree, I took an entire course that covered PSD estimation. The course covered all of what you listed (with the exception of the Multitaper method), and also subspace methods. Even this only covers some of the main ideas, and there are many methods stemming from these concepts. For starters, there are two main methods of power spectral density estimation: non-parametric and parametric. Non-parametric methods are used when little is known about the signal ahead of time. They typically have less computational complexity than parametric models. Methods in this group are further divided into two categories: periodograms and correlograms. Periodograms are also sometimes referred to as direct methods, as they result in a direct transformation of the data. These include the sample spectrum, Bartlett's method, Welch's method, and the Daniell Periodogram. Correlograms are sometimes referred to as indirect methods, as they exploit the Wiener-Khinchin theorem. Therefore these methods are based on taking the Fourier transform of some sort of estimate of the autocorrelation sequence. Because of the high variance associated with higher-order lags (due to the small number of data samples used in those correlations), windowing is used.
The Blackman-Tukey method generalizes the correlogram methods. Parametric methods typically assume some sort of signal model prior to calculation of the PSD estimate. Therefore, it is assumed that some knowledge of the signal is known ahead of time. There are two main parametric method categories: autoregressive methods and subspace methods. Autoregressive methods assume that the signal can be modeled as the output of an autoregressive filter (such as an IIR filter) driven by a white noise sequence. Therefore all of these methods attempt to solve for the IIR coefficients, whereby the resulting power spectral density is easily calculated. The model order (or number of taps), however, must be determined. If the model order is too small, the spectrum will be highly smoothed, and lack resolution. If the model order is too high, false peaks from an abundance of poles begin to appear. If the signal may be modeled by an AR process of order 'p', then the output of the filter of order >= p driven by the signal will produce white noise. There are hundreds of metrics for model order selection. Note that these methods are excellent for high-to-moderate SNR, narrowband signals. The former is because the model breaks down in significant noise, and is better modeled as an ARMA process. The latter is due to the impulsive nature of the resulting spectrum from the poles in the Fourier transform of the resulting model. AR methods are based on linear prediction, which is what's used to extrapolate the signal outside of its known values. As a result, they do not suffer from sidelobes and require no windowing. Subspace methods decompose the signal into a signal subspace and noise subspace. Exploiting orthogonality between the two subspaces allows a pseudospectrum to be formed where large peaks at narrowband components can appear. These methods work very well in low SNR environments, but are computationally very expensive.
They can be grouped into two categories: noise subspace methods and signal subspace methods. Both categories can be utilized in one of two ways: eigenvalue decomposition of the autocorrelation matrix or singular value decomposition of the data matrix. Noise subspace methods attempt to solve for 1 or more of the noise subspace eigenvectors. Then, the orthogonality between the noise subspace and the signal subspace produces zeros in the denominator of the resulting spectrum estimates, resulting in large values or spikes at true signal components. The number of discrete sinusoids, or the rank of the signal subspace, must be determined/estimated, or known ahead of time. Signal subspace methods attempt to discard the noise subspace prior to spectral estimation, improving the SNR. A reduced rank autocorrelation matrix is formed with only the eigenvectors determined to belong to the signal subspace (again, a model order problem), and the reduced rank matrix is used in any one of the other methods. Now, I'll try to quickly cover your list: PSD using Burg method: The Burg method leverages the Levinson recursion slightly differently than the Yule-Walker method, in that it estimates the reflection coefficients by minimizing the average of the forward and backward linear prediction error. This results in a harmonic mean of the partial correlation coefficients of the forward and backward linear prediction error. It produces very high resolution estimates, like all autoregressive methods, because it uses linear prediction to extrapolate the signal outside of its known data record. This effectively removes all sidelobe phenomena. It is superior to the YW method for short data records, and also removes the tradeoff between utilizing the biased and unbiased autocorrelation estimates, as the weighting factors divide out. One disadvantage is that it can exhibit spectral line splitting. In addition, it suffers from the same problems all AR methods have. 
That is, low to moderate SNR severely degrades the performance, as it is no longer properly modeled by an AR process, but rather an ARMA process. ARMA methods are rarely used as they generally result in a nonlinear set of equations with respect to the moving average parameters. PSD using covariance method: The covariance method is a special case of the least-squares method, whereby the windowed portion of the linear prediction errors is discarded. This has superior performance to the Burg method, but unlike the YW method, the matrix inverse to be solved for is not Hermitian Toeplitz in general, but rather the product of two Toeplitz matrices. Therefore, the Levinson recursion cannot be used to solve for the coefficients. In addition, the filter generated by this method is not guaranteed to be stable. However, for spectral estimation this is a good thing, resulting in very large peaks for sinusoidal content. PSD using periodogram: This is one of the worst estimators, and is a special case of Welch's method with a single segment, rectangular or triangular windowing (depending on which autocorrelation estimate is used, biased or unbiased), and no overlap. However, it's one of the "cheapest" computationally speaking. The resulting variance can be quite high. PSD using modified covariance method: This improves on both the covariance method and the Burg method. It can be compared to the Burg method: whereas the Burg method only minimizes the average forward/backward linear prediction error with respect to the reflection coefficient, the MC method minimizes it with respect to ALL of the AR coefficients. In addition, it does not suffer from spectral line splitting, and provides much less distortion than the previously listed methods. In addition, while it does not guarantee a stable IIR filter, its lattice filter realization is stable. It is more computationally demanding than the other two methods as well.
PSD using Welch's method: Welch's method improves upon the periodogram by addressing the lack of the ensemble averaging which is present in the true PSD formula. It generalizes Bartlett's method by using overlap and windowing to provide more PSD "samples" for the pseudo-ensemble average. It can be a cheap, effective method depending on the application. However, if you have a situation with closely spaced sinusoids, AR methods may be better suited. However, it does not require estimating the model order like AR methods, so if little is known about your spectrum a priori, it can be an excellent starting point. PSD using Yule-Walker AR method: This is a special case of the least squares method where the complete error residuals are utilized. This results in diminished performance compared to the covariance methods, but may be efficiently solved using the Levinson recursion. It's also known as the autocorrelation method. Spectrogram using short-time Fourier transform: Now you're crossing into a different domain. This is used for time-varying spectra. That is, one whose spectrum changes with time. This opens up a whole other can of worms, and there are just as many methods as you have listed for time-frequency analysis. This is certainly the cheapest, which is why it's so frequently used. Spectral estimation: This is not a method, but a blanket term for the rest of your post. Sometimes the Periodogram is referred to as the "sample spectrum" or the "Schuster Periodogram", the former of which may be what you're referring to. If you are interested, you may also look into subspace methods such as MUSIC and Pisarenko Harmonic Decomposition. These decompose the signal into signal and noise subspace, and exploit the orthogonality between the noise subspace and the signal subspace eigenvectors to produce a pseudospectrum.
Much like the AR methods, you may not get a "true" PSD estimate, in that power most likely is not conserved, and the amplitudes between spectral components is relative. However, it all depends on your application.
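To make the periodogram-versus-Welch tradeoff concrete, here is a minimal Welch estimator written from scratch (a sketch of mine using NumPy, not a reference implementation — in practice you would reach for scipy.signal.welch): Hann-windowed, 50%-overlapping segments whose periodograms are averaged, trading frequency resolution for reduced variance.

```python
import numpy as np

def welch_psd(x, fs, nperseg=256, overlap=0.5):
    """Minimal Welch PSD estimate: the average of windowed, overlapping
    segment periodograms.  Resolution is roughly fs/nperseg; variance
    falls as the number of (quasi-independent) segments grows."""
    win = np.hanning(nperseg)
    step = max(1, int(nperseg * (1 - overlap)))
    scale = fs * np.sum(win ** 2)          # window-power normalisation
    periodograms = [
        np.abs(np.fft.rfft(x[i:i + nperseg] * win)) ** 2 / scale
        for i in range(0, len(x) - nperseg + 1, step)
    ]
    freqs = np.fft.rfftfreq(nperseg, 1.0 / fs)
    return freqs, np.mean(periodograms, axis=0)
```

Setting nperseg equal to the record length with no overlap recovers a plain (windowed) periodogram — a single noisy "sample" of the spectrum instead of an average.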
{ "domain": "dsp.stackexchange", "id": 501, "tags": "frequency-spectrum, power-spectral-density" }
Calculating Riemann Tensor Using Tetrad Formalism
Question: I was trying to calculate the Riemann Tensor for a spherically symmetric metric: $ds^2=e^{2a(r)}dt^2-[e^{2b(r)}dr^2+r^2d\Omega^2]$ I chose the to use the tetrad basis: $u^t=e^{a(r)}dt;\, u^r=e^{b(r)}dr;\, u^\phi=r\sin\theta d\phi; \, u^\theta=rd\theta$ Using the torsion free condition with the spin connection $\omega^a{}_{b}\wedge u^b=-du^a$ I was able to find the non-zero spin connections. In class my teacher presented the formula: $\Omega^i{}_{j}=d\omega^i{}_j+\omega^i{}_k\wedge \omega^k{}_j=\frac{1}{2}R^i{}_j{}_k{}_l\,u^k\wedge u^l$ But this can't be right since I calculate with this: $\Omega^t{}_\phi=-\frac{1}{r}a_r \,e^{-2b}u^t\wedge u^\phi \implies R^t{}_\phi{}_\phi{}_t=-\frac{1}{r}a'e^{-2b}$ The real answer involves a factor of $e^a$, $\sin \theta$ and no $\frac{1}{r}$ term. Any help is appreciated. Edit: Here is some of my work: $du^t=-a_re^{-b}u^t\wedge u^r$ $du^\phi=-\frac{1}{r}[e^{-b}u^\phi\wedge u^r+\cot \theta u^\phi\wedge u^\theta]$ (I will not show the calculations for the rest) From no-torsion equation, we get 2 out of 4 spin connections (the rest require the two missing exterior derivatives that I have not shown in this post): $\omega^t{}_r=a_re^{-b}u^t$ $\omega^\phi{}_r=\frac{1}{r}e^{-b}u^\phi$ Then $\Omega^t{}_\phi$ is as shown above. Explicitly: $\Omega^t{}_\phi=d\omega^t{}_\phi+\omega^t{}_r\wedge \omega^r{}_\phi+\omega^t{}_\theta\wedge \omega^\theta{}_\phi$ where the first and last terms are 0 since $\omega^t{}_\phi$ and $\omega^t_\theta$ are 0. Answer: Turns out Michael Brown was right after all. The calculations are correct for the curvature terms in the tetrad basis.
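The factors the asker expected are exactly the tetrad legs. Converting tetrad components to coordinate components is a standard identity, sketched here in the question's notation (with $u^a = e^a{}_\mu\,dx^\mu$; my own addition, not part of the original answer):

```latex
R^{\mu}{}_{\nu\rho\sigma}
  \;=\; e_{a}{}^{\mu}\, e^{b}{}_{\nu}\, e^{c}{}_{\rho}\, e^{d}{}_{\sigma}\,
        R^{a}{}_{bcd},
\qquad
e^{t}{}_{t} = e^{a(r)}, \quad
e^{\phi}{}_{\phi} = r\sin\theta .
```

So the factors of $e^{a}$ and $r\sin\theta$ (and the cancellation of the $1/r$) appear only on passing to the coordinate basis; the tetrad-basis curvature computed in the question is already correct.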
{ "domain": "physics.stackexchange", "id": 6809, "tags": "homework-and-exercises, general-relativity, gravity, mathematical-physics" }
How does window length influence cut-off frequency of filter and group delay?
Question: I'm attempting to interpret the following function. To my understanding, this aims to implement an FIR low-pass filter with a Nuttall window. Then, it filters the signal by a simple application of the convolution equivalence theorem. static void GetFilteredSignal(int half_average_length, int fft_size, const fft_complex *y_spectrum, int y_length, double *filtered_signal) { double *low_pass_filter = new double[fft_size]; // Nuttall window is used as a low-pass filter. // Cutoff frequency depends on the window length. NuttallWindow(half_average_length * 4, low_pass_filter); for (int i = half_average_length * 4; i < fft_size; ++i) low_pass_filter[i] = 0.0; fft_complex *low_pass_filter_spectrum = new fft_complex[fft_size]; fft_plan forwardFFT = fft_plan_dft_r2c_1d(fft_size, low_pass_filter, low_pass_filter_spectrum, FFT_ESTIMATE); fft_execute(forwardFFT); // Convolution double tmp = y_spectrum[0][0] * low_pass_filter_spectrum[0][0] - y_spectrum[0][1] * low_pass_filter_spectrum[0][1]; low_pass_filter_spectrum[0][1] = y_spectrum[0][0] * low_pass_filter_spectrum[0][1] + y_spectrum[0][1] * low_pass_filter_spectrum[0][0]; low_pass_filter_spectrum[0][0] = tmp; for (int i = 1; i <= fft_size / 2; ++i) { tmp = y_spectrum[i][0] * low_pass_filter_spectrum[i][0] - y_spectrum[i][1] * low_pass_filter_spectrum[i][1]; low_pass_filter_spectrum[i][1] = y_spectrum[i][0] * low_pass_filter_spectrum[i][1] + y_spectrum[i][1] * low_pass_filter_spectrum[i][0]; low_pass_filter_spectrum[i][0] = tmp; low_pass_filter_spectrum[fft_size - i - 1][0] = low_pass_filter_spectrum[i][0]; low_pass_filter_spectrum[fft_size - i - 1][1] = low_pass_filter_spectrum[i][1]; } fft_plan inverseFFT = fft_plan_dft_c2r_1d(fft_size, low_pass_filter_spectrum, filtered_signal, FFT_ESTIMATE); fft_execute(inverseFFT); // Compensation of the delay. 
int index_bias = half_average_length * 2; for (int i = 0; i < y_length; ++i) filtered_signal[i] = filtered_signal[i + index_bias]; fft_destroy_plan(inverseFFT); fft_destroy_plan(forwardFFT); delete[] low_pass_filter_spectrum; delete[] low_pass_filter; } There are two major points that I don't understand in this algorithm: Half average length. What is it? It is used to calculate the filter order and the comment says it influences the cutoff frequency. I realise that the window length limits resolution when applying it to a signal. My confusion is that all DSP guides I look up usually assume some filter order N as a design parameter, rather than selecting it based on a desired cut-off frequency. Is there a relationship between N and the cutoff frequency in this case, or in general? Compensation of the delay. I'm not familiar with the details of filter design, but after reading up, I found that it is essential to shift a time-domain signal after FIR filtering. Filtering usually has frequency-dependent group delay, but FIR filters have the advantage of a constant group delay being $ \frac{N-1}{2} $, where N is the number of taps or order of the filter. Assuming $ \text{half_average_length} \cdot 4 = N $, it follows that $ \text{index_bias} = (\text{half_average_length} \cdot 4 - 1) /2 $ . I realise this wouldn't make sense as it's not a whole number, but then there is something wrong with my interpretation of $ \frac{N-1}{2} $. What is wrong with my interpretation of this formula? Answer: Half average length. What is it? Primarily something the author made up. They design a lowpass more or less as a weighted moving average filter. They simply use the window itself as the impulse response of the filter. That's NOT the same as designing an ideal low pass filter at the cutoff frequency you want and then windowing it down to the filter order. 
This seems to work reasonably well: you get the frequency response of a Nuttall window (which isn't bad), and the cut-off frequency is roughly the sample rate divided by the window length. If your sample rate is 44.1 kHz and your window length is, say, N=32, the cut-off will be around 1380 Hz or thereabouts. Compensation of the delay. .... What is wrong with my interpretation of this formula? Your interpretation is correct. If the window length is even, you can't correct for the group delay by simply shifting samples. The code only "approximately" compensates for group delay and is off by half a sample. There are a few more things wrong with the code: if someone calls it with half_average_length > fft_size/4, bad things will happen. Multiplication in the frequency domain is circular convolution (not linear convolution) in the time domain, so the input must be properly zero-padded before the FFT to avoid time-domain aliasing.
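The circular-convolution point is easy to demonstrate numerically. A minimal Python sketch (the signals are illustrative, not the data from the question); multiplying two length-N FFTs and inverse-transforming gives exactly the circular result shown here:

```python
# Circular vs. linear convolution, illustrating the zero-padding point above.
def linear_conv(x, h):
    y = [0.0] * (len(x) + len(h) - 1)
    for i, xi in enumerate(x):
        for j, hj in enumerate(h):
            y[i + j] += xi * hj
    return y

def circular_conv(x, h):
    # This is what multiplying two length-N FFTs and inverse-transforming yields.
    N = len(x)
    return [sum(x[(n - j) % N] * h[j] for j in range(N)) for n in range(N)]

x = [1.0, 2.0, 3.0, 4.0]
h = [1.0, 1.0, 0.0, 0.0]
print(linear_conv(x, h))          # [1.0, 3.0, 5.0, 7.0, 4.0, 0.0, 0.0]
print(circular_conv(x, h))        # [5.0, 3.0, 5.0, 7.0] -- first sample wraps around
# Zero-padding both to at least len(x) + len(h) - 1 removes the wrap-around:
xp, hp = x + [0.0] * 4, h + [0.0] * 4
print(circular_conv(xp, hp)[:7])  # matches the linear result
```

The aliased first sample (5 instead of 1) is exactly the tail of the linear convolution wrapping back to the start, which is what the answer means by time-domain aliasing.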
{ "domain": "dsp.stackexchange", "id": 9241, "tags": "filters, filter-design, lowpass-filter, c++" }
Why do people claim electrons are accelerating
Question: A lot of textbooks mention that one of the reasons that classical mechanics failed to explain atomic and subatomic processes is that electrons which accelerate should release energy in the form of electromagnetic radiation, which would lower the atom's overall energy level, but this does not happen. One place where I discovered this, for example, is in the description for the Bohr model. What I don't understand is why everyone takes for granted the fact that the electron is accelerating. I thought the electron orbits the nucleus at a, more or less, constant velocity. Are people referring to specific situations when the atom is excited? Furthermore, I was under the impression that electrons already travel at the fastest allowable speed, the speed of light. Answer: On a uniform circular orbit, even if the speed does not change in norm, it does change in direction, so that the velocity vector changes over time and $\frac{d\vec{v}}{dt}\neq\vec{0}$. In fact, in polar coordinates, you have $$\vec{a} = \frac{d\vec{v}}{dt} = -\frac{v^2}{R}\,\vec{e_r}$$ Imagine a car taking a turn at constant speed: if the turn is left, you feel like you're pulled to the right because the car itself is exerting a force on your lower parts to make you turn left rather than continue in a straight line. Thus you effectively experience an acceleration with respect to the ground even if your speed stays constant in norm.
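To put a number on $v^2/R$ for the hydrogen atom, here is a short Python sketch. The speed and radius are the usual textbook Bohr-model figures, taken as assumptions here rather than derived in this thread:

```python
# Centripetal acceleration a = v^2 / R for a Bohr-model electron.
v = 2.19e6    # m/s, assumed Bohr-model orbital speed of the electron
r = 5.29e-11  # m, assumed Bohr radius

a = v**2 / r
print(f"a = {a:.2e} m/s^2")  # roughly 9e22 m/s^2 -- an enormous acceleration
```

Even with constant speed, the direction change alone implies an acceleration some 21 orders of magnitude larger than g, which is why classically the electron should radiate strongly.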
{ "domain": "physics.stackexchange", "id": 9923, "tags": "orbital-motion, electrons, velocity" }
Alternative to .launch files
Question: I just read that ROS2 has an alternative to .launch files, where instead of .xml the logic is defined in a Python script. What a great idea. I assume that this won't work with ROS1 per-se. But how would I implement the launching of a series of nodes, setting of parameters and so on and so forth, in Python? Is there an API that I could use? Are there other ideas? I am using Melodic right now, but I could upgrade to Noetic at any time. Originally posted by pitosalas on ROS Answers with karma: 628 on 2020-12-28 Post score: 1 Answer: But how would I implement the launching of a series of nodes, setting of parameters and so on and so forth, in Python? Is there an API that I could use? yes, there is the API that roslaunch itself uses, which is a Python API. See wiki/roslaunch/API Usage for some example scripts. I would actually not recommend you use it though. It's a tad finicky, and leads to a lot of manual work which roslaunch takes care of for you. I just read that ROS2 has an alternative to .launch files, where instead of .xml the logic is defined in a Python script. [..] I assume that this won't work with ROS1 per-se No, launch and launch_ros can't work with ROS 1, although launch is pretty much ROS-agnostic, so you could theoretically perhaps implement a ROS 1 wrapper similar to launch_ros but for ROS 1. Whether that would make sense, I don't know. There is however already someone who wrote a Python-based roslaunch wrapper: CodeFinder2/roslaunch2. From its readme: roslaunch2 is a (pure Python based) ROS package that facilitates writing versatile, flexible and dynamic launch configurations for the Robot Operating System (ROS 1) using Python, both for simulation and real hardware setups, as contrasted with the existing XML based launch file system of ROS, namely roslaunch. Note that roslaunch2 is not (yet) designed and developed for ROS 2 but for ROS 1 only although it may also inspire the development (of the launch system) of ROS 2. 
It is compatible with all ROS versions providing roslaunch which is used as its backend. roslaunch2 has been tested and heavily used on ROS Indigo, Jade, Kinetic, and Lunar; it also supports a “dry-mode” to generate launch files without ROS being installed at all. The key features of roslaunch2 are versatile control structures (conditionals, loops), extended support for launching and querying information remotely, an easy-to-use API for also launching from Python based ROS nodes dynamically, as well as basic load balancing capabilities for simulation setups. If you really like using Python instead of roslaunch XML, perhaps you could take a look at that. Originally posted by gvdhoorn with karma: 86574 on 2020-12-28 This answer was ACCEPTED on the original site Post score: 3 Original comments Comment by pitosalas on 2020-12-29: Great, thanks. I am checking it out, it looks excellent! Thanks a lot for the pointer! (And yes, doesn't everyone prefer Python over those crazy xml files (and other rando proprietary formats?)) Comment by gvdhoorn on 2020-12-29:\ doesn't everyone prefer Python over those crazy xml files no, certainly not. Those "crazy xml files" form a declarative description of the deployment configuration of your application. Declarative languages are much easier to parse and verify than an arbitrary, turing-complete imperative language such as Python. So for some (narrow) use-cases, an imperative language might be nice. But for system composition and eventual modelling, declarative languages are actually much better suited. (and other rando proprietary formats? You're probably aware, but XML is not a rando proprietary format[s]. Comment by pitosalas on 2020-12-29: Hehe. yes I know, xml has existed for 100 years. I was referring to .msg, and friends. 
But even CMakeLists.txt and package.xml, while the overarching syntax (XML and Make) are "standard", they both have complex semantics which in my opinion are part of the reason that the ROS learning curve is so so steep. Comment by gvdhoorn on 2020-12-29: I'm biased, but .msg is essentially just an IDL which is something many communication frameworks and middleware systems use/have, and CMakeLists.txt is something Kitware created to script their build system. Would you rather have used Makefiles? they both have complex semantics which in my opinion are part of the reason that the ROS learning curve is so so steep. Creating software for a distributed system is simply complex, and there are many aspects to it which work best when using their own domain specific languages. I don't believe we've really resolved any of that complexity in software engineering still. ROS is no exception. Even "user friendly" systems such as MathWorks Simulink and LabVIEW quickly become very complex when dealing with real applications. We may improve that situation in the future, but for now it is what it is. Comment by pitosalas on 2020-12-29: I agree that some complexity is unavoidable. But I also think that we need to pay attention to the learning curve. "Make the easy things easy and the hard things possible" is not a good slogan. I think in ROS there are many examples of unnecessary complexity or easy things which are unnecessarily hard. But anyway, this is getting way too philosophical and opinion-based for a comment thread. I truly appreciate all the help you've given me over the last few years and respect your far deeper knowledge of ROS than what I have.
{ "domain": "robotics.stackexchange", "id": 35914, "tags": "ros-melodic" }
Solution for a combinatorial minimization problem
Question: Let's say we have an inequality, $p \le {a \choose b}$ where $p$ is a fixed constant and $a, b$ are variables. The problem is that, we are trying to find the minimum $a$ with respect to the inequality $p \le {a \choose b}$. Is there a closed form solution (can be approximate as well/doesn't have to be exact) for that combinatorial optimization problem? Answer: G. Bach noticed that the best choice for $b$ is $\lfloor a/2 \rfloor$, and furthermore this choice gives the maximal binomial coefficient. The easiest way to see this is to consider the ratio of two adjacent binomial coefficients: $$ \frac{\binom{a}{b}}{\binom{a}{b+1}} = \frac{b+1}{a-b}.$$ Therefore $\binom{a}{b} \leq \binom{a}{b+1}$ iff $b+1 \leq a-b$ iff $2b+1 \leq a$, and we get the desired result. The binomial theorem states that $$ \sum_{b=0}^a \binom{a}{b} = 2^a, $$ hence $$ \frac{2^a}{a+1} \leq \binom{a}{\lfloor a/2 \rfloor} \leq 2^a. $$ (The correct order of magnitude, $\Theta(2^a/\sqrt{a})$, can be found using Stirling's approximation.) Therefore the optimal $a$ satisfies the following inequalities: $$ \frac{2^{a-1}}{a} < p \leq 2^a. $$ Therefore $a \geq \log_2 p$. On the other hand, when $p \geq 4$, $a \geq 2$, and so $a - \log_2 a \geq a/2$ (since $a-\log_2 a$ is monotone and $2 - \log_2 2 = 2/2$). Therefore for $p \geq 4$, $$ 2p\log_2 (2p) > 2\frac{2^{a-1}}{a} (a - \log_2 a) > 2^{a-1}.$$ We conclude that $$ \log_2 p \leq a < \log_2 p + 1 + \log_2 \log_2 (2p). $$ Therefore the binary search that G. Bach mentions takes only $O(\log\log\log p)$ steps. If all we want is an asymptotic expression, then since $\binom{a}{\lfloor a/2 \rfloor} = \Theta(2^a/\sqrt{a})$, approximately we have $2^a/\sqrt{a} = Cp$, and so $a = \log_2 p + \Theta(\log_2\log_2 p)$. More terms can be obtained by taking more terms in Stirling's approximation. For example, we can find a constant $A$ such that $a = \log_2 p + A\log_2\log_2 p + o(\log_2\log_2 p)$.
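As a sanity check on these bounds, here is a hedged Python sketch that finds the minimal $a$ by direct search. A linear scan suffices because, as shown above, the answer grows only like $\log_2 p$; the binary search mentioned in the answer would be faster still:

```python
# Smallest a with C(a, floor(a/2)) >= p, i.e. the smallest a admitting some b
# with C(a, b) >= p (the central coefficient is the maximum, as argued above).
from math import comb  # Python 3.8+

def min_a(p):
    a = 0
    while comb(a, a // 2) < p:
        a += 1
    return a

print(min_a(10))     # C(5,2) = 10, so a = 5
print(min_a(100))    # C(8,4) = 70 < 100 but C(9,4) = 126, so a = 9
print(min_a(10**6))  # a = 23, close to log2(10^6) ~ 19.9, as the bounds predict
```

The last case matches the derived window $\log_2 p \leq a < \log_2 p + 1 + \log_2\log_2(2p)$.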
{ "domain": "cs.stackexchange", "id": 1727, "tags": "optimization, combinatorics" }
Why do we need effective action $\Gamma$ given the connected generating functional $W$?
Question: I have just learnt the path integral formalism in QFT, up to the point where we computed the generating functionals $\mathcal{Z}[J] := Z[J]/Z[0]$, $W[J]$, and $\Gamma[\varphi]$. Here $J(x)$ is the classical current and $\varphi(x)$ is defined to be the functional derivative $\delta W[J]/\delta J$. All the resources I perused (standard textbooks, lecture notes) mostly detail very carefully how to compute or derive them. What I don't understand is why we still need the effective action if we already have $W[J]$. The sum of connected diagrams already gives everything you need in perturbation theory (in fact you don't really need $Z[J]$ anymore if you have $W[J]$). Furthermore, we often assume (at least in simple examples, since I haven't reached gauge theory) that the Legendre transformation of $W[J]$ is involutive, so we do not lose information working with either $\Gamma$ or $W$. Since we have to do Feynman diagrammatics anyway, I doubt the reason we prefer one of them or the other is because one of them has easier Feynman diagram integrations. So either (1) somehow the effective action trades off something for some advantages (that I cannot appreciate) over the connected generating functional $W[J]$, or (2) there's something about semi-classical vs quantum correlators that I don't understand: i.e. maybe $W[J]$ makes no sense for classical field theory but $\Gamma$ somehow does. I would appreciate an explanation and/or explicit example where $\Gamma$ is absolutely preferred over computing $W$. Answer: It seems like you're looking for an answer along the lines of "you always want to compute quantity $x$ because reasons $y$ and $z$," but there's no such answer. All of these quantities are useful in different contexts, so it really depends on what you're doing. 
You say that $W[J]$ already gives us everything because the coefficients in its power series in $J$ are the connected correlation functions, but this seems like the bias of someone who is only interested in computing scattering amplitudes via the LSZ reduction. Effective field theory itself works with the quantity $\Gamma[\phi]$ since $\phi$ (which is really a misleading shorthand for the tadpole $\langle\phi\rangle$) satisfies the equation $\frac{\delta\Gamma}{\delta\phi}=0$ for zero current. Hence $\Gamma$ provides the equations of motion for the tadpoles, and if you were looking for quantum corrections to the classical theory, $\Gamma$ is really the object you're interested in. Hence effective field theory. I would also point out that $\Gamma$ is the correct object to consider when thinking about spontaneous symmetry breaking. It's standard to say that the breaking is given by having a minimum in the classical potential, but this is actually only the tree-level result. Strictly speaking, you need to look for minima in the potential of the effective action, since that's what's determining $\langle\phi\rangle$. For example, Coleman and Weinberg worked out at some point the 1-loop correction you get to spontaneous symmetry breaking (in QED if I remember correctly). There is also an intimate (diagrammatic) relationship between $W$ and $\Gamma$ which is quite nice, but for that I'll refer you to the QFT book by Banks: "Modern Quantum Field Theory: A Concise Introduction." He performs essentially no calculations in the text, leaving all of them as exercise problems, but it's still a good book for picking up some of the modern ways of thinking about concepts in QFT, leaving the details for other sources. For example, the QFT book by Nair will also contain more modern thoughts about QFTs with the level of detail you might expect from a textbook, but as a result it is a much more significant undertaking to read.
{ "domain": "physics.stackexchange", "id": 75908, "tags": "quantum-field-theory, feynman-diagrams, path-integral, partition-function, 1pi-effective-action" }
What is the minimum size of a ball of gas to become a star?
Question: I know there are two criteria to meet in order for nuclear fusion to occur. High temperature (many times the temperature at the Sun's core) High pressure (protons are very close to each other) [Goal] However I want to know what equations allow me to calculate the amount of hydrogen gas needed so that the internal pressure can produce the right amount of heat for nuclear fusion to occur. [My own understandings] I also understand that the temperature of our Sun's core is not sufficient to overcome the Coulomb barrier; it makes use of quantum tunneling to achieve fusion instead. [Question] What is the minimum amount of hydrogen gas required to form a star? (also please include the name of the equation used) [Assumptions] shape is perfectly spherical no angular velocity, not rotating homogeneous, consisting of 95% hydrogen and 5% helium sustains fusion for at least 1 million years Answer: A proposal for the minimum mass of a gas cloud before collapsing to become a star was made by James Jeans; because of this, it was termed the Jeans mass. It is calculated as $$M=\frac{4 \pi}{3} \rho R^3$$ where $\rho$ is density and $R$ is the radius of the cloud - one half of the Jeans length, which is dependent on the speed of sound and the density of the cloud. It was originally thought that any mass above the Jeans mass would not be in hydrostatic equilibrium, and would collapse. But Jeans didn't realize that any regions outside this radius would also collapse. So his arguments were flawed, though they're still sound in many applications.
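As a rough numerical illustration of the mass formula above (not part of the original answer): taking one common form of the Jeans length, $\lambda_J = c_s\sqrt{\pi/(G\rho)}$, setting $R = \lambda_J/2$ and substituting into $M=\frac{4\pi}{3}\rho R^3$ gives order-of-magnitude cloud masses. The sound speed and density below are illustrative values for a cold molecular cloud core, not measured data:

```python
from math import pi, sqrt

G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def jeans_mass(c_s, rho):
    """M = (4*pi/3) * rho * R^3 with R = lambda_J / 2, where
    lambda_J = c_s * sqrt(pi / (G * rho)) is one common form of the Jeans length."""
    lam = c_s * sqrt(pi / (G * rho))
    R = lam / 2.0
    return (4.0 * pi / 3.0) * rho * R**3

M_sun = 1.989e30  # kg
# Assumed cold-cloud values: c_s ~ 200 m/s, rho ~ 1e-17 kg/m^3
M = jeans_mass(200.0, 1e-17)
print(f"Jeans mass ~ {M / M_sun:.1f} solar masses")  # a few solar masses
```

Note that $M \propto \rho^{-1/2}$ in this form, so denser clouds collapse at smaller masses, which is why dense cores are the sites of star formation.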
{ "domain": "astronomy.stackexchange", "id": 836, "tags": "star, temperature, star-formation, hydrogen, helium" }
How does the synthesis of Kaolinite (and its by-products) influence porosity in granodiorite?
Question: I started studying the effects of weathering on granodiorite (see previous posts) and plan on referencing the link between the chemical weathering of plagioclase feldspar leading to kaolinization, a process I am relatively unfamiliar with. In brief terms, how would this reaction reduce the porosity of the rock? Also, how would the release of $Ca$ and $K$ influence further degradation of the rock's porosity? Answer: Kaolinite "grains", for want of any better term, are swollen with chemically bonded water compared to the crystals they weather out of, so porosity is decreased by particle expansion while at the same time overall density drops. Kaolin also has very strong particle-particle adhesion, so any mechanical deformation of the material tends to compress it, leading to further reductions in bulk porosity. The influence of Calcium and Potassium on the ongoing weathering of the rock and its overall porosity depends greatly on the weathering regime; for example, if weathering is continuously under wet conditions, then Calcium and Potassium will be mobilised in ground water and removed. On the other hand, in a wetting-drying cycle, Calcium and Potassium liberated from the parent rock become oxides when the material dries out; those subsequently react strongly with water in the next wetting cycle, acidifying the environment and accelerating rock decomposition.
{ "domain": "earthscience.stackexchange", "id": 1459, "tags": "geology" }
Fixed-wing UAV simulation?
Question: Are there any models of fixed-wing UAVs? I'd like to be able to simulate one and control it through ROS. This video seems promising, but I can't seem to locate anything else about it. Originally posted by Tom Moore on Gazebo Answers with karma: 15 on 2015-02-25 Post score: 0 Original comments Comment by Jose Luis Rivero on 2015-03-04: Maybe asking in the gazebo mailing list is a good option for your question. Answer: I think this is the world file, and this is the plugin. This is very experimental, and just a proof of concept. I haven't seen other fixed-wing models in Gazebo, but that doesn't mean there isn't one floating around out there. Originally posted by nkoenig with karma: 7676 on 2015-03-04 This answer was ACCEPTED on the original site Post score: 0
{ "domain": "robotics.stackexchange", "id": 3718, "tags": "gazebo-model" }
Catty: A mini cat clone in Rust
Question: As part of my journey in learning the Rust programming language, I decided to make a miniature cat clone (catty) in it. The following is my code, which depends on clap for argument parsing (see below). It currently only supports concatenating 1 file with possibly numbered lines (-n/--number). I tried to be as close as possible to the actual cat program for this: #[macro_use] extern crate clap; use std::{io, error, env, fs::read_to_string, path::PathBuf, process}; fn main() { process::exit( if let Err(err) = cli(env::args().collect::<Vec<_>>()) { // CLI parsing errors if let Some(clap_err) = err.downcast_ref::<clap::Error>() { eprint!("{}", clap_err); } else { eprintln!("{}", err); } 1 } else { 0 } ); } fn cli(cli_args: Vec<String>) -> Result<(), Box<error::Error>> { let matches = clap::App::new("catty") .version(crate_version!()) .about("A minimal clone of the linux utility cat.") .arg(clap::Arg::with_name("FILE") .help("The file to concatenate to standard output") .required(true)) .arg(clap::Arg::with_name("number") .short("n") .long("number") .help("Numbers all output lines")) .get_matches_from_safe(cli_args)?; let file_contents = get_file_contents(matches.value_of("FILE").unwrap())?; let file_contents: Vec<&str> = file_contents.split("\n").collect(); let number_lines = matches.is_present("number"); for (i, line) in file_contents.iter().enumerate() { let formatted_line = if number_lines { format!("{:>6} {}", i + 1, line) } else { line.to_string() }; if i == file_contents.len() - 1 && line.len() > 0 { print!("{}", formatted_line); } else if !(i == file_contents.len() - 1 && line.len() == 0) { println!("{}", formatted_line); } } Ok(()) } fn get_file_contents(passed_argument: &str) -> Result<String, Box<error::Error>> { let mut resolved_path = PathBuf::from(passed_argument); if !resolved_path.exists() || !resolved_path.is_file() { resolved_path = PathBuf::from(env::current_dir()?); resolved_path.push(passed_argument); if !resolved_path.exists() || 
!resolved_path.is_file() { return Err(io::Error::new(io::ErrorKind::NotFound, "The passed file is either not a file or does not exist!").into()); } } Ok(read_to_string(resolved_path)?) } My Cargo.toml is as follows: [package] name = "catty" version = "0.1.0" authors = ["My Name <my@email.com>"] [dependencies] [dependencies.clap] version = "2.32" default-features = false features = ["suggestions"] Here is what I want to know from this code review: Is my code idiomatic Rust (i.e. good error handling, not overly verbose, etc.)? Is my code performant or can it be improved in some way? Answer: 4: use std::{env, error, fs::read_to_string, io, path::PathBuf, process}; Just a personal taste, but I would do use std::{env, error, io, process}; use std::fs::read_to_string; use std::path::PathBuf; Yes, nested includes are nice and easy, but hard to extend. It's up to you. 35: let file_contents: Vec<&str> = file_contents.split('\n').collect(); I would suggest using lines instead. Also, you can omit &str or change the line completely. Either let file_contents: Vec<_> = file_contents.lines().collect(); or let file_contents = file_contents.lines().collect::<Vec<_>>(); 46/48: line.len() > 0 / line.len() == 0 Replace that by !line.is_empty() and line.is_empty(). 60: resolved_path = PathBuf::from(env::current_dir()?); Remove PathBuf::from completely, because current_dir is already a PathBuf
{ "domain": "codereview.stackexchange", "id": 32741, "tags": "console, rust" }
Detecting direction of sound using several microphones
Question: First of all, I've seen a similar thread, however it's a bit different to what I'm trying to achieve. I am constructing a robot which will follow the person who calls it (3D sound localization). My idea is to use 3 or 4 microphones - i.e. in the following arrangement in order to determine from which direction the robot was called: Where S is source, A, B and C are microphones. The idea is to calculate phase correlation of signals recorded from pairs AB, AC, BC and based on that construct a vector that will point at the source using a kind of triangulation. The system does not even have to work in real time because it will be voice activated - signals from all the microphones will be recorded simultaneously, voice will be sampled from only one microphone and if it fits the voice signature, phase correlation will be computed from the last fraction of a second in order to compute the direction. I am aware that this might not work too well i.e. when the robot is called from another room or when there are multiple reflections. This is just an idea I had, but I have never attempted anything like this and I have several questions before I construct the actual hardware that will do the job: Is this a typical way of doing this? (i.e. used in phones for noise cancellation?) What are other possible approaches? Can phase correlation be calculated between 3 sources simultaneously somehow? (i.e. in order to speed up the computation) Is a 22 kHz sample rate and 12-bit depth sufficient for this system? I am especially concerned about the bit depth. Should the microphones be placed in separate tubes in order to improve separation? Answer: To extend Müller's answer, Should the microphones be placed in separate tubes in order to improve separation? No; you are trying to identify the direction of the source, and adding tubes will only make the sound bounce inside the tube, which is definitely not wanted. 
The best course of action would be to make them face straight up; this way they will all receive similar sound, and the only thing that is unique about them is their physical placement, which will directly affect the phase. A 6 kHz sine wave has a wavelength of $\frac{\text{speed of sound}}{\text{sound frequency}}=\frac{343\text{ m/s}}{6\text{ kHz}}=57.2\text{ mm}$. So if you want to uniquely identify the phases of sine waves up to 6 kHz, which are the typical frequencies for human talking, then you should space the microphones at most 57.2 mm apart. Here is one item that has a diameter that is less than 5.71 mm. Don't forget to add a low pass filter with a cut-off frequency at around 6-10 kHz. Edit I felt that this #2 question looked fun so I decided to try to solve it on my own. Can phase correlation be calculated between 3 sources simultaneously somehow? (i.e. in order to speed up the computation) If you know your linear algebra, then you can imagine that you have placed the microphones in a triangle where each microphone is 4 mm away from each other, making each interior angle $60°$. So let's assume they are in this configuration: C / \ / \ / \ / \ / \ A - - - - - B I will... use the nomenclature $\overline{AB}$ which is a vector pointing from $A$ to $B$ call $A$ my origin write all numbers in mm use 3D math but end up with a 2D direction set the vertical position of the microphones to their actual waveform. So these equations are based on a sound wave that looks something like this. Calculate the cross product of these microphones based on their position and waveform, then ignore the height information from this cross product and use arctan to come up with the actual direction of the source. 
call $a$ the output of the microphone at position $A$, call $b$ the output of the microphone at position $B$, call $c$ the output of the microphone at position $C$ So the following things are true: $A=(0,0,a)$ $B=(4,0,b)$ $C=(2,\sqrt{4^2-2^2}=2\sqrt{3},c)$ This gives us: $\overline{AB} = (4,0,a-b)$ $\overline{AC} = (2,2\sqrt{3},a-c)$ And the cross product is simply $\overline{AB}×\overline{AC}$ $$ \begin{align} \overline{AB}×\overline{AC}&= \begin{pmatrix} 4\\ 0\\ a-b\\ \end{pmatrix} × \begin{pmatrix} 2\\ 2\sqrt{3}\\ a-c\\ \end{pmatrix}\\\\ &=\begin{pmatrix} 0\cdot(a-c)-(a-b)\cdot2\sqrt{3}\\ (a-b)\cdot2-4\cdot(a-c)\\ 4\cdot2\sqrt{3}-0\cdot2\\ \end{pmatrix}\\\\ &=\begin{pmatrix} 2\sqrt{3}(b-a)\\ -2a-2b-4c\\ 8\sqrt{3}\\ \end{pmatrix} \end{align} $$ The Z information, $8\sqrt{3}$ is just junk, zero interest to us. As the input signals are changing, the cross vector will swing back and forth towards the source. So half of the time it will point straight to the source (ignoring reflections and other parasitics). And the other half of the time it will point 180 degrees away from the source. What I'm talking about is the $\arctan(\frac{-2a-2b-4c}{2\sqrt{3}(b-a)})$ which can be simplified to $\arctan(\frac{a+b+2c}{\sqrt{3}(a-b)})$, and then turn the radians into degrees. So what you end up with is the following equation: $$\arctan\Biggl(\frac{a+b+2c}{\sqrt{3}(a-b)}\Biggr)\frac{180}{\pi}$$ But half the time the information is literally 100% wrong, so how.. should one.... make it right 100% of the time? Well if $a$ is leading $b$, then the source can't be closer to B. 
In other words, just make something simple like this: source_direction=atan2(a+b+2c,sqrt(3)*(a-b))*180/pi; if(a>b){ if(b>c){//a>b>c possible_center_direction=240; //A is closest, then B, last C }else if(a>c){//a>c>b possible_center_direction=180; //A is closest, then C last B }else{//c>a>b possible_center_direction=120; //C is closest, then A last B } }else{ if(c>b){//c>b>a possible_center_direction=60; //C is closest, then B, last A }else if(a>c){//b>a>c possible_center_direction=300; //B is closest, then A, last C }else{//b>c>a possible_center_direction=0; //B is closest, then C, last A } } //if the source is out of bounds, then rotate it by 180 degrees. if(source_direction>(possible_center_direction+60) || source_direction<(possible_center_direction-60)){ source_direction=(source_direction+180)%360; } And perhaps you only want to react if the sound source is coming from a specific vertical angle: if people talk above the microphones => 0 phase change => do nothing. People talk horizontally next to it => some phase change => react. $$ \begin{align} |P| &= \sqrt{P_x^2+P_y^2}\\ &= 2\sqrt{3(a-b)^2+(a+b+2c)^2}\\ \end{align} $$ So you might want to set that threshold to something low, like 0.1 or 0.01. I'm not entirely sure, depends on the volume and frequency and parasitics, test it yourself. Another reason for when to use the absolute value equation is for zero crossings: there might be a slight moment when the direction will point in the wrong direction. Though it will only be for 1% of the time, if even that. So you might want to attach a first order LP filter to the direction. true_true_direction = true_true_direction*0.9+source_direction*0.1; And if you want to react to a specific volume, then just sum the 3 microphones together and compare that to some trigger value. The mean value of the microphones would be their sum divided by 3, but you don't need to divide by 3 if you increase the trigger value by a factor 3. 
I'm having issues with marking the code as C/C#/C++ or JS or any other, so sadly the code will be black on white, against my wishes. Oh well, good luck on your venture. Sounds fun. Also there is a 50/50 chance that the direction will be 180 away from the source 99% of the time. I'm a master at making such mistakes. A correction for this though would be to just invert the if statements for when 180 degrees should be added.
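For the record, the final arctan expression from the answer is easy to try out. A minimal Python sketch of just that formula, as written in the answer (the closest-microphone disambiguation and the out-of-bounds rotation are left out):

```python
from math import atan2, degrees, sqrt

def source_direction(a, b, c):
    """Direction estimate in degrees from instantaneous samples a, b, c of the
    microphones at positions A, B and C (triangle layout as in the answer)."""
    return degrees(atan2(a + b + 2 * c, sqrt(3) * (a - b)))

# Sanity check: A and B at a zero crossing while C is at a peak, i.e. the
# wavefront reached C first -- the estimate should point toward C (+90 deg).
print(source_direction(0.0, 0.0, 1.0))   # ~ 90.0
print(source_direction(0.0, 0.0, -1.0))  # ~ -90.0
```

Using atan2 (rather than a plain arctan of the ratio) keeps the quadrant information and avoids the division by zero when a equals b.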
{ "domain": "dsp.stackexchange", "id": 6081, "tags": "sound-recognition" }
Running sample ROS node on Android through SL4A
Question: Hi, Following the instructions in section 6 on running the sample ROS node on android page http://www.ros.org/wiki/android, when I went to SL4A, then went into ros.py, it asks for the ROS_PACKAGE_URI. I filled in the field with http://ubuntu:11311/. Then it says "uname: permission denied". How can I go about this? Thanks Soe Originally posted by soetommy on ROS Answers with karma: 47 on 2011-06-08 Post score: 1 Answer: Have you considered using rosjava instead? You'll see much better integration with Android that way. See http://rosjava.googlecode.com/ Originally posted by damonkohler with karma: 3838 on 2011-06-09 This answer was ACCEPTED on the original site Post score: 2 Original comments Comment by damonkohler on 2011-06-23: They are part of the Android library and not part of rosjava.jar. If you're having trouble, this may be better off as a separate question. Comment by soetommy on 2011-06-22: There is currently a problem with importing RosTextView and MessageCallable into the java because they seem not to be included in the rosjava.jar library. Comment by damonkohler on 2011-06-22: There are fully functional Android projects under trunk/android/tutorials. Comment by soetommy on 2011-06-21: Hi Damon, Yes, I have already tried it, but cannot seem to understand how to start it off. Can you provide some simple example source files for linking android with ROS, which are fully functional? That would be very helpful indeed. I am using the eclipse with android sdk and adt tools. Thanks! :D
{ "domain": "robotics.stackexchange", "id": 5792, "tags": "android" }
Extracting filenames from a URL by splitting a string
Question: I am used to accessing items from an array, returned by the Split function, directly. This is lazy, I know. Usually, I know the element I want and I say something like: Debug.Print Split(href, "/")(0) Questions: 1) Can I do something similar to access the UBound of the array returned? 2) What is the "best practice" way of doing things and why? Code: I wrote the following but it looks messy. Sub Testing() Dim href As String Dim fileName As String href = "https://www.england.nhs.uk/statistics/wp-content/uploads/sites/2/2018/01/20180111-AmbSYS-post-ARP-month-of-December-2017v2.xlsx" fileName = Trim$(Split(href, "/")(UBound(Split(href, "/")))) End Sub I saw from here that I can also do: Debug.Print Split(href, "/")(Len(href) - Len(Replace(href, "/", ""))) Again, messy. I know that I can assign to an array variable and then access the UBound that way. It looks tidier but is essentially the same thing; i.e. Dim myArr() As String myArr() = Split(href, "/") fileName = myArr(UBound(myArr)) Answer: Abstract the implementation into a function. This makes it easy to catch mistakes if you want to use this capability in more than one location. GetUpperBoundElementFromDelimitedString takes the required arguments you give it and returns the UBound result. The name may be cumbersome, but typing getup and then pressing Ctrl+J will 'List Properties/Methods', the same as going to Edit and choosing that option. Public Function GetUpperBoundElementFromDelimitedString(ByVal inputValue As String, ByVal delimiter As String, Optional ByVal compare As VbCompareMethod = VbCompareMethod.vbTextCompare) As String Dim temp As Variant temp = Split(inputValue, delimiter, compare:=compare) GetUpperBoundElementFromDelimitedString = temp(UBound(temp)) End Function Now when you are using it, the descriptive name lets you immediately know what's happening. Public Sub Foo() Dim bar As String bar = "This,is,going,to,return,an,element,at,a,specific,position."
Debug.Print GetUpperBoundElementFromDelimitedString(bar, ",") End Sub The same could be done for a specific position: Public Function GetArrayElementFromDelimitedString(ByVal inputValue As String, ByVal delimiter As String, ByVal zeroBasedPosition As Long, Optional ByVal compare As VbCompareMethod = VbCompareMethod.vbTextCompare) As String GetArrayElementFromDelimitedString = Split(inputValue, delimiter, compare:=compare)(zeroBasedPosition) End Function
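For comparison outside VBA, the same "take the last element of the split" idea can be sketched in JavaScript (an illustrative aside, not part of the original VBA answer; it uses the URL from the question):

```javascript
// JavaScript analogue of indexing a Split result at UBound:
// Array.prototype.pop() returns the last element of the array directly.
const href = "https://www.england.nhs.uk/statistics/wp-content/uploads/sites/2/2018/01/20180111-AmbSYS-post-ARP-month-of-December-2017v2.xlsx";

const fileName = href.split("/").pop();
console.log(fileName); // 20180111-AmbSYS-post-ARP-month-of-December-2017v2.xlsx
```

Languages with negative indexing (or `.at(-1)` in modern JavaScript) avoid the "call Split twice" awkwardness entirely, which is essentially what the helper function above encapsulates for VBA.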
{ "domain": "codereview.stackexchange", "id": 29205, "tags": "array, vba" }
What's wrong with this derivation that $i\hbar = 0$?
Question: Let $\hat{x} = x$ and $\hat{p} = -i \hbar \frac {\partial} {\partial x}$ be the position and momentum operators, respectively, and $|\psi_p\rangle$ be the eigenfunction of $\hat{p}$, so that $$\hat{p} |\psi_p\rangle = p |\psi_p\rangle,$$ where $p$ is the eigenvalue of $\hat{p}$. Then, we have $$ [\hat{x},\hat{p}] = \hat{x} \hat{p} - \hat{p} \hat{x} = i \hbar.$$ From the above equation, denoting by $\langle\cdot\rangle$ an expectation value, we get, on the one hand, $$\langle i\hbar\rangle = \langle\psi_p| i \hbar | \psi_p\rangle = i \hbar \langle \psi_p | \psi_p \rangle = i \hbar$$ and, on the other, $$\langle [\hat{x},\hat{p}] \rangle = \langle\psi_p| (\hat{x}\hat{p} - \hat{p}\hat{x}) |\psi_p\rangle = \langle\psi_p|\hat{x} |\psi_p\rangle p - p\langle\psi_p|\hat{x} |\psi_p\rangle = 0.$$ This suggests that $i \hbar = 0$. What went wrong? Answer: Neither the $\hat p$ nor the $\hat x$ operator has eigenvectors in the strict sense. They have distributional eigenvectors, which are only defined in a bigger space of functions than the space of square-normalizable wavefunctions, and which should be thought of as only meaningful when smeared a little bit by a smooth test function. The normalization $\langle \psi_p | \psi_p \rangle$ is infinite, because the p-wave is extended over all space. Similarly, the normalization of the delta-function wavefunction, the x-operator eigenvector, is infinite, because the square of a delta function has infinite integral. You could state your paradox using $|x\rangle$ states too: $$i\hbar \langle x|x\rangle = \langle x| (\hat{x}\hat{p} - \hat{p}\hat{x})|x\rangle = x \langle x|\hat{p}|x\rangle - \langle x|\hat{p}|x\rangle x = 0.$$ Because $|x'\rangle$ is only defined when it is smeared a little, you need to use a separate variable for the two occurrences of $x'$.
So write the full matrix out for this case: $$ i\hbar \langle x|y\rangle = x\langle x|\hat{p}|y\rangle - \langle x|\hat{p}|y\rangle y = (x-y)\langle x|\hat{p}|y\rangle.$$ Now $x$ and $y$ are separate variables which can be smeared independently, as required. The $\hat p$ operator's matrix elements are the derivative of a delta function: $$ \langle x|\hat{p}|y\rangle = -i\hbar \delta'(x-y).$$ So what you get is $$ (x-y)\delta'(x-y),$$ and you are taking $x=y$ naively, setting the first factor to zero, without noticing that the delta-function factor is horribly singular, so the result is ill defined without more careful evaluation. If you multiply by smooth test functions for $x$ and $y$, to smear the answer out a little bit: $$ \int f(x) g(y) (x-y) \delta'(x-y) \,dx \,dy = -\int f(x)g(x) \,dx = -\int f(x) g(y) \delta(x-y) \,dx \,dy, $$ where the first identity comes from integrating by parts in $x$ and setting to zero all terms that vanish under the evaluation of the delta function. The result is that $$ (x-y)\delta'(x-y) = -\delta(x-y),$$ and the result is not zero; it is in fact consistent with the commutation relation, since $(x-y)\langle x|\hat{p}|y\rangle = -i\hbar(x-y)\delta'(x-y) = i\hbar\,\delta(x-y)$. This delta-function equation (in the form $x\,\delta'(x) = -\delta(x)$) appears, with explanation, in the first mathematical chapter of Dirac's "The Principles of Quantum Mechanics". It is unfortunate that formal manipulations with distributions lead to paradoxes so easily. For a related but different paradox, consider the trace of $\hat{x}\hat{p}-\hat{p}\hat{x}$.
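The smeared identity can be checked numerically by replacing the delta function with a narrow Gaussian. In the sketch below, the test function $f$, the width `eps`, and the grid are arbitrary illustrative choices; the point is that $\int f(x)\,(x-y)\,\delta'(x-y)\,dx$ comes out as $-f(y)$, matching Dirac's $x\,\delta'(x) = -\delta(x)$:

```javascript
// Numerically verify that ∫ f(x) (x−y) δ'(x−y) dx ≈ −f(y), i.e. that
// (x−y)δ'(x−y) acts as −δ(x−y) on smooth test functions.
const eps = 0.05;                              // width of the nascent delta (arbitrary, small)
const f = t => Math.exp(-t * t) * Math.cos(t); // arbitrary smooth test function

// Derivative of a Gaussian approximation to the delta function
const deltaPrime = u =>
  (-u / (eps ** 3 * Math.sqrt(2 * Math.PI))) * Math.exp(-u * u / (2 * eps * eps));

const y = 0.7;
const dx = 1e-4;
let lhs = 0;
for (let x = -10; x <= 10; x += dx) {
  lhs += f(x) * (x - y) * deltaPrime(x - y) * dx; // Riemann sum for the smeared integral
}

console.log(lhs, -f(y)); // both close to -0.47, not 0
```

The smearing is essential: sampling $(x-y)\delta'(x-y)$ at $x=y$ directly just gives the meaningless product of zero and an infinitely singular factor.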
{ "domain": "physics.stackexchange", "id": 40206, "tags": "quantum-mechanics, operators, momentum, hilbert-space, dirac-delta-distributions" }
Express.js routes management for large web application
Question: I'm currently working on a large web application with numerous routes. Currently, our routes are in one large list in a single JSON file. We recently ran into a problem due to the precedence of params being set from top to bottom in the file. This question details the issue we encountered further: Node.js Express route naming and ordering: how is precedence determined? My plan is to propose an alternate routing solution based on my work from a personal project. I feel like it's a valid suggestion but missed out on a chance to really make my case in a meeting the other day. Here is a routes.js file from my personal project that demonstrates the pattern I would like to use; let me know what you think, or of possible better alternatives/edits. Thanks! By creating separate routers for each resource, different controllers can access the same URLs without the params being overridden. const express = require('express'); const usersRouter = express.Router(); const songsRouter = express.Router(); const playlistsRouter = express.Router(); const usersController = require('../controllers/users'); const songsController = require('../controllers/songs'); const playlistController = require('../controllers/playlists'); usersRouter.route('/') .post(usersController.create) songsRouter.route('/') .get(songsController.index) .post(songsController.create) playlistsRouter.route('/') .get(playlistController.index) .post(playlistController.create) songsRouter.route('/:id') .get(songsController.show) .put(songsController.update) .delete(songsController.destroy) playlistsRouter.route('/:id') .get(playlistController.show) .put(playlistController.update) .delete(playlistController.destroy) module.exports = { songs: songsRouter, playlists: playlistsRouter, users: usersRouter, }; A couple of questions that come to my mind are: How optimal would it be to have all those Express router instances? Is this approach scalable?
Answer: Absolutely, you should compose your Express application in terms of a hierarchy of routers. Instancing all those routers should be negligible in terms of your application's performance. This is the only approach I can imagine feasibly maintaining at scale; it buys you some separation of concerns.
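To illustrate the other half of the pattern, a hypothetical app.js would mount each exported router under its resource prefix. The sketch below stubs out Express with plain objects so the mounting loop runs standalone; the `routers` shape mirrors the module.exports in the question, and all names are illustrative:

```javascript
// Hypothetical wiring: mount each exported router under '/<resource>'.
// express.Router() instances are stubbed with plain objects so this is
// self-contained; in real code, `routers` would be require('./routes')
// and `app` would be express().
const routers = { songs: {}, playlists: {}, users: {} };

const mounted = [];
const app = {
  use: (prefix, router) => mounted.push(prefix), // minimal stub of app.use
};

for (const [name, router] of Object.entries(routers)) {
  app.use(`/${name}`, router); // e.g. app.use('/songs', songsRouter)
}

console.log(mounted); // [ '/songs', '/playlists', '/users' ]
```

Because each router owns its own `/:id` params, handlers on the songs router cannot clobber params on the playlists router, which is the precedence problem the question describes.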
{ "domain": "codereview.stackexchange", "id": 30491, "tags": "javascript, node.js, ecmascript-6, express.js, url-routing" }
Is Huygens principle true for any shape of wavefront?
Question: I have read from a few sources that cylindrical waves propagate leaving a wake behind, differently from spherical and planar waves, which would propagate sharply, ‘cleanly’. One example is this question and its comments. One reads: “The wave equation may allow any shape wave front, but Huygens principle does not hold for any shape wave front. For example cylindrical waves do not propagate 'cleanly' without a wake whereas spherical waves and plane waves do.” Another example is this article I found, although it is a little complex for my understanding and I may be missing something. It reads: “By way of contrast, the cylindrically spreading pressure pulse depends on time t and the propagation delay, r/c, individually, which means that it does not retain the shape of the source signature as it propagates. Instead, after the propagation delay at time t = r/c, the pulse exhibits an extended tail, or wake, which decays to zero asymptotically as 1/t2.” Considering they’re all 3-dimensional waves, I find it odd that cylindrically shaped waves would propagate differently from spherical or planar waves. What explains this? Are those comments accurate? Also, do they say the same thing or I’m misunderstanding? I don’t understand complex math, so any simple answer would be appreciated. Thanks. Answer: Consider a line of sources of spherical waves, placed along the $z$ axis. For concreteness, let the waves emitted be square pulses. If we take a cross section of the wave field in the $xOz$ plane, we'll see the following some time after the (synchronous) emission of the pulses: Notice how, if we look at the $Ox$ line, we'll see that there's not just a single pulse visible. 
Instead there's a train of pulses: wavefronts from different sources arrive at a given point at different times: If we increase the density of the sources along the $z$ axis, we'll get something like the following: At different times the waves will look like this: We've made a cylindrical wave from a line of spherical waves. Compare this arrangement to a continuous line of monochromatic spherical sources, resulting in a cylindrical wave: $$\frac1\pi\int\limits_{-\infty}^\infty \operatorname{sinc}\left(\sqrt{x^2+y^2}\right)\,\mathrm{d}y=J_0(x).$$ Let's now arrange the sources along the plane $yOz$ instead of the line $Oz$. We'll get the following result along the $Ox$ line: Or, at varying times, This corresponds to a planar wave made of a plane of spherical sources. For monochromatic sources this corresponds to the arrangement* $$\frac1{2\pi} \int\limits_{-\infty}^\infty \int\limits_{-\infty}^\infty \operatorname{sinc}\left(\sqrt{x^2+y^2+z^2}\right)\,\mathrm{d}y\,\mathrm{d}z=\cos(x).$$ Notice how we've obtained a perfect (in the limit of a continuous distribution of sources) piecewise-constant wave field. If we now make the sources emit a negative pulse so as to cancel this bump, we'll get the exact same shape, just of different sign (and with different propagation distance and amplitude). Adding these two waves, positive and negative, together by the principle of superposition, we'll get a planar pulse with a well-defined beginning and ending. Do you see what we'll get if we attempt the same subtraction with the waves from the line of sources? Obviously we'll get a messy wave with two peaks, one positive (outer) and one negative (inner), and exactly what is predicted by the answers you link to: a wake in the inner region and a lack of an ending to the cylindrical wave pulse.
So what we have here is just what this answer at Math.SE says: the difference in arrival times of waves from different sources produces a wake in even dimensions, while this effect nicely cancels out in odd dimensions, resulting in well-formed wave pulses. *Strictly speaking, this integral diverges. But with the appropriate regularization the equation still holds; see this post for details.
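The line-of-sources formula above can be checked numerically without any special-function library, by also computing $J_0$ from its standard integral representation $J_0(x)=\frac1\pi\int_0^\pi\cos(x\sin\theta)\,\mathrm{d}\theta$. This is a rough sketch; the truncation range, step sizes, and sample point $x = 1.3$ are arbitrary choices:

```javascript
// Check (1/π) ∫ sinc(√(x² + y²)) dy ≈ J₀(x) by direct summation.
const x = 1.3;

// Left side: truncate the (conditionally convergent) line integral at |y| = 500
let lhs = 0;
const dy = 1e-3;
for (let y = -500; y <= 500; y += dy) {
  const r = Math.sqrt(x * x + y * y);
  lhs += (Math.sin(r) / r) * dy; // sinc(r) = sin(r)/r
}
lhs /= Math.PI;

// Right side: J₀(x) via its integral representation, midpoint rule
let j0 = 0;
const n = 100000;
const dth = Math.PI / n;
for (let i = 0; i < n; i++) {
  j0 += Math.cos(x * Math.sin((i + 0.5) * dth)) * dth;
}
j0 /= Math.PI;

console.log(lhs.toFixed(2), j0.toFixed(2)); // both ≈ 0.62
```

The slow 1/|y| decay of the integrand is itself a symptom of the even-dimensional wake: contributions from arbitrarily distant sources on the line never quite stop arriving.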
{ "domain": "physics.stackexchange", "id": 72254, "tags": "waves, wavefunction, huygens-principle" }