Selection of recombinant host by color selection method
Question: In recombinant DNA tech, to select a recombinant host from non-recombinants, we have different types of techniques available. The one I'm talking about is the color selection method with the use of 2 markers (one antibiotic resistance gene, e.g. for ampicillin, and the other the Z-gene). We use the Z-gene as the reporter gene. We use vectorless E. coli as the host cell here. Now, when we add ampicillin to the petri dish containing all of the variants (transformants and non-transformants), the non-transformants will die. The transformants survive. Now, we add X-gal to the replica of this petri dish, and we see some of the colonies turning blue due to the action of the enzyme β-galactosidase, which in turn is produced by the intact Z-gene in the plasmid. That means our blue colonies are the non-recombinant ones. My question is: the medium taken in the petri dish should contain glucose too, shouldn't it? Otherwise the recombinants would also give the color reaction (X-gal is a homolog of lactose) due to the Z-gene present in their chromosomal DNA (operon concept). The question may be silly since it's kind of obvious we have to supply basic necessities for the growth of bacteria (which obviously is glucose). Just wanted to confirm in case we don't have glucose available! =p Answer: The Z gene you are talking about should be lacZ, which encodes $\beta$-galactosidase, an enzyme assembled from $\alpha$ and $\omega$ peptides. Neither peptide is functional by itself. $\beta$-galactosidase cleaves the glycosidic bond in X-gal to form galactose and 5-bromo-4-chloro-3-hydroxyindole. The latter product then dimerizes and oxidizes to 5,5'-dibromo-4,4'-dichloro-indigo, an intense blue product that is easy to identify and quantify. Next, you should check the genotype of your bacterial strain, because cloning strains typically have only the $\omega$-peptide encoded in their genome. 
Thus, the enzymatic activity of $\beta$-galactosidase is only recovered when the $\alpha$-peptide is present in the environment, and the process is called $\alpha$-complementation. Two commonly used strains for cloning and expression in E. coli are DH5$\alpha$ and TOP10. The genotype for DH5$\alpha$ is: F– Φ80lacZΔM15 Δ(lacZYA-argF) U169 recA1 endA1 hsdR17 (rK–, mK+) phoA supE44 λ– thi-1 gyrA96 relA1. Here, lacZΔM15 indicates that the M15 segment is lost from lacZ, and the mutant protein will only have the $\omega$-peptide. The genotype for TOP10 is: F– mcrA Δ(mrr-hsdRMS-mcrBC) Φ80lacZΔM15 ΔlacX74 recA1 araD139 Δ(ara leu) 7697 galU galK rpsL (StrR) endA1 nupG. Again, you will notice lacZΔM15, which is still a non-functional peptide segment after all. Finally, to your question: even if there is no glucose as a primary energy source in the medium and only X-gal is present, the operon of your host bacteria (I suppose it's E. coli) cannot produce functional $\beta$-galactosidase on its own. Moreover, the recombinants, with the vector sequence encoding the $\alpha$-peptide disrupted, will not exhibit $\alpha$-complementation. No color reaction then! :D References: 1. Genotypes of Invitrogen™ competent cells - TOP10, retrieved from: https://www.thermofisher.com/cn/zh/home/life-science/cloning/competent-cells-for-transformation/chemically-competent/top10f-genotypes.html. 2. Genotypes of Invitrogen™ competent cells - DH5α, retrieved from: https://www.thermofisher.com/us/en/home/life-science/cloning/competent-cells-for-transformation/chemically-competent/dh5alpha-genotypes.html. 3. Langley, K. E.; Villarejo, M. R.; Fowler, A. V.; Zamenhof, P. J.; Zabin, I. (1975). "Molecular basis of beta-galactosidase alpha-complementation". Proceedings of the National Academy of Sciences of the United States of America. 72 (4): 1254–1257. doi:10.1073/pnas.72.4.1254. PMC 432510. PMID 1093175.
{ "domain": "biology.stackexchange", "id": 8159, "tags": "biotechnology, recombinant" }
Is it possible to simulate robots using magnetic wheels driving on a ferrous surface?
Question: Hi all, I'm looking to use Gazebo to simulate a robot that uses magnetic wheels to drive vertically on ferrous material. Can anyone please give me some advice on whether this is possible to do within Gazebo and, if so, where I should start? I haven't had much luck and I'm pretty new to ROS and Gazebo. Thanks, Originally posted by Bizarre_101 on Gazebo Answers with karma: 3 on 2017-04-19 Post score: 0 Answer: I think you'll have to write a Gazebo plugin, which will calculate and apply magnetic forces to the wheels on every time-step update. Here are some links to start with: http://gazebosim.org/tutorials/?tut=plugins_hello_world http://osrf-distributions.s3.amazonaws.com/gazebo/api/dev/classgazebo_1_1physics_1_1Link.html -- look for the SetForce method Originally posted by eugene-katsevman with karma: 163 on 2017-04-19 This answer was ACCEPTED on the original site Post score: 1
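As a rough sketch of the magnitude such a plugin would need to compute each update, here is a self-contained force estimate (the function name, numbers, and the simple Coulomb-friction model are my own illustrative assumptions, not Gazebo API): for the robot to stick to a vertical wall, static friction from the magnetic pull must carry its weight.

```cpp
#include <cassert>
#include <cmath>

// Minimum magnetic pull (normal to the wall) per wheel so that static
// friction mu * F_normal can support the robot's weight on a vertical
// ferrous surface, padded by a safety factor.
double requiredMagneticForce(double robotMassKg, int numWheels,
                             double frictionCoeff, double safetyFactor) {
    const double g = 9.81;  // gravitational acceleration, m/s^2
    double weightPerWheel = robotMassKg * g / numWheels;
    return safetyFactor * weightPerWheel / frictionCoeff;
}
```

In the plugin's world-update callback, a force of at least this magnitude would then be applied to each wheel link along the wheel-to-surface normal (see the Link force methods in the API reference linked above).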
{ "domain": "robotics.stackexchange", "id": 4085, "tags": "gazebo" }
can't find linkage dependency
Question: Hi, I am trying to compile my package but I am getting the following error: /usr/bin/ld: cannot find -lgripper_click_ui collect2: ld returned 1 exit status In my manifest I have the following two dependencies, which are possibly giving me a hard time, but they should be pulling in the missing library: <depend package="pr2_gripper_click"/> <depend package="pr2_pick_and_place_demos"/> This error occurs when I run make or rosmake. I am using diamondback. Any clues? Many thanks. Originally posted by Yianni on ROS Answers with karma: 123 on 2011-12-21 Post score: 0 Original comments Comment by ahendrix on 2011-12-30: How have you installed the pr2_object_manipulation stack? Which version of linux are you using? Answer: The gripper_click_ui library should be in the pr2_gripper_click package. Try running 'rosmake pr2_gripper_click' Originally posted by ahendrix with karma: 47576 on 2011-12-30 This answer was ACCEPTED on the original site Post score: 0
{ "domain": "robotics.stackexchange", "id": 7704, "tags": "ros" }
What are Grassmann (even/odd) numbers used in superalgebras?
Question: Are Grassmann numbers a concept of graded Lie algebras, or are they something specific to superalgebras? What are they (i.e.: how are they defined, important properties, etc.)? Is there a reasonable introduction to them? I think what makes me wonder a little is this: since there does not seem to be a sensible constructivist approach to these entities (other than accepting them as the entities that satisfy the required properties), is there nothing that stops someone from going on to 'construct' meta-superalgebras by defining 'numbers' $\kappa_{i}$, such that, e.g., $$\kappa_{i} \kappa_{j} = \theta_{k} \quad (\leftarrow \text{Grassmann odd}),$$ $$\kappa_{i} \kappa_{j}\kappa_{m} \kappa_{n} = \theta_{p}\quad (\leftarrow \text{Grassmann even}).$$ So I define such numbers as 'square roots' of Grassmann $a$-numbers. It seems nothing stops this process ad infinitum. Maybe there is some property I'm missing that will allow the algebra to be closed, but I don't know what that could be. Btw, I think this is a great reference Phys.SE question regarding this topic: "Velvet way" to Grassmann numbers. Answer: Here I will just make a couple of general remarks. 1) Graded algebras usually refer to $\mathbb{Z}$- or $\mathbb{N}$-graded algebras, while superalgebras are $\mathbb{Z}_2$-graded algebras. 2) Grassmann numbers are oddly graded supernumbers. Please click on the links for further information, important properties and references. References: 1) Bryce De Witt, "Supermanifolds", Cambridge Univ. Press, 1992. 2) Deligne, Pierre and John W. Morgan, "Notes on Supersymmetry (following Joseph Bernstein)". Quantum Fields and Strings: A Course for Mathematicians (1999). American Mathematical Society. pp. 41–97. ISBN 0-8218-2012-5. Concerning v3 of the question: the $\kappa_i$'s correspond to a $\mathbb{Z}_4$ grading, and there are indeed research works in that direction. 
However, many properties of numbers and supernumbers do not generalize easily to $\mathbb{Z}_n$-grading with $n>2$. For instance, I think that already Berezin showed that it is not possible to define a useful notion of (Berezin) determinant of matrices with $\mathbb{Z}_n$-graded entries if $n>2$.
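To make the "oddly graded supernumbers" in remark 2 concrete, the standard defining relations (as in DeWitt's book cited above) can be summarized in a few lines:

```latex
% Anticommuting (Grassmann-odd) generators; nilpotency follows at once:
\theta_i \theta_j = -\,\theta_j \theta_i
\quad\Longrightarrow\quad
\theta_i^2 = 0 .
% A general supernumber is a formal series in the generators:
z \;=\; z_B \;+\; \sum_i c_i\,\theta_i
\;+\; \frac{1}{2!}\sum_{i,j} c_{ij}\,\theta_i\theta_j \;+\;\cdots
% Terms with an even number of generators commute with everything
% (Grassmann even); terms with an odd number mutually anticommute
% (Grassmann odd).
```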
{ "domain": "physics.stackexchange", "id": 1012, "tags": "mathematical-physics, supersymmetry, fermions, grassmann-numbers, superalgebra" }
Project Euler #12 - first triangle number with more than 500 divisors
Question: I tried to solve Project Euler 12 with a first very naive solution by myself. It took nearly 30 minutes to run until it found the solution. Then I made a change in the function getDivisorCount which should have reduced the run time to about the square root of the original, roughly 5 minutes. At least that was my expectation, because the complexity should have changed from \$O(n^2)\$ to \$O(n\sqrt n)\$. But it went down to less than a second, which surprised me, and I could not find a reason. Here is my code for review, a second time:

#include <iostream>
#include <math.h>
using namespace std;

int getDivisorCount(unsigned int number)
{
    unsigned int count = 0;
    unsigned int sqrt_ = sqrt(number);
    for (unsigned int i = 1; i <= sqrt_; i++) {
        if ((number % i) == 0)
            count += 2;
    }
    if (sqrt_ * sqrt_ == number) {
        count--;
    }
    return count;
}

int main()
{
    unsigned int number = 0;
    for (unsigned int i = 1; ; i++) {
        number += i;
        if (getDivisorCount(number) > 500)
            break;
    }
    cout << number;
    return 0;
}

Note that my first version used the method:

int getDivisorCount(unsigned int number)
{
    unsigned int count = 0;
    for (unsigned int i = 1; i <= number; i++) {
        if ((number % i) == 0)
            count++;
    }
    return count;
}

Both compiled with g++ -O2. I'm looking for a fresh code review, but an explanation of why the current version is so much faster would also be appreciated. Answer: Brythan has given a good review of the code, but I am going to disagree with his analysis of the complexity.... or, at least part of his analysis. 'Complexity' is an indication of how the algorithm scales with respect to 'size': how much additional time is required to compute a solution if the input data is X times larger. An algorithm with \$O(n)\$ time complexity, that runs in T seconds with X data, will require 2T seconds to run with 2X data. How does this relate to your problem? Well, it doesn't. Not at all. Your inputs are not changing at all. 
The solution to the problem "the first triangle number with more than 500 divisors" results in a number somewhat larger than 75,000,000. That is somewhere after the 12-thousandth triangle number. So, in this case, you are looping through about 12,000 times, and that's the same regardless of whether you use your old or your new getDivisorCount method. The question is, why is the new one so much faster? Well, that's simple.... As your numbers become reasonably large, say around 70,000,000, your old loop is going to iterate 70,000,000 times. In the new code, you are going to loop fewer than 8,500 times. Now, that is..... about 10,000 times faster. If you take the getDivisorCount method itself as an isolated system, the performance complexity of the original code was \$O(n)\$, where n is the input number. For the new algorithm, the complexity is \$O(\sqrt{n})\$. Since n is, for the most part, a very large number, the difference between \$n\$ and \$\sqrt{n}\$ is huge. It does not surprise me that the second solution is thousands of times faster for the large values you are factoring.
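To see the agreement and the iteration-count gap concretely, here is a small self-contained sketch of the two strategies (hypothetical helper names, not the original code):

```cpp
#include <cassert>
#include <cmath>

// Naive: O(n) trial divisions per call.
unsigned naiveDivisorCount(unsigned n) {
    unsigned count = 0;
    for (unsigned i = 1; i <= n; i++)
        if (n % i == 0) count++;
    return count;
}

// Improved: O(sqrt(n)) trial divisions, counting each divisor d <= sqrt(n)
// together with its partner n / d.
unsigned fastDivisorCount(unsigned n) {
    unsigned count = 0;
    unsigned root = static_cast<unsigned>(std::sqrt(static_cast<double>(n)));
    for (unsigned i = 1; i <= root; i++)
        if (n % i == 0) count += 2;
    if (root * root == n) count--;  // a perfect square's root was counted twice
    return count;
}
```

For n near 70,000,000 the first loop runs n times while the second runs only about 8,400 times, which is exactly the ratio discussed above.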
{ "domain": "codereview.stackexchange", "id": 23258, "tags": "c++, optimization, algorithm, programming-challenge" }
Why are bacteria immune to snake poisons?
Question: In a test I was asked why bacteria are insensitive to snake toxins. Is it their membrane that provides a barrier to the toxins? Or do snake poisons have specific targets and thus cannot bind to bacteria? Answer: Short answer Many snake poisons target specific proteins not present in unicellular organisms. Background The question is admittedly broad, but the idea behind it is pretty much what you indicate in your post - many venoms target specific proteins and do not simply destroy their target by, e.g., disrupting gross cellular structure (like alcohol does, for example). Instead, they target specific molecules that are essential for the survival of their prey. Snake toxins can be categorized according to the organ systems they target, namely:
- the central nervous system
- the cardiovascular system
- the muscular system
- the vascular system
Central nervous system toxins are found in elapid snakes like cobras, kraits and the taipan. Typical targets are the nicotinic acetylcholine receptor and the muscarinic acetylcholine receptor. Blockade of these receptors at the neuromuscular junction results in death by asphyxiation. Acetylcholine receptors are not present in bacteria. Cardiovascular toxins are pretty diverse and include things like angiotensin-converting enzyme inhibitors (leading to a drop in blood pressure) and proteins that bind glycosaminoglycans (the sulphated carbohydrate moieties that occur abundantly in cells of cardiovascular tissues), leading to cardiotoxicity. Again, the targets are specific molecules involved in heart function and hormones, stuff not present in bacteria. Muscular toxins include those that bind specifically to the sarcoplasmic reticulum of muscles or interfere with specific second-messenger systems, messing up muscular function. Again, quite specific targets not present in bacteria. 
Lastly, typical vascular system toxins include anti-coagulants such as protein C activators and inhibitors of prothrombin complex formation. Again, specific targets. Reference: Koh et al., Cell. Mol. Life Sci. (2006); 63: 3030–3041.
{ "domain": "biology.stackexchange", "id": 3962, "tags": "toxicology, poison" }
Help operating with differentials in Geometric Algebra
Question: I'm trying to learn General Relativity with Geometric Algebra by following the article Spacetime Geometry with Geometric Calculus, but I'm finding some algebraic problems when dealing with expressions like the following, eq. 96: $$\partial_a \cdot (\dot{D} \wedge \dot{R}(a \wedge b))= \dot{R}(\dot{D} \wedge b) - D \wedge R(b) = 0$$ $D$ being the covariant derivative operator and $R(a \wedge b)$ being the Riemann tensor. How does the $D$ operator enter into the argument of the Riemann tensor? And why is there a minus sign for the Ricci tensor term? I have read through Clifford Algebra to Geometric Calculus and Geometric Algebra for physicists but I haven't found proper explanations about how these operations are performed. Answer: You're asking two questions about how to contract the Bianchi identity using geometric algebra. The question about where the minus sign comes from is an issue with how to distribute product operators in geometric algebra. The question about how $D$ winds up inside of $R$'s argument is an issue with a specific application of geometric calculus. I'll address the two issues in general first, and then apply the results to this specific problem. First, the geometric algebra issue: It's surprisingly hard to find the identities for how inner and outer products distribute over other products, when not all of the multivectors involved are vectors. Clifford Algebra to Geometric Calculus does have these identities, though, on page 12. We'll only need one of these identities, but I'll just list them all here to make them easier to find for other people who may have questions about how to distribute among the various products in geometric algebra. 
For $a$ a vector, $A_r$ a homogeneous multivector of grade $r$ and $B$ a multivector, the inner and outer products distribute over the geometric product as $$ \begin{aligned} a\cdot(A_r B)&=(a\cdot A_r) B+(-1)^r A_r (a\cdot B)\\ &=(a\wedge A_r) B-(-1)^r A_r (a\wedge B) \end{aligned} $$ $$ \begin{aligned} a\wedge(A_r B)&=(a\wedge A_r) B-(-1)^r A_r (a\cdot B)\\ &=(a\cdot A_r) B+(-1)^r A_r (a\wedge B)\ . \end{aligned} $$ For $B_s$ a homogeneous multivector, we can perform grade projection on the above identities to get $$ a\cdot(A_r\wedge B_s)=(a\cdot A_r)\wedge B_s+(-1)^r A_r \wedge(a\cdot B_s) $$ $$ a\wedge(A_r\cdot B_s)=(a\cdot A_r)\cdot B_s+(-1)^r A_r \cdot(a\wedge B_s)\ . $$ Next, the geometric calculus issue: First, note that both the vector derivative and the covariant derivative are vector operators, which can be treated algebraically just like any other vector. This is perhaps easiest to see if you express things in terms of components. Split a vector $a$ into components in the $\{e_i\}$ frame as $a=a^i e_i$. For the vector derivative, if a term contains $\dot{\partial}_a$ somewhere and $\dot{F}(a)$ somewhere, you can replace the $\dot{\partial}_a$ with the vector $e^i$, and replace $\dot{F}(a)$ with $\frac{\partial}{\partial a^i}F(a)$. Something similar still holds with the covariant derivative, even though the covariant derivative involves a projection onto the tangent space. With $\{e_i\}$ now a basis for the tangent space, if a term includes $\dot{D}$ somewhere and $\dot{F}$ somewhere, where $D$ is the covariant derivative, you can replace $\dot{D}$ with $e^i$ and $\dot{F}$ with $(e_i \cdot D)F$. Note that $(e_i\cdot D)$ is a scalar operator, so it commutes with everything and can go anywhere as a factor within the term, although overdots are needed if the $(e_i\cdot D)$ isn't placed right before the $F$. 
If a function $F(a)$ is linear in the vector $a=a^i e_i$, we have that $$\frac{\partial}{\partial a^i}F(a)=\lim_{\epsilon\to 0}\frac{F(a+\epsilon e_i)-F(a)}{\epsilon}=\lim_{\epsilon\to 0}\frac{F(\epsilon e_i)}{\epsilon}=F(e_i)\ .$$ This means that for any vector $v$, $$\begin{aligned} (\dot{\partial}_a\cdot v)\dot{F}(a)&=(e^i\cdot v)\frac{\partial}{\partial a^i}F(a)\\ &=v^i\frac{\partial}{\partial a^i}F(a)\\ &=v^i F(e_i)\\ &=F(v^i e_i)\\ &=F(v)\ , \end{aligned}$$ where in the next-to-last line we again relied on $F$ being linear. We're now prepared to directly address the problem in question. We have $$\begin{aligned} 0&=\partial_a\cdot(\dot{D}\wedge\dot{R}(a\wedge b))\\ &=(\partial_a\cdot \dot{D})\dot{R}(a\wedge b)-\dot{D} \wedge(\partial_a\cdot R(a\wedge b))\\ &=\dot{R}(\dot{D}\wedge b)-\dot{D} \wedge(\partial_a\cdot R(a\wedge b))\\ &=\dot{R}(\dot{D}\wedge b)-\dot{D} \wedge R(b)\ . \end{aligned}$$ In this equation, Line 1 is contracting $\partial_a$ with the Bianchi identity. Line 2 uses the identity above for distributing the inner product over the outer product, with $a\to\partial_a$, $A_r\to D$ and $B_s\to R(a\wedge b)$. Note that $r=1$ and $s=2$. The wedge product goes away because the dot product just results in a scalar, and $1\wedge B_s=B_s$. Line 3 uses the calculus result we derived, with $v\to D$ and $F(a)\to R(a\wedge b)$. Line 4 uses the definition of the Ricci tensor, equation 83 in the paper.
{ "domain": "physics.stackexchange", "id": 98104, "tags": "general-relativity, differential-geometry, clifford-algebra" }
Single Sentence definition of Lorentz Transform
Question: Dear fellow physics lovers, After having spent some significant time trying to understand the Lorentz Transformations, the following is the simplest and most complete single-sentence definition of them that I have been able to think of. A Lorentz Transform (also known as a Lorentz Boost) is a Set of Linear Equations that Transform the Perceived Position $\vec{r}$ of an Object O and Perceived Elapsed Time from the Point of View of a Static Observer A to the Point of View of a Non-Static Observer B which is moving at a relative Constant Velocity $\vec{v}$ with respect to Observer A after a certain Elapsed Time t. Please let me know whether the above qualifies as a correct single-sentence definition of the Lorentz Transform. Please correct me if I am wrong. I will appreciate it if someone can provide a simpler / more appropriate single-sentence definition that does not involve usage of primed and unprimed coordinates and/or variables and also does not use the words spacetime and frame. The goal is to update the definition of the Lorentz Transformation in Wikipedia with this single sentence. I think it will be useful to many people who have just begun to learn about Special Relativity/Lorentz Transforms from the Internet. Answer: You are almost correct. But A Lorentz Transform (also known as a Lorentz Boost) ... this statement of yours is not correct. A Lorentz transformation is not exactly the same as a Lorentz boost. Even if two different coordinate systems are such that one is just a rotation of the other and each is at rest with respect to the other, their transformation is still considered a Lorentz transformation. A Lorentz boost is a rotation-free Lorentz transformation. ... View of a Static Observer A to the Point of View of a Non-Static Observer B which is moving at a relative Constant Velocity $\vec{v}$ with respect to Observer A after a certain Elapsed Time t. Even this statement is not completely correct. You have to replace observer with coordinate system. 
A coordinate system is a group of imaginary clocks that are synchronized. If you consider a point observer, then the observer is a single clock. The observer does not measure the time we get from the transformations; he actually measures the time after accounting for the time taken for light to reach him from the position where the event happened. Also, you are specifying a static observer and a non-static observer. If you are saying this with respect to one of the observers, it is correct, but if you are assuming one of them is at absolute rest then that is wrong, as it violates the first postulate of Special Relativity. Also, don't update the definitions on Wikipedia; they are technically more correct. Edit: (Some points from comments) Every 3D rotation can be expressed as a Lorentz transformation. Also, center inversion and mirror inversions are Lorentz transformations. If the velocity of A with respect to B is v, then the difference in the velocities of A and B with respect to C won't have magnitude v in general. This is only possible in some cases, such as when A and B are moving in the same direction with respect to C. In general, it is not true. The reason is a consequence of the fact that the composition of consecutive non-parallel Lorentz boosts is not a Lorentz boost: it is actually a product of a pure rotation and a pure Lorentz boost. A good reference where these are properly discussed is https://www.google.co.in/books/edition/Special_Relativity/U3fADwAAQBAJ?hl=en&gbpv=1&printsec=frontcover
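The last point - that two non-parallel boosts compose to a boost times a rotation - can be checked numerically. A pure boost is represented by a symmetric matrix, so in the sketch below (helper names are my own; units with c = 1) the product of an x-boost and a y-boost still preserves the Minkowski metric, i.e. it is a Lorentz transformation, but it is no longer symmetric, so it cannot be a pure boost:

```cpp
#include <array>
#include <cassert>
#include <cmath>

using Mat = std::array<std::array<double, 4>, 4>;

// 4x4 matrix product.
Mat mul(const Mat& a, const Mat& b) {
    Mat c{};
    for (int i = 0; i < 4; i++)
        for (int j = 0; j < 4; j++)
            for (int k = 0; k < 4; k++)
                c[i][j] += a[i][k] * b[k][j];
    return c;
}

// Pure boost along axis (1 = x, 2 = y) with speed beta, in units where c = 1.
Mat boost(int axis, double beta) {
    double gamma = 1.0 / std::sqrt(1.0 - beta * beta);
    Mat m{};
    for (int i = 0; i < 4; i++) m[i][i] = 1.0;
    m[0][0] = m[axis][axis] = gamma;
    m[0][axis] = m[axis][0] = -gamma * beta;
    return m;
}

// A pure boost is a symmetric matrix; a boost times a rotation is not.
bool isSymmetric(const Mat& m, double eps = 1e-12) {
    for (int i = 0; i < 4; i++)
        for (int j = 0; j < 4; j++)
            if (std::fabs(m[i][j] - m[j][i]) > eps) return false;
    return true;
}

// Every Lorentz transformation L preserves the Minkowski metric
// eta = diag(1, -1, -1, -1):  L^T eta L = eta.
bool preservesMetric(const Mat& l, double eps = 1e-9) {
    Mat eta{};
    eta[0][0] = 1.0;
    eta[1][1] = eta[2][2] = eta[3][3] = -1.0;
    Mat lt{};
    for (int i = 0; i < 4; i++)
        for (int j = 0; j < 4; j++) lt[i][j] = l[j][i];
    Mat res = mul(mul(lt, eta), l);
    for (int i = 0; i < 4; i++)
        for (int j = 0; j < 4; j++)
            if (std::fabs(res[i][j] - eta[i][j]) > eps) return false;
    return true;
}
```

The antisymmetric part of the product is precisely the rotation factor mentioned above.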
{ "domain": "physics.stackexchange", "id": 72669, "tags": "special-relativity, inertial-frames, definition" }
Why do nitro groups confer explosive tendencies?
Question: When suggesting nitration of an aromatic compound in the synthesis of some organic molecule, it was raised that this route should be avoided to prevent things from going "ka boom." An explanation was not forthcoming. So, why do nitro groups tend to make organic molecules explosive? Is it because the $\ce{NO2}$ group really "wants" to be $\ce{N2}$, since a) diatomic nitrogen has a super-high bond strength and b) diatomic nitrogen is usually a gas, and therefore conversion to diatomic nitrogen would be entropically favorable? Is it because carbon's most oxidized state is also a gas and the $\ce{C=O}$ bond isn't a wimp either - hence the reason many organic compounds are flammable? Also, what happens first in an explosion? Does carbon become oxidized first, which then provides the activation energy necessary to decompose the nitro group? I ask this because organic carbon compounds that are flammable by themselves - i.e. toluene - aren't considered explosive, but trinitrotoluene is an explosive. Answer: Explosives chemistry is a rather complex topic. I've heard that this book is a good source of information about it (I haven't read it). In a nutshell, your intuition about the nitro group is accurate. Formation of $\ce{N2}$ is highly energetically favourable. To get an explosive, what we need is a rapid reaction that produces a lot of heat and gas, both to cause the obvious effects of an explosion and to propagate the reaction to other molecules of explosive. The rapid part is what separates something like TNT from toluene. The combustion of toluene is also energetically favourable, but in TNT the creation of $\ce{N2}$ does not depend on mass transport of any other species and can thus happen very rapidly, whereas the combustion of toluene is limited by how quickly oxygen is transported to it. If one were to vaporize toluene in the correct concentration in air, this transport problem goes away and an explosion can occur. 
TNT also has an advantage in this regard because the nitro groups provide a source of oxygen to react with the carbon and nitrogen remaining (not enough for all of it, but it helps). Explosives are often mixed with fuels or oxidizing agents to produce a more oxygen-balanced mixture for a more efficient explosion. As for what happens first in an explosion, many different reactions can occur during an explosion, but it may be helpful to consider what it takes to actually detonate TNT and think about the timescale of the reactions. TNT is a solid at room temperature and has a flash point of 163 °C, making it difficult to even ignite, and while it will burn in a fire, there is no risk of explosion. For an explosion to occur, enough gas and heat has to be produced to propagate the reaction through the bulk of the material. In practice this is done using a much more sensitive explosive (other explosives like lead azide or nitroglycerine are unstable enough to be set off by heat or pressure) to produce a small shockwave that provides the activation energy to initiate a reaction as it travels through the explosive, which then sustains the shockwave through the rest of the material. In a normal explosive (not a fuel-air explosive or the like), the reactions that contribute to the bulk of the explosion are limited to what the explosive is made of, because the speed of the explosion is too fast for air to play much of a role initially. In the case of TNT, the experimentally measured time it takes for a shockwave to pass through is 100–200 fs (no idea how one measures that), so any oxidation of the carbons seems unlikely to contribute much to the initial explosion, given that the only readily available source of oxygen is from the nitro groups, which must presumably decompose first. 
This group proposed a few decomposition pathways for TNT, including homolytic cleavage of the $\ce{C-NO2}$ bond, rearrangement from $\ce{C-NO2}$ to $\ce{C-ONO}$ followed by homolytic $\ce{O-NO}$ cleavage, and attack by an adjacent nitro group on a $\ce{C-H}$ bond of the methyl ring substituent, but they found that only the first was fast enough to occur during detonation, the others being possible only for lower-temperature thermal decomposition. This initial decomposition step is the only thing fast enough to contribute to the shockwave that sets off the rest of the TNT, while the reactions that produce the final products occur (relatively) long after the initial blast has initiated the rest of the explosive.
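The "oxygen-balanced mixture" idea above can be made quantitative with the standard oxygen-balance formula for a compound $\ce{C_xH_yN_nO_z}$ (this little sketch and its function name are my own illustration, not part of the answer); TNT comes out strongly negative, i.e. oxygen-deficient, which is why it benefits from added oxidizer:

```cpp
#include <cassert>
#include <cmath>

// Oxygen balance (%) of C_x H_y N_n O_z: the oxygen surplus or deficit,
// per 100 g of compound, for complete combustion to CO2 and H2O
// (nitrogen is taken to leave as N2, so it consumes no oxygen).
double oxygenBalance(int x, int y, int n, int z) {
    double molarMass = 12.011 * x + 1.008 * y + 14.007 * n + 15.999 * z;
    return -1600.0 / molarMass * (2.0 * x + y / 2.0 - z);
}
```

For TNT ($\ce{C7H5N3O6}$) this gives about -74%, while nitroglycerine ($\ce{C3H5N3O9}$) comes out slightly oxygen-positive.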
{ "domain": "chemistry.stackexchange", "id": 2517, "tags": "organic-chemistry, explosives, nitro-compounds" }
Clustering by using Locality sensitive hashing *after* Random projection
Question: It is well known that Random Projection (RP) is tightly linked to Locality-Sensitive Hashing (LSH). My goal is to cluster a large number of points lying in a $d$-dimensional Euclidean space, where $d$ is very large. Questions: Does it make sense to cluster the points via LSH after having first reduced the dimensionality of their input space using RP? Why or why not? Is there any redundancy in the combined use of RP as a dimensionality reduction method before LSH as a clustering method? Answer: I think the following is the way to look at your question. RP reduces dimensionality based on distance. LSH clusters data based on a distance method similar to the one used in RP. The primary function of any dimensionality reduction algorithm is to project data into a space that maximizes signal and reduces noise. So when you perform RP, you now have data that is representative of the actual signal with less garbage. In such cases, if you plug this data into another algorithm that clusters data, in practice the result should be better. This is because you have now applied a transform to maximize the distance between the data points, and when you feed this data to another algorithm whose job is to put them into different buckets, it is much easier. You have already increased the variance among data points, and any clustering method will work well and easily with that data. Therefore it makes sense to cluster points via LSH after performing RP (even though they are related), and there is no theoretical redundancy in such a method.
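To show how the two stages fit together mechanically, here is a toy C++ sketch (all names are mine, and real pipelines would use tuned libraries): a seeded Gaussian matrix acts as the RP map, and a second Gaussian matrix supplies the random hyperplanes for a sign-based LSH. Both stages are built from the same kind of random inner products, which is exactly the relatedness the question asks about:

```cpp
#include <cassert>
#include <cstdint>
#include <random>
#include <vector>

// A k x d matrix of i.i.d. standard Gaussians (seeded for reproducibility).
std::vector<std::vector<double>> gaussianMatrix(int k, int d, unsigned seed) {
    std::mt19937 gen(seed);
    std::normal_distribution<double> g(0.0, 1.0);
    std::vector<std::vector<double>> m(k, std::vector<double>(d));
    for (auto& row : m)
        for (auto& x : row) x = g(gen);
    return m;
}

// Random projection: multiply the d-dim point down to k dimensions.
std::vector<double> project(const std::vector<std::vector<double>>& m,
                            const std::vector<double>& v) {
    std::vector<double> out(m.size(), 0.0);
    for (std::size_t i = 0; i < m.size(); i++)
        for (std::size_t j = 0; j < v.size(); j++)
            out[i] += m[i][j] * v[j];
    return out;
}

// Sign-of-projection LSH: one bucket bit per random hyperplane.
std::uint32_t signHash(const std::vector<std::vector<double>>& planes,
                       const std::vector<double>& v) {
    std::uint32_t h = 0;
    for (std::size_t i = 0; i < planes.size(); i++) {
        double dot = 0.0;
        for (std::size_t j = 0; j < v.size(); j++) dot += planes[i][j] * v[j];
        if (dot >= 0.0) h |= (1u << i);
    }
    return h;
}

// Demo: because both stages are linear, positively scaling a point
// changes neither the projection's signs nor the resulting bucket.
bool sameBucketAfterScaling() {
    std::vector<double> v = {1.0, -2.0, 0.5, 3.0, -1.5, 0.25, 2.0, -0.75};
    auto rp = gaussianMatrix(4, 8, 42);     // RP: 8 dims -> 4 dims
    auto planes = gaussianMatrix(3, 4, 7);  // LSH: 3 hyperplanes in 4 dims
    std::vector<double> w = v;
    for (auto& x : w) x *= 2.0;
    return signHash(planes, project(rp, v)) == signHash(planes, project(rp, w));
}
```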
{ "domain": "ai.stackexchange", "id": 3525, "tags": "machine-learning, clustering, dimensionality-reduction, randomness, k-nearest-neighbors" }
Different locations of a pump in a tube
Question: I'm confused about pumps in fluid dynamics. As far as I understand, the basic effect of a pump which delivers a power $\mathcal{P}$ can be described with the modified Bernoulli equation between a point $A$ before the pump and a point $B$ after the pump. $$(p_A+\frac{1}{2} \rho v_A^2 +\rho g h_A)\cdot Q +\mathcal{P}=( p_B +\frac{1}{2} \rho v_B^2 +\rho g h_B)\cdot Q=\mathrm{constant}\tag{1}$$ Now my specific problem is: does it really matter where the pump is located inside the tube? In the picture the pump is located at height $B$, but if it was located at $A$ or $C$, would something change? That is, would the fluid have a different speed at the top when it flows out of the tube? My answer would be no, since I can place the pump in $B$, but I can also use the Bernoulli equation between $A$ and $B$, which says that the $\mathrm{constant}$ in the equation is the same for $A$ and $B$, so the situation in the picture is equivalent to one with the pump in $A$. So if this is true I can use $(1)$ between any point before the pump and any point after the pump, regardless of the distance from the pump itself. Is this reasoning correct? Answer: In the picture the pump is located at height $B$, but if it was located at $A$ or $C$, would something change? That is, would the fluid have a different speed at the top when it flows out of the tube? In a nutshell: no. Bernoulli's equation, between the points $1$ and $2$, is as follows, where $p$ is the pressure supplied by the pump: $$p_1+\frac{1}{2} \rho v_1^2 +\rho g h_1+p=p_2 +\frac{1}{2} \rho v_2^2 +\rho g h_2$$ Now it is important to understand the suffixes $1$ and $2$. At point $1$ (the surface of the lower tank), $p_1=p_0$, where $p_0$ is atmospheric pressure. Similarly, assuming the point $2$ opens to the air, then $p_2=p_0$. In addition, if we assume the bottom tank's surface area is much larger than the cross-section of the pipe, then $v_1 \ll v_2$. 
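Written out, the cancellations go as follows (using $p_1 = p_2 = p_0$ and neglecting $v_1$):

```latex
p_0 + \rho g h_1 + p \;=\; p_0 + \tfrac{1}{2}\rho v_2^2 + \rho g h_2
\quad\Longrightarrow\quad
\tfrac{1}{2}\rho v_2^2 \;=\; p - \rho g\,(h_2 - h_1)
\quad\Longrightarrow\quad
v_2 \;=\; \sqrt{2\Big[\frac{p}{\rho} - g\,(h_2 - h_1)\Big]} .
```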
After minimal reworking, the equation then simplifies to: $$v_2\approx \sqrt{2\big[\frac{p}{\rho}-g(h_2-h_1)\big]}$$ So the placement of the pump is immaterial; only the pressure it delivers and the difference in height between the points $1$ and $2$ matter. The equation doesn't depend on the distances $|AB|$ or $|BC|$ at all.
{ "domain": "physics.stackexchange", "id": 31658, "tags": "homework-and-exercises, fluid-dynamics, power, bernoulli-equation" }
Can an object rotate about two axes at once?
Question: I recall from a course on classical dynamics that angular velocity is a 3-dimensional vector, and angular velocity can be added and subtracted. From this, my understanding is that an object can "rotate about two axes at once" only in the sense that we can add up the two angular velocity vectors and this defines a perfectly sensible rotation. But the final rotation can also be described as a single vector (the sum of the original two) and is thus just rotation about a single axis. Basically, rotation that cannot be expressed as rotation about a single axis cannot exist. But in this video of a sphere rotating around two axes, it doesn't seem to do this: I can't see any point that holds still relative to the camera (which would also be still relative to the center of the sphere). I don't think I'm missing such a point, either: The gears sweep over all points on the sphere, and none of the gear teeth are stationary. Further, the hairy ball theorem seems to imply that there must always be a point at zero velocity, at least instantaneously. But maybe that point of zero velocity is moving around constantly, thanks to some acceleration. Without external forces, parts of the sphere can accelerate thanks to internal forces (that's how it holds its shape while it rotates!), so is it possible that there is some internal force causing such an acceleration? Or is my understanding of angular velocity incorrect, or is there some hidden force that is implicitly acting on the sphere in the animation? Answer: But maybe that point of zero velocity is moving around constantly, thanks to some acceleration. There is always an instantaneous axis of rotation for any rigid body, but that doesn't mean that that rotation axis is fixed. This means that (as you suspect) the "point of zero velocity" is moving around constantly. We can actually write out $\vec{\omega}$ as a function of time by recalling that angular velocity vectors are additive between rotating reference frames. 
We can imagine going to an intermediate frame related to the space frame by an angular velocity $\omega_1 \hat{z}$. In this frame, the angular velocity of the body is something like $\omega_2 \hat{x}'$, where $\hat{x}'$ is the $x$-axis of the intermediate frame. So the overall angular velocity vector is something like $$ \vec{\omega} = \omega_1 \hat{z} + \omega_2 \hat{x}' = \omega_1 \hat{z} + \omega_2 (\cos (\omega_1 t) \hat{x} + \sin(\omega_1 t) \hat{y}), $$ since $\hat{x}' = \cos (\omega_1 t) \hat{x} + \sin(\omega_1 t) \hat{y}$. At any time, the point on the surface of the sphere along this direction is instantaneously at rest; but that "instantaneous rest point" is constantly shifting, both in space and relative to the body. In fact, it is well-known that a rigid body will generally not have a constant angular velocity $\vec{\omega}$, even in the absence of torques. In the body frame, the angular velocity vector is governed by Euler's equations, and unless $\vec{\omega}$ is aligned with one of the principal axes of the body, we will generally have $\dot{\vec{\omega}} \neq 0$. In the space frame, $\vec{L}$ is fixed in the absence of torques (not $\vec{\omega}$), so we generally have $\dot{\vec{\omega}} \neq 0$ in this frame as well. This is most readily seen in the case of a symmetric top (with two principal moments equal). In the body frame, the symmetry axis is fixed and $\vec{\omega}$ and $\vec{L}$ precess around it at a uniform rate. In the space frame, $\vec{L}$ is fixed, and $\vec{\omega}$ and the symmetry axis precess around it at a uniform rate. In neither frame is $\vec{\omega}$ fixed. That said, you also ask [I]s there some hidden force that is implicitly acting on the sphere in the animation? and the answer to that is "yes, probably". If the body is a perfect sphere, then all the principal moments are the same, and so $\vec{L} = I \vec{\omega}$ for some scalar $I$. 
This means that $\dot{\vec{\omega}} \neq 0$ (as in the example in the animation) implies that $\dot{\vec{L}} \neq 0$, which would imply that some non-zero torque would be necessary to maintain this rotation. More generally, if one knows the angular velocity in the body frame, one can use Euler's equations to calculate the torque necessary to maintain that rotation (as seen in the body frame).
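As a quick numeric illustration of the formula above (my own sketch, not part of the original answer): the angular velocity keeps a constant magnitude $\sqrt{\omega_1^2+\omega_2^2}$, but its direction — and hence the instantaneous rest point on the sphere — moves continuously.

```python
import numpy as np

def omega(t, w1, w2):
    """Overall angular velocity from the formula above:
    omega(t) = w1*z_hat + w2*(cos(w1 t) x_hat + sin(w1 t) y_hat)."""
    return np.array([w2 * np.cos(w1 * t), w2 * np.sin(w1 * t), w1])

w1, w2 = 2.0, 1.0
ts = np.linspace(0.0, 3.0, 7)
vecs = np.array([omega(t, w1, w2) for t in ts])
mags = np.linalg.norm(vecs, axis=1)

# |omega| is constant in time, but the direction of omega (the
# instantaneous rotation axis) keeps changing in the space frame,
# so no single point on the sphere stays at rest.
```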
{ "domain": "physics.stackexchange", "id": 74540, "tags": "rotational-dynamics, angular-momentum, reference-frames, rigid-body-dynamics" }
Are any advanced Reynolds-averaged fluid models used in astrophysics?
Question: Reynolds-averaged Navier-Stokes equations make it possible to split the description of a turbulent fluid into an averaged (typically laminar) flow on some length and/or time-scale and separate equations for the turbulent fluctuations. The resulting equations look like this $$\rho\bar{u}_j \frac{\partial \bar{u}_i }{\partial x_j} = \rho \bar{f}_i + \frac{\partial}{\partial x_j} \left[ - \bar{p}\delta_{ij} + \mu \left( \frac{\partial \bar{u}_i}{\partial x_j} + \frac{\partial \bar{u}_j}{\partial x_i} \right) - \rho \overline{u_i^\prime u_j^\prime} \right ]$$ Here the bars denote the averaged values, and $\rho \overline{u_i^\prime u_j^\prime}$ is called the Reynolds stress tensor characterizing the influence of the turbulent fluctuations on the mean flow. To actually evaluate the Reynolds stress one usually invokes the Boussinesq hypothesis, which is that you can actually model the stress as a viscous stress tensor with a "turbulent viscosity" $\mu_t$ and an isotropic stress coming from "turbulent kinetic energy" $k$. That is, in Cartesian coordinates $$\rho \overline{u_i^\prime u_j^\prime} = \mu_t \left( \frac{\partial \bar{u}_i}{\partial x_j} + \frac{\partial \bar{u}_j}{\partial x_i} \right) -\frac{2}{3} k \delta_{ij}$$ There are then a number of models for how to compute the quantities $\mu_t$ and $k$. Just for illustration, one of them is the k-epsilon model, where two transport equations for the variables $k,\epsilon$ are solved $$\frac{\partial (\rho k)}{\partial t}+ \frac {\partial (\rho k \bar{u}_i)}{\partial x_i}= f(k, \epsilon, \bar{u}_j, \partial \bar{u}_j/\partial x_k,...)$$ $$\frac{\partial (\rho \epsilon)}{\partial t}+ \frac {\partial (\rho \epsilon \bar{u}_i)}{\partial x_i}= g(k, \epsilon, \bar{u}_j, \partial \bar{u}_j/\partial x_k,...)$$ and the turbulent viscosity is then determined as $\mu_t = \mu_t(k,\epsilon)$. Many other models exist.
Of course, in astrophysics we are talking about plasma dynamics, which is modeled by (radiative) compressible magneto-hydrodynamics. However, this set of equations can be Reynolds-averaged in very much the same way as the pure-fluid equations. The equations of models such as the k-epsilon model would have to be generalized by introducing the production of turbulent kinetic energy due to effects such as the magneto-rotational instability but otherwise the models should work in a similar fashion. Possibly, one would also have to include a model for the turbulent magnetic-field fluctuations in the Maxwell stress $\sim \overline{B_i B_j}$. So now for my question: These Reynolds averaged models seem to have applications only in engineering, but I have never seen them applied in an astrophysical context. Why is this so? I have instead seen a single, very special model, and that is the Shakura-Sunyaev prescription for turbulent viscosity in steady, thin accretion disks: $\mu_t = \alpha \rho \bar{p}$, where $\alpha$ is a constant. However, I do not see any other context than steady, thin disks where this kind of prescription can be useful. Does one perhaps use more sophisticated prescriptions in other astrophysical contexts such as the theory of stellar structure, intergalactic medium, or the solar wind? Answer: Closure models might not be popular in astrophysics but they certainly have been tried for a while. In the context of accretion disks, several people have tried more sophisticated closures compared to the Shakura-Sunyaev prescription, see for example: http://adsabs.harvard.edu/abs/1995PASJ...47..629K http://adsabs.harvard.edu/abs/2003MNRAS.340..969O Stellar convection is another case where closure models have been used: https://arxiv.org/abs/1401.5176 Some of these models do not incorporate the "dynamo" closure - that is the (turbulent) terms responsible for generation and sustenance of magnetic fields. 
For one such attempt that tries to incorporate alpha-Omega dynamo closure for accretion disks, see this: https://academic.oup.com/mnras/article/195/4/881/1746346
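For concreteness, here is a minimal sketch (my own illustration, not from the answer) of how the standard k-epsilon closure mentioned in the question turns the two transported quantities into a turbulent viscosity, $\mu_t = C_\mu \rho k^2/\epsilon$, with the conventional constant $C_\mu \approx 0.09$:

```python
def turbulent_viscosity(rho, k, eps, c_mu=0.09):
    """Standard k-epsilon closure: mu_t = C_mu * rho * k^2 / eps.

    rho: mean density, k: turbulent kinetic energy per unit mass,
    eps: dissipation rate; C_mu = 0.09 is the conventional constant.
    """
    return c_mu * rho * k**2 / eps

# e.g. rho = 1.2 kg/m^3, k = 0.5 m^2/s^2, eps = 0.1 m^2/s^3
mu_t = turbulent_viscosity(1.2, 0.5, 0.1)
```

The astrophysical generalizations discussed above would add source terms (e.g. from the magneto-rotational instability) to the $k$ and $\epsilon$ transport equations, but this algebraic step stays the same.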
{ "domain": "physics.stackexchange", "id": 46931, "tags": "fluid-dynamics, astrophysics, viscosity, turbulence" }
Why does a plane wave leave the position of the particle unspecified?
Question: I'm working through a book on QM; I just started recently and I'm stuck on understanding something. It says that we can describe the state of motion of a particle with an infinite plane wave equation: $\psi(\mathbf{r},t)=Ae^{i(\mathbf{k}\cdot\mathbf{r}-\omega t)} $ It says that it corresponds to a motion with precisely defined momentum. "but having amplitudes $|\psi|=const. \forall r,t $, the infinite harmonic plane waves leave the position of the particle entirely unspecified." What does the word "infinite" aim to imply here? Does it mean we assume the imaginary "waves" extend infinitely in both directions? Also, if so, does that matter? Of course any planar wave has constant amplitude! It's not that they are imposing some mathematical restriction on the wave. Please explain the statement. Answer: So as you say, a planar wave has a constant amplitude. This then implies that it extends all the way to infinity in both directions--there is no edge to it. Of course, this is never physically realized, but it's a good first (or zeroth) approximation in a lot of cases--just like it is in electromagnetism, if you've dealt with that before. The reason this leaves the position unspecified is that the way you would extract information on where the particle is, is to integrate the absolute square $\psi \psi^* = \left| \psi \right|^2$ over a volume. This quantity represents the probability that the particle will be measured in that region. If $\left| \psi \right|$ is constant everywhere in space, this probability is also constant everywhere in space--so we're equally likely to find the particle in any place we look. To do this mathematically, it's usually best to bound it somewhat so you can normalize the wavefunction--say that you have a volume $V$ that the particle can be found in. Still, though, a pure momentum plane-wave state will then just be entirely indeterminate in position in the volume.
This is why people build wave packets by combining many plane waves; then their distribution in space will be nontrivial.
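A small numeric sketch of this point (my own illustration, not from the answer): the plane wave's $|\psi|^2$ is flat everywhere, while a wave packet built from many plane waves has a localized $|\psi|^2$:

```python
import numpy as np

x = np.linspace(-50.0, 50.0, 2001)
k0 = 2.0

# pure momentum state: |psi|^2 is the same at every point,
# so the position is entirely unspecified
plane = np.exp(1j * k0 * x)

# Gaussian wave packet: a superposition of plane waves around k0,
# which localizes the probability density near x = 0
sigma = 2.0
packet = np.exp(1j * k0 * x) * np.exp(-x**2 / (4.0 * sigma**2))

prob_plane = np.abs(plane)**2   # constant
prob_packet = np.abs(packet)**2  # peaked at the center
```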
{ "domain": "physics.stackexchange", "id": 20355, "tags": "quantum-mechanics, wavefunction" }
Mass distribution of a black hole
Question: One of my references states that the moment of inertia of a black hole (as might be deduced from a safe distance outside the event horizon) is I(bh) = mr^2 where r is the radius of the event horizon. For comparison, the moment of inertia of a hollow shell is I(s) = (2/3)mr^2. Now the moment of inertia of a thin hoop is also I(h) = mr^2, which upon initial inspection suggests that the mass of a black hole is not distributed as a thin shell at the location of the event horizon (which was my initial guess) but instead as a hoop. Thus my question is: Does the moment of inertia of a black hole convey anything meaningful to us outside it, about how its mass is actually distributed inside or right at the event horizon? That is, is the hoop equivalence just a meaningless coincidence? Answer: This is just a coincidence. Dimensional analysis immediately tells us that the moment of inertia of a black hole needs to be $I_{\rm bh} = \alpha m r_{s}^2$ for some dimensionless constant $\alpha$. The answer just happens to come out as $\alpha=1$. This does not tell us anything about the distribution of the mass. The quantities that might tell us something about the mass distribution are the gravitational multipole moments. For a Schwarzschild black hole these are all zero except for the monopole. This is incompatible with the mass being concentrated in a ring on the horizon.
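To put numbers on the dimensional-analysis statement (a sketch of my own, not part of the answer): for a solar-mass Schwarzschild black hole the horizon radius is $r_s = 2GM/c^2 \approx 3\,\mathrm{km}$, and the candidate moments of inertia differ only by the dimensionless factor $\alpha$:

```python
G = 6.674e-11       # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8         # speed of light, m/s
M_sun = 1.989e30    # solar mass, kg

r_s = 2.0 * G * M_sun / c**2            # Schwarzschild radius, ~2.95 km
I_bh = M_sun * r_s**2                   # alpha = 1, per the reference
I_shell = (2.0 / 3.0) * M_sun * r_s**2  # thin shell of same mass/radius
```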
{ "domain": "physics.stackexchange", "id": 97874, "tags": "black-holes, mass, event-horizon, moment-of-inertia" }
POS Tagging in R
Question: I would like to do POS tagging on around 8,000 tweets. I have a function and am using data.table to call it on every row. The problem I'm having is that it takes over 1.5 hours to run this chunk of code. Code: options(java.parameters = "- Xmx3000m") library(rJava) library(NLP) library(openNLP) library(data.table,quietly = TRUE) dat[,c("ID"):= .I] dat[,c("POS"):= tagPOS(strip(Tweet)),by = .(ID)] tagPOS = function(x) { s <- as.String(x) sent_token_annotator = Maxent_Sent_Token_Annotator() word_token_annotator = Maxent_Word_Token_Annotator() a2 = annotate(s, list(sent_token_annotator, word_token_annotator)) pos_tag_annotator = Maxent_POS_Tag_Annotator() a3 = annotate(s, pos_tag_annotator, a2) a3w = subset(a3, type == "word") POStags = unlist(lapply(a3w$features, `[[`, "POS")) gc() return(paste(POStags,collapse = " ")) } Answer: The best tool to diagnose slow code is the profiler. Here is how you could run it on a few function calls to see what is slowing down the execution of your code: Rprof(tmp <- tempfile()) for (i in 1:10) tagPOS(strip(dat$Tweet[i])) Rprof() summaryRprof(tmp) unlink(tmp) Supposedly (from our comments), this would show that most of the computation time is spent creating the annotators. Since these are independent of the sole input x of your function, you could save a lot of time by defining them outside your function and passing them as arguments: annotators <- list(sent_token = Maxent_Sent_Token_Annotator(), word_token = Maxent_Word_Token_Annotator(), pos_tag = Maxent_POS_Tag_Annotator()) tagPOS <- function(x, ann = annotators) { s <- as.String(x) a2 <- annotate(s, list(ann$sent_token, ann$word_token)) a3 <- annotate(s, ann$pos_tag, a2) a3w <- subset(a3, type == "word") POStags <- unlist(lapply(a3w$features, `[[`, "POS")) gc() return(paste(POStags,collapse = " ")) }
{ "domain": "codereview.stackexchange", "id": 23688, "tags": "performance, r" }
How does diffraction cause laser beam divergence, and why will a laser beam always diverge, due to diffraction?
Question: I have seen it said that diffraction causes laser beam divergence, or that a laser beam will always diverge, due to diffraction, or some variation of these statements. I understand diffraction in general, and I understand that the phenomenon applies to all waves, so I understand that it would also apply to laser beams; but it is not clear to me how it causes laser beam divergence, or why a laser beam will always diverge, due to diffraction. When trying to research to understand how diffraction causes laser beam divergence, I can't find anything that directly and clearly explains this – most results either just mention diffraction in the context of lasers without providing explanation, or mention 'diffraction-limited beams', which I think is something different to what I'm asking. So how does diffraction cause laser beam divergence, and why will a laser beam always diverge, due to diffraction? Answer: The key point is that a laser beam is a wave which propagates according to Huygens' principle. Once you accept this fact, the divergence follows naturally. Huygens' principle states that the propagation is due to the generation of spherical waves, which will generate spherical waves in the next step of propagation. [Picture taken from wiki] In the image we see that the center of the "hole" generates a "flat" wave. The diffraction is evident only at the edges. In order to capture the behaviour of the "central part" of a wavefront we use approximations and omit the edges to a certain extent. In the upper picture we might describe the central part as a plane wave. If instead we use spherical mirrors to generate a propagating wave, we end up with the Gaussian beam $$ E \propto \exp\left( - \frac{r^2}{w_0^2 (1 + (z/z_R)^2)} \right) $$ If we include the quadratic phase correction for the wavefront and the Gouy phase the approximation improves.
However, the Gaussian beam is always an approximation obtained by omitting the edges of the wave (in deriving it, we use the paraxial Helmholtz equation).
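As a numeric aside (my own sketch, not the answerer's): the Gaussian beam above diverges in the far field with half-angle $\theta \approx \lambda/(\pi w_0)$, so a narrower waist $w_0$ forces a larger divergence — exactly the diffraction trade-off the question asks about:

```python
import numpy as np

def divergence_half_angle(wavelength, w0):
    """Far-field half-angle of a Gaussian beam: theta = lambda / (pi * w0).

    A smaller waist w0 (tighter confinement of the wave) gives a
    larger divergence angle -- the diffraction trade-off.
    """
    return wavelength / (np.pi * w0)

# e.g. a HeNe laser (633 nm) with a 0.5 mm beam waist: ~0.4 mrad
theta = divergence_half_angle(633e-9, 0.5e-3)
```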
{ "domain": "physics.stackexchange", "id": 77879, "tags": "optics, laser, diffraction, laser-cavity" }
Meaning of generalized normal distribution
Question: I asked a version of this question over on Math.SX, and never received a response… perhaps it will be more appropriate here. I'm looking at spectroscopic data (specifically a $T_2$ coherence decay curve of some NMR data). Normally, this data is fit to a single or multi-exponential decay to account for multiple components. However, I have a data set that fits best to a function with a power in the exponent near 1.4 (in between a Gaussian and single exponential decay). Is there any physical meaning for generalized normal distribution functions? To elaborate on what I mean by "physical meaning", when working with spectroscopic absorptions, an exponential decay (n=1) indicates a system with homogeneous broadening of lifetimes, while a Gaussian decay indicates inhomogeneous broadening of lifetimes. What does a power between these two values indicate? Is there a precedent for using this sort of peak shape (or decay function in this case) in spectroscopic analysis? --EDIT-- To demonstrate the phenomenon, here are a couple of sample curves with some data. The depressed points at the start may be an experimental artifact, but I'd still be curious to know if there is any physical precedent for the exponential power between 1 and 2. $e^{-t/T_2}$ $e^{-(t/T_2)^{1.6}}$ Answer: Thanks to @user12262 for pointing me in the direction of the KWW function. After perusing that link and searching SciFinder for stretched and compressed exponential functions in relation to NMR, I ran across this paper (subscription required, sorry). To (briefly) summarize the paper, the compressed exponential function, $e^{-kt^q}$, with $1 < q < 2$ can be represented as a distribution of Gaussian functions with different relaxation rates, $$R_C(t) = \frac{1}{\pi} \int_0^\infty P_C(s; q)\, e^{-(s r^* t)^2}\, \mathrm{d}s,$$ where $R_C(t)$ is the observed decay curve, $P_C(s; q)$ is the probability distribution of Gaussian decays, $r^*$ represents some average value of the rate, and $s = r/r^*$.
As the value of $q$ approaches 2, the distribution function approaches a delta spike (as one would expect). In the case of NMR $T_2$ decays, this most likely represents a distribution of relaxation couplings (e.g. interactions with 1, 2, 3, etc. other nearby spins).
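A quick sketch (my own, not from the answer) of the two fitting functions in the question, showing how the compressed exponential ($q = 1.6$) sits between the mono-exponential ($q = 1$) and Gaussian ($q = 2$) limits:

```python
import numpy as np

def decay(t, T2, q=1.0):
    """Stretched/compressed exponential exp(-(t/T2)^q):
    q = 1 is a single exponential, q = 2 is a Gaussian decay,
    and 1 < q < 2 interpolates between the two."""
    return np.exp(-(t / T2)**q)

t = np.linspace(0.0, 3.0, 301)
mono = decay(t, 1.0, 1.0)
comp = decay(t, 1.0, 1.6)
# the compressed form decays more slowly at short times (t < T2)
# and faster at long times (t > T2) than the mono-exponential;
# the two curves cross at t = T2
```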
{ "domain": "physics.stackexchange", "id": 12179, "tags": "quantum-spin, spectroscopy" }
Measuring the connectedness of a graph, and applying it to NP problems
Question: I'm looking for a way to measure how interconnected a graph is. It's well known that graphs can be broken down into connected components. It seems, though, that even in the cases where the graph is made of only one connected component, we can measure how interconnected that component is. Is it "almost" two components (if we would remove a small number of edges)? What is the correlation between edges (that is, if vertices A and B are each connected to C, is there a higher probability that A and B are themselves connected)? I don't know how to define this measure properly, but I'm sure there are existing measures already defined. It would seem to me that this would be a great way to measure the difficulty of an instance of the SAT problem. Representing variables as nodes and being in the same clause as edges, it would seem the difficulty of the problem is related to the interconnectedness of the graph. Answer: A standard measure of "interconnectedness" is how expanding the graph is. There are several ways to define expansion that are all related. The algebraic way is by measuring the second largest eigenvalue (in absolute value) of the adjacency matrix. The largest eigenvalue of a graph in which all vertices have degree D is always D. If the second largest eigenvalue is much smaller than D, then every two sets of vertices in the graph have roughly as many edges between them as if the edges were chosen at random (this is the expander mixing lemma). For more info about graph expansion, see this course: http://www.math.ias.edu/~boaz/ExpanderCourse/ If the graph is an expander, the entire graph is one big, very interconnected, component. This is the case with random SAT instances that are believed to be hard (for the appropriate clause ratio).
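To make the algebraic measure concrete, here is a small numeric sketch (my own illustration, not from the answer) comparing the second-largest absolute eigenvalue of the adjacency matrix for a cycle and a complete graph on the same number of vertices:

```python
import numpy as np

def second_abs_eigenvalue(adj):
    """Second-largest eigenvalue in absolute value of a symmetric
    adjacency matrix; small values relative to the degree D mean
    good expansion."""
    return np.sort(np.abs(np.linalg.eigvalsh(adj)))[::-1][1]

n = 9  # odd, to avoid the -D eigenvalue that bipartite (even) cycles have

# cycle C_n: degree D = 2, "almost two components" if two edges are cut
cycle = np.zeros((n, n))
for i in range(n):
    cycle[i, (i + 1) % n] = cycle[(i + 1) % n, i] = 1.0

# complete graph K_n: degree D = n - 1, maximally interconnected
complete = np.ones((n, n)) - np.eye(n)

lam_cycle = second_abs_eigenvalue(cycle)        # ~1.88, close to D = 2
lam_complete = second_abs_eigenvalue(complete)  # 1, far below D = 8
```

The cycle's second eigenvalue is close to its degree (a poor expander), while the complete graph's is far below its degree (a good expander).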
{ "domain": "cstheory.stackexchange", "id": 347, "tags": "cc.complexity-theory, graph-theory, sat" }
In an anti-matter universe, would an anti-matter black hole emit anti-matter gamma radiation?
Question: In an anti-matter universe, when an anti-matter black hole is consuming a large anti-matter star, would it emit anti-matter gamma radiation, or would gamma radiation be the same in either a matter or anti-matter universe? Answer: Photons are their own antiparticle, so they are the same in a universe of matter and a universe of antimatter.
{ "domain": "physics.stackexchange", "id": 57995, "tags": "particle-physics, black-holes, antimatter, gamma-rays" }
SkylineProblem in Java
Question: taken from leetcode.com: A city's skyline is the outer contour of the silhouette formed by all the buildings in that city when viewed from a distance. Given the locations and heights of all the buildings, return the skyline formed by these buildings collectively. [...] please review this code. I'm most interested in feedback about OOP-Principles (SOLID, readability, etc.) and second most interested in performance. class Solution public class Solution { public static void main(String[] args) { final int[][] input = {{2,9,10},{3,7,15},{5,12,12},{15,20,10},{19,24,8}}; Solution solution = new Solution(); System.out.println("amount : "+solution.getSkyline(input)); } //this crude method is a HARD REQUIREMENT and may not be changed! public List<List<Integer>> getSkyline(int[][] buildings) { SkyLineConverter skyLineConverter = new SkyLineConverter(); SkyLine skyLine = skyLineConverter.convert(buildings); Set<Edge> edges = skyLine.getEdges(); return sortList(edges); } private List<List<Integer>> sortList(Set<Edge> edges) { List<List<Integer>> result = new ArrayList<>(); List<Edge> list = new ArrayList<>(edges); list.sort(Comparator.comparingInt(o -> o.x)); for(Edge edge: list){ List<Integer> intList = new ArrayList<>(); intList.add(edge.x); intList.add(edge.height); result.add(intList); } return result; } } class SkyLineConverter public class SkyLineConverter { public SkyLine convert(int[][] raw) { BuildingConverter buildingConverter = new BuildingConverter(); List<Building> buildings = buildingConverter.convert(raw); return new SkyLine(buildings); } } class Building public class Building { public final int x; public final int width; public final int height; public Building(int x, int width, int height) { this.x = x; this.width = width; this.height = height; } } class BuildingConverter public class BuildingConverter { private static final int FROM_INDEX = 0; private static final int TO_INDEX = 1; private static final int HEIGHT_INDEX = 2; public List<Building> 
convert(int[][] raw) { List<Building> buildings = new ArrayList<>(); for (int[] buildingRaw: raw){ int x = buildingRaw[FROM_INDEX]; int width = buildingRaw[TO_INDEX] - buildingRaw[FROM_INDEX]; int height = buildingRaw[HEIGHT_INDEX]; buildings.add(new Building(x,width,height)); } return buildings; } } class Skyline public class SkyLine { private final int width; private final Set<Edge> edges = new HashSet<>(); private final List<Building> buildings; public SkyLine(List<Building> buildings) { this.buildings = buildings; Building mostRight = findMostRight(buildings); width = mostRight.x + mostRight.width; addEdge(); } private void addEdge() { buildings.forEach(b -> { addEdge(b.x); addEdge(b.x + b.width); }); edges.add(new Edge(width, 0)); } private void addEdge(int x) { int skyline = getSkyLine(x); int previous = x == 0 ? 0 : getSkyLine(x - 1); if (previous < skyline || previous > skyline) { edges.add(new Edge(x, skyline)); } } private int getSkyLine(int x) { List<Building> aroundThisPoint = buildings.stream(). filter(b -> b.x <= x && b.x + b.width > x). collect(Collectors.toList()); return aroundThisPoint.stream().mapToInt(b -> b.height).max().orElse(0); } private Building findMostRight(List<Building> buildings) { Optional<Building> mostRight = buildings.stream().reduce((a, b) -> a.x > b.x ? a : b); //noinspection OptionalGetWithoutIsPresent return mostRight.get(); } public Set<Edge> getEdges() { return edges; } } class Edge public class Edge { public final int x; public final int height; public Edge(int x, int height){ this.x = x; this.height = height; } } Answer: I noticed you have used the stream library in most of your code, so I thought about increasing readability with fewer lines of code using streams where possible, because the other parts look fine to me. 
In your Solution class you have the following method: private List<List<Integer>> sortList(Set<Edge> edges) { List<List<Integer>> result = new ArrayList<>(); List<Edge> list = new ArrayList<>(edges); list.sort(Comparator.comparingInt(o -> o.x)); for(Edge edge: list){ List<Integer> intList = new ArrayList<>(); intList.add(edge.x); intList.add(edge.height); result.add(intList); } return result; } You could directly iterate over the edges set combining Stream#sorted and Stream#map, avoiding the need to explicitly instantiate a list, as in my method below: private List<List<Integer>> sortList(Set<Edge> edges) { return edges.stream() .sorted(Comparator.comparingInt(edge -> edge.x)) .map(edge -> List.of(edge.x, edge.height)) .collect(Collectors.toList()); } In your BuildingConverter class you have the following method: public List<Building> convert(int[][] raw) { List<Building> buildings = new ArrayList<>(); for (int[] buildingRaw: raw){ int x = buildingRaw[FROM_INDEX]; int width = buildingRaw[TO_INDEX] - buildingRaw[FROM_INDEX]; int height = buildingRaw[HEIGHT_INDEX]; buildings.add(new Building(x,width,height)); } return buildings; } It is possible to stream every row of your int[][] raw 2d array with Arrays#stream and Stream#map, obtaining the same expected result, as below: public List<Building> convert(int[][] raw) { return Arrays.stream(raw) .map(arr -> new Building(arr[FROM_INDEX], arr[TO_INDEX] - arr[FROM_INDEX], arr[HEIGHT_INDEX])) .collect(Collectors.toList()); } In your class SkyLine you have the following two methods that can be simplified: private int getSkyLine(int x) { List<Building> aroundThisPoint = buildings.stream(). filter(b -> b.x <= x && b.x + b.width > x). collect(Collectors.toList()); return aroundThisPoint.stream().mapToInt(b -> b.height).max().orElse(0); } private Building findMostRight(List<Building> buildings) { Optional<Building> mostRight = buildings.stream().reduce((a, b) -> a.x > b.x ? 
a : b); //noinspection OptionalGetWithoutIsPresent return mostRight.get(); } In your getSkyLine method there is no need to instantiate an intermediate list that will be streamed; you can combine your code lines into a more succinct method: private int getSkyLine(int x) { return buildings.stream() .filter(b -> b.x <= x && b.x + b.width > x) .mapToInt(b -> b.height) .max() .orElse(0); } Your findMostRight method could be simplified using Collections#max, as below: private Building findMostRight(List<Building> buildings) { return Collections.max(buildings, Comparator.comparingInt(b -> b.x)); }
{ "domain": "codereview.stackexchange", "id": 41505, "tags": "java, object-oriented" }
PointCloud To LaserScan
Question: Hello. I'm new to ROS. I set up my scene in V-REP and my robot uses a Kinect. I'm trying to create a map of the environment, but for this I need to convert the data from a point cloud to a laser scan, and I have no idea how to do this. Originally posted by Dieisson Martinelli on ROS Answers with karma: 11 on 2018-04-17 Post score: 1 Answer: Please see: #q224463 #q11232 #q238481 #q235009 #q10568 #q188444 #q73186 This has been asked many times before, but it can be a little difficult to find these answers using the search function on the site. You can try entering the following search terms in Google: <question> site:answers.ros.org where <question> is your question. This will perform a search of ROS Answers using your question as a term. For example, your question would be: https://www.google.com/search?q=convert+point+cloud+to+laserscan+site%3Aanswers.ros.org Originally posted by jayess with karma: 6155 on 2018-04-17 This answer was ACCEPTED on the original site Post score: 1 Original comments Comment by paulbovbel on 2018-07-06: Funnily enough, the links provided mostly talk about how to convert a laserscan into a pointcloud :) Comment by jayess on 2018-07-06: Good catch. This is what happens when you have too many tabs open at once.
{ "domain": "robotics.stackexchange", "id": 30671, "tags": "ros, ros-kinetic, vrep, laserscan, pointcloud" }
Which controller from `ros2_control` is appropriate for firmly grabbing a box with a 1-DOF gripper in Gazebo Classic?
Question: I want my robot arm (Turtlebot3 with OpenManipulator-X) to firmly grab a box; however, the gripper does a sudden close movement and the box flies away. Is a firm closing movement possible with ros2_control in Gazebo Classic? The gripper model uses the gripper controller from the ros2_control package with position control. I could not find a way to slow down the closing movement of the gripper. I searched for examples and the most similar example uses the Forward Command Controller, however it still has sudden movements like this: How can I achieve a firm closing movement with the ros2_control gripper controller with position control? Answer: Welcome to RSE. So you want to simulate a gripper interacting with objects in Gazebo Classic? Don't use position or velocity interfaces. In my experience, that just doesn't work. You have to use an effort interface to let the physics engine solve that properly. This brings some other problems: Mimicking position within Gazebo while exposing an effort interface via ros2_control isn't implemented yet. Mimicking effort would be supported, but your "mimicked" finger will not have the same position as the "main" finger. I'm not aware of a different method other than implementing a custom gazebo_system, see for example this PR. If your robot has a position interface (i.e. servo?), you can't use the same one within your Gazebo simulation without writing (again) a custom gazebo_system implementing some actuator dynamics between position and effort. Currently, you have to use an effort_controller to forward topics to efforts, because the gripper_controller's effort interface was broken (fixed with this PR, but it will take some time to be released as a binary).
{ "domain": "robotics.stackexchange", "id": 38553, "tags": "gazebo, control, ros-humble, gripper" }
Majorana equation in two forms
Question: Let's have two forms of Majorana equation. First form (standard or spinor representations of gamma-matrices). $$ i\gamma^{\mu} \partial_{\mu}\Psi - m\Psi = 0, \quad \Psi = \Psi_{c} = \hat {C} \bar {\Psi}^{T} = \begin{pmatrix} \Psi_{a} \\ \bar {\Psi}^{\dot {a}}\end{pmatrix}, \quad \hat {C} = diag (\varepsilon_{\alpha \beta}, \varepsilon^{\dot {\alpha} \dot {\beta}}) = diag (-i\sigma_{y}, i\sigma_{y}). $$ I only say that the spinor is equal to its charge conjugate, so the corresponding particle doesn't have an electric charge. The second form (Majorana representation of gamma-matrices). $$ i\tilde {\gamma}^{\mu} \partial_{\mu}\Psi - m\Psi = 0, $$ where all coefficients are real, so we can take $\Psi$ as the real function (for the Majorana fermions we must take $\Psi$ as the real function). So, the question: how are these forms connected? First, of course, I must get the unitary transformation $\Psi ' = \hat {U} \Psi$, which leads to $\tilde {\gamma}_{\mu} = \hat {U}^{+}\gamma_{\mu} \hat {U}$. But what to do with the definition of charge conjugation? Do I need to transform the $C$-matrix? If I think correctly, will the transformation $\hat {U}^{+} \hat {C} \hat {U}$ lead to the new charge conjugation operation, which consists only of complex conjugation of the spinor? An edit. I decided to check my assumptions about charge conjugation. It is determined as $$ \Psi^{c} = \hat {C} \gamma_{0}^{T} \Psi^{*}, $$ where $\hat {C}$ refers to the charge conjugation operator. I started from the spinor basis: $$ \gamma_{0} = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}, \quad \hat {C} = \begin{pmatrix} -i\sigma_{y} & 0 \\ 0 & i\sigma_{y}\end{pmatrix}. $$ Standard (or Dirac) basis: $$ U_{spinor\to standard} = U_{1} = \frac{1}{\sqrt{2}}\begin{pmatrix} 1 & 1 \\ 1 & -1\end{pmatrix} \Rightarrow $$ $$ \gamma_{0}^{Dirac} = U_{1}^{+}\gamma_{0}U_{1} =\begin{pmatrix} 1 & 0 \\ 0 & -1\end{pmatrix}, \quad \hat {C}^{Dirac} = -\begin{pmatrix} 0 & i\sigma_{y} \\ i\sigma_{y} & 0\end{pmatrix}. 
$$ Finally, Majorana basis: $$ U_{standard \to Majorana} = U_{2} = \frac{1}{\sqrt{2}}\begin{pmatrix} 1 & \sigma_{y} \\ \sigma_{y} & -1\end{pmatrix} \Rightarrow $$ $$ \gamma_{0}^{Majorana} = U_{2}^{+}\gamma_{0}^{Dirac}U_{2} =\begin{pmatrix} 0 & \sigma_{y} \\ \sigma_{y} & 0\end{pmatrix}, \quad \hat {C}^{Majorana} = \begin{pmatrix} -i & 0 \\ 0 & i\end{pmatrix}. $$ So $$ \Psi^{c} = \begin{pmatrix} 0 & i\sigma_{y} \\ -i\sigma_{y} & 0\end{pmatrix}\Psi^{*} \neq \Psi^{*}. $$ It is very strange, because the Majorana fermion is real in the Majorana representation, so charge conjugation must be equal to complex conjugation. Where is the mistake? One more edit. The answer is found. Answer: The transformation rule for $\hat{C}$ or that of $\hat{C}\gamma_{0}^{T}$ is different from the usual one. The whole charge conjugation operation is given by $VK$, where $V\equiv\hat{C}\gamma_{0}^{T}$ is unitary and $K$ is complex conjugation. (i.e., $VK$ is antiunitary.) Then, under a basis transformation, $$ VK \ \rightarrow \ U^{\dagger}VK U = U^{\dagger}VU^{\ast} K. $$ Hence $V$ transforms as $V \ \rightarrow \ U^{\dagger}VU^{\ast}$. My guess is that this is the piece that was missing in your derivation.
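The corrected transformation rule can be checked numerically. The sketch below (my own verification, using numpy) builds $V = \hat C\gamma_0^T$ in the spinor basis, applies $V \to U^\dagger V U^\ast$ through both basis changes, and finds that in the Majorana basis $V$ becomes proportional to the identity, so charge conjugation there is just complex conjugation up to an overall constant phase:

```python
import numpy as np

I2 = np.eye(2)
sy = np.array([[0, -1j], [1j, 0]])  # Pauli sigma_y
Z2 = np.zeros((2, 2))

def block(a, b, c, d):
    """Assemble a 4x4 matrix from four 2x2 blocks."""
    return np.block([[a, b], [c, d]])

# spinor basis: gamma0 and C as given in the question
g0 = block(Z2, I2, I2, Z2)
C = block(-1j * sy, Z2, Z2, 1j * sy)
V = C @ g0.T                      # V from Psi^c = V Psi^*

# the two basis changes from the question
U1 = block(I2, I2, I2, -I2) / np.sqrt(2)   # spinor -> Dirac
U2 = block(I2, sy, sy, -I2) / np.sqrt(2)   # Dirac -> Majorana

# antiunitary transformation rule from the answer: V -> U^dagger V U^*
V_dirac = U1.conj().T @ V @ U1.conj()
V_maj = U2.conj().T @ V_dirac @ U2.conj()

# V_maj is a unit-modulus phase times the identity,
# so in the Majorana basis Psi^c = (phase) * Psi^*
```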
{ "domain": "physics.stackexchange", "id": 11764, "tags": "dirac-equation, majorana-fermions, charge-conjugation" }
Why are rare earth metals and platinum group metals often found clustered together in ores
Question: Rare earth and platinum group metals are often found clustered together in the earth's crust. Mining for platinum, for instance, also yields rhodium and ruthenium, which belong to the same group. Likewise, rare earth elements such as neodymium, europium and samarium also co-occur in the same ore, so much so that they are difficult to chemically separate. It could be reasoned that this is the result of nucleosynthesis, where elements are formed consecutively based on their atomic number. While that might explain the first row and the second row of each group, where each metal is only one atomic number apart, it doesn't explain why metals from both rows, which are much further apart, are found together. Alternatively, the similar chemistry of each group could explain the clustering. These two groups are the only ones with this property. It fails to explain, however, how these metals found each other in a molten soup of heterogeneous elements. There may be some geological factors in the clustering, but it's unclear. Why are the two groups of elements found clustered together? Answer: The factors that generate mineral concentrations are complex and often only partly known. Introduction: geology is complicated. The one thing we can be very certain about is that the distribution of minerals in the earth's crust has very little to do with the primordial origins of the component elements (that is, where they came from in the early solar system and how they were originally generated). Most "heavy" elements are originally formed in the cores of supernovae and not in either the big bang or in normal stars. The distribution of elements in the earth is mostly unrelated to the cosmic origins of elements because the earth's crust is not static but is frequently churned up by a variety of processes on a geological timescale.
If we go back far enough in the history of the planet, everything was molten and this allowed some of the denser components to separate out before the surface cooled enough to be solid. This led to the core being mostly metallic (and consisting mostly of iron and nickel). Higher layers contain less dense material with a lot of silicate minerals. At the top there is a thin layer, the crust, which is where we find useful minerals; it is even more concentrated in silicate minerals and even less dense. But those early processes are mostly irrelevant to what we see in the crust. The crust is subject to a variety of processes that churn up its content, many of which concentrate specific components. On a very large scale we have plate tectonics, where large parts of the crust are both made and recycled on a geological timescale. To simplify greatly, new crust emerges at one place (eg the mid-Atlantic ridge) and is consumed by subduction at other plate boundaries (eg the Andean belt in South America). On a smaller scale (though often related to plate tectonic boundaries), volcanism takes molten rock from relatively deep in the crust and spews it out to the surface, bringing minerals of new compositions to the surface and altering the surrounding minerals by heat and pressure. Also, the surface is subject to weather, which causes erosion (leading to both chemical and physical separation of the mineral content of rocks) and the uncovering of rock layers originally formed much deeper in the crust by the cooling of liquid mantle. Another product of erosion is sedimentary rocks, where the things being eroded reform into new types of rock. Plus life itself leads to the creation of some rocks. Some creatures collect carbonate minerals to make their protective bodies, for example, and if these concentrate when they die, they may deposit layers which over time become new rock types (eg limestone). To make things even more complicated, these processes may interact.
Volcanic heat or pressure caused by burial or major stresses may cause major changes to other types of rock, altering their mineral content in the process. Limestone may be recrystallised into marble; plant deposits may be transformed into coal or oil. And other separations are caused by related processes. Metamorphism is often associated with fluid flow which, depending on the composition of the fluid, may move specific minerals around, extracting specific components from some minerals and recrystallising them in voids left by the stresses associated with the metamorphic processes. In short, things are pretty complicated in the crust and the dynamics will churn things up a lot over geological time. The one thing we can be sure of is that what we see now is not primarily dominated by the primordial origin of the elements. There are three major processes that concentrate specific ores. I'm going to greatly simplify some of the things that matter here, but there are basically three important processes that concentrate things. Not that geologists can typically reach a consensus on what specifically happened with many deposits. The three processes are: separation due to erosion; separation due to differential crystallisation of liquid rocks; and separation caused by metamorphic processes. Some major mineral deposits of economic importance (such as the Klondike deposits in Canada) are caused by the first type of process. Some major platinum and related mineral deposits are from similar processes. What is basically happening is that the metals, or minerals containing the metals, are concentrated by flowing water because they are denser than the bulk of other minerals from the source rock. This is the same process that prospectors exploit when panning for gold in rivers (fast flowing water tends to wash away the less dense clay minerals and leave the denser specks of gold).
Over a geological time scale this process sometimes leads to very significant concentration of "heavy" minerals. The association depends on the density of the specific minerals and their specific presence in the rock being eroded. But, if the eroding rock contains many dense minerals, then the process can concentrate them all in the alluvial deposit. But why would some rocks originally contain more than their fair share of specific minerals? One reason is the second geological process that leads to selective concentration of some minerals. This is that, as liquid rocks in the mantle cool, different minerals will crystallise from the mix at different times. A visible manifestation of this process can be seen in many of the polished granites used to decorate kitchen tops or the floors and walls of buildings. Granites consist of three key minerals: feldspars, quartz and mica, each with very different compositions. The rock is usually formed deep in the crust when a large body of liquid rock cools. But the feldspars crystallise first, giving the large colourful crystals that make the polished surfaces so attractive. Sometimes the large crystals even show patterns of flow in the liquid source rock. The important general point is that the composition of the liquid changes as crystals form and this may concentrate some components. But this process of selective crystallisation is very general and explains why some particular concentrations exist. The largest concentration of platinum group minerals is in a formation in southern Africa called the Merensky "reef". The key concentrations of minerals appear to have been caused by separation as the rock crystallised (according to this book): “Present opinion is that the Complex was intruded from a magma that was undergoing some differentiation, but was intruded in discontinuous phases.
There is an overall trend in the mineralogy and chemistry of the basic rocks that is normal, and the layering of individual rock units is thought to be due to settling out of crystals according to density, modified by convection currents flowing in the magma.” The third process is also important for many valuable minerals. Metamorphic processes, for example, often involve heat and pressure but also hydrothermal processes. The pressure may crack existing rock, leaving voids which are filled with high pressure liquid water containing various minerals; this water can selectively solubilise the contents of the rock and, later, deposit them in the voids. One chemical property that can selectively separate gold and platinum group metals occurs when the water contains a lot of sulfur, selenium or tellurium and is a reducing environment (eg sulfides rather than sulfates). Many precious metals will go into solution in such environments only to be deposited later from the solution as the state changes. Many gold deposits are like this, as are some platinum group deposits. Rare earth (lanthanide-containing) minerals are thought to be concentrated by the same sorts of processes but are, despite rare earths being pretty common in the crust (far more common than precious metals), much less common as economically viable deposits. This may be because the specific chemical processes that concentrate them in the first place are less common. This is partially because they have large atomic radii and high charge, which means they don't fit well into the commonest silicate types of mineral. The key minerals they do occur in are carbonate and phosphate based. And these are often thought to occur near locations where continental crust is being consumed at a plate boundary, where metamorphic processes may increase the presence of phosphate and carbonate in the resulting rock. See this presentation for some ideas.
But they can also be concentrated by sedimentary processes where solutions eroded from source rocks are selectively absorbed into clay minerals elsewhere. In general, the reason why rare earths are found together is that they are chemically similar, so there are few natural processes that will separate them (heck, the industrial processes are very hard and expensive). Summary There are many processes, both physical and chemical, that can cause concentrations of particular minerals in the earth's crust. Some of these depend on chemistry but others are simple physical processes. But geological chemistry is pretty messy and complicated and, even now, geologists can't always agree what particular process created a deposit. But there are broad types of process that probably contribute, some of which depend on the similar chemistry of particular groups of elements. Rare earths are very chemically similar, for example, and precious metals have some chemical similarities (especially solubilisation in reducing environments with sulfur and other group 16 elements) that enable geological processes to concentrate them in some locations. Some material here is from The Atlas of Economic Mineral Deposits which, though fairly old (1979), is worth a read to get a feel for how complex this subject is.
{ "domain": "chemistry.stackexchange", "id": 15122, "tags": "metallurgy, nuclear-chemistry, geochemistry" }
Rate of evolution of a population of long-lived individuals
Question: Is it necessary that the rate of evolution of longer-lived trees will be lower than that of annuals? I understand that new individuals will come up faster in annuals, and that they may adapt to varying conditions faster, but does this necessarily imply that long-lived trees will evolve more slowly? Answer: That is true to a certain extent in normal environments, where there is not an excess prevalence of mutagenic elements. The "Generation Time Hypothesis" explains why a shorter generation time, and hence a higher reproductive rate (as in annuals), corresponds to a higher rate of mutation (substitutions in DNA) and therefore a faster evolutionary rate, compared to slow-reproducing individuals with longer generation times (for example, long-lived trees). Since fitness-increasing adaptations produced by somatic variation are very rare, evolution depends on the formation of new offspring, whose DNA has been subjected to processes (repeated divisions, replications and translations, amphimixis, and a different environment) that make it more prone to mutation. This process is much faster in annuals than in long-lived trees like Sequoia. Moreover, sexual selection (especially in animals) operates at a much slower rate in individuals with longer generation times. Next, smaller individuals, having higher metabolic rates, usually have shorter generation times (more pronounced in the case of animals). Higher metabolic activity is linked to a greater mutation rate (due to metabolic intermediates acting as mutagens) and hence a faster evolutionary rate. A higher metabolic rate is also linked to higher temperature (faster mutation), and such individuals have a geographic distribution favouring the tropics (which again increases the evolutionary rate). All of this implies faster evolution for short-lived individuals (which usually have higher reproductive rates) as compared to long-lived trees.
But under "normal" environmental conditions (without drastic and fast changes), the evolutionary rate of large individuals is fast enough to preserve their existence (as the prevalence of Sequoia indicates). However, in a fast-changing environment, larger and long-lived individuals are more likely to be wiped out (probably, this applied to the dinosaurs!).
{ "domain": "biology.stackexchange", "id": 1406, "tags": "evolution" }
DESeq2 compare within a condition
Question: This might be a really stupid question, but I can't figure out how to do this. I've read through the DESeq2 vignette and manual pages but couldn't find an answer. I have a bunch of samples split up into different conditions (e.g. cell types and disease state). I would like to make a comparison between the two possible values within one condition, but only for the samples with a specific value of the other condition. For example, with the following sample sheet: patient | phenotype | type --------+-----------+----- 1 | healthy | A 1 | healthy | B 1 | sick | A 1 | sick | B 2 | healthy | A 2 | sick | A 2 | sick | B I would like to compare "healthy" vs "sick", but only for type "A". Currently, I have the following code, but this will also include the type "B" samples: dds <- DESeqDataSetFromMatrix(countData=counts, colData=design, design = ~ patient + phenotype + type) keep <- rowSums(counts(dds)) >= 10 dds <- dds[keep,] dds <- DESeq(dds) res <- results(dds, contrast=c("phenotype", "healthy", "sick")) Any idea how to accomplish this? Answer: Welcome to the world of all possible pairwise comparisons in DESeq2! The easiest way is to create another group in your data frame. First I simulate some sensible counts: counts = counts(makeExampleDESeqDataSet(m=48)) design = expand.grid(id=1:3, phenotype=rep(c("sick","healthy"),each=2), type = rep(c("A","B"),2)) Then you make a group that is a combination of phenotype and type: design$group = paste(design$phenotype,design$type,sep="_") Run DESeq2, including id: dds <- DESeqDataSetFromMatrix(countData=counts, colData=design, design = ~ id + group) keep <- rowSums(counts(dds)) >= 10 dds <- dds[keep,] dds <- DESeq(dds) res <- results(dds, contrast=c("group", "healthy_A", "sick_A")) This compares only healthy_A with sick_A; hopefully this is what you need. The other, standard statistical, option is to fit a model with an interaction term, but I think this approach is easier to explain and interpret.
Another question you might have is why not do two separate analyses for A and B; I think it's sometimes better to model the variance with all available data, given the limited number of replicates.
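The same "paste the two factors into one group" trick can be sketched outside R; here in plain Python, mirroring the question's sample sheet (illustrative only — this is just the grouping logic, not DESeq2 itself):

```python
# Sample sheet from the question, as a list of records
design = [
    {"patient": 1, "phenotype": "healthy", "type": "A"},
    {"patient": 1, "phenotype": "healthy", "type": "B"},
    {"patient": 1, "phenotype": "sick",    "type": "A"},
    {"patient": 1, "phenotype": "sick",    "type": "B"},
    {"patient": 2, "phenotype": "healthy", "type": "A"},
    {"patient": 2, "phenotype": "sick",    "type": "A"},
    {"patient": 2, "phenotype": "sick",    "type": "B"},
]

# Combine the two factors into one group column, as paste() does in the R answer
for row in design:
    row["group"] = f'{row["phenotype"]}_{row["type"]}'

# A contrast of "healthy_A" vs "sick_A" now touches only type-A samples
subset = [r for r in design if r["group"] in ("healthy_A", "sick_A")]
print(sorted({r["group"] for r in design}))   # four combined levels
print(all(r["type"] == "A" for r in subset))  # True: no type-B samples involved
```

Because each sample carries exactly one combined level, any single contrast between two levels automatically excludes every sample from the other levels.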
{ "domain": "bioinformatics.stackexchange", "id": 1386, "tags": "differential-expression, deseq2" }
The halocline and sonar
Question: I studied oceanography many years ago at university and recall a lecturer claiming that submarines in the Second World War could pass undetected through the entrance to the Mediterranean. He claimed that the halocline caused by the different salinities of the Atlantic and Mediterranean seas would be enough to deflect sonar, thus making the submarine invisible. Has anyone seen evidence to support this? Answer: Because of the limitations of wartime sonar, U-boats could sometimes pass undetected into the Mediterranean through the Straits of Gibraltar, but it was always risky and avoiding detection couldn't be guaranteed. There were many things that limited the efficiency of sonar (or ASDIC, as the British called it). It is true that sea water has a tendency to separate itself into layers which sonar finds difficult to penetrate. Sometimes this layering was based on differences in salinity, sometimes on differences in temperature, and sometimes on intervening sea currents. Even ships' wakes could deflect the sonar beam, so sometimes powerful curving wakes were deliberately made by the U-boat to shake off a pursuing warship. The range of active sonar under good conditions was normally a few miles, but that could be reduced if there was a lot of background noise. The Straits of Gibraltar are about 12 miles wide. Whales and schools of fish could also send back false echoes which would fool all but the most experienced sonar operators.
{ "domain": "earthscience.stackexchange", "id": 1848, "tags": "oceanography" }
Infinite number of degrees of freedom
Question: In a system with a finite number of degrees of freedom $\eta_i$, $i=1,\ldots, N$, the partition function depends on the N external fields that may couple linearly to the $\eta_i$ in the Hamiltonian $$ Z[H_i]= Tr \exp \left[ -\beta\left(\mathscr H - \sum_i H_i \eta_i\right) \right] $$ In a system with an infinite number of degrees of freedom, the partition function becomes a functional of $H(\mathbf r)$: $$ Z[H(\mathbf r)]= Tr \exp \left[ -\beta\left(\mathscr H - \int d^d \mathbf r\, H(\mathbf r) \eta(\mathbf r)\right) \right] $$ What is meant by an infinite number of degrees of freedom? Initially, there was a finite number (N) of axes in the system, and $\eta_i$ was constant throughout a given direction. I understand that for the nonhomogeneous case, $\eta$ has to depend on $\mathbf r$. But to do this, the author says it is due to an infinite number of degrees of freedom. In the continuum limit, is the number of axes infinite? Answer: In physics, an infinite number of degrees of freedom means that the state or configuration of a system cannot be given completely by a finite number of variables, but requires an infinite number of variables. These do not need to correspond to any physical space axes. Often the infinite number of variables is due to working with a field. A field is a function of position, and since there is an infinite number of positions, the field has an infinite number of degrees of freedom.
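Not from the original answer — a toy numerical illustration of the finite-$N$ formula, with $\mathscr H=0$ and two-state degrees of freedom $\eta_i=\pm 1$, showing that derivatives of $\ln Z$ with respect to the fields give the averages $\langle\eta_i\rangle=\tanh(\beta H_i)$ expected for independent spins:

```python
import math
from itertools import product

# Toy check of Z[H_i]: N two-state degrees of freedom eta_i = ±1 coupled to
# external fields H_i, with the internal Hamiltonian set to zero for simplicity.
beta = 0.7
H = [0.3, -0.5, 1.1]
N = len(H)

def Z(fields):
    # Sum exp(beta * sum_i H_i eta_i) over all 2^N configurations
    return sum(math.exp(beta * sum(h * e for h, e in zip(fields, eta)))
               for eta in product([-1, 1], repeat=N))

# <eta_i> = (1/beta) d ln Z / d H_i; for independent spins this equals tanh(beta*H_i).
eps = 1e-6
for i in range(N):
    Hp = H.copy(); Hp[i] += eps
    Hm = H.copy(); Hm[i] -= eps
    mean_eta = (math.log(Z(Hp)) - math.log(Z(Hm))) / (2 * eps * beta)
    assert abs(mean_eta - math.tanh(beta * H[i])) < 1e-5
```

This is exactly why the fields are coupled linearly: each $H_i$ acts as a source, and $\langle\eta_i\rangle = \beta^{-1}\,\partial \ln Z/\partial H_i$; the functional version simply replaces the partial derivative by a functional derivative with respect to $H(\mathbf r)$.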
{ "domain": "physics.stackexchange", "id": 60950, "tags": "continuum-mechanics, partition-function, degrees-of-freedom" }
How close to Earth were the asteroids moving through this Hubble composite image?
Question: Recently Hubble saw several asteroids traveling erratically through a deep field perspective. What region of the sky is this, and how near were these asteroids to the Earth? The photo is composed of multiple shots, and the asteroids appear multiple times because of Hubble's movement between frames as the photo assemblage shifted. Was Hubble viewing towards the plane of the solar system and the asteroid belt? How did they explain that there were five in the same narrow frame? https://astronomynow.com/2017/11/03/hubble-sees-nearby-asteroids-photobombing-distant-galaxies/ Answer: I visited the linked article in Astronomy Now and found that the URL of the image contained the string STSCI-H-p1733a. STScI stands for Space Telescope Science Institute. Searching that string leads to http://hubblesite.org/images/news/release/2017-33 (four images) http://hubblesite.org/news_release/news/2017-33 http://hubblesite.org/image/4080/gallery the last two of which identify the location as Abell 370, at about RA: 2h 40m, Dec: -1.6°. At about RA = 0h the ecliptic and the equator cross, but by 2h 40m the ecliptic has a declination of about +16°, so this photo is looking about 18° below the ecliptic. The first paragraph of the caption contained in both of those links says: Like rude relatives who jump in front of your vacation snapshots of landscapes, some of our solar system's asteroids have photobombed deep images of the universe taken by NASA's Hubble Space Telescope. These asteroids reside, on average, only about 160 million miles from Earth — right around the corner in astronomical terms. Yet they've horned their way into this picture of thousands of galaxies scattered across space and time at inconceivably farther distances. If the distance is about 160 million miles, then they are suggesting these asteroids are in the inner part of the main asteroid belt.
The Earth is moving at about 30 km/sec around the Sun, and the Hubble Telescope is "only" moving 7.6 km/sec with respect to the Earth, in roughly the same direction. So much, and probably most, of the motion is due to the Earth's velocity. I'm suggesting this because if the distance to the asteroids is really roughly 160 million miles, then they would be near opposition to the Sun and the Earth's motion would be roughly perpendicular to the viewing direction. There are a lot of asteroids! According to Wikipedia: One hundred asteroids had been located by mid-1868, and in 1891 the introduction of astrophotography by Max Wolf accelerated the rate of discovery still further. A total of 1,000 asteroids had been found by 1921, 10,000 by 1981, and 100,000 by 2000. Modern asteroid survey systems now use automated means to locate new minor planets in ever-increasing quantities. Asteroids are discovered constantly, and sometimes this happens in images that were not intended to be searches for asteroids, as in this case. While most asteroids orbit close to the plane of the ecliptic, some of their orbits do stray substantially above and below it, as in this case. While the date of the News Release is 02-Nov-2017, I don't know the time frame of all of the exposures that contributed to this final composite image. Here is a screen shot of the Planetarium Mode of Dominic Ford's in-the-sky.org. The marker shows the position of Abell 370 just below the equator, and the Ecliptic is the thicker line above it.
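The "+16°" figure quoted in the answer is easy to reproduce: for a point on the ecliptic, $\tan\delta=\tan\varepsilon\sin\alpha$. A quick check (my own arithmetic, not from the release):

```python
import math

epsilon = math.radians(23.44)             # obliquity of the ecliptic
alpha = math.radians((2 + 40 / 60) * 15)  # RA 2h 40m = 40 degrees

# Declination of the ecliptic at this right ascension:
delta = math.degrees(math.atan(math.tan(epsilon) * math.sin(alpha)))
print(round(delta, 1))  # about 15.6 degrees, i.e. "about +16 degrees"
```

With Abell 370 at Dec −1.6°, the line of sight is indeed roughly 17–18° below the ecliptic.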
{ "domain": "astronomy.stackexchange", "id": 2637, "tags": "asteroids, hubble-telescope" }
Maximise adjacent numbers in a functional style (PE#11)
Question: What is the greatest product of four adjacent numbers in the same direction (up, down, left, right, or diagonally) in the [provided] 20×20 grid? [source] This solution works, and works effectively instantly. The algorithm is \$ O ( n ) \$, as it loops through the entire field a total of four times. However, it seems like I should be able to do this in one pass, rather than four. More pertinent: I'm still new to writing using the functional interfaces (map, reduce, slice) as opposed to a more procedural style, so I'd like any comments on the readability of the four passes that I make. import java.util.* val grid = """ 08 02 22 97 38 15 00 40 00 75 04 05 07 78 52 12 50 77 91 08 49 49 99 40 17 81 18 57 60 87 17 40 98 43 69 48 04 56 62 00 81 49 31 73 55 79 14 29 93 71 40 67 53 88 30 03 49 13 36 65 52 70 95 23 04 60 11 42 69 24 68 56 01 32 56 71 37 02 36 91 22 31 16 71 51 67 63 89 41 92 36 54 22 40 40 28 66 33 13 80 24 47 32 60 99 03 45 02 44 75 33 53 78 36 84 20 35 17 12 50 32 98 81 28 64 23 67 10 26 38 40 67 59 54 70 66 18 38 64 70 67 26 20 68 02 62 12 20 95 63 94 39 63 08 40 91 66 49 94 21 24 55 58 05 66 73 99 26 97 17 78 78 96 83 14 88 34 89 63 72 21 36 23 09 75 00 76 44 20 45 35 14 00 61 33 97 34 31 33 95 78 17 53 28 22 75 31 67 15 94 03 80 04 62 16 14 09 53 56 92 16 39 05 42 96 35 31 47 55 58 88 24 00 17 54 24 36 29 85 57 86 56 00 48 35 71 89 07 05 44 44 37 44 60 21 58 51 54 17 58 19 80 81 68 05 94 47 69 28 73 92 13 86 52 17 77 04 89 55 40 04 52 08 83 97 35 99 16 07 97 57 32 16 26 26 79 33 27 98 66 88 36 68 87 57 62 20 72 03 46 33 67 46 55 12 32 63 93 53 69 04 42 16 73 38 25 39 11 24 94 72 18 08 46 29 32 40 62 76 36 20 69 36 41 72 30 23 88 34 62 99 69 82 67 59 85 74 04 36 16 20 73 35 29 78 31 90 01 74 31 49 71 48 86 81 16 23 57 05 54 01 70 54 71 83 51 54 69 16 92 33 48 61 43 52 01 89 19 67 48 """.trim() fun main(args: Array<String>) { val numbers = mutableListOf<List<Int>>() Scanner(grid).use { while (it.hasNext()) { val line = mutableListOf<Int>() 
Scanner(it.nextLine()).use { while (it.hasNext()) line += it.nextInt() } numbers += line } } var product = 0 // Horizontal for (y in 0 until numbers.size) for (x in 0 until numbers[y].size - 3) product = Math.max(product, numbers[y].slice(x..x + 3).reduce(Int::times)) // Vertical for (y in 0 until numbers.size - 3) for (x in 0 until numbers[y].size) product = Math.max(product, numbers.slice(y..y + 3).map { it[x] }.reduce(Int::times)) // Down Right for (y in 0 until numbers.size - 3) for (x in 0 until numbers[y].size - 3) product = Math.max(product, numbers.slice(y..y + 3).mapIndexed { i, it -> it[x + i] }.reduce(Int::times)) // Down Left for (y in 0 until numbers.size - 3) for (x in 3 until numbers[y].size) product = Math.max(product, numbers.slice(y..y + 3).mapIndexed { i, it -> it[x - i] }.reduce(Int::times)) println(product) } Answer: You can parse your grid using some very useful functions in kotlin-stdlib: val numbers = grid.lines().map { line -> line.split(' ').map(String::toInt) } Personally I prefer "row/column" instead of "y/x". Thinking "y" before "x" is unnatural. You can always define your own methods too which can improve readability. e.g.: fun Iterable<Int>.product(): Int = reduce(Int::times) Instead of using Math.max you might check to see if the given product is greater than the currently known maximum product and then assign it if it is. This is minor but personally I prefer to avoid unnecessary assignments. I wouldn't worry about trying to do all of this in one pass. Each directional slice has different row/column bounds so I think four separate loops is the clearest/cleanest. You can remove some duplicated code when it comes to calculating the product and updating the maximum. 
e.g.: var product = 0 fun maximizeProduct(numbers: List<Int>) { with(numbers.product()) { if (this > product) product = this } } // Horizontal for (y in 0 until numbers.size) for (x in 0 until numbers[y].size - 3) maximizeProduct(numbers[y].slice(x..x + 3)) // Vertical for (y in 0 until numbers.size - 3) for (x in 0 until numbers[y].size) maximizeProduct(numbers.slice(y..y + 3).map { it[x] }) // Down Right for (y in 0 until numbers.size - 3) for (x in 0 until numbers[y].size - 3) maximizeProduct(numbers.slice(y..y + 3).mapIndexed { i, it -> it[x + i] }) // Down Left for (y in 0 until numbers.size - 3) for (x in 3 until numbers[y].size) maximizeProduct(numbers.slice(y..y + 3).mapIndexed { i, it -> it[x - i] }) println(product)
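Since the question asks about doing this in one pass: the four scans can be folded into a single traversal over cells and directions. A language-neutral sketch in Python (not Kotlin; the small grid and its answer are made up for illustration). Note this is the same amount of work as the four loops, just organized as one traversal:

```python
# One pass over (cell, direction) pairs instead of four separate loops.
def max_product(g, k=4):
    rows, cols = len(g), len(g[0])
    directions = ((0, 1), (1, 0), (1, 1), (1, -1))  # right, down, down-right, down-left
    best = 0
    for y in range(rows):
        for x in range(cols):
            for dy, dx in directions:
                ye, xe = y + (k - 1) * dy, x + (k - 1) * dx
                if 0 <= ye < rows and 0 <= xe < cols:  # run of k fits in the grid
                    p = 1
                    for i in range(k):
                        p *= g[y + i * dy][x + i * dx]
                    best = max(best, p)
    return best

grid = [[1, 2, 3, 4],
        [5, 6, 7, 8],
        [9, 10, 11, 12],
        [13, 14, 15, 16]]
print(max_product(grid))  # 43680 = 13 * 14 * 15 * 16 (bottom row)
```

Iterating over a table of direction vectors also removes the duplicated bounds arithmetic that the four hand-written loops carry.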
{ "domain": "codereview.stackexchange", "id": 21771, "tags": "programming-challenge, functional-programming, kotlin" }
Does the length of the sidereal day vary systematically?
Question: I'm confused about some properties of the sidereal day, in particular whether its duration varies systematically over the course of the year.1 It seems to me that that must be the case, but the details are a bit confusing to me. I understand that the length of a sidereal day is the time interval between successive transits of the line of equatorial longitude corresponding to a specified right ascension, $R$ (usually $0^\text{h}$). Each sidereal day, this line shifts against the earth's direction of rotation by some amount, $\Delta\alpha_{SID}$, to a new position, so that the sidereal day is always shorter than the earth's rotational period on its axis (the "stellar day"). What's confusing me is that the length of the sidereal day is often described as if it were constant; but it can't be, nor can the corresponding changes in the ecliptic longitude of $R$, $\Delta\lambda_{SID}$, even assuming a constant rate of precession:2 If the sidereal day has a constant length, $\Delta\alpha_{SID}$ must be constant, but then $\Delta\lambda_{SID}$ will vary (between $\Delta\alpha_{SID}/\cos\varepsilon$ and $\Delta\alpha_{SID}\cdot\cos\varepsilon$) as a consequence of the varying equatorial latitude at which the line of equatorial longitude corresponding to $R$ intersects the ecliptic. But this is impossible, since ecliptic longitude should change uniformly, so that $\Delta\lambda_{SID}$ should be the same for each (same-length) sidereal day. The reverse can't be the case either: that each sidereal day, the position of the line corresponding to $R$ shifts by the same fixed amount, $\Delta\lambda_{SID}$, along the ecliptic. If that were the case, $\Delta\alpha_{SID}$ and thus the length of the sidereal day would vary (between $\Delta\lambda_{SID}\cdot\cos\varepsilon$ and $\Delta\lambda_{SID}/\cos\varepsilon$), resulting in a sidereal day of varying length; but if the length of the sidereal day varies, $\Delta\lambda_{SID}$ can't be constant.
So the only conclusion I can come to is that both $\Delta\lambda_{SID}$ and the duration of the sidereal day, and thus $\Delta\alpha_{SID}$, vary, subject to the constraints imposed by spherical trigonometry: $$\Delta\lambda_{SID}\cdot\cos\varepsilon<\Delta\alpha_{SID}<\Delta\lambda_{SID}/\cos\varepsilon$$ and $$\langle\Delta\lambda_{SID}\rangle=\langle\Delta\alpha_{SID}\rangle$$ Is this correct?3 Do both of these quantities vary systematically in this way, with these properties?4 Is "the" sidereal day really a mean sidereal day?5 (1) Though a distinction between a "mean sidereal day" and an "apparent sidereal day" is sometimes mentioned, the terminology I've come across is confusing, and descriptions that do discuss variations seem to be referring to different, unspecified, phenomena. Here, I am asking about the properties of an idealized "mean sidereal day", specifically whether even this "mean" day has systematic variation in duration. (2) My thinking went like this. Take the plane of the ecliptic as reference. Then precession is just a rotation of the equatorial coordinate system around an axis perpendicular to that plane. The earth's rotational axis is not collinear with that axis, so that the great circle of the equator intersects the ecliptic plane at two points, one of which can be used as a reference to "track the progress" of precession. Clearly, the progress of that point along the ecliptic — i.e., the change in the intersection's ecliptic longitude, $\Delta\lambda$ — has a direct one-to-one relationship with the rate of precession. What's not clear to me is how this relates to the corresponding change in right ascension, $\Delta\alpha$. But what's being measured in that case is the distance between the reference point and some initial point on the ecliptic, within the equatorial system, an arc that passes through different declinations and thus $\Delta\alpha$ values that vary from $\Delta\lambda$ as described above.
(Moreover, this seems to me the only way for $\langle\Delta\lambda\rangle$ and $\langle\Delta\alpha\rangle$ to be the same, which must be the case for the method described in (3) to work.) (3) I'm reasonably confident that the first is true, but I'm not sure how to arrive at the second; though it must also be true, since otherwise it would not be possible to determine the precessional period (in sidereal days) from $24^\text{h}/\langle\Delta\alpha_{SID}\rangle$, as is commonly done. (4) The corresponding (by definition, assuming a constant rotational rate for the earth) constant quantities for the stellar day must thus be $$\Delta\lambda_{ST}=\langle\Delta\lambda_{SID}\rangle\cdot\frac{d_{ST}}{d_{SID}}=\Delta\alpha_{ST}=\langle\Delta\alpha_{SID}\rangle\cdot\frac{d_{ST}}{d_{SID}}$$ where ${d_{SID}}$ is the length of a mean sidereal day, and ${d_{ST}}$ the length of a stellar day. (5) Or perhaps a mean mean sidereal day. (A very mean sidereal day?) Answer: The sidereal day, $1/k$, is a mean quantity derived from many observations. Apart from the consideration you are deriving, there are variations due to changes in the orbital parameters of the Earth. These changes are partly well known and predictable by celestial mechanics, but a significant part is unpredictable. One thing that, critically, does not vary is the "specified right ascension" you refer to in your question: that is always $0^\text{h}$, the vernal equinox. I believe this is the source of your confusion. The basics of your reasoning are sound. It is in fact true that at any given moment, the rate at which the meridian sweeps through ecliptic longitude will be different from the rate at which it sweeps through right ascension, depending on the declination of the segment of the ecliptic the meridian is passing through at that time. And you're right that that rate will vary systematically between $\dot{\alpha}\cdot\cos\varepsilon$ and $\dot{\alpha}/\cos\varepsilon$.
So, as a result, as you state, the omission of different "chunks" of the ecliptic from a complete circuit will result in different day lengths. However, this is irrelevant to the length of a sidereal day. The "chunk" of the ecliptic by which a sidereal day differs from a complete ecliptic circuit is always the same: in particular, it is always at the same declinations, those just west of the vernal equinox. As a result, the effect of precession on the length of the day is the same each day, and the systematic variations you describe have no effect. The variation you describe does indeed occur for the time it takes for the meridian to return to the spot occupied by the sun at the start of each day, because the declination of that spot, and thus of the segment of the ecliptic omitted by precession, varies. But even in that case, those segments follow the sun through precisely the full circuit of the ecliptic over the course of a tropical year, so that the average duration of these "days" is $1/k$. Note that the same process applies to the time between successive transits of the sun. This effect is much more significant (since the sun moves several orders of magnitude faster than the equinox precesses), and is in fact the component of the equation of time due to the obliquity of the ecliptic.
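The bound and the averaging claim above are easy to check numerically. Here is a rough sketch (my own, not part of the original exchange) that tabulates right ascension along the ecliptic via $\tan\alpha=\tan\lambda\cos\varepsilon$ and confirms that the instantaneous rate $d\alpha/d\lambda$ stays between $\cos\varepsilon$ and $1/\cos\varepsilon$ while averaging to 1 over a full circuit:

```python
import numpy as np

eps = np.radians(23.44)                   # obliquity of the ecliptic
lam = np.linspace(0.0, 2 * np.pi, 200_001)  # ecliptic longitude, one full circuit

# Right ascension of a point on the ecliptic: tan(alpha) = tan(lambda) * cos(eps)
alpha = np.unwrap(np.arctan2(np.sin(lam) * np.cos(eps), np.cos(lam)))
rate = np.gradient(alpha, lam)            # instantaneous d(alpha)/d(lambda)

print(rate.min())    # ~ cos(eps)   ≈ 0.917
print(rate.max())    # ~ 1/cos(eps) ≈ 1.090
print(rate.mean())   # ~ 1: <d alpha> = <d lambda> over a complete circuit
```

The extreme rates occur where the ecliptic crosses the equator and at the solstitial colures, exactly the systematic variation described above.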
{ "domain": "physics.stackexchange", "id": 75494, "tags": "astronomy, orbital-motion, coordinate-systems, precession" }
X-ray diffraction: Is there an intuitive explanation of structure and form factors?
Question: We have just started x-ray diffraction and I am utterly lost. We were given two formulas: First formula: The intensity of the x-rays scattered by $\mathbf{Q}=\mathbf{k}-\mathbf{k}'$ is given by the fourier transform of the electronic density, $\rho(\mathbf r)$ $$I(\mathbf Q)=\lvert \Psi(\mathbf{Q})\rvert^2 \propto \left\lvert \int_V \rho(\mathbf r) e^{i \mathbf Q \cdot \mathbf r}\right\rvert^2$$ Second formula: $$I(\mathbf Q) \propto \left\lvert \sum_{n=1}^N e^{-i \mathbf{Q} \cdot \mathbf{R}_n}\right\rvert^2 \cdot \Biggl\lvert \underbrace{\sum_{j=1}^D f_j (\mathbf Q)e^{-i \mathbf{Q} \cdot \mathbf{d}_j}}_{\text{ Structure factor}}\Biggr\rvert^2$$ I have basically gone through the first ten pages of google but I still have no clue what these formulas represent mathematically or geometrically. Is there an intuitive explanation of these formulas? Why are they so important in X-ray diffraction? Answer: $k$ is the incoming beam, $k'$ is the reflected beam, expressed as wave vectors in the reciprocal lattice, which makes $Q=k-k'$ represent a particular plane in reciprocal space. If you assume that the diffracting beam is essentially a plane wave when it elastically scatters off of multiple sites within the crystal, the kinematics of the Laue equation ensures that energy and momentum are conserved. Your first equation is an expression of this, and it recognizes that the intensity of the beam at $Q$ is the Fourier transform of the diffracted beams from the corresponding Ewald sphere point, integrated over the volume of the crystal. The weighting function, $\rho$, is the atomic scattering factor for each lattice point. If the crystal were made of a single element, such as gold, you might be done. However, many crystals share the same structure, but have different elements at the lattice points, such as NaCl; or they may be more complex, having an entire molecule at some or all of the lattice points of each cell.
This is what the second equation is for; it determines the structure factor for each point of the cell, based on the individual atomic or molecular scattering factors, $f_j$.
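To make the structure factor concrete, here is a small sketch (not from the original answer) that evaluates $\sum_j f_j\,e^{-2\pi i\,(hkl)\cdot\mathbf d_j}$ for a body-centred cell with two identical atoms and a made-up constant scattering factor; reflections with $h+k+l$ odd cancel exactly, the familiar bcc systematic absence:

```python
import numpy as np

def structure_factor(hkl, basis, f):
    """Sum of f_j * exp(-2*pi*i * (hkl . d_j)) over the atoms of one cell."""
    return sum(fj * np.exp(-2j * np.pi * np.dot(hkl, dj))
               for fj, dj in zip(f, basis))

# Body-centred cell: identical atoms at (0,0,0) and (1/2,1/2,1/2),
# with an assumed constant scattering factor f_j = 1 for simplicity.
basis = [np.array([0.0, 0.0, 0.0]), np.array([0.5, 0.5, 0.5])]
f = [1.0, 1.0]

print(abs(structure_factor(np.array([1, 0, 0]), basis, f)) ** 2)  # ~0: extinct
print(abs(structure_factor(np.array([1, 1, 0]), basis, f)) ** 2)  # 4: allowed
```

With two different elements on the same two sites (different $f_j$), the odd reflections no longer vanish but become weak, which is exactly how the structure factor distinguishes, say, CsCl-type ordering from a bcc metal.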
{ "domain": "physics.stackexchange", "id": 30667, "tags": "solid-state-physics, diffraction" }
Dictionary (with silly hash)
Question: Based on this question (but fixed so it can run). I have kept the code as close to the original as possible. I have Marked all the changes with Loki (should be easy to spot). Code here is written in the style of the original author. Dic.h #ifndef DIC_H #define DIC_H #include <iostream> #include <string> using namespace std; typedef string K; //key type //!!! typedef double V; //value type //!!! /* Loki Added */ namespace HA { int hash(std::string key); // There are plenty of things already called hash. // This was confusing the compiler so I had // to put it in its own namespace to make sure // I was using the correct one. } /* Loki END*/ class Dic{ public: Dic( ); //an empty Dic //The BIG 3: operator=, copyconstructor, destructor ~Dic(); Dic( const Dic & src ); //copy con Dic & operator=( const Dic & rhs ); //assignment op //return null or a pointer at the value in this Dic V * find(K key); //returns true if it ADDED (false if modified) bool addOrMod(K key, V val); int size( ); private: class DicNode{public: K key; V val; DicNode * nxt; }; int n; int SZ; DicNode** table; // DicNode* [ ] an array of linked lists of DicNodes int dichash(K); //private hash function void deallocate(); //private helper, used by the destructor. }; #endif dic.cpp #include "Dic.h" #include <string> int Dic::dichash(K key){ //DEPENDS ON K. This one assumes K is string /*Loki Change */ return std::abs(HA::hash(key)) % SZ; /*Loki END*/ } void Dic::deallocate(){ //separate member called by destruc and op= for(int i=0; i<SZ; i++){ //get rid of chain i DicNode * p = table[i]; while(p!=0){ DicNode * kill = p; p = p->nxt; delete kill; } } delete [] table; } V * Dic::find(K key) { /* Loki Wrote */ int hash = dichash(key); DicNode* f = table[hash]; while(f != nullptr && f->key != key) { f = f->nxt; } return f == nullptr ? 
nullptr : &f->val; /* Loki END */ } bool Dic::addOrMod(K key, V val){ /* Loki Wrote */ V* current = find(key); if (current != nullptr) { *current = val; return false; // false indicates value was not added just modified. } int hash = dichash(key); table[hash] = new DicNode{key, val, table[hash]}; return true; // true indicates new value was added to the table. /* Loki END */ } int Dic::size(){ return n; } //----------------------------------------------------------------- //BIG 3 Dic::Dic() /*Loki Add Must initialize all the members. Otherwise your object will have random values in it. */ : n(0) , SZ(13) , table(new DicNode*[SZ]()) /*Loki END*/ {} Dic::Dic( const Dic & src ) /*Loki Add Must initialize all the members. Otherwise your object will have random values in it. Having random values is not very useful when you call the assignment operator which calls the deallocate() method. */ : n(0) , SZ(0) , table(nullptr) /*Loki END*/ { *this = src; //Uses operator= defined for Dic } Dic & Dic::operator=( const Dic & rhs ) { //assignment op if(this == &rhs){ cout<<"goofy"<<endl; return *this; } // clean up any memory allocated by this this->deallocate(); // initialize this n,SZ,table to be like rhs this->n=rhs.n; this->SZ = rhs.SZ; this->table=new DicNode*[SZ]; // duplicate the DicNode chains for(int i=0;i<SZ;i++){ DicNode * q = rhs.table[i]; if(q==0){ this->table[i]=0; }else{ this->table[i]=new DicNode; //note: NOT DicNode() DicNode * p = this->table[i]; while(true){ //loop inv: *p is blank node corresp to *q p->key = q->key; p->val = q->val; if(q->nxt==0)break; q=q->nxt; p->nxt=new DicNode; p=p->nxt; } p->nxt=0; } } return *this; } namespace HA { int hash(string s) { int ret=0; /*Loki Remove The SZ member variable is what you want to use. This is because you want to take the hash and convert it into an index into your table. The string length is not the correct value to use as the modulus. 
int SZ = s.length(); * Loki END*/ for (int i=0; i<s.size(); i++) { ret+=s[i]-'A'; } /*Loki Change return ret%SZ; *Loki End*/ /* Note returning a hash of the string. * It is not modulo anything. So * we also need to modify all call points to * use modulo if required. * I am not going to comment those changes. */ return ret; } } Dic::~Dic(){this->deallocate();} Test Code int main() { Dic dict; dict.addOrMod("Key", 5); std::cout << dict.find("Key") << "\n"; std::cout << *dict.find("Key") << "\n"; } Answer: Dict.h Do not do this. using namespace std; Especially in a header file. You are polluting the namespace for everybody that uses your header file (and you will break people's code). Some people say it's OK to use this in a source file. I disagree with even that as it causes problems in anything more than a toy program. But to make sure you don't get into bad habits don't even use it in toy programs. Use the explicit prefix std::. It was named std rather than standard explicitly so it would not be a burden to be explicit. See: Why is "using namespace std;" considered bad practice? When passing objects around, try to pass them by const reference. int hash(std::string key); // Prefer to do this: int hash(std::string const& key); This stops a copy of the object being made. Also the function cannot modify the original because it was passed by reference. This is a bit untidy: Dic( ); //an empty Dic //The BIG 3: operator=, copyconstructor, destructor ~Dic(); Dic( const Dic & src ); //copy con Dic & operator=( const Dic & rhs ); //assignment op Group constructors together. Put Destructor at the end of the list. Assignment operator after that. The first thing people check when you are doing memory management is to make sure you have the rule of three correctly implemented so put it all together. An addition you should think about is adding move semantics to your class (move constructor and move assignment operator). 
// Too much vertical space (and useless comments) I would have done: Dic(); Dic(const Dic & src); Dic(Dic&& src); ~Dic(); Dic& operator=(Dic const& rhs); Dic& operator=(Dic&& rhs); Not a great interface. But it is sufficient. Remember to pass object parameters by const reference. V * find(K key); // Change to: V* find(K const& key); bool addOrMod(K key, V val); // Change to: bool addOrMod(K const& key, V val); When you have methods that don't change the state of your object you should mark them const. This will allow you to retrieve information from a const object. So passing it by const reference will still allow you to query from it: int size(); // This should have a const on it: int size() const; // In addition to the normal find() you specify above. // It is also worth having a const version that allows your users to read from the object. V const* find(Key const& key) const; // Notice that the method is const and the value I return is const. // So you can read it but not modify the content. If you are going to declare an all-public class you may as well make it a struct (it's the same thing). class DicNode{public: K key; V val; DicNode * nxt; }; I changed the order of your member variable declarations. In the constructor they are initialized in the same order that you declare them, which is important. int n; int SZ; DicNode** table; // Note: the * is part of the type. // So place it with the type. Not particularly keen on an array of arrays. But you are building a container. But because you have no method to resize it I would have personally used std::array or potentially a std::vector. To prevent confusion I renamed this function from hash, as it does not actually return a hash but an index into the table (using the hash to calculate the index). int dichash(K); //private hash function Dict.cpp You have a strange order to your methods. Personally I always put the Constructors/Destructors first. 
People need to know how the class is set up before other functions make any sense. So putting the constructors first will give people a context on how the other functions will work. You did not implement this function: int Dic::dichash(K key){ //DEPENDS ON K. This one assumes K is string /*Loki Change */ return std::abs(HA::hash(key)) % SZ; /*Loki END*/ } Which is the main reason it did not work. OK: This code looks like it should work. void Dic::deallocate(){ //separate member called by destructor and op= for(int i=0; i<SZ; i++){ //get rid of chain i DicNode * p = table[i]; while(p!=0){ DicNode * kill = p; p = p->nxt; delete kill; } } delete [] table; } But it looks very untidy, and could have been written much more cleanly with a nested for loop. Others may suggest you use smart pointers here. But I am going to disagree with them. There are two basic types of memory management in C++: smart pointers and containers. There is no point in implementing containers in terms of smart pointers. The container is supposed to do the memory management. A Dictionary (hashed or otherwise) is a container so the memory management is well defined and contained. Another three functions you did not implement: V * Dic::find(K key) { bool Dic::addOrMod(K key, V val){ int Dic::size(){ So we get to the Big 3!!!! Your main problem is that you did not initialize your members in the constructors. Do not assume they will be zero initialized. You must usually do this explicitly. Dic::Dic() The copy constructor you implement in terms of the assignment operator. Nice idea. But the wrong way around. You should implement the assignment operator in terms of the copy constructor (it's called the Copy and Swap Idiom (easy to find on Google or SO)). Also because you did not correctly initialize the object before calling the assignment operator things were going horribly wrong. Dic::Dic( const Dic & src ) When copying an object, you should do it in three distinct phases. 
This allows you to provide the strong exception guarantee. Dic & Dic::operator=( const Dic & rhs ) Copy the object into a temporary. It is important to put it into a temporary. Because if things go wrong during the copy you still have the original state of the object to fall back on. Swap the content of the temporary with the state of the current object. Swap in C++ is a fundamental property that you should implement as a very simple transfer of safe objects (POD and pointers). Now that you have swapped the state (and the object is all good) you can destroy the old state. Which should now be in the temporary object. If we look at your copy we can point out the danger spots. // clean up any memory allocated by this this->deallocate(); At this point your object is now in a very dangerous state. All its data has been deleted but the pointers still point at that memory. If you throw an exception that is caught downstream this object would cause the program to potentially crash (or at least have a high likelihood of undefined behavior). Back to more general comments. One statement per line, please. Don't make your code hard to read. this->n=rhs.n; this->SZ = rhs.SZ; this->table=new DicNode*[SZ]; Some extra white space around the gibberish would go a long way in making this easier to read. for(int i=0;i<SZ;i++){ Personally I would have created another method for copying a list of elements. DicNode * q = rhs.table[i]; Hashing This is a terrible hash function: int hash(string s) { int ret=0; /*Loki Remove The SZ member variable is what you want to use. This is because you want to take the hash and convert it into an index into your table. The string length is the correct value to use as the modulus. int SZ = s.length(); * Loki END*/ for (int i=0; i<s.size(); i++) { ret+=s[i]-'A'; } /*Loki Change return ret%SZ; *Loki End*/ /* Note returning a hash of the string. * It is not modulo anything. So * we also need to modify all call points to * use modulo if required. 
* I am not going to comment those changes. */ return ret; } To be fair, writing your own hash function is really hard, and thus a bad idea. Unless you happen to have a PhD in Maths this is not an easy task. It is better to go onto the internet and look up existing hashing algorithms. There are some relatively simple ones that give a reasonable distribution for small pet projects like this. Another thing to note about hashes and hash tables: prime numbers are key. So it is probably a good idea to make your table have a prime number of buckets (that is why I used 13 above). But check this against the hashing algorithm you eventually pick.
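To see just how bad the character-sum hash is: it is order-independent, so every pair of anagrams is a guaranteed collision. A quick sketch (in Python for brevity) compares it against djb2, one of the simple, reasonably distributed string hashes you can find online:

```python
def bad_hash(s):
    # The reviewed code's approach: an order-independent sum of character
    # codes, so any two anagrams always collide.
    return sum(ord(c) - ord('A') for c in s)

def djb2(s):
    # A simple, widely used string hash (Daniel Bernstein); order-sensitive,
    # truncated here to 32 bits.
    h = 5381
    for c in s:
        h = (h * 33 + ord(c)) & 0xFFFFFFFF
    return h

print(bad_hash("listen") == bad_hash("silent"))  # True: guaranteed collision
print(djb2("listen") == djb2("silent"))          # False
```

Either hash would still be reduced modulo the (ideally prime) bucket count at the call site, as the review recommends; the difference is how evenly the raw values spread.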
{ "domain": "codereview.stackexchange", "id": 10291, "tags": "c++, hash-map" }
How can I measure the amount of voltage stored in a sealed lead-acid battery?
Question: I know that the simple way to measure the voltage stored in a lead-acid battery is to simply measure across the positive and negative terminals using a voltmeter. In my case, I think that my battery has a built-in charge controller and it is sealed like the picture below. How can I effectively measure it without opening? If there is no way to measure it, how do I open this thing? I tried to open it using a flat screwdriver and it leaves a dented mark. Answer: The DC 12v output may or may not be direct from the battery; if it is current limited then probably not - you will need to check the spec sheet. The top is probably screwed down as the pv controller is under there. The screws to get access are hidden under the orange graphic with all the labels; removing it without damage depends on how strong the adhesive is. One way is to rub your thumb over the surface to find the screw holes then just uncover those... but if they fitted plastic hole covers you may not be lucky.
{ "domain": "engineering.stackexchange", "id": 2896, "tags": "battery" }
Weinberg's Coleman-Mandula theorem proof sufficient condition for isomorphism?
Question: In Weinberg's QFT Volume 3 book on Supersymmetry, he presents his own proof of the Coleman-Mandula theorem. As part of the proof, he proves that the only possible internal symmetry generators must form a direct sum of compact semi-simple Lie algebras and U(1) algebras. Label these internal symmetry generators $B_{\alpha}$. Their action on multi-particle states is as follows, using Weinberg's notation: $$B_{\alpha}|pm,qn,...\rangle=\sum_{m'}(b_{\alpha}(p))_{mm'}|pm',qn,...\rangle+\sum_{n'}(b_{\alpha}(q))_{nn'}|pm,qn',...\rangle+...$$ where $b_{\alpha}(p)$ are finite Hermitian matrices which define the action on single particle states. The result can be easily proven for the single particle matrices, and the remainder of the proof is to show that there is an isomorphism between the $b_{\alpha}(p)$ and the $B_{\alpha}$. Both of these have the same commutation relations: $$[B_{\alpha},B_{\beta}]=iC^{\gamma}_{\alpha\beta}B_{\gamma}$$ $$[b_{\alpha}(p),b_{\beta}(p)]=iC^{\gamma}_{\alpha\beta}b_{\gamma}(p)$$ This provides a homomorphism between the two, i.e. $f:b_{\alpha}(p)\to B_{\alpha}$. Weinberg states: "For it $(f)$ to be an isomorphism would require that whenever $\sum_{\alpha}c^{\alpha}b_{\alpha}(p)=0$ for some coefficients $c^{\alpha}$ and momentum $p$, then $\sum_{\alpha}c^{\alpha}b_{\alpha}(k)=0$ for all momenta $k$, which is equivalent to the condition $\sum_{\alpha}c^{\alpha}B_{\alpha}=0$." Question: Why is this a sufficient condition for there to be an isomorphism between the two generators? For reference see section 24.B of Weinberg's Quantum theory of fields. Answer: Let us instead look at the inverse map $g_p: B_\alpha \mapsto b_\alpha(p)$. The claim we want to prove is that $g_p$ is an isomorphism for all $p$. A map between Lie algebras ($g_p$) is an isomorphism if and only if it is an isomorphism of vector spaces and it respects the Lie bracket. 
You already know that it respects the Lie bracket because the $b$ and $B$ have the same commutation relations and you know it is linear, i.e. a homomorphism, because we defined it only on the generators $B$ and are implicitly extending it linearly to the rest of the algebra. A homomorphism of vector spaces is an isomorphism if and only if it is both surjective and injective. You already know it is surjective because the image of $g_p$ is generated by the images of the $B_\alpha$, which are all of the $b_\alpha(p)$, and since the $b_\alpha(p)$ are the generators of their algebra, the image is the whole algebra. A homomorphism of vector spaces is injective if and only if its kernel is trivial, i.e. contains only the zero vector. An element $c^\alpha B_\alpha$ is in the kernel of $g_p$ if and only if $g_p(c^\alpha B_\alpha) = c^\alpha g_p(B_\alpha) = c^\alpha b_\alpha(p) = 0$. So $g_p$ is only injective for all momenta $p$ if $c^\alpha b_\alpha(p) = 0$ for that single $p$ implies $c^\alpha B_\alpha = 0$. By definition, we have that $B_\alpha$ acts as a sum of actions of $b_\alpha(k_i)$ for some collection of momenta $k_i$. So if $c^\alpha b_\alpha(p) = 0$ for a fixed $p$ implies $c^\alpha b_\alpha(q) = 0$ for all $q$, then it implies in particular that all $c^\alpha b_\alpha(k_i) = 0$, and therefore $c^\alpha B_\alpha = 0$, which in turn shows $g_p$ is injective, which in turn means $g_p$ is an isomorphism of Lie algebras, which is what we wanted to show.
{ "domain": "physics.stackexchange", "id": 55917, "tags": "quantum-field-theory, special-relativity, symmetry, s-matrix-theory" }
How were the Navier-Stokes equations found in the first place if we can't solve them?
Question: I was reading up on the Clay Institute's Millennium prizes in mathematics. And I noticed the Navier-Stokes equations were described as minimally understood. As far as I was taught in physics a few weeks ago (SCQF Level 6), they are used but solutions to them are hard to find in three dimensions because they require large amounts of computational power due to the complexity of the equations and so approximations are used. How were the equations discovered in the first place if we can't solve them? Answer: I just wanted to give a more concrete idea of how we know these equations even though we have trouble proving analytical theorems about them. Stuff moving in space Consider any stuff (as in, any conserved quantity) distributed over space. We know that we can describe this with a time-dependent density field $\rho(x,y,z,t)$ such that any little volume $dV$ has some amount of stuff $\rho~dV$ at that point. We also know that this stuff might be flowing around over time and we formally treat this by saying that we want to know the flow through a little flat surface of area $dA,$ which is oriented in the $\hat n$ direction: that is, the surface is normal to $\hat n$ and "positive" flow will be in the $+\hat n$ direction. Combined together this is a vector $d\mathbf A = \hat n~dA$ and there is some vector field $\mathbf J(x,y,z,t)$ such that the amount of stuff which flows through this area over a time $\delta t$ is $\delta t~d\mathbf A\cdot\mathbf J(x,y,z,t).$ With $\rho$ and $\mathbf J$ we know almost everything. Since the stuff is conserved, we can say that in this box of volume $dV,$ if the amount of stuff in the box changes, it is either because there was a net flow into or out of the sides of the box, so we are doing some $\iint d\mathbf A\cdot \mathbf J$ which turns out by Gauss's theorem to be just $dV~\nabla\cdot\mathbf J,$ or else it came from outside the system we're studying, so there is some term $dV~\Phi$. 
Equating that to the change in the box $dV~(\partial\rho/\partial t)$ gives the simple starting equation $${\partial \rho\over\partial t} = -\nabla\cdot \mathbf J + \Phi.$$Now when we've got a flow field $\mathbf v(x,y,z,t)$ dictating how a fluid flows, the most dominant transport term is that the box flows downstream, $\mathbf J = \rho~\mathbf v + \mathbf j$ for some deviation $\mathbf j.$ Usually the principal deviation then comes from Fick's law, that there is a flow proportional to the difference in density between adjacent points, $\mathbf j = -D~\nabla \rho,$ but there may be more complex terms there; in particular we shall see pressure here. Conservation of momentum The key point here is that $p_x$, the momentum in the $x$-direction, is a stuff. It is a known conserved quantity. It is conserved as a direct result of Newton's third law which turns out, under Emmy Noether's celebrated theorem, to be the same as the statement that the laws of physics are the same at position $x$ as they are at position $x+\delta x$, for a suitable definition of "laws of physics." We are pretty sure about this, and we are pretty sure that the momentum of the fluid itself in the $x$-direction must therefore also be conserved, and this is $\rho~v_x$ where I am shifting definitions a bit on you: $\rho$ now refers to the mass density field and $v_x$ still refers to the fluid velocity in the $x$-direction. Now a flow of momentum per unit time, which we said is what $\mathbf J\cdot d\mathbf A$ is, is a force. Therefore $\mathbf J$ naturally takes the form of a force per unit area in this context. 
Now we know that Newton's expression for viscous forces was in fact to write $F_x = \mu~A~v_x/y$ where I am moving a surface of a fluid at speed $v_x$ at a perpendicular distance $y$ from a place where it is being held still; it will not surprise you at all to see that this is very similar to Fick's law and can be written as just $\mathbf j_\text{viscosity} = -\mu~\nabla v_x.$ To that we also need to add the effects of pressure, as a lowering in pressure also drives a fluid motion; this is a little bit harder to reason out but it takes the form that we can imagine a constant flow in the $x$-direction of $p~\hat x$ and then deviations in this flow would produce the change in momentum per unit time $-\partial p/\partial x$ through this divergence term. (That's a little bit of a sloppy way to show that we are talking about a stress tensor and part of it is $p~\mathbf 1$, the identity matrix multiplied by the pressure.) Combining these two components of $\mathbf j$ we have $${\partial \over\partial t}(\rho~v_x) = -\nabla\cdot (\rho~v_x~\mathbf v - \mu \nabla (v_x)) - \frac{\partial p}{\partial x} + \Phi_x.$$The external contribution $\Phi$ comes from forces influencing the fluid from outside, like gravity. In the Navier-Stokes equations the Millennium Prize has restricted itself to a considerably simpler case where $\nabla\cdot\mathbf v = 0$ and $\rho$ and $\mu$ are constant, which we call "incompressible flow." This is generally a valid assumption when you're interacting with a fluid at speeds much lower than the speed of sound in that fluid; then the fluid would rather move away from you than be compressed into any one place. In this case we can commute $\rho$ out of all of the spatial derivatives and then divide by it, so that the only impact is to rewrite $\nu=\mu/\rho$ and $\lambda=p/\rho$ and $a_x=\Phi_x/\rho$, eliminating the unit of mass from the equation. 
For $v_x$ we have specifically, $${\partial v_x\over\partial t} + \mathbf v\cdot\nabla v_x - \nu \nabla^2 v_x = - \frac{\partial \lambda}{\partial x} + a_x,$$ and then we can extend the above analysis to the directions $y,z$ too to find, $$\dot{\mathbf v} + (\mathbf v\cdot\nabla)\mathbf v - \nu \nabla^2 \mathbf v = - \nabla \lambda + \mathbf a.$$This is the version of the Navier-Stokes equations written down in the Millennium Prize; we have a very straightforward explanation of this as "The flow of momentum in a small box flowing downstream in an incompressible homogeneous Newtonian fluid is due entirely to Fick's-law diffusion of the momentum due to the viscosity of the fluid, plus a force due to pressure gradients inside the fluid, plus forces imposed by the external world." Why this equation? The understanding of the physics of how we got to this equation is not in question. What's at stake is the mathematics of this equation, in particular this $(\mathbf v \cdot \nabla) \mathbf v$ term which contains $\mathbf v$ twice and thereby makes it a nonlinear partial differential equation: given two flow fields $\mathbf v_{1,2}$ which are valid, in general $\alpha \mathbf v_1 + \beta \mathbf v_2$ will not solve this equation, removing our most powerful tool from our toolbox. Nonlinearity turns out to be unbelievably hard to solve in general, and essentially the Clay Mathematics institute is giving the million-dollar prize for anyone who cracks nonlinear differential equation theory strongly enough that they can answer one of the more basic mathematical questions about these Navier-Stokes equations, as a "most basic example" for their new theoretical toolkit. The idea of the Clay prizes is that they are specific problems (which is important for awarding a prize for their solution!) but that they seem to require powerful new general ideas which would allow our mathematics to go into places where it has historically been unable to go. 
You see this for example in $\text{P} = \text{NP}$: it's a very specific question, but to answer it we would seem to need to have a better handle on "here's a classification of the set of stuff which computers can do, and here are some things which a computer can't efficiently do" which nobody has yet been able to convincingly present. A new toolbox which could resolve this "stupid little" question would therefore profoundly improve our ability to work on a huge class of related problems in computation.
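The failure of superposition caused by the $(\mathbf v\cdot\nabla)\mathbf v$ term can be demonstrated on its simplest one-dimensional cousin, the viscous Burgers equation $u_t + u\,u_x = \nu\,u_{xx}$. The toy finite-difference sketch below (my own, with made-up grid and viscosity parameters, not part of the original answer) evolves two initial fields and their sum; if the equation were linear the evolved sum would equal the sum of the evolved fields, and it visibly does not:

```python
import numpy as np

def step(u, dx, dt, nu):
    """One forward-Euler step of u_t + u*u_x = nu*u_xx on a periodic grid."""
    ux  = (np.roll(u, -1) - np.roll(u, 1)) / (2 * dx)
    uxx = (np.roll(u, -1) - 2 * u + np.roll(u, 1)) / dx**2
    return u + dt * (-u * ux + nu * uxx)

n, nu, dt = 256, 0.05, 1e-3          # assumed grid size, viscosity, time step
x = np.linspace(0, 2 * np.pi, n, endpoint=False)
dx = x[1] - x[0]

a, b = np.sin(x), np.cos(2 * x)      # two initial velocity fields
c = a + b                            # and their superposition
for _ in range(1000):
    a, b, c = step(a, dx, dt, nu), step(b, dx, dt, nu), step(c, dx, dt, nu)

# If the equation were linear, these would agree to round-off; they don't.
print(np.max(np.abs((a + b) - c)))
```

The discrepancy comes entirely from the cross terms $u_1\partial_x u_2 + u_2\partial_x u_1$ that the nonlinear advection generates when the fields are evolved together.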
{ "domain": "physics.stackexchange", "id": 43965, "tags": "fluid-dynamics, computational-physics, history, navier-stokes" }
Do processes involving synthesis need heat?
Question: I came across a textbook that stated a combination process that required heat. There was no explanation whether it was a prerequisite for a combination process to have heat, or whether it was just an example that happened to be grouped under "heat process chemical changes". I was doing a test paper that asked about 2 differences between thermal decomposition and combination processes, and one of the differences was that "combination processes do not need heat but thermal decomposition processes do". Would like to ask which interpretation is correct. From preliminary searches on Google it seems that the consensus is that any reaction that simply involves two separate substances combining in a reaction to become a single product is considered to be a combination reaction regardless of whether there is heat involved or not. Would like to know what is the correct interpretation. Answer: Definitely. Although combination processes may be exothermic, they do have activation energy. As an example, consider the reaction: $$\ce{H2 + I2 -> 2HI}$$ While the formation of two $\ce{H-I}$ bonds is quite energy releasing, you still have to go through breaking a $\ce{H-H}$ and an $\ce{I-I}$ bond. You need to supply heat to give the molecules enough thermal energy to break the bonds. A typical reaction like this one would have this energy-reaction coordinate graph: The reaction proceeds into a high-energy transition state, and then comes down to the low energy products. To reach the transition state however, you must supply heat. Edit: As @porphyrin mentioned, there are some reactions that have essentially zero activation energy. Combination of radicals may be an example: $$\ce{Cl. + Cl. -> Cl2}$$ This one has no activation energy because there is no unstable transition state involved. Another popular example is the basic neutralization reaction: $$\ce{H+ + OH- <=> H2O}$$ Here again, there isn't any highly unstable transition state.
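The role of heat in getting over the barrier can be quantified with the Arrhenius equation, $k = A\,e^{-E_a/RT}$. The sketch below uses an illustrative barrier of about $170\ \mathrm{kJ/mol}$ for $\ce{H2 + I2 -> 2HI}$ (an assumed round figure, not a measured value) to show how strongly heating accelerates a reaction with a substantial activation energy:

```python
import math

R = 8.314  # gas constant, J/(mol*K)

def arrhenius(A, Ea, T):
    """Arrhenius rate constant k = A * exp(-Ea / (R*T))."""
    return A * math.exp(-Ea / (R * T))

Ea = 170e3   # J/mol -- illustrative barrier, assumed for this sketch
ratio = arrhenius(1.0, Ea, 700.0) / arrhenius(1.0, Ea, 600.0)
print(ratio)  # heating from 600 K to 700 K speeds the reaction up ~100-fold
```

For a barrierless radical recombination ($E_a \approx 0$), the same formula gives a ratio of 1: temperature barely matters, matching the edit above.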
{ "domain": "chemistry.stackexchange", "id": 8296, "tags": "synthesis, heat" }
Are astronomical studies with no "proprietary period" for data organized differently than studies with "proprietary period"?
Question: Let's assume you are a group of astronomers that proposed an observation by a telescope. The observation has been conducted, so you have got the data you wanted. But the data has no proprietary period, so everybody can access it over the internet. If there are potential discoveries in the data, you would like to maximize the chance that your group will be the first to find them. It's you who proposed the observation; it would be a pity if others reported the finding ahead of you. I can guess the data analysis will be more intensive compared to the case where there is a proprietary period during which only you can access the data. Also you may use new improved pipelines for data analysis which you do not make available to others. These are my guesses. But in reality - do studies differ much for open astronomical data compared to proprietary data?
{ "domain": "astronomy.stackexchange", "id": 6544, "tags": "data-analysis" }
Online materials on Support Vector Machine
Question: I am doing my final year project on Image Processing. I want to know about the Support Vector Machine. Please suggest some links for it. Thanks and regards in advance. Answer: Video Lectures: http://nptel.iitm.ac.in/courses/117108048/ Notes: http://www.nptel.iitm.ac.in/courses/106108057/ (Read from Lesson 25. These notes helped me very much)
{ "domain": "dsp.stackexchange", "id": 1326, "tags": "image-processing, algorithms" }
CSV file reader in PHP that supports large files (>15k lines)
Question: I have written the following function in PHP to read a CSV file. It works correctly for small files. However, if I try to read in files that are bigger than 15k lines, it takes between 1–2 minutes to process them. How can I optimize this code to make it run faster on large files? Is there anything else that I should improve? function read_csv($file){ $return_waarde = array(); if(!is_null($file) && !is_empty($file)){ $header = str_getcsv(utf8_encode(array_shift($file)), ';'); $header_trimmed = array(); foreach($header as $value){ $trim = trim($value); if(!in_array($value, $header_trimmed)){ $header_trimmed[] = $trim; } else { $header_trimmed[] = $trim . "1"; } } ini_set('memory_limit', '512M'); ini_set('max_execution_time', '180'); foreach($file as $record) { if(!in_array($record,$return_waarde)){ $return_waarde[] = array_combine($header_trimmed, str_getcsv(utf8_encode($record), ';')); } } } else { $return_waarde = "there is no file"; } return $return_waarde; } Answer: Performance As performance is your main concern, let's face this first. To complete the example CSV-file with ~36k lines your original script needs around 139s*. The main bottlenecks are in_array: if (!in_array($record,$return_waarde)) {} and array_combine: $return_waarde[] = array_combine($header_trimmed, str_getcsv(utf8_encode($record), ';')); As you want an associative array, we can't get rid of array_combine but we can improve the very expensive and slow test from in_array. 
Idea Instead of checking the fast-growing and complex result array for existence of the newly created associative array, you can do this: create a second array create a hash of the current dataset/row check this array's keys for the existence of the latest hash using isset, which is faster than in_array only if the hash is not found, store it, run array_combine on the row and append the result as well Result while (false !== ($data = fgetcsv($handle, 1000, ','))) { $hash = md5(serialize($data)); if (!isset($hashes[$hash])) { $hashes[$hash] = true; $values[] = array_combine($headerUnique, $data); } } With this improvement the script processes all 36k lines in ~0.5s now*. Seems a little faster. ;) Unique entries in the result Even though this is solved by using the hash now, let me point out a flaw in your logic: if (!in_array($record, $return_waarde)){ $return_waarde[] = array_combine($header_trimmed, str_getcsv()); } This will never find any duplicates, because you check for existence of the indexed array $record but afterwards you insert a different associative array. Unique header names In the beginning you create unique names for duplicate entries in the header row: if(!in_array($value, $header_trimmed)){ $header_trimmed[] = $trim; } else { $header_trimmed[] = $trim . "1"; } If you have a column name more than two times, you'll end up with this, probably unintended, result: ['column', 'column1', 'column1'] You can create a function to make the names truly unique, e.g.: function unique_columns(array $columns):array { $values = []; foreach ($columns as $value) { $count = 0; $value = $original = trim($value); while (in_array($value, $values)) { $value = $original . '-' . ++$count; } $values[] = $value; } return $values; } This will result in ['column', 'column-1', 'column-2'] Return value of read_csv Currently your function read_csv() returns either a string or an array. The function should always return an array.
You can even make the parameter and return value types more strict: function read_csv(string $file): array {} Also try to exit early when something goes wrong instead of nesting if-statements. If you actually want to do something when an error occurs, throw an exception: if (!$file) { throw new Exception('File not found: ' . $file); } Final result Finally let's make this function more versatile by adding the line length and delimiter as optional parameters. function read_csv(string $file, int $length = 1000, string $delimiter = ','): array { $handle = fopen($file, 'r'); $hashes = []; $values = []; $header = null; $headerUnique = null; if (!$handle) { return $values; } $header = fgetcsv($handle, $length, $delimiter); if (!$header) { return $values; } $headerUnique = unique_columns($header); while (false !== ($data = fgetcsv($handle, $length, $delimiter))) { $hash = md5(serialize($data)); if (!isset($hashes[$hash])) { $hashes[$hash] = true; $values[] = array_combine($headerUnique, $data); } } fclose($handle); return $values; } * For testing I used an example CSV-file with over 36.000 lines from the site SpatialKey. I duplicated a few column names and added at least one duplicate line. My environment is the latest MAMP running PHP 7.1.1. The time was measured using: $start = microtime(true); $x = read_csv('test.csv'); print microtime(true) - $start;.
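The hash-set dedup idea is language-agnostic; a rough equivalent in Python for illustration only, using the standard csv module rather than PHP's fgetcsv (the function name and structure here mirror the PHP version, not any real library API):

```python
import csv
import hashlib

def read_csv(path, delimiter=','):
    """Read a CSV into a list of dicts, skipping duplicate rows
    via a hash set (the same idea as the PHP version above)."""
    seen = set()
    rows = []
    with open(path, newline='') as fh:
        reader = csv.reader(fh, delimiter=delimiter)
        header = next(reader, None)
        if header is None:          # empty file: return early
            return rows
        for record in reader:
            key = hashlib.md5(repr(record).encode()).hexdigest()
            if key not in seen:     # set lookup is O(1), like isset()
                seen.add(key)
                rows.append(dict(zip(header, record)))
    return rows
```

As in the PHP version, the set membership test replaces the expensive scan of the growing result list.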
{ "domain": "codereview.stackexchange", "id": 27760, "tags": "php, file, csv, performance" }
How long does an over contact binary star system last?
Question: I read recently about VFTS 352, an overcontact binary star system where both stars have roughly equal mass. All of the reports I've read (in mass-media type publications) have said that the system has one of two fates: either the two stars will merge, or they'll supernova. But when will this happen? The Wikipedia page for contact binaries says that they have a lifespan of millions to billions of years, but doesn't say if that's different for overcontact binaries. It also says that they're often confused with common envelopes, which have a lifespan of months to years, and I'm not sure where in that spectrum an overcontact lies (or really what the distinction is, since the page for contact binaries says they share an envelope, which sounds like the definition of a common envelope). I'm also not sure whether the fact that both stars have roughly equal mass affects the lifespan. The mass-media articles I've read have implied that the merger-or-supernova is happening soon, but I don't know if this is on a human scale (months) or galactic scale (millions of years). Answer: Short answer: $t \lesssim 10^5\ \mathrm{years}$ (maybe) An "overcontact binary" is just another way of saying "common envelope binary". The two phrases are exactly the same and it's frustrating that the authors of the VFTS 352 paper decided to create their own convention - as if astrophysical classifications weren't confusing enough! A contact binary exists on timescales predominantly dependent on stellar evolution, so figuring out how long a contact binary will exist is heavily dependent on the mass, metallicity, and rotation of the primary star among other things. Deriving the timescale: Let's keep the scope to systems like VFTS 352, where the primary is massive and the binary has an orbital period less than 4 years (2.5 AU separation). In order to have a common envelope event, the stars must have overflowed their Roche lobes.
The radius for the Roche lobe of two point masses is \begin{equation*} r_L = \frac{0.49 q^{\frac{2}{3}}}{0.6q^{\frac{2}{3}}+\mathrm{ln}(1+q^{\frac{1}{3}})}a \end{equation*} where $a$ is the separation. For close binaries, the general observed trend is a high mass ratio $q=M_2/M_1$. So, if we assume $q=1$, then $r_L = 0.38a$. Hence, for a binary with $a<2.5$ AU, \begin{align*} r_L &\lesssim 1\ \mathrm{AU}\\ r_L &\lesssim 215\ R_{\odot} \end{align*} since $q=1$ is an upper bound on the Roche lobe radius. Now, performing some trivial rearrangement of the blackbody luminosity equation $L=4\pi\sigma_{SB}R^2T^4$, we find that \begin{equation*} R \approx 3.31\times10^{7} \bigg(\frac{L}{L_{\odot}}\bigg)^{\frac{1}{2}}\bigg(\frac{1\ \mathrm{K}}{T}\bigg)^2 \ R_{\odot}. \end{equation*} Massive stars typically have roughly constant luminosity, so we will choose $L\approx10^5\ L_{\odot}$. Hence, \begin{equation*} R\approx 1\times10^{10}\bigg(\frac{1\ \mathrm{K}}{T}\bigg)^2 \ R_{\odot} \end{equation*} The massive star needs to evolve until its radius is equal to that of the Roche lobe radius, so we find that the star reaches the common envelope phase for \begin{equation*} T \gtrsim 7000\ \mathrm{K} \end{equation*} Taking a peek at an HR diagram, this star varies from about $30000\ \mathrm{K}$ to $4000\ \mathrm{K}$ from ZAMS to end of main-sequence. Thus the primary spends roughly 3/4 of its time on the main-sequence not in the common envelope phase. Hence, this binary's common envelope phase lasts for, at most, 1/4 the primary's total lifetime, which is on the order of $10^6$ years. Thus, the upper bound for the timescale of a common envelope event with massive stars with negligible rotation is $\sim10^5$ years. Please note that this derivation does not take into account the bulging effect that occurs as the separation decreases. This will certainly lower this upper bound, but by how much I'm not sure. It could lower it by 1 year, or $10^5\ \mathrm{years}$. 
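The numbers in this derivation are easy to reproduce; here is a short illustrative script (Python), using the same $q = 1$ and $L \approx 10^5\ L_\odot$ assumptions as above:

```python
import math

def roche_lobe_radius(q, a):
    """Eggleton approximation for the Roche lobe radius (same formula as above)."""
    q23 = q ** (2.0 / 3.0)
    return 0.49 * q23 / (0.6 * q23 + math.log(1.0 + q ** (1.0 / 3.0))) * a

a = 2.5 * 215.0                  # separation: 2.5 AU expressed in solar radii
r_L = roche_lobe_radius(1.0, a)  # q = 1 gives the upper bound, ~0.38 a

# Invert R ~ 1e10 * (1 K / T)^2 R_sun to find the temperature at which
# the evolving primary fills its Roche lobe
T_fill = math.sqrt(1e10 / r_L)
print(r_L / a, T_fill)           # ~0.38 and ~7000 K
```

The output matches the $r_L \approx 0.38a$ and $T \gtrsim 7000\ \mathrm{K}$ values quoted above.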
Lower bounds to this timescale are entirely ambiguous and not particularly helpful in any physical context. The stars could be spinning really fast, have high or low metallicity, the binary could have a different mass ratio, there could be another binary close by, and there may be magnetic interaction (?). The list goes on! I'm sure there's something I left out.
{ "domain": "astronomy.stackexchange", "id": 2237, "tags": "stellar-evolution, binary-star" }
When are pseudo forces considered to do work?
Question: I was solving a question in which: there is an inclined wedge with a smaller block on the inclined side of it (all surfaces are smooth). The wedge is then given an acceleration by application of an external force on it, such that the block does not move on the wedge. In this case the smaller block will have 3 forces acting on it: Force of gravity Normal force by the wedge Pseudo force because of the accelerated wedge. If we were to apply the work-energy theorem for the block, it would be as follows: Work done by gravity + work done by normal + work done by pseudo force = Change in K.E Now work done by gravity will be zero since the block's displacement is only in horizontal direction (because of the acceleration of the wedge). So to find work done by normal, we must subtract the work done by pseudo force from the change in K.E, right? But in the solution, the work done by pseudo force was not taken into account and the work done by normal force was directly equated to the change in K.E. Please explain why this was done. Here is the diagram for your reference. Answer: Pseudo forces are not a physical effect- they appear when you analyze a system in an accelerating reference frame. If you analyze this system in the lab frame, there simply are no pseudo forces. Naturally, non-existent forces do no work. If you analyze this system in the accelerating frame, then the block does not move at all and the change in kinetic energy is trivially zero. But assuming you are interested in the change in kinetic energy of the block in the lab frame, you will have to change back to that frame to get the solution.
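In the lab frame, the two real forces already close the energy bookkeeping. A quick numerical check (a sketch with made-up values for the mass, wedge angle, and displacement):

```python
import math

# Hypothetical numbers: a 2 kg block on a frictionless 30-degree wedge
m, g = 2.0, 9.8
theta = math.radians(30)
a = g * math.tan(theta)  # acceleration needed to keep the block static on the wedge
d = 5.0                  # horizontal displacement, starting from rest

# Lab frame: the normal force has horizontal component N*sin(theta) = m*a
W_normal = m * a * d     # work done by the normal force over displacement d
W_gravity = 0.0          # no vertical displacement, so gravity does no work

v_squared = 2 * a * d    # kinematics: v^2 = 2*a*d from rest
delta_KE = 0.5 * m * v_squared

# The real forces account for the entire change in kinetic energy;
# no pseudo-force term appears in the lab frame.
print(W_normal + W_gravity, delta_KE)
```

The two printed numbers are equal, confirming that equating the work done by the normal force to the change in kinetic energy (as the solution did) is correct in the lab frame.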
{ "domain": "physics.stackexchange", "id": 87304, "tags": "homework-and-exercises, newtonian-mechanics, reference-frames, work, free-body-diagram" }
Applying a dataframe function to a pandas groupby object
Question: I am trying to apply a function to each group in a pandas dataframe where the function requires access to the entire group (as opposed to just one row). For this I am iterating over each group in the groupby object. Is this the best way to achieve this? import pandas as pd df = pd.DataFrame({'id': [1,1,1,1,2,2,2], 'value': [70,10,20,100,50,5,33], 'other_value': [2.3, 3.3, 7.4, 1.1, 5, 10.3, 12]}) def clean_df(df, v_col, other_col): '''This function is just a made up example and might get more complex in real life. ;) ''' prev_points = df[v_col].shift(1) next_points = df[v_col].shift(-1) return df[(prev_points > 50) | (next_points < 20)] grouped = df.groupby('id') pd.concat([clean_df(group, 'value', 'other_value') for _, group in grouped]) The original dataframe is id other_value value 0 1 2.3 70 1 1 3.3 10 2 1 7.4 20 3 1 1.1 100 4 2 5.0 50 5 2 10.3 5 6 2 12.0 33 The code will reduce it to id other_value value 0 1 2.3 70 1 1 3.3 10 4 2 5.0 50 Answer: You can directly use apply on the grouped dataframe and it will be passed the whole group: def clean_df(df, v_col='value', other_col='other_value'): '''This function is just a made up example and might get more complex in real life. ;) ''' prev_points = df[v_col].shift(1) next_points = df[v_col].shift(-1) return df[(prev_points > 50) | (next_points < 20)] df.groupby('id').apply(clean_df).reset_index(level=0, drop=True) # id other_value value # 0 1 2.3 70 # 1 1 3.3 10 # 4 2 5.0 50 Note that I had to give the other arguments default values, since the function that is applied needs to have only one argument. Another way around this is to make a function that returns the function: def clean_df(v_col, other_col): '''This function is just a made up example and might get more complex in real life. 
;) ''' def wrapper(df): prev_points = df[v_col].shift(1) next_points = df[v_col].shift(-1) return df[(prev_points > 50) | (next_points < 20)] return wrapper Which you can use like this: df.groupby('id').apply(clean_df('value', 'other_value')).reset_index(level=0, drop=True) Or you can use functools.partial with your clean_df: from functools import partial df.groupby('id') \ .apply(partial(clean_df, v_col='value', other_col='other_value')) \ .reset_index(level=0, drop=True)
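One caveat on the single-argument point: at least in recent pandas versions, GroupBy.apply forwards extra positional and keyword arguments to the applied function, so the wrapper can be avoided entirely. A sketch with the same example data:

```python
import pandas as pd

df = pd.DataFrame({'id': [1, 1, 1, 1, 2, 2, 2],
                   'value': [70, 10, 20, 100, 50, 5, 33],
                   'other_value': [2.3, 3.3, 7.4, 1.1, 5, 10.3, 12]})

def clean_df(df, v_col, other_col):
    prev_points = df[v_col].shift(1)
    next_points = df[v_col].shift(-1)
    return df[(prev_points > 50) | (next_points < 20)]

# GroupBy.apply passes *args/**kwargs through to the applied function
result = (df.groupby('id')
            .apply(clean_df, v_col='value', other_col='other_value')
            .reset_index(level=0, drop=True))
print(result)
```

This keeps clean_df's signature unchanged while still applying it per group.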
{ "domain": "codereview.stackexchange", "id": 34156, "tags": "python, pandas" }
Are there any artificial intelligence systems with an internal dialectical approach and multiple minds which develop within a community of nodes?
Question: In my estimation we have two minds which manage to speak to each other in dialectic through a series of interrupts. Thus at any one time one of these systems is the controlling master and inhabits our consciousness. The subordinate system controls context, which is constantly being "primed" by our senses and by the subordinate system's experience of our conscious thought process (see Thinking, Fast and Slow by Daniel Kahneman). Thus our thought process is constantly a driven one. Similarly, this system works as a node in a community and not as a standalone thing. I think what we have currently is "artificial thinking", which is abstracted a long way from what is described above. So my question is: "Are there any artificial intelligence systems with an internal dialectical approach, with drivers as conceived above, and which develop within a community of nodes?" Answer: There are a lot of systems that follow the ancient maxim: "Always two there are; no more, no less. A master and an apprentice." In reinforcement learning a class of such setups is called the actor-critic method. There you have a master, whose duty it is to create feedback for the actions of the apprentice, who acts in a given environment. This would be comparable to how a human learns some physical activity, like playing table tennis. You basically let your body do its thing, but your consciousness evaluates how good the result is. The setup of AlphaGo might be even closer to Kahneman's system 1 and system 2. AlphaGo has two neural networks which provide actions and evaluations (system 1, fast, intuitive, etc.) and the Monte Carlo tree search, which uses these actions and evaluations to prune a search tree and make a decision (system 2, deliberate, logical). In the end, this kind of structure will pop up again and again because it is often necessary to do some kind of classification or preprocessing on the raw data before your algorithm can be run on it.
You could frame the whole history of GOFAI (good old-fashioned AI) as the story of how scientists thought system 1 should be easy and system 2 should be doable in a few decades, where the reality is that we have no idea how difficult system 2 is because it turned out that system 1 is extremely difficult.
{ "domain": "ai.stackexchange", "id": 126, "tags": "philosophy" }
quick question about degeneracy
Question: For two non-interacting particles, with eigenfunctions $\phi_{n_1}(x_1)$ and $\phi_{n_2}(x_2)$ in a one-dimensional potential well $V(x)$ with $n = 1, 2, \ldots$ Consider two spinless non-identical particles: What is the degeneracy of the ground state and first excited state? I'm thinking it should be 0 for both. Now consider two spinless identical particles: I'm thinking degeneracy for ground state is 0, and first excited state is 2. (since ground state both n = 0, and first excited state it can either be 1,0 or 0,1) Answer: Spinless non-identical particles. Ground state: $(0,0) \implies \text{non-degenerate}$ First excited state: $(0,1) \text{ and } (1,0) \implies \text{doubly degenerate}$ Spinless identical particles. Ground state: $(0,0) \implies \text{non-degenerate}$ First excited state: $(0,1)+(1,0) \implies \text{non-degenerate}$
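The counting can be made concrete by enumerating two-particle levels. Here single-particle states are labelled $n = 1, 2, \ldots$ with energies proportional to $n^2$ (an illustrative sketch, not tied to any particular well depth):

```python
from itertools import product
from collections import defaultdict

n_max = 4

# Distinguishable particles: every ordered pair (n1, n2) is a distinct state
levels = defaultdict(list)
for n1, n2 in product(range(1, n_max + 1), repeat=2):
    levels[n1**2 + n2**2].append((n1, n2))

energies = sorted(levels)
print(levels[energies[0]])  # ground state: [(1, 1)] -> non-degenerate
print(levels[energies[1]])  # first excited: [(1, 2), (2, 1)] -> doubly degenerate

# Identical spinless particles (bosons): only symmetric combinations,
# i.e. each unordered pair n1 <= n2 counts once
sym = defaultdict(list)
for n1, n2 in product(range(1, n_max + 1), repeat=2):
    if n1 <= n2:
        sym[n1**2 + n2**2].append((n1, n2))
print(sym[energies[1]])     # [(1, 2)] -> non-degenerate
```

The enumeration reproduces the answer: the first excited level is doubly degenerate for non-identical particles but non-degenerate for identical spinless ones.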
{ "domain": "physics.stackexchange", "id": 14293, "tags": "quantum-mechanics, harmonic-oscillator" }
Simulating a toilet seat usage in Java - follow-up
Question: (See the previous iteration.) This time, I removed the stuff I don't need in my demo runner. Also, I consolidated some code that seemed to violate DRY. Simulation rules When a female arrives, she ensures that the seat is down before performing the urge. She leaves the seat down before exiting. When a male arrives to pee, he makes sure the seat is up and pees. After peeing, if we require all the visitors to put down the seat, the male puts it down. Otherwise, the seat remains upwards. When a male arrives to poo, case 1 applies. Here it goes: package com.github.coderodde.simulation.toiletseat; import java.util.Random; public final class ToiletSeatSimulator { private static enum Gender { FEMALE, MALE, } private static enum Operation { PEE, POOP, } private static enum SeatPosition { UP, DOWN, } private final int queueLength; private final double femaleRatio; private final double peeRatio; private final boolean alwaysLeaveSeatDown; private final Random random; private SeatPosition seatPosition = SeatPosition.DOWN; private int movements = 0; public ToiletSeatSimulator(int queueLength, double femaleProportion, double urinationProportion, boolean alwaysLeaveSeatDown, Random random) { this.queueLength = queueLength; this.femaleRatio = femaleProportion; this.peeRatio = urinationProportion; this.alwaysLeaveSeatDown = alwaysLeaveSeatDown; this.random = random; } public int simulate() { for (int i = 0; i < queueLength; i++) { performOperation(getRandomGender(), getRandomOperation()); } return movements; } private void performOperation(Gender gender, Operation operation) { switch (operation) { case POOP: setSeatPosition(SeatPosition.DOWN); return; case PEE: switch (gender) { case FEMALE: setSeatPosition(SeatPosition.DOWN); return; case MALE: setSeatPosition(SeatPosition.UP); if (alwaysLeaveSeatDown) { setSeatPosition(SeatPosition.DOWN); } return; } } } private void setSeatPosition(SeatPosition seatPosition) { if (this.seatPosition != seatPosition) { this.seatPosition
= seatPosition; movements++; } } private Gender getRandomGender() { double coin = random.nextDouble(); // In the range [0, 1). return coin < femaleRatio ? Gender.FEMALE : Gender.MALE; } private Operation getRandomOperation() { double coin = random.nextDouble(); return coin < peeRatio ? Operation.PEE : Operation.POOP; } } ... and the demo driver is: package com.github.coderodde.simulation.toiletseat; import java.util.Random; public final class Demo { private static final int QUEUE_LENGTH = 1000; private static final double FEMALE_RATIO = 0.55; private static final double PEE_RATIO = 0.9; public static void main(String[] args) { long seed = System.currentTimeMillis(); System.out.println("<<< Seed = " + seed + " >>>"); Random random1 = new Random(seed); Random random2 = new Random(seed); ToiletSeatSimulator simulator1 = new ToiletSeatSimulator( QUEUE_LENGTH, FEMALE_RATIO, PEE_RATIO, false, random1); System.out.println( "Number of seat moves when changing seat position " + "on demand: " + simulator1.simulate()); ToiletSeatSimulator simulator2 = new ToiletSeatSimulator( QUEUE_LENGTH, FEMALE_RATIO, PEE_RATIO, true, random2); System.out.println( "Number of seat moves when changing seat position back to " + "closed: " + simulator2.simulate()); } } Critique request Now, what do you think? Did I improve anything? 
Answer: private void performOperation(Gender gender, Operation operation) { switch (operation) { case POOP: setSeatPosition(SeatPosition.DOWN); return; case PEE: switch (gender) { case FEMALE: setSeatPosition(SeatPosition.DOWN); return; case MALE: setSeatPosition(SeatPosition.UP); if (alwaysLeaveSeatDown) { setSeatPosition(SeatPosition.DOWN); } return; } } } This could be written more briefly: private void performOperation(PositionPreference preference, Operation operation) { switch (preference) { case STANDER: if (operation == PEE) { setSeatPosition(SeatPosition.UP); if (leaveSeatAsUsed) { return; } } case SITTER: setSeatPosition(SeatPosition.DOWN); return; } } or even more briefly without the switch (may also be more readable and maintainable) private void performOperation(PositionPreference preference, Operation operation) { if ((preference == STANDER) && (operation == PEE)) { setSeatPosition(SeatPosition.UP); if (leaveSeatAsUsed) { return; } } setSeatPosition(SeatPosition.DOWN); } Either has exactly the same behavior as the original but only uses two setSeatPosition calls rather than four. We now cover three situations with one case: Person is a SITTER. Person is performing a non-PEE operation. Person is a STANDER who performed a PEE operation with leaveSeatAsUsed false. Changing to leaveSeatAsUsed may be clearer about what the variable does. Also, this way, we only use the variable to prevent the default behavior. Otherwise, we can fall through to the next case. We don't need Gender (actually the wrong term here; sex would be more relevant). A PositionPreference gives us all the relevant information without making any assumptions about gender or sex. If the person is a SITTER, we don't care about the Operation. Such people always put the seat down. Obviously, other names should change outside this block of code for consistency.
{ "domain": "codereview.stackexchange", "id": 44049, "tags": "java, simulation" }
What is the origin of the naming convention for position functions?
Question: In physics, position as a function of time is generally called $d(t)$ or $s(t)$. Using "$d$" is pretty intuitive, however I haven't been able to figure out why "$s$" is used as well. Is it possibly based on another language? Answer: As commenters have pointed out, it's German Strecke. Note that $s$ is for displacement, whereas $d$ is for distance. Distance is the distance along the path traveled by a body, whereas displacement is the straight-line ("as the crow flies") distance between the start and end points. Displacement can also be negative in 1-D, depending upon your reference positive direction. For some reason, even though Strecke actually means distance, not displacement, its symbol is used for displacement. You might want to check out this paper, it's got an analysis of the naming, mainly for electrodynamic units. A few symbols from the table at the end of the paper: $c$ (speed of light) comes from Latin celeritas; $I$ (current) comes from "intensity of current" in French (intensité du courant). The $\mathbf{A}$-potential, $\mathbf{B}$-field, $\mathbf{H}$-field got their symbols from the alphabetic order of the others.
{ "domain": "physics.stackexchange", "id": 2463, "tags": "terminology, conventions, notation, distance, displacement" }
Lorentz force experienced by outer electrons
Question: So we have a spray of electrons with charge density $\rho$, radius $a$ and it's moving at velocity $v$. I need to show the outer edge electrons are experiencing the following force $$F=\frac{e \lambda}{2 \pi a \epsilon_0}\left(1-\frac{v^2}{c^2}\right).$$ Where $\lambda=e/l$ ($l$ is the length). So far using Lorentz's force equation $F=eE + evB$ and following theorems $$B=\frac{\mu_0 I}{2 \pi a},$$ $$E=\frac{e}{4 \pi \epsilon_0},$$ $$\rho =e/V=e/ \pi a^2 l,$$ $$j=I/A=I/\pi a^2=v \rho.$$ I've found out that $$\frac{el \lambda}{4 \pi \epsilon_0 a^2}+\frac{2ea \lambda}{4 \pi \epsilon_0 a^2}\cdot\frac{v^2}{c^2},$$ but I have no idea where I can get a minus sign or how to get rid of that $l$. Answer: While evaluating the second term, i.e. the force due to the magnetic field, there are some mistakes; firstly, $e$ needs to be written with its appropriate sign. Therefore we have $F_b=-evB$. Proceeding thus you shall get $\frac{e}{2\pi a\epsilon_0}\cdot\frac{v^2}{c^2}\cdot\frac{e}{l}$. You can then replace $\frac{e}{l}$ with $\lambda$. Ultimately $F_b$ should be $-\frac{e\lambda v^2}{2\pi a\epsilon_0 c^2}$. I hope I helped you.
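The sign bookkeeping is easy to verify numerically: at the beam surface $E = \lambda/(2\pi\epsilon_0 a)$ and $B = \mu_0 \lambda v/(2\pi a)$, and since $\mu_0\epsilon_0 = 1/c^2$ the combination $e(E - vB)$ reproduces the target expression. A sketch with hypothetical beam parameters (the values of $\lambda$, $a$, and $v$ are made up for illustration):

```python
import math

# SI constants
eps0 = 8.8541878128e-12
mu0 = 4 * math.pi * 1e-7
c = 1 / math.sqrt(mu0 * eps0)

e = 1.602176634e-19  # electron charge magnitude
lam = 1e-9           # line charge density (C/m), hypothetical
a = 1e-3             # beam radius (m), hypothetical
v = 0.5 * c

# Field of a uniformly charged cylinder at its surface, and the
# azimuthal B-field of the current I = lam * v it carries
E = lam / (2 * math.pi * eps0 * a)
B = mu0 * lam * v / (2 * math.pi * a)

F_lorentz = e * (E - v * B)  # outward electric push minus magnetic pinch
F_formula = e * lam / (2 * math.pi * eps0 * a) * (1 - v**2 / c**2)

print(F_lorentz, F_formula)  # the two agree
```

The agreement holds for any $v < c$, since algebraically $E - vB = \frac{\lambda}{2\pi\epsilon_0 a}(1 - \mu_0\epsilon_0 v^2)$.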
{ "domain": "physics.stackexchange", "id": 39759, "tags": "electromagnetism, forces" }
Skeletal muscle without antagonist
Question: Is there any skeletal muscle that does not have an antagonist? Excluding circular muscles such as those around the eye and mouth. The reason why I am wondering is because in biology normally nothing is ever just like that; there is always an exception. This would be the first biological rule without exception that I came across. I am thinking of humans only for a start, but feel free to include any other animal if any have skeletal muscles without antagonists :) Answer: I would argue that the orbiculares do have antagonists. To some extent, levator palpebrae superioris antagonizes orbicularis oculi, and zygomaticus major/minor as well as risorius antagonize orbicularis oris. I can think of three muscles that don't have obvious antagonists: Stapedius Tensor tympani Articularis genu 1 and 2 essentially perform the same action, to dampen sounds reaching the cochlea. 3 elevates the suprapatellar bursa.
{ "domain": "biology.stackexchange", "id": 371, "tags": "muscles, anatomy" }
Find longest sequence horizontally, vertically or diagonally in Connect Four game
Question: I'm new to programming and also to Java and working on problems in Intro to Java by Robert Sedgewick. Here is my question: Connect Four: Given an N-by-N grid with each cell either occupied by an 'X', an 'O', or empty, write a program to find the longest sequence of consecutive 'X's either horizontal, vertically, or diagonally. To test your program, you can create a random grid where each cell contains an 'X' or 'O' with probability 1/3. With many modifications, I came up with some code, which I feel is not efficient. Can someone help make my code more efficient? public class connectfour { public static void main(String[] args) { int N=Integer.parseInt(args[0]),t=0; String[][] a=new String[N][N]; for(int i=0;i<N;i++){ for(int j=0;j<N;j++){ double r=Math.random(); if(r<0.33){a[i][j]="X";t=1;} else if(r<0.66)a[i][j]="O"; else a[i][j]="."; System.out.print(a[i][j]+" "); } System.out.println(); } for(int i=0;i<N;i++){ for(int j=0;j<N;j++){ if(a[i][j]=="X"){ //to check horizontally for(int y=j,length=1;y<N-1;y++){ if(a[i][y]!=a[i][y+1])break; length++; if(t<length) t=length;} //to check vertically for(int x=i,length=1;x<N-1;x++) {if(a[x][j]!=a[x+1][j]) break; length++; if(t<length) t=length;} //to check diagonally ,right and down for(int x=i,y=j,length=1;x<N-1&&y<N-1;x++,y++) { if(a[x][y]!=a[x+1][y+1]) break; length++; if(t<length) t=length; } } } } for(int i=N-1;i>=0;i--){ for(int j=0;j<N-1;j++){ if(a[i][j]=="X"){ //to check diagonally ,right and up for(int x=i,y=j,length=1;x>0&&y<N-1;x--,y++) {if(a[x][y]!=a[x-1][y+1]) break; length++; if(t<length) t=length; } } } } System.out.println("the length of longest sequence of X in the above array: "+t); } } I want to post my improved code according to instructions given above. I implemented the algorithm suggested by Simon André Forsberg. It is working for all cases. 
public class connectfour2 { public static void main(String[] args) { int N = Integer.parseInt(args[0]), highestconsecutive = 0; String string1, string2 = "X", string3; String[][] board = new String[N][N]; for(int i = 0; i < N; i++ ) { for(int j = 0; j < N; j++ ) { double r = Math.random(); if( r < 0.33 ) board[i][j] = "X"; else if( r < 0.66 ) board[i][j] = "O"; else board[i][j] = "."; System.out.print( board[i][j] + " " ); } System.out.println(); } // loopingfor checking horizontally / vertically/diagonally down and right,down and left for(int i = 0; i < N; i++ ) { int consecutive1 = 0, consecutive = 0 ; for(int j = 0; j < N; j++ ) { // for horizontal check string1 = board[i][j]; if( string1.equals(string2)) consecutive++; else consecutive=0; if( highestconsecutive < consecutive) highestconsecutive = consecutive; // for vertical check string1=board[j][i]; if(string1.equals(string2)) consecutive1++; else consecutive1=0; if( highestconsecutive < consecutive1) highestconsecutive = consecutive1; // looping for diagonal check ,down and right for( int x = i, y = j, length = 0; x < N && y < N ; x++, y++ ) { string1 = board[x][y]; if(string1.equals(string2)) length++; else length=0; if(highestconsecutive < length) highestconsecutive = length; } // looping for diagonal check ,down and left for( int x = i, y = N - j - 1, length = 0; x < N && y >= 0 ; x++, y-- ) { string1 = board[x][y]; if(string1.equals(string2)) length++; else length=0; if(highestconsecutive < length) highestconsecutive = length; } } } System.out.println("the length of longest sequence of X in the above array: "+highestconsecutive); } } Answer: Introduction Hi and Welcome to Code Review! There are number of things that you can learn today. First I would like to say that it is nice that your algorithm works. Thanks for providing a compilable example, that helps a lot. Now, secondly, I have to disappoint you: I will not simply "give you the better code". I don't really think you would learn so much from that. 
However, I can help you in figuring out what things needs to be improved in your current code. List of improvements YOUR CODE IS NOT READABLE! Sorry for shouting but I am very serious. The importance of code readability cannot be emphasized enough. The readability of your code is severely off. Indentation: Your indentation is not consistent. Each { should be followed by one extra indentation step, and each } should remove one indentation step. Spacing: It seems like you are always using as few spaces as possible. If you are having serious storage problems and are running out of bytes, then... no, I wouldn't understand it even then... Compare else if(r<0.66)a[i][j]="O"; with else if(r < 0.66) a[i][j] = "O"; And compare for(int x=i,y=j,length=1;x<N-1&&y<N-1;x++,y++) with for (int x = i, y = j, length = 1; x < N - 1 && y < N - 1; x++, y++) Spacing is good for you. One space after each comma, one after semicolon, one before & after = and &&. Makes things so much readable. OrhowwouldyoulikeitifIwrotemyreviewlikethis,withoutusingspacesatall? (I hope that example made things clear why spacing is good). Please read up on the Java coding conventions, all these things are mentioned there. Once you have learned how to improved those, if you are using an IDE such as Eclipse, which I hope that you do - if you are not I really suggest that you download Eclipse now. Press Ctrl + Shift + F in Eclipse to make it format for you. If you are using NetBeans, press Alt + Shift + F. Variable names: All (except one) of your variable names is only one character. Try to have self-documenting variable names. What is the variable used for? row and col could be better names than i and j. t could be called maximumFoundLength. String comparison It is more or less pure luck that your code works at all. Thanks to Java only creating one String instance for "X" and such, it works with comparing your Strings with ==. However, if you would have any user input, this wouldn't work. 
== compares object references; to compare Strings correctly in Java you should use the .equals method. Please see the Stack Overflow question "How do I compare Strings in Java?"

Limited number of possible values --> Enum

Since the possible values of your board are very limited, you can use an enum instead of a String to store the value of the positions.

public enum BoardValue {
    X, O, EMPTY; // EMPTY for empty positions

    @Override
    public String toString() {
        return this == EMPTY ? "." : this.name();
    }
}

Now make your String[][] a into BoardValue[][] board, which will make it much better to use in the long run.

Classes, Methods and Objects

Java is an object-oriented programming language. I think you can learn a lot by reading Oracle's Java Lesson: Classes and Objects. I would suggest that you make your game board into a class. Your current String[][] a = new String[N][N]; should be a field in the class. This class could then have several methods:

void randomize(): Randomizes new data into the board.
void output(): Prints the information to System.out.
int findConsecutive(String lookingFor): Scans the board for the largest consecutive run of the specified String.

Your algorithm

Your current algorithm works by looping through the entire two-dimensional board, and for each position it does the following:

Remember the value of the current position; we can call this "current".
Loop through the rest of this row/column/diagonal.
When you encounter a position that is not equal to "current", break this loop.

This is highly inefficient because you are checking each tile far more times than you need to. Instead, you should treat each row/column/diagonal like an individual line of positions. Consider this algorithm:

searchingFor is the value you want to search for ("X" or "O").
Initialize the value consecutive to 0.
Initialize the value highestConsecutive to 0.
For each row/column/diagonal, loop through the positions in the line and check for searchingFor. When you encounter this value, increase consecutive by 1, and if consecutive is more than highestConsecutive, set highestConsecutive to the value of consecutive. If the value did not match, reset consecutive to 0. Once the loop is finished, highestConsecutive is the highest consecutive count over every row/column/diagonal.

Regarding diagonals, you can loop through those by starting at a position on the edge of the board and doing the loop once along a straight diagonal line, like the following: start at all positions where x = 0 or y = 0 and loop over each square in a bottom-right manner. Then do the same for the other direction: you will have to start at x = MAX or y = 0 and loop over each square in the bottom-left direction.

One final thing: When you have improved your code, please come back here and post your improved version (also pointing us to this question).
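To make the counting idea concrete, here is a sketch of the line-based scan (written in Python for brevity rather than the question's Java; the helper names are invented for illustration):

```python
def longest_run(line, searching_for):
    """Longest consecutive run of searching_for in one row/column/diagonal."""
    consecutive = 0
    highest = 0
    for cell in line:
        if cell == searching_for:
            consecutive += 1
            highest = max(highest, consecutive)
        else:
            consecutive = 0
    return highest

def longest_run_on_board(board, searching_for):
    """Collect every row, column and diagonal once, then scan each line once."""
    n = len(board)
    lines = list(board)                                           # rows
    lines += [[board[r][c] for r in range(n)] for c in range(n)]  # columns
    # down-right diagonals: start on the top row or the left column
    for r0, c0 in [(0, c) for c in range(n)] + [(r, 0) for r in range(1, n)]:
        lines.append([board[r0 + k][c0 + k] for k in range(n - max(r0, c0))])
    # down-left diagonals: start on the top row or the right column
    for r0, c0 in [(0, c) for c in range(n)] + [(r, n - 1) for r in range(1, n)]:
        lines.append([board[r0 + k][c0 - k] for k in range(min(n - r0, c0 + 1))])
    return max(longest_run(line, searching_for) for line in lines)

board = [["X", "X", "O"],
         ["O", "X", "X"],
         ["X", "O", "X"]]
print(longest_run_on_board(board, "X"))  # main diagonal gives 3
```

Each tile is visited a constant number of times (once per line it belongs to), in contrast with the nested rescans in the original code.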
{ "domain": "codereview.stackexchange", "id": 6152, "tags": "java, beginner, connect-four" }
How do I add JDBC and Queue functionality to a simple server?
Question: I would like to add some capabilities to the server. Firstly, it should accept and handle connections with multiple clients, so that there are no mixups between clients. Secondly, there should be some very basic JDBC connectivity: serialize a result set as a List, and then, as requested, pop from the list and send that instance to a client for updates. When the client sends back an updated record, update the database accordingly.

package net.bounceme.dur.driver;

import java.io.IOException;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.net.ServerSocket;
import java.net.Socket;
import java.util.Properties;
import java.util.logging.Level;
import java.util.logging.Logger;

public class Server {

    private static final Logger log = Logger.getLogger(Server.class.getName());
    private final RecordQueue recordsQueue = new RecordQueue();

    public static void main(String[] args) {
        Properties props = PropertiesReader.getProps();
        int portNumber = Integer.parseInt(props.getProperty("port"));
        while (true) {
            try {
                new Server().inOut(portNumber);
            } catch (java.net.SocketException se) {
                Logger.getLogger(Server.class.getName()).log(Level.FINE, "spammy", se);
            } catch (IOException ioe) {
                Logger.getLogger(Server.class.getName()).log(Level.SEVERE, null, ioe);
            } catch (ClassNotFoundException cnf) {
                Logger.getLogger(Server.class.getName()).log(Level.INFO, null, cnf);
            }
        }
    }

    public void inOut(int portNumber) throws IOException, ClassNotFoundException, java.net.SocketException {
        ServerSocket serverSocket = new ServerSocket(portNumber);
        Socket socket = serverSocket.accept();
        ObjectOutputStream objectOutputStream = null;
        MyRecord recordFromClient = null;
        try (ObjectInputStream objectInputStream = new ObjectInputStream(socket.getInputStream())) {
            objectOutputStream = new ObjectOutputStream(socket.getOutputStream());
            recordFromClient = (MyRecord) objectInputStream.readObject();
        }
        objectOutputStream.flush();
        objectOutputStream.close();
        log.info(recordFromClient.toString());
    }
}

Is that a reasonable progression from this server code? Obviously, I would only add a single feature at a time. While I would love to learn Log4J or a similar logging framework, my immediate concern is adding functionality to server code. What are some pitfalls I might run into? What would be the most pragmatic approach to increasing the functionality of the server-side operations? For example, I might start with a Queue, and then only later tie that into a database. Each client will only have access to a single record, so I'm not concerned about corrupt data. Will there be a problem when multiple clients are trying to access the Queue, however? The clients will only need pop and add, nothing more. The client will update, or modify, each record instance it receives.

Answer: Using imports:

catch (java.net.SocketException se) { /* [...] */
throws java.net.SocketException { /* [...] */

Fully qualifying these is unnecessary:

import java.net.SocketException;

catch (SocketException se) { /* [...] */
throws SocketException { /* [...] */

I feel that this code is much more concise, while containing the same information.

Using try-with-resources:

First: You are using try-with-resources. Good. That's the way to go. Second: You're doing it wrong.

try (ObjectInputStream objectInputStream = new ObjectInputStream(socket.getInputStream())) {
    objectOutputStream = new ObjectOutputStream(socket.getOutputStream());
    recordFromClient = (MyRecord) objectInputStream.readObject();
}
objectOutputStream.flush();
objectOutputStream.close();

try-with-resources does the last two things for you. You achieve the exact same result when you leave those out.
The last two statements become useless clutter if you use try-with-resources:

try (ObjectInputStream objectInputStream = new ObjectInputStream(socket.getInputStream());
     ObjectOutputStream objectOutputStream = new ObjectOutputStream(socket.getOutputStream())) {
    recordFromClient = (MyRecord) objectInputStream.readObject();
}

Apart from that... You don't use your objectOutputStream anywhere. Why do you have it?

Using exceptions / logging:

Why is your ClassNotFoundException only logged as INFO? I'd expect a ClassNotFoundException to be minimum ERROR, if not FATAL! If you "expect" ClassNotFoundExceptions, then your design might be flawed.
{ "domain": "codereview.stackexchange", "id": 8278, "tags": "java, database, io, server, client" }
UDP Reverse Shell
Question: I'm currently learning Python and network programming, and I coded this simple Python reverse shell. I would like to hear any remarks from you regarding code structure, any common beginner mistakes - actually, pretty much anything that feels wrong with my code. The code is pretty straightforward: the client sends a command to the server, then listens for the command output; the server listens for the command, executes it and sends the command output back.

client.py:

#!/usr/bin/env python3
import networking
import prompt_handler

def interpreter():
    while True:
        prompt = prompt_handler.pull_prompt(sockt)
        cmd = input(prompt)
        sockt.sendto(cmd.encode('utf-8'), server)
        output = networking.receive_data(sockt)
        print(output)
        if cmd == "quit":
            break

server = ('127.0.0.1', 8001)
sockt = networking.socket_init('127.0.0.1', 9001)
sockt.sendto('client hello'.encode('utf-8'), server)
interpreter()

server.py:

#!/usr/bin/env python3
import os
import platform
import networking

# separated sends for cwd and user_string to be able to color them client side
def get_sys_info():
    user_string = 'someone@' + str(platform.dist()[0]).lower()
    sockt.sendto(user_string.encode('utf-8'), client)
    user_cwd = os.getcwd()
    sockt.sendto(user_cwd.encode('utf-8'), client)
    return

def shell():
    while True:
        try:
            get_sys_info()
            cmd = networking.receive_data(sockt)
            if cmd.strip() == 'quit':
                sockt.sendto('Closing session...'.encode('utf-8'), client)
                sockt.close()
                break
            else:
                proc = os.popen(cmd)
                output = ''.join([i for i in proc.readlines()])
                sockt.sendto(output.encode('utf-8'), client)
        except Exception as e:
            sockt.sendto(repr(e).encode('utf-8'), client)
            pass

sockt = networking.socket_init('127.0.0.1', 8001)
client = networking.receive_rhostinfo(sockt)
shell()

networking.py:

import socket

def socket_init(ip_addr, port):
    sockt = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sockt.bind((ip_addr, port))
    return sockt

# to be able to get the data directly - less clutter in main code
def receive_data(sockt):
    data, rhost_info = sockt.recvfrom(1024)
    return data.decode('utf-8')

# to be able to get the remote host info directly - less clutter in main code
def receive_rhostinfo(sockt):
    data, rhost_info = sockt.recvfrom(1024)
    return rhost_info

prompt_handler.py:

import networking

def pull_sys_info(sockt):
    user_str = networking.receive_data(sockt)
    cwd = networking.receive_data(sockt)
    return user_str, cwd

# i was craving for some color
def pull_prompt(sockt):
    user_str, cwd = pull_sys_info(sockt)
    user_str = "\u001b[31m" + user_str + "\u001b[0m:"
    cwd = "\u001b[34m" + cwd + "\u001b[0m$"
    return user_str + cwd

If need be, you can find the code on github.

Answer: UDP is not reliable. The packet sent to the server could be lost (and therefore the server will not answer). The packet sent by the server could be lost. The client must handle such possibilities. As coded, it just hangs in recvfrom indefinitely.

Your recvfrom only takes 1024 bytes. If the shell output is longer, the rest is irrecoverably lost. If the shell output is longer than the MTU, the output is fragmented into multiple packets. The client, however, only reads one. From this point down, the data the client receives has no connection to what was executed. Try to cat a long file, for example. Also keep in mind that the fragments may arrive in any order (UDP doesn't guarantee the order of delivery).

Beware shell builtins. Since each command is executed in an individual shell, some commands (such as cd) only appear to be executed, but in fact have no effect.

Of course, don't ever run this server publicly. Execution of arbitrary commands (especially from an untrusted source) is a recipe for disaster.
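One possible client-side mitigation for the "hangs in recvfrom indefinitely" problem is a timeout with a bounded number of retries (a sketch, not part of the original code; the function name is invented):

```python
import socket

def receive_data_with_timeout(sockt, timeout=2.0, retries=3):
    """Like networking.receive_data, but gives up instead of blocking forever.

    UDP gives no delivery guarantee, so a lost reply has to be detected by
    the caller; returning None signals "no answer arrived in time".
    """
    sockt.settimeout(timeout)
    try:
        for _ in range(retries):
            try:
                data, _rhost = sockt.recvfrom(1024)
                return data.decode('utf-8')
            except socket.timeout:
                continue
        return None  # caller decides whether to resend the command
    finally:
        sockt.settimeout(None)  # restore blocking mode
```

This does not fix fragmentation or reordering - that needs either an application-level length prefix with reassembly, or simply switching to TCP.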
{ "domain": "codereview.stackexchange", "id": 31595, "tags": "python, beginner, python-3.x, socket, udp" }
Length of Rope in Tug Of War Matter?
Question: I had a question about the game of Tug of War. Does the length of the rope really matter? What is the difference between a 50 cm rope and a 5 m rope in terms of force, torque and safety in Tug of War?

Answer: The length of the rope does not matter on the level of fundamental physics principles, but it may matter on the level of human body mechanics. Humans are much better at pulling when they can lean their body at an angle to the ground, so their hands are ahead of their feet in the direction they want to pull. If the rope is too short, it might be impossible for both players to adopt this stance, making the game much harder. However, once the rope is long enough for both players to lean over as much as they want without interfering with each other, there is no physics that would make the length of the rope have any effect on the game.
{ "domain": "physics.stackexchange", "id": 60516, "tags": "newtonian-mechanics, forces, torque, string" }
Collision between electron and proton?
Question: What would happen if an electron collided with a proton such that the two do not collapse? Would the two become a unit, or would some force prevent them from bonding thus forcing the electron to orbit around the proton? Answer: I'm not sure what you mean by "collapse", but if I interpret that as "no hydrogen is formed" or "the electron is not captured", then 2 things can happen: 1) Elastic electron-proton scattering: the electron and proton just "bounce" off each other under some angle theta. By observing the cross section of the scattering versus the theta angle it was shown that proton is not a point particle, but an extended object. 2) Deep inelastic scattering: the incoming high energy electron "destroys" the proton into a bunch of outgoing hadrons (mostly pions). By observing the cross section of this interaction it can be shown that proton is composed of pointlike particles. The electrons collide elastically with a parton. Some details are here.
{ "domain": "physics.stackexchange", "id": 19173, "tags": "atomic-physics, hydrogen" }
Processing a list of pairs of items using promises and functional programming
Question: I posted an answer on StackOverflow which I believed to be adherent to the principles of Functional Programming. However, I was told by the original asker that it was not 'functional', as my function used an internal variable oldData which kept track of results. I believe that the code still satisfies the paradigms of functional programming, as it does not mutate its arguments, does not use globals and has no side effects (assuming action is not a network call). Is the function process() violating the principles of functional programming? If so, how would I fix it?

var items = [
    ["item1", "item2"],
    ["item3", "item4"],
    ["item5", "item6"],
    ["item7", "item8"],
    ["item9", "item10"]
]

function action(item) {
    return new Promise(function(resolve, reject){
        setTimeout(function(){
            resolve(item + ":processed");
        }, 100)
    });
}

function process(items) {
    return items.reduce((m, d) => {
        const promises = d.map(i => action(i));
        let oldData;
        return m.then((data) => {
            oldData = data;
            return Promise.all(promises);
        })
        .then(values => {
            //oldData.push(...values);
            oldData.push.apply(oldData, values);
            return Promise.resolve(oldData);
        })
    }, Promise.resolve([]))
}

process(items).then(d => console.log(d))
//Prints:
// ["item1:processed","item2:processed","item3:processed","item4:processed","item5:processed","item6:processed","item7:processed","item8:processed","item9:processed","item10:processed"]

The original asker suggested that I update my code to use concat instead of push, to create immutable arrays every time, to make this properly functional. Does that make sense?

Answer: Does this function break the paradigm of functional programming in JS? Short answer: It depends. One common response I get from Software Engineering SE on this topic is that you don't have to be pure all the time. A function can still act functional even if the implementation isn't written in a functional manner.
Take for example the following:

// All return an array with numbers starting from s to e with no
// side-effects and take all input from args. The first one is
// obviously not "functional" but works just as well as the others.
function range(s, e){
    const array = []
    for(let i = s; i < e; i++){
        array.push(i)
    }
    return array
}

function range(s, e){
    return Array(e - s).fill(null).map((v, i) => s + i)
}

function range(s, e){
    return s == e ? [] : [s, ...range(s + 1, e)]
}

Striking a balance between ideal and practical is also a deciding factor. For instance, recursion is not a foreign concept in JS. But historically, due to stack size limits, loops became more prevalent. You can safely assume people can read loops better than recursion, and are therefore more likely to understand the first sample than the other two.

It looks like the purpose of the process function is to batch-process arrays of items. This can be done with recursion. The idea is to process the current item, then concat the results of the next call, which does the same thing until there's nothing left in the array. If you use Node or have Babel to transpile, you can use async/await to simplify the syntax.

const process = async (i) => {
    if (!i.length) return []
    const r1 = await Promise.all(i[0].map(action))
    const r2 = await process(i.slice(1))
    return [...r1, ...r2]
}

If you can't do async/await, here's an expanded version with regular promise syntax.

function process(i) {
    if (!i.length) return Promise.resolve([])
    return Promise.all(i[0].map(action)).then(function(r1) {
        return process(i.slice(1)).then(r2 => r1.concat(r2))
    })
}
{ "domain": "codereview.stackexchange", "id": 30142, "tags": "javascript, functional-programming, promise" }
robot_localization: yaw and position scaled down
Question: We have a setup on a Kobuki where we use the robot_localization package to fuse sensor data consisting of wheel odometry and landmark recognition, which is complemented by amcl to correct the odometry data while no landmarks are visible. For now, the landmark system is not providing input, but the wheel odometry is fed to an ekf_localization_node which provides the odom -> base_footprint (analogous to base_link) transform. amcl receives the filtered odometry output produced by the ekf_localization_node and provides the map -> odom transform. The ekf_localization_node however seems to "filter" the odometry in a weird way: The x and y position moves in the right direction, but only by half the distance of what it should do, whereas the yaw movement is scaled to maybe a 100th of it's original amount. What I would like to have happen is that the odometry input leads to a somewhat similar output on the /filtered/odometry topic. Another issue which might be related is that the output on the /filtered/odometry topic seems to very slowly drift in both the xy-position as well as the yaw. 
robot_localization launch file

<launch>
  <node pkg="robot_localization" type="ekf_localization_node" name="ekf_localization" clear_params="true">
    <param name="frequency" value="60"/>
    <param name="sensor_timeout" value="0.05"/>
    <param name="two_d_mode" value="true"/>
    <param name="map_frame" value="map"/>
    <param name="odom_frame" value="odom"/>
    <param name="base_link_frame" value="base_footprint"/>
    <param name="world_frame" value="odom"/>
    <param name="odom0" value="/odom"/>
    <rosparam param="odom0_config">[true, true, false, false, false, true, false, false, false, false, false, false, false, false, false]</rosparam>
    <param name="odom0_differential" value="true"/>
    <param name="print_diagnostics" value="false"/>
    <param name="debug" value="false"/>
    <param name="debug_out_file" value="/home/***/debug_ekf_localization.txt"/>
    <rosparam param="process_noise_covariance">
      [0.05, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0,
       0.0, 0.05, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0,
       0.0, 0.0, 0.06, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0,
       0.0, 0.0, 0.0, 0.03, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0,
       0.0, 0.0, 0.0, 0.0, 0.03, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0,
       0.0, 0.0, 0.0, 0.0, 0.0, 0.06, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0,
       0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.025, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0,
       0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.025, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0,
       0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.04, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0,
       0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.01, 0.0, 0.0, 0.0, 0.0, 0.0,
       0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.01, 0.0, 0.0, 0.0, 0.0,
       0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.02, 0.0, 0.0, 0.0,
       0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.01, 0.0, 0.0,
       0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.01, 0.0,
       0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.015]
    </rosparam>
    <rosparam param="initial_estimate_covariance">
      [0.5, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
       0, 0.5, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
       0, 0, 1e-9, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
       0, 0, 0, 1e-9, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
       0, 0, 0, 0, 1e-9, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
       0, 0, 0, 0, 0, 0.5, 0, 0, 0, 0, 0, 0, 0, 0, 0,
       0, 0, 0, 0, 0, 0, 1e-9, 0, 0, 0, 0, 0, 0, 0, 0,
       0, 0, 0, 0, 0, 0, 0, 1e-9, 0, 0, 0, 0, 0, 0, 0,
       0, 0, 0, 0, 0, 0, 0, 0, 1e-9, 0, 0, 0, 0, 0, 0,
       0, 0, 0, 0, 0, 0, 0, 0, 0, 1e-9, 0, 0, 0, 0, 0,
       0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1e-9, 0, 0, 0, 0,
       0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1e-9, 0, 0, 0,
       0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1e-9, 0, 0,
       0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1e-9, 0,
       0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1e-9]
    </rosparam>
  </node>
</launch>

Originally posted by quantumflux on ROS Answers with karma: 1 on 2015-04-17
Post score: 0

Answer: This almost always points to a covariance issue, especially when you're using differential mode. Can you provide sample odometry message data? Also, which version of the software are you using?

Originally posted by Tom Moore with karma: 13689 on 2015-04-17
This answer was ACCEPTED on the original site
Post score: 1

Original comments

Comment by quantumflux on 2015-04-17: It should be the most up-to-date version that's in the hydro repositories for Precise Pangolin, which - judging by the file name - is version 1.1.7. Unfortunately, I won't be able to provide a sample message until Monday as I don't have access to the lab during weekends.

Comment by quantumflux on 2015-06-30: Hadn't noticed that I never updated this again. It was in fact an issue with the covariance being set incorrectly. Because the covariance values were too high, the new measurements were accepted only as very inaccurate approximations and thus had little effect on the result.
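The failure mode described in the last comment - covariance values set too high, so each measurement is treated as very inaccurate and barely moves the estimate - can be illustrated with a one-dimensional Kalman measurement update (a generic sketch, not robot_localization code):

```python
def kalman_update(x, P, z, R):
    """One scalar Kalman measurement update.

    x, P: prior estimate and its variance; z, R: measurement and its variance.
    """
    K = P / (P + R)          # Kalman gain shrinks as R grows
    x_new = x + K * (z - x)
    P_new = (1 - K) * P
    return x_new, P_new

# Same prior and same measurement, two different measurement covariances:
x0, P0, z = 0.0, 1.0, 1.0
x_trusted, _ = kalman_update(x0, P0, z, R=0.01)      # measurement believed accurate
x_distrusted, _ = kalman_update(x0, P0, z, R=100.0)  # measurement believed noisy
# x_trusted ends up close to 1.0, while x_distrusted stays near 0.0 -
# the "scaled down" motion reported in the question.
```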
{ "domain": "robotics.stackexchange", "id": 21460, "tags": "navigation, ekf, robot-localization, amcl, kobuki" }
Azeotropes and separation by distillation?
Question: I have just recently learnt the theory of fractional and normal distillation and the basics of maximum- and minimum-boiling azeotropes. Most books say that the components of a mixture forming an azeotrope cannot be separated by distillation because, on heating, the residue (in the case of negative or maximum-boiling azeotropes) or the distillate (in the case of positive or minimum-boiling azeotropes) is closer in composition to the azeotropic composition and always approaches it. This being understood, we can at least separate one component from the mixture, since the residue (for positive azeotropes) or the distillate (for negative azeotropes) is richer in one component than the other: the remaining part approaches the azeotropic concentration, thereby increasing the concentration of one of the components, depending on which side of the azeotropic "point" we are on in the graph of boiling temperature versus composition. If this is true, then for any mixture we can at least theoretically obtain one purified component (provided we have enough of the mixture)? Reference: azeotrope basic

Answer: You seem to have the right idea. To elaborate, take the specific example of the positive azeotrope of (roughly) 96% ethanol and 4% water. If you begin with a mixture containing less than 96% ethanol, distillation will result in a distillate more abundant in ethanol and nearer to the azeotropic composition. That implies the residue will necessarily be proportionally lower in ethanol. If you continuously repeat the distillation procedure using the remaining pot residue, it will approach 100% water. On the other hand, if you begin with a concentration above 96% ethanol, the resulting distillate will actually be lower in ethanol (and therefore nearer to the azeotropic point), while the residue will actually become more concentrated in ethanol.
Hence, in theory, with many repeated distillations you can approach purity in the residue when dealing with positive azeotropes, and the ultimate composition of the residue will depend on which side of the curve you're on to start with. When dealing with negative azeotropes, the situation is analogous but the particulars are basically reversed. Take the example of hydrofluoric acid and water, which form an azeotrope at approximately 37% HF. If you initially begin distilling a mixture comprising less than 37% HF, you'll obtain a distillate more dilute in HF, while the residue will contain more HF and be nearer to the azeotrope. If you were to collect the distillate and repeat the distillation, you would again end up with an even more dilute distillate, with the concentration of the distillate approaching 100% pure water. If you begin distilling a mixture containing more than 37% HF, then the residue will end up being more dilute as it approaches the azeotrope, while the distillate will actually be richer in HF. If you repeat this process many times over, you can eventually obtain a distillate approaching 100% pure HF (in theory).

So, in summary, you're correct to suggest that one of the components can be isolated with something approaching complete purity from an azeotropic mixture, assuming a sufficient volume to enable an adequate number of simple distillation cycles or theoretical plates in fractional distillation. Of course, the problem is that you really have no choice in determining which of the two components you can isolate, as that depends entirely on the starting composition of the mixture. In practice, you can often separate azeotropic mixtures by modifying the distillation process in various ways.
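The way the residue "walks away" from a positive azeotrope can be illustrated with a deliberately crude toy model (the vapour-liquid curve below is invented for illustration and is not real ethanol-water data): assume the vapour is richer than the liquid below the azeotropic composition and leaner above it, then track the residue as small distillate cuts are boiled off.

```python
def vapor_fraction(x, x_az=0.96, k=0.5):
    """Toy vapour-liquid equilibrium curve for a positive azeotrope at x_az.

    Below x_az the vapour is richer than the liquid (y > x), above it leaner
    (y < x), and at x_az they coincide. Purely illustrative.
    """
    return x + k * x * (1 - x) * (x_az - x)

def distill_residue(x0, cut=0.01, steps=2000):
    """Repeatedly boil off a small vapour cut; return the residue composition."""
    x = x0
    for _ in range(steps):
        y = vapor_fraction(x)
        x = (x - cut * y) / (1 - cut)  # mole balance on the remaining liquid
    return x

residue_from_below = distill_residue(0.50)  # starts below 0.96: drifts toward 0
residue_from_above = distill_residue(0.99)  # starts above 0.96: drifts toward 1
```

Whichever side of the azeotrope the mixture starts on, the residue moves away from the azeotropic composition, matching the summary above.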
{ "domain": "chemistry.stackexchange", "id": 1025, "tags": "physical-chemistry, thermodynamics, aqueous-solution, purification" }
False theory of how a battery works
Question: There is an explanation of how a battery works that says that inside the battery, the electrons do a complete loop and can return to their starting point. When the electrons arrive at the positive terminal and have lost all of their potential energy, the battery does work on the electrons to put them back in the negative terminal, so they are full of energy and re-enter the circuit.

This explanation is great because it answers questions such as: Why does putting 2 batteries in series double the voltage? (Because work has to be done twice on the electrons inside the batteries, so the potential energy doubles.) Why does connecting the - of battery 1 to the + of battery 2, without connecting the + of battery 1 to the - of battery 2, not create electricity? (Because the electrons have to do a complete loop in the circuit and return to their starting point for a sustained current to flow. There is no external wire connecting the - of battery 2 to the + of battery 1, so there isn't a complete loop.)

However, from what I understand of batteries, there is no electron that moves from the positive terminal to the negative terminal of the battery. The electrons of the circuit all come from the negative terminal, and once they reach the positive terminal nothing else happens except making the electrolyte ions neutral. They don't return to their starting point. So could someone show me that inside a battery, electrons at the positive terminal have work done on them and move to the negative terminal? Or, if this theory is false, could someone answer the two questions above, which are easily answered by this theory, in another way?

Answer: There are many ways of approaching the subject, as has been illustrated by the answers so far. There is nothing wrong with them, except that they do not answer your question - I think. Kirchhoff says that current must flow in a complete loop. If the charge did not get back to the very starting point, there would be an accumulation of charge somewhere.
This would cause the voltage at that spot to increase, which would then motivate charge to move faster from that location. So, your question, as I see it, is one of taking Kirchhoff at face value and asking where and how the flow path is completed.

All batteries have a solid part and a liquid part. The liquid part separates the two electrodes and is where the circuit is completed. When an electron leaves one pole and goes through the external circuit, one NEGATIVELY charged ion comes out of solution and attaches to that pole. Similarly, when a negative electron arrives at the other pole through the external circuit, a negatively charged ion enters the solution. The solution then completes the circuit, as in: one negative charge has left solution and one has been gained, leaving no net charge on either pole (except for the net charge that is there creating the voltage). The exact chemistry depends on the reactants involved. This argument is simplified and very general. Positive ions also take part, going in and out of solution, but in the opposite direction.

But leaving the simple model described in place, we can ask the question: Why do the ions in solution behave the way they do? The answer is that they are motivated to do so AGAINST the voltage that exists because they "want" to chemically react at the poles. It is the chemical energy expended that moves them against the voltage and causes them to move in that manner. I hope this helps.
{ "domain": "chemistry.stackexchange", "id": 7093, "tags": "electrochemistry" }
Is there a cleaner way to add DEBUG comments?
Question: I have a rather large class that needs to provide a reasonable amount of output for debugging purposes. I've done this with the following:

#if DEBUG
Console.WriteLine("Source Site Set to: {0}", archiveQueueEntity.SourceSite);
Console.WriteLine("Source List Set to: {0}", archiveQueueEntity.SourceUrl);
Console.WriteLine("Destination Site Set to {0}", archiveQueueEntity.DestinationSite);
Console.WriteLine("Destination List Set to: {0}", archiveQueueEntity.DestinationUrl);
#endif

Is there a better way to do this, though? After googling this I tried using Debug.WriteLine, but it appears that this will only output to the 'output' window in Visual Studio, and not the console. Am I missing something?

Answer: I think it would be much cleaner to utilize partial methods to create your logging statements. That way you can log wherever you need to and can disable the code by omitting the logging method's definition. With partial methods, if the definition is omitted, no IL is generated for the method and calls to the partial method are ignored as if it was never there. Just mark the class partial, declare the signature of the partial method and call it like normal. Then wrap the actual implementation in the conditional compilation blocks.

partial class MyClass
{
    // declare the partial method
    static partial void Log(string format, params object[] arguments);

    static void SomeMethod()
    {
        // call the log method like usual
        Log("Source Site Set to: {0}", archiveQueueEntity.SourceSite);
        Log("Source List Set to: {0}", archiveQueueEntity.SourceUrl);
        Log("Destination Site Set to {0}", archiveQueueEntity.DestinationSite);
        Log("Destination List Set to: {0}", archiveQueueEntity.DestinationUrl);
    }

#if DEBUG
    static partial void Log(string format, params object[] arguments)
    {
        Console.WriteLine(format, arguments);
    }
#endif
}

Otherwise, if you're not able to change that, you should still create a separate logging method and disable the actual printing inside the method.
That way your method is called but does nothing.

static void Log(string format, params object[] arguments)
{
#if DEBUG
    Console.WriteLine(format, arguments);
#endif
}

Or alternatively, use the Trace class to do your logging. As long as you have no listeners registered, you will not see any of the logging messages. When debugging, add a ConsoleTraceListener to the Listeners collection.

#if DEBUG
Trace.Listeners.Add(new ConsoleTraceListener());
#endif
Trace.WriteLine("Source Site Set to: {0}", archiveQueueEntity.SourceSite);
Trace.WriteLine("Source List Set to: {0}", archiveQueueEntity.SourceUrl);
Trace.WriteLine("Destination Site Set to {0}", archiveQueueEntity.DestinationSite);
Trace.WriteLine("Destination List Set to: {0}", archiveQueueEntity.DestinationUrl);

Something that eluded me until now: use the ConditionalAttribute on the log function in both cases to achieve the same effect.

[Conditional("DEBUG")]
static void Log(string format, params object[] arguments)
{
    Console.WriteLine(format, arguments);
}
{ "domain": "codereview.stackexchange", "id": 2317, "tags": "c#" }
Convert formula from CGS to SI
Question: I'd like to convert this formula \begin{equation} l^2 =\frac{c\hbar}{eH} \end{equation} where $l$ is a length, and $H$ is in oersted, to SI units. I am pretty sure it uses CGS, since Oe is mentioned in the text, and its from a theory paper (Kawabata1980, eq.3). Answer: As a plasma physicist I use the NRL Plasma Formulary to convert between CGS and SI units. It can be downloaded from here. On page 18 it gives you a prescription on how to convert any formula. Remember to convert both sides of the equation. For your problem I get $$ l^2 = \frac{\varepsilon_0 c^2 \hbar}{eH} $$ Step by step instruction: Identify all the quantities in your equation (with $\alpha=10^2\mathrm{cm\;m}^{-1}$ and $\beta=10^7\mathrm{erg\;J}^{-1}$) $l$ length, factor $\alpha$ $c$ velocity, factor $\alpha$ $\hbar$ action = energy $\times$ time, factor $\beta \times 1$ $e$ charge, factor $(\alpha \beta / 4 \pi \varepsilon_0)^{1/2}$ $H$ magnetic intensity, factor $(4 \pi \mu_0\beta/\alpha^3)^{1/2}$ Replace all quantities in the equation $$ \alpha^2 l^2 = \frac{\alpha c \; \beta \hbar}{(\alpha \beta / 4 \pi \varepsilon_0)^{1/2}e\;(4 \pi \mu_0\beta/\alpha^3)^{1/2}H} $$ Simplify $$ l^2 = \frac{c \; \hbar}{(1 / \varepsilon_0)^{1/2}e\;\mu_0^{1/2}H} = \frac{\varepsilon_0 c \; \hbar}{(\varepsilon_0 \mu_0)^{1/2}eH} $$ Use $c = 1/\sqrt{\varepsilon_0 \mu_0}$ $$ l^2 = \frac{\varepsilon_0 c^2 \hbar}{eH} = \frac{\hbar}{e\mu_0 H}$$
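A quick numerical cross-check of the result (my own sketch, not part of the original answer): the constants below are rounded CODATA values, and H = 1 Oe is an arbitrary test field, converted to SI via 1 Oe = (1000/4π) A/m.

```python
# Evaluate l^2 = c*hbar/(e*H) in Gaussian/CGS units (H in Oe) and
# l^2 = hbar/(e*mu0*H) in SI units (H in A/m), then compare.
import math

# --- CGS side ---
c_cgs    = 2.99792458e10    # speed of light, cm/s
hbar_cgs = 1.054572e-27     # erg*s
e_cgs    = 4.803204e-10     # elementary charge, statC
H_oe     = 1.0              # test field, Oe
l2_cgs   = c_cgs * hbar_cgs / (e_cgs * H_oe)   # cm^2

# --- SI side ---
hbar_si = 1.054572e-34      # J*s
e_si    = 1.602177e-19      # C
mu0     = 4e-7 * math.pi    # T*m/A
H_si    = 1e3 / (4 * math.pi) * H_oe           # 1 Oe expressed in A/m
l2_si   = hbar_si / (e_si * mu0 * H_si)        # m^2

# Compare in the same units (1 m^2 = 1e4 cm^2)
rel_err = abs(l2_si * 1e4 - l2_cgs) / l2_cgs
print(l2_cgs, l2_si * 1e4, rel_err)            # near-zero relative difference
```

Both evaluations give l² ≈ 6.6e-8 cm² for a 1 Oe field, confirming the converted formula.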
{ "domain": "physics.stackexchange", "id": 34269, "tags": "electromagnetism, units, si-units, unit-conversion" }
How can I understand an adiabatic process in Quantum Mechanics?
Question: I want to understand what adiabaticity in Quantum Mechanics means. I have gathered the following information: Adiabatic process: gradually changing conditions allow the system to adapt its configuration, so the probability density is modified by the process. If the system begins in an eigenstate of the initial Hamiltonian, it will end in the corresponding eigenstate of the final Hamiltonian. The adiabatic theorem states that quantum jumps are preferably avoided and that the system tries to retain its state and quantum numbers. An adiabatic change is one that occurs at a rate much slower than the difference in frequency between the energy eigenstates. In this case, the energy states of the system do not make transitions, so the quantum number is an invariant. I don't understand completely what these sentences mean. I would like it stated in the simplest terms possible. Answer: I am not sure what specific applications of the adiabatic theorem you are looking at for quantum rings, but I can give you a general overview of the quantum adiabatic theorem and break down some of what those words mean. As review, remember that energy states and levels in quantum mechanics are represented by eigenstates and eigenvalues, respectively, of your time-independent Hamiltonian $\hat{H}$. Thus if our state $| \psi_E \rangle$ is an energy eigenstate, then it satisfies the following: $$ \hat{H}\ | \psi_E \rangle = E\ | \psi_E \rangle $$ where $E$ would be the energy corresponding to our energy eigenstate. Now let's say our Hamiltonian carries some explicit time dependence $\hat{H}(t)$; for example maybe the mass of our particle is now changing in time. In simplest terms, what the quantum adiabatic theorem states is that if your Hamiltonian is changing slowly enough (we'll define this in a second), then if you start in an energy eigenstate $| \psi_E(t = 0)\rangle$, you will remain in an energy eigenstate $|\psi_E(t)\rangle$ for all time $t$.
Thus, you will always have a well-defined instantaneous energy for all time (which is what is meant by saying the system retains its state and quantum numbers). If your Hamiltonian is not changing slowly, then in general you will have: $$ |\psi(t)\rangle = \sum_i c_i\, |\psi_{E_i}(t)\rangle $$ which means your state is now in a superposition of the instantaneous energy states of your system. I guess in the language of your original question, your state can now "jump" to other instantaneous energy levels, since it will no longer remain in just one. So how slow is "slow enough"? Sakurai goes through a full derivation to find that the adiabatic approximation holds if the time scale for changes of your Hamiltonian is much larger than the inverse energy of your eigenstate, $\tau \gg \hbar /E$. This is what is meant by "An adiabatic change is one that occurs at a rate much slower than the difference in frequency between the energy eigenstates." This may have been a bit formal, but it gets to the meat behind the quantum adiabatic theorem. Hope this helped.
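The theorem can be watched at work in a minimal numerical sketch (my own illustration, not from the answer; all parameter values are arbitrary): a two-level Hamiltonian is swept from ε = −E₀ to +E₀ in a Landau-Zener-style ramp. A slow sweep keeps the state pinned to the instantaneous ground state; a fast sweep does not.

```python
import numpy as np

def final_ground_overlap(T, n_steps=20000, g=1.0, E0=5.0):
    """Sweep H(t) = eps(t)*sigma_z + g*sigma_x, with eps going from -E0 to +E0
    over total time T (hbar = 1), starting in the instantaneous ground state.
    Returns |<ground state of H(T) | psi(T)>|^2."""
    sz = np.array([[1.0, 0.0], [0.0, -1.0]], dtype=complex)
    sx = np.array([[0.0, 1.0], [1.0, 0.0]], dtype=complex)
    dt = T / n_steps
    psi = np.linalg.eigh(-E0 * sz + g * sx)[1][:, 0]   # ground state of H(0)
    for k in range(n_steps):
        eps = -E0 + 2.0 * E0 * (k + 0.5) * dt / T      # midpoint value of eps(t)
        w = np.hypot(eps, g)                           # |eigenvalue| of traceless H
        H = eps * sz + g * sx
        # exact propagator exp(-i*H*dt) for a constant 2x2 traceless Hermitian H
        psi = (np.cos(w * dt) * np.eye(2) - 1j * np.sin(w * dt) * H / w) @ psi
    ground_f = np.linalg.eigh(E0 * sz + g * sx)[1][:, 0]
    return abs(np.vdot(ground_f, psi)) ** 2

slow = final_ground_overlap(T=100.0)  # Hamiltonian changes much slower than 1/gap
fast = final_ground_overlap(T=1.0)    # change comparable to 1/gap: not adiabatic
print(slow, fast)
```

The slow sweep returns an overlap close to 1 (the state tracked the ground state); the fast sweep leaves substantial population in the excited state.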
{ "domain": "physics.stackexchange", "id": 77100, "tags": "quantum-mechanics, adiabatic" }
What is the autocorrelation of a Dirac pulse?
Question: What is the autocorrelation of $x(t) = \delta(t)$? Can you explain to me how to calculate it? Answer: Well, by definition of the $\delta$ distribution, you have: $\int_{-\infty}^{\infty} f(t) \delta(t-T)\, \textrm{d}t = f(T)$ The autocorrelation of a function $g(t)$ can be computed via: $\int_{-\infty}^{\infty} g^{*}(t)g(t + \tau)\, \textrm{d}t$, with $g^*$ as the complex conjugate of $g$. Since $\delta(t)$ is real-valued, this conjugation can be skipped. So you are left with: $\int_{-\infty}^{\infty} \delta^{*}(t)\delta(t + \tau)\, \textrm{d}t = \int_{-\infty}^{\infty} \delta(t)\delta(t + \tau)\, \textrm{d}t = \delta(-\tau)$. The first = sign comes from the autocorrelation of the real-valued $\delta$, the second from the definition of the $\delta$-distribution. So, the autocorrelation function of the $\delta$-distribution is the distribution itself. An eigenfunction of the autocorrelation operation, so to say ;) Think about it, this does make sense: the only perfect match is achieved with no time shift, i.e. at $\tau = 0$. All other shifts would end up with one of the arguments of the $\delta$ being different from 0, hence with the $\delta$-function being 0 there. BTW: $\delta(-\tau) = \delta(\tau)$, since the function/distribution is symmetric.
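The same statement can be sanity-checked in discrete time (my own illustration, not part of the original answer), where the continuous δ becomes a Kronecker delta:

```python
import numpy as np

x = np.zeros(9)
x[4] = 1.0                           # discrete unit impulse at n = 0 (index 4)
r = np.correlate(x, x, mode='full')  # discrete autocorrelation over all lags
print(r)                             # a single 1.0 at zero lag, zeros elsewhere
print(int(np.argmax(r)))             # zero lag sits at index len(x) - 1 = 8
```

The autocorrelation of the impulse is again an impulse, peaked at zero lag, mirroring the continuous result above.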
{ "domain": "dsp.stackexchange", "id": 7984, "tags": "signal-analysis, continuous-signals, autocorrelation, pulse" }
How to install ros-electric-object-recognition?
Question: I use ROS electric. I followed Object Recognition Kitchen. This doc says 'Install the ros-electric-object-recognition package from the official ROS repositories.' I try to install: sam@/home/sam$ sudo apt-get install ros-electric-object-recognition Reading package lists... Done Building dependency tree Reading state information... Done E: Unable to locate package ros-electric-object-recognition sam@/home/sam$ How to fix it? Thank you~ ============================================= ROS apt: sam@/etc/apt/sources.list.d$ cat ros-latest.list # deb http://packages.ros.org/ros/ubuntu natty main # disabled the upgrade to natty sam@/etc/apt/sources.list.d$ cat ros-latest.list.distUpgrade deb http://packages.ros.org/ros/ubuntu maverick main sam@/etc/apt/sources.list.d$ cat ros-latest.list.save deb http://packages.ros.org/ros/ubuntu maverick main sam@/etc/apt/sources.list.d$ Originally posted by sam on ROS Answers with karma: 2570 on 2012-09-14 Post score: 0 Answer: For me, the package exists in electric. I can also see it at packages.ros.org. Maybe you need to run sudo apt-get update first? Otherwise, what is your apt entry for ROS? Originally posted by dornhege with karma: 31395 on 2012-09-14 This answer was ACCEPTED on the original site Post score: 1 Original comments Comment by sam on 2012-09-14: I try to use apt-get update but it doesn't work. I have updated my original post. What to do next? Thank you~ Comment by dornhege on 2012-09-14: The ros repository in ros-latest.list is commented out. This obviously won't work, just remove the #, run apt-get update again and it should work. Comment by dornhege on 2012-09-14: Note: This automatically happened during the maverick -> natty update. You should have gotten a warning about that from ubuntu and it will probably happen on every ubuntu update again.
{ "domain": "robotics.stackexchange", "id": 11026, "tags": "ros, object-recognition" }
Convolutional Neural Network not learning EEG data
Question: I have trained a simple CNN (using Python + Lasagne) for a 2-class EEG classification problem, however, the network doesn't seem to learn. Loss does not drop over epochs and classification accuracy doesn't improve beyond random guessing (50%): Questions Is there anything wrong with the code that is causing this? Is there a better (more correct?) way to handle EEG data? EEG setup Data is collected from participants completing a total of 1044 EEG trials. Each trial lasts 2 seconds (512 time samples), has 64 channels of EEG data, and is labelled 0/1. All trials have been shuffled so as to not learn on one set of participants and test on another. The goal is to predict the label of a trial after being given the 64x512 matrix of raw EEG data. The raw input data (which I can't show here as it's part of a research project) has a shape of (1044, 1, 64, 512) train/validation/test splits are then created at 60/20/20% With such a small dataset I would have thought overfitting would be a problem, but training loss doesn't seem to reflect that Code Network architecture: def build_cnn(input_var=None): l_in = InputLayer(shape=(None, 1, 64, 512), input_var=input_var) l_conv1 = Conv2DLayer(incoming = l_in, num_filters = 32, filter_size = (1, 3), stride = 1, pad = 'same', W = lasagne.init.Normal(std = 0.02), nonlinearity = lasagne.nonlinearities.rectify) l_pool1 = Pool2DLayer(incoming = l_conv1, pool_size = (1, 2), stride = (2, 2)) l_fc = lasagne.layers.DenseLayer( lasagne.layers.dropout(l_pool1, p=.5), num_units=256, nonlinearity=lasagne.nonlinearities.rectify) l_out = lasagne.layers.DenseLayer( lasagne.layers.dropout(l_fc, p=.5), num_units=2, nonlinearity=lasagne.nonlinearities.softmax) return l_out Note: I have tried adding more conv/pool layers as I thought the network wasn't deep enough to learn the categories but 1) this doesn't change the outcome I mentioned above and 2) I've seen other EEG classification code where a simple 1-conv-layer network can get above random chance
Helper for creating mini batches: def iterate_minibatches(inputs, targets, batchsize, shuffle=False): assert len(inputs) == len(targets) if shuffle: indices = np.arange(len(inputs)) np.random.shuffle(indices) for start_idx in range(0, len(inputs) - batchsize + 1, batchsize): if shuffle: excerpt = indices[start_idx:start_idx + batchsize] else: excerpt = slice(start_idx, start_idx + batchsize) yield inputs[excerpt], targets[excerpt] Running the model: def main(model='cnn', batch_size=500, num_epochs=500): input_var = T.tensor4('inputs') target_var = T.ivector('targets') network = build_cnn(input_var) prediction = lasagne.layers.get_output(network) loss = lasagne.objectives.categorical_crossentropy(prediction, target_var) loss = loss.mean() train_acc = T.mean(T.eq(T.argmax(prediction, axis=1), target_var), dtype=theano.config.floatX) params = lasagne.layers.get_all_params(network, trainable=True) updates = lasagne.updates.nesterov_momentum(loss, params, learning_rate=0.01) test_prediction = lasagne.layers.get_output(network, deterministic=True) test_loss = lasagne.objectives.categorical_crossentropy(test_prediction, target_var) test_loss = test_loss.mean() test_acc = T.mean(T.eq(T.argmax(test_prediction, axis=1), target_var), dtype=theano.config.floatX) train_fn = theano.function([input_var, target_var], [loss, train_acc], updates=updates) val_fn = theano.function([input_var, target_var], [test_loss, test_acc]) print("Starting training...") for epoch in range(num_epochs): # full pass over the training data: train_err = 0 train_acc = 0 train_batches = 0 start_time = time.time() for batch in iterate_minibatches(train_data, train_labels, batch_size, shuffle=True): inputs, targets = batch err, acc = train_fn(inputs, targets) train_err += err train_acc += acc train_batches += 1 # full pass over the validation data: val_err = 0 val_acc = 0 val_batches = 0 for batch in iterate_minibatches(val_data, val_labels, batch_size, shuffle=False): inputs, targets = batch err, acc = 
val_fn(inputs, targets) val_err += err val_acc += acc val_batches += 1 # After training, compute the test predictions/error: test_err = 0 test_acc = 0 test_batches = 0 for batch in iterate_minibatches(test_data, test_labels, batch_size, shuffle=False): inputs, targets = batch err, acc = val_fn(inputs, targets) test_err += err test_acc += acc test_batches += 1 # Run the model main(batch_size=5, num_epochs=30) Answer: I had the same problem when I used TensorFlow to build a self driving car. The training error for my neural nets bounced around forever and never converged on a minimum. As a sanity check I couldn't even intentionally get my models to overfit, so I knew something was definitely wrong. What worked for me was scaling my inputs. My inputs were pixel color channels between 0 and 255, so I divided all values by 255. From that point onward, my model training (and validation) error hit a minimum as expected and stopped bouncing around. I was surprised how big of a difference it made. I can't guarantee it will work for your case, but it's definitely worth trying, since it's easy to implement.
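To illustrate the scaling fix in the EEG setting, here is a sketch (my own, with random stand-in data; real EEG would be standardised the same way): per-channel z-scoring, with the statistics computed on the training set only and then reused for validation/test data.

```python
import numpy as np

rng = np.random.default_rng(0)
# stand-ins for (num_trials, 1, channels, samples) EEG arrays
X_train = rng.normal(loc=40.0, scale=15.0, size=(100, 1, 64, 512))
X_test  = rng.normal(loc=40.0, scale=15.0, size=(20, 1, 64, 512))

# per-channel mean/std over trials and time, from TRAINING data only
mean = X_train.mean(axis=(0, 1, 3), keepdims=True)   # shape (1, 1, 64, 1)
std  = X_train.std(axis=(0, 1, 3), keepdims=True)

X_train_s = (X_train - mean) / std
X_test_s  = (X_test - mean) / std                    # same statistics reused

print(X_train_s.mean(), X_train_s.std())             # ~0 and ~1
```

Raw EEG amplitudes are often far from unit scale, so without this (or a similar normalisation) gradient descent can stall exactly as described above.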
{ "domain": "datascience.stackexchange", "id": 1258, "tags": "machine-learning, python, neural-network, convolutional-neural-network, theano" }
How can physicists observe events at large scales such as a star birth?
Question: I recently read multiple articles about physicists observing the birth of a star, or a star swallowed by a black hole. However I can't manage to understand how these phenomena are observable at such scales. Common sense would lead me to think that the bigger the object you observe is, the bigger the timeframe of the associated phenomena is. I mean it seems that when you look at the micro-/nanoscopic world, phenomena happen very very fast. So when you look at galaxies it should be very very slow from our point of view. So if we can watch the process of a star's birth, does it mean that such events have a timeframe similar to the phenomena that we observe at our scales? Answer: Typically astronomical events do not happen on the time scale of humans. So what scientists do is look at a large sample of events, each at a different time in the evolution of the event. So for example to see 'stars' being born they would look in a gas nebula and see several examples of stars in the different stages of coalescing. However, some portions of these events can be on a human time-scale. A supernova can be seen over a period of several weeks to months. This would be rather boring to watch in real-time but if you use time-lapse 'photography' of about 1 day per sample you could see the event quite clearly.
{ "domain": "physics.stackexchange", "id": 39220, "tags": "astronomy, time, scales" }
Vertical Circular Motion Stone Drop From A Plane
Question: The pilot of an airplane flying a vertical circular arc of radius $R$ at a constant speed $v$ drops a stone of mass $m$ at the highest point of the arc with zero velocity relative to him. Describe the trajectory of the stone with respect to an observer on the ground. Does the stone just fly off horizontally at the initial speed of $v$, OR does the centripetal force that was acting on it prior to release change the trajectory in any way? Is there a point where the velocity $\vec{v}$ is such that the pilot actually sees the stone rise up? I can't think of anything other than that the trajectory will be equivalent to that of a rock attached to a string whose cord is cut, but I feel that the vertical circular motion at constant speed might change the trajectory in some way that I am not able to think of. Answer: Yes, the initial velocity of the stone will just be horizontal with speed $v$. Instantaneous velocity at a moment does not depend on what net force is acting on it at that moment. As for whether the pilot will at a certain moment see the stone rising, it depends on $v$ and $R$. Since the downward acceleration of the pilot at an angle $\theta$ (made with the vertical upward direction) is $\frac{v^2}{R}\cos \theta$, and the stone is under downward acceleration $g$, with the same initial vertical velocity, it depends on which one has a larger downward acceleration. For instance, if $\frac{v^2}{R}>g$, then the pilot will see the stone rising just after the release. On the other hand, if $\frac{v^2}{R}\le g$, the pilot will never see the stone rising.
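The comparison of downward accelerations can be checked directly: with the circle's centre at the origin, the pilot's height is R·cos(vt/R) and the stone's is R − gt²/2, so their difference for small t is ≈ (v²/R − g)t²/2. A quick numerical sketch (my own; all numbers arbitrary):

```python
import numpy as np

def stone_minus_pilot(v, R, t, g=9.81):
    """Stone height minus pilot height a time t after release at the top."""
    y_pilot = R * np.cos(v * t / R)   # pilot continues along the circle
    y_stone = R - 0.5 * g * t**2      # stone: projectile launched horizontally
    return y_stone - y_pilot

t = 1e-3                              # a short time after release, in seconds
print(stone_minus_pilot(v=40.0, R=100.0, t=t))  # v^2/R = 16 > g: positive
print(stone_minus_pilot(v=20.0, R=100.0, t=t))  # v^2/R = 4 < g: negative
```

A positive difference means the pilot sees the stone above him (rising), exactly the v²/R > g criterion from the answer.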
{ "domain": "physics.stackexchange", "id": 43696, "tags": "homework-and-exercises, kinematics, rotational-kinematics" }
Why the Hamiltonian and the Lagrangian are used interchangeably in QFT perturbation calculations
Question: Whenever one needs to calculate correlation functions in QFT using perturbations one encounters the following expression: $\langle 0| some\ operators \times \exp(iS_{(t)}) |0\rangle$ where, depending on the textbook, S is either (up to a sign) $\int \mathcal{L}dt$, with $\mathcal{L}$ the interaction Lagrangian, or $-\int \mathcal{H}dt$, with $\mathcal{H}$ the interaction Hamiltonian. It is straightforward to prove that if you do not have time derivatives in the interaction terms these two expressions are equivalent. However, these expressions are derived through different approaches and I cannot explain from first principles why (and when) they give the same answer. Result 1 comes from the path-integral approach where we start with a Lagrangian and do perturbation theory with respect to the action, which is the integral of the Lagrangian. Roughly, the exponential is the probability amplitude of the trajectory. Result 2 comes from the approach taught in QFT 101: Starting from the Schrödinger equation, we guess relativistic generalizations (Dirac and Klein-Gordon) and we guess the commutation relations to be used for second quantization. Then we proceed to the usual perturbation theory in the interaction picture. Roughly, the exponential is the time evolution operator. Why and when are the results the same? Why and when is the probability amplitude from the path-integral approach roughly the same thing as the time evolution operator? Or restated one more time: Why do the point of view where the exponential is a probability amplitude and the point of view where the exponential is the evolution operator give the same results? Answer: Starting from the Hamiltonian formulation of QM one can derive the path-integral formalism (see chapter 9 in Weinberg's QFT volume 1), where the Hamiltonian action is found to be proportional to $\int \mathrm{d}t (pv - H)$.
For a subclass of theories with "a Hamiltonian that is quadratic in the momenta" (see section "9.3 Lagrangian Version of the Path-Integral formula" in the above textbook), the term $(pv - H)$ can be transformed into a Lagrangian $L_H = (pv - H)$. Then the Lagrangian action is proportional to $\int \mathrm{d}t L_H$. Both actions give the same results because one is exactly equivalent to (and derived from) the other. $$ \int \mathrm{d}t (pv - H) = \int \mathrm{d}t L_H$$ Moreover, when working in the interaction representation you do not use the total Hamiltonian but only the interaction. The derivation of the Hamiltonian action is the same, except that now the total Hamiltonian is substituted by the interaction Hamiltonian $V$. Again you have two equivalent forms of writing the action, either in Hamiltonian or Lagrangian form. If you consider Hamiltonians whose interaction $V$ does not depend on the momenta, then the $pv$ term vanishes and the above equivalence between the actions reduces to $$ - \int \mathrm{d}t V = \int \mathrm{d}t L_V$$ where, evidently, the interaction Lagrangian is $L_V = -V$. This is what happens for instance in QED, where the interaction $V$ depends on both position and the Dirac $\alpha$ matrices, but not on momenta. Note: There is a sign mistake in your post. I cannot edit because the change is less than 10 characters; I have noticed the mistake in a comment to you above, but it remains.
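The quadratic-in-momentum case can be made concrete with a small check (my own sketch, not from the answer): for H = p²/2m + V(q), eliminating p via v = ∂H/∂p = p/m turns pv − H into the familiar Lagrangian ½mv² − V(q).

```python
def hamiltonian(p, q, m, V):
    """A Hamiltonian quadratic in the momentum."""
    return p**2 / (2.0 * m) + V(q)

def lagrangian_from_H(v, q, m, V):
    """Legendre transform L = p*v - H, with p eliminated via v = p/m."""
    p = m * v
    return p * v - hamiltonian(p, q, m, V)

V = lambda q: 0.5 * q**2          # any potential; harmonic as an example
m, v, q = 2.0, 3.0, 0.7
print(lagrangian_from_H(v, q, m, V))   # equals 0.5*m*v**2 - V(q)
print(0.5 * m * v**2 - V(q))
```

Both expressions agree for any sample point, which is the equality $\int \mathrm{d}t\,(pv - H) = \int \mathrm{d}t\, L_H$ evaluated pointwise for this class of Hamiltonians.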
{ "domain": "physics.stackexchange", "id": 5589, "tags": "quantum-field-theory, lagrangian-formalism, operators, hamiltonian-formalism, path-integral" }
Reaction between ammonia and dichloromethane
Question: Could ammonia displace a chloride in dichloromethane to form a methylamine? If so, given enough reagents and time, could this be used to make a 3D structure composed of nitrogen-bonded carbons, and carbons bonded to nitrogen and hydrogen?
{ "domain": "chemistry.stackexchange", "id": 14198, "tags": "organic-chemistry, inorganic-chemistry, polymers" }
Control consisting of a mute button and an expanding range slider
Question: I'm learning BackboneJS and I just made an attempt at converting a pre-existing module to a Backbone.View. I was hoping to get some feedback on my attempt and learn. I've been using the annotated ToDo source as a guide. Here's some HTML to give you a rough idea: <div id="VolumeControl"> <div id="MuteButton" class="volumeControl" title="Toggle Volume"> <svg width="16" height="16"> <path d="M0,6 L3,6 L7,2 L7,14 L3,10 L0,10Z" fill="#fff" /> <rect class="MuteButtonBar" id="MuteButtonBar1" x="9" y="6.5" width="1" height="3" /> <rect class="MuteButtonBar"id="MuteButtonBar2" x="11" y="5" width="1" height="6" /> <rect class="MuteButtonBar" id="MuteButtonBar3" x="13" y="3.5" width="1" height="9" /> <rect class="MuteButtonBar" id="MuteButtonBar4" x="15" y="2" width="1" height="12" /> </svg> </div> <div id="VolumeSliderWrapper" class="volumeControl"> <input type="range" id="VolumeSlider" class="volumeControl" title="Click or drag to change the volume." min="0" max="100" step="1" value="0" /> </div> </div> It's essentially a two-part control consisting of a mute button and an HTML5 range slider which expands out. Here's a quick screenshot to bring things together mentally: Here's my Backbone.View: // Responsible for controlling the volume indicator of the UI. define(['player'], function (player) { 'use strict'; var volumeControlView = Backbone.View.extend({ el: $('#VolumeControl'), events: { 'change #VolumeSlider': 'setVolume', 'click #MuteButton': 'toggleMute', 'mousewheel .volumeControl': 'scrollVolume', 'mouseenter .volumeControl': 'expand', 'mouseleave': 'contract' }, render: function () { var volume = player.get('volume'); // Repaint the amount of white filled in the bar showing the distance the grabber has been dragged. 
var backgroundImage = '-webkit-gradient(linear,left top, right top, from(#ccc), color-stop(' + volume / 100 + ',#ccc), color-stop(' + volume / 100 + ',rgba(0,0,0,0)), to(rgba(0,0,0,0)))'; this.volumeSlider.css('background-image', backgroundImage); var activeBars = Math.ceil((volume / 25)); this.muteButton.find('.MuteButtonBar:lt(' + (activeBars + 1) + ')').css('fill', '#fff'); this.muteButton.find('.MuteButtonBar:gt(' + activeBars + ')').css('fill', '#666'); if (activeBars === 0) { this.muteButton.find('.MuteButtonBar').css('fill', '#666'); } var isMuted = player.get('muted'); if (isMuted) { this.muteButton .addClass('muted') .attr('title', 'Click to unmute.'); } else { this.muteButton .removeClass('muted') .attr('title', 'Click to mute.'); } return this; }, // Initialize player's volume and muted state to last known information or 100 / unmuted. initialize: function () { this.volumeSliderWrapper = this.$('#VolumeSliderWrapper'); this.volumeSlider = this.$('#VolumeSlider'); this.muteButton = this.$('#MuteButton'); // Set the initial volume of the control based on what the YouTube player says is the current volume. var volume = player.get('volume'); this.volumeSlider.val(volume).trigger('change'); this.listenTo(player, 'change:muted', this.render); this.render(); }, // Whenever the volume slider is interacted with by the user, change the volume to reflect. setVolume: function () { var newVolume = parseInt(this.volumeSlider.val(), 10); player.set('volume', newVolume); this.render(); }, // Adjust volume when user scrolls mousewheel while hovering over volumeControl. scrollVolume: function (event, delta) { // Convert current value from string to int, then go an arbitrary, feel-good amount of volume points in a given direction (thus *3 on delta). 
var newVolume = parseInt(this.volumeSlider.val(), 10) + delta * 3; this.volumeSlider.val(newVolume).trigger('change'); }, toggleMute: function () { var isMuted = player.get('muted'); player.set('muted', !isMuted); }, // Show the volume slider control by expanding its wrapper whenever any of the volume controls are hovered. expand: function () { this.volumeSliderWrapper.addClass('expanded'); }, contract: function () { this.volumeSliderWrapper.removeClass('expanded'); } }); var volumeControl = new volumeControlView; }) Am I doing too much in render? Anything look weird? Answer: I think this is fine for something as simple as a volume control; however, there are some limitations to at least be aware of: Since RequireJS invokes the module, it would be problematic to construct player dynamically. There's no good way of creating more than one instance of your view – probably not a problem. The View is tightly bound to a specific DOM structure. This means it will require extra code to make your View responsive. e.g. a small volume control for mice (desktop) and a big one for fingers (mobile). Here are some potential solutions: Likely, player just has a single default state, but if you ever want to construct this object yourself, you should consider returning/exporting the volumeControlView definition from your module, rather than returning an instance of it. A simple solution here is to simply return the result of Backbone.View.extend. Use a template. In the future, you can use additional templates to support other platforms. e.g. start with a desktop template, later on create a mobile template, and choose the template dynamically at runtime based on the environment. I would define your module like so (uses RequireJS text plugin): define(['Backbone', 'underscore', 'text!templates/volume-bar.html'], function (Backbone, _, volume_bar) { 'use strict'; return Backbone.View.extend({ template: _.template(volume_bar), // ...
}); }); This lets you instantiate the view like so: require(['models/Player', 'views/VolumeBar'], function(PlayerModel, VolumeBarView) { 'use strict'; var player = new PlayerModel({...}); var volume = new VolumeBarView({ model: player }); volume.render().$el.appendTo('#player'); }); Doing so would change how you bind your events and render your HTML, so I'm leaving it at this just to give you the general idea.
{ "domain": "codereview.stackexchange", "id": 3909, "tags": "javascript, jquery, backbone.js, require.js" }
Does matrix shape for training/testing sets have to be in a particular order?
Question: I've noticed that in the Andrew Ng Deep Learning course, for image analysis he always has X_train matrices in the shape of [height, width, 3, num_inputs], or, if flattened, [height X width X 3, num_inputs]. He also has his y_train as [1, num_inputs]. To me, it is more intuitive to flip these so that X_train is [num_inputs, height X width X 3] and y_train is [num_inputs, 1]. Is there any motivating reason or justification that it has to be the way he does it, or is it just preference? Is this a standard or does it vary? Answer: It varies: there is no single standard, and in practice you have to use whatever shape the functions of your framework expect. For example, TensorFlow/Keras defaults to channels-last batches of shape (num_inputs, height, width, 3), PyTorch expects channels-first (num_inputs, 3, height, width), and scikit-learn-style APIs expect (num_inputs, num_features). Ng's features-by-examples layout is a course convention that keeps the vectorized equations tidy, with one column per training example. So check the documentation of your framework before doing anything.
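Concretely (a sketch with made-up sizes), the two conventions differ only by an axis permutation, and the flattened matrices are exact transposes of each other:

```python
import numpy as np

h, w, c, n = 4, 4, 3, 10
rng = np.random.default_rng(0)
X_course = rng.normal(size=(h, w, c, n))        # examples-last, as in the course
X_flat   = X_course.reshape(h * w * c, n)       # (features, num_inputs)

X_rows      = np.moveaxis(X_course, -1, 0)      # (num_inputs, h, w, c)
X_rows_flat = X_rows.reshape(n, h * w * c)      # (num_inputs, features)

print(X_flat.shape, X_rows_flat.shape)          # (48, 10) (10, 48)
print(np.array_equal(X_rows_flat, X_flat.T))    # True: same data, transposed
```

So flipping the convention costs nothing beyond a transpose; what matters is keeping the layout consistent with the matrix multiplications (and framework functions) you feed it to.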
{ "domain": "datascience.stackexchange", "id": 3219, "tags": "machine-learning, neural-network, deep-learning" }
Python implementation of a wrapped Conway's Game Of Life board
Question: import random import rlereader class Board: """Handles the status of all cells.""" def __init__(self, size): self.size = size self.grid = self.make_blank_grid() self.furthest_col = 0 self.furthest_row = 0 def run_turns(self, num_turns): """Run the simulator for a number of turns.""" while num_turns > 0: self.run_turn() num_turns -= 1 def run_turn(self): """Run a single turn of the simulator.""" new_grid = self.make_blank_grid() for row in range(0, self.size): for col in range(0, self.size): new_grid[row][col] = self.get_cell_life(row, col) self.grid = new_grid def toggle_cell(self, row, col): """Toggle the dead or alive status of a single cell.""" self.grid[row][col] = not self.grid[row][col] def check_furthest(self, row, col): """Check the furthest processed cell against this one and update if we near the edge.""" if row + 1 >= self.furthest_row: self.furthest_row = row + 2 if col + 1 >= self.furthest_col: self.furthest_col = col + 2 def get_cell_life(self, row, col): """Return whether a given cell should become dead or alive.
This may update the processed cell boundaries if necessary.""" living_neighbours = self.count_living_neighbours(row, col) if self.grid[row][col]: if living_neighbours in [2, 3]: return True else: self.check_furthest(row, col) return False else: if living_neighbours == 3: self.check_furthest(row, col) return True return False def check_cell(self, row, col): """Return whether the cell is dead or alive for the current generation.""" if row < 0: row = self.size - 1 if row > self.size - 1: row = 0 if col < 0: col = self.size - 1 if col > self.size - 1: col = 0 return self.grid[row][col] def count_living_neighbours(self, row, col): """Find how many neighbours of a given cell are alive.""" active_count = 0 to_check = [ (row - 1, col - 1), # Top left (row - 1, col), # Top (row - 1, col + 1), # Top right (row, col - 1), # Left (row, col + 1), # Right (row + 1, col - 1), # Bottom left (row + 1, col), # Bottom (row + 1, col + 1) # Bottom Right ] for crow, ccol in to_check: if self.check_cell(crow, ccol): active_count += 1 return active_count def make_blank_grid(self): """Returns a blank grid for future use.""" grid = [] for row in range(0, self.size): grid.append([]) for col in range(0, self.size): grid[row].append([]) grid[row][col] = False return grid def load_rle_into_grid(self, rle): """Loads a RLE representation of a playing field into the grid. rle should be a file-like object.""" reader = rlereader.GRLEReader() data = reader.read_rle(rle) # Returns an uncompressed series of tokens from an rle file. The rle file is opened elsewhere than this package.
self.blank_grid() current_token = 0 current_row = 0 current_col = 0 while True: try: token = data[current_token] except IndexError: break # Out of tokens if type(token) == rlereader.EOFToken: break if token.value in ['b', 'o']: # 'o' = alive, 'b' = dead self.grid[current_row][current_col] = (token.value == 'o') current_col += 1 if current_col > self.size - 1: print('Too wide an import, cancelling import.') break if current_col >= self.furthest_col: self.furthest_col = current_col + 2 if token.value == '$': # $ indicates end of line. current_row += 1 if current_row > self.size - 1: print('Too high an import, cancelling import.') break current_col = 0 if current_row > self.furthest_row: self.furthest_row = current_row + 2 current_token += 1 def randomise_grid(self): """Change every cell in a grid to random dead or alive state.""" for row in range(0, self.size): for col in range(0, self.size): self.grid[row][col] = random.choice([True, False]) self.furthest_row = self.size - 1 self.furthest_col = self.size - 1 def blank_grid(self): """Replace the current grid with a blank grid.""" self.grid = self.make_blank_grid() This is the business end of a Python Game Of Life simulator I wrote. Can I get some opinions on it? (I hope it's not too much, I cut out all the GUI code and rlereader because they're not too relevant. blank_grid, randomise_grid and toggle_cell are all only called from the UI code.) Answer: import random import rlereader class Board: """Handles the status of all cells.""" def __init__(self, size): self.size = size self.grid = self.make_blank_grid() self.furthest_col = 0 self.furthest_row = 0 def run_turns(self, num_turns): """Run the simulator for a number of turns.""" while num_turns > 0: Use a for turn in range(num_turns) loop instead of a while loop self.run_turn() num_turns -= 1 def run_turn(self): """Run a single turn of the simulator.""" new_grid = self.make_blank_grid() for row in range(0, self.size): Just use range(self.size), the 0 is not needed.
for col in range(0, self.size): new_grid[row][col] = self.get_cell_life(row, col) self.grid = new_grid def toggle_cell(self, row, col): """Toggle the dead or alive status of a single cell.""" self.grid[row][col] = not self.grid[row][col] def check_furthest(self, row, col): """Check the furthest processed cell against this one and update if we are near the edge.""" if row + 1 >= self.furthest_row: self.furthest_row = row + 2 if col + 1 >= self.furthest_col: self.furthest_col = col + 2 Write it like this instead: self.furthest_row = max(self.furthest_row, row + 2) self.furthest_col = max(self.furthest_col, col + 2) Also, the function updates rather than checks the furthest, so a better name would be update_furthest. I don't know why you are adding 2 here; a comment explaining that would be helpful. def get_cell_life(self, row, col): """Return whether a given cell should become dead or alive. This may update the processed cell boundaries if necessary.""" living_neighbours = self.count_living_neighbours(row, col) if self.grid[row][col]: if living_neighbours in [2, 3]: return True else: self.check_furthest(row, col) return False else: if living_neighbours == 3: self.check_furthest(row, col) return True return False Write it like this instead: living_neighbours = self.count_living_neighbours(row, col) if self.grid[row][col]: new_value = living_neighbours in [2, 3] else: new_value = living_neighbours == 3 if new_value != self.grid[row][col]: self.check_furthest(row, col) return new_value This way we separate the furthest updating from the decision of the new value of the cell.
It might also be a good idea to move the furthest updating to run_turn. def check_cell(self, row, col): """Return whether the cell is dead or alive for the current generation.""" if row < 0: row = self.size - 1 if row > self.size - 1: row = 0 if col < 0: col = self.size - 1 if col > self.size - 1: col = 0 return self.grid[row][col] Use modulus instead of all those ifs: def check_cell(self, row, col): return self.grid[row % self.size][col % self.size] The % divides with remainder, which gives exactly the wrap-around feature you want. def count_living_neighbours(self, row, col): """Find how many neighbours of a given cell are alive.""" active_count = 0 to_check = [ (row - 1, col - 1), # Top left (row - 1, col), # Top (row - 1, col + 1), # Top right (row, col - 1), # Left (row, col + 1), # Right (row + 1, col - 1), # Bottom left (row + 1, col), # Bottom (row + 1, col + 1) # Bottom Right ] for crow, ccol in to_check: if self.check_cell(crow, ccol): active_count += 1 return active_count You can simplify the function by replacing it with: return sum(self.check_cell(row, col) for row, col in to_check) I recommend moving the neighbour offsets used to build to_check to a global constant. def make_blank_grid(self): """Returns a blank grid for future use.""" grid = [] for row in range(0, self.size): grid.append([]) for col in range(0, self.size): grid[row].append([]) grid[row][col] = False return grid Write it like this instead: def make_blank_grid(self): """Returns a blank grid for future use.""" grid = [] for row in range(0, self.size): grid.append([False] * self.size) return grid The multiply produces a new list which is the original list repeated. But only use it on immutable values like numbers and bools. Don't use it on anything that can be modified, like other lists. def load_rle_into_grid(self, rle): """Loads a RLE representation of a playing field into the grid.
rle should be a file like object.""" reader = rlereader.GRLEReader() data = reader.read_rle(rle) # Returns an uncompressed series of tokens from an rle file. The rle file is opened elsewhere than this package. self.blank_grid() current_token = 0 current_row = 0 current_col = 0 while True: try: token = data[current_token] except IndexError: break # Out of tokens Use for token in data instead of the while loop. For loops are almost always better than while loops. if type(token) == rlereader.EOFToken: break Breaking out because of either an EOFToken or running out of tokens is strange. You should probably only have to break in one case or the other, not both. Also, checking the types of variables is discouraged. You shouldn't need to do that. Is rlereader your code or something else? if token.value in ['b', 'o']: # 'o' = alive, 'b' = dead self.grid[current_row][current_col] = (token.value == 'o') current_col += 1 if current_col > self.size - 1: print('Too wide an import, cancelling import.') break When something goes wrong, it's best to throw an exception, not print a message and continue on your merry way. if current_col >= self.furthest_col: self.furthest_col = current_col + 2 Use your check_furthest method, so as not to duplicate logic. if token.value == '$': # $ indicates end of line. current_row += 1 if current_row > self.size - 1: print('Too high an import, cancelling import.') break current_col = 0 if current_row > self.furthest_row: self.furthest_row = current_row + 2 current_token += 1 def randomise_grid(self): """Change every cell in a grid to random dead or alive state.""" for row in range(0, self.size): for col in range(0, self.size): self.grid[row][col] = random.choice([True, False]) self.furthest_row = self.size - 1 self.furthest_col = self.size - 1 def blank_grid(self): """Replace the current grid with a blank grid.""" self.grid = self.make_blank_grid() One additional thing to consider would be using numpy.
Numpy is a Python library that provides a multi-dimensional array type. It's more natural to work with than lists of lists. It also provides vector operations, which allow you to perform the same operation against all the elements in the array very efficiently and with less code.
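To make the numpy suggestion concrete, here is a minimal sketch (the function name and the use of np.roll are my own choices, not part of the original code) of one Game of Life generation computed with vectorised array operations, using the same toroidal wrap-around that check_cell implements:

```python
import numpy as np

def step(grid):
    """Compute one Game of Life generation.

    grid is a 2-D boolean array; np.roll gives the same toroidal
    wrap-around behaviour as the modulus-based check_cell.
    """
    neighbours = sum(
        np.roll(np.roll(grid, dr, axis=0), dc, axis=1)
        for dr in (-1, 0, 1)
        for dc in (-1, 0, 1)
        if (dr, dc) != (0, 0)
    )
    # A live cell survives with 2 or 3 neighbours; a dead cell
    # becomes alive with exactly 3.
    return (neighbours == 3) | (grid & (neighbours == 2))

# A blinker oscillates between a horizontal and a vertical bar.
blinker = np.zeros((5, 5), dtype=bool)
blinker[2, 1:4] = True
print(step(step(blinker)).astype(int))
```

np.roll shifts the whole grid at once, so the eight neighbour counts come from eight array additions instead of a Python loop over every cell.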
{ "domain": "codereview.stackexchange", "id": 328, "tags": "python, game-of-life" }
Difference Between Publisher Subscriber Model and Client Service Model
Question: What exactly is the difference between the two? From what I gather, messages in a publisher/subscriber are only sent one way from publisher to subscriber but in the client/service model, messages can go both ways. Is this correct? Originally posted by Syngerical on ROS Answers with karma: 1 on 2016-06-21 Post score: 0 Answer: There is a bit more to publish/subscribe and client/server (or remote-procedure-call) than just the direction in which messages flow. A good paper that discusses the main benefits of the publish/subscribe communication style and compares it to traditional remote procedure call is The Many Faces of Publish/Subscribe by Patrick Eugster et al. Originally posted by gvdhoorn with karma: 86574 on 2016-06-21 This answer was ACCEPTED on the original site Post score: 2
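As a toy illustration of the decoupling the paper discusses, here is a minimal Python sketch (class and method names are illustrative, not from any ROS API): a publisher is anonymous and one-to-many and gets nothing back, while a service call is one-to-one and returns a reply.

```python
class Topic:
    """Publish/subscribe: one-way, one-to-many, anonymous."""
    def __init__(self):
        self._subscribers = []

    def subscribe(self, callback):
        self._subscribers.append(callback)

    def publish(self, message):
        # The publisher never learns who received the message,
        # or whether anyone did.
        for callback in self._subscribers:
            callback(message)

class Service:
    """Client/service (remote procedure call): two-way, one-to-one."""
    def __init__(self, handler):
        self._handler = handler

    def call(self, request):
        # The client blocks on, and receives, a reply.
        return self._handler(request)

received = []
scan_topic = Topic()
scan_topic.subscribe(received.append)    # e.g. a mapper node
scan_topic.subscribe(lambda m: None)     # e.g. a logger node
scan_topic.publish("scan 42")            # fire and forget

add_two_ints = Service(lambda req: req[0] + req[1])
print(add_two_ints.call((1, 2)))  # a reply comes back: 3
```

The publisher and its subscribers are decoupled in space (they don't know each other) and in number; the service client is coupled to exactly one server and waits for its answer.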
{ "domain": "robotics.stackexchange", "id": 25014, "tags": "ros" }
Turtlebot doesn't stop after killing the velocity command
Question: Hi Everyone, I am running two turtlebots (kobuki) with gazebo1.9 and I provided command velocities to each turtlebot using "rostopic pub" command. I noticed that the turtlebot keeps moving after I killed the command velocity using Ctrl+C. I am wondering if this anomaly is due to the inertia values for my turtlebot wheels or something else. Thanks for any help with this issue Originally posted by Robert on Gazebo Answers with karma: 9 on 2014-05-02 Post score: 0 Answer: Try to send a 0 velocity command. rostopic pub /cmd_vel geometry_msgs/Twist '[0.0, 0.0, 0.0]' '[0.0, 0.0, 0.0]' Not sure if this is the right command but give it a try or replace it with the right one and let me know. Hope that works. Originally posted by ffurrer with karma: 349 on 2014-05-04 This answer was ACCEPTED on the original site Post score: 0 Original comments Comment by Robert on 2014-05-05: Thanks Ffurrer. I sent a 0 velocity command and the turtlebot stopped. Somebody mentioned the gazebo plugin might not have a timeout on incoming commands; when they stop being published it may continue to execute the last received command. Comment by Robert on 2014-05-05: Your tip worked for me. Thanks a lot!!
{ "domain": "robotics.stackexchange", "id": 3584, "tags": "gazebo" }
Left-right topology
Question: Are there non-trivial topological solutions (in particular 't Hooft-Polyakov magnetic monopoles) associated with the (local) breaking \begin{equation} SU(2)_R \times SU(2)_L \times U(1)_{B-L} \to SU(2)_L \times U(1)_Y \end{equation} ? Answer: In order for a theory to present stable monopole solutions it has to satisfy three requirements: i) It has to have the topological conditions, generally shown as a non-trivial second homotopy group of the vacuum manifold. ii) It has to satisfy a quantization condition $$e^{ieQ_m}=\mathbb 1,$$ where $Q_m$ is the (non-Abelian) magnetic charge. This is a generalization of the Dirac quantization condition. iii) The monopole has to be a solution of the classical equations of motion. It can be shown that to satisfy ii) the $U(1)_{em}$ has to be compact (the electric charge has to be quantized as well), that is, isomorphic to the circle and not to the reals. It turns out that when you have an SSB $G\rightarrow K\times U(1)$, the $U(1)$ is compact if $G$ and $K$ are both semisimple. Otherwise $U(1)$ may be non-compact. In your case, $G$ is not semisimple, it has an Abelian factor. For chiral symmetry breaking such as in QCD, where the symmetry is global, there are definitely no 't Hooft-Polyakov monopoles, since those appear when you spontaneously break a local gauge symmetry. You are breaking a global one. There are some studies about something called "semilocal defects" that may appear when you break a local and a global symmetry in a "mixed way". What characterizes the stability of topological solutions such as monopoles and vortices are topological quantities such as the winding number. Take a vortex, for simplicity. Roughly speaking, the winding number says how the scalar field rotates as we go around the vortex. This rotation is in the internal space. With a global symmetry you are not able to construct such a rotating scalar field. The topological numbers would be trivial.
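To spell out requirement ii): demanding that $e^{ieQ_m}$ equal the identity forces the magnetic charge onto a discrete set, which is the familiar Dirac quantization condition:

```latex
e^{ieQ_m} = 1
\;\Longrightarrow\;
eQ_m = 2\pi n, \quad n \in \mathbb{Z}
\;\Longrightarrow\;
Q_m = \frac{2\pi n}{e}.
```

For this to be consistent for every charged field in the theory, the electric charge itself must be quantized, i.e. $U(1)_{em}$ must be compact, which is the point made above.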
{ "domain": "physics.stackexchange", "id": 30379, "tags": "gauge-theory, symmetry-breaking, topology, magnetic-monopoles, solitons" }
gmapping: Scan Matching Failed, using odometry
Question: Hi, I'm using gmapping to create maps. But it returns a lot of messages saying Scan Matching Failed, using odometry and the resulting map has several overlapped maps. In rviz the laser values seem to be correct. When I run the rostopic command to see the behavior of the base_scan (laser) and odom (odometry) topics, it is possible to verify that the values are correctly published on ROS. I'm using these parameters in my code for the values on LaserScan: scan.angle_min = -1.57; scan.angle_max = 1.57; scan.angle_increment = 3.14 / num_readings; scan.time_increment = (1 / laser_frequency) / (num_readings); scan.range_min = 0.0; scan.range_max = 80.0; Can someone help me? Together with this message I attach the resulting .yaml file and the .bag file. NOTE: Below is presented part of the resulting command line with the error: -maxUrange 79.99 -maxUrange 79.99 -sigma 0.05 -kernelSize 1 -lstep 0.05 -lobsGain 3 -astep 0.05 -srr 0.1 -srt 0.2 -str 0.1 -stt 0.2 -linearUpdate 1 -angularUpdate 0.5 -resampleThreshold 0.5 -xmin -100 -xmax 100 -ymin -100 -ymax 100 -delta 0.05 -particles 30 [ INFO] [1321129265.860843005]: Initialization complete update frame 0 update ld=0 ad=0 Laser Pose= 15 1 1.57733 m_count 0 Registering First Scan update frame 246 update ld=1.00164 ad=0 Laser Pose= 13.9984 0.993456 1.57733 m_count 1 Scan Matching Failed, using odometry. Likelihood=3.10908e-305 lp:15 1 1.57733 op:13.9984 0.993456 1.57733 Scan Matching Failed, using odometry. Likelihood=-1200 lp:15 1 1.57733 op:13.9984 0.993456 1.57733 Thanks Originally posted by rcampos on ROS Answers with karma: 31 on 2011-11-14 Post score: 3 Original comments Comment by Brian Gerkey on 2011-11-15: I don't see the bag file you're referring to. Can you post it somewhere? Answer: The first thing that I would do is verify that accumulating laser data in the odometric frame produces a sensible result. See this question for a simple technique using rviz.
If you can't build at least a crude "map" by just accumulating scans based on odometry over short distances, then something is wrong with your laser, or odometry, or both, and gmapping won't be able to help you. Originally posted by Brian Gerkey with karma: 2916 on 2011-11-15 This answer was ACCEPTED on the original site Post score: 4
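To see what "accumulating laser data in the odometric frame" amounts to, here is a minimal Python sketch (the function name and 2-D pose convention are my own, not from gmapping): each scan point is rotated and translated by the odometry pose at scan time, and if odometry is good, points from nearby poses should overlay consistently.

```python
import math

def scan_to_odom(ranges, angle_min, angle_increment, pose):
    """Transform one laser scan into the odometric frame.

    pose = (x, y, theta) of the laser in the odom frame at scan time.
    Returns a list of (x, y) points; invalid (inf/nan) returns are skipped.
    """
    x0, y0, theta = pose
    points = []
    for i, r in enumerate(ranges):
        if not math.isfinite(r):
            continue
        a = angle_min + i * angle_increment
        # Rotate by the robot heading, then translate by its position.
        points.append((x0 + r * math.cos(theta + a),
                       y0 + r * math.sin(theta + a)))
    return points

# A wall 2 m ahead, seen from the origin...
p1 = scan_to_odom([2.0], 0.0, 0.0, (0.0, 0.0, 0.0))
# ...and the same wall seen again after driving 1 m forward:
p2 = scan_to_odom([1.0], 0.0, 0.0, (1.0, 0.0, 0.0))
print(p1, p2)  # both land on the same point (2.0, 0.0)
```

If the accumulated points from consecutive scans smear apart instead of lining up like this, the odometry (or the laser) is the problem, exactly as the answer says.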
{ "domain": "robotics.stackexchange", "id": 7293, "tags": "slam, navigation, slam-gmapping, gmapping" }
Identifying users based on smartphone data (Google)
Question: I've heard that Google now has a technique to identify users based on smartphone touch input (how the user is using the phone). I have found nothing about that online. Is there a paper or some article available explaining how this is done (e.g. what features are used)? Answer: I was also unable to find a paper on such a topic by Google. But I can discuss some features which Google might be using to uniquely identify its users based on smartphone data. Fingerprint scanning: One of the preliminary methods of scanning users, as the fingerprint is unique to every individual. Also, most devices are now equipped with a fingerprint scanner. Home location: The Google account also keeps track of the user's home location, which can't be directly used to identify a user, but could be an important feature. Data personalization: Suppose a user has a habit of viewing videos, blogs and websites related to the top tag "artificial intelligence". This personalized information could be used to track down a user. Data personalization could be applied to apps, videos and websites with which the user interacts. This data will be super complex and probably unique to each individual. Device usage: Every individual likes to personalize and use his/her device in their own way. Android 9 had this feature of tracking which apps are most used by the user and also at which times. This is another great factor for identification. Preliminary device details: IP address, the device's model, Android version and account password are also stored with the Google account. The above features could be brought together to uniquely identify users as they provide information in different dimensions regarding the user. The weaker features (data personalization, device usage) can be used with the powerful features (IP address, passwords) to uniquely identify a particular user. None of the above information was mentioned in any paper. Hope this helps. :-)
{ "domain": "datascience.stackexchange", "id": 5298, "tags": "machine-learning, google" }
Why do we sweat after drinking water and running?
Question: Why do we sweat after running? Also, we sometimes sweat after drinking lots of water. Why is it so? Can someone please enlighten me in this regard? Answer: Exercise, such as running, increases muscle activity. This increases the energy demand of these tissues, which increases the rate of cellular respiration. Respiration releases heat as a by-product, therefore the body is hotter during and after exercise. Sweating is a homoeostatic mechanism to keep core body temperature constant. It is a response to lower the body temperature. When the body becomes too hot, sweat is released onto the surface of the skin. The water from the sweat then takes some of the excess heat energy from the body and uses it to evaporate. Because water has a relatively large latent heat of vaporisation, a lot of heat can be carried away by this method.
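As a rough back-of-the-envelope illustration (the numbers below are assumed textbook values, not from the answer above): the latent heat of vaporization of water near skin temperature is roughly 2.4 MJ/kg, so even half a kilogram of evaporated sweat carries away a large amount of heat.

```python
# Rough estimate of heat carried away by evaporating sweat.
# Assumed values: latent heat of vaporization of water near skin
# temperature ~2.4e6 J/kg; a runner evaporating 0.5 kg of sweat.
LATENT_HEAT = 2.4e6      # J/kg (assumed)
sweat_evaporated = 0.5   # kg

heat_removed = sweat_evaporated * LATENT_HEAT  # joules
print(f"{heat_removed / 1e6:.1f} MJ removed")  # 1.2 MJ

# For scale: warming a ~70 kg body by 1 deg C takes roughly
# m * c ~ 70 kg * 3500 J/(kg*K) = 245 kJ, so this much evaporative
# cooling offsets several degrees of potential heating.
print(f"equivalent to ~{heat_removed / (70 * 3500):.1f} deg C")
```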
{ "domain": "biology.stackexchange", "id": 2760, "tags": "human-biology" }
Is Mercury's core liquid?
Question: A very basic question, but one to which I keep finding different answers: does Mercury have a liquid core, or is it all solid? Whatever the reason, what are the causes of it being so? Answer: It's liquid. As detailed here, To figure out whether Mercury's core was liquid or solid, a team of scientists led by Jean-Luc Margot at Cornell University measured small twists in the planet's rotation. They used a new technique that involved bouncing a radio signal sent from a ground telescope in California off the planet and then catching it again in West Virginia. After 5 years and 21 such observations, the team realized their values were twice as large as what would be expected if Mercury's core was solid. "The variations in Mercury's spin rate that we measured are best explained by a core that is at least partially molten," Margot said. "We have a 95 percent confidence level in this conclusion." The NRAO has another article on it, which goes slightly more in-depth into the subject. The official site of the Messenger mission is slightly more cautious: However, these constraints are limited because of the low precision of current information on Mercury's gravity field from the Mariner 10 and MESSENGER flybys. Fundamental questions about Mercury's core remain to be explored, such as its composition. A core of pure iron would be completely solid today, due to the high melting point of iron. However, if other elements, such as sulfur, are also present in Mercury's core, even at a level of only a few percent, the melting point is lowered considerably, allowing Mercury's core to remain at least partially molten as the planet cooled. Constraining the composition of the core is intimately tied to understanding what fraction of the core is liquid and what fraction has solidified. Is there just a very thin layer of liquid over a mostly solid core or is the core completely molten? 
Addressing questions such as these can also provide insight into the current thermal state of Mercury's interior, which is very valuable information for determining the evolution of the planet. At this point in time, though, all evidence indicates that Mercury has a molten core. As userLTK pointed out, lower pressure inside Mercury makes it easier for the core to stay liquid at lower temperatures.
{ "domain": "astronomy.stackexchange", "id": 918, "tags": "planet, core, mercury" }
What's the difference between Channel shortening and time-domain equalizer
Question: I'm asking about the difference between channel shortening and time-domain equalization. I have checked online and tried to get something clear, but I couldn't. What I found is that channel shortening roughly means "calculating the coefficients for a DFE equalizer", but I couldn't quite picture that either. Could you please explain? Thank you Answer: Yes, I agree these terms are very common in communication engineering. I was asking about channel shortening a long time ago, and I got an answer telling me that channel shortening is just calculating the coefficients of the DFE. You can read the details in the link below: Channel Shortening for underwater acoustic channel communication. Thanks
{ "domain": "dsp.stackexchange", "id": 6593, "tags": "discrete-signals, equalization" }
Simulator that allows you to test your real firmware inside a simulated world?
Question: I am interested in learning more about how robots are tested in simulators. Reading the Wikipedia article on robotics simulators I can't tell how the onboard firmware (the "brains" of the robot) is actually tested inside the simulator. Every simulator I've found from my online searches seems to want you to program the robot inside the simulator, and then export that code as firmware to the actual robot. Or even worse, maintain duplicate codebases (one codebase for use inside the simulator, and another one for the actual firmware flashed to the robot and used in "production"). Are there any simulators that allow you to upload your firmware (and a 3D model of your robot) into a simulated world, and see how your actual, production firmware behaves in that simulated world? To be clear, I'm talking about the following general setup: The firmware is running on some type of production-like MCU/SOC Instead of hardware inputs (cameras, microphones, lidar, etc.) being wired to the MCU/SOC, the machine that the simulator is running on (a.k.a. the "simulator machine") is wired up to those same ports/pins Instead of hardware output (motors, servos, speakers, etc.) being wired up to the MCU/SOC, the simulator machine is wired up to those same ports/pins So essentially the firmware is running in an environment where it's hardwired to speak to the simulator machine The simulator renders the 3D world for the robot-under-test and sends each input frame over the wire to the MCU/SOC/firmware The firmware processes the input as it normally would, in production (the real world) The firmware generates outputs to its output pins/ports, which get picked up and interpreted by the simulator So for example, when the test starts, the firmware is waiting for video frames from a camera. The simulator renders those frames and sends them over the wire to the MCU/SOC where the firmware is running.
The firmware decides it wants to explore the world, so it sends commands that would normally turn its motors on to move forward. But because the associated pins/ports are connected to the simulator machine, the simulator receives those commands, and (inside the 3D world) moves the robot forward accordingly, and re-renders its view, sending new video frames back to the firmware's input pins/ports. Update in response to Chuck's answers below: I'm not understanding why I would need to plop my firmware onto a hardware emulator for such a hypothetical system. Say I am running firmware (in production) on some MCU/SOC, say the BuggyBoard 9000. And let's say my robot is super simple: it hooks up to an RGB camera as its only input, processes video frames from it, and outputs different sounds (based on what it "sees" in the video captures) to a speaker (its only output) that is connected via a few wires soldered onto some I/O pins on the board. So in production, I have a USB camera connected to the BuggyBoard 9000 at one of its USB serial ports. The firmware is using USB device drivers to read the camera's video frames and do visual processing with them. When it detects something and decides it needs to make a sound, it uses its own algorithms and the speaker's device driver to send sound to the connected speaker. But this is in production. In my simulator/test harness however, I simply have the BuggyBoard 9000 connected to a power source and it's running my firmware. Instead of a USB camera I have a USB cable connected to the BuggyBoard 9000, where the other end is connected to the simulator machine. Instead of the soldered-on speaker, I have those same I/O pins soldered to wires going to a dongle that also connects to another port on the simulator machine. The simulator renders some video frames and sends them to the USB port connected to the BuggyBoard 9000. The firmware uses device drivers to read the video frames into memory and do its visual processing.
It sends sounds through the device drivers, down into the I/O wires, the dongle and back into the simulator, where perhaps something in the simulated environment will respond to certain sounds being broadcast. Answer: :EDIT: - In response to the question edit, In my simulator/test harness however, I simply have the BuggyBoard 9000 connected to a power source and it's running my firmware. Instead of a USB camera I have a USB cable connected to the BuggyBoard 9000, where the other end is connected to the simulator machine. Yeah, but here you need to code the camera-side of the USB protocol. You can probably get the camera USB driver from the manufacturer, but you're 99.99% guaranteed to not be able to get a copy of the camera-side USB code, which means you get to reverse engineer the communication protocol that the camera is using. You might "luck out" in that it's some kind of standard protocol, like RTSP, but could also wind up being something that's specific to that particular camera model. This also means you need to at least verify and probably re-write the whole camera-side USB spoofing code any time you change camera models. Likewise for the speaker, you write an analog-to-digital converter to capture the waveform and then need to implement some streaming protocol (your choice!) to encode it, transmit it to the simulator, then you need to decode it in the simulator. The time you spend designing the spoofing hardware, software, simulator interface, etc. is time that you will never get back, and doesn't directly help your project beyond being able to test your device without making changes. If instead you modularized your code, then you could do something like run ROS connections between the different aspects of your code. If you had a USB camera connected to a dedicated microcontroller that encoded the images to a ROS video stream, now your core logic isn't doing anything with USB drivers.
It subscribes to the ROS image topic and now your system becomes cleanly testable. You can easily spoof the ROS message and test your business logic without an actual camera. You can create a debug subscriber and test your camera-to-ROS encoding. You can do the same for playing the sound - you can create the speaker driver as a ROS subscriber, and your business logic can send a ROS message with an enum or similar that would indicate to the speaker driver which sound to make. Again, that's really easy to spoof and again breaks the hardware dependence. You can run your business logic on the BuggyBoard 9k, you can do it as a desktop application, in a docker container running on AWS, etc. Doesn't really matter because it's modular. == Original Post == simulation! This is exactly my domain. Firmware implies hardware. The firmware that runs on a TV remote isn't the same as the firmware that runs on a network router, and you couldn't run firmware from one device on the other. There are, generally speaking, two levels of simulation: Software-in-the-loop (SIL), where you execute your program on some platform that's NOT the actual device you're going to use in production, and Hardware-in-the-loop (HIL), where you ARE using the production hardware. A simulator has some virtual environment, and then you would add virtual sensors to that environment. You create a kind of avatar that is the simulated device, and as your program runs it signals the avatar to take some action. I've done every level of testing here, from a basic simulator that is capable of testing some subsystem on a product, to full-scale hardware-in-the-loop testing that has multiple racks of PLCs and network switches setup as they'll be configured at a customer site. The real difficulty with HIL testing is replicating the sensor OEM packet structure. For example, at a previous employer we did full-scale HIL testing with a bank of SICK brand lidars. 
Those devices have an internal state machine and a particular boot sequence, complete with a handshake protocol that steps through that internal state machine. In order to run our software with no changes, we had to fully implement the internal state machine, all the status messages, etc., in addition to "just" the raw lidar packets. Depending on how "low" you want to get with the HIL testing, you'll also need to recreate all the motor encoders, limit/whisker switches, circuit breaker and contactor status, bus voltage readings, etc. Part of what you can do for yourself is to structure your hardware in such a way that you segregate your core program logic from the I/O handling. If you had one board that handles I/O and forwards that I/O data as an Ethernet message, then you get the opportunity to just send I/O packets from your simulator that are structured in the same way as the physical I/O handler and life gets dramatically easier. If your core logic and I/O handling are on the same board then you'll wind up needing a kind of "translator" board. This translator would receive I/O status data from the simulator and would generate digital and analog signals that you can wire to your production hardware to simulate all those encoders and other feedbacks. Kind of similarly from the other side, if your core logic is also directly interacting with I/O pins or other hardware-specific aspects of your platform then you're going to have a really hard time performing SIL testing, because that code won't work as-is if you run it as a desktop application. You can't just change your compiler options and get it to run, and this is probably where you've read about some people/companies that need to keep a "production" and "test" version of their code, where the difference is whether or not the code is tethered to the hardware platform. Again, careful structuring on your part can mitigate this and really lets you exercise your core logic on an arbitrary platform.
Regarding your question, "Are there any simulators that allow you to upload your firmware" - the thing you're probably looking for is "hardware emulation," but the results there are pretty limited. I can find stuff for the Raspberry Pi or Texas Instruments' MSP-430 (no affiliation to either company), but beyond those you're probably going to be pretty limited on what you can find for hardware emulation. Even then, I don't know that you're going to be able to easily interface the emulator to a virtual environment. Back to your question again: "So for example, when the test starts, the firmware is waiting for video frames from a camera. The simulator renders those frames and sends them over the wire to the MCU/SOC where the firmware is running." Yup, your simulator captures the scene, encodes those captures appropriately, and then sends that data over the appropriate connection - TCP/IP, RealTime Streaming Protocol (RTSP), etc. Again, the difficulty isn't so much with encoding the data and sending it but (for example, with RTSP) the state machine and handshake associated with making the connection, handling messages beyond frame transmission like DESCRIBE, SETUP, PLAY, TEARDOWN, etc. "The firmware decides it wants to explore the world, so it sends commands that would normally turn its motors on to move forward." Yeah, and again you need a way to receive those commands, maybe also virtual encoders to report wheel speeds or motor speeds, etc. All those also need to be packaged correctly. "But because the associated pins/ports are connected to the simulator machine, the simulator receives those commands" This is the hand-waving part where the real effort happens. If your production platform's software is using hardware I/O to generate outputs, then you'll need a way for the simulator to read that hardware I/O. As I mentioned above, this is typically some OTHER piece of hardware that will make up part of your test stand.
The other hardware will read the System Under Test (SUT) outputs, convert them to some network message the simulator can read (ROS message, TCP/IP socket connection, etc.). The simulator runs, generates its own outputs (like wheel encoders), sends those outputs to the translator, which then generates the square wave, analog signal, gray code, etc. - whatever the appropriate representation is that your SUT is expecting. These are generally all specific to whatever system you're trying to test and I think everyone wants to make their own simulator. Gazebo is supposed to be a simulator that has a lot of stock sensors, but (1) probably won't have all the sensors you're using, and (2) almost certainly won't "just work" with your production hardware out of the box. You'll need to tailor the simulator to your needs, and if you're doing that work then I'd highly recommend you evaluate a suite of simulation environments to determine what's going to work best for your application. Pre-built sensors aren't going to cut it if what you really need is photorealism, etc.
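A minimal Python sketch of the modularization described above (the classes and the brightness rule are invented stand-ins, not ROS code): the business logic only sees message-like interfaces, so a test harness or simulator can inject camera frames and capture speaker commands with no USB hardware at all.

```python
class Robot:
    """Business logic only: consumes image messages, emits sound
    commands. It never touches USB drivers, so the same class can run
    on the embedded board, as a desktop app, or under a simulator.
    (The interfaces are illustrative stand-ins for ROS topics.)"""

    def __init__(self, sound_publisher):
        self._publish_sound = sound_publisher

    def on_image(self, frame):
        # Trivial stand-in for "visual processing": react to the
        # mean brightness of the frame (a flat list of pixel values).
        brightness = sum(frame) / len(frame)
        if brightness > 128:
            self._publish_sound("chirp")
        else:
            self._publish_sound("growl")

# Test harness: spoof the camera topic, capture the speaker topic.
sounds = []
robot = Robot(sound_publisher=sounds.append)
robot.on_image([200] * 10)   # bright simulated frame
robot.on_image([10] * 10)    # dark simulated frame
print(sounds)  # ['chirp', 'growl']
```

The same object could instead be wired to a real camera driver and a real speaker driver in production; the core logic never changes.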
{ "domain": "robotics.stackexchange", "id": 2570, "tags": "simulation, simulator, operating-systems" }
Matrix: Set a row/col to zero
Question: A practice interview question: Write an algorithm such that if an element in an MxN matrix is 0, its entire row and column is set to 0. void SetZeroOnMatrix(int **matrix, int M, int N) { bool *rowZeros = new bool[M]; bool *colZeros = new bool[N]; // Find the 0's for(int i=0; i<M; i++) { for(int j=0; j<N; j++) { if(matrix[i][j] == 0) { rowZeros[i] = false; colZeros[j] = false; } } } // Replace the values for(int i =0;i<M;i++) { for(int j=0;j<N;j++) { if(!rowZeros[i]||!colZeros[j]) matrix[i][j]=0; } } delete[](rowZeros); delete[](colZeros); } If \$N = M\$, I'm assuming that this would have a time complexity of \$O(N^2)\$ and a space complexity of \$O(N)\$. Answer: Don't dynamically allocate objects if you don't need to. bool *rowZeros = new bool[M]; bool *colZeros = new bool[N]; If you do it this way you need to take control and do manual memory management (a bad idea). Most specifically your code is not exception safe. If your code or anything you called throws an exception you leak memory (OK so this function is simplistic enough that it is unlikely, but code has a tendency to evolve over time and you want to write code that is easy to maintain, so writing it exception safe to begin with is a general goal). Also C++ has a tendency to have early return baked into the code. This is because of automatic memory management that has been developed over the years. If somebody modifies your code to have an early return (say because they found no zeros on the first pass), then your code is now likely to leak. So designing your code to use automatic memory management just makes your code more maintainable. I would use a vector here: std::vector<char> rowZeros(M, true); std::vector<char> colZeros(N, true); This allocates memory. But when the function exits all memory is correctly deallocated (even after early return or exception). Use standard containers as automatic variables whenever you can. Notice I did not use std::vector<bool>.
Normally I would use the same type in the vector that I would have used in the array. But there is special consideration taken for std::vector<bool> that makes it less efficient (everybody has concluded this was a mistake by the committee, but it is not going to be fixed for backwards compatibility (at least not any time soon)). Also note your original code contains a bug: bool *rowZeros = new bool[M]; // This allocates the array but does not initialize. Because the array is not initialized, reading the elements is undefined behavior (until after they have been initialized). You could have initialized them to zero (false) with: bool *rowZeros = new bool[M](); // ^^ forces zero initialization of all members. But the vector constructor has a more obvious way of doing it. std::vector<char> rowZeros(M, true); // ^ Size // ^^^^ Value copied into each cell of the vector.
{ "domain": "codereview.stackexchange", "id": 10645, "tags": "c++, interview-questions, matrix" }
Web-scraping a list of lawyers
Question: I've written a program to get the names along with the titles of some practitioners out of a webpage. The content is stored within disorganized HTML elements (at least it seemed that way to me) and as a result the output becomes messier. However, I've managed to get the output as it should be. The way I've written the scraper serves its purpose just fine but the scraper itself looks ugly. Is there any way to make the scraper look good and still have it do the same job as it is doing currently? For example, is there a way to eliminate the exception handlers? This is how I've written it: import requests from bs4 import BeautifulSoup url = "https://www.otla.com/index.cfm?&m_firstname=a&seed=566485&fs_match=s&pg=publicdirectory&diraction=SearchResults&memPageNum=1" def get_names(link): res = requests.get(link) soup = BeautifulSoup(res.text,'lxml') for content in soup.select(".row-fluid"): try: name = content.select_one(".span5 b").get_text(strip=True) except: continue #let go of blank elements else: name try: title = content.select_one(".span5 b").next_sibling.next_sibling.strip() except: continue #let go of blank elements else: title print(name,title) if __name__ == '__main__': get_names(url) Answer: As @scnerd already discussed avoiding the use of a bare try/except clause, I'll focus on removing it entirely. After inspecting the website source, you will notice that elements of class row-fluid are containers for each lawyer. In each of those elements, there are two .span5 clearfix elements, one containing the lawyer's name and firm, the other their specialization. Since you don't seem to be interested in the latter, we can skip that element entirely and move onto the next one.
for element in soup.select(".span5 b"): name = element.text.strip() if name.startswith('Area'): continue title = element.next_sibling.next_sibling.strip() print('{}: {}'.format(name, title)) You will notice I left out the row-fluid container from the selector, as we are iterating only over the span5 elements that are contained within them; however, if you wanted to keep it, you can chain the CSS selector as such: soup.select(".row-fluid .span5 b"). If there were any elements of the span5 class outside of the row-fluid containers, it would be better to chain the CSS, making it more explicit. get_names is a fairly ambiguous function name that also suggests it will return an iterable with names. What you are doing in this function is printing names of lawyers, together with the firm they're working for. I'd suggest renaming it to print_lawyer_information, or better yet get_lawyer_information and return a dictionary containing the name as key and title as value. lawyer_information = {} for element in soup.select(".span5 b"): name = element.text.strip() if name.startswith('Area'): continue title = element.next_sibling.next_sibling.strip() lawyer_information[name] = title return lawyer_information As you can see above, we're creating an empty dictionary and then, as we iterate over the elements, we append records to it. Once that is done, you just return the dictionary, which you can then print or do more things with. This can be done with a much neater dictionary comprehension. return {row.text.strip(): row.next_sibling.next_sibling.strip() for row in soup.select('.span5 b') if not row.text.strip().startswith('Area')} A couple of other nitpicks would involve some PEP-8 (Python Style Guide) violations, such as the lack of 2 lines before a function definition and multiple statements on one line. Run your code through something like http://pep8online.com and you'll get a better idea of how to make your code easier to read and follow.
You'll thank yourself in a year's time when you look at your code again. import requests from bs4 import BeautifulSoup def get_lawyer_information(url): response = requests.get(url) soup = BeautifulSoup(response.content, 'lxml') return {element.text.strip(): element.next_sibling.next_sibling.strip() for element in soup.select('.span5 b') if not element.text.strip().startswith('Area')} if __name__ == '__main__': url = "https://www.otla.com/index.cfm?&m_firstname=a&seed=566485&fs_match=s&pg=publicdirectory&diraction=SearchResults&memPageNum=1" lawyer_information = get_lawyer_information(url) for name, title in lawyer_information.items(): print('{} | {}'.format(name, title))
{ "domain": "codereview.stackexchange", "id": 30843, "tags": "python, python-3.x, web-scraping, beautifulsoup" }
How do varying static point charges exert the same force on each other?
Question: If you have two point charges, one being 1 Coulomb and the other being 1 trillion Coulomb, it is said that the electric force from the 1 Coulomb point charge exerted on the 1 trillion Coulomb point charge is equivalent to the electric force from the 1 trillion Coulomb point charge exerted on the 1 Coulomb point charge. How can a 1 Coulomb point charge exert the same force as a 1 trillion Coulomb point charge? Answer: Perhaps this analogy will help: Imagine I have two fans - one with a huge diameter, the other with a tiny diameter. When I put them facing each other, with the huge fan running, I will be able to extract a small amount of power from the tiny fan (because only a tiny fraction of the wind generated by the big fan will intersect with it). Conversely, when the tiny fan is running, almost all its air will be "felt" by the huge fan. But the tiny fan only generates a little bit of air movement... This is how it is with two different charges (or if you like with two different masses). The same thing (charge, mass) that makes them able to generate a field, makes them susceptible to the field of another (charge, mass). This is indeed a necessary consequence of Newton's third law - the attractive force must be reciprocal (force of A on B must equal force of B on A), so there must be symmetry in the equation describing the force. If you are OK with the force of Moon on Earth being the same as the force of Earth on Moon, then you should be OK with this. And if those forces were not the same, they would either be crashing into each other, or flying apart...
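Not from the answer, but a quick numeric sketch of the symmetry argument: in Coulomb's law the force magnitude depends on the product $q_1 q_2$, which is the same whichever charge you call the "source". The 1 km separation and the masses below are hypothetical, chosen only for illustration.

```python
k = 8.9875e9  # Coulomb constant, N*m^2/C^2

def coulomb_force(q1, q2, r):
    """Magnitude of the electrostatic force between two point charges."""
    return k * q1 * q2 / r**2

q_small, q_big, r = 1.0, 1.0e12, 1.0e3   # 1 C, one trillion C, 1 km apart
f_on_big = coulomb_force(q_small, q_big, r)    # force exerted by the 1 C charge
f_on_small = coulomb_force(q_big, q_small, r)  # force exerted by the 10^12 C charge
assert f_on_big == f_on_small  # the product q1*q2 is symmetric

# Equal forces do not mean equal motion: accelerations scale with 1/mass.
m_small, m_big = 1.0, 1.0e6    # hypothetical masses, kg
print(f_on_small / m_small, f_on_big / m_big)
```

Just as with the Earth-Moon analogy, the equal and opposite forces produce very different accelerations when the two bodies differ in mass.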
{ "domain": "physics.stackexchange", "id": 99055, "tags": "newtonian-mechanics, forces, electrostatics, coulombs-law" }
Multithread to print odd and even numbers
Question: I recently started on C++ multithreading. Here is my code to print odd and even numbers in two different threads. Can somebody please review this code? #include <iostream> #include <thread> #include <mutex> #include <condition_variable> using namespace std; int x = 1; mutex m; bool evenready = false; bool oddready = true; condition_variable cond; void printEven() { for (; x < 10;) { unique_lock<mutex> mlock(m); cond.wait(mlock, [] { return evenready; }); oddready = true; evenready = false; cout << "Even Print" << x << endl; x++; cond.notify_all(); } } void printOdd() { for (; x < 10;){ unique_lock<mutex> mlock(m); cond.wait(mlock, [] { return oddready; }); oddready = false; evenready = true; cout << "Odd Print" << x << endl; x++; cond.notify_all(); } } int main() { thread t1(printOdd); thread t2(printEven); t1.join(); t2.join(); return 0; } Answer: I see some things that may help you improve your program. Don't abuse using namespace std Putting using namespace std at the top of every program is a bad habit that you'd do well to avoid. Eliminate global variables where practical The code declares and uses 5 global variables. Global variables obfuscate the actual dependencies within code and make maintenance and understanding of the code that much more difficult. It also makes the code harder to reuse. For all of these reasons, it's generally far preferable to eliminate global variables and to instead pass pointers or references to them. That way the linkage is explicit and may be altered more easily if needed. For example, one way to do it would be to gather all of the variables into a struct and pass a reference to the struct to each thread instance. The structure instance could be a local variable within main. Think about eliminating redundant variables Since the boolean variables evenready and oddready are always in opposite states, one of them is redundant. In fact, in this case, both are redundant since one can easily derive the same function from the value of x.
Use appropriate C++ idioms This line is somewhat strange: for (; x < 10;) { It's much more clear to write like this: while (x < 10) { Omit return 0 When a C++ program reaches the end of main the compiler will automatically generate code to return 0, so there is no need to put return 0; explicitly at the end of main. Don't Repeat Yourself (DRY) If you're writing almost identical functions, think if there's a way to consolidate them. In this case there certainly is, as I'll demonstrate later in this answer. Think carefully about data race conditions A mutex is generally used to assure non-conflicting access to a shared resource. For that reason, it's good to clearly answer the question, "what shared resource is this mutex protecting?" In this case, it seems to be protecting access to std::cout and x but it doesn't do a thorough job of that. Consider that when one thread is evaluating x < 10 (without a lock) the other might be incrementing x (with a lock). That's a classic data race. Here's a rewrite that avoids this problem: #include <iostream> #include <thread> #include <mutex> #include <condition_variable> #include <functional> #include <string_view> struct OddEven { int x = 1; std::mutex m; std::condition_variable cond; }; void printTask(OddEven &oe, const std::string_view &label, bool odd) { for (bool running{true}; running; ) { std::unique_lock<std::mutex> mlock(oe.m); oe.cond.wait(mlock, [&oe, odd] { return (oe.x & 1) == odd; }); std::cout << label << oe.x << std::endl; oe.x++; running = oe.x < 10; oe.cond.notify_all(); } } int main() { OddEven oe; std::thread t1(printTask, std::ref(oe), "Odd Print", true); std::thread t2(printTask, std::ref(oe), "Even Print", false); t1.join(); t2.join(); }
{ "domain": "codereview.stackexchange", "id": 31552, "tags": "c++, c++11, multithreading" }
Visual Studio developer command prompt freezes after launching 2 nodes in Windows 10
Question: Hi, I am unable to run more than 2 nodes in Windows 10. The moment I launch more than 2 nodes, the VS developer command prompt freezes. Steps I followed: Open the Visual Studio developer command prompt. Source ros_foxy with the command "call ros2_foxy/local_setup.bash" Build my workspace "my_workspace" with the command "colcon build" After a successful build, I source the workspace "install/local_setup.bash" Finally, I launch all the nodes (5 nodes) using the launch file "ros2 launch mypackage mypackage.py" Expected behavior: All nodes should have been launched. Actual behavior: Only two nodes are launched. A few more inputs: I am using ROS2 inside a Windows VM. 8GB RAM, 110GB C:, 4 cores. Once two nodes are launched (out of 5 nodes) I am unable to run any other ROS2-related programs like rqt, ros bag record/play. But interestingly I can run all apps via the command line other than ROS2-related ones. I did check the CPU load and memory usage; it was well under 60% of utilization. My development setup Visual Studio: 2019 Python: 3.8.3 Windows: 10 ROS2_Foxy: Patch 6.1 C++ 11 Any hints or suggestions highly appreciated. Thanks Originally posted by TinyTurtle on ROS Answers with karma: 25 on 2021-10-25 Post score: 1 Original comments Comment by gvdhoorn on 2021-10-25: Unless you're running nodes as background processes, not being able to use the terminal after starting a node seems like expected behaviour (as in: it's just a program running, which is blocking). I believe it would help (others) if you could describe exactly how you are starting things, and what you expected to happen (if something else happened). Comment by TinyTurtle on 2021-10-25: Even if I run 5 separate instances of the VS developer command prompt, each running different nodes, I can still only run two nodes. Comment by alberto on 2022-06-20: Hi! I guess I have the exact same problem, in fact I created this post. Did you manage to solve it? If not, are you using dynamic array allocation?
Are you sending dynamic array via topic (like an image)? Do you think it is a ros problem or is your code causing it? Answer: Try to change your RMW implementation to rmw_cyclonedds_cpp as said here. For me, this did the trick. Originally posted by alberto with karma: 100 on 2022-06-23 This answer was ACCEPTED on the original site Post score: 1
{ "domain": "robotics.stackexchange", "id": 37055, "tags": "ros, ros2, windows10" }
EMF in an LC circuit
Question: What is the emf of the inductor in an LC circuit that is oscillating at a fixed amplitude, without damping, and without an external energy source? Am I right that the emf is zero when the work done by the inductor is negative? EDIT 1: This is a new diagram. I have changed the sign of the emf, as suggested by Alfred Centauri, and I shifted the energy of the capacitor over 90 degrees. My primary question remains the same: is the emf zero when the work done by the inductor is negative. EDIT 2: My confusion partly came from a statement in the wikipedia article on electromotive force: "devices provide an emf by converting other forms of energy into electrical energy". In a resistor, which does not provide an emf, the energy conversion is the opposite direction: electrical energy is converted to heat. That made me speculate that the inductor provides no emf when it reduces the electrical energy of the capacitor. The answer by Alfred Centauri clarified the meaning of emf to me, as well as the article "Does the electromotive force (always) represent work?" (link) Answer: Am I right that it is equal to the voltage of the inductor when the inductor is doing positive work, so when the capacitor is gaining energy, and zero for the rest of the time? No, for an ideal inductor, the inductor emf $\mathscr{E}_L$ is, at all times, equal in magnitude and opposite in sign to the voltage $v_L$ across the inductor. $$v_L = L \frac{di_L}{dt} = -\mathscr{E}_L$$ But I am curious about the line of reasoning that led you to believe the inductor emf behaves as you describe. Would you edit your question to include it? It might be valuable to others. Also, I've answered, in more detail, a few related questions here. I'll look them up and post links later.
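To make the sign relationship concrete, here is a short sketch (my own, not from the answer) for an undamped LC oscillation:

```latex
% Undamped LC oscillation with current amplitude I_0:
i(t) = I_0 \sin(\omega t), \qquad \omega = \frac{1}{\sqrt{LC}}
% Inductor voltage and emf (opposite in sign, as stated above):
v_L = L\frac{di}{dt} = L I_0 \omega \cos(\omega t), \qquad \mathscr{E}_L = -v_L
% Instantaneous power delivered to the inductor:
p_L = v_L \, i = \tfrac{1}{2} L I_0^2 \omega \sin(2\omega t)
```

The power $p_L$ changes sign every quarter cycle as energy sloshes between capacitor and inductor, while $|\mathscr{E}_L| = |v_L|$ vanishes only at the isolated instants $\omega t = \pi/2 + k\pi$. So the emf is not zero throughout the intervals where the inductor does negative work; it is nonzero almost everywhere, regardless of the sign of the work.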
{ "domain": "physics.stackexchange", "id": 81661, "tags": "electric-circuits, voltage, capacitance, inductance" }
How to use SMOTENC inside the Pipeline?
Question: I would greatly appreciate it if you could let me know how to use SMOTENC. I wrote: num_indices1 = list(X.iloc[:,np.r_[0:94,95,97,100:123]].columns.values) cat_indices1 = list(X.iloc[:,np.r_[94,96,98,99,123:160]].columns.values) print(len(num_indices1)) print(len(cat_indices1)) pipeline=Pipeline(steps= [ # Categorical features ('feature_processing', FeatureUnion(transformer_list = [ ('categorical', MultiColumn(cat_indices1)), #numeric ('numeric', Pipeline(steps = [ ('select', MultiColumn(num_indices1)), ('scale', StandardScaler()) ])) ])), ('clf', rg) ] ) Therefore, as indicated, I have 5 categorical features. In fact, indices 123 to 160 are related to one categorical feature with 37 possible values, which is converted into 37 columns using get_dummies. I think SMOTENC should be inserted before the classifier ('clf', rg) but I don't know how to define "categorical_features" in SMOTENC. Besides, could you please let me know where to use imblearn.pipeline? Thanks in advance. Answer: As shown below, two pipelines should be used: num_indices1 = list(X.iloc[:,np.r_[0:94,95,97,100:120,121:123]].columns.values) cat_indices1 = list(X.iloc[:,np.r_[94,96,98,99,120]].columns.values) print(len(num_indices1)) print(len(cat_indices1)) cat_indices = [94, 96, 98, 99, 120] from imblearn.pipeline import make_pipeline pipeline=Pipeline(steps= [ # Categorical features ('feature_processing', FeatureUnion(transformer_list = [ ('categorical', MultiColumn(cat_indices1)), #numeric ('numeric', Pipeline(steps = [ ('select', MultiColumn(num_indices1)), ('scale', StandardScaler()) ])) ])), ('clf', rg) ] ) pipeline_with_resampling = make_pipeline(SMOTENC(categorical_features=cat_indices), pipeline)
{ "domain": "datascience.stackexchange", "id": 6252, "tags": "python, scikit-learn, imbalanced-learn, smotenc" }
Optimizing coin splitting - Is this algorithm as fast as I think?
Question: In a recent exam, I've been asked to solve the following problem: The problem Two players play the following game: Given a sequence of coin values $v_1,\ldots,v_n (n \vert 2)$ the players take turns in taking one coin from either "end" of the sequence. The value of that coin is added to their score. Player 1 always starts. What is the maximum score that Player 1 can reach if Player 2 plays optimally (always minimizing their potential score)? What is the time-complexity of your algorithm? Prove your result. Example: Given the coin sequence $1,1,3,1,1,2$ one of the best playing sequence for player 1 is (scores denoted as the second tuple): $$(\lbrack 1,1,3,1,1\rbrack,(2,0)) \rightarrow (\lbrack 1,1,3,1\rbrack,(2,1)) \rightarrow (\lbrack 1,3,1\rbrack,(3,1)) \rightarrow (\lbrack 3,1\rbrack, (3,2)) \rightarrow (\lbrack 1 \rbrack, (6,2)) \rightarrow (6,3)$$ The best score player 1 can reach is 6. My solution I'm just going to roughly outline my proof here, the formal correctness is not the point of this question. I'm more interested in the correctness of my asserted upper bound of complexity. For the following I will use tuples to denote ranges of values. $(i,j)$ specifies the value range $v_i,\ldots,v_j$. We iteratively create a DAG $G = (V, E, \gamma)$ as follows: Create a vertex $v$ named $(1,n)$. Create or refer to the previously created vertices $v_1, v_2, v_3$, named $(2,n-1), (3,n), (1,n-2)$ respectively. Create the edges $e_1 = (v,v_1), \gamma(e_1) = v_1$ and $e_2 = (v,v_2), \gamma(e_2) = max\lbrace v_1, v_n \rbrace$ and $e_3 = (v,v_3), \gamma(e_3) = v_n$. Iterate through the vertices $v_{\lbrace 1,2,3\rbrace}$ using a queue or similar to repeat the process until the sequence of coins denoted by the $v$ we iterate over is empty. The number of edges in the created graph is bounded as follows: $\lvert E \rvert \leq 3\lvert V \rvert$. 
The number of vertices is denoted by the following sum: $1 + \sum_{i=0}^{\frac{n}{2}} 2i + 1 \leq n^2$ We can determine the maximum score for player 1 by "backpropagating" the edge weights through the graph, which takes $\lvert V \rvert * \lvert E \rvert$ steps. This makes the overall algorithm asymptotically require $\Theta(n^2)$ steps. The problem I have with my answer is the upper bound for the number of vertices. I just pull that number out of thin air and my gut feeling tells me that it is correct, but I might just be completely wrong. Assuming that this is indeed a correct way to calculate the answer to the question, is my time-complexity argumentation sound? Answer: You have made significant progress on this problem. Your final conclusion, "the overall algorithm asymptotically requires $\Theta(n^2)$ steps", is likely to be correct as well. Analysis of Your Argumentation Here are some unclear or unconventional or inconsistent notations found in your post, which make it difficult to understand or justify your arguments about time-complexity. Does "$n|2$" mean $n$ is even? The standard convention for "$a$ divides $b$" is $a|b$ according to math stackexchange answers and my years of practicing number theory. So $2|n$ is the preferred notation for $n$ is even. Anyway, in the current case, I strongly prefer the plain English, $n$ is even. "use tuples to denote ranges of values. $(i,j)$ specifies the value range $v_i,\cdots,v_j$.". People use $(i,j)$ generally to mean the pair of elements $i$ and $j$ or an open interval with ends $i$ and $j$. I would suggest you use $v_{i..j}$, where the two-dot notation $i..j$ is the convention for an integer interval. "create a DAG $G = (V, E, \gamma)$". Do you mean an edge-weighted DAG? "Create a vertex $v$ named $(1,n)$", which does not sound like English. It is better to write "Create a vertex $v$ that represents $1..n$".
"Create or refer to the previously created vertices $v_1,v_2,v_3$, named $(2,n−1),(3,n),(1,n−2)$ respectively". Although I can probably guess your intention, I am puzzled by the notations. Did you not notice that $v_1, v_2, \cdots, v_n$ appear as the coin values in the quoted problem? It looks like you want to use "$(2,n-1)$" as a name, which would be very confusing, since I have never seen any name of a person, an item, a variable or whatever that starts and ends with a parenthesis, except for very few social online accounts. "Create the edges $e_1=(v,v_1),\gamma(e_1)=v_1$ and $e_2=(v,v_2), \gamma(e_2)=\max\{v_1,v_n\}$ and $e_3=(v,v_3),\gamma(e_3)=v_n$". Now I am more into the wonderland. Is the codomain of $\gamma$ the vertices or the indices or the coin values? "Iterate through the vertices $v_{\lbrace 1,2,3\rbrace}$". It is better to just write $v_1, v_2, v_3$. "the sequence of coins denoted by the $v$". Didn't you say $v$ is a vertex? Now let us check your computation of the time-complexity. "The number of edges in the created graph is bounded as follows: $|E|\le3|V|$. The number of vertices is denoted by the following sum: $1 + \sum_{i=0}^{\frac{n}{2}} 2i + 1 \leq n^2$". It looks like your estimates are correct, even though they can be tighter. "We can determine the maximum score for player 1 by 'backpropagating' the edge weights through the graph, which takes $| V | * | E |$ steps. This makes the overall algorithm asymptotically require $\Theta(n^2)$ steps". Since it is not clear how your backpropagating works exactly, it is hard to justify the number of steps. However, since $| V |* | E | \le |V| * (3 |V|) \le 3 n^4 $, you can only say that your algorithm takes $O(n^4)$ time. In summary, even assuming that your algorithm is indeed a correct way, there is not enough clear information to fully justify or disprove your time-complexity argumentation.
There is quite some room for you to improve the description of your algorithm and the time-complexity argumentation. A Reference Solution For simplicity, I will assume $n$ is even. Let $f(i,j)$ be the maximal score reachable by the first player if the two players start the game with the coin sequence $v_i, v_{i+1}, \cdots, v_j$. Then we have the base cases, $$f(i, i+1)=\max(v_i, v_{i+1})$$ and the recurrence relation $$ f(i,j) =\max(v_i+\min(f(i+2,j),\ f(i+1,j-1)),\ v_j + \min(f(i+1,j-1),\ f(i,j-2)))$$ The parameter pair $(i,j)$ of any $f(i,j)$ that appears in the course of computation of $f(1,n)$ will be such that $1\le i\le j\le n$ and $(i-1)+(n-j)$ must be even. The number of all such parameter pairs is $n^2/4$. The base case and the recurrence relation together show that we need some $O(1)$ operations to obtain $f(i,j)$ for a new parameter pair $(i,j)$, assuming that the usual memoization technique in dynamic programming is used. So the time-complexity of computing $f(1,n)$ is $\Theta(n^2/4)=\Theta(n^2)$.
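The base cases and recurrence in the reference solution translate almost directly into a memoized implementation. A minimal sketch in Python (my own, 0-indexed, so $f(i,j)$ becomes f(i, j) over list indices; it assumes an even-length sequence as the problem states):

```python
from functools import lru_cache

def best_first_player_score(coins):
    """Max score for player 1 against an adversarial player 2,
    following the f(i, j) recurrence from the reference solution."""
    n = len(coins)  # assumed even and >= 2

    @lru_cache(maxsize=None)
    def f(i, j):
        if j == i + 1:                     # base case: two coins left
            return max(coins[i], coins[j])
        # Take the left coin; the opponent then moves to minimize our remainder.
        take_left = coins[i] + min(f(i + 2, j), f(i + 1, j - 1))
        # Or take the right coin, under the same adversarial assumption.
        take_right = coins[j] + min(f(i + 1, j - 1), f(i, j - 2))
        return max(take_left, take_right)

    return f(0, n - 1)

print(best_first_player_score([1, 1, 3, 1, 1, 2]))  # → 6, matching the worked example
```

With memoization each of the $O(n^2)$ reachable $(i,j)$ pairs is computed once in $O(1)$ work, giving the $\Theta(n^2)$ bound derived above.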
{ "domain": "cs.stackexchange", "id": 12113, "tags": "algorithms, graphs, runtime-analysis, check-my-answer" }
Relation between gravitational potential and gravitational field
Question: The relation between gravitational potential and gravitational field is $V_{r_2} - V_{r_1} = - \int^{r_2}_{r_1}\vec E\cdot d\vec r$ where $V$ stands for potential and $E$ for gravitational field. Now, I know by calculation that $E$ inside a thin spherical shell is $0$ whereas $V$ inside a shell is non-zero and is fixed at $$ V= -{GM\over a} $$ where $a =$ radius of shell and $M =$ mass of shell To find that $V$ is non-zero and fixed at $-{GM\over a}$ inside a shell we have taken the reference point as infinity and $V(\infty)=0$. So modifying the first equation we can write $$V(r) - V(\infty) = - \int_\infty^r\vec E\cdot d\vec r$$ Since $V(\infty)$ is taken to be zero $$V(r) = - \int_\infty^r\vec E\cdot d\vec r \tag{1}$$ I am unable to use (1) to verify the relationship between $V$ and $E$ inside a shell. I have proved $E$ inside a shell is $0$. Then I should get $V(r) = 0$ from (1)? But my $V(r)$ must be fixed at $-{GM\over a}$. What am I missing here? By sorting this concept out I also want to answer the following question- Q) Let $V$ and $E$ denote the gravitational potential and field at a point. It is possible to have- A) $V=0$ and $E=0$ B) $V=0$ and $E\ne 0$ C) $V\ne0$ and $E =0$ D) $V\ne0$ and $E\ne0$ (The question does not specify if the reference point is infinity. All it says is what is given above.) The answer given is ALL OF THE OPTIONS ARE TRUE. I have taken the reference point as infinity and I came to the following observations- D) is obviously true and is a valid statement by generally thinking of V and E at a point and also by (1). A) is also true if you assume a massless space and at all points E and V = 0; AND ALSO BY (1) {If I put either one as 0 in the equation, shouldn't I get the other = 0??} For C) If you consider the case of a shell it becomes true but I am NOT ABLE TO OBTAIN IT THROUGH (1) For B) I am unable to think of such a situation nor am I able to obtain the result by (1) (PLEASE NOTE I HAVE USED REFERENCE POINT AS INFINITY.)
So I would like to clear up the concept of the relationship between the two through (1) and then use it to solve the problem at hand. How can ALL OPTIONS BE TRUE? If I take the reference as some other point, will it give different results leading to ALL 4 BEING TRUE? Or are only 3 true and B) false, because when I tried to obtain a solution I encountered sources where it was reported B) can never happen. Answer: To get the potential inside the spherical shell, note that the gravitational field is discontinuous at the shell, since it drops suddenly to zero. So, you need to do the integral in (1) in two parts: $$V(r < a) = -\int_\infty^r \vec{E}\cdot d\vec{r} = -\int_\infty^a \vec{E}_{outside}\cdot d\vec{r} - \int_a^r \vec{E}_{inside}\cdot d\vec{r}$$ since $\vec{E}_{inside} = \vec{0},$ this simplifies to $$V(r < a) = -\int_\infty^a \vec{E}_{outside}\cdot d\vec{r} = -\frac{GM}{a}$$ Since the result does not depend on $r$, it is constant inside the shell. As for the multiple choice question, B is actually impossible because gravity is always attractive. To see this, start with an empty universe, where $V = 0$ everywhere. Now add any mass, and you get a gravitational potential: $$V(r) = -\frac{GM}{r}.$$ The terms $G$, $r$, and $M$ are all positive (we have not found negative mass anywhere), so this expression is always negative. If the gravitational field is not zero, then there must be some mass nearby, which means the gravitational potential is negative everywhere and cannot be zero.
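The two-part integral can also be checked numerically. The sketch below (shell mass and radius are hypothetical, chosen only for illustration) integrates the outside field $GM/s^2$ from the shell radius out to a large cutoff standing in for infinity; since the inside field is zero, the inner leg contributes nothing, so the result is the same for every $r < a$:

```python
G = 6.674e-11   # gravitational constant, SI units
M = 5.0e12      # hypothetical shell mass, kg
a = 10.0        # hypothetical shell radius, m

def potential_inside(r, s_max=1.0e8, n=50_000):
    """V(r) for r < a via V = -integral of E from infinity to r, split at the shell."""
    assert 0.0 <= r < a
    # Trapezoidal integral of E_outside = G*M/s**2 on a log-spaced grid from a to s_max.
    xs = [a * (s_max / a) ** (k / n) for k in range(n + 1)]
    outer_leg = sum((xs[k + 1] - xs[k]) * 0.5 * (G * M / xs[k] ** 2 + G * M / xs[k + 1] ** 2)
                    for k in range(n))
    # Inner leg from a down to r: E_inside = 0, so it adds nothing; r drops out.
    return -outer_leg

print(potential_inside(0.0), potential_inside(9.0), -G * M / a)  # all approximately -GM/a
```

The numerical value agrees with $-GM/a$ and is identical at the center and near the edge, which is exactly the "non-zero but constant" behavior the question asks about (option C).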
{ "domain": "physics.stackexchange", "id": 47555, "tags": "homework-and-exercises, newtonian-mechanics, newtonian-gravity, potential" }
Write object data to CSV files
Question: Does this method follow the rules of object-oriented programming? If not, how can I change it? static void SaveStudentData(Student[] student) { StreamWriter File1 = new StreamWriter("First.csv", false, Encoding.Default); StreamWriter File2 = new StreamWriter("Fourth.csv", false, Encoding.Default); for (int i = 0; i < Count; i++) { if (student[i].Year == 1) { File1.WriteLine("{0};{1};{2};{3};{4};{5};{6}", student[i].FName, student[i].LName, student[i].BDate.ToString("yyyy/MM/dd"), student[i].StudentID, student[i].Year, student[i].PhoneNumber, student[i].IsFreshman); } else if (student[i].Year == 4) { File2.WriteLine("{0};{1};{2};{3};{4};{5};{6}", student[i].FName, student[i].LName, student[i].BDate.ToString("yyyy/MM/dd"), student[i].StudentID, student[i].Year, student[i].PhoneNumber, student[i].IsFreshman); } } File1.Close(); File2.Close(); } Answer: Here's a possible solution - note there are always many ways to solve a problem. The main points I'm trying to demonstrate here are: Your concerns are kept separate. There are data elements, data formatting, I/O and business logic. Classes can be mostly independent. For instance, you can create a different formatting class that implements IStudentFormatter and inject it into the I/O class.
That being said, here is the implementation: public interface IStudentFormatter { string AsCsv(Student student); } public sealed class StudentToCsv : IStudentFormatter { public string AsCsv(Student student) { if (student == null) { throw new ArgumentNullException("student"); } return string.Format( "{0};{1};{2};{3};{4};{5};{6}", student.FName, student.LName, student.BDate.ToString("yyyy/MM/dd"), student.StudentID, student.Year, student.PhoneNumber, student.IsFreshman); } } public static class StudentBusinessLogic { private const int FirstYear = 1; private const int FourthYear = 4; public static IEnumerable<Student> FirstYears(this IEnumerable<Student> students) { if (students == null) { throw new ArgumentNullException("students"); } return students.Where(student => (student != null) && (student.Year == FirstYear)); } public static IEnumerable<Student> FourthYears(this IEnumerable<Student> students) { if (students == null) { throw new ArgumentNullException("students"); } return students.Where(student => (student != null) && (student.Year == FourthYear)); } } public sealed class StudentWriter : StreamWriter { private readonly IStudentFormatter _StudentToCsv; public StudentWriter(string path, bool append, Encoding encoding, IStudentFormatter studentToCsv) : base(path, append, encoding) { if (studentToCsv == null) { throw new ArgumentNullException("studentToCsv"); } this._StudentToCsv = studentToCsv; } public void WriteStudent(Student student) { if (student == null) { throw new ArgumentNullException("student"); } this.WriteLine(this._StudentToCsv.AsCsv(student)); } public void WriteStudents(IEnumerable<Student> students) { if (students == null) { throw new ArgumentNullException("students"); } foreach (var student in students) { this.WriteStudent(student); } } } You'd call these as such: static void SaveStudentData(IEnumerable<Student> students) { var studentToCsv = new StudentToCsv(); using (var file1 = new StudentWriter("First.csv", false, Encoding.Default, 
studentToCsv)) { file1.WriteStudents(students.FirstYears()); } using (var file2 = new StudentWriter("Fourth.csv", false, Encoding.Default, studentToCsv)) { file2.WriteStudents(students.FourthYears()); } }
{ "domain": "codereview.stackexchange", "id": 15855, "tags": "c#, csv" }
Can water differential be used to store renewable energy?
Question: Excuse me if the question is poorly worded or there is already a similar question; I searched, but as a physics layman I do not know the proper terminology. I was watching a Nova story just now and they mentioned that the major problem with renewable energy (solar and wind) is that there is no place to store the energy, i.e. on a sunny day, what do you do with the extra power generated, to compensate for a cloudy day? They mentioned how batteries are used but this is not an ideal solution for various reasons. I thought in my head: could the extra power that is generated be used to pump water to a higher elevation and be stored? Once power is needed again the water can be released to power water turbines to create hydro-electric power. (This is what I meant by water differential; if there is a proper term for this please feel free to correct my question.) Now obviously this is not some brilliant idea or else someone would be doing it, so my question is: why is this not practical? Answer: Yes, it can be done and it is done. It is called pumped-storage hydroelectricity. When there is a surplus in electric energy available (e.g. during the night, when people sleep and factories are not operating) this excess is used to pump water to higher altitudes. The energy stored in this way is then used to buffer during times of high consumption. More details here: https://en.wikipedia.org/wiki/Pumped-storage_hydroelectricity It is obviously not available everywhere (there need to be mountains or similar), I assume they are pretty expensive to build and they are stationary, so you have to consider the loss transporting the electric energy to and from the plant. The German Wikipedia has a more detailed article, which lists the total efficiency of the plant at 70% to 80%. This includes the loss pumping the water and the inherent loss of the turbines.
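The energy bookkeeping behind such a plant is just $E = mgh$ scaled by the round-trip efficiency. A sketch with illustrative numbers (the volume and head are hypothetical; 75% is mid-range of the 70-80% efficiency cited above):

```python
g = 9.81  # gravitational acceleration, m/s^2

def recoverable_energy_kwh(volume_m3, head_m, round_trip_eff=0.75):
    """kWh recoverable after pumping `volume_m3` of water up a head of `head_m` metres."""
    mass_kg = 1000.0 * volume_m3          # water is ~1000 kg per cubic metre
    joules = mass_kg * g * head_m * round_trip_eff
    return joules / 3.6e6                 # 1 kWh = 3.6e6 J

# e.g. an Olympic-pool-sized volume (~2500 m^3) raised 300 m:
print(round(recoverable_energy_kwh(2500, 300), 1), "kWh")  # → 1532.8 kWh
```

The modest result for a pool-sized volume hints at why real pumped-storage plants use entire reservoirs and large elevation differences, and why suitable mountainous sites matter.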
{ "domain": "physics.stackexchange", "id": 36936, "tags": "renewable-energy" }
Get average post score of TOP 10 tags on StackOverflow using Stack Exchange Data Explorer
Question: I want to use the Stack Exchange Data Explorer in order to display some basic information about the top tags. The query should print the number of questions and answers, the average number of answers per question, and the average score of posts (questions and answers under such questions). The query is ordered by the total number of questions. This is my SQL code:

SELECT TOP(10)
    Tags.TagName,
    COUNT(DISTINCT Questions.Id) AS 'Number of Questions',
    COUNT(Answers.Id) AS 'Number of Answers',
    (COUNT(Answers.Id)+0.0)/COUNT(DISTINCT Questions.Id) AS 'Answers per question',
    (
        SELECT AVG(post.Score+0.0)
        FROM Posts post, PostTags tag
        WHERE (post.Id=tag.PostId OR post.ParentId=tag.PostId)
          AND tag.TagId=Tags.Id
    ) AS 'average score of posts under this tag(questions and answers)'
FROM Posts Questions
LEFT JOIN Posts Answers ON Answers.ParentId = Questions.Id
INNER JOIN PostTags ON Questions.Id=PostTags.PostId
INNER JOIN Tags ON PostTags.TagId=Tags.Id
GROUP BY Tags.Id, Tags.TagName
ORDER BY COUNT(*) DESC

You can look at the script in the data explorer here. The problem is that the script times out (in around 100 seconds). If I remove the OR post.ParentId=tag.PostId, it still takes around 25 seconds to run (for Stack Overflow) but it does not time out. If I run my script on Super User, it does not time out. (It runs in around 4 seconds; I think that is because Super User is smaller than Stack Overflow.) Is there a way to optimize the script so that it does not time out on Stack Overflow without losing functionality, and why does that simple OR decrease the performance that much? The OR is there so that answers also count toward the average score. I've added the execution plan files here. Answer: Oh well, I didn't notice the sql-server tag... What a detailed execution plan they have :) But anyway the answer may be as simple as that there are more answers than there are questions.
And because you are making another subselect for the column, and you are selecting what you already have in the outer select, this may get expensive quite fast. So I would move the subquery from the columns to the source tables. Something like this (sorry, I'm not familiar with SQL Server syntax):

SELECT TOP(10)
    Tags.Id,
    Tags.TagName,
    COUNT(Questions.Id) AS 'numOfQuestions',
    SUM(Answers.AnswerCount) AS 'numOfAnswers',
    (SUM(Answers.AnswerCount) + 0.0) / COUNT(Questions.Id) AS 'answersPerQuestion',
    (SUM(Questions.Score) + SUM(Answers.ScoreSum) + 0.0) /
        (COUNT(Questions.Id) + SUM(Answers.AnswerCount)) AS 'averageScore'
FROM Tags
INNER JOIN PostTags ON PostTags.TagId = Tags.Id
INNER JOIN Posts Questions ON Questions.Id = PostTags.PostId
LEFT JOIN (
    SELECT Question.Id AS QuestionId,
           COUNT(Answer.Id) AS AnswerCount,
           SUM(Answer.Score) AS ScoreSum
    FROM Posts Question
    INNER JOIN Posts Answer ON Question.Id = Answer.ParentId
    GROUP BY Question.Id
) AS Answers ON Answers.QuestionId = Questions.Id
GROUP BY Tags.Id, Tags.TagName
ORDER BY COUNT(Questions.Id) + SUM(Answers.AnswerCount) DESC

A few notes: I'm intentionally doing a left join with the subquery and then an inner join in the subquery itself, to limit the size of the subquery result. But this means that the outer query must handle possible nulls and replace them with zeroes, which I have not done (because I don't know SQL Server syntax), but you should know what I mean. There is a lot of aggregation going on, but only the top 10 are selected in the end. It is possible that the above approach may still not be the most performant. In that case I would recommend splitting the query into a few separate queries. First extract the top 10 using only what is needed for this decision. Then aggregate the other stuff on a subset of all posts limited to those 10 tags. EDIT: I have now noticed that in your question you state: "The query is ordered by the total number of questions." But your query is not doing that.
It orders by COUNT(*), which means that a question without answers is counted as +1 and a question with N (N>0) answers is counted as +N. At first I made my query sort by the number of questions and answers together, which is yet something different. But to comply with your statement, it should be ORDER BY COUNT(Questions.Id) DESC only. And it should have been COUNT(DISTINCT Questions.Id) in your original query...
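The COUNT(*) vs COUNT(DISTINCT ...) distinction described here is easy to see on a toy dataset. A minimal sqlite3 sketch (the table and column names are simplified stand-ins for the Stack Exchange schema, not the real one):

```python
# With questions LEFT JOINed to their answers, COUNT(*) counts one row per
# answer (plus one per answerless question), while COUNT(DISTINCT q.id)
# counts each question exactly once.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE posts (id INTEGER PRIMARY KEY, parent_id INTEGER);
    -- question 1 with two answers, question 2 with none
    INSERT INTO posts VALUES (1, NULL), (2, NULL), (10, 1), (11, 1);
""")
count_star, count_distinct = conn.execute("""
    SELECT COUNT(*), COUNT(DISTINCT q.id)
    FROM posts q
    LEFT JOIN posts a ON a.parent_id = q.id
    WHERE q.parent_id IS NULL
""").fetchone()
# count_star == 3 (two answer rows + one answerless question)
# count_distinct == 2 (two questions)
```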
{ "domain": "codereview.stackexchange", "id": 37660, "tags": "performance, sql, time-limit-exceeded, sql-server, stackexchange" }
Why are fine structure energies $ \propto \alpha^4 $?
Question: There are 3 main contributions to the fine structure energy shifts: relativistic kinetic energy, LS (spin-orbit) coupling and the Darwin term. All these shifts to lowest order scale as $\alpha^4$, where $\alpha$ here is the fine structure constant. Is there an easy way to see why this is? Answer: One way is to look at how the relativistic energy scales. In the Bohr model the fine structure constant is related to the (ground state) velocity by: $$\alpha=\frac{v}{c}$$ In relativity the energy of a particle is given by: $$E=\frac{mc^2}{\sqrt{1-\frac{v^2}{c^2}}}$$ Expanding this in terms of $v/c$ we get: $$ E= mc^2 \left( 1+\frac{v^2}{2c^2}+\frac{3v^4}{8c^4}+...\right)$$ The first term here is the rest energy; the second is the classical kinetic energy, which, since it is proportional to $v^2$, leads us to expect the gross energy of the atom to be $\propto \alpha^2$. Higher order terms are related to relativistic corrections, the most prominent one being $\frac{v^4}{c^4}$, and thus we expect the relativistic corrections to be of the order of $\alpha^4$.
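The expansion quoted in the answer can be checked symbolically. A short SymPy sketch:

```python
# Series check: E = m c^2 / sqrt(1 - v^2/c^2) expanded in v should give
# m c^2 + m v^2 / 2 + 3 m v^4 / (8 c^2) + ..., i.e. a quartic coefficient
# of 3m/(8c^2), which is the alpha^4-scale relativistic correction.
import sympy as sp

v, c, m = sp.symbols("v c m", positive=True)
E = m * c**2 / sp.sqrt(1 - v**2 / c**2)

series = sp.series(E, v, 0, 6).removeO()
quartic_coeff = series.coeff(v, 4)   # expect 3*m/(8*c**2)
```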
{ "domain": "physics.stackexchange", "id": 32968, "tags": "quantum-mechanics, special-relativity, atomic-physics, dimensional-analysis, physical-constants" }
Prove that a model with infinite tapes is stronger than the Turing machine model
Question: I want to prove that if we have a model, $A$, with an unbounded number of tapes, then $A$ is stronger than the Turing machine model. Can you help me with an example of a language on which a Turing machine $M$ will fail but $A$ will give an output? Thanks. Answer: It is a standard proof that Turing machines with any finite number of tapes are equivalent to single-tape Turing machines, since you can code any fixed number of tapes on a single tape. If you allow a Turing machine to have an infinite number of tapes, then you can decide any language $L$. To do this, first copy the input $x_1\dots x_\ell$ so that the $i$th tape contains the symbol $x_i$ in its first cell. Now move all the tape heads back to position $1$ and enter some special state $q_\mathrm{decide}$. Recall that the transition function decides what to do based on the state the machine is in at the moment and what is under each of the infinitely many heads. So just define it to accept from state $q_{\mathrm{decide}}$ if there is some $k$ such that it sees $w_1, \dots, w_k$ under the first $k$ heads, $w_1\dots w_k\in L$, and every other head sees the blank character; otherwise, it rejects. Note that there is a philosophical problem with writing down the transition function if $L$ is undecidable, but the function certainly exists.
{ "domain": "cs.stackexchange", "id": 8916, "tags": "turing-machines" }
Interpretation of the nonuniqueness of vacuum of QFT in flat spacetime for a given inertial observer; No Lorentz transformation; No accelerated motion
Question: Consider an inertial observer in flat spacetime with a choice of coordinates $(t,{\vec x})$. This observer can expand a quantum field $\hat{\phi}$ in more than one complete set of orthonormal modes. Let us consider two such choices made by the observer in question. In terms of modes $\{f_i\}$, the quantum field is expanded as $$\hat{\phi}(t,{\vec x})=\sum_i\left(\hat{a}_if_i+\hat{a}_i^\dagger f_i^*\right)$$ $$[a_i,a_j^\dagger]=\delta_{ij},\quad [a_i,a_j]=[a_i^\dagger, a_j^\dagger]=0.$$ Similarly, in terms of modes $\{g_i\}$, the same field $\hat{\phi}(t,{\vec x})$ is expanded as $$\hat{\phi}(t,{\vec x})=\sum_i\left(\hat{b}_ig_i+\hat{b}_i^\dagger g_i^*\right)$$ $$[b_i,b_j^\dagger]=\delta_{ij},\quad [b_i,b_j]=[b_i^\dagger,b_j^\dagger]=0.$$ It can be shown that the basis modes are related by a Bogoliubov transformation. Further, it can be shown that the vacuum defined by $a_i|0_f\rangle=0$ is different from the vacuum defined by $b_i|0_g\rangle=0$. In particular, though $$\langle 0_f|a_i^\dagger a_i|0_f\rangle=0,$$ $$\langle 0_f|b_i^\dagger b_i|0_f\rangle\neq 0.$$ Does it mean that even for a given inertial observer the notion of vacuum is ambiguous? If more than one choice of annihilation operators (e.g. $a_i$ and $b_i$, above) is possible, how can we define a unique vacuum? Is there at all a preferred choice of basis modes (and thus, a preferred choice of creation and annihilation operators) and a preferred vacuum? Please note that I am (i) neither talking about changing from one inertial frame to another (Lorentz transformation), (ii) nor about curved spacetime or accelerated observers. I asked a related question here, but I did not receive a satisfactory answer, probably because the question was not focussed and was poorly worded. Answer: Valter Moretti gave a great answer from the perspective of rigorous mathematics. As a supplement, I'll add an answer from the less rigorous perspective of for-all-practical-purposes (FAPP) physics.
The context of the question is flat spacetime with a single observer, but generality is one of the keys to insight, so I'll start with a broader perspective and then apply it to the question. The FAPP approach 1. The energy operator is the operator that generates time-translations. 2. In quantum field theory, the vacuum state is the state of lowest energy. It can be non-unique due to things like spontaneous symmetry breaking, but that's a different subject. In many models, including models of free fields, it's unique — after we specify the energy operator. Proper time is observer-dependent, and it's a local concept (applicable only in a neighborhood of that observer), not a global concept. People often point out that definition 1 only makes sense in a spacetime that has time-translation symmetry, like flat spacetime. But here's the FAPP theme: In the neighborhood of any given observer, we can always define an effective Hamiltonian $H=\int_R T^{00}$, where $T^{ab}$ is the energy-momentum tensor, $R$ is a spatial neighborhood of the observer, and the coordinate system is such that the "time" coordinate (index $0$) agrees with the observer's proper time. The operator $H$ works just fine as a generator of time-translations in a neighborhood of that observer, so we can use it in definition 2: any state $|0\rangle$ that minimizes the expectation value of $H$ qualifies as a vacuum state locally, for that observer. (Warning: We cannot let the local integration region $R$ be too small — not microscopic — because the energy density $T^{00}$ is not bounded below in relativistic QFT. I cited some references about this in an answer to another question. In this answer, I'm assuming that the spacetime curvature and the observer's acceleration are both mild enough so that $R$ can be large enough to make that effect unimportant.) 3. Particles are defined with respect to the vacuum state.
But the vacuum state is the state that minimizes the energy, and the energy operator is observer-dependent (and typically only defined locally, as the generator of that observer's proper time translations locally), so particles are observer-dependent. By the way, the expression $\int_R T^{00}$ for the (local) energy operator illustrates the important idea of a splittable symmetry — a symmetry that can be applied locally, to only part of a system. I described the significance of splittable symmetries in another answer. This is the kind of "time translation symmetry" that matters when we're talking about a given observer, because observers are localized. Application to inertial observers in flat spacetime In flat spacetime, thanks to time-translation symmetry, the vacuum state can be defined globally. And thanks to Poincaré symmetry, the vacuum state is the same for all inertial observers, even though different inertial observers have different energy operators. Therefore, all inertial observers in flat spacetime agree about the definition of "particle." The question asks about a single inertial observer in flat spacetime. The answer is that physical particles are defined with respect to the lowest-energy state, and the energy operator is uniquely defined, so the lowest-energy state is also uniquely defined — ignoring things like spontaneous symmetry breaking, which is beside the point here. Accelerating observer in flat spacetime: the Unruh effect By the way, we can apply this same approach to the famous Unruh effect. The Unruh effect considers a uniformly accelerating observer in flat spacetime. For a uniformly accelerating observer, we can use the operator that generates boosts as the operator $K$ that generates translations of that observer's proper time. (The boost operator is globally defined, but that's not important. The important thing is that it's defined in a neighborhood of the observer, where it generates translations of that observer's proper time.) 
Locally, we can define a set of states that minimizes the expectation value of a local version of $K$. All of these states look the same locally, to that observer, and none of these states agree with the Minkowski vacuum state. That's why a uniformly accelerating observer "sees" particles in the Minkowski vacuum, and conversely why inertial observers "see" particles in the accelerating observer's vacuum. This is just a straightforward application of the definitions 1,2,3, applied locally FAPP. The Hawking effect We can also apply this approach to the even-more-famous Hawking effect. Hopefully that's pretty obvious now, so I won't ramble on about it. A useless definition Every once in a while, you might encounter a book/paper that defines the vacuum state as the state annihilated by a set of mutually commuting annihilation operators — operators $a_n$ whose commutator with their adjoints is $[a_n,a_m^\dagger]=\delta_{nm}$. Authors are free to make whatever definitions they want, because language is arbitrary, but it's important to understand that the annihilation-operator-based definition is generally completely different than the lowest-energy definition that is standard in the quantum field theory literature. To see just how arbitrary the annihilation-operator-based definition is, suppose that $|\varnothing\rangle$ is the state satisfying $a_n|\varnothing\rangle=0$ for all $a_n$, and let $|\psi\rangle$ be absolutely any other state. A unitary operator $U$ satisfying $U|\varnothing\rangle=|\psi\rangle$ always exists, and the operators $b_n\equiv U^{-1}a_n U$ satisfy the same type of commutation relations as the original operators: $[b_n,b_m^\dagger]=\delta_{nm}$. Therefore, according to the annihilation-operator-based definition, every state in the Hilbert space would be a vacuum state. That's not a very useful definition, but it's not illegal, either. It's just different... and useless.
A similar but more useful definition Given a preferred time coordinate, such as one that agrees locally with a given observer's proper time, we can define positive frequency and negative frequency. And given any time-dependent operator in the Heisenberg picture, we can define its positive- and negative-frequency parts. In free field theories, the standard set of creation and annihilation operators is defined to be the negative- and positive-frequency (respectively) parts of the field operators. Those annihilation operators annihilate the lowest-energy state, because the positive-frequency part of any time-dependent operator acts as an energy-lowering operator. When applied to a state that already has the lowest possible energy, an energy-lowering operator annihilates it. So if we define the vacuum state to be the state annihilated by the positive-frequency parts of all time-dependent operators, then this is equivalent to defining the vacuum state as the state of lowest energy. In free field theory, these operators are called annihilation operators, but notice that these are not just any old set of annihilation operators. They're defined with respect to a given energy operator (generator of time translations), and that's the key to answering the question.
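The observer-dependent particle content at the heart of the question can also be illustrated numerically. A small sketch (single mode, truncated Fock space; the particular numbers are illustrative): for a Bogoliubov transform $b = \alpha a + \beta a^\dagger$ with $|\alpha|^2 - |\beta|^2 = 1$, the $f$-vacuum contains $\langle 0_f|b^\dagger b|0_f\rangle = |\beta|^2$ quanta of the $b$-modes.

```python
# Numeric check on a truncated Fock space that <0_f| b^dagger b |0_f> = |beta|^2
# for a single-mode Bogoliubov transform b = alpha*a + beta*a^dagger.
import numpy as np

N = 20                                       # Fock-space truncation
a = np.diag(np.sqrt(np.arange(1, N)), k=1)   # annihilation operator: a|n> = sqrt(n)|n-1>
adag = a.conj().T

beta = 0.6
alpha = np.sqrt(1 + beta**2)                 # enforce |alpha|^2 - |beta|^2 = 1
b = alpha * a + beta * adag

vac = np.zeros(N)
vac[0] = 1.0                                 # |0_f>, annihilated by a
n_b = vac @ (b.conj().T @ b) @ vac           # expected b-mode occupation: beta**2
```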
{ "domain": "physics.stackexchange", "id": 78882, "tags": "quantum-field-theory, special-relativity, operators, definition, unruh-effect" }
Equilibrium of rigid body
Question: A uniform beam $AB$ of length $2l$ rests with end $A$ in contact with rough horizontal ground. A point $C$ on the beam rests against a smooth support. $AC$ is of length $\frac{3l}{2}$ with $C$ higher than $A$, and $AC$ makes an angle of $60$ degrees with the horizontal. If the beam is in limiting equilibrium, find the coefficient of friction between the beam and the ground. Can anyone draw the diagram for me? I can't visualize it. Answer: I drew one for you, hope it helps!
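For readers who also want the numbers behind the diagram, here is a hedged SymPy sketch of one way to set the problem up (my own working, not part of the original answer): the weight $W$ acts at the midpoint (distance $l$ along the beam from $A$), the smooth support at $C$ pushes perpendicular to the beam, and the ground at $A$ supplies a normal force $N$ and friction $F$.

```python
# Moments about A plus horizontal/vertical force balance for the beam problem.
import sympy as sp

W, l, R = sp.symbols("W l R", positive=True)
theta = sp.pi / 3   # beam at 60 degrees to the horizontal

# Moments about A: the smooth reaction R acts at 3l/2 along the beam,
# perpendicular to it; the weight's moment arm is the horizontal offset l*cos(theta).
R_val = sp.solve(sp.Eq(R * sp.Rational(3, 2) * l, W * l * sp.cos(theta)), R)[0]

F = R_val * sp.sin(theta)        # horizontal balance: friction at A
N = W - R_val * sp.cos(theta)    # vertical balance: normal force at A
mu = sp.simplify(F / N)          # expect sqrt(3)/5, about 0.346
```

Under these assumptions the limiting coefficient of friction comes out to $\mu = \sqrt{3}/5$.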
{ "domain": "physics.stackexchange", "id": 31840, "tags": "homework-and-exercises, newtonian-mechanics, friction, statics, equilibrium" }
Where does the $(\ell + x)^2\dot\theta^2$ term come from in the Lagrangian of a spring pendulum?
Question: I am reading some notes about Lagrangian mechanics. I don't understand equation 6.9, which gives the Lagrangian for a spring pendulum (a massive particle on one end a spring). $$T = \frac{1}{2}m\Bigl(\dot{x}^2 + (\ell + x)^2\dot{\theta}^2\Bigr)\tag{6.9}$$ I don't understand where the component $(\ell + x)^2\dot{\theta}^2$ is coming from. If we say the $x$-component is radial and $y$ is tangential, so we have according to this $\vec{v}^2 = v_{x}^2 + v_{y}^2$, then $y = (\ell + x)\sin\theta$ by small angle approximation we have $y = (\ell + x)\theta$, but then if we choose this coordinate system then $V(x,y)$ equation doesn't make sense specifically the potential from gravity! If someone could shed some light into this that would be nice. Answer: Velocities in the kinetic part of Lagrangian The variable $\;x\;$, that represents the displacement of the string from its position at rest, has been replaced by the variable $\;s\;$ in order not to be confused with the coordinate $\;x\;$ of a Cartesian system. The velocity of the particle $\:\mathbf{v}\:$ is analysed as follows \begin{equation} \mathbf{v}=\mathbf{v}_{s}+\mathbf{v}_{\theta} \tag{01} \end{equation} where $\:\mathbf{v}_{s}\:$ the component along the string line and $\:\mathbf{v}_{\theta}\:$ that normal to it. Now, \begin{equation} v_{s}=\dfrac{d\left(\ell+s\right)}{dt}=\underbrace{\dot\ell}_{=0}+\dot{s}=\dot{s} \tag{02} \end{equation} \begin{equation} v_{\theta}=\left(\ell+s\right)\omega =\left(\ell+s\right) \dfrac{d\theta}{dt}=\left(\ell+s\right)\dot{\theta} \tag{03} \end{equation} \begin{equation} v^{2}=v_{s}^2 + v_{\theta}^2=\dot{s}^2 + (\ell + s)^2\dot{\theta}^2 \tag{04} \end{equation}
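The decomposition in Eqs. (01)-(04) can also be verified directly from Cartesian coordinates. A short SymPy sketch:

```python
# Write the bob's Cartesian position, differentiate, and verify that
# v^2 = s'^2 + (l + s)^2 * theta'^2, matching Eq. (04).
import sympy as sp

t = sp.symbols("t")
l = sp.symbols("l", positive=True)
s = sp.Function("s")(t)          # spring extension
theta = sp.Function("theta")(t)  # angle from the vertical

# Bob position measured from the pivot
x = (l + s) * sp.sin(theta)
y = -(l + s) * sp.cos(theta)

v2 = sp.diff(x, t)**2 + sp.diff(y, t)**2
expected = sp.diff(s, t)**2 + (l + s)**2 * sp.diff(theta, t)**2
difference = sp.simplify(v2 - expected)   # should be 0
```

The cross terms cancel by $\sin^2\theta + \cos^2\theta = 1$, which is why no small-angle approximation is needed for the kinetic energy.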
{ "domain": "physics.stackexchange", "id": 23525, "tags": "homework-and-exercises, classical-mechanics, lagrangian-formalism" }
Automatically transferring files between folders depending on the file name
Question: I'm relatively new to Python. I've written my "first" script to do some stuff for me. I am creating an image dataset for machine learning. In Photoshop, I select the area I need in the large image and, using a Photoshop script, I save the selected area to a folder tree_checker. I have different scripts configured in Photoshop; each script saves a file with a specific name. This script takes every new file in the watched directory and moves it into the corresponding directory.

"""
This script provides automatic file ordering.
"""
import os
import time


def get_dir_length(path):
    """
    Returns the length of the given directory, e.g. the amount of files inside the folder.
    """
    return len([f for f in os.listdir(path) if os.path.isfile(os.path.join(path, f))])


ORC_MALE = 'orc_male.jpg'
ORC_MALE_MAG = 'orc_male_mag.jpg'
ORC_FEMALE = 'orc_female.jpg'
DARKELF_MALE = 'darkelf_male.jpg'
DARKELF_FEMALE = 'darkelf_female.jpg'
HUMAN_MALE = 'human_male.jpg'
HUMAN_MALE_MAG = 'human_male_mag.jpg'
HUMAN_FEMALE = 'human_female.jpg'
ELF_MALE = 'elf_male.jpg'
ELF_FEMALE = 'elf_female.jpg'
DWARF_MALE = 'dwarf_male.jpg'
DWARF_FEMALE = 'dwarf_female.jpg'

PATH_TO_WATCH = './tf_models/tree_checker/'
BEFORE = dict([(f, None) for f in os.listdir(PATH_TO_WATCH)])
COUNT_AMOUNT_OF_FILE_MOVE = 0

try:
    while 1:
        time.sleep(1)
        AFTER = dict([(f, None) for f in os.listdir(PATH_TO_WATCH)])
        ADDED = [f for f in AFTER if not f in BEFORE]
        REMOVED = [f for f in BEFORE if not f in AFTER]
        if ADDED:
            if ''.join(ADDED) == ORC_MALE:
                DIR_LEN = get_dir_length('./tf_models/orc_male/')
                try:
                    os.rename((PATH_TO_WATCH + ''.join(ADDED)),
                              ('./tf_models/orc_male/orc_male_' + str(DIR_LEN+1) + '.jpg'))
                except FileExistsError:
                    os.rename((PATH_TO_WATCH + ''.join(ADDED)),
                              ('./tf_models/orc_male/orc_male_' + str(DIR_LEN+1) + 'a.jpg'))
                COUNT_AMOUNT_OF_FILE_MOVE += 1
            if ''.join(ADDED) == ORC_MALE_MAG:
                DIR_LEN = get_dir_length('./tf_models/orc_male_mag/')
                try:
                    os.rename((PATH_TO_WATCH + ''.join(ADDED)),
                              ('./tf_models/orc_male_mag/orc_male_mag_' + str(DIR_LEN+1) + '.jpg'))
                except FileExistsError:
                    os.rename((PATH_TO_WATCH + ''.join(ADDED)),
                              ('./tf_models/orc_male_mag/orc_male_mag_' + str(DIR_LEN+1) + 'a.jpg'))
                COUNT_AMOUNT_OF_FILE_MOVE += 1
            if ''.join(ADDED) == ORC_FEMALE:
                DIR_LEN = get_dir_length('./tf_models/orc_female/')
                try:
                    os.rename((PATH_TO_WATCH + ''.join(ADDED)),
                              ('./tf_models/orc_female/orc_male_' + str(DIR_LEN+1) + '.jpg'))
                except FileExistsError:
                    os.rename((PATH_TO_WATCH + ''.join(ADDED)),
                              ('./tf_models/orc_female/orc_male_' + str(DIR_LEN+1) + 'a.jpg'))
                COUNT_AMOUNT_OF_FILE_MOVE += 1
            if ''.join(ADDED) == DARKELF_MALE:
                DIR_LEN = get_dir_length('./tf_models/darkelf_male/')
                try:
                    os.rename((PATH_TO_WATCH + ''.join(ADDED)),
                              ('./tf_models/darkelf_male/darkelf_male_' + str(DIR_LEN+1) + '.jpg'))
                except FileExistsError:
                    os.rename((PATH_TO_WATCH + ''.join(ADDED)),
                              ('./tf_models/darkelf_male/darkelf_male_' + str(DIR_LEN+1) + 'a.jpg'))
                COUNT_AMOUNT_OF_FILE_MOVE += 1
            if ''.join(ADDED) == DARKELF_FEMALE:
                DIR_LEN = get_dir_length('./tf_models/darkelf_female/')
                try:
                    os.rename((PATH_TO_WATCH + ''.join(ADDED)),
                              ('./tf_models/darkelf_female/darkelf_female_' + str(DIR_LEN+1) + '.jpg'))
                except FileExistsError:
                    os.rename((PATH_TO_WATCH + ''.join(ADDED)),
                              ('./tf_models/darkelf_female/darkelf_female_' + str(DIR_LEN+1) + 'a.jpg'))
                COUNT_AMOUNT_OF_FILE_MOVE += 1
            if ''.join(ADDED) == HUMAN_MALE:
                DIR_LEN = get_dir_length('./tf_models/human_male/')
                try:
                    os.rename((PATH_TO_WATCH + ''.join(ADDED)),
                              ('./tf_models/human_male/human_male_' + str(DIR_LEN+1) + '.jpg'))
                except FileExistsError:
                    os.rename((PATH_TO_WATCH + ''.join(ADDED)),
                              ('./tf_models/human_male/human_male_' + str(DIR_LEN+1) + 'a.jpg'))
                COUNT_AMOUNT_OF_FILE_MOVE += 1
            if ''.join(ADDED) == HUMAN_MALE_MAG:
                DIR_LEN = get_dir_length('./tf_models/human_male_mag/')
                try:
                    os.rename((PATH_TO_WATCH + ''.join(ADDED)),
                              ('./tf_models/human_male_mag/human_male_mag_' + str(DIR_LEN+1) + '.jpg'))
                except FileExistsError:
                    os.rename((PATH_TO_WATCH + ''.join(ADDED)),
                              ('./tf_models/human_male_mag/human_male_mag_' + str(DIR_LEN+1) + 'a.jpg'))
                COUNT_AMOUNT_OF_FILE_MOVE += 1
            if ''.join(ADDED) == HUMAN_FEMALE:
                DIR_LEN = get_dir_length('./tf_models/human_female/')
                try:
                    os.rename((PATH_TO_WATCH + ''.join(ADDED)),
                              ('./tf_models/human_female/human_female_' + str(DIR_LEN+1) + '.jpg'))
                except FileExistsError:
                    os.rename((PATH_TO_WATCH + ''.join(ADDED)),
                              ('./tf_models/human_female/human_female_' + str(DIR_LEN+1) + 'a.jpg'))
                COUNT_AMOUNT_OF_FILE_MOVE += 1
            if ''.join(ADDED) == ELF_MALE:
                DIR_LEN = get_dir_length('./tf_models/elf_male/')
                try:
                    os.rename((PATH_TO_WATCH + ''.join(ADDED)),
                              ('./tf_models/elf_male/elf_male_' + str(DIR_LEN+1) + '.jpg'))
                except FileExistsError:
                    os.rename((PATH_TO_WATCH + ''.join(ADDED)),
                              ('./tf_models/elf_male/elf_male_' + str(DIR_LEN+1) + 'a.jpg'))
                COUNT_AMOUNT_OF_FILE_MOVE += 1
            if ''.join(ADDED) == ELF_FEMALE:
                DIR_LEN = get_dir_length('./tf_models/elf_female/')
                try:
                    os.rename((PATH_TO_WATCH + ''.join(ADDED)),
                              ('./tf_models/elf_female/elf_female_' + str(DIR_LEN+1) + '.jpg'))
                except FileExistsError:
                    os.rename((PATH_TO_WATCH + ''.join(ADDED)),
                              ('./tf_models/elf_female/elf_female_' + str(DIR_LEN+1) + 'a.jpg'))
                COUNT_AMOUNT_OF_FILE_MOVE += 1
            if ''.join(ADDED) == DWARF_MALE:
                DIR_LEN = get_dir_length('./tf_models/dwarf_male/')
                try:
                    os.rename((PATH_TO_WATCH + ''.join(ADDED)),
                              ('./tf_models/dwarf_male/dwarf_male_' + str(DIR_LEN+1) + '.jpg'))
                except FileExistsError:
                    os.rename((PATH_TO_WATCH + ''.join(ADDED)),
                              ('./tf_models/dwarf_male/dwarf_male_' + str(DIR_LEN+1) + 'a.jpg'))
                COUNT_AMOUNT_OF_FILE_MOVE += 1
            if ''.join(ADDED) == DWARF_FEMALE:
                DIR_LEN = get_dir_length('./tf_models/dwarf_female/')
                try:
                    os.rename((PATH_TO_WATCH + ''.join(ADDED)),
                              ('./tf_models/dwarf_female/dwarf_female_' + str(DIR_LEN+1) + '.jpg'))
                except FileExistsError:
                    os.rename((PATH_TO_WATCH + ''.join(ADDED)),
                              ('./tf_models/dwarf_female/dwarf_female_' + str(DIR_LEN+1) + 'a.jpg'))
                COUNT_AMOUNT_OF_FILE_MOVE += 1
        BEFORE = AFTER
        if (COUNT_AMOUNT_OF_FILE_MOVE % 10 == 0 and COUNT_AMOUNT_OF_FILE_MOVE > 0):
            print('Currently moved ' + str(COUNT_AMOUNT_OF_FILE_MOVE) +
                  ' files. - ' + str(time.clock))
except KeyboardInterrupt:
    print(COUNT_AMOUNT_OF_FILE_MOVE)

One place that could be refactored is "using a dictionary comprehension", as my pylint says. But I don't know how to do that. Also, Pylint says that it will be faster if I refactor here:

consider-using-dict-comprehension (R1717): Consider using a dictionary comprehension. Although there is nothing syntactically wrong with this code, it is hard to read and can be simplified to a dict comprehension. Also it is faster since you don't need to create another transient list.

What does this code do? Folders tree:

Project
|-elf_male
|-elf_female
|-darkelf_male
|-darkelf_female
|-dwarf_male
|-dwarf_female
|-human_male
|-human_male_mag
|-human_female
|-orc_male
|-orc_male_mag
|-orc_female
`-tree_checker

In general, this script continuously watches whether there is a new image in the folder tree_checker. As soon as there is a new image (images have to be put into this folder manually), the script checks the name of the image (e.g. orc_male.jpg, orc_male_mag.jpg, orc_female.jpg etc.) and moves the image to the folder according to the name of the image. This script is actually quite slow. If you insert a few images into the tree_checker folder too quickly, it breaks.

Answer: One place that could be refactored is "using a dictionary comprehension", as my pylint says. But I don't know how to do that.

Let's start with that, as this is the least of this code's problems. A dictionary comprehension is roughly like a list comprehension (which you know of and use well) except: it produces a dictionary instead of a list; it uses braces instead of brackets; it uses a key: value token instead of a single element in front of the for keyword.
BEFORE = {f: None for f in os.listdir(PATH_TO_WATCH)}

But since you're not changing the value, you can use dict.fromkeys:

BEFORE = dict.fromkeys(os.listdir(PATH_TO_WATCH))

However, you never make use of the values of the dictionary anyway. So keep it simple and use sets instead. This even lets you compute additions and removals much more easily:

after = set(os.listdir(PATH_TO_WATCH))
added = after - before
removed = before - after

Now, onto your real problem: this code repeats exactly the same instructions for each of your subfolders! This is less than optimal. Instead, write a function that operates on the folder name. It would also be a good idea to list these destination folders automatically instead of hardcoding their names. Also, your usage of ''.join(ADDED) is problematic: if you ever add more than one file per second to the folder you monitor, you will end up with a name that can't be matched against anything:

>>> added = ['human_male.jpg', 'elf_female.jpg']
>>> ''.join(added)
'human_male.jpgelf_female.jpg'

Instead you should loop over ADDED and check if each file name matches any of the destination folders. Your check for an existing file may help catch some overwrite errors, but what if the second filename also already exists? If you want to properly handle such cases, you should retry in a loop with an increasing attempt count when writing the new file. Lastly, try to separate computation from presentation. Make this a reusable function and move your prints outside of it, into a main part:

#! /usr/bin/env python3
"""This script provides automatic file ordering.
"""
import os
import time
import pathlib
from itertools import count


def get_dir_length(path):
    """Return the amount of files inside a folder"""
    return sum(1 for f in path.iterdir() if f.is_file())


def monitor_folder(path):
    path = pathlib.Path(path).absolute()
    destinations = {f.name for f in path.parent.iterdir() if f.is_dir()}
    destinations.remove(path.name)
    content = {f.name for f in path.iterdir()}

    while True:
        time.sleep(1)
        current = {f.name for f in path.iterdir()}
        added = current - content
        # removed = content - current

        for filename in added:
            name, suffix = os.path.splitext(filename)
            if suffix != '.jpg':
                continue
            if name in destinations:
                size = get_dir_length(path.parent / name)
                new_name = '{}_{}.jpg'.format(name, size)
                for attempt in count():
                    try:
                        os.rename(str(path / filename),
                                  str(path.parent / name / new_name))
                    except FileExistsError:
                        new_name = '{}_{}({}).jpg'.format(name, size, attempt)
                    else:
                        break
                yield filename, new_name
        content = current


def main(folder_to_watch):
    files_moved = 0
    try:
        for _ in monitor_folder(folder_to_watch):
            files_moved += 1
            if files_moved % 10 == 0:
                print('Currently moved', files_moved, 'files. −', time.process_time())
    except KeyboardInterrupt:
        print('End of script, moved', files_moved, 'files.')


if __name__ == '__main__':
    main('./tf_models/tree_checker/')
{ "domain": "codereview.stackexchange", "id": 33681, "tags": "python, performance" }
Determinant of a matrix
Question: I'm revisiting some code that I wrote for one of my finals projects and wanted to know whether there was a more optimal, more elegant way to do this so it does not look so "hard-coded". The problem was that I needed to calculate the determinant of a matrix, and at the time (some of it was rushed) I did not know of a better way, using loops or reusable code, that could make the code a little better. Here is the code:

namespace Determinant
{
    template<int X>
    float determinant(std::vector<Vector> &data)
    {
        float deter = 0.0;
        if(X == 2)
        {
            float a = data[0][0];
            float b = data[1][0];
            float c = data[1][1];
            deter = (a + c) * (a + c) - 4 * (a*c - b*b);
        }
        else if(X == 3)
        {
            float determinant1 = (data[1][1] * data[2][2]) - (data[2][1] * data[1][2]);
            float determinant2 = (data[1][0] * data[2][2]) - (data[2][0] * data[1][2]);
            float determinant3 = (data[1][0] * data[2][1]) - (data[2][0] * data[1][1]);
            deter = (data[0][0] * determinant1) - (data[0][1] * determinant2)
                  + (data[0][2] * determinant3);
        }
        return deter;
    }
}

As you can see, the determinant is very hard-coded, which is probably not the right way. But are there any alternatives that aren't hard-coded? I want to start to use a design pattern, as I feel one would be useful here, but can't seem to figure out which one. Answer: You don't really need a design pattern for determinants, just a better algorithm. Generally one of the easiest (and fastest) ways of calculating a matrix determinant is by using what is known as LU-decomposition. This factors a matrix into two matrices, a lower triangular and an upper triangular matrix. From these, the determinant can simply be calculated as the product of diagonal elements. Note that you have to be careful when calculating determinants of large matrices; for a 100x100 matrix, it can easily overflow the maximum size of a float (or double). For this reason it's often better to calculate a log-determinant.
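Both suggestions are straightforward to sketch, done here in Python for brevity. This is a hand-rolled Doolittle LU without pivoting, so it assumes nonzero pivots; a real implementation would pivot, as library LU routines do:

```python
# Determinant via LU: with A = L @ U and diag(L) == 1, det(A) = prod(diag(U)).
import numpy as np

def det_via_lu(a):
    """Doolittle elimination without pivoting (assumes nonzero pivots)."""
    u = np.array(a, dtype=float)
    n = len(u)
    for k in range(n):
        for i in range(k + 1, n):
            u[i, k:] -= (u[i, k] / u[k, k]) * u[k, k:]
    return float(np.prod(np.diag(u)))

a = np.array([[4.0, 3.0], [6.0, 3.0]])
d = det_via_lu(a)                        # -6.0, matches np.linalg.det(a)

# The overflow-safe route for large matrices mentioned in the answer:
sign, logabsdet = np.linalg.slogdet(a)   # sign * exp(logabsdet) == det(a)
```

For a 100x100 matrix, `slogdet` keeps the magnitude in log space, sidestepping the float overflow the answer warns about.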
On to the actual code you've presented: data should be passed by const& since it isn't (and shouldn't be) modified: float determinant(const std::vector<Vector>& data) Using a template int parameter to choose between determinant sizes is really odd, and is potentially easily misused. What if I use Determinant::determinant<2>(...) on a 3x3 matrix? It'll give me the wrong answer. You should generally try to make your code easy to use and hard to misuse. In this case, that means calculating the determinant size based on the row/column size of the passed parameter. Better yet would be creating a matrix class to encapsulate all of this information. These days, it generally doesn't make a lot of sense to use float over double unless you really need the speed (and even then, it is often no faster, and can sometimes even be slower on modern hardware). Stick to using double by default.
{ "domain": "codereview.stackexchange", "id": 31490, "tags": "c++, matrix, library" }