To prove mathematically that a needle standing vertically on its tip will fall exponentially as soon as it deviates from its original position. (Homework Problem)
Question: This is one of my questions from Homework for Quantum Mechanics. I know the answer qualitatively but cannot find any mathematical explanation. Question: Consider a thin needle of total mass m and length L. If it has an “atomically thin” endpoint but is otherwise macroscopic in size, you would not expect to be able to stand the needle exactly on its endpoint. Describe the classical motion of the needle when |θ| << 1 (here you should define θ = 0 to be when the needle is perfectly upright). Show that the needle will begin to tip over exponentially quickly if it deviates even slightly from θ = 0. In your derivation, you can assume there’s friction where the needle connects to the ground, such that the needle rotates rigidly about its endpoint. My Approach: I know that the needle will fall as soon as it deviates slightly from θ = 0 because its center of mass will deviate and gravity will pull it down. How do I show this mathematically? Answer: If you follow my comment's advice, you obtain an equation of motion of the form $\ddot{\theta}=\omega^2\sin\theta$, where $\omega>0$ is a multiple of $\sqrt{g/L}$ you can compute as an exercise. Its value is conceptually unimportant, though you might be expected to determine it due to its appearance in the exponential tipping discovered below. Although $\theta(t)\equiv0$ is a solution, it's unstable because of the way a small deviation evolves. While $\theta$ is small, $\ddot{\theta}\approx\omega^2\theta$. This approximation implies $\theta=Ae^{\omega t}+Be^{-\omega t}$ for constants $A,\,B$. Taking $\theta=0$ at $t=0$, $A=-B\ne0$, so eventually $\theta\approx Ae^{\omega t}$. This exponential growth is what you were asked to prove.
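As a sanity check on the exponential growth, here is a short numerical sketch (an illustration, not part of the original answer; $\omega$ is set to 1 here since its precise value is conceptually unimportant). It integrates $\ddot\theta=\omega^2\sin\theta$ from a tiny deviation and verifies that $\ln\theta$ grows with slope close to $\omega$ while $\theta$ stays small:

```python
import numpy as np

# Integrate theta'' = omega**2 * sin(theta) with semi-implicit Euler steps,
# starting from a tiny deviation (omega = 1 is an illustrative assumption).
omega = 1.0
dt = 1e-4
theta, theta_dot = 1e-6, 0.0

ts, thetas = [], []
for step in range(int(10.0 / dt)):
    theta_dot += omega**2 * np.sin(theta) * dt
    theta += theta_dot * dt
    if step % 1000 == 0:
        ts.append(step * dt)
        thetas.append(theta)

# Once the growing exponential dominates, log(theta) grows linearly in t
# with slope ~ omega. Fit the slope over the window 1e-4 < theta < 1e-2.
pts = [(t, th) for t, th in zip(ts, thetas) if 1e-4 < th < 1e-2]
slope = (np.log(pts[-1][1]) - np.log(pts[0][1])) / (pts[-1][0] - pts[0][0])
print(slope)
```

The printed slope comes out close to $\omega = 1$, as the small-angle solution $\theta\approx Ae^{\omega t}$ predicts.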
{ "domain": "physics.stackexchange", "id": 84064, "tags": "homework-and-exercises, newtonian-mechanics, equilibrium, stability" }
Provided a unit vector and force, how can I calculate its components?
Question: Say I have a $F=kQ_{1}Q_{2}/r^{2}$ and a direction vector $(x, y, z).$ How can I find the component forces $F_{x}$, $F_{y}$, and $F_{z}$? Answer: It's somewhat unclear from your question, but I interpreted $F$ to be just the magnitude of the force (a scalar), and you want to construct a force of that magnitude pointing along the given direction vector. If the direction vector is a unit vector (a vector of length 1), then all you have to do is scale (resize) it. So it's just: $$\vec F = F \cdot (x, y, z) = (Fx, Fy, Fz)$$ If the direction vector is not a unit vector, then you have to make it into one first: $$\text{let }\space \vec d = (x, y, z)$$ Then its magnitude squared is $$d^2= \vec d \cdot \vec d = x^2 + y^2 + z^2 \space\space\text{, and}\\ d = \sqrt{x^2 + y^2 + z^2} $$ so $$\vec F = F \cdot \frac{\vec d}{d} = (\frac{Fx}{d}, \frac{Fy}{d}, \frac{Fz}{d})$$
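A quick numerical sketch of the answer's recipe (all numbers are illustrative assumptions):

```python
import numpy as np

# Scale a direction vector to a force of given magnitude.
# Charges, separation, and direction below are assumed example values.
k, Q1, Q2, r = 8.99e9, 1e-6, 2e-6, 0.5
F = k * Q1 * Q2 / r**2            # scalar magnitude of the Coulomb force

d = np.array([3.0, 4.0, 0.0])     # direction vector (not yet unit length)
d_hat = d / np.linalg.norm(d)     # normalize: divide by sqrt(x^2+y^2+z^2)

F_vec = F * d_hat                 # components (Fx, Fy, Fz)
print(F_vec, np.linalg.norm(F_vec))  # the magnitude is recovered as F
```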
{ "domain": "physics.stackexchange", "id": 82794, "tags": "forces, electrostatics, vectors" }
Can we visualize a feature space with 4 or more dimensions?
Question: I have a dataset with 6 independent variables, and I have trained a Support Vector Machine on it. Is there a way in python or R to visualize the feature space with the hyperplane? Or, for example, can I plot many 2D plots such that every plot has two features vs each other separated by the decision boundary in the graph? Answer: For something like visualizing four dimensions, you could do something along the lines of constructing a 3D plot and then adding a color gradient to represent the fourth dimension. For five dimensions you could add scatter plot point size, for six perhaps even scatter point shape as well. After around four dimensions, however, the 'standard' visualization of a scatterplot starts to look fairly clunky. One way to visualize six-dimensional data is to use a technique you mentioned above: namely, create a series of 2D plots. One way to accomplish this is by using the scatter_matrix function in pandas. You can read about it in the pandas documentation. A minimal example is provided below:

```python
import pandas as pd
import numpy as np

# import the scatter_matrix function from pandas
from pandas.plotting import scatter_matrix

# create a df that contains a series of random numbers with 6 features
df = pd.DataFrame(np.random.randn(60, 6),
                  columns=['x_1', 'x_2', 'x_3', 'x_4', 'x_5', 'x_6'])

# create a scatter matrix and plot a histogram across the diagonal
scatter_matrix(df, alpha=.4, figsize=(8, 8), diagonal='hist');
```

The result (figure omitted) is a 6x6 grid of pairwise scatter plots with histograms along the diagonal.
{ "domain": "datascience.stackexchange", "id": 11151, "tags": "visualization" }
S-matrix element
Question: I'm confused about the relation between the fully resummed propagator in a given QFT and the corresponding S-matrix element. According to the LSZ reduction formula ($\phi^4$ theory for definiteness here), the S-matrix element for the simple "scattering" process $$ p \rightarrow q$$ can be computed as: $$ \frac{i\sqrt{Z}}{p^{2} - m^{2}}\frac{i\sqrt{Z}}{q^{2} - m^{2}}\langle p|S|q\rangle = \int d^4y\text{ } e^{ip\cdot y}\int d^4x\text{ } e^{-iq\cdot x} \langle\Omega|T \phi(y) \phi(x)|\Omega\rangle$$ But the Fourier transform of the 2-point correlator is just $$\int d^4y\text{ } e^{ip\cdot y} \langle\Omega|T \phi(y) \phi(0)|\Omega\rangle = \frac{iZ}{p^2-m^2} + \text{terms regular at $p^{2} = m^{2}$}$$ Following this, for the right pole structure on both sides, the S-matrix element on the left side must be something like $$c \cdot \delta^{(4)}(p-q) \text{ }(q^2-m^2)$$ which would vanish on-shell. But in my mind, the amplitude should just contribute to the 1 in the S-matrix and thus just give a Kronecker delta (or just the delta function as in the above term). In the linked image, I visualized my problem in a diagram. There, the hatched bubbles denote the fully resummed propagator and "Amputated" denotes the S-matrix element up to a factor of $Z$, in the sense of Peskin and Schroeder. I'm asking for the value of this S-matrix element or the corresponding amputated diagram. Answer: There is no problem, you just have to go back to the definition of the terms in the LSZ formula to see that $\langle p \vert S \vert q \rangle\vert_\text{connected}$ is indeed just zero for the 1-particle process $p\to q$, and it is only this connected piece that appears in the LSZ formula you are trying to use to compute this amplitude. (In the notes you linked in a comment [pdf], it is chapter 2.4. where the split into connected and disconnected pieces happens.) 
This is because the definition of "disconnected term" in the S-matrix is precisely "term in which nothing happens", i.e. a term that is represented by a Feynman diagram in which at least one particle just propagates without "participating in scattering". These are called "disconnected" because for all diagrams with more than 2 particles, they indeed correspond to disconnected graphs. For "2" particles, it's just a propagator, but this is a "disconnected piece" in the sense of the LSZ formula. So there is no issue. Your reasoning is correct, the connected S-matrix is zero on-shell for this process - which is physically sensible, because the connected S-matrix entries represent interaction, something happening. A particle just propagating is not a scattering event.
{ "domain": "physics.stackexchange", "id": 29860, "tags": "quantum-field-theory, correlation-functions, s-matrix-theory" }
Why is the quantum discord of $\rho$ zero iff $\rho=\sum_j p_j \pi_j\otimes \rho_j$ for mutually orthogonal projections $\pi_j$?
Question: In (Wiseman 2012), the author mentions (equation (4), page 6), that a state $\rho$ has zero quantum discord (more precisely, zero Alice-discord) if and only if it can be written in the form $$\rho = \sum_j p_j \pi_j\otimes \rho_j,$$ for some probability distribution $p_j$, some collection of orthogonal projections $\{\pi_j\}_j$ on Alice's system, and some collection of states $\rho_j$ on Bob's system. By "Alice-discord" I mean here the discord with respect to measurements performed by Alice (the first system). More precisely, the discord is defined here as $$\delta_A(\rho) =I(\rho) - \max_{\{\Pi^A_j\}} J_{\{\Pi^A_j\}}(\rho) = S(\rho_A) - S(\rho) + \min_{\{\Pi^A_j\}} S(\rho_{B|\{\Pi^A_j\}}),$$ where $I$ and $J$ are the two classically equivalent forms of the mutual information, and the maximisation and minimisation are performed with respect to possible measurements performed by Alice. The author mentions this as "well known" and does not provide a reference. Is there an easy way to see why this is the case? Alternatively, what is a reference discussing this fact? Answer: $$I(A:B)=S(A)+S(B)-S(AB)$$ $$J(A_{\{\Pi_{i}\}}:B)=S(A_{\{\Pi_{i}\}})+S(B)-S(A_{\{\Pi_{i}\}}B)$$ $$I(A:B)-J(A_{\{\Pi_{i}\}}:B)=S(A)-S(AB)-S(A_{\{\Pi_{i}\}})+S(A_{\{\Pi_{i}\}}B)$$ Since $$\rho = \sum_j p_j \pi_j\otimes \rho_j,$$ it is invariant under measurements performed on subsystem A in the $\{\pi_j\}$ basis, after the results are forgotten, so $$S(A)=S(A_{\{\Pi_{i}\}})$$ and $$S(AB)=S(A_{\{\Pi_{i}\}}B)$$ resulting in a discord (for Alice) of 0. If $\rho$ were any other state, i.e. one not left invariant by any projective measurement on A even after the minimisation, then measuring it would increase the entropy, giving a non-zero discord. Both Quantum Discord: A Measure of the Quantumness of Correlations and Studies on the Role of Entanglement in Mixed-state Quantum Computation go over this in greater detail.
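A small numerical sketch (illustrative, not from the original answer) makes the invariance argument concrete: for a state of the given form, dephasing Alice's side in the $\{\pi_j\}$ basis changes neither the state nor its entropies. All matrices below are assumed example choices on two qubits:

```python
import numpy as np

def vn_entropy(rho):
    # von Neumann entropy in bits, ignoring (numerically) zero eigenvalues
    evals = np.linalg.eigvalsh(rho)
    evals = evals[evals > 1e-12]
    return float(-np.sum(evals * np.log2(evals)))

# A classical-quantum state: rho = sum_j p_j |j><j| (x) rho_j
p = [0.3, 0.7]
proj = [np.diag([1.0, 0.0]), np.diag([0.0, 1.0])]   # orthogonal projectors on A
rho_b = [np.array([[1.0, 0.0], [0.0, 0.0]]),         # |0><0|
         0.5 * np.eye(2)]                            # maximally mixed
rho = sum(p[j] * np.kron(proj[j], rho_b[j]) for j in range(2))

# Measure A in the {|j>} basis and forget the result (dephasing)
dephased = sum(np.kron(proj[j], np.eye(2)) @ rho @ np.kron(proj[j], np.eye(2))
               for j in range(2))

# The measurement changes nothing, so the entropies agree term by term
print(np.allclose(rho, dephased), vn_entropy(rho), vn_entropy(dephased))
```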
{ "domain": "quantumcomputing.stackexchange", "id": 3044, "tags": "non-locality, nonclassicality, quantum-discord" }
Letter Combinations of a Phone Number
Question: I came across this problem in the “Elements of Programming Interviews” interview preparation book, and also on the site leetcode.com (link to problem). Problem statement – Letter Combinations of a Phone Number "Given a digit string, e.g., “23”, return all possible letter combinations that the number could represent. The mapping of digits to letters is the same as on a telephone number pad. So for “23”, the combinations are ["ad", "ae", "af", "bd", "be", "bf", "cd", "ce", "cf"]." There are plenty of solutions online and in books. However, I’m curious to know about the fundamentals of this problem. Is this problem essentially asking for the Cartesian product of multiple sets? If so, what are some efficient algorithms to compute this? Some solutions reference DFS and BFS. How are DFS and BFS related to this problem (are they possibly referring to the type of recursion being used to solve the problem)? Thanks Answer: There are plenty of solutions online and in books. However, I’m curious to know about the fundamentals of this problem. Basically, this problem is trying to reveal your ability to solve problems using recursive algorithms/divide&conquer for generating all possible combinations (brute-force). There are some filtering techniques that can be used in practice, but the worst-case running time is exponential. Is this problem essentially asking for the Cartesian product of multiple sets? If so, what are some efficient algorithms to compute this? Yes, one way to look at the problem is as a 'Cartesian product of multiple sets'. Given a string S, you can consider two sets a[i] and b[i], where a[i] is the set of character strings that can be made by the substring S[1:i], and b[i] is the set of character strings that can be made by the substring S[i+1:end]; then output the Cartesian product of a[i] and b[i] for every possible i. However, this algorithm is not the best one. Some solutions reference DFS and BFS. 
How are DFS and BFS related to this problem (are they possibly referring to the type of recursion being used to solve the problem)? Those solutions use DFS because of the problem's naturally recursive structure. A BFS version, which is iterative and uses an explicit queue, can be used instead to avoid stack overflow on very deep recursion.
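For reference, the Cartesian-product view described above can be written in a few lines (a sketch; the keypad mapping is the standard layout the problem assumes):

```python
from itertools import product

# Digit-to-letters mapping as on a standard telephone keypad
KEYPAD = {'2': 'abc', '3': 'def', '4': 'ghi', '5': 'jkl',
          '6': 'mno', '7': 'pqrs', '8': 'tuv', '9': 'wxyz'}

def letter_combinations(digits):
    if not digits:
        return []
    # The answer set is exactly the Cartesian product of the letter sets,
    # one set per digit; itertools.product enumerates it lazily.
    return [''.join(combo) for combo in product(*(KEYPAD[d] for d in digits))]

print(letter_combinations("23"))
# ['ad', 'ae', 'af', 'bd', 'be', 'bf', 'cd', 'ce', 'cf']
```

A recursive DFS that appends one letter per call visits the same tree of partial strings; the version above just delegates that traversal to the library.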
{ "domain": "cs.stackexchange", "id": 6661, "tags": "algorithms, combinatorics, sets" }
What happens if two different elementary particles with different charges collide?
Question: For example, the up type quark and electron, or down type quark and up type quark. What will happen then? What will be absorbed or radiated? In the end, what kind of particles will turn out? Will it annihilate? These questions have been bothering me a lot for several days now, but there is no answer on the internet. Please answer, and I am even more concerned about what the Feynman diagrams look like in these cases. Answer: What happens when particles collide? What particles can collide and what not? These questions are answered by knowing the Lagrangian of a theory, which is a function that tells you the dynamics of your objects. From the Lagrangian one can obtain the Feynman rules, with which one draws the Feynman diagrams. Our current best theory is the Standard Model, and from its Lagrangian one can derive the interactions. As far as we know, an up quark and an electron cannot interact directly, meaning that there isn't a vertex where an up quark and an electron meet. They can interact via the exchange of a photon or weak boson (diagram omitted). If this is what you mean by "collide" (which, despite many experiments being called "colliders", isn't a term that most theoretical particle physicists use), then this is your answer. Up quark and down quark, that's a different story. In the SM Lagrangian, one can see that there is a quark-boson-quark vertex, meaning that two quarks can annihilate and produce a boson (diagram omitted). This boson will quickly decay into other particles. Which ones? The ones allowed by the Lagrangian! A lepton-antilepton pair, or an up-down quark pair, for example. Ultimately, electric charge (which seems to be what you're interested in) is a number that must be conserved at each vertex, but that doesn't otherwise tell you which vertices are allowed and which aren't: you must study and understand the Lagrangian of your theory.
{ "domain": "physics.stackexchange", "id": 88967, "tags": "electromagnetism, quantum-field-theory, particle-physics, charge, feynman-diagrams" }
Using a differential force in the derivation of hydrostatic equilibrium in a star
Question: I've been banging my head against this particular derivation of hydrostatic equilibrium in a star for the last few days, from Carroll and Ostlie's Introduction to Modern Astrophysics (2nd ed., p. 286): The authors consider an infinitesimal cylinder of gas (dimensions $A, dr$) aligned on an r-axis pointing away from the center of the star. Equilibrium demands that acceleration be zero, so we get $$F_{bottom} + F_{top} + F_G = 0,$$ where the first two terms are the forces, taken normal to the surface, due to pressure, and $F_G$ is the gravitational force on the cylinder. The top of the cylinder is away from the center. Alternative derivations I have found from this point onward make perfect sense: substitute $-\rho A dr$ for the cylinder mass $dm$ in the $F_G$ term, substitute $P_{bottom}$ and $P_{top}$, and the rest is straightforward (this is the derivation on Wikipedia). However, Carroll and Ostlie take an approach I haven't been able to understand. At this point they define $$F_{top} = -(F_{bottom} + dF_P)$$ where $dF_P$ - I'll write it $dF$ hereafter - is called the differential force caused by the change in pressure due to a change in $r$. Substitution into the first equation and a few other steps here ($dF = A dP$) lead to $dP/{dr} = -\rho g$ as needed. I have had trouble with the following: $dF$ needs to be negative because otherwise $F_{top}$ is greater in magnitude than $F_{bottom}$, which makes no sense - if $|F_t| > |F_b|$ then pressure isn't acting against gravity, is it? Pressure itself would be pushing the cylinder down in that case. ($b$ is 'bottom', $t$ is 'top'.) But if $dF$ is negative, then the way the (second) equation is written makes no sense. Wouldn't it make more sense to say $F_t = -F_b + dF$? 
Partially motivated by this question and partially trying to approach the problem from a slightly different angle, I decided to define $dF = F_b + F_t$ as the net outward force due to the infinitesimal change in pressure between the top and bottom of the cylinder. Then, as $F_{net} = 0,$ we get $dF + F_G = 0$, but this leads to an incorrect equation, namely $$ dF + F_G = 0 \implies dP/dr = \rho g . $$ My main issue is that I do not understand why trying to define a positive $dF$ fails to produce the correct hydrostatic equilibrium equation. Furthermore, if we view $dF$ as an infinitesimal quantity, trying to ensure its sign is consistent with the directions of the forces (outward being positive), we once again get the wrong answer. Can we only take $dF$ as an inward force? If so, why? If not, how can I establish a positive/outward differential force that will let me derive the correct hydrostatic equilibrium equation? Edit: I think I can illustrate my dilemma a bit more succinctly. Considering the equation $F_t = -(F_b + dF)$, we can try out 2 different possibilities: $dF$ being positive or negative. If $dF > 0$ then $F_t$ is bigger in magnitude than $F_b$, in which case I don't understand how the pressure gradient creates an outward force because $|F_t|>|F_b|$ implies the downward force is stronger. If $dF < 0$ then $F_t = -F_b + -dF = -F_b + |dF|$ but as far as I can understand, this form cannot, like any of my attempts to define a positive $dF$, output the correct sign in the final equilibrium equation. So computing the signs (that is, saying $(-dF) = |dF|$) somehow makes the equation incorrect. (Additional edit: For me, this also begs the question, if $dF = A dP$ then is $dP$ negative, and what does that signify?) Answer: The whole exercise illustrates the usefulness of vectors in addressing problems. 
$$\vec{F_g} = -\frac{GM(r)\delta m}{r^2}\ \hat{r}$$ $$\vec{F_P} = -\left(\frac{dP}{dr}\right) \delta r \delta A\ \hat{r}$$ $$\vec{F_P} + \vec{F_g} = 0$$ Substituting $\delta m = \rho\, \delta r\, \delta A$ and $g = GM(r)/r^2$, then dividing through by $\delta r\, \delta A$, leads to the scalar equation of hydrostatic equilibrium $$ \frac{dP}{dr}=- \rho g,$$ where $\vec{g}= -g\hat{r}$.
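As a sanity check on the resulting equation, one can integrate $dP/dr=-\rho g$ numerically for an assumed constant-density sphere and recover the analytic profile (all numbers below are illustrative assumptions, SI units):

```python
import math

# For constant density, dP/dr = -(4/3) pi G rho^2 r, so the analytic central
# pressure for a sphere with P(R) = 0 is P_c = (2/3) pi G rho^2 R^2.
G = 6.674e-11
rho = 1.0e3          # kg/m^3, assumed constant density
R = 1.0e6            # m, assumed outer radius where P = 0
P_c = (2.0 / 3.0) * math.pi * G * rho**2 * R**2

# March outward from the center, stepping P with the midpoint of each shell
N = 100000
dr = R / N
P = P_c
for i in range(N):
    r = (i + 0.5) * dr
    M_r = (4.0 / 3.0) * math.pi * rho * r**3    # mass interior to r
    g = G * M_r / r**2
    P -= rho * g * dr

print(P_c, P)  # P should come out near zero at the surface
```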
{ "domain": "astronomy.stackexchange", "id": 4133, "tags": "hydrostatic-equilibrium" }
Why does your reflection stay the same size when you move further away from the mirror?
Question: This was an experiment I saw in my son's workbook. It said to mark out the top of your forehead and the bottom of your chin on a mirror using a whiteboard marker. Then slowly move backwards, and investigate what happens to the size of the reflection relative to the two marks made. It actually got me quite flabbergasted. I always thought the reflection would get smaller as you moved away from the mirror. Why is this? Answer: The marked region keeps framing your face exactly, because your image recedes in step with you. The explanation is the similarity of triangles. Your eye and the two marks on the mirror form one triangle, while your eye and the corresponding two points on the image (which lies as far behind the mirror as you stand in front of it) form another. The two triangles are similar with ratio 1/2, no matter the distance, so the marks always span half your face's height.
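The similar-triangles argument can be checked with a few lines of arithmetic (a sketch, assuming a flat mirror and the eye level with the middle of the face):

```python
# The eye sits at the origin, the mirror plane at x = d, and the image of a
# face point at height h sits at (2d, h). The sight line from (0, 0) to
# (2d, h) crosses the mirror at height h * d / (2d) = h / 2, independent of d.
def mark_separation(face_height, d):
    top, bottom = face_height / 2, -face_height / 2
    return (top * d / (2 * d)) - (bottom * d / (2 * d))

for d in (0.5, 1.0, 3.0):
    print(mark_separation(0.24, d))  # always 0.12: half the face height
```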
{ "domain": "physics.stackexchange", "id": 10089, "tags": "optics, reflection" }
Resources for droplets investigation
Question: I am interested in investigating droplet mechanics. I have read some documents and journal papers related to some aspects of (water) droplets, e.g. surface tension, collision with flat surfaces or particles, etc. However, I feel that I should study some more basic material as well, to make sure that I am aware of all the principles of droplet statics and dynamics, in order to be able to use these concepts in solving more complicated problems. It'd be kind of you to introduce some useful resources in this area, on any aspect of droplets, including but not limited to: basic concepts and definitions of the different parameters defined for droplet motion or the droplet equilibrium state, e.g. different types of contact angles (advancing, receding, etc.), wettability, surface energy, etc.; various sources of force (and energy) that may apply to a droplet in both static and dynamic situations, and how to model (formulate) each one. And in more detail, I'm interested in: collision phenomena, formation of thin liquid films, and lamella formation. Answer: Books For fundamentals I prefer books over papers, because they are typically more thorough and a little bit more 'slow' in the introduction of concepts. There are many books that will cover some of the topics that you mention. I will mention below the 3 books that were most useful to me in the past. 1) An excellent resource for a theoretical foundation in capillarity and wetting is the book by de Gennes et al.: P.G. de Gennes, F. Brochard-Wyart and D. Quéré, Capillarity And Wetting Phenomena - Drops, Bubbles, Pearls, Waves, 2003, Springer Concepts like surface tension and energy, contact angles, contact angle hysteresis, spreading behavior of droplets and many more are dealt with in a way that is quite easy to grasp conceptually. 
If you actually want to understand the derivations you will find that the book tends to skip some steps in the process, which makes it a bit harder to follow, but it provides references that you can use to follow up on that. 2) For a more thorough background on the origin of surface tension you might be interested in the book by Israelachvili: J.N. Israelachvili, Intermolecular and Surface Forces, 2011, Academic Press (Elsevier) 3) A third book that mostly focuses on the dynamics of capillary interfaces, from a fairly mathematical perspective, is the book by Shikhmurzaev: Y.D. Shikhmurzaev, Capillary Flows with Forming Interfaces, 2008, CRC Press Review papers For the latest information, books are typically not the best, because they might already be outdated by a couple of years when they are released. This is where review papers come in. They will still be more extensive than research papers, but are usually quite up to date, at least within a year or so. Additionally, more specific topics like the collision phenomena you are interested in might not be addressed (substantially) in books on capillarity. Below I have selected just a few review papers that you might find interesting to read, but I would advise you to also search for yourself in journals like Annual Review of Fluid Mechanics and Reports on Progress in Physics. 1) Yarin et al., Drop Impact Dynamics, 2006, Annual Review of Fluid Mechanics 2) Grabowski et al., Growth of Cloud Droplets in a Turbulent Environment, 2013, Annual Review of Fluid Mechanics
{ "domain": "physics.stackexchange", "id": 44428, "tags": "fluid-dynamics, resource-recommendations, collision, surface-tension" }
Zero Solution in infinite square well?
Question: Consider the well: $$V(x) = \begin{cases} \infty&\text{if }x<0 \\ 0&\text{if }x\in\left(0,L\right) \\ \infty&\text{if }x>L. \end{cases}$$ Solving the time-independent Schrödinger equation on the well, $$-\frac{\hbar^2}{2m} \frac{\partial^2\psi}{\partial x^2} =E\psi,$$ will yield, among its solutions, $\psi = 0$. Some books say that this solution is un-normalizable, so we ignore it. Some books say the zero solution has no physical meaning. I don't understand why the normalization condition is so important. (Does this condition imply that the particle will not disappear from the world?) If $\psi = 0$, what's the meaning of the state? (Does this mean we can't find the particle in the well? But it's a reasonable solution, so why ignore it?) Answer: If $\psi=0$, what's the meaning of the state? $\psi=0$ isn't (a representation of) a state. It is the position-basis representation of the zero ket. A vector space must have a zero vector, but a state is a ray in Hilbert space, so the space of states is not a vector space. A wavefunction (the position-basis representation of a ket) must be normalizable to unity since $\rho=\psi^*\psi$ is interpreted as a probability density. The system you give is the (one) particle in a (1D) box. That is, the system has by stipulation one particle that has 100% probability of being found somewhere within the box. As a probability density, the solution $\rho = \psi^*\psi = 0$ does not make physical sense.
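To make the normalization point concrete, here is a small numerical sketch (not from the original answer; the grid resolution is an assumption): the well's standard eigenfunctions $\psi_n(x)=\sqrt{2/L}\sin(n\pi x/L)$ integrate to 1, while $\psi=0$ integrates to 0 and cannot be rescaled to 1.

```python
import numpy as np

# Riemann-sum check of normalization over [0, L]
L = 1.0
N = 100000
x = np.linspace(0.0, L, N + 1)
dx = L / N

norms = []
for n in (1, 2, 3):
    psi = np.sqrt(2.0 / L) * np.sin(n * np.pi * x / L)
    norms.append(float(np.sum(psi**2) * dx))   # integral of |psi|^2, ~1

# The trivial solution: zero probability density everywhere
zero_norm = float(np.sum(np.zeros_like(x)**2) * dx)
print(norms, zero_norm)  # norms near 1.0; zero_norm is exactly 0.0
```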
{ "domain": "physics.stackexchange", "id": 67803, "tags": "quantum-mechanics, hilbert-space, wavefunction, probability, born-rule" }
Type to instance map
Question: I needed a class where I could store an instance of any type. The goal was an interface as follows:

```cpp
class TypeMap {
    template<typename T>
    T& get();
};
```

Such that:

- Iff &m1 == &m2 then &m1.get<T>() == &m2.get<T>().
- The first call to m.get<T> for all m, T combinations returns a reference to a default-constructed T.
- A TypeMap going out of scope deallocates all memory that was accessible through it.

Here is my solution:

```cpp
#pragma once

// Note to CR.SO: Provides the ASSERT macro.
#include "assert.hpp"
#include <boost/utility.hpp>
#include <set>

//! A map of types to values.
//!
//! Associative container which allows mapping of types to values of that
//! type.
class TypeMap : boost::noncopyable {
    template<typename T>
    class TypeMapDetail {
        struct Node : boost::noncopyable {
            Node(void* key, Node* parent)
                : key(key), lhs(), rhs(), parent(parent), data() {}

            void* key;
            Node* lhs;
            Node* rhs;
            Node* parent;
            T data;
        };

        Node* root;

        typedef void (*destruct_func)(void*);
        static void destroy_impl(void* p) { delete static_cast<Node*>(p); }
        destruct_func destroy;
        // No member data past this point.

        Node*& get_parent_to_this(Node* current) {
            if (!current->parent)
                return root;
            if (current->parent->lhs == current)
                return current->parent->lhs;
            else
                return current->parent->rhs;
        }

        void remove_specific(Node* current) {
            Node*& parent_to_this = get_parent_to_this(current);
            if (!current->lhs && !current->rhs) {
                parent_to_this = nullptr;
                return;
            }
            if (!current->lhs) {
                parent_to_this = current->rhs;
                current->rhs->parent = current->parent;
                return;
            }
            if (!current->rhs) {
                parent_to_this = current->lhs;
                current->lhs->parent = current->parent;
                return;
            }
            Node* replacement = current->lhs;
            while (replacement->rhs)
                replacement = replacement->rhs;
            replacement->parent->rhs = replacement->lhs;
            replacement->lhs->parent = replacement->parent;
            parent_to_this = replacement;
            current->lhs->parent = replacement;
            current->rhs->parent = replacement;
            replacement->lhs = current->lhs;
            replacement->rhs = current->rhs;
            replacement->parent = current->parent;
        }

    public:
        static TypeMapDetail map_;

        TypeMapDetail() : destroy(destroy_impl), root() {}

        // Will construct if not found.
        Node* retrieve(void* key) {
            if (!root)
                return root = new Node(key, nullptr);
            Node* current = root;
            while (true) {
                if (key < current->key) {
                    if (current->lhs)
                        current = current->lhs;
                    else
                        return current->lhs = new Node(key, current);
                } else if (key > current->key) {
                    if (current->rhs)
                        current = current->rhs;
                    else
                        return current->rhs = new Node(key, current);
                } else {
                    return current;
                }
            }
            ASSERT(!"Unreachable code.");
        }

        static T& get(TypeMap* p) { return map_.retrieve(p)->data; }

        void remove(void* key) {
            Node* current = root;
            while (current) {
                if (key < current->key) {
                    current = current->lhs;
                } else if (key > current->key) {
                    current = current->rhs;
                } else {
                    remove_specific(current);
                    destroy(current);
                    return;
                }
            }
            ASSERT(!"Tried to remove a non-existent node.");
        }

        // Kept for debugging.
        void print(Node* p) const {
            if (!p) return;
            std::cerr << "(";
            print(p->lhs);
            std::cerr << " " << p->data << " ";
            print(p->rhs);
            std::cerr << ")";
        }

        void print() const {
            print(root);
            std::cerr << '\n';
        }
    };

    std::set<void*> members_;

public:
    //! Retrieve the data associated with the given type.
    template<typename T>
    T& get() {
        members_.insert(&TypeMapDetail<T>::map_);
        return TypeMapDetail<T>::get(this);
    }

    ~TypeMap() {
        struct DummyForCleaningPurposes {};
        for (auto m : members_) {
            auto map = static_cast<TypeMapDetail<DummyForCleaningPurposes>*>(m);
            map->remove(this);
        }
    }

    template<typename T>
    static void print() {
        TypeMapDetail<T>::map_.print();
    }
};

template<typename T>
TypeMap::TypeMapDetail<T> TypeMap::TypeMapDetail<T>::map_;
```

I know I am currently using a simple algorithm for the binary search tree which will not auto-balance the tree. I'll take a look at fixing it once I'm sure this way will work. I'll also be splitting out the functions so that the class definition is easier to read. I'm also aware that I should add a const getter of some sort, but I have no need for it and don't want to complicate the code any further. Questions: Does what I wrote satisfy the requirements? It seems to work so far, but I'd really like to know about any issues. What are some non-obvious cases of undefined behaviour here, and are they likely to cause issues in practice? I can see that treating Node<T>*s as Node<DummyForCleaningPurposes>* may break aliasing (could optimisations do anything nasty?), but perhaps there's more issues. (Layout of Node<T> that would make the four pointers in it fail?) Any stylistic issues that could be improved? Stuff that should be commented on? Answer: I don't see why you use void* as a key, when that pointer is always a TypeMap*. You could use a std::map<TypeMap*, T*> as TypeMapDetail<T>'s member. (Yes, std::less<X*> is a safe total ordering, even though operator<(X*, X*) is undefined behavior unless within a common complete object.) 
Or if C++11 features and libraries are available, std::map<TypeMap*, std::unique_ptr<T>> is even easier. I think I would approach the cleanup more like:

```cpp
class TypeMap : boost::noncopyable {
    typedef void cleanup_func(TypeMap*);

    template<typename T>
    class TypeMapDetail : boost::noncopyable {
        static std::map<TypeMap*, std::unique_ptr<T> > objects;
    public:
        static T& get(TypeMap*);
        static void cleanup(TypeMap* tm) { objects.erase(tm); }
    };

    // One cleanup function pointer per type that has been stored.
    typedef std::map<std::type_index, cleanup_func*> cleanup_map_type;
    cleanup_map_type cleanup_actions;

public:
    TypeMap() : cleanup_actions() {}

    ~TypeMap() {
        for (cleanup_map_type::iterator iter = cleanup_actions.begin();
             iter != cleanup_actions.end(); ++iter) {
            iter->second(this);
        }
    }

    template<typename T>
    T& get() {
        cleanup_actions.insert(std::make_pair(
            std::type_index(typeid(T)), &TypeMapDetail<T>::cleanup));
        return TypeMapDetail<T>::get(this);
    }
};
```

... which doesn't involve any of that static_cast undefined behavior at all. If C++11 is out, struct type_index is fairly easy to implement.
{ "domain": "codereview.stackexchange", "id": 6238, "tags": "c++, template-meta-programming" }
Can anyone find a mapping from the set of all possible strings to the natural numbers?
Question: Can anyone find a map(injection) $h$ from the set of all possible strings $S^*$ to the natural numbers $\mathbb{N}$? $$h : S^* \rightarrow \mathbb{N} $$ Assume $S$ is finite. I would prefer an efficient map. There should be nice applications of this question for perfect hashing, right? Answer: Just pretend that your alphabet $S = \{0, \dots, s\}$ for some $s$ and then a string in $S^*$ is just a natural number written out in base $s+1$. There's the slight wrinkle that $0$, $00$, $000$, etc. are all representations of zero, but that's easily fixed by associating the string $x$ with the number whose base-$(s+1)$ representation is $1x$ (i.e., $x$ with a $1$ stuck on the front).
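As a quick illustration, the construction in the answer can be coded directly (the alphabet and sample strings below are assumptions for the example):

```python
# Injective map from strings over a finite alphabet to the naturals:
# read "1" followed by the string as a number in base |S|, one digit per
# symbol. The leading "1" makes leading zeros distinguishable.
S = "abc"                      # example alphabet, |S| = 3
DIGIT = {c: i for i, c in enumerate(S)}

def encode(x):
    n = 1                      # the prepended "1"
    for c in x:
        n = n * len(S) + DIGIT[c]
    return n

words = ["", "a", "aa", "ab", "ba", "abc"]
codes = [encode(w) for w in words]
print(codes)  # [1, 3, 9, 10, 12, 32] -- all distinct, so the map is injective here
```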
{ "domain": "cs.stackexchange", "id": 11387, "tags": "formal-languages, discrete-mathematics, hash, cryptography" }
Shortest paths disallowing each edge
Question: I'd appreciate any pointers or terms that could get me started in the right direction. We have a directed graph $G=(V,E)$ and lengths $l_{ij}$ for each edge $ij$ that can be assumed positive. There is a special start node $s$ and end node $t$. For each edge $ij$, we'd like to compute the length of the shortest path from $s$ to $t$ that does not use edge $ij$. A simple brute force algorithm is to run a shortest path algorithm for each edge, each time removing a different edge from the original graph. Is there a more efficient algorithm that takes advantage of the fact that there is a lot of repeated computation happening in this brute force algorithm? Thanks in advance. Answer: The problem you mention is called "replacement paths". Here are a few references: Gotthilf and Lewenstein, Improved algorithms for the k simple shortest paths and the replacement paths problems. Inf. Proc. Letters, 109(7):352–355, 2009. This paper gives the fastest to date exact algorithm for the replacement paths problem, running in time $O(mn+n^2\log\log n)$ time in graphs with $n$ nodes and $m$ edges. A. Bernstein. A nearly optimal algorithm for approximating replacement paths and k shortest simple paths in general graphs. In Proc. SODA, pages 742–755, 2010. This paper amazingly gives a quasilinear time approximation scheme for the problem. J. Hershberger, S. Suri, and A. Bhosle. On the difficulty of some shortest path problems. In Proc. STACS, pages 343–354, 2003. This paper shows that any path-comparison algorithm solving the replacement paths problem exactly must take at least $\Omega(m\sqrt{n})$ time. V.Vassilevska W., R. Williams. Subcubic Equivalences between Path, Matrix and Triangle Problems. In Proc. FOCS, pages 645-654, 2010. 
We show that if you obtain an $O(n^{3-\varepsilon})$ time exact algorithm for replacement paths for any constant $\varepsilon>0$, then this can be converted to an $O(n^{3-\varepsilon'})$ time algorithm for all pairs shortest paths for constant $\varepsilon'>0$. Such a truly subcubic algorithm for all pairs shortest paths is a longstanding open problem. O. Weimann, R. Yuster. Replacement Paths via Fast Matrix Multiplication. In Proc. FOCS, pages 655-662, 2010. and V. Vassilevska W. Faster Replacement Paths. In Proc. SODA, pages 1337-1346, 2011. These papers show how to use fast matrix multiplication to find replacement paths in graphs with integer edge weights in the interval $\{-M,\ldots, M\}$. The latter paper gives the best known runtime so far, $\tilde{O}(Mn^\omega)$.
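For concreteness, a hedged Python sketch of the brute-force approach from the question (my illustration, not from the answer), with one easy pruning: removing an edge that is not on some fixed shortest s-t path cannot change the distance, so only edges on that path need a re-run:

```python
import heapq

def dijkstra(adj, s, banned=None):
    """adj: {u: [(v, w), ...]} with positive weights; `banned` is one
    (u, v) edge to skip. Returns (dist, parent) dicts."""
    dist, parent = {s: 0}, {s: None}
    pq = [(0, s)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue
        for v, w in adj.get(u, []):
            if (u, v) == banned:
                continue
            if d + w < dist.get(v, float("inf")):
                dist[v], parent[v] = d + w, u
                heapq.heappush(pq, (d + w, v))
    return dist, parent

def replacement_paths(adj, s, t):
    dist, parent = dijkstra(adj, s)
    # Recover one shortest s-t path; removing an edge off this path
    # leaves the path intact, so the distance cannot change.
    path, v = [], t
    while parent.get(v) is not None:
        path.append((parent[v], v))
        v = parent[v]
    base = dist.get(t, float("inf"))
    out = {}
    for u in adj:
        for v, _ in adj[u]:
            if (u, v) in path:   # only these edges need a fresh Dijkstra run
                out[(u, v)] = dijkstra(adj, s, banned=(u, v))[0].get(t, float("inf"))
            else:
                out[(u, v)] = base
    return out

adj = {"s": [("a", 1), ("b", 4)], "a": [("b", 2), ("t", 6)], "b": [("t", 1)]}
print(replacement_paths(adj, "s", "t"))
# {('s','a'): 5, ('s','b'): 4, ('a','b'): 5, ('a','t'): 4, ('b','t'): 7}
```

This is still the brute force the question describes (one shortest-path run per banned edge), just restricted to the at most n-1 edges that can matter; the references above are what replace it with genuinely faster algorithms.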
{ "domain": "cstheory.stackexchange", "id": 895, "tags": "ds.algorithms, reference-request, graph-algorithms" }
Formula of Entropy
Question: Please, clear my confusion as to why this formula in an exam paper is written this way? Also, is there any way to calculate the difference of entropy? If I want to use the property tables of thermodynamics, which should I use? Answer: Using the definition of the change of entropy in a quasi-static process $\Delta S=\frac{\Delta Q}{T}$ (where $\Delta Q$ is the small amount of heat transferred and $T$ is the temperature of the gas in that process) and the relation $\Delta Q=\Delta U + p \Delta V$ between the small amount of heat transferred in a process ($\Delta Q$), the small change of internal energy of the ideal gas ($\Delta U$), the pressure of the gas in that process ($p$) and the small change in its volume ($\Delta V$), we can write: $$\Delta S=\frac{\Delta Q}{T}=\frac{\Delta U + p \Delta V}{T}=\frac{n_{m}C_{v}\Delta T}{T}+\frac{n_{m}R\Delta V}{V}$$ We can now integrate this equation from state $1$ (entropy $S_1$, temperature $T_1$, volume $V_1$) to state $2$ (entropy $S_2$, temperature $T_2$, volume $V_2$). The integral $\int_{T_1}^{T_2} \frac{1}{T} \,dT$ is equal to $\ln \frac{T_2}{T_1}$ and the integral $\int_{V_1}^{V_2} \frac{1}{V} \,dV$ is equal to $\ln \frac{V_2}{V_1}$ so we get: $$S_2-S_1=n_mC_v\ln \frac{T_2}{T_1}+n_mR\ln \frac{V_2}{V_1}$$ which is the formula that you asked about. The only difference is that in your equation $m$ (which is the number of moles $n_m$ in my notation) is pulled out of the parentheses and the states $S_1$ and $S_2$ are interchanged. The latter affects the sign: there's $-T_0(S_1-S_2)$, i.e. $T_0(S_2-S_1)$, from which you get $+T_0(C_v \ln \frac{T_2}{T_1}+R \ln \frac{V_2}{V_1})$ in the way I explained above. Additional question from the comment - https://ibb.co/GVqkJVb Let's start with the derived formula $S_2-S_1=n_mC_v\ln \frac{T_2}{T_1}+n_mR\ln \frac{V_2}{V_1}$ and work our way to the expression from the photo. 
We'll use the fact that $C_p=C_v+R$, as well as $\frac{p_2}{T_2}=\frac{n_mR}{V_2}$ and $\frac{T_1}{p_1}=\frac{V_1}{n_mR}$. $$S_1-S_2=n_mC_v\ln \frac{T_1}{T_2}+n_mR\ln \frac{V_1}{V_2}=n_m(R \ln(\frac{n_mR}{V_2} \frac{V_1}{n_mR})-C_v \ln \frac{T_2}{T_1})=n_m(R \ln(\frac{p_2}{T_2} \frac{T_1}{p_1})-C_v \ln \frac{T_2}{T_1})=n_m(R \ln \frac{p_2}{p_1}-R \ln \frac{T_2}{T_1}-C_v \ln \frac{T_2}{T_1})=n_m(R \ln \frac{p_2}{p_1}-C_p \ln \frac{T_2}{T_1})$$ We get the equivalent formula from the photo, scaled by $n_m$. It is possible to derive a couple of such equivalent formulae, which is mostly done using simple algebraic manipulations involving the $pV=n_mRT$ formula.
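As a quick numerical sanity check (my illustration with made-up numbers, not part of the original answer), the $(T,V)$ form derived first and the equivalent $(T,p)$ form derived above agree for an ideal gas:

```python
import math

# Illustrative numbers for a monatomic ideal gas (assumed, not from the post)
R = 8.314            # J/(mol K)
Cv = 1.5 * R
Cp = Cv + R          # Cp = Cv + R for an ideal gas
n = 2.0              # mol
T1, V1 = 300.0, 0.010    # K, m^3
T2, V2 = 450.0, 0.015
p1 = n * R * T1 / V1     # pV = nRT
p2 = n * R * T2 / V2

# Entropy change in (T, V) variables and in the equivalent (T, p) variables
dS_TV = n * Cv * math.log(T2 / T1) + n * R * math.log(V2 / V1)
dS_Tp = n * Cp * math.log(T2 / T1) - n * R * math.log(p2 / p1)
print(dS_TV, dS_Tp)   # the two formulas give the same number
```

The agreement is exactly the algebraic identity in the answer: substituting $\ln\frac{V_2}{V_1}=\ln\frac{T_2}{T_1}-\ln\frac{p_2}{p_1}$ (from $pV=n_mRT$) turns one formula into the other.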
{ "domain": "physics.stackexchange", "id": 83051, "tags": "thermodynamics, entropy" }
why is the suffix of alcohols -ol but not -al?
Question: According to the IUPAC nomenclature, we put the suffix “-ol” to alcohols while we put the suffix “-al” to aldehydes. This will be related to some etymology issues. I presume “-ol” and “-al” are derived from the ending of “alcohol” and the beginning of “aldehyde” respectively. I get very confused about this. Both “alcohols” and “aldehydes”, I mean the English words, start with “al”. Then, why can alcohols not have the “al” suffix? Answer: Chemical nomenclature did not emerge systematically. Some names were borrowed from other languages centuries ago, including German, French and Arabic. Alcohol is derived from Arabic, from the time of the Middle Ages. The word al-kali (alkali) is a remnant of those times. There is no way current nomenclature can be changed. From Elsevier's Dictionary of Chemoetymology - The Whies and Whences of Chemical Nomenclature and Terminology, we can see: Alcohol derived from al-kuhul (Arabic: powder) referring to the alchemistic procedure of reducing samples, through several purification steps, to a powder, the supposedly pure essence of the substance in question; thus alcohol is the essence (or spirit) of wine; the same word, spelled kohl, is used for powdered antimony as a cosmetic preparation Aldehyde is rather recent: Aldehyde coined by contraction of alcohol dehydrogenatus (New Latin: aldehyde literally dehydrogenated alcohol) Hope that solves the mystery behind -ol and -al. There is a need to distinguish -ol and -al groups, and most chemists have accepted these names to represent an alcohol and an aldehyde, at least in the English language. There will be certain molecules in which both functional groups are present, and there the suffixes will be determined by IUPAC's priority list.
{ "domain": "chemistry.stackexchange", "id": 17965, "tags": "etymology" }
glossary of biotechnological engineering
Question: Similar to A good book for history of biology/biotechnology for lay people but not for lay people. Can anyone recommend an advanced dictionary of biotechnological engineering concepts in English (contemporary, not historical)? There doesn't seem to be anything adequate at my research institution, nor even in my country. I have been looking for PDFs and e-books since they can be previewed more easily than ISBN #'s, but any suggestions are welcome. Answer: I found this glossary of Biotechnology and Genetic Engineering off Wikipedia; what do you think? Or The Facts On File DICTIONARY of BIOTECHNOLOGY and GENETIC ENGINEERING. What concepts specifically are you hoping to include? I'm hoping someone actually experienced in this field can advise.
{ "domain": "biology.stackexchange", "id": 985, "tags": "biotechnology, book-recommendation" }
Solve simple derivative questions
Question: Built my first script and its function is to solve simple derivative questions. How can I improve this code? This is my first time posting.

#This is Simple Derivative Solver
def derivative():
    print("This program will give you the first derivative.")
    input("Press ENTER to continue...")
    cf=int(input("Enter the coefficient of the term..."))
    exp=int(input("Enter the exponent of the term..."))
    if exp==1 and cf==1:
        print("The answer is", 1)
    elif exp<0 and cf<0:
        print("The answer is:", abs(exp*cf), "x^", (exp-1))
    elif exp>0:
        print("The answer is:", exp*cf, "x^", (exp-1))
    elif exp==0:
        print("The answer is:", 0)
    elif exp<0:
        print("The answer is:", abs(exp*cf), "x^", (exp-1))
    elif cf==0:
        print("The answer is:", 0)
    elif cf<0:
        print("THe answer is:", cf*exp, "x^", (exp-1))

derivative()

Answer: Trust the math. If exp<0 and cf<0, their product will be positive, so abs is redundant. BTW, in the case of elif exp<0, the call to abs leads to an incorrect answer. In any case, \$(ax^n)'\$ always equals \$anx^{n-1}\$. Mathematically speaking there are no special cases. There are special cases when it comes to printing:
\$n = 0\$: you (correctly) print 0
\$n = 1\$: you print x^0, but it is better to omit it altogether
\$n = 2\$: you print x^1, but it is better to print just x
\$n < 0\$: consider printing parentheses around the exponent
Separate concerns. The derivative shall only compute the derivative. Input shall be handled separately. The call to derivative should be conditionalized:

if __name__ == '__main__':
    derivative()

This enables you to import your code in another project.
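One way the review's points could come together (a hypothetical rewrite of mine, not code from the post): one formula with no mathematical special cases, and all special cases confined to printing:

```python
def derivative(cf, exp):
    """d/dx (cf * x**exp) as a (coefficient, exponent) pair:
    the single rule (a x^n)' = a n x^(n-1) covers every case."""
    return cf * exp, exp - 1

def pretty(cf, exp):
    # Special cases are a printing concern only, as the review notes
    if cf == 0:
        return "0"
    if exp == 0:
        return str(cf)
    if exp == 1:
        return f"{cf}x"
    return f"{cf}x^({exp})" if exp < 0 else f"{cf}x^{exp}"

print(pretty(*derivative(3, 2)))    # 6x
print(pretty(*derivative(5, 1)))    # 5
print(pretty(*derivative(2, -3)))   # -6x^(-4)
```

Keeping `derivative` free of input() calls also makes it trivially testable and importable, which is the point of the `if __name__` guard.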
{ "domain": "codereview.stackexchange", "id": 25520, "tags": "python, beginner, symbolic-math" }
Reaction mechanism for 7-hydroxy-4-methylcoumarin synthesis with an iodine catalyst
Question: 7-Hydroxy-4-methylcoumarin can be synthesized from ethyl acetoacetate and resorcinol with iodine as a catalyst. What is the mechanism for this? From what I understand it follows the Pechmann reaction but I'm having trouble understanding how it works in this particular case. Answer: There is a paper about this exact reaction, but in addition to the use of $\ce{I2}$ it mentions the use of microwaves. In the reaction, $\ce{I2}$ acts as a Lewis acid, catalyzing both the transesterification and electrophilic aromatic substitution reactions. It can be seen acting as such in the first step of the mechanism: [scheme image] Next, a proton transfer occurs: Next, the resorcinol derivative can act as a nucleophile, and ethanol is ejected, completing the transesterification reaction: Next, $\ce{I2}$ coordinates to the keto carbonyl, activating the carbon towards electrophilic addition. Because this resorcinol derivative is so activated by the $\ce{-OH}$ and $\ce{-OOCR}$ groups, a weak Lewis acid like $\ce{I2}$ is sufficient to catalyze the reaction: [scheme images]
{ "domain": "chemistry.stackexchange", "id": 5631, "tags": "organic-chemistry, reaction-mechanism" }
Joule-Thomson Expression for dT
Question: In H. Callen's "Thermodynamics and an Introduction to Thermostatistics" in the chapter about the Joule-Thomson process, they say: "If the change in pressure in a throttling process is sufficiently small we can employ the usual differential analysis" $$ dT = \left(\frac{\partial T}{\partial P}\right)\bigg|_{H, N_1, N_2, ...}dP $$ How does one get to this equation for $dT$? I can't find any references to this (while skimming through other chapters) and I'm not entirely sure what this relationship is called. As far as I know, you can write $T(S,P,N)$ as an equation of state using the enthalpy equation, but writing $dT$ gives $$ dT = \frac{\partial T}{\partial S}\bigg|_{P,N}dS + \frac{\partial T}{\partial P}\bigg|_{S,N}dP + \frac{\partial T}{\partial N}\bigg|_{S,P}dN $$ I can understand if $dN = 0$, but is it possible to also have $dS = 0$ for this process? One can show $dH = 0$, and since it appears to be held constant, I figure this is the right track, but this is not in the equation above. Any insight is appreciated. Answer: The equation to start with is $$dH=C_pdT+\left[V-T\left(\frac{\partial V}{\partial T}\right)_P\right]dP$$which is derived directly from $$dH=TdS+VdP$$
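Completing the step the answer points at (my addition, following the same conventions): a throttling process conserves enthalpy, so setting $dH=0$ in the starting equation and solving for $dT$ reproduces the differential quoted from Callen, with an explicit expression for the coefficient (the Joule-Thomson coefficient):

```latex
0 = C_p\,dT + \left[V - T\left(\frac{\partial V}{\partial T}\right)_P\right]dP
\quad\Longrightarrow\quad
\left(\frac{\partial T}{\partial P}\right)_{H,N_1,N_2,\dots}
= \frac{T\left(\frac{\partial V}{\partial T}\right)_P - V}{C_p}
```

This also resolves the $dS$ worry: the natural variables here are $(T,P)$ at fixed $H$, so no assumption $dS=0$ is needed, and indeed the entropy increases in a throttling process.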
{ "domain": "physics.stackexchange", "id": 53387, "tags": "thermodynamics" }
Can nuclear bombs intended for mass destruction also be used to deflect asteroids?
Question: As far as I know, there are two possible uses of nuclear bombs: for mass destruction and for deflecting asteroids. For example, Wikipedia describes how nuclear bombs can be used for deflecting asteroids. Can those two things be achieved with the same bombs? Or must they be designed in different ways? Also, can nuclear bombs which are intended for mass destruction, be used to deflect an asteroid without making large adjustments? Answer: Deflecting asteroids is far from the only proposed peaceful use of nuclear explosions, but few (if any) of them made economic, environmental, or political sense. The answer is yes to your specific question as to whether existing nuclear weapons could be used for deflecting asteroids. Specially designed "nuclear shaped charges" were proposed for Project Orion nuclear propulsion which could have been repurposed for asteroid deflection, but these were never built and do not exist (unless there is a government project that has been magically kept secret for decades :)). Neutron bombs were also proposed as better than regular nukes for "nudging asteroids" without blowing them up, but there has been no drive to develop dedicated neutron bomb asteroid deflectors. The 2007 NASA report on "Near Earth Object (NEO) Mitigation Options Using Exploration Technologies" proposed using existing B83 bombs in a particular sequence. As @Tiger-Guy says, most nuclear asteroid deflection research assumes "bombs go boom" and works on figuring what sizes and sequences of standoff, surface, or subsurface explosions of existing Nuclear Explosive Devices (NED) will be most effective. As noted in the November 2020 AIAA ASCEND conference discussion of “Nuclear Devices for Planetary Defense” (PANEL-17): The lack of need for new NED designs to handle the most probable future NEO threats is an important finding of the work to date on this topic
{ "domain": "engineering.stackexchange", "id": 5021, "tags": "aerospace-engineering, nuclear-engineering" }
How to compute the curl of the electric field, experimentally?
Question: In order to experimentally verify Faraday's law and express the curl of the induced electric field, is there any other way to compute the curl without directly working on the cross product (i.e. working through the cross products requires a lot of attention to difficult measurements)... if there's any "other" way to calculate the curl that relies on some practical measurements... please help me out! Answer: The curl can be defined as the limit of the circulation around a loop $C$ divided by the enclosed area (see wikipedia). Let $\hat{n}$ be a vector of length one (the orientation of the loop, component of the curl in that direction), and $A$ be the area enclosed by the loop. Then $$ \mathrm{curl}\vec{E}\cdot \hat{n} := \lim_{A\to 0}\frac{1}{|A|}\oint_{C} \vec{E}\cdot d\vec{s} \equiv \lim_{A\to 0}\frac{1}{|A|} \mathrm{EMF}_C$$ The integral on the right hand side is by definition the voltage along the loop. To measure the curl experimentally, take a small piece of wire and form it into a loop with as small an area as possible (approximating the limit on the rhs). Bring the ends as close together as you can without them touching. Attach the ends of the loop to a voltmeter and measure the voltage between the terminals (evaluating the integral on the rhs), then divide by the area of the loop. Do this for three different orientations $\hat{n}$ at any point in space you'd like, to measure the components of the curl in those directions. Why three different directions? The curl is a vector. To determine it unambiguously you need its components in three linearly independent directions. For simplicity, choose three orthogonal axes - call them x,y,z - and measure in those directions. Then $\mathrm{curl}\vec{E} = (\mathrm{curl}\vec{E})_x \hat{x} + (\mathrm{curl}\vec{E})_y \hat{y} + (\mathrm{curl}\vec{E})_z \hat{z} $ That's how it works in theory. 
In practice (see Puk's comment below) you will probably have difficulty measuring anything because the effect is too small, unless the magnetic field is of high intensity or frequency. In that case, wind the wire into a coil and divide the measurement by the number of turns.
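A numerical analogue of this measurement (my illustration, not from the answer): integrate a test field around a small square loop and divide by the loop area. For the field $\vec{E}=(-y,x,0)$, whose curl is $(0,0,2)$ everywhere, the circulation-per-area recovers the $z$-component:

```python
# Test field with known curl: E = (-y, x, 0)  =>  (curl E)_z = 2 everywhere
def Ex(x, y):
    return -y

def Ey(x, y):
    return x

def circulation_over_area(x0, y0, a, n=2000):
    """Midpoint-rule line integral of E around a counterclockwise square
    loop of side a centered at (x0, y0) in the z = 0 plane (normal +z),
    divided by the loop area a^2 -- the quantity measured with the
    voltmeter loop above."""
    h = a / n
    u = [-a / 2 + (k + 0.5) * h for k in range(n)]   # midpoints along one edge
    total = 0.0
    total += sum(Ex(x0 + t, y0 - a / 2) for t in u) * h   # bottom edge, +x direction
    total += sum(Ey(x0 + a / 2, y0 + t) for t in u) * h   # right edge, +y direction
    total -= sum(Ex(x0 + t, y0 + a / 2) for t in u) * h   # top edge, -x direction
    total -= sum(Ey(x0 - a / 2, y0 + t) for t in u) * h   # left edge, -y direction
    return total / (a * a)

print(circulation_over_area(0.3, -0.7, a=0.01))   # ≈ 2.0 = (curl E)_z
```

Repeating this with the loop normal along $\hat{x}$ and $\hat{y}$ would give the other two components, exactly as the three-orientation measurement prescribes.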
{ "domain": "physics.stackexchange", "id": 60236, "tags": "experimental-physics, electric-circuits, maxwell-equations, electromagnetic-induction" }
Extent of desert area influence in local climate
Question: With reference to this image...: ...from this recent tweet, I was wondering to what extent Mediterranean regions would be affected by the large desert areas to their south? "Affected", here meaning that there would be higher correlation between temperature measurements between the Sahara and Malta (for example), than Malta and Iceland. I suppose that measurements are correlated but that this correlation reduces with distance to some lower value (but not zero). The question here is, how quickly does this correlation tend to its lower value and is the distance between the "centroid" (?) of the Sahara desert to points in the Mediterranean short enough so that it can be considered an "influence"? Answer: The Mediterranean in a way creates its own climate and is partly protected from the northern Sahara by the dominating wind direction in the area, which is from the west and northwest. In winter the wind blows into the Sahara in the direction of the equator; this creates the mild winters the Mediterranean is known for. In summer more warm and dry air moves from the northern Sahara into the Mediterranean area; this creates the dry summer climate the Mediterranean is known for. So the northern Sahara has its strongest influence on the Mediterranean climate in summer. You can read about it here: https://en.wikipedia.org/wiki/Hadley_cell (if you click on the animation you can see how the precipitation, and thereby also the temperature, changes over the year) and here: https://en.wikipedia.org/wiki/Mediterranean_climate
{ "domain": "earthscience.stackexchange", "id": 2001, "tags": "temperature, measurements, correlation, pressure" }
Bones of Strontium
Question: Elements in the same column of the Periodic Table have similar chemical properties to one another; for example, Group 1 elements such as sodium and potassium react rather vigorously when put in contact with water. This being the case, is it possible to substitute one important element from an object or system, for instance the calcium in our bones, with another element from its respective group, such as strontium, and still have it function in the same/similar way? Would there be any benefits or drawbacks to replacing calcium with strontium in the human body (if it is indeed possible) and how would it affect the chemistry/appearance of our bodies? Answer: Yes, such substitutions are possible and can indeed happen. This being the case, is it possible to substitute one important element from an object or system, for instance the calcium in our bones, with another element from its respective column, such as strontium. There are some strontium based supplements on the market (strontium citrate, ranelate etc.) that are being touted as potential treatments for diseases like osteoporosis and the like; however, evidence supporting their efficacy is tenuous at best. Strontium is chemically similar enough to calcium to compete with calcium uptake. Strontium-90 is a radioisotope of strontium that is described as a bone seeker (an element that tends to accumulate in the bones of humans and other animals when it is introduced into the body). It is typically a product of nuclear fission, and is found in spent nuclear fuel and radioactive wastes. It does have some industrial and medical applications. There is some dispute over its half-life in the human body, primarily because of its complex metabolism. The elimination rate of strontium from the human body shows variance between individuals, based on age and sex. 
As strontium has an affinity for the calcium-sensing receptor of parathyroid cells similar to that of calcium, the increased risk of Chernobyl power plant liquidators suffering from primary hyperparathyroidism could be explained by the binding of strontium-90.
{ "domain": "chemistry.stackexchange", "id": 6507, "tags": "ions, chemical-biology" }
Observers in different planets viewing each other
Question: I'm a newbie to relativity, so if the question is idiotic, please excuse me. My question is, as in the movie Interstellar, suppose one person is sitting on a planet A whose 1 hour equals 6 years on another planet B where another person is sitting. And if they look at each other using a telescope, what will each other's perspective be? I mean, if the person on planet A looks for 1 hour, can he see the other person growing old 6 years? And what will the person on planet B see when he looks at the person on planet A? Thanks. Answer: Suppose one person is sitting on a planet A whose 1 hour equals 6 years on another planet B where another person is sitting. And if they look at each other using a telescope, what will each other's perspective be? The first guy A will see the second guy B living a "fast forward" life, while the second guy B will see the first guy A living a "slow motion" life. I mean, if the person on planet A looks for 1 hour, can he see the other person growing old 6 years? Yes. If person A kept looking for say ten hours he'd see person B grow old and die. And what will the person on planet B see when he looks at the person on planet A? Very little. Person B sees person A hardly moving at all. Note that the situation for gravitational time dilation isn't symmetrical like the twins paradox. Both observers agree that the guy on planet A is living his life slower than the guy on planet B. Much slower.
{ "domain": "physics.stackexchange", "id": 22081, "tags": "general-relativity, spacetime" }
Linear phase FIR filter for impulse responses that don't appear symmetric
Question: I would like to clarify some confusion I have about linear phase FIR filters which do not seem to have symmetric impulse responses. Starting with a simple case, a delay, $ h[n] =\delta(n-n_0) $ does not seem to have a symmetric impulse response. For example, $$h[n] = [0, 0, 0, 1, 0] $$ is not a symmetric impulse response. How is it that the property of symmetry for a linear phase FIR filter holds true, in this case? For another example, say I have a filter defined as $$h[n] = [0,0,0,0,1,0,0,0,0,0,1]$$ I believe this filter is linear phase too. But its impulse response is not symmetric. How does the symmetric property hold true? Does it have to do with zero padding? But even if we don't zeropad the filter, is it still linear phase? If we zeropad it by a large factor, is it also linear phase? For one more example, say I have this filter: $$ h[n] = [1, 2, 3, 4, 3, 2, 1]$$ This is obviously linear phase. When I zeropad it by a factor N, the impulse response is no longer symmetric, but it stays linear phase for all values of N. Why is this the case? Answer: Your confusion is understandable. If you consider the definition of a linear phase FIR filter and the associated symmetry conditions on their impulse responses, then you can arrive at the conclusion that the first two cases $$ h_1[n] = [0,0,0,1,0] $$ and $$ h_2[n] = [0,0,0,0,1,0,0,0,0,0,1] $$ are non-symmetric. However, as you use zeros and ones in those impulse responses, it can be seen that the following two new augmented impulse responses are equivalent to those non-symmetric looking ones: $$ h_3[n] = [0,0,0,1,0,0,0] $$ and $$ h_4[n] = [0,0,0,0,1,0,0,0,0,0,1,0,0,0,0] $$ And they are symmetric and linear phase according to the definition as well. This happens because of the specific way the zeros and ones are used. However, the following impulse response $$h_5[n] = [0,0,1,0,0,0,0,1,0,1,0]$$ is not symmetric, is not linear phase and cannot be made linear phase by augmenting with zeros...
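These claims can be checked numerically (pure-Python sketch of mine, not from the answer): a real FIR filter has symmetric-type linear phase with delay $d$ exactly when $H(\omega)e^{j\omega d}$ is real for all $\omega$. Trailing zeros leave $H(\omega)$, and hence the delay, unchanged:

```python
import math
import cmath

def is_linear_phase(h, d, tol=1e-9, n=512):
    """Check on an n-point frequency grid that H(w) * exp(j*w*d) is real,
    i.e. that h has symmetric-type linear phase with group delay d samples.
    (Antisymmetric filters, where the product is imaginary, are not covered
    by this sketch.)"""
    for k in range(n):
        w = 2 * math.pi * k / n
        H = sum(h[m] * cmath.exp(-1j * w * m) for m in range(len(h)))
        if abs((H * cmath.exp(1j * w * d)).imag) > tol:
            return False
    return True

print(is_linear_phase([0, 0, 0, 1, 0], 3))                  # a pure delay of 3: True
print(is_linear_phase([1, 2, 3, 4, 3, 2, 1, 0, 0, 0], 3))   # zero-padding keeps d = 3: True
print(is_linear_phase([0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 1], 7))  # h2 with its true center d = 7: True
```

The point of the augmented responses $h_3$, $h_4$ is exactly this: the symmetry center (and hence $d$) need not be the midpoint of the coefficient array once trailing zeros are appended.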
{ "domain": "dsp.stackexchange", "id": 11543, "tags": "filters, finite-impulse-response, linear-phase" }
Counting Ads in webpage using XPath and EasyList in Python
Question: I have the following function that retrieves a given webpage and returns the number of adverts that are on the page using a shortened version of EasyList (17,000 rules). Using multiprocessing, this scraped 18,000 pages in just over 2 days (which was fine at the time). However, I now have a dataset that is 10x larger so this runtime isn't particularly ideal. I suspect that it's running quadratically due to this line result = len(document.xpath(rule)) in the for loop. I'm not very familiar with XPath/lxml at all so some advice on how to make this more efficient would be appreciated, or at least some indication whether I can make it run much faster or not.

import lxml.html
import requests
import cssselect
import pandas as pd
from multiprocessing import Pool

def count_ads(url):
    rules_file = pd.read_csv("easylist_general_hide.csv", sep="\t", header=None)
    try:
        html = requests.get(url, timeout=5).text
    except:
        print(f"Page not found or timed out: {url}")
        return
    count = 0
    translator = cssselect.HTMLTranslator()
    for rule in rules_file[0]:
        try:
            rule = translator.css_to_xpath(rule[2:])
            document = lxml.html.document_fromstring(html)
            result = len(document.xpath(rule))
            if result > 0:
                count = count + result
        except:
            pass
    return count

Answer: You are apparently using this library: https://pypi.org/project/cssselect Measured time to process a scraped page is ~ 10 seconds, and we wish to reduce that. There are so many essential details left out of the OP, including profiler observations of an actual run. I can see at least one thing that could be immediately improved. A constant could be hoisted out of a loop.

html = requests.get(url, timeout=5).text
...
for rule in rules_file[0]:
    ...
    rule = translator.css_to_xpath(rule[2:])
    document = lxml.html.document_fromstring(html)
    result = len(document.xpath(rule))

It looks like constant document parsing could be hoisted, similar to how translator has already been hoisted. No need to recompute it 17 K times, once per rule. 
document = lxml.html.document_fromstring(html)
for rule in rules_file[0]:
    ...

Presumably a given worker process will handle multiple URLs. So for N pages we invoke .css_to_xpath() 1.7e4 × N times. Looks like there may be an opportunity for caching here. A naïve approach would just tack on a cache decorator:

@lru_cache(maxsize=17_400)
def get_xpath(...):

But there can be some fiddly requirements, such as all arguments being hashable. If you encounter such trouble, don't give up. There must be some way to avoid boring repeated xpath extraction of same old rule data. Seventeen thousand rules sounds like a lot. I bet some of them trigger often, and some quite seldom, perhaps zero times in your corpus. You have time and resource constraints. Apparently fewer than twenty days are available. Rank order rules by how useful they are, and run just the top thousand of them against your freshly received pages. Publish preliminary results. Decide if you want to go back and try the next thousand rules, or ten thousand rules. Look for patterns. Perhaps the URL's hostname predicts which hundred rules are most likely to apply to the page. There are two parts to this program: I/O web download, and compute. The former has a very small memory footprint, unlike the latter. This has implications for scheduling server resources. Consider breaking out a "fetch" phase that focuses solely on issuing .get()s (with timeout) and then persisting the result to disk. Then a subsequent "compute" phase can analyze the fetched pages, perhaps in much less than an average of ten seconds. Benchmark a short run. Then try it again with pypy. Sometimes that will win.
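Putting the hoisting and caching advice together, a hypothetical sketch (mine, not the reviewer's; it assumes rules are already XPath expressions and that `lxml` is installed). The page is parsed once, and each expression is compiled once ever via `lxml.etree.XPath` behind an `lru_cache`:

```python
from functools import lru_cache

import lxml.html
from lxml import etree

@lru_cache(maxsize=None)
def compiled(xpath_expr):
    # etree.XPath compiles the expression; the cache reuses it across pages.
    # Expressions are plain strings, so hashability is not an issue here.
    return etree.XPath(xpath_expr)

def count_matches(html, xpath_rules):
    document = lxml.html.document_fromstring(html)   # hoisted: parse once per page
    total = 0
    for expr in xpath_rules:
        try:
            total += len(compiled(expr)(document))
        except etree.XPathError:
            pass                                     # skip malformed rules
    return total

page = "<html><body><div class='ad'>a</div><div class='ad'>b</div><p>text</p></body></html>"
print(count_matches(page, ["//div[@class='ad']", "//p"]))   # 3
```

The same `lru_cache` pattern would wrap `translator.css_to_xpath` if you prefer to keep the rules in CSS form; either way the per-rule translation/compilation cost is paid once instead of once per page.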
{ "domain": "codereview.stackexchange", "id": 44558, "tags": "python, performance, html, xpath, lxml" }
What is an algorithm for minimizing the standard deviation of m sums summed from n summands? [with attempt]
Question: I have m bins (sums) and n summands. Each summand goes into a bin. In order to minimize the standard deviation, I have a greedy algorithm that appears to accomplish this. I am not sure of the name, but would like to know more. All m bins must have a sum greater than zero at the end of the algorithm. It seems simple:

sort the summands from highest to lowest.
for each summand in the summands:
    find the first available bin with minimum sum and place the summand in that bin

I haven't proved anything about it, but I've come up with a few test data sets and it appears to work. EDIT --- Here is my attempt at an analysis: The algorithm seeks to minimize the standard deviation of m sums summed from n summands. The mean is always the sum of n summands divided by m, that is given. To minimize the standard deviation, the algorithm makes the greedy choice to minimize the standard deviation, or variance (either). I want to prove that the greedy choice is always the optimal choice. Suppose there is a bin m1, not the minimum sum bin, that is a better choice than minimum sum bin m. Adding to this bin will minimize the standard deviation around u. In other words, placing the next value n into this m will maximally decrease the variance, meaning that we have maximally decreased the sum of (m_i - u)^2. NOTE: m1' = m1 + x, m' = m + x, where x is the next summand. So:

remaining_sum_var + (m1' - u)^2 + (m - u)^2 < remaining_sum_var + (m' - u)^2 + (m1 - u)^2

where m1 is greater than m. Simplify and factor:

(m1' - u)^2 + (m - u)^2 < (m' - u)^2 + (m1 - u)^2
m1'^2 - 2um1' + 2u^2 + m^2 - 2um < m'^2 - 2um' + 2u^2 + m1^2 - 2um1
m1'^2 - 2um1' + m^2 - 2um < m'^2 - 2um' + m1^2 - 2um1

Ignoring linear terms whose difference will be constant (linearity):

m1'^2 + m^2 < m'^2 + m1^2

If we have added summand x to m1 this expands to:

(m1 + x)^2 + m^2 < (m + x)^2 + m1^2
m1^2 + 2m1x + x^2 + m^2 < m^2 + 2mx + x^2 + m1^2
2m1x < 2mx
m1 < m, this is a contradiction. 
So it cannot be the case that there is a better choice than minimum sum bin m. Answer: No, your algorithm does not always produce the optimal solution. When $m=2$, this is (at least as hard as) the Partition problem, which is NP-hard. Wikipedia discusses the greedy algorithm and lists a counterexample which shows that your algorithm does not work.
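The greedy algorithm from the question, with a concrete instance of mine (not from the answer) showing it can be suboptimal even though each individual placement is locally best:

```python
def greedy_bins(summands, m):
    """The algorithm from the question: sort descending, put each summand
    into the first bin with the minimum current sum."""
    bins = [0] * m
    for x in sorted(summands, reverse=True):
        i = min(range(m), key=lambda j: bins[j])   # min() picks the first tie
        bins[i] += x
    return bins

# With m = 2, greedy yields sums (17, 13), but the split {8, 7} / {6, 5, 4}
# gives (15, 15): standard deviation 0, strictly better. The locally optimal
# placement of each summand does not compose into a globally optimal solution,
# consistent with the answer's NP-hardness argument.
print(greedy_bins([8, 7, 6, 5, 4], 2))   # [17, 13]
```

The flaw in the proof attempt is that it only compares two single placements against the same future; it does not account for how an early placement changes which placements are available later.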
{ "domain": "cs.stackexchange", "id": 16918, "tags": "algorithms, greedy-algorithms" }
What is a good strategy for fully quantitative X-Ray Diffraction on coal fly ashes?
Question: I am having some trouble with fully quantitative phase identification of coal fly ashes using x-ray diffraction (XRD). Here follows some basic information about the experiment: -- Instrument: Bruker D8A Software: TOPAS5 Database: COD (But can also use PAN ICSD if need be) Sample: Coal fly ashes from a Western Australian coal power station combusted at around 900$^\circ$C. The goal: How to accurately fit an XRD profile to the pattern and accurately quantify the phase composition (i.e. crystalline and amorphous wt%s) of the fly ashes. For example, I'd like a good fit of the XRD pattern so that I can get an accurate output phase composition of the fly ash along the lines of (as an example): 2% Hematite 4% Magnetite 8% Mullite 16% Quartz 70% Amorphous -- I have used TOPAS before for easier and idealised samples (i.e. crystals and simple amorphous structures using the internal standard method) and have fitted the curves great. However, I'm having trouble conceiving of an analytical strategy to analyse and quantify the crystalline and amorphous material in coal fly ashes; a "real world" sample. Refer to the picture below where I've prepared the sample in a way to minimise preferred orientation and with a 20 wt% corundum internal standard and have done phase ID to find quartz, hematite, magnetite, mullite and corundum (from the standard) along with allowing for 1/X background and zero error, absorption and LP corrections: The goal here is to input the phase composition of the fly ashes (i.e. the quantified crystalline and amorphous wt%s) into a mathematical model (the "mix designer") to synthesise geopolymers, without these wt%s the geopolymers I make don't have good consistency in their measured durability properties (i.e. compressive strength, rheology, viscosity etc.) nor quality assurance and control. Note that procedures for this have been developed in literature (i.e. 
https://www.sciencedirect.com/science/article/pii/S0016236110003789), but unfortunately, the detailed XRD method employed is not usually provided. Does anyone have any experience or learnings here and can provide some insight or knowledge? I have done some literature review on this but a lot of the fly ashes mentioned are very different due to differing parameters like location, coal power station combustion conditions, source coal geology and ash content (etc.). XRD of coal fly ashes is a niche and uncommon area of materials science, and accurate quantification is even more uncommon. Answer: I am the author of that paper. Reach out on LinkedIn if you would like to discuss in detail. The bot deleted my previous message when I sent my LinkedIn address. You can also get more information in my PhD thesis, available on the Curtin University library website. It probably has what I refined and in what order, if I did a sequential process. Like in the paper, you need XRF and quantitative XRD. The fluorite internal standard was used to avoid overlap and minimize micro-absorption mismatch. A very important thing to do is fit the mullite with a Le Bail fit to get the lattice parameters, then find the most appropriate mullite as it is a solid solution. Then you can use the correct mullite in the Rietveld refinement. There is a set of mullites which cover the range I covered in 9 Australian fly ashes, I think it was Ban and Okada (1992). Fitting the minor phases is also important to accurately quantify the amorphous fraction. Then as per the paper you calculate the composition of the amorphous fraction using oxide fractions from XRF minus oxide fractions calculated from XRD. This works ok for many of the Australian fly ashes. In subsequent work we saw that fly ashes with amorphous Si/Al >> 2 did not work out well as geopolymers target Si/Al ~= 2; in retrospect it was probably an issue with the commercial sodium aluminate product that we were using. 
Industrial byproduct sodium aluminate worked fine. If I were given a new fly ash I would follow this procedure and make about 9 paste mixes with variations of the Si/Al and Na/Al ratios to confirm the most appropriate mix. If I were in process control I would use the above process to flag when there is likely going to be an issue to investigate. Ban T, Okada K. Structure refinement of mullite by the Rietveld method and a new method for estimation of chemical composition. J Am Ceram Soc 1992;75:227–30.
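For illustration, here is a minimal Python sketch of the internal-standard (spike) arithmetic behind this kind of amorphous quantification; the function name and the synthetic numbers are my own, not part of the TOPAS workflow or the paper.

```python
def internal_standard_quant(rietveld_wt, std_phase, std_spike_wt):
    """Internal-standard method: Rietveld wt%s are normalized over the
    crystalline phases only, so the known spike level lets us rescale them
    and back out the amorphous fraction on the original, spike-free basis.

    rietveld_wt  -- dict phase -> apparent wt% from Rietveld (sums to 100)
    std_phase    -- name of the internal standard (e.g. "Corundum")
    std_spike_wt -- actual wt% of standard added to the spiked sample
    """
    # the standard is over-estimated in proportion to the amorphous dilution
    scale = std_spike_wt / rietveld_wt[std_phase]
    corrected = {}
    for phase, wt in rietveld_wt.items():
        if phase == std_phase:
            continue
        # correct for dilution, then re-express on the spike-free sample basis
        corrected[phase] = wt * scale / (1.0 - std_spike_wt / 100.0)
    corrected["Amorphous"] = 100.0 - sum(corrected.values())
    return corrected

# Synthetic check: a sample that is really 30% quartz / 70% amorphous,
# spiked with 20 wt% corundum, gives apparent crystalline-only fractions of
# ~45.45% corundum and ~54.55% quartz.
result = internal_standard_quant(
    {"Corundum": 100 * 20 / 44, "Quartz": 100 * 24 / 44},
    "Corundum", 20.0)
```

The same rescaling applies phase by phase (hematite, magnetite, mullite, etc.); only the bookkeeping grows.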
{ "domain": "chemistry.stackexchange", "id": 17388, "tags": "experimental-chemistry, analytical-chemistry, materials, crystallography, x-ray-diffraction" }
Does MCMC belong to the statistical query model?
Question: It is known that a wide range of algorithms fall into the statistical query (SQ) learning model by Michael Kearns. Examples include k-means, logistic regression, naive Bayes (NB), SVM, ICA, PCA, Gaussian discriminant analysis, EM, and backpropagation for neural networks. (See, e.g., http://papers.nips.cc/paper/3150-map-reduce-for-machine-learning-on-multicore.pdf for proofs.) According to Feldman et al. (http://arxiv.org/pdf/1201.1214v5.pdf), it seems that MCMC also belongs to the SQ model. However, I didn't find any concrete proofs for this simple fact in the existing literature. So, does MCMC fall directly into the SQ model, or does it require modifications to fit in? Answer: Let me first clarify what the paper states: "Most algorithmic approaches used in practice and in theory on a wide variety of problems can be implemented using only access to such an [meaning SQ] oracle". So the question is not really whether MCMC "falls into the SQ framework" (the framework does not place any restrictions on computation) but whether the access to data samples that is necessary to run the algorithm can be replaced with access to an SQ oracle without invalidating the guarantees provided by the algorithm. One can also show that an algorithm for which we have no guarantees is an SQ algorithm, but that requires an implementation using an "unbiased" oracle (referred to as 1-STAT in the paper). Lower bounds that can be proved against such an oracle are weaker. Now regarding MCMC with SQs: the main way to apply MCMC to these kinds of problems that we had in mind is described in an example at the bottom of page 4. Please also take a look at the proof of Thm. 9 in a follow-up paper on this type of technique: http://arxiv.org/pdf/1311.4821v5.pdf
{ "domain": "cstheory.stackexchange", "id": 3460, "tags": "machine-learning, query-complexity, pac-learning" }
Should a welded connection between strut and waler be designed to take compression load?
Question: Temporary structures are usually constructed in a less precise manner than permanent structures. This is even more so in a braced deep excavation, where the walls are in constant motion as the excavation progresses downwards. During the assembly of walers and struts, there could be a small gap at the strut-waler connection due to fabrication imperfections. How does one decide whether the fillet weld connecting the strut to the waler needs to be designed against compression due to the possibility of such a small gap? Answer: How does one decide whether the fillet weld connecting the strut to the waler needs to be designed against compression due to the possibility of such a small gap? In our office, we generally design welds to take compression load due to the exact reason you're specifying. Unless you can ascertain that you have full bearing between the two pieces (i.e., both pieces are milled to mate perfectly), I would say you should check compression on the weld. For a code reference, the closest I can find is from the AISC 360-10 specification, section J1.1: The required strength of the connection shall be determined by structural analysis for the specified design loads, consistent with the type of construction specified, or shall be a proportion of the required strength of the connected members when so specified herein.
{ "domain": "engineering.stackexchange", "id": 339, "tags": "structural-engineering, welds" }
If a key in a red-black tree has exactly one child (which isn't null) then it is always red
Question: I have the following claim: Prove or disprove: If a key in a red-black tree has exactly one child (which isn't null) then it is always red. My attempt: Disproof. We will exhibit a counterexample: This tree satisfies the conditions of being a red-black tree, since: The root is black. All leaves are black. There isn't any red key. In each path there is the same number of black keys; in particular, we have three black keys in such paths. However, we have two keys at level 1, each of which has exactly one child, and it is black. $\qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \blacksquare$ Is this counterexample correct? Answer: The disproof is wrong because in your red-black tree, all possible leaves need to be represented (see examples on Wikipedia). That means that the real tree represented should be: It is easy to see that it does not respect the fourth condition of the definition. In the claim, when it is said that: a key […] has exactly one child (which isn't null) it actually means that it has two children, and exactly one of them is null. This is due to the fact that in the representation of a red-black tree, you either represent ALL null leaves or represent none. I will let you try to prove or disprove the claim knowing that, and will edit this answer if necessary.
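To see concretely why counting the null leaves matters, here is a small Python sketch (my own illustration, not part of the original answer) that computes black-heights with NIL leaves counted as black: a node with a single black child fails the equal-black-height condition, while a single red child passes.

```python
# A node is a tuple (color, left, right); None stands for a NIL leaf,
# which counts as black with black-height 1.
def check(node, must_not_be_red=False):
    """Return the black-height of the subtree if it satisfies the
    red-black invariants (no red-red edge, equal black-heights),
    else None."""
    if node is None:
        return 1
    color, left, right = node
    if color == "R" and must_not_be_red:
        return None                      # red node with a red parent
    lh = check(left, color == "R")
    rh = check(right, color == "R")
    if lh is None or rh is None or lh != rh:
        return None                      # violation somewhere below
    return lh + (1 if color == "B" else 0)

# a node with a single *black* child: black-heights 2 vs 1 -> invalid
bad = ("B", ("B", None, None), None)
# a node with a single *red* child: black-heights 1 vs 1 -> valid
good = ("B", ("R", None, None), None)
print(check(bad), check(good))  # None 2
```

This is exactly the claim: the only way a key can have one non-null child is for that child to be red (and a leaf).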
{ "domain": "cs.stackexchange", "id": 18361, "tags": "data-structures, binary-trees, binary-search-trees, red-black-trees" }
Quiet Turntable Bearing for 3D Zoetrope
Question: I'm working on building a 3D Zoetrope like the Disney Zoetrope in this video. The part that has me stumped is the bearing. I bought a lazy susan bearing but it's a bit loud, doesn't have any mount options, and doesn't spin as freely as I'd like. I also see turntable bearings that seem like essentially the same thing but with better mounting options. My worry with those is their size. I have a 4 foot platter and most bearings are quite small, which is why I went with the lazy susan bearing at 24" diameter. Thrust bearings have also come up, but the ones I've seen don't have any mount options. They seem constructed exactly as I need them, with the bearings being sandwiched vertically (top to bottom) instead of horizontally between an outer and inner ring. I've also been reading that certain ball bearings have a radial and/or angular thrust capacity, and that this may provide the rotation I need, though again the mounting options seem really limited for a turntable. I've been googling for days and seem to have a mental block. I don't know how I would stabilize a 4 foot platter if the bearing is small and centered, like the turntable bearings I found. So, what kind of bearing would be best for this? How would I mount it (if it's not super obvious)? I'm thinking of using a pulley to drive the zoetrope, though I'd prefer a direct connection with the motor drive shaft in the middle of the platter. Answer: I would suggest getting an unbraked hub assembly intended for a lightweight trailer. You can get the stub axle, bearings and hub as a set, and it gives you a convenient flange mounting for your main platter. As long as the platter itself is reasonably stiff and well balanced it shouldn't need any additional support. I recently used a similar arrangement for a rotating plinth for a trio of bronze and glass statues with a total rotating mass of around 150 kg and a diameter of 1.0 metres. 
In this case I just cut the stub axle to the required length and welded it to a 6mm thick steel plate which was bolted to a corresponding plate in the framework of the base; this allowed it to be squared with shims. Note that the castors visible are for moving the whole thing and are not related to the rotating mechanism. I was able to stand on the end of one of the rotating arms without any problem. For the drive you could have a pneumatic or rubber wheel directly driving the underside of the platter. This also gives you the opportunity to fine-tune the gear ratio according to where on the radius you put the drive wheel. This is probably the easiest drive solution to implement. You would probably want the motor and drive wheel on a sprung or counter-weighted pivot, a bit like the belt drive on a lathe. Here a motor with an integrated gearbox would make life easier, as you won't need to provide a large reduction ratio. Alternatively you could fix a pulley to the hub itself (note that the axle is fixed and it is the hub which rotates).
{ "domain": "engineering.stackexchange", "id": 921, "tags": "mechanical-engineering, bearings" }
Summing up distinct elements in steps
Question: My current task is to find a score from an array where the highest/lowest scores have been taken away, and if the highest/lowest occur more than once (ONLY if they occur more than once), one of them can be added: E.g. int[] scores = [4, 8, 6, 4, 8, 5] therefore the final addition will be \$\sum{4,8,6, 5} = 23 \$. Another condition of the task is that LINQ cannot be used, as well as any of the System.Array methods (you can see by my previously ask questions that has been a bit of a pain for me, since I solved this with LINQ in less than 5 minutes). I have working code the solves the problem but the task requires multiple methods/functions. I have been trying to restructure the program but with all sorts of issues. using System; using System.Collections.Generic; //using System.Linq; using System.Text; using System.Threading.Tasks; namespace Scoring { class Program { static int highOccurrence = 0; static int lowOccurrence = 0; //static int high; <------ //static int low; <------ static void Main(string[] args) { int[] scores = { 4, 8, 6, 4, 8, 5 }; findScore(scores); ExitProgram(); } static int findOccurrence(int[] scores, int low, int high) { //find the number of times a high/low occurs for (int i = 0; i < scores.Length; i++) { if (low == scores[i]) { lowOccurrence++; //record number of time slow occurs } if (high == scores[i]) { highOccurrence++; //record number of times high occurs } } return highOccurrence; } static int findScore(int[] scores) { //calculates score, needs to be restructured int[] arrofNormal = new int[scores.Length]; //int low = scores[0]; <----This is where the issue is //int high = scores[0]; <----- ^^^^^ int total = 0; for (int i = 0; i < scores.Length; i++) { if (low > scores[i]) { low = scores[i]; } //record lowest value if (high < scores[i]) { high = scores[i]; //record highest value } } for (int x = 0; x < scores.Length; x++) { if (scores[x] != low && scores[x] != high) { arrofNormal[x] = scores[x]; //provides the total of the 
scores (not including the high and the low) } total += arrofNormal[x]; } findOccurrence(scores, low, high); if (highOccurrence > 1) { //if there is more than 1 high (or 1 low) it is added once into the total total += high; if (lowOccurrence > 1) { total += low; } } Console.WriteLine("Sum = " + total); return total; //remove not all code paths return.. error } static void ExitProgram() { Console.Write("\n\nPress any key to exit program: "); Console.ReadKey(); }//end ExitProgram } } I have placed arrows in the code above to show where my issue is. If I try to declare "high" and "low" as global variables, my final answer is always a few numbers off, but if I leave the variables declared as high = scores[0] etc., I will get the right answer. What I ideally want is to have separate methods for each step of the calculation; right now I have a method for finding the number of times a specific value shows up in the array. The next thing I would like to do is find the highest/lowest value in the array; one method would do the final calculation, and the final one would write the results into the console window. The last two parts (finding the high/low and the final calculation) are currently in the findScore method. Answer: My current task is to find a score from an array where the highest/lowest scores have been taken away, and if the highest/lowest occur more than once (ONLY if they occur more than once), one of them can be added So the program flow can be written as: Find the highest item of the array Find the lowest item of the array Find the number of occurrences of the highest item of the array Find the number of occurrences of the lowest item of the array Remove the first highest item found Remove the first lowest item found Sum together the remaining items Export/Print the results So you will have 8 methods without the ExitProgram() method. Review Naming Based on the naming conventions for C#, all method names should be written using the PascalCasing style. 
So findOccurrence should be FindOccurrence and findScore should be FindScore. So based on the list of tasks above the methods should be named FindHighestItem() or GetHighestItem() FindLowestItem() or GetLowestItem(); FindNumberOfHighestOccurance() or GetCountOfOccurance() FindNumberOfLowestOccurance() or GetCountOfOccurance() RemoveHighestItems() or SetArrayitemsByValueToZero() RemoveLowestItems() or SetArrayitemsByValueToZero() GetHighestItemIndex() GetLowestItemIndex() GetNormalizedArray() which returns a new array or SetArrayitemsByValueToZero() which manipulates the items in the array SumItems() Print() Style Opening brackets should be placed on the line below the statement, so in for (int i = 0; i < scores.Length; i++) { the opening bracket should be moved to its own line below the for statement. Program flow By combining some of these tasks you are violating the single responsibility principle. The findScore() method searches for the highest and lowest item in the array, removes the highest and lowest item from the array (basically adds only the items which aren't highest or lowest to the new array), sums the items of this new array, and, as stated above, prints/exports the sum. Refactoring For getting the highest/lowest item in the array we can use the Math.Max() and Math.Min() methods. For finding the highest item we will initialize the var highestValue with Int32.MinValue and for finding the lowest item we will initialize the var lowestValue with Int32.MaxValue. 
Based on the comment (as it is not allowed for this assignment to use Math.Min() or Math.Max()) I have changed the methods private int GetLowestItem(int[] items) { int lowestItem = Int32.MaxValue; foreach (int item in items) { // lowestItem = Math.Min(item, lowestItem); if (item < lowestItem) { lowestItem = item; } } return lowestItem; } private int GetHighestItem(int[] items) { int highestItem = Int32.MinValue; foreach (int item in items) { // highestItem = Math.Max(item, highestItem); if (item > highestItem) { highestItem = item; } } return highestItem; } private int GetCountOfOccurance(int[] items, int comparingValue) { int count = 0; foreach (int item in items) { if (item == comparingValue) { count++; } } return count; } private int GetHighestItemIndex(int[] items, int highestItem) { for (int i = 0; i < items.Length; i++) { if (items[i] == highestItem) { return i; } } return -1; } private int GetLowestItemIndex(int[] items, int lowestItem) { // to be filled by you } Next we should create a new class named BoundaryItem which holds the min and max values of the array. class BoundaryItem { internal int Max { get; set; } internal int Min { get; set; } } Now we add a method to set each item of the array to zero if its value is either the max or the min of the array. 
private void SetArrayitemsByValueToZero(int[] items, BoundaryItem boundaryItem) { for (int i = 0; i < items.Length; i++) { if (items[i] == boundaryItem.Min || items[i] == boundaryItem.Max) { items[i] = 0; } } } Next we will add a 2 overloaded SumItems() methods public int SumItems(int[] items) { BoundaryItem boundaryItem = PreProcessArray(items); return SumItems(items, boundaryItem); } private int SumItems(int[] items, BoundaryItem boundaryItem) { int sum = boundaryItem.Min + boundaryItem.Max; foreach (int item in items) { sum += item; } return sum; } and last we add the missing PreProcessArray() method private BoundaryItem PreProcessArray(int[] items) { BoundaryItem boundaryItem = new BoundaryItem(); boundaryItem.Min = GetLowestItem(items); boundaryItem.Max = GetHighestItem(items); int minOccurance = GetCountOfOccurance(items, boundaryItem.Min); int maxOccurance = GetCountOfOccurance(items, boundaryItem.Max); SetArrayitemsByValueToZero(items, boundaryItem); if (minOccurance < 2) { boundaryItem.Min = 0; } if (maxOccurance < 2) { boundaryItem.Max = 0; } return boundaryItem; } But wait, we can still do better, if the amount of methods needed would not matter. 
Instead of GetLowestItem() and GetHighestItem() we create a method GetBoundaryItem() private BoundaryItem GetBoundaryItem(int[] items) { BoundaryItem boundaryItem = new BoundaryItem(); boundaryItem.Min = Int32.MaxValue; boundaryItem.Max = Int32.MinValue; foreach (int item in items) { if (item < boundaryItem.Min) { boundaryItem.Min = item; } if (item > boundaryItem.Max) { boundaryItem.Max = item; } } return boundaryItem; } and change the PreProcessArray() method private BoundaryItem PreProcessArray(int[] items) { BoundaryItem boundaryItem = GetBoundaryItem(items); int minOccurance = GetCountOfOccurance(items, boundaryItem.Min); int maxOccurance = GetCountOfOccurance(items, boundaryItem.Max); SetArrayitemsByValueToZero(items, boundaryItem); if (minOccurance < 2) { boundaryItem.Min = 0; } if (maxOccurance < 2) { boundaryItem.Max = 0; } return boundaryItem; } The printing I will leave to you. As you see I have made instance methods out of the static methods, so it would be a good idea to create a class named Scoring where you place these methods. Then you would call this like static void Main(string[] args) { Scoring scoring = new Scoring(); int[] scores = { 4, 8, 6, 4, 8, 5 }; int sum = scoring.SumItems(scores); // now print the result ExitProgram(); } See: When to Use Static Classes in C# As I wrote the refactorings I came to the conclusion that we just need to get the index of the highest/lowest items so we can skip those items when composing the new array. After the comment from Malachi I realized that using the index of the highest/lowest item would be an invalid approach if these values are contained more than 2 times in the array. This implementation assumes that for the given array [1, 1, 1, 2, 3, 3, 3] its sum should be 1 + 2 + 3.
{ "domain": "codereview.stackexchange", "id": 9247, "tags": "c#, beginner, scope" }
Recamán's Sequence in Racket
Question: I just started learning Racket as my first Lisp dialect, so after getting used to the syntax I implemented Recamán's sequence. Apart from the style, I'd also like to know if my code is a linear iteration or not. I kind of get the idea but I don't know how to be sure it is. #lang racket (define (recaman-seq size) (define (recaman-iter curr-seq n goal) (if (= n goal) (reverse curr-seq) (cond [(and (= n 0) (empty? curr-seq)) (recaman-iter (list n) (+ n 1) goal)] [else (define a (- (car curr-seq) n)) (define b (+ (car curr-seq) n)) (define is-new (not (member a curr-seq))) (cond [(and (positive? a) is-new) (recaman-iter (list* a curr-seq) (+ n 1) goal)] [else (recaman-iter (list* b curr-seq) (+ n 1) goal)])]))) (recaman-iter '() 0 (+ size 1))) (recaman-seq 10) Answer: This code is not exactly linear, as member is an O(n) operation whose cost grows in proportion to curr-seq, making the whole function about O(n^2). However, without an algorithmic trick the function requires a search of past results, and the best search that I know of on a mutable structure is O(log n), so the best you can hope for here is O(n log n). But this is only if you go off to infinity. This sequence seems reasonably dense, so you could probably create a boolean vector a bit bigger than the sequence, initialized to true, and as you add numbers to the sequence flip the value at that index to false. First, to translate to Scheme... (define (recaman-seq size) (define (recaman-iter curr-seq n goal) (if (= n goal) (reverse curr-seq) (cond ((and (= n 0) (null? curr-seq)) (recaman-iter (list n) (+ n 1) goal)) (else (let* ((a (- (car curr-seq) n)) (b (+ (car curr-seq) n)) (is-new (not (member a curr-seq)))) (cond ((and (positive? a) is-new) (recaman-iter (cons a curr-seq) (+ n 1) goal)) (else (recaman-iter (cons b curr-seq) (+ n 1) goal)))))))) (recaman-iter '() 0 (+ size 1))) (recaman-seq 10) ]=>(0 1 3 6 2 7 13 20 12 21 11) Next, making some modifications: using a let to make a temporary vector, changing the define and call to a named loop, ;lang r5rs (define (recaman-seq size) (let ((rman-vect (make-vector (* 10 size) #t))) ;;hopefully generous enough (let recaman-iter ((curr-seq '()) (n 0)) ;; goal is unnecessary; if and cond are essentially the same, nest them (cond ((> n size) (reverse curr-seq)) ((and (= n 0) (null? curr-seq) (vector-ref rman-vect 0)) (vector-set! rman-vect 0 #f) ;;Mark number as used; this is a side effect, not a returned value (recaman-iter (cons n curr-seq) (+ n 1))) (else (let* ((a (- (car curr-seq) n)) (b (+ (car curr-seq) n))) (cond ((and (> a 0) (vector-ref rman-vect a)) (vector-set! rman-vect a #f) (recaman-iter (cons a curr-seq) (+ n 1))) (else (vector-set! rman-vect b #f) (recaman-iter (cons b curr-seq) (+ n 1)))))))))) Can get the first 500,000 terms that way before I run out of memory.
{ "domain": "codereview.stackexchange", "id": 31575, "tags": "lisp, scheme, racket" }
odom frame in navsat_transform_node in case of quasi mobile platform
Question: Hi, (I read REP105 multiple times, but could not figure out, in the following particular case, what the role of the frames is and how to set them up.) I have a robotic arm that is going to be attached to a mobile platform, but this mobile platform is controlled independently, so 1) I can't influence its position and 2) I basically don't know the position of this mobile platform (no encoder feedback like mobile robots have). I want to use GPS+IMU in order to determine the pose of the base_link frame of my robotic arm. I'm learning about the robot_localization package + navsat_transform_node, but I don't really understand what the odom frame should be in my case. Currently I have only the following frames: world (which is just a dummy/empty link) base_link link_1 link_2, etc. I don't have any /odom topic nor odom frame like mobile robots do. Can you please explain, in this particular case, how I should extend the urdf in order to have the odom frame/topic that is required by the navsat_transform_node? EDIT: The mobile platform rarely moves, e.g., it moves 5 meters east then it stops. I'm not interested in the instantaneous maneuvers/spatial trajectories during the maneuver from point A to point B. I just want to determine the pose of the base_link of the robot arm when the maneuver has finished, i.e., when the mobile platform has stopped/arrived at point B. Thank you in advance. Originally posted by akosodry on ROS Answers with karma: 121 on 2019-06-14 Post score: 0 Original comments Comment by Maarten on 2019-06-20: I can't help you with the urdf, but the odom frame between the map (or world) and the base_link frame gives you both a continuous (but drifting) odom->base_link position (important for small local tasks, e.g. picking an object) and a non-drifting (but also jumping) map->base_link position (for long-distance navigation). 
I struggled with that too when I started using robot_localization, so I wrote a small introduction/tutorial on our company's website which explains these basics. If you understand these basics, all other robot_localization tutorials become clear. Comment by akosodry on 2019-06-20: @Maarten Thank you for the introduction, I'm going to read it after work. I had a feeling that my question was not well described, i.e., not clear, so I made a new question yesterday from a different perspective. Can you have a look at that question? Maybe that explains better what I am asking. Here is the question link. Thank you in advance. Comment by Maarten on 2019-06-20: If I understand it right, you have a mobile base that drives around randomly (well, not randomly, but you don't know anything about it) with a robotic arm that needs to do some operations? Which kind of operations do you need to do: do you need to know where you are (globally), or do the arm and the base work together in a local setting? You added a GPS and an IMU to the robotic arm to estimate the position of the mobile base, but if you want an accurate estimate, this is probably not sufficient: an IMU isn't accurate enough to integrate its acceleration for more than a fraction of a second, leaving you more or less with the GPS accuracy and the orientation measurement of your IMU. If you can't use the robot base's encoder, I would suggest using visual odometry (e.g. using a stereo camera (for prototyping, we have good experience with the ZED) or maybe using Intel's new RealSense T265 tracking camera (no experience with that though)). Comment by Maarten on 2019-06-20: If you want to estimate both the map->odom and the odom->base_link transformations, you need two robot_localization nodes. 
One which uses all continuous sensors, which gives a continuous (but drifting) odom->base_link transform, and one which uses all sensors, which gives the map->odom transformation (so that map->base_link gives you an absolute (but sometimes non-continuous) estimate). Think about the odom frame as a frame in between map and base_link which models the discontinuous jumps in the estimate (e.g. due to GPS updates). This way, the odom->base_link transformation is continuous, which is important for local operations (e.g. if you need to move something 2 cm in a given direction, you don't mind that your position could be drifting over longer periods or distances, but discontinuities (jumps) in the position estimate would make that simple task impossible). Depending on your application, you may or may not need both estimates. Comment by akosodry on 2019-06-21: Hi @Maarten, sorry for the late response. I read your tutorial, but I still find it difficult to adapt it to my particular case. To answer your questions: 1) Yes, you understood it right: a mobile base that drives around randomly with a robotic arm that needs to do some maneuvers. These maneuvers are executed with moveit, and work well in a static situation, i.e., so far it was assumed that there is no mobile base. 2) Yes, the arm and the base work together in a local setting, which means that I just want to know how much the mobile base moved from its initial spatial position. 3) Yes, I added a GPS and an IMU to the robotic arm to estimate the position of the mobile base. I'm using the ublox C94 m8p GPS which can provide accurate position values up to cm accuracy. I would assume that should be enough to get an estimate of the pose of the base_link of the robot, don't you think? GPS for position, IMU for orientation. And for calculating XYZ+fusion there is the navsat_transform_node. 
Comment by akosodry on 2019-06-21: @Maarten "If you can't use the robot base's encoder, I would suggest using visual odometry" How can visual odometry help in determining the pose of the mobile base? I assume RGBD provides some spatial information that can be used, but is it enough? The only continuous sensor I have on the mobile base is the IMU. Yes, it is quite a modest setup, but it should be enough for orientation, I would guess. Since the mobile base rarely moves, i.e., it moves, let's say, 5 meters east and stops for 15 minutes, it's enough for me to determine just the static position of the mobile base. I'm not interested in the instantaneous maneuvers/trajectories and so on; I just want to know, when the mobile base stops, what the pose is relative to the initial pose. So I think that GPS updates should be enough to determine the position in the aforementioned case, and the IMU data provides the orientation values. Comment by Maarten on 2019-06-22: I think you only need some filtering on the GPS position and the magnetometer orientation (I don't think the acceleration and gyroscope outputs of your IMU give that much useful information in your case). I would suggest using a robot_localization node for the map->odom transform (inputs: GPS and IMU; maybe disable the IMU's gyroscope and accelerometer). I've never tried it, but I suppose this should work if you launch a static_transform_publisher node (all zeros, no translation or rotation) for the odom to base_link transform (in general, this transform is published by another robot_localization node which gets only the continuous inputs such as accelerometer, gyroscope and encoder data, but an accelerometer without an encoder is quite useless in practice, so I would try to replace this transform with an all-zero static transform). Comment by akosodry on 2019-06-23: @Maarten thanks for the help, I'm working on it. Could you post your comments in an answer? 
I would accept it, since all your comments were constructive and addressed multiple cases that could help others. Comment by Maarten on 2019-06-23: @akosodry I missed your visual odometry question. Visual odometry can more or less replace an encoder input (or can be used as an extra encoder-like input). You can combine visual odometry with map building and (exact) localization (SLAM), but you can also just track features in the images to estimate camera (and robot, if the camera is mounted on the robot) movement. That's what the Intel Realsense tracking camera does in hardware (and what you can do with any other stereo camera in software). But because you only need static positions, I would start with the IMU+GPS you have. Answer: As per @akosodry 's request, a summary of our comments discussion as an answer: I can't help you with the urdf, but the odom frame between the map (or world) and the base_link frame gives you both a continuous (but probably drifting) odom->base_link position (important for small local tasks, e.g. picking an object you're tracking) and a non-drifting (but probably jumping) map->base_link position (important for long-distance navigation). I struggled with that too when I started using robot_localization, so I wrote a small introduction/tutorial on our company's website which explains these basics. If you understand these basics, most other robot_localization tutorials become clear (at least for me). If you want to estimate both the map->odom and the odom->base_link transformations, you need two robot_localization nodes. One which uses all continuous sensors which gives a continuous (but probably drifting) odom->base_link transform and one which uses all sensors (including the non-continuous ones, e.g. GPS) which gives the map->odom transformation (so that map->base_link gives you an absolute (but sometimes non-continuous) estimate). 
Think about the odom frame as a frame in between map and base_link which models the discontinuous jumps in the estimate (e.g. due to GPS updates). This way, the odom->base_link transformation is continuous, which is important for local operations (e.g. if you need to move something 2 cm in a given direction, you don't mind that your position could be drifting over longer periods or distances, but discontinuities (jumps) in the position estimate would make that simple task impossible). Depending on your application, you may or may not need both estimates. In your case (only a need for different static positions, no need for exact positioning while moving, and no encoder information), I think you only need some filtering on the GPS position and the magnetometer orientation (I don't think the acceleration and gyroscope outputs of your IMU give that much useful information in your case). I would suggest using a robot_localization node for the map->odom transform (inputs: GPS and IMU; maybe disable the IMU's gyroscope and accelerometer). I've never tried it, but I suppose this should work if you launch a static_transform_publisher node (all zeros, no translation or rotation) for the odom to base_link transform (in general, this transform is published by another robot_localization node which gets only the continuous inputs such as accelerometer, gyroscope and encoder data, but an accelerometer without an encoder is quite useless in practice, so I would try to replace this transform with an all-zero static transform). Originally posted by Maarten with karma: 206 on 2019-06-23 This answer was ACCEPTED on the original site Post score: 1
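As a sketch of that all-zero static transform, a roslaunch fragment could look like the following (the node name odom_to_base_link is my own choice; the frame names are those from the discussion, and whether this fits depends on your robot_localization configuration):

```xml
<launch>
  <!-- identity transform so odom and base_link coincide -->
  <!-- tf2_ros argument order: x y z yaw pitch roll parent_frame child_frame -->
  <node pkg="tf2_ros" type="static_transform_publisher"
        name="odom_to_base_link"
        args="0 0 0 0 0 0 odom base_link" />
</launch>
```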
{ "domain": "robotics.stackexchange", "id": 33189, "tags": "ros, ekf, ros-kinetic, navsat-transform-node, ekf-localization-node" }
Intensity of the resultant of two complex waves
Question: Suppose I have two waves: $Y_1= a_1e^{i(\omega t-kx_1)}$ and $Y_2= a_2e^{i(\omega t-kx_2)}$. I know that by superposition $Y= Y_1+Y_2$ and the intensity is $I = |Y|^2$, but how can I solve it? It seems hard for me to find the intensity. Answer: $$Y_1= a_1e^{i(\omega t-kx_1)},\quad Y_2= a_2e^{i(\omega t-kx_2)}$$ $$Y^*_1= a_1e^{-i(\omega t-kx_1)},\quad Y^*_2= a_2e^{-i(\omega t-kx_2)}$$ $$Y=Y_1 + Y_2$$ $$|Y|^2=YY^*=Y_1(Y_1^*+Y_2^*)+Y_2(Y_1^*+Y_2^*)$$ $$Y_1Y_1^*=a_1^2$$ $$Y_2Y_2^*=a_2^2$$ $$Y_1Y_2^*=a_1a_2e^{ik(x_2-x_1)}$$ $$Y_2Y^*_1=a_1a_2e^{-ik(x_2-x_1)}$$ $$YY^*=a_1^2+a_2^2+a_1a_2\left(e^{-ik(x_2-x_1)}+e^{ik(x_2-x_1)}\right)$$ $$|Y|^2=a_1^2+a_2^2+2a_1a_2\cos[k(x_2-x_1)]$$
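The closed form can be checked numerically; a small sketch (the amplitudes, wavenumber, positions and time below are arbitrary illustrative values):

```python
import cmath
import math

def intensity_direct(a1, a2, k, x1, x2, w=1.0, t=0.3):
    """|Y1 + Y2|^2 computed directly from the complex waves."""
    y1 = a1 * cmath.exp(1j * (w * t - k * x1))
    y2 = a2 * cmath.exp(1j * (w * t - k * x2))
    return abs(y1 + y2) ** 2

def intensity_formula(a1, a2, k, x1, x2):
    """The closed form a1^2 + a2^2 + 2 a1 a2 cos[k(x2 - x1)]."""
    return a1**2 + a2**2 + 2 * a1 * a2 * math.cos(k * (x2 - x1))

# The two agree for any amplitudes and positions (and any t, since t cancels)
d = intensity_direct(2.0, 3.0, 1.5, 0.2, 1.1)
f = intensity_formula(2.0, 3.0, 1.5, 0.2, 1.1)
print(abs(d - f) < 1e-9)  # True
```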
{ "domain": "physics.stackexchange", "id": 57317, "tags": "homework-and-exercises, waves, superposition, complex-numbers, intensity" }
Why is the internal energy the expected value of energies of individual particles?
Question: In this Wikipedia page: https://en.wikipedia.org/wiki/Partition_function_(statistical_mechanics) .. the total sum of energy in an ideal gas is given as: $$\langle E \rangle = \sum_s E_s P_s $$ where $s$ runs over all states. But isn't this just the expected value of the energy for a single particle? Shouldn't the sum of energy be just: $$ E = \sum_s E_s $$ Why do we take the expected value? Answer: Reading carefully the Wikipedia page, one finds that the internal energy is the "ensemble average energy, which is the sum of the microstate energies weighted by their probabilities". Therefore, $E_s$ is the energy of the s-th microstate, where a microstate is the microscopic state of $N$ particles. Once the equilibrium ensemble has been fixed, the probability of each microstate is a function of the energy of the microstate and the ensemble average must be $$ \left<E\right>= \sum_s P_s E_s $$
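A minimal sketch of the ensemble average (a two-level toy system, not taken from the Wikipedia page; the energies and inverse temperature below are arbitrary):

```python
import math

def ensemble_average_energy(energies, beta):
    """<E> = sum_s E_s P_s with Boltzmann weights P_s = exp(-beta E_s) / Z."""
    weights = [math.exp(-beta * e) for e in energies]
    z = sum(weights)                      # canonical partition function
    probs = [w / z for w in weights]
    return sum(e * p for e, p in zip(energies, probs))

# Two-level system with E = 0 and E = 1: <E> = 1 / (1 + e^beta)
beta = 2.0
avg = ensemble_average_energy([0.0, 1.0], beta)
print(abs(avg - 1.0 / (1.0 + math.exp(beta))) < 1e-12)  # True
```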
{ "domain": "physics.stackexchange", "id": 55890, "tags": "statistical-mechanics" }
How to specify a robot go always right relative to itself from absolute perspective (north west east south)
Question: I have a robot that has a start and goal position within a maze. Each point in the maze-grid is simply a Position object containing x and y. I need an algorithm that makes the robot move only to its relative right, using only goEast, west, east and south from the perspective of the maze. There is an implementation already (with a UML diagram and method names) that does not support degrees or heading, just the current position, the next position, and directions in absolute terms. I need the logic for the robot to go only to its relative right. Answer: I'm not sure what the difficulty is here. If the robot is facing north, its right is east; if the robot is facing east, its right is south; etc.
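The facing-to-right mapping in the answer can be sketched directly (the function and dictionary names below are hypothetical, not from the existing implementation's UML):

```python
# Turning right cycles through the compass directions N -> E -> S -> W -> N.
RIGHT_OF = {"north": "east", "east": "south", "south": "west", "west": "north"}

MOVES = {  # absolute moves as (dx, dy) on the maze grid, y growing northwards
    "north": (0, 1), "east": (1, 0), "south": (0, -1), "west": (-1, 0),
}

def go_relative_right(position, heading):
    """Return the new (x, y) and heading after the robot moves to its relative right."""
    new_heading = RIGHT_OF[heading]
    dx, dy = MOVES[new_heading]
    x, y = position
    return (x + dx, y + dy), new_heading

pos, heading = go_relative_right((0, 0), "north")
print(pos, heading)  # (1, 0) east
```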
{ "domain": "cs.stackexchange", "id": 11631, "tags": "algorithms, logic, object-oriented, adjacency-matrix" }
Refractive index of barium sulfate
Question: What is the refractive index of barium sulfate, or rather, where do I find it? I only find values for alpha, beta and x-ray radiation like on https://pubchem.ncbi.nlm.nih.gov/compound/barium_sulfate#section=Decomposition I need it for all wavelengths or at least at 455 nm. Thank you in advance! Answer: The index of refraction of $\ce{BaSO4}$ at 589 nm can be found in "Index of Refraction of Inorganic Crystals", in CRC Handbook of Chemistry and Physics, 90th Edition (CD-ROM Version 2010), David R. Lide, ed., CRC Press/Taylor and Francis, Boca Raton, FL. Separate values are given for the indexes of the orthorhombic crystals: $\begin{align} n_x &= 1.6362\\ n_y &= 1.6374\\ n_z &= 1.6480 \end{align}$
{ "domain": "chemistry.stackexchange", "id": 8728, "tags": "materials, optical-properties" }
Adding a camera to a model in Gazebo? (Beginner)
Question: Hello! I am very new to using things like Gazebo and ROS. I was wondering about how one might take a model from Gazebo (like turtlebot or quadrotor) and attach a camera to it, and see the feed? My guess is that I have to go into the code of the model files and add camera code? But I am unfamiliar with the code or how it works. And how would I get the feed? Is there a simpler way to do this? Can someone please guide me through this process? I have Gazebo5, and I am running Ubuntu. Thank you! Originally posted by utagai on ROS Answers with karma: 3 on 2015-06-06 Post score: 0 Answer: This is more a Gazebo question than a ROS question (so I should refer you to the gazebosim.org site), but what you ask is described in one of the Gazebo tutorials, namely the Connect to ROS one. If you've never done any of this (urdf, sdf, ROS, Gazebo, etc), I really recommend doing the ROS or (actually: and) Gazebo tutorials first (or find a book about it), and only then move on to more advanced things. Everything will seem really convoluted and incomprehensible otherwise (at least, that is what many students/people tell me is their experience). Originally posted by gvdhoorn with karma: 86574 on 2015-06-06 This answer was ACCEPTED on the original site Post score: 2 Original comments Comment by utagai on 2015-06-06: Okay never mind, I figured it out. This wasn't nearly as complicated as I thought haha, the .sdf files have a very intuitive format, and I think as a programmer it wasn't too hard to work it out. I will try giving myself a crash course using the tutorials you have provided. Thank you! Comment by ssaniljainn on 2021-02-25: Hey, did u figure this out? I am in a similar situation. The problem is that the iris model that I have does not have the iris.xacro file. Just 2 files - model.sdf and model.config. I am using ROS Kinetic.
{ "domain": "robotics.stackexchange", "id": 21851, "tags": "ros, gazebo, camera, sensor, model" }
How to pick a sampling frequency for Matlab fft
Question: I am recording vibration data using an ADXL335 accelerometer. I have a ~50 ms delay before I save a reading from this sensor, so most probably each second I have ~20 readings. I verified that by checking the difference between two consecutive readings, since I am recording the time of each reading. However, since my PC is not real-time, I guess the timing in Matlab varies, but I found that the average time between two consecutive readings is 0.0517 s, so my sampling rate should be ~19 Hz? Is this the value that should be chosen? Answer: The sensor is analogue and is input to an Arduino for analogue-to-digital conversion. Arduino has a function micros(), which returns the number of microseconds since the Arduino board began running the current program; I suggest using this to accurately save the time of each analogue read on the Arduino. Calculate the difference as shown in the code below and send it through with the sensor reading to Matlab. Then just average the values of the time difference calculated by t2-t1, which will be almost constant, and you will have as accurate as possible the sampling time/sampling frequency used, so you can very accurately do an FFT analysis using this information. Hope this will be helpful. //Define Analogue Ports for Sensor input const int xpin1 = A2; // x-axis of the accelerometer 1 //Define variable t1 for timing unsigned long t1 = micros(); void loop() { int x1 = analogRead(xpin1); //read x from 1 Serial.print(x1); //print x1 value on serial monitor Serial.print("\t"); unsigned long t2 = micros(); Serial.print(t2-t1); Serial.print("\t"); t1 = t2; }
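The averaging step the answer describes (mean of the t2-t1 differences, then the reciprocal for the sampling frequency) can be sketched in Python (the timestamps below are made-up values close to the question's ~51.7 ms spacing):

```python
def estimate_sampling(timestamps):
    """Mean sampling period and frequency from a list of read times (seconds)."""
    dts = [t2 - t1 for t1, t2 in zip(timestamps, timestamps[1:])]
    mean_dt = sum(dts) / len(dts)
    return mean_dt, 1.0 / mean_dt

# Made-up read times, roughly one every ~51.7 ms as in the question
times = [0.000, 0.052, 0.103, 0.156, 0.207]
dt, fs = estimate_sampling(times)
print(round(fs, 1))  # 19.3 -> about 19 Hz, matching the question's estimate
```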
{ "domain": "dsp.stackexchange", "id": 3713, "tags": "matlab, fft, frequency" }
The instructions a Stack Machine has
Question: Trying to figure out which instructions a stack machine has, and wanted to clarify / reassure that these are in fact all of them. push onto stack pop off stack call a system function like add Then you can implement jump, for example, by pushing the address of the instruction in memory, and calling call. You can implement add by pushing twice and popping twice, etc. You can implement everything from just these 3 instructions. Wondering if this is correct, or if there is anything I am missing. Answer: There are dozens and dozens of stack machines out there in the wild, and they all have different instruction sets. So there's no single correct answer. Some people might consider using CALL like that cheating: after all, it relies on things like ADD and CMP and JMP and DUP and such all being implemented in the machine, so why don't you call them instructions? But if you're going that route, you can get rid of POP too, and implement that with CALL. The only reason you can't get rid of PUSH too is you need some way to get opcodes and arguments onto the stack. If you don't want to use CALL like that, the minimum for Turing-completeness would be something like PUSH, ROT (rotate the stack in some controllable way), DUP (duplicate the top of the stack), CMP (compare two values), JMP (some sort of conditional GOTO), and SUB (subtract one number from another). An actually useful machine would have more, things like ADD and AND and such.
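A toy sketch of the three-instruction idea (hypothetical opcodes, not a real machine; CALL dispatches to built-ins that operate on the stack):

```python
def run(program):
    """Tiny stack machine: PUSH x, POP, and CALL name (built-ins work on the stack)."""
    builtins = {
        "add": lambda s: s.append(s.pop() + s.pop()),
        "sub": lambda s: s.append(-(s.pop() - s.pop())),  # a b -> a - b
    }
    stack = []
    for op, *args in program:
        if op == "PUSH":
            stack.append(args[0])
        elif op == "POP":
            stack.pop()
        elif op == "CALL":
            builtins[args[0]](stack)
    return stack

# (2 + 3) - 1
result = run([("PUSH", 2), ("PUSH", 3), ("CALL", "add"), ("PUSH", 1), ("CALL", "sub")])
print(result)  # [4]
```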
{ "domain": "cs.stackexchange", "id": 13455, "tags": "computer-architecture, assembly" }
Liquid conductors and conservation
Question: Would gas or liquid conductors cost less than metal, and are they more efficient conductors? Is there a material that would contain a magnetic field, and would it increase conductivity? If so, this would be good for conducting electricity over large distances. Would ionized matter stay ionized if sealed? Answer: Good conductors are materials providing a lot of efficient electric charge transporters; electrons and ions are two kinds of them. But electrons are light and in a stable phase in metal, while ions in fluids are heavy and slow, and need the fluid to be there, contained, with the partial densities of its components not changing too much, and this for a very long time if it's meant for more than an experiment. In addition, in the case of high power, the metal can accept some heating, while the liquid (and/or its containment) is likely to degrade with heat. A permanently ionized gas needs energy and devices to keep it ionized. A discharge tube requires a huge voltage and has to handle temperature/pressure changes, but it's basically the neon tube. To summarize, electrons are lovely and difficult to beat :-)
{ "domain": "physics.stackexchange", "id": 28964, "tags": "electrons, semiconductor-physics, molecular-dynamics, thermoelectricity" }
Nyquist theorem vs sampling theorem vs shannon sampling theorem?
Question: Is there any difference between these three, or are they just three names for the same theorem? Answer: The sampling operation has its roots in mathematical interpolation theory, which was used to generate function values at specified points from the available set (the samples) of existing values. This kind of work is summarized as Whittaker interpolation; Lagrange interpolation is another related concept. The sampling theorem in electrical engineering, however, has its roots in the communication engineering literature and was pioneered by communications engineer Harry Nyquist, who was among the first to consider digital transmission of analog signals. The Nyquist sampling theorem states the minimum number of uniformly taken samples needed to exactly represent a given bandlimited continuous-time signal, so that it (the signal) can be transmitted using digital means and reconstructed (exactly) at the receiving end. Later, C. E. Shannon also worked on communication theory (especially the digital kind) from a more probabilistic point of view and published his conclusions as information theory in his classic paper The Mathematical Theory of Communication. Shannon's work also included a more in-depth analysis of the sampling operation, and the resulting Shannon sampling theorem is identical to the Nyquist sampling theorem for bandlimited signals. Hence the uniform sampling of bandlimited continuous-time signals is referred to as the Nyquist-Shannon sampling theorem. You may omit the inventors' names and simply call it the sampling theorem, remembering that it's about the uniform sampling of bandlimited signals.
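A small numerical illustration of what the theorem guards against: a sine above the Nyquist rate is indistinguishable from a lower-frequency alias (the frequencies below are arbitrary illustrative values):

```python
import math

fs = 10.0              # sampling rate (Hz)
f_high = 7.0           # above the Nyquist rate fs / 2 = 5 Hz
f_alias = fs - f_high  # 3 Hz: the frequency the samples actually represent

samples_high = [math.sin(2 * math.pi * f_high * k / fs) for k in range(20)]
samples_alias = [-math.sin(2 * math.pi * f_alias * k / fs) for k in range(20)]

# Sample for sample, a 7 Hz sine taken at 10 Hz is indistinguishable
# from a sign-flipped 3 Hz sine: the two signals alias onto each other.
print(all(abs(a - b) < 1e-9 for a, b in zip(samples_high, samples_alias)))  # True
```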
{ "domain": "dsp.stackexchange", "id": 8499, "tags": "sampling, nyquist" }
Oil-water interface simulation using Gromacs
Question: I would like to perform a decane/water(tip4p) interface simulation using GROMACS to analyse the fluctuation of the interface over a long period of time (50 ps to 1 ns). Though I have a rough idea on how to perform it, I am still unsure about how to create an initial configuration which looks like the figure given below (A.R. Buuren, Ph.D. Thesis, University of Groningen, 1995). Also, once I start my production run, I would like to know how to obtain the height fluctuations of the interface as a function of time. Answer: The standard way would be to simulate bulk water and bulk decane, separately, to generate two relaxed configurations for each phase. You will want them both to be at the same pressure and to have the same cell dimensions in the x and y plane, so you should perform NPT but only allow the cells to relax in the z-direction. Then write a script to stick the two relaxed configurations together, side-by-side, into a single cell. You will need to reverse any breaking apart of the individual molecules by the periodic boundary conditions first though. Then relax the merged simulation cell under NPT (this time allowing all three cell vectors to relax). how to obtain the height fluctuations of the interface as a function of time One possible idea: construct a histogram along the z-dimension where, for each decane molecule you add (+1), and for each water molecule you subtract (-1). Then your histogram will cross zero in two places; these define the boundary where the two phases meet.
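The +1/-1 histogram idea at the end of the answer can be sketched like this (the function name and the toy slab coordinates are assumptions for illustration, not GROMACS output):

```python
def interface_positions(decane_z, water_z, z_max, nbins):
    """Histogram along z: +1 per decane molecule, -1 per water molecule.
    Sign changes of the histogram locate the interfaces between the phases."""
    dz = z_max / nbins
    hist = [0] * nbins
    for z in decane_z:
        hist[min(int(z / dz), nbins - 1)] += 1
    for z in water_z:
        hist[min(int(z / dz), nbins - 1)] -= 1
    # bin boundaries where the histogram changes sign
    return [i * dz for i in range(1, nbins) if hist[i - 1] * hist[i] < 0]

# Toy slab: water fills z in [0, 5), decane fills z in [5, 10)
water = [0.5 + 0.1 * i for i in range(40)]   # z from 0.5 to 4.4
decane = [5.5 + 0.1 * i for i in range(40)]  # z from 5.5 to 9.4
print(interface_positions(decane, water, z_max=10.0, nbins=10))  # [5.0]
```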
{ "domain": "physics.stackexchange", "id": 42474, "tags": "computational-physics, simulations, physical-chemistry, surface-tension, molecular-dynamics" }
Does thermal state in Rindler spacetime correspond to Minkowski vacuum?
Question: We know that the Minkowski vacuum corresponds to the thermal state in a Rindler wedge at the Unruh temperature. But does the thermal state in one Rindler wedge at the Unruh temperature uniquely map to the Minkowski vacuum? Or could there be other states in the Minkowski space field theory which also correspond to a thermal state in one of the wedges? In other words, if I am doing field theory in, say, the left Rindler wedge and the state is specified to be thermal at the Unruh temperature, does it mean that the state in the Minkowski space field theory is the vacuum, or could it be something else? One class of states it would map to are the following: $$U_R|0\rangle$$ where $U_R$ denotes a unitary operator in the right Rindler wedge. But it's not clear that they correspond to nice Minkowski states. Are there other states in Minkowski space field theory that, on writing in the Rindler basis and tracing out degrees of freedom from one wedge, would yield the thermal state? Answer: OON's idea can be made more precise to obtain an example of a state which is identical to the Minkowski vacuum in the left Rindler wedge but is different in the right wedge. It is just a matter of a correct choice of the local operator in OON's idea. The idea works only with certain local operators: isometries localized in the right wedge. Take a real smooth function $g$ whose support is included in the (open) right wedge. Consider the unitary operator (though an isometry would be sufficient) $$U(g) := e^{i \phi(g)}\tag{1},$$ where I assumed that the field operator $\phi(g)$ is selfadjoint (it is essentially selfadjoint so actually $\phi(g)$ in the exponent is the closure of it). From the Weyl commutation relations and Stone's theorem, $$U(g) \phi(f) =\phi(f)U(g)\tag{2}$$ if $f$ and $g$ have spatially separated supports, as happens in particular if $f$ has support in the left wedge. Define $$\Omega_g := U_g\Omega$$ where $\Omega$ is the Minkowski vacuum.
It is clear that $\Omega_g$ is still a unit vector and is different from $\Omega$ if $g\neq 0$. Regarding Wightman functions of the new state, (2) implies that $$\langle \Omega_g| \phi(f_1)\cdots \phi(f_n)\Omega_g \rangle= \langle \Omega| U_g^*\phi(f_1)\cdots \phi(f_n)U_g\Omega \rangle = \langle \Omega| \phi(f_1)\cdots \phi(f_n)U_g^*U_g\Omega \rangle = \langle \Omega| \phi(f_1)\cdots \phi(f_n)\Omega \rangle$$ if each $f_k$ has support in the left wedge. There $\Omega$ and $\Omega_g$ are not distinguishable. In particular $\Omega_g$ has the same thermal properties as $\Omega$ with respect to the Killing boost (it satisfies the KMS condition due to the Bisognano-Wichmann theorem). In the right wedge, $\Omega_g$ is a coherent state in view of its definition.
{ "domain": "physics.stackexchange", "id": 53293, "tags": "quantum-field-theory, qft-in-curved-spacetime, unruh-effect" }
High surface area for given volume
Question: What can be an example, similar to the mathematical Koch flake, that could be found in nature... where for a given enclosed volume its surface area is indefinitely large? Answer: Mathematical fractals do not exist in nature. There is however a huge amount of phenomena that behave 'fractally' within a finite range of scales. Take the typical example of the coast of Britain that can be arbitrarily long depending on how finely you are willing to measure it. The link goes to a map where only a portion of the coast is visible. If you use it to measure the length of visible coast you will get a finite number (about $50$ km). However, if you zoom in, more details will become visible and your measure will give you a bigger number. If Britain were a mathematical fractal you could continue this forever and measure an infinite coast. In reality however you can only zoom in up to a certain point. When you reach the size of a beach the definition of 'coast' must be updated to take tides into account. When you zoom into sand grains, you need to know the tide schedule very precisely and I have no idea of what happens on the scale of atoms. Anyway, you should stop at some point. In the same way, if you zoom out, the coast gets more convoluted and the length of the coast grows faster than the size of your window. As before, this only goes on forever for mathematical fractals. In reality you end up seeing all of Britain at once and 'fractality' stops. Very roughly, and using Google Maps, I find that the coast of Britain is a fractal if I look at it on scales ranging from $100$ m to $100$ km. At larger scales, Europe becomes visible and at smaller scales the map is not detailed any more. If you accept the physicist's definition of a fractal: 'a normal fractal but within a finite range of scales', then there are fractals everywhere in nature. Check this out, for example.
Actually, if you look into critical phenomena you can find scale invariant physics over huge range of scales (potentially from the size of a few atoms to the size of your sample). In practice the thing that restricts your scaling range is the distance of your system to the critical point.
{ "domain": "physics.stackexchange", "id": 31156, "tags": "material-science, crystals, fractals" }
Why is the tension force tangential to the string in a string wave?
Question: Many derivations and concepts regarding string waves use this kind of diagram, in which the tension force is tangential to the string curve. I couldn't find the reason for this, or maybe I do not know some preceding concept. Can you please provide the reason? Answer: The tension force is applied to this element of the string by the next element of the string. Since the next element exists along the length of the string and not off to the side of the string, along the length is the only direction in which it can apply a force.
{ "domain": "physics.stackexchange", "id": 77353, "tags": "classical-mechanics, waves, string" }
PHP Bootstrap Autoloader
Question: I wrote a PHP autoloader to make including it "easier" in my projects. Here is the folder structure: www ├───index.php ├───init.php └───Libarys └───bootstrap ├───index.php ├───bootstrap_loader.php └───bootstrap_core_files (files and directories, abbreviated so this doesn't get too long) www/index.php: It's just like the main method in Java, which starts on execution. This integrates the init.php file. <?php include_once("./init.php"); $init = new Initialisation(); $init->loadBS(); ?> www/init.php: It's just like the init method in Java, which initializes and integrates the following things: directory locations (initialized), the BootStrapLoader (integrated from www/Libarys/bootstrap/index.php) <?php class Initialisation { private $direct; function getFileLocation() { $value = ""; $value = $_SERVER["REQUEST_URI"]; return($value); } function getCurrentDirectory() { $dir = ""; $getFileLocation = $this->getFileLocation(); $countetSlashes = substr_count($getFileLocation,"/")-1; if ($getFileLocation !== "") { for ($i = 0; $i < $countetSlashes; $i++) { $dir = $dir."../"; } } $dir = $dir."./Libarys/bootstrap"; return $dir; } function setDirect($dir) { $this->direct = $dir; } function getDirect() { return $this->direct; } function loadBS() { $this->setDirect($this->getCurrentDirectory()); include_once($this->getDirect()."/index.php"); $loadBS = new LoadBootStrap(); $loadBS->loadBootStrap(); } } ?> www/Libarys/bootstrap/index.php: This index.php file just starts the actual loader (www/Libarys/bootstrap/bootstrap_loader.php).
<?php class LoadBootStrap { function loadBootStrap() { $init = new Initialisation(); $init->setDirect($init->getCurrentDirectory()); include_once($init->getDirect()."/bootstrap_loader.php"); $bootstrap = new BootStrap(); $bootstrap->loadFileGeter("JS", $init->getDirect()); $bootstrap->loadFileGeter("CSS", $init->getDirect()); } } ?> www/Libarys/bootstrap/bootstrap_loader.php: The bootstrap_loader.php searches and includes the CSS and JS files from the BootStrap Framework to the www/index.php file. I tested those scripts in more than 4 sub directories with sub directories and they worked well. <?php class BootStrap { function loadFileGeter($param, $direct) { switch ($param) { case "CSS": $this->getFileList($direct."/".strtolower($param), $param); break; case "JS": $this->getFileList($direct."/".strtolower($param), $param); break; } } function getFileList($dir, $param) { $dir = $dir; $files = scandir($dir, 1); sort($files); foreach ($files as &$value) { if (str_ends_with($value,".".strtolower($param))) { switch ($param) { case "CSS": $string = '<link rel="stylesheet" type="text/css" href="'.$dir.'/'.$value.'" />'."\n"; ECHO($string); break; case "JS": $string = '<script async="" src="'.$dir.'/'.$value.'"></script>'."\n"; ECHO($string); break; } } } } } ?> Answer: Spacing could be better Overall the code isn't difficult to read. Idiomatic PHP has spacing conventions, mainly outlined in the PHP Standards Recommendations - e.g. PSR-12: Extended Coding Style. There is one convention for spacing not followed by the code - and that is spacing around binary operators like . - e.g. $dir = $dir."./Libarys/bootstrap"; For readability, it is better to have a space around all binary operators including the concatenation operator: $dir = $dir . "./Libarys/bootstrap"; That way it is easier to see the placement of the operators and operands. Naming can be confusing The class Initialisation has a property $direct. 
In English direct is a verb or adverb but not really a noun, though it seems for the purposes of the code it stores a directory path. Perhaps a better name might be directory or else abbreviated as dir. Some variable declarations become superfluous when overwritten Looking at the first method in the class Initialisation - i.e. getFileLocation() I see: function getFileLocation() { $value = ""; $value = $_SERVER["REQUEST_URI"]; return($value); } The first line assigns an empty string to $value. The second line re-writes that variable with the request URI from the superglobal. Thus the first line becomes useless and it can be removed. Furthermore, $value is never altered or used in-between that last assignment and when it is returned, so it can be eliminated. The method can simply return $_SERVER["REQUEST_URI"]. In a previous review Dharman stated: Don't put parentheses after echo. It's not a function and these parentheses are extremely confusing. Same applies to require The same applies to return - parentheses are not needed for language constructs. As the documentation states: These words have special meaning in PHP. Some of them represent things which look like functions, some look like constants, and so on - but they're not, really: they are language constructs. 1 The List of Keywords also includes the include_once statement and that statement appears in the loadBS() method with parentheses- those can be removed there as well. Use String Concatenation assignment operators In the next method of class Initialisation - i.e. getCurrentDirectory() the variable $dir is assigned to multiple times, often with a string literal appended to it - e.g. $dir = $dir."../"; and $dir = $dir."./Libarys/bootstrap"; Many high-level programming languages, including PHP, offer a string concatenation assignment operator. For PHP it is .=. 
The two lines cited above can be re-written as: $dir .= "../"; and $dir .= "./Libarys/bootstrap"; Loop can be replaced with function call PHP offers the built-in function str_repeat() which can replace the for loop within Initialisation::getCurrentDirectory() by repeating "../" $countetSlashes times. Redundant switch cases can be consolidated The method Bootstrap::loadFileGeter() (which likely has a typo - typically it is written getter) has two case statements inside the switch: class BootStrap { function loadFileGeter($param, $direct) { switch ($param) { case "CSS": $this->getFileList($direct."/".strtolower($param), $param); break; case "JS": $this->getFileList($direct."/".strtolower($param), $param); break; } } In both cases the same method is called with the same arguments. Those can be consolidated: switch ($param) { case "CSS": case "JS": $this->getFileList($direct."/".strtolower($param), $param); break; } Going a step further, the switch statement could be replaced by a conditional block utilizing the PHP function in_array(): if (in_array($param, ['CSS', 'JS'])) { $this->getFileList($direct."/".strtolower($param), $param); } Then there is no need to add the case and break lines. The method getFileList() also has a similar switch statement though the generated HTML tags are different. One could simplify that slightly by using a method to get the tag based on the value of $param. The method would be more granular, allowing for the code to be more in line with the Single Responsibility Principle. Reference operator in loop In that method getFileList() the foreach loop has a by-reference operator before $value which means it will be assigned by reference. foreach ($files as &$value) { However, it doesn't appear that there is a need to assign that variable by reference. Closing tags can be omitted PSR-12 also mentions: The closing ?> tag MUST be omitted from files containing only PHP.
Per the PHP documentation for PHP tags: If a file contains only PHP code, it is preferable to omit the PHP closing tag at the end of the file. This prevents accidental whitespace or new lines being added after the PHP closing tag, which may cause unwanted effects because PHP will start output buffering when there is no intention from the programmer to send any output at that point in the script. Sorting may not be necessary In the method BootStrap::getFileList() there is a call to sort($files). Maybe you prefer to have the files sorted in the HTML output but that doesn't appear to be a requirement. It could be removed, though if it is preferable to have them sorted by name then consider using natsort() for "natural order" sorting - e.g. so that sheet2.css sorts before sheet12.css. Files could be filtered to certain extensions One could use the glob() function to find files with a pattern - e.g. *.css, *.js - then the amount of code to filter files could be decreased.
{ "domain": "codereview.stackexchange", "id": 44202, "tags": "php, html, file-system, bootstrap" }
Why are gauge theories called so?
Question: Why are gauge theories called so? I guessed it was because gauge also means to estimate, so when one is trying to find the gauge theory for such and such interactions one has to estimate what might be the best gauge group for that interaction. Does this make sense? Answer: Because Weyl's original gauge theory (1918-1920), which was also the very first unified field theory of electromagnetism and gravity (Kaluza only published in 1921), had a field of scales/gauges needed to completely localize the Riemannian metric, see When and how was the geometric understanding of gauge theories developed? Weyl's prototype was general relativity, but as it is, Riemannian geometry is not completely local. Lengths of vectors at different points are numbers, and can be compared in the absolute sense. Weyl's idea was that all measurements must be relativized to local scales only. So he replaced the Riemannian metric with a conformal one, and added a field of scales instead of absolute scalars, the gauge field. This gauge field specifies how the scales are transported, but not uniquely: different fields may specify the same transport, in which case they are related by a gauge transformation. Weyl's theory did not work out (Einstein quickly noticed that it was unphysical), but the gauge idea had a bright future. Weyl's principle of gauge invariance stated that the form of physical laws must be invariant under local gauge transformations. Selecting a particular version of a gauge field among many equivalent ones is now called "fixing the gauge", the Lorentz gauge for the vector potential is an example. Thus, the first gauge group was $\mathbb{R}^+$ of positive reals under multiplication. In the late 1920s Weyl went from a field of gauges to a field of phases, replacing $\mathbb{R}^+$ with $U(1)$, but the name stuck. In this new theory Weyl was able to give a gauge theoretic explanation of the conservation of electric charge.
Dirac replaced the $U(1)$ fields with the sections of associated complex line bundles in 1931, and noticed the possibility of magnetic monopoles when the bundles were globally non-trivial. Non-Abelian gauge theories did not appear until Yang and Mills in 1950s. Varadarajan's paper Vector Bundles and Connections in Physics and Mathematics is a good historical survey of early gauge theory, with technical details.
{ "domain": "physics.stackexchange", "id": 100459, "tags": "soft-question, terminology, gauge-theory, history" }
How does metallicity reduce the likelihood of black hole formation?
Question: Large stars collapse and, if they are large enough, form black holes. But the likelihood reduces with metallicity. What mechanism facilitates this? I believe it has something to do with opacity and the Eddington luminosity but haven't been able to find a source that pulls the information together in a conceptual manner. Answer: Metals have many electrons and many energy transitions available, therefore they are more likely to interact with photons (they increase the opacity). As a consequence, a star with higher metallicity will experience stronger winds, driven by the metal lines, that lead to a higher mass loss rate. This plot from Mapelli (2018) shows the mass of a $90 M_\odot$ star as a function of time, for different metallicities. The higher $Z$, the more dramatic the mass loss. This of course influences the mass of the black holes that can be produced. If, at the end of its life, a star has retained only a small fraction of its initial mass, it will form a black hole much smaller than if it had not lost mass. And if the mass loss has been so dramatic that the mass of the compact remnant is below the Tolman-Oppenheimer-Volkoff limit, the result might be a neutron star instead of a black hole. This is the main reason why the likelihood of forming a black hole decreases with metallicity. High metallicity stars can lose enough mass during their lives that they are more likely to form a neutron star. Useful reference: Heger et al. (2003)
{ "domain": "astronomy.stackexchange", "id": 6572, "tags": "black-hole, supernova, metallicity" }
Thermodynamics Otto cycle
Question: I have a conceptual question regarding the Otto cycle. In a piston-cylinder using the Otto cycle, during compression the piston does work on the system (the gas): is the sign convention positive? And if so, during expansion, when the combusting gas does work on the piston to push it down, is the work sign convention negative? Answer: It depends on which version of the first law you are using. For $\Delta U=Q+W$, compression work is positive. For the more common version $\Delta U=Q-W$, compression work is negative. Reverse the signs for expansion work. The key is that compression work increases internal energy while expansion work decreases it. Hope this helps.
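The two sign conventions in the answer can be checked with a few lines of arithmetic. In both forms the physics is the same: compression raises the internal energy, expansion lowers it; only the bookkeeping of W flips.

```python
# The two first-law sign conventions from the answer.

def delta_u_physics(q, w_on_gas):
    """dU = Q + W, where W is work done ON the gas (positive for compression)."""
    return q + w_on_gas

def delta_u_engineering(q, w_by_gas):
    """dU = Q - W, where W is work done BY the gas (positive for expansion)."""
    return q - w_by_gas

# Adiabatic compression: 50 J of work done on the gas.
assert delta_u_physics(0, +50) == +50       # W positive in the Q+W form
assert delta_u_engineering(0, -50) == +50   # same process, W negative in Q-W

# Adiabatic expansion: the gas does 50 J of work on the piston.
assert delta_u_physics(0, -50) == -50
assert delta_u_engineering(0, +50) == -50
```

Either convention gives the same physical answer; the danger is only in mixing them within one calculation.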
{ "domain": "physics.stackexchange", "id": 81052, "tags": "thermodynamics, conventions, heat-engine" }
Musical Fizzbuzz
Question: I wrote some code using ChucK : Strongly-timed, Concurrent, and On-the-fly Music Programming Language and would like a review. I broke down the logic into a number of classes, all of which are shown below. This is pretty new to me, and I would appreciate any and all constructive criticism. What the main script fizzbuzz_run.ck does, besides the regular FizzBuzz modulus operations, is play different chords depending on the FizzBuzz result, and throughout print the results and primary oscillator frequency. Clarification I want to clarify a few things that are specific to ChucK: printing statements to console: <<< "print this" >>>; The ChucK operator => is a bit like your usual assignment, except reverse order. For example 42 => int theMeaningOfLifeAndEverything; is how you would otherwise int theMeaningOfLifeAndEverything = 42;. Time and duration are handled by the ChucK VM. Code on Github I uploaded a video to YouTube showing what it sounds like. (video not embedded, link opens YouTube) OscPitch.ck public class OscPitch { this.freq => float freq; getSemitoneRatio() => float SEMITONE; // Change the pitch by N musical steps // @param float steps : the number of steps to change the frequency, positive or negative // @return float : the changed frequency fun float change(float steps) { if (Math.isnan(steps)) { <<< now, ": ", steps, " is not a number" >>>; return freq; } else { // The semitone ratio has to be applied once for each step change // hence the use of Math.pow which does everything // in one operation, rather than having to use a loop return freq * Math.pow(SEMITONE, steps) => freq; } } // Get the ratio to apply to a frequency to go up or down one semitone fun float getSemitoneRatio() { // The ratio for a musical semitone is // the 12th root of 2, or approximately 1.05946309436 // More info: https://en.wikipedia.org/wiki/Twelfth_root_of_two return Math.pow(2.0, 1.0/12.0); } // Getters and setters fun float getFreq() { return freq; } fun void 
setFreq(float val) { val => freq; } } OscChords.ck // Class used to create chords. public class OscChords { OscPitch oscPitch; this.root => float root; 0.05 => float gain; 0.0 => float mute; SawOsc I, III, V, VII; string currentChord; // initialize oscillators 0 => I.freq; 0 => III.freq; 0 => V.freq; 0 => VII.freq; mute => I.gain; mute => III.gain; mute => V.gain; mute => VII.gain; I => dac; III => dac; V => dac; VII => dac; // Sets all chord frequencies to 0, making them inaudible. fun void noChord() { 0 => I.freq; 0 => III.freq; 0 => V.freq; 0 => VII.freq; "no chord" => currentChord; } // Creates a major chord. // @input float root : the root note of the chord. fun void major(float root) { root => oscPitch.freq => I.freq; oscPitch.change(4) => III.freq; oscPitch.change(3) => V.freq; 0 => VII.freq; "major" => currentChord; } // Creates a minor chord. // @input float root : the root note of the chord. fun void minor(float root) { root => oscPitch.freq => I.freq; oscPitch.change(3) => III.freq; oscPitch.change(4) => V.freq; 0 => VII.freq; "minor" => currentChord; } // Creates a major7 chord. // @input float root : the root note of the chord. fun void major7(float root) { root => oscPitch.freq => I.freq; oscPitch.change(4) => III.freq; oscPitch.change(3) => V.freq; oscPitch.change(4) => VII.freq; "major7" => currentChord; } // Sets the gain on the chord to make it audible. fun void play() { gain => I.gain; gain => III.gain; gain => V.gain; gain => VII.gain; } // Sets the gain on the chord to make it inaudible. fun void stop() { mute => I.gain; mute => III.gain; mute => V.gain; mute => VII.gain; } // @return string : the name of the current chord. 
fun string getCurrentChord() { return currentChord; } } TempoMeter.ck public class TempoMeter { this.bpm => float bpm; this.upper => int upper; this.lower => int lower; 60 => int SECONDS_PER_MINUTE; // default values 1::second => dur beat; 4::beat => dur measure; // Divide a minute into the number of beats per minute (bpm) // and return the duration of 1 beat. Default 60 bpm unless set otherwise. // @param float bpm : the beats per minute of the tempo // @return dur : the duration of a beat fun dur setTempo(float bpm) { if (Math.isnan(bpm)) { <<< now, ": ", bpm, " is not a number" >>>; return beat; } else { (SECONDS_PER_MINUTE / bpm)::second => beat; return beat; } } // Set the meter (or time signature) using traditional Western // music notation. Default 4/4 unless set otherwise. // @param int upper : the number of beats per measure // @param int lower : the duration ratio (or unit) of a beat // @return dur : the duration of a measure fun dur setMeter(int upper, int lower) { if (Math.isnan(upper) || Math.isnan(lower)) { <<< now, ": ", upper, " or ", lower, " is not a number" >>>; return measure; } else { lower / 4 => float beatDurationRatio; beat * beatDurationRatio => beat; upper::beat => measure; return measure; } } // Getters and setters fun float getBpm() { return bpm; } fun void setBpm(float val) { val => bpm; } fun int getUpper() { return upper; } fun void setUpper(int val) { val => upper; } fun int getLower() { return lower; } fun void setLower(int val) { val => lower; } } fizzbuzz_run.ck TempoMeter tempoMeter; tempoMeter.setTempo(120) => dur beat; tempoMeter.setMeter(4,4) => dur measure; 55 => float NOTE_A1; OscPitch oscPitch; NOTE_A1 => oscPitch.freq; 1 => int NoteAOctave; OscChords chord; // oscillator to digital audio controller TriOsc fizzbuzzOsc => dac; oscPitch.freq => chord.root => fizzbuzzOsc.freq; // audio controls 0.1 => fizzbuzzOsc.gain; // variables for looping now + 12::measure => time stop; 1 => int stepCounter; while (now < stop) { 
chord.noChord(); <<< "-----" >>>; if (stepCounter % 15 == 0) { chord.major7(oscPitch.getFreq()); chord.play(); <<< "Result: ", "FizzBuzz", "| chord:", chord.getCurrentChord() >>>; } else if (stepCounter % 5 == 0) { chord.minor(oscPitch.getFreq()); chord.play(); <<< "Result: ", "Fizz", "| chord:", chord.getCurrentChord() >>>; } else if (stepCounter % 3 == 0) { chord.major(oscPitch.getFreq()); chord.play(); <<< "Result: ", "Buzz", "| chord:", chord.getCurrentChord() >>>; } else { <<< "Result: ", stepCounter >>>; } // identify reference A notes and their octave and print them if (Math.floor(oscPitch.getFreq() % NOTE_A1) == 0.0) { <<< "Reference A", NoteAOctave >>>; 1 +=> NoteAOctave; } <<< "Osc Freq: ", oscPitch.getFreq(), "Hz" >>>; beat +=> now; chord.stop(); oscPitch.change(1) => fizzbuzzOsc.freq; 1 +=> stepCounter; } <<< "end program" >>>; And just for reference, though this only pulls the classes together at runtime: fizzbuzz.ck // fizzbuzz.ck // This file defines which files to add to the VM Machine.add("OscUtils/OscPitch.ck") => int OscPitchId; Machine.add("OscUtils/OscChords.ck") => int OscChordsId; Machine.add("TimeUtils/TempoMeter.ck") => int TempoMeterId; if(me.args() > 0) { Machine.add("fizzbuzz_run:" + me.arg(0) + ".ck"); } else { Machine.add("fizzbuzz_run.ck"); } Answer: Really interesting thing you've created! I've never used ChucK, but I'll try to comment on some things. I also have some, but limited music theory knowledge. If language constraints is a reason for doing something I've mentioned below, please excuse my ignorance :) Semitone ratio? I wonder why you chose to create the getSemitoneRatio function. It's only used to initialize the SEMITONE variable. Why not just remove that function and put the body as the initializer expression instead? Additionally, I wouldn't actually call it a ratio. It's a factor. 
// The factor to apply to a frequency to go up or down one semitone // More info: https://en.wikipedia.org/wiki/Twelfth_root_of_two Math.pow(2.0, 1.0/12.0) => float SEMITONE_FACTOR; Dry initialization // initialize oscillators 0 => I.freq; 0 => III.freq; 0 => V.freq; 0 => VII.freq; mute => I.gain; mute => III.gain; mute => V.gain; mute => VII.gain; I => dac; III => dac; V => dac; VII => dac; // Sets all chord frequencies to 0, making them inaudible. fun void noChord() { 0 => I.freq; 0 => III.freq; 0 => V.freq; 0 => VII.freq; "no chord" => currentChord; } Since you already have the noChord function you can easily remove part of this code duplication. Just call noChord instead of the first four lines after the comment. The side-effect of initializing currentChord as well is probably just good. Dry initialization 2 Another approach to this is creating a factory function for SawOsc. Compare the following examples. SawOsc I, III, V, VII; 0 => I.freq; 0 => III.freq; 0 => V.freq; 0 => VII.freq; mute => I.gain; mute => III.gain; mute => V.gain; mute => VII.gain; createSawOsc(0, mute) => SawOsc I; createSawOsc(0, mute) => SawOsc III; createSawOsc(0, mute) => SawOsc V; createSawOsc(0, mute) => SawOsc VII; Gain? In OscChord there are two variables representing different gain levels. 0.05 => float gain; 0.0 => float mute; It's confusing that one is just named gain while the other one is called mute. Perhaps call them muteGain and playGain to show their relationship.
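To make the control flow of fizzbuzz_run.ck easy to discuss independently of ChucK, here is a language-neutral Python mirror of its counter-to-chord mapping and semitone stepping. It reproduces the posted script exactly, including its pairing of %5 with "Fizz" and %3 with "Buzz" (the reverse of the usual FizzBuzz convention), and uses the SEMITONE_FACTOR name suggested above.

```python
# Mirror of fizzbuzz_run.ck's mapping, kept exactly as the posted script
# has it (note %5 -> "Fizz" and %3 -> "Buzz", reversed from the usual
# convention). The frequency climbs one semitone per step.

SEMITONE_FACTOR = 2.0 ** (1.0 / 12.0)  # twelfth root of two

def step_result(counter):
    """Return (printed label, chord name) for one loop iteration."""
    if counter % 15 == 0:
        return ("FizzBuzz", "major7")
    if counter % 5 == 0:
        return ("Fizz", "minor")
    if counter % 3 == 0:
        return ("Buzz", "major")
    return (str(counter), "no chord")

freq = 55.0  # A1, as in the script
for step in range(1, 16):
    label, chord = step_result(step)
    freq *= SEMITONE_FACTOR  # the script's oscPitch.change(1)
```

Since twelve semitone steps multiply the frequency by exactly 2, the script's "Reference A" check fires one octave (12 steps) apart.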
{ "domain": "codereview.stackexchange", "id": 16085, "tags": "fizzbuzz, audio, music, chuck" }
No sedimentation without water?
Question: I just found a video on sedimentary rock formation, and to my surprise, it stated that this only could occur in water! I always thought that any layer of matter, if buried deep underneath more matter, could form a sediment and become new rock. Am I completely wrong, or was the video simply not covering all bases? I did Google it and check Wikipedia, but with my limited knowledge of the field, the results were inconclusive. Answer: Not all sediments are deposited in water, but water is important in the formation of most sedimentary rocks. If we're just thinking about the deposition of the sediment, then we don't necessarily need water. Some counterexamples are: Aeolian sandstones, such as the Lower Permian Rotliegend sandstone of the North Sea. These are deposited by wind, not water. Some types of sedimentary breccia, which are chiefly deposited by gravity, not water. Tuff, which are deposited by gravity and wind, not water. They also undergo substantial compaction and lithification with or without water. But deposited sediment does not a rock make. Once deposited and if buried, most sediment undergoes compaction and diagenesis, eventually lithifying — a fancy word for turning into a rock. Below the water table, the shallow crust is saturated with (mostly) saline water, and processes like dissolution and cementation are necessarily aqueous. So it's fair to say that water is essential in the formation of sedimentary rocks, on Earth anyway. Footnote You were right to be skeptical, by the way; the video is introductory material apparently intended for grade school audience, so you can't treat it like a textbook. And you can't even take a textbook as 'truth', especially when it comes to slippery things like definitions. Sometimes generalizations and simplifications help, sometimes they don't.
{ "domain": "earthscience.stackexchange", "id": 1891, "tags": "sedimentology" }
generating recursive descent parser
Question: I have a grammar, $L \to A \langle A \rangle^*$ $A \to () \mid (L)$ but I am not sure how to generate a parser for $\langle A \rangle$. I know in other cases, for example, if $E \to T \langle^*T\rangle^*$ then I can create a parser like proc E begin T; while symbol='*' do T od However, in my case I don't have any symbol, so how can I check the symbol for the while loop? Or is my grammar incorrect? Answer: From the first rules, it seems clear that the angle brackets are metasyntactic parentheses used to denote Kleene closure over more than a single symbol. This is also suggested by the existence of normal brackets in the grammar. This also suggests that the example rule for $E$ is syntactically not well formed and means nothing at all, since the star cannot apply to a metasyntactic parenthesis (as the first star seems to be doing). The correct rule is probably: $E \to T \langle +T\rangle^*$ and the corresponding parser: proc E begin T; while symbol='+' do T od Generating a parser for $\langle A \rangle$ is just generating a parser for $A$ alone. The angle brackets are just grammatical notation (like the arrow or the vertical bar), and are not supposed to appear in generated strings. They are not necessary here, but could be if you had Kleene star over several symbols, such as $\langle AB \rangle^*$, as in my corrected version of the rule for $E$. With these corrections, you should be able to finish your recursive descent parser.
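The answer's point can be made concrete: for the loop over $\langle A \rangle^*$ no separator symbol is needed, because every $A$ begins with '(', so the parser just repeats while the lookahead is '('. A sketch in Python (class and method names are my own):

```python
# Recursive descent parser for  L -> A <A>*   and   A -> () | (L).
# The Kleene repetition needs no separator: repeat while lookahead is '('.

class Parser:
    def __init__(self, text):
        self.text = text
        self.pos = 0

    def peek(self):
        return self.text[self.pos] if self.pos < len(self.text) else None

    def expect(self, ch):
        if self.peek() != ch:
            raise SyntaxError(f"expected {ch!r} at position {self.pos}")
        self.pos += 1

    def parse_L(self):                 # L -> A <A>*
        self.parse_A()
        while self.peek() == '(':      # lookahead decides whether to repeat
            self.parse_A()

    def parse_A(self):                 # A -> () | (L)
        self.expect('(')
        if self.peek() == '(':         # non-empty: an L follows
            self.parse_L()
        self.expect(')')

def accepts(text):
    """True if text is exactly one L from the grammar."""
    p = Parser(text)
    try:
        p.parse_L()
        return p.pos == len(text)
    except SyntaxError:
        return False
```

For example, accepts("(())") and accepts("()()") succeed, while accepts("())") fails because the trailing ')' cannot start another A.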
{ "domain": "cs.stackexchange", "id": 4809, "tags": "algorithms, context-free, formal-grammars, parsers, syntax" }
Implementation of the Strategy pattern
Question: I am learning the Strategy pattern but hate the idea of passing a new object to the Context each time I need to change the desired algorithm. Instead I think it would be best to create an enum that holds all of the Concrete Strategy objects, and simply update them with a setter on the context. Below is my implementation. Please let me know if this is a good approach for Strategy? Client public class Client { public static void main(String args[]){ IntroductionContext introductionContext = new IntroductionContext(); introductionContext.setStrategy(ConcreteStrategies.HELLO); introductionContext.executeStrategy(); introductionContext.setStrategy(ConcreteStrategies.GOODBYE); introductionContext.executeStrategy(); } } Context public class IntroductionContext { private IInteractionStrategy iInteractionStrategy; public void executeStrategy() { if(iInteractionStrategy == null){ throw new RuntimeException("Context must set its strategy before invoking executeStrategy()"); } this.iInteractionStrategy.interact(); } public void setStrategy(ConcreteStrategies concreteStrategy){ this.iInteractionStrategy = (IInteractionStrategy) concreteStrategy.getConcreteStrategy(); } } Enum to hold all strategy objects public enum ConcreteStrategies { HELLO(new HelloStrategy()), GOODBYE(new GoodByeStrategy()); private Object concreteStrategy; private ConcreteStrategies(Object concreteStrategy){ this.concreteStrategy = concreteStrategy; } public Object getConcreteStrategy(){ return this.concreteStrategy; } } Strategy (Interface) public interface IInteractionStrategy { public void interact(); } Concrete Strategy public class HelloStrategy implements IInteractionStrategy { @Override public void interact() { System.out.println("Hello!"); } } Concrete Strategy public class GoodByeStrategy implements IInteractionStrategy { @Override public void interact() { System.out.println("Goodbye!"); } } Answer: The overall quality of your code is very good. 
It is, at least in my opinion, perfectly fine to have an enum containing the strategy implementations. A few comments and suggestions though: private Object concreteStrategy; in your enum can and should be private IInteractionStrategy concreteStrategy; instead. No need to use Object references at all in your enum when you can use IInteractionStrategy instead. By doing this, you won't have to typecast in your setStrategy method. After implementing the above, private IInteractionStrategy concreteStrategy can and should be marked final. Even though it's fine to store all your strategies in your enum, you shouldn't enforce it. By having the method public void setStrategy(ConcreteStrategies concreteStrategy) you deny any strategies that are not part of your enum. Either change this method, or add a new one, to be public void setStrategy(IInteractionStrategy concreteStrategy) IntroductionContext could include an executeStrategy(IInteractionStrategy) method to execute a specific strategy. Then you don't need to first call setStrategy and then call executeStrategy. (This is only a suggestion; use it if you like.)
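For readers outside Java, here is a compact Python sketch of the reviewed design with the suggested fixes applied: the enum stores properly typed strategy objects (no Object references or typecasts), and the context also accepts strategies that are not in the enum. The Python names mirror the Java ones but are otherwise my own.

```python
# Strategy pattern with an enum of ready-made strategies, plus an escape
# hatch for strategies outside the enum (the review's main suggestion).

from enum import Enum

class HelloStrategy:
    def interact(self):
        return "Hello!"

class GoodByeStrategy:
    def interact(self):
        return "Goodbye!"

class ConcreteStrategies(Enum):
    HELLO = HelloStrategy()
    GOODBYE = GoodByeStrategy()

class IntroductionContext:
    def __init__(self):
        self._strategy = None

    def set_strategy(self, strategy):
        # Accept either an enum member or any object with interact(),
        # so callers are not locked into the enum's strategies.
        if isinstance(strategy, ConcreteStrategies):
            strategy = strategy.value
        self._strategy = strategy

    def execute_strategy(self):
        if self._strategy is None:
            raise RuntimeError("set a strategy before invoking execute_strategy()")
        return self._strategy.interact()
```

A custom strategy object (anything with an interact method) can now be passed to set_strategy directly, which is exactly the flexibility the enum-only setter denied.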
{ "domain": "codereview.stackexchange", "id": 5397, "tags": "java" }
Would injecting water in a turboshaft engine avoid turbine material meltdown?
Question: For the little I know about turbine engines, I know that these run at enormously high temperatures and thus need nickel-based superalloys. From what I know, jet engines make smart use of air flow throughout the parts of the engine (such as the combustion chamber) in order to make an air layer of protection between the metal walls and the flames. ... But you have to keep a balance between air cooling and the energy used to compress the air in order to increase efficiency. Well, in the scenario that one wouldn't use such a superalloy because of its high price, if someone were to inject water into the combustion jet leaving the combustion chamber, directly ahead of the turbine, would the water keep the material from melting? ... Or would it just make unnecessary steam? Answer: Injecting water into the airstream in front of a turbine spinning at speeds in excess of 180000 rpm will destroy the blades in short order. Even propellers in water can easily suffer cavitation, and it has to do with vapor pressure. The droplets of water will eat away the metal edges and surface. And would adding water even be effective, when the incoming air to a jet engine at 30000 feet is cool anyway?
{ "domain": "engineering.stackexchange", "id": 5147, "tags": "thermodynamics, heat-transfer, cooling, turbines" }
When does a planet become a star?
Question: How big can a gas giant become before it's considered a star? How does the theoretical maximum compare to the observed maximum? Answer: You mean like Arthur C. Clarke's 2010, when Jupiter turns into a star? We often turn to Jupiter's mass ($M_j$) when thinking about this problem. It turns out there's a whole class of stars that fuse so faintly that we can only see them well in infrared. Brown dwarfs (which are still called "stars") turned out to be so cool that only new infrared technologies could find them. We now know they are very common, so common that new classes, L and T (cooler than M), had to be made for them. Surprisingly, they turn out to be about the same diameter as Jupiter. Between 0.073 solar masses (78 Jupiter-masses) and 13 Jupiter-masses, brown dwarfs do fuse their natural deuterium (heavy hydrogen, with an extra neutron) to helium. Below 13 Jupiters (0.0124 solar masses), fusion stops altogether. The brighter stars like our sun begin above 0.073 solar masses, where they are hotter and emit more visible radiation. So you need at least 13 Jupiters to get fusion going, and the theoretical limits are still being refined by observations of brown dwarfs. There is a fuzzy line between planets and brown dwarfs. Small brown dwarfs can still be considered stars and not planets even if they are not fusing, because they probably burned off all their deuterium (a form of hydrogen). From Wikipedia: Currently, the International Astronomical Union considers an object with a mass above the limiting mass for thermonuclear fusion of deuterium (currently calculated to be 13 Jupiter masses for objects of solar metallicity) to be a brown dwarf, whereas an object under that mass (and orbiting a star or stellar remnant) is considered a planet. The 13 Jupiter-mass cutoff is a rule of thumb rather than something of precise physical significance.
Larger objects will burn most of their deuterium and smaller ones will burn only a little, and the 13 Jupiter mass value is somewhere in between. The amount of deuterium burnt also depends to some extent on the composition of the object, specifically on the amount of helium and deuterium present and on the fraction of heavier elements, which determines the atmospheric opacity and thus the radiative cooling rate.
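The mass thresholds quoted in the answer condense to a tiny classifier. The cutoffs (13 and roughly 78 Jupiter masses, i.e. 0.0124 and 0.073 solar masses) are the rule-of-thumb deuterium- and hydrogen-burning limits discussed above, not sharp physical boundaries.

```python
# Rule-of-thumb mass classifier from the answer's thresholds.
# Both cutoffs are approximate and composition-dependent, as noted above.

D_BURNING = 13.0    # Jupiter masses: deuterium fusion becomes possible
H_BURNING = 78.0    # Jupiter masses (~0.073 Msun): sustained hydrogen fusion

def classify(mass_jupiters):
    if mass_jupiters < D_BURNING:
        return "planet"
    if mass_jupiters < H_BURNING:
        return "brown dwarf"
    return "star"
```

So Jupiter (1 M_j) is a planet, a 30 M_j object is a brown dwarf, and anything around a solar mass (roughly 1047 M_j) is an ordinary hydrogen-burning star.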
{ "domain": "physics.stackexchange", "id": 40085, "tags": "astronomy, astrophysics, stars" }
Why would I leave a signal oversampled?
Question: I can't think of a better way of asking this question so I will start with an example. Suppose that I have an input signal with a max frequency of 50Hz (sampled at 100Hz). Now the signals of interest lie in the range 0-5Hz, so I can add a low-pass filter with a cut-off of 5Hz, and use the resulting signal for further processing. My understanding is that now I can downsample the filtered signal by a factor of 10 and hence reduce processing load. Am I right? If yes, why is downsampling not ALWAYS performed after filtering, because it seems to me the obvious way to go? And if I am wrong in my assumption, where am I mistaken? Answer: You are correct that if your signal is bandlimited to <5 Hz, then you can perfectly represent it with a 10Hz sampling rate. This is the well-known sampling theorem. But ... there may be practical considerations for why one would not be able and/or inclined to use critically sampled data. One reason is the difficulty of making a signal critically sampled. Any operation you perform to change the rate of the signal is going to have some filter with a non-zero transition bandwidth. In your example, this limits the unaliased frequency content to 5 Hz minus the filter's transition bandwidth. That transition bandwidth can be made very narrow with long impulse response filters, but this has costs both in terms of processing and in transients (ringing) at signal start and end. Another reason is the efficacy of algorithms that work on the resulting signal. If you need to work with a black-box component that can only choose the nearest sample, then you'll be better off feeding it oversampled data. Most (all?) non-linear operations will behave differently with critically sampled vs oversampled data. One example is squaring a signal, a well-known method of BPSK carrier recovery. Without a 2x oversampled condition, the multiplication of the time domain signal with itself causes wraparound garbage aliasing when the frequency domain convolves with itself.
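The question's pipeline (low-pass to 5 Hz, then keep every 10th sample) can be sketched in pure Python. The filter design here (a 101-tap Hamming-windowed sinc) and the test signal are my own illustrative choices, not from the post, and the answer's caveat applies: the filter's transition band eats into the usable 0-5 Hz range.

```python
# Filter-then-decimate sketch: 100 Hz input, 5 Hz low-pass, keep every
# 10th sample for an effective 10 Hz rate. Filter parameters illustrative.

import math

FS = 100.0      # original sampling rate (Hz)
CUTOFF = 5.0    # low-pass cutoff (Hz)
DECIMATE = 10   # 100 Hz -> 10 Hz
TAPS = 101      # FIR length; longer -> narrower transition band

def lowpass_taps(fs, fc, n):
    """Hamming-windowed sinc FIR low-pass, normalized to unity DC gain."""
    m = n - 1
    taps = []
    for k in range(n):
        x = k - m / 2
        ideal = 2 * fc / fs if x == 0 else math.sin(2 * math.pi * fc / fs * x) / (math.pi * x)
        window = 0.54 - 0.46 * math.cos(2 * math.pi * k / m)
        taps.append(ideal * window)
    s = sum(taps)
    return [t / s for t in taps]

def filter_same(x, h):
    """Zero-phase 'same-length' FIR filtering with zero padding at the edges."""
    half = len(h) // 2
    y = []
    for i in range(len(x)):
        acc = 0.0
        for j, hj in enumerate(h):
            k = i + half - j
            if 0 <= k < len(x):
                acc += hj * x[k]
        y.append(acc)
    return y

# Test signal: a 2 Hz component of interest plus 40 Hz interference.
n = 1000
x = [math.sin(2 * math.pi * 2 * i / FS) + math.sin(2 * math.pi * 40 * i / FS)
     for i in range(n)]

y = filter_same(x, lowpass_taps(FS, CUTOFF, TAPS))  # 40 Hz strongly attenuated
decimated = y[::DECIMATE]                           # effective rate: 10 Hz
```

After filtering, the interior of y is essentially the 2 Hz sine alone, so the decimated sequence represents the band of interest at one tenth the sample count, which is exactly the processing-load saving the question describes.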
{ "domain": "dsp.stackexchange", "id": 20, "tags": "filters, decimation, downsampling, sampling" }
installation errors on ubuntu 12.10
Question: Hey, I have been trying to install Gazebo on Ubuntu 12.10 and have been getting the following error: rushabh@ubuntu:~$ sudo apt-get install gazebo Reading package lists... Done Building dependency tree Reading state information... Done E: Unable to locate package gazebo Can anyone help, please? Originally posted by rdd0101 on Gazebo Answers with karma: 1 on 2013-02-04 Post score: 0 Answer: Ubuntu 12.10 binary packages of Gazebo are currently under development. They will be released as soon as possible. Currently we only have binary packages for Ubuntu 12.04. Other distributions will have to compile from source. Originally posted by nkoenig with karma: 7676 on 2013-02-04 This answer was ACCEPTED on the original site Post score: 1
{ "domain": "robotics.stackexchange", "id": 3002, "tags": "gazebo" }
Implementing my own shared-ownership double-indirecting smart-pointer
Question: I am implementing my own double-indirecting shared-ownership smart-pointer because I needed a way to replace the managed object a group of smart-pointers is pointing to with a new object (it's the method reset_all in the code below). I tried to keep this as close as I could to the std::shared_ptr's interface, while avoiding things I don't understand / need like aliasing and conversion from other smart-pointer types. //(it's all in an header file) #pragma once template <typename T> class Manager { template <typename U> friend class SharedPtr; T* managed; unsigned int counter; Manager(T* new_ptr): managed(new_ptr), counter(1) {} ~Manager() = default; Manager<T>* own() { ++counter; return this; } void disown() { if (!(--counter)) { delete managed; delete this; } } }; template <typename T> class SharedPtr { public: SharedPtr() = default; SharedPtr(T* new_ptr) { if (new_ptr) { manager = new Manager<T>(new_ptr); } } SharedPtr(const SharedPtr &rhs): manager(rhs.manager->own()) {} SharedPtr(SharedPtr &&rhs): manager(rhs.manager) { rhs.manager = nullptr; } SharedPtr& operator=(const SharedPtr &rhs) { if (manager) { manager->disown(); } manager = rhs.manager->own(); return *this; } SharedPtr& operator=(SharedPtr &&rhs) { if (manager) { manager->disown(); } manager = rhs.manager; rhs.manager = nullptr; return *this; } void swap(SharedPtr &rhs) { Manager<T>* tmp = manager; manager = rhs.manager; rhs.manager = tmp; } void reset() { if (manager) { manager->disown(); } manager = nullptr; } void reset(T* new_ptr) { if (manager) { manager->disown(); } manager = new Manager<T>(new_ptr); } T* get() { if (manager) { return manager->managed; } return nullptr; } T& operator*() { return *(manager->managed); } T* operator->() { return manager->managed; } unsigned int use_count() { if (manager) { return manager->counter; } return 0; } bool unique() { return use_count() == 1; } operator bool() { return (bool) manager; } void reset_all(T* new_ptr) { if (manager) { delete 
manager->managed; manager->managed = new_ptr; return; } manager = new Manager<T>(new_ptr); } ~SharedPtr() { if (manager) { manager->disown(); } } private: Manager<T>* manager = nullptr; }; Answer: Design Review You have decided that the managed object can potentially be null (when the object being managed is null). Personally I would consider always having a managed object. That will make the rest of your code simpler to write (I like simpler code) as there will never be a null object. Of course the other side of the argument is that allowing a null managed object allows better resource management. I have not done the maths, so I have not come to a conclusion, but that may be worth doing and putting into a comment in the code. Code Review Namespace You should put all your code in its own namespace. This also helps in making your include guards unique. Prefer include guards This pragma is not supported by all compilers. #pragma once So prefer to use a system that works without change everywhere. Manager The class Manager is not used outside of class SharedPtr so personally I would make it a private member class of SharedPtr. Though you don't do any copying, it is obvious that the Manager is non-copyable. So it would be a good idea to make it explicitly non-copyable to make sure you don't do so accidentally. Doing delete this; is very dangerous. delete this; I would rather change the interface so that the caller of disown() does the actual delete of the manager object. bool disown() { return ((--counter) == 0); } ~Manager() { delete managed; } Single Argument Constructor When you have single argument constructors you have to be very careful. The compiler will eagerly convert one type to another using a single argument constructor (and this can be an issue when you least expect it). Examine this: actionOne(SharedPtr<int> const& val){/*Do Stuff*/} int main() { int b; actionOne(&b); // Caboom. } Here the compiler sees that it has int* as a parameter.
But the only version of the function it has requires SharedPtr<int> const, but that is easily achieved by constructing the SharedPtr<int> directly in place, and it will do so. In this case this is very dangerous as the pointer was not dynamically allocated but is going to be deleted. So make your single argument constructors explicit (especially if they are taking ownership). explicit SharedPtr(T* new_ptr); Explicit construction with nullptr With the introduction of nullptr we have an object that can be used for the null value. But I see no constructor that covers an explicit construction with a nullptr. explicit SharedPtr(nullptr_t); Move Semantics. It is a good idea to mark your move constructor and move assignment operator as noexcept. They usually cannot throw anyway (as they are not creating resources), and noexcept also allows some good optimizations with standard library containers. SharedPtr(SharedPtr &&rhs) noexcept; SharedPtr& operator=(SharedPtr &&rhs) noexcept; void swap(SharedPtr &rhs) noexcept; Why re-write std::swap? void swap(SharedPtr &rhs) { Manager<T>* tmp = manager; manager = rhs.manager; rhs.manager = tmp; } void swap(SharedPtr &rhs) noexcept {std::swap(manager, rhs.manager);} What about the null object? void reset(T* new_ptr) { if (manager) { manager->disown(); } manager = new Manager<T>(new_ptr); } In most situations a nullptr results in a null manager object. But this reset creates a manager object for the null pointer. I would try and be consistent. Const correctness If a function does not mutate the state of the object, then it should be marked as const. unsigned int use_count() const; bool unique() const; Explicit bool cast The current cast to bool is a bit dangerous, as the compiler will use it when trying to get types to match for some operations. Consider: SharedPtr<int> val1(new int(4)); SharedPtr<int> val2(new int(8)); if (val1 == val2) { std::cout << "Val1 and Val2 match\n"; // Will print out.
} This is because the compiler sees that it can do the comparison by converting the two values to bool and then doing the comparison. You can fix this by marking the function explicit. explicit operator bool() const; This will only allow conversion to bool when an object is used explicitly in a boolean context or explicitly cast to a bool. if (val1) // This works as expected as the expression for an if // statement requires a boolean and thus it is considered // a boolean context and will allow the bool cast operator // to fire even if it is explicit. { std::cout << "Val1 is null\n"; } Don't like this. You are mixing business and resource logic. void reset_all(T* new_ptr) { if (manager) { delete manager->managed; manager->managed = new_ptr; return; } manager = new Manager<T>(new_ptr); } If the user wanted to set the value being managed to another value, you can just use the managed object's assignment operator. SharedPtr<int> x(new int(6)); x.reset_all(new int(5)); // or *x = 5; Plug for me. I wrote up a detailed description of all the issues with writing a smart pointer here: unique ptr shared ptr constructor
{ "domain": "codereview.stackexchange", "id": 23038, "tags": "c++, c++11, reinventing-the-wheel, pointers" }
Build RViz for ROS2 fails, can't find resource_retriever
Question: I've installed ROS2 Eloquent from Debian packages. I now wanted to build RViz in a separate workspace and cloned the eloquent branch from https://github.com/ros2/rviz.git and ran colcon build --merge-install. It fails when building rviz_rendering saying CMake Error at /opt/ros/eloquent/share/ament_cmake_gmock/cmake/ament_add_gmock.cmake:72 (add_executable): Target "wrench_visual_test_target" links to target "resource_retriever::resource_retriever" but the target was not found. Perhaps a find_package() call is missing for an IMPORTED target, or an ALIAS target is missing? Call Stack (most recent call first): /opt/ros/eloquent/share/ament_cmake_gmock/cmake/ament_add_gmock.cmake:55 (_ament_add_gmock) CMakeLists.txt:260 (ament_add_gmock) CMake Error at /opt/ros/eloquent/share/ament_cmake_gmock/cmake/ament_add_gmock.cmake:72 (add_executable): Target "line_test_target" links to target "resource_retriever::resource_retriever" but the target was not found. Perhaps a find_package() call is missing for an IMPORTED target, or an ALIAS target is missing? Call Stack (most recent call first): /opt/ros/eloquent/share/ament_cmake_gmock/cmake/ament_add_gmock.cmake:55 (_ament_add_gmock) CMakeLists.txt:249 (ament_add_gmock) Not sure why it can't find resource_retriever, when I check /opt/ros/eloquent/share/ a resource_retriever directory exists. My ament_cmake_gmock version is $ apt list ros-eloquent-ament-cmake-gmock Listing... Done ros-eloquent-ament-cmake-gmock/bionic,now 0.8.2-1bionic.20200605.223611 amd64 [installed,automatic] Does someone know how to fix this? Originally posted by eloquent-fox on ROS Answers with karma: 25 on 2020-09-04 Post score: 0 Answer: Answering my own question: The gmock library was not installed on my computer. After making sure that libgmock.a is in /usr/lib, it worked. Originally posted by eloquent-fox with karma: 25 on 2020-09-15 This answer was ACCEPTED on the original site Post score: 0
{ "domain": "robotics.stackexchange", "id": 35498, "tags": "ros2, rviz" }
Proof of A superposition of stationary states is not a stationary state
Question: This is the proof: (Is the superposition of stationary states a stationary state? If not, then why not?) A superposition of stationary states is not a stationary state. Suppose we have two kets, $| E_1 \rangle$ and $|E_2 \rangle$, which solve the TISE like so: $$ \hat{H} | E_i \rangle = E_i | E_i \rangle $$ Here, the left hand side is the Hamiltonian operator acting on the state, and the right hand side just shows that the stationary state picks up an eigenvalue. This is the definition of a solution of the TISE, although you may be more familiar with the differential operator version which is equivalent to left-multiplying by a position eigenstate $\langle x |$. Now, suppose we create a new superposition state, $| \psi \rangle$, defined as: $$ | \psi \rangle = \frac{1}{\sqrt{2}} \left( | E_1 \rangle + | E_2 \rangle \right) $$ If this is a solution to the TISE (that is, if it is a stationary state/eigenstate of the Hamiltonian) then it should follow that $\hat{H} | \psi \rangle = c | \psi \rangle$, where $c$ is a constant. But we can use linearity to write: $$ \hat{H} | \psi \rangle = \frac{1}{\sqrt{2}} \left( \hat{H} | E_1 \rangle + \hat{H} | E_2 \rangle \right) \\ = \frac{1}{\sqrt{2}} \left( E_1 | E_1 \rangle + E_2 | E_2 \rangle \right) $$ This is not a constant $c$ multiplied by the vector $| \psi \rangle$ unless $E_1 = E_2$. So we can see that two stationary states cannot be combined into another stationary state unless they share the same eigenvalue. I am struggling to understand why we need $E_1 = E_2$ for the right-hand side to be a constant $c$ times $| \psi \rangle$ in the final line. Can anyone expand the final line in more detail so that I can understand it better? Answer: Here's an extremely drawn out proof of this simple fact: Let's assume that $$E_1 | E_1\rangle + E_2 | E_2\rangle= c \left(| E_1\rangle + | E_2\rangle\right)\tag{1}\label{1}$$ (I've absorbed the $\sqrt{2}$ factor in the definition of $c$.) 
Now, bring everything to one side and you'll see that $$(E_1-c) | E_1\rangle + (E_2 - c) | E_2\rangle= 0.$$ However, since $|E_1\rangle$ and $|E_2\rangle$ are linearly independent (since they are distinct stationary states), the only way that a linear combination of them is zero is if both the coefficients that multiply them are individually zero. In other words, the above equation means that: $$E_1 = c,\\ E_2 = c,$$ i.e. $E_1 = E_2.$ That is, this is the only way for Equation (\ref{1}) to hold.
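A quick numerical illustration of the same point (not part of the original proof; the eigenvalues used here are arbitrary illustrative numbers): apply $H = \mathrm{diag}(E_1, E_2)$ to the equal superposition and test whether the result is proportional to the input vector.

```python
# Sanity check of the argument above: H = diag(E1, E2) applied to the
# superposition (|E1> + |E2>)/sqrt(2) is proportional to the original
# vector only when E1 = E2.
import math

def is_eigenvector(E1, E2, tol=1e-12):
    """Return True if (|E1> + |E2>)/sqrt(2) is an eigenvector of diag(E1, E2)."""
    psi = [1 / math.sqrt(2), 1 / math.sqrt(2)]   # the superposition state
    h_psi = [E1 * psi[0], E2 * psi[1]]           # H|psi>
    # Proportionality check: h_psi must equal c * psi for a single constant c.
    c0 = h_psi[0] / psi[0]
    c1 = h_psi[1] / psi[1]
    return abs(c0 - c1) < tol

print(is_eigenvector(1.0, 2.0))  # False: distinct eigenvalues
print(is_eigenvector(3.0, 3.0))  # True: degenerate case E1 = E2
```

The proportionality constants recovered from each component are exactly $E_1$ and $E_2$, which is the answer's point: they can only agree when the eigenvalues coincide.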
{ "domain": "physics.stackexchange", "id": 75263, "tags": "quantum-mechanics, hilbert-space, schroedinger-equation, superposition, time-evolution" }
Why do we regard ($e^{-}, \nu_{e}$) as a doublet?
Question: To my understanding, particles in a multiplet have similar properties (similar masses and so on) and slight or symmetric differences (in mass, electric charge, spin, and so on). They can be regarded as the same particle split into different states. For example, Heisenberg proposed that the proton and the neutron are the same particle (the nucleon) in different states, which form a doublet. Similarly, the three pions ($\pi^{+}, \pi^{0}, \pi^{-}$) form a triplet. Moreover, we have the baryon octet ($n, p, \Sigma^{-}, \Sigma^{0}, \Sigma^{+}, \Lambda, \Xi^{-}, \Xi^{0}$), the meson octet ($K^{0}, K^{+}, \pi^{-}, \pi^{0}, \pi^{+}, \eta, K^{-}, \overline{K}^{0}$), and the baryon decuplet ($\Delta^{-}, \Delta^{0}, \Delta^{+}, \Delta^{++}, \Sigma^{*-}, \Sigma^{*0}, \Sigma^{*+}, \Xi^{*-}, \Xi^{*0}, \Omega^{-}$). When particles in a multiplet are rotated into each other, the theory is invariant. However, $e^{-}$ and $\nu_{e}$ are not similar but conspicuously different. While the former is massive, the latter is massless (in the Standard Model). Their difference is neither slight nor symmetric in any sense. Why do we regard ($e^{-}, \nu_{e}$) as a doublet? Answer: You are contrasting the Weak interactions to the Strong interactions, where the breaking is spontaneous (and large) as contrasted to explicit (and small), respectively. ($e^{-}, \nu_{e}$) are a doublet under weak isospin. This means that the corresponding SM lagrangian is invariant under an SU(2) gauge group -- you could rotate the fields under such a transformation and the Lagrangian would stay the same. This gives these two fermions remarkably many properties in common, such as lepton number, and dictates how they would connect under WI transmutations; this only applies to left-handed chiral components, the ones in this doublet. The right-handed components are unconnected. However, the electroweak v.e.v. 
in this symmetry is not invariant, and so the symmetry is spontaneously broken, and many mass degeneracies of such doublets are vitiated. In fact, the particular Yukawa coupling giving mass to the electron is different than that giving mass to the neutrino, so their masses are completely disconnected: this is an unsung glory of the Standard Model. Their charges are different, but related, as well, since the electric charge does not commute with these SU(2) generators, but entangles into their common weak hypercharge. Their spins are the same. The multiplets you consider, by contrast, are hadron multiplets "almost degenerate" under flavor SU(3) transformation, which (almost) commute with the Hamiltonian to be sure, but also leave the strong interaction vacuum invariant, unlike the above. So, to a zeroth approximation, their masses are the same ... but you noticed their charges are also different, systematically related by the strong hypercharge Gell-Mann--Nishijima formula. Their spins are the same.
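The "different, but related" charges the answer mentions can be made explicit. As a sketch (using the common normalization $Q = T_3 + Y/2$; some texts instead write $Q = T_3 + Y$ with a rescaled hypercharge), the left-handed lepton doublet carries a single weak hypercharge $Y = -1$:

```latex
% Charge bookkeeping for the left-handed lepton doublet,
% with the convention Q = T_3 + Y/2 and Y_L = -1:
\begin{align}
  L_e &= \begin{pmatrix} \nu_e \\ e^- \end{pmatrix}_{\!L}, \\
  Q(\nu_e) &= T_3 + \tfrac{Y}{2} = +\tfrac{1}{2} - \tfrac{1}{2} = 0, \\
  Q(e^-)   &= T_3 + \tfrac{Y}{2} = -\tfrac{1}{2} - \tfrac{1}{2} = -1.
\end{align}
```

Both members share the same $Y$, so the charge difference within the doublet comes entirely from $T_3$ — exactly the sense in which the charges are different yet related.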
{ "domain": "physics.stackexchange", "id": 50576, "tags": "particle-physics, standard-model, electroweak, isospin-symmetry, leptons" }
Examples of "gauging a global symmetry"
Question: I am looking for someone to exemplify the actual process of "gauging a global symmetry." I am familiar with gauge bosons, gauge theories (QED), and the definition of "gauging a symmetry" etc., but I haven't seen an actual example of someone literally doing this and calling the process as such, which I think would be valuable given how often the phrase is employed. Preferably an answer to this would not be a fully general exposition on the notion of gauge symmetries, but instead just a short sketch of gauging a particular global symmetry. It might be helpful to gauge a symmetry both in a classical and in a quantum field theory. Gauging a higher-form symmetry would also be very helpful. Thank you! Answer: Here is a simple example, one of the first you should try to understand. The theory has a free $U(1)$ scalar field $\phi$ in $d+1$ spacetime dimensions, discussed in the modern notation of differential forms. The Lagrangian density is $$L_0 = d\phi \wedge\star d\phi.$$ This has a manifest global symmetry $\phi \mapsto \phi + \theta$. If we perform a local variation where $\theta$ has a small first derivative, then the Lagrangian is not invariant; instead, up to boundary terms, $$\delta L_0 = 2 \theta d\star d\phi + \mathcal{O}(\theta^2) = \theta\ dj + \mathcal{O}(\theta^2),$$ where we identify the Noether current $d$-form $j = 2 \star d \phi$. The conservation law $$dj = 2 d\star d\phi = 0$$ is equivalent to the equations of motion. To gauge this symmetry, we couple to a $U(1)$ gauge field $A$. Minimal coupling is $$L_0 - A \wedge j = L_0 - 2 A \wedge \star d\phi.$$ This action is not yet gauge invariant, but we're allowed to add local terms possibly depending on $\phi$ and at least second order in $A$. We're missing a term like $A \wedge \star A$. If we put it all together we get $$(d\phi - A) \wedge \star (d\phi - A).$$ You can check that this is a trivial theory (!). Note that at this step we have only coupled to a background gauge field. 
If we want to integrate over $A$ also, we need to choose some measure. This last step, which is what is usually called gauging, is not canonical, but typically we use a Gaussian (Maxwell) measure for $A$ and it's alright. We still get a trivial theory: $\phi$ acts as the phase of a Higgs field for $A$. However, if instead the symmetry was $\phi \mapsto \phi + 2\theta$, we would end up with $j = 4 \star d\phi$ and a gauged Lagrangian $$(d\phi - 2A) \wedge \star (d\phi - 2A),$$ which you can check is a nontrivial TQFT. It's the $\mathbb{Z}_2$ gauge theory. You can see this theory has a $\mathbb{Z}_2$ 1-form symmetry which if you gauge takes you back to the trivial theory above. PS. Very happy to see the interest in higher symmetries :)
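As a sketch of the consistency check left implicit above: under the gauged symmetry $\phi \mapsto \phi + \lambda$, $A \mapsto A + d\lambda$, the covariant combination is inert,

```latex
% Gauge invariance of the minimally coupled Lagrangian:
\begin{align}
  d\phi - A \;\longmapsto\; d(\phi + \lambda) - (A + d\lambda) = d\phi - A,
\end{align}
% and for the charge-2 case, where \phi \mapsto \phi + 2\lambda:
\begin{align}
  d\phi - 2A \;\longmapsto\; d(\phi + 2\lambda) - 2(A + d\lambda) = d\phi - 2A,
\end{align}
```

so both $(d\phi - A)\wedge\star(d\phi - A)$ and $(d\phi - 2A)\wedge\star(d\phi - 2A)$ are manifestly invariant under the gauge transformation.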
{ "domain": "physics.stackexchange", "id": 47475, "tags": "quantum-field-theory, symmetry, field-theory, gauge-theory, gauge-invariance" }
Calculating Fibonacci sequence in several different ways
Question: I'm doing an experiment trying to freshen up my unit testing and learn dynamic programming. Every test passes, but I'm curious about the results of some of them and worried whether I'm doing the testing correctly. Currently I have just the one scenario for each method, with an input of 40; the goal is to test which method is the most efficient for large numbers. Test results are as follows: BottomUp: 7 ms DynamicFibonacci: 5 sec DynamicFibonacci2: 8 ms MatrixFibonacci: 7 ms EfficientMatrix: 8 ms PiVersion: 7 ms RecursiveFibonacci: 4 sec I understand the slowness of the recursive version; that one makes sense since it does not store the values. EfficientMatrix is, however, within 1 ms of the plain matrix version, and in some tests the times even seem to be the same. DynamicFibonacci I don't get at all: why is it so slow? It should be very similar to version 2, but the time difference is huge. It's even slower than the recursive one, which does not memoize at all. I ran all tests individually since I noticed the results differed wildly otherwise; I believe the array value was stored when running them all at once, but that did not seem to be the case when debugging the tests individually. My question is, how can I improve my tests, both method-wise and in the testing process? It feels like I get slightly different test results just running them once; is there a better way, running them several times in a for loop maybe? It just seems like I can't really compare the results this way. Which method is generally the best to use with large numbers/heavy calculations like this? I'm originally a mobile Xamarin developer and just trying to learn a bit more about how to speed up some of my other work. TEST Class // sample method, all of them look the same except calling the different methods. 
[TestClass] public class FibonacciSequenceTest { private const long Number = 40; private const long Result = 102334155; private readonly FibonacciSequence fibonacciSequence; public FibonacciSequenceTest() { fibonacciSequence = new FibonacciSequence(); } [TestMethod] public void MatrixFibonacciCalculatorTest() { // Act var returnValue = fibonacciSequence.MatrixFibonacciCalculator(Number); // Assert long actual = returnValue; Assert.AreEqual(actual, Result); } } Class and methods public class FibonacciSequence { private readonly long max = 1000; private readonly long[] memoizedFibonacciNumbers; public FibonacciSequence() { memoizedFibonacciNumbers = new[] { max }; } #region MatrixFibonnaciCalculator public long MatrixFibonacciCalculator(long n) { long[,] f = { { 1, 1 }, { 1, 0 } }; if (n == 0) return 0; PowerMatrix1(f, n - 1); return f[0, 0]; } /* Helper function that multiplies 2 matrices F and M of size 2*2, and puts the multiplication result back to F[][] */ public void MultiplyMatrix1(long[,] F, long[,] M) { long x = F[0, 0] * M[0, 0] + F[0, 1] * M[1, 0]; long y = F[0, 0] * M[0, 1] + F[0, 1] * M[1, 1]; long z = F[1, 0] * M[0, 0] + F[1, 1] * M[1, 0]; long w = F[1, 0] * M[0, 1] + F[1, 1] * M[1, 1]; F[0, 0] = x; F[0, 1] = y; F[1, 0] = z; F[1, 1] = w; } /* Helper function that calculates F[][] raise to the power n and puts the result in F[][] Note that this function is designed only for fib() and won't work as general power function */ public void PowerMatrix1(long[,] F, long n) { long i; var M = new long[,] { { 1, 1 }, { 1, 0 } }; // n - 1 times multiply the matrix to // {{1,0},{0,1}} for (i = 2; i <= n; i++) MultiplyMatrix1(F, M); } #endregion #region EfficentMatrixFibonacciCalculator public long EfficientMatrixFibonacciCalculator(long n) { var f = new long[,] { { 1, 1 }, { 1, 0 } }; if (n == 0) return 0; EfficientPowerMatrix(f, n - 1); return f[0, 0]; } public void EfficientPowerMatrix(long[,] F, long n) { if (n == 0 || n == 1) return; var M = new long[,] { { 1, 1 
}, { 1, 0 } }; EfficientPowerMatrix(F, n / 2); EfficientMultiplyMatrix(F, F); if (n % 2 != 0) EfficientMultiplyMatrix(F, M); } public void EfficientMultiplyMatrix(long[,] f, long[,] m) { long x = f[0, 0] * m[0, 0] + f[0, 1] * m[1, 0]; long y = f[0, 0] * m[0, 1] + f[0, 1] * m[1, 1]; long z = f[1, 0] * m[0, 0] + f[1, 1] * m[1, 0]; long w = f[1, 0] * m[0, 1] + f[1, 1] * m[1, 1]; f[0, 0] = x; f[0, 1] = y; f[1, 0] = z; f[1, 1] = w; } #endregion public int IterativeFibonacciCalculator(long number) { int firstNumber = 0, secondNumber = 1, result = 0; if (number == 0) return 0; // To return the first Fibonacci number if (number == 1) return 1; // To return the second Fibonacci number for (var i = 2; i <= number; i++) { result = firstNumber + secondNumber; firstNumber = secondNumber; secondNumber = result; } return result; } public long RecursiveFibonacciCalculator(long number) { if (number <= 1) { return number; } return RecursiveFibonacciCalculator(number - 1) + RecursiveFibonacciCalculator(number - 2); } public long DynamicFibonacciCalculator(long number) { long result; var memoArrays = new long[number + 1]; if (memoArrays[number] != 0) return memoArrays[number]; if (number == 1 || number == 2) { result = 1; } else { result = DynamicFibonacciCalculator(number - 1) + DynamicFibonacciCalculator(number - 2); memoArrays[number] = result; } return result; } public long DynamicFibonacciCalculator2(long n) { // Declare an array to // store Fibonacci numbers. 
// 1 extra to handle // case, n = 0 var f = new long[n + 2]; long i; /* 0th and 1st number of the series are 0 and 1 */ f[0] = 0; f[1] = 1; for (i = 2; i <= n; i++) /* Add the previous 2 numbers in the series and store it */ f[i] = f[i - 1] + f[i - 2]; return f[n]; } // Helper method for PiCalculator public long PiFibonacciCalculator(long n) { double phi = (1 + Math.Sqrt(5)) / 2; return (long)Math.Round(Math.Pow(phi, n) / Math.Sqrt(5)); } public long BottomUpFibonacciCalculator(long n) { long a = 0, b = 1; // To return the first Fibonacci number if (n == 0) return a; for (long i = 2; i <= n; i++) { long c = a + b; a = b; b = c; } return b; } } Answer: DynamicFibonacciCalculator is slow because you create a new memoArrays for each recursion, so it will never contain any precalculated values, and it behaves just like the normal recursive version (and even worse, because of the overhead of allocating the arrays). public long DynamicFibonacciCalculator(long number) { long result; var memoArrays = new long[number + 1]; ... You should maintain the memoArrays outside of the recursion method. You could maybe do the recursion in a local function: public static long DynamicFibonacciCalculator(long number) { if (number <= 1) { return number; } long[] memoArrays = new long[number + 1]; long Recursion(long num) { if (num <= 1) { return num; } long result; if (memoArrays[num] != 0) { return memoArrays[num]; .... } return Recursion(number); } None of your methods depend on instance members, so it would be more correct to make them static (and the helpers could be static as well): public static class FibonacciSequence { #region MatrixFibonnaciCalculator public static long MatrixFibonacciCalculator(long n) { long[,] f = { { 1, 1 }, { 1, 0 } }; if (n == 0) return 0; ... 
You could optimize the test class by making a common test method that takes a delegate as argument: public class FibonacciSequenceTest { private const long Number = 40; private const long Result = 102334155; public FibonacciSequenceTest() { } public void FibonacciTester(Func<long, long> method, string methodName) { // Act var returnValue = method(Number); // Assert long actual = returnValue; Assert.AreEqual(actual, Result, $"{methodName} produced wrong result."); } [TestMethod] public void TestBottomUpFibonacciCalculator() { FibonacciTester(FibonacciSequence.BottomUpFibonacciCalculator, nameof(FibonacciSequence.BottomUpFibonacciCalculator)); } // TODO: Test methods for each Fib method... } In this way it is easier to maintain, and you avoid repeating yourself.
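The answerer's fix, sketched in Python for a language-neutral comparison (the C# structure is the same): allocate the memo table once, outside the recursion, so every recursive call shares it instead of reallocating a fresh, empty table per call.

```python
# Memoized Fibonacci with the memo table created once, outside the
# recursion -- the fix described in the answer above.
def dynamic_fibonacci(number):
    memo = [0] * (number + 1)  # shared across all recursive calls

    def recursion(n):
        if n <= 1:
            return n
        if memo[n] != 0:       # subproblem already solved: cache hit
            return memo[n]
        memo[n] = recursion(n - 1) + recursion(n - 2)
        return memo[n]

    return recursion(number)

print(dynamic_fibonacci(40))  # 102334155, now in linear time
```

Because the cache survives across recursive calls, each subproblem is computed once, turning the exponential recursion into a linear-time one.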
{ "domain": "codereview.stackexchange", "id": 35358, "tags": "c#, performance, unit-testing, comparative-review, fibonacci-sequence" }
C++ wrapper for GLFW window object
Question: I'm creating a window class in C++ to provide a bit of abstraction for a GLFW window. window.h #pragma once #define GLFW_INCLUDE_VULKAN #include <GLFW/glfw3.h> class Window { GLFWwindow* m_Window; GLFWmonitor* m_Monitor; const char* m_Title; GLFWimage m_Icon[1]; int m_Width, m_Height; int m_PosX, m_PosY; bool m_Fullscreen; public: Window(int width, int height, const char* title, const char* iconPath); ~Window(); GLFWwindow* getWindow(); const char** getRequiredExtensions(); private: void queryVulkanSupport(); void initGLFW(); void createWindow(); void setIcon(const char* path); void center(); void setFullscreen(); void setWindowSizeCallback(); static void static_WindowSizeCallback(GLFWwindow* window, int width, int height); void windowSizeCallback(int width, int height); void setKeyCallback(); static void static_KeyCallback(GLFWwindow* window, int key, int scancode, int action, int mods); void keyCallback(int key, int scancode, int action, int mods); }; window.cpp #include "window.h" #define STB_IMAGE_IMPLEMENTATION #include <stb/stb_image.h> #include <stdexcept> #include <string> Window::Window(int width, int height, const char* title, const char* iconPath) : m_Fullscreen(false), m_Width(width), m_Height(height), m_Title(title) { initGLFW(); queryVulkanSupport(); m_Monitor = glfwGetPrimaryMonitor(); createWindow(); setIcon(iconPath); setWindowSizeCallback(); setKeyCallback(); } Window::~Window() { glfwDestroyWindow(m_Window); glfwTerminate(); } GLFWwindow* Window::getWindow() { return m_Window; } const char** Window::getRequiredExtensions() { uint32_t count; const char** extensions = glfwGetRequiredInstanceExtensions(&count); return extensions; } void Window::queryVulkanSupport() { if (!glfwVulkanSupported()) { throw std::runtime_error("Vulkan not supported!"); } } void Window::initGLFW() { if (!glfwInit()) { throw std::runtime_error("Failed to initialize GLFW!"); } } void Window::createWindow() { glfwWindowHint(GLFW_CLIENT_API, GLFW_NO_API); m_Window = 
glfwCreateWindow(m_Width, m_Height, m_Title, nullptr, nullptr); if (!m_Window) { throw std::runtime_error("Could not create GLFW window!"); } glfwSetWindowUserPointer(m_Window, this); center(); } void Window::setIcon(const char* path) { m_Icon[0].pixels = stbi_load(path, &m_Icon[0].width, &m_Icon[0].height, 0, 4); if (m_Icon[0].pixels) glfwSetWindowIcon(m_Window, 1, m_Icon); } void Window::center() { const GLFWvidmode* vidMode = glfwGetVideoMode(m_Monitor); glfwSetWindowPos(m_Window, (vidMode->width - m_Width) / 2, (vidMode->height - m_Height) / 2); } void Window::setFullscreen() { if (!m_Fullscreen) { const GLFWvidmode* vidMode = glfwGetVideoMode(m_Monitor); glfwGetWindowPos(m_Window, &m_PosX, &m_PosY); glfwGetWindowSize(m_Window, &m_Width, &m_Height); glfwSetWindowMonitor(m_Window, m_Monitor, 0, 0, vidMode->width, vidMode->height, vidMode->refreshRate); glfwSetWindowSize(m_Window, vidMode->width, vidMode->height); m_Fullscreen = !m_Fullscreen; } else { glfwSetWindowMonitor(m_Window, nullptr, m_PosX, m_PosY, m_Width, m_Height, 0); glfwSetWindowSize(m_Window, m_Width, m_Height); m_Fullscreen = !m_Fullscreen; } } void Window::setWindowSizeCallback() { glfwSetWindowSizeCallback(m_Window, static_WindowSizeCallback); } void Window::static_WindowSizeCallback(GLFWwindow* window, int width, int height) { Window* actualWindow = (Window*) glfwGetWindowUserPointer(window); actualWindow->windowSizeCallback(width, height); } void Window::windowSizeCallback(int width, int height) { glfwSetWindowSize(m_Window, width, height); } void Window::setKeyCallback() { glfwSetKeyCallback(m_Window, static_KeyCallback); } void Window::static_KeyCallback(GLFWwindow* window, int key, int scancode, int action, int mods) { Window* actualWindow = (Window*) glfwGetWindowUserPointer(window); actualWindow->keyCallback(key, scancode, action, mods); } void Window::keyCallback(int key, int scancode, int action, int mods) { if (key == GLFW_KEY_ESCAPE && action == GLFW_PRESS) 
glfwSetWindowShouldClose(m_Window, true); if (key == GLFW_KEY_F11 && action == GLFW_RELEASE) setFullscreen(); } main.cpp #include "window.h" int main() { Window window(600, 300, "Vulkan engine!", "include/Ressources/VulkanIcon.png"); while (!glfwWindowShouldClose(window.getWindow())) { glfwPollEvents(); } } The code works as intended. I'm really new to C++ though, and I assume there are tons of things I could improve about this class, not just in terms of speed but also structure. I'm especially concerned with the way I get around the fact that GLFW callbacks are static. In my callbacks I want to access methods in my class, so I retrieve a pointer to the actual 'window user' using glfwGetWindowUserPointer in order to reach the actual class methods and member variables. Still, I'm just looking for any thoughts on how to improve this code. Links to GLFW and STB. Answer: Use default member initialization where appropriate You always want to set m_Fullscreen to false when creating a new Window. Instead of setting this variable in the constructor, just initialize it where you declare it: class Window { ... bool m_Fullscreen = false; It doesn't seem like a big deal here, but once you have more member variables or more constructors, you will see that this is the preferred way to do this. Avoid creating trivial single-line private member functions Functions like the following are rather pointless to have: void Window::setWindowSizeCallback() { glfwSetWindowSizeCallback(m_Window, static_WindowSizeCallback); } You are writing a lot of boilerplate here (5 lines in window.cpp and window.h combined), for no gain at all: this function is only used once, and calling glfwSetWindowSizeCallback() is roughly just as readable as setWindowSizeCallback(). 
Don't center the window after creating it The desktop environment's window manager will take care of placing new windows on the screen, and will usually take care of ensuring there is minimum overlap with existing windows, and might allow users to configure this behaviour. By forcing the window to be centered, you might cause undesirable results. Don't make getWindow() public The whole point of this class is to abstract away the details of GLFW's C API. So don't expose m_Window to users of your class. If a user of this class would need this pointer, it probably means you should add a new member function instead that performs the desired action on the window. Put class Window inside namespace GLFW This makes it clear that you are providing a GLFW window, and it avoids conflicts with other libraries that might expose a class with the same name. It will also provide a nice home for any other classes that you might need to add, if your goal is to provide full coverage of GLFW's functions. Avoid declaring multiple variables on a single line Instead of: int m_Width, m_Height; Just write: int m_Width; int m_Height; It generally is easier to read and to modify. Also, if things belong together, such as: int m_PosX, m_PosY; Maybe they should be put into their own class or struct, like: struct { int x; int y; } m_Pos; Or if you use things like coordinates often, consider using a library that provides you with classes that do this. If this is going to be used for an application using Vulkan or OpenGL, I recommend using the GLM library, which provides a class vec2 that is perfect for 2D coordinates. Consider alternatives to GLFW I would seriously recommend that you look to alternatives for GLFW. It has issues. You already encountered one: toggling between fullscreen and windowed mode. 
Almost every other library just has a function that looks like setFullScreen(Window *w, bool fullscreen), and they take care of everything, including choosing the right monitor, remembering the original size, and so on. I strongly suggest you look at the SDL2 library, which has a better, friendlier API, can do much more than GLFW, and is very well supported on many platforms.
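The static-callback trampoline the question asks about is a general pattern rather than something GLFW-specific. Here is a hypothetical Python sketch of the idea: the registry dict stands in for glfwSetWindowUserPointer/glfwGetWindowUserPointer, and none of the names below are real GLFW bindings — they only illustrate how a C-style API that accepts free functions can still dispatch to instance methods.

```python
# Hypothetical sketch of the "user pointer" trampoline pattern: a C-style
# API that only accepts free functions can still reach instance methods
# if each handle carries an opaque reference back to its owner.
_user_pointers = {}  # plays the role of glfwSet/GetWindowUserPointer

def set_user_pointer(handle, obj):
    _user_pointers[handle] = obj

def get_user_pointer(handle):
    return _user_pointers[handle]

def static_key_callback(handle, key):
    # The free function the C library can call; it recovers the owning
    # instance and forwards to the real member function.
    get_user_pointer(handle).key_callback(key)

class Window:
    def __init__(self, handle):
        self.handle = handle
        self.pressed = []
        set_user_pointer(handle, self)  # like glfwSetWindowUserPointer(w, this)

    def key_callback(self, key):
        self.pressed.append(key)

w = Window(handle=1)
static_key_callback(1, "ESC")   # the "C library" invoking the free function
print(w.pressed)                # ['ESC']
```

This is exactly what the reviewed C++ code does with glfwSetWindowUserPointer(m_Window, this) plus a static trampoline, and it is the standard way to bridge a C callback API to a C++ class.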
{ "domain": "codereview.stackexchange", "id": 32347, "tags": "c++, beginner, object-oriented, pointers, callback" }
Is it possible/practical to mold a single hollow piece of carbon fiber with a sealed vacuum in the middle?
Question: Let's say I design a simple piece of carbon fiber to be manufactured, in the shape of a simple cylinder with a diameter of 2cm and a height of 3cm (pretty small). Is it possible and/or practical to try to mold this piece of carbon fiber in such a way that it could be hollow on the inside? And if so, is it possible to mold the piece of carbon fiber inside of a vacuum chamber, so the resulting epoxy sealed carbon fiber part contains significantly less air in the hollow when completed? Answer: If I understand correctly, it sounds like you want to make a short carbon fiber tube. This can be done by wrapping the carbon fiber cloth around a mandrel that has the diameter that you want the inside diameter of your tube to be. There is an excellent video that demonstrates the process. Search for “How to make a roll wrapped carbon fiber tube” on YouTube. If you wrap the carbon fiber with compression tape, no vacuum is needed. Edit: I think I misunderstood your question initially. You can accomplish this by using a two-piece mold with a small hole to allow air to escape. Lay up the fiber in the mold, secure the two halves together, then place it in a vacuum chamber. Be sure to slowly pump out the air to avoid having the epoxy sucked out with it.
{ "domain": "engineering.stackexchange", "id": 3500, "tags": "manufacturing-engineering, vacuum, carbon-fiber" }
A two-level system absorbs a detuned photon. Where does the extra energy go?
Question: Let's consider a simple two-level system with a frequency gap of $\omega_0$ between the ground and excited states. Now, when we turn on an external electromagnetic field with frequency $\omega < \omega_0$, there is a non-zero probability that, starting from the ground state, the system will become excited. The energy of the excited state is $\hbar \omega_0$, but each photon of the field contributes only $\hbar \omega < \hbar \omega_0$. So my question is: what happens to the missing $\hbar (\omega_0 - \omega)$? I have heard the explanation that it is the measuring equipment that fills the energy gap, but I'm not quite convinced. Could you help? Answer: Rabi oscillations occur, in their simplest form, when a two-level system interacts with a quasi-resonant classical electric field; the total hamiltonian is then $$H_\textrm{s.c.}=\frac{\hbar\omega_0}{2}\hat{\sigma}_3+\frac{\hbar\Omega_0}{2}\cos(\omega t) \hat{\sigma}_1.$$ (For the atom in the ground state at $t=0$, the excitation probability then shows Rabi oscillations of the form $P=\frac{\Omega_0^2}{\Omega_0^2+\delta^2} \sin^2\left( \Omega t\right)$ for $\delta=\omega-\omega_0$ and $\Omega=\sqrt{\Omega_0^2+\delta^2}$.) The important thing to realize is that this is the total energy of the system. Since the hamiltonian is not time-independent, there is no need for the atomic system to conserve energy. Of course, if the laser pulse stops and the atom is left in the excited state, the extra energy is taken from the light field. To account for energy conservation, then, you need to include the light field into the reckoning. 
This is done by the Dicke model, for which the total hamiltonian is $$H_\textrm{D}=\frac{\hbar\omega_0}{2}\hat{\sigma}_3+\hbar\omega \hat{a}^\dagger \hat{a} +\frac{\hbar\Omega_0}{2}\left(\hat{a}+\hat{a}^\dagger\right) \hat{\sigma}_1.$$ This hamiltonian is barely integrable and no closed-form time-dependent evolutions are known, so it is usually simplified to its rotating-wave approximation counterpart, the Jaynes-Cummings hamiltonian $$H_\textrm{J.C.}=\frac{\hbar\omega_0}{2}\hat{\sigma}_3+\hbar\omega \hat{a}^\dagger \hat{a} +\frac{\hbar\Omega_0}{2}\left(\hat{a}\hat{\sigma}_+ +\hat{a}^\dagger \hat{\sigma}_-\right).$$ This hamiltonian leaves invariant the subspaces spanned by $\{|g,n\rangle, |e, n-1 \rangle\}$, and the probability of excitation shows Rabi oscillations of the form $$P=\frac{n \Omega_0^2}{n\Omega_0^2+\delta^2} \sin^2\left( \Omega t\right)$$ with $\Omega=\sqrt{n\Omega_0^2+\delta^2}$. Thus, there is still an oscillation between levels of "different energy", since in this process a photon of energy $\hbar\omega$ is absorbed (and subsequently re-emitted, of course) to excite the atom up by $\hbar\omega_0$, and these two may not match, as you point out. So what's going on? The hamiltonian is time-independent so that energy must be conserved. The resolution is that only the total hamiltonian need be conserved, and the notions of photon and atomic excitation are only approximate ones because they represent eigenstates of only part of the total hamiltonian. The true eigenstates are linear combinations of these and one can speak of excitations of the whole of the system, which are indeed conserved. So where does that leave the problem then? Say the atom is moving across the laser beam profile, so that the coupling $\Omega_0$ increases from zero to some maximum and returns to zero. Rabi oscillations will show up as a function of the interaction time, which one can control for example by changing the atom's speed (this is actually rather common!). 
Say, then, that the system starts up with the atom in the ground state and the field in a number state, with the field mode slightly red-detuned as in your set-up. Then with some probability the atom will emerge excited and the field will have lost a photon, which are (now!) valid notions since the coupling has been turned off. Where did the energy go? You can understand this as a system being adiabatically brought near a level anti-crossing of the standard avoided-crossing form. As the coupling is increased, the system (initially in a non-perturbed eigenstate) is brought impulsively to the middle of the anti-crossing, which puts it in a superposition of the true Jaynes-Cummings eigenstates. These two eigenstates then evolve at their own frequencies (which are respectively slightly higher and lower than the original) and by their interference make the Rabi oscillations. When the atom exits the beam, the hamiltonian impulsively returns to the uncoupled hamiltonian, and the atom is frozen into a superposition of the two uncoupled eigenstates, even if they do have different energies. Where did the extra energy come from in this case? Well, the beam entry and exit were impulsive, so that again energy need not really be conserved (if the entry and exit are adiabatic then the system will remain in a total-hamiltonian eigenstate and will not show oscillations). In this particular case, then, it is probably taken from the atomic motion, which drives these impulsive transitions, and is affected by a light-induced potential. You would then need to add an atomic-position hamiltonian and account for the beam profile... and there's no end to the number of extra frills you can add. 
In general, though, the driving principles are that you can only "conserve energy" when you've got a closed, time-independent system, and that energy conservation only makes sense when you consider the total energy of the system, including interaction energies, which are not simply transition-driving terms but do represent additional energy. Finally, one must be careful when speaking of "photons" when dealing with interacting systems - it is only total-hamiltonian excitations that are truly physical (though of course the free-hamiltonian excitations are very much useful notions).
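A numerical sketch of the detuned Rabi oscillation discussed above, in the rotating-wave approximation. This is a textbook-conventional check rather than a reproduction of the answer's exact normalization (texts differ by factors of two in the sine argument); with the rotating-frame Hamiltonian below, the excitation probability is $P(t)=\frac{\Omega_0^2}{\Omega_0^2+\delta^2}\sin^2\!\big(\sqrt{\Omega_0^2+\delta^2}\,t/2\big)$, whose maximum $\Omega_0^2/(\Omega_0^2+\delta^2)$ is below 1 for any nonzero detuning — the detuned drive never fully transfers the population.

```python
# Pure-Python RK4 integration of i d(psi)/dt = H psi for the RWA
# rotating-frame Hamiltonian H = [[-d/2, W/2], [W/2, d/2]] (hbar = 1),
# starting in the ground state, compared against the analytic maximum
# excitation probability W^2 / (W^2 + d^2).
import math

def rabi_probability(omega0, delta, t, steps=5000):
    """Excited-state probability at time t for Rabi frequency omega0, detuning delta."""
    h = [[-delta / 2, omega0 / 2], [omega0 / 2, delta / 2]]

    def deriv(psi):
        return [-1j * (h[0][0] * psi[0] + h[0][1] * psi[1]),
                -1j * (h[1][0] * psi[0] + h[1][1] * psi[1])]

    psi = [1 + 0j, 0 + 0j]          # start in the ground state
    dt = t / steps
    for _ in range(steps):
        k1 = deriv(psi)
        k2 = deriv([psi[i] + dt / 2 * k1[i] for i in range(2)])
        k3 = deriv([psi[i] + dt / 2 * k2[i] for i in range(2)])
        k4 = deriv([psi[i] + dt * k3[i] for i in range(2)])
        psi = [psi[i] + dt / 6 * (k1[i] + 2 * k2[i] + 2 * k3[i] + k4[i])
               for i in range(2)]
    return abs(psi[1]) ** 2

omega0, delta = 1.0, 0.5
omega = math.sqrt(omega0**2 + delta**2)
t = math.pi / omega                      # time of the first oscillation maximum
numeric = rabi_probability(omega0, delta, t)
analytic = omega0**2 / omega**2          # maximum excitation probability = 0.8
print(abs(numeric - analytic) < 1e-6)    # True
```

The incomplete population transfer at finite detuning is the numerical face of the question's puzzle: the drive only partially, and reversibly, excites the system while the coupling is on.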
{ "domain": "physics.stackexchange", "id": 3660, "tags": "quantum-mechanics, electromagnetic-radiation, energy-conservation" }
Get files recursively as relative paths
Question: I want to get a recursive list of files in a directory but I want relative paths and not the full paths that Directory.GetDirectories() would return using the AllDirectories search option. This is because I want to move the list of files to another location and perform a custom check on each file. I could try converting each absolute path to a relative path but that's a bit messy, so I decided to create a recursive function that generates relative paths as it goes. public static string[] GetRelativeFilesRecursive(string path) { IEnumerable<string> GetRelativeFilesRecursive(string basePath, string subFolder) { IEnumerable<string> getRelativeArray(Func<string, IEnumerable<string>> func) { return func(Path.Combine(basePath, subFolder)).Select(file => Path.Combine(subFolder, Path.GetFileName(file))); } IEnumerable<string> files = getRelativeArray(Directory.GetFiles); IEnumerable<string> directories = getRelativeArray(Directory.GetDirectories); foreach (string directory in directories) { files = files.Concat(GetRelativeFilesRecursive(basePath, directory)); } return files; } return GetRelativeFilesRecursive(path, "").ToArray(); } I'm using nested functions to make it more modular, i.e. I'd rather have one function that I can copy and paste as a whole than have multiple functions that could become separated. I'm using IEnumerable<string> instead of string[] because otherwise I'd have to call ToArray() on every Concat() call and I'm worried that would be bad for performance. What do you think? Is this a good way to do this? Answer: I would return IEnumerable<string> in the outmost function too to avoid the call to ToArray() in order to let the client decide how to handle the output: public static IEnumerable<string> GetRelativeFilesRecursive(string path) { ... ... return GetRelativeFilesRecursive(path, ""); } As I see it, you're producing relative paths to the initial root path. 
I think it can be done a little more easily, like this: IEnumerable<string> GetRelativePaths(string root) { int rootLength = root.Length + (root[root.Length - 1] == '\\' ? 0 : 1); foreach (string path in Directory.GetFiles(root, "*", SearchOption.AllDirectories)) { yield return path.Remove(0, rootLength); } }
{ "domain": "codereview.stackexchange", "id": 30014, "tags": "c#, file-system" }
Can I prune the tree if alpha-beta pruning finds 10 to the left of the root node?
Question: I have an AI class this semester. For our exam, we also cover alpha-beta pruning. I found an old example where I think we can stop even earlier. Here is a picture of it. I think, because $X$ wants to maximize his win, and finds 10, he knows that he cannot get better. Therefore he cuts and puts 10. I marked my "improvement" with red. Answer: The vanilla Alpha-Beta Pruning algorithm as it has been taught to you in class does not assume any domain knowledge / knowledge about the game / knowledge about the tree it is searching. Therefore, if it immediately finds a score of 10 directly to the left of the root node, it cannot prune yet, because... maybe there's a score of 20 somewhere else in the tree? In your description, you seem to assume prior knowledge that the maximum score that can possibly be obtained in the entire game is 10. This is not typically assumed to be available knowledge ahead of time, at least not in the original formulation of Alpha-Beta. That is why in pseudocode of the algorithm (for example on wikipedia), they initialize alpha and beta at minus and plus infinity. You are right though. If you have better estimates of the upper and lower bounds on scores that can ever be achieved throughout the entire game, you can use better initializations of alpha and beta than minus and plus infinity. alpha and beta would have to be -11 and 11 if the true bounds are -10 and 10. I suspect you'd also have to make some additional changes to the typical pseudocode to actually make use of this though. In more complex games than Tic Tac Toe, you're often not going to have very useful knowledge of better bounds though. Typically you can't afford to search all the way to terminal game states, you'll have to stop the search earlier and use heuristic evaluation functions to estimate scores.
If you do that, you'll often want to assign really high constants (like +/- 1Million) to actual terminal nodes (wins and losses), so that all your heuristic scores for non-terminal nodes can be somewhere in between. If you have such a large range of possible scores, your idea stops having much value.
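To make the discussion concrete, here is a minimal Python sketch (my own illustration, not from the answer) of the vanilla fail-hard algorithm, using a nested-list tree encoding chosen purely for brevity:

```python
import math

def alphabeta(node, maximizing, alpha=-math.inf, beta=math.inf):
    """Fail-hard alpha-beta over a game tree given as nested lists
    (leaves are numeric scores). With the default infinite bounds, a
    root child scoring 10 cannot trigger a cutoff at the root, because
    nothing tells the search that 10 is the best achievable score."""
    if isinstance(node, (int, float)):
        return node
    if maximizing:
        value = -math.inf
        for child in node:
            value = max(value, alphabeta(child, False, alpha, beta))
            alpha = max(alpha, value)
            if alpha >= beta:
                break  # beta cutoff
        return value
    value = math.inf
    for child in node:
        value = min(value, alphabeta(child, True, alpha, beta))
        beta = min(beta, value)
        if alpha >= beta:
            break  # alpha cutoff
    return value
```

Passing tighter initial `alpha`/`beta` values at the root is exactly the "better initializations" modification discussed above; with the defaults, the search has no way to know that a score found early is the global maximum.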
{ "domain": "ai.stackexchange", "id": 368, "tags": "search, minimax, alpha-beta-pruning" }
I and Q Channels
Question: My understanding of I and Q channels is as follows (please correct me if I am wrong): I = In-phase, or real component Q = Quadrature (90° shift of real component) Where do these two channels come from in the first place? Is one the electric field and the other the magnetic field of an EM wave? I was under the impression that these channels are only present in digital waveforms; is this true, and if so, why? How can this be used to find the vector of an incoming signal, and would the signal modulation make a difference (assuming you can invoke the proper filtering necessary)? Answer: The two channels exist only inside a transmitter or a receiver; the channels are physically combined in a single signal (or channel) in the physical medium (wire, coax cable, free space, etc). At the transmitter, two signals $s_I(t)$ and $s_Q(t)$ (called the I (or inphase) signal and Q (or quadrature) signal respectively) are combined into a single signal $s(t)$ that is transmitted over the physical medium in a frequency band centered at $\omega_c$ radians per second. Note that $$s(t) = s_I(t)\cos(\omega_c t) - s_Q(t)\sin(\omega_c t)$$ The receiver separates out the two signals $s_I(t)$ and $s_Q(t)$ from this by multiplying $s(t)$ by $2\cos(\omega_c t)$ and $-2\sin(\omega_c t)$ respectively, and low-pass filtering the two products.
That is, $$\begin{align*} s_I(t) &= \text{result of low-pass filtering of}~ 2s(t)\cos(\omega_c t)\\ s_Q(t) &= \text{result of low-pass filtering of}~ -2s(t)\sin(\omega_c t) \end{align*}$$ Note that $$ \begin{align*} 2s(t)\cos(\omega_c t) &= 2s_I(t)\cos^2(\omega_c t) - 2s_Q(t)\sin(\omega_c t)\cos(\omega_c t)\\ &= s_I(t) + \bigl[s_I(t)\cos(2\omega_c t) - s_Q(t)\sin(2\omega_c t)\bigr]\\ -2s(t)\sin(\omega_c t) &= -2s_I(t)\cos(\omega_c t)\sin(\omega_c t) + 2s_Q(t)\sin^2(\omega_c t)\\ &= s_Q(t) + \bigl[-s_Q(t)\cos(2\omega_c t) - s_I(t)\sin(2\omega_c t)\bigr]\\ \end{align*}$$ where the quantities in square brackets are double-frequency terms that are eliminated by the low-pass filtering.
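A quick numerical check of this demodulation scheme (my own sketch): constant baseband values stand in for $s_I(t)$ and $s_Q(t)$, and averaging over an integer number of carrier cycles stands in for the low-pass filter, since that average removes the double-frequency terms exactly.

```python
import numpy as np

# Sample fc whole carrier cycles; the variable names below are my own.
n_samples = 10_000
n_cycles = 100                     # whole carrier cycles in the window
t = 2 * np.pi * n_cycles * np.arange(n_samples) / n_samples  # carrier phase
s_I, s_Q = 0.7, -0.3               # transmitted inphase/quadrature values

s = s_I * np.cos(t) - s_Q * np.sin(t)   # combined passband signal

# Demodulate: multiply by 2cos / -2sin and average ("low-pass filter").
i_rec = np.mean(2 * s * np.cos(t))      # recovers s_I
q_rec = np.mean(-2 * s * np.sin(t))     # recovers s_Q
```

Over whole carrier cycles the mean of $\cos(2\omega_c t)$ and $\sin(2\omega_c t)$ vanishes, so the recovered values match the transmitted ones to floating-point precision.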
{ "domain": "dsp.stackexchange", "id": 660, "tags": "signal-detection, soft-question, decomposition" }
Is this a correct theoretical concept of a simple electric current generator?
Question: As I think I somehow understand electromagnetic induction and after watching several experiments on YouTube with magnet pieces left to fall through coils which had connected ends to permit the electric current to flow through them, I imagined how this feature can be used to easily generate electricity at home by just mechanical force...The problem was how to have the inductive piece freely move through the coil, and I figured out that there should be some gears used to help this action...Please see the picture as it could help to understand what I am trying to present and ask for its functionality....This is just an exclusively theoretical question and not an engineering question so I decided to post it here and any positive comment or answer will be helpful for me as well as to other users to spot possible errors and help me to improve my basic idea.... In simple words, does this idea make an automated version of the man in the YouTube video dropping the magnet and then picking it up and dropping it again through the coil several times?.... Answer: I'm not sure I fully understand the exact device you are showing in the diagram, but as for the principle of moving a magnet through a coil of wire via mechanical energy input – this is how all electrical power in the world is generated, apart from solar (and niche devices like thermoelectric generators). In order to create continuous motion and not have to "pick up the magnet and release" every time, a circular set of coils is used, and the magnets are spun using a turbine. The mechanical input is supplied by high pressure steam (boiled by fossil fuels or nuclear or geothermal) or else by hydroelectric or wind power. As far as generating electricity at home goes, it is unlikely you will be able to power anything substantial like an appliance, but they do make emergency flashlights and radios powered by hand cranks, which you can experiment with.
{ "domain": "physics.stackexchange", "id": 94666, "tags": "energy, magnetic-fields, electromagnetic-induction, rotation" }
In the earth's crust, why is there far more uranium than gold?
Question: In parts per million in the Earth's crust, uranium is around 1.8 ppm and gold 0.003 ppm. Given that it takes far more energy to create uranium than gold, why is this? Answer: Since gold is much more abundant in the universe than is uranium (by a factor of about 20:1)1, why is the situation reversed in the Earth's crust (by a factor of about 1:600)2? The answer lies in chemistry. Uranium is chemically active. It readily oxidizes (pitchblende) and it readily combines with silicates. Uranium is a lithophile (literally, rock-loving) element3. It does not dissolve all that well in molten iron, and thus tended not to migrate to the center of the Earth when the Earth differentiated. Uranium is a "high field strength element", one of the two classes of trace elements that are incompatible with the minerals that form the upper mantle4. When upper mantle rock undergoes a partial melt, incompatible elements such as uranium preferentially join the silicate melt rather than staying with the solid minerals. Over time, this magnifies the amount of uranium in the crust compared to that in the upper mantle5. Gold on the other hand is rather inert chemically. It has little affinity to oxygen or sulfur. It does however readily dissolve in molten iron. Gold is a siderophile (literally, iron-loving) element3. Of the tiny bit of gold currently found in the crust, hardly any is primordial. Almost all of the primordial gold sank to the Earth's core when the planet differentiated. The gold currently found in the crust instead arrived in meteors that hit the Earth after the Earth had finished forming6. The above assumes that the Bulk Silicate Earth (BSE) model of the Earth is basically correct, that the Earth formed from protoplanets and planetary embryos that had formed from material in the inner solar system, and that the proto-Earth differentiated into a core and primitive mantle.
One prediction of these models is that the differentiation that created the Earth's core made the core strongly enhanced in siderophile elements and strongly depleted of lithophile elements, particularly so with regard to high refractory lithophile elements such as thorium and uranium. An opposing (not well accepted) model says that rather than being depleted of uranium, the Earth's core is uranium-enhanced, and to such an extent that there is a large georeactor at the very center of the Earth. These are testable hypotheses. Recent studies of geo-neutrinos are consistent with the BSE hypothesis, and simultaneously reject the possibility of a large georeactor at the center of the Earth7. Footnotes Based on Lodders, "Solar system abundances of the elements." Principles and Perspectives in Cosmochemistry, Springer Berlin Heidelberg, 379-417 (2010), the abundance of gold to uranium by mass in chondritic meteorites is 18.1:1, 25:1 for the sun's photosphere. To one significant digit, this ratio becomes 20:1. From Lide, editor, CRC Handbook of Chemistry and Physics, 88th edition, the crustal ratio of uranium to gold is 675:1. From online resources such as webelements.com, I get ratios ranging from over 400:1 to over 600:1. I used 600:1. Victor M. Goldschmidt developed the concept of classifying elements as siderophile ("iron loving"), lithophile ("rock loving"), chalcophile (literally "ore loving", but Goldschmidt implied "sulphur loving"), and atmophiles ("air loving") in the 1920s. While Goldschmidt's initial concept of a siderophilic core surrounded by a chalcophilic layer surrounded in turn by a lithophilic outer layer didn't pan out, his classification scheme lives on. That uranium is a lithophile and gold is a siderophile is basic chemistry. There are two key classes of "incompatible elements": Those with an abnormally large ionic radius, and those with an abnormally large field strength. Uranium and thorium fall into the latter class.
While the "incompatible elements" are lithophiles based on chemistry, they don't fit nicely in the crystalline structures that comprise typical rock. In rock undergoing a partial melt, incompatible elements such as uranium tend to migrate to the melt because of this structural incompatibility. Over time, plate tectonics has made the incompatible elements migrate to the Earth's crust. This is the conclusion of Willbold, et al., "The tungsten isotopic composition of the Earth's mantle before the terminal bombardment." Nature 477.7363: 195-198 (2011). Others disagree. One thing is certain: Gold is an extremely rare element in the Earth's crust. For example, see Bellini, et al., "Observation of geo-neutrinos." Physics Letters B 687.4:299-304 (2010), Fiorentini, et al., "Geo-neutrinos and earth's interior." Physics Reports 453.5:117-172 (2007), and a host of other recent papers on this topic.
{ "domain": "physics.stackexchange", "id": 17445, "tags": "nuclear-physics, geophysics" }
What does the term "half life" mean for a single radioactive particle?
Question: I was introduced to the term half life as the time it takes for the number of radioactive nuclei to become half of its initial value in a radioactive sample. But there is a question in "Concept of Physics by HC Verma}" which says that a free neutron decays with a "half life" of 14 minutes. Now this is really confusing. Here it is : What does the term half life even mean for a single radioactive nucleus or for a free neutron ? Does it mean that the neutron is only "half transformed" (into a proton and the beta particle) by the given time ? Answer: For a single free neutron that exists a half-life of 14 minutes would mean that, over a timespan of 14 minutes, measured in the neutron's rest frame, there is a 50% chance that it will decay into a proton, an electron (beta particle), and an electron antineutrino. (As @PM 2Ring notes in their comment on the original question, the half-life of a free neutron in reality is about 10 minutes, and the book's question mistakenly substituted in the value of the free neutron mean lifetime.)
{ "domain": "physics.stackexchange", "id": 83950, "tags": "radiation, neutrons, half-life" }
Antimatter, Majorana and Dirac
Question: I have a question: how did Dirac derive the existence of antimatter from his equation? $$i \hbar \gamma^\mu \partial_\mu \psi - m c \psi = 0$$ And why does the Majorana equation predict the existence of massive particles which are their own antiparticles? Is the Majorana equation real-valued only? $$ i \, {\partial\!\!\!\big /} \psi - m\, \psi_c = 0~ $$ Answer: Dirac did not originally predict antimatter. He thought that the positively charged particle should be the proton. That the theory forced it to have the same mass as the electron was seen as a failure of his equation. It was only after the experimental discovery of antimatter that the equation was understood to be correctly describing the positron.
{ "domain": "physics.stackexchange", "id": 83407, "tags": "dirac-equation, antimatter, majorana-fermions" }
Build Farm says it can't Catkin
Question: CMake Error at CMakeLists.txt:6 (catkin_package): Unknown CMake command "catkin_package". http://jenkins.ros.org/job/ros-hydro-r2-gazebo_binarydeb_raring_i386/2/console Originally posted by David Lu on ROS Answers with karma: 10932 on 2013-12-26 Post score: 0 Answer: Your CMakeLists.txt file (https://github.com/brown-release/nasa_r2_simulator_release/blob/release/hydro/r2_gazebo/0.5.1-0/CMakeLists.txt) does not find_package(catkin) before it uses catkin specific functions like catkin_package(). Originally posted by Dirk Thomas with karma: 16276 on 2013-12-26 This answer was ACCEPTED on the original site Post score: 2
{ "domain": "robotics.stackexchange", "id": 16528, "tags": "ros, catkin-package, buildfarm" }
Different versions of python3-catkin-pkg-modules are distributed on packages.ros.org
Question: It seems different versions of python3-catkin-pkg-modules are distributed on packages.ros.org. Therefore, when I tried to install python3-catkin-pkg-modules, sometimes it fails, and sometimes succeeds. Environment Ubuntu 18.04 ROS Melodic How to reproduce the problem $ docker run --rm -it ubuntu:18.04 /bin/sh # apt update && apt install -y lsb-release gnupg # sh -c 'echo "deb http://packages.ros.org/ros/ubuntu $(lsb_release -sc) main" > /etc/apt/sources.list.d/ros-latest.list' # apt-key adv --keyserver 'hkp://keyserver.ubuntu.com:80' --recv-key C1CF6E31E6BADE8868B172B4F42ED6FBAB17C654 # apt update # apt install -y python3-catkin-pkg-modules Reading package lists... Done Building dependency tree Reading state information... Done The following additional packages will be installed: docutils-common libfreetype6 libjbig0 libjpeg-turbo8 libjpeg8 liblcms2-2 libpaper-utils libpaper1 libpng16-16 libtiff5 libwebp6 libwebpdemux2 libwebpmux3 multiarch-support python3-dateutil python3-docutils python3-olefile python3-pil python3-pkg-resources python3-pygments python3-pyparsing python3-roman python3-six sgml-base tzdata ucf xml-core Suggested packages: liblcms2-utils docutils-doc fonts-linuxlibertine | ttf-linux-libertine texlive-lang-french texlive-latex-base texlive-latex-recommended python-pil-doc python3-pil-dbg python3-setuptools ttf-bitstream-vera python-pyparsing-doc sgml-base-doc debhelper The following NEW packages will be installed: docutils-common libfreetype6 libjbig0 libjpeg-turbo8 libjpeg8 liblcms2-2 libpaper-utils libpaper1 libpng16-16 libtiff5 libwebp6 libwebpdemux2 libwebpmux3 multiarch-support python3-catkin-pkg-modules python3-dateutil python3-docutils python3-olefile python3-pil python3-pkg-resources python3-pygments python3-pyparsing python3-roman python3-six sgml-base tzdata ucf xml-core 0 upgraded, 28 newly installed, 0 to remove and 16 not upgraded. Need to get 42.0 kB/3179 kB of archives. 
After this operation, 15.7 MB of additional disk space will be used. Err:1 http://packages.ros.org/ros/ubuntu bionic/main amd64 python3-catkin-pkg-modules all 0.4.20-1 404 Not Found [IP: 64.50.236.52 80] E: Failed to fetch http://packages.ros.org/ros/ubuntu/pool/main/p/python3-catkin-pkg-modules/python3-catkin-pkg-modules_0.4.20-1_all.deb 404 Not Found [IP: 64.50.236.52 80] E: Unable to fetch some archives, maybe run apt-get update or try with --fix-missing? # apt install -y python3-catkin-pkg-modules # try again Reading package lists... Done Building dependency tree Reading state information... Done The following additional packages will be installed: docutils-common libfreetype6 libjbig0 libjpeg-turbo8 libjpeg8 liblcms2-2 libpaper-utils libpaper1 libpng16-16 libtiff5 libwebp6 libwebpdemux2 libwebpmux3 multiarch-support python3-dateutil python3-docutils python3-olefile python3-pil python3-pkg-resources python3-pygments python3-pyparsing python3-roman python3-six sgml-base tzdata ucf xml-core Suggested packages: liblcms2-utils docutils-doc fonts-linuxlibertine | ttf-linux-libertine texlive-lang-french texlive-latex-base texlive-latex-recommended python-pil-doc python3-pil-dbg python3-setuptools ttf-bitstream-vera python-pyparsing-doc sgml-base-doc debhelper The following NEW packages will be installed: docutils-common libfreetype6 libjbig0 libjpeg-turbo8 libjpeg8 liblcms2-2 libpaper-utils libpaper1 libpng16-16 libtiff5 libwebp6 libwebpdemux2 libwebpmux3 multiarch-support python3-catkin-pkg-modules python3-dateutil python3-docutils python3-olefile python3-pil python3-pkg-resources python3-pygments python3-pyparsing python3-roman python3-six sgml-base tzdata ucf xml-core 0 upgraded, 28 newly installed, 0 to remove and 16 not upgraded. Need to get 41.8 kB/3179 kB of archives. After this operation, 15.7 MB of additional disk space will be used. 
Get:1 http://packages.ros.org/ros/ubuntu bionic/main amd64 python3-catkin-pkg-modules all 0.4.18-1 [41.8 kB] Fetched 41.8 kB in 6s (7383 B/s) Investigation I can see that there are three addresses for packages.ros.org. $ host packages.ros.org packages.ros.org is an alias for ftp.osuosl.org. ftp.osuosl.org has address 64.50.233.100 ftp.osuosl.org has address 140.211.166.134 ftp.osuosl.org has address 64.50.236.52 ftp.osuosl.org has IPv6 address 2605:bc80:3010::134 ftp.osuosl.org has IPv6 address 2600:3402:200:227::2 ftp.osuosl.org has IPv6 address 2600:3404:200:237::2 These two have python3-catkin-pkg-modules_0.4.20-1_all.deb. http://140.211.166.134/pub/ros-ubuntu/pool/main/p/python3-catkin-pkg-modules/ http://64.50.233.100/pub/ros-ubuntu/pool/main/p/python3-catkin-pkg-modules/ However, 64.50.236.52 has python3-catkin-pkg-modules_0.4.18-1_all.deb, and when DNS selected this address, it fails. http://64.50.236.52/pub/ros-ubuntu/pool/main/p/python3-catkin-pkg-modules/ You can confirm the problem by directly downloading python3-catkin-pkg-modules_0.4.20-1_all.deb. $ docker run --rm -it ubuntu:18.04 /bin/sh # apt update && apt install -y wget # wget http://packages.ros.org/ros/ubuntu/pool/main/p/python3-catkin-pkg-modules/python3-catkin-pkg-modules_0.4.20-1_all.deb --2020-05-18 11:38:32-- http://packages.ros.org/ros/ubuntu/pool/main/p/python3-catkin-pkg-modules/python3-catkin-pkg-modules_0.4.20-1_all.deb Resolving packages.ros.org (packages.ros.org)... 64.50.236.52, 140.211.166.134, 64.50.233.100, ... Connecting to packages.ros.org (packages.ros.org)|64.50.236.52|:80... connected. HTTP request sent, awaiting response... 404 Not Found 2020-05-18 11:38:38 ERROR 404: Not Found. 
# wget http://packages.ros.org/ros/ubuntu/pool/main/p/python3-catkin-pkg-modules/python3-catkin-pkg-modules_0.4.20-1_all.deb --2020-05-18 11:38:40-- http://packages.ros.org/ros/ubuntu/pool/main/p/python3-catkin-pkg-modules/python3-catkin-pkg-modules_0.4.20-1_all.deb Resolving packages.ros.org (packages.ros.org)... 140.211.166.134, 64.50.233.100, 64.50.236.52, ... Connecting to packages.ros.org (packages.ros.org)|140.211.166.134|:80... connected. HTTP request sent, awaiting response... 200 OK Length: 41968 (41K) [application/x-troff-man] Saving to: 'python3-catkin-pkg-modules_0.4.20-1_all.deb' python3-catkin-pkg-modules_0.4.20-1_all.deb 100%[=====================================================================================================================>] 40.98K 29.4KB/s in 1.4s 2020-05-18 11:38:47 (29.4 KB/s) - 'python3-catkin-pkg-modules_0.4.20-1_all.deb' saved [41968/41968] Is this the correct state, or did I made any mistakes? Originally posted by Kenji Miyake on ROS Answers with karma: 307 on 2020-05-18 Post score: 0 Original comments Comment by gvdhoorn on 2020-05-18: I've seen this as well, but for python3-vcstool (ros-industrial/industrial_ci#509 and ros-industrial-consortium/tesseract#292). I can see that there are three addresses for packages.ros.org. this could be an issue with round-robin DNS and not all hosts being in-sync. Is this the correct state, or did I made any mistakes? No, I believe your analysis is correct. We'll have to wait for either: the hosts to get back in sync by themselves, or the admin(s) of the hosts to force them to sync @tfoote @nuclearsandwich? Edit: now also on ROS Discourse: comment on Preparing for Kinetic Sync 2020-05-13. Comment by Kenji Miyake on 2020-05-18: Thank you for your quick reply. I'll watch the discourse thread. Could you tell me whether this is an accidental problem or could happen regularly? this could be an issue with round-robin DNS and not all hosts being in-sync. 
Do you mean that this kind of problem can frequently happen right after releasing a new version? If so, do you know about how long we should wait for syncing? Also, what is the best way to let admins know the problem when I find it? (As you did, replying to the Preparing thread?) Comment by gvdhoorn on 2020-05-18: Could you tell me whether this is an accidental problem or could happen regularly? This is the first time this has happened, IIRC. Do you mean that this kind of problem can frequently happen right after releasing a new version? No, that's not what I meant. What I wrote is that it seems there is a round-robin DNS configuration and in this particular case one of the hosts involved in that seems to have gone out-of-sync with the others. If so, do you know about how long we should wait for syncing? I would not know. Also, what is the best way to let admins know the problem when I find it? Posting either here or on ROS Discourse, maybe in the Packaging and Release Management category. Or perhaps the Buildfarm category. Comment by gvdhoorn on 2020-05-18: Seems it's being worked on: Incident Report for ROS: packages.ros.org mirrors out of sync. Comment by Kenji Miyake on 2020-05-18: I understand, thank you! I'll wait for a while and check again after it's fixed. Comment by Kenji Miyake on 2020-05-18: Oh, now 64.50.236.52 seems to be synced. Will check other packages as well, just in case. Comment by Kenji Miyake on 2020-05-18: I've tried the installation of ROS melodic + colcon several times using docker, and it seems to be working correctly. Answer: According to Incident Report for ROS: packages.ros.org mirrors out of sync, this should now be fixed: Resolved The mirrors are re-synchronized. Original incident report: We have identified that one of our 3 mirrors of packages.ros.org is out of sync. We're working with our hosting provider to bring the new one up to date.
Originally posted by gvdhoorn with karma: 86574 on 2020-05-19 This answer was ACCEPTED on the original site Post score: 0
{ "domain": "robotics.stackexchange", "id": 34975, "tags": "ros, ros-melodic, installation" }
Why does wrapping aluminium foil around my food help it keep warm, even though aluminium is a good conductor?
Question: Aluminium being such a good conductor, how is it possible that it is helping me keep my food warm? Because ultimately it should conduct the heat that is inside to the outside for exchange and should have no effect (maybe even cool it faster by increasing the surface area). Then why is it that we wrap our food with aluminium foil? How does it keep my food warm? Answer: Being a shiny surface the aluminium sheet reflects radiant heat and reduces the heat loss by radiation by as much as $90\%$. Being impermeable the sheet stops the movement of hot air from the vicinity of the surface of the food into the surroundings by convection currents. This also has the effect of reducing the rate at which water evaporates from the surface of the food, evaporation requiring an input of heat from the food. However, as you point out, aluminium is a good conductor of heat and so does not reduce heat loss by this mechanism, although it does trap a layer of air between the sheet and the food. This does reduce the loss of heat by conduction, as air is a bad conductor of heat. You may have seen these heat-retaining properties exploited in the "space blankets" used at the end of a marathon.
{ "domain": "physics.stackexchange", "id": 99516, "tags": "thermodynamics, everyday-life, heat-conduction" }
Subarray Sum Equals K
Question: I've written a solution to the following leetcode problem: Given an array of integers and an integer k, you need to find the total number of continuous subarrays whose sum equals to k. Example 1: Input: nums = [1,1,1], k = 2 Output: 2 Note: The length of the array is in range [1, 20,000]. The range of numbers in the array is [-1000, 1000] and the range of the integer k is [-1e7, 1e7]. Solution: class Solution: def subarraySum(self, nums, k): """ :type nums: List[int] :type k: int :rtype: int """ T = ['inf' for _ in range(len(nums))] count = 0 for i in range(len(nums)): for j in range(i,len(nums)): if j == i: T[i] = nums[i] if T[i] == k: count +=1 else: currSum = T[j-1] + nums[j] T[j] = currSum if currSum == k: count +=1 return count The solution passes 58/80 test cases, but for some reason, the solution is returning a Time Limit Exceeded exception on an input array with hundreds of elements. I implemented a dynamic programming solution using a 1d array to memoize previous sums so the algorithm should be efficient. Is there any way to optimize this? Answer: These nested loops for i in range(len(nums)): for j in range(i,len(nums)): undoubtedly make the time complexity quadratic. This is the reason for TLE. There is no point to optimize this code; you need to rethink the approach. I don't want to spell out the linear algorithm, and deprive you the fun of finding it. Hint: think of the sliding window. A condition j == i inside the inner loop happens exactly once, and we perfectly know when. Lift it out: for i in range(len(nums)): T[i] = nums[i] if T[i] == k: count +=1 for j in range(i + 1, len(nums)): currSum = T[j-1] + nums[j] T[j] = currSum if currSum == k: count +=1
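As a quick sanity check of the suggested restructuring (my own sketch; `subarray_count` is a hypothetical free-function version of the method), the lifted-out loop still counts the same subarrays as before, while remaining quadratic:

```python
def subarray_count(nums, k):
    """The reviewed O(n^2) approach with the j == i case lifted out of
    the inner loop, as suggested in the review. Still quadratic; this
    only verifies that the restructuring preserves behaviour."""
    T = [0] * len(nums)
    count = 0
    for i in range(len(nums)):
        T[i] = nums[i]              # subarray [i..i]
        if T[i] == k:
            count += 1
        for j in range(i + 1, len(nums)):
            T[j] = T[j - 1] + nums[j]   # sum of nums[i..j]
            if T[j] == k:
                count += 1
    return count
```

For the example input `[1, 1, 1]` with `k = 2` this still returns 2; finding the linear algorithm is left to the reader, as the answer intends.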
{ "domain": "codereview.stackexchange", "id": 32141, "tags": "python, algorithm, python-3.x, programming-challenge, dynamic-programming" }
Finding a Simple Distribution In a Binary String
Question: Unsupervised feature discovery of text that started with its bit string representation would need to discover that octets are the first-order parse of such a bit string. This raises a question: What is the technique called that can discover that a binary string, for example: 0100100111110010101010101011111011001000111000100101110001111110111010111110010111001010100011111110001101100101001010101111000111101011010011111001111101001111101011111011110011011001111000010100110001 has the simple model (with A x B meaning A occurs B times in the bag): {00 x 1, 01 x 2, 10 x 3, 11 x 4} even though it knows only that it should group bits in substrings (tokens) of the same bit length (i.e., it doesn't know it should group bits in pairs)? That is to say, if the binary string input was generated by a perl program: for(0..100){print ( (('00') x 1, ('01') x 2, ('10') x 3, ('11') x 4)[rand(10)])} the technique would reject, as less predictive, the distribution (model): {0 x 7, 1 x 13} and it would also reject a model that used 2 bit tokens on odd-numbered bit boundaries, as well as models that used 3 bit, or longer, tokens. A related, more difficult technique would find the model for a string generated by sampling the bag: {0 x 1, 1 x 1, 00 x 1, 01 x 2, 10 x 3, 11 x 4} That is to say, the bit string is a mix of token sizes. Answer: TL;DR: use maximum likelihood and discrete optimization. Evaluating candidate models: the maximum likelihood principle If you have a candidate model, you can evaluate how well it fits the data using the maximum likelihood principle. If $M$ is a model and $x$ is a string, let $P(x|M)$ denote the probability of outputting string $x$ when $M$ is the true model. Here I assume a generative model that produces $x$ as follows: at each step, it randomly picks one term $g_i \times n_i$ from $M$, appends $n_i$ copies of the string $g_i$ to the output, and repeats until some stopping point (say, stopping once we've generated a string of fixed length).
Of course in practice we have the reverse problem: we have observed a fixed string $x$, and want to infer $M$. Now we'll treat the observation $x$ as fixed. We define the likelihood of $M$ to be $L(M) = P(x|M)$. If we have observed multiple strings $x_1,\dots,x_m$, then we define the likelihood of $M$ to be $L(M) = P(x_1|M) \times \cdots \times P(x_m|M)$. The intuition is: models with larger likelihood fit the data better. So, if you have a choice of multiple models, choose the one with the largest likelihood -- that's the one that seems most consistent with the data. In practice, for computational reasons, we often deal with the log-likelihood, $\log L(M)$. We choose the model whose log-likelihood is largest. Since the log is monotone, this doesn't change anything fundamental. If you're comparing a simple model to a complex model, maximizing the likelihood introduces the risk of overfitting: the likelihood alone doesn't account for Occam's razor, the principle that, all else being equal, simpler models are more likely to represent the truth. This can be fixed by introducing some kind of regularization. Finally, note that the likelihood of a model can be computed efficiently using dynamic programming. For each prefix $w$ of $x$, we compute $L(w)$ in terms of shorter prefixes, working from shorter prefixes to longer ones, until we have computed $L(x)$. If you don't immediately see how to do this computation, ask a separate question; it's a standard dynamic programming exercise. If you're dealing with long strings, you might want to compute using log-likelihoods rather than likelihoods, to avoid underflow. Fixed-length tokens If all tokens have the same length, it's probably fairly easy to find a good model. Assume we know the length $\ell$ of all tokens in the model; if we don't, we can try each possibility for $\ell$, one at a time, and take the one that yields the best model. Since we know the length $\ell$, we can divide the string $x$ up into tokens of length $\ell$.
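A possible shape for that dynamic program (my own sketch; the function name is hypothetical), under the stated generative assumption that each step picks one term uniformly at random from $M$:

```python
def likelihood(x, model):
    """P(x | M) by dynamic programming over prefixes of x. The model is a
    list of (g_i, n_i) pairs; each generation step picks one term
    uniformly at random and appends n_i copies of g_i."""
    chunks = [g * n for g, n in model]      # the string each term emits
    k = len(model)
    L = [1.0] + [0.0] * len(x)              # L[0] = 1 for the empty prefix
    for j in range(1, len(x) + 1):
        for c in chunks:
            if len(c) <= j and x[j - len(c):j] == c:
                L[j] += L[j - len(c)] / k   # last step emitted chunk c
    return L[len(x)]
```

For instance, `likelihood("01", [("0", 1), ("1", 1)])` is 0.25: two steps, each choosing the right one of two terms. In practice you would work with log-likelihoods for long strings, as noted above.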
In this way we can see the set of all tokens that appear in $x$, say $t_1,\dots,t_k$. Now we know that the model must be of the form $$M = \{t_1 \times n_1, \dots, t_k \times n_k\}$$ and we merely need to infer the numbers $n_1,\dots,n_k$. Let's focus on the token $t_1$ and see how to infer $n_1$. We can find all occurrences of $t_1$ in $x$, and combine them into sequences of contiguous repeats, and let $S_1$ denote the set of repeat lengths. For instance, if at one place we see $t_1$ repeated 3 times consecutively, and at another place we see $t_1$ repeated 9 times consecutively, then we have $S_1 = \{3,9\}$. At this point we simply take $n_1 = \gcd S_1$, i.e., $n_1$ is the largest number that divides every element of $S_1$. We'll of course repeat this for each token $t_i$. We end up with a complete model, as desired. A technical detail: This assumes that each token $t_i$ is listed only once in $M$, with a single repeat-factor $n_i$. In other words, it assumes the model is allowed to look like $\{00 \times 4\}$ but not $\{00 \times 2, 00 \times 3\}$ (the latter has the token $00$ with two different repeat-factors). If you want to consider the latter kind of model, the problem reduces to finding a set of repeat-factors $R_1$ such that every element of $S_1$ can be expressed as a linear combination of $R_1$. The optimal solution will depend on the form of regularization you use; without regularization, the optimal solution will always be to simply take $R_1$ to have a single element, $R_1 = \{\gcd S_1\}$. So if you want to consider models where the same token appears twice, you'll need to specify a particular form of regularization (ask a new question). For now, I'll assume such models aren't of interest. So this shows how to solve the problem, in the easy case where all tokens have the same length. Variable-length tokens Handling models where the lengths of the tokens are not all the same looks much more challenging. 
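Before tackling that, the fixed-length procedure just described (split into length-$\ell$ tokens, collect the lengths of maximal runs of each token, take the gcd) is short enough to sketch in code; the function name and the deterministic toy string below are my own illustration, not from any library:

```python
from math import gcd
from functools import reduce

def infer_fixed_length_model(x, ell):
    # split x into tokens of length ell (assumes len(x) is a multiple of ell)
    toks = [x[i:i + ell] for i in range(0, len(x), ell)]
    model = {}
    for t in set(toks):
        # lengths of maximal runs of consecutive occurrences of t
        runs, run = [], 0
        for tok in toks:
            if tok == t:
                run += 1
            elif run:
                runs.append(run)
                run = 0
        if run:
            runs.append(run)
        # the repeat-factor n_t is the gcd of the observed run lengths
        model[t] = reduce(gcd, runs)
    return model
```

On the deterministic string '00' + '01'*2 + '10'*3 + '11'*4 this recovers {00 x 1, 01 x 2, 10 x 3, 11 x 4}; on real data you would run it for each candidate $\ell$ and keep the result with the highest likelihood, as described above.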
I can suggest one possible approach, but the best approach will probably depend on the parameter settings you're encountering in practice. I suggest reducing this to a discrete optimization problem. In particular, I suggest you identify a set of tokens $t_1,\dots,t_k$ that you're confident will be a superset of the ones in the real model, and then use optimization methods to solve for the repeat-factors $n_1,\dots,n_k$ that maximize the likelihood of the model. In more detail: Fix the set of $t_1,\dots,t_k$. Now the model looks like $$M = \{t_1 \times n_1, \dots, t_k \times n_k\}$$ where the $t_i$'s are known and the $n_i$'s are unknown (variables). Consequently we can think of the likelihood $L(M)$ as a function of the $n_i$'s: given any candidate values for $n_1,\dots,n_k$, we can compute $L(M)$ using dynamic programming. So, I'd suggest you use some existing optimization strategy to find $n_1,\dots,n_k \in \mathbb{N}$ that maximize $L(M)$. A natural approach is probably some form of local search, e.g., hillclimbing, hillclimbing with random restarts, or simulated annealing. A suggestion for a set of "local moves" would be to pick a single $n_i$ and change it via one of the following operations: multiply $n_i$ by a small prime number; divide $n_i$ by a small prime divisor of it; set $n_i$ to zero; change $n_i$ from zero to a small number; increment $n_i$; decrement $n_i$. How do we find the set $t_1,\dots,t_k$ of tokens? Here a convenient fact is that we don't have to get this set exactly right; it suffices for it to be a superset of the true set of tokens in the actual model. In particular, setting $n_i=0$ is equivalent to removing the token $t_i$ from the model entirely. So, we can choose a larger-than-necessary set of tokens $t_1,\dots,t_k$ and let the optimization routine effectively solve for which tokens should be retained and which should be eliminated. 
One heuristic would be to choose $t_1,\dots,t_k$ to be the set of all bit-strings of a certain range of lengths (e.g., all bit-strings of length 2 or 3). Another heuristic would be to use some kind of filtering condition: use the set of all bit-strings $t$ that appear at least some minimum number of times in $x$. The nice thing is that we can try each of these choices in turn, apply the optimizer to each, get a list of candidate models, and choose the best one (using the maximum-likelihood principle). For instance, it might not be clear how to choose a threshold for the filtering, but we can try multiple values in an exponentially decreasing sequence and keep the best model obtained. Similarly, it's also possible to come up with heuristics for the initial values of $n_1,\dots,n_k$ to feed to the optimizer (this will help some optimizers converge to a better solution). For instance, for each token $t_i$ and each candidate repeat-factor $r$, you could count the number of times that $t_i$ appears repeated $r$ times in a row, then choose the value of $r$ that has the highest count as the initial guess for $n_i$. How well will this work? I don't know. It will probably depend a lot on the parameters of the problem instances you run into in practice. I would suggest you try it on your data sets, with several different optimization methods and fiddling with the parameters a bit. If it doesn't work, ask another question where you show us what you've tried, and also show us the typical range of values for the most important parameters: the number of tokens in the model ($k$), the range of lengths of the tokens themselves, the range of values of the repeat-factors $n_i$, the length of the string $x$.
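To tie the pieces together, here is the prefix dynamic program for the likelihood itself, under the uniform-choice generative model assumed at the start (plain probabilities for brevity; switch to log-space for long strings, as noted above — the helper name is just for illustration):

```python
def likelihood(x, model):
    # model: dict mapping token g -> repeat-factor n; at each step the
    # generator picks one term uniformly and emits g repeated n times
    chunks = [g * n for g, n in model.items()]
    p = 1.0 / len(chunks)
    # L[i] = probability of generating exactly the prefix x[:i]
    L = [0.0] * (len(x) + 1)
    L[0] = 1.0
    for i in range(1, len(x) + 1):
        L[i] = sum(p * L[i - len(c)] for c in chunks
                   if len(c) <= i and x[i - len(c):i] == c)
    return L[len(x)]
```

For example, on the string '0101' the model {'01': 2} scores 1.0 while {'0': 1, '1': 1} scores only 0.0625, so maximum likelihood prefers the former — exactly the behavior the question asks for (modulo a regularizer to penalize overly complex models).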
{ "domain": "cs.stackexchange", "id": 8142, "tags": "machine-learning, pattern-recognition" }
catkin export cmake definitions/variables to depending packages
Question: Is there a way to set cmake variables in one package and export them to all depending packages? For clarity: If I set a cmake variable in one package's CMakeLists.txt like: set( My_Var "Hello World" ) it now has the value in this package, and I can e.g. echo it like message( "My_Var = ${My_Var}" ) which will result in the output: My_Var = Hello World However, if I put the same output statement in another package (which depends on the previous one), the result is empty: message( "My_Var = ${My_Var}" ) Resulting output: My_Var = Apparently, both packages have separate cmake namespaces. Now, is there a way to set a variable in one package, mark it in some way to be exported into depending packages, and access its value in the depending packages? Originally posted by Wolf on ROS Answers with karma: 7555 on 2014-09-10 Post score: 2 Answer: Found solution: *########################################################################## 1 ) created a folder cmake in the "first package" and added a file pkg_config_extras.cmake into it. 
It contains:

cmake_minimum_required(VERSION 2.8.3)
set( My_Var "Hello World" )

*##########################################################################

to make My_Var viewable in the "first package" 's CMakeLists.txt I added:

include( cmake/pkg_config_extras.cmake )

----> now I can output it there with correct result:

message( "My_Var = ${My_Var}" )

----> Result: My_Var = Hello World

*##########################################################################

to make My_Var viewable in the "second package" 's CMakeLists.txt ("second package" depends on the "first package") I needed to add in the "first package" 's CMakeLists.txt a CFG_EXTRAS when calling catkin_package:

catkin_package(
  INCLUDE_DIRS include                # unchanged, whatever was there before
  LIBRARIES ${PROJECT_NAME}           # unchanged, whatever was there before
  CATKIN_DEPENDS roscpp               # unchanged, whatever was there before
  DEPENDS OpenCV                      # unchanged, whatever was there before
  CFG_EXTRAS pkg_config_extras.cmake  # added this line !!!!
)

----> now I can also output it in the "second package" 's CMakeLists.txt with correct result:

message( "My_Var = ${My_Var}" )

----> Result: My_Var = Hello World

Originally posted by Wolf with karma: 7555 on 2014-09-10 This answer was ACCEPTED on the original site Post score: 1
{ "domain": "robotics.stackexchange", "id": 19359, "tags": "catkin, cmake" }
Multiple rounds of bipartite matching problem
Question: I have a set of investors (say n), and a set of startups (say m). At the start, I have all the investors say either yes or no to the startup (which corresponds to whether they want to interact with them or not). Now, I have to conduct a networking event of r rounds considering this scenario. In each round, I match investors with startups they want to meet (one investor with one startup). I want to conduct the rounds such that the sum of the startups they met, across all investors, is maximum. Now, if r=1, this reduces to the maximum bipartite matching problem. Does anyone have any ideas on how to solve this for r>1? I have been thinking along the lines of having the source and sink edge capacities to be r instead of 1, but I am unable to prove that this works. Answer: You can solve this by a mild extension of the standard reduction to network flow: increase the capacity of the edges touching the source and the sink to $r$. Any integer flow corresponds to a union of $r$ matchings: An integer flow corresponds to a bipartite graph of maximum degree $r$. We can complete this bipartite graph to an $r$-regular bipartite graph by adding vertices and edges: Add vertices so that both sides have the same number of vertices. If the graph is not $r$-regular, then there must be two vertices $x,y$ (one on each side) with degree less than $r$. Add $r$ new vertices $x_1,\ldots,x_r$ and $y_1,\ldots,y_r$ to each side (respectively), and connect all of them except $x_1$ and $y_1$. Connect $x$ to $y_1$ and $y$ to $x_1$. This reduces the overall deficiency of the graph $\sum_v (r - \deg(v))$ by 1. Repeated applications of Hall's theorem show that the new graph can be decomposed into $r$ perfect matchings. Restricting these matchings to the original vertices and edges, we obtain a union of $r$ matchings. Schrijver mentions this problem in his Combinatorial Optimization (Vol 1): Corollary 21.4b. Let $G = (V,E)$ be a bipartite graph and let $k \in \mathbb{Z}_+$. 
Then the maximum size of the union of $k$ matchings is equal to the minimum value of $k|V \setminus X| + |E(X)|$ taken over $X \subseteq V$. This follows by an application of the min cut max flow theorem.
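A quick sanity check of the reduction in code (hand-rolled Edmonds-Karp, since the example graph is tiny; this assumes each investor-startup pair should meet at most once across all rounds, hence capacity 1 on the middle edges — all names here are illustrative):

```python
from collections import deque, defaultdict

def max_flow(cap, s, t):
    # Edmonds-Karp: repeatedly augment along shortest residual paths
    flow = 0
    while True:
        parent = {s: None}
        q = deque([s])
        while q and t not in parent:
            u = q.popleft()
            for v, c in cap[u].items():
                if c > 0 and v not in parent:
                    parent[v] = u
                    q.append(v)
        if t not in parent:
            return flow
        path, v = [], t
        while v != s:
            path.append((parent[v], v))
            v = parent[v]
        aug = min(cap[u][v] for u, v in path)
        for u, v in path:
            cap[u][v] -= aug
            cap[v][u] = cap[v].get(u, 0) + aug
        flow += aug

def r_round_total(investors, startups, wants, r):
    # the reduction from the answer: source/sink edges get capacity r,
    # each willing investor-startup pair gets a unit-capacity edge
    cap = defaultdict(dict)
    for i in investors:
        cap['source'][i] = r
    for j in startups:
        cap[j]['sink'] = r
    for i, j in wants:
        cap[i][j] = 1
    return max_flow(cap, 'source', 'sink')
```

With investors i1, i2, startups s1, s2, wants {(i1,s1), (i1,s2), (i2,s1)} and r = 2, the optimum is 3 total meetings, versus 2 for a single round.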
{ "domain": "cs.stackexchange", "id": 12787, "tags": "graphs, bipartite-matching" }
[Solved] Turtlebot simulation. Depth image duplicated
Question: Hi there, I'm working with the turtlebot simulator using ROS Fuerte. I have a problem with visualising (and subsequently using) the depth image. The depth image I receive looks like this, http://postimg.org/image/q8lsehaon/. As you can see, the depth image looks correct, but is duplicated six times. I've used my own conversion method (which produced the posted image) and rviz to display the image. Both produce the same result, albeit rviz makes the depth image flicker. Has anyone had any experience with this or has an idea of how to solve this? The depth image is published on /depth/image_raw. One oddity is that the topic /depth/camera_info does not produce any messages. rostopic echo /depth/camera_info WARNING: no messages received and simulated time is active. Is /clock being published? The coloured images are published via the topic /camera/image_raw and /camera/camera_info publishes the camera info. Both work correctly and display the correct image. Thanks for any help you can provide. Kind regards, Bram Originally posted by Bram Ridder on ROS Answers with karma: 3 on 2013-05-08 Post score: 0 Answer: I think you are facing the same issue as here: http://answers.ros.org/question/45659/openni-gazebo-plugin-stepsize-for-depth-images-bug/ IIRC, the patch linked at the end of the ticket was not backported to Fuerte. You can try building gazebo from source after applying the patch. Originally posted by piyushk with karma: 2871 on 2013-05-08 This answer was ACCEPTED on the original site Post score: 2 Original comments Comment by Bram Ridder on 2013-05-09: Thank you very much! That was it :).
{ "domain": "robotics.stackexchange", "id": 14109, "tags": "ros, simulation, turtlebot, camera, depth" }
Representation of the Lorentz group
Question: Is there any representation of the Lorentz group where $$U^{-1} f(x) U = f(\lambda^{-1}x)$$ other than the (0,0) representation? If not then is it possible for a field (with a well defined polynomial basis) to behave like a scalar field under the Lorentz group? Will such fields still be called the (0,0) representation of the Lorentz group? Answer: It is precisely one of the Wightman axioms that the infinite-dimensional unitary representation 1 $U : \mathrm{SO}(1,3)\to\mathrm{U}(\mathcal{H})$ on the space of states $\mathcal{H}$ of the theory upon which the field acts as an operator is compatible with the field transformation law under the finite-dimensional representation $\rho_\text{fin}: \mathrm{SO}(1,3)\to\mathrm{GL}(V)$ where $V$ is the target space of the field. For a real scalar field, $V=\mathbb{R}$ and $\rho_\text{fin}$ is the trivial representation. Being "compatible" means that $$ U(\Lambda)^\dagger\phi_i(x)U(\Lambda) = \sum_j\rho_\text{fin}(\Lambda)_{ij}\phi_j(\Lambda^{-1}(x))$$ holds as an operator equation on the space of states. Now, if $\phi$ is scalar, then $\rho_\text{fin}$ is trivial. However, this does not mean, in any way, that $U$ is trivial. The infinite-dimensional unitary representations of the Poincare group $\mathrm{SO}(1,3)\ltimes \mathbb{R}^4$ are given by Wigner's classification, and the scalar field creates particles with mass and momentum, so the unitary representation is not trivial - the trivial unitary representation is just the vacuum. 1 No finite-dimensional representation can be unitary.
{ "domain": "physics.stackexchange", "id": 26047, "tags": "special-relativity, field-theory, group-representations, lorentz-symmetry" }
Running Ros Openni Tracker from bag files
Question: Hi, I have a bag file which has /camera/depth_registered/image_rect_raw. I was thinking to receive data and create depth nodes from this topic in ros openni tracker rather than the live data from the sensor. As far as I have investigated, running on recorded data in OpenNI works only for '.oni' files. ROS openni tracker works well with a recorded oni file, but how to make it run on the raw depth data from a bag file is the question. One option could be to run the NiViewer and record the data in an .oni file in parallel, but it does not seem to be a nice solution as it will make the frame rate drop low. Any help would be appreciated thanks. Originally posted by tayyab on ROS Answers with karma: 59 on 2012-05-10 Post score: 3 Answer: I managed to do it back then. You can find the respective piece of code here. Originally posted by tayyab with karma: 59 on 2013-08-21 This answer was ACCEPTED on the original site Post score: 0
{ "domain": "robotics.stackexchange", "id": 9345, "tags": "ros, openni, openi-tracker, bagfiles" }
Where does the energy produced by fusion come from?
Question: Fission, in layman's (or "initiate's") terms, is easy enough to understand; a large atom with a lot of protons and neutrons requires a large amount of force provided by the strong interaction to overcome electromagnetism and keep the nucleus together. The necessary energies were imparted to the atom during its formation, which we can replicate to some degree within a nuclear reactor; a combination of heat and fast-moving free protons/neutrons recaptured by the atoms of the fuel turns uranium into plutonium, even as the actual atom-splitting produces much lighter krypton and barium isotopes (or a host of other possibilities, some more likely than others, as with many other types of reactions). That atom-splitting, by the way, reduces the size of the resulting nuclei, and therefore reduces the amount of strong interaction force required to contain them; the leftover energy is released as a high-frequency gamma photon. Fusion, however, is an odder beast. A fusion reaction requires a large amount of starting energy; enough to strip the electrons off of the deuterium/tritium nuclei and form plasma. That heat is also enough to accelerate the particles fast enough that when they collide, the initial electromagnetic repulsion is overcome and the strong interaction binds them together. Now, that's a lot of energy required; the Sun's inner core where most of the fusion occurs is estimated to be about 15 million Kelvin. The reaction, however, doesn't seem to release any energy, based at least on this simplistic explanation. Now, obviously that's wrong; pretty much all the energy we have available to us right now is, however indirectly, a result of the Sun bathing us in the energy from nuclear fusion. The rest of it, such as from nuclear fission, is also star-based, via creation of superheavy elements in stellar nucleogenesis. But where is the source of this energy on the subatomic level? 
Surely more strong force is required to contain a helium atom than a hydrogen one, because the helium now actually has two protons and must actually overcome the Coulomb force (as opposed to hydrogen, which only has one proton and so the Coulomb force between the proton and its zero-charge neutrons is lesser). So, what imbalance does fusion induce within the nucleus that is then resolved with the release of a photon? Answer: There are other interactions to consider besides the Coulomb interaction. A very nice model of the nucleus is the liquid drop model, in which one models it as a constant-density liquid with various interparticle interactions. The result is known as the semi-empirical mass formula, which I summarize here. Let $Z$ be the number of protons, $N$ the number of neutrons, and $A = N + Z$ the mass number. The "volume" of the drop scales as $A$, the "surface area" scales as $A^{2/3}$, and the "radius" and typical interparticle separation scale as $A^{1/3}$. Coulomb repulsion therefore scales as $Z^2/A^{1/3}$, but there are other energy considerations. In general, we expect the binding energy to be something like $$ E_\text{B} = a_\text{V} A - a_\text{S} A^{2/3} - a_\text{C} \frac{Z^2}{A^{1/3}} - a_\text{A} \frac{(N-Z)^2}{A} + \chi a_\text{P} \frac{1}{A^{1/2}}, $$ where the five parameters $a_\text{X}$ are (hopefully) constant across all nuclei so we can fit this formula to empirical data. The energy has contributions from volume (V), surface area (S), the Coulomb interaction (C), and proton-neutron asymmetry (A). The last term tries to take into account spin pairing, and in my notation $$ \chi = \begin{cases} 1, & \text{$Z,N$ even} \\ 0, & \text{$A$ odd} \\ -1, & \text{$Z,N$ odd.} \end{cases} $$ The linked article contains more detail and further references if you are interested. While this formula does have its limitations (after all, it neglects all sorts of messy quantum-ness), it is still rather illustrative and it works remarkably well. 
For simplicity, let's neglect the pairing term. Also, let's take $Z = N = A/2$, which is pretty good for the most stable light elements, which is what you are most interested in. One particular fit cited in that article gives $a_\text{V} = 15.8~\mathrm{MeV}$, $a_\text{S} = 18.3~\mathrm{MeV}$, and $a_\text{C} = 0.714~\mathrm{MeV}$. With these numbers we can plot the energy as a function of $A$. However, you have to be careful to plot the energy per nucleon, which I'll denote $\epsilon = E_\text{B}/A$. This is useful because the number of nucleons is conserved in all fission/fusion reactions. The result is below. As you can see, fusing light elements makes for more tightly bound products. This is because of those other terms. You can think of it as intermediate-mass elements striking a balance between long-range Coulomb repulsion and strong force nearest-neighbor attraction. Nucleons like to have neighbors on all sides but not to have too many neighbors off in the distance. You can also look at the nuclear mass per nucleon, $m = (Z/A) m_\text{p} + (N/A) m_\text{n} - \epsilon$, plotted below. If we allowed the proton-neutron ratio to vary, this would give a slightly different answer to which nucleus is most stable. Indeed, direct measurements tell us Ni-62 has the greatest binding energy per nucleon, while Fe-56 has the lowest mass per nucleon.
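The same point can be made numerically in a few lines, using the fitted constants above with $Z = N = A/2$ and the pairing term dropped, as in the text (the scan itself is just an illustration):

```python
a_V, a_S, a_C = 15.8, 18.3, 0.714  # MeV, from the fit cited above

def eps(A):
    # binding energy per nucleon with Z = N = A/2 and pairing neglected
    E_B = a_V * A - a_S * A ** (2 / 3) - a_C * (A / 2) ** 2 / A ** (1 / 3)
    return E_B / A

# scan even mass numbers: the curve peaks in the iron/nickel neighborhood
best_A = max(range(2, 240, 2), key=eps)
```

Within this smooth model eps(4) comes out far below eps(56) (the model misses shell and pairing effects, so it understates how special He-4 is), and eps falls off again for heavy nuclei — which is why fusion of light nuclei and fission of heavy ones both release energy.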
{ "domain": "physics.stackexchange", "id": 6413, "tags": "fusion" }
How to get a command line value in the launch file
Question: I have a situation with several similar robots as well as the same robot with slightly different configurations. Right now, that means I have a bunch of almost identical launch files, and I'd like to reduce that for maintainability. The difficulty I'm having is figuring out how to get the info into my launch file. So far I've been able to group collections of nodes, but I'd like to be able to en/disable them from the command line. Here's what I'm talking about:

def generate_launch_description():
    node_list = []
    node_list += some_nodes()
    node_list += some_other_nodes()
    return launch.LaunchDescription(node_list)

What I'd like to do is get a command line option to enable or disable some other sets of things, so something like this:

def generate_launch_description():
    node_list = []
    node_list += some_nodes()
    node_list += some_other_nodes()
    if(foo):
        node_list += foo_nodes()
    if(bar):
        node_list += bar_nodes(bar)
    return launch.LaunchDescription(node_list)

How do I get foo and bar from the command line? So far LaunchConfigurations don't seem to be correct. Originally posted by Barty on ROS Answers with karma: 25 on 2022-07-11 Post score: 0 Answer: Have a look at this launch file from the nav2 system. To get access to runtime arguments, you first provide a LaunchConfiguration (like for namespace, at line 34), and then DeclareLaunchArgument (like lines 46-49). At this point, if you launch this file, you can use the syntax ros2 launch nav2_bringup bringup_launch.py namespace:=mycustomnamespace to specify the input. I found it useful to read the design documentation to understand the system a little better. It can be a little verbose, but it makes sense when you squint your eyes a little. :) Originally posted by Per Edwardsson with karma: 501 on 2022-07-12 This answer was ACCEPTED on the original site Post score: 1
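Putting that pattern together for the asker's case, a minimal sketch (untested here and assuming a ROS 2 environment; the argument name foo and the node choice are hypothetical placeholders):

```python
from launch import LaunchDescription
from launch.actions import DeclareLaunchArgument
from launch.conditions import IfCondition
from launch.substitutions import LaunchConfiguration
from launch_ros.actions import Node

def generate_launch_description():
    foo = LaunchConfiguration('foo')
    return LaunchDescription([
        DeclareLaunchArgument('foo', default_value='false',
                              description='enable the optional foo nodes'),
        # the condition gates this node on the runtime value of foo
        Node(package='demo_nodes_cpp', executable='talker',
             condition=IfCondition(foo)),
    ])
```

Then ros2 launch mypkg robot.launch.py foo:=true enables the extra nodes. Because foo is only resolved at launch time, you gate nodes with conditions (or an OpaqueFunction) rather than a plain Python if.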
{ "domain": "robotics.stackexchange", "id": 37841, "tags": "python, ros2, roslaunch" }
Why is my aluminum foil emitting the same amount of infrared as the other objects in my room?
Question: I got an infrared thermometer (SOVACARTE HS980E) recently and have been playing around with it. My understanding is that it works by measuring the amount of infrared radiation coming off of the object, in terms of W per $m^2$, and then basically using the Stefan-Boltzmann law to determine what its temperature should be. It has an "emissivity" setting which defaults to 0.95. Further from what I understand, aluminum foil has a very low emissivity (0.03), so aluminum foil of the same temperature as a 'normal' object (e.g. of emissivity 0.95) should emit far less infrared radiation: [source] Thus I expected that when I measured the foil with the same settings as I do other objects in the room, the foil would register as far colder because it is emitting less infrared radiation. But the measurement is the same! I measured the following: Top of drawers: 23°C-24°C Aluminum foil: 23°-25°C Ice cream fresh out the freezer (to verify it's working): -10°C to -9°C Since the IR thermometer doesn't know what it's looking at, the only conclusion I can draw is that the aluminum foil is emitting just as much infrared as any other object at room temperature in the room. Is that correct, and if so why is that? And if not, what explains the readings? Answer: The thermometer is measuring the infrared from the walls and air and other objects in the room, reflected off of the aluminum foil, which reflects 97% of it. That plus its own 3% equals the same 'temperature' as the other objects. To confirm this, I took my wallet and brought it from inside to outside on the lawn, and it measured the same temperature (~25°C). When I took the foil out, the measurement dropped as I went outside, and started reading -8°C to -10°C above the lawn. The ground of the lawn itself registered as ~15°C, and when I put the foil near the ground and measured upwards (from the ground to the foil), it, too, registered as ~15°C. 
Interestingly, the lower I set the 'emissivity' on the thermometer, the lower the readings get, which seems backwards. It may not be the highest-quality of products ...
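The arithmetic backs this up. A small graybody sketch (a simplified model of my own, not the vendor's actual algorithm): the sensor sees emitted plus reflected radiance, and the readout inverts that with the dialed-in emissivity and an ambient estimate.

```python
SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def apparent_temp_c(t_obj_c, t_surr_c, eps_obj, eps_set=0.95):
    """Reading of an IR thermometer set to eps_set, pointed at an object
    of true emissivity eps_obj with surroundings at t_surr_c (assumed
    graybody model; the ambient estimate is taken equal to the surroundings)."""
    To = t_obj_c + 273.15
    Ts = t_surr_c + 273.15
    # radiance reaching the sensor: emitted plus reflected background
    W = eps_obj * SIGMA * To ** 4 + (1 - eps_obj) * SIGMA * Ts ** 4
    # invert the same model with the dialed-in emissivity setting
    T4 = (W - (1 - eps_set) * SIGMA * Ts ** 4) / (eps_set * SIGMA)
    return T4 ** 0.25 - 273.15

# apparent_temp_c(24, 24, 0.03)  -> 24.0: reflection fills in the missing emission
# apparent_temp_c(24, -40, 0.03) -> about -37: the foil "reads" the cold sky
```

So indoors the foil reads room temperature no matter its emissivity, while outdoors the same foil mostly mirrors the cold sky, matching the observations above.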
{ "domain": "physics.stackexchange", "id": 94561, "tags": "thermodynamics, temperature, reflection, thermal-radiation, infrared-radiation" }
Hardness and softness are terms used to describe which aspect of acidic behaviour?
Question: What do hardness and softness mean? Is it something to do with the polarisability? Answer: The terms softness and hardness are used in HSAB theory to distinguish Lewis acids and bases by charge density, polarizability, electronegativity, and, in the case of molecular compounds, HOMO and LUMO energies. Charge density is inversely proportional to atomic/ionic radius and proportional to oxidation state (in the case of a neutral atom) or charge (in the case of an ion). Hence charge density increases with decreasing radius and increasing oxidation state. Polarizability is the tendency of the electron shell of an atom/molecule to become deformed in an electric field, which can originate e.g. from the charge of an adjacent ion or the partial charge of a dipole. An electron cloud of an atom or ion is more easily deformed with increasing volume, as it is more diffuse, and its outer electrons tend to be more loosely bound. Therefore, larger atoms and molecules tend to be more polarizable, and higher polarizability is often accompanied by lower charge density or oxidation state. In the case of molecules, when a Lewis acid reacts with a Lewis base, electrons from the highest occupied molecular orbital (HOMO) of the Lewis base are transferred to the lowest unoccupied molecular orbital (LUMO) of the Lewis acid. How well this transfer of electron density works depends on the relative energies of the acid LUMO and the base HOMO. The stability of the reaction product will increase with the energy difference between its HOMO and LUMO (source). The HSAB theory predicts that hard Lewis bases react preferentially with hard acids, and that likewise soft bases form stronger bonds with soft acids. For example, $\ce{Li+}$ and $\ce{BF3}$ are hard acids and $\ce{F-}$ and $\ce{NH3}$ are hard bases due to their small size and high charge density/low polarizability of the ions. 
$\ce{BF3}$ therefore readily forms Lewis acid-base adducts with fluoride and ammonia, and $\ce{LiF}$ is less soluble in water than $\ce{LiCl}$ and $\ce{LiBr}$. $\ce{Ag+}$ is an example for a soft acid (large ion with low charge), and the halide ions become increasingly softer bases with increasing ionic radius. Silver therefore forms stronger bonds with heavier halides like $\ce{I-}$ and $\ce{Br-}$, and this also explains the decreasing solubility of the silver halides in water, which is classified as a hard solvent. In the case of soft acids and bases, the difference of electronegativity tends to be smaller, and the bonds have more covalent character (e.g. silver iodide) than those between hard acids and bases (e.g. lithium fluoride) with larger difference in electronegativity (source). The shift of reaction equilibria can also be predicted with this concept, like in the following example (reference): $$\ce{HgO + H2S \rightleftharpoons HgS + H2O}$$ This equilibrium will be shifted to the right side, because $\ce{Hg^2+}$ as a soft acid prefers to bond to the soft base $\ce{S^2-}$. Likewise, oxygen as a hard base forms stronger bonds with hydrogen (hard acid).
{ "domain": "chemistry.stackexchange", "id": 1279, "tags": "hsab" }
Why can't the callee whom I am on call with hear the sound coming from my phone's speaker but I can while my phone is on loud speaker?
Question: I was on call with my friend and had put the phone on loud speaker, and I opened YouTube. The sound was audible to me but not to my friend. How is it even possible that he can hear my voice but not the sound coming from the speakers? Answer: First of all, welcome to DSP SE. In general, most (if not all) smartphones nowadays have some kind of noise cancellation technology integrated. The simplest of all is to have two microphones, one which is supposed to record the noise and the other one which records the voice with the noise (please keep in mind that this is an oversimplification of the problem). Then by subtracting the one from the other you end up with the voice without the noise. Nowadays, there are some more sophisticated techniques to perform denoising such as echo-cancellation, active-noise-control schemes and many more, or even neural-network-based denoisers. In addition to that, the software running in the background may also perform some kind of speech recognition in order to get the speech components of the signal. Alternatively, some source-separation techniques could be used, but to be honest I am not sure how effective they could be. All in all, the algorithm running in the background tries to separate your voice from the background (effectively, any other sound) and transmit only your voice. Most of the aforementioned techniques are adaptive filtering techniques, which means that they present a time-variable response. On top of that, they most probably will affect your voice. This is quite easy to experience yourself. Just try to move your phone around while you speak (or the person who you talk to should do that instead for you to hear the difference) or start and stop some sounds in the background abruptly. I hope that this answer provides some insight into the general scheme of the process running in the background while you perform either phone or video calls. 
I know that this may not be a direct answer, but it is difficult to be more precise without knowing the exact model and conditions (and, to be honest, even then you would most probably not be able to know exactly how it is done, since this constitutes the "magic" the company integrates into their products).
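To make the two-microphone idea concrete, here is the textbook LMS adaptive-noise-cancellation setup in a few lines of Python — a deliberately oversimplified toy, not what any actual handset runs. The same structure underlies echo cancellation of the phone's own speaker output, with the known playback signal used as the reference:

```python
import math
import random

def lms_cancel(primary, reference, taps=4, mu=0.02):
    """Least-mean-squares canceller: adaptively learn how the noise-only
    reference mic leaks into the primary mic, estimate that leakage,
    and subtract it so that (roughly) only the voice remains."""
    w = [0.0] * taps
    out = []
    for n in range(len(primary)):
        x = [reference[n - k] if n - k >= 0 else 0.0 for k in range(taps)]
        y = sum(wi * xi for wi, xi in zip(w, x))   # estimated noise in primary
        e = primary[n] - y                          # noise-cancelled sample
        w = [wi + 2 * mu * e * xi for wi, xi in zip(w, x)]  # LMS weight update
        out.append(e)
    return out

# toy demo: a slow "voice" plus scaled white noise on the primary mic
random.seed(1)
N = 4000
voice = [0.5 * math.sin(0.01 * n) for n in range(N)]
noise = [random.uniform(-1, 1) for _ in range(N)]
primary = [v + 0.8 * z for v, z in zip(voice, noise)]
cleaned = lms_cancel(primary, noise)
```

After the filter converges, the residual tracks the voice while the background is largely removed, which is exactly why the far end hears you but not your phone's own speaker.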
{ "domain": "dsp.stackexchange", "id": 9437, "tags": "signal-detection, sound, signal-power" }
Photon - Neutrino interaction
Question: Suppose we have a laser source, that means a coherent laser beam formed by 'in phase' photons. Is it possible to calculate how many photons can change their wavelength in neutrino - photon interaction if any? If there is no such wavelength change, can some photon lose the phase coherence in the interaction with neutrino? Obviously I suppose there could be a difference between muonic, electronic and tau neutrino but how to take this into account? Thanks Answer: Is it possible to calculate how many photons can change their wavelength Photons do not have a wavelength, just an energy equal to h*nu, where nu is the frequency of the classical electromagnetic light, the laser light. Photons can scatter off various particles, and if the scattering is inelastic, a new photon goes off. in neutrino - photon interaction if any? Neutrinos can couple to electromagnetic fields only in higher orders in perturbation theory, and the coupling constants are so small that any scattering will be unmeasurable. If there is no such wavelength change, can some photon lose the phase coherence in the interaction with neutrino? The phase coherence can only exist with the laser light; as individual photons just scatter, it would be the emergent beam that could display a classical phase difference, as laser light on a medium does. As explained above, neutrinos are not realistic targets for laser light. Obviously I suppose there could be a difference between muonic, electronic and tau neutrino but how to take this into account See above.
{ "domain": "physics.stackexchange", "id": 75723, "tags": "quantum-field-theory, particle-physics, photons, neutrinos, laser-interaction" }
Clarification about Wald's notation on his General Relativity Book
Question: In equation 4.1.4 Wald describes the metric in pre-relativistic physics as, $$h_{ab} = \sum_{\mu, \nu} h_{\mu\nu}(\mathrm{d}x^{\mu})_a (\mathrm{d}x^{\nu})_b\tag{4.1.4}$$ with $h_{\mu\nu} = \mathrm{diag}(1,1,1)$. My confusion is about what exactly is meant by $(\mathrm{d}x^{\mu})_a$ here? Answer: Greek indices are used to label elements of an ordered set. For example, an arbitrary element from a set of basis vectors $\{e_0, e_1, e_2, e_3\}$ would be labeled $e_\mu$, so that $\mu \in \{0,1,2,3\}$ denotes its order in the basis. Similarly, an arbitrary expansion coefficient of a vector $v$ in this basis belongs to the set $\{v^0,v^1,v^2,v^3\}$ and would be labeled $v^\mu$ to indicate which basis vector it is associated with, so that $v = \sum_\mu v^\mu e_\mu$. When working in components you can tell the rank of the tensor that the components are associated with by the location of the Greek indices ($v^\mu$ is the $\mu$ component of a vector, $\omega_\mu$ is the $\mu$ component of a covector, etc.); however, if you want to work with the tensors themselves ($v$, $\omega$, etc.) you have no way of knowing their rank from the notation. Abstract index notation solves this by labeling tensors with Latin indices according to their rank, e.g. $v^a$ is a rank $(1,0)$ tensor, $\omega_a$ is a rank $(0,1)$ tensor. For your example tensor: $h_{ab}$ is a rank $(0,2)$ tensor; $h_{\mu\nu}$ is the $\mu,\nu$ element of a set of rank $(0,0)$ tensors (scalars); $(\mathrm dx^\mu)_a$ is the $\mu$ element of a set of rank $(0,1)$ tensors; $(\mathrm dx^\nu)_b$ is the $\nu$ element of that same set of rank $(0,1)$ tensors. So to answer your question: for $(\mathrm dx^\mu)_a$, the Latin index $a$ means that this is a rank $(0,1)$ tensor, and the Greek index $\mu$ is the label associated with the ordered basis it belongs to.
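As a concrete illustration (my own, not from Wald's text), writing out the double sum in (4.1.4) with $h_{\mu\nu} = \mathrm{diag}(1,1,1)$ and coordinates $(x^1, x^2, x^3)$ gives:

```latex
% Only the diagonal terms of h_{\mu\nu} survive the double sum:
h_{ab} = (\mathrm{d}x^1)_a (\mathrm{d}x^1)_b
       + (\mathrm{d}x^2)_a (\mathrm{d}x^2)_b
       + (\mathrm{d}x^3)_a (\mathrm{d}x^3)_b
% Each (\mathrm{d}x^\mu)_a is a single covector (a rank (0,1) tensor),
% and the Latin indices a, b record that the full object h_{ab} is rank (0,2).
```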
{ "domain": "physics.stackexchange", "id": 99984, "tags": "general-relativity, differential-geometry, metric-tensor, tensor-calculus, notation" }
What is the difference between a moon and a random chunk in the rings
Question: Jupiter has 79 (known) moons, Saturn 82, Uranus 27 and Neptune 14 (numbers come from Wikipedia). These planets also all have rings. The rings are made up of chunks of rock and ice. There are also satellites in between the rings. What distinguishes a moon from any other chunk in orbit around a planet? If a body is in orbit around a planet, how do you know if it's a moon or if it's part of the ring? Are there orbital characteristics that differ between the two? Is there a lower bound on size or mass for something to be a moon? Is it the density of surrounding bodies? Is there no formal (e.g. from the IAU) definition? Answer: Your question: There are also satellites in between the rings. What distinguishes a moon from any other chunk in orbit around a planet? Wikipedia should be taken with a grain of salt, but it defines moons or moonlets within a ring system as creating a gap or partial gap, sometimes described as looking like a propeller. In 2006, four tiny moonlets were found in Cassini images of the A Ring.[44] Before this discovery only two larger moons had been known within gaps in the A Ring: Pan and Daphnis. These are large enough to clear continuous gaps in the ring.[44] In contrast, a moonlet is only massive enough to clear two small—about 10 km across—partial gaps in the immediate vicinity of the moonlet itself creating a structure shaped like an airplane propeller.[45] The moonlets themselves are tiny, ranging from about 40 to 500 meters in diameter, and are too small to be seen directly.[9] A moon, I would think, would need to clear out an entire gap within a ring. Clearing out a partial gap or "propeller" as noted above can come from a body as small as 40 meters. Often the moonlets themselves aren't visible, but the propellers they create in the ring system are visible. Some more details and images here and here
{ "domain": "astronomy.stackexchange", "id": 4095, "tags": "natural-satellites, planetary-ring, definition, moonlet" }
Cyclic invariance of trace of fermions
Question: Consider the Green's function of fermion operators with imaginary time, $$\mathcal{G}(\nu, \nu', \tau) = - \langle T_\tau c_{\nu}(\tau) c_{\nu'}^\dagger(0)\rangle\tag{1}$$ To show it satisfies the periodicity, $$\mathcal{G}(\nu, \nu', \tau) = - \mathcal{G}(\nu, \nu', \tau+ \beta) \tag{2}$$ one needs to use the following identity (see eq (71) and (72) on page 16 of http://folk.ntnu.no/johnof/green-2013.pdf) $${\rm Tr}(ABC \ldots XY Z) = {\rm Tr}(ZAB \ldots XY )\tag{71}$$ $${\rm Tr}(e^{-\beta H}c_{\nu'}^\dagger e^{H \tau} c_\nu e^{-H \tau} ) = {\rm Tr}( e^{H \tau} c_\nu e^{-H \tau} e^{-\beta H}c_{\nu'}^\dagger )\tag{3}$$ which is important for the appearance of the minus sign in equation (2). However, since $c_\nu$ and $c_{\nu'}^\dagger$ are fermionic operators, I suspect equation (3) might not be correct. My question is whether (3) is indeed correct or not. If (3) is not correct, how can (2) hold? Answer: I can see why you might be confused. If the trace were only over the spinor indices on a Fermi field, $\psi_\alpha$ say, then there would be an additional minus sign. Here, however, the trace is over the entire many-body Hilbert space, and the $c_\nu$'s are just like any other operator and so have a cyclic trace. This is what Ismasou's terse comment is saying.
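To spell out how the cyclic trace in (3) produces the antiperiodicity (2), here is a sketch (my own, following the conventions of the linked notes, with $Z = {\rm Tr}\, e^{-\beta H}$ and taking $-\beta < \tau < 0$):

```latex
% For -\beta < \tau < 0 the time ordering in (1) contributes a fermionic minus sign:
\mathcal{G}(\nu,\nu',\tau)
  = \frac{1}{Z}\,\mathrm{Tr}\!\left(e^{-\beta H}\, c_{\nu'}^\dagger\, e^{H\tau} c_\nu e^{-H\tau}\right)
% Apply the cyclic property (3), then insert 1 = e^{-\beta H} e^{\beta H}:
  = \frac{1}{Z}\,\mathrm{Tr}\!\left(e^{H\tau} c_\nu e^{-H\tau}\, e^{-\beta H}\, c_{\nu'}^\dagger\right)
  = \frac{1}{Z}\,\mathrm{Tr}\!\left(e^{-\beta H}\, e^{H(\tau+\beta)} c_\nu e^{-H(\tau+\beta)}\, c_{\nu'}^\dagger\right)
% Since 0 < \tau+\beta < \beta, the last expression equals
% -\mathcal{G}(\nu,\nu',\tau+\beta), which is exactly (2).
```

The fermionic sign thus enters through the time ordering, not through the trace itself, which is cyclic for any operators on the many-body Hilbert space.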
{ "domain": "physics.stackexchange", "id": 90572, "tags": "operators, fermions, greens-functions, grassmann-numbers, trace" }
ROS Answers SE migration: osg missing?
Question: Hi all! I'm trying to use UWSim and did everything on the webpage. The problem now is that when I do: rosmake UWSim --rosdep-install I get the following error: [rosbuild] Including /opt/ros/electric/stacks/ros_comm/clients/rospy/cmake/rospy.cmake [rosbuild] Including /opt/ros/electric/stacks/ros_comm/clients/roslisp/cmake/roslisp.cmake [rosbuild] Including /opt/ros/electric/stacks/ros_comm/clients/cpp/roscpp/cmake/roscpp.cmake -- Configuring done -- Generating done CMake Warning: Manually-specified variables were not used by the project: CMAKE_TOOLCHAIN_FILE -- Build files have been written to: /home/albert/ros_electric/underwater_simulation/UWSim/build cd build && make -l2 make[1]: Entering directory `/home/albert/ros_electric/underwater_simulation/UWSim/build' make[2]: Entering directory `/home/albert/ros_electric/underwater_simulation/UWSim/build' make[3]: Entering directory `/home/albert/ros_electric/underwater_simulation/UWSim/build' make[3]: Leaving directory `/home/albert/ros_electric/underwater_simulation/UWSim/build' [ 0%] Built target rospack_genmsg_libexe make[3]: Entering directory `/home/albert/ros_electric/underwater_simulation/UWSim/build' make[3]: Leaving directory `/home/albert/ros_electric/underwater_simulation/UWSim/build' [ 0%] Built target rosbuild_precompile make[3]: Entering directory `/home/albert/ros_electric/underwater_simulation/UWSim/build' make[3]: Leaving directory `/home/albert/ros_electric/underwater_simulation/UWSim/build' make[3]: Entering directory `/home/albert/ros_electric/underwater_simulation/UWSim/build' Linking CXX executable ../bin/UWSim /usr/bin/ld: warning: libosg.so.65, needed by /home/albert/ros_electric/underwater_simulation/osgOcean/lib/libosgOcean.so, not found (try using -rpath or -rpath-link) /usr/bin/ld: warning: libosgUtil.so.65, needed by /home/albert/ros_electric/underwater_simulation/osgOcean/lib/libosgOcean.so, not found (try using -rpath or -rpath-link) /usr/bin/ld: warning: libosgDB.so.65, needed
by /home/albert/ros_electric/underwater_simulation/osgOcean/lib/libosgOcean.so, not found (try using -rpath or -rpath-link) /usr/bin/ld: warning: libosgGA.so.65, needed by /home/albert/ros_electric/underwater_simulation/osgOcean/lib/libosgOcean.so, not found (try using -rpath or -rpath-link) /usr/bin/ld: warning: libOpenThreads.so.13, needed by /home/albert/ros_electric/underwater_simulation/osgOcean/lib/libosgOcean.so, not found (try using -rpath or -rpath-link) /home/albert/ros_electric/underwater_simulation/osgOcean/lib/libosgOcean.so: undefined reference to `OpenThreads::Mutex::Mutex()' /home/albert/ros_electric/underwater_simulation/osgOcean/lib/libosgOcean.so: undefined reference to `osgDB::readImageFile(std::basic_string, std::allocator > const&, osgDB::ReaderWriter::Options const*)' collect2: ld returned 1 exit status make[3]: *** [../bin/UWSim] Error 1 make[3]: Leaving directory `/home/albert/ros_electric/underwater_simulation/UWSim/build' make[2]: *** [CMakeFiles/UWSim.dir/all] Error 2 make[2]: Leaving directory `/home/albert/ros_electric/underwater_simulation/UWSim/build' make[1]: *** [all] Error 2 make[1]: Leaving directory `/home/albert/ros_electric/underwater_simulation/UWSim/build' From which I assume the osg lib is missing. I've checked on Google how to install it but found no results. Can somebody give me a hint on how to solve it?? Cheers! Originally posted by apalomer on ROS Answers with karma: 318 on 2012-06-06 Post score: 1 Answer: Did you recently upgrade your distro? You have installed openscenegraph 3.0, which should link against libosg.so.80. However, your osgOcean seems to be compiled against libosg.so.65. Have you tried to pre-clean your build? rosmake --pre-clean UWSim Originally posted by Mario Prats with karma: 291 on 2012-06-07 This answer was ACCEPTED on the original site Post score: 2 Original comments Comment by apalomer on 2012-06-07: it worked! thanks mario Comment by Mario Prats on 2012-06-07: great! you're welcome
{ "domain": "robotics.stackexchange", "id": 9705, "tags": "ros, rosmake, uwsim, make" }
How to calculate the concentration H3O+ in a solution with pH=6.99?
Question: What is the correct way to calculate the concentration $\ce{H3O+}$ in a solution with $\ce{pH}=6.99$? Attempt 1. pH<7, therefore there are only $\ce{H3O+}$ particles in the solution. $[\ce{H3O+}] = 10^{-\ce{pH}} = 10^{-6.99} = 1.02 \cdot 10^{-7}$ Attempt 2. We have $[\ce{H3O+}] = 10^{-\ce{pH}} = 10^{-6.99} = 1.02 \cdot 10^{-7}$ and $[\ce{OH-}] = 10^{-\ce{pOH}} = 10^{-7.01} = 9.77 \cdot 10^{-8}$. Because of $\ce{H3O+ + OH- -> 2 H2O}$ we are left with $[\ce{H3O+}] = 1.02 \cdot 10^{-7}- 9.77 \cdot 10^{-8} = 4.6 \cdot 10^{-9}$ When the pH is smaller than 6 or greater than 8, one will not notice the difference, but here it is logarithmically speaking very large. So I wonder what the correct way is? Answer: If you take a sample of pure water, there will be a few hydroxide and hydronium ions. Of course, they can combine to form water, and indeed they do, but a few water molecules will also break apart to form the ions again. Hence, there exists a dynamic equilibrium between the concentrations of the ions and of the water molecules. $\textrm{pH}$ by definition is the negative logarithm of the hydronium ion concentration. $$\textrm{pH} = -\log [\ce{H^+}] = -\log [\ce{H3O^+}]$$ You can obtain the concentration of $\ce{H3O+}$ ions by substituting the value of pH into the following formula, $$[\ce{H3O^+}] = 10^{\mathrm{-pH}}.$$ Your attempt 2 is flawed because your assumption that all the ions combine to form water molecules is incorrect. There will always be some concentration of the ions; they need not all combine to produce water molecules. Your attempt 1 is correct. It appears you are not yet familiar with the concepts of equilibrium and the self-ionization of water, so I have picked a few good materials which you might (and should) refer to: Chemical Equilibrium; Self-Ionization of Water. The concept of chemical equilibrium is very important and you will come across it frequently in chemistry, so you must learn it.
Also, self-ionization of water along with chemical equilibrium are central concepts for learning acids and bases.
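As a quick numerical cross-check of Attempt 1 (my own sketch; the function names are mine), the defining formula together with the water ion product $K_\mathrm{w} = [\ce{H3O+}][\ce{OH-}] = 1.0 \cdot 10^{-14}$ at 25 °C gives:

```cpp
#include <cassert>
#include <cmath>

// [H3O+] in mol/L, inverting the definition pH = -log10([H3O+]).
double hydronium_from_ph(double ph) {
    return std::pow(10.0, -ph);          // pH 6.99 -> ~1.02e-7 mol/L
}

// [OH-] from the autoionization equilibrium Kw = [H3O+][OH-],
// using Kw = 1.0e-14 (the value at 25 degrees C).
double hydroxide_from_ph(double ph) {
    const double kw = 1.0e-14;
    return kw / hydronium_from_ph(ph);   // pH 6.99 -> ~9.77e-8 mol/L
}
```

Both concentrations are nonzero at pH 6.99 and their product stays pinned at $K_\mathrm{w}$: the ions coexist in equilibrium rather than cancelling, which is why Attempt 1 is correct and the subtraction in Attempt 2 is not.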
{ "domain": "chemistry.stackexchange", "id": 12441, "tags": "acid-base, theoretical-chemistry" }
Preferred implementation of `Array::operator=(const Array & rhs)`
Question: I have a custom template class Array<T>, and am currently implementing the assignment operator. However, I've come across a design decision: When assigning two arrays, must I require them to have the same size, or adjust the LHS Array size to the RHS? Pros and Cons of Requiring Equal Size Pro Most of the time, the array is thought of as a static container (there are std::vector, etc. for dynamic containers). It's more efficient, because I do not have to reallocate; rather, I overwrite each element of the LHS. Con It's a hassle for the user to do the work himself if he does intend to have a variable-length Array, plus it would be inefficient. Code Here are the implementations of the two versions I'm considering. Equal Size template <typename T> bool JKArray<T>::operator=(const JKArray<T> & rhs) { if (this == &rhs) { return true; } else { if (size_ != rhs.size_) { return false; } else { for (int i = 0; i < size_; i++) { array_[i] = rhs.array_[i]; } return true; } } } Unequal Size template <typename T> void JKArray<T>::operator=(const JKArray<T> & rhs) { if (this == &rhs) { } else { if (size_ != rhs.size_) { delete [] array_; size_ = rhs.size_; array_ = new T[size_]; } for (int i = 0; i < size_; i++) { array_[i] = rhs.array_[i]; } } } Note Regarding the implementation of Equal Size: I return false if the size is not equal, and true otherwise. Of course, I could use exceptions, however this is satisfactory for now. I'm leaning towards making operator= fail* if the RHS is a different length than the LHS. I'd like to know what others have done, and what they prefer. To avoid this being closed as opinion-based, please give explanations as to why one might be preferred over the other. *See 'Note' under 'Code'. Answer: template <typename T> bool JKArray<T>::operator=(const JKArray<T> & rhs) template <typename T> void JKArray<T>::operator=(const JKArray<T> & rhs) Neither of them has the correct signature for copy assignment.
The correct signature for that operator is class_name & class_name :: operator= ( const class_name & ). That already rules out abusing the return value to signal success or failure. If you want runtime errors, you have to use exceptions. If you violate that signature, it makes common patterns such as chained assignment (a = b = c = {...};) impossible. This may break other templated libraries that depend on your data types adhering to the standard conventions. Returning a bool in particular may cause rather unexpected side effects. Pro Most of the time, the array is often thought of as a static container (there are std::vector, etc. for dynamic containers). It's more efficient, because I do not have to reallocate; rather, I overwrite each element of the LHS. Con It's a hassle for the user to do the work himself if he does intend to have a variable-length Array, plus it would be inefficient. That list is by no means complete. There are more aspects to cover: Iterator invalidation When you have to re-allocate the backing data structure, it would be trivial to keep all existing iterators valid for same-size copies, while the same feat is problematic for different sizes. Heap vs stack allocation For fixed-size arrays, it's not even necessary to allocate them on the heap; sufficiently small arrays may as well be allocated straight on the stack. This is obviously impossible for the dynamically sized ones. Array or Vector? While you call it Array, if you make it resizable it behaves more like a std::vector than a std::array. Size as part of the type? As @Olzhas mentioned, it's an option to include the size of the backing array as a template parameter. This provides compile-time checks for compatible or incompatible sizes. Compile-time checks, where applicable, are obviously to be preferred. This is in fact how std::array does it.
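For contrast, here is a minimal sketch of what a conventionally-signed assignment looks like (my own illustration, not the OP's class; it uses the copy-and-swap idiom, so self-assignment and differing sizes need no special casing, and a failure can only surface as a std::bad_alloc from the copy rather than a return code):

```cpp
#include <algorithm>  // std::copy, std::swap
#include <cassert>
#include <cstddef>

template <typename T>
class JKArray {
public:
    explicit JKArray(std::size_t size = 0) : size_(size), array_(new T[size]()) {}

    JKArray(const JKArray& rhs) : size_(rhs.size_), array_(new T[rhs.size_]) {
        std::copy(rhs.array_, rhs.array_ + rhs.size_, array_);
    }

    ~JKArray() { delete[] array_; }

    // Canonical copy assignment: returns *this by reference, so
    // chained assignment (a = b = c) works as it does for built-ins.
    JKArray& operator=(JKArray rhs) {   // by value: the copy is made here
        std::swap(size_, rhs.size_);    // steal rhs's buffer...
        std::swap(array_, rhs.array_);
        return *this;                   // ...and rhs's destructor frees ours
    }

    std::size_t size() const { return size_; }
    T& operator[](std::size_t i) { return array_[i]; }
    const T& operator[](std::size_t i) const { return array_[i]; }

private:
    std::size_t size_;
    T* array_;
};
```

With this shape, assigning arrays of different sizes simply makes the LHS a copy of the RHS, and there is no return value to abuse for error signaling.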
{ "domain": "codereview.stackexchange", "id": 22736, "tags": "c++, array, library" }
Question on Introduction to Electrodynamics divergence examples
Question: I was going through the section on divergence and became confused by these examples. Griffiths states in the textbook that (a) has positive divergence, (b) has zero divergence, and that (c) again has positive divergence. For (c), shouldn't the divergence be zero, since all the arrows point the same way? And if (c) has positive divergence, what's the difference from (b), which also just has arrows pointing in the same direction, if only a little staggered? Thank you very much. Answer: The difference is that the vectors in (b) are all the same length / magnitude, while the vectors in (c) vary in magnitude as we move along the vertical direction. The divergence of a vector field $\vec{F}$ is $\nabla \cdot \vec{F} = \partial _x F_x + \partial_y F_y + \partial_z F_z$ in Cartesian coordinates. If we choose the $y$ direction to be vertically upward in the figure you posted, the vector field in (c) has a $y$ component that varies as a function of the $y$ coordinate, so $\partial_y F_y$ will be non-zero, and so the divergence will be non-zero.
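To make the distinction concrete, here is a small numerical check (my own sketch, not from the book; I model panel (b) as the constant field $\vec F = \hat y$ and panel (c) as $\vec F = y\,\hat y$, which is an assumption about what the figure shows):

```cpp
#include <array>
#include <cassert>
#include <cmath>

using Vec3 = std::array<double, 3>;

// Central-difference approximation of div F = dFx/dx + dFy/dy + dFz/dz.
template <typename Field>
double divergence(Field F, const Vec3& p, double h = 1e-5) {
    double div = 0.0;
    for (int i = 0; i < 3; ++i) {
        Vec3 plus = p, minus = p;
        plus[i] += h;
        minus[i] -= h;
        div += (F(plus)[i] - F(minus)[i]) / (2.0 * h);
    }
    return div;
}

// Like (b): every arrow has the same direction AND magnitude.
Vec3 field_b(const Vec3&) { return {0.0, 1.0, 0.0}; }

// Like (c): same direction, but magnitude grows with height y.
Vec3 field_c(const Vec3& p) { return {0.0, p[1], 0.0}; }
```

At any point, field_b gives divergence 0 while field_c gives divergence 1, matching the answer: it is the variation of $F_y$ with $y$, not the direction of the arrows, that makes $\partial_y F_y$ nonzero.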
{ "domain": "physics.stackexchange", "id": 69992, "tags": "differentiation, vector-fields, calculus" }