Does electrotonic spread/conduction occur in saltatory conduction?
Question: Even as textbooks, and almost all web pages I've seen so far, explain electrotonic spread/conduction as the passive current flow along an axon, they do so for continuous conduction only. Apart from the myelin sheaths which enwrap axons, leaving only the nodes of Ranvier exposed, and of course the differences in energy requirement and conduction speed, I can't see how else continuous conduction differs from saltatory. My question then is why electrotonic spread is never mentioned in saltatory conduction. Answer: Textbooks generally take the following pedagogical flow in basic neurophysiology:

Ions flow, so voltage changes propagate

This is the "electrotonic part". The key concept is that if you add some ions or change the voltage of one part of a neuron, adjacent areas will also change in voltage as current flows 'passively'. The further away you go, the longer it takes the signal to arrive and the smaller it will be as the charge spreads out. A more advanced course might also talk about the sources of those charges, like sensory receptors or neurotransmitter action, and might talk about the ways that polarizing pulses integrate over space and time. Or these may be saved for later. (Side note: in reality, neurons aren't all that passive; there are lots of conductance changes occurring even in this supposedly "electrotonic" scheme. But the electrotonic equations tend to work pretty well, and in a simplified experimental system they are sufficient. Biology is almost always too complex to try to include everything at once.)

Action potentials

The key concept here is that if the voltage changes enough, it stops being possible to think about just the passive flow; you now have to think closely about voltage-gated channels that create a positive feedback response to voltage. Above threshold, voltage-gated channels cause sufficient depolarization that adjacent areas of the membrane are also depolarized beyond threshold, and we call this propagating signal an action potential. At this point, you are supposed to remember and understand that what is driving this active response is still a somewhat "passive" flow of charge that you understood from the "electrotonic" part of the course. That part is the physics and is always present; you can't get rid of it. However, you're in a new scheme where you can't use the electrotonic equations to understand what happens anymore.

Saltatory conduction

Myelination and saltatory conduction come next. In this section, you're supposed to be thinking about the way action potentials spread, but to add one extra little wrinkle: what if, instead of transmitting piecemeal to the next adjacent segment of membrane, axons were more insulated? In this scenario, the flow of charges will extend much further. You should still be applying the concepts you learned in the "electrotonic" part, though: if you add some ions/change the voltage of one part of the neuron, adjacent areas will also change in voltage as current flows. The further away you go, the longer it takes the signal to arrive and the smaller it will be as the charge spreads out. Therefore, even though insulation lets the signal travel further, it still decreases in amplitude over long distances, and you need to "boost" the signal again. That's where the nodes of Ranvier come in.

Back to your specific question: "Does electrotonic spread/conduction occur in saltatory conduction?" No, but not because the "physics" part is different: saltatory conduction is necessarily active (it involves voltage-gated channels), not electrotonic, even though all the electrotonic principles still apply. The "electrotonic" part of the lesson contains important concepts that you are supposed to remember and carry through the other sections, even if they are not discussed explicitly. I think making that active/passive distinction is possibly a bit misleading, but the concepts you are supposed to grab from the electrotonic section, in terms of the physics of how charge/voltage moves around in a neuron, apply to everything. Additionally, you should gather that the nodes of Ranvier are spaced in order to conduct action potentials. These are big voltage changes. Smaller, subthreshold voltage changes will of course also travel down the axon (this is the physics part - nothing stops physics!), but if they aren't strong enough to open voltage-gated channels at the next node of Ranvier then the voltage change isn't of any consequence and just decays with distance. If it were strong enough, then by definition it would be superthreshold rather than subthreshold, and we would be talking about an action potential. All the content in this answer is material you would find in a basic undergraduate neuroscience textbook; the ones I usually recommend are Purves or Kandel.
{ "domain": "biology.stackexchange", "id": 10898, "tags": "neuroscience, physiology, neurophysiology, action-potential" }
Is there a proof for the condition of rolling?
Question: I want to know whether the condition of rolling has been proven, or whether it is just the result of observation only. Rolling without slipping: $v = \omega r$, where $v$ is the translational velocity of the object, $\omega$ is its angular velocity and $r$ is its radius. Answer: If you're asking for a proof that "rolling without slipping" means $v=r\omega$, here's one: When we say an object is "rolling without slipping," that means that the distance covered by the object in a certain time $\Delta t$ is exactly the same as the arc length that a point on the wheel travels while rotating. (To see why this is true, consider the opposite: if a wheel is slipping, then one of two things is happening: it either travels some distance without turning, or turns without traveling any distance. In the former case, the arc length that a point on the wheel travels is less than the distance covered; in the latter case, it is greater. Requiring that it doesn't slip therefore means that neither of the above is true, guaranteeing equality. You can also see this if you imagine "rolling up" the ground onto a wheel as it rolls, or equivalently, if you imagine the wheel as a spool of string that unravels as it rolls.) In a time interval $\Delta t$, a wheel rotating with constant angular velocity $\omega$ will sweep out a total arc length of $s=r\theta=r\omega\Delta t$. Similarly, in a time interval $\Delta t$, an object traveling at constant velocity $v$ will cover a distance $d=v\Delta t$. Setting $s=d$ as per our argument in the first paragraph, we see that $$v\Delta t=r\omega\Delta t$$ or equivalently $$v=r\omega$$ Since we have imposed no other assumptions besides "rolling without slipping" in the above, we have therefore proven that saying an object is rolling without slipping is equivalent to setting $v=r\omega$.
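The equality of distance and arc length can also be checked numerically; here is a minimal sketch (the radius, angular velocity, and time step are arbitrary choices for illustration):

```python
# Numerical check of the rolling-without-slipping condition v = r*omega:
# integrate both the distance travelled by the centre and the arc length
# swept by a rim point over one second and confirm they agree.
r = 0.3          # wheel radius in metres (assumed)
omega = 4.0      # angular velocity in rad/s (assumed)
v = r * omega    # translational velocity implied by rolling without slipping

dt = 1e-4
distance = 0.0   # path covered by the wheel's centre
arc = 0.0        # arc length swept by a point on the rim
for _ in range(10000):           # one second in total
    distance += v * dt
    arc += r * omega * dt

print(distance, arc)  # both approximately 1.2 m: the contact point never slips
```

If the wheel slipped, one of the two accumulators would grow faster than the other; equality is exactly the no-slip condition from the proof.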
{ "domain": "physics.stackexchange", "id": 47746, "tags": "homework-and-exercises, rotational-dynamics, rotation" }
How does rolling resistance of rail wheel depend on diameter?
Question: A freight train is more efficient than a truck due to lower rolling resistance, and I wonder which has lower rolling resistance: a small diameter or a larger one, or does it not depend on diameter at all? Both are steel wheel on steel rail. Answer: If you look at the Wikipedia article on rolling resistance that @fibonatic pointed out, you can find an equation for the coefficient of rolling friction: $C_{rr}=\sqrt {\frac z d}$ There is another equation after it that is specific to steel on steel, but that also shows that $C_{rr}$ is proportional to $d^{-\frac 1 2}$. So from this, you can easily see that a smaller diameter will have a higher $C_{rr}$, and thus a higher resistance. A freight train wheel will have a larger diameter than a truck wheel, meaning less resistance, and thus greater efficiency.
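As a quick numerical illustration of this scaling (the sinkage depth $z$ and the diameters below are invented values, not data for real vehicles):

```python
import math

# Illustrating C_rr = sqrt(z/d): halving the diameter raises the rolling
# resistance coefficient by a factor of sqrt(2).
def rolling_coefficient(z, d):
    return math.sqrt(z / d)

z = 1e-5                                  # assumed sinkage depth, metres
small = rolling_coefficient(z, d=0.5)     # smaller wheel
large = rolling_coefficient(z, d=1.0)     # larger wheel
assert small > large                      # smaller diameter, more resistance
```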
{ "domain": "physics.stackexchange", "id": 8586, "tags": "friction" }
How is it that adding a random field to the partial derivative results in a tensorial operation?
Question: We know that the partial derivative of a tensor is not a tensor. But how is this problem fixed by adding to the partial derivatives a field of Christoffel symbols? Christoffel symbols are a completely arbitrary field because you can define any arbitrary connection you want. How is it that adding a completely arbitrary field fixes the problem, and that the resultant covariant derivative transforms as a tensor? Answer: Christoffel symbols are a completely arbitrary field because you can define any arbitrary connection you want. No, the Christoffel symbols are not arbitrary. They are defined (see Christoffel symbols - Definition in Euclidean space) by how the basis vectors $\mathbf{e}_i$ depend on the coordinates $x^j$: $$\frac{\partial\mathbf{e}_i}{\partial x^j} = \Gamma^k_{ij}\ \mathbf{e}_k$$ or equivalently $$d \mathbf{e}_i = \Gamma^k_{ij}\ \mathbf{e}_k\ dx^j \tag{1}$$ It is from this definition that you can derive that, for a vector field with components $A^i$, the expressions $$\frac{\partial A^i}{\partial x^j}+\Gamma^i_{jk}\ A^k$$ are components of a tensor, while the partial derivatives $$\frac{\partial A^i}{\partial x^j}$$ alone are not. You can derive this in a straightforward way by starting with the invariant differential $d\mathbf{A}$ of a vector field between two positions in space.
$$\begin{align} d\mathbf{A} &= d(A^i\ \mathbf{e}_i) \\ &= dA^i\ \mathbf{e}_i + A^i\ d\mathbf{e}_i & \text{use definition (1)} \\ &= \frac{\partial A^i}{\partial x^j} dx^j\ \mathbf{e}_i + A^i\ \Gamma^k_{ij}\mathbf{e}_k\ dx^j & \text{in the second term swap indices $i$ and $k$} \\ &= \frac{\partial A^i}{\partial x^j} dx^j\ \mathbf{e}_i + A^k\ \Gamma^i_{kj}\mathbf{e}_i\ dx^j \\ &= \left( \frac{\partial A^i}{\partial x^j}+A^k\ \Gamma^i_{kj} \right) \mathbf{e}_i\ dx^j \end{align}$$ or equivalently $$\frac{\partial\mathbf{A}}{\partial x^j} = \left( \frac{\partial A^i}{\partial x^j}+A^k\ \Gamma^i_{kj} \right) \mathbf{e}_i$$ You see, the covariant derivative emerged as the components of $\partial\mathbf{A}/\partial x^j$ in a quite natural way.
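For a concrete instance of definition (1), here is a small numerical sketch in plane polar coordinates (my own worked example, not part of the answer above): differentiating the coordinate basis vectors and comparing against the well-known symbols $\Gamma^\theta_{r\theta}=1/r$ and $\Gamma^r_{\theta\theta}=-r$.

```python
import math

# Definition (1) in plane polar coordinates: the theta-derivatives of the
# coordinate basis vectors expand in that same basis, and the expansion
# coefficients are the Christoffel symbols.
def e_r(r, th):                     # basis vector along r
    return (math.cos(th), math.sin(th))

def e_th(r, th):                    # basis vector along theta
    return (-r * math.sin(th), r * math.cos(th))

def d_dth(f, r, th, h=1e-6):        # central finite difference in theta
    a, b = f(r, th + h), f(r, th - h)
    return tuple((p - q) / (2 * h) for p, q in zip(a, b))

r0, th0 = 2.0, 0.7                  # arbitrary evaluation point

# d e_r / d theta = (1/r) e_theta   ->  Gamma^theta_{r theta} = 1/r
lhs1 = d_dth(e_r, r0, th0)
rhs1 = tuple(c / r0 for c in e_th(r0, th0))
# d e_theta / d theta = -r e_r      ->  Gamma^r_{theta theta} = -r
lhs2 = d_dth(e_th, r0, th0)
rhs2 = tuple(-r0 * c for c in e_r(r0, th0))

assert all(abs(p - q) < 1e-6 for p, q in zip(lhs1 + lhs2, rhs1 + rhs2))
```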
{ "domain": "physics.stackexchange", "id": 87596, "tags": "differential-geometry, coordinate-systems, tensor-calculus, differentiation, covariance" }
Are ACF plots sufficient to detect white noise, given the possible existence of non-linear associations?
Question: ACF plots use autocorrelation tests to determine associations in time series. Autocorrelations, however, only measure linear associations. To my understanding, certain non-linear associations can exhibit zero autocorrelation. Therefore, can there be a scenario where an ACF plot that is considered to reflect white noise is actually representative of non-linear associations? In other words, are ACF plots sufficient for determining whether a signal is white noise? Answer: In other words, are ACF plots sufficient for determining whether a signal is white noise? A plot is never sufficient. An impulse-shaped ACF is. (A plot can only ever be an estimate based on observation; to find the true ACF, you'd have to observe the process forever, which you can't.) You can only observe that the plot approximates the ideal ACF sufficiently well (by your definition of "sufficient"). Anyway, assuming you mean "the ACF being a Dirac delta", yes, that fulfills the definition of white. That is sufficient, because it literally is the definition. can there be a scenario where an ACF plot that is considered to reflect white noise is actually representative of non-linear associations No, because white noise is defined to be any random process with a Dirac delta ACF. certain non-linear associations can exhibit zero autocorrelation Yep! "Associations" is a bit of a vague term, though. I'd prefer if you said: are there discrete random processes (i.e. random sequences) whose elements aren't independent of each other, but are still uncorrelated? Independence is a strictly defined term: it means that the joint density function of multiple points in time is the product of the individual densities at those times. I.e.: let $X(t, \xi)\in \mathbb C, \, t \in \mathbb Z$ be a process at times $t$ (with different realisations $\xi$).
Then, the ACF is defined as $$\phi_{XX}(t_1,t_2) = \mathbb E\left( X(t_1)X^*(t_2) \right)$$ and a process is white iff (assuming wide-sense stationarity, so that $\phi(t, t+\tau) =: \phi(\tau)\ \forall t$) $$\phi_{XX}(t_1,t_2)=\phi_{XX}(\tau)=\begin{cases} > 0 & t_1=t_2\iff \tau=0\\ = 0 & t_1\ne t_2\iff \tau \ne 0 \end{cases},$$ from which it's easy to see that different elements from an observed series are uncorrelated. However, as you probably know, correlation is not the same as independence. Counterexample: $$Y(t) = \begin{cases} U(t) & t\ge 0\\ S\cdot Y(-t) & t < 0 \end{cases},$$ with $U(t)\sim\mathcal U([-1,1])$ (independently uniformly drawn from the $[-1,1]$ interval), and $S\sim\mathcal U(\{-1,1\})$ (a randomly independently drawn sign). Obviously, different elements from the sequence are not independent: $Y(-5)$ is either $Y(5)$ or $-Y(5)$; other values are not allowed. Checking the ACF, you'll see \begin{align} \phi_{YY}(t_1,t_2) &= \mathbb E\left( Y(t_1)Y^*(t_2) \right)\\ &=\begin{cases} \mathbb E\left( U^2 \right) =\frac13&\text{for }t_1=t_2&\text{(variance of a uniform continuous RV)}\\ 0 &\text{for } t_1\ne \pm t_2 &\text{(the $U(t)$ are uncorr.)}\\ \mathbb E\left( S\cdot U^2 \right) &\text{for } t_1=-t_2 \end{cases}\\ &=\begin{cases} \frac13&\text{for }t_1=t_2\\ 0 &\text{for } t_1\ne \pm t_2\\ \mathbb E(S)\mathbb E\left( U^2 \right) &\text{for } t_1=-t_2 & \text{(since $S$, $U(t)$ indep.)} \end{cases}\\ &=\begin{cases} \frac13&\text{for }t_1=t_2\\ 0 &\text{for } t_1\ne \pm t_2\\ 0\cdot\frac13 &\text{for } t_1=-t_2 \\ \end{cases}\\ &=\begin{cases} \frac13&\text{for }t_1=t_2\\ 0 &\text{for } t_1\ne t_2 \end{cases}, \end{align} which seems to say "this process is white". BUT: this process is not wide-sense stationary, so applying the term "white" to it doesn't "work". So you need to be very careful about what you apply the label "white" to, to begin with.
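The counterexample can be checked by simulation; here is a rough Monte-Carlo sketch (the sample size and seed are arbitrary choices):

```python
import random

# Monte-Carlo sketch of the counterexample: Y(t) = U(t) for t >= 0 and
# Y(-t) = S * Y(t), with one random sign S per realisation. The pair
# (Y(-5), Y(5)) is maximally dependent, yet the lag product averages to zero.
random.seed(0)
N = 200_000
lag_product = 0.0    # estimates E[Y(-5) Y(5)] = E[S U^2]
power = 0.0          # estimates E[Y(5)^2]    = E[U^2] = 1/3
for _ in range(N):
    u = random.uniform(-1, 1)    # Y(5) in this realisation
    s = random.choice((-1, 1))   # the shared random sign S
    lag_product += (s * u) * u
    power += u * u

print(lag_product / N, power / N)   # approximately 0 and 1/3
```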
{ "domain": "dsp.stackexchange", "id": 9100, "tags": "autocorrelation, correlation, time-series" }
Can multiple timelines exist according to theoretical physics?
Question: Multiple timelines are not a reality yet (as of 2016's science and technology), but exist only in the Marvel/DC universes. They are something that is used to explain the paradox of time travel. If you go back in time and kill yourself, the paradox is that you can't do that, since you have just wiped out the physical existence of your future self by doing so. So, the multiple-timeline theory explains that there will be two continuous parallel realities - one in which you will stay alive, and another in which you will be dead. Time essentially splits/branches like a river from the point in time where such an event occurred (watch X-Men - Back to the Future if you need to understand this in detail). I want to know what the opinion of physicists is in this regard. I've heard somewhere (Stephen Hawking, perhaps) that there is some evidence of multiple parallel universes. Can this explain the multiple-timeline theory? Answer: The book you need to read is The Fabric of Reality by David Deutsch. He is probably the current best-known proponent of the Many Worlds Interpretation of quantum mechanics. There, you have a potentially infinite number of "parallel timelines".
{ "domain": "physics.stackexchange", "id": 27435, "tags": "spacetime, time, quantum-interpretations, time-travel, multiverse" }
How to plot Loss Landscape with more than 2 weights in the network
Question: For a single neuron with 2 weights, I can plot the loss landscape and it looks like this (OR data, sigmoid activation, MAE loss): But when the neuron accepts more inputs, meaning more than 2 weights are required, or when there are more neurons and more layers in the network, how should the 3D loss landscape be plotted? Answer: It does not seem possible to plot loss values (z) against all combinations of weights in all layers, especially when the network is big, with thousands or millions of parameters; in that case, the number of points to plot is far too large. A 3D plot also can't show more than 3 dimensions. However, with a deep network with lots of weights, these can be plotted: the loss value against any pair of 2 weights; or, turn the layer right before the output layer (a single neuron) into a layer of 2 neurons, and the loss can be plotted against these 2 weights (though this is of limited use, since the meaning of the loss value depends on all the other weights too). Example plot when there are 2 neurons in the layer right before the output layer (of 1 neuron):
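One common workaround, in the spirit of the "any pair of 2 weights" idea above, is to evaluate the loss over a 2-D slice spanned by two random directions in the full weight space (as in Li et al., "Visualizing the Loss Landscape of Neural Nets"). The network, data, and grid below are assumptions for illustration (OR data, sigmoid activation, MAE loss, one neuron with a bias):

```python
import math, random

# Loss over a 2-D slice of weight space: pick two random directions d1, d2
# and compute loss(w0 + a*d1 + b*d2) on a grid of (a, b). This works for
# any number of weights, since the slice is always two-dimensional.
random.seed(1)
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]   # OR data

def loss(w):
    total = 0.0
    for (x1, x2), y in data:
        z = w[0] * x1 + w[1] * x2 + w[2]
        total += abs(1.0 / (1.0 + math.exp(-z)) - y)          # MAE, sigmoid
    return total / len(data)

w0 = [0.5, -0.3, 0.1]                     # current weights (any length works)
d1 = [random.gauss(0, 1) for _ in w0]     # random slice directions
d2 = [random.gauss(0, 1) for _ in w0]

grid = [[loss([w + a * u + b * v for w, u, v in zip(w0, d1, d2)])
         for a in (-1.0, -0.5, 0.0, 0.5, 1.0)]
        for b in (-1.0, -0.5, 0.0, 0.5, 1.0)]
# grid is the z-surface to hand to any 3-D surface plotting routine
```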
{ "domain": "ai.stackexchange", "id": 1443, "tags": "machine-learning, gradient-descent, plotting, loss, weights" }
How is Hypergraph Isomorphism (HI) reduced to Graph Isomorphism (GI) in polynomial time?
Question: This question states that the problem of Hyper-graph Isomorphism is equivalent to Graph isomorphism. I have not been able to find a description of the reduction so I am wondering how that might work and what is its complexity. If I understand the post in the link correctly, the reduction has to be polynomial in order for the Hyper-graph isomorphism solution to inherit the same complexity as the Graph-isomorphism problem. Is the reduction only valid for a special case of the Hypergraph isomorphism problem or in general? Answer: A description is given in the first paragraph of [1], where HI stands for Hypergraph Isomorphism and GI for Graph Isomorphism: Given a pair of hypergraphs $X=(V,E)$ and $X'=(V',E')$ as instance for HI, the reduced instance of GI consists of two corresponding bipartite graphs $Y$ and $Y'$ defined as follows. The graph $Y$ has vertex set $V \uplus E$ and edge set $E(Y) = \{\{v,e\} \mid v \in V, e \in E, v \in e\}$, and $Y'$ is defined similarly. Here, $C \uplus D$ denotes the disjoint union of the sets $C$ and $D$. It is easy to verify that $Y \simeq Y'$ if and only if $X \simeq X'$ assuming that $V$ can be mapped only to $V'$ and $E$ can be mapped only to $E'$. This latter condition is easy to enforce. The other direction is trivial as every graph is also a hypergraph. [1] Arvind, Vikraman, Bireswar Das, Johannes Köbler, and Seinosuke Toda. "Colored hypergraph isomorphism is fixed parameter tractable." Algorithmica 71, no. 1 (2015): 120-138.
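A minimal sketch of this construction (the helper name is my own; the colouring or degree tricks needed to enforce that $V$ maps only to $V'$ and $E$ only to $E'$ are omitted):

```python
# Build the bipartite incidence graph Y of a hypergraph X = (V, E):
# one node per vertex, one node per hyperedge, and an edge {v, e}
# whenever v lies in e.
def to_bipartite(vertices, hyperedges):
    # tag the two sides so V-nodes and E-nodes stay disjoint
    nodes = [('v', v) for v in vertices] + \
            [('e', i) for i in range(len(hyperedges))]
    edges = {(('v', v), ('e', i))
             for i, e in enumerate(hyperedges) for v in e}
    return nodes, edges

X = (['a', 'b', 'c'], [{'a', 'b'}, {'a', 'b', 'c'}])
nodes, edges = to_bipartite(*X)
print(len(nodes), len(edges))   # 5 5: three V-nodes plus two E-nodes, five incidences
```

The construction is clearly polynomial: $|V| + |E|$ nodes and one edge per vertex-hyperedge incidence.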
{ "domain": "cs.stackexchange", "id": 18250, "tags": "complexity-theory, graphs, reductions, polynomial-time" }
Very strange published papers on Mach's Principle
Question: I have recently come across a set of peer-reviewed conference papers (https://petermarkjansson.com/research/machs-principle/) reporting observations of electromagnetic markers of Mach's Principle. In short, anomalies in the discharge of batteries are observed, and the claim is that they are related to the angular position of major astronomical masses (Earth, Sun, Moon, Virgo Supercluster). Holding an MSc in physics myself, I see these claims as outright crackpottery. However, these claims do not come from some random forum, but from papers that are technical in their form, and publicly presented at conferences by an academic. Now, were they just papers about malfunctioning equipment, it would only be a matter of some conferences' standards for how (un)interesting a contribution they are willing to accept. The potential crackpottery arises when the authors relate the anomaly to some Mach Effect, without stating what physical quantity is the source, or why such a large effect has gone unnoticed so far, or why it only affects their specific electric appliance. Moreover, I was not able to find other works along the same research line. So, my question is: have I been missing a legitimate line of research up until now, or have I walked into what is repeated malpractice (to say the least) in peer-review scrutiny? NOTE. I am aware that some red flags can be spotted: the targeted conferences are minor; the research topic, as well as the citations, looks quite isolated; the emphasis on contributions from undergraduate students is unusual; the team's Mach Field Sensor/Detector is patented, which may point to non-scientific interests. However, I am not interested in these aspects here, nor in the motive of the pattern; I am only interested in the physical claim being made, plus (if possible) a judgment on the review process. Thanks in advance.
Answer: Having looked at the web site, what stands out is the misuse of the concept of Mach's principle; the failure to give quantitative information; the failure to engage with the obvious questions; and the apparent lack of a self-critical mindset.
{ "domain": "physics.stackexchange", "id": 82201, "tags": "general-relativity, inertia, machs-principle" }
Does temperature affect nuclear decay?
Question: I'm wondering if temperature could affect the half-life of an element. For example, carbon-14 has a half-life of 5,730 years. Is this always true, or only true at standard conditions of temperature and pressure? Also, if temperature does have an effect, how does it affect the half-life? Answer: Temperature affects the half-life via time dilation. If the half-life of a nucleus as observed in its rest frame is $\tau(0)$, then in some other frame in which it moves at a speed of $v$, it will be: $$\tau(v) =\frac{\tau(0)}{\sqrt{1-\dfrac{v^2}{c^2}}}$$ Because the typical thermal speeds of atoms at room temperature are much smaller than the speed of light, we may expand the above expression to leading order. This yields: $$\tau(v) =\tau(0)\left[1 + \frac{v^2}{2c^2}+\cdots \right]$$ The average of the square of the speed over the velocity distribution of atoms at temperature $T$ follows from the equipartition theorem. The average energy in each degree of freedom of a gas is $\frac{1}{2} k T$, so the average kinetic energy is $\frac{3}{2} k T$; it then follows that: $$\left\langle v^2\right\rangle = \frac{3 k T}{m}$$ Therefore, the leading temperature dependence of the half-life is given by: $$\tau =\tau(0)\left[1 + \frac{3 k T}{2mc^2}+\cdots \right]$$ For carbon-14 at $25 ^{\circ}$ C the term $ \dfrac{3 k T}{2mc^2}$ is $2.95\times 10^{-12}$. Since the experimental error in the half-life is about 0.7%, this effect is far too small to be observed.
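The quoted number is easy to reproduce; a short sketch using standard constants:

```python
# Reproducing the quoted 2.95e-12: the leading thermal time-dilation
# correction 3kT/(2 m c^2) for carbon-14 at 25 degrees C.
k = 1.380649e-23          # Boltzmann constant, J/K
T = 298.15                # 25 C in kelvin
u = 1.66053906660e-27     # atomic mass unit, kg
m = 14.003242 * u         # mass of a carbon-14 atom, kg
c = 2.99792458e8          # speed of light, m/s

correction = 3 * k * T / (2 * m * c**2)
print(correction)         # ~2.95e-12, far below the ~0.7% half-life uncertainty
```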
{ "domain": "physics.stackexchange", "id": 92593, "tags": "nuclear-physics, temperature, half-life" }
Why is rapidity additive?
Question: With the rapidity $\phi$ defined so that $\frac{v}{c}=\tanh{\phi}$, say you have 3 parallel moving reference frames $S$, $S'$ and $S''$, each with a constant but different velocity/rapidity. If the rapidity of $S'$ in relation to $S$ is $\phi_1$, and the rapidity of $S''$ in relation to $S'$ is $\phi_2$, then the rapidity of $S''$ in relation to $S$ is $\phi_1+\phi_2$. I don't really see why this is the case; it's probably something really simple I'm missing, because there's no further explanation in my syllabus. Answer: Alright, here's a very crude heuristic (definitely not a proof) for why rapidity is additive, without using velocity composition. Impose these two constraints: 1) The constancy of the speed of light in all frames. This means that the rapidity of light, $\tanh\phi_c=1$, is the same in all frames of reference. Consider this case: you observe a frame of reference $S'$ moving with $\tanh \phi_1=v$ with respect to $S$, followed by a light beam with $\tanh\phi_c=1$. We know the rapidity of light in the $S'$ frame is the same as in $S$. Given this fact, we'd like to find out what relation $ f(\phi_1,\phi_c)$ between the old and new rapidities in the new frame $S'$ satisfies: $$\tanh( f(\phi_1,\phi_c))=1$$ 2) The second constraint is that the rapidity of an object at rest is zero. So if there's an object moving with velocity $v$ in $S$, it's at rest in $S'$. That is, $$\tanh( f(\phi_1,\phi_1))=0$$ Then it's reasonable to conjecture that $f(\phi_1,\phi_2)=\phi_2 -\phi_1$, as you can check yourself using the hyperbolic identity $$\tanh (\phi_1 +\phi_2)=\dfrac{\tanh \phi_1+ \tanh \phi_2}{1+\tanh \phi_1 \tanh \phi_2} $$ Update: User Prahar pointed out in the comments that I jumped to this conclusion too quickly. Why, for instance, are the functions $$(\phi_2 -\phi_1)^2$$ or $$(\phi_2 -\phi_1)^n$$ or $$\ln\left(\dfrac{\phi_2}{\phi_1}\right)$$ invalid, although they satisfy the first two constraints?
The reason has to do with another constraint: 3) The principle of relativity dictates the following: if you're $S$ and $S'$ is moving with $\tanh \phi_1 =v$ according to $S$, then according to $S'$, $S$ is moving with $-v$, so that $$\tanh (f(\phi_1,0))=-v$$ This is only satisfied for $$\phi_2 -\phi_1$$ because for $$(\phi_2 -\phi_1)^2$$ or any other $n$, we have $$\tanh ( f(\phi_1,0))= \tanh \left((0 -\phi_1)^2\right)= \tanh \phi_1^2 $$ which is evidently not equal to $-v$, because $\tanh \phi_1^2 $ is positive (whereas it should have been negative), and also because it's never equal to $v$, since $\tanh$ is a one-to-one function.
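The additivity is also easy to confirm numerically against the velocity-addition formula (arbitrary sample rapidities, units with $c = 1$):

```python
import math

# Check that rapidities add: composing v1 = tanh(phi1) and v2 = tanh(phi2)
# with the relativistic velocity-addition formula gives tanh(phi1 + phi2).
phi1, phi2 = 0.4, 1.1                      # arbitrary rapidities
v1, v2 = math.tanh(phi1), math.tanh(phi2)

composed = (v1 + v2) / (1 + v1 * v2)       # relativistic velocity addition
assert abs(composed - math.tanh(phi1 + phi2)) < 1e-12
```

This is exactly the hyperbolic identity quoted above, evaluated at one point.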
{ "domain": "physics.stackexchange", "id": 27371, "tags": "special-relativity, velocity, inertial-frames" }
LIGO only measures off-diagonal components of metric tensor?
Question: In a paper from Paik et al. 2016, they state (Section 3) that a terrestrial gravitational wave (GW) detector measures only one off-diagonal component of the metric tensor. Can anyone expand on why this is so? From my copy of Misner, Thorne, and Wheeler, I understand that we work with a linearized theory and adopt the transverse-traceless (TT) gauge, which reduces the perturbed metric to just 2 degrees of freedom corresponding to the two GW polarizations. In this case, is a terrestrial detector (e.g. LIGO) not detecting two components, and are they not necessarily off-diagonal? Thanks. Answer: Whether or not you linearize GR, in the transverse-traceless gauge there are only 2 polarizations ($h_+$ and $h_\times$). In linearized gravity you have the freedom to write these two polarizations as $$ h_{\alpha \beta}(t,z) = \left( \begin{array}{c c c c} 0 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & -1 & 0 \\ 0 & 0 & 0 & 0 \end{array} \right) h_+(t-z) + \left( \begin{array}{c c c c} 0 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{array} \right) h_\times(t-z) \, .$$ You may have more polarizations in a different gauge, but eventually, as you say, there are only two independent degrees of freedom. The detector's response to an incoming wave is given as $$ h = F_+h_+ + F_\times h_\times$$ so with one or two detectors you can't say much. But if you have 3 detectors, you can tell the polarization of an incoming wave, i.e. whether it is linearly, circularly, or elliptically polarized. You can also triangulate the source and constrain the sky map.
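As a toy illustration of the last point (all antenna factors and polarization amplitudes below are invented numbers): with detectors whose antenna factors are linearly independent, the two polarizations can be recovered from the scalar responses $h_i = F_{+,i} h_+ + F_{\times,i} h_\times$.

```python
# Toy recovery of the two polarizations from detector responses
# h_i = F+_i * h_plus + Fx_i * h_cross; all numbers are made up.
h_plus, h_cross = 0.7, -0.2          # "true" polarization amplitudes (assumed)
F = [(0.9, 0.1), (0.3, 0.8)]         # (F+, Fx) per detector (assumed)
h = [fp * h_plus + fx * h_cross for fp, fx in F]

# invert the 2x2 linear system for (h_plus, h_cross)
(a, b), (c, d) = F
det = a * d - b * c                  # nonzero iff the factors are independent
rec_plus = (d * h[0] - b * h[1]) / det
rec_cross = (-c * h[0] + a * h[1]) / det
assert abs(rec_plus - h_plus) < 1e-12 and abs(rec_cross - h_cross) < 1e-12
```

A single detector gives one equation for two unknowns, which is why one instrument alone cannot disentangle the polarization content.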
{ "domain": "physics.stackexchange", "id": 40549, "tags": "general-relativity, gravity, gravitational-waves, ligo" }
Is it possible to 3D print a mirror to create a high quality telescope?
Question: Is it possible to 3D print a mirror with today's available materials? If so, would there be a reduction in image quality? Answer: The highest-resolution 3D printers I know of are around 1600 dpi, which is a resolution of about 15 $\mu m$. Telescope mirrors have to be smooth to fractions of a wavelength of light, so the resolution of current printers is nowhere near good enough. Whether 3D printers could one day be good enough is a different question, but given that the improvement in resolution required is at least a factor of 1,000, I think it's not likely, because 3D printers are designed for quick manufacture rather than precision manufacture. In any case, making mirrors is a well-established procedure. The difficulty is making them large, and it's not obvious how 3D printers would help with this.
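The arithmetic behind this gap can be sketched as follows (using an assumed $\lambda/20$ surface tolerance at 550 nm; stricter figure tolerances push the gap toward and past 1,000):

```python
# Comparing 1600 dpi printer resolution with an optical surface tolerance.
inch_um = 25400.0               # micrometres per inch
printer_res = inch_um / 1600    # ~15.9 um voxel pitch at 1600 dpi
tolerance = 0.55 / 20           # lambda/20 at 550 nm, in micrometres (assumed)

ratio = printer_res / tolerance
print(ratio)                    # ~580: hundreds of times too coarse
```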
{ "domain": "physics.stackexchange", "id": 28244, "tags": "optics, astronomy, reflection, telescopes" }
What are some ways to prove that centripetal force is a center-seeking force?
Question: What are some examples of cases that illustrate the center-seeking behavior of centripetal force? I cannot find any examples of the center-seeking behavior of an object in a circular path without some other source of force such as tension or another external force. In these cases, how are we able to say that the force is center-seeking? It seems to me that it could also be an outward force. In the case of objects following semicircular paths, such as parabolic motion or motion in a bowl, the center-seeking force is clearly demonstrated, but I cannot find such examples in the case of complete revolutions. Answer: I cannot find any examples of the center-seeking behavior of an object in a circular path without some other source of force such as tension or another external force. Not sure what you are expecting to find here. The "behaviour" or motion of an object is determined by the net force acting on it. Newton's First Law tells us that if the net force acting on an object is zero then it moves in a straight line at constant speed - we can include a stationary object in this category, since its speed is zero. If an object is not moving in a straight line at constant speed - if it moves in a circle, for example - then there must be some non-zero force acting on it. Newton's Second Law tells us that the net force must be in the same direction as the object's acceleration, the instantaneous rate of change of its velocity. If an object is moving in a circle at constant speed then a simple geometrical argument shows its acceleration vector always points towards the centre of the circle. Hence the net force on the object must point towards the centre of the circle - this is what we call "centripetal force". There can be various sources for the centripetal force - tension in a string, friction on a flat bend, the horizontal component of the normal force on a banked bend, electromagnetic force, gravity etc.
And there could be several forces acting on the object at the same time - when the Moon is between the Earth and the Sun, the Sun's gravity is "outwards" but the "inward" force of the Earth's gravity is greater, so the net force is "inwards", towards the Earth. But you cannot have circular motion with no forces acting on an object at all.
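The geometrical argument that the acceleration points at the centre can also be verified numerically with a finite-difference sketch (radius, angular speed, and sample time are arbitrary choices):

```python
import math

# Finite-difference check: for uniform circular motion the acceleration
# satisfies a = -w^2 r, i.e. it points from the object toward the centre,
# with magnitude w^2 R.
R, w = 2.0, 3.0                           # radius and angular speed (assumed)

def pos(t):
    return (R * math.cos(w * t), R * math.sin(w * t))

def accel(t, h=1e-5):                     # second central difference
    (x0, y0), (x1, y1), (x2, y2) = pos(t - h), pos(t), pos(t + h)
    return ((x0 - 2 * x1 + x2) / h**2, (y0 - 2 * y1 + y2) / h**2)

t = 0.37                                  # arbitrary sample time
ax, ay = accel(t)
x, y = pos(t)
assert abs(ax + w * w * x) < 1e-3 and abs(ay + w * w * y) < 1e-3
```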
{ "domain": "physics.stackexchange", "id": 75084, "tags": "newtonian-mechanics, gravity" }
ASCII game in console
Question: Overall, I want to make a console game, something Pokemon-like but with typical RPG scenery - swords, magic and all that. I want to make the player able to move around the map, find dungeons, fight some monsters (in a round-based manner, like Pokemon but you fight yourself: four moves, being able to escape, potions, etc.), and take treasures. I just want to know if I am on the right path. I'll take ratings, suggestions, ideas, everything. Mostly I want to know if I include the right files and if this structure of classes is optimal. Everything else is here: https://github.com/supermaciu/ascii-rpg-game-official

main.cpp:

#include <iostream>
#include <windows.h>
#include <conio.h>
#include <vector>
#include <string>
#include <sstream>
#include <cstdlib>
#include <time.h>
#include <algorithm>

#include "Utilities.h"
#include "Game.h"
#include "BoardObject.h"
#include "Board.h"
#include "Player.h"
#include "Enemy.h"
#include "Teleport.h"

// TODO: optimize
// TODO: file reader, txt to board setup
// FIX: why does it take so much time to compile
// TODO: Board function to move a BoardObject to another board
// TODO: optimize Board.cpp coloring
// IDEA: console window widens on inventory open on the right side / slowly
// TODO: threading / doing things while waiting on input

int main()
{
    //Init
    srand(time(NULL));
    bool running = true;
    bool debug_mode = true;

    //Game / Config
    Game game;
    game.setTitle("ascii-rpg-game");
    game.setCursorVisibility(false);
    game.resizeWindow(get_screen_width()*0.85, get_screen_height()*0.8);
    game.moveWindowCenter();
    game.setConsoleBufferSize(0, 0);
    //game.maxemizeWindow(true);
    game.resizeableWindow(false);
    //game.resizeableWindow(true); // not working
    //game.disableInput(true);

    Board* board = new Board("board1", 10, 10, &game);
    game.set_current_board(board);
    Player* player = new Player(1, 1, board);

    //Board 2
    Board* board2 = new Board("board2", 15, 5, &game);
    Board* board3 = new Board("sklep", 8, 8, &game); // TODO: shop template

    Teleport* t1 = new Teleport(3, 3, board);
    t1->setDestination(0, 0, board2);
    Teleport* t2 = new Teleport(3, 3, board2);
    t2->setDestination(0, 0, board);
    Teleport* t3 = new Teleport(5, 3, board);
    t3->setDestination(0, 0, board);
    Teleport* t4 = new Teleport(1, 8, board);
    t4->setDestination(0, 0, board3);

    for (int i = 0; i < board->get_width(); i++) {
        Void* v = new Void(i, 5, board);
    }

    //Input
    char move;

    //Initial board draw
    player->eventsHandler(move);
    game.showBoard();

    while (running) {
        move = getch();
        if (move == -32) {
            move = getch();
        }
        if (move == 3 || move == 27) {
            running = false;
        } else if (move == 9) {
            if (debug_mode == false) {
                debug_mode = true;
            } else {
                debug_mode = false;
            }
        }
        player->eventsHandler(move);
        game.showBoard();

        //Debug
        game.debug(debug_mode, player);
        Sleep(1);
    }
    return 0;
}

Teleport.h:

#pragma once
#include <string>
#include "BoardObject.h" //has to be included -> Teleport is a derived class

class Board;

class Teleport : public BoardObject {
private:
    unsigned int dest_x = 0;
    unsigned int dest_y = 0;
    Board* dest_board = nullptr;
    bool tp_set = false;
    bool dynamic = false;
public:
    Teleport(unsigned int x, unsigned int y, Board* board);
    ~Teleport();

    bool is_set() { return tp_set; }
    void setDestination(unsigned int x2, unsigned int y2, Board* board2, bool dynamic=false);
    void unsetDestination();
    void onTouchEvent(Player* player) override;
};

Game.h:

#pragma once
#include <string>
#include "Utilities.h"

class Board;
class BoardObject;
class Player;

class Game {
private:
    std::string game_name;
    HWND consoleWindow; // console window handle
    HANDLE handle; // handles changes made to command prompt
    COORD coord; // coordinates
    CONSOLE_CURSOR_INFO consoleCursorInfo; // console cursor options
    std::vector<Board*> boards = {};
    Board* current_board = nullptr;
public:
    Game() = default;
    Game(const std::string& game_name);
    ~Game();

    HANDLE get_handle() { return this->handle; }
    void setTitle(const std::string& game_name);
    void setCursorVisibility(bool show_cursor);
    void resizeWindow(int
width, int height); void moveWindow(int x, int y); void moveWindowCenter(); void setConsoleBufferSize(int x, int y); void maxemizeWindow(bool maxemize); void resizeableWindow(bool can_resize); void disableInput(bool disable); Board* get_current_board() { return this->current_board; }; void set_current_board(Board *board); void showBoard(); Board* getBoardByName(const std::string& name); std::vector<Board*> getAllBoards(); std::vector<BoardObject*> getAllBoardObjects(); void addBoard(Board* board) { boards.push_back(board); } void debug(bool debug_mode, Player *player); }; Board.h: #pragma once #include <vector> #include <string> class Game; class BoardObject; class Board { private: Game *game; const int BOARD_LIMIT_MIN = 2; const int BOARD_LIMIT_MAX = 100; std::string name = "nullBoard"; int width; int height; char board[100][100] = {}; std::vector<BoardObject*> board_objects = {}; public: Board(unsigned int width, unsigned int height, Game* game); Board(const std::string& name, unsigned int width, unsigned int height, Game* game); std::string get_name() { return this->name; } void set_name(std::string name) { this->name = name; } int get_width() { return this->width; } int get_height() { return this->height; } void render(); void draw(); bool hasBoardObjects(); void addToBoard(BoardObject* board_object); void deleteFromBoard(BoardObject* board_object, bool delete_object_in_memory); std::vector<BoardObject*> getBoardObjects(); BoardObject* getBoardObjectById(int id); std::vector<BoardObject*> getBoardObjectsByClassname(std::string classname); BoardObject* getBoardObjectByCoords(int x, int y); friend class Game; }; BoardObject.h: #pragma once #include <string> #include "Game.h" class Board; class Player; class BoardObject { protected: Game *game; unsigned int x; unsigned int y; char c; int color = 0x07; // 0x, 0 -> black, 7 -> console white unsigned int id; static unsigned int ID; std::string classname = "nullBoardObject"; bool moveInto= false; bool interactWith = 
false; Board* board; public: BoardObject(unsigned int x, unsigned int y, char c, Board* board); std::string get_classname() { return this->classname; } int get_id() { return this->id; } int get_x() { return this->x; } int get_y() { return this->y; } void set_pos(int x, int y); char get_char() { return this->c; } void set_char(char c) { this->c = c; } int get_color() { return this->color; } void set_color(int color) { this->color = color; } Board* get_board() { return this->board; } void set_board(Board* board) { this->board = board; } bool get_moveInto() { return this->moveInto; } void set_moveInto(bool canMoveInto) { this->moveInto = canMoveInto; } bool get_interactWith() { return this->interactWith; } void set_interactWith(bool canBeInteractedWith) { this->interactWith = canBeInteractedWith; } virtual void onEnterEvent(Player* player) {}; virtual void onTouchEvent(Player* player) {}; friend class Game; friend class Board; }; class Void : public BoardObject { public: Void(unsigned int x, unsigned int y, Board* board); }; Player.h: #pragma once #include <string> #include "BoardObject.h" #include "Board.h" #include "Teleport.h" class Player : public BoardObject { public: char move; int kill_count = 0; Teleport* t1 = nullptr; Teleport* t2 = nullptr; private: int dirx = 0; int diry = 0; int prev_x; int prev_y; char prev_move; public: Player(unsigned int x, unsigned int y, Board* board); //BoardObject functions void set_pos(int x, int y); //Player functions int get_dirx() { return dirx; } int get_diry() { return diry; } int get_prev_x() { return prev_x; } int get_prev_y() { return prev_y; } char get_prev_move() { return prev_move; } bool canMoveInto(int x, int y); bool canMoveInto(BoardObject* bo); void movementHandler(char move); void checkOnEnterEvents(); void checkOnTouchEvents(); void eventsHandler(char move); }; Enemy.h: #pragma once #include <string> #include "BoardObject.h" class Board; class Enemy : public BoardObject { public: Enemy(unsigned int x, unsigned 
int y, Board* board); void onEnterEvent(Player* player) override; }; Answer: Unnecessary use of this-> You almost never have to write this-> in C++. The exception is when you have member function arguments whose names are the same as member variables, like you do in the constructors. There are two ways around this: Use a different name for the member function argument, like appending a _. Use member initializer lists. An example of the latter would be: Board::Board(unsigned int width, unsigned int height, Game* game): width(std::clamp(width, BOARD_LIMIT_MIN, BOARD_LIMIT_MAX)), height(std::clamp(height, BOARD_LIMIT_MIN, BOARD_LIMIT_MAX)), game(game) { game->addBoard(this); } Avoid manual memory management Avoid calling new and delete where possible, and let containers manage their memory, or use smart pointers like std::unique_ptr. Manual memory management often ends with memory leaks. For example, the Boards allocated in main() are never deleted. Also, you didn't write a destructor for class Board, so if a Board is deleted but still has some items in board_objects, those items will be leaked as well. For board_objects, since you only know the base type, you can use std::unique_ptr like so: std::vector<std::unique_ptr<BoardObject>> board_objects; In addToBoard() you would only need to replace push_back() with emplace_back(). And you can remove the delete statement from deleteFromBoard(). However, that still requires you to new an object before calling addToBoard(). You could have addToBoard() take a std::unique_ptr<BoardObject> as an r-value parameter, create new objects using std::make_unique<DerivedObject> and then std::move() them around, but that's also cumbersome. Let's revisit this: Don't let objects manage their own storage I would avoid having objects derived from BoardObject add themselves to a Board. If a Board is going to own the objects, let the Board add them to itself.
However, I can see that you don't want to have to write two lines of code every time, like so: Enemy *enemy = new Enemy(x, y); board->addToBoard(enemy); Although you could make a one-liner out of it: board->addToBoard(new Enemy(x, y)); But as said before, this way you are still allocating a derived object manually. Wouldn't it be nice if you could let addToBoard() create an object of the right type for you? You can do that if you make it a template, like so: template <typename T, typename... Args> void addToBoard(Args&&... args) { board_objects.push_back( // Add to board_objects a std::make_unique<T>( // new unique object of type T and std::forward<Args>(args)... // forward args to its constructor ) ); } Note that the pack expansion ... goes outside the std::forward call, so each argument is forwarded individually. If you're not used to templates, this might look a bit complicated, but the benefit of this is that the code using it will be much simpler to write. For example, to add an enemy to the board, now you just have to write: board->addToBoard<Enemy>(x, y); If you need a pointer to the object you just added, you can modify addToBoard() to return a pointer to the object you just pushed to the vector: template <typename T, typename... Args> T* addToBoard(Args&&... args) { ... return board_objects.back().get(); }
{ "domain": "codereview.stackexchange", "id": 43266, "tags": "c++, classes, windows, role-playing-game" }
Distance between very large discrete probability distributions
Question: I have 192 countries where each country has some value for 1 million attributes which sum up to 1 (a discrete probability distribution). For any one country most of the values for the attributes are 0. Now I am trying to find the distance/similarity between those countries using these attributes. I know we can use Jensen Shannon Divergence between two discrete probability distributions to get a distance measure, but the caveat is that all the values have to be non-zero. Given that there are zero valued attributes for the countries, is there any other suitable statistical distance measure that can help me to cluster these countries using these 1 million attributes? Answer: Yes, plenty. Get the book "encyclopedia of distances". For example, you can use Histogram Intersection distance. Since your data is already normalized, that reduces to Manhattan distance, if I am not mistaken. Yes: this can be appropriate for distributions.
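To make the answer's claim concrete, here is a pure-Python sketch with toy data (not the question's dataset): for normalized distributions, histogram-intersection distance equals half the Manhattan distance, and even Jensen–Shannon divergence tolerates zero-valued attributes, because the mixture term is nonzero wherever either distribution is nonzero:

```python
import math

def jensen_shannon(p, q):
    """JSD in nats; 0*log(0) is treated as 0, so zero attributes are fine."""
    def kl(a, b):
        return sum(ai * math.log(ai / bi) for ai, bi in zip(a, b) if ai > 0)
    m = [(pi + qi) / 2 for pi, qi in zip(p, q)]  # m > 0 wherever p or q is
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

def manhattan(p, q):
    return sum(abs(pi - qi) for pi, qi in zip(p, q))

def histogram_intersection_distance(p, q):
    # 1 - sum(min) equals half the Manhattan distance when both sum to 1
    return 1.0 - sum(min(pi, qi) for pi, qi in zip(p, q))

# Two sparse "country" distributions with zero-valued attributes
p = [0.5, 0.5, 0.0, 0.0]
q = [0.0, 0.5, 0.5, 0.0]

d_hi = histogram_intersection_distance(p, q)
d_l1 = manhattan(p, q)
jsd = jensen_shannon(p, q)
```

With a million mostly-zero attributes you would of course iterate only over the nonzero entries of each vector.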
{ "domain": "datascience.stackexchange", "id": 3545, "tags": "clustering, probability, distance" }
Technical name for a road "sliparound" lane
Question: What is the technical name for this kind of road segment? I think the purpose of the lane on the right is to bypass the traffic lights at the intersection. I've heard that kind of lane referred to as a "sliparound" lane. But I doubt that's the proper term. Edit: The location is Ontario, Canada. Answer: The name is the same as its purpose of service - "Exclusive Right Turn Lane", or "Right Turn Only Lane". Right turn lanes can significantly improve the capacity and level of service of signalized intersections.
{ "domain": "engineering.stackexchange", "id": 4792, "tags": "civil-engineering, terminology" }
The phase space trajectory of a single particle falling freely from height is?
Question: The phase space trajectory of a single particle falling freely from height is? Phase space is a plot between momentum and position, and since kinetic energy increases the momentum must increase with position, so option "2" must be correct, but the answer key shows the answer as option "4". Please tell me the correct method to plot it if my observation is wrong. Answer: At t = 0, p = 0 and z = H, so the first point should be on the horizontal (position) axis. At the final position the object is at z = 0 and its p is maximal, hence that point is on the vertical (momentum) axis: momentum increases as the height z decreases, not as it increases. You can use the relation $v^2 = 2g\,\Delta z$ to figure out these points. I think that supports the answer, though I do not like the apparent hump in the plot.
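A small sketch of how to generate the plot points from energy conservation (the mass, g, and drop height are illustrative values); note that $p^2 \propto (H - z)$, so the trajectory is a sideways parabola running from $(z, p) = (H, 0)$ to $(0, p_{\max})$:

```python
import math

m, g, H = 1.0, 9.8, 10.0  # assumed illustrative values (SI units)

def momentum_at(z):
    """|p| at height z from energy conservation: v^2 = 2 g (H - z)."""
    return m * math.sqrt(2.0 * g * (H - z))

# The trajectory runs from (z=H, p=0) down to (z=0, p=max)
points = [(z, momentum_at(z)) for z in [H, 7.5, 5.0, 2.5, 0.0]]
```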
{ "domain": "physics.stackexchange", "id": 58226, "tags": "homework-and-exercises, classical-mechanics, projectile, phase-space, free-fall" }
If dark matter can't interact electromagnetically, then how can it annihilate into photons?
Question: I have read this question: As I understand, dark matter theoretically only interacts with the gravitational force, and doesn't interact with the other three fundamental forces: weak nuclear force, strong nuclear force, and electromagnetism. If dark matter only interacts with gravity, why doesn't it all clump together in a single point? There are a few questions on this site about dark matter annihilation and anti-dark matter, and all of them take it for granted that dark matter annihilates into photons. Is there a possibility of anti-dark matter? Dark Matter gamma-ray flux from hadronic annihilation channels? How convincing is the evidence for dark matter annihilation at 130 GeV in the galactic center from the Fermi Satellite data? Now as far as I understand, ordinary matter and anti-matter can annihilate into photons because ordinary matter does interact electromagnetically. But dark matter does not interact electromagnetically, hence, dark matter cannot annihilate into photons, dark matter's energy cannot be transformed into the energy of the EM field. Question: If dark matter can't interact electromagnetically, then how can it annihilate into photons? Answer: This has been addressed in Neutrino annihilation and bosons since neutrinos are a type of dark matter (they just aren't the dark matter we need to account for the extra mass of the universe). You are quite correct that a neutrino and antineutrino cannot annihilate directly to photons but they can do so via a Z boson. We don't know what the dark matter particles are but we expect they will annihilate by a similar mechanism involving an intermediate particle.
{ "domain": "physics.stackexchange", "id": 81163, "tags": "quantum-mechanics, electromagnetism, general-relativity, dark-matter" }
What could be causing noisy pH measurements?
Question: I designed and built a hydroponics system with pH logging and I am trying to understand why pH measurements in the nutrient reservoir vary ~ ±0.2 pH while measurements in a separate bottle of probe storage solution only vary ~ ±0.01 pH. The nutrient solution is in a 3.5 gallon plastic pail, as shown below. The pH probe is the one with a blue top. The black probe measures EC and removing it has no effect on the pH noise in the reservoir. The plot below shows the logged pH. The abrupt change is when the probe is moved from the nutrient solution to the storage solution. Note the relatively large noise in the nutrient reservoir and the nearly imperceptible noise in the storage solution. In the plot, the pH is measured every 5 minutes at the start of a spray cycle, when the pump is off and has not been running for the last ~4.5 minutes. Note that similar noise occurs when measuring at a rate of 1 Hz. During the storage solution measurements, the probe tip is in a separate O-ring sealed bottle. The bottle is sitting at the spot where the pH probe would normally be inserted into the reservoir. What are potential causes of the pH noise in the nutrient solution? Answer: Several factors may influence your sensor readings: Electromagnetic noise. However, since EMF should be directly proportional to pH, this does not seem to be your case: if it were the main cause, the same amount of noise would show up in both the nutrient and the storage solutions. Temperature. I've included this one because it is unclear from your post whether you are measuring the actual solution temperature when you calculate pH from EMF, or just assume that the temperature is some constant value. The Nernst equation may be used to estimate the amount of "noise" (i.e. the difference in EMF) one may expect from the temperature fluctuations: $\Delta E=2.303\frac{R\Delta T}{F}pH$. That is, if we can neglect the temperature dependence of pH itself, i.e.
when $\Delta T$ is relatively small (when $\Delta T$ is large we can no longer neglect the temperature dependences of equilibrium constants). If, for some reason, the daily temperature change in the nutrient bath is different than that in the storage solution, you may expect to get slightly different readings. Math. We should not forget that the value measured (pH) is logarithmic, and that it is the concentration (well, activity) of protons that makes the difference. For example, to change pH from 6.45 to 6.05 (±0.2), you need to add just around $5.4\cdot 10^{-7}$ mol/l of protons, but to change it from 4.26 to 4.24 (±0.01) - $2.6\cdot 10^{-6}$, or almost 5 times as much. In other words, the same actual $[\textrm{H}^{+}]$ fluctuations cause different pH changes around different pH values. In turn, these $[\textrm{H}^{+}]$ fluctuations may be due to slight inhomogeneities of the nutrient solution, differences in dissolved carbon dioxide concentration, etc. The more neutral your solution is, the more sensitive, with respect to $\Delta [\textrm{H}^{+}]$, your pH electrode becomes. In this regard, you may want to prepare a few solutions with different pH and try to investigate the noise vs. pH dependence. Or do something even more simple - as a reference system, instead of the storage solution (which, in addition, probably possesses some buffer characteristics), try to use something more neutral, like water, and see if the noise is as large as in the nutrient solution. Since my analytical chemistry is a bit rusty, no other potential cause comes to mind.
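To put numbers on the Nernst and logarithm points above, here is a small sketch (the constants are standard; the example pH values are the ones used in the answer):

```python
import math

R, F = 8.314, 96485.0  # gas constant J/(mol K), Faraday constant C/mol

def nernst_noise_mV(pH, dT, T=298.15):
    """EMF change (in mV) from a small temperature fluctuation dT at fixed pH,
    per Delta E = 2.303 * R * dT / F * pH."""
    return 2.303 * R * dT / F * pH * 1000.0

def delta_H(pH_from, pH_to):
    """Change in proton concentration (mol/l) between two pH values."""
    return abs(10.0 ** -pH_to - 10.0 ** -pH_from)

# The answer's example: +-0.2 pH near neutral vs +-0.01 pH in acidic solution
d_neutral = delta_H(6.45, 6.05)  # ~5.4e-7 mol/l for a 0.4 pH swing
d_acidic = delta_H(4.26, 4.24)   # ~2.6e-6 mol/l for a 0.02 pH swing
ratio = d_acidic / d_neutral     # ~5x more protons for a 20x smaller pH change
```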
{ "domain": "chemistry.stackexchange", "id": 11145, "tags": "ph" }
Angle to a circle tangent line
Question: I want to simulate the detection of a moving object by a unicycle type robot. The robot is modelled with position (x,y) and direction theta as the three states. The obstacle is represented as a circle of radius r1 (r_1 in my code). I want to find the angles alpha_1 and alpha_2 from the robot's local coordinate frame to the circle, as shown here: So what I am doing is trying to find the angle from the robot to the line joining the robot and the circle's centre (this angle is called aux_t in my code), then find the angle between the tangent and the same line (called phi_c). Finally I would find the angles I want by adding and subtracting phi_c from aux_t. The diagram I am thinking of is shown: The problem is that I am having trouble with my code when I try to find the alpha angles: It starts calculating the angles correctly (though in negative values, not sure if this is causing my trouble) but as both the car and the circle get closer, phi_c becomes larger than aux_t and one of the alphas suddenly changes its sign. For example I am getting this: $$\begin{array}{c c c c} \text{aux_t} & \text{phi_c} & \text{alpha_1} & \text{alpha_2} \\ \hline \text{-0.81} & \text{+0.52} & \text{-1.33} & \text{-0.29} \\ \text{-0.74} & \text{+0.61} & \text{-1.35} & \text{-0.12} \\ \text{-0.69} & \text{+0.67} & \text{-1.37} & \text{-0.02} \\ \text{-0.64} & \text{+0.74} & \text{-1.38} & \text{+0.1} \\ \end{array}$$ So basically, the alpha_2 goes wrong from here. I know I am doing something wrong but I'm not sure what, I don't know how to limit the angles from 0 to pi. Is there a better way to find the alpha angles? Answer: First, determine the angle $\phi$ between the robot $<\!a_{x},a_{y},\theta\!>$ and the target $<\!p_{x},p_{y}\!>$ as follows $$ \phi = \tan^{-1} \left( \frac{ p_{y} - a_{y} }{ p_{x} - a_{x} } \right) - \theta $$ See the picture below. Based on $\phi$, you can determine the rest.
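For completeness, here is a sketch of the asker's aux_t/phi_c construction (function and variable names are mine, using atan2 as the answer suggests). Note that the sign change the asker observed is correct geometry, not a bug: once the robot is close enough that phi_c exceeds |aux_t|, the circle straddles the robot's heading line and the two alphas must have opposite signs. The angles should be wrapped into (-pi, pi], not clamped to [0, pi]:

```python
import math

def tangent_angles(rx, ry, theta, cx, cy, r):
    """Angles, in the robot's local frame, to the two tangent lines of a
    circle of radius r centred at (cx, cy). Returns None if the robot is
    inside (or on) the circle, where no tangents exist."""
    dx, dy = cx - rx, cy - ry
    d = math.hypot(dx, dy)
    if d <= r:
        return None
    aux_t = math.atan2(dy, dx) - theta  # angle to the line joining the centre
    phi_c = math.asin(r / d)            # angle between that line and a tangent

    def wrap(a):
        # wrap into (-pi, pi] so signs stay consistent as the robot approaches
        return math.atan2(math.sin(a), math.cos(a))

    return wrap(aux_t - phi_c), wrap(aux_t + phi_c)
```

With the robot at the origin heading along +x and a unit circle centred at (2, 0), the tangents sit symmetrically at ∓pi/6; as the robot nears, the two angles simply spread apart through zero.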
{ "domain": "robotics.stackexchange", "id": 1041, "tags": "mobile-robot, kinematics, matlab, geometry" }
Fraction of initial mass lost (radiated) by neutron star mergers compared to black hole mergers?
Question: GW190521 black hole merger total mass calculation and missing mass, how does this happen? notes that there are about 9 solar masses missing from the final black hole. GW170817 is the first observed merger of two neutron stars, detected in several ways including a weak gravitational wave. Do neutron star mergers also radiate several percent of their mass as gravitational waves, or is the fraction much smaller. They comprise ordinary matter rather than being singularities in spacetime, so my guess is that the fraction is much smaller, but I have no idea. My question is motivated by this answer. Related: "Who saw" the binary neutron star merger first? What was the sequence of events? (GRB/GW170817) What does "GPU-accelerated butterfly matched filtering over dense bank of time-symmetric chirp-like templates" mean? (GW170817) Answer: Dimensionally speaking, the luminosity of a gravitationally radiating binary system, consisting of two objects of mass $M$, separated by $R$, goes as $(M/R)^5$. The timescale of the chirp for such a system goes as $M^{-3} R^4$. (Schutz 1999). Thus the total energy released goes as $M^2/R$, i.e. it is proportional to the gravitational potential energy of the system. Because of the $R^{-1}$ dependence, it is basically the mass and radius of the "final" state that determines the energy lost. For the black hole case, the final mass is just less than $2M$ and the final configuration event horizon is $4M$ (with $G=c=1$). So $M^2/R = M$. Thus I would expect the mass-energy released in gravitational waves to be a fixed fraction of the combined mass of the black holes (note that unequal mass black holes will lead to complications). Looking at the data for the list of mergers this model seems reasonable, with the fixed fraction being about 5%. Extending this to neutron stars, well the "final" radius is going to depend on the physics of the neutron star material and so will be model dependent. However, that radius will be $>4M$ (i.e. 
probably several times the Schwarzschild radius). Another way of saying this is that neutron stars can't get as close together before the merger takes place. So from that point of view I would expect $<$5% of the combined mass-energy to be radiated as gravitational waves. Observationally, there is no accurate estimate of the final remnant mass for GW170817.
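A back-of-envelope sketch of this scaling argument (the calibration constant is my assumption, chosen so that the equal-mass black-hole case reproduces the observed ~5%; the 12 km final radius for the neutron-star case is likewise an assumed, model-dependent number):

```python
M_SUN_KM = 1.4766  # GM/c^2 for one solar mass, expressed in km

def gw_fraction(m_total_msun, r_final_km, calib=0.1):
    """Scaling estimate f ~ calib * M_total / R_final (geometric units),
    with calib fixed so an equal-mass BH merger (R_final = 2 M_total,
    the Schwarzschild radius) radiates ~5% of its mass."""
    m_geom_km = m_total_msun * M_SUN_KM
    return calib * m_geom_km / r_final_km

# Black-hole case: final radius is the remnant's Schwarzschild radius
m_bh = 65.0
f_bh = gw_fraction(m_bh, 2.0 * m_bh * M_SUN_KM)  # 5% by construction

# Neutron-star case: two 1.4 Msun stars, final radius set by nuclear physics
f_ns = gw_fraction(2.8, 12.0)  # ~3%, smaller because R_final is larger
```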
{ "domain": "astronomy.stackexchange", "id": 4775, "tags": "astrophysics, gravitational-waves, neutron-star" }
Sorting alphabetically ascending and descending using .sort()
Question: I have created this basic use case of .sort() for my own reference. I am asking for a review to see if there are bad practices present or areas that could be optimised. Is my use of modifying the array correct? Do I need to place the newly sorted list into a new array? Using .map() I found it was parsing it as a string. Is insertAdjacentHTML() the right method to use here? const listArea = document.getElementById('listArea'); const btnAsc = document.getElementById('asc'); const btnDes = document.getElementById('des'); // The Original Array let architects = [ "Lloyd Wright", "Hadid", "Mies van der Rohe", "Le Corbusier", "Foster", "Gaudi", "Piano", "Gropius", "Niemeyer", "Aalto", "Saarinen", ] // Empty Array to Store Sorted Array let sortedList = []; // Click Events for the sort buttons btnAsc.addEventListener('click', () => sortAsc(architects)); btnDes.addEventListener('click', () => sortDes(architects)); // Sort the array in ascending order function sortAsc(arr) { // Clear the sortedList array sortedList.length = 0; // Convert each item in the architects array to lower case for(const element of arr) { // Push these lowercase elements into the sortedList array sortedList.push(element.toLowerCase()); } sortedList.sort(); populateList(sortedList); } function sortDes(arr) { sortedList.length = 0; for(const element of arr) { sortedList.push(element.toLowerCase()); } sortedList.sort().reverse(); populateList(sortedList); } function populateList(arr) { // Empty out the listArea of all content while(listArea.firstChild) { listArea.removeChild(listArea.lastChild); } // Map each item in the supplied array to an li, and remove comma at end of each let listArchitects = arr.map(item => `<li>${item}</li>`).join(''); listArea.insertAdjacentHTML('beforeend', listArchitects); } window.addEventListener('onload', populateList(architects)); #listArea { text-transform: capitalize; } <div class="list"> <div class="buttons"> <button id="asc">Sort Ascending</button> <button 
id="des">Sort Descending</button> </div> <ul id="listArea"> </ul> </div> Answer: Some of the things I'd point out in your code for improvements: The array can be a const, since const only prevents reassigning the variable itself; you can still sort it or push to it. In your functions sortAsc and sortDes, you iterate through arr just to make it lowercase and then iterate through the array again in sortedList.sort(). In such scenarios, think about how you can reduce the number of iterations. sort() sorts in place, so there is no need to create a new array. In populateList, arr.map creates yet another new array, which I don't think is necessary. const listArea = document.getElementById('listArea'); const btnAsc = document.getElementById('asc'); const btnDes = document.getElementById('des'); const architects = [ "Lloyd Wright", "Hadid", "Mies van der Rohe", "Le Corbusier", "Foster", "Gaudi", "Piano", "Gropius", "Niemeyer", "Aalto", "Saarinen", ]; function sortAsc() { architects.sort(); populateList(); } function sortDes() { architects.sort().reverse(); populateList(); } function populateList() { listArea.innerHTML = ''; architects.forEach( (a) => { const list = document.createElement('li'); list.innerText = a; listArea.appendChild(list); }); } btnAsc.addEventListener('click', sortAsc); btnDes.addEventListener('click', sortDes); window.addEventListener('load', populateList); <div class="list"> <div class="buttons"> <button id="asc">Sort Ascending</button> <button
{ "domain": "codereview.stackexchange", "id": 41066, "tags": "javascript, sorting" }
Calculating acceleration of a particle from Radiation Pressure
Question: I am trying to calculate the acceleration of a particle from radiation pressure, assuming all radiation is absorbed. I got $$\Delta \vec{p} = \frac{\Delta U}{c_0}$$ and the intensity $I_S$=$1367 \ \frac{W}{m^2}$. I think that $\Delta U = I_S A$. Since $\vec{p}_0=0$, I get $$\vec{p}=\frac{I_S A}{c_0}=m \dot{x}$$ Now I am trying to find a way to calculate the acceleration. Since $$\vec{F}=\dot{\vec{p}}=m \ddot{x}=ma$$ I tried to do $$\frac{\mathrm{d}}{\mathrm{dt}} \frac{I_S A}{c_0} = 0 \quad \mathrm{???}$$ Obviously I cannot get the derivative of that part of the equation with respect to $t$. I know there is some correlation with $I_S$ (because its unit is $\frac{W}{m^2} = \frac{\frac{kg m^2}{s^3}}{m^2}$) and time, but I don't know how to differentiate that equation. Answer: The total energy $\Delta U$ is also proportional to time. So, the energy deposited is $I\times \mathrm{Area}\times \mathrm{time}$, given that the radiation falls normally on the body; otherwise you have to take a $\cos\theta$ component. Now, it's trivial to see that the acceleration will be constant if the intensity is constant.
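In other words, with $\Delta U = I_S A\,\Delta t$ the momentum grows linearly in time, so $a = I_S A/(m c)$ is constant. A quick sketch (the 1 m², 1 kg absorber is a made-up example, not from the question):

```python
C = 2.998e8    # speed of light, m/s
I_S = 1367.0   # solar constant, W/m^2

def acceleration(area_m2, mass_kg, intensity=I_S):
    """Full absorption: F = dp/dt = I*A/c, hence a = I*A/(m*c),
    constant as long as the intensity is constant."""
    return intensity * area_m2 / (mass_kg * C)

# A hypothetical 1 m^2, 1 kg fully absorbing sheet at 1 AU
a = acceleration(1.0, 1.0)  # a few micrometres per second squared
```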
{ "domain": "physics.stackexchange", "id": 44217, "tags": "electromagnetism, forces, electromagnetic-radiation, acceleration" }
How to compute the QFI of a thermal state?
Question: Let $\rho=\frac{1}{Z}\exp(-\beta H)$ be the thermal state associated to the Hamiltonian $$H=\hbar\omega\sum_i\left( a_i^\dagger a_i+\frac12\right).$$ I wonder how the quantum Fisher information of such a state is computed. Most sources give expressions for pure states, or for mixed states in the form $\rho=\sum_kp_k|\psi_k\rangle\langle\psi_k|$. Can anyone point me to some reference that illustrates how to do such calculations? Answer: QFI must always be computed with respect to a parameter "$\theta$". Perhaps it is the temperature that you want to use here, or $\beta$? Regardless, we can put the state into your desired form by expanding the exponential. Taking the complete set of orthonormal basis states $|\mathbf{n}\rangle=|n_1\rangle\otimes |n_2\rangle\otimes\cdots$ that satisfy $a_i^\dagger a_i|\mathbf{n}\rangle=n_i|\mathbf{n}\rangle$, the state can be rewritten as $$\rho=\sum_{\mathbf{n}}\frac{\exp[-\beta\hbar\omega(\sum_i n_i+\frac{1}{2})]}{Z}|\mathbf{n}\rangle\langle\mathbf{n}|.$$ All of the probabilities are thus given by $p_{\mathbf{n}}=\frac{\exp[-\beta\hbar\omega(\sum_i n_i+\frac{1}{2})]}{Z}$ and the eigenstates do not depend on any parameter, so you can use classical Fisher information formulas to proceed: $$I_Q(\rho,\theta)=\sum_{\mathbf{n}}p_{\mathbf{n}}\left(\frac{\partial \ln p_{\mathbf{n}}}{\partial \theta}\right)^2.$$ Now, if you would like a general formula for the QFI $I_Q$ for mixed states that doesn't require this canonical form, there are many options, e.g. collected in this review. 
If you have an eigen-decomposition of your state $\rho=\sum_k p_k|k\rangle\langle k|$, you can use $$I_Q(\rho,\theta)=2\sum_{jk}\frac{|\langle j|\partial_\theta\rho|k\rangle|^2}{p_j+p_k}.$$ If you do not, you can use $$I_Q(\rho,\theta)=4\min_\Psi(\langle \partial_\theta\Psi|\partial_\theta\Psi\rangle-|\langle \Psi|\partial_\theta\Psi\rangle|^2)=4\min_\Psi\langle \partial_\theta\Psi|\partial_\theta\Psi\rangle$$ where $|\Psi\rangle$ is a purification of $\rho$ in a larger Hilbert space (the factor of 4 matches the pure-state limit of the eigen-decomposition formula above), or a Lyapunov representation $$I_Q(\rho,\theta)=2\int_0^\infty ds \mathrm{Tr}[(\partial_\theta \rho)e^{-\rho s}(\partial_\theta \rho)e^{-\rho s}].$$ If $\rho$ is invertible (i.e., full-rank) then we can solve the Lyapunov equation to find $$I_Q(\rho,\theta)=2\mathrm{vec}(\partial_\theta \rho)^\dagger (\rho^*\otimes \mathbb{I}+\mathbb{I}\otimes \rho)^{-1}\mathrm{vec}(\partial_\theta \rho)$$ and if it is not full-rank then we can add a small component of a maximally mixed state and take the limit of that component going to zero (https://doi.org/10.48550/arXiv.1801.00945).
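Since the thermal state's eigenbasis is parameter independent, the classical formula is easy to check numerically. For a single mode, $\partial_\beta \ln p_n = \langle E\rangle - E_n$, so the Fisher information with respect to $\beta$ is just the energy variance, $(\hbar\omega)^2\bar n(\bar n+1)$. A sketch (with $\hbar\omega$ and $\beta$ set to illustrative dimensionless values):

```python
import math

def thermal_fisher_beta(hw, beta, n_max=500):
    """Classical Fisher information of p_n = exp(-beta*hw*(n+1/2))/Z with
    respect to beta, for one mode (sum truncated at n_max)."""
    w = [math.exp(-beta * hw * (n + 0.5)) for n in range(n_max)]
    Z = sum(w)
    p = [x / Z for x in w]
    E = [hw * (n + 0.5) for n in range(n_max)]
    mean_E = sum(pi * ei for pi, ei in zip(p, E))
    # d ln p_n / d beta = mean_E - E_n, so the Fisher information is Var(E)
    return sum(pi * (mean_E - ei) ** 2 for pi, ei in zip(p, E))

# Analytic check: Var(E) = (hw)^2 * nbar * (nbar + 1), nbar = 1/(e^{beta hw}-1)
hw, beta = 1.0, 0.7
nbar = 1.0 / (math.exp(beta * hw) - 1.0)
fi = thermal_fisher_beta(hw, beta)
```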
{ "domain": "quantumcomputing.stackexchange", "id": 5177, "tags": "information-theory, quantum-fisher-information, quantum-metrology" }
What does U mean in pulse schedules for superconducting quantum computers?
Question: What does U channel mean in pulse schedules for superconducting devices? In the image you can see the schedule for a CX gate. I assume D stands for the drive channels. What does U mean, is it perhaps the control channel? Why the letter U, what does it stand for? Answer: You're already answering your own question: U0 stands for the control channel on qubit 0, as you can read in Table 1 of https://iopscience.iop.org/article/10.1088/2058-9565/aba404/pdf. In your schedule the CX is implemented as an echoed cross resonance (ECR) gate, where we drive the control qubit 0 at the frequency of the target qubit 1. Therefore you can see a cross resonance (CR) pulse on the control channel of qubit 0, U0. Since we echo the pulse, you can also see a secondary CR pulse with inverted angle, interleaved with X gates. You can also see something happening on the drive channel of qubit 1, D1, during the execution of the CR pulse on the control qubit. This is a cancellation tone to improve the CX fidelity. Note that even though the channels are drawn separately, they can be implemented on the same wire on the device.
{ "domain": "quantumcomputing.stackexchange", "id": 5274, "tags": "qiskit, openpulse" }
ROS_INFO not printing anything after some point
Question: In my code below, constructor of a class calls functions within the same class. All ROS_INFO in func1 print successfully but func2 doesn't print anything. Any idea? Update 8/13/2012) Each action server is robot manipulation, which means there's fair amount of time before func2 gets called after func1. #include <ros/ros.h> #include <actionlib/client/simple_action_client.h> #include <actionlib/client/terminal_state.h> #include <pointcloud_filter/PointCloudFilterBoxAction.h> #include <pointcloud_filter/PointCloudFilterBoxGoal.h> #include <pointcloud_segmenter/RGBPointCloudSegmenterAction.h> #include <pointcloud_segmenter/RGBPointCloudSegmenterGoal.h> class OurClass { public: static const double DURATION = 30.0; OurClass() { this->func1(); this->func2(); } void func1() { ROS_INFO("msg 1-0"); actionlib::SimpleActionClient<ourns::OurAction> ac("ourAction", true); bool finished_before_timeout = ac.waitForResult(ros::Duration(DURATION)); if (finished_before_timeout) { actionlib::SimpleClientGoalState state = ac.getState(); ROS_INFO("msg 1-1"); } else ROS_INFO("msg 1-2"); } void func2() { ROS_INFO("msg 2-0 "); actionlib::SimpleActionClient<pointcloud_filter::PointCloudFilterBoxAction> ac_filter("PointCloudFilterBox", true); ac_filter.sendGoal(goal_filter); bool finished_before_timeout2 = ac_filter.waitForResult(ros::Duration(DURATION)); if (finished_before_timeout2) { actionlib::SimpleClientGoalState state = ac_filter.getState(); ROS_INFO("2-1"); } else ROS_INFO("2-2"); } }; int main(int argc, char **argv) { ros::init(argc, argv, "AcClient"); OurClass(); } Environment) fuerte, Ubuntu 10.04 Originally posted by 130s on ROS Answers with karma: 10937 on 2012-08-12 Post score: 1 Answer: If the code you posted really is 'all there is', then it seems to at least be missing a ros::spin() or similar statement. If the ros event loop is run somewhere else, the ros::init(..) seems out of place. 
What is most likely going on is that the ros logging subsystem has 'enough' time to flush out the ROS_INFO(..) calls in func1(), but not for those in func2(). I've noticed this myself a few times, especially when trying to print some output at node(let) shutdown (to print statistics for instance). Suggestions I've received included 'sleeping some time' before calling ros::shutdown(..) (if doing so explicitly) or before returning from your main. Originally posted by ipso with karma: 1416 on 2012-08-13 This answer was ACCEPTED on the original site Post score: 4 Original comments Comment by 130s on 2013-03-28: Looks reasonable explanation although I can't test it since I don't have an access to the environment I was seeing the phenomenon...
{ "domain": "robotics.stackexchange", "id": 10580, "tags": "ros, rosconsole" }
What is the way in which an audio signal is "cleaned"?
Question: I was studying how signals can be "decomposed" with frequency-domain analysis and Fourier transforms; I have also read that through these techniques it is possible to clean signals of noise, separating the unwanted frequencies from the main signal. So I was wondering how this procedure works, in a practical way, for example to clean up wind noise signals? Answer: for example to clean up wind noise signals? Wind noise is a relatively good approximation of white noise, i.e. noise that has the same power at all frequencies. So, by filtering the signal with a filter that only lets through the frequencies mostly used by the kind of sounds you're interested in (e.g., speech, or birdsong, or wind turbine buzz, which would all have significant content at different frequencies), you suppress a significant amount of the noise power. Anyway, this means you need to learn what filtering is! You can do that in the frequency domain, as well, but need to take special care (because you can only transform audio into the frequency domain in "chunks", and thus you get effects on the ends of these chunks).
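As a concrete sketch of this frequency-domain approach, the snippet below buries a hypothetical 440 Hz tone in white noise and zeroes out the FFT bins outside the band of interest (all numbers are illustrative, and this naive bin-zeroing is the crudest possible filter):

```python
import numpy as np

rng = np.random.default_rng(0)
fs = 8000                                      # sample rate in Hz (illustrative)
t = np.arange(fs) / fs                         # one second of samples
tone = np.sin(2 * np.pi * 440 * t)             # the "signal of interest"
noisy = tone + rng.normal(scale=1.0, size=fs)  # white noise covers all frequencies

# Naive frequency-domain filter: transform, zero every bin outside the
# band where the wanted signal lives, then transform back.
spectrum = np.fft.rfft(noisy)
freqs = np.fft.rfftfreq(len(noisy), d=1 / fs)
spectrum[(freqs < 300) | (freqs > 600)] = 0
cleaned = np.fft.irfft(spectrum, n=len(noisy))

# Most of the noise power lay outside the 300-600 Hz band, so it is gone.
print(np.mean((noisy - tone) ** 2), np.mean((cleaned - tone) ** 2))
```

Applying this to a long recording chunk by chunk produces artifacts at the chunk boundaries — exactly the "special care" the answer mentions; windowing and overlap-add methods address that.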
{ "domain": "dsp.stackexchange", "id": 11141, "tags": "signal-analysis, noise-cancellation, decomposition" }
Classification of Signals using Features Extracted from Wavelet Scattering Coefficients
Question: I have a set of RF signal samples of 2s length each recorded at 2MHz sampling rate, such as this IQ file: https://www.dropbox.com/s/dd6fr4va4alpazj/move_x_movedist_1_speed_25k_sample_6.wav?dl=0 I can effectively gather the zeroth, first and second order coefficients using kymatio and plot them using the following code: import scipy.io.wavfile import numpy as np import matplotlib.pyplot as plt from kymatio.numpy import Scattering1D path = r"move_x_movedist_1_speed_25k_sample_6.wav" # Read in the sample WAV file fs, x = scipy.io.wavfile.read(path) x = x.T print(fs) # Once the recording is in memory, we normalise it to +1/-1 x = x / np.max(np.abs(x)) print(x) # Set up parameters for the scattering transform ## number of samples, T N = x.shape[-1] print(N) ## Averaging scale as power of 2, 2**J, for averaging ## scattering scale of 2**6 = 64 samples J = 6 ## No. of wavelets per octave (resolve frequencies at ## a resolution of 1/16 octaves) Q = 16 # Create object to compute scattering transform scattering = Scattering1D(J, N, Q) # Compute scattering transform of our signal sample Sx = scattering(x) # Extract meta information to identify scattering coefficients meta = scattering.meta() # Zeroth-order order0 = np.where(meta['order'] == 0) # First-order order1 = np.where(meta['order'] == 1) # Second-order order2 = np.where(meta['order'] == 2) #%% # Plot original signal plt.figure(figsize=(8, 2)) plt.plot(x) plt.title('Original Signal') plt.show() # Plot zeroth-order scattering coefficient (average of # original signal at scale 2**J) plt.figure(figsize=(8,8)) plt.subplot(3, 1, 1) plt.plot(Sx[order0][0]) plt.title('Zeroth-Order Scattering') # Plot first-order scattering coefficient (arrange # along time and log-frequency) plt.subplot(3, 2, 1) plt.imshow(Sx[0][order1], aspect='auto') plt.title('First-order scattering [1]') plt.subplot(3, 2, 2) plt.imshow(Sx[1][order1], aspect='auto') plt.title('First-order scattering [2]') # Plot second-order scattering coefficient 
(arranged # along time but has two log-frequency indices -- one # first- and one second-order frequency. Both are mixed # along the vertical axis) plt.subplot(3, 3, 1) plt.imshow(Sx[0][order2], aspect='auto') plt.title('Second-order scattering [0]') plt.subplot(3, 3, 2) plt.imshow(Sx[1][order2], aspect='auto') plt.title('Second-order scattering [1]') plt.show() However, my task is to classify each of the samples using a neural network architecture. The problem I am having is that the shape and size of these coefficients is quite large and simply storing them in memory is not feasible (with around ~2k samples overall). Therefore, I want to know if there is a good way of extracting features I can use to represent this (i.e. if I take the MFCC I can create feature columns of 1-13 MFCC coefficients such as via mfcc = librosa.feature.mfcc(y=x, sr=fs, n_mfcc=14, hop_length=hop_length, n_fft=M)[1:]). Or perhaps there are other ways of shortening this data so it is actually usable in a neural network for training (i.e. taking the spatial averages of each of the coefficients: $\bar{S}_{m,J} = \sum_x \tilde{S}_{m,J}((\lambda_1,\cdots,\lambda_m), x)$, but this discards spatial info whilst reducing the dimension). Any help would be great! EDIT: Here are the plots for power spectrum and time-frequency from the matlab signal analyser. How could this be used to identify the spectral occupancy for downsampling to minimise data? Answer: Expanding on my related answer but for wavelet scattering: Higher T -> greater time-shift invariance. Higher Q -> greater frequency resolution, lower time resolution and time-warp stability. Higher J -> larger largest-scale feature (largest kernel convolution). Should be set to whatever we think our largest-scale relevant structure is - e.g. a sentence is longer than a word, a word is longer than a character. Global averaging, i.e. T==len(x), can perform surprisingly well, while drastically slashing input size.
It would also help to split up the input along time and then join the scatterings afterwards ("no-overlap-join convolution") as it's faster than one big convolution. A pressing question is also whether such a high sampling rate is indeed required (as Dan pointed out), and whether relevant features span the entire 0-2MHz bandwidth. If the effective bandwidth is only e.g. 1-1.5MHz, then data can be shrunk four-fold with downsampling techniques before feeding to the scattering network, greatly reducing processing time. Re: downsampling From what I can tell per the graphs, downsampling by 2 shouldn't hurt, but 1) that's not a lot, and 2) it might indeed hurt if what's discarded is important, which depends on whether there's e.g. power law scaling as in e.g. EEG. The freqs seem rather evenly distributed, so if nothing else is known of their "importance", maybe not much can be done. One approach: Downsample by 8, train, test Repeat for downsampling by 4, see if there's improvement Yes: repeat for downsampling by 2. No: repeat for downsampling by 16. P.S., this should also help, but no meta.
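As a sketch of the downsample-before-scattering idea, the snippet below uses a crude FFT-truncation decimator (standing in for a proper filter-based decimator such as scipy.signal.decimate; the array size mirrors the question's 2 s at 2 MHz):

```python
import numpy as np

def downsample(sig, factor):
    """Crude decimation: low-pass by truncating the spectrum, then resample.
    (In practice a filter-based decimator such as scipy.signal.decimate
    does this properly.)"""
    spectrum = np.fft.rfft(sig)
    keep = len(spectrum) // factor          # drop the top (1 - 1/factor) of the band
    return np.fft.irfft(spectrum[:keep], n=len(sig) // factor) / factor

# 2 s recorded at 2 MHz, as in the question (random stand-in data)
x = np.random.default_rng(0).normal(size=2 * 2_000_000)
for factor in (2, 4, 8):
    print(factor, downsample(x, factor).shape)  # 4x downsampling -> 4x less data
```

Each factor-of-4 reduction shrinks both the scattering network's input and its runtime correspondingly, which is the point of checking spectral occupancy first.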
{ "domain": "dsp.stackexchange", "id": 11024, "tags": "wavelet, classification, scattering" }
Induced voltage in a secondary coil?
Question: The induced voltage of the secondary coil of a transformer, is caused by the changing magnetic flux of the primary coil... ...and by decreasing the number of turns of the primary coil, the flux decreases and the voltage across the secondary coil should also decrease... ...right, but when I was experimenting, I lowered the primary turns, and that leads to an increase in the secondary voltage. Why? Answer: The magnitude of the voltage induced in the secondary is the magnitude of the rate of change of flux in the secondary, not the primary. Neglecting leakage flux, if the flux in the transformer core is $\Phi$, the fluxes through the primary and secondary are $N_1\Phi$ and $N_2\Phi$, respectively. When you apply a voltage on the primary, you set $$V_1 = N_1 \frac{d\Phi}{dt} $$ and the secondary voltage is $$V_2 = N_2 \frac{d\Phi}{dt} = \frac {N_2}{N_1} V_1$$ which is inversely proportional to $N_1$. I've left out the signs here to not obfuscate the point, i.e. all quantities are magnitudes.
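The inverse dependence on $N_1$ is easy to check numerically (a sketch with made-up numbers, assuming an ideal transformer):

```python
# Ideal-transformer relation derived above: V2 = (N2 / N1) * V1
def secondary_voltage(v1, n1, n2):
    return n2 / n1 * v1

v1, n2 = 230.0, 50          # fixed primary voltage and secondary turns
for n1 in (200, 100, 50):   # halving the primary turns doubles V2
    print(n1, secondary_voltage(v1, n1, n2))
```

This reproduces the observation in the question: fewer primary turns means a larger secondary voltage, not a smaller one.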
{ "domain": "physics.stackexchange", "id": 94226, "tags": "electromagnetism, voltage, electromagnetic-induction" }
Calculating $\langle p | [x,p] | \psi \rangle $ using Dirac notation
Question: Calculating $\langle p | [x,p] | \psi \rangle $ using Dirac notation. I am aware of the relations $$\langle p|x| \psi \rangle = i \hbar \frac{d}{dp}\langle p| \psi \rangle, \quad \langle x | p|\psi\rangle = -i \hbar\frac{d}{dx} \langle x | \psi \rangle$$ which should become relevant here: $$\langle p | xp - px| \psi \rangle = \langle p | xp| \psi \rangle - \langle p | px| \psi \rangle$$ However, I am a bit stuck on how to think about two operators acting on $| \psi \rangle$. For example, for the first element in the equation above, should I think about it as $p$ acting on $| \psi \rangle$ first, then $x$? As such, the following becomes: $$ \langle p|x| \psi \rangle = i\hbar \frac{d}{dp} \langle p | \psi \rangle$$ How do I approach the other one? I know that my final answer needs to somehow result in $i\hbar \langle p | \psi\rangle$... Your help is appreciated. Answer: The first term on the RHS of your second equation is not equal to your last equation. But you're right that we first apply $P$ and then $X$. More concretely, it might help to denote $P|\psi\rangle =:|\tilde \psi\rangle$. Then, by using your first equation as well as $\langle p|P|\psi\rangle = p \langle p|\psi\rangle$, we find $$ \langle p|XP|\psi\rangle = \langle p|X|\tilde\psi\rangle = i\hbar \frac{\mathrm d}{\mathrm dp} \langle p|\tilde \psi\rangle =i\hbar \frac{\mathrm d}{\mathrm dp} \langle p|P|\psi\rangle = i\hbar \frac{\mathrm d}{\mathrm dp} p \langle p|\psi\rangle = i\hbar \left( \langle p|\psi\rangle + p \frac{\mathrm d}{\mathrm dp}\langle p|\psi\rangle\right) \quad . $$ With the second term you can proceed similarly and it might help again to define $X|\psi\rangle =: |\psi^\prime\rangle$ etc...
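The whole computation can also be checked symbolically — a sketch using sympy, with $X$ and $P$ written in the momentum representation ($X \to i\hbar\,d/dp$, $P \to$ multiplication by $p$), exactly as the answer uses:

```python
import sympy as sp

p, hbar = sp.symbols('p hbar', real=True)
psi = sp.Function('psi')(p)

# Momentum representation: X acts as i*hbar*d/dp, P acts as multiplication by p
X = lambda f: sp.I * hbar * sp.diff(f, p)
P = lambda f: p * f

commutator = X(P(psi)) - P(X(psi))     # <p| [x, p] |psi>
print(sp.simplify(commutator))         # reduces to i*hbar*psi(p)
```

The product-rule term from differentiating $p\,\langle p|\psi\rangle$ is the only piece that survives, giving $i\hbar \langle p|\psi\rangle$ as expected.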
{ "domain": "physics.stackexchange", "id": 89799, "tags": "quantum-mechanics, homework-and-exercises, hilbert-space, operators, commutator" }
Why are optimization algorithms slower at critical points?
Question: I just found the animation below from Alec Radford's presentation: As visible, all algorithms are considerably slowed down at the saddle point (where the derivative is 0) and speed up once they get past it. Regular SGD itself is simply stuck at the saddle point. Why is this happening? Isn't the "movement speed" a constant value that depends on the learning rate? For example, the weight update for regular SGD would be: $$w_{t+1}=w_t-v*\frac{\partial L}{\partial w}$$ where $v$ is a learning rate and $L$ is a loss function. In short, why are all optimization algorithms slowed down by the saddle point even though the learning rate is a constant value? Shouldn't the movement speed stay the same? Answer: In that simulation, the movement speed is a proxy for step size. The step size is a function of the learning rate ($v$) and the approximate gradient of the function at that point ($\frac{\partial L}{\partial w}$). The learning rate can be constant. However, the approximate gradient is not constant. The approximate gradient is typically smaller closer to a critical point (i.e., the valley starts to gradually level out). Thus, the computed update value is smaller and the movement speed slows down.
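A tiny numerical sketch of this point, using an arbitrary loss $L(w) = w^4$ with a flat critical point at $w = 0$ and a fixed learning rate:

```python
def grad(w):                 # dL/dw for the illustrative loss L(w) = w**4
    return 4 * w ** 3

v = 0.01                     # constant learning rate
w = 2.0
steps = []
for _ in range(5):
    step = v * grad(w)       # actual step size = learning rate * gradient
    steps.append(abs(step))
    w -= step

# The learning rate never changed, yet every step is smaller than the last,
# because the gradient shrinks as w approaches the critical point.
print(steps)
```

So the "speed" in the animation is the product $v \cdot \partial L/\partial w$, and only the first factor is constant.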
{ "domain": "datascience.stackexchange", "id": 3609, "tags": "gradient-descent, loss-function, learning-rate" }
Wheeled Robot Motion Primitives: Is throttling forward and crab motion considered as one?
Question: I am simulating a wheeled robot with six wheels that can be independently steered, like MER-Opportunity. The wheeled robot can perform throttling forward, ||---|| <--wheel orientation || || ||---|| crab-motion, //---// <--wheel orientation when heading is 45 // // //---// and turning on the spot. //---\\ <--wheel orientation || || \\---// My question is: Is it correct to say that I have 2 motion primitives? Throttling forward is basically crab-motion with heading zero. Answer: While I am not really sure what exactly you mean by "motion primitive", having the ability to do the "crab-motion" enables you to translate regardless of your orientation and without changing it (assuming 180 degrees of steering), which would not be the case if you had only one pair of steerable wheels (as in cars; remember the parallel parking problem). Note however that this is still a non-holonomic motion system unless the wheels are omniwheels (you cannot rotate and translate independently).
{ "domain": "robotics.stackexchange", "id": 470, "tags": "wheeled-robot, motion" }
Mysql PDO Wrapper that throws Unique Constraint Exception
Question: The main reasoning behind this PDO wrapper, is that I find myself using unique constraints quite frequently in my designs, and I have if ($ex->errorInfo[1] == 1062) littered throughout my code, and I thought there has to be a better way. <?php class PDOMysql extends \PDO { public function __construct($dsn, $username = null, $passwd = null, $options = null) { parent::__construct($dsn, $username, $passwd, $options); $this->setAttribute(\PDO::ATTR_ERRMODE, \PDO::ERRMODE_EXCEPTION); $this->setAttribute(\PDO::ATTR_DEFAULT_FETCH_MODE, \PDO::FETCH_OBJ); } public function prepareAndExecute($statement, array $params = []): \PDOStatement { try { $stmt = $this->prepare($statement); $stmt->execute($params); return $stmt; } catch (\PDOException $ex) { if ($ex->errorInfo[1] == 1062) { throw new PDOMysqlUniqueConstraintException($ex); } else { throw $ex; } } } } class PDOMysqlUniqueConstraintException extends \PDOException { public function __construct(\PDOException $ex) { parent::__construct($ex->getMessage(), $ex->getCode(), $ex->getPrevious()); } } // example usage $user = 'vps'; $pass = 'vps'; $db = new PDOMysql("mysql:host=localhost;dbname=testdb", $user, $pass); // create table $sql = "create table if not exists `mytable` (col1 varchar(100) null, constraint idx_u unique (col1) )"; $db->prepareAndExecute($sql); // insert first row $params = ['col1' => uniqid('')]; $sql = "insert into mytable (col1) values (:col1)"; $db->prepareAndExecute($sql, $params); // select row $sql = "select * from mytable where col1 = :col1"; $stmt = $db->prepareAndExecute($sql, $params); var_dump($stmt->fetchAll()); // insert duplicate row try { $sql = "insert into mytable (col1) values (:col1)"; $db->prepareAndExecute($sql, $params); } catch (\PDOMysqlUniqueConstraintException $ex) { die('Unique Constraint Detected'); } Answer: This code is so good that it's the first time I have nothing to say. Your approach to the problem is absolutely correct and you should keep using it. 
There could be only microscopic nuances, such as the one mentioned by Sam. Or the fact that your constructor would override the configuration settings passed in the $options array. Say, if someone wanted to use FETCH_ASSOC as a default fetch mode, your class won't let them. I would rather make it this way $defaults = [ PDO::ATTR_ERRMODE => PDO::ERRMODE_EXCEPTION, PDO::ATTR_DEFAULT_FETCH_MODE => PDO::FETCH_OBJ, PDO::ATTR_EMULATE_PREPARES => false, // I would throw in this one too ]; $options = array_merge($defaults, (array) $options); // the cast guards against the null default parent::__construct($dsn, $username, $passwd, $options); This way you would have your preferred defaults that could nevertheless be overwritten, making your class flexible, without artificial restrictions.
{ "domain": "codereview.stackexchange", "id": 40570, "tags": "php, mysql, error-handling, pdo" }
Why do waveforms that are symmetrical above and below their horizontal centerlines contain no even-numbered harmonics?
Question: All About Circuits site states that waveforms that are symmetrical above and below their horizontal centerlines contain no even-numbered harmonics. Can somebody explain this mathematically, or point to a resource? I do not know much about dsp, but understand what a Fourier transform is. Answer: The complex Fourier coefficients of a $T$-periodic function $f(t)$ are given by $$c_n=\frac{1}{T}\int_{0}^Tf(t)e^{-j2\pi nt/T}dt\tag{1}$$ with $$f(t)=\sum_{n=-\infty}^{\infty}c_ne^{j2\pi nt/T}\tag{2}$$ The coefficients with even indices are $$\begin{align}c_{2n}&=\frac{1}{T}\int_{0}^Tf(t)e^{-j4\pi nt/T}dt\\&=\frac{1}{T}\left[\int_{0}^{T/2}f(t)e^{-j4\pi nt/T}dt+\int_{T/2}^{T}f(t)e^{-j4\pi nt/T}dt\right]\\&=\frac{1}{T}\left[\int_{0}^{T/2}f(t)e^{-j4\pi nt/T}dt+\int_{0}^{T/2}f(t+T/2)e^{-j4\pi nt/T}e^{-j2\pi n}dt\right]\\&=\frac{1}{T}\int_{0}^{T/2}\left[f(t)+f(t+T/2)\right]e^{-j4\pi nt/T}dt\\&=\frac12\frac{2}{T}\int_{0}^{T/2}\left[f(t)+f(t+T/2)\right]e^{-j2\pi nt/(T/2)}dt\\&=\frac12 d_n\tag{3}\end{align}$$ where $d_n$ are the complex Fourier coefficients of the $T/2$-periodic function $g(t)=f(t)+f(t+T/2)$. These coefficients can only be zero if $g(t)=0$, i.e., if $f(t)=-f(t+T/2)$. This latter condition is exactly the symmetry condition you mention in your question. Consequently, the even Fourier coefficients are zero if (and only if) $f(t)$ satisfies $$f(t)=-f(t+T/2)\tag{4}$$ Or, in other words, a $T$-periodic function $f(t)$ has only odd harmonics if it is $T/2$-antiperiodic. I.e., if you shift the function by half its period and flip it across the horizontal axis, it must look the same as before.
{ "domain": "dsp.stackexchange", "id": 7240, "tags": "fourier-series, wave" }
Why DoHeatmap Does not show all genes in genes.use?
Question: I am heatmapping a list of genes with the DoHeatmap function in the Seurat R package. I am sure I have 212 genes but the heat map shows only a few of my genes > DoHeatmap( + object = seurat, + genes.use = genes, + slim.col.label = TRUE, + remove.key = TRUE) > dim(as.matrix(seurat@data)) [1] 12293 209 > length(genes) [1] 212 > class(genes) [1] "character" > class(seurat) [1] "seurat" attr(,"package") [1] "Seurat" > Please look at the heat map: I was trying to plot 212 genes, but only a few of my 212 genes are shown. Please somebody save me from this confusion
{ "domain": "bioinformatics.stackexchange", "id": 688, "tags": "r, scrnaseq, seurat" }
What should be the value of batch_size in fit() method when using sgd (Stochastic Gradient Descent) as the optimizer?
Question: I am confused about the batch size of this model. I have used sgd, i.e., Stochastic Gradient Descent, as the optimizer (see the code). I am aware that in sgd, a single random instance from the training set is used to compute the gradient at each step. So, according to this, the batch_size should be equal to 1. Now, in the tf.keras.Sequential.fit() documentation it says: If unspecified, batch_size will default to 32. So, do I have to manually set the batch_size equal to 1? It is because the default value of 32 will make it Mini-batch Gradient Descent. import tensorflow as tf from tensorflow import keras fashion_mnist = keras.datasets.fashion_mnist (X_train_full, y_train_full), (X_test, y_test) = fashion_mnist.load_data() X_valid, X_train = X_train_full[:5000]/255.0, X_train_full[5000:]/255.0 y_valid, y_train = y_train_full[:5000], y_train_full[5000:] model = keras.models.Sequential() model.add(keras.layers.InputLayer(input_shape = [28, 28])) model.add(keras.layers.Flatten()) model.add(keras.layers.Dense(300, activation = "relu")) model.add(keras.layers.Dense(100, activation = "relu")) model.add(keras.layers.Dense(10, activation = "softmax")) model.compile(loss = "sparse_categorical_crossentropy", optimizer = "sgd", metrics = ["accuracy"]) history = model.fit(X_train, y_train, epochs = 30, validation_data = (X_valid, y_valid)) Answer: First, using the appropriate terminology, you can say Stochastic Gradient Descent and batch Gradient Descent are at the extreme ends: Stochastic Gradient Descent is training with batch size $= 1$, and batch Gradient Descent with batch size $= n$, where $n$ denotes the number of data points. In the appropriate terminology, what we often use (and your example as well) is called mini-batch gradient descent. Note that the term mini here does not mean the batch is necessarily very small like 4, 32 or 64; it can be anything bigger than $1$ but smaller than $n$.
In practice, people use the terms mini-batch gradient descent and stochastic gradient descent interchangeably. This is because in practice they behave similarly. I personally do not think that such practice (using SGD and mini-batch SGD interchangeably) is bad, because I don't think they differ enough to require a specific new term.
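The three regimes can be illustrated outside Keras with a bare-bones linear regression; all data and hyperparameters below are made up for the sketch:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
true_w = np.array([1.0, -2.0, 0.5])
y = X @ true_w                         # noiseless targets, so every variant converges

def sgd(batch_size, lr=0.02, epochs=300):
    """batch_size=1 -> 'true' SGD; batch_size=len(X) -> batch GD;
    anything in between -> mini-batch GD (Keras's fit() default is 32)."""
    w = np.zeros(3)
    n = len(X)
    for _ in range(epochs):
        idx = rng.permutation(n)       # shuffle each epoch
        for start in range(0, n, batch_size):
            b = idx[start:start + batch_size]
            grad = 2 * X[b].T @ (X[b] @ w - y[b]) / len(b)
            w -= lr * grad
    return w

for bs in (1, 32, 100):                # stochastic, mini-batch, full batch
    print(bs, np.round(sgd(bs), 3))    # all three recover roughly the same w
```

So choosing batch_size=1 in fit() gives textbook SGD, while leaving the default of 32 gives mini-batch SGD; on most problems the two behave similarly, which is why the names get blurred.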
{ "domain": "datascience.stackexchange", "id": 6606, "tags": "deep-learning, keras, tensorflow" }
Will the mirror reflection make the interference pattern disappear?
Question: I have read this question: When light reflects off a mirror, does the wave function collapse? where dmckee says: Now, even giving a precise statement of what makes a "measurement" is non-trivial, but a starting place is that a measurement leaves a record. Coherent reflection from a mirror generally does not leave a record. where S. McGrew says: The answer is that the photon does not change the state of the mirror. After the photon has been reflected, the mirror is unchanged. There is no way to prove that the photon struck the mirror without also detecting the photon's path downstream. Why is quantum entanglement considered to be an active link between particles? where Luboš Motl says: But this step, in which the original overall probabilities for the second particle were replaced by the conditional probabilities that take the known outcome involving the first particle into account, is just a change of our knowledge - not a remote influence of one particle on the other. Why can interaction with a macroscopic apparatus, such as a Stern-Gerlach machine, sometimes not cause a measurement? where Ruben Verresen says: a superposition is destroyed/decohered when information has leaked out. In this setting that would mean that if by measuring, say, the momentum of the Stern-Gerlach machine you could figure out whether the spin had curved upwards or downwards, then the quantum superposition between up and down would have been destroyed. Now this is in contradiction with radiation pressure, and the way solar sails work. The mirror gets a recoil from the reflecting (elastically scattered) photons, and that is detectable, measurable, because it is a change in the mirror's (sail) momentum vector. Radiation pressure is the pressure exerted upon any surface due to the exchange of momentum between the object and the electromagnetic field.
This includes the momentum of light or electromagnetic radiation of any wavelength which is absorbed, reflected, or otherwise emitted (e.g. black-body radiation) by matter on any scale (from macroscopic objects to dust particles to gas molecules). https://en.wikipedia.org/wiki/Radiation_pressure Do photons attenuate when they reflect? Thus it has to collapse the wavefunction. The signal photon goes to the mirror, reflects, goes through slits 1 and 2 and on to screen1. The idler goes directly to slits 3 and 4 and on to screen2. The two photons in this case have a common wavefunction (entangled), and in this case I will either: see an interference pattern on screen1 and screen2 (coherent reflection does not change the state of the mirror), because the common wavefunction is not changed will not see an interference pattern on screen1 and screen2 (because of radiation pressure, the change in the mirror's momentum is detectable, measurable), and the common wavefunction is changed Question: Will there be an interference pattern on screen1 and screen2? Answer: Knowing that a photon has gone through a particular slit will prevent interference, but knowing that a photon has gone through a given pair of slits will not prevent interference. If recoil is measured at the mirror, we will know that a photon has reflected and has gone through slits 1 and 2, and an interference pattern will indeed be formed on screen 1. Note: the interference pattern will be affected by the mirror's recoil. The photon will transfer a tiny portion of its momentum to the mirror, so it will go through the slits with a slightly longer wavelength, and as a result the interference fringes on the screen will be slightly farther apart. The heavier the mirror, the less momentum will be transferred and the less the interference pattern will be affected. Let's complicate your experiment a little bit.
Run photon s through a beamsplitter so that it can follow either of two paths to the mirror, and make the beam(s) narrow enough that one of the paths can only be reflected to slit 1 and the other can only be reflected to slit 2. Path s1 hits one half of the mirror and path s2 hits the other half of the mirror. Now, reflection of a photon on path s1 will give the mirror a clockwise torque and reflection on path s2 will give the mirror a counter-clockwise torque. The wavefunction will presumably take both paths, but the mirror's recoil can be used to accurately predict which slit the photon goes through. Will there be an interference pattern formed on screen 1? The answer in this case is "NO", because the mirror's recoil (clockwise or counter-clockwise) will constitute a measurement of which slit the photon passed through.
{ "domain": "physics.stackexchange", "id": 63225, "tags": "quantum-mechanics, photons, reflection, double-slit-experiment" }
Change of system of coordinates for the stress matrix
Question: I have a stress matrix in cartesian coordinates : $\begin{pmatrix} a & 0 & 0 \\ 0 & b & 0 \\ 0 & 0 & c \end{pmatrix}$. How can I convert it to spherical coordinates ? Answer: One way to conceptualize the stress matrix is to view it as a tensor. In general, your matrix $$T = \begin{bmatrix} a & 0 & 0 \\ 0 & b & 0 \\ 0 & 0 & c \end{bmatrix}$$ should be thought of in terms of how it relates on a displacement vector $v^T = (\mathrm{d}x, \mathrm{d} y, \mathrm{d} z)$. The stress tensor tells you that the energy change associated to this small displacement vector is $$\delta E = v^T T v = a {\mathrm{d}x}^2 + b {\mathrm{d}y}^2 + c {\mathrm{d}z}^2$$ Now, let's consider what happens if we change into spherical coordinates. Recall that in spherical coordinates $(r,\phi,\theta)$ $$ x = r \cos \phi \sin \theta \\ y = r \sin \phi \sin \theta \\ z = r \cos \theta $$ This gives the relations $$ \mathrm{d}x = \mathrm{d}r (\cos\phi \sin\theta) + \mathrm{d}\phi (- r\sin\phi \sin\theta) + \mathrm{d}\theta(r \cos\phi \cos\theta)\\ \mathrm{d}y = \mathrm{d}r (\sin\phi \sin\theta) + \mathrm{d}\phi (r \cos\phi \sin\theta) + \mathrm{d}\theta(r \sin\phi \cos\theta)\\ \mathrm{d}z = \mathrm{d}r (\cos\theta) + \mathrm{d}\theta(-r \sin\theta) $$ which can be written in matrix form as $$\begin{bmatrix} \mathrm{d}x \\ \mathrm{d}y \\ \mathrm{d}z \end{bmatrix} = J \begin{bmatrix} \mathrm{d}r \\ \mathrm{d}\phi \\ \mathrm{d}\theta \end{bmatrix}$$ Where J is the "Jacobian Matrix" (or change of coordinates) $$J = \begin{bmatrix} \cos\phi \sin\theta & -r \sin\phi \sin\theta & r \cos\phi \cos\theta \\ \sin\phi \sin\theta & r \cos\phi \sin\theta & r \sin \phi \cos\theta \\ \cos \theta & 0 & -r \sin \theta \end{bmatrix}$$ This means that for the displacement vector $\tilde{v}^T = (\mathrm{d}r, \mathrm{d}\phi,\mathrm{d}\theta)$ written in spherical coordinates, the vectors in two different coordinate systems can be related to each other by $$v = J \tilde{v}$$ We'll end up having that the energy 
change can be written as $$\delta E = (J \tilde{v})^T T (J \tilde{v}) = \tilde{v}^T (J^T T J) \tilde{v}$$ which is some pretty complicated expression in terms of the $\mathrm{d}r, \mathrm{d}\phi, \mathrm{d}\theta$. So since $(J^T T J)$ is the matrix that generates the same energy change for the vector in different coordinates, the stress matrix $\tilde{T}$ in spherical coordinates is really $$\tilde{T} = J^T T J$$ This is a pretty general lesson that will let you express the stress matrix in any coordinate system, not just spherical ones. And note that this transformation rule, $T \rightarrow J^T T J$, is different from that for a linear transformation, which is $A \rightarrow J^{-1} A J$. This means that $T$ is a tensor quantity, and not a linear transformation. The defining property of a tensor is that it defines a length, sending a vector $v$ to a number, $v \rightarrow v^T T v$. In our case it is the energy change $\delta E$ defined above, and the goal is to keep that length the same irrespective of the coordinate system. On the other hand, a linear transformation is defined as sending vectors to vectors, $v \rightarrow A v$.
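The matrix algebra above is easy to carry out with a computer algebra system. As a sanity check, setting $a=b=c=1$ must reproduce the Euclidean metric $\mathrm{diag}(1,\, r^2\sin^2\theta,\, r^2)$ of spherical coordinates (a sketch using sympy):

```python
import sympy as sp

r, phi, theta, a, b, c = sp.symbols('r phi theta a b c', positive=True)

# Cartesian coordinates expressed in spherical ones
x = r * sp.cos(phi) * sp.sin(theta)
y = r * sp.sin(phi) * sp.sin(theta)
z = r * sp.cos(theta)

# Jacobian of (x, y, z) with respect to (r, phi, theta)
J = sp.Matrix([x, y, z]).jacobian([r, phi, theta])

T = sp.diag(a, b, c)                       # the stress matrix in Cartesian coordinates
T_sph = sp.simplify(J.T * T * J)           # ... and in spherical coordinates

# Sanity check: a = b = c = 1 gives the spherical metric diag(1, r^2 sin^2(theta), r^2)
print(sp.simplify(T_sph.subs({a: 1, b: 1, c: 1})))
```

For general $a, b, c$ the resulting matrix is no longer diagonal, which is expected: the Cartesian principal axes of the stress do not line up with the spherical coordinate directions.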
{ "domain": "physics.stackexchange", "id": 59448, "tags": "coordinate-systems, elasticity, stress-strain" }
Did birds descend from a single or multiple species of dinosaur?
Question: It seems like there are mixed results because sometimes I read about a single missing link, like an archaeopteryx that somehow single-handedly explains all modern-day birds, but then I see conflicting articles about how different birds are descended from different dinosaurs like a t-rex or a velociraptor and so on. Which theory is correct? Did birds all descend from one common ancestor or multiple? Answer: The answer is "one common ancestor", but I'll expand. All organisms descend from one common ancestor so that question is not quite well-posed, but what you are actually asking I think is whether birds all descend from one common ancestor that was a bird, or whether their common ancestor wasn't a bird, which implies that different branches of birds became birds independently. In other words, are birds a "polyphyletic" group or a "monophyletic/paraphyletic" one (the difference between the latter is whether all of that common ancestor's descendants are birds, or whether it also had descendants that aren't birds). The answer to that is that modern birds are monophyletic: they all descend from a common ancestor that, itself, was a bird. (and that common ancestor doesn't have any descendants that aren't birds) But modern birds aren't the whole story - their group originates in the Cretaceous, and there are many groups of birds that are clearly recognizable as birds but also aren't modern birds - they might have teeth, they have subtle but unavoidable skeletal differences, etc. Like Enantiornithes and Confuciusornis. In other words, modern birds absolutely, unambiguously have a common ancestor that was itself a bird, which isn't hard because animals we would recognize as "birds" were already common by the time it appeared and the ancestor of our modern birds was just one of them. 
Whether those older, "extended" groups of birds themselves descend from a single species is harder to tell, because fossil evidence is all we're going on, and it's even harder to tell when we go closer to the origin of birds and they no longer look unambiguously like birds. But at that point it's not so much a matter of "did this group originate from one ancestor or several" but "are these fossils that we've been putting in this group as closely related as we were assuming?" and "did this trait that these different fossils have evolve in their common ancestor, or did they evolve it independently, and what about those other fossils that seem related but don't have the trait: did they lose it from the common ancestor who had it, or are they a sign that the common ancestor didn't have it to begin with?". Like, you might get different answers depending on how you define "bird". BUT paleontologists these days like monophyletic groups, so whichever groups they call "bird", those will have a common ancestor that was a "bird". The Wikipedia page for Avialae expresses some of the issues with what "bird", or "Aves", means in the context of paleontology. To quote:

- Aves can mean those advanced archosaurs with feathers (alternately Avifilopluma)
- Aves can mean those that fly (alternately Avialae)
- Aves can mean all reptiles closer to birds than to crocodiles (alternately Avemetatarsalia [=Panaves])
- Aves can mean the last common ancestor of all the currently living birds and all of its descendants (a "crown group") (alternately Neornithes)

"Avialae" is one of those "extended groups of birds", in that paleontologists will often refer to any member of that group as "birds", but there's still a wide gap between "Avialae" and "Neornithes"; some stepping stones on that gap:

- Neornithes (modern birds) are part of Euornithes (birds that have a certain articulation oriented like modern birds, as opposed to Enantiornithes, or "opposite-birds", who have it the other way, in addition to other differences like having teeth; note this page has a cladogram), which is one branch of:
- Ornithothoraces (all descendants of the common ancestor of Euornithes and Enantiornithes), which is part of
- Avebrevicauda ("short-tailed birds", to distinguish them from long-tailed Avialans like Archaeopteryx), which is part of
- Avialae, which as mentioned is basically birdinosaurs that could fly like Archaeopteryx, but it doesn't stop there, because that's part of
- Paraves, a group of dinosaurs that by and large had wings and feathers, including the four-winged dinosaurs like Microraptor gui, which if we saw today we'd recognize as not like other birds, but would we recognize it as not a bird, really? That's part of
- Pennaraptora, which is the first group to contain names that we definitely think of as not-birds; this group contains birds as well as Oviraptor and Deinonychus, but to quote the page: "The earliest known definitive member of this clade is Anchiornis, from the late Jurassic period of China, about 160 million years ago." And Anchiornis, as its name suggests, is bird-like, feathered wings and all. This is where we get into "wait, Velociraptors were basically like turkeys?" reactions (Deinonychus is Jurassic Park's "Velociraptors").
(also these points are where the groupings become disputed, or fast-changing, or ambiguous between groups that are defined based on phylogeny, or similarity, or others and you can get different results depending on which group's Wikipedia page you look at; for example the Wikipedia page for Theropods puts Avialae directly under Maniraptora). Pennaraptora is part of Maniraptora, which also contains your velociraptors, and Maniraptora is part of Coelurosauria which also contains your Tyrannosaurs. At this point we have clear non-bird dinosaurs, although it's likely they all had feathers. I don't know which articles you read saying some birds would have descended from T-Rex and others from Velociraptor; that seems completely counter to what we know today of bird evolution, even if we extend "birds" to "Avialae" or even "Paraves". In fact the only way it would work is if we call T-rex or velociraptor themselves "birds". What you could have is conflicting articles about how close birds are to T-rex or Velociraptor; that part of the family tree is based entirely on fossils and so new fossils can change our understanding of how things are related... But the current phylogeny that makes, say, Avialae a group that's quite a few nodes away from velociraptors, and both of those a couple of nodes away from T-rex, seems quite robust. We have found many, many bird and proto-bird fossils in the last few decades that have clarified the picture on that scale (you can get an idea of how many by clicking the various links). One thing those many, many bird and proto-bird fossils also made clear is that the traits of modern birds (feathers, wings, toothless beaks, etc) didn't evolve in a simple line from non-bird to bird. 
Many of those traits evolved convergently in several lineages, were lost in some, maybe regained in others, and feathers in particular turn out to be a widespread dinosaur feature that cannot be considered a uniquely bird trait anymore (unless we want to call T-rexes "birds"). Still, saying "beaks evolved several times" or "feathers evolved several times" doesn't mean that birds, let alone modern birds, evolved from several different ancestors. It can mean that the common ancestor of birds had lots of variously bird-like more-or-less distant cousins living around the same time.
{ "domain": "biology.stackexchange", "id": 7086, "tags": "evolution, ornithology, palaeontology" }
Why has hypercomputation research died down?
Question: I see a lot of research on hypercomputation in the 1990's, but in more recent years there seems to be little work on the topic. Is it true that research in this area has died down? If so, what could be the reasons for it? Was this area convincingly shown to be unpromising? Answer: It would be better if you specified what you mean exactly by hyper-computation and gave evidence for why you think it has "died down". In any case, assuming that you are talking about computation of functions over natural numbers (and finite strings) (since I think it is clear that models for higher-type computation are a very active area, e.g. CCA) and models of computation not equivalent to computability defined by Turing machines, I don't think the claim is correct; for example, see CiE'05 and CiE'11. Also see the criticisms made against the claim that hyper-computation is something new: Martin Davis, "Why there is no such discipline as hypercomputation", 2006. Martin Davis, "The Myth of Hypercomputation", in "Alan Turing: Life and Legacy of a Great Thinker", 2004. If you are interested, there is also some discussion on the FOM mailing list, started by Timothy Chow's email about Martin Davis' article.
{ "domain": "cstheory.stackexchange", "id": 1392, "tags": "reference-request, computability, hypercomputation" }
Interpreting the Cross Section Ratio $R$
Question: Below is experimental data for the ratio $$R=\frac{\sigma(e^+e^-\rightarrow \text{hadrons})}{\sigma(e^+e^-\rightarrow\mu^+\mu^-)}$$ as a function of the centre of mass energy $\sqrt s$. I am interested in the peak at around $100~\mathrm{GeV}$, which corresponds to the resonance of the $Z^0$ boson. There are two ways of looking at this that I have in mind:

1. We can effectively ignore the electromagnetic process around this energy, and so each process should have its own Breit-Wigner peak centred on the mass energy of the $Z^0$. The ratio of these is just a constant.
2. Thinking in terms of Feynman rules and again ignoring the electromagnetic process around this region, each process has the same propagator and vertex factors (roughly - ignore quark mixing) and there are some extra factors due to different quarks being possible and colour degeneracy, but still the ratio should be a constant (one possible issue here is interference between Feynman diagrams, which I have neglected?).

So my question is: why does the peak exist in the data? Answer: [The figure shown in the OP question above ...] is experimental data for the ratio $R = $ [...] as a function of the centre of mass energy $\sqrt{s}$, the so-called cross section ratio $$R[~\sqrt{s}~] = \frac{\sigma^{(0)}[~e^+~e^- \rightarrow \text{hadrons}, \sqrt{s}~]}{\sigma^{(0)}[~e^+~e^- \rightarrow \mu^+~\mu^-, \sqrt{s}~]}$$ ?? Actually, surprisingly, no! That's not (quite) what's shown there; and therein lies the answer to your question. Rather, as the caption of PDG Figure 44.6 explains (and surely the PDG is the authoritative source), the figure shows instead the ratio of the experimentally determined hadronic cross section (incl. suitable corrections), $$\sigma^{(0)}[~e^+~e^- \rightarrow \text{hadrons},\sqrt{s}~]$$ and a "normalization denominator" $$ \frac{4}{3}~\pi~\alpha^2[~s~] ~/~ s,$$ where apparently $\alpha[~s~]$ expresses electromagnetic coupling only, i.e.
without accounting for weak coupling which is experimentally dominant at center-of-mass energy near the nominal mass of the $Z^0$ boson. Consequently, while the arguments laid out in the OP seem quite compelling to me, they don't address what's actually presented in the figure, including the obvious "$Z^0$ peak". (I consider the OP question quite astute nevertheless, +1; and I even hope that it might lead the PDG to reconsider the labelling of its Figure 44.6.)
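As a side note, the flat regions of the plot well below the $Z^0$ (where single-photon exchange dominates) are where the OP's "constant ratio" intuition does hold, and there the naive parton-model value $R = N_C \sum_f q_f^2$ is easy to tabulate. A minimal sketch (only standard quark charges are used here; nothing is specific to the PDG figure):

```python
# Naive parton-model prediction R = N_C * sum(q_f^2) over the
# kinematically open quark flavors -- valid only where photon
# exchange dominates, i.e. far below the Z0 pole.
N_C = 3  # number of QCD colors

def r_parton_model(open_flavor_charges):
    return N_C * sum(q ** 2 for q in open_flavor_charges)

print(r_parton_model([2/3, -1/3, -1/3]))             # u, d, s open: 2.0
print(r_parton_model([2/3, -1/3, -1/3, 2/3, -1/3]))  # u, d, s, c, b: 11/3
```

The jump of the plateau value each time a new flavor threshold opens is exactly the color-counting evidence the ratio is famous for.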
{ "domain": "physics.stackexchange", "id": 30018, "tags": "particle-physics, standard-model, feynman-diagrams, scattering-cross-section, electroweak" }
Could macroscopic primordial black holes have created metals shortly after the big bang?
Question: After seeing articles about the JWST like these two: https://news.cornell.edu/stories/2023/02/astronomers-discover-metal-rich-galaxies-early-universe https://www.livescience.com/james-webb-space-telescope-discovers-oldest-black-hole-in-the-universe-a-cosmic-monster-ten-million-times-heavier-than-the-sun And some older articles like this one: https://www.salon.com/2021/11/18/gold-black-holes/#:~:text=According%20to%20the%20new%20paper,occur%2C%20which%20creates%20heavy%20metals. I thought: "Wouldn't the birth of primordial supermassive black holes during the big bang create a bunch of heavier elements as they ate, and fling them all over the place?" -edit- If the density was high enough early enough in the process of the big bang to form a macroscopic black hole, then it may also have had sufficient mass and enough surrounding dense gas to form a metal-producing accretion disk. I guess the question then becomes: "How early in the big bang process could a black hole accretion disk have formed? Also, could such an accretion disk in that environment be capable of forming heavier elements from the hydrogen and helium surrounding it?" Answer: I am a new user, so if you find any error in my answer kindly correct it. Perhaps, but indirectly. Heavier elements are made in nuclear furnaces, via processes such as nucleosynthesis and neutron-capture processes (i.e. the slow s-process and the rapid r-process). In primordial black holes (a type of micro black hole), there are no physical processes taking place inside the black hole that will form heavier elements.
In the early universe, where everything was a soup of plasma, the extreme heat and the comparatively tiny universe acted like a star, so early primordial nucleosynthesis happened, creating hydrogen and helium (and a few traces of heavier elements, e.g. lithium). Once the universe cooled down, no more primordial nucleosynthesis happened, and therefore the abundances of the elements stayed fixed until the first generation of stars (Population III, low-metallicity stars) were born. Your first point, that the primordial black holes ate the primordial plasma, virtual particles and the CMB, is spot on. The second point, that they would fling matter out, is also correct. The flinging, which I think is about Hawking radiation, works like this: absorbing one of a pair of virtual particles causes that particle to carry negative energy while the other one escapes, and the negative energy reduces the black hole's mass (if there even is a singularity, because quantum gravity, which ruled the early universe, doesn't prefer singularities). Hawking radiation would tear the infalling matter into pieces, subatomic particles like protons and neutrons, and expel them (causing loss of information; refer to the information paradox). But the main point is that the power of Hawking radiation, in the case of the smallest primordial black holes, called Planck relics (black holes with a Schwarzschild radius equal to the Planck length), is 3.562×10^48 watts, converting 100% of the mass to energy. That is a huge amount of power, with the potential to heat the surroundings (unless they are in thermal equilibrium) and thus cause nucleosynthesis (also, the expelled nucleons may have collided with other atoms, causing heavier elements to form). So Hawking radiation has the potential to create heavier metals outside the black hole (outside the event horizon) due to the energy it supplies, rather than inside the black hole, though the matter that has fallen in may or may not contribute.
So the hypothesis is not entirely wrong, but the point that the black hole itself is the nuclear furnace is wrong; instead, the expulsion may have caused the nucleosynthesis rather than the black hole itself, so heavier elements may have formed indirectly (though this would not have increased the metallicity drastically). Or, if by flinging you mean shooting matter out of the accretion disk, the chances are rather high: if the rare accretion disk on a PBH was formed by devouring Population III / first-generation stars (which formed early), it may have caused nucleosynthesis, because accretion disks are insanely hot, and if the trajectory is fine-tuned then the metal-rich elements could have escaped from the black holes. Also, PBHs on the heavier side may have infested neutron stars and created heavy metals.
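For reference, the Planck-relic figure quoted above can be sanity-checked against the standard semiclassical Hawking luminosity, $P = \hbar c^6 / (15360\,\pi\,G^2 M^2)$. The sketch below is an assumption-laden estimate, not a reproduction of the answer's exact number: the photon-only formula gives a value in the same $\sim 10^{48}$ W ballpark, and the precise figure depends on which emitted particle species are counted.

```python
import math

# Semiclassical Hawking luminosity of a Schwarzschild black hole
# (photon emission only): P = hbar * c^6 / (15360 * pi * G^2 * M^2)
HBAR = 1.054571817e-34  # J s
C = 2.99792458e8        # m / s
G = 6.67430e-11         # m^3 / (kg s^2)

def hawking_power(mass_kg):
    return HBAR * C**6 / (15360 * math.pi * G**2 * mass_kg**2)

m_planck = math.sqrt(HBAR * C / G)  # Planck mass, ~2.18e-8 kg
print(hawking_power(m_planck))      # on the order of 1e48 W
```

The strong $1/M^2$ dependence is the key point: only the very smallest primordial black holes radiate anywhere near this violently.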
{ "domain": "astronomy.stackexchange", "id": 6998, "tags": "galaxy, big-bang-theory, supermassive-black-hole, james-webb-space-telescope, metallicity" }
Why does our LM2576 circuit suddenly cut down the power?
Question: I have an LM2576 circuit plus an adjuster to adjust the output voltage, for controlling motor speed in a line follower robot. The circuit works great when adjusted to give out low voltages, but when I adjust it to higher voltages for my motors to go faster, it works great for 1-2 minutes, then suddenly cuts down the power and the motors start to go extremely slow. Even when I decrease or increase the output voltage, it won't respond until I turn off the power and turn it back on again. There is something mentioned in the LM2576 datasheet that if we overload the IC it will cut down the power until the load comes lower, so I think it might be a problem with that. Since this problem has already caused us to lose the competitions with 5+ teams, I would like to solve it for our next competition, so why does our LM2576 circuit suddenly reduce the power? Answer: If I were you, I would first (1) read the LM2576 datasheet. I'm assuming you are using a circuit similar to the schematic and PCB layout on page 23 of the LM2576 datasheet. I'm guessing you've tweaked the circuit slightly, replacing the manually-operated pot shown on the schematic for R2 with some sort of microprocessor-controlled thing that frequently changes its effective resistance to make that motor spin faster or slower. Then I would (2) put my finger on the chip. If it feels so hot that I can't hold my finger on it, I would suspect thermal shutdown. georgebrindeiro covered this. You'll also want to read p. 18 of the datasheet, the section "Thermal Analysis and Design": "The following procedure must be performed to determine whether or not a heatsink will be required. ...". The typical solution is to add a big heat sink to the chip. Do you see the size of the heatsink on the PCB layout on page 23 of the datasheet? Next I would (3) take a cheap multimeter, switch it to "Amp" mode, and connect it in-line with the motor leads.
When the output voltage drops (which I measure with my more expensive multimeter), and I hear and see the motors slow down, what does the cheap multimeter say? Does the cheap multimeter stay pegged at some high current above 3 A (the datasheet says the internal limit will be somewhere between 3.5 A and 7.5 A)? If so, then we almost certainly have: current limiting. The regulator is working as-designed. The typical solution is to figure out what is pulling so much current, and somehow replace it with something that pulls less current. Perhaps the robot has run into a wall or got stuck in a rut, and the motors have stalled out. Then maybe the controller needs to sense this condition, stop its futile efforts to punch through the wall, and reverse direction. Sometimes we really need more current than one regulator can supply. Then we must replace that regulator with multiple regulators or higher-current regulators or both. (But not so much current that it immediately melts the wires in the motor). On the other hand, if the output voltage and the output current drop, then I would (4) connect my more expensive multimeter to the power input pins of the regulator. If that is much lower than I expected, then perhaps: The input voltage is lower than I expected. When you supply the motors with a higher voltage, they drain the batteries much quicker. Partially-drained batteries have a no-load voltage lower than you might expect from the "nominal" voltage printed on the battery, and loaded batteries have an even lower voltage. I doubt this is your problem, since you imply that if you disconnect the battery and reconnect the same battery, it seems to start working again. The typical solution is to put more batteries in series to increase the actual working voltage. Next I would (5) try disconnecting the batteries, and powering everything from a high-current grid-powered "power supply" set to the appropriate output voltages. 
If it seems to run fine off that power supply, but not from batteries, I would suspect: switching-regulator latchup. "Latchup of Constant-Power Load With Current-Limited Source" http://www.smpstech.com/latch000.htm When you supply the motors with a higher voltage, they pull a higher current. Surprisingly often, the total impedance at the input power pins of the switching regulator is so high (bad) that when the regulator turns its internal power switch on, the voltage at its input power pins plummets so low that it's not possible to pull the desired current from that battery. Milliseconds later, when the switch turns off, the battery pulls the voltage back up to some reasonable voltage -- so this problem is difficult to debug with a standard multimeter, but obvious when you have the input power pins connected to an oscilloscope. The only complete cure is to somehow reduce the total input impedance at the input power pins. Occasionally all you need to do is reduce the wiring impedance -- shorten the wires (reduce the resistance), or bring closer together the PWR and GND wires (reduce the inductance), or both. Occasionally all you need to do is put more capacitance across the power input pins of the regulator. As a battery drains, its effective series resistance increases. In theory, you can always cure this by putting more batteries in parallel to reduce the net ESR of all the batteries in parallel. Some people want a very lightweight robot, and so they can't add more batteries, so they can't completely cure the problem. Sometimes they can get adequate performance from the robot by using various "soft-start" or "load-shedding" techniques.
Rather than trying to pull a lot of power from the batteries to get up to speed quickly -- more power than the battery can physically supply, and so triggering this unwanted latchup -- we pull somewhat less power from the batteries, slowly ramping up to speed, and applying various "limp mode" and "tired mode" techniques to keep the total power at any one instant low enough that the battery can supply it. (You may be interested in What is the best way to power a large number (27) servos at 5 V? ).
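For step (2), the datasheet's "Thermal Analysis and Design" procedure boils down to a couple of multiplications, so it is easy to script a quick check. Every number below is an illustrative assumption: the supply voltage, motor current, and θ_JA are made up, and the typical I_Q and V_sat values should be replaced with the ones from your own board and datasheet.

```python
# Back-of-envelope junction-temperature estimate for a buck regulator
# like the LM2576, following the datasheet's thermal-analysis recipe.
# ALL values are example assumptions -- substitute your own measurements.
v_in = 12.0       # V, battery/input voltage
v_out = 9.0       # V, adjusted output feeding the motors
i_load = 2.5      # A, motor current draw
i_q = 0.005       # A, regulator quiescent current (check datasheet)
v_sat = 1.4       # V, internal switch saturation voltage (check datasheet)
theta_ja = 45.0   # degC/W, junction-to-ambient with a small heatsink (guess)
t_ambient = 25.0  # degC

duty = v_out / v_in                        # approximate duty cycle
p_d = v_in * i_q + duty * i_load * v_sat   # power dissipated in the IC, W
t_junction = t_ambient + theta_ja * p_d

print(f"P_d = {p_d} W, T_j = {t_junction} degC")
# If T_j approaches the ~125 degC maximum, thermal shutdown is plausible --
# consistent with "works at low output voltage, shuts down at high".
```

Note how the dissipation scales with duty cycle times load current: cranking the output voltage up for faster motors increases both factors at once, which is exactly the symptom described.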
{ "domain": "robotics.stackexchange", "id": 72, "tags": "motor, electronics, power" }
How does measurement work in this multi-qubit entangled system?
Question: Given a 4 qubit entangled state: $C_1= \frac{1}{2} (|0000\rangle+|1100\rangle+|0011\rangle-|1111\rangle)$. Assume we take one particle and measure it in the basis $\frac{1}{\sqrt2}(|0\rangle+|1\rangle)$, $\frac{1}{\sqrt2}(|0\rangle-|1\rangle)$; then we take another particle and measure it too in the same basis. What would the resultant state of the entangled system be at each stage? I assume we can write the above system in the form $$\frac{1}{\sqrt2}\left(\frac{|00\rangle+|11\rangle}{\sqrt2}\,|00\rangle+\frac{|00\rangle-|11\rangle}{\sqrt2}\,|11\rangle\right)$$ But I am unclear beyond this. P.S. New to QM and don't have much idea about 4 qubit systems. Can someone help with the steps in multi-qubit examples, preferably using matrix operators for this specific case, and how to measure in a general case? Most sources only talk about 2 qubit systems. Answer: Measurement in the $|+\rangle = \frac{1}{\sqrt{2}}(|0\rangle + |1\rangle)$, $|-\rangle = \frac{1}{\sqrt{2}}(|0\rangle - |1\rangle)$ basis can be expressed in terms of a von Neumann measurement, i.e., the probability of outcome $i$ for a projection-valued measurement is given by $$P_i = \mathrm{tr}(\rho \Pi_i),$$ where $\lbrace\Pi_i\rbrace$ is the set of projection operators of the measurement and $\rho$ is the density matrix. For such measurements the post-measurement state for the $i^{th}$ outcome is given by $$ \rho' = \frac{\Pi_i \rho \Pi_i}{\mathrm{tr}(\rho\Pi_i)}$$ Now for the case asked in the question, the projection operators in the $|+/-\rangle$ basis would be $\Pi_+ = |+\rangle\langle+|$, $\Pi_- = |-\rangle\langle-|$. In order to measure just a subsystem, for example the second qubit of the four qubits, the operator has to be tensored with the identity: $$\Pi_+ = I_2\otimes |+\rangle\langle+| \otimes I_2 \otimes I_2$$
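To make the recipe concrete, here is a small NumPy sketch that builds $C_1$, projects the first qubit onto $|+\rangle$, and normalizes the post-measurement state. It works with state vectors instead of density matrices, which is equivalent for a pure state; measuring the second qubit is the same pattern applied to the collapsed state.

```python
import numpy as np

def basis_state(bits):
    """Computational basis state |bits> as a length-2^n vector."""
    v = np.zeros(2 ** len(bits))
    v[int(bits, 2)] = 1.0
    return v

# C1 = (|0000> + |1100> + |0011> - |1111>) / 2
psi = 0.5 * (basis_state('0000') + basis_state('1100')
             + basis_state('0011') - basis_state('1111'))

plus = np.array([1.0, 1.0]) / np.sqrt(2)
proj_plus = np.outer(plus, plus)       # |+><+| on a single qubit
I2 = np.eye(2)

# Projector for outcome "+" on the FIRST qubit: |+><+| (x) I (x) I (x) I
Pi = np.kron(proj_plus, np.kron(I2, np.kron(I2, I2)))

p_plus = psi @ Pi @ psi                # Born rule: <psi| Pi |psi>
post = Pi @ psi / np.sqrt(p_plus)      # normalized post-measurement state

print(p_plus)                          # 0.5 for this state
```

Putting the identity factors in the right tensor slots is the only bookkeeping that changes between qubits, which is why the same code generalizes to any number of qubits.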
{ "domain": "physics.stackexchange", "id": 89935, "tags": "quantum-mechanics, quantum-entanglement, measurements" }
How does electromagnetic radiation produce heating effect in a material?
Question: A laser beam (a form of electromagnetic radiation) has various applications in laser cutting, drilling, welding etc., which are possible by melting or vaporizing the target material with the heat produced by the laser. My question: How does electromagnetic radiation produce a heating effect in a material? Answer: To understand how EM radiation can cause the heating up of a certain material, it is very important to understand what we mean by the heat energy of the material and how it is stored. Heat energy, at a microscopic level, is stored in the degrees of freedom of atoms and molecules. These degrees of freedom are translational, rotational and vibrational, and they all store different amounts of energy, depending on the geometry of the molecule. Translational degrees of freedom are the atom or molecule moving around in space, and there are always 3, for the 3 dimensions of space. The rotational and vibrational modes come from the geometry of the atom/molecule. How is heat represented on a quantum level? There are mainly three types of freedom in connection with heat capacity:

- translational: Translational degrees of freedom arise from a gas molecule's ability to move freely in space.
- rotational: A molecule's rotational degrees of freedom represent the number of unique ways the molecule may rotate in space about its center of mass, which changes the molecule's orientation.
- vibrational: The number of vibrational degrees of freedom (or vibrational modes) of a molecule is determined by examining the number of unique ways the atoms within the molecule may move relative to one another, such as in bond stretches or bends.
https://en.wikibooks.org/wiki/Statistical_Thermodynamics_and_Rate_Theories/Degrees_of_freedom Now when a photon interacts with the material's atoms and molecules, it might be absorbed (transfers all its energy and ceases to exist) or inelastically scattered (transfers part of its energy and changes angle). As the photon transfers its energy to the atom or molecule, then its translational, vibrational or rotational energies might rise, and the material heats up.
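The degree-of-freedom bookkeeping above is exactly what fixes a gas's heat capacity in the classical limit, via equipartition: each quadratic degree of freedom stores $\frac{1}{2}k_BT$ per molecule, so $C_V = \frac{f}{2}R$ per mole. A minimal sketch (classical limit; vibrational modes count twice, for kinetic plus potential energy, and are often frozen out at room temperature):

```python
R_GAS = 8.314  # J / (mol K), molar gas constant

def molar_cv(translational, rotational, vibrational_modes):
    """Classical equipartition molar heat capacity C_V = (f/2) R."""
    # Each vibrational mode contributes two quadratic energy terms
    # (kinetic + potential), hence the factor of 2.
    f = translational + rotational + 2 * vibrational_modes
    return f / 2 * R_GAS

print(molar_cv(3, 0, 0))  # monatomic gas: 3/2 R, ~12.47 J/(mol K)
print(molar_cv(3, 2, 0))  # rigid diatomic: 5/2 R, ~20.79 J/(mol K)
```

This also makes the heating mechanism quantitative: the more of these modes the absorbed photon energy can excite, the more energy the material soaks up per degree of temperature rise.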
{ "domain": "physics.stackexchange", "id": 70089, "tags": "thermodynamics, electromagnetic-radiation" }
Are there any colored RAW photos of the Milky Way taken from space?
Question: We can get RAW versions for many ISS images, like those of airglow, e.g. ISS043-E-143486. But all the photos of the Milky Way taken from the ISS that I found were posted on Twitter (e.g. this), Flickr or other social networks, heavily tone-mapped; it's not possible to get the RAW sources, and apparently they are not in the public domain like those obtainable from the NASA website. But maybe I've missed something, or some missions other than the ISS have colored RAW photos of the Milky Way. Or, if there are any colored, colorimetrically calibrated images taken in the visible spectrum from space, this would also suit my needs. I know there is Axel Mellinger's panorama, available as a FITS file from this page. But this one was made from Earth-based photos, so I'm not completely convinced of its hue correctness; besides, with the FITS data I failed to reproduce the hues from the JPEG preview like this one. So, are there any non-tone-mapped or RAW images of the Milky Way taken from space available to the general public? Answer: Among astronaut photographs, https://eol.jsc.nasa.gov/SearchPhotos/photo.pl?mission=ISS044&roll=E&frame=45215 is a photo of airglow with the Milky Way in the background. A "RAW" image (actually NEF, Nikon's RAW file format) is available. This may be suitable for your requirements. Only a small part of the Milky Way is visible, and there is considerable motion blur on the whole image. It was taken from space, but through the glass of the ISS viewing dome.
{ "domain": "astronomy.stackexchange", "id": 7011, "tags": "milky-way" }
Bitex - Cryptocurrency Exchange API Framework for Python - Round 2
Question: This is a follow up to my previous review request. BitEx is a Python module I've been working on for a little over 9 months now, as a side project. It was published 6 months ago on GitHub, and as I edge closer to my 1.0 release, I wanted to take the opportunity to present my code on here, in order to straighten it out. What it solves and offers It's designed to eliminate the need to get into the gory details of REST APIs of crypto exchanges, and offer a homogeneous and intuitive interface for all supported APIs. It takes care of authentication procedures, and offers a standardized set of methods (with identical method signature) for all commonly used methods at an exchange (polling order book and tickers, placing and cancelling orders, amongst others), as well as all other specific methods (or as many as I had the time to implement thus far). It comes, essentially, as two sub packages: bitex.api is the backend taking care of setting up http requests via the requests module, as well as handling authentication specifics. It can be seen as wrapper for requests and technically could be used all on its own to send and receive data to/from exchanges. The other is bitex.interfaces, which offers the above mentioned homogeneous, standardized methods for all implemented exchanges. In addition to offering identical method signatures, it also aims to standardize method's return values. As these can differ significantly from exchange to exchange, these methods take care of data formatting, via the help of formatters found in bitex.formatters and the return_json decorator. It relies on bitex.api. Why I am submitting this for review Ever since I started this project, I've rewritten the base code several times significantly. It took me a long time to figure out how to lay out the structure (which is mostly due to my learning curve over the past year as a first year software development apprentice). 
Over the past two months, however, I've become rather fond and proud of the current structure and deem it quite presentable - enough so, to have it publicly audited. I have read the meta question on how to get the best value out of my review, and initially settled on having three 'rounds' of reviews for my code:

1. Round: code style (completed), PEP8, readability, pythonic-ness
2. Round: refactoring options and the evaluation of the present layout, especially the API class' sign() method, the return_json() decorator and the usage of formatter funcs.
3. Round: flaws, improvements in code and logic, bugs, etc.

Review Round 2: Refactoring and Layout

I especially have my worries about the bitex.api sub-module. The sign() method is difficult to generalize as it is, since the inputs vary massively, forcing me to pass everything to them. I was not able to come up with a more sensible solution. In bitex.interfaces, it at first looked like a good idea to have a BaseInterface class, since I have several methods which appear in all interfaces - query_public(), query_private(), and the standardized methods. At second glance, however, this mixin class did not make sense. This is because it looks to me like I'd introduce a class for the sake of having a mixin class, as I'd have to override methods for most exchanges to some degree anyway, resulting in almost identical code to now - with minimal improvement. And lastly, the use and design of bitex.formatters to format returned data from bitex.interfaces classes via the @return_json decorator (which may be slightly ill-named by now, since it no longer just returns the requests.response JSON value). Is mine an acceptable approach? It seemed like the most straightforward solution (as opposed to cluttering each Interface class with the formatting directly).
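On the @return_json question: the pattern described (a decorator that intercepts the requests response and pipes the parsed body through an exchange-specific formatter) is a common and reasonable approach, and renaming it to something like return_api_response would address the naming concern. A generic sketch of the idea follows; the names (return_api_response, fmt_ticker, FakeResponse) are illustrative stand-ins, not BitEx's actual identifiers:

```python
import functools

def return_api_response(formatter):
    """Decorator factory: parse the HTTP response, then format it."""
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            response = func(*args, **kwargs)   # a requests.Response-like object
            return formatter(response.json())  # normalize exchange-specific JSON
        return wrapper
    return decorator

class FakeResponse:
    """Stand-in for requests.Response, for demonstration only."""
    def __init__(self, payload):
        self._payload = payload
    def json(self):
        return self._payload

def fmt_ticker(data):
    """Example formatter: map exchange-specific keys to a standard shape."""
    return {'bid': float(data['highestBid']), 'ask': float(data['lowestAsk'])}

@return_api_response(fmt_ticker)
def ticker():
    return FakeResponse({'highestBid': '99.5', 'lowestAsk': '100.1'})

print(ticker())  # {'bid': 99.5, 'ask': 100.1}
```

Keeping the formatter as a plain function parameter like this is what lets each exchange supply its own formatter module without cluttering the Interface classes, which matches the design goal stated above.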
bitex.api

# Import Built-Ins
import logging
import requests
import time

# Import Third-Party

# Import Homebrew

log = logging.getLogger(__name__)


class RESTAPI:
    def __init__(self, uri, api_version='', key='', secret=''):
        """
        Base Class for REST API connections.
        """
        self.key = key
        self.secret = secret
        self.uri = uri
        self.apiversion = api_version
        self.req_methods = {'POST': requests.post, 'PUT': requests.put,
                            'GET': requests.get, 'DELETE': requests.delete,
                            'PATCH': requests.patch}
        log.debug("Initialized RESTAPI for URI: %s; "
                  "Will request on API version: %s" %
                  (self.uri, self.apiversion))

    def load_key(self, path):
        """
        Load key and secret from file.
        """
        with open(path, 'r') as f:
            self.key = f.readline().strip()
            self.secret = f.readline().strip()

    def nonce(self):
        return str(int(1000 * time.time()))

    def sign(self, url, endpoint, endpoint_path, method_verb, *args, **kwargs):
        """
        Dummy Signature creation method. Override this in child.
        URL is required to be returned, as some Signatures use the url for
        sig generation, and api calls made must match the address exactly.
        """
        url = self.uri
        return url, {'params': {'test_param': "authenticated_chimichanga"}}

    def query(self, method_verb, endpoint, authenticate=False, *args, **kwargs):
        """
        Queries exchange using given data. Defaults to unauthenticated query.
        """
        request_method = self.req_methods[method_verb]
        if self.apiversion:
            endpoint_path = '/' + self.apiversion + '/' + endpoint
        else:
            endpoint_path = '/' + endpoint

        url = self.uri + endpoint_path

        if authenticate:
            # sign off kwargs and url before sending request
            url, request_kwargs = self.sign(url, endpoint, endpoint_path,
                                            method_verb, *args, **kwargs)
        else:
            request_kwargs = kwargs

        log.debug("Making request to: %s, kwargs: %s" % (url, request_kwargs))
        r = request_method(url, timeout=5, **request_kwargs)
        log.debug("Made %s request made to %s, with headers %s and body %s. "
                  "Status code %s" % (r.request.method, r.request.url,
                                      r.request.headers, r.request.body,
                                      r.status_code))
        return r

bitex.api.rest

# Import Built-ins
import logging
import json
import hashlib
import hmac
import base64
import time
import urllib
import urllib.parse

from requests.auth import AuthBase

# Import Third-Party

# Import Homebrew
from bitex.api.api import RESTAPI

log = logging.getLogger(__name__)


class BitfinexREST(RESTAPI):
    def __init__(self, key='', secret='', api_version='v1',
                 url='https://api.bitfinex.com'):
        super(BitfinexREST, self).__init__(url, api_version=api_version,
                                           key=key, secret=secret)

    def sign(self, url, endpoint, endpoint_path, method_verb, *args, **kwargs):
        try:
            req = kwargs['params']
        except KeyError:
            req = {}
        req['request'] = endpoint_path
        req['nonce'] = self.nonce()

        js = json.dumps(req)
        data = base64.standard_b64encode(js.encode('utf8'))

        h = hmac.new(self.secret.encode('utf8'), data, hashlib.sha384)
        signature = h.hexdigest()
        headers = {"X-BFX-APIKEY": self.key,
                   "X-BFX-SIGNATURE": signature,
                   "X-BFX-PAYLOAD": data}

        return url, {'headers': headers}


class BitstampREST(RESTAPI):
    def __init__(self, user_id='', key='', secret='', api_version='',
                 url='https://www.bitstamp.net/api'):
        self.id = user_id
        super(BitstampREST, self).__init__(url, api_version=api_version,
                                           key=key, secret=secret)

    def load_key(self, path):
        """
        Load key and secret from file.
        """
        with open(path, 'r') as f:
            self.id = f.readline().strip()
            self.key = f.readline().strip()
            self.secret = f.readline().strip()

    def sign(self, url, endpoint, endpoint_path, method_verb, *args, **kwargs):
        nonce = self.nonce()
        message = nonce + self.id + self.key

        signature = hmac.new(self.secret.encode(), message.encode(),
                             hashlib.sha256)
        signature = signature.hexdigest().upper()

        try:
            req = kwargs['params']
        except KeyError:
            req = {}
        req['key'] = self.key
        req['nonce'] = nonce
        req['signature'] = signature

        return url, {'data': req}


class BittrexREST(RESTAPI):
    def __init__(self, key='', secret='', api_version='v1.1',
                 url='https://bittrex.com/api'):
        super(BittrexREST, self).__init__(url, api_version=api_version,
                                          key=key, secret=secret)

    def sign(self, url, endpoint, endpoint_path, method_verb, *args, **kwargs):
        try:
            params = kwargs['params']
        except KeyError:
            params = {}

        nonce = self.nonce()

        req_string = endpoint_path + '?apikey=' + self.key + "&nonce=" + nonce + '&'
        req_string += urllib.parse.urlencode(params)
        headers = {"apisign": hmac.new(self.secret.encode('utf-8'),
                                       (self.uri + req_string).encode('utf-8'),
                                       hashlib.sha512).hexdigest()}
        return self.uri + req_string, {'headers': headers, 'params': {}}


class CoincheckREST(RESTAPI):
    def __init__(self, key='', secret='', api_version='api',
                 url='https://coincheck.com'):
        super(CoincheckREST, self).__init__(url, api_version=api_version,
                                            key=key, secret=secret)

    def sign(self, url, endpoint, endpoint_path, method_verb, *args, **kwargs):
        nonce = self.nonce()
        try:
            params = kwargs['params']
        except KeyError:
            params = {}

        params = json.dumps(params)
        # sig = nonce + url + req
        data = (nonce + endpoint_path + params).encode('utf-8')
        h = hmac.new(self.secret.encode('utf8'), data, hashlib.sha256)
        signature = h.hexdigest()
        headers = {"ACCESS-KEY": self.key,
                   "ACCESS-NONCE": nonce,
                   "ACCESS-SIGNATURE": signature}

        return url, {'headers': headers}


class GdaxAuth(AuthBase):
    def __init__(self, api_key, secret_key, passphrase):
        self.api_key = api_key.encode('utf-8')
        self.secret_key = secret_key.encode('utf-8')
        self.passphrase = passphrase.encode('utf-8')

    def __call__(self, request):
        timestamp = str(time.time())
        message = (timestamp + request.method + request.path_url +
                   (request.body or ''))
        hmac_key = base64.b64decode(self.secret_key)
        signature = hmac.new(hmac_key, message.encode('utf-8'), hashlib.sha256)
        signature_b64 = base64.b64encode(signature.digest())
        request.headers.update({
            'CB-ACCESS-SIGN': signature_b64,
            'CB-ACCESS-TIMESTAMP': timestamp,
            'CB-ACCESS-KEY': self.api_key,
            'CB-ACCESS-PASSPHRASE': self.passphrase,
            'Content-Type': 'application/json'
        })
        return request


class GDAXRest(RESTAPI):
    def __init__(self, passphrase='', key='', secret='', api_version='',
                 url='https://api.gdax.com'):
        self.passphrase = passphrase
        super(GDAXRest, self).__init__(url, api_version=api_version,
                                       key=key, secret=secret)

    def load_key(self, path):
        """
        Load key and secret from file.
        """
        with open(path, 'r') as f:
            self.passphrase = f.readline().strip()
            self.key = f.readline().strip()
            self.secret = f.readline().strip()

    def sign(self, url, endpoint, endpoint_path, method_verb, *args, **kwargs):
        auth = GdaxAuth(self.key, self.secret, self.passphrase)
        try:
            js = kwargs['params']
        except KeyError:
            js = {}
        return url, {'json': js, 'auth': auth}


class KrakenREST(RESTAPI):
    def __init__(self, key='', secret='', api_version='0',
                 url='https://api.kraken.com'):
        super(KrakenREST, self).__init__(url, api_version=api_version,
                                         key=key, secret=secret)

    def sign(self, url, endpoint, endpoint_path, method_verb, *args, **kwargs):
        try:
            req = kwargs['params']
        except KeyError:
            req = {}

        req['nonce'] = self.nonce()
        postdata = urllib.parse.urlencode(req)

        # Unicode-objects must be encoded before hashing
        encoded = (str(req['nonce']) + postdata).encode('utf-8')
        message = (endpoint_path.encode('utf-8') +
                   hashlib.sha256(encoded).digest())

        signature = hmac.new(base64.b64decode(self.secret),
                             message, hashlib.sha512)
        sigdigest = base64.b64encode(signature.digest())

        headers = {
            'API-Key': self.key,
            'API-Sign': sigdigest.decode('utf-8')
        }

        return url, {'data': req, 'headers': headers}


class ItbitREST(RESTAPI):
    def __init__(self, user_id='', key='', secret='', api_version='v1',
                 url='https://api.itbit.com'):
        self.userId = user_id
        super(ItbitREST, self).__init__(url, api_version=api_version,
                                        key=key, secret=secret)

    def load_key(self, path):
        """
        Load user id, key and secret from file.
        """
        with open(path, 'r') as f:
            self.userId = f.readline().strip()
            self.clientKey = f.readline().strip()
            self.secret = f.readline().strip()

    def sign(self, url, endpoint, endpoint_path, method_verb, *args, **kwargs):
        try:
            params = kwargs['params']
        except KeyError:
            params = {}

        verb = method_verb

        if verb in ('PUT', 'POST'):
            body = params
        else:
            body = {}

        timestamp = self.nonce()
        nonce = self.nonce()

        message = json.dumps([verb, url, body, nonce, timestamp],
                             separators=(',', ':'))
        sha256_hash = hashlib.sha256()
        nonced_message = nonce + message
        sha256_hash.update(nonced_message.encode('utf8'))
        hash_digest = sha256_hash.digest()
        hmac_digest = hmac.new(self.secret.encode('utf-8'),
                               url.encode('utf-8') + hash_digest,
                               hashlib.sha512).digest()
        signature = base64.b64encode(hmac_digest)

        auth_headers = {
            'Authorization': self.key + ':' + signature.decode('utf8'),
            'X-Auth-Timestamp': timestamp,
            'X-Auth-Nonce': nonce,
            'Content-Type': 'application/json'
        }
        return url, {'headers': auth_headers}


class OKCoinREST(RESTAPI):
    def __init__(self, key='', secret='', api_version='v1',
                 url='https://www.okcoin.com/api'):
        super(OKCoinREST, self).__init__(url, api_version=api_version,
                                         key=key, secret=secret)

    def sign(self, url, endpoint, endpoint_path, method_verb, *args, **kwargs):
        nonce = self.nonce()

        # sig = nonce + url + req
        data = (nonce + url).encode()
        h = hmac.new(self.secret.encode('utf8'), data, hashlib.sha256)
        signature = h.hexdigest()
        headers = {"ACCESS-KEY": self.key,
                   "ACCESS-NONCE": nonce,
                   "ACCESS-SIGNATURE": signature}

        return url,
{'headers': headers} class BTCERest(RESTAPI): def __init__(self, key='', secret='', api_version='3', url='https://btc-e.com/api'): super(BTCERest, self).__init__(url, api_version=api_version, key=key, secret=secret) def sign(self, url, endpoint, endpoint_path, method_verb, *args, **kwargs): nonce = self.nonce() try: params = kwargs['params'] except KeyError: params = {} post_params = params post_params.update({'nonce': nonce, 'method': endpoint.split('/', 1)[1]}) post_params = urllib.parse.urlencode(post_params) signature = hmac.new(self.secret.encode('utf-8'), post_params.encode('utf-8'), hashlib.sha512) headers = {'Key': self.key, 'Sign': signature.hexdigest(), "Content-type": "application/x-www-form-urlencoded"} # split by tapi str to gain clean url; url = url.split('/tapi', 1)[0] + '/tapi' return url, {'headers': headers, 'params': params} class CCEXRest(RESTAPI): def __init__(self, key='', secret='', api_version='', url='https://c-cex.com/t'): super(CCEXRest, self).__init__(url, api_version=api_version, key=key, secret=secret) def sign(self, uri, endpoint, endpoint_path, method_verb, *args, **kwargs): nonce = self.nonce() try: params = kwargs['params'] except KeyError: params = {} params['apikey'] = self.key params['nonce'] = nonce post_params = params post_params.update({'nonce': nonce, 'method': endpoint}) post_params = urllib.parse.urlencode(post_params) url = uri + post_params sig = hmac.new(url, self.secret, hashlib.sha512) headers = {'apisign': sig} return url, {'headers': headers} class CryptopiaREST(RESTAPI): def __init__(self, key='', secret='', api_version='', url='https://www.cryptopia.co.nz/api'): super(CryptopiaREST, self).__init__(url, api_version=api_version, key=key, secret=secret) def sign(self, uri, endpoint, endpoint_path, method_verb, *args, **kwargs): nonce = self.nonce() try: params = kwargs['params'] except KeyError: params = {} post_data = json.dumps(params) md5 = base64.b64encode(hashlib.md5().updated(post_data).digest()) sig = 
self.key + 'POST' + urllib.parse.quote_plus(uri).lower() + nonce + md5 hmac_sig = base64.b64encode(hmac.new(base64.b64decode(self.secret), sig, hashlib.sha256).digest()) header_data = 'amx' + self.key + ':' + hmac_sig + ':' + nonce headers = {'Authorization': header_data, 'Content-Type': 'application/json; charset=utf-8'} return uri, {'headers': headers, 'data': post_data} class GeminiREST(RESTAPI): def __init__(self, key='', secret='', api_version='v1', url='https://api.gemini.com'): super(GeminiREST, self).__init__(url, api_version=api_version, key=key, secret=secret) def sign(self, uri, endpoint, endpoint_path, method_verb, *args, **kwargs): nonce = self.nonce() try: params = kwargs['params'] except KeyError: params = {} payload = params payload['nonce'] = nonce payload['request'] = endpoint_path payload = base64.b64encode(json.dumps(payload)) sig = hmac.new(self.secret, payload, hashlib.sha384).hexdigest() headers = {'X-GEMINI-APIKEY': self.key, 'X-GEMINI-PAYLOAD': payload, 'X-GEMINI-SIGNATURE': sig} return uri, {'headers': headers} class YunbiREST(RESTAPI): def __init__(self, key='', secret='', api_version='v2', url='https://yunbi.com/api'): super(YunbiREST, self).__init__(url, api_version=api_version, key=key, secret=secret) def sign(self, uri, endpoint, endpoint_path, method_verb, *args, **kwargs): nonce = self.nonce() try: params = kwargs['params'] except KeyError: params = {} params['tonce'] = nonce params['access_key'] = self.key post_params = urllib.parse.urlencode(params) msg = '%s|%s|%s' % (method_verb, endpoint_path, post_params) sig = hmac.new(self.secret, msg, hashlib.sha256).hexdigest() uri += post_params + '&signature=' + sig return uri, {} class RockTradingREST(RESTAPI): def __init__(self, key='', secret='', api_version='v1', url='https://api.therocktrading.com'): super(RockTradingREST, self).__init__(url, api_version=api_version, key=key, secret=secret) def sign(self, uri, endpoint, endpoint_path, method_verb, *args, **kwargs): nonce = 
self.nonce() try: params = kwargs['params'] except KeyError: params = {} payload = params payload['nonce'] = int(nonce) payload['request'] = endpoint_path msg = nonce + uri sig = hmac.new(self.secret.encode(), msg.encode(), hashlib.sha384).hexdigest() headers = {'X-TRT-APIKEY': self.key, 'X-TRT-Nonce': nonce, 'X-TRT-SIGNATURE': sig, 'Content-Type': 'application/json'} return uri, {'headers': headers} class PoloniexREST(RESTAPI): def __init__(self, key='', secret='', api_version='', url='https://poloniex.com'): super(PoloniexREST, self).__init__(url, api_version=api_version, key=key, secret=secret) def sign(self, uri, endpoint, endpoint_path, method_verb, *args, **kwargs): try: params = kwargs['params'] except KeyError: params = {} params['nonce'] = self.nonce() payload = params msg = urllib.parse.urlencode(payload).encode('utf-8') sig = hmac.new(self.secret.encode('utf-8'), msg, hashlib.sha512).hexdigest() headers = {'Key': self.key, 'Sign': sig} return uri, {'headers': headers, 'data': params} bitex.interfaces bitex.interfaces.kraken """ https:/kraken.com/help/api """ # Import Built-Ins import logging # Import Third-Party # Import Homebrew from bitex.api.rest import KrakenREST from bitex.utils import return_json from bitex.formatters.kraken import cancel, trade, order_book # Init Logging Facilities log = logging.getLogger(__name__) class Kraken(KrakenREST): def __init__(self, key='', secret='', key_file=''): super(Kraken, self).__init__(key, secret) if key_file: self.load_key(key_file) def make_params(self, *pairs, **kwargs): q = {'pair': ','.join(pairs)} q.update(kwargs) return q def public_query(self, endpoint, **kwargs): path = 'public/' + endpoint return self.query('GET', path, **kwargs) def private_query(self, endpoint, **kwargs): path = 'private/' + endpoint return self.query('POST', path, authenticate=True, **kwargs) """ BitEx Standardized Methods """ @return_json(None) def ticker(self, *pairs): q = self.make_params(*pairs) return 
self.public_query('Ticker', params=q) @return_json(order_book) def order_book(self, pair, **kwargs): q = self.make_params(pair, **kwargs) return self.public_query('Depth', params=q) @return_json(None) def trades(self, pair, **kwargs): q = self.make_params(pair, **kwargs) return self.public_query('Trades', params=q) def _add_order(self, pair, side, price, amount, **kwargs): q = {'pair': pair, 'type': side, 'price': price, 'ordertype': 'limit', 'volume': amount, 'trading_agreement': 'agree'} q.update(kwargs) return self.private_query('AddOrder', params=q) @return_json(trade) def bid(self, pair, price, amount, **kwargs): return self._add_order(pair, 'buy', price, amount, **kwargs) @return_json(trade) def ask(self, pair, price, amount, **kwargs): return self._add_order(pair, 'sell', price, amount, **kwargs) @return_json(cancel) def cancel_order(self, order_id, **kwargs): q = {'txid': order_id} q.update(kwargs) return self.private_query('CancelOrder', params=q) @return_json(None) def order_info(self, *txids, **kwargs): if len(txids) > 1: q = {'txid': txids} elif txids: txid, *_ = txids q = {'txid': txid} else: q = {} q.update(kwargs) return self.private_query('QueryOrders', params=q) @return_json(None) def balance(self, **kwargs): return self.private_query('Balance') @return_json(None) def withdraw(self, _type, source_wallet, amount, tar_addr, **kwargs): raise NotImplementedError() @return_json(None) def deposit_address(self, **kwargs): raise NotImplementedError() """ Exchange Specific Methods """ @return_json(None) def time(self): return self.public_query('Time') @return_json(None) def assets(self, **kwargs): return self.public_query('Assets', params=kwargs) @return_json(None) def pairs(self, **kwargs): return self.public_query('AssetPairs', params=kwargs) @return_json(None) def ohlc(self, pair, **kwargs): q = self.make_params(pair, **kwargs) return self.public_query('OHLC', params=q) @return_json(None) def spread(self, pair, **kwargs): q = self.make_params(pair, 
**kwargs) return self.public_query('Spread', params=q) @return_json(None) def orders(self, **kwargs): q = kwargs return self.private_query('OpenOrders', params=q) @return_json(None) def closed_orders(self, **kwargs): q = kwargs return self.private_query('ClosedOrders', params=q) @return_json(None) def trade_history(self, **kwargs): q = kwargs return self.private_query('TradesHistory', params=q) @return_json(None) def fees(self, pair=None): q = {'fee-info': True} if pair: q['pair'] = pair return self.private_query('TradeVolume', params=q) bitex.interfaces.bitfinex """ http://docs.bitfinex.com/ """ # Import Built-Ins import logging # Import Third-Party # Import Homebrew from bitex.api.rest import BitfinexREST from bitex.utils import return_json from bitex.formatters.bitfinex import trade, cancel, order_status # Init Logging Facilities log = logging.getLogger(__name__) class Bitfinex(BitfinexREST): def __init__(self, key='', secret='', key_file=''): super(Bitfinex, self).__init__(key, secret) if key_file: self.load_key(key_file) def public_query(self, endpoint, **kwargs): return self.query('GET', endpoint, **kwargs) def private_query(self, endpoint, **kwargs): return self.query('POST', endpoint, authenticate=True, **kwargs) """ BitEx Standardized Methods """ @return_json(None) def order_book(self, pair, **kwargs): return self.public_query('book/%s' % pair, params=kwargs) @return_json(None) def ticker(self, pair, **kwargs): return self.public_query('pubticker/%s' % pair, params=kwargs) @return_json(None) def trades(self, pair, **kwargs): return self.public_query('trades/%s' % pair, params=kwargs) def _place_order(self, pair, amount, price, side, replace, **kwargs): q = {'symbol': pair, 'amount': amount, 'price': price, 'side': side, 'type': 'exchange limit'} q.update(kwargs) if replace: return self.private_query('order/cancel/replace', params=q) else: return self.private_query('order/new', params=q) @return_json(trade) def bid(self, pair, price, amount, replace=False, 
**kwargs): return self._place_order(pair, amount, price, 'buy', replace=replace, **kwargs) @return_json(trade) def ask(self, pair, price, amount, replace=False, **kwargs): return self._place_order(pair, str(amount), str(price), 'sell', replace=replace, **kwargs) @return_json(cancel) def cancel_order(self, order_id, all=False, **kwargs): q = {'order_id': int(order_id)} q.update(kwargs) if not all: return self.private_query('order/cancel', params=q) else: endpoint = 'order/cancel/all' return self.private_query(endpoint) @return_json(order_status) def order(self, order_id, **kwargs): q = {'order_id': order_id} q.update(kwargs) return self.private_query('order/status', params=q) @return_json(None) def balance(self, **kwargs): return self.private_query('balances', params=kwargs) @return_json(None) def withdraw(self, _type, source_wallet, amount, tar_addr, **kwargs): q = {'withdraw_type': _type, 'walletselected': source_wallet, 'amount': amount, 'address': tar_addr} q.update(kwargs) return self.private_query('withdraw', params=q) @return_json(None) def deposit_address(self, **kwargs): q = {'method': currency, 'wallet_name': target_wallet} q.update(kwargs) return self.private_query('deposit/new', params=kwargs) """ Exchange Specific Methods """ @return_json(None) def statistics(self, pair): return self.public_query('stats/%s' % pair) @return_json(None) def funding_book(self, currency, **kwargs): return self.public_query('lendbook/%s' % currency, params=kwargs) @return_json(None) def lends(self, currency, **kwargs): return self.public_query('lends/%s' % currency, params=kwargs) @return_json(None) def pairs(self, details=False): if details: return self.public_query('symbols_details') else: return self.public_query('symbols') @return_json(None) def fees(self): return self.private_query('account_infos') @return_json(None) def orders(self): return self.private_query('orders') @return_json(None) def balance_history(self, currency, **kwargs): q = {'currency': currency} 
q.update(kwargs) return self.private_query('history/movements', params=q) @return_json(None) def trade_history(self, pair, since, **kwargs): q = {'symbol': pair, 'timestamp': since} q.update(kwargs) return self.private_query('mytrades', params=q) bitex.interfaces.gdax """ https://docs.gdax.com/ """ # Import Built-Ins import logging # Import Third-Party # Import Homebrew from bitex.api.rest import GDAXRest from bitex.utils import return_json # Init Logging Facilities log = logging.getLogger(__name__) class GDAX(GDAXRest): def __init__(self, key='', secret='', key_file=''): super(GDAX, self).__init__(key, secret) if key_file: self.load_key(key_file) def public_query(self, endpoint, **kwargs): return self.query('GET', endpoint, **kwargs) def private_query(self, endpoint, method_verb='POST', **kwargs): return self.query(method_verb, endpoint, authenticate=True, **kwargs) """ BitEx Standardized Methods """ @return_json(None) def ticker(self, pair, **kwargs): return self.public_query('products/%s/ticker' % pair, params=kwargs) @return_json(None) def order_book(self, pair, **kwargs): return self.public_query('products/%s/book' % pair, params=kwargs) @return_json(None) def trades(self, pair, **kwargs): return self.public_query('products/%s/trades' % pair, params=kwargs) @return_json(None) def bid(self, pair, price, size, **kwargs): q = {'side': 'buy', 'type': 'market', 'product_id': pair, 'price': price, 'size': size} q.update(kwargs) return self.private_query('orders', params=q) @return_json(None) def ask(self, pair, price, amount, **kwargs): q = {'side': 'sell', 'type': 'market', 'product_id': pair, 'price': price, 'size': size} q.update(kwargs) return self.private_query('orders', params=q) @return_json(None) def cancel_order(self, order_id, all=False, **kwargs): if not all: return self.private_query('orders/%s' % order_id, method_verb='DELETE', params=kwargs) else: return self.private_query('orders', method_verb='DELETE', params=kwargs) @return_json(None) def 
order(self, order_id, **kwargs): return self.private_query('orders/%s' % order_id, method_verb='GET', params=kwargs) @return_json(None) def balance(self, **kwargs): return self.private_query('accounts', method_verb='GET', params=kwargs) @return_json(None) def withdraw(self, _type, source_wallet, amount, tar_addr, **kwargs): raise NotImplementedError() @return_json(None) def deposit_address(self, **kwargs): raise NotImplementedError() """ Exchange Specific Methods """ @return_json def time(self): return self.public_query('time') @return_json(None) def currencies(self): return self.public_query('currencies') @return_json(None) def pairs(self): return self.public_query('products') @return_json(None) def ohlc(self, pair, **kwargs): return self.public_query('products/%s/candles' % pair, params=kwargs) @return_json(None) def stats(self, pair, **kwargs): return self.public_query('products/%s/stats' % pair, params=kwargs) bitex.utils # Import Built-Ins import logging import json import requests # Import Third-Party # Import Homebrew # Init Logging Facilities log = logging.getLogger(__name__) def return_json(formatter=None): def decorator(func): def wrapper(*args, **kwargs): try: r = func(*args, **kwargs) except Exception as e: log.error("return_json(): Error during call to " "%s(%s, %s) %s" % (func.__name__, args, kwargs, e)) raise try: r.raise_for_status() except requests.HTTPError as e: log.error("return_json: HTTPError for url %s: " "%s" % (r.request.url, e)) return None, r try: data = r.json() except json.JSONDecodeError: log.error('return_json: Error while parsing json. 
' 'Request url was: %s, result is: ' '%s' % (r.request.url, r.text)) return None, r except Exception as e: log.error("return_json(): Unexpected error while parsing json " "from %s: %s" % (r.request.url, e)) raise # Apply formatter and return if formatter is not None: return formatter(data, *args, **kwargs), r else: return data, r return wrapper return decorator You can also find the code at its GitHub repository. I've omitted some of the interface classes - the three I've provided are about as diverse as they come anyway. GitHub Repository (dev branch) Answer: There's a lot of code here, so I'm just going to review bitex.api. You'll see that there's plenty here for one review. Maybe some of the other reviewers here at Code Review will review some of your other code. There is no module docstring. What is the purpose of this module? What does it contain? import requests is in the "Built-Ins" section but as far as I know this module is not built into Python, so it should be in the "Third-Party" section. The name RESTAPI could be improved. Does an instance of this class represent an API client or an API server? The name does not make it clear. The RESTAPI class has no docstring. What does an instance of this class represent? Can you give some examples of how it might be used? There is text in your introduction to the post that could be used as a starting point for the docstring. When you log a message, don't use the % operator to format the message, instead pass the format string and format arguments separately as described in the documentation. (This gives more flexibility to the logger, for example if logging is suppressed then the message may never need to be formatted.) The docstring for the __init__ method needs to document that method, not the whole class. So it should start "Create a RESTAPI object ..." and go on to document the meaning of the arguments. 
It's very minor, but why is the argument spelled api_version with an underscore, but the attribute spelled apiversion without?

It is risky to have default arguments for key and secret that are insecure. There is a risk that, due to an oversight, the load_key method may not get called (especially since there does not appear to be documentation explaining when you need to call it). This would leave the API insecure. It is much better if code can be secure by default, so I would prefer default arguments that are invalid.

self.req_methods is the same every time, so it should be a global constant or a class attribute, not an instance attribute. The repetition in the construction of self.req_methods could be avoided, for example:

    # Supported HTTP verbs.
    VERBS = 'DELETE GET PATCH POST PUT'.split()
    REQ_METHODS = {verb: getattr(requests, verb.lower()) for verb in VERBS}

But actually I think that these data structures are unnecessary. Instead of:

    request_method = self.req_methods[method_verb]
    # ...
    r = request_method(url, timeout=5, **request_kwargs)

why not use requests.request?

    r = requests.request(method_verb, url, timeout=5, **request_kwargs)

The docstring for load_key should explain the meaning of the path argument. (It is the path to a text file whose first two lines are the key and the secret respectively.) The second argument to open defaults to 'r', so this could be omitted.

The method nonce has no docstring. Basing a nonce only on the output of time.time is risky, because this clock can go backwards:

    >>> import time
    >>> time.get_clock_info('time').monotonic
    False

and that would mean that nonces could repeat.

The docstring for sign does not explain what the method is supposed to do. The sign method is insecure by default. It would be easy to forget to override it, and then nothing would appear to go wrong, but all the signatures would be bogus. It would be better to be secure by default.
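On the nonce point above: one common remedy (my illustration, not code from the module; the class name is made up) is to remember the last nonce issued and never go below it, so even a backwards clock step cannot produce a repeat:

    import time

    class NonceSource:
        """Issues strictly increasing nonces, even if time.time() steps back."""
        def __init__(self):
            self._last = 0

        def nonce(self):
            # Millisecond timestamp, but never less than (previous + 1).
            now = int(time.time() * 1000)
            self._last = max(self._last + 1, now)
            return str(self._last)

    ns = NonceSource()
    a, b, c = ns.nonce(), ns.nonce(), ns.nonce()
    print(int(a) < int(b) < int(c))

This keeps the nonce anchored to wall-clock time in the common case, while guaranteeing monotonicity within one process.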
A good way to do that would be to use the facilities from the abc (Abstract Base Classes) module:

    from abc import ABCMeta, abstractmethod

    class RESTAPI(metaclass=ABCMeta):
        # ...

        @abstractmethod
        def sign(self, ...):
            # ...

Now any attempt to instantiate a subclass of RESTAPI that does not override the sign method will raise an exception.

The docstring for the query method does not explain what the arguments mean, or what is returned.

The timeout=5 keyword argument is hard-coded. What if someone needs to query an API that needs a longer timeout than this? The value should be a keyword argument to the query method (or possibly an attribute of the class or object).
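To make the logging point from earlier concrete — this is my own illustration, not code from the module; the logger name and the throwaway handler are made up for the demo — the lazy style passes the format string and the arguments separately, and interpolation only happens when a record is actually rendered:

    import logging

    class ListHandler(logging.Handler):
        """Minimal handler that just collects records for inspection."""
        def __init__(self):
            super().__init__()
            self.records = []

        def emit(self, record):
            self.records.append(record)

    log = logging.getLogger("bitex.demo")   # throwaway logger name
    handler = ListHandler()
    log.addHandler(handler)
    log.setLevel(logging.DEBUG)

    url = "https://api.kraken.com/0/private/Balance"

    # Eager (what the module does now): the string is built even if the
    # logger is disabled.
    #   log.error("return_json: HTTPError for url %s: %s" % (url, "timeout"))

    # Lazy (preferred): the logger stores the format string and args, and
    # only interpolates when the record is handled or formatted.
    log.error("return_json: HTTPError for url %s: %s", url, "timeout")

    print(handler.records[0].getMessage())

If the logger's level suppresses the call, the lazy form never pays the cost of building the message at all.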
{ "domain": "codereview.stackexchange", "id": 23884, "tags": "python, python-3.x, rest, framework, cryptocurrency" }
Assuming Wave Function Collapse Actually Exists, Can a Wavefunction Collapse into Another Wavefunction that is not the Delta Function
Question: Suppose we have a time-independent potential and suppose $\psi_1(x)$ and $\psi_2(x)$ are two stationary states of the potential with energies $E_1$ and $E_2$. Suppose the wavefunction is $$ \Psi(x,t) = \frac{1}{\sqrt{2}} \bigg(\psi_1 e^{-iE_1 t/\hbar} + \psi_2 e^{-iE_2 t/\hbar}\bigg) $$

Scenario 1: Suppose at a particular time $t_0$ I measure the position. The theoretical probability I get a position between $a$ and $b$ is $$ \int_a^b |\Psi(x,t_0)|^2 dx $$

Scenario 2: Suppose at a time $t < t_0$ I measure the energy and obtain $E_1$. Then suppose at time $t_0$ I measure the position. What would be the theoretical probability that I get a position between $a$ and $b$? Will it be $$ \int_a^b |\Psi(x,t_0)|^2 dx $$ once again, or will it be $$ \int_a^b |\psi_1(x)|^2 dx? $$

Answer: According to the Copenhagen interpretation of quantum mechanics, measuring energy $E_1$ can be thought of as a projection/collapse of your initial superposition state onto the energy eigenstate $\psi_1$, caused by the measurement. So if you measure eigenvalue $E_1$ at $t<t_0$, then your results for measurements at $t\geq t_0$ will be given by $$\int^b_a |\psi_1(x)|^2dx.$$

The collapse occurs to one of the eigenstates of the operator associated with the property that you are measuring. If you are measuring energy, your state will be projected onto an energy eigenstate. If you measure spin, your system is projected onto a spin eigenstate, and if you measure position your system is projected onto a position eigenstate. The projection to a Dirac delta function is thus associated with the "spectrum" of the position operator and measurements of the position of a particle.
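None of this is in the original answer, but the difference between the two scenarios is easy to see numerically. The sketch below assumes a concrete system not specified in the question — an infinite square well of width 1, so $\psi_n(x)=\sqrt{2}\sin(n\pi x)$, with units where $\hbar = 1$ and particle mass such that $E_n = n^2\pi^2/2$. Before the energy measurement, the probability in $[a,b]$ oscillates in time through the interference term; after collapse onto $\psi_1$ it is the time-independent $\int_a^b|\psi_1|^2\,dx$:

    import numpy as np

    # Infinite square well of width 1, hbar = 1 (illustrative choice,
    # not part of the original problem statement).
    x = np.linspace(0.0, 1.0, 20001)
    dx = x[1] - x[0]
    psi1 = np.sqrt(2.0) * np.sin(np.pi * x)
    psi2 = np.sqrt(2.0) * np.sin(2.0 * np.pi * x)
    E1, E2 = np.pi**2 / 2.0, 2.0 * np.pi**2

    def prob_superposition(a, b, t):
        """Scenario 1: P(a < x < b) for the equal superposition at time t.
        |Psi|^2 = (psi1^2 + psi2^2)/2 + psi1*psi2*cos((E2 - E1) t)."""
        density = 0.5 * (psi1**2 + psi2**2) + psi1 * psi2 * np.cos((E2 - E1) * t)
        mask = (x >= a) & (x <= b)
        return np.sum(density[mask]) * dx

    def prob_collapsed(a, b):
        """Scenario 2: after measuring E1, P(a < x < b) = integral of |psi1|^2."""
        mask = (x >= a) & (x <= b)
        return np.sum(psi1[mask]**2) * dx

    # The collapsed probability is time independent ...
    print(prob_collapsed(0.0, 0.5))
    # ... while the superposition probability oscillates between
    # 0.5 +/- 4/(3 pi) as cos((E2 - E1) t) swings between +1 and -1:
    print(prob_superposition(0.0, 0.5, 0.0),
          prob_superposition(0.0, 0.5, np.pi / (E2 - E1)))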
{ "domain": "physics.stackexchange", "id": 92487, "tags": "quantum-mechanics, wavefunction, measurements, wavefunction-collapse" }
Can we combine the square roots inside the definition of the fidelity?
Question: The (Uhlmann-Jozsa) fidelity of quantum states $\rho$ and $\sigma$ is defined to be $$F(\rho, \sigma) := \left(\mathrm{tr} \left[\sqrt{\sqrt{\rho} \sigma \sqrt{\rho}} \right]\right)^2.$$ However, as discussed here, the cyclical property of the trace extends to arbitrary analytic functions: $$\mathrm{tr}[f(AB)] \equiv \mathrm{tr}[f(BA)]$$ for any analytic function $f$ whenever either side is well-defined. Letting $f$ be the square root function, this seems to imply that $$F(\rho, \sigma) \equiv \big(\mathrm{tr} \left[\sqrt{\rho \sigma} \right]\big)^2,$$ which is much easier to deal with. (I don't think the branch point at the origin of the square root function is an issue, because the function is still continuous there.) Am I correct that these two expressions are equivalent? If so, is there any reason why the much clunkier former expression with nested square roots is always given? The only benefit of the original definition that I can see is that it makes it clear that the operator inside the trace is Hermitian and positive-semidefinite, so that the resulting fidelity is a non-negative real number.

Related on physics.SE

Answer: Two caveats:

The square root is not differentiable at 0, so that cyclic property cannot be applied directly.

While $\rho\sigma$ has the same non-negative eigenvalues as $\sqrt\rho\sigma\sqrt\rho$, it's not self-adjoint. Non-self-adjoint matrices are not diagonalizable in general, so the square root $\sqrt{\rho\sigma}$ can fail to be well-defined (see edit below). In any case, $\text{Tr}(\sqrt{\sqrt\rho\sigma\sqrt\rho})$ is equal to the sum of square roots of the eigenvalues of $\rho\sigma$.

EDIT It turns out that $\rho\sigma$ is always diagonalizable https://www.sciencedirect.com/science/article/pii/002437959190239S So taking the principal square root of it is a correct operation, and it is indeed possible to write this shorter formula. Though this is not very well known and not conventional, since $\rho\sigma$ is not self-adjoint.
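This is not part of the original answer, but the agreement of the two formulas is easy to check numerically. The sketch below (my own illustration; the function names are made up) evaluates both expressions for random density matrices via eigendecompositions, using the fact that $\rho\sigma$ is diagonalisable with non-negative spectrum:

    import numpy as np

    rng = np.random.default_rng(0)

    def random_density_matrix(d):
        """Random full-rank density matrix: A A-dagger, normalised to unit trace."""
        a = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
        rho = a @ a.conj().T
        return rho / np.trace(rho).real

    def fidelity_nested(rho, sigma):
        """(Tr sqrt(sqrt(rho) sigma sqrt(rho)))^2 via Hermitian eigendecompositions."""
        w, v = np.linalg.eigh(rho)
        sqrt_rho = (v * np.sqrt(np.clip(w, 0.0, None))) @ v.conj().T
        m = sqrt_rho @ sigma @ sqrt_rho                  # Hermitian, PSD
        ev = np.linalg.eigvalsh(m)
        return np.sum(np.sqrt(np.clip(ev, 0.0, None))) ** 2

    def fidelity_short(rho, sigma):
        """(Tr sqrt(rho sigma))^2: rho sigma is diagonalisable with non-negative
        real eigenvalues, so Tr of its principal square root is the sum of the
        square roots of its spectrum."""
        ev = np.linalg.eigvals(rho @ sigma)
        return np.sum(np.sqrt(np.clip(ev.real, 0.0, None))) ** 2

    rho, sigma = random_density_matrix(4), random_density_matrix(4)
    print(abs(fidelity_nested(rho, sigma) - fidelity_short(rho, sigma)))

The difference comes out at the level of floating-point noise, consistent with the two expressions being mathematically equal.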
{ "domain": "quantumcomputing.stackexchange", "id": 5184, "tags": "fidelity, information-theory" }
GMAPPING: How to mark 'out of range' laserscan as free-space
Question: I'm using a SICK TIM 300 laser scanner with the gmapping node to create a map. But whenever the laser scanner cannot see an obstacle at a certain angle (because the obstacles are out of range), gmapping marks these areas as 'unknown'. But the laser scanner not seeing anything at a certain angle means there is no obstacle within the measuring range (in my case: 4 m). As a result, gmapping should mark these areas as free. How can I solve this problem?

I'm using Ubuntu 12.04 with ROS Hydro. The laser scanner reports these out-of-range areas as 'distance = 0 m'; the header of the /scan message says the minimum distance is 0.05 m.

Thanks for your help.

Originally posted by Alex1209 on ROS Answers with karma: 3 on 2015-12-17
Post score: 0

Answer: If you take a look at the gmapping page, there are two parameters called maxUrange and maxRange. I quote: "If regions with no obstacles within the range of the sensor should appear as free space in the map, set maxUrange < maximum range of the real sensor <= maxRange". So, if you want that region to appear as free space, you have to set your maxUrange below the maximum range of the real sensor (which is the maxRange parameter).

Originally posted by S.Prieto with karma: 258 on 2015-12-17
This answer was ACCEPTED on the original site
Post score: 1

Original comments
Comment by Alex1209 on 2015-12-17: Thank you very much :D It's that easy. Now the map looks way better.
Comment by aarontan on 2018-05-19: what is the unit of maxUrange and max range?
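Not part of the original answer, but for concreteness, a launch-file fragment along these lines (node name and exact values are illustrative; topic remappings and other gmapping parameters are omitted) would apply the quoted rule for a 4 m scanner:

    <launch>
      <node pkg="gmapping" type="slam_gmapping" name="slam_gmapping">
        <!-- Real sensor range of the scanner is 4 m: set maxUrange below it
             and maxRange at (or above) it, so beams that return nothing
             clear free space in the map instead of leaving it unknown. -->
        <param name="maxUrange" value="3.9"/>
        <param name="maxRange"  value="4.0"/>
      </node>
    </launch>

Both parameters are in meters (answering the later comment), matching the units of the ranges in the /scan message.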
{ "domain": "robotics.stackexchange", "id": 23244, "tags": "ros, slam, navigation, gmapping, sick" }
using dynamic_reconfigure forces a rebuild of the package
Question: I'm using Ubuntu 12.04 with Fuerte. I noticed that if my packages use dynamic_reconfigure, the config header is generated every time I call make. So the whole package is rebuilt, even though no file was changed. Did anyone else observe this problem? Is there a solution? Thanks

Originally posted by MichaelKorn on ROS Answers with karma: 1480 on 2012-12-19
Post score: 2

Original comments
Comment by Lorenz on 2012-12-19: I think that's a pretty old issue that no one fixed so far. I'm not sure if there's a ticket already though and I don't know who's maintaining dynamic_reconfigure at the moment. To fix it, you probably just need to fix the dynamic_reconfigure cmake target.

Answer: This was a bug in the recent dynamic_reconfigure version and should be fixed with 1.5.29. Basically the command failed to generate the docs in the correct location and therefore always regenerated the headers/Python files.

Originally posted by Dirk Thomas with karma: 16276 on 2012-12-19
This answer was ACCEPTED on the original site
Post score: 3

Original comments
Comment by MichaelKorn on 2012-12-20: How can I update dynamic_reconfigure to 1.5.29? I'm using the ROS Ubuntu repository.
Comment by Dirk Thomas on 2012-12-21: That version is currently only available for Groovy but the fix might be backported to Fuerte soon.
Comment by klapow on 2012-12-21: 1.5.29 is in Groovy but I am working on back-porting the fix. Should be available in a day or so.
Comment by MichaelKorn on 2013-01-08: Are there any updates?
{ "domain": "robotics.stackexchange", "id": 12163, "tags": "rosmake, dynamic-reconfigure" }
Quantum entanglement - static or dynamic?
Question: The answer to this question implies that a future change in particle A will cause an immediate related change in particle B. Does this include destruction of the particle by decay or other means (e.g. throwing it into the LHC)?

Answer: Your understanding of entanglement is wrong. "A future change in particle A will cause an immediate related change in particle B" is not true. When one particle changes, the other doesn't. Entanglement is simply non-classical correlations. These correlations were set up in the past - usually by interactions of the involved "particles". In that sense, you could answer your question by saying that entanglement is "static".

Since decay of a particle is a spontaneous process, when one particle is entangled with another and it decays, the other particle doesn't (or if it does, it's a coincidence). If you destroy one half of an entangled pair, the other one remains untouched.

There is a simple test to see whether something could in principle work with entanglement or not: you cannot, under any circumstances, transmit information faster than light. If the forced destruction of one particle caused the immediate destruction of the other, you could use that to transmit information (I send the information that I destroy the particle or I don't). The reason you cannot transmit anything faster than light is that entanglement is "just" correlations; in other words, any "weird behaviour" at later times has to be determined when the particles interacted in the past.
{ "domain": "physics.stackexchange", "id": 46340, "tags": "quantum-entanglement" }
Method of image charges (semisphere on a metal)
Question: I'm currently trying to study ahead for the upcoming semester since I'm on break and I'm stuck on the method of image charges. I've tried watching some YouTube videos on that topic and I thought I understood to some degree and so I tried myself on some exercises in my workbook. Sadly there was only one on that topic since the book only covered the topic briefly. There's a metallic semisphere with radius a lying on an infinitely expanded metal plate, which lies in the x-z-plane. At $X= (R \cos\alpha, R\sin\alpha, 0)$ there is a charge q. (1) Determine with the use of the method of image charges the potential $\phi(x)$ outside the sphere and show that it solves Poisson's equation. (2) Write down the electrostatic potential $\phi(x)$ in the quadrupole approximation with monopole-, dipole- and quadrupole-moment. (3) In relation to the origin compute the monopole-, dipole- and quadrupole-moment of the system with the charges and image charges. To be honest I'm totally lost here. I've tried following the video step by step to try and solve this but I can't seem to do it. I don't even know where I should place the image charge. Should it be mirrored across the z-axis? Or mirrored across the semisphere? My guess would be the first one, but I'm not totally sure since I tried it with that and still didn't know how to proceed from that. And concerning (2) and (3): I'm guessing the Quadrupole approximation is concerning the multipole expansion with monopole, dipole and quadrupole? At least that was the first thing I found while trying to search for the Quadrupole approximation. I know it's rude of me to ask since I can't really provide any work on my part worth considering, but can anyone show me a general approach to this type of problem? I feel really stupid after dealing with this problem. My workbook even says that this is an easy exercise to approach this topic. Edit: @nbubis I tried drawing the configuration of the image charges. 
Is it like this?: About (2) and (3): Should I look at it as two dipoles and then add both potentials of those dipoles to get the potential of the quadrupole? But then again I wouldn't need the quadrupole moment for that, right? I'm guessing they just want a general formula for quadrupole approximation and the explicit answer of that in (3) then. So if I were to go the multipole expansion route would I just need to compute the monopole-, dipole- and quadrupole moment and add them up to get the potential? And truth be told I'm still kinda lost on the terminology in the multipole expansion provided in Wikipedia. It's hard for me to understand that without seeing an example where the multipole expansion is applied. Answer: First understand the method of image charges. The idea behind the method is to bypass actually solving the differential equation with boundary conditions, and instead "cheat" by guessing the correct solution. To this end, we find a configuration of imaginary charges that together with the real ones will make the potential on all surfaces be what is given. In your case, you have two surfaces, each with constant potential zero. For a plane, if you have a charge $q$ at $(x,y,z)$ and an image charge $-q$ at $(x,-y,z)$, clearly the potential at $y=0$ will be zero. For the sphere, if a charge is at radius $R$ from the center of a sphere with radius $a$, an image charge with charge $-qa/R$ should be placed at radius $a^2/R$, but the same idea remains. Now, first add the image charge for the sphere. Now the potential on the sphere is zero, but on the plane, we still have a gradient. However, if you mirror both charges off the plane, you should see that the potential on both the sphere and the plane is zero. 
Writing this all down, we have: $$\begin{eqnarray} \phi({\bf x}) &=& \frac{q}{\sqrt{(x-R\cos\alpha)^2+(y-R\sin\alpha)^2+z^2}} \\ &-& \frac{q}{\sqrt{(x-R\cos\alpha)^2+(y+R\sin\alpha)^2+z^2}} \\ &-&\frac{qa}{R\sqrt{(x-(a^2/R)\cos\alpha)^2+(y-(a^2/R)\sin\alpha)^2+z^2}} \\ &+&\frac{qa}{R\sqrt{(x-(a^2/R)\cos\alpha)^2+(y+(a^2/R)\sin\alpha)^2+z^2}} \end{eqnarray}$$ Clearly, at $y=0$ we have $\phi=0$. Now, what about on the hemisphere? Points on the hemisphere satisfy $z^2 = a^2 - x^2-y^2$, so substituting leads us to the potential on the hemisphere: $$\begin{eqnarray} \phi({\bf x}_{sphere}) &=& \frac{q}{\sqrt{-2xR\cos\alpha -2yR\sin\alpha + R^2 +a^2}} \\ &-& \frac{q}{\sqrt{-2xR\cos\alpha +2yR\sin\alpha + R^2 +a^2}} \\ &-&\frac{qa}{R\sqrt{-2x(a^2/R)\cos\alpha -2y(a^2/R)\sin\alpha + (a^2/R)^2 + a^2}} \\ &+&\frac{qa}{R\sqrt{-2x(a^2/R)\cos\alpha +2y(a^2/R)\sin\alpha + (a^2/R)^2 + a^2}} \\ &=&0 \end{eqnarray}$$ where the last equality follows by pulling the factor $a/R$ into the square roots, after which the terms cancel pairwise. To write down the multipole expansion, you just need to Taylor-expand the potential in powers of $1/r$, with $r = \sqrt{x^2+y^2+z^2}$. For large $r$, every term has the form $$\frac{1}{\sqrt{r^2+b}}= \frac{1}{r} - \frac{b}{2r^3} + \frac{3b^2}{8r^5} - \cdots$$ You can then use this expansion on the components of the potential to get your result.
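As a numerical sanity check of the four-charge potential above, one can evaluate it at a point on the plane and at a point on the hemisphere. The parameter values below are arbitrary illustration choices, not taken from the original exercise, and the Coulomb prefactor is set to 1:

```python
import math

# Arbitrary illustration values: sphere radius a, charge distance R > a,
# angle alpha, and charge q (Coulomb prefactor set to 1).
a, R, alpha, q = 1.0, 3.0, 0.7, 1.0

def phi(x, y, z):
    # The four charges of the answer: the real charge, its mirror image
    # in the plane y = 0, the Kelvin image in the sphere, and the mirror
    # image of that Kelvin image.
    charges = [
        (+q,         R * math.cos(alpha),            R * math.sin(alpha)),
        (-q,         R * math.cos(alpha),           -R * math.sin(alpha)),
        (-q * a / R, (a * a / R) * math.cos(alpha),  (a * a / R) * math.sin(alpha)),
        (+q * a / R, (a * a / R) * math.cos(alpha), -(a * a / R) * math.sin(alpha)),
    ]
    return sum(qi / math.sqrt((x - xi) ** 2 + (y - yi) ** 2 + z ** 2)
               for qi, xi, yi in charges)

# On the grounded plane y = 0 the potential vanishes:
print(phi(0.4, 0.0, 2.0))

# On the hemisphere x^2 + y^2 + z^2 = a^2 with y > 0 it vanishes too:
theta, beta = 0.5, 1.1  # spherical angles picking a point on the hemisphere
xs = a * math.sin(theta) * math.cos(beta)
ys = a * math.sin(theta) * math.sin(beta)
zs = a * math.cos(theta)
print(phi(xs, ys, zs))
```

Both printed values are zero to machine precision: the real charge and its Kelvin image cancel on the sphere, the mirror pairs cancel on the plane.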
{ "domain": "physics.stackexchange", "id": 28812, "tags": "homework-and-exercises, electrostatics, charge, potential, multipole-expansion" }
Gravitational potential of a nearly spherical body
Question: In my book an equation is stated for the gravitational potential $V(r,\theta)$ of a nearly spherical body, such as the Earth. It says that the equation is derived using Laplace's equation though the derivation is not given and it contains the terms $J_0$, $J_2$, $J_3$, as well as the terms $P_0$, $P_1$, $P_2$: $$V(r,\theta)=-\frac{GM}{r}\Big(J_0P_0-J_1\frac{a}{r}P_1(\cos{\theta})-J_2\frac{a^2}{r^2}P_2(\cos{\theta}) ...\Big)$$ As the derivation is not given, it is not clear to me at all what the $J_i$ and $P_i$ terms stand for. It says that the $J_i$ terms represent the distribution of mass, but what does that mean? And what do the $P_i$ terms mean? Answer: The $P_i$ terms are indeed Legendre polynomials. I am no expert on this system, but apparently the $J_0$ term is typically taken to be 1. The $J_1$ term is zero if the center of the coordinate system is the center of mass. The $J_2$ term describes the ellipticity of the object. I found quite a detailed discussion of the "nearly spherical mass potential" here: http://www-gpsg.mit.edu/12.201_12.501/BOOK/chapter2.pdf and the good news is the author does appear to be somewhat of an expert on the matter, and includes some derivations.
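To make the symbols concrete, here is a small numerical sketch of the truncated expansion with $J_0 = 1$ and $J_1 = 0$ and only the $J_2$ term kept. The Legendre polynomials are written out explicitly; the values of GM, a and J2 are illustrative numbers roughly matching Earth, not taken from the book in question:

```python
import math

# Explicit low-order Legendre polynomials, evaluated at u = cos(theta):
def P0(u): return 1.0
def P1(u): return u
def P2(u): return 0.5 * (3.0 * u * u - 1.0)

# Truncated potential with J0 = 1 and J1 = 0 (origin at the centre of
# mass).  GM, a, J2 are illustrative values roughly matching Earth.
GM, a, J2 = 3.986e14, 6.378e6, 1.08263e-3   # SI units

def V(r, theta):
    u = math.cos(theta)
    return -(GM / r) * (P0(u) - J2 * (a / r) ** 2 * P2(u))

# The J2 (oblateness) term makes the potential direction-dependent:
r = 7.0e6  # metres, just above the surface
print(V(r, 0.0), V(r, math.pi / 2))  # pole vs. equator differ slightly
```

Since $P_2(\cos\theta)$ is $1$ at the pole and $-1/2$ at the equator, the equatorial potential comes out slightly more negative, which is the direction-dependence the $J_2$ "distribution of mass" term encodes.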
{ "domain": "physics.stackexchange", "id": 34896, "tags": "newtonian-gravity, terminology, potential, planets" }
Search for a value in tuple of lists/vectors
Question: I guess there is some smarter way to implement such a simple piece of code, but I couldn't figure out another solution by myself. Could it be done recursively without some sort of if's? import Data.Vector (Vector) import qualified Data.Vector type SeqV = (Vector Integer, Vector Integer, Vector Integer) seqv :: SeqV seqv = (Data.Vector.fromList[1..5], Data.Vector.fromList[10..15], Data.Vector.fromList[20..25]) -- | searches for n in SeqV from left to right -- returns first found value find :: Integer -> SeqV -> Maybe Integer find n (a,b,c) = case f a of Just v -> Just v Nothing -> case f b of Just v -> Just v Nothing -> f c where f v = let f = Data.Vector.filter (==n) v in if Data.Vector.length f > 0 then Just (Data.Vector.head f) else Nothing Answer: find :: Integer -> SeqV -> Maybe Integer find n (a,b,c) | elem n a || elem n b || elem n c = Just n find _ _ = Nothing There's also (<|>) :: Maybe a -> Maybe a -> Maybe a and asum :: [Maybe a] -> Maybe a, but if you simplified your question and find above doesn't unsimplify, post the whole code and there might be an unreasonably short answer again. I also have the feeling wherever you use find could be simplified further since find only really returns a boolean.
{ "domain": "codereview.stackexchange", "id": 27596, "tags": "haskell" }
Publisher publishing message only on state/msg change
Question: Hi, Is it possible to publish a ROS 2 message only on state change? I was wondering if there is any flag that can be set while creating a publisher, so that it will automatically handle publishing only on state change. I can always save the previously published message locally and compare the two before publishing, but I was just wondering if there is a better way. Thanks Originally posted by BhanuKiran.Chaluvadi on ROS Answers with karma: 241 on 2021-11-02 Post score: 0 Answer: No, there is no built-in way to do that. You will need to compare the messages yourself. Originally posted by Geoff with karma: 4203 on 2021-11-02 This answer was ACCEPTED on the original site Post score: 1
{ "domain": "robotics.stackexchange", "id": 37078, "tags": "ros2" }
Why are ropeway pillars tilted?
Question: While skiing I have noticed that ropeway pillars are usually tilted to be perpendicular to the slope (fig.1). If gravity is pulling straight down, why aren't they vertical, since they are supposed to support the ropeway's weight? Is there something more they "do"? Also, there are pillars whose function is not to support weight but to "push" the rope down in order to keep it taut. Why is that so crucial? Why does the rope need to be taut? fig.1 Answer: The cable can only put a force on the mast that is perpendicular to itself: It runs over rollers which do not allow it to transfer any force in the direction of the cable. As such, the force on the mast is exactly given by the angle between the oncoming and the outgoing cable. This force is roughly perpendicular to the slope of the cable (assuming a small deflection of the cable by the mast), especially if the cable has several supports in a roughly straight line. The mast is set up in such a way that the sum of this perpendicular force and its own weight is roughly in the direction of the mast itself. And since the weight of the mast is by far the smaller force of the two, you see the strong diagonal setup. Of course, the cabins pull the cable straight down, even though the masts only take a force perpendicular to the cable. The difference is taken by the cable itself, which has a much higher tension at the upper end of the ropeway than at the lower end due to this. As to why you need so much tension on the cable: The lower the tension, the greater the slack of the cable between the masts, and the more masts you need. It's generally just much cheaper to build two very well-anchored stations and use a high tension than to build more masts. And that assumes that the additional masts can actually be built at sensible costs, which may not even be the case. Also, the higher the tension, the larger the designed, directional forces of the cable relative to the forces introduced by wind and seat-rocking passengers. 
And thus the more securely the cable sits on the rollers.
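The "force set by the deflection angle" argument can be sketched numerically: sum the two tension vectors pulling away from the mast along the cable, and compare with the resultant $2T\sin(\Delta/2)$ along the bisector. Tension and angle below are invented illustration values:

```python
import math

# Back-of-envelope check: the cable loads the mast perpendicular to
# itself, with magnitude set by the deflection angle.  Numbers invented.
T = 100e3                      # cable tension, N
deflection = math.radians(10)  # total change of cable direction at the mast
half = deflection / 2

# Unit vectors pointing away from the mast top along the two spans;
# for a supporting tower both dip below horizontal by `half`:
u1 = (-math.cos(half), -math.sin(half))
u2 = ( math.cos(half), -math.sin(half))

# Net cable force on the mast = vector sum of the two tension pulls:
Fx = T * (u1[0] + u2[0])
Fy = T * (u1[1] + u2[1])
F = math.hypot(Fx, Fy)

print(Fx)                         # 0: no net force along the cable direction
print(F, 2 * T * math.sin(half))  # both ~17.4 kN for these numbers
```

Note how weakly the load depends on tension for small angles: halving the deflection halves the roller load even at the same 100 kN of tension, which is the economics behind high tension and few masts.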
{ "domain": "physics.stackexchange", "id": 54522, "tags": "newtonian-mechanics, statics" }
Clearly expression in electric field strength
Question: For a uniformly charged disk the field strength on the z axis (the z axis passes through the center of the disk) is equal to $ε(z)=(\frac{2KQ}{a^2}) (1-\frac{z}{\sqrt{a^2+z^2}})$ In case $z\gg a$, $ε(z)= \frac{2KQ}{a^2}(1-(1-\frac{a^2}{2 z^2}))$ I can't understand why $(\frac{1}{\sqrt{1+\frac{a^2}{z^2}}})=(1-\frac{a^2}{2 z^2})$ a is the radius of the disk, K is the Coulomb constant Answer: Hint: a Taylor expansion of the function $\frac{1}{\sqrt{1+x}}$ around $x=0$ can be used, if you realize $z \gg a$ makes this viable... (Also, there are still brackets missing etc. in your question... and I fixed my own LaTeX now.)
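A quick numerical check of the hint, with illustrative values $a = 1$, $z = 20$: truncating the Taylor series of $1/\sqrt{1+x}$ at first order reproduces the exact value to order $x^2$, and the disk field correspondingly approaches the point-charge field $KQ/z^2$:

```python
import math

a, z = 1.0, 20.0       # illustrative values with z >> a
x = (a / z) ** 2       # the small parameter a^2/z^2

exact  = 1.0 / math.sqrt(1.0 + x)
approx = 1.0 - x / 2.0       # first-order Taylor expansion
print(exact - approx)        # residual error is of order x^2

# The same truncation turns the disk field into the point-charge field
# (in units with K = Q = 1):
eps_exact = (2.0 / a**2) * (1.0 - z / math.sqrt(a**2 + z**2))
eps_far   = 1.0 / z**2       # K Q / z^2
print(eps_exact / eps_far)   # close to 1 for z >> a
```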
{ "domain": "physics.stackexchange", "id": 90140, "tags": "electrostatics" }
Why does the ratio between deBroglie wavelength and average separation the size of quantum-mechanical effects?
Question: In the ideal gas model of statistical mechanics one very often assumes the gas is dilute so the average separation between particles is large and hence their mutual interaction is correspondingly small. In addition, if the gas is sufficiently dilute, the average separation between its particles is much larger than the de Broglie wavelength of a particle. In this case, quantum-mechanical effects are of negligible importance and it is permissible to treat the molecules as distinguishable particles moving along classical trajectories (classical approximation). I am confused about the shaded statement: why do we judge the quantum-mechanical effect in terms of the wavelength? I mean, if the observable processes are of no disturbance then the particle's dynamics is governed by classical mechanics, so is there anything I am missing here? Answer: There are two quantum regimes that one might want to check: The wave-like behaviour of particles which renders the concept of trajectory and definite positions somewhat fuzzy, leading to interferences which in turn lead to discretised values of the momenta. This particular regime can be checked by ensuring that $\Delta x \Delta p \gg \hbar$, thus avoiding what is sometimes called the saturation of the Heisenberg inequality. If $L$ is the size of the box then for an ideal gas particle $\Delta x = L$. Furthermore, assuming classical fluctuations (i.e. equipartition) we have that $\Delta p \simeq \sqrt{m k_B T}$; this then leads to the condition that $L \sqrt{m k_B T} \gg \hbar$, which is equivalent to $\Lambda \ll L$ where $\Lambda$ is the thermal de Broglie wavelength $\Lambda = h/\sqrt{2\pi m k_B T}$. The fact that indistinguishable particles may need to obey quantum statistics, depending on whether they are likely to be in the same state or not. To determine this regime, you can imagine that you have a system of N particles in an ideal gas. The semi-classical partition function for a single particle is $q = L^3 / \Lambda^3$. 
Keep in mind that $q$ gives roughly an estimate of the number of states accessible to a particle. Now, if you were to insert an additional particle into the system, you want to know how likely it is that it has the same state as one of the particles already in the gas. The probability of it being in the same state as one of the $N$ others is roughly $N/q$. If you require this probability to be much smaller than one, you get $N/q \ll 1$, which is equivalent to $\Lambda \ll (V/N)^{1/3}$, where the right-hand side is the typical distance between any two particles in the gas.
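Plugging in numbers makes both criteria vivid. A sketch for nitrogen at room temperature and atmospheric pressure (standard constants; the conditions are an illustrative choice, not from the original post):

```python
import math

# Both classicality criteria, evaluated for N2 gas at room temperature
# and atmospheric pressure.
h  = 6.626e-34        # Planck constant, J s
kB = 1.381e-23        # Boltzmann constant, J/K
m  = 28 * 1.661e-27   # N2 molecular mass, kg
T  = 300.0            # K
P  = 1.013e5          # Pa

# Thermal de Broglie wavelength, Lambda = h / sqrt(2 pi m kB T):
lam = h / math.sqrt(2 * math.pi * m * kB * T)

# Typical interparticle distance, d = (V/N)^(1/3) = (kB T / P)^(1/3):
d = (kB * T / P) ** (1.0 / 3.0)

print(lam, d, lam / d)  # ~0.02 nm vs ~3 nm: deep in the classical regime
```

The ratio $\Lambda/d$ comes out well below one here, which is why the textbook treats air-like gases classically; only for light particles at low temperature and high density (e.g. electrons in a metal, or cold helium) does the ratio approach one and quantum statistics take over.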
{ "domain": "physics.stackexchange", "id": 37112, "tags": "quantum-mechanics, ideal-gas" }
Find the subarray with the max sum
Question: In an interview I was asked to solve the following problem: find a subarray with max sum I have written a piece of code for the same. I need your help in reviewing this code. package com.ankit.rnd; public class MaxSubArrSum { int largestSum=0; int previousLargestSum=0; public static void main(String[] args) { // int [] array = {-2,1,-3,4,-1,2,1,-5,4}; int [] array = {-5,1,-3,7,-1,2,1,-4,6}; // int [] array = {-2,-3,-4,2}; MaxSubArrSum obj = new MaxSubArrSum(); for(int varindex=0;varindex<array.length;varindex++){ // int sumis =new MaxSubArrSum().findSum(varindex,array); // System.out.println("sumis::" +sumis); obj.splitCurrentArray(varindex,array); } } private void splitCurrentArray(int in, int[] arr) { int [] tempArr = new int[arr.length-in]; for(int i=in;i<arr.length;i++){ if(in ==0){ tempArr[i] = arr[i]; } else{ tempArr[i-in] = arr[i]; } } int sum =findSum(in, tempArr); System.out.println("Previous Largest Sum::" + previousLargestSum); System.out.println("Largest Sum found:" + sum); } @SuppressWarnings("unused") private int findSum(int start,int [] array) { int[] currentArray ={}; int [] largestArray = new int[array.length]; int sum=0; /*for(int i=start;i<array.length;i++){*/ for(int i=0;i<array.length;i++){ //a little inefficient here as it always create an array with size more than total number of elements that should be there in the temp array. int psuedoIndex=i; if(start==0){ currentArray = new int[i + 1]; for (int j = 0; j <= i; j++) { currentArray[j] = array[j]; psuedoIndex=psuedoIndex+1; } } else { currentArray = new int[i+1]; for (int j = 0; j <= i; j++) { currentArray[j] = array[j]; /* * if(psuedoIndex == array.length){ //needs a fix. as we * have reached the end of the array. //currentArray[j] = * array[psuedoIndex-1]; currentArray[j] = 0; break; } else{ * * currentArray[j] = array[psuedoIndex]; is commented out * because it missed the element in the previous array. 
* * * //currentArray[j] = array[psuedoIndex]; currentArray[j] = * array[j]; } */ psuedoIndex=psuedoIndex+1; } } if((sum = calculate(currentArray))>largestSum){ previousLargestSum=largestSum; largestSum=sum; for(int k=0;k<currentArray.length;k++){ System.out.print(currentArray[k] + "|"); } System.out.println(""); } } return largestSum; } private int calculate(int [] currentArr){ int sumOfElements =0; for(int index=0;index<currentArr.length;index++){ sumOfElements +=currentArr[index]; } //System.out.println("sum is:" + sumOfElements); return sumOfElements; } } Answer: Reading and debugging your code and two other answers pops up. (And a third one popped up while writing this answer) There's a whole lot to say about your code. I will provide comments about your existing code and also comment about a different way to think about the problem. You have a whole lot of commented code, and you don't write why it has been commented. Either way, I would recommend removing all your commented code before putting it up for review. The //needs a fix comments here almost made me close the question for not being working code. Your code works (even though it is inefficient) so remove all such commented code: /* * if(psuedoIndex == array.length){ //needs a fix. as we * have reached the end of the array. //currentArray[j] = * array[psuedoIndex-1]; currentArray[j] = 0; break; } else{ if(in ==0){ tempArr[i] = arr[i]; } else{ tempArr[i-in] = arr[i]; } There is no need to treat in ==0 as a special case here. Your else can take care of that as well. Replace all this with: tempArr[i-in] = arr[i]; currentArray = new int[i + 1]; currentArray = new int[i+1]; Please use consistent spacing. And this applies to all your code. I recommend using spaces around + signs and such. Compare for (int j = 0; j <= i; j++) { vs. for(int i=0;i<array.length;i++){ I prefer the first version. No need to make code too compact. Itwillonlymakeithardertoread (right?). 
if((sum = calculate(currentArray))>largestSum){ Sure, this works, but I think it would be more readable split across two lines. int sum = calculate(currentArray); if (sum > largestSum) { private int calculate(int [] currentArr){ Non-optimal spacing in both the method declaration and method body. Unclear method name. calculate what exactly? Rename it to calculateSum, please. Perfect use-case for a for-each loop. Here's what I would do: private int calculate(int[] currentArr) { int sumOfElements = 0; for (int value : currentArr) { sumOfElements += value; } return sumOfElements; } for(int k=0;k<currentArray.length;k++){ System.out.print(currentArray[k] + "|"); } There is an Arrays.toString method, use it. System.out.print(Arrays.toString(currentArray)); A different approach You're using way too many for-loops for my taste. Consider your starting array: int[] array = {-5,1,-3,7,-1,2,1,-4,6}; It starts with a negative number, and as you want to find the sub-array with the largest sum, there's no need to start counting on a negative number. So, skipping that one we have these left: int[] array = {1,-3,7,-1,2,1,-4,6}; By starting on \$1\$ and looping until you encounter a negative number, you can see that \$1 + -3 = -2\$, which makes this part unnecessary to check. It will not improve your array. The highest sum we can create out of this is simply \$1\$ for now. Skipping the \$1\$ and \$-3\$ and we end up with: int[] array = {7,-1,2,1,-4,6}; Now let's start looping. \$7 + -1 = 6\$ so that's good, let's continue. \$2 + 1 + -4 = -1\$ OK, that ended up being a not so pleasant experience, but considering the \$6\$ we have from before, we end up with \$5\$ so let's continue. \$6\$ now that's good. And now the array has ended. Adding this to our previous result and we end up with \$5 + 6 = 11\$ and we have our final result. 
There's no need to start the original looping at 2, which would only count 2,1,-4,6, as the 7,-1 at the beginning of the array will give a more positive result than starting from this 2. By taking this into consideration, you can increase the speed of your algorithm a lot. The code for this approach can be found and reviewed in my Code Review question. Edit: Another addition. You don't provide a method to retrieve the result without printing it. Your way of returning the result seems to be to simply print it to the console. That's not a good way. You should have provided a method like this: int[] maxSubArraySum(int[] array)
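The single left-to-right pass walked through above is essentially Kadane's algorithm. A compact sketch of it, written in Python rather than Java to keep the logic bare (the function name is my own):

```python
def max_subarray_sum(array):
    """Largest sum over all non-empty contiguous subarrays (Kadane)."""
    best = current = array[0]
    for value in array[1:]:
        # Either extend the running subarray or restart it at `value`,
        # whichever is larger; track the best sum seen so far.
        current = max(value, current + value)
        best = max(best, current)
    return best

print(max_subarray_sum([-5, 1, -3, 7, -1, 2, 1, -4, 6]))   # 11
print(max_subarray_sum([-2, 1, -3, 4, -1, 2, 1, -5, 4]))   # 6
```

This runs in a single O(n) pass with no temporary arrays, and unlike the original code (which initialises largestSum to 0) it also handles all-negative inputs by returning the largest single element.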
{ "domain": "codereview.stackexchange", "id": 7835, "tags": "java, array, interview-questions" }
What type of control system is this? open loop?
Question: Suppose I have a liquid in a container; the type of liquid is such that it expands when its temperature rises. The container is marked with a scale to measure the height of the expanded liquid. There is a controller that maintains the temperature at its desired value. A temperature sensor feeds the temperature value back to generate an error signal into the controller. But the main purpose of the system is to maintain the height, which does not have any feedback sensor. The only way to find out is through the scale reading, which is done by a human observer. To maintain the height, the observer changes the set point temperature. So first, what is the actual name of this kind of system, an open loop system for height control? Second, is this block diagram correct in control theory? Answer: This system can be considered "closed loop," since the control input is being determined by some sort of feedback loop (even though the mathematical expression of the feedback loop through the operator is unknown). Also, yes the diagram you've shown is valid. I often work with control systems that have a human operator in the control loop, such as robotic teleoperation systems, and they are often shown as some black box operating on a feedback signal. However, it would be difficult to apply any sort of control theory to your system without some sort of model of how the human operator reacts to the height of the liquid. Often you can make an assumption, like maybe the operator behaves like a proportional controller with a bit of noise (you would need to have solid justifications for any assumptions you make). Either way, in this case if you want to control the height of the fluid with any amount of precision, replace that operator with an ultrasonic rangefinder (or some other sensor) plus some variation on a PID controller.
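To make that last suggestion concrete, here is a toy closed-loop simulation: a PI controller reads the liquid height (as the rangefinder would, replacing the observer) and adjusts the commanded temperature. The first-order thermal model, the linear expansion law, the gains and all numbers are invented purely for illustration:

```python
# Toy closed loop: a PI controller reads the liquid height and drives
# the temperature toward whatever value holds the height setpoint.
dt, tau = 0.1, 5.0        # time step (s), thermal time constant (s)
T, T0 = 20.0, 20.0        # current and reference temperature (deg C)
h0, c = 10.0, 0.05        # height at T0 (cm), expansion gain (cm per deg C)
h_set = 10.5              # desired height (cm); here it needs T = 30 deg C

Kp, Ki, integral = 50.0, 5.0, 0.0
for _ in range(2000):     # 200 s of simulated time
    h = h0 + c * (T - T0)                  # the "scale reading"
    error = h_set - h
    integral += error * dt
    u = T0 + Kp * error + Ki * integral    # commanded temperature
    T += dt * (u - T) / tau                # first-order thermal response

print(h0 + c * (T - T0))  # settles at the 10.5 cm setpoint
```

The integral term is what removes the steady-state error: a purely proportional "operator" would leave the height slightly off its reference forever.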
{ "domain": "engineering.stackexchange", "id": 1173, "tags": "control-engineering, control-theory, temperature, pid-control, feedback-loop" }
Null check when enabling Up navigation in Android
Question: This upvoted SO answer and this upvoted SO answer both check for null in a similar way: they call a function in the if or assert statement and "throw away" the result and then call the same method again in the success clause and operate on the result. I am new to Android development and have limited knowledge of Java, but my experience in other languages (and every bone in my body) tells me it would be better to store the result of the function in a local variable, test that in the if statement and then use that local variable value in the success clause like this: ActionBar actionBar = getSupportActionBar(); if (actionBar != null) { actionBar.setDisplayHomeAsUpEnabled(true); } Am I missing something (because the answers on SO are upvoted and no-one has suggested this alternative); is my suggestion above flawed in Java? Answer: Assuming Java conventions are being followed, methods that begin with get should return an already-computed value with no business logic performed. In that case, the only real 'cost' tradeoff is the overhead of a second method call vs. the memory for a new pointer to the value. Both should be insignificant. It's generally a matter of style as to which approach is preferable.
{ "domain": "codereview.stackexchange", "id": 16339, "tags": "java, android, null" }
What are Connection Forms in General Relativity?
Question: I'm trying to follow an article by H. Ellis (1973), where he developed the first ever metric of a traversable Wormhole (more info here). In pages 105-106 (the end of the 3rd page in the linked file above, and the start of the 4th) he calculates the "connection forms" $\omega{}_\kappa{}^\mu$. (Warning: This is an old paper, with different notations than are used today in GR.) What is the relation between the Connection Forms $\omega{}_\kappa{}^\mu$ and the more familiar Affine Connection $\Gamma^{\sigma}_{\mu\nu}$? I know that there is a Wikipedia article about connection Forms, but it's written very technically, and using a notation more commonly used by mathematicians than by physicists. P.S. Is there anything here that relates to the Cartan formalism of GR? I just finished my GR course and we didn't have enough time to learn it properly :( Answer: Short version: ${\omega_\kappa}^\mu$ is a matrix of (dual) vectors, and it is basically identical to the Christoffel symbol, via each such vector being \begin{equation} {\omega_\kappa}^\mu = ({\Gamma^\mu}_{\kappa 0}, {\Gamma^\mu}_{\kappa 1}, \ldots) \end{equation} In that paper in particular, they use the dual basis $\{ \omega^\nu \}$, so that \begin{equation} {\omega_\kappa}^\mu = {\Gamma^\mu}_{\kappa 0} \omega^0 + {\Gamma^\mu}_{\kappa 1} \omega^1 + \ldots \end{equation} Although please note, that basis may not be a coordinate basis, so it should be more exactly \begin{equation} {\omega_\kappa}^\mu = {\Gamma^\mu}_{\kappa 0} e^0_{\nu}\omega^\nu + {\Gamma^\mu}_{\kappa 1} e^1_{\nu} \omega^\nu + \ldots \end{equation} Long version: The covariant derivative can be seen as a map from the tangent bundle to the product of the tangent bundle with the exterior bundle of $1$-forms, \begin{eqnarray} \nabla : TM &\to& TM \otimes \Omega^1 M\\ X &\mapsto& \nabla X \end{eqnarray} That way, we can define the covariant derivative along a vector, $\nabla_Y X$, as the application of the $1$-form part of $\nabla X$ to $Y$, i.e. 
$[\nabla X](Y) = \nabla_Y X$. Under this form, we can in particular define, given a function $f$ and a vector field $v$, \begin{eqnarray} \nabla(fv) = df \otimes v + f \nabla v \end{eqnarray} with the application \begin{eqnarray} [\nabla(fv)](Y) = df(Y) \otimes v + f [\nabla v] (Y) \end{eqnarray} We can always decompose the vector on some frame field $\{ e_\mu \}$, so that \begin{eqnarray} v = v^\mu e_\mu \end{eqnarray} The components are then just a set of four scalar functions. We then have \begin{eqnarray} \nabla(v^\mu e_\mu) = dv^\mu \otimes e_\mu + v^\mu \nabla e_\mu \end{eqnarray} $\nabla e_\mu$ is once again a covariant derivative, so it will be equal to some object in $TM \otimes \Omega^1 M$. That object is the connection form. The vector part of this object can itself be decomposed in components \begin{eqnarray} \nabla e_\mu = \omega_\mu^\nu e_\nu \end{eqnarray} Each component here $\omega_\mu^\nu$ is a $1$-form. It can be decomposed further in a slightly more obvious vector/$1$-form decomposition by considering the dual basis $\theta^\mu$, which is the basis of $1$-forms such that $\theta^\mu(e_\nu) = \delta^\mu_\nu$ : \begin{eqnarray} \omega_\mu^\nu e_\nu &=& (\omega_\mu^\nu)_\sigma e_\nu \otimes \theta^\sigma \end{eqnarray} In that decomposition, this is basically the Christoffel symbol. 
Just apply a vector $Y = Y^\alpha e_\alpha$ and apply linearity and the Leibniz rule to it to see that : \begin{eqnarray} [\nabla(X^\mu e_\mu)](Y^\alpha e_\alpha) &=& dX^\mu(Y^\alpha e_\alpha) \otimes e_\mu + X^\mu [\nabla e_\mu](Y^\alpha e_\alpha)\\ &=& Y^\alpha \left[dX^\mu(e_\alpha) \otimes e_\mu + X^\mu (\omega_\mu^\nu)_\sigma e_\nu \otimes [\theta^\sigma](e_\alpha)\right]\\ &=& Y^\alpha \left[\partial_\alpha X^\mu e_\mu + X^\mu (\omega_\mu^\nu)_\sigma e_\nu \delta^\sigma_\alpha\right]\\ &=& Y^\alpha \left[\partial_\alpha X^\mu e_\mu + X^\mu (\omega_\mu^\nu)_\alpha e_\nu \right] \end{eqnarray} You will recognize the appropriate form of the covariant derivative, so that \begin{eqnarray} (\omega_\mu^\nu)_\alpha = {\Gamma^\nu}_{\mu\alpha} \end{eqnarray} The link between the two is then that for any two components $\mu, \nu$, the connection form is a $1$-form equal to \begin{eqnarray} \omega_\mu^\nu = {\Gamma^\nu}_{\mu\alpha} \theta^\alpha \end{eqnarray}
{ "domain": "physics.stackexchange", "id": 66051, "tags": "general-relativity, differential-geometry, curvature, differentiation, notation" }
prerelease test a package
Question: Hi, I am trying to run a prerelease test on our packages, but I am having some trouble. The links on http://wiki.ros.org/regression_tests which lead you to the "release.yaml rosdistro file" do not work. I am not sure, but is this the file where I have to upload my repo information: https://github.com/ros/rosdistro/blob/master/hydro/distribution.yaml ? Second question, after I add my repo information to that file, how long does it take until I can start the prerelease test on http://prerelease.ros.org/create_job/hydro ? One last question, in the distribution.yaml file all packages/stacks contain that line: release: release/hydro/{package}/{version} What does this line mean? For now I want to test our packages for hydro. Regards, Peter Originally posted by pkohout on ROS Answers with karma: 336 on 2014-09-24 Post score: 1 Answer: Yes, the former release.yaml is now called distribution.yaml. I have updated the wiki page accordingly (http://wiki.ros.org/action/diff/regression_tests?action=diff&rev1=44&rev2=45). Your repository will appear in the prerelease list as soon as the ROS distro cache has been updated (which happens roughly every five minutes). (New jobs on the farm (for building Debian packages, documentation and devel tests) are generated once every day (in the late evening PST/PDT)). The release-line you refer to actually has some context which should make the semantic clearer: release: packages: - catkin tags: release: release/hydro/{package}/{version} url: https://github.com/ros-gbp/catkin-release.git version: 0.5.89-0 The entry refers to the tag in the git repository which identifies the release for each package. The variables surrounded by curly braces are expanded with each package name and the version number to identify the tag under which the source code of that specific package version can be found (including the Debian control files). 
For the above example (which contains only a single package) that would be https://github.com/ros-gbp/catkin-release/tree/release/hydro/catkin/0.5.89-0 Originally posted by Dirk Thomas with karma: 16276 on 2014-09-24 This answer was ACCEPTED on the original site Post score: 1 Original comments Comment by pkohout on 2014-09-25: Thanks for the answers! But on http://wiki.ros.org/bloom/Tutorials/FirstTimeRelease I use the same distribution file to release my repo? Will it start the build automatically after I push my entry into the distribution file? Comment by Dirk Thomas on 2014-10-01: If you use bloom to release your code and let it generate the pull request for you, the build farm will start building that package in the next 24h.
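The curly-brace entry is a plain template that gets expanded with each package name and version to produce the release tag. Mimicking the expansion with Python's str.format for the catkin example above:

```python
# The tags/release entry in distribution.yaml is a template; expanding
# it with the package name and version yields the git tag of a release.
template = "release/hydro/{package}/{version}"
tag = template.format(package="catkin", version="0.5.89-0")
print(tag)  # release/hydro/catkin/0.5.89-0
```

The resulting string is exactly the tag path in the release repository URL quoted above.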
{ "domain": "robotics.stackexchange", "id": 19506, "tags": "ros" }
How bright is Comet PANSTARRS C/2017 S3 now?
Question: The LiveScience article Incredible Hulk? Nah, This Glowing, Green Light in the Night Sky Is a Comet published just hours ago discusses Comet PANSTARRS C/2017 S3 and mentions: bright bursts exploding from its surface twice in close succession — first on June 30, and then again about two weeks later, Sky and Telescope reported. As Hulk himself might say, "Comet flash!" The linked Sky and Telescope article PanSTARRS Comet, Rocked by Outburst, Goes Green is dated July 17, 2018 and offers the tantalizing information that Comet PanSTARRS (C/2017 S3) has erupted again! Now bright enough to see in binoculars, it might become a naked-eye object if it survives until perihelion. Question: How bright is the comet now, and how is it predicted to behave in the next week or two (barring further outbursts)? below: "On July 15, Comet PanSTARRS (C/2017 S3) appeared bright and well condensed with a narrow ion tail." Photo Credit: Michael Jäger. From here. update: The Universe Today's article Catch Comet C/2017 S3 PANSTARRS In Outburst links to Seiichi Yoshida's page for the comet, but there's no recent data! Answer: The solar elongation (angle from the Sun to the object) is about 15 degrees right now and decreasing as the comet moves in towards perihelion (closest approach to the Sun) on August 15. As James K says, this means the comet is lost in the Sun's glare. Anything below 30 degrees solar elongation is tricky to impossible to see. Assuming it survives the close perihelion passage of 0.21 AU, it is predicted to enter the field of view of the SOHO LASCO Sun-observing coronagraphs on August 24: Assuming there is anything left of C/2017 S3, perihelion will occur this month on the 15th at a rather small perihelion distance of 0.21 AU. Being only 2 weeks from perihelion, it is already at a small elongation of 31 degrees on the 1st. 
Most observers will lose sight of it during the first week of August as its elongation drops below 25 degrees on the 4th and 20 degrees on the 6th. If the comet is still a going concern it will be visible in the SOHO LASCO C3 FOV between August 24 and September 13. The comet appears around the 7:30 position, passes to the east (left) of the Sun (halfway between the Sun and edge of FOV) and exits the FOV around the 12:30 position. The above info is from the ALPO Comet Section. Latest LASCO images and movies are available here and many other places.
{ "domain": "astronomy.stackexchange", "id": 3106, "tags": "amateur-observing, comets" }
hector map static map
Question: Hello everyone, I'm new to the forum and to ROS. I'm trying to use hector_slam without it updating the map. The reason is I have loaded a static map via map_server; the map loads correctly but only for a few frames, after which it is replaced by the hector-generated map. I modified HectorMappingRos.cpp by switching p_use_static_map. Now no new map is generated (which is good), only laser scans are shown in RViz, but the static map is still removed a few seconds after it is loaded and displayed. Is there a way to make hector_mapping listen to map_server, to connect to the mapTopic from map_server, or to make it not replace the static map at all? Thank you, Octavio Originally posted by opo on ROS Answers with karma: 16 on 2017-10-17 Post score: 0 Original comments Comment by nrakoski3 on 2019-10-25: Hello Octavio I am also trying to use hector_mapping with a static map for a school project. Do you have a link to your code or could you explain to me the changes you made in hector_mapping? When I broadcast my .tif map with the map_server the process dies. How did you feed hector_mapping the static map in the launch file? Answer: Hello, I figured it out. It was an issue with the YAML configuration file having too much offset for the center. A zoom out revealed the map and I was able to fix the offset. Thank you, Octavio Originally posted by opo with karma: 16 on 2017-10-18 This answer was ACCEPTED on the original site Post score: 0
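For anyone hitting the same issue: the offset in question lives in the origin field of the map_server YAML file. A hypothetical map.yaml sketch (file name and values made up) — a large origin places the map far from where the view is centered, so it can look as though the map vanished:

```yaml
image: my_map.pgm
resolution: 0.05             # meters per pixel
origin: [-50.0, -50.0, 0.0]  # [x, y, yaw] of the lower-left corner; a large
                             # offset here pushes the map out of the default view
occupied_thresh: 0.65
free_thresh: 0.196
negate: 0
```

Zooming out in RViz, as the answer describes, is a quick way to find where a mis-set origin has placed the map.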
{ "domain": "robotics.stackexchange", "id": 29113, "tags": "slam, navigation, mapping, hector-slam, map-server" }
Differences between astronomy, astrophysics and cosmology?
Question: What is the main difference between Astronomy, Astrophysics, and Cosmology? I have the impression that astronomy is a subject that runs parallel to physics but is outside the physics field. This is based on the division of departments present in many universities: the department of physics being separated from the department of astronomy. The difference between cosmology and astrophysics is more obscure to me. I have the impression, though, that cosmology is more concerned with the structure of spacetime and universe models while astrophysics is more concerned with stellar life cycles, physical properties of stars, galaxies etc. Which is the field that has had more activity/scientific breakthroughs recently? In real life, are those classifications even important? Are there many differences between the education process of a future astronomer, astrophysicist or cosmologist? Somebody, please, give examples of famous astrophysicists, cosmologists and astronomers at the present time. Answer: Richard Feynman has nice words about science. It is not bad to read chapter 3 of "Feynman Lectures on Physics". The main point of his lecture is that "there is no strict boundary between different fields of science"; "nature doesn't care what we call its parts!" So, we can't look for a line that divides celestial works into astronomical or astrophysical or cosmological. Although the main keywords of these fields are respectively: "observation of celestial bodies", "machinery of celestial bodies", "evolution of the cosmos as a whole". Surely, they use each other's findings. Recently, cosmology has encountered big questions, really big! Dark matter and dark energy are the most challenging ones. But it is interesting that if there weren't astronomers and their numerical data, cosmology could not progress. Usually the students of astronomy focus on classical mechanics and optical methods to be able to observe celestial bodies.
Astrophysicists rely on general relativity and nuclear physics as two important tools to describe stars or galaxies. But cosmologists are interested in modern theories too, especially string theory. Therefore the courses they take should be related to these subjects.
{ "domain": "physics.stackexchange", "id": 17862, "tags": "cosmology, astronomy, soft-question, astrophysics, terminology" }
How does it make sense that the force of friction is equal to the force applied if an object is in constant motion?
Question: In order for an object to be set in motion, one force must overcome the other, so the forces are not at equilibrium. How is it possible that when an object is moving at a constant speed the force of friction equals the force applied? Answer: Obviously it's not accelerating if the net force is zero. So if it's moving, it has a constant velocity.
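In symbols, the answer is just Newton's second law with zero net force: a = (F_applied - F_friction)/m. A minimal sketch with made-up numbers:

```python
def acceleration(f_applied, f_friction, mass):
    """Newton's second law: a = F_net / m."""
    return (f_applied - f_friction) / mass

# applied force exactly balanced by kinetic friction (hypothetical values):
a = acceleration(f_applied=10.0, f_friction=10.0, mass=2.0)
print(a)  # 0.0 -> no acceleration; an already-moving object keeps its velocity

# if the applied force exceeds friction, the object speeds up instead:
print(acceleration(f_applied=12.0, f_friction=10.0, mass=2.0))  # 1.0 m/s^2
```

Zero acceleration says nothing about the velocity itself — that is exactly why a constantly moving object can have perfectly balanced forces.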
{ "domain": "physics.stackexchange", "id": 38326, "tags": "newtonian-mechanics, forces, friction, free-body-diagram" }
Identity for $f(\infty)+f(-\infty)$ in quantum field theory
Question: In Matthew Schwartz's textbook, Quantum Field Theory and the Standard Model, equation 14.68 on page 266 says the following: $$f(\infty)+f(-\infty)=\lim_{\varepsilon\rightarrow0^+}\varepsilon\int_{-\infty}^{\infty}dtf(t)e^{-\varepsilon|t|}.\tag{14.68}$$ The only constraint stipulated in the text is that $f(\tau)$ must be a smooth function. Can anyone help me to prove this expression? When I try to evaluate the right side by bringing the limit and the $\varepsilon$ inside the integral, I find that the right side is equal to zero. I've tried using Dirac delta function identities to prove this, but I've had no luck. Answer: Make the change of variable $x=\varepsilon t$, then the integral becomes $$\lim_{\varepsilon \rightarrow 0^+} \int_{-\infty}^{+\infty}dxf(x/\varepsilon)e^{-|x|}$$ Then (assuming the mathematicians allow it!) take the limit inside the integral to render the $f(x/\varepsilon)$ into $f(\pm\infty)$ (break the integral into [-$\infty$,0] and [0,$\infty$]) and take it out of the integral. Intuitively, with reference to $f(x/\varepsilon)$, what $\varepsilon$ does is to shrink the function along $x$. Take $\varepsilon$ to zero and what is left is the steady-state values of the function at infinity.
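As a sanity check (not from Schwartz's book), the identity can be verified numerically for a concrete smooth function. Below I take $f(t)=\sigma(t)+e^{-t^2}$ with $\sigma$ the logistic sigmoid, so $f(-\infty)+f(\infty)=0+1=1$; the helper approximates the regulated integral with a trapezoidal sum on a window wide enough that the discarded tails are negligible:

```python
import math

def sigmoid(t):
    # numerically stable logistic function
    if t >= 0:
        return 1.0 / (1.0 + math.exp(-t))
    e = math.exp(t)
    return e / (1.0 + e)

def f(t):
    # f(-inf) + f(+inf) = 0 + 1 = 1, plus a localized Gaussian bump
    return sigmoid(t) + math.exp(-t * t)

def regulated_integral(eps, step=0.05):
    # trapezoidal approximation of  eps * integral of f(t) e^{-eps|t|}  on [-T, T],
    # with T chosen so the discarded tails are ~e^{-12}
    T = 12.0 / eps
    n = int(round(2 * T / step))
    h = 2 * T / n
    total = 0.0
    for i in range(n + 1):
        t = -T + i * h
        w = 0.5 if i in (0, n) else 1.0
        total += w * f(t) * math.exp(-eps * abs(t))
    return eps * total * h

print(regulated_integral(0.01))   # ~1.018: the limit 1 plus an O(eps) correction
print(regulated_integral(0.001))  # ~1.002: closer as eps -> 0
```

The localized bump contributes only an O(eps) correction, which illustrates the intuition in the answer: as eps shrinks, the weight eps·e^{-eps|t|} only "sees" the steady-state values of f at plus and minus infinity.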
{ "domain": "physics.stackexchange", "id": 20731, "tags": "quantum-field-theory" }
Adding a link on page with current URL (including hash) in a querystring
Question: I'm trying to add a link to a page. The link should contain the current URL in a querystring (to be used as a ReturnUrl) and it can contain a hash. Middle-clicking, right-clicking and ctrl+clicking should be supported. For example, middle-clicking will open the link in a new tab with (or without) the querystring. Is this completely off-track, or how can I improve it? <a href="@Url.Action(MVC.Links.ThereAndBackAgain())?ReturnUrl=" onclick="this.href += encodeURIComponent(location.href)"> Answer: I don't think this can possibly be improved. The location hash may be changed by the client after the page is loaded, by clicking on local anchors. This is not visible to ASP.NET, but it is fully visible to client-side JavaScript in the value of location.href. And since you require the complete URL to be included as part of the query string, it must be encoded with encodeURIComponent, which you correctly did.
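Outside the browser, the key step — percent-encoding the full URL, hash included, before embedding it in a query string — can be sketched as follows (hypothetical URL and route; Python's urllib.parse.quote with safe="" mirrors what encodeURIComponent does for the characters involved here):

```python
from urllib.parse import quote

# what location.href might hold after the user clicked a local anchor
current = "https://example.com/page#section-2"

# percent-encode ':', '/', and '#' so the URL survives inside a query string
return_url = quote(current, safe="")
link = "/Links/ThereAndBackAgain?ReturnUrl=" + return_url
print(link)
# /Links/ThereAndBackAgain?ReturnUrl=https%3A%2F%2Fexample.com%2Fpage%23section-2
```

Without the encoding, the raw "#" would terminate the query string and the hash would be lost from the ReturnUrl.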
{ "domain": "codereview.stackexchange", "id": 26742, "tags": "javascript, url, asp.net-mvc" }
image resolution in smartphones and laptops
Question: If I use a smartphone with a high pixels per inch (PPI) value to capture an image with certain dimensions, what happens if this image is displayed on a laptop (larger screen) with: the same PPI value, a lower PPI value, a higher PPI value? Do the dimensions of the image change, or do they stay the same? Is interpolation required when displaying an image on a mobile phone different from the mobile phone that captured it, if both mobile phones have different PPI values? Answer: Based on your comment: I mean that the image is captured using, say, for example, the 5 megapixel camera of the smartphone, and displayed on its screen having a certain dpi value. What happens if the same 5 megapixel image is displayed on another smartphone having a different screen size and a different dpi? If you display an image (taken by whichever device, with whatever megapixel camera) on two different displays with two different PPIs, then it will seem smaller on the high PPI screen and larger on the low PPI screen. This is very easy to see. Let's say the image has 2000V x 3000H pixels, and display this image on a screen with 200 PPI. Then it will take 15 inches horizontally and 10 inches vertically. On the other hand, if you display it on a second device with 100 PPI, then it will take 30 inches horizontally and 20 inches vertically, hence twice the linear size (four times the area). Let's conclude: everything else being the same except the screen PPIs, a given image displays larger on a low PPI screen and smaller on a high PPI screen.
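The arithmetic in the answer generalizes directly: displayed size in inches = pixel count / PPI, independently per axis. A minimal sketch using the answer's numbers:

```python
def display_size_inches(width_px, height_px, ppi):
    # a pixel grid shown 1:1 occupies (pixels / PPI) inches along each axis
    return width_px / ppi, height_px / ppi

# the answer's 3000H x 2000V pixel image on 200 PPI and 100 PPI screens:
print(display_size_inches(3000, 2000, 200))  # (15.0, 10.0) inches
print(display_size_inches(3000, 2000, 100))  # (30.0, 20.0) inches: 2x per axis
```

Halving the PPI doubles each linear dimension, so the on-screen area grows four-fold while the pixel data is unchanged.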
{ "domain": "dsp.stackexchange", "id": 8165, "tags": "image-processing" }
2D Harmonic Oscillator Commutators
Question: So I am given a 2-dimensional harmonic oscillator with $H=H_1+H_2$ where $$H_i=\frac{p_i^2}{2m}+\frac{1}{2}m\omega^2x_i^2$$ Additionally, $$L=x_1p_2-x_2p_1$$ If we define $$A=\frac{1}{2\omega}[H_1-H_2]$$ $$B=\frac{1}{2}L$$ $$C=\frac{-i}{\hbar}[A,B]$$ where $[A,B]$ is the commutator of $A$ with $B$. We are asked for the explicit form of C, but isn't it just $$[H_1-H_2,L] = [H_1,L]-[H_2,L]=0$$ due to the isotropy of space? It just does not make sense that C would be 0, because then the three would not be closed under commutation (which I am supposed to show). Answer: When I compute the commutator explicitly, I don't get $0$. Use the canonical commutation relations \begin{align} [x_j, p_k] = i\hbar I\delta_{jk} \end{align} where $I$ is the identity operator, and recall that the harmonic oscillator components are independent, which means \begin{align} [x_k, x_j] = 0, \qquad [p_i, p_j] = 0 \end{align} to compute: \begin{align} [H_1-H_2, L] &= [H_1, L] - [H_2, L] \\ &= [H_1, x_1p_2 - x_2p_1] - [H_2, x_1p_2 - x_2p_1] \\ &= [H_1, x_1]p_2 -x_2[H_1, p_1] - x_1[H_2, p_2] + [H_2, x_2]p_1 \\ &= \frac{1}{2m}[p_1^2, x_1]p_2 - \frac{1}{2}m\omega^2 x_2[x_1^2, p_1] - \frac{1}{2} m\omega^2 x_1[x_2^2,p_2] + \frac{1}{2m} [p_2^2, x_2]p_1 \\ &= \frac{1}{2m} (-2i\hbar)(p_1p_2 + p_2p_1) - \frac{1}{2}m\omega^2(2i\hbar)(x_2x_1+x_1x_2) \\ &= -\frac{2i\hbar}{m}p_1p_2 - 2im\omega^2\hbar x_1x_2\\ &\neq 0 \end{align}
{ "domain": "physics.stackexchange", "id": 11751, "tags": "quantum-mechanics, homework-and-exercises, commutator" }
Can a single qutrit in superposition be considered entangled?
Question: Often in quantum computing the idea of quantum superposition is introduced well before the concept of entanglement. I suspect this may be because our conception of (classical) computing privileges bits, and hence we also privilege qubits in a Hilbert space of dimension $d=2$. It's easy enough to consider a single qubit in superposition, but transitioning to entanglement requires a plurality of such particles. Or does it? For example, suppose we lived in a world that privileged qudits, with $d=4$; e.g. four-level quantum systems as opposed to two-level qubits. We can think of our system (say, a particle-in-a-box or a harmonic-oscillator or what-have-you); our qudit could be in any superposition of $\{\vert 0\rangle,\vert 1\rangle,\vert 2\rangle,\vert 3\rangle\}$. We can think of a particle in a superposition of $\vert \Psi\rangle=\frac{1}{\sqrt{2}}(\vert 0\rangle\pm\vert 3\rangle)$, or $\vert\Phi\rangle=\frac{1}{\sqrt{2}}(\vert 1\rangle\pm\vert 2\rangle)$. Now if we envision our (single) qudit instead as two virtual qubits, with a mapping/isomorphism such as: $$\vert 0\rangle_{qudit}=\vert 00\rangle_{qubit}$$ $$\vert 1\rangle_{qudit}=\vert 01\rangle_{qubit}$$ $$\vert 2\rangle_{qudit}=\vert 10\rangle_{qubit}$$ $$\vert 3\rangle_{qudit}=\vert 11\rangle_{qubit},$$ then we can see that both $\vert \Psi\rangle$ and $\vert \Phi\rangle$ are the Bell states, e.g. are entangled. This works nicely for $d=4$ or any other power of $2$. But would it work for any other dimension, such as $d=3$? Can we decompose a qutrit that is in superposition into smaller components, and ask whether the qutrit thusly is in some sense entangled? Answer: To talk about entanglement, you have to first identify subsystems. In your $d=4$ example, you defined an isomorphism $\mathbb{C}^4\simeq \mathbb{C}^2\otimes\mathbb{C}^2$ via the identification of basis states. Whether this is meaningful depends on the context/the physical scenario you have in mind. But it definitely can be.
For $d=3$, this is never possible. Why? Because you have to single out subsystems, i.e. you have to define a tensor product structure. But necessarily, if your Hilbert space is $\mathcal H \simeq \mathcal H_1 \otimes \mathcal H_2$, then $\dim\mathcal H = \dim \mathcal H_1 \times \dim\mathcal H_2$. So if $\mathcal H$ has prime dimension, it cannot be factored (non-trivially). The trivial factorisation is of course always possible, this is $\mathcal H \simeq \mathcal H \otimes \mathbb C$. But you can easily see that in this case, no entanglement is possible. (Maybe unrelated) note: I have observed multiple times that people confuse subsystems with subspaces. Subspaces give rise to a direct sum decomposition, most commonly $\mathcal H = U\oplus U^\perp$. This is vastly different from a tensor product structure!
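Numerically, the entanglement of $\vert\Psi\rangle$ under the qubit identification can be checked by reshaping the 4-component amplitude vector into a $2\times 2$ matrix, whose singular values are the Schmidt coefficients of the two virtual qubits (a quick sketch):

```python
import numpy as np

# |Psi> = (|0> + |3>)/sqrt(2) as a qudit, i.e. (|00> + |11>)/sqrt(2) as two qubits
psi = np.zeros(4)
psi[0] = psi[3] = 1 / np.sqrt(2)

# Schmidt coefficients = singular values of the 2x2 amplitude matrix psi[i, j];
# more than one nonzero coefficient means the virtual qubits are entangled
schmidt = np.linalg.svd(psi.reshape(2, 2), compute_uv=False)
rank = int(np.count_nonzero(schmidt > 1e-12))
print(rank)  # 2 -> a (maximally entangled) Bell state

# a basis state such as |0>_qudit = |00> is a product state: Schmidt rank 1
prod = np.zeros(4)
prod[0] = 1.0
prod_rank = int(np.count_nonzero(
    np.linalg.svd(prod.reshape(2, 2), compute_uv=False) > 1e-12))
print(prod_rank)  # 1
```

The reshape step is exactly the tensor product structure the answer insists on: for a qutrit, the 3-component vector cannot be reshaped into any nontrivial matrix of amplitudes, so no analogous Schmidt decomposition exists.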
{ "domain": "quantumcomputing.stackexchange", "id": 1989, "tags": "entanglement, superposition, quantum-state" }
what is dependency?
Question: When creating a package, dependencies can be selected as needed. I don't get why a package needs dependencies. Could somebody please explain? Is a system dependency the same as a dependency? I mean, system dependency is referred to at This page. Please help me out. Originally posted by Akali on ROS Answers with karma: 49 on 2015-01-14 Post score: 1 Answer: A package does not "need" dependencies. You can have a package without dependencies. However, if you want to develop something you might want to reuse some code/knowledge of others. E.g. you might want to use message or service definitions or libraries created by others and maintained in other packages. In this case your package will "depend" on the other package that contains the message/service/library definitions and thereby you have to state those packages as dependencies. Creating packages without dependencies basically means that you have to reinvent every wheel you need for the development by yourself. System dependencies are like ROS dependencies -> you use libraries created by others to speed up your development. However, in this case the library is not in a ROS package but is installed as a library on your computer, e.g. zlib for compression or OpenCV for image processing. Originally posted by Wolf with karma: 7555 on 2015-01-14 This answer was ACCEPTED on the original site Post score: 2
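Concretely, in a catkin package those dependencies are declared in the package.xml manifest. A hypothetical excerpt (package and dependency names are made up for illustration, using the format-1 tags of that era):

```xml
<?xml version="1.0"?>
<package>
  <name>my_robot_driver</name>
  <version>0.1.0</version>
  <description>Example package that reuses code from other packages</description>
  <maintainer email="dev@example.com">Example Maintainer</maintainer>
  <license>BSD</license>

  <buildtool_depend>catkin</buildtool_depend>
  <!-- ROS package dependencies: reuse messages/libraries from other packages -->
  <build_depend>roscpp</build_depend>
  <build_depend>std_msgs</build_depend>
  <run_depend>roscpp</run_depend>
  <run_depend>std_msgs</run_depend>
  <!-- a system dependency: a non-ROS library installed on the computer -->
  <build_depend>zlib</build_depend>
</package>
```

System dependencies like zlib are resolved through rosdep rather than the ROS package index, but they are declared in the same place.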
{ "domain": "robotics.stackexchange", "id": 20564, "tags": "ros" }
Audio not saving in google colab
Question: # all imports from IPython.display import Javascript from google.colab import output from base64 import b64decode RECORD = """ const sleep = time => new Promise(resolve => setTimeout(resolve, time)) const b2text = blob => new Promise(resolve => { const reader = new FileReader() reader.onloadend = e => resolve(e.srcElement.result) reader.readAsDataURL(blob) }) var record = time => new Promise(async resolve => { stream = await navigator.mediaDevices.getUserMedia({ audio: true }) recorder = new MediaRecorder(stream) chunks = [] recorder.ondataavailable = e => chunks.push(e.data) recorder.start() await sleep(time) recorder.onstop = async ()=>{ blob = new Blob(chunks) text = await b2text(blob) resolve(text) } recorder.stop() }) """ def record(sec=5): display(Javascript(RECORD)) s = output.eval_js('record(%d)' % (sec*1000)) b = b64decode(s.split(',')[1]) with open('audio.wav','wb') as f: f.write(b) return 'audio.wav' # or webm ? The above is the code for audio recording in Google Colab, but the audio file is not saved and nothing is shown after running the code. Kindly help? Answer: You are only defining the record function but never actually running it. Adding record() to the end of the code works for me in Google Colab and saves the audio to the audio.wav file.
{ "domain": "datascience.stackexchange", "id": 9329, "tags": "python, colab, javascript" }
Putting in order my two packages and possibly merging them
Question: I am making an application which fetches tweets for a specified amount of time, then inserts those tweets in a database and when the user presses another button the top n words and the hashtags will be shown. This is my Twitter package: [Package]Twitter >[Class]TwitterTools.java This is my Analyzing package: [Package]Analyzing >[Class]WordCounting.java This is the TwitterTools class: public class TwitterTools { public static List<Status> search(Query query) { } public static void filterTweetsBasedOnCity(List<Status> tweets, final String city) { } public static Query queryMaker(final String keywords, final Date since, final Date until, final int count) { } } search - returns a list of status based on a query filterTweetsBasedOnCity - deletes status from a list if they were not made in a certain city queryMaker - makes a query based on the parameters This is the WordCounting class: public class WordCounting { public static String getHtmlTable(final List<String[]> words, final List<String[]> hashtags) { } public static Stream<Map.Entry<String, Long>> getTopWords(final int topX, final Stream<String> words) { } public static String listToHtmlTable(List<Map.Entry<String, Long>> topEntries, final String title) { } } getHtmlTable - returns the html table of the top X words getTopWords - returns a stream of the top words listToHtmlTable - converts a list to an html table My question is how should I arrange these two packages. Should I merge them since they have only one class each? Should I split them even more by having some of the functions in another class? Answer: The ideal package structure is one which indicates usage. Classes such as these which exclusively contain static methods are definitely utility classes, which I usually put in a util subpackage of your main application. From what it sounds like, you don't have a main package at the moment which ideally would be your website (in reverse domain order), e.g. 
com.stackexchange.codereview so that the more specific elements are later, then on top of that you'd have the name of your application which may make it something like com.stackexchange.codereview.tweettrends. This would be the core of your program, and from here you could then add on your extras. The Twitter package in particular doesn't sound helpful: your entire program seems to relate to Twitter so it doesn't really convey what that part of the program does. Frankly, I'd just put all those classes under com.stackexchange.codereview.tweettrends.util and add in more sub-packages if you add more classes. And generally speaking, in Java packages should be exclusively lowercase.
{ "domain": "codereview.stackexchange", "id": 7534, "tags": "java, classes" }
Dark matter and gravitation
Question: Dark matter does not interact with the electromagnetic force; however, it does interact with the gravitational force. Do we know if there are any big ‘clumps’ (like a star) of dark matter? If they exist (or could exist) would their collision with a black hole or neutron star give out gravitational waves? Answer: Generally, it is difficult for dark matter to clump. That's due to angular momentum conservation: normal matter only clumps so efficiently into stars, planets and all that because we have dissipative forces ("friction") that permit efficient clustering. For dark matter on the other hand, we do not expect any (or at least no strong) dissipative forces; this is the Lambda-Cold-Dark-Matter paradigm where dark matter only interacts through gravity. In that case, clustering can only happen through three-body interactions (known more readily from swing-by maneuvers of solar system probes, where one body gains angular momentum at the expense of another losing it). And in the vanilla scenario that means we don't expect any (or at least not much) structure at scales much smaller than dwarf galaxies: no dark matter stars or planets expected. That said, there are proposed modifications to this vanilla cold dark matter scenario that are consistent with data, in which the dark sector has extra forces, which in turn could induce stronger clumping. Self-interacting dark matter is one example of one such model, see e.g. this nicely written though somewhat outdated paper. Dark matter is matter too, so yes, any motion of a dark matter clump gives the same gravitational structure (and potentially also gravitational waves) as a clump of the same size and mass that was made of normal matter. But if one had different density profiles and very sensitive gravitational wave detectors one might tease out a different signature. Thinking purely phenomenologically, dark matter quanta could be anything from 10^-22 electronvolts light up to 10 solar masses heavy (roughly.
Let's not go into arguing about those exact limitations in this post). At this massive end of the allowed range, primordial black holes are one candidate that is being discussed in the literature. In fact, this model recently got a lot of attention exactly because of the early gravitational wave signatures seen by aLIGO, though by now it is disfavored as making up all of the dark matter. tl;dr: No, we do not really know if there are any star-sized clumps of dark matter, yes we could see them in gravitational waves.
{ "domain": "physics.stackexchange", "id": 70243, "tags": "electromagnetism, gravity, gravitational-waves, dark-matter" }
Why do soil bacteria produce nitrous oxide as a result of anhydrous ammonia fertilizer application?
Question: The NPR article and 4-minute audio news item Can Anyone, Even Walmart, Stem The Heat-Trapping Flood Of Nitrogen On Farms? discusses unexpected contributions of greenhouse gases from the manufacture of consumer goods. One example is bread, or agricultural products in general, and part of this is the anhydrous ammonia used in fertilizer. Carbon dioxide seems to be a by-product of the manufacture, but nitrous oxide, a more potent greenhouse gas, is produced by bacteria in the soil as a result of application of ammonia. Manufacturing nitrogen fertilizer is energy-intensive, burning lots of fossil fuels and releasing carbon dioxide. What's just as damaging, and perhaps even more so, is what happens when it's spread on a field. Bacteria feed on it and release a super-powerful greenhouse gas called nitrous oxide. These bacteria are naturally present in the soil, says Philip Robertson, a researcher at Michigan State University, "but once they get exposed to nitrogen fertilizer, they really light up" and pump out nitrous oxide. Question: Is it common for most soil bacteria to make significant quantities of nitrous oxide after ammonia is added, or is there only a small subset of bacteria present in soil that might do this? Also, what is the metabolic pathway in this case? Is this a waste product from "eating" ammonia, or something more complicated/subtle? below: Anhydrous ammonia tanks in a newly planted wheat field. Walmart has promised big cuts in emissions of greenhouse gases. To meet that goal, though, the giant retailer may have to persuade farmers to use less fertilizer. It won't be easy. TheBusman/Getty Images. From here. Answer: Addressing the first part of your question: Is it common for most soil bacteria to make significant quantities of nitrous oxide after ammonia is added, or is there only a small subset of bacteria present in soil that might do this? No, not all soil bacteria can oxidize ammonia.
According to wiki: The oxidation of ammonia into nitrite is performed by two groups of organisms, ammonia-oxidizing bacteria (AOB) and ammonia-oxidizing archaea (AOA). The role of archaea in the soil nitrogen cycle is not well known yet, although it could be greater than that of bacteria. On the other hand, the role and metabolism of AOB are more or less studied. The process of ammonia oxidation is also called nitrification, and AOB are sometimes referred to as nitrifying bacteria. The best-known bacterial genera from this group are Methylomonas, Nitrosococcus, Nitrosomonas and Nitrosospira. I think the most studied species is Nitrosomonas europaea Winogradsky 1892. For reference see: Arp DJ, Sayavedra-Soto LA, Hommes NG. Molecular biology and biochemistry of ammonia oxidation by Nitrosomonas europaea. Arch. Microbiol. 178 (2002) 250-5. The second part of the question: What is the metabolic pathway in this case? Although the metabolic pathways vary in different genera, let's look at ammonia metabolism in N. europaea. In the simplest case, it is a two-step process. According to KEGG: Step 1. The enzyme ammonia monooxygenase converts ammonia to hydroxylamine. The enzyme contains copper, iron and possibly zinc. It requires two electrons, which are derived indirectly from the quinone pool via a membrane-bound donor. The reaction is: NH3 + a reduced acceptor + O2 = NH2OH + an acceptor + H2O Step 2. The enzyme hydroxylamine dehydrogenase converts hydroxylamine to nitrite. This reaction requires a specialized cytochrome as an acceptor of electrons. The reaction is: NH2OH + H2O + 4 ferricytochrome c = NO2- + 4 ferrocytochrome c + 5 H+ While nitrite is the main product, the enzyme from N. europaea can produce nitric oxide as well. Therefore Step 2 can produce nitric oxide (NO). In this case, additional mechanisms of converting NO to nitrite are involved. The further fate of the released nitrite can differ.
In one process it could be oxidized by different microorganisms (bacteria, archaea) to nitrate (NO3-), which could be consumed by plants or washed out to deeper horizons with water. In another case, nitrite, nitric oxide and nitrate could be involved in the process of denitrification. During this process, a group of denitrifying organisms (bacteria, archaea and fungi) consume nitrogenous compounds and reduce them to nitrogen (N2), which can escape to the atmosphere. This process can occur through different metabolic pathways, mainly with the aid of reductases like nitric oxide reductase. One of the possible intermediate products of such pathways is nitrous oxide (N2O). If N2O escapes from cells we can observe a release of nitrous oxide from the soil. Let's return to your first question, and particularly to the part where you ask about significant amounts of nitrous oxide. First of all, nitrification is a natural process and is a part of the nitrogen cycle. Some organisms fixate nitrogen and convert it to ammonia, some oxidize ammonia to nitrous compounds and some of them reduce it back to nitrogen. It occurs everywhere, virtually in all terrestrial and water biocenoses. In a healthy ecosystem, the inflow of nitrogen is theoretically equal to the outflow (with temporal deposition in trophic chains). In such natural conditions, the release of nitrous oxide should be negligible. The situation changes significantly when we add ammonia to soil artificially as a fertilizer. The main problem is that we break the equilibrium between nitrogen fixation, nitrification and denitrification. The second big problem is that ammonia is toxic for most organisms, so we can alter the microbiome. In such a case, we can expect an excess of products of nitrification and improper denitrification with exaggerated levels of released nitrous oxide.
The actual proportions vary significantly depending on both biotic and abiotic conditions, therefore it is difficult to predict real levels of nitrous oxide released without in situ examination. P.S. sorry for possible bad writing.
{ "domain": "biology.stackexchange", "id": 7860, "tags": "bacteriology, soil, nitrogen-cycle" }
Extracting WDL map keys as a task
Question: I want to get the array of keys from a map data structure, so I can perform array-based functions against the map in subsequent tasks. I am limited to draft-2 Workflow Description Language (WDL), so I don't have access to the newer spec's extraction functions. I thought I could achieve this by writing a task: task extract_keys { Map[String, String] map command { cut -f1 ${write_map(map)} } output { Array[String] keys = read_lines(stdout()) } } This task works just fine and, as far as I can tell, returns the array of map keys. However, when I try to feed this array into subsequent tasks, I get an error: workflow test { Map[String, String] my_map = { "a": "foo", "b": "bar", "c": "quux" } call extract_keys { input: map = my_map } scatter (k in extract_keys.keys) { call task_foo { input: k = k, s = my_map[k] } } } Call input and runtime attributes evaluation failed for task_foo I don't have access to the full error message or the logs, unfortunately. The frontend I'm using hides these away. (Incidentally, I'm aware that you can scatter over maps, although it's undocumented, per https://github.com/openwdl/wdl/issues/106#issuecomment-356047538. This code is just illustrative; I wish to do other things, such as length and cross, that only accept arrays.) Is it not possible to scatter over the output of a previous task? Answer: I seem to have solved this myself by virtue of the linked undocumented feature (scattering over a map). It turns out you don't need a separate task at all. This works: workflow test { Map[String, String] my_map = { "a": "foo", "b": "bar", "c": "quux" } scatter (pairs in my_map) { String keys = pairs.left # String values = pairs.right } scatter (k in keys) { call task_foo { input: k = k, s = my_map[k] } } }
{ "domain": "bioinformatics.stackexchange", "id": 1848, "tags": "wdl" }
rxbag export crashes
Question: Hi, I'm trying to export data from a bagfile using rxbag. After plotting the data I'm interested in without problems, I do right click -> export to CSV, and after naming the file everything crashes with just a seg fault: 1- Does anybody know why this happens and how to solve it? 2- Does somebody have any C++ or Python code using the rosbag API to extract data from a bagfile and save it to a CSV, to use as a template? Any help will be appreciated! :) I'm on Ubuntu 11.10 and rxbag crashes both in ROS Electric and Fuerte :S Originally posted by Jep on ROS Answers with karma: 195 on 2012-11-07 Post score: 0 Answer: After some checks, I can contribute that: 1- In another machine with Ubuntu 12.04 rxbag was exporting without problems 2- In my machine I ended up using: rostopic echo /topic -b=path/to/bagfile.bag -p > output_data.csv That was much faster to use for my purposes than rxbag. Originally posted by Jep with karma: 195 on 2012-11-09 This answer was ACCEPTED on the original site Post score: 0
{ "domain": "robotics.stackexchange", "id": 11657, "tags": "ros, rosbag, bagfile, rxbag" }
Normalizing eigenvectors
Question: Over the course of this quantum class I'm taking I've run into issues with properly normalizing my eigenvectors. Here is my TA's explanation of how this particular example is done. I am lost as to where these $x+y=ax$, $x-y=ay$, and $x^2 +y^2 =1$ conditions in their solutions come from. The eigenvalues are easy via Mathematica or a characteristic equation, but the "normalized" eigenvectors Mathematica gives do not match up: Any help would be greatly appreciated and would hopefully clear things up for me during my studying today before the final tomorrow. Answer: The equations $x + y = ax$ and $x - y = ay$ are obtained by the following. $$\begin{aligned} A \left( \begin{matrix}x \\ y \end{matrix}\right ) &= a\left( \begin{matrix}x \\ y \end{matrix}\right )\\[1.0em] \left( \begin{matrix}1 & 1 \\ 1 & -1 \end{matrix}\right )\left( \begin{matrix}x \\ y \end{matrix}\right )&=a\left( \begin{matrix}x \\ y \end{matrix}\right ) \end{aligned}$$ Do the matrix multiplication and you get the first expression from the first row and the second expression from the second row. $y = (\sqrt2 -1) x $ is obtained by adding the first line and the second, plugging in $a=\sqrt2$ and rearranging for $y$. $$\begin{aligned} 2x &= a(x+y) \\ \frac{2}{a}x -x &= y \\ \left(\frac{2}{a}-1 \right)x &= y \\ a &=\sqrt 2\\ (\sqrt 2 -1 )x&=y \end{aligned}$$ That means that your eigenvector with eigenvalue $a=\sqrt 2$ is proportional to a vector of the form $$ \left( \begin{matrix}x \\ y \end{matrix} \right)_{a=\sqrt 2} = x_{a=\sqrt 2} \left( \begin{matrix}1 \\ \sqrt 2 -1 \end{matrix} \right) $$ The last free parameter $x_{a=\sqrt 2}$ is fixed by the normalization condition $$ \left( \begin{matrix} x \\ y \end{matrix} \right)_{a=\sqrt 2} \cdot \left( \begin{matrix}x \\ y \end{matrix} \right)_{a=\sqrt 2} = 1 $$ where I have omitted complex conjugation of the left vector since everything is real. The normalization condition is the same as the equation $x^2 + y^2 = 1 $.
Plugging the results in we get $$\begin{aligned} x^2_{a=\sqrt 2} &= \frac{1}{\left( \begin{matrix}1 \\ \sqrt 2 -1 \end{matrix} \right)\cdot \left( \begin{matrix}1 \\ \sqrt 2 -1 \end{matrix} \right)} \\[1.0em] x^2_{a=\sqrt 2} &= \frac{1}{4-2\sqrt 2} \\ x_{a=\sqrt 2} &= \sqrt{\frac{1}{4-2\sqrt 2}} \end{aligned}$$ With this we have all parts to write down our first normed eigenvector belonging to eigenvalue $a=\sqrt 2 $, $$ \vec a_{\sqrt 2} =\sqrt{\frac{1}{4-2\sqrt 2}}\left( \begin{matrix}1 \\ \sqrt 2 -1 \end{matrix} \right) $$ All that remains to get the same solution as your TA is some algebra. $$\begin{aligned} \sqrt{\frac{1}{4-2\sqrt 2}} &= \sqrt{\frac{1}{2}\frac{1}{2-\sqrt 2}} \\ &=\sqrt{\frac{1}{2}\frac{ 2 +\sqrt 2 }{ (2-\sqrt 2)(2 + \sqrt 2) }} \\ &=\sqrt{\frac{1}{2}\frac{ 2 +\sqrt 2 }{ 2 }} \\ &=\sqrt{\frac{1}{2}\left(1+\frac{1}{\sqrt 2}\right) } \\ &=\frac{1}{\sqrt 2}\sqrt{ 1 + \frac{1}{\sqrt 2}} \\ \end{aligned}$$ $$\begin{aligned} \frac{1}{\sqrt 2}\sqrt{ 1 + \frac{1}{\sqrt 2}}( \sqrt 2 -1) &=\frac{1}{\sqrt 2} \sqrt{ \left(\frac{\sqrt 2 + 1}{\sqrt 2}\right)(\sqrt 2 -1)^2 } \\ &= \frac{1}{\sqrt 2}\sqrt{ \frac{\sqrt 2 -1}{\sqrt 2} } \\ &= \frac{1}{\sqrt 2}\sqrt{ 1 - \frac{1}{\sqrt 2} } \\ \end{aligned}$$ With this we can write the vector as $$\begin{aligned} \vec a_{\sqrt 2} &=\sqrt{\frac{1}{4-2\sqrt 2}}\left( \begin{matrix}1 \\ \sqrt 2 -1 \end{matrix} \right) \\[1.4em] \vec a_{\sqrt 2} &=\frac{1}{\sqrt 2} \left( \begin{matrix} \sqrt{1 + \frac{1}{\sqrt 2}} \\ \sqrt{1 - \frac{1}{\sqrt 2}} \end{matrix}\right) \end{aligned}$$ This is how $|a_1\rangle$ is obtained. It is also important to remember that the phase of an eigenvector is arbitrary. Generally, if we allow complex coefficients, we can multiply an eigenvector with any complex number of magnitude 1 without changing anything. Limiting ourselves to reals, that leaves only the choice for the phase of $1$ or $-1$. 
That means I could just as well use $-1|a_1 \rangle$ as the eigenvector belonging to eigenvalue $a_1$. This can also lead to different-looking results when two people compare their results. You can also bring the solution into the form of your output via some algebra, $$\begin{aligned} \vec a_{\sqrt 2} &=\sqrt{\frac{1}{4-2\sqrt 2}}\left( \begin{matrix}1 \\ \sqrt 2 -1 \end{matrix} \right) \\[1.4em] &=\sqrt{\frac{1}{4-2\sqrt 2}} \frac{(1+\sqrt 2)}{(1+\sqrt 2)} \left( \begin{matrix}1 \\ \sqrt 2 -1 \end{matrix} \right) \\[1.4em] &=\sqrt{\frac{1}{4-2\sqrt 2}} \frac{1}{(1+\sqrt 2)} \left( \begin{matrix}1+\sqrt 2 \\ (1+\sqrt 2)(\sqrt 2 -1) \end{matrix} \right) \\[1.4em] &=\sqrt{\frac{1}{4-2\sqrt 2}} \frac{1}{(1+\sqrt 2)} \left( \begin{matrix}1+\sqrt 2 \\ 2-1 \end{matrix} \right) \\[1.4em] &=\sqrt{\frac{1}{4-2\sqrt 2}} \frac{1}{(1+\sqrt 2)} \left( \begin{matrix}1+\sqrt 2 \\ 1 \end{matrix} \right) \\[1.4em] \end{aligned}$$ As you see, that form is proportional to the result that you got, which means that the result that your TA got is the same; it's only hard to see due to the algebra with the normalization factor. After playing a bit more with the algebra, we can show that it is exactly the same as the result that you got, $$\begin{aligned} (1+\sqrt 2) \sqrt{4-2\sqrt 2} &= \sqrt{(1+\sqrt 2)^2(4-2\sqrt 2) } \\ &=\sqrt{(3+2\sqrt 2 )(4-2\sqrt 2)} \\ &=\sqrt{12+2\sqrt 2 - 8} \\ &=\sqrt{4+2\sqrt 2} \\ (1+\sqrt 2) \sqrt{4-2\sqrt 2}&= \sqrt{1 + (1+\sqrt 2)^2} \end{aligned}$$ $$ \sqrt{\frac{1}{4-2\sqrt 2}} \frac{1}{(1+\sqrt 2)} \left( \begin{matrix}1+\sqrt 2 \\ 1 \end{matrix} \right)=\frac{1}{\sqrt{1 + (1+\sqrt 2)^2} } \left( \begin{matrix}1+\sqrt 2 \\ 1 \end{matrix} \right) $$
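All of these equivalent forms are easy to check numerically. Here is a quick sketch (using numpy; this code is my illustration, not part of the original answer) that verifies the hand-derived normalized eigenvector against `numpy.linalg.eig`:

```python
import numpy as np

A = np.array([[1.0, 1.0],
              [1.0, -1.0]])

# Hand-derived eigenvector for a = sqrt(2): proportional to (1, sqrt(2) - 1),
# normalized by 1 / sqrt(4 - 2 sqrt(2)).
u = np.array([1.0, np.sqrt(2) - 1.0]) / np.sqrt(4 - 2 * np.sqrt(2))

# The TA's form of the same vector.
ta = np.array([np.sqrt(1 + 1 / np.sqrt(2)),
               np.sqrt(1 - 1 / np.sqrt(2))]) / np.sqrt(2)

assert np.isclose(np.linalg.norm(u), 1.0)   # x^2 + y^2 = 1
assert np.allclose(A @ u, np.sqrt(2) * u)   # eigenvalue equation
assert np.allclose(u, ta)                   # both forms agree

# Compare against numpy's eigenvector (defined only up to overall sign).
vals, vecs = np.linalg.eig(A)
v = vecs[:, np.argmax(vals)]
assert np.isclose(abs(np.dot(u, v)), 1.0)
```

The last assertion uses the absolute value of the overlap precisely because of the arbitrary phase discussed above: numpy may return the eigenvector with either sign.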
{ "domain": "physics.stackexchange", "id": 74228, "tags": "quantum-mechanics, hilbert-space, linear-algebra, eigenvalue, software" }
Quantum XNOR Gate Construction
Question: Tried asking here first, since a similar question had been asked on that site. Seems more relevant for this site however. It is my current understanding that a quantum XOR gate is the CNOT gate. Is the quantum XNOR gate a CCNOT gate? Answer: Any classical one-bit function $f:x\mapsto y$ where $x\in\{0,1\}^n$ is an $n$-bit input and $y\in\{0,1\}$ is a one-bit output can be written as a reversible computation, $$ f_r:(x,y)\mapsto (x,y\oplus f(x)) $$ (Note that any function of $m$ outputs can be written as just $m$ separate 1-bit functions.) A quantum gate implementing this is basically just the quantum gate corresponding to the reversible function evaluation. If you simply write out the truth table of the function, each line corresponds to a row of the unitary matrix, and the output tells you which column entry contains a 1 (all other entries contain 0). In the case of XNOR, we have the standard truth table, and the reversible function truth table $$ \begin{array}{c|c} x & f(x) \\ \hline 00 & 1 \\ 01 & 0 \\ 10 & 0 \\ 11 & 1 \end{array} \qquad \begin{array}{c|c} (x,y) & (x,y\oplus f(x)) \\ \hline 000 & 001 \\ 001 & 000 \\ 010 & 010 \\ 011 & 011 \\ 100 & 100 \\ 101 & 101 \\ 110 & 111 \\ 111 & 110 \end{array} $$ Thus, the unitary matrix is $$ U=\left(\begin{array}{cccccccc} 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 \\ 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 \\ 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 \\ \end{array}\right). $$ This can easily be decomposed in terms of a couple of controlled-not gates and a bit flip or two. The method that I just outlined gives you a very safe way of making the construction that works for any $f(x)$, but it does not perfectly reconstruct the correspondence between XOR and controlled-not. For that, we need to assume a little bit more about the properties of the function $f(x)$. 
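Before specializing, the truth-table recipe above is mechanical enough to sanity-check in code. A minimal sketch (numpy; the bit ordering $(a, b, y)$ with $y$ least significant is my assumption, not fixed by the answer) that builds the $8\times 8$ unitary for XNOR and confirms it is the permutation matrix shown:

```python
import numpy as np

def xnor(a: int, b: int) -> int:
    return 1 - (a ^ b)

# Reversible embedding f_r : (x, y) -> (x, y XOR f(x)) on three bits (a, b, y).
U = np.zeros((8, 8))
for a in range(2):
    for b in range(2):
        for y in range(2):
            src = (a << 2) | (b << 1) | y
            dst = (a << 2) | (b << 1) | (y ^ xnor(a, b))
            U[dst, src] = 1.0

assert np.allclose(U @ U.T, np.eye(8))  # a permutation matrix, hence unitary
assert U[1, 0] == 1.0                   # 000 -> 001, as in the truth table
assert U[7, 6] == 1.0                   # 110 -> 111
assert U[2, 2] == 1.0                   # 010 -> 010
```

Because $y \oplus f(x)$ applied twice is the identity, this $U$ is an involution, which is why reading the truth table by rows or by columns gives the same matrix here.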
Assume that we can decompose the input $x$ into $a,b$ with $a\in\{0,1\}^{n-1}$ and $b\in \{0,1\}$, such that for all values of $a$, the values of $f(a,b)$ are distinct for each $b$. In this case, we can define the reversible function evaluation as $$f:(a,b)\mapsto(a,f(a,b)).$$ This means that we're using one fewer bit than the previous construction, but from here on the technique can be repeated. So, let's go back to the truth table for XNOR. $$ \begin{array}{c|c} ab & f(a,b) \\ \hline 00 & 1 \\ 01 & 0 \\ 10 & 0 \\ 11 & 1 \end{array} $$ We can see that, for example, when we fix $a=0$, the two outputs are $1,0$, hence distinct. Similarly for fixing $a=1$. Thus, we can proceed with the reversible function construction $$ \begin{array}{c|c} ab & af(a,b) \\ \hline 00 & 01 \\ 01 & 00 \\ 10 & 10 \\ 11 & 11 \end{array} $$ and this gives us a unitary $$ U=\left(\begin{array}{cccc} 0 & 1 & 0 & 0 \\ 1 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{array}\right) $$ You can easily check that this is the same as $\text{cNOT}\cdot(\mathbb{1}\otimes X)$.
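That last identity is also easy to verify numerically. A sketch (my own check; the convention is that gates act on $|ab\rangle$ with $b$ least significant, and the right-hand factor of the product is applied first):

```python
import numpy as np

I = np.eye(2)
X = np.array([[0.0, 1.0],
              [1.0, 0.0]])

# CNOT with a as control and b as target, basis order |00>, |01>, |10>, |11>.
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=float)

# Flip b first, then CNOT: b -> a XOR (NOT b), which is exactly XNOR(a, b).
U = CNOT @ np.kron(I, X)

U_expected = np.array([[0, 1, 0, 0],
                       [1, 0, 0, 0],
                       [0, 0, 1, 0],
                       [0, 0, 0, 1]], dtype=float)
assert np.allclose(U, U_expected)
```

So the two-qubit XNOR really is just a controlled-NOT preceded by a bit flip on the target, mirroring the XOR/CNOT correspondence.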
{ "domain": "quantumcomputing.stackexchange", "id": 206, "tags": "universal-gates, gate-synthesis, quantum-gate" }
Is beryllium an alkaline earth metal?
Question: For context: I got a question asking, "Which of the following alkaline earth metals do not give flame colour?". I quickly marked $\ce{Be}$ and $\ce{Mg}$ and got negative marks. The following is a quote from my book (emphasis in the original): The elements of Group 2 include beryllium, magnesium, calcium, strontium, barium and radium. These elements with the exception of beryllium are commonly known as the alkaline earth metals. These are so called because their oxides and hydroxides are alkaline in nature and these metal oxides are found in the earth's crust. As you can see, it's stated that beryllium is not an alkaline earth metal! Is beryllium an alkaline earth metal or not? Answer: There is some disagreement in usage among authors, but IUPAC standard nomenclature approves calling beryllium an alkaline earth metal, as explained on page 51 of IUPAC's last Red Book. In fact, all the elements belonging to group 2, $\ce{Be,Mg,Ca,Sr,Ba,Ra}$, are called alkaline earth metals with IUPAC's approval. Other common traditional names approved by IUPAC are alkali metals for the elements of group 1 except hydrogen, i.e. $\ce{Li,Na,K,Rb,Cs,Fr}$; halogens for $\ce{F,Cl,Br,I,At}$ in group 17, whose only member excluded from such a designation is $\ce{Uus}$ (now $\ce{Ts}$); and noble gases for all the elements of group 18 except $\ce{Uuo}$ (now $\ce{Og}$), i.e. $\ce{He,Ne,Ar,Kr,Xe,Rn}$.
{ "domain": "chemistry.stackexchange", "id": 8811, "tags": "inorganic-chemistry, terminology, alkaline-earth-metals" }
Scan and Output ACLs to Excel in Python
Question: I am working to solve an issue that my company is currently having, We have an extremely large share drive that is over 20 years old. There are so many security groups attached to some of these folder as well as individual users that no longer exist and appear as scrambled text. We would like to scan through the entire directory and output all of the permissions to an excel file. I have come up with this solution, however, I don't believe I have done it as efficiently as I believe it can be. Here is the entire script: import os import subprocess import xlsxwriter import win32com.client as win32 # *** IMPORTANT **** # MUST INSTALL XLSXWRITER AND PYWIN32 IN VENV # PIP INSTALL XLSXWRITER # PIP INSTALL PYWIN32 def CheckPerms(t, save_dir): global firstDir # LOOP THROUGH ALL PATHS IN ROOT DIRECTORY for path in os.listdir(t): # VARIABLE HOLDS FULL PATH OF THE ROOT DIRECTORY full_root_path = os.path.join(t, path) # CHECKS IF PATH IS A FILE, IF IT IS A FILE, SKIP AND CONTINUE WITH SCAN if not os.path.isfile(full_root_path): # VARIABLE HOLDS ROOT NAME OF FULL ROOT PATH # EXAMPLE: INPUT - C:\TEST, RETURNS - C:\ firstDir = os.path.split(full_root_path) # PRINTS CURRENT DIRECTORY THAT IS BEING SCANNED print("Working on: " + firstDir[1]) # PREPARE EXCEL WORKBOOK IN SPECIFIED LOCATION # SAVES AS NAME OF DIR BEING SCANNED workbook = xlsxwriter.Workbook(f'{save_dir}\\{firstDir[1]}.xlsx') worksheet = workbook.add_worksheet() worksheet.write("A1", "Directory Path") worksheet.write("B1", "Security Groups") row = 1 col = 0 # LOOP THOUGH A WALK OF EACH ROOT PATH # THIS WILL SCAN FRONT TO BACK, TOP TO BOTTOM OF THE ENTIRE TREE for r, d, f in os.walk(full_root_path): # CAPTURE THE OUTPUT OF THE SYSTEM CMD ICALCS COMMAND ON DIRECTORY BEING SCANNED sub_return = subprocess.check_output(["icacls", r]) try: # TRY TO DECODE OUTPUT AS UTF-8 sub_return = sub_return.decode('utf-8') except: # SOMETIMES CANNOT DECODE, THIS WILL CATCH ERROR AND CONTINUE print("Decode Error: Skipping a 
line") continue # SPLIT THE LINES OF THE RETURNED STRING split_icacl_lines = sub_return.splitlines() # ICACLS RETURNS A STATUS LINE AFTER COMPLETE # THIS WILL REMOVE THE LAST LINE, EXAMPLE: # "Successfully processed 1 files; Failed processing 0 files" del split_icacl_lines[-1:] # FIRST LINE OF ICACLS INCLUDES THE DIRECTORY AS WELL AS FIRST LINE OF ICACLS # THIS WILL REVERSE SPLIT BY FIRST EMPTY SPACE TO SEPARATE THE LINES # AND DELETE IT FROM THE ORIGINAL LIST firstLine = split_icacl_lines[0].rsplit(" ", 1) del split_icacl_lines[0] # THERE HAPPENS TO BE AN EMPTY LINE, SO WE REMOVE IT HERE del split_icacl_lines[-1:] # ADD THE FIRST ELEMENT OF THE FIRST LINE BACK INTO THE BEGINNING OF THE LIST split_icacl_lines.insert(0, firstLine[0].lstrip()) # APPEND THE SECOND ELEMENT OF THE FIRST LINE TO END OF LIST split_icacl_lines.append(firstLine[1].lstrip()) # FIRST ELEMENT OF LIST IS THE TARGET DIRECTORY target_directory = split_icacl_lines[0] # DELETE TARGET DIRECTORY FROM LIST del split_icacl_lines[0] # ADD TARGET DIRECTORY TO EXCEL FILE worksheet.write(row, col, target_directory) # MOVE OVER EXCEL COLUMN BY 1 col += 1 # LOOP THROUGH EACH LINE IN THE FINAL ICACL DATA AND # OUTPUT IT TO EXCEL FILE NEXT TO THE DIRECTORY IT # BELONGS TO for lines in split_icacl_lines: # STRIP LINES OF ALL ABNORMAL CHARACTERS output = lines.lstrip() # INSERT LINE INTO WORKBOOK worksheet.write(row, col, output) row += 1 # EMPTY LINE BETWEEN EACH SCAN OUTPUT row += 1 # RESET COLUMN TO 0 col = 0 # CLOSE WORKBOOK, SAVING IT workbook.close() # OPEN WORKBOOK IN WIN32, AUTO-FIT EACH COLUMN AND SAVE IT excel = win32.gencache.EnsureDispatch('Excel.Application') wb = excel.Workbooks.Open(f'C:\\Test\\{firstDir[1]}.xlsx') ws = wb.Worksheets("Sheet1") ws.Columns.AutoFit() wb.Save() excel.Application.Quit() # REPEAT ALL ABOVE FOR EACH DIRECTORY CheckPerms("C:\\", "C:\\Test") The problem I have, is when ICACLS is run in windows, the first line it returns includes the directory as well as the first 
permission, example: C:\Users\Michael>icacls C:\\ C:\\ BUILTIN\Administrators:(OI)(CI)(F) NT AUTHORITY\SYSTEM:(OI)(CI)(F) BUILTIN\Users:(OI)(CI)(RX) NT AUTHORITY\Authenticated Users:(OI)(CI)(IO)(M) NT AUTHORITY\Authenticated Users:(AD) Mandatory Label\High Mandatory Level:(OI)(NP)(IO)(NW) This is why I had to do some weird stuff with the output after I split it into a list. After splitting the original output, I split the first line of that output by an empty space and added each element back into the list, however, for some files such as in the directory "Program Files" I get strange outputs like this: Can anyone suggest a better way to do this? It would be very much appreciated. Answer: when ICACLS is run in windows, the first line it returns includes the directory as well as the first permission That's because your parsing algorithm is incorrect. About the only way to parse this output is to look at the second line to see how far indented it is. Otherwise: DON'T SHOUT IN YOUR COMMENTS; THIS IS ESPECIALLY IMPORTANT FOR PIP WHERE IT WILL NOT WORK ON CASE-SENSITIVE OPERATING SYSTEMS UNLESS IT'S LOWER-CASE Don't store firstDir as a global CheckPerms should be check_perms by PEP8 Divide this out into more subroutines Prefer pathlib over os where possible Your firstDir[1] index is strange. I assume this is just the stem of the directory. Put your xlsx file path into a variable for reuse. You have inconsistent worksheet indexing, mixing A1 with row/column. Prefer the latter. Do not use single-letter variable names like r, d, f Don't decode yourself; pass encoding to subprocess functions Don't call workbook.close; put it in a with Don't open Excel just for the purpose of column auto-sizing; keep track of the longest string and use that as the column width Add a __main__ guard Never bare except: Strongly consider parsing the result of icacls /T which includes multiple files and will be more efficient. I have not shown this below. 
Suggested import os import subprocess from typing import Iterator import xlsxwriter from pathlib import Path # must install xlsxwriter in venv # pip install xlsxwriter def parse_icacls(dir_path: str) -> Iterator[str]: """ Example output: icacls some_long_file some_long_file NT AUTHORITY\SYSTEM:(I)(OI)(CI)(F) BUILTIN\Administrators:(I)(OI)(CI)(F) LAPTOP\me:(I)(OI)(CI)(F) Successfully processed 1 files; Failed processing 0 files """ sub_return = subprocess.check_output(('icacls', dir_path), encoding='utf-8') lines = sub_return.splitlines() lead = len(lines[1]) - len(lines[1].lstrip()) for line in lines: if not line: break yield line[lead:] def write_workbook(full_root_path: Path, saved_to: Path) -> None: with xlsxwriter.Workbook(saved_to) as workbook: worksheet = workbook.add_worksheet() headers = ('Directory Path', 'Security Groups') row = 0 for col, header in enumerate(headers): worksheet.write(row, col, header) row += 1 longest_dir = len(headers[0]) longest_group = len(headers[1]) for dir_path, dir_names, file_names in os.walk(full_root_path): try: groups = parse_icacls(dir_path) except UnicodeDecodeError: print('Decode Error: Skipping a line') continue col = 0 worksheet.write(row, col, dir_path) longest_dir = max(longest_dir, len(dir_path)) col += 1 for group in groups: worksheet.write(row, col, group) longest_group = max(longest_group, len(group)) row += 1 row += 1 worksheet.set_column(first_col=0, last_col=0, width=longest_dir) worksheet.set_column(first_col=1, last_col=1, width=longest_group) def check_perms(top: Path, save_dir: Path) -> None: save_dir.mkdir(exist_ok=True) for full_root_path in top.iterdir(): if full_root_path.is_dir(): saved_to = (save_dir / full_root_path.stem).with_suffix('.xlsx') print(f'Working on: {full_root_path}; saving to {saved_to}') write_workbook(full_root_path, saved_to) if __name__ == '__main__': check_perms(Path('.'), Path('Test'))
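The indentation trick inside parse_icacls is the key idea, and it can be exercised without touching icacls at all. A sketch against a hard-coded sample (the path and entries are made up, but the layout mimics real icacls output, where the path and the first access-control entry share a line):

```python
# Hypothetical sample mimicking icacls output: the path and the first
# ACE are fused on line one; continuation lines are indented to the
# same column as that first ACE; a blank line precedes the status line.
sample = (
    "C:\\Some\\Dir BUILTIN\\Administrators:(OI)(CI)(F)\n"
    "            NT AUTHORITY\\SYSTEM:(OI)(CI)(F)\n"
    "            BUILTIN\\Users:(OI)(CI)(RX)\n"
    "\n"
    "Successfully processed 1 files; Failed processing 0 files\n"
)

def split_path_and_aces(text: str) -> tuple[str, list[str]]:
    lines = text.splitlines()
    # The second line's indentation marks the column where ACEs start,
    # so the first line can be cut at that offset to recover the path.
    lead = len(lines[1]) - len(lines[1].lstrip())
    path = lines[0][:lead].rstrip()
    aces = []
    for line in lines:
        if not line:  # blank line ends the ACE block (status line follows)
            break
        aces.append(line[lead:])
    return path, aces

path, aces = split_path_and_aces(sample)
assert path == "C:\\Some\\Dir"
assert aces == [
    "BUILTIN\\Administrators:(OI)(CI)(F)",
    "NT AUTHORITY\\SYSTEM:(OI)(CI)(F)",
    "BUILTIN\\Users:(OI)(CI)(RX)",
]
```

This avoids the fragile rsplit-on-space approach from the question, which breaks on paths and group names containing spaces (e.g. "Program Files", "NT AUTHORITY\Authenticated Users").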
{ "domain": "codereview.stackexchange", "id": 42759, "tags": "python" }
Electron flow: Are these two simulations contradicting each other?
Question: These two videos explain how a diode works: https://www.youtube.com/watch?v=W6QUEq0nUH8#t=03m25s (3:25) https://www.youtube.com/watch?v=JBtEckh3L9Q#t=07m40s (7:40) In the second video, electrons seem to travel from (+) to (-), which looks wrong. To my understanding, positive ions (-) "get rid" of electrons, while negative ions (+) "accept" electrons. The electron, being a negatively charged particle, travels from (-) to (+). In the second video, electrons seem to travel from (+) to (-). Please shed some light into my mind. Answer: We are not talking about ions here. If you go to the second video you will hear from 7:51 that she talks about electrons and holes. Electrons are free to move around in the n-type material on one side of the junction, while holes, each simply an empty space missing an electron, are free to move around in the p-type material. n-type: Think of a pure material lattice where one atom is suddenly replaced with an atom of one higher atomic number. This atom has an electron too many in the outer shell. To fit into the lattice, it must give up this electron. So it "pushes" it away. Now this excess electron is more free to move around. Therefore electrons will be charge carriers in this material. p-type: If you instead replace an atom with another atom of one smaller atomic number, then it is missing one electron. It has a hole. To fit in, it must take an electron from a neighbour. The neighbour will then take an electron from another neighbour etc. The hole seems to "move around freely", and corresponds to a missing electron, that is a positive charge. We call it a positive charge carrier. An electron carries a negative charge. A hole corresponds to a positive charge. If electrons move to the right, then negative charge moves right. If holes move to the left, this likewise corresponds to negative charge moving to the right. The stuff that moves from (+) to (-) in the second video is simply positive charge. It can be no other. 
In different simulations and illustrations you might see either positive or negative charge displayed. It is simply a choice of what to show, and it illustrates the same thing nevertheless. In standard electric circuits with diodes like this, just remind yourself all the time that what is actually moving is electrons in the wires (towards +), electrons in the n-type (towards +), and holes in the p-type (towards -). All these correspond to the same thing: negative charge towards (+) and positive charge towards (-).
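The sign bookkeeping in that last sentence can be reduced to one line of arithmetic: a carrier's contribution to conventional current goes as (charge) × (drift direction). A toy sketch (my own illustration, not from the videos):

```python
# Direction convention: +1 means "toward the (+) terminal".
# A carrier's contribution to conventional current ~ charge * drift direction.
electron = (-1) * (+1)   # electrons drift toward (+)
hole     = (+1) * (-1)   # holes drift toward (-)

# Both carrier types contribute current with the same sign, which is why a
# simulation may animate either one without contradicting the other.
assert electron == hole
```

This is why the two videos are not contradicting each other: showing holes moving from (+) to (-) and showing electrons moving from (-) to (+) describe the same current.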
{ "domain": "physics.stackexchange", "id": 23689, "tags": "electricity" }
Refactoring long class to make it more maintainable
Question: I am developing an Android app. Due to my low experience and the requirement for network operation, my class uses a lot of AsyncTask instances and it grew quite large. I'd like to know how I can split this into different classes, and generally make it better (not required). I am really confused in cases where I have to deal with UI elements. I wish to see some small code snippets which will demonstrate good practice. I am not asking for a full and really detailed answer, as it probably will be very long, but it would be really nice. (Should I add some comments?) import java.io.BufferedOutputStream; import java.io.File; import java.io.FileNotFoundException; import java.io.FileOutputStream; import java.io.IOException; import java.net.URI; import java.util.ArrayList; import java.util.Calendar; import java.util.HashMap; import java.util.List; import org.apache.http.annotation.ThreadSafe; import org.json.JSONArray; import org.json.JSONException; import org.json.JSONObject; import ua.mirkvartir.android.frontend.adapter.GPSTracker; import ua.mirkvartir.android.frontend.adapter.JSONParser; import ua.mirkvartir.android.frontend.adapter.UserFunctions; import android.app.Activity; import android.app.AlertDialog; import android.content.ContentValues; import android.content.Context; import android.content.DialogInterface; import android.content.Intent; import android.database.Cursor; import android.graphics.Bitmap; import android.graphics.Bitmap.CompressFormat; import android.graphics.BitmapFactory; import android.net.Uri; import android.os.AsyncTask; import android.os.Bundle; import android.os.Environment; import android.provider.MediaStore; import android.support.v4.content.CursorLoader; import android.util.Log; import android.view.View; import android.view.View.OnClickListener; import android.widget.AdapterView; import android.widget.AdapterView.OnItemSelectedListener; import android.widget.ArrayAdapter; import android.widget.Button; import android.widget.CheckBox; 
import android.widget.EditText; import android.widget.ImageView; import android.widget.RelativeLayout; import android.widget.Spinner; public class AddFillActivityApp extends ErrorActivity { Uri mCapturedImageURI; int gSelected; List<String> photos; int sourse = 3; String user_id; int dSelected; // File f; String filePath; int stSelected; int sSelected; int rSelected; EditText street; EditText price; Spinner spinPriceType; EditText flat; EditText sqTotal; EditText sqLiving; EditText sqKitchen; EditText floor; EditText floors; EditText text; EditText phone1; EditText phone2; Spinner spinBuildType; Button btnSend; Context context; String not_sourse = ""; String not_user = ""; String not_district = ""; String not_settle = ""; String not_section = ""; String not_street = ""; String not_price = ""; String not_priceFor = ""; String not_flat = ""; String not_sqTotal = ""; String not_sqLiving = ""; String not_sqKitchen = ""; String not_floor = ""; String not_floors = ""; String not_text = ""; String not_phone1 = ""; String not_phone2 = ""; String not_build_type = ""; String photo1 = ""; String photo2 = ""; String photo3 = ""; ImageView pho1; ImageView pho2; ImageView pho3; CheckBox checkbox; List<Integer> priceID = new ArrayList<Integer>(); List<Integer> buildTypeID = new ArrayList<Integer>(); List<String> priceField = new ArrayList<String>(); List<String> buildTypeField = new ArrayList<String>(); Activity app; ArrayAdapter<String> adapterBuild; ArrayAdapter<String> adapterPrice; String not_coord = ""; Uri link1 = null; Uri link2 = null; Uri link3 = null; @Override protected void onRestoreInstanceState(Bundle savedInstanceState) { super.onRestoreInstanceState(savedInstanceState); if (savedInstanceState.containsKey("link1")) { link1 = Uri.parse(savedInstanceState.getString("link1")); } if (savedInstanceState.containsKey("link2")) { link2 = Uri.parse(savedInstanceState.getString("link2")); } if (savedInstanceState.containsKey("link3")) { link3 = 
Uri.parse(savedInstanceState.getString("link3")); } } @Override protected void onResume() { super.onResume(); Log.d("links", link1 + "\n" + link2 + "\n" + link3); if (link1 != null && !link1.equals("")) { saveFile( decodeSampledBitmapFromResource(getRealPathFromURI(link1), 800, 800), 1); pho1.setImageBitmap(decodeSampledBitmapFromResource( getRealPathFromURI(link1), 80, 60)); } if (link2 != null && !link2.equals("")) { saveFile( decodeSampledBitmapFromResource(getRealPathFromURI(link2), 800, 800), 2); pho2.setImageBitmap(decodeSampledBitmapFromResource( getRealPathFromURI(link2), 80, 60)); } if (link3 != null && !link3.equals("")) { saveFile( decodeSampledBitmapFromResource(getRealPathFromURI(link3), 800, 800), 3); pho3.setImageBitmap(decodeSampledBitmapFromResource( getRealPathFromURI(link3), 80, 60)); } } @Override public void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); app = this; context = this; pds.setTitle("Загружаются фото..."); setContentView(R.layout.aadd_advappartment); pho1 = (ImageView) findViewById(R.id.imagePhoto1); pho2 = (ImageView) findViewById(R.id.imagePhoto2); pho3 = (ImageView) findViewById(R.id.imagePhoto3); Button f1 = (Button) findViewById(R.id.btnPhoto1); Button f2 = (Button) findViewById(R.id.btnPhoto2); Button f3 = (Button) findViewById(R.id.btnPhoto3); ImageView back_button = (ImageView) findViewById(R.id.imageView3); checkbox = (CheckBox) findViewById(R.id.checkBox1); f1.setOnClickListener(new OnClickListener() { public void onClick(View arg0) { addPhoto1(); } }); f2.setOnClickListener(new OnClickListener() { public void onClick(View arg0) { addPhoto2(); } }); f3.setOnClickListener(new OnClickListener() { public void onClick(View arg0) { addPhoto3(); } }); back_button.setOnClickListener(new OnClickListener() { public void onClick(View arg0) { delDir(); finish(); } }); Bundle ex = getIntent().getExtras(); if (ex != null) { btnSend = (Button) findViewById(R.id.btnSendAdd); gSelected = ex.getInt("Group"); 
sSelected = ex.getInt("Section"); dSelected = ex.getInt("District"); stSelected = ex.getInt("Settle"); rSelected = ex.getInt("Region"); user_id = ex.getString("user_id"); RelativeLayout lp = (RelativeLayout) findViewById(R.id.layoutPark); RelativeLayout la = (RelativeLayout) findViewById(R.id.layoutApp); not_sourse = "" + 3; not_user = "" + user_id; not_district = "" + dSelected; not_settle = "" + stSelected; not_section = "" + sSelected; if (gSelected == 9) { lp.setVisibility(View.VISIBLE); la.setVisibility(View.GONE); spinPriceType = (Spinner) findViewById(R.id.sppricetipe); street = (EditText) findViewById(R.id.etpAdress); FillData dat = new FillData(); dat.execute(1, gSelected, sSelected); } else { la.setVisibility(View.VISIBLE); lp.setVisibility(View.GONE); spinPriceType = (Spinner) findViewById(R.id.saPricetipe); spinBuildType = (Spinner) findViewById(R.id.saBuildType); FillData dat = new FillData(); dat.execute(1, gSelected, sSelected); adapterBuild = new ArrayAdapter<String>(app, android.R.layout.simple_spinner_item, buildTypeField); adapterBuild .setDropDownViewResource(android.R.layout.simple_spinner_dropdown_item); spinBuildType.setAdapter(adapterBuild); dat = new FillData(); dat.execute(0); } adapterPrice = new ArrayAdapter<String>(this, android.R.layout.simple_spinner_item, priceField); adapterPrice .setDropDownViewResource(android.R.layout.simple_spinner_dropdown_item); spinPriceType.setAdapter(adapterPrice); // Listening to Login Screen link btnSend.setOnClickListener(new View.OnClickListener() { public void onClick(View view) { if (gSelected == 9) { price = (EditText) findViewById(R.id.etpPrice); flat = (EditText) findViewById(R.id.etpCarSpace); text = (EditText) findViewById(R.id.etpDescription); not_street = street.getText().toString(); not_price = price.getText().toString(); not_flat = flat.getText().toString(); not_sqTotal = ""; not_sqLiving = ""; not_sqKitchen = ""; not_floor = ""; not_floors = ""; not_text = ""; not_phone1 = ""; not_phone2 = 
""; not_build_type = "0"; } else { street = (EditText) findViewById(R.id.etaAdress); price = (EditText) findViewById(R.id.etaPrice); flat = (EditText) findViewById(R.id.etaCarSpace); sqTotal = (EditText) findViewById(R.id.etaArea1); sqLiving = (EditText) findViewById(R.id.etaArea2); sqKitchen = (EditText) findViewById(R.id.etaArea3); floor = (EditText) findViewById(R.id.etaFlor); floors = (EditText) findViewById(R.id.etaHight); text = (EditText) findViewById(R.id.etaDescription); not_street = street.getText().toString(); not_price = price.getText().toString(); not_flat = flat.getText().toString(); not_sqTotal = sqTotal.getText().toString(); not_sqLiving = sqLiving.getText().toString(); not_sqKitchen = sqKitchen.getText().toString(); not_floor = floor.getText().toString(); not_floors = floors.getText().toString(); not_text = text.getText().toString(); not_phone1 = "0"; not_phone2 = "0"; } Upload upload = new Upload(); upload.execute(); } }); spinPriceType .setOnItemSelectedListener(new OnItemSelectedListener() { @Override public void onItemSelected(AdapterView<?> parent, View view, int position, long id) { if (position > -1 && priceID != null) { not_priceFor = priceID.get(position).toString(); } } @Override public void onNothingSelected(AdapterView<?> arg0) { // TODO Auto-generated method stub } }); if (gSelected != 9) { spinBuildType .setOnItemSelectedListener(new OnItemSelectedListener() { @Override public void onItemSelected(AdapterView<?> parent, View view, int position, long id) { if (position > -1 && buildTypeID != null) { not_build_type = buildTypeID.get(position) .toString(); } } @Override public void onNothingSelected(AdapterView<?> arg0) { // TODO Auto-generated method stub } }); } } } class FillData extends AsyncTask<Integer, Void, JSONArray> { private int op = 0; @Override protected void onPreExecute() { pd.show(); } @Override protected JSONArray doInBackground(Integer... 
option) { JSONArray json = null; UserFunctions u = new UserFunctions(); op = option[0]; if (option[0] == 0) { json = u.getBt(); } else if (option[0] == 1) { String not_group = option[1].toString(); String not_section = option[2].toString(); json = u.getPriceType(not_group, not_section); } // send add return json; } @Override protected void onPostExecute(JSONArray result) { super.onPostExecute(result); pd.dismiss(); if (result == null) { onCreateDialog(LOST_CONNECTION).show(); } else if (op == 0) { if (result != null) { for (int i = 0; i < result.length(); i++) { String a = "0"; String b = "0"; try { a = ((JSONObject) result.get(i)).getString("bt_id"); b = ((JSONObject) result.get(i)) .getString("bt_name"); } catch (JSONException e) { // TODO Auto-generated catch block e.printStackTrace(); } buildTypeID.add(Integer.parseInt(a)); buildTypeField.add(b); } runOnUiThread(new Runnable() { public void run() { adapterBuild.notifyDataSetChanged(); } }); } } else if (op == 1) { if (result != null) { for (int i = 0; i < result.length(); i++) { String a = "0"; String b = "0"; try { a = ((JSONObject) result.get(i)) .getString("pricefor_id"); b = ((JSONObject) result.get(i)) .getString("pricefor_title"); } catch (JSONException e) { // TODO Auto-generated catch block e.printStackTrace(); } priceID.add(Integer.parseInt(a)); priceField.add(b); } runOnUiThread(new Runnable() { public void run() { adapterPrice.notifyDataSetChanged(); } }); } } } } class Upload extends AsyncTask<String, Void, JSONObject> { @Override protected void onPreExecute() { pds.show(); } @Override protected JSONObject doInBackground(String... 
params) { JSONObject json1 = null; JSONParser u = new JSONParser(); photos = new ArrayList<String>(); photos.add(photo1); photos.add(photo2); photos.add(photo3); List<File> f = new ArrayList<File>(); for (String p : photos) { Log.d("FilePath", p); if (!p.equals("") && !p.equals("empty")) { f.add(new File(p)); } } if (f.size() > 0) { json1 = u.getJSONFromUrl(f); } else { return null; } photos.clear(); return json1; } @Override protected void onPostExecute(JSONObject result) { JSONArray urls = null; SendData s; if (result != null && result.has("url")) { try { urls = result.getJSONArray("url"); } catch (JSONException e) { // TODO Auto-generated catch block e.printStackTrace(); } photos.clear(); if (urls != null) { for (int i = 0; i < urls.length(); i++) { try { photos.add(urls.getJSONObject(i).getString( "photo_id")); } catch (JSONException e) { // TODO Auto-generated catch block e.printStackTrace(); } } } } s = new SendData(); s.execute(not_sourse, not_user, not_district, not_settle, not_section, not_street, not_price, not_priceFor, not_flat, not_sqTotal, not_sqLiving, not_sqKitchen, not_floor, not_floors, not_text, not_phone1, not_phone2, not_build_type, "" + rSelected); } } class SendData extends AsyncTask<String, Void, JSONObject> { @Override protected void onPreExecute() { pd.setTitle("Создаю Объявление..."); if (checkbox.isChecked()) { runOnUiThread(new Runnable() { public void run() { GPSTracker mGPS = new GPSTracker(context); if (mGPS.canGetLocation) { double mLat = mGPS.getLatitude(); double mLong = mGPS.getLongitude(); not_coord = "" + mLong + "," + mLat; } else { // can't get the location } } }); } } @Override protected JSONObject doInBackground(String... 
option) { JSONObject json = null; UserFunctions u = new UserFunctions(); // publishProgress(values); String not_sourse = option[0]; String not_user = option[1]; String not_district = option[2]; String not_settle = option[3]; String not_section = option[4]; String not_street = option[5]; String not_price = option[6]; String not_priceFor = option[7]; String not_flat = option[8]; String not_sqTotal = option[9]; String not_sqLiving = option[10]; String not_sqKitchen = option[11]; String not_floor = option[12]; String not_floors = option[13]; String not_text = option[14]; String not_phone1 = option[15]; String not_phone2 = option[16]; String not_build_type = option[17]; String not_region = option[18]; json = u.sendAdd(not_sourse, not_user, not_district, not_settle, not_section, not_street, not_price, not_priceFor, not_flat, not_sqTotal, not_sqLiving, not_sqKitchen, not_floor, not_floors, not_text, not_phone1, not_phone2, not_build_type, not_region, photos, not_coord); return json; } @Override protected void onPostExecute(JSONObject result) { super.onPostExecute(result); Log.e("result", "going to result"); String not_id = "0"; pds.dismiss(); if (result == null) { Log.e("result", "null"); onCreateDialog(LOST_CONNECTION).show(); } else { Log.e("result", "not null:" + result.toString()); try { if (result != null) { not_id = result.getString("notice_id"); Intent i = new Intent(getApplicationContext(), AddToCheckActivity.class); i.putExtra("add_id", not_id); startActivity(i); link1 = null; link2 = null; link3 = null; delDir(); finish(); } } catch (JSONException e) { // TODO Auto-generated catch block e.printStackTrace(); } } } } public void addPhoto1() { AlertDialog.Builder builder = new AlertDialog.Builder(context); builder.setMessage("Выберите первое фото") .setCancelable(false) .setPositiveButton("С диска", new DialogInterface.OnClickListener() { public void onClick(DialogInterface dialog, int id) { // do things Intent intent = new Intent(); intent.setType("image/*"); 
link1 = null; intent.setAction(Intent.ACTION_GET_CONTENT); startActivityForResult(Intent.createChooser( intent, "Select Picture"), 11); if (filePath != null) { } } }) .setNegativeButton("Сфотографировать", new DialogInterface.OnClickListener() { public void onClick(DialogInterface dialog, int id) { // do things Calendar c = Calendar.getInstance(); ContentValues values = new ContentValues(); values.put(MediaStore.Images.Media.TITLE, "zdanie " + c.getTime()); link1 = null; link1 = getContentResolver() .insert(MediaStore.Images.Media.EXTERNAL_CONTENT_URI, values); Intent intentPicture = new Intent( MediaStore.ACTION_IMAGE_CAPTURE); intentPicture.putExtra(MediaStore.EXTRA_OUTPUT, link1); startActivityForResult(intentPicture, 12); } }); builder.create().show(); } public void addPhoto2() { AlertDialog.Builder builder = new AlertDialog.Builder(context); builder.setMessage("Выберите второе фото") .setCancelable(false) .setPositiveButton("С диска", new DialogInterface.OnClickListener() { public void onClick(DialogInterface dialog, int id) { // do things Intent intent = new Intent(); intent.setType("image/*"); link2 = null; intent.setAction(Intent.ACTION_GET_CONTENT); startActivityForResult(Intent.createChooser( intent, "Select Picture"), 21); if (filePath != null) { Log.d("Execution", "CCCCMON"); } } }) .setNegativeButton("Сфотографировать", new DialogInterface.OnClickListener() { public void onClick(DialogInterface dialog, int id) { // do things Calendar c = Calendar.getInstance(); ContentValues values = new ContentValues(); values.put(MediaStore.Images.Media.TITLE, "zdanie " + c.getTime()); link2 = null; link2 = getContentResolver() .insert(MediaStore.Images.Media.EXTERNAL_CONTENT_URI, values); Intent intentPicture = new Intent( MediaStore.ACTION_IMAGE_CAPTURE); intentPicture.putExtra(MediaStore.EXTRA_OUTPUT, link2); startActivityForResult(intentPicture, 22); } }); builder.create().show(); } @Override protected void onSaveInstanceState(Bundle outState) { 
super.onSaveInstanceState(outState); if (link1 != null) { outState.putString("link1", link1.toString()); } if (link2 != null) { outState.putString("link2", link2.toString()); } if (link3 != null) { outState.putString("link3", link3.toString()); } } protected void onActivityResult(int requestCode, int resultCode, Intent data) { super.onActivityResult(requestCode, resultCode, data); String a = ""; if (data == null) { a = "null"; } else a = data.toString(); Log.d("RealPAth", "resultCode " + resultCode + " data " + a); if (requestCode == 12 && resultCode == RESULT_OK) { } if (requestCode == 11 && resultCode == RESULT_OK && null != data) { link1 = data.getData(); String picturePath = getRealPathFromURI(link1); pho1.setImageBitmap(decodeSampledBitmapFromResource(picturePath, 80, 60)); saveFile(decodeSampledBitmapFromResource(picturePath, 800, 800), 1); } if (requestCode == 22 && resultCode == RESULT_OK) { } if (requestCode == 21 && resultCode == RESULT_OK && null != data) { link2 = data.getData(); String picturePath = getRealPathFromURI(link2); pho2.setImageBitmap(decodeSampledBitmapFromResource(picturePath, 80, 60)); saveFile(decodeSampledBitmapFromResource(picturePath, 800, 800), 2); } if (requestCode == 32 && resultCode == RESULT_OK) { } if (requestCode == 31 && resultCode == RESULT_OK && null != data) { link3 = data.getData(); String picturePath = getRealPathFromURI(link3); pho3.setImageBitmap(decodeSampledBitmapFromResource(picturePath, 80, 60)); saveFile(decodeSampledBitmapFromResource(picturePath, 800, 800), 3); } } public String saveFile(Bitmap bm, int id) { if (bm == null) { Log.d("bitmap", "null"); return ""; } else { String file_path = getExternalCacheDir () + "/Mk"; File dir = new File(file_path); if (!dir.exists()) { dir.mkdirs(); } File file = new File(dir, "smaller" + id +bm.getHeight()+ ".jpeg"); FileOutputStream fOut; try { fOut = new FileOutputStream(file); // bm.compress(Bitmap.CompressFormat.PNG, 85, fOut); BufferedOutputStream bos = new 
BufferedOutputStream(fOut); bm.compress(CompressFormat.JPEG, 85, bos); if (bos != null) { bos.flush(); bos.close(); } if (fOut != null) { fOut.flush(); fOut.close(); } if (id == 1) { photo1 = file.getPath(); } else if (id == 2) { photo2 = file.getPath(); } else if (id == 3) { photo3 = file.getPath(); } } catch (FileNotFoundException e) { // TODO Auto-generated catch block e.printStackTrace(); } catch (IOException e) { // TODO Auto-generated catch block e.printStackTrace(); } return ""; } } public void photo1Clear(View v) { photo1 = ""; link1 = null; pho1.setImageResource(R.drawable.home); } public void photo2Clear(View v) { photo2 = ""; link2 = null; pho2.setImageResource(R.drawable.home); } public void photo3Clear(View v) { photo3 = ""; link3 = null; pho3.setImageResource(R.drawable.home); } public String getRealPathFromURI(Uri contentUri) { if (contentUri == null) { Log.e("RealPath", "URI: null"); } else Log.e("RealPath", "URI: " + contentUri.toString()); try { String[] proj = { MediaStore.Images.Media.DATA }; CursorLoader loader = new CursorLoader(app, contentUri, proj, null, null, null); Cursor cursor = loader.loadInBackground(); int column_index = cursor .getColumnIndexOrThrow(MediaStore.Images.Media.DATA); cursor.moveToFirst(); return cursor.getString(column_index); } catch (Exception e) { Log.e("RealPath", "exeption" + " " + e.toString()); e.printStackTrace(); } return ""; } public static int calculateInSampleSize(BitmapFactory.Options options, int reqWidth, int reqHeight) { // Raw height and width of image final int height = options.outHeight; final int width = options.outWidth; int inSampleSize = 1; if (height > reqHeight || width > reqWidth) { // Calculate ratios of height and width to requested height and // width final int heightRatio = Math.round((float) height / (float) reqHeight); final int widthRatio = Math.round((float) width / (float) reqWidth); // Choose the smallest ratio as inSampleSize value, this will // guarantee // a final image with both 
dimensions larger than or equal to the // requested height and width. inSampleSize = heightRatio < widthRatio ? heightRatio : widthRatio; Log.d("bitmap", "original options: height/reqheite " + height+"///"+reqHeight + " widnth/reqwidth " + width +"///"+reqWidth + " inSampleSize " + inSampleSize); } return inSampleSize; } public static Bitmap decodeSampledBitmapFromResource(String path, int reqWidth, int reqHeight) { Log.e("PATH", path); // First decode with inJustDecodeBounds=true to check dimensions final BitmapFactory.Options options = new BitmapFactory.Options(); options.inJustDecodeBounds = true; Bitmap b = null; b = BitmapFactory.decodeFile(path, options); // Calculate inSampleSizeа options.inSampleSize = calculateInSampleSize(options, reqWidth, reqHeight); // Decode bitmap with inSampleSize set options.inJustDecodeBounds = false; try { b = BitmapFactory.decodeFile(path, options); Log.d("bitmap", "decoded sucsessfully"); } catch (Exception e) { e.printStackTrace(); Log.d("bitmap", "decoding failed; options: " + options.toString()); } return b; } } Answer: A few tips, but there may be more. These are just the things that immediately jumped out at me: Break out AsyncTasks into their own class With multiple tasks, it might be better to make them each their own class/file. You can call back to the main Activity with listeners. If you must keep them in place, at least group them together at the end of the file. There's no reason to have them right in the middle of another class, with methods on both sides. Combine the button listeners Instead of making several anonymous listeners, you can let the Activity implement the listener and have all the corresponding code in one neat method. There, you switch on button id to call individual methods to handle each. This is a personal style issue, but I think it makes it much more readable to have one onClick() function. 
Combine the addPhoto() methods The only difference I see between addPhoto1() and addPhoto2() is the request code. Combine them into one addPhoto(int reqCode) method. If you add more later, it will be much easier than copy/pasting a whole new method called addPhoto7(), etc. Learn to use switch more, or refactor those if blocks Big if/else blocks are ugly, and can sometimes be rewritten with a single switch. Sometimes, you need a bit more. For instance, in your onActivityResult() method, you can easily check resultCode just once, with a switch inside that. Same with the data == null checks.
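To make the "combine the addPhoto() methods" suggestion concrete, here is a hedged sketch; the request-code scheme (gallery = slot*10+1, camera = slot*10+2) is inferred from the 11/12, 21/22, 31/32 codes already in the posted code, and the helper class itself is purely illustrative, not part of the original Activity:

```java
// Illustrative only: the request-code pattern a single addPhoto(int slot)
// method could use instead of addPhoto1/addPhoto2/addPhoto3.
public class PhotoRequestCodes {

    // Gallery pick for photo slot n (1-based): 11, 21, 31, ...
    public static int galleryCode(int slot) {
        return slot * 10 + 1;
    }

    // Camera capture for photo slot n: 12, 22, 32, ...
    public static int cameraCode(int slot) {
        return slot * 10 + 2;
    }

    // Inverse lookup for onActivityResult: which slot a code belongs to.
    public static int slotOf(int requestCode) {
        return requestCode / 10;
    }

    public static void main(String[] args) {
        if (galleryCode(1) != 11 || cameraCode(1) != 12) throw new AssertionError();
        if (galleryCode(3) != 31 || slotOf(22) != 2) throw new AssertionError();
        System.out.println("codes ok");
    }
}
```

A single addPhoto(int slot) would then build the dialog once and pass galleryCode(slot)/cameraCode(slot) to startActivityForResult, and onActivityResult could switch on slotOf(requestCode) instead of repeating near-identical if blocks.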
{ "domain": "codereview.stackexchange", "id": 4575, "tags": "java, android" }
Eigenfunctions of $C\psi(x) = \psi^*(x)$
Question: I am considering the complex conjugation operator $C\psi(x) = \psi^*(x)$. As $C^2$ is the identity operator, I've deduced that the possible eigenvalues of the $C$ are $\pm 1.$ To determine the eigenfunctions of $C$, I've observed that, if $\lambda = 1$, then an eigenfunction $\psi_{1}$ satisfying $C\psi_{1}(x) = \psi_{1}(x)$ must be purely real. Likewise, if $\lambda = -1$, the eigenfunction $\psi_{-1}$ satisfying $C\psi_{-1}(x) = -\psi_{-1}(x)$ must be purely imaginary. Am I right to deduce that the eigenfunctions of the complex conjugation operator therefore consist of the set of purely real and imaginary wavefunctions? I feel that something is quite off with this deduction. If it is correct, however, how would I begin to prove that such a set should or should not be complete and/or orthogonal? Answer: Let us denote by $C$ the standard complex conjugation of functions on a Hilbert space $H:=L^2(X)$. It is already false that the eigenvalues of $C$ are $\pm 1$. First of all, from $C\psi = \lambda \psi$ you have $CC\psi = \overline{\lambda} C\psi = |\lambda|^2 \psi$. Since $CC=I$ and $\psi\neq 0$, we conclude that $|\lambda|=1$ and not $\lambda = \pm 1$. Furthermore, suppose that $\psi \neq 0$ is a real ($L^2$) function and consider $\phi_\alpha:= e^{i\alpha} \psi$ for a given real $\alpha$. In this case $$C\phi_\alpha = e^{-i\alpha} C\psi = e^{-i\alpha} \psi = e^{-2i\alpha} e^{i\alpha}\psi \:.$$ In summary, $$C\phi_\alpha = e^{-2i\alpha} \phi_\alpha\:.$$ So that there are also eigenvectors with arbitrary unit complex eigenvalue and not only $\pm 1$. Notice also that the complex combination of eigenvectors with a given eigenvalue are not eigenvectors (unless the coefficients of the combination are real) so that the notion of eigenspace is more delicate (it makes sense only referring to real combinations). 
It is also false that eigenvectors with different eigenvalues are orthogonal as a trivial consequence of the construction above: $\phi_\alpha$ and $\phi_{\alpha'}$ are not orthogonal if $\alpha\neq \alpha'$, but the eigenvalues are different if $\alpha-\alpha' \not \in 2\pi \mathbb{Z}$. This failure prevents us from obtaining a complete orthogonal decomposition of vectors in terms of all the eigenvectors of $C$. However, it is possible to have complete expansions in terms of some eigenvectors if the Hilbert space is separable. As a matter of fact there is a Hilbert basis of eigenvectors with eigenvalue $1$. Let $N:=\{u_n\}_{n\in \mathbb{N}}$ be a Hilbert basis (countable because the Hilbert space is assumed to be separable). Then we can always extract another Hilbert basis made of real functions, thus eigenvectors of $C$ with eigenvalue $1$. To this end, decompose every $u_n$ into real and imaginary parts and form a new set of vectors $M:=\{\mathfrak{R}\left(u_n\right), \mathfrak{I}\left(u_n\right)\}_{n\in \mathbb N}$ (some of these vectors may be $0$!). The $L^2$ scalar products of pairs of vectors in $M$ are real by construction. Therefore the standard Gram-Schmidt orthogonalisation procedure yields a new orthonormal set of real vectors $N'$. Since the finite complex linear combinations of $N$ are the same as those of $N'$, we conclude that the complex span of $N'$ is dense in $H$. In other words $N'$ is a Hilbert basis of $H$, made of eigenvectors of $C$ with eigenvalue $1$. (But it does not mean that $H$ is the eigenspace of $C$ with eigenvalue $1$ as it would happen for bounded linear operators, avoiding the contradiction $C=I$!) All the discussion can be extended to generic Hilbert spaces $H$ for antilinear operators $C:H\to H$ satisfying $CC=I$.
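A quick numerical illustration of the eigenvalue claim above, with vectors in $\mathbb{C}^n$ standing in for $L^2$ functions (this sketch is mine, not part of the original answer):

```python
import numpy as np

rng = np.random.default_rng(0)
psi = rng.normal(size=5)             # a real (nonzero) stand-in for an L^2 function
alpha = 0.7
phi = np.exp(1j * alpha) * psi       # phi_alpha = e^{i alpha} psi

# C is complex conjugation: check C phi = e^{-2 i alpha} phi
assert np.allclose(np.conj(phi), np.exp(-2j * alpha) * phi)

# Antilinearity: a complex multiple of a C-eigenvector is generally
# not an eigenvector with the same eigenvalue.
chi = (1 + 2j) * psi
assert not np.allclose(np.conj(chi), chi)
```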
{ "domain": "physics.stackexchange", "id": 91265, "tags": "hilbert-space, operators, complex-numbers, linear-algebra, eigenvalue" }
Exocyclic vs. endocyclic double bonds in E1 elimination
Question: What will be the major product of this elimination reaction? The exocyclic product has more substituent groups on the alkene which should make it more stable but I have heard that exocyclic alkenes are less stable than their endocyclic counterparts. Why is this the case and is it a strong enough effect to outweigh the stability provided by a fourth alkyl substituent? Additionally, what influence will kinetics have on this reaction? I think that the endocyclic product should be kinetically favoured because there is less steric hindrance for deprotonation of the intermediate cation at the ring position than at the isopropyl position. Answer: What will be the major product of this elimination reaction? In the acid-catalyzed dehydration of 1-methylcyclohexanol, both 1-methylcyclohexene and methylenecyclohexane are formed in ca. an 85:15 ratio. In your example we have two more substituents on the exocyclic double bond which will serve to further stabilize the exocyclic double bond and increase the amount of the exocyclic isomer. I don't know for sure how much it will push the equilibrium towards the exocyclic isomer, but I might guess that the product mixture would be at least 50:50, if not more of the exocyclic isomer - but I would expect a mixture. what influence will kinetics have on this reaction? Well you won't see the kinetics unless you go out of your way and look for it. Factors that influence which isomer is kinetically favored include: statistical factors - there are 4 ring hydrogens that can be eliminated vs. 
only 1 on the isopropyl group; this favors kinetic formation of the endocyclic isomer
alignment of the hydrogen with the p-orbital on the carbocation carbon - the hydrogen on the isopropyl group can better align with the p-orbital than the hydrogens in the ring; however, the cyclohexane ring can easily distort, so while this isn't a major factor it favors the exocyclic isomer
steric factors - I agree with your analysis, but would guess that the difference is small; still, sterics would slightly favor the endocyclic isomer
It seems like a close call, but if I had to guess I'd say that the statistical factor wins out (and the steric argument further helps) and the endocyclic isomer would likely be the kinetic product. But again, in acid this is a fast equilibrium; the products will quickly equilibrate and the thermodynamically controlled product distribution will be observed. Maybe if you run the reaction at low temperature with a strong dehydrating agent ($\ce{POCl3}$) you could capture some meaningful kinetic information.
{ "domain": "chemistry.stackexchange", "id": 3505, "tags": "organic-chemistry, reaction-mechanism, stability" }
Embedding python in C++
Question: Hi, I'm trying to embed a Python function in a C++ node. I've read the Python documentation for Python 3.4 and I've tried a little example from the docs:

#include "ros/ros.h"
#include "std_msgs/String.h"
#include <python3.4/Python.h>

int main(int argc, char *argv[]) {
    ros::init(argc, argv, "fault_detector");
    Py_Initialize();
    PyRun_SimpleString("from time import time,ctime\n"
                       "print('Today is', ctime(time()))\n");
    Py_Finalize();
    return 0;
}

and I get this error:

CMakeFiles/ft_node.dir/src/fault_detector.cpp.o: In function `main':
fault_detector.cpp:(.text+0x63): undefined reference to `Py_Initialize'
fault_detector.cpp:(.text+0x72): undefined reference to `PyRun_SimpleStringFlags'
fault_detector.cpp:(.text+0x77): undefined reference to `Py_Finalize'
collect2: error: ld returned 1 exit status
make[2]: *** [/home/mamy/exo_ws/devel/lib/test/ft_node] Error 1
make[1]: *** [test/CMakeFiles/ft_node.dir/all] Error 2

I've done some research on my problem but I can't find anything pertinent. It could be a version problem, as I read, and I don't know how to change that. If anyone has an answer, thank you in advance.

Originally posted by bulgrozer on ROS Answers with karma: 75 on 2015-06-01
Post score: 1

Answer: Looks like a linker error... Have you added something like the following to your CMakeLists.txt?

find_package(PythonLibs 3.4 EXACT)
target_link_libraries(fault_detector ${catkin_LIBRARIES} ${PYTHON_LIBRARIES})

Originally posted by timster with karma: 396 on 2015-06-01
This answer was ACCEPTED on the original site
Post score: 2
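Building on the accepted answer, a slightly fuller CMakeLists fragment may help; this is a sketch, not a verified build file. Note that the target name in target_link_libraries must match the executable target (the error output suggests it is ft_node), and the include_directories line is an assumption that lets you drop the hardcoded python3.4/ prefix from the include:

```cmake
find_package(PythonLibs 3.4 EXACT)           # sets PYTHON_INCLUDE_DIRS / PYTHON_LIBRARIES
include_directories(${PYTHON_INCLUDE_DIRS})  # so you can write #include <Python.h>
add_executable(ft_node src/fault_detector.cpp)
target_link_libraries(ft_node ${catkin_LIBRARIES} ${PYTHON_LIBRARIES})
```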
{ "domain": "robotics.stackexchange", "id": 21813, "tags": "python, c++" }
Calculating the cartesian position of each joint with DH transform
Question: Premise: I am by no means an expert on Robotics; I have to deal with this project where we use a robotic arm (Franka Emika), so feel free to assume that I got the basics wrong! I have this arm ( https://frankaemika.github.io/docs/control_parameters.html ) and I have its DH parameters for each joint. I need to compute each joint's x-y-z position. I understand from here that you can compute, for each joint, a $ [^{n-1}T_n]$ matrix which represents a "change of frame" from the previous frame; I also understand that the product of the first n matrices gives the change-of-frame from the initial frame to the n-th joint frame. But how do you use this T matrix to compute the x-y-z positions? Answer: By combining all the matrices, you'll end up having a homogeneous transformation $T \in SE\left(3\right)$ that can be expressed as: $$ T= \left( \begin{matrix} \mathbf{R} & \mathbf{p} \\ 0 & 1\end{matrix} \right), $$ where the matrix $\mathbf{R} \in \mathbb{R}^{3 \times 3}$ is orthogonal (a rotation matrix) and defines how the final frame is rotated with respect to the root, whereas the vector $\mathbf{p} \in \mathbb{R}^{3 \times 1}$ accounts for the translational part. Indeed, it comes out that: $$ \mathbf{p}=\left( \begin{matrix} x \\ y \\ z \end{matrix} \right). $$
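To make the extraction of $\mathbf{p}$ concrete, here is a small numeric sketch using the classic (distal) DH convention. Caveat: the Franka documentation uses the modified (Craig) convention, whose per-joint matrix differs, and the two-link planar arm below uses made-up link lengths, not the Franka's parameters:

```python
import numpy as np

def dh(a, alpha, d, theta):
    """Classic (distal) DH homogeneous transform from frame n-1 to frame n."""
    ct, st = np.cos(theta), np.sin(theta)
    ca, sa = np.cos(alpha), np.sin(alpha)
    return np.array([
        [ct, -st * ca,  st * sa, a * ct],
        [st,  ct * ca, -ct * sa, a * st],
        [0.0,      sa,       ca,      d],
        [0.0,     0.0,      0.0,    1.0],
    ])

# Two-link planar arm, link lengths 1 m, joints at 0 and 90 degrees.
T01 = dh(a=1.0, alpha=0.0, d=0.0, theta=0.0)
T12 = dh(a=1.0, alpha=0.0, d=0.0, theta=np.pi / 2)
T02 = T01 @ T12              # base frame -> frame of joint 2

p = T02[:3, 3]               # the x-y-z position is just the last column
assert np.allclose(p, [1.0, 1.0, 0.0])
```

Multiplying the first n joint matrices and reading off column 4 (rows 1-3) gives the n-th joint's position in the base frame; the same indexing works for any n.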
{ "domain": "robotics.stackexchange", "id": 1867, "tags": "forward-kinematics" }
Is my understanding of Gauge Symmetries correct?
Question: I'm currently working on a project about Symmetry Breaking for my physics bachelor. Right now I'm trying to understand Gauge Symmetries (although I guess it's not much of a symmetry). I've been thinking about it and want to know if my understanding of the concept is correct. Now for a global symmetry: suppose I am in empty space and there is a chair facing me. In my understanding a symmetry is e.g. if we translate and then rotate the chair a bit so that it is still facing me. For me this operation is entirely symmetric. Now to think of a gauge symmetry I visualized an axis system with an origin at e.g. my location. Would a gauge symmetry be moving the origin to the location of the chair? Because it doesn't change anything in the system and it is not measurable. In other words, I came to think of it like this: Normal symmetry, with some unitary matrix U (e.g. one that translates then rotates the chair) \begin{align} \hat U|a⟩=|b⟩ \end{align} Gauge symmetry \begin{align} |a⟩=|a⟩ \end{align} So basically you've changed nothing that can be measured. Is this correct? Thanks Edit: Are there similar examples one could think of for a local gauge symmetry and a local symmetry? Answer: I would say what you're describing in your question has more to do with the difference between "active" and "passive" transformations than with global vs local/gauge symmetries. In particular, an active transformation is one where we perform some operation on the actual physical state, whereas a passive transformation is a change of our description keeping the physical state fixed. So for example, an active translation of a chair is actually moving the chair. A passive translation of a chair is us moving our origin of coordinates. In the rest of the answer I'll use the language appropriate to active transformations, as they make more physical sense to me. Global vs local symmetries are a different beast entirely.
Exactly how to describe them depends a bit on your level of sophistication. The precise answer, which is possibly useless depending on your level of knowledge, is that a generic symmetry transformation (assuming a continuous symmetry) has the form $\phi(x) \rightarrow \phi'(\phi(x),\theta)$. In other words, we take all our fields $\phi$ and transform them all to new fields $\phi'$ in a way that depends on some parameters $\theta$. What makes these transformations symmetries is that after doing this we find that the physics looks the same in our transformed point of view. The relationship between the transformed fields $\phi'$ and the original fields $\phi$ can be linear or non-linear. Then the key question is whether the parameter of the transformation, $\theta$, depends on space or not. If it does not, we call it a global symmetry: this is a genuine physical symmetry of the system. If $\theta$ does depend on space, we call it a local symmetry: this isn't a genuine symmetry, it really represents a redundancy of our description of the system. To get an intuitive picture of what's going on, let's imagine a field (i.e., a set of numbers at each point in space) which is simply the time at every point on the surface of the earth. Imagine we've covered the surface of the earth with lots of people, each wearing a wristwatch; the value of the field at point x on the earth is the reading on the wristwatch of the person standing there. An example of a global symmetry would be time translations. Imagine moving forward in time by one second. Everyone's watch has been shifted forward in time by the same amount, one second. This is a symmetry in the sense that the laws of physics are still the same. A local symmetry, on the other hand, would be if everyone suddenly and independently changed the settings on their wristwatches. This is a local transformation in the sense that each person can adjust their watch by a different amount.
Note that in this example, the actual time has not changed; all that's happened is that our description of time in terms of people's watches has changed. One thing that's not present in this example is the notion of curvature. Is it possible for everyone to synchronize their watches with each other, and for them to remain synchronized? Where gauge theory really gets interesting is when you consider situations where that is not possible. One way you can go with the wristwatch analogy I'm making is to start thinking about curvature of spacetime: black holes, clocks running at different speeds in a gravitational field. Observers at different radial distances from the center of a black hole cannot synchronize their clocks. Another way to describe the situation, from a less gravitational perspective, is given by Maldacena in this amusing essay: http://arxiv.org/abs/1410.6753.
{ "domain": "physics.stackexchange", "id": 25835, "tags": "symmetry, gauge-theory, symmetry-breaking" }
Can either of LISA, NanoGrav or LIGO measure the polarization of gravitational wave background (GWB)?
Question: Polarization in the GWB should carry as much important information as in the CMB. However, I've done some superficial literature research and found little discussion. Is there any planned project for measuring the polarization? Or is there a method to extract the information from existing experiments? Answer: General Relativity predicts two polarization states for GWs, often referred to as "plus" and "cross". Both of these are tensor-transverse (TT) modes. In theories that extend GR there could be additional scalar-transverse (ST), scalar-longitudinal (SL), and/or vector-longitudinal (VL) modes. Looking for non-GR modes is one way to test GR. If GR is correct, then we shouldn't find any additional modes. Each mode leads to a different overlap reduction function (ORF) for the correlations between detectors. For pulsar timing arrays the TT ORF is the Hellings-Downs curve, which NANOGrav and other PTAs showed evidence for in June 2023. This figure from NANOGrav's 15yr stochastic background analysis paper shows the Hellings-Downs curve for pulsar correlations from the TT modes of GR compared to the measured correlations from data. One way to detect additional polarization modes is to look for deviations from the Hellings-Downs curve (although other effects cause deviations too). NANOGrav carried out that analysis using their 12.5yr dataset in 2021. The Hellings-Downs correlation pattern wasn't measured back then so the results aren't conclusive. In the past few days an independent group has used NANOGrav's publicly available data to look for non-GR modes, and NANOGrav released their findings using their most recent dataset as well. Both analyses found the GR model (with only TT modes) to be marginally preferred over other models. One would need more data to get a more constraining measurement of the correlation pattern to put stronger limits on the existence of non-TT polarization modes.
The same group has previously analyzed other public PTA data sets from NANOGrav, PPTA, and IPTA, looking for additional polarization modes.
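For readers who want to plot it, the Hellings-Downs curve referenced above has a simple closed form; a sketch from the standard expression (the pulsar auto-correlation term, +1/2 for coincident pulsars, is omitted here):

```python
import numpy as np

def hellings_downs(zeta):
    """Expected TT-mode cross-correlation vs. pulsar angular separation zeta (radians)."""
    x = (1.0 - np.cos(zeta)) / 2.0
    if x == 0.0:                       # coincident directions: x*ln(x) -> 0
        return 0.5
    return 1.5 * x * np.log(x) - 0.25 * x + 0.5

assert np.isclose(hellings_downs(0.0), 0.5)      # small-separation limit
assert np.isclose(hellings_downs(np.pi), 0.25)   # antipodal pulsars
assert hellings_downs(np.radians(82.0)) < 0.0    # the curve dips negative near ~82 deg
```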
{ "domain": "physics.stackexchange", "id": 98048, "tags": "general-relativity, cosmology, gravitational-waves, cosmological-inflation, gravitational-wave-detectors" }
What does units/mg mean for Streptavidin
Question: I got streptavidin for surface reaction. The label says "biotin binding: 16 units/mg". What does units/mg mean? Does it mean "1 mg biotin can bind to 16 units SA"? How much is the unit here? Answer: 16 units/mg means 16 units per milligram of protein. Many companies, including Invitrogen, define 1 unit streptavidin as the amount of streptavidin necessary to bind 1 microgram of biotin.
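With that definition, the label's arithmetic is just a proportion; a tiny illustrative calculation (any numbers beyond the 16 units/mg from the label are hypothetical):

```python
# 1 unit = the amount of streptavidin that binds 1 microgram of biotin,
# so "16 units/mg" means 1 mg of this streptavidin preparation binds 16 ug biotin.
BINDING_UG_BIOTIN_PER_MG_SA = 16.0

def streptavidin_needed_mg(biotin_ug):
    """Milligrams of streptavidin required to bind a given mass of biotin."""
    return biotin_ug / BINDING_UG_BIOTIN_PER_MG_SA

assert streptavidin_needed_mg(16.0) == 1.0    # 16 ug biotin needs 1 mg SA
assert streptavidin_needed_mg(4.0) == 0.25    # 4 ug biotin needs 0.25 mg SA
```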
{ "domain": "biology.stackexchange", "id": 3412, "tags": "proteins, protein-binding" }
text classification : comparing classification reports
Question: I have a 4-labelled text classification problem. Could someone help me choose among the text classifiers below? I was advised to select the second one (the one which uses both unigrams and bigrams) but I cannot really see why. Answer: Okay, keeping it short and in the context of your question: Accuracy tells us, out of all the documents, how many are classified correctly. Precision tells us, out of all documents predicted to be in a category, how often that prediction is correct. Unigrams: "nasa", "is", "space", "agency". Bigrams: "nasa is", "is space", "space agency". Now let's go over the numbers: in both cases, accuracy and precision don't differ significantly. But as we can see, bigrams capture much more information and hence can perform better on unseen data. Try testing the model on unseen data / a validation set and compare the difference. Maybe try trigrams, etc., as well.
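A small illustration of what the two feature sets look like (pure Python, whitespace tokenization; this is a sketch of the n-gram idea, not the classifier pipeline behind the reports):

```python
def ngrams(tokens, n):
    """All contiguous n-grams of a token list, joined with spaces."""
    return [" ".join(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

tokens = "nasa is space agency".split()
unigrams = ngrams(tokens, 1)
bigrams = ngrams(tokens, 2)

assert unigrams == ["nasa", "is", "space", "agency"]
assert bigrams == ["nasa is", "is space", "space agency"]

# A "unigrams + bigrams" model uses the union of both lists as features.
features = unigrams + bigrams
assert len(features) == 7
```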
{ "domain": "datascience.stackexchange", "id": 7566, "tags": "classification, confusion-matrix, text-classification" }
Newton's third law "limits"
Question: I push my hand into a wall... The wall pushes at my hand. That's why I don't fall through the wall. BUT what if I get a sledgehammer and smash through the wall... Now all I see is Newton's third law working up until a point where the wall can't exert a big enough force to react against the sledgehammer. So Newton's third law isn't universal? Edit: wait guys, you are not making sense to me. But tell me if this is wrong or right. Is it because the wall pushes back against the sledgehammer with an equal and opposite force as they interact, but when the wall breaks it's just like the hammer is going through the air? Like the wall isn't there anymore? Answer: This is a common misconception when learning Newton's law. You aren't alone! The trick to understanding how to apply Newton's law in situations involving walls and sledgehammers is to break free of our intuition regarding how much force a sledgehammer actually exerts. Intuitively, we tend to associate the force of a strike (such as that of a sledgehammer) with its maximum force -- the force it would exert against an immovable object. A sledgehammer can exert a lot of force like this. However, our intuition leads us astray here. What actually happens is that the sledgehammer, while it is impacting the wall, is exerting less than its maximum force. In fact, it's exerting exactly the same force as the wall is exerting on it (Newton's Third Law in action). The result of those forces is that the hammer is decelerated by the force the wall applies on it, and the wall is driven backwards by the force of the hammer applied on it. This force accelerates the wall away from the sledgehammer. When the wall breaks, that decreases how much force the wall can apply on the hammer, and thus by the 3rd law, how much force the hammer can apply on the section of wall directly in front of the head of the hammer. This lesser force is still used to accelerate the wall and its reaction is still decelerating the hammer.
Given that there is likely very little material in front of the hammer and that it has been accelerated by the hammer already (before the wall fully broke), the amount of force applied here is very small. Almost the same as if the hammer was just flying through the air. What this does mean is that if you tossed a feather in the air and then struck it with a sledgehammer, the sledgehammer would apply a very small force to the feather, and the feather would apply an equally small force to the hammer. The result would rapidly accelerate the feather up to the speed of the hammer, and would decelerate the hammer ever so slightly. Beyond this, another thing that may help is to remember that striking events like these are not actually instantaneous. They can take a few milliseconds to occur, and the forces can change dramatically during that time. As an example, you can see how golf balls deform during impact when hit with a golf club. I find this helpful because a lot of the confusion that can happen learning Newton's laws comes from the idea of a strike happening instantaneously. In slow motion, you can see a lot more is going on!
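The "strikes are not instantaneous" point can be made quantitative with the impulse-momentum theorem; a back-of-the-envelope sketch (all numbers are assumed purely for illustration):

```python
# Average force on the hammer = change in momentum / contact time.
mass_kg = 5.0        # sledgehammer head (assumed)
v_in = 10.0          # m/s at impact (assumed)
v_out = 2.0          # m/s once the wall section gives way (assumed)
contact_s = 0.005    # ~5 ms of contact (assumed)

impulse = mass_kg * (v_in - v_out)   # N*s of momentum lost by the hammer
avg_force = impulse / contact_s      # N, equal in magnitude on hammer and wall

assert impulse == 40.0
assert abs(avg_force - 8000.0) < 1e-6   # ~8 kN average: large, but finite
```

By Newton's third law the wall feels the same ~8 kN average force in the opposite direction, and stretching the contact time (padding, deformation) lowers the peak force for the same impulse.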
{ "domain": "physics.stackexchange", "id": 43198, "tags": "newtonian-mechanics, forces, free-body-diagram" }
What is "data scaling" regarding StandardScaler()?
Question: I'm trying to figure out the purpose of StandardScaler() in sklearn. The tutorial I am following says "Remember that you also need to perform the scaling again because you had a lot of differences in some of the values for your red and white [wines]"

So I looked up the function in the sklearn docs: "Standardize features by removing the mean and scaling to unit variance" https://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.StandardScaler.html

What good would removing the mean do? What is scaling the data? Hard to google that.

# Scale the data with `StandardScaler`
X = StandardScaler().fit_transform(X)

Answer: I will use the k-Nearest Neighbors algorithm to explain why we must do scaling as a preprocessing step in most machine learning algorithms.

Let's say you are trying to predict whether a transaction is fraudulent or not, that is, you have a classification problem, and you only have two features: value of the transaction and time of day. The two variables have very different magnitudes: transaction values can vary from 0 to 100000000 (it is just an example), while time of day only runs from 0 to 24 (let's use only hours).

So, while we are computing the nearest neighbor using Euclidean distance, we will do

distance = sqrt((new_value_transaction - old_value_transaction)**2 + (new_time_of_day - old_time_of_day)**2)

where "old" refers to our training data and "new" to a new transaction whose class we want to predict. So now you can see that the transaction value will have a huge impact. For example, with

new_value_transaction = $100
new_time_of_day = 10
old_value_transaction = $150
old_time_of_day = 11

distance = sqrt(($50)**2 + (1)**2)

Now, you have no indication that the transaction value is more important than the time of day; that is why we scale our data. Among the alternatives we have many options, such as MinMaxScaler, StandardScaler, RobustScaler, etc. Each of them treats the problem differently. To be honest?
Always try to use at least two of them and compare the results. You can read more about it in the sklearn documentation: https://scikit-learn.org/stable/modules/preprocessing.html#preprocessing

I hope you got a feeling for why we should use standardization techniques. Let me know if you have any further questions.

To complement, here is a visual explanation of what I explained above. Credit: https://www.youtube.com/watch?v=d80UD99d4-M&list=PLpQWTe-45nxL3bhyAJMEs90KF_gZmuqtm&index=10 In the video they give a better explanation. Also, I highly recommend this course, the guys are amazing.
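To make the effect concrete, here is a dependency-free Python sketch of what StandardScaler does under the hood (z-scoring each column); the data values are invented for illustration:

```python
import math

# Two features with wildly different magnitudes: transaction value and hour.
data = [[100.0, 10.0], [150.0, 11.0], [100000.0, 10.0], [120.0, 23.0]]

def standardize(rows):
    """Z-score each column: subtract its mean, divide by its standard
    deviation.  This mirrors what sklearn's StandardScaler computes
    (using the population standard deviation)."""
    cols = list(zip(*rows))
    means = [sum(c) / len(c) for c in cols]
    stds = [math.sqrt(sum((x - m) ** 2 for x in c) / len(c))
            for c, m in zip(cols, means)]
    return [[(x - m) / s for x, m, s in zip(row, means, stds)]
            for row in rows]

def euclidean(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

scaled = standardize(data)

# Before scaling, the distance is dominated by the transaction value...
print(euclidean(data[0], data[1]))    # the 1-hour gap is invisible
# ...after scaling, both features contribute on a comparable footing.
print(euclidean(scaled[0], scaled[1]))
```

After standardization, each column has mean 0 and unit variance, so a 1-hour difference and a $50 difference are measured on the same footing instead of the dollars swamping the hours.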
{ "domain": "datascience.stackexchange", "id": 5930, "tags": "machine-learning, scikit-learn" }
ROS Answers SE migration: ROS on fedora
Question: Hello, I was wondering, since there is no "official" way of installing hydro on Fedora, would it be possible to install it from source?

Originally posted by Maya on ROS Answers with karma: 1172 on 2014-03-23
Post score: 0

Answer: Yes, there seem to be a few people using it on Fedora. I would go with the generic from-source install instructions. rosdep does support Fedora to install system dependencies: http://wiki.ros.org/hydro/Installation/Source

People are working on a binary installer for Fedora, so it might soon be even easier than that: https://github.com/ros-infrastructure/bloom/pull/228

Originally posted by demmeln with karma: 4306 on 2014-03-23
This answer was ACCEPTED on the original site
Post score: 0
{ "domain": "robotics.stackexchange", "id": 17392, "tags": "ros, ros-hydro, fedora" }
Anomalous dimension for bare actions with a standard kinetic term
Question: In this paper on p. 42, it is explained that when starting with a bare action that contains a standard kinetic term, this kinetic term acquires a correction in the course of the RG flow, which can be denoted by $1/Z_{\Lambda}$, such that the effective kinetic term at a scale $\Lambda$ can be written as $$ \frac{1}{2Z_{\Lambda}}\int\frac{d^dp}{(2\pi)^d}\phi(-p,\Lambda)p^2\phi(p,\Lambda) $$ The effective four-point interaction would be $$ \frac{1}{4!Z_{\Lambda}^2} \,\, \int\limits_{p_1,\ldots,p_4}\phi(p_1,\Lambda)\ldots\phi(p_4,\Lambda)\hat{\delta}(p_1+\ldots+p_4) $$ and analogously for higher-order interactions. To get rid of the change done to the action described by the $Z_{\Lambda}$-dependent corrections, one redefines the field $\phi$ as $$ \phi \rightarrow \phi\left( 1- \frac{\eta}{2}\frac{\delta\Lambda}{\Lambda}\right) $$ with the anomalous dimension $$ \eta = \Lambda\frac{d\ln Z_{\Lambda}}{d\Lambda} $$

I don't see why the anomalous dimension has to be defined like this in order for the rescaling of the field to lead to a cancellation of the changes to the action due to a renormalization step, and I would like to see an explanation. How generally valid is the above expression for the anomalous dimension anyway? Is it "only" valid when starting with bare actions that have a (standard) kinetic term? How can the anomalous dimension be calculated generally?

I always thought that, roughly speaking, when rescaling after the coarse-graining step, the anomalous dimension "parameterizes" just the deviations from the canonical (some call them engineering) scaling dimensions that have to be applied.

Answer: I will give some hints: it is an anomalous scaling dimension.
The scaling dimension is defined as $$x \rightarrow \lambda x,\\ \phi(x) \rightarrow \lambda^\Delta \phi(\lambda x) $$ From formula (3.45) in the reference (maybe it is better to write $\phi(x) \rightarrow \Lambda^\frac{d-2}{2}\phi(\Lambda^{-1}x)$), we know that the classical dimension of $\phi$ is $\Delta$. Due to the $Z_{\Lambda}^{-1/2}$ in front of $\phi$, there will be a correction to the classical dimension, which gives the anomalous scaling dimension $\delta$ satisfying $$ \Lambda^{\delta}= Z_{\Lambda}^{-1/2}=e^{-\frac{1}{2} \ln Z_\Lambda }$$ So the anomalous scaling dimension is $\delta =-\frac{1}{2}\Lambda \frac{d\ln Z}{d\Lambda}$.
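To see explicitly why the field redefinition in the question cancels the $Z_\Lambda$ correction, here is a first-order bookkeeping sketch (my own, not taken from the paper):

```latex
% Under an infinitesimal RG step \Lambda \to \Lambda - \delta\Lambda,
% with \eta = \Lambda \, d\ln Z_\Lambda / d\Lambda, one has
Z_{\Lambda-\delta\Lambda}
  = Z_\Lambda\Bigl(1 - \eta\,\frac{\delta\Lambda}{\Lambda}\Bigr)
  + O(\delta\Lambda^2),
% so the kinetic prefactor grows:
\frac{1}{2Z_{\Lambda-\delta\Lambda}}
  = \frac{1}{2Z_\Lambda}\Bigl(1 + \eta\,\frac{\delta\Lambda}{\Lambda}\Bigr)
  + O(\delta\Lambda^2).
% Rescaling the field, \phi \to \phi\,(1 - \tfrac{\eta}{2}\frac{\delta\Lambda}{\Lambda}),
% multiplies the quadratic term \phi \, p^2 \, \phi by
\Bigl(1 - \tfrac{\eta}{2}\,\frac{\delta\Lambda}{\Lambda}\Bigr)^2
  = 1 - \eta\,\frac{\delta\Lambda}{\Lambda} + O(\delta\Lambda^2),
% which cancels the factor above, restoring the canonical normalization
% of the kinetic term to first order in \delta\Lambda.
```

The factor of $\eta/2$ (rather than $\eta$) appears precisely because the field enters the kinetic term quadratically, so the rescaling contributes twice.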
{ "domain": "physics.stackexchange", "id": 10528, "tags": "quantum-field-theory, renormalization, scaling" }
Complexity of multiplication
Question: I've been reading around the area of complexity and arithmetic operations using logic gates; one thing that is confusing me is that \begin{equation} \Theta (n^{2}) \end{equation} is quoted as being the complexity of multiplication by iterative addition. But addition of a number requires \begin{equation} \log_2(n) \end{equation} operations, 1 for each bit, or 8 times that for each NAND gate involved in doing this. So it strikes me as obvious that adding that number n times will have a complexity of \begin{equation} n \log_2(n) \end{equation} which is definitely less than \begin{equation} \Theta (n^{2}) \end{equation} So where is this additional factor of \begin{equation} \frac{n}{\log_2(n)} \end{equation} coming from?

Answer: Addition of a number of size $n$ takes time $O(n)$. Don't confuse a number and its encoding size, which is logarithmically smaller. When multiplying an $n$-bit integer $a$ by an $n$-bit integer $b$ using the iterative addition algorithm, you are adding up to $n$ shifted copies of $a$. Each addition costs you $O(n)$ rather than $O(\log n)$. The numbers $a, b$ themselves could be as large as $2^n$.
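A small Python sketch of this counting argument (my own illustration, with a simplified cost model that charges each ripple-carry addition one bit operation per operand bit):

```python
def shift_and_add(a: int, b: int) -> tuple[int, int]:
    """Multiply a*b by adding one shifted copy of a per set bit of b.
    Returns (product, number of charged bit operations).  Each addition
    is charged n bit operations, where n is the operand bit length, so
    with up to n additions the total grows as Theta(n^2)."""
    n = max(a.bit_length(), b.bit_length())
    product, bit_ops = 0, 0
    for i in range(b.bit_length()):
        if (b >> i) & 1:
            product += a << i   # one O(n) ripple-carry addition
            bit_ops += n
    return product, bit_ops

for bits in (8, 16, 32):
    a = (1 << bits) - 1         # worst case: all bits set
    p, ops = shift_and_add(a, a)
    assert p == a * a
    print(f"{bits:2d}-bit operands -> {ops} bit operations")
# Doubling the bit length roughly quadruples the work.
```

The key point the answer makes is visible in the cost line: each of the up-to-$n$ additions touches all $n$ bits of the running sum, not just $\log n$ of them.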
{ "domain": "cs.stackexchange", "id": 6001, "tags": "complexity-theory, arithmetic" }