| anchor | positive | source |
|---|---|---|
Finding out the coefficient of friction | Question: A cube is moving with constant velocity on a smooth horizontal surface. The surface is rough beyond point $A$. The question is to find the minimum value of the coefficient of friction of the surface such that the cube starts toppling as soon as it crosses point $A$.
To topple the cube there must be an unbalanced torque. If $F$ is the force that acts on the cube horizontally at a distance $x$ from the base of the cube (assuming $a$ to be the side length of the cube), then at the critical condition of toppling $mg(a/2)=Fx$, from which we can find $F$ given $x$ or vice versa. But in the present question neither of the two is given, and I couldn't see how to approach this problem. Any ideas? Thanks.
Answer: Friction $f=\mu mg$ exerts a force to the left on the base of the cube. The deceleration $a=f/m$ of the cube causes an inertial (pseudo-) force $ma=f$ to act to the right on the COM of the cube. (This is similar to centrifugal force which acts radially outward on an object in circular motion, which accelerates radially inward.) These two equal and opposite forces constitute a couple which attempts to rotate the cube clockwise.
As the cube begins to rotate clockwise its weight transfers wholly on the lower right corner. The normal reaction $N$ acts vertically through this corner. Weight $W$ acting down on the COM and reaction $N=W$ acting upwards on the corner constitute another couple which attempts to rotate the cube anti-clockwise.
If the clockwise torque due to $f$ is greater than the anti-clockwise torque due to $W$, the cube will rotate. The torque due to $f$ increases as the cube tilts, whereas the torque due to $W$ decreases, until the tipping point is reached at which the COM is above the pivot.
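Putting numbers on this torque balance (a quick sketch; the values of $a$, $m$, $g$ are illustrative, since they all cancel): the friction couple $f \cdot (a/2) = \mu m g \cdot (a/2)$ must at least match the weight couple $mg \cdot (a/2)$ for toppling to begin, which answers the original question without needing $F$ or $x$:

```python
import numpy as np

# Illustrative values: cube side a, mass m, gravity g (they cancel out).
a, m, g = 0.1, 1.0, 9.81

def net_toppling_torque(mu):
    """Net clockwise torque about the pivot corner at the instant of tipping.

    Friction f = mu*m*g acts at the base; the equal pseudo-force acts at the
    COM, a height a/2 above the base, giving a couple f*(a/2). The weight
    couple m*g*(a/2) opposes it.
    """
    f = mu * m * g                 # friction force on the base
    tau_couple = f * (a / 2)       # clockwise (toppling) torque
    tau_weight = m * g * (a / 2)   # anti-clockwise (restoring) torque
    return tau_couple - tau_weight

# The net torque first becomes non-negative at mu = 1, independent of a, m, g.
mus = np.linspace(0, 2, 201)
mu_min = mus[np.argmax([net_toppling_torque(mu) >= 0 for mu in mus])]
print(mu_min)  # -> 1.0
```

So in this idealised rigid-body model the cube can only start to topple immediately if $\mu \ge 1$, which is why neither $F$ nor $x$ needs to be given.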
See related questions Why doesn't a block rotate due to friction? and Would this box on the floor rotate based on friction?.
You might wonder : Isn't there something missing here? Why doesn't $v$ come into this analysis? Experience tells us that the cube is more likely to topple if the velocity $v$ of the cube is high.
It takes time for the net torque to rotate the cube from its initial position past the tipping point. At this point the torque due to $W$ changes from anti-clockwise to clockwise and then promotes toppling instead of opposing it.
If $v$ is low the cube may have stopped decelerating before the COM reaches the tipping point. The torque due to $f$ then disappears, so there is only the anti-clockwise torque due to $W$ still acting. If $v$ is high the deceleration time is more likely to be long enough for the COM to pass the tipping point.
Even if the cube has stopped sliding over the rough surface before the cube reaches the tipping point, its angular momentum might be enough to carry it past that point. | {
"domain": "physics.stackexchange",
"id": 40041,
"tags": "homework-and-exercises, newtonian-mechanics, friction"
} |
Coordinate vector fields associated to normal coordinates | Question: Let $(M,g)$ be a pseudo-Riemann manifold, $p\in M$ and $\mathcal{B}_p=\{X_i^p \,:\,i=1,\ldots,n\}\subset T_pM$ an orthonormal basis of the tangent space at the point $p \in M$.
Attached to this basis $\mathcal{B}_p$ and using the exponential map (induced by the Levi-Civita connection), we have the so-called normal coordinate system around a neighbourhood $U \subset M$ of $p$
$\phi:U \subset M \rightarrow \phi(U) \subset \mathbb{R}^n$.
My question is about the local coordinate vector fields attached to this coordinate system:
$ \left\{ \frac{\partial}{\partial \phi_1}, \ldots, \frac{\partial}{\partial \phi_n} \right\} \subset \mathfrak{X}(U)$ .
It is not difficult to see that these vector fields evaluated at $p$ coincide with the original tangent vectors used to define the normal coordinate system, i.e.
$ \left.\frac{\partial}{\partial \phi_i}\right|_p = X_i^p \;\;\;\;$ for all $i=1,\ldots, n$ ,
and hence they form an orthonormal basis of $T_pM$.
I would like to know whether the coordinate vector fields evaluated at a different point $ U \ni q \neq p$
$ \left\{ \left.\frac{\partial}{\partial \phi_1}\right|_q, \ldots, \left.\frac{\partial}{\partial \phi_n}\right|_q \right\} \subset T_qM$
form an orthonormal basis. If not, in which cases is this true?
I'm thinking that, perhaps, the above situation is related to the curvature of the spacetime. Perhaps something like "coordinate vector fields attached to normal coordinates are orthonormal on $U$ if and only if the spacetime is locally flat on $U$ (the Riemann tensor vanishes on $U$)" may hold?
Answer: If the coordinate basis is orthonormal on an open set $U$, then on that set we have
$$g_{ij} = \left\langle \frac{\partial}{\partial \phi^i}, \frac{\partial}{\partial \phi^j} \right\rangle = \ \eta_{ij},$$
which is only possible if the metric is flat on $U$.
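This can be made quantitative with the well-known Taylor expansion of the metric in Riemann normal coordinates around $p$ (with one common sign convention for the Riemann tensor):

$$g_{ij}(x) = \eta_{ij} - \frac{1}{3} R_{ikjl}\big|_p\, x^k x^l + O(|x|^3),$$

so demanding $g_{ij} = \eta_{ij}$ on all of $U$ (not just at $p$) forces the quadratic curvature term, and hence the Riemann tensor, to vanish there.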
Also, with the usual meaning (at least in physics) of "locally flat", every manifold is locally flat, because the metric can always be set equal to the Euclidean/Minkowski metric plus second order terms. You're using a slightly different meaning, so be careful with that. | {
"domain": "physics.stackexchange",
"id": 67431,
"tags": "general-relativity, differential-geometry, coordinate-systems, vector-fields"
} |
Lindblad superoperator and generated dynamics | Question: In quantum mechanics, in order to evolve the state of an open system, I can use an equation like this $\dot\rho(t)=\mathcal{L}\rho(t)$, where $\mathcal{L}$ is the Lindblad superoperator. In general, $\mathcal{L}$ satisfies
$$\frac{\partial V(t,t_0)}{\partial t} = \mathcal{L}V(t,t_0) \, .$$
In Breuer and Petruccione's book on the theory of open quantum systems, a distinction seems to be drawn between the dynamics generated by $\mathcal{L}$, depending on whether $\mathcal{L}$ itself is time-dependent or not. I don't understand in particular in which case the dynamics is Markovian and why. I am also a bit confused about the semigroup structure followed by this dynamics; in particular, I am not sure whether the semigroup property is lost when $\mathcal{L}$ is time-dependent.
Thank you in advance
Answer: I will try to unravel the question as far as I know the topic.
I will first address the question of when the evolution is Markovian or not. Recall that, in general, a quantum evolution is described by a one-parameter family of dynamical maps $\Phi_t$, which are CPT (completely positive and trace-preserving) maps on the set of states.
At this point, to define what is Markovian and what is not, we must deal with the properties of these dynamical maps. Several definitions of Markovianity can be found in the literature (if you are interested, just ask me). In particular, we must consider the two-parameter family $\Phi_{t,s} = \Phi_{t} \Phi_s^{-1}$. We assume the existence of the inverse, but, pay attention, we cannot guarantee that the inverse is CP, nor even positive; thus $\Phi_{t,s}$ may not be a dynamical map even if $\Phi_t$ and $\Phi_s$ are. This is a further requirement one has to take into account. If the map $\Phi_{t,s}$ is CPT as well, then the map is called divisible. One then defines a Markovian evolution as one given by a CPT-divisible map.
Other definitions are possible, in terms of the trace distance or of the flow of information, and so on (I can point you to several reviews if you are interested in the topic of non-Markovianity).
Anyway, the point I'm trying to make clear is the following: a Markovian evolution is not necessarily described by a Lindblad evolution. More specifically: if the CPT-divisible map is also differentiable, that is, the following limit exists (in the norm topology, and under other mathematical assumptions)
$$
\mathcal{L}_t := \lim_{\epsilon \to 0^+} \frac{\Phi_{t+\epsilon,t} - \mathbb{I}}{\epsilon},
$$
then we obtain a Quantum Markovian Semigroup whose generator is the operator obtained from this limit. In this sense, this is a subclass of Markovian processes: those which are homogeneous in time. Namely, we can write the two-parameter family as a one-parameter family, since
$$
\Phi_{t,s} = e^{\mathcal{L}(t-s)} \Longrightarrow \Phi_{t} = e^{\mathcal{L}t}.
$$
However, as previously stated, these are not all the possible Markovian evolutions; there are also Markovian evolutions that are not homogeneous in the time parameter.
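As a concrete illustration of the time-homogeneous case (an illustrative sketch, not from the book: a qubit with amplitude damping at rate $\gamma$, with the Lindbladian written as a matrix in the column-stacking convention $\mathrm{vec}(B X C) = (C^T \otimes B)\,\mathrm{vec}(X)$), one can verify numerically that $\Phi_t = e^{\mathcal{L}t}$ composes as a semigroup and preserves the trace:

```python
import numpy as np
from scipy.linalg import expm

gamma = 0.5
A = np.array([[0., 1.], [0., 0.]])   # lowering operator sigma_minus
I2 = np.eye(2)
AdA = A.conj().T @ A

# Lindbladian for d(rho)/dt = gamma*(A rho A^dag - {A^dag A, rho}/2),
# vectorised with vec(B X C) = (C^T kron B) vec(X)
L = gamma * (np.kron(A.conj(), A)
             - 0.5 * np.kron(I2, AdA)
             - 0.5 * np.kron(AdA.T, I2))

def Phi(t):
    return expm(L * t)

t1, t2 = 0.3, 0.7
# Semigroup property: Phi(t1) Phi(t2) = Phi(t1 + t2)
assert np.allclose(Phi(t1) @ Phi(t2), Phi(t1 + t2))

# Trace preservation: evolve the excited state |e><e| for t = 1
rho0 = np.array([[0., 0.], [0., 1.]])
rho_t = (Phi(1.0) @ rho0.reshape(-1, order='F')).reshape(2, 2, order='F')
print(np.trace(rho_t).real)  # ≈ 1.0 (excited population decays as e^{-gamma t})
```

The same check with a time-dependent $\mathcal{L}_t$ would fail in general, because the composition then involves time-ordered exponentials rather than a single matrix exponential.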
After this brief review of the definition of Markovianity (I repeat here: Markovianity $\neq$ semigroup), I move more precisely to your question about the semigroup property. The GKLS theorem states the following: to have the semigroup property, you need the Lindblad operators and the coefficients $\gamma_i$ to all be time-independent. This is not sufficient to have a dynamical map: you also need complete positivity, which holds if all the coefficients are positive, $\gamma_i > 0$. So the answer is: yes, if the coefficients are time-dependent, the semigroup property no longer holds. However, if the inverse of the one-parameter family exists, you can still write a sort of Lindblad equation, that is, a time-local quantum master equation, but you have to pay attention to the many properties that no longer hold.
In order to make all the things clear, I also answer directly a question you made in a comment to another answer, which should be clear after all the discussion above. What you said is
Instead of semigroup property, we now have $V(t,t_1)V(t_1,t_0)=V(t,t_0)$". This actually seems to me a semigroup structure, but I think I am missing something, maybe on the mathematical side
OK, that's actually true: this is not the semigroup property stated in Eq. (3.45) of Breuer and Petruccione, which I report here
$$
V(t_1) V(t_2) = V(t_1 + t_2),
$$
since it is not homogeneous in time, but it is still Markovian, since it represents a divisible map. Pay attention also to the fact that $V(t,t_1)V(t_1,t_0)=V(t,t_0)$ involves a two-parameter family, while $V(t_1) V(t_2) = V(t_1 + t_2)$ involves a one-parameter family. | {
"domain": "physics.stackexchange",
"id": 68127,
"tags": "quantum-mechanics, quantum-optics, open-quantum-systems"
} |
Why can detecting middleweight black holes be harder than the other types? | Question: This source says that, until recently, there were no middleweight (i.e. roughly 1000 solar-mass) black holes discovered. Astronomers had evidence of small and supermassive black holes, but not for the intermediate group. Now, at least 10 middleweight black holes are confirmed to exist.
Detecting black holes is difficult, especially the smaller ones. So it should have been relatively easier to find middleweight ones than smaller ones; however, that seems not to be the case.
Why?
Answer: In a nutshell, intermediate-mass black holes (IMBHs for short) cannot be formed by the collapse of a star, which is how stellar mass black holes are formed, and can't be formed from the extreme conditions that form supermassive black holes. The three proposed methods of formation of IMBHs are:
The merging of two or more stellar mass black holes.
Collision of runaway stars, which then collapse.
Primordial black holes from the big bang.
The last option of formation is a particularly interesting possibility and is an open area of research (I think Alan Guth's group is doing a great deal of research around it and how it relates to inflation). These are all pretty seldom-occurring events, and so IMBHs aren't predicted to exist in as high numbers as other kinds.
I'm no expert in this field, so corrections/comments are appreciated.
I hope this helps! | {
"domain": "physics.stackexchange",
"id": 50693,
"tags": "black-holes, astrophysics, astronomy"
} |
Birds sitting on electric wires: potential difference between the legs | Question: We have seen birds sitting on uninsulated electric wires of high voltage transmission lines overhead without getting harmed, because sitting on only one wire doesn't complete any circuit.
But what about the potential difference between their legs? Is this not a small complete circuit? Because the wire has a potential gradient, there should be a potential difference between the bird's feet. Is this potential difference so very small that we can say the bird is sitting at a single point on the wire? If a bird of a sufficiently large size, with a wide gap between its feet, sits on a single wire, shouldn't the bird receive a shock if the potential difference is sufficient?
Answer:
Here is a circuit representing the system. $R_{wire}$ is the resistance of the section of wire between the bird's legs. $R_{bird}$ is the resistance of the bird (which you can measure by sticking the two probes of the multimeter to the bird's two feet - if the cable is insulated, you will have to add the resistance of the insulation as well).
When the bird lands, do everyone's lights dim? (does the bird affect how much electricity goes through)
When the bird lands, the resistance between the two points (where its feet touch the wire) changes, so first we must determine whether the current coming from the transformer at the beginning of the power line changes. The resistance would go from $R_{wire}$ to:
$$R_T = \frac{1}{\frac{1}{R_{wire}}+\frac{1}{R_{bird}}}=\frac{R_{wire} \cdot R_{bird}}{R_{wire} + R_{bird}}$$
As evidenced by the fact that we use metal cables, and not birds, to transmit electricity, $R_{wire} << R_{bird}$:
$$ R_{wire} + R_{bird} \approx R_{bird} \Rightarrow R_T \approx \frac{R_{wire} \cdot R_{bird}}{R_{bird}}=R_{wire} $$
Therefore, the resistance does not change much, and the current should also stay about the same because $I=V / R$. (Actually, the current will increase very slightly, because the bird's resistance will be in parallel with the wire's resistance, and this will decrease the overall resistance of the power line very slightly - thanks Nate Eldredge and Max)
Does the bird experience extreme voltage?
The potential difference between two points is $V_0 = I \cdot R$. $I$ here is the total current passing through the wire, which we have already established does not differ much with the bird or without. So:
Without the bird we have $V_0=I \cdot R_{wire}$.
With the bird we have $V_{bird}=I \cdot R_T \approx I \cdot R_{wire}$ (see previous section).
Therefore the voltage experienced by the bird can be approximated with $I \cdot R_{wire}$. Once again, the wire is very conductive, so $R_{wire}$ will be small; $I$ may be large but not very large. $V_0$ will probably be a volt or less, likewise for $V_{bird}$.
Alternatively, we can observe that resistance is proportional to length, and therefore so is voltage: $$\frac{R_{wire}}{R_{line}} = \frac{L_{wire}}{L_{line}} = \frac{V_{wire}}{V_{line}} $$
Here:
$R_{line}$ is the resistance between the two endpoints of the whole line
$V_{line}$ is the potential between the two endpoints of the whole line (typically tens of kV)
$L_{line}$ is the length of the entire power line (typically several kilometers)
$L_{wire}$ is the length of wire spanned by the bird's legs (typically a few centimeters)
Therefore you can appreciate that the right side of the equation is a very small number, so likewise, $V_{wire}$ must be less than a volt - and the bird experiences approximately $V_{wire}$ potential difference as well.
Does the bird experience extreme current?
Despite low voltage, high current may still be dangerous to animals. As pointed out before, the amount of current passing through the bird-wire block is $I_T=V/R_T \approx V/R_{wire}$.
At one of the bird feet, the current will split into $I_{wire}$ (which goes through the wire) and $I_{bird}$ (which goes through the bird), and then combine at the other foot. Because $V_T = V_{bird} = V_{wire}$, we can conclude that $I_{bird} = V_T/R_{bird}$ and $I_{wire} = V_T/R_{wire}$, therefore current and resistance of either component is inversely proportional:
$$ \frac{I_{bird}}{I_{wire}} = \frac{V_T/R_{bird}}{V_T/R_{wire}} = \frac{R_{wire}}{R_{bird}}$$
We previously established that $R_{wire} << R_{bird}$, so $I_{wire} >> I_{bird}$. Current must be conserved (otherwise the bird must be stealing electrons) so $I_{wire} + I_{bird} = I_T > I_{wire} >> I_{bird}$.
$I_T$ can be pretty large for the higher capacity lines, but it's not that large - it's on the order of hundreds of amperes. Even though even 0.1 A is considered lethal to humans, the bird will experience a current $I_{bird}$ which is much smaller than this.
Recall the inverse proportion between current and resistance: Typically, animal bodies have a resistance of a few $M \Omega$ or a few hundred $k \Omega$ (original research), while good metal wires a few centimeters long will have less (often much less) resistance than 1 $\Omega$. So the current passing through the bird will be a few $\mu A$ at most - harmless.
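Plugging rough, purely illustrative numbers into the expressions above makes the split concrete:

```python
# Illustrative, order-of-magnitude values (not measurements)
I_line = 100.0   # total line current, A
R_wire = 1.5e-5  # ~5 cm of conductor at ~0.3 mOhm/m, Ohm
R_bird = 1e5     # bird body resistance, Ohm (a few hundred kOhm)

V_wire = I_line * R_wire   # voltage across the span between the bird's feet
I_bird = V_wire / R_bird   # current diverted through the bird

print(f"V across feet: {V_wire * 1e3:.2f} mV")   # ~1.5 mV
print(f"I through bird: {I_bird * 1e9:.0f} nA")  # ~15 nA, far below danger
```

Even if these assumed values are off by a couple of orders of magnitude, the bird's current stays microamperes or less - consistent with the conclusion above.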
Is it dangerous for the bird to open its legs wide?
A critical factor is the ratio of the resistance of the bird's body $R_{bird}$ to the resistance of the section of wire between its 2 legs $R_{wire}$. First let's consider the effect of opening the legs on total current.
With legs closed, we get total resistance of the power line $R_{closed} = R_{line} + R_1 + \frac{1}{\frac{1}{R_2} + \frac{1}{R_{bird}}}$. With legs open, $R_{open} = R_{line} + \frac{1}{\frac{1}{R_1 + R_2} + \frac{1}{R_{bird}}}$. $R_{closed} > R_{open}$ (intuitively, you are replacing more of the wire with a more conductive bird/wire composite module). Accordingly, the total current through the whole power line will be higher when legs are open $I_{closed} < I_{open}$.
Furthermore, as Ilmari Karonen pointed out, increasing $R_{wire}$ increases both the potential experienced by the bird and how much of the (now higher) total current "splits" off into the bird part of the circuit.
If the bird increases the distance between its legs hundredfold, the increase in total current on the line will be negligible. $V_{wire} = V_{bird}$ will go up hundredfold, and correspondingly, the bird will experience hundredfold-stronger current. However, for a normal bird, if we repeat our original analysis we will find that even 100 cm of cable still has negligible resistance compared to a bird, so I doubt real birds would notice a difference.
What if you stretched a bird's legs so much that they could span the whole power line? Besides looking ridiculous, the bird would now experience tremendous potential difference. But in stretching the bird, you would also make it very thin (which increases resistance) and make it very long (which also increases resistance). So $R_{bird}$ would also be much larger and the current would still be very small. The bird would probably experience some form of discomfort, but not due to electrical phenomena.
What if you had a giant bird that is so big, its two legs could span the whole power line, even without stretching? Resistance is proportional to length, but inversely proportional to thickness. So if the bird was well-proportioned, it would have the same resistance as a small bird. However, now the resistance of $R_{wire}$ is non-trivial - many kilometers of even very conductive wire can have significant resistance. As said earlier, if 100 A passes through the power line, the bird need only get 0.1% of that to be at risk of death, so if the bird is long enough to span enough kilometers of power line that the resistance of the line is at least a few $k\Omega$, it will experience a very dangerous shock. Although a bird that big would also have other problems, such as the square-cube law, or current going through its head to make lightning in the upper layers of the atmosphere. | {
"domain": "physics.stackexchange",
"id": 13272,
"tags": "electricity, potential"
} |
Why does the copper anode dissolve? | Question:
In such a setup, the copper anode is known to dissolve. My question is, why? Does it not receive electrons from the $ \ce{OH^-}$ ions? And if you say that those electrons just flow to the battery, let me provide the counterargument: the electrons flowing to the cathode do not prevent the copper cathode from dissolving, because copper ions in solution accept the electrons, so there is actually nothing stopping the cathode from dissolving as well. So, very simply, my question is this: Why does the copper anode dissolve?
Answer: The anode is where oxidations happen (the cathode is where reduction takes place).
In the vicinity of your anode, you have the following compounds: $\ce{H2O}$, $\ce{Cu}$, $\ce{Cu^2+}$, $\ce{SO4^2-}$. If you check oxidation states, you will realise that oxygen is always $\mathrm{-II}$, hydrogen is $\mathrm{+I}$, sulphur is $\mathrm{+VI}$ and copper is present in both $\mathrm{\pm 0}$ and $\mathrm{+II}$. Thus, the only species available for oxidation are copper($0$) and oxygen($\mathrm{-II}$).
Checking standard potentials:
\begin{align}
\ce{O2 + 4 H+ + 4 e- &<=> 2 H2O} \qquad &E^\circ = \pu{+1.23 V}\tag{1}\\
\ce{Cu^2+ + 2 e- &<=> Cu} \qquad &E^\circ = \pu{+0.35 V}\tag{2}
\end{align}
It should be clear why copper is oxidised.
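A trivial way to state the selection rule above (a sketch using the two standard potentials quoted in the answer): at the anode, the couple with the lower reduction potential is the one that is oxidised:

```python
# Standard reduction potentials from the answer, in volts
E_standard = {
    "O2/H2O": 1.23,   # O2 + 4 H+ + 4 e- <=> 2 H2O
    "Cu2+/Cu": 0.35,  # Cu2+ + 2 e- <=> Cu
}

# The couple easiest to run in reverse (oxidation) has the lowest E°
oxidised = min(E_standard, key=E_standard.get)
print(oxidised)  # -> Cu2+/Cu
```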
On the reduction side, hydrogen, copper and sulphate all could accept electrons. Copper is again the happiest of the three to accept electrons. | {
"domain": "chemistry.stackexchange",
"id": 4619,
"tags": "electrochemistry, redox, electrolysis"
} |
Should the eigenkets be weighted in $|P\rangle = \sum\limits_{r}|\xi^r\rangle$? | Question: Page 37 of Dirac's book The Principles of Quantum Mechanics, states
The condition for the eigenstates of $\xi$ to form a complete set must thus be formulated, that any ket $|P\rangle$ can be expressed as an integral plus a sum of eigenkets of $\xi$, i.e.
$$|P\rangle = \int |\xi'c\rangle\; d\xi' + \sum\limits_{r}|\xi^rd\rangle$$
where the $|\xi'c\rangle$, $|\xi^rd\rangle$ are all eigenkets of $\xi$, the labels c and d being inserted to distinguish them when the eigenvalues $\xi'$ and $\xi^r$ are equal, and where the integral is taken over the whole range of eigenvalues and the sum is taken over any selection of them.
I find it inconsistent that the eigenkets under the integral are weighted by what appears to be a differential eigenvalue $d\xi'$, yet all the eigenkets under the summation are weighted by the value 1. Is this correct?
Answer: If eigenkets are defined up to arbitrary constants, it is possible to write the sum without any coefficients. | {
"domain": "physics.stackexchange",
"id": 16930,
"tags": "quantum-mechanics, hilbert-space, superposition"
} |
To delineate the drainage basin for a lake, would the pour point be the inlet or the outlet? | Question: I am interested in the area draining into and affecting a particular lake. Delineating this drainage basin, should I use the lake's inlet or outlet to best capture the relevant drainage basin? I think including the lake is valuable in this exercise, as events on or in the lake itself also affect the lake.
I'd think outlet, but want to double check with you all. To complicate things, most lakes have more than one inlet and outlet. I guess I'd go with the lowest outlet in order to avoid missing any land which may drain into the lake.
Answer: I think there are a few things to consider. First, a drainage basin is defined as the area upstream of the point to which all precipitation converges. Flow does not converge at the outlet - flow converges at the lake. Outlets don't contain any additional information about the upstream area draining into the lake. This implies that to find the area draining into a lake, you would necessarily have to proceed beginning from each and every inlet.
Here's another way to see this. Approximately 20% of all land drains to lakes with no outlets. These are referred to as endorheic lakes or endorheic basins. Even though there is no outlet, there is absolutely still a definable drainage basin area. | {
"domain": "earthscience.stackexchange",
"id": 1164,
"tags": "hydrology, gis, watershed"
} |
How does an electromagnetic field oscillate if time does not pass for the speed of light? | Question: As far as I'm aware, traveling at $c$ will prevent time passing due to time dilation. Electromagnetic waves rely upon oscillations to propagate. Since oscillations rely upon the passing of time, how does an electromagnetic wave oscillate if time does not pass?
Answer:
Electromagnetic waves rely upon oscillations to propagate
Electromagnetic waves are propagating disturbances (or oscillations) in the electromagnetic field and the electromagnetic field exists everywhere and everywhen.
But the electromagnetic field itself does not 'travel' at $c$ so your reasoning isn't clear to me. | {
"domain": "physics.stackexchange",
"id": 20612,
"tags": "special-relativity, electromagnetic-radiation, time-dilation, inertial-frames"
} |
About electrostatic induction | Question: When we bring a charged rod (+) near a neutral metal rod (not touching), a number of electrons move to that side (let's call it side B), negating the effect of the introduced electric field and reaching equilibrium.
But what about the electrons that were already on that side? Will their presence affect the equilibrium? If some electrons (those that were already at side B before the electric field was introduced) disappeared, what would happen to the equilibrium?
Forgive me if some of my ideas are faulty; I am new to the world of physics.
Answer: You cannot say for sure as to where each and every electron will end up after attaining equilibrium, primarily because the electrons are constantly moving about in the lattice.
It is kind of like trying to determine the exact position of every gas molecule after you compress the gas by a certain amount. The problem is that a gas molecule's motion is pretty random, so you cannot say for sure where that molecule will end up.
Similarly, in attaining the equilibrium, some electrons might end up at completely random places. But we can surely say that there will be more electrons on side B, than positive ions, because a net (-) charge will develop on side B. We can also determine how the electrons will arrange themselves, if we know the field of the charged rod. | {
"domain": "physics.stackexchange",
"id": 9089,
"tags": "electrostatics, charge"
} |
What is the difference between pose goal ,joint goal, and space joint goal | Question:
I was following the MoveIt tutorials,
and I am confused about these three terms; please let me know the difference.
Ubuntu 18.04, 3.28.2, ROS Melodic.
Originally posted by saztyga on ROS Answers with karma: 5 on 2020-08-18
Post score: 0
Answer:
I think there are only two things here - and they are two different ways to specify what the success criteria for the planner are:
"joint goal" or "joint space goal" (same thing) - this means your goals are defined based on what angle each joint should be in at the end of the plan.
"pose goal" - this means your goal is defined on the final pose of the end effector. MoveIt will do Inverse Kinematics (IK) to find the joint angles required to reach this goal.
The reason it is called "joint space" is because the "configuration space" is defined by the number of joints in your robot arm and their allowable range of motions. The "configuration space" is a widely used notion in mathematics and planning - so you can find lots more information if you want to delve into that later, but really, all you need to decide now is: do you have a goal based on where the end effector should be? or based on what angle the joints should each be set to?
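To make the distinction concrete, here is a toy example (not MoveIt API code - a hypothetical planar 2-link arm with assumed unit link lengths): a joint-space goal hands the planner joint angles directly, while a pose goal hands it an end-effector position and relies on IK to recover the angles:

```python
import math

L1, L2 = 1.0, 1.0  # assumed link lengths

def fk(q1, q2):
    """Forward kinematics: joint angles -> end-effector (x, y)."""
    x = L1 * math.cos(q1) + L2 * math.cos(q1 + q2)
    y = L1 * math.sin(q1) + L2 * math.sin(q1 + q2)
    return x, y

def ik(x, y):
    """Inverse kinematics (one elbow branch): pose -> joint angles."""
    c2 = (x * x + y * y - L1**2 - L2**2) / (2 * L1 * L2)
    q2 = math.acos(c2)
    q1 = math.atan2(y, x) - math.atan2(L2 * math.sin(q2), L1 + L2 * math.cos(q2))
    return q1, q2

# Joint goal: you specify the angles themselves; FK tells you where it ends up.
joint_goal = (0.0, math.pi / 2)
print(fk(*joint_goal))   # ≈ (1.0, 1.0)

# Pose goal: you specify (x, y); the planner does IK to find the angles.
pose_goal = (1.0, 1.0)
print(ik(*pose_goal))    # ≈ (0.0, pi/2)
```

MoveIt does the same thing in a higher-dimensional configuration space: a pose goal triggers an IK query, a joint goal skips it.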
Originally posted by fergs with karma: 13902 on 2020-08-18
This answer was ACCEPTED on the original site
Post score: 0
Original comments
Comment by saztyga on 2020-08-18:
Thank You for the response sir,
so in the pose goal approach we provide XYZRPY and MoveIt will do the rest of the job,
but in the joint goal we need to provide a value for each joint in order to get the XYZRPY?
Comment by fergs on 2020-08-22:
That is correct. Generally if you're trying to do XYZRPY of the end effector, you'd just use the pose goal. The joint space goals are really for putting the arm in a specific known configuration (for instance, a "ready" pose that tucks the arm out of the way of the camera, etc).
Comment by saztyga on 2020-08-22:
Thank you sir. . Now its Crystal clear to me. . Thanks a lot | {
"domain": "robotics.stackexchange",
"id": 35437,
"tags": "moveit, ros-melodic"
} |
GNSS : Code Phase and Carrier Phase Relation | Question: What is the relation between code phase and carrier phase in a GNSS system (GPS L1CA, GPS L5 etc)? Any document or section of any book explaining the same will also do.
Answer: Basically, the only relation is that they're both caused by the distance between transmitter and receiver (and delays, phase shifts).
"Code phase" applies the idea of phase (which is typically a property of a harmonic oscillation relative to some other reference oscillation of the same frequency) to the code sequence of GNSS systems. Instead of asking "how much of the $2\pi$ of a full period of the oscillation are we shifted?" you'd be asking "how much of the full length of a code sequence are we shifted?". You'd learn that by cross-correlating the received signal with the known PRN, and getting a position between 0 shift and the sequence length (e.g. 1023); that position doesn't have to be an integer, though, as you're free to interpolate in your receiver if SNR permits.
Hence, the code phase is the result of the propagation delay, i.e. range$/c_0$, taken modulo the duration of one code sequence.
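A minimal sketch of that code-phase measurement (illustrative: a random $\pm 1$ sequence stands in for a real PRN code, and the delay is a whole number of chips):

```python
import numpy as np

rng = np.random.default_rng(0)
N = 1023                               # GPS C/A code length, in chips
prn = rng.choice([-1.0, 1.0], size=N)  # stand-in for a real PRN sequence

true_shift = 317                       # unknown code phase, in chips
rx = np.roll(prn, true_shift)          # received signal: delayed replica

# Circular cross-correlation against the known code, via FFT
corr = np.fft.ifft(np.fft.fft(rx) * np.conj(np.fft.fft(prn)))
code_phase = int(np.argmax(np.abs(corr)))
print(code_phase)  # -> 317
```

In a real receiver you would interpolate around the peak for sub-chip precision, and the argument of the complex correlation peak is where the carrier-phase observable comes from.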
Carrier phase is just that – the phase of your carrier at the receiver. Look at the correlation peak mentioned above: it's a complex number; it has not only a magnitude, but also a phase.
Galileo publishes information on the phase centers of their satellite antennas – so, if your GNSS pseudorange estimation (based on the estimate of pseudoranges made up of full multiples of sequence lengths, and the code phase) is good enough, you can refine it further by observing the carrier phase as recovered from your correlation-based estimate. | {
"domain": "dsp.stackexchange",
"id": 11009,
"tags": "spread-spectrum, gps, gnss"
} |
Car flip in crash while dragging | Question: In some movies (Cars 2), a driver would take a sharp turn while dragging the car, and it would launch into the air and flip.
Can this happen in real life?
If yes then what is the physics behind it?
Only friction acts on the car, and friction passes through the axis of rotation, so what gives it the torque?
Can inertia produce torque?
Answer: You'd need some materials, configurations, and energies that just aren't found in regular vehicles. But there are vertical forces that can do this.
First you need a lot of energy. That's handled by the forward motion of the vehicle.
Second, you need a wide wheelbase. This provides a longer arc when tipping.
Finally, you need a lot of force from the road. Tires' coefficient of friction maxes out around 1, and this will need a lot more. If you don't have magic tires, you'll need a curb or similar to provide a lot of force.
Now with the car moving at high speed, turn it so that it hits something grippy (magic tires/curb).
As the car grips/strikes the curb, the curb supplies a force to oppose the motion. But because it is so low on the car, this also creates a torque that tries to spin the car.
Initially, the weight of the car transfers to the leading edge. But if the force is great enough, the normal force from the ground will no longer be sufficient to stop the spin. The car will start to rotate.
As it rotates, the car pushes into the ground, so the normal force from the ground increases. $N$ now exceeds the weight of the car and causes the center of mass to accelerate upward. You can see in the diagram that the mass has risen. If this acceleration is fast enough, the car will lift completely off the ground.
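To put a rough number on "if the force is great enough" (a simplified rigid-body sketch with made-up dimensions, ignoring suspension and deformation): balancing torques about the line of wheels that hits the curb, the car starts to lift once the sideways deceleration exceeds $g \cdot (w/2)/h$, where $w$ is the track width and $h$ the COM height:

```python
g = 9.81      # m/s^2
track = 1.8   # track width w, m (illustrative)
h_com = 0.55  # centre-of-mass height h, m (illustrative)

# Tipping condition about the loaded wheels: m*a*h_com > m*g*(track/2)
a_tip = g * (track / 2) / h_com  # minimum sideways deceleration to lift
mu_equivalent = a_tip / g        # grip needed if friction alone supplied it

print(f"a_tip = {a_tip:.1f} m/s^2, equivalent mu = {mu_equivalent:.2f}")
```

With these assumed numbers the required effective grip is around 1.6 - beyond what ordinary tires ($\mu \approx 1$) can provide, which is exactly why a curb strike (or movie magic) is needed.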
In any normal situation, the parts of a real car would deform, losing energy, making this much harder to do. But you could probably get a die-cast car going fast enough for this to be possible. Attempting it with regular tires on a flat road is just going to skid the tires. | {
"domain": "physics.stackexchange",
"id": 56550,
"tags": "newtonian-mechanics, friction"
} |
How to add Kinect sensor input to a URDF model? | Question:
I'm able to run the Kinect demo as shown here:
http://www.ros.org/wiki/kinect/Tutorials/Getting%20Started
and also separately can visualize my URDF model. The question now arises as to how to combine the two, such that the model and sensor data appear together. I have a node within the model for the camera location, so presumably the kinect topic needs to be linked to this somehow.
Originally posted by JediHamster on ROS Answers with karma: 995 on 2011-02-20
Post score: 8
Answer:
In your model for the Kinect, you need to create a fixed joint between the link that represents your model and the frame_id being published by the Kinect node.
For example:
<link name="kinect_link">
  <visual>
    <geometry>
      <box size="0.064 0.121 0.0381" />
    </geometry>
    <material name="Blue" />
  </visual>
  <inertial>
    <mass value="0.0001" />
    <origin xyz="0 0 0" />
    <inertia ixx="0.0001" ixy="0.0" ixz="0.0"
             iyy="0.0001" iyz="0.0"
             izz="0.0001" />
  </inertial>
</link>
<joint name="kinect_depth_joint" type="fixed">
  <origin xyz="0 0.028 0" rpy="0 0 0" />
  <parent link="kinect_link" />
  <child link="kinect_depth_frame" />
</joint>
<link name="kinect_depth_frame">
  <inertial>
    <mass value="0.0001" />
    <origin xyz="0 0 0" />
    <inertia ixx="0.0001" ixy="0.0" ixz="0.0"
             iyy="0.0001" iyz="0.0"
             izz="0.0001" />
  </inertial>
</link>
<joint name="depth_optical_joint" type="fixed">
  <origin xyz="0 0 0" rpy="${-M_PI/2} 0 ${-M_PI/2}" />
  <parent link="kinect_depth_frame" />
  <child link="kinect_depth_optical_frame" />
</joint>
<link name="kinect_depth_optical_frame">
  <inertial>
    <mass value="0.0001" />
    <origin xyz="0 0 0" />
    <inertia ixx="0.0001" ixy="0.0" ixz="0.0"
             iyy="0.0001" iyz="0.0"
             izz="0.0001" />
  </inertial>
</link>
Then, based on the above URDF, launch the Kinect node with the following parameters (note that the ${...} substitutions in the joint above are xacro syntax, so the file must be processed with xacro and M_PI defined as a property):
<node pkg="openni_camera" type="openni_node" name="openni_camera" output="screen" respawn="true" >
  <param name="device_type" value="1" />
  <param name="registration_type" value="1" />
  <param name="point_cloud_resolution" value="1" />
  <param name="openni_depth_optical_frame" value="kinect_depth_optical_frame" />
  <param name="openni_rgb_optical_frame" value="kinect_rgb_optical_frame" />
  <param name="image_input_format" value="5" />
  <rosparam command="load" file="$(find openni_camera)/info/openni_params.yaml" />
</node>
Originally posted by mmwise with karma: 8372 on 2011-02-20
This answer was ACCEPTED on the original site
Post score: 10
Original comments
Comment by mmwise on 2011-02-21:
yes they are offset from each other. we have the rgb_frame about 3cm to the right.
Comment by JediHamster on 2011-02-20:
I assume that on the Kinect itself the depth frame and the RGB frame should be in different places (depth in the center, RGB to the right?).
Comment by onurtuna on 2016-06-10:
by doing this, will I be able to move my robot model with the tf data coming from the kinect?
Comment by mancer on 2018-05-11:
I tried to do this, and installed openni_camera and openni_launch through sudo apt-get, but ran into a problem: "file does not exist [/opt/ros/kinetic/share/openni_camera/info/openni_params.yaml]" | {
"domain": "robotics.stackexchange",
"id": 4811,
"tags": "kinect, rviz, urdf"
} |
Confluence to show equivalent terms have one common reduct | Question: In lemma 30.3.9, Pierce states a confluence property for $F_{\omega}$:
$S \to_* T \land S \to_* U \implies \exists V. T \to_* V \land U \to_* V$
He then states the following proposition:
$S \leftrightarrow_* T \implies \exists U. S \to_* U \land T \to_* U$
However, he doesn't use the above property to prove it. I remember this was also the case in other books on term rewriting systems that I have read. Yet to me it looks very simple to prove using the confluence lemma.
From $S \leftrightarrow_* T$ one has $S \to_* T$ and $S \to_* T \to_* S$ thus by confluence $\exists U. S \to_* U \land T \to_* U$.
Why is this approach not correct?
Answer: $S \leftrightarrow^* T$ does not mean that $S \rightarrow^* T$ and $T \rightarrow^* S$! It means that there is a chain of reductions $S = S_0 \rightleftharpoons_1 S_1 \rightleftharpoons_2 S_2 \rightleftharpoons_3 \cdots \rightleftharpoons_n S_n = T$ where each of the $\rightleftharpoons_i$ might be either $\rightarrow$ or $\leftarrow$. The directions can alternate any number of times.
The confluence property does imply generically that if $S \leftrightarrow^* T$ then there exists $W$ such that $S \rightarrow^* W \leftarrow^* T$, but it takes a bit more work to show it. You can combine reductions in $S \leftrightarrow^* T$ to group consecutive reductions in the same direction together: $S = T_0 \leftarrow^* U_1 \rightarrow^* T_1 \leftarrow^* U_2 \rightarrow^* T_2 \leftarrow^* \cdots \leftarrow^* U_n \rightarrow^* T_n = T$. By confluence on $T_{n-1} \leftarrow^* U_n \rightarrow^* T_n$, there exists $V_n$ such that $T_{n-1} \rightarrow^* V_n \leftarrow^* T_n$. So we have $T_{n-2} \leftarrow^* U_{n-1} \rightarrow^* T_{n-1} \rightarrow^* V_n \leftarrow^* T_n$.
$$
\begin{matrix}
& & U_1 & & & & & & U_{n-1} & & & & U_n & & \\
& _*\swarrow & & \searrow^* & & \cdots & & _*\swarrow & & \searrow^* & & _*\swarrow & & \searrow^* & \\
T_0 & & & & T_1 & & T_{n-2} & & & & T_{n-1} & & & & T_n \\
& & & & & & & & & & & \color{green}{\searrow^*} & & \color{green}{_*\swarrow} & \\
& & & & & & & & & & & & \color{green}{V_n} & & \\
\end{matrix}
$$
Now apply the confluence property again to $T_{n-2} \leftarrow^* U_{n-1} \rightarrow^* V_n$, getting $V_{n-1}$ such that $T_{n-2} \rightarrow^* V_{n-1} \leftarrow^* V_n \leftarrow^* T_n$. Repeat until you get $W = V_1$ such that $T_0 \rightarrow^* V_1 \leftarrow^* T_n$. Formally, this is a proof by induction on the number of alternations of directions in the chain $S \leftrightarrow^* T$. Confluence lets you remove one alternation at a time. In more colorful language, confluence lets you “pop a crease”; each time you pop a crease, you join two depressions together, and if you keep doing that, you eventually get a single depression. | {
"domain": "cs.stackexchange",
"id": 15247,
"tags": "programming-languages, type-theory, term-rewriting, types-and-programming-languages"
} |
Passing listener to customized extended View in Android | Question: I extended NumberPicker with several methods needed in my application. When I came to defining an OnValueChangeListener, this is how I did it:
import android.widget.NumberPicker;

interface ValueChangeCallback {
    public void execute(RemListView list, int oldVal, int newVal);
}

public class RemListView extends NumberPicker {
    ValueChangeCallback mValueChangeCallback;
    RemListView thisList = this;

    public void setValueChangeCallback(ValueChangeCallback callback) {
        mValueChangeCallback = callback;
        setOnValueChangedListener(new NumberPicker.OnValueChangeListener() {
            @Override
            public void onValueChange(NumberPicker picker, int oldVal, int newVal) {
                mValueChangeCallback.execute(thisList, oldVal, newVal);
            }
        });
    }
}
This code works fine, but I'm new to Android and not sure I did it the proper way, so I'm asking for a code review.
Do I have to save the callback in a member variable?
I couldn't access this from within the callback, so I introduced the thisList variable. Can it be avoided?
Answer: You don't need either thisList or the mValueChangeCallback in your class.
As your value change listener is an anonymous inner class, it has access to the enclosing instance's this, which you can reference as RemListView.this.
As your callback is passed as a parameter, and you want to use it inside an anonymous inner class within the method it is passed to, you can mark the parameter final to make it accessible there.
public void setValueChangeCallback(final ValueChangeCallback callback) {
    setOnValueChangedListener(new NumberPicker.OnValueChangeListener() {
        @Override
        public void onValueChange(NumberPicker picker, int oldVal, int newVal) {
            callback.execute(RemListView.this, oldVal, newVal);
        }
    });
}
Since you're not modifying anything about the listener functionality, I don't think you need this method at all. The NumberPicker.OnValueChangeListener is already publicly available because you extend NumberPicker; you can't hide it. The ValueChangeCallback therefore seems redundant to me, and I would recommend using NumberPicker.OnValueChangeListener directly instead.
"domain": "codereview.stackexchange",
"id": 8316,
"tags": "java, android"
} |
Literate Programming in C | Question: Inspired by Literate programming in Haskell.
Below follows a Literate script filter. A literate script is a program in which comments are given the leading role, whilst program text must be explicitly flagged as such by placing it between \begin{code} and \end{code} in the first column of each line.
litc is a filter that can be used to strip all of the comment lines out of a literate script file.
The default code markers are \begin{code} and \end{code}, but they can be changed with command-line arguments, provided that the begin and end markers are distinct.
The following is a valid literate script:
Some text speaking about the code:
\begin{code}
#include <stdio.h>
int main(void) {
puts("Hello World!");
}
\end{code}
Some more text..
and so is this:
Some text speaking about the code:
```c
#include <stdio.h>
int main(void) {
puts("Hello World!");
}
```
Some more text...
GHC itself uses a standalone C program called unlit to process .lhs files.
Code:
#undef _POSIX_C_SOURCE
#undef _XOPEN_SOURCE
#define _POSIX_C_SOURCE 200809L
#define _XOPEN_SOURCE 700
#define IO_IMPLEMENTATION
#define IO_STATIC
#include "io.h"
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <errno.h>
#include <assert.h>
#include <unistd.h>
#include <getopt.h>
/* C2X/C23 or later? */
#if defined(__STDC_VERSION__) && __STDC_VERSION__ >= 202000L
#include <stddef.h> /* nullptr_t */
#else
#include <stdbool.h> /* bool, true, false */
#define nullptr ((void *)0)
typedef void *nullptr_t;
#endif /* nullptr, nullptr_t */
#define SHORT_OP_LIST "b:e:o:h"
#define DEFAULT_BEGIN "\\begin{code}"
#define DEFAULT_END "\\end{code}"
/* Do we even need getopt? */
typedef struct {
const char *bflag; /* Begin flag. */
const char *eflag; /* End flag. */
FILE *output; /* Output to FILE. */
} flags;
static void help(void)
{
printf("Usage: litc [OPTIONS] SRC\n\n"
" litc - extract code from a LaTEX document.\n\n"
"Options:\n"
" -b, --begin Line that denotes the beginning of the code\n"
" block in the markup language. Default: %s.\n"
" -e, --end Line that denotes the end of the code block in\n"
" the markup language. Default: %s.\n"
" -h, --help Displays this message and exits.\n"
" -o, --output=FILE Writes result to FILE instead of standard output.\n\n"
"Note: The begin and end markers must be different.\n\n"
"For Markdown, they can be:\n"
" ```python\n"
" # Some code here\n"
" ```\n",
DEFAULT_BEGIN, DEFAULT_END);
exit(EXIT_SUCCESS);
}
static void err_and_fail(void)
{
fputs("The syntax of the command is incorrect.\n"
"Try litc -h for more information.\n", stderr);
exit(EXIT_FAILURE);
}
static void parse_options(const struct option long_options[static 1],
flags opt_ptr[static 1],
int argc,
char * argv[static argc])
{
while (true) {
const int c =
getopt_long(argc, argv, SHORT_OP_LIST, long_options, nullptr);
if (c == -1) {
break;
}
switch (c) {
case 'e':
case 'b':
if (optarg == nullptr) {
err_and_fail();
}
*(c == 'b' ? &opt_ptr->bflag : &opt_ptr->eflag) = optarg;
break;
case 'h':
help();
break;
case 'o':
/* If -o was provided more than once. */
if (opt_ptr->output != stdout) {
fprintf(stderr, "Error: Multiple -o flags provided.\n");
err_and_fail();
}
errno = 0;
opt_ptr->output = fopen(optarg, "a");
if (opt_ptr->output == nullptr) {
perror(optarg);
exit(EXIT_FAILURE);
}
break;
/* case '?' */
default:
err_and_fail();
break;
}
}
}
static bool write_codelines(flags options[static 1],
size_t nlines,
char * lines[static nlines])
{
/* Would these 7 variables be better expressed as a struct? */
/* Perhaps:
*
* What would be a better name for this struct?
*
* struct {
* FILE *const f;
* const char *const begin_marker;
* const char *const end_marker;
* size_t ncodelines; // No. of lines seen so far.
* size_t curr_line;
* size_t last_begin_pos;
* bool code_mode;
* } code = {options->output,
* options->bflag ? options->bflag : DEFAULT_BEGIN,
* options->eflag ? options->eflag : DEFAULT_END,
* 1,
* 1,
* 0,
* false
* };
*/
const char *const begin_marker =
options->bflag ? options->bflag : DEFAULT_BEGIN;
const char *const end_marker =
options->eflag ? options->eflag : DEFAULT_END;
FILE *const f = options->output;
if (f != stdout && ftruncate(fileno(f), 0)) {
perror("ftruncate()");
return false;
}
bool code_mode = false;
size_t ncodelines = 1;
size_t curr_line = 1;
size_t last_begin_pos = 0;
for (size_t i = 0; i < nlines; ++i, ++curr_line) {
if (code_mode) {
if (strcmp(lines[i], begin_marker) == 0) {
fprintf(stderr, "Error: %s missing, %s started at line: %zu.\n",
end_marker, begin_marker, last_begin_pos);
goto cleanup_and_fail;
}
if (strcmp(lines[i], end_marker) == 0) {
code_mode = false;
} else {
if (fprintf(f, "%s\n", lines[i]) < 0) {
perror("fprintf()");
goto cleanup_and_fail;
}
++ncodelines;
}
} else {
if (strcmp(lines[i], end_marker) == 0) {
fprintf(stderr, "Error: spurious %s at line: %zu.\n",
end_marker, curr_line);
goto cleanup_and_fail;
}
code_mode = strcmp(lines[i], begin_marker) == 0;
if (code_mode) {
/* Only print newlines after the first code block. */
if (ncodelines > 1) {
fputc('\n', f);
}
last_begin_pos = curr_line;
}
}
}
if (code_mode) {
fprintf(stderr, "Error: %s missing. %s started at line: %zu.\n",
end_marker, begin_marker, last_begin_pos);
goto cleanup_and_fail;
}
if (ncodelines == 0) {
fprintf(stderr, "Error: no code blocks were found in the file.\n");
goto cleanup_and_fail;
}
goto success;
cleanup_and_fail:
if (f != stdout) {
fclose(f);
}
return false;
success:
return f == stdout || !fclose(f);
}
int main(int argc, char *argv[])
{
/* Sanity check. POSIX requires the invoking process to pass a non-null
* argv[0].
*/
if (!argv) {
fputs("A NULL argv[0] was passed through an exec system call.\n",
stderr);
return EXIT_FAILURE;
}
static const struct option long_options[] = {
{ "begin", required_argument, nullptr, 'b' },
{ "end", required_argument, nullptr, 'e' },
{ "help", no_argument, nullptr, 'h' },
{ "output", required_argument, nullptr, 'o' },
{ nullptr, 0, nullptr, 0 },
};
FILE *in_file = stdin;
flags options = { nullptr, nullptr, stdout };
parse_options(long_options, &options, argc, argv);
if ((optind + 1) == argc) {
in_file = fopen(argv[optind], "r");
if (!in_file) {
perror(argv[optind]);
if (options.output) {
fclose(options.output);
}
return EXIT_FAILURE;
}
}
else if (optind > argc) {
err_and_fail();
}
size_t nbytes = 0;
char *const content = io_read_file(in_file, &nbytes);
size_t nlines = 0;
char **lines = io_split_lines(content, &nlines);
int status = EXIT_FAILURE;
if (!lines) {
perror("fread()");
goto cleanup_and_fail;
}
if (!write_codelines(&options, nlines, lines)) {
goto cleanup_and_fail;
}
status = EXIT_SUCCESS;
cleanup_and_fail:
/* As we're exiting, we don't need to free anything. */
/* free(lines); */
/* free(content); */
if (in_file != stdin) {
fclose(in_file);
}
return status;
}
#ifndef IO_H
#define IO_H
#include <stdio.h>
#include <stdbool.h>
#include <stdint.h>
/*
* To use, do this:
* #define IO_IMPLEMENTATION
* before you include this file in *one* C to create the implementation.
*
* i.e. it should look like:
* #include ...
* #include ...
*
* #define IO_IMPLEMENTATION
* #include "io.h"
* ...
*
* To make all functions have internal linkage, i.e. be private to the source
* file, do this:
* #define `IO_STATIC`
* before including "io.h".
*
* i.e. it should look like:
* #define IO_IMPLEMENTATION
* #define IO_STATIC
* #include "io.h"
* ...
*
* You can #define IO_MALLOC, IO_REALLOC, and IO_FREE to avoid using malloc(),
* realloc(), and free(). Note that all three must be defined at once, or none.
*/
#ifndef IO_DEF
#ifdef IO_STATIC
#define IO_DEF static
#else
#define IO_DEF extern
#endif /* IO_STATIC */
#endif /* IO_DEF */
#if defined(__GNUC__) || defined(__clang__)
#define ATTRIB_NONNULL(...) __attribute__((nonnull (__VA_ARGS__)))
#define ATTRIB_WARN_UNUSED_RESULT __attribute__((warn_unused_result))
#define ATTRIB_MALLOC __attribute__((malloc))
#else
#define ATTRIB_NONNULL(...) /* If only. */
#define ATTRIB_WARN_UNUSED_RESULT /* If only. */
#define ATTRIB_MALLOC /* If only. */
#endif /* defined(__GNUC__) || define(__clang__) */
/*
* Reads the file pointed to by `stream` to a buffer and returns it.
* The returned buffer is a nul-terminated string.
* If `nbytes` is not NULL, it shall hold the size of the file. Otherwise it
* shall hold 0.
*
* Returns NULL on memory allocation failure. The caller is responsible for
* freeing the returned pointer.
*/
IO_DEF char *io_read_file(FILE *stream, size_t *nbytes)
ATTRIB_NONNULL(1) ATTRIB_WARN_UNUSED_RESULT ATTRIB_MALLOC;
/*
* Splits a string into a sequence of tokens. The `delim` argument
* specifies a set of bytes that delimit the tokens in the parsed string.
* If `ntokens` is not NULL, it shall hold the amount of total tokens. Else it
* shall hold 0.
*
* Returns an array of pointers to the tokens, or NULL on memory allocation
* failure. The caller is responsible for freeing the returned pointer.
*/
IO_DEF char **io_split_by_delim(char *restrict s, const char *restrict delim,
size_t *ntokens)
ATTRIB_NONNULL(1, 2) ATTRIB_WARN_UNUSED_RESULT ATTRIB_MALLOC;
/*
* Splits a string into lines.
* A wrapper around `io_split_by_delim()`. It calls the function with "\n" as
* the delimiter.
*
* Returns an array of pointers to the tokens, or NULL on memory allocation
* failure. The caller is responsible for freeing the returned pointer.
*/
IO_DEF char **io_split_lines(char *s, size_t *nlines)
ATTRIB_NONNULL(1) ATTRIB_WARN_UNUSED_RESULT ATTRIB_MALLOC;
/*
* Reads the next chunk of data from the stream referenced to by `stream`.
* `chunk` must be a pointer to an array of at least size IO_CHUNK_SIZE.
*
* If `size` is a non-null pointer, it'd hold the size of the chunk, else it
* would hold 0 on failure.
*
* Returns a pointer to the chunk on success, or NULL elsewise. The returned
* chunk is null-terminated.
*
* `io_read_next_chunk()` does not distinguish between end-of-file and error; the
* routines `feof()` and `ferror()` must be used to determine which occurred.
*/
IO_DEF char *io_read_next_chunk(FILE *restrict stream, char *restrict chunk, size_t *size)
ATTRIB_NONNULL(1, 2) ATTRIB_WARN_UNUSED_RESULT;
/*
* Reads the next line from the stream pointed to by `stream`. The returned line
* is terminated and does not contain a newline, if one was found.
*
* The memory pointed to by `size` shall contain the length of the
* line (including the terminating null character). Else it shall contain 0.
*
* Upon successful completion a pointer is returned and the size of the line is
* stored in the memory pointed to by `size`, otherwise NULL is returned and
* `size` holds 0.
*
* `io_read_line()` does not distinguish between end-of-file and error; the routines
* `feof()` and `ferror()` must be used to determine which occurred. The
* function also returns NULL on a memory-allocation failure.
*
* Although a null character is always supplied after the line, note that
* `strlen(line)` will always be smaller than the value in `size` if the line
* contains embedded null characters.
*/
IO_DEF char *io_read_line(FILE *stream, size_t *size)
ATTRIB_NONNULL(1, 2) ATTRIB_WARN_UNUSED_RESULT ATTRIB_MALLOC;
/*
* `size` should be a non-null pointer. On success, the function assigns `size`
* with the number of bytes read and returns true, or returns false elsewise.
* The function also returns false if the size of the file can not be
* represented.
*
* Note: The file can grow between io_fsize() and a subsequent read.
*/
IO_DEF bool io_fsize(FILE *stream, uintmax_t *size)
ATTRIB_NONNULL(1, 2) ATTRIB_WARN_UNUSED_RESULT;
/*
* Writes `lines` to the file pointed to by `stream`.
* A wrapper around
* On success, it returns true, or false elsewise.
*/
IO_DEF bool io_write_lines(FILE *stream, size_t nlines, char *lines[const static nlines])
ATTRIB_NONNULL(1, 3);
/*
* Writes nbytes from the buffer pointed to by `data` to the file pointed to
* by `stream`.
*
* On success, it returns true, or false elsewise.
*/
IO_DEF bool io_write_file(FILE *stream, size_t nbytes, const char data[static nbytes])
ATTRIB_NONNULL(1, 3);
#endif /* IO_H */
#ifdef IO_IMPLEMENTATION
#if defined(IO_MALLOC) != defined(IO_REALLOC) || defined(IO_REALLOC) != defined(IO_FREE)
#error "Must define all or none of IO_MALLOC, IO_REALLOC, and IO_FREE."
#endif
#ifndef IO_MALLOC
#define IO_MALLOC(sz) malloc(sz)
#define IO_REALLOC(p, sz) realloc(p, sz)
#define IO_FREE(p) free(p)
#endif
#undef _POSIX_C_SOURCE
#define _POSIX_C_SOURCE 200809L
#include <stdlib.h>
#include <string.h>
#include <stdbool.h>
#include <string.h>
#include <sys/types.h>
#include <sys/stat.h>
#include <unistd.h>
#define IO_CHUNK_SIZE ((size_t)1024 * 8)
#define IO_TOKEN_CHUNK_SIZE ((size_t)1024 * 2)
#define GROW_CAPACITY(capacity, initial) \
((capacity) < initial ? initial : (capacity) * 2)
IO_DEF char *io_read_file(FILE *stream, size_t *nbytes)
{
char *content = NULL;
size_t len = 0;
size_t capacity = 0;
if (nbytes) {
*nbytes = 0;
}
for (size_t rcount = 1; rcount > 0; len += rcount) {
capacity = GROW_CAPACITY(capacity, IO_CHUNK_SIZE);
void *const tmp = IO_REALLOC(content, capacity + 1);
if (tmp == NULL) {
IO_FREE(content);
return NULL;
}
content = tmp;
rcount = fread(content + len, 1, capacity - len, stream);
if (rcount < capacity - len) {
if (!feof(stream)) {
IO_FREE(content);
return content = NULL;
}
/* If we break on the first iteration. */
len += rcount;
break;
}
}
if (nbytes) {
*nbytes = len;
}
content[len] = '\0';
return content;
}
IO_DEF char **io_split_by_delim(char *restrict s, const char *restrict delim,
size_t *ntokens)
{
char **tokens = NULL;
size_t capacity = 0;
size_t token_count = 0;
if (ntokens) {
*ntokens = 0;
}
while (s != NULL && *s != '\0') {
if (token_count >= capacity) {
capacity = GROW_CAPACITY(capacity, IO_TOKEN_CHUNK_SIZE);
char **const tmp = IO_REALLOC(tokens, sizeof *tokens * capacity);
if (tmp == NULL) {
IO_FREE(tokens);
return NULL;
}
tokens = tmp;
}
tokens[token_count++] = s;
s = strpbrk(s, delim);
if (s) {
*s++ = '\0';
}
}
if (ntokens) {
*ntokens = token_count;
}
return tokens;
}
IO_DEF char **io_split_lines(char *s, size_t *nlines)
{
return io_split_by_delim(s, "\n", nlines);
}
IO_DEF char *io_read_next_chunk(FILE *stream, char *chunk, size_t *size)
{
if (size) {
*size = 0;
}
const size_t rcount = fread(chunk, 1, IO_CHUNK_SIZE, stream);
if (rcount < IO_CHUNK_SIZE) {
if (!feof(stream)) {
/* A read error occurred. */
return NULL;
}
if (rcount == 0) {
return NULL;
}
}
chunk[rcount] = '\0';
if (size) {
*size = rcount;
}
return chunk;
}
IO_DEF char *io_read_line(FILE *stream, size_t *size)
{
size_t count = 0;
size_t capacity = 0;
char *line = NULL;
for (;;) {
if (count >= capacity) {
capacity = GROW_CAPACITY(capacity, BUFSIZ);
char *const tmp = realloc(line, capacity + 1);
if (tmp == NULL) {
free(line);
return NULL;
}
line = tmp;
}
const int c = getc(stream);
if (c == EOF || c == '\n') {
if (c == EOF) {
if (feof(stream)) {
if (!count) {
free(line);
return NULL;
}
/* Return what was read. */
break;
}
/* Read error. */
free(line);
return NULL;
} else {
break;
}
} else {
line[count] = (char) c;
}
++count;
}
/* Shrink line to size if possible. */
void *const tmp = realloc(line, count + 1);
if (tmp) {
line = tmp;
}
line[count] = '\0';
*size = ++count;
return line;
}
/*
* Reasons to not use `fseek()` and `ftell()` to compute the size of the file:
*
* Subclause 7.21.9.2 of the C Standard [ISO/IEC 9899:2011] specifies the
* following behavior when opening a binary file in binary mode:
*
* >> A binary stream need not meaningfully support fseek calls with a whence
* >> value of SEEK_END.
*
* In addition, footnote 268 of subclause 7.21.3 says:
*
* >> Setting the file position indicator to end-of-file, as with
* >> fseek(file, 0, SEEK_END) has undefined behavior for a binary stream.
*
* For regular files, the file position indicator returned by ftell() is useful
* only in calls to fseek(). As such, the value returned may not reflect the
* physical byte offset.
*
*/
IO_DEF bool io_fsize(FILE *stream, uintmax_t *size)
{
/*
* Windows supports fileno(), struct stat, and fstat() as _fileno(),
* _fstat(), and struct _stat.
*
* See: https://learn.microsoft.com/en-us/cpp/c-runtime-library/reference/fstat-fstat32-fstat64-fstati64-fstat32i64-fstat64i32?view=msvc-170
*/
#ifdef _WIN32
#define fileno _fileno
#ifdef _WIN64
#define fstat _fstat64
#define stat __stat64
#else
/* Does this suffice for a 32-bit system? */
#define fstat _fstat
#define stat _stat
#endif /* WIN64 */
#endif /* _WIN32 */
/* According to https://web.archive.org/web/20191012035921/http://nadeausoftware.com/articles/2012/01/c_c_tip_how_use_compiler_predefined_macros_detect_operating_system
* __unix__ should suffice for IBM AIX, all distributions of BSD, and all
* distributions of Linux, and Hewlett-Packard HP-UX. __unix suffices for Oracle
* Solaris. Mac OSX and iOS compilers do not define the conventional __unix__,
* __unix, or unix macros, so they're checked for separately. WIN32 is defined
* on 64-bit systems too.
*/
#if defined(_WIN32) || defined(__unix__) || defined(__unix) || (defined(__APPLE__) && defined(__MACH__))
struct stat st;
/* rewind() returns no value. */
rewind(stream);
if (fstat(fileno(stream), &st) == 0) {
*size = (uintmax_t) st.st_size;
return true;
}
return false;
#else
/* Fall back to the default and read it in chunks. */
uintmax_t rcount = 0;
char chunk[IO_CHUNK_SIZE];
/* rewind() returns no value. */
rewind(stream);
do {
rcount = fread(chunk, 1, IO_CHUNK_SIZE, stream);
if ((*size + rcount) < *size) {
/* Overflow. */
return false;
}
*size += rcount;
} while (rcount == IO_CHUNK_SIZE);
return !ferror(stream);
#endif /* defined(_WIN32) || defined(__unix__) || defined(__unix) || (defined(__APPLE__) && defined(__MACH__)) */
#undef fstat
#undef stat
#undef fileno
}
IO_DEF bool io_write_lines(FILE *stream, size_t nlines,
char *lines[const static nlines])
{
for (size_t i = 0; i < nlines; ++i) {
if (fprintf(stream, "%s\n", lines[i]) < 0) {
return false;
}
}
return true;
}
IO_DEF bool io_write_file(FILE *stream, size_t nbytes,
const char data[static nbytes])
{
return fwrite(data, 1, nbytes, stream) == nbytes;
}
#undef ATTRIB_NONNULL
#undef ATTRIB_WARN_UNUSED_RESULT
#undef ATTRIB_MALLOC
#undef IO_TOKEN_CHUNK_SIZE
#undef GROW_CAPACITY
#endif /* IO_IMPLEMENTATION */
#ifdef TEST_MAIN
#include <assert.h>
int main(int argc, char **argv)
{
if (argc != 2) {
fputs("Error: file argument missing.\n", stderr);
return EXIT_FAILURE;
}
FILE *fp = fopen(argv[1], "r");
if (!fp) {
perror(argv[1]);
return EXIT_FAILURE;
}
size_t nbytes = 0;
char *const fbuf = io_read_file(fp, &nbytes);
assert(fbuf && nbytes);
assert(io_write_file(stdout, nbytes, fbuf));
rewind(fp);
uintmax_t size = 0;
bool rv = io_fsize(fp, &size);
assert(rv);
printf("Filesize: %ju.\n", size);
size_t nlines = 0;
char **lines = io_split_lines(fbuf, &nlines);
assert(lines && nlines);
assert(io_write_lines(stdout, nlines, lines));
printf("Lines read: %zu.\n", nlines);
for (size_t i = 0; i < nlines; ++i) {
if (lines[i][0]) {
size_t ntokens = 0;
char **tokens = io_split_by_delim(lines[i], " \t", &ntokens);
assert(tokens && ntokens);
assert(io_write_lines(stdout, ntokens, tokens));
free(tokens);
}
}
rewind(fp);
/* This can be allocated dynamically on the heap too. */
char chunk[IO_CHUNK_SIZE];
char *p = chunk;
size_t chunk_size = 0;
while ((p = io_read_next_chunk(fp, chunk, &chunk_size))) {
printf("Read a chunk of size: %zu.\n", chunk_size);
puts(chunk);
}
rewind(fp);
char *line = NULL;
size_t line_size = 0;
while ((line = io_read_line(fp, &line_size))) {
line[strcspn(line, "\n")] = '\0';
printf("Read a line of size: %zu.\n", line_size);
puts(line);
putchar('\n');
free(line);
}
free(fbuf);
free(lines);
fclose(fp);
return EXIT_SUCCESS;
}
#endif /* TEST_MAIN */
Note that the code blocks must appear in the manner and order imposed by the relevant compiler, unlike Wikipedia's definition of Literate Programming, in which code can be presented in any order convenient for exposition.
Review Request:
General coding comments, style, et cetera.
Do you see any bugs or undefined/implementation-defined behavior?
As io.h has previously been reviewed (this is not to say that it must be bug-free now), I am mainly interested in getting the main program reviewed.
Answer: I'm going to ignore the larger questions about the history of Literate Programming and just address the issues behind "a program to delete commentary but preserve code blocks set off between specifiable marker lines". I made some of these observations in the comments to the question.
Using while (true) { earns several demerits. And using the const in const int c = … is pretty pointless too. In my opinion, it would be better and more idiomatic C to use:
int c;
while ((c = getopt_long(…)) != -1)
{
…
}
In the help message, you use -o, --output=FILE to describe the output file option. Why isn't a similar notation used for -b, --begin and -e, --end (e.g. -b, --begin=start-mark and -e, --end=end-mark)?
The indentation in the #ifndef IO_DEF block in io.h is erratic — the #else and the next line should be indented more. That assumes that the indentation of preprocessing directives is desirable; I am not convinced, but it is a valid choice when done consistently.
I'm also unconvinced about including both the function declarations and the implementation in a single file. I don't think this is a good design decision. It becomes awkward to create a library containing the I/O code; you need to create an io.c file with an appropriate set of #define lines before including io.h.
I won't critique (or analyze) the code in io.h further (in part because the question states, "io.h has previously been reviewed").
I don't think you need anywhere near as much I/O code. You simply need to read lines — POSIX getline() is appropriate since you already assume POSIX support — and look at one line at a time. You keep track of the state — either processing commentary or processing code.
If the input line is a begin marker line, then switch to code mode.
If the input line is an end marker line, switch to comment mode.
If you're in comment mode, skip the line.
If you're in code mode, print the line.
You have to worry about invalid transitions — reading a start marker while in code mode, or reading an end marker in comment mode. You also need to warn if you reach EOF in code mode.
This design avoids the need to store the whole file in memory.
I don't think the check on argv is sensible. You wrote:
if (!argv) {
fputs("A NULL argv[0] was passed through an exec system call.\n",
stderr);
return EXIT_FAILURE;
}
This checks whether the argv pointer is null (not whether the argv[0] pointer is null as the error message suggests).
The C standard guarantees that argv will not be null; even if the system cannot identify the program name, argv[0] must point to an empty string.
C11 §5.1.2.2.1 Program startup ¶2
If they are declared, the parameters to the main function shall obey the following constraints:
The value of argc shall be nonnegative.
argv[argc] shall be a null pointer.
If the value of argc is greater than zero, the array members argv[0] through argv[argc-1] inclusive shall contain pointers to strings, which are given implementation-defined values by the host environment prior to program startup. The intent is to supply to the program information determined prior to program startup from elsewhere in the hosted environment. If the host environment is not capable of supplying strings with letters in both uppercase and lowercase, the implementation shall ensure that the strings are received in lowercase.
If the value of argc is greater than zero, the string pointed to by argv[0] represents the program name; argv[0][0] shall be the null character if the program name is not available from the host environment. If the value of argc is greater than one, the strings pointed to by argv[1] through argv[argc-1] represent the program parameters.
The parameters argc and argv and the strings pointed to by the argv array shall be modifiable by the program, and retain their last-stored values between program startup and program termination.
Your check is pointless — even if argc is 0, argv will not be a null pointer. argv[0] might be a null pointer, but the only code that has to worry about that is code that stashes the program name away for error reporting. If you want to check that argv[0] is not null, you need to write the correct test: if (argv[0] != NULL).
I'm not yet ready to use C23 features in code that I need to be able to run on machines that only have support for older versions of the standard. YMMV on that score. I'll leave your conditional definition and uses of nullptr alone, but your code doesn't use nullptr_t, so that can be omitted.
I ended up removing all the Boolean variables. This evades problems with <stdbool.h>. In C23, bool is a built-in type, true and false are keywords, and the header is not needed. For C99 through C18, the header should be used. Prior to C99, there was no <stdbool.h> header, but that shouldn't be a concern — you shouldn't be using compilers that do not support at least C99 and preferably C11 or C18. As of March 2024, the C23 standard has not been released by ISO. It will probably be ISO 9899:2024, but will probably continue to be known as C23.
You asked about using a structure for the control variables. In a one-file program like this, it is legitimate to use static file scope ('global') variables to convey global state information to functions. If I kept the structure, I might use a name such as struct Control. Note that you cannot initialize file stream variables (such as output) at file scope with stdin in modern C libraries, though you'd find such code written for older systems. Note that the main() function is the only symbol defined by the code that is visible outside the object file. I wish that static was the default visibility and that you had to use an explicit notation to make symbols visible outside the source file. However, a language with those rules would not be C.
The long_options argument to parse_options() isn't needed if you either move the definition of the array into the function or make it a file scope 'variable' placed near the definition of the short options string. You don't pass the short options as a parameter to parse_options() — consistency suggests one or the other, not a mixture.
You've almost certainly used programs that match the 'filter' paradigm (e.g. cat or grep). If one or more file names are present on the command line, those files are processed; otherwise, standard input is processed. This program should probably follow that pattern too. Granted, it is unlikely that more than one file will be processed because concatenated C files are not usually a good idea.
You have the code:
if (f != stdout && ftruncate(fileno(f), 0)) {
perror("seek()");
It is misleading to claim a seek error when ftruncate() fails. However, if you open the output file in "w" mode, the ftruncate() operation is superfluous.
If this were my code, I'd use the error reporting routines available in my SOQ (Stack Overflow Questions) repository on GitHub as files stderr.c and stderr.h in the src/libsoq sub-directory.
On Linux, you can find loosely similar functionality documented as err(3). Either package simplifies error reporting.
This version of the code continues processing files even if one cannot be opened. It terminates the processing of the current file on an error, but continues to the next file. It is perfectly reasonable to decide to terminate on any error — in fact, it is often better to do that ("fail fast"), in my opinion.
Here's what I think will do the job for you:
/* Code Review: CR 209-991 */
#undef _POSIX_C_SOURCE
#undef _XOPEN_SOURCE
#define _POSIX_C_SOURCE 200809L
#define _XOPEN_SOURCE 700
#include <assert.h>
#include <errno.h>
#include <getopt.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
/* In C2X/C23 or later, nullptr is a keyword. */
/* Patch up C18 (__STDC_VERSION__ == 201710L) and earlier versions. */
#if !defined(__STDC_VERSION__) || __STDC_VERSION__ <= 201710L
#define nullptr ((void *)0)
#endif
static const char short_options[] = "b:e:o:h";
static const struct option long_options[] =
{
{ "begin", required_argument, nullptr, 'b' },
{ "end", required_argument, nullptr, 'e' },
{ "help", no_argument, nullptr, 'h' },
{ "output", required_argument, nullptr, 'o' },
{ nullptr, 0, nullptr, 0 },
};
static const char *argv0 = "<unknown>";
#define DEFAULT_BEGIN "\\begin{code}"
#define DEFAULT_END "\\end{code}"
static const char *bflag = DEFAULT_BEGIN; /* Begin flag */
static const char *eflag = DEFAULT_END; /* End flag */
static FILE *output = NULL; /* Output file stream */
static void help(void)
{
printf("Usage: %s [OPTIONS] SRC\n\n"
" %s - extract code from a LaTeX document.\n\n"
"Options:\n"
" -b, --begin=MARKER Line that denotes the beginning of the code\n"
" block in the markup language. Default: %s.\n"
" -e, --end=MARKER Line that denotes the end of the code block in\n"
" the markup language. Default: %s.\n"
" -h, --help Displays this message and exits.\n"
" -o, --output=FILE Writes result to FILE instead of standard output.\n\n"
"Note: The begin and end markers must be different.\n\n"
"For Markdown, they can be:\n"
" ```python\n"
" # Some code here\n"
" ```\n",
argv0, argv0, DEFAULT_BEGIN, DEFAULT_END);
exit(EXIT_SUCCESS);
}
static void err_and_fail(void)
{
fprintf(stderr, "The syntax of the command is incorrect.\n"
"Use: %s -h for more information.\n", argv0);
exit(EXIT_FAILURE);
}
static void parse_options(int argc, char **argv)
{
int c;
output = stdout;
while ((c = getopt_long(argc, argv, short_options, long_options, nullptr)) != -1)
{
switch (c)
{
case 'e':
eflag = optarg;
break;
case 'b':
bflag = optarg;
break;
case 'h':
help();
break;
case 'o':
/* If -o was provided more than once. */
if (output != stdout)
{
fprintf(stderr, "Error: Multiple -o flags provided.\n");
err_and_fail();
}
output = fopen(optarg, "w");
if (output == nullptr)
{
fprintf(stderr, "%s: failed to open file '%s' for writing: %d %s\n",
argv0, optarg, errno, strerror(errno));
exit(EXIT_FAILURE);
}
break;
default:
err_and_fail();
break;
}
}
if (strcmp(eflag, bflag) == 0)
{
fprintf(stderr, "%s: the start and end markers must be different (both are '%s')\n", argv0, bflag);
exit(EXIT_FAILURE);
}
}
static int process(FILE *fp, const char *fn)
{
size_t buflen = 0;
char *buffer = 0;
enum { COMMENT, CODE } mode = COMMENT;
size_t lineno = 0;
size_t num_code_blocks = 0;
ssize_t length;
while ((length = getline(&buffer, &buflen, fp)) > 0)
{
lineno++;
assert(buffer[length] == '\0');
if (buffer[length - 1] == '\n') /* the final line may lack a trailing newline */
buffer[length - 1] = '\0';
if (mode == CODE)
{
if (strcmp(buffer, bflag) == 0)
{
fprintf(stderr, "%s: error: found begin marker '%s' in file '%s' line %zu while in code mode\n",
argv0, bflag, fn, lineno);
goto cleanup_and_fail;
}
else if (strcmp(buffer, eflag) == 0)
mode = COMMENT;
else if (fprintf(output, "%s\n", buffer) < 0)
{
fprintf(stderr, "%s: error: failed to write to output file while reading file '%s' line %zu\n",
argv0, fn, lineno);
goto cleanup_and_fail;
}
}
else
{
assert(mode == COMMENT);
if (strcmp(buffer, eflag) == 0)
{
fprintf(stderr, "%s: error: found end marker '%s' in file '%s' line %zu while in comment mode\n",
argv0, eflag, fn, lineno);
goto cleanup_and_fail;
}
else if (strcmp(buffer, bflag) == 0)
{
num_code_blocks++;
mode = CODE;
}
}
}
free(buffer);
if (mode != COMMENT)
{
fprintf(stderr, "%s: file '%s' is missing a code end marker '%s'\n", argv0, fn, eflag);
return EXIT_FAILURE;
}
if (num_code_blocks == 0)
{
fprintf(stderr, "%s: file '%s' contained zero code blocks\n", argv0, fn);
return EXIT_FAILURE;
}
return EXIT_SUCCESS;
cleanup_and_fail:
free(buffer);
return EXIT_FAILURE;
}
int main(int argc, char *argv[])
{
int status = EXIT_SUCCESS;
/* argv[0] can be a null pointer only when argc == 0 */
if (argv[0])
argv0 = argv[0];
parse_options(argc, argv);
if (optind == argc)
status = process(stdin, "standard input");
else
{
for (int i = optind; i < argc; i++)
{
FILE *fp = fopen(argv[i], "r");
if (fp == NULL)
{
fprintf(stderr, "%s: failed to open file '%s' for reading: %d (%s)\n",
argv0, argv[i], errno, strerror(errno));
status = EXIT_FAILURE;
}
else
{
int rc = process(fp, argv[i]);
if (status == EXIT_SUCCESS)
status = rc;
fclose(fp);
}
}
}
if (fclose(output) != 0)
{
fprintf(stderr, "%s: failed to close output file\n", argv0);
status = EXIT_FAILURE;
}
return status;
}
I copied your sample inputs into files litc-1.script and litc-2.script. My program was called cr-290991. The outputs I got were:
$ cr-290991 litc-1.script
#include <stdio.h>
int main(void) {
puts("Hello World!");
}
$ cr-290991 -o litc-2.c -b '```c' -e '```' litc-2.script
$ cat litc-2.c
#include <stdio.h>
int main(void) {
puts("Hello World!");
}
$ | {
"domain": "codereview.stackexchange",
"id": 45592,
"tags": "c, literate-programming"
} |
Code Review for PHP PDO queries: is there a better way to do it? | Question: so this is my code for equipping an item in my game dev't:
try {
$db = getConnection();
$db->beginTransaction();
$sql_chara_gold = $db->query("SELECT chara_gold FROM chara WHERE chara_id = $shop->chara_id");
$get_chara_gold = $sql_chara_gold->fetchColumn();
$sql_shop_price = $db->query("SELECT item_price FROM shop WHERE item_id = $shop->item_id");
$get_shop_price = $sql_shop_price->fetchColumn();
if($get_chara_gold >= $get_shop_price){
$sql_put_item = "INSERT INTO bag(chara_id, item_id, item_qty)
VALUES(:id,:item_id,1)
ON DUPLICATE KEY
UPDATE item_qty = item_qty +1";
$sql_set_gold = "UPDATE chara ch
INNER JOIN item it
ON it.item_id = :item_id
SET ch.chara_gold = ch.chara_gold - it.item_price
WHERE chara_id = :id";
$sql_set_qty = "UPDATE shop
SET item_qty = item_qty - 1
WHERE item_id = :item_id";
$stmt = $db->prepare($sql_put_item);
$stmt->bindParam("id", $shop->chara_id);
$stmt->bindParam("item_id", $shop->item_id);
$stmt->execute();
$stmt = $db->prepare($sql_set_gold);
$stmt->bindParam("id", $shop->chara_id);
$stmt->bindParam("item_id", $shop->item_id);
$stmt->execute();
$stmt = $db->prepare($sql_set_qty);
$stmt->bindParam("item_id", $shop->item_id);
$stmt->execute();
$db->commit();
$db = null;
echo json_encode($shop);
}else{
echo '{"error":{"text":'.json_encode($get_shop_price).'}}';
}
Answer: I don't know if I'd use ORM for something so simple, but it may be something to look into. At the very least, you should get in the practice of escaping query parameters to avoid SQL injection. I often use Aura SQL for this, because it's dead simple: https://github.com/auraphp/Aura.Sql
An example query using Aura would look like this:
$user = $db->fetchOne('SELECT chara_gold FROM chara WHERE chara_id = :id', [
'id' => $shop->chara_id
]);
Aura will automatically escape any bound parameters, like :id, so you don't have to worry about SQL injection. If you don't use Aura, at least use PDO::quote.
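To make the point about bound parameters concrete, here is a minimal sketch of the same purchase flow written in Python with sqlite3 purely for illustration (the table and column names mirror the question; in the real PHP/PDO code, prepare/bindParam and PDO::rollBack play the same roles). Every value reaches the SQL through a placeholder, and any failure undoes the partial work:

```python
import sqlite3

def buy_item(db, chara_id, item_id):
    """Deduct gold and decrement stock atomically; roll back on any failure."""
    try:
        cur = db.cursor()
        # Placeholders (?) instead of string interpolation -> no SQL injection.
        price = cur.execute(
            "SELECT item_price FROM shop WHERE item_id = ?", (item_id,)
        ).fetchone()[0]
        gold = cur.execute(
            "SELECT chara_gold FROM chara WHERE chara_id = ?", (chara_id,)
        ).fetchone()[0]
        if gold < price:
            return False
        cur.execute("UPDATE chara SET chara_gold = chara_gold - ? WHERE chara_id = ?",
                    (price, chara_id))
        cur.execute("UPDATE shop SET item_qty = item_qty - 1 WHERE item_id = ?",
                    (item_id,))
        db.commit()
        return True
    except sqlite3.Error:
        db.rollback()  # undo the partial transaction before propagating
        raise
```

The shape is the important part: all statements between the start of the transaction and commit either all apply or none do.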
Funds transfer is the textbook example of why you need transactions to meet the ACID requirements, so it's good that you're using one, but I don't see your rollback. I assume this is in your catch clause which wasn't included. | {
"domain": "codereview.stackexchange",
"id": 3780,
"tags": "php, mysql, sql, pdo"
} |
Image with specific frequencies | Question: I have created a frequency domain image filter and would like to test it against some images where it is known that the frequencies only occur within a given band. Trouble is, I don't know how to go about creating such images.
I have thought about making an image that consists of intensities varying about the x and y axes as a function of sine waves of frequencies within the band. I don't know if this would be good enough though. Another method (don't know about the practicality of this) might be to define a matrix in the Fourier domain and then do an iDFT. Does anyone have any thoughts on the best way to proceed?
Answer: Your second option is the most common: choose a frequency of interest, set the corresponding coefficient in the Fourier domain then take the inverse transform.
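A minimal numpy sketch of that procedure (the image size and the chosen frequencies are arbitrary): zero out a Fourier matrix, set one coefficient together with its conjugate-symmetric partner so the inverse transform comes out real, and invert.

```python
import numpy as np

N = 64                       # image size (arbitrary choice)
kx, ky = 5, 3                # chosen spatial frequencies, cycles per image

F = np.zeros((N, N), dtype=complex)
F[ky, kx] = 1.0              # single non-zero coefficient...
F[-ky, -kx] = 1.0            # ...plus its conjugate-symmetric partner -> real image

img = np.fft.ifft2(F).real   # a pure 2-D sinusoid at (kx, ky)

# Round trip: the spectrum of the generated image is non-zero only
# at the two chosen bins.
spec = np.fft.fft2(img)
peaks = np.argwhere(np.abs(spec) > 1e-9)
```

Summing several such coefficient pairs inside the band of interest gives a test image whose energy is confined to that band by construction.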
You have to pay attention to the correct scaling of the non-zero coefficient and to the particular FFT coefficient arrangement (continuous frequency in the middle or in one corner of the Fourier matrix). | {
"domain": "dsp.stackexchange",
"id": 1066,
"tags": "filters, frequency, image-processing, frequency-domain"
} |
Identifying system to represent output data as combination of 3 input data | Question: I have some numerical data, which has 3 input vectors and 1 output vector. I have to use all the inputs to represent the output. I am new to dsp and struggling to understand how to do this.
X1: Mostly -80 to 80. Few outliers till +/-100 or +/-200
X2: Mostly -4 to 4.
X3: Mostly -1 to 1. Most are 0.0 something.
The data is not supposed to be considered noisy. It is observation-based data, not so much a signal. (I don't know if that is the correct term.)
I tried the least square method and got a very different output.
w1 = corr(X1, Y)
w2 = corr(X2, Y)
w3 = corr(X3, Y)
for epoch = 1:epochs
for n = 1:length(Y)
% n
Y_pred = w1*X1(n) + w2*X2(n) + w3*X3(n) + b; % initial predicted y
Y_loss_grad = 2*(Y_pred - Y(n)); % gradient of square error
% update the weights
% w = w-lr*(loss_grad*y_grad)
w1 = w1-lr*( Y_loss_grad*X1(n) );
w2 = w2-lr*( Y_loss_grad*X2(n) );
w3 = w3-lr*( Y_loss_grad*X3(n) );
By varying epochs and learning rate, I finally set epochs = 100 and lr = 0.0003. Any more epochs and there are unnecessary iterations and even a slightly higher learning rate and the Y_loss_grad eventually goes to -inf and then NaN. Below is the output.
weight of X1: 6.988717
weight of X2: 1.613900
weight of X3: 596.504787
The model error is 267408804.091201
I tried this which got me this output.
I have another idea, where I take fft of each input, pass a certain filter over each and sum them. Then get the error with the fft of output vector, calculate the gradient w.r.t each input vector fft and adjust the filter of each input vector fft using that gradient.
The problems I'm facing in this idea are:
What filter to use? (low, high, band) (butterworth, chebyshev etc). Additionally the same filter might not work with each input.
How to adjust the filter? Do I change the order, the cut-off/pass frequencies?
Does optimizing using error in the frequency domain even help in the time domain?
If yes how do I change it back to the time domain to get the x[n]*h[n] = y[n], where x[n] is 3 values (1 from each input vector at n).
Any explanation about concepts, what steps to take or direction to useful resources would be appreciated. Thanks.
Answer: Your regression approach seems needlessly complicated. Let's look at this step by step:
First you need to start with a model. A simple linear model would be
$$y[n] = w_1x_1[n] + w_2x_2[n] + w_3x_3[n]$$
Going forward I'm going to drop the $[n]$ for quicker typing, and all sum symbols mean "sum over all n".
Then define an error metric. For example
$$E = \sum (w_1x_1 + w_2x_2 + w_3x_3-y)^2$$
Then calculate the partial derivatives of the error to each model parameters and set them to zero
Example
$$ \frac{\partial E}{\partial w_1} = 2\sum x_1 \cdot (w_1x_1 + w_2x_2 + w_3x_3 - y) = 0 $$
Do this for all model parameters. This will give your three equations with three unknowns which you need to solve. In this example these are just three linear equations, so that should be solvable without too much trouble. No need for regression, this can be solved in a single step.
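As a concrete check, the normal equations $(X^TX)w = X^Ty$ can be assembled and solved in one step with numpy (the data ranges echo the question, but the "true" weights below are made up purely for the demonstration):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500
X = np.column_stack([rng.uniform(-80, 80, n),   # x1
                     rng.uniform(-4, 4, n),     # x2
                     rng.uniform(-1, 1, n)])    # x3
w_true = np.array([0.5, -2.0, 30.0])            # assumed for the demo
y = X @ w_true

# Normal equations: (X^T X) w = X^T y, solved directly -- no iteration,
# no learning rate, no epochs.
w = np.linalg.solve(X.T @ X, X.T @ y)

# Equivalently, np.linalg.lstsq(X, y, rcond=None) does the same in one call.
```

Unlike the gradient-descent loop in the question, there is nothing to tune and no risk of divergence.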
However, your data does not look like a linear model: your output is all positive but your input appears to be double sided and maybe even symmetric about 0. Linear models typically don't do that, so chances are you need a more complicated model and/or more internal knowledge about your system. The steps for solving it as a least squares problem are still the same, but the partial derivatives and the final equations may get more complicated. | {
"domain": "dsp.stackexchange",
"id": 10777,
"tags": "matlab, system-identification"
} |
Sibling vs Parent/Child relationship detection using whole-genome sequence data | Question: I am analyzing a family-based whole-genome sequence (WGS) dataset. My identity-by-descent (IBD) analysis shows they share about 50% of their DNA. I think my pedigree structure may have gotten mixed up though, and I want to see if one individual is the mother or the sister of a male individual. Is there any way to do this either with Plink or KING?
Answer: With at least three samples, you can determine this by looking at the IBS0 output column (from either plink2 --make-king-table or KING): if the relationship in question has a very low IBS0 value compared to the other pairs, that's parent-child. You can try to do the same thing with only two samples, but with no other IBS0 values to compare against it's easier to misjudge. | {
"domain": "bioinformatics.stackexchange",
"id": 1019,
"tags": "wgs, plink"
} |
Working principle of an optical stretcher | Question: I understand that while using an optical stretcher experimental apparatus we attach a small dielectric ball to the specimen of study which can either be a DNA molecule, a Biological cell or any microscopic object that we are interested in. The way the concentrated Laser beam in the apparatus is refracted through the ball is responsible for a gradient force towards the center of the beam. How important is it for the ball to be dielectric? Now assuming we don't use any balls in the experiment, can a similar radial force be generated on Brownian particles suspended in the beam that are not necessarily spherical in shape?
Answer: In general, there are two limits to consider. In both cases, the polarizability of the object (the bead) is important. As for the importance of the geometric shapes, I think it's more subtle.
Limit 1 is where the wavelength of the laser is much shorter than the size of the object. This is the case considered by @Persian_Gulf. A dielectric object has index of refraction $n = \sqrt{\varepsilon}$, where $\varepsilon$ is the dielectric constant of the object material. Due to the index of refraction, the incoming light undergoes refraction, and the momentum of the deflected light provides force. To see how the dielectric constant is related to atomic polarizability for macroscopic objects, consult the Wikipedia article on "Clausius-Mossotti relation". I think Griffith's EM book also mentions this topic as well.
Limit 2 is where the wavelength of the laser is much larger than the size of the object. In this limit, you can consider the object as an ideal dipole moment, that is induced by the external electric field (of the laser). Griffith's EM book has an example problem on the induced dipole moment of a polarizable sphere under external E-field, I think. The induced dipole moment $d = \alpha E$ ($\alpha$ is polarizability) interacts with the external electric field and feels dipole force. The dipole force is the gradient of the dipole potential $U_{dip} = -\frac{1}{2} \alpha E^2$.
So this answers the first part of your question. As for the geometric shape, I am less familiar but I can say the following. In Limit 1, it will definitely affect how the light is refracted. If the surface of the object has some aberration profile that is comparable to wavelength scale, it will strongly scatter the incoming light. In Limit 2, the incoming light will be less sensitive to any surface profile variation, because the light cannot "resolve" the features on the sub-wavelength scale object.
But anyhow, in literature there are abundant reports of trapping yeasts and bacteria with optical tweezers, so that should leave no doubt that you can trap non-spherical objects with light. The tricky thing is that those examples do not clearly fall under either of the two limits mentioned, and there is no simple theory, although there are computational tools to numerically calculate the forces. See this paper. | {
"domain": "physics.stackexchange",
"id": 51810,
"tags": "visible-light, experimental-physics, laser, refraction, dipole"
} |
Understanding output of LSTM for regression | Question: I am working with embeddings and wanted to see how feasible it is to predict some scores attached to some sequences of words. The details of the scores are not important.
Input (tokenized sentence): ('the', 'dog', 'ate', 'the', 'apple')
Output (float): 0.25
I have been following this tutorial which tries to predict part-of-speech tags of such input. In such case, the output of the system is a distribution of all possible tags for all tokens in the sequence, e.g. for three possible POS classes {'DET': 0, 'NN': 1, 'V': 2}, the output for ('the', 'dog', 'ate', 'the', 'apple') could be
tensor([[-0.0858, -2.9355, -3.5374],
[-5.2313, -0.0234, -4.0314],
[-3.9098, -4.1279, -0.0368],
[-0.0187, -4.7809, -4.5960],
[-5.8170, -0.0183, -4.1879]])
Each row is a token, the index of the highest value in a token is the best predicted POS tag.
I understand this example relatively well, so I wanted to adapt it to a regression problem. The full code is below, but I am trying to make sense of the output.
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
torch.manual_seed(1)
class LSTMRegressor(nn.Module):
def __init__(self, embedding_dim, hidden_dim, vocab_size):
super(LSTMRegressor, self).__init__()
self.hidden_dim = hidden_dim
self.word_embeddings = nn.Embedding(vocab_size, embedding_dim)
# The LSTM takes word embeddings as inputs, and outputs hidden states
# with dimensionality hidden_dim.
self.lstm = nn.LSTM(embedding_dim, hidden_dim)
# The linear layer that maps from hidden state space to a single output
self.linear = nn.Linear(hidden_dim, 1)
self.hidden = self.init_hidden()
def init_hidden(self):
# Before we've done anything, we dont have any hidden state.
# Refer to the Pytorch documentation to see exactly
# why they have this dimensionality.
# The axes semantics are (num_layers, minibatch_size, hidden_dim)
return (torch.zeros(1, 1, self.hidden_dim),
torch.zeros(1, 1, self.hidden_dim))
def forward(self, sentence):
embeds = self.word_embeddings(sentence)
lstm_out, self.hidden = self.lstm(embeds.view(len(sentence), 1, -1), self.hidden)
regression = F.relu(self.linear(lstm_out.view(len(sentence), -1)))
return regression
def prepare_sequence(seq, to_ix):
idxs = [to_ix[w] for w in seq]
return torch.tensor(idxs, dtype=torch.long)
# ================================================
training_data = [
("the dog ate the apple".split(), 0.25),
("everybody read that book".split(), 0.78)
]
word_to_ix = {}
for sent, tags in training_data:
for word in sent:
if word not in word_to_ix:
word_to_ix[word] = len(word_to_ix)
tag_to_ix = {"DET": 0, "NN": 1, "V": 2}
# ================================================
EMBEDDING_DIM = 6
HIDDEN_DIM = 6
model = LSTMRegressor(EMBEDDING_DIM, HIDDEN_DIM, len(word_to_ix))
loss_function = nn.MSELoss()
optimizer = optim.Adam(filter(lambda p: p.requires_grad, model.parameters()))
# See what the results are before training
with torch.no_grad():
inputs = prepare_sequence(training_data[0][0], word_to_ix)
regr = model(inputs)
print(regr)
for epoch in range(100): # again, normally you would NOT hard-code the epoch count; this is toy data
for sentence, target in training_data:
# Step 1. Remember that Pytorch accumulates gradients.
# We need to clear them out before each instance
model.zero_grad()
# Also, we need to clear out the hidden state of the LSTM,
# detaching it from its history on the last instance.
model.hidden = model.init_hidden()
# Step 2. Get our inputs ready for the network, that is, turn them into
# Tensors of word indices.
sentence_in = prepare_sequence(sentence, word_to_ix)
target = torch.tensor(target, dtype=torch.float)
# Step 3. Run our forward pass.
score = model(sentence_in)
# Step 4. Compute the loss, gradients, and update the parameters by
# calling optimizer.step()
loss = loss_function(score, target)
loss.backward()
optimizer.step()
# See what the results are after training
with torch.no_grad():
inputs = prepare_sequence(training_data[0][0], word_to_ix)
regr = model(inputs)
print(regr)
The output is:
# Before training
tensor([[0.0000],
[0.0752],
[0.1033],
[0.0088],
[0.1178]])
# After training
tensor([[0.6181],
[0.4987],
[0.3784],
[0.4052],
[0.4311]])
But I don't understand why. I was expecting a single output. The size of the tensor is the same as the number of tokens of the input. I would, then, guess that for each step in the input, the hidden state is given. Is that correct? Does that mean that the last item in the tensor (tensor[-1], or is it the first tensor[0]?) is the final prediction? Why are all outputs given? Or lies my misunderstanding earlier in the forward-pass? Perhaps I should only feed the last item of the LSTM layer to the linear layer?
I am also interested to know how this extrapolates to bidirectional LSTMs and multilayer LSTMs, and even how this would work with GRUs (bidirectional or not).
The bounty will be given to the person who can explain why we would use the last output or the last hidden state or what the difference means from a goal-directed perspective. In addition, some information about multilayer architectures and bidirectional RNNs is welcome. For instance, is it common practice to sum or concatenate the output and hidden state of bidirectional LSTM/GRU to get your data into sensible shape? If so, how do you do it?
Answer: I went ahead and tested a lot of stuff, and I came up with this network, which seems to work all right as far as I have tested it:
class LSTMRegressor(nn.Module):
def __init__(self, hidden_dim, ms_dim, embeddings):
super(LSTMRegressor, self).__init__()
self.hidden_dim = hidden_dim
# load pretrained embeddings, freeze them
self.word_embeddings = nn.Embedding.from_pretrained(embeddings)
embed_size = embeddings.shape[1]
self.word_embeddings.weight.requires_grad = False
self.w2v_lstm = nn.LSTM(embed_size, hidden_dim, bidirectional=True)
self.ms_lstm = nn.LSTM(ms_dim, hidden_dim, bidirectional=True)
self.linear = nn.Linear(hidden_dim, 1)
self.relu = nn.LeakyReLU()
def forward(self, batch_size, sentence_input, ms_input):
# 1. Embeddings
embeds = self.word_embeddings(sentence_input)
w2v_out, _ = self.w2v_lstm(embeds.view(-1, batch_size, embeds.size(2)))
# separate bidirectional output into first/last, then sum them
w2v_first_bi = w2v_out[:, :, :self.hidden_dim]
w2v_last_bi = w2v_out[:, :, self.hidden_dim:]
w2v_sum_bi = w2v_first_bi + w2v_last_bi
# 2. Other features
ms_out, _ = self.ms_lstm(ms_input.view(-1, batch_size, ms_input.size(1)))
ms_first_bi = ms_out[:, :, :self.hidden_dim]
ms_last_bi = ms_out[:, :, self.hidden_dim:]
ms_sum_bi = ms_first_bi + ms_last_bi
# 3. Concatenate LSTM outputs
summed = torch.cat((w2v_sum_bi, ms_sum_bi))
# 4. Only use the last item of the sequence's output
summed = summed[-1, :, :]
# 5. Send output to linear layer, then ReLU
regression = self.linear(summed)
regression = self.relu(regression)
return regression | {
"domain": "datascience.stackexchange",
"id": 6570,
"tags": "neural-network, regression, lstm, rnn, word-embeddings"
} |
Centrifuge speed of an object higher than a stationary orbit | Question: In the question At what altitude above equator do gravitational and centrifugal forces cancel each other?, I asked how high a tower on the equator has to be such that at its top, gravitational and centrifugal forces are the same magnitude (and opposite sign).
Now imagine that the tower is a little bit higher, and an object is released from the top of the tower. The object will float away and "climb" higher. Does the object move slower or faster around the world than the tower?
It would be nice if the answer was explained in terms of the conservation of energy.
Answer:
Now imagine that the tower is a little bit higher, and an object is released from the top of the tower. The object will float away and "climb" higher. Does the object move slower or faster around the world than the tower?
Slower. Provided that the space elevator is geostationary, the higher you go above geosynchronous orbit, the more your velocity exceeds what's needed for a circular orbit at that altitude. This means that at the point where you let go of the elevator, you're at perigee (the point of closest approach). Yes, you "climb" after that point.
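The "slower" answer can be made quantitative with Kepler's third law for circular orbits, $T = 2\pi\sqrt{r^3/\mu}$: the period grows with radius. A quick numeric check in Python, using standard values for Earth's gravitational parameter and the geosynchronous radius:

```python
import math

MU_EARTH = 3.986004418e14   # m^3/s^2, Earth's gravitational parameter
R_GEO = 4.2164e7            # m, geosynchronous orbital radius (approx.)

def circular_period_hours(r):
    """Period of a circular orbit of radius r, by Kepler's third law."""
    return 2 * math.pi * math.sqrt(r**3 / MU_EARTH) / 3600.0

t_geo = circular_period_hours(R_GEO)          # close to one sidereal day
t_above = circular_period_hours(1.5 * R_GEO)  # larger radius -> longer period
```

An object released above the geosynchronous point ends up on an orbit with a semi-major axis larger than the release radius, so its period, and hence its time to go once around, exceeds the tower's 24 hours.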
Think of it in terms of rate of change of angle. All points on the tower move a full circle in exactly 1 day. They all have the same angular velocity. As you begin to climb, your angular velocity decreases. This is because:
The circumference at a higher altitude is greater
The spacecraft's speed decreases
The 2nd point is due to the energy balance within Earth's gravity well. As you drift further from the Earth, more of your kinetic energy is converted to gravitational potential energy. Both factors push in the same direction, so this is what happens. The orbital period will be greater than 24 hours for things released from the tower beyond the balanced geosynchronous orbit. | {
"domain": "physics.stackexchange",
"id": 16629,
"tags": "newtonian-mechanics, newtonian-gravity, orbital-motion, centrifugal-force"
} |
Callback system for events in a window | Question: I have written a simple window event capture wrapper that gets events that happen to the window, and can be checked anywhere in the program through a global class. I wanted to be able to create a callback anywhere in the program with any event that I have asked to be checked. To hold the data that is returned from the events, I have used a pointer that is allocated whenever an event occurs, so I can grab the data returned later on in the program.
I don't want to use templates to hold the data since that would require a lot of objects to be created, or a lot of vectors for each specific event that I want. I find this to be too clunky, and I wanted a clean, simple solution.
Data that is captured
struct windowEventData
{
void *data = nullptr;
std::string id;
sf::Event::EventType event;
bool polled = false;
};
Window Event Manager
class windowEventManager
{
private:
std::vector<windowEventData> _subscribedEvents;
public:
void subscribe(const std::string &id, sf::Event::EventType event);
void pollEvent(sf::Event event);
void clearEvents();
template<typename T>
T* hasEventPolled(const std::string &id);
~windowEventManager();
};
Grabbing and setting the pointer
void windowEventManager::pollEvent(sf::Event event)
{
for (auto &eventCheck : _subscribedEvents)
{
if (eventCheck.event == event.type)
{
switch (event.type)
{
case sf::Event::KeyPressed:
case sf::Event::KeyReleased:
eventCheck.data = new sf::Event::KeyEvent(event.key);
break;
case sf::Event::MouseButtonPressed:
case sf::Event::MouseButtonReleased:
eventCheck.data = new sf::Mouse::Button(event.mouseButton.button);
break;
case sf::Event::MouseWheelMoved:
case sf::Event::MouseWheelScrolled:
eventCheck.data = new bool(true);
break;
case sf::Event::TextEntered:
eventCheck.data = new sf::String(event.text.unicode);
break;
default:
break;
}
eventCheck.polled = true;
}
}
}
I clear events before each poll, and in the destructor so I don't have memory leaks/too much on the heap
void windowEventManager::clearEvents()
{
for (auto &eventCheck : _subscribedEvents)
{
if (eventCheck.data)
{
delete eventCheck.data;
eventCheck.data = nullptr;
}
eventCheck.polled = false;
}
}
Finding out if an event has occured
template<typename T>
inline T *windowEventManager::hasEventPolled(const std::string &id)
{
auto it = std::find_if(_subscribedEvents.begin(), _subscribedEvents.end(), [id] (windowEventData &dat) { return dat.id == id && dat.polled; });
if (it != _subscribedEvents.end())
{
return static_cast<T*>((*it).data);
}
return nullptr;
}
Is my use of a void* proper? I figured I could use a union as well for this, but I would rather see if using a pointer would be a good solution.
Answer: I do NOT recommend doing this
I would do this somehow different. I'm not sure why you need an id at all. A listener/receiver is interested in a specific type of event, isn't it? If you want to follow your "poll and get" flow, all you have to do is to store your events somehow.
If you're using c++11 (and I guess you are), things are easier; variadic template arguments are very useful. This whole thing could be done totally differently, but I'm trying to follow and adjust your code. So here is what I've come up with:
I dropped the whole id thing and I'm using the event type only. I created a base EventDataBase class. This can be used to get the real data from the event data object.
class EventDataBase
{
public:
const sf::Event::EventType event;
EventDataBase(sf::Event::EventType event)
: event(event)
{ }
virtual ~EventDataBase() { }
template <class T>
T* getData()
{
return static_cast<T*>(getDataPtr());
}
protected:
virtual void* getDataPtr() = 0;
};
Then a simple template class is created which holds the actual data:
template <class T, typename... Args>
class EventData : public EventDataBase
{
public:
EventData(sf::Event::EventType event, Args... args)
: EventDataBase(event),
data(args...)
{ }
private:
T data;
void* getDataPtr() override { return &data; }
};
With the variadic template args the constructor of your actual data can have any parameters (but they must match!). Another solution would be to pass an existing T object to the EventData class but that means the T must be copy-constructible:
template <class T>
class EventData : public EventDataBase
{
public:
EventData(sf::Event::EventType event, const T& data)
: EventDataBase(event),
data(data)
{ }
private:
T data;
void* getDataPtr() override { return &data; }
};
Then you can use typedefs if you wish. I'm using the variadic-template-args solution from now, but you can adjust your code.
typedef EventData<sf::Event::SimpleEvent> SimpleEventData;
// downside is that the Args must match the parameters of the KeyEvent's constructor
typedef EventData<sf::Event::KeyEvent, int> KeyEventData;
I'm not sure if you have only one EventManager or not but I guess you have only one and you call the pollEvent function in the application's main loop. I ignored the subscribe functionality, but that can be added easily.
class EventManager
{
public:
// called in the application's main loop
void pollEvent(const sf::Event& event)
{
// if you want to check if anyone is interested in this event, do it here
// ...
EventDataBase* eventData = nullptr;
switch (event.type)
{
case sf::Event::Simple:
eventData = new SimpleEventData(event.type);
break;
case sf::Event::KeyPressed:
case sf::Event::KeyReleased:
eventData = new KeyEventData(event.type, 10 /* key code */);
break;
default:
break;
}
if (eventData)
polledEvents.push_back(eventData);
}
template <typename T>
T* hasEvent(sf::Event::EventType event)
{
auto it = std::find_if(polledEvents.begin(), polledEvents.end(),
[event](EventDataBase* data) { return data->event == event; });
if (it == polledEvents.end())
return nullptr;
return (*it)->getData<T>();
}
// TODO: clearEvents() function is basically the same
// you have to delete the EventDataBase objects
// and clear the polledEvents vector
private:
// you can use a map<EventType, EventDataBase*>
// if only one event is allowed per type
// but in a real application that's not true (eg. KeyPressed)
typedef std::vector<EventDataBase*> Events;
Events polledEvents;
};
And the usage:
EventManager mgr;
mgr.pollEvent(sf::Event(sf::Event::Simple));
mgr.pollEvent(sf::Event(sf::Event::KeyPressed));
mgr.pollEvent(sf::Event(sf::Event::KeyReleased));
sf::Event::SimpleEvent* simple = mgr.hasEvent<sf::Event::SimpleEvent>(sf::Event::Simple);
sf::Event::KeyEvent* keyPressed = mgr.hasEvent<sf::Event::KeyEvent>(sf::Event::KeyPressed);
sf::Event::KeyEvent* keyReleased = mgr.hasEvent<sf::Event::KeyEvent>(sf::Event::KeyReleased);
Questions:
What happens when you have multiple events with the same type (eg. KeyPressed)? (You can modify the hasEvent function to fill a vector instead of return with the first matched element)
What happens when you get an event with a wrong type? (eg. hasEvent<Event::KeyEvent>(EventType::Simple))
Do you really want to allow the hasEvent caller to modify the event data? If not, you should return (or fill) const T* instead of T*
What happens if the hasEvent caller deletes the returned object? That's not even heap-allocated! Instant crash I guess...
So I'm not sure if this is the best way you can do it. The hasEvent function is dangerous. What if I try to get a sf::Event::KeyEvent with a sf::Event::Simple type? The data is simply "converted" (cast) to the desired type. And e.g. if I try to modify the data... You can imagine. Consider the following example.
// invalid cast, but it's done
sf::Event::KeyEvent* keyPressed = mgr.hasEvent<sf::Event::KeyEvent>(sf::Event::Simple);
// modify the data
keyPressed->key = 0;
// ... random crash
// because the `keyPressed` points to a bigger memory area than it should
// sizeof(KeyEvent) > sizeof(SimpleEvent) !!
A better approach
[...] would be message-based. I mean real message based. A class could subscribe to the manager with its object pointer and a function. That function would be called when an event occurs. So your code should not have to call "hasEvent" everywhere but instead the registered function would be called automatically. Something like this (pseudo-code):
// in the manager
// define the events
Event<int> onKeyPressed;
Event<int> onKeyReleased;
// when an application creates an event
switch (event.type)
{
case EventType::KeyPressed:
onKeyPressed(event.key);
break;
// ...
}
// your listener/receiver code:
// define the handler function
void callThisOnKeyPressed(int key)
{
// handle key pressed event
}
// register to event
manager.onKeyPressed.subscribe(this, &MyClassName::callThisOnKeyPressed);
In c++11 this can be implemented easily with some template magic. :)
Edit
I totally forgot to mention object pooling. Constantly calling new and delete is not very memory-friendly. You can use a custom memory manager (placement new) to solve this issue.
Edit 2
You can actually check if the event type is valid for the specific event data with a template "checking" function, something like this:
template <typename T>
bool isTypeAllowed(sf::Event::EventType type);
template <>
bool isTypeAllowed<sf::Event::KeyEvent>(sf::Event::EventType type)
{
return (type == sf::Event::KeyPressed || type == sf::Event::KeyReleased);
}
You can use this function to check if the specified EventType is valid for the T event data type. You can do this check in the hasEvent function:
template <typename T>
T* hasEvent(sf::Event::EventType event)
{
if (!isTypeAllowed<T>(event))
return nullptr;
auto it = std::find_if(polledEvents.begin(), polledEvents.end(),
[event](EventDataBase* data) { return data->event == event; });
if (it == polledEvents.end())
return nullptr;
return (*it)->getData<T>();
}
// or the collection variant:
template <typename T>
bool hasEvent(sf::Event::EventType event, std::vector<T*>& result)
{
result.clear();
if (!isTypeAllowed<T>(event))
return false;
for (auto& e : polledEvents)
{
if (e->event == event)
result.push_back(e->getData<T>());
}
return !result.empty();
} | {
"domain": "codereview.stackexchange",
"id": 22513,
"tags": "c++, event-handling, pointers, callback"
} |
What does "semantic gap" mean? | Question: I was reading DT-LET: Deep transfer learning by exploring where to transfer, and it contains the following:
It should be noted direct use of labeled source domain data on a new scene of target domain would result in poor performance due to the semantic gap between the two domains, even they are representing the same objects.
Can someone please explain what the semantic gap is?
Answer: In terms of transfer learning, semantic gap means different meanings and purposes behind the same syntax between two or more domains. For example, suppose that we have a deep learning application to detect and label a sequence of actions/words $a_1, a_2, \ldots, a_n$ in a video/text as a "greeting" in a society A. However, this knowledge in Society A cannot be transferred to another society B that the same sequence of actions in that society means "criticizing"! Although the example is very abstract, it shows the semantic gap between the two domains. You can see the different meanings behind the same syntax or sequence of actions in two domains: Societies A and B. This phenomenon is called the "semantic gap". | {
"domain": "ai.stackexchange",
"id": 2784,
"tags": "machine-learning, terminology, papers, transfer-learning"
} |
Why are $p$ orbitals independent from this symmetry rule? | Question: I am very new to quantum mechanics and I have a question about $p$ orbitals.
I am studying Beiser's Modern Physics and according to that in the Bohr Model the wave the probability density of azimuthal angle is constant so the wave function should be independent to the azimuthal angle and be symmetric along the $xy$ plane.
In the case of $p$ orbitals, $p_z$ is symmetric in the $xy$ plane but $p_x$ and $p_y$ vary along the $xy$ plane. How is this possible?
Answer:
I am studying Beiser's Modern Physics and according to that in the Bohr Model the wave the probability density of ...
This is a misunderstanding of the text. There are no probabilities, no waves, and no orbitals in the Bohr model. Your question makes perfect sense, but it is inscribed within full-grown QM ─ it has nothing to do with the Bohr model.
the wave the probability density of azimuthal angle is constant so the wave function should be independent to the azimuthal angle and be symmetric along the $xy$ plane.
In the case of $p$ orbitals, $p_z$ is symmetric in the $xy$ plane but $p_x$ and $p_y$ vary along the $xy$ plane. How is this possible?
The term "$p$ orbital" can mean several slightly-different things depending on the context. The mismatch here comes from taking objects from one of these and expecting them to have the properties of the others.
In general, "$p$ orbital" means an orbital which is an eigenstate of the total angular momentum, $L^2$. This includes the $p_z$ orbital as well as the $p_x$ and $p_y$ orbitals.
The property that you mention (invariance of the orbitals' probability distribution with respect to rotations around the $z$ axis) is not true of general $p$ orbitals in the sense I just described ─ it is a property of the simultaneous eigenstates of $L^2$ and $L_z$. This includes $p_z$ but it does not include $p_x$ and $p_y$; instead, for the subspace spanned by those two, you have to use the combinations
\begin{align}
p_+ & = \frac{p_x+ip_y}{\sqrt 2} \\
\text{and } \quad p_- & = \frac{p_x-ip_y}{\sqrt 2},
\end{align}
for which the invariance does hold. | {
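To see the invariance explicitly (a standard computation, added here for illustration), write the angular parts of the real orbitals as $p_x \propto \sin\theta\cos\phi$ and $p_y \propto \sin\theta\sin\phi$. Then

$$
p_\pm \propto \frac{\sin\theta\cos\phi \pm i\sin\theta\sin\phi}{\sqrt 2} = \frac{\sin\theta\, e^{\pm i\phi}}{\sqrt 2},
\qquad
|p_\pm|^2 \propto \tfrac{1}{2}\sin^2\theta ,
$$

so the probability density of $p_\pm$ is independent of the azimuthal angle $\phi$, whereas $|p_x|^2 \propto \sin^2\theta\cos^2\phi$ clearly is not.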
"domain": "physics.stackexchange",
"id": 62290,
"tags": "quantum-mechanics, wavefunction, atomic-physics, orbitals"
} |
Confusion about vectors representing spins or other things in Hilbert space | Question: I am sorry if my language makes not too much sense, I am a little confused about the meaning of everything at this point. Let us say I have two vectors of dimension 4 in a complex vector (Hilbert) space. The first one represents the state of a 3/2 spin, and the second the state (tensor product) of 2 1/2 spins. They transform under 3D rotations in different ways.
Do they belong to the same vector space? What is it different about them in more formal terms?
How many other “kinds” of 4D vectors could exist that transform in different ways under rotations?
My guess is that these two exhaust all possibilities as one transforms as an irreducible representation of the group of rotations and the other one under a reducible one. Does this exhaust all possible kinds of 4D “objects” (vectors)? Is it the same for an $N$ dimensional object? In the sense that it has to transform under 3D rotations either by an irreducible representation or by a sum of irreducible representations? Is every finite dimensional vector in $H$ related in some way to spins?
Answer:
1.Do they belong to the same vector space? What is it different about them in more formal terms?
No. These questions are all put to rest in a half-decent introduction to QM angular momentum. For an irreducible representation, there is a 1-to-1 correspondence between the dimensionality of the representation, 2s+1, and its spin, s. So your first 4-spinor with spin 3/2 transforms infinitesimally by the 4×4 matrices $\vec S $ and its eigenvalue under $\vec S \cdot \vec S $ is always $3/2(3/2+1)=15/4$.
Your second 4-spinor,$1/2\otimes 1/2$, transforms under the likewise 4×4 matrices
$\vec S \otimes {\mathbb I}+{\mathbb I}\otimes\vec S $, (with the $S$s and ${\mathbb I}$s being 2×2 matrices) which, however, are reducible, ie, they act very differently on a 3D block subspace (spin 1), and a 1D block (trivially, in this particular case: a singlet). The eigenvalues of each block under $\vec S \cdot \vec S$ are different, 1(1+1)=2 and 0, respectively. So, at your level, the fact the two dimensionalities are the same is a coincidence of apples and oranges.
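The eigenvalue claim is easy to verify numerically. The following sketch (my own illustration, not part of the original answer) builds the $4\times 4$ total-spin matrices for $1/2\otimes 1/2$ from the Pauli matrices and confirms that $\vec S \cdot \vec S$ has eigenvalue $2$ on a 3D block and $0$ on a 1D block:

```python
import numpy as np

# Pauli matrices; a single spin-1/2 operator is S = sigma / 2 (units of hbar)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

# Total spin on the 4D product space: S_tot = S (x) 1 + 1 (x) S
S_tot = [np.kron(s / 2, I2) + np.kron(I2, s / 2) for s in (sx, sy, sz)]

# The Casimir S.S has eigenvalue s(s+1) on each irreducible block
S2 = sum(S @ S for S in S_tot)
eigs = np.sort(np.linalg.eigvalsh(S2))
print(eigs)  # close to [0, 2, 2, 2]: one singlet (s=0) plus a triplet (s=1)
```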
2.How many other “kinds” of 4D vectors could exist that transform in different ways under rotations?
My guess is that these two exhaust all possibilities as one transforms as an irreducible representation of the group of rotations and the other one under a reducible one. Does this exhaust all possible kinds of 4D “objects” (vectors)?
Well, you may juxtapose two different spin 1/2 in a direct sum, $1/2\oplus 1/2$, acted upon by $\vec S \oplus \vec S$, or consider $0\oplus 0\oplus 0\oplus 0$, but basically you are done.
Is it the same for an $N$-dimensional object? In the sense that it has to transform under 3D rotations either by an irreducible representation or by a sum of irreducible representations?
By now you must have seen how dimensionality is not as significant as you appeared to believe, and you may find an infinity of such coincidences for higher irreducible and reducible representations, but there isn't much for you to focus on, beyond sudoku entertainment. For instance, a spin 7/2 irrep acts on an 8D spinor, just as the unrelated reducible $(1/2)^{\otimes 3}$ does...
Is every finite-dimensional vector in $H$ related in some way to spins?
If you are restricting yourself to spin operators, yes, but there are other, more significant, parts of QM, of course, and their corresponding Hilbert spaces. | {
"domain": "physics.stackexchange",
"id": 94223,
"tags": "hilbert-space, angular-momentum, quantum-spin, representation-theory"
} |
Allpass Filter with Sign Switch | Question: I want to design a digital allpass IIR filter with the following transfer function.
$$
H(\omega) =
1 \textrm{ for } \omega < \omega_1 \textrm{ or } \omega > \omega_2
$$
and
$$
H(\omega) =
-1 \textrm{ for } \omega > \omega_1 \textrm{ and } \omega < \omega_2
$$
I assume that higher-order filters are needed for a steeper transition band. Can you help me design a filter that approximates this specification? I prefer a solution that strictly uses allpass digital filters.
Answer: In this answer I'll just focus on the given design problem, even though I'm not sure if there might be a better overall solution to your underlying problem.
First of all it's instructive to realize that the given filter specification is just the difference of two (ideal) lowpass filters plus a constant. Let $h_1[n]$ be an ideal lowpass filter with cut-off frequency $\omega_1$, and $h_2[n]$ is an ideal lowpass with cut-off $\omega_2$, then the desired filter can be expressed as
$$h_d[n]=2\big(h_1[n]-h_2[n]\big) + \delta[n]\tag{1}$$
Eq. $(1)$ could be used to design an FIR filter approximating the given desired response. We would just need two odd-length linear-phase lowpass filters designed by any of the many well-known methods. We would need odd-length filters because they have an integer delay, and we could add the constant by simply replacing $\delta[n]$ in Eq. $(1)$ by $\delta[n-M]$, where $M$ is the delay of the linear phase filters. Note that the resulting FIR filter would implement the desired phase response exactly (plus a delay), but the magnitude response would only be an approximation to the desired unity gain response.
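As a sanity check on Eq. $(1)$ (my own sketch, not part of the original answer; the filter length and window are arbitrary choices), one can build the two lowpass filters as windowed-sinc designs, form $h_d[n] = 2(h_1[n]-h_2[n]) + \delta[n-M]$, and verify that, after removing the linear-phase factor $e^{-j\omega M}$, the response is close to $+1$ in the passbands and $-1$ in between:

```python
import numpy as np

N = 201                       # odd number of taps -> integer delay M
M = (N - 1) // 2
n = np.arange(N)
w1, w2 = 0.2 * np.pi, 0.4 * np.pi

def ideal_lowpass(wc):
    """Windowed-sinc lowpass with cutoff wc (Hamming window)."""
    return (wc / np.pi) * np.sinc(wc * (n - M) / np.pi) * np.hamming(N)

h1, h2 = ideal_lowpass(w1), ideal_lowpass(w2)
hd = 2.0 * (h1 - h2)
hd[M] += 1.0                  # the delta[n - M] term

def response(w):
    """H(w) with the linear-phase term e^{-jwM} removed."""
    return np.exp(1j * w * M) * np.sum(hd * np.exp(-1j * w * n))

print(response(0.1 * np.pi).real)  # ~ +1 (below w1)
print(response(0.3 * np.pi).real)  # ~ -1 (between w1 and w2)
print(response(0.6 * np.pi).real)  # ~ +1 (above w2)
```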
Eq. $(1)$ wouldn't help for designing an IIR filter because of the different non-linear phase responses of the individual lowpass filters. Unlike FIR filters, IIR filters can have an exactly constant magnitude response. However, the desired phase can only be approximated. If we require the filter to be IIR, we need to resort to numerical design methods.
One simple and effective method is the equation error method, which I explain in this blog post. There's also a link to a simple Matlab/Octave implementation: iir_ap.m.
The designed IIR filter needs to be causal and stable, otherwise there's no way to implement it. For this reason we need to add a linear phase term to the desired phase response. This just introduces some extra delay. The amount of delay necessary to make the resulting filter causal and stable must be determined experimentally.
I've designed two filters, one $10$th order and one $20$th order IIR allpass:
L = 500;
w = pi * linspace(0,1,L); w = w(:);
w1 = .2 * pi; w2 = .4 * pi;
I = find( w > w1 & w < w2 );
P = zeros(L,1);
P(I) = pi;
W = ones(L,1);
% 10th order allpass
N = 10;
tau10 = 8;
P2 = P - w * tau10;
a = iir_ap( N, w/pi, P2, W );
[A10, ww] = freqz( flipud(a), a, 2^12);
% 20th order allpass
N = 20;
tau20 = 18;
P2 = P - w * tau20;
a = iir_ap( N, w/pi, P2, W );
[A20, ww] = freqz( flipud(a), a, 2^12);
The figure below shows the design results. For the plots I removed the linear phase term. Note that the maximum phase error is not improved by choosing a higher filter order. What is improved by using a higher filter order is the steepness at discontinuities of the desired phase. | {
"domain": "dsp.stackexchange",
"id": 12352,
"tags": "filters, filter-design, infinite-impulse-response, allpass"
} |
Randomly generate a Hex Colour code and its contrasting colour using JavaScript | Question: I am attempting to generate a new colour and its contrasting colour with this javascript code.
I am making a web app that is different every time it is loaded (i.e. background and text colour)
I would love some critique on the quality and efficiency of my code.
How can it be done better?
Here is my code:
function startcolour() {
var values = Array('1','2','3','4','5','6','7','8','9','A','B','C','D','E','F');
var color = "#";
var item;
for (i = 0; i < 6; i++) {
item = values[Math.floor(Math.random()*values.length)];
color = color + item;
}
return color;
}
function contrastcolour(colour) {
var r;
var g;
var b;
var comp = [];
var contrast = '#';
var arrayLength;
if (colour.charAt(0) == '#') {
colour = colour.slice(1);
}
r = (255 - parseInt(colour.substring(0,2), 16)).toString(16);
g = (255 - parseInt(colour.substring(2,4), 16)).toString(16);
b = (255 - parseInt(colour.substring(4,6), 16)).toString(16);
contrast = contrast + padZero(r) + padZero(g) + padZero(b);
return contrast
}
function padZero(str) {
if (str.length < 2) {
return '0' + str;
}
else {
return str;
}
}
function colourRun() {
var colours = [];
var base = startcolour();
colours.push(base);
var contrast = contrastcolour(base);
colours.push(contrast);
return colours;
}
Sample output (aka colours):
['#123abc','#edc543']
I then do the following with jQuery to set CSS properties of HTML components:
$(document).ready(function() {
var colours = colourRun();
$("body").css({"background-color": colours[0], "color": colours[1]});
$("nav").css({"background-color": colours[1], "color": colours[0]});
});
All constructive criticism is appreciated.
Answer: There is a similar question that you may want to take a look at for some extra insight, even thought it doesn't have the contrasting color feature:
JavaScript Random Color Generator
Now lets get to the review.
Naming
The common naming convention in JS for methods is camelCase, which you didn't follow in general.
Most of the variable and function names aren't clear or descriptive of what they do/hold. This is a lot more important than it may seem at first, and not naming properly makes your code difficult to read and understand.
As a side note, Uncle Bob has a full chapter on this topic in his Clean Code book.
Be consistent in the names. If you look closely you'll see that in some places you used color:
var color = "#";
While in others colour:
function contrastcolour(colour) {
This is at best confusing if not mysterious. Leaves one wondering if it is a different thing or if it has a different meaning.
For this review I went with colour, which was the one you used more often.
Functions
Now an overall review of the functions you presented:
startcolour
Better than using an array of elements is using a string with all possible characters:
var values = '123456789ABCDEF';
You can still use the index operator to fetch a character as you do in a normal array.
There is no need to separate the '#' in a single variable and then add only
the color part to a different one, and the += operator is also more idiomatic
and still easy to read.
Don't use variables without declaring them, as you have in the for:
for (i = 0; i < 6; i++) {
//---^
i wasn't declared at all, which makes it global and may be the source of some hard-to-find bugs.
Taking all this into consideration the function could be rewritten to:
function randomColour(){
const chars = '123456789ABCDEF';
let colour = '#';
for (let i = 0; i < 6; i++) {
const randomIndex = Math.floor(Math.random() * chars.length);
colour += chars[randomIndex];
}
return colour;
}
Or you can take a totally different approach, the one mentioned in the related question:
function randomColour(){
const randNum = Math.floor(Math.random() * 16777216); //from 0 to ffffff
return '#' + randNum.toString(16).padStart(6,"0");
}
This generates a number between 0 and 16777215, or ffffff in hexa, and outputs it with hexadecimal formatting, by calling toString(16). It is then padded with zeros to a length of 6.
padZero
There is no need to create a padding function, because there is one already, called padStart or padEnd. It even has a polyfill if you need to support some old browsers.
Using it in your code would look like so:
r.toString().padStart(2,"0")
Note that it's a method on strings, therefore if I have a number I must first transform it to a string.
colourRun
The name on this one is also not very obvious as to what it does. If the function returns a random color and its contrast, a better name would be baseContrastColours.
It's important to mention that push supports any number of elements, so there is no need to make two separate pushes. Also, in this specific case it's easier to just return the array literal with the respective elements instead of creating the array, pushing the elements and returning at the end.
Like this:
function baseContrastColours() {
const base = randomColour();
const contrast = contrastColour(base);
return [base, contrast];
}
While it has the same functionality, it's simpler and easier to read.
contrastcolour
All the r, g, b variables are created up top and only set below:
function contrastcolour(colour) {
var r;
var g;
var b;
...
r = (255 - parseInt(colour.substring(0,2), 16)).toString(16);
g = (255 - parseInt(colour.substring(2,4), 16)).toString(16);
b = (255 - parseInt(colour.substring(4,6), 16)).toString(16);
...
}
You want to create variables and set the values directly if there is nothing else to do in between both actions:
var r = (255 - parseInt(colour.substring(0,2), 16)).toString(16);
Considering the variables were declared with var they are even hoisted. Also, in general you want to declare the variables as close as possible to where you use them.
The arrayLength and comp variables aren't used. Always remove unused variables to keep the code as clean as possible.
Also important is that you are repeating the inverse logic 3 times, once for each color channel. So, while simple, you may want to consider abstracting that logic into a separate function.
So this function could be rewritten as follows:
function inverseChannelColour(channelColour){
return (255 - parseInt(channelColour, 16)).toString(16);
}
function contrastColour(colour) {
if (colour.charAt(0) == '#') {
colour = colour.slice(1);
}
const r = inverseChannelColour(colour.substring(0,2));
const g = inverseChannelColour(colour.substring(2,4));
const b = inverseChannelColour(colour.substring(4,6));
const contrast = '#' + r.toString().padStart(2,"0")
+ g.toString().padStart(2,"0")
+ b.toString().padStart(2,"0");
return contrast;
}
As pointed out by @Zeta in the comments if the color to be inverted is a gray very close to the middle, the inversion will generate a similar color which may not be easy to read.
You can try to avoid this by manually checking the difference of all channels to their inverted versions, and if all of them are below a certain threshold, generate a different color. A simple approach to this would be to generate black if all channels are closer to white, and white otherwise.
There are also some libraries/micro-libraries that generate colors and their complementary versions with other approaches, such as using HSL. These may be interesting for you to take a look at.
Here are some of them for reference:
invert-color
Colors.js
complementary-colors | {
"domain": "codereview.stackexchange",
"id": 30004,
"tags": "javascript, random"
} |
Can all the theorems of classical mechanics be deduced from Newton's laws? | Question: As above, is the whole edifice of Newtonian mechanics built upon Newton's three laws of motion? Can I deduce all the theorems without referring to further assumptions?
Answer: If, by "Newtonian Mechanics" you mean what Newton derived, then yes, by definition.
But if you mean "classical mechanics" including rigid body dynamics then the answer is a resounding "no"[1] and the main reason is that Newton's three laws by themselves are not enough to imply conservation of angular momentum.
For conservation of angular momentum, you need an assumption further to Newton's three laws and the usual one is that the interaction force between two bodies points along the line joining the bodies[2]. Equivalently, one can get this from an assumption of spatial isotropy so that the formula for the force on a body from another must be invariant with respect to a rotation of co-ordinates. Given a body, the only co-ordinate free definition of a force direction that can be derived from the position of the other body relative to the first alone is of a vector along the the relative position vector if the force acts instantaneously[3]. More generally, conservation of angular momentum can be thought of as deriving from Noether's theorem applied to the rotational symmetry of the relevant classical Lagrangian.
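The need for the extra along-the-line assumption can be illustrated with a toy simulation (my own sketch, not part of the original answer). Two particles exert equal-and-opposite forces on each other, so linear momentum is conserved in both cases below, but total angular momentum is conserved only when the force points along the line joining them:

```python
import numpy as np

def total_Lz(rs, vs):
    # z-component of total angular momentum sum(m * r x v), with m = 1
    return sum(r[0] * v[1] - r[1] * v[0] for r, v in zip(rs, vs))

def simulate(central, steps=2000, dt=1e-3):
    rs = [np.array([1.0, 0.0]), np.array([-1.0, 0.0])]
    vs = [np.array([0.0, 0.5]), np.array([0.0, -0.5])]
    L0 = total_Lz(rs, vs)
    for _ in range(steps):
        d = rs[1] - rs[0]                # from particle 1 to particle 2
        if central:
            f = d                        # spring-like, along the joining line
        else:
            f = np.array([-d[1], d[0]])  # same magnitude, rotated 90 degrees
        forces = [f, -f]                 # equal and opposite in both cases
        for i in range(2):
            vs[i] = vs[i] + dt * forces[i]
            rs[i] = rs[i] + dt * vs[i]
    return abs(total_Lz(rs, vs) - L0)

print(simulate(central=True))   # essentially zero (rounding error only)
print(simulate(central=False))  # large drift: L is not conserved
```

With the semi-implicit Euler update used here, the per-step change of $L$ is $\Delta L = \mathrm{d}t\,\sum_i \vec r_i\times\vec F_i$, which vanishes identically when the forces lie along the joining line.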
Euler's Mechanics, where the three Euler laws are the rotational equivalent of Newton's translational ones for rigid bodies, are thus seen to be a definite broadening of Newton's laws. It is sometimes said that Euler's laws are Newton's with the assumption that everything in a rigid body must undergo only proper isometric transformations (i.e. "move rigidly") but the need for a conservation of angular momentum shows that there is another ingredient needed to get from Newton to Euler.
Footnotes:
[1]: That is even if we assume we're allowed to replace Newton's statement of the third law with its modern version of conservation of momentum (so that we're not bothered by pesky noninstantaneous electromagnetic interactions messing with the original statement).
[2]: I haven't read the Principia so I am not sure whether Newton actually comes up with the interaction force along joining line idea. I would not be surprised if he did, but then it is still further to his three laws.
[3]: If the force doesn't act instantaneously, relative positions change before the force arrives, and so we have a retarded version of this assertion in relativistic mechanics (by retarded, I mean analogous to the Feynman force formula and the Liénard-Wiechert potentials).
"domain": "physics.stackexchange",
"id": 23814,
"tags": "newtonian-mechanics, classical-mechanics"
} |
Why is HSO4- shown as an example of a weak acid instead of HSO4? | Question: I'm watching some (really awesome) chemistry courses, and I think I have a fairly decent handle on acids and bases (or maybe not, and I'm fooling myself). But the teacher just showed a table of weak/strong acids. $\ce{HSO4-}$ was one of them.
I'm wondering, why hydrogen sulfate ion instead of just hydrogen sulfate? Why is $\ce{HSO4-}$ an acid and $\ce{HSO4}$ apparently isn't? This was the only ion on the table. Or does $\ce{HSO4}$ simply not exist?
Answer: The bisulfate molecule doesn't exist in solution. I can only find references to the hydrogen sulfate molecule as part of salts, which implies that hydrogen sulfate is usually found as an ion.
The bisulfate (hydrogen sulfate) ion is indeed an acid. It is a relatively strong weak acid too, with a $K_\mathrm{a}$ value of $1.2 \times 10^{-2}$.
Part of the reason for its acidity has to do with its electronegative oxygens pulling electron density away from the hydrogen atom bonded to the oxygen. This makes the hydrogen more partially positive and thus more reactive (more likely to be taken by a base).
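As a quick numerical aside (my own addition, not part of the original answer), the quoted $K_\mathrm{a}$ corresponds to a $\mathrm{p}K_\mathrm{a}$ of about $1.92$, which is why bisulfate is often described as sitting near the strong end of the weak acids:

```python
import math

Ka = 1.2e-2
pKa = -math.log10(Ka)
print(round(pKa, 2))  # 1.92
```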
"domain": "chemistry.stackexchange",
"id": 1564,
"tags": "acid-base"
} |
An invariant of the gauge group $G$ that is totally symmetric with three indices in the adjoint representation | Question: In Ch.19 of the textbook An Introduction to Quantum Field Theory by Peskin and Schroeder, on P.680 the property of a quantity
$$\mathcal{A}^{abc}=\mathrm{tr}\left[t^a\{t^b,t^c\}\right]\tag{19.132}$$
is discussed.
As a part of the scattering amplitude
$$\langle p,\nu,b;k,\lambda,c|\partial_\mu j^{\mu \alpha}|0\rangle=\frac{g^2}{8\pi^2}\epsilon^{\alpha\nu\beta\lambda}p_\alpha k_\beta \cdot \mathcal{A}^{abc}, \tag{19.131}$$
$\mathcal{A}^{abc}$ is a number and it contains all the indices regarding isospin (flavor). Therefore, it is an invariant with respect to any relevant transformation in isospin space.
The textbook reads
For example, in $SU(2)$ the adjoint representation has spin 1. The symmetric product of two spin-1 multiplets gives spin 0 plus spin 2, with no spin-1 component. Thus, there is no symmetric tensor coupling two spin-1 indices to give a spin 1.
As $t^a$s are generators of transformations, trace and product are linear operations, therefore the quantity $\mathcal{A}$ is related to the linear combination of the basis of the representation in question.
Edit
However, I understand that the above rules for angular momentum are applied when a direct product of physical states is involved, namely $3 \otimes 3= 1\oplus 3\oplus 5$ as discussed in Mike's answer. For the present case, if the same arguments are employed to derive the properties of the l.h.s. of (19.132), an invariant (related to the basis of a scalar representation), from the r.h.s. of the relation, it is unclear why the latter only involves a "normal" matrix product instead of a direct product.
Also, as explained in Mike's answer, invariant tensors are simply Clebsh-Gordan coefficients. Therefore, (19.132) can be viewed as a relation between different C-G coefficients. So does that mean the r.h.s. of (19.132) is vanishing since the corresponding C-G coefficient is vanishing, which does not involve what stays on the r.h.s. of (19.132)?
I probably have missed something very basic, thanks a lot for pointing out my misunderstanding.
Answer: Invariant tensors are just a form of Clebsh-Gordan coefficients.
In particular, if there was a symmetric invariant tensor ${\mathcal A}_{ijk}$ with indices in the adjoint (the vector rep of ${\rm SO}(3)$), then for any two vectors $u_i$, $v_j$ the quantity $w_k={\mathcal A}_{ijk}u_iv_j$ would be a vector. We would therefore have found a new (symmetric) way to make a vector out of two vectors that differs from the usual
(antisymmetric) vector product. We know this is impossible because we know that the 3 (i.e. the spin $j=1$) in $3 \otimes 3= 1\oplus 3\oplus 5$ is antisymmetric.
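The counting behind this (standard $SU(2)$ representation theory, spelled out here for illustration) is the symmetric/antisymmetric split of the product of two spin-1 (adjoint) multiplets:

$$
3 \otimes 3 = \underbrace{1 \oplus 5}_{\text{symmetric}} \oplus \underbrace{3}_{\text{antisymmetric}} .
$$

A totally symmetric ${\mathcal A}_{ijk}$ would require a $3$ (spin 1) inside the symmetric part $1\oplus 5$, and there is none; the only way to combine two vectors into a vector, $(u\times v)_k = \epsilon_{kij}u_iv_j$, lives in the antisymmetric $3$.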
"domain": "physics.stackexchange",
"id": 58885,
"tags": "standard-model, group-theory, quantum-chromodynamics, isospin-symmetry"
} |
A self-supervised learning technique to denoise my specific signal | Question: So I work in this domain of biophysics that has to do with light-based detection for measuring small movements of molecules (nanometer and piconewton scale) via a quadrant photodiode. This signal contains lots of information but is riddled with noise. One of the challenges is denoising this signal, and while conventional methods such as Savitzky-Golay tend to work well, there are set cutoff and threshold values that go into this method, which makes it less practical.
Time-series traces from this measurement look like a sawtooth curve, and as the particle moves in space and time, the noise changes (so the noise is not the same everywhere) (Figure attached below).
My question is - I have noise measurements from this signal (I have recordings where sawtooth event never happens and only noise is left). Can I train a self-supervised learning method to denoise this signal using my known noise recordings? For example - is there a high-frequency bandpass filter that takes in some noise and can be trained to automatically smooth this curve to what we might expect the ground truth to be? Is there a better approach to it? If my question is unclear please let me know and I can provide more information.
Thanks.
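For context on the baseline mentioned above: a Savitzky-Golay filter is just a sliding least-squares polynomial fit evaluated at the window centre. A minimal numpy-only sketch of that idea (the window and order values here are illustrative assumptions, not tuned for this data):

```python
import numpy as np

def savgol_smooth(x, window=11, order=3):
    """Sliding least-squares polynomial smoothing (the Savitzky-Golay idea).

    `window` must be odd and `order` < `window`. Edges are handled by
    repeating the boundary samples, so only interior points are exact.
    """
    half = window // 2
    xp = np.pad(np.asarray(x, float), half, mode="edge")
    t = np.arange(window) - half
    # Least-squares fit of a degree-`order` polynomial in each window;
    # row 0 of the pseudoinverse evaluates the fitted polynomial at the
    # window centre (t = 0), which is exactly the smoothed sample.
    A = np.vander(t, order + 1, increasing=True)
    centre_weights = np.linalg.pinv(A)[0]
    return np.array([centre_weights @ xp[i:i + window] for i in range(len(x))])
```

One property worth knowing: the filter reproduces any polynomial of degree up to `order` exactly at interior points, which is why it preserves peak shapes better than a plain moving average.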
Answer: You may have a look at the method called JOT: A Variational Signal Decomposition Into Jump, Oscillation and Trend (You may access it in A Two Stage Signal Decomposition into Jump, Oscillation and Trend Using ADMM).
This method basically does what you're after; it decomposes the signal into 3 components: a jump signal, an oscillation signal, and a trend signal.
The paper shows results for a signal similar to yours.
The method is quite simple if you know ADMM.
In any case, they supply code. | {
"domain": "dsp.stackexchange",
"id": 12072,
"tags": "filters, signal-analysis, noise, deep-learning, decomposition"
} |
Preparation of normal DNA polymerase | Question: I don't need Taq polymerase, just normal DNA polymerase. So the main idea is preparing polymerase directly from Lactobacillus without using a cloned vector. My idea is electrophoresis using agarose gel, calculating the isoelectric point of the polymerase based on its protein sequence to know where it will migrate after electrophoresis. Is this idea feasible?
Answer: This is a very ambitious project for a DIY-er. To explain why, I'm going to use the example of DNA polymerase I from E. coli.
DNA polymerase I is the most abundant DNA polymerase in the E. coli cell, but nevertheless is still only present at around 300 copies per cell
Ishihama Y et al. (2008) Protein abundance profiling of the Escherichia coli cytosol. BMC Genomics. 9:102.
This was the first DNA polymerase that was ever purified, in the laboratory of Arthur Kornberg
(Enzymatic Synthesis of Deoxyribonucleic Acid: I. PREPARATION OF SUBSTRATES AND PARTIAL PURIFICATION OF AN ENZYME FROM ESCHERICHIA COLI. Lehman, IR et al. (1958) J. Biol. Chem. 233:163-170.).
In this paper a method of purification is described that starts with 60 litres of culture. There is also this statement:
"One kilo of E. coli yields less than 10 mg of the purified enzyme."
I'm guessing that you have no experience of protein purification, so I'll just tell you that this is very discouraging: these days, with overexpression, people expect to get mg quantities of their protein from just a few grams of bacterial cells.
This is a clear illustration of why it is pretty much unthinkable for you to set out to purify an enzyme like this using small-scale methods, and explains why techniques of overexpression and the use of affinity tags have revolutionised protein purification.
And it gets worse: even if you did manage to purify some DNA polymerase I from a bacterial source you would find that it is actually not very good at making DNA. This is because as well as extending primers in a 5'>3' direction, it is very good at degrading DNA in the same direction. In other words as the enzyme moves along a template molecule, copying it, it will degrade any existing DNA that it meets "ahead" of its direction of travel. This is why the most commonly-used form of this enzyme in research is the "Klenow fragment", a fragment of the polymerase that lacks the 5'>3' exonuclease activity. This fragment was originally made by treating DNA polymerase I with the protease subtilisin, but it is now expressed from an engineered polA gene. Klenow fragment polymerase was used in the original PCR experiments (it had to be added afresh at every cycle).
I don't know how much money you have to spend on your projects, but in fact these enzymes can now be bought relatively cheaply. I don't think that you can expect to make them for yourself, without this becoming the actual project.
Finally, if you can get hold of a strain of E. coli with an expression plasmid for Taq polymerase then you really could make your own very easily - I've done this myself (but I no longer have the strain) and the heat stability of the enzyme makes the purification trivial. It would be much easier to use your own Taq with a heated water bath than to try to make DNA polymerase I. But obviously it depends upon what you are trying to do: Taq polymerase may be unsuitable for other reasons. | {
"domain": "biology.stackexchange",
"id": 1165,
"tags": "molecular-biology, diy-biology"
} |
Understanding FFT size in OFDM modulation | Question: I know that the FFT size is the number of subcarriers an OFDM system has. But I also know that the FFT has to be $ 2^N $ big (in order to process N samples from time domain to frequency domain).
But here in Matlab I can set my N to 333, for example, and the program will run without any problem (of course, the constellation diagrams will be different if I use, for example, N = 256, but that's just it). Why is that? Did I misunderstand something?
1) Also, if I have 1024-OFDM modulation, and the constellation on the subchannels is 64-QAM, my FFT size is in the name of the modulation, right? Is it 1024? Because 64 is the number of constellation points (size of the QAM symbols), so it can't be that.
Answer: An FFT does not need to be of length 2^N. However, FFTs of lengths that are a multiple of only tiny primes run faster. If you don't mind slower performance (or higher CPU utilization), you can use any length.
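To illustrate, a quick numpy sketch (the FFT routine there accepts any length, just like MATLAB's): a length-333 transform works and is still exactly invertible; it just may not hit the fastest code path.

```python
import numpy as np

N = 333  # deliberately not a power of two
rng = np.random.default_rng(1)
x = rng.standard_normal(N) + 1j * rng.standard_normal(N)

X = np.fft.fft(x)        # works for any N, not just 2**k
x_back = np.fft.ifft(X)  # round trip recovers the input

assert X.shape == (N,)
assert np.allclose(x_back, x)
```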
Note that multiple modulations can be combined, for instance, modulation of each subcarrier, and modulation of a collection of carriers. Each can be different, thus differently named. For instance, each subcarrier in an OFDM system can have a different size of QAM constellation. | {
"domain": "dsp.stackexchange",
"id": 7256,
"tags": "fft, digital-communications, ofdm, quadrature"
} |
Is it appropriate to call an element an amphoteric substance? | Question: In most Japanese high-school textbooks it is written that Al, Zn, Sn and Pb are amphoteric elements (in Japanese 両性元素) because they react both with strong acids and bases.
For example Al reacts like below.
$\ce{2Al +2NaOH +6H2O->2Na[Al(OH)4] +3H2}$
$\ce{2Al + 6HCl-> 2AlCl3 + 3H2}$
It is true that these are reactions of Al with an acid or a base, but in these reactions Al is not acting as a base or an acid. Al is acting as a reductant.
https://goldbook.iupac.org/terms/view/A00306
In above IUPAC definition, it is written that "A chemical species that behaves both as an acid and as a base is called amphoteric.".
Since Al acts not as an acid or a base but as a reductant, I thought it inappropriate to call the element Al itself an amphoteric substance, although its oxide and hydroxide are amphoteric. Or are these reactions actually a kind of acid-base reaction?
Answer: You are right, there is a problem in the textbook's description. Elements are not amphoteric, it is their oxides which are amphoteric. Zinc oxide, aluminum oxide, lead oxide will all dissolve in strong bases or strong acids. In the bases, we get corresponding anions containing the metal, which are called zincate, aluminate, and plumbate respectively. In the acids, we would get their corresponding cations. | {
"domain": "chemistry.stackexchange",
"id": 16912,
"tags": "acid-base, oxides, definitions"
} |
The length of the shortest $s$-$t$ path equals the maximum tension between $s$ and $t$ | Question: I am stuck at the following exercise:
Consider a directed graph $G = (V, A)$ with start vertex $s ∈ V$, target vertex $t \in V$ and weights $w_{ij} \in \mathbb{R}$ for each arc $(i, j)\in A$. For any $i \in V$ further let $\pi_i \in \mathbb{R}$ be the so-called potential of $i$. The potential difference $\tau_{ij} = \pi_i − \pi_j$ is called the tension of the arc $(i, j)$ with respect to the potential vector $\pi$. Finally, let $\mathcal{P}$ denote the set of all directed paths from $s$ to $t$.
Now consider the following LP $(P)$:
\begin{align}
\max& \sum_{(i,j) \in P} \tau_{ij}\\
s.t. & \qquad P \in \mathcal{P} \\
& \qquad \tau_{ij} = \pi_i - \pi_j \text{ for all $(i,j) \in A$}\\
& \qquad \tau_{ij} \le w_{ij} \text{ for all $(i,j) \in A$}\\
& \qquad \pi_i \in \mathbb{R} \text{ for all $i \in V$}
\end{align}
If $P^\ast$
is a path for which the maximum in $\mathcal{P}$ is achieved, the value $\sum_{(i,j) \in P^\ast} \tau_{ij}$ is referred to as maximum tension between $s$ and $t$.
Prove that under the assumption that an $s$-$t$ shortest path exists with respect to the arc weights
$w_{ij}$, the length of a shortest $s$-$t$ path equals the maximum tension between $s$ and $t$.
If I am not mistaken, we have by the definition of $\tau_{ij}$ that $\sum_{(i,j) \in P} \tau_{ij} = \pi_s-\pi_t$, so the tension is the same for all $P \in \mathcal{P}$. Could you please tell me what I am misunderstanding?
Remark: I know that this is supposed to be the dual of the shortest path problem, but I am supposed to do this directly.
Answer: The statement in the exercise, "If $P^\ast$
is a path for which the maximum in $\mathcal{P}$ is achieved, the value $\sum_{(i,j) \in P^\ast} \tau_{ij}$ is referred to as maximum tension between $s$ and $t$." is indeed quite confusing.
As you have pointed out, given $i\to\pi_i$ for all $i\in V$ and $\tau_{ij}=\pi_i-\pi_j$, we know the sum $\sum_{(i,j) \in P} \tau_{ij} = \pi_s-\pi_t$ does not depend on the choice of $P \in \mathcal{P}$.
Here is an equivalent but clearer definition of the maximum tension.
Consider a directed graph $G = (V, A)$ with start vertex $s ∈ V$, target vertex $t ∈ V$ and weights $w_{ij} \in \mathbb{R}$ for each arc $(i, j)\in A$. Assume there is a path from $s$ to $t$. Then the maximum tension of $(G, s, t, w)$ is the optimal value of the objective function of the following linear program. When $G$ and $w$ are understood, we also call it the maximum tension from $s$ to $t$.
$$\begin{array}{rl}
\max&\pi_s-\pi_t\\
s.t. & \pi_i - \pi_j \le w_{ij} \text{ for all }(i,j) \in A\\
& \pi_i \in \mathbb{R} \text{ for all $i \in V$}
\end{array}$$
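To make this concrete: taking $\pi_i$ to be the shortest-path distance from $i$ to $t$ gives a feasible potential (the constraints are exactly the triangle inequality for shortest-path distances), and it attains $\pi_s - \pi_t$ equal to the shortest-path length. A small self-contained sketch, using a made-up graph for illustration:

```python
# Feasible potentials from shortest-path distances to t (Bellman-Ford).
# The graph and weights below are a made-up example.
arcs = {("s", "a"): 2, ("s", "b"): 5, ("a", "b"): 1, ("a", "t"): 6, ("b", "t"): 2}
nodes = {"s", "a", "b", "t"}

INF = float("inf")
dist = {v: INF for v in nodes}  # dist[v] = shortest distance from v to t
dist["t"] = 0.0
for _ in range(len(nodes) - 1):  # Bellman-Ford relaxation passes
    for (i, j), w in arcs.items():
        if dist[j] + w < dist[i]:
            dist[i] = dist[j] + w

pi = dist  # candidate potentials
# Feasibility: pi_i - pi_j <= w_ij for every arc (triangle inequality).
assert all(pi[i] - pi[j] <= w for (i, j), w in arcs.items())
# The attained tension equals the shortest s-t path length (s->a->b->t = 5).
assert pi["s"] - pi["t"] == 5
```

The converse direction is the summation argument from the question: for any feasible $\pi$, summing $\tau_{ij} \le w_{ij}$ along a shortest path bounds $\pi_s - \pi_t$ by the path length.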
The essential point here is that the assignment $\pi_i$ for all $i\in V$ are independent variables in the linear program above. | {
"domain": "cs.stackexchange",
"id": 20559,
"tags": "graphs, optimization, shortest-path, linear-programming"
} |
Function to check that a Python list contains only True and then only False | Question: I would like to only allow lists where the first n elements are True and then all of the remaining elements are False. I want lists like these examples to return True:
[]
[True]
[False]
[False, False]
[True, False]
[True, False, False]
[True, True, True, False]
And lists like these to return False:
[False, True]
[True, False, True]
That is, any list that can be written as [True] * n + [False] * m for n, m integers in the interval [0, infty).
I am currently using a function called check_true_then_false, but I feel like there is probably a neater way of doing this. The code doesn't need to be fast, as this will only be run once (not inside a loop) and the lists will be short (single-digit lengths).
def check_true_then_false(x):
n_trues = sum(x)
should_be_true = x[:n_trues] # get the first n items
should_be_false = x[n_trues:len(x)] # get the remaining items
# return True only if all of the first n elements are True and the remaining
# elements are all False
return all(should_be_true) and not any(should_be_false)
Testing shows that it produces the correct output:
test_cases = [[True],
[False],
[True, False],
[True, False, False],
[True, True, True, False],
[False, True],
[True, False, True]]
print([check_true_then_false(test_case) for test_case in test_cases])
# expected output: [True, True, True, True, True, False, False]
Answer:
You can just use x[n_trues:] rather than x[n_trues:len(x)].
Your comments don't really say more than the code. And so I'd recommend removing the comments.
If you want to keep your code documented use docstrings, which can be exported to your documentation via tools like Sphinx.
As commented by Konrad Rudolph, you can remove the and not any(should_be_false) as this will always fail if the all fails.
def check_true_then_false(x):
"""Check first n values are True and the rest are False."""
return all(x[:sum(x)])
If you want your code to work with iterators, not just sequences then you can instead use:
def check_true_then_false(it):
"""Check first n values are True and the rest are False."""
it = iter(it)
# Takes advantage of the iterating side effect, where it consumes the iterator.
# This allows `all` to simultaneously checks `it` starts with trues and advances `it`.
return all(it) or not any(it)
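As a sanity check, running this iterator version against the question's test cases (plus the empty list) reproduces the expected output; the function is repeated here so the snippet is self-contained:

```python
def check_true_then_false(it):
    """Check the first n values are True and the rest are False."""
    it = iter(it)
    # `all` consumes the leading trues (and the first false, if any);
    # `any` then verifies no true value remains in the iterator.
    return all(it) or not any(it)

test_cases = [[], [True], [False], [True, False], [True, False, False],
              [True, True, True, False], [False, True], [True, False, True]]
results = [check_true_then_false(tc) for tc in test_cases]
assert results == [True, True, True, True, True, True, False, False]
```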
For the following two inputs all will result in:
>>> all([True] * n)
True
>>> all([True] * n + [False, ...])
False
However, it will still contain the remaining [...] values, as all and any are lazy. This means we just need to check that the rest are false: all effectively slices the iterator for you, without you having to. That leaves any with:
>>> any([False] * n)
False
>>> any([False] * n + [True, ...])
True | {
"domain": "codereview.stackexchange",
"id": 31896,
"tags": "python, python-3.x"
} |
Why is $mg$ positive and $N$ negative in this problem? | Question: In this problem:
A car of mass $430\ \mathrm{kg}$ travels around a flat, circular race track of radius $178\ \mathrm{m}$. The coefficient of static friction between the wheels and the track is $0.266$.
The same car now travels on a straight track and goes over a hill with radius $178\ \mathrm{m}$ at the top.
What is the maximum speed that the car can go over the hill without leaving the road?
Correct answer: $41.766\ \mathrm{m/s}$.
Explanation:
$$\frac{mv^2}{r} = mg - N$$
where $N$ is the normal force acting on the car from the ground. The car will fly off the ground just when $N = 0$ so the maximum speed allowed will be
$$\begin{align}v_{\text{max}} &= \sqrt{gr} \\
&= \sqrt{(9.8\ \mathrm{m/s^2})(178\ \mathrm{m})} \\
&= 41.766\ \mathrm{m/s}.\end{align}$$
The car is not driving upside down! So why is the force of gravity positive and the normal force considered negative in this problem?
Answer: The centripetal acceleration always points toward the center of the circle. In this case, the center of the circle is below the car, so the centripetal acceleration points downward.
Now, you'll notice that, in the given solution, the centripetal acceleration term is positive. That means the writer has chosen a coordinate system where positive is downward, and negative is upward. Any force that acts downward (like gravity) will be represented by a positive term ($+mg$), and any force that acts upward (like the normal force) will be represented by a negative term ($-N$).
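The sign convention does not change the final number, of course; a one-line check of the arithmetic in the given solution:

```python
import math

# Tipping point: N = 0, so m*v**2/r = m*g  =>  v_max = sqrt(g*r)
g = 9.8     # m/s^2
r = 178.0   # m
v_max = math.sqrt(g * r)
assert abs(v_max - 41.766) < 1e-3  # matches the stated answer
```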
Alternatively, you could make the opposite choice: pick positive to be up, and negative to be down. In that case, the centripetal acceleration, since it points downward, would be represented by a negative term, $-\frac{mv^2}{r}$. Same for gravity ($-mg$). And the normal force, pointing upward, would be represented by a positive term, $+N$. | {
"domain": "physics.stackexchange",
"id": 32423,
"tags": "homework-and-exercises, newtonian-mechanics, forces, newtonian-gravity"
} |
ConcurrentHashMap Implementation | Question: I have written a simplified version of my own MyConcurrentHashMap. I tried to make use of a Lock[] array for lock striping, so that all put() operations to the MyConcurrentHashMap go through a lock if required. Also, I made my own MyHashMap internally as the backing data structure for MyConcurrentHashMap. I created it myself because I wanted to qualify the value field as volatile for volatile read operations without explicit locking. This ensures that the get() operation is a non-synchronized call with implicit locking at the memory level through a volatile read, not via acquiring a lock, which makes it lighter. Have a look and provide your inputs and improvements if any:
import java.util.LinkedList;
import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantLock;
public class MyConcurrentHashmap {
private static final int DEFAULT_CONCURRENCY_LEVEL = 16;
private static final int INITIAL_CAPACITY = 16;
private float LOAD_FACTOR = 0.75f;
private int capacity = INITIAL_CAPACITY;
private int size;
private Lock[] locks;
private MyHashMap myHashMap = new MyHashMap();
private class MyHashMap {
private LinkedList[] lists = new LinkedList[INITIAL_CAPACITY];
private class MapEntry {
final Object key;
volatile Object value;
MapEntry(Object key, Object value) {
this.key = key;
this.value = value;
}
}
void put(Object key, Object value) {
if (key == null)
throw new IllegalArgumentException("Key Cannot be Null");
int hash = key.hashCode();
hash %= lists.length;
if (size >= LOAD_FACTOR * lists.length) {
capacity = lists.length * 2;
LinkedList[] tempLists = new LinkedList[capacity];
System.arraycopy(lists, 0, tempLists, 0, lists.length);
lists = tempLists;
reHash();
}
if (lists[hash] == null) {
lists[hash] = new LinkedList<>();
size++;
}
int i = 0;
for (; i < lists[hash].size(); i++) {
MapEntry mapEntry = (MapEntry) (lists[hash].get(i));
if (mapEntry != null && mapEntry.key.equals(key)) {
mapEntry.value = value;
break;
}
}
if (i == lists[hash].size()) {
lists[hash].addLast(new MapEntry(key, value));
}
}
Object get(Object key) {
int hash = key.hashCode();
hash %= lists.length;
Object value = null;
if (lists[hash] != null) {
for (int i = 0; i < lists[hash].size(); i++) {
MapEntry mapEntry = (MapEntry) (lists[hash].get(i));
if (mapEntry != null && mapEntry.key.equals(key)) {
value = mapEntry.value;
break;
}
}
}
return value;
}
private void reHash() {
for (int i = 0; i < lists.length; i++) {
if (lists[i] != null) {
int hash = ((MapEntry) lists[i].getFirst()).key.hashCode();
hash %= lists.length;
if (i != hash) {
lists[hash] = lists[i];
lists[i] = null;
}
}
}
}
int size() {
return size;
}
int capacity() {
return capacity;
}
}
public MyConcurrentHashmap(int concurrencyLevel) {
locks = new Lock[concurrencyLevel];
for (int i = 0; i < concurrencyLevel; i++) {
locks[i] = new ReentrantLock();
}
}
public MyConcurrentHashmap() {
this(DEFAULT_CONCURRENCY_LEVEL);
}
public void put(Object key, Object value) {
int hash = key.hashCode();
hash %= myHashMap.capacity();
locks[hash].lock();
myHashMap.put(key, value);
locks[hash].unlock();
}
public Object get(Object key) {
return myHashMap.get(key);
}
public static void main(String[] args) {
MyConcurrentHashmap myCCHashMap = new MyConcurrentHashmap();
myCCHashMap.put(1, "Thomas");
myCCHashMap.put(9, "Mathew");
myCCHashMap.put(17, "Tissa");
myCCHashMap.put(9, "Mathew Thomas");
System.out.println(myCCHashMap.get(1));
System.out.println(myCCHashMap.get(9));
System.out.println(myCCHashMap.get(17));
}
}
Answer: This is a pretty complicated task and I'd bet there'll be problems.
With volatile Object value you ensure visibility of value, but Object key gets read before it and it's not final. So there is a good chance it'll break.
if (capacity == LOAD_FACTOR * lists.length) {
With floating-point multiplication, this may never happen. With your LOAD_FACTOR = 0.75f it may, as 0.75 can be represented exactly, but e.g. with 0.4 it'll fail. The test should be <= or >=; no idea which one is right.
Possibly neither, as you're testing capacity, which never changes. It should be size >= LOAD_FACTOR * lists.length.
The correctness of your get depends on the internal working of the LinkedList.
for (; i < lists[hash].size(); i++) {
if (((MapEntry) (lists[hash].get(i))) != null
&& ((MapEntry) (lists[hash].get(i))).key.equals(key)) {
((MapEntry) (lists[hash].get(i))).value = value;
break;
}
As LinkedList has an inefficient get, this is doubly inefficient. Actually triply as you're using it twice. You should iterate the list instead.
Do you put any nulls in the list? If no, then you don't need to test for them.
Use a local variable for lists[hash] and also for the i-th element.
Don't break when you can return and save yourself the test in
if (i == lists[hash].size()) {
lists[hash].addLast(new MapEntry(key, value));
}
hash %= lists.length;
As your lists.length is always a power of two, you can use this much faster hack:
hash &= lists.length - 1;
public Object get(Object key) {
Integer hash = key.hashCode();
hash %= myHashMap.capacity();
return myHashMap.get(key);
}
Why Integer? Why hash at all when you don't use it? | {
"domain": "codereview.stackexchange",
"id": 14661,
"tags": "java, thread-safety, hash-map, concurrency, synchronization"
} |
What is the type of bearing used on swivel base chairs? | Question: I'm designing a chair from scratch and I'd like it to have a swivel base. For this my initial idea was to press fit a bearing (thrust bearing) in the shaft of the base and connect the seat by press fitting a small shaft through the middle.
But I'm having my doubts if this can handle the load of a person sitting on the chair.
Looking for a purpose built or better suited bearing for this.
Also opened to suggestions for a better way to build this.
Thanks
Answer: Most 'Swivel Chairs' use simple thrust bearings, which consist of a low-friction layer (either ballbearings, or a flat disc of plastic), sandwiched between a lower disc attached to the swivel legs and an upper disc attached to the chair base. the shaft coming down out of the chair base goes through the sandwich and extends down to where the legs branch off, and somewhere in there is a bushing that holds the shaft centered in the tube which supports the bottom disc as described above. | {
"domain": "engineering.stackexchange",
"id": 2347,
"tags": "mechanical-engineering, bearings"
} |
Creating indexes from a theoretical decimal/int, splitting into before and after arrays | Question: I'm currently populating before and after arrays with indexes, based on a number provided. If the input going in is an int, there will be three indexes (left, middle and right). If it is a decimal, there will just be two (left, right). There will be a maximum of three indexes to sort into a maximum of two arrays (before and after). To understand, here are some examples of expected input/output:
input: 1
output: before [] after [0, 1, 2]
input: 1.5
output: before [] after [1, 2]
input: -3.4
output: before [3, 4] after []
input: 0
output: before [1] after [0, 1]
Essentially, the input index gets floored and ceiling'd (or, in the case of a whole number, their next/previous integers are used along with the original input). If any of the resulting integers are below 0, they get put in the before array, but their index is made absolute, otherwise they go into the after array. If they do get put into the before array, then they are put in to reverse order, such that they remain sorted in numerical order.
I have the below working code. But I feel like I could do much better. How would the community go about optimising this?
var input = document.getElementById('input');
var button = document.getElementById('button');
var ouput = document.getElementById('output');
function getOutput(input) {
var left = Math.floor(input);
var right = Math.ceil(input);
var middle;
var before = [];
var after = [];
if (right == left)
left--,
middle = left + 1,
right = left + 2;
if (left < 0)
before.unshift(Math.abs(left));
else
after.push(left);
if (middle < 0)
before.unshift(Math.abs(middle));
else if (typeof middle == 'number')
after.push(middle);
if (right < 0)
before.unshift(Math.abs(right));
else
after.push(right);
return {before: before, after: after};
}
function buttonPressed () {
var i = parseFloat(input.value);
var msg = 'not a number';
if (!isNaN(i)) {
var o = getOutput(i);
msg = 'before [' + o.before.toString() + '] ahead [' + o.after.toString() + ']';
}
ouput.innerHTML = msg;
}
<input id="input" type="text">
<input id="button" type="button" value="get" onclick="javascript:buttonPressed();">
<br><br>
<div id="output" style="font-family: monospace;"></div>
My interest is in the getOutput method. The other stuff is for demonstration purposes.
Update
I have adjusted the expected output and the code from the original question, after being prompted to rethink from the comments.
Answer: The code in the question seems to have adequate complexity for the getOutput function. Some small hints may be given, like:
do not use comma-expressions and decrement,
initialize middle with null and check for it with !== instead of typeof,
also, I do not like comparing undefined middle with 0.
The current code is more or less readable.
As for optimizations, one observation is that apart from the [-1, 1] range, all other inputs always use the same alternative, making it unnecessary to check left, right and middle individually (e.g., if right < 0, then so are middle and left). How much optimization it really brings is hard to tell. If input numbers are almost always large, then making a separate branch and constructing an array directly [left, middle, right] may make the code more efficient.
The near-zero case may need more conditions, of course (or just some kind of lookup for ready arrays - as the number of cases is small).
If you want a more compact code, maybe something like this can be done:
tmparray = ((right == left) ? [left-1, left, left+1] : [left, left+1]);
And after that push/unshift in a loop for each tmparray element. Not sure this will be faster though, but at least may be more readable. (left may be renamed to "lower", and right calculated inline only in the condition). | {
"domain": "codereview.stackexchange",
"id": 18484,
"tags": "javascript, sorting, floating-point"
} |
Which carbocation from following is stable by hyperconjugation? | Question:
These are the 4 figures; how can I tell which one of them is stabilised by hyperconjugation? A, B and C are all stabilised by resonance, if I'm not wrong, but which one is stabilised by hyperconjugation too?
Answer: It should probably be option C, as in one of its resonance structures there is an alpha hydrogen (as it is saturated) with respect to the carbocation formed. Hence, it can show hyperconjugation. | {
"domain": "chemistry.stackexchange",
"id": 12028,
"tags": "organic-chemistry"
} |
Fast overlap save method | Question: Suppose x[n] = (some values) and h[n] = (some values), and the length of h[n] is a multiple of two; to be specific, say it has a length of four. Then M=4. What should the value of L and the value of N be to satisfy N=M+L-1?
The example which I tried solving is:
x[n]={7,6,4,5,2,4,5,2,3}
h[n]={1,2,3,1}
Taking N=8, I got L=5.
x1[n]={0,0,0,7,6,4,5,2}
x2[n]={4,5,2,4,5,2,3,0} (extra zero to make length=N)
Now while solving, I found that I lose some of the values using the save method.
Answer: For overlap-save (sometimes called "overlap-scrap"), when you (circularly) convolve h with x1, you scrap the first 3 output values of y1 and keep the last 5 values. When you (circularly) convolve h with x2, again you scrap the first 3 values of y2 and keep the last 5 values.
with
x[n]={7,6,4,5,2,4,5,2,3}
h[n]={1,2,3,1}
it should be
x1[n]={0,0,0,7,6,4,5,2}
x2[n]={4,5,2,4,5,2,3,0}
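A numpy sketch of the whole procedure. Note the extra zero-padding on the right: with only the two blocks above you get 10 output samples, while the full linear convolution has 9 + 4 - 1 = 12 samples, which is exactly where the "lost" values went. A third (zero-padded) block recovers the tail:

```python
import numpy as np

def overlap_save(x, h, N=8):
    """Full linear convolution of x and h via the overlap-save method."""
    x, h = np.asarray(x, float), np.asarray(h, float)
    M, n_out = len(h), len(x) + len(h) - 1
    L = N - M + 1                # useful output samples per block
    n_blocks = -(-n_out // L)    # ceil(n_out / L)
    # prepend M-1 zeros (the initial "overlap"), pad the tail with zeros
    xp = np.concatenate([np.zeros(M - 1), x, np.zeros(n_blocks * L - len(x))])
    H = np.fft.fft(h, N)
    out = []
    for b in range(n_blocks):
        block = xp[b * L : b * L + N]                  # blocks overlap by M-1
        y = np.fft.ifft(np.fft.fft(block) * H).real    # N-point circular conv.
        out.append(y[M - 1:])                          # scrap first M-1, keep L
    return np.concatenate(out)[:n_out]

x = [7, 6, 4, 5, 2, 4, 5, 2, 3]
h = [1, 2, 3, 1]
assert np.allclose(overlap_save(x, h), np.convolve(x, h))
```

With N=8 and M=4 this produces exactly the blocks from the question ([0,0,0,7,6,4,5,2] and [4,5,2,4,5,2,3,0]) plus one more block of the zero-padded tail.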
Remember that in convolution, h[n] gets turned around to be h[-n] (h[0] remains in the same position but the other values are extended to the left). Only when h[0] is aligned with x1[3] will you get a legitimate value for y1[3]. And notice, because you (necessarily) delayed x[n] by 3 samples with that initial zero-padding, the output will also be delayed by the same amount. | {
"domain": "dsp.stackexchange",
"id": 6943,
"tags": "overlap-save"
} |
What causes pitch bending in old VHS tapes? | Question: There is a familiar audio effect in old films where the music (especially if a note is held a bit longer) bends in pitch and sounds damaged or slightly worn out. The only example I could find is someone recreating the effect:
VHS Pitch Bend Effect
Although this sounds similar to a normal Doppler effect where the soundwaves are being stretched/compressed as an object moves away/toward you, the sound in this case is coming out of something stationary (e.g. the VCR), so it can't be the Doppler effect. It is my understanding that pitch is also not directly equivalent to wave frequency, but is a collective result of loudness, timbre, and duration.
From Wikipedia: "Pitch may be quantified as a frequency, but pitch is not a purely objective physical property; it is a subjective psychoacoustical attribute of sound."
So what causes this pitch bending effect? Is it a physical property or perceptual/sensory property?
Answer: In the audio world, this phenomenon is called wow and is caused by cyclic variations in the playback speed of the tape: when the speed goes up, the pitch increases, and when it goes down, the pitch decreases.
The root cause of playback wow is imperfections in the playback speed control system- and in the tape itself, caused by anisotropic stretch while stored on a spool with uneven tension. Stretched tape plays flat compared to unstretched tape. This effect is strong enough that when mylar is used as the tape base, it must be prestretched ("tensilized") in the factory. | {
"domain": "physics.stackexchange",
"id": 84448,
"tags": "acoustics, frequency, wavelength"
} |
Have red shifted photons lost energy and where did it go? | Question: I think the title says it. Did expansion of the universe steal the energy somehow?
Answer: Energy isn't a nice concept in GR, so all I'm giving is an intuitive way of looking at it.
For gravitationally redshifted stuff:
A photon has energy, thus it gravitates (as energy can gravitate analogous to mass from $E=mc^2$), thus it has some (negative) gravitational potential energy when on the surface of a planet. If it's emitted, its GPE eventually becomes 0. So, this increase in GPE had to come from somewhere: the photon's redshift gave the energy. It's pretty much the same thing that happens when you throw a ball up. It loses kinetic energy (slows down).
The GPE in relativity is basically related to the energy stored in spacetime curvature; in a complicated way that I don't know.
For a normally redshifted photon from a moving body: Energy need not be conserved if you switch frames. Energy is different in each reference frame.
See the answers to the question provided by Qmechanic above as well. Over there, they're talking about the entire universe, though, which leads to additional issues. | {
"domain": "physics.stackexchange",
"id": 7606,
"tags": "general-relativity, energy, cosmology, energy-conservation, photons"
} |
How can I use a pre-loaded map in the navigation stack? | Question:
I'm making a robot using an RPi. I made a map using hector_slam and an rplidar on my laptop, so I have the .pgm and .yaml files (I was having some issues running hector_slam on the RPi). How can I use it in the navigation stack? Do I need to create a map_server node and publish it? If yes, how should I do it?
Originally posted by sajal on ROS Answers with karma: 13 on 2019-06-05
Post score: 0
Original comments
Comment by billy on 2019-06-05:
This is all discussed in the Navigation Stack tutorials.
Answer:
You may use the map_server node in your launch file. Do not forget to upload the .yaml file.
Something like this.
<arg name="map_file" default="$(find robot_navigation)/maps/my_map.yaml"/>
<node name="map_server" pkg="map_server" type="map_server" args="$(arg map_file)" />
Originally posted by Lucas Marins with karma: 71 on 2019-06-06
This answer was ACCEPTED on the original site
Post score: 0 | {
"domain": "robotics.stackexchange",
"id": 33122,
"tags": "navigation, mapping, ros-kinetic, 2d-mapping, map-server"
} |
Real-time position of a distant celestial body | Question: How might it be possible to determine (via some specific online ephemeris or via some well-understood algorithm) the observed position of some distant celestial body (e.g. Betelgeuse)?
To be concrete: suppose I wished to do what any number of mobile phone applications do, i.e. computationally model the observed position of stars and project them onto the screen using GPS coordinates and compass direction, so that an 'overlay' of the night sky is produced. Then:
a) Where could I source the relevant data from?
b) What coordinate system would it be best to use?
Answer: A fine place for reliable information on the location of the vast majority of cataloged stars is the Simbad Astronomical Database. For example, here is the data for Betelgeuse. As you will see, the first line after "other object types" is the position given in the ICRS coordinate system (the first set of numbers is Right Ascension, in hours minutes seconds, and the second set of numbers is Declination in degrees minutes seconds). For all but the highest level research-grade astrometry or proper-motion purposes, this position will be more than sufficient.
Now, given you have the 'global' position of the star, you can calculate its relative position as seen from wherever you are on Earth. Doing so is a function of your latitude, longitude and current time. There are online calculators that can do this such as this one, or you can make a program to crunch through the math yourself (it's not exactly trivial, but if there are lots of stars you want to do it for, or you want to do it on the fly, this is a better bet). If you really want to be clever about it and you have lots of stars, you only have to do the calculation once, since all the stars are essentially fixed relative to one another. When all is said and done, the most intuitive local coordinate system to use for your projecting would be altitude & azimuth.
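The "crunch through the math yourself" step is the standard equatorial-to-horizontal transformation. A minimal sketch (local sidereal time is taken as an input, refraction and proper motion are ignored, and azimuth is returned measured from North through East):

```python
import math

def radec_to_altaz(ra_deg, dec_deg, lat_deg, lst_hours):
    """Convert an ICRS position (RA/Dec) to local altitude/azimuth in degrees.
    lst_hours: local sidereal time in hours (encodes longitude + clock time).
    Sketch only: refraction, proper motion and nutation are ignored."""
    H = math.radians(lst_hours * 15.0 - ra_deg)          # hour angle
    dec, lat = math.radians(dec_deg), math.radians(lat_deg)
    sin_alt = (math.sin(dec) * math.sin(lat)
               + math.cos(dec) * math.cos(lat) * math.cos(H))
    alt = math.asin(sin_alt)
    # azimuth from South (positive toward West), then shifted to from-North
    az_south = math.atan2(math.sin(H),
                          math.cos(H) * math.sin(lat) - math.tan(dec) * math.cos(lat))
    az = (math.degrees(az_south) + 180.0) % 360.0
    return math.degrees(alt), az
```

For example, a star at declination 30° observed from latitude 50° at transit sits 70° up, due south.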
If you are concerned about also including the relative motions of stars over time, I would say don't be. Nearly all stars have proper motions of less than 1 arcsecond a year, which means it would be half a century before most stars have moved in even the most remotely perceptible way to the naked eye (the human eye has an angular resolution of roughly one arcminute). Other apparent motions, such as parallax, are also negligibly small for objects outside of our solar system (again, unless you are doing research-grade work). | {
"domain": "astronomy.stackexchange",
"id": 1067,
"tags": "positional-astronomy"
} |
Does Universal Time really track mean solar time? | Question: At innumerable places on the net you can find the claim that leap seconds are inserted into UTC in order to make it track UT1 which again tracks mean solar time at the reference meridian.
However, when I tried to locate an actual definition of UT1, the closest I could find was IAU resolution B1.8 from 2000 which seems to define it implicitly from the following relationship with the Earth Rotation Angle:
$$ \theta = 2\pi\bigl(0.7790572732640+1.00273781191135448({\it JD}_{\rm UT1}-2451545.0)\bigr)$$
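For reference, this relation can be evaluated directly; a sketch (the angle is reduced modulo one revolution):

```python
import math

def earth_rotation_angle(jd_ut1):
    """Earth Rotation Angle in radians, per IAU 2000 resolution B1.8."""
    t = jd_ut1 - 2451545.0  # days since J2000.0 (UT1)
    frac = 0.7790572732640 + 1.00273781191135448 * t
    return 2.0 * math.pi * (frac % 1.0)
```

At J2000.0 this gives about 280.46°, and the angle advances by roughly 0.9856° more than a full turn per UT1 day.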
Here the magic constant 1.00273781191135448 appears to be derived from the ratio between the Earth's rotational period and the sidereal year -- but no provision for adjusting it for observed changes in that ratio seems to be made. (The least significant several digits of the constant must be wrong already).
As the Earth's rotation slows down, is UT1 therefore going to drift out of sync with mean solar time at the IERS reference meridian? Or am I missing something?
Answer: This question asks in effect: Is there a discrepancy, perhaps progressive, between the official time-measure UT1, and some other credible measure of mean solar time? This is harder to answer than it might first seem.
UT1 is the current official representative of mean solar time: it is doubtful that any independent measure of mean solar time (other than direct relatives of UT1 such as UT0 or UT2) has been recently defined or computed to any precision. (The method of the equation of time as normally carried out is only roughly approximate, because it ignores the many solar perturbations discovered since the early 18th-c. and its results may vary from accurate mean solar time by up to about 3 seconds: see Hughes et al. (1989), "The equation of time", http://adsabs.harvard.edu/abs/1989MNRAS.238.1529H .) But as the question indicates, one reason for questioning UT1 is that solar time naturally depends on two independent variables, related to the angular rates of the earth's axial rotation and of the earth's orbital motion around the sun: In contrast, the official UT1 calculation appears to condense these into a single constant ratio, thus losing one of the independent variables of the natural physical model. This may well raise suspicion that UT1 deviates or will deviate from a more physically-defined mean solar time.
There is in fact a (no-longer-official) alternative and physical basis for calculating mean solar time. This can be gathered from the history of 19th-century mean solar time definitions and calculations. At the same time, there is also some documentation from the history to indicate the origin and nature of some of the steps taken in the 20th/21st-century, to change the old 19th-century physical model to the current (and very non-transparent!) basis of calculation for UT1, which is described in: (N Capitaine et al. (2003), "Expressions to implement the IAU 2000 definition of UT1", Astronomy & Astrophysics 406, 1135-1149, http://adsabs.harvard.edu/abs/2003A%26A...406.1135C ).
Classical method for determining mean solar time (Nautical Almanac, 19th-c.): (Not discussed here is the method of the equation of time, which grew more cumbersome as perturbations of the apparent solar motion were successively discovered during the 18th-c.; all of them had to be included in precise calculations.) A simpler equivalent method for mean solar time, to a suitable astronomical precision for its period, was provided by Nevil Maskelyne's tables of 1774 (found at http://docs.lib.noaa.gov/rescue/rarebooks_1600-1800/QB12M31774.pdf), especially his table of the Sun's mean longitude converted from degrees to hours and given to a precision of 0.01 time-second. His instructions showed how to use this to convert observed sidereal time to mean solar time. It may be unclear whether Maskelyne intended this method to make a standard out of Greenwich mean time, but that seems to be what happened. Thus initially, according to J J Lalande's 'Astronomie' (3 editions, Paris, 1764, 1771, 1792) methods for mean time still relied usually on the equation of time, but Lalande's third edition of 1792 (vol.1, https://archive.org/details/astronomielalande01lala , see art.1014-5, p.361) added a mention of Maskelyne's table, commenting that 'in England' it was used to find the mean time of an observation. The Nautical Almanac eventually began (from 1833) to tabulate the relation between sidereal and mean solar time (Sidereal time at (Greenwich) mean noon), and during the rest of the 19th-c. such tabulations provided what became standard observatory methods of finding mean solar from sidereal time (see J C Adams 1884, "On the Definition of Mean Solar Time", Observatory 7, 42-44, http://adsabs.harvard.edu/abs/1884Obs.....7...42A ).
The method has a simple physical basis in spherical astronomy, using a division into two parts of the arc of the celestial equator that extends from the equinoctial point eastwards to the point culminating at the meridian -- shown in the diagram below.
(The upper part of the diagram represents the celestial equator above the horizon (marked E W) for an observer in the northern hemisphere looking south. M is the culminating point where the equator intersects the (south) meridian. Q is the equinoctial point 'first point of Aries'. QS represents the sun's mean longitude reckoned from the equinox: it has been (in concept) transferred from the ecliptic to the equator where it now represents the mean right ascension of the fictitious mean sun. For convenience of arrangement, the diagram represents a position in early afternoon (S somewhat west of M) in about June (S is nearly 90° east of Q). (N) is below the horizon.)
The time-definitions and explanations of J-J de Lalande (1792, 'Astronomie', 3rd ed. vol.1, art.1014, p.361, https://archive.org/details/astronomielalande01lala) state clearly (expressing right ascensions in degrees, according to Lalande's custom):-
"The mean longitude of the sun, plus the mean time converted to degrees, gives the right ascension of the mid-heaven." That translates, in terms of the diagram here, to: QS + MS = QM .
Lalande also pointed out (art.1015) that "The Sun's right ascension, or that of the star used to find the right ascension of the midheaven, should be counted from mean equinox like the mean longitude of the sun ....". Modern readers have to note further that (a) observed and tabulated 'sidereal time' up to about the end of the 19th century always included the effect of nutation in right ascension, i.e. it was 'apparent' sidereal time (see explanation in Nautical Almanac 1864, pp.515, https://babel.hathitrust.org/cgi/pt?id=mdp.39015068159329 ), and (b) also in those days tabular mean longitudes of the sun still included the effect of mean aberration: Newcomb's tables of 1895-8 were the first to attempt to remove it.
Lalande (art.1015) continued: "it follows that the right ascension of the mid-heaven" {i.e. sidereal time} "added to the complement to 24h of the Sun's mean longitude gives the mean time", and "this method is used in England to find the time of an observation: Mr Maskelyne has given in his tables the Sun's movement in time to hundredths of a second"." In terms of the diagram, this translates as: mean time MS = QM + (24h - QS) = QM - QS .
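In code, Lalande's rule is a one-liner; a sketch assuming both arcs are already expressed in hours:

```python
def mean_solar_time(sidereal_hours, sun_mean_long_hours):
    """Lalande's rule MS = QM - QS: mean solar time from the sidereal time QM
    and the Sun's mean longitude QS, both in hours, wrapped modulo 24h."""
    return (sidereal_hours - sun_mean_long_hours) % 24.0
```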
A modern version of the foregoing classical method for defining mean solar time: During the period when methods like the foregoing were current, sources of the solar tables used to compute the annual almanacs were occasionally updated, when newer and more correct theories and tables became available. Thus, for the (UK) Nautical Almanac, Delambre's mean elements for the Sun were used with minor corrections down to 1833, then Bessel's mean elements from 1834 to 1863, and Leverrier's from 1864 to the end of the century. The principle remained unchanged although slight discontinuities in the tabulations occurred at the points of changeover. Accordingly, a modernized version of the same method can be implemented using recent accurate data for the sun's mean longitudes. Two modern sources of mean elements/longitudes for the sun (both in terms of ephemeris/dynamical time TT) are available: in J L Simon et al. (1994) "Numerical expressions for precession formulae and mean elements for the Moon and the planets" (derived from the JPL ephemeris DE200 which was the official basis of the Astronomical Almanac from 1984-2002), Astronomy & Astrophysics 282, 663-683, http://adsabs.harvard.edu/abs/1994A%26A...282..663S ; and in J Chapront et al., "A new determination of lunar orbital parameters ..." (derived from the JPL successor ephemeris DE405 in official use 2003-14), Astronomy & Astrophysics 387, 700-709, http://adsabs.harvard.edu/abs/2002A%26A...387..700C . The solar positions in those two ephemerides hardly differ from each other for present purposes. Mean solar longitudes reckoned from mean equinox of date are given in Simon et al. The data of Chapront et al. are for a fixed equinox of J2000 to which general precession in longitude has to be applied. Both sets of modern mean longitudes are geometric, leaving mean aberration to be applied as -20".4955 in arc. 
With that adjustment they can then be used like the old mean longitudes, converted to time, treated as right ascensions of the fictitious mean sun, and applied in the formulae given above to relate mean solar time with mean sidereal time. The results may be compared with current values for UT1. {Some example calculations will be added if requested.}
Modern IERS zero meridian: It is worth noting that it appears no allowance is needed for any difference between time at the old transit instrument at Greenwich observatory and time at the new IERS reference (zero) meridian. The ground track of the IERS reference meridian is located about 102m to the east of the meridian line through the old transit instrument. A paper of S Malys et al. (2015), "Why the Greenwich meridian moved" (J Geodesy 89, 1263-1272, at http://adsabs.harvard.edu/abs/2015JGeod..89.1263M ) accounts for the difference of ground track as an effect of local gravitational deflection of the vertical. Thus, while the ground track of the IERS zero meridian is appreciably east of the old Airy transit instrument, by a distance which would make a rotation of about 5.3" arc in longitude or about 0.35 time-seconds if it represented a geocentric rotation on a spheroidal earth, there has in fact been no rotation of the reference frame (within measurement error): the deflection of the vertical makes the difference a parallel displacement. With the nominally meridian (but off-geocentric) plane of the old transit instrument practically parallel to a geocentric meridian plane through the more easterly ground track of the IERS reference meridian, a transit instrument mounted in the IERS geocentric meridian plane would 'see' stellar transits at the same times as those occurring at the Airy transit instrument.
Modern changes in mean solar time/UT1: The 19th-century Nautical Almanac standard for mean solar time was altered in several successive ways towards the current standard for UT1. (Not discussed here are the corrections for observed polar wandering and seasonal effects leading to the differences between UT0, UT1 and UT2.) But there was a history of 19th-c. controversy about one of the changeovers mentioned above from older to newer solar data, worth brief description because it foreshadows and may possibly have influenced some of the 20th/21st-c. departures from the use of mean solar longitudes.
At one of the changeovers (from Bessel's data down to the end of 1863, to LeVerrier's data used from 1864 on), a discontinuity of a little over half a second was generated in the tabular relation between sidereal and mean solar time (shown e.g. by the daily tabular values of sidereal time at mean noon in the Nautical Almanac for '1863 Dec 32' and 1864 Jan 1) (data for Greenwich mean noon 1864 Jan 1 in the 1864 volume, at https://babel.hathitrust.org/cgi/pt?id=mdp.39015068159329 , and for the same physical day in the 1863 volume where it was designated '1863 Dec 32', https://babel.hathitrust.org/cgi/pt?id=mdp.39015068159006 ).
There was a brief explanation and apology in the 1864 preface: the step-change arose because LeVerrier's theory had identified new long-term solar periodic perturbations: as long as these remained undiscovered, their effect had been implicit in the old mean elements, and so they had to be removed from the new; the largest of them due to Mars and Jupiter was near its peak at the start of 1864 and responsible for nearly all the time-discrepancy of the changeover.
This changeover and discontinuity attracted little attention for nearly twenty years; but then E J Stone, an astronomer of some repute and director of a university observatory, was trying in the 1880s to account for growing discrepancies between observed places of the moon and positions calculated by Hansen's 1857 'Tables de la Lune'. Stone seized (quite mistakenly), as the cause of the lunar errors, on the 1864 discontinuity of about half a second in calculated mean solar time arising from the changeover from Bessel's mean solar elements to those of Leverrier (E J Stone, 1883 MNRAS 43, 335-345 and 401-407). The two effects were unrelated and Stone had also made a mistake that exaggerated errors by a factor of about 365. Stone's mistakes were pointed out in print by no less a quartet than John Couch Adams (1883 MNRAS 44, 43-47; 1884 MNRAS 44, 82-84), Arthur Cayley (1883 MNRAS 44, 47-49; 1884 MNRAS 44, 84-85), Simon Newcomb (1884 MNRAS 44, 234-5; 1884 MNRAS 44, 381-3; 1894 MNRAS 54, 286-8) and G B Airy (1883 Observatory 6, 184-5), but Stone stubbornly stuck to his position. He also wrote a vehement paper for the Royal Society, based on his faulty analysis of the lunar errors, in which he insisted that no revision should ever again be allowed in future to create any discontinuity in the tabular measure of mean solar time (E J Stone (1883), Proc. R. Soc. Lond. 35, 135-7, http://rspl.royalsocietypublishing.org/content/35/224-226/135 ). This was of course a demand incompatible with the use of corrected solar mean longitudes whenever new theories and observations would show the need for correction. But strangely enough, Stone's principle of requiring continuity has actually been followed in the alterations to UT1 made explicitly in 1984 and 2003 (and implicitly in 1960). It is hard to tell whether or not this arose from some descendant of Stone's influence.
There have been four main occasions of alteration away from the 19th-c. practice described above: in 1900, with Newcomb's formula for time quantities; in 1960, and the introduction of Ephemeris Time into many calculations but not into the time quantities; in 1984 with a fresh official definition of UT1 adopted from Aoki et al (1982), "The new definition of Universal Time", http://adsabs.harvard.edu/abs/1982A%26A...105..359A ; and in 2003 with the introduction of the current basis of UT1, described briefly in IAU 2000 resolution B1.8 and more fully in N Capitaine (2003) "Expressions to implement the IAU 2000 definition of UT1", http://adsabs.harvard.edu/abs/2003A%26A...406.1135C .
1900: Newcomb's time formula made a very small alteration in the time-equivalent of the sun's mean longitude, by increasing the quadratic term. The motivation and effect go back to the work and practice of LeVerrier, summarised by A Gaillot in 1886: "Sur la mesure du temps", Bulletin Astronomique, Ser. I, v.3, 221-232, http://adsabs.harvard.edu/abs/1886BuAsI...3..221G . LeVerrier's analysis of precession supposed that the earth's axial rotation was perfectly uniform. He also found that his theory of the precession of the equinoxes in right ascension led to a quadratic term discordant with the quadratic term for the sun's secular acceleration in longitude. By incorporating an increment in the quadratic term in the sun's longitude to remove the 2nd-order discrepancy, the effect was to make increments in mean time closely proportional to those in sidereal time (and to transmit the supposed uniformity of earth rotation into a proportional uniformity of the derived (but altered) calculation for mean solar time). Newcomb mentioned the point briefly in his book "The elements of the four inner planets and the fundamental constants of astronomy" (1895, at https://archive.org/details/cihm_16774 , see p.188). This appears to be the historical origin of the still-current practice of making the measure of mean solar time an exact linear function of earth rotation, a point specifically raised by the current question. No one now supposes that earth rotation is uniform.
1960: With the introduction of Ephemeris Time (https://en.wikipedia.org/wiki/Ephemeris_time) into the official almanacs from 1960, the solar data was effectively corrected, not as had been usual by change of numbers, but by time-shifting it by a displacement DeltaT, the difference (Ephemeris Time - Universal Time). The numbers were maintained unchanged, but they related now to a different timescale. No alteration at all was made in the calculation of nominally mean solar time. The effect was to retain the buildup of the time errors constituted by the difference between the new best estimate of the sun's mean position and the old.
1984: The definition of UT1 was revised in a way that maintained continuity of value and rate at the moment of changeover, the start of 1984 (see S Aoki et al., (1982), link above).
2003: A further revision to UT1 was made, maintaining continuity at the moment of changeover, the start of 2003 (see N Capitaine et al., (2003), link above).
Summary/conclusion: The description above shows something of how the current calculations for UT1 have departed in principle from classical methods used to define and calculate mean solar time. But it seems no longer to be claimed that UT1 is mean solar time: thus the IAU 2006 NFA glossary calls UT only "a measure of time that conforms, within a close approximation, to the mean diurnal motion of the Sun". Current values of UT1 can be compared with results of a modernized version of an old standard method for mean solar time from sidereal time. It seems that differences between this and current UT1 are still well under a second, though they can be expected to increase with increasing DeltaT, the difference in seconds between ((dynamical) TT - UT1). Thus the differences probably remain less than the allowable differences, for example, between UT1 and UTC. The IAU gloss seems broadly justified, although it may be of continuing interest to have a method of calculating a more traditional form of mean solar time to check on any accumulating discrepancies.
{Notice of any mistakes in all of this will be gratefully appreciated!} | {
"domain": "astronomy.stackexchange",
"id": 2892,
"tags": "time"
} |
Sorting a sorted array after increasing several elements | Question: I know that most of the efficient sort algorithms can run with a complexity of $O(n \log n)$, but this is given an unsorted array.
However, given that the initial array is already sorted, is there an algorithm that can sort the array after multiplying several elements by 2 (or increasing them by some value) with complexity of $O(n)$?
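One $O(n)$ approach is to split the updated array into its unchanged and its doubled elements (each subsequence is still sorted, since doubling preserves relative order) and merge the two; a sketch, with a boolean mask marking the updated positions (names are illustrative):

```python
def resort_after_updates(arr, updated):
    """Re-sort a previously sorted array after some elements were multiplied
    by 2 (an order-preserving update), in O(n) total.
    `arr` holds the values after the update; `updated` marks changed slots."""
    unchanged = [x for x, u in zip(arr, updated) if not u]  # still sorted
    changed = [x for x, u in zip(arr, updated) if u]        # still sorted
    # standard two-pointer merge of two sorted lists, as in mergesort
    out, i, j = [], 0, 0
    while i < len(unchanged) and j < len(changed):
        if unchanged[i] <= changed[j]:
            out.append(unchanged[i]); i += 1
        else:
            out.append(changed[j]); j += 1
    out.extend(unchanged[i:])
    out.extend(changed[j:])
    return out
```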
Answer: You can update the array using the merge procedure of mergesort. Decompose your array into two smaller arrays: unchanged elements and updated elements. Since multiplying elements by $2$ preserves their relative order, you can do this decomposition in $O(n)$ while producing two sorted smaller arrays. You can then merge them in $O(n)$. | {
"domain": "cs.stackexchange",
"id": 21529,
"tags": "algorithms, sorting, arrays"
} |
Is the Schrödinger equation derived or postulated? | Question: I'm an undergraduate mathematics student trying to understand some quantum mechanics, but I'm having a hard time understanding what is the status of the Schrödinger equation.
In some places I've read that it's just a postulate. At least, that's how I interpret e.g. the following quote:
Where did we get that (equation) from? Nowhere. It is not possible to derive it from anything you know. It came out of the mind of Schrödinger. -- Richard Feynman
(from the Wikipedia entry on the Schrödinger equation)
However, some places seem to derive the Schrödinger equation: just search for "derivation of Schrödinger equation" in google.
This motivates the question in the title: Is the Schrödinger equation derived or postulated? If it is derived, then just how is it derived, and from what principles? If it is postulated, then it surely came out of somewhere. Something like "in these special cases it can be derived, and then we postulate it works in general". Or maybe not?
Thanks in advance, and please bear with my physical ignorance.
Answer: The issue is that the assumptions are fluid, so there aren't axioms that are agreed upon. Of course Schrödinger didn't just wake up with the Schrödinger equation in his head, he had a reasoning, but the assumptions in that reasoning were the old quantum theory and the de Broglie relation, along with the Hamiltonian idea that mechanics is the limit of wave-motion.
These ideas are now best thought of as derived from postulating quantum mechanics underneath, and taking the classical limit with leading semi-classical corrections. So while it is historically correct that the semi-classical knowledge essentially uniquely determined the Schrödinger equation, it is not strictly logically correct, since the thing that is derived is more fundamental than the things used to derive it.
This is a common thing in physics--- you use approximate laws to arrive at new laws that are more fundamental. It is also the reason that one must have a sketch of the historical development in mind to arrive at the most fundamental theory, otherwise you will have no clue how the fundamental theory was arrived at or why it is true. | {
"domain": "physics.stackexchange",
"id": 69266,
"tags": "quantum-mechanics, schroedinger-equation"
} |
Why does a spurion analysis work independently of the UV physics? | Question: In short, my question is why does a spurion analysis work to produce the correct symmetry breaking terms regardless of the high energy physics?
The context that this question arose is from an Effective Field Theory course (for more context, see here, Eq. 5.50). Consider the QCD Lagrangian,
\begin{equation}
{\cal L} _{ QCD} = \bar{\psi} \left( i \gamma^\mu D_\mu - m \right) \psi
\end{equation}
The kinetic part is invariant under a chiral transformation:
\begin{equation}
\psi \rightarrow \left( \begin{array}{cc}
L & 0 \\
0 & R
\end{array} \right) \psi
\end{equation}
however, the mass term is not. Now the claim I don't understand is as follows. Suppose the mass transformed as,
\begin{equation}
m \rightarrow L m R ^\dagger
\end{equation}
In that case the mass term would be invariant under such a transformation. To write down the correct chiral symmetry breaking terms in our Lagrangian we find the terms invariant given this transformation for $ m $ and then make $ m $ a constant again.
The way I understand this physically is that the breaking arises from a high energy spurion field, $ X $, which gets a VEV, $ m $. When we write down all possible chiral symmetry preserving terms using the transforming $m$, we are writing down all the terms that the spurion couples to. The VEV is then inserted and is equal to $ m $.
But this procedure assumes that the spurion obeys the chiral symmetry, $ SU(2) _L \times SU(2) _R $, and transforms as, $ X \rightarrow L X R ^\dagger $. How do we know this assumption is true? In fact it seems to fail for the case of QCD since the ``spurion field'' is really the Higgs field, which is a singlet under $ SU(2) _R $.
Answer: The spurion has nothing to do with UV physics or actual physical fields getting vevs. The field $X$ should be considered just as a fictitious entity, a tool to write down the correct effective lagrangian. The point is that the structure of the IR effective theory is largely independent of the particular UV completion that it originated from (as long as all the symmetries are respected and all light degrees of freedom are kept in the theory). You can even mock up your own simplified UV theory to study the structure of the terms that are admissible in the IR lagrangian. A spurion exploits exactly this freedom.
Imagine instead that you know the UV theory and want to assign quantum numbers to the spurion in order to recover the symmetry and eventually be able to write down the explicit breaking terms in the IR theory. Say the term breaking the symmetry in the lagrangian is $g \mathcal{O}$, where $\mathcal{O}$ is a non-singlet operator. Then you just need to promote the coupling $g$ to a spurion carrying a representation of the symmetry group such that the product $g\otimes\mathcal{O}$ contains a singlet. Only a finite number of irreducible representations below a certain dimension will do the job. Those provide all the possible assignments of quantum numbers for the spurion you are after.
"domain": "physics.stackexchange",
"id": 13526,
"tags": "quantum-field-theory, renormalization, effective-field-theory"
} |
Are eigenvalues of the states $|0\rangle$ and $|1\rangle$ always the same? | Question: Nielsen and Chuang, Chapter 2 (Box 2.6):
Suppose $M$ is any observable on a system $A$, and we have some
measuring device which is capable of realizing measurements of $M$.
Let $\tilde M$ denote the corresponding observable for the same
measurement, performed on the composite system $AB$. Our immediate
goal is to argue that $\tilde M$ is necessarily equal to $M \otimes
I_B$. Note that if the system $AB$ is prepared in the state $|m\rangle
|\psi\rangle$, where $|m\rangle$ is an eigenstate with an eigenvalue
$m$ and $|\psi\rangle$ is any state of $B$, then the measuring device
must yield the result $m$ for the measurement, with probability one.
Thus, if $P_m$ is the projector onto the $m$ eigenspace of the
observable $M$, then the corresponding projector for $\tilde M$ is
$P_m \otimes I_B$. We therefore have
$\tilde M = \sum_{m} m P_m\otimes I_B = M \otimes I_B$.
Our professor mentioned that for a system of two qubits $A$ and $B$, if we consider the wavefunction to be $|\phi\rangle = a |00\rangle + b |11\rangle$, then the expectation value of the observable $\tilde M$ will be:
$$\langle \phi| M \otimes I_B |\phi \rangle$$
$$= (a^*\langle00|+b^*\langle 11|) M \otimes I_B (a |00\rangle + b |11\rangle)$$
The expression for the observable $\tilde M = M\otimes I_B$ is only valid if the system $AB$ was initially prepared in the state $|m\rangle |\psi\rangle$ where $|m\rangle$ is an eigenstate with eigenvalue $m$. That means it is necessary that both the eigenstates $|0\rangle$ and $|1\rangle$ of $A$ had the same eigenvalue $m$. Otherwise the professor's expression for the expectation value is simply not true! I don't understand why the professor mentioned it explicitly. Am I missing something?
Or, are the eigenvalues for the states $|0\rangle$ and $|1\rangle$ (for the measurement operator $M$) always the same, by default?
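The expectation value written out by the professor can be checked numerically; a sketch in plain Python, where the symmetric matrix $M$ is an arbitrary illustrative choice (not from the text):

```python
def kron(A, B):
    """Kronecker product of two matrices given as nested lists."""
    return [[A[i][j] * B[k][l]
             for j in range(len(A[0])) for l in range(len(B[0]))]
            for i in range(len(A)) for k in range(len(B))]

def expectation(M, phi):
    """<phi|M|phi> for a real symmetric M and a real vector phi
    (conjugation omitted since everything here is real)."""
    Mphi = [sum(row[j] * phi[j] for j in range(len(phi))) for row in M]
    return sum(phi[i] * Mphi[i] for i in range(len(phi)))

# |phi> = a|00> + b|11>, basis ordered |00>, |01>, |10>, |11>
a, b = 0.6, 0.8
phi = [a, 0.0, 0.0, b]
M = [[2.0, 1.0], [1.0, 3.0]]        # arbitrary symmetric observable on A
I2 = [[1.0, 0.0], [0.0, 1.0]]       # identity on B
val = expectation(kron(M, I2), phi)
# the cross terms vanish, leaving |a|^2 M_00 + |b|^2 M_11
```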
Answer: The text says: if you start in an arbitrary eigenvector $|m \rangle$, then you get the eigenvalue $m$. However, $m$ has no special meaning; it's just a dummy variable. In your specific case, this means that if you start in $|0 \rangle$ you get $0$ and if you start in $|1 \rangle$ you get $1$. The equivalence $\tilde{M} = M \otimes I_B$ is true for all vectors since it's true for all eigenvectors. | {
"domain": "physics.stackexchange",
"id": 46317,
"tags": "quantum-mechanics, quantum-information, quantum-computer"
} |
Why is the total energy of an orbiting system negative? | Question: Assume it's a circular orbit. Object A orbits around object B. Take object B as the frame of reference.
$$E = KE_a + GPE$$
$$E = \frac{1}{2} m_a v_a^2 + \left(-\frac{GM_b m_a}{r}\right)$$
$$E = \frac{1}{2} m_a \left(\frac{GM_b}{r}\right) + \left(-\frac{GM_b m_a}{r}\right)$$
$$E = -\frac{GM_b m_a}{2r} < 0$$
What does negative total energy at any instant of time mean?
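The algebra above can be verified numerically; a sketch with illustrative, roughly Earth-like values (all numbers are made up for the check):

```python
import math

G = 6.674e-11            # gravitational constant (SI)
M_b = 5.972e24           # central mass, an Earth-like value
m_a = 1000.0             # orbiting mass in kg (illustrative)
r = 7.0e6                # orbital radius in m (illustrative)

v_a = math.sqrt(G * M_b / r)              # circular orbit: v^2 = G M_b / r
E = 0.5 * m_a * v_a**2 - G * M_b * m_a / r
# E equals -G M_b m_a / (2 r) and is negative: half the (negative) potential
```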
Answer: Negative energies are totally fine, because you had to pick a zero-point for energy. In your calculation you picked it to be at infinity. You could have chosen the zero-point for potential energy in such a way that your system had zero energy, or whatever. Only changes in energy are meaningful, in general.
Consider this: what happens if you add energy to this system? It gets closer to zero, and zero for us is the point where the particle is at rest, but is infinitely far away from the other particle. So negative energy represents the fact that to "free" the particle from the central potential requires you to add energy. This comes up a lot in quantum mechanics--the ground state energy of the hydrogen atom is -13.6 eV. | {
"domain": "physics.stackexchange",
"id": 53151,
"tags": "newtonian-mechanics, potential-energy, conventions, binding-energy, virial-theorem"
} |
Geodesics equation in a 2-space with a certain $ds^2$ | Question: This is exercise 3.20 of Hobson's general relativity. It's presented as follows:
In the 2-space with line element $$ds^2=\frac{dr^2+r^2d\theta^2}{r^2-a^2}-\frac{r^2dr^2}{(r^2-a^2)^2}$$
where $r>a$, show that the differential equation for the geodesics may be written as: $$a^2\left(\frac{dr}{d\theta}\right)^2+a^2r^2=Kr^4$$ where $K$ is a constant such that $K=1$ if the geodesic is null.
In my attempt for a solution, I summed the terms with $dr^2$ in the expression for the line element in order to get: $$ds^2=-\frac{a^2}{(r^2-a^2)^2}dr^2+\frac{r^2}{r^2-a^2}d\theta^2$$ From this expression I got the components of the metric tensor, being these: $$g_{rr}=-\frac{a^2}{(r^2-a^2)^2}$$ $$g_{\theta\theta}=\frac{r^2}{r^2-a^2}$$ As the metric tensor is diagonal, it's straightforward to get its contravariant components, since they will just be the inverse of their covariant counterparts: $g^{rr}=g_{rr}^{-1}$ and $g^{\theta\theta}=g_{\theta\theta}^{-1}$. Having calculated the metric tensor and its inverse, now let's compute the connection coefficients via: $$\Gamma^a{}_{bc}=\frac{1}{2}g^{ad}(\partial_bg_{dc}+\partial_cg_{bd}-\partial_dg_{bc})$$ I found out that every connection coefficient vanishes except for the following:
$$\Gamma^{r}{}_{rr}=\frac{-2r}{r^2-a^2}$$
$$\Gamma^{r}{}_{\theta\theta}=-r$$
$$\Gamma^{\theta}{}_{\theta r}=\Gamma^{\theta}{}_{r \theta}=\frac{-a^2}{r(r^2-a^2)}$$
Using the geodesic equations: $\ddot{x}^a+\Gamma^{a}{}_{bc}\dot{x}^b\dot{x}^c=0$, I get the two geodesic equations:
$$\ddot{r}+\Gamma^{r}{}_{rr}\dot{r}^2+\Gamma^r{}_{\theta \theta}\dot{\theta}^2=\ddot{r}+r\left( \frac{-2\dot{r}^2}{r^2-a^2}-\dot\theta^2\right)=0$$
$$\ddot\theta + 2\Gamma^\theta{}_{r \theta}\dot{r}\dot{\theta}=\ddot{\theta}-\frac{2a^2}{r(r^2-a^2)}\dot{r}\dot{\theta}=0$$
Where $\dot x$ stands for the derivative with respect to the parameter of the geodesic, $\dot x=\frac{dx}{du}$.
So far so good, I believe (unless I made a mistake calculating, which is actually possible though I checked my calculations several times before posting), but here I'm stuck. I think working out a bit with both equations and swapping the derivatives with respect to the parameter to derivatives with respect to the coordinates I might be able to get an expression as the one I'm after... But I wasn't able to do it. Any help on how to continue will be much appreciated!
Answer: $$
\ddot{r}+r\left( \frac{-2\dot{r}^2}{r^2-a^2}-\dot\theta^2\right)=0
\tag{1}$$
$$
\ddot{\theta}-\frac{2a^2}{r(r^2-a^2)}\dot{r}\dot{\theta}=0
\tag{2}$$
The Eq.(2) can be solved by separation:
$$
\ddot{\theta}-\frac{2a^2}{r(r^2-a^2)}\dot{r}\dot{\theta}=0
$$ $$
\frac{\ddot{\theta}}{\dot{\theta}} = \frac{2a^2}{r(r^2-a^2)}\dot{r} = \frac{a^2}{r^2(r^2-a^2)} 2 r \dot{r} = \left[ \frac{1}{r^2-a^2} - \frac{1}{r^2}\right] \frac{d}{dt}r^2
$$
Both sides are integrable:
$$
\ln\dot\theta = \ln\frac{r^2 - a^2}{r^2} + \text{constant} $$ $$ \tag{3}
\dot\theta = C \frac{r^2 - a^2}{r^2}
$$
Substitute Eq.(3) into Eq.(1):
$$ \tag{4}
\ddot{r}+r\left\{ \frac{-2\dot{r}^2}{r^2-a^2}- C^2 \left(\frac{r^2 - a^2}{r^2}\right)^2 \right\}=0
$$
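Before scaling, Eq.(3) can be sanity-checked numerically. The sketch below (illustrative, plain-Python RK4) integrates Eqs.(1)-(2) and confirms that $C = \dot\theta\, r^2/(r^2-a^2)$ stays constant along the trajectory:

```python
# Numeric sanity check of Eq.(3): integrating Eqs.(1)-(2) with an RK4
# stepper should keep C = (dtheta/dt) * r^2 / (r^2 - a^2) constant.
a = 1.0

def deriv(state):
    r, rdot, theta, thdot = state
    rddot = r * (2.0 * rdot**2 / (r**2 - a**2) + thdot**2)    # from Eq.(1)
    thddot = 2.0 * a**2 / (r * (r**2 - a**2)) * rdot * thdot  # from Eq.(2)
    return (rdot, rddot, thdot, thddot)

def rk4_step(state, dt):
    def shift(s, k, h):
        return tuple(si + h * ki for si, ki in zip(s, k))
    k1 = deriv(state)
    k2 = deriv(shift(state, k1, dt / 2))
    k3 = deriv(shift(state, k2, dt / 2))
    k4 = deriv(shift(state, k3, dt))
    return tuple(si + dt / 6 * (p + 2 * q + 2 * s + t)
                 for si, p, q, s, t in zip(state, k1, k2, k3, k4))

state = (2.0, 0.0, 0.0, 1.0)   # r, dr/du, theta, dtheta/du at u = 0
C0 = state[3] * state[0]**2 / (state[0]**2 - a**2)
for _ in range(200):
    state = rk4_step(state, 1e-3)
C = state[3] * state[0]**2 / (state[0]**2 - a**2)
assert abs(C - C0) < 1e-8       # conserved to integration accuracy
```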
Scale with $a$, $r \to r / a$
$$
\ddot{r}+r\left\{ \frac{-2\dot{r}^2}{r^2-1}- C^2 \left(\frac{r^2 - 1}{r^2}\right)^2 \right\}=0
$$ | {
"domain": "physics.stackexchange",
"id": 77211,
"tags": "homework-and-exercises, general-relativity, differential-geometry, tensor-calculus, geodesics"
} |
ROS2 generating 'core' file on error, how to disable? | Question:
I'm running a simple talker/listener demo, using a Docker image based on ros:galactic-ros-core w/ros-galactic-desktop installed. I'm getting an error about half the time when I run the demo, but my question is not about the error, but about the core dump. When I get the error, a large file "core" is created in the current directory which I really don't want. Is this standard behavior? Is there any supported method to stop this from happening by default, either within ROS2 or linux?
Following a cryptic comment from https://www.oreilly.com/library/view/ros-programming-building/9781788627436/fa252378-8d64-4c42-ad20-e8a5374549a5.xhtml "... there is already a core directory to prevent core dumps" I can prevent the 'core' file from being created by creating an empty 'core' directory in the current working directory, but this is a workaround.
Not necessarily relevant, but here's the script creating the error:
# start talker and listener in background
ros2 run demo_nodes_cpp talker &
ros2 run demo_nodes_cpp listener &
sleep 10
kill %+
kill %-
and the error is "Failed to create log directory: /root/.ros/log"
Originally posted by rkent on ROS Answers with karma: 20 on 2021-10-14
Post score: 0
Original comments
Comment by gvdhoorn on 2021-10-15:
off-topic for your question, but
I'm running a simple talker/listener demo [..] I'm getting an error about half the time when I run the demo
is this an unaltered demo package part of the default ROS 2 set?
If it is, I would encourage you to report the crashes you're seeing, as they should not happen.
Comment by rkent on 2021-10-15:
Comment by rkent on 2021-10-15: This can be duplicated, though intermittently, with the standard components from package demo_nodes_cpp and osrf:/ros:galactic-desktop. Where would I file the bug? The demo files it is based on are pretty simple; the issue is probably in roscpp.
Comment by gvdhoorn on 2021-10-16:
The issue tracker for the demos packages is ros2/demos/issues.
Answer:
Core dumps are not specific to ROS (1 or 2), managed by it or created by any of the client libraries, but a feature of your OS. See #q260977 for a previous Q&A about them.
They're actually a very useful thing to have after a crash, as they allow a developer to debug a process post-mortem (ie: after it has already crashed and been terminated by the OS).
As to disabling their generation, you could take a look at the Arch documentation on Core dumps.
The Docker image you mention (ros:galactic-ros-core) is based on Ubuntu Focal. Canonical doesn't appear to have any documentation specifically about the configuration of core dumps (that I can find). The ulimit section in the Arch documentation should work however, as it's for Bash in general. This SO post also has some information.
Edit: I'm hesitant to duplicate information that's so easily found already, but to make this answer more self-contained:
To disable core dumps in the current Bash session: ulimit -c 0.
Or edit /etc/security/limits.conf and set soft core for * (ie: all users) to 0 (see again the Arch documentation linked earlier).
And just to reiterate: this is OS-level configuration. Not ROS.
Originally posted by gvdhoorn with karma: 86574 on 2021-10-15
This answer was ACCEPTED on the original site
Post score: 1
Original comments
Comment by rkent on 2021-10-15:
I tried the fix in /etc/security/limits.conf earlier and it did not prevent the core file from appearing. But ulimit -c 0 seems to work when I do it in the current session. Thanks! | {
"domain": "robotics.stackexchange",
"id": 37022,
"tags": "docker"
} |
Confusion in Poynting's theorem | Question: The total force on a charge is equal to $\mathbf {F}=q\mathbf {(E+ v×B)}$ where everything have their usual meanings . We can say that:
$$dW= \mathbf {F\cdot dl} = {\mathbf{F}\cdot \frac{d \mathbf{l} }{dt}} dt= \mathbf {(F\cdot v)}dt = q(\mathbf {E\cdot v})dt = \mathbf {(E\cdot j)} dt ...... $$
where $\mathbf {j}$ is the current density vector created by the charge; it is nonzero only at the location of the point charge.
Now from Maxwell's laws:
$$\mathbf {j}= \frac{1}{\mu}\mathbf {\nabla×B} -\epsilon \frac{d\mathbf {\tilde E}}{dt}$$ where $\epsilon$ and $\mu$ are the electric permittivity and magnetic permeability. We have to note here that $\mathbf {\tilde E}$ is the total electric field, not only the electric field which acts on the charge. Thus $\mathbf {E}$ and $\mathbf {\tilde E}$ are not the same. But in Poynting's theorem both are treated the same and thus dotted with $\mathbf {j}$. So is the theorem wrong?
Answer: Whenever you have a conflict between some established principle and the concept of a classical point charge, the issue is the classical point charge. They lead to all sorts of oddities like infinite energy, weird self-forces, and other such things.
Poynting's theorem follows directly from Maxwell's equations. So it can be used any time that Maxwell's equations apply. To resolve the issue you mention, simply, use continuous charge and current distributions, $\rho$ and $\vec J$. These are the variables that appear in Maxwell's equations, so applying them makes direct sense. In terms of those variables the Lorentz force density is $\vec f = \rho \vec E + \vec J \times \vec B$ and it can be applied directly. | {
"domain": "physics.stackexchange",
"id": 100639,
"tags": "electromagnetism, poynting-vector"
} |
1H-NMR: How does the oxygen in ethers act as a spin coupling barrier while the oxygen in hydroxy groups doesn't? | Question: It is possible to observe coupling between a hydroxyl proton and other protons. Why does the oxygen atom in ethers prevent any further coupling and act as a sort of barrier between spin systems? Or is coupling technically possible, but the resulting $^4\mathrm{J}$ coupling constant is just too small to be observed?
[Edit]:
I initially asked this question because we learned in our spectroscopy lecture that proton couplings across R-O-R, R-N-R and R-S-R (with R ≠ H) aren't really possible.
It's nice to see that this is just a rule of thumb and that some coupling can occur.
Could you tell me if my reasoning for the coupling in the examples below is correct or not: Fermi contact interactions are usually the most important mechanism for spin-spin coupling. The carbon atoms in the furan derivative are $\ce{sp^2}$ hybridized, so there should be additional σ-π-spin-polarization interactions. The carbonyl carbon of the formate ester also has a lower s-character. The large $^5\mathrm{J}$ coupling in the trioxaadamantane derivative is the most surprising. Contrary to what I initially thought, the coupling mechanism seems to be different from the stereospecific sigma-bond contributions that are responsible for "W-coupling" (so maybe some through-space mechanism?).
Here is the original adamantane paper for anyone who's interested: https://doi.org/10.1021/ja00958a011
Quote from the paper: "The coupling systems in I and VII differ from these in that rotation of 180 and 120° about the central bond has taken place. If one mechanism is responsible for all the five-bond couplings cited, it must be independent of the geometry of the central bond."
So let me ask a slightly better question: Is there any difference between $^4\mathrm{J}_{\ce{HCCCH}}$ and $^4\mathrm{J}_{\ce{HCOCH}}$ coupling in acyclic, unstrained, $\ce{sp^3}$ hybridized molecules (i.e. do things like lone pairs play a role in spin-spin coupling and if so, why)?
Answer: Hans Reich's website remains a treasure trove of NMR information that you can consult for theory and values of J couplings. All of the following comes from that site.
First, on the origin of through-bond couplings:
The scalar coupling J is a through-bond interaction, in which the spin of one nucleus perturbs (polarizes) the spins of the intervening electrons, and the energy levels of neighboring magnetic nuclei are in turn perturbed by the polarized electrons. [....] Because the effect is usually transmitted through the bonding electrons, the magnitude of J falls off rapidly as the number of intervening bonds increases.
On 4-bond proton J-couplings
Proton-proton couplings over more than three bonds are usually too small to detect easily (< 1 Hz). However, there are a number of important environments where such couplings are present, and can provide useful structural information. Coupling across π-systems are the most frequently encountered 4J couplings
As the following examples show, the oxygen nucleus in an ether or ester does not inherently interfere with a long-range coupling, and does not account for the sparsity of reported $^4\mathrm{J}_{\mathrm{HH}}$. | {
"domain": "chemistry.stackexchange",
"id": 17233,
"tags": "nmr-spectroscopy, ethers"
} |
How to calculate a Galactocentric distance of another galaxy | Question: So I have two parameters: the distance to a galaxy of 879 kpc and the sky distance in degrees (0.131501). How would I then use these two bits of information to calculate the Galactocentric distance in kpc?
N.B. the two objects within M33, i.e. the centre of M33 and an undefined region of M33, are separated by 0.131501 degrees. Both objects can be assumed to have a distance from us of ~879 kpc.
So I want to convert the value for the distance between the two objects within M33 (0.131501 degrees) to kpc...
Answer: Basic angle calculations: $\mathrm{arc\ length} = r\theta$
(angle measured in radians) and for small angles the arc length approximates the chord length.
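In code, with the values from the question:

```python
# Small-angle conversion: arc length = r * theta, and for small angles
# the arc length approximates the chord (projected) separation.
import math

distance_kpc = 879.0     # distance to M33
sep_deg = 0.131501       # angular separation on the sky

theta_rad = math.radians(sep_deg)       # ~ 0.002295 rad
sep_kpc = distance_kpc * theta_rad      # s = r * theta
assert abs(sep_kpc - 2.02) < 0.01
```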
The angle in radians is therefore $0.131501\pi/180=0.002295$, so the separation is $0.002295\times 879= 2.02 \mathrm{kpc}$ | {
"domain": "astronomy.stackexchange",
"id": 1748,
"tags": "distances, parallax, parsec"
} |
How to rosmake project with c and cpp? | Question:
Hi all,
I am making a project with c and cpp involved. For example, in the src folder I have:
a.h a.cpp -- main function
b.h b.c -- some utility functions
How can I modify the CMakeLists.txt to make it work? I get errors saying that a.cpp has undefined reference to some functions, which are defined in b.c, even though I already include b.h in a.h. My current CMakeLists.txt looks like this.
rosbuild_add_library(mylib src/b.c)
rosbuild_add_executable(mymain src/a.cpp)
target_link_libraries(mymain mylib)
I have not solved the problem with ROS rosmake yet. But I added the following lines to a makefile, used the normal make command, and it works.
# The pre-processor options used by the cpp (man cpp for more).
CPPFLAGS = -Wall -I/opt/ros/fuerte/include/
# The options used in linking as well as in any direct use of ld.
LDFLAGS = -L/opt/ros/fuerte/lib/ -lopencv_core -lopencv_imgproc -lopencv_highgui
Originally posted by ZiyangLI on ROS Answers with karma: 93 on 2014-03-03
Post score: 0
Original comments
Comment by brice rebsamen on 2014-03-03:
those 3 cmake lines look OK
Answer:
(Long comment) First of all there seems to be a basic linker error, which is giving you the message about undefined functions. This means that no matter what make system you use, you're going to get the same error. This is generic and not ROS-specific. I suggest looking at resources like these first to nip this issue in the bud.
Once this is sorted out, get a hang of how CMake works first. I also suggest using Catkin to build instead of rosbuild, which uses rosmake. Then, go to catkin docs to get an idea of how the catkin build tool works on ROS.
Keep in mind: the basic build system on Unix and Linux systems is GNU Make. CMake is an abstraction that generates Makefiles. Catkin is yet another abstraction over CMake.
Originally posted by PKG with karma: 365 on 2014-03-03
This answer was ACCEPTED on the original site
Post score: 1 | {
"domain": "robotics.stackexchange",
"id": 17150,
"tags": "rosmake"
} |
Loop through months (1-12) | Question: I currently have this monstrosity:
next_period = (((current_month + tax_period_months)-1) % 12) + 1
I want to get the month that is tax_period_months after the current month.
E.g. if tax_period_months=3, and current_month=09 then next_period=12
This works, but has that horrible -1 +1 to stop month 12 becoming 0
Any alternatives?
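For the record, a quick check (Python here, purely for illustration) that the formula wraps correctly at month 12:

```python
# The question's formula: subtract 1, wrap mod 12, add 1 back, so that
# month 12 maps to 12 rather than 0.
def next_period(current_month, tax_period_months):
    return ((current_month + tax_period_months) - 1) % 12 + 1

assert next_period(9, 3) == 12   # the example given above
assert next_period(12, 1) == 1   # month 12 wraps to 1, not 0
assert next_period(11, 14) == 1  # periods longer than a year also work
```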
Answer: If that really is all you need to calculate, then it's not too bad.
However, I suspect that you have other date calculations happening in your application. In that case, why not take advantage of Ruby's Date class? The >> operator adds months.
require 'date'
next_period = (Date.today >> tax_period_months).month | {
"domain": "codereview.stackexchange",
"id": 21864,
"tags": "ruby, datetime, ruby-on-rails"
} |
Trimming a string | Question:
Function: Trim
Returns a new string after removing any white-space characters from
the beginning and end of the argument.
string trim(string word) {
if(startsWith(word, " ")) return removeSpaces(word, "Front");
if(endsWith(word, " ")) return removeSpaces(word , "Back");
return word;
}
string removeSpaces(string word , string position){
if(position == "Front"){
for(int i =0; i <word.length();i++){
if(word[i] != ' '){
return word;
}else{
word.erase(i);
}
}
return word;
}else if(position == "Back"){
for(int i =word.length() - 1 ; i >=0 ; i--){
if(word[i] != ' '){
return word;
}else{
word.erase(i);
}
}
return word;
}
}
Answer: The second argument to removeSpaces should by no means be a string. I suggest an enum:
enum StringPosition {
BEGINNING_OF_STRING,
END_OF_STRING,
};
You appear to have a using namespace std; statement in your program. You shouldn't do this. This StackOverflow question explains why.
Your implementation of trim removes spaces from the front or back of word, not both.
You've chosen an O(kn) algorithm (which is in any case incorrect), where k is the number of spaces at the beginning of the string and n is the number of characters in the string. Each call to s.erase(i) causes all of the characters after i to be shifted to the left. It also causes the string to be shortened by one character. Your function will only erase half of the leading or trailing spaces since you shorten the string and increment i.
Try this:
std::string trim(std::string word) {
removeSpaces(word, BEGINNING_OF_STRING);
removeSpaces(word, END_OF_STRING);
return word;
}
void removeSpaces(std::string& word, StringPosition position) {
switch (position) {
case BEGINNING_OF_STRING: {
const auto first_nonspace = word.find_first_not_of(' ');
word.erase(0, first_nonspace);
return;
}
case END_OF_STRING: {
const auto last_nonspace = word.find_last_not_of(' ');
if (last_nonspace != std::string::npos) word.erase(last_nonspace + 1);
return;
}
}
} | {
"domain": "codereview.stackexchange",
"id": 45311,
"tags": "c++, strings"
} |
How do you build a circuit to make an equal superposition of $n$ outcomes? | Question: Suppose we start with $|00...0\rangle$.
We want to build an equal superposition over $|0\rangle + ... + |n-1\rangle$.
When $n=2^m$ for some $m$, I know I can do this using $H^{\otimes m}$.
What is the general circuit for this (i.e. in case $n$ is not power of 2)?
Answer: How to prepare a uniform superposition over a range.
Simple: Repeat until success
The simplest approach is to prepare an $n$-qubit quantum integer register $k$, where $n=\lceil \lg_2 N \rceil$, in the state $|+\rangle^{\otimes n}$. That can be done very cheaply using reset gates and Hadamard gates. Then use a comparison operation to measure whether $k<N$ or not. If not, then retry. The main downside of this repeat-until-success approach is that it isn't reversible, and sometimes these sorts of preparation tasks occur inside subroutines where reversibility is needed due to controlling and uncomputing the preparation.
Example repeat-until-success circuit in Quirk, cycling over N=1 to N=127 (technically should only go from 65 to 127 for a 7 qubit system like this):
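To see why retrying is cheap: one round succeeds with probability $N/2^n$, which is always more than $1/2$, so the expected number of rounds stays below 2. A quick sketch (illustrative arithmetic only, no circuit simulation):

```python
# Success probability of one repeat-until-success round is N / 2^n with
# n = ceil(lg2 N); the expected number of rounds is its reciprocal.
import math

def expected_rounds(N):
    n = math.ceil(math.log2(N))
    p = N / 2**n                 # probability the measured k satisfies k < N
    return 1 / p

assert expected_rounds(128) == 1.0            # power of two: always succeeds
assert abs(expected_rounds(100) - 1.28) < 1e-9  # N = 100 uses n = 7 qubits
```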
Flexible: Amplitude Amplification
There is a deterministic reversible circuit with the same cost as the repeat-until-success strategy. You can use a single step of amplitude amplification, with a less-than-N comparison as the oracle, to get to a uniform superposition over the range [0, N). This has a gate count of $O(\lg N+\lg\frac{1}{\epsilon})$, where the $\epsilon$ comes from how closely you approximate the rotation angle within the amplification.
Remove any factors of two in $N$ by adding a qubit in the $|+\rangle$ state. $N$ is now odd. Skip the remaining steps if $N=1$.
Let $\theta = \arccos(1 - 2^{\lfloor \lg_2 N \rfloor} / N)$.
Perform one step of amplification. Use a diffusion angle of $\theta$. Use $f(k) = k < N$ as the oracle.
The system is now in the desired state.
Example amplification circuit in Quirk, for N=100 | {
"domain": "quantumcomputing.stackexchange",
"id": 2703,
"tags": "quantum-algorithms, circuit-construction, entanglement"
} |
Contradicting answers for relation of speed of a motor with current | Question: For a DC motor, speed of the armature is inversely proportional to current.
But at the same time, we know that torque of the armature is directly proportional to the armature current. Since speed can be increased by applying torque, this means that speed is directly proportional to armature current.
How is this possible?
Where have I gone wrong?
Answer: For a DC motor, speed of the armature is inversely proportional to current.
But at the same time, we know that torque of the armature is directly proportional to the armature current.
Since speed can be increased by applying torque, this means that speed is directly proportional to armature current.
How is this possible? Where have I gone wrong?
Where you have gone wrong 1
Firstly, your question is poorly defined: you should clearly state what type of equipment you are referencing. There are numerous types of D.C. motors and variations on how the excitation can be achieved. Therefore I will ignore differential compound D.C. motors.
Where you have gone wrong 2 is your statement "Since speed can be increased by applying torque". [Figure: torque-speed curve]
In fact this is the exact opposite of what happens. As speed increases the torque reduces. As the load is reduced and torque approaches zero, the armature speed reaches its maximum, because AS THE SPEED INCREASES THE BACK EMF INCREASES AND THEREFORE THE CURRENT REDUCES. So the statement "speed of the armature is inversely proportional to current" is correct.
Tutorials: "emf = inductance × rate of current change", so a circuit that has an inductance of one henry will have an emf of one volt induced in it when the current flowing through the circuit changes at a rate of one ampere per second.
One important point to note about the above equation: it only relates the emf produced across the inductor to changes in current, because if the inductor current is constant and not changing, as with a steady-state DC current, then the induced emf will be zero, since the instantaneous rate of current change is zero, di/dt = 0.
Therefore it is obvious that as the speed of the armature increases, the rate of change of flux increases, and so the back emf also increases.
However if you were to say that the speed of an electric locomotive can be increased by applying torque, that is correct, at least in the initial stages of movement.
Where you have gone wrong 3
Your statement "torque of the armature is directly proportional to the armature current" is also incorrect (see this electrical guide).
In fact $T \propto I_a \varphi$ (torque ∝ armature current × field flux), and thus the exact torque-current relation will depend on what type of motor you are referencing.
On light loads, the torque produced by the series motor is proportional to the square of the armature current, and hence the curve drawn between torque and armature current up to magnetic saturation is a parabola. But after magnetic saturation the flux φ is independent of the excitation current, so torque is proportional to $I_a$ and the characteristic becomes a straight line.
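A toy numeric model of this series-motor behaviour (made-up constants, purely illustrative):

```python
# Toy series-motor torque model: flux tracks current below saturation
# (T ~ I^2), then is pinned at its saturation value (T ~ I).
def series_torque(i, k=1.0, i_sat=10.0):
    if i <= i_sat:
        return k * i * i        # flux proportional to current: parabola
    return k * i_sat * i        # flux saturated: straight line

assert series_torque(5.0) == 25.0
assert series_torque(20.0) == 200.0
assert series_torque(10.0) == 100.0   # continuous at the saturation point
```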
Whilst
The flux of a shunt motor is practically constant. Therefore $T \propto I_a$: the torque produced is proportional to the armature current. | {
"domain": "engineering.stackexchange",
"id": 2901,
"tags": "motors"
} |
A wire outside the loop in an Ampère's law paradox | Question: Suppose there are three current-carrying wires and our Ampèrian loop encloses just two of them. According to Ampère's law, the circulation of B depends only on the currents enclosed by the loop. But here the magnitude of B is also affected by the wire outside the loop, even though that wire does not appear on the other side of the equation.
Is this a paradox?
Answer: I believe your confusion lies in when Ampère's law can be applied.
Ampère's law only applies in situations with sufficient symmetry, that is situations in which the magnetic field is the same at every point in the Ampèrian loop. For example in the case of an infinitely long wire.
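This can also be illustrated numerically. In the sketch below (hypothetical wire positions and currents), the circulation of $\mathbf B$ around a loop enclosing two of three infinite wires still comes out to $\mu_0 I_{enc}$, even though the outside wire changes $\mathbf B$ at every point of the loop:

```python
# Circulation of B around a circle of radius 1 enclosing two of three
# infinite straight wires (perpendicular to the plane). The line integral
# equals mu0 * I_enclosed; the outside wire's contribution cancels.
import math

MU0 = 4e-7 * math.pi

def B_field(x, y, wires):
    bx = by = 0.0
    for wx, wy, I in wires:            # each wire: (x, y, current)
        dx, dy = x - wx, y - wy
        d2 = dx * dx + dy * dy
        bx += -MU0 * I * dy / (2 * math.pi * d2)   # B circles the wire
        by += MU0 * I * dx / (2 * math.pi * d2)
    return bx, by

wires = [(0.0, 0.0, 3.0), (0.5, 0.0, -1.0), (5.0, 0.0, 2.0)]  # last is outside
R, N = 1.0, 20000
circulation = 0.0
for k in range(N):                     # midpoint rule around the circle
    t = 2 * math.pi * (k + 0.5) / N
    bx, by = B_field(R * math.cos(t), R * math.sin(t), wires)
    circulation += (-bx * math.sin(t) + by * math.cos(t)) * (2 * math.pi * R / N)

assert abs(circulation - MU0 * (3.0 - 1.0)) < 1e-10   # mu0 * I_enclosed
```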
I assume what you are talking about is a case with three parallel wires an you place an Ampèrian loop around two of them. In this case the magnetic field is evidently not the same around the entire loop. In this case you can use the Biot-Savart law which does not have any symmetry requirements. | {
"domain": "physics.stackexchange",
"id": 59080,
"tags": "electromagnetism, laws-of-physics"
} |
Is infinitary Böhm-reduction wrt. root-active terms for $\lambda$-calculus transitive? | Question: I expect the answer to be "obviously yes", but to my inexperienced eye, that's not directly obvious, because the definition of infinite Böhm-reduction does not include a transitivity rule (it wouldn't work), and because I couldn't find a relevant lemma in the papers themselves.
I'm referring in particular to the definition by Czajka [1] of the relation $\rightarrow^\infty_{\beta\bot}$, called infinitary Böhm-reduction. I've looked at [2], which however does not include Böhm-reduction (such that the defined relation isn't confluent IIUC, which is a problem for me).
Rationale: Defining reduction for infinitary $\lambda$-calculus is tricky. In particular, you cannot create an "infinite transitive closure" which allows an infinite number of transitivity steps, but you need to be more careful. In particular, if you define multi-step reduction coinductively, you cannot include a transitive rule, lest your relation becomes total and thus degenerate. So one ends up doing transitivity elimination, which is not always trivial; and given how unintuitive coinduction is, I'm afraid I'd fool myself when attempting a proof.
[1] Łukasz Czajka, 2014. A Coinductive Confluence Proof for Infinitary Lambda-Calculus. Proc. of Rewriting and Typed Lambda Calculi, Springer. http://www.mimuw.edu.pl/~lukaszcz/coind.pdf
[2] Jorg Endrullis and Andrew Polonsky, 2011. Infinitary Rewriting Coinductively. In Proc. of TYPES, volume 19 of LIPIcs, pages 16–27. Schloss Dagstuhl. http://drops.dagstuhl.de/opus/volltexte/2013/3897/
[3] Richard Kennaway, Jan Willem Klop, M. Ronan Sleep, and Fer-Jan de Vries, 1997. Infinitary lambda calculus. Theoretical Computer Science, 175(1):93–125. http://www.sciencedirect.com/science/article/pii/S0304397596001715
Answer: I think you'll find that the exact same proof of Lemma 3 in [1] (the proof itself appears in [2]) concerning $\rightarrow^\infty_\beta$ also holds for $\rightarrow^\infty_{\beta\bot}$: indeed, they are defined in the same way from $\rightarrow^*_\beta$ and $\rightarrow^*_{\beta\bot}$ respectively, which are transitive by definition!
The lemma holds for an arbitrary reflexive-transitive congruence relation in place of $\rightarrow^*_{\beta\bot}$, as you can verify in [2] without needing to go into the detail of the coinductive proofs. In particular, the proofs of lemmas 4.1-5 in [2] are unchanged (with extra induction cases in 4.4). | {
"domain": "cstheory.stackexchange",
"id": 3319,
"tags": "pl.programming-languages, lambda-calculus, term-rewriting-systems"
} |
Reference Request: Submodular Minimization and Monotone Boolean Functions | Question: Background: In machine learning, we often work with graphical models to represent high dimensional probability density functions. If we discard the constraint that a density integrates (sums) to 1, we get an unnormalized graph-structured energy function.
Suppose we have such an energy function, $E$, defined on a graph $G = (\mathcal{V}, \mathcal{E})$. There is one variable $x$ for each vertex of the graph, and there are real-valued unary and pairwise functions, $\theta_i(x_i) : i \in \mathcal{V}$ and $\theta_{ij}(x_i, x_j) : ij \in \mathcal{E}$, respectively. The full energy is then
$$E(\mathbf{x}) = \sum_{i \in \mathcal{V}} \theta_i(x_i) + \sum_{ij \in \mathcal{E}} \theta_{ij}(x_i, x_j)$$
If all $x \in \mathbf{x}$ are binary, we can think of an $x$ as indicating set membership and with just a small abuse of terminology talk about submodularity. In this case, an energy function is submodular iff $\theta_{ij}(0, 0) + \theta_{ij}(1, 1) \le \theta_{ij}(0, 1) + \theta_{ij}(1, 0)$. We are typically interested in finding the configuration that minimizes the energy, $\mathbf{x}^* = \arg \min_{\mathbf{x}} E(\mathbf{x})$.
There seems to be a connection between minimizing a submodular energy function and monotone boolean functions: if we lower the energy of some $\theta_i(x_i=1)$ for any $x_i$ (i.e., increase its preference to be "true"), then the optimal assignment of any variable $x_i^* \in \mathbf{x}^*$ can only change from 0 to 1 ("false" to "true"). If all $\theta_i$ are restricted to be either 0 or 1, then we have $|\mathcal{V}|$ monotone boolean functions:
$$f_i(\mathbf{\theta}) = x_i^*$$
where as above, $\mathbf{x^*} = \arg \min_{\mathbf{x}} E(\mathbf{x})$.
Question: Can we represent all monotone boolean functions using this setup by varying the pairwise terms, $\theta_{ij}$? What if we allow $E$ to be an arbitrary submodular energy function? Conversely, can we represent all submodular minimization problems as a set of $|\mathcal{V}|$ monotone boolean functions?
Can you suggest references that will help me towards better understanding these connections? I'm not a theoretical computer scientist, but I'm trying to understand if there are insights about monotone boolean functions that are not captured by thinking in the submodular minimization terms.
Answer: As far as I understand, the submodular minimization case captures all there is to be said about the monotone Boolean case, and binary submodular Boolean functions can express all submodular Boolean functions. However, if the domain is non-Boolean, then binary submodular functions are not enough to express all submodular functions, even if hidden variables may be introduced. (Apologies if I have missed a subtlety in your precise problem phrasing.)
The state of the art is discussed in this nice paper which has lots of links to related work, and that also makes the links to computer vision quite explicit:
Stanislav Živný, David A. Cohen, Peter G. Jeavons, The expressive power of binary submodular functions, DAM 157 3347–3358, 2009. doi: 10.1016/j.dam.2009.07.001 (preprint)
In case your next question is about approximation, this recent paper looks at the approximation version:
Dorit S. Hochbaum, Submodular problems - approximations and algorithms, arXiv: 1010.1945
Edit: fixed link. | {
"domain": "cstheory.stackexchange",
"id": 286,
"tags": "reference-request, machine-learning, lg.learning, boolean-functions"
} |
Matrix Representation of Four Dimensional Lorentz Transformation | Question: According to Peskin and Schroeder, page 39:
Just to see that we have this right, let us look at one particular representation (which we will simply pull out of a hat). Consider 4 x 4 matrices
$$ ({\cal J}^{\mu\nu})_{\alpha\beta} = i(\delta^{\mu}_{\alpha}\delta^{\nu}_{\beta} - \delta^{\mu}_{\beta}\delta^{\nu}_{\alpha}). \tag{3.18}$$
How did the book arrive at this matrix representation from the following realization?
$$ J^{\mu\nu} = i(x^\mu\partial^\nu-x^\nu\partial^\mu). \tag{3.16}$$
Answer: How does the realization (3.16) act on a 4-dimensional vector?
$$ J^{\mu\nu} x_\alpha= i(x^\mu\delta^\nu_\alpha-x^\nu\delta^\mu_\alpha) \\
=i(\delta^\nu_\alpha \delta^\mu_\beta - \delta^\nu_\beta \delta^\mu_\alpha)x^\beta =-({\cal J}^{\mu\nu})_{\alpha\beta} x^\beta ~~. $$
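One can also verify numerically (an illustrative check, plain Python) that the matrices (3.18), with one index raised by $\eta = \mathrm{diag}(1,-1,-1,-1)$ so they can be multiplied, satisfy the Lorentz algebra commutation relations (3.17):

```python
# Check [J^{mu nu}, J^{rho sig}] = i( g^{nu rho} J^{mu sig}
#   - g^{mu rho} J^{nu sig} - g^{nu sig} J^{mu rho} + g^{mu sig} J^{nu rho} )
# for (J^{mu nu})^alpha_beta = i (eta^{alpha mu} d^nu_beta - eta^{alpha nu} d^mu_beta).
import itertools

eta = (1.0, -1.0, -1.0, -1.0)

def g(a, b):
    return eta[a] if a == b else 0.0

def J(mu, nu):
    return [[1j * (g(alpha, mu) * (beta == nu) - g(alpha, nu) * (beta == mu))
             for beta in range(4)] for alpha in range(4)]

def commutator(A, B):
    AB = [[sum(A[i][k] * B[k][j] for k in range(4)) for j in range(4)] for i in range(4)]
    BA = [[sum(B[i][k] * A[k][j] for k in range(4)) for j in range(4)] for i in range(4)]
    return [[AB[i][j] - BA[i][j] for j in range(4)] for i in range(4)]

for mu, nu, rho, sig in itertools.product(range(4), repeat=4):
    lhs = commutator(J(mu, nu), J(rho, sig))
    Jms, Jns, Jmr, Jnr = J(mu, sig), J(nu, sig), J(mu, rho), J(nu, rho)
    rhs = [[1j * (g(nu, rho) * Jms[i][j] - g(mu, rho) * Jns[i][j]
                  - g(nu, sig) * Jmr[i][j] + g(mu, sig) * Jnr[i][j])
            for j in range(4)] for i in range(4)]
    assert all(abs(lhs[i][j] - rhs[i][j]) < 1e-12
               for i in range(4) for j in range(4))
```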
If the sign grated on you, you could raise indices of the 4-representation matrix $\cal J$ to be transforming a covariant vector instead, $ ({\cal J}^{\mu\nu})_\beta ^{~~~\alpha} x_\alpha $, if so inclined. | {
"domain": "physics.stackexchange",
"id": 79285,
"tags": "special-relativity, representation-theory"
} |
Saving high scores in a pickle database | Question: Seeking to improve upon my high score 'module' code. It saves users' high score in a pickle database and if needed prints out the current scores in the database.
import pickle
import os
global high_scores
def setup_scores():
global high_scores
high_scores = {}
if os.path.isfile('highscores.pkl'):
with open("highscores.pkl", "rb") as h:
high_scores = pickle.load(h)
else:
high_scores = {"Adam Smith": 65536, "John Doe": 10000}
def save_score(name, score):
new_score = (name, score)
if new_score[0] in high_scores:
if new_score[1] > high_scores[new_score[0]]:
high_scores[new_score[0]] = new_score[1]
else:
high_scores[new_score[0]] = new_score[1]
with open("highscores.pkl","wb") as out:
pickle.dump(high_scores, out)
def print_scores():
for name, score in high_scores.items():
print("{{name:>{col_width}}} | {{score:<{col_width}}}".format(col_width=(80-3)//2).format(name=name, score=score))
setup_scores()
save_score(raw_input('Name:'), raw_input('Score:')) # inputs only for testing, can use variables instead
print_scores()
Answer:
Use of global should generally be avoided. You could create a class to hold the dictionary and the three functions would become methods.
Don't repeat yourself: use a variable for the file name so you don't have to change it in three places if it needs to change.
The save_score function assigns new_score = (name, score) for no reason at all. Using the two variables directly in the code that follows would improve readability. | {
"domain": "codereview.stackexchange",
"id": 8137,
"tags": "python, python-2.x"
} |
Mg2+ ion formation | Question: Why does Mg not form a Mg+1 ion, even though its second ionization energy is much higher than the first ionization energy?
(I know that an ion should resemble the noble gas closest to the element from which it was formed (at least in the p and s blocks), but the question is why).
I saw that there is a similar question on the site, but I want to understand it atomically and not by enthalpy.
(I mean I want to understand it through considerations such as: since going from Mg+1 to Mg+2 drops the outermost energy level from n=3 to n=2, is the energy released by that drop more than the energy needed to remove the electron? Or similar considerations.)
Answer: If we are talking about monatomic ions, then we have to consider the balance between ionization energy and electrostatic attraction. When we go to $\ce{Mg^{2+}}$ instead of just $\ce{Mg^{1+}}$, the electrostatic attraction to anions such as chloride or hydroxide is stronger and the energy saved from that could be enough to allow the more costly additional ionization. Roughly speaking, at least under conditions we encounter everyday, an extra ion charge can balance out about $3000$ kJ/mol of ionization energy, and the second ionization energy of magnesium comes in well under that. However the third ionization energy, involving an electron from the neon core, is far too high.
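To put numbers on this argument, here is a small check using tabulated (rounded) molar ionization energies of magnesium against the rough ~3000 kJ/mol payback figure quoted above; the values are standard reference data, not taken from the original answer:

```python
# Tabulated molar ionization energies of magnesium, kJ/mol (rounded).
IE = {1: 738, 2: 1451, 3: 7733}

# Rough energy that an extra unit of ionic charge can pay back through
# stronger electrostatic attraction under everyday conditions, as quoted above.
PAYBACK = 3000  # kJ/mol

for n in sorted(IE):
    verdict = "affordable" if IE[n] < PAYBACK else "too costly"
    print("IE%d = %4d kJ/mol -> %s" % (n, IE[n], verdict))
```

The first two ionizations come in well under the payback, while the third (breaking into the neon core) is far beyond it, which is why the stable ion is Mg2+ and not Mg3+.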
Magnesium does form diatomic $\ce{Mg2^{2+}}$ ions in certain complexes; these are made more stable than a monatomic ion would be by a magnesium-magnesium bond (which may have some ion-electride character[1], i.e. a superposition of $\ce{Mg^+–Mg^+}$ with $\ce{Mg^{2+}[e_2^{2-}]Mg^{2+}}$). Such compounds are widespread enough to earn an article in Wikipedia.
Reference
Platts, J. A.; Overgaard, J.; Jones, C.; Iversen, B. B.; Stasch, A., "First Experimental Characterization of a Non-nuclear Attractor in a Dimeric Magnesium(I) Compound," J. Phys. Chem. A, 2011, 115, 194-200, DOI: 10.1021/jp109547w | {
"domain": "chemistry.stackexchange",
"id": 17953,
"tags": "energy, electrons, ions, atoms, ionization-energy"
} |
MVC Layout - Which way to add listeners is better? | Question: So I'm doing a basic MVC layout for a pretty basic game that I am making. The game requires the user to move up/down/left/right via buttons on the GUI. Since I'm using an MVC layout and my buttons are in a different class than the ActionListeners, I was wondering what the best way to add the action listeners is?
Method 1:
View Class ActionListener method:
public void addMovementListeners(ActionListener u, ActionListener d, ActionListener l, ActionListener r){
moveUp.addActionListener(u);
moveDown.addActionListener(d);
moveLeft.addActionListener(l);
moveRight.addActionListener(r);
}
Control Class add ActionListener method:
private void addListeners(){
viewGUI.addMovementListeners(new ActionListener() {
public void actionPerformed(ActionEvent e) {
buttonPressed = 1;
}
},
new ActionListener() {
public void actionPerformed(ActionEvent e) {
buttonPressed = 2;
}
},
new ActionListener() {
public void actionPerformed(ActionEvent e) {
buttonPressed = 3;
}
},
new ActionListener() {
public void actionPerformed(ActionEvent e) {
buttonPressed = 4;
}
});
}
Method 2:
View Class ActionListener method:
public void addMovementListeners(ActionListener a){
moveUp.addActionListener(a);
moveDown.addActionListener(a);
moveLeft.addActionListener(a);
moveRight.addActionListener(a);
}
public JButton[] getButtons(){
JButton[] temp = new JButton[4];
temp[0] = moveUp;
temp[1] = moveDown;
temp[2] = moveLeft;
temp[3] = moveRight;
return temp;
}
Control Class add ActionListener method:
JButton[] movementButtons;
private void addListeners(){
movementButtons = viewGUI.getButtons();
viewGUI.addMovementListeners(new ActionListener() {
public void actionPerformed(ActionEvent e) {
if(e.getSource() == movementButtons[0]){
buttonPressed = 1;
}else if(e.getSource() == movementButtons[1]){
buttonPressed = 2;
}else if(e.getSource() == movementButtons[2]){
buttonPressed = 3;
}else if(e.getSource() == movementButtons[3]){
buttonPressed = 4;
}
}
});
}
Which method is better? Is there another method that is even better than these two? Let me know! Trying to get this MVC thing down :). (Any other suggestions welcome as well)!
Answer: Method #1 feels like the better approach, because it doesn't require your view to expose its buttons, but a method taking four separate ActionListener parameters seems odd. Both methods have some trouble with magic numbers.
Introduce an enum for directions. It may feel like overkill at the moment, but it'll take care of your direction-to-action mapping. It will also help shape some of your options.
Then consider supplying Action instead of ActionListener. This will also make it easier to enable/disable the movement buttons in case something blocks your way (like going off-field), without your view having to expose its buttons.
When you combine these two, you get something like the following:
View:
void setAction(Direction dir, Action action);
Model:
void move(Direction dir);
/** Returns directions possible to move to at this moment. */
Set<Direction> getPossibleDirections();
enum Direction { UP, RIGHT, DOWN, LEFT; }
class Controller {
Map<Direction, MoveAction> actions;
public void setView(View view) {
actions = new EnumMap<>(Direction.class);
for ( final Direction direction : Direction.values() ) {
final MoveAction action = new MoveAction(direction, direction.toString());
actions.put(direction, action);
view.setAction(direction, action);
}
}
void refreshState() {
final Set<Direction> dirs = model.getPossibleDirections();
for ( final MoveAction action : actions.values() ) {
action.setEnabled(dirs.contains(action.direction));
}
}
class MoveAction extends AbstractAction {
final Direction direction;
MoveAction(Direction direction, String name) {
super(name);
this.direction = direction;
}
public void actionPerformed(ActionEvent evt) {
model.move(direction);
refreshState(); // or have model firePropertyChange
}
}
} | {
"domain": "codereview.stackexchange",
"id": 7413,
"tags": "java, object-oriented, game, mvc, swing"
} |
Relation between electric potential and electric field | Question: I'm unable to understand why the integration of $\mathbf E$ is done in two different ways for constant $\mathbf E$ and varying $\mathbf E$, as in the case of a parallel plate capacitor and a spherical capacitor. More specifically, why do we get $\Delta V=Ed$ for the constant field?
Also, if somehow I understand the idea of integration described above, it doesn't seem to work for calculating the potential energy $U$ for the varying force.
Answer: In general, the difference in electric potential between position $\mathbf a$ and $\mathbf b$ is given by
$$\Delta V=V(\mathbf b)-V(\mathbf a)=-\int_{\mathbf a}^{\mathbf b}\mathbf E\cdot\text d\mathbf r$$
where the integral is a line integral following any path from $\mathbf a$ to $\mathbf b$. This definition is true for any static electric field, constant or not.
However, if the field is constant along the integration path, then we are allowed to take the $\mathbf E$ term outside of the integral:
$$\Delta V=V(\mathbf b)-V(\mathbf a)=-\mathbf E\cdot\int_{\mathbf a}^{\mathbf b}\text d\mathbf r$$
Now, the line integral of $\text d\mathbf r$ is just the vector that points from the start to the end of the path, i.e.
$$\int_{\mathbf a}^{\mathbf b}\text d\mathbf r=\mathbf b-\mathbf a$$
Therefore, for a constant electric field,
$$\Delta V=-\mathbf E\cdot(\mathbf b-\mathbf a)$$
Now, if you are only interested in the magnitude of the potential difference, and if the field points in the same direction as the displacement, we can simplify to
$$|\Delta V|=|\mathbf E|d$$
Where $d$ is the magnitude of $\mathbf b-\mathbf a$.
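As a numerical sanity check of this constant-field shortcut, the line integral can be evaluated directly; the helper below is an illustrative sketch using a simple midpoint rule (names are mine, not from the answer):

```python
import math

def potential_difference(E, a, b, steps=10000):
    """-integral of E . dr along the straight path from point a to point b,
    evaluated numerically with the midpoint rule."""
    total = 0.0
    for i in range(steps):
        t = (i + 0.5) / steps                       # midpoint of each segment
        r = [a[j] + t * (b[j] - a[j]) for j in range(3)]
        dr = [(b[j] - a[j]) / steps for j in range(3)]
        Ev = E(r)
        total -= sum(Ev[j] * dr[j] for j in range(3))
    return total

# Constant field of magnitude 5 along +x, displacement d = 2 along +x:
dV = potential_difference(lambda r: (5.0, 0.0, 0.0), (0, 0, 0), (2, 0, 0))
# dV comes out as -E*d = -10
```

The same helper applied to a $1/r^2$ field reproduces the familiar point-charge potential difference, so the one general definition really does cover both cases.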
This same idea is true for electric potential energy and forces as well, because potential and field are just the energy and force respectively per unit charge, i.e. $\Delta V=\Delta U/q$ and $\mathbf E=\mathbf F/q$. So if you substitute these relations into the first equation above you get
$$\Delta U/q=U(\mathbf b)/q-U(\mathbf a)/q=-\int_{\mathbf a}^{\mathbf b}\mathbf F/q\cdot\text d\mathbf r$$
and multiplying through by $q$ gives
$$\Delta U=U(\mathbf b)-U(\mathbf a)=-\int_{\mathbf a}^{\mathbf b}\mathbf F\cdot\text d\mathbf r$$ | {
"domain": "physics.stackexchange",
"id": 60949,
"tags": "potential, potential-energy, voltage"
} |
Color-coded console output | Question: I wrote this Console.Write function to use in my applications where I need to easily color-code output.
Example syntax would be:
Console.WriteLine("<f=red>this is in red");
Console.WriteLine("<b=gray><f=white>This is white on gray. <f=green>This is green on gray. <b=darkmagenta>This is green on dark magenta. <b=d>This is green on default.");
Which results in this...
I'm looking for a general review of the code.
//github.com/BenVlodgi/CommandHelper
public static class Console
{
private static Regex _writeRegex = new Regex("<[fb]=\\w+>");
public static void WriteLine(string value, int? cursorPosition = null, bool clearRestOfLine = false)
{
Write(value + Environment.NewLine, cursorPosition, clearRestOfLine);
}
public static void Write(string value, int? cursorPosition = null, bool clearRestOfLine = false)
{
if (cursorPosition.HasValue)
System.Console.CursorLeft = cursorPosition.Value;
ConsoleColor defaultForegroundColor = System.Console.ForegroundColor;
ConsoleColor defaultBackgroundColor = System.Console.BackgroundColor;
var segments = _writeRegex.Split(value);
var colors = _writeRegex.Matches(value);
for (int i = 0; i < segments.Length; i++)
{
if (i > 0)
{
ConsoleColor consoleColor;
// Now that we have the color tag, split it into two parts:
// the target (foreground/background) and the color.
var splits = colors[i - 1].Value
.Trim(new char[] { '<', '>' })
.Split('=')
.Select(str => str.ToLower().Trim())
.ToArray();
// if the color is set to d (default), then depending on our target,
// set the color to be the default for that target.
if (splits[1] == "d")
if (splits[0][0] == 'b')
consoleColor = defaultBackgroundColor;
else
consoleColor = defaultForegroundColor;
else
// Grab the console color that matches the name passed.
// If none match, then return default (black).
consoleColor = Enum.GetValues(typeof(ConsoleColor))
.Cast<ConsoleColor>()
.FirstOrDefault(en => en.ToString().ToLower() == splits[1]);
// Set the now chosen color to the specified target.
if (splits[0][0] == 'b')
System.Console.BackgroundColor = consoleColor;
else
System.Console.ForegroundColor = consoleColor;
}
// Only bother writing out, if we have something to write.
if (segments[i].Length > 0)
System.Console.Write(segments[i]);
}
System.Console.ForegroundColor = defaultForegroundColor;
System.Console.BackgroundColor = defaultBackgroundColor;
if (clearRestOfLine)
ClearRestOfLine();
}
public static void ClearRestOfLine()
{
int winTop = System.Console.WindowTop;
int left = System.Console.CursorLeft;
System.Console.Write(new string(' ', System.Console.WindowWidth - left));
System.Console.CursorLeft = left;
System.Console.CursorTop--;
System.Console.WindowTop = winTop;
}
}
Answer: Comments are inline.
public static class Console
Where's your using directives? They're needed for the code to compile; you should post them.
It's a little bit unusual to give your class the same name as one in System but I see why you did it -- so you can just switch to using your new module with a using.
{
private static Regex _writeRegex = new Regex("<[fb]=\\w+>");
Don't forget to make it readonly. The pattern for that Regex should be specified as a verbatim string literal so you don't have to double the backslashes (only you can prevent Leaning Toothpick Syndrome) -- @"<[fb]=\w+>". You don't save much length, but it's easier to understand.
I had initially thought that it might make sense to construct this Regex using RegexOptions.Compiled, but it turns out that has no benefit since it's only used in a static method, so it will automatically be cached. But if you ever use nonstatic methods, it's worth trying it and see if it gives a speed increase.
public static void WriteLine(string value, int? cursorPosition = null, bool clearRestOfLine = false)
{
Write(value + Environment.NewLine, cursorPosition, clearRestOfLine);
}
It's inefficient to use + here to append onto the string. Since .NET System.String is immutable, it has to build a whole new string and copy. It would be better to just call Write with the parameters as you got them, followed by System.Console.Write(Environment.NewLine);.
public static void Write(string value, int? cursorPosition = null, bool clearRestOfLine = false)
{
if (cursorPosition.HasValue)
System.Console.CursorLeft = cursorPosition.Value;
ConsoleColor defaultForegroundColor = System.Console.ForegroundColor;
ConsoleColor defaultBackgroundColor = System.Console.BackgroundColor;
var segments = _writeRegex.Split(value);
var colors = _writeRegex.Matches(value);
This seems a bit inefficient since you're doing the same matching twice. I strongly suspect that it would be better to skip the .Split call and use a foreach loop on the Matches.
for (int i = 0; i < segments.Length; i++)
{
if (i > 0)
As another reviewer suggested, this is suboptimal. If i==0 is a special case, move it outside the loop so you don't need to test i each time through the loop. (The compiler will probably do this for you, so you won't gain any speed -- it's just for code legibility)
{
ConsoleColor consoleColor;
// Now that we have the color tag, split it into two parts:
// the target (foreground/background) and the color.
var splits = colors[i - 1].Value
.Trim(new char[] { '<', '>' })
.Split('=')
.Select(str => str.ToLower().Trim())
.ToArray();
This is clever, but it really would be easier and clearer to just use capturing groups in your Regex so you can access the two parts directly from the Match.
// if the color is set to d (default), then depending on our target,
// set the color to be the default for that target.
if (splits[1] == "d")
if (splits[0][0] == 'b')
This code would be a lot clearer if you used meaningful variable names instead of splits[1] and splits[0].
consoleColor = defaultBackgroundColor;
else
consoleColor = defaultForegroundColor;
You like terseness -- how about the ternary operator here?
else
// Grab the console color that matches the name passed.
// If none match, then return default (black).
consoleColor = Enum.GetValues(typeof(ConsoleColor))
.Cast<ConsoleColor>()
.FirstOrDefault(en => en.ToString().ToLower() == splits[1]);
Again, very clever, but I think it would be more efficient to just build a Dictionary<string,ConsoleColor> one time at the start, rather than searching the whole list each time. LINQ is extremely useful; that doesn't mean it's best used everywhere.
// Set the now chosen color to the specified target.
if (splits[0][0] == 'b')
System.Console.BackgroundColor = consoleColor;
else
System.Console.ForegroundColor = consoleColor;
}
// Only bother writing out, if we have something to write.
if (segments[i].Length > 0)
System.Console.Write(segments[i]);
}
Like the other reviewer said, this if should be at the start so you don't bother unpacking the string at all.
System.Console.ForegroundColor = defaultForegroundColor;
System.Console.BackgroundColor = defaultBackgroundColor;
How about System.Console.ResetColor()? It would make these two variables unnecessary.
if (clearRestOfLine)
ClearRestOfLine();
}
public static void ClearRestOfLine()
{
int winTop = System.Console.WindowTop;
int left = System.Console.CursorLeft;
System.Console.Write(new string(' ', System.Console.WindowWidth - left));
System.Console.CursorLeft = left;
System.Console.CursorTop--;
System.Console.WindowTop = winTop;
}
}
Does this do what you want after a WriteLine? It looks to me like it clears the next line not the current one. | {
"domain": "codereview.stackexchange",
"id": 13096,
"tags": "c#, .net, console"
} |
How to understand the charge of a black hole | Question: How can I understand the charge of a black hole?
We can understand the charge of elementary particles, like the charge of a proton or neutron. But what does the charge of a big object like that of a black hole mean? A black hole will have so much stuff inside it. So will we add the charge of all the particles inside of a black hole to calculate the charge of a black hole as a whole? I know this sounds inappropriate, but I am very confused by the statement "Charge of a black hole". It would be very helpful if someone can clarify.
Note: There is a similar question, How can a black hole have a charge, or be charged?. I have gone through it, but I am not satisfied.
Answer: You cannot dig into the black hole, count "all the quantised charges it contains", and report the result to the observer at infinity. The same applies to the BH mass -- you do not count masses of whatever particles "inside"/within the event horizon.
Instead, you can measure the electric field the BH produces far enough -- the usual way, by measuring the force upon the charged particle, subtracting the gravitational force from that (e.g. measured by the electrically neutral device/test particle of the same mass). That electric field asymptotically (at large distance $r$) scales as ~$q/r^2$. From that you infer the electric charge $q$. The force exerted upon the test particle gets somewhat more involved if the BH has non-zero angular momentum -- but the general idea is as outlined above: from observations "far enough" you find how the metric tensor and the EM vector potential depend on $r$, the coefficients at the leading terms (e.g. $\propto r^{-2}$ for the Newtonian approximation of the gravitational force and electric field) give you the mass and the charge of the BH. | {
"domain": "astronomy.stackexchange",
"id": 6500,
"tags": "black-hole, astrophysics"
} |
Percent variation of a physical quantity: how to choose what goes in the denominator? | Question: I came up with a very basic doubt on the percent variation of a physical quantity, say $f$, defined as
$$\delta f_{\%}=|{\frac{\Delta f}{f}}|\cdot 100$$
The problem is with the denominator. I'll make a practical example.
I came across the following statement in some acoustics problems:
Consider that acoustic beats are heard only if the percentual difference between the two frequencies is less than $5\%$, that is $$ |{\frac{\Delta f}{f}}|<0.05$$
I do not understand what the denominator is in this situation. I have two different frequencies $f_1$ and $f_2$, so $|\Delta f|=|f_1-f_2|=|f_2-f_1|$, but what about the $f$ below? Is it $f_1$ or $f_2$? What is the criterion for choosing?
Depending on the choice of denominator, the range of frequencies for which we have beats is different, so it matters. And there seems to be no selection criterion between $f_1$ and $f_2$, because neither of them is something like the "initial" frequency. The two of them are just there, playing the same role.
So how should one deal with these situations? Is there a general rule for choosing which of the two values of the same physical quantity should go in the denominator when we want the percent variation?
Answer: Based on some experience in music, I realized that the relation between the frequencies of two pitches separated by the same interval, say from C to D (a whole step) or from D to E (also a whole step), is
$f_2 = af_1$,
where $f_2$ is the higher pitch, and $f_1$ is the lower,
and $a$ depends only on the distance of steps from the lower to higher pitch.
This means that the frequency difference at higher pitches will be greater given the same distance of steps (say, the difference is higher in $F_3$ to $G_3$ than that in $C_3$ to $D_3$ and higher still in $F_4$ to $G_4$)
Let's say we have $f_2 = cf_1$, where $c$ is the threshold constant for $f_1$, such that for ratios less than $c$ we are able to hear beats from the two pitches. Then the condition must be
$$f_2<cf_1$$
$$f_2-f_1 < cf_1 - f_1$$
$$\frac{f_2-f_1}{f_1}<c-1$$
where $c-1 = 0.05$
which means $c = 1.05$
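A small sketch of this criterion with the lower frequency in the denominator, as derived above (function name and example frequencies are illustrative):

```python
def beats_audible(f1, f2, threshold=0.05):
    """Beats criterion |df|/f < threshold, with the lower of the two
    frequencies in the denominator."""
    low, high = min(f1, f2), max(f1, f2)
    return (high - low) / low < threshold

print(beats_audible(440.0, 460.0))   # 20/440 ~ 0.045 -> True
print(beats_audible(440.0, 466.0))   # 26/440 ~ 0.059 -> False
```

Using min() also makes the criterion symmetric, so it no longer matters which frequency you call $f_1$.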
So it looks like we should use $f_1$, the lower of the two frequencies, in the denominator. | {
"domain": "physics.stackexchange",
"id": 31536,
"tags": "homework-and-exercises, acoustics, frequency"
} |
Simple comment system | Question: This is a comment system that I wrote, is it secure?
<!doctype html>
<html>
<head>
<meta charset="utf-8">
<link rel="stylesheet" type="text/css" href="style.css">
<?PHP
//Turn off error reporting (Not Necessary)
error_reporting(E_ALL ^ E_NOTICE);
//Connect to the database
if (!@mysql_connect('localhost', 'user', '') or !@mysql_select_db('comments')) {
die('Could not connect, please check back later.');
}
//Variables
$guest_name = htmlentities(str_replace(' ', '',$_POST['guest_name']));
$comment = htmlentities($_POST['comment']);
$time = date('g:i A', time());
$date = date('n/j/Y');
$query = "INSERT INTO `user_comments` VALUES(
'".mysql_real_escape_string('')."',
'".mysql_real_escape_string($guest_name)."',
'".mysql_real_escape_string($comment)."',
'".mysql_real_escape_string($date)."',
'".mysql_real_escape_string($time)."'
)";
$array_query = mysql_query("SELECT * FROM `comments` ORDER BY `ID` DESC");
//Insert comment into the database
if (isset($_POST['submit_comment']) && !empty($comment) && !empty($guest_name)) {
if(mysql_query($query)) {
header('Location: index.php');
}
}
//Check if user has filled in all fields
if (isset($_POST['submit_comment']) && empty($guest_name)) {
echo '<span style="color:red">*Please enter a Guest Name*</span><br>';
}
if (isset($_POST['submit_comment']) && empty($comment)) {
echo '<span style="color:red">*Please enter a Comment*</span>';
}
//List comments
function list_comments () {
$query = "SELECT * FROM `user_comments` ORDER BY `ID` DESC";
$query_run = mysql_query($query);
while ($result = mysql_fetch_array($query_run)) {
echo '<div id="comment_border">' . $result['Guest Name'] . ' - ' . $result['Date'] . ' - ' . $result['Time'] . '<br>' . $result['Comment'] . '<br></div><br>';
}
};
?>
<title>Main_Page</title>
</head>
<body>
<!-- \/ COMMENT FORM \/ -->
<form method="post" action="index.php">
<input type="text" name="guest_name" maxlength="15" placeholder="Guest Name"><br>
<textarea cols="70" rows="5" name="comment" maxlength="512" placeholder="Comment"></textarea>
<input type="submit" name="submit_comment">
</form>
<!-- /\ COMMENT FORM /\ -->
<hr><h1>Comments</h1><br>
<!-- Comments will list below here -->
<?PHP
list_comments();
?>
</body>
</html>
Answer: <?PHP
Normally it's all lower case, just saying.
//Turn off error reporting (Not Necessary)
error_reporting(E_ALL ^ E_NOTICE);
This is one prime example of confusing and not helpful commenting. If this call is not necessary, why is it there at all?
if (!@mysql_connect('localhost', 'user', '') or !@mysql_select_db('comments')) {
Stop using the mysql_* functions, seriously.
$guest_name = htmlentities(str_replace(' ', '',$_POST['guest_name']));
Why are you replacing entities here? You should not replace stuff when saving it into your database. Ideally the data in your database would be as unbiased as possible. Only replace entities if you need to, f.e. you display it on a webpage.
$time = date('g:i A', time());
$date = date('n/j/Y');
$query = "INSERT INTO `user_comments` VALUES(
'".mysql_real_escape_string('')."',
'".mysql_real_escape_string($guest_name)."',
'".mysql_real_escape_string($comment)."',
'".mysql_real_escape_string($date)."',
'".mysql_real_escape_string($time)."'
)";
From this query I can tell you a few things:
Your database layout is sub-optimal: it uses VARCHAR fields where it should use a DATETIME field.
You like to copy and paste code, that's bad.
You don't do things explicitly, consider adding a fields list to your queries.
First, you should be using a DATETIME type if you're gonna save date and time. MySQL also supports 'pure' DATE and TIME types; there's absolutely no reason and no excuse to use a VARCHAR field for that.
And then there's your missing fields list, I made a habit out of adding a fields list to my queries for two reasons:
It lets the reader know exactly what is getting set where.
It makes your code more robust: the query will not fail if you add a field. Removing a field is also easier, as a simple grep for the field will yield all queries that use it.
So, your query should look like this:
$query = "
INSERT INTO
user_comments
SET
name = :name,
comment = :comment;
";
It's easy to read and you know exactly what ends where. Additionally the date/time stuff is here completely missing, because I changed the database layout:
CREATE TABLE user_comments (
name VARCHAR(15),
comment VARCHAR(512),
timestamp TIMESTAMP DEFAULT CURRENT_TIMESTAMP
);
The TIMESTAMP type is optimal for this kind of operation, as it is able to save automatically the time when the row is created.
$array_query = mysql_query("SELECT * FROM `comments` ORDER BY `ID` DESC");
Same here, your usage of the * makes it easily broken by adding new fields. Additionally you don't know what fields are fetched until you look at the table declaration.
header('Location: index.php');
I'm 99% sure that this should not work. Headers cannot be changed after data has been sent; you set the header in the middle of the file, but your php script should be moved to the top of the file. Additionally, if you want to redirect people, it would be good to die() afterwards to make sure that your script stops executing at this point:
<?php
if (condition) {
header("Location: othersite.php");
die("Redirect: <full address here>");
}
?>
<html>
<head>
...
This will also allow everyone who is not automatically redirected to see where to go to now.
echo '<span style="color:red">*Please enter a Guest Name*</span><br>';
You're outputting stuff while inside the <head>, that's bad practice, content should go into the <body>.
<!-- \/ COMMENT FORM \/ -->
What is this?!
And just to make sure:
Stop using the mysql_* functions and start using prepared queries! | {
"domain": "codereview.stackexchange",
"id": 6036,
"tags": "php, html, security"
} |
Can I sum up feature vectors of a user‘s collection? | Question: I want to find items that are similar to items users already have in their collection. Every item has attributes, so I created feature vectors where every element of the vector represents an attribute and is either $0$ or $1$ (if an item has that attribute).
For the user collection I summed up all vectors, creating one vector which I then used to calculate similarities with other items.
Is this a correct approach, or should I make this "user vector" binary like the other ones? Or is it easier to just calculate $n \times m$ (i.e. user items times new items) similarities?
The set of new items will consist of $\sim1000$ items, while the user collections tend to be $<1000$. As similarity function I used cosine distance, but wanted to try Pearson coefficient as well.
Answer: You can use total sum of boolean values. That will be fast and give a general notion of similarity.
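As an illustration of the summed-vector approach with cosine similarity (toy data, plain Python):

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length numeric vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

# Binary attribute vectors for the items a user already owns.
collection = [
    [1, 0, 1, 0],
    [1, 1, 0, 0],
    [1, 0, 0, 1],
]
# Summed "user vector": attribute counts across the collection.
user_vector = [sum(col) for col in zip(*collection)]   # [3, 1, 1, 1]

candidate = [1, 0, 1, 0]
score = cosine(user_vector, candidate)
```

Note that summing (rather than binarizing) weights attributes by how often they occur in the collection, which may or may not be what you want.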
A more useful metric might be Hamming distance, the number of positions at which the two boolean vectors differ. | {
"domain": "datascience.stackexchange",
"id": 9185,
"tags": "recommender-system, similarity, cosine-distance, vector-space-models"
} |
Problem in casting with aluminium? (misrun?) | Question: I'm trying to cast an impeller but I have a problem. The melted aluminium doesn't go inside the thin parts of the mould. Here is the mould before casting:
After the casting:
After breaking the mould:
As you can see the vanes of the impeller are not produced. It should be something like this:
What's the problem? How can I figure it out?
Answer: The aluminum was not hot enough; you need more superheat. A poor alloy choice is also possible: a high-silicon alloy (such as 12% Si) is common for aluminum castings. Shell molds are normally heated; I would use at least 500 °F. The surface finish looks rough, so a finer-grained first layer of sand is needed. The good example you show took many experienced engineers years to develop. | {
"domain": "engineering.stackexchange",
"id": 4776,
"tags": "casting"
} |
Does shining a laser on a curved surface reduce its effective intensity? | Question: Suppose I have a Gaussian laser with a width $w$ and an intensity $I$ defined as $P/\pi w^2$, where $P$ is the laser power. If I shine the laser beam onto a convex surface with a radius of curvature $r$, would the effective intensity of the laser change due to this curved surface having a larger effective area under the beam than for a flat surface?
Answer: Intensity is defined as the power through a unit area. In other words, it is the amount of energy that arrives per unit area, per unit time.
Therefore, over a larger surface area, the intensity of the light (laser) will be reduced at any given point. This means that if you were to shine the laser onto a convex surface as opposed to a flat surface, the intensity at any point will be smaller, since the same power is spread over a larger area.
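As a rough geometric illustration of this (assuming a uniform rather than Gaussian beam, and taking the illuminated spherical cap as the effective area; the function is mine, not from the answer):

```python
import math

def intensity_ratio(w, r):
    """Average-intensity ratio (curved wall / flat wall) for a uniform beam
    of radius w hitting a sphere of radius r head-on: the flat disk area
    divided by the illuminated spherical-cap area."""
    flat_area = math.pi * w ** 2
    cap_height = r - math.sqrt(r ** 2 - w ** 2)     # height of the lit cap
    cap_area = 2.0 * math.pi * r * cap_height       # spherical cap: 2*pi*r*h
    return flat_area / cap_area
```

In this sketch the ratio is 1/2 when $r = w$ (a full hemisphere is lit) and approaches 1 as $r$ grows, i.e. the flatter the surface looks to the beam, the smaller the reduction.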
A good example of this, generalized to any light source, is the seasons on Earth (the Earth is also a convex surface). Many people attribute the temperatures of the seasons to the distance of the Earth from the Sun. But in reality, because the Earth has a tilt, when this tilt points toward the Sun the light strikes a smaller area, so there is more direct sunlight per unit area, making it warmer; in winter the tilt is in the opposite direction, so the light strikes a greater area, making it cooler. | {
"domain": "physics.stackexchange",
"id": 82436,
"tags": "optics, laser"
} |
Is there a faster algorithm than FFT if interested only on the maximum amplitude frequency? | Question: Given an $n$ input array, is there an algorithm that is faster than Fast Fourier Transform if we are only interested in obtaining the maximum amplitude frequency?
Looking at the Cooley–Tukey algorithm it seems there would be a way to "prune" frequencies that are guaranteed to not be of maximum amplitude.
The application seems common enough that some other people might have already worked on a similar problem.
I am both interested in algorithms that are time-complexity-wise faster and practically faster through small "shortcuts" arising from this particular problem.
Is there such an algorithm?
Answer: Partial FFT and Sparse FFT can exploit an expected range of said maximum.
Practically, the range of frequencies over which the maximum may occur is often known. There are also approaches to estimate said location. These are signals, not algorithms questions.
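One concrete way to exploit a known candidate range, shown here as an illustration (the Goertzel algorithm is a standard single-bin DFT technique, not mentioned in the answer): each candidate bin costs $O(n)$, so checking $k$ bins directly beats an $O(n \log n)$ FFT roughly when $k$ is below $\log n$.

```python
import math

def goertzel_power(x, k):
    """Squared magnitude |X[k]|^2 of DFT bin k of sequence x, in O(n)
    per bin, without computing the full FFT (Goertzel algorithm)."""
    n = len(x)
    coeff = 2.0 * math.cos(2.0 * math.pi * k / n)
    s_prev = s_prev2 = 0.0
    for sample in x:
        s = sample + coeff * s_prev - s_prev2
        s_prev2, s_prev = s_prev, s
    return s_prev * s_prev + s_prev2 * s_prev2 - coeff * s_prev * s_prev2

def strongest_bin(x, candidate_bins):
    """Pick the maximum-amplitude bin among a small candidate set."""
    return max(candidate_bins, key=lambda k: goertzel_power(x, k))
```

For a large or unknown candidate range, the FFT remains the better tool, as the answer goes on to explain.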
Generally, the maximum can occur anywhere. That is also a signals question. The FFT is merely a fast computation of the Discrete Fourier Transform, so the place to look is Fourier theory.
Cooley–Tukey ... seems there would be a way to "prune" [non-maximum] frequencies
FFT devotes equal compute to all bins, and bin amplitude isn't known before computation is complete. If there's a way to conditionally "intercept" the computation, it'll likely add way more compute than it saves - even when the interval is known ahead of time (as in the two linked algos), the savings are marginal. Put simply, the FFT is brutally fast for what it computes - it's not just math, but hardware optimizations. | {
"domain": "cs.stackexchange",
"id": 21475,
"tags": "time-complexity, fast-fourier-transform"
} |
Which redshift is used to determine the Hubble constant? | Question: I think they measure cosmological redshift to use in the Hubble-Lemaître law together with the distance to calculate $H_0$. Is this correct, or do they use Doppler shift (too)?
$H_0$ indicates how fast the universe is expanding, so I find it logical that we measure cosmological redshift. But the galaxies are receding from us (due to the Hubble flow), so do they gain an additional Doppler shift?
Answer: The term "Hubble flow" refers to the homologous expansion of space and the resulting recession of all galaxies from each other (if they're not close enough to be gravitationally bound). This effect causes the "cosmological redshift", i.e. the redshift that light from distant galaxies attain as it travels through space.
In addition to this motion away from each other, galaxies have a so-called peculiar velocity, i.e. a motion through space. This motion adds an additional Doppler shift to the cosmological redshift, either to larger or smaller wavelengths, depending on the direction.
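Numerically, the two effects combine multiplicatively, $(1+z_{\rm obs}) = (1+z_{\rm cos})(1+z_{\rm pec})$, a standard result. A small sketch (the 600 km/s peculiar velocity is just an illustrative value):

```python
def observed_redshift(z_cos, v_pec_km_s):
    # (1 + z_obs) = (1 + z_cos) * (1 + z_pec): the cosmological and peculiar
    # (Doppler) redshifts combine multiplicatively, not additively.
    c = 299_792.458                                  # speed of light in km/s
    beta = v_pec_km_s / c
    z_pec = ((1 + beta) / (1 - beta)) ** 0.5 - 1     # relativistic radial Doppler
    return (1 + z_cos) * (1 + z_pec) - 1

print(observed_redshift(0.10, +600.0))   # receding peculiar motion: z_obs > 0.10
print(observed_redshift(0.10, -600.0))   # approaching peculiar motion: z_obs < 0.10
```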
Whether or not you see these two types of redshift as fundamentally different is not trivial, I think. In most textbooks, they are described as two different things, the former having to do with the dynamics of the fabric of space, and the latter having to do with motions of emitters and observers. But in fact it may not be so different. For instance, the Welsh cosmologist Geraint Lewis argues that, in some sense, the cosmological redshift can be interpreted as the sum of infinitely many infinitesimally small Doppler shifts (Lewis 2016). On the other hand, the American physicist Sean Carroll argues that the notion of expanding space is nevertheless an extremely useful concept (Carroll 2008). | {
"domain": "astronomy.stackexchange",
"id": 4574,
"tags": "cosmology, redshift, hubble-constant, doppler-effect"
} |
How can I convert my tree to the correct nexus format? | Question: I would like to use the program BayesTraitV4. I have my phylogenetic tree in NEXUS format but the program does not accept it. I checked the example data and while they are also in NEXUS format, they seemed different. I will provide a part of my data and the example data respectively.
My tree:
#NEXUS
begin taxa;
dimensions ntax=85;
taxlabels
LevantinaFC4779
Assyriella_naegelei
Ceu_LAG_2
CeuMist2
CeuMist1
Ceu_PAN_2
Ceu_PAN_1
Ceu_PAN_3
Ceu_PAN_4
CcMath1
CcNav1
CcNav2
Cc_Mal_1
Cc_KalR2_2
CcRod1
CcRod2
CcRod3
CcRod4
CcHr1
CeST6_1
CeST6_4
CeST6_5
CeAA7_1
CeST6_3
CeST6_2
Cg_MelR7_1
Cg_MelR3_1
Cg_MelR6_2
Cg_MelR7_2
Cg_MelR6_1
Cg_MelR7_3
Cg_MelR5_1
Cg_MelR4_2
Cg_MelR1_1
Cg_MelR2_1
Cg_MelR4_1
Cg_MelR7_4
Cg_MelR5_2
ChTM4_2
ChTM5_1
ChTM4_3
ChTM4_1
Ch_MEN_5
Ch_MEN_13
Ch_MEN_8
Ch_LEV_3
Ch_LEV_5
Ch_LEV_4
Ch_MEN_6
Ch_MEN_10
Ch_LEV_2
Ch_MEN_4
Ch_MEN_14
Ch_LEV_1
Ch_MEN_11
Ch_MEN_3
Ch_MEN_9
Ci_MSP_3
Ci_MSP_2
Ci_MSP_1
Ci_DIA_1
Ci_DIA_2
CiPLAN1
CiPLAN3
CiPLAN2
Ci_MEN_1
Ci_MEN_2
Ci_MEN_12
CpGravR6_1
CpGravR3
CpGravR4
CpGio_2
CpGravR5_2
CpGravR2
CpGravR7
CpGio_1
CpGio_3
CpGravR1
CpGravR6_2
CpGravR5_1
CpThe_1
CpSka_1
Cn_IG_2
Cn_IG_3
Cn_IG_1
;
end;
begin trees;
tree TREE1 = [&R] ((LevantinaFC4779[&rate_range={7.888601174550756E-4,0.2727277039812744},height_95%_HPD={0.0,1.5711876244495215E-12},length_range={0.6843282088933,6.197383612374},height_median=6.004086117172847E-13,length_95%_HPD={1.080900382604,3.875114824373},height=6.614724410322848E-13,rate=0.049900260878735954,height_range={0.0,5.799805080641818E-12},rate_median=0.045084434131968686,length=2.411967820837161,length_median=2.3472043999879997,rate_95%_HPD={0.0108358382756707,0.10141567250363237}]:2.4119678208371664,Assyriella_naegelei[&rate_range={2.4004562386929353E-4,0.19480057878691465},height_95%_HPD={0.0,1.5711876244495215E-12},length_range={0.6843282088933,6.197383612374},height_median=6.004086117172847E-13,length_95%_HPD={1.080900382604,3.875114824373},height=6.614724410322848E-13,rate=0.03027613080757391,height_range={0.0,5.799805080641818E-
Example tree:
#NEXUS
Begin trees;
translate
1 Opossum,
2 Diprotodontian,
3 Sloth,
4 Armadillo,
5 Anteater,
6 Hedgehog,
7 Mole,
8 Shrew,
9 Tenrecid,
10 Golden_Mole,
11 Sirenian,
12 Hyrax,
13 Elephant,
15 Lo_Ear_Ele_shrew,
17 Sciurid,
18 Mouse,
19 Rat,
20 Hystricid,
21 Caviomorph,
22 Rabbit,
23 Pika,
24 Flying_Lemur,
25 Tree_Shrew,
26 Strepsirrhine,
28 Phyllostomid,
29 Free_tailed_bat,
30 False_vampire_bat,
31 Flying_Fox,
32 Rousette_Fruitbat,
33 Whale,
35 Hippo,
36 Llama,
37 Ruminant,
38 Pig,
39 Horse,
40 Rhino,
41 Tapir,
42 Cat,
43 Caniform,
44 Pangolin;
tree tree.164000.208997.409491 = ((1:0.07628,2:0.065151):0.1546885,(((15:0.133631,(9:0.157087,10:0.077841):0.007403):0.005959,(11:0.0406,(12:0.092769,13:0.052284):0.001099):0.01453):0.029813,((4:0.05807,(3:0.054688,5:0.062757):0.008238):0.029441,(((26:0.083487,(24:0.064103,25:0.110063):0.003501):0.000775,((22:0.058663,23:0.116805):0.045375,(17:0.0898,((18:0.039267,19:0.043896):0.125808,(20:0.083297,21:0.089139):0.067194):0.006543):0.011062):0.005229):0.00803,((7:0.090648,(6:0.150383,8:0.133021):0.006503):0.01198,((36:0.062932,(38:0.066525,(37:0.076473,(35:0.039222,33:0.031429):0.00357):0.008228):0.005968):0.026714,((44:0.081521,(42:0.046822,43:0.059428):0.024607):0.004257,((39:0.046001,(40:0.029049,41:0.029533):0.003965):0.022397,((28:0.093611,29:0.050095):0.013432,(30:0.061026,(31:0.018601,32:0.018701):0.036144):0.004989):0.015808):0.001216):0.000872):0.002868):0.009567):0.010713):0.00234):0.1546885);
tree tree.165000.208997.199794 = ((1:0.07628,2:0.065151):0.1546885,(((15:0.133631,(9:0.157087,10:0.077841):0.007403):0.005959,(11:0.0406,(12:0.092769,13:0.052284):0.000485):0.01453):0.029813,((4:0.05807,(3:0.054688,5:0.062757):0.008238):0.029441,(((26:0.083487,(24:0.064103,25:0.110063):0.002826):0.000775,((22:0.058663,23:0.116805):0.045375,(17:0.0898,((18:0.039267,19:0.043896):0.125808,(20:0.083297,21:0.089139):0.067194):0.006543):0.011062):0.003736):0.00803,((7:0.090648,(6:0.150383,8:0.133021):0.010132):0.01198,((44:0.084486,(42:0.046822,43:0.059428):0.024607):0.004257,((36:0.062932,(38:0.066525,(37:0.076473,(35:0.039222,33:0.031429):0.004674):0.008228):0.005968):0.026714,((39:0.046001,(40:0.029049,41:0.029533):0.003965):0.022397,((28:0.093611,29:0.050095):0.008576,(30:0.061026,(31:0.018601,32:0.018701):0.036144):0.004989):0.015808):0.001216):0.000872):0.002868):0.009567):0.010713):0.00234):0.1546885);
;
end;
Why are these formats different and how can I make my file look like the example one?
Keep in mind the files are not presented in their entirety because I'm limited with the amount of characters.
Edit: I will be providing my tree in this link https://drive.google.com/file/d/1lYmzW9htwaclC_yYs3fQom-Z3zbjk8wH/view?usp=sharing
Answer: Okay ... this will probably work
#NEXUS
begin trees;
tree TREE1 = [&R] ((LevantinaFC4779:2.411968,Assyriella_naegelei:2.411968):5.752052,((Cn_IG_1:0.126901,(Cn_IG_2:0.075971,Cn_IG_3:0.075971):0.05093):7.4805,(((CpThe_1:0.122189,CpSka_1:0.122189):2.663216,((CpGio_2:0.137489,(CpGravR6_1:0.087156,(CpGravR3:0.038234,CpGravR4:0.038234):0.048922):0.050334):0.257308,(CpGravR5_1:0.28159,((CpGravR7:0.099712,(CpGravR5_2:0.020255,CpGravR2:0.020255):0.079458):0.15617,(CpGravR6_2:0.109222,(CpGio_1:0.068711,(CpGio_3:0.049345,CpGravR1:0.049345):0.019367):0.040511):0.14666):0.025708):0.113207):2.390607):1.772627,(((Ci_MEN_12:0.072368,(Ci_MEN_1:0.022725,Ci_MEN_2:0.022725):0.049643):1.78894,((CiPLAN2:0.253487,(CiPLAN1:0.067509,CiPLAN3:0.067509):0.185978):0.386279,((Ci_MSP_3:0.061614,Ci_MSP_2:0.061614):0.231268,(Ci_MSP_1:0.113777,(Ci_DIA_1:0.047073,Ci_DIA_2:0.047073):0.066704):0.179105):0.346885):1.221542):1.580745,(((Ceu_LAG_2:1.580984,(CeuMist2:0.198982,CeuMist1:0.198982):1.382002):0.341226,((Ceu_PAN_2:0.046898,Ceu_PAN_1:0.046898):0.20368,(Ceu_PAN_3:0.018814,Ceu_PAN_4:0.018814):0.231764):1.671632):1.226475,((CcHr1:1.247104,((CcRod1:0.403751,(CcRod4:0.276104,(CcRod2:0.158977,CcRod3:0.158977):0.117127):0.127647):0.308704,((Cc_Mal_1:0.138232,Cc_KalR2_2:0.138232):0.395435,(CcMath1:0.338389,(CcNav1:0.091762,CcNav2:0.091762):0.246627):0.195278):0.178788):0.534649):1.609976,(((CeST6_5:0.056918,(CeST6_1:0.030107,CeST6_4:0.030107):0.026812):0.733078,((CeST6_2:0.143014,(CeAA7_1:0.097814,CeST6_3:0.097814):0.0452):0.349431,((Cg_MelR7_1:0.136715,(Cg_MelR3_1:0.088065,(Cg_MelR6_2:0.047987,(Cg_MelR7_2:0.019268,Cg_MelR6_1:0.019268):0.028719):0.040078):0.04865):0.074941,(Cg_MelR5_2:0.143401,(Cg_MelR7_3:0.119131,(Cg_MelR5_1:0.090065,((Cg_MelR4_1:0.020938,Cg_MelR7_4:0.020938):0.051613,(Cg_MelR4_2:0.052913,(Cg_MelR1_1:0.021387,Cg_MelR2_1:0.021387):0.031526):0.019639):0.017514):0.029066):0.02427):0.068255):0.280789):0.297551):0.793423,(((ChTM4_2:0.049871,ChTM5_1:0.049871):0.111934,(ChTM4_3:0.023134,ChTM4_1:0.023134):0.138672):0.228306,(Ch_MEN_9:0.174548,((
Ch_MEN_11:0.021817,Ch_MEN_3:0.021817):0.10954,((Ch_MEN_8:0.057141,(Ch_MEN_5:0.022973,Ch_MEN_13:0.022973):0.034168):0.053065,((Ch_LEV_3:0.031056,Ch_LEV_5:0.031056):0.061225,((Ch_LEV_4:0.029359,(Ch_MEN_6:0.017456,Ch_MEN_10:0.017456):0.011904):0.034407,((Ch_LEV_2:0.017911,Ch_MEN_4:0.017911):0.040741,(Ch_MEN_14:0.03419,Ch_LEV_1:0.03419):0.024462):0.005114):0.028515):0.017925):0.021151):0.043191):0.215563):1.193307):1.27366):0.291605):0.293368):1.115978):3.04937):0.556619);
end;
Alternatively, if it's the rooting that's the problem (I'm not sure about it), just remove the [&R] in the above.
Finally, there could be an issue that you haven't obtained the sample directly from the raw BEAST output but have summarised it instead. BayesTraits might complain about that, so it might need all the trees from the raw BEAST output. That's not a format issue, it's a data input issue: BayesTraits might perform the calculation against the total MCMC sample minus the burn-in.
from comments
Read it into R's ape::read.nexus() and then write it out via phytools::writeNexus(). This should produce your translate table. Biopython recognises this format but doesn't write it.
The closest Biopython can achieve is a taxa block, shown here:
#NEXUS
Begin Taxa;
Dimensions NTax=85;
TaxLabels LevantinaFC4779 Assyriella_naegelei Cn_IG_1 Cn_IG_2 Cn_IG_3 CpThe_1 CpSka_1 CpGio_2 CpGravR6_1 CpGravR3 CpGravR4 CpGravR5_1 CpGravR7 CpGravR5_2 CpGravR2 CpGravR6_2 CpGio_1 CpGio_3 CpGravR1 Ci_MEN_12 Ci_MEN_1 Ci_MEN_2 CiPLAN2 CiPLAN1 CiPLAN3 Ci_MSP_3 Ci_MSP_2 Ci_MSP_1 Ci_DIA_1 Ci_DIA_2 Ceu_LAG_2 CeuMist2 CeuMist1 Ceu_PAN_2 Ceu_PAN_1 Ceu_PAN_3 Ceu_PAN_4 CcHr1 CcRod1 CcRod4 CcRod2 CcRod3 Cc_Mal_1 Cc_KalR2_2 CcMath1 CcNav1 CcNav2 CeST6_5 CeST6_1 CeST6_4 CeST6_2 CeAA7_1 CeST6_3 Cg_MelR7_1 Cg_MelR3_1 Cg_MelR6_2 Cg_MelR7_2 Cg_MelR6_1 Cg_MelR5_2 Cg_MelR7_3 Cg_MelR5_1 Cg_MelR4_1 Cg_MelR7_4 Cg_MelR4_2 Cg_MelR1_1 Cg_MelR2_1 ChTM4_2 ChTM5_1 ChTM4_3 ChTM4_1 Ch_MEN_9 Ch_MEN_11 Ch_MEN_3 Ch_MEN_8 Ch_MEN_5 Ch_MEN_13 Ch_LEV_3 Ch_LEV_5 Ch_LEV_4 Ch_MEN_6 Ch_MEN_10 Ch_LEV_2 Ch_MEN_4 Ch_MEN_14 Ch_LEV_1;
End;
Begin Trees;
Tree tree1=((LevantinaFC4779:2.41197,Assyriella_naegelei:2.41197):5.75205,((Cn_IG_1:0.12690,(Cn_IG_2:0.07597,Cn_IG_3:0.07597):0.05093):7.48050,(((CpThe_1:0.12219,CpSka_1:0.12219):2.66322,((CpGio_2:0.13749,(CpGravR6_1:0.08716,(CpGravR3:0.03823,CpGravR4:0.03823):0.04892):0.05033):0.25731,(CpGravR5_1:0.28159,((CpGravR7:0.09971,(CpGravR5_2:0.02025,CpGravR2:0.02025):0.07946):0.15617,(CpGravR6_2:0.10922,(CpGio_1:0.06871,(CpGio_3:0.04934,CpGravR1:0.04934):0.01937):0.04051):0.14666):0.02571):0.11321):2.39061):1.77263,(((Ci_MEN_12:0.07237,(Ci_MEN_1:0.02272,Ci_MEN_2:0.02272):0.04964):1.78894,((CiPLAN2:0.25349,(CiPLAN1:0.06751,CiPLAN3:0.06751):0.18598):0.38628,((Ci_MSP_3:0.06161,Ci_MSP_2:0.06161):0.23127,(Ci_MSP_1:0.11378,(Ci_DIA_1:0.04707,Ci_DIA_2:0.04707):0.06670):0.17910):0.34688):1.22154):1.58075,(((Ceu_LAG_2:1.58098,(CeuMist2:0.19898,CeuMist1:0.19898):1.38200):0.34123,((Ceu_PAN_2:0.04690,Ceu_PAN_1:0.04690):0.20368,(Ceu_PAN_3:0.01881,Ceu_PAN_4:0.01881):0.23176):1.67163):1.22647,((CcHr1:1.24710,((CcRod1:0.40375,(CcRod4:0.27610,(CcRod2:0.15898,CcRod3:0.15898):0.11713):0.12765):0.30870,((Cc_Mal_1:0.13823,Cc_KalR2_2:0.13823):0.39543,(CcMath1:0.33839,(CcNav1:0.09176,CcNav2:0.09176):0.24663):0.19528):0.17879):0.53465):1.60998,(((CeST6_5:0.05692,(CeST6_1:0.03011,CeST6_4:0.03011):0.02681):0.73308,((CeST6_2:0.14301,(CeAA7_1:0.09781,CeST6_3:0.09781):0.04520):0.34943,((Cg_MelR7_1:0.13672,(Cg_MelR3_1:0.08807,(Cg_MelR6_2:0.04799,(Cg_MelR7_2:0.01927,Cg_MelR6_1:0.01927):0.02872):0.04008):0.04865):0.07494,(Cg_MelR5_2:0.14340,(Cg_MelR7_3:0.11913,(Cg_MelR5_1:0.09007,((Cg_MelR4_1:0.02094,Cg_MelR7_4:0.02094):0.05161,(Cg_MelR4_2:0.05291,(Cg_MelR1_1:0.02139,Cg_MelR2_1:0.02139):0.03153):0.01964):0.01751):0.02907):0.02427):0.06825):0.28079):0.29755):0.79342,(((ChTM4_2:0.04987,ChTM5_1:0.04987):0.11193,(ChTM4_3:0.02313,ChTM4_1:0.02313):0.13867):0.22831,(Ch_MEN_9:0.17455,((Ch_MEN_11:0.02182,Ch_MEN_3:0.02182):0.10954,((Ch_MEN_8:0.05714,(Ch_MEN_5:0.02297,Ch_MEN_13:0.02297):0.03417):0.05307,((Ch_LEV_3:
0.03106,Ch_LEV_5:0.03106):0.06123,((Ch_LEV_4:0.02936,(Ch_MEN_6:0.01746,Ch_MEN_10:0.01746):0.01190):0.03441,((Ch_LEV_2:0.01791,Ch_MEN_4:0.01791):0.04074,(Ch_MEN_14:0.03419,Ch_LEV_1:0.03419):0.02446):0.00511):0.02851):0.01792):0.02115):0.04319):0.21556):1.19331):
1.27366):0.29161):0.29337):1.11598):3.04937):0.55662):0.00000;
End;
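If you prefer to stay in Python, the translate table itself is simple enough to generate with a short standalone script. This is a hypothetical sketch, not a Biopython feature: it assumes unquoted tip labels made of word characters, each followed by a branch length, with no per-node annotations (i.e. run it on the cleaned tree above, not the raw annotated BEAST one):

```python
import re

def newick_to_translated_nexus(newick, tree_name="TREE1"):
    """Turn a plain Newick string with named tips into a NEXUS trees block
    that uses a translate table (the layout the BayesTraits example uses)."""
    # Tip labels: word characters right after '(' or ',' and followed by ':'
    labels = re.findall(r"[(,]\s*([A-Za-z_]\w*)\s*:", newick)
    table = {name: str(i) for i, name in enumerate(labels, start=1)}
    numbered = newick
    # Longest names first, so a label that is a prefix of another is safe.
    for name in sorted(table, key=len, reverse=True):
        numbered = re.sub(r"(?<=[(,])%s(?=:)" % re.escape(name),
                          table[name], numbered)
    rows = ",\n".join("\t\t%s %s" % (num, name)
                      for name, num in table.items())
    return ("#NEXUS\nbegin trees;\n\ttranslate\n%s;\n"
            "\ttree %s = %s\nend;\n" % (rows, tree_name, numbered))

print(newick_to_translated_nexus(
    "((LevantinaFC4779:2.41,Assyriella_naegelei:2.41):5.75,Cn_IG_1:8.16);"))
```

For anything beyond this (annotated trees, quoted labels), the R route via ape and phytools is the safer bet.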
A solution could be coded from the taxa block if the above doesn't work, but it's easier to use R's ape and phytools. | {
"domain": "bioinformatics.stackexchange",
"id": 2587,
"tags": "phylogenetics, phylogeny, file-formats"
} |
can this php function be written to perform better? | Question: I have this function that takes a long time to complete. Is there a way I can improve it to speed it up?
function do_updatebonus() {
global $site_config;
$res200 = SQL_Query_exec("SELECT DISTINCT userid FROM peers WHERE seeder = 'yes'");
while ($row200 = mysql_fetch_assoc($res200)) {
$userid = $row200["userid"];
$res201 = SQL_Query_exec("SELECT COUNT(torrent) FROM peers WHERE seeder = 'yes' AND userid = $userid");
$c = mysql_result($res201, 0);
if ($c >= 5) {
SQL_Query_exec("UPDATE users SET seedbonus = seedbonus + '" . $site_config["bonuspertime"] . "' WHERE id = $userid");
}
}
}
Answer: Yeah, you can optimize your query,
$res201 = SQL_Query_exec("SELECT COUNT(torrent) FROM peers WHERE seeder = 'yes' AND userid = $userid");
This query will first check for seeder = 'yes' and then for userid. So you can first check for userid, then for seeder. This will improve performance:
$res201 = SQL_Query_exec("SELECT COUNT(torrent) FROM peers WHERE userid = $userid AND seeder = 'yes'"); | {
"domain": "codereview.stackexchange",
"id": 1861,
"tags": "php"
} |
launch file error: cannot launch node of type [map_server/map_server] | Question:
I created a launch file that contains:
<launch>
<param name="/use_sim_time" value="true"/>
<node pkg="map_server" type="map_server" name="map_server_node" args="/home/maysam/fuerte_workspace/sandbox/mit/map.yaml"/>
</launch>
But the output is:
ERROR: cannot launch node of type [map_server/map_server]: can't locate node [map_server] in package [map_server]
I can run the node manually and successfully:
rosrun map_server map_server /home/maysam/fuerte_workspace/sandbox/mit/map.yaml
Originally posted by maysamsh on ROS Answers with karma: 139 on 2014-02-05
Post score: 2
Original comments
Comment by bvbdort on 2014-02-05:
what command you are using to run launch file ?
Comment by maysamsh on 2014-02-06:
roslaunch myfile.launch
Comment by dido_yf on 2014-12-18:
hello, I have exactly the same problem with you. Did you solve this problem? Could you tell me how?
Answer:
Try this way: http://answers.ros.org/question/9379/roslaunch-and-map_server/?answer=13602#post-id-13602
Originally posted by bvbdort with karma: 3034 on 2014-02-06
This answer was ACCEPTED on the original site
Post score: 0 | {
"domain": "robotics.stackexchange",
"id": 16890,
"tags": "roslaunch"
} |
Building intuition for the AC Stark shift | Question: The AC Stark shift is a pretty ubiquitous phenomenon in atomic physics that describes the shift of energy levels in the presence of a periodic drive (usually it's due to laser light).
In the common perturbative regime, one can approximate the AC Stark shift with second order perturbation theory. This procedure gives you terms like $$\Delta \omega \sim \frac{\Omega^2}{\Delta}$$ where $\Omega$ is the Rabi frequency ($\propto d\cdot E$) and $\Delta$ is the detuning from the atomic transition.
Some of the major lessons of the AC Stark shift are
Drives with frequency smaller than the transition frequency will cause the ground state to lower in energy and the excited state to increase. The opposite occurs for drives faster than the transition frequency
The AC Stark shift is linear in power (second order in electric field)
The net shift is a total sum over terms like $\Omega^2/\Delta$ for all excited states accessible from the ground state.
Over the years I've grown to accept the math, but recently I've found it a bit difficult to convey these concepts intuitively. I tried explaining it to a student in terms of driving a harmonic oscillator with detuning, but it wasn't straightforward to explain the typical effects associated with the AC Stark shift.
My question is what are some intuitive and valuable ways of understanding the AC Stark shift in general?
Answer: I am partial to a hand-wavy picture that is heavily inspired by Brian Skinner's exposition of Raman scattering (go there for nice animations of that related process).
So, imagine that an applied AC electric field induces an electric dipole in the electron. The dipole energy has the usual form of $U=-\vec{p}\cdot\vec{E}$, but in this case the dipole moment is itself proportional to the electric field, since it is an induced dipole, leading to an energy proportional to $|\vec{E}|^2$.
The induced dipole has a natural frequency, which is the atomic transition. In the case of the AC Stark shift, the external field also has an associated frequency. Just like a driven harmonic system, when you drive the dipole very slowly it can move smoothly with the changing field, and the two stay aligned. This leads to a negative energy as seen in the expression above. When you drive it above the natural frequency it cannot keep up, and ends up oscillating out of phase with the applied field, leading to a positive overall sign for $U$.
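The driven-oscillator phase picture can be made quantitative in a few lines. For a damped oscillator driven at frequency $\omega$, the induced dipole lags the field by $\phi = \arctan\!\big(\gamma\omega/(\omega_0^2-\omega^2)\big)$, and the cycle-averaged energy $U = -\langle pE\rangle \propto -\cos\phi$. A sketch (the damping value is arbitrary):

```python
import math

def stark_energy_sign(w, w0=1.0, g=0.05):
    # Steady state of x'' + g*x' + w0^2 x = (q/m) E0 cos(w t) is
    # x(t) = A cos(w t - phi) with phase lag phi = atan2(g*w, w0^2 - w^2).
    # The cycle-averaged dipole energy U = -<p E> is proportional to -cos(phi).
    phi = math.atan2(g * w, w0**2 - w**2)
    return -math.cos(phi)

print(stark_energy_sign(0.5))   # driving below resonance: U < 0 (pushed down)
print(stark_energy_sign(1.5))   # driving above resonance: U > 0 (pushed up)
```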
That gives you properties 1) and 2). I am not as troubled by property 3) but I suppose I would appeal to superposition and Fourier decomposition to try to argue that once you've analyzed this for one detuning from resonance, you can extend it to others. | {
"domain": "physics.stackexchange",
"id": 95482,
"tags": "quantum-mechanics, atomic-physics"
} |
What is the moment of inertia of this object? | Question: I have the following task, and am not sure what I am doing wrong. Help would be truly appreciated!
Task:
A ping-pong ball has a mass $m$ and a radius $r$. Assume that one ping-pong ball can be modeled as a hollow sphere, with $I = \tfrac{2}{3}mr^2$. Three ping-pong balls are glued together in a line, as in the image below.
What is the moment of inertia through the y-axis?
How I tried to solve it:
I figured I should use the parallel-axis theorem $I_r = I_0 + Mr^2$.
$I_0 = \tfrac{2}{3}mr^2 \cdot 3 = 2mr^2$
Also, $Mr^2 = 3m\,(2r)^2 = 12mr^2$. Adding these together I get $2mr^2 + 12mr^2 = 14mr^2$.
However, the right answer is $22mr^2$. What am I doing wrong?
Answer: The error is in calculating $I_0$: the y-axis passes through the center of only one of the three balls, so you have to use the parallel-axis theorem for the other balls as well. Applying it ball by ball, $I = \sum_i \left(\tfrac{2}{3}mr^2 + m d_i^2\right)$ with $d_i = 0, 2r, 4r$, which gives $2mr^2 + (4 + 16)mr^2 = 22mr^2$. | {
"domain": "physics.stackexchange",
"id": 42516,
"tags": "homework-and-exercises, rotational-dynamics, moment-of-inertia"
} |
1-D discrete cosine transform(DCT) | Question: This was a past year paper question. Not sure how to answer it.
Question: The 1-D discrete cosine transform (DCT) of a sequence $f(x)$, $x = 0,1,\dots,N-1$, is
$$F(u) = c(u)\cdot\sum_{x=0}^{N-1} f(x)\cos\bigl((2x+1)u\cdot\tfrac{\pi}{2N}\bigr)$$
show that 1-D DCT of the sequence
$g(x) = f(N-1-x),\qquad x = 0,1,\dots,N-1$
can be expressed as $G(u) = (-1)\cdot F(u)$, $u = 0,1,\dots,N-1$
My solution (If wrong please correct me)
$$G(u) = c(u)\cdot\sum_{x=0}^{N-1} f(N-1-x)\cos\bigl((2(N-1-x)+1)u\cdot\tfrac{\pi}{2N}\bigr)$$
$$\;= c(u)\cdot\sum_{x=0}^{N-1} f(N-1-x)\cos\bigl((2N-2x-1)u\cdot\tfrac{\pi}{2N}\bigr)$$
Answer: If $g(x)= f(N-1-x)$ then
$$\begin{aligned}
(\operatorname{DCT}\:g)(u)
=& c(u)\cdot\sum_{x=0}^{N-1}g(x)\cdot\cos\bigl((2x+1)u\cdot\tfrac\pi{2N}\bigr)
\\=& c(u)\cdot\sum_{x=0}^{N-1}f(N-1-x)
\cdot\cos\bigl((2N-2N+1+2x)u\cdot\tfrac\pi{2N}\bigr)
\\=& c(u)\cdot\sum_{(N-1-x)=(N-1)}^{0}f(N-1-x)
\cdot\cos\Bigl(\bigl(2N-1-2(N-1-x)\bigr)u\cdot\tfrac\pi{2N}\Bigr)
\end{aligned}$$
substitute now $N-1-x=:\xi$,
$$\begin{aligned}
(\operatorname{DCT}\:g)(u)
=& c(u)\cdot\sum_{\xi=0}^{N-1}f(\xi)
\cdot\cos\Bigl(\bigl(2N-1-2\xi\bigr)u\cdot\tfrac\pi{2N}\Bigr)
\\=& c(u)\cdot\sum_{\xi=0}^{N-1}f(\xi)
\cdot\cos\Bigl(\pi\bigl(\tfrac{2N}{2N}-\tfrac{2\xi+1}{2N}\bigr)u\Bigr).
\end{aligned}$$
In the cosine, we now have a shift by an integer multiple ($u$) of $\pi$. Such a shift in a sine or cosine function is equivalent to multiplying the function by $(-1)^u$, so the result is
$$\begin{aligned}
(\operatorname{DCT}\:g)(u)
=& c(u)\cdot\sum_{\xi=0}^{N-1}f(\xi)
\cdot\cos\bigl(\pi\tfrac{2\xi+1}{2N}u\bigr)\cdot(-1)^u
\\=& (-1)^u\cdot(\operatorname{DCT}\:f)(u).
\end{aligned}$$
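The identity is easy to check numerically; the normalisation $c(u)$ is dropped in the sketch below, since it is common to $F$ and $G$:

```python
import numpy as np

def dct_unnormalised(f):
    # Direct evaluation of F(u) = sum_x f(x) cos((2x+1) u pi / (2N)); the
    # normalisation c(u) is omitted because it cancels in the comparison.
    N = len(f)
    x = np.arange(N)
    u = np.arange(N).reshape(-1, 1)
    return (f * np.cos((2 * x + 1) * u * np.pi / (2 * N))).sum(axis=1)

rng = np.random.default_rng(0)
f = rng.standard_normal(8)
F = dct_unnormalised(f)
G = dct_unnormalised(f[::-1])            # g(x) = f(N - 1 - x)
assert np.allclose(G, (-1.0) ** np.arange(8) * F)
print("verified: G(u) = (-1)^u F(u)")
```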
Not quite the expression you wanted to show, but this should be the correct one. | {
"domain": "dsp.stackexchange",
"id": 82,
"tags": "dct, homework"
} |
Would Birkhoff's Theorem also apply for a field with negative energy density? | Question: Birkhoff's Theorem says that any spherically symmetric solution to the field equations must be the Schwarzschild metric. I'm wondering whether that could be generalized to a field with negative energy density?
Answer: No, it wouldn't. Birkhoff's Theorem states not that any spherically symmetric solution to the field equations must be the Schwarzschild solution, but rather that any spherically symmetric solution to the vacuum field equations must be the Schwarzschild solution. If there is any non-zero energy density in spacetime and the Einstein Field Equations are assumed to hold, then that spacetime can't be Schwarzschild spacetime.
Notice that the Schwarzschild metric has $R_{\mu\nu} = 0$, and hence $R = 0$. As a consequence, the Einstein Equations lead to
$$\begin{align}
R_{\mu\nu} - \frac{1}{2} R g_{\mu\nu} &= 8 \pi T_{\mu\nu}, \\
0 &= 8 \pi T_{\mu\nu}, \\
T_{\mu\nu} &= 0.
\end{align}$$
If the stress-energy tensor is anything other than identically zero (including if one considers a non-vanishing cosmological constant), then the Schwarzschild metric is not a solution to the Einstein equations on that region where $T_{\mu\nu} \neq 0$. | {
"domain": "physics.stackexchange",
"id": 84332,
"tags": "general-relativity, stress-energy-momentum-tensor, cosmological-constant"
} |
Discrete point particles stress energy tensor | Question: I am trying to solve an exercise in Sean Carroll's GR book "Spacetime and Geometry". Basically we need to derive the stress-energy tensor of a perfect fluid (ie $T^{\mu\nu}=(\rho +p)U^{\mu}U^{\nu} + p\eta^{\mu\nu}$) from the stress-energy tensor of a discrete set of particles (ie $T^{\mu\nu}=\sum_a \frac{p^{\mu}_a p^{\nu}_a}{p^0_a}\delta^{(3)}(\mathbf x - \mathbf x^{(a)})$), under the hypothesis of isotropy.
I managed to get it for the $T^{00}$ component and the $T^{0i}$ components: by replacing $p^{\mu}$ with $p^0$, a trivial sum appears, giving the energy density for the 00-component and the momentum density for the 0i-components (vanishing by isotropy). But I am still struggling with the purely spatial part. I was thinking of substituting the sum by an integral; then the non-diagonal part vanishes by isotropy again. Could it be something like $\sum_a = \int d^3x\, \rho(x)$? Because then I need a relation connecting the density distribution $\rho$ and $p^{\mu}$ to the pressure (more exactly, the definition of the pressure from these two concepts).
Answer: There are two points I wish to highlight.
1) A simple substitution of the sum by an integral would not work and is not justified. Instead, one should switch from microscopic quantities to macroscopic ones by averaging over a 4-volume throughout which interparticle distances and times can be considered small.
The macroscopic stress-energy tensor is then:
$$
{\bf T}^{\mu\nu}=\dfrac{1}{\Delta V_4}\int_{\Delta V_4}T^{\mu\nu}d V=\dfrac{1}{\sqrt{-g} d^3x^i dx^0}\int_{\Delta V_4}T^{\mu\nu} \sqrt{-g} d^3x^i dx^0
$$
Then (a) in $T^{\mu\nu}$ only the delta functions depend on $x$, and (b) the metric determinant $g$ is a macroscopic quantity, constant over the selected volume, so it can also be taken out of the integral. One then arrives at:
$$
{\bf T}^{\mu\nu}=\dfrac{1}{d^3x^i dx^0}\sum_a\dfrac{p_a^\mu p_a^\nu}{p_a^0}\int_{\Delta V_4} \delta^{(3)}({\bf x}-{\bf x}^{(a)}) d^3 x^i dx^0 = \dfrac{1}{d^3x^i}\sum_a\dfrac{p_a^\mu p_a^\nu}{p_a^0}.
$$
In the last expression the sum is taken over the particles whose world lines pass through $\Delta V_4$ (we ignore the fact that some particles could have left or entered the volume through its 3-boundary, as there are far fewer of them than particles inside the volume).
Now the expression ${\bf T}^{\mu\nu}= \dfrac{1}{d^3x^i}\sum_a\dfrac{p_a^\mu p_a^\nu}{p_a^0}$ can be more comfortably treated.
2) The symmetry considerations themselves. Consider the components of the macroscopic tensor:
${\bf T}^{0 0} = \dfrac{1}{d^3x^i}\sum_a p_a^0 \equiv \rho $
${\bf T}^{i 0} = \dfrac{1}{d^3x^i}\sum_a p_a^i $.
As the sum $\sum_a p_a^i$ of 3-vectors is taken over a macroscopic volume, it should yield a macroscopic 3-vector. However, if this vector were not zero, it would violate isotropy, which states that there is no preferred direction. Hence ${\bf T}^{i 0} = 0$.
${\bf T}^{i j} = \dfrac{1}{d^3x^i}\sum_a\dfrac{p_a^i p_a^j}{p_a^0}$. As before, the sum should produce a symmetric macroscopic 3-tensor of second order. But every symmetric 3-tensor is defined by 3 eigenvectors. If the eigenvalues are non-degenerate, there exist 3 preferred directions (the 3 eigenvectors); if the eigenvalues are singly degenerate, there are 2 preferred directions, etc. No preferred direction corresponds to the case where all eigenvalues are equal, that is, when the matrix is proportional to the Kronecker delta. The coefficient of proportionality is the pressure:
${\bf T}^{i j} \equiv P \delta^{ij}$
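This isotropy argument can be checked by direct sampling: for isotropically distributed momenta, the average of $p^i p^j / p^0$ is proportional to $\delta^{ij}$. A sketch for massless particles with unit momenta ($p^0 = |\mathbf p| = 1$):

```python
import numpy as np

rng = np.random.default_rng(1)
v = rng.standard_normal((200_000, 3))
n = v / np.linalg.norm(v, axis=1, keepdims=True)   # isotropic unit momenta
T = n.T @ n / len(n)                               # average of p^i p^j / p^0
# Off-diagonal terms vanish and the diagonal is isotropic: T -> (1/3) delta^ij
print(np.round(T, 3))
```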
Expressing $\delta^{ij}$ as $\eta^{ij}+U^0 U^0$ ($U^i$ is zero by symmetry considerations, and $U^0$ is hence equal to unity), and $T^{00}$ as $\rho U^0 U^0$, one arrives at the final expression for $T$. | {
"domain": "physics.stackexchange",
"id": 84560,
"tags": "homework-and-exercises, general-relativity, point-particles"
} |
A simple lotto game in Python | Question: I have a simple task of creating a simple lotto game to practice Python. I had some trouble with function return values, so I dropped a few functions, maybe because of not understanding scopes.
In my next version I want to try to make it fully function-based, and in the version after that I want to do it following OOP principles.
By posting this code I would like to hear some suggestions about where my logic was wrong, and some PEP 8 comments.
Game rules
Each player is given 15 numbers, in increasing order. This is why I use sorted() on the player's and the computer's lists. There is also a common list holding the numbers from 1 to 90.
A random number is picked, removed from this list, and printed to the players. The player who has this number on their card crosses it out, or otherwise skips their turn. This continues until either the player or the computer crosses out all their numbers, or, for some strange reason, they run out of numbers :) I think in other parts of the world it is also called BINGO, but I might be wrong.
import random
number_pool = list(range(1, 91))
computer_card = random.sample(number_pool, 15)
computer_card_sorted = sorted(computer_card)
player_card = random.sample(number_pool, 15)
player_card_sorted = sorted(player_card)
def display_cards():
print("Computer card:\n")
print(computer_card_sorted)
print("====================================")
print("Player card:\n")
print(player_card_sorted)
def lotto_choice():
choice = random.choice(number_pool)
number_pool.remove(choice)
return choice
def the_game():
while number_pool:
choice = lotto_choice()
print("The random lotto is: " + str(choice))
display_cards()
cross_number = input("Do you want to cross out a number")
cross_number.lower()
if cross_number == "y":
if choice in player_card_sorted:
player_card_sorted.remove(choice)
elif choice in computer_card_sorted:
computer_card_sorted.remove(choice)
if cross_number == "n":
if choice in computer_card_sorted:
computer_card_sorted.remove(choice)
else:
continue
else:
if len(player_card_sorted) == 0:
print("Congratulations Player ! You won")
elif len(computer_card_sorted) == 0:
print("The computer have won, too bad !")
else:
print("It is a tie you both ran out of numbers, very straange !")
the_game()
Answer: For your current purposes, the biggest problem is that you are relying on global variables exclusively. This makes it a lot tougher to understand, maintain, and debug your code. This question actually has some great answers that discuss why this can cause real problems in code:
https://softwareengineering.stackexchange.com/questions/148108/why-is-global-state-so-evil
As a specific example, this bit of code had me confused for a few minutes:
while number_pool:
Because number_pool wasn't used anywhere in your while loop, and so I had no idea why this wasn't just an infinite loop. I eventually realized that it is getting modified inside the lotto_choice method which is called within the while loop. This is pretty much exactly why you don't want to use global state: if any bit of code can modify any variable anywhere, you can quickly lose track of who modifies what and when it happens, and bugs can creep in very quickly. You also will have your own trouble if you come back to the code after a while and find that you no longer remember exactly what gets modified by what functions.
So, always make sure and keep input into functions and output out of functions clear. In the case of lotto_choice you could easily do something like this:
def lotto_choice( number_pool ):
choice = random.choice(number_pool)
number_pool.remove(choice)
return ( choice, number_pool )
Then you would call it like this inside your while loop:
( choice, number_pool ) = lotto_choice( number_pool )
Now you can easily see where number_pool gets modified. Since you are just now getting started, now is the time to establish good habits. So take the time and understand how variable scope works and (especially in a learning exercise like this) make sure not to use any global state. My biggest suggestion is to fix that and then post a new code review question with your updated code.
Also, small change. You want to only execute your game if called in the 'main' context, like this:
if __name__ == '__main__':
the_game() | {
"domain": "codereview.stackexchange",
"id": 26476,
"tags": "python, beginner, python-3.x, random"
} |
Would a double gyroscope still have gyroscopic properties? | Question: Imagine you have a gyroscope that has two spinning parts, one on top of the other. When you pull the string, the two halves spin in opposite directions. Would this cancel out the gyroscopic properties, or double them?
Answer: Short answer: it cancels the gyroscopic effect (with caveats).
As long as the system holds together (see below), if the two halves spin with exactly the same magnitude but opposite sign angular momentum, from the point of view of an outside observer, the system behaves like one of zero angular momentum. In particular, it takes negligible torque on the part of an outside observer to rotate the system, and there is no phenomenon of precession or nutation. Indeed this kind of principle is sometimes used in robotics and mechanical engineering to allow high speed rotating components to be manipulated easily.
However: from the standpoint of each rotating component, each requires a torque to change its own angular momentum. Indeed, if you spin the system quickly, you're forcing the angular momentum of the two separate components to change extremely fast. The two spinning components must therefore exert huge torques on one another to achieve this. Rotation of the whole system, although easy for the outside observer, begets huge stresses on the shaft joining the two components. If you set a system like this up and rotate it, you can see the shaft between the components bending slightly at right angles to the plane of rotation, as the massive torque between the components sets up high bending moments in the shaft. This kind of experiment needs to be done with great care, with very lightweight components and with safety glasses on. Systems like this can explode if the joining shaft fails, and whenever the principle is exploited in robotics, the control system imposes very strict limits on the maximum rate of rotation of the system as a whole, if the rotation is in a different plane from that of the two components' angular momenta.
"domain": "physics.stackexchange",
"id": 34533,
"tags": "angular-momentum, gyroscopes"
} |
Does a low-gain free electron laser (FEL) emit coherent radiation? | Question: For the low-gain free electron laser (FEL), an external field is injected into an undulator alongside an electron bunch. Due to phase slippage (because the light is faster than the electrons) the electron and light field are always in resonance. When the electron loses energy to the light field, the fields' amplitude increases. On the other hand, electrons that gain energy reduce the fields' amplitude. To obtain a net gain for the fields' energy, the electrons are injected off-resonance.
What I am wondering; is the emitted radiation in the end coherent?
For only one electron: sure! But the electrons are part of an electron bunch (no microbunching) with a length much larger than the wave-length of the light field. How does that influence the coherence?
Answer: Yes!
The incoherent synchrotron radiation of an electron bunch in an undulator is not what matters. The low-gain FEL is driven by an external field which is coherent. The low-gain FEL's job is to increase the intensity of this field. This amplified field is what becomes the FEL radiation.
Because the external field is intrinsically coherent, an amplification of this field due to the (low-gain) FEL still leads to a coherent output field. | {
"domain": "physics.stackexchange",
"id": 82418,
"tags": "coherence, accelerator-physics, free-electron-lasers"
} |
Is it possible to conserve the total kinetic energy of a system, but not its momentum? | Question: It is possible to conserve momentum without conserving kinetic energy, as in inelastic
collisions. Is it possible to conserve the total kinetic energy of a system, but not its momentum? How?
To clarify, I am not necessarily talking about an isolated system. Is there any scenario which we could devise in which momentum is not conserved but kinetic energy is?
Answer: In order for momentum to be conserved, it must be the case that $$\mathbf F_\text{net}=\frac{\text d\mathbf p}{\text dt}=0$$
In order for kinetic energy to be conserved, it must be the case that
$$\text dK=\text dW_\text{net}=\mathbf F_\text{net}\cdot\text d\mathbf x=0$$
at all instants in time.
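These two conditions can be illustrated numerically with a force that stays perpendicular to the motion at every instant — uniform circular motion. A sketch with assumed values:

```python
import numpy as np

# Uniform circular motion: speed (and hence kinetic energy) is constant,
# but the momentum vector rotates, so momentum is NOT conserved.
m, R, w = 1.0, 2.0, 3.0          # mass, radius, angular rate (assumed units)
t = np.linspace(0.0, 2.0, 1001)

pos = np.stack([R * np.cos(w * t), R * np.sin(w * t)], axis=1)
vel = np.stack([-R * w * np.sin(w * t), R * w * np.cos(w * t)], axis=1)
p = m * vel                                   # momentum at each instant
KE = 0.5 * m * np.sum(vel**2, axis=1)         # kinetic energy at each instant

# The net (centripetal) force is perpendicular to velocity, so it does no work.
force = -m * w**2 * pos
power = np.sum(force * vel, axis=1)           # F . v = dK/dt

print(np.ptp(KE))                     # ~0: kinetic energy never changes
print(np.max(np.abs(power)))          # ~0: the force does no work
print(np.linalg.norm(p[0] - p[250]))  # yet momentum has changed between instants
```

The kinetic energy and the instantaneous power both stay at zero variation (up to rounding), while the momentum vector is manifestly different at different times.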
So, is there a case where the net work done on an object is $0$, yet there is still a net force acting on the object? The answer is yes! We just need $\mathbf F_\text{net}\neq0$ to be perpendicular to the path of the object at all times. A simple example of this is an object undergoing uniform circular motion. The object's kinetic energy is not changing (as its speed remains constant), yet the momentum is constantly changing due to the non-zero net force. | {
"domain": "physics.stackexchange",
"id": 63471,
"tags": "newtonian-mechanics, momentum, energy-conservation, conservation-laws, collision"
} |
Light Ray Trajectory through Periodic Refractive Index | Question: Consider a ray of light travelling between two points A and B on the $xy$ plane. Using the calculus of variations and Fermat's Principle we can derive equations which give the trajectory of a ray of light through a medium of given refractive index $n(y)$ by minimising,
\begin{equation}
T = \int n(y) \sqrt{1+{\Big(\frac{dy}{dx}\Big)}^2}dx
\end{equation}
and using the Euler-Lagrange equations we arrive quickly at an integral expression for the trajectory as
\begin{equation}
x = \int \frac{\alpha\> dy}{\sqrt{n(y)^2-\alpha^2}}
\end{equation}
with $\alpha$ as some arbitrary constant based on initial conditions.
Now consider the refractive index of the medium to be $\> n(y)= n_0 \cos(ky)$.
At this point I can't seem to progress; Is this integral solvable with this continuously varying index? Is this the correct approach or is there a more convenient way to find the trajectory?
Answer: The integral you're trying to solve is
$$
x = \int \frac{\beta}{\sqrt{\cos^2 (ky) - \beta^2}} dy = \int \frac{\beta}{\sqrt{(1 - \beta^2) - \sin^2 (ky)}} dy \\= \frac{\beta}{\sqrt{1 - \beta^2}} \int \frac{dy}{\sqrt{1 - \sin^2 (ky)/(1 - \beta^2)}},
$$
where $\beta \equiv \alpha/n_0$. This can be recognized to be an incomplete elliptic integral of the first kind:
$$
x - x_0 = \frac{\beta}{k\sqrt{1 - \beta^2}}\, F\left( ky \;\middle|\; \frac{1}{1 - \beta^2}\right)
$$
where $x_0$ is the $x$-coordinate when $y = 0$. If desired, this can be inverted to find $y(x)$ in terms of the Jacobi elliptic functions:
$$
y = \frac{1}{k} \sin^{-1} \left(\mathrm{sn}\left( \frac{k\sqrt{1-\beta^2}}{\beta}\,(x - x_0)\right)\right).
$$
In this notation, the use of the parameter $1/(1 - \beta^2)$ in the definition of $\mathrm{sn}$ is understood.
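If you just want the trajectory numerically rather than through the elliptic-function machinery, the integral for $x(y)$ can be evaluated directly by quadrature. A sketch with assumed values for $\beta$ and $k$ (the ray exists only where $\cos^2(ky) > \beta^2$, so it turns around at $|ky| = \arccos\beta$):

```python
import numpy as np

# Numerical sanity check of x(y) = integral of beta / sqrt(cos^2(k y') - beta^2) dy',
# with beta = alpha/n0. Assumed parameter values below.
beta, k = 0.5, 1.0
y_turn = np.arccos(beta) / k           # turning point where the integrand diverges

# Integrate up to just short of the turning point (the singularity is integrable).
y = np.linspace(0.0, 0.999 * y_turn, 20001)
integrand = beta / np.sqrt(np.cos(k * y)**2 - beta**2)
# Cumulative trapezoidal rule gives x at each y.
x = np.concatenate([[0.0],
                    np.cumsum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(y))])

print(x[-1])                   # finite x-advance up to the turning point
print(np.all(np.diff(x) > 0))  # x increases monotonically with y, as it must
```

Past the turning point the ray bends back toward smaller $y$, so the full trajectory oscillates between $\pm \arccos(\beta)/k$ while advancing in $x$ — the periodic channeling one expects from a periodic index.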
In some sense, I haven't really told you how to evaluate this integral; I've just told you that this integral has a name and has been studied (along with its inverse function.) The problem of plotting $y(x)$ (or $x(y)$) and understanding where the trajectories go is still there; and unless you have a completely intuitive understanding of the elliptic functions (I don't), then this is probably unilluminating. Still, it might give you somewhere to go from here; most mathematical handbooks can tell you about the analytical properties of the elliptic integrals and the Jacobi elliptic functions, and a computer language that can do upper-level math (e.g., Mathematica, Maple, MATLAB, etc.) could plot the trajectories out for you. | {
"domain": "physics.stackexchange",
"id": 55740,
"tags": "optics, geometric-optics, variational-calculus"
} |
Possible Error in Assumption - Griffiths Quantum Mechanics | Question: In "Introduction to Quantum Mechanics" by Griffiths, right at the beginning of section 9.1.1 (Time-Dependent Perturbation Theory, The Perturbed System), Griffiths states:
Now suppose we turn on a time-dependent perturbation, $H'(t)$. Since $\psi_a$ and $\psi_b$ constitute a complete set [of the two-level system], the wave function $\Psi (t)$ can still be expressed as a linear combination of them. The only difference is that $c_a$ and $c_b$ are now functions of t:
I don't understand. You modify the Hamiltonian, you modify the solution basis - easy as that. Why on earth does he assume that if you add a time-dependent perturbation to the Hamiltonian the basis (for the two-level system that he considered in the section right before) will remain the same? And if this is indeed a mistake, then how valid is the assumption that the true wave function $\Psi (t)$ is merely a time-dependent linear combination of the two states $\psi_a$ and $\psi_b$?
Answer: A basis is a set of wave functions such that any wave function can be formed as a linear combination of basis wave functions. Often you choose them to be eigenfunctions of the Hamiltonian. But you don't have to.
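Griffiths' expansion can be made concrete numerically: keep the unperturbed basis fixed and let all time dependence live in the coefficients $c_a(t)$, $c_b(t)$, which obey $i\dot{\mathbf c} = H(t)\,\mathbf c$ (with $\hbar = 1$). A sketch with an assumed sinusoidal perturbation:

```python
import numpy as np

# Two-level system expanded in the FIXED unperturbed basis {psi_a, psi_b}.
# Model (assumed): H(t) = H0 + V cos(w t) sigma_x, hbar = 1.
E_a, E_b, V, w = 0.0, 1.0, 0.1, 1.0
H0 = np.diag([E_a, E_b]).astype(complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)

def H(t):
    return H0 + V * np.cos(w * t) * sx

def deriv(t, c):
    # Schrodinger equation for the coefficients: i dc/dt = H(t) c
    return -1j * H(t) @ c

# RK4 integration starting in the lower state psi_a.
c = np.array([1.0 + 0j, 0.0 + 0j])
t, dt = 0.0, 1e-3
for _ in range(20000):
    k1 = deriv(t, c)
    k2 = deriv(t + dt / 2, c + dt / 2 * k1)
    k3 = deriv(t + dt / 2, c + dt / 2 * k2)
    k4 = deriv(t + dt, c + dt * k3)
    c += dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
    t += dt

print(np.abs(c)**2)          # populations |c_a|^2, |c_b|^2
print(np.sum(np.abs(c)**2))  # total probability stays 1
```

The point of the exercise: even though $H(t)$ has no fixed eigenbasis, the pair $(c_a, c_b)$ in the old basis carries the full state at every instant, and probability is conserved.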
If you change the Hamiltonian, you change the eigenfunctions, so you change the most common choice for a basis. | {
"domain": "physics.stackexchange",
"id": 34108,
"tags": "quantum-mechanics, wavefunction, hilbert-space, schroedinger-equation, time-evolution"
} |
CNOT gate broken in 2 different quantum simulators? Or am I wrong? | Question: My understanding is that the control qubit in a controlled-not gate remains unchanged after the controlled-not operation is performed on a target-qubit (so the Pauli-X gate is performed only on the target-qubit, and nothing is done to the control qubit). Such that, if the control-qubit is measured later, it would have the same value as though it had never acted as a control-qubit.
That understanding seems to be supported by most of the materials available, and by, for instance, the first few minutes of this video: http://www.youtube.com/watch?v=rLF-oHaXLtE
However, I find two quantum computer simulators which produce results contrary to my assumption, wherein the control-qubit appears to be modified by the CNOT gate in certain circumstances.
There is a nifty quantum computer simulator here that I suggest you use to see what I'm talking about. It runs in your browser and doesn't need Java or Flash (it's HTML5):
http://www.davyw.com/quantum/
Here is the setup code: On the webpage, select the [Workspace] Menu, then select [Import JSON] and paste in the following code to set up the example circuit:
{"gates":[],"circuit":[{"type":"h","time":0,"targets":[1],"controls":[]},{"type":"h","time":0,"targets":[0],"controls":[]},{"type":"r2","time":1,"targets":[1],"controls":[]},{"type":"x","time":3,"targets":[1],"controls":[0]},{"type":"h","time":9,"targets":[1],"controls":[]},{"type":"h","time":9,"targets":[0],"controls":[]}],"qubits":2,"input":[0,0]}
The results of pseudo-measurement return |00> and |11>. However, shouldn't the first qubit always be 0 when measured, even if it is in fact a control-qubit, since the control-qubit remains unchanged regardless of what the target qubit's value is? The second Hadamard gate should just reverse the first Hadamard gate, meaning whatever value that qubit was initialized with (|0>) should be the same upon measurement (|0>) -- which is what happens when the CNOT gate is not present.
Why are these programs flipping the first qubit, the control-qubit, in some cases?
The second quantum computing simulation program that does the same is: QCAD 2.0 (qcad.sourceforge.jp).
Note: In the webpage simulator linked, you may remove the CNOT gate by right clicking on it, and you will observe that running the simulator (by pressing [ENTER]) produces 0 for all possible measurements of the first qubit as expected, but with a CNOT in place and the first qubit as the control-qubit, placed mid-way on the circuit between the hadamards, the first qubit sometimes measures 1 according to this simulator (press [ENTER] to see the adjusted measurements).
Answer: I have checked out http://www.davyw.com/quantum/
Everything I've seen seems OK to me. What you need to be careful with is the order of the qubits.
The control qubit is not always the first qubit!
The control qubit is the one with the black dot, but the order of the qubits is still top to bottom. So, CNOT transforms |00> and |01> into themselves if the black dot is on the top wire, while mapping |10> to |11> and vice versa. In the case of the black dot being on the bottom wire, |00> and |10> remain intact while |01> and |11> map to each other.
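The two placements of the black dot give two different matrices; here is a small NumPy sketch (basis ordering $|q_\text{top}\, q_\text{bottom}\rangle$, i.e. |00>, |01>, |10>, |11>) tabulating what each CNOT does to the basis states:

```python
import numpy as np

# The two CNOTs in the basis ordering |00>, |01>, |10>, |11>.
I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]])

# Control on the TOP wire: block-diagonal [I, X].
cnot_top = np.block([[I2, np.zeros((2, 2))], [np.zeros((2, 2)), X]])
# Control on the BOTTOM wire: permutes |01> <-> |11>.
cnot_bottom = np.array([[1, 0, 0, 0],
                        [0, 0, 0, 1],
                        [0, 0, 1, 0],
                        [0, 1, 0, 0]])

basis = ['00', '01', '10', '11']
for name, U in [('control on top', cnot_top), ('control on bottom', cnot_bottom)]:
    # Image of each basis state under U (U is a permutation matrix here).
    out = [basis[int(np.argmax(U @ np.eye(4)[:, j]))] for j in range(4)]
    print(name, dict(zip(basis, out)))
# control on top:    00->00, 01->01, 10->11, 11->10
# control on bottom: 00->00, 01->11, 10->10, 11->01
```

Reading off the tables makes the answer's point directly: which states get flipped depends entirely on which wire carries the black dot, not on which qubit is listed first.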
Edit.
The circuit under consideration (per the JSON in the question) is: Hadamards on both qubits, then $R_2$ on the bottom qubit, then a CNOT with its control on the top qubit, then Hadamards on both qubits again.
I think that the reason for your confusion might be that you actually have to look at what happens to the bottom qubit in order to calculate what happens to the top one. For example, applying the $R_2=\mathrm{diag} (1, i)$ matrix to the bottom qubit is represented by the tensor product $I\otimes R_2$ which is equivalent to the 4x4 matrix
$$
\begin{bmatrix}R_{2}\\
& R_{2}
\end{bmatrix}.
$$
Applying the Hadamard gate to both qubits is represented by $H\otimes H$, or the 4x4 matrix
$$
\begin{bmatrix}H & H\\
H & -H
\end{bmatrix}.
$$
Therefore, R2 together with the two Hadamard gates gives $(H\otimes H)(I\otimes R_2)(H\otimes H)=I\otimes HR_2H$ which, again, is represented by the 4x4 matrix
$$
\begin{bmatrix}HR_{2}H\\
& HR_{2}H
\end{bmatrix}.
$$
So, in this case the first qubit indeed doesn't change its state.
At the same time, the CNOT gate cannot be represented by a tensor product of two matrices. In other words, it causes quantum entanglement, hence you have to follow both wires in order to find out what happens to either one of the qubits. The easiest way to do that is to perform a direct calculation of the matrix products.
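Such a direct calculation is easy to do numerically. A NumPy sketch of the circuit acting on |00>, with the same basis ordering (|00>, |01>, |10>, |11>) and the gates applied in the circuit's time order:

```python
import numpy as np

# Circuit: (H on both) -> (R2 on bottom qubit) -> (CNOT, control on top) -> (H on both).
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
R2 = np.diag([1, 1j])
I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]])

HH = np.kron(H, H)
IR2 = np.kron(I2, R2)                 # R2 acts on the bottom qubit
CNOT = np.block([[I2, np.zeros((2, 2))],
                 [np.zeros((2, 2)), X]])  # control on the top wire

U = HH @ CNOT @ IR2 @ HH              # rightmost factor acts first
psi = U @ np.array([1, 0, 0, 0], dtype=complex)

print(np.round(psi, 6))               # ((1+i)|00> + (1-i)|11>)/2
print(np.round(np.abs(psi)**2, 6))    # probabilities: 1/2 on |00>, 1/2 on |11>
```

The output state is $\tfrac12\left((1+i)\left|00\right\rangle + (1-i)\left|11\right\rangle\right)$, so the pseudo-measurement outcomes |00> and |11> reported by the simulator are exactly right.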
Not being able to replace CNOT with a tensor product, we have to write that our circuit is represented by the product (rightmost factor acting first, matching the circuit's time order: Hadamards, then $R_2$, then CNOT, then Hadamards)
$$\begin{bmatrix}H & H\\
H & -H
\end{bmatrix}\begin{bmatrix}I\\
 & X
\end{bmatrix}\begin{bmatrix}R_{2}\\
 & R_{2}
\end{bmatrix}\begin{bmatrix}H & H\\
H & -H
\end{bmatrix}=\frac{1}{2}\begin{bmatrix}1+i & 1-i\\
 & & 1-i & 1+i\\
 & & 1+i & 1-i\\
1-i & 1+i
\end{bmatrix},$$
where the basis vectors are chosen from top to bottom as follows: |00>, |01>, |10>, |11>.
Without the CNOT gate it evaluates to
$$
\begin{bmatrix}HR_{2}H\\
& HR_{2}H
\end{bmatrix}=\frac{1}{2}\begin{bmatrix}1+i & 1-i\\
1-i & 1+i\\
& & 1+i & 1-i\\
& & 1-i & 1+i
\end{bmatrix}.
$$
Hence, an input of |00> leads to
$$
\frac{1}{2}\begin{bmatrix}1+i & 1-i\\
 & & 1-i & 1+i\\
 & & 1+i & 1-i\\
1-i & 1+i
\end{bmatrix}\begin{bmatrix}1\\
0\\
0\\
0
\end{bmatrix}=\frac{1}{2}\begin{bmatrix}1+i\\
0\\
0\\
1-i
\end{bmatrix}=\frac{1}{2}\left((1+i)\left|00\right\rangle +(1-i)\left|11\right\rangle \right)
$$
for the full circuit and
$$
\frac{1}{2}\begin{bmatrix}1+i & 1-i\\
1-i & 1+i\\
& & 1+i & 1-i\\
& & 1-i & 1+i
\end{bmatrix}\begin{bmatrix}1\\
0\\
0\\
0
\end{bmatrix}=\frac{1}{2}\begin{bmatrix}1+i\\
1-i\\
0\\
0
\end{bmatrix}=\frac{1}{2}\left((1+i)\left|00\right\rangle +(1-i)\left|01\right\rangle \right)
$$
for the circuit without the CNOT. Which coincides with what the simulator says. | {
"domain": "physics.stackexchange",
"id": 11301,
"tags": "quantum-mechanics, quantum-information, quantum-computer"
} |
Why is $\Delta U = nC_v \Delta T$ true,intuitively, regardless of the path? | Question: My question differs from these questions:
When is $\Delta U=nC_V \Delta T$ true?
Because here he asks to distinguish between $C_P$ and $C_V$ and not why it's always $C_V$
Work done in adiabatic process
Here the answer is "just because it is always the case"
Is there no mathematical proof for this? What is the real physical intuition for this being the case?
Answer: There are two criteria. First, $c_V=nC_V$ must be constant; second, we must have that $P = f(V) T$ for some function $f$. Both criteria hold in the specific case of an ideal gas, but neither holds for a general thermodynamical system. I'll give the mathematical explanation first, and then the physical explanation second.
Starting from the perspective that $U=U(S,V)$ and $T= \left(\frac{\partial U}{\partial S}\right)_V (S,V)$, note that a small change in $S$ and $V$ will cause small changes
$$dU=\left(\frac{\partial U}{\partial S}\right)_V dS + \left(\frac{\partial U}{\partial V}\right)_S dV \equiv TdS - PdV$$
and
$$dT = \left(\frac{\partial T}{\partial S}\right)_V dS + \left(\frac{\partial T}{\partial V}\right)_SdV$$
Solving the second equation for $dS$ and substituting it in the first equation yields
$$dU = T \frac{1}{\left(\frac{\partial T}{\partial S}\right)_V}dT- \left[T\frac{\left(\frac{\partial T}{\partial V}\right)_S}{\left(\frac{\partial T}{\partial S}\right)_V} + P\right]dV$$
$$= c_V dT+ \left[T\frac{\left(\frac{\partial P}{\partial S}\right)_V}{\left(\frac{\partial T}{\partial S}\right)_V} - P\right]dV$$
Where we've used that $\left(\frac{\partial T}{\partial V}\right)_S = \frac{\partial^2 U}{\partial V\partial S} = -\left(\frac{\partial P}{\partial S}\right)_V$, and that the definition of the specific heat at constant volume is $c_V \equiv T \left(\frac{\partial S}{\partial T}\right)_V$. Finally, note that
$$\frac{\left(\frac{\partial P}{\partial S}\right)_V}{\left(\frac{\partial T}{\partial S}\right)_V} \equiv \left(\frac{\partial P}{\partial T}\right)_V$$
so finally
$$ dU = c_V dT + \left[T\left(\frac{\partial P}{\partial T}\right)_V - P \right]dV$$
Assuming that we are not dealing with variable numbers of particles, what has been written here is completely general, so your question boils down to asking when the second term is zero. The answer is that
$$\left(\frac{\partial P}{\partial T}\right)_V = \frac{P}{T} \implies P = f(V) T$$
for some function $f$ of $V$ alone. If this is the case, then the second term on the right of the preceding equation vanishes, and we have
$$dU = c_V dT \implies \Delta U = \int c_V dT = \int nC_V dT$$
since $C_V$ is the specific heat per mole. If $C_V$ is constant, then this just becomes
$$\Delta U = nC_V \Delta T$$
From a physical standpoint, the answer is that the energy of an ideal gas is purely kinetic - the gas particles do not have any long-range interactions with each other at all. As a result, since the temperature can be shown to be a measure of the average kinetic energy of the ideal gas particles, the internal energy of the system is unaffected by changes in volume, as long as the temperature is fixed.
This would not be the case if the particles attracted each other, for example. Putting such a system in a larger box with the same amount of kinetic energy would result in a larger average spacing, and therefore a less negative potential energy (remember that attractive potential energies are negative). Therefore, larger box $\implies$ more energy, even if the kinetic energy didn't change.
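The criterion $P = f(V)\,T$ can be checked numerically for a concrete equation of state by evaluating the volume term $T\left(\frac{\partial P}{\partial T}\right)_V - P$ with a finite difference. A sketch comparing an ideal gas with a van der Waals gas (constants assumed, roughly nitrogen):

```python
import numpy as np

# Finite-difference check of T (dP/dT)_V - P for two equations of state.
# Ideal gas P = nRT/V: the term vanishes, so dU = n C_V dT on any path.
# Van der Waals gas: the term equals a n^2 / V^2 (attraction contributes to U).
n, R = 1.0, 8.314           # mol, J/(mol K)
a, b = 0.1358, 3.85e-5      # van der Waals constants (assumed, roughly N2)

def P_ideal(T, V):
    return n * R * T / V

def P_vdw(T, V):
    return n * R * T / (V - n * b) - a * n**2 / V**2

T, V, dT = 300.0, 1e-3, 1e-4
for P in (P_ideal, P_vdw):
    dPdT = (P(T + dT, V) - P(T - dT, V)) / (2 * dT)   # (dP/dT)_V, central difference
    print(P.__name__, T * dPdT - P(T, V))
# ideal gas: ~0;  van der Waals: ~ a n^2 / V^2
```

For the ideal gas the volume term is numerically zero, while the van der Waals attraction leaves exactly the residue $a n^2 / V^2$ — the energy cost of spreading mutually attracting particles apart.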
As I showed,
$$ dU = c_V dT + \left[T\left(\frac{\partial P}{\partial T}\right)_V - P \right]dV$$
The first term on the right describes the change in energy due to change in temperature while holding the volume fixed; the second describes the change in energy due to a change in volume while holding temperature fixed. Because of the lack of interaction between gas particles, the second term goes away, leaving only the first, and so
$$dU = c_V dT = nC_V dT$$ | {
"domain": "physics.stackexchange",
"id": 67498,
"tags": "thermodynamics"
} |
If X (an NP-hard problem) is polynomial-time many-one reducible to problem Y, then Y is NP-hard. Why is it the case? | Question: According to this source,
If A is reduced to B and A ∈ class X, then B cannot be easier than X. This reduction is used to show if a problem belongs to NPH – just reduce some known NPH problem to the given problem. This reduction thus gives a lower bound for the complexity of B.
Now consider the following scenario: X is an NP-hard problem. It is polynomial-time many-one reducible to problem Y. This makes Y NP-hard. My question is, it could be possible that X has a better solution like NP or P, which we could not find. Then why should Y be NP-hard? It could be NP or P. If Y turns out to be P (assume) then X becomes P.
Answer: All NP-complete problems are "equally hard", in the sense that if one of them is in P, then all of them are in P. Similarly, if one NP-complete problem can be solved in quasipolynomial time (that is, time $e^{O(\log^C n)}$ for some constant $C$) then all of them can be solved in quasipolynomial time.
Some NP-hard problems are harder than others, but all of them are at least as hard as any NP-complete problem. That is, if some NP-hard problem is in P, then all NP-complete problems are in P. Similarly, if some NP-hard problem can be solved in quasipolynomial time, then the same holds for all NP-complete problems.
However, some NP-hard problems are provably harder than others. For example, the halting problem is NP-hard, but in contrast to all NP-complete problems, it cannot be solved in exponential time; in fact, it cannot be decided at all!
My question is, it could be possible that X has a better solution like NP or P, which we could not find.
There are two problems with this statement. The first is the assumption that NP is a better complexity class than NP-hard. In fact, all NP-complete problems are both NP-hard and in NP (that's the definition of NP-completeness), so it's in fact common for problems to be both. NP-hardness is a lower bound on complexity, whereas being in NP is an upper bound. The two are not contradictory in any way.
The second problem is the assumption that it could be possible that X is in P. As explained above, that would imply that P=NP, which most people consider unlikely. An NP-hard problem cannot be "easy" – it's at least as hard as any problem in NP, and in particular at least as hard as any NP-complete problem.
Then why should Y be NP-hard?
You can prove that Y is NP-hard if you can reduce, in polynomial time, another NP-hard problem X to it. I won't repeat the proof here, since it can be found in textbooks, lecture notes, and even here on Computer Science.
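As a concrete toy illustration of such a reduction (not the general proof): the classic polynomial-time many-one reduction from Independent Set to Clique maps $(G, k)$ to $(\bar G, k)$, because a set of vertices is independent in $G$ exactly when it is a clique in the complement graph $\bar G$.

```python
from itertools import combinations

# INDEPENDENT-SET <=p CLIQUE via graph complementation.
def complement(n, edges):
    """Complement of an n-vertex graph given as a set of frozenset edges."""
    all_pairs = {frozenset(p) for p in combinations(range(n), 2)}
    return all_pairs - edges

def has_clique(n, edges, k):
    """Brute-force CLIQUE decider (exponential time; fine for a tiny demo)."""
    return any(all(frozenset(p) in edges for p in combinations(S, 2))
               for S in combinations(range(n), k))

def has_independent_set(n, edges, k):
    # The reduction: transform the instance in polynomial time,
    # then hand it to the CLIQUE solver.
    return has_clique(n, complement(n, edges), k)

# Tiny demo on the path 0-1-2-3.
edges = {frozenset(e) for e in [(0, 1), (1, 2), (2, 3)]}
print(has_independent_set(4, edges, 2))  # True: {0, 2} is independent
print(has_independent_set(4, edges, 3))  # False: no independent set of size 3
```

The reduction itself (building the complement) runs in polynomial time; only the demo's CLIQUE decider is brute-force, standing in for whatever solver Y has. This is exactly the shape of the argument: a fast solver for Y would immediately yield a fast solver for X.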
If Y turns out to be P (assume) then X becomes P.
That's correct, but there is no contradiction here. This is actually the point of NP-hardness. We don't believe that Y is in P, since that would imply that X, as well as many other problems, are also in P. | {
"domain": "cs.stackexchange",
"id": 14399,
"tags": "complexity-theory, np-hard, np"
} |
Glass properties | Question: This is a question about the properties and strength of glass. I know that glass is an amorphous solid but I am not sure if that is relevant to my question. My question is that if there's no physical cracks or blemishes, is it possible for glass to be damaged? For example, metal can be damaged by a sharp hit or force to it, as it is evident in a dent. For glass however, if a sharp force hits it and there is no evident crack or scratch, is the glass still in the same condition as it was before it got hit. For instance if I had something in contact with the glass that repeatedly struck it, would it over time weaken (ignore scratching)? (I know that with metal, each successive blow will slightly deform it)
Answer: What you're describing is the difference between brittle and ductile behaviour. Most materials show both properties under the appropriate conditions. For example glass becomes ductile as the temperature rises towards the glass transition, while metals become ductile at low temperatures.
A brittle material may be superficially unaffected by a blow, but it is likely that there will be some effect at the atomic scales. If you study the surface with an electron microscope you'll probably find the blow has caused small defects on the surface, and these could nucleate a crack under stress. Alternatively the blow could have caused very small cracks, that again could lead to failure under stress. To what extent this happens depends on the force of the blow.
In metals repeated small deformations can lead to the well known phenomenon of metal fatigue. Analogous processes do happen with brittle materials, though I have to confess this is outside my area of expertise. A quick Google for something like "fatigue in brittle materials" will find lots of related articles like this one. | {
"domain": "physics.stackexchange",
"id": 9312,
"tags": "material-science, amorphous-solids, glass"
} |