| anchor | positive | source |
|---|---|---|
If I evaluate the degrees of freedom and get some number $n$, then how can I know what those $n$ independent coordinates are? | Question: Using $3N-f=d$ we can evaluate the degrees of freedom, or independent coordinates, of a system.
But how can we know which coordinates are actually independent?
(Here $N$ = number of particles, $f$ = number of constraint equations and $d$ = degrees of freedom, i.e., the number of independent coordinates.)
If we take the case of the double Atwood machine, we get 2 dof. So which two coordinates should be said to be independent? $x$ and $y$?
Update:
If I take the case of, "A particle falling under gravity", the dof will be 1. So there should be only one independent coordinate with which we can describe the situation. If I take the fall of the particle in $y$ direction, then that one independent coordinate will be $y$?
Answer: If you have a system of $N$ particles with $f$ equations of constraints, then the effective degrees of freedom reduce from $3N$ to $3N-f$. This means you don't have to worry about all the $3N$ coordinates, but can just focus on the $3N-f$ coordinates to study the dynamics of the system. This idea is based on simple logic, as I will try to explain in terms of circular motion.
Suppose a free particle exhibits a circular motion in the $xy$-plane. The solution for the equation of motion will be in the form
$$z(t)=0$$
$$x(t)^2+y(t)^2=r^2$$
where $x(t)$, $y(t)$ and $z(t)$ are the $x$, $y$ and $z$ coordinates at time $t$ and $r$ is the radius of the circular path, assumed to be a constant. At first sight, a free particle has $3$ degrees of freedom. But the above equation of constraint ($x(t)^2+y(t)^2=r^2$) and the restriction of the particle's motion to a plane ($z=0$) reduce the degrees of freedom. Hence there is in effect $3-2=1$ degree of freedom of the system.
This means you only need one of the coordinates, $x(t)$ or $y(t)$, to describe the mechanics of the system. The choice is yours. If you choose $x(t)$ as the independent coordinate, then since $r$ is a constant, once you fix a value of $x(t)$, the value of $y(t)$ gets fixed automatically because of the constraint equation. In other words, $y(t)$ depends on $x(t)$ by the above equation.
Now, you may ask: if you change the coordinate system from Cartesian to some other, say spherical polar coordinates, will the dof change? No, it will not. The choice of a coordinate system will not affect the dynamics of the system. The above circular motion in plane polar coordinates can be written as:
Put $x=r\cos\theta$ and $y=r\sin\theta$ in the previous equation and we get
$$\phi(t)=0$$
$$(r\cos\theta)^2+(r\sin\theta)^2=r^2$$
Here, we have $r=\text{constant}$ and hence the only variable that changes with time is $\theta(t)$. So there is only one degree of freedom. You choose $\theta(t)$ as your independent coordinate. As you can see, the degree of freedom is still one.
In the case of free fall of a particle, the solution is given by:
$$y(t)=\frac{1}{2}gt^2$$
where $y(t)$ is the position of the object at time $t$. Here, the degree of freedom is one. You only need $y$ to locate the particle at any time $t$.
Update: Degree of freedom of Double Atwood Machine
In the case of a simple Atwood machine, there is only one degree of freedom. Now, you replace one of the masses by another Atwood machine to form a double Atwood machine (or sometimes called compound Atwood machine). Here the system has two degrees of freedom:
1. one is the freedom of mass $1$ (and the attached movable pulley) to move up and down about the fixed pulley, and
2. one is the freedom of mass $2$ (and the attached mass $3$) to move up and down about the movable pulley
How to do this in terms of constraints?
To describe the configuration of the system, we need 3 coordinates each for masses $m_1$, $m_2$ and $m_3$ and another three for the movable pulley, i.e., a total of 12 coordinates. But the constraints reduce this count. There are $10$ constraints present here:
$8$ of which limit the motion of all the bodies to a single direction (which I am taking to be $x$ here);
the remaining $2$ are given by
$(x_p+x_1)=l$ and $(x_2-x_p)+(x_3-x_p)=l'$
where $x_p$ and $x_i$ are the vertical positions of the pulley and the masses $m_i$ respectively.
Hence the effective degree of freedom of the system reduces to $2$.
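This bookkeeping can be checked with a small numerical sketch (made-up string lengths and variable names of my own): once the two string constraints are imposed, choosing the two independent coordinates $x_1$ and $x_2$ fixes every other vertical position.

```python
# Illustrative check with made-up string lengths l and l'.
l, l_prime = 2.0, 1.0

def positions(x1, x2):
    """Recover all four vertical positions from the two free coordinates."""
    x_p = l - x1                     # from (x_p + x_1) = l
    x3 = l_prime - (x2 - x_p) + x_p  # from (x_2 - x_p) + (x_3 - x_p) = l'
    return x1, x2, x3, x_p

x1, x2, x3, x_p = positions(0.5, 1.2)
print(x_p + x1, (x2 - x_p) + (x3 - x_p))  # l and l' back again (up to rounding)
```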
In a simple way, one degree of freedom is that of the mass $m_1$ and the other is that due to the mass $m_2$. As you can see from the figure, their motions are independent of each other. Hence $x$ and $x'$ are the independent coordinates here. | {
"domain": "physics.stackexchange",
"id": 34659,
"tags": "classical-mechanics, coordinate-systems, constrained-dynamics, degrees-of-freedom"
} |
Catkin_make error | Question:
Hello,
I'm having a problem with catkin_make. When I run it I get
-- Using these message generators: gencpp;genlisp;genpy
-- tum_ardrone: 1 messages, 5 services
-- Configuring done
CMake Error at rosberry_pichopter/CMakeLists.txt:94 (add_library):
Cannot find source file:
src/rosberry_pichopter/joy/src/servo.cpp
Tried extensions .c .C .c++ .cc .cpp .cxx .m .M .mm .h .hh .h++ .hm .hpp
.hxx .in .txx
-- Build files have been written to: /home/donni/catkin_ws/build
make: *** [cmake_check_build_system] Error 1
Invoking "make cmake_check_build_system" failed
With a bunch of stuff before it. I don't know why I'm getting the error, because servo.cpp is in catkin_ws/src/rosberry_pichopter/joy/src/servo.cpp.
I'm using hydro.
Originally posted by dshimano on ROS Answers with karma: 129 on 2014-09-16
Post score: 0
Answer:
The error:
CMake Error at rosberry_pichopter/CMakeLists.txt:94 (add_library):
Cannot find source file:
means that CMake looked for the file and did not find it. It looks for the file relative to the base directory of your package (not the base directory of your catkin workspace!). I.e., the error message is fully consistent with what you mentioned:
Your source file is placed in
catkin_ws/src/rosberry_pichopter/joy/src/servo.cpp
CMake looks for the file in
catkin_ws/src/rosberry_pichopter/src/rosberry_pichopter/joy/src/servo.cpp
The reason, I guess, is that you a have an add_executable or add_library in your CMakeLists.txt which looks like:
add_executable( your_target src/rosberry_pichopter/joy/src/servo.cpp )
while it should rather look like:
add_executable( your_target joy/src/servo.cpp )
Originally posted by Wolf with karma: 7555 on 2014-09-17
This answer was ACCEPTED on the original site
Post score: 1 | {
"domain": "robotics.stackexchange",
"id": 19424,
"tags": "catkin-make"
} |
Do all phases of ice look the same visually? | Question: I recently read about different phases of ice on Wikipedia. But I can't find any pictures of the different ice phases. Do they all look alike visually? If you weren't able to measure the pressure and temperature, would you be able to tell one phase from another in any way?
Answer: I don't know of any experimental results on the optical properties of ice at high pressures. I'm sure they must exist, but I couldn't find any relevant publications.
However there has been lots of work on theoretical calculations of the optical properties. See for example Ab initio investigation of optical properties of high-pressure phases of ice and Blueshifting the Onset of Optical UV Absorption for Water under Pressure. The results of these calculations are that the optical properties do not change in the optical spectrum, though you get big changes in the UV spectra.
So assuming you trust the calculations the answer is that all the different phases of ice look the same to the eye. All the phases are clear, and the refractive index doesn't change (much) so you wouldn't see a difference in the sparkle. | {
"domain": "physics.stackexchange",
"id": 13666,
"tags": "water, phase-transition, ice"
} |
Heat and work are not state functions of the system. Why? | Question:
Heat and work, unlike temperature, pressure, and volume, are not intrinsic properties of a system. They have meaning only as they describe the transfer of energy into or out of a system.
This is the extract from Halliday & Resnick.
My chem book writes:
Heat & work are the forms of energy in transit. They appear only when there occurs any change in the state of system and the surroundings. They don't exist before or after the change of the state.
So, heat energy is dependent on the path or the way the system changes, right? So, are they saying, for one path connecting two states, more heat energy can be liberated while for another path, less heat is released? How? For the same two states, how can there be a different amount of heat energy liberated? Is there any intuitive example to understand this?
Answer: Think of it like this. When you have an object of mass $m$ which is held a height $h$ above some reference point, you think of it as having potential energy (considering only gravitational interactions) $U= m g h$, and gravity will exert an amount of work $W_g = m g h$ on the object. When you drop the object, it shall fall towards the ground, towards “equilibrium”, so to speak. You do not speak of the amount of “work” that the mass has when at its original height, nor of the amount of “work” lost, but of its energy (relative to a reference point) at any given state, $U$. Moreover, we say that this potential energy is a state function because it depends only on the initial and final heights of the mass in question.
In the same way, one does not concern oneself with the amount of “heat” that an object has, since it is merely a term used to denote the amount of transferred energy between systems as they move in and out of equilibria. We speak of thermal energy, internal energy, free energies and such that are state functions of the system - in exactly the same way that the gravitational potential energy $U$ was in the mechanical analogue to this thermodynamical case. In the same way, we say that the thermal energy of the system is a state function insofar that it generally depends (more or less) on the initial and final temperatures and thermodynamic quantities of the mass in question.
Edit: I’ve reread your question and I want to make another point to clear things up. Yes, indeed, different paths can result in different amounts of heat transfer - the first law of thermodynamics states:
$\delta E = Q + W$,
wherein $Q$ is the amount of heat flow into the system, $W$ is the work done on the system, and $\delta E$ is the total internal-energy change of the system (a state function). One can see that one can input, say, 100 J of heat and do no work on a system to produce a net change $\delta E$ of 100 J, and in the same way, one can divide that 100 J amongst $W$ and $Q$ to get the same effect.
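The path dependence can be made concrete with a small ideal-gas sketch (all numbers made up, taking $nR=1$ and a monatomic gas so that $E=\frac{3}{2}nRT$): two different paths between the same pair of states give the same $\delta E$ but different $Q$.

```python
import math

# Two paths between the SAME states A -> B: same temperature, doubled volume.
nR = 1.0
T_A, V_A = 300.0, 1.0
T_B, V_B = 300.0, 2.0
dE = 1.5 * nR * (T_B - T_A)   # state function: zero for every path A -> B

# Path 1: reversible isothermal expansion (work done ON the gas is negative).
W1 = -nR * T_A * math.log(V_B / V_A)
Q1 = dE - W1                  # first law: dE = Q + W

# Path 2: free expansion into vacuum (no work done, no heat exchanged).
W2 = 0.0
Q2 = dE - W2

print(dE, Q1 > 0.0, Q2)       # same dE = 0.0, but Q1 > 0 while Q2 = 0
```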
The intuition is as follows. Imagine you have a jar of gas. You can increase the temperature (and so impart a positive $\delta E$) by adding $100 J$ of heat, or you may compress it by doing $100 J$ of work to gain the same effect. I hope that clears things up! | {
"domain": "physics.stackexchange",
"id": 25480,
"tags": "thermodynamics"
} |
Is a carbon chiral if two of its groups are cis-trans isomers? | Question: For example:
In this image, the two groups sticking to a carbon are cis-trans isomers. So does that make the carbon chiral, exhibiting optical properties?
What if the two groups both had the same configuration (both cis or both trans)? Would it then be achiral?
Thanks for any help :D
Answer: Any time all four groups bonded to carbon are distinct, including any sort of isomeric groups, you have a chiral center. Comments associated with the related question referenced by @Loong (since deleted, unfortunately) indicate that the cis vs trans scenario is even included in the $R$ vs. $S$ naming convention. This case is an $S$ configuration as the cis group takes second place and is counterclockwise from the hydroxyl group in the appropriate view. | {
"domain": "chemistry.stackexchange",
"id": 9048,
"tags": "stereochemistry, chirality, cis-trans-isomerism"
} |
Why do veins undergo vasoconstriction during physical activity? | Question: How would this help increase blood circulation?
Answer: I believe that the answer is in the same cited source (BioSBCC).
Veins are considered a blood reservoir and undergo constriction to mobilize blood.
[...] the veins act somewhat like a blood reservoir, containing 60% of the total blood volume at rest. [...] When the body needs to mobilize more blood for physical activity, the sympathetic nervous system induces vasoconstriction of veins.
The result is an increase in circulating blood volume. This changes cardiac output and arterial blood pressure (CV Physiology: Blood Volume).
I also recommend reading on Biology.SE: Where does extra blood come from to fill your muscles during exercise? | {
"domain": "biology.stackexchange",
"id": 4528,
"tags": "blood-circulation, veins"
} |
Does principal components analysis need standardization or normalization? | Question: Does principal components analysis need standardization or normalization?
After some googling, I am confused. PCA needs the variables to be on the same scale, so which should I use?
Which technique needs to do before PCA?
Does PCA need standardization? (Standardized values have a mean of zero and a standard deviation of one.)
Does PCA need normalization? (Values rescaled to the range zero to one.)
or both ?
Answer: The purpose of PCA is to find directions that maximize the variance. If the variance of one variable is higher than the others', the PCA components are biased toward that variable.
So the best thing to do is to make the variance of all variables the same. One way of doing this is to standardize all the variables.
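Here is a minimal NumPy sketch (synthetic data and a helper function of my own) of why scale matters: without standardization, the first principal component simply aligns with whichever variable has the largest variance.

```python
import numpy as np

rng = np.random.default_rng(0)
# Two uncorrelated variables on very different scales.
X = np.column_stack([
    rng.normal(0.0, 100.0, size=500),  # large-scale variable
    rng.normal(0.0, 1.0, size=500),    # small-scale variable
])

def first_pc(data):
    """First principal component: top eigenvector of the covariance matrix."""
    eigvals, eigvecs = np.linalg.eigh(np.cov(data, rowvar=False))
    return eigvecs[:, np.argmax(eigvals)]

pc_raw = first_pc(X)               # dominated by the large-scale variable

# Standardize: subtract the mean, divide by the standard deviation.
Xs = (X - X.mean(axis=0)) / X.std(axis=0)
pc_std = first_pc(Xs)              # now no variable dominates by scale alone

print(np.round(np.abs(pc_raw), 3))  # close to [1, 0]
```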
Normalization does not make all variables have the same variance. | {
"domain": "datascience.stackexchange",
"id": 8753,
"tags": "pca"
} |
Why will a dropped object land at the same time as a sideways thrown one? | Question: My textbook says that a ball dropped vertically and a ball thrown sideways will not only both land simultaneously but their height will be corresponding for the entire fall, as shown in a diagram which has a ball falling vertically and a ball with an arch landing simultaneously.
This has really struck me as it feels intuitive that the ball dropped vertically would land faster, perhaps due to it traveling a shorter distance.
When I tried to think about it for myself, I came to the thought that perhaps it was due to gravity constantly pulling both of the balls. However, would the sideways velocity not slow its downward speed? For example, wouldn't a fired bullet take longer to land than a dropped bullet, as it is traveling straight?
Answer: You've got it right when you say it's "due to gravity constantly pulling both of the balls" and NOTHING else pulling on the balls. Since gravity only acts in the vertical direction, what the balls are doing in the horizontal direction doesn't matter. Just remember that the thrown ball has to be thrown EXACTLY horizontally, and we are ignoring air resistance.
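A quick numerical check (ignoring air resistance, with $g \approx 9.81\ \mathrm{m/s^2}$ and made-up numbers): the fall time depends only on the drop height, never on the horizontal speed.

```python
import math

g = 9.81   # m/s^2
h = 20.0   # drop height in metres (arbitrary for the demo)

def fall_time(height):
    """Time to fall a height from rest vertically: h = (1/2) g t^2."""
    return math.sqrt(2.0 * height / g)

t = fall_time(h)
# The horizontal velocity changes where the ball lands, never when.
for vx in (0.0, 10.0, 50.0):
    print(f"vx = {vx:5.1f} m/s -> lands at t = {t:.3f} s, x = {vx * t:.1f} m")
```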
Another answer to the title question could be "Why wouldn't they land at the same time?" You guessed that maybe the sideways velocity would slow the downward speed. Nope. That's the point of these types of physics problems. Gravity will affect a horizontally released object the same way as one released when stationary. Same will happen with a bullet. Same for someone running off a cliff vs. walking off. You don't float in the air momentarily like the coyote and the roadrunner! Gravity starts acting immediately. So even a bullet is never "traveling straight". | {
"domain": "physics.stackexchange",
"id": 46641,
"tags": "newtonian-mechanics, newtonian-gravity, kinematics, projectile"
} |
Is it possible to have an atmospheric pressure driven transport without any fuel? | Question: Since atmospheric pressure is very large, we just have to create a vacuum on one side of the transport to make it go to that side with a large force. And the size of the vacuum doesn't even matter: a few millimeters of vacuum would remove the atmosphere's contact with one side of the vehicle. I hope some vacuum-creating devices have been invented by now. Now, those devices have to do work on air to move it by a few millimeters and create a vacuum there. We'll derive the energy required by the device from the kinetic energy of the vehicle itself.
Suppose we create some initial vacuum on top of the vehicle, up to a few meters, by spending some energy. The vehicle will move up and will acquire some kinetic energy. And we'll give some of this kinetic energy to the vacuum-creating device to produce some more vacuum. I hope machines have been invented to convert the kinetic energy of the vehicle into heat to give to the vacuum-creating device. And once the vehicle has acquired some kinetic energy from the initially created vacuum, the vacuum-creating device has to maintain a 1 mm vacuum for the rest of the journey, the energy for which it will take from the kinetic energy of the vehicle itself. And the work done to move air through some millimeters against atmospheric pressure should be less than the kinetic energy of the vehicle if the lower cross-sectional area of the vehicle is large.
And once the vehicle has reached a convenient height, we'll let some air fill the space above it, to the extent that the lower atmospheric pressure balances its weight. And then we'll create a vacuum toward one of its horizontal faces to move it horizontally.
What are the flaws in the idea?
I have used the word 'vacuum' while writing all this, but we don't even need a complete vacuum. Atmospheric pressure is very large, so even removing 50% of the air above should give a good pressure difference.
Answer: It is not possible without any fuel.
Creating a vacuum needs energy. A part of that energy gets transferred to the vehicle as kinetic energy; the remaining part is transferred to the air as kinetic energy, too, or wasted as heat. If you again take kinetic energy from the vehicle to make more vacuum, you will have lost part of the initial energy, making less vacuum each time. You can try to be very efficient and reduce all energy losses, but according to the first law of thermodynamics you will never achieve continuous movement without fuel.
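To see this bookkeeping concretely, here is a toy sketch (all numbers invented): if each create-vacuum-and-recover-energy cycle returns only a fraction $\eta < 1$ of the kinetic energy spent, the vehicle's kinetic energy decays geometrically and the motion dies out.

```python
eta = 0.9      # optimistic per-cycle efficiency (made up)
ke = 1000.0    # initial kinetic energy in joules (made up)
for cycle in range(50):
    ke *= eta  # geometric decay: ke_n = ke_0 * eta**n
print(ke)      # only about 0.5% of the original energy remains after 50 cycles
```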
If you can use fuel it's different: there are devices that move vehicles by creating a relative vacuum. In fact, they lower the pressure on one side of the device and increase it on the other side, thereby getting a differential of pressure (or a differential of vacuum) that pushes the device and the vehicle. The most common of such devices are propellers and wings. | {
"domain": "physics.stackexchange",
"id": 37176,
"tags": "newtonian-mechanics, fluid-dynamics, pressure"
} |
How to visualize sensor_msgs/LaserScan without using RViz? | Question:
Hi all, thank you for reading this post.
I don't get any images in RViz, not even sensor_msgs/Image, just a black screen. I think it's a bug, because I can see them using the image_view package.
Now I want to visualize sensor_msgs/LaserScan without using RViz. Is there any other way to do this?
Any answers would be appreciated.
Originally posted by Rm4n2aa on ROS Answers with karma: 1 on 2014-08-16
Post score: 0
Original comments
Comment by bvbdort on 2014-08-16:
did you select topic for displaying images?
Comment by Rm4n2aa on 2014-08-16:
Hello @bvbdort.
Yes, I did select the topic, and RViz is subscribing to the topic, but it shows just a black screen.
Comment by ahubers on 2014-08-17:
In reference to the black screen, try updating your graphics drivers. This solved it for me (I too had the persistent black screen of death on every open of Rviz).
Answer:
If you add a screenshot of rviz, we can probably help you debug why it isn't working. rviz also has a troubleshooting page that may be useful.
rviz really is the correct tool for visualizing data, because it understands the transform tree and can display different data sources in 3D, relative to one another.
If you just want to visualize your laser data to confirm that your laser is working, I wrote a simple laser viewer many years ago. I stopped using it once I discovered rviz.
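If you do roll your own viewer, the core of it is just converting the LaserScan's polar ranges into Cartesian points. A hedged sketch of that conversion follows (the function name and the plotting suggestion are my own, not from any existing package):

```python
import numpy as np

def scan_to_points(ranges, angle_min, angle_increment, range_max=float("inf")):
    """Convert LaserScan-style polar ranges to Cartesian (x, y) points,
    dropping invalid (inf/NaN or out-of-range) returns."""
    r = np.asarray(ranges, dtype=float)
    angles = angle_min + angle_increment * np.arange(len(r))
    valid = np.isfinite(r) & (r <= range_max)
    return r[valid] * np.cos(angles[valid]), r[valid] * np.sin(angles[valid])

# A fake 3-beam scan: 90 degrees right, straight ahead, 90 degrees left.
x, y = scan_to_points([1.0, 2.0, 3.0],
                      angle_min=-np.pi / 2,
                      angle_increment=np.pi / 2)
# x, y could now be fed to e.g. matplotlib's plt.scatter(x, y)
```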
Originally posted by ahendrix with karma: 47576 on 2014-08-16
This answer was ACCEPTED on the original site
Post score: 0 | {
"domain": "robotics.stackexchange",
"id": 19080,
"tags": "rviz"
} |
Does water have a smell? | Question: I was washing my cup with hot water (without soap) and upon it nearing my nose, there was some sort of 'smell' (I lack a better word).
However the 'smell' was different compared to when I was drinking just plain cold water.
So my question is, does water have a smell?
Answer: Water in its pure form, i.e. $\ce{H2O}$ does not have a smell, or at least no smell that we can distinguish because the receptors in our nose (and mouth) are continuously exposed to it.
What you smell are dissolved gases and other volatile impurities. The nature of these chemicals will vary mainly by location/source of the water, and might also be a bit influenced by your specific local plumbing.
For example, in Iceland near Myvatn there is a lot of volcanic activity and the water from the tap (in particular the warm water because it can dissolve more) smells like rotten eggs due to the sulfur. Another example is that in many countries the water will smell a bit like chlorine, because chlorine is used to keep the water safe to drink (kills germs). | {
"domain": "chemistry.stackexchange",
"id": 3337,
"tags": "everyday-chemistry, water, smell"
} |
Is there a systematic way to derive constraint equations? | Question: There's this problem in Goldstein's (Classical Mechanics) derivations section:
5. Two wheels of radius $a$ are mounted on the ends of a common axle of length $b$ such that the wheels rotate independently. The whole combination rolls without slipping on a plane. Show that there are two nonholonomic equations of constraint,
$$\begin{align}
\cos\theta dx + \sin\theta dy &= 0 \\
\sin\theta dx - \cos\theta dy &= \frac{1}{2}a(d\phi + d\phi'),
\end{align}$$
(where $\theta$, $\phi$, and $\phi'$ have meanings similar to those in the problem of a single vertical disk, and $(x,y)$ are the coordinates of a point on the axle midway between the two wheels) and one holonomic equation of constraint,
$$\theta = C - \frac{a}{b}(\phi - \phi'),$$
where $C$ is a constant.
And here's the image from the problem with a single vertical disk:
Now, I believe I have successfully derived the equations for two of those constraints, but I'll write it anyway, in case my reasoning is somehow wrong or too sloppy. (I use the labels $1$ and $2$ for the wheels, instead of unprimed and primed.)
$$\dot{x} = v \sin{\theta}$$
$$\dot{y} = -v \cos{\theta}$$
$$\implies \color{red}{\cos{\theta} \, dx + \sin{\theta} \, dy = 0}$$
And the second one:
By rotating the wheels about the midpoint $(x,y)$, the angle $\theta$ changes such that $$d \theta = \frac{2}{b} \, dl$$ where $dl$ is the length of the arc swept by both wheels, satisfying $$dl = v_1 \, dt = - v_2 \, dt$$ because the wheels turn with anti-parallel velocities.
$$ dl = v_1 \, dt = a \frac{d \phi_1}{dt} \, dt = a \, d\phi_1$$
$$ dl = -v_2 \, dt = -a \frac{d \phi_2}{dt} \, dt = -a \, d\phi_2$$
$$\implies \color{red}{d\theta = -\frac{a}{b} (d \phi_1 - d \phi_2) },$$
which implies the holonomic constraint equation, with flipped signs.
(I guess I just picked different labels, right?)
How can I get the last one? I don't have much experience with these sorts of problems, so I was wondering, is there a systematic way to approach them or is it always just hacking at the problem, hoping to pull out the constraint equations?
P.S. My question got edited because of policy reasons according to which I cannot ask some questions, so I would like to say that I don't want to know if my reasoning is correct for the derivation of first two constraints. :)
EDIT, PLEASE READ:
Although I answered my own question regarding the specific problem mentioned here, if anyone provides a good answer regarding a systematic way to derive constraint equations, I will accept that answer instead.
Answer: Got it. I found a much better way to solve this problem, which eliminates my wish to confirm my previous reasoning and partially answers the question the moderators' policy forced upon me, which was only a side question to the main thing I wanted to ask, namely help with solving this problem. That's why this answer might look like it's missing the point, but it isn't. Anyway, here's my answer:
The contact points of the wheels with the $xy$ plane have these coordinates for the lower (1) and the upper (2) wheel respectively:
$$(x_1,y_1) = \left(x-\frac{b}{2}\cos{\theta},\, y - \frac{b}{2}\sin{\theta}\right)$$
$$(x_2,y_2) = \left(x+\frac{b}{2}\cos{\theta},\, y + \frac{b}{2}\sin{\theta}\right)$$
Taking the time derivatives yields:
$$(\dot{x_1},\dot{y_1}) = \left(\dot{x}+\frac{b}{2}\dot{\theta}\sin{\theta}, \, \dot{y} - \frac{b}{2}\dot{\theta}\cos{\theta}\right)$$
$$(\dot{x_2},\dot{y_2}) = \left(\dot{x}-\frac{b}{2}\dot{\theta}\sin{\theta}, \, \dot{y} + \frac{b}{2}\dot{\theta}\cos{\theta}\right)$$
Also, we have these relations:
$$(\dot{x_1},\dot{y_1}) = (v_1 \sin{\theta}, -v_1 \cos{\theta}) = (a \dot{\phi_1} \sin{\theta}, -a \dot{\phi_1} \cos{\theta})$$
$$(\dot{x_2},\dot{y_2}) = (v_2 \sin{\theta}, -v_2 \cos{\theta}) = (a \dot{\phi_2} \sin{\theta}, -a \dot{\phi_2} \cos{\theta})$$
From there, eliminating $dt$ and performing simple algebraic manipulations gives:
$$dx = \sin{\theta}\left(-\frac{b}{2} d\theta + a \, d\phi_1\right)$$
$$dx = \sin{\theta}\left(\frac{b}{2} d\theta + a \, d\phi_2\right)$$
$$dy = -\cos{\theta}\left(-\frac{b}{2} d\theta + a \, d\phi_1\right)$$
$$dy = -\cos{\theta}\left(\frac{b}{2} d\theta + a \, d\phi_2\right)$$
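As a numerical sanity check (arbitrary made-up values for $a$, $b$, $\theta$ and the wheel rotations), combining the expressions above reproduces both of Goldstein's nonholonomic constraints:

```python
import math

a, b = 0.3, 1.2             # wheel radius and axle length (made up)
theta = 0.7                 # axle orientation
dphi1, dphi2 = 0.05, 0.02   # infinitesimal wheel rotations

# Equating the two expressions for dx fixes dtheta:
dtheta = (a / b) * (dphi1 - dphi2)

A = -(b / 2) * dtheta + a * dphi1   # common factor in dx and dy
dx = math.sin(theta) * A
dy = -math.cos(theta) * A

c1 = math.cos(theta) * dx + math.sin(theta) * dy   # should vanish
c2 = math.sin(theta) * dx - math.cos(theta) * dy   # should equal a(dphi1+dphi2)/2
print(abs(c1) < 1e-12, abs(c2 - 0.5 * a * (dphi1 + dphi2)) < 1e-12)  # True True
```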
Getting the final three equations of constraint is simply a matter of combining these, but if anyone wants it, I can write out the procedure explicitly. | {
"domain": "physics.stackexchange",
"id": 12846,
"tags": "homework-and-exercises, classical-mechanics, constrained-dynamics"
} |
What is meant by "heads" and "tails" in the context of gene orientation? | Question: I have a hard time understanding what this paper is talking about when it says:
We observed maximal cleavage at sites oriented tail-to-tail and separated by -10 bp to +30 bp (Fig. 2d). Finally, adjacent sites on the same DNA strand (head-to-tail orientation) did not show damage by NHEJ
Then it shows a figure/chart that I don't understand. I don't know what the head and tail of the gene are, in terms of what they are relative to.
Any help would be appreciated.
Thanks.
Answer: The heads and tails, in this paper, refer to the orientation of sgRNA binding sites. If there are two tandem sites in the same orientation then they are referred to as head-to-tail (end of the first site followed by the beginning of the second site). It is also apparent from the excerpt that you have included in your question.
Head-to-head and tail-to-tail refer to the site pairs that are placed in the opposite strands; the former (H2H) refers to a configuration in which the beginnings of the two sites are closer whereas the latter (T2T) refers to the case where the ends of the two sites are closer.
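The geometry can be made concrete with a small sketch (the coordinates and the function are my own illustration, not from the paper): represent each site by a start, an end and a strand; a site's "head" is its 5' beginning on its own strand, and its "tail" is its end.

```python
def classify(site1, site2):
    """Classify the orientation of two sites, each given as (start, end, strand)
    with start < end in genome coordinates and strand '+' or '-'."""
    def head_tail(start, end, strand):
        # On the '+' strand the head is the left edge; on '-' it is the right edge.
        return (start, end) if strand == "+" else (end, start)

    h1, t1 = head_tail(*site1)
    h2, t2 = head_tail(*site2)
    if site1[2] == site2[2]:
        return "head-to-tail"       # same strand: tandem sites
    # Opposite strands: whichever pair of ends is closer names the orientation.
    return "head-to-head" if abs(h1 - h2) < abs(t1 - t2) else "tail-to-tail"

print(classify((100, 120, "+"), (130, 150, "-")))  # tail-to-tail
```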
Regarding your question about the placement of sites within -10 to +30 bp of each other:
I think the point of the paper is that the two cuts by the nickases should be close enough and in the T2T orientation for a maximally disruptive double-strand break. If the distance between the single-stranded cuts (nicks) is increased, then they effectively are just two single-strand breaks, which are repaired efficiently. The authors also conclude that using two nickases in this fashion has lower chances of mistargeting compared to a normal Cas9 (which makes double-strand cuts). | {
"domain": "biology.stackexchange",
"id": 9614,
"tags": "molecular-biology, terminology, crispr, cas9"
} |
How do I compute the amplitude for this QCD diagram? | Question: Studying the following scattering process at a tree-level:
$$\bar{q}^i (p_a) + q^j (p_b) \to \gamma(k_1) + \gamma(k_2)$$
Considering the “reduced” amplitude that is obtained by stripping away the
polarisation vectors:
$$\mathcal{M}_{\bar{q}q \to \gamma \gamma}= \mathcal{M}^{\mu_1 \mu_2}_{\bar{q}q \to \gamma \gamma}\varepsilon^*_{\mu_1} (k_1, \lambda_1)\varepsilon^*_{\mu_2} (k_2, \lambda_2)$$
Compute $\mathcal{M}^{\mu_1 \mu_2}_{\bar{q}q \to \gamma \gamma}$ using the Feynman rule for the quark-photon vertex:
$$V[\bar{\Psi}^i_q, \Psi^j_q, A_\mu ]= -ieQ_q \delta_{ij}\gamma_\mu$$
where i and j denote the colour indices of the quark legs.
I drew the Feynman diagram similar to the one in the link
https://en.wikipedia.org/wiki/Annihilation#/media/File:Mutual_Annihilation_of_a_Positron_Electron_pair.svg
and from it, I wrote the following amplitude:
$$\mathcal{M}^{\mu_1 \mu_2}_{\bar{q}q \to \gamma \gamma} = \epsilon^*_{\gamma}(k_1)(-ieQ\gamma^{k_1})(p_a)\frac{i\delta^{ij}(\require{cancel}\cancel{q}+m_{p_a})}{q^2 -m_q ^2+i\varepsilon}\epsilon^*_{\gamma}(k_2)\bar{v}(-ieQ\gamma^{k_2})(p_b)$$
This question has been set for a QCD module but I believe it only involves QED knowledge, which I have struggled with for a while now.
Is my approach correct? If not, what is the correct answer and how would I obtain that answer?
Answer: This is close, but you need to be extra careful with the spin structure of the diagram. I find it most useful to start at the end of the fermion line, and follow the arrows backwards through the diagram (because we write left to right). I will write this out in pure QED, then after showing what the term should look like I'll show you how to add in color.
An outgoing positron gets $v_{s_1}(p)$, and an incoming positron gets $\bar{v}_{s_1}(p)$. An outgoing electron gets $\bar{u}_{s_1}(p)$, and an incoming electron gets $u_{s_1}(p)$. Similarly, an incoming photon gets $\epsilon_{r_1}^\mu(k)$, and an outgoing photon gets $\epsilon_{r_1}^{*\nu}(k)$. I label the helicity of the photons and spin of the electrons explicitly in the subscripts.
Start with the incoming positron line: this gets a $\bar{v}_{s_1}(p_a)$. Then you get the first vertex, $-ieQ\gamma^{\mu_2}\epsilon^*_{r_2\mu_2}(k_2)$.
Now you get the propagator with the four momentum of $p_a-k_2$ since energy and momentum are conserved at the vertex and the photon carried away momentum $k_2$. This looks like $i\frac{\gamma^\nu(p_{a\nu} - k_{2\nu}) + m_q}{(p_a-k_2)^2 - m_q^2}$
Next you get to the second vertex, which carries $-ieQ\gamma^{\mu_1}\epsilon^*_{r_1\mu_1}(k_1)$. Finally, the end of the line: the incoming electron spinor $u_{s_2}(p_b)$.
This looks like:
$$\mathcal{M}_{q\bar{q}\rightarrow{}\gamma\gamma} = \big(\bar{v}_{s_1}(p_a)\big)\big(-ieQ\gamma^{\mu_2}\epsilon^{*}_{r_2\mu_2}(k_2) \big) \bigg( i\frac{\gamma^\nu(p_{a\nu} - k_{2\nu}) + m_q}{(p_a-k_2)^2 - m_q^2}\bigg)\big(-ieQ\gamma^{\mu_1}\epsilon^*_{r_1\mu_1}(k_1)\big)\big( u_{s_2}(p_b)\big)$$
Now because the polarization vectors commute with everything you can just pull them out of the term,
$$\mathcal{M}_{q\bar{q}\rightarrow{}\gamma\gamma} = \mathcal{M}_{q\bar{q}\rightarrow{}\gamma\gamma}^{\mu_1\mu_2}\epsilon^*_{r_1\mu_1}(k_1)\epsilon^*_{r_2\mu_2}(k_2),$$
where the "reduced amplitude" is,
$$\mathcal{M}_{q\bar{q}\rightarrow{}\gamma\gamma}^{\mu_1\mu_2} = -ie^2Q^2\left \lbrack\bar{v}_{s_1}(p_a)\gamma^{\mu_2} \frac{\gamma^\nu(p_{a\nu} - k_{2\nu}) + m_q}{(p_a-k_2)^2 - m_q^2}\gamma^{\mu_1}u_{s_2}(p_b)\right\rbrack. $$
Now you have the gamma matrices in the correct order, and they are sandwiched between the incoming and outgoing spinors forming the current. If you follow the direction of the current in this way you should be able to get all of the spin structure correct. The basic structure of the current will always be some number of gamma matrices with spinors on both sides (the left spinor will always have a bar over it, the right spinor will not).
You are correct this is a QED diagram, adding color just amounts to labeling your spinors with an $i$ and a $j$, and require it is conserved at each QED vertex, and in propagation. This is exactly what you did in your version.
Note this is only one of two diagrams at this order that contributes to this process, and the correct amplitude is the sum of the two terms. If you want to get the correct answer you need to figure out what the other diagram is, and compute it as well. | {
"domain": "physics.stackexchange",
"id": 64980,
"tags": "quantum-electrodynamics, feynman-diagrams, quantum-chromodynamics"
} |
Which is the best Visual Slam algorithm to implement using stereo vision? Are there any source code available? | Question:
I want to implement visual SLAM using a stereo camera in C/C++. I found papers on SLAM using laser scanners and also cameras, but they are for robots; I need it for cars. Please let me know which algorithm to implement, or whether there is any source code available. I know programming in C/C++ and also OpenCV. Please suggest how to start.
Originally posted by Vijeet on ROS Answers with karma: 11 on 2016-08-17
Post score: 1
Answer:
There are many open-source visual SLAM algorithms. Use this link to check a few implementations. I am not sure what you mean by applying it to cars; theoretically they should work if you are using stereo vision to get data, be it on robots or cars.
I would suggest starting with ORB-SLAM, Open RatSLAM and OKVIS, since they seem to provide better results in most cases. You can of course try other algorithms to compare performance. Some of the algorithms may currently support only a monocular camera but might later release a stereo version, so look out for that. Other algorithms that may be interesting based on your question would be LSD-SLAM, LibViso and PTAM, to list a few. You also mentioned finding some papers on the related topic; it is quite likely that they already have some kind of implementation in place to take a look at.
Also if you are looking for data with cars, KITTI dataset seems to be quite popular.
With my limited experience I would say there is not yet a single best algorithm that can be identified.
Originally posted by apoorv98 with karma: 36 on 2016-10-14
This answer was ACCEPTED on the original site
Post score: 2
Original comments
Comment by Vijeet on 2017-05-23:
Thanks for the information. Its really helpful.
Comment by sunt40 on 2019-02-23:
Could you please tell me how to run openratslam in ubuntu16.04 with ROSkinetic and opencv3.4 ? | {
"domain": "robotics.stackexchange",
"id": 25538,
"tags": "stereo"
} |
Which method to use to remove trend from time series? | Question: From what I understand, differencing is necessary to remove the trend and seasonality of a time series. So I assumed it basically does the same thing as signal.detrend from the scipy library.
But I tried differencing and then, separately, used signal.detrend and my time series looked completely different.
(Charts of the original and the differenced series, and the import statements, omitted.)
The x axis represents months and the y axis is sales. The colours on the first two charts just represent three different years.
Answer: Detrend does a least squares fit (linear or constant) and subtracts this from your data points. You can look this up in the docs.
Simply taking the difference between consecutive data points will in general lead to other results.
In general, the regression-based detrending seems more reasonable. You could also think about using random sample consensus (RANSAC) to be more robust to outliers.
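To make the difference concrete, here is a small sketch in plain NumPy (with `np.polyfit` standing in for the least-squares linear fit that `scipy.signal.detrend` performs internally; the signal is made up for illustration):

```python
import numpy as np

t = np.arange(100.0)
x = 0.5 * t + np.sin(0.3 * t)          # linear trend plus an oscillation

# regression-based detrending: fit a line by least squares and subtract it
slope, intercept = np.polyfit(t, x, 1)
detrended = x - (slope * t + intercept)

# differencing: consecutive differences, one sample shorter
differenced = np.diff(x)

assert detrended.shape == x.shape
assert len(differenced) == len(x) - 1
assert abs(detrended.mean()) < 1e-9          # LS residuals average to zero
assert abs(differenced.mean() - 0.5) < 0.1   # differences hover around the slope
```

The two results live on different scales (the detrended series oscillates around zero, the differenced series around the slope) and even have different lengths, which is why the two charts look completely different.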
"domain": "datascience.stackexchange",
"id": 5755,
"tags": "python, time-series, scipy, difference"
} |
Create share folder and AD security group and apply security to new folder | Question: I think I'm finally done with my code and was hoping for some pointers. My goal for this script was to:
Create new folder
Create AD group FS-TESTSHARE-R
Create AD group FS-TESTSHARE-RW
Apply both groups to the new share folder
Set full read permissions to FS-TESTSHARE-R
Set full read/Rights permissions to FS-TESTSHARE-RW
Set full access permissions for local machine admins and domain admins
Everything seems to be working ok, but I was wondering if there was a cleaner way to do this or if I'm on the right track. Right before posting this I was doing the "FS-$($NAME.replace(' ',''))-RW".toupper() in all of the individual spots where I now have $ADNameRW (and $ADNameR), but I realized that it would work better to create the string first.
Any other tips or suggestions on my creation?
$FileShare = Read-Host -prompt "Verify Fileshare Server Name - (ie ECCOFS01)"
if ( ($FileShare -eq $null) -or ($FileShare -eq "") -or ($FileShare -eq " ") -or ($FileShare -eq "c:") -or ($FileShare -eq "d:"))
{
Write-Host "You entered $FileShare which is an incorrect value: This Script is now Exiting." -Foreground "White" -Background "Red"
Exit
}
else
{
Write-Host "You Entered $FileShare"
}
$Parent = Read-Host -prompt "Enter full parent path that will contain the new folder (ie. \\eccofs01\Groups\ECCO IT\ or C:\Test Share)"
if ( ($Parent -eq $null) -or ($Parent -eq "") -or ($Parent -eq " ") -or ($Parent -eq "c:") -or ($Parent -eq "d:"))
{
Write-Host "You entered $Parent which is an incorrect value: This Script is now Exiting." -Foreground "White" -Background "Red"
Exit
}
else
{
Write-Host "You Entered $Parent"
}
$Name = Read-Host -prompt "Enter New Folder Name."
if ( ($Name -eq $null) -or ($Name -eq "") -or ($Name -eq " "))
{
Write-Host "You entered $Name which is an incorrect value: This Script is now Exiting." -Foreground "White" -Background "Red"
Exit
}
else
{
Write-Host "You Entered $Name"
}
$Path = "$($parent)\$($Name)"
Write-Host "New Folder Path = $Path"
$Country = Read-Host -prompt "Enter the Country OU that the Security Group will reside in (i.e. Global, Americas, Europe, Asia Pacific)."
if ( ($Country -eq $null) -or ($Country -eq "") -or ($Country -eq " "))
{
Write-Host "You entered $Country which is an incorrect value: This Script is now Exiting." -Foreground "White" -Background "Red"
Exit
}
else
{
Write-Host "You Entered $Country"
}
# These do not check for empty values - that can be figured out later
#$Parent = read-host -prompt "Enter full parent path that will contain the new folder (ie. \\eccofs01\Groups\ECCO IT\ or C:\Test Share)"
#$Name = read-host -prompt "Enter New Folder Name."
#$Country = read-host -prompt "Enter the AD Security Group Country (i.e. Global, Americas, Europe, Asia Pacific)"
#$City = read-host -prompt "Enter City name: spelling must be correct. You can also leave this blank to create Security Group in the country-specific OU"
Import-Module ActiveDirectory
$FileShareFull = $($FileShare.replace(' ',''))
$NameFull = ($NAME.replace(' ',''))
$ADNameRW = "FS-$FileShareFull-$Name-RW".toupper()
$ADNameR = "FS-$FileShareFull-$Name-R".toupper()
# Create Security Groups =
# Creates the parameters for the new groups
# Capitalizes and removes spaces from the answer to $Name
# Creates the Security Groups in the appropriate Active Directory OU based off of the answer to $Country
$GroupParams1= @{
'Name' = $ADNameRW
'SamAccountName' = $ADNameRW
'GroupCategory' = "Security"
'GroupScope' = "Global"
'DisplayName' = "$NAME Read-Write Access"
'Path' = "OU=$Country,OU=FILE SHARE GROUPS,OU=Security Groups,DC=esg,DC=intl"
'Description' = "Members of this group have read-write access to $Path."
}
New-ADGroup @GroupParams1
$GroupParams2= @{
'Name' = $ADNameR
'SamAccountName' = $ADNameR
'GroupCategory' = "Security"
'GroupScope' = "Global"
'DisplayName' = "$NAME Read Access"
'Path' = "OU=$Country,OU=FILE SHARE GROUPS,OU=Security Groups,DC=esg,DC=intl"
'Description' = "Members of this group have read access to $Path"
}
New-ADGroup @GroupParams2
# Create New Share Folder
New-Item -Path $Path -ItemType Directory
# Create initial ACE
# Create the initial Object
# Set domain - This could also be changed to prompt for domain if we decide it is needed
# Define local Administrators group by Well Known SID
# Set additional ACEs for the new AD File Share Groups
# Set ACLs on the new folder
function New-Ace {
[CmdletBinding()]
Param(
[Parameter(Mandatory=$true, Position=0)]
[Security.Principal.NTAccount]$Account,
[Parameter(Mandatory=$false, Position=1)]
[Security.AccessControl.FileSystemRights]$Permissions = 'ReadAndExecute',
[Parameter(Mandatory=$false, Position=2)]
[Security.AccessControl.InheritanceFlags]$InheritanceFlags = 'ContainerInherit,ObjectInherit',
[Parameter(Mandatory=$false, Position=3)]
[Security.AccessControl.PropagationFlags]$PropagationFlags = 'None',
[Parameter(Mandatory=$false, Position=4)]
[Security.AccessControl.AccessControlType]$Type = 'Allow'
)
New-Object Security.AccessControl.FileSystemAccessRule(
$Account, $Permissions, $InheritanceFlags, $PropagationFlags, $Type
)
}
$domain = 'ESG.INTL'
$administrators = ([wmi]"Win32_Sid.Sid='S-1-5-32-544'").AccountName
$acl = Get-Acl $path
$administrators, "$domain\Domain Admins" | ForEach-Object {
$acl.AddAccessRule((New-Ace $_ 'FullControl'))
}
$acl.AddAccessRule((New-Ace $ADNameRW 'Modify'))
$acl.AddAccessRule((New-Ace $ADNameR 'ReadAndExecute'))
Set-Acl $path $acl
Answer: Advanced Functions
I would have a serious look at about_Functions_Advanced_Parameters. At the beginning you are doing some parameter validation on Read-Host input. I would consider making this an advanced script capable of accepting parameters, so that you can make a series of calls to it and don't need to rely on Read-Host, which can get tedious if you are going to be running this several times at once.
Another setback is that if a user gets the first 2 prompts correct and makes a mistake on the third, they have to start over again. You could go a couple of different routes, but in its simplest form perhaps you should give users a couple of chances so they have a margin for error. If you used parameters like I mentioned earlier, they could just call the command again and fix the mistake without going through all the prompts.
Input Validation
I see this logic several times, where you are trying to verify that a string is not null or empty space:
($Name -eq $null) -or ($Name -eq "") -or ($Name -eq " ")
The string static method [string]::IsNullOrWhiteSpace() would cover all those bases nicely.
If([string]::IsNullOrWhiteSpace($name)){"Do Stuff"}else{"Do Less Stuff"}
String Concatenation for Creating File Paths
You gather the parent folder and the folder name separately and put them together later. Consider using the [IO.Path] method Combine(), which means you don't have to worry about the presence of a slash in the input from the user.
$Path = [IO.Path]::Combine($parent, $Name)
Continuing on this path, I also see that you ask for the server name and then the full share path anyway (which contains the server name). Instead of asking for them separately, you can cast the string to [uri] to get just the host name without much hassle. That saves asking the user for something twice.
([uri]"\\eccofs01\Groups\ECCO IT\").Host
Split-Path would also help when taking a single string and separating a path from its parent. Just something to keep in the back pocket.
split-path "\\eccofs01\Groups\ECCO IT\" -Parent
\\eccofs01\Groups
split-path "\\eccofs01\Groups\ECCO IT\" -Leaf
ECCO IT
-Parent can be omitted and is used by default.
Maybe a Little Regex Would Help
You are removing spaces using the string method `.Replace()`, which is fine. Depending on your locale, you may run into other white-space characters as well. You can cover them with a simple `-replace '\s'`, which will replace all white-space characters with nothing.
$FileShareFull = $FileShare -replace '\s'
Avoid duplicate code
I read your previous post, where we helped to remove some redundancy; I would like to point out a couple of other areas to consider improving.
$ADNameRW = "FS-$FileShareFull-$Name-RW".toupper()
$ADNameR = "FS-$FileShareFull-$Name-R".toupper()
You are building almost the same string, except that one has a W at the end. Why not make one a basic copy of the other? This way, in the unlikely event you change your naming convention, you only have to edit the one line. THINK OF THE SAVINGS!
$ADNameR = "FS-$FileShareFull-$Name-R".toupper()
$ADNameRW = $ADNameR + "W"
I love splatting and applaud your use of it. In the case of $GroupParams1 and $GroupParams2, both share similar params. Why not move that shared group out and add it to the unique param groups?
$sharedParams = @{
'GroupCategory' = "Security"
'GroupScope' = "Global"
'Path' = "OU=$Country,OU=FILE SHARE GROUPS,OU=Security Groups,DC=esg,DC=intl"
}
$GroupParams1 = $sharedParams + @{
'Name' = $ADNameRW
'SamAccountName' = $ADNameRW
'DisplayName' = "$NAME Read-Write Access"
'Description' = "Members of this group have read-write access to $Path."
}
$GroupParams2= $sharedParams + @{
'Name' = $ADNameR
'SamAccountName' = $ADNameR
'DisplayName' = "$NAME Read Access"
'Description' = "Members of this group have read access to $Path"
}
There are others that are close to being consolidated as well, but making that work would hamper readability, which ultimately is important.
Potentially unwanted output
The New-Item cmdlet returns the directory object that was created to the console as output. That can sometimes be seen as a nuisance. If that is the case, an easy approach is to cast the whole command to [void] or pipe it into Out-Null. Both have the same result.
[void](New-Item -Path $Path -ItemType Directory)
New-Item -Path $Path -ItemType Directory | Out-Null
These suggestions don't all have to be implemented but would be good to at least consider. I do like the overall feel of the code and I have always been procrastinating about doing something very similar. Nice work. | {
"domain": "codereview.stackexchange",
"id": 19504,
"tags": "security, windows, powershell, active-directory"
} |
Reduce-reduce conflict in SLR vs LALR | Question: I was wondering if I could say any of the following is true.
Given a grammar $G$,
If the LALR parser has reduce-reduce conflict for $G$, then the SLR parser also has reduce-reduce conflict for $G$.
If the SLR parser has reduce-reduce conflict for $G$, then the LALR parser also has reduce-reduce conflict.
The LALR parser has reduce-reduce conflict for $G$ if and only if SLR also has reduce-reduce conflict for $G$.
Since SLR $\subset$ LALR, I think point 1 is true. Is this wrong?
Furthermore, based on a few examples that I have come across, it seems like points 2 and 3 are also true. Is this correct? If so, could you please point me to the proof? If not, could you please give me a counterexample?
Answer: Note: I'm assuming that when you wrote $SLR$ and $LALR$ that you actually meant $SLR(1)$ and $LALR(1)$.
It is certainly the case that if a grammar shows a conflict in an $LALR(1)$ automaton, that conflict will also be present in the $SLR(1)$ automaton, because the two automata have the same states and the $LALR(1)$ lookaheads for any production are a subset of the $SLR(1)$ lookaheads.
The remaining two proposals are false, as can be demonstrated by the same counter-example.
The classic example of an $LALR(1)$ grammar which is not $SLR(1)$ (taken directly from the Dragon book) is the abstraction of a grammar which attempts to discriminate between "lvalue" and "rvalue" syntaxes. (Roughly speaking, an "lvalue" is something which can be assigned to, so-named because it can appear on the left-hand side of an assignment operator; an "rvalue" is a value which cannot be assigned to, which means that it can only appear on the right-hand side.)
It's usually convenient to include the production $R\to L$, which says that "lvalues" can also appear on the right-hand side of the assignment operator. But it is precisely this fact which leads to the conflict:
$$\begin{align}S &\to L = R\\
S &\to R\\
L &\to * R\\
L &\to id\\
R &\to L\\
\end{align}$$
As the Dragon book points out, that grammar leads to an $SLR(1)$ conflict in the state reached from $\text{GOTO}(I_0, L)$. That state ($I_2$ in the Dragon book exposition) contains the items $S\to L\;\cdot = R$ and $R\to L\;\cdot$. Since $=$ is in $\text{FOLLOW}(R)$, the state has a shift-reduce conflict. In the $LALR(1)$ automaton, that conflict is resolved.
Since that's a shift-reduce conflict, it doesn't address your question, which concerns reduce-reduce conflicts. But it's easy to modify the grammar slightly in order to turn the shift-reduce conflict into a reduce-reduce conflict. All that's necessary is to introduce a new intermediate non-terminal:
$$\begin{align}S &\to L' = R\\
S &\to R\\
L' &\to L\\
L &\to * R\\
L &\to id\\
R &\to L\\
\end{align}$$
That changes the itemset for state $I_2$ to include the items $\{L'\to L\;\cdot, R\to L\;\cdot\}$, which is now an $SLR(1)$ reduce-reduce conflict, for the same reason that the previous example had a shift-reduce conflict. Also for the same reason, the $LALR(1)$ algorithm resolves that conflict using more precisely-computed lookahead sets.
That disproves your statement 2; since the "if and only if" in statement 3 requires both statement 1 and statement 2 to be true, it also disproves statement 3. | {
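The FOLLOW-set claim behind that reduce-reduce conflict can be checked mechanically. Here is a small sketch (a hand-rolled FOLLOW computation I am adding for illustration; it relies on this grammar having no $\varepsilon$-productions, which keeps FIRST and FOLLOW simple):

```python
# The modified grammar:  S -> L' = R | R ;  L' -> L ;  L -> * R | id ;  R -> L
grammar = {
    "S":  [["L'", "=", "R"], ["R"]],
    "L'": [["L"]],
    "L":  [["*", "R"], ["id"]],
    "R":  [["L"]],
}
NT = set(grammar)

def first(sym):
    # no epsilon-productions, so FIRST is just a union over leading symbols
    if sym not in NT:
        return {sym}
    out = set()
    for prod in grammar[sym]:
        out |= first(prod[0])
    return out

follow = {A: set() for A in NT}
follow["S"].add("$")
changed = True
while changed:                      # standard fixed-point iteration
    changed = False
    for A, prods in grammar.items():
        for prod in prods:
            for i, X in enumerate(prod):
                if X not in NT:
                    continue
                new = first(prod[i + 1]) if i + 1 < len(prod) else follow[A]
                if not new <= follow[X]:
                    follow[X] |= new
                    changed = True

print(sorted(follow["L'"]), sorted(follow["R"]))  # ['='] ['$', '=']
```

Since `=` lands in both $\text{FOLLOW}(L')$ and $\text{FOLLOW}(R)$, the state containing $\{L'\to L\;\cdot,\; R\to L\;\cdot\}$ has an $SLR(1)$ reduce-reduce conflict on `=`, as claimed.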
"domain": "cs.stackexchange",
"id": 19469,
"tags": "context-free, formal-grammars, compilers, parsers, lr-k"
} |
Object Oriented Design of Juke Box | Question: Please review my code for JukeBox
Jukebox plays songs or playlists
Each song has an Artist
For playlist, the first song is played and other songs are added to the queue
Relation between playlist and songs is many-to-many
Relation between Song and Artist is Many-to-one
In particular, I am not sure about the relation between Playlist and Song. Suppose I want to find all the playlists that a particular song is present in. How do I change my design? Please mention other issues too.
public class Song {
String songName;
Artist artist;
public Song(String songName, Artist artist) {
this.songName = songName;
this.artist = artist;
}
public void playSong(){
System.out.println("Playing the song "+songName);
}
public String getSongName() {
return songName;
}
public Artist getArtist() {
return artist;
}
@Override
public String toString() {
return "Song{" +
"songName='" + songName + '\'' +
", artist=" + artist +
'}';
}
}
import java.util.List;
public class PlayList {
String playlistName;
List<Song> songsInPlaylist;
public PlayList(String playlistName,List<Song> songsInPlaylist) {
this.playlistName = playlistName;
this.songsInPlaylist = songsInPlaylist;
}
public void addSong(Song song){
songsInPlaylist.add(song);
}
public String getPlaylistName() {
return playlistName;
}
public List<Song> getSongsInPlaylist() {
return songsInPlaylist;
}
}
import java.util.ArrayList;
import java.util.LinkedList;
import java.util.List;
import java.util.Queue;
public class JukeBox {
List<Song> allSongs;
List<PlayList> allPlayLists;
Queue<Song> currentPlayQueue;
public JukeBox(List<Song> allSongs,List<PlayList> allPlayLists) {
this.allSongs = allSongs;
this.allPlayLists = allPlayLists;
currentPlayQueue = new LinkedList<Song>();
}
//Song related methods
public Song selectSong(String songName){
Song requiredSong = null;
for(Song song : allSongs){
if(song.getSongName().equals(songName)) {
requiredSong = song;
break;
}
}
if(requiredSong == null){
throw new IllegalArgumentException("Provided song not available");
}
return requiredSong;
}
public void playSong(String songName){
Song requiredSong = selectSong(songName);
requiredSong.playSong();
}
public void queueNextSong(String songName){
Song requiredSong = selectSong(songName);
currentPlayQueue.add(requiredSong);
}
//PlayList Related Methods
public PlayList selectPlayList(String playListName){
PlayList requiredPlayList = null;
for(PlayList playList : allPlayLists){
if(playList.getPlaylistName().equals(playListName)){
requiredPlayList = playList;
}
}
if(requiredPlayList == null){
throw new IllegalArgumentException("Provided Playlist not available");
}
return requiredPlayList;
}
public void playPlayList(String playListName){
PlayList playList = selectPlayList(playListName);
List<Song> allSongsInThePlayList = playList.getSongsInPlaylist();
//Play First Song
allSongsInThePlayList.get(0).playSong();
//Add remaining songs to the queue
Song currentSong;
for(int i = 1 ; i < allSongsInThePlayList.size() ; i++){
currentSong = allSongsInThePlayList.get(i);
currentPlayQueue.add(currentSong);
}
}
}
public class Artist {
String artistName;
public Artist(String artistName) {
this.artistName = artistName;
}
@Override
public String toString() {
return "Artist{" +
"artistName='" + artistName + '\'' +
'}';
}
}
import java.util.ArrayList;
import java.util.List;
public class Main {
public static void main(String[] args) {
//In memory database of songs and play lists
List<Song> allSongs = new ArrayList<Song>();
Song song1 = new Song("Roses",new Artist("David"));
Song song2 = new Song("Ride",new Artist("21Pilots"));
allSongs.add(song1);
allSongs.add(song2);
//Playlists
List<PlayList> allPlayLists = new ArrayList<PlayList>();
List<Song> songsInThePlayList = new ArrayList<Song>();
songsInThePlayList.add(song2);
PlayList playList1 = new PlayList("Hello",songsInThePlayList);
allPlayLists.add(playList1);
JukeBox jukeBox = new JukeBox(allSongs,allPlayLists);
jukeBox.playSong("Roses");
jukeBox.playPlayList("Hello");
}
}
Answer: First off, I really like this Juke Box idea as an OO design project, and it's looking quite good.
Write immutable classes when you can
String songName;
Artist artist;
Can become
private String songName;
private Artist artist;
And your Song class is a perfect candidate for an immutable one,
so we can go one step further
private final String songName;
private final Artist artist;
Same goes for the Artist class in terms of immutability.
Naming
In your Song class you have playSong and getSongName methods. These are methods on a Song. there's no real need to specify song in the method name too!
We can simply have
song.play()
song.getName()
It's just as readable (if not more readable) than before!
Don't trust anyone
In your PlayList class, you provide a getter to a mutable object, in this case a list. If you're providing getters for mutable objects like this, you should provide a defensive copy. To make more robust code, it's better to be less trusting of people using your code (including yourself!)
So instead of
return songsInPlaylist;
we can use
return new ArrayList<>(songsInPlaylist)
If you return the member variable songsInPlaylist directly, the caller can do whatever they want with that same instance of the list.
(An alternative is to provide an unmodifiable version of the list. Note that if the objects inside the list are mutable, then you still have things to worry about.)
Similarly, in your PlayList constructor
public PlayList(String playlistName,List<Song> songsInPlaylist) {
this.playlistName = playlistName;
this.songsInPlaylist = songsInPlaylist;
}
If someone passes in that list, they can add/remove stuff from it and it will affect the list in your playlist object! Try this instead
public PlayList(String playlistName,List<Song> songsInPlaylist) {
this.playlistName = playlistName;
this.songsInPlaylist = new ArrayList<>(songsInPlaylist);
}
Now they can do whatever they want with the list they passed in, and it won't affect your object at all!
Efficiency
At the moment your selectSong method has O(n) time complexity, because you need to (worst case) iterate through the entire list to find your song. A better data structure to use here would be a map, which allows you O(1), or constant-time, access.
Instead of your JukeBox being backed by a list, you could have it backed by a HashMap<String, Song>, so you could have O(1) access to any song given its name.
Or maybe a Map<String, PlayList>, or even both.
your selectSong method could look something like this
public Song selectSong(String name){
if(!songMap.containsKey(name)){
throw new IllegalArgumentException("Provided song not available");
}
return songMap.get(name);
}
(we no longer need a getSongName() method now!)
If you had a Map<String, PlayList>, your selectPlayList would be almost identical.
In particular, I am not sure about the relation between Playlist and
Song. Suppose I want to find all the playlists that a particular song
is present in.
If you really wanted to, you could have a Song know about what playlists it's on, but I think that it's perfectly fine to just iterate through the playlists until you find it. I don't think a song should know about what playlist it's on. But a playlist should know what songs are on it.
you could have a method like,
List<PlayList> playLists = jukeBox.getPlaylistsWith(song);
it could maybe look like this
public List<PlayList> getPlaylistsWith(Song song){
List<PlayList> playLists = new ArrayList<>();
for(PlayList playList : playListMap.values()){
if (playList.hasSong(song)){
playLists.add(playList);
}
}
return playLists;
}
(bonus points for using streams instead :D)
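For reference, a stream version might look like the sketch below (a self-contained demo I am adding; the Song/PlayList classes are stripped-down stand-ins for the ones in the post, and `playlistsWith` takes the collection as a parameter instead of reading a field):

```java
import java.util.*;
import java.util.stream.Collectors;

class PlaylistSearchDemo {
    // minimal stand-ins for the post's classes
    static class Song {
        final String name;
        Song(String name) { this.name = name; }
    }

    static class PlayList {
        final String name;
        final List<Song> songs;
        PlayList(String name, List<Song> songs) { this.name = name; this.songs = songs; }
        boolean hasSong(Song song) { return songs.contains(song); }
    }

    // stream version of getPlaylistsWith: filter, then collect
    static List<PlayList> playlistsWith(Collection<PlayList> all, Song song) {
        return all.stream()
                  .filter(playList -> playList.hasSong(song))
                  .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        Song ride = new Song("Ride");
        PlayList hello = new PlayList("Hello", Arrays.asList(ride));
        PlayList empty = new PlayList("Empty", Collections.<Song>emptyList());
        System.out.println(playlistsWith(Arrays.asList(hello, empty), ride).size()); // prints 1
    }
}
```

The stream pipeline says "filter, then collect" directly, so the intent is visible at a glance and there is no mutable accumulator list in your own code.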
Stylistic Choices
I see that in some places you use a for each loop
for(PlayList playList : allPlayLists){
if(playList.getPlaylistName().equals(playListName)){
requriedPlayList = playList;
}
}
And in other places you use
for(int i = 1 ; i < allSongsInThePlayList.size() ; i++){
currentSong = allSongsInThePlayList.get(i);
currentPlayQueue.add(currentSong);
}
I would stick with one (especially in the same project) just for consistency.
My personal preference would be to go with the first one.
Where possible, I would avoid checking for null.
Hopefully this was useful! | {
"domain": "codereview.stackexchange",
"id": 25411,
"tags": "java, object-oriented"
} |
Integral of an upsampled signal, without actually resampling it | Question: Given a signal X sampled at a certain frequency, the value we currently compute is the integral of an upsampled version of it. That is: Y = X upsampled 100 times, by means of sinc interpolation or an FFT resampler, and the integral is simply the sum of all values in Y.
The calculation of this integral is easy, yet I would like to speed it up by avoiding the upsampling step. Is there any possibility of obtaining the integral of the upsampled signal without actually resampling it?
Preserving integral through downsampling and Do lowpass filters affect the integral over the signal? are similar questions, but deal with cases of downsampling or bandlimiting. In both cases, it is clear that an integral will not exactly be preserved by downscaling.
Yet, in this question, we are talking about upsampling, and I suspect this is possible because X contains all the information necessary to create Y.
Answer: If you upsample a causal discrete-time signal $x[n]$ by an integer factor $L$ and you use an interpolation filter with a causal impulse response $h[n]$, the upsampled and interpolated signal is given by
$$y[n]=\sum_{k=0}^{\infty}x[k]h[n-kL]\tag{1}$$
(Note that the requirement of causality can be easily removed; I just use it to have the lower summation indices start with the value $0$.)
If you want to compute the sum of $y[n]$, you would need to compute
$$s_y=\sum_{n=0}^{\infty}y[n]=\sum_{n=0}^{\infty}\sum_{k=0}^{\infty}x[k]h[n-kL]\tag{2}$$
However, you can interchange the sums in (2), resulting in
$$s_y=\sum_{k=0}^{\infty}x[k]\sum_{n=0}^{\infty}h[n-kL]\tag{3}$$
The value of the inner sum in (3) is independent of the index $k$, so we can write
$$s_y=s_h\cdot \sum_{k=0}^{\infty}x[k]=s_h\cdot s_x\tag{4}$$
with
$$s_h=\sum_{n=0}^{\infty}h[n]$$
So the sum of the upsampled and interpolated signal is simply the sum of the original signal times a factor which is given by the sum over the impulse response values of the interpolation filter. Note that for the ideal sinc filter, this factor equals the upsampling factor $L$. This is also at least approximately true for all practical interpolation filters, because the sum over the impulse response equals the DC gain of the interpolation filter, which should ideally equal $L$.
So if you want to keep things simple (and why wouldn't you?), you could use
$$s_y\approx L\cdot s_x$$
which is a very good approximation (or even exactly true) for any reasonable interpolation filter.
In the above derivations I've assumed that all the (infinite) sums exist, which in practice is always the case if we deal with finite length signals and with stable interpolation filters. | {
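A quick numerical illustration of (4) (my own sketch; the truncated, Hann-windowed sinc here is just one arbitrary choice of interpolation filter, and the signal is random):

```python
import numpy as np

rng = np.random.default_rng(0)
L = 4                                   # upsampling factor
x = rng.standard_normal(50)             # original signal

# upsample: zero-stuff, then filter with a truncated, Hann-windowed sinc
up = np.zeros(len(x) * L)
up[::L] = x
n = np.arange(-8 * L, 8 * L + 1)
h = np.sinc(n / L) * np.hanning(len(n))
y = np.convolve(up, h)                  # full convolution

s_x, s_h, s_y = x.sum(), h.sum(), y.sum()
assert np.isclose(s_y, s_h * s_x)       # the sum factorizes exactly, as in (4)
assert abs(s_h - L) < 0.1               # the filter's DC gain is close to L
```

So instead of upsampling, one can compute $s_h$ once for the chosen interpolation filter and return $s_h\, s_x$ — or simply $L\, s_x$ if the approximation is acceptable.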
"domain": "dsp.stackexchange",
"id": 2781,
"tags": "resampling, integration"
} |
How to calculate the 4th order perturbation energy of harmonic oscillator with diagram correctly? | Question: I'm learning Feynman diagrams. It is said that:
$$\Delta E=\frac {i} {T_{tot}}\sum (\text{connected vacuum diagrams})$$
in which $T_{tot}$ is the total time during which the perturbation acts.
I want to test the theorem in simple isolated harmonic oscillator whose Hamiltonian reads:
$$H=p^2/2m+\frac 1 2 m \omega_0 ^2 x^2=\omega_0(a^\dagger a+1/2)$$
The ground state $|0\rangle$ has energy of $\omega_0 /2$.
Now I perturb it with squared term:
$$H=p^2/2m+\frac 1 2 m \omega_0 ^2 x^2 + \frac 1 2 m \omega _1 ^2 x^2=\sqrt{\omega_0^2+\omega_1^2}(a^\dagger a+1/2).$$
The exact ground state energy now is:
$$\frac 1 2\sqrt{\omega_0^2+\omega_1^2}=\frac {\omega_0} 2 \sqrt{1+\frac{\omega_1^2}{\omega_0^2}}=\frac {\omega_0} 2(1+\frac 1 2\frac{\omega_1^2}{\omega_0^2}-\frac 1 8(\frac{\omega_1^2}{\omega_0^2})^2+\frac{1}{16}(\frac{\omega_1^2}{\omega_0^2})^3-\frac{5}{128}(\frac{\omega_1^2}{\omega_0^2})^4+\ldots)$$
in which the square root is expanded to polynomials.
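As a quick numerical check of those coefficients (a sketch I am adding; write $g=\omega_1^2/\omega_0^2$): after the $g^4$ term, the truncation error of the binomial series for $\sqrt{1+g}$ should scale as $g^5$, with leading coefficient $7/256\approx 0.027$:

```python
import math

def partial_sum(g):
    # sqrt(1+g) expanded through the g^4 term, as in the expansion above
    return 1 + g/2 - g**2/8 + g**3/16 - 5*g**4/128

for g in (0.3, 0.1, 0.03):
    err = abs(math.sqrt(1.0 + g) - partial_sum(g))
    assert err < 0.05 * g**5    # leading error ~ (7/256) g^5
```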
Next, I want to calculate the perturbed energy order by order using Feynman diagrams. Since the perturbation term $\frac 1 2 m \omega _1 ^2 x^2$ is quadratic, I think the only possible connected vacuum diagrams are single circles with several vertices on the arcs.
I can calculate the 1st, 2nd and 3rd order $\Delta E$ consistent with the exact solution above. However, I encountered an inconsistency at 4th order.
The exact 4th order energy $$\Delta E^{(4)}=-\frac{5}{256}\omega_0(\frac{\omega_1^2}{\omega_0^2})^4$$ has a factor of 5 in the numerator. The corresponding diagram is a circle with 4 vertices, and I really have no idea how the 4-vertex circle diagram could generate combinatorial numbers involving a factor of 5. Actually, I come up with $$\Delta E_{diagram}^{(4)}=-\frac{6}{256}\omega_0(\frac{\omega_1^2}{\omega_0^2})^4$$ by evaluating the diagram. Am I wrong?
Here are my calculation details (only the real numerical factors are tracked, not the units):
Every vertex provides a factor of 1/2;
The Green's function of the oscillator is: $$G(t_1-t_2)=-i\langle 0|Tx(t_1)x(t_2)|0 \rangle =-i\frac 1 {2m\omega}\exp(-i\omega |t_1-t_2|),$$ so every Green's function provides a factor of 1/2;
Every time-integral also leads to a 1/2;
The multiplicity of the four-vertex circle is $3\times 2^4$.
With 4 vertices, 4 Green's functions and 3 time-integrals, I arrive at the factor $\frac 3 {128}=\frac 6 {256}$.
Answer: Hints:
Let us for simplicity put mass $m=1$ and go to Euclidean signature as e.g. my Phys.SE answer here.
The free propagator is
$$\Delta(t,t^{\prime})~=~\frac{1}{2\omega_0}e^{-\omega_0 |t-t^{\prime}|},\tag{A} $$
while the Feynman rule for the 2-vertex insertion is
$$-\frac{\omega_1^2}{\hbar}\int_{[0,T]}\!dt. \tag{B} $$
The connected Feynman vacuum bubble diagrams are 1-loop diagrams of the form
x--x--x--x
| |
x--x--x--x
$\uparrow$ Fig. 1. A vacuum bubble with $n=8$ 2-vertex insertions.
The Feynman 1-loop bubble diagram with $n$ 2-vertex insertions has a symmetry factor $S=2n$.
The time integrals in the Feynman 1-loop diagram with $n$ 2-vertex insertions
$$ \int_{[0,T]} \!dt_1 \ldots \int_{[0,T]} \!dt_n $$
$$\exp\left\{-\omega_0\left[ |t_1-t_n|+|t_n-t_{n-1}|+\ldots+|t_2-t_1|\right]\right\} $$
$$~=~ \frac{(\#)}{\omega_0^{n-1}}T+{\cal O}(T^0) \tag{C}$$
are a bit more complicated to evaluate than OP suggests. Here $(\#)\in\mathbb{Q}$ is some rational number. One may check that the first few numbers for small $n$ are
$$(\#)~=~1,1,\frac{3}{2},\frac{5}{2},\ldots,\tag{D}$$
cf. OP's main question.
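The small-$n$ values in (D) can be checked numerically. A rough grid evaluation of the time integrals in (C) (my own sketch with $\omega=1$; finite-$T$ and discretization errors of a few percent are expected) reproduces $(\#)=1$ for $n=2$ and $(\#)=3/2$ for $n=3$:

```python
import numpy as np

w, T, N = 1.0, 25.0, 150
t = np.linspace(0.0, T, N)
dt = t[1] - t[0]

# n = 2: exponent -w(|t1-t2| + |t2-t1|); prediction (#) = 1, i.e. I2 ~ T/w
I2 = np.sum(np.exp(-2.0 * w * np.abs(t[:, None] - t[None, :]))) * dt**2

# n = 3: prediction (#) = 3/2, i.e. I3 ~ (3/2) T/w^2
a, b, c = t[:, None, None], t[None, :, None], t[None, None, :]
I3 = np.sum(np.exp(-w * (np.abs(a - c) + np.abs(c - b) + np.abs(b - a)))) * dt**3

assert abs(I2 / (T / w) - 1.0) < 0.1
assert abs(I3 / (1.5 * T / w**2) - 1.0) < 0.1
```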
Altogether, the $n$th energy contribution therefore takes the form
$$E_n~=~ \frac{-\hbar}{S} \left(\frac{-\omega_1^2}{2\omega_0}\right)^n \frac{(\#)}{\omega_0^{n-1}}. \tag{E}$$ | {
"domain": "physics.stackexchange",
"id": 82540,
"tags": "quantum-mechanics, homework-and-exercises, quantum-field-theory, feynman-diagrams, perturbation-theory"
} |
Is it possible to calculate atomic radius with electron configuration? | Question: I need to know whether it is possible to calculate the atomic radius from the number of electrons and the electron configuration. Or is there any other way to calculate the atomic radius using common characteristics of an atom?
Answer: There's no measuring the radius of a single atom directly, mainly because the electron cloud around the nucleus has no sharp boundary: we can't define an exact radius for any single atom. Take a look at the uncertainty principle:
Introduced first in 1927, by the German physicist Werner Heisenberg, it states that the more precisely the position of some particle is determined, the less precisely its momentum can be known, and vice versa. The formal inequality relating the standard deviation of position $\sigma _x$ and the standard deviation of momentum $\sigma _p$ was derived by Earle Hesse Kennard later that year and by Hermann Weyl in 1928: $$\sigma _x \sigma _p \geq \frac{\hbar}{2}$$ Wikipedia
Thus, if you want to measure an atom's radius, you have to measure the distance between two bonded nuclei and then divide it by two. This leads to different types of radii being defined for atoms; the most common are the covalent radius, the ionic radius, and the van der Waals radius. Undergraduate chemistry mostly studies the trends in these radii, rather than how they are measured with advanced techniques.
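As a minimal numeric illustration of the divide-by-two idea (the ~199 pm Cl–Cl bond length is a textbook value; the function name is my own):

```python
def covalent_radius_from_homonuclear_bond(bond_length_pm):
    """For a homonuclear diatomic such as Cl2, the covalent radius
    is taken to be half the internuclear distance."""
    return bond_length_pm / 2

# Cl-Cl bond length of ~199 pm gives a covalent radius of ~99.5 pm for Cl
print(covalent_radius_from_homonuclear_bond(199))  # 99.5
```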
Chemguide is pretty informative when it gets to teaching the trends:
chemguide: atomic radius | {
"domain": "chemistry.stackexchange",
"id": 2613,
"tags": "quantum-chemistry, electrons, electronic-configuration, atomic-radius"
} |
Sanitize object to only include specific properties | Question: Object Property Sanitization
I'm learning to code servers using JavaScript, Node, and Express. While writing controllers that create new entries in the database, the need to sanitize the user input arose. Take this post controller as an example.
const Model = require('./Model')
const post = (req, res, nxt) =>
Model
.create(req.body)
.then(createdItem => res.json(createdItem))
.catch(err => nxt(err))
Ideally, we'd like to sanitize the req.body object (that is parsed from JSON, and is provided by the user) before passing it to the create method of the model, to only include specific properties, so the users can't mess with the database schema or change values they aren't supposed to.
I experimented a little, and came up with four different solutions, and a test suite for them.
I'm using Jest as test framework, Standard as code style, and JSDoc for documentation comments. Here is the code:
ops-for-each.js
/**
* Creates a new object with the specified properties of the
* input object, if they are present.
* @param {object} object Object to sanitize.
* @param {string[]} properties Properties to copy.
* @returns Object with the specified properties if present in
* original object.
*/
const sanitizeObjectProperties = (object, properties) => {
const accumulator = {}
properties.forEach(property => {
if (property in object) accumulator[property] = object[property]
})
return accumulator
}
module.exports = {
fun: sanitizeObjectProperties,
id: 'for each'
}
ops-reduce.js
/**
* Creates a new object with the specified properties of the
* input object, if they are present.
* @param {object} object Object to sanitize.
* @param {string[]} properties Properties to copy.
* @returns Object with the specified properties if present in
* original object.
*/
const sanitizeObjectProperties = (object, properties) => {
const reducer = (copy, property) => {
if (property in object) copy[property] = object[property]
return copy
}
return properties.reduce(reducer, {})
}
module.exports = {
fun: sanitizeObjectProperties,
id: 'reduce'
}
ops-filter-reduce.js
/**
* Creates a new object with the specified properties of the
* input object, if they are present.
* @param {object} object Object to sanitize.
* @param {string[]} properties Properties to copy.
* @returns Object with the specified properties if present in
* original object.
*/
const sanitizeObjectProperties = (object, properties) =>
properties
.filter(property => property in object)
.reduce((copy, property) => {
copy[property] = object[property]
return copy
}, {})
module.exports = {
fun: sanitizeObjectProperties,
id: 'filter reduce'
}
ops-entries-filter-map.js
/**
* Creates a new object with the specified properties of the
* input object, if they are present.
* @param {object} object Object to sanitize.
* @param {string[]} properties Properties to copy.
* @returns Object with the specified properties if present in
* original object.
*/
const sanitizeObjectProperties = (object, properties) =>
Object.fromEntries(
properties
.filter(property => property in object)
.map(property => [property, object[property]])
)
module.exports = {
fun: sanitizeObjectProperties,
id: 'Object.fromEntries filter map'
}
ops.test.js
const testSubjects = [
require('./ops-for-each'),
require('./ops-reduce'),
require('./ops-filter-reduce'),
require('./ops-entries-filter-map')
]
testSubjects.forEach(({ fun, id }) => {
test(`${id} sanitizes properly`, () => {
const object = { title: 'One', year: 2021, hack: true }
const properties = ['title', 'year']
const sanitizedCopy = fun(object, properties)
// Must not have the 'hack' property
expect(sanitizedCopy).toEqual({ title: 'One', year: 2021 })
})
})
Specific Review
I have some specific questions, but you can safely ignore some or all of them if you have something else to share about this code.
Is this code readable? If not, what would you change to make it more readable?
Is this code maintainable? If not, what would you change to make it more maintainable?
Is there a more efficient solution?
Is there a more concise or idiomatic alternative to these solutions? Perhaps using Object.assign, or object deconstruction, or other techniques?
Which solution do you prefer, in terms of readability and/or efficiency and/or conciseness?
Are the tests written correctly? Should other test scenarios be considered?
Are the documentation comments written correctly?
Are there any anti-patterns, code smells, or anything that I should not be doing? Am I missing any best practices?
Would you approach this problem in a different way?
I'm not mutating the original object, but creating a new one and inserting the specified properties. Is that ok? Are there advantages to mutating the original object instead?
General Review
While I have some specific questions, I want this to be an open review. Please share anything you'd do differently, maybe as an improvement, or maybe as an alternative worth considering. I'm interested in observations of the code in general, including the solutions, the tests, and the documentation comments.
Answer: I prefer either the reduce or fromEntries approaches. The first, ops-for-each.js, is really just an awkward reduce.
You should think about whether you really need the "filter" concept at all. Since you have a list of acceptable properties, as long as you are iterating through those, you will get the right answer. Consider the two cases of:
if (property in object) copy[property] = object[property]
The first is where property is present, in which case the property is copied. Otherwise, it is not assigned, and is therefore undefined. You'll notice you get the same answer without the if statement, i.e. simply copy[property] = object[property].
Consider
anonymous functions that are only called once
shorter variable names if the function is small
more standard variable names. I found it helpful to use accumulator in the first function, but was surprised when it wasn't named that in the reduce function, where that's the normal name. I find the following easier to get through than the more verbose one:
const sanitizeObjectProperties = (obj, props) => {
return props.reduce((acc, p) => {
acc[p] = obj[p]
return acc
}, {})
}
As long as you're making just one pass through the properties, you should be relatively good in terms of efficiency. The test looks good, and by having tests, you are improving your maintainability. There are a couple other tests: empty object and pre-sanitized object, which you'll want to have. Your approach of creating a new object is solid and safer than removing extra properties.
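For completeness, a spread-based variant (my own sketch, not part of the original review; note it allocates a fresh accumulator object on every step, so it trades a little efficiency for compactness):

```javascript
const sanitizeObjectProperties = (obj, props) =>
  props.reduce(
    (acc, p) => (p in obj ? { ...acc, [p]: obj[p] } : acc),
    {}
  )

const result = sanitizeObjectProperties(
  { title: 'One', year: 2021, hack: true },
  ['title', 'year']
)
console.log(result) // { title: 'One', year: 2021 }
```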
It's Javascript, so there's lots of ways to do the same thing. There's probably a clever way to do this with rest/spread, but what you have done (#2 or #4) are pretty clear. | {
"domain": "codereview.stackexchange",
"id": 42654,
"tags": "javascript, unit-testing, properties"
} |
Union find using unordered map and templates in C++ | Question: I tried to implement the union-find (disjoint set) data structure in C++, using unordered_map. Is there a better way of doing this using any other data structure?
While finding the root I have used the following way.
int root(int i){
if(i != parent[i]) parent[i] = root(parent[i]);
return parent[i];
}
Also, I can implement it as
int root(int i){
while (i != parent[i]){
parent[i] = parent[parent[i]];
i = parent[i];
}
return i;
}
Both seem the same to me, other than the recursive vs. iterative approach. Which is better here?
The following is my implementation.
#include<iostream>
#include<unordered_map>
#include<vector>
using namespace std;
class Disjoinset{
private:
unordered_map<int, int> parent, rank;
// find the root
int root(int i){
if(i != parent[i]) parent[i] = root(parent[i]);
return parent[i];
}
public:
// initialize the parent map
void makeSet(int i){
parent[i] = i;
}
// check for the connectivity
bool is_connected(int p, int q){
return root(p) == root(q);
}
// make union of two sets
void Union(int p, int q){
int proot = root(p);
int qroot = root(q);
if(proot == qroot) return;
if(rank[proot] > rank[qroot]) {
parent[qroot] = proot;
}
else{
parent[proot] = qroot;
if(rank[proot] == rank[qroot]) rank[qroot]++;
}
}
};
Driver code
int main(){
vector<int> arr = {1,2,9,8,6,5,7};
Disjoinset dis; // create a disjoint set object
for(int x: arr){
dis.makeSet(x); // make the set
}
dis.Union(1,9); // create connections
dis.Union(6,5);
dis.Union(7,9);
cout << dis.is_connected(1,7) << endl; // check the connectivity.
cout << dis.is_connected(1,6) << endl;
return 0;
}
Thereafter I tried to use templates and came up with the following. Can someone help me with this, as I am new to C++? Is my implementation correct, or is there a better approach?
#include<iostream>
#include<unordered_map>
#include<vector>
#include<string>
using namespace std;
template<class T>
class Disjoinset{
private:
unordered_map< T, T> parent;
unordered_map< T, int>rank;
// find the root
T root(T i){
if(i != parent[i]) parent[i] = root(parent[i]);
return parent[i];
}
public:
// initialize the parent map
void makeSet(T i){
parent[i] = i;
}
// check for the connectivity
bool is_connected(T p, T q){
return root(p) == root(q);
}
// make union of two sets
void Union(T p, T q){
T proot = root(p);
T qroot = root(q);
if(proot == qroot) return;
if(rank[proot] > rank[qroot]) {
parent[qroot] = proot;
}
else{
parent[proot] = qroot;
if(rank[proot] == rank[qroot]) rank[qroot]++;
}
}
};
int main(){
vector<string> arr = {"amal", "nimal", "kamal", "bimal", "saman"};
Disjoinset <string>dis; // create a disjoint set object
for(string x: arr){
dis.makeSet(x); // make the set
}
dis.Union("amal", "kamal"); // create connections
dis.Union("kamal", "nimal");
cout << dis.is_connected("amal", "nimal") << endl; // check the connectivity.
cout << dis.is_connected("bimal", "amal") << endl;
return 0;
}
Thank you for every answer.
Answer: Overall, that looks good to me. A few nitpicks, however, follow:
Advice 1
#include<iostream>
#include<unordered_map>
#include<vector>
#include<string>
I would put a space between e and <. Also, I would sort the rows alphabetically, so that we get:
#include <iostream>
#include <string>
#include <unordered_map>
#include <vector>
Advice 2: One template per file
I would extract away the main and put the entire disjoint set template into its own file; call it DisjointSet.hpp to begin with.
Advice 3: Protect your header files from multiple inclusion
I would most definitely put the header guards in DisjointSet.hpp so that it looks like this:
#ifndef COM_STACKEXCHANGE_RUSIRU_UTIL_HPP
#define COM_STACKEXCHANGE_RUSIRU_UTIL_HPP
.
. Your funky DisjointSet.hpp code here. :-)
.
#endif // COM_STACKEXCHANGE_RUSIRU_UTIL_HPP
Advice 4: "Package" your code into namespaces
I have a habit of putting my data structures into relevant namespaces in order to avoid name collisions with other people's code:
namespace com::stackexchange::rusiru::util {
template<class T>
class DisjointSet {
...
};
}
Advice 5: Remove minor noise
unordered_map< T, T> parent;
unordered_map< T, int>rank;
I would write:
unordered_map<T, T> parent;
unordered_map<T, int> rank;
Note the placement of spaces!
Advice 6: Arbitrary method naming scheme
Essentially, you have makeSet, is_connected, Union. That's three different method naming schemes right there; choose one and stick to it. For example, makeSet, isConnected, unite (note that union itself is a reserved keyword in C++, so it cannot be used as a method name).
Advice 7: Printing booleans to stdout:
You can do cout << std::boolalpha << ...; in order to print true/false.
Advice 8: Don't pollute your namespace
Generally speaking, using namespace std; is not what you may see in professional C++ code. Consider using individual uses such as:
using std::cout;
using std::endl;
Putting all together
Overall, I had this in mind:
DisjointSet.hpp
#ifndef COM_STACKEXCHANGE_RUSIRU_UTIL_HPP
#define COM_STACKEXCHANGE_RUSIRU_UTIL_HPP
#include <iostream>
#include <string>
#include <unordered_map>
#include <vector>
namespace com::stackexchange::rusiru::util {
using std::boolalpha;
using std::cout;
using std::endl;
using std::string;
using std::unordered_map;
using std::vector;
template<typename T>
class DisjointSet {
private:
unordered_map<T, T> parent;
unordered_map<T, int> rank;
// find the root
T root(T i) {
if (i != parent[i]) parent[i] = root(parent[i]);
return parent[i];
}
public:
// initialize the parent map
void makeSet(T i) {
parent[i] = i;
}
// check for the connectivity
bool isConnected(T p, T q) {
return root(p) == root(q);
}
// make union of two sets
void unite(T p, T q) { // "union" itself is a reserved C++ keyword
T proot = root(p);
T qroot = root(q);
if (proot == qroot) return;
if (rank[proot] > rank[qroot]) {
parent[qroot] = proot;
}
else {
parent[proot] = qroot;
if (rank[proot] == rank[qroot]) rank[qroot]++;
}
}
};
}
#endif // COM_STACKEXCHANGE_RUSIRU_UTIL_HPP
main.cpp
#include "DisjointSet.hpp"
#include <iostream>
#include <string>
#include <vector>
using std::boolalpha;
using std::cout;
using std::endl;
using std::string;
using std::vector;
using com::stackexchange::rusiru::util::DisjointSet;
int main() {
vector<string> arr = { "amal", "nimal", "kamal", "bimal", "saman" };
DisjointSet<string> dis; // create a disjoint set object
for (const string x : arr) {
dis.makeSet(x); // make the set
}
dis.unite("amal", "kamal"); // create connections
dis.unite("kamal", "nimal");
cout << boolalpha << dis.isConnected("amal", "nimal") << endl; // check the connectivity.
cout << dis.isConnected("bimal", "amal") << endl;
return 0;
} | {
"domain": "codereview.stackexchange",
"id": 38446,
"tags": "c++, algorithm, template, c++17, union-find"
} |
How to read 1H-NMR and IR spectra (for ibuprofen) | Question: I have read the answers to related/similar questions. They didn't help.
I have to read a proton-NMR spectrum and an IR spectrum of ibuprofen. The structural formula is given:
I have not been formally taught how to do this, nor have I solved any problems beforehand. This is because it's required for my project, but we haven't had the training like the former classes have for reasons I won't waste time explaining.
I know the very basics of the two spectroscopy methods as I am a high school student.
My questions at the moment have become the following:
(proton-NMR) How do di-substituted aromatic rings affect the splitting of the signals from A and B?
(IR) Should I only focus on confirming the aromatic ring and carboxylic acid? And do I only have to figure out the fingerprint area or do I have to analyze the whole IR-spectrum?
Before you take a look at the spectra, I recommend looking at the last two pages of this link
The spectra are as follows:
proton-NMR-spectrum: (Source)
IR-spectrum: (Source)
Answer:
To a very crude first approximation, A and B appear as doublets in the NMR. That's because A couples to B and vice versa, and to a first approximation, the AB pair on one side doesn't interact with the AB pair on the other side.
This is not entirely true, of course. The system is not really two separate AB pairs, but is properly described as AA'BB' (or AA'XX', depending on the exact frequencies involved). This website: https://organicchemistrydata.org/hansreich/resources/nmr/?page=05-hmr-15-aabb/ and this question: How can multiplets in para-disubstituted benzene rings be described? discuss it in more detail, but at your current level, you almost certainly don't need to worry about that.
For the IR, I'd focus on picking out whatever you can pick out; what you mentioned is probably enough. It's rare to use the messy region of the IR for structural elucidation. IR still has its uses, but there's a reason why NMR is nowadays considered the 'primary' characterisation technique for typical organic molecules. | {
"domain": "chemistry.stackexchange",
"id": 17394,
"tags": "organic-chemistry, spectroscopy, nmr-spectroscopy, ir-spectroscopy"
} |
How do biologists infer correct ORF of a DNA sequence? | Question: Each DNA (RNA) sequence has 6 possible Open Reading Frames(ORF). My question is: What are the theoretical bases of in vitro or in silico tries to find correct reading frame of a sequence?
Is it just distance between Start and Stop codons, or are there some other factors with more important impacts in this subject?
Answer: TransDecoder is a commonly used program for extracting likely coding regions from transcriptome assemblies, which does the following to make a call:
TransDecoder identifies likely coding sequences based on the following
criteria:
a minimum length open reading frame (ORF) is found in a transcript
sequence
a log-likelihood score similar to what is computed by the GeneID
software is > 0.
the above coding score is greatest when the ORF is scored in the 1st
reading frame as compared to scores in the other 5 reading frames.
if a candidate ORF is found fully encapsulated by the coordinates of
another candidate ORF, the longer one is reported. However, a single
transcript can report multiple ORFs (allowing for operons, chimeras,
etc).
optional the putative peptide has a match to a Pfam domain above the
noise cutoff score.
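As an illustrative toy (my own sketch, not how TransDecoder is actually implemented), the longest-ORF search across all six frames can look like:

```python
def longest_orf(seq):
    """Return the longest ATG...stop ORF found in any of the
    six reading frames (3 forward + 3 on the reverse complement)."""
    comp = str.maketrans("ACGT", "TGCA")
    stops = {"TAA", "TAG", "TGA"}
    best = ""
    for strand in (seq, seq.translate(comp)[::-1]):
        for frame in range(3):
            start = None
            for i in range(frame, len(strand) - 2, 3):
                codon = strand[i:i + 3]
                if codon == "ATG" and start is None:
                    start = i
                elif codon in stops and start is not None:
                    orf = strand[start:i + 3]
                    if len(orf) > len(best):
                        best = orf
                    start = None
    return best

print(longest_orf("CCATGAAATTTTAGCC"))  # ATGAAATTTTAG (frame 2, forward)
```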
So in essence, look for the longest ORF, and then use some secondary metric (hidden Markov model, position weight array, database query, etc) to refine your prediction. | {
"domain": "biology.stackexchange",
"id": 3455,
"tags": "transcription, sequence-analysis, orf"
} |
Primer design for introduction of restriction sites flanking a gene of interest | Question: I am wondering what the correct method for primer design to introduce restriction sites. Specifically between two methods.
1) Primer first partially hybridises to the gene, has a mis-match where the restriction site and extra bases are on the primer, then has a hybridised site following the restriction site, to DNA upstream of the gene.
or
2) Primer first partially hybridises to the gene, then completely dangling and non-hybridising has the restriction site and extra bases.
Answer: The option #2 is most common. Do not forget to add 3 or more additional terminal base pairs for optimal restriction enzyme cutting (source: BioTechniques 1998, 24:582-584) | {
"domain": "biology.stackexchange",
"id": 321,
"tags": "molecular-biology, cloning, restriction-enzymes"
} |
Comparing multiple variables and assigning new values respectively | Question: I know there is a more elegant way to write this. The logic is solid and doing what I want it to, but I feel like when written like this, it is awfully cumbersome to have these many if statements essentially doing the same thing.
if (forward) {
if (counter == listEnd) {
counter = 0;
}
else {
counter++;
}
if (nextCounter == listEnd) {
nextCounter = 0;
}
else {
nextCounter++;
}
if (prevCounter == listEnd) {
prevCounter = 0;
}
else {
prevCounter++;
}
} // end if
Answer: Introduce a function:
var boundedIncrement = function(current, maximum) {
return current === maximum ? 0 : current + 1;
};
Then use it:
if (forward) {
counter = boundedIncrement(counter, listEnd);
nextCounter = boundedIncrement(nextCounter, listEnd);
prevCounter = boundedIncrement(prevCounter, listEnd);
}
No more logic duplication.
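Another common idiom for the wrap-around (a matter of taste; it assumes listEnd is the last valid value, so the counter cycles through 0..listEnd) is the remainder operator:

```javascript
var boundedIncrement = function(current, maximum) {
  return (current + 1) % (maximum + 1);
};

console.log(boundedIncrement(3, 4)); // 4
console.log(boundedIncrement(4, 4)); // 0 (wrapped around)
```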
You could also close over listEnd if you wanted to have a single argumented function:
if (forward) {
var boundedIncrement = function(current) {
return current === listEnd ? 0 : current + 1;
};
counter = boundedIncrement(counter);
nextCounter = boundedIncrement(nextCounter);
prevCounter = boundedIncrement(prevCounter);
} | {
"domain": "codereview.stackexchange",
"id": 16224,
"tags": "javascript"
} |
Orange Data Mining load saved models | Question: I am planning to use the Orange Data Mining Tool for easy data exploration and model generation. What is still unclear to me is: after finding a good model, what can I do with it, how can I use or deploy it in production?
I already found out that there is no Orange server which can run the Orange workflow. But is it at least possible to load the model in Python and use it there which was generated and saved via the Orange UI?
Answer: Yes, this is what the Save Model widget does under the Model tab. Create your model, then click save model and your model will be saved to a pickle file. Then just load your pickle file in Python.
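In plain Python the loading step is just pickle; here is a generic sketch (the .pkcls extension and the stand-in model object are my own assumptions for illustration — with a real Orange model, Orange must be installed for unpickling to succeed):

```python
import os
import pickle
import tempfile

def load_model(path):
    """Load a model previously saved via Orange's "Save Model" widget."""
    with open(path, "rb") as f:
        return pickle.load(f)

# Round-trip demo with a stand-in "model" object:
dummy_model = {"weights": [0.5, -1.2], "bias": 0.1}
with tempfile.NamedTemporaryFile(suffix=".pkcls", delete=False) as f:
    pickle.dump(dummy_model, f)
    path = f.name

restored = load_model(path)
os.remove(path)
print(restored == dummy_model)  # True
```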
Here is a similar question on Stack Overflow: | {
"domain": "datascience.stackexchange",
"id": 3655,
"tags": "data-mining, orange"
} |
How long does an enzyme work? | Question: I googled this, and I found only that shelf lives are around 2-3 years, but enzymes often still work even after 5 years if you store them properly. I'd like to know how long I can use an enzyme in solution. Is it hours, days, months? If it is a lot shorter than the shelf life, then why is that?
Answer: The product manufacturer can define a suitable shelf life at a stated storage condition because they've performed stability studies that confirm the product in question remains suitably potent in that timeframe. Often times an expiration date ends abruptly at 1-3 years because the manufacturer chooses not to test out any longer. So that's to say, if you receive a protein with a shelf-life of 3 years, if stored properly the protein may be suitable long after.
It's a fairly different question, however, once you have reconstituted the reagent, because chances are that stability wasn't tested in that form. In order to make a claim about the stability of a processed/reconstituted protein you need either a statement from the manufacturer, data from someone who has tested stability on it themselves, or you need to test stability for yourself. The fairly short answer is you can't know, and the answer depends on a lot of variables such as the structure, concentration, diluent and storage temperature. All such variables can cause structural changes to the enzyme, i.e. protein degradation, that will result in a different activity for that product.
"domain": "biology.stackexchange",
"id": 9473,
"tags": "enzymes"
} |
Question about time evolution of a scalar field for the Källén-Lehmann formula proof | Question: I'm going through the proof of the Källén-Lehmann formula in my professor's notes, and at one point he writes that, in the Heisenberg picture, we can write a real scalar field $\hat{\varphi}(t, \vec x)$ as
$$
\hat{\varphi}(t, \vec x) = e^{i\hat p x} \hat{\varphi}(0, \vec x) e^{-i\hat p x}
$$
so that when inserted in $\langle\Omega|\hat{\varphi}(t, \vec x)|n\rangle$ ($\Omega$ being the vacuum of the interacting theory) we retrieve the eigenvalues $p_n$. I don't understand why $\hat \varphi$ evolves in time with the momentum $\hat p$ in the exponential and not with the Hamiltonian $\hat H$, as the Heisenberg picture dictates.
EDIT: actually my prof wrote
$$
\hat \varphi(x) = e^{i\hat p x} \hat{\varphi}(0) e^{-i\hat p x}
$$
without being explicit about the time dependence. I'm not sure if this is just abuse of notation or actually makes a difference.
Answer: This is just the usual unitary transformation for translation in Quantum mechanics; see eg Sakurai. You can think of $x$ as a 4-vector $(t,\vec{x})$. Then, the last equation you wrote is just a translation along all coordinates. Note that along the time coordinate $x^0$, the generator $p^0$ IS the hamiltonian $H$(recall what $p^0$ means in special relativity). Along the spatial directions, the translation operator is the usual $\vec{p}=-i\nabla$.
So everything is consistent- time translation is generated by the Hamiltonian ($p^0$) while spatial translation is generated by $\vec{p}$. Note the power of a covariant formulation-you can encode all of this as the single operator $e^{ix^\mu \hat{p}_\mu}$. | {
"domain": "physics.stackexchange",
"id": 80872,
"tags": "quantum-field-theory, correlation-functions, time-evolution, propagator"
} |
Are 2 independent PDAs equivalent to a Turing machine? | Question: I was thinking about the language $a^nb^nc^n$, which is obviously not context free, but if we run it through 2 automata at the same time (the first for $a$ and $b$, the second for $b$ and $c$) and they both accept, the construction would basically accept the language.
Answer: No, such a construct can recognise at most the intersection of two context-free languages. To see where it's lacking, consider $L = \{\textsf{a}^n~|~n\in\mathbb{N}~\text{is composite}\}$. I conjecture that to express $L$ as the intersection of CFLs requires infinitely many CFLs.
The paper An infinite hierarchy of intersections of context-free languages by Liu and Weiner (Math. Systems Theory 7, 185–192 (1973). https://doi.org/10.1007/BF01762237) presents languages that can be written as an intersection of $k$ context-free languages but not as an intersection of $k-1$ of them. For $k=3$ the language is $\{\; a^k b^\ell c^m a^k b^\ell c^m \mid k,\ell,m \in \mathbb{N}\}$.
Thus, that language is the intersection of three cf-languages, but not the intersection of two cf-languages.
For other $k$ the same pattern is repeated. Quoting that paper: "The proof is quite complicated".
The language
$\{\textsf{a}^n\textsf{b}^n\textsf{c}^n\textsf{d}^n\textsf{e}^n~|~n\in\mathbb{N}\}$ seems as complicated but is in fact the intersection of two cf-languages: $\{ \textsf{a}^m\textsf{b}^m\textsf{c}^n\textsf{d}^n\textsf{e}^p \mid m,n,p\in \mathbb N \} \cap\{ \textsf{a}^m\textsf{b}^n\textsf{c}^n\textsf{d}^p\textsf{e}^p \mid m,n,p\in \mathbb N \} $. | {
"domain": "cs.stackexchange",
"id": 21439,
"tags": "formal-languages, turing-machines, pushdown-automata"
} |
Why is benzyne an intermediate? | Question: I've always seen benzyne (benzene with a triple bond) classified as an "intermediate". I really don't see why it needs to be an intermediate, though. The possible reasons I can come up with are flimsy:
It has an unstable sp2–sp2 $\pi$ bent bond: we have bent bonds with more angle strain in cyclopropane, which is not an intermediate. Though those bonds are $\sigma$ bonds.
It immediately reacts with itself to form a dimer, biphenylene, in the absence of other reagents: there are many other molecules which spontaneously dimerize (eg $\ce{NO2}$ at certain temperatures), yet they are not classified as "intermediates".
So, what makes benzyne an "intermediate"?
More generally (but possibly too broad), what makes a molecule an "intermediate"?
Answer: As Juha says, o-benzyne (to distinguish from e.g. p-benzyne which is generated by the Bergman reaction) is an intermediate, in the sense of o-benzyne itself not being the product of interest; rather it is subsequently reacted with a trapping agent (e.g. a diene for Diels-Alder reactions) within the reaction mixture, and it is the adduct that is isolated.
However, there have been isolation experiments conducted, where o-benzyne is trapped in such a way that it is barred from reacting; for instance, Chapman and coworkers isolated benzyne in an argon gas matrix at 8 K for subsequent spectroscopic study. More recently, Warmuth studied benzyne by trapping it within a so-called hemicarcerand; the benzyne complex, kept at 173 K, is stable enough that NMR experiments can be performed on it.
See this survey article by Wentrup for more references. | {
"domain": "chemistry.stackexchange",
"id": 15,
"tags": "organic-chemistry, bent-bond"
} |
Ising model simulation using metropolis algorithm | Question: I am new to this community; I have tried my best to respect the policy of the community. I have written the Monte Carlo metropolis algorithm for the ising model. I want to optimize the code. I have tried my best. I want to optimize it further. The following is the code:
(I have used tricks like finding exponential only once, careful generation of random number, etc.)
import numpy as np
import time
import random
def monteCarlo(N,state,Energy,Mag,Beta,sweeps):
if sweeps > 10000:
print("Warning:Number of sweeps exceeded 10000\n\tsetting number of sweeps to 10000")
sweeps = 10000
start_time = time.time()
expBeta = np.exp(-Beta*np.arange(0,9))
E = np.zeros(sweeps)
M = np.zeros(sweeps)
for t in range(sweeps):
for tt in range(N*N):
a = random.randint(0, N-1)
b = random.randint(0, N-1)
s = state[a,b]
delta_E = 2*s*(state[(a+1)%N,b] + state[a,(b+1)%N] + state[(a-1)%N,b] + state[a,(b-1)%N])
if delta_E < 0:
s *= -1
Energy += delta_E
Mag += 2*s
elif random.random() < expBeta[delta_E]:
s *= -1
Energy += delta_E
Mag += 2*s
state[a, b] = s
E[t] = Energy
M[t] = Mag
print("%d monte carlo sweeps completed in %d seconds" %(sweeps,time.time()-start_time))
return E,M #returning list of Energy and Magnetization set
#####lattice config#####
"""N is lattice size
nt is number of Temperature points
sweeps are number of mc steps per spin"""
print("Starting Ising Model Simulation")
N = int(input("Enter lattice size : "))
startTime = time.time()
nt = 10
N2 = N*N
sweeps = 10000 #mc steps per spin
"""we will plot the following wrt temperature, T"""
T = np.linspace(2, 3, nt) #you can set temperature range
"""preparing lattice with all spins up"""
state = np.ones((N,N),dtype="int")
Energy = -N2
Mag = N2
#temperature loop
#for k in tqdm_gui(range(nt)):
for k in range(nt):
temp = T[k]
Beta=1/temp
print("____________________________________\nTemperature is %0.2f, time is %d" %(temp,time.time()-startTime))
E, M = monteCarlo(N,state,Energy,Mag,Beta,sweeps) #list of Energy and Magnetization
Energy = E[-1]
Mag = M[-1]
#further code is finding magnetization, autocorrelation, specific heat, autocorrelation, etc.
Answer: Naming variables
The usual rule of variable names in snake_case applies, i.e. energyFunctional would become energy_functional. Class names on the other hand should be written in CamelCase. I don't mind using single capital letters for matrices.
Why use a,b for discrete indices? I would use one of i,j,k,l,n,m,p,q,r.
Use descriptive names: e.g. energies instead of E.
Merging conditions
Instead of
if delta_E < 0:
s *= -1
Energy += delta_E
Mag += 2*s
elif random.random() < expBeta[delta_E]:
s *= -1
Energy += delta_E
Mag += 2*s
you could simply write
if delta_E < 0 or random.random() < expBeta[delta_E]:
s *= -1
Energy += delta_E
Mag += 2*s
which is easier to read.
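Incidentally (my own note, not part of the review above): the expBeta[delta_E] lookup of size 9 is safe because a single spin flip on the square lattice can only produce delta_E in {-8, -4, 0, 4, 8}, so every non-negative value indexes into np.exp(-Beta*np.arange(0, 9)). A quick enumeration confirms this:

```python
from itertools import product

# The flipped spin s and its four neighbours are each +/-1;
# delta_E = 2 * s * (sum of the four neighbouring spins).
deltas = {2 * s * sum(nbrs)
          for s in (-1, 1)
          for nbrs in product((-1, 1), repeat=4)}
print(sorted(deltas))  # [-8, -4, 0, 4, 8]
```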
String formatting
Use the sweet f-strings.
sweep_time = int(time.time() - start_time)
print(f"{sweeps} monte carlo sweeps completed in {sweep_time} seconds")
Logging warnings
Consider using the logging library. I'd write warnings to stderr, but it's up to you, see.
import sys
print("Warning: Number of sweeps exceeded 10000", file=sys.stderr)
print(" setting number of sweeps to 10000", file=sys.stderr)
Truncating it to a single line allows for easier parsing later.
print("Warning: Number of sweeps truncated to 10000.", file=sys.stderr)
Refactorisation
If performance wasn't the primary goal, I'd introduce a few new functions.
I'd separate the timing part anyway.
start_time = time.time()
energies, mags = monte_carlo(n, state, energy, mag, beta, sweeps)
elapsed_seconds = int(time.time() - start_time)
print(f"{sweeps} monte carlo sweeps completed in {elapsed_seconds} seconds")
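If that timing pattern recurs, it could be hoisted into a small helper, for example a context manager (a sketch of my own, using only the standard library):

```python
import time
from contextlib import contextmanager

@contextmanager
def timed(label):
    """Print how many whole seconds the enclosed block took."""
    start = time.time()
    yield
    print(f"{label} completed in {int(time.time() - start)} seconds")

# Usage (hypothetical call, matching the snippet above):
# with timed(f"{sweeps} monte carlo sweeps"):
#     energies, mags = monte_carlo(n, state, energy, mag, beta, sweeps)
```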
monte_carlo
Applying these ideas, the monteCarlo function becomes the following.
def monte_carlo(n, state, energy, mag, beta, sweeps):
    if sweeps > 10000:
        print("Warning: Number of sweeps truncated to 10000.", file=sys.stderr)
        sweeps = 10000
    exp_betas = np.exp(-beta*np.arange(0, 9))
    energies = np.zeros(sweeps)
    mags = np.zeros(sweeps)
    for t in range(sweeps):
        for tt in range(n*n):
            j = random.randint(0, n-1)
            k = random.randint(0, n-1)
            s = state[j, k]
            neighbour_sum = (state[(j-1)%n, k] +
                             state[j, (k-1)%n] + state[j, (k+1)%n] +
                             state[(j+1)%n, k])
            energy_diff = 2*s*neighbour_sum
            if energy_diff < 0 or random.random() < exp_betas[energy_diff]:
                s *= -1
                energy += energy_diff
                mag += 2*s
            state[j, k] = s
        energies[t], mags[t] = energy, mag
    return energies, mags
"domain": "codereview.stackexchange",
"id": 38296,
"tags": "python, performance, python-3.x, random"
} |
Does SPDC (spontaneous parametric down-conversion) type used depend on design of individual quantum erasure experiment? | Question: What is the type of spontaneous parametric down-conversion (SPDC) used in the following quantum eraser experiment?
One of the facts that needs clarification is whether the generation of entangled photons implicitly means that such an experiment must use a type-II SPDC BBO crystal, or whether the design of the experiment allows the freedom of choosing between type-I and type-II.
Source: Wikipedia: Quantum eraser experiment
First, a photon is shot through a specialized nonlinear optical
device: a beta barium borate (BBO) crystal. This crystal converts the
single photon into two entangled photons of lower frequency, a process
known as spontaneous parametric down-conversion (SPDC). These
entangled photons follow separate paths. One photon goes directly to a
polarization-resolving detector, while the second photon passes
through the double-slit mask to a second polarization-resolving
detector. Both detectors are connected to a coincidence circuit,
ensuring that only entangled photon pairs are counted. A stepper motor
moves the second detector to scan across the target area, producing an
intensity map. This configuration yields the familiar interference
pattern.
Next, a circular polarizer is placed in
front of each slit in the double-slit mask, producing clockwise
circular polarization in light passing through one slit, and
counter-clockwise circular polarization in the other slit (see Figure
1). (Which slit corresponds to which polarization depends on the
polarization reported by the first detector.) This polarization is
measured at the second detector, thus "marking" the photons and
destroying the interference pattern (see Fresnel–Arago laws).
Finally, a linear polarizer is introduced in the path of the first
photon of the entangled pair, giving this photon a diagonal
polarization (see Figure 2). Entanglement ensures a complementary
diagonal polarization in its partner, which passes through the
double-slit mask. This alters the effect of the circular polarizers:
each will produce a mix of clockwise and counter-clockwise polarized
light. Thus the second detector can no longer determine which path was
taken, and the interference fringes are restored.
A double slit with rotating polarizers can also be accounted for by
considering the light to be a classical wave. However, this
experiment uses entangled photons, which are not compatible with
classical mechanics.
Figure 1. Crossed polarizations prevent interference fringes
Figure 2. Introduction of polarizer in upper path restores interference fringes below
Answer: There is no particular requirement that type-II BBO crystals must be used for a quantum eraser experiment. As you mention, type-II is used in your reference (see the link to the full paper below). The choice is really one of convenience, quality of the entangled-photon source, etc. You can actually shift (rotate) the polarization by suitable use of wave plates anyway, although there is no real purpose in doing so.
https://arxiv.org/abs/quant-ph/0106078 | {
"domain": "physics.stackexchange",
"id": 95557,
"tags": "quantum-mechanics"
} |
Subscriber gets older timestamped messages | Question:
2 different nodes are listening to a msg with the same topic. 1 node (written in C++) gets the msg and print its content in an expected, minimum delay. But the other (python) node shows the older content and I have no idea why this delay occurs. I'd appreciate any ideas.
Codes (partial, they show where I compare the values):
Successful listener (C++)
Unsuccessful listener (python) subclass
Unsuccessful listener (python) parent class
Coincident or not, the difference is the language (C++ vs Python), and extension (Codes in python is separated into parent and sub class). But I don't know if these factors matter (and hopefully they do not).
All nodes run on the same machine (Ubuntu 11.04, electric, Intel i7 quad).
(Update 11/2/11) I'm now comparing timestamps, and approx. 30 seconds after starting the program the difference of the timestamps from the two nodes is 8 ROS seconds.
Originally posted by 130s on ROS Answers with karma: 10937 on 2011-11-01
Post score: 0
Answer:
Solved it myself - changing the subscriber's queue_size works for me. Using the code example in the environment I described in the question, I increased it from 1 to 5 in the problematic code (the one in python). Btw, the queue_size on the publisher shows no influence here.
PS. However I still don't get why...I've read documents including this QA though. Is my understanding below correct...? I'll give a vote if someone corrects it.
Increasing queue_size means the number of msgs processed per callback cycle increases, and that number could be greater than the number of callback invocations during exactly the same period.
Originally posted by 130s with karma: 10937 on 2011-11-02
This answer was ACCEPTED on the original site
Post score: 0
Original comments
Comment by 130s on 2011-11-02:
@tfoote thank you, that makes perfect sense to me. Now I feel this queuing architecture might not be ROS-specific but instead more common in distributed systems in general, and I also suppose that could be why the documents out there don't explain the details.
Comment by tfoote on 2011-11-02:
All messages being received are put into a queue. When this queue has data in it, your callbacks will be called. However if data comes in faster than your callback executes, the queue will fill up. It looks like you had your queue length set to 1000. Meaning you could be 1000 messages behind. | {
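tfoote's queue description can be illustrated with a toy model in plain Python (this is only an analogy, not rospy itself; the `simulate` helper and its numbers are invented for illustration):

```python
from collections import deque

def simulate(queue_size, n_msgs, consume_every):
    """Toy subscriber: the publisher stamps messages 0..n_msgs-1, while the
    'callback' only fires on every consume_every-th publish. A bounded deque
    stands in for the subscriber queue; oldest entries are dropped when full."""
    q = deque(maxlen=queue_size)
    processed = []
    for stamp in range(n_msgs):
        q.append(stamp)                 # publisher side
        if stamp % consume_every == 0:  # slow callback side
            processed.append(q.popleft())
    return processed
```

With a large queue the slow callback keeps draining old stamps and falls far behind the newest message, while a queue of length 1 always hands it the freshest one.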
"domain": "robotics.stackexchange",
"id": 7158,
"tags": "ros, rospy, timestamp"
} |
Where to learn more about what Theoretical Computer Science is? | Question: I am a graduate student in math, and theoretical computer science is a domain I have never really understood, because I couldn't find a good read about it. I want to know what this domain is actually about, what kind of topics it is concerned with, what prerequisites are needed to embark on it, etc. For now, I just want to know:
What is a good introductory book to theoretical computer science?
Given that there is such a thing. If not, where should a mathematician who has basic knowledge about computer science (i.e. they know the basics of one or two programming languages) start if they want to understand what theoretical computer science is about? What do you recommend?
thanks!
Answer: First, "theoretical computer science" means different things to different people. I think for most users on this site, a historical caricature (which reflects some modern sociological tendencies) is that there is "Theory A" and "Theory B" (with no implied order relation between them): Theory A consists of the theory of algorithms, complexity theory, cryptography, and similar. Theory B consists of things like the theory of programming languages, theory of automata, etc. Depending on your tastes in mathematics, you may prefer one over the other (or like both equally). I am more familiar with "Theory A," so let me give some references there:
Start with Sipser's book. This will give you a good introduction to automata, Turing machines, computability, Kolmogorov complexity, P vs NP, and a few other complexity classes. It is very well written (in my opinion, it is one of the best-written technical books ever).
For algorithms, I have a slight preference for Kleinberg-Tardos, but there are many good introductory books out there. You might be especially interested in computational geometry, which has its own set of great books.
Given that you are a mathematics graduate student, a major branch of TCS that is missing from these books is algebraic complexity theory, which is often closely related to algebra (both commutative and non-commutative), representation theory, group theory, and algebraic geometry. There is a canonical text here, which is Burgisser-Clausen-Shokrollahi. It is somewhat encyclopedic, so may not be the best introduction, but I'm not sure there is a really introductory book in this area. You might also check out the surveys by Chen-Kayal-Wigderson and Shpilka-Yehudayoff.
After that, I'd suggest browsing through more advanced books on particular topics, depending on your mathematical taste:
Arora-Barak is more modern complexity theory (continues on where Sipser's book ends, so to speak), giving you a flavor of the techniques involved (mix of combinatorics and algebra, mostly)
Jukna's book on Boolean function complexity does similar, but more in-depth for Boolean circuit complexity in particular (very combinatorial in flavor)
Geometric complexity theory. See here or Landsberg's introduction for geometers.
O'Donnell's book Analysis of Boolean Functions has a more Fourier-analytic bent.
Cryptography. The more advanced mathematical aspects here are typically number theory and algebraic geometry. While these pure mathematical aspects represent only a small portion of cryptography, they are an important one that you might find interesting. Not being my area, I'm not sure of what a good starting book is here.
Coding theory. Here, the mathematical theory ranges from sphere-packing (see the book by Conway and Sloane) to algebraic geometry (e.g., the book by Stichtenoth). Again, not my area, so I'm not sure if these are the best starting points, but flipping through them you will quickly get the flavor and decide if you want to delve deeper.
And then there are many other mathematical topics that only appear in the research literature, like connections with foams, graph theory, C*-algebras (let me just point you to the Kadison-Singer conjecture), invariant theory, representation theory, quadratures, and on and on. See also these related questions
Resources for mathematicians hoping to learn more computer science
Are there any topics in theoretical CS that are more about pure math?
Applications of TCS to classical mathematics?
Solid applications of category theory in TCS? | {
"domain": "cstheory.stackexchange",
"id": 3792,
"tags": "reference-request, soft-question, books"
} |
Damped Coupled Oscillations | Question: I'm currently revising for a vibrations and waves module that I am taking as part of my physics degree.
One of the our final questions involved finding equations for the displacements of the two masses in this system as a superposition of their normal modes:
I found the equations of motion for each mass to be:
$$\ddot{x}_a + \gamma\dot{x}_a + x_a(\omega_0^2+\omega_s^2) = x_b\omega_s^2$$
$$\ddot{x}_b + \gamma\dot{x}_b + x_b(\omega_0^2+\omega_s^2) = x_a\omega_s^2$$
Here $\omega_0^2 = \frac{g}{l}$, $\omega_s^2 = \frac{k}{m}$, $\gamma = \frac{b}{m}$. I then let $q_1 = x_a+x_b$ and $q_2 = x_a-x_b$:
$$\ddot{q}_1 + \gamma\dot{q}_1 + q_1\omega_0^2 = 0$$
$$\ddot{q}_2 + \gamma\dot{q}_2 + q_2(\omega_0^2+2\omega_s^2) = 0$$
From here I can't see where to go. I did attempt substituting in a general solution such as $q_1 = C_1 \cos(\omega t)$, but I get a mixture of sines and cosines and I can't solve it for anything useful.
Any help would be great as this is the last topic that I need to learn! Thanks, Sean.
Answer: The problem boils down to solving a linear system of (first order) differential equations. I suppose you have not had this topic in your math classes yet, so I will not delve into the intricacies. You did the transformation from x and y variables to qs, and now you seem to have what is essentially two uncoupled damped oscillators. I would imagine that you have had the general solution to these explained in your classes. See Wikipedia for the formulae in case you have not: linear differential equations and damping. | {
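For anyone stuck at the same point, here is a sketch of the standard route (this is the textbook result for a damped oscillator, not part of the original exchange). Each normal coordinate obeys $\ddot{q} + \gamma\dot{q} + \Omega^2 q = 0$; substituting $q = e^{\lambda t}$ gives
$$\lambda^2 + \gamma\lambda + \Omega^2 = 0 \quad\Rightarrow\quad \lambda = -\frac{\gamma}{2} \pm \sqrt{\frac{\gamma^2}{4} - \Omega^2}.$$
In the underdamped case $\gamma/2 < \Omega$ this yields
$$q(t) = e^{-\gamma t/2}\left(A\cos\Omega' t + B\sin\Omega' t\right), \qquad \Omega' = \sqrt{\Omega^2 - \frac{\gamma^2}{4}},$$
with $\Omega^2 = \omega_0^2$ for $q_1$ and $\Omega^2 = \omega_0^2 + 2\omega_s^2$ for $q_2$. The displacements then follow as the superposition
$$x_a = \tfrac{1}{2}(q_1 + q_2), \qquad x_b = \tfrac{1}{2}(q_1 - q_2).$$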
"domain": "physics.stackexchange",
"id": 13726,
"tags": "homework-and-exercises, classical-mechanics, coupled-oscillators"
} |
gene duplicated on genome but is different | Question: I've been looking at a miRNA cluster on a genome and I found it three times. The first two are right next to each other and look exactly the same, the third sequences is on a different scaffold of the genome and is both shorter and has a number of mismatches with the first two sequences.
Is there a particular name for this? The first two are most likely some sort of gene duplication, but I'm not too sure on what to call the third,
Thanks
Answer: That sounds like one of two things. Either a sequencing or assembly error (quite likely since you're talking about scaffolds so, presumably, not a completely assembled genome) or duplication and subsequent pseudogenization.
It is relatively common for genes to be duplicated and then to be rendered inactive by a mutation event. Such genes can still be detected in the genome and are called pseudogenes. The process of their creation is, unsurprisingly, called pseudogenization. You can think of such genes as fossils. They often have significant sequence similarity to the original and can even be transcribed.
A quick search brought up a couple of articles discussing the pseudogenization of specific genes:
Pseudogenization of CCL14 in the Ochotonidae (pika) family.
Pseudogenization of sopA and sopE2 is functionally linked and contributes to virulence of Salmonella enterica serovar Typhi.
That said, it is also quite possible that what you describe is a simple duplicate that has diverged without becoming a pseudogene. The details will depend on the particular sequence. Are there in-frame STOP codons? Can the gene still be transcribed? Such duplication events give rise to paralogs (note that all three of the genes you mention would be paralogs of each other, it's just that one has diverged more than the others). You can read more about those here: What is the difference between orthologs, paralogs and homologs?
"domain": "biology.stackexchange",
"id": 4481,
"tags": "genetics"
} |
How to go about getting raw data from turtlebot kinect sensor? | Question:
Hello, I'm trying to create a program that will have a turtlebot map a given area. However, I am very new to this. I understand that the turtlebots publish some kind of point cloud as sensed by the kinect, but I don't understand exactly how to subscribe to this data, and what type it is so that I can work with it to create a map.
Anyone have any suggestions or know any good tutorials that could help me out?
Originally posted by mysteriousmonkey29 on ROS Answers with karma: 170 on 2014-01-21
Post score: 0
Answer:
Subscribing to a topic is explained in the Publisher/Subscriber Tutorial.
To subscribe to the Kinect you have to adapt the type and name of the topic.
One way to get information about the data the Kinect node is publishing is to type
$ rostopic list
in your terminal. This outputs a list of the topics that are currently active on your machine. With
$ rostopic info <topic name>
you get more information about a topic; there you can also find its message type. With
$ rosmsg show <message name>
you get detailed information about the data structure of that message type.
I hope this helps you a little bit.
Originally posted by BennyRe with karma: 2949 on 2014-01-22
This answer was ACCEPTED on the original site
Post score: 0
Original comments
Comment by mysteriousmonkey29 on 2014-01-30:
Ok, thank you! | {
"domain": "robotics.stackexchange",
"id": 16718,
"tags": "ros"
} |
Identifying an Unknown Blood Type | Question:
The following case study has a student working with blood samples to
identify their blood types (A, B, AB, and O). Consider the situation
and answer the questions.
A student is given eight red blood cell suspensions, which contain
only red blood cells, and the matching serum from each sample. (Serum
is blood plasma in which fibrinogen has been removed.)
She is asked to identify each of the four different blood types
present. To test these samples, the only materials she has at her
disposal are a sample of type A red cell suspension and the serum from
type A blood.
Describe the step-by-step procedure that she must use to identify the
four different blood types present. In each step, interpret what it
means if the cells in the sample clump together, or if they do not
clump together.
I am mostly confused about what exactly "she has at her disposal are a sample of type A red cell suspension and the serum from type A blood" means. What is a type A red cell suspension?
I already know that:
TYPE A: has anti-B antibodies
TYPE B: has anti-A antibodies
TYPE AB: has neither anti-A nor anti B antibodies
TYPE O: has both anti-A and anti-B antibodies.
Answer: The passage:
To test these samples, the only materials she has at her disposal are
a sample of type A red cell suspension and the serum from type A
blood.
means, I take it, that she has available two containers: One has red blood cells from a known blood-type A individual suspended in a neutral fluid such as saline solution. The other container has blood plasma (without red blood cells or fibrinogen) also from a known blood-type A individual.
These should be enough to determine the blood types of four individuals from the unknown test tubes (four pairs of test tubes, each pair being one with cell suspensions and one with plasma, coming from four individuals having different blood types). | {
"domain": "biology.stackexchange",
"id": 5266,
"tags": "human-biology, hematology, red-blood-cell, blood-group"
} |
Renaming node (or nodelet) in roslaunch for top | Question:
I have some pretty nodelet heavy roslaunch files.
I'd like their 'command' in top to be more descriptive than just 'nodelet'.
Is there a way to rename the execution command so that linux will report its CPU usage with a custom name? It can get pretty bad when there's 12+ nodelets. :)
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
3634 chad 20 0 164m 18m 10m S 46 0.5 3:58.35 nodelet
3618 chad 20 0 163m 18m 10m S 8 0.5 1:32.81 nodelet
3592 chad 20 0 215m 32m 12m S 5 0.8 0:41.08 nodelet
3574 chad 20 0 37860 9896 2920 S 1 0.2 0:02.60 python
I do have a workaround with 'htop' as it displays the entire command including launch args, but that's clunkier and not installed by default.
Although it's not really relevant to my problem, here's an example launch file. The real issue is that top reports the executable name as compiled, but for nodelets, it's always 'nodelet'.
Originally posted by Chad Rockey on ROS Answers with karma: 4541 on 2011-09-29
Post score: 1
Original comments
Comment by Chad Rockey on 2011-09-29:
I've edited to include an example launch file that contains many nodelets.
Comment by DimitriProsser on 2011-09-29:
Could you post an example launch file?
Answer:
Try "top -c". I think that will at least replace htop.
Originally posted by DimitriProsser with karma: 11163 on 2011-09-29
This answer was ACCEPTED on the original site
Post score: 2 | {
"domain": "robotics.stackexchange",
"id": 6826,
"tags": "roslaunch, nodelet"
} |
CSV Splitter in Bash | Question: I am splitting a csv file where the first 3 columns will be common for all the output files.
input file:
h1 h2 h3 o1 o2 ....
a b c d e ....
a1 b1 c1 d1 e1 ....
output files:
o1.csv:
h1 h2 h3 o1
a b c d
a1 b1 c1 d1
o2.csv:
h1 h2 h3 o2
a b c e
a1 b1 c1 e1
So if there are n columns in the input file , the code creates n-3 output files.
However my code is inefficient and is quite slow. It takes 20 seconds for 50000 rows.
old_IFS=$IFS
START_TIME=`date`
DELIMITER=,
# reading and writing headers
headers_line=$(head -n 1 "$csv_file")
IFS=$DELIMITER read -r -a headers <<< $headers_line
common_headers=${headers[0]}$DELIMITER${headers[1]}$DELIMITER${headers[2]}
for header in "${headers[@]:3}"
do
# writing headers to every file
echo $common_headers$DELIMITER$header > "$header$START_TIME".csv
done
# reading csv file line by line
i=1
while IFS=$DELIMITER read -r -a row_data
do
test $i -eq 1 && ((i++)) && continue # ignoring headers
j=0
common_data=${row_data[0]}$DELIMITER${row_data[1]}$DELIMITER${row_data[2]}
for val in "${row_data[@]:3}"
do
# appending row to every new csv file
echo $common_data$DELIMITER$val >> "${headers[(($j+3))]}$START_TIME".csv
((j++))
done
done < $csv_file
IFS=${old_IFS}
Any suggestions are appreciated.
Answer: Bash is not efficient for processing large files line by line. For small data it's fine, but when a script starts to feel heavy, it's good to look for other alternatives. Also note that the line by line processing and breaking into columns is not easy to get right, I bet you spent quite some time on this. You wrote it well, but the result is not particularly easy to read, and I'm afraid this is as good as it gets with Bash.
So what's the alternative? Try with cut in a loop. Yes that will imply reading the file n-3 times, but I bet it will be faster than the pure Bash solution. And it will be nicely readable too, which is an extremely important benefit.
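A minimal sketch of that idea (the sample input and output filenames here are my own; the original's timestamp suffix is omitted for brevity):

```shell
#!/bin/sh
# Build a small sample input, then split it with one cut pass per output column:
# columns 1-3 are common, so for each extra column k we emit columns 1-3 plus k.
printf 'h1,h2,h3,o1,o2\na,b,c,d,e\na1,b1,c1,d1,e1\n' > input.csv

ncols=$(head -n 1 input.csv | tr ',' '\n' | wc -l | tr -d ' ')
k=4
while [ "$k" -le "$ncols" ]; do
    header=$(head -n 1 input.csv | cut -d, -f "$k")
    cut -d, -f 1-3,"$k" input.csv > "$header.csv"   # one pass per column
    k=$((k + 1))
done
```

Each pass over the file is a single `cut` invocation, so the per-line work happens in C rather than in the Bash interpreter.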
A few notes about technique:
Use $(...) instead of `...`
You took care to save IFS and then restore at the end, but it was unnecessary: when you do var=... somecmd, the value of var is only set in the environment of somecmd, it is unchanged for the current script. That being said, what you did is safe, so it's fine.
The incrementing i variable in the loop is a bit misleading, because i is a common name in counting loops, and at first I thought the count itself had some purpose. But it doesn't; this variable is used only to distinguish the first line from the others. I would write it differently, to make the intention perfectly obvious.
"domain": "codereview.stackexchange",
"id": 25762,
"tags": "performance, bash, csv, linux"
} |
Why is para-aminobenzoic acid more acidic than ortho-aminobenzoic acid? | Question:
Compare the acidic strength of o-, m-, p-aminobenzoic acids.
I reasoned that the meta isomer will be the most acidic, as the $\ce{NH2}$ group won't be able to show its +R effect from that position.
But among the other two, i.e. ortho and para, shouldn't ortho be more acidic because of the ortho effect?
But according to my book and the data I found online:
$$
\begin{array}{lc}
\hline
\text{Compound} & \mathrm{p}K_\mathrm{a}\text{(amino)} \\
\hline
\textit{o}\text{-Aminobenzoic acid} & 4.89 \\
\textit{m}\text{-Aminobenzoic acid} & 4.79 \\
\textit{p}\text{-Aminobenzoic acid} & 4.77 \\
\hline
\end{array}
$$
Answer: The problem in ortho-aminobenzoic acid is that the acidic hydrogen of the carboxylic group is H-bonded to the lone pair of the nitrogen in the amino group. As a result it is more difficult to remove than in para-aminobenzoic acid, since the H-bond must also be broken during the acid-base reaction. Para-aminobenzoic acid has no such H-bond because of the distance between the two groups.
I would like to add that the hydrogen of the $\ce{-COOH}$ group is the more acidic one and $\ce{N}$ has a lone pair, which is why the H-bond forms with the $\ce{-NH2}$ group as the electron donor and not the other way round.
Also, I presume that by "ortho effect" you meant that, due to repulsion by the $\ce{-NH2}$ group, the $\ce{-COOH}$ group would be pushed out of the plane. However, as we have seen, this is difficult because it is already H-bonded to the nitrogen atom, and moving out of the plane would strain the H-bond. Moreover, the repulsion by the $\ce{-NH2}$ group is not intense enough, so the molecule would (for the most part) avoid that strain.
"domain": "chemistry.stackexchange",
"id": 15159,
"tags": "organic-chemistry, acid-base, aromatic-compounds, carbonyl-compounds, amines"
} |
Templated Fraction Class and programming design | Question: Story
I was trying to solve a hard problem from Project Euler earlier today (continued from yesterday) and I wanted to graph some things up and play a little bit with fractions. To work with fractions, I decided to implement a little OOP-ish fraction class. I was hoping to brush up whatever little knowledge of C++ I have into trying and implementing a full blown templated project (as I don't practice software design very often and am relatively on the newer side of programming (1.8-ish years)). After fixing what seems like a million template errors (we all can agree on how terrible compilers are with reporting template errors), the following code is what I've arrived at in one day's work.
Code
/**
Lost Arrow (Aryan V S)
Monday 2020-10-19
**/
#ifndef FRACTION_H_INCLUDED
#define FRACTION_H_INCLUDED
#include <algorithm>
#include <cstdint>
#include <iostream>
#include <numeric>
#include <stdexcept>
#include <type_traits>
#include <utility>
/**
* The fraction class is meant to represent and allow usage of fractions from math.
* It is represented by a numerator and a denominator (private context).
* The interface provides you with simple operations:
* - Construction/Assignment
* - Comparison (equality/inequality, less/greater than, lesser/greater equal to, three-way (C++20 spaceship operator))
* - Arithmetic (addition/subtraction, multiplication/division)
* - Conversion (bool, decimal)
* - I/O Utility
* - Access/Modification
*/
template <typename T>
class fraction {
/**
* fraction <T> is implemented to work only with T = [integer types]
* using floating point or any other types in fractions doesn't make sense.
*/
static_assert(std::is_integral <T>::value, "fraction requires integral types");
public:
/**
* Constructors
*/
fraction ();
fraction (const T&, const T&);
fraction (T&&, T&&);
fraction (const fraction&);
fraction (fraction&&);
fraction (const std::pair <T, T>&);
fraction (std::pair <T, T>&&);
/**
* Assignment
*/
fraction <T>& operator = (const fraction&);
fraction <T>& operator = (fraction&&);
/**
* Access/Modification
*/
T nr () const;
T dr () const;
T& nr ();
T& dr ();
std::pair <T, T> value () const;
std::pair <T&, T&> value ();
/**
* Utility
*/
void print (std::ostream& = std::cout) const;
void read (std::istream& = std::cin);
template <typename S>
S decimal_value () const;
std::pair <T, T> operator () () const;
std::pair <T&, T&> operator () ();
operator bool() const;
/**
* Comparison
*/
int8_t spaceship_operator (const fraction <T>&) const;
template <typename S>
friend bool operator == (const fraction <S>&, const fraction <S>&);
template <typename S>
friend bool operator != (const fraction <S>&, const fraction <S>&);
template <typename S>
friend bool operator <= (const fraction <S>&, const fraction <S>&);
template <typename S>
friend bool operator >= (const fraction <S>&, const fraction <S>&);
template <typename S>
friend bool operator < (const fraction <S>&, const fraction <S>&);
template <typename S>
friend bool operator > (const fraction <S>&, const fraction <S>&);
/**
* Arithmetic
*/
fraction <T>& operator += (const fraction <T>&);
fraction <T>& operator -= (const fraction <T>&);
fraction <T>& operator *= (const fraction <T>&);
fraction <T>& operator /= (const fraction <T>&);
private:
mutable T m_nr, m_dr;
/**
* Utility
*/
void normalize () const;
};
/** ::normalize
* The normalize utility function converts a fraction into its mathematically equivalent
* "simplest" form.
* The functions throws a domain error if the denominator is ever set to 0.
*/
template <typename T>
inline void fraction <T>::normalize () const {
if (m_dr == 0)
throw std::domain_error("denominator must not be zero");
T gcd = std::gcd(nr(), dr());
m_nr /= gcd;
m_dr /= gcd;
}
/** ::spaceship_operator
* The spaceship operator provides similar functionality as the <=> operator introduced in C++20.
* If self < other : return value is -1
* If self == other : return value is 0
* If self > other : return value is 1
*/
template <typename T>
inline int8_t fraction <T>::spaceship_operator (const fraction <T>& other) const {
normalize();
other.normalize();
if ((*this)() == other())
return 0;
if (nr() * other.dr() < other.nr() * dr())
return -1;
return 1;
}
/**
* Constructors
*/
template <typename T>
fraction <T>::fraction ()
: m_nr (0)
, m_dr (1)
{ }
template <typename T>
fraction <T>::fraction (const T& nr, const T& dr)
: m_nr (nr)
, m_dr (dr)
{ normalize(); }
template <typename T>
fraction <T>::fraction (T&& nr, T&& dr)
: m_nr (std::move(nr))
, m_dr (std::move(dr))
{ normalize(); }
template <typename T>
fraction <T>::fraction (const fraction <T>& other)
: fraction (other.nr(), other.dr())
{ }
template <typename T>
fraction <T>::fraction (fraction <T>&& other)
: fraction (std::move(other.nr()), std::move(other.dr()))
{ }
template <typename T>
fraction <T>::fraction (const std::pair <T, T>& other)
: fraction (other.first, other.second)
{ }
template <typename T>
fraction <T>::fraction (std::pair <T, T>&& other)
: fraction (std::move(other.first), std::move(other.second))
{ }
/**
* Assignment
*/
template <typename T>
fraction <T>& fraction <T>::operator = (const fraction <T>& other) {
if (this != &other) {
nr() = other.nr();
dr() = other.dr();
}
return *this;
}
template <typename T>
fraction <T>& fraction <T>::operator = (fraction <T>&& other) {
if (this != &other) {
nr() = std::move(other.nr());
dr() = std::move(other.dr());
}
return *this;
}
/** ::nr
* An accessor function that returns a copy of the numerator value to the caller.
*/
template <typename T>
inline T fraction <T>::nr () const {
return m_nr;
}
/** ::dr
* An accessor function that returns a copy of the denominator value to the caller.
*/
template <typename T>
inline T fraction <T>::dr () const {
return m_dr;
}
/** ::nr
 * A modification function that returns a reference to the numerator value for the caller.
*/
template <typename T>
inline T& fraction <T>::nr () {
return m_nr;
}
/** ::dr
 * A modification function that returns a reference to the denominator value for the caller.
*/
template <typename T>
inline T& fraction <T>::dr () {
return m_dr;
}
/** ::print
 * A utility function that prints to the standard output stream parameter in the form: nr/dr
*/
template <typename T>
inline void fraction <T>::print (std::ostream& stream) const {
normalize();
stream << nr() << "/" << dr();
}
/** ::read
 * A utility function that reads from the standard input stream in the form: nr dr
*/
template <typename T>
inline void fraction <T>::read (std::istream& stream) {
stream >> nr() >> dr();
normalize();
}
/** ::decimal_value
 * A utility function that converts a fraction to its mathematically equivalent floating point representation.
* This function requires its template type as a floating point type.
*/
template <typename T>
template <typename S>
inline S fraction <T>::decimal_value () const {
static_assert(std::is_floating_point <S>::value, "decimal notation requires floating point type");
normalize();
return static_cast <S> (nr()) / static_cast <S> (dr());
}
/** ::value
 * A utility function that returns a standard pair with a copy of the numerator and denominator for the caller.
*/
template <typename T>
inline std::pair <T, T> fraction <T>::value () const {
return std::pair <T, T> (nr(), dr());
}
/** ::value
 * A utility function that returns a standard pair with references to the numerator and denominator for the caller.
*/
template <typename T>
inline std::pair <T&, T&> fraction <T>::value () {
return std::pair <T&, T&> (nr(), dr());
}
/** ::operator ()
* Provides same functionality as ::value with copy.
*/
template <typename T>
inline std::pair <T, T> fraction <T>::operator () () const {
return value();
}
/** ::operator ()
* Provides the same functionality as ::value with reference.
*/
template <typename T>
inline std::pair <T&, T&> fraction <T>::operator () () {
return std::pair <T&, T&> (nr(), dr());
}
/** ::bool()
* Converts a fraction to a boolean value, which is false if the numerator is 0.
*/
template <typename T>
inline fraction <T>::operator bool() const {
return bool(nr());
}
/**
* Comparison (implemented with the spaceship_operator)
*/
template <typename S>
inline bool operator == (const fraction <S>& left, const fraction <S>& right) {
return left.spaceship_operator(right) == 0;
}
template <typename S>
inline bool operator != (const fraction <S>& left, const fraction <S>& right) {
return left.spaceship_operator(right) != 0;
}
template <typename S>
inline bool operator <= (const fraction <S>& left, const fraction <S>& right) {
return left.spaceship_operator(right) <= 0;
}
template <typename S>
inline bool operator >= (const fraction <S>& left, const fraction <S>& right) {
return left.spaceship_operator(right) >= 0;
}
template <typename S>
inline bool operator < (const fraction <S>& left, const fraction <S>& right) {
return left.spaceship_operator(right) == -1;
}
template <typename S>
inline bool operator > (const fraction <S>& left, const fraction <S>& right) {
return left.spaceship_operator(right) == 1;
}
/**
* Arithmetic
*/
template <typename T>
fraction <T>& fraction <T>::operator += (const fraction <T>& other) {
normalize();
other.normalize();
T lcm = std::lcm(dr(), other.dr());
nr() = nr() * (lcm / dr()) + other.nr() * (lcm / other.dr());
dr() = lcm;
normalize();
return *this;
}
template <typename T>
fraction <T>& fraction <T>::operator -= (const fraction <T>& other) {
return *this += fraction <T> (-other.nr(), other.dr());
}
template <typename T>
fraction <T>& fraction <T>::operator *= (const fraction <T>& other) {
normalize();
other.normalize();
nr() *= other.nr();
dr() *= other.dr();
normalize();
return *this;
}
template <typename T>
fraction <T>& fraction <T>::operator /= (const fraction <T>& other) {
return *this *= fraction <T> (other.dr(), other.nr());
}
template <typename T>
fraction <T> operator + (fraction <T>& left, fraction <T>& right) {
fraction <T> result (left);
return result += right;
}
template <typename T>
fraction <T> operator - (fraction <T>& left, fraction <T>& right) {
fraction <T> result (left);
return result -= right;
}
template <typename T>
fraction <T> operator * (fraction <T>& left, fraction <T>& right) {
fraction <T> result (left);
return result *= right;
}
template <typename T>
fraction <T> operator / (fraction <T>& left, fraction <T>& right) {
fraction <T> result (left);
return result /= right;
}
#endif // FRACTION_H_INCLUDED
Questions and other stuff
According to modern programming standards in C++, how well is my code written? According to software engineering standards, what are the areas I need to improve in? Is my code efficient and well written? Are there any ways I could speed up some computations for efficiency? What improvements to the interface are possible? What could I clean up or refactor in my code? Other relevant insights and tips/discussion/reviews will be appreciated.
To make my implementation more efficient, I'm thinking that I have normalize() at a lot of places which runs in O(log(a.b)) and I could add another private variable which checks if a fraction has been normalized or not. This could get rid of redundant runs on normalize. Any other suggestions will be warmly welcomed.
Please don't mind the terrible documentation. I would do that part much better if my goal were to write a fraction implementation for its own sake rather than to solve some problem, and if I didn't have to finish it in one day's work. Also, any comments on a good procedure for documenting stuff are welcome (I know this is subjective and different people have different answers, but something like a short 101 will likely provide helpful info for future projects).
Thanks SE Community!
Answer: Overview
Pretty good. Big points:
Move Semantics should be noexcept
Overuse of normalize() is going to make the class inefficient.
I would only normalize for printing personally.
Don't use nr() when m_nr is just as valid.
You are already tightly bound in all other member functions.
read() should not change the state of the object if it fails.
Things you should probably do:
I would add a swap()
Put one liners into the class definition.
Put forward declarations of the arithmetic operations next to the class.
I should not need to read all the code to find them.
print/read should be symmetric
Code Review
Best practice would be to put this in a namespace.
This works:
/**
* fraction <T> is implemented to work only with T = [integer types]
* using floating point or any other types in fractions doesn't make sense.
*/
static_assert(std::is_integral <T>::value, "fraction requires integral types");
But since you specified C++20, should you not be using concepts (requires rather than static_assert)? Not very familiar with the exact definitions yet, so I don't know how to do it myself.
The move operations should be noexcept:
fraction (fraction&&) noexcept;
fraction <T>& operator = (fraction&&) noexcept;
When your object is used in a standard container this helps. Because the standard guarantees the "Strong Exception Guarantee" on some operations, it must use copy semantics when resizing; but if you guarantee (with noexcept) that you cannot throw an exception, then it can use move semantics and speed up these operations.
You have the normal constructors and assignment operators.
fraction (const T&, const T&);
fraction (T&&, T&&);
fraction <T>& operator = (const fraction&);
fraction <T>& operator = (fraction&&);
But since this type is built from integers, it should support assigning an integer to it directly. I would allow construction/assignment with a single integer (and use 1 as the bottom of the fraction).
fraction (const T& n, const T& d = 1);
fraction (T&& n, T&& d = 1);
fraction <T>& operator = (const fraction&);
fraction <T>& operator = (fraction&&);
// I would provide these explicitly
// Otherwise the compiler will generate a conversion with a single
// parameter constructor and then use the assignment above. This
// may be slightly sub optimal.
fraction <T>& operator = (const T&);
fraction <T>& operator = (T&&);
Return these by const reference.
T nr () const;
T dr () const;
You mention that T may be some "Big Integer" implementation, not just a standard POD int, so you want to avoid a copy if you don't actually need it.
T const& nr () const;
T const& dr () const;
^ not a fan of this space.
You give access to the internal state.
T& nr ();
T& dr ();
This makes your use of normalize() in the constructor redundant: you no longer guarantee that the fraction is normalized.
Not sure I like this.
std::pair <T&, T&> value ();
Not going to argue against it. But it seems like you are giving access to the internals without good reason.
I like this:
void print (std::ostream& = std::cout) const;
void read (std::istream& = std::cin);
Don't see the friend functions to help printing:
friend std::ostream& operator<<(std::ostream& str, fraction const& d)
{
d.print(str);
return str;
}
friend std::istream& operator>>(std::istream& str, fraction& d)
{
d.read(str);
return str;
}
I would always make the bool operator explicit. This will prevent an accidental conversion to bool in contexts that you don't want it happening.
operator bool() const;
The reason to make these free-standing functions (FSF) rather than members is to allow symmetric auto conversion, i.e. if they are members, only the rhs could be auto-converted to a fraction. If they are non-members then both the lhs and the rhs can be auto-converted to fractions.
friend bool operator == (const fraction <S>&, const fraction <S>&);
... etc
Normally I'm not a fan of auto conversion. But in this use case it is a good idea. Unfortunately you don't provide the appropriate constructors to allow auto conversion from an integer type to a fraction (you need a single-argument constructor).
If you create these:
fraction<T>& operator += (const fraction <T>&);
... etc
Then why don't you implement:
fraction<T> operator + (fraction<T> const& lhs, fraction<T> const& rhs);
... etc
these are FSF for the same reason as these:
friend bool operator == (const fraction <S>&, const fraction <S>&);
This is not a good use of mutable.
mutable T m_nr, m_dr;
Mutable should be used on members that don't represent the state of the object, i.e. you are caching some printable value: something expensive to compute that you keep around, but that could easily be re-computed from the members that actually represent the state of the object.
No need to call nr() inside the class; simply use m_nr.
T gcd = std::gcd(nr(), dr());
Is normalize() a rather expensive operation?
inline int8_t fraction <T>::spaceship_operator (const fraction <T>& other) const {
Thus calling it as part of a comparison seems overkill.
normalize();
other.normalize();
Why not calculate the value and do a comparison?
double lhsTest = 1.0 * m_nr/m_dr;
double rhsTest = 1.0 * other.m_nr/other.m_dr;
In comparison to normalization this is relatively cheap? I think?
I would not bother to normalize during construction.
fraction <T>::fraction (const T& nr, const T& dr)
: m_nr (nr)
, m_dr (dr)
{ normalize(); }
You don't guarantee that the object will remain normalized (as you give access to the internal members), so why do this expensive operation each time? I would simply save normalization for printing.
I prefer the standard Copy and Swap Idiom here.
template <typename T>
fraction <T>& fraction <T>::operator = (const fraction <T>& other) {
if (this != &other) {
nr() = other.nr();
dr() = other.dr();
}
return *this;
}
The check for assignment to self is a test for something that basically never happens and is thus a pessimization in real code (obviously assignment to self still needs to work). The copy-and-swap idiom gets away from this by always doing the copy, which may seem like a pessimization but in real life is not, as you don't get the branch-prediction reset.
Prefer the swap implementation of this:
template <typename T>
fraction <T>& fraction <T>::operator = (fraction <T>&& other) {
if (this != &other) {
nr() = std::move(other.nr());
dr() = std::move(other.dr());
}
return *this;
}
Again, no need for a self-assignment test.
I would also add a swap() method and swap() friend function:
void fraction<T>::swap(fraction<T>& other) noexcept
{
using std::swap;
swap(m_nr, other.m_nr);
swap(m_dr, other.m_dr);
}
friend void swap(fraction<T>& lhs, fraction<T>& rhs)
{
lhs.swap(rhs);
}
I don't like the external definition of functions that are this simple.
template <typename T>
inline T fraction <T>::nr () const {
return m_nr;
}
One-liner functions should be part of the class declaration.
inline void fraction <T>::print (std::ostream& stream) const {
normalize();
stream << nr() << "/" << dr();
^^^^ You are inside the class.
You are already tightly coupled to the implementation.
You should simply use m_nr.
}
Here is "about" the only time I would normalize a fraction. Printing is already a relatively slow operation, so normalizing is not going to add a great cost, relatively speaking.
I would implement this function like this:
void fraction <T>::print (std::ostream& stream) const
{
fraction<T> temp(*this);
temp.normalize(); // Now normalize can happen on a non-const object.
stream << temp.m_nr << "/" << temp.m_dr;
}
I would want the read operation to be symmetric with the print() function. Thus I would expect it to read and discard the '/' separator.
Also, the read operation should only mutate the current object if the read succeeds. So I would change it slightly.
void fraction<T>::read(std::istream& stream)
{
fraction<T> temp;
char div;
if ((stream >> temp.m_nr >> div >> temp.m_dr) && div == '/') {
// read worked and we have the correct separator.
// so we can update the state of the object.
temp.swap(*this);
}
else {
// some failure. Make sure the stream is marked bad.
stream.setstate(std::ios::badbit);
}
}
That seems like a non-optimal comparison.
inline bool operator == (const fraction <S>& left, const fraction <S>& right)
{
return left.spaceship_operator(right) == 0;
}
Quite expensive if it is not equal.
You don't need to call normalize three times! Call it once at the end of the operation (if you must). I would not bother (unless there is some overflow issue).
PS. You missed the inline here.
template <typename T>
fraction <T>& fraction <T>::operator += (const fraction <T>& other) {
normalize();
other.normalize();
T lcm = std::lcm(dr(), other.dr());
nr() = nr() * (lcm / dr()) + other.nr() * (lcm / other.dr());
dr() = lcm;
normalize();
return *this;
}
This can be simplified to:
fraction<T>& fraction<T>::operator+=(fraction <T> const& other)
{
if (m_dr == other.m_dr) {
m_nr += other.m_nr;
}
else {
m_nr = (other.m_nr * m_dr) + (m_nr * other.m_dr);
m_dr = m_dr * other.m_dr;
if (m_dr > SOME_BOUNDARY_CONDITION) {
normalize();
}
}
return *this;
}
// Notice the lhs is passed by value to get a copy.
// So we don't need to do an explicit copy in the function;
fraction<T> operator+(fraction<T> lhs, fraction<T> const& rhs) {return lhs += rhs;} | {
"domain": "codereview.stackexchange",
"id": 39704,
"tags": "c++, object-oriented, c++17"
} |
What makes a superconductor topological? | Question: I have read a fair bit about topological insulators and proximity induced Majorana bound states when placing a superconductor in proximity to a topological insulator.
I've also read a bit about cuprates being related to topological superconductivity if that helps.
What I can't quite understand is: what defines, and what is, a pure topological superconductor?
Or is this simply not the case and is topological superconductivity something that can only be achieved by means of proximity effect arrangements?
A general description of what one is would probably be most helpful.
Answer: In short, what makes a superconductor topological is the nontrivial band structure of the Bogoliubov quasiparticles. Generally one can classify non-interacting gapped fermion systems based on single-particle band structure (as well as symmetry), and the result is the so-called ten-fold way/periodic table. The topological superconductivity mentioned in the question is related to the class D, namely superconductors without any symmetries other than the particle-hole symmetry. The simplest example in 2D is a spinless $p_x+ip_y$ superconductor:
$H=\sum_k c_k^\dagger(\frac{k^2}{2m}-\mu)c_k+ \Delta c_k^\dagger(k_x+ik_y)c_{-k}^\dagger+\text{h.c.}=\sum_k (c_k^\dagger, c_{-k})\left[(k^2/2m-\mu)\tau_z+\Delta k_x\tau_x+\Delta k_y\tau_y\right]\begin{pmatrix}c_k\\ c_{-k}^\dagger\end{pmatrix}$
This Hamiltonian defines a map from the $k$ space (topologically a sphere $S^2$) to a $SU(2)$ matrix $m_k\cdot \sigma$ where $m_k\propto (\Delta k_x, \Delta k_y, \frac{k^2}{2m}-\mu)$ (then normalized), which also lives on a sphere. Therefore such maps are classified by $\pi_2(S^2)=\mathbb{Z}$. If two Hamiltonians belong to the same equivalence class in the homotopy group, it means that one can continuously deform the Hamiltonian from one to another without closing the gap, thus topologically indistinguishable.
The integer, called the Chern number $C$, that classifies the class D topological superconductors can be calculated from the Hamiltonian, and in this case it is $C=1$. This idea can be generalized to other symmetry classes and dimensions, basically one needs to understand the map from the momentum space to the appropriate single-particle "Hamiltonian" space (the general case is much more complicated than the $2\times 2$ Hamiltonian).
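As a concrete check of the $|C|=1$ claim, the Chern number can be computed numerically. The sketch below is my own illustration, not from the text: it uses a common lattice stand-in $\vec d(k) = (\sin k_x, \sin k_y, m + \cos k_x + \cos k_y)$ for the continuum $p_x+ip_y$ Bogoliubov Hamiltonian, together with the Fukui–Hatsugai link-variable method; the overall sign of $C$ depends on orientation conventions, so only its magnitude should be trusted here.

```python
import math, cmath

def chern_number(m, N=41):
    """Chern number of the lower band of H(k) = d(k).sigma on an N x N k-grid,
    with d = (sin kx, sin ky, m + cos kx + cos ky) as a lattice stand-in for
    the continuum p_x + i p_y model.  N odd keeps kx = ky = pi off the grid,
    avoiding the gauge singularities of the eigenvector formulas below."""
    def lower_band(kx, ky):
        dx, dy = math.sin(kx), math.sin(ky)
        dz = m + math.cos(kx) + math.cos(ky)
        d0 = math.sqrt(dx * dx + dy * dy + dz * dz)
        # Two algebraic forms of the eigenvector for eigenvalue -|d|; pick the
        # one that cannot vanish given the sign of dz.
        if dz > 0:
            v = (-(dx - 1j * dy), complex(dz + d0))
        else:
            v = (complex(dz - d0), dx + 1j * dy)
        n = math.sqrt(abs(v[0]) ** 2 + abs(v[1]) ** 2)
        return (v[0] / n, v[1] / n)

    ks = [2 * math.pi * i / N for i in range(N)]
    u = [[lower_band(kx, ky) for ky in ks] for kx in ks]

    def link(a, b):  # U = <a|b> / |<a|b>|, gauge-covariant link variable
        z = a[0].conjugate() * b[0] + a[1].conjugate() * b[1]
        return z / abs(z)

    total = 0.0
    for i in range(N):
        for j in range(N):
            a, b = u[i][j], u[(i + 1) % N][j]
            c, d = u[(i + 1) % N][(j + 1) % N], u[i][(j + 1) % N]
            # Berry phase around one plaquette, taken in the principal branch
            total += cmath.phase(link(a, b) * link(b, c) * link(c, d) * link(d, a))
    return round(total / (2 * math.pi))
```

For $0<|m|<2$ the lattice bands are inverted and $|C|=1$, while for $|m|>2$ the phase is trivial and $C=0$, mirroring the topological/trivial distinction described above.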
This toy model (and its one-dimensional descendants) is behind all recent proposals of realizing topological superconductors in solid state systems. The basic idea is to combine various mundane elements (semiconductors, s-wave superconductor, ferromagnet, etc): since electrons have spin-$1/2$, one needs to have Zeeman field to break the spin degeneracy and get a non-degenerate Fermi surface (thus effectively "spinless" fermions, really spin-polarized). However, in s-wave superconductors electrons with opposite spins are paired. This is why spin-orbit coupling is necessary since it makes the electron spin "winds" around on the Fermi surface, so that at $k$ and $-k$ electrons can pair up. Putting all these together one can realize a topological superconductor.
There are various physical consequences. The general feature is that something peculiar happens on the boundary between superconductors belonging to different topological classes. For example, if the $p_x+ip_y$ superconductor has an edge to the vacuum, there are gapless chiral Majorana fermions localized on the edge. Also if one puts a $hc/2e$ vortex into the superconductor, it traps a zero-energy Majorana bound state.
The question also mentioned cuprates. There are some speculations about the possibility of $d+id$ pairing in cuprates, probably motivated by measurement of Kerr rotations which is a signal of time-reversal symmetry breaking. However this is highly debatable and not very well accepted. Notice that $d+id$ superconductor is the $C=2$ case of the class D family.
To learn more about the subject I recommend the excellent review by Jason Alicea: http://arxiv.org/abs/1202.1293. | {
"domain": "physics.stackexchange",
"id": 35924,
"tags": "condensed-matter, superconductivity, topological-order, topological-insulators, majorana-fermions"
} |
Python factory method with easy registry | Question: My aim is to define a set of classes, each providing methods for comparing a particular type of file. My idea is to use some kind of factory method to instantiate the class based upon a string, which could allow new classes to be added easily. Then it would be simple to loop over a dictionary like:
files = {
'csv': ('file1.csv', 'file2.csv'),
'bin': ('file3.bin', 'file4.bin')
}
Here is what I have so far:
# results/__init__.py
class ResultDiffException(Exception):
pass
class ResultDiff(object):
"""Base class that enables comparison of result files."""
def __init__(self, path_test, path_ref):
self.path_t = path_test
self.path_r = path_ref
def max(self):
raise NotImplementedError('abstract method')
def min(self):
raise NotImplementedError('abstract method')
def mean(self):
raise NotImplementedError('abstract method')
# results/numeric.py
import numpy as np
from results import ResultDiff, ResultDiffException
class NumericArrayDiff(ResultDiff):
def __init__(self, *args, **kwargs):
super(NumericArrayDiff, self).__init__(*args, **kwargs)
self.data_t = self._read_file(self.path_t)
self.data_r = self._read_file(self.path_r)
if self.data_t.shape != self.data_r.shape:
raise ResultDiffException('Inconsistent array shape')
np.seterr(divide='ignore', invalid='ignore')
self.diff = (self.data_t - self.data_r) / self.data_r
both_zero_ind = np.nonzero((self.data_t == 0) & (self.data_r == 0))
self.diff[both_zero_ind] = 0
def _read_file(self, path):
return np.loadtxt(path, ndmin=1)
def max(self):
return np.amax(self.diff)
def min(self):
return np.amin(self.diff)
def mean(self):
return np.mean(self.diff)
class CsvDiff(NumericArrayDiff):
def __init__(self, *args, **kwargs):
super(CsvDiff, self).__init__(*args, **kwargs)
def _read_file(self, path):
return np.loadtxt(path, delimiter=',', ndmin=1)
class BinaryNumericArrayDiff(NumericArrayDiff):
def __init__(self, *args, **kwargs):
super(BinaryNumericArrayDiff, self).__init__(*args, **kwargs)
def _read_file(self, path):
return np.fromfile(path)
As you can see, the classes CsvDiff and BinaryNumericArrayDiff have only very minor changes with respect to NumericArrayDiff, which could potentially be refactored using constructor arguments. The problem is that different file types would then require different constructor syntax, which would complicate the factory pattern.
I did also consider providing @classmethods to NumericArrayDiff, which could be put in a dict in order to link to the file types. However, I'm hoping for a more natural way of registering these classes to the factory.
Any advice would be much appreciated.
Answer: 1. Stop writing classes
The title for this section comes from Jack Diederich's PyCon 2012 talk.
A class represents a group of objects with similar behaviour, and an object represents some kind of persistent thing. So when deciding what classes a program is going to need, the first question to ask is, "what kind of persistent things does this program need to represent?"
In this case the program:
knows how to load NumPy arrays from different kinds of file format (CSV and plain text); and
knows how to compute the relative difference between two NumPy arrays (so long as they come from files with the same format).
The only persistent things here are files (represented by Python file objects) and NumPy arrays (represented by numpy.ndarray objects). So there's no need for any more classes.
2. Other review points
The code calls numpy.seterr to suppress the warning:
RuntimeWarning: invalid value encountered in true_divide
but it fails to restore the original error state, whatever it was. This might be an unpleasant surprise for the caller. It would be better to use the numpy.errstate context manager to ensure that the original error state is restored.
When dispatching to NumPy functions, it is usually unnecessary to check shapes for compatibility and raise your own error. Instead, just pass the arrays to NumPy. If they can't be combined, then NumPy will raise:
ValueError: operands could not be broadcast together ...
3. Revised code
Instead of classes, write a function!
import numpy as np
def relative_difference(t, r):
"""Return the relative difference between arrays t and r, that is:
0 where t == 0 and r == 0
(t - r) / r otherwise
"""
t, r = np.asarray(t), np.asarray(r)
with np.errstate(divide='ignore', invalid='ignore'):
return np.where((t == 0) & (r == 0), 0, (t - r) / r)
Note the following advantages over the original code:
It's much shorter, and so there's much less code to maintain.
It can find the difference between arrays that come from files with different formats:
relative_difference(np.loadtxt(path1), np.fromfile(path2))
It can find the difference between arrays that don't come from files at all:
relative_difference(np.random.randint(0, 10, (10,)), np.arange(1, 11)) | {
"domain": "codereview.stackexchange",
"id": 13708,
"tags": "python, numpy, factory-method"
} |
Document Categorization Problem | Question: I'm very new to data science in general, and have been tasked with a big challenge.
My organization has a lot of documents that are all sorted on document type (not binary format, but a subjectively assigned type based on content, e.g. "Contract", "Receipt", "Statement", etc...).
Generally speaking assignment of these types is done upon receipt of the documents, and is not a challenge, though we would like to remove the human element of this categorization. Similarly, there are times when there are special attributes that we would like to identify, like "Statement showing use." Thus far, this is entirely done by human intervention.
I am a python programmer, and have been looking at tools to extract the text from these docs (all PDFs, all OCR'ed and searchable) and run analysis. Research has led me to look at standard libraries like NLTK, scikit-learn and gensim. But I'm struggling to identify what would be the best methodology for categorizing newly received documents.
My research is leading me down a few paths... one is creating a tf-idf vector model based on a sampling of the current corpora, then creating a model for an incoming document's corpus and doing a naive Bayes analysis against the existing models to discern which category the incoming document most probably belongs to. Question 1: is this right? If so, question 2 becomes: what is the right programmatic methodology for accomplishing this?
The reason I bring this up at all is because most tutorials I find seem to lean toward a binary discernment of text corpora (positive vs negative, spam vs ham). I did see that scikit-learn has information on multi-label classification, but I'm not sure I'm going down the right road with it. The word "classification" seems to have a different meaning in document analysis than what I would want it to mean.
If this question is too vague let me know and I can edit it to be more specific.
Answer: Except for the OCR part, the right bundle would be pandas and sklearn.
You can check this ipython notebook which uses TfidfVectorizer and SVC Classifier.
This classifier can make one-vs-one or one-vs-the-rest multiclass predictions, and if you use the predict_proba method instead of predict, you would have the confidence level of each category.
If you're looking for performance and you don't need prediction confidence levels, you should use LinearSVC, which is way faster.
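The tf-idf-plus-classifier recipe can also be prototyped with no dependencies at all, which helps demystify what the sklearn pipeline does. The toy below (all names and data illustrative; nearest neighbour by cosine stands in for naive Bayes or SVC) just shows the shape of the pipeline:

```python
import math
from collections import Counter

def tfidf(doc, idf):
    # doc: list of tokens -> unit-length sparse tf-idf vector {term: weight}
    tf = Counter(doc)
    v = {t: tf[t] * idf.get(t, 0.0) for t in tf}
    norm = math.sqrt(sum(w * w for w in v.values())) or 1.0
    return {t: w / norm for t, w in v.items()}

def train(labelled_docs):
    # labelled_docs: list of (label, token list)
    n = len(labelled_docs)
    df = Counter(t for _, d in labelled_docs for t in set(d))
    idf = {t: math.log(n / df[t]) + 1.0 for t in df}  # +1 keeps common terms nonzero
    vectors = [(label, tfidf(d, idf)) for label, d in labelled_docs]
    return idf, vectors

def classify(doc, idf, vectors):
    # Nearest training document by cosine similarity (sparse dot product).
    v = tfidf(doc, idf)
    def cos(a, b):
        return sum(w * b.get(t, 0.0) for t, w in a.items())
    return max(vectors, key=lambda lv: cos(v, lv[1]))[0]
```

A real system would swap `classify` for a fitted sklearn estimator, but the vectorize-then-compare structure is the same.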
Sklearn is very well documented and you will find everything you need for text classification. | {
"domain": "datascience.stackexchange",
"id": 740,
"tags": "python, scikit-learn, nltk, gensim"
} |
Do electrostatic forces affect the induced EMF? | Question: Let's say we have a linear conductor which is slipping with a constant velocity in a static magnetic field perpendicular to the plane defined by the axis and the velocity of the conductor. Due to the accumulation of the charge at its edges, an electrostatic field is developed. Are we taking this field into account when we calculate the induced EMF?
Answer: The electrostatic field is not part of the induced or motional EMF.
EMF due to the motion of a conductor in a magnetic field is called motional EMF. It is present whether charges have accumulated at the endpoints or not - it is due to the action of magnetic forces and internal constraint forces in the conductor.
Due to the presence of the magnetic field, the conductor is doing some work on those charges to move them to a state of higher electric potential energy and, as a result, is not acquiring as much kinetic energy as it would in the absence of the magnetic field.
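To make the separation concrete (a standard textbook computation added for illustration, not part of the original argument): for a rod of length $L$ sliding with speed $v$ perpendicular to a uniform field $B$, the motional EMF is
$$\mathcal{E}=\int_0^L (\mathbf{v}\times\mathbf{B})\cdot d\boldsymbol{\ell}=BLv,$$
which depends only on $B$, $L$ and $v$, not on how much charge has accumulated at the ends. The electrostatic field $\mathbf{E}_s$ of the accumulated charge is conservative, so around any closed loop
$$\oint \mathbf{E}_s\cdot d\boldsymbol{\ell}=0,$$
and it therefore contributes nothing to the EMF.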
This process of rearrangement starts due to a change of speed of the conductor. It takes some (very short) time until the new charge distribution is established. So the electrostatic field is established due to the motional EMF; it is not part of it. | {
"domain": "physics.stackexchange",
"id": 66698,
"tags": "electromagnetism, magnetic-fields, conductors"
} |
How linearly additive are the x-ray mass attenuation coefficients for molecules? | Question: Having a look at https://www.nist.gov/pml/x-ray-mass-attenuation-coefficients, the introduction states:
For compounds and mixtures, values for $μ/ρ$ can be obtained by simple
additivity, i.e., combining values for the elements according to their
proportions by weight. To the extent that values for $μ_{en}/ρ$ are
affected by the radiative losses (bremsstrahlung production,
annihilation in flight, etc.) suffered during the course of slowing
down in the medium by the electrons and positrons that have been set
in motion, simple additivity is no longer adequate. The 1982 compilation ignored such matrix effects (they tend to be small at photon energies below 20 MeV)
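In code, the "simple additivity" quoted above is just a mass-fraction-weighted sum. A quick sketch (the coefficients below are placeholders for illustration, not NIST values):

```python
def mixture_mu_over_rho(mass_fractions, element_mu_over_rho):
    """mu/rho of a compound/mixture as the weight-fraction-weighted sum of
    elemental mass attenuation coefficients (cm^2/g) at one photon energy."""
    assert abs(sum(mass_fractions.values()) - 1.0) < 1e-9, \
        "mass fractions must sum to 1"
    return sum(w * element_mu_over_rho[el] for el, w in mass_fractions.items())

# Water is ~11.2% H and ~88.8% O by weight; the coefficients here are
# made-up placeholders -- look up real values at the energy of interest.
water = mixture_mu_over_rho({'H': 0.112, 'O': 0.888}, {'H': 0.30, 'O': 0.15})
```
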
So how valid is this for typical x-rays in the medical range, i.e. 40-150 keV?
Answer: Perfectly valid in that energy range and commonly used technique if you are looking for accuracy within a few percent. If you need better, then you'll need to take into account those more subtle effects | {
"domain": "physics.stackexchange",
"id": 50920,
"tags": "absorption, x-rays, attenuation"
} |
Version of knapsack problem | Question:
There are Cuisenaire rods of N different lengths $x_1,x_2,...,x_n$ (each length is a natural number), and the number of rods of each length is unlimited. Given a natural number $B$, you should tell whether you can pick a collection of Cuisenaire rods with total length exactly $B$.
For example: if the Cuisenaire rod lengths are $3,11,19$ and $B=20$, you should return true because $3+3+3+11=20$; but for $B=16$ you should return false.
I made an $(N+1)\times(B+1)$ table named "opt" and checked, for all $0\leqslant n\leqslant N$ and all $0\leqslant b\leqslant B$, whether length $b$ can be obtained using only the lengths $x_1,x_2,...,x_n$.
For the given example the output should be (this is my output also):
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 1 0 0 1 0 0 1 0 0 1 0 0 1 0 0 1 0 0
0 0 0 1 0 0 1 0 0 1 0 1 1 0 1 1 0 1 1 0 1
0 0 0 1 0 0 1 0 0 1 0 1 1 0 1 1 0 1 1 1 1
My idea:
Fill the first row and the first column with zeros, i.e. opt[0][b]=0 for all b and opt[n][0]=0 for all n.
Here cuisenaire={0,3,11,19} is an array with the Cuisenaire lengths.
for n=1 to N
    for b=1 to B
        opt[n][b] = opt[n-1][b];   // copy the previous row
        if b modulo cuisenaire[n] == 0
            opt[n][b] = 1;
        if b >= cuisenaire[n] and n > 1 and (b + cuisenaire[n] - 1) modulo cuisenaire[n-1] == 0
            opt[n][b] = 1;
My idea works only for small $B$'s $\implies$ wrong idea but maybe close; e.g. for $B=100$ and cuisenaire={0,13,19,29} my idea doesn't work.
I can't think of a recursive idea.
Answer: Your question is equivalent to the decision version of the unbounded knapsack optimization problem. The decision is whether the maximum value (after optimization) is equal to the upper bound. Moreover, the weights and values are identical in your question (i.e. the length is both the weight and the value).
Problem Formulation
The unbounded knapsack optimization problem is as follows:
Given a set of $n$ objects where object $i$ has weight $w_i$ and value $v_i$,
$\>\>$ maximize $\sum_{i=1}^{n} v_i m_i$
$\>\>$ subject to $\sum_{i=1}^{n} w_i m_i \leq W, \\ m_i \in \mathbb{N}$
where $m_i$ denotes the multiplicity of object $i$, and $W$ is some upper bound.
(note that we're using the definition of $\mathbb{N}$ that includes zero here)
The decision version of this problem can be formulated in terms of the optimization version:
Is the maximum value of the unbounded knapsack optimization problem over the given set of objects $(w_i,v_i)^*$ and upper bound $W$ equal to $V$?
Now let's map these descriptions to your particular question. First, note that length of a cuisenaire rod is equal to both its weight and its value. So we can replace every $w_i$ and $v_i$ with just $x_i$. Similarly, the upper bound on the weight is also the target value for the decision problem. So we can replace both $W$ and $V$ with $B$. Finally, since the selection variable $m_i$ is already using the definition of the natural numbers that includes zero, let's be precise and replace every instance of "natural number" in your problem description with "positive integer".
This results in the following problem formulation:
Given some $B$ and a set of $n$ cuisenaire rods of lengths $x_1, x_2, ..., x_n$ where $B \in \mathbb{Z^+} \land \forall x_i (x_i \in \mathbb{Z^+})$,
is the result of maximizing
$\>\>\>\>\>\> \sum_{i=1}^{n} x_i m_i$
subject to
$\>\>\>\>\>\> \sum_{i=1}^{n} x_i m_i \leq B, \\ \>\>\>\>\>\> m_i \in \mathbb{N}$
equal to $B$?
Solution as Recurrence Relation
Since $\forall x_i(x_i \in \mathbb{Z})$, we can most easily solve this knapsack problem by formulating a recurrence relation for it. So let $M[b]$ denote the maximum length less than or equal to $b$ of an unbounded combination of the given cuisenaire rods. The base case is $M[0] = 0$, since we can always select a combination of zero cuisenaire rods (i.e. set every $m_i$ to $0$) whose sum is the maximum value less than or equal to zero (i.e. zero itself).
The recursive case is formulated in terms of reducing the allowable maximum length by the different cuisenaire rod lengths. The idea is to see whether specifying that we have used one of rod $x_1$, $x_2$, ..., or $x_n$ gets us closer to $B$. In specifying that we have used one of rod $x_i$, we may remove that rod from the tally and recurse on the smaller value, i.e. $M[b-x_i]$. In order to find the maximum value, we add the value of the rod that we just removed to whatever the maximum length of the smaller limit is, i.e. $M[b] = x_i + M[b-x_i]$. More specifically, $M[b]$ is equal to the maximum possible value over all choices of a single rod, i.e. $M[b] = \max_{i=1}^{n} (x_i + M[b-x_i])$.
Note that since this is the unbounded knapsack problem, we cannot say whether we will use the selected rod (or any other rod) again in the solution. In terms of formulating the recurrence relation, all we can say for certain is that the selected rod is used at this specific step. The decision about what rod to select next (or even to select this rod again) is left entirely up to the next step in the recursion.
Putting this all together, and assuming that both $B > 0$ and $\forall x_i (x_i > 0)$ in order to avoid wonky edge cases, we get the following recurrence relation:
$ \>\> M[0] = 0, \\ \>\> M[b] = max_{i=1}^{n}(x_i + M[b-x_i]) $
Note that $M[b]$ is undefined for any $b < 0$. If we want to be really precise about it, we could express this formally as $ M[b] = max_{\{x_i | b-x_i \geq 0\}}(x_i + M[b-x_i]) $.
We can now find the maximum length less than or equal to $B$ of any unbounded combination of cuisenaire rods by evaluating $M[B]$. This solves the optimization problem. To solve the decision problem, simply check if $M[B] = B$.
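As a sanity check before optimizing, the recurrence can be transcribed almost literally into a memoized function. Here is a sketch in Python (the function and variable names are mine, not part of the problem statement):

```python
from functools import lru_cache

def solve(B, rods):
    """Return (M[B], M[B] == B): the optimization and decision answers."""
    @lru_cache(maxsize=None)
    def M(b):
        # M[0] = 0; M[b] = max over usable rods of x_i + M[b - x_i],
        # restricted to rods with b - x_i >= 0 (M is undefined for b < 0).
        candidates = [x + M(b - x) for x in rods if b - x >= 0]
        return max(candidates, default=0)
    best = M(B)
    return best, best == B

# Example: rods of length 5 and 7 can tile exactly to 12 but not to 11.
print(solve(12, (5, 7)))  # (12, True)
print(solve(11, (5, 7)))  # (10, False)
```

This is exponentially slower than the array-based version below without the cache, but with memoization it evaluates each $M[b]$ once, matching the $O(B \cdot n)$ bound.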
Dynamic Programming
Since $\forall x_i(x_i \in \mathbb{Z})$, we can easily apply dynamic programming by instantiating an array (with all elements initialized to zero) of length $B+1$, covering indices $0$ through $B$. Store and look up the value of each term in this recurrence relation in that array.
We can optimize this dynamic programming solution in two main ways. First, we can reduce the runtime by iteratively filling in the array from $M[0]$ to $M[B]$ in order to avoid the function call overhead of recursively starting at $M[B]$.
Second, we can reduce the space by observing that a circular array of size $\mathcal{max_{i=1}^{n}(x_i)}+1$ is sufficient to hold as many previously calculated values as is required to calculate any $M[b]$. This is because for any $b$, in order to calculate $M[b]$ we will only need to look as far "back" in the array as $max_{i=1}^{n}(x_i)$. For example, if $x_1 = 5$ and $x_2 = 7$, then to calculate $M[20]$ we would only need to look at $M[20-5]$ and $M[20-7]$.
Furthermore, if desired, we can also preprocess the set of rods to determine if there is a simpler solution (i.e. check if $\exists x_i (B \bmod x_i = 0)$), to remove rods that cannot be part of the solution (i.e. $x_i > B$), to remove redundant rods (i.e. $x_i$ is redundant if $\exists j \neq i (x_i \bmod x_j = 0)$), etc.
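A rough sketch of those preprocessing reductions in Python (assuming positive integer rod lengths; duplicate rods are left alone for simplicity, and the names are mine):

```python
def preprocess(B, rods):
    """Apply the three cheap reductions described above."""
    # 1. Simpler solution: if some rod length divides B exactly,
    #    the decision answer is immediately "yes".
    if any(B % x == 0 for x in rods):
        return True, []
    # 2. Drop rods longer than the target; they can never be used.
    rods = [x for x in rods if x <= B]
    # 3. Drop redundant rods: a rod is redundant if some other rod of a
    #    different length divides it, since it can be built from copies
    #    of the shorter rod.
    rods = [xj for xj in rods
            if not any(xi != xj and xj % xi == 0 for xi in rods)]
    return None, rods  # None: decision not yet settled

print(preprocess(20, [3, 6, 25]))  # → (None, [3]): 25 too long, 6 redundant
```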
Example
Here is a working example in Python (without any of the preprocessing optimizations):
def cuisenaire(B, rods):
"""
B is a positive integer that serves as the targeted upper bound (inclusive).
Rods is a list of cuisenaire rods with positive integer lengths listed
in ascending order.
"""
dp = [0] * (max(rods)+1) # dynamic programming workspace
for i in range(1,B+1): # iteratively fill DP workspace (1 to B, inclusive)
for rod in rods: # evaluate each first-order subproblem
if i - rod >= 0: # dp[index] undefined for index < 0
temp = rod + dp[(i - rod) % len(dp)] # the core recurrence relation
if temp > dp[i % len(dp)]: # if this value is greater than other,
dp[i % len(dp)] = temp # then update DP cell to greater value
else:
break # optimization, not required for correctness
return dp[B % len(dp)] == B # solve the decision problem
The time complexity of this code is $O (B \cdot n)$ which is B * len(rods).
The space complexity of this code is $O (max_{i=1}^{n}(x_i))$ which is max(rods). | {
"domain": "cs.stackexchange",
"id": 8138,
"tags": "algorithms, dynamic-programming, knapsack-problems, java"
} |
Can we exit the event horizon of merging black holes? | Question: I have an intuitive scenario. Consider we have a spaceship just below the event horizon of a BH, which is merging with another black hole.
Finally, the singularities merge and we have a single black hole again.
But, in the transient stage, it is unclear to me if a timelike world-line would exist to leave the system.
I suspect, the metric is probably far too complex for an analytical solution, but in the worst case, it could be maybe solved numerically.
As far as I know, black hole mergers are examined mainly in an inspiral scenario. I suspect that escape may be possible only if they are on a hyperbolic-like orbit (i.e. there is no inspiral, but they simply collide).
Is it possible?
Answer: No. When they merge, their horizons will change shape and eventually settle into the static or stationary shape of a black hole horizon. Nothing inside either horizon while this is happening can escape. At all times the timelike curves stay inside, and the deformed horizons are where the lightlike curves end up, both in each hole separately and after they merge.
The area of each horizon right before they merge cannot be smaller than it was before, as horizon area is proportional to entropy, which must increase or stay the same. Any deformation will increase it (or at best leave it unchanged, but it will probably increase). At no time can lightlike curves escape because of some deformation, and much less timelike curves.
I assume you meant you were just inside the horizon beforehand. If you meant just outside, anything can happen; you would then have to take the ergosphere into account, and if you are inside the ergosphere the answer is probably also no, but I am not sure.
A similar question was posted maybe three or so months ago; it is not in my saved list, so I can't give you the reference, but there were some answers.
"domain": "physics.stackexchange",
"id": 32587,
"tags": "general-relativity, black-holes, metric-tensor, event-horizon"
} |
How does the GAN architecture keep similar images close in the latent space? | Question: I am learning about generative models, and I don't quite understand how the GAN architecture can keep similar generated images close in the latent space. For example, an autoencoder or a variational autoencoder can map similar images very close together in its latent-space representation. This can be done because the encoder learns to map similar images close together in the latent space in order to reduce the loss.
However, in the case of a GAN, there is no encoder. Instead, the latent space comes from a high-dimensional Gaussian distribution. The problem is that each time a vector is sampled from the latent space, it is a completely random vector from the distribution. This could lead to the following possibilities:
First: Very different images could be sampled with very close latent points. This would mean that we could have similar images in different parts of the space.
Second: Non-similar images could be very close in their latent representation. Meaning that a single point could represent two different types of images.
My problem comes with the question: If a latent point is sampled randomly with each generation, how is it possible to cluster similar images in the latent space?
Answer: The GAN generator is a map from a latent space to images. The latent space is unconstrained by any individual items of training data; it doesn't matter which real images are shown to help train the discriminator. Training the discriminator to correctly classify "real" images is handled as a separate stage from training on fake images, and there is no direct link between the training images used and the generator's output.
The discriminator does not take the latent space as an input, only an output image to classify as real or fake. As such, the discriminator cannot provide feedback based on the "closeness" of images or not to a target expression of the latent space, only whether it can differentiate between generated and real images. This is very different to a Variational Autoencoder (VAE), which is trained based on reconstruction errors from specific target images.
The generator is therefore free to create an arbitrary latent space during training to represent any subset of images that will fool the discriminator. As there is no strong reason to create strongly delineated sub-spaces of output based on the input, the generator will naturally tend to produce similar/related images when the inputs are similar.
The generator doesn't have to produce images from either a smooth or noisy latent space. It will just tend towards what the architecture and initial weights encourage. As it happens, this will more often be a smoothly interpolatable space than a high frequency pseudo-random one.
First: Very different images could be sampled with very close latent points.
This could happen, but only if the generator was already producing very different images from close latent points due to its current weights. There is no association with real training images for the discriminator - which of those images are sampled are completely independent of the generations.
Second: Non-similar images could be very close in their latent representation. Meaning that a single point could represent two different types of images.
A single point won't be required to represent two different images, because it doesn't have to represent anything specific. It is free to drift to whatever the generator makes with that output, under only the constraint that it fools the discriminator. It doesn't have to fool the discriminator that it has made a specific image, but just any image from the class of "real images".
In general, close latent spaces can produce very different images, and this can happen, but there will often be a tendency to create a smooth latent space, because that is an easier set of weights to learn for most neural networks. NNs tend towards global average solutions first during training, and add detail after - this is why early stopping works well as a regularisation technique for neural networks.
In practice, GANs can suffer from the opposite problem - the outputs becoming too similar regardless of input position in the latent space, and lacking the full gamut of allowed values from the training data. This can result in a failure state called mode collapse, where the generator focuses output on a small subsection of possible outputs, which the discriminator will decide on average are always fake, and the whole training process becomes stuck.
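None of this is specific to GANs: any smooth map from latent space to images sends nearby latent points to nearby outputs. A toy illustration with a random, untrained two-layer "generator" (just numpy, no GAN training involved; all sizes and names here are arbitrary choices of mine):

```python
import numpy as np

rng = np.random.default_rng(0)
W1 = rng.normal(size=(64, 8))    # latent dim 8 -> hidden dim 64
W2 = rng.normal(size=(16, 64))   # hidden -> a tiny 16-pixel "image"

def generator(z):
    # an arbitrary smooth map standing in for a GAN generator
    return np.tanh(W2 @ np.tanh(W1 @ z))

z = rng.normal(size=8)
img = generator(z)
img_near = generator(z + 1e-4 * rng.normal(size=8))  # tiny latent step
img_far = generator(rng.normal(size=8))              # unrelated sample

d_near = np.linalg.norm(img - img_near)
d_far = np.linalg.norm(img - img_far)
print(d_near < d_far)  # smooth maps keep nearby latents nearby
```

Training only shapes *which* smooth map is learned; the tendency toward an interpolatable latent space comes from the smoothness itself.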
"domain": "ai.stackexchange",
"id": 3921,
"tags": "generative-adversarial-networks, generative-model, latent-variable"
} |
Check if a postcode is in a list | Question: I'm looking for improvement / advice / reviews on how I can improve this code. I feel like there is a better way or more efficient way of doing this and I am over looking it.
The Plugin I made is very simple:
It stores a list of Postcodes that the admin inputs into a textarea - 1 Per Line.
Plugin creates a text input on the front end that allows their customers to search if their postcode is in the list with Ajax.
The Plugin has a few options to allow the admin to customize the look and functionality of the plugin. Below is the function that takes the postcode and searches the list:
function sapc_ajax_check() {
$checker_defaults = get_option( 'sapc_checker_settings_options' );
$found = 0; $msg = ''; $passed = 0;
if (!empty( $_POST ) && isset($_POST['action']) && strcmp($_POST['action'], 'sapc_ajax_check') === 0) {
if(isset($_POST['pc'])){
if(isset($_POST['verify-int']) && $_POST['verify-int'] == 'on'){
if(is_numeric($_POST['pc']) && is_int((int)$_POST['pc'])){
$passed = 1;
}
}elseif($checker_defaults['verify-int'] == 'on'){
$passed = 1;
}else{$passed = 1;}
if($passed === 1){
$temp_l = $checker_defaults['postcodes'];
$postcode_l = explode(PHP_EOL, $temp_l);
if(is_array($postcode_l) && !empty($postcode_l)){
foreach($postcode_l as $i=>$temp_p){
if( strpos($temp_p, ':') !== false) {
$v = explode(':', $temp_p);
if(strcmp($v[0], $_POST['pc']) === 0){
$found = 1;
$msg .= $v[1] .', ';
}
}else{
if(strcmp($temp_p, $_POST['pc']) === 0){
$found = 1;
$msg .= $temp_p .', ';
}
}
}
if($msg != ''){
$msg = substr($msg, 0, strlen($msg) - 2);
}
}else{$msg ='Error: Try Again';}
}else{$msg ='Error: Invalid Postcode';}
}else{$msg ='Error: No Postcode';}
}else{$msg = 'Error: No Data';}
if($found == 0){
echo json_encode(array('Error', 'Postcode Not Found'), JSON_FORCE_OBJECT);
}else{
echo json_encode(array('Success', $msg), JSON_FORCE_OBJECT);
}
die();
}
There are 2 sets of variables.
First set is the default settings from the admin options page, this helps with placing widgets and allows admins to set their own defaults. Postcodes can only be set here.
Second set is from the widget instance which allows admins to personalize each widget if need be.
I'll explain some of the variables.
$_POST['pc'] = Postcode from User
$_POST['verify-int'] = Option to check if postcode is all integers - Passed from Postcode Widget
$checker_defaults['postcodes'] = list of postcodes
Postcode List accepts the following formats:
Postcode:Suburb
Postcode
(1 per line)
What can I do to make this function more efficient and more cleaner? If you need anything else please let me know I think I've covered everything.
Answer: Inside of a function call, avoid echoing. By hardcoding echoes, you prevent the "silent" usage of the function. It may be necessary in the future to present the output in more than one format, so use a return inside the function declaration and perform the echo on the function call.
Pay close attention to PSR-2's guidelines on control structures. They will help you (and future developers of your code) to read your code. Always imagine that the next person to read your code will have a headache, a hatchet, and your home address; don't set them off.
Using empty() on $_POST is an imprecise way of checking for expected submission data.
strcmp() provides greater specificity than your condition logic requires. For your logic, just check if the input is identical to the string without a function call.
Condense conditionals within the same block that have the same outcome. Multiple conditions lead to $passed = 1 so they can be consolidated. I didn't really bother to understand the conditional logic behind $passed = 1 but it should certainly be refined.
Refine your validation check on $_POST['pc']. You are checking if is_numeric(), that's fine. Then checking if the value that is cast as (int) is an integer -- um, at this point of course it is, it has no choice. Better yet, why not just make a single check with ctype_digit()? You might also like to check that the strlen() is valid (only you will know if/how to design this for your region). If you want to check the quality and length of the postcode value, perhaps it would be more sensible to use preg_match() where you can design robust/flexible validation with a single function call (again, only you can determine this).
$temp_l and $temp_p are poor variable naming choices. As a new dev to your script, I don't instantly know what they contains (I can venture a guess, but don't ask devs to do this). Try to practice a more literal naming convention. Furthermore, try to avoid declaring single-use variables ($temp_l). Often, fewer variables will lead to fewer typos/mistakes, concise code, and improved readability. When data needs some explaining, use commenting. *notes: 1. I have read some cases where declaring a variable prior to a foreach loop can improve performance 2. Some devs don't like to see functions fed to a foreach loop, I can respect this and I don't typically do this in my own projects.
There is no use in checking if the return value from explode() is an array. It returns an array by design, so you can remove that check. Even if you explode an empty string with PHP_EOL, you will not get a true evaluation from empty(), so that check is pointless to write. At the end of the day, if you try to use foreach() on an empty array, it simply won't iterate -- no worries.
If you have no intentions of using $i in your foreach loop, don't bother to declare it. I don't like single-use variables; I super don't like no-use variables.
How to get the substring before the first occurrence of a character without explode()? strstr() with a true third parameter. Otherwise, explode has to create an array enroute to delivering the string that you need. My suggested snippet will attempt to extract the substring before the first colon, if there is no colon the full string will be used.
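For comparison, the same extract-before-the-first-colon-with-fallback pattern in another language: PHP's strstr() with a true third parameter returns false when the needle is absent, hence the explicit fallback there, whereas Python's str.partition() builds the fallback in (a Python sketch, just to illustrate the pattern):

```python
def before_colon(entry):
    # head is the full string when no ':' is present, which is
    # exactly the fallback behaviour the PHP snippet implements
    head, _sep, _suburb = entry.partition(':')
    return head

print(before_colon('2000:Sydney'))  # 2000
print(before_colon('2000'))         # 2000
```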
By storing qualifying matches as an array, you can avoid having to trim any trailing delimiters from your output string. In fact, return the data without delimiters as an array so that you can easily adjust the way that your qualifying values are delimited.
Strictly speaking, having zero qualifying results from a postcode search doesn't mean that there was an "Error", so just have your function calling script accommodate for "Successful" yet "Empty" results.
I can't imagine a benefit from JSON_FORCE_OBJECT.
die() in nearly every scenario should be avoided.
Suggested Code Overhaul:
function sapc_ajax_check() {
$checker_defaults = get_option('sapc_checker_settings_options');
if (!isset($_POST['action']) || $_POST['action'] !== 'sapc_ajax_check') {
$errors[] = 'Missing required submission data';
}
if (!isset($_POST['pc'])) {
$errors[] = 'Missing postcode value';
}
if (isset($_POST['verify-int']) && $_POST['verify-int'] === 'on' && !ctype_digit($_POST['pc'])) {
$errors[] = 'Invalid postcode value';
}
if (isset($errors)) {
return json_encode(['Error', $errors]);
}
$result = [];
foreach (explode(PHP_EOL, $checker_defaults['postcodes']) as $postcode) {
$before_colon = strstr($postcode, ':', true);
$postcode = ($before_colon === false ? $postcode : $before_colon);
if ($postcode === $_POST['pc']) {
$result[] = $postcode;
}
}
return json_encode(['Success', $result]);
}
Much cleaner right? | {
"domain": "codereview.stackexchange",
"id": 33414,
"tags": "php, strings, array, iteration, wordpress"
} |
Why is NP not trivially a subset of P/poly? | Question: The complexity class P/poly includes languages which cannot be computed by a classical Turing machine, including the unary halting problem.
However, the class NP is relatively simple: it can be computed by a Turing machine and sits at the lowest level of the polynomial hierarchy.
So every NP-complete problem, represented as recognition of an NP language, can seemingly be turned into a P/poly problem:
Step 1: Build a Turing machine which recognises the appropriate NP-complete language. If the input is not in the language, force the machine into an infinite loop.
Step 2: Form a unary representation of that Turing machine and its input.
Step 3: Solve the halting problem for this unary encoding via a boolean circuit, which lies in the class P/poly.
Step 4: If the machine halts, the input is in the NP language, and this has been decided within P/poly. So NP lies in P/poly.
Where is my misunderstanding? Thanks!
Answer: The problem is that you have to transform the input for the original machine into a unary input for the unary machine. This cannot be done in polynomial time in the size of the original input, therefore your proposed algorithm is not a $\mathsf{P/poly}$ algorithm. | {
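To see why, compare encoding lengths: a number $x$ takes about $\log_2 x$ symbols in binary but $x$ symbols in unary, so the unary encoding is exponentially longer than the original input. A quick Python illustration of the growth:

```python
def binary_len(x):
    return x.bit_length()  # symbols needed to write x in binary

def unary_len(x):
    return x               # symbols needed to write x in unary

for x in (15, 255, 65535):
    print(x, binary_len(x), unary_len(x))
# for x = 2**k - 1 the unary length is 2**binary_len(x) - 1:
# exponentially longer, so the translation is not polynomial-time
```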
"domain": "cs.stackexchange",
"id": 21009,
"tags": "complexity-theory, np-complete, computation-models"
} |
Interpretation of the adjoint Dirac equation | Question: Adjoint spinors $\bar{\psi}$ satisfy the adjoint Dirac equation
$$
\bar{\psi} \left( \gamma^{\mu} \, p_{\mu} - mc \right) = 0 \;
$$
(I'm using the terminology and units of Griffiths' "Intro. to Elementary Particles"). What puzzles me here is that $p_{\mu}$ is presumably a (four-) vector operator (as indicated below), and by normal convention one expects the operand (i.e. that which it the operator operates upon) to stand on its right side... but here there is nothing on its right side to operate upon. Is it implicit, then, that here $p_{\mu}$ "acts" to its left?
Secondarily, and under the convention $x^{\mu}=(ct, \mathbf{x})$ with diagonal components of the metric tensor being
$$
(g_{00}, g_{11}, g_{22}, g_{33}) = (1,-1,-1,-1) \; ,
$$
is it correct that since
$$
p_{\mu}=i \hbar \partial_{\mu} = i \hbar (\frac{\partial}{\partial (ct)}, \nabla)
$$
one must have
$$
p^{\mu}=i \hbar (\frac{\partial}{\partial (ct)}, - \nabla) \; ?
$$
(Note: in my original post I had unintentionally reversed these two definitions, for the co- and contravariant cases.)
Perhaps the resolution of my question is that $p_{\mu}$ is here to be interpreted here not as an operator, but as $p_{\mu}=\hbar k_{\mu}$, where $k_{\mu}$ is the wavenumber vector?
Answer: The four-momentum operator in the adjoint Dirac equation does operate from right to left.
For clarification, one writes it with a backward arrow on top (e.g. $\overleftarrow{\partial}_{\mu}$).
Also, the contravariant vector is the one you have written as a covariant vector, and vice versa.
This is very basic; you can verify the adjoint Dirac equation simply from the Euler-Lagrange equation of the Dirac Lagrangian with respect to $\psi$.
"domain": "physics.stackexchange",
"id": 52043,
"tags": "quantum-field-theory, particle-physics, dirac-equation"
} |
Directional derivatives in the multivariable Taylor expansion of the translation operator | Question:
Let $T_\epsilon=e^{i \mathbf{\epsilon} P/ \hbar}$ an operator. Show that $T_\epsilon\Psi(\mathbf r)=\Psi(\mathbf r + \mathbf \epsilon)$.
Where $P=-i\hbar \nabla$.
Here's what I've gotten: $$\begin{align}T_\epsilon\Psi(\mathbf r)&= e^{i \mathbf{\epsilon} P/ \hbar}\Psi(\mathbf r)\\
&=\sum^\infty_{n=0} \frac{(i\epsilon \cdot (-i\hbar \nabla)/\hbar)^n}{n!} \Psi(\mathbf r) \\
&=\sum^\infty_{n=0} \frac{(\mathbf \epsilon \cdot \nabla)^n}{n!}\Psi(\mathbf r) \\
&= \Psi(\mathbf r) + (\epsilon \cdot \nabla) \Psi(\mathbf r) + \frac{(\epsilon \cdot \nabla)^2 \Psi(\mathbf r)}{2} + \cdots\end{align}$$
This looks somewhat like a Taylor expansion of $\Psi(\mathbf r)$, but it's different than I've seen before -- I've never seen it in terms of a directional derivative. Can you confirm if this is the Taylor expansion of $\Psi(\mathbf r + \mathbf \epsilon)$? Or if not, what I should be getting when expanding $e^{i \mathbf{\epsilon} P/ \hbar}\Psi(\mathbf r)$?
Answer: Calculus method
The Taylor series of a function of $d$ variables, expanded about $\mathbf{x}$ and evaluated at $\mathbf{y}$, is as follows:
$$f(\mathbf{y}) = \sum_{n_1=0}^{\infty}\ldots\sum_{n_d=0}^{\infty}
\frac{(y_1-x_1)^{n_1} \ldots (y_d - x_d)^{n_d}}{n_1!\ldots n_d!}
\left. \left(
\partial_1^{n_1}\ldots \partial_d^{n_d} \right) f \right|_{\mathbf{x}}.$$
where $\partial_i^{n_i} f$ means "the $n_i^{\text{th}}$ order partial derivative of $f$ with respect to the $i^{\text{th}}$ coordinate".
Consider only the terms where $\sum_{i=1}^{d} n_i = 1$.
These are the terms for which there is only one derivative being taken, a.k.a. the "linear terms".
Keeping the $0^{\text{th}}$ order term and the linear terms gives
$$f(\mathbf{y}) = f(\mathbf{x})+\sum_{i=1}^{d}(y_i - x_i) \left. (\partial_i f) \right|_{\mathbf{x}} . \qquad (*)$$
Define the displacement $\epsilon = \mathbf{y} - \mathbf{x}$. Then
$$\epsilon \cdot \nabla = \sum_{i=1}^d (y_i - x_i) \partial_i$$
by definition of what $\cdot$ means. Therefore, keeping only up to linear terms, $(*)$ becomes
$$f(\mathbf{x} + \epsilon) = f(\mathbf{x}) + \left. (\epsilon \cdot \nabla) f \right|_{\mathbf{x}} . $$
So you see, your formula is correct, just in a particular notation you may not be used to.
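The identity is easy to check numerically in one variable. Since $\frac{d^n}{dx^n}\sin x = \sin(x + n\pi/2)$, the truncated operator series $\sum_n \frac{\epsilon^n}{n!}\frac{d^n}{dx^n}$ applied to $\sin$ should reproduce $\sin(x+\epsilon)$ (a standalone sketch, not from Griffiths):

```python
import math

def translated_sin(x, eps, terms=40):
    """Apply the truncated series exp(eps d/dx) to sin at the point x."""
    total = 0.0
    for n in range(terms):
        nth_deriv = math.sin(x + n * math.pi / 2)  # n-th derivative of sin
        total += eps ** n / math.factorial(n) * nth_deriv
    return total

x, eps = 0.7, 1.3
print(translated_sin(x, eps), math.sin(x + eps))  # agree to roundoff
```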
Algebraic method
For the sake of typing less, I'm going to explain this in one dimension, but nothing in the actual calculation is restricted to one dimension.
The state vector can be thought of as a linear combination of eigenvectors of the position operator
$$|\Psi\rangle = \int_x \Psi(x) |x\rangle \qquad (1)$$
where $\hat{x}|x\rangle = x|x\rangle$.
Bringing in a bra $\langle y |$ from the left and using $\langle y | x \rangle = \delta(x-y)$ in (1) gives
$$\langle y | \Psi \rangle = \Psi(y) . \qquad (2)$$
Combining (1) and (2) gives
$$|\Psi\rangle = \int_x |x\rangle \langle x| \Psi \rangle \qquad (3)$$
which shows that $\int_x |x\rangle \langle x | = \text{Identity}$.
Let's really understand what all this means:
Eq. (1) just says that you can write a vector as a linear combination of basis vectors.
In this case, $\Psi(x)$ are the coefficients in the linear combination.
In other words, the wave function is just the coefficients of a linear combination expansion written in the position basis.
Eq. (2) makes this explicit by showing that the inner product of a position basis vector $|y\rangle$ with $|\Psi\rangle$ is precisely $\Psi(y)$.
Eq. (3) just expresses a neat way of expressing the identity operator in a way which we will find very useful.
Note that this works with any basis.
For example,
$$\int_p |p\rangle \langle p | = \text{identity} .$$
We can use this to express a position eigenvector in terms of momentum eigenvectors,
$$|x\rangle = \int_p |p\rangle \langle p | x \rangle = \int_p e^{-i x p/\hbar}|p\rangle ,$$
where we've used the fact that $\langle x | p \rangle = e^{i p x/\hbar}$ [1].
Now let's get back to the question at hand. Define $|\Psi'\rangle \equiv e^{i \epsilon \hat{p}/\hbar}|\Psi\rangle$.
Let us evaluate $\Psi'(y) \equiv \langle y|\Psi'\rangle$:
\begin{align}
\Psi'(y) &= \langle y | \Psi' \rangle \\
&= \langle y| e^{i \epsilon \hat{p}/\hbar} |\Psi \rangle \\
&= \langle y| e^{i \epsilon \hat{p}/\hbar} \int_x \Psi(x) |x\rangle \\
&= \int_x \Psi(x) \langle y | e^{i \epsilon \hat{p}/\hbar} |x \rangle .
\end{align}
To proceed we need to compute $e^{i \epsilon \hat{p}/\hbar} |x \rangle$.
We can do this using our expression for the identity in terms of momentum states,
$$
\begin{align}
e^{i \epsilon \hat{p}/\hbar} |x \rangle
&= e^{i \epsilon \hat{p}/\hbar} \left( \int_p |p\rangle \langle p | \right) | x \rangle \\
&= \int_p e^{i \epsilon \hat{p} / \hbar} |p\rangle \langle p | x \rangle \\
&= \int_p e^{i \epsilon p / \hbar} |p\rangle \langle p | x \rangle \\
&= \int_p e^{i \epsilon p /\hbar} |p\rangle e^{-i p x/\hbar}\\
&= \int_p e^{-i(x - \epsilon) p / \hbar} |p\rangle \\
&= |x - \epsilon \rangle
\end{align}$$
This is actually the thing you should remember about this question:
$$e^{i\epsilon\hat{p}/\hbar}|x\rangle = |x - \epsilon \rangle .$$
This is an excellent equation because it tells you something about how the $e^{i \epsilon \hat{p} /\hbar}$ operator changes a position eigenvector without having to write it out in a particular basis.
Now that we have that, we can go back to what we were trying to compute,
$$
\begin{align}
\Psi'(y) &= \int_x \Psi(x) \langle y | e^{i \epsilon \hat{p}/\hbar} |x \rangle \\
&= \int_x \Psi(x) \langle y | x - \epsilon \rangle \\
&= \int_x \Psi(x) \delta(y - (x - \epsilon)) \\
&= \Psi(y + \epsilon) .
\end{align}
$$
This is the equation you wanted to prove.
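The same chain can be checked numerically: transform to the momentum basis with an FFT, multiply by the phase $e^{i\epsilon p/\hbar}$, and transform back. On a periodic grid with $\hbar = 1$ and a shift that is a whole number of grid cells, this reproduces the translated samples exactly (a sketch; the function and grid are my own choices):

```python
import numpy as np

N, L = 256, 2 * np.pi
x = np.linspace(0.0, L, N, endpoint=False)
psi = np.exp(np.sin(x))            # any smooth periodic "wave function"

cells = 5
eps = cells * (L / N)              # shift by exactly 5 grid cells
p = 2 * np.pi * np.fft.fftfreq(N, d=L / N)   # momentum grid (hbar = 1)

# multiply each momentum component by exp(i*eps*p), then transform back
psi_shifted = np.fft.ifft(np.fft.fft(psi) * np.exp(1j * eps * p)).real

print(np.allclose(psi_shifted, np.roll(psi, -cells)))  # True: psi(x + eps)
```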
If I messed up any minus signs I hope someone will edit :)
[1] I'm not proving that, and if you want to know why that's true it would make a good question on this site. | {
"domain": "physics.stackexchange",
"id": 16266,
"tags": "quantum-mechanics, homework-and-exercises, operators, momentum"
} |
Convolution of shifted signal | Question: If $y(t) = x(t)*h(t)$, then what is the expression for $y(t+a)$?
Is it $x(t+a)*h(t+a)$ or $x(t+a)*h(t)$?
Answer: From your confusion between $x(t+a) \star h(t+a)$ and $h(t) \star x(t+a)$,
I guess that a little help with argument manipulations on functions and convolutions would be appropriate here, working through simple examples:
First let us express the usual simplistic case. Consider the relation:
$$ y(t) = h(t) \cdot x(t) + g(t) \tag{1}$$
then any manipulation of the argument $t$ is applied to all functions on both sides
$$ y(t+a) = h(t+a) \cdot x(t+a) + g(t+a)$$
or an arbitrary transform on $t$ would similarly be:
$$ y(\phi(t)) = h(\phi(t)) \cdot x(\phi(t)) + g(\phi(t))$$
Now consider the case where two functions convolved to produce the third:
$$y(t) = \int_{-\infty}^{\infty} h(\tau) x(t-\tau) d\tau $$
which is abbreviated as
$$ y(t) = h(t) \star x(t) \tag{2} $$
Now be careful when interpreting case 2. The variable $t$ shows up in all functions as an argument, but you may not apply the transform on $t$ as you did in case 1, so assume you have an arbitrary transform on $t$ as $\phi(t)$; then
$$ y(\phi(t)) \neq h(\phi(t)) \star x(\phi(t)) \tag{3} $$
For example, as in your case, if $\phi(t) = t+a$ then you get
$$ y(t+a) \neq h(t+a) \star x(t+a)$$
but
$$ y(t+a) = h(t) \star x(t+a) = h(t+a) \star x(t) $$
The justification of this can (only) be seen when you consider the integral definition of the convolution operator:
$$
\begin{align}
y(t) &= h(t) \star x(t) \\
& = \int_{-\infty}^{\infty} h(\tau) x(t-\tau)d\tau \\
y(t+a) & = \int_{-\infty}^{\infty} h(\tau) x((t+a)-\tau)d\tau \\
& = h(t) \star x(t+a) \\
\end{align}
$$
Note that since the live variable inside the integral only happens in one function ($x(t-\tau)$ in this case) then a change in $t$ will only affect one of them and you get:
$$ y(t+a) = h(t) \star x(t+a) $$
or from commutativity of convolution you get
$$ y(t+a) = h(t+a) \star x(t) $$
So this provides the answer you were looking for. However, it's not over. Because the following case represents an exception:
$$ y(-t) \neq h(t) \star x(-t) $$
but
$$ y(-t) = h(-t) \star x(-t) \tag{4} $$
So how do we see case 4? Again, using the integral definition:
Assuming that $y(t) = h(t) \star x(t)$, then compute the convolution between two new signals $g(t)=h(-t)$ and $z(t)=x(-t)$ as:
$$
\begin{align}
w(t) &= g(t) \star z(t) \\
& = \int_{-\infty}^{\infty} g(\tau) z(t-\tau)d\tau &g(\tau)=h(-\tau),z(t-\tau)=x(-(t-\tau)) \\
& = \int_{-\infty}^{\infty} h(-\tau) x(-(t-\tau))d\tau &\text{ let } \tau'=-\tau \\
& = -\int_{\infty}^{-\infty} h(\tau') x(-(t+\tau'))d\tau' &\text{ replace } \tau' \text{ with } \tau \\
& = \int_{-\infty}^{\infty} h(\tau) x(-t-\tau) d\tau \\
& = y(-t) \\
\end{align}
$$
hence we conclude that $h(-t) \star x(-t) = w(t) = y(-t) $. As stated before, you must always consult to the (explicit) integral definition to decide on the correct functions used in the convolution operator. | {
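Both results are easy to check with discrete convolutions in numpy, where delaying a sequence by $a$ samples plays the role of $x(t-a)$ (delaying rather than advancing just flips the sign of $a$):

```python
import numpy as np

h = np.array([1.0, 2.0, 3.0])
x = np.array([4.0, 5.0])
a = 2

y = np.convolve(h, x)                          # y = h * x
x_delayed = np.concatenate([np.zeros(a), x])   # x[n - a]
y_delayed = np.convolve(h, x_delayed)          # h[n] * x[n - a]

# delaying ONE operand delays the output by the same amount:
print(np.allclose(y_delayed[a:], y))           # True

# time-reversing BOTH operands time-reverses the output (case 4):
y_rev = np.convolve(h[::-1], x[::-1])
print(np.allclose(y_rev, y[::-1]))             # True
```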
"domain": "dsp.stackexchange",
"id": 6816,
"tags": "convolution"
} |
How can I speed up this solution to the "Rock-Paper-Scissors Tournament" | Question: As I have recently been selected to participate in my country's final preselection for the IOI -- this took me aback -- I have decided to get a bit more serious about competitive programming.
The Rock-Paper-Scissors Tournament is an old question from the Waterloo Programming Contest 2005-09-17, available on kattis.
What it asks you to do is:
You first get two numbers n and k for the number of participants and the number of games played in the tournament.
And then you get k lines detailing every one of those games.
Using this data you have to output the winrate for each contestant in three decimal places.
A player that has not played a single game throughout the tournament should get a '-' as output.
A zero for the number of players terminates the program.
2 4
1 rock 2 paper
1 scissors 2 paper
1 rock 2 rock
2 rock 1 scissors
2 1
1 rock 2 paper
0
Should give the output
0.333
0.667
0.000
1.000
As far as I know my solution, which follows below, solves this problem perfectly, but, sadly, it is too slow.
My question is, does anyone see where I made a suboptimal choice in the datatype I used or in how I calculated a result, causing my program to fail to beat the time limit? I hope that understanding what I could have done better here allows me to become a better programmer in the future.
#include <iostream>
#include <iomanip>
#include <vector>
#include <unordered_map>
std::unordered_map< std::string, std::unordered_map<std::string, int> >
result(
{
{
"rock",
{
{"rock", 0},
{"paper", -1},
{"scissors", 1}
}
},
{
"paper",
{
{"rock", 1},
{"paper", 0},
{"scissors", -1}
}
},
{
"scissors",
{
{"rock", -1},
{"paper", 1},
{"scissors", 0}
}
}
});
int main() {
int n,
k,
p1,
p2;
std::string m1,
m2;
while (1) {
std::cin >> n;
if (n == 0) {
break;
}
std::cin >> k;
std::vector<int> wins(n, 0);
std::vector<int> losses(n, 0);
while (k--) {
std::cin >> p1;
std::cin >> m1;
std::cin >> p2;
std::cin >> m2;
int v = result[m1][m2];
if (v == 1) {
wins[p1-1]++;
losses[p2-1]++;
} else if (v == -1) {
losses[p1-1]++;
wins[p2-1]++;
}
}
std::cout << std::fixed;
std::cout << std::setprecision(3);
for (int i = 0; i < n; i++) {
float numOfWins = (float)wins[i];
float numOfGames = (float)(wins[i]+losses[i]);
if (numOfGames != 0) {
std::cout << numOfWins/numOfGames << std::endl;
} else {
std::cout << '-' << std::endl;
}
}
std::cout << std::endl;
}
return 0;
}
Answer: The choice of data structure is fundamentally sound (but more on that later).
The first thing that jumped out to me in your code is actually not directly related to performance: you could drastically improve readability by declaring variables where you use them rather than at the beginning, and by assigning better names. Don’t be afraid that declaring variables inside a loop will lead to performance degradation! First off, in most cases it doesn’t. And secondly, where it does the difference in performance is usually negligible, and will not contribute sufficiently to be noticed. If, and only if, that’s not the case does it make sense to change this.
Some more points regarding readability:
Since you’re already using uniform initialisation, use {…} instead of (…) in your initialisation of result. Writing ({…}) is pretty unusual and consequently tripped me up.
Since C++11 there’s no need to put a space between template argument list terminators (> > vs. >>).
Don’t use integer literals in place of boolean values: don’t write while (1), write while (true).
Your (C-style) casts to float are unnecessary. Remove them.
Make more variables const — in particular result! You don’t want to accidentally modify that. You will also need to change your lookup to using find then, unfortunately.
Now on to performance improvements. There are effectively two things to improve.
First off, two things about unordered_map:
Although the choice of this structure is algorithmically correct, C++’s standard library specification of it is poor due to a fault in the standard wording, which forbids efficient implementations. The structure is therefore a cache killer.
In your case a general string hash is overkill: you only need to check the first letter of each word to determine which move was played.
You could exploit the second point by providing a custom Hash template argument to std::unordered_map which only returns the first character. But given the first point, I would suggest ditching std::unordered_map altogether and just using a 256×256 array as a lookup table (or, if you want to optimise space, subtract some common value from the first character or find a perfect hash function for the letters “r”, “p” & “s”).1
And now something more mundane, since the execution time of your program is at any rate completely dominated by IO: std::cin and std::cout are by default synchronised with C's buffered standard IO, which makes them excruciatingly slow. To fix this, put std::ios_base::sync_with_stdio(false) at the beginning of your main function. Similarly, untie standard output from standard input via std::cin.tie(nullptr). Secondly, replace std::endl with "\n". std::endl flushes the stream each time, which is slow. It's also unnecessary to set the stream format manipulators in every loop iteration (although I don't expect this to change the performance).
— It’s worth noting that none of that had a measurable impact on the performance of the code on my machine. In fact, formatted input via std::cin totally dominates the runtime. This is surprising and disappointing (because there’s no reason for it: it hints at a broken standard library implementation). Using scanf is significantly faster, which should not happen. Of course using scanf also requires changing the type of m1 and m2 (you can use a static buffer of size sizeof "scissors"). It’s worth emphasising that it’s really IO that’s slow, and not the std::strings: simply replacing the std::strings with static buffers has almost no perceptible impact on runtime (likely due to SSO). It’s really std::cin vs scanf.
1 We’re in luck, and the character codes of “r”, “p” and “s” in common encodings differ in the lower two bits, so that we only need a 4×4 lookup and minimal recoding:
static int const result[4][4] = {
// p r s
{ 0, 0, 1, -1}, // paper
{ 0, 0, 0, 0},
{-1, 0, 0, 1}, // rock
{ 1, 0, -1, 0} // scissors
};
…
int const winner = result[move1[0] & 3][move2[0] & 3];
But of course given what I said about the IO bottleneck that’s completely unnecessary obfuscation. | {
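Still, the footnote's claim about the low bits is easy to verify. Here is a quick check (sketched in Python purely for brevity; the solution above is C++) that the low two bits of 'p', 'r' and 's' are distinct, and that rebuilding the table from a "beats" relation reproduces the 4×4 lookup:

```python
# Low two bits of the first letter of each move name; these must be distinct
# for the 4x4 lookup table to work.
codes = {move: ord(move[0]) & 3 for move in ("paper", "rock", "scissors")}
print(codes)  # {'paper': 0, 'rock': 2, 'scissors': 3}

# Rebuild the same win/lose table indexed by those codes:
# 1 means the row move beats the column move, -1 the reverse.
result = [[0] * 4 for _ in range(4)]
beats = {"rock": "scissors", "scissors": "paper", "paper": "rock"}
for winner, loser in beats.items():
    result[codes[winner]][codes[loser]] = 1
    result[codes[loser]][codes[winner]] = -1
```

The resulting `result` matrix matches the C++ table above row for row (index 1 is unused padding).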
"domain": "codereview.stackexchange",
"id": 38131,
"tags": "c++, performance, rock-paper-scissors"
} |
Pangrams python implementation | Question: Given
Roy wanted to increase his typing speed for programming contests. So, his friend advised him to type the sentence "The quick brown fox jumps over the lazy dog" repeatedly, because it is a pangram. (Pangrams are sentences constructed by using every letter of the alphabet at least once.)
After typing the sentence several times, Roy became bored with it. So he started to look for other pangrams. Given a sentence s, tell Roy if it is a pangram or not. Input Format Input consists of a line containing s.
Constraints
Length of s can be at most \$10^3\$ \$(1 \leq |s| \leq 10^3)\$ and it may contain spaces, lower case and upper case letters. Lower case and upper case instances of a letter are considered the same.
Solution 1
from collections import defaultdict
import string
def is_pangram(astr):
lookup = defaultdict(int)
for char in astr:
lookup[char.lower()] += 1
for char in string.ascii_lowercase:
if lookup[char] == 0:
return False
return True
print "pangram" if is_pangram(raw_input()) else "not pangram"
Solution 2
from collections import Counter
import string
def is_pangram(astr):
counter = Counter(astr.lower())
for char in string.ascii_lowercase:
if counter[char] == 0:
return False
return True
print "pangram" if is_pangram(raw_input()) else "not pangram"
Which is better in terms of running time and space complexity?
Answer: Since you do not actually need the count of each character, it would be simpler to use a set and the superset operator >=:
def is_pangram(astr):
    return set(astr.lower()) >= set(string.ascii_lowercase)
Your two solutions are almost identical, though here
for char in astr:
lookup[char.lower()] += 1
it would be faster to lowercase the whole string at once (like you already do in solution 2):
for char in astr.lower():
lookup[char] += 1
Other than that, the only difference is defaultdict vs. Counter. The latter makes your code more elegant, but since the standard library implements Counter in pure Python, there may be a speed advantage to defaultdict. | {
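For completeness, here is the set-based check as a self-contained, runnable sketch (written for Python 3, whereas the solutions above use Python 2's print statement and raw_input):

```python
import string

def is_pangram(astr):
    # A sentence is a pangram iff its lowercased character set
    # contains every ASCII lowercase letter.
    return set(astr.lower()) >= set(string.ascii_lowercase)

print(is_pangram("The quick brown fox jumps over the lazy dog"))  # True
print(is_pangram("Hello world"))  # False
```

This runs in a single pass over the input and uses O(1) extra space, since the character set can never exceed the alphabet size.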
"domain": "codereview.stackexchange",
"id": 15942,
"tags": "python, programming-challenge"
} |
ROS Stack/Package to Deb/Ubuntu package mapping? | Question:
When rosmake cannot build something because of a missing dependency on some particular package it informs the user of the missing package. If it is a system package, it can produce the apt-get line.
However, if it's a ROS package it's not always clear what deb package that might be part of without doing a bit of research on ros.org
Is there a command line way of determining what deb package must be obtained to get a particular ROS package?
I've tried apropos and apt-cache search without much success. Is there a better/endorsed way to do this? Am I missing something obvious?
Originally posted by Asomerville on ROS Answers with karma: 2743 on 2011-08-15
Post score: 1
Answer:
Duplicate of http://answers.ros.org/question/1793/which-deb-package-contains-ros-package-x
Originally posted by tfoote with karma: 58457 on 2011-08-15
This answer was ACCEPTED on the original site
Post score: 0
Original comments
Comment by Asomerville on 2011-08-15:
Oops. Thanks. | {
"domain": "robotics.stackexchange",
"id": 6424,
"tags": "rosdep, rosmake, ubuntu"
} |
Mounting a digital indicator on a plate | Question: I have a flat aluminium plate and I need to mount a digital indicator[1] on it.
The digital indicator doesn't seem to have any mounting holes itself so I
think I need to use a clamp to hold it.
I don't have much experience with industry standard parts in this area and I'd
rather not make something myself if I can buy it. So is there any standard
part or way I can mount this digital indicator on a flat aluminium plate? I can
drill holes in this plate no problem.
Any help is appreciated!
[1] https://shop.mitutoyo.eu/web/mitutoyo/en/mitutoyo/01.04.04A/Digital%20Indicator%20ID-H%2C%20CEE%20AC-Adapter/$catalogue/mitutoyoData/PR/543-563D/index.xhtml
Answer: You could have an 8mm hole drilled and reamed slightly undersize for a press-fit 8mm dowel pin, and use an 8mm/8mm swivel clamp, which would let you set the angle and height as desired (photo from McMaster).
"domain": "engineering.stackexchange",
"id": 3966,
"tags": "mechanical-engineering, materials, fasteners"
} |
global warming due to CO2 vs water | Question: What is the physical reason for CO2 being a vital greenhouse gas when it is present at only approximately 0.03% of atmospheric composition, compared to water vapour, which is present at at least 100 times greater concentration and which is also greatly variable?
Answer: Water vapour can be said to be one of the most important contributors to the greenhouse effect.
Estimating its contribution is a somewhat complex exercise, as the absorption wavelength ranges in the infrared region overlap for the different greenhouse gases.
In some of these overlap regions, the atmosphere already absorbs 100% of the radiation, meaning that adding more greenhouse gases cannot increase absorption at those frequencies.
For other wavelength ranges, only a small proportion is currently absorbed, so higher levels of greenhouse gases do make a difference.
If it were possible to remove all water vapour (except clouds) from the atmosphere, about 40% less infrared, across all frequencies, would be absorbed.
Take away the clouds and all other greenhouse gases, however, and the water vapour alone would still absorb about 60% of the infrared now absorbed.
By contrast, if CO2 alone was removed from the atmosphere, only 15% less infrared would be absorbed.
If carbon dioxide were the only greenhouse gas, it would absorb 26% of the infrared currently absorbed by the atmosphere.
A rough breakdown is that about 50% of the greenhouse effect is due to water vapour, 25% to clouds, and 20% to CO2, with other gases accounting for the remainder.
One may then wonder why climate watchers do not treat water vapour as the important contributor.
The point is that CO2 persists in the atmosphere but water vapour stays for a few days in a place and it has its own cycle of transformations.
Moreover, the amount of water vapour depends on temperature; it acts as a feedback rather than as a radiative forcing in its own right.
The level of CO2 is set by its sources and sinks, and it would take hundreds of years for it to return to pre-industrial levels even if all further emissions ceased.
There is no limit to how much rain can fall, but there is a limit to how much extra CO2 the oceans and other sinks can soak up.
Of course, CO2 is not the only greenhouse gas emitted by humans. And many, such as methane, are far more powerful greenhouse gases in terms of infrared absorption per molecule.
While methane persists for only about a decade before breaking down, other gases, such as the chlorofluorocarbons (CFCs), can persist in the atmosphere for hundreds or thousands of years.
reference https://www.newscientist.com/article/dn11652-climate-myths-co2-isnt-the-most-important-greenhouse-gas/ | {
"domain": "physics.stackexchange",
"id": 31630,
"tags": "water, physical-chemistry"
} |
String Comparison vs. Hashing | Question: I recently learned about the rolling hash data structure, and one of its prime uses is searching for a substring within a string. Here are some advantages that I noticed:
Comparing two strings can be expensive so this should be avoided if possible
Hashing the strings and comparing the hashes is generally much faster than comparing strings, however rehashing the new substring each time traditionally takes linear time
A rolling hash is able to rehash the new substring in constant time, making it much quicker and more efficient for this task
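The constant-time update from the last point can be sketched as a polynomial (Rabin-Karp style) rolling hash. This is a minimal illustration, not the code I benchmarked; the base and modulus below are arbitrary choices:

```python
BASE, MOD = 256, 1_000_000_007

def hash_str(s):
    """Polynomial hash of a whole string (linear time)."""
    h = 0
    for ch in s:
        h = (h * BASE + ord(ch)) % MOD
    return h

def roll(h, old_ch, new_ch, pow_high):
    """Slide the window one position: drop old_ch, append new_ch.
    pow_high is BASE**(window_len - 1) % MOD, precomputed once."""
    h = (h - ord(old_ch) * pow_high) % MOD
    return (h * BASE + ord(new_ch)) % MOD

# Sanity check: the rolled hash always matches a full recomputation.
text, m = "the quick brown fox", 5
pow_high = pow(BASE, m - 1, MOD)
h = hash_str(text[:m])
for i in range(1, len(text) - m + 1):
    h = roll(h, text[i - 1], text[i + m - 1], pow_high)
    assert h == hash_str(text[i:i + m])
```

Each `roll` call does a constant number of arithmetic operations, regardless of the window length, which is the whole point of the technique.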
I went ahead and implemented a rolling hash in JavaScript and began to analyze the speed between a rolling hash, traditional rehashing, and just comparing the substrings against each other.
In my findings, the larger the substring, the longer it took for the traditional rehashing approach to run (as expected) where the rolling hash ran incredibly fast (as expected). However, comparing the substrings together ran much faster than the rolling hash. How could this be?
For the sake of perspective, let's say the running times for the functions searching through a ~2.4 million character string for a 100 character substring were the following:
Rolling Hash - 0.691 seconds
Traditional Rehashing - 71.009 seconds
Just comparing the strings (no hashing) - 0.100 seconds
How could the string comparing be so much faster than the rolling hash? Could it just have something to do with the particular language I tested this in? Strings are a primitive type in JavaScript; would this cause string comparisons to run in constant time?
Answer: One possible hypothesis:
Javascript is interpreted, which can slow things down.
When you compare two strings, this will invoke a string comparison routine that is coded directly in assembly language or C, and thus string comparison will probably be about as fast as the architecture can possibly compare strings. In contrast, when you hash strings, you have to execute multiple instructions in Javascript, which might incur overhead from an interpreter.
Another possible hypothesis:
Numbers in Javascript are a floating-point type. There is no integer type in Javascript. Your rolling hash probably involves what looks like integer arithmetic. However, under the covers this is probably getting interpreted as operations on double-precision floats, which can be a bit slower. This might also make your hash-based schemes slower. Some Javascript engines might optimize this to integer arithmetic in certain circumstances, but no guarantees about whether any particular Javascript engine will or won't be able to do that in any particular case. | {
"domain": "cs.stackexchange",
"id": 5887,
"tags": "programming-languages, runtime-analysis"
} |
Why has the Shadow robot hand been utilized for teleoperation? | Question: With a deep neural network it's possible to convert a camera signal into the movement trajectory of a dexterous robotic hand.[1] In contrast to previous attempts at teleoperation, which work with a data glove or a joystick, the visual shape of the hand controls the robot. The human operator uses their own hand to make a hand sign, and the Shadow hand performs the same action.
In this setup, a human operator is in the loop, which means it has little to do with Artificial Intelligence. A real AI system would control the robot with software and a motion planner, and could grasp objects on its own. If the sensor signals are taken from a human, the system is only semi-autonomous. Does this make sense if the aim is to program autonomous robots? Wouldn't it be better to program an AI which controls the hand autonomously?
[1] Li, Shuang, et al. "Vision-based teleoperation of shadow dexterous hand using end-to-end deep neural network." arXiv preprint arXiv:1809.06268 (2018).
Answer: Teleoperation is about letting the human do what humans do best, and letting the robot do what automation does best. There are some situations in which it would take too long, or cost too much, to construct an effective AI system to do certain tasks. Check out, for example, Bejczy's work at JPL, Sheridan's at MIT, and Hamel's at Oak Ridge. The idea is to not need to program the entire human intelligence about the task, but instead allow the operator to respond to dynamic task scenarios as if they were present at the task, while taking advantage of the increased strength, stability, and the like from the robot.
As AI continues to progress, the line marking the efficiency of implementing telerobots vs complete automation will continue to move toward increasing amounts of automation. But it will be very challenging to efficiently build an AI engine that can accommodate the unexpected things that a human can handle, in a reasonable amount of time, when the tasks are quite variable and in dynamic environments (think wartime, or catastrophic outside environments, or highly specialized surgeries). | {
"domain": "robotics.stackexchange",
"id": 1947,
"tags": "hri"
} |
Implementation of Block Orthogonal Matching Pursuit (BOMP) Algorithm | Question: How would one implement the Block Orthogonal Matching Pursuit (BOMP) Algorithm in MATLAB?
Answer: The Block Orthogonal Matching Pursuit (BOMP) Algorithm is basically the Orthogonal Matching Pursuit (OMP) Algorithm with a single major difference: instead of selecting the single index which maximizes the correlation, we choose a set of indices, i.e. a block of columns of the matrix and the corresponding entries of the solution vector.
A good reference for the algorithm is given in:
An Optimal Condition for the Block Orthogonal Matching Pursuit Algorithm.
Block Sparsity: Coherence and Efficient Recovery.
The code is given by:
function [ vX ] = SolveLsL0Bomp( mA, vB, numBlocks, paramK, tolVal )
% ----------------------------------------------------------------------------------------------- %
%[ vX ] = SolveLsL0Bomp( mA, vB, numBlocks, paramK, tolVal )
% Minimizes Least Squares of Linear System with L0 Constraint Using
% Block Orthogonal Matching Pursuit (BOMP) Method.
% \arg \min_{x} {\left\| A x - b \right\|}_{2}^{2} subject to {\left\| x
% \right\|}_{2, 0} \leq K
% Input:
% - mA - Input Matrix.
% The model matrix (Fat Matrix). Assumed to be
% normalized. Namely norm(mA(:, ii)) = 1 for any
% ii.
% Structure: Matrix (m X n).
% Type: 'Single' / 'Double'.
% Range: (-inf, inf).
% - vB - input Vector.
% The model known data.
% Structure: Vector (m X 1).
% Type: 'Single' / 'Double'.
% Range: (-inf, inf).
% - numBlocks - Number of Blocks.
% The number of blocks in the problem structure.
% Structure: Scalar.
% Type: 'Single' / 'Double'.
% Range: {1, 2, ...}.
% - paramK - Parameter K.
% The L0 constraint parameter. Basically the
% maximal number of active blocks in the
% solution.
% Structure: Scalar.
% Type: 'Single' / 'Double'.
% Range: {1, 2, ...}.
% - tolVal - Tolerance Value.
% Tolerance value for equality of the Linear
% System.
% Structure: Scalar.
% Type: 'Single' / 'Double'.
% Range [0, inf).
% Output:
% - vX - Output Vector.
% Structure: Vector (n X 1).
% Type: 'Single' / 'Double'.
% Range: (-inf, inf).
% References
% 1. An Optimal Condition for the Block Orthogonal Matching Pursuit
% Algorithm - https://ieeexplore.ieee.org/document/8404118.
% 2. Block Sparsity: Coherence and Efficient Recovery - https://ieeexplore.ieee.org/document/4960226.
% Remarks:
% 1. The algorithm assumes 'mA' is normalized (Each column).
% 2. The number of columns in matrix 'mA' must be an integer
% multiplication of the number of blocks.
% 3. For 'numBlocks = numColumns' (Equivalent of 'numElmBlock = 1') the
% algorithm becomes the classic OMP.
% Known Issues:
% 1. A
% TODO:
% 1. Pre Process 'mA' by normalizing its columns.
% Release Notes:
% - 1.0.000 19/08/2019
% * First release version.
% ----------------------------------------------------------------------------------------------- %
numRows = size(mA, 1);
numCols = size(mA, 2);
numElmBlock = numCols / numBlocks;
if(round(numElmBlock) ~= numElmBlock)
error('Number of Blocks Doesn''t Match Size of Arrays');
end
vActiveIdx = false([numCols, 1]);
vR = vB;
vX = zeros([numCols, 1]);
activeBlckIdx = [];
for ii = 1:paramK
maxCorr = 0;
for jj = 1:numBlocks
vBlockIdx = (((jj - 1) * numElmBlock) + 1):(jj * numElmBlock);
currCorr = norm(mA(:, vBlockIdx).' * vR); % Norm of the block correlation (scalar)
if(currCorr > maxCorr)
activeBlckIdx = jj;
maxCorr = currCorr;
end
end
vBlockIdx = (((activeBlckIdx - 1) * numElmBlock) + 1):(activeBlckIdx * numElmBlock);
vActiveIdx(vBlockIdx) = true();
vX(vActiveIdx) = mA(:, vActiveIdx) \ vB;
vR = vB - (mA(:, vActiveIdx) * vX(vActiveIdx));
resNorm = norm(vR);
if(resNorm < tolVal)
break;
end
end
end
The MATLAB code is available at my StackExchange Signal Processing Q60197 GitHub Repository (Look at the SignalProcessing\Q60197 folder).
In the full code I compare the Block implementation to OMP to verify the implementation. | {
"domain": "dsp.stackexchange",
"id": 7801,
"tags": "matlab, compressive-sensing, optimization, linear-algebra, sparse-model"
} |
electric field and electrostatic potential of non conducting overlapping spheres | Question:
Here is how I approached the question. Since the charge densities on the two spheres are of opposite polarity, the electric field is zero midway between them. This implies A is correct. Above and below the midway line, the magnitude of the electric field is constant. This implies C is correct. Since C is correct, B is also correct.
But the answers according to the book are C and D.
I assumed the spheres are partially kept one over other and overlapping region is referred in the question.
Please help me to improve my concepts and solve the question.
Answer: This is a difficult problem. It hinges on the fact that, inside the individual spheres, the field is radial and linear in the distance, i.e. $\vec E(\vec r)\sim \vec r$. Thus, if $\vec R$ is the vector joining the two centres of the spheres, the electric field in the rightmost sphere will be $\sim \vec r_2=(\vec r_1-\vec R)$, where $\vec r_2$ is the location of a point as measured from the centre of the second sphere.
Thus, in your question, the field of the first sphere at $\vec r_1$ minus the field of the second sphere at $\vec r_1-\vec R$ will give you a constant field along $\vec R$. | {
"domain": "physics.stackexchange",
"id": 38234,
"tags": "homework-and-exercises, electrostatics, electric-fields"
} |
Goldstein's derivation of Noether's theorem | Question: This is a followup to my previoucs question: Translation invariance Noether's equation
In Goldstein's derivation of the Noether's theorem in chapter 13,
we have the infinitesimal transformation $$x'^\mu \rightarrow x^{\mu} + \delta x^{\mu}$$
$$\eta_\rho'(x'^\mu)=\eta_\rho(x^\mu)+\delta\eta_\rho(x^\mu).$$ The Lagrangian becomes $$\tag{13.129} \mathcal{L}(\eta_\rho(x^\mu),\eta_{\rho,\nu}(x^\mu),x^\mu)\rightarrow \mathcal{L}'(\eta'_\rho(x'^\mu),\eta'_{\rho,\nu}(x'^\mu),x'^\mu)$$
My question is: are the two sides of 13.129 just equal?
If that is the case then, in $$\tag{13.133} \int_{\Omega}\mathcal{L}(\eta_\rho(x^\mu),\eta_{\rho,\nu}(x^\mu),x^\mu)d^4x=\int_{\Omega'}\mathcal{L}'(\eta'_\rho(x'^\mu),\eta'_{\rho,\nu}(x'^\mu),x'^\mu)d^4x'$$
Since both sides of 13.133 are equal, so all we need to satisfy 13.133 is that the Jacobian from $x^\mu$ to $x'^\mu$ is 1?
Answer: Yes, those are equal, and moreover you do not even need the Jacobian from $x^{\mu}$ to $x'^{\mu}$, because when the authors write $d^4x$ they actually mean the invariant integration measure $\sqrt{|g|}dx^0dx^1dx^2dx^3$.
You can verify yourself that the aforementioned integration measure is invariant under general coordinate transformations.
So, the following two equations hold separately:
$\mathcal{L}'\Big(\phi'(x'^{\mu}),\partial'_{\rho}\phi'(x'^{\mu}),x'^{\mu}\Big)=\mathcal{L}\Big(\phi(x^{\mu}),\partial_{\rho}\phi(x^{\mu}),x^{\mu}\Big)$
$d^{4}x'=\sqrt{g'}\,dx'^{0}dx'^{1}dx'^{2}dx'^{3}=\sqrt{g}\,dx^{0}dx^{1}dx^{2}dx^{3}=d^{4}x$
where $\sqrt{g}=\sqrt{|\text{det}(g)|}$ | {
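The invariance the answer asks the reader to verify follows in one line from the tensor transformation law of the metric; spelled out (standard textbook steps, added here for completeness):

```latex
% Under x -> x', the metric transforms as a rank-2 tensor, so taking
% determinants of g'_{\mu\nu} = (dx^rho/dx'^mu)(dx^sigma/dx'^nu) g_{rho sigma}:
\det g' = \left(\det\frac{\partial x}{\partial x'}\right)^{2} \det g
\quad\Longrightarrow\quad
\sqrt{|g'|} = \left|\det\frac{\partial x}{\partial x'}\right| \sqrt{|g|} .
% The volume element transforms with the inverse Jacobian:
d^{4}x' = \left|\det\frac{\partial x'}{\partial x}\right| d^{4}x ,
% so the two Jacobian factors cancel in the product:
\sqrt{|g'|}\, d^{4}x' = \sqrt{|g|}\, d^{4}x .
```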
"domain": "physics.stackexchange",
"id": 90124,
"tags": "classical-mechanics, noethers-theorem, classical-field-theory"
} |
What is the origin of the client server model? | Question: I was wondering if someone knew the origin of the client server model. Where does the term come from (paper, software application, book)?
Answer: This is a good question.
It appears that the term server was commonly used already in the 1960s. For example, RFC 5, which was published in 1969, already uses the term, and it seems that it was in common use already back then.
However, the term client in this context seems to be much more recent; the earliest references that I was able to find are from 1978. The following paper seems to be the earliest hit:
Jay E. Israel et al. (1978): Separating Data From Function in a Distributed File System.
I did not find the full text of this paper. It seems that it was published in the Proceedings of the Second International Symposium on Operating Systems Theory and Practice, which was held in October 1978. A preview is available here; I am quoting the relevant part (emphasis mine):
The distributed file system (DFS) is so named because it is implemented on a cooperating set of server computers which together create the illusion of a single, logical system. The other computers in the network that use the DFS for creating, destroying, and randomly accessing files are called its clients (we employ the term "user" to stand only for human users; programs that access the DFS are always called clients).
This looks like a good candidate of the first paper that uses the client-server terminology. Note the way it is written: the authors clearly assume that the reader is familiar with the term "server", but they are here introducing the unfamiliar term "client"—so strange that they have to justify its use.
I checked various resources, including the digital libraries of IEEE and ACM, and I was not able to find any hits that predate 1978. However, already in 1979 there was at least one paper that is boldly using the new term "client" in its title. Unsurprisingly, it is citing Israel et al. (1978).
OED knows the term, but again the earliest use is by Israel et al.
Edit: Here are some further comments on the term "server". Looking at various papers written in the 1960s, it seems that the term "server" was primarily used in the context of queueing theory, where a "server" can be any kind of entity that provides some service.
Whenever a "server computer" was mentioned in computer science papers written in the 1960s, it was typically related to applications of queueing theory in the context of computer systems. Perhaps this is the origin of the term in our field?
I am not sure what is the first instance of a "server" used in this sense without any direct connection to queueing theory.
However, RFC 5 from 1969 that I mentioned above seems to be already using the term "server" in the context of client-server systems and computer networks, without any explicit references to queueing theory. Of course the term "client" was not introduced yet, so they used the words "server-host" and "user-host". | {
"domain": "cs.stackexchange",
"id": 4803,
"tags": "terminology, reference-request, distributed-systems, history"
} |
Bumblebee2 applicability for long-range object detection | Question:
Hello.
Maybe this question is a little off-topic here (sorry if so); I don't know where else to ask it.
I need a method to detect the following kinds of objects using a stereo camera:
any large obstacle, like building or tree, on a distance about 15m (tolerance doesn't matter, it's just for long-term collision avoidance)
human body on distance about 7m
a human's hand on distance about 2m
(Assuming good lighting, clear view, and so on)
Question for the people who have practiced with Bumblebee2: is this camera a good choice for these tasks? (http://www.ptgrey.com/products/bumblebee2/bumblebee2_stereo_camera.asp)
The Point Grey folks gave me nice guidelines about the stereo accuracy: http://www.ptgrey.com/support/kb/index.asp?a=4&q=103, but this isn't enough to make decision.
Thanks in advance,Pavel.
UPD: Are there any other stereo cameras that are supported by ROS? A TOF camera seems to be a poor choice because of long ranges.
Originally posted by Spym on ROS Answers with karma: 32 on 2012-03-19
Post score: 0
Answer:
I think that the Bumblebee will work for what you are looking at.
Generally, stereo vision is going to be a little less accurate than a structured light approach (like the Kinect), especially with fine-grained details.
I know that in my experimentation with the Bumblebee, hands and arms come through fairly well, but fingers tend to show up as a "mitten" with no fine detail.
As for Bumblebee support in ROS, it's a bit fragmented. Unfortunately, Point Grey hasn't released an updated library that will work with the newer Ubuntu systems. Because of this, using their libraries natively with ROS is impossible.
It is still possible to capture images from the Bumblebee and then run them through OpenCV's stereo block matcher (what is used in stereo_image_proc), but you lose the benefit of Point Grey's factory calibration for rectification.
Originally posted by mjcarroll with karma: 6414 on 2012-03-19
This answer was ACCEPTED on the original site
Post score: 3 | {
"domain": "robotics.stackexchange",
"id": 8638,
"tags": "pcl, bumblebee, stereo, bumblebee2, pointgrey"
} |
Setting controller, action, and values based on the number of URL chunks | Question: Is there any way to improve this code? It looks a bit ugly for me.
if($url_chunks_num == 1) {
$this->controller = $this->pageData[0];
} elseif($url_chunks_num == 2) {
$this->controller = $this->pageData[0];
$this->action = $this->pageData[1];
} elseif($url_chunks_num > 2) {
$this->controller = $this->pageData[0];
$this->action = $this->pageData[1];
$this->values = array_slice($this->pageData, 2);
}
I thought about nested conditions, but this option is ugly as well.
Answer: I agree with luiscubal about restructuring the code (with the one minor difference that I would use braces); however, I think you could make something a bit cleaner with array_shift.
$this->controller = array_shift($this->pageData);
$this->action = array_shift($this->pageData);
$this->values = $this->pageData;
This is functionally equivalent to the code luiscubal posted, though his does not define defaults. (The two strings would default to NULL and then $this->values would default to array().)
This does change $this->pageData though, so if you need to keep that data around (which is implied by it being a property), you'll need to either use luiscubal's approach or create a temporary copy.
Since the array is small, and PHP is typically copy-on-write, making a copy shouldn't be an issue, however, if you're worried, you could take a non-direct route to basically have the same amount of copying as the if trees have:
$this->values = $this->pageData;
$this->controller = array_shift($this->values);
$this->action = array_shift($this->values); | {
"domain": "codereview.stackexchange",
"id": 1948,
"tags": "php, url-routing"
} |
Conceptual Question from Signal Processing - Impulse Response and AR Coefficients | Question: In continuation to the previous question Conceptual questions from signal processing I have a doubt which is: Consider an Autoregressive model (AR(2)):
$$
y(t) = ay(t-1) + by(t-2)
$$
and a FIR (Moving Average, MA(2)) model
$$
x(t) = a\epsilon(t-1) + b\epsilon(t-2).
$$
According to the reply in the prev question, in time domain
$$
y[n] = h[n]\star x[n]
$$
$h$ is the impulse response.
Is there any relation between impulse response and the coefficients of AR and MA model?
What is the intuition of the coefficients and how do we get them?
Answer: The impulse response is basically the set of FIR coefficients of the system.
Namely, a system $H$ with an impulse response given by $f[n]$ and a filter $F$ with the FIR representation $\{f[0], f[1], \ldots, f[n]\}$ are equivalent.
Now, systems with feedback correspond to IIR (AR) filters rather than finite FIR filters.
But given an FIR model of infinite length, any LTI system can be represented by FIR coefficients.
The relation can be displayed easily via the transfer function (the Laplace or Z transform) of the system.
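As a small illustration (a plain-Python sketch with hypothetical helper names, not the notation of the question): driving an MA filter with a unit impulse returns exactly its coefficients, while an AR filter's impulse response decays geometrically and never truncates, i.e. it is IIR:

```python
def ma_filter(x, coeffs):
    """MA/FIR filter: y[n] = sum_k coeffs[k] * x[n-k]."""
    return [sum(c * x[n - k] for k, c in enumerate(coeffs) if n - k >= 0)
            for n in range(len(x))]

def ar1_filter(x, a):
    """AR(1) filter with feedback: y[n] = a*y[n-1] + x[n]."""
    y = []
    for n, xn in enumerate(x):
        y.append((a * y[n - 1] if n > 0 else 0) + xn)
    return y

impulse = [1, 0, 0, 0, 0]
print(ma_filter(impulse, [0.5, 0.3]))  # [0.5, 0.3, 0.0, 0.0, 0.0] -> the coefficients themselves
print(ar1_filter(impulse, 0.5))        # [1, 0.5, 0.25, 0.125, 0.0625] -> never truncates (IIR)
```

This is exactly the statement above: for an FIR system the impulse response and the coefficients coincide, while the feedback system's impulse response is infinite in length.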
"domain": "dsp.stackexchange",
"id": 1812,
"tags": "linear-systems, impulse-response, moving-average, autoregressive-model"
} |
Graph States subjected to finite erasures | Question: The appendix to the paper Graph States as a Resource for Quantum Metrology states that when graph states are subjected to finite erasures, $$G\Rightarrow \mathrm{Tr}_{\vec{y}}G.$$ More explicitly, it writes the formula as
$$G_{\vec{y}} = \mathrm{const}\times\sum_{\vec{j}}Z_{\vec{j}}\lvert G\rangle\langle G\rvert Z_{\vec{j}}.$$
I was wondering, how can these two statements the same, i.e., why the erasure error can be stated as the operation of $Z$? I know $I + Z$ can be the erasure error, but how can $Z$ alone stand for the erasure error?
Answer: The idea of erasure being projection onto $|0\rangle$ is perhaps misleading in this context (my fault for mentioning it in a comment without having looked at the full details of what this specific paper did). This paper does not project the set of qubits $y$ onto the $|0\rangle$ state. Instead, they trace out those qubits. Perhaps the best way of writing this, to be more consistent with the notation is
$$
G\rightarrow \text{Tr}_yG\otimes \frac{I_y}{2^{|y|}}.
$$
If you think of $G$ as some sum of terms in the Pauli basis,
$$
G=\sum_{x\in\{0,1,2,3\}^n}\alpha_x\sigma_x,
$$
then what the partial trace is doing is selecting all the $x$ for which the $y$ components of $\sigma_x$ are all $I$.
So, let us consider terms
$$
\sum_{z\in\{0,1\}^{n}}'Z_z\sigma_xZ_z.
$$
I'm using the notation $\sum'$ to denote the fact that while $z\in\{0,1\}^n$, I'm only summing over terms where $z$ is 0 on every site not specified by $y$ or the neighbours of $y$ (i.e. those not in the set $L_y$ in the paper's notation).
There are two possibilities for the fixed $x$ and a particular $z$ - either $Z_z$ commutes or anti-commutes with $\sigma_x$.
$$
Z_z\sigma_xZ_z=(-1)^{x'\cdot z}\sigma_x.
$$
Here I'm using $x'$ to denote a binary string derived from $x$ which is 1 for a given site if $x$ yielded an $X$ or $Y$.
Thus,
$$
\sum_{z\in\{0,1\}^{n}}'Z_z\sigma_xZ_z=\sigma_x\sum_{z\in\{0,1\}^{n}}'(-1)^{x'\cdot z}.
$$
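As an aside, the commute/anti-commute cancellation is easy to check numerically at a single site (my own illustration, not from the paper):

```python
import numpy as np

# Single-site version of the sum: conjugating by Z and summing over
# z in {0,1} keeps Paulis that commute with Z and kills the rest.
I = np.eye(2)
X = np.array([[0.0, 1.0], [1.0, 0.0]])
Z = np.diag([1.0, -1.0])

def z_twirl(sigma):
    """Sum over z in {0,1} of Z^z @ sigma @ Z^z."""
    return sigma + Z @ sigma @ Z

# z_twirl(I) == 2*I and z_twirl(Z) == 2*Z  (commuting terms survive, doubled)
# z_twirl(X) == 0                           (anticommuting terms cancel)
```

Returning to the $n$-site sum above: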
This is $2^{|L_y|}\sigma_x$ if $x'$ is 0 on all the sites $y$ and its neighbours. It's 0 otherwise. This immediately excludes any stabilizer (product of generators) that contains a generator (of the form $XZZZ\ldots Z$) where the $X$ acts on one of the sites to be traced out, or one of its neighbours. Since all generators act on a single vertex and its neighbours, this means that the only terms remaining do not act at all on the sites being traced out, i.e. they are $I$ on those sites. Exactly the set of stabilizers you were trying to select. | {
"domain": "quantumcomputing.stackexchange",
"id": 2637,
"tags": "noise, graph-states"
} |
Ray Diagrams: Where is the eyepiece located in a reflector telescope? | Question: I'm in the process of building my own reflector telescope; I have an 8" primary mirror with a focal length of 1200mm.
Of course a telescope has a focuser that lets the eyepiece move up and down until the image of whatever you're observing is perfectly in focus. My question is - where on a ray diagram is this 'perfect focus' found, and what determines it?
For an object at infinity, reflected by a concave mirror, the image is formed at the focal point. Therefore, I originally assumed that focus is achieved when the eyepiece is at the focal point, and the image is magnified by the eyepiece so that it's visible to a human eye.
However I've seen several ray diagrams that depict the eyepiece as being positioned some distance beyond the focal point, where the rays have started to diverge, and the eyepiece then 'straightens out' the rays so that they're parallel. If this is the case, what determines this distance?
Answer: The mirror forms a real image at its focal point, the primary image. The eyepiece forms a virtual image of that real image. To do so it must be at a distance from it, as shown in your diagram. If it's one focal length of the eyepiece away, the eyepiece forms an image at infinity. But people have different ability to focus at different distances (they're maybe near-sighted or far-sighted) so there has to be some ability to focus the eyepiece from a focal length away from the primary image to a closer distance from it. There is no "perfect focus." | {
"domain": "physics.stackexchange",
"id": 76189,
"tags": "optics, astronomy, geometric-optics, telescopes"
} |
Born rule for a sequence of measurements: Why this particular form? | Question: If we have some observable $O=\sum_i\lambda_iP_i$ where $P_i$ are the usual projectors you get in a spectral decomposition, then the probability for a single measurement yielding an outcome $\lambda_j$ is $$p(\lambda_j) = \mathrm{Tr}\left[P_j\rho\right]$$ If at a later time the observable $O' = \sum_i \lambda'_iP'_i$ is measured, the probability of the sequence of outcomes $\lambda_j\lambda'_k$ is often given in literature as $$p(\lambda_j,\lambda'_k) = \mathrm{Tr}\left[P'_kP_j\rho P_jP'_k\right]$$ See e.g. this paper. But I have never seen this written as $$p(\lambda_j,\lambda'_k) = \mathrm{Tr}\left[P'_kP_j\rho\right]$$ Is there a reason this second expression would not hold?
Answer: Using the cyclic property of the trace, and the fact that projectors are idempotent we have:
$$\mathrm{tr}[P'_kP_j\rho P_jP'_k] = \mathrm{tr}[P_jP_k'P_j\rho].$$
Thus we would like to know if it is true that
$$\mathrm{tr}[P_jP_k'P_j\rho] = \mathrm{tr}[P_k'P_j\rho].$$
There are many ways to see that this is not the case, for example, let $P=|0\rangle\langle0|$ and $P'=|+\rangle\langle+|$ be the two projectors, where as usual $$|+\rangle = \frac{1}{\sqrt{2}}\big(|0\rangle+|1\rangle\big)$$
Then clearly $$P'P= \frac{1}{\sqrt2}|+\rangle\langle0|$$ and $$PP'P= \frac{1}{2}|0\rangle\langle0|$$
and if you let $\rho=|+\rangle\langle+|$ then you see that they are different.
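A quick NumPy check of these numbers:

```python
import numpy as np

# The counterexample above: P = |0><0|, P' = |+><+|, rho = |+><+|.
ket0 = np.array([1.0, 0.0])
ketp = np.array([1.0, 1.0]) / np.sqrt(2.0)   # |+>

P = np.outer(ket0, ket0)     # |0><0|
Pp = np.outer(ketp, ketp)    # |+><+|
rho = np.outer(ketp, ketp)   # state |+><+|

lhs = np.trace(P @ Pp @ P @ rho)   # tr[P P' P rho] = 1/4
rhs = np.trace(Pp @ P @ rho)       # tr[P' P rho]   = 1/2
```

The two traces come out to $1/4$ and $1/2$ respectively, so the two expressions indeed differ.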
More abstractly, the map $$(A,B)\longmapsto\mathrm{tr}\left[A^\dagger B\right]$$ is a scalar product (the Hilbert-Schmidt inner product) on the vector space of trace-class linear operators on a Hilbert space. In particular it is nondegenerate, which means that
$$(\forall \rho:\mathrm{tr}[P_jP_k'P_j\rho] = \mathrm{tr}[P_k'P_j\rho]) \iff P_jP_k'P_j = P_k'P_j,$$
which clearly cannot hold in general, as the counterexample above shows. | {
"domain": "physics.stackexchange",
"id": 82843,
"tags": "quantum-mechanics, quantum-measurements, born-rule"
} |
LSZ Reduction formula: Peskin and Schroeder | Question: On page 224 of P&S, they have the following expression (7.36),
The integral over $d^3q$ sets all the $q \to p$; then the integral over $dx^0$ is computed. The RHS given matches when only the $x^0 = T_+$ term of the integral is kept; what happens to the $x^0=\infty$ term?
Answer: I think I can answer your question. Some complex analysis is used in order to derive the above-mentioned result. Starting with the RHS expression,
$$\sum_{\lambda}\int_{T_+}^{\infty}dx^0\int\frac{d^3q}{(2\pi)^3}
\frac{1}{2E_q(\lambda)}
e^{i(p^0-q^0+i\epsilon)x^0}\langle\Omega|\phi(0)|\lambda_0\rangle
(2\pi)^3\delta^3(\textbf{p}-\textbf{q})
\\\times\langle\lambda_{\textbf{q}}|T\{\phi(z_1)...\}|\Omega\rangle=\\
\sum_{\lambda}\int\frac{d^3q}{(2\pi)^3}
\frac{1}{2E_q(\lambda)}
\bigg[\frac{e^{i(p^0-q^0+i\epsilon)x^0}}{i(p^0-q^0+i\epsilon)}\bigg]_{T_+}^{\infty}
\\\langle\Omega|\phi(0)|\lambda_0\rangle
(2\pi)^3\delta^3(\textbf{p}-\textbf{q})
\times\langle\lambda_{\textbf{q}}|T\{\phi(z_1)...\}|\Omega\rangle$$
from which one only needs to consider the term in the brackets, namely
$$\frac{e^{i(p^0-q^0+i\epsilon)x^0}}{i(p^0-q^0+i\epsilon)}
\Bigg|_{T_+}^{\infty}=
-i\lim_{x^0\to\infty}
\frac{e^{i(p^0-q^0+i\epsilon)x^0}}{(p^0-q^0+i\epsilon)}
+i\frac{e^{i(p^0-q^0+i\epsilon)T_+}}{(p^0-q^0+i\epsilon)}$$
The last term is the term you are looking for, see LHS of Eq. (7.36) in P&S. The first term vanishes, since
$$-i\lim_{x^0\to\infty}
\frac{e^{i(p^0-q^0+i\epsilon)x^0}}{(p^0-q^0+i\epsilon)}=
-\frac{i}{(p^0-q^0+i\epsilon)}
\lim_{x^0\to\infty}e^{i(p^0-q^0)x^0}
\lim_{x^0\to\infty}e^{-\epsilon x^0}$$
This result is zero, since the factor $e^{-\epsilon x^0}$ vanishes as $x^0\to\infty$, while $e^{i(p^0-q^0)x^0}$ merely oscillates with unit modulus.
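As a quick numerical sanity check of this argument (the values of $p^0-q^0$, $\epsilon$ and $T_+$ below are arbitrary):

```python
import numpy as np

# Arbitrary example values for p0 - q0, epsilon, and T_+.
delta, eps, T = 0.7, 0.1, 1.0
omega = delta + 1j * eps        # p0 - q0 + i*epsilon

# Numerically integrate exp(i*omega*x0) from T_+ up to a large cutoff;
# the e^{-eps*x0} damping makes the neglected tail exponentially small.
x = np.linspace(T, 200.0, 1_000_001)
f = np.exp(1j * omega * x)
numeric = np.sum((f[:-1] + f[1:]) * np.diff(x)) / 2.0   # trapezoid rule

# Analytic result: the boundary term at x0 -> infinity vanishes, and
# only the x0 = T_+ contribution survives: i * exp(i*omega*T) / omega.
analytic = 1j * np.exp(1j * omega * T) / omega
```

The numerical integral agrees with the analytic $x^0=T_+$ boundary term, confirming that the $x^0=\infty$ contribution drops out.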
I hope this helps. | {
"domain": "physics.stackexchange",
"id": 97081,
"tags": "quantum-field-theory, correlation-functions, regularization"
} |
ar_track_alvar image topic to point cloud | Question:
Hi there!
This is my first time using AR Tags in ROS.
I already had my camera with /image_raw and /camera_info topics published.
However, when I ran "roslaunch ar_track_alvar pr2_indiv.launch"
there was an error saying:
"Could not process inbound connection: topic types do not match: [sensor_msgs/PointCloud2] vs [sensor_msgs/Image]"
I believe my sensor_msgs/Image topic is already correct, since I can visualize it in rviz and in image_view.
Is this a bug? In the instructions it is clearly stated that an Image topic is used...
Thanks for your help!
Originally posted by Dipta on ROS Answers with karma: 3 on 2014-04-08
Post score: 0
Answer:
If you are using ar_track_alvar with a normal camera (not with a Kinect) use the pr2_indiv_no_kinect.launch launch-file.
Originally posted by BennyRe with karma: 2949 on 2014-04-08
This answer was ACCEPTED on the original site
Post score: 4 | {
"domain": "robotics.stackexchange",
"id": 17588,
"tags": "ros, image, pointcloud, ar-track-alvar"
} |
Making a webService file smaller | Question: I'm developing an iOS app, and I have a file, WebService.m, that is now 2300 lines long. My file contains all the web service calls on my app, with their parsing if successful, like:
- (void)searchSettingsWithSuccess:(void (^)(UserSettings *settings))success
failure:(void (^)(WKWebServiceError *error))failure
{
if ([WKUtil isUsingLocalData])
{
success([self parseSettingsJsonResponse:[WKUtil loadJsonFile:@"UserSettings"]]);
}
else
{
// Query the location building from web service
AFHTTPRequestOperationManager *manager = [AFHTTPRequestOperationManager manager];
[self addHeaderAuthToken:manager];
[self updateSecurityPolicy:manager];
NSString *url = [NSString stringWithFormat:@"%@%@", API_URL_ROOT, API_URL_USER_SETTINGS];
[manager GET:url
parameters:nil
timeInterval:10
success:^(AFHTTPRequestOperation *operation, id responseObject)
{
if ([self isSuccessJsonResponse:responseObject])
{
success([self parseSettingsJsonResponse:responseObject]);
}
else
{
failure([self parseErrorJsonResponse:responseObject apiUrl:API_URL_USER_SETTINGS]);
}
}
failure:^(AFHTTPRequestOperation *operation, NSError *error)
{
failure([WKWebServiceError errorWithSystemError:error apiUrl:API_URL_USER_SETTINGS]);
}];
}
}
I then parse it:
- (UserSettings*)parseSettingsJsonResponse:(NSDictionary*)responseObject
{
NSDictionary *data = [WKUtil parseJsonDictionary:[responseObject objectForKey:API_KEY_DATA]];
if (data == nil)
return nil;
UserSettings *settings = [[UserSettings alloc] init];
settings.doNotDisturbDate = [WKUtil parseJsonDateString:[data objectForKey:API_KEY_DO_NOT_DISTURB_END_TIME] format:DATE_FORMAT_YEAR_TIME];
settings.isDoNotDisturb = [[WKUtil parseJsonNumberBool:[data objectForKey:API_KEY_IS_DO_NOT_DISTURB]] boolValue];
settings.isPushNotifications = [[WKUtil parseJsonNumberBool:[data objectForKey:API_KEY_IS_PUSH_NOTIFICATIONS]] boolValue];
settings.isCrisisAvailable = [[WKUtil parseJsonNumberBool:[data objectForKey:API_KEY_IS_CRISIS_AVAILABLE]] boolValue];
settings.primaryMobile = [WKUtil parseJsonString:[data objectForKey:API_KEY_PRIMARY_MOBILE]];
settings.capabilities = [WKUtil parseJsonArray:[data objectForKey:API_KEY_CAPABILITIES]];
return settings;
}
Should I place the parsing of the return data on another file?, or what other method would make this file smaller?
Answer: I think you should try to keep your web-service methods separately in relevant classes. In most cases, we are fetching data models or some results from a web service, so it is better to keep them in relevant files.
For example you can put
- (void)searchSettingsWithSuccess:(void (^)(UserSettings *settings))success
failure:(void (^)(WKWebServiceError *error))failure
method in a controller related to settings. | {
"domain": "codereview.stackexchange",
"id": 9398,
"tags": "objective-c, json, http"
} |
Is a "local" version of 3-SAT NP-hard? | Question: Below is my simplification of part of a larger research project on spatial Bayesian networks:
Say a variable is "$k$-local" in a string $C \in 3\text{-CNF}$ if there are fewer than $k$ clauses between the first and last clause in which it appears (where $k$ is a natural number).
Now consider the subset $(3,k)\text{-LSAT} \subseteq 3\text{-SAT}$ defined by the criterion that for any $C \in (3,k)\text{-LSAT}$, every variable in $C$ is $k$-local. For what $k$ (if any) is $(3,k)\text{-LSAT}$ NP-hard?
Here is what I have considered so far:
(1) Variations on the method of showing that $2\text{-SAT}$ is in P by rewriting each disjunction as an implication and examining directed paths on the directed graph of these implications (noted here and presented in detail on pp. 184-185 of Papadimitriou's Computational Complexity). Unlike in $2\text{-SAT}$, there is branching of the directed paths in $(3,k)\text{-LSAT}$, but perhaps the number of directed paths is limited by the spatial constraints on the variables. No success with this so far though.
(2) A polynomial-time reduction of $3\text{-SAT}$ (or other known NP-complete problem) to $(3,k)\text{-LSAT}$. For example, I've tried various schemes of introducing new variables. However, bringing together the clauses that contain the original variable $x_k$ generally requires that I drag around "chains" of additional clauses containing the new variables and these interfere with the spatial constraints on the other variables.
Surely I'm not in new territory here. Is there a known NP-hard problem that can be reduced to $(3,k)\text{-LSAT}$ or do the spatial constraints prevent the problem from being that difficult?
Answer: $(3,k)\text{-LSAT}$ is in P for all $k$. As you have indicated, locality is a big obstruction to NP-completeness.
Here is a polynomial algorithm.
Input: $\phi\in (3,k)\text{-LSAT}$, $\phi=c_1\wedge c_2\wedge\cdots\wedge c_m$, where $c_i$ is the $i$-th clause.
Output: true if $\phi$ becomes 1 under some assignment of all variables.
Procedure:
Construct set $B_i$, the variables that appear in at least one of $c_i, c_{i+1}, \cdots, c_{i+k}$, $1\le i\le m-k$.
Construct set $A_i=\{f: B_i\to\{0,1\} \mid c_i, c_{i+1}, \cdots, c_{i+k} \text{ become 1 under } f\}$.
Construct set $E=\cup_i\{(f, g)\mid f\in A_i, g\in A_{i+1}, f(x)=g(x)\text{ for all }x\in B_i\cap B_{i+1} \}$
Let $V=A_1\cup A_2\cdots\cup A_{m-k}$. Consider directed graph $G(V,E)$. For each vertex in $A_1$, start a depth-first search on $G$ to see if we can reach a vertex in $A_{m-k}$. If found, return true.
If we have reached here, return false.
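For concreteness, here is a compact Python sketch of the procedure (the clause encoding, DIMACS-style signed integers, and all names are my own):

```python
from itertools import product

def lsat_satisfiable(clauses, k):
    """Decide satisfiability of a k-local CNF by the layered
    reachability procedure above.

    clauses: list of clauses, each a list of signed integers in
    DIMACS style, e.g. [1, -2, 3] means (x1 or not x2 or x3).
    Correctness relies on every variable being k-local, as in the claim.
    """
    m = len(clauses)
    if m == 0:
        return True
    w = min(k, m - 1)                      # each window covers w + 1 clauses
    n_windows = m - w
    # B[i]: variables appearing in clauses c_i .. c_{i+w}
    B = [sorted({abs(lit) for c in clauses[i:i + w + 1] for lit in c})
         for i in range(n_windows)]

    def window_ok(i, f):
        # does assignment f (dict var -> bool) satisfy clauses i .. i+w?
        return all(any(f[abs(lit)] == (lit > 0) for lit in c)
                   for c in clauses[i:i + w + 1])

    # A[i]: all assignments of B[i] satisfying the i-th window
    A = [[dict(zip(B[i], bits))
          for bits in product([False, True], repeat=len(B[i]))
          if window_ok(i, dict(zip(B[i], bits)))]
         for i in range(n_windows)]

    # Layered reachability: f in A[i] connects to g in A[i+1]
    # iff they agree on B[i] intersect B[i+1].
    reachable = A[0]
    for i in range(n_windows - 1):
        common = set(B[i]) & set(B[i + 1])
        reachable = [g for g in A[i + 1]
                     if any(all(f[v] == g[v] for v in common)
                            for f in reachable)]
        if not reachable:
            return False
    return bool(reachable)
```

Each window holds at most $3(k+1)$ variables, so each layer has at most $2^{3(k+1)}$ assignments and the whole procedure is polynomial in $m$ for fixed $k$.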
The correctness of the algorithm above comes from the following claim.
Claim. $\phi$ is satisfiable $\Longleftrightarrow$ there is a path in $G$ from a vertex in $A_1$ to a vertex in $A_{m-k}$.
Proof.
"$\Longrightarrow$": Suppose $\phi$ becomes 1 under assignment $f$. Let $f_i$ be the restriction of $f$ to $B_i$. Then we have a path $f_1, \cdots, f_{m-k}$.
"$\Longleftarrow$": Suppose there is a path $f_1, \cdots, f_{m-k}$, where $f_1\in A_1$ and $f_{m-k}\in A_{m-k}$. Define assignment $f$ such that $f$ agrees with all $f_i$, i.e., $f(x)=f_i(x)$ if $x\in B_i$. We can verify that $f$ is well-defined. Since every clause $c_\ell$ becomes 1 under some $f_j$, $\phi$ becomes 1 under $f$.
The number of vertices $|V|\le 2^{3(k+1)}(m-k)$. Hence the algorithm runs in polynomial time in term of $m$, the number of clauses and $n$, the number of total variables. | {
"domain": "cs.stackexchange",
"id": 13955,
"tags": "np-hard, satisfiability, polynomial-time, 3-sat, 2-sat"
} |
What's the point of the half coefficient in the max-cut cost Hamiltonian | Question: Below is the cost Hamiltonian for an unweighted max-cut problem. I don't understand what the point of the half coefficient is. Why couldn't we omit it?
$C_\alpha = \frac{1}{2}\left(1-\sigma_{z}^j\sigma_{z}^k\right),$
Answer: You are correct that a constant factor in the cost Hamiltonian isn't relevant. This factor is included so that the eigenvalues of the cost Hamiltonian can be interpreted in terms of the number of cuts in the graph. In this simple case of a two-node graph, the eigenvalues are 0 or 1 when the factor of 1/2 is included, which correspond to 0 or 1 cuts. | {
"domain": "quantumcomputing.stackexchange",
"id": 2390,
"tags": "quantum-enhanced-machine-learning"
} |
BNO055 mounted orientation configuration | Question:
For this BNO055, I am trying to figure out the correct transpose and sign tuples (or the byte values) for reorienting the sensor so that it will tell me the heading "forward" (from the perspective of this photographer), pitch forward/backward (+- don’t care which), and roll left/right (+- don’t care which). That is, the board is mounted on the parked vehicle such that the plane of the breakout board is perpendicular to the ground.
I have read the chip's datasheet (near page 25, though it doesn't look to me like it should be any of those examples) and tried to study the orientation handedness to choose correct values of transpose and sign, but for whatever reason I cannot figure it out. Here I think I'm trading -Z for X, X for Z. After calibration, the heading is wrong and pitch and roll sensitivity are swapped.
imu = BNO055(i2c, transpose=(2, 1, 0), sign=(1, 0, 0))
# imu.set_offsets(bytearray(ubinascii.unhexlify('e1ff0400ebff4bfe1d0232ff0000feff0000e803a702')))
What are the correct transpose and sign tuples for this mounting orientation?
Are the sensor offsets/calibration values that I extract and reuse relative to the raw, untransposed sensor axes, or are those adjustments applied after the transposition? I'd guess raw, but any time I change the transpose & sign values, the existing calibration seems to go bad: the chip stays in calibration mode much longer than when I reuse offsets from the same orientation settings.
Update
The bno055 quickstart guide has extra info on the heading interpretation which I find helpful:
https://www.bosch-sensortec.com/media/boschsensortec/downloads/application_notes_1/bst-bno055-an007.pdf
Answer: By my best reading, this doesn't match the documentation, but this seems to give me good heading, roll, pitch values for my mount orientation.
imu = BNO055(i2c, transpose=(1, 2, 0), sign=(1, 1, 0))
and I also believe that the calibration must be done specifically for each transpose/sign reconfiguration. I hope this helps someone. | {
"domain": "robotics.stackexchange",
"id": 2683,
"tags": "orientation"
} |
Lewis dot structures of CO2 | Question: I know the general Lewis dot structure of carbon dioxide is the one where there are two double bonds connecting oxygens to carbon.
However the question is, does $\ce{CO2}$ have resonance structures?
We could have moved one of the bonds (pi bond in this case) to the other $\ce{O-C}$ bond, leaving us with a single bond and a triple bond forming $\ce{CO2}$.
Answer: It is almost always possible to draw resonance structures. In the case of $\ce{CO2}$ you could imagine a resonance structure in which the carbon doesn't have an electron octet and carries a positive charge, while one of the oxygen atoms carries a negative charge.
You could also draw one in which the carbon is only connected via a single bond to each oxygen and carries two positive charges while the two oxygens carry a negative charge each.
You might come up with more of those structures.
So the question you really want to ask is whether these resonance structures have any real contribution to the actual bonding state of $\ce{CO2}$, and for that you will see that all of the described structures are highly unfavourable due to their energetic state. Therefore the normal Lewis notation of $\ce{CO2}$ describes the real bonding state pretty well, as all other resonance structures don't contribute much. | {
"domain": "chemistry.stackexchange",
"id": 9296,
"tags": "resonance, lewis-structure"
} |
How to arrive to / calculate gravitational acceleration on Earth? | Question: What are the exact steps and figures to calculate gravitational acceleration on Earth (9.8 m/s^2) and where do these figures come from?
(Please use an algebraic explanation, but assume understanding of the principles of calculus without being handy with the notation.)
I'm assuming (getting some figures from WolframAlpha) that we're looking at something along the lines of these values:
G = 6.674×10^-11
m(earth) = 5.9721986×10^24 kg
d(r of earth) = 6371.0088 km
As far as the simplest answer, I'd be happy with some pointers to solid reference material in addition to a simple walkthrough of the equation.
Answer: From a Newtonian perspective, every particle of mass ($m_j$) produces a gravitational field in the space surrounding it:
$$\vec{g}_j=\dfrac{-Gm_j}{r^2}\hat{r},$$
where $r$ is the distance from the mass to the point in space you're interested in, and $\hat{r}$ is a unit vector pointing from the mass toward the point in space.
If you have several masses, the gravitational field at a point is the sum of all the individual fields from the individual masses. If you place a separate mass, $M$, at that point in space, it will experience a force:
$$\vec{F}_{\mathrm{on}\,M}=M \sum_j \vec{g}_j=M\vec{g}_{\mathrm{net}}.$$
As a consequence of applying Newton's 2nd Law to this force, we find that the acceleration is equal to the gravitational field:$$\vec{a}=\vec{g}_{\mathrm{net}}.$$
For spherically symmetric mass distributions it turns out (see Gauss's law for gravity) that the net gravitational field (that sum) is equal to the field of a single, large mass as the same location as the center of the distribution. In other words, for a planet or a star, the gravitational field at a distance $r$ from the center of that object is simply
$$\vec{g}=\dfrac{-Gm_{\mathrm{total}}}{r^2}\hat{r}.$$
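Plugging the question's figures into this last formula (a quick check in Python, though any calculator works):

```python
# The question's figures, plugged into g = G*M/r^2 at the surface.
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M = 5.9721986e24     # mass of Earth, kg
r = 6371.0088e3      # mean radius of Earth, m

g = G * M / r**2     # about 9.82 m/s^2
```

which recovers the familiar value of roughly $9.8\ \mathrm{m/s^2}$. Returning to the general formula: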
This holds for any distance $r$ larger than the radius of the sphere, so not only can you use it to find the gravitational field at the surface of Earth, but also 2000 km above the surface (where it will be less). | {
"domain": "physics.stackexchange",
"id": 38958,
"tags": "homework-and-exercises, newtonian-gravity, acceleration, earth"
} |
Assigning defaults for Smarty using object-oriented style | Question: I have a custom class for Smarty that was partially borrowed. This is how the only example reflects the basic idea of using it across my current project:
class Template {
function Template() {
global $Smarty;
if (!isset($Smarty)) {
$Smarty = new Smarty;
}
}
public static function display($filename) {
global $Smarty;
if (!isset($Smarty)) {
Template::create();
}
$Smarty->display($filename);
}
}
Then in the PHP, I use the following to display templates based on the above example:
Template::display('head.tpl');
Template::display('category.tpl');
Template::display('footer.tpl');
I made the following example of code (see below) work across universally, so I wouldn't repeat the above lines (see 3 previous lines) all the time in each PHP file.
I would just like to set, e.g.:
Template::defaults();
that would load:
Template::display('head.tpl');
Template::display('template_name_that_would_correspond_with_php_file_name.tpl');
Template::display('footer.tpl');
As you can see, Template::display('category.tpl'); will always change based on the PHP file, whose name corresponds with the template name; for example, if the PHP file is named stackoverflow.php then the template for it would be stackoverflow.tpl.
I've tried a solution that works fine, but I don't like the way it looks (the way it's structured).
What I did was:
Assigned in config a var called $current_page_name that derives the current PHP page name, like this: basename($_SERVER['PHP_SELF'], ".php");, which returns, e.g.: category.
In my PHP file I used Template::defaults($current_page_name);.
In my custom Smarty class I added the following:
public static function defaults($template) {
global $Smarty;
global $msg;
global $note;
global $attention;
global $err;
if (!isset($Smarty)) {
Templates::create();
}
Templates::assign('msg', $msg);
Templates::assign('note', $note);
Templates::assign('attention', $attention);
Templates::assign('err', $err);
Templates::display('head.tpl');
Templates::display($template . '.tpl');
Templates::display('footer.tpl');
}
Is there a way to make it more concise and well structured?
Answer: Well, there's a few things wrong here, but let's go over it together and I'll try to steer you in the right direction. Let's start with globals. Globals are extremely bad and should be avoided at all costs. If you have a variable that needs to be available in multiple parts of your script, then you should pass that variable as a parameter to whichever function needs it, and return the modified version if applicable. Why? Well, globals are an old feature that has been proven to be full of all kinds of security issues. Not to mention they are just plain impossible to track. Sure, I know where $Smarty came from because I saw you define it in your constructor, but let's say I'm pages into your code and see that for the first time. How am I to know where this came from? Hopefully PHP will deprecate this soon, but until then, just take my word, and that of the entire community, and don't use globals.
Let's take a look at your constructor. First you should know that the access modifier (public, private, or protected) should be used on every function, more accurately known as a method. This includes the constructor. By default the public status is used, but I've heard rumors of this getting deprecated soon, and it is better to go ahead and assume the default won't always be there for you. Besides, it's always a good idea to explicitly define what you expect a method or property's access type to be. Second, the proper way to make a constructor is no longer to create a method with the same name as the class. This used to be the case in PHP 4, I think, but it is now deprecated. It still exists for backwards compatibility, but there's no saying how long that will last. Use the magic method __construct() instead. Magic meaning reserved.
So, let's review. Globals are bad, constructors are built with __construct(). How then do we create a class where we have a "global" variable created in the class contructor? Well like this:
private $smarty;
public function __construct() {
$this->smarty = new Smarty();
}
You'll notice there's a couple of new things going on here. Hopefully you'll also notice how much cleaner this is. So, what's private $smarty? It's pretty much the same as global $smarty, except the variable isn't available globally, only within the class scope ($this->smarty). If we set the access parameter to public, it would be a little more like a global in that we could access it outside of the class, but in order to do so we would have to use an instance of the class like so: $template->smarty. That's because the variable, or, more accurately, property, is associated with that class. The reason we are using a private access type here is because $smarty defines an interior "property" that should only be accessed from within the scope of the class. Sorry I can't explain that any better. Look at it like this: if I created a user class, I wouldn't want the password data to be "public" and available outside of the class. It is sort of the same here, except without sensitive data in the mix.
Let's continue. Next is a static display() method. Static methods exist for no other purpose than to defy OOP. Some might argue that I'm wrong here, but let's break down that acronym: Object Oriented Programming. About the only thing static still has in common with OOP is that it is vaguely related to an object. The inner principles of OOP (encapsulation, inheritance, and polymorphism) are simply ignored by static methods. In a lot of cases you'd be better off with just a normal helper function. Assume you won't ever need static methods for the time being. It is an advanced feature of classes and isn't likely to be necessary until you have a better grasp on the whole concept. Instead, what you should do is drop the static keyword and use the class property like it was meant to be.
public function display( $filename ) {
$this->smarty->display( $filename );
}
Now, you may have started noticing a pattern here. Each of these new methods I have shown you is simply a wrapper for the Smarty class. And that is because you are not doing anything yet that requires "extending" the Smarty class. From what I can see, you would be better off just using the Smarty class directly and dropping classes altogether. But hopefully the above helped explain some of the basic concepts for you so that when you do decide to start using OOP, you have a better grasp of it.
EDIT
Oh, and about your problem. The above should help you with the structure, but until you start getting into the more advanced stuff (frameworks and autoloading), this is the best solution. At least as far as I know. The only improvement I would make is to use an absolute path name instead of relative, and you can do that using the magic constant __FILE__. | {
"domain": "codereview.stackexchange",
"id": 2396,
"tags": "php, beginner, smarty"
} |
How can I determine whether the mass of an object is evenly distributed? | Question: How can I determine whether the mass of an object is evenly distributed without doing any permanent damage? Suppose I got all the typical lab equipment. I guess I can calculate its center of mass and compare with experiment result or measure its moment of inertia among other things, but is there a way to be 99.9% sure?
Answer: Malicious counter example
The desired object is a sphere of radius $R$ and mass $M$ with uniform density $\rho = \frac{M}{V} = \frac{3}{4} \frac{M}{\pi R^3}$ and moment of inertia $I = \frac{2}{5} M R^2 = \frac{8}{15} \rho \pi R^5$.
Now, we design a false object, also spherically symmetric but consisting of three regions of differing density
$$ \rho_f(r) = \left\{
\begin{array}{l l}
2\rho\ , & r \in [0,r_1) \\
\frac{1}{2}\rho\ , & r \in [r_1,r_2) \\
2\rho\ , & r \in [r_2,R) \\
\end{array} \right.$$
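Taking $R=1$ and $\rho=1$, matching the total mass and the moment of inertia of $\rho_f$ to those of the uniform ball reduces, after integrating over each shell, to $r_2^3-r_1^3=\tfrac{2}{3}$ and $r_2^5-r_1^5=\tfrac{2}{3}$. A simple bisection (a sketch of my own) confirms that a valid pair $0<r_1<r_2<1$ exists:

```python
# Matching conditions for the fake sphere in units R = 1, rho = 1:
#   mass:    r2**3 - r1**3 == 2/3
#   inertia: r2**5 - r1**5 == 2/3
# Eliminate r2 via the mass condition and find the root of the
# remaining inertia condition by bisection.
def residual(r1):
    r2 = (r1**3 + 2.0 / 3.0) ** (1.0 / 3.0)
    return r2**5 - r1**5 - 2.0 / 3.0

lo, hi = 0.0, 0.6          # residual changes sign on this bracket
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if residual(lo) * residual(mid) <= 0:
        hi = mid
    else:
        lo = mid
r1 = 0.5 * (lo + hi)
r2 = (r1**3 + 2.0 / 3.0) ** (1.0 / 3.0)
```

The root comes out near $r_1\approx 0.53$, $r_2\approx 0.93$, safely inside $(0,1)$.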
We have two constraints (total mass and total moment of inertia) and two unknowns ($r_1$ and $r_2$), so we can find a solution which perfectly mimics our desired object. | {
"domain": "physics.stackexchange",
"id": 23780,
"tags": "newtonian-mechanics, experimental-physics"
} |
Gravitational Force of hemispherical shell | Question:
My approach:
$dF= \dfrac{Gm(dM)}{R^2} $
$ \int dF = \int \dfrac{Gm}{R^2} dM$
$ dM=\sigma dS $
$ dS=R^2\sin\phi \,d\theta \,d\phi $
$F= \int_{0}^{2\pi} \int_{\frac{\pi}{2}}^{0} \dfrac{Gm\sigma R^2}{R^2} \sin\phi \,d\phi \,d\theta $
$F = - 2 \pi G m \sigma \int_{0}^{\frac{\pi}{2}} \sin\phi \,d\phi $
$F = - 2 \pi G m \sigma (\cos\dfrac{\pi}{2} - \cos 0 )$
$F = 2 \pi G m \sigma $
My answer is off by a factor of 2 and I don't understand why that is the case.
Answer: You have summed $dF$ whereas you should have summed $dF \, \cos \theta$ as shown in the diagram below with the summation of $dF \, \sin \theta$ being zero by symmetry. | {
"domain": "physics.stackexchange",
"id": 49027,
"tags": "homework-and-exercises, newtonian-gravity, calculus"
} |
How do we share pain? | Question: When somebody else tells me about his or her itching or pain in some specific body part, I sometimes begin to feel similar feelings.
I can think of about three explanations:
I feel pain all over my body all the time for many different reasons but my entrance neural pathways stop it as unimportant of these signals but when I focus on a single signal, it begins to be important for me.
Mirror neurons are doing their job.
For an unknown reason, I psychosomatically induce my friend's feelings.
Which of these theories is at least partially engaged in said phenomenon?
Answer: There has been a study by Jackson, Meltzoff & Decety (2005) who investigated the neurocorrelates involved in the perception of pain.
In order to assess this, they carried out an fMRI study in which their subjects were shown photographs of feet and hands in situations that are likely to evoke pain, and also a control set of photos that were free of any pain-evoking stimuli.
Their results show that the experimental photographs produce strong bilateral activation changes in several regions known to be involved in the perception of pain, namely:
the anterior cingulate, the anterior insula, the cerebellum, and to a lesser extent the thalamus
Notably, the activity of the anterior cingulate was most strongly correlated with the participants' rating of the other person's pain, which suggests that the anterior cingulate is especially modulated according to the subject's reactivity to the pain of others.
Source:
Jackson, P. L., Meltzoff, A. N., & Decety, J. (2005). How do we
perceive the pain of others? A window into the neural processes
involved in empathy. NeuroImage, 24, 771-779.
doi:10.1016/j.neuroimage.2004.09.006 | {
"domain": "biology.stackexchange",
"id": 7367,
"tags": "neuroscience, neuroanatomy, pain, psychology, cognition"
} |
Interpreting abbreviations for days of the week such that they are consecutive days | Question: The code below works, but it seems it could be done better. Is there a better/more efficient way to do this?
Given:
All days off ("relief days") will be consecutive.
Relief days will be imported as a string. Days of week are represented by their first letter. "S" can represent Saturday or Sunday, "T" can represent Tuesday or Thursday. I.e., someone with Sat/Sun off would be "SS", someone with Tue/Wed/Thu off would be "TWT".
Depending on how many hours a day they work, they'll either have 2, 3, or 4 days off.
The existing code:
public static String getReliefDays(){
Map<String, String> user = new HashMap<>(); //test example user
user.put("name", "Schmoe, Joe");
user.put("employeeNo", "123456");
user.put("startTime", "0800");
user.put("endTime", "1700");
user.put("reliefDays", "TF");
String rd = (String) user.get("reliefDays");
int rdLength = rd.length();
String reliefDays = "";
String fl = rd.substring(0, 1); //first letter in the days off string
String sl = rd.substring(1, 2); //second letter in the days off string
switch (fl) {
case "S":
if (sl.equalsIgnoreCase("M")) {
reliefDays = "SUN MON";
if (rdLength == 3) {
reliefDays += " TUE";
}
if (rdLength == 4) {
reliefDays += " WED";
}
}else{
reliefDays = "SAT SUN";
if (rdLength == 3) {
reliefDays += " MON";
}
if (rdLength == 4) {
reliefDays += " TUE";
}
}
break;
case "M":
reliefDays = "MON TUE";
if (rdLength == 3) {
reliefDays += " WED";
}
if (rdLength == 4) {
reliefDays += " THU";
}
break;
case "T":
if (sl.equalsIgnoreCase("W")) {
reliefDays = "TUE WED";
if (rdLength == 3) {
reliefDays += " THU";
}
if (rdLength == 4) {
reliefDays += " FRI";
}
} else {
reliefDays = "THU FRI";
if (rdLength == 3) {
reliefDays += " SAT";
}
if (rdLength == 4) {
reliefDays += " SUN";
}
}
break;
case "W":
reliefDays = "WED THU";
if (rdLength == 3) {
reliefDays += " FRI";
}
if (rdLength == 4) {
reliefDays += " SAT";
}
break;
case "F":
reliefDays = "FRI SAT";
if (rdLength == 3) {
reliefDays += " SUN";
}
if (rdLength == 4) {
reliefDays += " MON";
}
break;
}
return reliefDays;
}
Answer: Your indentation has gone wrong here, and it's confusing:
if (sl.equalsIgnoreCase("M")) {
reliefDays = "SUN MON";
if (rdLength == 3) {
reliefDays += " TUE";
}
if (rdLength == 4) {
reliefDays += " WED";
}
}
Let your IDE help and auto-format your code as you work.
Other than that, you can get better feedback when you post not only the code you want feedback on but also the surrounding context, so that you can get feedback on how you are applying OOP and using the correct signatures.
My version is below, see how I've divided up the logic.
I figured that this job of creating a String with the weekday short names would belong to the User so I put it there.
I also thought that a Weekday class might know how to deal with the problem of disambiguating weekdays as single letters in a String that represents week days. But I can also see the argument for this to belong to User since that sort of String is known to be specifically used when creating Users. In the end, I decided that Weekday would expose the logic, and User will expose setReliefDays taking a String, which would delegate to the logic in Weekday, but then this can easily be changed if the implementation of User needs to change.
public class ReliefDays {
public static void main(String[] args) {
User joe = new User();
joe.name = "Schmoe, Joe";
joe.employeeNo = "123456";
joe.startTime = "0800";
joe.endTime = "1700";
joe.setReliefDays("TFSS");
System.out.println(joe.reliefDaysShortNames());
}
static class User {
String name;
String employeeNo;
String startTime;
String endTime;
Weekday[] reliefDays;
/**
* Returns a string containing the short names of this user's relief days
* separated by spaces.
*
* For example, "SAT SUN MON".
*/
public String reliefDaysShortNames() {
StringBuilder shortDays = new StringBuilder();
for (int i = 0; i < reliefDays.length - 1; i++) {
shortDays.append(reliefDays[i].shortName);
shortDays.append(" ");
}
shortDays.append(reliefDays[reliefDays.length - 1].shortName);
return shortDays.toString();
}
/**
* Set this user's relief days based on a string of characters representing
* consecutive days.
*
* For example, when days is "MT", this user's relief days are set to
* {Weekday.MONDAY, Weekday.TUESDAY}.
*/
public void setReliefDays(String days) {
reliefDays = Weekday.parseWeekdays(days);
}
}
static enum Weekday {
MONDAY('M', "MON"),
TUESDAY('T', "TUE"),
WEDNESDAY('W', "WED"),
THURSDAY('T', "THU"),
FRIDAY('F', "FRI"),
SATURDAY('S', "SAT"),
SUNDAY('S', "SUN");
char letter;
String shortName;
Weekday(char letter, String shortName) {
this.letter = letter;
this.shortName = shortName;
}
/**
* Given at least two characters representing consecutive week days, returns an
* array of those weekdays of the same size and in the same order.
*/
public static Weekday[] parseWeekdays(String s) {
Weekday[] weekDays = new Weekday[s.length()];
Weekday currentDay = firstWeekdayOfPair(s.substring(0, 2));
Weekday nextDay;
weekDays[0] = currentDay;
for (int i = 1; i < s.length(); i++) {
nextDay = currentDay.nextWeekday();
if (s.charAt(i) != nextDay.letter) {
throw new IllegalArgumentException("Unexpected sequence of days: "
+ currentDay.letter + s.charAt(i));
}
weekDays[i] = nextDay;
currentDay = nextDay;
}
return weekDays;
}
private static Weekday firstWeekdayOfPair(String s) {
switch (s) {
case "MT":
return MONDAY;
case "TW":
return TUESDAY;
case "WT":
return WEDNESDAY;
case "TF":
return THURSDAY;
case "FS":
return FRIDAY;
case "SS":
return SATURDAY;
case "SM":
return SUNDAY;
default:
throw new IllegalArgumentException("Unexpected weekday pair: " + s);
}
}
private Weekday nextWeekday() {
switch (this) {
case MONDAY:
return TUESDAY;
case TUESDAY:
return WEDNESDAY;
case WEDNESDAY:
return THURSDAY;
case THURSDAY:
return FRIDAY;
case FRIDAY:
return SATURDAY;
case SATURDAY:
return SUNDAY;
case SUNDAY:
return MONDAY;
default:
throw new IllegalArgumentException("Unexpected day: " + this.toString());
}
}
}
} | {
"domain": "codereview.stackexchange",
"id": 41854,
"tags": "java, datetime"
} |
Is it possible to implement the ZX-calculus bialgebra rule without adaptivity or post-selection? | Question: In the ZX-calculus, one of the fundamental rules of the diagrammatic reasoning is known as the bialgebra rule and it is described by the given diagrammatic equation:
Question: Can we implement this diagram without using post-selection or adaptivity?
I know that this rule preserves the gflow of a diagram, which implies that if a circuit is transformed to a ZX-calculus diagram and this rule is used during simplification, then the resulting ZX-diagram can be transformed back into a quantum circuit. But from what I could understand, the bialgebra rule seems to preserve the gflow while not being directly circuit-implementable (i.e. with unitaries), because there is a $2 \to 1 \to 2$ qubit flow that must result in measurement. Hence, any non-adaptive and non-postselected implementation likely must come not from using just two systems but from a possibly more complex system.
The adaptive implementation of the bialgebra (left side of the diagram) is very interesting, but I think that it would be more interesting to have a circuit implementation. Moreover, there is the $a\oplus_2 b\pi$ term at the beginning that is very complicated to get rid of.
In this question, notation and reference to known results were taken from Ref. ZX-calculus for the working quantum computer scientist. The last diagram is mine so it might have mistakes.
Answer: It is not possible to implement this operation without post-selection or adaptivity. As I had described in my attempt, this kind of diagram does not represent a unitary; although it is possible with post-selection, the answer given by @John in the comments completely settles the question.
His remark might be relevant for future users though:
It is not possible to implement anything that is not an isometry in a deterministic way, even allowing for adaptation. You need to allow for different branches to have different outcomes. | {
"domain": "quantumcomputing.stackexchange",
"id": 3793,
"tags": "zx-calculus"
} |
How is the Poisson bracket $\{\mathbf{c},\mathbf{l}\cdot\hat{n}\}=(\hat{n}\times \mathbf{c})$, for constant $\mathbf{c}$, and not zero? | Question: The Poissonian formulation of mechanics tells us that for a generating function $g(q,p,t)$, the Poisson bracket of some function/variable $f(q,p,t)$ with the generating function corresponds with an infinitesimal change in $f$ along the transformation or "motion" generated by $g$.
$$\delta f = \epsilon \left\{f,g \right\}$$
An example of this is momentum conservation due to invariance under infinitesimal translations. To show this, take $f$ to be the Hamiltonian and $g$ to be $\mathbf{p}\cdot\hat{n}$, where $\mathbf{p}$ is the momentum $p_x \hat{x}+p_y\hat{y}+p_z\hat{z}$ and $\hat{n}$ is an arbitrary unit vector. The canonical transformation generated by $\mathbf{p}\cdot \hat{n}$ is an infinitesimal translation along the $\hat{n}$ direction of the system variables with which the Hamiltonian is evaluated.
$$\begin{align*}
\epsilon\left\{H,\mathbf{p}\cdot\hat{n}\right\}&=\epsilon\left(\sum_i \frac{\partial H}{\partial q_i}\frac{\partial\,(\mathbf{p}\cdot\hat{n})}{\partial p_i}-\frac{\partial H}{\partial p_i}\frac{\partial\,(\mathbf{p}\cdot\hat{n})}{\partial q_i}\right)\\
&=\epsilon\left(\sum_i \frac{\partial H}{\partial q_i}(\hat{n})_i\right)\\
&=\epsilon (\nabla_q H)\cdot \hat{n}\\
&\\
&\implies \left\{H,\mathbf{p}\cdot\hat{n}\right\}=(\nabla_q H)\cdot \hat{n}
\end{align*}$$
Now, if we were to take a polar angle $\theta$ about some axis $\hat{n}$ to be a coordinate, the above procedure with $\mathbf{l}$, the angular momentum, in place of $\mathbf{p}$ would then translate to an infinitesimal "translation" of the $\theta$ variable - i.e. a rotation about the $\hat{n}$ axis. An example of this is given in Landau & Lifshitz, Goldstein, and many other mechanics textbooks - the rotation of a constant vector $\mathbf{c}$ about a specified axis.
$$\left\{\mathbf{c},\mathbf{l}\cdot\hat{n}\right\}=\hat{n}\times\mathbf{c}$$
In terms of the interpretation of the Poisson brackets through generating functions (which I just gave), I can see why this would be true. The vector $\mathbf{c}$ changes by an amount $d\theta(\hat{n}\times\mathbf{c})$ when rotated by an infinitesimal angle $d\theta$ about an axis $\hat{n}$, and that result can be reached by simple analytical geometry. However, by direct evaluation of the Poisson bracket, I can't see why this isn't zero (as $\mathbf{c}$ is a constant). The angular momentum operator (vector-valued function in terms of phase space variables) is given by
$$\begin{align*}
\mathbf{l}&=\mathbf{r}\times\mathbf{p}\\
&=(yp_z-zp_y)\hat{x}+(zp_x-xp_z)\hat{y}+(xp_y-yp_x)\hat{z}
\end{align*}$$
Note that this is, assuming a typical classical Hamiltonian, entirely in terms of phase space variables. Now, the Poisson bracket of this with a constant vector is
$$\begin{align*}
\left\{\mathbf{c},\mathbf{l}\cdot\hat{n}\right\}&=\sum_i\left(\frac{\partial \mathbf{c}}{\partial q_i}\frac{\partial (\mathbf{l}\cdot\hat{n})}{\partial p_i}-\frac{\partial \mathbf{c}}{\partial p_i}\frac{\partial (\mathbf{l}\cdot\hat{n})}{\partial q_i}\right)\\
&=0\,\,\,\,(\mathbf{c}\textrm{ doesn't depend on phase space variables)}
\end{align*}$$
Please, could you tell me how to resolve this paradox?
P.s: I originally wrote this question extremely briefly because I thought somebody would certainly know what I'm talking about.
Answer: Let's just do it for a simple example. By $\vec{c}$ I imagine you mean the location of the particle relative to some origin, so $\vec{c}=\vec{r}$. Later on for simplicity we'll suppose further the particle is located on the x-axis (but it is important to do this only after differentiating as we will see).
We'll also suppose we are rotating around the z axis so that $\hat{n}=\hat{z}$.
Then we have
\begin{equation}
\{\vec{c} , \vec{l}\cdot\vec{n}\}=\{\vec{r},xp_y - y p_x\} =\{x\hat{x}+y\hat{y}+z\hat{z},xp_y - y p_x\}= -y \hat{x}+x\hat{y}.
\end{equation}
Now that we have differentiated (meaning, evaluated the brackets) we can set $y=0$ and $x=R$ (that is, we can suppose our particle started on the $x$-axis at the position $R$). Then
\begin{equation}
\{R \hat{x},\vec{l}\cdot\hat{z}\}=R \hat{y}=\hat{z}\times(R\hat{x})
\end{equation}
which is consistent with your formula.
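This component computation can be checked symbolically (a hedged sketch of my own; names are mine, using sympy's differentiation to build the canonical bracket):

```python
# Define the canonical Poisson bracket on (x, y, z, p_x, p_y, p_z) and verify
# {x, l_z} = -y, {y, l_z} = x, {z, l_z} = 0, i.e. {r, l.z-hat} = z-hat x r.
import sympy as sp

x, y, z, px, py, pz = sp.symbols('x y z p_x p_y p_z')
qs, ps = (x, y, z), (px, py, pz)

def pb(f, g):
    # {f, g} = sum_i (df/dq_i dg/dp_i - df/dp_i dg/dq_i)
    return sum(sp.diff(f, q) * sp.diff(g, p) - sp.diff(f, p) * sp.diff(g, q)
               for q, p in zip(qs, ps))

l_z = x * py - y * px            # z-component of l = r x p
print(pb(x, l_z), pb(y, l_z), pb(z, l_z))
```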
Incidentally, you might be worried that I started off by setting $\vec{c}=\vec{r}$. I think in the framework you are working in--particle mechanics--the vectors should all start from the same origin. If you want to start taking poisson brackets of vectors with different origins, I think you really need to generalize this discussion to field theory (which will complicate the story a bit because in addition to rotating the direction of the vector you need to rotate the origin, so you will end up with an additional term). So I think that may be what you have in mind but that is a more complicated story. | {
"domain": "physics.stackexchange",
"id": 28806,
"tags": "classical-mechanics, poisson-brackets"
} |
Regular expression for the language accepting the strings containing at most one pair of $1$'s over $\{0,1\}$ | Question:
Design a regular expression for the language accepting the strings containing at most one pair of $1$'s over $\{0,1\}$
So basically we have the language $L=\{11,011,110,0011,1100\ldots\}$.
I find the expression to be $R=(0+1)^*11(0+1)^*$.
But I found here the answer to be $R=(0 + 10)^*(11 + \epsilon)(0 + 10)^*$.
I cannot differentiate these two answers and the logic behind the later answer.
EDIT: In my answer, the first $(0+1)^*$ may provide $11$, so after this another $11$ is not permissible since at most one pair of $11$ is required. But what's the logic behind $(11+\epsilon)$?
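One way to see the difference empirically (my own Python translation of the two expressions; `+` becomes `|`, $\epsilon$ becomes an optional group):

```python
# The first expression accepts strings with TWO pairs of 1's (e.g. "1111"),
# while the second rejects them, allowing at most one "11".
import re

mine  = re.compile(r'(0|1)*11(0|1)*')       # R = (0+1)*11(0+1)*
found = re.compile(r'(0|10)*(11)?(0|10)*')  # R = (0+10)*(11+eps)(0+10)*

assert mine.fullmatch('1111')                # accepts two pairs of 1's
assert not found.fullmatch('1111')           # rejects a second pair
assert found.fullmatch('0011') and found.fullmatch('110')
assert found.fullmatch('000')                # epsilon branch: zero pairs allowed
```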
Answer: The logic behind $(11+\epsilon)$ is to either allow a $11$, or allow no such pairs. The question states "[...] containing at most one pair [...]", and therefore we need to allow either one or zero pairs, and this is achieved by $(11+\epsilon)$. | {
"domain": "cs.stackexchange",
"id": 21525,
"tags": "regular-expressions"
} |
(Leetcode; Dynamic Programming) Partition to K Equal Sum Subsets memoization | Question: Given an integer array nums and an integer k, return true if it is possible to divide this array into k non-empty subsets whose sums are all equal.
This is leetcode problem #698.
Below is code that I have written with backtracking to solve the problem (it is correct and passes all test cases).
When we apply memoization for a DP solution, shouldn't the current state we are in for memoization depend on all of currSum, count, and taken? Instead, all we need is taken, not currSum and count, to uniquely identify our state.
For clarification: taken tells us which numbers we've selected thus far (or havent selected), count tells us the number of subsets we've currently divided into subsets, and currSum tells us the current sum for the subset we're trying to solve.
So why do we only need taken to identify the memoization state, and not all of taken and count and currSum?
class Solution {
bool helper(const vector<int>& nums, int k, int count, int currSum, int targetSum, int index, vector<int>& taken) {
if (count == k - 1)
return true;
if (currSum > targetSum)
return false;
if (currSum == targetSum)
return helper(nums, k, count + 1, 0, targetSum, 0, taken);
for (int i = index; i < nums.size(); i++) {
if (!taken[i]) {
taken[i] = 1;
if (helper(nums, k, count, currSum + nums[i], targetSum, i + 1, taken))
return true;
taken[i] = 0;
}
}
return false;
}
public:
bool canPartitionKSubsets(vector<int>& nums, int k) {
sort(nums.begin(), nums.end(), greater<>());
vector<int> taken(nums.size(), 0);
const int sum = accumulate(nums.begin(), nums.end(), 0);
if (sum % k != 0 || *max_element(nums.begin(), nums.end()) > sum / k)
return false;
return k == 1 || helper(nums, k, 0, 0, sum / k, 0, taken);
}
};
Answer: Here is the signature of the recursive method,
bool helper(const vector<int>& nums, int k, int count, int currSum, int targetSum, int index, vector<int>& taken);
That means a state is implemented as a collection of 7 items, nums, k, count, currSum, targetSum, index, taken.
However, a state is completely determined by nums, k and taken. Once they are known, other items, targetSum, currSum, and index are determined.
targetSum is the sum of all elements in nums divided by k.
count is the integer quotient of the sum of all elements taken except the element taken at the largest index divided by targetSum.
currSum is the sum of all elements taken minus the product of count and targetSum.
index is $0$ if no element has been taken or one more than the largest index at which an element is taken.
Since nums and k are constants, we can also view a state as completely determined by taken, once the input vector<int>& nums, int k is given. That is why we only need taken to identify the state.
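A sketch of the same idea in Python (my own translation, not the OP's C++): memoize on the bitmask `taken` alone, and recompute the dependent quantity inside the function, which makes the "taken determines the state" point explicit.

```python
from functools import lru_cache

def can_partition_k_subsets(nums, k):
    total = sum(nums)
    if k <= 0 or total % k or max(nums) > total // k:
        return False
    target = total // k
    n = len(nums)

    @lru_cache(maxsize=None)
    def dfs(taken):
        if taken == (1 << n) - 1:        # every element placed
            return True
        # currSum is *derived* from `taken`: sum of taken elements mod target.
        curr = sum(v for i, v in enumerate(nums) if taken >> i & 1) % target
        for i, v in enumerate(nums):
            if not (taken >> i & 1) and curr + v <= target:
                if dfs(taken | 1 << i):
                    return True
        return False

    return dfs(0)
```

Because the recursion only ever reaches masks built one subset at a time, any two paths to the same mask imply the same currSum and count, so caching on the mask alone is sound.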
Given the constants nums and k, in math terms, we can view a state as having one independent variable taken and other dependent variables. In database terms, the table of all states has one primary key taken and other non-key columns. | {
"domain": "cs.stackexchange",
"id": 19748,
"tags": "algorithms, dynamic-programming"
} |
Why do we say that gravitational waves travel at the speed of light, when they can escape a black hole and light cannot? | Question: Or rather, does "speed" relate to the same measure? The speed of light is the speed of a photon/an electromagnetic wave in the empty space, but gravitational waves are wave of this very same space? How to make sense of it in a more formal way?
Answer: When we talk about black holes, we usually talk about the inside of the event horizon. The escape velocity at an event horizon is more than the speed of light, therefore only particles like tachyons, which travel faster than light, could escape, but those are hypothetical, so technically nothing can escape the event horizon. The speed of gravity, the bending of spacetime, is equal to that of gravitational waves, ripples in spacetime. So these waves can't escape the event horizon.
But the point is that gravitational waves don't originate from the inside; they occur outside of the event horizon. If there is an asymmetric distribution of mass, then the gravity of the masses will create gravitational waves about their barycenter, which happens outside of the black hole. Let's take the example of two black holes merging.
Both black holes move violently around until they merge, until the event horizons collide. After that we are unable to observe gravitational waves from the merger itself; we observe gravitational waves from the spindown, because the spindown is also external to the black hole.
Gravitational waves, just like other fields such as the photon field, are deflected by a massive object, so they also cannot escape.
In order for gravitational waves to occur, frame-dragging (spacetime dragged and stretched by gravitational fields) may occur, which happens outside the event horizon (within, on, or outside the ergosphere, but never inside the event horizon), as the inside is not a part of our universe.
This has been examined in a thought experiment known as the sticky bead argument, which asks whether gravitational waves have physical effects like this.
Note: the only restriction is a sort of Young's modulus of spacetime; as the expansion of the universe speeds up, spacetime is stretching more and more, making it difficult to form waves.
Note: Gravity can escape a black hole while a gravitational wave cannot because the curvature of spacetime that causes gravity is not itself influenced by the mass of the object; it is the local region into which the gravitational field extends, so it does not care about the event horizon. When a massive object sits on spacetime, the cause of curvature isn't the mass alone; it's the surrounding area which pulls on to it. Even if gravity were particles according to QFT, there is no way they could escape, because the force carrier, the graviton, would act as a virtual particle, and virtual particles are not restricted by speed. Gravitational waves, by contrast, are unable to escape the horizon because they are influenced by a mass: they are bent, deflected, or even swallowed by the singularity | {
"domain": "astronomy.stackexchange",
"id": 7041,
"tags": "gravitational-waves, speed"
} |
At the instant of release of an object from rest. Is the only force that can act its weight? | Question: Q3 from a mechanics exam past paper:
I can do parts i) and ii), but for iii), in finding the angular acceleration, I used $C=I\alpha$, where $C$ is the applied couple or torque, $I$ is the moment of inertia of the lamina about A and $\alpha$ is the angular acceleration.
At the instant the object is released the only force acting on it is its own weight. Hence, $6g*0.8=9\alpha$ which yields $\alpha=5.227$.
However the mark-scheme says the answer is $\alpha=1.65$, as they have taken into account the frictional couple. Here is the mark-scheme:
It was my understanding that at the moment of release no friction (and hence no frictional couple) can act as there is no movement (yet).
So could someone please kindly explain what the mark-scheme is talking about?
Answer: Friction does not depend on velocity (unlike viscous drag). An object that is stationary on a table will continue to be stationary when you push it gently - because there is an opposing force of friction.
So no, your understanding is wrong: friction is present even when the object is just starting to move.
Let me draw a diagram:
That ought to clear it up... | {
"domain": "physics.stackexchange",
"id": 15847,
"tags": "homework-and-exercises, newtonian-mechanics, torque, moment-of-inertia"
} |
Control Volume Analysis - enthalpy substituted for internal energy? | Question: In the process of deriving stagnation / total enthalpy, Fundamentals of Gas Dynamics (Zucker & Biblarz) claim that for any locations $a$ and $b$,
$$h_a + \frac{V_a^2}{g_c} + \frac{g}{g_c}z_a + q = h_b + \frac{V_b^2}{g_c} + \frac{g}{g_c}z_b + w_s$$
This appears to come from the 1st law of thermodynamics $q = w +\Delta e$, where $e = u + \frac{V^2}{g_c} + \frac{g}{g_c}z$. However, I am not sure why enthalpy $h$ is substituted for internal energy $u$? I thought that $h = u + pv$, and I don't think $\Delta(pv)=0$?
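For reference, the bookkeeping can be written out (keeping the book's $g_c$ notation, and writing the total work as shaft work plus flow work, $w = w_s + \Delta(pv)$, which is the step resolved below):

```latex
q = w_s + \Delta(pv) + \Delta e
  = w_s + \Delta(pv) + \Delta u + \Delta\!\left(\tfrac{V^2}{g_c}\right) + \Delta\!\left(\tfrac{g}{g_c}z\right)
  = w_s + \Delta h + \Delta\!\left(\tfrac{V^2}{g_c}\right) + \Delta\!\left(\tfrac{g}{g_c}z\right)
```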
Answer: The shaft work is not the total work that is done by the fluid within the control volume. In the control volume derivation, part of the work goes into pushing fluid into and out of the control volume. This $\Delta (pv)$ portion of the work is separated out and lumped together with the internal energy terms to give the enthalpy terms. | {
"domain": "physics.stackexchange",
"id": 65724,
"tags": "thermodynamics"
} |
No ROS_WORKSPACE set | Question:
I have upgraded from Electric to Fuerte on Ubuntu 11.10. Every time I type roscd, I get No ROS_WORKSPACE set.
This is part of my bashrc:
#source /opt/ros/electric/setup.bash
export ROS_PACKAGE_PATH=~/ros:${ROS_PACKAGE_PATH}
#source /opt/ros/electric/setup.bash
export ROS_PACKAGE_PATH=~/ros_workspace:$ROS_PACKAGE_PATH
#source /opt/ros/electric/setup.bash
source ~/setup.sh
source /opt/ros/fuerte/setup.bash
source /opt/ros/fuerte/setup.bash
I appreciate any help.
Originally posted by Morpheus on ROS Answers with karma: 111 on 2012-10-07
Post score: 1
Original comments
Comment by yigit on 2012-10-07:
what do you get when you type "echo $ROS_WORKSPACE"
Comment by Morpheus on 2012-10-07:
when I type echo $ROS_WORKSPACE, I don't get anything
Comment by sai on 2012-10-07:
As mentioned below, try adding the ros workspace in bashrc or setup.sh. Try creating a new workspace or overlay following the tutorials
Answer:
In my opinion you would benefit from cleaning up your .bashrc; ideally there should be only one line in it regarding ROS workspaces, which could be:
$ source ~/ros_workspace/setup.sh
This setup.sh in ~/ros_workspace could best be created using rosws or rosinstall, and it would set the ROS_PACKAGE_PATH for you.
Originally posted by KruseT with karma: 7848 on 2012-10-08
This answer was ACCEPTED on the original site
Post score: 3
Original comments
Comment by Lorenz on 2012-10-08:
Relevant link
Comment by jbohren on 2012-10-09:
Except some people are still getting used to the idea of workspaces, remember how long it took me to drink the kool-aid... | {
"domain": "robotics.stackexchange",
"id": 11260,
"tags": "ros"
} |
Removing from a collection in Excel VBA | Question: In my VBA code, I am trying to generate a specific list of cell row positions.
Basically, I first fill a collection with the entire list of row positions pertaining to a specific column:
Dim arrPos As New Collection
....
For i = 3 To bottomRow
arrPos.Add i
Next i
Then I try to remove values from this collection if there's no problem at that specific row.
For h = matchRow To 3 Step -1
For g = arrPos.Count To 1 Step -1
If CLng(Worksheets(".....").Range("C" & h).Value) = arrPos(g) Then
arrPos.Remove (g)
Exit For
End If
Next g
Next h
Basically, Range("C" & h).Value is a column where the =MATCH function was used, so there's a whole list of row positions in that column. If the MATCH worked, then I can remove it from the collection. A similar type of loop is used further down the code for the rows where the MATCH came up false.
The code gives the proper results but it can drag on at times (especially since row counts can get up to the 5000's) and even crash my puny laptop. As you can see, the method makes use of nested loops and I believe that significant results could be attained by re-factoring this portion.
Does anyone have any suggestions? The question is basically requesting a more efficient way to identify the row positions that did not come up from a MATCH function: either because the value was slightly erroneous or just simply missing.
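The nested loop does a linear scan of the collection for every row; a hash-based lookup makes each membership test O(1). An illustrative sketch of the same pattern in Python (not VBA; the row numbers are made up):

```python
# matched: row numbers where =MATCH succeeded (toy data)
matched = {3, 7, 12, 4000}
all_rows = range(3, 5001)

# nested-loop style: O(rows * matched) comparisons
slow = [r for r in all_rows if not any(r == m for m in matched)]
# hash-lookup style: O(rows), which is what a Dictionary-like .Exists test gives
fast = [r for r in all_rows if r not in matched]

assert slow == fast
```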
Answer: The answer is to instead use the VBA Dictionary object, accessible by adding "Microsoft Scripting Runtime" as a reference in your project. I don't know the internal mechanics, but the Dictionary object has an .Exists method to see if a key is within its collection. This is much faster than my nested looping through an ordinary Collection object to see if a specific value is contained. | {
"domain": "codereview.stackexchange",
"id": 3482,
"tags": "vba, excel"
} |
Does electron degeneracy pressure decrease over time in a star? | Question: As far as i understand, electron degeneracy pressure relies upon the confined electrons having a lot of kinetic energy, which causes them to push 'outwards', counteracting gravitational pressure in a white dwarf.
Shouldn't this kinetic energy slowly decrease as the white dwarf cools into a brown then black dwarf? If so, wouldn't the star collapse when the pressure decreases enough?
Answer: The degeneracy pressure is the pressure exerted by a fermion gas even at zero temperature. The existence of a (potentially very large) pressure even at $T=0$ is counterintuitive and it represents a fundamentally quantum-mechanical effect.
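To make the $T=0$ point concrete, here is a toy evaluation of the standard free-electron Fermi energy $E_F = \hbar^2(3\pi^2 n)^{2/3}/2m_e$ (my own illustration, not from the answer; note that temperature appears nowhere in the formula):

```python
from math import pi

HBAR = 1.0545718e-34   # reduced Planck constant, J*s
M_E = 9.1093837e-31    # electron mass, kg

def fermi_energy(n):
    """Fermi energy (J) of a non-relativistic free electron gas of number density n (m^-3)."""
    return HBAR ** 2 * (3 * pi ** 2 * n) ** (2 / 3) / (2 * M_E)

# Denser gas -> higher Fermi energy (and pressure), independent of temperature.
assert fermi_energy(1e36) > fermi_energy(1e30) > 0.0
```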
The Pauli Exclusion Principle means that no two identical electrons can be in the same quantum state. If you imagine adding electrons one at a time to a finite region, with each electron falling into the lowest available energy state, you will soon need to be putting electrons into states with substantial amounts of momentum, because the low-lying states are already filled. So, even at $T=0$, many of the electrons in a dense electron gas are whizzing around at fairly high speed. It is the kinetic action of these moving electrons that is responsible for the degeneracy pressure. | {
"domain": "physics.stackexchange",
"id": 58942,
"tags": "quantum-mechanics, white-dwarfs"
} |
How to see that $F$ $F$ dual is a surface term? | Question: The renormalisable 'theta term' that one can add to a Lagrangian describing Yang-Mills fields is often neglected on the grounds that it contributes a surface term. For QED, this is easy to see:
$$\theta \int F \wedge F = \theta \int \mathrm{d}(A \wedge F) $$
But for a non-Abelian field with $F = \mathrm{d}A + A \wedge A$, $F \wedge F$ contains an $A^4$ term which isn't obviously an exact form. Either I'm missing something obvious here, or perhaps $F \wedge F$ is not the correct way to write the theta term for a non-Abelian field?
EDIT (problem solved): I found a proof that
$$ \mathrm{tr}(F \wedge F) = \mathrm{d}\, \mathrm{tr}\left(A \wedge \mathrm{d}A + \frac{2}{3}A^3 \right)$$
In section 10.5.5 (lemma 10.3) of Nakahara. As pointed out by ACuriousMind below, the proof isn't very enlightening, but one crucial step I was missing is that
$$ \mathrm{tr}(A^4) = 0$$
On account of the cyclicity of the trace and antisymmetry of the wedge product.
Answer: In terms of the components $A=A_\mu dx^{\mu}$, we have
$$
\frac{\theta}{2\pi}\mathrm{tr}\left[F\wedge F\right]=\frac{2\theta}{\pi}\mathrm{tr}\left[\varepsilon^{\mu\nu\rho\sigma}(\partial_{\mu} A_{\nu}+A_{\mu}A_{\nu})(\partial_{\rho} A_{\sigma}+A_{\rho}A_{\sigma})\right]
$$
And then
$$
\frac{\theta}{2\pi}\mathrm{tr}\left[F\wedge F\right]=\frac{2\theta}{\pi}\mathrm{tr}\left[\varepsilon^{\mu\nu\rho\sigma}\partial_{\mu}(A_\nu\partial_\rho A_\sigma+\frac{2}{3}A_{\nu}A_\rho A_\sigma)\right]+\frac{2\theta}{\pi}\mathrm{tr}\left[A_{\mu}A_{\nu}A_\rho A_\sigma\right]\varepsilon^{\mu\nu\rho\sigma}
$$
by cyclic permutations on $\nu$, $\rho$, $\sigma$ and the fact that $\partial_\mu \partial_\nu$ is symmetric. Now, the last term vanishes since a cyclic permutation of an even number of elements (in this case four) is always odd | {
"domain": "physics.stackexchange",
"id": 47112,
"tags": "differential-geometry, topological-field-theory, yang-mills, boundary-terms"
} |
Are there any unsupervised learning algorithms for time sequenced data? | Question: Each observation in my data was collected with a difference of 0.1 seconds. I don't call it a time series because it doesn't have a date and time stamp. In the examples of clustering algorithms (I found online) and PCA, the sample data have 1 observation per case and are not timed. But my data have hundreds of observations collected every 0.1 seconds per vehicle, and there are many vehicles.
Note: I have asked this question on quora as well.
Answer: What you have is a sequence of events in time, so do not hesitate to call it a time series!
Clustering in time series has 2 different meanings:
Segmentation of time series i.e. you want to segment an individual time series into different time intervals according to internal similarities.
Time series clustering i.e. you have several time series and you want to find different clusters according to similarities between them.
I assume you mean the second one and here is my suggestion:
You have many vehicles and many observations per vehicle, so you have several matrices (each vehicle is a matrix) and each matrix contains N rows (number of observations) and T columns (time points). One suggestion could be applying PCA to each matrix to reduce the dimensionality and observing the data in PC space to see if there are meaningful relations between different observations within a matrix (vehicle). Then you can stack each observation across all vehicles into a matrix and apply PCA to that to see the relations of a single observation between different vehicles.
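A sketch of the stacking-plus-PCA suggestion (shapes and data are made up; PCA is done here via SVD of the centered matrix):

```python
import numpy as np

rng = np.random.default_rng(0)
V, N, T = 5, 8, 20                       # vehicles, observations per vehicle, time points
tensor = rng.normal(size=(V, N, T))      # stand-in for the real measurements

X = tensor.reshape(V * N, T)             # one row per (vehicle, observation) time series
Xc = X - X.mean(axis=0)                  # center before PCA
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
scores = Xc @ Vt[:2].T                   # coordinates in the first two principal components
print(scores.shape)                      # (40, 2)
```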
If you do not have negative values, Matrix Factorization is strongly recommended for dimension reduction of matrix-form data.
Another suggestion could be putting all matrices on top of each other to build an NxMxT tensor, where N is the number of vehicles, M is the number of observations and T is the number of time points, and applying Tensor Decomposition to see relations globally.
A very nice approach to time series clustering is shown in this paper, where the implementation is quite straightforward.
I hope it helped!
Good Luck :)
EDIT
As you mentioned that you mean time series segmentation, I add this to the answer.
Time series segmentation is the only clustering problem that has a ground truth for evaluation. Indeed, you consider the generating distribution behind the time series and analyze it. I strongly recommend this, this, this, this, this and this, where your problem is comprehensively studied, especially the last one and the PhD thesis.
Good Luck! | {
"domain": "datascience.stackexchange",
"id": 314,
"tags": "algorithms"
} |