How is uniqueness quantification translated in First Order Logic
Question: I have the following natural language statement: "There is only one house in area1 the size of which is less than 200m²." which is mistranslated to FOL: ∃x.(house(x) ∧ In(x,area1) ∧ ∀y.(house(y) ∧ In(y,area1) ∧ size(y) < 200 -> x=y)) This translation is wrong according to my lecturer, because it is not necessary that the size of x be less than 200. The statement is true even if there are only houses which are bigger. I have two questions: I don't get the FOL translation at all and don't see where the uniqueness part is expressed, so I translated it back: "if all houses in area1 have a size less than 200m² then there exists one house which is equal to all of them??" Why is it not necessary that the size of x is less than 200, when the statement above clearly says that there must exist one house with a size less than 200? Answer: According to the Wikipedia entry on Uniqueness Quantification, your lecturer is correct. There is no size requirement on x expressed in the FOL expression. The point about the implication is that it can be true even when the antecedent is false. So, there is a house in area1 (which we call x). And all houses in area1 which are smaller than 200 are the same as x. But if there aren't any, then the antecedent is false, and the implication is true regardless of the consequent (x = y), so the whole statement is still true. As another example: "If Trump is the 31st president of the USA, then the moon is made of green cheese". Both antecedent and consequent are false, but the whole statement is still logically true. Same as "If there is a house in that area, and there are houses that are less than 200 (which there aren't), then that house is one of them." Moving on to the correct expression: The unique quantifier (usually written as ∃!) can be rewritten using the existential and universal quantifiers as follows (see the above-mentioned Wikipedia page): ∃x (P(x) ∧ ∀y (P(y) -> y = x)) This is not what you have got; you have got two different predicates, P1 and P2.
Your P1(x) is (house(x) ∧ in(x, area1)), and your P2(x) is (house(x) ∧ in(x, area1) ∧ size(x) < 200) The correct expression would require the same predicate for the quantifiers and would therefore be ∃x ((house(x) ∧ in(x, area1) ∧ size(x) < 200) ∧ ∀y ((house(y) ∧ in(y, area1) ∧ size(y) < 200) -> y = x)) The difference is that you state that there is at least one house in the area with a size of less than 200. So the second predicate, that y is a house in the area with a size of less than 200, cannot be false.
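To see the lecturer's point concretely, here is a brute-force check over a small hypothetical finite model (the house names and sizes are made up for illustration): when no house in area1 is smaller than 200 m², the mistranslated formula still comes out true, because the implication's antecedent is false for every y, while the correct unique-existence formula is false.

```python
# Hypothetical model: every house in area1 is bigger than 200 m^2.
houses_in_area1 = {"h1": 250, "h2": 300, "h3": 400}  # name -> size in m^2
domain = list(houses_in_area1)

def small(x):
    # P2(x): house(x) and In(x, area1) and size(x) < 200
    return houses_in_area1[x] < 200

# Mistranslation: exists x. forall y. (small(y) -> x = y)
# (note: x itself is not required to be small)
mistranslated = any(
    all((not small(y)) or (x == y) for y in domain)
    for x in domain
)

# Correct unique existence: exists x. (small(x) and forall y. (small(y) -> y = x))
correct = any(
    small(x) and all((not small(y)) or (y == x) for y in domain)
    for x in domain
)
```

With this model `mistranslated` is vacuously true while `correct` is false, which is exactly the gap the lecturer pointed out.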
{ "domain": "ai.stackexchange", "id": 554, "tags": "logic" }
Do particles rotate around themselves or do they just move while the object rotates?
Question: In this question, I'm not talking about particle spin. I guess that when an object rotates, its atoms also rotate. When an atom rotates, its particles must move in space. I wonder whether the particles have an orientation. Can they themselves rotate, or do they just change position around the axis (middle) of a proton, so that we say the proton rotates? Let's think about single particles like the electron instead of composite particles like hadrons. Can electrons rotate? Edit: I think this is a simple and good question but I couldn't get a sufficient answer yet. Answer: A particle treated as a point mass does not have rotation defined, so the question does not apply to point masses. In fact, rotations are used to describe the position of a point mass riding on a moving coordinate frame. I see rotation as a property of the frame of reference, and not necessarily of the masses being tracked.
{ "domain": "physics.stackexchange", "id": 71597, "tags": "rotation, particle-physics" }
Evaluation of a surface integral in Electromagnetism
Question: There is an integral that I stumbled upon when I saw a calculation related to magnetic field energy (in the static current density case) $$ U_B= \int_\text{whole space} \mathbf j \cdot \mathbf A \:\mathrm dV . $$ The integral is expanded using Maxwell's laws and we arrive at something like this $$ U_B = C\left( \oint_\text{boundary surface} \mathbf B\times \mathbf A \cdot \mathbf n \:\mathrm dS + \int_\text{whole space} \mathbf B\cdot \mathbf B \:\mathrm dV \right), $$ where $C$ is a constant and the surface integral is taken over the boundary of the region, pushed out to infinity. Then the first integral is taken to be zero. There was no logic stated behind this. Is there some trivial fact that I am missing? I cannot figure out any reason for the first integral to be $0$. Answer: This condition essentially needs to be imposed 'on faith', to some degree, because we intuitively feel that physical sources should be localized, and for those we can work out that the fields decay fast enough that the surface term vanishes. This is not to say that we're just wishing away terms that we find inconvenient: instead, we explicitly make the caveat that what follows is valid for localized sources only - and we make the slightly circular definition that a 'localized source' (or a localized field) is one for which the relevant surface terms vanish. In the majority of cases, it is very hard to come up with a (useful) necessary condition on the fields and sources which is equivalent to the vanishing of that surface term. We know plenty of sufficient conditions (of the form 'for such-and-such class of localized sources, the surface term vanishes'), and those are broad enough to cover most cases of interest, but it's hard to make more general statements.
In a sense, this is more of a "credit" view of mathematical rigour, as opposed to the "debit" view that mathematicians tend to hold: where they say "OK, my fields satisfy X and Y niceness hypotheses, let's work out what results I can prove from those", physicists tend to tell rigour "well, I want that term to go away, so you can just bill me later by telling me what conditions my fields need to satisfy for my formalism to hold". And indeed, this kind of assumption can come back and bite us. One famous example is the separation of optical angular momentum into orbital and spin contributions, as described here, via an integration by parts of the form $$ \mathbf J = \frac{1}{\mu_0c^2} \int\mathrm d\mathbf r \: \mathbf r \times (\mathbf E\times\mathbf B) = \frac{1}{\mu_0c^2} \int\mathrm d\mathbf r \: (E_i (\mathbf r \times\nabla) A_i ) + \frac{1}{\mu_0c^2} \int\mathrm d\mathbf r \: \mathbf E\times \mathbf A =\mathbf L + \mathbf S, $$ where you get this weird paradox: if you plug in a circularly-polarized plane wave, then $\mathbf L$ is zero but $\mathbf S$ is not, but the initial $\mathbf J$ also seems to vanish, so something is off: the problem here is that the boundary terms (or the regions where the beam tapers off) carry nonzero angular momentum, and cannot be neglected. This apparent paradox has puzzled many an unsuspecting author. Nevertheless, we still use the equation (as we do with the one you mentioned) because it holds for what we think are physical fields, and we're prepared to say "it's zero because I say so" and accept that if it's not zero then the fields are probably not physical and we shouldn't be using them as such.
{ "domain": "physics.stackexchange", "id": 42033, "tags": "electromagnetism" }
Is it possible to both move and stabilize a two wheeled robot with no gyroscopes?
Question: With a two-wheeled robot like this one, I have managed to stabilize it while keeping it stationary. This was done using a digital feedback control system by reading the position of the wheels to determine position, and the natural back electromotive force from the wheel motors was used in the feedback loop to determine velocity. It was kept stable with a PID controller, which was designed using a root locus algorithm to keep it stable and modulate the performance parameters (such as percent overshoot, settling time, etc.). I wanted to attempt to keep it stable while simultaneously propelling it forward, but I couldn't figure out how to go about designing a linear controller that could do that. Is it possible to both propel the robot forward and keep it stable using a feedback controller on the wheels, or is a gyroscope necessary? Answer: You could use other ways of measuring orientation, such as an accelerometer, optical tracking of markers, or a depth sensor pointed at the floor.
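For readers unfamiliar with the control loop the question describes, here is a minimal discrete PID step in Python; the gains and timestep are illustrative placeholders, not the asker's actual values (those were tuned via root locus).

```python
class PID:
    """Minimal discrete PID controller: u = Kp*e + Ki*integral(e) + Kd*de/dt."""

    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def step(self, setpoint, measurement):
        error = setpoint - measurement
        self.integral += error * self.dt            # accumulate for the I term
        derivative = (error - self.prev_error) / self.dt  # finite-difference D term
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# e.g. regulating tilt angle toward zero at a 100 Hz loop rate
pid = PID(kp=2.0, ki=0.5, kd=0.1, dt=0.01)
u = pid.step(setpoint=0.0, measurement=0.05)  # small forward lean -> corrective output
```

Propelling the robot forward while balancing is usually done by feeding a nonzero velocity or position reference into an outer loop, which in turn biases the tilt setpoint of an inner loop like this one.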
{ "domain": "robotics.stackexchange", "id": 4, "tags": "two-wheeled, stability" }
I feel as if my quicksort can be made more efficient, but what?
Question: I just learned the quicksort algorithm and tried to implement it, but it feels dirty:

#include <iostream>

void quicksort(int list[], int low, int high)
{
    if(low >= high)
        return;
    else
    {
        int pivot = low, i = low, j = high;
        while(i < j)
        {
            while(list[i] <= list[pivot] && i < high) { i++; }
            while(list[j] > list[pivot] && j > low) { j--; }
            if(i > j) break;
            int temp = list[i];
            list[i] = list[j];
            list[j] = temp;
        }
        int temp = list[pivot];
        list[pivot] = list[j];
        list[j] = temp;
        quicksort(list, low, j-1);
        quicksort(list, j+1, high);
    }
}

int main()
{
    int arr[10] = { 12, 2, 24, 32, 5, 1203, 7, 123, 2354, 2 };
    quicksort(arr, 0, 9);
    for(int i = 0; i < 10; i++) { std::cout << arr[i] << " "; }
}

The break condition in the while loop feels really cheap; as if I did something wrong and needed to put it there... What can I improve? Answer: I don't like seeing arrays passed as parameters.

void quicksort(int list[], int low, int high)
//                 ^^^^^^

Inside the function all similarity to an array has disappeared. It has decayed into a pointer. By using the array-like syntax you might catch out people who want to treat it as an array (which is a real maintenance issue). If this code is C then just pass a pointer. If this code is C++ then pass a reference to an array, or use a container type and pass it by reference (I prefer the container option as you can template it). In quick pre-condition checks at the head of a function like this, there is no need for the else part:

if(low >= high)
    return;
else

It looks neater and saves you a level of indentation. One variable per line. Also give the variables more meaningful names.

int pivot = low, i = low, j = high;

Also I would say that pivot should really be the value you are pivoting around, not the location of that value.

int pivot = list[<location Of Pivot Value>]; // See below for more.

Pretty sure there is a bug here: i < high is not correct.

while(list[i] <= list[pivot] && i < high)

Same thing here.
Pretty sure there is a bug here: j > low is not correct.

while(list[j] > list[pivot] && j > low)

Yep, you are correct, the break is ugly here.

if(i < j)
{
    std::swap(list[i], list[j]);
}

You are only doing this (below) to prevent yourself choosing the same pivot point each time. So you should choose a different technique for picking the pivot point. Why not the element in the middle of the list?

int temp = list[pivot];
list[pivot] = list[j];
list[j] = temp;

You pass the location of the first and last element in the array.

quicksort(list, low, j-1);
quicksort(list, j+1, high);

It is more C++-like to use first and one past the point you consider the end. It also makes the code look neater; try it and see.
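Putting the review's suggestions together (pivot on the middle value rather than a location, swap instead of the manual temp dance, no break), the partition logic might look like the following sketch. It is written in Python for brevity rather than the poster's C++, but the structure carries over directly.

```python
def quicksort(items, low, high):
    """In-place quicksort pivoting on the middle *value* (Hoare-style partition)."""
    if low >= high:
        return
    pivot = items[(low + high) // 2]  # the value pivoted around, not its location
    i, j = low, high
    while i <= j:
        while items[i] < pivot:   # no index guard needed: the pivot value stops the scan
            i += 1
        while items[j] > pivot:
            j -= 1
        if i <= j:
            items[i], items[j] = items[j], items[i]  # std::swap in C++
            i += 1
            j -= 1
    quicksort(items, low, j)
    quicksort(items, i, high)
```

Because the scans stop at the pivot value itself, the `i < high` / `j > low` guards and the break both disappear.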
{ "domain": "codereview.stackexchange", "id": 5225, "tags": "c++, optimization, algorithm, quick-sort" }
rosrun rqt_graph rqt_graph shows error NameError: name 'basestring' is not defined
Question: I have started the tutorials and when I try to run rqt_graph it gives me a basestring error. Can someone please tell me how to proceed? I am using Ubuntu 16.04 and the Kinetic distro.

Traceback (most recent call last):
  File "/opt/ros/kinetic/lib/rqt_graph/rqt_graph", line 8, in <module>
    sys.exit(main.main(sys.argv, standalone='rqt_graph.ros_graph.RosGraph'))
  File "/opt/ros/kinetic/lib/python2.7/dist-packages/rqt_gui/main.py", line 59, in main
    return super(Main, self).main(argv, standalone=standalone, plugin_argument_provider=plugin_argument_provider, plugin_manager_settings_prefix=str(hash(os.environ['ROS_PACKAGE_PATH'])))
  File "/opt/ros/kinetic/lib/python2.7/dist-packages/qt_gui/main.py", line 349, in main
    from .perspective_manager import PerspectiveManager
  File "/opt/ros/kinetic/lib/python2.7/dist-packages/qt_gui/perspective_manager.py", line 44, in <module>
    class PerspectiveManager(QObject):
  File "/opt/ros/kinetic/lib/python2.7/dist-packages/qt_gui/perspective_manager.py", line 48, in PerspectiveManager
    perspective_changed_signal = Signal(basestring)
NameError: name 'basestring' is not defined

Is this what you were asking for? EDIT: I thought of proceeding for the time being, but I get the same error on running rqt_console rqt_console, and also in rqt_plot rqt_plot. Maybe the problem is in the rqt package? Originally posted by manish.nayak on ROS Answers with karma: 3 on 2017-01-22 Post score: 0 Original comments Comment by gvdhoorn on 2017-01-22: There is probably a multi-line stacktrace associated with that error. Could you include that in your post? Please edit your question and use the Preformatted Text button to format things properly (it's the one with 101010 on it). Comment by ahendrix on 2017-01-22: A quick google search suggests that this may be an incompatibility with trying to run python2 code in python3. Have you modified your system in some way to make python3 the default? Comment by manish.nayak on 2017-01-23: Yes you are right.
I have installed anaconda for python, which sets python 3.5.2 as default. What should I do then? Comment by gvdhoorn on 2017-01-23: Maybe the problem is in the rqt package? No, the problem is trying to use Python 3.x with code that is written for 2.x, as @ahendrix wrote in his comment. Answer: Given Ubuntu's (and therefore ROS's) long-term goal of python3 compatibility, you should probably report this as a bug in the qt_gui package: https://github.com/ros-visualization/qt_gui_core/issues Depending on how python-savvy you are, and how long you want to wait for a fix, there are at least a few ways to proceed: Wait for the upstream maintainers to fix the python3 compatibility bug. Since this doesn't affect many other users, and doesn't affect the default configuration, this may take a while. If you know python well, you can attempt to fix the bug yourself and submit it as a pull request. Uninstall anaconda, or switch the default version of python back to python 2.7 without uninstalling it. I'm not very familiar with Anaconda, but I've seen a few posts indicating that, with some effort, it can be made to work. This suggests that there's a version of Anaconda for python 2.7 that may be more compatible. Originally posted by ahendrix with karma: 47576 on 2017-01-23 This answer was ACCEPTED on the original site Post score: 0 Original comments Comment by manish.nayak on 2017-01-23: Ok, I will definitely report the bug. I have just basic knowledge of python so I don't think fixing the bug by myself is feasible. On the other hand Anaconda is really handy and contains quite a few packages which I frequently use. Maybe un-installing it and installing the 2.7 version would be good. Comment by manish.nayak on 2017-01-23: Although the division in python 2 is what I wanted to avoid. But that can be fixed using from __future__ import division, right? Because I have code already written in python 3.
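As a stopgap (not a proper fix), this specific NameError can be worked around with the standard 2-to-3 compatibility shim that aliases the removed basestring name on Python 3; patching system ROS files this way is fragile, so treat it only as a sketch of the idea.

```python
# On Python 2, `basestring` is the common ancestor of str and unicode.
# On Python 3 it was removed; alias it to `str` so 2.x-era code keeps working.
try:
    basestring  # probe whether the name exists (Python 2)
except NameError:
    basestring = str  # Python 3

def is_text(value):
    # 2.x-style check that now also runs on Python 3
    return isinstance(value, basestring)
```

The cleaner route remains the one in the answer: keep the system default interpreter at Python 2.7 for Kinetic and let Anaconda live in its own environment.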
{ "domain": "robotics.stackexchange", "id": 26796, "tags": "ros, ros-kinetic, rqt-graph" }
Launch small satellite?
Question: I want to build a small satellite and launch it into low Earth orbit. Nothing 'too fancy': a Raspberry Pi to power the systems, a camera which will transmit video, a radio receiver and transmitter, maybe a small power bank to power the Pi, solar panels to charge it, and a gyro for stability in space instead of multiple thrusters. Hopefully it should last a few weeks before it burns up. The weight of the satellite would be under 5 kg. Also, would radiation cause any short-term harm to the electronics? I was also thinking that I could first launch it with a weather balloon, have a small chemical rocket fire just before the weather balloon pops, and then, once it's reached microgravity, the rocket falls away and a canister of compressed air accelerates it from there to an altitude where it should last a couple of weeks. I was also thinking about ion thrusters, but they use a lot of electricity. Would this be possible at all? Answer: @antlersoft is right. You'd still need a powerful rocket to get moving fast enough to enter into orbit around the Earth, so probably not, unless it's a really, really big balloon holding a really big rocket! The term "microgravity" might be a bit misleading. The gravity up there is almost as strong as it is on the surface. The key is to go fast enough so that your "fall" towards Earth actually ends up being an orbit. That's about 7.7 kilometers per second!! If you were inside a spacecraft moving at such an orbital velocity, you would be in orbit around the Earth. If you just look at how your body moves with respect to the spacecraft, you could call it "microgravity" (lots of people do, even astronauts), but maybe it should be called micro-acceleration with respect to the spacecraft. Without the buoyancy that the balloon provides only while it's within the atmosphere, your rocket would accelerate towards Earth. At, say, only 30 km above the surface, you're only 0.5% farther from the center of the Earth.
Since the force behaves like $1/r^2$, that means gravity is only about 1% lower, so you'd accelerate towards Earth at roughly 9.7 $m/s^2$ instead of 9.8 at the surface, until you got low enough that the air is dense enough to start slowing you down. So the hard part about getting to, and staying in, space is not just the altitude: it's speeding up to that 7.7 kilometers per second.
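The answer's estimates are easy to reproduce from Newton's law of gravitation; the 30 km altitude is the balloon figure used above, and the constants are standard values.

```python
import math

G = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
M = 5.972e24    # mass of the Earth, kg
R = 6.371e6     # mean radius of the Earth, m
h = 30e3        # balloon altitude from the answer, m

g_surface = G * M / R**2                # ~9.8 m/s^2
g_balloon = G * M / (R + h)**2          # only ~1% lower at 30 km up
v_orbit = math.sqrt(G * M / (R + h))    # circular orbital speed at that altitude, m/s
```

The circular-orbit speed comes out near 7.9 km/s at this low altitude, in the same ballpark as the 7.7 km/s quoted above, which is why altitude alone (a balloon's contribution) buys you almost nothing.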
{ "domain": "physics.stackexchange", "id": 33720, "tags": "homework-and-exercises, rocket-science, estimation, satellites, propulsion" }
Could I survive at (or near) absolute zero with a very, very, very thick sweater?
Question: Imagine I'm in an infinitely large vacuum and have a special apparatus built into my body that allows me to breathe, eat, pee/poo, etc. and never age. The vacuum is similar to deep space and has no heat source or visible light and is therefore quite close to absolute zero. Perhaps we should say just above absolute zero, to exclude some strange phenomena that might take place at absolute zero? Let's just say it is cold. I don't want this question to be about phenomena associated with deep space that might cause issues here (e.g., gamma rays or something). Could I survive off my body heat alone if I had a very, very large sweater? What if the sweater was 10 or 10 million miles thick? Or thicker? If not, what if my sweater was pre-heated to some temperature, would it work then forever? Answer: A super-thick sweater probably isn't the way to go - you may be better off wrapping yourself in aluminum foil. The body loses heat through a handful of mechanisms: During conduction, your body transfers heat to the surrounding air which is in contact with your skin. This raises the temperature of the air, which (if the air is still) decreases the rate of heat loss. If the air is moving, then that energy is carried away by the breeze, and you're in contact with fresh, cool air basically the entire time. This is convection. Evaporation occurs when moisture in your skin is pulled from the liquid into the gas phase, taking energy along with it. This depends on the relative humidity of the air - see wet bulb temperature for more. At all times, your body emits radiation (primarily in the infrared), with a total power loss given by $P=\epsilon A\sigma T^4$.
Here, $\epsilon$ is the emissivity of your body, ($\epsilon\approx 0.95$ if you are naked) $A$ is the "effective radiation area" of your body ($A\approx 0.7 (2\text{ m}^2)\approx1.4\text{ m}^2$) $\sigma$ is the Stefan-Boltzmann constant $\sigma = 5.67 \times 10^{-8} \frac{\text{W}}{\text{m}^2\text{ K}^4}$ $T$ is the absolute temperature of your body in Kelvin. (Note that your body emits radiation, but also receives it, with the amount depending on your particular radiation environment.) Of these four mechanisms, the first two are irrelevant to your question because you are in a vacuum. Evaporation will definitely occur, especially around your nose, mouth, and eyes, but I think that the primary mode of heat loss here will be radiation, so let's focus on that. Your body generates heat at all times via your metabolism as well as internal friction. If you are relaxing in comfortable conditions, you are producing roughly 100 W - but this number increases if you start to exercise. In particular, when your body gets cold your brain activates the shiver reflex, which can cause your body's power output to jump to 200-300 W. Source (Note that $1 \text{ Cal/hr} \approx 1 \text{ W}$). Ignoring for a moment the effect of clothing, then your equilibrium body temperature can be roughly estimated by equating the power generated by your metabolic processes (and possibly movement) with the power loss via radiation, assuming that you are not absorbing radiation from anywhere else. I am assuming that the body is at a uniform temperature here. This would not be the case - the core of your body would be warmest and then a gradient would form to your skin - but this can be neglected because the gradient would not be very extreme. In this simplified model, this is the resulting equilibrium body temperature as a function of emissivity, assuming first 100 W and then 300 W of generated power. As you can see, the situation is rather bleak if you're facing the void in the nude. 
Your core temperature can't drop much below its normal 37 C before you enter a hypothermic state; even shivering ferociously, this requires an emissivity of something like $0.425$, far below your body's typical value of $0.95$. This is where clothing comes in. Textiles have a somewhat lower emissivity than naked humans do. The surface emissivity of wool is about 0.74, and most textiles are in that range or higher, which means that the surface of the garment would still equilibrate below 0 C. However, the thermal conductivity of wool is only about $0.03\frac{\text{W}}{\text{m K}}$. For a garment of thickness $t$ covering your entire body, the temperature gradient from your body's surface to the surface of the garment would be $$\frac{\Delta T}{t} = - \frac{100\text{ W}}{2\text{ m}^2 \cdot 0.03 \text{ W/mK}} \approx 1670 \frac{\text{K}}{\text m}$$ Starting from the temperature of the garment's exterior, this allows us to track back and find the corresponding body temperature as a function of thickness. I've performed the calculation for wool and for cotton, with the results shown below. The surface of a wool sweater would equilibrate at approximately -5 C, which would correspond to a 37 C body temperature if the thickness of the sweater were only about 3 cm. That's thick, certainly, but not absurdly so. For a cotton sweater, which would have both higher emissivity and higher thermal conductivity, the surface would equilibrate around -10 C and you would need a thickness of closer to 6 cm to keep you warm. On the other hand, you could consider wrapping yourself in a layer of extremely low-emissivity material, and that would be much more effective. Polished silver, for instance, has an emissivity of only $0.02$, which would be problematic in the wrong direction. To radiate 100 W/m$^2$, our layer would need to have a surface temperature of about 60 C, which would roast us alive.
The sweet spot - at which our body would equilibrate at 37 C - appears to correspond to an emissivity of approximately $0.15$. Based on this table of emissivities, it seems that alumel (an alloy of nickel, aluminum, manganese, and silicon) would do the trick. Further Reading: Convective and radiative heat transfer coefficients for individual human body segments The Relative Influences of Radiation and Convection on the Temperature Regulation of the Clothed Body
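The arithmetic behind the wool estimate can be sketched as follows; the garment-surface temperature of -5 C is taken from the answer's radiative-balance result, and the other constants are the ones quoted above.

```python
SIGMA = 5.67e-8   # Stefan-Boltzmann constant, W m^-2 K^-4
P = 100.0         # resting metabolic power, W
A = 2.0           # body surface area used in the gradient estimate, m^2
k_wool = 0.03     # thermal conductivity of wool, W/(m K)

# Temperature gradient sustained across the garment:
gradient = P / (A * k_wool)                  # ~1670 K per metre of wool

# Thickness needed so a -5 C outer surface meets a 37 C body:
T_body, T_surface = 310.0, 268.0             # kelvin
thickness = (T_body - T_surface) / gradient  # metres; a few centimetres

# Sanity check on the naked-body case: radiative equilibrium at 100 W
eps, A_rad = 0.95, 1.4                       # emissivity and effective radiating area
T_naked = (P / (eps * A_rad * SIGMA)) ** 0.25  # far below 310 K, hence "bleak"
```

The thickness works out to roughly 2.5 cm, consistent with the "about 3 cm" figure above once the approximations are accounted for.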
{ "domain": "physics.stackexchange", "id": 69042, "tags": "thermodynamics, temperature, thermal-radiation, estimation, biology" }
Help in understanding an exercise on observable / measurement
Question: I'm working through Quantum Computing for Computer Scientists (Yanofsky & Mannucci, 2008), and am getting a little confused about observables. From what I understand, an observable is a question represented by a Hermitian matrix. But that's as far as it goes. When we use an observable to make a measurement we obtain a real result, which can change the state $|\Psi\rangle$. There is mention in the book that "after an observation" (presumably after a measurement is taken) the result should be an eigenvalue, and the state of the system should collapse into the state which is the eigenvector corresponding to that eigenvalue. Then, in example 4.3.1 on p.126 the authors use the observable $$\Omega=\begin{bmatrix}-1&-i\\i&1\end{bmatrix},$$ which they state has eigenvalues $\lambda_1=-\sqrt{2}$ and $\lambda_2=\sqrt{2}$, with corresponding eigenvectors $|e_1\rangle=[-0.923i,-0.382]^T$ and $|e_2\rangle=[-0.382,0.923i]^T$. It goes on to say "now, let us suppose that after an observation of $\Omega$ on $|\Psi\rangle=\frac{1}{2}[1,1]^T$, the actual value observed is $\lambda_1$. The system has 'collapsed' from $|\Psi\rangle$ to $|e_1\rangle$." I'm finding it difficult to understand this. Do the authors mean to perform a measurement, i.e. $$\Omega|\Psi\rangle=\begin{bmatrix}-1&-i\\i&1\end{bmatrix}\frac{1}{2}\begin{bmatrix}1\\1\end{bmatrix}= \begin{bmatrix} -\frac{i}{2}-\frac{1}{2} \\ \frac{i}{2}+\frac{1}{2} \\ \end{bmatrix} $$ But then how have we observed $\lambda_1=-\sqrt{2}$? I think I've got the wrong end of the stick because the authors say "now, let us suppose that after an observation...," so maybe there is no calculation to be made, but it's very confusing. Can anybody help me understand this? Answer: When you give an observable, such as $\Omega$, it is used to define the measurement basis. It is not something that you would usually use to directly perform calculations (you can, and for $2\times 2$ matrices, we often do, as I'll detail below).
Normally, you want to take your observable, $\Omega$, and find the eigenvalues and eigenvectors. More specifically, you want projectors onto the different eigenspaces (this distinction is important if your matrix has degeneracy). So, we take $$ P_1=|e_1\rangle\langle e_1|,\qquad P_2=|e_2\rangle\langle e_2|. $$ In a sense, the eigenvalues are irrelevant, and hence so is $\Omega$. Our outcome is a state $$ |\Phi_i\rangle=P_i|\Psi\rangle, $$ up to normalisation (for rank 1 projectors, as here, the renormalised version is just $|e_i\rangle$), and the probability of getting outcome $i$ is $\langle\Phi_i|\Phi_i\rangle$. Note that you have to explicitly describe the branching outcomes (here, two different possibilities, with different probabilities). One simple calculation such as a matrix multiplication can't give you that. Now, it turns out that you can use the matrix $\Omega$ directly if you want. This is because, by completeness, $$ \sum_iP_i=I, $$ and we know that $\Omega=\sqrt{2}(-P_1+P_2)$. Hence, we can rearrange for $P_1$: $$ P_1=\frac{1}{2}(I-\Omega/\sqrt{2}), $$ and we can perform the same calculation as previously without having directly calculated the eigenvectors first. There's always a trick like this for $2\times 2$ matrices. For $n\times n$ matrices, you'd probably have to describe each of your projectors as a polynomial in $\Omega$ up to a power $n-1$, which is why we don't usually do this for anything other than $2\times 2$ matrices - it just gets more messy.
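The projector recipe above can be checked numerically for the book's $2\times 2$ example; the eigenvectors below were worked out by hand from $(\Omega-\lambda I)v=0$ (they match the book's up to a global phase), and the example state is normalised before applying the Born rule.

```python
import math

s2 = math.sqrt(2.0)
Omega = [[-1, -1j], [1j, 1]]  # the observable from example 4.3.1

def matvec(m, v):
    return [sum(m[r][c] * v[c] for c in range(2)) for r in range(2)]

def normalize(v):
    n = math.sqrt(sum(abs(c) ** 2 for c in v))
    return [c / n for c in v]

def inner(u, v):  # <u|v>, conjugating the bra
    return sum(uc.conjugate() * vc for uc, vc in zip(u, v))

# Eigenvectors solved by hand:
e1 = normalize([1, -1j * (s2 - 1)])  # eigenvalue -sqrt(2); same ray as [-0.923i, -0.382]
e2 = normalize([1, 1j * (s2 + 1)])   # eigenvalue +sqrt(2)

psi = normalize([1, 1])  # the |Psi> of the example, normalised

# Born rule via rank-1 projectors P_i = |e_i><e_i|:  p_i = |<e_i|psi>|^2
p1 = abs(inner(e1, psi)) ** 2
p2 = abs(inner(e2, psi)) ** 2
```

For this state the two outcomes turn out equally likely, and observing $\lambda_1$ collapses the state to $|e_1\rangle$; note that at no point did we need to compute $\Omega|\Psi\rangle$ directly.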
{ "domain": "quantumcomputing.stackexchange", "id": 953, "tags": "quantum-state, measurement" }
How large is the universe?
Question: We know that the age of the universe (or, at least the time since the Big Bang) is roughly 13.75 billion years. I have heard that the size of the universe is much larger than what we can see, in other words, much larger than the observable universe. Is this true? Why would this not conflict with what is given by Special Relativity (namely, that faster-than-light travel is prohibited)? If it is true, then what is the estimated size of the universe, and how can we know? Answer: It is indeed more useful to cite the age of the universe, because this defines the region in space which is observable, a 13.75 billion lightyear sphere (approximately). Clearly, however, the entire universe could be more than 13.75 billion light years across in diameter; that number is merely a radius. For example, let's suppose a naive view of the expansion of the universe which doesn't include inflation or dark energy. At the moment of the big bang, photons rush off in every direction at the speed of light (again, naive cosmology - ignore the fact that the universe is opaque for 300,000 years). These photons are all moving out from one point at the speed of light. We imagine, then, an expanding sphere whose surface is defined by the furthest point which light has so far reached. This sphere is expanding in volume very quickly indeed - the radius is expanding at the speed of light. So by now, 13.7 billion years later, the radius is 13.7 billion light years. The diameter of the sphere is twice that, 27.4 billion light years. The volume is the volume of a sphere with radius r=13.7 GLy, which is $4\pi r^3 / 3 \approx 1.1\times 10^{31}$ cubic lightyears. This shows that the universe can easily be much larger than 13.75 billion light years across. Also, note that, if the earth is formed on the expanding sphere, no one from earth will ever be able to "catch up" and see the other side of the sphere, since that side is still expanding.
This is what people mean when they say that the universe is larger than we can see. Now, this answer is wrong. Do not go quoting these numbers. It doesn't take into account inflation or the expansion of the universe. No one knows enough about either of those two effects to give a really good precise number for the size of the universe, but you can be certain that they only result in a bigger universe. One lower bound for the radius of the universe is 39 billion light years, based on some analysis of the cosmic microwave background.
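The naive numbers in the answer follow from simple sphere geometry (and, as stressed above, should not be quoted, since they ignore inflation and expansion):

```python
import math

r = 13.7e9                       # naive radius in light years: age times c
diameter = 2 * r                 # 27.4 billion light years
volume = 4 * math.pi * r**3 / 3  # ~1.1e31 cubic light years
```

This is purely the light-travel-distance picture; the real comoving radius of the observable universe is considerably larger because space itself has expanded while the light was in transit.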
{ "domain": "physics.stackexchange", "id": 8378, "tags": "cosmology, big-bang, special-relativity, visible-light" }
Confusion about the value of normal force on a wedge
Question: I was solving a banking problem when I had the following doubt. We have a mass m on a wedge with angle A. We have to find the normal force acting on the block of mass m. I tried to decompose the force mg acting in the downward direction into two components: one parallel to the surface of the wedge and the other perpendicular to it. It gave the following equation: $$N=mg\cos A$$ On the other hand, if I decompose the normal force into two components, I get the following equations: $$N\cos A=mg$$ $$N=mg\sec A$$ So which one is right? Answer: Don't forget the acceleration! You can split $N$ into two components, yes: $$N_y=N\cos A\qquad\text{ and }\qquad N_x=N\sin A$$ But equating the first one to gravity $w$ is wrong. If you set the upwards and downwards y-forces equal to each other, then you are applying Newton's 1st law, which does not apply here. Rather, you should use Newton's 2nd law, giving: $$N_y-w=ma_y\quad\Leftrightarrow\quad N\cos A=ma_y+w$$ because there is an acceleration component along this y-direction. So you were missing a term in the expression. In slope systems it is usually easier to choose the coordinate system along the slope. This is how you got your red arrows. This is usually easier because the y-direction is then perpendicular to the slope (and thus to the motion), so that there is no acceleration along this axis and Newton's 1st law can be used. Otherwise, in the case of your blue arrows, you have chosen a coordinate system where there is an acceleration component along both axes. We would rather avoid that by placing the coordinate system more cleverly, if possible.
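A quick numeric check, for a hypothetical frictionless wedge with A = 30° and the block free to slide down, shows that the two coordinate choices agree once the acceleration term is included:

```python
import math

m, g, A = 2.0, 9.81, math.radians(30)  # illustrative mass, gravity, wedge angle

# Axes along/perpendicular to the slope: no acceleration perpendicular to the
# surface, so Newton's 1st law applies in that direction directly:
N_slope = m * g * math.cos(A)

# Horizontal/vertical axes: the block slides with acceleration g*sin(A) along
# the slope, whose vertical component is a_y = -g*sin(A)**2 (downward).
# Newton's 2nd law vertically:  N*cos(A) - m*g = m*a_y
a_y = -g * math.sin(A) ** 2
N_vertical = m * (g + a_y) / math.cos(A)
```

Both give $N = mg\cos A$, not $mg\sec A$. (In a banked-turn problem at speed, the acceleration is horizontal and centripetal instead, and the algebra comes out differently.)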
{ "domain": "physics.stackexchange", "id": 54952, "tags": "kinematics" }
Can a single optical device converge as well as diverge a parallel beam of light?
Question: A biconvex lens converges a parallel beam of light when the beam is incident on either of its convex surfaces. A biconcave lens diverges a parallel beam of light when the beam is incident on either of its concave surfaces. Similarly, plano-convex and plano-concave lenses converge or diverge a parallel beam irrespective of whether the plane side or the curved side faces the beam. It can be seen that when the medium in which the lens is placed is uniform throughout, if the lens converges or diverges a parallel beam of light when one of its sides faces the beam, then it behaves in a similar manner when the other side faces the beam, although the focal length may differ, depending on the curvature of the two sides. The sign of the focal length determines whether the lens behaves as a converging or a diverging device. When it's positive, the lens converges a parallel beam, and when it's negative, the lens diverges the beam. Is it possible for an optical device (consisting of lens(es)) to converge as well as diverge a parallel beam of light when the beam is incident on its two different sides? Alternatively, can the focal length of an optical device have two different signs when measured along different sides? If an optical device converges a parallel beam of light when the beam is incident on one side, does it imply that when the beam is incident on the other side, the beam converges again? I've constructed the following diagrams to make my question clearer. In the following images, the blue coloured dotted rectangle represents an optical device. Its orientation is marked using the two fat red arrows at the top and the bottom of the rectangle. The optical centre is assumed to be at the geometric centre of the rectangle. A parallel beam of light is incident from the left side.
If the optical device converges the beam as shown in the first diagram below: Is it possible for the same optical device to diverge the parallel beam when it's rotated so that the other side faces the beam, as shown in the following diagram?: I'm unable to think of any such optical devices (lenses or combinations of lenses). I also tried various combinations of lenses using this Phet simulation. But in all cases, when a device converges from one side, it behaves in the same way when the beam is incident on the other side. Image credit : My own work :) Answer: The easy answer is "no". In lenses, prisms, mirrors, etc., if the direction of a light ray is reversed, it simply follows the reverse path. However, depending on why you want such a device, it may be possible to accomplish something close to what you want. For example, there are optical metasurface lenses and polarizing devices that respond differently to light of orthogonal polarizations, so that if (e.g.) right circularly polarized light is incident from the left in your drawing, the lens can be rotated 90 degrees and thereby switch from acting as a positive lens to acting as a negative lens. See, for example, "Multifunctional metasurface lens for imaging and Fourier transform". There are also angularly selective volume holographic lenses that could do something close to what you want, if instead of simply reversing the lens it were okay to tilt it (and if only monochromatic light were used, and if the only function were to converge and diverge as opposed to forming an image).
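The "no" can also be seen with ray-transfer (ABCD) matrices: for a system with the same medium on both sides, reversing the system swaps the A and D entries but leaves C, and hence the effective focal length f = -1/C, unchanged. A small sketch with a hypothetical two-thin-lens system (the focal lengths and spacing are made-up example values):

```python
# Ray-transfer (ABCD) matrices: thin lens and free-space propagation
def lens(f):
    return [[1.0, 0.0], [-1.0 / f, 1.0]]

def gap(d):
    return [[1.0, d], [0.0, 1.0]]

def mul(M, N):
    # 2x2 matrix product M @ N
    return [[sum(M[i][k] * N[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

f1, f2, d = 100.0, -50.0, 30.0                   # hypothetical system (mm)
forward = mul(lens(f2), mul(gap(d), lens(f1)))   # light hits f1 first
reversed_ = mul(lens(f1), mul(gap(d), lens(f2))) # system flipped around

# Effective focal length f = -1/C is identical from both sides:
print(-1.0 / forward[1][0], -1.0 / reversed_[1][0])
```

The C entry here is -1/f1 - 1/f2 + d/(f1·f2), which is symmetric under swapping f1 and f2, so the sign of the focal length cannot flip on reversal.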
{ "domain": "physics.stackexchange", "id": 63750, "tags": "optics, refraction, geometric-optics, lenses" }
How does a piece of paper manage to pump out the water from a bowl
Question: When we go to bed at home, we started to put a bowl of water on the radiator (the air gets a bit dry). By instinct I put a soaked piece of paper (e.g. toilet paper) into the bowl and let it touch the radiator. The next morning all the water in the bowl was gone. My wife was not so sure that the paper actually had any effect. So I put out two bowls of water, one with the paper and one without. The next morning the bowl without the paper had all the water remaining, and the bowl with the paper was empty and the paper completely dry. paper _____ / | \ -----/---/ | \ ------ / \ \------/ \__ ------------------------- radiator What is the physical mechanism for this "paper pump"? By how much does the temperature difference between the water and the radiator overcome the gravity force? Answer: The paper absorbs water, and the adhesion energy is, per molecule, much stronger than the pull of gravity. You can make water climb up capillaries as far as the top of a tree from the bottom of the trunk, so it is not difficult to get the water to soak the paper against gravity. The paper has a large surface area, so it probably evaporated the water into the air. I don't believe it actually acts as a pump, like a siphon, to transfer water onto the surface of the radiator absent evaporation, because the water would have to detach at the radiator end in a continuous stream for this to work, setting up an actual steady-state material flow, but it is an interesting question.
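For scale, the capillary-rise formula h = 2γ·cosθ/(ρ·g·r) gives a rough estimate of how high the paper can pull the water. This sketch assumes water at room temperature and a guessed effective pore radius of ~10 µm for tissue paper:

```python
gamma = 0.072     # surface tension of water, N/m
rho = 1000.0      # density of water, kg/m^3
g = 9.81          # gravitational acceleration, m/s^2
r = 10e-6         # assumed effective pore radius of the paper, m
cos_theta = 1.0   # assume perfect wetting of cellulose by water

h = 2 * gamma * cos_theta / (rho * g * r)
print(f"capillary rise ~ {h:.2f} m")  # on the order of a metre
```

Even with this crude pore-size guess, the rise is far larger than the few centimetres from the water surface to the rim of the bowl, so gravity is no obstacle; evaporation from the warm, wet paper then removes the water.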
{ "domain": "physics.stackexchange", "id": 1964, "tags": "fluid-dynamics, home-experiment, capillary-action" }
Invariance and forms of the Lagrangian
Question: I have been reading the 1st chapter of Landau & Lifshitz Mechanics, and due to its concise style been facing a few problems. I hope you can help me out here somehow. Does the "homogeneity of space and time" essentially talk about the invariance of the Lagrangian with co-ordinate system or time invariance? Is this the basis of Landau's proof of the Law of Inertia? The Lagrangian for a closed system of particles is described by adding a function of the co-ordinates. This essentially means that the same trajectory satisfies the extremal principle. (I am a bit confused about this though). So why is the added function not a function of time too? On the subject of "the Lagrangian being defined only to within an additive total time derivative of any function of co-ordinates and time", is it true that the function can be of velocity too (though this would make the entire exercise useless, since we would know everything)? The disadvantage of trying to teach oneself physics without anyone to refer to is that I am not completely sure that what I am saying makes sense. Answer: 1. Yes, it is about invariance with respect to time and space translations: $t\to t+t_0$ and $\vec{r}\to\vec{r}+\vec{r}_0$ 2. I see no fundamental problem in introducing a time dependence of the potential energy. But it would mean that the way our particles interact with each other changes with time somehow. It seems to me that it is presupposed that the interaction cannot change in closed systems -- a rather natural assumption for me... 3. The problem with a function of velocity is that the new "lagrangian" will depend on the second derivative of the coordinates: $$L'(q,\dot{q},\ddot{q},t)=L(q,\dot{q},t)+\frac{d}{dt}f(q,\dot{q},t)$$ which contradicts the experimentally established statement that a system is fully described by its coordinates and velocities (beginning of chap. 1). ... trying to teach oneself... I'd strongly discourage studying mechanics by means of Landau only.
He is sometimes too fast -- there are subtleties that he skips over without comment. Also, in some places he uses non-standard terminology, which can be misleading when dealing with other sources. I'd recommend Mathematical Methods of Classical Mechanics by V.I. Arnold. Personally I find it to be a very nice complement to Landau.
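The $\ddot{q}$ dependence in point 3 is just the chain rule applied to $f(q,\dot{q},t)$: $$\frac{d}{dt}f(q,\dot{q},t)=\frac{\partial f}{\partial q}\dot{q}+\frac{\partial f}{\partial \dot{q}}\ddot{q}+\frac{\partial f}{\partial t}$$ so $L'$ picks up a $\ddot{q}$ term unless $\partial f/\partial\dot{q}=0$, i.e. unless $f=f(q,t)$ only.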
{ "domain": "physics.stackexchange", "id": 51958, "tags": "classical-mechanics, lagrangian-formalism, spacetime, invariants" }
Show that for a singly-connected graph the number of edges $E$ must be equal to the number of vertices minus $1$, $E=V-1$
Question: I am reading "Bayesian Reasoning and Machine Learning" by David Barber. I am not completely sure how to do question 19 on page 23: Show that for a connected graph that is singly-connected, the number of edges $E$ must be equal to the number of vertices minus $1$, $E=V-1$. Give an example graph with $E=V-1$ that is not singly connected. Definition (Singly-Connected Graph). A graph is singly-connected if there is only one path from a vertex a to another vertex b. Otherwise the graph is multiply-connected. This definition applies regardless of whether or not the edges in the graph are directed. An alternative name for a singly-connected graph is a tree. A multiply-connected graph is also called loopy. My approach to proving that $E=V-1$: Proof by induction: Let $P(n)$ be the statement that a singly-connected graph with $n$ vertices has $n-1$ edges. We prove the base case, $P(1)$: For a graph $G$ with $1$ vertex, it is clear that there are $0$ edges. (**Question 1:** Is this correct though? Why can't there be $1$ or even $2$ edges such that this one vertex connects to itself with $1$ or $2$ paths respectively?) We now prove the case for $P(n+1)$: Suppose for the sake of induction that $P(n)$ is true. Let $G$ be a singly-connected graph with $n$ vertices and hence $n-1$ edges. Then if we add a vertex to $G$ with one edge connecting it to any of the vertices of $G$, we have a new graph $G'$ which has $n+1$ vertices and $n$ edges. Hence we have shown that $P(n)\implies P(n+1)$ and hence it is true for all $n\in \mathbb{N}$. Question 2: Would this proof be correct? Question 3: I can't think of a graph which is not singly connected but has $E=V-1$ edges. What would be some examples? Answer: For your question 1, as already noted by rici, you cannot have loops (edges connecting a vertex to itself) in singly-connected graphs (which are often also called trees). In most graph textbook definitions in my experience as well, loops are not allowed.
For your question 2, also as noted by rici, your approach constructs a specific instance of P(n+1), so you do not prove that the claim works for all instances of P(n+1). In order to prove that it holds for any instance, you are right that you can work by induction, but you must work in reverse: consider a tree with n+1 vertices. You must extract a tree with n vertices from it. The property holds for it because you have proven that it holds for all trees with n vertices. Then you must use the relationship between the two to show that the wanted property holds for the larger object. In this case I would go like this: consider a tree with n+1 vertices. It's easy to show that it has at least one vertex with degree 1. Remove this vertex. You now have a tree with n nodes, and therefore with n-1 edges (by the induction hypothesis). Your original tree has one more vertex and one more edge, so it must have n+1 vertices and n edges, so the property holds for trees with n+1 vertices (notice that I did not make any assumption on the tree with n+1 vertices, so the proof is valid for any such tree).
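The leaf-removal argument can be sanity-checked in code. A sketch in which `random_tree` is an ad-hoc helper (not from the book) that grows a tree by attaching each new vertex to a random earlier one; leaves are then peeled off one at a time, checking E = V - 1 at every step:

```python
import random

def random_tree(n, seed=0):
    # Build a tree by attaching each new vertex to a random earlier one
    rng = random.Random(seed)
    adj = {0: set()}
    for v in range(1, n):
        u = rng.randrange(v)
        adj[v] = {u}
        adj[u].add(v)
    return adj

def edge_count(adj):
    # Each undirected edge is counted once from each endpoint
    return sum(len(nbrs) for nbrs in adj.values()) // 2

adj = random_tree(10)
# Peel off degree-1 vertices one at a time, as in the reverse-induction proof:
while len(adj) > 1:
    assert edge_count(adj) == len(adj) - 1  # E = V - 1 at every step
    leaf = next(v for v, nbrs in adj.items() if len(nbrs) == 1)
    (parent,) = adj[leaf]
    adj[parent].discard(leaf)
    del adj[leaf]
print("E = V - 1 held at every step down to a single vertex")
```

This mirrors the proof: every tree with at least two vertices has a leaf, and removing it leaves a smaller tree with one fewer vertex and one fewer edge.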
{ "domain": "cs.stackexchange", "id": 18894, "tags": "graphs, induction" }
Creating filter pipelines for OpenCV
Question: Hello ROS fans, I'd like to use or develop a simple pipeline interface for sticking together various OpenCV filters. As far as I can tell, the only package that currently works like this is ecto but it is a little more than I need right now so I was looking for a simple alternative. I came across a simple method using Python that seems to get the job done. For example, the following code glues together a grey_scale filter followed by Gaussian smoothing and histogram equalization. The functions grey_scale, blur and equalize are defined in terms of standard OpenCV functions:

pipeline = create_pipeline((grey_scale,), (blur, cv.CV_GAUSSIAN, 15, 0, 7.0), (equalize,))

where

def create_pipeline(*filters):
    def pipeline(frame):
        piped_frame = frame
        for filter in filters:
            piped_frame = filter[0](piped_frame, *filter[1:])
        return piped_frame
    return pipeline

I don't want to reinvent any wheels if someone has already developed something similar. Is there anything like this out there other than ecto? Thanks! patrick Originally posted by Pi Robot on ROS Answers with karma: 4046 on 2012-02-25 Post score: 0 Answer: http://simplecv.org/ and ecto would be your best bets I think Originally posted by Vincent Rabaud with karma: 1111 on 2012-07-23 This answer was ACCEPTED on the original site Post score: 1 Original comments Comment by Pi Robot on 2012-07-23: Thanks @Vincent. I'm very impressed with Ecto to the extent that I have played with it. I have also taken a quick look at simplecv and that looks nice too.
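The pipeline idea can be tried without OpenCV at all. Here is a self-contained sketch in which the `double` and `add` "filters" are stand-ins invented for illustration, operating on a plain list instead of an image frame:

```python
def create_pipeline(*filters):
    # Each filter is a tuple: (function, *extra_args)
    def pipeline(frame):
        for f in filters:
            frame = f[0](frame, *f[1:])
        return frame
    return pipeline

# Stand-in "filters" operating on a list of ints instead of an image:
def double(frame):
    return [2 * px for px in frame]

def add(frame, k):
    return [px + k for px in frame]

pipe = create_pipeline((double,), (add, 1))
print(pipe([1, 2, 3]))  # -> [3, 5, 7]
```

Swapping the stand-ins for real OpenCV calls gives exactly the structure in the question: each stage receives the previous stage's output plus its own fixed arguments.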
{ "domain": "robotics.stackexchange", "id": 8384, "tags": "opencv" }
Bruteforce MD5 Password cracker
Question: I just started learning Go, and I wanted to create a project to learn more about concurrency in Go. I heard about Go's lightweight threads, so I wanted to give them a try. This program uses backtracking to brute-force a list of passwords loaded from a file. It starts with passwords of length 2 and goes on until all passwords have been found. It works well until the password length reaches 6: then my RAM fills up. I've already optimized my code in some ways, e.g. in the first iteration of the program I used to create a chan for every thread, and every thread would wait for the spawned thread to terminate. Now it has a barrier. I would like some suggestions about my code and tips for reducing its memory usage.

package main

import (
    "fmt"
    "io/ioutil"
    "strings"
    "sync"
    "log"
    "time"
    "crypto/md5"
)

var alfabeto = []rune{'a', 'b', 'c', 'd', 'e', 'f', 'g', 'h', 'i', 'j', 'k', 'l', 'm', 'n', 'o', 'p', 'q', 'r', 's', 't', 'u', 'v', 'w', 'x', 'y', 'z', '0', '1', '2', '3', '4', '5', '6', '7', '8', '9'}

func compute(prefix int, n int, a string, wgFather *sync.WaitGroup) {
    defer wgFather.Done()
    if prefix == n-1 {
        for _, d := range alfabeto {
            password := fmt.Sprintf("%s%c", a, d)
            if searchPassword(password) {
                if len(passwords) == 0 {
                    return
                }
            }
        }
    } else {
        for i := range alfabeto {
            wgFather.Add(1)
            go compute(prefix+1, n, fmt.Sprintf("%s%c", a, alfabeto[i]), wgFather)
        }
    }
}

var passwords []string

func main() {
    if loadPasswords() {
        return
    }
    fmt.Println("File with passwords loaded. We're gonna crack", len(passwords), "passwords!")
    start := time.Now()
    cont := 2
    for len(passwords) > 0 {
        fmt.Println("Searching for passwords at length: ", cont)
        var wg sync.WaitGroup
        wg.Add(1)
        go compute(0, cont, "", &wg)
        wg.Wait()
        cont++
    }
    elapsed := time.Since(start)
    fmt.Println("Password's file cracked in:", elapsed)
}

func searchPassword(pass string) bool {
    hash := fmt.Sprintf("%x", md5.Sum([]byte(pass)))
    for i, value := range passwords {
        if strings.Compare(hash, value) == 0 { // Password found!
            fmt.Println("Find Password:", pass, " with hash:", hash)
            passwords = append(passwords[:i], passwords[i+1:]...)
            return true
        }
    }
    return false
}

func loadPasswords() bool {
    stream, err := ioutil.ReadFile("file.txt")
    if err != nil {
        log.Fatal(err)
        return true
    }
    readstring := string(stream)
    passwords = strings.Split(readstring, "\n")
    return false
}

This is my password's file, file.txt:

56cc213a6303180cbab6a3da15108751
b1c9b44a9a0a65615f21834aee53594b
4b855fcca0a7140f60dd8259d1a0f1ad
14db43821fb74030ac6e8bdf662646d5
93eea5a9dfb14219e8e9d51ab1ae2b82
b0804ec967f48520697662a204f5fe72
ab6d50d5a9ecafd6fd429d38877837ca
168908dd3227b8358eababa07fcaf091
3cf4046014cbdfaa7ea8e6904ab04608
02c77002a0c646684b3325959fe147b2
f38fef4c0e4988792723c29a0bd3ca98
f3abb86bd34cf4d52698f14c0da1dc60
e842795b282293fd61bc294c49edb12b
c4fdb9019bcca7e82296952ba3e1895b
8dc01b0de0431cb7eced92277c1f04c7
bbebde933d57f88406bc530e5df0df3a
44fd79ea712293e5b5a7b51aceb6c0a7
56eb473ffd7429b00eb136c80664be30
875a8ec1acd2fa9f02ca152974dfd904
23ba6002aa3583a61db26e957b1fbe43
dcaa9fd4f23aaf0c29f540becf35b46f
98f740d68822f4209674ca9f23c20abe
a079350a0e30d9f293f6acaea80bb015
c44ab68f3a5ef32c8dfbbaa1daa86f98
6057b96acf9d41c1ca26a8923d970404
daa10b9d19015cf1cbf4bb53cf135b61
b29533fb6f81a9dbf8eae44b05ce8f49
22b35da29d5fa740fab4cb83ccb820aa
11cff46e84b6cae9951ea65eb5716d9e
9dd8ecec47e0c96bb189038fdb35bf16
e03a5f28b2349d6735adf6f528a7f18c
6057b96acf9d41c1ca26a8923d970404

Answer: First let's mention a bug / issue: Your passwords variable is accessed (read and modified) from multiple goroutines without any synchronization: this is a race condition! Primary problem: Your memory problem arises from launching a tremendous number of goroutines. For example, when you call compute() with n = 6 (to try passwords with a length of 6), it will create 36^5 = pow(36, 5) = 60466176 goroutines (36 is the length of your alphabet; 5 is the prefix value for the condition prefix == n-1 at which compute() stops spawning new goroutines). 60 million goroutines!
Goroutines are lightweight, but not that lightweight! Even if managing 1 goroutine cost only 1 KB of memory (it has its own stack etc.), this would require 60 GB of memory! Understandably, you run out of it. Your code spawns goroutines at a much quicker rate than they complete. (It should be noted that nothing in your code prevents spawning these new goroutines before any have completed, so this is kind of a worst case, but still...) An easy fix! The good news is that there is a really easy fix to this tremendous number of goroutines: simply do not spawn so many. A trivial way to limit goroutine spawning is to bind it to a condition that keeps the numbers at bay. For example, launch 36 goroutines to process passwords starting with the different letters, but after that let 1 goroutine try all the combinations with that starting letter. We can test this "first letter" condition by comparing prefix to 0:

for i := range alfabeto {
    wgFather.Add(1)
    if prefix == 0 {
        go compute(prefix+1, n, fmt.Sprintf("%s%c", a, alfabeto[i]), wgFather)
    } else {
        compute(prefix+1, n, fmt.Sprintf("%s%c", a, alfabeto[i]), wgFather)
    }
}

By inserting this condition, with the else branch just calling compute() on the same goroutine, you keep your goroutine count and memory usage at bay! And you still utilize multiple goroutines and multiple CPU cores. There is a minor "downside": you have no control over how these 36 goroutines finish relative to each other; there may be a "relaxed" period when only 1 or 2 goroutines are still running and the others have finished, and during this relaxed period CPU utilization will not be 100%. More formally, CPU utilization will be < 100% whenever the number of running goroutines is less than the number of CPU cores. Optimization tips: Here are some optimization tips: You do unnecessary round-trips: you build potential passwords as string values, and when your searchPassword() function computes their MD5, it has to convert them back to []byte. It would be best to build the password in a []byte in the first place.
Go stores strings as UTF-8 encoded sequences (see the blog post Strings, bytes, runes and characters in Go for details), and all your alphabet letters map to bytes one-to-one in UTF-8 encoding, so you could just use their byte values for faster building. In your searchPassword(), when you have the MD5 of a potential password, you always iterate over all crackable MD5 strings and compare against each of them. This is a waste: you could sort the crackable MD5 values and use binary search to find whether the candidate is among them (sorting and binary search are implemented in the sort package). Or even better: you could build a map from the crackable MD5 strings, and just check if the candidate is in the map (this check has O(1) complexity instead of the O(log n) of binary search). It is also an unnecessary round-trip to convert a calculated MD5 to a string in order to check if it is a crackable one. It would be best to convert the crackable MD5 values to byte arrays (note: arrays, not slices), and when you have the MD5 of a potential password as an array, you can check if it is a crackable one without converting it to a string. Arrays are comparable in Go (unlike slices!), so you could also build a map keyed by the MD5 arrays to check if a potential MD5 is in the map. Also note that your algorithm generates passwords multiple times. For example, if you want to check passwords with a length of 3, these 3-letter passwords are essentially all the 2-letter passwords plus one letter. But you don't make use of this; you always generate all passwords of a given length from scratch. Utilizing these tips would speed up your algorithm big time; not just because we got rid of lots of needless computation / conversion, but also because much less "garbage" will be generated for the GC. Alternative: An alternative way of implementing this brute-force cracker would be to use the producer-consumer pattern.
You could have a designated producer goroutine that generates the possible passwords and sends them on a channel. You could have a fixed pool of consumer goroutines (e.g. 5 of them) which loop over the channel on which generated passwords are delivered, each doing the same thing: receive passwords, hash them (MD5) and check if the result matches a crackable one. The producer goroutine can simply close the channel when all combinations have been generated, properly signalling the consumers that no more passwords will be coming. The for ... range construct on a channel handles the "close" event and terminates properly. This results in a clean design with a fixed number of goroutines, and it always utilizes 100% CPU. It also has the advantage that it can be "throttled" with the proper selection of the channel capacity (buffered channel) and the number of consumer goroutines. Here is how this producer-consumer could look in Go if someone wants to play with it (also note that I elaborated on this with full examples and a much deeper explanation in the StackOverflow question Is this an idiomatic worker thread pool in Go?):

var wg sync.WaitGroup

func produce(ch chan<- []byte) {
    // Now generate all passwords:
    for {
        if noMore { // If no more passwords
            close(ch)
            break
        }
        pass := ... // Here generate next password
        ch <- pass  // send it for processing
    }
}

func consume(ch <-chan []byte) {
    defer wg.Done()
    for pass := range ch {
        // Hash, check
    }
}

func main() {
    ch := make(chan []byte, 100) // Buffered channel
    // Start consumers:
    for i := 0; i < 5; i++ { // 5 consumers
        wg.Add(1)
        go consume(ch)
    }
    // Start producing: we can run this in the main goroutine
    produce(ch)
    wg.Wait() // Wait for all consumers to finish processing passwords
}

This blog post is an excellent introduction to parallel computation in Go using goroutines and channels: Go Concurrency Patterns: Pipelines and cancellation. Further optimization tip: If you go with this producer-consumer goroutine model, another optimization becomes available. The md5.Sum() function (which takes a []byte and returns the MD5 checksum of its content) always creates a new, internal md5.digest value which is used to do the MD5 hashing and is then discarded. With a small, fixed pool of consumer goroutines, it becomes possible (and profitable) to create a designated MD5 hasher for each of them with the md5.New() function. To what end? We can use the returned hasher (which is of type hash.Hash) to compute MD5 hashes, and what's cool is that we can reuse it to compute the hashes of multiple byte slices. hash.Hash implements io.Writer, so we can write into it any []byte of which we want the MD5 hash, and its Sum() method appends the calculated hash to a slice we pass in, giving us the option of not allocating a new array for every result. We can create prepared arrays ([16]byte) and slice them to obtain the []byte values, one per consumer goroutine. As a result, we further suppress "memory garbage" generation and reduce GC work. Once we have the MD5 sum of a password, we simply call Hash.Reset() to re-initialize the hasher for the next password.
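The map-lookup tip above can be sketched compactly (shown in Python rather than Go, just to illustrate the shape; the target hash is md5("abc"), a standard RFC 1321 test vector, not one of the question's hashes):

```python
import hashlib

# One crackable hash in hex form, as it would be read from file.txt;
# this particular value is md5("abc"), used purely for illustration.
targets_hex = ["900150983cd24fb0d6963f7d28e17f72"]

# Precompute the raw 16-byte digests once: each candidate can then be
# checked with a single O(1) set lookup and no hex round-trip.
target_digests = {bytes.fromhex(h) for h in targets_hex}

candidate = b"abc"
if hashlib.md5(candidate).digest() in target_digests:
    print("cracked:", candidate.decode())
```

In Go the same idea uses a `map[[16]byte]bool` keyed by the `md5.Sum` result, since arrays (unlike slices) are comparable and can serve as map keys.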
{ "domain": "codereview.stackexchange", "id": 17905, "tags": "go, memory-optimization" }
Pong-like game built in Love2D
Question: I just started learning programming in the past two or three months. I started with Python, and now I'm learning Lua. I started game programming this week and this is a game I made with the Love2D engine for Lua. If you don't know how it works, there are three main functions: love.load() -- Runs at game startup and doesn't run again unless explicitly called love.update() -- Updates values in the game and runs every tick. It uses delta time as an argument for consistent speed on every machine love.draw() -- Draws the updates to the screen, however you choose for them to be displayed. love.update() and love.draw() alternate with each other; love.update() always runs first and then love.draw(). This is my first game, a Pong-like game where you destroy blocks with a ball bounced off of a paddle-like player. I would like less criticism on the game itself, and more on the structure of the code. Please let me know if you see anywhere I could make a function, or achieve something with less code.
function love.load() -- WINDOW SETUP love.window.setTitle("Block Buster") height = love.graphics.getHeight() width = love.graphics.getWidth() -- SOUND SOURCES hit = love.audio.newSource("hit.mp3") bounce = love.audio.newSource("bounce.mp3") loss = love.audio.newSource("loss.mp3") -- PLAYER SETUP player = {} function player.load() player.width = 70 player.height = 20 player.x = width/2 - player.width/2 player.y = height - player.height player.speed = 400 player.lives = 5 player.points = 0 end player.load() -- BLOCKS blocks = {} blocks.draw = {} -- LOAD BLOCKS function blocks.load() column = 0; row = 1 while 5 >= row do block = {} block.width = width/10 - 5 block.height = 20 block.x = column * (block.width + 5) block.y = row * (block.height + 5) table.insert(blocks.draw, block) column = column + 1 if column == 10 then column = 0; row = row + 1 end end end blocks.load() -- BALL ball = {} function ball.load() ball.radius = 5 ball.x = width/2 ball.y = player.y - 200 ball.speed = 200 ball.direction = "d" ball.cooldown = 200 end ball.load() -- CHECK TOP FOR BOUNCE function topbounce() if ball.direction == "ull" then ball.direction = "dll" elseif ball.direction == "ul" then ball.direction = "dl" elseif ball.direction == "uul" then ball.direction = "ddl" elseif ball.direction == "u" then ball.direction = "d" elseif ball.direction == "uur" then ball.direction = "ddr" elseif ball.direction == "ur" then ball.direction = "dr" elseif ball.direction == "urr" then ball.direction = "drr" end end end ------ UPDATE ------ function love.update(dt) if ball.cooldown > 0 then ball.cooldown = ball.cooldown - 1 end -- Player movement if love.keyboard.isDown("right") and player.x <= (width - player.width) then player.x = player.x + (dt * player.speed) elseif love.keyboard.isDown("left") and player.x >= 0 then player.x = player.x - (dt * player.speed) elseif love.keyboard.isDown("r") then ball.load() end -- Hitbox for player if ball.y >= player.y and ball.y <= height and ball.x >= player.x 
and ball.x <= (player.x + player.width) then if ball.x >= player.x and ball.x < (player.x + 10) then ball.direction = "ull" elseif ball.x >= (player.x + 10) and ball.x < (player.x + 20) then ball.direction = "ul" elseif ball.x >= (player.x + 20) and ball.x < (player.x + 30) then ball.direction = "uul" elseif ball.x >= (player.x + 30) and ball.x < (player.x + 40) then ball.direction = "u" elseif ball.x >= (player.x + 40) and ball.x < (player.x + 50) then ball.direction = "uur" elseif ball.x >= (player.x + 50) and ball.x < (player.x + 60) then ball.direction = "ur" elseif ball.x >= (player.x + 60) and ball.x < (player.x + 70) then ball.direction = "urr" end love.audio.play(bounce) end -- Hitbox for blocks for i,v in ipairs(blocks.draw) do if ball.y <= (v.y + v.height) and ball.y >= v.y then if ball.x <= (v.x + v.width) and ball.x >= v.x then topbounce() love.audio.play(hit) table.remove(blocks.draw, i) player.points = player.points + 1 end end end -- Bounces ball off walls if (ball.x <= 0) or (ball.x >= width) then if ball.direction == "uur" then ball.direction = "uul" elseif ball.direction == "ur" then ball.direction = "ul" elseif ball.direction == "urr" then ball.direction = "ull" elseif ball.direction == "drr" then ball.direction = "dll" elseif ball.direction == "dr" then ball.direction = "dl" elseif ball.direction == "ddr" then ball.direction = "ddl" elseif ball.direction == "ddl" then ball.direction = "ddr" elseif ball.direction == "dl" then ball.direction = "dr" elseif ball.direction == "dll" then ball.direction = "drr" elseif ball.direction == "ull" then ball.direction = "urr" elseif ball.direction == "ul" then ball.direction = "ur" elseif ball.direction == "uul" then ball.direction = "uur" end love.audio.play(bounce) end -- Bounce ball off ceiling if ball.y <= 0 then topbounce() end -- Move ball if ball.cooldown == 0 then if ball.direction == "u" then ball.y = ball.y - 2 * (dt * ball.speed) elseif ball.direction == "uur" then ball.y = ball.y - 2 * (dt * 
ball.speed) ball.x = ball.x + 1 * (dt * ball.speed) elseif ball.direction == "ur" then ball.y = ball.y - 2 * (dt * ball.speed) ball.x = ball.x + 2 * (dt * ball.speed) elseif ball.direction == "urr" then ball.y = ball.y - 1 * (dt * ball.speed) ball.x = ball.x + 2 * (dt * ball.speed) elseif ball.direction == "drr" then ball.y = ball.y + 1 * (dt * ball.speed) ball.x = ball.x + 2 * (dt * ball.speed) elseif ball.direction == "dr" then ball.y = ball.y + 2 * (dt * ball.speed) ball.x = ball.x + 2 * (dt * ball.speed) elseif ball.direction == "ddr" then ball.y = ball.y + 2 * (dt * ball.speed) ball.x = ball.x + 1 * (dt * ball.speed) elseif ball.direction == "d" then ball.y = ball.y + 2 * (dt * ball.speed) elseif ball.direction == "ddl" then ball.y = ball.y + 2 * (dt * ball.speed) ball.x = ball.x - 1 * (dt * ball.speed) elseif ball.direction == "dl" then ball.y = ball.y + 2 * (dt * ball.speed) ball.x = ball.x - 2 * (dt * ball.speed) elseif ball.direction == "dll" then ball.y = ball.y + 1 * (dt * ball.speed) ball.x = ball.x - 2 * (dt * ball.speed) elseif ball.direction == "ull" then ball.y = ball.y - 1 * (dt * ball.speed) ball.x = ball.x - 2 * (dt * ball.speed) elseif ball.direction == "ul" then ball.y = ball.y - 2 * (dt * ball.speed) ball.x = ball.x - 2 * (dt * ball.speed) elseif ball.direction == "uul" then ball.y = ball.y - 2 * (dt * ball.speed) ball.x = ball.x - 1 * (dt * ball.speed) end end if ball.y >= height then love.audio.play(loss) player.lives = player.lives - 1; ball.load() end if player.lives < 0 then love.graphics.print("GAME OVER", width/2, height/2) love.load() end end ------ DRAW ------ function love.draw() -- Cooldown if ball.cooldown > 0 then love.graphics.print("Get ready!", width/2, height/2) end -- Points/Lives love.graphics.print("Lives: " .. player.lives, 10, height/3) love.graphics.print("Points: " .. 
player.points, 10, height/3 + 20) -- Draw player love.graphics.setColor(255, 255, 255) love.graphics.rectangle("fill", player.x, player.y, player.width, player.height - 10) -- Draw blocks love.graphics.setColor(255, 0, 0) iter = 0 for _,v in pairs(blocks.draw) do love.graphics.rectangle("fill", v.x, v.y, v.width, v.height) end -- Draw ball love.graphics.setColor(255, 255, 255) love.graphics.circle("fill", ball.x, ball.y, ball.radius) end Answer: 1) Instead of this: if ball.direction == "uur" then ball.direction = "uul" elseif ball.direction == "ur" then ball.direction = "ul" elseif ball.direction == "urr" then ball.direction = "ull" elseif ball.direction == "drr" then ball.direction = "dll" elseif ball.direction == "dr" then ball.direction = "dl" elseif ball.direction == "ddr" then ball.direction = "ddl" elseif ball.direction == "ddl" then ball.direction = "ddr" elseif ball.direction == "dl" then ball.direction = "dr" elseif ball.direction == "dll" then ball.direction = "drr" elseif ball.direction == "ull" then ball.direction = "urr" elseif ball.direction == "ul" then ball.direction = "ur" elseif ball.direction == "uul" then ball.direction = "uur" You may want to use a table with all the values. local WALL_BOUNCE = { uur = "uul", ur = "ul" -- etc... 
} And in your function just use: ball.direction = WALL_BOUNCE[ball.direction] 2) Another point: if ball.direction == "u" then ball.y = ball.y - 2 * (dt * ball.speed) elseif ball.direction == "uur" then ball.y = ball.y - 2 * (dt * ball.speed) ball.x = ball.x + 1 * (dt * ball.speed) elseif ball.direction == "ur" then ball.y = ball.y - 2 * (dt * ball.speed) ball.x = ball.x + 2 * (dt * ball.speed) elseif ball.direction == "urr" then ball.y = ball.y - 1 * (dt * ball.speed) ball.x = ball.x + 2 * (dt * ball.speed) elseif ball.direction == "drr" then ball.y = ball.y + 1 * (dt * ball.speed) ball.x = ball.x + 2 * (dt * ball.speed) elseif ball.direction == "dr" then ball.y = ball.y + 2 * (dt * ball.speed) ball.x = ball.x + 2 * (dt * ball.speed) elseif ball.direction == "ddr" then ball.y = ball.y + 2 * (dt * ball.speed) ball.x = ball.x + 1 * (dt * ball.speed) elseif ball.direction == "d" then ball.y = ball.y + 2 * (dt * ball.speed) elseif ball.direction == "ddl" then ball.y = ball.y + 2 * (dt * ball.speed) ball.x = ball.x - 1 * (dt * ball.speed) elseif ball.direction == "dl" then ball.y = ball.y + 2 * (dt * ball.speed) ball.x = ball.x - 2 * (dt * ball.speed) elseif ball.direction == "dll" then ball.y = ball.y + 1 * (dt * ball.speed) ball.x = ball.x - 2 * (dt * ball.speed) elseif ball.direction == "ull" then ball.y = ball.y - 1 * (dt * ball.speed) ball.x = ball.x - 2 * (dt * ball.speed) elseif ball.direction == "ul" then ball.y = ball.y - 2 * (dt * ball.speed) ball.x = ball.x - 2 * (dt * ball.speed) elseif ball.direction == "uul" then ball.y = ball.y - 2 * (dt * ball.speed) ball.x = ball.x - 1 * (dt * ball.speed) end You can use a table here too, it will have a format local VELOCITY = { u = {0, -2}, uur = {1, -2} -- etc... -- direction = {velocity_x, velocity_y} } local velocity = VELOCITY[ball.direction] ball.x = ball.x + velocity[1] * dt * ball.speed ball.y = ball.y + velocity[2] * dt * ball.speed 3) Use local variables wherever possible. 
However, for a simple project like this one, this is not obligatory.
{ "domain": "codereview.stackexchange", "id": 15109, "tags": "game, lua" }
GNU Radio filter normalization
Question: I have noticed that all filter taps from the firdes class are normalized by dividing them by the sum of the tap magnitudes, as shown in the example below for the Gaussian filter. Is this normalization meant to keep the filter unit energy (or power)? And if that is the case, what is the mathematical justification for doing so in this way? vector<float> firdes::gaussian(double gain, double spb, double bt, int ntaps) { vector<float> taps(ntaps); double scale = 0; double dt = 1.0 / spb; double s = 1.0 / (sqrt(log(2.0)) / (2 * GR_M_PI * bt)); double t0 = -0.5 * ntaps; double ts; for (int i = 0; i < ntaps; i++) { t0++; ts = s * dt * t0; taps[i] = exp(-0.5 * ts * ts); scale += taps[i]; } for (int i = 0; i < ntaps; i++) taps[i] = taps[i] / scale * gain; return taps; } Answer: I have noticed that all filter taps from the firdes class are normalized by dividing them by the sum of the tap magnitudes Not strictly: for (int i = 0; i < ntaps; i++) taps[i] = taps[i] / scale * gain; is not summing over the magnitudes, but over the taps themselves. (It makes no difference here, since all these taps are positive real numbers, but for other filters it would.) So, @AndyWalls is right, this is the DC gain, and you're normalizing this low-pass filter to have gain 1 at 0 Hz. Think about this for a moment: when you feed in a constant stream of "1"s, what value do you want to get?
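To see the normalization concretely, here is a small Python port of the routine above (a sketch for illustration, not GNU Radio's actual Python API): after dividing by the running sum `scale`, the taps sum to exactly `gain`, so the DC gain — the steady-state response to a constant input — equals `gain`.

```python
import math

# Sketch: direct Python port of firdes::gaussian from the question.
# The second loop divides by `scale` (the sum of the raw taps), so the
# normalized taps sum to `gain` and the filter has DC gain `gain`.
def gaussian_taps(gain, spb, bt, ntaps):
    dt = 1.0 / spb
    s = 1.0 / (math.sqrt(math.log(2.0)) / (2 * math.pi * bt))
    t0 = -0.5 * ntaps
    taps, scale = [], 0.0
    for _ in range(ntaps):
        t0 += 1
        ts = s * dt * t0
        taps.append(math.exp(-0.5 * ts * ts))
        scale += taps[-1]
    return [t / scale * gain for t in taps]

taps = gaussian_taps(gain=1.0, spb=4.0, bt=0.35, ntaps=21)
dc_gain = sum(taps)  # equals `gain`: a constant stream of 1s passes unchanged
```

With `gain = 1.0`, feeding in a constant stream of ones yields ones out, which is exactly the "what value do you want to get?" point in the answer.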
{ "domain": "dsp.stackexchange", "id": 10411, "tags": "filter-design, digital-filters, gnuradio" }
Is this HW question even valid? (linear systems)
Question: The question is: Are the following systems linear time invariant? $y(t)=x(t)$, with $x(t)=u(t)$ (unit step function) $y[n]=x[n]$, with $x[n]=\delta[n]$ (unit impulse) The reason I am asking is because I don't see the relevance of the values given to the input functions. LTI is a property of the system, not its inputs, so I don't understand what is meant! Is it a restriction? In that case, how would I check for linearity? One probably can't, right? Answer: You are absolutely right: the data you are given say nothing on their own about whether the system concerned is linear or time shift invariant. To check that a system is linear, you need to know its behaviour for sums of inputs and for multiples of inputs, so you can verify that the response to a sum (or multiple) of inputs is the corresponding sum (or multiple) of the individual responses. You only have the behaviour for one input $x(t) = u(t)$. You need to be able to say that $x(t)\mapsto A\,x(t)\Rightarrow y(t)\mapsto A\,y(t)\,\forall\,A\in\mathbb{R}$ and also that if $x_1(t),\,x_2(t)$ lead to outputs $y_1(t),\,y_2(t)$, then input $x_1(t)+x_2(t)$ gives rise to output $y_1(t)+y_2(t)$ for any $x(t),\,x_1(t),\,x_2(t)$ in a suitable space of functions. To check time shift invariance, you need to know how the system behaves when we input $x(t-t_0)$ instead of $x(t)$, to check whether the output is the same but shifted by the amount $t_0$. You only know the behaviour for one value of the delay $t_0$, that is $t_0=0$. You need to be able to say that input $x(t)$ gives rise to output $y(t)$ implies that $x(t-t_0)$ gives rise to output $y(t-t_0)$ for any constant $t_0\in\mathbb{R}$. So you can say nothing about the linearity or the time shift invariance of this system.
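The point that one input/output pair cannot settle linearity can be made concrete with a small sketch: two hypothetical discrete-time systems that agree on the single given input $x[n]=u[n]$ but respond differently to a scaled input.

```python
# Two hypothetical discrete-time systems that agree on the one given
# input x[n] = u[n] (the unit step) yet are very different systems:
# T1 is the identity (linear, time invariant); T2 ignores its input and
# always outputs the step (not linear). One input/output pair cannot
# tell them apart, so it cannot establish linearity.
step = lambda n: 1.0 if n >= 0 else 0.0

def T1(x):
    return x        # y[n] = x[n]

def T2(x):
    return step     # y[n] = u[n], whatever x is

ns = range(-3, 4)

# On the given input, the two systems produce identical outputs...
same = all(T1(step)(n) == T2(step)(n) for n in ns)

# ...but the scaled input 2*u[n] exposes that T2 violates homogeneity.
x2 = lambda n: 2.0 * step(n)
differ = any(T1(x2)(n) != T2(x2)(n) for n in ns)
```

Since both systems reproduce the given data, the data alone cannot decide which system you have — mirroring the answer's conclusion.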
{ "domain": "dsp.stackexchange", "id": 2218, "tags": "linear-systems" }
What is the purpose of base_footprint?
Question: What purpose does the base_footprint link serve? I'm reading through the Gazebo simulation tutorial, and it instructs the user to create an infinitesimally small nearly weightless box which is linked to the base_link, but it doesn't explain what this is or what it's used for, nor does it link to any resources that do. I did see this similar question, but that answer doesn't really explain why such a link is used. Why does any part of ROS need to know how the model projects onto the ground? Does every Gazebo model require this link? Edit: I've noticed the joint linking the footprint to the base_link has an origin whose z component represents a fixed distance from the ground. What value do you use for a legged model that doesn't have a fixed distance from the ground? Originally posted by Cerin on ROS Answers with karma: 940 on 2015-04-27 Post score: 4 Answer: In addition to some empirical answers, there's a REP-120 that defines base_footprint link. Originally posted by 130s with karma: 10937 on 2015-05-01 This answer was ACCEPTED on the original site Post score: 6
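Concretely, REP-120 describes base_footprint as the representation of the robot's position on the floor, typically a link with no inertia or collision geometry of its own. A minimal URDF sketch of the common pattern — the link and joint names follow convention, and the 0.1 m offset is a made-up value for a base sitting 10 cm above the ground (which is also why a fixed URDF joint does not fit legged robots: there the transform is published dynamically instead):

```xml
<!-- Hypothetical fragment: base_footprint as a massless, geometry-less
     link fixed below base_link, at ground level. -->
<link name="base_footprint"/>

<joint name="base_footprint_joint" type="fixed">
  <parent link="base_footprint"/>
  <child link="base_link"/>
  <!-- Example offset: base_link origin 0.1 m above the ground. -->
  <origin xyz="0 0 0.1" rpy="0 0 0"/>
</joint>
```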
{ "domain": "robotics.stackexchange", "id": 21561, "tags": "ros, gazebo, urdf, base-footprint" }
Formatter class
Question: In our production code, we cannot use Boost or C++0x. Formatting strings using sprintf or stringstream is annoying in this case, and this prompted me to write my own little Formatter class. I am curious if the implementation of this class or the use of it introduces any Undefined Behavior. In particular, is this line fully-defined: Reject( Formatter() << "Error Recieved" << 42 << " " << some_code << " '" << some_msg << "'"); My belief is that it is OK, but I wanted peer-review. Three main points of concern: Is there a double-assignment within a single sequence point? Is it UB? Do I have a problem with the lifetime of temporaries? Does my Formatter class (or the intended use of it) introduce any UB? The Formatter class has both a (templatized) operator<< and an operator std::string. The intent is to use the Formatter() class as a temporary in place of a std::string parameter for any function taking a const std::string&. Here is the class definition: class Formatter { public: Formatter() {}; template<class Field> Formatter& operator<<(Field f) { ss_ << f; return *this; } operator std::string() const { return ss_.str(); } private: std::stringstream ss_; }; And here is a complete test harness, including the above definition. You should be able to compile & run as-is. Do you see any UB? #include <cstdlib> #include <string> #include <sstream> #include <iostream> class Formatter { public: Formatter() {}; template<class Field> Formatter& operator<<(Field f) { ss_ << f; return *this; } operator std::string() const { return ss_.str(); } private: std::stringstream ss_; }; void Reject(const std::string& msg) { std::cout << "Recieved Message: '" << msg << "'" << std::endl; } int main() { const char& some_code = 'A'; const char* some_msg = "Something"; Reject( Formatter() << "Error Recieved" << 42 << " " << some_code << " '" << some_msg << "'"); } Answer: In addition to what's already been said, I would: Mark the stringstream as public. 
This won't affect most uses of your code, and can already be hacked around with a custom manipulator to get at the "internal" stream object, but it will enable those that need to access the internal stream (such as to avoid the string copy inherent in the stringstream interface), and know the specifics of their implementation that allow what they want, to do so. Of course, 0x move semantics allay much of this need, but are still Not Quite Here Yet™. Check the stream before returning the string; if it's in a failed state, throw an exception (or at least log the condition somewhere before returning a string). This is unlikely to occur for most uses, but if it does happen, you'll be glad you found out the stream is failed rather than screw with formatting while wondering why "it just won't work". Regarding double-assignment, there's no assignment at all. The sequence points should be mostly what people expect, but, to be precise, it looks like: some_function(((Formatter() << expr_a) << expr_b) << expr_c); // 1 2 3 The operators order it as if it were function calls, so that: Formatter() and expr_a both occur before the insertion marked 1. The above, plus insertion 1, plus expr_b happen before insertion 2. The above, plus insertion 2, plus expr_c happen before insertion 3. Note this only limits in one direction: expr_c can happen after expr_a and before Formatter(), for example. Naturally, all of the above plus the string conversion occur before calling some_function. To add to the discussion on temporaries, all of the temporaries created are in the expression: some_function(Formatter() << make_a_temp() << "etc.") // one temp another temp and so on They will not be destroyed until the end of the full expression containing that some_function call, which means not only will the string be passed to some_function, but some_function will have already returned by that time. (Or an exception will be thrown and they will be destroyed while unwinding, etc.)
In order to handle all manipulators, such as std::endl, add: struct Formatter { Formatter& operator<<(std::ios_base& (*manip)(std::ios_base&)) { ss_ << manip; return *this; } Formatter& operator<<(std::ios& (*manip)(std::ios&)) { ss_ << manip; return *this; } Formatter& operator<<(std::ostream& (*manip)(std::ostream&)) { ss_ << manip; return *this; } }; I've used this pattern several times to wrap streams, and it's very handy in that you can use it inline (as you do) or create a Formatter variable for more complex manipulation (think a loop inserting into the stream based on a condition, etc.). Though the latter case is only important when the wrapper does more than you have it do here. :)
{ "domain": "codereview.stackexchange", "id": 12, "tags": "c++, formatting" }
How can I block ultrasound from an automotive transducer at about 8'?
Question: We are testing a new ultrasound product using a Texas Instruments PGA450 automotive transducer. Three of us are sharing a lab and we need some kind of ultrasound-blocking partition walls around each product to prevent interference. Think cubicle walls that absorb ultrasound. Answer: Typical towels do a pretty good job of not echoing ultrasound around too much, and also attenuate ultrasound going thru them. Ordinary sheetrock walls are really bad because they are great mirrors for ultrasound. There are special materials for absorbing sounds at various frequencies, but hanging a bunch of large beach towels from the ceiling around each work area will likely be cheaper and just as effective. Two or three towels an inch or so apart should help a lot. You will still either have to put sound absorbent material on the ceiling, or make sure the towels extend to the ceiling. The remaining path will then be by bouncing off the floor. That's not so simple. Some "area rugs" should help, but some of them reflect ultrasound much better than others. Ideally you want something that has roughness extending over ½ wavelength or so, but that will be hard to find. Beach towels on the floor will work to attenuate ultrasound, but won't last long as they aren't designed to take that sort of abuse.
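To put a number on the half-wavelength roughness figure: assuming the transducer operates near 40 kHz (a common automotive ultrasonic frequency — the actual frequency depends on the transducer paired with the PGA450), the wavelength in air is under a centimetre.

```python
# Back-of-envelope numbers for the half-wavelength roughness rule of
# thumb. Assumption: operation near 40 kHz, a typical automotive
# ultrasonic frequency; the real value depends on the transducer used.
c = 343.0             # speed of sound in air at room temperature, m/s
f = 40e3              # assumed operating frequency, Hz
wavelength = c / f                 # 8.575 mm
half_wavelength = wavelength / 2   # ~4.3 mm roughness scale
```

So a surface whose roughness features are on the order of 4 mm or more would scatter rather than mirror the beam, which is why smooth sheetrock reflects so well.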
{ "domain": "engineering.stackexchange", "id": 591, "tags": "mechanical-engineering, electrical-engineering, automotive-engineering" }
Why are equilibrium constants unitless?
Question: I haven’t quite reached the point where I can read a full-fledged text on chemical kinetics and thermodynamics yet, so bear with me, please. I’m wondering why a value like $K_\text{eq} = \frac{[\ce{NO}]^2[\ce{O2}]}{[\ce{NO2}]^2}$ wouldn't have units of M? Answer: I goofed up the first time I tried to answer this question, erroneously applying dimensional analysis to your equilibrium expression. It turns out that Silberberg[1] gives a good explanation of why $K_\text{eq}$ is dimensionless, which is often glossed over as the terms of the equilibrium expression are generally taught as concentrations. In actual fact, the terms are ratios of the concentration or activity of each species with a reference concentration (1 $\mathrm{mol\cdot{L^{-1}}}$ for solutions.) For example, a concentration of 2 $\mathrm{mol\cdot{L^{-1}}}$ divided by a reference of 1 $\mathrm{mol\cdot{L^{-1}}}$ yields a ratio of 2, with no units. As each term has no units, so too does $K_\text{eq}$. [1] Silberberg, M.E.; Chemistry – The Molecular Nature of Matter and Change 3e; 2003, p. 719
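A toy numerical illustration of Silberberg's point (the equilibrium concentrations below are made up, not measured values): each factor in the expression is the concentration divided by the reference $c^\circ = 1~\mathrm{mol\cdot L^{-1}}$, so the units cancel term by term and $K_\text{eq}$ is a pure number.

```python
# Toy illustration of the reference-concentration argument; the
# concentrations are hypothetical, chosen only to show the arithmetic.
c0 = 1.0                          # standard reference, mol/L
NO, O2, NO2 = 0.40, 0.20, 0.10    # hypothetical concentrations, mol/L

# Each factor is a dimensionless ratio c/c0, so K_eq carries no units.
K_eq = (NO / c0) ** 2 * (O2 / c0) / (NO2 / c0) ** 2
```

Numerically nothing changes when $c^\circ = 1~\mathrm{mol\cdot L^{-1}}$, which is exactly why the division is so easy to gloss over in introductory treatments.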
{ "domain": "chemistry.stackexchange", "id": 174, "tags": "kinetics, equilibrium" }
What is the "‡" symbol meaning in a reaction mechanism?
Question: I was studying hydroboration from Clayden's Organic Chemistry [1, p. 1281] and the transition state had the “‡” symbol in the upper right corner of the activated complex: We know that this is not the whole story because of the stereochemistry. Hydroboration is a syn addition across the alkene. As the addition of the empty p orbital to the less substituted end of the alkene gets under way, a hydrogen atom from the boron adds, with its pair of electrons, to the carbon atom, which is becoming positively charged. The two steps shown above are concerted, but formation of the C–B bond goes ahead of formation of the C–H bond so that boron and carbon are partially charged in the four-centred transition state. What does this symbol mean? I have seen it before as a superscript on the Gibbs energy symbol. References Organic Chemistry; Clayden, J., Ed.; Oxford University Press: Oxford; New York, 2001. ISBN 978-0-19-850347-7. Answer: The symbol is called "double dagger" (sometimes also "double cross") and is used to denote a transition state (a maximum in an energy diagram; also often denoted with "*" or "TS") or a related physical property. Note, however, that a transition state and an intermediate are two different terms. The symbol has peculiar origins: as H. Eyring wrote, it is entirely owed to the departmental secretary, Miss Lucy D'Arcy [1, p.
1080]: In accordance with previous IUPAC recommendations (IUPAC QUANTITIES (1988)) the symbol ‡ to indicate transition states ("double dagger") is used as a prefix to the appropriate quantities, e.g. $Δ^‡G$ rather than the more often used $ΔG^‡.$ In the future, I suggest using services like Shapecatcher or Detexify to find the name of the symbol. Finding its usage in chemistry afterwards is normally a trivial task. References Eyring, H. Models in Research. International Journal of Quantum Chemistry 1969, 3 (S3A), 5–15. DOI: 10.1002/qua.560030705. Muller, P. Glossary of Terms Used in Physical Organic Chemistry (IUPAC Recommendations 1994). Pure and Applied Chemistry 1994, 66 (5), 1077–1184. DOI: 10.1351/pac199466051077. (Free Access)
{ "domain": "chemistry.stackexchange", "id": 13895, "tags": "organic-chemistry, physical-chemistry, reaction-mechanism, notation" }
Vector potential of position field
Question: Consider the position vector field $\vec{r}=(x,y,z)^T$. What would be a vector potential $\vec{A}$ for this field? I was thinking of something like $\vec{A}=(yz,zx,-xy)^T$, which gives $$\nabla\times\vec{A}=(-2x,2y,0)^T.$$ But that is not exactly correct. Answer: It is not possible to find a vector potential $\mathbf{A}$ such that $$\nabla\times\mathbf{A}=\mathbf{r}. \tag{1}$$ You can prove this by contradiction. Assume (1) is possible, and apply the divergence operator ($\nabla\cdot$) to it. Then you get $$\underbrace {\nabla\cdot\nabla\times\mathbf{A}}_{=0} =\underbrace {\nabla\cdot\mathbf{r}}_{=3}.$$ This is obviously a contradiction, and hence (1) is not possible.
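The contradiction can also be checked numerically with plain Python: central-difference derivatives show that $\nabla\cdot\vec{r}=3$, while the curl of the candidate $\vec{A}=(yz,zx,-xy)^T$ from the question is $(-2x,2y,0)^T$ — never $\vec{r}$. The sample point below is arbitrary.

```python
# Numerical sanity check (pure Python, central differences): for the
# candidate A = (yz, zx, -xy), curl A = (-2x, 2y, 0), which is not r;
# and div r = 3 while div(curl A) is identically 0, so no vector
# potential for r can exist.
h = 1e-5

def A(p):
    x, y, z = p
    return (y * z, z * x, -x * y)

def r(p):
    return tuple(p)

def partial(f, p, i, comp):
    q1, q2 = list(p), list(p)
    q1[i] += h
    q2[i] -= h
    return (f(q1)[comp] - f(q2)[comp]) / (2 * h)

def curl(f, p):
    return (partial(f, p, 1, 2) - partial(f, p, 2, 1),
            partial(f, p, 2, 0) - partial(f, p, 0, 2),
            partial(f, p, 0, 1) - partial(f, p, 1, 0))

p = (0.7, -1.3, 2.1)                                 # arbitrary sample point
cx, cy, cz = curl(A, p)                              # (-2x, 2y, 0) at p
div_r = sum(partial(r, p, i, i) for i in range(3))   # divergence of r
```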
{ "domain": "physics.stackexchange", "id": 100263, "tags": "homework-and-exercises, potential, vector-fields, calculus" }
Rubber-Rubber friction
Question: We all know rubber is known to have a high friction coefficient, and it's quite difficult to drag a block of it across a surface. What happens when two blocks of flexible rubber are dragged against each other, and by how much does polishing the surface of rubber affect its friction against another polished rubber surface? Answer: Rock climbing shoes are made with rubber that is formulated to be as sticky as possible. Steven Won has done tabletop experiments using climbing shoes on the rough back side of a granite slab from a kitchen countertop, and has estimated $\mu_s=1.17$. I did a quick and dirty experiment just now by putting a climbing shoe on top of an upside-down climbing shoe, so that the soles were in contact. I tilted them until they slipped. My result was about $\mu_s=0.99\pm 0.05$. So it appears that rubber on rubber does not have a higher coefficient of static friction than rubber on granite. by how much does polishing the surface of rubber affect its friction against another polished rubber surface? I don't think you can really polish rubber in the same sense that you can polish materials like metal or rock, and I don't think the frictional properties of rubber depend very much on the details of the shape of the surface. That's why road bike tires and rock climbing shoes have no tread pattern.
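For reference, the tilt experiment in the answer reduces to one line of trigonometry: on an incline, slipping starts when the tangent of the tilt angle exceeds $\mu_s$, so measuring the slip angle $\theta$ gives $\mu_s=\tan\theta$. A result of $\mu_s\approx 0.99$ corresponds to a tilt of roughly 44.7°.

```python
import math

# The tilt test in one line: at the slip angle theta, gravity along the
# incline (m*g*sin) equals the maximum static friction (mu_s*m*g*cos),
# so mu_s = tan(theta). The 44.7 degrees here is the back-computed
# angle matching the answer's mu_s ≈ 0.99, not a measured value.
def mu_from_tilt(theta_deg):
    return math.tan(math.radians(theta_deg))

mu = mu_from_tilt(44.7)
```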
{ "domain": "physics.stackexchange", "id": 58319, "tags": "forces, friction, material-science" }
colorize area of point cloud on a 2D image
Question: I am projecting a 3D point cloud onto a 2D image incoming as ROS 2 messages. Now, I want to fill the gaps between the pixels from the point cloud, so that I can see an area instead of single pixels, and publish the overlay (point cloud + camera) image. I found a similar question here, but unfortunately it hasn't been answered. What I am trying to achieve is: before after I tried a Voronoi diagram at first, but I haven't succeeded. My node is written in C++ using OpenCV for image processing. Does anyone have an idea how I can achieve this? Answer: I managed to achieve better results with cv::dilate. As far as I tested, I haven't lost any performance, and it is really easy to implement.
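For readers wondering what cv::dilate actually does here: each projected pixel is grown into its neighbourhood, so nearby sparse pixels merge into a filled area. A dependency-free Python sketch of the same morphological operation with a 3×3 kernel (the OpenCV C++ call would be cv::dilate(src, dst, kernel)):

```python
# Dependency-free sketch of morphological dilation (what cv::dilate
# does with a 3x3 kernel): every "on" pixel grows into its 3x3
# neighbourhood, so sparse neighbouring points merge into an area.
def dilate(mask, k=1):
    h, w = len(mask), len(mask[0])
    out = [[0] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            if any(mask[a][b]
                   for a in range(max(0, i - k), min(h, i + k + 1))
                   for b in range(max(0, j - k), min(w, j + k + 1))):
                out[i][j] = 1
    return out

sparse = [[0, 0, 0, 0, 0],
          [0, 1, 0, 1, 0],
          [0, 0, 0, 0, 0]]
filled = dilate(sparse)   # the gap between the two points is now filled
```

Larger kernels (or repeated iterations) fill larger gaps, at the cost of bloating the projected area.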
{ "domain": "robotics.stackexchange", "id": 38719, "tags": "c++, opencv, pointcloud" }
Is there no mode expansion for $:e^{i k_\mu X^\mu(z, \bar z)}:$ in the free boson CFT?
Question: We are told that in a 2D CFT, all primary operators can be written in the form of \begin{equation} \tag{1} \mathcal{O}(z, \bar{z} ) = \sum_{m,n \in \mathbb{Z} } \frac{\mathcal{O}_{m,n} }{z^{m + h} \bar{z}^{n + \bar h}} \end{equation} Recently I have been wondering about what happens when $h$ and $\bar{h}$ are not integers. (For instance, I wonder how $\mathcal{O}(0,0)$ when acting on the ground state $|0\rangle$ is not trivially $0$.) In order to look at a specific example, I tried to see what the mode expansion of the operator \begin{equation} \mathcal{O}(z, \bar{z}) = \; :e^{ i k_\mu X^\mu (z, \bar z)} : \end{equation} is in the free boson CFT. When this operator acts on the ground state $|0\rangle$ at $z = \bar{z} = 0$, it creates a state of weight \begin{equation} (h, \bar{h} ) = \left( \frac{\alpha' k^2}{4}, \frac{\alpha' k^2}{4} \right) \end{equation} from equation 2.4.17 in Polchinski. As $h$ and $\bar{h}$ aren't integers, I therefore tried to see what the mode expansion of this operator is. However, it doesn't quite seem to take the form of $(1)$, which makes me wonder if all operators really can be expressed as $(1)$. I present my work below. Equation 2.7.4 in Polchinski reads \begin{equation} X^\mu(z, \bar{z}) = x^\mu - i \frac{\alpha'}{2} p^\mu \ln(|z|^2)+ i \left( \frac{\alpha'}{2} \right)^{1/2} \sum_{m \in \mathbb{Z} - \{0\} } \frac{1}{m} \left( \frac{\alpha^\mu_m}{z^m} + \frac{\widetilde{\alpha_m^\mu} }{\bar{z}^m} \right). \end{equation} Note that $\alpha^\mu_m$ is an annihilation operator for $m > 0$ and a creation operator for $m < 0$. Normal ordering pulls all the annihilation operators to the right and all the creation operators to the left. In this procedure, $p^\mu$ is regarded as an annihilation operator and $x^\mu$ as a creation operator. (The two normal ordering prescriptions are related by equation 2.7.12 in Polchinski.)
We now find \begin{align} &:e^{ i k_\mu X^\mu (z, \bar z)} : \\ &= e^{i k_\mu x^\mu} \exp(\sum_{m<0} \frac{1}{m} \left( \frac{\alpha^\mu_m}{z^m} + \frac{\widetilde{\alpha}^\mu_m}{{\bar z}^m} \right) ) e^{ \alpha' k_\mu p^\mu \ln|z|} \exp(\sum_{m>0} \frac{1}{m} \left( \frac{\alpha^\mu_m}{z^m} + \frac{\widetilde{\alpha}^\mu_m}{{\bar z}^m} \right) ) \end{align} As brief check, note that \begin{align} p^\mu |0\rangle &= 0 \\ \alpha^\mu |0\rangle &= 0 \hspace{0.5 cm} m > 0 \end{align} so \begin{equation} : e^{i k_\mu X^\mu(0,0)} : |0\rangle = e^{i k_\mu x^\mu} |0\rangle = |k; 0\rangle \end{equation} as expected. We now use the commutation relation (2.7.5b in Polchinski) \begin{equation} [x^\mu, p^\nu] = i \eta^{\mu \nu} \end{equation} and Baker-Campbell-Hausdorff to write \begin{align} e^{i k_\mu x^\mu} e^{\alpha' k_\nu p^\nu \ln|z|} &= \exp( i k_\mu x^\mu + \alpha' k_\nu p^\nu \ln|z| + \tfrac{1}{2} [i k_\mu x^\mu , \alpha' k_\nu p^\nu \ln|z| ] ) \\ &= \exp( i k_\mu x^\mu + \alpha' k_\nu p^\nu \ln|z| - \tfrac{1}{2} \alpha' k^2 \ln|z| ) \\ &= \exp( i k_\mu x^\mu + \alpha' k_\nu p^\nu \ln|z| ) |z|^{ - \tfrac{1}{2} \alpha' k^2 } \end{align} making \begin{align} &:e^{ i k_\mu X^\mu (z, \bar z)} : \\ &= e^{i k_\mu x^\mu + \alpha' k_\nu p^\nu \ln|z|} \\ &\exp(\sum_{m<0} \frac{1}{m} \left( \frac{\alpha^\mu_m}{z^m} + \frac{\widetilde{\alpha}^\mu_m}{{\bar z}^m} \right) ) \exp(\sum_{m>0} \frac{1}{m} \left( \frac{\alpha^\mu_m}{z^m} + \frac{\widetilde{\alpha}^\mu_m}{{\bar z}^m} \right) ) \frac{1}{z^{\alpha' k^2/4 }}\frac{1}{{\bar z}^{\alpha' k^2/4 }} \end{align} This is a somewhat confusing answer. It almost looks to be of the form \begin{equation} :e^{ i k_\mu X^\mu (z, \bar z)} : \; \stackrel{?}{=} \sum_{m \in \mathbb{Z} } \frac{\mathcal{O}_{m,n} }{z^{m + h} \bar{z}^{n + \bar h}} \end{equation} but not quite, due to the $e^{i k_\mu x^\mu + \alpha' k_\nu p^\nu \ln|z|} $ out front. Does anyone have any ideas? Answer: Okay, I have the answer. 
The first place I'd like to direct you to is this question + answer where I explain in a lot of detail how mode expansions work in general, including for non integer weight. For this problem in particular, I should have just stopped when I obtained the formula. \begin{align} &:e^{ i k_\mu X^\mu (z, \bar z)} : \\ &= e^{i k_\mu x^\mu} \exp(\sum_{m<0} \frac{1}{m} \left( \frac{\alpha^\mu_m}{z^m} + \frac{\widetilde{\alpha}^\mu_m}{{\bar z}^m} \right) ) e^{ \alpha' k_\mu p^\mu \ln|z|} \exp(\sum_{m>0} \frac{1}{m} \left( \frac{\alpha^\mu_m}{z^m} + \frac{\widetilde{\alpha}^\mu_m}{{\bar z}^m} \right) ). \end{align} This formula is indeed of the form \begin{equation} \mathcal{O}(z, \bar{z})| 0 \rangle = \sum_{m,n = 0}^\infty \frac{\mathcal{O}_{mn}}{z^m \bar{z}^n} |0\rangle \end{equation} because \begin{equation} p^\mu |0\rangle = 0. \end{equation} You see, just because we have \begin{equation} \mathcal{O}(z, \bar{z})| 0 \rangle = \sum_{m,n = 0}^\infty \frac{\mathcal{O}_{mn}}{z^m \bar{z}^n} |0\rangle\hspace{1cm} (\text{true}) \end{equation} does NOT mean we have \begin{equation} \mathcal{O}(z, \bar{z}) = \sum_{m,n = 0}^\infty \frac{\mathcal{O}_{mn}}{z^m \bar{z}^n} \hspace{1cm} (\text{false}). \end{equation} They are very different expressions. You simply can't do this mode expansion without acting on the vacuum on the right. If you would like to see how to do mode expansions in general, I will once again link you to here where I explain it in detail.
{ "domain": "physics.stackexchange", "id": 78505, "tags": "operators, string-theory, conformal-field-theory, wick-theorem" }
Robot_description not found
Question: I installed Fuerte using 'sudo apt-get install ros-fuerte-pr2-desktop' I can rosmake various packages, including arm_navigation/planning_environment. However, when I run 'roslaunch planning_environment environment_server.launch', the service cannot find the (presumably PR2) robot model: ...: Robot model '/robot_description' not found! Did you remap 'robot_description'? I thought any required models would have come down with the ros-fuerte-pr2-desktop installation. Unless by side effect, I did not remap 'robot_description'. Anything else I need to install or configure? Thanks! Andreas Originally posted by paepcke on ROS Answers with karma: 71 on 2012-05-23 Post score: 2 Original comments Comment by paepcke on 2012-05-26: The problem ended up being libmysql.so missing various symbols. This defect then prevented Gazebo from initializing properly. Re-installing libmysqlclient16 made the problem disappear. Unclear how the defective library came down. (Thanks to Lorenz for the answer, though.) Answer: Did you upload a robot description to the parameter server? Normally, you execute the planning launch file when you either have a real robot or simulation running. You can also upload the robot description with roslaunch pr2_description upload_pr2.launch Originally posted by Lorenz with karma: 22731 on 2012-05-24 This answer was ACCEPTED on the original site Post score: 6
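For reference, a launch file like upload_pr2.launch essentially just loads the robot's URDF onto the parameter server under /robot_description. A hedged sketch of the pattern — the xacro path below is illustrative, not the actual contents of that file:

```xml
<launch>
  <!-- Hypothetical example: generate the URDF via xacro and store it
       on the parameter server as /robot_description. The package and
       file path are illustrative only. -->
  <param name="robot_description"
         command="$(find xacro)/xacro.py '$(find pr2_description)/robots/pr2.urdf.xacro'" />
</launch>
```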
{ "domain": "robotics.stackexchange", "id": 9520, "tags": "ros, robot-description" }
class IText - Validates text from Ajax POST | Add consistent return type, defensive coding, functional focus
Question: I denotes independent, no dependencies. <?php /** *Input - One Dimensional Non-Empty Post Array *Output - Boolean on pass or fail */ class IText { private $text_array = array(); private $patterns = array( 'domain' => '/:\/\/(www\.)?(.[^\/:]+)/', 'prefix_url' => '/^(http:)|(https:)\/\//', 'url' => '/^.{1,2048}$/', 'tweet' => '/^.{1,40}$/', 'title' => '/^.{1,32}$/', 'name' => '/^.{1,64}$/', 'email' => '/^.{1,64}@.{1,255}$/', 'pass' => '/^.{6,20}$/' ); public function __construct() { if ( count( $_POST ) === 0 ) { echo "Hack Attempt"; return; } else { foreach( $_POST as $key => $value ) { if( !is_scalar( $value ) ) { echo "Hack Attempt"; return; } $this->text_array[ $key ] = htmlentities( $value ); } } } public function get( $key ) // basic getter { return $this->text_array[ $key ]; } public function set( $key, $value ) // basic setter { $this->text_array[ $key ] = $value; } public function checkPattern( $pattern ) // checks for pattern and returns bool { return ( ( boolean )preg_match( $this->patterns[ $pattern ], $this->text_array[ $pattern ] ) ); } public function checkEmpty() // checks for empty and returns bool { return ( !in_array( '', $this->text_array, TRUE ) ); } } Answer: I've tried to cover as many things as I could. A few of my explanations are rather brief, so if you want me to explain or defend anything, let me know. And hopefully it doesn't read too ramble-y :). IText IName typically denotes an interface. It's not a naming scheme often used in PHP, but it's used often enough that you should avoid prefixing an I onto a class that isn't an interface. This is confusing. In fact, when I first saw the name, I was expecting an interface. As a developer consuming the class, it really is not all that important if it's "independent" since most IDEs will show the constructor signature. Interface vs abstract vs concrete tends to matter, but I do not feel that independent necessitates its own naming scheme.
mixed concerns I'm not sure if I understand what your class is doing. Based on the name, it looks like you're trying to model a string. But what it's actually doing is sort-of processing POST. Instead of doing this, I would create validators that are separate from the model of a string (or the model of any data type really). Also, why hold an array of values inside of the object? Is the object meant to be some kind of repository or something? Once you get a lot of validations, or once you get a few very complicated ones, this class will grow to be unmanageable. Personally, I like the model of Zend_Validate. It completely separates the concept of data and the validation of data. In your class, storage, validation and manipulation have all been (needlessly) coupled. (Obviously storage and validation go together when storing things long term, but in this context, I mean storing as in storing in PHP-land.) A perfect example of your validation and storage coupling is your checkPattern method. What if you want to validate something as a URL, but it's not stored in $_POST['url']? What if you want to check $_POST['website']? Or what if you have an arbitrary $url that you want to check? Your code now does not facilitate that. It is directly tied to the expectation that the validation and the data will be stored with the same keys. htmlentities Your values are not being used in the context of HTML, so they should not be treated as HTML. Only treat HTML as HTML. In other words, don't escape data until it actually needs to be escaped. Your validations are going to be confused when they see a &#38; b instead of a & b (or any other substitutions htmlentities may make). $_POST For a lot of the comments in the "mixed concerns" section, you're probably thinking "but it's only meant to manipulate post?" Well, what if at some point in the future, you have an arbitrary array you want to handle? There's no need to couple your code to $_POST. Instead, pass the array into the constructor.
(Well, in all honesty, I think your design is flawed, but if you do stick with it, pass the array in.) What if at some point in the future you have a JSON-based API and you want to validate that? How would you do that with your current code? technicalities If nothing is posted (count($_POST) === 0), then text_array is going to be NULL. This will mean that every array access will issue a notice. You should initialize the member to an array either in the declaration or in the constructor. I would do it in the declaration: private $text_array = array(); Also, you're assuming $_POST is one dimensional. If someone posts a form like: <input type="text" name="test[]" value="array"> Then $_POST will look like: array('test' => array('array')) This means that you'll end up passing an array to htmlentities. In other words, don't assume that POST values are strings. (Though they are always either a string or an array.) In addPrefix, you've used textArray instead of text_array Constants Instead of using names of the validations, use class constants instead. This will mean that calling code doesn't need to know the array keys, and that they can be changed if needed/wanted. It will also tend towards cleaner looking and more maintainable code since typos will be errors instead of falling through the cracks. For example: const DOMAIN = "domain"; Then: $text->checkPattern(IText::DOMAIN); Be paranoid Never assume anything in code. In particular, don't assume that a variable will be the type that you expect or that an array key will exist. Note that these both basically only apply to user data since if you create the data, you can know with certainty what it is. Consider your checkPattern method: public function checkPattern( $pattern ) // pattern checker returns bool { return ( preg_match( $this->patterns[ $pattern ], $this->text_array[ $pattern ] ) ); } What if an invalid pattern name is provided?
Then suddenly you have at least 3 notices (two that the array key does not exist, and one that preg_match received an empty pattern). You should be safeguarding things like this: public function checkPattern($pattern) { if (!array_key_exists($pattern, $this->patterns)) { //Pattern doesn't exist throw new Exception("Invalid pattern provided: " . $pattern); } if (!array_key_exists($pattern, $this->text_array)) { //data doesn't exist return false; } else if (!is_scalar($this->text_array[$pattern])) { //If this were checked on construction, this would not be necessary throw new Exception("The data stored for {$pattern} is not a scalar, but it's being treated as such"); } return (bool) ( preg_match( $this->patterns[ $pattern ], $this->text_array[ $pattern ] ) ); } Also, while I'm at it, your comment is a lie. preg_match returns an int, not a bool. A wrong comment is significantly worse than no comment. Final suggestions I would consider decoupling your data and validations. What you could do is create a "Form" that encapsulates all of this. Basically your form would have elements and each element would have validations. The form would then be capable of taking in an arbitrary array and telling you if it's valid per the specification of the form. If you want a few ideas on this, you could look at Zend_Form. It has a lot of limitations, and the rendering side of it can be a bit hard to get along with, but overall, it's a fairly decent modeling of a form. Zend_Form is likely not the best option; Zend Framework just happens to be what I'm familiar with. Edit: URL handling Just noticed that you've used a regex to extract the domain from a URL. Instead of doing that, you might want to use parse_url.
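The review's core advice (validators separated from data storage) can be sketched in a few lines. This is a hypothetical Python analogue of the Zend_Validate idea, not PHP from the review; the class and pattern names are made up for illustration:

```python
import re

class RegexValidator:
    """Validates an arbitrary value against a pattern; knows nothing
    about where the value came from ($_POST, JSON, a plain variable)."""
    def __init__(self, pattern):
        self.pattern = re.compile(pattern)

    def is_valid(self, value):
        # Be paranoid: only strings can sensibly match a pattern
        if not isinstance(value, str):
            return False
        return self.pattern.fullmatch(value) is not None

# The validator can now be applied to data from any source
url_validator = RegexValidator(r"https?://\S+")
print(url_validator.is_valid("https://example.com"))   # True
print(url_validator.is_valid(["not", "a", "string"]))  # False
```

Because the validator holds no data, the same instance can check `$_POST['url']`, `$_POST['website']`, or any arbitrary value, which is exactly the decoupling the review asks for.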
{ "domain": "codereview.stackexchange", "id": 1903, "tags": "php" }
How can the bond lengths of C-O and C-N be the same?
Question: In NCERT and many other books it is given that the bond lengths of $\ce{C-O}$ and $\ce{C-N}$ are the same. But how can this be possible? Since oxygen is smaller than nitrogen, the bond length of $\ce{C-N}$ should be greater, but it is not. Similarly, for $\ce{C-O}$ and $\ce{N-O}$, the bond length of $\ce{N-O}$ should be smaller, as nitrogen is smaller than carbon, but in books it is just the reverse. The bond order in all these molecules is the same, so the bond length should depend on the size of the bonding atoms, but this is not so in many cases. So what is the reason behind this? Is there any other factor responsible for such a result? Answer: The bond lengths given in your example tables are average bond lengths. That means the actual bond length in a given compound can be larger or smaller than the given value. Keep in mind that bond lengths are not simply proportional to the sizes of the atoms making them. As pointed out in the other answer, they are determined by other factors as well, which is a broad subject. One such factor is the chemical structure of a compound. For example, let's compare the chemical bonds in the oxazole nucleus of oxazole derivatives with at least a 2-substituent (Ref. 1-3): $$ \begin{array}{c|ccc} \text{Bonds} & \text{Bond length in $\bf{I}$} & \text{Bond length in $\bf{II}$} & \text{Bond length in $\bf{III}$} \\ \hline \ce{O_{(1)}-C_{(2)}} & \pu{1.370 \mathring{A}} & \pu{1.356 \mathring{A}} & \pu{1.356 \mathring{A}} \\ \ce{C_{(2)}-N_{(3)}} & \pu{1.299 \mathring{A}} & \pu{1.294 \mathring{A}} & \pu{1.297 \mathring{A}} \\ \ce{N_{(3)}-C_{(4)}} & \pu{1.382 \mathring{A}} & \pu{1.410 \mathring{A}} & \pu{1.405 \mathring{A}} \\ \ce{C_{(4)}-C_{(5)}} & \pu{1.333 \mathring{A}} & \pu{1.310 \mathring{A}} & \pu{1.332 \mathring{A}} \\ \ce{C_{(5)}-O_{(1)}} & \pu{1.375 \mathring{A}} & \pu{1.399 \mathring{A}} & \pu{1.402 \mathring{A}} \\ \hline \end{array} $$ These data demonstrate how bond lengths in the oxazole ring differ with its substituents and attached ring systems. 
Even two $\ce{C^\mathrm{sp^2}-O^\mathrm{sp^3}}$ bonds in the same ring give two different values (cf. the $\ce{O_{(1)}-C_{(2)}}$ and $\ce{C_{(5)}-O_{(1)}}$ values of each compound) because of substitution differences. References: Boon-Chuan Yip, Hoong-Kun Fun, Siang-Guan Teoh, Omar Bin Shawkataly, "Structure of 2-(1-naphthyl)-5-phenyl-1,3-oxazole ($\alpha$-NPO)," Acta Cryst. C 1993, C49, 1532-1534 (https://doi.org/10.1107/S0108270193001192). A. Albinati, M. G. Marcon, P. Traldi, P. Cavoli, "The structure of 2-amino-1,3-oxazole," Acta Cryst. B 1981, B37, 2090-2092 (https://doi.org/10.1107/S0567740881008078). P. Luger, G. Griss, R. Hurnaus, G. Trummlitz, "The $\alpha_2$-adrenoceptor agonists B-HT 920, B-HT 922, and B-HT 958, a comparative X-ray and molecular-mechanics study," Acta Cryst. B 1986, B42, 478-490 (https://doi.org/10.1107/S0108768186097859).
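To make the "average bond length" point concrete, the tabulated C-O values above can be summarized numerically. A small Python sketch (the values are copied from the table; the statistics are just illustrative):

```python
# C(sp2)-O(sp3) bond lengths (angstroms) from the oxazole table above
o1_c2 = [1.370, 1.356, 1.356]   # O(1)-C(2) in compounds I, II, III
c5_o1 = [1.375, 1.399, 1.402]   # C(5)-O(1) in compounds I, II, III

all_co = o1_c2 + c5_o1
mean = sum(all_co) / len(all_co)
spread = max(all_co) - min(all_co)
print(f"mean C-O length: {mean:.3f} A, spread: {spread:.3f} A")
# mean C-O length: 1.376 A, spread: 0.046 A
```

A spread of roughly 0.05 Å among nominally identical C-O single bonds in the same ring system shows why a single "average bond length" cannot be read as a fixed atomic-size sum.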
{ "domain": "chemistry.stackexchange", "id": 14159, "tags": "organic-chemistry, experimental-chemistry, bond, molecules" }
Is there muscle hyperplasia or hypertrophy after surgery?
Question: After exercise there is hypertrophy only. I am curious: if surgery removes muscle, will there be hyperplasia or hypertrophy to restore it? Answer: It did not require much research to find the answer. After injury and surgery the muscle undergoes inflammation to clear up the debris. After that, myogenic cells start to proliferate and differentiate, so, as expected, there is hyperplasia when restoring injured muscle. This means that the body can fully restore muscle function if the injury is not too severe. Effective fiber hypertrophy in satellite cell-depleted skeletal muscle. Cellular and Molecular Regulation of Muscle Regeneration Regeneration of mammalian skeletal muscle. Basic mechanisms and clinical implications. I investigated this a little bit further. Muscle regrowth is mediated by the BM (basement membrane), which is the extracellular matrix that surrounds the muscle fiber. If that is badly damaged or the nerve is cut, then the muscle fiber won't grow back and we will get scar tissue instead. The muscle will compensate for this to a certain extent by hypertrophy. There is an interesting new treatment for severe muscle injury, which removes the scar tissue and adds pig extracellular matrix to regrow the muscle. Implant Lets Patients Regrow Lost Leg Muscle
{ "domain": "biology.stackexchange", "id": 9125, "tags": "human-biology, human-physiology" }
Object Detection: Can I modify this script to support larger images (Scaled YOLOv4)?
Question: I am looking at training the Scaled YOLOv4 on TensorFlow 2.x, as can be found at this link. I plan to collect the imagery, annotate the objects within the image in VOC format, and then use these images/annotations to train the large-scale model. If you look at the multi-scale training commands, they are as follows: python train.py --use-pretrain True --model-type p5 --dataset-type voc --dataset dataset/pothole_voc --num-classes 1 --class-names pothole.names --voc-train-set dataset_1,train --voc-val-set dataset_1,val --epochs 200 --batch-size 4 --multi-scale 320,352,384,416,448,480,512 --augment ssd_random_crop As we know, Scaled YOLOv4 (and any YOLO algorithm, at that) likes image dimensions divisible by 32, and I have plans to use larger images of 1024x1024. Is it possible to modify the --multi-scale commands to include larger dimensions such as 1024, and have the algorithm run successfully? Here is what it would look like when modified: --multi-scale 320,352,384,416,448,480,512,544,576,608,640,672,704,736,768,800,832,864,896,928,960,992,1024 Answer: Yes, the functionality is there. But don't you think you are overdoing the scales? You have at least 18 scales listed here. Too much of anything is bad. There is a reason it likes dimensions divisible by 32: only at that increment in size will something more meaningful show up in the image. Spamming sizes like this won't help you at all; it will rather waste your time.
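Since every size passed to --multi-scale must be divisible by 32, it is safer to generate the list than to type it by hand. A small Python sketch (the range endpoints are simply the ones from the question):

```python
# Generate candidate input sizes from 320 to 1024, all divisible by 32
scales = list(range(320, 1024 + 1, 32))
print(len(scales))                  # 23
print(",".join(map(str, scales)))   # "320,352,...,1024" for --multi-scale
```

Generating the argument also makes it easy to thin the list out (e.g. `scales[::4]`), which is what the answer recommends rather than using every possible size.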
{ "domain": "ai.stackexchange", "id": 2771, "tags": "deep-learning, object-detection, yolo, scalability" }
How to get x-axis when using FFT to cross-correlate
Question: Suppose there are two functions $f(x)$ and $g(x)$. Each function is nonzero only in the specified intervals, $[m_f, M_f]$ and $[m_g, M_g]$. What I want to calculate is the cross-correlation between two functions: $$ (f\star g)(t) = \int_{-\infty}^{\infty} f^\ast(x)g(x+t)dx $$ which can be obtained by applying the convolution theorem: $$ (f\star g)(t) = \mathcal{F}^{-1}\left\{{\mathcal{F}\left\{f\right\}}^\ast\cdot\mathcal{F}\left\{g\right\}\right\}=\mathcal{F}^{-1}\left\{\hat{f}^\ast\cdot\hat{g}\right\} $$ where $\hat{.}$ denotes the forward transform. Now, to write this in FFT, I prepared four arrays: x_f, x_g, y_f, y_g. x_f and x_g denote the $x$ domain of the respective functions, and they are 1. zero-padded and 2. sampled at the same rate. Basically, I took whichever was greater between $M_f-m_f$ and $M_g-m_g$, defined it to be $L/2$, and made it the interval for both axes. So, x_f[0] is $m_f$, and x_f[-1] is $m_f+L\geq M_f$. The same goes for x_g. y_f and y_g are defined to be $f(x)$ and $g(x)$ at each x_f and x_g. If we take y_c = ifft(fft(y_f).conjugate()*fft(y_g)), we get a somewhat valid result. However, it is not easy to tell which x_c each y_c corresponds to, such that y_c is equal to $(f\star g)(x)$ at x_c. How do you calculate this? Also, what happens if there are three terms? i.e., $(f\star (g\star h))(t)$? Answer: Let's say we have two discrete functions $f[n]$ and $g[n]$ that have finite support on $[m_f, M_f]\ m_f,M_f \in \mathbb{Z}$ and $[m_g, M_g] \ m_g,M_g \in \mathbb{Z}$. Then the lengths of the functions are $N_f = M_f-m_f+1$ and $N_g = M_g-m_g+1$. The length of the cross correlation $h[n]$ (or convolution) is simply $$N_h = N_f+N_g-1$$ We treat the cross correlation as "convolve with the time flipped version of g[n]", i.e. $$h[n] = f[n]*g[-n] $$ where $*$ is the convolution operator. Since we are time flipping around $n=0$, the support of the time-flipped version is $[-M_g,-m_g]$, i.e. the limits change sign and swap positions. 
For example, $[3,8]$ turns into $[-8,-3]$. The FFT implements the DFT (Discrete Fourier Transform). Multiplication in the DFT domain implements circular (not linear) convolution. Hence we need to zero-pad both sequences to the length $N_h$ to avoid time domain aliasing. Note that you need to zero-pad $g[n]$ AFTER time flipping it. For convolution the support intervals just add, i.e. $h[n]$ will be non-zero on $[m_h,M_h] = [m_f-M_g,M_f-m_g]$. Programming languages like Matlab don't have a built-in way of managing the x-interval of an array, so you have to keep track of it manually. Below is a code example that demonstrates the cross correlation between two different triangular waves. %% create two triangular waves and cross correlate them % wave 1: width 9, starting at n = 3; f = conv(ones(5,1),ones(5,1)); f = f/max(f); mf = [3,11]; % wave 2: width 13, starting at n = -5 g = conv(ones(7,1),ones(7,1)); g = g/max(g); mg = [-5,7]; %% do the crosscorrelation Nh = mf(2)+mg(2)-mf(1)-mg(1) + 1; % length of cross correlation h2 = ifft(fft(f,Nh).*fft(flip(g),Nh)); % F-domain convolution with padding mh = mf + flip(-mg); % support interval of result %% now plot it all clf; plot(mf(1):mf(2),f,'LineWidth',2); hold on; plot(mg(1):mg(2),g,'LineWidth',2); plot(mh(1):mh(2),h2/max(h2),'LineWidth',2); grid on set(gca,'xlim',[-5 18]); xlabel('Time in samples'); legend('f[n]','g[n]','h[n]'); Since the peak of $g[n]$ precedes the peak of $f[n]$ by 6 samples, the maximum of the cross correlation will be at $n=6$.
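For readers without Matlab, the same bookkeeping translates directly to NumPy. A sketch using the same triangular waves and support intervals as the Matlab example above:

```python
import numpy as np

# Two triangular waves, as in the Matlab example
f = np.convolve(np.ones(5), np.ones(5)); f /= f.max()   # support [3, 11]
g = np.convolve(np.ones(7), np.ones(7)); g /= g.max()   # support [-5, 7]
mf, mg = (3, 11), (-5, 7)

# Cross correlation via the DFT: pad both to Nh = Nf + Ng - 1 to avoid
# circular (time-domain) aliasing, and time-flip g BEFORE padding
Nh = (mf[1] - mf[0]) + (mg[1] - mg[0]) + 1
h = np.fft.ifft(np.fft.fft(f, Nh) * np.fft.fft(g[::-1], Nh)).real

# Support of the result: [m_f - M_g, M_f - m_g]
mh = (mf[0] - mg[1], mf[1] - mg[0])
x_axis = np.arange(mh[0], mh[1] + 1)   # the x-axis the question asks for

print(x_axis[np.argmax(h)])   # 6, matching the peak lag stated above
```

The second argument of `np.fft.fft` does the zero-padding, and `x_axis` is exactly the mapping from output index to lag that the question asks for.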
{ "domain": "dsp.stackexchange", "id": 12249, "tags": "fft" }
Electric field becomes non-conservative in electrostatic problem
Question: This question is about what I think was a misinterpretation of the statement of an exercise. But first I have to give some context. The question is at the end. During a class, my professor wrote the solution for the following problem: Find the capacitance of two parallel cables of radii $a$, very large length $L$ and separated by a distance of $2d$. Both cables are also at a distance $h$ from an infinite grounded plane. Assume $2d >> a$, and the charge density is the same for both cables. To find the capacitance (which I don't particularly care about now) one first needs the potential difference. Before he started to write equations he said that: Since the cables have the same charge, and they are symmetrical vertically and horizontally with respect to the plane, then both are at the same potential. Then he started to solve the problem with the image method. Remove the grounded plane and put another pair of cables with opposite charge, such that the distance between the true cables and the fictitious ones becomes $2h$. The potential that comes from just one cable is: $$V_i = -\int_0^r \vec{E} d\vec{r} = \frac{Q}{2\pi L\epsilon_0} \ln \left( \frac{r_i}{|\vec{r}-\vec{r_i}|} \right) $$ where we assumed that $V(0) = 0$, since this was imposed when we had the grounded plane. Now, the total potential is: $\sum_{i=1}^4 V_i$. What changes is the value of $\vec{r_i}$, and since the pair of fictitious cables has opposite charge, their charge will be $-Q$. I don't want to put all the calculations here. But the supposed result is that the potential difference between the real cables is: $$\Delta V = \frac{Q}{2 \pi L \epsilon_0} \left[ \ln \left(\frac{2 \ln2d}{a \sqrt{4d^2+4h^2}} \right) - \ln \left( \frac{\sqrt{4d^2+4h^2} a}{2d+2h} \right) \right] $$ Here comes the problem. Undo the image method step. Remove the fictitious cables from below and put the grounded plane back. Now I think about what my professor said at the beginning. That is the statement that confused me. 
Because if both cables are at the same potential, the potential difference between them should be zero. But the result that I showed doesn't seem to convey the feeling that the field is conservative at all, because the integral won't be path-independent. Can I have a non-conservative field in an electrostatic situation? Am I missing something? What I think is that the lines (cables) have to be oppositely charged in order to make sense, and then the assumption of equal potential becomes false; my professor misunderstood the exercise. So, if this is the case, two more questions arise: does this mean that if both cables had the same charge (with the same sign) then $\Delta V$ would be equal to zero and the capacitance would not be defined, hence the problem doesn't make sense at all? If the cables were oppositely charged, could I still use the image method as in the solution, but with the fictitious cables oppositely charged? P. S. Note that I'm not asking for any computation in particular. I present here the equations that I could copy just to give some context. Answer: To find the capacitance (which I don't particularly care about now) one first needs the potential difference. Before he started to write equations he said that: Since the cables have the same charge,... The key question here is: the capacitance of what? In a system of three conductors we can find the mutual capacitance between any two of those conductors, or we can find the self-capacitance of any one of the conductors. Your instructor is following a procedure that will find the mutual capacitance between the two wires (taken as a single electrode) and ground. In signal integrity engineering this is an important property of the wires because it determines how common mode signals (usually undesirable, but often present and responsible for producing undesired radiation) will behave on these wires. If you want to you can also find the mutual capacitance between the two wires. 
That's also an important property of the wires, but it's not the one your instructor is solving for here. But the result that I showed doesn't seem to convey the feeling that the field is conservative at all, If I understood correctly, you have calculated the potential of the first wire (call it A) considering only its own charge and its image charge. Then you calculate the potential of the second wire (call it B) considering both its own charge and image charge and the charge of wire A. This is why you calculate a different potential for A and B. But the charge on B has just as much effect on the potential of A as the charge of A has on the potential of B, and you neglected that contribution. From the symmetry of the geometry, it's certain that with equal charge on the wires there will be no potential difference between A and B. If you found a difference then you must have made an error somewhere, even if I haven't understood correctly where the error lies. Can I have a non-conservative field in an electrostatic situation? No, if you solve Poisson's equation correctly for the electrostatic potential you will obtain a conservative field. What I think is that the lines (cables) have to be oppositely charged in order to make sense and then the assumption of equal potential becomes false Two conductive bodies near each other have mutual capacitance between them. That capacitance exists whether they are (differently) charged or not. We must (in our minds, at least) charge them to calculate or measure the capacitance. But the capacitance doesn't become undefined or uncertain when we (by whatever means) keep the two bodies at the same potential. What I think is that the lines (cables) have to be oppositely charged in order to make sense and then the assumption of equal potential becomes false; my professor misunderstood the exercise. 
We can force the wires to be at equal potential simply by connecting them together at some point (far away from the point where we're evaluating the fields, to be sure that the connection doesn't disturb the fields there). So having the wires at equal potential is not a non-physical scenario. As mentioned before, it's actually one that's very important for signal integrity engineers. In your statement of the problem, it isn't made clear which mutual capacitance is meant to be calculated, so if you have transcribed the problem statement completely it does seem that the problem is ambiguous. Edit to add: The potential that comes from just one cable is: $$V_i = -\int_0^r \vec{E} d\vec{r} = \frac{Q}{2\pi L\epsilon_0} \ln \left( \frac{r_i}{|\vec{r}-\vec{r_i}|} \right) $$ This looks like a solution for the potential on an isolated wire in space referenced to a 0 potential at infinity. You could use this to find the self-capacitance of the wire. But the isolated wire in space will have its charge distributed evenly around its circumference. The wires in your scenario with 2 wires and a ground plane will not have even charge distribution on their surfaces. Therefore you can't simply superimpose the solution for 4 isolated wires to find the potentials in your problem, unless we're taking the statement $d\gg a$ to mean the wires can be treated as thin wires (simple lines of charge).
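The symmetry argument (equal charges imply equal wire potentials) is easy to check numerically in the thin-wire limit by superposing line-charge potentials for the two wires and their images. This is only a sketch with made-up values for $d$, $h$ and $a$, valid under the $d \gg a$ assumption discussed above:

```python
import numpy as np

# Thin-wire approximation: potential of a line charge at distance r,
# in units of Q/(2*pi*L*eps0), referenced so V = 0 on the grounded plane
def v_line(q_sign, src, point):
    r = np.hypot(point[0] - src[0], point[1] - src[1])
    return -q_sign * np.log(r)

d, h, a = 2.0, 1.0, 0.01           # illustrative geometry (d >> a)
sources = [(+1, (+d, h)), (+1, (-d, h)),    # real wires, charge +Q
           (-1, (+d, -h)), (-1, (-d, -h))]  # image wires, charge -Q

def potential(point):
    return sum(s * 0 + v_line(s, src, point) for s, src in sources)

# Evaluate on mirror-symmetric surface points of the two real wires
VA = potential((+d + a, h))
VB = potential((-d - a, h))
print(abs(VA - VB))   # ~0 (machine precision): equal potentials, as claimed
```

The same superposition gives zero on the plane $y = 0$, confirming the image construction; any nonzero $\Delta V$ between the wires in a hand calculation must therefore come from dropping one of the four contributions.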
{ "domain": "physics.stackexchange", "id": 97014, "tags": "homework-and-exercises, electrostatics, electric-fields, potential, capacitance" }
Position vs momentum space calculation
Question: I want to calculate $J^\dagger J$ with $J = x - \langle x\rangle$, where $x$ is the position operator. In position space: $J^\dagger J = (x - \langle x\rangle)^2 = x^2 - 2 x \langle x \rangle + \langle x \rangle^2$. Now switching to momentum space with $ x = \mathrm{i} d/dp$ (setting $\hbar = 1$): $J^\dagger J = - d^2/dp^2 - 2 \mathrm{i} \langle x \rangle \, d/dp + \langle x\rangle^2 $. However, if I switch to momentum space right in the beginning, I get: $J^\dagger J = d^2/dp^2 + \langle x \rangle^2$ (which is probably wrong). Where is my mistake? I would guess that $(d/dp)^\dagger \neq (d/dp)$, since the derivative should act to the left? Could someone point out where exactly I am wrong? Answer: You are right that the problem is $d/dp$ not being hermitian. Since $\hat x$ is hermitian, however, we know that $\hat x = i\hbar \frac{d}{dp}$ is hermitian. Inserting the expression we find, $$J^\dagger J = \Big(i\hbar \frac{d}{dp} - \langle \hat x \rangle\Big)^\dagger\Big(i\hbar \frac{d}{dp} - \langle \hat x \rangle\Big).$$ In the first bracket we have $(i\hbar d/dp)^\dagger = i\hbar d/dp$ since it is hermitian and the problem is resolved, $$= \Big(i\hbar \frac{d}{dp} - \langle \hat x \rangle\Big)\Big(i\hbar \frac{d}{dp} - \langle \hat x \rangle\Big) = -\hbar^2\frac{d^2}{dp^2} - 2i\hbar \langle \hat x \rangle \frac{d}{dp} + \langle \hat x \rangle^2. $$
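The point that $d/dp$ is anti-hermitian while $i\hbar\,d/dp$ is hermitian can be illustrated on a discretized momentum grid, where the central-difference derivative with periodic boundaries becomes an antisymmetric matrix. This is a numerical sketch, not part of the original answer:

```python
import numpy as np

N, dp = 64, 0.1
# Central-difference d/dp with periodic boundary conditions
D = np.diag(np.ones(N - 1), 1) - np.diag(np.ones(N - 1), -1)
D[0, -1], D[-1, 0] = -1.0, 1.0   # wrap-around entries
D /= 2 * dp

# d/dp is anti-hermitian: D^T = -D, hence (i*D) is hermitian
print(np.allclose(D.T, -D))        # True
X = 1j * D                          # position operator on the grid (hbar = 1)
print(np.allclose(X.conj().T, X))   # True: x is hermitian, as used above
```

On this grid $(d/dp)^\dagger = -d/dp$ exactly, which is the discrete counterpart of the integration-by-parts argument the questioner guessed at ("the derivative acts to the left" with a sign flip).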
{ "domain": "physics.stackexchange", "id": 66562, "tags": "quantum-mechanics, homework-and-exercises, operators, momentum" }
Can I use heliocentric velocity as a rotation speed?
Question: In the research of galaxies, does heliocentric velocity include the expansion velocity of space? And can I use heliocentric velocity as a galaxy's rotation speed? Answer: It's a little unclear what you're asking for, but... "Heliocentric velocity" means measured radial velocity of an object relative to the Sun. (Which basically means relative to us, except you take out the variations due to the Earth's motion around the Sun.) For galaxies more distant than, say, the Local Group, the heliocentric velocity is the combination of a) the Sun's motion in our Galaxy; b) the local peculiar motion of our Galaxy; c) the local peculiar motion of the other galaxy; and d) the cosmic expansion of the universe ("cosmological redshift"). The first three never vary that much, but the fourth increases with distance, so for distant galaxies, it's (almost) all due to the cosmic expansion. "Rotation velocity" for a galaxy generally means the speed of stars and gas clouds about the center of that galaxy; it is not one single number but a function of distance from the galaxy's center -- though in the outer parts of the galaxy it often settles to a nearly constant value, which may indeed be referred to as "the" rotation velocity. The two have nothing to do with each other, though. For example, the rotation velocity in the outer part of M31 (the Andromeda Galaxy) is about 230 km/s, while its heliocentric velocity is about $-300$ km/s. If it were 100 megaparsecs away in the Coma Cluster, its heliocentric velocity would be something like 7000 km/s, but its rotation velocity would be the same.
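The "100 megaparsecs, something like 7000 km/s" figure at the end of the answer follows directly from the Hubble law $v = H_0 d$. A one-line check, assuming $H_0 \approx 70~\mathrm{km\,s^{-1}\,Mpc^{-1}}$:

```python
H0 = 70.0    # Hubble constant in km/s per Mpc (assumed round value)
d = 100.0    # distance in Mpc
v = H0 * d   # recession velocity from cosmic expansion
print(v)     # 7000.0 km/s, dwarfing M31's ~300 km/s peculiar motion
```

This is why, for distant galaxies, the heliocentric velocity is dominated by the cosmological term and tells you nothing about rotation.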
{ "domain": "astronomy.stackexchange", "id": 5498, "tags": "observational-astronomy, galaxy, rotation, speed, velocity" }
receive rosbag play topics
Question: Hi, I want to receive rosbag play topics and use them in my source. Is there any help? I use ros::Subscriber sub = n.subscribe("/worldmodel/objects", 1000, chatterCallback); but when I run rosbag play *.bag --clock I receive this error: [ERROR] [1364626445.198828503]: Client [/listener] wants topic /worldmodel/objects to have datatype/md5sum [std_msgs/String/992ce8a1687cec8c8bd883ec73ca41d1], but our version has [worldmodel_msgs/ObjectModel/f9cae8b2109e6f4fe92735ca9596083c]. Dropping connection. What should I do now? Originally posted by MMB on ROS Answers with karma: 1 on 2013-03-29 Post score: 0 Answer: The error message is telling you that your subscriber is expecting a std_msgs/String, but the message is actually a worldmodel_msgs/ObjectModel. You should modify your subscriber to subscribe to the correct message type, which is worldmodel_msgs/ObjectModel. Originally posted by Dan Lazewatsky with karma: 9115 on 2013-03-30 This answer was ACCEPTED on the original site Post score: 1
{ "domain": "robotics.stackexchange", "id": 13607, "tags": "rosbag" }
Why does circularization of an orbit have a longer time scale than tidal locking?
Question: I'm trying to understand the basic physics of orbital evolution. I know that in a two-body system (a planet orbiting a sun for example), eccentric orbits become circular, and the spin of the planet becomes tidally locked (like the Earth-moon system). However, I don't understand why the timescales of these two processes are different (as stated for example in Storch & Lai (2013)), and whether there's a simple way to quantify the timescale of circularization (a kind of back-of-the-envelope calculation, like the one for tidal locking found here)? Answer: Tidal locking is a primary effect due to a direct torque on a body from its tidal bulge lagging its rotation. This is pictured nicely on Wikipedia with an example from the Earth/Moon system. Tidal orbital circularization is a secondary effect. To quote Daddy Kropotkin's excellent answer to Is the moon's orbit circularizing? Why does tidal heating circularize orbits? tidal torque drives dissipation, and this dissipation brings the binary to a minimum kinetic energy state, i.e. circular orbit, synchronized spins with the orbit, aligned spins with the orbit. There is really no reason to expect the timescales for these two effects to be similar since there are different mechanisms at play. Another way to think about tidal circularization is to imagine a tidally locked planet in an eccentric orbit. From Kepler's 2nd law, we know the planet moves fastest at periapsis and slowest at apoapsis. So the planet's tidal bulge won't always quite face its star. Instead, the tidal bulge will lead and lag its orientation to the star. This offset will reduce orbital energy at periapsis and increase orbital energy at apoapsis, circularizing the orbit. Tidal heating will also continue to occur, since the tidal bulge will continue to move until the orbit comes to a minimum energy state (a circular orbit). Is there a simple way to quantify the timescale of circularization? No. There is a way, but it is not simple. 
From Rodriguez and Ferraz-Mello 2009, the time scale for orbital circularization of a short-period planet in a 2-body system is $$\tau_e=\frac{3n^{-1}a^5}{18\hat{s}+7\hat{p}}$$ Here, $n$ is the mean orbital motion, $a$ is the orbital semi-major axis, and $$\hat{s} = \frac{9k_{d*}m_p}{4Q_*m_*}R_*^5$$ $$\hat{p} = \frac{9k_{dp}m_*}{2Q_pm_p}R_p^5$$ where $Q$ are quality factors defined by $Q_*=|\epsilon'_{0*}|^{-1}$ and $Q_p=|\epsilon'_{2p}|^{-1}$. Here, $\epsilon'_{0*}$ and $\epsilon'_{2p}$ are lag angles associated with the tidal waves whose frequencies are $2\Omega_*-2n$ for the star and $2\Omega_p-n$ for the planet. The angular velocity of the rotation of the tidally deformed body is $\Omega$. The masses, radii, and dynamical Love numbers for the star and planet are respectively $m_*$,$R_*$,$k_{d*}$ and $m_p$,$R_p$,$k_{dp}$. The above equations also assume no obliquities. That is, the axis of rotation of each body is perpendicular to the orbital plane. Without this assumption, things really start to get complicated. The locking time equation for a planet around a star is much simpler: $$\tau_{l}=\frac{2\Omega_p a^6m_pQ}{15Gm_*^2k_{dp}R_p^3}$$ Here, $G$ is the gravitational constant, and $Q$ is the dissipation function of the planet. Notes: The Earth-Moon system is not an ideal system to discuss tidal orbital circularization since the gravitational effects of the Sun are acting against such circularization. My tidal locking time equation looks different from Wikipedia because I wanted to be consistent with notation internal to this answer. In general, tidal lock will occur faster than orbital circularization because tidal lock is dependent on the amount of lag in the tidal bulge, whereas circularization is dependent on the change in the lag in the tidal bulge as a function of orbital speed change. However, there may be cases in which tidal circularization occurs sooner than tidal lock. 
Take Venus, for example, which has the least eccentric orbit of the planets, but may be prevented from tidal lock with the Sun due to torque from its atmospheric tides.
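As an order-of-magnitude illustration, the locking-time formula above can be evaluated for rough, assumed Earth-Sun-like numbers ($Q$, $k_{dp}$ and the other inputs below are placeholder round values, not fitted ones):

```python
G = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2

def lock_time(omega_p, a, m_p, m_star, Q, k_dp, R_p):
    """tau_l = 2*Omega_p*a^6*m_p*Q / (15*G*m_star^2*k_dp*R_p^3)"""
    return 2 * omega_p * a**6 * m_p * Q / (15 * G * m_star**2 * k_dp * R_p**3)

# Illustrative, assumed inputs for an Earth-like planet at ~1 au
tau = lock_time(omega_p=7.3e-5,   # rad/s, ~one rotation per day
                a=1.5e11,         # m
                m_p=6.0e24,       # kg
                m_star=2.0e30,    # kg
                Q=100,            # dissipation function (assumed)
                k_dp=0.3,         # dynamical Love number (assumed)
                R_p=6.4e6)        # m

print(f"{tau / 3.15e7:.2e} years")   # far longer than the age of the universe
```

The result comes out around $10^{11}$ years with these inputs, consistent with Earth not being tidally locked to the Sun; the steep $a^6$ dependence is why hot Jupiters lock quickly while distant planets never do.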
{ "domain": "astronomy.stackexchange", "id": 6219, "tags": "orbital-mechanics, tidal-forces, tidal-locking" }
Simplifying and improving (namely DRY) for flight info fetcher
Question: I've put together this simple fetcher for text data (which I just copied and pasted from a flight info website) - it takes in text data, and spits out an array of objects containing values for each property of each flight it finds. I thought it would be a good accompaniment exercise to "Eloquent Javascript: Chapter 3, Data Structures: Objects and Arrays." I'm looking for ways in which I should condense the functions, or possibly merge their tasks. I don't think there's a whole lot that can be shared amongst each fetch function, however. I think there may be a simpler way of noting the index of each item (found from the specific fetcher function), and then passing that on to the next fetcher function, but I'm not sure if that applies for each property of the flight, and I'm not sure what pattern I would use for that. General code critique or advice? *Note: I held off on adding fetchers for timeSched, status, and onSched until I get some feedback here, thanks!* var flightObject = { /* We assume our text value comes in this form: airlineCode airlineName destAbbrev dest timeSched status onSched */ text: "9E 3801 Pinnacle Airlines (MSP) Minneapolis 3:38 PM Landed On-time\nDL 3801 Delta Air Lines (MSP) Minneapolis 3:38 PM Landed On-time\n1I 131 Netjets Aviation (MEM) Memphis 3:06 PM Scheduled On-time\n1I 880 Netjets Aviation (HPN) Westchester County 3:06 PM En Route On-time\nRAX 308 Royal Air Freight, Inc. 
(FDY) Findlay 3:06 PM Landed On-time\nWN 627 Southwest Airlines (FLL) Fort Lauderdale 3:16 PM En Route On-time\nWN 2541 Southwest Airlines (SAT) San Antonio 3:35 PM En Route Delayed\nWN 1939 Southwest Airlines (LAS) Las Vegas 3:35 PM En Route On-time\nFIV 540 Citationshares (PWK) Chicago 3:10 PM Scheduled On-time", /* Here is our function that grabs the Flight Number (airlineCode), Airline (airlineName), Destination Abbreviation (destAbbrev), and Destination Long Title (dest), Scheduled Time (timeSched), Status of Flight (stat), and whether flight is on time (onSched) */ extractFlight: function() { /* Split paragraphs into lines */ var paragraphs = this.text.split("\n"); /* Get index for parenthesis which will help in finding destAbbrev note: start at i = 3 because we know it can't occur earlier due to data's nature. */ function getParenthIndex(){ for( var i = 3 ; i < words.length ; i++ ) { var word = words[i]; if (word.charAt(0) === "(") { return i; } } } function getAirlineName(){ var parenthIndex = getParenthIndex(); var airlineName = ""; /* note: we start for loop at i = 2 because we know it can't occur earlier due to data's nature. */ for( var i = 2; i < parenthIndex; i++ ) { var word = words[i]; if (i === parenthIndex - 1) { return airlineName += word; } else { airlineName += word + " "; } } } /* Grab destination abbreviation using index of word that starts with parenthesis as guide */ function getDestAbbrev(){ var parenthIndex = getParenthIndex(); return words[parenthIndex]; } /* Grab destination using index of word that starts with parenthesis as guide, while searching for number to know when to stop. 
*/ function getDest(){ var parenthIndex = getParenthIndex(); var dest = ""; for( var i = parenthIndex + 1 ; i < words.length ; i++ ) { var word = words[i]; var re = /\d/; if (!re.test(word)){ if (i === parenthIndex + 1) { dest+= word; } else { dest += " " + word; } } else { return dest; } } } /* Take array and add flight objects by looping through each paragraph and grabbing each property value we're interested in. */ var flights = []; for( var i = 0 ; i < paragraphs.length ; i++ ) { var paragraph = paragraphs[i]; var words = paragraph.split(" "); /* Now we find the flight number which is the 1st and 2nd word */ var flightCode = words[0] + " " + words[1]; var flightCodeConden = words[0] + words[1]; var airlineName = getAirlineName(); var destAbbrev = getDestAbbrev(); var dest = getDest(); flights[flightCodeConden] = { "flightCode" : flightCode, "airlineName" : airlineName, "destAbbrev" : destAbbrev, "dest" : dest }; } console.log(flights); } }; Answer: The individual flights here are good candidates for being individual objects, along the following lines. I take a slightly different approach of first identifying the index of the flight code and the index of the date, and then using that to parse all the other information using words.slice. That simplifies things so much you don't have to worry about passing indices around between different functions (though if you did need to do that as your parser gets more complicated, you could do so by making the relevant indices properties of the Flight object). 
function Flight(text) {
    var ch, i, words = text.split(' '), parenthIndex, dateIndex;
    for (i = 3; i < words.length; i++) {
        ch = words[i].charAt(0);
        if (ch === "(") {
            parenthIndex = i;
        } else if (ch >= '0' && ch <= '9') {
            dateIndex = i;
            break;
        }
    }
    this.airlineName = words.slice(2, parenthIndex).join(' ');
    this.destAbbrev = words[parenthIndex];
    this.dest = words.slice(parenthIndex + 1, dateIndex).join(' ');
    this.code = words[0] + ' ' + words[1];
    this.id = words[0] + words[1];
}
Then in your extractFlights function you only need the following, to break the text down into individual lines and send them to new Flight objects to be parsed.
extractFlights: function() {
    var paragraphs = this.text.split("\n"),
        flights = {}, f, p;
    while (p = paragraphs.pop()) {
        f = new Flight(p);
        flights[f.id] = f;
    }
    console.log(flights);
}
(fiddle)
{ "domain": "codereview.stackexchange", "id": 2882, "tags": "javascript" }
PrimeSense Carmine doesn't work, but ASUS Xtion does (OpenNI launch: devices connected, but not found)
Question: [ROS Fuerte, Ubuntu Lucid] Hi all, I can use an ASUS Xtion Pro Live successfully, but a new PrimeSense Carmine 1.09 (short range) does not work. I don't have the 1.08 (long range) to test. Has anyone used the PrimeSense on Fuerte or Groovy? Thanks.
For the working ASUS sensor, I run:
$ roslaunch openni_launch openni.launch camera:=camera depth_registration:=true load_driver:=true publish_tf:=true
$ rosrun rviz rviz
The driver reports:
Number devices connected: 1
1. device on bus 001:15 is a PrimeSense Device (600) from PrimeSense (1d27) with serial id ''
Searching for device with index = 1
Opened 'PrimeSense Device' on bus 1:15 with serial number ''
rgb_frame_id = '/camera_rgb_optical_frame'
depth_frame_id = '/camera_depth_optical_frame'
$ lsusb -v (in brief)
idVendor 0x1d27
idProduct 0x0600
iManufacturer PrimeSense
iProduct PrimeSense Device
However, when I try to launch with the PrimeSense sensor, the driver reports:
Number devices connected: 1
1. device on bus 001:13 is a PrimeSense Device (601) from PrimeSense (1d27) with serial id ''
Searching for device with index = 1
No matching device found.... waiting for devices. Reason: openni_wrapper::OpenNIDevice::OpenNIDevice(xn::Context&, const xn::NodeInfo&, const xn::NodeInfo&, const xn::NodeInfo&, const xn::NodeInfo&) @ /tmp/buildd/ros-fuerte-openni-camera-1.8.6/debian/ros-fuerte-openni-camera/opt/ros/fuerte/stacks/openni_camera/src/openni_device.cpp @ 61 : creating depth generator failed. Reason: USB interface is not supported!
$ lsusb -v (in brief)
idVendor 0x1d27
idProduct 0x0601 <----
iManufacturer PrimeSense
iProduct PrimeSense Device
But interestingly the full information has 2 times more entries than the ASUS. I tried giving openni_launch various device/bus IDs which makes no difference, e.g.
device_id:=001@0
The most similar problem on ROS Answers http://answers.ros.org/question/50325/can-not-use-xtion-pro-live/ suggests installing PrimeSense-Sensor-Stable-5.1.0.41-1, but I already have that version installed.
Installed packages & versions:
i A libopenni-dev - Version: 1.5.4.0-3+lucid1
p libopenni-java -
i libopenni-nite-dev - Version: 1.3.1.5~lucid
i A libopenni-sensor-primesense-dev - Version: 5.1.0.41-2+lucid3
i A libopenni-sensor-primesense0 - Version: 5.1.0.41-2+lucid3
i A libopenni0 - Version: 1.5.4.0-3+lucid1
c openni-dev -
p openni-doc -
i openni-sensor-primesense-bin - Version: 5.1.0.41-1.1+lucid2
i A openni-utils - Version: 1.5.4.0-3+lucid1
p ros-fuerte-ecto-openni -
i ros-fuerte-openni-camera - Version: 1.8.6-s1356636433~lucid
i ros-fuerte-openni-kinect - Version: 0.5.2-s1356648471~lucid
i ros-fuerte-openni-launch - Version: 1.8.3-s1356638395~lucid
i ros-fuerte-openni-tracker - Version: 0.1.3-s1356648033~lucid
p ros-unstable-openni-kinect -
There was a thread on the mailing list about the debs being out of date, so I'm going to try to compile the unstable/forked libraries from avin2... . Originally posted by dbworth on ROS Answers with karma: 1103 on 2013-02-08 Post score: 3 Original comments Comment by dbworth on 2013-02-08: After installing OpenNI 2.1 Beta & avin2/SensorKinect, now the ASUS & PrimeSense both don't work: openni_launch reports: No devices connected.... waiting for devices to be connected . I guess the package name should have been a giveaway! Comment by dbworth on 2013-02-08: Building OpenNI Version 1.5.4.0 from jspricke/debian-openni doesn't work either, No devices connected. Answer: I am using Carmine with ROS. It does not work out of the box, you need to install the drivers after installing ROS openni stuff. Primesense updated their openni webpages and software, it is quite hard to find the old drivers that you need to install.
Here is the link: http://www.openni.org/openni-sdk/openni-sdk-history-2/ Choose the OpenNI-Compliant Sensor Driver v5.1.2.1 for your OS. Compile, and install. ROS openni should work perfectly. The only problem is that, like the ASUS Xtion, the Carmine doesn't report its serial number. In a multiple camera setup, I can't choose which camera to use. Originally posted by Akin with karma: 186 on 2013-02-08 This answer was ACCEPTED on the original site Post score: 3 Original comments Comment by dbworth on 2013-02-08: Thanks @akin I will try that. Here is something you can test: try plugging your sensors into different USB ports. You should find that when you run openni_launch it reports that the device is connected to a different USB bus. Normally you will find the ports are on at least 2 different buses.... Comment by dbworth on 2013-02-08: You can tell openni_launch device_id=bus@0 to load the first device on the specified bus. Let me know if it works? There are potentially other tricks at the OS level, or you could parse 'lsusb' to get the info, but if 2 devices look identical in there, you're out of luck. Comment by AHornung on 2013-02-12: What is the sensor range you get out of the Carmine in practice, are the 0.25m-1.4m realistic? And how is the accuracy at close ranges? Sorry if this is too OT... Comment by dbworth on 2013-02-13: Hi @ahornung, sorry I don't know yet... Comment by fergs on 2013-02-21: In testing, I've seen data in the 0.25-5m range, although the error seems to be pretty high after 2-3m (very ugly point cloud in RVIZ). Comment by liborw on 2013-02-24: I have installed the OpenNI-Compliant Sensor Driver v5.1.2.1 Linux-x64 (there is nothing to compile, just install) and I was able to start the PrimeSense device just once, then it stopped working again. It is recognised but there is no output. Comment by tianb03 on 2013-02-27: Thanks Akin! My problem is that I have kinect working properly, an old version of Xtion works fine also.
A new Xtion got the same problem as yours! After installing the related openni drivers in the software center I still could not get it to work. After installing the 5.1.2.1 driver it works! Thx very much! Comment by paulbovbel on 2013-05-28: @liborw, I'm having the same problem as you for the new Xtion sensors (only works once), did you have any luck solving that issue? Comment by liborw on 2013-05-28: @agentx3r I have found that the driver sometimes dies, so I have changed openni_driver from nodelet to node and added the respawn parameter in the launch file. And it somehow works, but sometimes there is no output either. Comment by bona on 2013-07-02: Well, @Akin I did exactly what you suggested, however even the kinect fails to work after installing the driver. Really took me a while to recover, by reinstalling openni_launch & openni_camera. What version of ROS and Ubuntu were you using? Mine: 12.10+Groovy
{ "domain": "robotics.stackexchange", "id": 12810, "tags": "openni, asus-xtion-pro-live, xtion, asus, primesense" }
Simulating Small Differential Drive Robot in Gazebo
Question: Hi, I am trying to simulate a small 4WD differential drive robot in Gazebo. I am using the erratic_gazebo_plugin diffdrive_plugin controller, which I modified to work with a 4WD system. My simulation works fine when I use the erratic's wheel diameter of .15 m. But when I change the diameter to .1 m for my wheels, the published odometry claims the robot is moving at .7 m/s despite my commanding 1 m/s in the Twist msg. I am using a moment of inertia of ixx = 0.01; ixy = 0; ixz = 0; iyy = .01; iyz = 0; izz = .01. My wheel mass is .1 kg. I think my issue has to do with the mu1, mu2, kp, and kd parameters; currently I have them set at mu1 = 200; mu2 = 100; kp = 1000000; and kd = 1. I guess my issue is that I am unclear how these parameters are used in the simulator, as well as what the effect of decreasing the wheel diameter is. I can trick the simulator by decreasing my wheel diameter in the controller's parameters, but I would like to understand why this phenomenon occurs. Any help would be much appreciated. Thanks, Matt Originally posted by malvarado on ROS Answers with karma: 80 on 2012-11-06 Post score: 2 Original comments Comment by Arkapravo on 2012-11-06: @Malvarado Gazebo has a known issue with very low moments of inertia Comment by malvarado on 2012-11-07: I am using the moment of inertia values that are used in the erratic URDF. What do you recommend I increase them to? Comment by SL Remy on 2013-02-01: was this concern ever resolved? Comment by malvarado on 2013-03-07: No this was never resolved Comment by dmngu9 on 2015-02-03: hey, im trying to add differential drive to my custom robot but dont know how. Can you give me any instructions please? I have my robot urdf file from solidworks already Answer: Try changing the odometrySource parameter of your differential drive plugin from world to encoder. This fixed it for me. Edit: This actually breaks the linear velocity you get from odom.
See also: https://github.com/ros-simulation/gazebo_ros_pkgs/issues/327 Originally posted by AReimann with karma: 88 on 2015-05-26 This answer was ACCEPTED on the original site Post score: 1
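For reference, the contact parameters mentioned in the question (mu1, mu2, kp, kd) are set per link in a gazebo block of the robot description. The sketch below is illustrative only: it uses the question's own values, and "wheel_link" is a placeholder link name.

```xml
<gazebo reference="wheel_link">  <!-- "wheel_link" is a hypothetical link name -->
  <mu1>200</mu1>      <!-- friction coefficient, first direction -->
  <mu2>100</mu2>      <!-- friction coefficient, second direction -->
  <kp>1000000</kp>    <!-- contact stiffness -->
  <kd>1</kd>          <!-- contact damping -->
</gazebo>
```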
{ "domain": "robotics.stackexchange", "id": 11648, "tags": "ros, gazebo, velocity, robot" }
Algorithm for splitting an array into k sub-arrays
Question: We want to implement a data structure that has the following methods: Init(A, k) - gets an array A with n different values and initializes our data structure so that it divides A into k equally sized sub-arrays (±1), such that each value in the i'th sub-array is larger than any value in the (i-1)'th sub-array and smaller than any value in the (i+1)'th sub-array. This method must run in O(n log k) time. Insert(x) - gets a value x which isn't in our data structure, and adds it. This method must run in O(k log n) time. I implemented the Init method using median-of-medians QuickSelect, by dividing the array into k' sub-arrays, where k' is the closest power of 2 to k, and then adjusting my pointers to the dividers by using Select on the smaller arrays, which added only O(n). I'm having some trouble with the Insert part and would appreciate any help. Thanks :) Answer: Insert$(x)$ can be implemented in $O(k \log(n/k))$ time. I will use the terms subarray and group interchangeably. Let $n$ be the number of elements in the data structure before the insert operation. We will maintain this invariant: each subarray contains either $\lfloor n/k \rfloor$ or $\lceil n/k \rceil$ elements. The elements of each subarray are stored in a min-heap and in a max-heap that support element deletion. Insert $x$ into the unique group $i$ whose minimum element is smaller than $x$ and such that the minimum element of the next group (if any) is larger than $x$. Notice that if, before the insert operation, $i$ contained $\lfloor n/k \rfloor$ elements then the operation cannot possibly violate the invariant. This means that if the invariant is violated then $\lfloor n/k \rfloor < \lceil n/k \rceil$, $i$ contained $\lceil n/k \rceil$ elements, and there is a group $j \neq i$ that contained $\lfloor n/k \rfloor$ elements. Let $j^*$ be the value of $j$ that satisfies the above conditions and minimizes $|j^*-i|$.
We can restore the invariant as follows: If $j^* < i$ then all groups $h \in \{j^*+1, \dots, i-1\}$ have $\lceil n/k \rceil$ elements. For each $h= j^*+1, \dots, i$ do the following: pick the minimum element $m$ from group $h$ (this can be done in $O(\log n/k)$ time by a pop() operation on the min-heap of $h$ and a delete($m$) operation on the max-heap of $h$) and add $m$ to group $h-1$ (this amounts to adding $m$ to both the min-heap and the max-heap of group $h-1$). If $j^* > i$ then all groups $h \in \{i+1, \dots, j^*-1\}$ have $\lceil n/k \rceil$ elements. For each $h= i, \dots, j^*-1$ do the following: pick the maximum element $M$ from group $h$ and add it to group $h+1$.
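The insert-and-rebalance scheme above can be sketched in Python. This is a simplified illustration, not the answer's exact structure: each group is a plain sorted list (so the shifts cost more than the stated $O(k \log(n/k))$ bound, which needs the per-group min-/max-heaps), and Init just sorts, standing in for the $O(n \log k)$ median-based partitioning.

```python
import bisect

class KGroups:
    """Simplified sketch: k ordered groups with sizes floor(n/k) or ceil(n/k)."""

    def __init__(self, values, k):
        values = sorted(values)  # stand-in for the O(n log k) median-based Init
        q, r = divmod(len(values), k)
        self.groups, i = [], 0
        for g in range(k):
            size = q + (1 if g < r else 0)
            self.groups.append(values[i:i + size])
            i += size

    def insert(self, x):
        # Group i: the last group whose minimum is below x (else group 0).
        i = 0
        while (i + 1 < len(self.groups) and self.groups[i + 1]
               and self.groups[i + 1][0] < x):
            i += 1
        bisect.insort(self.groups[i], x)
        n = sum(len(g) for g in self.groups)
        hi = -(-n // len(self.groups))  # ceil(n/k)
        if len(self.groups[i]) <= hi:
            return  # invariant still holds
        # Invariant violated: shift extrema one neighbour at a time
        # toward the nearest under-full group j*.
        j = min((g for g in range(len(self.groups)) if len(self.groups[g]) < hi),
                key=lambda g: abs(g - i))
        while j < i:   # pass minima leftward
            bisect.insort(self.groups[j], self.groups[j + 1].pop(0))
            j += 1
        while j > i:   # pass maxima rightward
            bisect.insort(self.groups[j], self.groups[j - 1].pop())
            j -= 1
```

After every insert, group sizes differ by at most one and the groups remain globally ordered, mirroring the invariant argument above.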
{ "domain": "cs.stackexchange", "id": 16226, "tags": "algorithms, data-structures, arrays, selection-problem" }
How to count occurrences in a skip list
Question: If we have a sorted skip list, how can we efficiently count the occurrences of a given element? Answer: Suppose the skip list is constructed with probability parameter $p$. Suppose you want to find the number of occurrences of an element $e$ in the skip list. Algorithm: Perform a standard search of $e$ in the skip list. Suppose the element is found at level $i$. Then the element must also appear in each of the levels from $1$ to $i$, since level $i$ is a subset of level $j$ whenever $j < i$. To count the number of occurrences of the element at any such level, the algorithm makes a linear scan to the left and right from the current position. Since each level is sorted, the algorithm stops once it finds an element different from $e$. This takes time $O(p_j)$, where $p_j$ is the number of occurrences of element $e$ at the $j^{th}$ level. Overall time is: search time + $\sum_{j = 1}^{i} O(p_j)$ = search time + $O(p_{e})$, where $p_e$ is the total number of occurrences of element $e$ in the skip list. If the element $e$ has $t$ copies in the input set, then the expected number of times it appears in the skip list is $t /p$. That is, $\mathbb{E}[p_e] = t/p$. Also, the expected search time in a skip list is $O(\frac{1}{p} \cdot \log_{1/p} n)$. Therefore, the overall search time becomes $O(\frac{1}{p} \cdot \log_{1/p} n) + O(t/p)$.
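Skip lists aside, the core idea — search for one occurrence, then scan neighbours until the value changes — can be illustrated on a plain sorted Python list, with binary search standing in for the skip-list search:

```python
import bisect

def count_occurrences(sorted_seq, e):
    # Binary search stands in for the skip-list search; bisect_left
    # already lands on the leftmost copy, so only a rightward linear
    # scan is needed here (the skip-list version scans both directions
    # from wherever the search happens to land).
    i = bisect.bisect_left(sorted_seq, e)
    count = 0
    while i < len(sorted_seq) and sorted_seq[i] == e:
        count += 1
        i += 1
    return count
```

As in the answer, the total cost is one search plus time proportional to the number of copies found.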
{ "domain": "cs.stackexchange", "id": 18973, "tags": "algorithms, data-structures" }
Would increasing system memory speed reduce a Von Neumann Architecture bottleneck?
Question: A Von Neumann Architecture bottleneck is a limit on the amount of data a computer can process due to limited bandwidth between the CPU and RAM. Possible mitigations to the problem according to Wikipedia are...
- Providing a cache between the CPU and the main memory
- Providing separate caches or separate access paths for data and instructions
- Using branch predictor algorithms and logic
- Providing a limited CPU stack or other on-chip scratchpad memory to reduce memory access
- Implementing the CPU and the memory hierarchy as a system on chip, providing greater locality of reference and thus reducing latency and increasing throughput between processor registers and main memory.
I know that increasing the number of data lanes between the CPU and RAM would help, but would increasing the system memory speed also help? Answer: No, increasing the memory speed won't solve the Von Neumann architecture bottleneck. The reason is that, as memory size increases, the time required to access its contents also increases. So no matter how fast the memory technology is, a large memory will be slower than a small one. That is why faster, smaller memories called caches are used to provide the illusion of a large, fast memory to the user.
{ "domain": "cs.stackexchange", "id": 17948, "tags": "computer-architecture, cpu-cache, cpu, memory-access, cache" }
Should I be concerned about health (bacteria/lice) after having birds fly directly over me?
Question: Yesterday a flock of birds (crows) passed over my head. I've heard that there are many bacteria and lice in the feathers of birds. So I'm thinking about whether to wear or launder the clothes that I wore. What do you recommend I do? Is it ok to wear it again? Answer: The situation and context is of some minor importance: Did you encounter the birds in close proximity, such as startling them from the ground and receive a cloud of dust and feathers as they departed? Did they fly dozens or hundreds of meters overhead? While it's true that many birds have some population of feather lice (Mallophaga), even in healthy birds, they are harmless to humans.1 All animals, including humans, are hosts to vast populations of bacteria. Of these, the majority are innocuous or even necessary or helpful. That said, microorganisms can be the cause of illness and moreso if they are unfamiliar to your immune system.2 Wild animals can be carriers of disease, and one should always exercise caution when handling or working with animals. Getting bitten, scratched, or even removing a carcass from a roadway, for example, has certain risks associated. Note that most situations where illness or health problems arise usually involve contact with fluids, breaks in your skin, etc. Our skin and immune systems are remarkably efficient at keeping pathogens out or neutralized. Unless you were in direct contact with the animals and/or were injured in some way, or had a sneeze or excretion come your way, you should be fine. (And even if you were in direct contact you would likely still be fine. An injury such as a scratch from a wild animal should be disinfected and monitored.) References Temple, S. A. 2001. Form and Function: The External Bird. In Handbook of Bird Biology (S. Podulka, R. Rohrbaugh, Jr., and R. Bonney, eds.) The Cornell Lab of Ornithology. Ithaca, NY. https://en.wikipedia.org/wiki/Bacteria
{ "domain": "biology.stackexchange", "id": 9402, "tags": "ornithology" }
Why do we need an Equilibrium Constant as well as an Acid Dissociation Constant?
Question: I did the experiment to find out the dissociation constant of a weak acid using Henderson equation. However, after going through the theoretical part of the experiment, I am wondering why we need two separate equations for finding out the Acid Dissociation constant and the Equilibrium constant. Simply put, the expression for dissociation constant and equilibrium constant is identical. As such, when an acid dissociates, we could have explained it using the equilibrium constant equation as well; then, why do we need a new term called Acid Dissociation constant for explaining the same? One explanation for this question that I can think of right now is - we can use the equation for equilibrium constant only when an equilibrium is achieved, and the concentration of acid, i.e., [Acid] must be present in the denominator of the equilibrium constant equation. However, we can use the equation for acid dissociation constant irrespective of the fact that the reaction reaches an equilibrium or not. So, the presence of the [Acid] term in the denominator is of little significance in case of strong acid since they get dissociated completely, however, it will still have a great significance for dissociation of a weak acid. I have got an explanation in terms of activity from this answer, but is there a simpler explanation? A more intuitive one will help me understand the concept better. Answer: For any system at equilibrium we can define an equilibrium constant as follows: $$\ce{A(aq) + B(aq) <=> C(aq) + D(aq) }$$ $$K_\text{eq} = \frac{[\ce{C}][\ce{D}]} {[\ce{A}][\ce{B}]}$$ When the solutions are aqueous, we might use the subscript $c$ for the $K$, to indicate concentration. When we are using gas pressure instead of concentration, we use the subscript $p$. In the case of acid dissociation, it's just another reaction at equilibrium with $\ce{H3O+}$ in the products. 
$$\ce{HA(aq) + H2O(l) <=> H3O+(aq) + A^-(aq)}$$ $$K_\text{a} = \frac{[\ce{H3O+}][\ce{A^-}]} {[\ce{HA}]}$$ The term "acid dissociation constant" is used instead of "equilibrium constant" for added clarity. In this situation they mean the same thing.
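Since the question mentions the Henderson equation, here is a small numeric sketch of the Henderson–Hasselbalch relation, pH = pKa + log10([A⁻]/[HA]), which follows directly from the $K_\text{a}$ expression above (the pKa ≈ 4.76 used below is the standard value for acetic acid):

```python
import math

def ph_henderson(pKa, conc_base, conc_acid):
    # Henderson-Hasselbalch: pH = pKa + log10([A-] / [HA])
    return pKa + math.log10(conc_base / conc_acid)

# Acetic acid buffer (pKa ~ 4.76): equal concentrations give pH = pKa,
# and a 10:1 base-to-acid ratio raises the pH by exactly one unit.
ph_equal = ph_henderson(4.76, 0.10, 0.10)   # = pKa
ph_basic = ph_henderson(4.76, 1.00, 0.10)   # = pKa + 1
```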
{ "domain": "chemistry.stackexchange", "id": 16159, "tags": "physical-chemistry, acid-base, equilibrium, physical-organic-chemistry" }
Adjoint representation in Liouville-von Neumann equation
Question: I am having trouble understanding the adjoint representation of a Lie algebra in the scope of a very specific example, so I thought physics.SE would be the best place to ask. Background: A $N \times N$ density matrix $\rho$ contains only $N^2 - 1$ non-redundant real quantities. Therefore, it is convenient to represent it as real vector $d$ (called coherence or pseudospin vector). One possible parameterization [1] of the density matrix is $$\rho = N^{-1}I + \frac{1}{2} \sum_{j=1}^{N^2-1} d_j s_j,$$ where $s_j$ are the generalized Pauli matrices and $I$ is the identity. This parameterization can be plugged into the Liouville-von Neumann equation $\partial_t \rho = \mathcal{L}(\rho)$ and after applying $\mathrm{Tr}\{ \cdot s_i\}$ on both sides, one receives a differential equation for the vector $d$: $\partial_t d = L d$, where $L$ is a real $(N^2-1)\times (N^2-1)$ matrix with the elements $L_{ij} = \mathrm{Tr}\{ \mathcal{L}(s_j) s_i \}$. Q0: Can the adjoint representation represent elements of a vector space in a different vector space? Basically, we transform the density matrix (Hermitian, $N \times N$) to a vector (real, $N^2-1$), which is an isomorphism between two vector spaces. However, if I am not mistaken, the adjoint representation considers only one vector space. Q1: Is (and if yes, how) the matrix $L$ related to the adjoint representation of the $\mathfrak{su}(N)$ algebra? As far as I understood, the $s_j$ (traceless and Hermitian) span the $\mathfrak{su}(N)$ algebra and can be used to compose both density matrix $\rho$ and Hamiltonian $H$, where the latter enters the Liouvillian $$\mathcal{L}(\rho) = -\mathrm{i}\hbar^{-1} [ H, \rho ].$$ Then, $L$ resembles $\mathrm{ad}_{H} (\rho)$, where the Lie bracket $-\mathrm{i} [\cdot, \cdot]$ is used (I think this bracket must be used with the Hermitian version of the generalized Pauli matrices). Q2: Does the relation in Q1 change when considering a general Liouvillian? 
Of course the commutator term will remain in the Liouvillian, but possibly a dissipation term $\mathcal{G}(\rho)$ (similar to the Lindblad master equation) will be added. The matrix $L$ can be derived for any Liouvillian, but can it still be called adjoint representation? Q3: How does this all relate to the equation $$\mathrm{Ad}_{\exp(x)} = \exp(\mathrm{ad}_x)?$$ The solution for the vector ODE is $d(t) = \exp({Lt})d(0)$. If $L$ is the adjoint representation of the Liouvillian, then the solution for the original density matrix reads $\rho(t) = \exp(\mathcal{L}t) \rho(0) \exp(-\mathcal{L}t)$. This would make sense, but is it correct? And does it hold for general Liouvillians? [1] Hioe, F. T., & Eberly, J. H. (1981). $N$-level coherence vector and higher conservation laws in quantum optics and quantum mechanics. Physical Review Letters, 47(12), 838. Edit #1: Added Q0 for a more basic understanding. Edit #2: I have asked a separate question on maths.SE regarding the change of basis https://math.stackexchange.com/questions/2682901/matrix-exponential-and-change-of-basis Answer: It is unclear to me where your block is, but I suspect it is in the trivial routine translation to adjoint vectors from abstract Lie generators, here in the defining (fundamental) representation. Since the canonical paradigm for all physicists is the Pauli matrices and su(2), that's what suffices to illustrate here, before you complicate life for yourself with the trivial extension to general N. So you wish to recognize how (15) relates to (1) for the 3 Pauli matrices. To spare you complication, let us just use adjoint 3-vectors $\vec d$ instead of $d_i$s, and recall the standard-normalization generators of su(2) in the fundamental are half the Pauli matrices. $$ \rho=\frac{1}{2} I+ \frac{1}{4} \vec{d}\cdot \vec{\sigma}. 
$$ Likewise, for the sake of argument, take the most general Hermitian Hamiltonian $$ H=\hbar \gamma \vec{B}\cdot \vec{\sigma}~/2, $$ where a piece proportional to the identity would not matter, and γ is real for Hermiticity. The von Neumann equation then reduces to (1) of your reference, $$ i \hbar \partial_t \rho = [H,\rho] \qquad \Longrightarrow \qquad \partial_t \vec{d}= \gamma ~\vec{B}\times \vec{d} . $$ Your adjoint representation 3×3 matrix then, sending a 3-vector $\vec d$ to a velocity 3-vector by merely left-multiplication is $\mathbb{L}=\gamma \vec{B}\times$, which is to say $L_{ij}=\gamma \epsilon^{ikj} B^k$. This amounts to a familiar 3-space rotation around an axis parallel to $\vec B$, further amounting to you equation (15)--you should be able to generalize by inserting arbitrary su(N) structure constants in the commutator! Again, bifundamental 2×2 commutator relations were faithfully mapped to 3×3 left-multiplication relations on vectors. (You can see how to generalize to arbitrary N). Indeed, the generic solution is thus your left-multiplication $$ \vec{d}(t)= \exp (t~\mathbb{L}) ~~\vec{d}(0). $$ The point of your reference is that all operations are now left-multiplication matrix operations on 3 vectors ($N^2-1$-vectors in the general case), instead of commutations, so, left and right multiplications on 2×2 (defining) matrices. In this language, the adjoint has reduced to just a routine representation whose generators are in the 3×3 space of the structure constants. Your answer then is just $$ \rho(t) = \frac{1}{2} I+ \frac{1}{4} \vec{d}(t) \cdot \vec{\sigma}, $$ as you noted, and you need not have gone the full bifundamental Ad way, $$ \rho(t) = e^{-itH/\hbar} \rho(0)~ e^{it H/\hbar } $$ for the generic matrix solution of the von Neumann equation. (Differentiate both sides to see that). This expression can be re-written more abstractly as $$ \rho(t) = e^{\operatorname{ad}(-itH/\hbar)} \rho(0) = \operatorname{Ad}(\exp (-itH/\hbar)) ~\rho(0). 
$$ But for your very last in-text ($\cal L$) formula, your expressions appear sound. As you transcribe your expressions, just remind yourself whether you are operating on fundamental rep matrices (via commutation) or else, equivalently, adjoint vectors (via left multiplication). Note added in response to comments. Well, that's why I gave you the su(2) example. Instead of looking at commutators, you get rid of them as described, and look at mere rotations of 3-vectors. The set of three (one for each component of B) 3×3 matrices $\mathbb{L}$ are the generators of su(2) in the adjoint representation. So, the full rotations are the exponentials of linear combinations of these matrices, merely left-multiplying 3-vectors. That's all you need to know about them, and you may virtually forget about commutators at this level, so, yes, even with a dissipation correction in the Liouvillian you produce the suitable $\mathbb{L}$, plug it into the answer, etc... That's the point: that you may forget about the $\cal L$ 2×2 matrices and stick to the $\mathbb{L}$ 3×3 matrices, where you do not (directly) evaluate commutators... (Any irreducible 3-d rep of su(2) is the adjoint, so isomorphic to this: you do not need to know much about commutators here.) It might be easier if you simply tried an explicit example. The generalization to su(N) is straightforward. Again, you transition from N×N matrices and commutators to $(N^2-1)\times (N^2-1)$ matrices left-multiplying $(N^2-1)$-vectors, and forget about the adjoint representation being special. Possibly looking at the Pauli matrix article linked, you could remind yourself how infinitesimal rotations in the bifundamental amount to left-rotations in the adjoint; it is sophomore angular momentum. There, you learned about the Pauli vector map of 3-vectors to 2x2 hermitean traceless matrices: they are both the triplet (adjoint) rep of su(2) in alternate realizations.
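The key identification in this answer — that the commutator $-\mathrm{i}[H,\rho]$ acts on the coherence vector as the cross product $\gamma\,\vec{B}\times\vec{d}$ — can be checked numerically for su(2). A quick sketch with $\hbar\gamma = 1$ and random vectors (the identity part of $\rho$ commutes with everything, so only the traceless part is kept):

```python
import numpy as np

# Pauli matrices
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
sigma = np.array([sx, sy, sz])

def dot_sigma(v):
    # v . sigma as a 2x2 matrix
    return sum(v[i] * sigma[i] for i in range(3))

rng = np.random.default_rng(0)
B = rng.normal(size=3)   # arbitrary field direction (hbar * gamma = 1)
d = rng.normal(size=3)   # coherence vector

H = 0.5 * dot_sigma(B)                 # H = B . sigma / 2
rho_traceless = 0.25 * dot_sigma(d)    # traceless part of rho

# von Neumann commutator in the fundamental vs. cross product in the adjoint
lhs = -1j * (H @ rho_traceless - rho_traceless @ H)
rhs = 0.25 * dot_sigma(np.cross(B, d))

assert np.allclose(lhs, rhs)
```

The check relies on $[\vec{a}\cdot\vec{\sigma},\,\vec{b}\cdot\vec{\sigma}] = 2\mathrm{i}\,(\vec{a}\times\vec{b})\cdot\vec{\sigma}$, which is exactly the passage from commutators on 2×2 matrices to left-multiplication on 3-vectors described above.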
{ "domain": "physics.stackexchange", "id": 47397, "tags": "hamiltonian-formalism, group-theory, representation-theory, lie-algebra, density-operator" }
How to prove P = NP if problem Π ∈ NP-complete and its complement Πᶜ ∈ NP?
Question: How can we prove that P = NP if a problem Π is NP-complete and its complement Πᶜ is in NP? Or equivalently: does P = NP follow if NPC intersects co-NPC? Answer: Proving $NP=co\text{-}NP$ doesn't necessarily mean that $P=NP$. However, the converse direction does hold: assume $P=NP$; then $co\text{-}NP=co\text{-}P=P=NP$.
{ "domain": "cs.stackexchange", "id": 16652, "tags": "complexity-theory, np-complete, np, polynomial-time, p-vs-np" }
Problems finding messages when compiling ROS project
Question: Hi, I've been having some problems lately with ROS and catkin_make. Whenever I compile the whole project, cmake struggles to find certain messages. I would paste the error codes but they are just like "package/msg.h not found". The weird thing is that after trying to compile the project 3 or 4 times, for some reason the compiler ends up finding everything and the whole thing works. I would like to solve this issue; however, after spending some days studying cmake and package files, I could not find the source of the problem. Does anyone know how I could track down this bug? I've been able to isolate this problem to one package, and the only thing different from the rest is that I compiled it with the c++11 flag. Thanks in advance! Originally posted by lavnir on ROS Answers with karma: 3 on 2020-07-17 Post score: 0 Answer: Hi, since the information provided is quite sparse I'm not certain of the answer. However, I'm assuming your messages are defined in the same package as your node. Have you checked that your include directories are set
include_directories(
  include
  ${catkin_INCLUDE_DIRS}
)
and your dependencies are set to include ${${PROJECT_NAME}_EXPORTED_TARGETS} to make sure your messages are built before your node/library...
add_dependencies(my_node ${${PROJECT_NAME}_EXPORTED_TARGETS} ${catkin_EXPORTED_TARGETS})
Originally posted by ipa-jba with karma: 153 on 2020-07-17 This answer was ACCEPTED on the original site Post score: 2 Original comments Comment by lavnir on 2020-07-18: Thanks! That was it, just missed to add the dependencies so that msgs are built before the rest.
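Put together, the message-related portion of such a catkin CMakeLists.txt typically looks like the following sketch (package, message, and target names are placeholders, not taken from the question):

```cmake
add_message_files(FILES MyMessage.msg)
generate_messages(DEPENDENCIES std_msgs)

include_directories(include ${catkin_INCLUDE_DIRS})

add_executable(my_node src/my_node.cpp)
# Ensure generated message headers exist before my_node compiles:
add_dependencies(my_node ${${PROJECT_NAME}_EXPORTED_TARGETS} ${catkin_EXPORTED_TARGETS})
target_link_libraries(my_node ${catkin_LIBRARIES})
```

Without the add_dependencies line the build order between message generation and compilation is unspecified, which matches the "fails a few times, then works" symptom in the question.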
{ "domain": "robotics.stackexchange", "id": 35296, "tags": "c++, ros-kinetic" }
GNURadio signal degradation *above* Nyquist rate
Question: I am going through the basic tutorials for GNURadio, and I have a question related to the sample rate tutorial. In it they demonstrate the degradation effects of sampling at too low a sampling rate. I understand why sampling below the Nyquist rate will result in degradation due to aliasing, but in one of their examples they get significant signal degradation already when sampling a 15 kHz sinusoid at a rate of 32 kHz. That is above the Nyquist rate, so why is the signal degraded? When they go on to sample an 18 kHz sinusoid at a rate of 32 kHz, i.e. below the Nyquist rate, the frequency plot shows the expected aliasing, producing a spike at 14 kHz. But the time domain plot still shows something unexpected, because it does not show a 14 kHz sinusoid (as I would expect after reconstruction) but some other significantly degraded waveform. So there seems to be some source of degradation other than the sampling rate. Could someone clarify? Is it related to the fact that a DFT (specifically an FFT) is used rather than an ideal DTFT? Answer: That is above the Nyquist rate, so why is the signal degraded? It's not degraded in any way, shape, or form. The perceived degradation is purely cosmetic, not functional. See for example: How is sampling affecting this sine wave? But the time domain plot still shows something unexpected No it doesn't. It looks exactly as it should. If that's unexpected, the expectations are wrong. The effect is purely visual and caused by sloppy plotting. You need to be clear about what exactly you are plotting: a discrete sequence (which ideally should be a stem plot) or the representative continuous waveform. The latter needs proper interpolation between the sampled points. At high frequencies the basic "connect the dots" method of interpolation gives visually poor results.
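The frequency-domain side of this is easy to reproduce outside GNURadio. A short NumPy sketch showing that the spectrum of a 15 kHz tone sampled at 32 kHz is intact (no degradation), while an 18 kHz tone aliases to 32 − 18 = 14 kHz:

```python
import numpy as np

fs, N = 32_000, 1024   # sample rate (Hz) and FFT length
n = np.arange(N)

def peak_frequency(f_tone):
    # Sample a tone and locate the strongest bin of its spectrum.
    x = np.sin(2 * np.pi * f_tone * n / fs)
    spectrum = np.abs(np.fft.rfft(x))
    return np.argmax(spectrum) * fs / N

# 15 kHz is below Nyquist (16 kHz): the spectral peak is where it should be.
assert abs(peak_frequency(15_000) - 15_000) < fs / N

# 18 kHz is above Nyquist: it aliases to fs - 18 kHz = 14 kHz.
assert abs(peak_frequency(18_000) - 14_000) < fs / N
```

The time-domain "degradation", by contrast, only appears when the samples are plotted with straight-line interpolation; the samples themselves carry the clean tone.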
{ "domain": "dsp.stackexchange", "id": 10293, "tags": "sampling, gnuradio" }
Can a hovering helicopter travel half the globe in 12 hours?
Question: Suppose we have a helicopter that is able to stay stationary in flight for extended periods of time. If such a helicopter stayed at point A in the sky for 12 hours straight, would it reach the other side of the globe? Answer: No. A helicopter that "stays stationary" does so in relation to the atmosphere around it and the atmosphere pretty much follows the ground underneath it. The atmosphere does not stand still while the earth rotates. If it did, we would experience constant winds on the order of 1000 km/h. That would not be pleasant.
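A back-of-the-envelope number (my own figures, not from the answer): if the atmosphere did stand still while the Earth rotated, the ground speed of the air at the equator would be roughly

```python
# equatorial circumference of the Earth in km (approximate)
earth_circumference_km = 40075.0
hours_per_rotation = 24.0

wind_kmh = earth_circumference_km / hours_per_rotation  # ~1670 km/h
```

which is indeed "on the order of 1000 km/h" — far beyond any observed sustained surface wind.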
{ "domain": "physics.stackexchange", "id": 7017, "tags": "gravity, fluid-dynamics, atmospheric-science, aircraft" }
Parallel pseudorandom number generators
Question: This question is primarily related to a practical software-engineering problem, but I would be curious to hear if theoreticians could provide more insight in it. Put simply, I have a Monte Carlo simulation that uses a pseudorandom number generator, and I would like to parallelise it so that there are 1000 computers running the same simulation in parallel. Therefore I need 1000 independent streams of pseudorandom numbers. Can we have 1000 parallel streams with the following properties? Here $X$ should be a very well-known and widely-studied PRNG with all kinds of nice theoretical and empirical properties. The streams are provably as good as what I would get if I simply used $X$ and split the stream generated by $X$ into 1000 streams. Generating the next number in any stream is (almost) as fast as generating the next number with $X$. Put otherwise: can we get multiple independent streams "for free"? Of course if we simply used $X$, always discarding 999 numbers and picking 1, then we certainly would have property 1, but we would lose in the running time by factor 1000. A simple idea would be to use 1000 copies of $X$, with seeds 1, 2, ..., 1000. This certainly would be fast, but it is not obvious if the streams have good statistical properties. After some Googling, I have found, for example, the following: The SPRNG library seems to be designed for exactly this purpose, and it supports multiple PRNGs. Mersenne twister seems to be a popular PRNG nowadays, and I found some references to a variant that is able to produce multiple streams in parallel. But all this is so far from my own research areas, that I couldn't figure out what is really the state-of-the-art, and which constructions work well not only in theory but also in practice. Some clarifications: I do not need any kind of cryptographic properties; this is for scientific computation. I will need billions of random numbers, so we can forget any generator with a period of $< 2^{32}$. 
Edit: I cannot use a true RNG; I need a deterministic PRNG. Firstly, it helps a lot with debugging and makes everything repeatable. Secondly, it allows me to do, e.g., median-finding very efficiently by exploiting the fact that I can use the multi-pass model (see this question). Edit 2: There is a closely related question @ StackOverflow: Pseudo-random number generator for cluster environment. Answer: You can use an evolution of the Mersenne Twister algorithm developed by Saito and Matsumoto: SIMD-oriented Fast Mersenne Twister (SFMT) SFMT is a Linear Feedback Shift Register (LFSR) generator that generates a 128-bit pseudorandom integer at one step. SFMT is designed to exploit the parallelism of modern CPUs, such as multi-stage pipelining and SIMD (e.g. 128-bit integer) instructions. It supports 32-bit and 64-bit integers, as well as double precision floating point as output. SFMT is much faster than MT on most platforms. Not only the speed, but also the dimensions of equidistribution at v-bit precision are improved. In addition, recovery from a 0-excess initial state is much faster. See the Master's Thesis of Mutsuo Saito for details. The period varies from $2^{607}-1$ to $2^{216091}-1$. Using the same pseudorandom number generator to generate multiple independent streams by changing only the initial values may cause a problem (with negligibly small probability). To avoid the problem, using different parameters for each generator is preferred. This technique is called dynamic creation of the MT parameters. In the SFMT source code you can find some examples of parameter sets (of variable periods) and an awk script to convert a CSV file to a compilable parameter set. There is also a tool called "Dynamic Creation of Mersenne Twister generators". The authors recently developed another modified version of the Mersenne Twister - Mersenne Twister for Graphic Processors - designed to run in GPUs and take advantage of their native parallel execution threads.
The key feature is speed: $5 \times 10^7$ random integers every 4.6 ms on a GeForce GTX 260. The periods of the generated sequences are $2^{11213}-1$, $2^{23209}-1$ and $2^{44497}-1$ for the 32-bit version, and $2^{23209}-1$, $2^{44497}-1$, $2^{110503}-1$ for the 64-bit version. It supports 128 parameter sets for each period; in other words, it can generate 128 independent pseudorandom number sequences for each period. We have developed Dynamic Creator for MTGP, which generates more parameter sets. Indeed they provide a MTGPDC tool to create up to $2^{32}$ parameter sets (i.e. independent streams). The algorithm passes the main randomness tests like Diehard and NIST. A preliminary paper is also available on arXiv: A Variant of Mersenne Twister Suitable for Graphic Processors
{ "domain": "cstheory.stackexchange", "id": 1646, "tags": "cr.crypto-security, dc.parallel-comp, pseudorandom-generators" }
re-write javascript array
Question: I am using a long array of over 300 vars & 7,000 lines of code written like this: var a = []; var b = []; var c = []; var d = []; var e = []; a[0] = "a"; b[0] = "b"; c[0] = "c"; d[0] = "d"; e[0] = "e"; a[1] = "1"; b[1] = "2"; c[1] = "3"; d[1] = "4"; e[1] = "5"; a[2] = "one"; b[2] = "two"; c[2] = "three"; d[2] = "four"; e[2] = "five"; I'm guessing it is the same as a much cleaner and shorter - var a = [a,1,one]; var b = [b,2,two]; var c = [c,3,three]; var d = [d,4,four]; var e = [e,5,five]; Is there an easy or automatic way to rewrite the original array like the 2nd method? Answer: First of all I think you meant: a = ["a","1","one"]; // instead of [a, 1, one] I handle these rare kinds of issues as follows: Open your browser (the following code is tested against Chrome) Open the JavaScript console [Control-Shift-J (Chrome: Windows/Linux)] Copy/paste your own code into the console and then paste the following code copy('abcde'.split('').map(function(varName){ return 'var ' + varName + ' = ' + JSON.stringify(window[varName])+';'; }).join('\n')); If your variable names are more than 1 character use this code instead: (of course the following code won't work with the file example you specified, I changed the variable names for demonstration purposes) copy(['var1', 'myVar2', 'blablabla'].map(function(varName){ return varName + ' = ' + JSON.stringify(window[varName])+';'; }).join('\n')); Hit enter; the above code will copy the following JavaScript code into your clipboard: a = ["a","1","one"]; b = ["b","2","two"]; c = ["c","3","three"]; d = ["d","4","four"]; e = ["e","5","five"]; Paste it into your file, save it and that's all!
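If you'd rather rewrite the file offline than round-trip through the browser, the same regrouping can be scripted; this is a hypothetical sketch of my own (not part of the answer) that only handles the exact `a[0] = "...";` assignment pattern shown in the question.

```python
import re
from collections import defaultdict

# stand-in for the file contents
src = '''a[0] = "a"; b[0] = "b";
a[1] = "1"; b[1] = "2";
a[2] = "one"; b[2] = "two";'''

# collect name -> {index: value} for every string-literal assignment
arrays = defaultdict(dict)
for name, idx, val in re.findall(r'(\w+)\[(\d+)\]\s*=\s*"([^"]*)"', src):
    arrays[name][int(idx)] = val

rewritten = []
for name in sorted(arrays):
    vals = [arrays[name][i] for i in sorted(arrays[name])]
    rewritten.append('var %s = [%s];' % (name, ",".join('"%s"' % v for v in vals)))

print("\n".join(rewritten))
```

For the sample input above this prints `var a = ["a","1","one"];` and `var b = ["b","2","two"];`, matching the compact form asked for.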
{ "domain": "codereview.stackexchange", "id": 3901, "tags": "javascript, array" }
Create a Tree Node from JavaScript Array
Question: I'm trying to create a list of subpaths from a flattened array. Here's the starting point: var flat = [ {path: "/" }, {path: "A" }, {path: "A.A"}, {path: "A.B"}, {path: "A.C"}, {path: "B" }, {path: "C" } ] The goal is that anything under a parent would be a subpath of that. So the goal would look like this: var goal = [ {path: "/" }, {path: "A", subpaths: [ {path: "A.A"}, {path: "A.B"}, {path: "A.C"} ] }, {path: "B" }, {path: "C" } ] I've hobbled together a solution, but it doesn't feel very clean and is probably prone to breakage: var nested = []; for (var i=0; i < flat.length; i++) { // split components up var routes = flat[i].path.split("."); // get parent key var key = routes[0]; // check if we've already added the key var index = -1; for (var j=0; j < nested.length; j++) { if (nested[j].path == key) { index = j; break; } } if (index===-1) { // if we have a new parent add it nested.push(flat[i]) } else { // create subpaths property on new object if (!nested[index].subpaths) { nested[index].subpaths = [] } // add child paths to existing parent nested[index].subpaths.push(flat[i]) } } Here's a working Demo in Stack Snippets var flat = [ {path: "/" }, {path: "A" }, {path: "A.A"}, {path: "A.B"}, {path: "A.C"}, {path: "B" }, {path: "C" } ] var nested = []; for (var i=0; i < flat.length; i++) { // split components up var routes = flat[i].path.split("."); // get parent key var key = routes[0]; // check if we've already added the key var index = -1; for (var j=0; j < nested.length; j++) { if (nested[j].path == key) { index = j; break; } } if (index===-1) { // if we have a new parent add it nested.push(flat[i]) } else { // create subpaths property on new object if (!nested[index].subpaths) { nested[index].subpaths = [] } // add child paths to existing parent nested[index].subpaths.push(flat[i]) } } console.log(nested); I'm also open to using jQuery or Underscore if they expose any functionality that would tidy up the source code. 
I'd also like to modify the code so it could handle this recursively with an unspecified level of depth on each node. Answer: Your code seems to only support 2 levels down. Also, the second loop is costly because it has to search through the array if the path already exists. I believe your structure can be better represented using an object. Lookups won't need searching through, and there's still room for path metadata. var goal = { // Room for "goal" metadata. "/": {}, "A": { // Room for "A" metadata. subpaths: { "A": {}, "B": {}, "C": {}, } }, "B": {}, "C": {}, } Here's an example var paths = [ {path: "/" }, {path: "A" }, {path: "A.A"}, {path: "A.B"}, {path: "A.C"}, {path: "A.D.C"}, {path: "B" }, {path: "C" } ]; // Move out or template into a creator function. function createPath(){ return { subpaths: {} }; } // Resolves the path into objects iteratively (but looks eerily like recursion). function resolvePath(root, path){ path.split('.').reduce(function(pathObject, pathName){ // For each path name we come across, use the existing or create a subpath pathObject.subpaths[pathName] = pathObject.subpaths[pathName] || createPath(); // Then return that subpath for the next operation return pathObject.subpaths[pathName]; // Use the passed in base object to attach our resolutions }, root); } var goal = paths.reduce(function(carry, pathEntry){ // On every path entry, resolve using the base object resolvePath(carry, pathEntry.path); // Return the base object for suceeding paths, or for our final value return carry; // Create our base object }, createPath()); document.write(JSON.stringify(goal));
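The setdefault-per-path-segment idea in the answer reads naturally in other languages too; here is a minimal Python sketch of my own, mirroring the object-keyed structure the answer proposes, that nests dotted paths to arbitrary depth:

```python
paths = ["/", "A", "A.A", "A.B", "A.C", "A.D.C", "B", "C"]

def insert_path(root, path):
    # walk/create one nested dict per path segment
    node = root
    for part in path.split("."):
        node = node.setdefault(part, {})

tree = {}
for p in paths:
    insert_path(tree, p)
# tree["A"] now holds {"A": {}, "B": {}, "C": {}, "D": {"C": {}}}
```

Because lookups are plain dict accesses, there is no linear search for an existing parent, which removes the costly inner loop from the original code.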
{ "domain": "codereview.stackexchange", "id": 15810, "tags": "javascript, array, tree" }
Evolution of dark matter and gas temperature
Question: So I have recently watched a simulation of dark matter density and gas temperature evolution in a universe. However I couldn't find a description of the assumptions under which it was made and what it is telling us. The simulation is here: http://www.illustris-project.org/movies/illustris_movie_cube_sub_frame.mp4 The description next to the video: Time evolution of a 10Mpc (comoving) cubic region within Illustris, rendered from outside. The movie shows on the left the dark matter density field, and on the right the gas temperature (blue: cold, green: warm, white: hot). The rapid temperature fluctuations around massive haloes are due to radiative AGN feedback that is most active during quasar phases. The larger 'explosions' are due to radio-mode feedback. Maybe some of you have spent some time thinking about these things and could answer some of my questions about the simulation? 1) It seems that where we have large dark matter density, we also have high temperature, meaning large matter density. Are these the regions where new galaxies or stars form? 2) It seems that the dark matter density just builds up slowly in this net but does not move anywhere. Does this mean that dark matter is slow (non-relativistic)? 3) The stellar mass counter is increasing, meaning that new stars are forming. Does this mean that initially the universe was more uniform in matter density, but some regions attracted more matter and started forming galaxies? Answer: Standard cosmology assumes homogeneity (as well as isotropy) of the universe. Though we know that this is only an approximation, we also know that for cosmic scales it is a good one. This can be seen in the cosmic microwave background radiation (CMB). However, the initially small inhomogeneities are responsible for galaxy and star formation through gravitational interaction. (This should answer your third question). Careful: the next paragraph is full of speculations!
Whether these irregularities are (quantum) fluctuations in an initial system, say after inflation, or originate in clumps of dark matter is not known. If the latter is the case, this still raises the question why dark matter (DM) clustered in the first place. This question, however, is impossible to answer right now, since we do not know what DM actually is. To answer your first question: yes, these are the regions where galaxies form, but we cannot say why. However, it is unlikely that stars formed in a region of higher DM density, since the density is assumed to be rather homogeneous in the halo of a galaxy, especially in a small part, i.e. a solar system. Since DM does not interact electromagnetically, it doesn't scatter to build stars and planets like luminous matter. The last sentence already kind of addresses your second question. Furthermore, there is a distinction between cold dark matter (CDM) and hot dark matter. While most theories favor CDM (Axions, WIMPs, MACHOs), which is nonrelativistic, there are also hot dark matter models with ultrarelativistic neutrinos.
{ "domain": "physics.stackexchange", "id": 24597, "tags": "cosmology, universe, simulations, dark-matter" }
dynamic reconfigure specific directory
Question: Using generate_dynamic_reconfigure_options(ros/config/params.cfg) in my CMakeLists.txt of a package, some strange directories within the package are created and I don't know why. Here is my structure before a build (top level is the catkin workspace with the toplevel CMakeLists.txt). ├── CMakeLists.txt -> /opt/ros/kinetic/share/catkin/cmake/toplevel.cmake └── src └── pack_template ├── CMakeLists.txt ├── library │ ├── include │ │ └── lib.hpp │ └── src │ └── lib.cpp ├── package.xml └── ros ├── config │ └── params.cfg ├── include │ └── template │ └── template.hpp └── src └── template.cpp I then use catkin build with this CMakeLists.txt cmake_minimum_required(VERSION 2.8.3) project(pack_template) add_compile_options(-std=c++11) find_package(catkin REQUIRED roscpp rospy std_msgs dynamic_reconfigure ) generate_dynamic_reconfigure_options( ros/config/params.cfg ) catkin_package( LIBRARIES ${PROJECT_NAME} CATKIN_DEPENDS std_msgs ) include_directories( include ros/include library/include ${catkin_INCLUDE_DIRS} cfg/cpp ${Boost_INCLUDE_DIRS} ) add_library(${PROJECT_NAME} STATIC library/src/lib.cpp ) add_executable(${PROJECT_NAME}_node ros/src/template.cpp ) set_target_properties(${PROJECT_NAME}_node PROPERTIES OUTPUT_NAME "example_node" PREFIX "") add_dependencies(${PROJECT_NAME}_node ${${PROJECT_NAME}_EXPORTED_TARGETS} ${catkin_EXPORTED_TARGETS} ) target_link_libraries(${PROJECT_NAME} ${catkin_LIBRARIES} ) target_link_libraries(${PROJECT_NAME}_node ${PROJECT_NAME} ${catkin_LIBRARIES} ${Boost_LIBRARY_DIR} ) and my folder structure after a build looks (deleted some hopefully uninteresting things) ├── build │ ├── (...) ├── CMakeLists.txt -> /opt/ros/kinetic/share/catkin/cmake/toplevel.cmake ├── devel │ ├── env.sh -> ~/catkin/dynamic_reconfigure_test/devel/.private/catkin_tools_prebuild/env.sh │ ├── etc │ │ └── (...) 
│ ├── include │ ├── lib │ │ ├── libpack_template.a -> ~/catkin/dynamic_reconfigure_test/devel/.private/pack_template/lib/libpack_template.a │ │ ├── pack_template │ │ │ └── example_node -> ~/catkin/dynamic_reconfigure_test/devel/.private/pack_template/lib/pack_template/example_node │ │ ├── pkgconfig │ │ │ ├── catkin_tools_prebuild.pc -> ~/catkin/dynamic_reconfigure_test/devel/.private/catkin_tools_prebuild/lib/pkgconfig/catkin_tools_prebuild.pc │ │ │ └── pack_template.pc -> ~/catkin/dynamic_reconfigure_test/devel/.private/pack_template/lib/pkgconfig/pack_template.pc │ │ └── python2.7 │ │ └── dist-packages │ │ └── pack_template │ │ ├── cfg │ │ │ └── __init__.py -> ~/catkin/dynamic_reconfigure_test/devel/.private/pack_template/lib/python2.7/dist-packages/pack_template/cfg/__init__.py │ │ └── __init__.py -> ~/catkin/dynamic_reconfigure_test/devel/.private/pack_template/lib/python2.7/dist-packages/pack_template/__init__.py │ ├── setup.bash -> ~/catkin/dynamic_reconfigure_test/devel/.private/catkin_tools_prebuild/setup.bash │ ├── setup.sh -> ~/catkin/dynamic_reconfigure_test/devel/.private/catkin_tools_prebuild/setup.sh │ ├── _setup_util.py -> ~/catkin/dynamic_reconfigure_test/devel/.private/catkin_tools_prebuild/_setup_util.py │ ├── setup.zsh -> ~/catkin/dynamic_reconfigure_test/devel/.private/catkin_tools_prebuild/setup.zsh │ └── share │ ├── catkin_tools_prebuild │ │ └── cmake │ │ ├── catkin_tools_prebuildConfig.cmake -> ~/catkin/dynamic_reconfigure_test/devel/.private/catkin_tools_prebuild/share/catkin_tools_prebuild/cmake/catkin_tools_prebuildConfig.cmake │ │ └── catkin_tools_prebuildConfig-version.cmake -> ~/catkin/dynamic_reconfigure_test/devel/.private/catkin_tools_prebuild/share/catkin_tools_prebuild/cmake/catkin_tools_prebuildConfig-version.cmake │ └── pack_template │ └── cmake │ ├── pack_templateConfig.cmake -> ~/catkin/dynamic_reconfigure_test/devel/.private/pack_template/share/pack_template/cmake/pack_templateConfig.cmake │ └── 
pack_templateConfig-version.cmake -> ~/catkin/dynamic_reconfigure_test/devel/.private/pack_template/share/pack_template/cmake/pack_templateConfig-version.cmake ├── logs │ ├── (...) └── src └── pack_template ├── cfg │ └── cpp │ └── pack_template │ └── templateConfig.h ├── CMakeLists.txt ├── docs │ ├── templateConfig.dox │ ├── templateConfig-usage.dox │ └── templateConfig.wikidoc ├── library │ ├── include │ │ └── lib.hpp │ └── src │ └── lib.cpp ├── package.xml ├── ros │ ├── config │ │ └── params.cfg │ ├── include │ │ └── template │ │ └── template.hpp │ └── src │ └── template.cpp └── src └── pack_template ├── cfg │ ├── __init__.py │ └── templateConfig.py └── __init__.py Is there a way to define a target location to create these directories? I want them to be created in the subfolder ros, where all my ros dependent things are. The paramsConfig.h should go to ros/include, not to src. Edit: There was a mistake in my original post; I edited the package structure. My plan was to put all ros-dependent stuff in package_main/ros/, hence there are an include and a src directory. Second, my ros-independent code should go to package_main/library src and include. Unfortunately, by using the dynamic_reconfigure command ros creates the two directories package_main/cfg and package_main/src. Both are a bit awkward to me, since one may expect the *.cfg file in the cfg directory and the sources in the src directory. Edit 2: Output of tree for better readability Edit 3: Since it seems to be more complex, I created a minimal example, and edited accordingly. Originally posted by mherrmann on ROS Answers with karma: 9 on 2018-06-21 Post score: 0 Original comments Comment by jayess on 2018-06-22: This directory listing is a little confusing. Can you please update your question with a copy and paste of the output of using tree?
Answer: After looking at the example package you provided (and fixing some issues with it, please make sure things compile on your own machine in the future), I believe the following is the problem (from #q69583 in fact): from dynamic_reconfigure.parameter_generator import * This is the line you would use for the old version of dynamic_reconfigure, i.e. the one for rosbuild. For Catkin, the line should read: from dynamic_reconfigure.parameter_generator_catkin import * note the _catkin suffix there. I would suggest taking a look at the How to Write Your First .cfg File tutorial, just to see whether the .cfg file contains some other rosbuild-isms. Edit: changes to the pkg and files I had to make: the .cfg file was not executable the Start() prototype in the .cpp file did not agree with the one in the header the prototype for Stop() in the .cpp file included an extra S, prefixed to the class name the CMakeLists.txt referenced Boost twice without a find_package(Boost ..) anywhere the CMakeLists.txt referenced a non-existing include directory In addition, with the Catkin version of the dynamic_reconfigure generator, the cfg/cpp directory should not be on the include path any longer. Originally posted by gvdhoorn with karma: 86574 on 2018-06-25 This answer was ACCEPTED on the original site Post score: 2 Original comments Comment by mherrmann on 2018-06-25: Sorry, I made some mistakes creating the package. However, the line from dynamic_reconfigure.parameter_generator_catkin import * did the trick. Thank you very much.
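For reference, a minimal Catkin-style .cfg file might look like the following (a sketch only: the parameter name, description and range are hypothetical, not taken from the question's params.cfg; it needs a ROS environment to run and must be executable, matching the first fix listed in the answer):

```python
#!/usr/bin/env python
PACKAGE = "pack_template"

from dynamic_reconfigure.parameter_generator_catkin import *

gen = ParameterGenerator()
# add(name, type, level, description, default, min, max)
gen.add("example_param", int_t, 0, "A hypothetical tunable value", 50, 0, 100)

# the last argument determines the generated header name: paramsConfig.h
exit(gen.generate(PACKAGE, "pack_template", "params"))
```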
{ "domain": "robotics.stackexchange", "id": 31061, "tags": "ros-kinetic, dynamic-reconfigure" }
How to determine that sustained wind speed exceeded the threshold value?
Question: Certain maritime ports and airports have ranges of operation relative to the wind, usually according to a given threshold value of sustained wind speed. If I have the instantaneous values from an anemometer every 50 seconds, how can I determine whether the sustained wind speed exceeded the threshold? Answer: You are essentially asking how to determine what the sustained wind speed is, and as pointed out in a comment, this is merely temporally averaging your 50 s instantaneous wind observations. The specific application you are targeting likely has a standardized definition of a time period over which the averaging should occur. This might be a 3 minute wind, a 5 minute wind or some other value. Once you know this value, you can calculate a rolling average over your time series of instantaneous wind and this rolling average is your "sustained" wind. If your application is aviation, you'll want to keep both the sustained wind and the peak winds, as the difference between sustained wind and peak gust is important to landing aircraft (we call that difference the "gust factor" and it is generally more important than the sustained wind alone). METARs define wind direction and speed as 2 minute averages and define gusts as a minimum 10 kt deviation above the mean during a 10 minute period. See this page for more information on observing wind for METAR applications.
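With readings every 50 s, a 3-sample rolling mean approximates a 2.5-minute averaging period; the window length, threshold, and sample values below are illustrative choices of mine, not a published standard:

```python
# hypothetical instantaneous wind speeds in knots, one reading every 50 s
speeds = [12.0, 14.0, 25.0, 13.0, 12.0, 30.0, 28.0, 27.0, 11.0, 12.0]

window = 3  # 3 x 50 s = 150 s, roughly a 2-minute sustained wind
sustained = [sum(speeds[i:i + window]) / window
             for i in range(len(speeds) - window + 1)]

threshold = 25.0
exceeded = max(sustained) > threshold   # did the sustained wind cross the limit?

gust = max(speeds)                      # peak instantaneous reading
gust_factor = gust - max(sustained)     # the difference that matters for aviation
```

Note that a single 25 kt spike (the third reading) does not push the sustained wind over the threshold here; only the later run of high readings does, which is exactly the behavior a sustained-wind criterion is meant to capture.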
{ "domain": "earthscience.stackexchange", "id": 280, "tags": "meteorology, measurements, wind" }
Converting from base-10 to word
Question: The following code is used to convert a base-10 number to a base-N number, where N is the length of a given alphabet that contains the characters of which a base-N number can consist. The number to convert is always increasing, like from 1 to 45789, not from 536 to 13. The resulting number may not contain zero as its first digit, so I need to carry the one. #include <stdio.h> #include <string.h> #include <stdlib.h> typedef unsigned long long ull; void conv(ull num, const char *alpha, char *word, int base){ while (num) { *(word++)=alpha[(num-1)%base]; num=(num-1)/base; } } int main(){ ull nu; const char alpha[]="abcdefghijklmnopqrstuvwxyzOSR34"; /* "OSR43" was added to show that the letters of alpha are NOT in alphabetical order */ char *word=calloc(30,sizeof(char)); // word always contains null-terminator int base=(int)strlen(alpha); for (nu=1;nu<=1e8;++nu) { conv(nu,alpha,word,base); printf("%s\n",word); } return 0; } This code's working fine but I need to speed it up as much as possible. How do I do it? Answer: Improving conv To convert a base-10 number to base-N, I don't think it gets faster than the conv function you wrote. I would write it a bit differently though: void toBaseN(ull num, const char *alpha, int base, char *word) { while (num) { *(word++) = alpha[(num - 1) % base]; num = (num - 1) / base; } } What I changed: Renamed the function: conv doesn't describe what it does Rearranged the parameters: base is tightly related to alpha (the length), and I think it's good to have out-parameters last Added spaces around the operators to improve readability For the record, this function doesn't return what I would expect. It returns the digits in reverse order, which is a bit odd. For example with the given alphabet, it returns ba for 33 and ca for 34, when I would expect ab and ac, respectively.
The function has a number of expectations with regards to the input: word is expected to be big enough to contain the digits word is expected to be filled with nulls base is expected to be the length of alpha The first one is reasonable and quite natural, the others are not, and should be documented in a comment above the function. Improving main const char alpha[]="abcdefghijklmnopqrstuvwxyzOSR34"; char *word=calloc(30,sizeof(char)); // word always contains null-terminator int base=(int)strlen(alpha); for (nu=1;nu<=1e8;++nu) { There are a couple of things I don't like about this bit: Too packed code: add more spaces around operators like I did in the previous point The comment // word always contains null-terminator seems misplaced. It seems you intended it for the line above it. It's more intuitive and readable to have comments above the line they refer to. To make it even more clear, it would be good to leave a blank line before the comment, and perhaps even after the statement What is magic number 30? It would be better to make this a global constant base is derived from alpha, it's tightly related, so I'd move these closer to each other I suggest this writing style: const char alpha[] = "abcdefghijklmnopqrstuvwxyzOSR34"; int base = (int) strlen(alpha); // word always contains null-terminator char *word = calloc(30, sizeof(char)); for (nu = 1; nu <= 1e8; ++nu) { Improving speed As I mentioned above, the general functionality of conv without a context is as fast as it can be. (And incorrect: normally I'd expect the digits to be reversed.) In the context of printing base-N numbers within the range [start : end], you can do better. You could convert start and end to base-N, to do the counting in base-N. This will be significantly faster than counting in base-10 and converting in every step.
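For readers who want to cross-check the scheme outside C, the same bijective base-N conversion is easy to prototype; this Python sketch is mine (not part of the review) and deliberately reproduces the C function's reversed-digit output:

```python
def to_base_n(num, alphabet):
    # bijective base-N: there is no zero digit, so 1 maps to the first letter,
    # exactly like the (num - 1) % base trick in the C conv() function
    base = len(alphabet)
    digits = []
    while num:
        digits.append(alphabet[(num - 1) % base])
        num = (num - 1) // base
    return "".join(digits)  # least-significant digit first, as in the C code

alpha = "abcdefghijklmnopqrstuvwxyzOSR34"
# matches the review's observation: 33 -> "ba", 34 -> "ca" (digits reversed)
```

Checking small values against the C program's output is a quick way to validate any faster counting-in-base-N rewrite.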
{ "domain": "codereview.stackexchange", "id": 12401, "tags": "algorithm, c" }
Stretching a rod?
Question: The stress applied on a rod is linearly proportional to its strain. But shouldn't the opposite be true? I mean if you pull particles further apart doesn't the force they apply on each other decrease because the distance between them increases? Kinda like gravity? Answer: First we need to understand the force between individual atoms. At relatively large separations (e.g., a few atomic diameters) atoms attract each other with a force that does, as you suggest, get weaker with distance due to polarization and ionic effects that we needn't go into here. If that was all there was to the story, however, collections of atoms would all end up at zero separation, become arbitrarily dense, and spontaneously form black holes. There would be no "rods" in the first place! Fortunately, at very small separations, atoms repel each other due to the large positive charges on their nuclei. That repulsive force is much stronger than the attractive force at small separations, but falls off much faster as the separation increases. As a result there is a unique separation at which the attractive force is balanced by the repulsive force. Moreover, this equilibrium separation is maintained (i.e., it is a "stable equilibrium") due to the facts that 1) If the atoms are actively pulled apart to a slightly larger separation distance, the now larger attractive force will try to pull the atoms back together and 2) If the atoms are actively pushed together to a slightly smaller separation distance, the now larger repulsive force will try to push the atoms back apart. All of the above is illustrated in the graph below. Notice that the attractive and repulsive forces balance at the equilibrium separation producing zero net force. Notice also that for small increases in separation the net force becomes attractive. 
Notice especially, that for very, very, small increases in separation (the kind that you get when you try to stretch a rod) the net force, the "stress," becomes more attractive in nearly direct (i.e., linear) proportion to the increase in separation, the "strain." Finally, notice that, if you increase the separation by enough, then the net attractive force does start decreasing again … as you'd expect if you rip the atoms far enough apart.
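The near-linearity described above can be stated in one line. For any interatomic potential $U(r)$ with a stable minimum at the equilibrium separation $r_0$ (the Lennard-Jones potential is a common illustrative choice, though the answer names no specific model), a Taylor expansion about $r_0$ gives $$F(r) = -\frac{dU}{dr} \approx -U''(r_0)\,(r - r_0),$$ since $U'(r_0) = 0$ at the minimum. For small stretches the restoring force is therefore proportional to the displacement, with effective spring constant $k = U''(r_0)$ — which is Hooke's law, i.e. stress linearly proportional to strain.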
{ "domain": "physics.stackexchange", "id": 10778, "tags": "newtonian-mechanics, material-science, stress-strain" }
Pulling yourself upward with a pulley — How is this consistent with energy conservation?
Question: Suppose you are in a cage suspended by a massless rope that goes around an ideal frictionless pulley. You are supposed to pull yourself up using the rope. You and your cage together are of mass $m$. Let's say you are pulling yourself up at a constant velocity, so the acceleration is zero. We can solve for the force $T$ you have to apply to the rope. You apply a force $T$ on the rope downward, which puts a tension on the rope. Your cage is pulled up by the rope with a force $T$ upward. By Newton's third law, there is an additional force $T$ upward. The gravitational force on the cage is $mg$ downward. Assuming you are pulled at constant velocity, we must have $$ 2T - mg = 0, $$ so then $$ T = \frac{mg}{2}. $$ Now if there is a mistake here, please correct me, because this solution is crucial to my question. This seems consistent with hint and solutions found on other parts of the internet. Question My question is, how is this consistent with conservation of energy? Let's say you pull yourself up to a height $h$ from the ground. The amount of work you've done is $W = \int_{[0, h]} F\, dx = mgh/2$, which is how much energy you've put in to lift yourself, but you've gained gravitational potential energy $\Delta U = mgh$. It seems you've gained more energy than you've put in. How do you resolve this apparent paradox? I realize that in order for the person to go up by $h$, they would have to pull $2h$ worth of rope. Presumably, we would want to plug in $2h$ into the work formula and we would get the "correct answer" here. What I'm wondering exactly is, how can we justify plugging in $2h$ instead of $h$ into the work formula? Moreover, what exactly is wrong with my reasoning to begin with? Plugging in $2h$ just because they had to pull $2h$ worth of rope seems ad-hoc, so I am wondering if there is a way to justify it from first principles. Answer: My question is, how is this consistent with conservation of energy? 
The easiest and clearest way to analyze such problems is not using the work formula but using the power formula: $$P=\vec F \cdot \vec v$$ Now, if $\hat x$ is the upwards pointing unit vector then at the hand $$P=\vec F \cdot \vec v=(T\hat x)\cdot(-v\hat x)=-vT$$ and at the cage $$P=\vec F \cdot \vec v=(T\hat x)\cdot(v\hat x)=vT$$ So the total power from the rope is $vT-vT=0$ which is as it must be since the rope gets all of its energy from you. The power from gravity is $$P=\vec F \cdot \vec v= (-mg\hat x) \cdot (v \hat x)=-mgv$$ so mechanical energy is leaving you at a rate of $mgv$ and being converted to gravitational potential energy. In order to be consistent with the conservation of energy there must be some form of internal energy which is decreasing at the rate of $mgv$. This was not specified in the problem, but it is consistent with experience and required for the conservation of energy.
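A quick numeric bookkeeping check (the example numbers are arbitrary choices of mine) confirms the resolution of the original paradox: the hand pulls 2h of rope at tension T = mg/2, so the work done equals the potential energy gained, with no missing factor of two.

```python
m, g, h = 70.0, 9.81, 2.0   # mass (kg), gravity (m/s^2), height gained (m)

T = m * g / 2.0             # tension from the force balance 2T = mg
rope_pulled = 2.0 * h       # both supporting segments shorten as the cage rises h

work_by_person = T * rope_pulled   # = mgh, not mgh/2
delta_U = m * g * h
```

The 2h is not ad hoc: while the cage rises at speed v, the hand pulls rope through at 2v relative to itself, so integrating the power T·2v over the climb gives T·2h.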
{ "domain": "physics.stackexchange", "id": 95146, "tags": "newtonian-mechanics, forces, energy-conservation, work, free-body-diagram" }
How to use multiple encoders(one-hot and numerical) together for PCA
Question: I want to implement PCA on a dataset (retail) but the data is categorical. One-hot encoding on some columns like Gender, Fabric, Brand makes sense, but on other features like price range and size, I would like the encoded values to have some numeric significance, i.e. a higher value actually means something. Any suggestions on implementing both these encodings together for PCA? Answer: There is a method that can handle multiple types of data simultaneously called Generalized Low Rank Models - actually one paper that deals with it is called PCA on a Data Frame. GLRMs in Python (and not only Python) are implemented in H2O. Other than that you could try encoding your categorical data as numeric. There are multiple approaches to this. One example is mean encoding - see this answer for details. For implementation see Category Encoders. BTW if your task is totally unsupervised (you don't have any target) you can choose any continuous feature you have for mean encoding (so you can produce many columns from each categorical column).
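One simple way to mix the two encodings before PCA is to one-hot the nominal columns and map the ordinal columns through explicit rank dictionaries, then standardise everything. A minimal sketch (the column names, categories, and rank orders below are made-up illustrations, not from the question's actual dataset):

```python
import pandas as pd
from sklearn.decomposition import PCA

df = pd.DataFrame({
    "Gender":     ["M", "F", "F", "M"],
    "Fabric":     ["cotton", "silk", "wool", "silk"],
    "Size":       ["S", "M", "L", "M"],
    "PriceRange": ["low", "high", "mid", "mid"],
})

# Nominal columns: one-hot, no implied order
nominal = pd.get_dummies(df[["Gender", "Fabric"]])

# Ordinal columns: explicit rank maps so a higher value really means "bigger"/"pricier"
size_rank = {"S": 0, "M": 1, "L": 2}
price_rank = {"low": 0, "mid": 1, "high": 2}
ordinal = pd.DataFrame({
    "Size":       df["Size"].map(size_rank),
    "PriceRange": df["PriceRange"].map(price_rank),
})

X = pd.concat([nominal, ordinal], axis=1).astype(float)
# Standardise so the 0/1 dummy columns and the rank columns are on comparable scales
X = (X - X.mean()) / X.std(ddof=0)

components = PCA(n_components=2).fit_transform(X)
print(components.shape)  # (4, 2): one 2-component projection per row
```

Note that PCA on one-hot columns is still somewhat questionable statistically (which is why GLRMs exist), but this at least keeps the ordinal information that plain one-hot encoding would throw away.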
{ "domain": "datascience.stackexchange", "id": 3363, "tags": "python, pca, encoding" }
Exercise to create an insertString function to linked-list
Question: Working from 'Programming in C' by Kochan. I'm on an exercise in the chapter 'Pointers'. This was the exercise: 'Write a function called insertEntry() to inset a new entry into a linked list. Have the procedure take as arguments a pointer to the list entry to be inserted (of type struct entry as defined in chapter), and a pointer to an element after which the new entry is to be inserted. I've been struggling through this book but this exercise only took me a few minutes, I'm concerned I'm missing the point. Can you please make some suggestions regarding if I've gone wrong? It compiles and runs fine. Could I have done this better? #include <stdio.h> struct entry { int value; struct entry *next; }; void insertEntry(struct entry *addOn, struct entry *element); int main (void) { struct entry n1, n2, n3, addOn; struct entry *list_pointer = &n1; n1.value = 100; n1.next = &n2; n2.value = 200; n2.next = &n3; n3.value = 300; n3.next = (struct entry *) 0; while(list_pointer != (struct entry *) 0) { printf("%i\n", list_pointer->value); list_pointer = list_pointer->next; } list_pointer = &n1; insertEntry(&addOn, &n3); while(list_pointer != (struct entry *) 0) { printf("%i\n", list_pointer->value); list_pointer = list_pointer->next; } return 0; } void insertEntry(struct entry *addOn, struct entry *element) { element->next = addOn; addOn->value = 400; addOn->next = (struct entry *) 0; } Answer: I'm not sure if your insertEntry function is correct. It seems to be hardcoded to add an entry at the end of the linked list; you want to be able to add an entry anywhere (except at the start of the list which is the object of the next exercise in the book). Here's my solution to this exercise. /* Exercise 10.2 Write a function called insertEntry() to insert a new entry into a linked list. 
Have the procedure take as arguments a pointer to the list entry to be inserted (of type struct entry as defined in this chapter), and a pointer to an element in the list *after* which the new entry is to be inserted. note: inserts n2_5 after n2 */ #include <stdio.h> struct entry { int value; struct entry *next; }; void insertEntry (struct entry *new, struct entry *follow) { new->next = follow->next; follow->next = new; } int main (void) { void insertEntry (struct entry *new, struct entry *follow); struct entry n1, n2, n3, n4, n5, n2_5, *listPtr; n1.value = 100; n1.next = &n2; n2.value = 200; n2.next = &n3; n3.value = 300; n3.next = &n4; n4.value = 400; n4.next = &n5; n5.value = 500; n5.next = (struct entry *) 0; printf ("\nlinked list: "); listPtr = &n1; while ( listPtr != (struct entry *) 0 ) { printf ("%i ", listPtr->value); listPtr = listPtr->next; } printf ("\n"); // insert new entry n2_5.value = 250; printf ("inserting new entry %i ...\n", n2_5.value); insertEntry (&n2_5, &n2); printf ("linked list: "); listPtr = &n1; while ( listPtr != (struct entry *) 0 ) { printf ("%i ", listPtr->value); listPtr = listPtr->next; } printf ("\n"); return 0; }
{ "domain": "codereview.stackexchange", "id": 21220, "tags": "c, linked-list, pointers" }
Relationship of Bias and size of dataset
Question: I was reading the following book: http://www.feat.engineering/resampling.html where the author mentioned the below: Generally speaking, as the amount of data in the analysis set shrinks, the resampling estimate’s bias increases. In other words, the bias in 10-fold cross-validation is smaller than the bias in 5-fold cross-validation. I am unable to understand what the author is trying to say here. My understanding is, as we reduce the size of the dataset, we can induce bias in that a given sample is restricted to those values only, which is not true since it's just a sample of a larger set. Is this what the author meant? If so, then how does 10-fold cross-validation have a smaller bias than 5-fold cross-validation, since 10-fold cross-validation will have fewer samples in each fold than 5-fold cross-validation? Thanks. Answer: In K-fold cross-validation, the training data is divided into K folds. In each iteration, K-1 of the folds are used for training the model and the remaining fold is used for validation. The error estimate is averaged over all K trials to get the overall effectiveness of the model. As can be seen, every data point appears in a validation set exactly once, and in a training set K-1 times. This significantly reduces bias, as we are using most of the data for fitting, and also significantly reduces variance, as most of the data is also being used in the validation set.
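The key point is the *training*-set size per fold, not the fold size itself: with $n$ samples, K-fold CV fits each model on roughly $n - n/K$ points, so 10-fold trains on a larger fraction of the data than 5-fold. A quick arithmetic sketch (the dataset size is an arbitrary illustration):

```python
n = 100  # illustrative dataset size

for k in (5, 10):
    train_size = n - n // k  # each fold holds out n/k points for validation
    print(f"{k}-fold: train on {train_size} of {n} samples")

# 5-fold trains each model on 80 samples, 10-fold on 90. The 10-fold models
# behave more like a model trained on the full dataset, so the resulting
# performance estimate is less pessimistically biased.
```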
{ "domain": "datascience.stackexchange", "id": 10510, "tags": "data-mining, cross-validation, variance" }
Order a delicious pie here
Question: For a quick summary: I've created this internal web application, and I've hit a point where I can really see the mess I've made. I need some help separating the logic, the view, and the data. More detail: Over the past few months, I've been doing all I can to learn more about JavaScript built web sites/applications. I've created the below code, and it seems as though any additions are just ruining the entire thing. This is the fourth time I've started from scratch on this, and I can't get a product I really like. Now, it works as it should, I just don't like the way it's built. I tried using Angular.js, but that was over-kill for this single page app (plus working with the routing was nightmarish). Now I've just created this mess of a main.js file, and it needs refactoring. It'd be great if we could avoid suggesting tools requiring node.js. index.html <!DOCTYPE html> <html> <head> <!-- Basic Page Needs –––––––––––––––––––––––––––––––––––––––––––––––––– --> <meta charset="utf-8"> <title>Pies</title> <meta name="description" content=""> <meta name="author" content=""> <!-- Mobile Specific Metas –––––––––––––––––––––––––––––––––––––––––––––––––– --> <meta name="viewport" content="width=device-width, initial-scale=1"> <!-- FONT –––––––––––––––––––––––––––––––––––––––––––––––––– --> <link href="//fonts.googleapis.com/css?family=Raleway:400,300,600" rel="stylesheet" type="text/css"> <!-- SCRIPTS –––––––––––––––––––––––––––––––––––––––––––––––––– --> <script src="//ajax.googleapis.com/ajax/libs/jquery/1.11.1/jquery.js"></script> <script src="scripts/main.js"></script> <!-- CSS –––––––––––––––––––––––––––––––––––––––––––––––––– --> <link rel="stylesheet" href="css/normalize.css"> <link rel="stylesheet" href="css/skeleton.css"> <link rel="stylesheet" href="css/main.css"> </head> <body> <div class="container"> <header class="row"> <div class="twelve columns"> <h1>Pies</h1> </div> </header> <div class="row"> <div class="six columns" id="newOrderContainer"> <h2>New 
Order?</h2> <span class="response success">Successfully created!</span> <span class="response fail">Something went wrong, try again later.</span> <form id="newOrderForm"> <label for="customerName">Customer name</label> <input class="u-full-width" type="text" id="customerName"> <label for="dueDate">Due date (optional)</label> <input type="date" id="dueDate"/> <div class="flavorSelector u-full-width"></div> </form> </div> <div class="six columns"> <h2>Payments</h2> </div> </div> <div class="row"> <div class="twelve columns" id="ordersContainer"> <h2>Orders</h2> <span class="response success">Successfully paid!</span> <span class="response fail">Something went wrong, try again later.</span> <div id="ordersTableContainer"> <table class="u-full-width"> <thead> <tr> <th>Name</th> <th>Priority</th> <th>Flavor</th> <th>Payment</th> </tr> </thead> <tbody></tbody> </table> </div> </div> </div> <div class="row"> <div class="u-full-width"> <h2>Settings</h2> </div> </div> <div class="row"> <div class="four columns" id="newFlavorContainer"> <h3>New Flavor</h3> <span class="response success">Successfully added!</span> <span class="response fail">Something went wrong, try again later.</span> <form id="newFlavorForm"> <label for="newFlavor">Flavor</label> <input class="u-full-width" type="text" placeholder="Apple, pecan, etc." id="newFlavor"> <div class="button" style="display: block;"> Add New Flavor </div> </form> </div> <div class="eight columns" id="flavorEditContainer"> <h3>Existing Flavors</h3> <div id="flavorsTableContainer"> <table class="u-full-width"> <thead> <tr> <th>Flavor</th> <th>Delete</th> </tr> </thead> <tbody></tbody> </table> </div> </div> </div> </div> </body> </html> There's our main page. I've used the Skeleton CSS boilerplate. 
main.js $(document).ready(function() { $.get("scripts/appdata.json", function(data) { $(".flavorSelector").each(function() { for (flavor in data.flavors) { $(this).append('<div class="button" style="width: 60%">' + flavor + '</div>'); } }); for (order in data.orders) { var details = data.orders[order]; if (details.paid === false) { $("#ordersTableContainer tbody").append(newOrderRow(order, details)); } } for (flavor in data.flavors) { $("#flavorsTableContainer tbody").append(newFlavorRow(flavor)); } $("#newOrderForm div.button").click(function() { var pdata = { "action" : "newOrder", "name" : $("#customerName").val(), "due" : $("#dueDate").val(), "flavor" : $(this).text() }; $.post("scripts/server.php", pdata, function(data) { if (data != true) { $("#newOrderContainer > span.fail").show().delay(5000).fadeOut(600, function() { $(this).hide(); }); } else { $("#customerName").val(""); $("#dueDate").val(""); fetchOrders(); $("#newOrderContainer > span.success").show().delay(5000).fadeOut(600, function() { $(this).hide(); }); } }); }); $(".payment-form div.button").click(function() { var pdata = { "action" : "payOrder", "hash" : $(this).parents("tr").data("hash"), "paid" : $(this).siblings("input").val() }; $.post("scripts/server.php", pdata, function(data) { if (data != true) { $("#ordersContainer > span.fail").show().delay(5000).fadeOut(600, function() { $(this).hide(); }); } else { $("tr[data-hash=" + pdata["hash"] + "]").fadeOut(600, function() { $(this).remove(); }).delay(1000); $("#ordersContainer > span.success").show().delay(5000).fadeOut(600, function() { $(this).hide(); }); } }); }); }, 'json'); $("#newFlavorForm div.button").click(function() { var pdata = { "action" : "newFlavor", "flavor" : $("#newFlavorForm input[type=text]").val() }; $.post("scripts/server.php", pdata, function(data) { if (data != true) { $("#newFlavorContainer > span.fail").show().delay(5000).fadeOut(600, function() { $(this).hide(); }); } else { $("#newFlavorForm 
input[type=text]").val(""); $("#newFlavorContainer > span.success").show().delay(5000).fadeOut(600, function() { $(this).hide(); }); } }); }); }); function newOrderRow(hash, data) { var name = data.name, flavor = data.flavor, paymentForm = "<form class='payment-form'><input type='text'/><div class='button paid'>Paid</div></form>"; var priority = "<div class='priority' style='background: " + getPriority(Math.floor(Date.now() / 1000), data.made, data.due) + "'></div>"; return "<tr data-hash='" + hash + "'><td>" + name + "</td><td>" + priority + "</td><td>" + flavor + "</td><td>" + paymentForm + "</td></tr>"; } function newFlavorRow(flavor) { return "<tr><td><input type='text' value='" + flavor + "'/></td><td><span>Delete</span></td></tr>"; } function getPriority(now, made, due) { var colors = ["#A30E0E", "#FF9401", "#6FBF0D"]; var marks = [172800, 64800, 0]; var elapsed = now - made; if (due == "") { for (var i = 0; i < marks.length; i++) { if (elapsed > marks[i]) { return colors[i]; } } } var until = due - now; var i = 0; colors.reverse(); for (var i = 0; i < marks.length; i++) { console.log(until, ">", marks[i], i); if (until > marks[i]) { return colors[i]; } } } I know, it's bad. Everything is mixed together, and I don't know what to do! Suggestions for architectures would be great, but if you could help me fit this into some framework, then would be great too. Right now, I've got the data stored in a JSON file. I'd prefer not to have it in an RDBMS like MySQL, but I've never worked with anything else so I'm open to suggestions! 
The data I have is looking like this: { "flavors": { "Berry": "", "Apple": "", "Pecan": "" }, "orders": { "43d133ecaed389cf527c93117fc29969": { "name": "Customer1", "flavor": "Berry", "made": 1421471493, "due": 1421884800, "paid": false }, "4bb7e6668a2a63d32a7487267128d406": { "name": "Customer2", "flavor": "Pecan", "made": 1421471572, "due": 1421884800, "paid": false } } } I had some data being paired with flavors, but I got rid of that feature. Is there any chance I can turn this project into something scalable, fast, and modern? Answer: First of all I'd like to congratulate you on the HTML part, that one looks clean and takes almost all best practices into consideration. I say 'almost' since ... yeah well ... these days people argue you should put script tags before the </body> tag. This to avoid http stalling the reflow of the browser. Oh well, for simple applications leave it like that. If you want to scale up and add more libraries in the future, you might consider moving the scripts to the bottom. Next one then, the JavaScript part. If you say it works, well done! You say it looks ugly ... ? Do you also know why? Let me sum that up for you just to make sure we're on the same page: logic mixed with strings is a "nono" if you want to write beautiful code => configurable strings the templating is kinda hard-coded into the logic => templating system the use of globally defined functions => closure the amount of iterations (not sure if I can reduce them, we'll see along the way) => best practices in DOM appending no function describes what the "main" part is (this becomes essential once you scale up) => modular approach You don't want to use some additional libraries for this piece of code? I totally agree! You sound like a good decision maker, now you need a little push in the right direction. So ... I've been spreading my logic here and there over stackoverflow/codereview and I believe it will help you too. 
Please read them as I'm not going to copy/paste the whole idea again. I will provide refactored code and the extra information I'm sure you can take that from some of those answers I've linked. I use Re-Sharper for JavaScript and I like it "green" (read: jshint valid) so let me tell you what goes wrong even though your code "works": ln 24 & 45: Declaration hides parameter data from outer scope ln 32: Use of an implicitly declared global variable 'fetchOrders' (assuming this is a false positive) ln 95 102 104: Duplicate declaration ln 102: Value assigned is not used in any execution path ln 110: Not all code paths return a value So this is how I do it. Take the time to compare the approach below with my previously posted answers. It's actually the same stuff over and over again. Once you get the hang of it, you'll notice the benefit of object literals and how to extend/configure YOUR OWN library. window.DeliciousPie = (function ($, project) { // 1. CONFIGURATION var cfg = { cache: { container: '[data-component="orderpie"]', flavors: '.flavorSelector', flavorsTable: '#flavorsTableContainer tbody', flavorForm: '#newFlavorForm', flavorFormInputs: 'input[type=text]', flavorSuccess: '#newFlavorContainer > span.success', ordersTable: '#ordersTableContainer tbody', orderForm: '#newOrderForm', orderSuccess: '#newOrderContainer > span.success', orderFail: '#newOrderContainer > span.fail', dueDate: '#dueDate', customerName: '#customerName', paymentForm: '.payment-form', paymentSuccess: '#ordersContainer > span.success', paymentFail: '#ordersContainer > span.fail', formTarget: 'div.button' }, data: { hash: 'hash' }, events: { click: 'click' }, tpl: { flavor: '<div class="button" style="width: 60%">{{flavor}}</div>', paymentForm: '<form class="payment-form"><input type="text"/><div class="button paid">Paid</div></form>', priority: '<div class="priority" style="background: "{{priority}}"></div>', orderRow: '<tr 
data-hash="{{hash}}"><td>{{name}}</td><td>{{priority}}</td><td>{{flavor}}</td><td>{{paymentForm}}</td></tr>', flavorRow: '<tr><td><input type="text" value="{{flavor}}"/></td><td><span>Delete</span></td></tr>' }, ajaxOptions: { get: { url: 'scripts/appdata.json', dataType: 'json' }, post: { flavor: { url: 'scripts/server.php', data: { action: 'newFlavor' } }, order: { url: 'scripts/server.php', data: { action: 'newOrder' } }, pay: { url: 'scripts/server.php', data: { action: 'payOrder' } } } }, priorityOptions: { colors: ['#A30E0E', '#FF9401', '#6FBF0D'], marks: [172800, 64800, 0] } }; // 2. ADDITIONAL FUNCTIONS /** * @description Render html template with json data * @see handlebars or mustache if you need more advanced functionality * @param {Object} obj * @param {String} template : html template with {{keys}} matching the object * @return {String} template : the template string replaced by key:value pairs from the object */ function renderTemplate(obj, template) { var tempKey, reg, key; for (key in obj) { if (obj.hasOwnProperty(key)) { tempKey = String("{{" + key + "}}"); reg = new RegExp(tempKey, "g"); template = template.replace(reg, obj[key]); } } return template; } // 3. 
COMPONENT OBJECT project.OrderPie = { version: 0.1, init: function () { this.cacheItems(); if (this.container.length) { this.getData(); this.bindEvents(); } }, cacheItems: function () { var cache = cfg.cache; this.container = $(cache.container); this.flavors = $(cache.flavors); this.flavorsTable = $(cache.flavorsTable); this.flavorForm = $(cache.flavorForm); this.flavorFormInputs = this.flavorForm.find(cache.flavorFormInputs); this.flavorSuccess = $(cache.flavorSuccess); this.flavorFail = $(cache.flavorFail); this.ordersTable = $(cache.ordersTable); this.orderForm = $(cache.orderForm); this.dueDate = $(cache.dueDate); this.customerName = $(cache.customerName); this.orderSuccess = $(cache.orderSuccess); this.orderFail = $(cache.orderFail); this.paymentForm = $(cache.paymentForm); this.paymentSuccess = $(cache.paymentSuccess); this.paymentFail = $(cache.paymentFail); }, bindEvents: function () { var self = this, cache = cfg.cache, data = cfg.data, events = cfg.events, ajaxOptions = cfg.ajaxOptions; this.flavorForm.on(events.click, cache.formTarget, function () { var options = $.extend({}, ajaxOptions.post.flavor, { data: { flavor: self.flavorFormInputs.val() } }); $.ajax(options).done(function (flavorData) { if (flavorData) { self.flavorFormInputs.val(''); self.flavorSuccess.show().delay(5000).hide(600); } }).fail(function () { self.flavorFail.show().delay(5000).hide(600); }); }); this.orderForm.on(events.click, cache.formTarget, function () { var options = $.extend({}, ajaxOptions.post.order, { data: { name: self.customerName.val(), due: self.dueDate.val(), flavor: $(this).text() } }); $.ajax(options).done(function (orderData) { if (orderData) { self.customerName.val(''); self.dueDate.val(''); self.fetchOrders(); self.orderSuccess.show().delay(5000).hide(600); } }).fail(function () { self.orderFail.show().delay(5000).hide(600); }); }); this.paymentForm.on(events.click, cache.formTarget, function () { var options = $.extend({}, ajaxOptions.post.order, { data: { hash: 
$(this).closest('tr').data(data.hash), paid: $(this).siblings('input').val() } }); $.ajax(options).done(function (paymentData) { if (paymentData) { $('[data-hash="' + options.data.hash + '"]').hide(600).delay(1000).remove(); self.paymentSuccess.show().delay(5000).hide(600); } }).fail(function () { self.paymentFail.show().delay(5000).hide(600); }); }); }, getData: function () { var self = this; $.ajax(cfg.ajaxOptions.get).done(function (data) { if (data.hasOwnProperty('flavors')) { self.setFlavors(data.flavors); } if (data.hasOwnProperty('orders')) { self.setOrders(data.orders); } }); }, setFlavors: function (dataFlavors) { var tpl = cfg.tpl.flavor, rows = [], arr = []; this.flavors.each(function () { for (var flavor in dataFlavors) { arr.push(renderTemplate(flavor, tpl)); rows.push(this.addFlavorRow(flavor)); } $(this).append(arr); this.flavorsTable.append(rows); }); }, setOrders: function (dataOrders) { var details, rows = []; for (var order in dataOrders) { if (dataOrders.hasOwnProperty(order)) { details = dataOrders[order]; if (!details.paid) { rows.push(this.addOrderRow(order, details)); } } } this.ordersTable.append(rows); }, addOrderRow: function (hash, data) { var tplVars = $.extend({}, data, { paymentform: cfg.tpl.paymentForm, priority: getPriority(Math.floor(+(new Date) / 1000), data.made, data.due), hash: hash }); return renderTemplate(tplVars, cfg.tpl.orderRow); }, addFlavorRow: function (flavor) { return renderTemplate({flavor: flavor}, cfg.tpl.flavorRow); }, getPriority: function (now, made, due) { var priorityOptions = cfg.priorityOptions, colors = priorityOptions.colors, marks = cfg.priorityOptions.marks, elapsed = now - made, until = due - now; if (!due) { for (var i = 0; i < marks.length; i++) { if (elapsed <= marks[i]) { continue; } return colors[i]; } } colors.reverse(); for (var j = 0; j < marks.length; j++) { console.log(until, ">", marks[j], j); if (until <= marks[j]) { continue; } return colors[j]; } }, fetchOrders: function () { 
console.warn('not implemented function'); } }; // 4. GLOBALIZE NAMESPACE return project; }(window.jQuery, window.DeliciousPie || {})); Once this file is loaded, you can call DeliciousPie.OrderPie.init() on DOM ready and you're good to order some pie (or whatever it is :p) What you gain with this approach: configurable objects extendable objects (multiple HTML classes, activated by JavaScript, with different config if needed) separation of concerns scalable/modular approach event control reflow optimization memory optimization better readability an easy templating sytem for free (no additional libraries required ^^) RESPEC(t) from your colleagues I can invent some more, but all in all, quality code 1) Overhead When a lot of components/modules are loaded from one file and let's say you have a lot of pages ... the overhead you create for undetected modules/components: cfg variable => so try to keep strings in it and only extend cfg in a method cacheItems() => depends on the speed of your selectors and sizzle init() method checking the length of the container So scalability wise this performs very well. Remember JavaScript in itself is really fast. It's the DOM that slows down quite a lot. For that reason it could be interesting to split-up the cacheItems. 2) Templating The templating in my example is also not "ideal". It's very basic but also puts HTML into JavaScript and then you can argue that separation of concerns doesn't apply to this approach. Hence the whole script idea seen in Handlebars/Mustache which covers that. However, I would only take that approach if logic in templating is required {{if}}{{else}}{{/if}}. For string replacement only, keep it simple. Extra logics for the templates while looping can be done inside a specific function as well (ex: addOrderRow, addFlavorRow). Besides, you can always leave a comment <!-- js rendering --> inside your HTML as well ... 
As suggested in the comments: you can create a hidden class or with a data- attribute and pick those chunks up. Some additional reads: JavaScript Module Pattern If I find some time I'll try to test this as well. Probably you'll need to add the flavor data back in there for the templating sytem. And a data-component="orderpie" on the "main" wrapper to kick it in. I hope you are familiar with debugging tools. If not, at least I hope you'll learn a thing or 4. GL!
{ "domain": "codereview.stackexchange", "id": 11784, "tags": "javascript, html, mvc" }
Instant messaging system
Question: I'm now developing an instant messaging system but I'm a little confused: I can't decide which approach I have to choose! I want you to tell me what is good/bad with this programming approach. One friend of mine said it's difficult to read, but it works. /*! | (c) 2012, 2013 by Bellashh*/ /// <reference path="jquery-2.0.0-vsdoc.js" /> $(document).ready(function () { var ChatProvider = { "Friends": [], "People": [], "Conferences": [], "UI": {}, "Utils": { "functions": { "alert": function (str) { alert(str); } /*ChatProvider.Utils.functions.alert*/ , "addFriend": function (friend) { ChatProvider.Friends.push(friend); alert(ChatProvider.Friends.pop().Names); } /*ChatProvider.Utils.functions.addFriend*/ , } /*ChatProvider.Utils.functions*/ , "events": {} /*ChatProvider.Utils.events*/ , "settings": { } /*ChatProvider.Utils.settings*/ } } //var ChatProvider ChatProvider.Utils.functions.addFriend({ "Names": "bellash" }); }); //$(document).ready P.S: for visibility sake, I added some space between line separating objects' properties. Answer: From a once over: drop functions, it is too long and it's obvious that alert in ChatProvider.utils.alert() is a function. Furthermore, specifically for alert, that should be under UI in my mind Your functions are all over the place, I would think that adding a friend would be ChatProvider.friends.add() but you put it in ChatProvider.Utils.functions.addFriend() You treat your object as 1 singleton, what if you need more than 1 instance ? Maybe you are from a Java background, but namespacing the way you approach it should be avoided
{ "domain": "codereview.stackexchange", "id": 6273, "tags": "javascript, jquery" }
How did Rømer measure the speed of light by observing Jupiter's moons, centuries ago?
Question: I am interested in the practical method and I like to discover if it is cheap enough to be done as an experiment in a high school. Answer: Method The method is based on measuring variations in perceived revolution time of Io around Jupiter. Io is the innermost of the four Galilean moons of Jupiter and it takes around 42.5 hours to orbit Jupiter. The revolution time can be measured by calculating the time interval between the moments Io enters or leaves Jupiter's shadow. Depending on the relative position of Earth and Jupiter, you will either be able to see Io entering the shadow but not leaving it or you will be able to see it leaving the shadow, but not entering. This is because Jupiter will obstruct the view in one of the cases. You might expect that if you keep looking at Io for a few weeks or months you will see it enter/leave Jupiter's shadow at roughly regular intervals matching Io's revolution around Jupiter. However, even after introducing corrections for Earth's and Jupiter's orbit eccentricity, you still notice that for a few weeks as Earth moves away from Jupiter the time between observations becomes longer (eventually by a few minutes). At other time of year, you notice that for a few weeks as Earth moves towards Jupiter the time between observations becomes shorter (again, eventually by a few minutes). This few minutes difference comes from the fact that when Earth is further away from Jupiter it takes light more time to reach you than when Earth is closer to Jupiter. Say you have made two consecutive observations of Io entering Jupiter's shadow at t0 and t1 separated by n Io's revolutions about Jupiter T. 
If the speed of light was infinite, one would expect \begin{equation} t_1 = t_0 + nT \end{equation} This is however not the case and the difference \begin{equation} \Delta t = t_1 - t_0 - nT \end{equation} can be used to measure the speed of light since it is the extra time that light needs to travel the distance equal to the difference in the separation of Earth and Jupiter at t1 and t0: \begin{equation} c = \frac{\Delta d}{\Delta t} = \frac{d_{EJ}(t_1)-d_{EJ}(t_0)}{\Delta t} \end{equation} (both numerator and denominator can be negative representing Earth approaching or receding from Jupiter) In reality more than two observations are needed since T isn't known. It can be approximated by averaging observations equally distributed around Earth's orbit accounting for eccentricity or simply solved for as another variable. Practical considerations Note that you will not manage to see Io enter/leave Jupiter's shadow every Io's orbit (i.e. roughly every 42.5 hours) since some of your observation times will fall on a day or will be made impossible by weather conditions. This is of no concern however. You should simply number all Io's revolutions around Jupiter (timed by Io entering/leaving Jupiter's shadow) and note which ones you managed to observe. For successful observations you should record precise time. It might be good to use UTC to avoid problems with daylight saving time changes. After a few weeks you will notice cumulative effect of the speed of light in that the average intervals between Io entering/leaving Jupiter's shadow will become longer or shorter. Cumulative effect is easier to notice. At minimum you should try to make two observations relatively close to each other (separated by just a few Io revolutions) and then at least one more observation a few weeks or months later (a few dozens of Io revolutions). 
This will let you calculate the average time interval between observations within a short and long time period by dividing the length of the time period by the number of revolutions Io has made around Jupiter in that period. The average computed over the long time period will exhibit cumulative effect of the speed of light by being noticeably longer or shorter than the average computed over the short time period. More observations will help you make a more accurate determination of the speed of light. You must plan all of the observations ahead since you can't make the observations when Earth and Jupiter are close to conjunction or opposition. Calculations Once you collected the observations you should determine the position of Earth and Jupiter at the times of the observations (for example using JPL's Horizons system). You can then use the positions to determine the distance between the planets at the time the observations were made. Finally, you can use the distance and the variation in Io's perceived revolution period to compute the speed of light. You will notice that roughly every 18 millions kms change in the distance of Earth and Jupiter makes an observation happen 1 minute earlier or later. Cost The cost of the experiment is largely the cost of buying a telescope that allows you to see Io. Note that the experiment takes a few months and requires measuring time of the observations with the accuracy of seconds. History See this wikipedia article for historical account of the determination of the speed of light by Rømer using Io.
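Using the rule of thumb stated above (the eclipse timing shifts by about one minute per ~18 million km change in Earth-Jupiter separation), a rough value of $c = \Delta d / \Delta t$ falls out directly. The numbers below are illustrative round figures, not measured values:

```python
delta_d = 18e6 * 1e3   # ~18 million km change in Earth-Jupiter separation, in metres
delta_t = 60.0         # corresponding shift in eclipse timing, in seconds

c = delta_d / delta_t
print(f"c ~ {c:.1e} m/s")   # ~3.0e8 m/s, close to the accepted 2.998e8 m/s
```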
{ "domain": "physics.stackexchange", "id": 2081, "tags": "experimental-physics, speed-of-light, measurements, jupiter" }
Coreset and VC dimension
Question: I am trying to understand the notion of $\epsilon$-coreset and its relation with sampling bounds of a range space having a finite VC-dimension. Both of them give an $\epsilon$-approximation sketch of the input data. However, for the latter case we know a characterization, i.e. if a range space has finite VC dimension, then it can be approximated by a small (constant) size sample. On the other hand, for the case of an $\epsilon$-coreset (similar to the concept of VC dimension), is there any known characterization of the problems which can or cannot be sketched by a small-size sample? Thanks, Answer: The answer of Har-Peled is mainly regarding problems where you wish to cover points by shapes (e.g. balls). A strong relation to eps-nets and hitting sets can be found e.g. in his paper here. For the case of e.g. sum or sum of squared distances to a shape or set of shapes (such as PCA, linear regression or k-means) there is a generic reduction from eps-net to coreset using importance sampling. I tried to summarize it here.
{ "domain": "cstheory.stackexchange", "id": 4014, "tags": "approximation-algorithms, computational-geometry, sample-complexity" }
Why does solution for magnetic field of moving charge from special relativity give $dq/dt=0$?
Question: From the electric field of a point charge: $$ \vec{E} = \frac{k Q \vec{r}}{\gamma^{2}r^3(1-\beta^2\sin^2\theta)^{\frac{3}{2}}}, \quad \vec{B} = \frac{\vec{u} \times \vec{E}}{c^2} $$ taking the curl of $\vec B$ gives $$ \nabla \times \vec B = \mu_0\vec J$$ Taking the divergence of the above gives $$ \nabla \cdot \vec J = \frac{\nabla \cdot(\nabla \times \vec B)}{\mu_0} = 0 = -\frac{\partial q}{\partial t} $$ which implies a stationary charge distribution despite the point charge being in motion. The expressions for $\vec E$ and $\vec B$ were derived using special relativity, where the charge $Q$ moves across space with constant velocity relative to an inertial observer. So why exactly do the expressions for $\vec E$ and $\vec B$ for the moving charge fail? I have found answers on this post claiming it is due to not considering the speed-of-light limit, but assuming we allow enough time for the charge in its rest frame to produce an electric field throughout its space, that doesn't seem to be a factor, since we only need to Lorentz transform the four-force experienced in the charge's rest frame to get the same equations above for $\vec E$ and $\vec B$, where the charge is in motion. Other answers on the same post attribute the error to a lack of consideration of charge conservation, which, if we let $q$ in its rest frame be constant (avoiding the need to reconsider the speed-of-light limit), would also be constant in the inertial frame where the charge is moving, by invariance of charge (already assumed in the derivation of the above equations). Answer: taking the curl of $\vec B$ gives $$ \nabla \times \vec B = \mu_0\vec J$$ No, it gives: $$ \nabla \times \vec B = \mu_0\vec J +\epsilon_0\mu_0\frac{\partial \vec E}{\partial t} $$ This is literally one of Maxwell's equations.
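The vector identity the question leans on, $\nabla \cdot (\nabla \times \vec B) = 0$, can be spot-checked symbolically (a sketch using sympy; the field components below are arbitrary polynomials, not the fields from the question):

```python
from sympy import simplify
from sympy.vector import CoordSys3D, curl, divergence

C = CoordSys3D('C')
# An arbitrary smooth vector field with polynomial components.
F = C.x**2 * C.y * C.i + C.y * C.z * C.j + C.x * C.z**2 * C.k

# The divergence of a curl vanishes identically for smooth fields,
# because mixed partial derivatives commute.
result = simplify(divergence(curl(F)))
print(result)  # 0
```

The identity itself is fine; as the answer points out, the error is dropping the displacement-current term from Ampère's law, after which the identity forces $\nabla \cdot \vec J = 0$.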
{ "domain": "physics.stackexchange", "id": 89495, "tags": "electromagnetism, special-relativity, maxwell-equations" }
FFT and spatial frequency basic knowledge
Question: Using the following website: on how the Fourier transform works (interested in the Basic Principle part). I found out that if you have 3 pixels closer together in the spatial domain you'll get more widely spaced fringes in the Fourier domain (frequency domain). How could one explain this? Also, I don't understand why our 3 closer points would have a lower spatial frequency. If they're closer, then the distance ($\lambda$) separating them is smaller, thus the frequency ($\propto \frac{1}{\lambda}$) is greater? Answer: Also I don't understand why our 3 closer points would have a lower spatial frequency? Wider-spaced fringes or points in the Fourier domain mean a higher frequency, not a lower frequency. They are spaced further away from 0, which means they are higher in value. The further away from the center in the frequency domain, the higher the frequency. If they're closer then the distance (λ) separating them is smaller, thus the frequency (∝ 1/λ) is greater? Yes
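A quick numerical check of this reciprocal relationship (my own sketch with numpy): impulses spaced more closely in the signal produce peaks spaced more widely apart in its FFT.

```python
import numpy as np

N = 64

def peak_spacing(step):
    """Spacing between adjacent FFT peaks of an impulse train
    with one impulse every `step` samples."""
    x = np.zeros(N)
    x[::step] = 1.0
    X = np.abs(np.fft.fft(x))
    peaks = np.flatnonzero(X > 1e-6)
    return peaks[1] - peaks[0]

# Closer impulses in the signal -> more widely spaced frequency peaks.
print(peak_spacing(8))  # 8  (peaks at bins 0, 8, 16, ...)
print(peak_spacing(4))  # 16 (peaks at bins 0, 16, 32, 48)
```

An impulse train with period $d$ samples transforms to an impulse train with period $N/d$ bins, so halving the spatial spacing doubles the spacing of the fringes in the frequency domain.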
{ "domain": "dsp.stackexchange", "id": 11072, "tags": "fourier-transform, frequency-domain" }
Grouping elements in array by multiple properties
Question: During work, I was given this task: to group elements with similar properties in the array. In general, the problem is as follows:

var list = [
    {name: "1", lastname: "foo1", age: "16"},
    {name: "2", lastname: "foo", age: "13"},
    {name: "3", lastname: "foo1", age: "11"},
    {name: "4", lastname: "foo", age: "11"},
    {name: "5", lastname: "foo1", age: "16"},
    {name: "6", lastname: "foo", age: "16"},
    {name: "7", lastname: "foo1", age: "13"},
    {name: "8", lastname: "foo1", age: "16"},
    {name: "9", lastname: "foo", age: "13"},
    {name: "0", lastname: "foo", age: "16"}
];

If I group these elements by lastname and age, I will get this result:

var result = [
    [
        {name: "1", lastname: "foo1", age: "16"},
        {name: "5", lastname: "foo1", age: "16"},
        {name: "8", lastname: "foo1", age: "16"}
    ],
    [
        {name: "2", lastname: "foo", age: "13"},
        {name: "9", lastname: "foo", age: "13"}
    ],
    [
        {name: "3", lastname: "foo1", age: "11"}
    ],
    [
        {name: "4", lastname: "foo", age: "11"}
    ],
    [
        {name: "6", lastname: "foo", age: "16"},
        {name: "0", lastname: "foo", age: "16"}
    ],
    [
        {name: "7", lastname: "foo1", age: "13"}
    ]
];

After some experimentation, I came to the following solution:

Array.prototype.groupByProperties = function(properties) {
    var arr = this;
    var groups = [];
    for (var i = 0, len = arr.length; i < len; i += 1) {
        var obj = arr[i];
        if (groups.length == 0) {
            groups.push([obj]);
        } else {
            var equalGroup = false;
            for (var a = 0, glen = groups.length; a < glen; a += 1) {
                var group = groups[a];
                var equal = true;
                var firstElement = group[0];
                properties.forEach(function(property) {
                    if (firstElement[property] !== obj[property]) {
                        equal = false;
                    }
                });
                if (equal) {
                    equalGroup = group;
                }
            }
            if (equalGroup) {
                equalGroup.push(obj);
            } else {
                groups.push([obj]);
            }
        }
    }
    return groups;
};

This solution works, but is this the right and best way? It still looks a little ugly to me. Answer: I felt compelled to write that you probably should combine forEach and map with the answer of Alexey Lebedev.
function groupBy(array, f) {
    var groups = {};
    array.forEach(function(o) {
        var group = JSON.stringify(f(o));
        groups[group] = groups[group] || [];
        groups[group].push(o);
    });
    return Object.keys(groups).map(function(group) {
        return groups[group];
    });
}

var result = groupBy(list, function(item) {
    return [item.lastname, item.age];
});
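For comparison (my own sketch, not part of the review), the same key-based approach is a few lines in Python using a dict keyed by the chosen properties:

```python
from collections import defaultdict

def group_by(items, key):
    """Group items into lists sharing the same key(item) value."""
    groups = defaultdict(list)
    for item in items:
        groups[key(item)].append(item)
    return list(groups.values())

people = [
    {"name": "1", "lastname": "foo1", "age": "16"},
    {"name": "2", "lastname": "foo", "age": "13"},
    {"name": "5", "lastname": "foo1", "age": "16"},
]
result = group_by(people, key=lambda p: (p["lastname"], p["age"]))
print(len(result))  # 2: names "1" and "5" share ("foo1", "16")
```

Like the JS answer, this is a single O(n) pass; the key tuple plays the role of the `JSON.stringify(f(o))` group key.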
{ "domain": "codereview.stackexchange", "id": 18825, "tags": "javascript, sorting, properties" }
C function to check the validity of a date in DD.MM.YYYY format
Question: I was tasked with writing a function that checks whether a date is valid in the following format: DD.MM.YYYY. Here's what I wrote:

bool isValidDate(const char* date) {
    int a, b, c;
    size_t len = strnlen(date, 255);
    for (size_t i = 0; i < len; i++) {
        if (!isdigit(date[i]) && date[i] != '.')
            return false;
    }
    int validConversions = sscanf_s(date, "%2d.%2d.%4d", &a, &b, &c);
    return validConversions == 3 ? true : false;
}

From my testing I can conclude that this function does reasonably well: it doesn't accept anything except dots and digits, it expects to read exactly 3 numbers and it knows how many digits to read for the days and months. Something it doesn't check, however, is whether the days are less than 31 or the months are less than 12. Another thing it doesn't do is check correct-looking but invalid dates such as 30.02.2013 or 31.06.2012. Solving that problem is trivial, but I don't want to write 12 ifs for each month to check the dates. I'm starting to think I should just generate a data structure of valid dates and use that to check, i.e. if the passed date is there, then it's valid. Is that the best way to do it or are there other ways? Answer:

Use descriptive variable names

Instead of int a, b, c, give them more descriptive names:

int day, month, year;

sscanf() will ignore trailing garbage

The function sscanf() will stop parsing after the last conversion. So the string "1.2.3...." will be cleared by your check for digits and period characters, and then sscanf() will read 3 integers and return 3. But obviously, this is an invalid date. It would be best if you could have sscanf() determine the validity of the whole input string. One way is to use the %n conversion to check how many characters of the string were parsed so far, and then check that this corresponds to the whole string.
This is how you can do that:

int day, month, year, chars_parsed;
if (sscanf(date, "%2d.%2d.%4d%n", &day, &month, &year, &chars_parsed) != 3)
    return false;
/* If the whole string is parsed, chars_parsed points to the
   NUL-byte after the last character in the string. */
if (date[chars_parsed] != 0)
    return false;

You don't need 12 ifs to check the month

You can just write:

if (month < 1 || month > 12)
    return false;

And similar for days and perhaps even years if you want to limit the allowed range.

Avoid redundant checks

It is likely that your program will normally handle valid date strings, so you want to optimize for this case. In your code, you are checking for the string to be empty at the start, but this is not necessary; if the input string is empty, then sscanf() will not return 3, so it will already correctly return false. Since most input strings will be valid, checking the string length is just a waste of CPU cycles. Similarly, checking each character to be a digit or a period is redundant if you just use sscanf() with the %n method to check that there was no trailing garbage after the year.

Better date checking

Just checking whether the month is between 1 and 12 and the day between 1 and 31 is not enough. A given month might have fewer than 31 days. There are also leap years to consider. And if you want to allow dates far in the past, you run into the problem that we have had different calendars. To give an idea of how difficult the problem is, watch: https://www.youtube.com/watch?v=-5wpm-gesOY One way to validate the date is to use the C library's date and time conversion routines. After scanning the day, month and year, create a struct tm from that, then convert it to seconds since the epoch using mktime().
This might still accept invalid dates, but if you convert that back to a struct tm, you can check whether that conversion matches the original input:

int day, month, year;
sscanf(date, "%2d.%2d.%4d", &day, &month, &year);
struct tm input = {
    .tm_mday = day,
    .tm_mon = month - 1,
    .tm_year = year - 1900,
};
time_t t = mktime(&input); /* note, this might modify input */
struct tm *output = localtime(&t); /* prefer localtime_r() on systems that support it */
if (day != output->tm_mday || month != output->tm_mon + 1 || year != output->tm_year + 1900)
    return false;

These routines will probably still not handle dates hundreds of years in the past correctly, but it should suffice for recent dates.
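The same round-trip idea is easy to prototype in a higher-level language before committing to the C version (a sketch of my own in Python; here strptime performs the calendar validation — month lengths and leap years — that the mktime round trip does in C):

```python
from datetime import datetime

def is_valid_date(s: str) -> bool:
    """Validate a DD.MM.YYYY date string, including month lengths
    and leap years, by letting strptime do the calendar checking."""
    try:
        datetime.strptime(s, "%d.%m.%Y")
        return True
    except ValueError:
        return False

print(is_valid_date("29.02.2012"))  # True  (2012 is a leap year)
print(is_valid_date("30.02.2013"))  # False (February has no 30th)
print(is_valid_date("31.06.2012"))  # False (June has 30 days)
```

This catches exactly the "correct-looking but invalid" dates the question mentions, without any table of valid dates.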
{ "domain": "codereview.stackexchange", "id": 36351, "tags": "c, datetime, validation" }
Number Partitioning targeting a Ratio of subset sums?
Question: I have a problem which appears to be similar to number partitioning: a set of numbers is partitioned (two-way) such that the sums of the numbers in the two subsets are in a ratio as close as possible to a target ratio (instead of having the minimum difference, as in the number partitioning problem). Can I use the same algorithms? How would I modify them? Is another approach more suitable? Context: I am implementing a load-balancing algorithm for a work-conserving Group Weighted Round Robin (GWRR) scheduler. In some corner cases (some groups have fewer tasks than CPUs in the system), I want to partition the weighted groups and assign them to the subsets of a partitioning of the set of CPUs, so that the ratio of the sums of (predicted) CPU idle times, in the subsets of the CPU partition, is as close as possible to the ratio of the weight sums, in the subsets of the group partition. The group partition splits the groups, using a normal number partitioning algorithm (CKK), so that the group weight sums in each subset have the minimum possible difference (but they are unlikely to match, in the general case). Any ideas? Answer: Yes. Suppose the numbers are $x_1,\dots,x_n$ and the target ratio is $r$. Define $$y=\left\lfloor {1-r \over 1+r} \times (x_1 + \dots + x_n) \right\rceil,$$ where $\lfloor \cdot \rceil$ denotes rounding to the nearest integer, and then look for the partition of $x_1,\dots,x_n,y$ that has minimum difference. I suspect this might be either the solution to your original problem, or very close to it. Why? A partition of $x_1,\dots,x_n$ with exactly the ratio $r$ will have one group whose total is $c (x_1+ \dots + x_n)$ and another whose total is $cr(x_1+\dots+x_n)$. These two must sum to $x_1+\dots+x_n$, so we find that $c+cr=1$, i.e., $c=1/(1+r)$, so their difference is $$c(x_1+\dots+x_n) - cr(x_1+\dots+x_n) = {1-r \over 1+r} (x_1+\dots+x_n).$$ We have defined $y$ to be this difference, rounded to the nearest integer.
So, if you add $y$ to the smaller group, you get a perfect partition of $x_1,\dots,x_n,y$ with difference zero. And the converse is true as well.
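A brute-force sketch of this reduction (my own illustration; a real implementation would use CKK rather than enumerating all subsets):

```python
from itertools import combinations

def ratio_partition(xs, r):
    """Split xs into two groups whose sums approximate the ratio r (0 < r <= 1),
    via the padding-element reduction from the answer."""
    y = round((1 - r) / (1 + r) * sum(xs))   # padding element from the answer
    items = xs + [y]
    total = sum(items)
    # Minimum-difference partition of xs + [y] — brute force for the sketch.
    best = min(
        (combo for size in range(len(items) + 1)
               for combo in combinations(range(len(items)), size)),
        key=lambda combo: abs(total - 2 * sum(items[i] for i in combo)),
    )
    chosen = set(best)
    y_idx = len(items) - 1                   # drop the padding element again
    side_a = [items[i] for i in chosen if i != y_idx]
    side_b = [items[i] for i in range(len(items)) if i not in chosen and i != y_idx]
    return side_a, side_b

a, b = ratio_partition([1, 2, 3, 6], 0.5)
print(sorted([sum(a), sum(b)]))  # [4, 8] -> the 1:2 target ratio
```

For the example, the total is 12, so $y = \lfloor \frac{0.5}{1.5} \times 12 \rceil = 4$; a perfect (8, 8) partition of $\{1,2,3,6,4\}$ exists, and removing $y$ leaves sums 4 and 8 — exactly ratio $r = 0.5$.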
{ "domain": "cs.stackexchange", "id": 8305, "tags": "process-scheduling" }
Quick sort with $K-1$ pivots
Question: I was thinking about quicksort with multiple pivots and I came across this question. How can we efficiently implement a version of Quicksort where we choose $k−1$ pivots to partition an array of unique numbers into $k$ classes? My goal is to demonstrate that this multiary partitioning can be achieved in $O(n \log k)$ time, ensuring that all classes are approximately of the same size (to within 1). I found this paper https://cs.stanford.edu/~rishig/courses/ref/l11a.pdf but it doesn't seem to talk about how to go about partitioning the array into nearly equal. Note: I am just interested in selecting the pivots not the partition algorithm I did try finding the $n/k^{\text{th}}, 2n/k^{\text{th}} \ldots$ smallest element in the array and using them as pivots but the complexity isn't coming right (as each element can be found using BFPRT algorithm in $O(n)$ therefore total complexity would be $O(nk))$ Any Insights will be appreciated. Answer: Hint: Suppose $k$ is a power of two at first, for simplicity. Find the $n/2$th element in the array. Then.... (you fill in the next part)
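One way the hint can be completed (my own sketch, not the answerer's): recurse on medians. `select` below uses `sorted` for brevity; substituting a linear-time selection (e.g. BFPRT) gives the claimed $O(n \log k)$ bound, since there are $O(\log k)$ recursion levels and each level does $O(n)$ total work — unlike picking all $k-1$ order statistics independently, which costs $O(nk)$.

```python
def select(a, i):
    """i-th smallest element (0-indexed). sorted() keeps the sketch short;
    a linear-time selection (BFPRT) is needed for the O(n log k) bound."""
    return sorted(a)[i]

def find_pivots(a, k):
    """Return k-1 pivots splitting `a` (unique values, k a power of two)
    into k nearly equal classes, by recursive median finding."""
    if k <= 1 or not a:
        return []
    p = select(a, (len(a) + 1) // 2 - 1)   # median of the current range
    lo = [x for x in a if x < p]
    hi = [x for x in a if x > p]
    return find_pivots(lo, k // 2) + [p] + find_pivots(hi, k // 2)

print(find_pivots(list(range(16)), 4))  # [3, 7, 11] -> four classes of size 4
```

Each recursion level halves the number of classes still needed, so the pivots come out at (approximately) the $n/k, 2n/k, \ldots$ ranks without ever running a full selection per pivot.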
{ "domain": "cs.stackexchange", "id": 21834, "tags": "algorithms, algorithm-analysis, quicksort, divide-and-conquer, selection-problem" }
Wouldn't an AI that specializes in making other AI be an AGI if they can cooperate?
Question: If said AI can assess scenarios, decide what AI is best suited, and construct new AI for new tasks, then in sufficient time would the AI not have developed a suite of AIs powerful/specialized for their tasks, but versatile as a whole, much like our own brain's architecture? What's the constraint? Answer: If the AI can indeed assess arbitrary scenarios and come up with solutions to handle them, then it would indeed be an AGI. What's the constraint? It doesn't exist. Current programmers are very good at developing AI that can handle specific tasks ("narrow AIs"), but it is currently impossible to build an AI that can assess and handle "general" situations (unlike your proposed algorithm, which possesses that capacity). Theoretically, we can have a program that can build other programs (genetic algorithms are arguably one such example), but handling arbitrary scenarios and problems requires a form of "general intelligence", which we don't know how to program. Therefore, we can't build this machine. It's possible that we could build this machine, but we must first figure out the hard problem of "general intelligence". We're nowhere near reaching that level. If we figure out how to program "general intelligence", then it should be fairly simple to use your approach (building an AGI to assess scenarios and then build "narrow AIs" that can handle those scenarios). Only then can we understand the AGI's limitations and weaknesses, and be able to identify probable constraints on its power. For example, it's possible that such an AGI may be slow in handling arbitrary scenarios and developing the "narrow AIs"... in which case, it may take an absurdly long period of time to develop "a suite of AIs powerful/specialized for their tasks". But until we build the AGI itself, we won't be able to identify its faults or weaknesses. Going beyond that would be science-fiction speculation.
{ "domain": "ai.stackexchange", "id": 129, "tags": "neural-networks, philosophy, agi" }
Ongoing feature selection
Question: If you have a set of n features you have 2^n-1 non-empty feature subsets. As a result, if you pick one of them you are unlikely to have found the best one. To me, it seems intuitive that as you build your model, you would want to look at the things it does badly and try to find features that would help it to improve, or take out features that don't seem to be helping. Although I've seen this done in practice and muddled through this way, I've never seen any formal theory behind it. How do you know which features to add to the set you're training on? And which to remove? Answer: There are various feature selection techniques. The most common techniques rank individual features by how much information they bring with respect to the target, for example with information gain or conditional entropy. Techniques based on individual features are efficient (i.e. fast) and usually help to reduce dimensionality and improve performance. But they are not necessarily optimal, because they cannot take into account the contribution of a subset of features taken together. For example, they might select several features which are highly correlated with each other, even though selecting only one of them would be enough. In order to take into account how features interact, ideally one would train and test a model with every possible subset of features, and then select the best one. However the full exploration of $2^N$ subsets is rarely feasible, but some optimization methods can be used, for example feature selection with genetic learning. Note that there are also feature extraction techniques. In this case the original semantics of the features are not preserved, since the whole set of features is transformed into a new representation.
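The information-gain ranking mentioned in the answer can be sketched in a few lines (my own illustration on a toy dataset):

```python
from collections import Counter
from math import log2

def entropy(labels):
    """Shannon entropy of a label sequence, in bits."""
    n = len(labels)
    return -sum((c / n) * log2(c / n) for c in Counter(labels).values())

def info_gain(feature, labels):
    """Information gain of `feature` about `labels` (higher = more informative)."""
    n = len(labels)
    cond = sum(
        len(sub) / n * entropy(sub)
        for v in set(feature)
        for sub in [[y for x, y in zip(feature, labels) if x == v]]
    )
    return entropy(labels) - cond

y  = [0, 0, 1, 1]   # target
f1 = [0, 0, 1, 1]   # perfectly predictive feature
f2 = [0, 1, 0, 1]   # uninformative feature
print(info_gain(f1, y), info_gain(f2, y))  # 1.0 0.0
```

Ranking by this score is fast, but — exactly as the answer warns — it scores features one at a time, so two redundant copies of `f1` would both rank highly.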
{ "domain": "datascience.stackexchange", "id": 9970, "tags": "feature-selection" }
What electric charges do the $SU(2)$ weak isospin bosons carry?
Question: The $SU(2)$ weak isospin group has three bosons, before symmetry breaking: The $W^1$, $W^2$, and $W^3$ weak isospin bosons. The $W^1$ and $W^2$ mix to form the $W^+$ and $W^-$ bosons with electric charges $+1$ and $-1$, respectively. So far, I can assume that the $W^3$ boson carries a neutral charge. However, I am not sure what the electric charges (which are somewhat present before the photon exists) of the $SU(2)$ triplet bosons ($W^1$, $W^2$, and $W^3$) are. According to Wikipedia, the $SU(2)$ triplet bosons form a Weak Isospin triplet (so one boson has Weak Isospin $+1$, another one has $-1$, and the last one has 0). Also, the formula for electric charge is $Q=T_3+\frac12 Y_w$. Answer: $W^1,W^2$ do not have well defined electric charge. They are not charge eigenstates: $$ \begin{aligned} Q|W^1\rangle&=+i|W^2\rangle\\ Q|W^2\rangle&=-i|W^1\rangle \end{aligned} $$ On the other hand, $W^\pm\propto W^1\pm iW^2$ do have well-defined charge: $$ Q|W^\pm\rangle=\pm |W^\pm\rangle $$
{ "domain": "physics.stackexchange", "id": 36267, "tags": "quantum-mechanics, particle-physics, charge, electroweak, isospin-symmetry" }
Why are usually 4x4 gamma matrices used?
Question: As far as I understand, gamma matrices are a representation of the Dirac algebra, and there is a representation of the Lorentz group that can be expressed as $$S^{\mu \nu} = \frac{1}{4} \left[ \gamma^\mu, \gamma^\nu \right]$$ Usually the representations used for them are the Dirac representation, the chiral representation or the Majorana representation. All of these are 4x4 matrices. I would like to know what the physical reason is that we always use 4x4, since surely higher-dimensional representations exist. My guess is that these are the smallest possible representations and give spin-half fermions as the physical particles, which are common in nature. Would higher-dimensional representations give higher-spin particles? Answer: You have no choice other than to use $4\times 4$ matrices. All these "representations" are different realizations (related by similarity transformations) of the only possible irreducible representation of the Clifford algebra that is spanned by the abstract $\gamma^\mu$. This representation, in a way, is the definition of what a "Dirac spinor" is, and it is usually a representation of the covering group of the rotation group, but only a projective representation of the rotation group itself. Also, it is not always irreducible as a representation of the rotation group (e.g. the 4D Dirac spinor decomposes into the two Weyl spinors and also into two Majorana spinors). You can show in general that the Clifford algebra in $(1,d-1)$ dimensions has its irreducible representations given by a vector space of dimension $2^{\lfloor {d/2}\rfloor}$, which is $2^2 = 4$ for $d=4$, by considering the "raising/lowering operators" ${\gamma^\pm}^k = \gamma^{2k}\pm\gamma^{2k+1}$ in close analogy to the usual ladder operator method for $\mathfrak{su}(2)$.
It turns out that the space spanned by $\lvert s_1,\dots,s_k\rangle $ for $s_i=\pm 1/2$ (the $s_i$ are the eigenvalues of $S^k = [\gamma^{+k},\gamma^{-k}]$) is the only consistent non-trivial irreducible representation you can construct. In odd dimensions, there are two different ones of these that differ by chirality. Another way uses the group of the $\Gamma^M$ constructed by taking products $\gamma^{\mu_1}\dots\gamma^{\mu_k}$ for $k \leq d$ and $\mu_1 < \mu_2 < \dots < \mu_k$. The $M$ runs from $1$ to $2^d$ (another thing one must show...). Any irreducible representation of the Clifford algebra is an irreducible group representation of this group. Now consider $S = \sum_M \rho(\Gamma^M) N\sigma(\Gamma^M)^{-1}$ for two irreducible representations $\rho$ of dimension $n$ and $\sigma$ of dimension $n'$ and any $n\times n'$-matrix $N$. You can show that $\rho(\Gamma^M)S = S\sigma(\Gamma^M)$, so $S$ is an intertwiner, and by Schur's lemma either $S$ is invertible, so $n=n'$, or $S=0$. So if there are two different irreducible representations, this says that $\sum_M\rho(\Gamma^M)N\sigma(\Gamma^M)^{-1} = 0$ for any choice of $N$. Therefore $$ \sum_M \rho(\Gamma^M)_{kl}\sigma(\Gamma^M)_{ij} = 0$$ for all $k,l,i,j$. Choosing $k=l$ and $i=j$ and summing (i.e. taking the trace of the two matrices independently), and thinking about which gamma matrices contribute to these traces, one can conclude both for the even and the odd case that $n=n'$ must hold, and that there is one irreducible representation for even $d$ and two of them for the odd case.
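The defining relation these $4\times 4$ matrices satisfy, $\{\gamma^\mu, \gamma^\nu\} = 2\eta^{\mu\nu}\mathbf{1}$, is easy to verify numerically in the Dirac representation (a sketch of my own with numpy):

```python
import numpy as np

# Dirac-representation gamma matrices, built from the Pauli matrices.
I2 = np.eye(2, dtype=complex)
Z2 = np.zeros((2, 2), dtype=complex)
sigma = [
    np.array([[0, 1], [1, 0]], dtype=complex),
    np.array([[0, -1j], [1j, 0]], dtype=complex),
    np.array([[1, 0], [0, -1]], dtype=complex),
]

gamma0 = np.block([[I2, Z2], [Z2, -I2]])
gammas = [gamma0] + [np.block([[Z2, s], [-s, Z2]]) for s in sigma]
eta = np.diag([1.0, -1.0, -1.0, -1.0])  # mostly-minus Minkowski metric

# Check the Clifford algebra relation {gamma^mu, gamma^nu} = 2 eta^{mu nu} I.
ok = all(
    np.allclose(gammas[m] @ gammas[n] + gammas[n] @ gammas[m],
                2 * eta[m, n] * np.eye(4))
    for m in range(4) for n in range(4)
)
print(ok)  # True
```

The chiral or Majorana representations pass the same check, since they differ from this one only by a similarity transformation, which preserves the anticommutators.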
{ "domain": "physics.stackexchange", "id": 31346, "tags": "special-relativity, dirac-equation, representation-theory, dirac-matrices" }
Could someone see anything while being inside black hole?
Question: If we managed to survive in a black hole and move inside the event horizon, then could we see the surroundings of the black hole from inside the event horizon by a source of light? Can the light not come up to the event horizon, or does it have to travel through different spheres of the black hole and enter a sphere which won't allow the light to return to its previous sphere? If we reach the singularity, then can we ever get back up to the event horizon, or would we get stuck at the point of the singularity? Answer: The answer is most definitely yes, or at least yes as far as our current understanding of how gravity works goes. It is observationally untestable (let's be more specific - nobody could report the results of an observational test!) since no signal can emerge from inside the event horizon. The scenario is treated in some detail by Taylor & Wheeler ("Exploring Black Holes", Addison, Wesley, Longman - highly recommended) in terms of what an observer would see on a direct radial trajectory into a non-rotating "Schwarzschild" black hole. I won't bore you with the maths - it is fairly complex. A star situated at exactly 180 degrees from the observer's radial path will always appear in that position as the observer looks back - right down to the singularity. The light will be gravitationally blueshifted by an ever-increasing amount - essentially tending towards an infinite blueshift at the singularity. For stars at an angle to the radial path, their positions will be distorted such that they appear to move away from the point the observer has come from (and are also blueshifted). In the final moments (it takes less than $1.5\times 10^{-4}$ seconds of proper time to fall from the event horizon to the singularity of a 10 solar mass black hole, but a huge $\sim 60$ seconds for the black hole at the centre of our Galaxy), the light from the external universe will flatten into an intense ring at 90 degrees to the radial direction of motion.
So you would end up seeing blackness in front of you, blackness behind and the sky split in two by a dazzling ring of light (almost seems worth it!).
{ "domain": "astronomy.stackexchange", "id": 878, "tags": "black-hole, light" }
Can the US realistically mine and produce all Rare Earth Elements, without relying on China?
Question: Why exactly isn't the US mining and producing its own rare earth metals (REEs)? I'm baffled by the mixed messages below. Because China uniquely possesses some REEs? US safeguards against pollution more than China? US would take too much time or is inefficient to establish its own production? Can the US gainfully do so? How long would the US need to self-rely? I know of the Mountain Pass mine in California, Canada, Australia's Mount Weld and Lynas Corp's processing facility in Malayasia and Brazil, Vietnam, and Russia. Pentagon in talks with Australia on rare earths plant : geopolitics There is a large subsection of rare earth materials that are not found outside of China in any mining development. There's I believe a list of 18 and Australia's Lynas only has about 5-6 iirc, and altogether the world can't cobble all 18 together without relying on China. Rare Earths in the US-China Trade War : geopolitics There's no suppliers outside China. There are potential minerals deep in some countries with no technology or existing extraction operation. United States implements Energy Resource Governance Initiative (ERGI) with countries such as Australia, Peru, Argentina, Namibia and the Philippines to limit China's control of rare earth minerals : geopolitics It is not the source of supply that matters but the extraction and refining tech that help China dominate. The supply still have to be sent to China for processing. Pentagon legislation seeks to end US dependence on Chinese rare earth metals : wallstreetbets REMs actually aren't that rare, despite their name. Japan recently found deposits that could basically supply Earth from now to infinity. The US has huge deposits as well. The US already has infrastructure in place. Before China undercut the market, the US actually supplied the world with most of the REMs needed from mining at Mountain Pass. 
Mountain Pass still exists and has output shutdown because China didn't make it profitable, but the point is that it still exists and infrastructure is already there. In theory, we could just start it up again without needing too much more investment. REMs can easily be recycled. In fact, this is what Apple does out of environmental concerns and to hopefully reach a point where REMs never have to be mined for again: https://www.engadget.com/2019-09-18-apple-will-use-recycled-rare-earth-metals-iphone-taptic-engine.html The US and even private companies have some stockpiles of REMs to hold out in the short term if there were disruptions in REM supplies. When you combine Mountain Pass, the ability to recycle REMs, and REM reserves, there's very little strategic gain for China to cut supplies. It'll just encourage development in other countries and shifting of sourcing to places like Japan. The US would just fire up Mountain Pass again. China would lose market share. In fact, China already tried to restrict REM supplies earlier in the 2010s; it wasn't effective at all. enronCoin. 11 points 10 months ago You definitely aren’t wrong, but maybe you’re underestimating the fact that China has 1/3 of the world rare earth reserves and 40x as much untapped supply as the US. Who can produce at the lowest cost? Probably China, where rare earth is abundant and labor is cheap. LukeSkyWRx. 12 points 10 months ago* They are also the only ones that will separate the ores as it is a really nasty process with vats of acid and tons of toxic chemical waste. Look up lake Baotou in China, that is where lots of China’s rare earths come from and where they pump the waste when they are done. http://www.bbc.com/future/story/20150402-the-worst-place-on-earth it could be done ‘clean’ if there were some incentive to do so. Since lots of the deposits in the US have a bunch of Thorium in them as well they are heavily regulated similar to uranium mining. 
With that extra burden most deposits are uneconomical to extract. China Mouthpiece on Twitter says they may stop Rare Earth sales to US. : wallstreetbets The reason why China and REMs are always mentioned in the same sentence is because China pulled an Amazon and undercut their competitors with government subsidies to dominate the market. You can find other sources but they cost much more, mainly because they don't have people backing them trying to monopolize the entire industry. This is actually a blessing in disguise. We need more neodymium if we ever want to fulfill a dream of renewable energy overtaking coal and gas. Just like any business, people follow the money, and we know the US is sitting on mountains (literally) of resources waiting to be mined and used. Provided domestic industry gets some help I can see this blowing up in China's face. Answer: There's an adage spoken by some in the minerals industry ... "gold is where you find it". That adage is equally applicable to all mineral commodities, including rare earth elements (REE). Rare earth elements are not particularly rare; they are found throughout the Earth's crust. The trouble is, their concentrations within the crust are very low. Mineral resources are classified as either mineral deposits or ore reserves. The difference between the two is economics. Ore reserves are mineral deposits that can be mined for a profit. This includes the costs of geological exploration, evaluation, mining, subsequent processing (sometimes primary and secondary) and marketing/sales. What makes REEs regarded as being rare is that they are rarely found in concentrations large enough to make them economical to mine. This is a quirk of geology. As is stated, China contains the largest reserves of REEs, with smaller deposits occurring in a small number of countries. That's just mother nature! The other thing that can be attributed to geology is that not all deposits of REEs contain every rare earth element.
REEs have been classified as being either light or heavy:

- Light REEs (lanthanum, cerium, praseodymium, neodymium, promethium, samarium, europium, gadolinium and scandium) are produced in global abundance and are in surplus supply.
- Heavy REEs (terbium, dysprosium, holmium, erbium, thulium, ytterbium, lutetium and yttrium) are produced mainly in China and are in limited supply.

Global efforts to bring new resources to the marketplace continue. American REE deposits may not contain all the REEs America may require, and given that some of the REEs, particularly the heavy ones, are mostly found in China, America may be dependent on China for supplies of these metals. After all, there are 17 REEs. Even if American deposits contained every REE, the concentrations of some may be so low that it is uneconomic to extract all of them. Having high concentrations of a small number of metals and very low concentrations of other metals, within the same deposit, is very common. It's just another quirk of geology. Murphy's second law comes to mind - Mother nature is a bitch. The other thing to consider is that America's reserves of REEs are small and finite. Unless it finds something else, once it has mined its resources, it will have to source the metals from elsewhere. The other things that need to be considered are the health and safety and the environmental impacts of mining and processing REEs. To begin with, all REE deposits contain thorium, and some also contain uranium, another quirk of geology. Both thorium and uranium are radioactive. The REEs cannot be mined and initially processed separately. The other aspect is that the chemicals used in the initial processing of REEs, and the metals not recovered, such as cadmium, result in the production of toxic tailings dumps. The US EPA has produced a comprehensive review of the environmental impact of mining and processing REEs. Edit 22 July 2020 Additional new information, as of today.
Are we ready to recycle the “rare earths” behind an energy revolution? Ores with high concentrations of rare earths mostly fall into two general categories: igneous rocks and weathered sediments. The igneous ores are mostly carbonatite—an unusual product of magmas rich in carbonate minerals. It’s unusual enough that there’s only one volcano in the world erupting carbonatite lavas today, although others have in the past. Something like half of current global rare earth element production comes from China’s Bayan Obo mine alone, which features many carbonatites. Southern California’s Mountain Pass mine along Interstate 15 has exploited similar rocks over its on-again-off-again history. Australia’s Mount Weld straddles the two categories of rock and sediment. The ultimate source of REEs is carbonatite rock, but current mining is focused on the soil and sediment on top of this rock. That soft stuff is the result of weathering that has broken down the bedrock, carrying away some of the less resilient minerals and further concentrating the rare-earth-rich ones. Similar processes are responsible for deposits of ionic clays in China and of mineral sands in India. The different sources have different ratios of rare earth elements in them. “In general, all carbonatites are enriched in lanthanum and cerium,” UNLV’s Simon Jowitt told Ars. “So as you go from lanthanum down towards lutetium, basically the concentrations drop off sharply. In the ionic clays, it's the other way around; you get far less lights and far more heavies. But what we actually want is some of the stuff in the middle.”
{ "domain": "engineering.stackexchange", "id": 3348, "tags": "metallurgy, mining-engineering" }
Interpretation of the data through scatter plot
Question: I was exploring the data and observed that the data points form a triangle on the lower side. x-axis: Total items y-axis: Cancelled items Can someone help me in interpreting this data? And help me how to proceed further in analyzing and building a model? Answer: It is forming a triangle because you always have Cancelled items < Total items, which is expected. This representation of the data is not very informative, as many points are clustered and we can't assess the distribution. You might want to consider a plot such as a 2D histogram of (x=Total items, y=Proportion of Cancelled Items) in order to assess if some relation exists. And if you do so you might want to normalize each X slice so as to visualize the distribution for each range [Xmin, Xmax]. If you are looking for a linear regression, you might want to use scikit-learn to fit a LinearRegression model with X = Total Items and Y = Cancelled Items and check the correlation coefficient.
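A minimal sketch of the suggested transformation with numpy; the data here are synthetic, and the 20×10 binning is an arbitrary choice:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic orders: total items per order, and cancelled items,
# which by construction can never exceed the total.
total = rng.integers(1, 100, size=1000)
cancelled = rng.binomial(total, 0.2)

# Re-plot y as a proportion instead of a raw count, as suggested above.
proportion = cancelled / total

# 2D histogram of (total items, proportion cancelled); each column
# (X slice) is then normalized to show the conditional distribution.
counts, xedges, yedges = np.histogram2d(total, proportion, bins=(20, 10))
col_sums = counts.sum(axis=1, keepdims=True)
normalized = np.divide(counts, col_sums,
                       out=np.zeros_like(counts), where=col_sums > 0)
```

Passing `normalized` to something like `plt.pcolormesh(xedges, yedges, normalized.T)` would then show, for each range of totals, how the cancellation proportion is distributed.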
{ "domain": "datascience.stackexchange", "id": 8266, "tags": "machine-learning, linear-regression, data-analysis" }
Why does Newton's first law create two different answers to this question?
Question: I have been having some difficulty with a recent question. Take the following pulley system: The three blocks have masses $M$, $m_1$, and $m_2$. All are subjected to a gravitational force $g$. The pulley and string are of negligible mass, and all surfaces are frictionless. The problem is (to paraphrase): What magnitude of force $F$ is necessary for $m_1$ and $m_2$ to be motionless relative to $M$? When I first solved this, I just considered Newton's first law for each of the three masses and added the additional conditions $a_{xm_1}=a_{xM}$, $a_{xm_2}=a_{xM}$, $a_{ym_1}=0$. After eliminating most of the variables, I ended up with $F=g\frac{m_1}{m_2}(M+m_1)$. However, the textbook gives the answer $F=g\frac{m_1}{m_2}(M+m_1+m_2)$. In attempting to find the discrepancy, I solved the problem again somewhat differently: Let $a=a_{xM}=a_{xm_1}=a_{xm_2}$. Since $m_1$ does not accelerate vertically, $T=m_1g$. Since $m_2a_{xm_2}=T=m_1g$, we have that $a=g\frac{m_1}{m_2}$. Finally, since $F$ is pushing on $M$, which in turn is pushing on $m_1$ via its normal-force interaction, we have $F=(M+m_1)a=g\frac{m_1}{m_2}(M+m_1)$, the same answer I previously came to. I asked my teacher about this problem to determine where my error occurred, and his reasoning was as follows: As with the earlier reasoning, $a=g\frac{m_1}{m_2}$. Since $M$, $m_1$, and $m_2$ are motionless relative to each other, we can treat the three as a single system and ignore internal forces: Now, we simply have $F=M_sa=g\frac{m_1}{m_2}(M+m_1+m_2)$. Both lines of reasoning are compelling, so my overall question is this: Which of these two answers is correct, and how is the other answer incorrect? Answer: The first approach is indeed valid and should have given you the correct answer. I just used that approach and got it correct, so you must have made a mistake. Without seeing your work I can only guess what the mistake would be. 
However, the most likely guess is that you forgot to include the force from the pulley on the free body diagram for M.
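A numeric sanity check of the textbook result, with arbitrary example masses:

```python
# Check F = g*(m1/m2)*(M + m1 + m2) against Newton's second law.
# Masses here are arbitrary example values.
g, M, m1, m2 = 9.8, 10.0, 2.0, 3.0

T = m1 * g              # tension: m1 must not accelerate vertically
a = T / m2              # the tension alone accelerates m2 horizontally
F = (M + m1 + m2) * a   # treat all three masses as one system
```

With these numbers `F` agrees with $g\frac{m_1}{m_2}(M+m_1+m_2)$: the $m_2$ missing from the first attempt is accounted for once the pulley's horizontal push on $M$ appears in the free body diagram.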
{ "domain": "physics.stackexchange", "id": 61030, "tags": "homework-and-exercises, newtonian-mechanics" }
Odds to clone a parent
Question: I was asking myself something weird, and I thought you might validate or not my theory. Hypothesis: A brother and a sister have one chance in 78 billion of cloning one of their parents if they procreate. Here is my method: I have 23 chromosomes from my mother, and 23 from my father. My father has 23 * 23 = 529 combinations of chromosomes in his "gametes". If my experiment is to work, I need to have the exact opposite of my father's chromosomes from my sister. That means that we can't have any of my father's chromosomes in common with my sister. That makes approximately 1/529 * 1/529 = 1/279 841. Then, if I decide to procreate with my sister, our child needs to have the 46 chromosomes back from my father. So the computation stays the same and I will have 1 chance over 279 841 to make it so. 1/279841 * 1/279841 = 1/7.8*10^10 My question is: Is my theory valid? If not why? If yes, that will make a hell of a story at parties... Thank you for your scientific answers and have a nice day! Answer: Assumptions You are ignoring recombination and mutations. If you were to add these things, then this probability would be far smaller. In appearance, it would be infinitely smaller. It makes no biological sense to make such an assumption but for the sake of the argument, let's assume only segregation happens. We will also assume that the parents are completely unrelated to start with (see @BryanKrause 's comment below). For one given pair of chromosomes First, for a given chromosome number, both parents must each transmit a different chromosome to each offspring. This happens with probability $\frac{1}{2}$ per parent and therefore $\frac{1}{4}$ for both parents. Then, if the brother transmits the maternally derived chromosome, then the sister must transmit the maternally derived chromosome as well (and vice-versa). This occurs with probability $\frac{1}{2}$.
So for a given pair of chromosomes, the probability to clone the parent's pair of chromosomes (assuming no recombination and no mutation) is $\frac{1}{4}\cdot\frac{1}{2} = \frac{1}{8}$. For the whole genome The probability of this happening for all 23 pairs of chromosomes is $\left(\frac{1}{8}\right)^{23} = 1.69 \cdot 10^{-21}$, that is, about one chance in $6 \cdot 10^{20}$. As you ask for the odds, note that for such a low probability the odds ratio is pretty much equal to the probability.
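The arithmetic of the answer in a couple of lines (pure Python):

```python
# Probability that a brother-sister pairing reconstitutes one parent's
# genome, assuming independent segregation only (no recombination, no
# mutation), following the two steps in the answer above.
p_per_pair = (1 / 4) * (1 / 2)   # sibs carry different copies, then
                                 # transmit the matching ones
p_genome = p_per_pair ** 23      # 23 independent chromosome pairs
```

`p_genome` comes out near 1.69e-21, i.e. roughly one chance in 6×10^20.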
{ "domain": "biology.stackexchange", "id": 7176, "tags": "genetics" }
Flow control experiment
Question: In this simple flow control experiment in the lab, I am made to control the flow through the pipe and at the same time record the output given by the sensor in voltage. I am told to record values for increasing output and decreasing output. The lowest flow is 0 cc/min, and the highest flow is 3000 cc/min. Just wondering why I must take data for both increasing and decreasing output. Is this so that I can determine the recommended operating range of the sensor, which will be the min. value to the max. value of both sets of outputs (increasing and decreasing)? The objective of this experiment is, by the way, to investigate the linearity and hysteresis of the flow sensor. Answer: The objective of this experiment is, by the way, to investigate the linearity and hysteresis of the flow sensor. The key word here is hysteresis: the output of the sensor at the same flow rate might not be the same depending on whether you are in the increasing or decreasing output direction. This is why you are being asked to test in both directions. Check out https://en.wikipedia.org/wiki/Hysteresis for more details on hysteresis. Here's a typical characteristic curve with hysteresis (albeit from a different domain): You also ask about linearity: it's pretty basic but it means that if you plot your sensor output vs. flow rate, you basically get a straight line if the sensor is linear, or a curved line if not: If the sensor is linear but has hysteresis, then you'll essentially get 2 parallel lines. If it's linear but without any hysteresis, both lines will be on top of each other and you'll get a unique characteristic regardless of which direction you go.
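A sketch of how the two recorded sweeps could be analysed afterwards; the voltage readings below are synthetic stand-ins for the lab data:

```python
import numpy as np

# One sweep of increasing flow and one of decreasing flow, read at the
# same set points (synthetic readings with a deliberate 0.1 V offset).
flow = np.linspace(0, 3000, 11)        # cc/min set points
v_up = 0.001 * flow + 0.05             # increasing-flow readings (V)
v_down = 0.001 * flow + 0.15           # decreasing-flow readings (V)

# Hysteresis: the largest output difference at the same flow rate.
hysteresis = np.max(np.abs(v_down - v_up))

# Linearity: residual of a single straight-line fit to all the data.
all_flow = np.concatenate([flow, flow])
all_v = np.concatenate([v_up, v_down])
slope, intercept = np.polyfit(all_flow, all_v, 1)
max_residual = np.max(np.abs(all_v - (slope * all_flow + intercept)))
```

With perfectly linear synthetic sweeps the fit residual is just half the hysteresis band; real data would also show curvature (non-linearity) in the residuals.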
{ "domain": "engineering.stackexchange", "id": 2562, "tags": "flow-control" }
What make a CNN suitable for image classification or semantic segmentation?
Question: I've just started with CNN and there is something that I haven't understood yet: How do you "ask" a network: "classify me these images" or "do semantic segmentation"? I think it must be something in the architecture, or in the loss function, or whatever that makes the network classify its input or do semantic segmentation. I suppose its output will be different for classification and for semantic segmentation. Maybe the question could be rewritten to: What do I have to do to use a CNN for classification or for semantic segmentation? Answer: Disclaimer: This question is very broad, my answer is admittedly partial and is intended to just give an idea of what's out there and how to find out more. How do you "ask" a network: "classify me these images" or "do semantic segmentation"? You're mixing two very different problems there. Although there are SO many variations of problems people are applying CNNs to, for this example we can focus on the "classification of something in the image" subset and identify 4 key tasks: Image Classification answers the question "What is this image about?" (e.g. of answer: "Cat"). Semantic Segmentation answers the question "What areas of this image are part of a cat?" (the answer is an image where each pixel is assigned to one of the given classes). Object Detection answers the question "Where are the objects in the image AND what objects are they?" (e.g. of answer: "Cat in bounding box at x,y,w,h [10,20,50,60]"). Instance Segmentation answers the question "Where are the individual objects in this image AND what class are they AND give me the pixels that belong to each object?". You may guess from the number of ANDs there, this is the hardest of the four. The output here would be a set of class, bounding_box, mask tuples where the mask is typically defined in relation to the returned bounding box. So, how do we build networks capable of solving one problem or the other?
We build architectures towards one specific problem, exploiting reusable parts where possible. For example, typically classification and object detection are based on a deep "backbone" that extracts highly complex features from the image, which are finally used by a classifier layer to make a prediction (for image classification) or by a box prediction head to predict where objects lie in the image (very big simplification, look up object detection architectures and how they work for the proper description!). What do I have to do to use a CNN for classification or for semantic segmentation? In principle you can't just take a network built for classification and just "ask" it to do semantic segmentation (think of it as trying to use a screwdriver as scissors... it just was not built for that!). You need changes in the architecture, which necessarily imply new training, at the very least for the new parts that were added.
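The difference between the two kinds of output can be sketched without any deep-learning framework; the sizes below are arbitrary, and the random scores stand in for what a trained head would produce:

```python
import numpy as np

n_classes, height, width = 10, 32, 32
rng = np.random.default_rng(0)

# Image classification head: one score per class for the whole image.
class_scores = rng.normal(size=(n_classes,))
predicted_class = int(np.argmax(class_scores))

# Semantic segmentation head: one score per class for *every pixel*.
pixel_scores = rng.normal(size=(n_classes, height, width))
label_map = np.argmax(pixel_scores, axis=0)   # an image of class ids
```

A classifier head collapses the spatial information into a single vector, while a segmentation head must keep the full spatial grid; that shape difference is part of why one architecture cannot simply be "asked" to do the other's job.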
{ "domain": "ai.stackexchange", "id": 1670, "tags": "convolutional-neural-networks, classification, image-recognition, image-segmentation" }
How do you stop wobbling in a carbon nano tube elevator to space?
Question: If there was a carbon nanotube elevator that went to space how would you keep it from wobbling? As I picture it I see the structure being very strong, but would that prevent wobbling? It would be like a tall, thin building? Answer: The post-1959 proposals for such a structure are meant to be under tension: the centrifugal force of Earth rotation balances its weight. Thus it is tensed as a rope that you would swing in circles. The tension in the structure is acting against wobbling.
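The balance the answer describes can be made quantitative. A minimal sketch (standard Earth values assumed) finds the radius where gravity exactly supplies the centripetal acceleration for one rotation per sidereal day; the structure's centre of mass must sit near this radius for the whole tether to stay in tension:

```python
# Radius at which GM/r^2 = omega^2 * r, i.e. gravity alone provides
# the centripetal acceleration for Earth-synchronous rotation.
GM = 3.986004418e14     # Earth's gravitational parameter, m^3/s^2
omega = 7.2921159e-5    # Earth's rotation rate, rad/s

r_geo = (GM / omega**2) ** (1 / 3)   # ~42,164 km from Earth's centre
```

Mass below this radius pulls the tether downward and mass above pulls it outward, so a counterweight beyond it keeps the whole structure taut, resisting wobble the way a swung rope does.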
{ "domain": "physics.stackexchange", "id": 14061, "tags": "experimental-physics" }
How exactly does black hole evaporate?
Question: Reading about Hawking's Radiation I understood that black holes lose energy over time - which is logical in some way (otherwise they would be there forever and heat death would never technically happen). But - how exactly does it "evaporate"? What happens when it no longer contains enough mass within its Schwarzschild radius? Does it explode somehow? Transform into "regular matter"? Simply vanish over time? Or? Answer: A black hole evaporates by radiating away energy in the form of photons, gravitons, neutrinos, and other elementary particles in a process that involves quantum field theory in curved spacetime. This causes it to lose mass, and so its radius shrinks. It remains a black hole as it shrinks. The increased spacetime curvature at the horizon makes it radiate more and more powerfully; its temperature gets hotter and hotter. The more mass it loses, the faster it loses what it has left! I agree with Michael Walsby that small black holes are speculative and have not been detected. I am not so sure that they never will be, and it is important to understand how they behave. As the Wikipedia article explains, for a non-rotating black hole of mass $M$, the radius of the event horizon is $$R=\frac{2G M}{c^2}$$ and the Hawking temperature is $$T=\frac{\hbar c^3}{8\pi k_B G M}.$$ If you make the approximation that the black hole is a perfect blackbody, then the radiated power is $$P=\frac{\hbar c^6}{15360\pi G^2 M^2}$$ and the lifetime of the hole is $$t=\frac{5120\pi G^2 M^3}{\hbar c^4}.$$ Notice the simple power dependence of all these quantities on $M$. Everything else is just constants.
It is easy to substitute numerical values and compute the following table for black holes whose masses range from that of an asteroid down to that of a bowling ball: $$\begin{array}{ccccc} M\text{ (kg)} & R\text{ (m)} & T\text{ (K)} & P\text{ (W)} & t \text{ (s)}\\ 10^{20} & 1.49\times10^{-7} & 1.23\times10^{3} & 3.56\times10^{-8} & 8.41\times10^{43}\\ 10^{19} & 1.49\times10^{-8} & 1.23\times10^{4} & 3.56\times10^{-6} & 8.41\times10^{40}\\ 10^{18} & 1.49\times10^{-9} & 1.23\times10^{5} & 3.56\times10^{-4} & 8.41\times10^{37}\\ 10^{17} & 1.49\times10^{-10} & 1.23\times10^{6} & 3.56\times10^{-2} & 8.41\times10^{34}\\ 10^{16} & 1.49\times10^{-11} & 1.23\times10^{7} & 3.56\times10^{0} & 8.41\times10^{31}\\ 10^{15} & 1.49\times10^{-12} & 1.23\times10^{8} & 3.56\times10^{2} & 8.41\times10^{28}\\ 10^{14} & 1.49\times10^{-13} & 1.23\times10^{9} & 3.56\times10^{4} & 8.41\times10^{25}\\ 10^{13} & 1.49\times10^{-14} & 1.23\times10^{10} & 3.56\times10^{6} & 8.41\times10^{22}\\ 10^{12} & 1.49\times10^{-15} & 1.23\times10^{11} & 3.56\times10^{8} & 8.41\times10^{19}\\ 10^{11} & 1.49\times10^{-16} & 1.23\times10^{12} & 3.56\times10^{10} & 8.41\times10^{16}\\ 10^{10} & 1.49\times10^{-17} & 1.23\times10^{13} & 3.56\times10^{12} & 8.41\times10^{13}\\ 10^{9} & 1.49\times10^{-18} & 1.23\times10^{14} & 3.56\times10^{14} & 8.41\times10^{10}\\ 10^{8} & 1.49\times10^{-19} & 1.23\times10^{15} & 3.56\times10^{16} & 8.41\times10^{7}\\ 10^{7} & 1.49\times10^{-20} & 1.23\times10^{16} & 3.56\times10^{18} & 8.41\times10^{4}\\ 10^{6} & 1.49\times10^{-21} & 1.23\times10^{17} & 3.56\times10^{20} & 8.41\times10^{1}\\ 10^{5} & 1.49\times10^{-22} & 1.23\times10^{18} & 3.56\times10^{22} & 8.41\times10^{-2}\\ 10^{4} & 1.49\times10^{-23} & 1.23\times10^{19} & 3.56\times10^{24} & 8.41\times10^{-5}\\ 10^{3} & 1.49\times10^{-24} & 1.23\times10^{20} & 3.56\times10^{26} & 8.41\times10^{-8}\\ 10^{2} & 1.49\times10^{-25} & 1.23\times10^{21} & 3.56\times10^{28} & 8.41\times10^{-11}\\ 10^{1} & 1.49\times10^{-26} & 
1.23\times10^{22} & 3.56\times10^{30} & 8.41\times10^{-14}\\ 10^{0} & 1.49\times10^{-27} & 1.23\times10^{23} & 3.56\times10^{32} & 8.41\times10^{-17}\\ \end{array}$$ As you can see, as the hole shrinks, it gets tremendously hot and radiates enormous amounts of power. This is why Hawking titled one of his papers "Black hole explosions?" As far as I know, no one is sure whether a hole evaporates completely or leaves behind a Planck-scale remnant.
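As a check, the four formulas can be evaluated directly (SI units; the helper function is just for illustration):

```python
import math

# Hawking-radiation quantities for a non-rotating black hole of mass M,
# using the four formulas above (blackbody approximation).
hbar = 1.054571817e-34   # J s
c = 2.99792458e8         # m/s
G = 6.67430e-11          # m^3 kg^-1 s^-2
k_B = 1.380649e-23       # J/K

def hawking(M):
    R = 2 * G * M / c**2
    T = hbar * c**3 / (8 * math.pi * k_B * G * M)
    P = hbar * c**6 / (15360 * math.pi * G**2 * M**2)
    t = 5120 * math.pi * G**2 * M**3 / (hbar * c**4)
    return R, T, P, t
```

For example, `hawking(1e12)` reproduces the $10^{12}$ kg row of the table: $T \approx 1.23\times10^{11}$ K, $P \approx 3.56\times10^{8}$ W and a lifetime of $\approx 8.4\times10^{19}$ s.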
{ "domain": "physics.stackexchange", "id": 96607, "tags": "black-holes, event-horizon, hawking-radiation" }
How to replace these two forces with one force?
Question: Here, $P>Q$. $O$ is the center of mass of the rigid and uniform bar/stick. As $P>Q$, the resultant is situated to the right of $\vec{P}$ and is parallel to $\vec{P}$. The magnitude of the resultant is $P-Q$. To convince you that the figure is correct, I'll do some math to prove it. Let us obtain the sum of torques about the center of mass, $$(P-Q)b=Pa+Qa$$ $$b=\frac{P+Q}{P-Q}a$$ $$b=fa\ \left[\text{Let $f=\frac{P+Q}{P-Q}$}\right]$$ As $P>Q$, $f>1$, and $b>a$. So, the correct figure will be, I hope you're satisfied that the figure is correct. My comments: Is it possible to replace $\vec{P}$ and $\vec{Q}$ with a single force? I mean practically, not theoretically. From the figure, we can see that the resultant force is outside the bar. In other words, $\vec{P}$ and $\vec{Q}$ can be replaced by a force of magnitude $P-Q$, which will act outside the bar. This may be possible theoretically; however, this is not possible practically as the resultant force will be acting on literally nothing as it is outside the bar. Therefore, I conclude that it is impossible to replace $\vec{P}$ and $\vec{Q}$ with a single force practically. Theoretically, it is possible, but practically, no. My question: Can $\vec{P}$ and $\vec{Q}$ be replaced by a single force? Is my conclusion correct? These may help you to answer this question: Comment by @Ivan Answer by @Farcher This question was posted with the help of @Eli. Answer: As you have calculated, the torque on the bar is $$\tau= (P+Q)A$$ and the net force is $$F=P-Q$$ This will cause the bar to turn with an angular acceleration, $$\alpha=\frac{\tau}{I}$$ and also accelerate with, $$a=\frac{P-Q}{m}$$ Any substitute pair of forces acting within the length of the bar can be scaled by the factor $A/D$ to impart the same torque. But the new net force will not be the same: $P_N-Q_N\neq P-Q$.
where A = half-length of bar, m = mass, a = linear acceleration, D = distance of the new pair of forces $P_N$, $Q_N$ from the center of the bar, $\alpha$ = angular acceleration, I = bar's moment of inertia, and $\tau$ = torque. So depending on what you demand, the answer varies: if you require just the same torque, yes; if you require the same torque and linear acceleration, no!
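A quick numeric check of the question's torque balance, with example values satisfying $P>Q$:

```python
# A single force of magnitude P - Q placed at distance b right of the
# centre must reproduce the torque of P (at +a) and Q (at -a, opposite
# sense). Example values with P > Q; a is the lever arm of the pair.
P, Q, a = 5.0, 3.0, 1.0

b = (P + Q) / (P - Q) * a     # position of the resultant
torque_pair = P * a + Q * a   # torque of the original pair about centre
torque_single = (P - Q) * b   # torque of the single replacement force
```

Since $b > a$ whenever $P > Q$, the equivalent single force always lies outside the bar, exactly as the question's figure shows.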
{ "domain": "engineering.stackexchange", "id": 4660, "tags": "mechanical-engineering, applied-mechanics, torque, statics, dynamics" }
Character recognition neural net topology/design
Question: I'm building a neural net for classifying characters in pictures. The input can be any character a-z, lowercase and uppercase. I only care about classifying the characters, and not the case, so the neural net has an output vector of length 26; one for each character. It makes sense, intuitively, to have a hidden layer of size 26*2 just upstream of the output layer. It also makes intuitive sense for this layer not to be fully connected to the output layer, but instead having two and two hidden nodes connect to each output node. I have some questions: a) Does this make sense? I'm getting about 75 % success rate on a pretty hard data set with just one hidden layer, but I'm not certain on how to improve from there. b) If so, what activation function should I use from the hidden layer with 26*2 nodes to the output layer? Maybe I should use an OR function for this, since both the lowercase and the uppercase version of a character should output for a single character. c) Would it be wiser to have 26*2 output nodes instead, and just combine lowercase and uppercase outputs after the neural net? Answer: Your design makes some sense, but there is no need to limit connections even if you expect to represent probabilities of upper/lower case separately, because they will interact usefully. E.g. if the character could most likely be one of o, O, Q, G then this might be useful information to choose the correct one. If you went ahead, you would need to train this network without the final layer (so that it learns the representations you expect, not some other group of 52 features), then add the final layer later, with no need for special connection rules, just use existing ones. Initially you would train the new layer separately, using the full output of the 52-class net, i.e. probability values, not the selected class. Then you would combine with the existing net and fine-tune the result by running a few more epochs with a low learning rate on the final model.
That all seems quite complex, and IMO unlikely to gain you much accuracy (although I am guessing, it could be great - so if you have time to explore ideas, you could still try). Personally I would not take your hidden layer idea further. The full 52-class version with simple logic to combine results is I think simpler. Even this is not strictly necessary: the neural net can learn to have two different-looking images be in the same class quite easily, provided you supply examples of them in training. However, it may give you useful insights into categorisation failures in training or testing. It is not clear from the question, but if you are not already using a convolutional neural network for the lower layers, then you should do so. This will make the largest impact on your results by far.
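Option (c) - 26*2 output nodes combined outside the net - can be sketched in a few lines of numpy (the class ordering and the softmax output are assumptions for illustration):

```python
import numpy as np

# A 52-way output (26 lowercase + 26 uppercase) combined into 26
# case-insensitive scores *after* the network.
rng = np.random.default_rng(0)
logits = rng.normal(size=(52,))                 # stand-in network output
probs = np.exp(logits) / np.exp(logits).sum()   # softmax over 52 classes

# Assume outputs 0..25 are 'a'..'z' and 26..51 are 'A'..'Z'.
case_insensitive = probs[:26] + probs[26:]      # P(letter) = P(lower) + P(upper)
predicted = chr(ord('a') + int(np.argmax(case_insensitive)))
```

Summing the two case probabilities is the probabilistic version of the "OR function" the question asks about, and it keeps the network itself a plain 52-class classifier.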
{ "domain": "datascience.stackexchange", "id": 785, "tags": "machine-learning, neural-network, deep-learning, theano, image-recognition" }
Representing tensor products using Dirac's bra-ket notation
Question: I know that $$ \uparrow \equiv \left[ \begin{array} { l } { 1 } \\ { 0 } \end{array} \right] $$ and $$ \bigg| \frac { X - i Y } { \sqrt { 2 } } \bigg \rangle = \sqrt { \frac { 3 } { 8 \pi } } \frac { x - iy } { r } $$ but I don't know what the function $$ \bigg| \frac { X - i Y } { \sqrt { 2 } } \uparrow \bigg\rangle $$ means. Can a matrix represent a function? Can anyone help me? I don't understand quantum mechanics very well. Answer: When kets from different spaces are put together, it is assumed that there is a hidden tensor product. So $|\psi\rangle|\uparrow\rangle$ actually means $|\psi\rangle \otimes |\uparrow\rangle$. In the same way, the notation can put everything inside the same ket. For example: $$ | n\ \mathcal{l},\ m_l,\ s,\ m_s \rangle $$ is also $$ | n\ \mathcal{l},\ m_l\rangle \otimes |s,\ m_s \rangle $$ because the first one is the spatial part, and the second part is the spin. They belong to different spaces: the spatial part is in $\mathcal{H}$, and the spin part is in $\mathbb{C}^n$, usually $\mathbb{C}^2$ for spin-½ particles.
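The hidden tensor product can be made concrete with numpy's Kronecker product, using the column-vector spin representation from the question (the 3-component spatial vector is just a stand-in for a truncated basis expansion of the spatial state):

```python
import numpy as np

# Spin basis vector in C^2, as in the question: up = [1, 0]^T.
up = np.array([1.0, 0.0])

# Stand-in coefficients for a normalized spatial state |psi> expanded
# in some 3-element truncation of the spatial Hilbert space.
spatial = np.array([0.6, 0.8j, 0.0])

# |spatial> ⊗ |up>: a single vector in the combined 3*2 = 6-dim space.
combined = np.kron(spatial, up)
```

`combined` interleaves each spatial amplitude with the two spin components; writing both labels inside one ket is just bookkeeping for this Kronecker product, and normalization survives it.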
{ "domain": "physics.stackexchange", "id": 54815, "tags": "quantum-mechanics, homework-and-exercises, hilbert-space, notation" }
Fastest implementation of fft in C++?
Question: I have a MATLAB program that uses fft and ifft a lot. Now I want to translate it to C++ for production. I used OpenCV but I noticed that OpenCV's implementation of fft is 5 times slower than MATLAB's. Then I tried armadillo but it was even slower. It was 10 times slower than MATLAB. Now I wonder is there any implementation of fft in C++ that is fast enough to compete with MATLAB? Answer: Matlab's fft functions are all based on FFTW (this is confirmed here), so I guess the obvious choice for you should be FFTW. FFTW is hardware-independent but it can take advantage of some hardware-specific features.
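Whatever C++ library you settle on, you can validate the port against MATLAB's conventions with numpy, whose fft/ifft use the same scaling (no factor on the forward transform, 1/N on the inverse):

```python
import numpy as np

# Reference transform for cross-checking a C++ FFT port: numpy follows
# the same convention as MATLAB, so X[0] is the unscaled sum of x and
# ifft(fft(x)) recovers x exactly.
x = np.arange(8, dtype=float)
X = np.fft.fft(x)
roundtrip = np.fft.ifft(X)
dc_term = X[0]   # equals sum(x) under this convention
```

This matters because some C++ libraries leave both directions unnormalized, so a straight port can come out scaled by N relative to the MATLAB results.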
{ "domain": "dsp.stackexchange", "id": 7406, "tags": "fft, fourier-transform, c++" }
Effect on rate of diffusion in addition of an inert gas
Question: What will be the effect on the rate of diffusion on addition of an inert gas to the gaseous mixture? I think the rate of diffusion should increase as the addition of extra gas will increase the inside pressure. But the given answer contradicts my proposed explanation. Where am I wrong? And why is the rate of diffusion decreasing? My question does not ask about the effect of addition of an inert gas on a reaction equilibrium. Answer: The inter-diffusion caused by two gasses is described by the Stefan-Maxwell equation. If $x_1,x_2$ are the mole fractions of the two gasses, $\bar v$, the average speed and $\lambda$ the mean free path then $$D_{1,2}= \frac{x_2}{2}\bar v_1\lambda_1+ \frac{x_1}{2}\bar v_2\lambda_2$$ where $D_{1,2}$ is the inter-diffusion coefficient. Substituting for the mean free paths does not lead to a useful result because terms that involve collision between molecules of the same kind cannot have any extra effect compared to when only one gas is present, and so these are ignored. The result is $$D_{1,2}= \frac{1}{\pi\sigma_{1,2}^2(n_1+n_2)} \left( \frac{2k_BT}{\pi\mu} \right)^{1/2}$$ where $\sigma_{1,2}$ is the sum of the radii of the two molecule types and $n_1, n_2$ number of molecules/m$^3$ of each, the reduced mass is $\mu$ kg ($\mu=m_1m_2/(m_1+m_2)$). This equation shows that the inter-diffusion depends on the total concentration at a given temperature, a result that is close to that observed experimentally. So your intuition was correct. (ref chapter (II). E. Moelwyn-Hughes, 'Physical Chemistry')
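The total-concentration dependence in the second equation can be seen directly in a short sketch; the molecular radius and reduced mass below are illustrative values only:

```python
import math

# Inter-diffusion coefficient from the Stefan-Maxwell result above,
# showing the 1/(n1 + n2) dependence on total concentration.
def D12(n1, n2, sigma12, mu, T):
    k_B = 1.380649e-23  # J/K
    return (1 / (math.pi * sigma12**2 * (n1 + n2))) * \
           math.sqrt(2 * k_B * T / (math.pi * mu))

sigma12 = 3.6e-10   # sum of molecular radii, m (illustrative)
mu = 1.7e-26        # reduced mass, kg (illustrative)
D_low = D12(1.25e25, 1.25e25, sigma12, mu, 300.0)
D_high = D12(2.5e25, 2.5e25, sigma12, mu, 300.0)  # doubled total n
```

With these numbers `D_low` is exactly twice `D_high`: doubling the total concentration $n_1+n_2$ at fixed temperature halves $D_{1,2}$.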
{ "domain": "chemistry.stackexchange", "id": 9821, "tags": "physical-chemistry, kinetic-theory-of-gases, diffusion" }
What is the electric field at a distance $r$ due to an infinitely long wire carrying a constant current $I$?
Question: We know that an electric field in terms of potentials is given by $\overrightarrow{E}=-\overrightarrow{\nabla }\phi -\dfrac{\partial \overrightarrow{A}}{\partial t}$. But I just came across, while solving a problem, that an electric field at a distance r due to an infinitely long wire carrying a constant current I is $\overrightarrow{E}=-\dfrac{\partial \overrightarrow{A(r)}}{\partial t}$. Why is $\overrightarrow{\nabla }\phi =0$? I am attaching the picture of the actual question that I was solving below: Answer: The wire is neutral as it contains as much positive charge as negative charge. Therefore the charge density $\rho$ is zero everywhere, and hence $\phi=0$.
{ "domain": "physics.stackexchange", "id": 82343, "tags": "homework-and-exercises, electromagnetism, electric-fields" }
Craps game rules and code: version 2
Question: I've taken all the advice from here and have created a Craps class and gameplay. I just wanted to ask for more advice on how to optimize my code.

    #include <iostream>
    #include <cmath> // atan(), fabs()
    #include "Craps.h"
    using namespace std;

    int main(int argc, char* argv[])
    {
        int choice;
        Craps game;
        do {
            cin>> choice;
            if(!cin)
            {
                cin.clear();
                cin.ignore(200,'\n');
                cout<<"\n please inter int number \n\n";
            }
            switch (choice) {
                case 1:
                    game.Play();
                    break;
                case 2:
                    break;
                default:
                    break;
            }
        } while (choice != 2);
        return 0;
    }

    #include <iostream>
    //#include <catdlib>
    #include <ctime>
    #include <string>
    using namespace std;

    class Craps {
    public:
        void DisplayInstruction();
        void Play();
        int DiceRoll();
    private:
        int random;
    };

    void Craps::DisplayInstruction(){
        cout<<"DisplayInstruction"<<endl;
    }

    int Craps::DiceRoll()
    {
        int die1 = (rand() + time(0)) % 6+1 ;
        int die2 = (rand() + time(0)) % 6+1 ;
        int sum = die1 + die2;
        return sum;
    }

    void Craps::Play()
    {
        bool won = false;
        bool lost = false;
        string anyKey;
        int myPoint = 0;
        int sumeOfDice = DiceRoll();
        switch (sumeOfDice) {
            case 7:
                won= true;
                break;
            case 11:
                won= true;
                break;
            case 2:
                lost = true;
                break;
            case 3:
                lost = true;
                break;
            case 12:
                lost = true;
                break;
            default:
                myPoint = sumeOfDice;
                cout<< myPoint <<endl;
                break;
        }
        while (won == false && lost ==false)
        {
            sumeOfDice = DiceRoll();
            if(sumeOfDice == myPoint)
            {
                won = true;
            }
            else if(sumeOfDice == 7)
            {
                lost = true;
            }
        }
        if (won == true)
        {
            cout<<"player win"<<endl;
        }
        else
        {
            cout<<"player lose "<<endl;
        }
    }

    #endif /* defined(__Craps__Craps__) */

Answer: Try to refrain from using using namespace std, especially in a program like this. If you were to put it in the header, you could introduce bugs due to issues such as name-clashing. Your comments next to <cmath> suggest that you're using it, but I don't see these functions used anywhere. No need to include this library.
#include headers before libraries to prevent unwanted dependency:

    #include "Craps.h" // header won't be forced to use <iostream>
    #include <iostream>

Naming convention states that functions should start lowercase:

    thisIsAFunction();
    this_is_also_a_function();

DisplayInstruction() is a needless function as it only displays one line of text. Functions shouldn't be created just for that. In general, functions that don't do anything with the data members are best as free functions (non-member functions). The dice functionality should have its own class, allowing the Craps class to maintain object(s) of it as needed. This will also ensure separation of concerns. You may refer to my implementation to get an idea for this. You didn't seed std::rand() with std::srand(). This will cause rand() to produce the same random numbers each time. Put this at the top of main() only:

    // this requires <cstdlib> and <ctime>
    // use nullptr instead if using C++11
    std::srand(static_cast<unsigned int>(std::time(NULL)));

However, if you're using C++11, you should no longer use rand. I'll quote one of my answers: You should refrain from using rand altogether in C++11 as it is considered harmful. Instead, utilize C++11's <random> library. You also no longer need <ctime> and should use C++11's <chrono>. From this library, you can obtain a seed for a PRNG with this:

    auto chrono_seed = std::chrono::system_clock::now().time_since_epoch().count();

The switch in Play() could be its own function, specifically of type bool. As it isn't working with a data member, it can be a free function. If you go this route, then each case will need a return true or return false. The default will also need to handle errors resulting from an invalid case value, such as by throwing an exception. You never use anyKey anywhere, so remove it.
Tested conditions could be shortened like this:

    if (condition)  // if (condition == true)
    if (!condition) // if (condition == false)

Tested streams can be shortened like this:

    if (!(cin >> choice)) // cin >> choice;
                          // if (!cin)
{ "domain": "codereview.stackexchange", "id": 4597, "tags": "c++, optimization, classes, game" }
A star or a galaxy?
Question: When we look into the beautiful sky in the night, exclaiming how beautiful these shining stars are. My question is how could we tell, whether any of these shining "point" is a star or a galaxy? If indeed many of these are shining galaxies, then what roughly the percentage is of these shining galaxies in the sky (using the naked eye only without telescope)? (And why.) Answer: With the naked eye, virtually every point you see is a star.* That's because there are very few galaxies that are visible with the naked eye. With telescopes, for many galaxies you'll be able to resolve multiple light sources (aka. stars). That lets you tell it's a galaxy. For those galaxies that are further away, you can still tell by taking a spectrum. Galaxies have fundamentally different spectra from stars, because they're composed of lots of different stars at different metallicities, temperature, etc (+ other stuff). The percentage of stars/galaxies you see depends on what you're using to observe. In the Hubble Deep Field for example, virtually every point is a galaxy. Comparatively, if you're using an ordinary pair of binoculars, virtually every point is a star. *Some of the brightest points will not be stars, but planets.
{ "domain": "physics.stackexchange", "id": 63811, "tags": "astronomy, stars, galaxies" }
Reaction force for buoyancy?
Question: According to Newton's 3rd law, each force (action) has a counter-force (reaction). What is the reaction (counter-force) of buoyancy? Answer: We call buoyancy a force, but really, what is it? It's only gravity. It is only a difference between the gravity force applied to the water and the gravity force applied to your object. So buoyancy is not a force applied by the water to the object; it's gravity applied differently to the water and the object by the Earth. Here, the real force is gravity.
{ "domain": "physics.stackexchange", "id": 29255, "tags": "forces, buoyancy" }
Normalizing the sum of wavefunctions and calculating probability - understanding concepts
Question: A state of a particle bounded by infinite potential walls at x=0 and x=L is described by a wave function $\psi = a\phi_1 + b\phi_2 $ where $\phi_i$ are the stationary states. So let's say we want to normalize this wave function. As I understand it the procedure is as follows: The probability of the particle being at any point from 0 to L is 1. So I need to integrate the wave functions squared over that interval. By the superposition principle it is OK to just add them. On top of that, any $\psi$ can also be expressed as $\psi \psi^*$ $\psi = a\phi_1 + b\phi_2 $ $\psi = (a\phi_1\phi_1^* + b\phi_2\phi_2^*)$ We want to integrate $\vert\psi\vert ^2$ $(a\phi_1\phi_1^*)^2 + 2ab\phi_2\phi_2^*\phi_1\phi_1^*+(b\phi_2\phi_2^*)^2 = (a^2 + b^2)$ Since the phi functions are eigenvalues, the ones on the diagonal of the matrix are the only ones not zero, which is why the cross terms in the middle disappear (they are zero) and the end terms $(\phi_i\phi_i^*)$ are equal to 1. So we get $\psi = \int_0^L\vert\psi(x)\vert ^2dx = \int_0^L\vert(a^2 + b^2)\vert^2dx=1 $ and therefore $\int_0^L\vert(a^2 + b^2)\vert^2dx=1 $ and $(a^2+b^2)^2x\vert^L_0 = 1 \rightarrow (a^2+b^2)^2L=1\rightarrow L=1/(a^2+b^2)^2$ The conceptual question I had was that if we have the probability squared here, is it that or the square root of that probability that is your normalization constant? Further, would it also be permissible to treat each of the wavefunctions as $A\sin\frac{n\pi x}{L}$ where $A_1=a$ and $A_2=b$, and try the integration that way? Given that the wave functions are supposedly different that seemed like it would be wrong, but we also know they are stationary states so they go to zero at either end of the potential well and are sinusoidal, correct? I know that this area doesn't always cotton to HW type questions. 
But this is the kind of thing that I think could help a lot of people get their heads around this concept, because I can't be the only one who is a bit lost on how to actually use these techniques. Answer: Honestly, the argument you're making here is a mess - the question is based on bad premises. So let me show you how to do it properly, and hopefully that will resolve your confusion. You're right that in order for a wavefunction to be normalized, it must satisfy $$\int_\text{all space} P(x)\mathrm{d}x = \int_\text{all space} \psi^*(x)\psi(x)\mathrm{d}x = 1\tag{1}$$ But this statement: On top of that, any $\psi$ can also be expressed as $\psi\psi^∗$ is not correct. Given a function $\psi(x)$, you can write $\psi(x)\psi^*(x)$, but that's a different function. Anyway, given that your wavefunction can be written $$\psi(x) = a\phi_1(x) + b\phi_2(x)$$ then you just plug that into the normalization condition (1) and get $$\int_0^L \bigl(a^* \phi_1^*(x) + b^* \phi_2^*(x)\bigr)\bigl(a \phi_1(x) + b \phi_2(x)\bigr)\mathrm{d}x = 1$$ which expands to $$\begin{multline} a^*a \int_0^L \phi_1^*(x)\phi_1(x)\mathrm{d}x + a^*b \int_0^L \phi_1^*(x)\phi_2(x)\mathrm{d}x \\ + b^*a \int_0^L \phi_2^*(x)\phi_1(x)\mathrm{d}x + b^*b \int_0^L \phi_2^*(x)\phi_2(x)\mathrm{d}x = 1 \end{multline}\tag{2}$$ Now you can use the identity $$\int_0^L\phi_1^*(x)\phi_2(x)\mathrm{d}x = \int_0^L\phi_2^*(x)\phi_1(x)\mathrm{d}x = 0$$ which follows from the fact that $\phi_1$ and $\phi_2$ are orthogonal functions (it's not enough that they are eigenfunctions of an operator, they have to be orthogonal), and the identity $$\int_0^L\phi_1^*(x)\phi_1(x)\mathrm{d}x = \int_0^L\phi_2^*(x)\phi_2(x)\mathrm{d}x = 1$$ which simply reflects the fact that $\phi_1$ and $\phi_2$ are normalized. (Check for yourself that this is the same as the normalization condition, equation (1).) 
With these two identities, equation (2) reduces to $$\lvert a\rvert^2 + \lvert b\rvert^2 = 1$$ The conceptual question I had was that if we have the probability squared here, is it that or the square root of that probability that is your normalization constant? That all depends, how do you define your normalization constant? It depends on what you're normalizing and how exactly you express it as a function. However you do it, the end requirement for normalization is just that $\lvert a\rvert^2 + \lvert b\rvert^2 = 1$. As far as using the specific sinusoidal form for the $\phi_i$, you can do that in this case, because you're given enough information to figure out that the eigenfunctions are in fact sinusoidal. But you don't really need to know that they are sinusoidal for the preceding argument to work; all you need to know is that the $\phi_i$s are orthonormal.
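The identities used above can be sanity-checked numerically for the infinite-well eigenfunctions $\phi_n(x) = \sqrt{2/L}\,\sin(n\pi x/L)$. The C++ sketch below is only an illustration; the function names and the midpoint-rule integrator are my own choices, and a, b are taken real for simplicity:

```cpp
#include <cassert>
#include <cmath>

const double PI = std::acos(-1.0);

// Infinite-well eigenfunction phi_n(x) = sqrt(2/L) sin(n pi x / L)
double phi(int n, double x, double L)
{
    return std::sqrt(2.0 / L) * std::sin(n * PI * x / L);
}

// Midpoint-rule approximation of the integral of phi_m * phi_n over [0, L];
// should be ~1 for m == n (normalization) and ~0 for m != n (orthogonality).
double overlap(int m, int n, double L)
{
    const int steps = 100000;
    const double dx = L / steps;
    double sum = 0.0;
    for (int i = 0; i < steps; ++i) {
        const double x = (i + 0.5) * dx;
        sum += phi(m, x, L) * phi(n, x, L) * dx;
    }
    return sum;
}

// Integral of |a phi_1 + b phi_2|^2 over [0, L]; by the identities
// above this should come out to a^2 + b^2 for real a and b.
double normIntegral(double a, double b, double L)
{
    const int steps = 100000;
    const double dx = L / steps;
    double sum = 0.0;
    for (int i = 0; i < steps; ++i) {
        const double x = (i + 0.5) * dx;
        const double psi = a * phi(1, x, L) + b * phi(2, x, L);
        sum += psi * psi * dx;
    }
    return sum;
}
```

With a = 0.6 and b = 0.8 (so that a² + b² = 1) the integral comes out to 1, confirming that the cross terms contribute nothing.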
{ "domain": "physics.stackexchange", "id": 13220, "tags": "quantum-mechanics, homework-and-exercises, wavefunction, superposition, normalization" }
Number of Fully connected layers in standard CNNs
Question: I have a question targeting some basics of CNNs. I came across various CNN networks like AlexNet, GoogLeNet and LeNet. I read in a lot of places that AlexNet has 3 fully connected layers with 4096, 4096, and 1000 neurons respectively. The layer containing 1000 nodes is the classification layer, and each neuron represents one class. Now I came across GoogLeNet. I read about its architecture here. It says that GoogLeNet has 0 FC layers. However, you do need the 1000-node layer at the end, with softmax activation, for the classification task. So the final layer isn't treated as FC in this case? Also then, what is the number of FC layers in LeNet-5? A bit confused. Any help or leads would be greatly appreciated. Answer: I think the confusion with the Inception module is the somewhat complicated structure. The point on the relevant CS231n slide (#37), saying there are no FC layers, is partially correct. (Remember this is only a summary of the model to get the main points across!) In the actual part of the model being explained on that slide, they are referring only to the Inception modules: No FC layers! Definitions will, however, play a big role in deciding whether or not there are FC layers in the model. In the bigger scheme of things (beyond a single Inception module), we first have to distinguish between the train and test time architectures. At train time there are auxiliary branches, which do indeed have a few fully connected layers. These are used to force intermediate layers (or inception modules) to be more aggressive in their quest for a final answer, or in the words of the authors, to be more discriminative. From the paper (page 6 [Szegedy et al., 2014]): One interesting insight is that the strong performance of relatively shallower networks on this task suggests that the features produced by the layers in the middle of the network should be very discriminative. 
By adding auxiliary classifiers connected to these intermediate layers, we would expect to encourage discrimination in the lower stages in the classifier, increase the gradient signal that gets propagated back, and provide additional regularization. The slice of the model shown below displays one of the auxiliary classifiers (branches) on the right of the inception module: This branch clearly has a few FC layers, the first of which is likely followed by a non-linearity such as a ReLU or tanh. The second one simply squishes the 1000 input weights into whatever number of classes are to be predicted (coincidentally or not, this is 1000 here for ImageNet). However, at test time, these branches are not active. They were used simply to train the weights of the modules, but do not contribute to the final classification probabilities produced at the end of the entire model architecture. This all leaves us with just the suspicious-looking block right at the end of the model: There is clearly a big blue FC layer there! This is where definitions come into play. It is somewhat subjective. Is a fully connected layer one in which each of $m$ weights is connected to each of $n$ nodes? Is it a layer in which representations are learned, and if so, does the layer require a non-linearity? We know that neural networks require non-linearities, such as ReLU and tanh functions, to be applied to the outputs of a layer (thinking in forward flow). Without these, neural networks would simply be a combination of linear functions, and so going deeper wouldn't theoretically add any power, as we essentially would just be performing a huge linear regression. In this spirit, we can look at the final piece of the puzzle and point out that this final FC layer is noted to simply be linear! That is, it takes all the weights resulting from the preceding average pooling layer and combines them into a linear combination of only 1000 values, ready for the softmax. 
This can all be understood from the tabular overview of the network architecture: So, do you agree with the Stanford guys or not? I do!
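The "huge linear regression" point above can be checked directly: two fully connected layers with no non-linearity in between collapse into a single linear map whose matrix is W2·W1. A small C++ sketch, where the sizes (3 → 2 → 2) and weights are arbitrary choices purely for illustration:

```cpp
#include <array>
#include <cassert>

using Vec3 = std::array<double, 3>;
using Vec2 = std::array<double, 2>;

// First "FC layer": W1 = [[1, 2, 3], [4, 5, 6]]
Vec2 layer1(const Vec3& x)
{
    return {1 * x[0] + 2 * x[1] + 3 * x[2],
            4 * x[0] + 5 * x[1] + 6 * x[2]};
}

// Second "FC layer": W2 = [[1, -1], [2, 1]], no non-linearity in between
Vec2 layer2(const Vec2& h)
{
    return {1 * h[0] - 1 * h[1],
            2 * h[0] + 1 * h[1]};
}

// The single equivalent layer: W2 * W1 = [[-3, -3, -3], [6, 9, 12]],
// computed by hand; stacking the two layers adds no expressive power.
Vec2 fused(const Vec3& x)
{
    return {-3 * x[0] - 3 * x[1] - 3 * x[2],
             6 * x[0] + 9 * x[1] + 12 * x[2]};
}
```

For any input, layer2(layer1(x)) and fused(x) agree exactly, which is why a purely linear final layer is more of a re-weighting than a "layer" in the representational sense.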
{ "domain": "datascience.stackexchange", "id": 3053, "tags": "deep-learning, neural-network, cnn, convolutional-neural-network, alex-net" }
Calculate all the prime numbers between two given numbers
Question: I've made an application that calculates all the prime numbers between two given numbers and prints them into a .txt document... anything I can improve?

use std::io;
use std::fs::{OpenOptions};
use std::io::{Write, BufWriter};

fn main() {
    loop {
        let mut format = 1;
        let mut input = String::new();
        println!("Say a start for the prime loop! ");
        io::stdin().read_line(&mut input).unwrap();
        let start: u128 = input.trim().parse().unwrap();

        let mut input = String::new();
        println!("Say an end for the prime loop! ");
        io::stdin().read_line(&mut input).unwrap();
        let end: u128 = input.trim().parse().unwrap();

        let path = "path/to/file.txt";
        let f = OpenOptions::new()
            .write(true)
            .open(path)
            .expect("Could not open file");
        let mut f = BufWriter::new(f);

        for i in start..end {
            if prime(i) == true {
                f.write_all(i.to_string().as_bytes()).expect("unable to write to file");
                f.write_all(b"\t").expect("unable to write to file");
                format += 1;
            }
            if format == 10 {
                f.write_all(b"\n").expect("unable to write to file");
                format = 0;
            }
        }
    }
}

fn prime(x: u128) -> bool {
    if x == 4 || x == 6 || x == 8 || x == 9 {
        // The loop doesn't quite work for numbers below 10, so this is for those numbers
        return false;
    }
    for i in 2..((x as f64).sqrt() as u128) {
        // modulo to see if the number is divisible by variable i
        if x % i == 0 {
            return false;
        }
    }
    true
}

Answer: In addition to l0b0's answer:

The formatting is inconsistent. You can run cargo fmt to clean it up.

Personally, I prefer to rewrite the use declaration in a tree-like manner:

use std::{
    fs::OpenOptions,
    io::{self, BufWriter, Write},
};

The path can be made into a const:

const PATH: &str = "path/to/file.txt";

format is not a descriptive name.

A helper function simplifies the input process by eliminating the code duplication.
Here's a modified version, using the sieve of Eratosthenes:

use {
    anyhow::Result,
    bitvec::prelude::*,
    itertools::Itertools,
    std::{
        fs::File,
        io::{self, prelude::*},
        ops::Range,
    },
};

const PATH: &str = "path/to/file.txt";
const N_COLUMNS: usize = 10;

fn main() -> Result<()> {
    let start = input("Enter start of range: ")?;
    let end = input("Enter end of range: ")?;
    let table = sieve_to(end);
    write_primes(&table, start..end)?;
    Ok(())
}

fn sieve_to(end: usize) -> BitVec {
    let mut table = bitvec![1; end];

    // set table[0] and table[1] to false
    for cell in table.iter_mut().take(2) {
        cell.set(false);
    }

    // floor(sqrt(end))
    let limit = num::integer::sqrt(end);

    for number in 2..limit {
        if !table[number] {
            continue;
        }
        for multiple in (number..end).step_by(number).skip(1) {
            table.set(multiple, false);
        }
    }

    table
}

fn input(message: &str) -> Result<usize> {
    eprint!("{}", message);
    let mut line = String::new();
    io::stdin().read_line(&mut line)?;
    Ok(line.trim().parse()?)
}

fn write_primes(table: &BitSlice, range: Range<usize>) -> Result<()> {
    let mut file = File::create(PATH)?;
    writeln!(
        file,
        "{}",
        range
            .filter(|&n| table[n])
            .chunks(N_COLUMNS)
            .into_iter()
            .map(|row| row.format("\t"))
            .format("\n"),
    )?;
    Ok(())
}

Cargo.toml:

[package]
name = "prime"
version = "0.1.0"
authors = ["L. F."]
edition = "2018"

[dependencies]
anyhow = "1.0"
bitvec = "0.20"
itertools = "0.9"
num = "0.3"

I limited the program to one loop per execution, since overwriting the same file again and again does not seem useful to me.
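One thing neither answer spells out: the special-casing of 4, 6, 8 and 9 in the original prime() is needed because the exclusive range 2..sqrt(x) never tests a divisor equal to the (truncated) square root; the same bug also makes the original report 25 and 49 as prime. Trial division is robust when the bound is inclusive, e.g. i * i <= x. A generic sketch of that form in C++ (an illustration of the bound, not a drop-in replacement for the Rust code):

```cpp
#include <cassert>
#include <cstdint>

// Trial division with an inclusive bound: test every i with i * i <= x.
// This makes the 4/6/8/9 special case unnecessary, because the loop no
// longer stops strictly below sqrt(x) and so cannot skip a divisor equal
// to the square root (e.g. 5 for 25).
bool isPrime(std::uint64_t x)
{
    if (x < 2) {
        return false;
    }
    for (std::uint64_t i = 2; i * i <= x; ++i) {
        if (x % i == 0) {
            return false;
        }
    }
    return true;
}
```

Writing the bound as i * i <= x also avoids the floating-point round trip through sqrt, which can misbehave for large 64-bit (or 128-bit) inputs.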
{ "domain": "codereview.stackexchange", "id": 40316, "tags": "primes, rust" }