Arsenic (V) Reduction to Arsenic (III) by Microorganisms
Question: I am currently doing research on arsenic toxicity in microorganisms, and I learned about arsenic (V)/(III) cycling. Arsenic (III) (usually in the form of arsenite) is generally 50 times more toxic to biological life; however, most bacteria (or at least some) reduce the less toxic arsenate into arsenite. I just do not understand why these organisms would reduce arsenate into something more toxic. This doesn't seem to be an accidental reduction in which the microorganism mistook arsenate for phosphate, as there are plasmids specifically encoded with genes that reduce arsenate. Why is this happening? How would this be an advantage in these organisms and not a disadvantage? Answer: You can find a great and recent review on this topic, which is thankfully also free to access, here: https://www.sciencedirect.com/science/article/pii/S2319417016000159 Arsenic resistance is ubiquitous among bacteria: virtually all of them maintain the ars operon. Among arsenic-resistant bacteria this confers no relative advantage, but arsenic was and remains a ubiquitous substance, and being resistant to it confers an advantage over those that aren't; which is why bacteria that save themselves the effort aren't abundant. The core to your answer is in the very last summary paragraph: ArsM transforms inorganic arsenic into a highly toxic organoarsenical species that kills off competing bacterial species and may also be responsible for carcinogenesis in animals. Competing microbial species have responded to this environmental pressure by evolving detoxification mechanisms for MAs(III). Some produce ArsI, which demethylates MAs(III) to less toxic inorganic As(III), while others produce ArsH, which oxidizes MAs(III) to nontoxic pentavalent MAs(V). It is likely that all of these processes are taking place in environmental microbial communities as bacteria, archaea, fungi, and protozoans constantly fight for dominance. 
Summarised in Figure 5: https://ars.els-cdn.com/content/image/1-s2.0-S2319417016000159-gr5.jpg The reason why ArsC and other arsenate reductases (reducing Ars(V) to the more toxic Ars(III)) evolved in the first place, setting off this whole Arsenic methylation cycle, is that they are required for resistance to arsenate, which became the predominant arsenic species after oxygen appeared in the atmosphere. Simply put: If I can make a poison that kills you but not me, why shouldn't I make it?
{ "domain": "biology.stackexchange", "id": 9274, "tags": "biochemistry" }
changing roslaunch parameter
Question: Hi! I would like to use an external webcam with a laptop that already has a built-in webcam which is mapped to /dev/video0. The uvc_camera node wants to use /dev/video0 and I would like to change it to /dev/video1. Is there any way to change the parameter from the command line? I know I could change the .launch file, but I don't want to touch it. I tried to run roslaunch as follows, but it didn't work: roslaunch uvc_camera camera_node.launch device:=/dev/video1 What is the best way to change a parameter in roslaunch without editing the original roslaunch file? Update 1: In the meantime I just found out that if I copy the original launch file to somewhere in my home directory, I can edit it and I can launch it from there as well. This was probably too obvious, but I was confused by having the package name after the roslaunch command. For example: cp /opt/ros/groovy/stacks/camera_umd/uvc_camera/launch/camera_node.launch /home/user/ros_workspace/ roslaunch /home/user/ros_workspace/camera_node.launch Originally posted by ZoltanS on ROS Answers with karma: 248 on 2013-04-05 Post score: 0 Answer: As you said, no parameters are defined for that launch script. It is possible to add them. Simple solution: make a copy of the script in a package of your own (e.g. my_camera_params), and edit to taste. Complete solution: open an enhancement ticket, fork that UMD git repository, add <arg> tags for all the parameters, submit a pull request to Ken Tossell. Originally posted by joq with karma: 25443 on 2013-04-05 This answer was ACCEPTED on the original site Post score: 2 Original comments Comment by Jeremy Zoss on 2013-04-05: You can also modify the (private) parameter value after the launch file has run, from the calling/external code. This technique will not help in this case, though, since the uvc_camera node only reads the parameter value at initial start-up.
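For reference, the enhancement joq describes would look roughly like this. This is only a sketch: the node type and parameter name inside camera_node.launch are assumptions here and may differ from the actual UMD package.

```xml
<launch>
  <!-- Expose the device path as an overridable argument -->
  <arg name="device" default="/dev/video0" />
  <node pkg="uvc_camera" type="uvc_camera_node" name="uvc_camera">
    <param name="device" value="$(arg device)" />
  </node>
</launch>
```

With such an <arg> tag in place, the originally attempted command, roslaunch uvc_camera camera_node.launch device:=/dev/video1, would work as intended.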
{ "domain": "robotics.stackexchange", "id": 13693, "tags": "ros, roslaunch, parameter, uvc-camera" }
What's the point of having an einbein in your action?
Question: One often comes across actions written with an extra auxiliary field, with respect to which, if you vary the action, you get the equation of motion of the auxiliary field, which when plugged into the original action lets you retrieve a more familiar looking action, without the auxiliary field. An example is: $$\begin{equation*} S=\int d\tau e^{-1}(\tau)\eta_{\mu\nu}\frac{dx^{\mu}}{d\tau}\frac{dx^{\nu}}{d\tau} \end{equation*}$$ which when solved for the einbein $e$ gives you the familiar square root form of the action. Why would you want this extra auxiliary field apart from having an action which is not in an annoying square root form? Why, for example, does Green-Schwarz-Witten (GSW) mention on Page 18 of "Superstring Theory", that "the role of $e(\tau)$ is to ensure the action is invariant under reparametrizations of $\tau$", when you can have reparametrization invariance even in the square root form of the action? Answer: First, you and Moshe may have somewhat misunderstood the comment that "the purpose of introducing $e$ is to guarantee the reparameterization symmetry of the action". This wasn't meant to express the circular argument that we need to add a redundant field in order to have a redundancy. Instead, I think that the authors simply wanted to write that "in order for this action - which we already chose to study - to be equivalent to the einbein-frei form, the new field $e$ has to be unphysical, so the reparameterization has to be ensured, so the factor of $e$ has to be included." If the $e$ were simply omitted in this form of the action, the action wouldn't be reparameterization-invariant, so it wouldn't be equivalent to the "proper length" of the world line we want to describe: the proper length of a world line is independent of how we parameterize the world line to calculate the integral (the proper length), so any equivalent description has to have the same property. 
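To make the equivalence concrete, here is the standard elimination of the einbein. Note this is written for the massive form of the action (the $-e\,m^2$ term does not appear in the version displayed above, but it is needed for the square root to reappear):

$$S=\frac{1}{2}\int d\tau\left(e^{-1}\,\eta_{\mu\nu}\frac{dx^{\mu}}{d\tau}\frac{dx^{\nu}}{d\tau}-e\,m^{2}\right)$$

Varying with respect to $e$ gives $-e^{-2}\dot{x}^{2}-m^{2}=0$, i.e. $e=\sqrt{-\dot{x}^{2}}/m$ (in a mostly-plus signature, where $\dot{x}^{2}<0$ for timelike worldlines). Substituting this back,

$$S=\frac{1}{2}\int d\tau\left(\frac{m\,\dot{x}^{2}}{\sqrt{-\dot{x}^{2}}}-\frac{\sqrt{-\dot{x}^{2}}}{m}\,m^{2}\right)=-m\int d\tau\,\sqrt{-\dot{x}^{2}},$$

which is the familiar square-root (proper-time) action, with no $e$ left over.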
Motivation for gauge redundancies In this particular case, the main technical reason is that by introducing the new auxiliary variable, the einbein, you get rid of the square root. Note that without this variable, the action would depend on the standard Lorentz gamma factor, $1/\sqrt{1-v^2/c^2}$. From the Lagrangian, it would imprint itself into the Hamiltonian as well. Hamiltonians that depend on square roots of functions of observables are hard - relative to what you get with $e$. In the world sheet generalization, the Hamiltonian would even include an integral of a square root which is really bad - especially if you want to Fourier-transform things. On the other hand, the underlying physics actually doesn't need the messiness as can be seen by introducing clever redundancies. There is no square root in the action including the einbein - but it's still equivalent. By introducing this fake degree of freedom, or a redundancy, we actually convert the equations of motion for $x$ to a simple wave equation, $\nabla_\mu\nabla^\mu x^\alpha=0$. Well, it's a wave equation in at least $1+1$ dimensions. In $0+1$ dimensions, it's just an equation for a uniform motion of $x^\alpha$. Case of other gauge symmetries The reparametrization of $\tau$ is a redundancy of the system, but so is any gauge symmetry, for example the colorful $SU(3)$ in QCD or the diffeomorphism group in general relativity (which is nothing but a higher-dimensional generalization of this einbein example: $e(\tau)\equiv \sqrt{g_{00}}$ is the redundant field, and gravity has no physical components in $0+1$ dimensions). You could also work without it - and the twistor description of $N=4$ $d=4$ $SU(N)$ gauge theory actually has no locality so it has no local gauge symmetry in spacetime. 
Other descriptions that don't have any gauge redundancy include the unitary gauge in spontaneously broken gauge theories; AdS/CFT that replaces the redundancy by a totally different one in the dual description; the S-dual description that replaces the gauge group with the magnetic monopoles' dual gauge group; and many others. The purpose of having local symmetries - or gauge redundancies - is always either to simplify the equations or to make some of their symmetries manifest. In particular, if you want elementary particles with spin equal to one or higher, and if you want a manifest Lorentz symmetry in spacetime and manifest locality, you need local vector- or tensor-valued fields such as $A_\mu$ and $g_{\mu\nu}$ that also have time-like components. In a quantum theory, the time-like components would produce negative-norm states: the theory could predict negative probabilities if these polarizations stayed in the spectrum. Of course, gauge invariance is the only consistent way to remove them. Because gauge symmetry is a "symmetry" in the sense that it commutes with the Hamiltonian, the gauge invariance of the initial state guarantees the gauge invariance of the final state, so unphysical, gauge-dependent states are not produced by the evolution. At any rate, the choice of a redundant equivalent description is always made in order to make some or several of the properties below manifest:
Unitarity combined with manifest Lorentz invariance: gauge symmetry is needed to remove the ghosts
Locality: one needs local tensor fields
Simple, harmonic-oscillator-like underlying character of the dynamics that would be masked by square roots in the einbein-frei formulation (typical in the case of diffeomorphisms of the world volumes)
Cheers, LM
{ "domain": "physics.stackexchange", "id": 369, "tags": "lagrangian-formalism, field-theory, action, point-particles" }
Does all the matter of a black hole eventually fall into its central singularity, and why?
Question: I'm not sure whether a collapsing star which forms a black hole actually must collapse to a single point. In the Schwarzschild metric for a given r it is assumed that all matter is below r. Therefore, a particle is always moving on a timelike trajectory, and inside Rs all such trajectories end up at r=0. But this doesn't take into account that only a part of the total mass is below any actual point within the star. Comparing with earth's gravitational field, I would assume that gravitation gets smaller when going into the massive star. Doesn't that have an influence on the geometry? To take an extreme example, what about a sphere with a void inside? I see no reason why all matter should end up in the center, because there is - at least classically - not even an attractive force within the void. The same consideration could be done when assuming constant density of the star and the radius we consider gets smaller and smaller. What causes the outer mass to be attracted to the center? Is it only the inner mass? If yes, this attraction gets smaller and smaller as we move towards the center. Why is there a point-like singularity where all of the mass of the black hole will concentrate? I know this is totally handwaving, but what would be the effect of not taking a point mass for calculating the metric? I hope it is clear what I mean... Answer: The collapsing star can be treated as a sequence of three cases: first, a star which is not collapsing, then the situation during collapse, then the final black hole with a singularity and vacuum everywhere else. For a star which is not collapsing, let's first compare with the situation in Newtonian gravity for a uniform spherical ball. Outside the ball the gravitational field falls away as $GM/r^2$. But inside the ball it is different: the field is $G M r/R^3$. And the case of an empty shell is different again (no field at all inside the shell). 
The corresponding situation for a spherical star in General Relativity is that outside the star we have the Schwarzschild metric and inside the star there is another metric. It will depend on the matter distribution. It does not have any singularity and it will reproduce the Newtonian predictions in the limit of small density. So far so good. Now suppose our star undergoes gravitational collapse. At first there will not be anything special happening at the centre, except that the density there is growing. But if at any stage the matter within some $r$ has a mass such that the Schwarzschild radius associated with that mass is greater than or equal to $r$, then we have that this part of the star, at least, has fallen past a horizon and cannot emerge. In this case that matter cannot avoid falling to $r=0$ and the density there will increase in a run-away process; no force is strong enough to prevent it. Of course whenever we encounter a singularity we must suspect that our physical model is running out of validity but the main point is that either the curvature goes to infinity or some other unknown physics intervenes. What happens to any mass which is not within a horizon is that either it may be thrown off if there is any sort of explosive or radiative process going on, or else it too will fall and will eventually cross a horizon. Or it could end up in orbit if we have some angular momentum, but this won't happen in a spherically symmetric case. But any matter which does fall inwards past a horizon will then reach the singularity in a small amount of proper time. All the above uses the terminology of "first this, then that" somewhat loosely. If we adopt coordinate time as in the Schwarzschild metric then the collapse process will itself be infinitely slow near the horizon, such that the coordinate time for events where material crosses the horizon is infinity. Then you have all the usual puzzles about how to discuss this fact without confusing ourselves.
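The horizon criterion in the answer ("the matter within some r has a mass whose Schwarzschild radius is greater than or equal to r") can be sketched numerically for a uniform-density ball. This is illustrative only: it uses the Newtonian enclosed mass, and the density value is an assumption, roughly nuclear density.

```python
# Horizon condition for a uniform-density ball: the matter inside radius r
# is trapped once its Schwarzschild radius 2*G*M(r)/c^2 reaches r.
import math

G = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8     # speed of light, m/s

def enclosed_mass(rho, r):
    """Mass inside radius r of a uniform ball of density rho (kg/m^3)."""
    return rho * (4.0 / 3.0) * math.pi * r**3

def schwarzschild_radius(m):
    return 2.0 * G * m / c**2

def horizon_inside(rho, r):
    """True once the matter within r has fallen inside its own horizon."""
    return schwarzschild_radius(enclosed_mass(rho, r)) >= r

# Since M(r) grows like r^3, the condition is met at sufficiently LARGE r
# for a fixed density; solving 2*G*M(r)/c^2 = r gives the critical radius:
rho = 4e17  # kg/m^3, an assumed density of roughly nuclear order
r_crit = math.sqrt(3 * c**2 / (8 * math.pi * G * rho))  # ~20 km here
```

At this assumed density any region larger than about 20 km is already inside its own horizon, which matches the answer's point that once the condition is met for some r, that part of the star cannot avoid falling to r=0.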
{ "domain": "physics.stackexchange", "id": 81682, "tags": "general-relativity, black-holes, metric-tensor" }
How can I conduct an experiment on the burning power of a 2 inch cubic block of wood?
Question: I'm trying to help my son put together a science project and the hypothesis that we've come up with is this: If you take cubes of wood of the same volume (say 2 inches cubed) and start them burning with the same amount of heat over 1 liter of water, those woods with a higher density of mass will produce a greater increase in the temperature of the water. But what I am struggling with is: can I determine the mass or the density first? Or can I only determine the volume by water displacement or something? Basically, is there a way to determine the mass/density in some type of units before I burn each piece of wood? Answer: Cut your wood samples into accurate cubes. Measuring the cubes then lets you calculate their volume. Then weigh each cube. Now you can calculate their density. Assuming the wood is dry, the amount of chemical potential energy contained in each cube is proportional to the mass of the cube. Since different woods have different densities, a 2 inch cube of eastern hard maple (high density) will contain more energy than an identical cube cut from western cedar (low density). Burning the wood samples to heat water out in the open will transfer only a fraction of the heat released by burning the wood into the water, and once warm, that water will be continually losing its heat to its surroundings. This will make it difficult for you to use the heating-water technique to measure the "heat content" of your fuel. Chemists and engineers use (among other things) a device called a bomb calorimeter to make these sorts of measurements accurately.
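The measurement plan in the answer (measure the cube, weigh it, divide) is simple arithmetic; here is a sketch, where the example masses are made-up placeholders rather than real measurements:

```python
# Density from a measured cube: volume = side^3, density = mass / volume.
def density_g_per_cm3(side_cm, mass_g):
    """Density in g/cm^3 of a cube with the given side length and mass."""
    return mass_g / side_cm ** 3

# Hypothetical 2-inch (5.08 cm) cubes:
maple = density_g_per_cm3(5.08, 92.0)  # a dense hardwood, ~0.70 g/cm^3
cedar = density_g_per_cm3(5.08, 48.0)  # a light softwood, ~0.37 g/cm^3
```

Since the chemical energy at fixed volume scales with mass, the denser cube should heat the water more, which is exactly the hypothesis being tested.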
{ "domain": "physics.stackexchange", "id": 45092, "tags": "mass, volume" }
A basic question about tetramethylammonium hydroxide --strong or weak?
Question: This is a follow-up to the question here. There is an answer claiming that tetramethylammonium hydroxide (TMAH) is a suitable strong base that can be used instead of the usual alkalies to reach pH 14. However, Wikipedia says that TMAH is only a weak base. But, when I state this I get comments that other sources document TMAH as a strong base. The other sources, however, just mention it in passing and do not offer direct proof. Can anyone provide a reference that actually does settle this question? Thanks! Note: At the time the question was posed Wikipedia did make the weak base claim. It has since been edited, and it now appears that the edit is correct. See the answers. Answer: TETRAMETHYLAMMONIUM HYDROXIDE Journal of the Chemical Society 87 (1905): 955-961: Practically the preparation of tetramethylammonium hydroxide from its salts resolves itself into a question of solubility as follows. In general, the equation $$\ce{NMe4X + MOH = NMe4OH + MX}$$ will represent a real action proceeding nearly to completion if M, X, and the solvent are so chosen that all the substances except MX shall be soluble, or at least that MX shall be much less soluble than either of the original reacting substances. This principle was applied by Hofmann in his preparations with water as solvent. For M and X he chose either the pair Ag,I or Ba,SO4, in both of which cases MX is practically insoluble in water. It is clear that if the general application of the above principle is justifiable, tetramethylammonium hydroxide may be prepared from a tetramethylammonium salt by means of potassium hydroxide if we so choose X and the solvent that of the substances represented in the equation $$\ce{NMe4X + KOH = NMe4OH + KX}$$ all shall be soluble except KX. ... An estimation of the strength of the base by means of the velocity of saponification of methyl acetate in N/80 solution showed that it was somewhat weaker than sodium hydroxide. 
The velocity constants obtained at 25 [degrees] were 0.0106 and 0.0115 respectively, so that if the strength of sodium hydroxide is represented as 100, that of tetramethylammonium hydroxide, under the above conditions, will be represented by 92. So by two independent methods it is shown that tetramethylammonium hydroxide is much stronger than a base of pKb = 4.2 would be. Firstly, the halide cannot be converted to the hydroxide by adding metal hydroxide unless an insoluble halide is produced to force the equilibrium forward. Secondly, it is shown by the velocity of saponification to be only slightly weaker than NaOH. See also Essential Principles of Organic Chemistry (1956): It follows that a quaternary ammonium hydroxide is completely dissociated and should be a strong base, as is sodium hydroxide. The experimental facts are entirely consistent with this view. Although trimethylamine in water solution gives only a small concentration of hydroxide ion, tetramethylammonium hydroxide gives a full molar equivalent of hydroxide ion in solution.
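The 1905 comparison quoted above is simple arithmetic on the two saponification rate constants, and can be checked in a couple of lines:

```python
# Relative base strength from saponification velocity constants at 25 C
# (N/80 solution), values as reported in J. Chem. Soc. 87 (1905) 955-961.
k_tmah = 0.0106  # tetramethylammonium hydroxide
k_naoh = 0.0115  # sodium hydroxide

relative_strength = 100 * k_tmah / k_naoh  # NaOH normalized to 100
```

This reproduces the paper's figure of 92, i.e. TMAH behaves as nearly as strong a base as NaOH.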
{ "domain": "chemistry.stackexchange", "id": 9876, "tags": "acid-base, reference-request" }
Checking whether one string is a rotation of another
Question: Given a method isSubstring, two strings s1 and s2, check whether s1 is a rotation of s2 using only one call to isSubstring. Any comments on the code below? I would like to check the complexities as well:
For isSubstring: runtime: \$O(n^2)\$, memory: in place
For isRotation: runtime: \$O(n^2)\$, memory: \$O(n)\$

import static org.junit.Assert.*;
import org.junit.Test;

public class Solution {
    // Find whether string s1 is a rotation of string s2
    // using only one call to isSubstring
    static boolean isSubstring(String s1, String s2) {
        // return true if s1 is a substring of s2
        // false otherwise
        int n1 = s1.length();
        int n2 = s2.length();
        for (int i = 0; i < n2; i++) {
            if (i + n1 > n2) {
                return false;
            } else if (s1.equals(s2.substring(i, i + n1))) {
                return true;
            }
        }
        return false;
    }

    static boolean isRotation(String s1, String s2) {
        // return true if s2 is a rotation of s1
        // false otherwise
        // concatenate all the rotated strings of s1
        StringBuffer allRotated = new StringBuffer();
        int n1 = s1.length();
        int n2 = s2.length();
        if (n1 != n2) {
            return false;
        }
        allRotated.append(s1);
        for (int i = n1 - 1; i > 0; i--) {
            allRotated.append(s1.substring(i) + s1.substring(0, i));
        }
        // Return true if s2 is a substring of allRotated
        String rotated = allRotated.toString();
        return isSubstring(s2, rotated);
    }

    @Test
    public static void abcTest() {
        String abc = "abc";
        String cab = "cab";
        String bca = "bca";
        assertTrue(isRotation(abc, cab));
        assertTrue(isRotation(abc, bca));
    }

    @Test
    public static void falseTest() {
        String abc = "abc";
        String cba = "cba";
        assertFalse(isRotation(abc, cba));
    }

    @Test
    public static void evenTest() {
        String abcd = "abcd";
        String dcba = "dcba";
        assertFalse(isRotation(abcd, dcba));
    }

    @Test
    public static void identityTest() {
        String aaaa = "aaaa";
        assertTrue(isRotation(aaaa, aaaa));
    }

    public static void main(String[] args) {
        abcTest();
        falseTest();
        evenTest();
        identityTest();
    }
}

Answer: I have a couple of comments on the code that you have posted. 
First, your isSubstring function is not checking for a 0 length n1 string, so it will always return true for an empty string - is that what you want? Second, your isSubstring function is calling s2.substring for every possible position until it finds a match. You can limit the number of calls by performing a simple pre-check to see if s1.charAt(0) == s2.charAt(i). If this is not true, then you know the substring will not match, saving you the overhead of calling the substring() function and creating the additional substring, and then performing the full String.equals() comparison. Third, your rotation function is overly complex. Since you're just doing an inString comparison, you only have to concatenate the input string to itself once, and then all valid rotations will be a substring of it. To show the example visually, given an input string of "abcd", and the valid rotations of "bcda", "cdab", and "dabc": abcdabcd bcda cdab dabc All of the rotations are substrings of the double-concatenated version. This accomplishes the same result of your function, but uses \$O(2n)\$ memory. You originally listed yours as using \$O(n)\$ memory, but that is too low - you are using \$O(n^2)\$ memory, because you are concatenating all the permutations of the rotations. Finally, your runtime \$O(n^2)\$ calculations aren't quite right, because you are short-circuiting out once you know that the string being searched is too long to be in the remainder. Unless the input string is 1 character, then the number of permutations being checked is actually smaller - and if you add in the first character optimization I listed above, the total cost of the function would go down even further.
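The reviewer's doubled-string idea is compact enough to sketch directly (shown in Python here rather than Java, purely for brevity):

```python
# s2 is a rotation of s1 iff the lengths match and s2 appears inside s1+s1.
# This uses one substring check and O(n) extra memory for the doubled string.
def is_rotation(s1, s2):
    return len(s1) == len(s2) and s2 in s1 + s1
```

The test cases from the post (abc/cab, abc/bca, abc/cba, abcd/dcba, aaaa/aaaa) all behave the same way under this version.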
{ "domain": "codereview.stackexchange", "id": 12528, "tags": "java, algorithm, strings" }
Computing narrowband spectrograms using MATLAB
Question: I have an assignment on computing narrowband spectrograms using MATLAB, and I totally have no idea about the code. Can someone help me write the code and explain what's going on? Lots of thanks!! The requirements are as follows: B.1. Load the recorded .wav file using the function "wavread". Let y be the variable storing the signal; B.2. Perform short-time Fourier transform on y as follows: (i) Divide the signal into many short-time frames. Each frame is 50 msec long. Frame shift is 1 msec; (ii) Apply 1024-point FFT on each frame; (iii) Store the power spectrum of each frame in a column vector (note that only the first 512 FFT coefficients are needed). (iv) Append the power spectrum vectors of all frames to form super matrix S. The size of the matrix will be 512 × no. of frames; (v) Plot the spectrogram using the following MATLAB commands Answer: No one is going to give you full working code - the point is that you should learn and understand how to do it. Here are some guidelines for you: As given, use the wavread function to load the recording (please mind that in the newest 2014a version this function is deprecated; you should use audioread instead). MATLAB has wonderful help with very good examples, so simply use it. Having your signal loaded, together with its sampling frequency, you need to split it into chunks (many short-time frames). This can be done in two ways: either using a loop construct (for, while - whichever you prefer), or (my favourite) using the MATLAB buffer function (consider using the 'nodelay' option, as the first frame will otherwise have zeros in it up to the overlap length). Note that it requires the Signal Processing Toolbox license (I'm sure your university has it). This function allows you to specify the length of frame, overlapping and other things. As a result you will obtain a matrix whose columns are your signal frames. 
Having the sampling frequency from wavread, it is very easy to calculate the corresponding length of your frame in samples for a given number of milliseconds. Now you can iterate over the columns of your matrix and calculate the FFT for each of them. You might want to use the fft function for this. As an extra parameter you can specify the length of your signal (to apply zero padding if needed). So let's say that you now have the FFT of the first frame. Take its magnitude (abs), and choose the first 512 samples. Store this result into the first column of your super matrix $S$. Optional step: before calculating the FFT, I suggest you multiply your frame by a window function, e.g. a Hamming window. Now you have the matrix $S$, so plotting can be done using various functions (pcolor, imagesc, and so on - pick whichever you prefer). I suggest you compare the final result with the MATLAB built-in spectrogram function. Good luck!
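For readers outside MATLAB, the framing and STFT arithmetic the answer describes can be sketched as follows (in Python/NumPy here, deliberately not the requested MATLAB solution; the function name is made up):

```python
# Narrowband spectrogram: 50 ms frames, 1 ms shift, 1024-point FFT,
# keep the first 512 power-spectrum bins per frame as one column of S.
import numpy as np

def narrowband_spectrogram(y, fs, frame_ms=50, shift_ms=1, nfft=1024):
    frame_len = int(round(fs * frame_ms / 1000))  # 50 ms -> samples
    shift_len = int(round(fs * shift_ms / 1000))  # 1 ms  -> samples
    window = np.hamming(frame_len)                # optional windowing step
    cols = []
    for start in range(0, len(y) - frame_len + 1, shift_len):
        frame = y[start:start + frame_len] * window
        spectrum = np.fft.fft(frame, nfft)        # zero-padded to nfft
        cols.append(np.abs(spectrum[:nfft // 2]) ** 2)  # first 512 bins
    return np.column_stack(cols)                  # 512 x no. of frames
```

A 1 kHz test tone at fs = 8000 Hz should produce a single bright row near bin 128 (1000 / (8000/1024)).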
{ "domain": "dsp.stackexchange", "id": 1698, "tags": "stft, transform, fourier" }
What are the consequences of drinking water with food?
Question: Does drinking water with a meal release less energy/glucose into the blood stream due to a diluted digestive mixture? For simplification: if a person eats 100 grams of cooked jasmine rice with no water intake, digestion over 90 minutes releases, say, 100 calories. Then, if the person eats 100 grams of cooked jasmine rice and drinks 50 ml of water along with it (during/after), basically halving the power of the digestive juices, will the energy released then be 50 calories? Answer: A moderate amount of water while eating will not dilute digestion ...according to Michael F. Picco, M.D. and the Mayo Clinic: There's no concern that water will dilute the digestive juices or interfere with digestion. In fact, drinking water during or after a meal actually aids digestion. Water and other liquids help break down food so that your body can absorb the nutrients. Water also softens stool, which helps prevent constipation. Water, after all, is the "universal solvent" which should help break food and organic materials down further. Effect of Stomach Acid Dilution This is a total myth, especially if you consider it from a purely chemical perspective. Stomachs secrete about 400 to 700 mL of gastric acids per meal. Just to be conservative, let's say 500 mL. Also, the average stomach has a pH of about 2. Now let's look at the formula for pH and what the effects of adding water might be: pH = log(1/(mols/volume)) 2 = log(1/(mols/0.5L)) Now solve for mols, and we get mols = 0.005 Now let's assume our meal contains about a quarter of a liter of water in it. We'll also assume that our food is completely pH neutral. This is probably a little high on water concentration, but let's see its effect on our stomach's pH. Since it has a neutral pH, the mols do not change, but our volume does (0.5L [previous contents] + 0.25L [from the food] = 0.75L). pH = log(1/(0.005 mols/(0.5L + 0.25L))) Now solve for pH and we get pH = 2.18. 
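The dilution arithmetic above is easy to check in a few lines, under the same assumptions as the answer (0.5 L of pH-2 gastric contents, added liquid is pH-neutral):

```python
# Gastric pH after adding pH-neutral water to 0.5 L of pH 2 stomach contents.
import math

mols_h = 10 ** -2.0 * 0.5  # 0.005 mol of H+ at pH 2 in 0.5 L

def ph_after_adding(extra_liters):
    volume = 0.5 + extra_liters
    return math.log10(volume / mols_h)  # pH = log10(volume / mols of H+)
```

A quarter liter of water in the meal moves the pH from 2 to about 2.18, consistent with the calculation above.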
No problem here, since normal digestion occurs between a pH of 1.5 to 3.5. Finally, let's say we drink a lot of neutral pH water with our meal now. pH = log(1/(0.005 mols/(0.75L + 4.25L))) pH = 3 So we are still within a normal pH of digestion, even though we just drank over 4.25 liters (~1.12 gallons) of water with our fairly liquid meal of 0.25L! We would need to drink about another 45 L of water to knock our gastric pH above 4. But don't worry, we would die of water intoxication well before that ever happens. Furthermore, our stomachs (and digestive tract) are incredibly good at adapting their secretions to the consistency of a meal. Water and Speed of Digestion Perhaps another way that water may change caloric intake is to increase the speed at which solid foods exit the stomach. This is thought to reduce the meal’s contact time with stomach acid and digestive enzymes, resulting in poorer digestion. As logical as this statement may sound, no scientific research supports it. A study that analyzed the stomach’s emptying speed observed that, although liquids do pass through the digestive system more quickly than solids, they have no effect on the overall solids’ digestion speed. Bottom Line Water during meals should not decrease the amount of calories absorbed. With all that being said, digestion is a very complex system, and there are still many unknowns as well as individual responses. Before making drastic changes concerning your health and digestion, you should consult a medical professional. This answer is in no way meant to replace medical advice or treatment, but rather to aid your understanding of how digestion works. A Final Word of Caution I feel obligated to add that, however you found this question/answer, staying hydrated to aid digestion is a very healthy habit, especially when considering weight loss and 'feeling full for longer.' 
However, obsessive water consumption aimed at decreasing food consumption/appetite ("fluid loading" or "water loading") is a common symptom of eating disorders. If this is a concern for you or someone you know, please, seek professional support and help. Sources: NCBI - Adaptive Secretions NCBI - Digestion Speeds www.mayoclinic.org water.usgs.gov
{ "domain": "biology.stackexchange", "id": 5869, "tags": "digestive-system, digestion" }
Can the fusion and fission of a group of atoms occur infinitely?
Question: Is it possible to split a nucleus and put it back together? If so, is it feasible to do it an indefinite number of times, i.e. without them wearing out or failing to stick together once again? Answer: That is a 'what if' multi-layered question, so let's start with building the hypotheses to make this work until we can arrive at a conclusion. So what can we do to make the aforementioned process work? 1st scenario: We have an 'open system', that is, any energy that might be emitted is lost. This would cause the trial to fail. Any system (nucleus) has an associated energy: the rest mass of the system (in grams), and the nuclear bond energy. If this nucleus goes through fission, it breaks into two smaller nuclei and releases energy through gamma-ray photons. As stated previously, these gamma-ray photons would just vanish under the conditions of the experiment. Then, the total energy of the system would be less than that of the original, making it impossible to recreate the initial nucleus. 2nd scenario: The system is closed and we could somehow store all the energy that is released and then reuse it in further reactions; the answer is a bothersome 'depends'. Some nuclei already do that in nature; others don't, because the reaction of their products doesn't result in themselves. 2.1 scenario: The nucleus is that of the kind initial nucleus => lighter element + lighter element, but lighter element + lighter element =X=> initial nucleus. Then no; if we take for example Lithium-7 + proton, it would result in Lithium-7 + proton => Beryllium-8, and Beryllium-8 would then be likely to decay in attoseconds into two nuclei of Helium-4. And it doesn't matter how you smash those two Helium-4 together, they will form Beryllium-8 again only for the latter to disband once again, never returning to the Lithium-7 + proton state. Other examples are possible, such as H-2 + H-3 => H-4 + proton, but H-4 + proton =X=> H-2 + H-3. 
2.1.1: Curiosity: the addition of one more Helium-4 to the Be-8 would create Carbon-12, just like the Helium-fusion reactions that are only found in stars, the triple-alpha process. "Alpha" is the name of a Helium-4 nucleus (see: alpha radiation), and "triple" because there are three alphas involved. 2.2 scenario: The nucleus is of the kind initial nucleus => lighter element + lighter element, and lighter element + lighter element => initial nucleus. Then it is very possible, and it even happens in nature itself. Remember the cycle Be-8 => He-4 + He-4 => Be-8 => ...? There you have it: smash two Helium-4 nuclei together and you have Be-8, which will undergo fission to generate two He-4, which could be fused again to finally create a cycle of fusion-fission; actually, this already happens in Helium-fusing stars, until the Be-8 successfully fuses with another He-4 to form Carbon. Other examples are possible, such as Lithium-4 => Helium-3 + proton => Lithium-4. Note: proton and Hydrogen-1 are used as the same thing, a proton alone. Note 2: H-2 = Deuterium; H-3 = Tritium; H-4 = Hydrogen-4
{ "domain": "physics.stackexchange", "id": 46249, "tags": "energy, nuclear-physics, mass-energy, fusion" }
How to interpret binary classification metrics on an imbalanced data set?
Question: I have an imbalanced dataset on intrusion detection. I have (attack class) 3668045 samples and (benign class) 477 samples. I made a 70:30 train-test split. My problem is to predict whether a given node belongs to the attack class or the benign class. As a first step, I trained a decision tree model on the dataset without using any balancing technique. I obtained the following results for my model on the test set using the sklearn metrics. Scores for Decision Tree Accuracy: 0.9998991419799247 True positive 1100391 True Negative 55 False Positive 86 False Negative 25 F2-score 0.9999661949775551 Precision 0.9999218520696025 Recall 0.9999772813190648 F1-score 0.9999495659261946 Log loss: 0.0034835750853569407 Decision Tree : AUROC (ROC Curve) = 0.999 Decision Tree : AUPR(Precision/Recall curve) = 1.000 Classification Report precision recall f1-score support 0 0.69 0.39 0.50 141 1 1.00 1.00 1.00 1100416 accuracy 1.00 1100557 macro avg 0.84 0.70 0.75 1100557 weighted avg 1.00 1.00 1.00 1100557 Why am I getting high, almost perfect AUROC and AUPR scores, even though the precision and recall for my minority class are very low? What measures can I take to improve the results such that they are not biased and my model is generalizing well? How can I ensure that? Answer: As you point out, your results in this case are biased. With a skewed target distribution, even a simple train-test split can lead to training your model with far more of one class than the other. There are multiple ways to prevent this, depending on how imbalanced (skewed) your dataset is. If the imbalance is not severe (e.g. a 40:60 target distribution) you could use: Cross-validation Stratified cross-validation A stratified train-test split If the dataset is severely imbalanced, then you might try balancing techniques like: Perform under- or over-sampling. Given the amount of data that you have, it might be better for you to consider under-sampling. 
Generate synthetic data based on prior knowledge Create synthetic data using SMOTE Use penalised models that penalise making classification mistakes on the minority class Depending on your goal and the severity of the imbalance, you might consider simply focusing on either high precision or high recall. It might also be worth considering changing your perspective (i.e. maybe this is not a classification problem; it could be more of a "detection" problem)
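The stratified-split and penalised-model suggestions above can be sketched with scikit-learn. The data below is synthetic and purely illustrative (a made-up 100:1 class ratio, not the asker's dataset); `stratify=y` preserves the class ratio in both splits, and `class_weight="balanced"` penalises mistakes on the minority class:

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import classification_report

# Toy imbalanced dataset: 5000 "attack" vs 50 "benign" samples (illustrative only)
rng = np.random.default_rng(0)
X_major = rng.normal(0.0, 1.0, size=(5000, 4))
X_minor = rng.normal(2.0, 1.0, size=(50, 4))
X = np.vstack([X_major, X_minor])
y = np.array([1] * 5000 + [0] * 50)  # 1 = attack (majority), 0 = benign

# stratify=y keeps the 100:1 class ratio in both train and test splits
X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=0)

# class_weight="balanced" re-weights errors inversely to class frequency
clf = DecisionTreeClassifier(class_weight="balanced", random_state=0)
clf.fit(X_tr, y_tr)
print(classification_report(y_te, clf.predict(X_te), digits=2))
```

The per-class lines of the report (rather than accuracy or the weighted average) are what to watch on data this skewed.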
{ "domain": "ai.stackexchange", "id": 3532, "tags": "binary-classification, decision-trees, anomaly-detection, scikit-learn, imbalanced-datasets" }
Why is the equilibrium constant unaffected by a change of the initial concentration?
Question: In my class, it was taught that the equilibrium constant does not vary with the initial concentration of the reactants. Why is this so? Answer: Clearly the math tells you that the equilibrium constant is independent of the initial compositions: For $a\mathrm{A}+b\mathrm{B}\rightarrow c\mathrm{C}+d\mathrm{D}$, $$K_c=\frac{[\mathrm{C}]^c[\mathrm{D}]^d}{[\mathrm{A}]^a[\mathrm{B}]^b}$$ where $[\mathrm{C}]$, $[\mathrm{D}]$, $[\mathrm{A}]$, and $[\mathrm{B}]$ are concentrations at equilibrium. This makes some sense intuitively: the rate of the reaction depends only on the present concentrations of the compounds (and some other conditions like pressure and temperature), not on previous concentrations. And equilibrium is defined as the position where the rates of the forward and backward reactions are the same.
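The claim can be checked numerically with a kinetic sketch. For the simple reaction A ⇌ B with forward and backward rate constants kf and kb (arbitrary illustrative values, so Kc = kf/kb = 2), integrating the rate equations from two different initial concentrations gives different equilibrium concentrations but the same ratio [B]/[A]:

```python
# A <=> B with kf = 2.0, kb = 1.0, so Kc = kf/kb = 2.0 (illustrative values)
kf, kb = 2.0, 1.0

def equilibrate(A0, B0=0.0, dt=1e-3, steps=20000):
    """Forward-Euler integration of dA/dt = -(kf*A - kb*B) until equilibrium."""
    A, B = A0, B0
    for _ in range(steps):
        r = kf * A - kb * B   # net forward rate; zero at equilibrium
        A -= r * dt
        B += r * dt
    return A, B

for A0 in (1.0, 5.0):
    A, B = equilibrate(A0)
    print(f"A0={A0}: [A]={A:.4f}, [B]={B:.4f}, Kc=[B]/[A]={B/A:.4f}")
```

Both runs converge to [B]/[A] = 2.0 even though the absolute equilibrium concentrations differ, which is exactly the statement in the answer.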
{ "domain": "chemistry.stackexchange", "id": 10765, "tags": "thermodynamics, physical-chemistry, equilibrium" }
What are the strongest sources of collimated neutrons and protons?
Question: I am imagining an unusual experiment which will require intense beams of either protons or neutrons. The experiment would work better with neutrons, but neutron sources are much weaker and messier, so I am considering both. I am interested in beams of almost any non-relativistic energy. I am not sure yet about how focused I will need the beam to be. I know this is a beginner question, but what are the basic options here for the strongest source (as measured by nucleons per second)? Neutrons apparently come from two sources: reactors and accelerators. (As Sir Patrick Stewart can attest.) In the latter case ("spallation"), a proton beam is just slammed into a heavy-metal target and the neutrons that fly out are caught. It appears that the highest-intensity proton beams are often used to produce the highest-intensity neutron beams in this manner. Saclay High Intensity Proton Injector Project (IPHI): $6 \times 10^{17}$ protons/second. Low-Energy Demonstration Accelerator (LEDA): $6 \times 10^{17}$ protons/second. (These agree with Andre's chart below showing 100 mA. Note also that the LHC current is about 500 mA, although the protons are grouped into bunches. And as Andre pointed out, the LHC circulates the same protons and so could not provide a continuous 500 mA beam into a target/experiment that consumed them.) Neutron sources, being the result of either fission or spallation, are messy. I still don't have a good handle on what kind of neutron intensities are possible for what momentum ranges. The Lujan Neutron Scattering Center at LANSCE is a neutron source for many experiments. In spallation, the number of neutrons produced per incident proton is usually on the order of 40, but the phase-space distribution is much wider than that of the incident protons. (Proton beams can be directly focused using magnets.) 
According to a CERN report by Lengeler, reactors typically can produce thermal neutrons (~0.025 eV) fluxes of order $10^{14}$ neutrons/cm${}^2$s, whereas spallation sources can exceed $10^{17}$/cm${}^2$s. It is probably best, then, to think of continuous neutron sources as phase-space-density limited, with the above numbers giving a rough estimate of the currently achievable densities. Apparently, spallation sources are limited by the need to dissipate the heat from the incident proton beam. If high fluxes are needed only briefly, laser pulses can produce $4 \times 10^{9}$/cm${}^2$ over a nanosecond, i.e. an instantaneous flux of $4 \times 10^{18}$/cm${}^2$s. As for protons, it's worth noting that the high-intensity 50 mA beam at the European Spallation Source is limited by "space charge effects at low energy, by the power that can be delivered to the beam in each cavity at medium and high energies, and by beam losses." Answer: arxiv:1305.6917 has a nice graph comparing different proton accelerators with respect to beam intensity (in milliampere, $6.241\cdot10^{15}$ protons per second) vs. beam energy (I assume you mean man-made particle sources...): The 590 MeV (kinetic energy), 2.2 mA proton beam at PSI is operational, I'm not sure about the higher intensity (but lower energy) LEDA and IPHI beams. Also, the graph does not contain the proposed proton source for DAEδALUS. Some of these proton beams are also used (e.g. the PSI beam) to produce neutrons (for example using a spallation target). Since neutrons are neutral, there aren't any neutron accelerators.
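The milliampere-to-protons-per-second conversion used throughout the question and answer is just beam current divided by the elementary charge; a quick check reproduces both the "100 mA ≈ 6×10^17 protons/s" and the "1 mA = 6.241×10^15 protons/s" figures quoted above:

```python
e = 1.602176634e-19  # elementary charge in coulombs (exact, 2019 SI definition)

def protons_per_second(current_amperes):
    """Singly charged particles per second carried by a beam of the given current."""
    return current_amperes / e

print(f"100 mA -> {protons_per_second(0.100):.3e} protons/s")  # ~6.242e+17
print(f"  1 mA -> {protons_per_second(0.001):.3e} protons/s")  # ~6.242e+15
```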
{ "domain": "physics.stackexchange", "id": 9531, "tags": "particle-physics, protons, neutrons, accelerator-physics" }
MFCC for very short signals
Question: I have very short signals (usually 100 samples from a 44100 Hz audio recording). These are 'bang' noises that a hammer makes on a piece of metal. I want to use these short signals as input to some analysis (and a machine learning algorithm). I am not experienced in sound processing, but from what I've gathered, one of the most informative features you can use from a sound recording is the MFCC. I am using Python's scikits.talkbox package to compute the MFCCs of my signals. The problem is that this package requires the signal to be at least 300 samples long (and as I've said, mine are about 100 samples long and sometimes shorter). I wanted to know if there is a different process I should be doing in this case to calculate the MFCC, or if there are other suggestions for other methods, besides the MFCC, that I should be using. Answer: Have you tried zero-padding your signals? By adding zeroes to the front and/or end of the signals you could make them longer than 300 samples without affecting the outcome of your analysis. Bear in mind that with very short signals your frequency-domain resolution is going to be pretty limited, and an impulsive sound such as the one you describe is likely to have a very broad spectrum. The impact this has will depend on what you plan to do with your MFCCs, but essentially, there'll be fewer of them! Using an existing package to perform the signal processing is generally a great idea, as long as you're happy to trust it! I'd certainly encourage you to make sure you understand all the processing being carried out on your data - this will affect your ability to demonstrate that your work is trustworthy.
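The zero-padding suggestion is a one-liner with NumPy. In this sketch, 300 is the minimum length the asker's MFCC routine requires, and the symmetric centring is just one reasonable choice (padding only at the end works too):

```python
import numpy as np

def pad_to_length(signal, target_len=300):
    """Zero-pad a 1-D signal symmetrically to at least target_len samples."""
    signal = np.asarray(signal, dtype=float)
    deficit = target_len - len(signal)
    if deficit <= 0:
        return signal
    left = deficit // 2
    right = deficit - left
    return np.pad(signal, (left, right))  # zeros on both sides by default

bang = np.random.randn(100)   # stand-in for a 100-sample impulsive recording
padded = pad_to_length(bang)
print(len(padded))            # 300
```

Note that padding changes the length but not the information content: the extra FFT bins interpolate the existing spectrum rather than adding real frequency resolution, which is the limitation the answer warns about.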
{ "domain": "dsp.stackexchange", "id": 4450, "tags": "sound, mfcc, machine-learning" }
Kinect content missing on the Turtlebot
Question: I'm coming back to Turtlebot after a couple of months on other things. Wow, things are looking great! It's really coming together. I'm having a curious problem and I hope someone can provide some guidance. This started out when I started running through the gmapping tutorial. Teleoperation was working fine, but I wasn't seeing anything coming up in rviz. So I tried subscribing to the image topics, using image_view, etc. Still no video. So now I've backed up and am trying to resolve this problem. After figuring out that the stuff in kinect.launch wasn't referenced in turtlebot.launch and is now brought up separately (I am lazy and created a custom launch file that includes kinect.launch), I got the openni_kinect topics to show up. The Kinect is powered on, it shows up in lsusb, etc., and the laser is merrily red. Here's my rostopic list: /camera/depth/camera_info /camera/depth/disparity /camera/depth/image /camera/depth/image/compressed /camera/depth/image/compressed/parameter_descriptions /camera/depth/image/compressed/parameter_updates /camera/depth/image/theora /camera/depth/image/theora/parameter_descriptions /camera/depth/image/theora/parameter_updates /camera/depth/image_raw /camera/depth/image_raw/compressed /camera/depth/image_raw/compressed/parameter_descriptions /camera/depth/image_raw/compressed/parameter_updates /camera/depth/image_raw/theora /camera/depth/image_raw/theora/parameter_descriptions /camera/depth/image_raw/theora/parameter_updates /camera/depth/points /camera/rgb/camera_info /camera/rgb/image_color /camera/rgb/image_color/compressed /camera/rgb/image_color/compressed/parameter_descriptions /camera/rgb/image_color/compressed/parameter_updates /camera/rgb/image_color/theora /camera/rgb/image_color/theora/parameter_descriptions /camera/rgb/image_color/theora/parameter_updates /camera/rgb/image_mono /camera/rgb/image_mono/compressed /camera/rgb/image_mono/compressed/parameter_descriptions 
/camera/rgb/image_mono/compressed/parameter_updates /camera/rgb/image_mono/theora /camera/rgb/image_mono/theora/parameter_descriptions /camera/rgb/image_mono/theora/parameter_updates /camera/rgb/image_raw /camera/rgb/image_raw/compressed /camera/rgb/image_raw/compressed/parameter_descriptions /camera/rgb/image_raw/compressed/parameter_updates /camera/rgb/image_raw/theora /camera/rgb/image_raw/theora/parameter_descriptions /camera/rgb/image_raw/theora/parameter_updates /camera/rgb/points /cloud_throttled /cmd_vel /diagnostics /diagnostics_agg /imu/data /imu/raw /joint_states /kinect_laser/parameter_descriptions /kinect_laser/parameter_updates /kinect_laser_narrow/parameter_descriptions /kinect_laser_narrow/parameter_updates /map /map_metadata /narrow_scan /odom /openni_camera/parameter_descriptions /openni_camera/parameter_updates /openni_manager/bond /robot_pose_ekf/odom /rosout /rosout_agg /scan /slam_gmapping/entropy /tf /turtlebot/app_list /turtlebot/application/app_status /turtlebot_node/parameter_descriptions /turtlebot_node/parameter_updates /turtlebot_node/sensor_state ... except nothing seems to be actually publishing to the camera topics. For example, a "rostopic hz /camera/rgb/image_color" doesn't give me anything, but a rostopic hz /odom gives me the rate info I expect. None of the other /camera topics get anything, either; I get no response from /camera/depth/image, Sooo, back to rviz. If I add a display in rviz (Camera, topic /camera/rgb/image_color), I get a Status: warning message that says "No CameraInfo received on [/camera/rgb/camera_info]. Topic may not exist. But it does exist (see above), it's just that nothing is happening on it. if I rostopic echo or rostopic hz on /camera/rgb/camera_info, I don't see anything. I tried running the gmapping stuff, which launches the kinect stuff. Again, I see the topics, but nothing is being published on them. Now I back up. Skip all of the Turtlebot stuff, and just roslaunch openni_camera openni_node.launch. 
Again, all of the topics show up and the laser goes red on the Kinect, but nothing is being published on any of the /camera topics, including camera_info. But wait! If I turtlebot_bringup with the kinect.launch referenced, and then go into another ssh session and run roslaunch openni_launch openni.launch, I get a lot of messages like this, but it suddenly starts to work: the rostopic hz calls start showing the rates, and the image shows up in rviz, albeit at a pretty low frame rate and with a few hundred milliseconds lag: [ WARN] [1322104870.100115748]: [image_transport] Topics '/camera/depth/image_rect_raw' and '/camera/depth/camera_info' do not appear to be synchronized. In the last 10s: Image messages received: 0 CameraInfo messages received: 300 Synchronized pairs: 0 [ WARN] [1322104880.100552875]: [image_transport] Topics '/camera/depth/image_rect_raw' and '/camera/depth/camera_info' do not appear to be synchronized. In the last 10s: Image messages received: 0 CameraInfo messages received: 328 Synchronized pairs: 0 [ WARN] [1322104890.101621893]: [image_transport] Topics '/camera/depth/image_rect_raw' and '/camera/depth/camera_info' do not appear to be synchronized. In the last 10s: Image messages received: 0 CameraInfo messages received: 450 Synchronized pairs: 0 I've tried this on two different machines (an Asus netbook and an HP notebook) and with two different Kinect sensors. Both computers are running Oneiric and Electric, with the Turtlebot software installed with apt-get. I haven't tried the Turtlebot build itself, but I can if that's what is recommended. I found this discussion, which isn't specifically Turtlebot-related, but it makes me wonder if the change to Electric has caused some issues. Originally posted by Alaina on ROS Answers with karma: 121 on 2011-11-23 Post score: 1 Answer: You might have a look at this answer by Patrick Mihelich to another similar kinect related question. I dont know if the new binaries are already out, but i doubt it. 
Maybe you could try to build the most recent version from source as suggested and give feedback to Patrick if it helped. I was fighting with this issue for weeks with Natty and Electric, and for me a clean reinstall and a switch from 32 to 64 bit helped a lot, but didn't solve the problem completely. Originally posted by Ben_S with karma: 2510 on 2011-11-23 This answer was ACCEPTED on the original site Post score: 2 Original comments Comment by Alaina on 2011-11-24: That's great, Ben. If I rosinstall/rosmake the latest, it works fine. Looks like we can expect a permanent fix soon.
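For reference, a custom launch file wrapping the Kinect bringup like the asker describes might look like the sketch below. The package and file names are assumptions based on the Electric-era Turtlebot stack mentioned in the post and may differ on a given install:

```xml
<launch>
  <!-- Standard Turtlebot bringup (hypothetical file name; adjust to your stack) -->
  <include file="$(find turtlebot_bringup)/launch/turtlebot.launch" />
  <!-- Explicitly bring up the Kinect, since turtlebot.launch no longer references it -->
  <include file="$(find turtlebot_bringup)/launch/kinect.launch" />
</launch>
```

`$(find pkg)` is standard roslaunch substitution, so the file works regardless of where the packages are installed.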
{ "domain": "robotics.stackexchange", "id": 7405, "tags": "navigation, kinect, turtlebot, openni-kinect, camera" }
Does quantum entanglement imply the existence of a non-causal structure connecting space-time together?
Question: In contrast to a "time-like" or "causal" structure connecting space-time together, does quantum entanglement imply the existence of a "space-like" or "non-causal" structure holding space-time together as well? A more general question: is there even any relevance to the discussion of the existence of a non-causal structure connecting space-time together? The reason I ask is because it initially seems too assuming to suggest that causal structure is the only meaningful structure just because it's intuitive; consider the fact that two space-like separated events are even allowed to exist in a definable space (space-time diagram). Is there nothing physical in principle between the two events which can be defined? Answer: A more general question; is there even any relevance to the discussion of the existence of a non-causal structure connecting space-time together? This part of the question belongs to a future where the quantization of gravity, i.e. space-time, has been definitively established in a theoretical framework without infinities etc. The term "entanglement" is a shorthand for saying that a specific quantum mechanical solution with specific boundary conditions gives the probability distribution for a specific event. Space-time at present is just the classical space-time of general relativity, and in flat space, whatever holds for special relativity with respect to solutions of quantum mechanical boundary value problems should also hold there. Is there nothing physical in principle between the two events which can be defined? This is a special relativity issue. If the quantum mechanical wavefunction is a solution that gives a probability for a spacelike separation of the constituents of a system, let's say of two particles, the detection of one will provide information on the quantum state of the other (though I cannot think of an example offhand; if I do, I will edit it in).
{ "domain": "physics.stackexchange", "id": 42418, "tags": "quantum-mechanics, spacetime, quantum-entanglement, causality, non-locality" }
Spatially dependent material property in Abaqus/CAE
Question: I am modelling a plate as 3D deformable solid. My plate has a spatially dependent isotropic thermal expansion coefficient. I have defined this spatial distribution as a Discrete Field in a tabular form (for the mesh elements). My question is how can I link this distribution to my material definition? Or is it not possible to have spatially dependent thermal expansion in Abaqus/CAE? Thanks! Answer: It turns out that defining predefined field variables in CAE is a feature that was added in the 2018 version. The steps that one needs to follow when creating a field-dependent material property are the following: 1) In the Property module, make your Material dependent on a Field Variable(s) by specifying the number of field variables. In the table you will have to map the material property to the field variable. 2) Create the Field Variable: In the Load module, go to "Predefined fields"->Create->Other->Field and specify the desired distribution, which can be one of several options, including an Analytical Field. This analytical field in my case is the spatial distribution of the expansion coefficient - a function of X,Y,Z. In case you have an earlier version of Abaqus/CAE, apparently you could use a temperature definition for the Field, and modify your input file to reroute to the field variable. I haven't tried this approach, but it should work just as well. A couple people suggested parts of this and then I finally figured the rest out through some trial and error. Thanks!
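In keyword form, the field-dependent expansion definition described in the steps above looks roughly like the sketch below. This is a hand-written sketch, not CAE output; the material name, node set, and the two field-variable sample points are placeholders, and the exact data-line layout should be checked against the Abaqus Keywords Reference for your version:

```
*MATERIAL, NAME=PLATE_MAT
*EXPANSION, TYPE=ISO, DEPENDENCIES=1
** alpha, temperature (blank if not temperature-dependent), field variable 1
 1.0E-5, , 0.0
 2.0E-5, , 1.0
*INITIAL CONDITIONS, TYPE=FIELD, VARIABLE=1
** node set, field value (in practice driven by the analytical/discrete field)
 NALL, 0.5
```

Abaqus then interpolates the expansion coefficient between the tabulated field-variable values at each material point, which is how the spatial distribution enters the material definition.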
{ "domain": "engineering.stackexchange", "id": 2189, "tags": "finite-element-method, abaqus, thermal-expansion" }
Can we measure the one-way speed of anything at all?
Question: I know the one-way speed of light question has been exhausted, and I'm sorry for the naive question, but I would like to understand one thing. Can we measure the one-way speed of anything at all? If we "truly" can, why can't we synchronize that thing and an emission of light from one place to another to compare their speeds? For instance, and for simplicity's sake, assume two cars pass a point at exactly the same time, and we know one car is going 60 mph and we do not know the speed of the other car. We could set up a clock 60 miles away, knowing that the car going 60 will take one hour to get there. Then, by using only one clock and by checking the difference in arrival times, we could calculate the second car's speed. Why can't we do something similar with light and another medium? Even if it needed to be sent from some space shuttle to the ISS, it seems like with modern equipment we should be able to get some decent approximation of the one-way speed. Answer: I would say no. One of the most fascinating things in physics is time dilation. The speed of light is always the same, but the speed of time varies. Not only may A and B be in different time frames, as illustrated by others, but also at different rates of time change. One way to visualize this is here on earth. It is scientifically theorized that the center of the earth is 2 1/2 years younger than the surface of the earth due to time dilation. So, first of all, which is the correct time? Now if you could send a light signal to the center of the earth and reflect it back to the surface, you begin to see the problem. Even though the speed of light would be the same whether it's traveling there or traveling back, the actual time and the speed of time would be completely different at both ends.
{ "domain": "physics.stackexchange", "id": 84434, "tags": "special-relativity, speed-of-light, velocity, measurements, one-way-speed-of-light" }
Why meet at the center of mass?
Question: If two objects of different masses are held at a distance $d$ and then I let them go, they will meet at the center of mass of the particle system due to mutual gravitational attraction. My question (which may be rather silly) is: why do they meet at the center of mass, why not anywhere else? Is there some sort of mathematical proof behind this? Answer: Let the two particles be $A$ and $B$, at x-coordinates $x_A$ and $x_B$ respectively, having masses $m_A$ and $m_B$ respectively at time $0$ s. And let the centre of mass be $x_{C.M}=R$ at $t=0$. Let us assume the initial velocities of $A$ and $B$ are 0 in the lab's frame. The centre of mass will be at $(m_A+m_B)R=m_Ax_A+m_Bx_B \tag{1}$. Differentiating both sides twice: $(m_A+m_B)a_{C.M}=m_Aa_A+m_Ba_B \tag{2}$ Let $F_A$ and $F_B$ be the forces on $A$ and $B$ respectively. The gravitational forces on $A$ and $B$ are equal in magnitude and opposite in direction: $F_B=-F_A$. So the acceleration of $A$ is $a_A=F_A/m_A$ and the acceleration of $B$ is $a_B=-F_A/m_B$. Putting these values in eqn (2) we get: $a_{C.M}=0$ Now, since the initial velocities of $A$ and $B$ were $0$, eqn (1)'s derivative directly implies that the initial velocity of the C.M. is $0$. Since $a_{C.M}$ is 0, the centre of mass remains at $R$ whatever the positions of $A$ and $B$ become. Let the gravitational force act for a time $t$. After time $t$ the positions of $A$ and $B$ become $x_A(t)$ and $x_B(t)$. Integrating $m_Aa_A=-m_Ba_B$ (which follows from $F_B=-F_A$) within limits $0$ to $t$ we get: $m_A(v_A(t)-v_A(0))=-m_B(v_B(t)-v_B(0)) \tag{3}$ But $v_A(0)$ and $v_B(0)$ are the velocities at $t=0$, so $v_A(0)=v_B(0)=0$. So eqn (3) becomes: $m_Av_A(t)=-m_Bv_B(t)$ Again integrating within limits $0$ to $t$: $m_Ax_A(t)+m_Bx_B(t)=m_Ax_A+m_Bx_B \tag{4}$ When $A$ and $B$ collide, $x_A(t)=x_B(t)=R^{'}$. Now we wish to find $R^{'}$. From eqn (4) we get: $m_AR^{'}+m_BR^{'}=m_Ax_A+m_Bx_B$ $\implies R^{'}=\dfrac{m_Ax_A+m_Bx_B}{m_A+m_B}$ From eqn (1), $R^{'}=R$. Q.E.D
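The result can also be checked numerically. In the short 1-D two-body simulation below (G, masses, and positions are arbitrary illustrative values), the centre of mass computed at every step stays put while both bodies fall toward each other; the momentum update is exactly antisymmetric, which is the discrete analogue of the $F_B = -F_A$ step in the proof:

```python
# Two bodies on a line under mutual gravity, G = 1 in arbitrary units
G = 1.0
mA, mB = 1.0, 3.0
xA, xB = 0.0, 10.0
vA = vB = 0.0
dt = 1e-4

com0 = (mA * xA + mB * xB) / (mA + mB)   # initial centre of mass = 7.5

for _ in range(20000):
    F = G * mA * mB / (xB - xA) ** 2     # magnitude of the mutual attraction
    vA += (F / mA) * dt                   # A is pulled toward B (+x direction)
    vB -= (F / mB) * dt                   # B is pulled toward A (-x direction)
    xA += vA * dt
    xB += vB * dt

com = (mA * xA + mB * xB) / (mA + mB)
print(abs(com - com0))   # ~0 even though xA and xB have both moved
```

Because each step adds equal and opposite momentum to the two bodies, the total momentum stays exactly zero and the centre of mass is fixed to floating-point precision, independent of the integration error in the individual trajectories.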
{ "domain": "physics.stackexchange", "id": 11929, "tags": "newtonian-mechanics, gravity, forces" }
Spinning astronauts - can they tell which one is spinning?
Question: If there are two astronauts facing each other along a common axis that goes through both of their centres of gravity, and one is spinning about that axis, which one is stationary and which one is moving? It seems to me that one of them will be experiencing more blood in their extremities, and that if they make themselves into a ball they will spin faster, whereas the other astronaut will not experience any such effect, but will observe that the astronaut who is actually spinning does go faster when they make themselves into a ball. This seems to be true to me even if you remove the entire universe apart from the astronauts (which is how I heard this question couched). However, I have heard it said that as all motion is relative, neither astronaut can tell which one is spinning. It will appear to both that they are stationary and the other astronaut is spinning, and there is no way to resolve this. Surely space itself, even though empty, is some kind of absolute background here, and the one which is stationary is stationary in space, while the spinning one, since they are rotating in space, will feel the effects of the rotation? Or have I got it completely wrong? Answer: Only inertial reference frames cannot be distinguished from each other. However, one of your systems is rotating, and hence is a non-inertial one, because centrifugal forces will act inside it. So the problem boils down to distinguishing an inertial system from a non-inertial one, which can be done relatively easily with several methods. For example, one solution you already proposed is based on conservation of angular momentum: $$ \vec L = I \vec {\omega} = \text {const} ,$$ While the astronaut (let's call him the tester) is decreasing his moment of inertia, by drawing in his arms and legs, and increasing it back again, by extending them, he should notice that the other astronaut is periodically spinning at faster and slower rates. If such an observation is confirmed, then the tester is spinning. 
Otherwise, if no such outcome is observed (i.e. the other astronaut spins at a constant rate, independent of the tester's arm/leg contraction), then the other astronaut is truly spinning and the tester is at rest.
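The arm-contraction test follows directly from $L = I\omega$ being constant: halving the moment of inertia doubles the spin rate for the astronaut who is truly rotating, while nothing changes for the one at rest. A minimal sketch with arbitrary illustrative numbers:

```python
def new_spin_rate(I_old, omega_old, I_new):
    """Angular velocity after changing moment of inertia, from L = I*omega = const."""
    L = I_old * omega_old          # angular momentum is conserved
    return L / I_new

# Spinning astronaut: tucking in arms/legs halves I, so omega doubles
print(new_spin_rate(I_old=4.0, omega_old=1.0, I_new=2.0))  # 2.0

# Astronaut at rest: L = 0, so omega stays zero no matter how I changes
print(new_spin_rate(I_old=4.0, omega_old=0.0, I_new=2.0))  # 0.0
```

This is exactly the asymmetry the tester exploits: only the genuinely rotating body responds to changes in its own mass distribution.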
{ "domain": "physics.stackexchange", "id": 91604, "tags": "newtonian-mechanics, rotational-dynamics, reference-frames, inertial-frames, machs-principle" }
Radially Infalling Particle
Question: Given a metric, we find the null and timelike geodesics, which tells us what the trajectories of various particles will look like in a particular curved spacetime. But I don't understand with respect to whom these geodesics are calculated. Are these geodesics observed by an observer who is sitting at infinity, or are these geodesics according to a person who is actually travelling along them? Will the null geodesics be the same for all observers regardless of which frame they are in? Because light should travel at speed c in every frame, is it true that light takes the same path according to every observer? How do we differentiate between the two observers by just looking at a metric and calculating the geodesic equation? The situation should change when we consider a local inertial observer, because in his frame of reference the spacetime is flat, so for him the laws of special relativity would be valid. What would he see when he is falling into a black hole from a finite distance away? Answer: First it is important to note that all geodesics are observer-independent, thus it makes no difference with respect to whom the geodesics are calculated. A null geodesic is a null geodesic, and a timelike geodesic is a timelike geodesic, no matter the observer. It makes most sense to calculate in the frame of reference of the coordinate system, so this is what is done. Your assertion that the speed of light is the same in every frame is not technically correct. The speed of a ray of light in a local frame is always $c$, but the speed of the same ray as measured by some distant observer may not be. Your observer falling into a black hole cannot be assumed to be in Minkowski space for his journey. Along this path he will travel through much curvature.
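For reference, the observer-independence stated above is visible in the geodesic equation itself, which is written in terms of the metric's Christoffel symbols and an affine parameter rather than any particular observer's clock:

```latex
\frac{d^2 x^\mu}{d\lambda^2}
  + \Gamma^{\mu}_{\ \alpha\beta}\,
    \frac{dx^\alpha}{d\lambda}\frac{dx^\beta}{d\lambda} = 0,
\qquad
\Gamma^{\mu}_{\ \alpha\beta}
  = \tfrac{1}{2}\, g^{\mu\nu}
    \left(\partial_\alpha g_{\nu\beta}
        + \partial_\beta g_{\nu\alpha}
        - \partial_\nu g_{\alpha\beta}\right)
```

For timelike geodesics the affine parameter $\lambda$ can be taken to be the proper time $\tau$ of the traveller; for null geodesics $\tau$ is unavailable (since $d\tau = 0$ along the ray), so a generic affine parameter is used, but in either case the curve itself is fixed by the metric alone.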
{ "domain": "physics.stackexchange", "id": 43161, "tags": "general-relativity, metric-tensor, coordinate-systems, observers, geodesics" }
Pretty print a 2D matrix in C (numpy style)
Question: Works for any 2D matrix represented by a 1D array. Comments and criticism welcome. Also wondering whether the code can be made shorter. It's really messy right now. #include <stdio.h> typedef float fpoint_t; void print(fpoint_t* m, int row_size, int col_size) { int i, j; printf(" array(["); if(row_size == 1 || col_size == 1) { int len = row_size == 1? col_size : row_size; if(col_size == 1) { for(i = 0; i < len; i++) { if(i == len - 1) printf("%.2e", m[i]); else { if(m[i] >= 0) printf(" %5.2e, ", m[i]); else printf("%5.2e, ", m[i]); if((i + 1) % 6 == 0) printf("\n\t"); } } } else { for(i = 0; i < len; i++) { if(i == 0) printf(" %.2e\n", m[i]); else if(i == len - 1) printf("\t %.2e", m[i]); else { if(m[i] >= 0) printf("\t %5.2e,\n", m[i]); else printf("\t %5.2e,\n", m[i]); } } } printf(" ])\n"); return; } if (row_size > 10) { for(i = 0; i < 3; i++) { if(i == 0) printf("[ "); else printf("\t[ "); if(col_size > 10) { for(j = 0; j < 3; j++) { if(j < 2) { if(m[i * col_size + j] >= 0) printf(" %5.4e,\t", m[i * col_size + j]); else printf("%5.4e,\t", m[i * col_size + j]); } else { if(m[i * col_size + j] >= 0) printf(" %5.4e, ", m[i * col_size + j]); else printf("%5.4e, ", m[i * col_size + j]); } } printf("..., "); if(m[i * col_size + col_size - 3] >= 0) printf(" %5.4e,\t\n\t", m[i * col_size + col_size - 3]); else printf("%5.4e,\t\n\t", m[i * col_size + col_size - 3]); if(m[i * col_size + col_size - 2] >= 0) printf(" %5.4e,\t", m[i * col_size + col_size - 2]); else printf(" %5.4e,\t", m[i * col_size + i]); if(m[i * col_size + col_size - 1] >= 0) printf(" %5.4e", m[i * col_size + col_size - 1]); else printf("%5.4e", m[i * col_size + col_size - 1]); } else { for(j = 0; j < col_size; j++) { if(j != col_size - 1) printf("%.6g, ", m[i * col_size + j]); else printf("%.6g", m[i * col_size + j]); } } printf(" ],\n"); } printf("\t...,\n"); for(i = row_size - 3; i < row_size; i++) { printf("\t[ "); if(col_size > 10) { for(j = 0; j < 3; j++) { if(j < 2) { if(m[i * col_size 
+ j] >= 0) printf(" %5.4e,\t", m[i * col_size + j]); else printf("%5.4e,\t", m[i * col_size + j]); } else { if(m[i * col_size + j] >= 0) printf(" %5.4e, ", m[i * col_size + j]); else printf("%5.4e, ", m[i * col_size + j]); } } printf("..., "); if(m[i * col_size + col_size - 3] >= 0) printf(" %5.4e,\t\n\t", m[i * col_size + col_size - 3]); else printf("%5.4e,\t\n\t", m[i * col_size + col_size - 3]); if(m[i * col_size + col_size - 2] >= 0) printf(" %5.4e,\t", m[i * col_size + col_size - 2]); else printf(" %5.4e,\t", m[i * col_size + i]); if(m[i * col_size + col_size - 1] >= 0) printf(" %5.4e", m[i * col_size + col_size - 1]); else printf("%5.4e", m[i * col_size + col_size - 1]); } else { for(j = 0; j < col_size; j++) { if(j != col_size - 1) printf("%.6g, ", m[i * col_size + j]); else printf("%.6g", m[i * col_size + j]); } } if(i == row_size - 1) printf(" ]])\n"); else printf(" ],\n"); } } else { for(i = 0; i < row_size; i++) { if(i == 0) printf("[ "); else printf("\t[ "); if(col_size > 10) { for(j = 0; j < 3; j++) { if(j < 2) { if(m[i * col_size + j] >= 0) printf(" %5.4e,\t", m[i * col_size + j]); else printf("%5.4e,\t", m[i * col_size + j]); } else { if(m[i * col_size + j] >= 0) printf(" %5.4e, ", m[i * col_size + j]); else printf("%5.4e, ", m[i * col_size + j]); } } printf("..., "); if(m[i * col_size + col_size - 3] >= 0) printf(" %5.4e,\t\n", m[i * col_size + col_size - 3]); else printf("%5.4e,\t\n", m[i * col_size + col_size - 3]); if(m[i * col_size + col_size - 2] >= 0) printf("\t %5.4e,\t", m[i * col_size + col_size - 2]); else printf("\t %5.4e,\t", m[i * col_size + i]); if(m[i * col_size + col_size - 1] >= 0) printf(" %5.4e", m[i * col_size + col_size - 1]); else printf("%5.4e", m[i * col_size + col_size - 1]); } else { for(j = 0; j < col_size; j++) { if(j != col_size - 1) printf("%.6g, ", m[i * col_size + j]); else printf("%.6g", m[i * col_size + j]); } } if(i == row_size - 1) printf(" ]])\n"); else printf(" ],\n"); } } } Answer: A lot of code is repetitive 
with only slight changes in format. Instead of ... for(j = 0; j < 3; j++) { if(j < 2) { if(m[i * col_size + j] >= 0) printf(" %5.4e,\t", m[i * col_size + j]); else printf("%5.4e,\t", m[i * col_size + j]); } else { if(m[i * col_size + j] >= 0) printf(" %5.4e, ", m[i * col_size + j]); else printf("%5.4e, ", m[i * col_size + j]); } } ... consider the following conditional values for "%s", which may reduce the code's printf()s by 75% or more: #define JLOOP 3 for (j = 0; j < JLOOP; j++) { const char *pre = ""; // shown here for illustration, even though always "" const char *post = j < JLOOP - 1 ? ",\t" : ""; // v--- add space to use space for positive numbers printf("%s% 5.4e%s", pre, m[i*col_size + j], post); } Now with the above change, rather than use " %5.4e" in many places, use a macro and string literal concatenation. Certainly easier to adjust one format than many. #define FMT_M "% 5.4e" ... // Many places // printf("%5.4e%s", ...); printf("%s" FMT_M "%s", pre, m[i*col_size + j], post); Recommend using size_t rather than int for array indexing. int may be insufficient for large arrays. Certainly not for small programs - but good coding practice. // void print(fpoint_t* m, int row_size, int col_size) { void print(fpoint_t* m, size_t row_size, size_t col_size) { Add const. This allows for some compiler optimizations and lets the caller know *m is not changed. // void print(fpoint_t* m, int row_size, int col_size) { void print(const fpoint_t* m, int row_size, int col_size) { printf() of a generic string is a problem should a "%" appear in it - maybe due to code maintenance or changes. Consider 1 of the 2 alternates - which likely generate the same code on a good compiler. // printf("\n\t"); printf("%s", "\n\t"); // or fputs("\n\t", stdout); Minor. Some compilers reserve ..._t identifiers. Consider non-_t names. // typedef float fpoint_t; typedef float fpoint_T; A commented sample output would help to document coding goals.
{ "domain": "codereview.stackexchange", "id": 20846, "tags": "c, matrix, formatting" }
Why work done by tension (like string force) is always zero?
Question: A fixed disc free to rotate about its center (a flywheel) has a string wrapped around it, with a block attached to it. As the block falls, the tension in the string makes the disc rotate. Now, when the disc rotates by an angle $\theta$ (assuming the string doesn't slip), a length $r\theta$ of string unwraps. The length of the piece of string which is attached to the block is changing. Why is work done by tension (on block + disc system) zero? I don't get it. So if I'm using the energy theorem in this system that states: Variation of energy = Work done by string tension (considering that the tension here is the only non-conservative force) = 0?! Because work done by tension is zero? [Here is the system] (https://i.stack.imgur.com/knA8N.jpg) (https://i.stack.imgur.com/lLS4M.jpg) Answer: It does do work on each object (block and flywheel). But those two works cancel out. If only the block or only the disc had been considered, then sure, the tension in the string does work. The work done on each object is equal in magnitude but opposite in sign. Here we consider them both as one system, and so the works are done internally and cancel out. In general, work is always only done by external forces, never by internal forces - not because no work is being done, but because it cancels out (the energy stays within the system).
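The cancellation is easy to check with a couple of lines of arithmetic. In this Python sketch the tension, radius, and fall distance are illustrative made-up numbers; the work on the block is $-Tx$ (tension up, displacement down), and the work on the flywheel is the torque $Tr$ acting through the angle $x/r$:

```python
# Illustrative numbers (not from the question): constant tension T,
# flywheel radius r, block falls a distance x with no string slip.
T = 12.0   # tension in the string (N)
r = 0.25   # flywheel radius (m)
x = 1.0    # distance the block falls (m)

# On the block: tension points up, displacement is down -> negative work.
W_block = -T * x

# On the flywheel: torque T*r acts through the angle theta = x/r.
theta = x / r
W_disc = (T * r) * theta

print(W_block, W_disc, W_block + W_disc)  # -12.0 12.0 0.0
```

The two works are equal and opposite for any T, r, x, which is the cancellation described in the answer.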
{ "domain": "physics.stackexchange", "id": 80219, "tags": "newtonian-mechanics, forces, energy-conservation" }
Light bulbs in Series
Question: In physics we were learning about how in series the current is the same through each resistor. So that the lightbulbs will all have the same brightness but dimmer than if there was just one bulb on there. We were explained the current being the same by Ohm's law: $V = IR$. As the resistance increases from the wire to the resistor the voltage also increases, cancelling each other out in the equation $I = \frac{V}{R}$. Leaving it as being the same. But my question is: doesn't this only apply for Ohmic resistors, and lightbulbs aren't ohmic resistors? Would the first light bulb be brighter than the ones after it? Answer: Light bulbs, or any loads, in series will all have the same current. This is unrelated to Ohm's Law - it's Kirchhoff's Current Law and it applies if the loads are ohmic or not. Assuming your source voltage stays the same, adding bulbs in series will increase the total resistance which will decrease the total current and make all the bulbs dimmer. The order of the bulbs is not significant. If one of the bulbs in series has a higher resistance it will be brighter.
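To see the dimming quantitatively, here is a small Python sketch that treats the bulbs as ideal ohmic resistors (real filaments aren't, as the question notes, but the equal-current argument is unchanged); the voltage and resistance values are made up for illustration:

```python
V = 12.0  # source voltage (V), illustrative
R = 6.0   # resistance of one bulb (ohm), treated as ideal/ohmic

def power_per_bulb(n):
    """Power dissipated in each of n identical bulbs in series."""
    I = V / (n * R)      # the same current flows through every bulb
    return I ** 2 * R

print(power_per_bulb(1))  # 24.0 -> one bulb, brightest
print(power_per_bulb(2))  # 6.0  -> each of two bulbs is much dimmer
print(power_per_bulb(3))  # ~2.67 W each; position in the chain is irrelevant
```

Adding bulbs drops the shared current, so every bulb dims by the same amount, exactly as the answer says.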
{ "domain": "physics.stackexchange", "id": 30680, "tags": "electric-circuits, electric-current, electrical-resistance, voltage" }
Will ROS2 work on Apple Silicon Macs?
Question: Apple is soon to release new Macs based on Arm processors (known as Apple Silicon). How might this affect the support for ROS2 on MacOS? Is this a good or a bad thing for ROS2 on Mac? Originally posted by Py on ROS Answers with karma: 501 on 2020-10-28 Post score: 0 Original comments Comment by aeltawil on 2020-12-10: Do you have an update regarding this topic? Do you someone who was successfully get ROS2 to work on any apple silicon Mac? Comment by Py on 2020-12-14: I'd be keen to find out whether anyone has tested this! Answer: Other than the CPU architecture, not much will change wrt how software is typically distributed on macos. I believe that's the limiting factor for ROS 2 (and ROS 1 actually). ROS 2 has support for OSX (it's a Tier 1 supported platform), but the UX is not consistent with Linux and Windows IIUC (I don't have a mac, so don't know). Moving to ARM doesn't really change that. ARM64 is already supported (see Targeted Platforms in REP-2000), and "Apple Silicon" is really just that. Homebrew is a bit of a mess though, and so are the other options. So from-source builds can be involved (to say the least). Conda might have a bigger impact, but I'm not sure what the status of the macos support is. Originally posted by gvdhoorn with karma: 86574 on 2020-10-28 This answer was ACCEPTED on the original site Post score: 1 Original comments Comment by Py on 2020-10-29: Thanks for the insight!
{ "domain": "robotics.stackexchange", "id": 35689, "tags": "robotic-arm, ros, ros2, macosx" }
What is the nature of the constraint acting on a projectile thrown at some angle with the horizontal surface?
Question: Suppose a projectile is thrown at some angle with the horizontal surface. It is known that the trajectory of the projectile will be a parabola in the vertical plane, assuming there is no air resistance. My question is - what is the nature of the constraint acting on the projectile? Is it holonomic or non-holonomic or something else? Answer: "No, I mean that when the projectile is in the air, it moves in the vertical plane only. So, it's constrained to move in that plane." No it isn't. If you have a 3D coordinate system $(x,y,z)$ with $z$ being the vertical direction, then you can arrange for it to travel in the $(x,z)$ plane by giving it an initial velocity with no $y$-component, but you can arrange for it to travel in the $(y,z)$-plane by giving it an initial velocity with no $x$-component. There is no constraint forcing the particle to travel in one particular plane. That being said, once the projectile is fired it always stays in the same vertical plane because there are no forces in the $x$- or $y$-directions. But that's not the same as it being constrained to move only in one particular plane. Similarly, a free particle in 3D space moves with constant velocity, so it will trace out a line in space. But it is not constrained to move along that line; different initial conditions will yield a trajectory which is along a different line.
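The point about initial conditions can be illustrated with a short Python sketch of the unconstrained equations of motion (launch from the origin, no drag): the plane of the trajectory is fixed by the initial velocity, not by any constraint.

```python
g = 9.8  # m/s^2

def position(v0, t):
    """Unconstrained projectile motion for initial velocity v0 = (vx, vy, vz)."""
    vx, vy, vz = v0
    return (vx * t, vy * t, vz * t - 0.5 * g * t ** 2)

# A launch with no y-velocity stays in the (x, z) plane at every time...
for t in (0.5, 1.0, 2.0):
    assert position((3.0, 0.0, 4.0), t)[1] == 0.0

# ...while a different initial condition puts the motion in a different
# plane, so the plane comes from the initial data, not from a constraint.
print(position((3.0, 0.0, 4.0), 2.0))
print(position((0.0, 3.0, 4.0), 2.0))
```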
{ "domain": "physics.stackexchange", "id": 77377, "tags": "newtonian-mechanics, classical-mechanics, projectile" }
Why is it impossible to work with polylog length encoding schemes for quantum circuits?
Question: I am going through Quantum Computational Complexity by John Watrous. On page $12$, he said: The encoding disallows compression: it is not possible to work with encoding schemes that allow for extremely short (e.g., polylogarithmic-length) encodings of circuits; so for simplicity it is assumed that the length of every encoding of a quantum circuit is at least the size of the circuit. My question: Why is it impossible to work with polylogarithmic-length encoding schemes for quantum circuits? Answer: Clearly you can work with abstract compressed representations of circuits. You can reason about them and manipulate them and turn them into concrete lists of gates. We do it all the time. But in context the author is in the middle of explaining the complexity class BQP (bounded-probability quantum polynomial-time). I think they're just making sure that you don't sneak non-polynomial amounts of work into the encoding. You don't want "repeat the diffusion operator $\sqrt{2}^n$ times" to count as being in BQP with respect to $n$.
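The last point can be made concrete with a toy Python sketch (the encoding format here is invented purely for illustration): a run-length description like "repeat D 2^(n/2) times" has length growing roughly linearly in n, while the circuit it stands for grows exponentially, which is exactly the kind of hidden work the no-compression assumption rules out.

```python
def encoding_length(n):
    """Length (in characters) of a made-up compressed circuit description
    such as 'repeat D 1048576 times' -- grows roughly linearly in n."""
    reps = int(2 ** (n / 2))
    return len(f"repeat D {reps} times")

def expanded_size(n):
    """Number of gate applications that description stands for -- exponential."""
    return int(2 ** (n / 2))

for n in (10, 20, 40):
    print(n, encoding_length(n), expanded_size(n))
```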
{ "domain": "cstheory.stackexchange", "id": 3767, "tags": "cc.complexity-theory, quantum-computing, circuit-complexity, quantum-information, physics" }
If LPG gas burners can reach temperatures above 1700 °C, then how do HCA and PAH not develop in extreme amounts during cooking?
Question: According to What is the temperature of heat generated from LPG gas?, the temperature of (1) LPG gas is above 1700 degrees Celsius. Many of us are familiar with pan-frying meat (steak, pan-frying the chicken breast etc). Next, consider this: when you're (2) cooking meat at high temperatures (above 200 °C) then it'll start forming heterocyclic amines (HCAs) and polycyclic aromatic hydrocarbons (PAHs), according to healthline.com and precisionnutrition.com. Note: The way they're formed is when fat drips onto the flame or the utensil, which is at very high temperature, and then reacts with the high heat to form HCAs and PAHs. Lastly, there is research that (3) correlates HCAs and PAHs with causing cancer in our bodies. Sources: They are all mentioned in the two articles that I linked above. Question: If you combine these 3 facts, then we must be cooking a huge amount of HCAs + PAHs. It's a wonder that we aren't dead yet! How is it that we haven't developed cancer by eating meat cooked over a pan on an LPG gas (which I believe is quite common)? Or are we all consuming HCAs / PAHs slowly over time and will develop cancer? Or is one of the 3 facts above wrong? Answer: It is the temperature of the pan that matters, not the flame. The flame temperature is irrelevant if you are cooking in a vessel; the only temperature you need to worry about is the temperature of the surface of the pan (or–even more importantly–the temperature of the meat). The surface of the pan will rarely get above around 220 °C if you are monitoring it. An unattended pan can get hotter, but, if you let it get too hot, your food will rapidly burn and will not be edible. Conversely, if the food is palatable, you probably haven't heated it enough to get lots of nasty HCAs and PAHs. But the research that caused your worry is also frequently overstated.
Yes, large quantities of PAHs or HCAs may be nasty, but the amounts in food–even food cooked on an open flame barbecue–are very small and there have been no convincing studies showing a notable effect on health. There are some studies showing a very small effect of meat on health (diseases such as bowel cancer have been linked to some meats but the studies are statistically weak and the effects are very small. Moreover, these studies link to meat, not cooking by-products, but would probably have spotted any effects based on nasty cooking by-products). It is worth remembering that people have been cooking on open flames since we invented fire. If that were really dangerous, there would be strong evidence of harm and/or primitive man would have developed good defences to avoid the harm (as milk-drinkers evolved lactose tolerance in adults because their diets consisted of a lot of dairy products). There have been many food scares based on observations of known nasties in cooked food. Acrolein and nitrites, for example. None of these have been shown to have any notable measurable effect on people in the concentrations present in the diet.
{ "domain": "chemistry.stackexchange", "id": 12636, "tags": "organic-chemistry, food-chemistry" }
How do I determine the electric charge on two (initially discharged) spheres after an impact with a charged sphere?
Question: This is the first part of a problem on my book: Two identical conducting spheres, initially discharged, of mass $ m = 0.5 \ kg $ come in contact, in consecutive moments, with another sphere, identical to the previous ones, having charge $Q=4.8\times10^{-7}\ C.$ After the contact, they are located at a distance of $3.0\ cm$ apart. Determine the charge of the two spheres after the contact. Since obviously the total charge must be equal to $ Q$, and the three spheres are exactly alike, I had thought of an equal partition, leading to $ Q_1 = Q_2 =1.2\times10^{-7} \ C$ and $ Q'=2.4\times 10^{-7}C$. But the book's solutions are $Q_1=2.4 \times10^{-7} \ C $ and $Q_2=1.2 \times10^{-7} \ C $. Where am I mistaken? Answer: Think about it: They come into contact with it consecutively - the first contact leads to them equalizing, so you will have the charge on the initial sphere as $Q' = Q_1$ with $Q' + Q_1 = Q$. Then, the second sphere contacts the inital sphere, and now it equalizes with $Q'$, leading to $Q'' = Q_2$ with $Q'' + Q_2 = Q'$. Since $Q$ is known, this is a system of equations you can easily solve for $Q_1$ and $Q_2$.
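The two-step equalization described in the answer is easy to check numerically; a short Python sketch (each contact splits the total charge equally, since the spheres are identical):

```python
Q = 4.8e-7  # charge on the charged sphere (C), from the problem

def touch(q_a, q_b):
    """Identical conducting spheres in contact end up with equal charge."""
    shared = (q_a + q_b) / 2
    return shared, shared

# Consecutive contacts: the charged sphere touches sphere 1, THEN sphere 2.
q_carrier, Q1 = touch(Q, 0.0)          # first contact: both get Q/2
q_carrier, Q2 = touch(q_carrier, 0.0)  # second contact: both get Q/4

print(Q1, Q2)  # 2.4e-07 1.2e-07 -- the book's answer
```

Because the contacts are consecutive rather than simultaneous, the two spheres end up with different charges, which resolves the original confusion.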
{ "domain": "physics.stackexchange", "id": 17245, "tags": "electromagnetism, charge, homework-and-exercises" }
Comparing Equals() method from MSDN
Question: I've implemented the Equals() support for my class as follows: public override bool Equals(object obj) { return Equals(obj as TwoDPoint); } public bool Equals(TwoDPoint p) { return ((object)p != null) && (x == p.x) && (y == p.y); } This obviously doesn't match the reference implementation on MSDN, here, as they add explicit checking for null before and after converting to a TwoDPoint, and they implement the actual equality test in each method (i.e. no call from one to the other). I like my implementation as it's much more succinct. But I'm wondering - what have I missed? Am I losing (significant) performance with my version? Is there actually a bug in this approach? Answer: I like my implementation as it's much more succinct Which isn't necessarily a good thing: we care about readable code, not about oneliners. It means we don't use 5 lines to write what can be written in 1 line but it also means we don't cram 5 lines in 1 line just for the sake of it. That being said: your implementation does the same as the example. I would also prefer your example since it keeps the equality logic in one method Equals(T) rather than duplicating it. Some comments however: There's no point in doing this cast: (object)p != null You might as well implement IEquatable<T> MSDN There should be no performance difference unless you have an unusual amount of null arguments to Equals(object) that will now go through the as statement before being caught by the null check Equals(T) will probably be inlined anyway
{ "domain": "codereview.stackexchange", "id": 18946, "tags": "c#, comparative-review, casting, overloading" }
Virtual work - Tension in a string
Question: I am confused about the way Tension or Thrust is used in the equation of virtual work. I mean whether to use $ T\cdot dr$ or $-T\cdot dr$ (work done by tension). For instance in the case below: A string of length a forms the shorter diagonal of a rhombus of four uniform rods each of length b and weight W which are hinged together. If the uppermost rod be supported in a horizontal position, prove that the tension of the string is $$ \frac{2W(2b^2 - a^2 )}{b\sqrt{4b^2-a^2}}. $$ My Approach: Assuming T is tension in string BD. $$ BD = 2b\cos\theta$$ As $ \theta $ increases, the length of the string decreases or movement of string is in the direction of tension, hence the work done by tension should be positive. Hence the virtual work equation would be $$ T\delta(2b \cos\theta) + 4W \delta(b \sin\theta \cos\theta) = 0 $$ but the solution in book says $$ -T\delta(2b \cos\theta) + 4W \delta(b \sin\theta \cos\theta) = 0 $$ Answer: Work done by tension in the virtual work equation should be simplified in the following way. If the length of string is increasing (decreasing) -> 'dr' is positive (negative). If the change in length is in the direction (opposite) of force -> Tension is positive (negative).
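For what it's worth, carrying the book's signed equation through confirms the convention (a sketch of the algebra, using only what is stated above): since $$\delta(2b\cos\theta) = -2b\sin\theta\,\delta\theta, \qquad \delta(b\sin\theta\cos\theta) = b\cos 2\theta\,\delta\theta,$$ the book's equation $-T\delta(2b\cos\theta) + 4W\delta(b\sin\theta\cos\theta) = 0$ reduces to $2bT\sin\theta + 4Wb\cos 2\theta = 0$, i.e. $T = -2W\cos 2\theta/\sin\theta$. Substituting $\cos\theta = a/2b$, so that $\sin\theta = \sqrt{4b^2-a^2}/2b$ and $\cos 2\theta = (a^2-2b^2)/2b^2$, gives $$T = \frac{2W(2b^2-a^2)}{b\sqrt{4b^2-a^2}},$$ exactly the required result. With the $+T$ sign the tension would come out negative, which is one way to see that the book's sign is the right one.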
{ "domain": "physics.stackexchange", "id": 51498, "tags": "homework-and-exercises, forces, free-body-diagram, string, statics" }
Checking whether strings are permutations of each other in Swift
Question: I'm solving this problem on Hacker Earth, just for practice. The exercise is to determine whether two given equal-length strings are permutations of each other. Except for this one test case, which has 100 very long strings to compare, all other test cases are fine. But that one is taking too long to execute. What are possible optimizations that can be done? Also it would be great if you can tell me what expensive operations I have used. import Foundation; var response:String = readLine()! func createDictionary(from arrayOfCharacters: [Character], _ charCount: Int) -> Dictionary<Character, Int>{ var dict : [Character: Int] = [:]; for index in 0...charCount-1{ if(dict.index(forKey: arrayOfCharacters[index]) != nil){ dict[arrayOfCharacters[index]] = dict[arrayOfCharacters[index]]!+1; } else{ dict[arrayOfCharacters[index]] = 1; } } return dict; } func checkEqual(_ x:String) -> Bool{ var flag = false; var names = x.components(separatedBy: " "); //let charCount = names[0].characters.count; let charCount = (x.characters.count-1)/2; var dictionaryOne: [Character:Int] = [:]; var dictionaryTwo: [Character:Int] = [:]; if(names[1].characters.count == charCount){ var charsOfStringOne = Array(names[0].characters); var charsOfStringTwo = Array(names[1].characters); //print(charsOfStringOne); //print(charsOfStringTwo); dictionaryOne = createDictionary(from: charsOfStringOne, charCount); dictionaryTwo = createDictionary(from: charsOfStringTwo, charCount); for char in "abcdefghijklmnopqrstuvwxyz".characters { if(dictionaryOne[char] != dictionaryTwo[char]){ return false; } } if(dictionaryOne != dictionaryTwo){ return false; }else{ flag=true; } return flag; }else{ return flag; } } var debug = false; var count = Int(response)! 
for _ in 1...count { var line = readLine()!; if(checkEqual(line)){ if(debug){ print(line, "YES") } else { print("YES"); } }else{ if(debug){ print(line, "NO") } else { print("NO"); } } } Answer: First some general remarks: There is no need to terminate statements with a semicolon in Swift. The condition in an if-statement does not require parentheses, e.g. if debug { ... } You should be consistent about whether to start else on a new line or not, I prefer if condition { // ... } else { // ... } Many of your variables are not mutated and should be declared as constants, e.g. let line = readLine()! The overall structure can be improved. Why is the first input line read at the top of the program, and the remaining lines at the bottom? I would do the complete input parsing in the "main" part of the program, which would then look like this: let numTestCases = Int(readLine()!)! for _ in 1...numTestCases { let strings = readLine()!.components(separatedBy: " ") if checkEqual(first: strings[0], second: strings[1]) { print("YES") } else{ print("NO") } } (I have omitted the "debug" feature for simplicity.) About the createDictionary() function: A loop like for index in 0...charCount-1 { // ... do something with `arrayOfCharacters[index]` ... } can always be simplified using enumeration: for char in arrayOfCharacters { // ... do something with `char` ... } But actually you don't have to create an array with all characters at all: You can pass the string itself to the function, and enumerate its characters: func createDictionary(from string: String) -> [Character:Int] { var dict = ... for char in string.characters { // insert or update `dict[char]` ... } return dict } The nil-coalescing operator ?? can be used to simplify the "insert or update". Also it might be advantageous to create the dictionary with the required capacity, in order to avoid rehashing. 
The function would then look like this: func createDictionary(from string: String) -> [Character:Int] { var dict: [Character: Int] = Dictionary(minimumCapacity: 26) for char in string.characters { dict[char] = (dict[char] ?? 0) + 1 } return dict } About the func checkEqual() function: A better name might be checkPermutation(). The var flag is not really needed because you always "early return" if the comparison fails. There is no need to assign empty dictionaries to var dictionaryOne/Two first because they are overwritten later. You also don't have to compare the strings lengths because they are always equal according to the problem statement. The for char in "abcdefghijklmnopqrstuvwxyz".characters { ... } loop is superfluous, it suffices to compare the dictionaries if dictionaryOne != dictionaryTwo { ... } Putting it all together, that function reduces to func checkPermutation(first: String, second: String) -> Bool { let dictionaryOne = createDictionary(from: first) let dictionaryTwo = createDictionary(from: second) return dictionaryOne == dictionaryTwo } Some syntactic sugar: If you replace the global function by an extension method of String: extension String { func isPermutation(of other: String) -> Bool { // ... } } then the check can be more expressively done as if strings[0].isPermutation(of: strings[1]) { ... } For further performance improvement, note that the strings consist of lowercase letters only, i.e. there are only 26 possible characters. Therefore you can use a fixed-size array instead to count the number of occurrences of each letter in the strings. Array subscripting is much faster than dictionary lookups. In order to compute a number (from 0 to 25) from each lowercase letter, it is more convenient to enumerate the unicodeScalars of the string. 
This leads to the following implementation: extension String { func characterCounts() -> [Int] { let letterA: UnicodeScalar = "a" var counts = Array(repeating: 0, count: 26) for ucs in self.unicodeScalars { let offset = ucs.value - letterA.value assert(offset >= 0 && offset < 26, "only lowercase letter 'a'...'z' allowed") counts[Int(offset)] += 1 } return counts } func isPermutation(of other: String) -> Bool { let counts1 = self.characterCounts() let counts2 = other.characterCounts() return counts1 == counts2 } } let numTestCases = Int(readLine()!)! for _ in 1...numTestCases { let strings = readLine()!.components(separatedBy: " ") if strings[0].isPermutation(of: strings[1]) { print("YES") } else{ print("NO") } }
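As a cross-check in another language, the same 26-slot counting-array idea looks like this in Python (a sketch; it assumes lowercase a-z input, as the problem statement guarantees):

```python
def is_permutation(a: str, b: str) -> bool:
    """Counting-array check; assumes only lowercase 'a'...'z'."""
    if len(a) != len(b):
        return False
    counts = [0] * 26
    for ch in a:
        counts[ord(ch) - ord('a')] += 1   # count up over the first string
    for ch in b:
        counts[ord(ch) - ord('a')] -= 1   # count down over the second
    return all(c == 0 for c in counts)    # permutation iff everything cancels

print(is_permutation("listen", "silent"))  # True
print(is_permutation("abc", "abd"))        # False
```

The single up/down pass avoids building two separate count arrays and comparing them, but either variant is linear time with array subscripting only.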
{ "domain": "codereview.stackexchange", "id": 27783, "tags": "strings, programming-challenge, time-limit-exceeded, swift" }
The $I_{3322}$ Inequality
Question: I am trying to understand the $I_{3322}$ inequality which is an another example of Bell inequalities and which is different from the famous CHSH inequality. I haven't got hold of any standard reference for that and I'm trying to glean through the original papers. However I am getting confused because of many different formulations of the same inequality, and hence I came here to understand why they all represent the same thing. My first reference is a paper by Sliwa. The original paper is here, and an arxiv version is here. The author expresses the inequality in terms of expectations - $$-E(A_1B_1)-E(A_1B_2)-E(A_1B_3)-E(A_2B_1)-E(A_2B_2)+E(A_2B_3)-E(A_3B_1)+E(A_3B_2)-E(A_1)-E(A_2)-E(B_1)-E(B_2) \leq 4.$$ The next reference I referred to is by Pal and Vertesi (available here and on the arxiv) where the inequality is also expressed in terms of expectations - $$E(A_1B_1)+E(A_1B_2)-E(A_1B_3)+E(A_2B_1)+E(A_2B_2)+E(A_2B_3)-E(A_3B_1)+E(A_3B_2)-E(A_2)-E(B_1)-2E(B_2)\leq 0.$$ My third reference is a paper by Collins and Gisin (available here and on the arxiv). They express in terms of probabilities - $$p_{11}+p_{12}+p_{13}+p_{21}+p_{22}-p_{23}+p_{31}-p_{32}-p_1^A-2p_1^B-p_2^B\leq 0.$$ The last reference is the survey paper by Brunner et al (available here and on arxiv). They also express in terms of probabilities - $$p_{11}+p_{12}+p_{13}+p_{21}+p_{22}-p_{23}+p_{31}-p_{32}-p_1^A-p_1^B\leq 1.$$ Now if one glances over them, one finds that each one of them is different from others by few terms. I feel that there may be some simplification to derive one from the others but I don't see how. I would kindly urge the physics community to shed some light on this issue. Also in the last reference by Brunner et al, they say that the CHSH inequality can also be viewed as a game. Is there a similar kind of game from which the $I_{3322}$ inequality can be derived. P.S.- I am not a physicist, but a mathematician, so it would be be helpful if minimal physics is used. Thank you. 
Answer: Bell inequalities are equivalent if you can map one to another by Relabelling the parties (in this case A and B) Relabelling the inputs (in this case 1,2,3) Relabelling the outputs (in this case -1 and 1 for Śliwa version, 0 and 1 for the Pál and Vértesi version, and undefined for the other two versions) Scaling by a nonzero constant. Adding a normalisation condition. Adding a no-signalling condition. These equivalences are explored here, for example. The last two conditions are not relevant for this question, as the notations used here, correlators and Collins-Gisin, do not represent them. We start by showing that the Śliwa version is equivalent to the Collins and Gisin's version. For that, we pass the latter from Collins-Gisin notation to correlator notation via the formulas $$ 4p_{ij} = E(A_iB_j) + E(A_i) + E(B_j) +1 $$ $$ 2p_i^A = E(A_i)+1$$ $$ 2p_j^B = E(B_j)+1$$ Scaling the inequality by 4, for convenience, we end up with $$E(A_1B_1)+E(A_1B_2)+E(A_1B_3)+E(A_2B_1)+E(A_2B_2)-E(A_2B_3)+E(A_3B_1)-E(A_3B_2)+E(A_1)+E(A_2)-E(B_1)-E(B_2) \leq 4,$$ where we see that the asymmetry in the coefficients became an asymmetry in the signs. We can take care of that by relabelling Alice's outputs, mapping -1 to +1 and vice-versa. This changes the expectation values as $$E(A_iB_j) \mapsto -E(A_iB_j)$$ $$E(A_i) \mapsto -E(A_i)$$ $$E(B_j) \mapsto E(B_j)$$ And the inequality becomes $$-E(A_1B_1)-E(A_1B_2)-E(A_1B_3)-E(A_2B_1)-E(A_2B_2)+E(A_2B_3)-E(A_3B_1)+E(A_3B_2)-E(A_1)-E(A_2)-E(B_1)-E(B_2) \leq 4,$$ which is Śliwa's version. Now, we can obtain Brunner et al.'s version either from Śliwa's version by changing back to Collins-Gisin notation, or directly from Collins and Gisin's version by relabelling Alice's output. It's more interesting to do the latter. The idea of the Collins-Gisin notation is to show the probabilities only of $n-1$ outcomes out of $n$; the other probability is implicit from the normalisation condition. 
In this case $n=2$, so all the probabilities refer to obtaining the outcome $0$, and the probabilities of outcome $1$ are implicit (here the labels are arbitrary). Thus after relabelling Alice's outcome we have to rewrite the probabilities as functions of the outcome $0$. This is done via the identities $$p(01) = p^A(0)-p(00)$$ $$p(10) = p^B(0)-p(00)$$ $$p(11) = 1+p(00)-p^A(0)-p^B(0)$$ $$p^A(1) = 1-p^A(0)$$ $$p^B(1) = 1-p^B(0)$$ So if we relabel Alice's outcome from 0 to 1, this changes the probabilities in the inequality as $$p_{ij} \mapsto p_j^B-p_{ij}$$ $$p^A_{i} \mapsto 1-p^A_i$$ $$p^B_j \mapsto p^B_j$$ Applying this transformation to the Collins and Gisin's version, we obtain $$-p_{11}-p_{12}-p_{13}-p_{21}-p_{22}+p_{23}-p_{31}+p_{32}+p_1^A+p_1^B\leq 1.$$ Which is not equal to Brunner et al.'s version. They made a sign mistake.
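The correlator/probability conversion used above is easy to verify: for deterministic $\pm 1$ outcomes (identifying the Collins-Gisin outcome with the value $+1$), $4p = E(AB)+E(A)+E(B)+1$ reduces to $4\,[a{=}1][b{=}1] = ab+a+b+1 = (1+a)(1+b)$, which the following Python check confirms on all four deterministic points; general behaviours follow by convexity.

```python
# Check 4*p = E(AB) + E(A) + E(B) + 1 on every deterministic strategy:
# a, b are the +/-1 outcomes, and p is the indicator of the outcome pair
# that the Collins-Gisin entry refers to (both equal to +1 here).
for a in (-1, +1):
    for b in (-1, +1):
        p = (a == +1) * (b == +1)
        assert 4 * p == a * b + a + b + 1, (a, b)
print("identity holds on all four deterministic points")
```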
{ "domain": "physics.stackexchange", "id": 66661, "tags": "quantum-mechanics, research-level, bells-inequality" }
How can we solve halting problem efficiently?
Question: I was doing exercises on the halting problem and there is a question where I am stuck. It goes like this: suppose you can decide the halting problem with a query "Does <tm,s> belong to HALT?" (tm = Turing machine and s = string), so with i queries you can solve i instances <tm1,s1> <tm2,s2> ... <tm(i),s(i)>. Now it says: show how we can solve 3 instances with only 2 queries. I am not sure how to start thinking about this one. Should we simulate one instance within another one? Can anyone provide a proof? Answer: The idea is to determine how many of the Turing machines halt. We will show more generally that given $N = 2^n-1$ many Turing machines $T_1,\ldots,T_N$, we can determine which of them halt (on the empty input) using only $n$ calls to an oracle that solves the halting problem. First, let us see how to determine, using a single oracle call, whether at least $\ell$ of the Turing machines halt. The idea is to construct a Turing machine $S_\ell$ which runs all of $T_1,\ldots,T_N$ in parallel, and halts once $\ell$ of the machines have halted. Given this, we can determine how many of $T_1,\ldots,T_N$ halt using binary search. In your case, $N = 3$, this proceeds as follows: Determine whether at least two of $T_1,T_2,T_3$ halt. If so, determine whether at least three of $T_1,T_2,T_3$ halt. Otherwise, determine if at least one of $T_1,T_2,T_3$ halts. Having found out that exactly $\ell$ of the machines halt, we simply simulate all the machines in parallel until $\ell$ of them halt. The remaining machines will not halt. It would be interesting to figure out whether this is optimal, that is, whether we can determine the halting of more than $2^n-1$ machines using only $n$ oracle calls.
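The whole strategy can be mocked up in a few lines of Python. Here a "machine" is just its halting time in steps (or None if it never halts), standing in for a real Turing machine, and each call to at_least stands for one oracle query "does $S_\ell$ halt?":

```python
def solve(machines):
    """machines[i] is the halting time of machine i, or None if it never
    halts (a mock stand-in for real Turing machines)."""
    n = len(machines)
    oracle_calls = 0

    def at_least(k):
        # One query "does S_k halt?", where S_k runs all machines in
        # parallel and halts once k of them have halted.
        nonlocal oracle_calls
        oracle_calls += 1
        return sum(m is not None for m in machines) >= k

    # Binary search for how many halt: ceil(log2(n+1)) queries, so 2 for n=3.
    lo, hi = 0, n
    while lo < hi:
        mid = (lo + hi + 1) // 2
        if at_least(mid):
            lo = mid
        else:
            hi = mid - 1
    count = lo

    # No more oracle calls needed: simulate in parallel until `count` halt.
    halted = [False] * n
    step = 0
    while sum(halted) < count:
        step += 1
        for i, m in enumerate(machines):
            if m is not None and m == step:
                halted[i] = True
    return halted, oracle_calls

print(solve([5, None, 2]))  # ([True, False, True], 2)
```

For three machines the binary search always finishes in exactly two queries, matching the exercise.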
{ "domain": "cs.stackexchange", "id": 18395, "tags": "turing-machines, automata, halting-problem, efficiency" }
How can I justify the formula for orifice flow rate?
Question: I am writing my first math paper, and I am using a formula for the mass flow rate of a liquid through an orifice. I found this formula mentioned in some online videos (like this one) but I could not find the source of the formula. The formula is $$m' = \frac{(\Delta p)^{\frac{1}{\alpha}}}{R}$$ where $m'$ is the mass flow rate, $\Delta p$ is the pressure difference between the two sides of the orifice, $\alpha$ is 1 for laminar flow and 2 for turbulent flow, and $R$ is the resistance of the orifice. How is this formula justified? I'm looking for a derivation with accompanying citations: either a paper in which the formula is derived (it need not be a main result), or citations for the assumptions of the derivation. Some searching hints at the formula being derived from Bernoulli's Equation, or from the Poiseuille Equation (maybe Hagen-Poiseuille?), but I cannot find a clear match. Also, all the equations seem to deal specifically with laminar or turbulent flow. I also found this question which seems to have some connection to my question (it mentions that the degree-one term dominates laminar flow, while the degree-two term dominates turbulent flow) but I cannot get a concrete reference from it. Answer: Looking at the flow in a narrow pipe (laminar flow in pipe) we have the following relations $$ \begin{array}{ll} \text{Mass flow rate} & m' = \rho A v \\ \text{Pressure Resistance} & \frac{\Delta p}{\rho g} = f \frac{\ell}{d} \frac{v^2}{2 g} \\ \text{Friction Factor} & f = \frac{64}{\rm Re} \\ \text{Reynold's number} & {\rm Re} = \frac{\rho d v}{\mu} \end{array}$$ Combined together yield $$ m' = \frac{\rho A d^2}{32 \mu \ell} \Delta p $$ which matches the given equation for laminar flow if $R = \frac{32 \mu \ell}{\rho A d^2}$. For turbulent flow the friction factor is more complex, and so they must have done some curve fitting to get to the $\sqrt{\Delta p}$ factor (probably a velocity related term since $\Delta p \propto \frac{\ell}{d} v^2$).
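A quick numerical check of the laminar-flow algebra in Python (the fluid values are illustrative, chosen so that Re = 100, safely inside the laminar range): chaining the four quoted relations gives the same mass flow rate as the derived formula with $R = 32\mu\ell/(\rho A d^2)$.

```python
import math

# Illustrative values only (roughly water in a narrow tube)
rho, mu = 1000.0, 1.0e-3   # density (kg/m^3), dynamic viscosity (Pa*s)
d, ell = 2.0e-3, 0.1       # pipe diameter and length (m)
v = 0.05                   # mean flow speed (m/s)
A = math.pi * d ** 2 / 4   # cross-sectional area

# Chain the four quoted relations:
Re = rho * d * v / mu                    # Reynolds number (here 100: laminar)
f = 64 / Re                              # laminar friction factor
dp = f * (ell / d) * (rho * v ** 2 / 2)  # pressure drop

m_direct = rho * A * v                               # mass flow by definition
m_formula = rho * A * d ** 2 / (32 * mu * ell) * dp  # the derived result

print(Re, m_direct, m_formula)  # the last two agree
```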
{ "domain": "physics.stackexchange", "id": 33099, "tags": "fluid-dynamics" }
What is the name of this neural network architecture with layers that are also connected to non-neighbouring layers?
Question: Consider a feedforward neural network. Suppose you have a layer of inputs, which is feedforward to a hidden layer, and feedforward both the input and hidden layers to an output layer. Is there a name for this architecture? A layer feeds forward around the layer after it? Answer: This could be called a residual neural network (ResNet), which is a neural network with skip connections, that is, connections that skip layers. Here's a screenshot of a figure from the paper Deep Residual Learning for Image Recognition (2015), an important paper that shows the usefulness of these architectures.
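To make the wiring concrete, here is a minimal NumPy sketch (toy sizes and random weights, not from the paper) of a forward pass in which the output layer sees both the raw input and the hidden layer, i.e. the input "skips" around the hidden layer:

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed toy sizes: 4 inputs, 8 hidden units, 3 outputs
n_in, n_hid, n_out = 4, 8, 3
W1 = rng.normal(size=(n_hid, n_in))
# The output layer's weights take the hidden activations AND the raw input,
# so its input dimension is n_hid + n_in -- that is the skip connection.
W2 = rng.normal(size=(n_out, n_hid + n_in))

def forward(x):
    h = np.tanh(W1 @ x)               # ordinary hidden layer
    skip = np.concatenate([h, x])     # input routed around the hidden layer
    return W2 @ skip                  # output sees both

y = forward(rng.normal(size=n_in))
print(y.shape)  # (3,)
```

Note that ResNet proper uses an additive skip, F(x) + x, which requires matching dimensions; concatenating as above is the more general way to feed a layer's input around it (closer in spirit to DenseNet-style connectivity).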
{ "domain": "ai.stackexchange", "id": 1690, "tags": "machine-learning, terminology, feedforward-neural-networks, residual-networks" }
Marshaling to a complex structure in .net
Question: This is a follow-up to question https://stackoverflow.com/questions/45106245/net-file-random-access-recoard-locking I have been able to make the reading and marshalling work OK, but the performance is a lot slower than using Fileget(). There are a few articles around, e.g. https://msdn.microsoft.com/en-us/library/ff647812.aspx However, we do have to use non-blittable entities, unfortunately, and they don't provide working examples on how to make performance gains. Examples of current code being used: <StructLayout(LayoutKind.Sequential, _ CharSet:=Runtime.InteropServices.CharSet.Ansi, Pack:=1)> Structure SALbchCX 'SAL555 <MarshalAs(UnmanagedType.ByValArray, SizeConst:=1)> <VBFixedString(1)> Public XJ1s As Char() 'H/D <MarshalAs(UnmanagedType.ByValArray, SizeConst:=8)> <VBFixedString(8)> Public XJ2s As Char() ' <MarshalAs(UnmanagedType.ByValArray, SizeConst:=8)> <VBFixedString(8)> Public XJ3s As Char() ' <MarshalAs(UnmanagedType.R4, SizeConst:=4)> Public XJ4 As Single Public XJ5 As Single <MarshalAs(UnmanagedType.R8, SizeConst:=8)> Public XJ6d As Double <MarshalAs(UnmanagedType.R4, SizeConst:=4)> Public XJ7 As Single <MarshalAs(UnmanagedType.ByValArray, SizeConst:=61)> <VBFixedString(61)> Public XJ8s As Char() <MarshalAs(UnmanagedType.ByValArray, SizeConst:=1)> <VBFixedString(1)> Public XJ8bs As Char() <MarshalAs(UnmanagedType.ByValArray, SizeConst:=1)> <VBFixedString(1)> Public XJ8cs As Char() <MarshalAs(UnmanagedType.ByValArray, SizeConst:=1)> <VBFixedString(1)> Public XJ8ds As Char() <MarshalAs(UnmanagedType.ByValArray, SizeConst:=1)> <VBFixedString(1)> Public XJ8es As Char() <MarshalAs(UnmanagedType.ByValArray, SizeConst:=1)> <VBFixedString(1)> Public XJ8fs As Char() <MarshalAs(UnmanagedType.I4, SizeConst:=4)> Public XJ8Al As Integer <MarshalAs(UnmanagedType.R4, SizeConst:=4)> Public XJ9 As Single <MarshalAs(UnmanagedType.ByValArray, SizeConst:=11)> <VBFixedString(11)> Public XJAs As Char() <MarshalAs(UnmanagedType.ByValArray, SizeConst:=30)>
<VBFixedString(30)> Public XJAAs As Char() <MarshalAs(UnmanagedType.ByValArray, SizeConst:=9)> <VBFixedString(9)> Public XJBs As Char() <MarshalAs(UnmanagedType.ByValArray, SizeConst:=35)> <VBFixedString(35)> Public XJBBs As Char() <MarshalAs(UnmanagedType.ByValArray, SizeConst:=5)> <VBFixedString(5)> Public XJCs As Char() <MarshalAs(UnmanagedType.ByValArray, SizeConst:=7)> <VBFixedString(7)> Public XJDs As Char() <MarshalAs(UnmanagedType.ByValArray, SizeConst:=8)> <VBFixedString(8)> Public XJDas As Char() <MarshalAs(UnmanagedType.R4, SizeConst:=4)> Public XJE As Single <MarshalAs(UnmanagedType.ByValArray, SizeConst:=1)> <VBFixedString(1)> Public XJFs As Char() <MarshalAs(UnmanagedType.ByValArray, SizeConst:=6)> <VBFixedString(6)> Public XJGs As Char() <MarshalAs(UnmanagedType.ByValArray, SizeConst:=30)> <VBFixedString(30)> Public XJHs As Char() <MarshalAs(UnmanagedType.ByValArray, SizeConst:=30)> <VBFixedString(30)> Public XJIs As Char() <MarshalAs(UnmanagedType.ByValArray, SizeConst:=30)> <VBFixedString(30)> Public XJJs As Char() <MarshalAs(UnmanagedType.ByValArray, SizeConst:=30)> <VBFixedString(30)> Public XJKs As Char() <MarshalAs(UnmanagedType.ByValArray, SizeConst:=35)> <VBFixedString(35)> Public XJKKs As Char() <MarshalAs(UnmanagedType.R4, SizeConst:=4)> Public XJL As Single <MarshalAs(UnmanagedType.ByValArray, SizeConst:=1)> <VBFixedString(1)> Public XJMs As Char() <MarshalAs(UnmanagedType.ByValArray, SizeConst:=2)> <VBFixedString(2)> Public XJNs As Char() <MarshalAs(UnmanagedType.ByValArray, SizeConst:=70)> <VBFixedString(70)> Public XJOs As Char() <MarshalAs(UnmanagedType.R4, SizeConst:=4)> Public XJP As Single Public XJQ As Single Public XJR As Single <MarshalAs(UnmanagedType.ByValArray, SizeConst:=1)> <VBFixedString(1)> Public XJSs As Char() <MarshalAs(UnmanagedType.ByValArray, SizeConst:=128)> <VBFixedString(128)> Public XJTs As Char() End Structure .... Public SAL555 As SALbchCX .... 
Dim f5 As New FileStream(coUC, FileMode.Open, _ FileAccess.ReadWrite, FileShare.ReadWrite, Marshal.SizeOf(SAL555)) For X = 1 To 10 GetBatchRec(X, f5) 'populates SAL555 Next X .... Public Sub GetBatchRec(RecNumber As Integer, File As FileStream) Dim b() As Byte ReDim b(Marshal.SizeOf(SAL555) - 1) File.Seek((RecNumber - 1) * 600, SeekOrigin.Begin) 'Marshal.Size(SAL555) File.Read(b, 0, b.Length) Dim h = GCHandle.Alloc(b, GCHandleType.Pinned) SAL555 = Marshal.PtrToStructure(Of SALbchCX)(h.AddrOfPinnedObject()) h.Free() End Sub We are using ByValArray rather than ByValTStr, as ByValTStr did not capture the last char of the string due to it expecting an ending return char. We copy the char() to strings using SAL5.XJIs = New String(SAL555.XJIs), which again takes up processing, as it's around 2ms to do. The two main parts of the code which seem to be taking up most of the time are the following. File.Seek((RecNumber - 1) * 600, SeekOrigin.Begin) 'Marshal.Size(SAL555) SAL555 = Marshal.PtrToStructure(Of SALbchCX)(h.AddrOfPinnedObject()) Each operation takes around 10ms, and just using fileget() takes around 8ms, so we are around twice as slow even before copying the char() to strings. I was hoping that someone had a better idea of how to read/marshal the data into a complex structure a bit more efficiently. FYI: going to ignore answers related to "Stop using flat files", as this is not an option at present :) Answer: After extensive performance testing we discovered that the marshalling solution is actually faster than the VB6-ported fileget(). The misleading thing was that the development environment was taking an age to run through the loops; however, when running the compiled EXE or the Visual Studio built-in performance profiler, the performance of the marshalling was better than fileget(), and we are also not hitting locked records prematurely, win/win!! Learn something new every day! Thanks, Richard.
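As an aside, the seek-to-record-then-unpack pattern used in GetBatchRec is easy to sketch in Python with the standard struct module. The record layout below is invented for illustration (it is not the SALbchCX layout above), but the mechanics mirror the VB code: seek to (record number - 1) * record size, read the raw bytes, and unpack all fields in one call:

```python
import io
import struct

# Invented 20-byte record: 8-char name, a float32, an int32, 4 flag chars.
# "<" = little-endian with no padding, analogous to Pack:=1.
RECORD = struct.Struct("<8s f i 4s")

def write_record(f, name, value, count, flags):
    f.write(RECORD.pack(name.ljust(8).encode("ascii"), value, count,
                        flags.encode("ascii")))

def get_record(f, rec_number):
    """Seek to a 1-based record number and unpack it -- like GetBatchRec."""
    f.seek((rec_number - 1) * RECORD.size)
    name, value, count, flags = RECORD.unpack(f.read(RECORD.size))
    return name.decode("ascii").rstrip(), value, count, flags.decode("ascii")

buf = io.BytesIO()  # stands in for the FileStream
write_record(buf, "ALPHA", 1.5, 42, "ABCD")
write_record(buf, "BETA", -2.0, 7, "WXYZ")
print(get_record(buf, 2))  # ('BETA', -2.0, 7, 'WXYZ')
```

Because `struct.Struct` precompiles the format string, the per-record cost is a single seek, read, and unpack, which is the same property that makes the pinned-buffer PtrToStructure approach fast once outside the debugger.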
{ "domain": "codereview.stackexchange", "id": 27049, "tags": "performance, .net, vb.net, serialization" }
Calculation of an inflation on volume/year
Question: I would like to find out if my current solution to the problem described below is "good enough" or if there is an alternative way of achieving it. All I care about is the length (no. of lines) of the code and its efficiency. I am trying to follow the "DRY" principle and come up with something that, when seen by another developer in the future, will be considered "good practice". I am forced[1] to enter a few formulas into spreadsheet cells via VBA. The formula is for a calculation of inflation (say a labour rate) on volume/year. It has some constants and some variables, but the tricky bit is a variable inflation rate added each year. In my real-life example this is far more complicated, but I have shortened it and come up with an SSCCE. SSCCE You can view the example as a spreadsheet on Google Docs, or have a quick look through below: The formulas are: D6 = -$B$12*D4*(D8+1) E6 = -$B$12*E4*(D8+1)*(E8+1) F6 = -$B$12*F4*(D8+1)*(E8+1)*(F8+1) G6 = -$B$12*G4*(D8+1)*(E8+1)*(F8+1)*(G8+1) H6 = -$B$12*H4*(D8+1)*(E8+1)*(F8+1)*(G8+1)*(H8+1) I6 = -$B$12*I4*(D8+1)*(E8+1)*(F8+1)*(G8+1)*(H8+1)*(I8+1) J6 = -$B$12*J4*(D8+1)*(E8+1)*(F8+1)*(G8+1)*(H8+1)*(I8+1)*(J8+1) K6 = -$B$12*K4*(D8+1)*(E8+1)*(F8+1)*(G8+1)*(H8+1)*(I8+1)*(J8+1)*(K8+1) L6 = -$B$12*L4*(D8+1)*(E8+1)*(F8+1)*(G8+1)*(H8+1)*(I8+1)*(J8+1)*(K8+1)*(L8+1) I tried it but didn't like it. There are 9 really long and ugly lines (remember, in my real-life example there are many more variables).
Range("D6").Formula = "-$B$12*D4*(D8+1)" Range("E6").Formula = "-$B$12*E4*(D8+1)*(E8+1)" Range("F6").Formula = "-$B$12*F4*(D8+1)*(E8+1)*(F8+1)" Range("G6").Formula = "-$B$12*G4*(D8+1)*(E8+1)*(F8+1)*(G8+1)" Range("H6").Formula = "-$B$12*H4*(D8+1)*(E8+1)*(F8+1)*(G8+1)*(H8+1)" Range("I6").Formula = "-$B$12*I4*(D8+1)*(E8+1)*(F8+1)*(G8+1)*(H8+1)*(I8+1)" Range("J6").Formula = "-$B$12*J4*(D8+1)*(E8+1)*(F8+1)*(G8+1)*(H8+1)*(I8+1)*(J8+1)" Range("K6").Formula = "-$B$12*K4*(D8+1)*(E8+1)*(F8+1)*(G8+1)*(H8+1)*(I8+1)*(J8+1)*(K8+1)" Range("L6").Formula = "-$B$12*L4*(D8+1)*(E8+1)*(F8+1)*(G8+1)*(H8+1)*(I8+1)*(J8+1)*(K8+1)*(L8+1)" Or Cells: Cells(6, 4).Formula = "-$B$12*D4*(D8+1)" Cells(6, 5).Formula = "-$B$12*E4*(D8+1)*(E8+1)" Cells(6, 6).Formula = "-$B$12*F4*(D8+1)*(E8+1)*(F8+1)" Cells(6, 7).Formula = "-$B$12*G4*(D8+1)*(E8+1)*(F8+1)*(G8+1)" Cells(6, 8).Formula = "-$B$12*H4*(D8+1)*(E8+1)*(F8+1)*(G8+1)*(H8+1)" Cells(6, 9).Formula = "-$B$12*I4*(D8+1)*(E8+1)*(F8+1)*(G8+1)*(H8+1)*(I8+1)" Cells(6, 10).Formula = "-$B$12*J4*(D8+1)*(E8+1)*(F8+1)*(G8+1)*(H8+1)*(I8+1)*(J8+1)" Cells(6, 11).Formula = "-$B$12*K4*(D8+1)*(E8+1)*(F8+1)*(G8+1)*(H8+1)*(I8+1)*(J8+1)*(K8+1)" Cells(6, 12).Formula = "-$B$12*L4*(D8+1)*(E8+1)*(F8+1)*(G8+1)*(H8+1)*(I8+1)*(J8+1)*(K8+1)*(L8+1)" But try to play with the right-hand side of the statement. I see so much potential in terms of optimising it, but the best I could have come up with in VBA is: Sub Main() Dim i As Long Dim s As String For i = 4 To 12 s = s + "*(" + Cells(8, i).Address + "+1)" Cells(6, i).Formula = "=-$B$12*" + Cells(4, i).Address + s Next i End Sub I realize that I am concatenating Strings to achieve my goal. The code is working fine in this SSCCE and in my real-life problem. It's not slow, but I am purely curious if there is any other approach or if my current one can be optimized any further. Questions Can this be optimized any further? Would you do this differently? If so, how? Why would your solution be any better than my current one?
[1] Forced because the real-life example pulls its values from multiple places like another closed workbook, database, website, user input etc. For example, INDIRECT() does not work with closed workbooks, so I can't rely on pure built-in functions. Answer: Here is what I would do if I were you: Sub Main() Dim b12_value As Double Dim inflation As Double 'Seed the inflation variable and store the static B12 value inflation = 1 b12_value = Range("$B$12").value 'Loop through each cell in D6:L6 and calculate its value. For Each cell In Range("D6:L6") inflation = inflation * (Cells(8, cell.Column).Value + 1) cell.Value = -b12_value * Cells(4, cell.Column).Value * inflation Next End Sub Since you are concerned about the number of lines, I chose not to store the volume for each calculation in its own variable (which I would prefer to do, as it makes for more readable code). The solution is only 1 line longer than your original solution and it does not rely on string concatenation (which is slow). It also simply does the formula work itself and sets the cell's value directly, instead of setting the formula and then having Excel calculate the value from said formula. NOTE: If you wanted the formula visible in Excel (instead of just its value), I would think your original solution (with a couple of tweaks) is sufficient for your needs: Sub Main() 'Use a more descriptive variable name than "s". Descriptive names improve 'readability and better facilitate user understanding. Dim inflation_string as String 'Using the for each syntax creates more readable code. Plus, as a bonus, 'it removed the use of the index variable i. For Each cell In Range("D6:L6") inflation_string = inflation_string + "*(" + Cells(8, cell.Column).Address + "+1)" cell.Formula = "=-$B$12*" + Cells(4, cell.Column).Address + inflation_string Next End Sub
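The chain of (rate + 1) factors that grows by one term per year is just a running (cumulative) product, which is exactly what the accepted answer's loop exploits. A Python sketch with made-up numbers confirms that the long-hand spreadsheet formulas and the running-product loop compute identical values:

```python
# Made-up example data: a fixed labour rate (like B12), yearly volumes
# (like row 4) and yearly inflation rates (like row 8). Values are
# illustrative only, not from the spreadsheet.
b12 = 10.0
volumes = [100, 110, 120, 130]        # D4, E4, F4, G4
rates = [0.02, 0.03, 0.025, 0.04]     # D8, E8, F8, G8

# Long-hand, like the spreadsheet formulas D6..G6: year i multiplies
# every factor (rates[0]+1) .. (rates[i]+1) from scratch.
longhand = []
for i in range(len(volumes)):
    product = 1.0
    for r in rates[: i + 1]:
        product *= (r + 1)
    longhand.append(-b12 * volumes[i] * product)

# Running-product version, like the VBA loop in the answer: each year
# multiplies one more factor onto the accumulated inflation.
running = []
inflation = 1.0
for vol, r in zip(volumes, rates):
    inflation *= (r + 1)
    running.append(-b12 * vol * inflation)

print(longhand == running)  # True: same arithmetic in the same order
```

This is also why the loop is O(n) in the number of years while the long-hand formulas re-do O(n²) multiplications in total.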
{ "domain": "codereview.stackexchange", "id": 7527, "tags": "performance, algorithm, vba, excel, finance" }
How do I set the initial pose of a robot in Gazebo?
Question: Let's say I've created my own world file that has: This normally spawns the robot at x=0, y=0. How could I spawn it somewhere else, say x=1, y=1? I know I could spawn the robot and then move it by calling the service set_model_state, but I've found doing this with the PR2 doesn't work very well with other objects in the world. pr2_no_controllers has an arg called 'optenv ROBOT_INITIAL_POSE.' How do I get at this from my world file? Originally posted by EmbeddedSystem on ROS Answers with karma: 120 on 2012-08-06 Post score: 9 Original comments Comment by fvd on 2018-07-10: If you came to this thread trying to set an initial joint configuration of the robot, check this thread too: https://answers.ros.org/question/248178/can-i-set-initial-joint-positions-in-gazebomoveit-via-configuration/ Answer: You can set the environment variable on the command line, e.g.: ROBOT_INITIAL_POSE="-y 2" roslaunch pr2_gazebo pr2_empty_world.launch Originally posted by hsu with karma: 5780 on 2012-08-06 This answer was ACCEPTED on the original site Post score: 9 Original comments Comment by inani47 on 2018-04-14: is there a way to add this to my launch file for turtlebot?
{ "domain": "robotics.stackexchange", "id": 10494, "tags": "ros, gazebo, pose, gazebo-worlds" }
Calculation and Visualization of Correlation Matrix with Pandas
Question: I have a pandas data frame with several entries, and I want to calculate the correlation between the income of some types of stores. There are a number of stores with income data, a classification of area of activity (theater, cloth stores, food ...) and other data. I tried to create a new data frame and insert a column with the income of all kinds of stores that belong to the same category, but the resulting data frame has only the first column filled and the rest is full of NaN's. The code that I tried: corr = pd.DataFrame() for at in activity: stores.loc[stores['Activity']==at]['income'] I want to do this so I can use .corr() to get the correlation matrix between the categories of stores. After that, I would like to know how I can plot the matrix values (-1 to 1, since I want to use Pearson's correlation) with matplotlib. Answer: Another alternative is to use the heatmap function in seaborn to plot the correlation matrix. This example uses the Auto data set from the ISLR package in R (the same as in the example you showed). import pandas.rpy.common as com import seaborn as sns %matplotlib inline # load the R package ISLR infert = com.importr("ISLR") # load the Auto dataset auto_df = com.load_data('Auto') # calculate the correlation matrix corr = auto_df.corr() # plot the heatmap sns.heatmap(corr, xticklabels=corr.columns, yticklabels=corr.columns) If you wanted to be even fancier, you can use Pandas Style, for example: cmap = sns.diverging_palette(5, 250, as_cmap=True) def magnify(): return [dict(selector="th", props=[("font-size", "7pt")]), dict(selector="td", props=[('padding', "0em 0em")]), dict(selector="th:hover", props=[("font-size", "12pt")]), dict(selector="tr:hover td:hover", props=[('max-width', '200px'), ('font-size', '12pt')]) ] corr.style.background_gradient(cmap, axis=1)\ .set_properties(**{'max-width': '80px', 'font-size': '10pt'})\ .set_caption("Hover to magnify")\ .set_precision(2)\ .set_table_styles(magnify())
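Note that pandas.rpy was removed from pandas long ago, so the data-loading part of the answer no longer runs as written. With current pandas the same idea needs nothing beyond DataFrame.corr() and a heatmap; a minimal, self-contained sketch (made-up store-income data standing in for the question's table):

```python
import numpy as np
import pandas as pd
import matplotlib
matplotlib.use("Agg")  # off-screen backend; omit this line in a notebook
import matplotlib.pyplot as plt

rng = np.random.default_rng(1)
# Made-up monthly income per store category -- stand-in for the real data
df = pd.DataFrame({
    "theater": rng.normal(100, 10, size=24),
    "clothes": rng.normal(80, 8, size=24),
    "food": rng.normal(120, 12, size=24),
})

# Pearson correlation matrix, values in [-1, 1]
corr = df.corr(method="pearson")

fig, ax = plt.subplots()
im = ax.imshow(corr.values, vmin=-1, vmax=1, cmap="coolwarm")
ax.set_xticks(range(len(corr.columns)))
ax.set_xticklabels(corr.columns, rotation=45, ha="right")
ax.set_yticks(range(len(corr.columns)))
ax.set_yticklabels(corr.columns)
fig.colorbar(im, ax=ax)
fig.tight_layout()
```

Pinning vmin=-1 and vmax=1 keeps the color scale comparable across datasets, which matters for Pearson coefficients.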
{ "domain": "datascience.stackexchange", "id": 3689, "tags": "python, statistics, visualization, pandas" }
Change in internal energy for the heating and vaporisation of water
Question: What is the value of change in internal energy at $\pu{1atm}$ in the process $$\ce{H2O(l,$323\ \mathrm K$)->H2O(g,$423\ \mathrm K$)}$$ given $C_{\mathrm m,V}(\ce{H2O,l})=\pu{75.0 J K^{-1} mol^{-1}}$ and $C_{\mathrm m,p}(\ce{H2O,g})=\pu{33.314 J K^{-1} mol^{-1}}$ a. $\pu{42.91 kJ/mol}$ b. $\pu{43086 kJ/mol}$ c. $\pu{42.6 kJ/mol}$ d. $\pu{49.6 kJ/mol}$ I tried to use the First Law here. For chemical reaction I found $\Delta n_\mathrm g$ to be $1$ and the heat of the reaction $\Delta H$ is given as $\pu{40.7 kJ mol-1}$. However, I am not able to use the specific heat data given in the question. What am I doing wrong here? I am getting stuck in phase change reactions, particularly in using $C_p$ and $C_V$ here. I know the First Law will work, but how do we use $C_p$ and $C_V$ in such reactions? Answer: The easiest way to do this problem is to first determine the change in enthalpy for this constant pressure process. Do you think you can do that? Then determine the change in PV between the initial and final states.
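Following the answer's hint, and making the usual assumptions explicit (C_V ≈ C_p for the nearly incompressible liquid, liquid volume negligible next to the vapor's, vapor treated as ideal, and ΔH_vap = 40.7 kJ/mol at 373 K as quoted in the question), the numbers work out as follows:

```python
R = 8.314          # J/(K*mol)
C_liq = 75.0       # J/(K*mol), given (C_V, but ~= C_p for a liquid)
C_gas_p = 33.314   # J/(K*mol), given
dH_vap = 40_700.0  # J/mol, given in the question

# Enthalpy change in three constant-pressure steps:
dH = (C_liq * (373 - 323)        # heat liquid 323 K -> 373 K
      + dH_vap                   # vaporize at 373 K
      + C_gas_p * (423 - 373))   # heat vapor 373 K -> 423 K

# Delta(pV): liquid volume negligible, vapor ideal => p*V_gas = R*T_final
d_pV = R * 423

dU = dH - d_pV
print(round(dH / 1000, 1), round(dU / 1000, 1))  # ~46.1 and ~42.6 kJ/mol
```

The result, about 42.6 kJ/mol, matches option c, and the calculation shows exactly where C_V vs C_p enters: the heat capacities build ΔH along a constant-pressure path, and the Δ(pV) correction converts ΔH into ΔU.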
{ "domain": "chemistry.stackexchange", "id": 10619, "tags": "thermodynamics" }
Guess number game
Question: I'm making a simple guess-a-number game. In the game, the user inputs a minimum and a maximum value and the program generates a random number between those numbers; then the user tries to guess the number. This is one of my first programs, so I'd like to have my coding techniques checked: what should I do differently, etc. I have a getInterval() method that I'm not sure I've done in the best way. First I thought I should pass two arguments (min and max) and change them in the method, but it looks like I can't do that in Java, so I've done it this way. I'm going to make this game a bit more advanced later, but I just want to check if this part is OK and if I should change some things before I move on. import java.util.Random; import java.util.Scanner; public class Main { public static void main(String[] args) { int min=0; int max=0; min = getInterval("Min"); // Get minimum for guessing interval max = getInterval("Max"); // Get Maximum for guessing interval int randomNumber = getRandom(min, max); // generate random number between Min and Max playRandom(min, max, randomNumber); // play the game } public static int getInterval(String text){ System.out.println("Please Input " + text + " number for guessing: "); Scanner inputMin = new Scanner(System.in); return Integer.parseInt(inputMin.nextLine()); } public static int getRandom(int min, int max){ Random rand = new Random(); return rand.nextInt((max - min) + 1) + min; } public static void playRandom(int min, int max, int randomNumber){ boolean guess = false; int numGuesses = 1; while (!guess){ System.out.println("Enter your guess: "); Scanner inputGuess = new Scanner(System.in); int userGuess = Integer.parseInt(inputGuess.nextLine()); if (userGuess < randomNumber){ System.out.println("Too low!"); numGuesses++; } else if(userGuess > randomNumber){ System.out.println("Too high!"); numGuesses++; }else{ System.out.println("You win!
You tried " + numGuesses + " time(s) total"); guess = true; } } } } Answer: Use a class to manage state instead of passing it around. Right now your program has 3 variables that represent the program's state: min, max and randomNumber. The state is being passed around to wherever it's needed as function arguments: public static int getRandom(int min, int max) {...} getRandom(min, max); public static void playRandom(int min, int max, int randomNumber) {...} playRandom(min, max, randomNumber); You could simplify these method calls and method signatures by using a class that holds this state as fields, and making these methods part of the class. public class Game { private int min; private int max; private int randomNumber; ... public void getInterval() {...} // parameter no longer needed public void getRandom() {...} public void playRandom() {...} } Instead of returning the min and max from getInterval, you would assign the result to these fields: public void getInterval() { Scanner input = new Scanner(System.in); System.out.println("Please Input Min number for guessing: "); // return Integer.parseInt(input.nextLine()); this.min = Integer.parseInt(input.nextLine()); // the "this." is optional here. System.out.println("Please Input Max number for guessing: "); this.max = Integer.parseInt(input.nextLine()); } Then in getRandom, you would use the fields instead of method parameters: // return rand.nextInt((max - min) + 1) + min; // this.randomNumber = rand.nextInt((this.max - this.min) + 1) + this.min; randomNumber = rand.nextInt((max - min) + 1) + min; // leaving out "this." here. The result would be simpler method calls and signatures, since there is no more need for the arguments: Game game = new Game(); public void getRandom() {...} game.getRandom(); public void playRandom() {...} game.playRandom(); The state that was previously passed around is now managed internally by the class.
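For contrast, here is the same state-in-a-class idea sketched in Python (names and structure are illustrative, not a translation of the Java above). The input source is injected as a parameter, so the game logic can be exercised without a human at the keyboard:

```python
import random

class Game:
    """Holds low/high/secret as state instead of passing them between functions."""

    def __init__(self, low, high, rng=random):
        self.low = low
        self.high = high
        self.secret = rng.randint(low, high)  # inclusive on both ends

    def check(self, guess):
        """Return 'low', 'high' or 'win' -- pure logic, easy to unit-test."""
        if guess < self.secret:
            return "low"
        if guess > self.secret:
            return "high"
        return "win"

def play(game, ask=lambda: int(input("Enter your guess: "))):
    """Drive the game; `ask` is injectable so tests need no real stdin."""
    tries = 0
    while True:
        tries += 1
        result = game.check(ask())
        if result == "win":
            return tries
        print(f"Too {result}!")

# A binary-search guesser standing in for a human player:
game = Game(1, 100)
lo, hi = 1, 100
def auto_guess():
    global lo, hi
    mid = (lo + hi) // 2
    r = game.check(mid)       # peek to narrow the bracket for the next turn
    if r == "low":
        lo = mid + 1
    elif r == "high":
        hi = mid - 1
    return mid

tries_used = play(game, ask=auto_guess)
print(tries_used)  # at most 7 guesses for a 1..100 range
```

Separating `check` (pure state logic) from `play` (I/O loop) is the same move as the reviewer's Java refactor: the state lives in one object, and the methods that need it no longer take it as arguments.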
{ "domain": "codereview.stackexchange", "id": 20408, "tags": "java, game, number-guessing-game" }
Pennylane:WireError: Did not find some of the wires on device with wires
Question: My project is based on https://pennylane.ai/qml/demos/tutorial_rotoselect.html While verifying the application of this theory in more complex circuits, the following problem occurred. How can I fix this? I already tried the suggestions in https://github.com/PennyLaneAI/pennylane/issues/1459, but still have no solution. The traceback (abridged to the relevant frames) is:

> WireError Traceback (most recent call last)
> File d:\miniconda3\lib\site-packages\pennylane\_device.py:369, in Device.map_wires(self, wires)
> --> 369 mapped_wires = wires.map(self.wire_map)
>
> File d:\miniconda3\lib\site-packages\pennylane\wires.py:273, in Wires.map(self, wire_map)
> --> 273 raise WireError(f"No mapping for wire label {w} specified in wire map {wire_map}.")
>
> WireError: No mapping for wire label 0 specified in wire map OrderedDict([(tensor(18, requires_grad=True), 0)]).
>
> The above exception was the direct cause of the following exception:
>
> Input In [11], in cost(params, wires)
> ---> 90 Z = circuit(params, wires)
>
> File d:\miniconda3\lib\site-packages\pennylane_qiskit\qiskit_device.py:269, in QiskitDevice.apply_operations(self, operations)
> --> 269 device_wires = self.map_wires(operation.wires)
>
> File d:\miniconda3\lib\site-packages\pennylane\_device.py:371, in Device.map_wires(self, wires)
> --> 371 raise WireError(
> 372 f"Did not find some of the wires {wires} on device with wires {self.wires}."
> 373 ) from e
>
> WireError: Did not find some of the wires <Wires = [0]> on device with wires <Wires = [tensor(18, requires_grad=True)]>.
Answer: The main reason is this line: n_wires = np.tensor(18, requires_grad=True) Because n_wires is a 0-d tensor rather than a plain Python int, the device ends up with a single wire labeled tensor(18, requires_grad=True) instead of 18 wires labeled 0..17, so the circuit's integer wire labels cannot be mapped; that is exactly what the wire map in the traceback shows. The failing setup was: n_wires = np.tensor(18, requires_grad=True) dev = qml.device("qiskit.aer", shots=1000, wires=n_wires) @qml.qnode(dev) def test(wires): qml.Hadamard(wires=wires[0]) return [qml.expval(qml.PauliZ(i)) for i in range(n_wires)] wires=range(n_wires) test(wires) Defining n_wires = 18 as a plain integer fixes the error.
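The mismatch can be illustrated without any quantum library at all. The sketch below is a toy reconstruction of the wire-map logic (not PennyLane's actual implementation): a plain int is treated as a wire count, while any other object becomes a single wire label, which is why wire 0 is then missing:

```python
# Toy stand-ins illustrating the wire-label mismatch (not PennyLane's real code)

class FakeTensor:
    """Mimics np.tensor(18, requires_grad=True): it is NOT a plain Python int."""
    def __init__(self, value):
        self.value = value

def build_wire_map(wires):
    # Assumed convention: a plain int means "this many wires, labeled 0..n-1";
    # any other object is taken as a wire *label* itself.
    if isinstance(wires, int):
        labels = list(range(wires))
    else:
        labels = [wires]
    return {label: i for i, label in enumerate(labels)}

bad = build_wire_map(FakeTensor(18))   # one wire, whose label is the tensor
good = build_wire_map(18)              # eighteen wires labeled 0..17

print(0 in bad, 0 in good)  # False True -> why wire 0 could not be found
```

With `wires=18` (a plain int), the device's labels 0..17 line up with the integer wires the circuit addresses, and the lookup that raised WireError succeeds.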
{ "domain": "quantumcomputing.stackexchange", "id": 3938, "tags": "qiskit" }
Leap year check in Java (functional style)
Question: I have done the leap year check on https://exercism.io already in a lot of languages. Today I came back to the exercise in Java and was playing around with some perhaps more fun ways to do the check. The following, more functional, one kept me thinking: import java.util.stream.IntStream; class Leap { boolean isLeapYear(final int year) { return IntStream.of(4, 100, 400) .filter(divisor -> year % divisor == 0) .count() % 2 == 1; } } What do you think? Is this readable? At least it seems to be extensible in case more rules ever get added ;) Another funny fact: due to prime factorization we actually could also use IntStream.of(2, 25, 16) (but for sure that doesn't help in readability). Answer: What do you think? It certainly is clever. That is not always a good thing, because it might be harder to understand what is going on (you have to be at least as clever). Is this readable? Not really. I can't tell at a glance that 100 is treated differently from 400 and why. Also there is no room for comments close to your code. If you make a set of ifs you could do something like: if (year % 400 ... ) // if 400 then a leap year {.. ..} In your IntStream you could do something like: IntStream.of( 4, //yes 100, //no 400 //yes ) But then you kinda miss the 'one-liner' point of your solution. Is it efficient? Not per se. Most efficient would be to first check divisibility by 400, if true, return true, etc. Prime factorization funny fact: This is incorrect, because it would label 1998 as a leap year. Or do you mean something different?
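Both the divisor-count trick and the answer's 1998 counterexample are easy to verify mechanically. Translated to Python and checked against the standard library's reference implementation:

```python
import calendar

def is_leap_stream_style(year):
    """Leap iff an odd number of the divisors {4, 100, 400} divide the year."""
    return sum(1 for d in (4, 100, 400) if year % d == 0) % 2 == 1

def is_leap_prime_variant(year):
    """The (2, 25, 16) variant from the question -- NOT equivalent,
    because divisibility by 2 is weaker than divisibility by 4."""
    return sum(1 for d in (2, 25, 16) if year % d == 0) % 2 == 1

# The divisor-count trick agrees with the standard rule for every year tested:
assert all(is_leap_stream_style(y) == calendar.isleap(y) for y in range(1, 3000))

# ...but the prime-factorization variant fails, e.g. on 1998 (divisible by 2,
# not by 25 or 16, so the count is 1 -- odd -- and it is wrongly called leap):
print(is_leap_prime_variant(1998), calendar.isleap(1998))  # True False
```

The parity argument behind the (4, 100, 400) version: a year divisible only by 4 scores 1 (leap), by 4 and 100 scores 2 (not leap), by all three scores 3 (leap), matching the Gregorian rule case by case.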
{ "domain": "codereview.stackexchange", "id": 38084, "tags": "java, programming-challenge, functional-programming" }
Why is creating an AI that can code a hard task?
Question: For people who have experience in the field, why is creating AI that has the ability to write programs (that are syntactically correct and useful) a hard task? What are the barriers/problems we have to solve before we can solve this problem? If you are in the camp that this isn't that hard, why hasn't it become mainstream? Answer: AI has been applied to programming (check out TabNine, my favorite autocomplete engine), although not in as robust a fashion as you describe. Programming requires a high level of abstraction, while AI is typically trained to solve a very specific task. Given thousands of examples of insertion sort in Python, I think a model could be trained (perhaps after autocomplete and syntax correction) to figure it out. However, at this point the field has not developed a more general intelligence that can apply the ideas of the algorithm to other problems. Addition based on comments: Big picture, training an algorithm to solve a general class of problems (say, web dev) requires a huge number of examples or an immense number of trials. Further, as the complexity of the problem grows, the number of parameters necessary to build the model grows. Writing code is a very complex problem and would thus require a huge amount of data and a huge number of parameters, making it totally infeasible with today's math and (because of how the math is solved) hardware. Modern AI has a very simple goal: find the model that solves a problem optimally. If we could quickly search every possible model this would be simple. Fields like machine (deep) learning and reinforcement learning are concerned with finding a good solution in a reasonable amount of time. At this point no such solution exists for a problem of such complexity.
{ "domain": "ai.stackexchange", "id": 1847, "tags": "machine-learning, philosophy, genetic-programming, ai-completeness, inductive-programming" }
Why doesn't momentum seem to be conserved here?
Question: So, this problem should be easy, and I feel pretty silly even asking this question given my physics knowledge: A particle of mass $m$ is moving along the x-axis with speed $v$ when it collides with a particle of mass $2m$, initially at rest. After the collision, the first particle has come to rest, and the second particle has split into two equal-mass pieces that move at equal angles, $\theta$, with the x-axis. Which statement correctly describes the speeds of the two pieces? Apparently, the answer is: Each piece moves with speed greater than $v/2$ ... Meaning that the momentum of each piece of mass $m$ is greater than $mv/2$, so their total momentum is greater than $mv$. How can this be possible? Apparently no one here has a problem with it, and I certainly understand the mathematics used to justify it, but it seems to fly in the face of one of the most basic physical assumptions. Is the assumption here that there's some other force that propelled the two pieces, as if you were splitting a magnet or something? Answer: For the two-dimensional collision detailed in the problem statement, momentum must be conserved in both the x direction and the y direction. For the x direction, the total x-momentum after the collision must equal the total x-momentum before the collision. Because of this, each particle after the collision must move with speed v/2 in the x direction. Momentum must also be conserved in the y direction. For the particles after the collision, one particle moves up with a velocity that depends on the angle under which the collision took place, and the other particle moves down with this same velocity, ensuring that there is zero total y-momentum before and after the collision. The velocity of each particle after the collision is the vector sum of the x and y components of each particle's velocity. This means that the final speed of each particle after the collision must be greater than v/2.
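The component bookkeeping resolves the paradox. Writing each piece's mass as $m$ and speed as $v'$, a short sketch:

```latex
\begin{aligned}
p_x:\quad & 2\,m\,v'\cos\theta = m v
  &&\Rightarrow\quad v' = \frac{v}{2\cos\theta} \;\ge\; \frac{v}{2},\\
p_y:\quad & m\,v'\sin\theta - m\,v'\sin\theta = 0
  && \text{(the two $y$-components cancel).}
\end{aligned}
```

Since $\cos\theta \le 1$, each piece's speed exceeds $v/2$ for any $\theta > 0$, yet the vector momenta still sum to $mv$: the speeds add up to more than $v$, but only the $x$-components contribute to the total momentum.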
{ "domain": "physics.stackexchange", "id": 32906, "tags": "energy-conservation, momentum" }
Hamiltonian of Qiskit QFT is not hermitian
Question: I am trying to generate the Hamiltonian of a quantum Fourier transform by taking the log of the corresponding unitary using qiskit and scipy. I don't find a hermitian matrix. Why? import numpy as np from qiskit.circuit.library import QFT from scipy.linalg import logm, norm from qiskit.quantum_info import Operator circuit = QFT(num_qubits=4, do_swaps=True) op = Operator(circuit) U = op.data H = 1j*logm(U) print(norm(U@U.T.conj()-np.identity(2**4))) #check if U is unitary print(norm(H-H.T.conj())) #check if H is hermitian Note that I find U to be unitary and that there is no issue when do_swaps=False. Answer: This is due to the fact that if $U$ is unitary with negative eigenvalues (which is the case here), its logarithm is not uniquely defined. Note that the fact that $U$ is unitary only ensures that there is some hermitian matrix $H$ such that: $$U=\exp(\mathrm{i}H).$$ It does not, however, ensure that every matrix $H$ verifying the previous equation is Hermitian. There is, however, in this case, a method to obtain such an $H$. According to this answer, the eigenvalues of a $QFT$ matrix are $\pm1$ and $\pm\mathrm{i}$. Let $Q_n$ be the $QFT$ matrix on $n$ qubits. We know that $Q_n$ can be written as: $$Q_n = VDV^\dagger$$ with $D$ being a diagonal matrix whose entries belong to the set $\{1;-1;\mathrm{i},-\mathrm{i}\}$. We know that the matrix: $$L=V\log(D)V^\dagger$$ is a logarithm of $Q_n$. Computing the logarithm of a diagonal matrix is easily done by taking the $\log$ of its entries. As such, we will map $1$ to $0$, $-1$ to $\mathrm{i}\pi$ (or $-\mathrm{i}\pi$, it does not matter), $\mathrm{i}$ to $\mathrm{i}\frac\pi2$ and $-\mathrm{i}$ to $-\mathrm{i}\frac\pi2$. Finally, we compute $H$ with: $$H=-\mathrm{i}L.$$ Since we know that $L$ is a logarithm of $Q_n$, it is easy to see that $H$ satisfies the desired equation $Q_n=\exp(\mathrm{i}H)$. We thus simply have to prove that it is Hermitian. 
We have: $$H^\dagger=\mathrm{i}L^\dagger=\mathrm{i}V\log(D)^\dagger V^\dagger=\mathrm{i}V\overline{\log(D)}V^\dagger.$$ Thus, $H=H^\dagger$ holds if and only if $\mathrm{i}\overline{\log(D)}=-\mathrm{i}\log(D)$, which is true if and only if every element of $\log(D)$, which is a diagonal matrix, can be written as $\alpha\mathrm{i},\alpha\in\mathbb{R}$. Since this is the case, we have found one matrix $H$ which is Hermitian and which satisfies $Q_n=\exp(\mathrm{i}H)$.
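This eigendecomposition recipe is straightforward to implement numerically. A minimal sketch using the complex Schur form (which, for a normal matrix such as a unitary, gives the diagonalisation $U = V D V^\dagger$); the 2x2 example unitary with a $-1$ eigenvalue is just an illustration, not the QFT itself:

```python
import numpy as np
from scipy.linalg import schur, expm

def hermitian_log(U):
    """Return a Hermitian H with U = expm(1j*H), for unitary U."""
    # Complex Schur form: for a unitary (hence normal) matrix,
    # T is diagonal and V is unitary, so U = V @ diag(d) @ V^dagger.
    T, V = schur(np.asarray(U, dtype=complex), output='complex')
    d = np.diag(T)                          # unit-modulus eigenvalues
    # The log of each unit-modulus eigenvalue is purely imaginary
    # (i * phase), so L = V log(D) V^dagger is skew-Hermitian and
    # H = -i L is Hermitian.
    L = V @ np.diag(np.log(d)) @ V.conj().T
    return -1j * L

U = np.array([[0, 1], [1, 0]], dtype=complex)   # eigenvalues +1 and -1
H = hermitian_log(U)
assert np.allclose(H, H.conj().T)               # H is Hermitian
assert np.allclose(expm(1j * H), U)             # and reproduces U
```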
{ "domain": "quantumcomputing.stackexchange", "id": 3281, "tags": "qiskit, quantum-fourier-transform, linear-algebra" }
Does a massive spherical shell expand the time inside itself?
Question: Considering that the gravitational field of a spherical shell vanishes in its interior, can I consider that even then the time inside it is dilated with respect to a distant clock? Imagine the following situation: A massive spherical shell located in the cosmos, away from everything. Inside it is a watch (A). On the outside of the shell is a second watch (B). Far from this shell is a third watch (C). These clocks were initially synchronized, but the second clock (B) already shows a significant delay compared to the third clock (C). I would like to know the time that A indicates. I think the watch (A) inside the spherical shell is slow and will always indicate the same time as the second watch (B), because the gravitational potential inside is uniform, with the same value as near the external surface. I would like to know: Time A = Time B, or Time A = Time C? Answer: The solution to your question is time A = time B < time C. The reason is that since there is no mass inside the hollow region, the Schwarzschild radius there is zero. The metric in the hollow region is flat Minkowski space. Time still ticks differently inside the shell than outside it, because the rate of a clock is set by the gravitational potential $\Phi$, defined such that $\Phi \to 0$ as $r \to \infty$: $$ d \tau = \sqrt{ 1 + \frac{2 \Phi}{c^2}} dt. $$ The time dilation formula is the same everywhere inside the shell, since $\Phi$ is constant there. You can understand why time ticks slower inside the shell by noting that when you send a photon from inside the shell, it will be redshifted on its way out. So although the spacetime is flat inside the shell, time still ticks slower, because the clock rate depends on the gravitational potential. And that is not zero inside the shell.
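To put a number on it: inside the shell the potential is constant at its surface value, $\Phi = -GM/R$, so the dilation factor follows directly from the formula above. A quick sketch with made-up shell parameters (the mass and radius are arbitrary illustrative values, here one Earth mass and one Earth radius):

```python
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8          # speed of light, m/s

M = 5.97e24          # hypothetical shell mass (about one Earth mass), kg
R = 6.37e6           # hypothetical shell radius, m

phi = -G * M / R                      # potential everywhere inside the shell
rate = (1 + 2 * phi / c**2) ** 0.5    # d(tau)/dt for clocks A and B

# rate is slightly below 1: A and B both run slow relative to the
# distant clock C, by the same factor.
print(rate)
```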
{ "domain": "physics.stackexchange", "id": 50196, "tags": "general-relativity, gravity, black-holes, event-horizon" }
How is the dynamic Casimir effect different from normal radiation from accelerating charges?
Question: The Casimir effect involves two parallel conducting plates very close to each other. The static Casimir effect consists of an attractive force between the two plates, explained by the spatial mismatch in modes inside and outside the plates, resulting in a net energy gradient pointing away from the middle of the plates and hence a force inward. In the dynamic Casimir effect, the plates accelerate to speeds $v$ near the speed of light. This introduces a mismatch in time, "the plate moves faster than the virtual particles so that these become real", resulting in emission of light. Now, I really don't like the notion of virtual particles because they are not real, they are just artifacts of the theory. I always try to explain the phenomenon without virtual particles. So, what is the underlying cause of the dynamic Casimir effect? Can I not just think of it as accelerating charges emitting radiation? Answer: I gave myself the answer that there is essentially no difference. Classically, one would expect the vacuum to have a zero electric field, so there should not be any radiation emitted as a result of the accelerated plate. But because of quantum fluctuations, there is still some coupling, and radiation ("real photons") is emitted.
{ "domain": "physics.stackexchange", "id": 50864, "tags": "relativity, casimir-effect" }
CLI to bump package.json version and add git tags
Question: I'm writing a command line app in node.js. At this stage, the app does the following: Get the version from package.json Update the package.json with a new version depending on the command line argument Commit the changes Tag the commit Push commits and tags to the remote Some comments: I'm really new to async/await and promises so I may be using those incorrectly in my code. A lot of my code is wrapped in try-catch blocks which I understand is not supposed to be the case with async/await or Promises. I'm not sure how to fix this. There's a lot of duplication with the spinner initialisation code. I am aware the ora has an ora.promise method as well. I couldn't get this to work. Another example of duplication is with the process.exit(1) line within each catch block. Is this really needed or are there better ways to achieve this? Do I really need to use the async IIFE at the end or is there a better way to do this? I would also like feedback on whether I've divided my code into functions adequately. I haven't added much error checking yet. For example, later, I'd like to check if a package.json exists, and if it doesn't, get the current version from git tags. How can I structure my code in such a way that I can easily add such functionality in the future? Any other feedback is also welcome. If you can address even some of the above, I would appreciate it. 
Usage: ship <major|premajor|minor|preminor|patch|prepatch|prerelease> (I named the script index.js, and created a symbolic link ship to it in /usr/local/bin) #!/usr/bin/env node 'use strict'; const fs = require('fs'); const semver = require('semver'); const readPkg = require('read-pkg'); const writePkg = require('write-pkg'); const git = require('simple-git/promise')(process.cwd()); const ora = require('ora'); const chalk = require('chalk'); const env = process.env; const validArgs = [ 'major', 'premajor', 'minor', 'preminor', 'patch', 'prepatch', 'prerelease' ]; const argv = require('yargs').argv; const arg = argv._[0]; let spinner; if (arg === undefined) { console.error(chalk.red(`Missing argument.`)); console.error(chalk.red(`Should be one of ${validArgs.join(', ')}.`)); process.exit(1); } if (!validArgs.includes(arg)) { console.error(chalk.red(`Invalid argument ${arg}`)); process.exit(1); } async function nextVersion() { let current; try { const data = await readPkg(); current = data.version; } catch(err) { process.exit(1); // todo: get version from git tags } return semver.inc(current, arg); } async function updatePkg(version) { spinner = ora('Writing package.json').start(); try { const data = await readPkg(); delete data._id; data.version = version; await writePkg(data); spinner.succeed(); } catch (err) { spinner.fail(); process.exit(1); } } async function commit(msg) { spinner = ora('Commiting changes').start(); try { await git.add('package.json'); await git.commit(msg); spinner.succeed(); } catch (err) { spinner.fail(); process.exit(1); } } async function makeTags(name) { spinner = ora('Adding tag').start(); try { await git.addTag(name); spinner.succeed(); } catch (err) { spinner.fail(); process.exit(1); } } async function push() { spinner = ora('Pushing changes').start(); try { // both of these may run together await git.push(); await git.pushTags(); spinner.succeed(); } catch (err) { spinner.fail(); process.exit(1); } } (async () => { const next = await 
nextVersion(); console.log(); // gap between input and spinners await updatePkg(next); await commit(next); await makeTags(next); await push(); })(); Answer: (Disclaimer: I'm not a JavaScript expert and I don't have much experience with async/await either, so take my advices with a grain of salt. I hope you will find it helpful anyway.) Running async calls in parallel If you assume these will run in parallel, they won't: // both of these may run together await git.push(); await git.pushTags(); To make them run in parallel, you could use Promise.all: await Promise.all([git.push(), git.pushTags()]); Or save the result of the calls without await, and await on the results: const pushResult = git.push(); const pushTagsResult = git.pushTags(); await pushResult; await pushTagsResult; Not too much async and spinners? Most of the operations are fast, such as updating the file, git add, and git commit, and I don't see a good reason to make them run in parallel and with a progress indicator. The only operation that may be slow and where the progress indicator makes sense is the git push. Since the main logic is not packaged as reusable code that can be imported and called from another module, wrapping it in async (in the IIFE at the end) seems unnecessary too. If the updating of the package was separated from the command line parsing, then it could make sense to make it possible to run async, if you had a use case for that. Without a use case, I think it's over-engineering. Addressing your questions A lot of my code is wrapped in try-catch blocks which I understand is not supposed to be the case with async/await or Promises. I'm not sure how to fix this. I don't see anything wrong with your try-catch blocks (but see the disclaimer at the top). There's a lot of duplication with the spinner initialisation code. I am aware the ora has an ora.promise method as well. I couldn't get this to work. As mentioned earlier, I think the spinners are overkill. 
However, even if you remove them from most places, much duplication will still remain. I don't think you need the granularity of catching exceptions at every step. You could wrap those steps in a single try-catch block, and that will reduce the duplication and boilerplate code. I haven't added much error checking yet. For example, later, I'd like to check if a package.json exists, and if it doesn't, get the current version from git tags. How can I structure my code in such a way that I can easily add such functionality in the future? The way you did it so far seems fine to me. If later you want to get tags from Git, you can add a function for that and call it from nextVersion as needed.
{ "domain": "codereview.stackexchange", "id": 31240, "tags": "javascript, node.js, console, async-await" }
generate_dynamic_reconfigure_options stopped generating .h files
Question: So far I had no problems building our project. I'm sure I built it successfully containing several _gencfg targets. Now, suddenly, with no obvious reason, catkin_make fails on all _gencfg targets. The only thing that's changed is I did apt-get upgrade, but I don't think it touched any ROS-related packages. We're using Indigo on 14.04. The .cfg files have the executable bit set. After running catkin_make and receiving the error, I look into /home/peckama2/tradr-git/tradr-ws/devel/include/optris_camera/, and the .h file has not been generated. Here is the compile log (shortened): $ catkin_make Base path: /home/peckama2/tradr-git/tradr-ws Source space: /home/peckama2/tradr-git/tradr-ws/src Build space: /home/peckama2/tradr-git/tradr-ws/build Devel space: /home/peckama2/tradr-git/tradr-ws/devel Install space: /home/peckama2/tradr-git/tradr-ws/install #### #### Running command: "make cmake_check_build_system" in "/home/peckama2/tradr-git/tradr-ws/build" #### #### #### Running command: "make -j4 -l4" in "/home/peckama2/tradr-git/tradr-ws/build" #### ... shortened ... [ 1%] [ 1%] Built target std_srvs_generate_messages_py Generating dynamic reconfigure files from cfg/camera_nodelet_params.cfg: /home/peckama2/tradr-git/tradr-ws/devel/include/optris_camera/camera_nodelet_paramsConfig.h /home/peckama2/tradr-git/tradr-ws/devel/lib/python2.7/dist-packages/optris_camera/cfg/camera_nodelet_paramsConfig.py : Directory or file does not exist make[2]: *** [/home/peckama2/tradr-git/tradr-ws/devel/include/optris_camera/camera_nodelet_paramsConfig.h] Error 127 make[1]: *** [tradr-payload/optris_camera/CMakeFiles/optris_camera_gencfg.dir/all] Error 2 make[1]: *** Waiting for unfinished jobs.... 
Built target std_srvs_generate_messages_lisp [ 1%] [ 1%] Built target std_srvs_generate_messages_cpp Built target _optris_camera_generate_messages_check_deps_AutoFlag make: *** [all] Error 2 Invoking "make" failed The .cfg file: #!/usr/bin/env python PACKAGE='optris_camera' #import roslib #roslib.load_manifest(PACKAGE) from dynamic_reconfigure.parameter_generator_catkin import * gen = ParameterGenerator() # Name Type Level Description Default Min Max gen.add("frame_id", str_t, 0, "Camera frame ID.", "thermo_camera") gen.add("framerate", int_t, 0, "Camera frame rate. [Hz]", 10, 1, 120) gen.add("recali_time", int_t, 0, "Time for recalibration of camera. [s]", 0, 0, 120) gen.add("timestamp_offset", double_t, 0, "Camera timestamp offset. [s]", 0, -2, 2) gen.add("use_ros_time", bool_t, 0, "Switch between ros time and time monotonic.", False) exit(gen.generate(PACKAGE, "optris_camera", "camera_nodelet_params")) And an excerpt from CMakeLists.txt (comments and some unnecessary code removed): cmake_minimum_required(VERSION 2.8.3) project(optris_camera) find_package(catkin REQUIRED COMPONENTS roscpp std_msgs std_srvs sensor_msgs image_transport message_generation camera_info_manager nodelet dynamic_reconfigure ) generate_dynamic_reconfigure_options( cfg/camera_nodelet_params.cfg) catkin_package( INCLUDE_DIRS include LIBRARIES optris_camera_nodelets CATKIN_DEPENDS roscpp std_msgs std_srvs sensor_msgs image_transport message_runtime camera_info_manager nodelet DEPENDS libudev-dev Boost ) include_directories(include) add_executable(thermo_node src/thermo_node.cpp src/camera.cpp src/error.cpp) add_executable(thermo_colorconvert_node src/thermo_colorconvert_node.cpp) add_library(optris_camera_nodelets src/thermo_processing_nodelet.cpp src/error.cpp) ## Specify libraries to link a library or executable target against target_link_libraries( thermo_node Imager ImageProcessing udev ${catkin_LIBRARIES} ) add_dependencies(thermo_node ${PROJECT_NAME}_gencfg 
${PROJECT_NAME}_generate_messages_cpp) target_link_libraries( thermo_colorconvert_node PIImager PIImageProcessing ${catkin_LIBRARIES} ) target_link_libraries( optris_camera_nodelets ${catkin_LIBRARIES} ) Thanks for any help. Originally posted by peci1 on ROS Answers with karma: 1366 on 2014-10-21 Post score: 0 Original comments Comment by ahendrix on 2014-10-21: Is it possible that it's complaining about python itself being missing for some reason? What do you get if you run /usr/bin/env python ? Comment by peci1 on 2014-10-21: That was also my try, but when I run the command after the shebang, I get a normal python console. Comment by Dirk Thomas on 2014-10-21: Please try it with a minimal CMakeLists.txt file: project(optris_camera) find_package(catkin REQUIRED COMPONENTS dynamic_reconfigure) generate_dynamic_reconfigure_options(cfg/camera_nodelet_params.cfg) catkin_package() This works fine for me. Comment by peci1 on 2014-10-21: Same result :( What could go wrong? Comment by Dirk Thomas on 2014-10-21: Please post your version number of dynamic_reconfigure. Also consider putting your minimal package in a GitHub repo to share - maybe something is wrong which is not obvious from your posted code. Comment by peci1 on 2014-10-22: ros-indigo-dynamic-reconfigure 1.5.37-0trusty-20140905-0110-+0000 Comment by peci1 on 2014-10-22: I've created a whole new catkin workspace to test this thing. I've properly sourced its setup.bash, and the problem is the same. Here is the minimal workspace with the buildfiles included: https://app.box.com/s/af4foufx341jfk6rlzjz Answer: Got it! The problem was CRLF (Windows) line endings that somehow propagated to my cfg files. I use ROS virtualized under Windows and the sources dir is a shared folder, so maybe Windows did something odd to the files. And the message about the file or dir not existing was really about the interpreter - the system tried to launch /usr/bin/env python<CR>, which obviously does not exist. Thanks to all for your help.
Originally posted by peci1 with karma: 1366 on 2014-10-23 This answer was ACCEPTED on the original site Post score: 0
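If you hit the same symptom, the fix can be scripted. A minimal sketch that normalises a file's line endings in place (the demo writes a throwaway file standing in for cfg/camera_nodelet_params.cfg):

```python
import os
import tempfile

def fix_crlf(path):
    """Rewrite a file with Unix LF line endings, in place.

    Windows CRLF endings break the "#!/usr/bin/env python" shebang:
    the kernel tries to exec "python\r", which does not exist.
    """
    with open(path, "rb") as f:
        data = f.read()
    with open(path, "wb") as f:
        f.write(data.replace(b"\r\n", b"\n"))

# demo on a throwaway file with CRLF endings
path = os.path.join(tempfile.mkdtemp(), "camera_nodelet_params.cfg")
with open(path, "wb") as f:
    f.write(b"#!/usr/bin/env python\r\nprint('ok')\r\n")

fix_crlf(path)
with open(path, "rb") as f:
    assert b"\r" not in f.read()   # shebang now ends in a plain LF
```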
{ "domain": "robotics.stackexchange", "id": 19807, "tags": "catkin, dynamic-reconfigure" }
How does a wire loop know about magnetic flux change inside it, where there is no wire?
Question: I've always thought that a magnetic field must cross the wire to induce any change in it. But now I've learned that it actually doesn't have to even touch it, since the induced voltage in it is only dependent on the rate of change of magnetic field inside the loop of wire (it doesn't matter if that field is actually changing inside the wires of the loop). How is it that a magnetic field that isn't even touching the wires induces a voltage inside of them? How does the wire "know" what the amplitude of the magnetic field is in a space where there is no wire? This is beyond bizarre. Answer: The wire never 'knows' about anything outside of the wire itself. What really happens is that the changing magnetic flux induces an electric field locally, by $$\nabla \times \mathbf{E} = - \frac{\partial \mathbf{B}}{\partial t}.$$ Next, this changing electric field induces a magnetic field, by $$\nabla \times \mathbf{B} = \frac{1}{c^2}\frac{\partial \mathbf{E}}{\partial t}.$$ If the radius of the loop is $R$, then after time $R/c$, these induced fields propagate out to the wire, which feels the induced electric field. That's where the electromotive force comes from. What I've just described is usually called 'light'. In situations where the wire is relatively small, you can ignore the light travel time and just use the equation $$\mathcal{E} = - \frac{d \Phi_B}{dt}.$$ However, the underlying mechanism is still the light-speed propagation shown above.
{ "domain": "physics.stackexchange", "id": 37348, "tags": "electromagnetism, inductance" }
ROS Answers SE migration: UR10e status
Question: What I want is to get the status of the UR10e: when it is in some kind of stop, for instance a protective stop, or when the emergency button is pressed. I currently use the ur_modern_driver and subscribe to /robot_status and get: header: seq: 6566 stamp: secs: 0 nsecs: 0 frame_id: '' mode: val: 2 e_stopped: val: 0 drives_powered: val: 1 motion_possible: val: 1 in_motion: val: -1 in_error: val: 0 error_code: 0 but here I can only see if it is in error, not whether it is in a protective stop or the emergency button was pressed. Is there another topic where this status is shown? Originally posted by JoeryTemmink on ROS Answers with karma: 18 on 2019-04-08 Post score: 0 Original comments Comment by gvdhoorn on 2019-04-08: Could you please reference ros-industrial/ur_modern_driver#285? It's almost a duplicate and provides more context. Answer: I can only see if it is in error, not whether it is in a protective stop or the emergency button was pressed. Is there another topic where this status is shown? That is because robot_status carries industrial_msgs/RobotStatus messages, which are generic. "Protective stop" is UR nomenclature, not generic. What you're looking for is called RobotMode. It's the data packet broadcast by the UR controller that contains fields like is_emergency_stopped and is_protective_stopped. In the ur_modern_driver, that data is published in ur_msgs/RobotModeDataMsg messages (on the /ur_driver/robot_mode_state topic): uint64 timestamp bool is_robot_connected bool is_real_robot_enabled bool is_power_on_robot bool is_emergency_stopped bool is_protective_stopped bool is_program_running bool is_program_paused Please note that by integrating these topics (and messages) into an application, you embed an (implicit) assumption into your code that makes it less reusable / reconfigurable (namely: you're checking for the status of a Universal Robot). Edit: But if I read this correctly, there is no topic where the RobotMode data is published.
As noted in the readme (in the Improvements section), the ur_modern_driver publishes RobotModeDataMsgs on the /ur_driver/robot_mode_state topic. Originally posted by gvdhoorn with karma: 86574 on 2019-04-08 This answer was ACCEPTED on the original site Post score: 1 Original comments Comment by JoeryTemmink on 2019-04-08: Yes, that is what I mean! But if I read this correctly, there is no topic where the RobotMode data is published. Comment by JoeryTemmink on 2019-04-08: Ah, now I know why I couldn't find it! The RobotMode is on the master branch of the ur_modern_driver (at GitHub). But I use kinetic-devel and there is no RobotMode. I use kinetic-devel instead of master because when I use master with the robot it stops at every waypoint, and with kinetic-devel it will go with a smooth motion. Comment by gvdhoorn on 2019-04-08: The RobotMode is on the master branch of the ur_modern_driver (at GitHub). But I use kinetic-devel and there is no RobotMode. No, it should be there on kinetic-devel as well. But you need to be running a recent version. If you've checked out ros-industrial/ur_modern_driver before April the 4th then you need to fetch the latest version, as it was only added recently. If you can show us the output of git rev-parse HEAD | cut -c1-8 (after cding to the ur_modern_driver directory in your workspace) then we can see whether your clone is up-to-date. Comment by JoeryTemmink on 2019-04-09: Thanks, that was the problem: I had a version from before the 4th of April.
{ "domain": "robotics.stackexchange", "id": 32837, "tags": "ros, ur-modern-driver, ros-kinetic" }
Why must quantum logic gates be linear operators?
Question: Why must quantum logic gates be linear operators? I mean, is it just a consequence of quantum mechanics postulates? Answer: Suppose you pick a state $|\psi_i\rangle$ at random with probability $p_i$ and send it through a logic gate denoted by $G$. This random state is written as a density matrix $\rho = \sum_i p_i |\psi_i\rangle\!\langle\psi_i|$. Denote $G(|\psi_i\rangle\!\langle\psi_i|)$ as the result of applying $G$ to a particular state. Now if the input is $|\psi_i\rangle\!\langle\psi_i|$ with probability $p_i$, then output is $G(|\psi_i\rangle\!\langle\psi_i|)$ also with probability $p_i$. Thus, the output state must be $\sum_i p_i G(|\psi_i\rangle\!\langle\psi_i|)$ and therefore $$G(\rho) \;=\; G\left(\sum_i p_i |\psi_i\rangle\!\langle\psi_i|\right) \;=\; \sum_i p_i G\Bigl(|\psi_i\rangle\!\langle\psi_i|\Bigr)\,.$$ This can be extended to random mixed input $\rho_i$ in place of $|\psi_i\rangle$, leading us to conclude that $G(\sum_i p_i \rho_i) = \sum_i p_i G(\rho_i)$, which is precisely the definition of linearity.
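The linearity property is easy to check numerically for any concrete gate; a small sketch using the Hadamard gate as an example (any unitary conjugation $\rho \mapsto U\rho U^\dagger$ is linear in $\rho$, so the two sides agree to machine precision):

```python
import numpy as np

H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)  # Hadamard gate

def G(rho):
    """Action of the gate on a density matrix."""
    return H @ rho @ H.conj().T

rho1 = np.array([[1, 0], [0, 0]], dtype=complex)          # |0><0|
rho2 = np.array([[0.5, 0.5], [0.5, 0.5]], dtype=complex)  # |+><+|
p = 0.3

# G applied to the mixture equals the mixture of the outputs
lhs = G(p * rho1 + (1 - p) * rho2)
rhs = p * G(rho1) + (1 - p) * G(rho2)
assert np.allclose(lhs, rhs)
```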
{ "domain": "physics.stackexchange", "id": 6011, "tags": "quantum-mechanics, quantum-information, operators, hilbert-space" }
Accessible system where specific heat and thermal conductivity vary in different ways?
Question: I'm teaching undergrad thermo this semester and to my surprise several students are having trouble conceptualizing heat capacity and thermal conductivity as different properties; they can apply them in equations just fine, but they are baffled as to why they are treated as distinct properties. I'm going to try a different verbal tack today, but I would love to be able to give them an example system where both quantities can be calculated from a microscopic model and they show different functional dependence. Alas, this is not a question that I have asked myself and I don't have an answer. For the ideal gas I keep finding expressions like $$ \kappa = \left( \frac{n\bar{v} \lambda}{s N_A} \right) c_v\,,$$ but I haven't followed the derivation closely enough to know if it continues to hold once rotational and vibrational modes are excited. In my ideal world the system would admit a description that 3rd year undergrads could follow in detail. I don't find an answer to this question in Are specific heat and thermal conductivity related?. Answer: Description of the difference I will come at this from my perspective, where I can't possibly imagine how thermal conductivity and specific heat capacity are not distinct properties. Maybe it will help. Specific heat capacity is a measure of how much energy can be stored in a system. We know it is related to the degrees of freedom of the molecules, and it is really a property of the molecules. Thermal conductivity is a measure of how effectively collisions transfer energy (likewise, viscosity measures how effectively collisions transfer momentum, and the diffusion coefficient, mass). This makes thermal conductivity a property of the fluid (as opposed to a property of the molecules). Derivation of $\kappa$ I will outline the derivation of the expression you have above as it is given in the intro chapter of Introduction to Physical Gas Dynamics.
The derivation is set up for any generic property (mass, momentum, energy, etc): Consider the 1D variation of a mean molecular quantity $\tilde{a}(x)$ which varies only in the $x$ direction. When a molecule moves across the plane given by $x_0$, it carries with it the value from the location of its last collision, $\tilde{a}(x_0 - \delta x)$ when moving from the left of the plane to the right, and a value of $\tilde{a}(x_0 + \delta x)$ when moving from the right to the left. The distance from the plane, $\delta x$, is very close to the mean free path $\lambda$, such that $\delta x = \alpha_\tilde{a} \lambda; \alpha_\tilde{a} \approx 1$. They further assume that to a first-order approximation, the number of molecules crossing the plane per unit area per unit time from either side is proportional to $nC$ where $n$ is the molecular number density and $C$ is the average speed of random molecular motion, both evaluated at $x = x_0$. This means that the net amount of $\tilde{a}$ is: $$ \Lambda_\tilde{a} = \eta nC\left[\tilde{a}(x_0 - \alpha_\tilde{a} \lambda) - \tilde{a}(x_0 + \alpha_\tilde{a} \lambda)\right]$$ where $\eta$ is a constant of proportionality. Then a Taylor series expansion where only first-order terms are retained is: $$\Lambda_\tilde{a} = -2\eta \alpha_\tilde{a} n C \lambda \frac{d \tilde{a}}{d x}$$ Okay, with all that out of the way, they then develop expressions for the diffusion of momentum and mass by comparing the expressions to the classical expressions. For thermal conductivity, the quantity being diffused is energy (heat) and they consider the internal structure: $\tilde{a} = \tilde{e} = (\xi/2)kT = (\xi/2)m RT = m c_v T$. This gives: $$ q = -\kappa \frac{d T}{d x} = -2\eta\alpha_T n C m c_v \frac{dT}{dx} $$ And equating the two coefficients: $$ \kappa = \beta \rho C \lambda c_v $$ where $\beta$ is a new constant of proportionality. This is effectively what your expression is.
tl;dr Heat capacity is a measure of how a molecule can store energy. Thermal conductivity is a measure of how efficiently energy is transferred during collisions. Heat capacity is a property of the molecule (it depends only on the molecular structure). Thermal conductivity is a property of the fluid (it depends on how the molecules interact with one another during collisions). The expression you have shows that $\kappa \propto c_v$ so they actually vary pretty much the same way. And it is based on the internal structure, so it holds as other modes get excited. Simple analogy And maybe a simple analogy would work: You have a wallet that has some slots to hold money. I have one also. Maybe my wallet has more slots than yours so I can store more money. This is a specific financial capacity. You know I am just rolling in money cause I chose engineering as a career and times might be a little tough for you since you picked physics. So every time we pass in the hallway, you ask to borrow "some money," whatever I can spare. When I hand you money, I can hand you a whole bunch of bills, or only a few bills. If I hand you a bunch, my wallet slots will empty pretty quickly. If I hand you a little, it will take longer. As you take my money in each passing, your wallet slots will fill up (and maybe become totally full, and then you won't ask for money anymore). This is financial conductivity. Clearly how efficiently I can hand you money depends on how much money I have and how many empty slots you have in your wallet. So this financial conductivity depends on the money itself and your ability to store what I give you. Now, if our wallets were molecules and the slots our energy modes (and so our heat capacity), then our money would be energy. Every time we cross paths, I hand you some energy and you start filling up your modes while I deplete mine. How rapidly you fill and I empty at each crossing is the thermal conductivity. 
Hopefully that story makes it clear that while they show similar behavior, they are two very different properties.
{ "domain": "physics.stackexchange", "id": 19760, "tags": "thermodynamics, kinetic-theory" }
Corrosion of a galvanic couple made with silver and gold
Question: If a bangle is made out of gold and silver, connected with each other, would there be corrosion happening? If so, can it be explained using the galvanic series? Also, do those metals undergo oxidation under normal conditions? Answer: As you state, though silver is not very reactive, silver jewelry would corrode more rapidly when in contact with gold and a bit of sweat as electrolyte. Though the difference in galvanic-series potential between pure gold and pure silver is only about 0.15 V, and the couple is considered acceptably corrosion resistant, commercial jewelry contains copper and other metals in its alloys. There is clearly evident corrosion in photos of copper/silver jewelry. According to the Victoria and Albert Museum staff, "All metals, with the exception of 24 carat gold, suffer the effects of corrosion." However, in a bangle, physical abrasion is likely to cause more damage and loss of material than galvanic corrosion. Wear the jewelry and enjoy it, but it will not last forever.
{ "domain": "chemistry.stackexchange", "id": 3613, "tags": "electrochemistry, metal" }
What is a sticky bit in computer architecture?
Question: I am reading about a counter implementation in a RISC architecture. The specification reads, "Sticky overflow bit is set when the counter wraps through zero." I can infer that the overflow bit is set when the counter reaches its maximum value (all 1's) and another increment makes it 0 (all 0's). What is the meaning of the sticky overflow bit? Answer: A sticky overflow bit means that the next operation that does not overflow (but would set the bit if it did) will not clear the bit. The value is sticky/persistent and must be explicitly cleared to detect newer overflows.
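The difference can be sketched in a few lines; the names here are illustrative, not taken from any particular ISA:

```python
# Sketch of sticky vs. ordinary overflow flags for an 8-bit up-counter.
class Counter:
    def __init__(self):
        self.count = 0
        self.overflow = False         # reflects only the most recent increment
        self.sticky_overflow = False  # set on wrap, stays set until cleared

    def increment(self):
        wrapped = self.count == 0xFF
        self.count = (self.count + 1) & 0xFF   # wraps 0xFF -> 0x00
        self.overflow = wrapped                # cleared by non-wrapping increments
        if wrapped:
            self.sticky_overflow = True        # only explicit clearing resets this

    def clear_sticky(self):
        self.sticky_overflow = False
```

The sticky flag lets software poll for "did any overflow happen since I last checked?" instead of having to catch the exact increment that wrapped.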
{ "domain": "cs.stackexchange", "id": 14740, "tags": "computer-architecture" }
Is pork poisonous?
Question: Besides religious prohibition, there are several non-religious arguments against eating pork. A few of these are: Pigs and swine are so poisonous that you can hardly kill them with strychnine or other poisons. Swine and pigs have over a dozen parasites within them, e.g. tapeworms, flukes, worms, and trichinae. There is no safe temperature at which pork can be cooked to ensure that they will be killed. The swine carries about 30 diseases which can be easily passed to humans. I would like to hear some scholarly verification regarding these points. A simple Yes-No-Yes will be enough; elaboration is welcome, though. Thank you. Answer: Pigs and swine are so poisonous that you can hardly kill them with strychnine or other poisons. This is a non-sequitur. An animal being poisonous does not imply that it resists poison, nor is the reverse true. In any case, to the extent of my knowledge pigs do not produce any specific poison. Obviously, if you could provide a more specific claim, this could be tested a bit more in depth. The second part, instead, is plainly false. You can definitely kill a pig with strychnine. Both the Merck Veterinary Manual and Diseases of Swine, 9th edition, report an oral lethal dose of 0.5-1 mg/kg. For comparison, the CDC reports a probable lethal oral dose of 1.5-2 mg/kg for humans. Swine and pigs have over a dozen parasites within them, e.g. tapeworms, flukes, worms, and trichinae. Sure, pigs do have parasites. Chapter 55 of Diseases of Swine is specifically on parasitic infections. Now, do ALL pigs have them? The answer is no: some parasites are more common than others, and the prevalence of parasites depends on the health status of the farm. For instance: Northern Europe (Roepstorff et al. 1998) In Denmark (DK), Finland (FIN), Iceland (I), Norway (N), and Sweden (S), 516 swine herds were randomly selected in 1986-1988.
Individual faecal analyses (mean: 27.9 per herd) from eight age categories of swine showed that Ascaris suum, Oesophagostomum spp., Isospora suis, and Eimeria spp. were common, while Trichuris suis and Strongyloides ransomi-like eggs occurred sporadically. Large fatteners and gilts were most frequently infected with A. suum with maximum prevalences of 25-35% in DK, N and S, 13% in I and 5% in FIN. With the exception of the remarkably low A. suum prevalence rates in FIN, no clear national differences were observed. Oesophagostomum spp. were most prevalent in adult pigs in the southern regions (21-43% in DK and southern S), less common in the northern regions (4-17% adult pigs infected), and not recorded in I. I. suis was common in piglets in DK, I, and S (20-32%), while < 1% and 5% were infected in N and FIN, respectively. Eimeria spp. had the highest prevalences in adult pigs (max. 9%) without clear geographical differences. I. suis and Eimeria spp. were recorded for the first time in I, and I. suis for the first time in N. USA (Gamble et al. 1999) To determine Trichinella infection in a selected group of farm raised pigs, 4078 pigs from 156 farms in New England and New Jersey, employing various management styles, were selected based on feed type (grain, regulated waste, non-regulated waste). [...] A total of 15 seropositive pigs on 10 farms were identified, representing a prevalence rate of 0.37% and a herd prevalence rate of 6.4%. A total of nine seropositive pigs and one suspect pig from six farms were tested by digestion; four pigs (representing three farms) harbored Trichinella larvae at densities of 0.003-0.021 larvae per gram (LPG) of tissue; no larvae were found in six pigs. China (Weng et al. 2005): The prevalence of intestinal parasites was investigated in intensive pig farms in Guangdong Province, China between July 2000 and July 2002. 
Faecal samples from 3636 pigs (both sexes and five age groups) from 38 representative intensive pig farms employing different parasite control strategies were examined for the presence of helminth ova and protozoan oocysts, cysts and/or trophozoites using standard techniques. Of the 3636 pigs sampled, 209 (5.7%) were infected with Trichuris suis, 189 (5.2%) with Ascaris, 91 (2.5%) with Oesophagostomum spp., 905 (24.9%) with coccidia (Eimeria spp. and/or Isospora suis) and 1716 (47.2%) with Balantidium coli. These infected pigs were mainly from farms without a strategic anti-parasite treatment regime. However, note that Boes et al. 2000 reports higher percentages. The prevalence of helminths in pigs was investigated in five rural communities situated on the embankment of Dongting Lake in Zhiyang County, Hunan Province, People's Republic of China, in an area known to be endemic for Schistosoma japonicum. The helminth prevalences identified on the basis of faecal egg count analysis were: Oesophagostomum spp. (86.7%), Ascaris suum (36.7%), Metastrongylus spp. (25.8%), Strongyloides spp. (25.8%), Trichuris suis (15.8%), Globocephalus spp. (6.7%), Gnathostoma spp. (4.2%), Schistosoma japonicum (5.0%) and Fasciola spp. (1.3%). Kenya (Nganga et al. 2008): A total of 115 gastrointestinal tracts (GIT) from 61 growers and 54 adult pigs were examined between February 2005 and January 2006. Seventy eight (67.8%) had one or more helminth parasites, of which thirty six (31.3%) were mixed infection. Ten types of helminth parasites encountered in descending order of prevalence were, Oesophagostomum dentatum (39.1%), Trichuris suis (32.2%), Ascaris suum (28.7%), Oesophagostomum quadrispinulatum (14.8%), Trichostrongylus colubriformis (10.4%), Trichostrongylus axei (4.3%), Strongyloides ransomi (4.3%), Hyostrongylus rubidus (1.7%), Ascarops strongylina (1.7%) and Physocephalus sexalutus (0.9%). 
There is no safe temperature at which pork can be cooked to ensure that they will be killed False. Proper cooking, as well as freezing (but see below), is effective in killing worms. The CDC suggests: The best way to prevent trichinellosis is to cook meat to safe temperatures. A food thermometer should be used to measure the internal temperature of cooked meat. Do not sample meat until it is cooked. USDA recommends the following for meat preparation. For Whole Cuts of Meat (excluding poultry and wild game) Cook to at least 145° F (63° C) as measured with a food thermometer placed in the thickest part of the meat, then allow the meat to rest for three minutes before carving or consuming. For Ground Meat (excluding poultry and wild game) Cook to at least 160° F (71° C); ground meats do not require a rest time. Also Curing (salting), drying, smoking, or microwaving meat alone does not consistently kill infective worms; homemade jerky and sausage were the cause of many cases of trichinellosis reported to CDC in recent years. Freeze pork less than 6 inches thick for 20 days at 5°F (-15°C) to kill any worms. Freezing wild game meats, unlike freezing pork products, may not effectively kill all worms because some worm species that infect wild game animals are freeze-resistant. Clean meat grinders thoroughly after each use. The swine carries about 30 diseases which can be easily passed to humans Again, sure, pigs can carry diseases that can be passed to humans, but proper storing and cooking of meat is effective in getting rid of the great majority of bacteria. Finally, in 2012 Public Health England reported food poisoning caused by red meat as accounting for 17% of all food poisoning incidents, with pork accounting for 3%. By comparison, poultry accounted for 29% of food poisoning events (although people eat more poultry than red meat).
Similarly, the CDC report on the Attribution of Foodborne Illness (1998-2008) puts red meat at 12%, pork at 5.4%, and poultry at 9.8%.
{ "domain": "biology.stackexchange", "id": 2536, "tags": "toxicology, medicine" }
Understanding date of astronomical events
Question: I have a masters in chemistry but am pretty much a layman in astronomy. So, can you please explain to a novice like me this paragraph taken from the Wikipedia article on Makar Sankranti: There are two different systems to calculate the Makara Sankranti date: nirayana (without adjusting for precession of equinoxes, sidereal) and sayana (with adjustment, tropical). The January 14 date is based on the nirayana system, while the sayana system typically computes to about December 23, per most Siddhanta texts for Hindu calendars. Am I correct to understand that when the date was assigned in Hinduism some thousand (?) years back, the winter solstice used to be on 14th/15th January? Answer: Precession of the equinoxes means that the direction of the Earth's rotation axis is slowly changing. This also means that the dates of the seasons, measured against the fixed stars (sidereal), drift over the centuries. Nirayana calculates the date of Makar Sankranti relative to the fixed stars (sidereal), so its calendar date has drifted away from the solstice, while sayana calculates it relative to the equinoxes (tropical), so it stays tied to the astronomical event. In a nutshell: nirayana keeps the same position against the stars as the first Makar Sankranti, but sayana keeps the same event as the first Makar Sankranti (the winter solstice). Yes, the winter solstice was once on 14 January.
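A rough estimate makes the "thousand (?) years back" guess concrete. Using the well-known precession period of about 25,772 years (all other figures are round-number assumptions), the sidereal date of the solstice drifts by roughly one day every ~71 years, so a drift of ~24 days (from about December 21 to January 14) corresponds to:

```python
# Back-of-the-envelope: when did the winter solstice fall on 14 January?
precession_period_years = 25772          # period of the precession of equinoxes
days_per_year = 365.25
years_per_day_of_drift = precession_period_years / days_per_year   # ~71 years/day

drift_days = 24                          # ~Dec 21 to Jan 14 (assumed)
years_ago = drift_days * years_per_day_of_drift   # roughly 1700 years
```
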
{ "domain": "astronomy.stackexchange", "id": 5142, "tags": "equinox" }
Merge two models - Keras
Question: I was reading through many blogs and understood the relevance of, and scenarios for, merging two models. But how does one decide on the merge mode for two models, i.e. concat, sum, dot, etc.? For example, I am working on the task of automatic image captioning. So captions and images are two kinds of input that I need to handle and merge at a certain point, for the model to know which caption is for which image. I am learning the text representation and the image representation with two different network designs. Now, after learning representations for both inputs, how do I decide which way (concat, add, etc.) to use to join/merge the two representations? Answer: Generally, I'd say it's the same way as finding your "best" feature list, pre-processing methods and algorithm - you'd find most answers starting with "give it a try and compare", "check which combination gives you a better cross-validation score?" etc. More specifically, when there is no further documentation (and it's an open-source project) I usually take a look at the code for more intuition. In this case concat, dot, average and the like are implemented straightforwardly, just as they sound (concatenating data, averaging, etc.), but the choice does imply something about the amount of processing needed: some of the merging functions grow the input that needs to be processed (concat), while others preserve or reduce it like an aggregation function (add, average, min, max). Beyond that, the resulting tensor will behave a bit differently for different datasets. Please give it a try and let us know what merging combination worked best for you, and the dataset type :)
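To make the size difference between the merge modes concrete, here is a toy sketch using plain Python lists as stand-ins for the two learned representations (the feature values are made up):

```python
# Toy illustration of common merge modes on two 4-dimensional feature vectors.
img_feat = [0.2, 0.5, 0.1, 0.7]   # assumed image representation
txt_feat = [0.9, 0.3, 0.4, 0.6]   # assumed caption representation

concat  = img_feat + txt_feat                               # length 8: grows the input
add     = [a + b for a, b in zip(img_feat, txt_feat)]       # length 4: same size
average = [(a + b) / 2 for a, b in zip(img_feat, txt_feat)] # length 4: same size
dot     = sum(a * b for a, b in zip(img_feat, txt_feat))    # single scalar
```

Concatenation preserves both representations separately (more parameters downstream), while add/average force the two into a shared space and dot collapses them to a similarity score.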
{ "domain": "datascience.stackexchange", "id": 1762, "tags": "neural-network, deep-learning, keras" }
Pr2_controller for wheels robot
Question: Just wondering: I am looking to move the base of my wheeled robot (a Segway RMP) using the pr2_controller (in Gazebo). I have already looked at the tutorial a little, but I am still a bit confused. How do I incorporate the controller into my robot? Do I consider it as a plugin, or what? Or do I have to write a controller of my own? Some guidance would be extremely helpful to head me in the correct direction. Thank you all! Originally posted by Gazer on ROS Answers with karma: 146 on 2013-05-26 Post score: 0 Answer: You should place a transmission element in your model. You can then load a PR2 controller to control the joint. I am not sure if this works in SDF. In URDF you should incorporate such an element: <transmission type="SimpleTransmission" name="caster_front_left_trans"> <actuator name="caster_front_left_motor" /> <joint name="caster_front_left_joint" /> <mechanicalReduction>1</mechanicalReduction> </transmission> See the links in this thread for more info. Originally posted by davinci with karma: 2573 on 2013-05-26 This answer was ACCEPTED on the original site Post score: 0 Original comments Comment by Gazer on 2013-05-27: thank you. I am following the tutorial here: http://www.ros.org/wiki/pr2_controllers/Tutorials/Using%20the%20robot%20base%20controllers%20to%20drive%20the%20robot ............. so to use it, do I have to attach the controller to my sdf robot first? Comment by davinci on 2013-05-27: For your own robot, yes. Try to find the corresponding part in the urdf (or sdf) of the PR2 and copy that to your own robot. Comment by Gazer on 2013-05-29: sorry, I am still confused. Would you mind explaining a little about the purpose of the controller? I use the diff-drive plugin and got it moving in Gazebo. I have been following the tutorials and everything is working in sdf, and the website rarely mentions the PR2 controller.
{ "domain": "robotics.stackexchange", "id": 14307, "tags": "ros" }
Where can I find a list of dependencies?
Question: I don't know where to find, or which dependencies to use, when I create a package. Is it possible to create a dependency? For example, in the tutorial I see rospy, roscpp and std_msgs. Where can I find more, and how do I know which dependencies to use in my projects? Is there some description of the dependencies? ROS Melodic, Ubuntu 18.04. Originally posted by Jose-Araujo on ROS Answers with karma: 5 on 2019-07-24 Post score: 0 Answer: The package.xml file inside the package that you created contains the information about package dependencies. This is the place to edit the package dependencies. Originally posted by Pallav with karma: 26 on 2019-07-25 This answer was ACCEPTED on the original site Post score: 0 Original comments Comment by Jose-Araujo on 2019-08-14: So, what I want to know is where I can find the list of dependencies on the ROS site. Because I basically know only three dependencies (rospy, roscpp, std_msgs). Comment by Pallav on 2019-08-18: First you need to state an application and see which available packages are required. Based upon the application you might require tf, geometry_msgs, nav_msgs, et cetera. Check this ROS wiki page: http://wiki.ros.org/ROS/Tutorials/CreatingPackage Comment by Jose-Araujo on 2019-08-19: I think I get it now, thank you very much @Pallav.
{ "domain": "robotics.stackexchange", "id": 33510, "tags": "ros-melodic, rospy, add-dependencies, dependencies, roscpp" }
How to parse sensor_msgs::Image data?
Question: Hi, this may be trivial, but I have problems understanding how to parse data which comes as sensor_msgs::Image. I used the package openni_camera to get raw frames (camera/depth/image_raw). It says that it contains uint16 depths in mm, but I am getting all values between 0 and 255; how do I get uint16? As I understand it, the data field is uint8[] data, as shown here: http://ros.org/doc/api/sensor_msgs/html/msg/Image.html. The same applies to camera/depth/points, which is a PointCloud2 sending Float32. Again, how is this represented in a uint8? Thanks. Originally posted by aswin on ROS Answers with karma: 528 on 2013-06-20 Post score: 0 Answer: OK, one way to solve this is using cv_bridge. It uses the 16UC1 encoding. void dataCallback(const sensor_msgs::ImagePtr& msg) { std::vector<unsigned short> depth; cv_bridge::CvImagePtr cv_depth = cv_bridge::toCvCopy(msg, sensor_msgs::image_encodings::TYPE_16UC1); for (int i = 0; i < cv_depth->image.rows; i++) { for (int j = 0; j < cv_depth->image.cols; j++) { unsigned short dd = *(cv_depth->image.ptr<unsigned short>(i, j)); depth.push_back(dd); } } } Cheers Originally posted by aswin with karma: 528 on 2013-06-20 This answer was ACCEPTED on the original site Post score: 0
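For intuition about why uint16 values arrive as uint8[]: the data field is just a flat byte buffer, so each pair of consecutive bytes forms one uint16 sample. Here is a minimal Python sketch of the byte-level unpacking that cv_bridge handles for you; `data` stands in for a message's data field, and a real message carries an is_bigendian flag that would select between the `<` (little-endian) and `>` (big-endian) format prefixes:

```python
import struct

# Two little-endian uint16 depth samples packed as four uint8 bytes.
data = bytes([0x10, 0x27, 0xFF, 0xFF])

# '<2H' = little-endian, two unsigned 16-bit integers.
depths_mm = struct.unpack('<2H', data)   # -> (10000, 65535)
```

The same idea applies to PointCloud2: its Float32 fields are four consecutive bytes each, unpacked with the 'f' format character at the offsets given in the message's field descriptions.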
{ "domain": "robotics.stackexchange", "id": 14640, "tags": "ros, sensor-msgs, image, openni-camera, pointcloud" }
Fully Ionic Grapefruit
Question: Imagine I take a small grapefruit (about 100 grams), and a magic wand, and use the magic wand to: Remove all the electrons from my grapefruit. Prevent said grapefruit from exploding into a thin plasma. What would be the force of the electrical field generated by my positive ionic grapefruit at a distance of 1 meter away? For instance, what force would be felt by electrons and, respectively, protons in a mosquito one meter away? Answer: Hint: Force on what? You need some charge to exert the force on. Otherwise you can calculate the electric field. To do that, you need to know the number of protons in $100$ g of normal matter. Since electrons don't weigh anything and (at this level of approximation) there are the same number of neutrons and protons....
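Following the hint, a back-of-the-envelope sketch (assuming roughly equal numbers of protons and neutrons in ordinary matter, and neglecting the electron mass) gives an enormous field:

```python
# Rough Coulomb-field estimate for a fully ionized 100 g object at 1 m.
k_e = 8.99e9        # N m^2 / C^2, Coulomb constant
e = 1.602e-19       # C, elementary charge
m_p = 1.67e-27      # kg, proton (~ neutron) mass
mass = 0.100        # kg
r = 1.0             # m

n_protons = 0.5 * mass / m_p      # ~3e25 protons (half the nucleons, assumed)
Q = n_protons * e                 # ~5e6 coulombs of net charge
E = k_e * Q / r**2                # field magnitude at 1 m, ~4e16 V/m
F_electron = e * E                # force on a single electron at 1 m, ~7 mN
```

A few millinewtons on a single electron is staggering at that scale, which is why step 2 of the magic wand (holding the grapefruit together) is doing all the heavy lifting.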
{ "domain": "physics.stackexchange", "id": 18178, "tags": "charge" }
Differential Drive Robot on uneven surfaces
Question: So I am building a differential drive robot and I want it to autonomously drive in a straight line on an uneven surface. I know I need a position and velocity PID. As of now, I am deciding on which sensors to buy. Should I use optical encoders, accelerometers, or something else? I wanted to go with accelerometers due to the error encoders would face due to slippage, but I am not sure. Some enlightenment would help! Answer: Get a good IMU you can communicate with over i2c. Make sure it has an accelerometer and a gyroscope. Use these two sensors to calculate your heading, and then correct accordingly to make sure you are heading straight. Here's an example of a nice IMU: https://www.sparkfun.com/products/10121 Sparkfun has created an arduino library for that particular sensor that will make calculating heading easy peasy. Good luck!
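One common way to fuse the two sensors the answer recommends is a complementary filter. The sketch below estimates pitch; note that for yaw (heading) an accelerometer-plus-gyro IMU can only integrate the gyro, so heading will drift without a magnetometer. ALPHA and DT are assumed tuning values and update_pitch is a hypothetical helper, not part of any SparkFun library:

```python
import math

ALPHA = 0.98   # trust the gyro short-term, the accelerometer long-term
DT = 0.01      # sample period in seconds (100 Hz, assumed)

def update_pitch(pitch, gyro_rate, ax, az):
    """One filter step: gyro_rate in deg/s; ax, az accelerometer components in g.

    The gyro term tracks fast motion; the accelerometer term (gravity
    direction) slowly pulls the estimate back, cancelling gyro drift.
    """
    accel_pitch = math.degrees(math.atan2(ax, az))
    return ALPHA * (pitch + gyro_rate * DT) + (1 - ALPHA) * accel_pitch
```

Called in the control loop at DT intervals, a stationary level robot (gyro_rate = 0, gravity purely on the z axis) sees any erroneous pitch estimate decay geometrically toward zero.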
{ "domain": "robotics.stackexchange", "id": 1129, "tags": "sensors, control, pid, differential-drive" }
Unitary Equivalence of Parity Operator
Question: I recently read a statement that 'parity operator is defined only up to unitary equivalence' in a paper about PT symmetric quantum mechanics. But I didn't understand the meaning of it. It was regarding a Hamiltonian $H=p^2+x^2+2x$ which is apparently non PT symmetric but when expressed as $H=p^2+(x+1)^2-1$ , it becomes PT symmetric about the point $x=-1$ Can anyone please explain? Answer: The PT symmetry is a red herring here. Your hamiltonian is visibly T symmetric, so you are only trying to understand its P-symmetry properties. The momentum (kinetic) piece is invariant under the maneuvers to follow. You want to understand how the two parity operators around 0 and -1 are related, $$ P \hat x P = -\hat x ~; \qquad \tilde P (\hat x +1) \tilde P = - (\hat x+1) ~ . $$ You appreciate that $$ e^{i\hat p /\hbar } ~~\hat x~~ e^{-i\hat p /\hbar }= \hat x +1, ~~~\leadsto \\ \tilde P ~ e^{i\hat p /\hbar } ~\hat x~ e^{-i\hat p /\hbar } ~ \tilde P= - e^{i\hat p /\hbar } ~\hat x~ e^{-i\hat p /\hbar }, ~~\implies \\ e^{-i\hat p /\hbar }~\tilde P ~ e^{i\hat p /\hbar } ~\hat x~ e^{-i\hat p /\hbar } ~ \tilde P~ e^{i\hat p /\hbar } = -\hat x~ \leadsto \\ \tilde P = e^{i\hat p /\hbar }~P~e^{-i\hat p /\hbar }. $$ That is, the two parity operators are unitarily equivalent. You may confirm the change has no bearing on parity reflections of $\hat p$, which you already know since the origin of a variable is invisible to its derivative. You may now proceed to apply the $\tilde P T$ operator on your hamiltonian.
{ "domain": "physics.stackexchange", "id": 79449, "tags": "quantum-mechanics, operators, symmetry, hamiltonian, parity" }
How is a molecular orbital a 'quantum superposition' of the atomic orbitals?
Question: Let me quote from Feynman's lectures the concept of superposition: (1)The probability of an event in an ideal experiment is given by the square of the absolute value of a complex number $ϕ$ which is called the probability amplitude: $$\begin{equation} \begin{aligned} P&=\text{probability},\\ \phi&=\text{probability amplitude},\\ P&={|\phi|}^2. \end{aligned} \end{equation}$$ (2)When an event can occur in several alternative ways, the probability amplitude for the event is the sum of the probability amplitudes for each way considered separately. There is interference: $$ϕ=ϕ_1+ϕ_2.$$ The second point is the core of quantum superposition. Also, in Dirac's language from The principles of Quantum Mechanics, [...]It requires us to assume that between these states there exists peculiar relationships such that whenever the system is definitely in one state we can consider it as being partly in each of two or more other states.[...] These all imply that when a system is in a state of superposition, it is 'partly' in both of the states that contribute to the superposition( Here 'partly' means possibility which implies the system has the possibility to exist in both states that contribute in the superposition). For instance, let's take $\ce{H_2^+}$ to illustrate the concept of superposition. Firstly, the words of Feynman: A positively ionized hydrogen molecule consists of two protons with one electron worming its way around them. If the two protons are very far apart, what states would we expect for this system? The answer is pretty clear: The electron will stay close to one proton and form a hydrogen atom in its lowest state, and the other proton will remain alone as a positive ion. So, if the two protons are far apart, we can visualize one physical state in which the electron is “attached” to one of the protons. There is, clearly, another state symmetric to that one in which the electron is near the other proton, and the first proton is the one that is an ion. 
We will take these two as our base states, and we'll call them $|1⟩$ and $|2⟩.$ They are sketched in Fig. 1. Now, the general state of $\ce{H_2^+}$, say, $|\psi_{\ce{H_2^+}}\rangle$, is a quantum superposition of the two base states $|1\rangle$ & $|2\rangle.$ Now using point no.(2) of Feynman, we can prove the superposition of the states as: What should be the amplitude for finding the electron at, say, $x'$ from the center of the molecule $\ce{H_2^+}?$ (We would use each coordinate as our new base state, that is, $|x\rangle$ is a base state; this means we would now treat a continuum of base states). We would have to find $\langle x'|\psi_{\ce{H_2^+}}\rangle.$ This can happen in two ways & thus applying point no.(2), we get $$\begin{equation}\langle x'|\psi_{\ce{H_2^+}}\rangle\\=\int \langle x'|x\rangle \langle x|\psi_{\ce{H_2^+}}\rangle dx\\= N\left(\int \langle x'|x\rangle \langle x|1\rangle dx +\int \langle x'|x\rangle \langle x|2\rangle dx\right)\end{equation} ;$$ where $ N= \text{normalisation constant}.$ From this, we can easily show $$|\psi_{\ce{H_2^+}}\rangle = N(|1\rangle + |2\rangle).$$ Thus, quantum superposition is used when there is more than one way an event can happen; in this case, our event was finding the electron at $x'$; we superpose the alternative amplitudes to get the total amplitude of the event of finding the electron at that coordinate. Let's consider $\ce{H_2}$; we describe first the VB description; again starting with Feynman's words: As our next two-state system we will look at the neutral hydrogen molecule $\ce{H_2}.$ It is, naturally, more complicated to understand because it has two electrons. Again, we start by thinking of what happens when the two protons are well separated. Only now we have two electrons to add. To keep track of them, we'll call one of them "electron a" and the other "electron b." We can again imagine two possible states.
One possibility is that “electron a” is around the first proton and “electron b” is around the second, as shown in the top half of Fig. We have simply two hydrogen atoms. We will call this state $|1⟩.$ There is also another possibility: that “electron b” is around the first proton and that “electron a” is around the second. We call this state $|2⟩.$ So, we can write the general state $|\psi_{\ce{H-H}}\rangle$ as the superposition of $|1\rangle$ & $|2\rangle.$ We can again prove the superposition from point no.(2) but this time we have to use $|x_1,x_2\rangle$ instead of $|x\rangle$, that is, $$\begin{equation}\langle x'_1,x'_2|\psi_{\ce{H-H}}\rangle\\=\int \langle x'_1,x'_2|x_1,x_2\rangle \langle x_1,x_2|\psi_{\ce{H-H}}\rangle dx\\= N'\left(\int \langle x'_1,x'_2|x_1,x_2\rangle \langle x_1,x_2|1\rangle dx +\int \langle x'_1,x'_2|x_1,x_2\rangle \langle x_1,x_2|2\rangle dx\right)\end{equation} ;$$ where $N'=\text{normalisation constant}.$ This evidently proves the superposition $$|\psi_{\ce{H-H}}\rangle= \frac{1}{\sqrt 2}(|1\rangle + |2\rangle)$$ where $N'= \frac{1}{\sqrt 2}.$ Come to the MO description of $\ce{H_2}$ (only the bonding orbital is concerned here, to shorten the query). This is where my problem begins... This is quoted from J.D. Lee's Concise Inorganic Chemistry: Consider two atoms $A$ and $B$ which have atomic orbitals described by the wavefunctions $\psi_{(A)}$ and $\psi_{(B)}.$ If the electron clouds of these two atoms overlap when the atoms approach, then the wavefunction for the molecule (molecular orbital $\psi_{(AB)}$) can be obtained by a linear combination of the atomic orbitals $$\psi_{(AB)}= N(c_1 \psi_{(A)} + c_2\psi_{(B)})$$ [...]Suppose the atoms $A$ and $B$ are hydrogen atoms; then the wavefunctions $\psi_{(A)}$ and $\psi_{(B)}$ describe the $1s$ atomic orbitals on the two atoms. [...]
Now, wavefunctions are nothing but probability amplitudes; since the molecular orbital is a quantum superposition of the atomic orbitals, it must satisfy point no.(2) like all the other superpositions described above. But I'm not understanding what base states are used in this case; in VB theory, $|1\rangle$ represented the state of having one electron around one $\ce{H}$ atom & the other electron around the other $\ce H.$ So, $\langle x_1,x_2|1\rangle$ meant the amplitude to find electron $a$ at $x_1$ & electron $b$ at $x_2.$ But what should I use in this case? How to represent $\langle x_1,x_2|\psi_{\ce{H-H}}\rangle$ using point no.(2) in MO theory? I'm not getting how to apply point no.(2) here; $\psi_{(A)}$ represents the amplitude of only one electron; same with $\psi_{(B)}.$ For applying point no.(2), there must be two ways for the event of finding the electrons at $x'_1,x'_2$, so that point no.(2) yields the result $\psi_{(\ce{H-H})_{(g)}}= N( \psi_{(H)_{1s}} + \psi_{(H)_{1s}}).$ But I'm not getting how $\psi_{(A)}$ and $\psi_{(B)}$ represent the amplitudes of the same final event of finding the electrons at $x'_1,x'_2$; if they were not representing the amplitudes of getting to the same final event, how could they be superposed by point no.(2)? In VB theory, the base states are defined so that the alternative amplitudes describe the same final event & so they were superposed; however, how should I interpret the base states in MO theory, so that $\psi_{(A)}$ and $\psi_{(B)}$ represent the amplitudes to the same final event? How should I apply point no.(2) in the case of a molecular orbital to prove the superposition? Answer: Same story as with the previous question: quantum superposition is always expressed mathematically as a linear combination, but the converse is not necessarily true. Not each and every linear combination expression has something to do with real physical quantum superposition. In many cases it is just a mathematical trick.
In the question about resonance we've already faced a situation when we could only formally think about a linear combination as a superposition of some imaginary states. Same story here: molecular orbitals are just linear combinations of atomic orbitals. As for the case of resonance, the states you think are superimposed (atomic orbitals this time) are not even states of electrons in a molecule. The only "real" (though approximate) states of electrons in a molecule are the molecular orbitals, not the atomic ones; thus, strictly speaking, there is no superposition out there. Besides, you don't understand what a molecular orbital is. It is not a many-electron wave function that describes all the electrons in a molecular system; rather, it is an (approximate) one-electron wave function that describes an individual electron in a molecule. To construct an (approximate) many-electron wave function you need to take an (antisymmetrized) product of $n$ molecular orbitals, where $n$ is the number of electrons. And since each and every molecular orbital describes just one electron, all your analysis of molecular orbitals for the two-electron case of a hydrogen molecule is plain wrong. P.S. And to be honest, I don't understand what you did for the one-electron $\ce{H2+}$ case as well. To me it looks like you have too many misconceptions in your head currently. I suspect you're jumping the gun by trying to learn the applications of quantum mechanics to chemistry with no really solid background in the theory itself. So, let's look at what you're saying on the $\ce{H2+}$ case. Following Feynman, we write the general state of the system $\lvert \psi \rangle$ as a superposition of the two basis states $\lvert \psi_1 \rangle$ and $\lvert \psi_2 \rangle$, $$ \lvert \psi \rangle = c_1 \lvert \psi_1 \rangle + c_2 \lvert \psi_2 \rangle \, . $$ Fine. And then you decided to "reverify the superposition of the states as". What do you want to "reverify" out there?
Superposition is asserted; you don't need to "reverify" it, whatever that means. Yes, you can switch from the state vector formalism to the wave function one by taking the inner product of both sides with position eigenvectors $\lvert x \rangle$, $$ \langle x \vert \psi \rangle = c_1 \langle x \vert \psi_1 \rangle + c_2 \langle x \vert \psi_2 \rangle \, , \\ \psi(x) = c_1 \psi_1(x) + c_2 \psi_2(x) \, , $$ where the wave function is defined as usual, i.e. $\psi(x) = \langle x \vert \psi \rangle$. That's fine, though trivial: you indeed get the (weighted) sum of the corresponding wave functions which are the probability amplitudes. But that is because it is just an alternative form of the superposition expression: it tells the same thing as before, just in different words. We didn't "reverify" anything, merely restated the superposition one more time. Anyway, that would be the thing I at least understand. But look what you did! A complete mess! First, you forgot the coefficients in the linear combination. Secondly, you used the resolution of identity for absolutely no reason. Thirdly, the normalisation constant just appeared out of the blue. Finally, what you think you get at the end is in fact the starting point of the journey: it is an assertion Feynman made. So, for me all your algebra appears as random manipulations of symbols you've seen somewhere but don't really understand.
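As a purely numerical illustration of "molecular orbitals are linear combinations of atomic orbitals" (a mathematical construction, not a physical superposition, per the answer above), here is a minimal two-site Hückel-style sketch for $\ce{H2+}$; the parameter values are assumed, illustrative numbers:

```python
import numpy as np

# Two-site Hückel-style model: one 1s basis orbital on each proton.
# Diagonalizing H = [[alpha, beta], [beta, alpha]] yields the bonding MO
# as the symmetric combination of the two atomic orbitals.
alpha, beta = -13.6, -2.0   # assumed on-site and coupling energies in eV

H = np.array([[alpha, beta],
              [beta, alpha]])
energies, coeffs = np.linalg.eigh(H)   # eigenvalues in ascending order

bonding_energy = energies[0]   # alpha + beta (lower, since beta < 0)
bonding_mo = coeffs[:, 0]      # proportional to (1, 1)/sqrt(2)
```

The bonding eigenvector has equal coefficients of the same sign on both atoms, which is exactly the $N(\psi_{(A)} + \psi_{(B)})$ combination in the LCAO expression quoted from J.D. Lee.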
{ "domain": "chemistry.stackexchange", "id": 4327, "tags": "quantum-chemistry, molecular-orbital-theory" }
Why does the pulmonary artery have a higher glucose concentration than the pulmonary vein?
Question: If the pulmonary artery has a higher glucose concentration than the pulmonary vein, does it mean glucose is consumed during gas exchange? That confuses me, because gas exchange is something like diffusion and shouldn't consume any glucose. Answer: Gas exchange doesn't, but the cells of the tissue it occurs in do consume glucose; even the cells in the walls of the artery will consume some. The cells in the lungs still need to be fed, and only one of those two vessels has flow going into the tissue, so it is the one that has to carry that glucose into the tissue.
{ "domain": "biology.stackexchange", "id": 8328, "tags": "breathing" }
Picking the different solutions to the time-independent Schrödinger equation
Question: The time-independent Schrödinger equation $$-\frac{\hbar^2}{2m} \frac{d^2\psi}{dx^2}+V\psi = E\psi$$ can have many different solutions $\psi$ for a particular value of $E$. For example, if we found a complex solution $\psi(x)$ for a particular value of $E$, say $E_0$, we can write $\psi(x)=a(x)+ib(x)$. Then $a(x)$ and $b(x)$ will also be solutions to the T.I.S.E. with $E=E_0$. Furthermore $c_1a(x)+c_2b(x)$ will also be a solution, with $c_1$ and $c_2$ being arbitrary constants. I read that one can always choose any of these solutions as the solution for the stationary state with energy $E_0$. But does that mean all these different solutions represent the same physical state of a particle? The expectation value of any dynamical variable $Q(x,p)$ is given by $\int \psi^*Q(x,\frac{\hbar}{i}\frac{d}{dx})\psi \, dx$. How do we know for sure that $\int a(x)^*Q(x,\frac{\hbar}{i}\frac{d}{dx})a(x) \, dx$ and $\int b(x)^*Q(x,\frac{\hbar}{i}\frac{d}{dx})b(x) \, dx$ give the same expectation values? Answer: In ordinary quantum mechanics, two wavefunctions represent the same physical state if and only if they are multiples of each other, that is, $\psi$ and $c\psi$ represent the same state for any $c\in\mathbb{C}$. If you insist on wavefunctions being normalized, then $c$ is restricted to complex numbers of absolute value 1, i.e. numbers of the form $\mathrm{e}^{\mathrm{i}k}$ for some $k\in[0,2\pi)$. If $\psi(x) = a(x) + \mathrm{i}b(x)$ is a solution of the Schrödinger equation, then it is not automatically true that $a(x)$ and $b(x)$ are also solutions. It is "accidentally" true for the time-independent Schrödinger equation because applying complex conjugation shows us directly that $\psi^\ast(x)$ is a solution if $\psi(x)$ is, and $a(x)$ and $b(x)$ can be obtained by linear combinations of $\psi(x)$ and $\psi^\ast(x)$. There are now two cases: If $\psi$ and $\psi^\ast$ are not linearly independent - i.e.
one can be obtained from the other by multiplication with a complex constant - then the space of solutions for this energy is still one-dimensional, and there's only a single physical state. If they are linearly independent, then there are at least two distinct physical states with this energy. Note that already the free particle with $V=0$ gives a counter-example to the claim that all solutions for the same energy have the same values for all expectation values. There we have plane wave solutions $\psi(x) = \mathrm{e}^{\mathrm{i}px}$ and $\psi^\ast(x) = \mathrm{e}^{-\mathrm{i}px}$ that are linearly-independent complex conjugates with the same energy that differ in the sign of their expectation value for the momentum operator $p$.
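The free-particle counter-example can be checked numerically. The sketch below (with $\hbar = 1$, on a periodic grid so the plane waves are normalizable) computes $\langle p \rangle = \int \psi^* (-\mathrm{i}\,d/dx)\,\psi\,dx \,/\, \int |\psi|^2\,dx$ for $\psi$ and $\psi^\ast$ via a spectral derivative; the grid size and wavenumber are arbitrary choices.

```python
import numpy as np

# Plane waves psi = exp(i p x) and psi* = exp(-i p x) on a periodic grid
# (hbar = 1). Both solve the free TISE with the same energy p^2/2m, but
# their momentum expectation values differ in sign.
N = 256
x = np.linspace(0, 2 * np.pi, N, endpoint=False)
dx = x[1] - x[0]
p = 3  # integer, so the plane wave is periodic on [0, 2*pi)

def expect_p(psi):
    k = 2 * np.pi * np.fft.fftfreq(N, d=dx)          # wavenumber grid
    dpsi = np.fft.ifft(1j * k * np.fft.fft(psi))     # spectral d/dx
    num = np.sum(np.conj(psi) * (-1j) * dpsi) * dx   # ∫ psi* (-i d/dx) psi dx
    den = np.sum(np.abs(psi) ** 2) * dx              # ∫ |psi|^2 dx
    return (num / den).real

psi = np.exp(1j * p * x)
print(expect_p(psi), expect_p(np.conj(psi)))  # ≈ +3.0 and -3.0
```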
{ "domain": "physics.stackexchange", "id": 56974, "tags": "quantum-mechanics, wavefunction, schroedinger-equation" }
Why do water molecules move faster when heated?
Question: In his first lecture about the nature of matter and atoms, Professor Feynman claims that the higher the temperature of the steam gets, the quicker the movement of the water molecules will be. I don't see, however, how these two phenomena are related. It makes sense that the size of the atoms and thus the water molecules will get bigger, but this doesn't imply that the velocity of the water molecules will get higher. I just don't see the relationship. Could someone please elaborate further on that? Thank you in advance. Answer: It's not at all obvious that the two phenomena (the steam being hotter and its molecules moving faster) are related. Evidence piled up quite slowly until the nineteenth century, but is now overwhelming. Some of the first evidence was indirect: if you accept the idea that the pressure a gas exerts on its container is caused by gas molecules hitting the container walls, then it's hard to explain the experimental fact that the pressure increases with temperature, unless you accept that the molecules move faster (on average) and hit the wall harder as the temperature increases. More directly, we can see the increasingly rapid 'Brownian movement' of microscopic particles suspended in a gas when we raise the gas temperature. And there's plenty more evidence.
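The kinetic-theory relation behind this, $\tfrac{3}{2}k_BT = \tfrac{1}{2}m\langle v^2\rangle$, can be made quantitative. A quick sketch for water molecules (the constants are standard; the two temperatures are just illustrative):

```python
import math

k_B = 1.380649e-23        # Boltzmann constant, J/K
N_A = 6.02214076e23       # Avogadro's number, 1/mol
m = 18.015e-3 / N_A       # mass of one H2O molecule, kg

def v_rms(T):
    # (3/2) k_B T = (1/2) m <v^2>  =>  v_rms = sqrt(3 k_B T / m)
    return math.sqrt(3 * k_B * T / m)

print(v_rms(373.15))  # steam at 100 °C: roughly 700 m/s
print(v_rms(473.15))  # at 200 °C the molecules are noticeably faster
```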
{ "domain": "physics.stackexchange", "id": 47467, "tags": "temperature, velocity, atoms, molecules" }
MoveIt detects collisions for disabled links in SRDF
Question: Hello, I’m new to this community so if I’ve made some mistakes please let me know. My problem description is as follows: I created a new arm+sensor+gripper URDF and used the MoveIt Setup Assistant to generate a MoveIt config package. When I use MoveGroup to move the arm I get this error: Found a contact between 'gripper_finger1_inner_knuckle_link' (type 'Robot link') and 'gripper_finger1_finger_tip_link' (type 'Robot link'), which constitutes a collision. Contact information is not stored. In my SRDF these 2 links are disabled: <disable_collisions link1="gripper_finger1_finger_tip_link" link2="gripper_finger1_inner_knuckle_link" reason="Adjacent" /> I’ve also tried updating the allowed collision matrix by updating the planning scene diff and then call apply_planning_scene service. At this point I’m not sure what else to try. I hope you guys can help me. Thank you. EDIT: I'm using Ubuntu 18.04 and ROS Melodic. MoveIt is installed from deb Originally posted by truhoang_ubtrobot on ROS Answers with karma: 16 on 2020-01-27 Post score: 0 Answer: I figured out what I did wrong: I updated the allowed collision matrix in the planning scene and I think this overrides the disabled collisions in the SRDF. Commenting out that part of my code resolves this issue. Originally posted by truhoang_ubtrobot with karma: 16 on 2020-01-28 This answer was ACCEPTED on the original site Post score: 0
{ "domain": "robotics.stackexchange", "id": 34334, "tags": "ros, moveit, ros-melodic, gripper, movegroup" }
What happens if a body jumps on the Moon?
Question: I need to know why on Earth (ha ha..!) this behaviour happens when a body jumps on the Moon (a celestial body without any atmosphere at all). Video explanation: https://youtu.be/Qgs5E5gKO48 The steps I followed using Kerbal Space Program: Start in position A Perform a clean jump on the anti-radial vector Wait some time Slow myself to not impact the ground and kill myself. End up in position B I know this is basic, but should not the object end up in position A again? Answer: Should not the object end up in position A again? No, it shouldn't, because of the Coriolis effect (assuming the moon is rotating about an axis with respect to the stars). From the perspective of a frame rotating with the moon, the Coriolis acceleration is $-2\,\vec\omega\times \vec v$, where $\vec \omega$ is the moon's angular velocity with respect to inertial space and $\vec v$ is the velocity of the jumping Kerbal. For simplicity, I'll assume the jump is performed at the moon's equator, at initial coordinates $\vec r = r\hat x$. The jump, in moon-fixed coordinates, gives the Kerbal an initial velocity of $\vec v = v_0\hat x$. The dominant acceleration is the downward acceleration due to gravity, $-g\hat x$ (not 9.80665 m/s²). There's also a much smaller acceleration in the $-\hat y$ direction due to the Coriolis effect. I'll focus on that and ignore that this slight drift changes the direction of the gravitational acceleration vector. (This effect is very small.) With this assumption, the x component of velocity is $v_x(t) = v_0 - gt$. This means that the y component of acceleration is $\ddot y(t) = -2\omega v_x(t) = -2\omega(v_0-gt)$. Integrating twice yields $y(t) = -\omega v_0 t^2 + \frac 1 3 \omega g t^3$. Substituting $t = t_f = \frac{2v_0}{g}$, the time at which the jumping Kerbal lands, yields $y_f = -\frac 4 3 \omega \frac{v_0^3}{g^2}$ (in other words, slightly to the west of the initial position).
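The closed-form drift above can be checked by direct numerical integration. The sketch below uses the same approximations as the answer (constant downward $g$, Coriolis acceleration only in $y$, jump at the equator); $\omega$, $g$ and $v_0$ are illustrative values, not real lunar parameters.

```python
# Euler integration of the jump with the Coriolis term, checking
# y_f = -(4/3) * omega * v0**3 / g**2. Parameter values are illustrative.
omega = 1e-3   # rotation rate, rad/s
g = 1.6        # surface gravity, m/s^2
v0 = 5.0       # initial upward jump speed, m/s

t_f = 2 * v0 / g          # time of flight, from v_x(t) = v0 - g*t
dt = 1e-4
vx, vy, y = v0, 0.0, 0.0
for _ in range(int(round(t_f / dt))):
    vy += -2 * omega * vx * dt   # Coriolis acceleration in -y
    vx += -g * dt                # dominant vertical gravity
    y += vy * dt

analytic = -(4.0 / 3.0) * omega * v0 ** 3 / g ** 2
print(y, analytic)   # both ≈ -0.065 m: a small westward drift
```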
{ "domain": "physics.stackexchange", "id": 33078, "tags": "newtonian-mechanics, classical-mechanics, atmospheric-science, moon" }
Why is the potential energy of spring negative while it's positive for a dipole?
Question: Potential energy is the work done against the conservative force. As the definition suggests, for a spring $$ U= -\int F \cdot \mathrm{d}x \,.$$ But for a dipole placed in an electric field $$ U= \int \tau \cdot \mathrm{d}\theta \,.$$ Why is the negative sign not used in the case of a dipole? Answer: The change in potential energy is defined as either the work done by an external force in changing the positions of charges, $\Delta U = + \int ^{\rm final}_{\rm initial} \vec F_{\rm external} \cdot d \vec r$, or minus the work done by the electric field in changing the positions of the charges, $\Delta U = - \int ^{\rm final}_{\rm initial} \vec F_{\rm field} \cdot d \vec r$. You will note that because $\vec F_{\rm external} + \vec F_{\rm field} = 0$ all you are doing is either putting a negative sign outside the integral, $- \int \vec F_{\rm field} \cdot d \vec r$, or inside the integral, $\int (- \vec F_{\rm field}) \cdot d \vec r = \int \vec F_{\rm external} \cdot d \vec r$. If the initial state is one with the potential energy defined as zero then those equations give you the potential energy of the final state. With your dipole equation for potential energy the definitions of potential energy stay exactly the same, and in the equation that you have quoted, $U = \int \tau \, d\theta$, the torque $\tau$ is the magnitude of the torque, as is shown below. An electric dipole with point charges $\pm q$ separated by a distance $d$ has a dipole moment $\vec p = q \vec d$, where the displacement $\vec d$ is in the direction from the negative charge to the positive charge.
In an electric field $\vec E$ the torque on the dipole is $\vec \tau = \vec p \times \vec E = pE \sin \theta\, (-\hat z)$, where $\hat z$ is the unit vector out of the screen and $\theta$ is the angle between the electric field direction and the electric dipole direction; the magnitude of the torque on the dipole is therefore $pE \sin \theta$. The work done by the electric field in rotating the dipole from $\theta_{\rm initial}$ to $\theta_{\rm final}$ is $\int ^{\rm final}_{\rm initial} \vec \tau \cdot d\vec \theta'= \int ^{\rm final}_{\rm initial} [pE \sin \theta' \,(-\hat z)]\cdot [d\theta'\,\hat z]= -\int ^{\rm final}_{\rm initial} pE \sin \theta' \, d\theta'$, and minus this quantity is the change in potential energy, $\Delta U = + \int ^{\rm final}_{\rm initial} pE \sin \theta' \, d\theta'$. If you were considering the external torque you would still have the same integral for the change in potential energy, which is your $\int \tau \,d\theta$ equation with $\tau$ as the magnitude of the torque. If the lowest potential energy state, at $\theta =0$, is taken as the zero of potential energy then the potential energy at an angle $\theta$ is $U = \int ^\theta _0 pE \sin \theta' \, d\theta' = pE(1-\cos \theta)$. It is sometimes convenient to have the $\theta = \frac \pi 2$ position as the zero of potential energy, and then $U = -\vec p \cdot \vec E$.
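The relation between this potential energy and the torque can be sanity-checked numerically: with $U(\theta) = pE(1-\cos\theta)$, the derivative $-dU/d\theta$ reproduces the restoring torque of magnitude $pE\sin\theta$. The values of $p$ and $E$ below are arbitrary illustrative numbers.

```python
import math

p, E = 2.0, 3.0   # dipole moment and field strength (illustrative units)

def U(theta):
    # potential energy with the theta = 0 position taken as zero
    return p * E * (1 - math.cos(theta))

def tau(theta):
    # restoring torque: magnitude p*E*sin(theta), acting to decrease theta
    return -p * E * math.sin(theta)

# tau should equal -dU/dtheta; check by central finite difference
h = 1e-6
for theta in (0.3, 1.0, 2.0):
    dU = (U(theta + h) - U(theta - h)) / (2 * h)
    print(theta, tau(theta), -dU)   # last two columns agree
```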
{ "domain": "physics.stackexchange", "id": 49441, "tags": "potential-energy, conventions, spring, dipole" }
Uploading and showing images securely
Question: I've read various posts about how letting users to upload files can create vulnerabilities to your website such as a user injecting PHP code in an image. So I've created a small test project where you can upload (outside of web root) and see uploaded images keeping it as simple as I could having in mind security but I'm not an expert and it would be really helpful if you could answer some of my questions and tell me if something could be done better. Do you spot anything wrong regarding permissions? Are the checks that I do in upload_images.php to check that the files that are being uploaded are images of the allowed formats sufficient? Could I do something better? Fetching multiple images using base64_encode(file_get_contents($images[$i])) seems a bit slow and also the string that is being put inside img src is huge...can this be a problem (for example images don't appear in xiaomis MIUI browser)? Is there a better alternative? Let's say that a malicious image bypasses my checks during uploading. When I fetch an images using the following PHP code get the response in js using ajax and then append it to the dom to be shown to the user using <img src='data:"+ data.extention[i] +";base64," + data.images[i] + "'> is it possible to be harmful in any way? Is storing images outside of root trying to prevent access of malicious users too much of a hassle? Is it better maybe (security-speed-browser compatibility wise) to just store them inside root and make use of .htaccess to prevent someone from doing harm? Would an .htaccess like the following ( secure_images/.htaccess ) be sufficient for that purpose? 
Structure / public_html (root) 755 .htaccess 444 index.php 644 images.php 644 javascript 755 show_images.js 644 upload.js 644 php_scripts 755 fetch_images.php 600 upload_images.php 600 logo.png 644 secure_images .htaccess 444 201811051007191220027687.jpg 644 20181105100719574368017.jpeg 644 secure_php_scripts 500 fetch_images.php 600 upload_images.php 600 public_html/.htaccess #Deny access to .htaccess files <Files .htaccess> order allow,deny deny from all </Files> #Enable the DirectoryIndex Protection, preventing directory index listings and defaulting Options -Indexes DirectoryIndex index.html index.php /index.php #Trackback Spam protection RewriteCond %{REQUEST_METHOD} =POST RewriteCond %{HTTP_USER_AGENT} ^.*(opera|mozilla|firefox|msie|safari).*$ [NC] RewriteCond %{THE_REQUEST} ^[A-Z]{3,9}\ /.+/trackback/?\ HTTP/ [NC] RewriteRule .? - [F,NS,L] Upload images related files public_html/index.php <!doctype html> <html lang="en"> <head> <meta charset="utf-8"> <title>Image upload security test</title> <meta name="description" content="Image upload security test"> <script src="https://ajax.googleapis.com/ajax/libs/jquery/3.2.1/jquery.min.js"></script> </head> <body> <form id="upload_form" method="post" enctype="multipart/form-data"> Select image to upload: <input type="file" name="filesToUpload[]" id="filesToUpload" multiple> <input type="submit" value="Upload Image" name="submit"> </form> <a href="images.php">See images</a> <script src="js/upload.js"></script> </body> </html> public_html/javascript/upload.js $("#upload_form").submit(function(e) { e.preventDefault(); $.ajax({ url:"../php_scripts/upload_images.php", method:"POST", data:new FormData(this), processData: false, contentType: false, dataType:"JSON", success:function(data) { if(data.outcome) { console.log("Images succesfully uploaded"); } else { console.log(data.msg); } } }); }); public_html/php_scripts/upload_images.php <?php include_once($_SERVER['DOCUMENT_ROOT'] . 
"/../secure_php_scripts/upload_images.php"); secure_php_scripts/upload_images.php <?php $uploaded_images[0]["path"] = null; try { $isValid = validateArray($_FILES['filesToUpload']); if($isValid[0]) { $data['outcome'] = true; $data['msg'] = "Images uploaded successfully"; for($i=0; $i<count($_FILES['filesToUpload']['tmp_name']); $i++) { $new_name = date('YmdHis',time()).mt_rand() . "." . pathinfo($_FILES['filesToUpload']['name'][$i], PATHINFO_EXTENSION); $path_to_be_uploaded_to = $_SERVER['DOCUMENT_ROOT'] . "/../secure_images/" . $new_name; if(!chmod($_FILES['filesToUpload']['tmp_name'][$i], 0644) || !move_uploaded_file($_FILES['filesToUpload']['tmp_name'][$i], $path_to_be_uploaded_to) ) { $data['outcome'] = false; $data['msg'] = "There was an error uploading your file"; break; } else { $uploaded_images[$i]["path"] = $path_to_be_uploaded_to; } } } else { $data['outcome'] = false; $data['msg'] = $isValid[1]; } echo json_encode($data); } catch (Exception $e) { //If there is an exception delete all uploaded images if($uploaded_images[0]["path"] != null) { foreach($uploaded_images as $item) { if( file_exists($item["path"]) ) { unlink($item["path"]); } } } // Also delete all uploaded files from tmp folder (Files user uploads first go there) foreach($_FILES['filesToUpload']['tmp_name'] as $item) { if( file_exists($item) ) { unlink($item); } } $data['outcome'] = false; $data['msg'] = "There was an error please try again later"; echo json_encode($data); } // Create a new blank image using imagecreatetruecolor() // Copy our image to the new image using imagecopyresampled() // And also add a logo in the process function add_watermark($path_to_img, $ext) { try { if($ext == 'png') { $img = imagecreatefromjpeg($path_to_img); } else { $img = imagecreatefromjpeg($path_to_img); } $stamp = imagecreatefrompng('logo.png'); // Set the margins for the stamp and get the height/width of the stamp image $marge_right = 10; $marge_bottom = 10; $sx = imagesx($stamp); $sy = imagesy($stamp); 
list($width, $height) = getimagesize($path_to_img); $dest_imagex = 900;//width of new image $dest_imagey = 900;//height of new image $dest_image = imagecreatetruecolor($dest_imagex, $dest_imagey);//create new image imagecopyresampled($dest_image, $img, 0, 0, 0, 0, $dest_imagex, $dest_imagey, $width,$height);//#im to $dest_image //Now $dest_image is an image of 800x800 // Copy the stamp image onto our photo using the margin offsets and the photo width to calculate positioning of the stamp. imagecopy($dest_image, $stamp, $dest_imagex - $sx - $marge_right, $dest_imagey - $sy - $marge_bottom, 0, 0, $sx, $sy); $filename = $path_to_img; if($ext == 'png') { if(!imagepng($dest_image, $filename)) { return false;} } else { if(!imagejpeg($dest_image, $filename)) { return false;} } return true; } catch (Exception $e) { return false; } } // Checks // if the element provided is a 2D array with the expected elements (multiple pictures per upload) // if there is an error // if file extentions are the allowed ones // is each file size is bellow 1GB // if the file number is less than 16 // if file exists and if it was uploaded via HTTP POST function validateArray($array) { try{ if( $array && is_array($array) ) { if( !is_array($array['name'])) { return [false, "Wrong array format"]; } else { $pic_number = count($array['name']); } if($pic_number > 15) { return [false, "Maximum image number allowed is 15"]; } if( !is_array($array['type']) || count($array['type']) != $pic_number || !is_array($array['tmp_name']) || count($array['tmp_name']) != $pic_number || !is_array($array['error']) || count($array['error']) != $pic_number || !is_array($array['size']) || count($array['size']) != $pic_number ) { return [false, "Wrong array format"]; } $allowedExts = array('png', 'jpeg', 'jpg'); $allowedExts2 = array('image/png', 'image/jpg', 'image/jpeg'); $fileinfo = finfo_open(FILEINFO_MIME_TYPE); for($i=0; $i<count($array['name']); $i++) { if( is_array($array['name'][$i]) || 
is_array($array['tmp_name'][$i]) || is_array($array['error'][$i]) || is_array($array['size'][$i]) ) { return [false, "Wrong array format"]; } $ext = pathinfo($array['name'][$i], PATHINFO_EXTENSION); if( !in_array($ext, $allowedExts) ) { return [false, "Only PNG JPEG JPG images are allowed"]; } if(!file_exists($array['tmp_name'][$i]) || !is_uploaded_file($array['tmp_name'][$i])) { return [false, "File doesn't exists, try again"]; } if(!is_uploaded_file($array['tmp_name'][$i])) { return [false, "File has to be uploaded using our form"]; } if(!exif_imagetype($array['tmp_name'][$i])) { return [false, "Only images allowed"]; } if(filesize($array['tmp_name'][$i]) < 12) { return [false, "All images has to be more than 11 bytes"]; } if (!in_array(finfo_file($fileinfo, $array['tmp_name'][$i]), $allowedExts2)) { return [false, "Only PNG JPEG JPG images are allowed"]; } if($array['error'][$i] !== 0) { return [false, "File error"]; } if($array['size'][$i] > 1000000) { return [false, "Maximum image size allowed is 1GB"]; } if(!add_watermark($array['tmp_name'][$i], $ext)) { return [false, "There was an error uploading your file"]; } } } else { return [false, "Element provided is not a valid array"];} return [true, "Chill dude images are ok"]; } catch (Exception $e) { return [false, "There was an error please try again later"]; } } Show images related files public_html/images.php <!doctype html> <html lang="en"> <head> <meta charset="utf-8"> <title>Image upload security test</title> <meta name="description" content="Image upload security test"> <script src="https://ajax.googleapis.com/ajax/libs/jquery/3.2.1/jquery.min.js"></script> <style> section{display:block;text-align:center;} content{display:inline-block;margin:10px;height:400px;width:400px;} content img{max-height:100%;max-width:100%;min-height:100%;min-width:100%;} </style> </head> <body> <section> </section> <script src="js/show_images.js"></script> </body> </html> public_html/javascript/show_images.js window.onload = 
function() { $.ajax({ url:"../php_scripts/fetch_images.php", method:"POST", dataType:"JSON", success:function(data) { if(data.outcome) { if(data.images) { if(data.images.length > 0) { let text = []; for( let i=0; i< data.images.length; i++) { text[i] = "<content><img src='data:"+ data.extention[i] +";base64," + data.images[i] + "'></content>"; } $("section").append(text); } } else {console.log("no images found"); } } else { console.log("An error occured please try again later"); } } }); }; public_html/php_scripts/fetch_images.php <?php include_once($_SERVER['DOCUMENT_ROOT'] . "/../secure_php_scripts/fetch_images.php"); secure_php_scripts/fetch_images.php <?php try { $data["outcome"] = true; $directory = $_SERVER['DOCUMENT_ROOT'] . "/../secure_images/"; $images = glob($directory . "*.{[jJ][pP][gG],[pP][nN][gG],[jJ][pP][eE][gG]}", GLOB_BRACE); $fileinfo = finfo_open(FILEINFO_MIME_TYPE); for ($i = 0; $i < count($images); $i++) { $extention = finfo_file($fileinfo, $images[$i]); header('Content-Type: ' . $extention); $data["extention"][$i] = $extention; $data["images"][$i] = base64_encode(file_get_contents($images[$i])); } echo json_encode($data); } catch(Exception $e) { $data["outcome"] = false; $data["images"][0] = []; echo json_encode($data); } secure_images/.htaccess #Deny access to .htaccess files <Files .htaccess> order allow,deny deny from all </Files> #Enable the DirectoryIndex Protection, preventing directory index listings and defaulting Options -Indexes DirectoryIndex index.html index.php /index.php #Securing directories: Remove the ability to execute scripts AddHandler cgi-script .php .pl .py .jsp .asp .htm .shtml .sh .cgi Options -ExecCGI Answer: First, to your questions Do you spot anything wrong regarding permissions? Overkill. Permissions has very little to do with web-servers, as there is only one user - one under which a web-server (or a php process) runs. So it doesn't really matter what number you have a fancy to set. 
Are the checks that I do in upload_images.php to check that the files that are being uploaded are images of the allowed formats sufficient? Could I do something better? Overkill. All these checks do not prevent the upload of valid PHP code. But honestly, it doesn't really matter. Fetching multiple images using base64_encode(file_get_contents($images[$i])) seems a bit slow and also the string that is being put inside img src is huge...can this be a problem (for example images don't appear in xiaomis MIUI browser)? Is there a better alternative? Yes, it prevents caching on the client side and wastes a huge amount of bandwidth. It's better to serve images as is. Let's say that a malicious image bypasses my checks during uploading. When I fetch an image using the following PHP code, get the response in js using ajax and then append it to the dom to be shown to the user, is it possible to be harmful in any way? Not on the PHP side. I am not a JS expert though. You can try security.stackexchange.com for this. Taken alone, without all this wall of code, it would make a perfect question there (if not a duplicate though). Is storing images outside of root trying to prevent access of malicious users too much of a hassle? Is it better maybe (security-speed-browser compatibility wise) to just store them inside root and make use of .htaccess to prevent someone from doing harm? Would an .htaccess like the following ( secure_images/.htaccess ) be sufficient for that purpose? Yes, letting the web server handle images would be a much better approach. Now to the actual code review The whole code is too big for a full review, because it covers too many unrelated topics: secure image upload, file permissions, web security in general, client-side programming, and even, for some reason, creating watermarks. So I'll cover just the actual image upload. validateArray() function Exceptions misused.
For some reason, instead of catching exceptions outside of the function, you are catching them inside, which makes both the function's code and its output more complicated. Just throw inside the function and catch outside. That's all. No need to even return true, as in case of an error the execution won't even reach the condition where the return value is checked. You are duplicating A LOT of functionality provided by PHP. Most of the errors you are checking for can be thrown by PHP. For example, if you try to iterate over a string, PHP will give you an error. So it makes no sense to do all the numerous verifications. PHP can do it for you. All you need is a simple error handler that can convert PHP errors to exceptions and poof - you already have the exception without a single line of code! The same goes for the numerous file verifications. If a file doesn't exist, PHP will tell you that! Finfo and exif can be easily fooled: a file that is a valid image can be a no less valid PHP script at the same time. So use them for your convenience only, but these functions won't add too much security. Watermarking has nothing to do with validation. Formatting. PSR-2 is a de-facto standard now so you are supposed to follow it.
So in your place I would make this function like this function validateArray($array) { if(count($array['name']) > 15) { throw new MyFileUploadException("Maximum image number allowed is 15"); } $allowedExts = array('png', 'jpeg', 'jpg'); foreach($array['name'] as $i => $name) { if($array['error'][$i] !== 0) { throw new MyFileUploadException("File error"); } if(!is_uploaded_file($array['tmp_name'][$i])) { throw new MyFileUploadException("File has to be uploaded using our form"); } $ext = pathinfo($name, PATHINFO_EXTENSION); if( !in_array($ext, $allowedExts) ) { throw new MyFileUploadException("Only PNG JPEG JPG images are allowed"); } if(filesize($array['tmp_name'][$i]) < 12) { throw new MyFileUploadException("All images has to be more than 11 bytes"); } if($array['size'][$i] > 1000000) { throw new MyFileUploadException("Maximum image size allowed is 1GB"); } } return true; } upload_images.php code Issues are the same: too much duplicated verifications and exceptions misuse which leads to the duplicated code. Besides , there is no point in deleting tmp_files and also I doubt we should delete already uploaded files as well. So to me the code would be rather class MyFileUploadException extends Exception {}; try { validateArray($_FILES['filesToUpload']); foreach($_FILES['filesToUpload']['tmp_name'] as $i => $tmp_name) { $ext = pathinfo($_FILES['filesToUpload']['name'][$i], PATHINFO_EXTENSION); $new_name = date('YmdHis',time()).mt_rand() . "." . $ext; $path_to_be_uploaded_to = $_SERVER['DOCUMENT_ROOT'] . "/../secure_images/" . 
$new_name; move_uploaded_file($tmp_name, $path_to_be_uploaded_to); add_watermark($path_to_be_uploaded_to, $ext); } $data['outcome'] = true; $data['msg'] = "Images uploaded successfully"; } catch (MyFileUploadException $e) { $data['outcome'] = false; $data['msg'] = $e->getMessage(); } catch (Exception $e) { error_log($e); $data['outcome'] = false; $data['msg'] = "There was an error please try again later"; } echo json_encode($data); Notice the user-defined exception and the difference in the processing. Your own error messages thrown via MyFileUploadException are useful for the user and do no harm when revealed - so the message is conveyed to the user as is. Whereas PHP's internal error messages are exactly the opposite: too cryptic for the site user, but they may contain sensitive information that shouldn't be revealed outside. At the same time they are vital for the site programmer - so they are logged in the web-server's error log, while a generalized error message is shown to the user. So now errors are treated according to the best standards, which are explained in the article I linked above. I would question the name generation method though, and use md5() of the file contents instead.
{ "domain": "codereview.stackexchange", "id": 32578, "tags": "javascript, php, security, image, .htaccess" }
Do binary black hole mergers emit light?
Question: I am not referring to Hawking radiation. As far as I can see there should be no way the collision can produce electromagnetic waves; after all, a black hole is simply a region of space, right? Answer: In principle two completely isolated black holes would not emit any light when they merge. However real black holes are invariably associated with some form of matter. This could be an accretion disk if the black hole is actively absorbing matter, or possibly material in orbit that originated from the stellar system in which the black hole formed. When matter like this is present it is going to be heated by the tidal forces created by the merger and it will radiate light. Though it's unlikely ever to be the case, a merger between two black holes with no associated matter would be an interesting test for quantum gravity effects. Quantum gravity does allow graviton-graviton scattering to create standard model particles, and detecting, or failing to detect, such particles would give us information about quantum effects in gravity.
{ "domain": "physics.stackexchange", "id": 54566, "tags": "visible-light, black-holes, gravitational-collapse" }
Error when initializing rosdep
Question: I'm using Raspberry Pi 3 (Raspbian Jessie PIXEL). I'm trying to install ROS based on this tutorial: http://wiki.ros.org/ROSberryPi/Installing%20ROS%20Kinetic%20on%20the%20Raspberry%20Pi. The error I got is: sudo rosdep init ERROR: cannot download default sources list from: https://raw.github.com/ros/rosdistro/master/rosdep/sources.list.d/20-default.list Website may be down. I read a solution on this link http://answers.ros.org/question/54150/rosdep-initialization-error/, but I don't know what the proxy server and host of my Raspberry are, nor how to find them. (I'm quite new to both Raspberry and Linux, but I need to install ROS for my project.) And when I browse the link in the error, the website cannot be opened and it says "Your connection is not private". Can anyone explain in a little more detail how to find the proxy server and port, and what to do next to fix this problem? Thank you. Originally posted by thanhvu94 on ROS Answers with karma: 27 on 2017-03-29 Post score: 0 Answer: Hi All, I had been facing the same issue of ERROR: cannot download default sources list from: https://raw.githubusercontent.com/ros/rosdistro/master/rosdep/sources.list.d/20-default.list Website may be down. This error is because of your system's date & time settings. I corrected my system's date and time settings and rosdep init worked. Originally posted by Shoeb Ahmed with karma: 41 on 2018-01-05 This answer was ACCEPTED on the original site Post score: 1 Original comments Comment by tfoote on 2018-01-05: Yes, the https negotiations will fail if the date and time are too far apart.
{ "domain": "robotics.stackexchange", "id": 27463, "tags": "ros, rosdep, raspberrypi" }
Why does AFSK only transmit in binary?
Question: According to Wikipedia, audio frequency-shift keying (AFSK) is done by using binary FSK, i.e. 2 tones of different frequencies, a tone for the 1 bit and a tone for the 0 bit. But isn't it possible to choose more than 2 frequencies (any frequency between 0 and half the sampling rate) to transmit more than binary data? Why don't we use, say, 10 tones of different frequencies to transmit in decimal, or say 256 tones to transmit in chars? Wouldn't that make transmission much more efficient? Answer: The three main reasons appear to be these: 1) In AFSK, it's not just jumping back and forth between frequencies. The tones must also be continuous phase. In other words, when the tone changes, there can't be any jump in phase. For example, if you're sending a 1200 Hz tone and the waveform is at its peak when you switch to 2200 Hz, the new tone must also start at its peak; it can't start back at zero, or halfway through, or anywhere else. So it can't be just any two frequencies. Additionally, the length of each tone must be the same. Because of these two points, the number of frequencies that share compatible periods is finite. For instance, if you use 1000 Hz, your next compatible frequency is something like 1500 Hz (my apologies if my numbers are off; I'm worn out). 2) As a follow-on from 1, since AFSK is audio, the range of compatible frequencies is not that high. 3) The amount of bandwidth required for the transmission increases for each additional tone, making data efficiency greater, but power and bandwidth efficiency less. That doesn't mean that multiple frequencies can't be used; in fact MFSK has been used since the mid-60s. However, MFSK has several drawbacks that make it untenable in some situations. Delay spread, when broadcasting to multiple locations, and fading make it less desirable for some applications.
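The continuous-phase requirement in point 1 is commonly met in software not by lining up specially compatible frequencies but by accumulating phase sample by sample, which keeps the waveform continuous for any tone pair. A sketch (the 1200/2200 Hz tones and 1200 baud rate are the Bell 202 values mentioned in the answer; the function name is mine):

```python
import numpy as np

def cpfsk(bits, f0=1200.0, f1=2200.0, fs=48000, baud=1200):
    """Generate a continuous-phase binary FSK waveform.

    The instantaneous frequency switches between f0 and f1, but the
    phase is accumulated sample by sample, so there is never a phase
    jump at a bit boundary.
    """
    samples_per_bit = int(fs / baud)
    # Per-sample instantaneous frequency, held constant within each bit.
    freqs = np.repeat([f1 if b else f0 for b in bits], samples_per_bit)
    phase = 2 * np.pi * np.cumsum(freqs) / fs  # accumulate phase
    return np.sin(phase)

wave = cpfsk([1, 0, 1, 1, 0])
# Adjacent samples never differ by more than the largest per-sample
# phase increment, even across bit boundaries:
max_step = np.max(np.abs(np.diff(wave)))
```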
{ "domain": "dsp.stackexchange", "id": 4285, "tags": "audio, digital-communications, fsk" }
Ward identity of correlation function
Question: For local observables $\{O_i(x_i)\}^n_{i = 1}$, one defines the Ward identity as $$\partial_{\mu}\langle j^{\mu}(x)\prod^n_{i = 1}O(x_i)\rangle = \sum^n_{i = 1}\delta(x-x_i)\langle O_1(x_1)\cdots\delta O_i(x_i)\cdots O(x_n)\rangle\tag{1}$$ Under the local variation of the field $$\Psi(x)\mapsto \Psi(x)+\epsilon\delta\Psi(x)\tag{2}$$ Whenever $\delta_{\epsilon}\mathcal{D}[\Psi(x)] = 0$, $\delta_{\epsilon} Z[\Psi(x)] = 0$, one obtains the following variation $$\begin{align}\delta_{\epsilon}\langle \prod^n_{i = 1}O_i(x_i)\rangle &= \delta_{\epsilon}\biggr(\frac{\int \mathcal{D}[\Psi(x)]\prod^n_{i = 1}O_i(x_i)e^{\frac{i}{\hbar}S[\Psi(x)]}}{Z[\Psi(x)]}\biggr)\\&= \frac{\delta_{\epsilon}\biggr(\int \mathcal{D}[\Psi(x)]\prod^n_{i = 1}O_i(x_i)e^{\frac{i}{\hbar}S[\Psi(x)]}\biggr)Z[\Psi(x)] - \delta_{\epsilon} Z[\Psi(x)]\biggr(\int \mathcal{D}[\Psi(x)]\prod^n_{i = 1}O_i(x_i)e^{\frac{i}{\hbar}S[\Psi(x)]}\biggr)}{Z^2[\Psi(x)]}\\&= \frac{1}{Z[\Psi(x)]}\delta_{\epsilon}\biggr(\int \mathcal{D}[\Psi(x)]\prod^n_{i = 1}O_i(x_i)e^{\frac{i}{\hbar}S[\Psi(x)]}\biggr)-\underbrace{\frac{1}{Z[\Psi(x)]}\delta_{\epsilon} Z[\Psi(x)]\langle \prod^n_{i = 1}O_i(x_i)\rangle}_{ = 0} \\ &= \sum^n_{i = 1}\langle O_1(x_1)\cdots\delta O_i(x_i)\cdots O_n(x_n)\rangle+\frac{i}{\hbar}\langle\delta_{\epsilon} S[\Psi(x)]\prod^n_{i = 1}O_i(x_i)\rangle\\&= \sum^n_{i = 1}\langle O_1(x_1)\cdots\delta O_i(x_i)\cdots O_n(x_n)\rangle+\frac{i}{\hbar}\int_{M} d^n x\epsilon(x)\partial_{\mu}\langle j^{\mu}(x)\prod^n_{i = 1}O_i(x_i)\rangle\end{align}\tag{3}$$ Where we assumed that $$Z[\Psi(x)] = \displaystyle\int D[\Psi(x)]e^{\frac{i}{\hbar}S[\Psi(x)]}\tag{4}$$ Further recalling that $$\int_{V}O_i(x)\delta(x-x_i)d^nx = \begin{cases}O_i(x_i), x_i\in V\\ 0, x_i\notin V\end{cases}\tag{5}$$ We observe that under arbitrary spacetime-dependent parameter $\epsilon : M\rightarrow \mathbb{R}$, $$\begin{align}\delta_{\epsilon}\langle \prod^n_{i = 1}O_i(x_i)\rangle &= \int_{M}d^nx\sum^n_{i = 1}\delta(x-x_i)\langle 
O_1(x_1)\cdots\delta O_i(x)\cdots O_n(x_n)\rangle+\frac{i}{\hbar}\int_{M} d^nx\epsilon(x) \partial_{\mu}\langle j^{\mu}(x)\prod^n_{i = 1}O_i(x_i)\rangle\end{align}\tag{6}$$ From which we conclude that whenever $\delta_{\epsilon}\langle \prod^n_{i = 1}O_i(x_i)\rangle = 0$, $$-i\hbar\partial_{\mu}\langle j^{\mu}(x)\prod^n_{i = 1}O(x_i)\rangle = \sum^n_{i = 1}\delta(x-x_i)\langle O_1(x_1)\cdots\delta O_i(x)\cdots O(x_n)\rangle\tag{7}$$ However, I am not sure how the Dirac delta function $\delta(x-x_i)$ arises in the Ward identity in $(1)$. Can you provide clarification? Answer: It might be completely wrong, if it sounds fishy, please tell me. You have to keep in mind the difference between $\delta\langle O\rangle$ and $\langle \delta O\rangle$. The former is 0 by definition of the invariance of the system, while the latter is not (somehow it accounts for the variation of the observable that compensate the variation of the probability density so that the total variation is 0). This implies that your equation (2) (which corresponds to the former case) is equal to 0. Moreover, the variation here: $$O_i(x_i)\mapsto O_i(x_i)+\epsilon\delta O_i(x_\color{red}i\color{black})$$ is with respect to the space time variable you're considering (and not an arbitrary $x$). These two things lead us to: $$\sum_{i = 1}\langle O_1(x_1)\cdots\delta O_i(x_i)\cdots O_n(x_n)\rangle=-i\int_{M} d^n x\partial_{\mu}\langle j^{\mu}(x)\prod^n_{i = 1}O_i(x_i)\rangle$$ This gives us, using the delta identity: $$\int_M d^nx\sum_{i = 1}\delta(x_i - x)\langle O_1(x_1)\cdots\delta O_i(x)\cdots O_n(x_n)\rangle=-i\int_{M} d^n x\partial_{\mu}\langle j^{\mu}(x)\prod^n_{i = 1}O_i(x_i)\rangle$$. 
Since this integral holds for every variation, the equality holds for the objects inside the integrals: $$\sum_{i = 1}\delta(x_i - x)\langle O_1(x_1)\cdots\delta O_i(x)\cdots O_n(x_n)\rangle=-i\partial_{\mu}\langle j^{\mu}(x)\prod^n_{i = 1}O_i(x_i)\rangle$$ You can make the last argument clearer by explicitly using an $x$-dependent infinitesimal $\epsilon(x)$ (which we omitted here). The argument is the same as for the derivation of the Euler-Lagrange equations. Edit: Concerning your question in the comments. I don't know a damn thing about QFT :) ! So my answer would be to ask directly with another question (I'm also interested in this). My (classical mechanics) feeling would be that your infinitesimal spacetime variations leaving the average values unchanged belong to a Lie algebra and thus, through the exponential map, can generate the full Lie group associated to this algebra. This means that, even if you only consider infinitesimal variations, you can reconstruct, FROM THESE INFINITESIMAL variations, any finite element of the Lie group (through exponentiation). What this means, to me, is that there is as much information in the $\epsilon$ Ward identity as there would be in an $\epsilon^2$ (or even finite variation) Ward identity. Here is a PSE link about these ideas of Lie algebras. Here is also a PSE post, with an answer by ACuriousMind who says at the end: This is "exact" since we consider an infinitesimal gauge transformation, anyways. There are rigorous foundations in Lie theory for this "throwing away" of higher order terms. Nevertheless, it is quite important to carry these tricks the physicist likes so much oneself, since the answer coming out is correct.
Here is an Article saying that my argument does not always hold (albeit, in a rather specific context): Indeed, due to the group structure of the transformation under consideration, it is expected that a first-order development is sufficient because in principle any finite transformation can be deduced from the composition of infinitesimal transformations of the first-order. Thus, it was always assumed that the identities obtained in the second order or beyond should be redundant with respect to the identities in the first order because of the internal composition law of the symmetry group. In the present work, we show that this is not the case. Second-order transformations cannot be deduced from the composition of first-order transformations. While one might think that the origin of this phenomenon lies in the specific non-locality of tensor interactions, a detailed analysis shows that this is not the case. Moreover, the redundancy of Ward identities has been demonstrated for a tensor theory including a closure constraint in [66]. It seems that the origin of this phenomenon lies in the kinetic term, and in the presence or absence of constraints on the field. Indeed, it seems that this anomaly is a specificity of TGFTs without closure constraints, and we provide more details on the perspectives of this observation in the conclusion.
{ "domain": "physics.stackexchange", "id": 98377, "tags": "quantum-field-theory, lagrangian-formalism, symmetry, path-integral, ward-identity" }
Using Local Binary Pattern for finding Homography between two Image
Question: Can LBP be used to find the homography between two images with a similar field of view for image registration? Answer: Yes, LBP can be used to find the homography between two images. LBPs encode the local differences between nearby pixels with respect to the center pixel as an 8-bit code. You can analyse these codes and extract just the corners, which can be used as feature points. If you have the option to use another algorithm, I suggest you go with SIFT.
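A minimal, unoptimized sketch of the 8-bit code computation the answer describes (bit-ordering conventions vary between implementations; a real pipeline would use a library such as scikit-image's `local_binary_pattern`):

```python
import numpy as np

def lbp_codes(img):
    """Compute the basic 8-neighbour LBP code for each interior pixel.

    Each neighbour contributes one bit: 1 if it is >= the centre pixel,
    0 otherwise, giving the 8-bit local pattern.
    """
    # Clockwise neighbour offsets starting at the top-left pixel.
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    h, w = img.shape
    codes = np.zeros((h - 2, w - 2), dtype=np.uint8)
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            c = img[i, j]
            code = 0
            for bit, (di, dj) in enumerate(offsets):
                if img[i + di, j + dj] >= c:
                    code |= 1 << bit
            codes[i - 1, j - 1] = code
    return codes

img = np.array([[9, 9, 9],
                [9, 5, 1],
                [1, 1, 1]], dtype=np.uint8)
codes = lbp_codes(img)  # bits 0-2 and 7 set for the centre pixel
```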
{ "domain": "dsp.stackexchange", "id": 4484, "tags": "image-processing, local-features, image-registration" }
What happens with electric charges in a salt water battery?
Question: When building a simple salt water battery, consisting of a zinc anode, a copper cathode and an NaCl solution, I noticed the following: When closing the circuit, the current decreases over time. (No surprise, since either charge adds up around the electrodes or $\ce{Na^+}$- and $\ce{Cl^-}$-ions get depleted.) When I gently slide my finger along the zinc anode (under the water), the measured current increases very little (and continues to decrease over time). When I do the same at the copper cathode, the current increases much more than at the anode (and continues to decrease over time). I understand that at the zinc anode $\ce{Zn^2+}$ dissolves into the water. And I understand that at the copper cathode the electrons will interact with the water, not the $\ce{Na^+}$. Now, I have 3 questions, where no. 3 is my main question: Close to the anode: Will $\ce{Cl^-}$-ions just float next to the $\ce{Zn^2+}$-ions independently or will they form $\ce{ZnCl2}$? Close to the cathode: What exactly is happening there? Since the voltage I measured is ~0.7 V, it can't be electrolysis of water, since 1.23 V is required for this to happen. I would assume, due to autoprotolysis of water, $\ce{H3O+ + e^- -> 1/2 H2 + H2O}$. The autoprotolysis leaves behind an $\ce{OH^-}$. I assume it bonds with the Na, giving $\ce{NaOH}$. Why do I measure a much greater temporary increase in electric current when I gently stir the water close to the copper cathode, compared to stirring close to the zinc anode? Shouldn't both water volumes around the two electrodes be equally neutral? On one side, $\ce{Zn^2+}$ gets cancelled out by $\ce{2Cl^-}$, on the other side $\ce{2OH^-}$ get cancelled out by $\ce{2Na^+}$? Here is the battery: And here is where I got my ideas for the reactions happening. (I'm not a chemist.) Answer: There are some mistakes in your questions. At the anode, $\ce{Zn}$ will form $\ce{Zn^{2+}}$ and not $\ce{ZnCl2}$.
Simultaneously, the anode attracts $\ce{Cl-}$ anions to compensate for the new positive charges created at the zinc plate. The $\ce{ZnCl2}$ compound does not exist in solution, but it can be obtained after evaporation of the aqueous solution. At the cathode, the only reaction is: $\ce{2 H2O + 2e^- -> H2 + 2 OH^-}$. Simultaneously, the copper plate will attract $\ce{Na^+}$ to compensate for the new $\ce{OH-}$ ions created on the copper plate. So the solution near the cathode is made of $\ce{Na+ + OH-}$ ions. But the solution does not contain $\ce{NaOH}$. This substance may be obtained after evaporation of the solution. When stirring near the cathodic plate, you remove the hydrogen bubbles adsorbed on the cathode surface. Without stirring, the surface where $\ce{H2O}$ can produce $\ce{OH-}$ decreases slowly with time, because the hydrogen bubbles prevent further production of $\ce{OH-}$ ions. The current is proportional to the surface of the electrode. As the surface of the plate where $\ce{OH-}$ ions can be created decreases, the current decreases.
{ "domain": "chemistry.stackexchange", "id": 16261, "tags": "inorganic-chemistry, electrochemistry, redox" }
What is the physical relevance of the classical limit to a quantum field theory?
Question: We know the physical relevance of the classical limit of quantum mechanics quite well. However, if I take the classical limit of a quantum field theory, the answer is not so clear. Suppose I take the Hamiltonian for a free electron moving in one dimension, which is $\hat H=\hat p^2/2m$. The classical limit of this theory is the Hamiltonian $H=p^2/2m$, which is that of a point particle moving at a constant velocity. However, suppose that I now take the Hamiltonian for $N$ free electrons, which is $\hat H=\int dx\,\psi^\dagger(x)(\hat p^2/2m)\psi(x)$. The classical limit of this theory is the Hamiltonian $H=\int dx\,\bar \psi(x)(-\hbar^2\partial^2/2m)\psi(x)$. Shouldn't we just get $N$ point particles moving at a constant velocity? Instead, we get this weird one-dimensional wave... Answer: It all depends on the scaling, i.e. on which parameter is taken to be small (large) in your effective description of the system. It is customary to interpret the semiclassical parameter to be a quantity "equivalent" to $\hbar$, but going to zero. This is convenient, for in the classical scale of energies Planck's constant is comparatively very small. Equivalently, we may think of the semiclassical parameter as being representing the inverse of the "characteristic frequency" of the particle's wave (and therefore the classical limit is the limit of very high frequencies). Another different parameter is the number of particles $N$. We may think of taking the limit $N\to\infty$ in a given $N$-particle system. It turns out that, mathematically, this is similar to taking the classical limit but the physical interpretation is quite different. So, let's consider a system of $N$ free non-relativistic bosons of mass $1/2$. 
Their Hamiltonian can be written as $$H_N=\sum_{j=1}^N -\hbar^2\Delta_{x_j}\; ;$$ where $\Delta_x$ is the Laplacian, or equivalently in "second quantization" notation $$H_N = -\hbar^2\int_{\mathbb{R}^3}a^*(x)\Delta_x a(x)dx\; \Bigr\rvert_{L^2_s(\mathbb{R}^{3N})}\; ;$$ where the restriction to $L^2_s(\mathbb{R}^{3N})$ means that we are just considering the sector with $N$ particles (since in fact the number of particles is here conserved, and it's not so useful to consider the whole Fock space). Now if you take the limit $N\to\infty$, you indeed get an energy functional (not an operator anymore, hence a "classical" infinite dimensional field theory) of the type $$E(u)=-\hbar^2\int_{\mathbb{R}^3}\bar{u}(x)\Delta_x u(x)dx\; ;$$ where $u\in L^2(\mathbb{R}^3)$ is the "classical" (more properly, mean field) variable corresponding to the annihilation operator valued distribution $a(x)$. The interpretation is of a free quantum mean-field theory: $u$ represents the effective wavefunction of a single particle under the effect of all the other particles combined (which in this case reduces to a free particle, since there is no interaction). With a two-body weak (in the limit $N\to\infty$) interaction you would have got the Hartree energy functional and the corresponding Hartree dynamics. If you take the limit $\hbar\to 0$ instead, you get $N$ free classical particles, with energy function $$E(\vec{x},\vec{p})=\sum_{j=1}^N p_j^2\; ;$$ where $p_j\in\mathbb{R}^3$ is the momentum of the $j$-th particle. As you see, the two limits have quite different physical interpretations, even if they're actually mathematically pretty similar. I remark that they can also be combined in a "commutative" way; in the end you would get a classical Vlasov-type evolution for infinitely many classical particles (whether you take $\hbar\to 0$ first and then $N\to\infty$, or vice versa). The situation is different if you consider a "true" QFT, where particles can be created or destroyed, e.g.
photons in QED. There, the classical limit $\hbar\to 0$ directly yields, as expected, a classical field theory. On the other hand, the mean field is not so meaningful since there are quantum states with an undefined (possibly very large) number of particles; and since the number is not conserved, even if you start with a fixed number of particles, after evolution you get a state with a non-zero probability of having different numbers of particles.
{ "domain": "physics.stackexchange", "id": 29752, "tags": "quantum-mechanics, quantum-field-theory, classical-mechanics" }
Most accurate depiction of cortical homunculus?
Question: I was looking at cortical homunculus and I realized there are several different pictures and they don't quite agree. For instance: http://wellbeing.media.mit.edu/2014/02/21/mindfulness-neuroimaging-and-neurofeedback/ http://nawrot.psych.ndsu.nodak.edu/Courses/465Projects11/PLS/4Thesomatosensoryandmotorcortices.htm One area, for instance, is the feet, which I feel should have a fairly large (not as big as the hands) area in the brain, mainly from the personal experience of having more sensations in the feet, which like the hands have many nerve endings, though obviously much less fine motor ability. But that's just my speculation, and I'd like an accurate depiction so I can start from there and make sense of this. Thank you. Answer: Be sure to distinguish exactly what the diagram is showing. Your first reference is a sensory cortex mapping. The second reference is a motor cortex mapping. The text of the second reference is admittedly confusing about this; a search for "Wilder Penfield motor cortex" verifies it. It seems reasonable that the feet would show a sensory response similar to the hands. Stepping on a nail is just about as bad as bumping your hand into one. Motor control is different. Muscles are not all innervated the same. In some of the large muscles, say in the legs, one nerve can control hundreds of cells. In other cases, particularly in the face, one neuron might control only six muscle cells. Thus a greater number of neurons, and a larger portion of the brain, is needed for control in these delicately controlled areas. It would make sense that a greater part of our brain would be needed for motor control in our hands versus our feet.
{ "domain": "biology.stackexchange", "id": 6182, "tags": "neuroscience, brain, sensation" }
Install library in ros
Question: I am wondering how I would incorporate a library in ROS. The library I am trying to install is called ulapi, to communicate with the KUKA robot we have in the lab. Thanks Update (7.13.16): Here is part of the code to which I am trying to add the ulapi library. Might be useful. #include <ros/ros.h> //library for type of message sent to gazebo topics #include <std_msgs/Float64.h> //libraries to control model in gazebo #include <geometry_msgs/Pose.h> #include <geometry_msgs/Twist.h> #include <gazebo_msgs/ModelState.h> #include <gazebo_msgs/SetModelState.h> #include <cstdlib> #include <robsim_gazebo/ulapi.h> int main (int argc, char** argv) { //initialize node ros::init(argc, argv, "publisher_node"); //create node handle ros::NodeHandle rob_pose; ros::NodeHandle a1a2; ros::NodeHandle a2e1; ros::NodeHandle e1a3; ros::NodeHandle a3a4; ros::NodeHandle a4a5; ros::NodeHandle a5a6; ros::NodeHandle a6tp; Here is the part of my CMakeLists.txt file: add_library(unix_ulapi src/unix_ulapi.c) #target_link_libraries(publisher_node unix_ulapi) add_library(ulapi_getopt src/ulapi_getopt.c) #target_link_libraries(publisher_node ulapi_getopt) Originally posted by justinkgoh on ROS Answers with karma: 25 on 2016-07-11 Post score: 0 Original comments Comment by gvdhoorn on 2016-07-12: Are you just asking how to install the library (.so file) in some specific location, or do you want to integrate the library into a ROS node? The first is a file copy action, the second requires you to program a node (a C++ program, for instance) and call into the library's functions. Comment by justinkgoh on 2016-07-13: The files I have are C and H. I am used to installing libraries for Arduino and was wondering how to do it for ROS. Comment by senreot on 2016-07-13: After adding the libraries you need to link them to the executable.
Try to uncomment the target_link_libraries() lines or add the next line at the bottom of the CMakeLists: target_link_libraries(publisher_node ulapi_getopt unix_ulapi) Comment by justinkgoh on 2016-07-14: When I do that I get an error that says this: CMake Error at robsim/robsim_gazebo/CMakeLists.txt:214 (target_link_libraries): Cannot specify link libraries for target "publisher_node" which is not built by this project. Comment by gvdhoorn on 2016-07-14: off-topic, but what I find much more interesting: what are you doing with robsim, gazebo and ulapi? :) Comment by senreot on 2016-07-14: This is because the executable has another name. Try replacing publisher_node with the name of the executable. If you can show us the complete CMakeLists we can help you better ;) Answer: Can you be more specific about "install"? Check for the library with CMake, then compile your code and link it. This is just a sample; modify it according to your needs: find_package( PkgConfig REQUIRED) pkg_check_modules( ulapi REQUIRED ulapi ) add_executable( my_exec src/my_source.cpp ) target_link_libraries( my_exec ${ulapi_LIBRARIES} ) Originally posted by cagatay with karma: 1850 on 2016-07-12 This answer was ACCEPTED on the original site Post score: 1 Original comments Comment by justinkgoh on 2016-07-13: What I mean by install is that I want to use the .c and .h files from ulapi (which is what we use to talk to the robot arms we have in the lab) in my C++ code I have written. I am familiar with the way you install libraries for Arduino, if that is of any help.
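Putting the answer and the comments together: the error from the comments ("Cannot specify link libraries for target ... which is not built by this project") occurs when target_link_libraries() names an executable that is not declared in the same CMakeLists.txt. A sketch of how the fragments from the question might fit together (the source file name src/publisher_node.cpp is an assumption):

```cmake
# Build the bundled ulapi sources as libraries (as in the question).
add_library(unix_ulapi src/unix_ulapi.c)
add_library(ulapi_getopt src/ulapi_getopt.c)

# The executable must be declared in this same CMakeLists.txt,
# and before it can be linked against.
add_executable(publisher_node src/publisher_node.cpp)
target_link_libraries(publisher_node
  ${catkin_LIBRARIES}
  unix_ulapi
  ulapi_getopt
)
```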
{ "domain": "robotics.stackexchange", "id": 25207, "tags": "ros" }
Compute the two-dimensional DFT
Question: Compute the two-dimensional DFT [4x4] for the following 4x4 image $ \begin{matrix} 0.5 & 0.5 & 0.5 & 0.5 \\ 0.5 & 0.5 & 0.5 & 0.5\\ 0.5 & 0.5 & 0.5 & 0.5\\ 0.5 & 0.5 & 0.5 & 0.5 \end{matrix} $ I know that the DFT is separable by dimensions – one can calculate 4 vertical transforms first, then 4 horizontal ones. For each row we get [2 0 0 0] in the first and third row and zero elsewhere. For each column we get [1 0 1 0] in the first and third column and zero elsewhere. How do I obtain the two-dimensional DFT from these two one-dimensional DFTs? Answer: No, you are not doing the separation correctly: The horizontal 1D-DFT of the rows of the input will be: $ H_1 = \begin{matrix} 2 & 0 & 0 & 0 \\ 2 & 0 & 0 & 0 \\ 2 & 0 & 0 & 0 \\ 2 & 0 & 0 & 0 \\ \end{matrix} $ and the vertical 1D-DFT of the columns of $H_1$ will be: $ H_2 = \begin{matrix} 8 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0\\ 0 & 0 & 0 & 0\\ 0 & 0 & 0 & 0\\ \end{matrix} $ which is equivalent to the 2D-DFT of the original input.
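The two-pass separable computation can be checked numerically, e.g. with numpy (an illustration, not part of the original answer):

```python
import numpy as np

img = np.full((4, 4), 0.5)  # the constant 4x4 image from the question

# Separable computation: 1-D DFT of every row, then of every column.
h1 = np.fft.fft(img, axis=1)  # every row becomes [2, 0, 0, 0]
h2 = np.fft.fft(h1, axis=0)   # only the DC term H2[0, 0] = 8 survives

# Direct 2-D DFT for comparison.
direct = np.fft.fft2(img)
```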
{ "domain": "dsp.stackexchange", "id": 7141, "tags": "dft, 2d" }
Review on my multiple JQuery Slider on a page
Question: I have successfully placed multiple jQuery UI sliders on a page, along with their translation text. You can check it live at: http://outsourcingnepal.com/projects/jQuery%20slider/ I have my jQuery code as: var arrLabel1 = new Array('Not Good', 'OK', 'Average', 'Above Average', 'Excellent', 'Outstanding'); $(function() { $( ".slider_control" ).slider({ value:0, min: 0, max: 5, step: 1, slide: function( event, ui ) { $( "#slider_" + $(this).attr('rel') ).val( ui.value ); $( "#label_" + $(this).attr('rel') ).text( arrLabel1[ ui.value ] ); } }).each(function(index){ $( "#slider_" + (index + 1) ).val( $( this ).slider( "value" ) ); $( "#label_" + (index + 1) ).text( arrLabel1[ $(this).slider( "value" ) ] ); }); }); Now I would like to optimize it so that I can be independent of the rel attribute. MY HTML: <ul class="slider-container"> <li class="ui_slider"> <div class="title">Overall: <span id="label_1"></span></div> <input type="hidden" name="overall_review" id="slider_1" /> <div class="slider_control" rel="1"></div> </li> <li class="ui_slider"> <div class="title">Cleanliness: <span id="label_2"></span></div> <input type="hidden" name="overall_review" id="slider_2" /> <div class="slider_control" rel="2"></div> </li> <li class="ui_slider"> <div class="title">Facilities: <span id="label_3"></span></div> <input type="hidden" name="overall_review" id="slider_3" /> <div class="slider_control" rel="3"></div> </li> <li class="ui_slider"> <div class="title">Location: <span id="label_4"></span></div> <input type="hidden" name="overall_review" id="slider_4" /> <div class="slider_control" rel="4"></div> </li> <li class="ui_slider"> <div class="title">Quality of Service: <span id="label_5"></span></div> <input type="hidden" name="overall_review" id="slider_5" /> <div class="slider_control" rel="5"></div> </li> <li class="ui_slider"> <div class="title">Room: <span id="label_6"></span></div> <input type="hidden" name="overall_review" id="slider_6" /> <div class="slider_control"
rel="6"></div> </li> <li class="ui_slider"> <div class="title">Value of Money: <span id="label_7"></span></div> <input type="hidden" name="overall_review" id="slider_7" /> <div class="slider_control" rel="7"></div> </li> </ul> Answer: In my opinion your code has two problems: You unnecessarily fill your HTML with empty elements which you could just create in your script. Changing that would remove the necessity for the rel attribute. More importantly (even if there are people that may disagree), you have unnecessarily made your form dependent on JavaScript. It would be much better to have select elements in your HTML, which you can replace with sliders. That way even users without JavaScript can use your form unrestricted. I've written an example of how I would do it: $(".ui_slider select").each(function() { var select = $(this); // Cache a reference to the current select var sliderDiv = $("<div></div>"); // create a div for the slider var displayLabel = $("<span></span>"); // create a span to display current selection if (select[0].selectedIndex < 0) // Make sure that an item is selected select[0].selectedIndex = 0; select .hide() // hide the select .before( // Insert display label before the select displayLabel .text(select.find("option:selected").text()) // and set its default text ) .after( sliderDiv // Insert the slider div after the select .data("select", select) // store a reference to the select .data("label", displayLabel) // store a reference to the display label .slider({ max: select.find("option").length - 1, // set to number of items in select slide: function(event, ui) { var select = $(this).data("select"); select[0].selectedIndex = ui.value; // update the select $(this).data("label").text( // Update the display label select.find("option:selected").text() ); } }) ); }); HTML (repeat as needed): <div class="ui_slider"> <label for="overall">Overall: </label> <select name="overall" id="overall"> <option value="0">Not Good</option> <option
value="1">OK</option> <option value="2">Average</option> <option value="3">Above Average</option> <option value="4">Excellent</option> <option value="5">Outstanding</option> </select> </div> Working sample: http://jsfiddle.net/YfqCx/2/
{ "domain": "codereview.stackexchange", "id": 171, "tags": "javascript, jquery, jquery-ui" }
Multi-agent randomized behavior
Question: In Artificial Intelligence: A Modern Approach, 3rd edition, page 43, in the last line of the Single vs. Multi-agent section, the author says: In some competitive environments, randomized behavior is rational because it avoids the pitfalls of predictability. Can someone help me understand this line? Specifically, what does he mean by "pitfalls of predictability"? Answer: Think of playing the game rock-paper-scissors. If you were using a deterministic algorithm to choose which of rock, paper, or scissors to play, you'd always choose the same one. Thus, if I played you, I could quickly figure out how to win every time, because you are predictable. So, it is a better strategy for you to randomize your choice: choose uniformly at random among the three options every time. That makes you unpredictable, and increases your chances of winning the game.
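The point can be illustrated with a toy simulation (the particular predictor and names are mine, not from the book): an opponent who observes your past moves beats a deterministic player almost every round, but gains nothing against a uniformly random one.

```python
import random

MOVES = ["rock", "paper", "scissors"]
BEATS = {"rock": "scissors", "paper": "rock", "scissors": "paper"}

def exploit(history):
    """Predict the opponent's next move as a repeat of their last one,
    and play the counter to that prediction."""
    if not history:
        return random.choice(MOVES)
    predicted = history[-1]
    return next(m for m in MOVES if BEATS[m] == predicted)

def play(opponent_move, rounds=300):
    """Return the exploiting player's win rate against an opponent."""
    history, wins = [], 0
    for i in range(rounds):
        opp = opponent_move(i, history)
        mine = exploit(history)
        if BEATS[mine] == opp:
            wins += 1
        history.append(opp)
    return wins / rounds

random.seed(0)
# A deterministic opponent (always rock) is fully exploitable...
always_rock = play(lambda i, h: "rock")
# ...while a uniformly random opponent wins/loses/draws about equally.
uniform = play(lambda i, h: random.choice(MOVES))
```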
{ "domain": "cs.stackexchange", "id": 15215, "tags": "artificial-intelligence" }
Edit one table from another table in a Windows Form DataGridView
Question: I have a Sales Order Program that allows each customer to have their own Part Number for one of our Stock Items. The Customer Part Numbers are stored in a separate table. I need to make this as low impact on the User/UI as possible. As a working model I have come up with a related demo using Person for the Order Table and a Table called NickNames that should represent the Customer Part Number Table. public class Person { private string _sobriquet; public Person() { NickName = new NickName(Id); } [Key] public int Id { get; set; } public string FirstName { get; set; } public string LastName { get; set; } public int NickNameId { get; set; } [ForeignKey("NickNameId")] public virtual NickName NickName { get; set; } public string Sobriquet { get { return _sobriquet; } set { _sobriquet = value; NickName.PersonId = Id; NickName.Sobriquet = _sobriquet; } } } public class NickName { public NickName(int personId) { PersonId = personId; } protected NickName() { } [Key] public int PersonId { get; set; } public string Sobriquet { get; set; } } This does what I wanted: by changing the Sobriquet property on the Person table, the NickName table is updated. But I am sure there is a better or cleaner way.
And then of course there is this golden rule to keep unexpected behavior and side effects away from property setters (and getters, for that matter). By setting Sobriquet, one expects to set Sobriquet only. It is not expected that NickName.PersonId is set too. Finally, in the two instances where you set NickName.PersonId, i.e. in the property setter and in new NickName(Id), the value of Person.Id can be anything. It's highly unlikely that in every imaginable scenario, Person will have its Id value on time. I'm even pretty confident that in the constructor this will never be the case. So, as usual, if you want your view to show anything that differs from the model classes, use a view model (or DTO) and map it from/to the model classes, for example using AutoMapper.
{ "domain": "codereview.stackexchange", "id": 18692, "tags": "c#, entity-framework" }
Sign convention for the Minkowski metric $\eta_{\mu\nu}$
Question: In special relativity, the proper time is defined by $$c^2\,d\tau^2 = c^2\,dt^2-(dx^2+dy^2+dz^2).$$ One usually introduces a matrix $\eta$ to represent it. I have seen two sign conventions. One has three minuses: $$\eta_3=\left(\begin{array}{cccc} 1 & 0 & 0 & 0 \\ 0 & -1 & 0 & 0 \\ 0 & 0 & -1 & 0 \\ 0 & 0 & 0 & -1 \end{array}\right),$$ and the other one has only one, so $\eta_1=-\eta_3$. Thus, in units where $c=1$, $d\tau^2=dx^\mu\eta_{3,\mu\nu}dx^\nu=-dx^\mu\eta_{1,\mu\nu}dx^\nu$, summing over repeated indices. I have been told this is an issue in the physics community, and that relativity typically uses the $(-,+,+,+)$ East coast sign convention while particle physics/quantum field theory often uses the $(+,-,-,-)$ West coast sign convention. Why is that? Answer: Relativists tend to use the proper time, $d\tau$, and the proper distance, $ds$, interchangeably. If you're working with proper time you'd expect the equation for it to look like: $$ d\tau^2 = dt^2 + \text{other terms} $$ while if you're working with proper distance you expect: $$ ds^2 = dx^2 + dy^2 + dz^2 + \text{other terms} $$ The sign problem comes about because the spacetime signature requires that $ds^2 = -c^2d\tau^2 $. So you end up with: $$ d\tau^2 = -\frac{ds^2}{c^2} = dt^2 - \tfrac{1}{c^2}\left( dx^2 + dy^2 + dz^2 \right) $$ or: $$ ds^2 = -c^2d\tau^2 = -c^2dt^2 + dx^2 + dy^2 + dz^2 $$ Both describe identical physics, so which you use is just personal preference.
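Since $\eta_1 = -\eta_3$, the two conventions only flip the overall sign of the interval; a quick numerical illustration (with $c = 1$ and arbitrary displacement values):

```python
import numpy as np

eta3 = np.diag([1.0, -1.0, -1.0, -1.0])  # (+,-,-,-) "West coast"
eta1 = -eta3                             # (-,+,+,+) "East coast"

dx = np.array([2.0, 0.3, 0.4, 0.5])      # (dt, dx, dy, dz) with c = 1

dtau_sq = dx @ eta3 @ dx  # proper-time form: d(tau)^2
ds_sq = dx @ eta1 @ dx    # proper-distance form: ds^2 = -d(tau)^2
```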
{ "domain": "physics.stackexchange", "id": 50417, "tags": "metric-tensor, soft-question, relativity, conventions" }
Computer science for programmers
Question: I'm a self-taught programmer and have been coding for 8 years. Due to this experience, I'm already very familiar with the principles of programming (such as if-statements, classes, polymorphism, etc.). However, I never learned "computer science," only programming. What are some good resources for someone in my position to self-study computer science - that is, resources which move at a quick pace and assume basic programming knowledge? Answer: Computer Science is a multifaceted discipline - and Algorithms and Data Structures is important part of it. You can try free video-courses, like Algorithms, Part 1, from Princeton University - it's running right now. Another remarkable free video-course Algorithms: Design and Analysis, Part 1, from Stanford has finished recently - hopefully it will be repeated in the future. Part 2 of this course will start this Monday.
{ "domain": "cs.stackexchange", "id": 3049, "tags": "reference-request" }
Why is randomness a problem? (i.e. why do we care about derandomization?)
Question: I'm reading Aaronson's survey on P vs. NP, and I've come to understand that in CS theory, people really care about derandomization results like P vs. BPP etc. My question is, what's the problem with randomness? If your algorithm is known to only require a polynomial number of random bits, then simply go ask a physicist to get the bits for you, which isn't a problem since you only need a tractable number of them, and then write them on the Turing machine tape and you're good! The goal of complexity theory is to figure out what we can compute in this universe, right? Well, this universe has randomness, right? So why do we care about derandomization? Theoretical and practical answers are both welcome. Answer: Complexity theory is a mathematical theory which aims at addressing one shortcoming of computability theory, namely, it takes into account the use of resources. While it is true that in its early days it aimed to capture the notion of "practical computation" (even particular flavors such as parallel computation, supposedly captured by NC), it has since drifted apart and taken off from reality. As an example you can take higher steps on the polynomial hierarchy, higher complexity classes such as PSPACE, classes defined using alternation, and so on. Indeed, much of this material dates to the early days of complexity theory, showing that it lost touch with reality quite quickly. Traditionally the two most important resources studied by complexity theory are time and space. However, other resources have also interested complexity theorists, for example alternation and randomness (not to mention the world of circuit complexity). Philosophically speaking, it is a fascinating question whether randomness dramatically reduces the computation time of some problems, or whether the gain is only polynomial (as hinted by the conjecture P=BPP). However, it does not seem to have any practical relevance, though not for the reason you mention. 
In practice there is no need for actual physical randomness (other than for cryptographic purposes), and pseudorandom number generators work well enough.
{ "domain": "cs.stackexchange", "id": 8243, "tags": "complexity-theory, randomized-algorithms, randomness" }
Cold welding of a metallic surface
Question: I have heard of cold welding, it's said that it's only possible if the surface is very clean. Can cold welding be accomplished by shearing a metal object and then immediately touching the newly exposed surface together again? If this is possible, shouldn't all hairline cracks in a metal object immediately fix themselves? Answer: Firstly, I suspect that oxidation will throw a spanner into any such plan. For metals like aluminum, which have a very high affinity for oxygen, a "virgin" surface will begin to tarnish almost immediately. The second problem is that metals have microstructure. Neighboring crystals in a polycrystalline aggregate such as a metal piece have to satisfy certain restrictions, you can't put them back willy-nilly. So cold welding never gives you back the exact same structure, there must be some changes locally. If, however, you can prevent the surfaces from tarnishing and press the surfaces of the broken pieces together - it will cause plastic flow locally and cause the pieces to `weld'.
{ "domain": "physics.stackexchange", "id": 16559, "tags": "experimental-physics, everyday-life, material-science, metals" }
On the Fokker-Planck equation: deriving the transition PDF for small times
Question: I report below (part of) page 73 of the book The Fokker-Planck Equation, by H. Risken We now derive an expression for the transition probability density for small $\tau$ \begin{equation}\tag{1} p(x,t+\tau|x', t)=\bigg(1+L_{\mathrm{FP}}(x,t)\,\tau+O(\tau^2)\bigg)\delta(x-x') \end{equation} with \begin{equation}\tag{2} L_{\mathrm{FP}}(x,t):=-\frac{\partial}{\partial x}D_1(x,t)+\frac{\partial^2}{\partial x^2}D_2(x, t) \end{equation} We get up to corrections of the order $\tau^2$: \begin{equation} \begin{aligned} p(x,t+\tau|x', t)&=\bigg(1-\frac{\partial}{\partial x}D_1(x',t)\tau+\frac{\partial^2}{\partial x^2}D_2(x', t)\tau\bigg)\delta(x-x')\\ &=\exp\bigg(-\frac{\partial}{\partial x}D_1(x',t)\tau+\frac{\partial^2}{\partial x^2}D_2(x', t)\tau\bigg)\delta(x-x') \end{aligned} \tag{3} \end{equation} I don't get the last equality in equation $\mathrm{(3)}$. At first glance, it would appear to be just a replacement. After all $\exp(L_{\mathrm{FP}}\,\tau)\simeq 1+L_{\mathrm{FP}}\,\tau$ for $\tau\to 0$. But is it? After a few other steps, he writes the solution as \begin{equation}\tag{4} p(x,t+\tau|x',t)=\frac{1}{\sqrt{4\pi\,D_2(x',t)\tau}}\exp\left(-\frac{[(x-x')-D_1(x',t)\tau]^2}{4D_2(x', t)\tau}\right) \end{equation} and then he says for drift and diffusion coefficients independent of $x$ and $t$, $\mathrm{(4)}$ is not only valid for small $\tau$, but for arbitrary $\tau>0$ (the last line in equation $\mathrm{(3)}$ is then the formal solution). So... did he just replace the approximate solution with the exact one? One is not supposed to know it ahead of time. Answer: It is more than just a replacement of $1+L\tau\rightarrow \exp (L\tau)$. It is the following: within a region of $x$ and $t$ in which $D_{1,2}(x,t)$ can be reliably approximated as a constant, the transition probability will be given by $(4)$. In other words, if we restrict our attention to such a confined region, we can reliably approximate the transition probability as $(4)$.
An analogous situation would be solving the differential equation $y'(x) = \sin(x) y(x)$ near the point $x=\pi/2$. The exact solution to this differential equation is $y(x)=C e^{-\cos(x)}$, and if we expand it about $x=\pi/2$ we get $$y(x)\approx C e^{-\left(\frac{\pi}{2}-x\right)}$$ However in the vicinity of $x=\pi/2$ we can reliably approximate $\sin (x)\approx 1$, and therefore $y(x)\approx C' e^{x}$ near this point. Notice that from this approximate solution near $x=\pi/2$, you cannot deduce anything about $y(x)$ outside this region of validity.
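For constant coefficients this can be checked numerically: the Gaussian transition density with mean $x'+D_1\tau$ and variance $2D_2\tau$, $p=(4\pi D_2\tau)^{-1/2}\exp\!\big(-[(x-x')-D_1\tau]^2/4D_2\tau\big)$, should satisfy $\partial_\tau p = -D_1\,\partial_x p + D_2\,\partial_x^2 p$. A pure-Python finite-difference sketch (the coefficient values below are arbitrary assumptions):

```python
import math

D1, D2, x0 = 0.3, 0.5, 0.0   # assumed constant drift, diffusion, start point

def p(x, tau):
    """Gaussian transition density for constant D1, D2."""
    return math.exp(-((x - x0) - D1 * tau) ** 2 / (4 * D2 * tau)) \
           / math.sqrt(4 * math.pi * D2 * tau)

def residual(x, tau, h=1e-4):
    """Central-difference check of d_tau p + D1 d_x p - D2 d_x^2 p."""
    dt = (p(x, tau + h) - p(x, tau - h)) / (2 * h)
    dx = (p(x + h, tau) - p(x - h, tau)) / (2 * h)
    dxx = (p(x + h, tau) - 2 * p(x, tau) + p(x - h, tau)) / h ** 2
    return dt + D1 * dx - D2 * dxx

for pt in [(0.2, 1.0), (1.0, 2.0), (-0.5, 0.7)]:
    print(abs(residual(*pt)) < 1e-5)  # True at every sample point
```

This is exactly the sense in which the small-$\tau$ formula becomes exact when the coefficients are constant: the Gaussian solves the full equation, not just its short-time expansion.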
{ "domain": "physics.stackexchange", "id": 93394, "tags": "statistical-mechanics, stochastic-processes, brownian-motion, econo-physics" }
Project Euler # 7 in C++
Question: I attempted the Project Euler question #7 in C++ ("What is the 10 001st prime number?"). I took into account what I've been told about primes to create the most efficient algorithm I could to solve it. I would like to know if there is anything else I can do to check even fewer numbers for an even more efficient algorithm. Also, I'm wondering if creating and using functions would be good practice in this scenario, and how I could go about using functions more in my coding. #include <iostream> #include <cmath> #include <vector> using namespace std; // What is the 10 001st prime number? int main() { int countPrimes = 2; int checkIfPrime = 6; // multiples of 6 // sees if numbers adjacent to 6 are prime bool lessOnePrime = true; bool plusOnePrime = true; vector<int> primeVector; // for holding primes to check for future primes //account for primes 2 and 3 beforehand: primeVector.push_back(2); primeVector.push_back(3); while (countPrimes < 10001) // checks until the 10001st prime has been found { for (int i = 0; primeVector[i] <= sqrt(checkIfPrime + 1); i++) //checks only prime factors up to the square root { // checks if numbers adjacent to 6 have prime factors // if so, makes their respective booleans false if ((checkIfPrime - 1) % primeVector[i] == 0) { lessOnePrime = false; } if ((checkIfPrime + 1) % primeVector[i] == 0) { plusOnePrime = false; } // if both adjacent numbers to multiples of 6 aren't prime, exits loop if (lessOnePrime == false && plusOnePrime == false) { break; } } if (lessOnePrime) { primeVector.push_back(checkIfPrime - 1); countPrimes++; // cout << "#" << primeVector.size() << " prime number:\t" << checkIfPrime - 1 << endl; } if (plusOnePrime) { primeVector.push_back(checkIfPrime + 1); countPrimes++; // cout << "#" << primeVector.size() << " prime number:\t" << checkIfPrime + 1 << endl; } lessOnePrime = true; plusOnePrime = true; checkIfPrime += 6; } cout << "The " << countPrimes << "st prime number is:\n" << primeVector[countPrimes - 1] << endl;
return 0; } // end main() Answer: I would like to know if there is anything else I can do to check even fewer numbers for an even more efficient algorithm. The general recommendation is to use a sieve for this. The one that comes to mind is the Sieve of Eratosthenes. That said, there are some other comments that I can make on the code as it stands. int checkIfPrime = 6; // multiples of 6 Often, your code will read more naturally if you give variables noun names. This tells what you should do with the variable. I'd prefer a name like primeCandidate which describes what it is. Or should be, since 6 isn't actually a number that you want to check. int primeCandidate = 7; It can save calculations if you start with 7 rather than 6. Rather than saying primeCandidate-1 and primeCandidate+1, you can just say primeCandidate-2 and primeCandidate. Note that you need the larger number slightly more often. bool lessOnePrime = true; bool plusOnePrime = true; Since you don't use these outside the while loop nor across iterations, you should define these inside the loop instead. Of course, as we'll discuss later, these aren't actually necessary. vector<int> primeVector; This is called Hungarian notation, where you put the type in the name. This can cause difficulty in changing code later. For example, what if you wanted to change from std::vector to a type called List? Should you rename this variable as well? A better name might be primes which works regardless of type. while (countPrimes < 10001) // checks until the 10001st prime has been found You don't have to track a counter manually. You can just say while (primes.size() < 10001) The type already maintains a count for you. You might as well use it rather than duplicate it manually. for (int i = 0; primeVector[i] <= sqrt(checkIfPrime + 1); i++) //checks only prime factors up to the square root You don't need to start at 0 each time. 
By starting with 5 and 7 and always incrementing by 6, you ensure that it will never be divisible by 2 or 3. So you can skip the first two elements in the vector. for (int i = 2, n = sqrt(primeCandidate); primes[i] <= n; i++) You don't need to calculate the square root on each iteration. You can do it once at the beginning of the loop. You don't need to comment saying that you are only checking to the square root. The code says this. You might want to comment on why it is safe to only check up to the square root. { if ((checkIfPrime - 1) % primeVector[i] == 0) { lessOnePrime = false; } if ((checkIfPrime + 1) % primeVector[i] == 0) { plusOnePrime = false; } if (lessOnePrime == false && plusOnePrime == false) { break; } } if (lessOnePrime) { primeVector.push_back(checkIfPrime - 1); countPrimes++; // cout << "#" << primeVector.size() << " prime number:\t" << checkIfPrime - 1 << endl; } if (plusOnePrime) { primeVector.push_back(checkIfPrime + 1); countPrimes++; // cout << "#" << primeVector.size() << " prime number:\t" << checkIfPrime + 1 << endl; } lessOnePrime = true; plusOnePrime = true; This is more complicated than it needs to be. It would be much simpler to use two function calls instead of this for loop. if (isPrime(primeCandidate - 2, primes)) { primes.push_back(primeCandidate - 2); } if (isPrime(primeCandidate, primes)) { primes.push_back(primeCandidate); } Yes, this repeats the for loop, but it saves you the internal scaffolding. It also means that if one is a prime and the other isn't, it doesn't keep trying to divide the non-prime one by each potential factor. That loop can exit early. You'd have to profile to be sure, but I'd expect this to be faster. The function would look something like bool isPrime(const int candidate, const std::vector<int>& primes) { for (int i = 2, n = sqrt(candidate); primes[i] <= n; i++) { if (candidate % primes[i] == 0) { return false; } } return true; } I haven't tried to compile it though. 
Note that it might be more efficient to use an iterator rather than an index variable. The notation would be more complicated in this case though. I'll leave writing the actual function to you. Note that even with the overhead of writing the function, you'll still likely shorten the code a bit. You do a lot of effort just to make the single for loop work. checkIfPrime += 6; } The only change I'd make to this is the variable name. Although if you wanted to simplify the code in the loop at the cost of making this more complicated, you could change this to primeCandidate += increment; increment = 6 - increment; } Where increment would be initialized as int increment = 2; And you'd start primeCandidate at 5. You'd also only do one isPrime check per iteration of the while loop.
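To make the sieve recommendation concrete, here is a short sketch (in Python rather than C++, purely as an illustration of the algorithm) of finding the 10 001st prime with a Sieve of Eratosthenes. The fixed limit of 120 000 is an assumption chosen to comfortably exceed $p_{10001}$ (by the prime number theorem, $p_n \approx n(\ln n + \ln\ln n) \approx 115\,000$):

```python
def nth_prime(n, limit=120000):
    """Sieve of Eratosthenes up to a fixed bound. The bound is an
    assumption chosen to exceed the 10 001st prime (~115 000)."""
    composite = bytearray(limit + 1)
    count = 0
    for i in range(2, limit + 1):
        if not composite[i]:
            count += 1
            if count == n:
                return i
            # mark i*i, i*i+i, ... as composite
            composite[i * i::i] = b"\x01" * len(range(i * i, limit + 1, i))
    raise ValueError("limit too small for n")

print(nth_prime(10001))  # 104743
```

The sieve does a constant amount of work per cell, so it beats trial division by stored primes once the target index is large, at the cost of having to guess an upper bound.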
{ "domain": "codereview.stackexchange", "id": 14604, "tags": "c++, programming-challenge" }
Big Data - Data Warehouse Solutions?
Question: I have a dozen databases that store different data, and each of them is 100 TB in size. All of the data is stored in AWS services such as RDS, Aurora and Dynamo. Many times I find myself needing to perform "joins" across databases, for example a student ID that appears in multiple databases with data that I want to gather. The joins are usually done after data is streamed out of the database, since the data is not located in the same database, and this sometimes requires hours just for thousands of records. Can services such as AWS Redshift or Google BigQuery allow you to somehow "import" data from many data sources so that you can then perform SQL queries to join them? How about Hadoop and Hive, where we stream data out of the database, place it as files in Hadoop, and let Hive query the data? Answer: Can services such as AWS Redshift or Google BigQuery allow you to somehow "import" data from many data sources so that you can then perform SQL queries to join them? It depends on your data and the type of joins you are performing. But, yeah, databases like Redshift can definitely perform better in your use case as they are column-based databases. Read this post and the associated answers for understanding how columnar data stores handle data. How about Hadoop and Hive? Hadoop + Hive is mostly a DIY version of what Redshift gives you on the cloud.
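As a toy illustration of the "import into one engine, then join in SQL" pattern described here, the sketch below uses SQLite's ATTACH to join across two separate databases (SQLite and the made-up table names stand in for a real warehouse like Redshift or BigQuery; this is not their API):

```python
import sqlite3

# Two separate databases joined in one SQL statement via ATTACH.
# Table and column names are invented for illustration.
con = sqlite3.connect(":memory:")
con.execute("ATTACH ':memory:' AS grades")
con.execute("CREATE TABLE students (id INTEGER, name TEXT)")
con.execute("CREATE TABLE grades.results (student_id INTEGER, score REAL)")
con.executemany("INSERT INTO students VALUES (?, ?)",
                [(1, "Ada"), (2, "Grace")])
con.executemany("INSERT INTO grades.results VALUES (?, ?)",
                [(1, 91.0), (2, 88.5)])
rows = con.execute("""SELECT s.name, r.score
                      FROM students s
                      JOIN grades.results r ON r.student_id = s.id
                      ORDER BY s.name""").fetchall()
print(rows)  # [('Ada', 91.0), ('Grace', 88.5)]
```

The warehouse version of this is the same idea at scale: land the exports from each source system into one engine, then let the engine's planner execute the join instead of streaming records out and joining by hand.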
{ "domain": "datascience.stackexchange", "id": 2451, "tags": "bigdata, databases, redshift" }
Difference between reference and measurement signal?
Question: What differences can we find between various signals? We assume that we have a reference signal (determined theoretically) and a measured signal. Which methods exist for determining the similarities between them? The signals have no phase shift: if I calculate the cross-correlation between the signals, the maximum of the cross-correlation result (graph) is at lag 0, from which I conclude that there is no phase shift between them. An example of the signals is in the attached figure. The blue line is the reference, the red line is the measurement signal. Answer: One way to look at it is to just find the difference (error): $$ e[n] = y_{\tt ref}[n] - y_{\tt meas}[n] $$ where $y_{\tt ref}$ is the reference signal and $y_{\tt meas}$ is the measured signal. Then one can look at this error signal, $e$, as an innovation process. If the innovation process is "white" (noise), then all the information in the measured signal is in the reference. If the innovation process is not white, then there is some predictable part of the error that the reference signal is not taking account of. This predictable part might be a gain or a delay, or it might be some more complex filtering. You can use something like this to determine whiteness or otherwise.
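A minimal pure-Python sketch of this whiteness test (the signals and noise level are invented for illustration): when the innovation is white its lag-1 autocorrelation is near zero, while an unmodelled gain leaves predictable structure behind:

```python
import math
import random

random.seed(0)
n = 2048
y_ref = [math.sin(2 * math.pi * i / 64) for i in range(n)]

# Case 1: measurement = reference + white noise -> innovation is white
y_meas_1 = [y + 0.1 * random.gauss(0, 1) for y in y_ref]
e_white = [m - r for m, r in zip(y_meas_1, y_ref)]

# Case 2: measurement has an unmodelled gain of 1.5 -> predictable error
y_meas_2 = [1.5 * y for y in y_ref]
e_gain = [m - r for m, r in zip(y_meas_2, y_ref)]

def lag1_autocorr(e):
    """Normalized autocorrelation of e at lag 1."""
    mean = sum(e) / len(e)
    e = [v - mean for v in e]
    num = sum(a * b for a, b in zip(e, e[1:]))
    den = sum(v * v for v in e)
    return num / den

print(lag1_autocorr(e_white))  # near 0: no structure left in the error
print(lag1_autocorr(e_gain))   # near 1: structure the reference missed
```

In practice one would inspect the full autocorrelation sequence (or a whiteness test such as Ljung-Box), but the lag-1 value already separates the two cases here.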
{ "domain": "dsp.stackexchange", "id": 10618, "tags": "matlab, fourier-transform, autocorrelation, cross-correlation, correlation" }
Damped Simple Harmonic Motion Proof?
Question: I was reading about damped simple harmonic motion but then I saw this equation: $$-bv - kx = ma$$ $b$ is the damping constant. Then it said by substituting $dx/dt$ for $v$ and $d^2x/dt^2$ for $a$ we will have: $$ m\frac{\mathrm d^2x}{\mathrm dt^2}+b\frac{\mathrm dx}{\mathrm dt}+kx=0 $$ Then it says the solution of the equation is: (this is my problem) $$ x(t)=x_m \mathrm e^{-bt/2m}\cos(\omega't+\phi) $$ I don't understand the last part. How can we reach the $x(t)$? I know very little calculus; can you please explain how to solve this? Answer: The differential equation you quote is fairly standard in university physics/engineering courses but definitely requires some calculus to solve. As a first step, if you know how to differentiate products and chains, you can substitute the given solution into the differential equation and verify that it is indeed a solution. It contains two arbitrary constants (here $x_m$ and $\phi$) as you would expect of a second order differential equation (DE). If you wanted to solve it, you still need some kind of guess as to what the function might look like; here the trial function would be: $$ x(t) = Ae^{\lambda t} $$ and then substitute this parametrised solution into the original DE to obtain a quadratic in $\lambda$. $$ x(t) = Ae^{\lambda t} \implies {dx\over dt} = \lambda Ae^{\lambda t} = \lambda x(t) $$ and $$ {d^2x\over dt^2} = \lambda^2 Ae^{\lambda t} = \lambda ^2x(t) $$ so that $$ m\lambda^2 + b\lambda+k = 0, $$ since $x(t), A\neq 0$. Depending on the relative values of $m$, $k$ and $b$, you will get a quadratic with two, one, or no real solutions (in the last case, a complex-conjugate pair).
Given that you have two solutions $\lambda_1$ and $\lambda_2$, the intermediate result for $x(t)$ will be then $$ x(t) = Ae^{\lambda_1 t} + Be^{\lambda_2 t} $$ For the solution you have been given, the corresponding quadratic in $\lambda$ will have no real solutions, and so the $\lambda$s will be complex, and the real part of the solution will give you the damped exponential at the front of the solution, and the imaginary parts will give you a wave-like term. $x_m$ and $\phi$ will be related to $A$ and $B$ and are determined by boundary conditions.
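The quoted solution can also be verified numerically without any symbolic algebra. A pure-Python sketch (the parameter values are arbitrary and chosen underdamped, so that $\omega' = \sqrt{k/m - (b/2m)^2}$ is real) checks that $m\ddot x + b\dot x + kx$ vanishes to finite-difference accuracy:

```python
import math

m, b, k = 1.0, 0.4, 9.0     # assumed sample parameters (underdamped: b^2 < 4mk)
x_m, phi = 1.0, 0.3         # arbitrary amplitude and phase
gamma = b / (2 * m)
omega = math.sqrt(k / m - gamma ** 2)   # the shifted frequency omega'

def x(t):
    return x_m * math.exp(-gamma * t) * math.cos(omega * t + phi)

def residual(t, h=1e-5):
    """m x'' + b x' + k x, with derivatives taken by central differences."""
    d1 = (x(t + h) - x(t - h)) / (2 * h)
    d2 = (x(t + h) - 2 * x(t) + x(t - h)) / h ** 2
    return m * d2 + b * d1 + k * x(t)

for t in (0.5, 1.0, 2.0):
    print(abs(residual(t)) < 1e-4)  # True: the ODE is satisfied
```

This is exactly the "substitute and verify" step suggested in the answer, just done with numerical derivatives instead of the product and chain rules.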
{ "domain": "physics.stackexchange", "id": 30788, "tags": "homework-and-exercises, friction, harmonic-oscillator, oscillators" }
Why can I touch aluminum foil in the oven and not get burned?
Question: I cook frequently with aluminum foil as a cover in the oven. When it's time to remove the foil and cook uncovered, I find I can handle it with my bare hands, and it's barely warm. What are the physics for this? Does it have something to do with the thickness and storing energy? Answer: You get burned because energy is transferred from the hot object to your hand until they are both at the same temperature. The more energy transferred, the more damage done to you. Aluminium, like most metals, has a lower heat capacity than water (i.e. you), so transferring a small amount of energy lowers the temperature of aluminium more than it heats you (about 5x as much). Next, the mass of the aluminium foil is very low - there isn't much metal to hold the heat - and finally the foil is probably crinkled, so although it is a good conductor of heat you are only touching a very small part of the surface area and the heat flow to you is low. If you put your hand flat on an aluminium engine block at the same temperature you would get burned. The same thing applies to the sparks from a grinder or firework "sparkler": the sparks are hot enough to be molten iron - but are so small they contain very little energy.
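The argument can be put in rough numbers. All values below are order-of-magnitude assumptions, not measurements: the specific heat of aluminium is about 0.90 J/(g·K) versus about 4.18 J/(g·K) for water, and a scrap of foil has very little mass compared with the metal behind a patch of engine block:

```python
c_al, c_water = 0.90, 4.18   # J/(g K), approximate specific heats
dT = 200 - 37                # K: oven-hot aluminium cooling to skin temperature

def stored_heat(mass_g):
    """Energy (J) the aluminium releases while cooling through dT."""
    return mass_g * c_al * dT

m_foil = 0.5      # g, a palm-sized piece of foil (assumption)
m_block = 2000.0  # g, the metal behind a patch of engine block (assumption)

print(round(c_water / c_al, 1), "x: heat capacity of water vs aluminium")
print("foil: ", round(stored_heat(m_foil)), "J")
print("block:", round(stored_heat(m_block)), "J")
```

The foil holds on the order of tens of joules; the block holds thousands of times more, which is why one is safe to touch and the other is not.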
{ "domain": "physics.stackexchange", "id": 46766, "tags": "thermodynamics, everyday-life, conductors, metals" }
Inconsistency in between Nernst Equation and Gibbs Free Energy Equation
Question: Part 1 - Derivation of the Gibbs Free Energy Equation: [copied from this] Using the fundamental equations for the state function (and its natural variables): \begin{align} \mathrm{d}G &= -S\mathrm{d}T + V\mathrm{d}P\\ V &= \left(\frac{\partial G}{\partial P}\right)_T\\ \bar{G}(T,P_2) &= \bar{G}(T,P_1) + \int_{P_1}^{P_2}\bar{V} \mathrm{d}p \end{align} Here $\bar{x}$ represents molar $x$, i.e. $x$ per mole \begin{align} \bar{V} &= \frac{RT}{P}\\ \bar{G}(T,P_2) &= \bar{G}(T,P_1) + RT \ln\frac{P_2}{P_1} \end{align} Defining standard state as $P = \pu{1 bar}$ and $\bar{G}=\mu$ $$\mu(T,P)=\mu^\circ (T) + RT\ln \frac{P}{P_o}$$ consider the general gaseous reaction $\ce{a A + b B -> c C + d D}$ $$\Delta G=(c\mu_\ce{C} + d\mu_\ce{D} - a\mu_\ce{A} - b\mu_\ce{B})$$ for "unit progress" in reaction. Using $\mu_i = \mu^\circ_i + RT\ln \frac{P_i}{\pu{1 bar}}$ \begin{align} \Delta G &= (c\mu^\circ_\ce{C} + d\mu^\circ_\ce{D} - a\mu^\circ_\ce{A} - b\mu^\circ_\ce{B}) + RT \ln\frac{P_\ce{C}^c P_\ce{D}^d}{P_\ce{A}^a P_\ce{B}^b}\\ \Delta G &= \Delta G^\circ + RT\ln Q \end{align} Here, $Q=\frac{P_\ce{C}^c P_\ce{D}^d}{P_\ce{A}^a P_\ce{B}^b}$ Part 2 - Derivation of the Nernst Equation [Copied from this] Realise that (reversible ideal case) $\Delta G = W_\text{non-exp}$ (non-expansion). Therefore, in an ideal chemical cell, if the potential difference between the electrodes is $E$, the work to move one mole of electrons across the external circuit will be $FE$, which must be equal to the decrease in Gibbs free energy of the system. Hence for $n$ mole electrons transferred at the same potential, $W_\text{non-exp} = \Delta G = -nFE$.
The fact that $\Delta G = W_\text{non-exp}$ can be derived as follows: \begin{align} \mathrm{d}S &= \frac{\delta q}{T} && \text{(reversible case)}\\ \mathrm{d}U &= \delta q + \delta W_\text{non-exp} + \delta W_\text{exp}\\ \delta W_\text{non-exp} &= \mathrm{d}U - \delta W_\text{exp} - \delta q \\ &= \mathrm{d}U + p\,\mathrm{d}V - T\,\mathrm{d}S && \text{(const. $p$ and $T$)} \\ &= \mathrm{d}H - T\,\mathrm{d}S \\ &= \mathrm{d}G \end{align} Combining the above two, we get the Nernst Equation: $$E_{cell}=E^{\circ}_{cell}-\frac{RT}{nF}\ln Q$$ The $Q$ for this remains the same, i.e. $Q_p=\frac{P_\ce{C}^c P_\ce{D}^d}{P_\ce{A}^a P_\ce{B}^b}$ But we always use $Q_c=\frac{[C]^c[D]^d}{[A]^a[B]^b}$ in the Nernst Equation in terms of molarity instead of $Q_p$ (in terms of partial pressure). How is this justified? Thnx :) Answer: Rather than being an inconsistency, it has to do with the freedom of choosing a mathematical expression for the chemical potential that is more convenient to you. A general expression for the chemical potential is $$ \mu_j = \mu_j^0 + RT \ln\left(\gamma_j \frac{\xi}{\xi_0}\right) \tag{1} $$ where $\gamma_j$ is the activity coefficient of species $j$, $\xi$ is some measure of the concentration, and $\xi_0$ is that measure of the concentration for a defined standard state. Across chemistry you will see the following variations of Eq.
(1) \begin{align} \mu_j &= \mu_j^{\mathrm{(g)}} + RT\ln\left(\gamma_j^{\mathrm{(g)}} \frac{P_j}{P_0}\right) \tag{2} \\ \mu_j &= \mu_j^{\mathrm{(l)}} + RT\ln\left(\gamma_j^{\mathrm{(l)}} x_j\right) \tag{3} \\ \mu_j &= \mu_j^{\mathrm{(c)}} + RT\ln\left(\gamma_j^{\mathrm{(c)}} \frac{c_j}{c_0}\right) \tag{4} \\ \mu_j &= \mu_j^{\mathrm{(m)}} + RT\ln\left(\gamma_j^{\mathrm{(m)}} \frac{m_j}{m_0}\right) \tag{5} \\ \end{align} For each of Eqs. (2)-(5), the standard state, the measure of concentration, and the activity coefficient are, respectively:
- Eq. (2): pure substance in the ideal-gas state at a pressure of $\pu{1 bar}$; partial pressure; fugacity coefficient.
- Eq. (3): hypothetical ideal liquid solution at a pressure of $\pu{1 bar}$; molar fraction; mole-fraction activity coefficient.
- Eq. (4): hypothetical ideal 1-molar solution at a pressure of $\pu{1 bar}$; molar concentration; molarity activity coefficient.
- Eq. (5): hypothetical ideal 1-molal solution at a pressure of $\pu{1 bar}$; molality; molality activity coefficient.
You can use any of the Eqs. (2)-(5) for any compound to obtain an expression of $\Delta G$. In all of these cases, the activity coefficient is going to be different depending on the expression you use. Of course, the measure of concentration is going to be different also for every species. The only precaution you must take is that once you select the 'scale' for each compound, you have to be consistent with it. For example, if an equilibrium is attained you can calculate the equilibrium constant $K = \exp(-\Delta G^\circ/RT)$. When explaining in words what $\Delta G$ refers to, you state the reactants and products it covers and the states you have considered for all of them. As you can see, depending on the different scales, you will have different $K$'s. When someone reads your $K$ and the scale you used for every compound, that person will use the correct measure of concentration for further calculations. If you just say $K = 50$ with no additional information, I won't know which $\xi$ to choose for every compound.
A discussion of this topic can be found at the start of chapter $9$ of: J. M. Prausnitz, R. N. Lichtenhaler, E. G. de Azevedo, "Molecular Thermodynamics of Fluid-Phase Equilibria, 3rd ed., Prentice Hall PTR, 1999.
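As a small numeric illustration of the scale point, here is a Nernst-equation sketch in Python. The $E^\circ = \pu{1.10 V}$ Daniell-cell value, $n = 2$, and the concentration ratio are textbook-style assumptions; the only real requirement is that $Q$ be expressed on the same scale that was used to define $E^\circ$:

```python
import math

R, F = 8.314, 96485.0   # J/(mol K), C/mol

def nernst(E0, n, Q, T=298.15):
    """Cell potential; Q must be on the same scale used to define E0."""
    return E0 - (R * T) / (n * F) * math.log(Q)

# Daniell-cell style example (assumed values): E0 = 1.10 V, n = 2,
# Q = [Zn2+]/[Cu2+] on the molarity scale.
print(nernst(1.10, 2, 1.0))    # Q = 1 gives E = E0
print(nernst(1.10, 2, 10.0))   # products favoured -> E drops slightly
```

Switching $Q$ to a partial-pressure or molality scale simply shifts the corresponding $E^\circ$ (through the standard-state term in the chemical potential), leaving the measurable $E$ unchanged.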
{ "domain": "chemistry.stackexchange", "id": 17954, "tags": "thermodynamics, electrochemistry, equilibrium, free-energy, nernst-equation" }
Why do we prefer using materials of high resistivity in laboratory instruments?
Question: I know that: $$R=\rho\frac{l}{A}$$ where $R$ is the resistance of the wire, $\rho$ is its specific resistance (resistivity), $l$ is its length, and $A$ is the area of cross-section of the wire. Why do we prefer using materials of high resistivity (like manganin, constantan etc.) in laboratory instruments like the potentiometer or the metre bridge? I searched everywhere online, but all I always get is the definition of resistivity, which I already know. Answer: Manganin, like constantan, is an alloy invented in the late 1800s to solve a specific problem: resistance varies with temperature, and every resistor passing a current is subject to Joule heating. So if you are building precise electrical metering equipment there is a design advantage to using materials that show a stable resistance under temperature variations. The follow-on to your question could be: how are precision resistors made? One way is to simply cut strips of high-resistance metal to a measured size; modern surface mount devices use a ceramic core with a metal coating that is laser-trimmed. Carbon film resistors are similar in design, but typically less precise. Or you can grind up the resistance metal and mix it with clay - this gives a resistor that can dissipate heat, and is suitable for high voltage/current applications. These and more are described here: http://www.learnabout-electronics.org/Resistors/resistors_08.php
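The point can be quantified with the linear model $R(T) = R_0\,[1 + \alpha\,(T - T_0)]$. The temperature coefficients below are typical handbook figures, used here as assumptions:

```python
# Approximate temperature coefficients of resistance (per K);
# handbook-style values, treat as assumptions.
alpha = {"copper": 3.9e-3, "manganin": 2e-6}

def resistance(R0, alpha_k, dT):
    """Linear model R(T) = R0 * (1 + alpha * dT)."""
    return R0 * (1 + alpha_k * dT)

R0, dT = 100.0, 30.0   # a 100-ohm element warming by 30 K (assumption)
for name, a in alpha.items():
    # copper drifts by roughly 12%, manganin by roughly 0.006%
    print(name, resistance(R0, a, dT), "ohm")
```

A bridge arm made of copper would drift by ohms as it self-heats, swamping the quantity being measured, while a manganin arm stays effectively constant.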
{ "domain": "physics.stackexchange", "id": 28836, "tags": "electricity, electric-circuits, electrical-resistance, instrument" }
demo_turtlebot_mapping.launch+rtabmap
Question: Hi, I want to create a map with a TurtleBot in Gazebo with rtabmap. I used this link link text and step 6, but when I run the TurtleBot in Gazebo the map is not created and this message appears in the terminal: [ INFO] [1517748831.917902497, 269.000000000]: rtabmap 0.11.8 started... [ INFO] [1517748832.029270004, 269.110000000]: Using plugin "static_layer" [ INFO] [1517748832.037229982, 269.120000000]: Requesting the map... Originally posted by zahra.kh on ROS Answers with karma: 21 on 2018-02-04 Post score: 0 Original comments Comment by zahra.kh on 2018-02-04: rtabmap 0.11.8 started... Using plugin "static_layer" Requesting the map... Answer: Do you have screenshots? I followed the steps with Indigo binaries (0.11.8, which is the version you are using) and it seems to work: $ roslaunch turtlebot_gazebo turtlebot_world.launch $ roslaunch rtabmap_ros demo_turtlebot_mapping.launch simulation:=true $ roslaunch rtabmap_ros demo_turtlebot_rviz.launch In the terminal starting demo_turtlebot_mapping.launch: [ INFO] [1517852555.128380165, 9.630000000]: rtabmap 0.11.8 started... [ INFO] [1517852555.675103210, 10.180000000]: rtabmap: Rate=1.00s, Limit=0.700s, RTAB-Map=0.0483s, Maps update=0.0001s pub=0.0006s (local map=1, WM=1) [ INFO] [1517852555.839237051, 10.350000000]: Using plugin "static_layer" [ INFO] [1517852555.892405578, 10.390000000]: Requesting the map...
[ INFO] [1517852556.793654151, 11.290000000]: rtabmap: Rate=1.00s, Limit=0.700s, RTAB-Map=0.0626s, Maps update=0.0003s pub=0.0007s (local map=1, WM=1) [ INFO] [1517852556.817614177, 11.320000000]: Resizing costmap to 47 X 33 at 0.050000 m/pix [ INFO] [1517852556.916866213, 11.420000000]: Received a 47 X 33 map at 0.050000 m/pix [ INFO] [1517852556.924140451, 11.420000000]: Using plugin "obstacle_layer" [ INFO] [1517852556.928436404, 11.430000000]: Subscribed to Topics: scan bump [ INFO] [1517852557.049027487, 11.540000000]: Using plugin "inflation_layer" [ INFO] [1517852557.208985115, 11.700000000]: Using plugin "obstacle_layer" [ INFO] [1517852557.254371193, 11.740000000]: Subscribed to Topics: scan bump [ INFO] [1517852557.351695756, 11.830000000]: Using plugin "inflation_layer" [ INFO] [1517852557.488057833, 11.970000000]: Created local_planner dwa_local_planner/DWAPlannerROS [ INFO] [1517852557.498203070, 11.980000000]: Sim period is set to 0.20 [ INFO] [1517852557.800370287, 12.290000000]: rtabmap: Rate=1.00s, Limit=0.700s, RTAB-Map=0.0527s, Maps update=0.0001s pub=0.0006s (local map=1, WM=1) [ INFO] [1517852558.280924102, 12.770000000]: Recovery behavior will clear layer obstacles [ INFO] [1517852558.333823092, 12.820000000]: Recovery behavior will clear layer obstacles [ INFO] [1517852558.407647042, 12.900000000]: odom received! [ INFO] [1517852558.824825239, 13.310000000]: rtabmap: Rate=1.00s, Limit=0.700s, RTAB-Map=0.0443s, Maps update=0.0001s pub=0.0006s (local map=1, WM=1) [ INFO] [1517852559.821166239, 14.300000000]: rtabmap: Rate=1.00s, Limit=0.700s, RTAB-Map=0.0461s, Maps update=0.0001s pub=0.0006s (local map=1, WM=1) ... 
Originally posted by matlabbe with karma: 6409 on 2018-02-05 This answer was ACCEPTED on the original site Post score: 0 Original comments Comment by zahra.kh on 2018-02-05: thanks matlabbe.finally i can create the map but when run demo_turtlebot_mapping.launch, this warn is created in terminal: [ WARN] (2018-02-06 03:53:07.582) SensorData.cpp:594::uncompressDataConst() Requested laser scan data, but the sensor data (-1) doesn't have laser scan. Comment by zahra.kh on 2018-02-05: how can solved this warn?! Comment by matlabbe on 2018-02-05: I updated the link in the tutorial to latest demo_turtlebot_mapping.launch, which should have "map_negative_poses_ignored" parameter to true. Comment by zahra.kh on 2018-02-06: i used updated launch file, but this warn isn't solved! when i run turtlebot with turtlebot_teleop_keyboard, this warn is created! Comment by zahra.kh on 2018-02-06: also after creating map, i don't navigated in this map with 2D nav Goal. Comment by zahra.kh on 2018-02-06: i don't know, what is the problem?!
{ "domain": "robotics.stackexchange", "id": 29950, "tags": "slam, navigation, rtabmap-ros, ros-indigo" }