Get an array of currency exchange prices based on asset
Question: Function getMarkets Makes a call to get Cryptocurrency Exchange data based on asset (USD, USDC, USDT) It calls the endpoint 3 times to return 3 arrays which are then returned to the callee. I have an endpoint which hits gets exchange-markets/prices for any asset. In my case I need to check all the paired cryptocurrency listings for USD, USDC and USDT. At first I just tried calling getMarkets 3 times passing in the currency each time. I would then get an array of 3 responses from each call, however the last call would always replace the data for the 2 previous calls. In order to fix that I had to use this ugly if statement indentation hell... which defeats the purpose of using async await. How should this be refactored to be cleaner, or is there a better way? const nomicsAPI = 'https://api.nomics.com/v1/'; const nomicsKey = '8feb...;' interface IParams { key: string; currency?: string; } interface IHeaders { baseURL: string, params: IParams } const headers: IHeaders = { baseURL: nomicsAPI, params: { key: nomicsKey } }; const prepHeaders = (currency: string) => { headers.params.currency = currency; return axios.create(headers); }; export const getMarkets = async (): Promise<any> => { try { let marketUSD; let marketUSDC; let marketUSDT; const nomicsUSD = prepHeaders('USD'); marketUSD = await nomicsUSD.get('exchange-markets/prices'); if (marketUSD) { const nomicsUSDC = prepHeaders('USDC'); marketUSDC = await nomicsUSDC.get('exchange-markets/prices'); if (marketUSDC) { const nomicsUSDT = prepHeaders('USDT'); marketUSDT = await nomicsUSDT.get('exchange-markets/prices'); return { marketUSD: marketUSD.data, marketUSDC: marketUSDC.data, marketUSDT: marketUSDT.data } } } else { throw new Error('USD Markets unavailable.'); } } catch (err) { console.error(err); } } ^ above produces the desired results: And the actions file where this function is called from: // Fetch USD, USDC & USDT markets to filter out Exchange List. 
export const fetchMarketPrices = (asset: string) => (dispatch: any) => getMarkets().then((res) => { console.log('res', res); // const combinedExchanges = res[0].data.concat(res[1].data).concat(res[2].data); // console.log('combinedExchanges', combinedExchanges); // const exchangesForAsset = combinedExchanges.filter((marketAsset: IMarketAsset) => // marketAsset.base === asset); // console.log('exchangesForAsset', exchangesForAsset); // return dispatch(actionGetMarketPrices(exchangesForAsset)); }); Answer: A common convention used in TypeScript/Javascript and other C-based languages is to use all uppercase letters for constants. This makes it easy to spot values that cannot be changed. So instead of lowercase constants: const nomicsAPI = 'https://api.nomics.com/v1/'; const nomicsKey = '8feb...;' make them uppercase. The naming also could be improved - e.g. NOMICS_API_BASE_URL would better describe that the value is the base URL of the API than NOMICS_API. const NOMICS_API_BASE_URL = 'https://api.nomics.com/v1/'; const NOMICS_KEY = '8feb...;' You could also consider using the readonly modifier with those constants. This code (as well as the updated code in your self-answer) is quite repetitive. It would be wise to abstract the common code. That way if something needs to be updated, it can be done in one place. This adheres to the Don't Repeat Yourself (i.e. D.R.Y.) principle. I would first recommend storing each currency in an array: const currencies = ['USD', 'USDC', 'USDT']; And abstract out the common code of making the request and throwing an error if the request fails: const getCurrencyPrice = async (currency) => { try { const request = prepHeaders(currency); const response = await request.get(NOMICS_PRICES_ENDPOINT); if (!response) { throw new Error('USD Markets unavailable.'); } return response.data; } catch(err) { console.error(err); } }; That way if any of the requests fail, the Error will be thrown immediately. 
Also, this function has one job and could easily lend itself to unit testing. Then use that function to iterate over the array of currencies - optionally with a for...of loop, Array.prototype.reduce(), Array.prototype.forEach(), etc.: const getMarkets = async _ => { const returnObj = {}; for (let currency of currencies) { const key = 'market' + currency; returnObj[key] = await getCurrencyPrice(currency); } return returnObj; }; Notice that the code above uses a new constant - NOMICS_PRICES_ENDPOINT - this can be defined with the other constants: const NOMICS_PRICES_ENDPOINT = 'exchange-markets/prices'; That way if the endpoint needs to be updated, it can be done in one place. Additionally, if the constants were stored in a separate file, you wouldn't need to alter the file that contains all of this code.
{ "domain": "codereview.stackexchange", "id": 33645, "tags": "async-await, promise, typescript" }
How to make SLAM with other programs and tools?
Question: Hi everyone, I'd like to know a way to do Simultaneous Localization And Mapping (SLAM). All projects I've seen are driven by a teleop_twist_key node or a joystick. In order to apply SLAM, the wheeled robot I'm looking for needs to autonomously map the planar environment it moves in, preferably indoors. I know the teleop node, navigation stack and gmapping from ROS are chosen by almost everyone for this task. Edit Thanks to @gvdhoorn I now know of other implementations for ROS like the frontier_exploration package and turtlebot_exploration_3d. Which other implementations exist to use SLAM in the way I'm looking for? Thanks Originally posted by gerson_n on ROS Answers with karma: 43 on 2017-09-05 Post score: 0 Original comments Comment by gvdhoorn on 2017-09-05: Is this related to #q269765 (your previous question)? If not, can you clarify what you actually want to do? Comment by gerson_n on 2017-09-05: Oh, yes it is. I forgot I asked the same question a week ago. In this case, should I delete this question? Comment by gvdhoorn on 2017-09-05: We can close it as a duplicate and then optionally delete it. Comment by gerson_n on 2017-09-05: Thanks for reopening this one Answer: The nav2d package provides both SLAM and navigation for indoor robots. It can load exploration strategies to make the robot explore its environment autonomously. There are also tutorials on how to set it up. Originally posted by Sebastian Kasperski with karma: 1658 on 2017-09-06 This answer was ACCEPTED on the original site Post score: 1 Original comments Comment by gerson_n on 2017-09-06: Thanks a lot man, I'd like to use the package you've made. That's what I've been looking for. Just a few questions about the params: What about its performance with a Kinect? Is it appropriate? I haven't found people who have used that sensor with it. And how does it work for odometry?
Comment by gerson_n on 2017-09-06: I understand I need position (odom) and a vision sensor (kinect) which together as inputs are used for a filter like a Kalman or particle filter (this case). The thing is, I didn't see how I can send the odom info to this algorithm; can you tell me how to achieve that? Thanks Comment by Sebastian Kasperski on 2017-09-07: Odometry is input via topic tf, same as for the navigation stack. I never used nav2d with a Kinect, which is a 3d sensor. But I think there are already ROS-nodes to create virtual 2d-scans from a Kinect sensor. Comment by gerson_n on 2017-09-11: Thanks again man.
{ "domain": "robotics.stackexchange", "id": 28767, "tags": "ros, slam, navigation" }
signal sampling for 2nd time of sampled signal
Question: A sinusoidal signal of 600 Hz is sampled at 1 kHz. If the sampled signal is applied to an ideal low-pass filter with a 500 Hz cutoff, what is the output? Answer: This is basic sampling theory; for more background, see the Wikipedia pages on sampling and the Nyquist frequency. Sampling at $F_s$ creates spectral images at $|f_0 - kF_s|$ for every integer $k$. The image that falls inside the first Nyquist zone (0 to $F_s/2$) is $$ F_{out} = F_s - f_0 = 1000 - 600 = 400\ \text{Hz} $$ Since 400 Hz is below the 500 Hz cutoff of the ideal low-pass filter, the output is a 400 Hz sinusoid.
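The aliasing result can be checked numerically. A minimal NumPy sketch (taking one second of samples is an assumption chosen so the FFT bins fall on integer hertz):

```python
import numpy as np

fs = 1000.0          # sampling rate (Hz)
f0 = 600.0           # input tone (Hz)
n = np.arange(1000)  # 1 second of samples -> 1 Hz bin spacing

x = np.sin(2 * np.pi * f0 * n / fs)

# The spectrum of the sampled signal peaks at the alias, not at 600 Hz
spectrum = np.abs(np.fft.rfft(x))
peak_hz = np.argmax(spectrum) * fs / len(n)
print(peak_hz)  # 400.0 -- the 600 Hz tone aliases to fs - f0 = 400 Hz
```

Since the alias at 400 Hz lies inside the 500 Hz passband, an ideal low-pass filter passes it unchanged.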
{ "domain": "dsp.stackexchange", "id": 3412, "tags": "filters, sampling" }
Using denavit hartenberg parameters with xacro
Question: How do I describe a robot arm using DH parameters and xacro? Is it documented somewhere? I imagine it is pretty straightforward, but I cannot find an example. Originally posted by paturdc on ROS Answers with karma: 157 on 2014-08-19 Post score: 0 Answer: How about the mrpt library? http://www.mrpt.org/list-of-mrpt-apps/application-robotic-arm-kinematics/ I really don't know whether you can use xacro or not, but perhaps it could help you. Originally posted by Andromeda with karma: 893 on 2014-08-19 This answer was ACCEPTED on the original site Post score: 0
{ "domain": "robotics.stackexchange", "id": 19107, "tags": "xacro" }
Why is it important for the energy density to be positive definite in field theories?
Question: Why is it important that the energy density be positive definite in field theories? Answer: In physics experiments, energy differences are the ones that are usually measurable. We cannot allow energies to be infinitely negative because then we would be measuring energy differences, but with respect to what exactly? In a system where there can be states (orbits, field configurations etc.) with infinitely negative energy, we need an interpretation for all these states that we don't measure, given that usually we measure a finite spectrum of energy differences, as in the Zeeman experiment; in such a situation we would be measuring an infinity of energy differences that do not necessarily become smaller, because decays would be allowed to any possible state below the excited one, and there are infinitely many of those with unbounded distances between them. In a system where energy densities are finitely negative, we can just shift our Hamiltonian by a constant (which doesn't change the equations of motion in classical or quantum physics) so that it attains its minimum at zero, and then we again have a positive definite spectrum; either way, the energy differences we would measure would make sense, so the constant we are adding is for convenience. One good example of a theory with an "infinitely negative spectrum" is Dirac's theory for the relativistic motion of an electron, which initially had the problem, if naively interpreted, that energies in its spectrum are infinitely negative. Dirac justified the fact that we do not measure these states in a very solid-state-physics way, by asserting that they are all occupied by an electron sea, which collectively behaves as a vacuum. This assertion is physically strange, and now we know that the Dirac equation is better interpreted as describing particles and antiparticles which have only positive energies, packaged together in one equation.
{ "domain": "physics.stackexchange", "id": 58940, "tags": "hilbert-space, field-theory, unitarity" }
Two bosons having the same state --- how do you know there are two?
Question: So, suppose that photons have the same quantum state. How do we know that there are 'two' photons having the same state, rather than just one? Is there a technical way to guarantee that there are two photons instead of one, or is it by stipulation? Thanks. Answer: How do we know that there are 'two' photons having the same state, rather than just one? Take the expectation value of the number operator on that state and look at the result. If the result is two then there are two photons. If it is one then there is one.
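The number-operator check can be illustrated numerically in a truncated Fock basis. The dimension of 5 and the use of NumPy matrices here are illustrative assumptions, not part of the answer:

```python
import numpy as np

dim = 5  # truncated Fock space, assumed large enough for this state

# Annihilation operator a: a|n> = sqrt(n)|n-1>
a = np.diag(np.sqrt(np.arange(1, dim)), k=1)
N = a.conj().T @ a  # number operator N = a† a

# Fock state |2>: two photons in one mode (i.e. the same quantum state)
psi = np.zeros(dim)
psi[2] = 1.0

expectation = psi @ N @ psi
print(expectation)  # ~2.0: the state contains two photons, not one
```

A single-photon state `|1>` would give expectation value 1, which is exactly the distinction the answer describes.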
{ "domain": "physics.stackexchange", "id": 50107, "tags": "quantum-mechanics, bosons" }
Proton accelerating in a homogeneous electric field
Question: Hi I was studying for my physics exam, when I ran in to this problem. A proton in a particle accelerator uniformly accelerated in 3.23 ms over a distance of 1 km. The field in which the proton is accelerated is homogeneous. The voltage that the proton has passed through is equal to: A) 1 kV B) 2 kV C) 3 kV D) 4 kV The answer to the question is 4 kV. I struggle however to understand why. I can't find any formulas in my course book combining acceleration and voltage. Does anyone know how to approach this kind of problem? Thanks in advance. Answer: I'm getting 2kV so I might have made a mistake, but here's my working. Start from $$s=\frac{1}{2}at^2$$ and substitute in $$a=\frac{eE}{m}\\E=\frac{V}{s}$$ gives you $$V = \frac{2 m s^2}{et^2}\approx2~\mathrm{kV}.$$ Again if I've made a mistake someone please point it out.
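The answer's formula can be checked with a quick calculation using standard constants (non-relativistic kinematics assumed, as in the answer):

```python
m_p = 1.673e-27   # proton mass (kg)
q_e = 1.602e-19   # elementary charge (C)

s = 1000.0        # distance travelled (m)
t = 3.23e-3       # time (s)

# s = a t^2 / 2 with a = qE/m and E = V/s  =>  V = 2 m s^2 / (q t^2)
V = 2 * m_p * s**2 / (q_e * t**2)
print(V)  # ~2.0e3 V, i.e. about 2 kV
```

This supports the answer's value of 2 kV rather than the stated key of 4 kV, given the numbers in the question.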
{ "domain": "physics.stackexchange", "id": 86345, "tags": "electromagnetism, electrostatics, protons, particle-accelerators" }
Pocketsphinx - specify a language model
Question: I installed pocketsphinx, but I don't know how to use it. I tried to run the Python script: rosrun pocketsphinx recognizer.py and got the following output: [ERROR] [WallTime: 1395053854.598681] Please specify a language model file or a fsg grammar file. What shall I do? How can I specify this model? Originally posted by pelment on ROS Answers with karma: 47 on 2014-03-17 Post score: 0 Answer: You indeed need to build a language model. Go to the CMU Sphinx tutorial page for this. There is also a web tool which is very helpful for this: the Language model web service. The tutorial explains further instructions on how to convert and use the model. Originally posted by makokal with karma: 1295 on 2014-03-17 This answer was ACCEPTED on the original site Post score: 0 Original comments Comment by pelment on 2014-03-23: Thank you! I found a language model from Voxforge and tried to run recognizer.py with parameters -dict /home/sphinx/msu_ru_nsh.dic -lm /home/sphinx/msu_ru_nsh.lm.dmp . But nothing changed. The output of the command showed me that pocketsphinx already uses the wsj model, but if there exists any model, why does the error I noticed always appear?
{ "domain": "robotics.stackexchange", "id": 17310, "tags": "ros, pocketsphinx" }
$d$-regular bipartite expander graph
Question: I have seen that there exist $d$-left-regular bipartite graphs. My question is: do there exist $d$-regular bipartite expander graphs in which both the degree of the left and the right vertices is exactly equal to $d$? If yes, is there any constructive proof for such graphs, and what is the expansion factor? Kindly provide a reference as well. Answer: There is a simple construction: Take any $d$-regular non-bipartite expander $G=(V,E)$ - there are several constructions of those, e.g., Margulis, or the Zig-Zag construction. Now, turn it into a bipartite graph $G' = (V_1 \cup V_2, E')$ as follows: $V_1$ and $V_2$ are copies of $V$. Two vertices $v_1 \in V_1$ and $v_2 \in V_2$ are adjacent in $G'$ if and only if the corresponding vertices in $G$ are adjacent. The expansion of $G'$ follows from the expansion of $G$. However, one thing that should be noted is that the expansion factor in all those constructions is at most $d/2$. We do not have bipartite expanders that have a better expansion factor and are balanced (i.e. have degree $d$ on both sides).
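The construction in the answer is the bipartite double cover. A small sketch (using K4 as a stand-in 3-regular graph is an illustrative assumption; a real application would use an expander family):

```python
from collections import Counter

def double_cover(edges):
    """Lift a d-regular graph G to a d-regular bipartite graph G':
    vertex sets V1 = V x {0} and V2 = V x {1}, with (u,0)-(v,1) an edge
    whenever u-v is an edge of G."""
    new_edges = set()
    for u, v in edges:
        new_edges.add(((u, 0), (v, 1)))
        new_edges.add(((v, 0), (u, 1)))
    return new_edges

# Example: K4 is 3-regular (and non-bipartite).
k4 = [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)]
cover = double_cover(k4)

deg = Counter()
for a, b in cover:
    deg[a] += 1
    deg[b] += 1
print(set(deg.values()))  # {3}: every vertex on both sides has degree d = 3
```

Each vertex of G contributes one left copy and one right copy, and each keeps degree d, so the cover is d-regular on both sides as the answer claims.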
{ "domain": "cstheory.stackexchange", "id": 3293, "tags": "graph-theory, co.combinatorics, expanders" }
Installation Problems
Question: Hi, I'm a newbie in ROS and tried several methods of installing ROS-Groovy-Desktop-Full on my Ubuntu system without really having success. My system is Kubuntu 12.04, 3.2.0-39-generic, 64bit (not a VMware image) I did the steps according to: ...ros.org/wiki/groovy/Installation/Source sudo apt-get install python-rosdep python-wstool build-essential --> worked, ok sudo rosdep init, rosdep update --> worked, ok mkdir ~/ros_catkin_ws, cd ~/ros_catkin_ws --> worked, ok wstool init -j8 src ...packages.ros.org/web/rosinstall/generate/raw/groovy/desktop-full --> worked, ok sudo rosdep install --from-paths src --ignore-src --rosdistro groovy -y --> worked, ok sudo ./src/catkin/bin/catkin_make_isolated --install --> BROKEN ==> Processing plain cmake package: 'bfl' Makefile exists, skipping explicit cmake invocation... ==> make cmake_check_build_system in '/home/martin/ros_catkin_ws/build_isolated/bfl' ==> make -j4 -l4 in '/home/martin/ros_catkin_ws/build_isolated/bfl' ==> make install in '/home/martin/ros_catkin_ws/build_isolated/bfl' ==> Generating an env.sh <== Failed to process package 'bfl': 'NoneType' object has no attribute 'rfind' Command failed, exiting. I tried the package way of installation: martin@quadcore:~/ros_catkin_ws$ sudo apt-get install ros-groovy-desktop-full [sudo] password for martin: Reading package lists... Done Building dependency tree Reading state information... Done Some packages could not be installed. This may mean that you have requested an impossible situation or, if you are using the unstable distribution, that some required packages have not yet been created or been moved out of Incoming.
The following information may help to resolve the situation: The following packages have unmet dependencies: ros-groovy-desktop-full : Depends: ros-groovy-navigation-tutorials (= 0.1.1-s1362460629~precise) but it is not going to be installed Depends: ros-groovy-slam-gmapping (= 1.2.7-s1362457898~precise) but it is not going to be installed Depends: ros-groovy-navigation (= 1.10.2-s1362453401~precise) but it is not going to be installed E: Unable to correct problems, you have held broken packages. Not possible because of broken packages Does anybody have an idea, or could this be a bug in the installation processing script? Hint: A Fuerte installation in an LUbuntu VMware worked, but I had to copy some deb files manually from the directory. The problem was: a +-00000.deb file was shown; once downloaded, I had a ...+ 00000.deb file. Renaming this into +-0000.deb and copying it into var/cache/apt made the installation work. I think the installation process is not as clean as it should be. A lot of work to do in this place. thanks, martin Originally posted by MartinHummel on ROS Answers with karma: 156 on 2013-03-09 Post score: 1 Answer: Hello, after a terrible installation process it seems to work: Using Kubuntu 12.04 3.2.0-39-generic, 64bit base installation: Open Muon or another package manager Install the package ros-groovy-ros-full (this includes the ros-groovy documentation) Install ros-groovy-visualisation Then proceed with the steps according to the tutorial. If ROS_WORKSPACE is not defined (you see it using roscd) then add at the end of .bashrc the following lines: export ROS_PACKAGE_PATH=~/catkin_ws:$ROS_PACKAGE_PATH export ROS_WORKSPACE=~/catkin_ws (if you are using the catkin make process) Originally posted by MartinHummel with karma: 156 on 2013-03-10 This answer was ACCEPTED on the original site Post score: 1
{ "domain": "robotics.stackexchange", "id": 13267, "tags": "ros" }
Gravitational wave energy transport
Question: When a gravitational wave passes through a region of space (which may be, or may not be, empty), does it deposit energy (in any form) in that region of space? A frivolous version of this question might be: "Can I boil water by waiting for gravitational waves to heat it?" Answer: When a gravitational wave passes through a region of space (which may be, or may not be, empty), does it deposit energy (in any form) in that region of space? If the region is filled with matter that has some form of dissipation then yes. Essentially this question (and answer to it) is the subject of Feynman's sticky bead argument: Feynman’s gravitational wave detector: It is simply two beads sliding freely (but with a small amount of friction) on a rigid rod. As the wave passes over the rod, atomic forces hold the length of the rod fixed, but the proper distance between the two beads oscillates. Thus, the beads rub against the rod, dissipating heat. If there is no mechanism for energy dissipation (or other form of energy storage) in this region, but it is not empty (for example, it could contain an isolated point charge and its bound electrostatic field) then the incident gravitational wave could partially scatter and also undergo partial conversion to electromagnetic radiation without changing the state of that region after passing. A frivolous version of this question might be: "Can I boil water by waiting for gravitational waves to heat it?" In principle yes, if you have a body of water with a quadrupole oscillation mode whose frequency coincides with that of the incoming gravitational wave, and the intensity of that wave is large enough to cause dissipation from nonlinearities of the resonant oscillations. Obviously, for Earth-based bodies of water (from coffee cups to oceans) there are no astrophysical GW sources that could transfer enough energy to even approach the threshold of detection.
But if we for example consider a star in the same system as a black hole binary, absorption of GW by that star could have astronomically measurable consequences as considered here: McKernan, B., Ford, K. E. S., Kocsis, B., & Haiman, Z. (2014). Stars as resonant absorbers of gravitational waves. Monthly Notices of the Royal Astronomical Society: Letters, 445(1), L74-L78, doi:10.1093/mnrasl/slu136, arXiv:1405.1414.
{ "domain": "physics.stackexchange", "id": 75540, "tags": "energy, gravitational-waves" }
Why don't we consider the pressure due to height of liquid column above the orifice while calculating the speed of efflux using Bernoulli equation?
Question: While describing speed of efflux, using Bernoulli equation, we use value of $P$ at both openings as atmospheric pressure. Why don't we use $P_\text{atmosphere}+\rho gh$ for $P$ at bottom opening? Answer: At the bottom opening the jet is exposed to atmosphere, and therefore the jet is not constrained by boundaries. Therefore its pressure must adjust to that of the atmosphere; if there were any pressure difference the jet would bulge out or shrink (while respecting continuity) until pressures were equalized. We are assuming that pressure change due to surface tension is negligible.
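With atmospheric pressure taken at both the free surface and the jet, Bernoulli reduces to Torricelli's law. A quick sketch (the 2 m depth is an assumed example value):

```python
from math import sqrt

g = 9.81   # gravitational acceleration (m/s^2)
h = 2.0    # assumed depth of the orifice below the free surface (m)

# Bernoulli between the free surface and the jet, with P = P_atm at both:
#   P_atm + rho*g*h = P_atm + (1/2)*rho*v**2   =>   v = sqrt(2*g*h)
v = sqrt(2 * g * h)
print(round(v, 2))  # 6.26 m/s
```

The rho*g*h term still appears, but as the height term on the surface side of Bernoulli's equation rather than as an extra contribution to the pressure at the orifice; counting it in both places would double-count it.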
{ "domain": "physics.stackexchange", "id": 53983, "tags": "fluid-dynamics, pressure, bernoulli-equation" }
Polyphase decomposition of IIR filter
Question: Is there a way to decompose an IIR filter into a set of polyphase components that are all-pass? E.g. an IIR filter of order M can be decomposed into a cascade of an all-zero filter of order 2*M (which can be implemented as polyphase) and an all-pole filter of the same order (in direct form), by rationalizing the denominator. But I have not been able to find a way to do it directly as a sum of all-pass polyphase components. Any help? Answer: Using MATLAB's tf2cl (it may require the DSP toolbox), you can turn the transfer function (numerator and denominator coefficients) into cascade allpass lattice coefficients, which form a two-phase filter. % Generate some IIR filter coefficients [b,a]=cheby1(9,.5,.4); % Coupled allpass decomposition [k1,k2]=tf2cl(b,a);
{ "domain": "dsp.stackexchange", "id": 2826, "tags": "infinite-impulse-response" }
How can I find the Magnetic Field at the point without using law of cosines
Question: There are two parallel wires, both carry currents of $I=16.5A$ in the same direction. The wire on the left is $B_1$ and the one on the right is $B_2$ That said I know from $B=\frac{\mu_0 I}{2\pi R}$: $$B_1=2.7500\times10^{-5}\,\text{T}$$ $$B_2=2.5385\times10^{-5}\,\text{T}$$ Since this triangle has three different sides, we can use the law of cosines. However, is there another way to solve it? Answer: You are right, but this rule applies only to infinite wires; otherwise use the Biot-Savart-Laplace equation $$ \vec{B}=\frac{\mu_0}{4\pi} \int _{C}{\frac {I\,d\mathbf {l} \times (\mathbf {r}''-\mathbf {r}')}{|\mathbf {r}''-\mathbf {r}'|^{3}}} $$ Where $r'$ is the location of an infinitesimal element of the current-carrying wire and $r''$ is the observation point (where you want to calculate the field). Integrating over the path of the wire gives its field; doing the same with the other wire and summing them up is your answer
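As a sanity check, the Biot-Savart integral for a long straight wire can be evaluated numerically and compared against the infinite-wire formula quoted in the question; the distance R = 0.12 m is an assumption chosen so the simple formula reproduces the question's $B_1$:

```python
import numpy as np

mu0 = 4e-7 * np.pi
I = 16.5      # current (A), as in the question
R = 0.12      # assumed distance from the wire (m)

def b_finite_wire(half_length, n=200_000):
    """Numerically integrate Biot-Savart for a straight wire on the z-axis,
    observation point at (R, 0, 0)."""
    z = np.linspace(-half_length, half_length, n)
    dl = z[1] - z[0]
    # |dl x (r''-r')| / |r''-r'|^3 reduces to R / (R^2 + z^2)^(3/2) here
    integrand = R / (R**2 + z**2) ** 1.5
    return mu0 * I / (4 * np.pi) * np.sum(integrand) * dl

b_long = b_finite_wire(100.0)            # effectively infinite wire
b_inf = mu0 * I / (2 * np.pi * R)        # textbook infinite-wire formula
print(b_long / b_inf)  # ~1.0: the simple formula is the long-wire limit
```

Shortening `half_length` shows how the finite-wire field falls below the infinite-wire value, which is exactly why the simple formula only applies to (effectively) infinite wires.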
{ "domain": "physics.stackexchange", "id": 41642, "tags": "electromagnetism, magnetic-fields" }
How to count in linear time worst-case?
Question: This question and this question got me thinking a little bit. For sorting an array of length $n$ with $k$ unique elements in $O(n + k \log k)$, we need to be able to store counts of values in the array. There are some suggestions, but I'm looking for a way to do this in worst-case linear time. More specifically: Given a list $A$ of $n$ elements with $k$ distinct elements, determine a list of tuples $U = \{(x_i, c_i)\}^k$ of all unique elements $x_i \in A$ such that $c_i$ is the count of element $x_i$ in $A$. Here are some (failed) ideas I've had and have been suggested: Balanced Binary Search Tree - With this it will take $O(\log k)$ to insert into the tree and increase values. After inserts we could do a tree traversal in $O(k)$. Thus, total time comes out to $O(n \log k)$ which is too slow. Hash Map - With this we can get $O(1)$ expected inserts and thus $O(n)$ expected time. However, this is still not $O(n)$ worst case. Empty Space Mapping - Find the minimum and maximum element in $A$. Allocate (but do not initialize) enough memory to cover this range. Use this memory basically as a hash map and include a random hash so that we don't try to access corrupted memory. This strategy presents issues: it's probabilistic with a very, very low probability of failing, but still not guaranteed, and using memory like this limits us to floating-point or integer constraints. Associative Arrays - There are many other associative arrays that can be used, similar to hash maps and BSTs, but I am not finding any that match these constraints. Maybe there is some obvious method I am missing, but I also think it could potentially be impossible. What are your thoughts? Answer: This is a nice question. In the comparison model or, what is more general, the algebraic decision-tree model, the problem of element distinctness has a lower bound of $\Theta(n\log n)$ time-complexity in the worst case, as stated in this Wikipedia article.
So there is no algorithm to count distinct elements in linear time in the worst case, even without counting the duplicities. However, it is not clear whether it can be done in another computational model. It seems unlikely in any reasonable deterministic computational model.
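For contrast, the hash-map approach from the question is a one-liner in practice; this is a sketch of the expected-linear method, not a worst-case guarantee:

```python
from collections import Counter

def unique_counts(a):
    """Return [(x_i, c_i)] for the k distinct elements of a.
    Expected O(n) time via hashing, but only expected, not worst case,
    which is exactly the gap the lower bound above leaves open."""
    return list(Counter(a).items())

print(unique_counts([3, 1, 3, 2, 1, 3]))  # [(3, 3), (1, 2), (2, 1)]
```

In the comparison and algebraic decision-tree models, no deterministic procedure can beat the $\Theta(n \log n)$ bound, so any apparent linear-time counter must rely on hashing (expected time) or on extra structural assumptions about the keys.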
{ "domain": "cs.stackexchange", "id": 13811, "tags": "algorithms, search-trees, hash-tables" }
Is "eco" shower mode actually using less power?
Question: At work we have electronic showers in the gym; they have three modes: * (A snowflake, which I guess means no heating at all) | eco (A bar and eco) || (Two bars... so full power I guess) The shower also has a temperature readout in degrees Celsius. There is no water pressure change between modes. The difference between | eco and || appears to be the number of heating elements used to heat the water (1 or 2 respectively). I was recently reprimanded for using the shower on || full mode instead of | eco, though the temperature I was showering at was what I usually aim for (38 degrees C; also I left the shower switched to || after I'd got out, which is how I was found out...). So counter-intuitively, am I saving energy by using || mode and having two elements at a lower temperature heating the water, or is having only a single, hotter element really using less energy to heat the water? I was under the impression that the higher the temperature of the element, the less efficient it was (higher resistance), and also that the increased surface area would mean more efficient heat transfer. Answer: Heaters are one of the very few devices that are 100% efficient. All of the energy we put into them ends up as heat (though not all that heat may go where you want it to). So to a first approximation the energy used by your shower is determined by how hot you run the water and for how long, and it doesn't matter what heat setting you use. I say to a first approximation because there could be small effects, e.g. a high heat setting may actually boil the water and some heat could escape as steam rather than heating the water. But remember there's a human at the end of this. Human perception of heat tends to be relative, and if you use a lower heat setting it's likely you will shower in cooler water, possibly without noticing. If you do notice the water isn't as hot, you probably won't shower for as long.
So for the person paying the electricity bill using a low heat setting is a win - though possibly not for you. Re your comment: I was under the impression that the higher the temperature of the element, the less efficient it was (higher resistance), and also the increased surface area would mean more efficient heat transfer. Note my first paragraph. Assuming thermal equilibrium, all the heat dissipated in the heating element goes into heating the water. If this wasn't the case the heating element would get hotter, and the rate of heat flow would increase, until the energy in and energy out balanced again. The exception would be if the element got so hot that it boiled the water and heat was lost as steam (though I suspect you'd notice if the water was that hot :-). So it doesn't matter if you have one hot element or two cooler elements. The electrical energy consumed by the shower unit will still all end up as thermal energy in the water.
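A rough sketch of the point: the power drawn is fixed by the flow rate and the temperature rise, regardless of how many elements supply it. The 8 L/min flow and 15 degree C inlet temperature below are assumed illustrative values:

```python
flow = 8.0 / 60        # assumed flow rate: 8 L/min -> kg/s (1 L of water ~ 1 kg)
c_water = 4186.0       # specific heat of water, J/(kg*K)
t_in, t_out = 15.0, 38.0   # assumed inlet temperature and the 38 C setpoint

# Steady-state power needed to heat the stream: P = m_dot * c * dT.
# One element or two, the electrical input must total this.
power = flow * c_water * (t_out - t_in)
print(round(power))  # 12837 W, i.e. about 12.8 kW
```

Whether one element runs hot or two run cooler, the same ~13 kW must be delivered to produce 38 degree C water at that flow; only the setpoint and shower duration change the energy bill.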
{ "domain": "physics.stackexchange", "id": 21649, "tags": "electrical-resistance, thermal-conductivity, efficient-energy-use" }
What is the name of the other vertex of an undirected edge?
Question: Let's say I have an undirected graph with an edge $e = \{a,b\}$ connecting vertices $a$ and $b$. Taking vertex $a$, what is the name of the other vertex? Is it the "co-vertex"? Or simply the "other vertex"? Or the "neighbouring vertex"? Answer: There's no set phrase; any clear description will do. "The other vertex in $e$", "the other endpoint of $e$", "$b$". I've never heard of "co-vertex" and "neighbouring vertex" is vague, since $a$ might have multiple neighbours.
{ "domain": "cs.stackexchange", "id": 8723, "tags": "graphs, terminology" }
Could the trajectories of non-periodic comets be used to infer properties of the 'ninth planet'?
Question: Non-periodic comets are comets which have very long orbital periods (>200 years or more), spending most of their time in the outer solar system. Planet X, recently revived by researchers at Caltech, is a proposed planet orbiting in the farthest reaches of the outer solar system. The Caltech researchers infer its presence from its effect on the orbits of Kuiper belt objects. If such a planet exists, and is the cause of the Kuiper belt object alignments, could it not also knock such objects into the inner solar system, creating non-periodic comets with huge aphelion distances? Could our current observations of non-periodic comets be used to infer properties of the planet? Answer: The idea is basically to look for comets coming from the same direction. Look for a pair of comets that at some time earlier in their orbits were at the same place at the same time. That is a data point, but an extremely inaccurate one. It does not have to mean anything, and you are going to collect a large amount of statistical noise. For this purpose, normal periodic comets are close to useless, as they are in the same region for decades or even centuries as they fall slowly towards the sun. Hyperbolic (non-periodic) comets, on the other hand, have a shorter stay in each region, allowing us to better pinpoint a possible encounter. The problem is, there are not actually a lot of them observed, and the length of the time horizon for accurate enough observations is not that long. The properties you can obtain about the ninth planet, if it exists, from a large amount of data on non-periodic comets are: Current distance, within a reasonably large error margin (+-10 AU). Orbital period; not so accurate, but you may get an OK estimate. Inclination; not accurate, but together with the orbital period, estimates will grow more accurate over a longer time horizon of observations.
{ "domain": "astronomy.stackexchange", "id": 1332, "tags": "solar-system, comets, kuiper-belt, 9th-planet" }
MHC restricted peptide
Question: What is an MHC restricted peptide? I got this definition from Wikipedia, but cannot exactly extract what the phrase MHC restricted peptide means. MHC-restricted antigen recognition, or MHC restriction, refers to the fact that a given T cell will recognize a peptide antigen only when it is bound to a host body's own MHC molecule. Normally, as T cells are stimulated only in the presence of self-MHC molecules, antigen is recognized only as peptides bound to self-MHC molecules. I have not studied biology in the last 8 years and now I am going through it because I need it for my research. So if someone can describe it in simple language it would be very helpful. Answer: First of all, MHC stands for major histocompatibility complex. There are two types of MHC. MHC class I is present on all of our cells with a nucleus. The purpose of these protein complexes is called antigen presentation. T-cells cannot recognize free antigens on their own; antigens have to be presented to them in the proper way. This is what these proteins do. In every cell there are lots and lots of proteins that get digested to small polypeptides (short amino acid sequences) by proteases during the natural recycling process. The cell takes a small portion of these polypeptides and presents them on its surface through MHC I complexes, which the immune system can read. This is like the cell saying to the immune system, "hey, I got these proteins in me". Now if the cell is infected (by a virus, most likely) then the "attacker's" proteins get digested and presented as well, and it's like saying "hey, I'm infected and the attacker has these proteins". MHC II serves a similar function but it is only present on professional antigen presenting cells (APCs). These complexes present peptides derived from proteins consumed and digested through phagocytosis or receptor mediated endocytosis.
It's like saying "hey, we got a larger attacker and it has these proteins". So, long story short: T-cells cannot recognise antigens on their own; antigens need to be presented to them on MHC complexes, and then and only then can they be activated. Edit: A list of useful articles in the topic of antigen processing / presentation for further details:
Pamer E, Cresswell P. Mechanisms of MHC class I–restricted antigen processing. Annu. Rev. Immunol. 1998;16:323–58.
MHC-dependent antigen processing and peptide presentation: providing ligands for T lymphocyte activation. Cell. 1994;76:287–99.
Nakagawa TY, Rudensky AY. The role of lysosomal proteinases in MHC class II-mediated antigen processing and presentation. (This one may require a subscription.)
And the wiki page :)
{ "domain": "biology.stackexchange", "id": 3658, "tags": "molecular-biology, cell-biology, immunology" }
Factory design for math problems
Question: I'm creating a factory design for math problems. The idea I have is: When the factory initializes, it creates some problems in a list (20 initially). If the program wants more than 20, the list should grow until it reaches the requested quantity. For example, if I require 30 problems of problem type X, it will generate twice. Some problems must be generated at a higher difficulty. I pass the level in the constructor and the factory must generate accordingly. To do this, I've got an abstract method called ConfigureLevels, where you have to set this up. I set an abstract method called Generate; this one must be implemented in a concrete class. When a problem is generated, sometimes it is not a good problem. When this happens, the factory must generate others until it gets a good problem. A good problem is one that meets the CRITERIA: the problem mustn't already exist in the list; if the factory generates FRACTIONS, they must be below 1; etc. I haven't implemented this last feature yet. This is the factory I'm talking about:

public abstract class Factory<T> where T : IProblem
{
    protected static Random rnd;
    protected LinkedListNode<T> actualNode;
    protected LinkedList<T> problems;

    public virtual int TotalProblemsPossible { get; set; }
    protected virtual int InitialSize { get; set; }

    protected Factory(Levels level)
    {
        this.InitialSize = 20;
        Initialize();
        ConfigureLevels(level);
        FirstTime();
    }

    public virtual void FirstTime()
    {
        if (rnd == null)
            rnd = new Random(100);
        if (problems == null)
        {
            problems = new LinkedList<T>();
            Generate(problems);
        }
        actualNode = problems.First;
    }

    public virtual T CreateProblem()
    {
        if (actualNode.Next == null)
        {
            Generate(problems);
        }
        actualNode = actualNode.Next;
        return actualNode.Value;
    }

    private void Generate(LinkedList<T> problems)
    {
        for (int i = 0; i < InitialSize; i++)
        {
            T newProblem;
            int bucles = 0;
            do
            {
                if (bucles == 50)
                    throw new InvalidOperationException();
                newProblem = Generate();
                bucles++;
            } while (problems.Contains(newProblem));
            problems.AddLast(newProblem);
        }
    }

    protected virtual void Initialize() { }

    protected abstract T Generate();
    protected abstract void ConfigureLevels(Levels level);
}

And here is a concrete class. Check how I'm overriding some methods from the abstract Factory. This factory creates times table problems, and it knows how to calculate them.

public class TimesTableProblemFactory : Factory<BinaryProblem>
{
    private Bound<int> Bound1 { get; set; }
    private Bound<int> Bound2 { get; set; }

    public TimesTableProblemFactory(Levels level) : base(level) { }

    protected override void Initialize()
    {
        base.Initialize();
    }

    protected override void ConfigureLevels(Levels level)
    {
        switch (level)
        {
            case Levels.Easy:
                this.Bound1 = new Bound<int>(2, 6);
                this.Bound2 = new Bound<int>(3, 11);
                break;
            case Levels.Normal:
                this.Bound1 = new Bound<int>(3, 13);
                this.Bound2 = new Bound<int>(3, 10);
                break;
            case Levels.Hard:
                this.Bound1 = new Bound<int>(6, 13);
                this.Bound2 = new Bound<int>(3, 10);
                break;
        }
    }

    protected override BinaryProblem Generate()
    {
        BinaryProblem problem;
        int number1 = rnd.Next(Bound1.Min, Bound1.Max);
        int number2 = rnd.Next(Bound1.Min, Bound1.Max);
        Answer<decimal> correctValue = new Answer<decimal>(number1 * number2);
        problem = new BinaryProblem(number1, number2, Operators.Multiplication, correctValue);
        return problem;
    }
}

Here is another factory for the same problem, which creates additions and multiplications:

public class BinaryProblemFactory : Factory<BinaryProblem>
{
    private Bound<int> _bound;
    private LinkedList<Operators> _operatorsList;
    private LinkedListNode<Operators> _node;

    public BinaryProblemFactory(Levels level) : base(level) { }

    protected override void Initialize()
    {
        _bound = new Bound<int>();
        _operatorsList = new LinkedList<Operators>(new List<Operators>() { Operators.Addition, Operators.Multiplication });
        base.Initialize();
    }

    protected override void ConfigureLevels(Levels level)
    {
        switch (level)
        {
            case Levels.Easy:
                this._bound = new Bound<int>(10, 100);
                break;
            case
Levels.Normal:
                this._bound = new Bound<int>(50, 200);
                break;
            case Levels.Hard:
                this._bound = new Bound<int>(100, 10000);
                break;
        }
    }

    private Operators NextOperator()
    {
        if (_node == null || _node.Next == null)
        {
            _node = _operatorsList.First;
        }
        else
        {
            _node = _node.Next;
        }
        return _node.Value;
    }

    protected override BinaryProblem Generate()
    {
        BinaryProblem problem;
        int number1 = rnd.Next(_bound.Min, _bound.Max);
        int number2 = rnd.Next(_bound.Min, _bound.Max);
        Answer<decimal> correctValue = new Answer<decimal>(number1 + number2);
        problem = new BinaryProblem(number1, number2, NextOperator(), correctValue);
        return problem;
    }
}

I feel this is not a good design, so I need some help organizing these factories. I'm going to create around 200 of them, so I'd rather fix these issues now than suffer for them later. I also need to know the list of available levels. I guess creating three virtual functions like CanConfigureXLevel is not a good way. Maybe it would be better to create a dictionary containing the available levels (the Levels enum as key), whose values hold something like a container of the objects useful for generating the problem (for example, the binary and times-table factories both need two bound objects). Here is the current project. Answer: I think your list factory shouldn't be responsible for creating the concrete problems. How about separating the list factory from the problem factory? That way you can have timeTableProblems and binaryProblems in the same list. (Even though that might not be a requirement, it's still a nice separation of concerns.) It actually looks like you've catered for this since you have the IProblem interface. The list factory wouldn't have to be generic either; it'd just keep a list of IProblems generated by whatever problem factories it's given. Also, I believe you could apply the template method pattern to some of your methods. There is some repeated code in each of your concretes.
For instance, you could pull the ConfigureLevels method up to the base class and call down to ConfigureEasy, ConfigureMedium and ConfigureHard.

Update

Regarding template methods, the point is to remove duplication, and at the same time avoid letting the FirstTime method be overridden, for instance. It should call out to an empty OnFirstTime method instead. Derivatives might just forget to call base.FirstTime, and you probably don't want that. (Why not just join it with Initialize()?)

Here's a stub of a sample of separating the list factory and the problem factories. The point is that these are two different things: a list of problems, and the problems themselves. Also have a look at this: http://en.wikipedia.org/wiki/Single_responsibility_principle

public class ProblemListFactory
{
    // omitted stuff

    public ProblemListFactory(ProblemFactory problemFactory)
    {
        // ...
    }

    public IList<IProblem> Generate()
    {
        for (// ...)
        {
            IProblem problem = problemFactory.Generate();
            if (ProblemExists(problem))
                continue;
            problems.Add(problem);
        }
        return problems;
    }
}

public abstract class ProblemFactory
{
    protected Random Random = new Random();

    public abstract IProblem Generate();
}

public class BinaryProblemFactory : ProblemFactory
{
    public BinaryProblemFactory(Level level)
    {
        // ...
    }

    public override IProblem Generate()
    {
        // ...
        return new BinaryProblem(x, y, op, answer);
    }
}

// ...
var binaryProblemListFactory = new ProblemListFactory(new BinaryProblemFactory(Level.Easy));
var binaryProblems = binaryProblemListFactory.Generate();
var timeTableProblemListFactory = new ProblemListFactory(new TimeTableProblemFactory(Level.Medium));
var timeTableProblems = timeTableProblemListFactory.Generate();
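The same separation of concerns can be shown as a small runnable sketch that also fills in the pieces the C# stub leaves out (de-duplication, growth on demand, and the retry limit from the original Factory). Python is used here only for brevity; every class and method name is illustrative, not part of the original project.

```python
import random

# Runnable sketch of the proposed separation: the problem factory only
# knows how to make ONE problem; the list builder owns uniqueness
# checking, growth, and the retry limit. All names are illustrative.

class BinaryProblem:
    def __init__(self, x, y, op, answer):
        self.key = (x, y, op)          # identity used for de-duplication
        self.answer = answer
    def __eq__(self, other):
        return self.key == other.key
    def __hash__(self):
        return hash(self.key)

class AdditionProblemFactory:
    """Knows only how to generate a single problem for a difficulty bound."""
    def __init__(self, low, high, rng=None):
        self.low, self.high = low, high
        self.rng = rng or random.Random()
    def generate(self):
        a = self.rng.randint(self.low, self.high)
        b = self.rng.randint(self.low, self.high)
        return BinaryProblem(a, b, "+", a + b)

class ProblemListBuilder:
    """Owns the list: de-duplication, growth on demand, retry limit."""
    def __init__(self, factory, max_retries=50):
        self.factory = factory
        self.max_retries = max_retries
        self.problems = []
        self._seen = set()
    def take(self, n):
        while len(self.problems) < n:
            for _ in range(self.max_retries):
                p = self.factory.generate()
                if p not in self._seen:
                    break
            else:
                raise RuntimeError("could not generate a fresh problem")
            self._seen.add(p)
            self.problems.append(p)
        return self.problems[:n]
```

Because uniqueness is checked against a set keyed on the problem's identity, membership tests stay O(1) instead of the O(n) LinkedList.Contains scan in the original code.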
{ "domain": "codereview.stackexchange", "id": 1204, "tags": "c#, factory-method" }
Equilibrium constant vs Reaction rate constant
Question: For a reaction, e.g., $$a X + b Y → c Z$$ its forward reaction rate is $${\displaystyle r=k_f(T)[\mathrm {X} ]^{m}[\mathrm {Y} ]^{n}}$$ where the exponents m and n are called partial orders of reaction and are not generally equal to the stoichiometric coefficients a and b. However, for the equilibrium constant, the exponents must be stoichiometric, e.g., $$K_c=\frac{[Z]^c}{[X]^a [Y]^b}$$ However, for a reversible chemical reaction, $K_c=k_f/k_b$, which means that the stoichiometric coefficients are required for equilibrium. Does this mean that when the exponents of the rate equation are equal to the stoichiometric coefficients, the reaction is in equilibrium? Answer: For more complex reactions, reaction rate equations do not follow the numerator and denominator of the equilibrium constant equation. They may not match the stoichiometric coefficients. They may have non-integer exponents. They may contain additive and/or ratio terms. Such reactions are not elementary and have a reaction scheme. Therefore, such forward/backward reaction rates, involving initial reactants and final products, do not belong to the opposite directions of the same elementary reaction. The ratio of their rate constants then need not match the equation for the equilibrium constant. As an example, see the surprisingly complex forward reaction rate equation (small PDF) for the reaction $$\ce{H2(g) + Br2(g) <=> 2 HBr(g)}$$ $$\frac{\mathrm{d}[\ce{HBr}]}{\mathrm{d}t}=\frac{k_1[\ce{H2}]{[\ce{Br2}]}^{\frac 12}}{1 + k_2\frac{[\ce{HBr}]}{[\ce{Br2}]}}$$ (simplified)
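For the special case the question asks about, an elementary reversible reaction, the identity $K_c = k_f/k_b$ can be checked numerically: integrate X + Y ⇌ Z (where the rate law does follow the stoichiometry) until the forward and backward rates balance, then compare $[Z]/([X][Y])$ with $k_f/k_b$. The rate constants and starting concentrations below are arbitrary illustrative values.

```python
# Numerical check of K_c = k_f / k_b for an ELEMENTARY reversible reaction
# X + Y <=> Z (a = b = c = 1). Simple forward-Euler integration; the
# rate constants and initial concentrations are illustrative only.

def equilibrate(kf, kb, x0, y0, z0, dt=1e-3, steps=100_000):
    x, y, z = x0, y0, z0
    for _ in range(steps):
        forward = kf * x * y       # elementary: exponents = coefficients
        backward = kb * z
        net = (forward - backward) * dt
        x -= net
        y -= net
        z += net
    return x, y, z

kf, kb = 2.0, 0.5
x, y, z = equilibrate(kf, kb, x0=1.0, y0=1.0, z0=0.0)
K_c = z / (x * y)
# At equilibrium, K_c should match k_f / k_b = 4.0
```

Note that the steady state of the integration is exactly the point where the forward and backward rates are equal, which is why the computed ratio lands on $k_f/k_b$; for a non-elementary reaction like the HBr example above, no such identification of the empirical rate constants with $K_c$ is possible.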
{ "domain": "chemistry.stackexchange", "id": 16982, "tags": "physical-chemistry, reaction-mechanism, thermodynamics, equilibrium, combustion" }
How is weather formation prevented in the stratosphere?
Question: Water diffuses from an area of high concentration to an area of low concentration. By that principle, I would expect water from the troposphere to diffuse into the stratosphere, where water content is significantly lower than in the former. However, this is not what happens; a reason given in my textbook is as follows: Increasing temperatures stop clouds and weather systems from reaching this height: it acts like the lid on top of a boiling pan above the active weather systems in the troposphere. $^{1}$ How is it that rising temperatures play a role here? Plus, the hottest region of the stratosphere (its top) isn't much hotter/colder compared to the troposphere, so a temperature gradient wouldn't be responsible, would it? Essentially, what role does temperature play in weather "regulation", and how does it prevent weather formation & water entering the stratosphere through diffusion? 1. Pallister, John. IGCSE Environmental Management. Second ed., Oxford University Press, 2017. Answer: By that principle, I would expect water from the troposphere to diffuse into the stratosphere, where water content is significantly lower than in the former. You are forgetting that water is not nearly as volatile as are the long-lived gases that comprise the bulk of the Earth's atmosphere. Water's triple point temperature is 0.01° C. The triple point temperatures of nitrogen, oxygen, and argon are -209.97° C, -218.79° C, and -189.34° C. There is no point in the Earth's atmosphere where those long-lived gases can make the phase transition from gas to liquid or solid. There are plenty of places in the Earth's troposphere where water vapor can do just that. We call the result snow or rain. How is it that rising temperatures play a role here? It's not just rising temperatures in the stratosphere with increased altitude, although that does play a role. The decreasing temperatures in the troposphere with increased altitude also play a role.
The temperature at the top of the troposphere / bottom of the stratosphere is around -50° C. The Earth's atmosphere can sustain but a paltry amount of water vapor at the temperature and pressure of the tropopause. Where the increasing temperatures with increased altitude in the stratosphere do play a role is that they make the tropopause a temperature inversion layer. Inversion layers tend to stop vertical transport of air. Inversion layers in the lower atmosphere are one of the reasons pollution can be rather nasty in cities such as Los Angeles and Denver. Diffusion of water vapor from the upper troposphere into the lower stratosphere is still possible, but at -50° C, there is very little water vapor available to diffuse across the tropopause.
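How "paltry" the water vapor capacity at -50° C is can be made concrete with a Magnus-type empirical fit for saturation vapour pressure over liquid water. The coefficients below are from one common version of the fit (other sources differ slightly, and the value over ice at -50° C is lower still), so treat the numbers as order-of-magnitude illustrations rather than the answer's own figures.

```python
import math

# Magnus-type approximation for saturation vapour pressure over liquid
# water (in hPa), with T in degrees Celsius. Coefficients from one
# common empirical fit; this is an illustration, not an exact formula.
def e_sat_hpa(T_celsius):
    return 6.112 * math.exp(17.62 * T_celsius / (243.12 + T_celsius))

surface = e_sat_hpa(20.0)       # warm near-surface air: roughly 23 hPa
tropopause = e_sat_hpa(-50.0)   # near the tropopause: under 0.1 hPa
ratio = surface / tropopause    # air at -50 C can hold a few hundred
                                # times less water vapour than at 20 C
```

That factor of several hundred is the quantitative content of the "cold trap": whatever vapour survives the climb to the tropopause has almost all condensed out on the way.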
{ "domain": "earthscience.stackexchange", "id": 2327, "tags": "meteorology, temperature, water, water-vapour" }
How to 3D represent model in ROS
Question: Hello people, I just want to have some basic information about this because I am new to ROS. I tried to search for it on ROS and on the internet but could not find a similar question. I have to represent a model of different satellites (around 15) using features from ROS. The parameters of the satellites (coordinates, speed, etc.) are constantly changing over time and I have to represent their movement in 3D. Can you please tell me how I can represent all of them graphically while all of them are moving relative to time? Somebody told me that I can do it through rviz or rqt but I am not quite sure. Any help would be appreciated. Originally posted by Gudjohnson on ROS Answers with karma: 100 on 2013-07-18 Post score: 2 Answer: The usual way would be to use tf. Add some rviz displays for your satellites, e.g. Marker or whatever you have for a model. Those should show only the satellites in their local frame, e.g. /sat1/base_link (or whatever is in the center). Next you write some publisher node that computes the coordinates of the satellites and publishes their positions, e.g. the transformation from a fixed frame like /world to /sat1/base_link through a tf broadcaster. Within rviz you select /world as the fixed frame and all satellites should be displayed correctly, moving about. Originally posted by dornhege with karma: 31395 on 2013-07-18 This answer was ACCEPTED on the original site Post score: 3 Original comments Comment by Gudjohnson on 2013-07-23: Hey dornhege, When I do rostopic echo /mytopic I constantly get the output of the satellite positioning. So it tells me that it is publishing the data to the node. How can I use this node to display the information in tf? Do I have to write a tf broadcaster or publisher for it and then add the tf package in the launch file and run it? Comment by dornhege on 2013-07-23: Basically include a tf broadcaster in the node you are running that publishes to /mytopic. Instead (or in addition) supply this information to tf.
Even without the models, the TF display in rviz can show you that everything works. Comment by Gudjohnson on 2013-07-29: @dornhege I still have a lot of problems. Is there any example on the ROS wiki or somewhere which incorporates this? Comment by dornhege on 2013-07-30: There is a tutorial for writing a tf broadcaster in the tf package. You can start with that, set it up correctly and verify using the TF display of rviz. Once that works it should be easy to swap in your actual satellite models. Comment by Gudjohnson on 2013-08-05: @dornhege I did as you told me, but I still have not succeeded in outputting the satellite data on tf. I asked the question with details here (http://answers.ros.org/question/71503/when-do-i-need-a-tf-listener/) but could not reach a solution. Can you please help me here?
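The publisher node dornhege describes needs, at every tick, the pose of each satellite in the fixed /world frame. A minimal sketch of that computation side is below; the circular planar orbit is a hypothetical placeholder for real satellite dynamics, and the actual tf broadcast is shown only as a comment because it requires a running ROS system.

```python
import math

# Sketch of the publisher node's computation: the pose of a satellite in
# the fixed /world frame as a function of time. The orbit model here
# (planar circular orbit, made-up radius/period) is a placeholder for
# your real dynamics.
def satellite_pose(t, radius, period, phase=0.0):
    """Return (x, y, z) of the satellite in the /world frame at time t."""
    angle = 2.0 * math.pi * t / period + phase
    return (radius * math.cos(angle), radius * math.sin(angle), 0.0)

# Inside the real ROS (rosbuild/ROS 1) node you would broadcast this
# each cycle, roughly along the lines of the tf broadcaster tutorial:
#
#   br = tf.TransformBroadcaster()
#   x, y, z = satellite_pose(rospy.Time.now().to_sec(), r, T, phi)
#   br.sendTransform((x, y, z),
#                    tf.transformations.quaternion_from_euler(0, 0, 0),
#                    rospy.Time.now(), "/sat1/base_link", "/world")
#
# With /world selected as the fixed frame, rviz then shows each
# satellite's Marker/model moving along its orbit.
```

One such transform per satellite (e.g. /sat1/base_link through /sat15/base_link) is all rviz needs; the TF display alone will show the moving frames even before any models are attached.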
{ "domain": "robotics.stackexchange", "id": 14966, "tags": "ros, rviz, roslaunch, rqt, rosbuild" }
Is there any special significance of force field in physics?
Question: What is the formal definition of a force field? Which is more fundamental, force or field? Do fields exist in nature (as forces do, I think, as per section 12-1 of Feynman lectures volume 1, and pages 8-9 of Keith Symon's Classical Mechanics), or are they a mathematical definition? Why do fields (take the gravitational field, for example) need time to propagate; what is a field, formally? Answer: There is a name clash here. Vector fields are mathematical objects: functions assigning a vector to each point of (3- or 4-dimensional) space. They are purely mathematical objects, existing by definition, independent of physical reality. Meanwhile, physical fields are believed to exist in nature. In the framework of classical mechanics, the introduction of the Newtonian gravitational field is avoidable; basically, you can freely decide whether you want to work with instantaneous forces acting between pairs of masses, or with fields created by mass distributions. However, in modern physics, fields are seen as physically existing "things", a form of matter, capable of storing energy and propagating information. In some theories (for example Quantum Field Theory) all forms of matter (particles) are described as excitations of fields. The confusion seems to arise from the fact that vector (or scalar or tensor) fields are used as a model of physical fields. Even physicists often conflate them completely. For example, in full philosophical rigor, you cannot take the divergence of the electric field. You take the divergence of the vector field you use to model the electric field. But since all our observations suggest that the electric field can be readily and fully modeled and understood using a vector field, nobody bothers to emphasize the distinction. Regarding the propagation, it is not a mathematical consequence of modelling physical fields as vector fields.
You can, and indeed in Newtonian mechanics or electrostatics often you do, use fields that react instantly, without needing time, to distant changes in the configuration of their sources. So the mathematical formalism of finite-speed propagation lies in the equations governing the time evolution of the fields (e.g. the Maxwell equations for the EM field). By the way, gravitational waves do not exist in Newtonian gravity. They are a consequence of the newer (102 years old) theory of general relativity. This theory uses an even more involved mathematical model, working not only with fields existing in space and time, but also with the curvature of spacetime itself.
{ "domain": "physics.stackexchange", "id": 54420, "tags": "forces, classical-mechanics, field-theory, definition, vector-fields" }
ROS installation on Ubuntu taking too long
Question: I am a ROS as well as Ubuntu newbie. I am trying to install ROS on Ubuntu 12.10 (quantal). It's been more than 6 hours since I started the install at Step 1.4 (Desktop-Full Install) described on the Ubuntu installation page on ROS.org. All I see on the Ubuntu terminal is this: "Selecting previously unselected package "...insert package name here..." Unpacking "........"" (it is doing this selecting and unpacking of various packages one after the other). It seems to be never ending. Is this normal or am I doing something wrong? Originally posted by ErtahM on ROS Answers with karma: 3 on 2013-06-01 Post score: 0 Answer: That is the normal output. The length of time is surprising, though; often that suggests you have issues or hardware that is not powerful enough. What hardware are you running? Originally posted by tfoote with karma: 58457 on 2013-08-27 This answer was ACCEPTED on the original site Post score: 0 Original comments Comment by ErtahM on 2013-08-27: It was the hardware.... had to get a new computer to be able to work with ROS. Old system: dual-boot PC, Intel T2400 1.83GHz, RAM 2GB, ATI X1300 ... Comment by tfoote on 2013-08-27: Glad to hear you got it working. Please accept the answer at the left using the checkmark so others know your question is solved.
{ "domain": "robotics.stackexchange", "id": 14394, "tags": "ros, installation, ubuntu, ubuntu-quantal" }
Meissner Effect and Lorentz Force Paradox?
Question: The Meissner effect expels magnetic field lines from a superconductor; see the picture below. Left is the normal conducting state, right is the superconducting state. If I now have a superconducting wire of radius $r_0$, the Meissner effect leads to $B(r<r_0)=0$. When I drive a current $J(r<r_0)=J_0$, this current leads to zero forces when calculating the force using the Lorentz force: $f=j\times B = 0$. This would mean that all superconducting coils experience no forces. What am I missing here? Answer: A great question! The answer is that there is still a force of ${\bf I}\times {\bf B}_{\rm external}$ per unit length of the wire. This is because the ${\bf B}$ field does penetrate some distance (naturally this is called the penetration depth) into the superconducting wire, and, for reasons similar to the Meissner effect itself, this near-surface penetration-depth region is also where the current carried by the wire flows. That the location of the current and the strength of the penetrating ${\bf B}$ field conspire to give exactly the same answer for the force as if there were no Meissner effect is not exactly obvious. It is, however, a magnetic analogue of the statement that if you put a charge $Q$ on a conducting body and immerse the body in a uniform electric field ${\bf E}_{\rm external}$ then the force on the body is still exactly $Q {\bf E}_{\rm external}$, despite the fact that there is no ${\bf E}$ field inside the conducting body.
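The "conspiracy" between the penetrating field and the near-surface current can be checked in the simplest geometry: a London-model superconducting half-space with the external field parallel to its surface. The field decays as $B(x)=B_0 e^{-x/\lambda}$, the screening current density is $J(x) = -(\mathrm{d}B/\mathrm{d}x)/\mu_0$, and integrating the force density $J\,B$ over depth gives the sheet current $K = B_0/\mu_0$ times the average of the field across the current-carrying layer, $B_0/2$. This toy 1-D slab (with illustrative numbers) is not the full wire problem from the answer, but it shows the same mechanism at work.

```python
import math

# Toy 1-D London-model check of the "conspiracy" described above.
# Geometry (an assumption for illustration): superconducting half-space
# x >= 0, external field B0 parallel to the surface, penetration depth lam.
mu0 = 4e-7 * math.pi
B0, lam = 0.1, 50e-9           # illustrative values: 0.1 T, 50 nm

def B(x):                      # field decays over the penetration depth
    return B0 * math.exp(-x / lam)

def J(x):                      # screening current density, J = -(dB/dx)/mu0
    return B0 / (mu0 * lam) * math.exp(-x / lam)

# Integrate numerically over many penetration depths (left Riemann sum).
N, xmax = 200_000, 20 * lam
dx = xmax / N
force = sum(J(i * dx) * B(i * dx) * dx for i in range(N))      # N per m^2
sheet_current = sum(J(i * dx) * dx for i in range(N))          # A per m

# force ~= sheet_current * B0 / 2 = B0^2 / (2 mu0): the current feels
# the AVERAGE of the field outside (B0) and deep inside (0).
```

The analytic value $B_0^2/(2\mu_0)$ is just the magnetic pressure on the surface, which is why the net force comes out as if the full external field acted on the total current even though neither is uniform inside.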
{ "domain": "physics.stackexchange", "id": 69714, "tags": "electromagnetism, condensed-matter, solid-state-physics, superconductivity" }
What evidence is there of Earth-Like internal features of Europa?
Question: This question is inspired by "What are the Earth-like features of Titan?". According to NASA's Europa overview, Europa is believed to have an iron core, rocky mantle and an ocean under the frozen surface. Does the tidal heating of Europa's interior by Jupiter make it a smaller analogue for Earth? Answer: This image, originally sourced from NASA, shows the theorized interior of Europa: According to this geology site, Earth's core is predominantly iron and nickel, and this space site says that Earth's mantle is predominantly silicates. So the internal similarities so far are that they both possess iron cores and silicate rock mantles. However, the core of Europa is mixed with sulphur rather than Earth's nickel. Another similarity between Europa and Earth could be the composition of the oceans. NASA speculates that Europa's oceans are also briny, with salts and minerals dissolved in them. This is mostly theorised from what we know about planet formation. However, this also comes with many differences: the Galileo spacecraft did not detect a magnetic field from Europa, which leads to speculation that, whilst Europa may have an iron compound core, it is probably not molten any more.
Conclusion
Interior similarities:
- Iron compound core
- Silicate rock mantle
- Salty, briny oceans
Differences:
- Iron core mixed with sulphur, not nickel
- Core likely to be solid, not molten
- Lack of an internally generated magnetic field
{ "domain": "astronomy.stackexchange", "id": 34, "tags": "europa" }
Dates of Passage of Neptune at Aphelion and Perihelion
Question: I'm working with a group of colleagues on a Solar System simulator. We need two items of data that we are missing: the nearest dates of Neptune's passage at aphelion and perihelion. Answer: The last perihelion passage of the Neptune system barycentre relative to the Solar System barycentre (SSB) was at 1881-Feb-02 10:00 TDB. Here's a plot (courtesy of JPL Horizons) of the distance from the Neptune barycentre to the Sun and to the Solar System barycentre for a couple of decades around that date. The vertical axis is the distance in billions of kilometres. From: 2405159.5 A.D. 1873-Jan-01 00:00:00.0 To: 2411368.5 A.D. 1890-Jan-01 00:00:00.0 As you can see, relative to the Sun, there was actually a local maximum distance near that date, around 6 years after Neptune's closest distance to the Sun, and another local minimum around 5 years after that. The distance to the SSB behaves more like we expect from a nice elliptical orbit. I created that plot using my script here. Zooming in, From: 2408113.5 A.D. 1881-Feb-02 00:00:00.0 To: 2408114.5 A.D. 1881-Feb-03 00:00:00.0 That's a plot of the distance from the Neptune barycentre to the SSB. If you want to search for the closest distance for the Neptune body centre, you can use this script, setting 899 as the target. If you want the distance to the Sun, use @10 as the centre. There are brief instructions on using the script here. Here's a link to the Horizons data used for the last plot. And here's the raw data:
*******************************************************************************
Ephemeris / API_USER Tue May 3 10:47:05 2022 Pasadena, USA / Horizons
*******************************************************************************
Target body name: Neptune Barycenter (8) {source: DE441}
Center body name: Solar System Barycenter (0) {source: DE441}
Center-site name: BODY CENTER
*******************************************************************************
Start time : A.D.
1881-Feb-02 00:00:00.0000 TDB
Stop time : A.D. 1881-Feb-03 00:00:00.0000 TDB
Step-size : 60 minutes
*******************************************************************************
Center geodetic : 0.00000000,0.00000000,0.0000000 {E-lon(deg),Lat(deg),Alt(km)}
Center cylindric: 0.00000000,0.00000000,0.0000000 {E-lon(deg),Dxy(km),Dz(km)}
Center radii : (undefined)
Output units : KM-D
Output type : GEOMETRIC cartesian states
Output format : 6 (LT, range, and range-rate)
Reference frame : Ecliptic of J2000.0
*******************************************************************************
JDTDB, Calendar Date (TDB), LT, RG, RR,
**************************************************************************************************************************
$$SOE
2408113.500000000, A.D. 1881-Feb-02 00:00:00.0000, 1.721841396032804E-01, 4.459925356440414E+09, -1.775610666244478E-01,
2408113.541666667, A.D. 1881-Feb-02 01:00:00.0000, 1.721841396030089E-01, 4.459925356433380E+09, -1.600932159790273E-01,
2408113.583333333, A.D. 1881-Feb-02 02:00:00.0000, 1.721841396027653E-01, 4.459925356427073E+09, -1.426253883932667E-01,
2408113.625000000, A.D. 1881-Feb-02 03:00:00.0000, 1.721841396025500E-01, 4.459925356421494E+09, -1.251575835878349E-01,
2408113.666666667, A.D. 1881-Feb-02 04:00:00.0000, 1.721841396023627E-01, 4.459925356416644E+09, -1.076898016158184E-01,
2408113.708333333, A.D. 1881-Feb-02 05:00:00.0000, 1.721841396022035E-01, 4.459925356412521E+09, -9.022204256409987E-02,
2408113.750000000, A.D. 1881-Feb-02 06:00:00.0000, 1.721841396020724E-01, 4.459925356409125E+09, -7.275430640336375E-02,
2408113.791666667, A.D. 1881-Feb-02 07:00:00.0000, 1.721841396019694E-01, 4.459925356406457E+09, -5.528659310734295E-02,
2408113.833333333, A.D. 1881-Feb-02 08:00:00.0000, 1.721841396018945E-01, 4.459925356404518E+09, -3.781890258825515E-02,
2408113.875000000, A.D. 1881-Feb-02 09:00:00.0000, 1.721841396018478E-01, 4.459925356403306E+09, -2.035123490434648E-02,
2408113.916666667, A.D. 1881-Feb-02 10:00:00.0000, 1.721841396018291E-01, 4.459925356402822E+09, -2.883590222513597E-03,
2408113.958333333, A.D. 1881-Feb-02 11:00:00.0000, 1.721841396018385E-01, 4.459925356403066E+09, 1.458403165229071E-02,
2408114.000000000, A.D. 1881-Feb-02 12:00:00.0000, 1.721841396018760E-01, 4.459925356404037E+09, 3.205163069338877E-02,
2408114.041666667, A.D. 1881-Feb-02 13:00:00.0000, 1.721841396019416E-01, 4.459925356405737E+09, 4.951920692708190E-02,
2408114.083333333, A.D. 1881-Feb-02 14:00:00.0000, 1.721841396020353E-01, 4.459925356408164E+09, 6.698676024561424E-02,
2408114.125000000, A.D. 1881-Feb-02 15:00:00.0000, 1.721841396021571E-01, 4.459925356411319E+09, 8.445429075517470E-02,
2408114.166666667, A.D. 1881-Feb-02 16:00:00.0000, 1.721841396023070E-01, 4.459925356415202E+09, 1.019217984320074E-01,
2408114.208333333, A.D. 1881-Feb-02 17:00:00.0000, 1.721841396024850E-01, 4.459925356419812E+09, 1.193892832482304E-01,
2408114.250000000, A.D. 1881-Feb-02 18:00:00.0000, 1.721841396026911E-01, 4.459925356425151E+09, 1.368567452307574E-01,
2408114.291666667, A.D. 1881-Feb-02 19:00:00.0000, 1.721841396029253E-01, 4.459925356431218E+09, 1.543241844351693E-01,
2408114.333333333, A.D. 1881-Feb-02 20:00:00.0000, 1.721841396031876E-01, 4.459925356438011E+09, 1.717916007232026E-01,
2408114.375000000, A.D. 1881-Feb-02 21:00:00.0000, 1.721841396034780E-01, 4.459925356445534E+09, 1.892589943495820E-01,
2408114.416666667, A.D. 1881-Feb-02 22:00:00.0000, 1.721841396037965E-01, 4.459925356453783E+09, 2.067263649984577E-01,
2408114.458333333, A.D. 1881-Feb-02 23:00:00.0000, 1.721841396041431E-01, 4.459925356462761E+09, 2.241937128177309E-01,
2408114.500000000, A.D. 1881-Feb-03 00:00:00.0000, 1.721841396045178E-01, 4.459925356472466E+09, 2.416610378027369E-01,
$$EOE
**************************************************************************************************************************
TIME
Barycentric Dynamical Time ("TDB" or T_eph) output was requested.
This continuous relativistic coordinate time is equivalent to the relativistic proper time of a clock at rest in a reference frame comoving with the solar system barycenter but outside the system's gravity well. It is the independent variable in the solar system relativistic equations of motion. TDB runs at a uniform rate of one SI second per second and is independent of irregularities in Earth's rotation. Calendar dates prior to 1582-Oct-15 are in the Julian calendar system. Later calendar dates are in the Gregorian system.

REFERENCE FRAME AND COORDINATES
Ecliptic at the standard reference epoch
Reference epoch: J2000.0
X-Y plane: adopted Earth orbital plane at the reference epoch
Note: IAU76 obliquity of 84381.448 arcseconds wrt ICRF X-Y plane
X-axis : ICRF
Z-axis : perpendicular to the X-Y plane in the directional (+ or -) sense of Earth's north pole at the reference epoch.

Symbol meaning [1 day=86400.0 s]:
JDTDB Julian Day Number, Barycentric Dynamical Time
LT One-way down-leg Newtonian light-time (day)
RG Range; distance from coordinate center (km)
RR Range-rate; radial velocity wrt coord. center (km/day)

ABERRATIONS AND CORRECTIONS
Geometric state vectors have NO corrections or aberrations applied.

Computations by ...
Solar System Dynamics Group, Horizons On-Line Ephemeris System
4800 Oak Grove Drive, Jet Propulsion Laboratory
Pasadena, CA 91109 USA
General site: https://ssd.jpl.nasa.gov/
Mailing list: https://ssd.jpl.nasa.gov/email_list.html
System news : https://ssd.jpl.nasa.gov/horizons/news.html
User Guide : https://ssd.jpl.nasa.gov/horizons/manual.html
Connect : browser https://ssd.jpl.nasa.gov/horizons/app.html#/x
API https://ssd-api.jpl.nasa.gov/doc/horizons.html
command-line telnet ssd.jpl.nasa.gov 6775
e-mail/batch https://ssd.jpl.nasa.gov/ftp/ssd/hrzn_batch.txt
scripts https://ssd.jpl.nasa.gov/ftp/ssd/SCRIPTS
Author : Jon.D.Giorgini@jpl.nasa.gov
*******************************************************************************

As Barry Carter mentions in a comment, the orbit of Neptune is nearly circular. Its mean eccentricity is ~0.009, a little over half that of Earth's orbit. So perturbations by the other giant planets (on Neptune and on the Sun) can have a significant effect on the time and position of perihelion and aphelion. And that can have interesting effects on Neptune's mean and true anomaly (which are measured from the perihelion). Although real orbits aren't perfect ellipses, it can be useful to treat orbital motion as an ideal Keplerian orbit that changes over time. This is known as the osculating orbit: "the osculating orbit of an object in space at a given moment in time is the gravitational Kepler orbit (i.e. an elliptic or other conic one) that it would have around its central body if perturbations were absent. That is, it is the orbit that coincides with the current orbital state vectors (position and velocity)." Horizons can produce ephemerides of osculating orbit elements. Here's a plot of the eccentricity of the Neptune barycentre, relative to the Sun, for (roughly) one orbital period, with a step size of 2 calendar months. As you can see, it varies quite a bit, and gets extremely low at times. From: 2397854.5 A.D.
1853-Jan-01 00:00:00.0 To: 2458849.5 A.D. 2020-Jan-01 00:00:00.0
{ "domain": "astronomy.stackexchange", "id": 6342, "tags": "neptune" }
Low copy numbers of plasmids
Question: I have a plasmid with the P15A origin which apparently has a low copy number (see here). This would explain why my purification yields for subsequent digestion are low (the gel shows the plasmid after digestion and clean-up, with 5 µl being loaded). I am currently growing my plasmid-containing E. coli in 5 ml overnight cultures before doing a miniprep. My question is: should I grow my 5 ml cultures for longer (2 days), or just give up and grow a larger volume for a midiprep? I am also curious as to what actually controls the plasmid copy number inside cells. An explanation of the regulation of plasmid copy number would be much appreciated. Answer: I am sure you have done it; just a note for others: do a PCR screen to ascertain that the plasmid is there. For minipreps you generally do not need a starter culture. If you have screened your clones and now just want to amplify and keep the plasmid for cloning purposes, then go for a maxiprep with 200-400 ml cultures. Do not overgrow the cells: your extraction becomes difficult and you would get even worse yields. Just set up a huge culture and pool the cells before extraction. Always seed and harvest cells that are in exponential growth phase. If you need to screen the plasmid for clones or something, then do a colony PCR and keep the colony inoculated in a small culture (make a temporary glycerol stock if your screening is taking time). If the screen is positive, then inoculate the starter culture for the maxiprep/gigaprep culture from this stock.
{ "domain": "biology.stackexchange", "id": 3148, "tags": "molecular-biology, dna, plasmids" }
Why doesn't the number of space dimensions equal the number of time dimensions?
Question: It seems as though symmetry is a driving force behind theoretical physics. With symmetry in mind, should we expect that the number of time dimensions should be the same as the number of spatial dimensions? Answer: Relativity treats spacetime as a four dimensional manifold equipped with a metric. We can choose any system of coordinates we want to measure out the spacetime. It's natural for us humans to choose something like $(t, x, y, z)$, but this is not the only choice. Even in special relativity the Lorentz transformations mix up the time and spatial coordinates, so what looks like a purely time displacement to us may look like a mixed time and space displacement to a moving observer. In general relativity we may choose coordinate systems like Kruskal-Szekeres coordinates where there is no time coordinate in the sense we usually understand the term. So in this sense there is a symmetry between space and time coordinates because there is no unique separation between space and time. This is the point Harold makes in the comments. But this does not answer your question. Regardless of what coordinate system we use, locally the metric will always look like: $$ ds^2 = -da^2 + db^2 + dc^2 + dd^2 \tag{1} $$ where I've deliberately used arbitrary coordinates $(a, b, c, d)$ to avoid selecting one of them as time. If you look at equation (1) it should immediately strike you that there are three + signs and one - sign, so there is a fundamental asymmetry. It's the dimension with the - sign that is the timelike dimension, and the ones with the + sign are spacelike. Whatever coordinates we choose, we always find that there are three spacelike dimensions and one timelike. We call this the signature of the spacetime and write it as $(-+++)$ or, if you prefer, $(+---)$. Which brings us back to your question: why are there three spacelike dimensions and one timelike dimension?
And there is no answer to this because General Relativity contains no symmetry principle to specify what the signature is. If you decided you wanted two or three timelike dimensions you could still plug these into Einstein's equation and do the maths. The problem is that with more than one timelike dimension the equations become ultrahyperbolic and cannot describe a universe like the one we see around us. So the only answer I can give to your question is that there is only one time dimension because if there were more time dimensions we wouldn't be here to see them.
{ "domain": "physics.stackexchange", "id": 21772, "tags": "spacetime, spacetime-dimensions" }
Check if a string has HTML in C#
Question: I need a function to determine whether or not a string has HTML in it or not so that I can know whether I'm dealing with a plain-text format or HTML format. It seems simple enough in C#, using HTMLAgilityPack. Recursively go through the tree of nodes, and if any are an element node (or comment too) then we say "Yes, it's HTML" public static class HTMLUtility { public static bool ContainsHTMLElements(string text) { HtmlDocument doc = new HtmlDocument(); doc.LoadHtml(text); bool foundHTML = NodeContainsHTML(doc.DocumentNode); return foundHTML; } private static bool NodeContainsHTML(HtmlNode node) { return node.NodeType == HtmlNodeType.Element || node.NodeType == HtmlNodeType.Comment || node.ChildNodes.Any(n => NodeContainsHTML(n)); } } Am I missing anything? Thanks! Answer: I've had to do this exact thing before, your way is absolutely fine but I went for the other way round - checking if it was all text: private static bool HtmlIsJustText(HtmlNode rootNode) { return rootNode.Descendants().All(n => n.NodeType == HtmlNodeType.Text); } Then you have your public method as: public static bool ContainsHTMLElements(string text) { HtmlDocument doc = new HtmlDocument(); doc.LoadHtml(text); return !HtmlIsJustText(doc.DocumentNode); } I think that makes the code slightly more concise. I'd argue that your class should really be called HtmlUtility as per guidelines The PascalCasing convention, used for all identifiers except parameter names, capitalizes the first character of each word (including acronyms over two letters in length), as shown in the following examples: PropertyDescriptor HtmlTag
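The same "is it just text?" idea can be sketched outside .NET. Here is a rough Python analogue (my own illustration, not part of the review — HtmlAgilityPack is .NET-only, so this uses the standard-library html.parser instead) that flags any element or comment node:

```python
# Hedged sketch: report HTML if the parser ever sees a start tag or a
# comment; pure text input only triggers handle_data, so the flag stays False.
from html.parser import HTMLParser

class _TagDetector(HTMLParser):
    def __init__(self):
        super().__init__()
        self.found_markup = False

    def handle_starttag(self, tag, attrs):
        self.found_markup = True  # any element node means HTML

    def handle_comment(self, data):
        self.found_markup = True  # comments count as HTML too, as in the question

def contains_html(text: str) -> bool:
    parser = _TagDetector()
    parser.feed(text)
    return parser.found_markup
```

As with the C# version, this is a heuristic: html.parser is lenient, so stray `<` characters in prose are usually treated as data rather than markup.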
{ "domain": "codereview.stackexchange", "id": 23008, "tags": "c#, html, .net" }
Jacobian of affine warp in Lucas-Kanade image alignment
Question: I hate this kind of question, but I'm really stuck. I'm trying to implement the Lucas-Kanade algorithm as described in this paper (see pages 4 and 5). Unlike most explanations I've seen, they don't assume optical flow, but instead use an affine transformation for most examples, and in general refer to the warp function just as $W(x, p)$, where $x$ is the point and $p = (p_1, p_2, p_3, p_4, p_5, p_6)^T$ is the parameter vector of the affine transformation. At the beginning of page 4 the authors outline their version of the Lucas-Kanade algorithm. I'm stuck at steps (4) and (5), namely, evaluating the Jacobian $\frac{\partial W}{\partial p}$ and calculating the steepest descent images $\nabla I\frac{\partial W}{\partial p}$. (As far as I can understand, $\nabla I$ is just a pair of 2 matrices $(\frac{\partial I}{\partial x}, \frac{\partial I}{\partial y})$ in this context. Please correct me if I'm wrong.) The authors carefully describe the affine warp and even provide a formula for its Jacobian (Equation 8): $$\frac{\partial W}{\partial p} = \pmatrix{x & 0 & y & 0 & 1 & 0 \\ 0 & x & 0 & y & 0 & 1}$$ However, the Jacobian of the warp is (not surprisingly) defined only for a single pixel and not the entire image. But in step (5) we calculate the multiplication of the image gradient and this Jacobian, $\nabla I \frac{\partial W}{\partial p}$, and as far as I can see from the context (see, for example, Figure 2 on page 5), it is done for the whole image and not per pixel. My question is, how should I interpret this multiplication, and what are the real sizes/formats of $\nabla I$ and $\frac{\partial W}{\partial p}$? I understand that this question may require reading a lot from that paper, so I will be glad to explain any point that is not clear enough. Also feel free to refine the question title or contents if you have an idea of a better wording.
Answer: You're right that all the quantities are computed for a single pixel, so in the product $$\nabla I\cdot \frac{\partial W}{\partial p}\cdot\Delta p\tag{1}$$ the sizes of the vectors and matrices are $$\nabla I\quad\;\;\; 1\times 2\\ \frac{\partial W}{\partial p}\quad 2\times n\\ \Delta p\quad\;\;\;\; n\times 1$$ where $n$ is the number of parameters. So the expression in (1) is a scalar. The total error measure given by Equation (6) in the paper is the sum of this scalar expression over all pixels.
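To make the per-pixel sizes concrete, here is a small plain-Python sketch (my own illustration, not code from the paper) that forms the product $\nabla I\,\frac{\partial W}{\partial p}$ at every pixel for the 6-parameter affine Jacobian of Equation (8), giving one $1\times 6$ row per pixel — the six "steepest descent images":

```python
# Hedged sketch: Ix, Iy are HxW gradient images (nested lists). At pixel
# (x, y) the 1x2 gradient [Ix Iy] times the 2x6 Jacobian
# [[x,0,y,0,1,0],[0,x,0,y,0,1]] gives the 1x6 row
# [Ix*x, Iy*x, Ix*y, Iy*y, Ix, Iy].
def steepest_descent_images(Ix, Iy):
    H, W = len(Ix), len(Ix[0])
    return [[[Ix[y][x] * x, Iy[y][x] * x,
              Ix[y][x] * y, Iy[y][x] * y,
              Ix[y][x],     Iy[y][x]]
             for x in range(W)] for y in range(H)]
```

Stacking the k-th entry of every row back into an HxW grid reproduces the six images shown in Figure 2 of the paper.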
{ "domain": "dsp.stackexchange", "id": 1005, "tags": "computer-vision" }
Coherence with an infinite number of waves
Question: If I have an infinite number of sine waves with frequencies between 0 and 2, and I know what amplitude each wave has, is there a way for me to predict how they interfere? For example, if I have: frequencies=[ 0 ......................... 2] amplitudes=sin(frequencies*(pi/1))+sin(frequencies*(pi/2)) wave=sum(amplitudes*sin((pi*2)*frequencies)) what would the wave's phase and amplitude be at any given point? Thanks. Answer: Provided that the amplitude function $A(\omega)$ has a known Fourier transform, you can find the sum as $A(t) = \operatorname{Re} \int\!d\omega\, A(\omega) e^{i \omega t}$. If the Fourier transform isn't known, you can estimate it numerically.
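The numerical estimate the answer mentions can be sketched by a simple Riemann sum over the frequency band (my own discretisation of the expressions in the question; the 2000-point grid is an arbitrary choice):

```python
# Hedged sketch: approximate the continuous superposition
# wave(t) = integral over f in [0, 2] of A(f) * sin(2*pi*f*t) df
# with A(f) = sin(f*pi/1) + sin(f*pi/2), as written in the question.
import math

def wave(t, n_freq=2000):
    df = 2.0 / (n_freq - 1)
    total = 0.0
    for k in range(n_freq):
        f = k * df
        A = math.sin(f * math.pi) + math.sin(f * math.pi / 2)
        total += A * math.sin(2 * math.pi * f * t)
    return total * df
```

Evaluating wave(t) on a grid of t values then gives the interference pattern; at t = 0 every sine term vanishes, so the sum is exactly zero there.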
{ "domain": "physics.stackexchange", "id": 10629, "tags": "waves, coherence" }
Obtaining the velocity of a neutron from its kinetic energy
Question: I was told that the velocity of the neutron is calculated using the following formula: $$v=1.383\times10^6 \sqrt{E}.$$ $E$ is the kinetic energy in $eV$ and $v$ is in $cm\space s^{-1}$ Does this equation hold only for the neutron or is it valid with other particles? Answer: The following derivation is true for small values of $v$. $$E = \frac{1}{2}mv^2$$ $$v^2 = \frac{2}{m}E$$ $$v = \sqrt{\frac{2}{m}}\sqrt{E} = k\sqrt{E} \tag{1}$$ As your equation accepts $E$ in $eV$ (electron-volts) and gives out the answer in $cm \space s^{-1}$, we'll need to make some adjustments to equation $(1)$. $$1eV = 1.6\times 10^{-19}J$$ $$E \space eV = E \times 1.6\times 10^{-19}J$$ If we substitute $m$ and $E$ (with the conversion constant) in S.I. units in equation $(1)$, we will get the result in $ms^{-1}$. To convert $ms^{-1}$ to $cm \space s^{-1}$, we'll have to multiply the number by $100$. $$v = 100\times \sqrt{\frac{2}{m}} \sqrt{E \times 1.6 \times 10^{-19}}$$ $$v = \frac{4 \sqrt{2}\times 10^{-8}}{\sqrt{m}} \sqrt{E} \tag{2}$$ where $v$ is in $cm \space s^{-1}$, $m$ is in $kg$ and $E$ is in $eV$. For a neutron, $$m = 1.675\times 10^{-27}\space kg$$ Substituting the above number in equation $(2)$, we get: $$v = 1.38 \times 10^6\times \sqrt{E} \tag{3}$$ You can find the mass $m$ of different particles and substitute it in equation $(2)$ to obtain the $v-E$ relation. As the equation depends on the $m$ of the particle, this equation will not work for every particle. The equation for neutron might give answers pretty close to the answers for a proton as the mass of the proton $(1.673\times 10^ {-27} kg)$ is nearly equal to the mass of the neutron. As a matter of fact, you would obtain the equation $(3)$ for protons as well. The difference in the mass between the two particles is too small to affect the constant $(1.38)$ in the first two decimal places.
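The arithmetic in equations (2) and (3) is easy to check directly; here is a short sketch (plain Python, using the answer's value $1 \text{ eV} = 1.6\times 10^{-19}$ J) that reproduces the neutron constant and confirms the proton gives nearly the same one:

```python
# Hedged sketch: v [cm/s] = 100 * sqrt(2 * 1.6e-19 * E[eV] / m[kg]),
# i.e. the constant k in v = k * sqrt(E). Non-relativistic only.
import math

def speed_constant(mass_kg):
    return 100.0 * math.sqrt(2.0 * 1.6e-19 / mass_kg)

k_neutron = speed_constant(1.675e-27)  # close to the quoted 1.383e6
k_proton = speed_constant(1.673e-27)   # nearly identical, as the answer notes
```

The two constants differ only in the fourth significant figure, which is why the neutron formula also works, to two decimal places, for protons.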
{ "domain": "physics.stackexchange", "id": 38428, "tags": "kinematics, nuclear-physics, nuclear-engineering" }
Cooling the early earth
Question: My son had this theory that the icy comets cooled the hot earth during the late heavy bombardment. What do we know about this cooling effect on the early evolution of the earth? He intended his theory to apply to all "fiery" planets. Answer: Nice hypothesis. However, impacts from comets, though they are made of ice, won't cool the planet down; rather, if the nucleus survives the intense heat and crashes down, it converts far more kinetic energy into heat than its ice could carry away. However, the hypothesis is partly correct! Comets and asteroids carry asteroidal water, either as water directly (more often in comets) or as precursors of $H_2O$. This led to the formation of reservoirs in the craters on our proto-Earth, and these water bodies effectively cooled down Earth's crust through the latent heat of vaporization — quite similar to how water cools down a hot pan. The evaporated water contributed to the atmosphere, and once the temperature dropped due to radiative cooling, it rained down to cool the surface further (rain would occur only after the crust had cooled to at least $100$ °C, since clouds can rain down only after condensation). Secondly, the ejecta plume released would contain a lot of aerosols, which would block sunlight from reaching the Earth, reducing the solar irradiance. This applies only to the proto-Earth and an ejecta winter: in the present day, any such impact would release $CO_2$ into the atmosphere, increasing the greenhouse effect, and the water would not cool the Earth's crust because the crust is already in thermal equilibrium, so it would not work in the present-day scenario. Thank you!
{ "domain": "astronomy.stackexchange", "id": 7076, "tags": "solar-system, earth, comets" }
Quantum Physics - What is the probability of it being in specific state (Stuck on question)
Question: The normalised wavefunction for an electron in an infinite 1D potential well of length 65 pm can be written: $$\psi=(0.038 \psi_{n=1})+(-0.227\ i \psi_{n=10})+(g \psi_{n=5}).$$ If the state is measured, there are three possible results (i.e. it is in the $n=1$, $10$ or $5$ state). What is the probability (in %) that it is in the $n=5$ state? I am doing some revision and got stuck on this question; I just can't seem to find what I need in the Feynman lectures and my uni notes are terrible. Any help? Answer: For a normalised linear combination of (orthogonal) states like this, the probability of measuring one of them is the absolute square of its coefficient in the combination: If $$\Psi = a_1\psi_1+a_2\psi_2+\dots$$ where $|a_1|^2+|a_2|^2+\dots = 1$, then $$P(\psi_1) = \left|a_1\right|^2, \quad P(\psi_2) = \left|a_2\right|^2$$ etc. Slightly more generally, if you know you have a state $|\psi\rangle$ then the probability of measuring a (possibly different) state $|\phi\rangle$ is $\left|\langle\phi|\psi\rangle\right|^2$.
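Numerically, the unknown coefficient $g$ is fixed (up to a phase) by normalisation, which is all the question needs. A quick sketch using only the two coefficients given in the question:

```python
# Hedged sketch of the normalisation argument: the squared magnitudes of
# the three coefficients must sum to 1, so P(n=5) = |g|^2 follows directly.
p1 = abs(0.038) ** 2        # P(n=1)
p10 = abs(-0.227j) ** 2     # P(n=10); the factor i vanishes in |.|^2
p5 = 1.0 - p1 - p10         # P(n=5) by normalisation, about 94.7%
```

Note that the well length (65 pm) plays no role here; it only sets the energies, not the measurement probabilities.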
{ "domain": "physics.stackexchange", "id": 21826, "tags": "quantum-mechanics, homework-and-exercises, wavefunction, hilbert-space, born-rule" }
Does evolution only give rise to traits that confer fitness?
Question: Does evolution only give rise to traits that confer fitness? In other words, does the existence of a trait imply that it confers fitness? If not, what are some counterexamples? Answer: Rephrasing the question Does evolution only give rise to traits that confer fitness? The phrasing is actually a little nonsensical, but it is easy to understand what you mean. The reason is that "fitness" is not a characteristic of individuals but a measure (a variable if you wish) of a characteristic. Imagine you are talking about Shaquille O'Neal and you say "He is height!" instead of "He is tall!". Allow me to rephrase the question correctly. Note that because "high fitness" is a statement relative to the fitness of the others (just like being tall is relative to the average height), I highlighted this relativity in my rephrasing of the question. Does evolution only give rise to trait variants that are associated with higher fitness than other variants? Short answer No! Examples Very simple examples include all the genetic diseases. Reasons In short, evolution is much more than natural selection. All of the following are reasons why there are low-fitness variants in nature: genetic drift (incl. bottleneck or founder effect), spatial variation in the environment and gene flow (hence migration load), changing environment (incl. the biotic environment and the social environment) and non-equilibrium conditions in general, local peaks in fitness (and the concept of pseudo-species; fitness landscape), trade-offs, genomic conflicts, and many others... For more information on the "why" part of the question, you should start with an intro course to evolutionary biology such as Understanding Evolution by UC Berkeley, for example.
{ "domain": "biology.stackexchange", "id": 8032, "tags": "evolution, evolutionary-game-theory, macroevolution" }
How to separate KCl from NaCl and MgCl₂?
Question: Having mixed powder of $\ce{KCl}$, $\ce{NaCl}$, and $\ce{MgCl2}$ (and, perhaps, some crap), how can the components be separated? Answer: As I mentioned in the comments, a quantitative separation of the three compounds will probably not be possible outside of a proper laboratory, however a coarse separation is possible. A quick search on the internet yields this link for the separation of $\ce{NaCl}$ and $\ce{KCl}$. You didn't give us the exact composition of the table salt substitute, so I assume $\ce{MgCl2}$ is present in much smaller amounts than the first two compounds. The simplest strategy is to use the difference between the solubilities of the salts in water or ethanol. Using water is much more convenient, so we'll stick to that. I'll outline the method to obtain the highest purity $\ce{KCl}$. First, add a very large amount of table salt substitute to a container with boiling water (you'll want to add some 200-250 g per 100 g of water) and let the mixture dissolve as much as possible (though it probably won't even come close to dissolving completely). What you have then is a boiling water solution saturated with both $\ce{NaCl}$ and $\ce{KCl}$. Now, while the water is still very hot, quickly filter off the excess salt (might be doable with a fine sieve, or a funnel with some cotton would work better) and reserve the liquid in another container. Let the liquid cool down (using an ice bath is optional but increases yield and purity), and it will start to precipitate the salts. Now you perform another filtration, but this time you keep the solid retained on the filter, not the liquid. This yields purified $\ce{KCl}$. This strategy works because it turns out that $\ce{NaCl}$ and $\ce{KCl}$ behave rather differently in how their solubility in water is affected by temperature. 
$\ce{NaCl}$ solubility in water: 35.6 g/100 g water (0°C), 35.9 g/100 g water (20°C), 39.0 g/100 g water (100°C). $\ce{KCl}$ solubility in water: 28.0 g/100 g water (0°C), 34.2 g/100 g water (20°C), 56.3 g/100 g water (100°C). Evidently, as the filtered liquid cools down, $\ce{NaCl}$ remains almost as soluble in cooler water as in hot water, while $\ce{KCl}$ is only about half as soluble. This means that a large amount of $\ce{KCl}$ will precipitate as the filtered liquid cools, while only a bit of $\ce{NaCl}$ will follow. Hence, ideally, from 100 g of water saturated with both salts at 100°C (and containing nothing else), you will be able to extract 25.0 g of a mixture of salts containing 87.6% $\ce{KCl}$ if you perform the second filtration at 20°C. If you filter at 0°C, then the yield increases to 31.7 g of a mixture containing 89.3% $\ce{KCl}$. If for some reason you don't want to do the second filtration, you can just let the water evaporate completely after the first filtration, producing 95.3 g of a mixture containing 59.1% $\ce{KCl}$ from 100 g of saturated boiling water. In practice your yield and the purity of the $\ce{KCl}$ will likely be somewhat lower due to losses in manipulation and the presence of other impurities. The procedure may be complicated by the very large initial amount of table salt mixture that you have to dissolve in water. You can reduce the amount of the mixture, but at some point the purification yield will start to go down.
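The yield arithmetic above follows directly from the solubility drops on cooling; a short sketch (my own check, using only the solubility figures quoted in the answer — the results agree with the answer's numbers up to rounding):

```python
# Hedged sketch: on cooling a saturated solution from 100 C, each salt
# precipitates by the drop in its solubility (g per 100 g of water).
SOLUBILITY = {"NaCl": {0: 35.6, 20: 35.9, 100: 39.0},
              "KCl": {0: 28.0, 20: 34.2, 100: 56.3}}

def crop(t_cold):
    """(total precipitate, KCl mass fraction) per 100 g water, 100 C -> t_cold."""
    d_nacl = SOLUBILITY["NaCl"][100] - SOLUBILITY["NaCl"][t_cold]
    d_kcl = SOLUBILITY["KCl"][100] - SOLUBILITY["KCl"][t_cold]
    return d_nacl + d_kcl, d_kcl / (d_nacl + d_kcl)

mass20, purity20 = crop(20)  # ~25.2 g at ~87.7% KCl (answer quotes 25.0 g, 87.6%)
mass0, purity0 = crop(0)     # ~31.7 g at ~89.3% KCl
```

The same dictionary also gives the no-second-filtration case: evaporating everything recovers 39.0 + 56.3 = 95.3 g, of which 56.3/95.3 ≈ 59.1% is KCl, as stated.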
{ "domain": "chemistry.stackexchange", "id": 4097, "tags": "inorganic-chemistry, mixtures" }
Why does a blackboard dry very quickly?
Question: When you have made some stupid mistakes on a blackboard, you quickly want to erase it with a wet sponge before anyone sees them. So you clean the blackboard and within a minute the blackboard is clean and dry again! I was wondering why the board is drying so quickly compared to other surfaces. Does it absorb the water or is it all due to "good" evaporation? Answer: [Already said] A blackboard is not porous, i.e. it actually never takes up much water from the sponge in the first place (and if you were to squeeze out more than a little, it would just run down to the bottom). [Already said] Yet the surface is hydrophilic, i.e. the water that does stay on the board forms a very thin film instead of droplets (as you'd get on a plastic or freshly wiped glass surface), and together with the slightly rough texture this makes for a large surface area to only a very small volume of water. This surface is where evaporation takes place; the larger, the better. The board is mounted vertically. That's the ideal configuration for convection: water vapour has a lower density than air, so close to the surface (which, because of the second point, quickly evaporates a lot of water into the air directly next to it) the air rises up, and because the entire surface is aligned in the same direction and air can efficiently stream along the surface from below (turbulence helps further), there's a steady supply of unsaturated air into which more water can evaporate unhindered. [Already said] The bulk of the board is usually metallic, i.e. it has good thermal conductivity. To the touch (which emits heat into the board), one perceives it therefore as cold, but to the evaporating water (which requires heat) the same property has a warming effect. That keeps the evaporation speed high, both directly and through preventing the reduced temperature from mitigating the convection-causing density reduction.
{ "domain": "physics.stackexchange", "id": 39447, "tags": "everyday-life, water, evaporation, absorption" }
Sum the largest five numbers
Question: I'm doing a code-challenge problem: Input The first line consists of an integer T (T ≤ 100) which represents the number of test cases. Each test case consists of an integer N (5 ≤ N ≤ 100.000) which represents the number of integers, followed by a line that consists of N integers Ai (0 ≤ Ai ≤ 1000) separated by a space. Output For each test case, print in a line an integer that represents the sum of 5 largest numbers. My code (below) works for small tests, but on the online judge, it resulted in Time Limit Exceeded. Some approaches I've tried that I think speed up the program (need confirmation): Changing language from C++ to C (does it give significant boost?) When calculating the sum, I use while instead of for loop (which one is more effective?) #include <stdio.h> int main () { int testCases; scanf("%d", &testCases); for (int i = 1; i <= testCases; i++) { int size, sum, temp, k, limit; sum = 0; temp = 0; scanf("%d", &size); int array[size]; for (int i = 0; i < size; i++) { scanf("%d", &array[i]); } for (int i = 0; i < size; i++) { for (int j = i+1; j < size; j++) { if (array[j] < array[i]) { temp = array[i]; array[i] = array[j]; array[j] = temp; } else { continue; } } } k = size - 1; limit = size - 6; while (k > limit) { sum += array[k]; k--; } printf("%d\n", sum); } } Answer: NO: C++ is not magically slower than C. Bad C++ code is slower than good C code and vice versa. NO: For the compiler it is completely irrelevant whether you use a for or a while loop as they will all be normalized into a consistent representation anyway. Now to the actual review. Your code will not improve from porting to C as you are already not using any C++ features. In C++ you should generally use streams. 
They have a bad reputation but generally they greatly improve your code: int numTests; std::cin >> numTests; This also works for ranges std::vector<int> array(size); std::istream_iterator<int> eos; // end-of-stream iterator std::istream_iterator<int> iit (std::cin); // stdin iterator std::copy(iit, eos, array.begin()); // copy from std::cin For sorting you should refer to the standard library aka std::sort. However, you should not even sort but determine the 5 largest elements. This is achieved by the algorithm nth_element std::nth_element(array.begin(), std::next(array.begin(), 5), array.end(), std::greater<int>()); Now you can simply sum them up const int result = std::accumulate(array.begin(), std::next(array.begin(), 5), 0); So the whole algorithm would be const int sum_largest(const std::size_t numElements, std::vector<int>& data) { if (numElements > data.size()) { // Error handling } std::nth_element(data.begin(), std::next(data.begin(), numElements), data.end(), std::greater<int>()); return std::accumulate(data.begin(), std::next(data.begin(), numElements), 0); }
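For comparison, the same select-then-sum idea can be sketched in Python (my translation, not part of the review): heapq.nlargest plays the role that std::nth_element plays in the C++ version, avoiding a full sort of the input.

```python
# Hedged sketch: sum of the k largest values without sorting everything.
import heapq

def sum_largest(k, data):
    if k > len(data):
        raise ValueError("k exceeds the number of elements")
    return sum(heapq.nlargest(k, data))
```

Like nth_element, nlargest does O(n log k) work rather than the O(n log n) of a full sort, which matters for the N up to 100,000 allowed by the problem.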
{ "domain": "codereview.stackexchange", "id": 36060, "tags": "c++, beginner, c, programming-challenge, time-limit-exceeded" }
Optimizing string analysis using slicing
Question: I am currently picking Python back up after more than 6 years of not using it, and to do that I am solving some small challenges on Hacker Rank. I wrote code that works, but it is not performant, and I would like some ideas for optimizing it. Problem statement Given a certain string, count how many anagrams are possible for that string. The anagrams are those substrings for which the letters used are the same. For example, if we take the string mama, we have 5 possible anagrams: The first m with the third m The second a with the fourth a The first ma with the second ma The first ma with the middle am The middle am with the last ma My function should count the number of anagrams for a given string. Current (working) code I've come up with the below code, which always works as expected (note: I'm adding the __main__ as well so that you can copy-paste and run the code in any IDLE): (disclaimer: if you want to understand the logic without reading the code, I've added a "logic" section just after the code snippet). #!/bin/python3 import math import os import random import re import sys # Complete the sherlockAndAnagrams function below.
def sherlockAndAnagrams(s): nb = 0 stringSize = len(s) for size in range(1, len(s)): for i in range(0, len(s) - 1): ss = s[i:i+size] for j in range(i+1, len(s)): if j+size <= stringSize: ss2 = s[j:j+size] if haveSameLetters(ss, ss2): nb += 1 return nb def haveSameLetters(s1,s2): s1Map = getMapOfLetters(s1) s2Map = getMapOfLetters(s2) return areMapEquals(s1Map, s2Map) def areMapEquals(m1, m2): if len(m1) != len(m2): return False for k,v in m1.items(): if not m2.__contains__(k): return False else: if m2.get(k) != v: return False return True def getMapOfLetters(s): sMap = {} for letter in s: if sMap.__contains__(letter): sMap[letter] = sMap.get(letter)+1 else: sMap[letter] = 1 return sMap if __name__ == '__main__': s1 = "kkkk" print("Expect 10, received ", sherlockAndAnagrams(s1)) s2 = "ifailuhkqq" print("Expect 3, received ", sherlockAndAnagrams(s2)) s3 = "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa" print("Expect 166650, received ", sherlockAndAnagrams(s3)) Logic in words With this code, I analyze blocks of substrings. First, I analyze each substring of size 1, then of size 2, then of size 3 etc. until n-1 (where n would be the size of the whole string in input). Let's take the word mama again. What I do is: The count starts at 0. I take the first substring of size 1, which is m, and I compare it with the second a (False), the third m (True) and the fourth a (False). The count is now 1. Then I take the second substring of size 1 which is a, and I compare it with the third m (False) and the fourth a (True). The count is now 2. Then I take the third substring of size 1, which is m, and I compare it with the fourth a (False). The count stays at 2. Then I move to the first substring of size 2, which is ma, and I compare it with the next substring am (True) and ma (True). The count is now 4. Then I move to the next substring which is am and I compare it with the next substring ma (True). The count is now 5. 
Then I move to the first substring of size 3, which is mam, and I compare it with the next substring ama (False). The count stays at 5. At this point, the loop is over and I'm left with the count 5. Optimization The code works, but it is pretty time-consuming. For the third example in my code snippet, a string which contains 100 times the letter a, the result of the count of possible anagrams is 166,650 and the code needs to make 490,050 iterations before getting to that result. In the case of that 100 a's string, it takes 1.324274 seconds to execute. I am trying to optimize this code. Any thoughts or suggestions? Answer: Unused Imports You don't use any of the imports in your code. Remove them. areMapEquals Dictionaries can be compared using the == operator: def areMapEquals(m1, m2): return m1 == m2 But at this point, this function is obsolete since you can now make this check in the haveSameLetters function. But eventually that function gets obsolete too, since you can do that calculation without a function too. Naming Functions and variables should be in snake_case, not camelCase. Counters Instead of creating a dict yourself, utilize the collections.Counter class, which does all of this for you. It also has its own equality comparison, so you can reduce a lot of your code. Looping Your first loop is just for keeping track of the size. But if you look closely, size is always just one more than i, so just define size inside the second loop as such. This removes the need for the first loop, greatly increasing performance. Reuse variables You have stringSize (which should be length) which holds the size of the string, but you still have len(s) in your code. Just use that variable. Efficient Algorithms Here's something I wrote a while back that solves this very question: def count_anagrams(string: str) -> int: """ This counts the total amount of anagram substrings, given a string. @param str string - String to count anagrams. @return int - Number of anagrams.
""" n = len(string) _map = dict() for i in range(n): substr = '' for j in range(i, n): substr = ''.join(sorted(substr + string[j])) _map[substr] = _map.get(substr, 0) _map[substr] += 1 return sum((v * (v - 1)) // 2 for v in _map.values()) While this doesn't utilize the collections.Counter class, it's still extremely efficient and gets the job done!
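The review's suggestions (Counter, snake_case, grouping substrings by a canonical key) can be combined into one short variant. A sketch, assuming `collections.Counter` is acceptable; the name `count_anagrams_counter` is mine, not from the review:

```python
from collections import Counter

def count_anagrams_counter(s):
    # Key each substring by its sorted letters; anagrams share the same key.
    counts = Counter(
        ''.join(sorted(s[i:j]))
        for i in range(len(s))
        for j in range(i + 1, len(s) + 1)
    )
    # A key seen v times contributes v*(v-1)/2 anagram pairs.
    return sum(v * (v - 1) // 2 for v in counts.values())
```

It reproduces the examples from the question: 10 for "kkkk", 3 for "ifailuhkqq", and 5 for "mama".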
{ "domain": "codereview.stackexchange", "id": 38891, "tags": "python, python-3.x" }
Relation between the plots of the spectrum of real signals X+jY separately and its absolute |X+jY|
Question: I am looking to understand the FFT more deeply. While testing with code and math, I ran into two points I don't understand. First, we can feed the FFT either $x$, $jy$, or $x+jy$. If we apply a real signal $x$, the output will in general be complex, $X+jY$. Normally, when I learned about the FFT of a cosine, it was said to give two symmetric deltas in the real part with positive sign, while a sine gives two antisymmetric deltas in the imaginary part (1st image). But when I instead look at the plots of the real and imaginary parts of $X+jY$ separately for both sine and cosine signals (2nd image), nothing shows that the sine lives in the imaginary part; instead, the outputs look almost identical. How should I interpret the separate plots of the real and imaginary parts of $X+jY$, and how do they relate to the first image, which we usually take to be the magnitude $|X+jY| = |X(f)|$ (3rd plot), in which the sine is plotted along the real axis since software cannot draw it along the imaginary axis? Second, I have trouble reconciling the plotted amplitudes with the amplitudes I see in the math. I did a mathematical and code illustration to explain what I mean.
$$\sin(2 \pi f_0 t) = \frac{e^{j 2 \pi f_0 t} - e^{-j 2 \pi f_0 t}}{2j}$$ $$X1(f) = \int_{-\infty}^{\infty} \sin(2 \pi f_0 t) e^{-j2 \pi f t} \,dt = \int_{-\infty}^{\infty} \frac{1}{2j} [e^{j 2 \pi f_0 t} - e^{-j 2 \pi f_0 t}] e^{-j 2 \pi f t}\,dt $$ $$= \frac{1}{2j} \int_{-\infty}^{\infty} e^{-j 2 \pi (f-f_0) t} \,dt - \frac{1}{2j} \int_{-\infty}^{\infty} e^{-j 2 \pi (f+f_0) t} \,dt$$ $$ =\frac{1}{2j} [(x_1 - j y_1) - (x_2 - j y_2)] = \frac{1}{2} [(y_2 - y_1) + j(x_2 - x_1)]$$ $$|X1(f)| = \frac{1}{2} \sqrt{(y_2-y_1)^2 + (x_2-x_1)^2} $$ while for cosine, $$\cos(2 \pi f_0 t) = \frac{e^{j 2 \pi f_0 t} + e^{-j 2 \pi f_0 t}}{2}$$ $$X2(f) = \int_{-\infty}^{\infty} \cos(2 \pi f_0 t)\, e^{-j 2 \pi f t} \,dt = \frac{1}{2} \int_{-\infty}^{\infty} e^{-j 2 \pi (f-f_0) t} \,dt + \frac{1}{2} \int_{-\infty}^{\infty} e^{-j 2 \pi (f+f_0) t} \,dt $$ $$=\frac{1}{2} [(x_1 - j y_1) + (x_2 - j y_2)] = \frac{1}{2} [(x_1 + x_2) - j(y_1 + y_2)]$$ $$|X2(f)| = \frac{1}{2} \sqrt{(x_1 + x_2)^2 + (y_1 + y_2)^2}$$ From that I would assume that the amplitude of the cosine plots is greater than the amplitude of the sine plots: \begin{align} I\{X1(f)\} < R\{X2(f)\},\: R\{X1(f)\} < I\{X2(f)\}, \: |X1(f)| < |X2(f)| \end{align} Instead, the results in the code were exactly the inverse:

f = 5;
fs = 1000;
t = 1 : 1/fs : 2;
xreal = cos(2*pi*f*t);
xreal2 = sin(2*pi*f*t);
y1 = fft(xreal, 1024);
realT_realF = real(y1);
realT_compF = imag(y1);
y12 = fft(xreal2, 1024);
realT_realF2 = real(y12);
realT_compF2 = imag(y12);
f1 = 1 : 1/1024 : 2;
figure(1);
subplot(221)
plot(f1(1:end-1), realT_realF);
title('cos - Real Frequency');
idx1 = find(realT_realF == max(realT_realF), 1) + 100;
idx2 = max(realT_realF) - 0.2*max(realT_realF);
text(f1(idx1), idx2, string(max(realT_realF)))
subplot(223)
plot(f1(1:end-1), realT_compF);
title('cos - Complex Frequency');
subplot(222)
plot(f1(1:end-1), realT_realF2);
title('sin - Real Frequency');
subplot(224)
plot(f1(1:end-1), realT_compF2);
title('sin - Complex Frequency');
figure(2)
subplot(211)
plot(f1(1:end-1), abs(y1));
title('cos - |X(f)|');
subplot(212)
plot(f1(1:end-1), abs(y12));
title('sin - |X(f)|');

[Image: $|X(f)|$ for both sine and cosine signals]

Answer: You've run into "spectral leakage". There are four types of Fourier transform: the FFT is an implementation of the "Discrete Fourier Transform", NOT the regular continuous-time Fourier transform. These are fairly different things. For a quick fix: make sure that your sine/cosine waves have an integer number of periods inside your FFT window. You can easily achieve this by setting your sample rate to 1024 (instead of 1000), which will give a frequency resolution of exactly 1 Hz. You may also want to choose a higher frequency (maybe 100 Hz), which makes it easier to see on the graphs. Here is how this looks with $f = 100$ Hz, $f_s = 1024$ Hz and $nfft = 1024$. Note the y-axis scale. The "fuzzy" graphs have a scale of $10^{-12}$, i.e. it's just numerical noise.
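The integer-periods fix can be checked numerically. A quick sketch with NumPy rather than the original MATLAB; the frequencies 100 Hz and 100.5 Hz are my choices for illustration:

```python
import numpy as np

fs = nfft = 1024                  # 1 Hz bin spacing, as the answer suggests
t = np.arange(nfft) / fs

# 100 full cycles inside the window: all energy lands in exactly two bins
clean = np.abs(np.fft.fft(np.cos(2 * np.pi * 100.0 * t)))

# 100.5 cycles: the window cuts a period in half and energy leaks into many bins
leaky = np.abs(np.fft.fft(np.cos(2 * np.pi * 100.5 * t)))
```

Counting bins with magnitude above a small threshold shows the clean spectrum concentrated in two bins while the half-cycle-offset spectrum smears across many.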
{ "domain": "dsp.stackexchange", "id": 10616, "tags": "fft" }
Why, when and where is Gauss's law applicable?
Question: Why is it said that Gauss's Law is mainly applicable for symmetric surfaces/bodies? Why not for asymmetric surfaces? I want a logical explanation! BTW my teacher said that Gauss's law is applicable for any surface/body but in the case where symmetry does not exist, the calculation becomes a bit tedious. I did not get what he meant by that. Can someone please help me get a clear cut concept about when and where Gauss's law can be applied? Please note I'm not asking for the rigorous proof of Gauss's law. Answer: The answer to your question involves the fact that one does not usually know a priori the electric field $\textbf{E}$ (or, for that matter, its direction) of a charge distribution $\rho$. Gauss's law, in integral form, relates the flux of the electric field through some closed surface $S$ to the charge enclosed within the volume bounded by $S$. Precisely, it is the statement that given an electric field $\textbf{E}(\textbf{r})$ defined over space, the flux integral over any closed surface $S$ will always yield $$ \oint_S \textbf{E} \cdot d\textbf{a} = \frac{Q_\text{enc}}{\epsilon_0}.$$ Normally surface integrals over vector fields involve parametrizing the surface (i.e. describing a two-dimensional surface by two parameters $u,v$ related to the Euclidean coordinates $x,y,z$). Even then, one has to additionally compute the product $$\textbf{E} \cdot d\textbf{a} = \textbf{E} \cdot \hat{n}da,$$ where $\hat{n}$ is the unit normal to the surface and can be calculated from the parametrization. This quantity can assume different values everywhere along the surface. So far I've only talked about the difficulties in computing the flux integral of a vector field over a general surface. When using Gauss's law, we have the added problem of not knowing the electric field (this is the quantity we're trying to find!). We now are tasked with computing an integral over an undefined function! This is where symmetry comes in and saves the troubled physicist. 
Essentially, symmetric charge distributions allow one to choose a convenient surface (which preserves the symmetry) to remove $\textbf{E}$ from the integral. For example, consider a uniformly charged spherical volume of radius $R$ (i.e. a ball). Due to symmetry, one can argue that the electric field generated from this distribution must be radially symmetric. If we take our surface $S$ to be a sphere of radius $r$, then we find that the normal to the sphere and the direction of the electric field coincide, so $$\oint_S \textbf{E} \cdot d\textbf{a} = |\textbf{E}|\oint_S da = 4 \pi r^2 |\textbf{E}|,$$ since we are now simply computing the surface area of a sphere. We can now simply divide to find the answer: $$ |\textbf{E}| = \frac{1}{4 \pi \epsilon_0} \frac{Q_\text{enc}}{r^2}.$$ To summarize, Gauss's (integral) law relates the flux integral of the electric field to the charge contained within a surface. Because we do not know the electric field, Gauss's law is only useful when we can remove the electric field from within the integral, which happens when the charge distribution displays certain spatial symmetries (spherical, cylindrical, planar).
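The integral statement itself holds even without symmetry, which is easy to sanity-check numerically: put a point charge anywhere inside a sphere and the computed flux still comes out to $Q_\text{enc}/\epsilon_0$, even though $\textbf{E} \cdot \hat{n}$ now varies over the surface. A sketch (the units, charge position, and grid resolution are my choices):

```python
import numpy as np

eps0 = 8.854e-12
Q = 1e-9
rq = np.array([0.0, 0.0, 0.3])   # point charge placed off-centre inside the unit sphere

# Midpoint grid over the sphere r = 1 in spherical angles
nth, nph = 400, 800
th = (np.arange(nth) + 0.5) * np.pi / nth
ph = (np.arange(nph) + 0.5) * 2 * np.pi / nph
TH, PH = np.meshgrid(th, ph, indexing='ij')
n = np.stack([np.sin(TH) * np.cos(PH),
              np.sin(TH) * np.sin(PH),
              np.cos(TH)], axis=-1)          # outward unit normal = position on sphere

d = n - rq                                    # vector from charge to surface point
E = Q / (4 * np.pi * eps0) * d / np.linalg.norm(d, axis=-1, keepdims=True)**3
dA = np.sin(TH) * (np.pi / nth) * (2 * np.pi / nph)  # area element on the unit sphere

flux = np.sum(np.sum(E * n, axis=-1) * dA)    # numerically approx. Q / eps0
```

Moving `rq` around (as long as it stays inside the sphere) leaves the flux unchanged, which is exactly Gauss's law.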
{ "domain": "physics.stackexchange", "id": 23286, "tags": "electrostatics, symmetry, gauss-law" }
Generalizations of planar graphs that include hypercubes with large side length in $R^d$
Question: A lot of people have asked about generalizations of planar graphs on other forums. Some topics include: https://mathoverflow.net/questions/7650/generalizations-of-planar-graphs https://math.stackexchange.com/questions/22714/higher-dimensional-analog-to-planar-graphs Many have talked about minor-closed family generalizations of planar graphs, but they cannot possibly include even 3D n by n by n grids. If they did, then n by n by n grids would have to have balanced separators of size $n^{3/2}$, but in reality all of their balanced separators have size at least $n^2$. Qiaochu's definition focuses more on the direction that I am interested in. Ideally, I would like to look at graphs that are 1-skeleta of $(k-1)$-dimensional cell complexes that can be embedded in $\mathbb{R}^k$ injectively (which may be the same thing that Qiaochu is looking for). These are not minor closed for $k\ge 3$, but might be face-contraction closed. Have these graphs been studied algorithmically? For example, can one generalize the planar separator theorem to these graphs? Or is there something particularly nasty about this definition that I am overlooking that prevents these objects from being algorithmically interesting? Answer: You may want to look at nowhere dense graphs. http://www.sciencedirect.com/science/article/pii/S0195669811000151 One of the reasons why minor-closedness is natural is the following. We typically want to work with families of graphs rather than specific graphs. And we want to solve problems with arbitrary weights/capacities on edges/nodes. Suppose we want to solve the shortest path problem in a family of graphs. Then, if we allow for zero length and infinite lengths then basically we are allowing minor operations on the family. In some settings it makes sense to work with unweighted graphs where positive results can be obtained for larger families of graphs that are not necessarily minor-closed.
{ "domain": "cstheory.stackexchange", "id": 2608, "tags": "graph-theory, planar-graphs, topology, algebraic-topology" }
Why is the opening angle of synchrotron radiation less than $1/\gamma$?
Question: I am currently studying free-electron lasers, which accelerate electrons and use undulators to create synchrotron radiation. In a variety of graphics and diagrams I see an opening angle of $\pm 1/\gamma$ for the radiation. Where does this value come from, and is there an intuitive explanation for it? Answer: This kind of thing always occurs when you have radiation from a moving object. In the rest frame of the object, the radiation comes out reasonably isotropically. But when you add the velocity of the object, and the object is moving at nearly the speed of the radiation itself, almost all the radiation comes out in the forward direction. This is the "headlight effect" (or "Lorentz focusing" or "relativistic beaming", as pointed out in the comments), and it's common in discussions of relativistic radiation, though it also occurs for nonrelativistic waves to a lesser degree. Explicitly, consider some radiation that comes out with velocity $$\mathbf{v} = (0, c)$$ in the rest frame of the radiator. If the radiator moves with speed $v \hat{\mathbf{x}}$ in the lab frame, then the velocity of that radiation in the lab frame is $$\mathbf{v}_{\text{lab}} = \left(v, \sqrt{c^2 - v^2} \right)$$ by relativistic velocity addition. If $v$ is near $c$, this is almost perfectly directed along $\hat{\mathbf{x}}$. Specifically, the angle to the $\hat{\mathbf{x}}$ axis is about $1/\gamma$. More generally, for most initial directions $\mathbf{v}$, the velocity in the lab frame is going to be within that angle or around it.
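The velocity-addition argument in the answer is easy to verify numerically: transform a ray emitted perpendicular to the motion and compare the lab-frame angle with $1/\gamma$. A sketch in units where $c = 1$; the value $\beta = 0.999$ is an arbitrary choice:

```python
import math

def lab_angle(beta):
    # Ray emitted along y in the rest frame has lab velocity (beta, sqrt(1 - beta^2));
    # the angle to the x axis is then atan2 of the components.
    return math.atan2(math.sqrt(1 - beta**2), beta)

beta = 0.999
gamma = 1.0 / math.sqrt(1 - beta**2)
angle = lab_angle(beta)
# As beta -> 1, angle approaches 1/gamma
```

For $\beta = 0.999$ the angle and $1/\gamma$ agree to well under a percent.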
{ "domain": "physics.stackexchange", "id": 62683, "tags": "accelerator-physics, synchrotron-radiation, free-electron-lasers" }
What problems are believed to have an efficient algorithm?
Question: Just out of curiosity, what problems are believed to have an efficient algorithm, yet no such algorithm has been found for them? This question came to my mind after reading about the PRIMES problem. I hope this question is appropriate for this forum. Thanks! Answer: I'll cover problems that are easy to solve (i.e. in $P$) and problems whose solutions are easy to verify (i.e. in $NP$), and some problems that are probably not, and try to explain why people think one way or the other. Under standard complexity-theoretic assumptions, $P=BPP$, meaning that most randomized algorithms can be "derandomized" (this family of conjectures is called the derandomization hypotheses). You read about a specific example, PRIMES, and it is expected that that is an instance of a more general phenomenon. So for example, polynomial identity testing probably has a polynomial-time algorithm. For similar reasons, it is conjectured that $NP=MA=AM$ (see the Complexity Zoo for definitions of these classes). One language in $AM$ is graph non-isomorphism: the language of all pairs of graphs that are not isomorphic. Clearly its complement is in $NP$ (because you can check whether a purported permutation is in fact an isomorphism), so surprisingly, the fact that two graphs are not isomorphic is something that is expected to be easy to verify given some proof. Yet (afaik) we do not know which $BPP$ language we have to derandomize to obtain this result. If $NP=MA$ or $P=RP$, then it is easy to verify that an arithmetic circuit is not minimal. These sets, $BPP$ and $AM$, are (afaik) the "largest" classes of problems that are expected to collapse to $P$ and $NP$ respectively, meaning they are expected to be easy to solve, respectively, to verify. So we do not know how to verify non-isomorphism yet, much less decide it (as in, solve it in $P$). Is it hard? Is it easy?
Interestingly, if graph isomorphism is so hard that it's NP-complete, then a widely believed complexity-theoretic hypothesis (that "the polynomial hierarchy is infinite") fails, so graph isomorphism is probably not NP-complete. We do not know of any hypothesis that fails under the assumption that graph isomorphism is easy, i.e. in $P$ or in $BPP$. The same goes for factoring: probably not NP-complete, not known to be easy.* Problems in this Wikipedia list of NP-intermediate problems are not expected to be in $P$. Linear programming has a polynomial algorithm if you only count the number of arithmetic operations ($+,-,\times$ etc.). However, those arithmetic operations may involve numbers that are more than polynomially long, so the algorithm is called weakly polynomial. It is expected that there is an algorithm that doesn't suffer from this, called a strongly polynomial algorithm. For example, Bárász and Vempala [1] recently presented a plausible-looking candidate for such an algorithm. *Scott Aaronson once quipped that factoring might as well be in $P$. It would collapse the world economy, sure, but it wouldn't collapse the polynomial hierarchy, so it wouldn't be that impressive. [1] Bárász, Mihály, and Santosh Vempala. "A new approach to strongly polynomial linear programming." (2010).
{ "domain": "cs.stackexchange", "id": 9413, "tags": "algorithms, efficiency, research" }
Vectors of polarizations from vector boson field solution
Question: Let's have the solution for the vector boson Lagrangian in the form of a 4-vector field: $$ A_{\mu } (x) = \int \sum_{n = 1}^{3} e^{n}_{\mu}(\mathbf p) \left( a_{n}(\mathbf {p})e^{-ipx} + b_{n}^{+} (\mathbf p )e^{ipx}\right)\frac{d^{3}\mathbf {p }}{\sqrt{(2 \pi )^{3}2 \epsilon_{\mathbf p}} }, $$ $$A^{+}_{\mu } (x) = \int \sum_{n = 1}^{3}e^{n}_{\mu}(\mathbf p) \left( a^{+}_{n}(\mathbf {p})e^{ipx} + b_{n} (\mathbf p )e^{-ipx}\right)\frac{d^{3}\mathbf {p }}{\sqrt{(2 \pi )^{3}2 \epsilon_{\mathbf p}} }, $$ where $e^{n}_{\mu}$ are the components of 3 4-vectors, which are called polarization vectors. There are some properties of these vectors: $$ e_{\mu}^{n}e_{\mu}^{l} = -\delta^{nl}, \quad \partial^{\mu}e^{n}_{\mu} = 0, \quad e_{\mu}^{n}e_{\nu}^{n} = -\left(\delta_{\mu \nu} - \frac{\partial_{\mu}\partial_{\nu}}{m^{2}}\right). $$ The first two are obvious, but I have a question about the third. It is equivalent to the transverse projection operator $(\partial^{\perp })^{\mu \nu}$ relative to the $\partial_{\mu}$ space. So it realizes the Lorentz gauge in the form $$ \partial^{\mu}A^{\perp}_{\mu} = 0, \quad A^{\perp}_{\mu} = \left(\delta_{\mu \nu} - \frac{\partial_{\mu}\partial_{\nu}}{m^{2}}\right)A^{\nu}, $$ which is needed for decreasing the number of components of $A_{\mu}$, as the vector form of the representation $\left(\frac{1}{2}, \frac{1}{2} \right)$ of the Lorentz group, by one (according to the number of components of a spin-1 field). I don't understand how to interpret this property. Can it be interpreted as a matrix of dot products of the 4 3-vectors $e_{\mu}$, which makes one component of $A_{\mu}$ dependent on the other three? Answer: In a sum over polarizations, like $\sum_\lambda~e_{\mu}^{\lambda}(k)~e_{\nu}^{\lambda}(k)$, there is a fundamental difference between considering all the polarizations and only the physical polarizations. If you take all polarizations, the sum equals $-g_{\mu\nu}$, and it is in fact a normalization of the $e_{\mu}^{\lambda}(k)$.
This sum is non-physical. $$\sum_{all ~polarizations~~ \lambda}~e_{\mu}^{\lambda}(k)~e_{\nu}^{\lambda}(k)=-g_{\mu\nu} \tag{1}$$ If you consider only the physical polarizations, this sum is physical, and you will get the pole of the propagator, which is a physical quantity too: $$\sum_{physical ~polarizations~~ \lambda}~e_{\mu}^{\lambda}(k)~e_{\nu}^{\lambda}(k)=-\left(g_{\mu\nu} - \frac{k_\mu k_\nu}{m^2}\right)\tag{2}$$ The propagator here is: $$D_{\mu\nu}(k) = \frac {-\left(g_{\mu\nu} - \frac{k_\mu k_\nu}{m^2}\right)}{k^2-m^2} \tag{3}$$
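The physical polarization sum (2) can be checked numerically for a concrete on-shell momentum. A sketch with NumPy, taking the momentum along $z$, indices written as upper components throughout, and metric diag(1, −1, −1, −1); the values $m = 1$, $p = 3$ are arbitrary:

```python
import numpy as np

m, p = 1.0, 3.0
E = np.hypot(m, p)                      # on-shell energy: E^2 = m^2 + p^2
k = np.array([E, 0.0, 0.0, p])          # momentum along z

# Two transverse polarizations and one longitudinal polarization
eps = [np.array([0.0, 1.0, 0.0, 0.0]),
       np.array([0.0, 0.0, 1.0, 0.0]),
       np.array([p / m, 0.0, 0.0, E / m])]

g = np.diag([1.0, -1.0, -1.0, -1.0])
S = sum(np.outer(e, e) for e in eps)          # physical polarization sum
target = -(g - np.outer(k, k) / m**2)         # right-hand side of (2)
```

`S` and `target` agree entry by entry, and each polarization vector is transverse to $k$ in the Minkowski sense ($k_\mu e^\mu = 0$).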
{ "domain": "physics.stackexchange", "id": 9515, "tags": "quantum-field-theory, vector-fields, bosons" }
How to do calculation in relativity of simultaneity
Question: I have great trouble understanding simultaneity in special relativity. Let me illustrate it with a concrete example. Assume there is a train whose two end points are $A$ and $B$, and the length of the train is $x$. The train moves at speed $v$ in the direction from $A$ to $B$. A ground observer watching the train notices that two lightning bolts strike simultaneously at $A$ and $B$ when the middle part of the train, $O'$, passes right in front of him. In other words, the ground observer is located at the middle part of the train ($O'$) when the lightning strikes simultaneously at $A$ and $B$. Now there is another, moving observer sitting inside the train, right in the middle of it ($O$, equidistant from $A$ and $B$). Does this moving observer think that the lightning strikes happen at the same time? If not, how much time passes between his observing the lightning at $A$ and his observing the lightning at $B$?
In our frame at $t = 0$ the middle of the train is at $(0, 0)$, so the front of the train is at $(0, d/2)$ and the rear of the train is at $(0, -d/2)$ (I've called the length of the train $d$ to avoid confusion with the $x$ coordinate): To find the position of the front of the train in the primed frame we just feed $t = 0$ and $x = d/2$ into the Lorentz transformations: $$\begin{align} t' &= \gamma (- \frac{vd}{2c^2}) \\ x' &= \gamma \frac{d}{2} \end{align}$$ So in the moving frame the lightning strike at the front of the train is at $(-\gamma\tfrac{vd}{2c^2}, \gamma\tfrac{d}{2})$. I won't go through the details, but the same calculation puts the lightning strike at the rear of the train at $(\gamma\tfrac{vd}{2c^2}, -\gamma\tfrac{d}{2})$. So the answer is that the observer on the train sees the lightning strike the front of the train at $t' = -\gamma\tfrac{vd}{2c^2}$ and the rear of the train at $t' = \gamma\tfrac{vd}{2c^2}$. The time between the lightning strikes is $\gamma\tfrac{vd}{c^2}$. Quick footnote: rereading my answer, it's just occurred to me that I've called the length of the train $d$ in the rest frame of the track. The length of the train for the observers on it will be greater - you can use the Lorentz transformations to calculate this too. Length of the train: Re Graviton's comment, the easiest way to calculate the length of the train in the train's rest frame is to work backwards. Let's call the length of the train in its rest frame $\ell$, and we'll choose our zero time so that the rear of the train is at $(0, 0)$ and the front is at $(0, \ell)$. To transform from the train frame to the track frame we just use the Lorentz transformations as before, but in this case the velocity is $-v$, because if the train is moving at $v$ wrt the track then the track is moving at $-v$ wrt the train. When we do the transformation, $(0, 0)$ just goes to $(0, 0)$, so we only need to work out where $(0, \ell)$ is in the track frame.
Plugging in $t' = 0$ and $x' = \ell$ we find the point in the track frame is: $$\begin{align} t &= \gamma (t' - \frac{(-v)x'}{c^2}) = \gamma\frac{v\ell}{c^2} \\ x &= \gamma (x' - (-v)t') = \gamma\ell \end{align}$$ So in the track frame the front of the train is at $(\gamma\tfrac{v\ell}{c^2}, \gamma\ell)$. But we don't want to know where the front of the train is at time $t = \gamma\tfrac{v\ell}{c^2}$, we want to know where it was at $t = 0$. So we take our value for $x$ at time $\gamma\frac{v\ell}{c^2}$ and subtract off the distance moved in time $\gamma\frac{v\ell}{c^2}$, which is just the time multiplied by the velocity. This gives us the value for $x_0$: $$ x_0 = \gamma\ell - \gamma\frac{v\ell}{c^2} v $$ The rest is just algebra. We write the expression out in full to get: $$\begin{align} x_0 &= \ell \left( \frac{1 - \frac{v^2}{c^2}} {\sqrt{1 - \frac{v^2}{c^2}}} \right) \\ &= \ell \sqrt{1 - \frac{v^2}{c^2}} \\ &= \frac{\ell}{\gamma} \end{align}$$ And since the rear of the train is at $x = 0$ at time zero and the front of the train is at $x = x_0$ at time zero, the length of the train is just $x_0$, so: $$ d = \frac{\ell}{\gamma} $$ At all speeds $> 0$ the value of $\gamma > 1$, so the length of the train as observed from the track is less than the length of the train in its rest frame, i.e. the train is shortened. This is the Lorentz contraction.
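The bookkeeping above can be wrapped in a small helper to check both results: the strike times $\mp\gamma vd/2c^2$ and the time gap $\gamma vd/c^2$. A sketch in units where $c = 1$; the numbers $v = 0.6$, $d = 1$ are arbitrary:

```python
import math

def lorentz(t, x, v):
    # (t, x) in our frame -> (t', x') in a frame moving at velocity v (units c = 1)
    g = 1.0 / math.sqrt(1.0 - v * v)
    return g * (t - v * x), g * (x - v * t)

v, d = 0.6, 1.0
g = 1.0 / math.sqrt(1.0 - v * v)

t_front, x_front = lorentz(0.0, d / 2, v)    # lightning strike at the front
t_rear, x_rear = lorentz(0.0, -d / 2, v)     # lightning strike at the rear
# t_front = -gamma*v*d/2, t_rear = +gamma*v*d/2, so the gap is gamma*v*d
```

The front strike comes out earlier than the rear strike for the observer on the train, exactly as derived.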
{ "domain": "physics.stackexchange", "id": 57921, "tags": "special-relativity" }
rosjava_core in Fuerte
Question: I am trying to install rosjava_core in Fuerte. I did:

rosws set --git rosjava_core 'https://github.com/rosjava/rosjava_core.git'
rosws update

But there seems to be no manifest.xml present anymore, so I can't roscd into the folder. I also tried rospack profile but that did not work. I also got this error, which I fixed (?) by renaming the .rosinstall file:

ERROR in config: Ambiguous workspace: ROS_WORKSPACE=/home/lennart/fuerte_workspace, /home/lennart/fuerte_workspace/rosjava_core/.rosinstall

Originally posted by davinci on ROS Answers with karma: 2573 on 2013-06-21 Post score: 1 Answer: The problem is that rosjava was recently catkinized for use with groovy and hydro, and there is no fuerte branch. I've fixed that by forking rosjava and creating a branch just before the catkinization. If you want to use my fork, you can do this:

rosws set --git rosjava_core 'https://github.com/uos/rosjava_core.git' --version=fuerte-devel

Alternatively, you can try getting the rosjava developers to create a fuerte-devel branch so you can do this with the official repo instead of my fork. Originally posted by Martin Günther with karma: 11816 on 2013-06-23 This answer was ACCEPTED on the original site Post score: 2 Original comments Comment by davinci on 2013-06-24: Thanks! Also submitted an issue to the developers.
{ "domain": "robotics.stackexchange", "id": 14650, "tags": "ros, ros-fuerte, rosjava-core, rosjava, java" }
Is it possible to see Fraunhofer lines with amateur equipment?
Question: Is it possible to see Fraunhofer lines with amateur equipment? Would it be possible (with reasonable effort) to identify elements or is this hard? Answer: Certainly. Spectroscopes are available to fit most telescopes, or you could make your own using a diffraction grating.
{ "domain": "physics.stackexchange", "id": 2987, "tags": "spectroscopy" }
Calculating equivalent resistance
Question: In the given circuit, find the equivalent resistance between points A and B. I solved this question in two different ways, only one of which gave the correct answer. The correct method is the most popular one: assume the potential at the given points and then calculate the others using the fact (or assumption) that in the absence of resistance, the potential difference is zero. This gives the correct answer: $R_{eq} = \frac{R}{3}$. I was confused about why the other method did not work. Current follows the least-resistance path when available. That is why you should not connect a conducting wire (R = 0) in parallel with a resistor, as it might lead to a short circuit. Using this, I can conclude that the current can flow along two possible paths: (1) from $V_A$ (leftmost) to $V_A$ (second from the right) to $V_B$ (rightmost); (2) from $V_A$ (leftmost) to $V_B$ (second from the left) to $V_B$ (rightmost). Each of the paths has resistance R, so $R_{eq} = \frac{R}{2}$. But this is incorrect. Where is the mistake? Update Current follows the least-resistance path when available. The proof for this lies in the current divider rule: $$I_1 = I\frac{R_2}{R_1+R_2}$$ Putting $R_1 = 0$, we find that the entire current flows through $R_1$. Answer: I am used to smoothing out badly shaped circuits by pulling the wires: Then I get a better circuit by cutting the extra wires: So there are three resistors in parallel, indicating that the current flows through three possible paths.
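The answer's conclusion (three equal resistors in parallel) is one line of arithmetic; a sketch with a generic helper function of my own naming:

```python
def parallel(*resistances):
    # 1/R_eq = sum of 1/R_i for resistors connected in parallel
    return 1.0 / sum(1.0 / r for r in resistances)

R = 6.0
R_eq = parallel(R, R, R)   # three equal resistors in parallel -> R/3
```

Three equal resistors of value R in parallel give R/3, matching $R_{eq} = \frac{R}{3}$ from the potential method.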
{ "domain": "physics.stackexchange", "id": 79915, "tags": "homework-and-exercises, electric-circuits, electric-current, electrical-resistance, short-circuits" }
What is the nature of electric field? is it quantized? is it a wave?
Question: What I seek here is to understand whether the electric field in its pure form, as between an electron and a proton, is uniform, or whether it has some kind of wave/particle nature (or both): does it have a frequency or wavelength, or is it quantized? I do not want any explanations of the effects of the electric field; I want to understand the field itself. Please tell me something I do not already know from my engineering education. Answer: The electric field itself is not accessible by experiments. We can only observe e.g. trajectories of charged particles, etc., to find the forces they are subjected to. It all comes down to the electric field just being a theoretical concept used to describe the phenomena covered by electrodynamics. Thus, we cannot make a definite statement on the nature of the electric field. It really depends on the theory you are considering. In classical electrodynamics as developed in the 19th century, the electric and magnetic fields are vector fields permeating all of space, i.e. for each position in space there is a corresponding electric and magnetic field vector, respectively. Of course, one can have a completely equivalent description in momentum (or Fourier) space, such that one assigns an electric and magnetic field amplitude vector to each wave vector $\vec k$. The description in momentum space is more suited to describing the propagation of waves, e.g. dipole radiation, while the position space representation is used predominantly in electro-/magnetostatics. In the early 20th century, special relativity was developed and it became apparent that the electric and magnetic field are basically the same phenomenon and mix when changing the frame of reference. E.g. person A travels on a train and carries an electric charge. She will only observe (the effects of) an electric field. At the same time, person B stands on the platform while the train passes.
To her, the moving charge person A carries is basically a current flowing, i.e. she will observe (the effects of) a magnetic field. The Maxwell equations (describing classical electrodynamics), which are usually stated in terms of electric and magnetic fields, can be recast in a ("covariant") form which is independent of the frame of reference. In this version, an electromagnetic field tensor appears in the equations which combines electric and magnetic field. Thus, in the relativistic description, a 4x4-component tensor is assigned to each point in four-dimensional space-time. Still, the theory is the same as the original version of classical electrodynamics, but the electromagnetic field tensor is a very different object. In the same year Einstein published his works on special relativity, he also published an explanation of the photoelectric effect which pointed towards light (EM radiation) being constituted by photons, i.e. quantized massless particles carrying energy and momentum. There was also some other experimental evidence connected to black-body radiation. Finally, quantum mechanics was developed, which has since been developed even further into quantum field theory (specifically quantum electrodynamics, QED), where we again have a description in terms of fields, i.e. we assign an object to each point in space (position or momentum space). Within QED, these objects are operators for particle creation and annihilation. If you'd like an introduction to QED, I suggest R. P. Feynman's book "QED: The Strange Theory of Light and Matter", which is written for the general public. This overview of the evolution of our understanding of "light" is by no means complete. My point is that we cannot talk about the nature of the electric field. It's just not real; it's a theoretical construct for thinking in terms of classical, non-relativistic electrodynamics.
As physics refined its understanding and new theories emerged, the notion of the electric field became overruled by other concepts which will in turn be overruled as science progresses further.
{ "domain": "physics.stackexchange", "id": 14422, "tags": "electric-fields, field-theory" }
Simulation at Baseband or Passband
Question: A naive question, I guess. Would it make a difference in the results whether a system is simulated at baseband or at passband? If I have a BER vs. SNR curve that has been computed at baseband, will it change if I perform the simulation at passband with a carrier frequency of 2.4 GHz? AFAIK, it should not make a difference, but I would like to know if what I am thinking is correct. Another question: if I have two signals from two transmitters and add these signals at the receiver, does it make a difference whether these two signals are at passband or baseband? Do the results vary? Since passband simulation consumes a lot of time, would it be correct to simulate the behavior at baseband and assume that the result would be almost the same at passband? Answer: The only thing I could see, with the information you gave, is that the way you get from passband to baseband and vice versa could affect your performance. It should not, but it could. Let's say you have to downsample to get to baseband, and upsample to do the contrary, and you use filters to do so. If your filters have variations of 1 dB in their passband and a cutoff frequency that cuts into some of your baseband signal, then your performance could be degraded. In an ideal situation it should not, but some considerations (restrictions on the number of coefficients of your filters, for example) may imply some performance degradation.
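A minimal noise-free sanity check of the baseband/passband equivalence: BPSK decisions made directly on the baseband symbols match those made after upconversion to a carrier and coherent demodulation. A sketch with NumPy; the rates (fs = 96 kHz, fc = 12 kHz, 8 samples per symbol) are my choices so that each symbol spans exactly one carrier cycle:

```python
import numpy as np

rng = np.random.default_rng(0)
fs, fc, sps = 96_000, 12_000, 8          # one full carrier cycle per symbol
bits = rng.integers(0, 2, 200)
symbols = 2.0 * bits - 1.0               # BPSK mapping: 0 -> -1, 1 -> +1

# Baseband decision: just the sign of each symbol
dec_bb = symbols > 0

# Passband: rectangular pulses on a cosine carrier, then coherent demodulation
t = np.arange(len(symbols) * sps) / fs
passband = np.repeat(symbols, sps) * np.cos(2 * np.pi * fc * t)
mixed = passband * np.cos(2 * np.pi * fc * t)      # multiply by local carrier
dec_pb = mixed.reshape(-1, sps).sum(axis=1) > 0    # integrate over each symbol
```

With ideal synchronization and no filtering mismatch the two decision vectors are identical, which is the intuition behind running BER simulations at baseband.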
{ "domain": "dsp.stackexchange", "id": 2793, "tags": "signal-analysis, dsp-core" }
Big Bang Escape Velocity
Question: When our entire section of the universe was in a single hot dark dense state, right before our big bang, what was the escape velocity? Answer: It is worth noting that "escape velocity" is a term used for objects within gravitational fields, whereby one object opposes the gravitational pull of another. The Big Bang singularity represents the whole universe in an extremely dense form, rather than a localized gravitational field consisting of distinct objects. Even if, for argument's sake, we consider the entire universe as one massive body (which is not wholly accurate), calculating its escape speed would be meaningless, since there is no external gravitational field to escape from. The universe amounts to nothing besides itself and thus has nothing else against which an escape velocity could be measured. In the context of the Big Bang, moreover, the expansion does not need to achieve escape velocity in the usual sense. Instead, it occurs because space itself undergoes a rapid expansion, as described by cosmic inflation theory and later by general relativity's description of the accelerated expansion of the universe. The existence of this expansion therefore does not imply that the universe as a whole has reached some specific speed relative to an external gravitational field; it simply means that the fabric of space itself changes over time, carrying all matter and energy along with it.
{ "domain": "physics.stackexchange", "id": 100496, "tags": "cosmology, black-holes, space-expansion, big-bang, escape-velocity" }
What are the technological advancements that made it possible for modern large telescopes to work with alt-az mounts instead of equatorial mounts?
Question: The video Earth's Rotation Visualized in a Timelapse of the Milky Way Galaxy - 4K (linked below) and discussion below this answer to Why does a timelapse video of a stationary Milky Way make the horizon appear to move from horizontal to vertical? about field rotators has got me thinking. In addition to the benefit of (essentially) single-axis constant-speed tracking for distant celestial objects, an equatorial mount also rotates the telescope tube about its axis such that there is no rotation of the image on the focal plane during long exposures. This was probably essential for hours-long exposures of single emulsion plates. Asked separately: Did astronomers ever use photographic plate rotation along with alt-az mounts? But for the very largest and heaviest reflecting telescopes, equatorial mounts are massive and unwieldy and require huge counterbalances compared to alt-az mounts, which can have the azimuth bearing right on the ground and the altitude bearing straight through the telescope's center of mass. Question: What are the technological advances that made it possible for modern large telescopes to work with alt-az mounts instead of equatorial mounts? I'm expecting most of the answer to be related to electronic advancements of one kind or another, but there may be mechanical and optical and even electro-optical (optoelectronic?) advancements as well. Answer: I would say the field derotator was a great advancement. The field of view of an altazimuthally mounted instrument will rotate, just like for the naked eye—for example, the "Lady in the Moon" seems to be tilted left at moonrise, straight at transit, and tilted right at moonset. If one is to take long-exposure images (more than about a minute, for example, on my 10″ ƒ/4.7 Dobsonian), one needs to "derotate" the image so it stays straight throughout the exposure. Derotating an image seems easy in principle, but the exact speed at which the field rotates varies with the azimuth and altitude of the target. 
Some computing power is thus needed “live” to drive the motors at the proper rate depending on where the telescope is pointing.
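That varying rate can be sketched with the standard field-rotation expression; the formula and constants below are textbook values, not taken from the answer above, and the sign convention and azimuth origin (here 0° = north) differ between sources.

```python
import math

OMEGA_EARTH = 360.0 / 86164.1   # sidereal rotation rate, degrees per second

def field_rotation_rate(lat_deg, az_deg, alt_deg):
    """Approximate field rotation rate (deg/s) seen by an alt-az mount:
    omega * cos(latitude) * cos(azimuth) / cos(altitude)."""
    lat, az, alt = map(math.radians, (lat_deg, az_deg, alt_deg))
    return OMEGA_EARTH * math.cos(lat) * math.cos(az) / math.cos(alt)

# The rate blows up as the target approaches the zenith (alt -> 90 deg),
# which is why alt-az telescopes have a small "blind spot" overhead.
print(field_rotation_rate(45, 0, 60))
```

A derotator controller evaluates something like this continuously and drives the rotator motor accordingly, which is the "live computing power" the answer refers to.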
{ "domain": "astronomy.stackexchange", "id": 7292, "tags": "observational-astronomy, telescope, photography, observatory, tracker" }
Raw data acquirable from amateur astrophotography
Question: What raw data can I possibly acquire from an 8" Classical Dobsonian Telescope, and a DSLR? Could anything eye-opening to amateur astronomers be computed or calculated first-hand with such equipment? I'm sure scientists must've considered this equipment "advanced technology" at some point in history not too far back... Could I rediscover or calculate some Laws (like Kepler's laws) or some other things amateur astronomers would be amazed to calculate themselves (like the distance to a planet) using this equipment? Answer: An 8" telescope was state of the art in 1686 (even though a modern amateur instrument is certainly much better than Huygens' lenses were) and a normal DSLR sensor has only slightly better properties than plates that were used in astronomy up to the 1980s, so there isn't much gain there, in terms of instrument performance, over fairly old equipment... it's just MUCH more convenient to use. You can, however, re-live old discoveries with it if you like, and you could also discover new things with relatively modest instruments. That, however, may quickly become a full-time job, irrespective of what instrument you use. If you want to understand what Copernicus and Kepler did, it's probably best to read their books first. They may give you an idea of just how much intellectual and observational effort it took to amass the knowledge that we are teaching in high school today. The real problem in observational astronomy is that a lot of what astronomers do is not just linked to the performance of their instruments, but to the total amount of time that is needed to perform high quality research. If you look at some of the finest amateur astronomy imagery, you will find that the "amateur" has spent months or years waiting for near optimum conditions (alternatively you can move to Hawaii and camp out on the volcano... 
just like the professionals), took dozens if not hundreds of frames and then spent days stacking and processing them with the same tools that the professionals are using. How about comet hunting? Does it sound like fun to be out there every night that seeing permits to get the necessary observation time for a one in a hundred (or is it thousand?) chance for a first discovery? To me it sounds like the "amateur" label is not a good one for many of these folks. Plenty of them are just as driven as professionals; they merely never got a job title called "astronomer", but I am sure they would do great work in a professional environment just as well. Yes, you can do all of that with an 8" telescope... or with a 12" with a cooled astronomy CCD that will be your next purchase (who needs a new car, right?). But would you? Would you spend a couple of years measuring the positions of Mars and Venus at least once a week to prove that Kepler was right, after all? We know that Kepler was right. We also know how long it took him to get the calculations done without a computer. Would you use your computer to calculate the orbital data or would you do it by hand, to be historically "more accurate"? As for the distance to planets... that, I am afraid, is not going to happen again in your lifetime. The next Venus transit will be in 2117; you just missed the opportunity of two lifetimes back in 2012... And then there is the aspect that many professional astronomers actually never lay a hand on an instrument themselves. They are part of collaborations of professional engineers and scientists specialized in the art of instrument building and/or they rely on the operators of the large telescopes and the satellites/probes they get access to once or twice a year (or once a lifetime like the folks who just flew by Pluto!) to make the measurements for them, and then they sit in their offices crunching the data for a year or two, eventually publishing their papers. 
The most ubiquitous substance you will find in any science office is paper. Some folks have stacks of scientific publications of colleagues all around them and all they do all day long is to read them. That, like it or not, is what many of the world's best scientists are doing: they collect clues in other people's work. A lot of that (and the raw data) is now on the internet. Nothing stops you from looking at it until you discover something that nobody else has seen, so far. It's definitely out there. And when you do, all you have to do is to write a science paper, submit it, get it peer reviewed and maybe you will even be published. The rest is rinse and repeat. Or you can do what I do... I grab my \$12 binoculars, I go out on the porch and I look at the Pleiades, the Orion Nebula or Andromeda, the Moon, Venus and Jupiter. Occasionally I lug my \$20 kiddy telescope out there to see Jupiter's moons or Saturn's rings (barely). That is fun, in my books. Driving fifty miles just to get out of the light pollution that surrounds me... that wouldn't be.
{ "domain": "physics.stackexchange", "id": 27562, "tags": "experimental-physics, home-experiment, data, astrophotography" }
First Order Approximation of the Navier-Stokes Equation: Order of Magnitude of the Gradients of First-Order Fields
Question: I am currently working on a project in acoustics and I am studying first and second-order approximations to the Navier-Stokes equation. I have been reading the book 'Theoretical Microfluidics' by Henrik Bruus. You can find the lecture notes corresponding to this book here. (Chapter 13: Acoustics in compressible liquids.) The Navier-Stokes equation reads $\rho \left[\frac{\partial \underline{u}}{\partial t} + (\underline{u}\cdot \nabla) \underline{u} \right]= -\nabla p + \eta \nabla^2 \underline{u} + \beta \eta \nabla (\nabla \cdot \underline{u}) \, .$ Assuming there is no "background flow" and using $p = p(\rho)$, the first-order fields are given as $ \rho = \rho_0 + \rho_1 $ $ p = p_0 + c_a^2 \rho_1$ $ \underline{u} = 0 + \underline{u}_1$ where $\rho_0$ and $p_0$ are constants. When using first-order perturbation theory, all higher-order terms, i.e. products of two first-order terms, are neglected and the nonlinear term $(\underline{u}\cdot \nabla) \underline{u}$ is dropped. The linearized equation then is $\rho_0\frac{\partial \underline{u}_1}{\partial t} = -c_a^2 \nabla \rho_1 + \eta \nabla^2 \underline{u}_1 + \beta \eta \nabla (\nabla \cdot \underline{u}_1) \, .$ My question is, how do we know that the gradient of a first-order field, i.e. $\nabla \underline{u}_1$, is also small, so that we can drop the nonlinear term? Does this directly follow from the fact that $\underline{u}_1$ is small? Thank you for your help. Answer: Sometimes these analyses are made easier if you use a parameter for tracking the orders. This is usually done in perturbation theory, where a variable is expanded in terms of a small parameter $\epsilon$, but the same idea can be applied here. 
Take your equation: $$\rho \left[\frac{\partial \underline{u}}{\partial t} + (\underline{u}\cdot \nabla) \underline{u} \right]= -\nabla p + \eta \nabla^2 \underline{u} + \beta \eta \nabla (\nabla \cdot \underline{u})$$ and the variables expanded up to first-order terms: $$\rho = \rho_0 + \epsilon\rho_1 + O\left(\epsilon^2\right)$$ $$p = p_0 + \epsilon p_1 + O\left(\epsilon^2\right)$$ $$\underline{u} = 0 + \epsilon\underline{u}_1 + O\left(\epsilon^2\right)$$ Here the parameter $\epsilon$ is introduced just so we can track the orders as discussed in the intro; zeroth-order terms are $O(1)$, first-order terms are $O(\epsilon)$, second-order terms are $O(\epsilon^2)$, etc. Substituting we get: $$\left(\rho_{0}+\epsilon\rho_{1}\right)\left[\epsilon\frac{\partial\underline{u_{1}}}{\partial t}+\epsilon^{2}(\underline{u_{1}}\cdot\nabla)\underline{u_{1}}\right]=-\nabla\left(p_{0}+\epsilon p_{1}\right)+\epsilon\eta\nabla^{2}\underline{u_{1}}+\epsilon\beta\eta\nabla(\nabla\cdot\underline{u_{1}})$$ Now let's group terms; $O(1)$: The only term that remains is: $$\nabla p_{0}=0$$ This is clearly satisfied if $p_0$ is some constant background pressure. $O(\epsilon)$: Expanding and multiplying out all the order parameters we get: $$\rho_{0}\frac{\partial\underline{u_{1}}}{\partial t}=-\nabla p_{1}+\eta\nabla^{2}\underline{u_{1}}+\beta\eta\nabla(\nabla\cdot\underline{u_{1}})$$ Clearly we see that the non-linear term is not taken into account because it is second-order and therefore negligible in the equation for first-order terms. Second-order terms are not taken into account because they were not included in the expansion. These could be included but wouldn't change the conclusion.
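The order bookkeeping above can be checked mechanically. Below is a small SymPy sketch of the same expansion, reduced to one spatial dimension for brevity (symbol names are mine; in 1-D both viscous terms collapse to $\partial_x^2 u$):

```python
import sympy as sp

eps, x, t = sp.symbols('epsilon x t')
rho0, p0, ca, eta, beta = sp.symbols('rho_0 p_0 c_a eta beta', positive=True)
rho1 = sp.Function('rho_1')(x, t)
u1 = sp.Function('u_1')(x, t)

# Fields expanded to first order, with epsilon tracking the order.
rho = rho0 + eps * rho1
u = eps * u1
p = p0 + eps * ca**2 * rho1

# 1-D Navier-Stokes: grad(div u) reduces to u_xx in one dimension.
lhs = rho * (sp.diff(u, t) + u * sp.diff(u, x))
rhs = -sp.diff(p, x) + eta * sp.diff(u, x, 2) + beta * eta * sp.diff(u, x, 2)
residual = sp.expand(lhs - rhs)

# The O(epsilon) balance reproduces the linearized equation; the nonlinear
# term (u.grad)u never shows up because it first enters at O(epsilon^2).
first_order = residual.coeff(eps, 1)
second_order = residual.coeff(eps, 2)
print(first_order)
```

Setting `first_order` to zero is exactly the linearized equation, and inspecting `second_order` shows where $(\underline{u}\cdot\nabla)\underline{u}$ reappears.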
{ "domain": "physics.stackexchange", "id": 55881, "tags": "acoustics, perturbation-theory, navier-stokes" }
In an infinite universe containing only a single object, is the object at rest?
Question: Imagine there exists only a single object (say a 1-metre sphere). There is nothing else in all directions. Is the object moving or at rest? Is it even possible to tell, given that there is no frame of reference? Extending the idea, suppose the sphere has some form of (rocket-like) propulsion, constantly accelerating it in an arbitrary but fixed direction. The sphere's velocity, relative to what it was when we first imagined it, approaches the speed of light. The propulsion then stops for a while (while we think about it). At this point we - I assume - still can't tell the difference between it being at rest or moving. The propulsion resumes. What exactly prevents the above from repeating, and the sphere continuing to accelerate to a velocity any arbitrary number of times the speed of light, relative to its velocity when we first imagined it? (To preempt answers such as it's all irrelevant because there's no frame of reference, I forgot to tell you we then discover there is actually a second sphere, exceedingly far away, but in the direction of propulsion). Answer: To answer your title question, there still isn't an absolute reference frame. You can pick any inertial frame moving at any constant sub-light-speed velocity with respect to the sphere (or not moving at all), and thus you can say the sphere is moving at any speed less than the speed of light. The lack of other objects doesn't mean you can't construct other reference frames. Let's pick some frame of reference $S$ in which the sphere was initially at rest. At some event $\mathrm{A}$, the sphere begins to accelerate until, in the frame $S$, it is traveling at a speed $v_1$, where $v_1$ is almost but not quite $c$. The end of the acceleration is event $\mathrm{B}$. From the perspective of an observer in $S$, the sphere is not at rest at event $\mathrm{B}$. Now, let's consider a frame of reference in which the sphere is at rest at $\mathrm{B}$, which we'll call $S'$. 
Now, the object begins accelerating at some later event $\mathrm{C}$ until it reaches a speed $v_1$ again, this time from the perspective of $S'$, at event $\mathrm{D}$. Again, $v_1$ is very close to $c$. However, from the perspective of the observer in $S$, the sphere is not moving faster than light. The relativistic velocity addition formula is not simply $$\text{speed in S at D}=\text{speed in S at B}+\text{speed in S' at D}$$ It's more complicated than that, and it effectively means that you can never observe an object traveling faster than light. It is $$\text{speed in S at D}=\frac{u+v}{1+(vu/c^2)}$$ where $$u=\text{speed in S at B},\quad v=\text{speed in S' at D}$$
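The formula is easy to play with numerically; here is a small sketch (function names mine) showing that repeated boosts never push the speed in the original frame $S$ past $c$:

```python
C = 299_792_458.0  # speed of light in m/s

def add_velocities(u, v):
    """Relativistic velocity addition: the speed in S of an object moving
    at v in a frame S' that itself moves at u relative to S."""
    return (u + v) / (1 + u * v / C**2)

# Boost by 0.9c over and over: the cumulative speed in the original
# frame S creeps toward c but never reaches it.
speed = 0.0
for step in range(5):
    speed = add_velocities(speed, 0.9 * C)
    print(step, speed / C)
```

This is exactly why the sphere in the question can "accelerate again" as often as it likes without ever exceeding $c$ relative to its original rest frame.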
{ "domain": "physics.stackexchange", "id": 35096, "tags": "special-relativity, speed-of-light, reference-frames, inertial-frames, faster-than-light" }
Spatial Join Pandas Dataframes of Bounding Boxes (cross match)
Question: Problem Statement Imagine there are two almost identical images with annotations (bounding boxes for certain objects), where one is the so-called golden image (template) containing all must-have objects (ground truth), and the other is the query image (input), let's say with fewer objects. Given that we already have a Pandas dataframe representation of the objects with bounding box info for each of the images, how can one perform a spatial left join on the template between bounding boxes (objects) so that we can easily identify the missing objects in the input? Example. Let's say the template looks like: and the corresponding template dataframe:
    name        xmin  ymin  xmax  ymax
0   big fish     251   504   485   654
1   small fish   583   572   748   660
2   big fish    1080   484  1236   597
3   big fish     574   122  1076   505
4   big fish    1351   187  1583   369
5   small fish   369    31   506   115
6   small fish  1081   148  1111   190
7   small fish   684   505   732   535
8   small fish   939   521   992   570
9   small fish   417   661   497   705
10  small fish   743   598   792   642
11  small fish   667   657   708   691
And the input image looks like: and the corresponding input dataframe:
   name        xmin  ymin  xmax  ymax
0  small fish   342    16   478   101
1  big fish     221   490   459   646
2  small fish   579   564   723   641
3  big fish    1342   161  1558   337
4  big fish     557   102  1045   492
5  small fish  1049   132  1087   176
6  small fish   389   652   484   694
7  small fish   914   514   964   556
8  small fish   639   640   688   676
Expected Result In this example, three fishes are not present in the input image (missing), and I would like to extract and identify that information by cross-matching objects between the template and input dataframes. 
Ideally, I seek to have a subset of the template dataframe containing only the missing objects in the input image:
   name        xmin  ymin  xmax  ymax
0  big fish    1080   484  1236   597
1  small fish   684   505   732   535
2  small fish   743   598   792   642
and the overlay on the input image would look like: Attempts I have tried to use geopandas.GeoDataFrame.sjoin, borrowing functionalities for Points and Polygons for merging the dataframes. When bounding boxes are reasonably distanced from one another, it works as expected. However, when they are in proximity of one another, and often even overlap, then geopandas sjoin and any other merging functionality wouldn't work. I have also tried to use distances (centers of bounding boxes) together with IoU (Intersection over Union) to cross-match these geometries, but it wouldn't inherently know how far it should look to cross-match, and defining the threshold is not ideal, because we simply wouldn't know how many objects to expect to include or exclude (unless it is hard-coded in the logic, which is definitely hard to maintain). Question: Is there a better, smarter and more efficient way to accomplish this? P.S. (Materials): In order to make it easy to contribute and to access these images, xmls, and dataframes, I have put everything in a public Github repo, also containing a Notebook with some functions and steps using geopandas.GeoDataFrame.sjoin! Answer: Edited after comments in other answers: Generally speaking, I would reframe this as a linear sum assignment problem. This can be solved using a modified version of the Munkres algorithm allowing a cost for non-assignment, whose time complexity is pretty bad ($O(n^3)$) but will work for a dozen fishes. For reference, the Matlab version allows you to handle tracks that end and start across frames, i.e. fishes that disappear and appear between frames. To use the Munkres algorithm, you need to define a cost matrix, with $N_{tracks}$ rows (first frame) and $N_{detections}$ columns (second frame). 
The Munkres algorithm will minimize the global assignment cost. Case 1: significant overlap of bounding boxes across frames (tracking problem): For the track $i$ in the first frame and detection $j$ in the next one, you can define the cost as $1 - IoU(i, j)$, where $IoU(i, j)$ is the intersection over union of the two bounding boxes for track $i$ and detection $j$ (the complement is needed because Munkres minimizes cost while a high IoU means a good match). You could also consider using the distance between the centroids of the bounding boxes $d(i, j)$, or a combination of the two with a total cost such as $C(i, j) = (1 - IoU(i, j)) + \alpha \times d(i, j)$, with $\alpha$ a parameter to tune the respective weight of each cost in the full cost matrix. If you are only using bounding boxes, the IoU is pretty easy to compute. Case 2: no significant overlap of bounding boxes across frames (detection problem): In that case, you cannot rely on positional information. But hopefully, the fish shapes remain largely unchanged. So you can build descriptors/features, for instance: bounding box area: measure the number of pixels in the bounding box (this assumes the fish also didn't change orientation drastically, as fishes are pretty flat, so the area from the side will be very different from the area from the front; you could consider using the longest side of the bounding box to mitigate this). color composition: create a binned RGB histogram from all the pixels in the fish (ideally you would have access to a finer segmentation than just a bounding box to make it less sensitive to the background color). You could also use feature descriptors such as SIFT, AKAZE, etc... But it all comes down to the same two steps: find a good way to compare any pair of objects across frames, then make an optimal decision about how to match them across frames and how to decide which are missing. The second part will always be a linear sum assignment problem. 
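For the IoU term, a minimal self-contained sketch (pure Python, names mine); since `linear_sum_assignment` minimizes, one would typically feed it `1 - iou(...)` rather than the IoU itself:

```python
def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned boxes given as
    (xmin, ymin, xmax, ymax) tuples."""
    ax0, ay0, ax1, ay1 = box_a
    bx0, by0, bx1, by1 = box_b
    # Intersection rectangle (empty if the max is negative).
    ix0, iy0 = max(ax0, bx0), max(ay0, by0)
    ix1, iy1 = min(ax1, bx1), min(ay1, by1)
    inter = max(0, ix1 - ix0) * max(0, iy1 - iy0)
    area_a = (ax1 - ax0) * (ay1 - ay0)
    area_b = (bx1 - bx0) * (by1 - by0)
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

# Two 10x10 boxes overlapping in a 5x5 patch.
print(iou((0, 0, 10, 10), (5, 5, 15, 15)))
```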
So the only thing now is that the scipy version doesn't offer the option to specify the unassignedTrackCost or unassignedDetectionCost like the Matlab version does. And this is actually what will allow you to handle fishes appearing or disappearing the way the Matlab version does. So you will need to modify it. Looking at the picture below, you now have the costMatrix and you need to build the bigger matrix to be able to handle the cases when fish appear or disappear. Once you have managed to create the full cost matrix, you can solve it using linear_sum_assignment and then find the tracks (resp. detections) that were assigned to dummy detections (resp. tracks). Implementation Getting the cost matrix (distance only):

import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
from scipy.spatial.distance import cdist
from scipy.optimize import linear_sum_assignment

def transform(df):
    df["centroid_x"] = (df["xmax"] + df["xmin"]) / 2
    df["centroid_y"] = (df["ymax"] + df["ymin"]) / 2
    return df

df0 = pd.read_csv("first_frame.csv")
df1 = pd.read_csv("second_frame.csv")
df0 = transform(df0)
df1 = transform(df1)

distance_cost_matrix = cdist(df0[["centroid_x", "centroid_y"]],
                             df1[["centroid_x", "centroid_y"]])
cost_mat = np.log(distance_cost_matrix)  # np.log(np.multiply(area_cost_matrix, distance_cost_matrix))

Modified Munkres algorithm:

def pseudo_inf(cost_mat, inf_func):
    pseudo_inf_val = inf_func(cost_mat[cost_mat != np.inf])
    pseudo_cost_mat = cost_mat.copy()
    pseudo_cost_mat[pseudo_cost_mat == np.inf] = pseudo_inf_val
    return pseudo_cost_mat, pseudo_inf_val

def get_costs(cost_mat, row_ind, col_ind):
    costs = [cost_mat[i, j] for i, j in zip(row_ind, col_ind)]
    return costs

def assign_detections_to_tracks(cost_mat, cost_of_non_assignment=None):
    # in case there are infinite values, replace them by some pseudo-infinite value
    inf_func = lambda x: np.max(x) * 2
    pseudo_cost_mat, pseudo_inf_val = pseudo_inf(cost_mat, inf_func)
    assigned_rows = []
    unassigned_rows = []
    assigned_cols = []
    unassigned_cols = []
    full_cost_mat = None
    # basic case, handled by linear_sum_assignment directly
    if cost_of_non_assignment is None:
        assigned_rows, assigned_cols = linear_sum_assignment(pseudo_cost_mat)
        assignment_costs = get_costs(cost_mat, assigned_rows, assigned_cols)
    # if a cost of non-assignment is provided, use it
    else:
        # build the pseudo-array
        top_right_corner = np.full((cost_mat.shape[0], cost_mat.shape[0]), pseudo_inf_val)
        np.fill_diagonal(top_right_corner, cost_of_non_assignment)
        bottom_left_corner = np.full((cost_mat.shape[1], cost_mat.shape[1]), pseudo_inf_val)
        np.fill_diagonal(bottom_left_corner, cost_of_non_assignment)
        top = np.concatenate((cost_mat, top_right_corner), axis=1)
        zero_corner = np.full(cost_mat.T.shape, 0)
        # zero_corner = np.full(cost.shape, cost_of_non_assignment)
        bottom = np.concatenate((bottom_left_corner, zero_corner), axis=1)
        full_cost_mat = np.concatenate((top, bottom), axis=0)
        # apply linear assignment to the pseudo-array
        row_idxs, col_idxs = linear_sum_assignment(full_cost_mat)
        # get costs
        for row_idx, col_idx in zip(row_idxs, col_idxs):
            if row_idx < cost_mat.shape[0] and col_idx < cost_mat.shape[1]:
                assigned_rows.append(row_idx)
                assigned_cols.append(col_idx)
            elif row_idx < cost_mat.shape[0] and col_idx >= cost_mat.shape[1]:
                unassigned_rows.append(row_idx)
            elif col_idx < cost_mat.shape[1] and row_idx >= cost_mat.shape[0]:
                unassigned_cols.append(col_idx)
        # full_costs = get_costs(full_cost_mat, row_idxs, col_idxs)
        assignment_costs = get_costs(full_cost_mat, assigned_rows, assigned_cols)
    return assigned_rows, assigned_cols, unassigned_rows, unassigned_cols, full_cost_mat, assignment_costs

Performing the assignment (cost_of_non_assignment was tuned by looking at a histogram of cost_mat.ravel()):

(
    assigned_rows,
    assigned_cols,
    unassigned_rows,
    unassigned_cols,
    full_costs,
    assignment_cost
) = assign_detections_to_tracks(cost_mat, cost_of_non_assignment=10)
unassigned_rows, unassigned_cols

Results 
df0.iloc[unassigned_rows, :]
         name  xmin  ymin  xmax  ymax  centroid_x  centroid_y
0    big fish  1080   484  1236   597      1158.0       540.5
1  small fish   684   505   732   535       708.0       520.0
2  small fish   743   598   792   642       767.5       620.0
it works!
{ "domain": "datascience.stackexchange", "id": 11550, "tags": "python, pandas, object-detection, geospatial" }
Bosonic Tachyon Condensation?
Question: The tachyonic string mode in perturbative bosonic string theory indicates that the "vacuum", flat Minkowski $\mathbb{R}^{25,1}$, is not really a vacuum. What is conjectured about tachyon condensation in this theory? Do we expect the theory to have a vacuum? Is there any way the condensate might generate fermions dynamically? [Edit] Since Chris Gerig asked for more background: Tachyons, sadly for Star Trek writers, are typically an indication that the state you think is the vacuum of your system is, in fact, not the vacuum. This is because you can act on the ground state with a tachyon creation operator and get a new state with lower energy. For example, the Higgs potential is $$V(\phi) = \frac{1}{2}\lambda (|\phi|^2 - c^2)^2 = -\lambda c^2 \phi^2 + \frac{\lambda}{2} \phi^4 + const$$ If you do perturbation theory around $\phi_0 = 0$ instead of around one of the true minima $\phi_0 = C$, with $|C| = c$, you'll find the creation operators in $\phi_{pert} = \phi - \phi_0$ create tachyons, with negative mass-squared $m^2 = -\lambda c^2$. Create enough of these tachyons, and you'll turn the state $|\phi_0 = 0\rangle$ which you thought was the vacuum into one of the true vacua $|\phi= C\rangle$. Bosonic string theory in 26d has tachyons: the most basic closed and open string excitations, created by vertex operators with no derivatives. So it's a natural question to wonder about: what state do we get if we add tachyons to the false perturbative vacuum? Does this process converge? This is a pretty hard question to answer, since in bosonic string theory we don't have SUSY-protected quantities that we can compute to check our predictions. Which is why I asked what had been conjectured. Answer: These are several rather different questions. First, bosonic string theory in $d=26$ has both open string tachyons and closed string tachyons. The open string excitations are attached to D-branes. 
In particular, open strings that can live everywhere are excitations of a spacetime-filling D25-brane. The list of these open string excitations includes a tachyon. This open string tachyon is a sign of instability of the D25-brane with respect to the complete annihilation of the D25-brane. It's a violent process but the released energy only goes like $1/g_{closed}$, proportionally to the tension of the affected D-brane, which is smaller – for a small $g_{closed}$ – than the energy densities of order $1/g_{closed}^2$ which occur in closed string processes. The difference of potential energies before (local maximum of the potential) and after the tachyon condensation was conjectured by Ashoke Sen to coincide with the tension of the D25-brane. It has been verified by various informal proofs as well as very sophisticated and mathematically rigorous proofs in open string field theory, especially cubic string field theory, where the most complete steps towards the quantitative understanding of the tachyon condensation were achieved by Martin Schnabl. http://arxiv.org/abs/hep-th/0511286 The understanding of the closed string condensation is much less clear. Most likely, there doesn't exist any nearby local minimum and the potential for the closed string tachyon is unbounded from below in all directions, signalling a neverending instability that destroys the spacetime beyond repair. Whether there are fermions in bosonic string theories is a different issue. One may mention type 0 string theories in $d=10$ – which are naively as purely bosonic as the $d=26$ string. However, it was proved by Shiraz Minwalla and pals that one may find fermionic solitons in those theories: http://arxiv.org/abs/hep-th/0107165 There are also papers claiming to find a dynamical process that interpolates between bosonic string theory and superstring theory. They require some time-dependent configurations, however. A time-dependent tachyon is a part of the picture, if I remember well. See e.g. 
this paper by Simeon Hellerman and Ian Swanson and other papers by the same authors (and followups and references): http://arxiv.org/abs/hep-th/0612051
{ "domain": "physics.stackexchange", "id": 3844, "tags": "string-theory, tachyon" }
How do I subscribe to a Twist message in micro-ROS
Question: I am using the ping_pong example as a starting point to create a subscriber for Twist messages. I can't find any examples of how to do this, and being new to C I am getting a bit overwhelmed. So I'm hoping you can guide me and not get too frustrated with me. I don't get any errors when I build the project, and I can successfully flash the ESP32, but none of the printf() debug messages are shown when I run the monitor. Any help would be very much appreciated.

#include <rcl/rcl.h>
#include <rcl/error_handling.h>
#include <rclc/rclc.h>
#include <rclc/executor.h>
// #include <std_msgs/msg/header.h>
#include <geometry_msgs/msg/twist.h>
#include <stdio.h>
#include <unistd.h>
#include <time.h>
#include "driver/gpio.h"
//#ifdef ESP_PLATFORM
#include "freertos/FreeRTOS.h"
#include "freertos/task.h"
//#endif

#define STRING_BUFFER_LEN 50

#define RCCHECK(fn) { rcl_ret_t temp_rc = fn; if((temp_rc != RCL_RET_OK)){printf("Failed status on line %d: %d. Aborting.\n",__LINE__,(int)temp_rc); vTaskDelete(NULL);}}
#define RCSOFTCHECK(fn) { rcl_ret_t temp_rc = fn; if((temp_rc != RCL_RET_OK)){printf("Failed status on line %d: %d. Continuing.\n",__LINE__,(int)temp_rc);}}

#define R_WHEEL_FORWARD_GPIO CONFIG_R_WHEEL_FORWARD_GPIO
#define R_WHEEL_BACKWARD_GPIO CONFIG_R_WHEEL_BACKWARD_GPIO
#define R_WHEEL_PWM_GPIO CONFIG_R_WHEEL_PWM_GPIO
#define L_WHEEL_FORWARD_GPIO CONFIG_L_WHEEL_FORWARD_GPIO
#define L_WHEEL_BACKWARD_GPIO CONFIG_L_WHEEL_BACKWARD_GPIO
#define L_WHEEL_PWM_GPIO CONFIG_L_WHEEL_PWM_GPIO

rcl_subscription_t cmd_vel_subscriber;
geometry_msgs__msg__Twist incoming_twist;

struct twist_message {
    double lin_x;
    double lin_y;
    double lin_z;
    double angle_x;
    double angle_y;
    double angle_z;
};

void twist_subscription_callback(const void * msgin)
{
    const geometry_msgs__msg__Twist * msg = (const geometry_msgs__msg__Twist *)msgin;
    // Deal with the twist message received here
    printf("I got a twist message");
}

void appMain(void *argument)
{
    printf("Starting appMain");
    printf("Hello Simon");

    rcl_allocator_t allocator = rcl_get_default_allocator();
    rclc_support_t support;

    // create init_options
    RCCHECK(rclc_support_init(&support, 0, NULL, &allocator));

    // create node
    rcl_node_t node;
    RCCHECK(rclc_node_init_default(&node, "robot_motor_node", "", &support));

    // Create a best effort cmd_vel subscriber
    RCCHECK(rclc_subscription_init_best_effort(&cmd_vel_subscriber, &node,
        ROSIDL_GET_MSG_TYPE_SUPPORT(std_msgs, msg, String), "/microROS/robot_cmd_vel"));

    // Create executor
    rclc_executor_t executor;
    RCCHECK(rclc_executor_init(&executor, &support.context, 3, &allocator));
    RCCHECK(rclc_executor_add_subscription(&executor, &cmd_vel_subscriber, &incoming_twist,
        &twist_subscription_callback, ON_NEW_DATA));

    // RCCHECK(rcl_subscription_fini(&cmd_vel_subscriber, &node));
    // RCCHECK(rcl_node_fini(&node));

    while(1){
        rclc_executor_spin_some(&executor, RCL_MS_TO_NS(10));
        usleep(10000);
    }

    RCCHECK(rcl_subscription_fini(&cmd_vel_subscriber, &node));
    RCCHECK(rcl_node_fini(&node));
}

Originally posted by IonCoder on ROS Answers with karma: 13 on 2021-01-12 Post score: 1 Answer: Hello, check this code. 
It is a full demo that controls a Kobuki robot with an ESP32 and micro-ROS, and it uses a Twist subscriber to move the robot. You can also find some explanations about this demo here. If you have more questions, please open an issue in one of our GitHub repos. Hope that helps! Originally posted by Pablogs with karma: 443 on 2021-01-13 This answer was ACCEPTED on the original site Post score: 1 Original comments Comment by IonCoder on 2021-01-13: @Pablogs Thank you, I think that is probably just what I need to get me going. I will look at it tonight and then come back and accept your answer. Thanks again. Comment by IonCoder on 2021-01-13: The code you pointed to worked well the first time. I took the publisher code out, so I can understand one thing at a time and I didn't break anything :-)
{ "domain": "robotics.stackexchange", "id": 35960, "tags": "ros2" }
What is the current best state of the art algorithm for graph embedding of directed weighted graphs for binary classification?
Question: Sorry if this is a newbie question, I'm not an expert in data science, so this is the problem: We have a directed and weighted graph, where higher or lower weight values do not imply the importance of the edge (so preferably the embedding algorithm shouldn't treat higher weights as more important); they are just used to indicate the timing of the events which connect the nodes, so the higher-weighted edges are events that happened after the lower ones. There can be multiple edges between nodes, and I want to do a binary classification using deep learning, meaning that my model gets an embedded graph vector as input and decides if it's malicious (1) or not (0). So what is the best state-of-the-art graph embedding algorithm for this task that can capture as much information as possible from the graph? I read some graph embedding papers but couldn't find any good comparison of them, since there are so many new ones. IMPORTANT NOTE: One problem I have seen with some of the graph embedding algorithms is that they try to keep the vector dimension small, I guess because they are used in fields where there are a LOT of nodes; but in this task that's not really important. The nodes are the functions in the program, and they very rarely go above 2000 functions, so even if the algorithm creates 20k dimensions it's no problem. I'm saying this because some of the algorithms that I'm reading about will produce a vector that has even fewer dimensions than the number of nodes in the graph, and that causes loss of information in my opinion. So to sum up, performance and large vector size are not a problem in my task; preferably the algorithm should gather as much information as possible from the graph. Answer: I'm no expert myself, but recently (i.e. this is true as of 2019), I've heard (at a meetup, from an expert) that Node2Vec is the SOTA.
Here's a link to a post on Medium explaining it - basically, Node2Vec generates random walks on the graph (with hyper-parameters relating to walk length, etc.), and embeds nodes in walks the same way that Word2Vec embeds words in a sentence. Note that since random walks are generated on the graph, intuitively you don't have to use edge weights, and you can use multiple edges. P.S. There's an older (2014) algorithm for this, called "DeepWalk", which I don't know much about but which is supposed to be similar, simpler, and not as performant. I'm just adding this in case you want to search the term.
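To make the mechanism concrete, here is a minimal, self-contained sketch (not from the answer; the toy graph and hyper-parameters are made up) of the walk-generation step that DeepWalk uses and that node2vec extends with biased transitions:

```python
import random

def random_walks(adj, walk_length=5, walks_per_node=2, seed=0):
    """DeepWalk-style unbiased random walks over a directed graph.

    adj maps node -> list of successor nodes; a multigraph is fine,
    since repeated successors simply get a proportionally higher
    transition probability (node2vec adds biased walks on top of
    this basic scheme via its p/q hyper-parameters).
    """
    rng = random.Random(seed)
    walks = []
    for start in adj:
        for _ in range(walks_per_node):
            walk = [start]
            while len(walk) < walk_length:
                successors = adj.get(walk[-1], [])
                if not successors:  # dead end in a directed graph
                    break
                walk.append(rng.choice(successors))
            walks.append(walk)
    return walks

# Toy call graph: nodes are functions, edges are calls.
graph = {"main": ["parse", "run"], "parse": ["run"], "run": ["main"]}
walks = random_walks(graph)
# Each walk is a "sentence" of node tokens; feeding the walks to a
# skip-gram (Word2Vec-style) model yields the node embeddings.
```

Since the walks only follow edges, the weights never enter the embedding, which matches the requirement that weights not signal importance.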
{ "domain": "datascience.stackexchange", "id": 7205, "tags": "deep-learning, classification, graphs, embeddings" }
Why is there a magnetic field inside a solenoid?
Question: I am having a bit of trouble understanding a very basic concept, which is the following. If inside a solenoid, there is no current going through (since it goes through the wires making up the structure), how can there be a magnetic field inside? I have seen the proof both through Biot-Savart and Ampère's Law, but in both cases I am failing to understand the physical interpretation. All help is really appreciated. Answer: The relationship between magnetic field and current (in magnetostatics) is only that the curl of the magnetic field is proportional to the current density at that point. $$ \nabla \times {\bf B} = \mu_0\ {\bf J}$$ There is no relationship between the B-field itself at some point in space and the current found at that position.
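As a sanity check on this (not part of the original answer; the geometry numbers are illustrative), one can sum the exact Biot-Savart on-axis field of each turn and see that the field deep inside a long solenoid approaches the textbook value $\mu_0 n I$, even though the axis itself carries no current:

```python
import math

MU0 = 4 * math.pi * 1e-7  # vacuum permeability, T*m/A

def solenoid_axis_field(current, radius, length, n_turns, z=0.0):
    """On-axis field of a solenoid, summing the exact Biot-Savart
    on-axis result for each circular turn:
    B_loop = mu0 I R^2 / (2 (R^2 + d^2)^(3/2))."""
    spacing = length / n_turns
    total = 0.0
    for k in range(n_turns):
        z_turn = -length / 2 + (k + 0.5) * spacing
        d = z - z_turn
        total += MU0 * current * radius**2 / (2 * (radius**2 + d**2) ** 1.5)
    return total

# Illustrative geometry: 1 A through 1000 turns wound over 1 m on a
# 1 cm radius -- long compared to its radius.
I, R, L, N = 1.0, 0.01, 1.0, 1000
B_center = solenoid_axis_field(I, R, L, N)
B_ideal = MU0 * (N / L) * I  # the infinite-solenoid value, mu0 n I
# B_center matches B_ideal to a fraction of a percent; at the end of
# the winding (z = L/2) the field drops to about half, as expected.
```

The field at the center is built entirely from contributions of currents elsewhere, which is exactly the point of the answer: the curl relation is local, but the field itself is not tied to current at that spot.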
{ "domain": "physics.stackexchange", "id": 45030, "tags": "electromagnetism, inductance" }
Temperature due to the cosmological horizon during inflation
Question: During inflation, the de Sitter space has a cosmological event horizon. This horizon exists because farther than the horizon distance the expansion carries light away from the observer faster than the light can travel. This cosmological horizon emits Hawking radiation. Can we estimate the temperature of this radiation during inflation? I just want to know whether this temperature is negligibly small or significantly large. Note that inflationary expansion is thought to be $10^{53}$ times faster than current expansion. Answer: It is called the de Sitter temperature and is given by $T = H/2\pi$, where $H$ is the Hubble parameter. It arises from the autocorrelation of superhorizon quantum fluctuations generated during inflation. EDIT: Below, I elaborate on the above showing how the autocorrelation of field fluctuations during inflation leads to a de Sitter temperature $T=H/2\pi$. Begin with the inflaton, $\phi$, a minimally-coupled scalar field evolving in a FRW universe: $$\ddot{\phi} + 3H \dot{\phi} - \frac{\nabla^2}{a^2}\phi + \frac{{\rm d}V(\phi)}{{\rm d}\phi} = 0$$ and consider small perturbations about its homogeneous background value, $\phi({\bf x},t) = \phi_0(t) + \delta \phi({\bf x},t)$.
To first order in $\delta \phi({\bf x},t)$, the above expression becomes $$\ddot{\delta \phi} + 3H\dot{\delta \phi} -\left(\frac{\nabla^2}{a^2} - \left.\frac{{\rm d}^2V(\phi)}{{\rm d}\phi^2}\right|_{\phi = \phi_0}\right)\delta \phi = 0.$$ Next, take the Fourier transform of the fluctuation in comoving wavenumber, $k$, $$ \delta \phi({\bf x},t) = \int\frac{{\rm d}^3k}{(2\pi)^{3/2}}\delta \phi_k(t)e^{i{\bf k}\cdot{\bf r}}, $$ and use it to obtain an expression for the Fourier modes $\delta \phi_k$, $$\ddot{\delta \phi_k} + 3H\dot{\delta \phi_k} + \left(\frac{k}{a}\right)^2\delta \phi_k = 0.$$ Lastly, we can rescale the field $u_k = a\delta \phi_k$ to arrive at the more compact mode equation, $$ u_k'' + \left(k^2 - \frac{a''}{a}\right)u_k = 0, $$ where primes denote derivative wrt conformal time, ${\rm d} \tau = {\rm d}t/a$. Specializing to the case of de Sitter space for which $a(t) \propto e^{Ht}$ and $\tau = -1/aH$ we can write the mode equation as $$(k\tau)^2\frac{{\rm d}^2u_k}{{\rm d}(k\tau)^2} + \left[(k\tau)^2 - 2\right]u_k = 0.$$ This can be solved exactly in terms of Hankel functions, $$u_k(-k\tau) = \frac{1}{2}\sqrt{-k \tau}\left[c_1 H^{(1)}_{3/2}(-k\tau) + c_2H^{(2)}_{3/2}(-k\tau)\right],$$ where $H^{(1)}_{3/2} = J_{3/2} +iY_{3/2} = H^{(2)*}_{3/2}$ and $J$ and $Y$ are Bessel functions of the first and second kind. Now, let's figure out the constants. We can get one of them by appealing to a generic feature of quantum fields in expanding spacetimes: when the frequency is high relative to the expansion rate, the field doesn't "feel" the expansion and it oscillates as a plane wave. In the short wavelength limit $k/aH = -k\tau \rightarrow \infty$, Hankel functions indeed reduce to sinusoids, $$u_k(-k\tau) = \frac{1}{\sqrt{2k}}\left(c_1 e^{-ik\tau} + c_2 e^{ik\tau}\right).$$ To recover positive frequency plane waves, we choose $c_2 = 0$.
To figure out $c_1$, we make use of the fact that $\delta \phi({\bf x},t)$ is a quantum mode, and so it must satisfy the so-called canonical commutation relation with its conjugate momentum, $\pi({\bf x},t) = a^2\delta \phi({\bf x},t)'$. This is a straightforward but tedious calculation involving a small collection of Bessel function identities: you should find at the end that $c_1 = \sqrt{\pi/k}$. The final expression governing the time-dependence of a quantum mode in de Sitter space can now be written down, $$ u_k(-k\tau) =\frac{\sqrt{-\pi \tau}}{2}H^{(1)}_{3/2}(-k\tau)=-\frac{1}{\sqrt{2k}}\left(1 - \frac{i}{k\tau}\right)e^{-ik\tau}.$$ This is a wonderful result, particularly the second equality which results because Bessel functions of half-integer order are just combinations of trig functions. It's incredibly insightful: the fluctuation starts as a plane wave in the distant past when its wavelength is tiny ($-k\tau \rightarrow \infty$), but then as the mode is stretched by the expansion ($-k\tau \rightarrow 0$), it evolves out of the vacuum ultimately obtaining a constant amplitude on large scales (see figure). This is called mode freezing, and is due to the quantum decoherence of fluctuations on super-horizon scales. Specifically, in the large wavelength limit we can use small-argument approximations of the Bessel functions to see that $H^{(1)}_{3/2}(-k\tau)$ goes as $\sqrt{2/\pi}(-k\tau)^{-3/2}$ and $$ |\delta \phi_k| = \frac{|u_k|}{a} = \frac{H}{\sqrt{2k^3}}.$$ This is the quantity of interest, since it gives the amplitude of the fluctuation on large scales where it is a real, classical perturbation. The correlation function is then obtained by ensemble averaging, $$ \langle | \delta \phi_k|^2\rangle = \frac{H^2}{2k^3}. $$ To get a quantity independent of scale, it is common practice to multiply by $k^3/2\pi^2$ to get the power spectrum, $$ P(k) = \frac{H^2}{4\pi^2}. $$ This has dimensions of energy-squared, and so its square root corresponds to a temperature, the Gibbons-Hawking temperature, of $T = H/2\pi$.
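As a quick numerical cross-check of the derivation (my addition, not part of the original answer): in de Sitter space $a = -1/(H\tau)$ gives $a''/a = 2/\tau^2$, so the closed-form mode function above should annihilate the mode equation, and its super-horizon amplitude should approach $H/\sqrt{2k^3}$:

```python
from cmath import exp

def u(k, tau):
    """The closed-form mode function quoted above (the overall phase
    is irrelevant for a linear equation)."""
    return -(1 / (2 * k) ** 0.5) * (1 - 1j / (k * tau)) * exp(-1j * k * tau)

def residual(k, tau, h=1e-5):
    """|u'' + (k^2 - 2/tau^2) u| via central differences; for
    a = -1/(H tau) one has a''/a = 2/tau^2."""
    u_pp = (u(k, tau + h) - 2 * u(k, tau) + u(k, tau - h)) / h**2
    return abs(u_pp + (k**2 - 2 / tau**2) * u(k, tau))

# Zero to finite-difference accuracy for any k > 0 and tau < 0:
r = residual(k=3.0, tau=-2.0)
# And on super-horizon scales (tau -> 0-), |u|/a tends to H/sqrt(2 k^3).
```

This is only a consistency check of the algebra; the physical content (mode freezing, the de Sitter temperature) is in the derivation itself.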
{ "domain": "physics.stackexchange", "id": 53209, "tags": "cosmology, cosmological-inflation, hawking-radiation, qft-in-curved-spacetime" }
Classical blackbody radiation 'solution'
Question: I never understood how the equipartition theorem was applied to electromagnetic waves inside the metallic blackbody. As hyperphysics puts it (http://hyperphysics.phy-astr.gsu.edu/hbase/mod7.html): "The classical view treats all electromagnetic modes of the cavity as equally likely because you can add an infinitesimal amount of energy to any mode." But the existence of the modes is conditioned on the existence of certain movements by the particle, so I don't see it as a "degree of freedom". For example, for a high frequency mode to exist there should be electrons jiggling with a certain frequency. I expected some sort of statistic of electron speed and acceleration for a given temperature, like the Maxwell-Boltzmann distribution, to derive the expected electromagnetic radiation spectrum for the blackbody. 1) Why are the electromagnetic modes considered "degrees of freedom" if their existence is conditioned on the motion of the electrons? If they actually aren't, please explain it. 2) Does an alternative (non cavity + modes) approach exist? I was thinking of fluctuations of charge density on the surface of a solid metal sphere due to temperature. Answer: Why are the electromagnetic modes considered "degrees of freedom" if their existence is conditioned on the motion of the electrons? You assume that EM waves are determined by the motion of the electrons and have no freedom of their own. Although that is a reasonable picture of things (for example, if fields are assumed purely retarded, which is very natural), it is not necessary. The common derivations of the thermal radiation spectrum do not assume that. On the contrary, they assume that the EM field is a thing that can exist in vacuum independently of charged particles. This is possible because the Maxwell equations are only conditions that fields have to satisfy. They alone do not determine the fields. In order to determine the fields, some additional boundary conditions need to be assumed.
The common derivations of the thermal radiation spectrum are based on the equations for vacuum $$ \nabla \cdot \mathbf E = 0 $$ $$ \nabla \cdot \mathbf B = 0 $$ $$ \nabla \times \mathbf E = -\frac{1}{c}\frac{\partial \mathbf B}{\partial t} $$ $$ \nabla \times \mathbf B = \frac{1}{c}\frac{\partial \mathbf E}{\partial t} $$ and boundary conditions appropriate for a closed ideal metal shell. This system then admits an infinite number of solutions $\mathbf E_H, \mathbf B_H$. Every solution can be written as a discrete sum of standing waves ("modes"), and the expansion coefficients are considered as coordinates determining the configuration of the field, hence the term "degrees of freedom". The motion of the charged matter does not enter the derivation explicitly. So the answer to 1) is "in the derivation of the thermal EM spectrum, the existence of EM waves is not considered to be conditioned by the motion of the charged matter". Now I agree with you that this approach is far from satisfactory. The boundary condition of an ideal metallic shell is unrealistic for high frequencies. In practice, such a boundary condition is an easy but only approximately correct way to account for the motion of electrons in the metal. Does an alternative (non cavity + modes) approach exist? I was thinking of fluctuations of charge density on the surface of a solid metal sphere due to temperature. The closest thing I know is Planck's later derivation, where he assumes the EM field is emitted by the matter oscillators in steps $\hbar \omega$. You can read about it in his great book M. Planck, The theory of heat radiation, P. BLAKISTON'S SON & Co. 1914 https://archive.org/details/theheatradiation00planrich There are other kinds of derivations, for example Timothy Boyer's derivations based on the zero-point radiation: https://journals.aps.org/pr/abstract/10.1103/PhysRev.182.1374 You can find other related papers by Timothy Boyer on arxiv.
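To see concretely why the modes behave like countable degrees of freedom (an illustration I'm adding, with made-up cavity numbers), one can enumerate the standing-wave modes of a cubical cavity and compare with the continuum count $8\pi\nu^3 V/3c^3$ used in the classical (Rayleigh-Jeans) derivation:

```python
import math

C = 3.0e8  # speed of light, m/s (rounded; fine for a mode count)

def count_modes(nu_max, side):
    """Number of standing-wave modes of a cubical cavity of edge
    `side` with frequency below nu_max.  Allowed modes have
    nu = (c / (2*side)) * sqrt(nx^2 + ny^2 + nz^2), nx,ny,nz >= 1,
    and each (nx, ny, nz) carries two polarizations."""
    r2 = (2 * side * nu_max / C) ** 2
    count = 0
    nx = 1
    while nx * nx < r2:
        ny = 1
        while nx * nx + ny * ny < r2:
            count += math.isqrt(int(r2 - nx * nx - ny * ny))
            ny += 1
        nx += 1
    return 2 * count  # two polarizations per wavevector

side, nu = 0.01, 3.0e12  # a 1 cm cavity, counted up to 3 THz
n_counted = count_modes(nu, side)
n_rj = 8 * math.pi * nu**3 * side**3 / (3 * C**3)  # 8 pi nu^3 V / (3 c^3)
# n_counted / n_rj -> 1 from below as nu grows; equipartition then
# assigns kT to each of these ever-more-numerous modes, which is
# exactly what produces the ultraviolet catastrophe.
```

The count depends only on the boundary conditions, not on any electron dynamics, which is the sense in which the modes are independent degrees of freedom in these derivations.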
{ "domain": "physics.stackexchange", "id": 10115, "tags": "electromagnetic-radiation, statistical-mechanics, thermal-radiation" }
Current in a galvanic cell
Question: I was looking through chemistry textbooks to find out how to determine how much current a galvanic cell should generate and what affects that current value. However, I did not find anything: textbooks just talk about the EMF of a cell. I wanted to know what determines how much current a cell generates. Certainly, the amount of electrons produced externally contributes to the current produced by the cell. However, what about internally? Batteries have internal resistance. Is that true of cells too (it should be, because batteries are made up of many cells)? What determines the amount of internal resistance: does the type of electrolyte(s) affect it? If a cell uses a weak electrolyte (like acetic or carbonic acid) will it cause more internal resistance than if something like sulfuric acid (strong electrolyte) is used? What if a cell used a weak electrolyte as the cathode electrolyte and a strong electrolyte as the anode electrolyte; will that cause more internal resistance due to there being fewer ions in the cathode compartment, as not all of the electrolyte has dissociated into ions (thus, the cathode cation cannot be reduced)? Answer: Although voltage can be calculated from the electromotive series, maximum current is harder to predict due to a number of factors, some of which you've mentioned. Electrolyte conductivity, which varies with components, concentration, temperature, etc. As you state, weak electrolytes are less conductive. Resistance of a composite electrolyte would be the sum of its components. Cell polarization, which is an effect at the electrode/electrolyte interfaces. The causes are buildup of gas bubbles, concentration gradients that develop in the electrolyte, etc. Deterioration of the electrodes and/or electrolyte, particularly under high current drain. Of course, as @MaxW states, a major factor is cell geometry: area of electrodes and separation due to insulators and electrolyte.
For these reasons, though the maximum current can be calculated from the resistances and EMF, in actual use, cells often fall far short of their theoretical potential (no pun intended). The resistance of a battery is primarily the sum of that of the cells. Since cells are joined with metallic connections, the cell-to-cell resistance is usually negligible (though I've used poorly designed battery holders with high-resistance spring contacts that severely limited the current, and that damaged the plastic holder due to ohmic heating).
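The simple ohmic part of this is easy to sketch (illustrative numbers, not from the answer): with EMF $\mathcal{E}$ and internal resistance $r$, the current into a load $R$ is $I = \mathcal{E}/(r+R)$, so the short-circuit current $\mathcal{E}/r$ is the ceiling before polarization and deterioration effects push it lower:

```python
def cell_output(emf, r_internal, r_load):
    """Current and terminal voltage of a cell with internal
    resistance r_internal driving a load r_load (ohmic model only;
    polarization and electrode wear, listed above, effectively add
    to r_internal in actual use)."""
    current = emf / (r_internal + r_load)
    terminal_voltage = emf - current * r_internal  # == current * r_load
    return current, terminal_voltage

# Illustrative numbers: a 1.5 V cell with 0.2 ohm internal resistance.
i_max, _ = cell_output(1.5, 0.2, 0.0)        # short circuit: 7.5 A
i_load, v_load = cell_output(1.5, 0.2, 1.0)  # 1 ohm load: 1.25 A at 1.25 V
```

Note how the terminal voltage sags below the EMF exactly by the drop across the internal resistance, which is why a weak (poorly conducting) electrolyte limits current even though it barely changes the EMF.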
{ "domain": "chemistry.stackexchange", "id": 5304, "tags": "electrochemistry, redox, electrons" }
Bag of Tricks: n-grams as additional features?
Question: I've been playing with PyTorch's nn.EmbeddingBag for sentence classification for about a month. I've been doing some feature engineering, playing with different tokenizers, etc. I'm just trying to get the best performance out of this simple model that I can. I'm new to NLP, so I figured I should start small. Today, by chance, I stumbled on this paper Bag of Tricks for Efficient Text Classification, which very well may be the inspiration for nn.EmbeddingBag. Regardless, I read the paper and saw that they increased performance through using "n-grams as additional features to capture some partial information about the local word order". So by the wording of this sentence, specifically "additional features", I take it to mean that they made n-grams part of their vocabulary. For example, "abc news" is treated as a single word in the vocabulary, and then appended to the training data that is being embedded, like so: dataset = TextFromPandas(tweet_df) label, sentence, ngrams = dataset[0] label, sentence, ngrams # out: (1, 'quake our deeds are the reason of this # earthquake may allah forgive us all', ['quake our', 'our deeds', 'deeds are', 'are the', 'the reason', 'reason of', 'of this', 'this #', '# earthquake', 'earthquake may', 'may allah', 'allah forgive', 'forgive us', 'us all']) I just wanted to check my assumption, because the paper is not very explicit. I already tried to string n-grams together as a new sentence in place of the old, but performance dropped significantly. I will continue to experiment, but I was wondering if anyone knows the specific mechanism? Answer: Yes, n-grams are about joining $n$ words into one single token. Keep in mind it will greatly increase your feature count: if you originally have 1000 unique words, notice you could get up to 1000² 2-grams (usually you don't get ALL the combinations, but notice the number of features can potentially grow huge!)
If your dataset contains between thousands and millions of samples, it could be enough to train a simple bag-of-words. But when you use a bi-gram, you'll probably need at least a million samples and a lot more training steps. Besides, you'll probably have to tweak the hyperparameters. That's a common Machine Learning trade-off. A simple model is not very accurate. But a more complex model requires more data, more training and can overfit more easily.
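A minimal sketch of the "n-grams as additional vocabulary entries" reading (my illustration; the tokenization details in the paper and in any given pipeline will differ):

```python
def ngrams(tokens, n):
    """All contiguous n-grams of a token list, joined with spaces."""
    return [" ".join(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def vocabulary(sentences, max_n=2):
    """Union of all 1..max_n grams: the n-grams are simply appended
    to the vocabulary as extra lookup-table entries, alongside (not
    instead of) the unigrams."""
    vocab = set()
    for sentence in sentences:
        tokens = sentence.split()
        for n in range(1, max_n + 1):
            vocab.update(ngrams(tokens, n))
    return vocab

sents = ["abc news reports", "news reports daily"]
uni = vocabulary(sents, max_n=1)  # 4 unigram features
bi = vocabulary(sents, max_n=2)   # adds 'abc news', 'news reports', ...
# Each extra n-gram is one more row in the embedding table, which is
# why the feature count can blow up toward |V|^n in the worst case.
```

This also shows why replacing the sentence with its n-grams (as tried in the question) hurts: the unigram features disappear instead of being supplemented.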
{ "domain": "ai.stackexchange", "id": 2996, "tags": "natural-language-processing, papers, feature-extraction, bag-of-words, n-gram" }
Superconductors on each side of a flexible membrane: what happens?
Question: I have a sheet of flexible Kevlar a 16th of an inch thick, coated on each side with a high temperature superconductor. What happens when you put a hundred amps of current into each side, and then 1000, and so on? I'm assuming that the superconductor is bonded very well to the Kevlar, and that it is at a temperature conducive to its superconducting. Will the 2 sides repel each other, turning the Kevlar into a rigid structure? How rigid? Please put it in layman's terms! Answer: If we ignore that coating a Kevlar sheet with a high temperature superconductor is impossible with current technology, one thing will happen: it will rip the superconductor apart. The Lorentz force for such a high current at small distances is enormous. Here is a demonstration of what happens: 5000 Amps through a copper bar.
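For a rough sense of scale (my own back-of-the-envelope, treating the two coated faces as long parallel line currents a sheet-thickness apart; the real sheet geometry differs, but the scaling is the point):

```python
import math

MU0 = 4 * math.pi * 1e-7  # vacuum permeability, T*m/A

def force_per_metre(i1, i2, d):
    """Force per unit length between two long parallel currents a
    distance d apart: mu0 i1 i2 / (2 pi d).  Parallel currents
    attract; antiparallel currents repel."""
    return MU0 * i1 * i2 / (2 * math.pi * d)

# A 16th of an inch is roughly 1.6 mm of separation.
f_100 = force_per_metre(100, 100, 1.6e-3)     # 1.25 N per metre
f_1000 = force_per_metre(1000, 1000, 1.6e-3)  # 125 N per metre
# The force grows as I^2, so "and then 1000 and so on" escalates
# quickly -- this is the Lorentz force the answer is referring to.
```

Whether the faces attract or repel depends on the relative current directions, which the question leaves open; either way, the stress lands on the bond between coating and Kevlar.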
{ "domain": "physics.stackexchange", "id": 2335, "tags": "superconductivity" }
Partitioning a sequence into sublists
Question: This function divides a sequence into partitions, where a partition is a list of consecutive matching elements. Example Input: (A, A, B, B, B, A, C, C) Output: ((A, A), (B, B, B), (A), (C, C)) I've tried to make this code "obviously correct", but it still doesn't look that way to me. public static IEnumerable<List<T>> PartitionBy<T, PK>(this IEnumerable<T> sequence, Func<T, PK> partitionKey) { return sequence.PartitionBy(partitionKey, EqualityComparer<PK>.Default); } public static IEnumerable<List<T>> PartitionBy<T>(this IEnumerable<T> sequence, IEqualityComparer<T> comparer) { return sequence.PartitionBy(item => item, comparer); } public static IEnumerable<List<T>> PartitionBy<T, X>(this IEnumerable<T> sequence, Func<T, X> partitionKey, IEqualityComparer<X> comparer) { var itr = sequence.GetEnumerator(); if (!itr.MoveNext()) { // empty sequence was passed in, so return empty sequence yield break; } // Start the first partition. var currentList = new List<T>(new[] { itr.Current }); while (itr.MoveNext()) { var key1 = partitionKey(currentList[0]); var key2 = partitionKey(itr.Current); if (comparer.Equals(key1, key2)) { // continue current partition currentList.Add(itr.Current); } else { // yield current partition and start a new one yield return currentList; currentList = new List<T>(new[] { itr.Current }); } } // We know it has at least 1 element here. yield return currentList; } Answer: Just a small point to start: new List<T>(new[] { itr.Current }); You don't need the array too, you can just do: new List<T> { itr.Current }; I'd suggest that you aim for consistency with your generic type names too. Why X vs PK? I'd suggest TKey for both. 
You could do it just with a foreach: public static IEnumerable<IEnumerable<T>> Partition<T, TKey>( this IEnumerable<T> items, Func<T, TKey> keySelector, IEqualityComparer<TKey> comparer) { List<T> currentPartition = null; foreach (var item in items) { if (currentPartition != null && comparer.Equals(keySelector(item), keySelector(currentPartition[0]))) { currentPartition.Add(item); } else { if (currentPartition != null) { yield return currentPartition; } currentPartition = new List<T> { item }; } } if (currentPartition != null) { yield return currentPartition; } } Is it clearer than your code? I'm not convinced it is.
{ "domain": "codereview.stackexchange", "id": 25890, "tags": "c#" }
Transforming iterator
Question: The question is follow-up to preprocessing iterator. Specification: Given Functor functor (which is Callable) and Iterator iterator (which is OutputIterator), iterator is created that behaves the same as underlying iterator, except it applies a functor to the incoming data and then passes the result into underlying iterator. Input type of the Functor is not required to match value type of the Iterator, but the input should be only single parameter (even defaulted arguments are not allowed). Code: #ifndef SUNRISE_TRANSFORM_ITERATOR_HPP #define SUNRISE_TRANSFORM_ITERATOR_HPP #include <iterator> namespace shino { template<typename Functor, typename Iterator> class transform_iterator : public std::iterator<std::output_iterator_tag, void, void, void, void> { Functor functor; Iterator iterator; public: transform_iterator(const Functor& f, const Iterator& it) : functor(f), iterator(it) {} transform_iterator(Functor&& f, Iterator&& it) : functor(f), iterator(it) {} class proxy { friend class transform_iterator; Iterator &iterator; Functor &f; public: template <typename U> proxy &operator=(U&& value) { *iterator = f(std::forward<U>(value)); return *this; } private: proxy(Iterator &it, Functor &functor) : iterator(it), f(functor) {} }; proxy operator*() { return proxy(iterator, functor); } transform_iterator &operator++() { ++iterator; return *this; } transform_iterator operator++(int) { auto copy = *this; ++iterator; //might exhibit different behavior sometimes return copy; } const Iterator& internal_iterator() const { return iterator; } const Functor& internal_functor() const { return functor; } void swap(transform_iterator& other) { using std::swap; swap(other.functor, functor); swap(other.iterator, iterator); } }; template<typename Functor, typename Iterator> bool operator==(const transform_iterator<Functor, Iterator>& lhs, const transform_iterator<Functor, Iterator>& rhs) { return lhs.internal_iterator() == rhs.internal_iterator(); } template <typename 
Functor, typename Iterator> bool operator!=(const transform_iterator<Functor, Iterator>& lhs, const transform_iterator<Functor, Iterator>& rhs) { return !(lhs == rhs); } template <typename Functor, typename Iterator> void swap(shino::transform_iterator<Functor, Iterator>& lhs, shino::transform_iterator<Functor, Iterator>& rhs) { lhs.swap(rhs); } template <typename Functor, typename Iterator> auto transformer(Functor&& f, Iterator&& iterator) { return transform_iterator<std::remove_const_t<std::remove_reference_t <Functor>>, std::remove_const_t<std::remove_reference_t<Iterator>>>(std::forward<Functor>(f), std::forward<Iterator>(iterator)); } } #endif //SUNRISE_TRANSFORM_ITERATOR_HPP I don't have any special concerns, but anything, even small nitpicks are appreciated (I had serious flaw in transformer<>() during implementation, so I want to get rid of all of the dangerous things). This post shares example code with sliding window, because it is meant to be paired with it: #include <vector> #include <iostream> #include <utility> template <typename InputIt, typename OutputIt> std::pair<InputIt, OutputIt> sliding_average(InputIt first, InputIt last, const typename std::iterator_traits<InputIt>::difference_type window_length, OutputIt d_first) { using value_type = typename std::iterator_traits<InputIt>::value_type; auto divide = [&window_length](const value_type& value) { return value / window_length; }; auto iterator = shino::transformer(divide, d_first); //transform_iterator<Functor, Iterator> auto result = shino::sliding_window(first, last, iterator, window_length); return std::make_pair(result.first, result.second.internal_iterator()); } The example might not be so appealing, but currently I have lack of imagination to write something wonderful small enough to not be applicable for its own review. Answer: Lets start with the easy thing... 
You are told to take any Callable (also class member pointer(!)) but you only use f(...), which doesn't work for all Callable types. To show you an example which should work according the problem description, but does not: struct Person { int age_; std::string name_; int age() const noexcept { return age_; } decltype(auto) name() const noexcept { return name_; } }; int main() { auto vec = std::vector<int>(2); Person persons[] = { {24, "Foo"}, {42, "Bar"} }; auto iter = shino::transform_iterator{&Person::age, vec.begin()}; std::copy(std::begin(persons), std::end(persons), iter); // should print "24\n42\n" for (int x : vec) { std::cout << x << '\n'; } } Error message: shino.cpp:37:29: error: called object type 'int (Person::*)() const noexcept' is not a function or function pointer *iterator = f(std::forward<U>(value)); ^ /usr/local/include/c++/v1/algorithm:1706:19: note: in instantiation of function template specialization 'shino::transform_iterator<int (Person::*)() const noexcept, std::__1::__wrap_iter<int *> >::proxy::operator=<Person &>' requested here *__result = *__first; ^ /usr/local/include/c++/v1/algorithm:1731:19: note: in instantiation of function template specialization 'std::__1::__copy<Person *, shino::transform_iterator<int (Person::*)() const noexcept, std::__1::__wrap_iter<int *> > >' requested here return _VSTD::__copy(__unwrap_iter(__first), __unwrap_iter(__last), __unwrap_iter(__result)); ^ shino.cpp:137:7: note: in instantiation of function template specialization 'std::__1::copy<Person *, shino::transform_iterator<int (Person::*)() const noexcept, std::__1::__wrap_iter<int *> > >' requested here std::copy(std::begin(persons), std::end(persons), iter); ^ 1 error generated. So instead of the line *iterator = f(std::forward<U>(value)); you should use *iterator = std::invoke(f, std::forward<U>(value)); and the example works! 
(Proof) Let's tackle the next 'issue'. The problem description says specifically: Given Functor functor (which is Callable) and Iterator iterator (which is OutputIterator), [a transform_]iterator is created that behaves the same as [the] underlying iterator This could also mean, in my opinion, that if you are given an OutputIterator you shall return one, but if you are given a ForwardIterator, which is just a refinement of OutputIterator, you shall also return a ForwardIterator, and so on. And if this shall be production code, then I would also add the requirement that your code shall not compile for given iterators which only satisfy InputIterator. So to sum up, since you declare your transform_iterator to always be an OutputIterator by setting that tag, you cannot use it well with the (future) STL whenever it would make sense (well, at least when we are going to have Concepts). You do not support BidirectionalIterator or RandomAccessIterator already. To achieve better compliance with the STL, you could just copy the underlying iterator_tag (if it is not input_iterator_tag) and you could use std::enable_ifs to enable only proper operators. To solve this issue cleanly you have to make bigger changes. You can look into the libraries ranges-v3 and cmcstl2 for some inspiration. They have fantastic iterator facades, which might go into the standard some day (Ranges-TS).
Next issue: The following does not compile (on clang-5.0-trunk) int main() { auto vec = std::vector<int>(10); auto iter = shino::transformer( [](auto i){ return i*i; }, vec.begin() ); auto end = shino::transformer( [](auto i){ return i*i; }, vec.end() ); std::iota(iter, end, 1); for (auto&& x : vec) { std::cout << x << '\n'; } } Clang fails to compile because the two closures have different types. Error message: shino.cpp:127:2: error: no matching function for call to 'iota' std::iota(iter, end, 1); ^~~~~~~~~ /usr/local/include/c++/v1/numeric:196:1: note: candidate template ignored: deduced conflicting types for parameter '_ForwardIterator' ('transform_iterator<(lambda at shino.cpp:119:3), [...]>' vs. 'transform_iterator<(lambda at shino.cpp:123:3), [...]>') iota(_ForwardIterator __first, _ForwardIterator __last, _Tp __value_) ^ Since iterator and sentinel cannot be different in the current STL algorithms, there is unfortunately not much one can do right now. This will be different in the future. Until then I would suggest adding a function which maps an output range into a transformed output range (ensuring the same Callable type is used). Some more 'nitpicking': As I already stated in the comments you could use std::decay_t<T> when removing const-qualifiers and references from types. But as shown in my examples, you don't need make_functions anymore with recent compiler versions. Edit: I forgot to talk about making your Callable types Semiregular. The thing is that Iterator types are required to be Regular, i.e. DefaultConstructible and EqualityComparable. This means that as long as your Functor type is not DefaultConstructible you have to wrap it in a std::optional. For this I would define a semiregular_box<T>-type which is T for Semiregular types or a wrapper around std::optional<T> otherwise. You can find a reference implementation in ranges-v3.
{ "domain": "codereview.stackexchange", "id": 24575, "tags": "c++, iterator" }
Would the buoyant force increase if I inflated a balloon that's inside a closed chamber hypothetically submerged underwater?
Question: Would the buoyant force increase if I inflated a balloon that's inside a closed chamber hypothetically submerged underwater? And vice versa, would the chamber's buoyancy decrease if deflated? I understand that the water pressure would be pushing down onto the closed chamber, but it seems that the balloon would be protected from the water pressure because its chamber is taking that force. It seems to me that in the balloon's case the depth wouldn't matter, as its atmosphere, I assume, is the chamber that it's in. Please disregard the extra space you see inside the chamber in the drawn photo and just think of it as if the balloon was the exact size of the chamber. Answer: Would the buoyant force increase if I inflated a balloon that's inside a closed chamber hypothetically submerged underwater? And vice versa, would the chamber's buoyancy decrease if deflated? First, I am assuming (from the diagram) that the two chambers are fastened to the bottom of the tank and that the chamber walls are rigid. The upward buoyant force on the chamber should neither increase nor decrease. The buoyant force on each chamber depends only on the weight of the volume of water that the chamber displaces, regardless of what is inside the chamber. However, the net vertical force acting on the chamber equals the downward weight of the contents of the chamber plus the weight of the water above the chamber acting downward, minus the upward buoyant force acting on the chamber. To the extent that you are adding the weight of air into the balloon on the left, it will ever so slightly decrease the net upward force (or increase the net downward force) acting on the chamber on the left compared to the chamber on the right. But the upward buoyant force itself should remain unchanged. You're correct, as the chamber walls are rigid and they're fastened to the bottom of the tank. Although I forgot to mention the size of them, which is important.
Also, to make this simple, let's say they virtually weigh nothing & are 1 cubic foot in size, 1'×1'×1'. If the chamber was completely filled with air (in this diagram that would be a full balloon in the chamber) the buoyant force would be around 62 lb or 7.5 gallons. Does the buoyancy remain unchanged if the balloon was completely deflated? Seems like the deflation is basically air being vacuumed from its chamber, lowering its pressure. It doesn't matter what the chambers and their contents weigh. Only the volume of the chamber counts. If that volume is 1 cubic foot, the weight of the water the volume displaces is about 62.4 pounds. So the upward buoyant force acting on both chambers would be 62.4 lbf. Bottom line: The weight of the chamber and its contents only affects the net vertical force on the chamber, not the buoyant force. The buoyant force only depends on the volume of the chamber. Hope this helps.
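The bookkeeping in this answer fits in a few lines of Python (taking the standard 62.4 lb/ft³ for fresh water, and ignoring the weight of the water column above the chamber, which the answer also mentions):

```python
rho_water = 62.4   # density of fresh water, lb per cubic foot
V_chamber = 1.0    # rigid chamber volume, cubic feet

# Buoyant force = weight of the displaced water; the contents are irrelevant
buoyant_force = rho_water * V_chamber          # 62.4 lbf for both chambers

def net_upward_force(weight_of_contents):
    """Net vertical force on a chamber fastened to the tank bottom
    (ignoring the water column above it, for simplicity)."""
    return buoyant_force - weight_of_contents
```

Inflating the balloon changes only `weight_of_contents`, and only by the tiny weight of the added air; `buoyant_force` is fixed by the chamber's volume alone.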
{ "domain": "physics.stackexchange", "id": 61550, "tags": "pressure, buoyancy" }
Epinephrine vs. Adrenaline
Question: Both names are widely used, with what appears to me as a slight prevalence of “epinephrine” in scientific literature and an overwhelming prevalence of “adrenaline” in popular media. Are there any well-documented and/or well-motivated guidelines for the usage of these terms? Answer: Apparently I asked too soon. Summarizing my recent findings, I conclude that adrenaline is the better term. To be more explicit, here is why: The US National Library of Medicine recommends “epinephrine”; this is, however, mainly due to historical reasons (Adrenalin used to be a trademark name in the USA). This quite comprehensive paper on the issue presents a series of arguments for the use of “adrenaline”. These include: Better compatibility with other languages (where epinephrine is not at all in use); Better integration with other biological terms (adrenergic receptors, adrenal gland, adrenalectomy, etc.); Better historical accuracy (epinephrine used to be the name for a neurochemically inactive crude extract from the adrenal glands). I find the arguments of the referenced publication convincing, though I would recommend you read them in full.
{ "domain": "biology.stackexchange", "id": 1599, "tags": "neuroscience, pharmacology, nomenclature" }
Multiple position estimates fusion
Question: I have a system in which I have two separate subsystems for estimating robot positions. The first subsystem is composed of 3 cameras which are used for detecting markers the robot is carrying, and which outputs 3 estimates of the robot's position and orientation. The second subsystem is located on the robot and measures the speed at two points of the robot. By numerically integrating those two I can get an estimate of the robot's position and orientation (because I am tracking two points at once). The first system is less accurate, but the second system drifts. The first system gives output about once a second, while the second one gives output much more frequently (100-200 times per second). I assume there must be a better approach than to just reset the position with the first system's estimate (as it is not 100% accurate), but to also use the accumulated position from the second sensor system and fuse that with the new data from the first system. Also, there is the question of how to fuse the 3 estimates of the first system. There must be a better way than a pure average, as it might happen that two estimates are exactly the same and the third one is completely different (meaning that it is probably more wrong). Do you have any fusion algorithms to recommend for such a system? I know about the Kalman filter, but I am having trouble figuring out how to use it, as the two systems output data at different frequencies. I hope the question is clear enough: what is the best approach to fuse the estimates into a more correct and accurate estimate? Thanks Answer: What you are describing is essentially a textbook case for using a Kalman filter. First you need a prediction step. Let's assume you are predicting the pose of the robot $(x,y,\theta)$, given the previous pose estimate and your high-frequency velocity measurements $(v,\omega)$, where $v$ is the linear velocity and $\omega$ is the angular velocity.
With the standard unicycle motion model, the prediction is $$x_{k+1} = x_k + v\,\Delta t\cos\theta_k, \qquad y_{k+1} = y_k + v\,\Delta t\sin\theta_k, \qquad \theta_{k+1} = \theta_k + \omega\,\Delta t,$$ and the covariance propagates as $$P_{k+1} = F P_k F^T + G Q G^T.$$ $P$ is the 3x3 covariance matrix that represents the uncertainty of the robot pose. $Q$ is the covariance of your inputs (i.e., how noisy are those velocity measurements?). $F$ is the Jacobian of the motion model with respect to the state and $G$ is the Jacobian with respect to the inputs, i.e., $$F = \begin{bmatrix} 1 & 0 & -v\,\Delta t\sin\theta \\ 0 & 1 & v\,\Delta t\cos\theta \\ 0 & 0 & 1 \end{bmatrix}, \qquad G = \begin{bmatrix} \Delta t\cos\theta & 0 \\ \Delta t\sin\theta & 0 \\ 0 & \Delta t \end{bmatrix}.$$ Now you have your less frequent correction updates, which actually measure the full state, making this quite simple, i.e., $$K = P(P + R)^{-1}, \qquad \hat{x} \leftarrow \hat{x} + K(z_k - \hat{x}), \qquad P \leftarrow (I - K)P,$$ where $z_k$ is your measurement (from the camera) and $R$ is the covariance matrix associated with that measurement (probably a diagonal matrix). This measurement is compared with the predicted measurement (which in your case is just the latest pose estimate). In this simple case, the Kalman gain is the proportion of the current pose covariance compared to the sum of the pose covariance and the measurement covariance. To answer your question about the different rates, you can just run your motion update repeatedly until your correction update arrives. For example, it might happen that the motion update occurs 100 times before you perform a correction. You also asked about how to handle three cameras. The easiest way is to just process them sequentially; just apply three corrections in a row. Another way is to stack them and perform a single update. You would need to adjust the correction update step to do it this way.
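To make the two-rate scheme concrete, here is a minimal NumPy sketch of the loop the answer describes: run the prediction at the odometry rate and apply a correction whenever a camera pose arrives. The unicycle motion model and all noise magnitudes here are illustrative assumptions, not values from the question.

```python
import numpy as np

def predict(x, P, v, w, dt, Q):
    """One high-rate motion (prediction) step of the EKF."""
    theta = x[2]
    # Unicycle motion model f(x, u)
    x_new = x + np.array([v * dt * np.cos(theta),
                          v * dt * np.sin(theta),
                          w * dt])
    # Jacobians of f with respect to the state (F) and the inputs (G)
    F = np.array([[1.0, 0.0, -v * dt * np.sin(theta)],
                  [0.0, 1.0,  v * dt * np.cos(theta)],
                  [0.0, 0.0,  1.0]])
    G = np.array([[dt * np.cos(theta), 0.0],
                  [dt * np.sin(theta), 0.0],
                  [0.0,                dt]])
    P_new = F @ P @ F.T + G @ Q @ G.T
    return x_new, P_new

def correct(x, P, z, R):
    """Low-rate correction with a full-state pose measurement (H = I)."""
    K = P @ np.linalg.inv(P + R)          # Kalman gain
    x_new = x + K @ (z - x)
    P_new = (np.eye(3) - K) @ P
    return x_new, P_new

# Run 100 predictions at 100 Hz, then one camera correction at 1 Hz
x = np.zeros(3)
P = np.eye(3) * 0.01
Q = np.diag([0.05, 0.02])                 # made-up velocity-measurement noise
R = np.diag([0.1, 0.1, 0.05])             # made-up camera-measurement noise
for _ in range(100):
    x, P = predict(x, P, v=1.0, w=0.1, dt=0.01, Q=Q)
z = np.array([1.0, 0.05, 0.1])            # hypothetical camera pose estimate
x, P = correct(x, P, z, R)
```

Processing the three cameras sequentially is then just three `correct` calls in a row, one per camera, each with its own `R`.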
{ "domain": "robotics.stackexchange", "id": 368, "tags": "sensors, localization, kalman-filter, sensor-fusion" }
Editing system files in Linux (as root) with GUI and CLI text editors #3
Question: One year and a half ago I posted the second iteration of this script for a review here: Editing system files in Linux (as root) with GUI and CLI text editors #2 Since then, it has been "hibernated" as I had way too much work, and I would like to ask you for a review of the possible final edit. I made a big effort for it to be final, but we all know there is always some space to improve. Thank you in advance! As stated in there: My intention is to POSIX-ly write one generalized function for running various text editors I use for different purposes through sudoedit, i.e. editing files as root safely. Safely = for instance, if a power loss occurs during the file edit; another example could be lost SSH connection, etc. The script follows:

#!/bin/sh

# Please, customize these lists to your preference before using this script!
cli_editors='vi nano'
gui_editors='subl xed'
# NON-COMPLIANT: code -w --user-data-dir; I did not find a way to run it through `sudoedit`;
# atom -w --no-sandbox; gedit -w does not work via rooted ssh; pluma does not have -w switch

# USAGE INSTRUCTIONS
# 1. Customize the editor lists at the beginning of this script.
#
# 2. Bash: Source this script in your ~/.bashrc or ~/.bash_aliases with:
#    . /full/path/to/sudoedit-enhanced
#    Other shells: Generally put it inside your shell's startup config file...
#
# 3. Call an alias, for instance, one CLI, and one GUI editor:
#    sunano /path/to/file
#    susubl /path/to/file
#
# Explanation: This script works with standard `sudoedit`, but
# it does a bit more checks and allows some GUI editors to be used.
# It needs to be sourced into your shell's environment first.
# Then simply use the pre-defined aliases or define some yourself.

sudoedit_err () {
    printf >&2 '%s\n' 'Error in sudoedit_run():'
    printf >&2 '%b\n' "$@"
    exit 1
}

sudoedit_run () {
    # print usage
    if [ "$#" -eq 2 ]; then
        printf >&2 '%s\n' 'Usage example: sunano /file1/path /file2/path'
        exit 1
    fi

    # sudo must be installed
    if ! command -v sudo > /dev/null 2>&1; then
        sudoedit_err "'sudo' is required by this function. This is because" \
            "'sudoedit' is part of 'sudo'\`s edit feature (sudo -e)"
    fi

    # the first argument must be an editor type
    case "$1" in
        ('cli'|'gui')
            ;;
        (*)
            sudoedit_err "'$1' is unrecognized editor type, expected 'cli' or 'gui'."
            ;;
    esac

    # store editor's type and name and move these two out of argument array
    editor_type=$1; editor_name=$2; shift 2

    # find and store path to this editor
    editor_path=$( command -v "$editor_name" 2> /dev/null )

    # check if such editor = executable path exists
    if ! [ -x "$editor_path" ]; then
        sudoedit_err "The '$editor_name' editor is not properly installed on this system."
    fi

    # `sudoedit` does not work with symlinks!
    # translating symlinks to normal file paths using `readlink` if available
    if command -v readlink > /dev/null 2>&1; then
        for file in "$@"; do
            if [ -L "$file" ]; then
                if ! file=$( readlink -f "$file" ); then
                    sudoedit_err "readlink -f $file failed."
                fi
            fi
            set -- "$@" "$file"
            shift
        done
    fi

    if [ "$editor_type" = gui ]; then
        # 1. GUI editors will "sit" on the terminal until closed thanks to the wait option
        # 2. Various editors errors might flood the terminal, so we redirect all output
        #    into prepared temporary file in order to filter possible "sudoedit: file unchanged" lines.
        if tmpfile=$( mktemp /tmp/sudoedit_run.XXXXXXXXXX ); then
            SUDO_EDITOR="$editor_path -w" sudoedit -- "$@" > "$tmpfile" 2>&1
            grep 'sudoedit:' "$tmpfile"
            rm "$tmpfile"
        else
            sudoedit_err "mktemp /tmp/sudoedit_run.XXXXXXXXXX failed."
        fi
    else
        # 1. CLI editors do not cause problems mentioned above.
        # 2. This is a generic and proper way using `sudoedit`;
        #    Running the editor with one-time SUDO_EDITOR setup.
        SUDO_EDITOR="$editor_path" sudoedit -- "$@"
    fi
}

sudoedit_sub () {
    ( sudoedit_run "$@" )
}

# Editor aliases generators:
# - avoid generating editors aliases for which editor is not installed
# - avoid generating editors aliases for which already have an alias
for cli_editor in $cli_editors; do
    if command -v "$cli_editor" > /dev/null 2>&1; then
        # shellcheck disable=SC2139,SC2086
        if [ -z "$( alias su$cli_editor 2> /dev/null)" ]; then
            alias su$cli_editor="sudoedit_sub cli $cli_editor"
        fi
    fi
done
for gui_editor in $gui_editors; do
    if command -v "$gui_editor" > /dev/null 2>&1; then
        # shellcheck disable=SC2139,SC2086
        if [ -z "$( alias su$gui_editor 2> /dev/null)" ]; then
            alias su$gui_editor="sudoedit_sub gui $gui_editor"
        fi
    fi
done
unset cli_editors gui_editors

Answer: Aside from your self-review, I noticed a few (very minor) things I thought may be worth pointing out. First, a design opinion rather than a comment on the code. I'm not sure putting the lists of editors in variables in the file is the best approach - it has its advantages, but allowing it to be managed separately in configuration files may have advantages. Were I a user of this script I would prefer to be told to go edit something like a $XDG_CONFIG_DIRS/sudoedit-enhanced file over being encouraged to edit the script itself. It feels a bit weird to have sudoedit_run print sunano as a usage example regardless of whether nano is desired, or available. Would it be at all realistic to offer up one of the aliases that were successfully defined? While you can usually assume /tmp exists, the user may prefer to store their temp files elsewhere. Consider having the mktemp call instead use the --tmpdir flag to create a file relative to the $TMPDIR set by the user - not many users will care, but those who do will probably have very strong feelings about it. The alias generators act a bit weird on editors whose names contain spaces - which binary file names can contain, even if they rarely do.
Right now, as you store each editor as a word in a space-separated string you really can't worry about what to do with them, but if you were to some day end up refactoring something in such a way that it becomes possible to do something funky like cli_editor="per se", consider how a line like su$cli_editor="sudoedit_run cli $cli_editor" would react to that (it's sometimes but not always an error, and there might be side effects). Quoting the alias name, to force a failure due to an invalid alias name (at least in bash, not sure if other shells permit spaces in alias names) might be preferable sudoedit_run also has a subtle bug in how it checks for an editor - command -v will report the name of shell builtins in favor of commands. If someone were to do something silly like listing echo as an editor, command -v echo would return echo. This would lead to it working just fine as long as there's a file called echo in your current directory, but get reported as not properly installed otherwise. It's enough of a corner case that it's almost certainly not worth caring about, but still
{ "domain": "codereview.stackexchange", "id": 42185, "tags": "linux, posix, sh, text-editor" }
Splitting set into subsets by criterion
Question: I am looking for an algorithm that will split a set $A = \{ a_1, a_2, ..., a_N \}$ into (possibly) multiple subsets based on a given criterion. In my case, the criterion is spatial overlap of the elements in the set, but it could be any criterion really. Let's define a function $crit(a_1, a_2)$ which takes as input two elements of $A$ and outputs a boolean indicating whether those two elements overlap. If $crit(a_i, a_j) = false$ for all $a_i, a_j \in A$, then the output of the algorithm should be a single set containing all of the elements of $A$. If $crit(a_i, a_j) = true$ for a single pair of elements $a_i, a_j \in A$, then the output of the algorithm should be two sets, one set containing all elements except for $a_i$, and one set containing all elements except for $a_j$. We continue this pattern until we reach the other extreme case, in which $crit(a_i, a_j) = true$ for all $a_i, a_j \in A$, in which case the output of the algorithm should be a partition of $A$: $N$ sets, each containing a single element from $A$. I would be surprised if this problem has not been formalized before. Is there an efficient algorithm for solving this problem? Edit: For clarification, it is ok if an element ends up in multiple sets in the output. In fact, an element should be in every set that does not contain an element that overlaps with it. For example, if $A = \{ w,x,y,z\}$, $crit(w, x) = true$, $crit(w, y) = false$, $crit(w, z) = true$, $crit(x, y) = false$, $crit(x, z) = false$, $crit(y, z) = false$, then the output should be $A' = \{ \{ w, y \}, \{ x, y, z \} \}$. Another edit: based on bbejot's response, I did a bit more looking and it does appear that I can reduce my problem to enumerating all maximal cliques of the graph represented by the adjacency matrix bbejot describes in his response. In response to my original question, there is no efficient algorithm for solving this problem, but the Bron–Kerbosch algorithm seems to be pretty standard. 
Answer: I believe there are two interpretations of your criterion. One interpretation is easily solvable, and the other is NP-Complete. First, a greedy algorithm for what you describe: Let $A_1$, $A_2$, ..., $A_N$ all start as the empty set. Loop through each $a_i \in A$. Put $a_i$ in the first $A_k$ such that for each $a_j$ already in $A_k$, $crit(a_i, a_j) = false$. After looping through all the elements of $A$, disregard any $A_i$ which is still empty. This greedy algorithm holds in the three examples you give. A second interpretation is: "Let the set $A_i$ be as large as possible but not including any elements from $A_1$ to $A_{i-1}$". Let $A'$ be an $N \times N$ matrix where $A_{i, j}'$ is 1 if $crit(a_i, a_j) = false$ and 0 otherwise. Then finding $A_1$ is exactly finding the maximum clique of the graph with adjacency matrix $A'$. This, of course, makes the problem NP-Hard. Let's hope you only need the first.
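The greedy algorithm from the answer reads directly as code; here is a sketch in Python (assuming a symmetric `crit` predicate), with the two extreme cases from the question as a sanity check:

```python
def greedy_split(elements, crit):
    """Place each element into the first existing group it does not
    conflict with (crit(a, b) == False for every b already in the group);
    otherwise open a new group. Empty groups never appear."""
    groups = []
    for a in elements:
        for g in groups:
            if all(not crit(a, b) for b in g):
                g.append(a)
                break
        else:
            groups.append([a])
    return groups

# The two extreme cases from the question:
assert greedy_split("wxyz", lambda a, b: False) == [["w", "x", "y", "z"]]
assert greedy_split("wxyz", lambda a, b: True) == [["w"], ["x"], ["y"], ["z"]]
```

Note that this produces a partition (each element lands in exactly one set), unlike the maximal-clique formulation in the question's edit, where an element may appear in several sets.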
{ "domain": "cstheory.stackexchange", "id": 870, "tags": "ds.algorithms" }
What is the UDP port subscribers open up?
Question: In preparing/configuring networking for a ROS environment, I have noticed that ROS subscribers (and rosout) opens UDP ports. (from TCPView) KineticListener.exe **4908 UDP ASHTRAY-1** 52513 * * KineticListener.exe 4908 TCP ashtray-1 51328 desktop-mp2jcia 3143 ESTABLISHED Given I am using TCPROS, I am wondering why this is so and what this 52513 port is for. UDP is not optimal, because I can't route UDP through the NAT, so if these UDP ports are important for something I may not be able to work around it. (rosout.exe also opens UDP, but it is a subscriber, so I think all subscribers do it). I would just like to know: What is this UDP port for? I would like to ignore it, because I can't (for many reasons) route UDP through the firewall. NOTE: if you notice the .exe extension and such, yes...this is a ROS installation (kinetic) on windows. I have ported the ROS client libraries and the python infrastructure (roscore, etc.) and they work very well with very little modification. Originally posted by codenotes on ROS Answers with karma: 261 on 2017-09-27 Post score: 0 Answer: I would just like to know: What is this UDP port for? Afaik each ROS node will also open a UDP port to facilitate subscribers that would like to exchange messages over UDPROS. Subscribers can negotiate that by adding the unreliable hint to their TransportHints. See also: wiki/roscpp/Overview - Publishers and Subscribers - Subscribing to a Topic - Transport Hints. I would like to ignore it, because I can't (for many reasons) route UDP through the firewall. As long as your subscribers do not request subscriptions over the unreliable transport (ie: use UDPROS), firewalling those UDP ports should be fine. There may even be fallback behaviour built-in, but I'm not sure of this: if the unreliable transport doesn't work, the reliable one (ie: TCPROS) gets used automatically. NOTE: if you notice the .exe extension and such, yes...this is a ROS installation (kinetic) on windows. 
I have ported the ROS client libraries and the python infrastructure (roscore, etc.) and they work very well with very little modification. You're referring to codenotes/ros_cygwin here, correct? Originally posted by gvdhoorn with karma: 86574 on 2017-09-27 This answer was ACCEPTED on the original site Post score: 1 Original comments Comment by codenotes on 2017-09-27: Actually, NOT cygwin, pure windows. I became frustrated with the cygwin layer and just sat down for a couple days and went through all python backend stuff and amended it to work on raw windows (10 anyway). The Visual C++ version of the roscpp and TF libraries I had done some time ago. Comment by gvdhoorn on 2017-09-27: That's very nice. Do you intend to provide those ports to the community as well? Comment by codenotes on 2017-09-27: BTW, thank you for the answer here...that puts my fears at rest. It is pretty cool to be doing roscore and the console tools (rostopic, rosnode, etc.) purely on windows. All that backed architecture works real well, credit to you guys. all that is needed is a change to the use of python's popen. Comment by codenotes on 2017-09-27: I do plan to share, absolutely. Not sure of the best way...I could document my steps for others to reproduce via a ROS wiki...not sure. Open to suggestions. There are remarkably few patches needed on the python/roscore and the roscpp/TF libraries. Comment by gvdhoorn on 2017-09-27: Just keep in mind I'm also "just a user". So my answers are based on my (current) understanding. Comment by gvdhoorn on 2017-09-27: re: sharing: the current instructions for a 'native ROS install' on Windows date back to Electric, Fuerte and Groovy. See wiki/win_ros fi. It would be awesome if we could update that for Kinetic. The patches themselves could either be upstreamed, or kept in a .. Comment by gvdhoorn on 2017-09-27: .. separate repository. As long as build instructions are available I imagine people should be able to manage. 
Binary distributions could then always be considered. Comment by codenotes on 2017-09-27: it's all good...your comment jives with my read of the source...but there's a lot going on, so just wanted to make sure. Anyway, if it proves an issue, I can patch roscpp and remove it...but I really don't like changing the core libraries too much. Comment by codenotes on 2017-09-27: windows has a package manager now (apparently) vcpkg or something. Maybe I just package for that. Well, one way or another, I will get the steps/patches out. But credit to y'all: the c++ source, python and roscpp has a large number of provisions for win32. Not much needs to be added. Comment by gvdhoorn on 2017-09-27: Hm. That pkg manager might be a nice idea. ROS2 uses chocolately, afaik. Patches themselves could probably be upstreamed, guarded with appropriate #ifdefs, similar to how some of the code paths for WIN32 are now included. I'm pretty sure those would be accepted. Comment by gvdhoorn on 2017-09-27:\ But credit to y'all: the c++ source, python and roscpp has a large number of provisions for win32. Not much needs to be added. can't take credit for it: that is largely the work of @Daniel Stonier and others on win_ros.
{ "domain": "robotics.stackexchange", "id": 28938, "tags": "ros, ros-kinetic, network" }
Is it possible for a system to be chaotic but not ergodic? If so, how?
Question: In a recent lecture on ergodicity and many-body localization, the presenter, Dmitry Abanin, mentioned that it is possible for a classical dynamical system to be chaotic but still fail to obey the ergodic hypothesis, which is frankly a pretty remarkable combination of properties. Unfortunately, the lecture had a lot of ground to cover and Abanin did not elaborate. So: Are any explicit examples known that have been shown to be both chaotic and non-ergodic? Is there some clear explanation for what properties of those systems allow them to show this behaviour? Answer: A trivial example of a non-ergodic, chaotic system is a 2D conservative system that is not fully chaotic, i.e., one with a mix of regular and chaotic regions in its phase space: each individual chaotic region is ergodic in itself, but since trajectories cannot cross the regular, invariant barriers between those regions, the system as a whole is not ergodic. An example of such a system is Chirikov's Standard Map.
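The standard map is a two-line iteration, so the mixed phase space is easy to explore yourself. Here is a small sketch in dimensionless units, with both angle and momentum taken mod $2\pi$ (at moderate kicking strength, e.g. $K \approx 1$, orbits started inside a regular island stay confined while chaotic orbits wander their own region, which is precisely the non-ergodic structure the answer describes):

```python
import math

def standard_map_orbit(theta0, p0, K, steps):
    """Iterate Chirikov's standard map:
         p'     = p + K*sin(theta)   (mod 2*pi)
         theta' = theta + p'         (mod 2*pi)
    and return the visited (theta, p) points."""
    two_pi = 2.0 * math.pi
    theta, p = theta0 % two_pi, p0 % two_pi
    orbit = [(theta, p)]
    for _ in range(steps):
        p = (p + K * math.sin(theta)) % two_pi
        theta = (theta + p) % two_pi
        orbit.append((theta, p))
    return orbit

# Scatter-plotting several orbits at K = 1.0 shows islands plus a chaotic sea;
# an orbit started inside an island never enters the sea.
orbit = standard_map_orbit(0.5, 0.2, K=1.0, steps=5000)
```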
{ "domain": "physics.stackexchange", "id": 58297, "tags": "classical-mechanics, chaos-theory, ergodicity" }
Finding the density
Question: 35.6 g of NaOH is dissolved in 150 g of 20% NaOH solution. What is the density of the solution? I have tried solving this problem, but I've come across a few obstacles. Firstly, I found the mass of NaOH in the original solution, which is 30 g. The added NaOH weighs 35.6 g, meaning there is ultimately 65.6 g of NaOH, and that the mass of the solution is 185.6 g. The next step would be to find the density of the solution, but this requires the volume of the solution, which is not given in the problem. I also tried to calculate the number of moles to later calculate V using the formula V = n·Vm, but I am not sure how to calculate the molar mass (M) of the solution. I am not sure if I am to use H2O or NaOH. Can someone tell me if I am going in the right direction and if my calculations are even in the right direction, and if not, what is the right one? Answer: You can simplify your problem by looking at the published tables. As has been explained in the comments, predicting densities is not an easy job and I don't know if it can even be predicted. This link provides the densities of NaOH solution as a function of mass percentage. https://wissen.science-and-fun.de/chemistry/chemistry/density-tables/density-of-sodium-hydroxide/ Plot the data in Excel and generate a function. Calculate the final mass percentage of your solution. Your first step is correct, i.e., the initial solution contains 30 g of NaOH; then you add 35.6 g to the solution. The total mass of the solution is 185.6 g, as you correctly calculated. What is the final mass percentage? 65.6 g NaOH / 185.6 g solution. This is around 35% wt/wt NaOH. Roughly, interpolating from the given table, the density should be around 1.39! Commas in German data are equivalent to decimals in English, so a mass percentage of 5,5 means 5.5.
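As a cross-check of the interpolation, here is the same calculation in Python. The two bracketing table entries are approximate 20 °C densities for aqueous NaOH quoted here for illustration only; check them against the linked table before relying on them.

```python
# Mass bookkeeping from the problem statement
m_added = 35.6                        # g of solid NaOH added
m_initial = 150.0                     # g of 20% (w/w) NaOH solution
m_naoh = 0.20 * m_initial + m_added   # 30 + 35.6 = 65.6 g NaOH in total
m_total = m_initial + m_added         # 185.6 g of final solution
w = m_naoh / m_total                  # ~0.353 final mass fraction

# Linear interpolation between two nearby density-table entries
# (approximate values, assumed for illustration; verify against the table)
w1, d1 = 0.30, 1.33                   # ~density of 30% w/w NaOH, g/mL
w2, d2 = 0.40, 1.43                   # ~density of 40% w/w NaOH, g/mL
density = d1 + (d2 - d1) * (w - w1) / (w2 - w1)   # roughly 1.38 g/mL
```

The result is consistent with the ≈1.39 g/mL read off the table in the answer.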
{ "domain": "chemistry.stackexchange", "id": 16795, "tags": "solutions, stoichiometry" }
What would be the rate of acceleration from gravity in a hollow sphere?
Question: Let's say the Earth is hollow and you are at the center of it (same mass, except all of it is on the outside, like a beach ball). If you move slightly to one side, you are now closer to that side, and therefore it exerts a stronger gravitational force; however, at the same time there is now more mass on the other side. At what rate would you fall? In which direction? Also, is there a scenario where, depending on the radius of the sphere, you would fall the other direction or towards the empty center? Answer: If the mass/charge is symmetrically distributed on your sphere, there is no force acting on you anywhere within the sphere. This is because every force originating from some part of the sphere will be canceled by another part. Like you said, if you move towards one side, the gravitational pull of that side will become stronger, but then there will also be "more" mass that is pulling you in the other direction. These two components cancel each other exactly.
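That cancellation can be illustrated numerically. The Monte Carlo sketch below sets all physical prefactors to 1 so only the geometry matters: sample points uniformly on a spherical shell and average the inverse-square pulls on a test mass.

```python
import math, random

def net_shell_force(x, y, z, R=1.0, n=200_000, seed=1):
    """Average gravitational pull on a test mass at (x, y, z) from n equal
    point masses spread uniformly over a sphere of radius R (prefactors
    such as G and the masses are set to 1)."""
    rng = random.Random(seed)
    fx = fy = fz = 0.0
    for _ in range(n):
        u = rng.uniform(-1.0, 1.0)               # cosine of the polar angle
        phi = rng.uniform(0.0, 2.0 * math.pi)
        s = math.sqrt(1.0 - u * u)
        sx, sy, sz = R * s * math.cos(phi), R * s * math.sin(phi), R * u
        dx, dy, dz = sx - x, sy - y, sz - z
        r2 = dx * dx + dy * dy + dz * dz
        w = 1.0 / (r2 * math.sqrt(r2))           # 1/r^2 along the unit vector
        fx += dx * w
        fy += dy * w
        fz += dz * w
    return fx / n, fy / n, fz / n

inside = net_shell_force(0.5, 0.0, 0.0)    # well off-center: still ~ (0, 0, 0)
outside = net_shell_force(2.0, 0.0, 0.0)   # ~ (-0.25, 0, 0), toward the center
```

Inside the shell the components average to zero even at a strongly off-center point, while outside the shell the pull matches a point mass at the center ($1/2^2 = 0.25$ at twice the radius).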
{ "domain": "physics.stackexchange", "id": 22243, "tags": "newtonian-gravity, planets" }
What does a subatomic charge actually mean?
Question: I was recently reading a popular science book called The Canon - The Beautiful Basics of Science by Natalie Angier, and it talks about subatomic particles like protons, neutrons and electrons in chapter 3. I came across this section on subatomic charges that made me wonder about the nature of the positive and negative charges that we associate with protons and electrons respectively. When you talk about a fully charged battery, you probably have in mind a battery loaded with a stored source of energy that you can slip into the compartment of your digital camera to take many exciting closeups of flowers. Saying that the proton and electron are charged particles while the neutron is not, however, doesn't mean that the proton and electron are little batteries of energy compared to the neutron. A particle's charge is not a measure of the particle's energy content. Instead, the definition is almost circular. A particle is deemed charged by its capacity to attract or repel other charged particles. I found this definition/description a bit lacking, and I still don't grasp the nature of a "subatomic charge", or what physicists mean when they say that a proton is positively charged and an electron is negatively charged?
What is happening at a fundamental level is that an electric field is being applied (via the potential across the battery) that is causing those electrons to move. If I wanted a magnetic field to be generated, I could get one from a single pair of charges, say, two protons placed next to one another. The protons will repel (like charges repel) and fly away from each other. These moving protons create a current (moving charge) which creates a magnetic field. Your author is right when he says that charges attract or repel other charges. To help connect it to more familiar concepts, consider this: The negative end of your battery terminal attracts electrons and the positive end repels them. (The signs of battery terminals are actually opposite the conventional usage of positive and negative when referring to elementary charges. As a physicist, I blame electrical engineers.) The repelled and attracted electrons start moving, and these moving electrons can be used to do work.
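To put a number on the two-proton example, here is the back-of-the-envelope Coulomb calculation (the 1 Å separation is an arbitrary illustrative choice, roughly an atomic diameter):

```python
k = 8.99e9        # Coulomb constant, N*m^2/C^2
e = 1.602e-19     # elementary charge (proton charge), C
m_p = 1.67e-27    # proton mass, kg
r = 1.0e-10       # assumed separation, m

F = k * e**2 / r**2   # repulsive force between the protons, ~2.3e-8 N
a = F / m_p           # resulting initial acceleration, ~1.4e19 m/s^2
```

The force looks tiny in everyday units, but the resulting acceleration is enormous for something as light as a proton, which is why the pair "fly away" from each other.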
{ "domain": "physics.stackexchange", "id": 5840, "tags": "electrons, atoms, charge, protons, neutrons" }
Isolating barium(II), copper(II) and zinc(II) from aqueous solution
Question: Q15 Each metallic ion was separated from an aqueous solution containing $\ce{Ba^2+},$ $\ce{Cu^2+},$ and $\ce{Zn^2+}$ by the procedure shown in the following figure. From ①–⑥ in the table below choose the correct combination of ions that predominates in the precipitates a, b and the filtrate c. For the precipitate a, I know the answer is $\ce{Ba^2+}$ ion because it has a more negative standard reduction potential than the other ions, so it tends to displace the sulfuric acid to form insoluble barium sulfate. But I don't know which one can displace $\ce{NaOH}.$ The $\ce{Na+}$ ion reduction potential is a lot more negative than both $\ce{Cu^2+}$ and $\ce{Zn^2+}$ ions. I suspect there is something in the 'added in excess' part, but I don't know how it can be any different. Answer: Once $\ce{Ba^2+}$ is eliminated as poorly soluble barium(II) sulfate, there is a solution of soluble copper(II) and zinc(II) sulfates. Considering this has been done by careful/gradual addition of sulfuric acid, and the fact that both remaining sulfates undergo hydrolysis, the solution is going to be slightly acidic — this is one reason for sodium hydroxide "added in excess". Another reason, as both Ivan and Waylander pointed out in the comments, is the difference between the reactivity of copper(II) and zinc(II) in alkali medium. Long story short, you are probably supposed to know the specific properties of the common transition metal cations, as the reduction potential isn't really helpful here. Copper(II) Copper(II) with $\ce{NaOH}$ first forms colloidal copper(II) hydroxide; the gel then solidifies to a light-blue amorphous substance upon subsequent addition of hydroxide: $$\ce{CuSO4(aq) + 2 NaOH(aq) -> Cu(OH)2(s) + Na2SO4(aq)}$$ $\ce{Cu(OH)2(s)}$ appears to be precipitate b.
Sodium tetrahydroxocuprate(II) $\ce{Na2[Cu(OH)4]}$ doesn't form here: it's very moisture-sensitive and in air quickly turns dark brown: $$\ce{Na2[Cu(OH)4](aq) -> CuO(s) + H2O(l) + 2 NaOH(aq)}$$ Zinc(II) Zinc(II), on the other hand, with $\ce{NaOH}$ first forms a white precipitate of zinc(II) hydroxide: $$\ce{ZnSO4(aq) + 2 NaOH(aq) -> Zn(OH)2(s) + Na2SO4(aq)}$$ which then dissolves again, forming colorless soluble sodium tetrahydroxozincate(II) (filtrate c): $$\ce{Zn(OH)2(s) + 2 NaOH(aq) -> Na2[Zn(OH)4](aq)}$$ An excess of $\ce{NaOH}$ is also necessary to keep $\ce{Na2[Zn(OH)4]}$ in solution: upon diluting (lowering pH, adding water/acid), tetrahydroxozincate(II) decomposes to zinc(II) hydroxide. Note: chemical reactions are adapted from [1]. References R. A. Lidin, V. A. Molochko, and L. L. Andreeva, Reactivity of Inorganic Substances, 3rd ed.; Khimia: Moscow, 2000. (in Russian)
{ "domain": "chemistry.stackexchange", "id": 12695, "tags": "aqueous-solution, solubility, transition-metals, reduction-potential" }
What is the difference between kinematically non-rotating and dynamically non-rotating reference frames?
Question: Well, the question in the title basically sums everything up, but to put it in more context... I am trying to wrap my head around the relativistic Geocentric Celestial Reference System (GCRS). Reading the introductory documents such as IAU Resolution 2000, I notice they state that the GCRS is kinematically non-rotating with respect to the Barycentric Reference Frame. I've seen reference frames elsewhere described as dynamically non-rotating. I don't understand the difference. It is not clear at all what they mean. Is the GCRS rotating or not? Any suggestions? In the document, being kinematically non-rotating is stated in the introduction, and being dynamically non-rotating is mentioned just before Eq. $24$. To be clear, the document is just an example of where I have seen the terminology used. I am sure it will be used elsewhere. Answer: The distinction between the two is more readily understood from a Newtonian perspective, where reference frames are global (i.e., universe-spanning). A kinematically non-rotating reference frame is one in which the remote stars, or more recently, the remote quasars, do not appear to be rotating with respect to the origin of the frame of reference. A dynamically non-rotating reference frame is one in which none of the fictitious accelerations due to rotation (centrifugal, Coriolis, and Euler accelerations) are needed to explain the dynamical behavior of a moving object. That reference frames are local as opposed to global in general relativity makes this distinction a bit tougher in general relativity. The distinction still applies if the space in the vicinity of two reference systems is close to Newtonian. The remote stars (remote quasars) are still assumed to form the foundation of a kinematically non-rotating reference system, while descriptions of the equations of motion of a local object are still assumed to distinguish between dynamically non-rotating and rotating reference systems.
For more, see Klioner and Soffel. References: Klioner, Sergei A., and Michael Soffel. "Nonrotating astronomical relativistic reference frames." Astronomy and Astrophysics 334 (1998): 1123-1135.
{ "domain": "physics.stackexchange", "id": 42101, "tags": "general-relativity, reference-frames, terminology, satellites" }
catkin_package INCLUDE_DIR set CFlags to .pc under devel/ but not set CFlags to installspace
Question: Hi, I have a package with catkin_package( INCLUDE_DIRS include1 include2 ) and this generates a .pc under devel (./devel/lib/pkgconfig/.pc) that contains Cflags: -I.../include1 -I../include2 but the .pc under build/..../catkin_generated/installspace/.pc becomes Cflags: -I/tmp/catkin_ws/install/include Is this expected behavior? Why is it not Cflags -I/tmp/catkin_ws/install/include/<package>/include1 -I/tmp/catkin_ws/install/include/<package>/include2 Originally posted by Kei Okada on ROS Answers with karma: 1186 on 2013-08-16 Post score: 1 Answer: This is expected behavior. In install space you should have installed all headers (from potentially various locations in source) into a single include folder (following FHS). Originally posted by Dirk Thomas with karma: 16276 on 2013-08-16 This answer was ACCEPTED on the original site Post score: 1 Original comments Comment by joq on 2013-08-16: With catkin, you need to explicitly install any headers you export. See: http://ros.org/doc/groovy/api/catkin/html/howto/building_libraries.html#installing Comment by Kei Okada on 2013-08-16: Thanks for the answer! I'm trying to release a 3rd-party library that does not follow the FHS standard, using catkin/bloom. Are there any good examples for this? Comment by joq on 2013-08-16: The link above is the recommended solution.
{ "domain": "robotics.stackexchange", "id": 15284, "tags": "ros, catkin, catkin-package" }
Why do internal forces not affect the conservation of momentum?
Question: How is momentum conserved? I know that the condition is that no external resultant force should act on the interacting objects. But how is the momentum conserved if the objects' surfaces touch and there's friction between them, even if for a split second? Answer: As you say, momentum is conserved when there is no external resultant force acting on the system. This is a statement of Newton's 2nd law: when the net force acting on a system is zero, there is no change in momentum. $$ \vec{F}_\mathrm{net} = \frac{d\vec{p}}{dt} $$ an example It might help to think about an example. Let's consider the head-on collision of two objects. During the collision each object applies a force to the other. Object $A$ pushes on object $B$ causing $B$'s momentum to change: $$\Delta \vec{p}_B = \vec{F}_{A,B}\, \Delta t $$ And object $B$ pushes on object $A$ causing $A$'s momentum to change: $$\Delta \vec{p}_A = \vec{F}_{B,A}\, \Delta t $$ Newton's 3rd law tells us that $\vec{F}_{B,A} = -\vec{F}_{A,B}$, so $\Delta\vec{p}_A = - \Delta\vec{p}_B$. We need to recognize that each force is applied for the same time interval $\Delta t$, too. The forces are applied during the collision, and both objects start and stop touching at the same times. When we consider the whole system of both objects, the total change in momentum is zero. $$\Delta\vec{p}_\mathrm{sys} = \Delta\vec{p}_A + \Delta\vec{p}_B = 0$$ Total momentum is conserved during the collision. Because the interaction forces ($\vec{F}_{A,B}$ and $\vec{F}_{B,A}$) are internal to the system, the system doesn't have a net force. Notice we did not care how big those forces were, nor did we care how long the collision lasted. What matters is the symmetry of the forces that comes from Newton's 3rd law. energy So what does this have to do with energy? The example didn't use energy at all. Let's take our example particles to be equal mass and initially moving with equal speed in opposite directions. 
In order to conserve momentum the particles must have equal and opposite velocities after the collision, but they don't necessarily have to have the same velocities as before. If energy is lost, they will each be moving slower than before. An explosion during the collision could add energy to the system, causing them to go faster than before. Each case would still conserve momentum.
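These cases can be checked numerically. Below is a small sketch (equal masses, 1-D, center-of-momentum frame as in the example): momentum stays at zero whether the collision is elastic, lossy, or explosive, while the kinetic energy scales freely.

```python
# Numeric check of the collision example above: two equal masses with
# equal and opposite velocities. Momentum is conserved in every case;
# kinetic energy may decrease (inelastic) or increase (explosion).

def collide(m, v1, v2, energy_factor=1.0):
    """1-D head-on collision of equal masses m with velocities v1 = -v2.
    energy_factor scales the outgoing speed: 1.0 = elastic,
    <1.0 = energy lost, >1.0 = energy added (an 'explosion')."""
    # Total momentum is zero, so the particles must leave with equal and
    # opposite velocities; only their common speed can change.
    speed_out = abs(v1) * energy_factor
    return (-speed_out if v1 > 0 else speed_out,
            speed_out if v1 > 0 else -speed_out)

m, v = 1.0, 3.0
for factor in (1.0, 0.5, 2.0):            # elastic, lossy, explosive
    u1, u2 = collide(m, v, -v, factor)
    p_before = m*v + m*(-v)
    p_after  = m*u1 + m*u2
    assert abs(p_before - p_after) < 1e-12          # momentum conserved
    ke_before = 2 * (0.5 * m * v**2)
    ke_after  = 0.5*m*u1**2 + 0.5*m*u2**2
    assert abs(ke_after - ke_before * factor**2) < 1e-12   # KE scales by factor^2
```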
{ "domain": "physics.stackexchange", "id": 73761, "tags": "newtonian-mechanics, forces, momentum, conservation-laws, collision" }
how to modify the color or material of link in gazebo version 1.2?
Question: When I follow the tutorial to modify the color or material using something like " " or "color rgb ", Gazebo reports errors. How can I modify the color or materials? Originally posted by lugd1229 on Gazebo Answers with karma: 75 on 2012-11-18 Post score: 0 Answer: Hey, here is an example that works in Gazebo 1.2.5: <material> <script> <uri>file://media/materials/scripts/gazebo.material</uri> <name>Gazebo/Grey</name> </script> </material> Originally posted by AndreiHaidu with karma: 2108 on 2013-01-08 This answer was ACCEPTED on the original site Post score: 1 Original comments Comment by piyushk on 2013-01-09: Are you using this code in a URDF file or a SDF file? Comment by AndreiHaidu on 2013-01-09: it is an SDF file Comment by nkoenig on 2013-01-09: Check out the SDF documentation: http://gazebosim.org/sdf Comment by J_Schaefer on 2015-12-03: This doesn't work for me. I try to spawn e.g. a simple box from an urdf.xacro-file. And it always is white in Gazebo (2.2.3). Is there another way, that allows me to set the color?
{ "domain": "robotics.stackexchange", "id": 2813, "tags": "gazebo" }
Java RSA / AES-GCM encryption utility
Question: I wrote a utility class that has simple methods for: En/decrypting data with 256-bit AES-GCM using a password Generating RSA keypair Saving RSA keypair and storing private key encrypted En/decrypting data with RSA (data with AES-GCM & random key, key with RSA) import javax.crypto.*; import javax.crypto.spec.GCMParameterSpec; import javax.crypto.spec.PBEKeySpec; import javax.crypto.spec.SecretKeySpec; import java.io.FileOutputStream; import java.nio.file.Files; import java.nio.file.Paths; import java.security.*; import java.security.spec.*; import java.util.Arrays; public class GCM { private final int SALT_LENGTH = 256/8; private final int IV_LENGTH = 12; private final int KEY_LENGTH = 256; private final int GCM_TAG_LENGTH = 16; private KeyPair keyPair; public GCM() { } private SecretKeySpec generateKey(char[] password, byte[] salt) throws InvalidKeySpecException, NoSuchAlgorithmException { SecretKeyFactory factory = SecretKeyFactory.getInstance(/*"PBKDF2WithHmacSHA3-512"*/"PBKDF2WithHmacSHA256"); KeySpec spec = new PBEKeySpec(password, salt, 65536, KEY_LENGTH); SecretKey tmp = factory.generateSecret(spec); SecretKeySpec secretKeySpec = new SecretKeySpec(tmp.getEncoded(), "AES"); return secretKeySpec; } public void generateKeyPair() //generate 4096-bit RSA keypair throws NoSuchAlgorithmException { KeyPairGenerator kpg = KeyPairGenerator.getInstance("RSA"); kpg.initialize(4096, new SecureRandom()); keyPair = kpg.genKeyPair(); } public void loadKeyPair(char[] password) { //loads RSA keypair from file try { byte[] publicKeyBytes = Files.readAllBytes(Paths.get("public.key")); byte[] privateKeyBytes = decrypt(Files.readAllBytes(Paths.get("private.key")), password); X509EncodedKeySpec x509EncodedKeySpec = new X509EncodedKeySpec(publicKeyBytes); KeyFactory keyFactory = KeyFactory.getInstance("RSA"); PublicKey publicKey = keyFactory.generatePublic(x509EncodedKeySpec); PKCS8EncodedKeySpec pkcs8EncodedKeySpec = new PKCS8EncodedKeySpec(privateKeyBytes); PrivateKey 
privateKey = keyFactory.generatePrivate(pkcs8EncodedKeySpec); this.keyPair = new KeyPair(publicKey, privateKey); } catch (Exception e) { e.printStackTrace(); } } public void saveKeyPair(char[] password) { //saves currently used keypair to file try { FileOutputStream fos = new FileOutputStream("private.key"); fos.write(encrypt(keyPair.getPrivate().getEncoded(), password)); fos.close(); fos = new FileOutputStream("public.key"); fos.write(keyPair.getPublic().getEncoded()); fos.close(); } catch (Exception e) { e.printStackTrace(); } } public byte[] getPublicKey() { //returns currently used public key return keyPair.getPublic().getEncoded(); } public byte[] encrypt(byte[] data, char[] password) { //encrypts data with password-generated key byte[] salt = new byte[SALT_LENGTH]; byte[] iv = new byte[IV_LENGTH]; byte[] result = null; new SecureRandom().nextBytes(salt); new SecureRandom().nextBytes(iv); try { SecretKeySpec secretKeySpec = generateKey(password, salt); byte[] encrypted = encrypt(data, secretKeySpec, iv); result = new byte[encrypted.length + salt.length + iv.length]; System.arraycopy(salt, 0, result, 0, salt.length); System.arraycopy(iv, 0, result, salt.length, iv.length); System.arraycopy(encrypted,0, result, salt.length + iv.length, encrypted.length); } catch (Exception e) { e.printStackTrace(); } return result; } public byte[] encrypt(byte[] data, byte[] publicKey) { //encrypts data with recipient public key byte[] iv = new byte[IV_LENGTH]; byte[] result = null; new SecureRandom().nextBytes(iv); try { KeyGenerator keyGen = KeyGenerator.getInstance("AES"); keyGen.init(256, new SecureRandom()); SecretKey secretKey = keyGen.generateKey(); SecretKeySpec secretKeySpec = new SecretKeySpec(secretKey.getEncoded(), "AES"); byte[] encrypted = encrypt(data, secretKeySpec, iv); Cipher cipher = Cipher.getInstance("RSA/NONE/OAEPWithSHA3-512AndMGF1Padding"); X509EncodedKeySpec keySpec = new X509EncodedKeySpec(publicKey); KeyFactory keyFactory = 
KeyFactory.getInstance("RSA"); cipher.init(Cipher.ENCRYPT_MODE, keyFactory.generatePublic(keySpec)); byte[] symmetricKey = cipher.doFinal(secretKeySpec.getEncoded()); result = new byte[encrypted.length + iv.length + symmetricKey.length]; System.arraycopy(symmetricKey, 0, result, 0, symmetricKey.length); System.arraycopy(iv, 0, result, symmetricKey.length, iv.length); System.arraycopy(encrypted,0, result, iv.length + symmetricKey.length, encrypted.length); } catch (Exception e) { e.printStackTrace(); } return result; } private byte[] encrypt(byte[] data, SecretKeySpec secretKeySpec, byte[] iv) //encryption utility method throws BadPaddingException, IllegalBlockSizeException, NoSuchPaddingException, NoSuchAlgorithmException, InvalidAlgorithmParameterException, InvalidKeyException { Cipher cipher = Cipher.getInstance("AES/GCM/NoPadding"); GCMParameterSpec gcmParameterSpec = new GCMParameterSpec(GCM_TAG_LENGTH * 8, iv); cipher.init(Cipher.ENCRYPT_MODE, secretKeySpec, gcmParameterSpec); return cipher.doFinal(data); } public byte[] decrypt(byte[] data, char[] password) { //decrypts data with password-generated key byte[] salt = Arrays.copyOfRange(data, 0, SALT_LENGTH); byte[] iv = Arrays.copyOfRange(data, SALT_LENGTH, SALT_LENGTH + IV_LENGTH); byte[] encryptedData = Arrays.copyOfRange(data, SALT_LENGTH + IV_LENGTH, data.length); byte[] result = null; try { SecretKeySpec secretKeySpec = generateKey(password, salt); result = decrypt(encryptedData, secretKeySpec, iv); } catch (Exception e) { e.printStackTrace(); } return result; } public byte[] decrypt(byte[] data) { //decrypts data with currently loaded private key byte[] symmetricKey = Arrays.copyOfRange(data, 0, 4096/8); byte[] iv = Arrays.copyOfRange(data, 4096/8, 4096/8 + IV_LENGTH); byte[] encryptedData = Arrays.copyOfRange(data, 4096/8 + IV_LENGTH, data.length); byte[] decrypted = null; try { Cipher cipher = Cipher.getInstance("RSA/NONE/OAEPWithSHA3-512AndMGF1Padding"); cipher.init(Cipher.DECRYPT_MODE, 
keyPair.getPrivate()); byte[] decryptedSymmetricKey = cipher.doFinal(symmetricKey); SecretKeySpec secretKeySpec = new SecretKeySpec(decryptedSymmetricKey, "AES"); decrypted = decrypt(encryptedData, secretKeySpec, iv); } catch (Exception e) { e.printStackTrace(); } return decrypted; } private byte[] decrypt(byte[] data, SecretKeySpec secretKeySpec, byte[] iv) throws NoSuchPaddingException, NoSuchAlgorithmException, InvalidAlgorithmParameterException, InvalidKeyException, BadPaddingException, IllegalBlockSizeException { //decryption utility method Cipher cipher = Cipher.getInstance("AES/GCM/NoPadding"); GCMParameterSpec gcmParameterSpec = new GCMParameterSpec(GCM_TAG_LENGTH * 8, iv); cipher.init(Cipher.DECRYPT_MODE, secretKeySpec, gcmParameterSpec); return cipher.doFinal(data); } } The IV, and either RSA-encrypted symmetric key or salt used for key derivation are prepended to the output byte[]. It is used as follows: GCM gcm = new GCM(); //symmetric encryption byte[] encrypted = gcm.encrypt(byte[] data, char[] password); byte[] decrypted = gcm.decrypt(byte[] encrypted, char[] password); //asymmetric encryption gcm.generateKeyPair(); gcm.saveKeyPair(char[] password); gcm.loadKeyPair(char[] password); byte[] encrypted = gcm.encrypt(byte[] data, byte[] publicKey); byte[] decrypted = gcm.decrypt(byte[] data); Since it is to be used in a closed system, compatibility isn't all that important (for example with key storage), but feel free to suggest alternatives. Now I have the following questions about my code: Does it have any security flaws? Are AES-GCM's features used properly (eg. Authenticated Encryption)? Is the IV generation random enough to use with AES-GCM? Does it follow best practices? Answer: Pokémon error handling "Catch them all" Never (ever!) write code like this: try { ... } catch (Exception e) { ... } It allows the program to continue executing even if fatal errors are encountered which is very bad. Only catch errors that you can handle. 
In practice, that means that you almost never catch anything. File handling Saving and loading the key to the process' current working directory is not nice. Better to let the user pass a Path specifying the directory to use in the GCM constructor: public GCM(Path path) { this.path = path; } It is also a good idea to store the filenames in constants because it guards against spelling errors: private final String PUBLIC_KEY = "public.key"; private final String PRIVATE_KEY = "private.key"; So to get the path to "public.key", you write path.resolve(PUBLIC_KEY). Mixing Java NIO with old-style IO is not nice. Your code for writing the keypair to file can be replaced with Java NIO calls: byte[] encodedPrivateKey = keyPair.getPrivate().getEncoded(); byte[] privateKeyBytes = encrypt(encodedPrivateKey, password); Files.write(path.resolve(PRIVATE_KEY), privateKeyBytes); byte[] publicKeyBytes = keyPair.getPublic().getEncoded(); Files.write(path.resolve(PUBLIC_KEY), publicKeyBytes); Code organization While reviewing your class I realized that it does two things; it handles key pairs and it encrypts/decrypts data. Ideally, each class should only do one job. Therefore I think a better way to organize the code is to have one class for the key pair handling and one for encryption/decryption. Furthermore, the only state in your class is the keyPair variable which is not referenced in many of the methods. As a rule of thumb, if a method doesn't access any state then it should be made into a static method (a function). While refactoring that, I realized that your whole API can be better expressed as two static classes without any state. Comments If I were reviewing your code for a real project, I'd definitely complain about the lack of comments. There are a lot of things in there that are not obvious when reading the code. For example, what is an IV? Where does the number 4096 come from? What is GCM_TAG_LENGTH? And so on. 
Result With comments removed: import javax.crypto.*; import javax.crypto.spec.GCMParameterSpec; import javax.crypto.spec.PBEKeySpec; import javax.crypto.spec.SecretKeySpec; import java.io.IOException; import java.nio.file.*; import java.security.*; import java.security.spec.*; import java.util.Arrays; class GCM { private final static int SALT_LENGTH = 256/8; private final static int IV_LENGTH = 12; private final static int KEY_LENGTH = 256; private final static int GCM_TAG_LENGTH = 16; private static SecretKeySpec generateKey(char[] password, byte[] salt) throws InvalidKeySpecException, NoSuchAlgorithmException { SecretKeyFactory factory = SecretKeyFactory.getInstance("PBKDF2WithHmacSHA256"); KeySpec spec = new PBEKeySpec(password, salt, 65536, KEY_LENGTH); SecretKey tmp = factory.generateSecret(spec); SecretKeySpec secretKeySpec = new SecretKeySpec(tmp.getEncoded(), "AES"); return secretKeySpec; } public static byte[] decrypt(byte[] data, KeyPair keyPair) throws GeneralSecurityException { byte[] symmetricKey = Arrays.copyOfRange(data, 0, 4096/8); byte[] iv = Arrays.copyOfRange(data, 4096/8, 4096/8 + IV_LENGTH); byte[] encryptedData = Arrays.copyOfRange(data, 4096/8 + IV_LENGTH, data.length); Cipher cipher = Cipher.getInstance( "RSA/NONE/OAEPWithSHA3-512AndMGF1Padding"); cipher.init(Cipher.DECRYPT_MODE, keyPair.getPrivate()); byte[] decryptedSymmetricKey = cipher.doFinal(symmetricKey); SecretKeySpec secretKeySpec = new SecretKeySpec(decryptedSymmetricKey, "AES"); return decrypt(encryptedData, secretKeySpec, iv); } private static byte[] decrypt(byte[] data, SecretKeySpec secretKeySpec, byte[] iv) throws GeneralSecurityException { Cipher cipher = Cipher.getInstance("AES/GCM/NoPadding"); GCMParameterSpec gcmParameterSpec = new GCMParameterSpec(GCM_TAG_LENGTH * 8, iv); cipher.init(Cipher.DECRYPT_MODE, secretKeySpec, gcmParameterSpec); return cipher.doFinal(data); } public static byte[] decrypt(byte[] data, char[] password) throws GeneralSecurityException { byte[] salt 
= Arrays.copyOfRange(data, 0, SALT_LENGTH); byte[] iv = Arrays.copyOfRange(data, SALT_LENGTH, SALT_LENGTH + IV_LENGTH); byte[] encryptedData = Arrays.copyOfRange(data, SALT_LENGTH + IV_LENGTH, data.length); SecretKeySpec secretKeySpec = generateKey(password, salt); return decrypt(encryptedData, secretKeySpec, iv); } public static byte[] encrypt(byte[] data, byte[] publicKey) throws GeneralSecurityException { byte[] iv = new byte[IV_LENGTH]; new SecureRandom().nextBytes(iv); KeyGenerator keyGen = KeyGenerator.getInstance("AES"); keyGen.init(256, new SecureRandom()); SecretKey secretKey = keyGen.generateKey(); SecretKeySpec secretKeySpec = new SecretKeySpec(secretKey.getEncoded(), "AES"); byte[] encrypted = encrypt(data, secretKeySpec, iv); Cipher cipher = Cipher.getInstance("RSA/NONE/OAEPWithSHA3-512AndMGF1Padding"); X509EncodedKeySpec keySpec = new X509EncodedKeySpec(publicKey); KeyFactory keyFactory = KeyFactory.getInstance("RSA"); cipher.init(Cipher.ENCRYPT_MODE, keyFactory.generatePublic(keySpec)); byte[] symmetricKey = cipher.doFinal(secretKeySpec.getEncoded()); byte[] result = new byte[encrypted.length + iv.length + symmetricKey.length]; System.arraycopy(symmetricKey, 0, result, 0, symmetricKey.length); System.arraycopy(iv, 0, result, symmetricKey.length, iv.length); System.arraycopy(encrypted,0, result, iv.length + symmetricKey.length, encrypted.length); return result; } public static byte[] encrypt(byte[] data, char[] password) throws GeneralSecurityException { byte[] salt = new byte[SALT_LENGTH]; byte[] iv = new byte[IV_LENGTH]; new SecureRandom().nextBytes(salt); new SecureRandom().nextBytes(iv); SecretKeySpec secretKeySpec = generateKey(password, salt); byte[] encrypted = encrypt(data, secretKeySpec, iv); byte[] result = new byte[encrypted.length + salt.length + iv.length]; System.arraycopy(salt, 0, result, 0, salt.length); System.arraycopy(iv, 0, result, salt.length, iv.length); System.arraycopy(encrypted, 0, result, salt.length + iv.length, 
encrypted.length); return result; } private static byte[] encrypt(byte[] data, SecretKeySpec secretKeySpec, byte[] iv) throws GeneralSecurityException { Cipher cipher = Cipher.getInstance("AES/GCM/NoPadding"); GCMParameterSpec gcmParameterSpec = new GCMParameterSpec(GCM_TAG_LENGTH * 8, iv); cipher.init(Cipher.ENCRYPT_MODE, secretKeySpec, gcmParameterSpec); return cipher.doFinal(data); } } class KeyPairs { private final static String PUBLIC_KEY = "public.key"; private final static String PRIVATE_KEY = "private.key"; public static KeyPair generate() throws NoSuchAlgorithmException { KeyPairGenerator kpg = KeyPairGenerator.getInstance("RSA"); kpg.initialize(4096, new SecureRandom()); return kpg.genKeyPair(); } public static KeyPair load(Path path, char[] password) throws IOException, GeneralSecurityException { Path publicKeyPath = path.resolve(PUBLIC_KEY); byte[] publicKeyBytes = Files.readAllBytes(publicKeyPath); Path privateKeyPath = path.resolve(PRIVATE_KEY); byte[] privateKeyBytes = GCM.decrypt( Files.readAllBytes(privateKeyPath), password); X509EncodedKeySpec x509EncodedKeySpec = new X509EncodedKeySpec(publicKeyBytes); KeyFactory keyFactory = KeyFactory.getInstance("RSA"); PublicKey publicKey = keyFactory.generatePublic(x509EncodedKeySpec); PKCS8EncodedKeySpec pkcs8EncodedKeySpec = new PKCS8EncodedKeySpec(privateKeyBytes); PrivateKey privateKey = keyFactory.generatePrivate(pkcs8EncodedKeySpec); return new KeyPair(publicKey, privateKey); } public static void save(Path path, KeyPair keyPair, char[] password) throws IOException, GeneralSecurityException { byte[] encodedPrivateKey = keyPair.getPrivate().getEncoded(); byte[] privateKeyBytes = GCM.encrypt(encodedPrivateKey, password); Files.write(path.resolve(PRIVATE_KEY), privateKeyBytes); byte[] publicKeyBytes = keyPair.getPublic().getEncoded(); Files.write(path.resolve(PUBLIC_KEY), publicKeyBytes); } } I think your usage of the Java Crypto API is correct, but I'm no expert.
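The byte layout used for password-based encryption above (output = salt ‖ IV ‖ ciphertext, key derived with PBKDF2-HMAC-SHA256, 65536 iterations, 256-bit key) can be sketched language-neutrally. Here is a Python sketch mirroring the Java constants; Python's standard library has PBKDF2 but not GCM, so the "ciphertext" below is a stand-in that only demonstrates the framing, not the encryption itself.

```python
# Sketch of the salt/IV/ciphertext framing and key derivation used by
# the reviewed class. Constants mirror the Java code; no actual AES-GCM
# is performed here (stdlib only).

import hashlib
import os

SALT_LENGTH = 256 // 8   # 32 bytes, as in the Java SALT_LENGTH
IV_LENGTH = 12           # 96-bit GCM nonce
ITERATIONS = 65536       # as in the Java PBEKeySpec

def derive_key(password: bytes, salt: bytes) -> bytes:
    """PBKDF2-HMAC-SHA256 producing a 256-bit AES key."""
    return hashlib.pbkdf2_hmac("sha256", password, salt, ITERATIONS, dklen=32)

def frame(salt: bytes, iv: bytes, ciphertext: bytes) -> bytes:
    """salt || iv || ciphertext, like the System.arraycopy sequence."""
    return salt + iv + ciphertext

def unframe(blob: bytes):
    """Inverse of frame(), like the Arrays.copyOfRange calls in decrypt."""
    salt = blob[:SALT_LENGTH]
    iv = blob[SALT_LENGTH:SALT_LENGTH + IV_LENGTH]
    ciphertext = blob[SALT_LENGTH + IV_LENGTH:]
    return salt, iv, ciphertext

salt, iv = os.urandom(SALT_LENGTH), os.urandom(IV_LENGTH)
blob = frame(salt, iv, b"\x00" * 16)           # stand-in ciphertext
s2, i2, c2 = unframe(blob)
assert (s2, i2) == (salt, iv) and len(c2) == 16
# Same password + same salt always re-derives the same key:
assert derive_key(b"hunter2", salt) == derive_key(b"hunter2", salt)
```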
{ "domain": "codereview.stackexchange", "id": 36977, "tags": "java, security, cryptography, aes" }
Sum of Squared Intensity Difference for block matching with ORB features, c++
Question: I am trying to implement a feature matching algorithm between image reference and image current. The pipeline is as follows: Gray scale Images (intensities = [0-255]) are imported. Extract ORB keypoints in both reference and current images: Ptr<FeatureDetector> detector = ORB::create(); Obtain blocks surrounding all keyPoints within specified window_size in both reference and current images. NOTE: If the blocks exceed the image upper or lower bounds, the block intensities would obviously be zero (Example: KP_A in image reference). 4. Instead of using existing feature matching algorithms in opencv, I am trying to utilize the sum of squared intensity differences (SSD) in the blocks acquired across ORB keypoints in reference and current images. Extracting correct features demands implementing crossCheckedMatching() to ensure features are chosen correctly. Visualize matched features. Complete code: here. Problem 1: My final result looks something like this: Whereas, I have read in a few lecture notes and articles, SSD and block matching incorporated with a cross-checked matching method should provide decent matches. Do these matched features look any good in someone's eyes, or awful, after the cross-check matching procedure? Problem 2: I do not know how to find the correct threshold for evaluating SSD values between two images (Empirical tuning vs. Existing logical approaches?). Final goal, adapted from the textbook: use these matched features in RANSAC to filter outliers. Find Homography (Theory says, 4 pairs of matched points needed!) Camera Pose Estimation 3D reconstruction of environment Apologies for such a long question, but I tried to squeeze it as much as I could. Answer: Your results look reasonable. However, you're likely to get a somewhat unreliable result: the good matches are located in a small portion of the image, while a large spread would help in getting overall more accurate estimates (but that's probably due to your images). 
Experimenting will work here! Otherwise, you can compute the expectation of the SSD between 2 blocks that are a rightful match under the noise level of your images. Eventually, bad matches should be eliminated by the RANSAC part, but you may have to do a trade-off between getting many but sometimes wrong matches (loose SSD threshold) or stricter ones (at the risk of not having enough matches to estimate the pose).
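The SSD and cross-check steps described above can be sketched in a few lines of pure Python (toy 1-D blocks standing in for the 2-D patches around ORB keypoints; a real implementation would use OpenCV):

```python
# Minimal sketch of SSD block matching with cross-check filtering.

def ssd(block_a, block_b):
    """Sum of squared intensity differences between two blocks."""
    return sum((a - b) ** 2 for a, b in zip(block_a, block_b))

def best_match(block, candidates):
    """Index of the candidate block with the smallest SSD."""
    return min(range(len(candidates)), key=lambda j: ssd(block, candidates[j]))

def cross_checked_matches(blocks_ref, blocks_cur):
    """Keep only mutual best matches: i -> j in ref->cur AND j -> i back."""
    fwd = [best_match(b, blocks_cur) for b in blocks_ref]
    bwd = [best_match(b, blocks_ref) for b in blocks_cur]
    return [(i, j) for i, j in enumerate(fwd) if bwd[j] == i]

ref = [[10, 20, 30], [200, 210, 220], [5, 5, 5]]
cur = [[198, 212, 219], [11, 19, 31], [90, 90, 90]]
matches = cross_checked_matches(ref, cur)
# ref[0]<->cur[1] and ref[1]<->cur[0] survive; ref[2] has no mutual
# best match and is rejected by the cross-check.
assert (0, 1) in matches and (1, 0) in matches and len(matches) == 2
```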
{ "domain": "dsp.stackexchange", "id": 7382, "tags": "image-processing, opencv, local-features" }
Up-sampling using ZOH
Question: The basic idea of up-sampling is to add $M-1$ zeros after each sample. I need to make upsampling (by a factor of 3) of the function $rcosdesign(\beta, M, T)$ using the ZOH filter in Matlab. $\beta=0.25$, $M=6$, $T=4$. I didn't get a full understanding of the upsampling using this filter, so I need help on how this should be done. Thanks in advance! Answer: ZOH and rcosdesign are two different things. What you may mean is to generate a time domain filter of rcosdesign(0.25,6,4) and then use ZOH on it. h=rcosdesign(0.25, 6, 4); will generate the rcosdesign impulse function that you want. Then you need to upsample by 3 by interleaving 2 zeros in between each sample. h_upsby3 = upsample(h,3); Zero order hold is like a rectangular function in time domain. Here, for every sample of h_upsby3, there would be 3 ones to simulate a zero-order hold as per your specification. r=ones(1,3); y=conv(h_upsby3,r); to generate the zero-order-hold output.
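The same pipeline can be sketched in pure Python (with illustrative taps instead of the actual rcosdesign output, since rcosdesign is a MATLAB Signal Processing Toolbox function). It also demonstrates why ZOH interpolation by a factor L is equivalent to simply repeating each sample L times:

```python
# Zero-insertion followed by convolution with a length-L rectangle (ZOH).

def upsample(x, L):
    """Insert L-1 zeros after each sample (like MATLAB's upsample)."""
    y = []
    for s in x:
        y.append(s)
        y.extend([0] * (L - 1))
    return y

def conv(x, h):
    """Full linear convolution, like MATLAB's conv."""
    y = [0] * (len(x) + len(h) - 1)
    for i, xi in enumerate(x):
        for j, hj in enumerate(h):
            y[i + j] += xi * hj
    return y

L = 3
h = [1, -2, 4]                  # stand-in for the rcosdesign taps
h_up = upsample(h, L)           # [1, 0, 0, -2, 0, 0, 4, 0, 0]
zoh = conv(h_up, [1] * L)       # rectangular (ZOH) interpolation filter

# ZOH interpolation == repeat each sample L times (ignoring the trailing
# zeros that full convolution appends):
assert zoh[:len(h) * L] == [1, 1, 1, -2, -2, -2, 4, 4, 4]
```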
{ "domain": "dsp.stackexchange", "id": 8621, "tags": "matlab, filters, sampling, zoh" }
What kind of systems of black holes satisfy the laws of black hole thermodynamics?
Question: I've come across black holes thermodynamics multiple times recently (both at this site and elsewhere) and some things started bugging me. For one thing, first law bothers me a little. It is a reflection of the law of conservation of energy. This is fine when the space-time is stationary (as in Kerr solution) and is consistent with system being in a thermal equilibrium (so that thermodynamics apply at all). But what about more general systems of black holes? What are the assumptions on the system of black holes for it to be in thermal equilibrium so that laws of BH thermodynamics can apply? Note: the reason I am asking is that I heard that the laws should also be correct for system of multiple BH (so that their total event horizon area is increasing, for example). But I cannot wrap my head about how might the system of BH be in thermal equilibrium. I mean, they would be moving around, generating gravitational waves that carry energy away (violating the first law) all the time. Right? Answer: The first law is not violated if stated properly: $$ dM = \frac{\kappa}{2 \pi}dA + \Omega dJ + \Phi dQ $$ (reference: wikipedia) where $\kappa,\Omega,J,\Phi,Q$ are the surface gravity,angular velocity, angular momentum, electric potential and charge of the black hole, respectively. Compare this to the usual expression for the first law: $$ dE = TdS + PdV + \mu dN $$ One can (heuristically) make the identifications $ T = \frac{\kappa}{2\pi}$, $ S = A/4 $, $ \mu = \Phi $ and $ N = Q $. The first two of these are well-established. Hawking showed that the temperature of the black hole is proportional to its surface gravity ($T \propto \kappa$) and Bekenstein showed that its entropy should be proportional to its area ($S=A/4$). The third and fourth equalities ($\mu = \Phi$ and $ N = Q $) can be understood if we think of the black hole as an aggregate of N particle with unit charge. 
Adding another charged particle to this ensemble of $N$ particles, with total charge $Q$, will cost an amount of work given by $\Phi dQ$. For the case of a single black hole, one can use the framework of dynamical horizons developed by Ashtekar, Badri Krishnan, Sean Hayward and others [refs 1, 2]. It turns out the laws of black hole entropy can be extended to completely dynamical black holes with well-defined expressions for the first and second laws in terms of fluxes through the dynamical horizon. The definition of a dynamical horizon is in terms of the expansion of the inward pointing, null normal vector field on the 2+1 d boundary of a 3+1 d region. I can't think of the detailed expressions off the top of my head, but you can find them in the above reference. I can't give a concrete answer for the case of multiple black holes, but I would think you could extend the dynamical horizon framework to that case - probably not without some serious effort though. In any case, there will be no violation of energy conservation. The sum of the energy and momentum carried off by gravitational waves (and detected by an observer at infinity) and the change in the black hole's energy and momentum (again w.r.t such an asymptotic observer) will remain a constant. Hope that helps! Edit: Corrected proportionality constants for $T$ and $S$. Thanks @Jeff!
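As a side note, the identification $T = \kappa/2\pi$ reduces, for a Schwarzschild hole, to the familiar Hawking temperature $T = \hbar c^3 / (8\pi G M k_B)$. A quick numerical sketch (standard CODATA-style constants to a few digits, illustrative only):

```python
# Hawking temperature of a Schwarzschild black hole from T = kappa/2pi
# (in SI units: T = hbar c^3 / (8 pi G M k_B)).

import math

hbar  = 1.054571817e-34   # reduced Planck constant, J s
c     = 2.99792458e8      # speed of light, m/s
G     = 6.67430e-11       # gravitational constant, m^3 kg^-1 s^-2
k_B   = 1.380649e-23      # Boltzmann constant, J/K
M_sun = 1.98892e30        # solar mass, kg

def hawking_temperature(M):
    """Hawking temperature (K) of a Schwarzschild black hole of mass M (kg)."""
    return hbar * c**3 / (8 * math.pi * G * M * k_B)

T = hawking_temperature(M_sun)
# A solar-mass black hole is extraordinarily cold, roughly 6e-8 K --
# far below the CMB, so astrophysical holes absorb more than they emit.
assert 5e-8 < T < 8e-8
```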
{ "domain": "physics.stackexchange", "id": 135, "tags": "general-relativity, black-holes, black-hole-thermodynamics" }
Return results from goroutines
Question: I have two goroutines that each need to return their results to the main function. These results are of different types. I have implemented a solution that uses a structure that holds these results. The structure is passed to the routines and each routine knows which field in the structure to use to save the results. I am new to Golang and I am not sure if this is an elegant solution. Maybe channels are more suitable? package main import ( "fmt" "sync" ) type Results struct { x float64 y int } func main() { var r Results var wg sync.WaitGroup wg.Add(2) go func(r *Results, wg *sync.WaitGroup) { defer wg.Done() r.x = 1.2 }(&r, &wg) go func(r *Results, wg *sync.WaitGroup) { defer wg.Done() r.y = 34 }(&r, &wg) wg.Wait() fmt.Println(r) } https://play.golang.org/p/XW1LccJvAn Answer: Using a structure and waitgroups is silly for something this simple. Just make two channels and fire off two goroutines. You don't care which one finishes first just read from both channels in whatever order is convenient (or even do something like someOtherFunction(<-c1, <-c2)). E.g.: package main import ( "fmt" "math/rand" "time" ) func main() { c1 := make(chan float64, 1) c2 := make(chan int, 1) go func() { // simulate spending time to do work to get answer time.Sleep(time.Duration(rand.Intn(2000)) * time.Millisecond) c1 <- 1.2 }() go func() { time.Sleep(time.Duration(rand.Intn(2000)) * time.Millisecond) c2 <- 34 }() x := <-c1 y := <-c2 fmt.Println(x, y) } https://play.golang.org/p/Qoh5IvFROo
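For comparison, the same pattern — two concurrent workers returning results of different types, read back in whatever order is convenient — can be sketched in Python with `concurrent.futures`. This is an analogy to the buffered-channel version above, not a translation of Go's channel semantics: each future plays the role of one single-element channel.

```python
# Two concurrent workers with results of different types; each future
# behaves like a buffered channel of capacity 1.

from concurrent.futures import ThreadPoolExecutor
import random
import time

def compute_x() -> float:
    time.sleep(random.uniform(0, 0.05))   # simulate spending time on work
    return 1.2

def compute_y() -> int:
    time.sleep(random.uniform(0, 0.05))
    return 34

with ThreadPoolExecutor(max_workers=2) as pool:
    fx = pool.submit(compute_x)
    fy = pool.submit(compute_y)
    x, y = fx.result(), fy.result()       # like x := <-c1; y := <-c2

assert (x, y) == (1.2, 34)
```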
{ "domain": "codereview.stackexchange", "id": 12160, "tags": "beginner, go" }
Photoelectric effect:- Reduction of wavelength increases current?
Question: I worked through a question in which the intensity of the incident radiation on a metal surface was kept constant but the wavelength of the photons was reduced. The question asked what the effect on the maximum photoelectric current would be. The initial wavelength was smaller than the threshold wavelength of the metal surface. My thinking was: since the intensity remains constant, the number of photons emitted from the source remains constant, and thus the number of electrons emitted from the metal surface. And since the number of electrons per unit time isn't changed, the current will remain the same. However, the answer key stated "Fewer photons (per unit time) so (maximum) current is smaller". How does decreasing the wavelength (equivalent to increasing the energy of the photons) result in fewer photons being emitted? Answer: It is the interface between classical electrodynamics and quantum mechanics. Intensity is a classical electromagnetic wave measure of energy, measured by the average electric field in the wave: the average intensity for a plane wave can be written as $$\langle I \rangle = \tfrac{1}{2} c \varepsilon_0 E_0^2.$$ So for a given value of intensity/energy for a classical wave of frequency $\nu$, there are $N$ photons with energy $E=h\nu$. If the intensity is constant and the frequency gets larger, fewer photons are needed to add up to the classical intensity.
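The counting argument in the answer can be made quantitative with a short sketch: at fixed intensity $I$ over area $A$, the photon arrival rate is $N = IA/(h\nu) = IA\lambda/(hc)$, so shrinking the wavelength at constant intensity shrinks the photon rate proportionally.

```python
# Photon arrival rate at constant classical intensity.

h = 6.62607015e-34   # Planck constant, J s
c = 2.99792458e8     # speed of light, m/s

def photon_rate(intensity, area, wavelength):
    """Photons per second delivering power intensity*area at the given
    wavelength: N = I*A / (h*c/lambda)."""
    photon_energy = h * c / wavelength
    return intensity * area / photon_energy

I, A = 1.0, 1e-4                      # 1 W/m^2 over 1 cm^2
n_400 = photon_rate(I, A, 400e-9)     # violet light
n_200 = photon_rate(I, A, 200e-9)     # UV, half the wavelength

# Halving the wavelength doubles each photon's energy, so half as many
# photons per second arrive -- hence a smaller saturation current.
assert abs(n_200 / n_400 - 0.5) < 1e-12
```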
{ "domain": "physics.stackexchange", "id": 39001, "tags": "photoelectric-effect" }
Switch case on an enum to return a specific mapped object from IMapper
Question: I have an ever growing switch case statement I plan on adding 3 more case statements to. Given an int, compare it to an enum, and call IMapper.Map<TDestination>(sourceObject); public WebhookActivity MakeMessage(WebhookActivityRequest request) { WebhookActivity command = _mapper.Map<ConsecutiveCaseCreated>(request); var mappingId = 0; for(int i = 0; i < mappings.Length; i++) { if(request.WorkflowCode == mappings[i].WorkflowCode) { mappingId = mappings[i].EventHandlerId; } } switch(mappingId) { case (int)WorkflowEventHandler.CONSECUTIVE_CASE_CREATED: { command = _mapper.Map<ConsecutiveCaseCreated>(request); break; } case (int)WorkflowEventHandler.CONSECUTIVE_CASE_MODIFIED: { command = _mapper.Map<ConsecutiveCaseModified>(request); break; } case (int)WorkflowEventHandler.CONSECUTIVE_CASE_CANCELLED: { command = _mapper.Map<ConsecutiveCaseCancelled>(request); break; } case (int)WorkflowEventHandler.INTERMITTENT_CASE_CREATED: { command = _mapper.Map<IntermittentCaseCreated>(request); break; } case (int)WorkflowEventHandler.INTERMITTENT_CASE_TIME_ADJUSTED: { command = _mapper.Map<IntermittentCaseTimeAdjusted>(request); break; } case (int)WorkflowEventHandler.INTERMITTENT_CASE_TIME_DELETED: { command = _mapper.Map<IntermittentCaseTimeDeleted>(request); break; } case (int)WorkflowEventHandler.CASE_CHANGED_TYPE: { command = _mapper.Map<CaseChangedType>(request); break; } } return command; } I looked at other enum related questions and saw a common solution of using a dictionary. I tried that and couldn't quite figure out how to make the call to IMapper.Map() work out. Made the code a lot smaller though! Here's what the dictionary based solution I came up with looks like and what the method code became. 
private static readonly Dictionary<int, Type> workflowEventHandlers = new Dictionary<int, Type> { { (int)WorkflowEventHandler.CONSECUTIVE_CASE_CREATED, typeof(ConsecutiveCaseCreated)}, { (int)WorkflowEventHandler.CONSECUTIVE_CASE_CANCELLED, typeof(ConsecutiveCaseCancelled) }, { (int)WorkflowEventHandler.CONSECUTIVE_CASE_MODIFIED, typeof(ConsecutiveCaseModified) }, { (int)WorkflowEventHandler.INTERMITTENT_CASE_CREATED, typeof(IntermittentCaseCreated) }, { (int)WorkflowEventHandler.INTERMITTENT_CASE_TIME_ADJUSTED, typeof(IntermittentCaseTimeAdjusted) }, { (int)WorkflowEventHandler.INTERMITTENT_CASE_TIME_DELETED, typeof(IntermittentCaseTimeDeleted) }, { (int)WorkflowEventHandler.CASE_CHANGED_TYPE, typeof(CaseChangedType) } }; public WebhookActivity MakeMessage(WebhookActivityRequest request) { WebhookActivity command = _mapper.Map<ConsecutiveCaseCreated>(request); var mappingId = 0; for(int i = 0; i < mappings.Length; i++) { if(request.WorkflowCode == mappings[i].WorkflowCode) { mappingId = mappings[i].EventHandlerId; } } if(workflowEventHandlers.TryGetValue(mappingId, out Type mapDestinationType)) { //command = _mapper.Map(request, typeof(WebhookActivityRequest), mapDestinationType); //command = Map(request, mapDestinationType); } } // doesn't really work private WebhookActivityMessage Map<TSource, TDestination>(TSource source, TDestination destination) where TDestination: WebhookActivityMessage { return _mapper.Map<TSource, TDestination>(source); } The destination types all inherit from a base class called WebhookActivityMessage which inherits from WebhookActivity so I figured if I could say the return type of my Map() method was WebhookActivityMessage then it could be happy. 
Answer: You can use a switch expression (C# 8) instead of a switch statement to make the code a lot smaller:

public WebhookActivity MakeMessage(WebhookActivityRequest request)
{
    int mappingId = 0;
    for (int i = 0; i < mappings.Length; i++)
    {
        if (request.WorkflowCode == mappings[i].WorkflowCode)
        {
            mappingId = mappings[i].EventHandlerId;
        }
    }

    return (WorkflowEventHandler)mappingId switch
    {
        WorkflowEventHandler.CONSECUTIVE_CASE_CREATED => _mapper.Map<ConsecutiveCaseCreated>(request),
        WorkflowEventHandler.CONSECUTIVE_CASE_MODIFIED => _mapper.Map<ConsecutiveCaseModified>(request),
        WorkflowEventHandler.CONSECUTIVE_CASE_CANCELLED => _mapper.Map<ConsecutiveCaseCancelled>(request),
        WorkflowEventHandler.INTERMITTENT_CASE_CREATED => _mapper.Map<IntermittentCaseCreated>(request),
        WorkflowEventHandler.INTERMITTENT_CASE_TIME_ADJUSTED => _mapper.Map<IntermittentCaseTimeAdjusted>(request),
        WorkflowEventHandler.INTERMITTENT_CASE_TIME_DELETED => _mapper.Map<IntermittentCaseTimeDeleted>(request),
        WorkflowEventHandler.CASE_CHANGED_TYPE => _mapper.Map<CaseChangedType>(request),
        _ => _mapper.Map<ConsecutiveCaseCreated>(request)
    };
}

But note that only the last matching mappingId in your for-loop will be used. Did you want to search for the first occurrence instead?

int mappingId = mappings
    .FirstOrDefault(m => request.WorkflowCode == m.WorkflowCode)?.EventHandlerId ?? 0;

As for your solution with the dictionary, you could declare a Dictionary<WorkflowEventHandler, Func<IMapper, WebhookActivityRequest, WebhookActivity>> instead, i.e., you would add lambda expressions to it that do the work.
private static readonly Dictionary<WorkflowEventHandler, Func<IMapper, WebhookActivityRequest, WebhookActivity>> workflowEventHandlers = new()
{
    { WorkflowEventHandler.CONSECUTIVE_CASE_CREATED,        (m, r) => m.Map<ConsecutiveCaseCreated>(r) },
    { WorkflowEventHandler.CONSECUTIVE_CASE_CANCELLED,      (m, r) => m.Map<ConsecutiveCaseCancelled>(r) },
    { WorkflowEventHandler.CONSECUTIVE_CASE_MODIFIED,       (m, r) => m.Map<ConsecutiveCaseModified>(r) },
    { WorkflowEventHandler.INTERMITTENT_CASE_CREATED,       (m, r) => m.Map<IntermittentCaseCreated>(r) },
    { WorkflowEventHandler.INTERMITTENT_CASE_TIME_ADJUSTED, (m, r) => m.Map<IntermittentCaseTimeAdjusted>(r) },
    { WorkflowEventHandler.INTERMITTENT_CASE_TIME_DELETED,  (m, r) => m.Map<IntermittentCaseTimeDeleted>(r) },
    { WorkflowEventHandler.CASE_CHANGED_TYPE,               (m, r) => m.Map<CaseChangedType>(r) }
};

Then you can use it like this

if (workflowEventHandlers.TryGetValue((WorkflowEventHandler)mappingId, out var mapCreator))
{
    command = mapCreator(_mapper, request);
}
else
{
    command = _mapper.Map<ConsecutiveCaseCreated>(request);
}
{ "domain": "codereview.stackexchange", "id": 44193, "tags": "c#, hash-map, enum" }
What's the intuitive interpretation of quantum uncertainty $\Delta \hat{A}=\sqrt{\langle\hat{A}^2\rangle-\langle\hat{A}\rangle^2}$?
Question: As per this video, if $\hat{A}$ is a quantum operator, the uncertainty is given by $$\Delta \hat{A}=\sqrt{\langle\hat{A}^2\rangle-\langle\hat{A}\rangle^2}$$ I understand what this expression means in a purely mathematical sense, but I have no physical intuition to it. How should I interpret the terms $\langle\hat{A}^2\rangle$ and $\langle\hat{A}\rangle^2$, physically? And why, physically, should uncertainty be the square root of their difference? Answer: Admittedly that expression is somewhat un-intuitive $$ \Delta \hat{A}=\sqrt{\langle\hat{A}^2\rangle-\langle\hat{A}\rangle^2} $$ But you can rewrite the term below the square root in a mathematically equivalent way, and get $$ \Delta \hat{A}=\sqrt{\left<(\hat{A}-\langle\hat{A}\rangle)^2\right>} $$ Now, here $(\hat{A}-\langle\hat{A}\rangle)$ is obviously an operator with mean value $0$, and the whole expression is quite intuitively the standard deviation of $\hat{A}$.
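The rewriting step in the answer is the standard variance expansion; spelled out term by term (for a normalized state, $\langle\hat{A}\rangle$ is just a number and expectation values are linear):

$$\left\langle(\hat{A}-\langle\hat{A}\rangle)^2\right\rangle = \langle\hat{A}^2\rangle - 2\langle\hat{A}\rangle\langle\hat{A}\rangle + \langle\hat{A}\rangle^2 = \langle\hat{A}^2\rangle - \langle\hat{A}\rangle^2$$

So the two forms are identical, and $\Delta\hat{A}$ is exactly the standard deviation of the distribution of measurement outcomes of $\hat{A}$ in the given state.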
{ "domain": "physics.stackexchange", "id": 55972, "tags": "quantum-mechanics, operators, wavefunction, heisenberg-uncertainty-principle" }
Publishing and Subscribing at the same time
Question: I am trying to send constant velocity commands to turtlesim and at the same time want to receive pose information. I only receive pose data in the terminal when running both turtlesim and roscore. My code is the following:

#include <ros/ros.h>
#include <turtlesim/Pose.h>
#include <iomanip>
#include <geometry_msgs/Twist.h>
#include <stdlib.h>

void poseMessageRecieved(const turtlesim::Pose& msg)
{
    ROS_INFO_STREAM(std::setprecision(2) << std::fixed << "position=(" << msg.x << "," << msg.y << ")" << "direction=" << msg.theta);
}

int main(int argc, char **argv)
{
    ros::init(argc, argv, "sub2");
    ros::NodeHandle nh;
    ros::Subscriber sub = nh.subscribe("turtle1/pose", 1000, &poseMessageRecieved);
    ros::spin();

    ros::init(argc, argv, "pub2");
    ros::Publisher pub = nh.advertise<geometry_msgs::Twist>("turtle1/cmd_vel", 1000);
    ros::Rate rate(2);
    while (ros::ok())
    {
        geometry_msgs::Twist msg;
        msg.linear.x = 2.0;
        msg.angular.z = 1.0;
        pub.publish(msg);
        ROS_INFO_STREAM("Sending constant velocity command: " << "linear=" << msg.linear.x << "angular=" << msg.angular.z);
        ros::spinOnce();
        rate.sleep();
    }
    //ros::spin();
}

Originally posted by zuygar on ROS Answers with karma: 19 on 2015-02-11 Post score: 0

Answer: OK, I fixed the problem. My code is the following, but I am confused about the "delete pub" command. If I didn't use it, what would happen?
#include <ros/ros.h>
#include <turtlesim/Pose.h>
#include <iomanip>
#include <geometry_msgs/Twist.h>
#include <stdlib.h>

ros::Publisher *pub;

void poseMessageRecieved(const turtlesim::Pose& msgIn)
{
    geometry_msgs::Twist msg;
    ROS_INFO_STREAM(std::setprecision(2) << std::fixed << "position=(" << msgIn.x << "," << msgIn.y << ")" << "direction=" << msgIn.theta);
    msg.linear.x = 0.2;
    msg.angular.z = 0.1;
    pub->publish(msg);
    ROS_INFO_STREAM("Sending constant velocity command: " << "linear=" << msg.linear.x << "angular=" << msg.angular.z);
}

int main(int argc, char **argv)
{
    ros::init(argc, argv, "sub_and_pub");
    ros::NodeHandle nh;
    pub = new ros::Publisher(nh.advertise<geometry_msgs::Twist>("turtle1/cmd_vel", 1000));
    ros::Subscriber sub = nh.subscribe("turtle1/pose", 1000, &poseMessageRecieved);
    ros::spin();
    delete pub;
}

Originally posted by zuygar with karma: 19 on 2015-02-11 This answer was ACCEPTED on the original site Post score: 1 Original comments Comment by tfoote on 2015-02-11: pub will get cleaned up by the end of the scope anyway.
{ "domain": "robotics.stackexchange", "id": 20852, "tags": "ros, subscribe" }
Extracting image metadata programmatically
Question: I hope this is the right location to post this, as my question isn't really stack-overflow material but involves images. I have hundreds of SEM images like this: and need to track changes in certain features over the position at which the image was captured. This is why I would like to automatically retrieve the metadata for all images using something like bash or python and save it in a csv file or something. The manufacturer of the SEM embeds metadata in the image file itself, but I haven't had much success retrieving it. It doesn't show in the EXIF header if I inspect it with gimp or with image-magick's identify image.tiff. I only see it if I save the image as an .xml file and view it in a text editor. The manufacturer ships an .exe program to inspect images, but as I am using Linux and have hundreds of images from which to retrieve the sample position, this isn't a very useful method for me. Before I start writing a python / bash program that will convert the files to xml and use complex regex stuff to retrieve what I am looking for, I wanted to ask if anyone could imagine a simpler approach. My knowledge of image metadata is very limited. Thanks in advance :) Answer: Open a few files in a hex editor and check if the header containing the information is always the same size, i.e. if the image data always begins at the same address. If this is so, you can just retrieve the header with python ("normal" open, not binary). Then, you have the complete header as a string and can create any data structure you want, using any xml package you want.
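The fixed-size-header approach the answer suggests can be sketched in a few lines of Python. Note that the field name `StagePosition` and the 4096-byte header size below are placeholders, not vendor specifications: check your instrument's actual tag names and header length in a hex editor first.

```python
import re

def read_header_text(path, header_size=4096):
    """Read the first header_size bytes of an image file and decode them
    as text (SEM vendors often embed key=value metadata before the pixels)."""
    with open(path, "rb") as f:
        raw = f.read(header_size)
    # latin-1 maps every byte to a character, so decoding never fails.
    return raw.decode("latin-1")

def find_field(text, key):
    """Pull a 'Key=Value' style field out of the decoded header text.
    'StagePosition' in the example below is a hypothetical tag name."""
    m = re.search(re.escape(key) + r"\s*=\s*([^\r\n\x00]+)", text)
    return m.group(1).strip() if m else None

# e.g.: find_field(read_header_text("image.tiff"), "StagePosition")
```

Looping this over all files and writing one csv row per image would then replace the convert-to-xml detour entirely.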
{ "domain": "dsp.stackexchange", "id": 7402, "tags": "image-processing, python" }
Convolution of BPSK signal in frequency domain implementation
Question: Here is the simplified version of code which implement convolution of BPSK-signal in frequency domain: import numpy as np import matplotlib.pyplot as plt import scipy.fftpack # Signal and related data. pulse_code = "+++++--++-+-+" pulse_shift = len (pulse_code) * 1 + 2; # Feel free to move the signal. sample_number = len (pulse_code) * 2 + 4; # Feel free to change it. time = np.linspace (0, sample_number, sample_number); signal_i = np.zeros (time.size); signal_q = np.zeros (time.size); filter_i = np.zeros (time.size); filter_q = np.zeros (time.size); # Create signal. for i in range (time.size): if i >= pulse_shift and i < pulse_shift + len (pulse_code): m = 1. if pulse_code [i - pulse_shift] == '+' else -1. signal_i [i] = m signal_q [i] = m # Create filter. for i in range (time.size): if i < len (pulse_code): m = 1. if pulse_code [i - 1] == '+' else -1. filter_i [time.size - i - 1] = m filter_q [time.size - i - 1] = m # Prepare to next computation. signal_complex= signal_i + 1j * signal_q filter_complex= filter_i + 1j * filter_q # Go to frequency domain. spectrum_signal = scipy.fftpack.fft (signal_complex); spectrum_filter = scipy.fftpack.fft (filter_complex); # Convolution. spectrum_compressed = spectrum_signal * spectrum_filter # Return to time domain. signal_compressed = scipy.fftpack.ifft (spectrum_compressed) # Get envelope. magnitude_compressed = np.zeros (time.size) for i in range (signal_compressed.size): magnitude_compressed [i] = np.sqrt (signal_compressed [i].real ** 2 + signal_compressed [i].imag ** 2) # Print result. 
fig = plt.figure () plt.subplot (2, 1, 1) plt.plot (time, signal_i); plt.title ("Input signal.") plt.xlabel ("Time") plt.ylabel ("Amplitude") plt.subplot (2, 1, 2) plt.plot (time, magnitude_compressed); plt.title ("Magnitude of compressed signal.") plt.xlabel ("Time") plt.ylabel ("Amplitude") plt.show() The implementation in my opinion is straightforward and clear, but result which I get is wrong: the maximum sidelobe level is 2 instead of 1, the main lobe is shifted to left and sidelobes aren't symmetric. Can anybody explain where is my error? UPD import numpy as np import matplotlib.pyplot as plt import scipy.fftpack # Signal and related data. # *_t - time domain; # *_f - frequency domain. pulse_code = "+++++--++-+-+" N = 64 M = len (pulse_code) L = N - M + 1 sample_number = L * 1; time = np.linspace (0, sample_number, sample_number); pulse_shift = len (pulse_code) + 1; signal_t = np.zeros (sample_number) + 1j * np.zeros (sample_number) filter_t = np.zeros (N) + 1j * np.zeros (N) chunk_t = np.zeros (N) + 1j * np.zeros (N) chunk_f = np.zeros (N) + 1j * np.zeros (N) envelope = np.zeros (sample_number) # Create signal. for i in range (sample_number): if i >= pulse_shift and i < pulse_shift + len (pulse_code): m = 1. if pulse_code [i - pulse_shift] == '+' else -1. signal_t [i] = m + 1j * 0 # Create filter as inverse signal with zero padding. n = len (pulse_code) - 1 for i in range (len (pulse_code) ): m = 1. if pulse_code [len (pulse_code) - i - 1] == '+' else -1. filter_t [i] = m + 1j * 0 # and get it's FFT. filter_f = scipy.fftpack.fft (filter_t) # Performs convolution using overlap-save method. for i in range (sample_number / L): for j in range (M - 1): chunk_t [j] = chunk_t [L + j] for j in range (L): chunk_t [M - 1 + j] = signal_t [i * L + j] chunk_f = scipy.fftpack.fft (chunk_t) chunk_f = scipy.fftpack.ifft (chunk_f * filter_f) for j in range (L): envelope [i * L + j] = np.abs (chunk_f [M - 1 + j]) # Print result. 
fig = plt.figure () plt.subplot (2, 1, 1) plt.plot (time, signal_t); plt.title ("Input signal.") plt.xlabel ("Time") plt.ylabel ("Amplitude") plt.subplot (2, 1, 2) plt.plot (time, envelope); plt.title ("Magnitude of compressed signal.") plt.xlabel ("Time") plt.ylabel ("Amplitude") plt.show() Answer: You're doing a circular convolution where you want a linear convolution. The Wikipedia article in fast convolution / save-add method has a pretty good explanation of the correct algorithm. Basically, you forgot to zero pad, extract the "valid" part and save the "tail" for the next convolution. Other than that, your filter is questionable at best, and your method of bpsk generation is unusual in its effects on constellation rotation and power. That's no problem per se, but makes your implementation hard to compare to existing convolution and fast convolution implementations.
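The zero-padding fix the answer refers to can be shown in isolation. This sketch uses the question's code sequence; `np.convolve` provides the linear-convolution ground truth.

```python
import numpy as np

code = np.array([1.0 if c == '+' else -1.0 for c in "+++++--++-+-+"])
signal = code            # received BPSK chips
matched = code[::-1]     # matched filter = time-reversed code

# A linear convolution of lengths P and Q needs P + Q - 1 output samples;
# padding both FFTs to that length prevents circular wrap-around.
n = len(signal) + len(matched) - 1
lin = np.fft.ifft(np.fft.fft(signal, n) * np.fft.fft(matched, n)).real

# Cross-check against the direct time-domain convolution.
direct = np.convolve(signal, matched)
assert np.allclose(lin, direct)

# The main lobe now peaks at the full code energy:
print(int(round(lin.max())))  # -> 13
```

Without the padding, the spectral product implements a circular convolution of whatever common length the two FFTs happen to have, which is what produces the wrapped, shifted, asymmetric result in the question's plots.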
{ "domain": "dsp.stackexchange", "id": 5843, "tags": "convolution, python, frequency-domain, bpsk" }
Dynamic equilibrium equations
Question: The system represented in the figure consists of a 2 kg pulley to which two springs and a rigid square with 10 kg/m are connected, which in turn is articulated at point B (the system is in the xz plane). Dynamic equilibrium equations: Can someone explain the two terms in yellow? They confuse me a bit, since I thought the sign should be "+" and not "-", and that both terms should be equal to each other rather than mirrored in sign. This is the free-body diagram that I drew. In another exercise that I did, they considered JTeta1 and JTeta2 in the opposite direction. Answer: The full term containing the highlighted portion represents the torque exerted by the vertical spring. The spring torque depends on the spring force, which in turn depends on the spring extension / compression. If both ends of the spring move by an equal amount, the spring doesn't get any more compressed or extended, and hence there is no change in the spring force. So the difference between the movements of the two ends determines the extension / compression of the spring, and that is exactly what the highlighted term gives. The following diagram shows the movement of each end of the spring. The movement of the top end of the spring is $r_2 \theta_1$, positive in the downward direction due to the choice of the direction of positive $\theta_1$. The movement of the bottom end of the spring is $2L \theta_2$, which is also positive in the downward direction due to the choice of the direction of positive $\theta_2$. Hence the extension / compression of the spring is given by $\mp(r_2 \theta_1 - 2L\theta_2)$. The term is the same in both equations (except for an overall sign, as seen in your example) since the extension / compression of the spring does not depend on which equation we are considering at any instant.
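A sketch of where the mirrored signs come from, assuming a spring constant $k$ and the sign conventions of the answer:

$$\delta = r_2\theta_1 - 2L\theta_2, \qquad F_s = k\,\delta$$

The same spring force acts on both bodies with different moment arms ($r_2$ at the pulley, $2L$ at the square) and in opposite rotational senses, so the two equations of motion pick up

$$\tau_1 \supset -k\,r_2\,(r_2\theta_1 - 2L\theta_2), \qquad \tau_2 \supset +k\,(2L)\,(r_2\theta_1 - 2L\theta_2),$$

that is, the same coupling term with opposite overall signs. If the positive directions of $\theta_1$ or $\theta_2$ are chosen differently, both signs flip together, which is why another exercise can legitimately show them reversed.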
{ "domain": "engineering.stackexchange", "id": 4636, "tags": "vibration" }
Data augmentation in deep learning
Question: I am working on a deep learning project for face recognition. I am using the pre-trained model VGG16. The dataset has around 100 classes, and each class have 80 images. I split the dataset 60% training, 20% validation, 20% testing. I used data augmentation (ImageDataGenerator()) to increase the training data. The model gave me different results when I change ImageDataGenerator() arguments. See the following cases: Case1: train_datagen = ImageDataGenerator( rotation_range=15, width_shift_range=0.2, height_shift_range=0.2, shear_range=0.2, zoom_range=0.2, horizontal_flip=True, fill_mode='nearest') validate_datagen = ImageDataGenerator() Test_datagen = ImageDataGenerator() Case1 result: High training accuracy and validation accuracy, but the training accuracy is lower. check the following image: Case2: train_datagen = ImageDataGenerator( rescale=1./255, shear_range=0.2, zoom_range=0.2, horizontal_flip=True) validate_datagen = ImageDataGenerator() Test_datagen = ImageDataGenerator() Case2 result: Overfitting. check the following image: Case3: train_datagen = ImageDataGenerator( rescale=1./255, shear_range=0.2, zoom_range=0.2, horizontal_flip=True) validate_datagen = ImageDataGenerator(rescale=1./255) Test_datagen = ImageDataGenerator(rescale=1./255) Case3 result: High training accuracy and validation accuracy, but the training accuracy is lower.. check the following image: 1- Why does using augmentation in validation and testing data ImageDataGenerator(rescale=1./255) in case3 give different result than case2? 2- Is adding ImageDataGenerator(rescale=1./255) to the testing and validation better than not adding it? 3- Do you think there is a problem in the result of the first case? Answer: 1 and 2: If you rescale you images, you should do it on all partitions: training, validation and test. 
If you only rescale your images on the training set, then your network will see very different values (0~255, vs 0.0~1.0) on validation/test set and therefore give poor accuracy. That's your case 2. I don't see any obvious problem.
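The mismatch in case 2 can be seen without training anything: the network is fit on inputs in [0, 1] but evaluated on inputs in [0, 255]. A minimal numpy illustration (no Keras needed; the arrays stand in for what each generator emits):

```python
import numpy as np

# Deterministic stand-in for a batch of 8-bit images.
images = np.linspace(0, 255, 4 * 32 * 32 * 3, dtype="float32").reshape(4, 32, 32, 3)

# Case 2: only the training generator has rescale=1./255.
train_batch = images / 255.0   # the network learns on values in [0, 1]
val_batch = images             # ...but is validated on values up to 255

print(train_batch.max(), val_batch.max())  # -> 1.0 255.0

# Case 3: the same rescale on all three generators gives matching scales.
val_batch_fixed = images / 255.0
assert train_batch.max() == val_batch_fixed.max() == 1.0
```

With mismatched scales, the validation activations are far outside anything seen during training, so the curves look like severe overfitting even when the model itself is fine; that is the case 2 plot.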
{ "domain": "datascience.stackexchange", "id": 3781, "tags": "deep-learning, keras, tensorflow, computer-vision, convolutional-neural-network" }
CMakeError: package configuration by moveit_common
Question: I have a package that uses moveit2 and when I try to build it with colcon build I get the following error: CMake Error at CMakeLists.txt:28 (find_package): By not providing "Findmoveit_common.cmake" in CMAKE_MODULE_PATH this project has asked CMake to find a package configuration file provided by "moveit_common", but CMake did not find one. Could not find a package configuration file provided by "moveit_common" with any of the following names: moveit_commonConfig.cmake moveit_common-config.cmake Add the installation prefix of "moveit_common" to CMAKE_PREFIX_PATH or set "moveit_common_DIR" to a directory containing one of the above files. If "moveit_common" provides a separate development package or SDK, be sure it has been installed. I tried to uninstall and install moveit with the command: sudo apt install ros-foxy-moveit. I sourced the ros environment from the directory I'm calling colcon build: source /opt/ros/foxy/setup.bash. I added <build_depend>moveit_common</build_depend> in my package.xml and find_package(moveit_common REQUIRED) in my CMakeLists.txt. None of those worked I didn't find anything relevant online so far. I don't understand how I can add the installation prefix of moveit_common in my CMAKE_PREFIX_PATH. What is the problem here? I'm using Ubuntu 20.04, ROS2 Foxy Originally posted by Spyros on ROS Answers with karma: 51 on 2021-12-10 Post score: 0 Answer: After a long search, I found out that moveit_common is not installed together with moveit. I installed it separately with: sudo apt install ros-foxy-moveit-common Originally posted by Spyros with karma: 51 on 2021-12-20 This answer was ACCEPTED on the original site Post score: 2 Original comments Comment by gvdhoorn on 2021-12-20: Please report this on the MoveIt 2 issue tracker if you feel that package should be installed by default.
{ "domain": "robotics.stackexchange", "id": 37241, "tags": "ros2" }
Unusual applications of regular expressions?
Question: I am looking for some interesting applications of regular expressions. Can you name any unusual, or unobvious, cases where regexes find their application? Answer: I don't know if this question belongs here (the answer could be subjective and depend on your definition of "unusual") but here is my favorite unusual application of regex: converting T9 input (2-9) to English text. For example, if the user wants to write hello they press 43556. Convert the input to [ghi][def][jkl][jkl][mno] and test this regex against the whole vocabulary: the word hello will match.
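The trick is compact enough to sketch fully (the three-word vocabulary below is a stand-in for a real word list):

```python
import re

T9 = {'2': 'abc', '3': 'def', '4': 'ghi', '5': 'jkl',
      '6': 'mno', '7': 'pqrs', '8': 'tuv', '9': 'wxyz'}

def t9_pattern(digits):
    """Turn a key sequence into a regex: '43556' -> '^[ghi][def][jkl][jkl][mno]$'."""
    return '^' + ''.join('[%s]' % T9[d] for d in digits) + '$'

def t9_matches(digits, vocabulary):
    pattern = re.compile(t9_pattern(digits))
    return [word for word in vocabulary if pattern.match(word)]

print(t9_matches('43556', ['hello', 'world', 'jelly']))  # -> ['hello']
```

One regex per keypress sequence and one pass over the vocabulary; T9 ambiguity (several words sharing a key sequence) falls out naturally as a multi-element result list.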
{ "domain": "cs.stackexchange", "id": 21457, "tags": "regular-expressions" }