What does $\int_C V \,d\mathbf{l}$ mean?
Question: What does $$\int_C V \,d\mathbf{l}$$ mean? I initially thought it was simply a line integral around $C$, that is, if $\mathbf{r}: [0,1] \longrightarrow \mathbb{R}^3$ is a parametrization of $C$, then $$\int_C V d\mathbf{l} = \int_0^1 V(\mathbf{r}(t))\, \left|\frac{d}{dt}\mathbf{r}(t)\right| \, dt$$ But quoting my textbook (Field and Wave Electromagnetics, 2nd edition by David K. Cheng): (...) $V$ is a scalar function of space, $d\mathbf{l}$ represents a differential increment of length, and $C$ is the path of integration. If the integration is to be carried out from a point $P_1$ to another point $P_2$, we write $\int_{P_1}^{P_2} V d\mathbf{l}$. If the integration is to be evaluated around a closed path $C$, we denote it by $\oint_C V d\mathbf{l}$. In Cartesian coordinates, it can be written as $$\int_C V d\mathbf{l} = \int_C V(x,y,z)[\mathbf{a}_x \,dx + \mathbf{a}_y \,dy + \mathbf{a}_z \,dz],$$ (...) which becomes $$\int_C V d\mathbf{l} = \mathbf{a}_x\int_C V(x,y,z)\,dx + \mathbf{a}_y\int_C V(x,y,z)\,dy + \mathbf{a}_z\int_C V(x,y,z)\,dz.$$ The three integrals on the right-hand side are ordinary scalar integrals; they can be evaluated for a given $V(x,y,z)$ around a path $C$. I don't understand what the integrals in the last expression mean. For example, in $$\int_C V(x,y,z) \, dx$$ how is this an "ordinary scalar integral" if $C$ is a line that may lie entirely off the $x$-axis? Answer: The differential element $d\mathbf{l}$ projects onto the path, and the $\mathbf{a}_i$ are the unit vectors along the axes. In your formulation, the only mistake is the absolute value: you begin with a vector, and so should end up with a vector. Note that sometimes we want the area under a curve defined on a scalar field, $\int_C f(x,y,z) \, ds$. Note $ds$ is not a vector.
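To make the component integrals concrete, here is a worked reading of the notation (a sketch using the parametrization from the question, not taken from the textbook):

```latex
% With \mathbf{r}(t) = (x(t), y(t), z(t)), t \in [0,1], a parametrization of C,
% each component integral is an ordinary one-dimensional integral over t:
\[
\int_C V(x,y,z)\,dx = \int_0^1 V\bigl(x(t), y(t), z(t)\bigr)\, x'(t)\, dt,
\]
% and likewise for dy and dz. C need not lie on the x-axis: dx is simply the
% differential of the x-coordinate along the path. Collecting the components,
\[
\int_C V\,d\mathbf{l} = \int_0^1 V(\mathbf{r}(t))\,\mathbf{r}'(t)\, dt,
\]
% which is the vector-valued analogue of the question's formula, with
% \mathbf{r}'(t) in place of |\mathbf{r}'(t)|.
```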
{ "domain": "physics.stackexchange", "id": 13454, "tags": "electromagnetism, notation, integration" }
English translation of Heisenberg's paper ``Über den anschaulichen Inhalt der quantentheoretischen Kinematik und Mechanik''
Question: Is there any English translation of Heisenberg's paper Über den anschaulichen Inhalt der quantentheoretischen Kinematik und Mechanik online? I have not found one. Thanks so much. Answer: Google Translate + Google Scholar ⇒ the translation is available for free on the NASA Technical Reports Server: The actual content of quantum theoretical kinematics and mechanics.
{ "domain": "physics.stackexchange", "id": 14846, "tags": "quantum-mechanics, specific-reference" }
Why is LH level much higher than FSH level at ovulation?
Question: My teacher showed us an elaborate collection of graphs, with one of them showing FSH and LH plasma levels during the menstrual cycle. The LH level was almost 3 times higher than the FSH level. Why? Does this have anything to do with their synthesis or cellular responses? Answer: It is not the absolute plasma levels that are important; what matters is their relative changes during the cycle. The steep rise in FSH during the first days induces follicle stimulation. The LH surge at day 12 induces maturation of the egg and stimulates its release (see the wiki page). The fact that LH's absolute plasma levels are higher may theoretically have to do with the fact that more LH is needed to elicit its physiological response. However, the difference in their relative levels is far more important from a physiological point of view.
{ "domain": "biology.stackexchange", "id": 3018, "tags": "physiology, endocrinology" }
The end of the Universe
Question: I was reading about antimatter and dark energy and how Hawking radiation can destroy black holes when I came across three theories about how the universe might end. There are various theories regarding the end of the universe, like the Big Rip, the Big Freeze, and the Big Crunch. The first two reason that dark energy is stronger than the gravitational force between objects, and the last one reasons that gravitational attraction is, in fact, the stronger one. The Big Freeze says the universe will experience a heat death where the endless expansion of the universe will cause objects to get farther and farther away until the stars lose all fuel for nuclear reactions and die out, and the only things remaining are the black holes. The Big Rip is a little bit similar, stating that all objects, from stars and galaxies to atoms and subatomic particles, and even spacetime itself, are progressively torn apart by the expansion of the universe until the distances between particles become infinite and the universe as we know it ceases to exist. The Big Crunch, on the other hand, states that since gravity is stronger than the force exerted by dark energy, the expansion of the universe eventually reverses and the universe recollapses, till the last of the atoms and subatomic particles fuse, ultimately causing the cosmic scale factor to reach zero, an event potentially followed by a reformation of the universe starting with another Big Bang. Is my understanding of these three theories correct? Please elaborate and explain these if my understanding is wrong. Answer: Based on currently accumulated astronomical evidence, the Big Freeze is closest to what I understand most cosmologists accept as most likely. There is a slight detail you left out, which is that black holes will eventually convert all of their mass into particles as a result of Hawking radiation.
Also, since gravity keeps the mass of a galaxy (and of some galaxy groups) bound, these collections will remain ordinary matter until all of the energy of orbital motion is dissipated, and the galactic component objects will eventually become black holes, and then much, much later become particles through Hawking radiation. The Big Crunch is almost certain not to happen. It would only happen if some currently unknown physics were to cause the dark energy to cease to exist. The Big Rip requires an expansion rate large enough that it can pull apart the mass inside a black hole. Imagine a very large black hole. At its event horizon the escape velocity is c. The expansion causes the separation velocity between a reference origin and the corresponding radius of the observable universe to be c. In order for the expansion to pull apart a black hole, that radius would have to be smaller than the radius of the black hole's event horizon. I have never seen any cosmological calculation that says this is possible.
{ "domain": "physics.stackexchange", "id": 78871, "tags": "cosmology, universe" }
Why isn't my executable compiling?
Question: So right now, I have two executables within one package. I had created the first executable a few days back. But after creating the second executable and using the catkin_make command, only the first program gets compiled and not the second. What am I doing wrong? Is it because of the two executables within one package? Also, if I type rosrun package executable2, I get the following error - "Couldn't find executable named executable2" My CMake file - cmake_minimum_required(VERSION 2.8.3) project(first_package) find_package(catkin REQUIRED COMPONENTS roscpp geometry_msgs) catkin_package() include_directories(${catkin_INCLUDE_DIRS}) add_executable(hello hello.cpp pubvel pubvel.cpp) target_link_libraries(hello ${catkin_LIBRARIES}) I am pretty sure the error is due to the add_executable part of my CMakeLists.txt. hello.cpp is the first executable whereas pubvel.cpp is the second executable. Originally posted by TristanNoctis on ROS Answers with karma: 29 on 2019-01-13 Post score: 1 Original comments Comment by PeteBlackerThe3rd on 2019-01-13: Can you update your question with your CMakeLists.txt file and the details of the source files that are needed to compile each executable. We can't help you if we don't know how your workspace is set up. Comment by Hypomania on 2019-01-13: We need to see your CMakeLists.txt. As to your last sentence, have you sourced your catkin_ws and restarted the terminal? Comment by TristanNoctis on 2019-01-13: Hey, I have updated my CMakeLists.txt. Can you please check? Comment by TristanNoctis on 2019-01-13: This is my other workspace that I have created other than the 'catkin_ws'. And yes, I have sourced it before using catkin_make Comment by Hypomania on 2019-01-13: Try the first part of my answer. Comment by Hypomania on 2019-01-13: Also try adding a new add_executable and target_link_libraries for pubvel instead of merging them into one (I am not sure if you can do what you are doing). 
Answer: You have to add appropriate CMake macros at the end of your CMakeLists.txt file: At the very bottom add these two lines for each executable: add_executable(node_name src/file_name.cpp) target_link_libraries(node_name ${catkin_LIBRARIES}) This will tell catkin to build and install the executable in your devel/lib/package_name directory. If none of the above works, I am presuming there's something wrong with your ROS install. Providing your CMakeLists.txt file would be helpful. EDIT: Update your CMakeLists.txt to: add_executable(hello src/hello.cpp) target_link_libraries(hello ${catkin_LIBRARIES}) add_executable(pubvel src/pubvel.cpp) target_link_libraries(pubvel ${catkin_LIBRARIES}) Ensure your source files are stored in the ~/your_ws/src/your_package/src directory. Originally posted by Hypomania with karma: 120 on 2019-01-13 This answer was ACCEPTED on the original site Post score: 2 Original comments Comment by PeteBlackerThe3rd on 2019-01-13: Although you're correct in the end that the problem was with the add_executable tags in CMakeLists.txt. You absolutely do not have to re-source your workspace every time you run catkin_make. Comment by TristanNoctis on 2019-01-14: Your update to the CMakeLists.txt worked! Only problem now is that I have to source the setup.bash each time I open a new terminal. Comment by Hypomania on 2019-01-14: @PeteBlackerThe3rd, my apologies, I meant to say every time you build a new package, where new devel and build folders are created! Comment by Hypomania on 2019-01-14: @TristanNoctis, you don't, just follow the steps I added in part 1. Comment by PeteBlackerThe3rd on 2019-01-14: @Hypomania no problem, could you update your answer so it doesn't include the part about sourcing the workspace. It's quite confusing at the moment, as the first half doesn't answer the OP's question and is incorrect. The second half is great though. Comment by Hypomania on 2019-01-14: @PeteBlackerThe3rd, initially I thought his ws wasn't sourced. 
If it's not sourced it won't get recognized by bash (correct me if I am wrong), is that still irrelevant to the question? Comment by PeteBlackerThe3rd on 2019-01-14: That's true, but the question was about compiling multiple binaries. The first binary was okay, then the problem appeared when they tried to compile a second binary. Plus the actual fault in this case was the CMakeLists.txt so that should be the focus of the answer. Comment by Hypomania on 2019-01-15: @PeteBlackerThe3rd, that makes sense, I will edit it out.
{ "domain": "robotics.stackexchange", "id": 32267, "tags": "ros, ros-melodic" }
What is the reason behind the phenomenon of Joule-Thomson effect?
Question: For an ideal gas there is no heating or cooling during an adiabatic expansion or contraction, but for real gases, an adiabatic expansion or contraction is generally accompanied by a heating or cooling effect. What is the reason behind such a phenomenon? Is it related to the properties of real gases or is it something else? Answer: In a reversible adiabatic expansion or compression, the temperature of an ideal gas does change. In a Joule-Thomson type of irreversible adiabatic expansion (e.g., in a closed container), the internal energy of the gas does not change. For an ideal gas, the internal energy depends only on its temperature. So, for an irreversible adiabatic expansion of an ideal gas in a closed container, the temperature does not change. But the internal energy of a real gas depends not only on its temperature but also on its specific volume (which increases in an expansion). So, for a real gas, its temperature changes. The Joule-Thomson effect is one measure of the deviation of a gas from ideal gas behavior. ADDENDUM This addresses a comment from the OP regarding the effect of specific volume on the internal energy of a real gas. Irrespective of the Joule-Thomson effect, one can show (using a combination of the first and second laws of thermodynamics) that, for a pure real gas, liquid, or solid (or a mixture of constant chemical composition), the variation of specific internal energy with respect to temperature and specific volume is given by: $$dU=C_vdT-\left[P-T\left(\frac{\partial P}{\partial T}\right)_V\right]dV$$ The first term describes the variation with respect to temperature and the second term describes the variation with respect to specific volume. For an ideal gas, the second term is equal to zero. However, for a real gas, the second term is not zero, which means that, at constant internal energy (as in the Joule-Thomson effect), the temperature will change when the specific volume changes. 
This is a direct result of the deviation from ideal gas behavior.
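As a concrete illustration of the volume-dependent term, here is a short worked example using the van der Waals equation of state (a sketch added for illustration, not part of the original answer):

```latex
% For a van der Waals gas, P = RT/(V - b) - a/V^2 (molar quantities), so
% (dP/dT)_V = R/(V - b), and the volume-dependent term in dU becomes
\[
T\left(\frac{\partial P}{\partial T}\right)_V - P
  = \frac{RT}{V-b} - \left(\frac{RT}{V-b} - \frac{a}{V^2}\right)
  = \frac{a}{V^2} > 0 .
\]
% At constant internal energy, dU = 0 gives C_v\,dT = -(a/V^2)\,dV, so an
% expansion (dV > 0) forces dT < 0: the real gas cools. Setting a = 0
% (ideal gas) recovers dT = 0, the ideal-gas result from the answer.
```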
{ "domain": "chemistry.stackexchange", "id": 9871, "tags": "physical-chemistry, thermodynamics" }
Is there a relation between transition density and density differences?
Question: When I have an excited state, is there a relation between the transition density associated with this electronic transition and the density difference between the excited state density and the ground state density? Can the transition density be equal to the difference density? Answer: By definition, the transition density is not the same as the difference density. In the following derivation, the transition density is $\mathbf{T}^{\mathrm{CIS}}$ and the difference density is $\mathbf{P}^{\Delta}$. Although it may seem like they can be formed from the same quantities, the true difference density contains orbital relaxation terms and can be used to build a relaxed density, which is the correct excited state density. Neglecting those terms (the $P_{ia}^{\Delta}$) will give an unrelaxed density. I will also try to show that the unrelaxed difference density is still not equivalent to the transition density. Derivation of individual CIS densities Here is the derivation of the transition density for doubly-occupied spin-restricted MOs, with quoted blocks and equations directly taken from the original CIS paper. The CIS approximation is identical to the TDA approximation in TD-DFT. Full TD-HF or TD-DFT follows the RPA equations, which include deexcitation terms in addition to excitation terms. The equations would be different for RPA, but the conclusion would be identical. The standard indexing convention is followed: $\mu, \nu, \lambda, \sigma$ for AOs, $i,j,k,l$ for occupied MOs, $a,b,c,d$ for virtual MOs, and $p,q,r,s$ for any MO. 
Define the Hartree-Fock (or SCF-type, valid for DFT as well) density as a sum over occupied MOs: $$ P_{\mu\nu}^{\mathrm{HF}} = \sum_{i=1}^{n} c_{\mu i} c_{\nu i} $$ The two-particle CI-singles density matrix, $\mathbf{\Gamma}^{\mathrm{CIS}}$, can be written in terms of the HF ground-state density matrix and the ground-to-excited-state transition density matrix, $\mathbf{T}^{\mathrm{CIS}}$: $$ \Gamma_{\mu\nu\lambda\sigma}^{\mathrm{CIS}} = \frac{1}{2} \left[ P_{\mu\nu}^{\mathrm{HF}} P_{\lambda\sigma}^{\mathrm{HF}} + 2 T_{\mu\nu}^{\mathrm{CIS}} T_{\lambda\sigma}^{\mathrm{CIS}} - P_{\mu\sigma}^{\mathrm{HF}} P_{\lambda\nu}^{\mathrm{HF}} - 2 T_{\mu\sigma}^{\mathrm{CIS}} T_{\lambda\nu}^{\mathrm{CIS}} \right] $$ where $\mathbf{P}^{\mathrm{HF}}$ is given (above) while $\mathbf{T}^{\mathrm{CIS}}$ $$ T_{\mu\nu}^{\mathrm{CIS}} = \sum_{ia} a_{ia} c_{\mu i} c_{\nu a} $$ I can't find the definition of $\{a\}$ in the paper, but it is the set of coefficients from a converged CIS calculation that describe the single-particle excitation from every occupied orbital to every unoccupied orbital. As an aside, they are the elements of the $\mathbf{X}$ vector in the RPA eigenvalue problem: $$ \begin{pmatrix} \mathbf{A} & \mathbf{B} \\ \mathbf{B} & \mathbf{A} \end{pmatrix} \begin{pmatrix} \mathbf{X} \\ \mathbf{Y} \end{pmatrix} = \omega \begin{pmatrix} \mathbf{1} & \mathbf{0} \\ \mathbf{0} & \mathbf{-1} \end{pmatrix} \begin{pmatrix} \mathbf{X} \\ \mathbf{Y} \end{pmatrix} $$ $\mathbf{A}$ and $\mathbf{B}$ are parts of the orbital Hessian. If you diagonalize the whole thing, you get the set $\{\omega\}$ of all excitation energies back as eigenvalues. As mentioned above, setting $\mathbf{B} = \mathbf{0}$ gives the TDA or CIS approximation, so there is no longer a deexcitation vector $\mathbf{Y}$. Importantly, we always start from the set of ground-state MO coefficients $\{c\}$ and describe excitations on top of them. 
The CI-singles excited-state density matrix, $\mathbf{P}^{\mathrm{CIS}}$, is also constructed as a sum of HF and excited-state terms: $$ P_{\mu\nu}^{\mathrm{CIS}} = P_{\mu\nu}^{\mathrm{HF}} + P_{\mu\nu}^{\Delta} $$ Here we have introduced $\mathbf{P}^{\Delta}$, the CI-singles $\Delta$ density matrix. This can also be called a "difference density matrix", since it represents the changes in electronic distribution upon excitation. It is not, however, the same as the transition density matrix $\mathbf{T}^{\mathrm{CIS}}$ defined above. As we shall demonstrate, it is the use of the true CI-singles density matrix required by (the CIS gradient expression that isn't important here) and not the simple one-particle density matrix (1PDM) which allows the realistic computation of charge distributions, orbital populations, and electronic moments of the excited state. To see this distinction, first consider the $\Delta$ density matrix which would be added to the HF density matrix in order to generate the 1PDM for an excited state. In the MO basis, it is a symmetric matrix with both occupied-occupied (OO) and virtual-virtual contributions $$ \begin{align} P_{ij}^{\Delta} &= -\sum_{ab} a_{ia} a_{jb} \\ P_{ab}^{\Delta} &= +\sum_{ij} a_{ia} a_{jb} \end{align} $$ with the occupied-virtual (OV) elements all zero. The true CI-singles density matrix required in (the CIS gradient expression) will have exactly the same OO and VV contributions, but the OV terms are not all zero. The appearance of these off-diagonal block elements in the excited-state density matrix can be interpreted as orbital relaxation following the initial gross charge rearrangement due to excitation. That is to say, the CI coefficients will by themselves describe some of the gross features of charge redistribution in the excited state, but the response of the wave function to an external perturbation will account for further refinement in electronic properties. 
These OV terms can be found by solving a single set of CPHF equations: $$ L_{ai} = \sum_{bj} \left[ (ij||ab) - (ib||ja) \right] P_{bj}^{\Delta} + (\epsilon_a - \epsilon_i) P_{ai}^{\Delta} $$ where the $\mathbf{L}$ vector is the CI-singles Lagrangian: $$ \begin{align} L_{ai} &= C1_{ai} - C2_{ai} + \sum_{kl} P_{kl}^{\Delta} (al||ik) + \sum_{bc} P_{bc}^{\Delta} (ab||ic) \\ C1_{ci} &= -2 \sum_{jab} a_{ia} a_{jb} (cb||ja) \\ C2_{bk} &= -2 \sum_{ija} a_{ia} a_{jb} (ik||ja) \end{align} $$ The total CI-singles $\Delta$ density matrix required for $\mathbf{P}^{\mathrm{CIS}}$ can be generated by backtransforming the entire MO basis $\Delta$ density matrix defined by $P_{ij}^{\Delta}$, $P_{ab}^{\Delta}$, and $P_{ia}^{\Delta}$: $$ P_{\mu\nu}^{\Delta} = \sum_{pq} P_{pq}^{\Delta} c_{\mu p} c_{\nu q} $$ Show that the transition density and unrelaxed difference density are not equivalent We want to compare $T_{\mu\nu}^{\mathrm{CIS}}$ and $P_{pq}^{\Delta}$, but they are not in the same basis, so one must be transformed. Because I don't want to mess with the block-diagonal structure of $\mathbf{P}_{\mathrm{MO}}^{\Delta}$, I will transform $\mathbf{T}$ to the MO basis: $$ \begin{align} T_{pq}^{\mathrm{CIS}} &= \sum_{\mu\nu} c_{\mu p} T_{\mu\nu}^{\mathrm{CIS}} c_{\nu q} \\ &= \sum_{\mu\nu} \sum_{ia} c_{\mu p} c_{\mu i} a_{ia} c_{\nu a} c_{\nu q}. \end{align} $$ Now consider the unrelaxed difference density, meaning the ov/vo blocks are zero due to neglecting orbital relaxation effects: $$ P_{pq}^{\Delta} = \left( \begin{array}{c|c} -\sum_{ab} a_{ia} a_{jb} & \mathbf{0} \\ \hline \mathbf{0} & +\sum_{ij} a_{ia} a_{jb} \end{array} \right)_{pq}. $$ Here is a slightly hand-waving proof. I am not sure if the MO coefficients in the double sum can be simplified, but it doesn't matter; assume they are unity. $T_{pq}^{\mathrm{CIS}}$ contains terms with $a$, while $P_{pq}^{\Delta}$ contains terms with $|a|^2$. 
Unless there is an extra transformation that can be done, I don't see how they can be equivalent. If you want a little test, generate some fake matrices and see for yourself. $\texttt{x} \equiv X_{ia} \equiv a_{ia}$. #!/usr/bin/env python from __future__ import print_function import numpy as np np.random.seed(42) nocc = 2 nvirt = 2 norb = nocc + nvirt nov = nocc * nvirt nbasis = norb x = np.random.rand(nocc, nvirt) c = np.random.rand(nbasis, norb) cocc = c[:, :nocc] cvirt = c[:, nocc:] p_delta_oo = -np.einsum('ia,jb->ij', x, x) p_delta_vv = np.einsum('ia,jb->ab', x, x) p_delta_ov = np.zeros(shape=(nocc, nvirt)) p_delta_vo = p_delta_ov.T p_delta = np.asarray(np.bmat([[p_delta_oo, p_delta_ov], [p_delta_vo, p_delta_vv]])) # same as np.dot(cocc, np.dot(x, cvirt.T)) t_ao = np.einsum('ia,mi,na->mn', x, cocc, cvirt) # same as np.dot(c.T, np.dot(t_ao, c)) t_mo = np.einsum('mp,mi,ia,na,nq->pq', c, cocc, x, cvirt, c) print(t_mo) print(p_delta) which gives [[ 1.82856579 1.89892758 0.59008502 3.09931192] [ 1.48134568 1.53535055 0.49193325 2.47904162] [ 0.54242862 0.56250364 0.17874155 0.9109361 ] [ 1.79242654 1.85791848 0.59456093 3.00118592]] [[-1.75629929 -1.76345302 0. 0. ] [-1.76345302 -1.77063588 0. 0. ] [ 0. 0. 1.22441763 1.71443377] [ 0. 0. 1.71443377 2.40055604]] Final comment My original understanding of the term "difference density" comes from a coworker who was taking the real-space representation (cube files) of two state densities, say the ground state $0$ and an excited state $n$, and subtracting them. $$ \Delta \mathbf{P}^{0\rightarrow n} = \mathbf{P}^{(n)} - \mathbf{P}^{(0)} $$ Is this equivalent to the definition of the difference density from above? $$ \begin{align} \mathbf{P}^{(n)} - \mathbf{P}^{(0)} &= \mathbf{P}^{\mathrm{CIS}(n)} - \mathbf{P}^{\mathrm{HF}} \\ &= \left( \mathbf{P}^{\mathrm{HF}} + \mathbf{P}^{\Delta(n)} \right) - \mathbf{P}^{\mathrm{HF}} \\ &= \mathbf{P}^{\Delta(n)} \end{align} $$ Looks good! References James B. 
Foresman, Martin Head-Gordon, John A. Pople, and Michael J. Frisch. Toward a systematic molecular orbital theory for excited states. The Journal of Physical Chemistry 1992 96 (1), 135-149, DOI: 10.1021/j100180a030 Frank Neese. Prediction of molecular properties and molecular spectroscopy with density functional theory: From fundamental theory to exchange-coupling. Coordination Chemistry Reviews 2009 253 (5-6), 526-563, DOI: 10.1016/j.ccr.2008.05.014
{ "domain": "chemistry.stackexchange", "id": 8357, "tags": "theoretical-chemistry, density-functional-theory, td-dft" }
Define time in synthetic rosbag
Question: I am creating a rosbag from a CSV file. I am using the Python API. My question is: how can I set the time information in the bag with which it will be played back? The relevant parts of my current code look like this: import os import rosbag import rospy from nav_msgs.msg import Odometry import numpy as np try: file = os.path.expanduser('test.csv') # Get data from text file(s) data = np.genfromtxt(file, delimiter=',', skip_header=0, skip_footer=0, names=True) bag = rosbag.Bag('test.bag', 'w') time = data['Nano_from_start_of_regions'] time = time - time[0] v_x = data['Velocity_forwardms'] v_y = data['Velocity_lateralms'] v_z = data['Velocity_downms'] odom = Odometry() time = np.array(time) sec_time = time.astype(int) nsec_time = (time-sec_time)*10e8 nsec_time = np.array(nsec_time) nsec_time = nsec_time.astype(int) for i in range(len(time)): odom.header.frame_id = 'odom' odom.child_frame_id = 'base_link' odom.header.stamp.secs = sec_time[i] odom.header.stamp.nsecs= nsec_time[i] odom.twist.twist.linear.x = v_x[i] odom.twist.twist.linear.y = v_y[i] odom.twist.twist.linear.z = v_z[i] bag.write('odom',odom) finally: bag.close() However, the time in the header doesn't seem to have an influence on the bag. It just saves the messages as fast as possible and replays them at the same speed. How could I change that speed? Clock topic? Originally posted by Peter1 on ROS Answers with karma: 38 on 2018-05-10 Post score: 0 Answer: There is a third parameter to write that holds the time, set it to header.stamp. http://docs.ros.org/api/rosbag/html/python/ (I can't link straight to it because the html prevents it: click rosbag.bag on the left, then Bag on the right, then write() appears) rosbag play will use that time during playback, it doesn't use the header time because many messages don't have headers. Not supplying the time results in the current time being used. The tutorial example should be changed to show that if it doesn't. 
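To pass that timestamp through to write, the float seconds from the CSV have to be split into the secs/nsecs pair a rospy.Time takes. A minimal sketch of that conversion (the helper name to_secs_nsecs is made up for illustration; the commented bag.write call assumes the rosbag/rospy API described in the answer and needs a ROS install to run):

```python
def to_secs_nsecs(t):
    """Split float seconds into the (secs, nsecs) pair rospy.Time expects."""
    secs = int(t)
    nsecs = int(round((t - secs) * 1e9))
    return secs, nsecs

# Inside the loop from the question, the stamp would then go in as the
# third argument to write(), e.g. (hypothetical, requires ROS):
#   stamp = rospy.Time(*to_secs_nsecs(time[i]))
#   odom.header.stamp = stamp
#   bag.write('odom', odom, stamp)

print(to_secs_nsecs(1.5))   # (1, 500000000)
print(to_secs_nsecs(2.25))  # (2, 250000000)
```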
Originally posted by lucasw with karma: 8729 on 2018-05-10 This answer was ACCEPTED on the original site Post score: 1
{ "domain": "robotics.stackexchange", "id": 30801, "tags": "ros, rosbag, ros-kinetic" }
DatabaseHelper with multiple tables
Question: I am programming my first android app. It is going to be a quiz. It will have 40 different categories of questions and for each category I am going to have a SQLite Table, which holds the questions for that category. I have a working DatabaseHelper for the first 2 tables. If I keep going like this for the next 38 tables the code is going to be very very repetitive. I am just not experienced enough to figure out a way to make the code less repetitive. Could somebody with more experience help me to clean this code up and make it less repetitive? DatabaseHelper public class DatabaseHelper extends SQLiteOpenHelper { private static final int DATABASE_VERSION = 13; // Database version private static final String DATABASE_NAME = "PharmaQuestions"; // database name // Table Names private static final String TABLE_ACE = "ace"; private static final String TABLE_ANDROGENS = "androgens"; // General question columns private static final String ID = "id"; // question id private static final String QUES = "question"; // the question private static final String OPTA = "opta"; // option a private static final String OPTB = "optb"; // option b private static final String OPTC = "optc"; // option c private static final String OPTD = "optd"; // option d private static final String ANSWER = "answer"; // correct option private SQLiteDatabase database; // Create Table ace private static final String CREATE_TABLE_ACE = "CREATE TABLE IF NOT EXISTS " + TABLE_ACE + "( " + ID + " INTEGER PRIMARY KEY AUTOINCREMENT, " + QUES + " TEXT, "+OPTA +" TEXT, " +OPTB +" TEXT, "+OPTC+" TEXT, "+OPTD + " TEXT, " + ANSWER+ " TEXT)"; // Create Table androgens private static final String CREATE_TABLE_ANDROGENS = "CREATE TABLE IF NOT EXISTS " + TABLE_ANDROGENS + "( " + ID + " INTEGER PRIMARY KEY AUTOINCREMENT, " + QUES + " TEXT, "+OPTA +" TEXT, " +OPTB +" TEXT, "+OPTC+" TEXT, "+OPTD + " TEXT, " + ANSWER+ " TEXT)"; public DatabaseHelper(Context context) { super(context, DATABASE_NAME, null, 
DATABASE_VERSION); } @Override public void onCreate(SQLiteDatabase db) { // creating required tables database = db; db.execSQL(CREATE_TABLE_ACE); db.execSQL(CREATE_TABLE_ANDROGENS); addQuestions(); } @Override public void onUpgrade(SQLiteDatabase db, int oldV, int newV) { // Drop older table if existed db.execSQL("DROP TABLE IF EXISTS " + TABLE_ACE); db.execSQL("DROP TABLE IF EXISTS " + TABLE_ANDROGENS); // Create tables again onCreate(db); } private void addQuestions() { // ACE Questions Question ace1 = new Question("ACE Frage Nr. 1?","Antwort1", "Antwort2", "Antwort3", "Antwort4", "Antwort1"); this.addACEQuestion(ace1); Question ace2 = new Question("ACE Frage Nr. 2?", "Antwort1", "Antwort2", "Antwort3", "Antwort4", "Antwort2"); this.addACEQuestion(ace2); // Androgen Questions Question androgen1 = new Question("Androgen Frage Nr. 1?", "Antwort1", "Antwort2", "Antwort3", "Antwort4", "Antwort2"); this.addAndrogensQuestion(androgen1); Question androgen2 = new Question("Androgen Frage Nr. 2?", "Antwort1", "Antwort2", "Antwort3", "Antwort4", "Antwort2"); this.addAndrogensQuestion(androgen2); } // Adding ace question public void addACEQuestion(Question quest) { //SQLiteDatabase db = this.getWritableDatabase(); ContentValues values = new ContentValues(); values.put(QUES, quest.getQUESTION()); values.put(OPTA, quest.getOPTA()); values.put(OPTB, quest.getOPTB()); values.put(OPTC, quest.getOPTC()); values.put(OPTD, quest.getOPTD()); values.put(ANSWER, quest.getANSWER()); // Inserting Rows database.insert(TABLE_ACE, null, values); } // Adding androgen question public void addAndrogensQuestion(Question quest) { //SQLiteDatabase db = this.getWritableDatabase(); ContentValues values = new ContentValues(); values.put(QUES, quest.getQUESTION()); values.put(OPTA, quest.getOPTA()); values.put(OPTB, quest.getOPTB()); values.put(OPTC, quest.getOPTC()); values.put(OPTD, quest.getOPTD()); values.put(ANSWER, quest.getANSWER()); // Inserting Rows database.insert(TABLE_ANDROGENS, null, 
values); } public List<Question> getAllACEQuestions() { List<Question> quesList = new ArrayList<Question>(); // Select All Query String selectQuery = "SELECT * FROM " + TABLE_ACE; database = this.getReadableDatabase(); Cursor cursor = database.rawQuery(selectQuery, null); // looping through all rows and adding to list if (cursor.moveToFirst()) { do { Question quest = new Question(); quest.setID(cursor.getInt(0)); quest.setQUESTION(cursor.getString(1)); quest.setOPTA(cursor.getString(2)); quest.setOPTB(cursor.getString(3)); quest.setOPTC(cursor.getString(4)); quest.setOPTD(cursor.getString(5)); quest.setANSWER(cursor.getString(6)); quesList.add(quest); } while (cursor.moveToNext()); } // return quest list return quesList; } public List<Question> getAllAndrogensQuestions() { List<Question> quesList = new ArrayList<Question>(); // Select All Query String selectQuery = "SELECT * FROM " + TABLE_ANDROGENS; database = this.getReadableDatabase(); Cursor cursor = database.rawQuery(selectQuery, null); // looping through all rows and adding to list if (cursor.moveToFirst()) { do { Question quest = new Question(); quest.setID(cursor.getInt(0)); quest.setQUESTION(cursor.getString(1)); quest.setOPTA(cursor.getString(2)); quest.setOPTB(cursor.getString(3)); quest.setOPTC(cursor.getString(4)); quest.setOPTD(cursor.getString(5)); quest.setANSWER(cursor.getString(6)); quesList.add(quest); } while (cursor.moveToNext()); } // return quest list return quesList; } public int acerowcount() { int row=0; String selectQuery = "SELECT * FROM " + TABLE_ACE; SQLiteDatabase db = this.getWritableDatabase(); Cursor cursor = db.rawQuery(selectQuery, null); row=cursor.getCount(); return row; } public int androgenrowcount() { int row=0; String selectQuery = "SELECT * FROM " + TABLE_ANDROGENS; SQLiteDatabase db = this.getWritableDatabase(); Cursor cursor = db.rawQuery(selectQuery, null); row=cursor.getCount(); return row; } } I am really new to this and would be very thankful for every line of code I 
can get rid of. Answer: Your functions addACEQuestion and addAndrogensQuestion seem nearly identical. The only difference is a constant. Instead of copying and pasting these functions and changing that constant, you can make a single function and pass the constant in: public void addACEQuestion(Question quest) { addQuestion(quest, TABLE_ACE); } public void addAndrogensQuestion(Question quest) { addQuestion(quest, TABLE_ANDROGENS); } public void addQuestion(Question quest, String table) { //SQLiteDatabase db = this.getWritableDatabase(); ContentValues values = new ContentValues(); values.put(QUES, quest.getQUESTION()); values.put(OPTA, quest.getOPTA()); values.put(OPTB, quest.getOPTB()); values.put(OPTC, quest.getOPTC()); values.put(OPTD, quest.getOPTD()); values.put(ANSWER, quest.getANSWER()); // Inserting Rows database.insert(table, null, values); } It also seems like you can do this for getAllACEQuestions and getAllAndrogensQuestions, and for acerowcount and androgenrowcount. You are also duplicating logic in CREATE_TABLE_ACE and CREATE_TABLE_ANDROGENS, so I'd recommend building that CREATE statement from a single string that you substitute TABLE_ACE or TABLE_ANDROGENS into.
{ "domain": "codereview.stackexchange", "id": 22561, "tags": "java, android, sqlite" }
Web Development DSL
Question: I'm working on a DSL for web development, similar to sinatra. The git repository is here. I've been attempting to improve this code and write it for 4 months, and as a hobby programmer, I would like to have an evaluation of general readability/maintainability, and any ways I could more logically split up my code into files. I have a test suite which I would be happy to add to the question if that is part of the norm here. So mostly I'm looking for a code review to improve my code style, general organization (how the methods are ordered and how the files are split up), commenting practices (are my comments informative enough, are they too verbose), and general code readability. Please let me know if I should change anything to better fit the norms of this site, this is my first time here. My goal is for the code to be readable enough that I don't have to provide a README to describe how it works, but I do have one currently if that would be helpful. The code is for a rubygem based on rack. Here is an example usage (more examples can be found in the README): require "atd" request "/", "index.html" #=> for any request to / return index.html request "/home" do puts "/home was requested" @http[:output] = "This is home!" end atd.rb require_relative "atd/version" require "rack" require "webrick" require_relative "atd/builtin_class_modifications" require_relative "atd/routes" # Extension packs # require_relative "extensions/precompilers" # The assistant technical director of your website. It does the dirty work so you can see the big picture. module ATD # Creates a new ATD App based on the template of {ATD::App}. # @return [ATD::App] # @param [Symbol] name The name of the new app and new class generated. def new(name) app = Class.new(App) Object.const_set(name.to_sym, app) app end # So called because each instance stores a route, and will be called if that route is reached. 
# A route for the purposes of {ATD} is a parser that will be fed env in {ATD::App#call the rack app}. class Route attr_accessor :args, :method, :block, :path, :output, :app, :actions, :status_code # The first two arguments must be the path and the output. def initialize(*args, &block) @status_code = 200 @method = [:get, :post, :put, :patch, :delete] @method = [] if args.last.is_a?(Hash) && !(args.last[:respond_to].nil? || args.last[:ignore].nil?) @app = :DefaultApp parse_args(*args, &block) end # This works differently from a standard setter because it makes sure that a {Route} can belong to only one {App}. def app=(app_name) old_app = Object.const_get(@app) new_app = Object.const_get(app_name.to_sym) old_app.routes -= self if old_app.routes.is_a?(Array) && old_app.routes.include?(self) new_app.routes.nil? ? new_app.routes = Array(self) : new_app.routes += Array(self) @app = app_name end # @!method get(path = nil,*args) # @param [String] path The path at which the route should receive from. # @return ATD::Route # Sets route to receive a get request to path and execute the block provided (if one is provided) # @!method post(path = nil,*args) # @param [String] path The path at which the route should receive from. # @return ATD::Route # Sets route to receive a post request to path and execute the block provided (if one is provided) # @!method put(path = nil,*args) # @param [String] path The path at which the route should receive from. # @return ATD::Route # Sets route to receive a put request to path and execute the block provided (if one is provided) # @!method patch(path = nil,*args) # @param [String] path The path at which the route should receive from. # @return ATD::Route # Sets route to receive a patch request to path and execute the block provided (if one is provided) # @!method delete(path = nil,*args) # @param [String] path The path at which the route should receive from.
# @return ATD::Route # Sets route to receive a delete request to path and execute the block provided (if one is provided) [:get, :post, :put, :delete, :patch].each do |method| define_method(method) do |*args, &block| # This conditional allows the syntax get post put "/", "Hello" because it passes # the variables up through the different method calls. if args.first.is_a?(ATD::Route) @method = args.first.method @output = args.first.output @path = args.first.path @args = args.first.args @block = args.first.block @app = args.first.app @actions = args.first.actions end @method = [method] if @method.length == 5 @method += [method] @method.uniq! parse_args(*args, &block) end end # Converts an instance of {ATD::Route} into its Hash representation. # The format for the Hash is listed {ATD::App#initialize here} # @api private def to_h routes = {} routes[@path] = {} routes[@path][@method] = {} routes[@path][@method] = { status_code: @status_code, output: @output, block: @block, args: @args, route: self } routes end private # This should also manage @method at some point def parse_args(*args, &block) args.compact! args.flatten! args.reject! { |arg| arg.is_a?(File) || arg.is_a?(Proc) || arg ? false : arg.empty? } # File doesn't respond to empty @block = block # This requires the format ATD::Route.new(path, route, args) @path ||= args.shift @output ||= args.shift @args = Array(@args).concat(args) unless args.nil? # @output should be whatever the input is unless the input is a controller/action or the input is_file_string? if @output =~ /^\w*#\w*$/ # Check if @path is a controller#action combo controller, action = @output.split("#") @action = Object.const_get(controller.to_sym).method(action.to_sym) @output = @action.call end # TODO: Choose one! They all work... I think...
# Method 1: target_location = [] caller_locations.each do |caller_location| target_dir = File.dirname(caller_location.absolute_path.to_s) target_location.push(target_dir) unless target_dir.include?(__dir__) end # Method 2: target_location = caller_locations.reject do |caller_location| File.dirname(caller_location.absolute_path.to_s).include? __dir__ end output_full_path = "#{File.dirname(target_location[0].absolute_path)}/assets/#{@output}" @output = File.new(output_full_path) if File.exist?(output_full_path) && !Dir.exist?(output_full_path) if args.is_a?(Hash) || args.last.is_a?(Hash) @method += Array(args.last[:respond_to]) unless args.last[:respond_to].nil? @method -= Array(args.last[:ignore]) unless args.last[:ignore].nil? @status_code = args.last[:status] unless args.last[:status].nil? @status_code = args.last[:status_code] unless args.last[:status_code].nil? end self end end # A template {App} that all Apps extend. When a new App is created with {ATD.new ATD.new} it extends this class. class App attr_accessor :http class << self attr_accessor :routes # An array of instances of {ATD::Route} that belong to this {App}. # Generates an instance of {ATD::Route}. # Passes all arguments and the block to {Route.new the constructor} and sets the app where it was called from. def request(*args, &block) route = ATD::Route.new(*args, &block) route.app = (self == Object || self == ATD::App ? :DefaultApp : name.to_sym) route end alias req request alias r request [:get, :post, :put, :patch, :delete].each do |i| define_method(i) do |*args, &block| request.send(i, *args, &block) # Makes get == r.get, post == r.post, etc. end end # Starts the rack server # @param [Class] server The server that you would like to use. # @param [Fixnum] port The port you would like the server to run on. def start(server = WEBrick, port = 3150) Rack::Server.start(app: new, server: server, Port: port) end end # Sets up the @routes instance variable from the {.routes} class instance variable. 
# Can be passed an array of instances of {ATD::Route} and they will be added to @routes. # The format of the new @routes instance variable is: # {"/" => { # get: {output: "Hello World", # block: Proc.new}, # post: {output: "Hello World", # block: Proc.new} # }, # "/hello" => { # get: {output: "Hello World", # block: Proc.new}, # post: {output: "Hello World", # block: Proc.new # } # } # } # @param [Array] routes An array of instances of {ATD::Route}. def initialize(routes = []) @routes = {} Array(routes + self.class.routes).each do |route| route = route.clone filename = ATD::Compilation.precompile(route, (route.args.last.is_a?(Hash) ? route.args.last[:precompile] : nil)) route_hash = route.to_h current_route = route_hash[route.path][route.method] current_route[:filename] = filename block = current_route[:block] # An instance method must be defined from the block to make it the same as the controller actions. We don't want to # convert the controller actions to blocks because if we did that, we would have to take them out of scope to allow # them to use the @http variables. current_route[:block] = define_singleton_method(block.object_id.to_s.tr("0-9", "a-j").to_sym, &block) unless block.nil? current_route[:block] = route.actions unless route.actions.nil? @routes = @routes.to_h.deep_merge(route_hash) end end # Allows instance method route creation. Just another way of creating routes. def request(*args, &block) route = ATD::Route.new(*args, &block) filename = ATD::Compilation.precompile(route, (route.args.last.is_a?(Hash) ? route.args.last[:precompile] : nil)) route_hash = route.to_h route_hash[route.path][route.method][:filename] = filename @routes = @routes.to_h.deep_merge(route_hash) route end alias req request alias r request # Starts the rack server # @param [Class] server The server that you would like to use. # @param [Fixnum] port The port you would like the server to run on.
def start(server = WEBrick, port = 3150) Rack::Server.start(app: self, server: server, Port: port) end # This is the method which responds to .call, as the Rack spec requires. # It will return status code 200 and whatever output corresponds to that route if it exists, and if it doesn't # it will return status code 404 and the message "Error 404" def call(env) @http = nil route = route(env) return error(404) if route.nil? route[:output] = Compilation.compile(route[:filename], route[:output]) unless !route[:args].nil? && !route[:args].empty? && route[:args][0].is_a?(Hash) && route[:args][0][:compile] == false return [route[:status_code].to_i, Hash(route[:headers]), Array(route[:output])] if route[:block].nil? http output: route[:output], request: Rack::Request.new(env), method: env["REQUEST_METHOD"], response: Rack::Response.new(env) return_val = method(route[:block]).call @http[:output] = return_val if @http[:output].nil? [@http[:status_code].to_i, Hash(@http[:headers]), Array(@http[:output])] end private def route(env) return nil if @routes[env["PATH_INFO"]].nil? # return @routes[env["PATH_INFO"]][[]] unless @routes[env["PATH_INFO"]][[]].nil? @routes[env["PATH_INFO"]].include_in_key?(env["REQUEST_METHOD"].downcase.to_sym) end def http(additional_params) @http = { status_code: 200, headers: {} }.merge(additional_params) end def error(number) [number, {}, ["Error #{number}"]] end end module_function :new end # @return [ATD::Route] def request(*args, &block) ATD::App.request(args, block) end alias req request alias r request # Starts the rack server # @param [Class] app The app you would like to start # @param [Class] server The server that you would like to use. # @param [Fixnum] port The port you would like the server to run on.
def start(app = DefaultApp, server = WEBrick, port = 3150) Rack::Server.start(app: app.new, server: server, Port: port) end [:get, :post, :put, :patch, :delete].each do |i| define_method(i) do |*args, &block| request.send(i, *args, &block) end end Object.const_set(:DefaultApp, Class.new(ATD::App)) # Create DefaultApp atd/builtin_class_modifications.rb # @!visibility private class Hash # Not only merges two hashes, but also merges the hashes that may be nested in. # # For example: # {a: {b: "c"}} # Is a nested hash def deep_merge(second) merger = proc do |_, v1, v2| if v1.is_a?(Hash) && v2.is_a?(Hash) then v1.merge(v2, &merger) elsif v1.is_a?(Array) && v2.is_a?(Array) then v1 | v2 elsif [:undefined, nil, :nil].include?(v2) then v1 else v2 end end merge(second.to_h, &merger) end def include_in_key?(search) each do |key, val| return val if key.is_a?(Array) && key.include?(search) end end end # This method only exists for the test suite, specifically {ATDTest#test_route_creation}. # @!visibility private class Object # Checks if two objects are instances of the same class and that they have the same instance variables def same_properties_as?(other_class) other_class.class == self.class && class_instance_variables == other_class.class_instance_variables end # Returns the instance variables of a class def class_instance_variables instance_variables.map { |var| [var, instance_variable_get(var)] }.to_h end end atd/routes.rb module ATD # This module holds everything related to the compilation of routes. 
module Compilation # A module designed to hold all the precompilation methods module Precompiler extend self # Lists all filetypes that have defined precompiler methods def filetypes instance_methods(true) - [:filetypes] end end # A module designed to hold all the compilation methods module Compiler extend self # Lists all file extensions which have defined compiler methods def filetypes instance_methods(true) - [:filetypes] end end # This method is responsible for live compilation. It takes an ATD::Route as input, and returns either # the filename if Route.output is a file or the Route.output string if Route.output is a string. # It will also take the file and call the corresponding compilation method on it. def self.compile(name, contents) return contents if name.nil? contents = File.read(contents) if contents.is_a? File parse(Compiler, name, contents) end # This method is responsible for precompilation. It takes an ATD::Route as input, and returns either # the filename if Route.output is a file or the Route.output string if Route.output is a string. # It will also take the file and call the corresponding precompilation method on it. # route.output is either a full, expanded file path, a file, or a string def self.precompile(route, *opts) return nil if route.output.nil? if route.output.is_a?(File) name = route.output.is_a?(File) ? File.basename(route.output) : route.output.dup file = route.output.is_a?(File) ? route.output.dup : File.new(route.output) route.output = parse(Precompiler, name, File.read(file)) if opts[0].nil? || opts[0] return name end route.output end class << self private def parse(type, name, contents) name = name.split(".") extensions = name - [name.first] extensions.each do |extension| if type.filetypes.include? extension.to_sym contents = type.send(extension, contents) extensions -= [extension] end end contents end end end end class Object include ATD::Compilation::Compiler end Answer: Things you've done well There's a lot to like here.
This is good code, easy to read and understand. I especially like: Comments Standard formatting (two-space indent, etc.) Short methods A little more vertical white space Comment blocks that precede a method, module, or class definition should be preceded by a blank line. So instead of this: module Foo # Something about Bar class Bar # Something about baz def baz I prefer this: module Foo # Something about Bar class Bar # Something about baz def baz This helps to set methods, classes and definitions apart visually. Prefer lines shorter than 80 characters. Long lines cause the code to be hard to read when someone is using an editor with a narrower window than you used, or when the code is printed, or when it is displayed on a stackexchange site (witness the horizontal scroll bars above). For that reason, prefer lines shorter than 80 characters. Note: This is an opinion not universally held by Ruby programmers. Use alias_method instead of alias alias_method is generally preferred over alias. So, instead of: alias req request use: alias_method :req, :request Use the Forwardable module You did a good job using metaprogramming to define methods for you: [:get, :post, :put, :patch, :delete].each do |i| define_method(i) do |*args, &block| request.send(i, *args, &block) # Makes get == r.get, post == r.post, etc. end end For simple forwarding methods like this, there's a better way. Ruby's Forwardable module will do it: require 'forwardable' ... extend Forwardable ... delegate %i[get post put patch delete] => :request Dead code In this method: # This method is responsible for precompilation. It takes an # ATD::Route as input, and returns either the filename if # Route.output is a file or the Route.output string if # Route.output is a string. It will also take the file and call # the corresponding precompilation method on it. route.output is # either a full, expanded file path, a file, or a string def self.precompile(route, *opts) return nil if route.output.nil?
if route.output.is_a?(File) name = route.output.is_a?(File) ? ... file = route.output.is_a?(File) ? ... end route.output end The inner checks for route.output.is_a?(File) are redundant: the outer if has already determined that. Type checks are not always the best way to do things Note that checking an object's type is a code smell in Ruby. It's sometimes necessary, but often it is not. It can be preferable to ask the object if it responds to the behavior you want to use: `some_object.respond_to?(:some_method)` It can be even better to turn primitive objects into your own polymorphic classes that all behave the same way, so that no type or behavior check is needed at all. When doing this, a type or behavior check is often still needed, but only in the factory method used to create the first-class object. For more on this, see Confident Ruby by Avdi Grimm. For an example of how this might work in practice, let's look at a redacted version of the #initialize method from the usps_intelligent_barcode gem: # Create a new barcode # # @param routing_code [String] Nominally a String, but can be # anything that {RoutingCode.coerce} will accept. def initialize(routing_code) @routing_code = RoutingCode.coerce(routing_code) end RoutingCode::coerce does whatever it can to convert an object to a RoutingCode instance. This is where type checks are done in this code, but it's the only place. The rest of the code gets to work with a RoutingCode, without worrying how it came to be: # Turn the argument into a RoutingCode if possible.
Accepts: # * {RoutingCode} # * nil (no routing code) # * String of length: # * 0 - no routing code # * 5 - zip # * 9 - zip + plus4 # * 11 - zip + plus4 + delivery point # * Array of [zip, plus4, delivery point] # @return [RoutingCode] def self.coerce(o) case o when nil coerce('') when RoutingCode o when Array RoutingCode.new(*o) when String RoutingCode.new(*string_to_array(o)) else raise ArgumentError, 'Cannot coerce to RoutingCode' end end Semantic Versioning Have you thought of using semantic versioning? It provides a way for your version numbers to clearly communicate when your change to the library might break client code, making upgrades easier for users of your library.
{ "domain": "codereview.stackexchange", "id": 23315, "tags": "ruby, url-routing, framework, dsl" }
Got confused about weight loss in liquid
Question: Suppose two objects of the same mass but of different densities are dropped on water. The large object (say polypropylene) has a lower density than water and the small one (say stainless steel) has a higher density. As a result, the large ball will be floating and the small one will go inside the water. I can't understand which one will lose more weight? Plastic or iron. [I took all the materials arbitrarily] Answer: As a result, the large ball will be floating and the small one will go inside the water. I can't understand which one will lose more weight? Plastic or iron. Neither object "loses" weight. The iron object sinks simply because its density is greater than the density of the surrounding liquid and the plastic object floats simply because its density is less than the density of the surrounding liquid. Regardless of the size of an object, an object whose density is less than or equal to the density of the surrounding liquid will float and an object whose density is greater than the density of the surrounding liquid will sink. The volume $V_L$ of the liquid displaced by the object that floats (and thus equals the submerged volume of the object) will depend on the ratio of the density of the object, $\rho_{o}$, to the density of the liquid $\rho_{L}$, according to $$V_{L}=\frac{\rho _o}{\rho_L}V_o$$ or $$\rho_{L}V_{L}=\rho_{o}V_{o}$$ where $V_{o}$ is the total volume of the object, and where $\rho_{o}\le \rho_{L}$. Note that the left side of the equation is the weight of the volume of liquid displaced by the object and the right side is the weight of the object. The weight of the volume displaced by the object equals the upward buoyant force of the water acting on the object. As long as the upward buoyant force of the water on the object equals the weight of the object, the object will float. That's the case for the plastic object. If the buoyant force is less than the weight of the object, the object will sink. That's the case for the iron object. Hope this helps.
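The displaced-volume relation $\rho_L V_L = \rho_o V_o$ can be checked with a few lines of arithmetic. A minimal sketch, using illustrative round-number densities rather than measured material values:

```python
# Densities in kg/m^3 (illustrative round numbers, not measured values)
RHO_WATER = 1000.0
RHO_PLASTIC = 900.0   # a polypropylene-like plastic, less dense than water
RHO_STEEL = 7800.0    # a stainless-steel-like metal, denser than water

def submerged_fraction(rho_object, rho_liquid):
    """Fraction of the object's volume below the surface when floating.

    From rho_L * V_L = rho_o * V_o, the submerged fraction V_L / V_o
    equals rho_o / rho_L. A value >= 1 means the object cannot displace
    enough liquid to support its weight, i.e. it sinks.
    """
    return rho_object / rho_liquid

plastic = submerged_fraction(RHO_PLASTIC, RHO_WATER)  # floats, 90% submerged
steel = submerged_fraction(RHO_STEEL, RHO_WATER)      # ratio > 1, so it sinks
```

The plastic object floats with 90% of its volume under the surface; for the steel object the ratio exceeds 1, which is the sinking condition described in the answer.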
{ "domain": "physics.stackexchange", "id": 76403, "tags": "fluid-statics, free-body-diagram, density, buoyancy, weight" }
Get Date Taken from Image
Question: Is there a better way of doing this? I heard that BinaryReader and Exif parsing to get the property item is faster, but I have no idea how to do that, thanks for the help. //we init this once so that if the function is repeatedly called //it isn't stressing the garbage man static Regex r = new Regex(":"); //retrieves the datetime WITHOUT loading the whole image string[] GetDateTakenFromImage(string path) { try { using (FileStream fs = new FileStream(path, FileMode.Open, FileAccess.Read)) { using (Image myImage = Image.FromStream(fs, false, false)) { PropertyItem propItem = myImage.GetPropertyItem(36867); string dateTaken = r.Replace(Encoding.UTF8.GetString(propItem.Value), "-", 2); if(string.IsNullOrEmpty(dateTaken)) return null; else return DateTime.Parse(dateTaken).ToString("yyyy-MM-dd").Split('-'); } } } catch { return null; } } Answer: //retrieves the datetime WITHOUT loading the whole image string[] GetDateTakenFromImage(string path) Function name and parameter are named well. You should use an XML-doc header to document the quirk that it doesn't need to load the entire image. Overall I like that the method is concise: it mostly does only one thing, it's easy to read, and not needing to pre-load the whole image is a nice bonus. It is strange to use string[] to denote a date. You should be returning a DateTime?. Consider changing it to accept a Stream instead of a string path. Currently it's a bit burdensome to test your method because it requires a file path, even though all it's doing is getting a stream out of it anyway. By accepting a Stream instead, you can more easily put automated tests around it that use in-memory test data and avoid a whole litany of IO nonsense. using (FileStream fs = new FileStream(path, FileMode.Open, FileAccess.Read)) fs is a poor name. Give it some more meaning, like imageStream. It can also be written a bit more concisely: using (FileStream imageStream = File.OpenRead(path)) Likewise, myImage could just be named image. 
PropertyItem propItem = myImage.GetPropertyItem(36867); Avoid magic numbers -- that 36867 should be in a named constant somewhere: const int ExifDateTimeOriginal = 0x9003; Your error handling in general could be improved. If I was consuming this API, I would naturally expect exceptions relating to IO (file not found, file not accessible, not a valid image file, and so on) to propagate up. It's up to you whether you want to throw or return null in the case where everything is valid but the image simply doesn't have that tag. You're returning null if anything goes wrong which makes this harder to test. Be aware that myImage.GetPropertyItem(36867); will throw if the tag is not present (which in my opinion is a totally non-exceptional circumstance), so if you do tweak your method to propagate other exceptions you will need to put that one line around a try-catch for that one exception. The EXIF tag you're checking is encoded in ASCII according to the EXIF docs I've been able to find, so this should use Encoding.ASCII instead of Encoding.UTF8: string dateTaken = r.Replace(Encoding.UTF8.GetString(propItem.Value), "-", 2); You also don't need to do any string replacing. DateTime.ParseExact is handy for parsing dates encoded in custom formats: string dateTaken = Encoding.ASCII.GetString(propItem.Value); ... return DateTime.ParseExact(dateTaken.Trim('\0'), "yyyy:MM:dd HH:mm:ss", CultureInfo.InvariantCulture); Lastly, if you want to really adhere to the letter of the spec then depending on how you decide to modify or keep your method contract you'll need to handle the case where the date and time are unknown and all non-colon characters are replaced with spaces.
{ "domain": "codereview.stackexchange", "id": 38816, "tags": "c#, image" }
Requiring deep understanding of robot_state_publisher and TF
Question: Hi all, I have a turtlebot2, groovy, and a turtlebot arm. In reading the tutorial Using urdf with robot_state_publisher, I'm confused by its launch file. Why are there two "state_publishers"? (One is "robot_state_publisher", and the other is "state_publisher".) By saying "need a method for specifying what state the robot is in", the tutorial points out the function of "state_publisher". But in checking the robot_state_publisher package we can see the function of robot_state_publisher is "allows you to publish the state of a robot to tf", which seems similar to the function of "state_publisher". My goal is to broadcast TF info, so my turtlebot arm can do grasping based on it (which has a tf listening function). I have a URDF of the turtlebot with arm. Thanks for your help! Originally posted by Yantian_Zha on ROS Answers with karma: 74 on 2014-04-22 Post score: 0 Answer: The naming appears to be a bit confusing indeed. There are different representations of a robot's state. The state_publisher in the tutorial publishes the JointState of the robot, while the robot_state_publisher publishes full 6DOF tf data for all links of the robot, essentially fusing the static information about the robot from the URDF model and the dynamically changing JointState information. Originally posted by Stefan Kohlbrecher with karma: 24361 on 2014-08-26 This answer was ACCEPTED on the original site Post score: 2
{ "domain": "robotics.stackexchange", "id": 17743, "tags": "turtlebot, robot-state-publisher, transform" }
How do I represent a hidden Markov model in a data structure?
Question: My task involves POS tagging using an HMM. I am given a training data set (word/tag). I have to write a file with transition probabilities and emission probabilities. I am currently using a nested dictionary of the form {State1: {State2: count, State3: count}}. However, while calculating the probabilities now via the counts in the nested dict, my program is running very slowly for mid-size files (e.g. 2000 sentences and tags). Is there a better way to store an HMM in Python? For my project, I cannot use any external library that already does this, I must use standard Python libraries. Answer: With 29 states and 841 possible transitions to track whilst reading a file with 2000 entries (word, tag), you should not be experiencing a speed problem when using a dictionary of dictionaries. Assuming your data structure is as described, called transition_counts, and receives data in pairs (this_pos, next_pos), then running 2000 times: transition_counts[this_pos][next_pos] += 1 takes only a fraction of a second. This is similar for code that calculates $p(POS_{t+1}|POS_t)$: total_from_pos_t = sum(transition_counts[pos_t].values()) prob_pos_tplus_one = transition_counts[pos_t][pos_tplus_one] / total_from_pos_t This is very fast. Your problem is not with the representation.
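Putting the answer's two snippets together gives a runnable sketch. The tag names and the toy tag sequence below are invented for illustration, not taken from any real corpus:

```python
from collections import defaultdict

# Nested dict of transition counts: {state1: {state2: count, ...}}
transition_counts = defaultdict(lambda: defaultdict(int))

# Toy sequence of POS tags standing in for a tagged training corpus
tags = ["DET", "NOUN", "VERB", "DET", "NOUN", "VERB", "NOUN"]
for this_pos, next_pos in zip(tags, tags[1:]):
    transition_counts[this_pos][next_pos] += 1

def transition_prob(pos_t, pos_tplus_one):
    """Estimate p(POS_{t+1} | POS_t) from the transition counts."""
    total_from_pos_t = sum(transition_counts[pos_t].values())
    return transition_counts[pos_t][pos_tplus_one] / total_from_pos_t
```

In the toy data, DET is always followed by NOUN, so transition_prob("DET", "NOUN") comes out as 1.0, while VERB is followed by DET and NOUN once each, giving 0.5 for either. The same pattern of dictionaries works for emission counts, with tags mapping to words instead of tags to tags.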
{ "domain": "datascience.stackexchange", "id": 1557, "tags": "python, dataset, nlp" }
Wave speed and particle speed
Question: Is the speed of a wave the same as the speed of the particle it displaces? Homework question, any help is appreciated. Answer: No. The particles of the medium in which the wave is travelling (assuming that this is a mechanical wave) aren't actually displaced in the direction of the wave, but are merely disturbed. In the case of a longitudinal wave (e.g. sound), a particle will for a time be moving in the direction of propagation of the wave, for a time be at rest and for a time be moving opposite to the direction of propagation of the wave, and the maximum speed it attains has to do with the amplitude, not the speed of propagation of the wave. In the case of a transverse wave (e.g. waves formed on a string when you move an end up and down), the answer to your question becomes even more readily apparent - the particles don't even move in the direction of propagation of the wave, but rather in a direction perpendicular to propagation and so there's no reason to expect them to be moving at the same speed as the wave.
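The point about the two speeds being independent can be made concrete with a quick calculation. For a transverse wave $y = A\sin(kx - \omega t)$, the peak particle speed is $A\omega$, while the wave itself travels at $\lambda f$. A sketch with made-up parameter values:

```python
import math

# Illustrative wave parameters (made-up values)
A = 0.01          # amplitude, m
wavelength = 2.0  # m
frequency = 5.0   # Hz

omega = 2 * math.pi * frequency      # angular frequency, rad/s
wave_speed = wavelength * frequency  # propagation speed = lambda * f, m/s
max_particle_speed = A * omega       # peak transverse particle speed, m/s

# wave_speed is 10 m/s here, while max_particle_speed is about 0.31 m/s:
# the particle speed depends on amplitude and frequency, the wave speed
# on wavelength and frequency, so the two need not be equal.
```

Doubling the amplitude doubles the peak particle speed without changing the wave speed at all, which is exactly the answer's point.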
{ "domain": "physics.stackexchange", "id": 22863, "tags": "homework-and-exercises, waves" }
Can we find k shortest paths between all pairs faster than solving the pairwise problem repeatedly?
Question: I want to produce $k$ shortest paths ($k$ would be less than 10) between all pairs in a graph. The graph is (actually a subway map): positively weighted undirected sparse with about 100 nodes My current plan is to apply $k$ shortest path routing to each pair; I am now looking for a more efficient alternative (possibly with dynamic programming). Answer: First of all, a crucial difference in computing $k$-shortest paths is whether the paths need to be simple or not. A path is called simple if it does not contain nodes repeatedly. A path with a loop, for example, is not simple. Note that on the Wikipedia page you linked, the articles are concerned with not necessarily simple paths. The case of simple paths seems to be harder than the case with not necessarily simple paths. The all-pairs $k$-shortest simple paths problem This seems to be a quite young area of research. A recent paper by Agarwal and Ramachandran can be found on the ArXiv [1]. The previous-work section will also give you some insight into the history of the problem. The all-pairs $k$-shortest paths problem Here, indeed, it is the best choice to just repeatedly apply Eppstein's algorithm [2]. The general observation that a repeated application of an algorithm for the single-source version of the problem is the fastest approach was already made in 1977 by E. L. Lawler [3]; Eppstein provides the fastest algorithm to date for this subproblem. References [1] Agarwal, U. and Ramachandran, V. Finding $k$ Simple Shortest Paths and Cycles. arXiv:1512.02157 [cs.DS] https://arxiv.org/pdf/1512.02157.pdf [2] Eppstein, D. Finding the k shortest paths. SIAM Journal on Computing 28, 2 (1999), 652–673. [3] Lawler, E. L. Comment on computing the k shortest paths in a graph. Communications of the ACM, 20(8):603–605, 1977.
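The "repeatedly apply a single-source routine" strategy can be sketched with a simple priority-queue baseline for the not-necessarily-simple case. This is a straightforward textbook approach, not Eppstein's algorithm (which is considerably more involved), but it is correct for positive weights and small $k$:

```python
import heapq

def k_shortest_paths(graph, source, target, k):
    """Return up to k shortest (not necessarily simple) source->target paths.

    graph: {node: [(neighbor, positive_weight), ...]}
    A min-heap holds partial paths ordered by accumulated cost; with
    positive weights, the i-th time the target is popped, that path is
    the i-th shortest. Returns a list of (cost, path) pairs.
    """
    heap = [(0, [source])]
    found = []
    while heap and len(found) < k:
        cost, path = heapq.heappop(heap)
        node = path[-1]
        if node == target:
            found.append((cost, path))
            continue
        for neighbor, weight in graph.get(node, []):
            heapq.heappush(heap, (cost + weight, path + [neighbor]))
    return found

# Tiny illustrative graph (adjacency lists with edge weights)
graph = {
    "a": [("b", 1), ("c", 4)],
    "b": [("c", 1), ("d", 5)],
    "c": [("d", 1)],
}
# Shortest a->d paths: a-b-c-d (cost 3), a-c-d (cost 5), a-b-d (cost 6)
```

For the all-pairs version one would call this once per pair (or once per source with a multi-target variant); the heap-of-paths approach is exponential in the worst case, which is why Eppstein's algorithm is the reference for large inputs.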
{ "domain": "cs.stackexchange", "id": 7170, "tags": "algorithms, graphs, optimization, shortest-path" }
CV hyperparameter in sklearn.model_selection.cross_validate
Question: I've got a problem with understanding the CV parameter in cross_validate. Could you check if I understand it correctly? I'm running ML algorithms on a big set of data (train 37M rows), therefore I would like to run a big validation procedure to choose the best model. Using ShuffleSplit, I want to build 100 different ways of splitting the data in a random way: cv_split = model_selection.ShuffleSplit(n_splits = 100, test_size = .1, train_size = .9, random_state = 0) Then I want to use it as the CV hyperparameter in cross_validate: cv_results = model_selection.cross_validate(model, X, Y, cv = cv_split) Does it mean that my train set (X & Y) is divided into 100 random samples (each is then divided into: train (90% of sample), test (10% of sample)) and during cross-validation a model is built for each sample separately (fitted on the 90% of that sample and tested on the remaining 10% of it) and the mean prediction of those 100 models is the result? Also, if I am using Shuffle, does it mean that a particular row can be in multiple samples and others will not be in any of them? In other words, the 37M set is divided: First sample 370k XY1, 90% * 370k = 333k rows as XY1_1 (train), 37k as XY1_2 (test); model fitted on .fit(X1_1, Y1_1), prediction is built on .predict(X1_2) and validated against Y1_2 Second sample 370k, 333k rows as XY2_1 and 37k rows as XY2_2; model fitted on .fit(X2_1, Y2_1), prediction built on .predict(X2_2) and validated against Y2_2 etc. I am not sure if the second explanation is clearer. But this is how I structure it in my head. I also read the scikit-learn guide: Cross-validation: evaluating estimator performance, but I am still not sure. Answer: I'll answer this first: if I am using Shuffle, does it mean that a particular row can be in multiple samples and others will not be in any of them If I've understood the question correctly, then yes, that is possible.
See below: import numpy as np from sklearn.model_selection import ShuffleSplit X = np.array([[1, 2], [3, 4], [5, 6], [7, 8]]) y = np.array([1, 2, 1, 2]) rs = ShuffleSplit(n_splits=3, test_size=.25, random_state=8) rs.get_n_splits(X) for train_index, test_index in rs.split(X): print("TRAIN:", train_index, "TEST:", test_index) Output: TRAIN: [1 0 3] TEST: [2] TRAIN: [0 3 1] TEST: [2] TRAIN: [1 3 0] TEST: [2] Row 2 doesn't appear in any of the training sets. Next, to answer your first question: the mean prediction of those 100 models is the result Not quite. From the documentation (which you linked to): Returns: scores : dict of float arrays of shape=(n_splits,) Array of scores of the estimator for each run of the cross validation. Let's see this in action, on the above example: from sklearn import linear_model from sklearn.model_selection import cross_validate lasso = linear_model.Lasso() cross_validate(lasso, X, y, cv = rs)['test_score'] returns Out[36]: array([ 0., 0., 0.]) So you see, it returns an array with the score on each cross-validation fold.
{ "domain": "datascience.stackexchange", "id": 3111, "tags": "machine-learning, scikit-learn, cross-validation" }
Reduce a grid array [][] into a string
Question: I have a class with a property of grid[][] and each value is either true or false. I need to convert this to a string with letters o and b in place of the boolean. First way, simple for loops over the array and return the string: get rle() { let result = ""; for(let y = 0; y < this.grid.length; y++) { for(let x =0; x < this.grid[y].length; x++) { result += ( (this.grid[y][x]) ? "o" : "b" ); } } return result; } Or a more JS style solution? get rle() { return this.grid.reduce( (total, currentValue, currentIndex, arr) => { return total + arr[currentIndex].reduce( (total, currentValue, currentIndex, arr) => { return total + ( (arr[currentIndex]) ? "o" : "b" ); }, ""); }, ""); } Is this a good JS style solution, and which in your opinion is better? I prefer the first because anyone can instantly understand it. The JS solution makes me frown, with 3 nested returns; it looks odd. Answer: First, a few points about the code you wrote. If you have an array of values, you can join the values together to efficiently create a string. This would eliminate the need for at least one of the reduce calls. Second, in a reduce call, the value of the second callback parameter (currentValue in your code) is the value of the array parameter at the index parameter (arr[currentIndex] in your code). Combining that with JavaScript's capability to ignore excess function parameters, your reduce calls should take only two parameters, and use currentValue in place of arr[currentIndex]. You should also avoid using the same variable names in the same scope. Having two sets of total, currentValue, currentIndex, and arr could get confusing quickly, and lead to strange bugs. Now, for the one-liner: return this.grid.flat().map((el) => el ? "o" : "b").join(""); See Array#flat, Array#map, and the aforementioned Array#join. Of these, Array#flat is the newest and possibly unsupported method. It can be easily polyfilled or replaced.
The MDN page shows some clever replacements like arr.reduce((all, row) => all.concat(row), []) and [].concat(...arr).
{ "domain": "codereview.stackexchange", "id": 33520, "tags": "javascript, array" }
Time complexity to convert a truth table to a boolean circuit
Question: The SAT problem is often explained in terms of truth tables. Given some random boolean circuit, calculate its truth table; does there exist an output of $1$ in the truth table? But how about going the other way? A function problem that inputs a truth table, and asks you to construct a boolean circuit that computes that truth table. Is this NP? Is it P? Answer: A truth table for $m$ variables has $n=2^m$ entries. A corresponding sum-of-products expression is made of at most $2^m$ terms, each of length $m$. So the total complexity of outputting them is $O(m2^m)=O(n\log n)$, in terms of the input size. This is polynomial. As regards minimization of the expression, a possible approach is the Quine–McCluskey algorithm. From Wikipedia "For a function of $m$ variables the number of prime implicants can be as large as $\dfrac{3^{m}}{\sqrt {m}}$", which is $O(n^{\log3/\log2})$, still polynomial. Then "Step two of the algorithm amounts to solving the set cover problem, which is NP-hard". But I could not find an explicit expression of the complexity of the latter.
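The sum-of-products construction the answer describes is easy to demonstrate. Below is a minimal sketch (the function name and the string encoding of literals are my own, for illustration): it emits one AND-term of $m$ literals per true row of the table, so the output has size $O(m2^m)=O(n\log n)$ in the input size.

```python
from itertools import product

def truth_table_to_sop(table, m):
    """Turn a truth table (outputs listed in lexicographic input order)
    into a sum-of-products expression: one AND-term per satisfying row."""
    terms = []
    for row, out in zip(product([0, 1], repeat=m), table):
        if out:  # build the minterm selecting exactly this row
            lits = [f"x{i}" if bit else f"~x{i}" for i, bit in enumerate(row)]
            terms.append("(" + " & ".join(lits) + ")")
    return " | ".join(terms) if terms else "0"

# XOR of two variables: rows 00, 01, 10, 11 give outputs 0, 1, 1, 0
print(truth_table_to_sop([0, 1, 1, 0], 2))  # (~x0 & x1) | (x0 & ~x1)
```

This is the unminimized expression; Quine–McCluskey would then merge such terms into prime implicants.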
{ "domain": "cs.stackexchange", "id": 21062, "tags": "np-complete, np, satisfiability, search-problem" }
Unique null geodesic between two points
Question: Given two points in Lorentzian spacetime $p,q\in M$, is it true that there is only a unique null geodesic (up to affine reparametrization) that connects the two points? On the one hand, it seems that the answer is "yes", since null geodesics are obtained from the geodesic equation, which has unique solutions given initial data, and all I need to fix is the endpoint $q$ if the initial data starts at point $p$. On the other hand, I can imagine a case of, say, gravitational lensing, where null rays from a source (point $p$) behind a gravitational lens are bent along two different null directions and reach the Earth (point $q$), which seems to imply that the answer is "no". I am trying to understand the physics, so I am avoiding a full dive into a uniqueness proof right now, unless that's the only way to go. Edit: I should have been more careful in excluding the standard counterexamples, such as those with spherical spatial topology. If I have to choose a sufficiently good restriction, it would be to let $M$ be Schwarzschild spacetime --- in particular, the exterior of a Schwarzschild black hole or spherical star. Naively, I expect this to be non-unique, but I am not sure if the question is best dealt with using caustics and the machinery used for singularity theorems. Answer: No, it is not. Even in the Schwarzschild metric, for example, given two points that differ only in the time coordinate, you can easily find two geodesics, one being a rotation around the BH, the other having an initial velocity pointing outwards. It is true that the geodesic equations have one and only one solution for given "initial data", but when you say $p$ and $q$ you are only talking about coordinates, not velocities, and thus aren't setting proper "boundary conditions"
{ "domain": "physics.stackexchange", "id": 56266, "tags": "general-relativity, differential-geometry, metric-tensor, geodesics" }
Shell script to display environment variables
Question: This shell script writes my environment variables. If there is an argument, the shell script will grep for the argument in the environment variables. PAGER=more if type less > /dev/null;then PAGER=less; fi echo $PAGER if [ -z ${1+x} ]; then printenv|"$PAGER"; else printenv|grep "$1"|"$PAGER"; fi If the above script is correct then I will work on my shell to be able to parse it. It can parse everything except the last line, which I added today. Answer: Readability Just because Bash lets you cram a lot of statements on a single line doesn't mean you should. I recommend splitting this up into multiple lines, and also putting spaces around |, like this: #!/bin/bash PAGER=more if type less >/dev/null; then PAGER=less fi echo $PAGER if [ -z ${1+x} ]; then printenv | "$PAGER" else printenv | grep "$1" | "$PAGER" fi But if you like a compact writing style, a reasonable compromise is to replace the first if statement with an alternative writing style using && like this: type less >/dev/null && PAGER=less Error handling I'm not sure if it's intentional, but if the less command doesn't exist (but in most systems it does), this line will emit an error message: if type less >/dev/null To avoid errors, you want to suppress stderr too in addition to stdout: if type less >/dev/null 2>&1 Prefer simple solutions This is hacky, cryptic: if [ -z ${1+x} ]; then I suggest a much more readable, nearly equivalent alternative (it additionally treats an empty first argument as absent): if [ -z "$1" ]; then Don't repeat yourself The printenv command is repeated in both branches of the last if statement. You could move it in front of the if, like this: printenv | \ if [ -z "$1" ]; then "$PAGER" else grep "$1" | "$PAGER" fi However, in this writing style it's important that the \ on the line before the if is the last character on the line directly in front of the line break.
{ "domain": "codereview.stackexchange", "id": 20822, "tags": "bash, shell" }
Understanding LSTM Training and Validation Graph and their metrics (LSTM Keras)
Question: I have trained an RNN/LSTM model. I would like to interpret my model results after plotting the graph of loss and accuracy (between the training and validation data sets). My objective is to classify the labels (either 0 or 1) if I provide only a partial input to the model; I have performed training in this way. Train/validate/test split: train 80%; validate 10%; test 10%. X_train_shape : (243, 100, 5) Y_train_shape : (243,) X_validate_shape : (31, 100, 5) Y_validate_shape : (31,) X_test_shape : (28, 100, 5) Y_test_shape : (28,) Model Summary Model Graph Model Metrics Questions on interpreting the model results: Q1: What can I understand/interpret from the loss and accuracy graph? How can I confirm whether the model trained properly on my data set or not? Q2: Do the oscillations in both loss and accuracy have some effect on model training, or is that normal behavior? If not, how can I regularize my model to avoid the oscillations? Q3: What can I interpret or understand from my metrics table? My Y_test accuracy is higher than the train and validation accuracy; what can I infer from this behavior? Answer: From visually inspecting the graph, we see that the validation loss and accuracy have improved with each epoch, with the training loss and accuracy higher than those of validation. This indicates that accuracy has improved with training. As suggested in another post, one potential solution is to calculate the exponential moving average of the validation loss to remove the oscillations and better determine the improvement in this metric. If you are finding that the test accuracy is higher than that of training, this might suggest underfitting. This could imply that more training of your model is required, or that it has been over-regularized.
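The smoothing suggestion in the answer can be sketched in a few lines of plain Python (the function and the smoothing factor here are illustrative, not part of Keras):

```python
def ema(values, alpha=0.1):
    """Exponential moving average: smooths a noisy metric curve.
    The first value seeds the average; smaller alpha = heavier smoothing."""
    smoothed, prev = [], None
    for v in values:
        prev = v if prev is None else alpha * v + (1 - alpha) * prev
        smoothed.append(prev)
    return smoothed

# A hypothetical oscillating validation-loss curve:
val_loss = [0.90, 0.70, 0.95, 0.60, 0.80, 0.55, 0.70, 0.50]
print(ema(val_loss, alpha=0.3))
```

The smoothed curve has a much smaller spread than the raw one, making the underlying downward trend easier to judge.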
{ "domain": "datascience.stackexchange", "id": 5674, "tags": "python, deep-learning, keras, lstm" }
UTC Time Offset
Question: Really confused about the meaning of the offset in email headers. The header date reads: Sat, 8 Jun 2019 11:45:15 −0500 Is the time shown, 11:45:15, the actual time in the −0500 timezone, OR is the actual time in the −0500 timezone calculated by subtracting the offset? Thanks. Answer: The timestamp you are seeing is the time in the given timezone using the given offset from UTC. The specific format of the timestamp you are looking at is an RFC 2822 formatted timestamp, which means that it follows a standard: Time: 11:45:15 Offset from UTC: −0500 From the RFC 2822 spec: The zone specifies the offset from Coordinated Universal Time (UTC, formerly referred to as "Greenwich Mean Time") that the date and time-of-day represent. The "+" or "-" indicates whether the time-of-day is ahead of (i.e., east of) or behind (i.e., west of) Universal Time. The first two digits indicate the number of hours difference from Universal Time, and the last two digits indicate the number of minutes difference from Universal Time. (Hence, +hhmm means +(hh * 60 + mm) minutes, and -hhmm means -(hh * 60 + mm) minutes). The form "+0000" SHOULD be used to indicate a time zone at Universal Time. Though "-0000" also indicates Universal Time, it is used to indicate that the time was generated on a system that may be in a local time zone other than Universal Time and therefore indicates that the date-time contains no information about the local time zone. A date-time specification MUST be semantically valid. That is, the day-of-the-week (if included) MUST be the day implied by the date, the numeric day-of-month MUST be between 1 and the number of days allowed for the specified month (in the specified year), the time-of-day MUST be in the range 00:00:00 through 23:59:60 (the number of seconds allowing for a leap second; see [STD12]), and the zone MUST be within the range -9959 through +9959.
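Python's standard library parses RFC 2822 dates directly, which makes the relationship easy to check (a small sketch; note the ASCII hyphen in -0500, and the example string is the one from the question):

```python
from email.utils import parsedate_to_datetime
from datetime import timezone

# The header shows the sender's local wall-clock time together with
# its offset from UTC -- not a time that still needs the offset applied.
dt = parsedate_to_datetime("Sat, 8 Jun 2019 11:45:15 -0500")
print(dt.isoformat())                           # 2019-06-08T11:45:15-05:00
# The same instant expressed in UTC is 5 hours later:
print(dt.astimezone(timezone.utc).isoformat())  # 2019-06-08T16:45:15+00:00
```

So 11:45:15 is what a clock on the sender's wall read, and adding back the 5-hour offset gives the corresponding UTC instant.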
{ "domain": "cs.stackexchange", "id": 19016, "tags": "real-time" }
Is the full quantum circuit always in a pure state?
Question: I'm aware that if you look at just a subset of a circuit which is entangled with another subset, it'll be in a mixed state. For example, if you consider only q1 in the circuit, its purity will be 0.5. But if you consider the full circuit, its purity will be 1. Is it a general fact that the "full" circuit, as a theoretically isolated system, will always be in a pure state? Answer: Yes. If you prepare a state, say $|0\rangle ^{\otimes n}$ (or any pure state in general), and only perform quantum gates (which are unitary operations), and the qubits are perfectly isolated (there are no outside interactions with these qubits), then yes, the state will always be a pure state in the end. This follows from the 3rd postulate of quantum mechanics. Given a state $\rho(0)$ at time $t=0$, the state at a later time $t>0$ is given by $$\begin{align} \rho(t) = U(t) \rho(0) U(t) ^ {\dagger} \,, \tag{1} \end{align}$$ where $U(t)$ is a unitary operator. You can see that if the state of your full system, $\rho$, was a pure state initially, i.e., $$\rho(0) = |\psi(0)\rangle \langle \psi(0)|\,, \tag{2}$$ then the state of your full system at time $t$, $$\begin{align}\rho(t) &= U(t) \rho(0) U(t) ^ {\dagger} \tag{3.1}\\ &= U(t)|\psi(0)\rangle \langle \psi(0)| U(t)^ {\dagger} \tag{3.2} \\ &= |\psi(t)\rangle \langle \psi(t)|\,,\tag{3.3}\end{align}$$ is also a pure state.
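A quick numerical check of this (a NumPy sketch with my own example circuit, not from the question): start from the pure state $|00\rangle\langle 00|$, apply the unitary $U = \mathrm{CNOT}\,(H\otimes I)$ to get a Bell state, and compare the purity $\mathrm{Tr}(\rho^2)$ of the full state with that of one reduced qubit.

```python
import numpy as np

def purity(rho):
    """Tr(rho^2): equals 1 for pure states, < 1 for mixed ones."""
    return np.real(np.trace(rho @ rho))

# Pure two-qubit state |00><00|
psi = np.zeros(4)
psi[0] = 1.0
rho = np.outer(psi, psi.conj())

# Unitary circuit: Hadamard on qubit 0, then CNOT -> Bell state
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
CNOT = np.array([[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 0, 1], [0, 0, 1, 0]])
U = CNOT @ np.kron(H, np.eye(2))
rho_t = U @ rho @ U.conj().T

print(round(purity(rho_t), 10))   # full state stays pure: 1.0

# But the reduced state of one qubit of the Bell pair is mixed:
rho_q0 = rho_t.reshape(2, 2, 2, 2).trace(axis1=1, axis2=3)  # trace out qubit 1
print(round(purity(rho_q0), 10))  # 0.5
```

This mirrors the question's observation: the full circuit keeps purity 1 under unitary evolution, while an entangled subsystem has purity 0.5.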
{ "domain": "quantumcomputing.stackexchange", "id": 5262, "tags": "quantum-state, quantum-circuit" }
Is there a universal learning rate for NeuralNetworks?
Question: I'm currently creating a neural network with backpropagation/gradient descent. There is a hyperparameter called the "learning rate" (η), which has to be chosen to guarantee not overshooting the minimum of the cost function when doing gradient descent. But you also do not want to slow down learning unnecessarily. It's a tradeoff. I've found that for too small or too big an η the neural network doesn't learn at all. I've successfully trained the NN on the sine function with η = 0.1. But for other functions, like any linear combination of the inputs, a different η is required (more like η = 0.001). For the quadratic function I still haven't been able to make the NN converge at all; maybe I just haven't found the right hyperparameters. My question now is: is there any way I can find an η that works for any function, so I don't have to search for it manually? Thanks in advance, Luis Wirth. Answer: There is no universal learning rate. It depends on your problem space (are you solving a problem with many local minima or just one? Does your problem's solution vary dramatically based on a slight change to your input, or does the solution shift gradually? etc.), and it depends on your network architecture (number of layers, sizes of the hidden layers, whether you have a feedback loop, etc.). Character recognition is relatively easy, for example, so you could try a faster learning rate. I once tried to teach a neural network to understand a simple computer machine code, and after a month of training with a dozen computers, it always came close but never solved the problem 100% correctly. No choice of learning rate got me to the solution. Once I broke the problem space down into a few subcomponents, the entire set of neural networks was trained in under an hour. I used the same choice of training rate in both cases (see below). A good example of how the rate of convergence changes based on the problem being trained.
What I found works best is to start with a fast learning rate to quickly find local minima, then reduce the rate with each successive round of training, down to some limit. All that to say that there is no universal learning rate, and that what worked best for me was to reduce it as the training proceeds.
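The "fast start, then reduce" heuristic is what a step-decay learning-rate schedule formalizes. A minimal sketch (all constants here are illustrative, not universal — which is exactly the answer's point):

```python
def decayed_lr(initial_lr, step, decay_rate=0.5, decay_every=10, floor=1e-4):
    """Step decay: halve the rate every `decay_every` training rounds,
    but never let it drop below `floor`."""
    return max(initial_lr * decay_rate ** (step // decay_every), floor)

for step in (0, 10, 20, 50):
    print(step, decayed_lr(0.1, step))  # 0.1, 0.05, 0.025, 0.003125
```

Early steps take large strides toward a minimum; later steps fine-tune without overshooting, while the floor keeps learning from stalling completely.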
{ "domain": "cs.stackexchange", "id": 11786, "tags": "machine-learning, neural-networks, gradient-descent" }
Is there an equivalent of Parseval's theorem for wavelets?
Question: Parseval's theorem can be interpreted as: ... the total energy of a signal can be calculated by summing power-per-sample across time or spectral power across frequency. For the case of a signal $x(t)$ and its Fourier transform $X(\omega)$, the theorem says: $$ \int{|x(t)|^2 \; dt} = \int{|X(\omega)|^2 \; d\omega} $$ For the case of the discrete wavelet transform (DWT), or wavelet packet decomposition (WPD), we get a 2D array of coefficients along the time and frequency (or scale) axes: | | c{1,f} | ... freq | c{1,2} | c{1,1} c{2, 1} ... c{t, 1} |______________________________ time Can a sum of this series somehow be understood as the signal's energy? Is there an equivalent rule to Parseval's theorem? Answer: Yes indeed! In theory, as long as the wavelet is orthogonal, the sum of the squares of all the coefficients should be equal to the energy of the signal. In practice, one should be careful that: the decomposition is not "expansive", i.e. the number of samples and of coefficients is the same; the wavelet filter coefficients are not re-scaled, as happens in some applications (like lifting wavelets, to keep integer computations); the wavelets are orthogonal (this is not the case in JPEG2000 compression). You can verify this indirectly by looking at the approximation coefficients. At each level, their number of samples is halved, and their amplitudes grow by around a $1.4$ scale factor, which is just $\sqrt{2}$. This feature is used for instance to estimate the Gaussian noise power from wavelet coefficients: $$ \hat{\sigma} = \textrm{median} (|w_i|)/0.6745$$ A little further, there is a notion that generalizes (orthonormal) bases: frames. A set of functions $(\phi_i)_{i\in \mathcal{I}}$ ($\mathcal{I}$ is a finite or infinite index set) is a frame if for all vectors $x$: $$ C_\flat\|x\|^2 \le \sum_{i\in \mathcal{I}} |<x,\phi_i>|^2\le C_\sharp\|x\|^2$$ with $0<C_\flat,C_\sharp < \infty$. This is a more general Parseval-Plancherel-like result used for general wavelets.
In other words, it "approximately preserves energy" under projection (inner products). If the constants $ C_\flat$ and $C_\sharp $ are equal, the frame is said to be tight. Orthonormal bases are non-redundant sets of vectors with $ C_\flat=C_\sharp = 1 $. For those using Matlab, you should pay attention to the default border extension mode, which is obtained with dwtmode('status'). Some modes add tails to the data to help inversion with few border artifacts. With the periodic mode dwtmode('per') and a number of samples divisible by $2^L$, where $L$ is the wavelet level, you can get a good match in energy, with only tiny differences.
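The energy identity for an orthogonal wavelet is easy to verify without any toolbox. Below is a pure-Python sketch with my own Haar implementation (not a library function): each level maps a pair $(a,b)$ to $((a+b)/\sqrt2,\,(a-b)/\sqrt2)$, a rotation, so it preserves the sum of squares exactly.

```python
import math
import random

def haar_dwt(x):
    """Full orthonormal Haar decomposition of a length-2^L signal.
    Returns all detail coefficients (finest scale first) followed by
    the final approximation coefficient."""
    coeffs = []
    approx = list(x)
    while len(approx) > 1:
        pairs = list(zip(approx[0::2], approx[1::2]))
        coeffs.extend((a - b) / math.sqrt(2) for a, b in pairs)  # details
        approx = [(a + b) / math.sqrt(2) for a, b in pairs]      # approximation
    return coeffs + approx

random.seed(0)
x = [random.gauss(0, 1) for _ in range(64)]
energy_time = sum(v * v for v in x)
energy_wav = sum(c * c for c in haar_dwt(x))
print(abs(energy_time - energy_wav) < 1e-9)  # True
```

The decomposition is non-expansive (64 samples in, 64 coefficients out), so the Parseval-like identity holds up to floating-point error; a rescaled or biorthogonal filter bank would break the exact equality, as the answer warns.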
{ "domain": "dsp.stackexchange", "id": 11301, "tags": "wavelet, signal-energy, dwt, parseval" }
Will a shrinking universe have a reverse arrow of time?
Question: I'm not a physicist, so forgive me if this question is silly. I'm reading (actually listening to) Mysteries of Modern Physics: Time by Prof. Sean Carroll. I'm not sure if the concepts in this book are universally accepted by the physics community, merely speculative, or somewhere in between. But I found one concept incredibly interesting and intuitive to understand: time itself has no specific direction; time is symmetric, but the arrow of time goes in one specific direction, and the reason it goes in that direction (let's call it 'forward') is that entropy increases. Why does entropy increase? Because it was lower before. And why is that? Because it was just a bit higher than it was yet earlier, which was higher than before that... all of which leads to a point at which entropy was as low as possible and couldn't become lower, only increase. That point is supposedly the one our universe started from. I assume, since entropy measures the degrees of freedom of the information, that a universe that inflates is increasing the possible configurations the information can assume, and therefore the entropy is increasing. In a deflating universe the opposite is true, so the entropy decreases. Our universe goes from singularity (low) to expanding (high). A shrinking one will go from expanding (high) to singularity (low). Now my question is: if that's all true, then I suppose if the universe were shrinking (as is allowed and even predicted by some theories, like loop quantum gravity), is it legitimate to imagine that the arrow of time would be reversed? As the entropy would go from higher to lower, it would be possible for a human living in that universe to see a window and think "Oh, this was probably shattered glass on the floor BEFORE being a window". That is because shattered glass is a higher entropy state than a well refined window pane. Am I right?
And if so, why did we ever entertain the doubt, knowing the second law of thermodynamics, that our universe could have been in a deflating phase? Answer: Except that it relates an electromagnetic arrow of time to the usual thermodynamic one, this answer is consistent with Nogueira's; since no answer has been accepted by the OP yet, I want to provide a verbal equivalent to it, per the PSE policy of permitting any participant to approve several answers to the same question, and in view of my belief that acceptance of an answer preserves the Q&A. The standard view, incorporating General Relativity's "spatialization of time", is that entropy tends to increase during the passage through time of anything whose motion can be described as passage through one or more of the three spatial dimensions, regardless of the direction of that passage. I'm saying that "entropy tends to increase", instead of simply saying "entropy increases", because, for reasons bearing on the inherent uncertainty of the relationship between energy and time, there are intervals of time (usually extremely brief) when it decreases, for quantum-mechanical reasons. Because our nervous systems depend on electrical energy, such variations may be involved in our subjective experience of time's passage as occurring differently in different situations. Entropy is a form of disorder, representing energy which cannot be transformed into work: a rough example is the spouting of exhaust steam upward from the stack of a steam locomotive, which occurs regardless of whether the locomotive is moving backward or forward. Entropy also increases regardless of whether the passage of matter or energy through time is backward or forward. Cycles of expansion and contraction, in a universe containing mass, were first hypothesized by Tolman in the 1930s, and the problem with his model was the fact that the DENSITY of entropy would increase with each cycle, leaving a universe increasingly disordered.
This does not seem to have happened, given the fact that our observable region is approximately as uniform in every direction as the much older cosmic microwave background radiation. Cosmological models that analyze contraction (i.e., "shrinkage") in much detail are rare: One of the few that does is Aguirre & Gratton's 2002 "Steady state eternal inflation", described at https://arxiv.org/abs/astro-ph/0111191, which provides for two multiverses each separated from the other by a Cauchy surface, with the arrow of time in one of them pointing in the direction of passage through time opposite the direction represented by the AOT in the other, effectively balancing it. AG analyze the entropic AOT in some detail, and discuss an electromagnetic AOT as well, claiming that electromagnetic AOTs are explicitly linked to the direction of the expansion. In his profoundly Christian blog, the physicist Aron Wall has pointed to consideration of the electromagnetic AOT as perhaps the simplest means of rendering such explicitly reversed time plausible to us, because of its role in biological neurology. The simplifying value of the AG model is its elimination of the need for any beginning, which is controversial in some (but not all) branches of western religion. Most inflationary cosmological models are based on approximations of de Sitter space whose exponential contraction, thermalizing any contents with mass, would necessarily precede any exponential expansion: The well-known Borde-Guth-Vilenkin Theorem, requiring that inflationary models be "on average" expanding, is the result of that consideration, and the AG model was accepted as marginally meeting that criterion, in the last footnote to the BGV Theorem's last revision, which was formulated in 2003. 
However, unadulterated de Sitter space does not require the presence of mass, although its expansive nature can only be seen by the motion of markers in it: In our experience, those markers are the stars, and the changes in the wavelengths of their light as the expansion of space carries them outward from matter that's bound together gravitationally, like our own galaxy and its Local Group. The link between the thermodynamic and electromagnetic arrows of time remains unclear, and a definitive answer to the OP's question consequently remains unavailable, making it rather a good one. On an astronomical scale, there's currently no way to tell, with certainty, whether we're in a cosmos like AG's, and, if so, whether we're in the side of it where the relation between the thermodynamic AOT and the electromagnetic one is direct or an inverted duplication: Science does have a lot to do with replication, as elaborated in many papers (mostly available free on Arxiv) by Lee Smolin, Nikodem J. Poplawski, and others. This may be a factor in a 2020 paper by John Barrow, at https://arxiv.org/abs/1912.12926, which excludes nearly all inflationary cosmologies from a derivation with "finite action". The only possible exception I've noticed is Poplawski's "Cosmology with torsion", based on 1929's Einstein-Cartan Theory rather than 1915's General Relativity.
{ "domain": "physics.stackexchange", "id": 72864, "tags": "cosmology, entropy, time, arrow-of-time" }
Why should we believe that $NEXP \not \subset P/poly$
Question: I am sorry if this is not an advanced question. Most computer scientists believe that $NEXP \not \subset P/poly$, but they are not even close to proving this assumption. The main evidence they use is derandomization: they believe that $P=BPP$, and I know of the Nisan–Wigderson generator, which exists if $EXP \not \subset P/poly$ ($E \not \subset Size(2^{o(n)})$). On the other hand, I see some theorems, like IP = PSPACE, which were thought to be false. Recently I read the IKW paper, and there is a theorem stating that if $NEXP \subset P/poly$ then there is a polynomial witness description for every language in $NEXP$. To me, this seems likely to happen: for example, Succinct-HC is an $NEXP$-complete language, and it seems likely to have a succinct witness. On the other hand, there are undecidable problems in P/poly that we don't know; maybe we could use them as an oracle to solve Succinct-HC. There are some reasons in IKW's paper, but I need more references to help me understand why we should believe that $NEXP \not \subset P/poly$. Answer: The best evidence, in my opinion, follows from the results of Ryan Williams: even a mild speed-up of $CIRCUITSAT$ provides $NQP\not\subset P/poly$, which is an extremely strong result compared to $NEXP\not\subset P/poly$. It indicates to me that either we are missing something trivial which would separate $NEXP$ from $P/poly$, or (remotely plausibly) anything that separates $NEXP$ from $P/poly$ would separate any class slightly bigger than $NP$ from $P/poly$. Update It seems all the more likely that if we do not reach $NEXP\not\subset P/poly$ by speeding up GapCircuitSAT or CircuitSAT, we might achieve a separation via embedding $NP$ problems in $MCSP$ in $PTIME$ or $LOGSPACE$. It is unclear if speeding up $SAT$ has anything to do with embedding or vice versa. Please refer to Comparing SAT to MCSP reduction class separations and faster SAT class separations?.
{ "domain": "cstheory.stackexchange", "id": 5179, "tags": "circuit-complexity, derandomization, nexp" }
rqt_plot does not see backends
Question: I have installed Qwt and its bindings for both Python 2 and Python 3. Moreover, I have installed pyqtgraph, but none of them is available in the settings. So I am stuck with matplotlib, which is extremely slow. Did anyone else encounter this problem? P.S. OS is Ubuntu 16.04, ROS Kinetic Originally posted by Long Smith on ROS Answers with karma: 75 on 2016-11-24 Post score: 1 Original comments Comment by maxikrie on 2016-12-15: Does anybody have a solution to this problem? Answer: Maybe this patch will help: https://github.com/ros-visualization/rqt_common_plugins/pull/415 Originally posted by Dirk Thomas with karma: 16276 on 2016-12-20 This answer was ACCEPTED on the original site Post score: 1 Original comments Comment by maxikrie on 2017-01-11: Works for me, thanks!
{ "domain": "robotics.stackexchange", "id": 26329, "tags": "ros-kinetic, ubuntu, rqt, ubuntu-xenial, rqt-plot" }
Accessing UCSC genome via ssh results in a validation error
Question: I am working on macOS High Sierra. I am following the steps described here to lift an annotation over from one version of a genome to another. I am now just using the example genomes provided in the tutorial. At this step: time (doSameSpeciesLiftOver.pl -verbose=2 -buildDir=`pwd` \ -ooc=`pwd`/${target}.ooc -fileServer=localhost -localTmp="/dev/shm" \ -bigClusterHub=localhost -dbHost=localhost -workhorse=localhost \ -target2Bit=`pwd`/${target}.2bit -targetSizes=`pwd`/${target}.chrom.sizes \ -query2Bit=../${query}/${query}.2bit \ -querySizes=../${query}/${query}.chrom.sizes ${target} ${query}) > do.log 2>&1 I get the following error in the do.log file: Warning: Permanently added 'localhost' (ECDSA) to the list of known hosts. username@localhost: Permission denied (publickey,password,keyboard-interactive). HgStepManager: executing from step 'align' through step 'cleanup'. HgStepManager: executing step 'align' Thu Nov 8 14:27:32 2018. Using localhost, /Users/username/Documents/projects/main/output/software/liftover/data/genomes/GCA_000004515.3_Glycine_max_v2.0/GCA_000004515.3_Glycine_max_v2.0.2bit and ../GCF_000004515.3_V1.1/GCF_000004515.3_V1.1.2bit align: localhost does not have /Users/username/Documents/projects/main/output/software/liftover/data/genomes/GCA_000004515.3_Glycine_max_v2.0/GCA_000004515.3_Glycine_max_v2.0.ooc -- if that is not the correct location, please run again with -ooc. I think the main problem is here: username@localhost: Permission denied (publickey,password,keyboard-interactive). This post describes this issue. However: I do not have this file: ~/.ssh/authorized_keys. My files in ~/.ssh are: github_rsa github_rsa.pub known_hosts. One of the lines of known_hosts is localhost. Does this presumably allow me to ssh to my own laptop? I still don't understand why I need to ssh in this script if it's run locally. 
PasswordAuthentication yes is already in /etc/ssh/sshd_config. If I run the command without the time command, I still get the same error. I am not prompted to give a password. Could anyone help me overcome this problem, please? Answer: You need to set up SSH keys. All the commands in the script are going to be run via ssh. See the tips on how to set up SSH keys in the "SSH key setup" section here: http://genomewiki.ucsc.edu/index.php/Parasol_job_control_system#SSH_keys
{ "domain": "bioinformatics.stackexchange", "id": 772, "tags": "sequence-annotation, ucsc, errors, liftover" }
What is wrong with my setup of cloud chamber?
Question: My setup: a plastic box (~1 l) with the cover painted matte black; the bottom of the box covered with sponges saturated to the limit with ethyl alcohol (90%) (anything over the sponges' capacity poured back into the bottle); turned upside down and placed on ~0.5 kg of dry ice. I can see a 'rain' of alcohol droplets, but no tracks. I disassembled a smoke detector and attached its americium sample to the box. No effect. Here is the 'rain': http://youtu.be/DhnO3e2H724?t=123 Here are some pictures of the setup: Any help appreciated. Answer: I tutor a cloud chamber workshop at CERN weekly, and during the development of this workshop several setups were tested until the (relatively) easiest way to build one was found. Based on the video you showed, I would say your alcohol cloud is not stable enough. As far as I can estimate without seeing your setup live, I think the reason for this is your alcohol source and the inhomogeneity of your plastic cover. For the alcohol source, a sponge is not the best option. Felt is more efficient at absorbing a lot of alcohol and releasing it uniformly. Furthermore, the purer your alcohol is, the better. We use isopropanol (which should not be too difficult to find in a drug store, as it is commonly used for cleaning and disinfection), but ethanol should do fine as well (try to get at least 95%). As for your plastic cover: plastic doesn't conduct heat well, so it doesn't transfer the cooling from the dry ice efficiently (and more importantly, the little cooling that is transferred will be very inhomogeneous; this is clear from your video). You can keep using your plastic box (glass will probably be nicer, although we use plexiglass ourselves), but you cannot use the plastic cover. Try to use any kind of metal, as dark as possible. We use a black anodised aluminium plate as our base plate, which has a groove where the box fits in. We fill the groove with alcohol before putting the box on, to make it airtight.
This kind of plate won't be easy to construct yourself (especially the groove), but you can use for instance a big black frying pan (you know, those coated in teflon) and put your box on it. The added bonus is that a pan is by definition a very good conductor. The only thing you have to make sure is that it is sealed airtight to the plastic box; do this by putting some alcohol along the borders of the box and try to keep it in place (if needed even by taping it). Also take into account that the cloud needs a few minutes to form, but about 10 to 15 minutes to be stable enough to show tracks. There is a handy DIY cloud chamber manual on our workshop's site: http://scool.web.cern.ch/sites/scool.web.cern.ch/files/documents/SCoolLAB_CloudChamber_DIYManual_2016.pdf By the way, if you want to use an $\alpha$ source you have to put it inside your cloud chamber, because (as was already mentioned in the other answers) $\alpha$ particles can be stopped by a sheet of paper and hence won't be able to penetrate your box. We never use radioactive sources but see lots of tracks from cosmic radiation (mainly muons), and a few from natural background radiation. The air contains for example radon, which is a natural $\alpha$ source. In our cloud chambers this amounts to more or less 3 $\alpha$-particle tracks per minute; in yours I would estimate this to be on average 1 per 2 minutes (but keep in mind that the radon concentration varies geographically, with the weather, and with some other factors, so it could be much more or less). You should be able to see a lot of muons though. Good luck! PS: another thing is that a smartphone flash is not an ideal light source; it is too spread out and the colour temperature is adapted to human skin, not cloud chamber tracks. A standard torch should do better.
{ "domain": "physics.stackexchange", "id": 33932, "tags": "particle-physics, experimental-physics, radiation, radioactivity, particle-detectors" }
c++20 compile time string utility
Question: While upgrading some meta-program running in production for years to the latest (c++20) standard, I rewrote this particular compile time string utility. Even though this new version produces desired output and correctly evaluates at compile time as well as dynamically, it's difficult to get a grasp on a new standard as always. This example is sufficient for my particular use case. And for now I can only spot three inconsistencies I'd like to improve: two of them are marked in the code snippet with TODO, and the third one is tiding it up with concepts (like convertable_to, which is deliberately omitted in the code for simplicity sake) I'm looking for a general case advice on how this code can be improved further. Any other weighted criticism will be much appreciated. UPDATE: New question with all of the suggestions integrated Original code: Live @godbolt #ifndef META_STRING_H_INCLUDED #define META_STRING_H_INCLUDED #include <cstddef> #include <algorithm> #include <functional> #include <tuple> namespace meta { template <std::size_t N> struct string; constexpr auto to_string(const auto& input) { return string(input); } constexpr auto concat(const auto&... input) { return string(to_string(input)...); } template <std::size_t N> struct string { char elems[N]; // string() = delete; string() { elems[N - 1] = '\0'; } // used for CTAD guide for tuples. can't we avoid object construction there? constexpr string(const char (&s)[N]) { std::copy_n(s, N, this->elems); } constexpr string(const string<N> (&s)) { std::copy_n(s.elems, N, this->elems); } constexpr string(const std::array<char, N> (&s)) { std::copy_n(s.data(), N, this->elems); } template <std::size_t... Ni> constexpr void _copy(const string<Ni> (&... input)) { auto pos = elems; ((pos = std::copy_n(input.elems, Ni - 1, pos)), ...); *pos = 0; } constexpr string(const auto&... input) requires (sizeof...(input) > 1) { std::invoke([this](const auto&... 
s) constexpr { this->_copy(s...); }, to_string(input)...); } template <template <typename...> typename container, typename... T> constexpr string(const container<T...>& input) { std::apply([this](const auto&... s) constexpr { this->_copy(to_string(s)...); }, input); } constexpr auto operator + (const auto& rhs) const { return concat(*this, rhs); } constexpr operator const char* () const { return elems; } }; template<std::size_t N> string(const char (&)[N]) -> string<N>; template<std::size_t N> string(const std::array<char, N>& input) -> string<N>; string(const auto&... input) -> string<((sizeof(to_string(input).elems) - 1) + ... + 1)>; template<template <typename...> typename container, typename... T> string(const container<T...>& input) -> string<((sizeof(to_string(T()).elems) - 1) + ... + 1)>; // TODO: avoid constructing object here inline namespace meta_string_literals { template<string ms> inline constexpr auto operator"" _ms() noexcept { return ms; } } // inline namespace meta_string_literals } // namespace meta #endif // META_STRING_H_INCLUDED ////////////////////////////////////////////////////////////////////// // #include "meta_string.h" #include <iostream> template<meta::string str> struct X { static constexpr auto value = str; operator const char* () { return str.elems; } }; template <auto value> constexpr inline auto constant = value; int main() { using namespace meta::meta_string_literals; X<"a message"> xxx; X<"a massage"> yyy; X<meta::concat(xxx.value, " is not ", yyy.value)> zzz; X<"a message"_ms + " is " + "a massage"> zzz2; std::cout << xxx << std::endl; std::cout << yyy << std::endl; std::cout << zzz << std::endl; std::cout << zzz2 << std::endl; static constexpr auto x = meta::string("1"_ms, "22"); static constexpr auto y = meta::concat("11", "22"); static constexpr auto z = meta::string(std::tuple{"1xx1"_ms, std::array<char, 6>{"2qqq2"}}); std::cout << sizeof(x.elems) << ": " << x << std::endl; std::cout << sizeof(y.elems) << ": " << y << 
std::endl; std::cout << sizeof(z.elems) << ": " << z << std::endl; static constexpr auto a = "1"_ms; static constexpr auto b = a + "22"_ms; std::cout << b << std::endl; // TODO: Can't it be implicitly forced to constexpr? std::cout << meta::string("this one "_ms, "is not ", "constant evaluated"_ms) << std::endl; std::cout << constant<meta::string("this one "_ms, "is ", "constant evaluated"_ms)> << std::endl; return 0; } Answer: This looks more messy than I would prefer, partly because of limitations in C++, partly because you have a constructor that takes a container as an argument. So I don't see another way to do it, just small things that could be improved here and there. Unnecessary use of this-> It is almost never necessary to write this-> in C++. I would remove its use everywhere. to_string() is not necessary I see you have a free function to_string() to ensure you can cast something to a string with a different N than is used in the templated code you are in, and also to "escape" the deduction guides. This is not a bad approach, but it might not be necessary to use it. Inside class string you can avoid calling to_string(input) by writing meta::string(input). In the deduction guides it is harder to work around this, you can't just write meta::string(input).elems. But see below for a possible solution that also gets rid of the default constructor. I would be OK with leaving it as you wrote it though, but if it is only intended to be used as a helper function for struct string, consider wrapping it inside a namespace detail to signal it should not be used by external code directly. Unnecessary use of std::invoke() You don't need std::invoke() at all, the constructor that uses it can just be rewritten as: constexpr string(const auto&... input) requires (sizeof...(input) > 1) { _copy(to_string(input)...); } Make _copy() private The member function _copy() is just a helper function used by the constructors, so it should be made private. 
I would also remove the leading underscore from the name of this function, as some uses of leading and double underscores in identifiers are reserved, and the simplest way to avoid any issues is to avoid all leading and double underscores. Consider adding size() and other const member functions from std::string I noticed the sizeof(to_string(input).elems) in the deduction guides, and I thought that looked a bit awkward; if there was a size() member function it would be nicer to be able to write to_string(input).size(). Unfortunately, even with a static constexpr size() function, this is not considered a constant expression, so that won't compile. But it will make some of your example usage cleaner: meta::string x(...); std::cout << x.size() << ": " << x << '\n'; You might consider other const member functions from std::string as well, like empty(), data(), operator[] and so on, to make it a better drop-in replacement for const std::strings. Deleting the default constructor To be able to delete the default constructor, you need to avoid default constructing meta::strings. That means you can't use to_string(T()) if T is a meta::string. There is a very simple solution here, just use sizeof(T): template<template <typename...> typename container, typename... T> string(const container<T...>& input) -> string<((sizeof(T) - 1) + ... + 1)>; The drawback, as user17732522 mentioned, is that it doesn't seem to work when constructing a meta::string with nested std::tuples. Forcing constant evaluation of rvalue I don't think this is possible. At the same time I don't understand why the compiler wouldn't be able to optimize this at compile time, even without using constexpr. Avoid using std::endl Prefer using \n instead of std::endl; the latter is equivalent to the former, but also forces the output to be flushed, which is usually unnecessary and hurts performance.
{ "domain": "codereview.stackexchange", "id": 42663, "tags": "c++, template-meta-programming, c++20" }
Speed of a charge in a magnetic field
Question: Does the speed of a charged particle change in a non-uniform magnetic field? I know that a uniform magnetic field cannot change the $KE$ of the particle, i.e. $\frac{1}{2}mv^{2}$ is constant. And we are considering mass to be constant (i.e. not considering relativistic effects). Therefore $v$, the speed, is constant. I also know that the velocity of the particle has to change, as there will be a change in the direction. The charged particle is displaced unequally in equal intervals of time but covers equal distances in equal intervals of time. But what will happen when the charge moves through a non-uniform magnetic field? I suppose the results should be similar. Or are they? EDIT @user3814483 - $$\frac{mv^{2}}{r}=qvB$$ $$\therefore \frac{mv}{r}=qB$$ or $$r=\frac{mv}{qB}=\frac{p}{qB}$$ So, a stronger magnetic field will just make the radius smaller. So, the motion of the $p^{+}$ will be somewhat like this? Answer: Short answer: even in non-uniform fields, the speed won't change, but the guiding center can drift with some velocity. In a magnetic field (uniform or otherwise), the force on a charged particle is: $$ \vec F = q\, \vec{v} \times \vec{B}(\vec x) $$ The direction of this force will be perpendicular to the velocity vector because of the cross product (the result of the cross product is mutually orthogonal to the vectors $\vec{v}$ and $\vec{B}$). So the force will only act to change the direction of the particle's velocity. The individual velocity components will change, as you say, but the speed cannot change because magnetic fields do no work. As an aside, in a non-uniform field the guiding center of the particle can drift. The guiding center is easily understood by considering the gyration of a charged particle in a uniform field. It will gyrate about a center with some radius. That centroid is called the guiding center. 
In a non-uniform field, the motion of the charged particle will look like a cycloid instead of a circle, because in regions of higher field the particle will have a tighter radius than in regions of lower field. This ultimately results in a net drift of the particle's guiding center. This is called the Grad-B drift.
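The radius formula $r = mv/(qB)$ is easy to sanity-check numerically. A minimal sketch in Python (the speed and field values below are illustrative assumptions, not from the question; the proton mass and elementary charge are standard values):

```python
# Gyroradius r = m*v / (q*B) for a proton in a uniform field.
m = 1.67e-27   # proton mass [kg]
q = 1.60e-19   # elementary charge [C]
v = 1.0e5      # speed [m/s], chosen for illustration
B = 0.1        # field strength [T], chosen for illustration

r = m * v / (q * B)
print(r)  # ~1.0e-2 m

# Doubling B halves the radius, as the formula predicts:
# a stronger field makes a tighter circle.
r2 = m * v / (q * 2 * B)
```

This is consistent with the qualitative picture above: in regions of stronger field the gyration radius shrinks, which is what produces the cycloid-like drift.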
{ "domain": "physics.stackexchange", "id": 16547, "tags": "electromagnetism, electricity" }
Adding channel effects to a signal
Question: I have a channel matrix "H" that is circulant. I have data blocks. I want to add the channel effects to the signal. When H was only a vector of channel coefficients I would say: %Going Through The Channel After_channel= filter(H,1,Data); but now that H is a matrix the line above wouldn't work. I'm not sure what to do. Answer: This is a small example of how to implement this in Matlab: h=[0.4070; 0.8150; 0.4070]; % Channel: Proakis A d=[1; -1; 1; 1]; % Data H=convmtx(h,4); % Channel Matrix y1=H*d; % Matrix approach y2=conv(h,d); % Convolution y1-y2
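For readers without MATLAB, the same check can be sketched in Python with NumPy, building the convolution (channel) matrix by hand since that is what `convmtx(h, n)` produces:

```python
import numpy as np

h = np.array([0.4070, 0.8150, 0.4070])   # channel taps (Proakis A)
d = np.array([1.0, -1.0, 1.0, 1.0])      # data block

# Convolution matrix: column j is h shifted down by j samples,
# so H @ d equals the full convolution conv(h, d).
n = len(d)
H = np.zeros((len(h) + n - 1, n))
for j in range(n):
    H[j:j + len(h), j] = h

y1 = H @ d                # matrix approach
y2 = np.convolve(h, d)    # direct convolution
print(np.max(np.abs(y1 - y2)))  # zero up to floating point
```

The point of the demonstration is that multiplying the data block by the channel matrix and convolving the data with the tap vector are the same operation.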
{ "domain": "dsp.stackexchange", "id": 3042, "tags": "filtering" }
Why should the position vector be noted as $R\hat{R}$ in spherical polar coordinates?
Question: Why should the position vector be noted as $R\hat{R}$ in spherical polar coordinates? Now I did the calculation like this: $\vec R = R \sin\theta \cos\phi \hat{i} + R \sin\theta \sin\phi \hat{j} + R \cos\theta \hat{k}$, so now I am manipulating the unit vectors: $\hat{R}= \frac{\partial \vec R}{\partial R} \Big/ \left| \frac{\partial \vec R}{\partial R}\right| = \sin\theta \cos\phi \hat{i} + \sin\theta \sin\phi \hat{j} + \cos\theta \hat{k}$. By doing similar calculations I found $\hat{\theta} = \cos\theta \cos\phi \hat{i} + \cos\theta \sin\phi \hat{j} -\sin\theta\hat{k}$. Similarly I found $\hat{\phi} = \cos\phi \hat{j} - \sin\phi\hat{i}$. Now the position vector can be written as $\vec R= [\vec R \cdot \hat{R}]\hat{R} + [\vec R \cdot \hat{\theta}]\hat{\theta} + [\vec{R} \cdot \hat{\phi}] \hat{\phi}$, which gives me $\vec{R} = R\hat{R} + R\sin\theta \hat{\phi}$, not $R\hat{R}$. Now where am I misunderstanding or miscalculating? Answer: The unit vectors for spherical coordinates are obtained by taking the derivatives of the coordinate transformation with respect to $R$, $\theta$, $\phi$, and normalizing to 1 if needed: $$\begin{cases} x=R\sin\theta\cos\phi\\ y=R\sin\theta\sin\phi\\ z= R\cos\theta \end{cases}$$ This is because the vectors $\frac{\partial \vec x}{\partial R}$, $\frac{\partial \vec x}{\partial \theta}$, $\frac{\partial \vec x}{\partial \phi}$ tell you in which directions you move when you “turn the knob” on one of your three coordinates $R$, $\theta$ or $\phi$. You got the first two right but the third should be $$\hat\phi=-\sin\phi \hat i +\cos\phi \hat j$$ The vector you wrote, if you check, has a direction which is radial in the $x$-$y$ plane, while it should be tangent to the circle described in the $x$-$y$ plane by the $\phi$ angle;
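A quick numeric spot-check of the projections (a Python sketch; the angles are arbitrary test values) shows that $\vec R \cdot \hat\theta$ and $\vec R \cdot \hat\phi$ both vanish, so the decomposition reduces to $\vec R = R\hat R$:

```python
import numpy as np

R, theta, phi = 2.0, 0.7, 1.3   # arbitrary test values

Rvec = R * np.array([np.sin(theta) * np.cos(phi),
                     np.sin(theta) * np.sin(phi),
                     np.cos(theta)])
R_hat     = np.array([np.sin(theta) * np.cos(phi), np.sin(theta) * np.sin(phi),  np.cos(theta)])
theta_hat = np.array([np.cos(theta) * np.cos(phi), np.cos(theta) * np.sin(phi), -np.sin(theta)])
phi_hat   = np.array([-np.sin(phi),                np.cos(phi),                  0.0])

print(Rvec @ R_hat)      # equals R
print(Rvec @ theta_hat)  # equals 0
print(Rvec @ phi_hat)    # equals 0
```

In other words, the $R\sin\theta\,\hat\phi$ term in the question comes from a slip in evaluating the dot products, not from the decomposition formula itself.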
{ "domain": "physics.stackexchange", "id": 55067, "tags": "vectors, coordinate-systems, geometry" }
Shortest geometric distance from surface in 3d dataset?
Question: I have a three-dimensional binary image of a collection of discrete, individual voxels ("seeds") contained in a connected 3-dimensional surface ("skin"). (Like a small fruit, with a surface delineated by a one-pixel boundary, that contains seeds.) The binary matrix was derived from a three-dimensional intensity image of the same size (grayscale). (It's easy to label the surface voxels "2", the seed voxels "1", and the background "0" using MATLAB's label function). In MATLAB, how do I compute the shortest geometric distance between each individual "seed" and the surface? In the (crude) 2-dimensional diagram in the link below, each desired distance is represented by the magnitudes of the red lines (in number of voxels). Answer: Do you just need the distance, or do you need the closest point? For the closest point, the FLANN library can help, and it has Matlab bindings. If you only need the distance, you can also use a distance transform. Try googling for "distance transform 3d matlab" for implementations. Which one is faster depends on the number of "seeds" and "skin voxels".
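As a sketch of what either approach computes, here is a brute-force nearest-skin-voxel distance in Python/NumPy on a tiny synthetic volume (the 0/1/2 labels follow the question's convention; the volume and voxel positions are made up for illustration):

```python
import numpy as np

vol = np.zeros((16, 16, 16), dtype=np.uint8)
vol[8, 8, 2] = 2    # one "skin" voxel
vol[8, 8, 7] = 1    # one "seed" voxel, 5 voxels away

skin  = np.argwhere(vol == 2).astype(float)
seeds = np.argwhere(vol == 1).astype(float)

# Pairwise Euclidean distances seed -> skin; take the minimum per seed.
dists = np.linalg.norm(seeds[:, None, :] - skin[None, :, :], axis=2)
nearest = dists.min(axis=1)
print(nearest)  # [5.0]
```

For a large skin surface the brute-force pairwise table gets expensive; a distance transform (e.g. `scipy.ndimage.distance_transform_edt` applied to the `vol != 2` mask, then read off at the seed coordinates, or MATLAB's `bwdist`) gives the same minimum distances in one pass.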
{ "domain": "dsp.stackexchange", "id": 370, "tags": "image-processing, matlab, 3d, matrix" }
Compute end date for project
Question: The aim of this method: based on toDate (from UI), generate End Limit Date for the Project. /** * Return correct From Limit Date according the logic of start and end date of current project. * * @param project current project * @param toDate to date from UI form * @return correct to date for project holder */ private static Calendar getCorrectToDateLimit(final Project project,final Date toDate) { if (toDate != null) { final Calendar toFormDate = AbsenceHelper.getCalFromDate(toDate); //check have the project end date if (project.getLastWorkOrderEndDate() != null) { final Calendar projectEnd = AbsenceHelper.getCalFromDate(project.getLastWorkOrderEndDate()); if (toFormDate.after(projectEnd)) { final String validationMessage = Messages.get("validation.absence_details_report.to.date.later.end"); renderArgs.put("toDate-error-" + project.id, validationMessage); return projectEnd; } else { return toFormDate; } } else { //in case of ongoing project return AbsenceHelper.getCalFromDate(toDate); } } else { //in case of nullable toDate check - if project have end date - return project end date / else current date return (project.getLastWorkOrderEndDate() != null) ? AbsenceHelper.getCalFromDate(project.getLastWorkOrderEndDate()) : Calendar.getInstance(); } } AbsenceHelper: public static Calendar getCalFromDate(final Date date) { final Calendar result = Calendar.getInstance(); result.setTime((Date)date.clone()); return result; } How can I make this method more readable? Answer: Some ideas: Put the code through a linter. This will tell you about things like missing spaces and other easy wins. Pass the relevant project properties instead of the entire project (Law of Demeter) Flatten your if hierarchy, either by pulling out four methods and deciding which to call in the callers or by creating basically a switch statement using the project details mentioned. 
Don't repeat yourself - there are four calls to AbsenceHelper.getCalFromDate, two checks for project.getLastWorkOrderEndDate() != null, and three calls to project.getLastWorkOrderEndDate.
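To illustrate the "flatten your if hierarchy" suggestion: once the null checks are hoisted out, the four cases collapse to a clamp. A sketch in Python (the function name and use of `datetime.date` are hypothetical stand-ins, not the poster's Java API; the validation-message side effect is deliberately left out):

```python
from datetime import date

def to_date_limit(project_end, to_form, today=date.today):
    """Clamp the UI 'to' date against the project's end date."""
    if to_form is None:
        # No form date: fall back to the project end, else today.
        return project_end if project_end is not None else today()
    if project_end is None:
        return to_form                     # ongoing project
    return min(to_form, project_end)       # clamp to the earlier bound
```

In the Java original, the "form date is past the project end" case additionally records a validation message; keeping that side effect separate from the date arithmetic is part of what makes the flattened version readable.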
{ "domain": "codereview.stackexchange", "id": 26205, "tags": "java, datetime" }
On the phase-invariance of vectors in quantum mechanics
Question: One of the postulates of quantum mechanics is that if $\phi$ is a unit vector in some Hilbert space (in the simplest case let's consider $\mathbb{C}$), it describes the same state as $e^{i\theta}\phi$ for any $\theta$. If we picture all these unit vectors as points lying on some circle, does it not follow under this postulate that all unit vectors correspond to the same state, since they're only some rotation away from each other? Answer: No. You are mixing up the picture of the complex plane as a vector space with the picture of the state space/Hilbert space as a vector space. These are two independent sets of axes. Multiplying a state by any nonzero complex number does not change the ray along which the state lies in Hilbert space. For example, consider a two-state system (a spin-1/2 degree of freedom, say) with orthonormal basis vectors $\left|\uparrow\right>$ and $\left|\downarrow\right>$. The two basis vectors correspond to two independent directions in Hilbert space. If you multiply the state $\left|\uparrow\right>$ by any complex number $\alpha$, you get a new vector $\alpha\left|\uparrow\right>$ that still lies along the $\left|\uparrow\right>$-axis in Hilbert space. To change the direction of your state in Hilbert space, you would need to apply a unitary linear operator that is not a multiple of the identity to your vector. You can think about this as multiplying the vector by a $2\times2$ matrix in the example of the two-state system. As an example, you could apply an operator that takes the state $\left|\uparrow\right>$ to the state $\alpha \left|\uparrow\right> + \beta \left|\downarrow\right>$. Now you've rotated the state in the Hilbert space, and it no longer lies along the $\left|\uparrow\right>$-axis.
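The distinction is easy to check numerically for the two-state example. A Python sketch (the phase and rotation angle below are arbitrary choices):

```python
import numpy as np

up = np.array([1.0 + 0j, 0.0 + 0j])      # |up> in the {|up>, |down>} basis

# A global phase leaves every measurement probability |<basis|psi>|^2 unchanged.
phased = np.exp(1j * 0.7) * up
print(np.abs(phased) ** 2)   # same as np.abs(up)**2

# A genuine (non-identity) unitary rotation in Hilbert space does change them.
a = np.pi / 3
U = np.array([[np.cos(a), -np.sin(a)],
              [np.sin(a),  np.cos(a)]])
rotated = U @ up             # alpha|up> + beta|down>
print(np.abs(rotated) ** 2)  # probability redistributed between the two states
```

The global phase moves the vector around a circle in the complex plane attached to each Hilbert-space axis; only the unitary mixes the two Hilbert-space axes themselves.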
{ "domain": "physics.stackexchange", "id": 90778, "tags": "quantum-mechanics, hilbert-space, quantum-states" }
Build Debian Package Locally?
Question: Hello all, I was wondering if there was a way to build a ROS debian package locally (for testing purposes). While using Bloom, I have had some trouble in my release. So, before I submit my package again, I'd like to verify that it will build. thanks, -Hunter A. I ran the commands you suggested, but encountered an error: allenh1@allenh1-Vostro-430:~/p2os-release$ git checkout debian/ros-hydro-p2os-urdf_1.0.1-0_precise Note: checking out 'debian/ros-hydro-p2os-urdf_1.0.1-0_precise'. You are in 'detached HEAD' state. You can look around, make experimental changes and commit them, and you can discard any commits you make in this state without impacting any branches by performing another checkout. If you want to create a new branch to retain commits you create, you may do so (now or later) by using -b with the checkout command again. Example: git checkout -b new_branch_name HEAD is now at cd7c4f8... Generated debian files for precise allenh1@allenh1-Vostro-430:~/p2os-release$ git-buildpackage -uc -us --git-ignore-branch --git-ignore-new dh clean dh_testdir Unknown option: buildsystem dh_testdir: warning: ignored unknown options in DH_OPTIONS dh_auto_clean dh_clean Unknown option: buildsystem dh_clean: warning: ignored unknown options in DH_OPTIONS rm -f debian/ros-hydro-p2os-urdf.substvars rm -f debian/ros-hydro-p2os-urdf.*.debhelper rm -rf debian/ros-hydro-p2os-urdf/ rm -f debian/*.debhelper.log rm -f debian/files find . \( \( -type f -a \ \( -name '#*#' -o -name '.*~' -o -name '*~' -o -name DEADJOE \ -o -name '*.orig' -o -name '*.rej' -o -name '*.bak' \ -o -name '.*.orig' -o -name .*.rej -o -name '.SUMS' \ -o -name TAGS -o \( -path '*/.deps/*' -a -name '*.P' \) \ \) -exec rm -f {} \; \) -o \ \( -type d -a -name autom4te.cache -prune -exec rm -rf {} \; \) \) rm -f *-stamp Warning generated by debuild: Making debian/rules executable! 
fatal: ref HEAD is not a symbolic ref gbp:error: release/hydro/p2os_urdf/1.0.1-0 is not a valid branch I'm sure it's something simple. Originally posted by allenh1 on ROS Answers with karma: 3055 on 2013-07-15 Post score: 3 Original comments Comment by William on 2013-07-22: I haven't run into this, I thought this would be covered by the --git-ignore-branch, but maybe not... Comment by William on 2013-07-22: This seems to indicate that calling this from a tag is a problem, but that hasn't been my experience. I have no idea why this doesn't work, @tfoote any ideas? Comment by fergs on 2013-08-06: Since bloom uses a tag, not a branch, you might need to use "--git-export=WC" to use the "working copy" that is checked out? Comment by allenh1 on 2013-08-08: That seems to make sense. Do I need to change the rest of the command? Comment by ahendrix on 2013-12-14: I found that I needed to add the --git-upstream-tree=tag flags to get GBP to run properly on 13.10. Comment by mikepurvis on 2013-12-16: Austin—just ran into this trying to make builds on 14.04. Thanks for the heads-up. Comment by Daniel Stonier on 2014-07-31: Ditto...cheers Austin. Answer: You can test it locally, but there is no tutorial for doing it yet. You can checkout the tag in your bloom repo which matches your system, for example: $ git checkout debian/ros-hydro-PACKAGE-NAME_1.0.0-0_precise Then you can build it locally with: $ git-buildpackage -uc -us --git-ignore-branch --git-ignore-new This test should be sufficient for everything but missing dependencies, because this will not build it exactly like the farm, this command will use the packages you have installed locally. So this will not catch errors which happen from missing dependencies, if you have those missing dependencies installed locally. You can build it just like the farm using pbuilder, which builds your package in a vacuum, setting up an empty change root and installing only the packages you depend on. 
You can do this with this command: $ git-buildpackage -uc -us --git-ignore-branch --git-ignore-new --git-pbuilder This command will do the full, isolated build of your package. Your machine will likely need some setup, it's hard for me to tell what all the steps were as my machines are already setup, but I imagine these should get you close: $ sudo apt-get install git-buildpackage pbuilder cowbuilder $ sudo cowbuilder --create To really mimic the build farm you will want to be using the ros-shadow-fixed repository, which is the package repository where packages go to be tested before being synched to the public debian repositories. Comment out any ROS debian entries in your /etc/apt/sources.list.d/* files and add these lines: deb 'http://packages.ros.org/ros-shadow-fixed/ubuntu' precise main deb-src 'http://packages.ros.org/ros-shadow-fixed/ubuntu' precise main Originally posted by William with karma: 17335 on 2013-07-15 This answer was ACCEPTED on the original site Post score: 12 Original comments Comment by 130s on 2013-07-23: Now it seems that tutorial for local pre-release is started under bloom tutorial directory. Comment by 130s on 2014-09-11: For setting pbuilder on Ubuntu precise, what you listed worked for me. I only needed this tweak. Comment by VictorLamoine on 2018-11-13: I have created a bash script that helps a lot with this task: https://gitlab.com/VictorLamoine/bloom_local_release/tree/master. It makes it easier to handle generating multiple packages when you need to play with rosdep keys so that the dependencies can be resolved.
{ "domain": "robotics.stackexchange", "id": 14920, "tags": "ros, deb, bloom-release" }
Counting argument for LTF circuits
Question: In Boolean circuit complexity, Shannon's counting argument shows that a random Boolean function on $n$ input bits requires a circuit of size $\Omega(2^n/n)$ to be computed by a circuit made of AND, OR and NOT gates. Is there a similar lower bound for random functions against LTF (linear threshold function) circuits? More precisely, is it known that a random Boolean function requires a super-polynomial-sized LTF circuit? Answer: It is known that any LTF in $n$ variables can be expressed as $\sum_ia_ix_i\ge b$ where $a_i$ and $b$ are integers with $|a_i|,|b|\le n^{O(n)}$ (see e.g. Lemma 1 in Goldmann, Håstad, and Razborov, Majority gates vs. general weighted threshold gates, Computational Complexity 2 (1992), 277–300, where the bound is given as $2^{-n}(n+1)^{(n+1)/2}$). Thus, for an LTF circuit of size $s$, we can describe each gate using $O(s^2\log s)$ bits, and the whole circuit using $O(s^3\log s)$ bits. It follows by a counting argument that a random Boolean function in $n$ variables requires LTF circuit size $\Omega(2^{n/3}/n)$. We can improve this to $2^{n/3}$ as follows. Let $f\colon\{0,1\}^n\to\{0,1\}$ be an LTF. The set $C_f$ of coefficients $(\vec a,b)\in\mathbb R^{n+1}$ such that $\sum_ia_ix_i\ge b$ computes $f$ is defined by a finite system of inequalities of the form $L_j(\vec a,b)\ge0$ or $L_j(\vec a,b)<0$, where each $L_j$ is a linear function with $\{0,1\}$ coefficients. Since $C_f$ is invariant by multiplication by a positive scalar, it remains nonempty if we strengthen each inequality $L_j<0$ to $L_j\le-1$. In this way, we obtain a feasible linear program, and by general properties of linear programs, there exists a set $J$ such that the affine space determined by the equalities $L_j=0$ or $L_j=-1$ (as appropriate) for $j\in J$ is a nonempty subset of $C_f$. We may assume these equalities are linearly independent, thus $|J|\le n+1$. 
Interpreting this in terms of the coefficients of the original LTF, we obtain: Lemma: For any LTF $f\colon\{0,1\}^n\to\{0,1\}$, there exists a set $X\subseteq\{0,1\}^n$ such that $|X|=n+1$ and every LTF $g$ that agrees with $f$ on $X$ is identical to $f$. (The $n^{O(n)}$ bound then also follows using Cramer’s rule and Hadamard’s determinant inequality, but we will not need this.) It follows that $f$ can be described using $(n+1)n+(n+1)=(n+1)^2$ bits. Thus, a LTF circuit of size $s$ can be described using $\sum_{t\le s}t^2\sim s^3/3<s^3$ bits, and a random Boolean function in $n$ variables requires LTF circuits of size $\ge2^{n/3}$ with high probability. The bound above assumes the size of the circuit is measured by the number of nodes. If we use the number of wires instead, a circuit with $w$ wires has $s$ nodes of fan-ins $\{w_i:i<s\}$ such that $\sum_iw_i=w$; by the above, the gates can be described using $\sum_i(w_i+1)^2\le(w+1)^2$ bits, and the wiring with $2w\log s\le2w\log w$ bits, thus the whole circuit can be described using $O(w^2)$ bits. Consequently, a random Boolean function in $n$ variables requires LTF circuits with $\Omega(2^{n/2})$ wires.
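The small-integer-weight fact behind the lemma can be illustrated concretely for $n=2$: enumerating thresholds $a_1x_1+a_2x_2\ge b$ with integer weights in $\{-2,\dots,2\}$ already yields every linear threshold function of two variables, namely 14 of the 16 Boolean functions (all except XOR and XNOR, which are not linearly separable). A Python sketch (the weight range $\pm2$ is an assumption that happens to suffice for $n=2$):

```python
from itertools import product

points = list(product([0, 1], repeat=2))  # the four inputs (x1, x2)

ltfs = set()
for a1, a2, b in product(range(-2, 3), repeat=3):
    # truth table of the threshold function  a1*x1 + a2*x2 >= b
    ltfs.add(tuple(int(a1 * x1 + a2 * x2 >= b) for x1, x2 in points))

print(len(ltfs))  # 14: all 16 two-variable functions except XOR and XNOR
xor = tuple(x1 ^ x2 for x1, x2 in points)
print(xor in ltfs)  # False: XOR is not linearly separable
```

For larger $n$ the required integer weights grow like $n^{O(n)}$ (as cited above), which is exactly what drives the $O(s^2\log s)$-bits-per-gate estimate in the counting argument.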
{ "domain": "cstheory.stackexchange", "id": 5606, "tags": "cc.complexity-theory, circuit-complexity, lower-bounds, boolean-functions" }
An alternative vector
Question: Based on this question (as a starting point) and a couple of other questions about rewriting vector in C++, I am planning to write another blog article to show the best way (or more precisely, the common pitfalls that most people fall into). Before I start the blog article just wanted feedback to make sure I am not doing anything stupid. #include <new> #include <algorithm> #include <stdexcept> namespace ThorsAnvil { template<typename T> class V { std::size_t capacity; std::size_t length; T* buffer; void makeSpaceAvailable() { if (length == capacity) { int newCapacity = std::max(10ul, capacity) * 1.62; V tmp(newCapacity); std::move(buffer, buffer + length, tmp.buffer); tmp.swap(*this); } } void validateIndex(std::size_t size) { if (size >= length) { std::stringstream message; message << "V: out of range: " << size << " is larger than: " << length; throw std::out_of_range(message.str()); } } public: V(std::size_t cap = 10) : capacity(std::max(1ul, cap)) , length(0) , buffer(static_cast<T*>(::operator new(sizeof(T) * capacity))) {} V(V const& copy) : capacity(copy.capacity) , length(copy.length) , buffer(static_cast<T*>(::operator new(sizeof(T) * capacity))) { std::copy(copy.buffer, copy.buffer + copy.length, buffer); } V& operator=(V value) // pass by value so this is a copy. { value.swap(*this); // Copy and Swap idiom return *this; } V(V&& move) noexcept : capacity(0) , length(0) , buffer(nullptr) { move.swap(*this); // Move just swaps content. } V& operator=(V&& value) noexcept { value.swap(*this); return *this; } ~V() noexcept { for(int loop = 0; loop < length; ++loop) { try { buffer[loop].~V(); } catch(...) {} // catch and discard exceptions. } try { ::operator delete(buffer); } catch(...) 
{} } void swap(V& other) noexcept { using std::swap; swap(capacity, other.capacity); swap(length, other.length); swap(buffer, other.buffer); } T& operator[](std::size_t index) {return buffer[index];} T const& operator[](std::size_t index) const{return buffer[index];} T& at(std::size_t index) {validateIndex(index);return buffer[index];} T const& at(std::size_t index) const{validateIndex(index);return buffer[index];} void push(T const& u) { makeSpaceAvailable(); ::new(&buffer[length++]) T(u); } void push(T&& u) { makeSpaceAvailable(); ::new(&buffer[length++]) T(std::forward<T>(u)); } template<class... Args> void emplace(Args&&... args) { makeSpaceAvailable(); ::new(&buffer[length++]) T(std::forward<Args>(args)...); } void pop() { validateIndex(length-1); buffer[--length].~T(); } }; template<typename T> void swap(V<T>& lhs, V<T>& rhs) { lhs.swap(rhs); } } Answer: I think it is a great example for an article. Vector-like classes are probably the second most reimplemented library containers, probably only losing for string classes. My comments Even though this is example code for your post, you might consider giving the class a longer name. V confuses itself with a template parameter, so much that you've made a typo in here: for(int loop = 0; loop < length; ++loop) { try { buffer[loop].~V(); } catch(...) {} // catch and discard exceptions. } It was supposed to be ~T(), but that's not a problem, the compiler will remind you ;) void validateIndex(std::size_t size) { if (size >= length) { std::stringstream message; message << "V: out of range: " << size << " is larger than: " << length; throw std::out_of_range(message.str()); } } A few things about this method: Using a stringstream seems overcomplicating in this case. You could do with a simple std::string and std::to_string(). That would import less dependencies into the code, which might be important for frequently used stuff like vectors. The method itself should be const. 
Right now the code will not compile if I try to call at() on a const vector, because it will select the const overload which attempts to call non-const validateIndex(). Alternatively, you could also make it a static method and explicitly pass the length that is supposed to be validated against. Minor nitpicking, but validateIndex(size) seems a bit off. You're validating an index, so why is the param called size? Don't know your thoughts, but I'm not a huge fan of using the global raw new directly. It is just so verbose and prone to typing errors, forgetting the sizeof, etc. static_cast<T*>(::operator new(sizeof(T) * capacity)) You could wrap that into a private static helper that just takes a number of T elements and handles the rest in a cleaner way: static T* allocate(std::size_t numOfElements) { return static_cast<T*>(::operator new(sizeof(T) * numOfElements)); } Miscellaneous Still missing all the size/capacity/empty accessors. front/back might also be interesting. If you have lots of free time, why not also go for some iterators? As far as I know, delete is always noexcept, so this should be pointless: try { ::operator delete(buffer); } catch(...) {} BTW, I also agree that catching an exception from the user destructor is dangerous. Could hide more serious problems. You might consider just letting the program terminate or call std::terminate/std::abort directly to trap right there and then. Other than what was already said in Deduplicator's answer, it looks quite good.
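The two suggestions above (std::to_string instead of a stringstream, plus the allocate() helper) could be sketched as follows. This is an illustrative free-standing form, not code from the original post; the isValidIndex predicate is my own addition.

```cpp
#include <cstddef>
#include <new>
#include <stdexcept>
#include <string>

// The bounds check as a pure predicate, so it is trivially const-correct.
constexpr bool isValidIndex(std::size_t index, std::size_t length) {
    return index < length;
}

// validateIndex with std::string + std::to_string instead of a
// std::stringstream: same message, fewer dependencies.
inline void validateIndex(std::size_t index, std::size_t length) {
    if (!isValidIndex(index, length)) {
        throw std::out_of_range("V: out of range: " + std::to_string(index)
                                + " is larger than: " + std::to_string(length));
    }
}

// The allocate() helper wrapping raw operator new, so the sizeof
// bookkeeping lives in exactly one place.
template <typename T>
T* allocate(std::size_t numOfElements) {
    return static_cast<T*>(::operator new(sizeof(T) * numOfElements));
}
```

Inside the class these would be a static (or const) member and a private static helper, respectively.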
{ "domain": "codereview.stackexchange", "id": 18716, "tags": "c++, vectors" }
pcl for laser sensor
Question: I'm really confused. I want to make a map of the environment by using laser scans. I know I have to use PCL, but I cannot understand what the use of PCL is. Does it store all the scans over a period of time so that I can plot them? Please help. I tried reading about PCL online; it didn't really help. Originally posted by aniket on ROS Answers with karma: 7 on 2014-10-21 Post score: 0 Answer: Why don't you use gmapping or hector_mapping? Originally posted by ROSCMBOT with karma: 651 on 2014-10-22 This answer was ACCEPTED on the original site Post score: 2 Original comments Comment by aniket on 2014-10-22: I will look into both of them. Which do you feel is better for laser mapping? And can you give me a small description of how they work? Thank you.
{ "domain": "robotics.stackexchange", "id": 19805, "tags": "slam, navigation, mapping, laser, scan" }
What is the point of using an inverting amplifier in a circuit?
Question: This may be a simple question to most of you, but I'm having trouble understanding the concept behind the use of an inverting amplifier. What is the purpose of using an op amp in an inverting amplifier if all of the current flows through the feedback resistor? In my textbook it states that the potential on the inverting input is equal to that of the non-inverting input, which is earthed, which produces a differential input of 0V. How is this in any way useful for the function of the circuit? Also, how is the potential at $V_2$ 0? If current flows from $V_{in}$ to $V_{out}$ then how is the potential at that point 0? My textbook also states the currents $I_{in}$ and $I_F$ are in opposite directions seeing as $I_F=-I_{in}$. If this is the case I can see why the potential at $V_2$ is 0, but why are the currents in the opposite directions? Answer: One of the properties ("golden rules") of an ideal operational amplifier is that it drives the inverting input voltage $V_-$ to the non-inverting input voltage $V_+$ (when operated closed loop). So, in your case, $V_+$ is grounded, so the op-amp will function to pull $V_-$ to ground (a virtual ground). One advantage/use of an inverting op-amp is to amplify a small signal ($V_{in}$ in your case) since the gain is, $$\frac{V_{out}}{V_{in}}\approx-\frac{R_f}{R_{in}}$$ So, in your case, if $V_{in} = 1.0V$, $R_f=100kΩ$, and $R_{in}=10kΩ$ then, $$V_{out} \approx -10.0V$$ Since the op-amp is a very high impedance device, $\approx 0$ current will flow into the inputs, so we can say, $$i_{in}+i_{f}=0$$ And, $$i_{in}=\frac{V_{in}}{R_{in}}=\frac{1.0V}{10kΩ}=0.1mA $$ $$i_{f}=\frac{V_{out}}{R_{f}}=\frac{-10.0V}{100kΩ}=-0.1mA $$ The actual currents flowing in our example are thus, The assumed current arrows are in opposite directions (top figure), but because of the inversion giving us a negative sign, you will see that the actual currents are as shown in the second figure. No current (of significance) flows into the op-amp inputs. The output, however, is sinking 0.1mA.
UPDATE showing simplification. Solve this simple circuit for the voltage $V_X$ and see if it helps.
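As a quick numerical check of the worked example, here is a minimal sketch under the ideal op-amp assumptions (virtual ground at the inverting input, zero input current); the function names are mine, not from the answer.

```cpp
// Closed-loop gain of the inverting configuration: -Rf/Rin.
constexpr double gain(double Rf, double Rin) { return -Rf / Rin; }

// Output voltage for a given input.
constexpr double vout(double Vin, double Rf, double Rin) {
    return gain(Rf, Rin) * Vin;
}

// Branch currents, with the summing node held at 0 V:
constexpr double iIn(double Vin, double Rin) { return Vin / Rin; }
constexpr double iF(double Vout, double Rf)  { return Vout / Rf; }
```

Plugging in $V_{in} = 1.0V$, $R_f = 100kΩ$, $R_{in} = 10kΩ$ reproduces $V_{out} = -10.0V$ and the equal-and-opposite branch currents from the answer.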
{ "domain": "physics.stackexchange", "id": 73411, "tags": "electricity, electric-circuits, electric-current, potential, electronics" }
Do we know for sure that laws of physics are time invariant?
Question: Is there a proof that Maxwell's equations will hold true even billions of years in the future, for example? Answer: Is there a proof that Maxwell's equations will hold true even billions of years in the future, for example? No.
{ "domain": "physics.stackexchange", "id": 45723, "tags": "time, symmetry, maxwell-equations" }
Aren't antibiotic resistant probiotics dangerous?
Question: Multidrug resistant probiotics are often recommended by doctors in various cases. But since bacteria can easily exchange genes by conjugation or other means, they could promote the drug resistance of other dangerous bacteria residing in the bowels. (Which could be just "visitors" otherwise causing infections somewhere else.) Or is there something that prevents this transfer? I think not, and so does my former genetics professor. Answer: Usually, resistance genes are located on plasmids: additional DNA rings in the bacterium that are part of the genome. These plasmids cause their own exchange with other bacteria, even from other species. B. clausii, the probiotic organism in question here, appears to be special, though, in that it has no plasmids. Its resistance genes come with the primary ring-shaped genome and should not be transferred via plasmid exchange to other bacteria. This doesn't rule out other means of gene transfer like phages or conjugation, however. In one study, an attempt to transfer a macrolide resistance gene to other bacteria was unsuccessful. They conclude: A potential hazard is transfer of resistance to microorganisms pathogenic for humans. The risk that this event will occur and the consequences in terms of morbidity and mortality have not been evaluated. Parameters required for risk assessment include studies on the nature and mobility of the resistance genes of probiotics. The only other paper on a B. clausii resistance gene didn't look at its transferability. That clearly shows we don't know enough. B. Bozdogan, S. Galopin, R. Leclercq: Characterization of a new erm-related macrolide resistance gene present in probiotic strains of Bacillus clausii. Applied and Environmental Microbiology, Vol. 70, No. 1, January 2004, pp. 280-284. ISSN 0099-2240. PMID 14711653. PMC 321311.
{ "domain": "biology.stackexchange", "id": 460, "tags": "microbiology, bacteriology, antibiotics" }
Is nesting grids a good idea?
Question: I find myself nesting a lot of grids inside grids in WPF. I just found myself 3 Grids deep, and stopped to think: "Should I be doing this?" Is there some kind of performance cost? Is it unmaintainable? (Kind of like heavily nested ifs maybe.) I guess the alternative is to have one grid, and use a whole lot of column spans. (You may ignore the use of Events, I am aware commands are prob a better idea) <Grid Name="grid"> <Grid.ColumnDefinitions> <ColumnDefinition Width="*"/> <ColumnDefinition Width="*"/> <ColumnDefinition Width="*"/> </Grid.ColumnDefinitions> <Grid Grid.Column="1"> <Grid.RowDefinitions> <RowDefinition Height="20"/> <RowDefinition/> <RowDefinition Height="20"/> </Grid.RowDefinitions> <Grid.ColumnDefinitions> <ColumnDefinition/> <ColumnDefinition/> <ColumnDefinition/> <ColumnDefinition/> </Grid.ColumnDefinitions> <Button Grid.Row="0" Grid.Column="0" VerticalAlignment="Top" HorizontalAlignment="Left" Click="btnStart_Click" Content="New Game"/> <Button Grid.Row="0" Grid.Column="1" VerticalAlignment="Top" HorizontalAlignment="Center" Click="btnRun_Click" Content="Run"/> <Button Grid.Row="0" Grid.Column="2" VerticalAlignment="Top" HorizontalAlignment="Center" Click="btnUndo_Click" Content="Undo"/> <Button Grid.Row="0" Grid.Column="3" VerticalAlignment="Top" HorizontalAlignment="Right" Click="btnStep_Click" Content="Step"/> <Grid Grid.Row="1" Grid.ColumnSpan="4"> <Grid.RowDefinitions> <RowDefinition/> <RowDefinition/> </Grid.RowDefinitions> <board:BoardView x:Name="boardView" Grid.Row="0" Background="Firebrick"/> <GridSplitter ResizeDirection="Rows" HorizontalAlignment="Stretch" VerticalAlignment="Bottom" /> <TextBox Name="txtLog" Grid.Row="1" Text="{Binding Path=GameLog.Log, Mode=TwoWay}" VerticalScrollBarVisibility="Visible" FontFamily="Global Monospace" AcceptsReturn="True" Height="260" VerticalAlignment="Stretch"/> </Grid> <CheckBox Grid.Row="2" Grid.Column="0" Grid.ColumnSpan="3" Content="Use Algebraic Notation" IsChecked="{Binding 
Path=UseAlgebraicNotation}"/> </Grid> <ContentControl Grid.Column="0" Content="{Binding Path=Game.BlackPlayer}" Margin="3" /> <ContentControl Grid.Column="3" Content="{Binding Path=Game.WhitePlayer}" Margin="3" /> </Grid> Answer: It's better to use nested grids that are easy to read than complex spanning. I try to avoid spanning in most scenarios, unless it's just 1 simple span. But when in doubt, nest the grids, because that way future layout changes won't break everything, whereas spanning is directly tied to the number of columns you have. A great example of this is headers and footers; you wouldn't want them to fail to fill the width of an app.
{ "domain": "codereview.stackexchange", "id": 3865, "tags": "c#, wpf, layout, xaml" }
Nav_view not found on ROS diamondback
Question: Hi, I was trying to install the nav_view package on ROS Diamondback on a 64-bit Mac running Snow Leopard. However, this is what I see: j0813563:~ sagnikdhar$ rospack find nav_view [rospack] couldn't find package [nav_view] Has nav_view been deprecated? If yes, what is the alternative package for visualization applications? Thank you. ~ Sagnik Originally posted by Sagnik on ROS Answers with karma: 184 on 2011-02-24 Post score: 3 Answer: See this ros-users email thread. The relevant bit from that thread: So, for DTurtle, we're trying to remove dependencies on visualization stacks like ogre from our core stacks. Since navigation falls into this category, and the nav_view package relies on ogre, it has to move from navigation. That much is decided. However, I'm not sure exactly where to put the nav_view package. Do I create a new stack or just throw it in sandbox? I'm not sure where it moved to, but I believe it has been deprecated in favor of using rviz. Originally posted by Eric Perko with karma: 8406 on 2011-02-24 This answer was ACCEPTED on the original site Post score: 3 Original comments Comment by kwc on 2011-02-24: You can download the cturtle version of nav_view here: https://code.ros.org/svn/ros-pkg/stacks/navigation/tags/cturtle/nav_view/
{ "domain": "robotics.stackexchange", "id": 4862, "tags": "ros, visualization, ros-diamondback" }
The meaning of symbols
Question: What do the following symbols or expressions mean? Answer: $x$ and $y$ are elements of a set $S$. For example, if $S = \{1, 2, 3\}$, we can select $x = 1$ and $y = 2$. EDIT: As pointed out by David below, $*$ could be multiplication or convolution; it's hard to tell from the context. $\in$ means 'in', that is, that $x$ and $y$ are in a set. We can write 'x and y are elements of the set $S$' in math as: $x, y \in S$ Again, following my example, that would be $1$ and $2$ are in the set $\{1, 2, 3\}$. $\forall$ is verbalized 'for all'. It means that some statement holds for every element of the set. For example: $\forall x \in S, x < 5$ means that all of the elements in my set $S$ are less than 5. $I$ is the identity function. The entire phrase $I: \forall X. X \rightarrow X$ (EDIT: removed example, as it was misleading) means the identity function preserves the value of the variable $X$. More concretely, $I$ is the function that maps $X$ to itself. For all values of $X$, the output of the function is $X$. Hope that helps!
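The last point, $I: \forall X. X \rightarrow X$, maps neatly onto a generic function; here is an illustrative sketch (the names and the exhaustive $\forall$ check over the small finite set are my own, not from the answer).

```cpp
#include <cstddef>

// I : forall X. X -> X as a C++ template: one definition, valid for
// every type X, that returns its argument unchanged.
template <typename X>
constexpr X identity(X x) { return x; }

// "forall x in S, x < 5" for a small finite set S, checked by exhaustion.
constexpr bool allLessThan(const int* s, std::size_t n, int bound) {
    for (std::size_t i = 0; i < n; ++i) {
        if (!(s[i] < bound)) return false;
    }
    return true;
}

constexpr int S[] = {1, 2, 3}; // the example set from the answer
```

The template parameter plays the role of the bound variable $X$: one definition works for every type, just as the $\forall$ quantifier ranges over all types.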
{ "domain": "cs.stackexchange", "id": 9045, "tags": "discrete-mathematics" }
What are bits in the context of channel capacity units?
Question: 'bits' are used with 2 different meanings. One can use 'bits' to mean the binary digits, i.e. 1's and 0's. Bits are also the units of information in an event from a discrete source. I think when channel capacity is measured in bits/sec, we use the second meaning of bits: not how many 1's and 0's we can send down the channel, but how much information we can send per second through that channel. Is my interpretation right? Answer: In information theory, the amount of information in a message is proportional to the logarithm of the probability of occurrence of the message: $$ I(m) = -C \log\left( P(m) \right) $$ The only difference between different bases of logarithms is a scaling constant. If $ C = \frac{1}{\log(2)} $, that is $$ \begin{align} I(m) & = -\frac{1}{\log(2)} \log\left( P(m) \right) \\ &= - \log_2\left( P(m) \right) \\ \end{align} $$ then we say that $I(m)$ is in units of bits. The reason why is that if the message $m$ had only two possibilities and both had equal likelihood, then $P(m) = \frac12 $ and $I(m) = 1$. There are two messages, two possibilities of equal likelihood, like heads and tails of a coin, and you need only one bit, 0 or 1, to represent the two possibilities.
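The $I(m) = -\log_2 P(m)$ formula is easy to check numerically; a minimal sketch (the function name is mine):

```cpp
#include <cassert>
#include <cmath>

// Self-information in bits: I(m) = -log2(P(m)), the C = 1/log(2) case
// from the answer.
inline double infoBits(double p) { return -std::log2(p); }
```

A fair coin outcome ($P(m) = \frac12$) carries exactly 1 bit, a 1-in-8 outcome carries 3 bits, and a certain event carries 0 bits. Channel capacity in bits/sec is then this information measure per unit time, matching the question's second reading.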
{ "domain": "dsp.stackexchange", "id": 3668, "tags": "information-theory" }
How Does Differential Scanning Calorimetry (DSC) Differentiate Between Exothermic and Endothermic Changes?
Question: In DSC, the heat flux difference between sample and reference is measured as a function of temperature while the temperatures of sample and reference are maintained the same. DSC is often used for polymer analysis. What I am not clear about is how DSC differentiates between exothermic and endothermic changes, for example crystallization and melting of a polymer, since in both state changes the temperature of the sample remains the same. In both cases, to keep both sample and reference at the same temperature, the heat flux for the reference is going to decrease, because if not, the temperature of the reference would become higher than that of the sample, since we are giving latent heat to the sample. Answer: In DSC, differential power (heat) is provided to keep the sample and the reference at the same temperature. The DSC plot has differential power on the y-axis and temperature on the x-axis. Also, the reference is chosen in such a way that it will not undergo any phase change or state change in the chosen temperature range. The instrumentation is not as simple as one might think. There are two separate heaters which can be independently controlled. I quote from O'Reilly's Instrumental Analysis book (slightly outdated but still a pretty good reference for concepts). It describes DSC nicely, There are two separate heating circuits, the average-heating controller and the differential heating circuit. In the average-temperature controller, the temperatures of the sample and reference are measured and averaged and the heat output of the average heater is automatically adjusted so that the average temperature of the sample and reference increases at a linear rate. The differential-temperature controller monitors the difference in temperature between the sample and reference and automatically adjusts the power to either the reference or sample chambers to keep the temperatures equal.
The temperature of the sample is put on the x-axis (time) of a strip-chart (read "computer" today) recorder and the difference in power supplied to the two differential heaters is displayed on the y-axis. The power difference is calibrated in terms of calories per unit time. Now imagine that you are heating the polymer and the reference: their temperatures are increasing, but their difference is zero. You have a flat baseline, as shown in the figure (taken from Google Images). Let us continue heating both independently; eventually a temperature is reached at which the polymer melts. There is a phase change, and the sample temperature is not changing anymore, but it needs heat to stay at that temperature. The reference, when brought to the melting point, does not need further heating, so its temperature is at the melting point of the sample. The sample is consuming power, but the reference is not. You get a negative peak in the DSC due to an endothermic process. Apply the same idea to an exothermic process, such as crystallization. The temperature of the sample is becoming higher than that of the reference. The reference needs power to catch up to the temperature of the sample. The sample heater is not consuming power, but the reference's heater is, in order to be at the same temperature. This time you get a positive (exothermic) peak.
{ "domain": "chemistry.stackexchange", "id": 16112, "tags": "analytical-chemistry, polymers, differential-scanning-calorimetry" }
Implementation about animal shelter that allows only cats and dogs
Question: Well known problem: An animal shelter, which holds only dogs and cats, operates on a strictly "first in, first out" basis. People must adopt either the oldest (based on arrival time) of all animals at the shelter, or they can select whether they would prefer a dog or cat (and will receive the oldest of that type). They cannot select which specific animal they would like. Create the data structure to maintain this system and implement operations such as enqueue, dequeueAny, dequeDog, and dequeCat. You may use the built-in linked list data structure. It took me 2 hours to implement (from reading the problem to coding), which for 12 years of experience is a lot. Basically, I got messed up in my head while choosing a proper interface, which I changed two times (hence the time). I think if I had settled on the correct interface before starting to code, it would have been 30 minutes of coding at most. And the only way to know for sure is to let one of you optimize it in C++. What and how would you optimize? E.g., are the dog, cat, and animal classes OK? The dequeueCat(), dequeue(), et al. signatures look awful.
#include <iostream> #include <string> #include <list> #include <cassert> enum class shout { unknown, bark, purr }; struct animal { animal(std::string n = std::string(), shout t = shout::unknown) : name(n), shoutingTraits(t) {} // properties std::string name; shout shoutingTraits; }; struct dog : public animal { dog(std::string n = std::string()) : animal(n, shout::bark) {} }; struct cat : public animal { cat(std::string n = std::string()) : animal(n,shout::purr) {} }; class adopt; std::ostream& operator<<(std::ostream& out, const adopt& center); class adopt { public: bool enqueue(const animal& animal) { if(animal.shoutingTraits == shout::unknown) { throw std::runtime_error("only dogs and cats accepted"); } animals.push_back(animal); return true; //implement size restrictions } bool dequeOldestTrait(shout trait, animal& a) { assert(trait != shout::unknown); for(std::list<animal>::iterator it = animals.begin() ; it != animals.end() ; it++) { if(it->shoutingTraits == trait) { a = *it; animals.erase(it); return true; } } return false; } bool dequeAnimal(animal& a, shout trait = shout::unknown) { if(animals.empty()) { throw std::runtime_error("Sheleter not opened yet"); } if(trait == shout::unknown){ a = animals.front(); animals.pop_front(); return true; } return dequeOldestTrait(trait, a); } bool dequeDog(dog& dog) //reference to derived type { return dequeAnimal(dog, shout::bark); } bool dequeCat(cat& cat) { return dequeAnimal(cat, shout::purr); } friend std::ostream& operator<<(std::ostream& out, const adopt& center); private: std::list<animal> animals; }; std::ostream& operator<<(std::ostream& out, const adopt& center) { out << "Adoption Center animals queue => "; for(auto a : center.animals) { out << "(" << a.name << ","<< (int)a.shoutingTraits << "), "; } out << std::endl; return out; } void createCenter(adopt& adoptionCenter) { dog d0("dog0"); dog d1("dog1"); dog d2("dog2"); dog d3("dog3"); dog d4("dog4"); dog d5("dog5"); cat c0("cat0"); cat c1("cat1"); cat 
c2("cat2"); cat c3("cat3"); cat c4("cat4"); cat c5("cat5"); adoptionCenter.enqueue(d0); adoptionCenter.enqueue(c1); adoptionCenter.enqueue(c0); adoptionCenter.enqueue(d3); adoptionCenter.enqueue(d4); adoptionCenter.enqueue(c5); adoptionCenter.enqueue(d1); adoptionCenter.enqueue(c3); adoptionCenter.enqueue(d2); adoptionCenter.enqueue(c4); adoptionCenter.enqueue(c2); adoptionCenter.enqueue(d5); std::cout << adoptionCenter; } void unitTests(adopt& adoptionCenter) { std::cout << "\nRemoving dog (d0 should be removed)" << std::endl; dog tmp; if(!adoptionCenter.dequeDog(tmp)) { std::cout << "dog removal failed" <<std::endl; } std::cout << adoptionCenter; std::cout << "\nRemoving one more dog (d3 should be removed)" << std::endl; if(!adoptionCenter.dequeDog(tmp)) { std::cout << "dog removal failed" <<std::endl; } std::cout << adoptionCenter; std::cout << "\nRemoving cat (c1 should be removed)" << std::endl; cat tCat; if(!adoptionCenter.dequeCat(tCat)) { std::cout << "cat removal failed" <<std::endl; } std::cout << adoptionCenter; std::cout << "\nRemoving other cat (c0 should be removed)" << std::endl; if(!adoptionCenter.dequeCat(tCat)) { std::cout << "cat removal failed" <<std::endl; } std::cout << adoptionCenter; } int main() { adopt adoptionCenter; std::cout << "AdoptionCenter should be empty" << std::endl; std::cout << adoptionCenter; std::cout << "Removing on empty queue" << std::endl; animal a; try { std::cout << adoptionCenter.dequeAnimal(a) << std::endl; } catch (const std::runtime_error& e) { std::cout << e.what() << std::endl; } createCenter(adoptionCenter); unitTests(adoptionCenter); return 0; } Answer: Wrong type identifier A barking animal is not a dog. With regard to how you built the Cat and Dog, you are using the abstraction "something that barks is a dog" and "something that purrs" is a cat. 
That would not really be sufficient; deriving from your classes, I could smuggle a purring leopard into the adoption agency that would be handed out to the clients instead of a cat. Yes, it's a made-up case, but basically this is 'duck typing' in the wrong place. While this is a small, somewhat contrived example, I think it is important to understand what you modeled here. Right, somebody could write something like this: animal dogimal("dogimal", shout::bark); In this context I would interpret this as not a dog, but an animal that barks. It's a small difference that could be important once the animal interface grows. Had you not subclassed, this would probably not have been as much of an issue to me. Again, within the scope of the example this would be OK. For example, you could make the animal constructor protected; this would only let subclasses call it. This would also entail having to change the type from struct to class. For a more complex class, making it virtual would also be an option. Wrong class name: adopt is a verb; this should be something like adoptionAgency or another suitable noun. Modern C++: In dequeOldestTrait I would have liked to have seen the use of std::find_if rather than an explicit for loop. Unit tests: The unit tests only check whether the correct animal type has been removed; they don't check if the actual animal, e.g. d0, was correctly returned. The unit tests don't check for edge cases: empty shelter, animal type not available. Inconsistent interface: There is a mix between failures represented as bool results and exceptions (empty shelter, only dogs and cats are accepted vs animal not found). For this scenario that is probably an unnecessary differentiation. Bools would be good enough for all of these cases. Additionally, you return an unconditional true in the enqueue method. Outputs magic number: The << method for adopt writes out ints for the noise that the animals make; that is really not helpful for most people.
Notes The dequeDog signature that you noted doesn't look bad: you are returning bool for success or failure and using a reference as an output parameter. I probably would have liked to have seen methods for checking on the state of the adoption agency, isEmpty(), size(), hasDogs(), hasCats(), for completeness. In the interview I would have asked you what you would add to your class to make it easier to use. Depending on the time that we have and how the interview was set up, I might have asked where and why you got hung up and how you figured out which direction to take. I might ask you what would change in behavior if you'd use an array as a backing structure and why one would prefer one over the other. These days one follow-up question would always be on thread safety and how to achieve it.
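The std::find_if suggestion could look like this, sketched as a free function over pared-down stand-ins for the question's types (only the fields needed here are kept; member-function form would be analogous):

```cpp
#include <algorithm>
#include <cassert>
#include <list>

// Minimal stand-ins for the question's animal type.
enum class shout { unknown, bark, purr };
struct animal { shout shoutingTraits = shout::unknown; };

// dequeOldestTrait rewritten with std::find_if instead of the explicit
// for loop: find the oldest (front-most) animal with the trait, copy it
// out, and erase it from the queue.
inline bool dequeOldestTrait(std::list<animal>& animals, shout trait, animal& out) {
    auto it = std::find_if(animals.begin(), animals.end(),
                           [trait](animal const& a) { return a.shoutingTraits == trait; });
    if (it == animals.end()) {
        return false;
    }
    out = *it;
    animals.erase(it);
    return true;
}
```

The behavior is the same as the original loop; the algorithm call just states the intent (find the first match) directly.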
{ "domain": "codereview.stackexchange", "id": 27334, "tags": "c++, c++11, interview-questions" }
Understanding "natural variables" of the thermodynamic potentials using the example of the ideal gas
Question: I'm struggling with the concept of "natural variables" in thermodynamics. Textbooks say that the internal energy is "naturally" expressed as $$ U = U(S,V,N)$$ For an ideal gas, I could take the Sackur–Tetrode equation - which gives me $S(U,V,N)$ - and solve for $U$ to get $$U = \frac{3Nh^2}{4 \pi m} \left( \frac{N}{V} \exp \left( \frac{S}{Nk} - \frac{5}{2} \right) \right) ^{2/3} = U(S,V,N)$$ However, I have never seen this expression before. Usually, people invoke the equipartition theorem to get $$U = \frac{3}{2}N k_B T = U(T,N)$$ Or they use the ideal gas law to get $$U = \frac{3}{2}N k_B T = \frac{3}{2}pV = U(p,V)$$ So, sticking with the example of the ideal gas, this motivates the following questions: What is "natural" about $U(S,V,N)$ compared to $U(T,N)$ and $U(p,V)$? Can I derive the expressions for $U(T,N)$ and $U(p,V)$ from $U(S,V,N)$? Can I derive $U(S,V,N)$ from $U(T,N)$ and $U(p,V)$? Note that this question is not about the Legendre transformation between different thermodynamic potentials but about expressing the same thermodynamic potential $U$ in terms of different variables. Answer: I'll answer your questions one by one. About what makes $U(S,V,N)$ natural: Keep in mind that thermodynamics started out as an experimental science, with people looking to use it for practical purposes (such as Carnot, etc.). Since energy has always had center stage in physics, being conserved in many systems, it made for a good starting point. One observation people made about energy is that it $\textbf{scales with the system}$. What that means is that by increasing the size of the system (all its parameters), the energy also increases in proportion. This means that when looking for an equation for the energy, we need to look for parameters which also scale with the system, or mathematically speaking, we need an equation that is $\textbf{first order homogeneous}$, i.e. $$U(\lambda X_1, \lambda X_2 ...)
= \lambda U(X_1, X_2...)$$ Where $\lambda \in \mathbb{R}$ and $X_1,X_2...$ are the parameters. The parameters for which the above is true are known as $\textbf{extensive parameters}$. Intuitively, these are the volume, $V$, and the number of particles, $N$. Less intuitively, the entropy, $S$, is also extensive. There are other extensive parameters which are used to calculate $U$, such as magnetic moment. But for most simple systems, $S,V,N$ are enough. About deriving $U(T,p,N)$: Given $U(S,V,N)$, you can find $T$ and $-p$ by taking the partial derivative of $U$ with respect to $S$ and $V$ respectively. Note that $T$ and $-p$ are $\textbf{intensive parameters}$, meaning they are NOT first order homogeneous. The equation would look something like: $$T(\lambda X_1, \lambda X_2 ...) = T(X_1, X_2...)$$ Now since we have partial derivatives, you could write down $U$ as a function of $T$ and $p$, but in doing so, you would be losing information. Suppose you gave your new function $U(T,p,N)$ to a friend and he wanted to find $U(S,V,N)$; he would have to conduct 2 integrals, which would leave him wanting 2 constants of integration to nail down the function. So you see you can 'derive' $U(S,V,N)$ from $U(T,p,N)$ only up to constants. This is why the Legendre transformations are helpful. They are a way to change coordinates without losing information. $\textbf{When I say he has to conduct 2 integrals, I mean the following:}$ Suppose you expressed $U$ as a function of $T,p,N$ using the following relations: $$T=\left(\frac{\partial U}{\partial S}\right)_{V,N} , -p=\left(\frac{\partial U}{\partial V}\right)_{S,N} , $$ Then you'll have the following type of equation: $$U=f\left(\frac{\partial U}{\partial S},\frac{\partial U}{\partial V},N\right)$$ which is a partial differential equation ($f$ is some function of the variables).
In the best case scenario that this is separable, 2 integrals would have to be conducted and hence 2 integration constants would be needed to obtain the exact function $U(S,V,N)$. $\textbf{About Ideal Gases:}$ There are many ways to work out fundamental relations for ideal gases. The one you referenced in your question actually comes from Statistical Mechanics after considering phase space volumes. Thermodynamics alone is incomplete and requires many manipulations to get the results we want. In most cases, we only have expressions for $T$ and/or $p$, known as $\textbf{equations of state}$. What is really important to understand is the difference between the $\textbf{energy representation}$ and $\textbf{entropy representation}$. When the fundamental relation takes the form $U(S,V,N)$, we are working in the energy representation. If it is of the form $S(U,V,N)$, we have the entropy representation. Now the equation of state (relating to T) for the entropy representation is $$\frac{1}{T}=\left(\frac{\partial S}{\partial U}\right)_{V,N}$$ In the above it is clear that the right hand side must be a function of $U,V,N$. So the equipartition equation you wrote down is not a fundamental equation, but an equation of state in the entropy representation!
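To make the second question concrete for the ideal gas: differentiating the fundamental relation from the question (with the $5/2$ inside the exponential, as solving the Sackur–Tetrode equation gives) yields both equations of state; a sketch of the algebra:

```latex
U(S,V,N) \;=\; \frac{3Nh^2}{4\pi m}\left(\frac{N}{V}\right)^{2/3}
              \exp\!\left(\frac{2S}{3Nk}-\frac{5}{3}\right)

T \;=\; \left(\frac{\partial U}{\partial S}\right)_{V,N}
  \;=\; \frac{2}{3Nk}\,U
  \quad\Longrightarrow\quad U \;=\; \tfrac{3}{2}NkT
  \qquad \text{(equipartition, i.e. } U(T,N)\text{)}

-p \;=\; \left(\frac{\partial U}{\partial V}\right)_{S,N}
   \;=\; -\frac{2}{3}\,\frac{U}{V}
   \quad\Longrightarrow\quad U \;=\; \tfrac{3}{2}pV
   \qquad (U(p,V))
```

Combining the two results recovers $pV = NkT$, so the single function $U(S,V,N)$ contains both "familiar" forms; going the other way, one would again pick up undetermined integration constants.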
{ "domain": "physics.stackexchange", "id": 66854, "tags": "thermodynamics, statistical-mechanics, coordinate-systems, phase-space" }
In an NFA that recognizes L(ab) where ab is a regular expression, what is the point of the empty string transition?
Question: I'm reading Introduction to the Theory of Computation by Michael Sipser. He provides the NFA below as one that recognizes the language of the regular expression $ab$. What is the purpose of the middle two states and the $\varepsilon$ transition between them? Could this NFA be designed without them and still work as intended? Here's another example of an NFA that recognizes the language described by $aba$. Again, I'm confused by the addition of the empty string transitions and the extra states. Answer: The example is part of the construction to obtain a finite automaton for each regular expression. This construction is known as Thompson's construction and works recursively. Given automata $A(e_1)$ and $A(e_2)$ for expressions $e_1$ and $e_2$, instructions are given to build new automata for the expressions $e_1 + e_2$, $e_1\cdot e_2$ and $e_1^*$. These new automata contain a lot of $\varepsilon$-edges to connect the original parts. Hence the purpose of the example is not to give an efficient automaton (that is a no-brainer for $ab$) but to illustrate these constructions.
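To see that the $\varepsilon$-edge is pure glue, here is a small hand-rolled simulation of a four-state Thompson-style automaton for $ab$; the state numbering (0 through 3, with the $\varepsilon$-edge from 1 to 2) is an assumption for illustration, not Sipser's exact figure.

```cpp
#include <cstdint>

// States 0..3 of a Thompson-style NFA for "ab":
//   0 --a--> 1,   1 --eps--> 2,   2 --b--> 3 (accepting).
// The eps-edge is the glue the construction inserts between the
// sub-automaton for "a" and the one for "b".

// State sets are bitmasks: bit i set <=> state i is active.
constexpr std::uint8_t kAccept = 1u << 3;

// Follow eps-edges until the set stops growing (here: 1 -> 2 only).
constexpr std::uint8_t epsClosure(std::uint8_t states) {
    std::uint8_t prev = 0;
    while (prev != states) {
        prev = states;
        if (states & (1u << 1)) states |= (1u << 2);
    }
    return states;
}

// Consume one input symbol from every active state.
constexpr std::uint8_t step(std::uint8_t states, char c) {
    std::uint8_t next = 0;
    if ((states & (1u << 0)) && c == 'a') next |= 1u << 1;
    if ((states & (1u << 2)) && c == 'b') next |= 1u << 3;
    return epsClosure(next);
}

constexpr bool accepts(const char* s) {
    std::uint8_t states = epsClosure(1u << 0); // start in state 0
    for (; *s != '\0'; ++s) states = step(states, *s);
    return (states & kAccept) != 0;
}
```

An accepting run on "ab" passes through the $\varepsilon$-edge from 1 to 2; collapsing states 1 and 2 into one and drawing the $a$- and $b$-edges directly would accept the same language, which is exactly the answer's point: the extra states buy uniformity of the construction, not expressive power.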
{ "domain": "cs.stackexchange", "id": 15472, "tags": "regular-languages, finite-automata, regular-expressions" }
Rotation view for std::vector
Question: I have this view class for constant time rotations of a std::vector: #include <iostream> #include <string> #include <vector> template<typename T> class vector_view { public: vector_view(std::vector<T>& vec) : vec{&vec}, offset{0} {} const T& operator[](int index) const { return (*vec)[(index + offset) % vec->size()]; } T& operator[](int index) { return (*vec)[(index + offset) % vec->size()]; } void rotate(int rotation_length) { offset = (offset + vec->size() - rotation_length) % vec->size(); } friend std::ostream& operator<<(std::ostream& os, vector_view<T>& view) { os << "["; std::string separator = ""; for (int index = 0; index < view.vec->size(); ++index) { os << separator << view[index]; separator = ", "; } return os << "]"; } private: std::vector<T>* vec; int offset; }; int main() { std::vector<char> v{'a', 'b', 'c', 'd', 'e'}; vector_view<char> view{v}; for (int i = 0; i < v.size(); ++i) { std::cout << view << std::endl; view.rotate(1); } std::cout << std::endl; for (int i = 0; i < v.size(); ++i) { std::cout << view << std::endl; view.rotate(-1); } } The demo output is: [a, b, c, d, e] [e, a, b, c, d] [d, e, a, b, c] [c, d, e, a, b] [b, c, d, e, a] [a, b, c, d, e] [b, c, d, e, a] [c, d, e, a, b] [d, e, a, b, c] [e, a, b, c, d] Since I am still in the process of learning modern C++, please tell me anything I could improve here; such as: Naming, coding conventions, API, ... Answer: I'm assuming that you want your vector elements to be modifiable through this view, as vec is non-const. You store a pointer to the underlying vector, but never test it isn't null. I think it would be better to have it as a reference member. If you want your view to be as usable as the standard vector, you'll want to provide iterators, and begin() and end() to obtain them. You may also want implement front(), back() and at(). Methods such as size(), reserve, capacity etc. can simply forward to the contained vector. 
Like std::vector, you should override operator[] on const: const T& operator[](int index) const { return (*vec)[(index + offset) % vec->size()]; } T& operator[](int index) { return (*vec)[(index + offset) % vec->size()]; } Once we've implemented size() or begin()/end(), the streaming operator << uses only the public interface, so it no longer needs to be friend. Its loop compares (signed) int against (unsigned) size_t (g++ warns about this with -Wall). Prefer to avoid implicit promotions, so declare i as size_type to match size(). If you're pedantic, that should be vector_view<T>::size_type, which you'll forward from std::vector<T>. My code below avoids that entirely, by using iterators. Beware the gotcha when T is bool. The only thing preventing your code working on vector<bool> is: T& operator[](int index); If you return a std::vector<T>::reference instead, that will do the right thing there. (std::vector<bool>::reference is a proxy object, not a bool&). My modified class: #include <iterator> #include <vector> template<typename T> class vector_view { using vector_type = std::vector<T>; public: using value_type = typename vector_type::value_type; using allocator_type = typename vector_type::allocator_type; using size_type = typename vector_type::size_type; using difference_type = typename vector_type::difference_type; using reference = typename vector_type::reference; using const_reference = typename vector_type::const_reference; using pointer = typename vector_type::pointer; using const_pointer = typename vector_type::const_pointer; // Iterators: I've implemented little more than the minimum to // make the test program work. You could expand this, and also // provide a specialization of std::iterator_traits<> for each // one. 
struct iterator { vector_view& v; size_type i; iterator& operator++() { ++i; return *this;} iterator& operator-- () { --i; return *this;} reference operator*() const { return v[i]; } bool operator!=(const iterator other) const { return &v != &other.v || i != other.i; } }; struct const_iterator { const vector_view& v; size_type i; const_iterator& operator++() { ++i; return *this;} const_iterator& operator-- () { --i; return *this;} const_reference operator*() const { return v[i]; } bool operator!=(const const_iterator other) const { return &v != &other.v || i != other.i; } }; using reverse_iterator = std::reverse_iterator<iterator>; using const_reverse_iterator = std::reverse_iterator<const_iterator>; explicit vector_view(vector_type& vec, int offset = 0) : vec{vec}, offset{offset} {} const reference operator[](size_type index) const { return vec[(index + offset) % vec.size()]; } reference operator[](size_type index) { return vec[(index + offset) % vec.size()]; } void rotate(int rotation_length) { auto size = vec.size(); offset = (offset + size - rotation_length) % size; } void set_rotation(int rotation_length) { offset = rotation_length % vec.size(); } int rotation() const { return offset; } // Iterators iterator begin() { return iterator{*this, 0}; } const_iterator begin() const { return const_iterator{*this, 0}; } const_iterator cbegin() const { return begin(); } iterator end() { return iterator{*this, size()}; } const_iterator end() const { return const_iterator{*this, size()}; } const_iterator cend() const { return end(); } // You may wish to implement rbegin/rend and crbegin/crend bool empty() const { return vec.empty(); } size_type size() const { return vec.size(); } // You may also want: // reserve(), capacity() // clear(), insert(), emplace(), erase(), ..., etc. private: vector_type& vec; int offset; }; // Consider implementing operator==(), operator<() and other // comparisons (as non-member functions, probably). 
You could // probably use std::mismatch() for that. // Printing: greatly simplified using range-based `for`. #include <iostream> #include <string> template<typename T> std::ostream& operator<<(std::ostream& os, const vector_view<T>& view) { os << "["; std::string separator = ""; for (auto element: view) { os << separator << element; separator = ", "; } return os << "]"; } // Test program: exercises both `normal` and boolean vectors. int main() { std::vector<char> v{'a', 'b', 'c', 'd', 'e'}; vector_view<char> view{v}; for (size_t i = 0; i < view.size(); ++i) { std::cout << view << std::endl; view.rotate(1); } std::cout << std::endl; for (size_t i = 0; i < view.size(); ++i) { std::cout << view << std::endl; view.rotate(-1); } std::cout << std::endl; std::vector<bool> bvec{{false, true, true, false, false}}; auto bview = vector_view<bool>{bvec}; for (size_t i = 0; i < bview.size(); ++i) { bview.set_rotation(i); std::cout << bview << std::endl; } }
{ "domain": "codereview.stackexchange", "id": 22490, "tags": "c++, c++11, vectors, circular-list" }
Deck of cards design
Question: Can someone please review this design I created for deck of cards in Java? package prep.design; import java.lang.reflect.Array; import java.util.*; /** * Created by rohandalvi on 5/14/16. */ public class CardDeck { private ArrayList<Card> cards; CardDeck(){ cards = new ArrayList<>(); for(Card.Suit s: Card.Suit.values()){ for(Card.Rank r: Card.Rank.values()){ cards.add(new Card(r,s)); } } } public void print(){ for(Card card: cards){ System.out.println("Card: "+card.getRank()+" Suit: "+card.getSuit()); } } public void shuffle(ArrayList<Card> c){ if(c==null){ Collections.shuffle(cards); }else{ Collections.shuffle(c); } } public static void main(String[] args) { CardDeck cd = new CardDeck(); cd.shuffle(null); //cd.print(); cd.deal(5); } public void deal(int numberOfPlayers){ ArrayList<Card> tempDeck = new ArrayList<>(cards); Map<Integer,List<Card>> playerDeck = new HashMap<>(); //List<List<Card>> playerDeck = new ArrayList<>(); shuffle(tempDeck); shuffle(tempDeck); int i=0; while(i!=52){ int j = i%numberOfPlayers; List<Card> tempList; if(playerDeck.containsKey(j)){ tempList = playerDeck.get(j); }else{ tempList = new ArrayList<>(); } tempList.add(tempDeck.get(i)); playerDeck.put(j,tempList); i++; } System.out.println("Dealt"); displayPlayerCards(playerDeck); } public void displayPlayerCards(Map<Integer,List<Card>> playerDeck){ int i=0; for(Integer player: playerDeck.keySet()){ List<Card> playerCards = playerDeck.get(player); System.out.println("Player "+i); for(Card c: playerCards){ System.out.print("Rank: "+c.getRank()+" Suit: "+c.getSuit()+"\t"); } System.out.println(); i++; } } } class Card{ Suit s; Rank r; public enum Suit{ Spade , Heart , Diamond , Clubs } public enum Rank{ ACE(1) , TWO(2), THREE(3), FOUR(4), FIVE(5) , SIX(6), SEVEN(7), EIGHT(8), NINE(9), TEN (10), JACK(11), QUEEN (12), KING(13); int priority; Rank(int s) { priority = s; } public int getPriority(){ return priority; } public Rank getRankByPriority(int p){ switch (p){ case 1: return 
Rank.ACE; case 2: return Rank.TWO; case 3: return Rank.THREE; case 4: return Rank.FOUR; case 5: return Rank.FIVE; case 6: return Rank.SIX; case 7: return Rank.SEVEN; case 8: return Rank.EIGHT; case 9: return Rank.NINE; case 10: return Rank.TEN; case 11: return Rank.JACK; case 12: return Rank.QUEEN; case 13: return Rank.KING; default: return null; } } } Rank getRank(){ return r; } Suit getSuit(){ return s; } Card(Rank r, Suit s){ this.r = r; this.s = s; } } class cardComparator implements java.util.Comparator<Card.Rank>{ @Override public int compare(Card.Rank o1, Card.Rank o2) { if(o1.getPriority() > o2.getPriority()){ return 1; }else if(o1.getPriority() < o2.getPriority()){ return -1; }else{ return 0; } } @Override public Comparator<Card.Rank> reversed() { return null; } } Answer: Card class Specify the access modifiers of the class and members properly. Use proper names for members: class Card{ Suit s; Rank r; for example: public class Card{ private Suit suit; private Rank rank; print() method It's better to only create a String representation of an object and return that. You override the toString() method, which will be used implicitly if the object has to be converted to a String. It's good practice to override toString() for all custom classes, because this is where people will look when searching for a String representation of the object. You called it print(), or was it info()? Who knows? Your future self? Probably not. toString() is the way to go. Your current approach appears to be more convenient, but actually causes some trouble like duplicated code: System.out.print("Rank: "+c.getRank()+" Suit: "+c.getSuit()+"\t"); System.out.println("Card: "+card.getRank()+" Suit: "+card.getSuit()); Card.toString() could look like this: @Override public String toString() { return "rank: " + rank + "\t suit: " + suit; } CardDeck is pretty much a collection of a number of cards.
CardDeck.toString() could look something like this: @Override public String toString() { String result = cards.size() + " cards:" + System.lineSeparator(); for (Card card : cards) { result = result.concat(card + System.lineSeparator()); } return result; } Here's what both classes look like so far with a little example program: Card.java public class Card { private Suit suit; private Rank rank; public enum Suit { Spade , Heart , Diamond , Clubs } public enum Rank { ACE(1) , TWO(2), THREE(3), FOUR(4), FIVE(5) , SIX(6), SEVEN(7), EIGHT(8), NINE(9), TEN (10), JACK(11), QUEEN (12), KING(13); int priority; Rank(int s) { priority = s; } public int getPriority() { return priority; } public Rank getRankByPriority(int p) { switch (p) { case 1: return Rank.ACE; case 2: return Rank.TWO; case 3: return Rank.THREE; case 4: return Rank.FOUR; case 5: return Rank.FIVE; case 6: return Rank.SIX; case 7: return Rank.SEVEN; case 8: return Rank.EIGHT; case 9: return Rank.NINE; case 10: return Rank.TEN; case 11: return Rank.JACK; case 12: return Rank.QUEEN; case 13: return Rank.KING; default: return null; } } } Card(Rank rank, Suit suit) { this.rank = rank; this.suit = suit; } Rank getRank() { return rank; } Suit getSuit() { return suit; } @Override public String toString() { return "rank: " + rank + "\t suit: " + suit; } } CardDeck.java import java.util.ArrayList; public class CardDeck { private ArrayList<Card> cards; CardDeck() { cards = new ArrayList<>(); for(Card.Suit s: Card.Suit.values()) { for(Card.Rank r: Card.Rank.values()) { cards.add(new Card(r,s)); } } } public static void main(String[] args) { CardDeck deck = new CardDeck(); System.out.println(deck); } @Override public String toString() { String result = cards.size() + " cards:" + System.lineSeparator(); for (Card card : cards) { result = result.concat(card + System.lineSeparator()); } return result; } } result (excerpt): $ java CardDeck 52 cards: rank: ACE suit: Spade rank: TWO suit: Spade rank: THREE suit: Spade rank: FOUR
suit: Spade rank: FIVE suit: Spade rank: SIX suit: Spade rank: SEVEN suit: Spade rank: EIGHT suit: Spade rank: NINE suit: Spade rank: TEN suit: Spade ... shuffle() method A method is a block of code that operates on an object. Your shuffle() method might work on an optional parameter which is entirely independent from the CardDeck object. The part of the method that operates on an arbitrary ArrayList<Card> should be static. public void shuffle(ArrayList<Card> c){ if(c==null){ Collections.shuffle(cards); }else{ Collections.shuffle(c); } } Thus splits into: public static void shuffle(ArrayList<Card> cards) { if(cards!=null) { Collections.shuffle(cards); } } public void shuffle() { Collections.shuffle(cards); } The second one makes a lot of sense, but the static version does not. What's the point of calling this method directly when you already have Collections.shuffle()? Also, the static version has nothing to do with the CardDeck class whatsoever. It should not be a member of CardDeck, but either Card or better a separate class with utility functionality named CardUtils for example. But again, there's no real point in having that functionality that does not add anything to Collections.shuffle(). Delete it. deal() method I have no idea what this is doing by only looking at it. There are so many card games and all have their own ways to deal cards. Does every player get the same number of cards? Or are all cards distributed to a number of players? Trying your code, it looks like the latter is true. The problem with this method is that it does not return anything. It only prints the piles of cards out, without any chance to do anything with them. Instead, provide a useful return value you can work with. One such return value could be CardDeck. There's a bit of functionality already in there, so why not reuse that? Add a constructor to be able to pass a Collection of cards to it: CardDeck(Collection<?
extends Card> cards) { this.cards = new ArrayList<Card>(cards); } When dealing the cards, there's no real point in dealing them one by one. You know how many cards each player receives and should just get that many from the shuffled cards. Think about it like every player takes a certain number of cards from the deck. public ArrayList<CardDeck> dealAllCards(int numberOfPlayers) { ArrayList<CardDeck> playerHands = new ArrayList<>(numberOfPlayers); ArrayList<Card> shuffledDeck = new ArrayList<>(cards); int cardsPerPlayer = shuffledDeck.size()/numberOfPlayers; // to be subtracted by 1 after dealing remaining cards int remainingCards = shuffledDeck.size()%numberOfPlayers; Collections.shuffle(shuffledDeck); for (int player = 0; player < numberOfPlayers; ++player) { if (player == remainingCards) { --cardsPerPlayer; // all remaining cards dealed } CardDeck hand = new CardDeck(shuffledDeck.subList(0, cardsPerPlayer + 1)); shuffledDeck.removeAll(hand.cards); playerHands.add(hand); } return playerHands; } The full code now looks like this: Card.java did not change, see above CardDeck.java import java.util.ArrayList; import java.util.Collection; import java.util.Collections; public class CardDeck { private ArrayList<Card> cards; public static void main(String[] args) { CardDeck deck = new CardDeck(); ArrayList<CardDeck> playerHands = deck.dealAllCards(5); for(CardDeck playerHand : playerHands) { System.out.println(playerHand); } } CardDeck() { cards = new ArrayList<>(); for(Card.Suit s: Card.Suit.values()) { for(Card.Rank r: Card.Rank.values()) { cards.add(new Card(r,s)); } } } CardDeck(Collection<? 
extends Card> cards) { this.cards = new ArrayList<Card>(cards); } public ArrayList<CardDeck> dealAllCards(int numberOfPlayers) { ArrayList<CardDeck> playerHands = new ArrayList<>(numberOfPlayers); ArrayList<Card> shuffledDeck = new ArrayList<>(cards); int cardsPerPlayer = shuffledDeck.size()/numberOfPlayers; // to be subtracted by 1 after dealing remaining cards int remainingCards = shuffledDeck.size()%numberOfPlayers; Collections.shuffle(shuffledDeck); for (int player = 0; player < numberOfPlayers; ++player) { if (player == remainingCards) { --cardsPerPlayer; // all remaining cards dealed } CardDeck hand = new CardDeck(shuffledDeck.subList(0, cardsPerPlayer + 1)); shuffledDeck.removeAll(hand.cards); playerHands.add(hand); } return playerHands; } public void shuffle() { Collections.shuffle(cards); } @Override public String toString() { String result = cards.size() + " cards:" + System.lineSeparator(); for (Card card : cards) { result = result.concat(card + System.lineSeparator()); } return result; } } full example output: $ java CardDeck 11 cards: rank: FOUR suit: Clubs rank: FIVE suit: Heart rank: FOUR suit: Spade rank: ACE suit: Heart rank: EIGHT suit: Heart rank: SIX suit: Heart rank: JACK suit: Spade rank: EIGHT suit: Diamond rank: JACK suit: Clubs rank: QUEEN suit: Clubs rank: TWO suit: Spade 11 cards: rank: SEVEN suit: Heart rank: NINE suit: Spade rank: SEVEN suit: Diamond rank: SEVEN suit: Clubs rank: SIX suit: Clubs rank: ACE suit: Clubs rank: KING suit: Heart rank: SEVEN suit: Spade rank: FIVE suit: Clubs rank: FOUR suit: Diamond rank: TEN suit: Diamond 10 cards: rank: KING suit: Clubs rank: TEN suit: Clubs rank: ACE suit: Diamond rank: QUEEN suit: Spade rank: SIX suit: Diamond rank: FIVE suit: Spade rank: EIGHT suit: Spade rank: QUEEN suit: Heart rank: FOUR suit: Heart rank: KING suit: Diamond 10 cards: rank: THREE suit: Clubs rank: EIGHT suit: Clubs rank: THREE suit: Diamond rank: KING suit: Spade rank: TWO suit: Clubs rank: JACK suit: Heart rank: THREE 
suit: Spade rank: THREE suit: Heart rank: JACK suit: Diamond rank: TWO suit: Diamond 10 cards: rank: NINE suit: Heart rank: NINE suit: Clubs rank: TEN suit: Heart rank: ACE suit: Spade rank: NINE suit: Diamond rank: TEN suit: Spade rank: FIVE suit: Diamond rank: QUEEN suit: Diamond rank: TWO suit: Heart rank: SIX suit: Spade further improvements from the comments Thanks @Boris! String.concat is conventionally written as +; this is never used in a loop - a StringBuilder (or StringWriter) is used instead. That's right. CardDeck.toString() could look like: @Override public String toString() { StringBuilder result = new StringBuilder(cards.size() + " cards:" + System.lineSeparator()); for (Card card : cards) { result.append(card + System.lineSeparator()); } return result.toString(); } This is great, because it also helps readability. It's not just some String, but a StringBuilder, which is a bit more self-documenting. Given that cards is immutable (no public access and no private changes), this String could even be built beforehand and cached in a private member. As I don't think the toString() method will be called in such quantities that it would make a significant difference, I leave such and further optimisations of the method to the interested reader. Your deal method is very inefficient; List.removeAll will call remove on each of the elements, which is in turn going to loop through the List to find the element; this is O(n*k) - very bad. Instead use List.subList.clear() after you create the CardDeck for the player. True, I was more concerned about being close to the real world situation that the code is modelling, that is: taking a number of cards from a set of cards. How to solve that problem? Not at all. They say it's clever to solve a problem, but wise to avoid it. Let's simply not remove anything at all. Instead, move the parameters for subList(), which essentially has the same effect.
The improved version of dealAllCards() could look like this: public ArrayList<CardDeck> dealAllCards(int numberOfPlayers) { ArrayList<CardDeck> playerHands = new ArrayList<>(numberOfPlayers); ArrayList<Card> shuffledDeck = new ArrayList<>(cards); int cardsPerPlayer = shuffledDeck.size()/numberOfPlayers; // to be subtracted by 1 after dealing remaining cards int remainingCards = shuffledDeck.size()%numberOfPlayers; // #cards is not a multiple of #players Collections.shuffle(shuffledDeck); int firstCardIndex = 0; int lastCardIndex = 0; for (int player = 0; player < numberOfPlayers; ++player) { if (player == remainingCards) { --cardsPerPlayer; // all remaining cards dealed } lastCardIndex = firstCardIndex + cardsPerPlayer + 1; playerHands.add(new CardDeck(shuffledDeck.subList(firstCardIndex, lastCardIndex))); firstCardIndex = lastCardIndex; } return playerHands; } shuffledDeck is just a local temporary helper to hold the shuffled cards, after all.
{ "domain": "codereview.stackexchange", "id": 20003, "tags": "java, playing-cards" }
Effect of heat-treatment of Vitamin C
Question: When it is in the body, does it matter if vitamin C has previously been heat-treated? Can the body still use it? For instance, if vitamin C is added to 90 degree Celsius water, does it matter if you drink all of the water? Will you still gain the vitamin C? Answer: Disclaimer: denaturation is the process of a protein unfolding from its native state. Since vitamin C is not a protein, it cannot denature. I think that the word you're looking for is simply "degradation". Vitamin C can exist either in its reduced form L-ascorbic acid (L-AA) or in the oxidised form dehydroascorbic acid (DHAA). The active (antioxidant) form of vitamin C is the former; however, there are enzymes in our body that can convert DHAA back into L-AA. In this article, the authors seek to evaluate the thermal stability of both L-AA and DHAA in broccoli. Their results are the following.
{ "domain": "biology.stackexchange", "id": 8212, "tags": "vitamins" }
How to identify similar words as input from a dictionary
Question: Let's say I have a CSV file (single column) with a list of words as input, meaning the file looks like below: Input Terms Subaaaash Paruuru Mettromin Paracetamol Crocedinin Vehiclehcle Buildding Dict terms #this dict has around a million records and I have to find the closest match for each input term from this dictionary Metformin 250 MG Metformin ..... Crocin Vehcile Paru Subash Now, I expect my output to look like the output shown below. As you can see, the red colored Paru is not a correct match for paracetamol, but that's the closest match we can get for the input term paracetamol from the dictionary. So, I would like to do matching based on 1) Word sounds (when pronounced). Phonetics 2) Spelling mistake corrections 3) Find the best matching word from the dictionary for input terms Can you let me know how I can do the above? Answer: So, your question concerns how to effectively translate the input words into their proposed correct words (e.g. Paruuuu --> Paru) via phonetics and spelling mistake corrections. My first idea on this would be to use a deep sequence to sequence model. In a sequence to sequence model, we encode the input word (e.g. Paruuuu), as a sequence of characters, with an encoder (effectively an RNN / LSTM, etc.) into a "hidden representation". Then you decode your hidden representation as a sequence of phonemes (which denote how we pronounce the word) with a decoder (again, another RNN / LSTM, etc.). Then, we can take this sequence of phonemes as input into another encoder and then decode with a neural network, where the output layer is a softmax layer, which computes the probability distribution over all words in your vocabulary (in your Dict terms) and select the word with the highest probability. So overall, we have: Encode input word --> Decode output phoneme sequence --> Encode output phoneme sequence --> Decode with neural network to classify word.
The proposed method is supervised, so of course you will need examples of input words and their correct "translations" (e.g. ("Paruuu", "Paru")). Here is an article which gives a good intuition behind sequence to sequence models which can classify your input words: https://towardsdatascience.com/understanding-encoder-decoder-sequence-to-sequence-model-679e04af4346
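Before training such a model, a useful sanity-check baseline is plain edit-distance matching against the dictionary. This is a minimal sketch using Python's standard library, with a small sample of the question's terms; note that difflib matches on spelling only, so it does not capture phonetics the way the sequence model above would:

```python
import difflib

# Small sample of the question's dictionary and noisy input terms
dictionary = ["Metformin 250 MG", "Metformin", "Crocin", "Vehcile", "Paru", "Subash"]
inputs = ["Subaaaash", "Mettromin", "Vehiclehcle"]

for term in inputs:
    # cutoff=0.0 always returns a best-effort match, however distant
    best = difflib.get_close_matches(term, dictionary, n=1, cutoff=0.0)[0]
    print(f"{term} -> {best}")
```

For a million-record dictionary, a brute-force scan per query is slow; a common speedup is to index the dictionary by phonetic keys (e.g. Soundex/Metaphone) or character n-grams first, and run the fine-grained comparison only within the retrieved bucket.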
{ "domain": "datascience.stackexchange", "id": 7683, "tags": "machine-learning, neural-network, deep-learning, text-mining, nlp" }
Why does the recurrence equation for QuickSort consider all the elements in the array?
Question: I have been taught that QuickSort has the following recurrence equation in the best case: $T(n) = \begin{cases} c & \text{if } n=1 \\ 2\ T(\frac{n}{2}) + c \cdot n & \text{if } n>1 \end{cases}$ where $c \in \Theta(1)$ and $c \cdot n \in \Theta(n)$. Similarly, for the general case: $T(n) = \begin{cases} c & \text{if } n=1 \\ T(q) + T(n-q) + c \cdot n & \text{if } n>1 \end{cases}$ Moreover, the algorithm used in the lecture was the following: Procedure QuickSort(A[p,...,r]) if p < r q = Partition(A, p, r) QuickSort(A, p, q-1) QuickSort(A, q+1, r) As you can see, the recursive calls don't take care of the pivot element $q$. I understand this occurs as Partition moves elements around until the pivot is where it would be were the array ordered. The following is the Partition algorithm: Function Partition(A[p,...,r], p, r) x = A[r] i = p-1 for j = p to j = r-1 do if A[j] <= x i = i+1 Exchange(A[i], A[j]) Exchange(A[i+1], A[r]) return i+1 My question is: Why do the recurrence equations assume the algorithm takes all the elements of the array (let it be $n$) if the actual algorithm only takes $n-1$ elements? Why can't it be the following? $T(n) = \begin{cases} c & \text{if } n=1 \\ 2\ T(\frac{n-1}{2}) + c \cdot n & \text{if } n>1 \end{cases}$ Or, more generally, this one? $T(n) = \begin{cases} c & \text{if } n=1 \\ T(q) + T(n-q-1) + c \cdot n & \text{if } n>1 \end{cases}$ Answer: When you're looking at the behaviour of the algorithm as $n\to\infty$, the difference between $n$ and $n-1$ becomes negligible. The simpler form is easier to deal with and it gives the same answer.
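A quick numerical check makes this concrete. Below is a small sketch (my own illustration, not part of the original answer) that evaluates both best-case recurrences with $c = 1$: the version with $T(n/2)$ and the version with $T((n-1)/2)$. Both grow like $n \log n$, and their ratio drifts toward 1 as $n$ grows:

```python
from functools import lru_cache

C = 1  # the constant c from the recurrences

@lru_cache(maxsize=None)
def T_full(n):
    # T(n) = 2 T(n/2) + c*n, counting all n elements
    return C if n <= 1 else 2 * T_full(n // 2) + C * n

@lru_cache(maxsize=None)
def T_minus(n):
    # T(n) = 2 T((n-1)/2) + c*n, excluding the pivot
    return C if n <= 1 else 2 * T_minus((n - 1) // 2) + C * n

for k in (10, 15, 20):
    n = 2**k
    print(n, T_full(n) / T_minus(n))  # ratio shrinks toward 1
```

At $n = 2^{20}$ the two values already agree to well within 20%, and the gap keeps shrinking, which is why the asymptotic analysis may safely ignore the pivot.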
{ "domain": "cs.stackexchange", "id": 6337, "tags": "algorithm-analysis, recurrence-relation, recursion, quicksort" }
Is this a correct derivation of the second law or it is just nonsense?
Question: In this article, the author makes the following argument. The difference between entropy and information is that entropy is the bits that you don't understand while information is the bits you do understand. Bits can never be destroyed. Therefore entropy must always increase, because entropy is bits. Is this a correct derivation of the second law of thermodynamics or is it wrong? I need discernment from experts in physics because I'm not a professional physicist. Answer: It is not an argument at all. At least, not a scientific argument. The concept of "bits that you want or don't want" or "that you understand or do not understand" is at least very vague: is the author suggesting an arbitrary subjective definition of information or of entropy, depending on what different persons want or understand? Even worse, there is no physical definition of a bit. Ok, the minimum information, as taught in an introductory course on information theory, is ok. But, what is a bit in the real world according to the author? In any case, information theory never identifies information and entropy. Lacking a definition of what represents a bit in the physical world, claiming that bits can never be destroyed is pure nonsense. Is the author able to reconstruct the content of his hard drive after melting it in a furnace? And how can we know that it is possible to create bits from nothing? Because, if the creation of new bits is not possible, entropy, according to his definition, should stop increasing at some point. Leaving aside these unjustified claims, there are a few things that could be said on this subject which may help to understand why this "derivation" is nonsense. There are many different concepts which (unfortunately) have all been named "entropy".
Those immediately relevant for the second law of thermodynamics are the thermodynamic definition introduced by Clausius:$$ S(B) = S(A) + \int_A^B \frac{dQ_{rev}}{T},$$ the statistical mechanics definition by Boltzmann/Planck/Gibbs, which can be expressed in many ways, depending on the set of state variables one likes to use to describe a macroscopic state; the information theory definition by Shannon: $$S = -k \sum_i p_i \log(p_i). $$ In Shannon's formula, a generic system (even a system without any thermodynamic behavior like a deck of cards) is supposed to be found in each $i$-th state with a probability $p_i$. It is reasonable to associate a quantity named $information$ to each state via the quantity $\log(1/p_i)$. Thus, a value of the Shannon entropy can be assigned to each system characterized by a given probability of its states. It is evident that Shannon entropy is not the same as information but it can be seen as the average information embodied in a probability distribution. It is also interesting to notice that it is possible to assign different probability distributions to the same physical system, according to different ways of listing its (micro)states. A different entropy will correspond to each of these distributions. Last but not least, nothing is stated, at the level of information theory, about the time evolution of the probabilities. So, in order to make contact with the second principle, something more should be said. The conceptual chain of links between the three entropies goes as follows: Shannon entropy reduces to the statistical mechanics expressions for the entropy in different ensembles if the probability distribution used in the information entropy coincides with the probability distribution of the relevant ensemble.
The different formulae of the entropy in each ensemble are not always equivalent, but the corresponding entropy per particle or per unit volume coincides after taking the thermodynamic limit, and always in that limit, it has all the properties of the Clausius thermodynamic entropy. In conclusion, only for an infinite system, controlled by the Boltzmann-Gibbs probability, is it possible to establish a safe scientific link between information and the second law. Unfortunately, such a clean conceptual connection is very often ignored or misinterpreted, even in textbooks, even though the whole scenario was already very clear in the fifties, when Brillouin's book Science and Information Theory was written.
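As a small illustration of the Shannon formula above (a sketch of my own, with $k = 1$ and natural logarithms; the physical entropy would use the Boltzmann constant):

```python
import math

def shannon_entropy(probs, k=1.0):
    # S = -k * sum_i p_i log(p_i); states with p_i = 0 contribute nothing
    return -k * sum(p * math.log(p) for p in probs if p > 0)

# Two probability assignments over the same four states: the uniform one
# carries the least information per state and hence the most entropy.
uniform = [0.25, 0.25, 0.25, 0.25]
peaked = [0.97, 0.01, 0.01, 0.01]
print(shannon_entropy(uniform))  # ln 4, the maximum for four states
print(shannon_entropy(peaked))   # much smaller
```

This also makes the answer's point about state listings concrete: coarse-graining or refining the list of (micro)states changes the distribution, and with it the entropy assigned to the very same system.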
{ "domain": "physics.stackexchange", "id": 53773, "tags": "thermodynamics, entropy, information" }
How does one figure out where a class of languages falls under some complexity class?
Question: I was wondering how someone can prove that one class of languages is of a certain complexity. For example, how could I show the Turing-recognizable languages are in P? Would I have to come up with an algorithm that runs in deterministic polynomial time?
{ "domain": "cs.stackexchange", "id": 2777, "tags": "complexity-theory, formal-languages, proof-techniques" }
Understanding Asymptotic Equipartition Property
Question: I have some problems in understanding the precise meaning of the Asymptotic Equipartition Property, related to a large number $n$ of independent and identically distributed random variables $(X_1, X_2, \ldots, X_n)$. Here an example is shown (here you may find the complete material): I do not understand the precise difference between the sentences: "Clearly, it is not true that all $2^n$ sequences of length $n$ have the same probability." "One might be able, however, to predict the probability of the sequence that is actually observed. The question is: what is the probability $p(X_1,X_2,\ldots,X_n)$ of the outcomes $X_1,X_2,\ldots,X_n$, where $X_1,X_2,\ldots,X_n$ are i.i.d. according to $p(x)$" "the typical set has probability close to 1, all the elements of the typical set have same probability" Then I read this sentence, which increases the confusion in my mind. Answer: The sentences you've mentioned in your question all have a quite different meaning. Let me try to explain them one by one: Clearly, it is not true that all $2^n$ sequences of length $n$ have the same probability. This refers to the example of a sequence of binary i.i.d. random variables. If $p(1)\neq p(0)$ then it is clear that sequences with different numbers of ones and zeros must have different probabilities, i.e., not all sequences have the same probability. One might be able, however, to predict the probability of the sequence that is actually observed. The question is: what is the probability $p(X_1,X_2,...,X_n)$ of the outcomes $X_1,X_2,\ldots, X_n$, where $X_1,X_2,\ldots, X_n$ are i.i.d. according to $p(x)$? This question is answered by the asymptotic equipartition property (AEP). It turns out that the typical sequences do have approximately the same probability as $n$ gets large, namely $2^{-nH}$. The typical set has probability close to 1, all the elements of the typical set have [the] same probability. That's also what the AEP says.
As mentioned before, the typical sequences have approximately the same probability, and the aggregate probability of the typical set approaches $1$ as $n$ gets large. Finally, the statement The concept of a typical set is generally different from that of a high probability set. is indeed confusing. What is meant is the fact that there can exist individual sequences that are not part of the typical set but which have a higher probability than the typical sequences. Still, the aggregate probability of the typical set approaches $1$ for increasing $n$, and there are too few atypical sequences with high probability to have any significance.
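The convergence the AEP describes is easy to check numerically. Below is a minimal sketch (assuming Python with NumPy; the Bernoulli parameter and variable names are mine, not from the original question) that draws one long i.i.d. Bernoulli($p$) sequence and compares its per-symbol log-probability $-\frac{1}{n}\log_2 p(X_1,\ldots,X_n)$ with the entropy $H$:

```python
import numpy as np

rng = np.random.default_rng(0)
p = 0.3                                          # P(X_i = 1)
H = -p * np.log2(p) - (1 - p) * np.log2(1 - p)   # entropy of one symbol

n = 10_000
x = rng.random(n) < p                            # one i.i.d. Bernoulli(p) sequence
k = int(x.sum())                                 # number of ones observed

# probability of this exact sequence is p^k * (1-p)^(n-k);
# take -log2 and normalize by n to get bits per symbol
log_prob_rate = -(k * np.log2(p) + (n - k) * np.log2(1 - p)) / n

print(H, log_prob_rate)  # the two values agree closely for large n
```

By the AEP, the observed sequence is (with probability approaching 1) a typical one, so its probability is close to $2^{-nH}$; equivalently, the printed per-symbol rate is close to $H$.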
{ "domain": "dsp.stackexchange", "id": 8138, "tags": "digital-communications, channelcoding, information-theory, random, probability" }
Is a starter relay needed to supply the current to the starter motor solenoid?
Question: Considering the fact that the starter solenoid needs a maximum of 50 Amps initially (in order for the pinion gear to engage with the flywheel through the plunger and lever) and then some 8 Amps during cranking, do we need a relay to supply power to the solenoid? Is this relay of the electromechanical type or a solid-state one? Answer: Earlier cars did not use a relay, as the ignition switches were built sufficiently well to carry the 50A pull-in winding current. Newer cars have components like the ignition switch that are lighter and have more functions, and need a relay to drive the starter solenoid. This relay is usually in the fuse box in the engine compartment. We had a Commer van (petrol) whose ignition switch had the following positions: off, ignition, start. Another van (diesel) had off, on, heat, start... Most relays are the electromechanical type as they are cheap and easy to get hold of, but as cars gain more computer (well, microprocessor) control, some things are being replaced with solid-state components. Edit: here is a link kindly provided in a comment: example starting circuits It does not include all possibilities; early Landrovers just had a big spring-loaded push button that directly operated the starter - no relays or solenoids at all...
{ "domain": "engineering.stackexchange", "id": 2755, "tags": "automotive-engineering" }
Dirac Lagrangian under charge conjugation
Question: I am trying to understand why the Dirac Lagrangian is invariant under charge conjugation. The Dirac Lagrangian is: $$\mathcal{L} = i\bar{\psi}\gamma^\mu \partial_\mu\psi - m \bar{\psi}\psi $$ I know that under charge conjugation the following formulas are correct: $$ \hat{C} \, \bar{\psi}\psi \, \hat{C} = +\bar{\psi}\psi, \\ \hat{C} \, \bar{\psi}\gamma^\mu\psi \,\hat{C} = -\bar{\psi}\gamma^\mu\psi, \\ \hat{C} \, \partial_\mu \,\hat{C} = \partial_\mu \\ $$ Therefore: $$ \hat{C} \, \mathcal{L} \,\hat{C} = \hat{C} \, i\bar{\psi}\gamma^\mu \partial_\mu\psi \,\hat{C} - \hat{C} \, m \bar{\psi}\psi \,\hat{C} $$ Where $$ \hat{C} \, i\bar{\psi}\gamma^\mu \partial_\mu\psi \,\hat{C} = i\hat{C} \, \bar{\psi}\gamma^\mu \hat{C} \hat{C} \,\partial_\mu\psi \,\hat{C} = i\hat{C} \, \bar{\psi}\gamma^\mu \hat{C} \partial_\mu \hat{C} \,\psi \,\hat{C} $$ And because $\partial_\mu$ commutes with all $\gamma^\mu$ we use the same calculations as in $\hat{C} \, \bar{\psi}\gamma^\mu\psi \,\hat{C} = -\bar{\psi}\gamma^\mu\psi$ and get: $$\hat{C} \, i\bar{\psi}\gamma^\mu \partial_\mu\psi \,\hat{C} = -i\bar{\psi}\gamma^\mu \partial_\mu\psi$$ Therefore overall: $$ \hat{C} \, \mathcal{L} \,\hat{C} = -i\bar{\psi}\gamma^\mu \partial_\mu\psi - m \bar{\psi}\psi \neq \mathcal{L} $$ What am I missing? Answer: Note that charge conjugation interchanges the anticommuting $\bar \psi$ and $\psi$, so after conjugation by $\hat C$ the $\partial_\mu$ acts on the $\bar\psi$, and needs a sign-changing integration by parts. In detail: $$ \hat C \psi \hat C^{-1}= {\mathcal C}^{-1}\bar\psi^T\\ \hat C \bar \psi \hat C^{-1} = - \psi^T {\mathcal C} $$ where the ${\mathcal C}$ matrix obeys $$ {\mathcal C}\gamma^\mu {\mathcal C}^{-1}= -(\gamma^\mu)^T. $$
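Spelling out the step the answer refers to (a sketch, treating $\psi$, $\bar\psi$ as anticommuting fields and dropping the total-derivative boundary term): using the transformation rules above, $$ \hat C\,\big(i\bar\psi\gamma^\mu\partial_\mu\psi\big)\,\hat C^{-1} = i\,(-\psi^T{\mathcal C})\,\gamma^\mu\,\partial_\mu({\mathcal C}^{-1}\bar\psi^T) = i\,\psi^T(\gamma^\mu)^T\partial_\mu\bar\psi^T = -i\,\partial_\mu\bar\psi\,\gamma^\mu\psi, $$ where the second equality uses ${\mathcal C}\gamma^\mu{\mathcal C}^{-1}=-(\gamma^\mu)^T$ and the third uses anticommutation of the field components. Integrating by parts, $$ -i\,\partial_\mu\bar\psi\,\gamma^\mu\psi = i\,\bar\psi\gamma^\mu\partial_\mu\psi - i\,\partial_\mu\big(\bar\psi\gamma^\mu\psi\big), $$ so the kinetic term is invariant up to a total derivative; the mass term is invariant directly, since $\hat C\,(-m\bar\psi\psi)\,\hat C^{-1} = m\,\psi^T\bar\psi^T = -m\,\bar\psi\psi$.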
{ "domain": "physics.stackexchange", "id": 100283, "tags": "lagrangian-formalism, field-theory, fermions, dirac-equation, charge-conjugation" }
Electrostatic field and potential
Question: At different places, I have seen different signs in the relation between the electrostatic field and the potential: $E=-\nabla \phi$ and $E=\nabla \phi$. I am confused about which situations they describe. For example, when the field lines point to +y, which one should I use? Answer: The negative sign has physical meaning; it is related to $$W=-\int_a^b\vec{F}\cdot d\vec{\ell}$$ When $\vec{\nabla}\times\vec{E}=\vec{0}$, the electric force is conservative and you can define a potential energy from the force. Just as the electric field is a force per unit charge, the potential is a potential energy per unit charge; i.e. $$V(\vec{r})=-\int_\mathcal{O}^\vec{r}\vec{E}\cdot d\vec{\ell}$$ where $\mathcal{O}$ is some reference point. If you move a positive test charge against an electric field, you should be increasing its potential energy. Hence the negative sign. The integral above can be undone with a gradient; i.e. $\vec{E}=-\vec{\nabla}V$.
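As a quick symbolic check of $\vec{E}=-\vec{\nabla}V$ for the familiar point-charge case (a sketch assuming Python with SymPy; lumping $1/4\pi\epsilon_0$ into a single constant $k$ is my own shorthand):

```python
import sympy as sp

x, y, z, k, q = sp.symbols('x y z k q', positive=True)
r = sp.sqrt(x**2 + y**2 + z**2)

V = k * q / r                            # potential of a point charge at the origin
E = [-sp.diff(V, c) for c in (x, y, z)]  # E = -grad V, component by component

# each component is k*q*(coordinate)/r**3: a radially outward field
# falling off as 1/r**2, as expected for a positive charge
print(sp.simplify(E[0] - k * q * x / r**3))  # 0
```

With the opposite sign convention $E=\nabla\phi$, the field of a positive charge would point inward, which is why the minus sign is the standard choice.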
{ "domain": "physics.stackexchange", "id": 62195, "tags": "electromagnetism, electrostatics, electric-fields" }
What is the impact of climate change on tropospheric ozone production and vice versa?
Question: I know ground ozone reduces the photosynthesis rate of vegetation and could lead to increased respiration and hence more CO2 in the atmosphere, contributing to global warming. But what about the opposite direction? How does climate change affect the production of ozone? Does higher temperature mean higher ozone production? But there is also more water vapour in the atmosphere, which counteracts it? Answer: Since ozone is a greenhouse gas, I would say it would enhance warming. This article explains how regional climate may be impacted by ozone. This article says that while global ozone should decrease as a result of warming, ozone over populated areas should increase. Water vapor plays a minute role in the production of tropospheric ozone: it increases the amount of hydroxyl radical available for hydroperoxyl formation.
{ "domain": "earthscience.stackexchange", "id": 1060, "tags": "climate, climate-change, air-pollution, ozone" }
How was ChatGPT trained?
Question: I know that large language models like GPT-3 are trained simply to continue pieces of text that have been scraped from the web. But how was ChatGPT trained, which, while also having a good understanding of language, is not directly a language model, but a chatbot? Do we know anything about that? I presume that a lot of conversations were needed in order to train it. Did they simply scrape those conversations from the web, and where did they find such conversations in that case? Answer: The key ingredient is called Reinforcement Learning from Human Feedback (RLHF), that is, having humans rate the model's answers and using the feedback to guide the model's training. The official blog explains this fairly well.
{ "domain": "ai.stackexchange", "id": 3629, "tags": "natural-language-processing, chat-bots, training-datasets, language-model, chatgpt" }
Mapping values. Should a single use variable be declared as a variable at all?
Question: Where I have, let us say an object, that I will only use once, what is considered general best practice in terms of declaring it as var vs. simply putting the object directly into the method it will be used in? I'm thinking this is going to come down to "which do you think is more readable", but given my lack of experience thought I'd see if there was a consensus. This came about when looking at the following code: var aiPersonalities = model.aiPersonalities(); var newPersonalities = { qCasual: { ai_path: "/pa/q_casual", display_name: "!LOC:Q-Casual", metal_drain_check: 0.64, energy_drain_check: 0.77, metal_demand_check: 0.95, energy_demand_check: 0.92, micro_type: 0, go_for_the_kill: false, priority_scout_metal_spots: true, enable_commander_danger_responses: false, neural_data_mod: 2, adv_eco_mod: 0.5, adv_eco_mod_alone: 0, factory_build_delay_min: 0, factory_build_delay_max: 12, per_expansion_delay: 60, personality_tags: ["queller"], min_basic_fabbers: 10, min_advanced_fabbers: 3, }, // imagine another 11 objects of the same size here }; var baseline = aiPersonalities.Absurd; newPersonalities = _.mapValues( newPersonalities, function (personality, name) { var result = _.assign(_.clone(baseline), personality); result.name = name; return result; } ); _.assign(aiPersonalities, newPersonalities); model.aiPersonalities.valueHasMutated(); This could be written in a way which removes aiPersonalities, baseline, and avoids newPersonalities referencing itself: var newPersonalities = _.mapValues( { qCasual: { ai_path: "/pa/q_casual", display_name: "!LOC:Q-Casual", metal_drain_check: 0.64, energy_drain_check: 0.77, metal_demand_check: 0.95, energy_demand_check: 0.92, micro_type: 0, go_for_the_kill: false, priority_scout_metal_spots: true, enable_commander_danger_responses: false, neural_data_mod: 2, adv_eco_mod: 0.5, adv_eco_mod_alone: 0, factory_build_delay_min: 0, factory_build_delay_max: 12, per_expansion_delay: 60, personality_tags: ["queller"], 
min_basic_fabbers: 10, min_advanced_fabbers: 3, }, // imagine another 11 objects of the same size here }, function (personality, name) { var result = _.assign( _.clone(model.aiPersonalities().Absurd), personality ); result.name = name; return result; } ); _.assign(model.aiPersonalities(), newPersonalities); model.aiPersonalities.valueHasMutated(); I'm just interested in how people would approach this. My instinct is that aiPersonalities and baseline could be ditched, per approach 2, but that keeping the initial declaration of the newPersonalities object may make it easier to see what the _.mapValues bit is doing. There aren't any style guidelines or anything, this is a purely me project. Answer: In this specific scenario, I would definitely go with your first approach. Putting a large amount of data right in the middle of your logic makes it extremely difficult to follow the logic. To answer the general question of "should data associated with a specific function go inside its body or outside?" - there's never one right answer. I like to think of it as though there are various pressures going on. If your function is pretty cluttered but your module is pretty empty, then moving some stuff out of the function and into the module helps improve the readability of that function without sacrificing much. On the other hand, if there's already a whole lot of stuff in that module, and the function is really simple to follow then it might be worth it to move stuff into the function (or split the module up into multiple modules, if the containing folder isn't too cluttered). One important task of a programmer is to know when and how to shuffle clutter around to prevent any one spot from feeling too overwhelming.
{ "domain": "codereview.stackexchange", "id": 41408, "tags": "javascript" }
How to install Ros Groovy Beta 2
Question: Hey, I'm trying to do a fresh install of Groovy Beta 2 on Ubuntu 12.10, as documented on the wiki. sudo apt-get install ros-groovy-desktop The problem is that now there is no ros-groovy-desktop package, as there was in Beta 1. Is there any new package to install it, or do I have to install all packages manually? Thanks. Originally posted by Grega Pusnik on ROS Answers with karma: 460 on 2012-12-14 Post score: 1
{ "domain": "robotics.stackexchange", "id": 12108, "tags": "ros, groovy-beta" }
Is there a book that is best for learning gazebo?
Question: Hi, I want to learn Gazebo for TurtleBots. But I am having difficulty finding the resources and documentation on TurtleBots in Gazebo. Is there a particular book that I can buy to get in-depth knowledge of Gazebo? Thanks Originally posted by Vinh K on ROS Answers with karma: 110 on 2016-09-01 Post score: 0 Answer: I don't know about a book, but here you can learn in depth about Gazebo. Check the whole site. P.S.: This question is not relevant to ROS; visit this site. You probably will also get a better answer there. Originally posted by patrchri with karma: 354 on 2016-09-01 This answer was ACCEPTED on the original site Post score: 1
{ "domain": "robotics.stackexchange", "id": 25659, "tags": "gazebo, simulation" }
Is there a closed form expression for main-lobe width increase given a window?
Question: We know that when we window a signal, we increase the main-lobe width. Let 'main-lobe width' here be the null-to-null bandwidth of the main lobe. Let us furthermore say that the main-lobe width of a square window is '1'. What I would like to know is: given a particular symmetric window, is there a closed-form solution for how much the main lobe will increase by, relative to that of a square window? I can look up this percentage increase for various windows just fine, by the way. I am asking if there is a closed-form solution for what the percentage increase is, given an arbitrary symmetric window. Answer: If I understand you correctly, you are in fact asking for a closed-form solution for the position of the first zero of a given window function. Such a solution cannot exist in general, because what you would need is an algebraic solution to a polynomial equation, which does not exist for polynomials of degree 5 or higher (that's the Abel-Ruffini theorem).
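Even without a closed form, the increase is easy to measure numerically for any concrete window by locating the first spectral null of its zero-padded DFT. A sketch (assuming Python with NumPy; the threshold-based null search and the choice of a periodic Hann window are my own):

```python
import numpy as np

def first_null_bin(window, pad=8192):
    """Index of the first spectral null of a zero-padded window."""
    mag = np.abs(np.fft.fft(window, pad))
    # the magnitude only drops to ~0 at the true nulls, so a tiny
    # relative threshold finds the first one reliably
    return int(np.argmax(mag[1:] < 1e-9 * mag[0]) + 1)

N = 64
rect = np.ones(N)
hann = 0.5 - 0.5 * np.cos(2 * np.pi * np.arange(N) / N)  # periodic Hann

# null-to-null main-lobe width of Hann relative to the square window
ratio = first_null_bin(hann) / first_null_bin(rect)
print(ratio)  # 2.0: the Hann main lobe is twice as wide
```

The periodic Hann window's first null sits at twice the frequency of the rectangular window's, so its null-to-null main lobe is exactly a factor of 2 wider; for windows whose nulls do not land exactly on the FFT grid, a local-minimum search on a finer zero-padded grid gives the width to any desired precision.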
{ "domain": "dsp.stackexchange", "id": 1581, "tags": "filters, fourier-transform, window-functions" }
Counting the number of Hamiltonian cycles in cubic Hamiltonian graphs?
Question: It is $NP$-hard to find a constant factor approximation of longest cycle in cubic Hamiltonian graphs. Cubic Hamiltonian graphs have at least two Hamiltonian cycles. What are the best known upper bound and lower bound on the number of Hamiltonian cycles in cubic Hamiltonian graphs? Given a cubic Hamiltonian graph, What is the complexity of finding the number of Hamiltonian cycles? Is it #$P$-hard? Answer: Counting Hamiltonian circuits in a 3-regular Hamiltonian graph is #P-complete, as follows. Proof sketch. The membership in #P is trivial, so we will only show the #P-hardness. Section 3 of Liśkiewicz, Ogihara and Toda [LOT03] shows that counting Hamiltonian circuits in a 3-regular (and in fact planar at the same time) graph is #P-complete. Moreover, their reduction from #3SAT maps satisfiable 3CNF formula to Hamiltonian graphs. Therefore, you can reduce #3SAT to counting Hamiltonian circuits in a 3-regular Hamiltonian graph by first adding one trivial solution to a given 3CNF formula and then reducing it to counting Hamiltonian circuits by using the reduction in [LOT03]. QED. [LOT03] Maciej Liśkiewicz, Mitsunori Ogihara and Seinosuke Toda. The complexity of counting self-avoiding walks in subgraphs of two-dimensional grids and hypercubes. Theoretical Computer Science, 304(1–3):129–156, July 2003. http://dx.doi.org/10.1016/S0304-3975(03)00080-X
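For intuition on small instances, the count can be brute-forced in factorial time (consistent with the #P-hardness of the general problem). A sketch in Python; the example graphs ($K_4$ and $K_{3,3}$, both cubic and Hamiltonian) are my own illustrative choices:

```python
from itertools import permutations

def count_hamiltonian_cycles(n, edges):
    """Count undirected Hamiltonian cycles by brute force (O(n!) time)."""
    adj = {v: set() for v in range(n)}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    count = 0
    for perm in permutations(range(1, n)):  # fix vertex 0 as the starting point
        cycle = (0,) + perm
        if all(cycle[(i + 1) % n] in adj[cycle[i]] for i in range(n)):
            count += 1
    return count // 2                       # each cycle is counted in both directions

# K4: the smallest cubic Hamiltonian graph
k4 = [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)]
# K_{3,3}: cubic, bipartite, Hamiltonian
k33 = [(a, b) for a in (0, 1, 2) for b in (3, 4, 5)]

print(count_hamiltonian_cycles(4, k4))   # 3
print(count_hamiltonian_cycles(6, k33))  # 6
```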
{ "domain": "cstheory.stackexchange", "id": 303, "tags": "cc.complexity-theory, counting-complexity" }
Dynamically MVC controls
Question: I am working on a project where I need to build ASP.NET controls based on JSON data. I am using the method GetMeetingPollingQuestion below to create this model. Any suggestions/comments to improve the code further are greatly appreciated. Thanks. JSON [{"MeetingPollingQuestionId":1,"MeetingPollingQuestionType":"LongAnswerText","MeetingPollingId":1,"SequenceOrder":1,"MeetingPollingParts":[{"MeetingPollingPartsId":1,"Type":"Question","MeetingPollingQuestionId":1,"MeetingPollingPartsValues":[{"MeetingPollingPartsValuesId":1,"Type":"label","QuestionValue":"Do you have additional comments or concerns with these changes to the Guidelines?","FileManagerId":0,"FileName":null,"FileData":null,"FileType":null}]}]},{"MeetingPollingQuestionId":12,"MeetingPollingQuestionType":"MultipleChoice","MeetingPollingId":1,"SequenceOrder":2,"MeetingPollingParts":[{"MeetingPollingPartsId":35,"Type":"Question","MeetingPollingQuestionId":12,"MeetingPollingPartsValues":[{"MeetingPollingPartsValuesId":63,"Type":"label","QuestionValue":"Do you approve the following statement to be added for all current rituximab indications in the Guidelines: “An FDA-approved biosimilar is an appropriate substitute for rituximab”? 
","FileManagerId":0,"FileName":null,"FileData":null,"FileType":null}]},{"MeetingPollingPartsId":36,"Type":"Image","MeetingPollingQuestionId":12,"MeetingPollingPartsValues":[{"MeetingPollingPartsValuesId":64,"Type":"FileManagerId","QuestionValue":null,"FileManagerId":14716,"FileName":"B-cell_1.2022_panel vote_Page_02 - Copy.png","FileData":"iVBORw.....","FileType":"image/png"}]},{"MeetingPollingPartsId":37,"Type":"Answers","MeetingPollingQuestionId":12,"MeetingPollingPartsValues":[{"MeetingPollingPartsValuesId":65,"Type":"Answers","QuestionValue":"Yes","FileManagerId":0,"FileName":null,"FileData":null,"FileType":null},{"MeetingPollingPartsValuesId":66,"Type":"Answers","QuestionValue":"No","FileManagerId":0,"FileName":null,"FileData":null,"FileType":null},{"MeetingPollingPartsValuesId":67,"Type":"Answers","QuestionValue":"Abstain","FileManagerId":0,"FileName":null,"FileData":null,"FileType":null}]}]}] Method public IEnumerable<MeetingPollingQuestionViewModel> GetMeetingPollingQuestion() { List<MeetingPollingQuestionViewModel> vm = new List<MeetingPollingQuestionViewModel>(); //JSON Data parse into class Model and set to object ListofMeetingPollingQuestion foreach (MeetingPollingQuestion MeetingPollingQuestion in ListofMeetingPollingQuestion) { int SequenceOrder = MeetingPollingQuestion.SequenceOrder; switch (MeetingPollingQuestion.MeetingPollingQuestionType) { case "LongAnswerText": MeetingPollingQuestionViewModel LongAnswerText = new MeetingPollingQuestionViewModel(); LongAnswerText.QuestionType = "LongAnswerText"; var MeetingPollingParts = MeetingPollingQuestion.MeetingPollingParts; var LongAnswerQuestion = MeetingPollingParts.FirstOrDefault(part => part.Type == "Question"); var labelControl = LongAnswerQuestion.MeetingPollingPartsValues.FirstOrDefault(part => part.Type == "label"); LongAnswerText.labelControl = $"{SequenceOrder}. 
{labelControl.QuestionValue}'"; LongAnswerText.textboxControl = $"textboxfor_{labelControl.MeetingPollingPartsValuesId}"; vm.Add(LongAnswerText); break; case "MultipleChoice": MeetingPollingQuestionViewModel MultipleChoice = new MeetingPollingQuestionViewModel(); MultipleChoice.QuestionType = "MultipleChoice"; var MultipleChoiceMeetingPollingParts = MeetingPollingQuestion.MeetingPollingParts; var MultipleChoiceQuestion = MultipleChoiceMeetingPollingParts.FirstOrDefault(part => part.Type == "Question"); var MultipleChoicelabelControl = MultipleChoiceQuestion.MeetingPollingPartsValues.FirstOrDefault(part => part.Type == "label"); MultipleChoice.labelControl = $"{SequenceOrder}. {MultipleChoicelabelControl.QuestionValue}'"; var MultipleChoiceImage = MultipleChoiceMeetingPollingParts.FirstOrDefault(part => part.Type == "Image"); var MultipleChoiceImageControl = MultipleChoiceImage.MeetingPollingPartsValues.FirstOrDefault(part => part.Type == "FileManagerId"); if (MultipleChoiceImageControl.FileManagerId != 0){ MultipleChoice.imageSRC = MultipleChoiceImageControl.FileData; } var MultipleChoiceAnswers = MultipleChoiceMeetingPollingParts.FirstOrDefault(part => part.Type == "Answers"); var MultipleChoiceAnswersControl = MultipleChoiceAnswers.MeetingPollingPartsValues.ToList(); List<CBRBControl> RadioButtonlist = new List<CBRBControl>(); foreach (var item in MultipleChoiceAnswersControl) { CBRBControl RadioButton = new CBRBControl(); RadioButton.Value = item.MeetingPollingPartsValuesId.ToString(); RadioButton.Label = item.QuestionValue; RadioButtonlist.Add(RadioButton); } multipleChoice.RadioButtonName = $"radioList_{MultipleChoiceAnswers.MeetingPollingQuestionId}"; multipleChoice.RadioButtonList = RadioButtonlist; vm.Add(MultipleChoice); break; } } return vm; } } Model public class MeetingPollingQuestionViewModel { public string QuestionType { get; set; } public string labelControl { get; set; } public string textboxControl { get; set; } public string imageControl { get; 
set; } public byte[] imageSRC { get; set; } public string RadioButtonName { get; set; } public List<CBRBControl> RadioButtonList { get; set; } } public class CBRBControl { public string Label { get; set; } public string Value { get; set; } } Answer: General advice: look at this line List<MeetingPollingQuestionViewModel> vm = new List<MeetingPollingQuestionViewModel>(); This variable name vm should be plural because it's a list: vms. And you can use the simplified new expression List<MeetingPollingQuestionViewModel> vms = new(); Your foreach loop could be more readable if you use the var keyword and a short variable name foreach (var Question in ListofMeetingPollingQuestion) it's more readable than foreach (MeetingPollingQuestion MeetingPollingQuestion in ListofMeetingPollingQuestion) Redundancy elimination: Look at this code; for each case it's nearly identical foreach (MeetingPollingQuestion MeetingPollingQuestion in ListofMeetingPollingQuestion) { int SequenceOrder = MeetingPollingQuestion.SequenceOrder; switch (MeetingPollingQuestion.MeetingPollingQuestionType) { case "LongAnswerText": MeetingPollingQuestionViewModel LongAnswerText = new MeetingPollingQuestionViewModel(); LongAnswerText.QuestionType = "LongAnswerText"; var MeetingPollingParts = MeetingPollingQuestion.MeetingPollingParts; var LongAnswerQuestion = MeetingPollingParts.FirstOrDefault(part => part.Type == "Question"); var labelControl = LongAnswerQuestion.MeetingPollingPartsValues.FirstOrDefault(part => part.Type == "label"); LongAnswerText.labelControl = $"{SequenceOrder}. {labelControl.QuestionValue}'"; //...
code vm.Add(LongAnswerText); break; case "MultipleChoice": MeetingPollingQuestionViewModel MultipleChoice = new MeetingPollingQuestionViewModel(); MultipleChoice.QuestionType = "MultipleChoice"; var MultipleChoiceMeetingPollingParts = MeetingPollingQuestion.MeetingPollingParts; var MultipleChoiceQuestion = MultipleChoiceMeetingPollingParts.FirstOrDefault(part => part.Type == "Question"); var MultipleChoicelabelControl = MultipleChoiceQuestion.MeetingPollingPartsValues.FirstOrDefault(part => part.Type == "label"); MultipleChoice.labelControl = $"{SequenceOrder}. {MultipleChoicelabelControl.QuestionValue}'"; //... code vm.Add(MultipleChoice); break; } } In both cases you create a new view model, change the same property, create the same variables with different names, and at the end add this view model to your list foreach (var Question in ListofMeetingPollingQuestion) { int SequenceOrder = Question.SequenceOrder; MeetingPollingQuestionViewModel vm = new(); vm.QuestionType = Question.MeetingPollingQuestionType; var MeetingPollingParts = Question.MeetingPollingParts; var QuestionObj = MeetingPollingParts.FirstOrDefault(part => part.Type == "Question"); var label = QuestionObj.MeetingPollingPartsValues.FirstOrDefault(part => part.Type == "label"); vm.labelControl = $"{SequenceOrder}. {label.QuestionValue}'"; switch (Question.MeetingPollingQuestionType) { case "LongAnswerText": //... code break; case "MultipleChoice": //... code break; } vms.Add(vm); } Bad type: your code uses string literals such as ("LongAnswerText", "MultipleChoice", "FileManagerId", "label", "Question", ...
etc.). You should avoid this technique because these strings' values are not checked at compile time. Read more about the reasons why you should not use magic strings. Try using enums or constant values instead. For example, define this enum public enum QuestionType { LongAnswerText, MultipleChoice } then change the type of MeetingPollingQuestionType to be QuestionType, giving your code compile-time checking as follows switch (Question.MeetingPollingQuestionType) { case QuestionType.LongAnswerText: //... code break; case QuestionType.MultipleChoice: //... code break; }
{ "domain": "codereview.stackexchange", "id": 44055, "tags": "c#, asp.net-mvc" }
Hacker rank - Left rotation - PHP code feedback for Timeout
Question: Here is the problem description: https://www.hackerrank.com/challenges/ctci-array-left-rotation A left rotation operation on an array of size \$n\$ shifts each of the array's elements 1 unit to the left. For example, if 2 left rotations are performed on array \$[1,2,3,4,5]\$, then the array would become \$[3,4,5,1,2]\$. Given an array of \$n\$ integers and a number, \$d\$, perform \$d\$ left rotations on the array. Then print the updated array as a single line of space-separated integers. My code passes all test cases but is stuck on the Hacker Rank timeout. I want to know which part takes too long to execute, in order to optimize my code. <?php function rotateOnce($a) { if ($a[0] >= 1 && $a[0] <= 1000000) { $left = array_shift($a); $a[] = $left; } return $a; } function checkConstrains($d, $n) { if ($d >= 1 && $d <= $n && $n >= 1 && $n <= 10 ^ 5) return true; return false; } // Complete the rotLeft function below. function rotLeft($a, $d) { global $n; if (checkConstrains($d, $n)) { for ($i = 0; $i < $d; $i++) { $a = rotateOnce($a); } } return $a; } $fptr = fopen(getenv("OUTPUT_PATH"), "w"); $stdin = fopen("php://stdin", "r"); fscanf($stdin, "%[^\n]", $nd_temp); $nd = explode(' ', $nd_temp); $n = intval($nd[0]); $d = intval($nd[1]); fscanf($stdin, "%[^\n]", $a_temp); $a = array_map('intval', preg_split('/ /', $a_temp, -1, PREG_SPLIT_NO_EMPTY)); $result = rotLeft($a, $d); fwrite($fptr, implode(" ", $result) . "\n"); fclose($stdin); fclose($fptr); Answer: Efficiency You do a lot of manual shifting and pushing with your array. This is ok for small inputs, but as soon as the distance $d and the size of your array $n grow, this becomes inefficient. The main overhead, however, comes from calling rotateOnce. Parameters are passed by value by default. That means the array is copied every time the function is called.
You could pass it by reference: function rotateOnce(&$a) {} or simply include the two lines in your original function: function rotLeft($a, $d) { $n = count($a); for ($i = 0; $i < $d; $i++) { $left = array_shift($a); $a[] = $left; } return $a; } I would guess that constraints on HackerRank mean that you can expect values in that range and don't need to test inputs yourself. This is now significantly faster, but still slow, especially for large distances $d. That being said, I would take a look at PHP's internal array functions and think of a way to use them in combination to increase performance. My naive approach would be something like this: if d is 0 or the same as the array's size, return the original; else split the array into two chunks at index $d, combine both arrays and return the result. The function could look like this. Don't hover if you don't want to get spoiled: function rotLeft($a, $d) { if (count($a) == $d || $d === 0) { return $a; } $chunk1 = array_slice($a, 0, $d); $chunk2 = array_slice($a, $d); return array_merge($chunk2, $chunk1); } For this input: $a = range(1, 100000); $n = count($a); $d = 500; … I've measured these times on my local machine*: original: 1.8130s optimized: 0.4294s rewritten: 0.0054s Exponential expression vs. bitwise operators Your program has a flaw. It won't calculate the result correctly for large array sizes, because of this: $n <= 10 ^ 5 ^ is bitwise XOR and not pow: pow(10, 5); 10 ** 5; Try to avoid globals I can see that n is an input parameter that is not part of the function's given signature. However, I would try to avoid globals and get the value manually, if needed: $n = count($a); You can read more about this here: PHP global in functions Are global variables in PHP considered bad practice? If so, why? * macOS 10.13, i7 2.5 GHz, 16GB RAM, MAMP PHP 7.2.1
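For comparison, the slice-and-recombine idea carries over directly to other languages; here is a sketch in Python, where list slicing plays the role of array_slice/array_merge:

```python
def rot_left(a, d):
    """Left-rotate a list by d positions in O(n) using slicing."""
    d %= len(a)              # handles d == len(a), so no special case is needed
    return a[d:] + a[:d]

print(rot_left([1, 2, 3, 4, 5], 2))  # [3, 4, 5, 1, 2]
```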
{ "domain": "codereview.stackexchange", "id": 31750, "tags": "php, programming-challenge, time-limit-exceeded" }
How to understand the "infinite distance" part of the definition of gravitational potential?
Question: Question. In my textbook, gravitational potential is defined as the amount of work done per unit mass to move an object from an infinite distance to that point in the field. I cannot comprehend the "infinite distance" part. What does it mean? My Attempt. I get the idea that if, say, I want to leave Earth I would have to possess enough kinetic energy to overcome the gravitational pull, so if I am in space at a distance $r$ from the Earth's center, the gravitational potential at my position would be some positive number since external energy has been put in. But how do I know how much energy it would take for me to come from an infinite distance away to where I am now? I am confused. Comment. Any kind of help would be appreciated. Thank you! Answer: In the physical sense, "infinite distance" means the separation of masses beyond which there is exactly zero gravitational force. This is "zero" in the mathematical sense - no real-life distance will ever truly be "infinite distance", as, if you take Newton's Law of Gravitation to be true, there will always be some tiny force present. The reason that you are in a position with a finite potential even though potential starts at zero an infinite distance away is because, mathematically, if you take the idea that work done, $W$, is the area under the Force-Displacement graph ($F$-$r$ graph), you find that the corresponding integral (mathematical expression of that area) evaluates to a finite potential, despite the $F$-$r$ graph literally going on forever up to $r = +\infty$. Essentially, a mathematical reason. $$GPE(R) = \int_{r=R}^{+\infty} -\frac{GMm}{r^2} \,dr$$ $$GPE(R) = \left[ \frac{GMm}{r} \right]_{R}^{+\infty}$$ $$GPE(R) = \left( \frac{GMm}{+\infty} \right) - \left( \frac{GMm}{R} \right)$$ $$GPE(R) = 0 - \left( \frac{GMm}{R} \right)$$ $$GPE(R) = -\frac{GMm}{R}$$ An infinitely long region under a graph has a finite area - a very unintuitive mathematical truth. 
Also, this is incorrect: the gravitational potential at my position would be some positive number since external energy has been put in Gravitational potential energy at any distance except infinity is negative, because you need to do work against a gravitational field to add to that potential energy, such that you might bring it up to 0. This is because you want to satisfy the conservation of energy - as you come to earth from an infinite distance away, the kinetic energy you gain is taken away from the gravitational potential energy, as the gravitational field is like an agent using your gravitational potential energy as a reservoir of energy to do work on you and imbue you with some kinetic energy. I get the idea that if, say, I want to leave Earth I would have to possess enough kinetic energy to overcome the gravitational pull Indeed, you have kinetic energy - as you move away from the earth, the field does work on you with a force opposite to your escaping motion, converting that kinetic energy to gravitational potential energy. It follows, then, that you need enough kinetic energy in reserve to "top up" your GPE to zero for you to "escape" the field in the physical sense - even though you cannot "escape" a field in the mathematical sense.
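The improper integral in the answer can be verified symbolically. A sketch (assuming Python with SymPy; declaring the symbols positive lets the limit at infinity evaluate cleanly):

```python
import sympy as sp

G, M, m, R, r = sp.symbols('G M m R r', positive=True)

# work done by the gravitational force on a mass brought in from infinity to R
gpe = sp.integrate(-G * M * m / r**2, (r, R, sp.oo))
print(gpe)  # -G*M*m/R: finite, despite the infinite integration range
```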
{ "domain": "physics.stackexchange", "id": 75036, "tags": "gravity, potential" }
ROS Answers SE migration: Pick function
Question: Hi, Please let me know what the difference is between: the pick() function, and a simple scenario in which I use attach and detach on the object after reaching the defined pick and place positions respectively. How is pick different from a simple attach? Also, please let me know what the most likely (grasp/grasps) argument in the pick function would be. Where should I get detailed information about the arguments of the pick function? Thanks! - I am working on an ABB IRB4600 robot / ROS Indigo and intend to perform a simple pick and place task using MoveIt! Originally posted by Muneeb on ROS Answers with karma: 31 on 2017-05-02 Post score: 0 Answer: Did you have a look at http://answers.ros.org/question/244831/moveit-pick-too-complicated/ ? Originally posted by v4hn with karma: 2950 on 2017-05-08 This answer was ACCEPTED on the original site Post score: 0
{ "domain": "robotics.stackexchange", "id": 27779, "tags": "moveit" }
Behaviour of microphone with high volume input
Question: I have a microphone which produces unexpected results when exposed to very loud input. The mic is a cheap electret headset mic and I am recording into a DAW. For very loud, noisy signals (e.g. blowing on the mic) I get an unexpected waveform. In the MATLAB plot below, you can see a snippet of about 0.5 seconds of audio which demonstrates what I'm talking about. The signal is clearly clipping at -1.0, but is not getting near the other end of its range at 1.0. In the same signal, the clipping behaviour can then reverse and start clipping almost constantly at 1.0 and not reaching -1.0. It's as though there has been some sort of wandering, fluctuating DC offset introduced. Can anyone shed some light on what is happening? Answer: An overview of the analog signal path in a system such as yours: Clipping can take place:
1. In the microphone, as mechanical soft clipping, distorting the waveform rather than clipping exactly at some value.
2. In the built-in impedance converter field effect transistor (FET) of the electret microphone capsule. (I think this is very much like hard clipping.)
3. In a separate preamplifier, as analog hard clipping.
4. At the analog-to-digital converter of the audio interface, as hard digital clipping.
5. On the digital signal, if any digital processing is done on integer data, as hard digital clipping.
Because the clipping is hard at numerical values -1 and +1, digital clipping must be taking place. Because it looks like there is hard clipping also at other numerical values than -1 and +1, there is probably hard clipping taking place at more than one stage, separated by an alternating current (AC) coupling. Blowing directly into the microphone is quite an extreme disturbance, so there is probably also the 1st and the 2nd kind of clipping going on. 
There are a few places in your system where there can be an AC coupling or equivalent:
1. In the microphone, as the pressure in the back of the omnidirectional closed-back capsule slowly equalizes with that of the outside world due to a (by design) leaky seal.
2. On the analog signal. AC coupling may be built into the plug-in power supply to the microphone, into the preamp, and into the audio interface.
3. Digitally, in the audio interface or in the digital audio workstation (DAW). Check the settings!
If you blow into the microphone long enough then the AC coupling begins to see the average pressure level of that as direct current (DC), centering the signal to zero during the disturbance. When you stop blowing it takes some time for the coupling to adapt to the normal atmospheric pressure. If you want the electronics to recover faster after a disturbance, try adding a capacitor in series with the signal output of the plug-in power supply as yet another stage of AC coupling. The capacitor should be small enough to make the time constant of the resulting resistor–capacitor (RC) filter short enough.
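The last point (that the recovery time is set by the RC time constant of the coupling) can be illustrated numerically. The component values below are invented for illustration; they are not taken from the answer:

```python
import math

R = 10e3   # ohms: input impedance seen by the series coupling capacitor (assumed)
C = 10e-6  # farads: series AC-coupling capacitor (assumed)

tau = R * C  # first-order RC time constant, in seconds

def residual_offset(step, t):
    """DC offset remaining t seconds after a pressure step, for a first-order RC high-pass."""
    return step * math.exp(-t / tau)

print(tau)                            # 0.1 s
print(residual_offset(1.0, 5 * tau))  # under 1% of the step remains after ~5 tau
```

Halving C halves tau, so the signal re-centres twice as fast after you stop blowing, at the price of rolling off more low-frequency content.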
{ "domain": "dsp.stackexchange", "id": 3371, "tags": "audio" }
Simple expense tracker
Question: I have posted below a small project of mine in AS3 regarding a super simple Expense Tracker. I finally made it basically functional, and I learnt A LOT in the way. But even I can tell that my design is horrible. If anyone was kind enough as to mention my mistakes to me and/or give me some advice, I would be extremely thankful! This is basically my main design: DocumentClass calls mainScreen and balanceField. MainScreen calls mainButton. MainButton dispatches Event to mainScreen when clicked to make mainScreen invisible. I repeat the same process with secondaryScreen and secondaryButton (using the same event as trigger). After I click on secondaryButton, InputScreen is called. InputScreen handles the balanceField from within, using static vars from DocumentClass. okButton in inputScreen dispatches Event, same as before to make this invisible and the mainScreen visible again. Here are some pieces of my code: DocumentClass: package { import flash.display.MovieClip; import flash.events.Event; public class DocumentClass extends MovieClip { public static var mainScreen:MainScreen; public static var balanceField:Balance; public static var actual:Number = 0; public function DocumentClass() { mainScreen = new MainScreen(); mainScreen.x = 0; mainScreen.y = 0; addChild(mainScreen); balanceField = new Balance(); balanceField.x = 50; balanceField.y = 0; addChild(balanceField); } } } This is my main class. 
Following are 2 subsequent classes: MainScreen: package { import flash.display.MovieClip; import flash.display.SimpleButton; import flash.events.Event; public class MainScreen extends MovieClip { public var expenseButton:MainButton; public var incomeButton:MainButton; public function MainScreen() { expenseButton = new MainButton('Expense'); expenseButton.x = 200; expenseButton.y = 200; addChild(expenseButton); expenseButton.addEventListener( ButtonEvent.DEAD, onButtonClick); incomeButton = new MainButton('Income'); incomeButton.x = 200; incomeButton.y = 400; addChild(incomeButton); incomeButton.addEventListener( ButtonEvent.DEAD, onButtonClick); } public function onButtonClick (buttonEvent:ButtonEvent ):void { this.visible = false; } } } I am gonna skip some stuff like the button functions: MainButton: package { import flash.events.MouseEvent; import flash.display.MovieClip; import flash.events.Event; public class MainButton extends MovieClip { public var nextScreen:SecondaryScreen; public var category:String; public function MainButton(categ:String) { this.addEventListener(MouseEvent.ROLL_OVER, onRollOverHandler); this.addEventListener(MouseEvent.ROLL_OUT, onRollOutHandler); this.addEventListener(MouseEvent.CLICK, onClickHandler); this.addEventListener(MouseEvent.MOUSE_DOWN, onPressHandler); this.addEventListener(MouseEvent.MOUSE_UP, onReleaseHandler); buttonText.text = (categ); this.buttonMode = true; this.mouseChildren = false; this.useHandCursor = true; category=categ; } function onReleaseHandler(myEvent:MouseEvent) { gotoAndStop(2); if (category == 'Expense') { nextScreen=new SecondaryScreen(category); } if (category == 'Income') { nextScreen=new SecondaryScreen(category); } nextScreen.x = 0; nextScreen.y = 0; stage.addChild(nextScreen); dispatchEvent( new ButtonEvent(ButtonEvent.DEAD)); } } } And the InputScreen: package { import flash.display.MovieClip; import flash.text.*; import DocumentClass; public class InputScreen extends MovieClip { public var 
okButton:OkButton; public var wvalue:String; public var gvalue:Number; public var category:String; public var total:Number = 0; public function InputScreen(categ:String,type:String) { inputField.border = true; okButton = new OkButton("Ok"); okButton.x = 200; okButton.y = 350; addChild(okButton); okButton.addEventListener( ButtonEvent.DEAD, onButtonClick); categField.text = (categ); typeField.text = ("Type: " + type); category = categ; } public function onButtonClick (buttonEvent:ButtonEvent ):void { wvalue = inputField.text; gvalue = Number(wvalue); if (category == "Expense") { gvalue = -1*gvalue; } total = total + gvalue; DocumentClass.actual += total; DocumentClass.balanceField.balanceText.text = ("Wallet: " + DocumentClass.actual + "€"); DocumentClass.mainScreen.visible = true; } } } Finally, my Event and balanceField: package { import flash.events.Event; public class ButtonEvent extends Event { public static const DEAD:String = "dead"; public function ButtonEvent( type:String ) { super( type ); } } } package { import flash.display.MovieClip; import flash.text.TextField; public class Balance extends MovieClip { public var balance:Number = 0; public function Balance() { balanceText.text = ("Wallet: " + balance + "€"); } } } In case anyone needs the rest of the code Answer: You said your structure is horrible, but it's not really horrible! I suggest you skim this (a bit old, but relevant) and try to read up on coding patterns (especially AS3 related) so you have a wider knowledge of what is possible. So, these lines are unused: this.addEventListener(MouseEvent.ROLL_OVER, onRollOverHandler); this.addEventListener(MouseEvent.ROLL_OUT, onRollOutHandler); this.addEventListener(MouseEvent.CLICK, onClickHandler); this.addEventListener(MouseEvent.MOUSE_DOWN, onPressHandler); Unless you plan on having features for each of these states, you can take them out. Right after that, you have: buttonText.text = (categ); The parentheses around categ aren't necessary. 
And I'm not sure where buttonText is created, but it looks like it just appears! In the handler below that, the argument myEvent isn't very concise. It's a simple name, which could be more specific, or less possessive (e, evt, handlingEvent, etc.). Directly below that, you have: if (category == 'Expense') { nextScreen=new SecondaryScreen(category); } if (category == 'Income') { nextScreen=new SecondaryScreen(category); } This could either be turned into a switch(category) if you have plans on changing the action a category makes. However, neither of these categories requires special treatment in these ifs, so why not just take out the conditionals completely! Leave it as a one line assignment. In the class of InputScreen, a variable wvalue is created and used in this context: wvalue = inputField.text; gvalue = Number(wvalue); Consider removing the extra variable and simplifying it to: gvalue = Number(inputField.text); To be consistent, I suggest separating ButtonEvent and Balance into two files. Especially if you decide to add controlling code in the Balance class. Naming I touched on this earlier, but it's come back! The names you've chosen for a few things are very misleading and confusing. First off, DocumentClass really doesn't need Class appended on the end. We know it's going to be a class. However, Document is a bit broad, try finding a name that fits the situation better. Is public static var balanceField:Balance; an account balance or a field? I think you should change one of those names (variable or class). The Balance class is a bit misleading. I want to believe that a balance class should be handing the business of the deposits and withdraws. Perhaps a better name for that class would be BalanceButton? In MainButton(categ:String), categ could really just be category. It doesn't hurt. In InputScreen, the variable gvalue is used. I can't even tell what it stands for! What is the "g"? Same class, the global total is made, but only used in the one function. 
Unless it's used outside of it, perhaps reduce its scope so as not to pollute your global scope. In fact, reassess the variables that are currently global, and see if they actually do need to be global. Lastly, I'm not sure what ButtonEvent's DEAD means. Usually events have verbs as constants to use.
{ "domain": "codereview.stackexchange", "id": 7989, "tags": "actionscript-3, actionscript" }
How to use pcl::registration::estimateRigidTransformation?
Question: I'm trying to register two pointclouds gathered with Kinect. The two pointclouds are of type PointXYZRGB and are pre-aligned. From the signature of the function I assumed the following usage: pcl::PointCloud<pcl::PointXYZRGB> A, B; // imagine these to be the input clouds, where A should be aligned to B pcl::registration::TransformationEstimation<pcl::PointXYZRGB,pcl::PointXYZRGB> MyEstimation; Eigen::Matrix4f T; // the result transformation MyEstimation.estimateRigidTransformation( A, B, T ); This crashes in Ubuntu 10.10 x64 using diamondback with an assertion in Eigen: Eigen::ProductBase<Derived, Lhs, Rhs>::ProductBase(const Lhs&, const Rhs&) [with Derived = Eigen::GeneralProduct<Eigen::Matrix<float, -0x00000000000000001, -0x00000000000000001, 0, -0x00000000000000001, -0x00000000000000001>, Eigen::Transpose<Eigen::Matrix<float, -0x00000000000000001, -0x00000000000000001, 0, -0x00000000000000001, -0x00000000000000001> >, 5>, Lhs = Eigen::Matrix<float, -0x00000000000000001, -0x00000000000000001, 0, -0x00000000000000001, -0x00000000000000001>, Rhs = Eigen::Transpose<Eigen::Matrix<float, -0x00000000000000001, -0x00000000000000001, 0, -0x00000000000000001, -0x00000000000000001> >]: Assertion `lhs.cols() == rhs.rows() && "invalid matrix product" && "if you wanted a coeff-wise or a dot product use the respective explicit functions"' failed. So I'm not sure what's going wrong? Am I using the registration wrong? The pointclouds are okay, as I can concatenate and display them fine without this fine registration. Originally posted by LiMuBei on ROS Answers with karma: 261 on 2011-03-02 Post score: 0 Answer: This sounds like you should submit a ticket. There's a link to submit tickets on the preception_pcl wiki page Originally posted by tfoote with karma: 58457 on 2011-03-02 This answer was ACCEPTED on the original site Post score: 0
{ "domain": "robotics.stackexchange", "id": 4918, "tags": "pcl, pcl-ros" }
What is synthetic magnetism?
Question: I was reading about synthetic magnetism and wanted to confirm that my understanding is correct. What I understand is that synthetic magnetism is just a fancy name for a method of making a charge-neutral particle act like it is in a magnetic field. A charged particle in a magnetic field acquires a geometric phase, so if by any method a neutral particle is able to acquire this geometric phase, then that method is said to create a synthetic magnetic field. Answer: Yes, more or less your description is correct. Any artificial method or naturally existing quasiparticle (e.g. in spin ice) that emulates a Dirac monopole falls under this category. Of course, these synthetic monopoles are never completely isolated, having a very short Dirac string attached to a corresponding antimonopole (i.e. a monopole of opposite relative magnetic polarity); thus they are really not natural magnetic monopoles but instead very loosely correlated magnetic dipoles, and such systems are sometimes referred to as "strongly correlated electron systems". Naturally occurring isolated magnetic monopoles (i.e. with an infinite Dirac string) have not been discovered to date, and many doubt their existence. I myself believe that magnetism and the magnetic field are purely dipole phenomena, and therefore isolated magnetic charge cannot exist in nature; even if there were such a thing, it would have nothing magnetic about it and would produce no magnetic field around it, whereas an electric charge like the electron, which is a monopole, creates an electric field and potential around it even when completely isolated.
{ "domain": "physics.stackexchange", "id": 74875, "tags": "quantum-mechanics, electromagnetism, condensed-matter, gauge-theory" }
A question on using Fourier decomposition to solve the Klein-Gordon equation
Question: Given the Klein-Gordon equation $$\left(\Box +m^{2}\right)\phi(t,\mathbf{x})=0$$ it is possible to find a solution $\phi(t,\mathbf{x})$ by carrying out a Fourier decomposition of the scalar field $\phi$ at a given instant in time $t$, such that $$\phi(t,\mathbf{x})=\int\frac{d^{3}k}{(2\pi)^{3}}\tilde{\phi}\left(t,\mathbf{k}\right)e^{i\mathbf{k}\cdot\mathbf{x}}$$ where $\tilde{\phi}\left(t,\mathbf{k}\right)$ are the Fourier modes of the corresponding field $\phi(t,\mathbf{x})$. From this we can calculate the required evolution of the Fourier modes $\tilde{\phi}\left(t,\mathbf{k}\right)$ such that at each instant in time $t$, $\phi(t,\mathbf{x})$ is a solution to the Klein-Gordon equation. This can be done, following on from the above, as follows: $$\left(\Box +m^{2}\right)\phi(t,\mathbf{x})=\left(\Box +m^{2}\right)\int\frac{d^{3}k}{(2\pi)^{3}}\tilde{\phi}\left(t,\mathbf{k}\right)e^{i\mathbf{k}\cdot\mathbf{x}}\qquad\qquad\qquad\qquad\qquad\qquad\;\;\,\\ =\int\frac{d^{3}k}{(2\pi)^{3}}\left[\left(\partial^{2}_{t}+\mathbf{k}^{2}+m^{2}\right)\tilde{\phi}\left(t,\mathbf{k}\right)\right]e^{i\mathbf{k}\cdot\mathbf{x}} =0\\ \Rightarrow \left(\partial^{2}_{t}+\mathbf{k}^{2}+m^{2}\right)\tilde{\phi}\left(t,\mathbf{k}\right)=0. \qquad\qquad\qquad$$ Question: This is all well and good, but why is it that in this case we only perform a Fourier decomposition of the spatial part only, whereas in other cases, such as for finding solutions for propagators (Green's functions), we perform a Fourier decomposition over all 4 spacetime coordinates? [e.g. $$G(x-y)=\int\frac{d^{4}x}{(2\pi)^{4}}\tilde{G}\left(t,\mathbf{k}\right)e^{ik\cdot x}$$ (where in this case $k\cdot x\equiv k_{\mu}x^{\mu}$).] Is it simply because when we construct the appropriate QFT for a scalar field we do so in the Heisenberg picture, or is there something else to it? Apologies if this is a really dumb question but it's really been bugging me for a while and I want to get the reasoning straight in my mind! 
Answer: Notation: $x=(t,\boldsymbol x)$; $k=(k_0,\boldsymbol k)$; $kx=k_0t-\boldsymbol k\cdot\boldsymbol x$; $\mathrm dx=\mathrm dt\;\mathrm d^3\boldsymbol x$; etc. You can in principle perform the Fourier decomposition on both space and time variables, but to do so you'll need several properties of the Dirac delta function: The first one is: let $\xi\in\mathbb R$; then $$ \delta(f(\xi))=\sum_{f(\xi_i)=0} \frac{\delta(\xi-\xi_i)}{|f'(\xi_i)|} \tag{1} $$ where the sum is over every $\xi_i$ such that $f(\xi_i)=0$, i.e., over the roots of $f(\xi)$. The second one is that, given a known function $g(\xi)$, the distributional solution of $g(\xi)f(\xi)=0$ is $f(\xi)=h(\xi)\delta(g(\xi))$ for an arbitrary function $h(\xi)$. If you believe these, then the Fourier decomposition is as follows: Let $\phi(x)$ be the solution of $$ (\partial^2+m^2)\phi(x)=0 $$ Take the Fourier transform of the equation to find $$ (k^2-m^2)\phi(k)=0 \tag{2} $$ where $$ \phi(k)=\int \mathrm dx\ \mathrm e^{ikx} \phi(x) $$ As $\phi(x)$ is a distribution, the solution of $(2)$ is $\phi(k)=h(k)\delta(k^2-m^2)$ for an arbitrary function $h(k)$. Inverting the Fourier transform, we find $$ \phi(x)=\int\mathrm dk\ \mathrm e^{-ikx}h(k)\delta(k^2-m^2) $$ Next, use $(1)$ to expand the delta over the roots of $k^2-m^2$. These roots are easily found to be $k_0=\pm \omega(\boldsymbol k)$, where $\omega(\boldsymbol k)=+(\boldsymbol k^2+m^2)^{1/2}$. 
Therefore, it is immediate to get $$ \phi(x)=\int\mathrm dk\ \mathrm e^{-ikx}h(k)\frac{1}{2\omega}\left[\delta(k_0-\omega)+\delta(k_0+\omega)\right] $$ and, after integrating over $\mathrm dk_0$ using the deltas, we find $$ \phi(x)=\int\frac{\mathrm d \boldsymbol k}{2\omega}\ \left[\mathrm e^{-i\omega t} \mathrm e^{i\boldsymbol k\cdot\boldsymbol x}h(\omega,\boldsymbol k)+\mathrm e^{+i\omega t} \mathrm e^{i\boldsymbol k\cdot\boldsymbol x}h(-\omega,\boldsymbol k)\right] $$ Finally, make the change of variable $\boldsymbol k\to-\boldsymbol k$ in the second term, which yields the usual expansion $$ \phi(x)=\int\frac{\mathrm d \boldsymbol k}{2\omega}\ \left[\mathrm e^{-ikx}a(\boldsymbol k)+\mathrm e^{+ikx}b^\dagger(\boldsymbol k)\right] $$ where I have defined $a(\boldsymbol k)=h(\omega,\boldsymbol k)$ and $b^\dagger(\boldsymbol k)=h(-\omega,-\boldsymbol k)$. As you can see, the solution is the same as yours (modulo some irrelevant prefactor that can be reabsorbed into the definition of $h(k)$), though the algebraic procedure to find it is a bit harder.
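As a quick sanity check on the mode equation from the question, one can verify numerically that each mode $e^{-i\omega t}$ with $\omega=(\boldsymbol k^2+m^2)^{1/2}$ satisfies $(\partial_t^2+\boldsymbol k^2+m^2)\tilde\phi=0$. This Python sketch (parameter values arbitrary, chosen by me for illustration) uses a central finite difference for the second time derivative:

```python
import cmath
import math

m, k = 1.3, 0.7                 # arbitrary mass and wavenumber
omega = math.sqrt(k**2 + m**2)  # the dispersion relation used in the answer

def phi(t):
    """A single positive-frequency Fourier mode."""
    return cmath.exp(-1j * omega * t)

# Central finite difference for the second time derivative at t
h, t = 1e-4, 0.9
d2phi = (phi(t + h) - 2 * phi(t) + phi(t - h)) / h**2

residual = d2phi + (k**2 + m**2) * phi(t)
print(abs(residual))  # ~0, up to finite-difference and round-off error
```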
{ "domain": "physics.stackexchange", "id": 26117, "tags": "field-theory, fourier-transform, greens-functions, dirac-delta-distributions, klein-gordon-equation" }
Python function to generate Spanish conjugations
Question: I tried to make a Python function to generate present tense Spanish conjugations.
from unidecode import unidecode

def solveSpanishPresent(pronoun, verb):
    if unidecode(verb).endswith("ar"):
        arDict = {"yo": "o", "tú": "as", "usted": "a", "él": "a", "ella": "a", "nosotros": "amos", "vosotros": "áis", "ustedes": "an", "ellos": "an", "ellas": "an"}
        verb = verb[:-2] + arDict[pronoun]
        return verb
    if unidecode(verb).endswith("er") or unidecode(verb).endswith("ir"):
        erDict = {"yo": "o", "tú": "es", "usted": "e", "él": "e", "ella": "e", "nosotros": "emos", "vosotros": "éis", "ustedes": "en", "ellos": "en", "ellas": "en"}
        irDict = {"nosotros": "imos", "vosotros": "ís"}
        if (pronoun == "nosotros" or pronoun == "vosotros") and verb.endswith("ir"):
            verb = verb[:-2] + irDict[pronoun]
        else:
            verb = verb[:-2] + erDict[pronoun]
        return verb
Answer: I think that there are too many special cases in the code. A data-driven approach would be easier to read and maintain than hard-coded logic. The entire function could be done using just dictionary lookups. Furthermore, I think that you are missing a translation layer. You can classify the three pronouns "usted", "él", and "ella" as "third person singular", and it will be clear that they all lead to the same conjugation. It would be more worthwhile to "compress" your conjugation tables that way than to try to override a few entries of the erDict with irDict. I would also format the dictionaries to look more like the tables that would typically be presented in grammar books. The line containing your arDict stretches to column 163, which is too wide to be readable. 
from unidecode import unidecode

PRONOUN_CLASSIFICATION = {
    'yo': '1s', 'nosotros': '1p',
    'tú': '2s', 'vosotros': '2p',
    'usted': '3s', 'ustedes': '3p',
    'él': '3s', 'ellos': '3p',
    'ella': '3s', 'ellas': '3p',
}

PRESENT_CONJUGATION_ENDINGS = {
    'ar': {
        '1s': 'o', '1p': 'amos',
        '2s': 'as', '2p': 'áis',
        '3s': 'a', '3p': 'an',
    },
    'er': {
        '1s': 'o', '1p': 'emos',
        '2s': 'es', '2p': 'éis',
        '3s': 'e', '3p': 'en',
    },
    'ir': {
        '1s': 'o', '1p': 'imos',
        '2s': 'es', '2p': 'ís',
        '3s': 'e', '3p': 'en',
    },
}

def conjugate_spanish_present(pronoun, infinitive):
    person = PRONOUN_CLASSIFICATION[pronoun]
    base, ending = infinitive[:-2], unidecode(infinitive)[-2:]
    try:
        return base + PRESENT_CONJUGATION_ENDINGS[ending][person]
    except KeyError:
        return infinitive
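A quick usage sketch of the reviewed function. To keep this snippet self-contained and stdlib-only, I have inlined the same tables but dropped the unidecode call, so it only handles infinitives whose final two letters are unaccented (regular -ar/-er/-ir verbs such as hablar, comer, vivir); it is an illustration, not a replacement for the version above.

```python
# Same data-driven tables as in the review, inlined so the example runs standalone.
PRONOUN_CLASSIFICATION = {
    'yo': '1s', 'nosotros': '1p',
    'tú': '2s', 'vosotros': '2p',
    'usted': '3s', 'ustedes': '3p',
    'él': '3s', 'ellos': '3p',
    'ella': '3s', 'ellas': '3p',
}

PRESENT_CONJUGATION_ENDINGS = {
    'ar': {'1s': 'o', '1p': 'amos', '2s': 'as', '2p': 'áis', '3s': 'a', '3p': 'an'},
    'er': {'1s': 'o', '1p': 'emos', '2s': 'es', '2p': 'éis', '3s': 'e', '3p': 'en'},
    'ir': {'1s': 'o', '1p': 'imos', '2s': 'es', '2p': 'ís', '3s': 'e', '3p': 'en'},
}

def conjugate(pronoun, infinitive):
    # No unidecode here: assumes the infinitive's last two letters are plain ASCII.
    person = PRONOUN_CLASSIFICATION[pronoun]
    base, ending = infinitive[:-2], infinitive[-2:]
    try:
        return base + PRESENT_CONJUGATION_ENDINGS[ending][person]
    except KeyError:
        return infinitive

print(conjugate('yo', 'hablar'))        # hablo
print(conjugate('nosotros', 'comer'))   # comemos
print(conjugate('vosotros', 'vivir'))   # vivís
print(conjugate('vosotros', 'hablar'))  # habláis
```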
{ "domain": "codereview.stackexchange", "id": 25222, "tags": "python, strings, python-3.x, hash-map" }
Web application for a restaurant
Question: I'm making a web application for a restaurant-type business. The idea is to manage the orders and customers. For selecting one of the orders and showing more specific data about it, I have this PHP script. As you can see, I am using prepared statements to prevent SQL injection.
<?php
try {
    $connection = new PDO('mysql:host=localhost;dbname=broodjes-service;charset=utf8mb4', 'root', 'password');
    $connection->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);
    if (!empty($_GET['order_id'])) {
        $order_id = $_GET['order_id'];
        $order_data = $connection->prepare("SELECT c.first_name, c.last_name, c.email_adress, c.customer_info, o.order_info, o.total_price, o.location, o.created FROM customers AS c LEFT JOIN orders AS o ON c.id = o.customer_id WHERE o.id = :order_id LIMIT 1");
        $order_data->bindParam(":order_id", $order_id, PDO::PARAM_INT);
        $order_data->execute();
        $query = "SELECT `products`.`name`, `orders-items`.`quantity` FROM `orders-items`" .
            "INNER JOIN `products` ON `orders-items`.`products_id` = `products`.`id`" .
            "WHERE order_id = :ordero_id LIMIT 1";
        $order_items = $connection->prepare($query);
        $order_items->bindParam(":ordero_id", $order_id, PDO::PARAM_INT);
        $order_items->execute();
        $orderObject = array();
        $orderObject['header'] = $order_data->fetch();
        $orderObject['items'] = array();
        while ($orderedItem = $order_items->fetch()) {
            $orderObject['items'][] = $orderedItem;
        }
        header('Content-type: application/json');
        echo json_encode($orderObject);
        $connection = null;
    }
} catch (PDOException $e) {
    echo $e->getMessage();
    die();
}
The parameters for the 2 queries are both the same, but I don't know how to use only one line for them. The first query is for selecting the specific data about the order. The second query is for selecting the items inside the order. Both queries should be run to get all results. Problems: It's messy that I actually have 2 queries. 
It's messy that I'm using 2 lines for the same parameter. Explanation: why there are 2 queries. Whenever I use one query like this:
SELECT c.first_name, c.last_name, c.email_adress, c.customer_info, o.order_info, o.total_price, o.location, o.created, p.name, ot.quantity
FROM customers AS c
LEFT JOIN orders AS o ON c.id = o.customer_id
LEFT JOIN `orders-items` AS ot ON o.id = ot.order_id
LEFT JOIN `products` AS p ON ot.products_id = p.id
WHERE order_id = :order_id;
I get the specific customer data 3 times in the results. Then I don't know how to get back the orders-items separately. Also, whenever there are no results, I have no idea how to 'not select them' within MySQL. Also, when using that query, when a customer doesn't have any orders_items no result is given. Answer:
Aliases
Table aliases are handy, sure. But single-letter aliases are not good. It's OK to want to save having to type more characters than needed, but you have to keep in mind that things like aliases and variables get really confusing if the name you give them does not say anything about what they mean. c, o, p, ot... Why not instead cust, ord, prod, ordItems?
Table naming
orders-items is not a good table name. Why? Well, because you will have to use back ticks each time you reference it, as opposed to your other tables. In SQL, avoid using reserved characters in table/column names to negate this problem. If possible, rename to orders_items or OrderItems or something along those lines. To rename the table: RENAME TABLE `orders-items` TO orders_items;
Consistency
Your SQL queries, though both are valid, look completely different. Compare this: $order_data = $connection->prepare("SELECT c.first_name, c.last_name, c.email_adress, c.customer_info, o.order_info, o.total_price, o.location, o.created FROM customers AS c LEFT JOIN orders AS o ON c.id = o.customer_id WHERE o.id = :order_id LIMIT 1"); And this: $query = "SELECT `products`.`name`, `orders-items`.`quantity` FROM `orders-items`" . 
"INNER JOIN `products` ON `orders-items`.`products_id` = `products`.`id`" . "WHERE order_id = :ordero_id LIMIT 1"; In one of them you use table aliases, in the other full table names. In the latter you use back ticks, but not in the former. In the latter you also have periods splitting your query code. Stick to one style to make your code easier to maintain.
Your question
Your first query is selecting one specific order along with the customer associated with it. Your second query is selecting one specific order along with the items associated with it. If you want to combine both, then assuredly you will get customer information multiple times in your result set. Your LIMIT 1 seems like it would not be very useful... Can an order be placed by multiple customers? However, since orders is your primary table, I suggest you start from there, then JOIN your other tables with INNER JOIN so you don't return nulls (if that is the intent, and assuming an order has to have a customer, and has to have items). All said, I think this should work for what you are trying to do:
SELECT ord.id, ord.order_info, ord.total_price, ord.location, ord.created, cust.id, cust.first_name, cust.last_name, cust.email_adress, cust.customer_info, ordItems.quantity, ordItems.products_id, prod.name, prod.price
FROM orders AS ord
INNER JOIN customers AS cust ON ord.customer_id = cust.id
INNER JOIN `orders-items` AS ordItems ON ord.id = ordItems.order_id
INNER JOIN products AS prod ON ordItems.products_id = prod.id
WHERE ord.id = :order_id;
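If you do go with a single JOIN query, the duplicated customer columns can be folded back into the same {'header': ..., 'items': [...]} shape the PHP builds with two queries. Here is a hedged sketch of that grouping step in Python; the row dicts are made-up sample data, not real query output, and the column split is illustrative:

```python
import json

# Pretend result of the single JOIN query: the customer/order columns
# repeat once per order item.
rows = [
    {'first_name': 'Ann', 'total_price': 12.5, 'name': 'Club sandwich', 'quantity': 2},
    {'first_name': 'Ann', 'total_price': 12.5, 'name': 'Cola', 'quantity': 1},
]

ITEM_COLS = ('name', 'quantity')  # which columns belong to the per-item part

def group_order(rows):
    """Collapse the repeated header columns and collect the per-item columns."""
    if not rows:
        return None  # no such order (or no items, given the INNER JOINs)
    header = {k: v for k, v in rows[0].items() if k not in ITEM_COLS}
    items = [{k: r[k] for k in ITEM_COLS} for r in rows]
    return {'header': header, 'items': items}

print(json.dumps(group_order(rows), indent=2))
```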
{ "domain": "codereview.stackexchange", "id": 9370, "tags": "php, mysql, pdo" }
A naive question on the eigenvalues of fermionic operators?
Question: Let $A$ be a fermionic operator which is a product of an odd number of fermion operators, or a summation of such products, say $A=C_{i_1}^{\dagger}\cdots C_{i_m}^{\dagger}C_{j_1}\cdots C_{j_n}$ or $A=\sum w(i_1\cdots i_mj_1\cdots j_n) C_{i_1}^{\dagger}\cdots C_{i_m}^{\dagger}C_{j_1}\cdots C_{j_n}$ where $C_i,C_j^{\dagger}$ are fermion operators satisfying the standard anticommutation relations, and $m+n$ is an odd number. [$w(i_1\cdots i_mj_1\cdots j_n)$ are the coefficients.] My question is: If $\lambda(\neq0)$ is an eigenvalue of $A$, then is $-\lambda$ also an eigenvalue of $A$? If the above is true, how to prove it? If it is wrong, what is the counterexample? Answer: (i) Define a unitary operator: $\mathcal{U}(\chi)=e^{-i\chi \sum_{k}C_{k}^{\dagger}C_{k}}$ for $\chi \in \mathbb{R}$. (ii) Notice: $\mathcal{U}(\chi)^{\dagger}C_{k}^{\dagger}\mathcal{U}(\chi)=e^{i\chi}C_{k}^{\dagger}$ and $\mathcal{U}(\chi)^{\dagger}C_{k}\mathcal{U}(\chi)=e^{-i\chi}C_{k}$. (iii) So, for the operator defined in the question, $A=\sum_{\{s_{p}\},\{t_{q}\}}w[\{s_{p}\},\{t_{q}\}]\underset{\text{ordered products}}{\tilde\prod_{p=1}^{m}\tilde \prod_{q=1}^{n}}C_{s_{p}}^{\dagger}C_{t_{q}}$, we have: $\mathcal{U}(\chi)^{\dagger}A\mathcal{U}(\chi)=e^{i\chi(m-n)}A$. (iv) For odd $m+n$ and $\chi=\pi$ we get: $A\mathcal{U}(\pi)=-\mathcal{U}(\pi)A$. (v) Now, if $A|\lambda\rangle=\lambda|\lambda\rangle$ then $A\left[\mathcal{U}(\pi)|\lambda\rangle\right]=-\lambda\left[\mathcal{U}(\pi)|\lambda\rangle\right]$. (vi) More generally we have: $A|\lambda\rangle=\lambda|\lambda\rangle$ $\Rightarrow$ $A\left[\mathcal{U}(\chi)|\lambda\rangle\right]=\left[e^{i\chi(m-n)}\lambda\right]\left[\mathcal{U}(\chi)|\lambda\rangle\right]$.
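The anticommutation argument in steps (iv)-(v) can be checked numerically for two modes. The sketch below uses a Jordan-Wigner matrix representation and an arbitrary Hermitian odd operator; both choices are mine for illustration and are not part of the answer. Since all matrices involved are real here, the adjoint reduces to the transpose:

```python
def kron(a, b):
    """Kronecker product of two square matrices given as nested lists."""
    na, nb = len(a), len(b)
    return [[a[i // nb][j // nb] * b[i % nb][j % nb]
             for j in range(na * nb)] for i in range(na * nb)]

def matmul(a, b):
    n = len(a)
    return [[sum(a[i][k] * b[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def madd(a, b, s=1):
    """a + s*b, elementwise."""
    return [[a[i][j] + s * b[i][j] for j in range(len(a))] for i in range(len(a))]

def T(a):
    return [list(col) for col in zip(*a)]

c  = [[0, 1], [0, 0]]   # single-mode annihilation operator
I2 = [[1, 0], [0, 1]]
Z  = [[1, 0], [0, -1]]

C1 = kron(c, I2)        # C_1
C2 = kron(Z, c)         # C_2, carrying the Jordan-Wigner string

# An odd Hermitian operator (weights arbitrary): A = (C1^dag + C1) + 0.5*(C2^dag + C2)
A = madd(madd(T(C1), C1), madd(T(C2), C2), 0.5)

# U(pi) = exp(-i*pi*N) is just the fermion parity operator diag((-1)^occupation)
U = kron(Z, Z)

# Step (iv): A U(pi) = -U(pi) A
zeros = [[0] * 4 for _ in range(4)]
print(matmul(A, U) == madd(zeros, matmul(U, A), -1))  # True

# A^2 = 1.25*I and tr A = 0, so the eigenvalues are +sqrt(1.25) and -sqrt(1.25),
# each twice: the nonzero spectrum is symmetric under lambda -> -lambda.
print(matmul(A, A) == [[1.25 if i == j else 0.0 for j in range(4)] for i in range(4)])  # True
print(sum(A[i][i] for i in range(4)))  # 0.0
```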
{ "domain": "physics.stackexchange", "id": 50908, "tags": "quantum-mechanics, fermions, linear-algebra, eigenvalue, second-quantization" }
Exponential factor in the Arrhenius equation
Question: In Physical Chemistry by Peter Atkins and Julio de Paula, the exponential factor $e^{-E_{a}/RT}$ indicates the fraction of particles/molecules that have at least the energy $E_{a}$. I'm slightly confused because, from what I've learnt in statistical thermodynamics, the exponential factor should be proportional to the fraction of molecules in the energy level $E_{a}$ (i.e. it doesn't include molecules with energy $> E_{a}$; to find the fraction of molecules with energy $\geq E_{a}$, I think one should sum/integrate from the energy level $E_{a}$ over all higher accessible energy levels). Thanks in advance for any clarification. Answer: The term $e^{-\beta E_a}$ is in fact the integrated term. This derivation relies on assuming uniformly spaced energy levels with spacing $\epsilon$, such that the partition function is: $$Z = \sum_{i=0} ^ \infty e^{-\beta i\epsilon}$$ Since $p_i = \frac{e^{-\beta i \epsilon}}{Z}$, you can sum up all $p_i$ from an $i_{min}$ to $\infty$, such that $i_{min}\epsilon\equiv E_a$: $$p_{transition} = \sum_{i=i_{min}}^\infty p_i=\sum_{i=i_{min}}^{\infty}\frac{e^{-\beta i \epsilon}}{Z} = e^{-\beta i_{min} \epsilon} \frac{\sum_{i=0} ^ \infty e^{-\beta i\epsilon}}{Z} = e^{-\beta i_{min} \epsilon} \frac{Z}{Z} \equiv e^{-\beta E_a}$$ Edit: the third step is probably the trickiest one to spot, so here I am presenting it in more detail: $$\sum_{i=i_{min}}^{\infty}\frac{e^{-\beta i \epsilon}}{Z} = \frac{e^{-\beta i _{min}\epsilon} + e^{-\beta (i _{min} + 1)\epsilon} + e^{-\beta (i _{min} + 2)\epsilon}+...}{Z}\\ = \frac{e^{-\beta i _{min}\epsilon} (1+e^{-\beta \epsilon} + e^{-\beta 2\epsilon}+...)}{Z}= e^{-\beta i_{min} \epsilon} \frac{\sum_{i=0} ^ \infty e^{-\beta i\epsilon}}{Z}$$
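The telescoping of the tail sum in the answer is easy to confirm numerically. A small Python sketch, with arbitrary values of beta, epsilon, and i_min (my choices), and the infinite sums truncated at a large N:

```python
import math

beta, eps = 0.7, 1.3   # arbitrary illustrative values
i_min = 4              # so E_a = i_min * eps

N = 2000  # truncation of the geometric sums; the neglected tail is negligible
Z = sum(math.exp(-beta * i * eps) for i in range(N))
tail = sum(math.exp(-beta * i * eps) for i in range(i_min, N))

p_transition = tail / Z
print(p_transition)
print(math.exp(-beta * i_min * eps))  # agrees: the Boltzmann factor e^{-beta*E_a}
```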
{ "domain": "physics.stackexchange", "id": 69775, "tags": "statistical-mechanics, equilibrium" }
Does a vacuum pump have to run constantly in a pick & place system?
Question: I'm reading this article about whether a vacuum pump or a venturi vacuum generator is more efficient for a pick and place system: https://fluidpowerjournal.com/pump-or-venturi/ The example application is as follows: Here’s a typical vacuum-lifting application: a small end-of-arm tool consisting of eight small vacuum cups of Ø40 mm (Ø1.5″). These eight cups are picking up a flat plastic sheet, which is, of course, non-porous. Cycle rate is one sheet pick every 15 seconds, or four sheets a minute. The piece is being held for only three seconds during transfer. What’s the most efficient? Pump or venturi? The conclusion appears to be venturi, but I find the argument a bit odd: The pump will run continuously with the vacuum being turned on and off via a vacuum solenoid valve. The venturi will run only when required (during the lift part of the cycle), turned on and off using a compressed air solenoid valve. The vacuum pump uses 0.25 hp all day long during a 24-hour shift. The venturi uses 0.9 hp for every three seconds of a 15-second cycle. Therefore, the venturi uses on average 0.18 hp of compressed air over the same cycle period. Consequently, the venturi is “more efficient” in overall running costs. Does anyone know why the vacuum pump would need to run constantly for 24hrs whilst the pressure pump would not? Could the vacuum pump not be switched off just like the pressure pump? In fact, can the solenoid valve give a good seal in the venturi case? Answer: The problem with vacuum pick and place systems is the amount of time it takes to create the vacuum, so your design will revolve around your requirements.
Vacuum
If you run the vacuum continuously then, as you approach your target, you only have to wait long enough for the channels on your pick-up head to evacuate. If you switch off the vacuum pump, air rushes into the tubing, equalising at atmospheric pressure. The next time you want to pick, you need to evacuate all of the tubing between head and pump.
This could take a long time, even if you have a vacuum 'reservoir' much larger than the volume of your tubing. In between these two options would be a vacuum solenoid close to the pick-up head. Here most of the tubing would stay evacuated, and you would only have to evacuate the pick-up head channels and the short tubing to the solenoid. This does require you to run power and control all the way to the head, though. The greater the volume of tubing on the open air side of the solenoid, the greater the time would be from switching the solenoid to there being sufficient vacuum. Venturi With a venturi generator, you can site it next to the head, keeping the volume to evacuate low, while putting the solenoid at the other end of the tube. High pressure air will rush into that tube much more quickly than the air could be evacuated, so you effectively get the best of both worlds: fast actuation, but no need to add solenoids and run power and control to the head. Comparison Price and the difference in pressure differentials are the big advantages of compressed air. Air compressors can easily and cheaply supply air to a venturi generator, and can cope more easily with large changes in demand. A typical workshop compressor might store 100l of air at 8 bar and be rated at 410 litres/minute (14.5cfm). This could run venturi generators needing 80 litres/minute with the 5:1 duty cycle suggested in your quote. During that 15 second cycle, pressure would fall from 8 bar to around 7.8 bar until the pump kicks in again. This variation should be negligible in terms of pick-up performance. We could do something similar with a vacuum vessel: have a higher flow vacuum pump, a vacuum 'storage' vessel, and run it with a 3:12s duty cycle, but there are problems with this. We would need a much higher flow rate (and cost) pump.
Vacuum vessels can have at most 1 atmosphere of pressure differential, in practice much less, so a 100l tank could never 'store' more than 100l of vacuum, and a 0.3bar (10"Hg) vacuum vessel would need to be around 2500 litres in size, to have a similar pressure variation over the duty cycle as with a 100l compressed air solution. Flow rate is affected significantly by current pressure in the storage vessel: Summary In summary, almost all of the vacuum based pick and place machines I've worked on have used venturi generators. Vacuum pumps always have to be sized according to maximum required vacuum, assuming no storage, whereas compressed air systems can usually be sized based on average use. If you need constant vacuum, pumps may be better, but if demand varies significantly from second to second, as it probably does in a pick and place machine, venturi may be a better fit.
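As a quick sanity check on the duty-cycle arithmetic in the article quoted in the question (a 0.25 hp pump running continuously versus a 0.9 hp venturi running 3 s of every 15 s cycle), a short Python sketch:

```python
# Average-power comparison using the figures quoted in the question:
# the pump runs continuously at 0.25 hp; the venturi draws 0.9 hp but
# only for 3 s of every 15 s cycle.

def average_power_hp(peak_hp, on_time_s, cycle_time_s):
    """Average power of a device that is only on for part of each cycle."""
    return peak_hp * on_time_s / cycle_time_s

pump_avg = average_power_hp(0.25, 15, 15)    # always on -> 0.25 hp
venturi_avg = average_power_hp(0.9, 3, 15)   # 3 s on per 15 s -> 0.18 hp

print(pump_avg, venturi_avg)
```

This reproduces the article's 0.18 hp average for the venturi, which is why it wins on running cost despite the higher peak draw.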
{ "domain": "robotics.stackexchange", "id": 2092, "tags": "robotic-arm, valve" }
Removing elements in an array
Question: I have a piece of code where I compare two arrays for similar values, then remove the similar values from one of the arrays. Is there a better way of writing this piece of code? UpdateAddedTasksAfterDelete() { var tasksToRemove = []; var updatedTasks = []; for (var i = 0; i < this.deletedTasks.length; i++) { if (this.addedTasks.indexOf(this.deletedTasks[i]) > -1) { tasksToRemove.push(this.deletedTasks[i]); } } for (var i = 0; i < this.addedTasks.length; i++) { if (this.addedTasks.indexOf(tasksToRemove[i]) == -1) { updatedTasks.push(this.addedTasks[i]); } } this.addedTasks = updatedTasks; } Answer: I'm a little unclear as to what your goal is here, but here are a few options, depending on which outcome you are hoping for... If your goal is to take two arrays, combine them, and discard any duplicate values, using a Set will automatically discard the values that appear more than once for you. Simply use: var mySet = new Set(array1) array2.forEach((item)=>{mySet.add(item)}); Then convert the set back to an array if you need it specifically in array form, using the spread operator: var finalArray = [...mySet] 'finalArray' will now hold each unique value from the combination of your two original arrays. If your goal is to compare two arrays, removing values from the first array that are present in the second, use filter: var finalArray = array1.filter((item)=>{return !array2.includes(item)}) 'finalArray' will now hold all values of array1 that do not also exist in array2.
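For readers coming from other languages, here is a rough Python analogue of the answer's two JavaScript patterns (my own illustration, not part of the original answer):

```python
# Python analogues of the two JavaScript patterns above (illustration only).

array1 = ["a", "b", "c", "a"]
array2 = ["b", "d"]

# 1) Combine both arrays and discard duplicates, keeping first-seen order
#    (dict.fromkeys preserves insertion order, like a JS Set does):
combined = list(dict.fromkeys(array1 + array2))

# 2) Keep the values of array1 that are not present in array2:
lookup = set(array2)  # set membership tests are O(1)
final_array = [item for item in array1 if item not in lookup]

print(combined)     # ['a', 'b', 'c', 'd']
print(final_array)  # ['a', 'c', 'a']
```

The second pattern mirrors the `filter`/`includes` one-liner, with the set avoiding the repeated linear scans that `indexOf` in the original code performs.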
{ "domain": "codereview.stackexchange", "id": 24048, "tags": "javascript" }
Why aren't units with powers, like cm³, surrounded by parentheses?
Question: Since $\renewcommand{\unit}[1]{\,\mathrm{#1}} 1\unit{dm} = 10^{-1}\unit{m}$, it follows that $1\unit{dm^3} = 10^{-1} \times 10^{-1} \times 10^{-1} \unit{m^3} = 10^{-3} \unit{m^3}$. However, in regular mathematics the following equation holds true: $$a\,b^{3} = a\,b\,b\,b$$ By the above, the cube unit should expand as follows: $$\mathrm{dm^3} = \mathrm{dmmm}$$ While in actual usage (as seen in the second equation) the expansion is $\mathrm{dddmmm}$, which would arise from using $\mathrm{(dm)^3}$ instead. $$\mathrm{(dm)^3} = \mathrm{dddmmm}$$ In short: why aren't parentheses (commonly?) used in units? Answer: The thing is that $\mathrm{dm}$ is a single symbol, not a combination of two symbols. Yes, it can be understood in terms of a prefix and a base indicator, but it is still a single symbol. An analogy to the concatenation of variables is inappropriate. Reference to an authoritative statement: The grouping formed by a prefix symbol attached to a unit symbol constitutes a new inseparable unit symbol (forming a multiple or submultiple of the unit concerned) that can be raised to a positive or negative power and that can be combined with other unit symbols to form compound unit symbols. Example: $\renewcommand{\unit}[1]{\,\mathrm{#1}} 2.3\unit{cm^3} = 2.3\unit{(cm)^3} = 2.3 \unit{(10^{-2}\,m)^3} = 2.3 \times 10^{-6} \unit{m^3}$
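A quick numeric check of the same point in Python, treating dm as the single quantity $10^{-1}\,$m and cubing it:

```python
import math

# The prefixed symbol "dm" behaves as one inseparable quantity, so cubing
# it cubes the prefix as well: (10^-1 m)^3 = 10^-3 m^3.
dm = 10.0 ** -1          # 1 dm expressed in metres
volume_m3 = dm ** 3      # "dm^3" means "(dm)^3"

assert math.isclose(volume_m3, 1e-3)
print(round(volume_m3, 12))  # 0.001
```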
{ "domain": "physics.stackexchange", "id": 15281, "tags": "soft-question, units, notation" }
Where does Light go if it is in a glass prism and why?
Question: I know that photons/light bend or bounce when they hit glass, so if light was inside a glass prism, where would it go? I know that light/photons hit the glass at different angles, and if it hits at almost a straight angle it will escape, but why does it escape only if it hits at almost a straight angle? I don't really understand Snell's law. Could you look at my profile before answering, flagging or leaving a comment please. Answer: I know that photons/light bend or bounce when they hit glass, so if light was inside a glass prism, where would it go? Source: Light entering prism It goes through the prism and two things happen. The white light gets split into the rainbow of colors and it also gets bent (refracted), because each of the colors is a different wavelength than the others. The colors between red and blue get refracted by intermediate amounts: more than red but not as much as blue. The light can be reflected or refracted or both, as in the picture below. I don't really understand Snell's law. The best way to think of Snell's law is to imagine you have to run from a point A to a point C, but on the way you must touch a pole at point B, which is midway between A and C but 50 metres below the straight line connecting A and C. That's the way light works: it gets from A to B and from B to C in the shortest time. You should be able to see that on the picture above. If it went any other way, it would take more time, so the angle it hits B at (coming from A) must equal the angle going to C. I bet you can't think of a quicker way to go this route than the way light goes.
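To make Snell's law concrete, here is a small Python sketch (my own illustration, assuming a typical glass index of 1.5): light hitting the glass-air surface from inside escapes only when the incidence angle is below the critical angle, which is why rays hitting "almost straight on" get out.

```python
import math

def refraction_angle(n1, n2, theta1_deg):
    """Snell's law: n1*sin(t1) = n2*sin(t2).

    Returns the refracted angle in degrees, or None when there is no real
    solution, i.e. total internal reflection (the light cannot escape).
    """
    s = n1 * math.sin(math.radians(theta1_deg)) / n2
    if abs(s) > 1:
        return None
    return math.degrees(math.asin(s))

n_glass, n_air = 1.5, 1.0  # assumed typical refractive indices

# Hitting the inside surface nearly straight on: the ray escapes.
print(refraction_angle(n_glass, n_air, 10))   # ~15.1 degrees

# The critical angle for glass -> air is asin(1/1.5), about 41.8 degrees;
# beyond it the ray is trapped inside (total internal reflection).
critical_deg = math.degrees(math.asin(n_air / n_glass))
print(critical_deg)
print(refraction_angle(n_glass, n_air, 45))   # None: reflected back inside
```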
{ "domain": "physics.stackexchange", "id": 35624, "tags": "optics, photons, refraction, glass" }
Get rotate angle to move a robot in 3D environment
Question: I have a UAV and my idea is to make it move through some points I pass to it. My logic is the following: 1- Move the UAV along the z-axis to adjust height; 2- Rotate around the z-axis; 3- Move forward along the x-axis until it reaches the goal. Step 1 works fine, but I'm getting problems on step number 2. I have one coordinate frame for the UAV and I get the angle between the UAV and the point (in the world) as a transform between the frames of my goal_point and the UAV: angle = transform_goal_robotPose.getRotation().getAngle(); However, I cannot get a metric to compare against and establish that the UAV has rotated the correct angle and can then go to step 3. I was thinking about something like a comparison if (angle == expected_angle), but couldn't figure one out. Does anyone know how I can do this? Originally posted by bxl on ROS Answers with karma: 56 on 2018-03-19 Post score: 0 Answer: Have a look at the look_at_pose package. It will do that calculation for you. Originally posted by AndyZe with karma: 2331 on 2018-03-19 This answer was ACCEPTED on the original site Post score: 0
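The look_at_pose package aside, the comparison the question asks about is usually done with an angle tolerance rather than `==` (floating-point angles essentially never compare exactly equal). A generic Python sketch with hypothetical helper names, not the package's API:

```python
import math

# Hypothetical helpers illustrating step 2: compute the yaw to the goal,
# then test "rotated enough?" with a tolerance instead of ==.

def yaw_to_goal(robot_x, robot_y, goal_x, goal_y):
    """Heading about the z-axis from the robot's position to the goal."""
    return math.atan2(goal_y - robot_y, goal_x - robot_x)

def reached_angle(current, target, tol=math.radians(2.0)):
    """Compare two angles with a tolerance, wrapping the error into
    [-pi, pi] so that 359 deg and 1 deg count as 2 deg apart."""
    error = math.atan2(math.sin(target - current), math.cos(target - current))
    return abs(error) < tol

target = yaw_to_goal(0.0, 0.0, 1.0, 1.0)          # goal at 45 degrees
print(reached_angle(math.radians(44.5), target))  # True: close enough, go to step 3
print(reached_angle(math.radians(30.0), target))  # False: keep rotating
```

The `atan2(sin, cos)` trick normalises the angular error, which matters once the UAV's yaw wraps past ±180 degrees.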
{ "domain": "robotics.stackexchange", "id": 30377, "tags": "ros, uav, ros-kinetic" }
Do amphibians feel pain?
Question: It seems strange, but I always hear that toads and frogs don't feel pain. Is this a true assumption? Can these species feel pain? Answer: According to the Wikipedia article, it is widely accepted by scientists that they feel pain, based on their opioid receptors and sensory physiology. "Amphibians, particularly anurans, fulfill several physiological and behavioural criteria proposed as indicating that non-human animals may experience pain. These fulfilled criteria include a suitable nervous system and sensory receptors, opioid receptors and reduced responses to noxious stimuli when given analgesics and local anaesthetics, physiological changes to noxious stimuli, displaying protective motor reactions, exhibiting avoidance learning and making trade-offs between noxious stimulus avoidance and other motivational requirements." en.wikipedia.org/wiki/Pain_in_amphibians
{ "domain": "biology.stackexchange", "id": 7427, "tags": "zoology" }
DFA - Equivalence classes
Question: I am preparing for my exam in formal languages and I need some help with one question from an old exam. I know that the number of equivalence classes of a regular language L is the number of states of the minimal DFA for that language. But how do I give a DFA for one of the equivalence classes? Thanks in advance Answer: An elaborate hint: recall that the proof of the Myhill-Nerode theorem works (in one direction) by constructing a DFA for a language, given its equivalence classes. In the constructed DFA (i.e. the minimal DFA), each state corresponds to an equivalence class. We then set the accepting states to be those that correspond to equivalence classes that are contained in the language. Now, suppose you create the DFA for the equivalence classes of $L$, without knowing in advance which equivalence classes are in L, and which aren't. If you choose one particular class, and set the state that corresponds to it to be accepting, then the language of the DFA is exactly that equivalence class, which is what you wanted. Indeed, the property of the minimal DFA is that for every two words $w,w'$, the runs of the DFA on $w$ and on $w'$ end in the same state iff $w\equiv_L w'$. Observe that you may end up with a DFA that is not minimal for the equivalence class.
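To make the hint concrete, here is a small Python illustration with a language of my own choosing (not from the exam): L = words over {a, b} ending in 'a'. L has two Myhill-Nerode classes, so its minimal DFA has two states, one per class.

```python
# A concrete instance of the hint: L = words over {a, b} that end in 'a'.
# The minimal DFA has two states; each state corresponds to one
# Myhill-Nerode equivalence class.

transitions = {  # the state records whether the word read so far ends in 'a'
    ("q0", "a"): "q1", ("q0", "b"): "q0",
    ("q1", "a"): "q1", ("q1", "b"): "q0",
}

def run(word, start="q0"):
    state = start
    for ch in word:
        state = transitions[(state, ch)]
    return state

def in_class(word, accepting_state):
    """DFA accepting exactly the equivalence class of one chosen state."""
    return run(word) == accepting_state

# Making q1 the sole accepting state yields the class of words ending in
# 'a' (here that class happens to be L itself); making q0 accepting
# instead yields the other class.
print(in_class("abba", "q1"))  # True
print(in_class("ab", "q1"))    # False
print(in_class("ab", "q0"))    # True
```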
{ "domain": "cs.stackexchange", "id": 1713, "tags": "formal-languages, finite-automata" }
Excel Range less clumsy
Question: As an old dog (age 73) learning new (Excel VBA) tricks, I am reasonably happy with putting together the code below. But I think it could be cleaner. How would you have coded it? Private Sub Workbook_Open() Dim lastRow As Long 'last row with data Dim thisDate As Double 'start timestamp thisDate = Now() With Sheets("Pressure Log") lastRow = .Range("B" & .Rows.Count).End(xlUp).Row 'populate next row with date/time .Range("B" & lastRow + 1 & ":G" & lastRow + 1).Borders.LineStyle = xlContinuous .Range("B" & lastRow).Offset(1) = Format(thisDate, "dddd") .Range("B" & lastRow).Offset(1, 1) = Format(thisDate, "mm/dd/yyyy") .Range("B" & lastRow).Offset(1, 2) = Format(thisDate, "hh:mm AM/PM") .Range("B" & lastRow).Offset(1, 3).Select 'position for data End With End Sub Answer: Properly formatting and indenting code is always a good start. Using Option Explicit at the top of every module is a must. You may already do this, just thought I would mention it. You declare thisDate as a Double but you use it as a Date. Declare it as a Date. Make your life a little easier and set a range to the start of your new row instead of calling a calculated range. Example below: Private Sub Workbook_Open() Dim lastRow As Long 'last row with data Dim thisDate As Date 'start timestamp Dim entryRange As Range thisDate = Now() With Sheets("Pressure Log") lastRow = .Range("B" & .Rows.Count).End(xlUp).Row 'populate next row with date/time Set entryRange = .Range("B" & lastRow + 1) 'There are other ways of doing this too. End With entryRange.Resize(1, 6).Borders.LineStyle = xlContinuous 'Yes, could do this in a With block as well. entryRange.Value = Format(thisDate, "dddd") entryRange.Offset(, 1).Value = Format(thisDate, "mm/dd/yyyy") entryRange.Offset(, 2) = Format(thisDate, "hh:mm AM/PM") entryRange.Offset(, 3).Select 'position for data End Sub
{ "domain": "codereview.stackexchange", "id": 32898, "tags": "vba, excel" }
A Simple Async Task in Clojure
Question: Purpose of Code: To simulate scraping the web and updating a db, by, presently, adding randomly generated numbers to a "database". Have I wired up the followin async code properly? Is there a simpler, more composable way to pass objects around? Or some more idiomatic way? I have added comments to self-explanatory code to hopefully make it a titch easier on reviewers. Note: the following mocks what will become a web scraper. So just be aware that the "db" isn't a real db, and "fetching urls" refers to a mock up that will eventually get swapped out, but I hope you see the semantics I intend to sub in. (ns clj-scraper.core (:require [clojure.core.async :as a :refer [>! <! >!! <!! go go-loop chan buffer close! thread alts! alts!! timeout put! take! pipe pipeline pipeline-async pipeline-blocking]])) ;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;; ;; Database - a mockup of what could be a db (def test-db (agent {:1 {:url "abc" :vals []} :2 {:url "bac" :vals []} :3 {:url "pg" :vals []})) ; append to a path in the test-db (defn test-append [path x] (send test-db update-in path (partial cons x))) ;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;; ;; Load from DB ; objects start their journey on this channel (def db-objs> (chan 1)) ;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;; ;; Fetch URLs ; test-data mocks fetching a url by returning the string of a random int (defn test-data [_] (str (rand-int 100))) (def pages> (chan 1)) (pipeline 4 pages> ; add the fetched page to the object getting passed around (map (fn [{:keys [url] :as obj}] (conj obj {:page (test-data url)}))) db-objs>) ;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;; ;; CSS Select Page ; this mock up simply converts a String -> Int (defn test-select [x] (Integer. 
x)) (def selections> (chan 1)) (pipeline 4 selections> ; add the converted String to the object getting passed around (map (fn [{:keys [page] :as obj}] (conj obj {:scraped (test-select page)}))) pages>) ;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;; ;; Update DB ; add the "scraped value" to the "db" (go-loop [] (let [{:keys [id scraped]} (<! selections>)] (test-append [id :vals] scraped)) (recur)) ;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;; ;; Run Things in Debugging (doseq [[id data] @test-db] (go (>! db-objs> (conj data {:id id})))) @test-db ; won't show expected results until async ops are done Answer: Commenting on this section: ; test-data mocks fetching a url by returning the string of a random int (defn test-data [_] (str (rand-int 100))) (def pages> (chan 1)) (pipeline 4 pages> ; add the fetched page to the object getting passed around (map (fn [{:keys [url] :as obj}] (conj obj {:page (test-data url)}))) db-objs>) In real-life, test-data will either do blocking I/O or be asynchronous. Therefore, you should use pipeline-blocking or pipeline-async instead, otherwise you'll exhaust the "go thread pool". Commenting on this part: (go-loop [] (let [{:keys [id scraped]} (<! selections>)] (test-append [id :vals] scraped)) (recur)) You should account for the fact that the selections> channel may close, in which case the object will be nil and you should stop the loop. (go-loop [] (when-let [{:keys [id scraped]} (<! selections>)] (test-append [id :vals] scraped) (recur))) Commenting on this part: (doseq [[id data] @test-db] (go (>! db-objs> (conj data {:id id})))) Because go is asynchronous, this consists of putting all the objects in test-db at once into the processing pipeline, which potentially consumes a lot of memory. Core.async channels are designed to give you back-pressure, you should use it. How about instead: (go (doseq [[id data] @test-db] (>! 
db-objs> (conj data {:id id})))) Which you can express at a higher level using onto-chan: (onto-chan db-objs> (map (fn [[id data]] (conj data {:id id})) @test-db) false)
{ "domain": "codereview.stackexchange", "id": 22329, "tags": "asynchronous, clojure" }
Misunderstanding the calculation of the potential for a system of 2 concentric shells
Question: Let's say we have 2 concentric shells. The little one has a charge $Q_1$ and a radius $R_1$ and the greater one has a charge $Q_2$ and a radius $R_2$ with $R_2$ > $R_1$. We want to calculate the potentials so : 1) $r > R_2 :$ $V(r) = \frac{(Q_1 + Q_2)}{4\pi \epsilon_0}\frac{1}{r}$ $V(R_2) = \frac{(Q_1 + Q_2)}{4\pi \epsilon_0}\frac{1}{R_2}$ 2) $R_1 < r < R_2 :$ This is where I don't understand. I would say that it's the difference of potential $V(R_2) - V(R_1)$ I would calculate : $V(r) = \frac{(Q_1 + Q_2)}{4\pi \epsilon_0}\frac{1}{R_2} - \frac{(Q_1)}{4\pi \epsilon_0}\frac{1}{R_1}$ However we have : $\frac{(Q_1)}{4\pi \epsilon_0}\frac{1}{r} + \frac{(Q_2)}{4\pi \epsilon_0}\frac{1}{R_2}$ 3) $r < R_1 :$ $V(r) = 0$ (no charges inside) I suppose that the following potential comes from the fact that to bring a charge from $+\infty$ we have to "fight against" the contributions of the charge $Q_2$ uniformly distributed on the sphere of radius $R_2$ and of the charge $Q_1$ uniformly distributed on the sphere of radius $R_1$ ? $V(R_1) = \frac{(Q_1)}{4\pi \epsilon_0}\frac{1}{R_1} + \frac{(Q_2)}{4\pi \epsilon_0}\frac{1}{R_2}$ But why don't we have (following the logic of the case $r > R_2$) : $V(R_1) = \frac{(Q_1)}{4\pi \epsilon_0}\frac{1}{R_1} + \frac{(Q_1 + Q_2)}{4\pi \epsilon_0}\frac{1}{R_2}$ ? Answer: 3) $r < R_1 :$ $V(r) = 0$ (no charges inside) Is an incorrect statement. Inside the inner shell the electric field is zero and the potential is constant. It can be zero if you define it to be so, however, from what you have written for case 1 you have taken the zero of potential to be at infinity. The electric field outside the outer shell is due to charge $Q_1+Q_2$, that between the two shells is due to charge $Q_1$. 
The potential outside of the outer shell is $V(r) = \dfrac{(Q_1 + Q_2)}{4\pi \epsilon_0}\dfrac{1}{r}$ and so at the outer shell it is $V(R_2) = \dfrac{(Q_1 + Q_2)}{4\pi \epsilon_0}\dfrac{1}{R_2}$ The potential of the inner shell relative to the outer shell is $ \dfrac{Q_1}{4\pi \epsilon_0}\left (\dfrac{1}{R_1}-\dfrac{1}{R_2}\right )$ So the potential of the inner shell is $V(R_1) = \dfrac{(Q_1 + Q_2)}{4\pi \epsilon_0}\dfrac{1}{R_2}+\dfrac{Q_1}{4\pi \epsilon_0}\left (\dfrac{1}{R_1}-\dfrac{1}{R_2}\right ) = \dfrac{1}{4\pi \epsilon_0}\left (\dfrac{Q_1}{R_1}+\dfrac{Q_2}{R_2}\right )$ If the charges are positive the potentials look something like this.
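The closed form can be sanity-checked numerically by integrating the piecewise field from infinity in to the inner shell. A Python sketch with arbitrary illustrative values, in units chosen so that $1/4\pi\epsilon_0 = 1$:

```python
# Numerical check of the closed form V(R1) = k*(Q1/R1 + Q2/R2), with
# arbitrary illustrative values and units where 1/(4*pi*eps0) = 1.
Q1, Q2, R1, R2 = 2.0, 3.0, 1.0, 2.0

def E(r):
    """Radial field from Gauss's law: only the charge enclosed by r counts."""
    if r >= R2:
        return (Q1 + Q2) / r**2
    if r >= R1:
        return Q1 / r**2
    return 0.0

def midpoint_integral(f, a, b, n):
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) for i in range(n)) * h

# V(R1) = integral of E dr from R1 out to infinity; split at R2 so the
# field's jump there sits on an interval boundary, and truncate at r=5000
# (the discarded tail is (Q1+Q2)/5000 = 1e-3).
V_numeric = (midpoint_integral(E, R1, R2, 20_000)
             + midpoint_integral(E, R2, 5000.0, 200_000))

V_formula = Q1 / R1 + Q2 / R2   # the answer's closed form, = 3.5 here
print(V_numeric, V_formula)     # both close to 3.5
```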
{ "domain": "physics.stackexchange", "id": 61940, "tags": "electromagnetism, potential" }
Numerical values of coupling constants in the SM
Question: Where can one find the numerical values of the various coupling constants (mainly $\lambda$, $g$, $g'$, $g_S$ and $h_t$) in the SM at a fixed renormalization scale $\mu$? They don't all have to be evaluated on the same value of the RG scale of course, I just need them for a numerical simulation. I've tried using the PDG tables but I couldn't find anything useful. Answer: As described in the papers Investigating the near-criticality of the Higgs boson, p.6 and p.13, and On the gauge dependence of the Standard Model vacuum instability scale, p.12, the numerical values of the coupling constants in the $\overline{MS}$ renormalization scheme, computed at 2-loop accuracy, are the following: $\lambda(\mu=M_t)= 0.12710\\ h_t(M_t) = 0.93697\\ g_S (M_t ) = 1.1666\\ g(M_t ) = 0.6483\\ g'(M_t ) = 0.3587 $
{ "domain": "physics.stackexchange", "id": 35502, "tags": "particle-physics, resource-recommendations, standard-model, renormalization, physical-constants" }
Count character 'a' in first n characters of indefinitely repeating string s
Question: Problem statement: Lilah has a string, s, of lowercase English letters that she repeated infinitely many times. Given an integer, n, find and print the number of letter a's in the first n letters of Lilah's infinite string. For example, if the string s = 'abcac' and n = 10, the substring we consider is abcacabcac, the first 10 characters of her infinite string. There are 4 occurrences of a in the substring. Function Description Complete the repeatedString function in the editor below. It should return an integer representing the number of occurrences of a in the prefix of length n in the infinitely repeating string. repeatedString has the following parameter(s): s: a string to repeat n: the number of characters to consider Input Format The first line contains a single string, s. The second line contains an integer, n. Output Format Print a single integer denoting the number of letter a's in the first n letters of the infinite string created by repeating s infinitely many times. Sample Input 0 aba 10 Sample Output 0 7 Explanation 0 The first 10 letters of the infinite string are abaabaabaa. Because there are 7 a's, we print 7 on a new line. Sample Input 1 a 1000000000000 Sample Output 1 1000000000000 Explanation 1 Because all of the first 1000000000000 letters of the infinite string are a, we print 1000000000000 on a new line. My Solution: def repeatedString(s: String, n: Long): Long = { def getCount(str: String): Int = str.groupBy(identity).get('a').map(x => x.length).getOrElse(0) val length= s.length val duplicate: Long = n / length val margin = n % length val numberOccurencesInString = getCount(s) val countInRepetetiveString = numberOccurencesInString * duplicate val numberOfOccurencesInStripedString = getCount(s.take(margin.toInt)) countInRepetetiveString + numberOfOccurencesInStripedString } Answer: Your getCount() method is a little difficult to read, on one long line like that, and way too complicated. s.count(_ == 'a') is both concise and efficient.
It's not clear why the number of s repetitions possible in n is called duplicate. It seems an odd choice for that variable name. Your algorithm is sound, I just find it excessively verbose, especially for a language that prides itself on being both expressive and concise. val sLen = s.length s.count(_ == 'a') * (n/sLen) + s.take((n%sLen).toInt).count(_ == 'a')
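For comparison, the same algorithm in Python (my own port, not part of the original Scala answer):

```python
def repeated_string(s: str, n: int) -> int:
    """Count 'a' in the first n characters of s repeated indefinitely."""
    full_copies, remainder = divmod(n, len(s))
    return s.count("a") * full_copies + s[:remainder].count("a")

print(repeated_string("abcac", 10))   # 4, as in the problem statement
print(repeated_string("aba", 10))     # 7
print(repeated_string("a", 10**12))   # 1000000000000
```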
{ "domain": "codereview.stackexchange", "id": 35286, "tags": "functional-programming, scala, immutability" }
Was the Geocentric Model correct at all?
Question: It's easy to find resources stating that the heliocentric model is right and geocentric is wrong. But how wrong was it? Was it correct in any way? It was built on incorrect assumptions, but despite that - was it of any use to describe the apparent motion of celestial bodies? Was it more accurate for some things, but less accurate for others? Or was it altogether a flop and astronomers couldn't get anything out of it either way? I can only find multiple articles proving the heliocentric model, explaining the geocentric one or claiming that it was simply wrong - but I can't find anything about its accuracy and usefulness. (1) (2) (3) Edit 1: I incorrectly used geocentric model when it seems I wanted to say Ptolemaic model - the one with deferents and epicycles, with Earth as its origin. Ptolemaic model (click for full size) Thank you for clarifying, and sorry for the confusion. The answers provided regarding any other geocentric models are still valid and useful, so this is just a minor errata. Answer: Ptolemy's epicyclic, geocentric model, in use until the Renaissance, was very accurate in terms of predicting the positions of planets and the times of eclipses. What it couldn't account for were things like the correlations between apparent size and phase of Venus, or to properly account for the variation in brightness of the planets. Thus the reason for discarding the geocentric model was not really because it lacked precision, but that it failed to explain various other observational facts, especially after the development of telescopes. No doubt you could tune the Ptolemaic system even further (more epicycles?) to iron out some of the small errors that were revealed by Tycho's positional measurements at the turn of the 16th century, which had a precision unavailable to Ptolemy. However, the advent of Kepler's laws and subsequent explanation by Newton, rendered the geocentric model obsolete. 
As you can judge from (well-written) articles like this one, geocentrism is actually quite hard to kill off observationally, if you are prepared to accept that the universe is arranged "just so".
{ "domain": "astronomy.stackexchange", "id": 4820, "tags": "the-sun, orbit, earth, geocentrism" }
Does a charged capacitor weigh more than uncharged?
Question: I know that charging a capacitor only moves particles from one plate to the other, so the total amount of charge in the capacitor does not change, nor does the total number of particles. However, the charged capacitor does have electrical energy that the uncharged capacitor does not have, and energy has a mass equivalence according to $e=mc^2$, so wouldn't the capacitor weigh a little bit more when charged? A 470 μF capacitor charged to 3.3 V, for instance, would have a stored energy of 2.6 mJ, which has a mass equivalence of 0.03 femtograms, which is the weight of ~1,416,666 carbon atoms? Likewise, a superconducting loop inductor would weigh slightly more when it is energized (has a current flowing through it) vs unenergized because of the mass equivalence of the energy stored in the magnetic field? Answer: Your reasoning is correct. If you store energy in a system, mass-energy equivalence provides the conversion for calculating the matter equivalent which, if you had a weighing device sensitive enough, you could (in principle) measure. For things like charged capacitors and stretched springs and pressurized gas tanks and cans filled with gasoline, the mass difference is too small to measure. Even in the most energetic reactions we can create (nuclear reactors, atomic bombs, fusion) the mass equivalence of the energy release is of order ~a fraction of a percent. The most efficient method for wringing energy out of mass occurs when mass spirals into a black hole. Theoretically, it is possible (in an idealized process) to extract up to ~40% of the mass as energy release in this case.
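The question's arithmetic checks out. A short Python verification, using standard constants ($E = \frac{1}{2}CV^2$, $m = E/c^2$):

```python
# Verifying the question's numbers: E = (1/2) C V^2 and m = E / c^2.
C = 470e-6                    # farads
V = 3.3                       # volts
c = 299_792_458.0             # m/s
m_carbon = 12 * 1.66054e-27   # kg, one carbon-12 atom

E = 0.5 * C * V**2            # stored energy in joules
m = E / c**2                  # mass equivalent in kg
mass_fg = m / 1e-18           # 1 femtogram = 1e-18 kg
atoms = m / m_carbon

print(E)        # ~2.56e-3 J, the "2.6 mJ" in the question
print(mass_fg)  # ~0.028 fg, the "0.03 femtograms"
print(atoms)    # ~1.4 million carbon atoms
```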
{ "domain": "physics.stackexchange", "id": 68604, "tags": "electric-fields, capacitance, mass-energy, inductance, weight" }
Cannot install fuerte from source because kforge.ros.org is down
Question: ardrone@raspberrypi ~ $ rosdep install -ay --os=debian:squeeze ERROR: Rosdep experienced an error: Failed to load a rdmanifest from https://kforge.ros.org/rosrelease/viewvc/sourcedeps/yaml-cpp/yaml-cpp-0.2.5.rdmanifest: <urlopen error [Errno 110] Connection timed out> Please go to the rosdep page [1] and file a bug report with the stack trace below. [1] : http://www.ros.org/wiki/rosdep rosdep version: 0.11.0 Traceback (most recent call last): File "/usr/local/lib/python2.7/dist-packages/rosdep2/main.py", line 136, in rosdep_main exit_code = _rosdep_main(args) File "/usr/local/lib/python2.7/dist-packages/rosdep2/main.py", line 345, in _rosdep_main return _package_args_handler(command, parser, options, args) File "/usr/local/lib/python2.7/dist-packages/rosdep2/main.py", line 435, in _package_args_handler return command_handlers[command](lookup, packages, options) File "/usr/local/lib/python2.7/dist-packages/rosdep2/main.py", line 619, in command_install uninstalled, errors = installer.get_uninstalled(packages, implicit=options.recursive, verbose=options.verbose) File "/usr/local/lib/python2.7/dist-packages/rosdep2/installers.py", line 405, in get_uninstalled resolutions, errors = self.lookup.resolve_all(resources, installer_context, implicit=implicit) File "/usr/local/lib/python2.7/dist-packages/rosdep2/lookup.py", line 391, in resolve_all self.resolve(rosdep_key, resource_name, installer_context) File "/usr/local/lib/python2.7/dist-packages/rosdep2/lookup.py", line 480, in resolve resolution = installer.resolve(rosdep_args_dict) File "/usr/local/lib/python2.7/dist-packages/rosdep2/platforms/source.py", line 216, in resolve raise InvalidData(str(ex)) InvalidData: Failed to load a rdmanifest from https://kforge.ros.org/rosrelease/viewvc/sourcedeps/yaml-cpp/yaml-cpp-0.2.5.rdmanifest: <urlopen error [Errno 110] Connection timed out> Originally posted by Alvar on ROS Answers with karma: 1 on 2014-12-19 Post score: 0 Answer: kforge.ros.org was turned off when 
Willow Garage shut down. It doesn't exist any more. ROS Fuerte is no longer supported. You should look for a newer version of ROS for your Raspberry Pi. Many users have reported successfully installing ROS Groovy, Hydro and Indigo on their Raspberry Pi. Originally posted by ahendrix with karma: 47576 on 2014-12-19 This answer was ACCEPTED on the original site Post score: 0
{ "domain": "robotics.stackexchange", "id": 20394, "tags": "ros, ros-fuerte, kforge" }
Dipole field issue in particle-mesh Ewald method with periodic boundary conditions
Question: I am working on a thesis that makes great use of molecular-dynamics simulations, and I am trying to understand how the particle-mesh Ewald method works. The problem is, I have difficulty understanding its very premise; now I will explain what I think I have learnt: Long-range electrostatic forces do not converge if periodic boundary conditions are enforced; thus we cannot obtain them by summing pairwise interactions between every charge in the system if we take the periodic images of charges into account. The problem is, I see this as an issue with serious practical consequences, and I cannot imagine how a mathematical rearrangement could solve it. I'll give an example: Let's suppose we have an electrically neutral unit cell that gains an electric dipole moment during the simulation. This unit cell, and its dipole moment, would be instantly reproduced in all image cells. Let's consider the electric potential at an arbitrary point: it would have an infinite number of shells of dipoles around it, with area increasing as $r^2$ ($r$ is the shell radius), and the electric field dipole component would decrease as $r^{-2}$; the sum does not converge, so we would have an infinite electric potential at every point! Am I missing something? I cannot see how PME or Ewald summation, or any other algorithm, can solve a physical issue, unless those methods somehow impose additional boundary conditions. But I don't see how. Can you help me understand? Thank you in advance. EDIT: I was wrong about the infinite potential, because there is a cosine term in the dipole component that zeroes the potential in my proposed shell-by-shell calculation. Anyway, if we consider the electric field instead, we have it falling off as $r^{-3}$; by adding the field produced by each shell we obtain a series of $1/r$ terms, which still diverges at infinity, so my problem is still unsolved. Answer: I solved the issue. I was actually missing something.
I didn't notice that the electric field at the center of a spherical shell of ideal dipoles is zero. I think it is not obvious if you look at the formula, but it can be demonstrated by taking the continuous limit and integrating over the entire shell. Since we work with neutral simulation boxes (of course), and the other multipole components of the electric field are short-range, I think we can say that the total field converges. So the divergence is a mathematical issue indeed, not a physical one.
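That shell integral can also be checked numerically. A Python sketch (my own check, using the ideal-dipole field in units where $1/4\pi\epsilon_0 = 1$): summing the z-component of the field at the centre, over a sphere covered uniformly by dipoles all aligned along z, gives zero up to discretisation error.

```python
import math

# Numerical version of the shell integral: dipoles p = p*z_hat spread
# uniformly over a sphere of radius R; units with 1/(4*pi*eps0) = 1.
# By symmetry the x and y components cancel ring by ring, and the phi
# integral leaves each ring at polar angle t contributing a z-field
# proportional to 2*pi*sin(t) * p*(3*cos(t)**2 - 1)/R**3.
R, p = 1.0, 1.0
n = 4000
h = math.pi / n

Ez = sum(
    2 * math.pi * math.sin(t) * p * (3 * math.cos(t) ** 2 - 1) / R**3 * h
    for t in ((i + 0.5) * h for i in range(n))
)
print(Ez)  # ~0, confirming the shell of dipoles produces no field at its centre
```

Analytically the same thing follows from $\int_{-1}^{1}(3u^2 - 1)\,du = 0$ with $u = \cos\theta$.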
{ "domain": "physics.stackexchange", "id": 37634, "tags": "electrostatics, potential, computational-physics, boundary-conditions, dipole" }