map_server error when loading map
Question: I have been trying to do autonomous navigation with a pre-constructed map. I have map files called mylaserscan1503201800.pgm and mylaserscan1503201800.yaml. Here is the move_base_amcl.launch code I use: <launch> <!-- Start the map server --> <node pkg="map_server" name="map_server" type="map_server" args="`rospack find follow_me_2dnav`/launch/mylaserdata_1503201800.yaml" /> <!-- Run AMCL --> <include file="$(find amcl)/examples/amcl_diff.launch"> <param name="transform_tolerance" value="0.2" /> <param name="recovery_alpha_slow" value="0.001" /> <param name="use_map_topic" value="false" /> <param name="laser_min_range" value="1.0" /> <param name="laser_max_range" value="7.0" /> <param name="laser_likelihood_max_dist" value="2.0" /> <param name="odom_model_type" value="diff" /> <param name="odom_frame_id" value="odom" /> <param name="base_frame_id" value="base_link" /> <param name="global_frame_id" value="map" /> </include> <!-- Start navigation stack --> <node pkg="move_base" name="move_base" type="move_base" respawn="false" output="screen" > <rosparam command="load" file="$(find follow_me_2dnav)/params/costmap_common_params.yaml" ns="global_costmap"/> <rosparam command="load" file="$(find follow_me_2dnav)/params/costmap_common_params.yaml" ns="local_costmap" /> <rosparam command="load" file="$(find follow_me_2dnav)/params/local_costmap_params.yaml" /> <rosparam command="load" file="$(find follow_me_2dnav)/params/global_costmap_params.yaml"/> <rosparam command="load" file="$(find follow_me_2dnav)/params/base_local_planner_params.yaml" /> <param name="controller_frequency" type="double" value="20.0" /> <param name="planner_patience" value="5.0" /> <param name="controller_patience" value="15.0" /> <param name="conservative_reset_dist" value="5.0" /> <param name="recovery_behavior_enabled" value="true" /> <param name="clearing_rotation_allowed" value="true" /> <param name="shutdown_costmaps" value="false" /> <param name="oscillation_timeout" value="0.0" /> 
<param name="oscillation_distance" value="0.5" /> <param name="planner_frequency" value="0.0" /> </node> </launch> Before running this, I run another launch file that starts the drivers for my mobile robot, Microsoft Kinect, and depthimage_to_laserscan. When I run the move_base_amcl.launch file, I get the following errors. NODES / amcl (amcl/amcl) map_server (map_server/map_server) move_base (move_base/move_base) ROS_MASTER_URI=http://192.168.0.102:11311 core service [/rosout] found process[map_server-1]: started with pid [20673] [ERROR] [1434060433.039753815]: USAGE: map_server <map.yaml> map.yaml: map description file DEPRECATED USAGE: map_server <map> <resolution> map: image file to load resolution: map resolution [meters/pixel] [map_server-1] process has died [pid 20673, exit code 255, cmd /opt/ros/hydro/lib/map_server/map_server `rospack find follow_me_2dnav`/launch/mylaserdata_1503201800.yaml __name:=map_server __log:=/home/ecejames01/.ros/log/96e36838-1084-11e5-afa8-bc7737e7db9b/map_server-1.log]. log file: /home/ecejames01/.ros/log/96e36838-1084-11e5-afa8-bc7737e7db9b/map_server-1*.log process[amcl-2]: started with pid [20674] process[move_base-3]: started with pid [20679] [ INFO] [1434060434.299816244]: Requesting the map... [ WARN] [1434060434.311131134]: Request for map failed; trying again... [ WARN] [1434060434.818731335]: Request for map failed; trying again... Your help in finding my issue is greatly appreciated. Thanks in advance. UPDATE: I noticed that within my mylaserdata_1503201800.yaml file the image parameter was map.pgm rather than mylaserdata_1503201800.pgm, which I corrected. I was able to get the map loaded when running it from the command line: rosrun map_server map_server `rospack find follow_me_2dnav`/launch/mylaserdata_1503201800.yaml but I was still unable to get it to load from the launch file. 
Originally posted by sealguy77 on ROS Answers with karma: 323 on 2015-06-11 Post score: 0 Answer: In the launch file, instead of <node pkg="map_server" name="map_server" type="map_server" args="`rospack find follow_me_2dnav`/launch/mylaserdata_1503201800.yaml" /> use this: <node pkg="map_server" name="map_server" type="map_server" args="$(find follow_me_2dnav)/launch/mylaserdata_1503201800.yaml" /> For more details: link text $(find pkg) is the right way to resolve a package path inside a launch file; backtick command substitution is a shell feature, which is why the rosrun command worked from the terminal but roslaunch passed the backticked string through literally. Originally posted by sudhanshu_mittal with karma: 311 on 2015-06-11 This answer was ACCEPTED on the original site Post score: 1 Original comments Comment by sealguy77 on 2015-06-12: Thanks. I originally tried this method, but was not successful. I had not gone back to try it after correcting my .yaml file.
{ "domain": "robotics.stackexchange", "id": 21894, "tags": "navigation, mapping, map-server, load" }
How to improve my PHP adding script?
Question: I have a script, which updates my table's column and writes an id to it. I need to check whether the column is empty or not. If it is not empty, I prepend a comma (,) first. $subs = mysql_fetch_array(mysql_query("SELECT subscribed_user_id FROM users WHERE user_id=".(int)$_GET['user'])); $subs_array = array(); $subs_array=explode(',', $subs['subscribed_user_id']); if(!in_array($_COOKIE['user_id'], $subs_array)) { if($subs['subscribed_user_id']=='') { $add=''; } else { $add = $subs['subscribed_user_id'].','; } mysql_query("UPDATE users SET subers=subers+1, subscribed_user_id='".$add.$_COOKIE['user_id']."' WHERE user_id=".(int)$_GET['user']); } I have an idea: always append a comma. But then, when I read the column back, I wouldn't use the full length of the array but, for example, array.length - 2, and I think that is not OK, so I need to know how I can improve this script. Answer: With regard to your question about adding the comma, just do $new_user_id = $_COOKIE['user_id']; if ($subs['subscribed_user_id'] !== '') { $new_user_id = $subs['subscribed_user_id'] . ',' . $new_user_id; } With that said, the details of how you add the comma are not the issue here. I began rewriting the code, but since the db interactions make the code hard to test and work with, I honestly couldn't be bothered to properly finish it. You should: wrap the functionality in an actual function taking parameters, to remove the use of $_ globals; stop using the ancient and deprecated mysql extension (in fact, don't use the much better mysqli either; at the very least, adopt PDO, or better yet, a tool that removes the low-level details of managing the DB); fix the injection vulnerability in your update query; and consider whether you really should be comma-separating values in the column. Perhaps you should instead add a new row? Some code I started writing, but didn't finish because I have better things to do than setting up the db I would need. 
function update_user_id($user_id, $subs, $cookie) { $subs = mysql_fetch_array(mysql_query("SELECT subscribed_user_id FROM users WHERE user_id = " . (int) $user_id)); $subs_array = explode(',', $subs['subscribed_user_id']); if (!in_array($cookie['user_id'], $subs_array)) { $new_user_id = $cookie['user_id']; if ($subs['subscribed_user_id'] !== '') { $new_user_id = $subs['subscribed_user_id'] . ',' . $new_user_id; } mysql_query("UPDATE users SET subers = subers + 1, subscribed_user_id = '" . $new_user_id . "' WHERE user_id = ". (int) $user_id); } }
{ "domain": "codereview.stackexchange", "id": 5009, "tags": "php, mysql" }
How to find expectation value of an operator for a given state?
Question: The normalized wave functions $\Psi_1$ and $\Psi_2$ correspond to the ground state and the first excited state of a particle in a potential. The operator $\hat{A}$ acts on the wave functions as: $$\hat{A}\Psi_1=\Psi_2\text{ and }\hat{A}\Psi_2=\Psi_1$$ The expectation value of the operator $\hat{A}$ for the state $\hat{A}=(3\Psi_1+4\Psi_2)/5$ is: (A) 0 (B) -0.32 (C) 0.75 (D) 0.96 I am not able to understand how to determine the expectation value when the ground state and first excited state are given. Answer: I'm not sure why the state is called $\hat A$ since $\hat A$ is also an operator, but if you call the state $|\psi\rangle$ such that $|\psi\rangle=(3|\psi_1\rangle+4|\psi_2\rangle)/5$ then the expectation value of $\hat A$ is given by $$\langle\hat A\rangle=\langle\psi|\hat A|\psi\rangle$$ Since $\hat A$ is linear (most quantum operators are linear) this can be expanded as follows: $$\tfrac{1}{5}\left(3\langle\psi_1|+4\langle\psi_2|\right)\hat A\tfrac{1}{5}\left(3|\psi_1\rangle+4|\psi_2\rangle\right)= \\\tfrac{1}{25}\left(9\langle\psi_1|\hat A|\psi_1\rangle+12\langle\psi_1|\hat A|\psi_2\rangle+12\langle\psi_2|\hat A|\psi_1\rangle+16\langle\psi_2|\hat A|\psi_2\rangle\right)$$ You can now replace $\hat A|\psi_1\rangle$ by $|\psi_2\rangle$ and $\hat A|\psi_2\rangle$ by $|\psi_1\rangle$. What do you get as an expectation value now? Hint: what are the values of $\langle\psi_1|\psi_1\rangle$, $\langle\psi_2|\psi_2\rangle$ and $\langle\psi_1|\psi_2\rangle$?
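The arithmetic can be verified numerically (a sketch added for illustration, not part of the original answer): in the orthonormal basis $\{\psi_1, \psi_2\}$, the rule $\hat A\psi_1=\psi_2$, $\hat A\psi_2=\psi_1$ makes $\hat A$ the swap matrix, and the state becomes the vector $(3, 4)/5$:

```python
import numpy as np

A = np.array([[0.0, 1.0],
              [1.0, 0.0]])       # A swaps psi_1 and psi_2
psi = np.array([3.0, 4.0]) / 5   # normalized: 9/25 + 16/25 = 1

expectation = psi @ A @ psi      # <psi|A|psi> (coefficients are real)
print(expectation)               # 0.96, i.e. option (D)
```

Only the cross terms survive, exactly as the hint suggests, since $\langle\psi_1|\psi_2\rangle=0$.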
{ "domain": "physics.stackexchange", "id": 49449, "tags": "quantum-mechanics, homework-and-exercises, operators, wavefunction" }
AI that maximizes the storage of rectangular parallelepipeds in a bigger parallelepiped
Question: As you can see in the title, I'm trying to program an AI in Java that would help someone optimize his storage. The user has to enter the size of his storage space (a box, a room, a warehouse, etc.) and then enter the size of the items he has to store in this space (note that everything must be a rectangular parallelepiped), and the AI should find the best position for each item such that space is optimized. Here is a list of what I started to do: I asked the user to enter the size of the storage space (units are trivial here except for the computing cost of the AI, later on, I'm guessing), telling him that the values will be rounded down to the unit. I started by creating a 3-dimensional array of integers representing the storage space's volume, using the 3 values taken earlier, filling it with 0s, where 0s would later represent free space and 1s occupied space. Then, I store in another multidimensional array the sizes of the items he has to store. And that's where the AI part should start. The first thing the AI should do is check that the sum of all the items' volumes doesn't surpass the storage space's volume. But then there are so many things to do and so many possibilities that I get lost in my thoughts and don't know where to start... In conclusion, can anyone give me the proper terms of this problem in AI literature, as well as a link to an existing work of this kind? Thanks Answer: In the literature this is known as the 3D bin-packing (or container-loading) problem; it is NP-hard, so exact search is only practical for small instances. A simple approach that gives a good baseline for such problems is simulated annealing. The idea is that you do something random. If it improves things, then it is good. If it makes things worse, you still accept it with some probability $p$, where $p$ shrinks over time. The more bad solutions you can rule out beforehand, and the smarter you can encode your problem, the better the solutions simulated annealing will give.
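The accept-or-reject loop described in the answer can be sketched generically (a hypothetical skeleton added for illustration; the toy state, neighbour move, and cost function below are placeholders, and for the packing problem they would be an item placement, a relocation move, and a wasted-space measure):

```python
import math
import random

def simulated_annealing(initial, neighbour, cost, t0=1.0, cooling=0.995, steps=5000, seed=0):
    """Generic simulated annealing; keeps the best state ever visited."""
    rng = random.Random(seed)
    state, best = initial, initial
    t = t0
    for _ in range(steps):
        candidate = neighbour(state, rng)
        delta = cost(candidate) - cost(state)
        # Always accept improvements; accept worse moves with probability e^(-delta/t)
        if delta <= 0 or rng.random() < math.exp(-delta / t):
            state = candidate
        if cost(state) < cost(best):
            best = state
        t *= cooling  # temperature shrinks, so bad moves become rarer over time
    return best

# Toy usage: minimise a 1-D cost with several local minima
cost = lambda x: (x - 3) ** 2 + 2 * math.sin(5 * x)
step = lambda x, rng: x + rng.uniform(-0.5, 0.5)
result = simulated_annealing(0.0, step, cost)
```

Because the best state seen so far is tracked separately, the returned solution is never worse than the starting point.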
{ "domain": "ai.stackexchange", "id": 781, "tags": "optimization, storage" }
Application of Componendo and Dividendo Rule and Dimensional Analysis
Question: Let us consider the following proportion: $$\frac A B=\frac C D$$ where $A$, $B$, $C$, and $D$ are of different dimensions. Can we apply the Componendo and Dividendo rule from algebra as given below? $$\frac{A+B}{A-B}=\frac{C+D}{C-D}$$ Why do I think "no"? Since $A$, $B$, $C$, and $D$ are of different dimensions, and we know that only quantities of the same dimension can be added or subtracted, we must not apply the Componendo and Dividendo. The equation obtained is dimensionally incorrect and hence totally invalid. Why do I think "yes"? In kinematics, I learnt the following formula to determine the displacement in the $t$-th second: $$S_t=u+\frac a 2(2t-1)$$ Clearly the above equation is also dimensionally incorrect, but if I were more careful while deriving the above equation, I would have got the following, where I've taken dimensions into account: $$S_t=u(1\,\mathrm{s})+\frac a 2 (2t-(1\,\mathrm{s}))(1\,\mathrm{s})$$ The above equation is dimensionally correct, unlike the first form. I thought, in a similar manner, if we have some proper dimension-correcting terms (which don't alter the magnitude) in the Componendo and Dividendo like the following: $$\frac{d_1A+d_2B}{d_1A-d_2B}=\frac{d_3C+d_4D}{d_3C-d_4D}$$ where $d_1$, $d_2$, $d_3$ and $d_4$ are appropriate dimension-correcting terms like the $(1\,\mathrm{s})$ in the kinematics equation, the above equation will become dimensionally correct. Under this circumstance, will the result obtained be correct? In short, can we apply the Componendo and Dividendo rule by just neglecting the dimensions as we do in kinematics? If no, kindly explain why the familiar rule from mathematics fails. Answer: The proof of the componendo and dividendo rules is as follows: If $\frac{A}{B}=\frac{C}{D}$, then $\frac{A}{B}\pm1=\frac{C}{D}\pm1$. Since $1=\frac{B}{B}=\frac{D}{D}$, we have that $\frac{A}{B}\pm 1=\frac{A\pm B}{B}$ and $\frac{C}{D}\pm 1=\frac{C\pm D}{D}$. This means that $\frac{A\pm B}{B}=\frac{C\pm D}{D}$. For componendo, use $+$, and for dividendo, use $-$. 
Since $\frac{A+B}{B}=\frac{C+D}{D}$ and $\frac{A-B}{B}=\frac{C-D}{D}$, dividing the two, we have $\frac{\frac{A+B}{B}}{\frac{A-B}{B}}=\frac{\frac{C+D}{D}}{\frac{C-D}{D}}$, so $\frac{A+B}{A-B}=\frac{C+D}{C-D}$. The problem with this proof when units are involved lies at the very beginning. If $\frac{A}{B}$ and $\frac{C}{D}$ are quantities with units, you can only add to them quantities with the same units. In other words, $1$ must have the same units as $\frac{A}{B}$ and $\frac{C}{D}$. But later in the proof, we use the fact that $1=\frac{B}{B}=\frac{D}{D}$. This means that the number $1$ must be unitless, since the units of the $B$ in the numerator are the same as the units of the $B$ in the denominator. These two statements (that the units of $1$ must match the units of $\frac{A}{B}$, and that $1$ must be unitless) are contradictory unless $\frac{A}{B}$ and $\frac{C}{D}$ are also unitless. This means that $A$ and $B$ must have the same units, and $C$ and $D$ must have the same units. So, the rule in question is only applicable if $A$ and $B$ have the same units, and $C$ and $D$ have the same units. Note that the units of $A$ and $B$ are not required to be the same as the units of $C$ and $D$.
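Once the quantities are unitless (or $A, B$ share units and $C, D$ share units, so the ratios are pure numbers), the identity itself is easy to confirm numerically; a quick sketch with made-up values:

```python
# Dimensionless A, B, C, D with A/B == C/D (both ratios equal 2)
A, B = 6.0, 3.0
C, D = 10.0, 5.0
assert abs(A / B - C / D) < 1e-12

lhs = (A + B) / (A - B)   # componendo-dividendo, left side
rhs = (C + D) / (C - D)   # componendo-dividendo, right side
print(lhs, rhs)           # both equal 3.0
```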
{ "domain": "physics.stackexchange", "id": 63109, "tags": "dimensional-analysis, mathematics" }
Van der Waals equation derivation?
Question: He assumed that the intermolecular forces result in a reduced pressure on the walls of the container which has a real gas in it. He also assumed that the molecules are finite in size, which means they do not have the entire volume of the container to themselves; something less than that. So when he accounted for the reduced volume by $V-nb$, why did he not do $P-\frac{an^2}{V^2}$ and instead did the below: He accounted for the reduced volume first with $V-nb$, then he used $$P(V-nb) = nRT$$ and then $$P=\frac{nRT}{V-nb},$$ then said that the real pressure is less than the ideal gas pressure by an amount $\frac{an^2}{V^2}$, from which follows $$P_{real}=\frac{nRT}{V-nb}-\frac{an^2}{V^2}$$ and therefore $$(P_{real}+\frac{an^2}{V^2})(V-nb)=nRT.$$ My question is: what is the logic behind this? What if he did it the other way around, meaning he corrected for the reduced pressure first and then corrected for the reduced volume, which would have given the following steps: Correction for the pressure FIRST (reducing the ideal pressure by an amount $\frac{an^2}{V^2}$) $$V=\frac{nRT}{(P_{ideal}-\frac{an^2}{V^2})}$$ Then correcting the volume by reducing it by an amount $nb$, giving $$V=\frac{nRT}{(P_{ideal}-\frac{an^2}{V^2})}-nb$$ giving $$(P_{ideal}-\frac{an^2}{V^2})(V+nb)=nRT$$ Should the equation of state be $$P_{real}V_{real}=nRT$$ or $$P_{ideal}V_{real}=nRT$$ or $$P_{real}V_{ideal}=nRT$$ ? Answer: The more formal derivation of the van der Waals equation of state utilises the partition function. If we have an interaction $U(r_{ij})$ between particles $i$ and $j$, then we can expand the partition function of the system in powers of the Mayer function, $$f_{ij}= e^{-\beta U(r_{ij})} -1.$$ For $N$ indistinguishable particles the partition function is given by $$\mathcal Z = \frac{1}{N!
\lambda^{3N}} \int \prod_i d^3 r_i \left( 1 + \sum_{j>k}f_{jk} + \sum_{j>k,l>m} f_{jk}f_{lm} + \dots\right)$$ where $\lambda$ is a convenient constant, the de Broglie thermal wavelength, and this expansion is obtained simply from the Taylor series of the exponential. The first term, $\int \prod_i d^3 r_i$, simply gives $V^N$; in the first correction, each of the $\binom{N}{2}$ pair terms gives the same contribution, $$V^{N-1}\int d^3 r \, f(r).$$ The free energy can be derived from the partition function, which allows us to approximate the pressure of the system as $$p = \frac{Nk_B T}{V} \left( 1-\frac{N}{2V} \int d^3r \, f(r) + \dots\right).$$ If we use the van der Waals interaction, $$U(r) = \left\{\begin{matrix} \infty & r < r_0\\ -U_0 \left( \frac{r_0}{r}\right)^6 & r \geq r_0 \end{matrix}\right.$$ and evaluate the integral, we find $$\frac{pV}{Nk_B T} = 1 - \frac{N}{V} \left( \frac{a}{k_B T}-b\right)$$ where $a = \frac23 \pi r_0^3 U_0$ and $b = \frac23 \pi r_0^3$, the latter being directly related to the excluded volume $\Omega = 2b$.
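At high temperature ($\beta U_0 \ll 1$), the Mayer integral for this potential should approach $\int d^3r\, f(r) = 2\left(\frac{a}{k_B T} - b\right)$ with the $a$ and $b$ above. A quadrature sketch with hypothetical parameter values (added here as a numerical cross-check, not part of the original answer):

```python
import math

r0, U0, beta = 1.0, 1.0, 0.01   # hypothetical units; beta*U0 << 1 (high-T limit)

def f(r):
    """Mayer function for the hard-core + r^-6 attraction potential."""
    if r < r0:
        return -1.0                              # exp(-beta * inf) - 1
    return math.expm1(beta * U0 * (r0 / r) ** 6)

# Integral of f over all space: 4*pi * int r^2 f(r) dr (midpoint rule, cutoff R)
R, n = 50.0, 200000
h = R / n
integral = sum(4 * math.pi * ((i + 0.5) * h) ** 2 * f((i + 0.5) * h) for i in range(n)) * h

a = (2 / 3) * math.pi * r0 ** 3 * U0
b = (2 / 3) * math.pi * r0 ** 3
predicted = 2 * (beta * a - b)   # 2*(a/kT - b), since beta = 1/(k_B T)
print(integral, predicted)
```

The hard core contributes $-2b$ exactly, and the attractive tail contributes $+2a\beta$ to leading order in $\beta U_0$.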
{ "domain": "physics.stackexchange", "id": 36312, "tags": "thermodynamics, statistical-mechanics, gas" }
Why are helium nuclei, electrons and gamma radiation given special names?
Question: Wikipedia's article on alpha radiation says: "Alpha particles were first described in the investigations of radioactivity by Ernest Rutherford in 1899." I can't access the paper containing this discovery (because it is still behind a paywall 120 years after it was first published). Perhaps Rutherford et al. didn't yet know what they were looking at, so they just called it "alpha" radiation? Only later did we understand this was due to a helium nucleus? Another related question is why proton and neutron radiation do not get special names. Were they not as easy to observe by the zinc sulfide detection method that gave off a flash of light, or by cloud chambers? Answer: This is history of science and history of education. You have to realize that after the Enlightenment, all university-educated men had to go through learning Greek and Latin. The Greek alphabet starts with α, β, γ, ..., i.e. alpha, beta, gamma. Rutherford simply reached for the Greek letters to label the unknown phenomena he was studying; α and β were named first, and γ came later, as the alphabetical order shows. It was only later that they found out that α and β were not radiation but particles. By then nuclear physics had started to develop properly, and there was no point in relabeling the nuclear decay products once the names had stuck.
{ "domain": "physics.stackexchange", "id": 59591, "tags": "radiation" }
DIV mask implemented in JavaScript/CSS
Question: Mask.html <html> <head> <script type="text/javascript" src="http://code.jquery.com/jquery-1.8.3.min.js"></script> <script type="text/javascript" src="mask.js"></script> <link rel="stylesheet" type="text/css" href="mask.css"> </head> <body> <div id="div1" style="margin: 5px; border: 1px solid red; height: 100px; width: 500px;"></div> <div id="div2" style="margin: 5px; border: 1px solid blue; height: 100px; width: 500px"> <div id="div3" style="margin: 5px; border: 1px solid red; height: 88px; width: 235px; float:left;"></div> <div id="div4" style="margin: 5px; border: 1px solid green; height: 88px; width: 235px; float:left;"></div> </div> </body> </html> Mask.js (function ($) { $.fn.Mask = function () { if ($(this).find('.Mask').length > 0) return null; var __ctrl = $(this)[0]; if (__ctrl.tagName != 'DIV') return null; var containerCssPaddingTop = $(__ctrl).css('padding-top'); var containerCssPaddingRight = $(__ctrl).css('padding-right'); var containerCssPaddingBottom = $(__ctrl).css('padding-bottom'); var containerCssPaddingLeft = $(__ctrl).css('padding-left'); $(__ctrl).css('position', 'relative'); $(__ctrl).css('overflow', 'hidden'); var mask = '<div class="Mask"><div class="MaskContent"><img src="PleaseWait.gif"/></div></div>'; $(__ctrl).prepend(mask); var m = $(this).find('.Mask'); var mc = $(this).find('.MaskContent'); m.css('margin-top', '-' + containerCssPaddingTop); m.css('margin-right', '-' + containerCssPaddingRight); m.css('margin-bottom', '-' + containerCssPaddingBottom); m.css('margin-left', '-' + containerCssPaddingLeft); // The 16 just comes from the fact that the image displayed is 32x32 mc.css('left', m.width()/2 - 16 + "px"); var toReturn = { RemoveMask: function () { m.remove(); } }; return toReturn; }; })(jQuery); mask.css .Mask { background-image: -webkit-radial-gradient(center, ellipse farthest-side, #FFFFFF 0%, #EEEEEE 100%); height: 100%; opacity: 0.8; position: absolute; width: 100%; z-index: 4; } .MaskContent { background: none; 
position: absolute; top: 30%; z-index: 5; } The real code I would like reviewed/criticized is in Mask.js and Mask.css The idea is that this extends jQuery so you can block a div while it is being updated. You would use it by running v = $('#div1').Mask() then, to remove it, one would do v.RemoveMask(); Answer: The code looks fine to me. Just a couple minor points: Why do you change the div's position to relative? If the function is called on a div with absolute positioning, it would mess up the layout. And you should revert the CSS changes on RemoveMask() to what they were when Mask() was called because the div's content might depend on a certain overflow value. If the div changes size while masked, the image will no longer be horizontally centered. I would suggest using CSS instead of JS to center it: .MaskContent { background: none; position: absolute; width: 32px; height: 32px; top: 30%; left: 50%; margin-left: -16px; z-index: 5; } Your mask background currently only supports webkit browsers; you should add the CSS3 variations for the other browsers.
{ "domain": "codereview.stackexchange", "id": 2865, "tags": "javascript, jquery, html, css" }
Rust: Flattening nested struct to Vec
Question: I have a struct which nests other structs like the following: #[derive(Debug)] pub struct Rankings { conferences: Vec<Conference>, } #[derive(Debug)] pub struct Conference { divisions: Vec<Division>, } #[derive(Debug)] pub struct Division { teams: Vec<Team>, } #[derive(Debug, Clone)] pub struct Team { name: String, market: String, } What I want to do is convert a Rankings instance to a Vec<Team>. Here's my solution: fn main() { let mut rankings = Rankings { conferences: vec![ Conference { divisions: vec![ Division { teams: vec![ Team { name: String::from("Raptors"), market: String::from("Toronto"), }, Team { name: String::from("Knicks"), market: String::from("New York"), } ] }, Division { teams: vec![ Team { name: String::from("Bucks"), market: String::from("Milwaukee"), }, Team { name: String::from("Cavaliers"), market: String::from("Cleveland"), } ] }, ] }, ] }; println!("- rankings:\n{:#?}\n", rankings); let mut raw_teams: Vec<Vec<Vec<Team>>> = rankings .conferences .iter_mut() .map(|c| c.divisions.iter_mut().map(|d| d.teams.clone()).collect()) .collect(); println!("- raw_teams:\n{:#?}\n", raw_teams); let flattened_teams = raw_teams .iter_mut() .fold(Vec::new(), |mut acc, val| { acc.append(val); acc }) .iter_mut() .fold(Vec::new(), |mut acc, val| { acc.append(val); acc }); println!("- flattened_teams:\n{:#?}\n", flattened_teams); } Playground link First, I converted Rankings to Vec<Vec<Vec<Team>>> using iter_mut() and map(), then flattened Vec<Vec<Vec<Team>>> to Vec<Team> using iter_mut() and fold(). But I just wrote that code avoiding compile errors, which means the code could be refactored better using idiomatic patterns. I think I might overuse mutability, and the two conversion processes can be simplified using appropriate iterator functions. Thanks for any advice. Answer: First of all, there is no need to use iter_mut() on conferences, as you never change the original rankings. 
As we clone the teams later, we can simply use let raw_teams: Vec<Vec<Vec<Team>>> = rankings .conferences .iter() .map(|c| c.divisions.iter().map(|d| d.teams.clone()).collect()) .collect(); Now that we have a Vec<Vec<Vec<Team>>>, we can call flatten: let flattened_teams: Vec<Team> = raw_teams.into_iter().flatten().flatten().collect(); I used into_iter() as your original code left raw_teams empty. However, we can skip raw_teams entirely if we use flat_map instead of map(…).flatten(): let flattened_teams: Vec<Team> = rankings .conferences .iter() .flat_map(|c| &c.divisions) // need to borrow due to iter() .flat_map(|d| &d.teams) // need to borrow due to iter() .cloned() // clone the elements .collect(); If we don't want to borrow, we can of course just move everything into flattened_teams by simply removing cloned() and &: let flattened_teams: Vec<Team> = rankings .conferences .into_iter() .flat_map(|c| c.divisions) .flat_map(|d| d.teams) .collect(); None of these versions use mut.
{ "domain": "codereview.stackexchange", "id": 33788, "tags": "functional-programming, rust, immutability" }
Proof of a unique homomorphism from an initial object
Question: What is the proof that there is only one homomorphism from an initial object to another object? Answer: To add on top of finrod's answer: note that uniqueness is immediate, since an initial object is by definition one that admits exactly one morphism to every object; so while the existence of an initial object guarantees the uniqueness of the aforesaid morphism, the initial object's existence is itself not guaranteed. As you asked this question under the functional-programming tag, perhaps you meant to ask how to establish the existence of initial algebras for a given algebraic type in a functional programming language. The existence of an initial object is not entirely trivial, but if you know some domain theory then the proof is very similar to the least fixed point theorem for continuous functions over directed CPOs. The canonical reference eludes me right now, but brief Googling/Wikipedia digs up Abramsky's and Jung's chapter on Domain Theory (section 5; this case is the example in 5.1.3) in the Handbook of Logic in CS.
{ "domain": "cstheory.stackexchange", "id": 34, "tags": "functional-programming, algebra, ct.category-theory" }
Setting image margins and location based on orientation
Question: This code is for Android but I guess anyone can take a chance at it. It does what I want it to do; it's just that I am not comfortable with its deep level of nesting and all the value swapping. Can this be optimized for readability or compactness? for (int imagenum = 0; imagenum < 76; imagenum++) { if (orientation.contentEquals("landscape") || orientation.contentEquals("reverse landscape")) { params.height = imageheight; params.width = imagewidth; imageButton.setImageResource(R.drawable.imageback); if (imagenum <= 38) { params.leftMargin = imagenum * xspaceforeachimage; params.topMargin = 0; } else { params.leftMargin = (imagenum - 39) * xspaceforeachimage; params.topMargin = topMargin; } } else if (orientation.contentEquals("portrait") || orientation.contentEquals("reverse portrait")) { params.height = imagewidth; params.width = imageheight; imageButton.setImageResource(R.drawable.imagebacklaid); if (imagenum <= 38) { params.leftMargin = 0; params.topMargin = imagenum * xspaceforeachimage; } else { params.leftMargin = topMargin; params.topMargin = (imagenum - 39) * xspaceforeachimage; } } } Answer: This doesn't save many lines of code, but I think it's more readable and maintainable: boolean landscape = orientation.contains("landscape"); for(int imagenum = 0; imagenum <76; imagenum++){ /* Missing code that assigns params and imageButton using imageNum? */ int shortSideMargin = imagenum * xspaceforeachimage; int longSideMargin = 0; if (imagenum >= 39) { //new row or column, shift back and over shortSideMargin -= 39 * xspaceforeachimage; longSideMargin = topMargin; } if (landscape) { params.height = imageheight; params.width = imagewidth; params.leftMargin = shortSideMargin; params.topMargin = longSideMargin; imageButton.setImageResource(R.drawable.imageback); } else { params.height = imagewidth; params.width = imageheight; params.leftMargin = longSideMargin; params.topMargin = shortSideMargin; imageButton.setImageResource(R.drawable.imagebacklaid); } }
{ "domain": "codereview.stackexchange", "id": 6435, "tags": "java, optimization, android" }
Are eigenfunctions of $E$ also eigenfunctions of $p$?
Question: Given that $\hat{H}\Psi = E\Psi$ and that $E=\frac{p^2}{2m}$, and assuming a non-relativistic system, does this mean that any eigenfunction of energy is also an eigenfunction of momentum? Or does it work the other way around, and every eigenfunction of momentum is an eigenfunction of energy? If not, where does this break down? It seems reasonable to me that two particles with the same kinetic energy and mass would have the same momentum. Even if the momentum is in different directions, I think those would be degenerate states and not strictly count, although I don't really understand why degenerate states are a problem yet. The point is that any wavefunction of definite energy sounds like it would also have a definite momentum, or vice versa, and I can't think of a way that doesn't make sense. Answer: Since you don't specify $\hat H$, the general answer is a resounding no. If there is a potential, say $$ V(r) = -\frac k r, $$ then the energy eigenstates are definitely not momentum eigenstates. Any potential other than $V(x) = c$ breaks translational invariance, so eigenstates of energy cannot have definite momentum. Note that $ V(r) = -\frac k r $ is spherically symmetric (it's the hydrogen atom), and in that case, the solutions are angular momentum eigenstates.
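The translational-invariance argument can be illustrated numerically (a rough sketch, not from the original answer): with a discretized momentum operator on a periodic grid ($\hbar = m = 1$), the kinetic term commutes with $\hat p$, while adding any non-constant potential makes $[\hat H, \hat p] = [\hat V, \hat p] \neq 0$:

```python
import numpy as np

n = 64
dx = 2 * np.pi / n
x = np.arange(n) * dx

# Central-difference momentum operator with periodic boundaries
up = np.roll(np.eye(n), 1, axis=1)    # (up @ f)[j] = f[j+1]  (periodic)
dn = np.roll(np.eye(n), -1, axis=1)   # (dn @ f)[j] = f[j-1]  (periodic)
p = -1j * (up - dn) / (2 * dx)

T = p @ p / 2                 # kinetic energy, a polynomial in p
V = np.diag(np.cos(x))        # some non-constant potential

comm_free = T @ p - p @ T                # free particle: [H, p] = 0
comm_pot = (T + V) @ p - p @ (T + V)     # with potential: [H, p] = [V, p] != 0

print(np.linalg.norm(comm_free))  # numerically zero
print(np.linalg.norm(comm_pot))   # clearly nonzero
```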
{ "domain": "physics.stackexchange", "id": 73178, "tags": "quantum-mechanics, energy, hilbert-space, operators, momentum" }
Hydrostatic Pressure
Question: Need help with this. Part of a sea defense consists of a section of a concrete wall 4 metres high and 6 metres wide. a) Calculate the resultant thrust on the section when the sea reaches a height of 3 metres relative to the base of the section. b) Calculate the overturning moment experienced by the section about point A. This is my answer. Have I done this correctly? I am not sure if I am on the right track with this or not, especially the overturning moment. a) Thrust: density of water $\rho = 1000\ \mathrm{kg/m^3}$, width $w = 6\ \mathrm{m}$, height of fluid $h = 3\ \mathrm{m}$. $$F = \rho g \frac{h}{2} A = 1000 \times 9.81 \times 1.5 \times (6 \times 3) = 264{,}870\ \mathrm{N}$$ b) Overturning moment: $$M = F \frac{h}{3} = 264{,}870 \times \frac{3}{3} = 264{,}870\ \mathrm{N\,m}$$ Answer: Usually with this kind of problem, what I would do is analyse which effect is caused by which source. In your specific problem, there is water under gravitational load. The gravitational field imposes a vertical pressure distribution inside the water. This pressure acts on the wall. Thus the first answer I would give is the distribution of pressure $p$ as a function of the gravity of earth $g$, the density of water $\rho$ and the vertical coordinate $z$ (I would choose $z=0$ at sea level and positive values of $z$ going downward, but you can choose a different coordinate system). This gives you a function $p(z)$. This is not directly asked for, but is required to answer the two questions. In the second step, since the load is distributed, I would integrate this distributed load (force per unit length) along the height: $$ F=\int_0^h w \cdot p(z) \, \mathrm{d}z $$ with width $w=6\,\mathrm{m}$ and height $h=3\,\mathrm{m}$. The distributed load is the product of pressure and wall width, because the pressure does not vary with width. If it did, I would have to integrate not only along the height axis $z$, but also along the width axis. 
Finally, to calculate the moment, I would integrate the local torque, which is the product of the distributed load $w \cdot p(z)$ and the lever arm length $(h-z)$: $$ M=\int_0^h w \cdot p(z) \cdot (h-z) \, \mathrm{d}z \quad\mbox{.} $$ Now you can calculate everything and check if my results are the same as yours. :-)
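Carrying out these two integrals with $p(z) = \rho g z$ does reproduce the asker's numbers; a quick numerical sketch (added for verification, not part of the original answer):

```python
rho, g, w, h = 1000.0, 9.81, 6.0, 3.0   # water density, gravity, wall width, water height

n = 100000
dz = h / n
# Midpoint-rule quadrature of F = int w*rho*g*z dz and M = int w*rho*g*z*(h-z) dz
F = sum(w * rho * g * ((i + 0.5) * dz) * dz for i in range(n))
M = sum(w * rho * g * ((i + 0.5) * dz) * (h - (i + 0.5) * dz) * dz for i in range(n))

print(F)  # ~ 264870 N    (closed form: rho*g*w*h^2/2)
print(M)  # ~ 264870 N*m  (closed form: rho*g*w*h^3/6 = F*h/3; equal to F here since h/3 = 1 m)
```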
{ "domain": "engineering.stackexchange", "id": 1909, "tags": "mechanical-engineering, hydrostatics" }
K-nearest neighbours in C# for large number of dimensions
Question: I'm implementing the K-nearest neighbours classification algorithm in C# for a training and testing set of about 20,000 samples and 25 dimensions. There are only two classes, represented by '0' and '1' in my implementation. For now, I have the following simple implementation: // testSamples and trainSamples consists of about 20k vectors each with 25 dimensions // trainClasses contains 0 or 1 signifying the corresponding class for each sample in trainSamples static int[] TestKnnCase(IList<double[]> trainSamples, IList<double[]> testSamples, IList<int[]> trainClasses, int K) { Console.WriteLine("Performing KNN with K = "+K); var testResults = new int[testSamples.Count()]; var testNumber = testSamples.Count(); var trainNumber = trainSamples.Count(); // Declaring these here so that I don't have to 'new' them over and over again in the main loop, // just to save some overhead var distances = new double[trainNumber][]; for (var i = 0; i < trainNumber; i++) { distances[i] = new double[2]; // Will store both distance and index in here } // Performing KNN ... for (var tst = 0; tst < testNumber; tst++) { // For every test sample, calculate distance from every training sample Parallel.For(0, trainNumber, trn => { var dist = GetDistance(testSamples[tst], trainSamples[trn]); // Storing distance as well as index distances[trn][0] = dist; distances[trn][1] = trn; }); // Sort distances and take top K (?What happens in case of multiple points at the same distance?) 
var votingDistances = distances.AsParallel().OrderBy(t => t[0]).Take(K); // Do a 'majority vote' to classify test sample var yea = 0.0; var nay = 0.0; foreach (var voter in votingDistances) { if (trainClasses[(int)voter[1]] == 1) yea++; else nay++; } if (yea > nay) testResults[tst] = 1; else testResults[tst] = 0; } return testResults; } // Calculates and returns square of Euclidean distance between two vectors static double GetDistance(IList<double> sample1, IList<double> sample2) { var distance = 0.0; // assume sample1 and sample2 are valid i.e. same length for (var i = 0; i < sample1.Count; i++) { var temp = sample1[i] - sample2[i]; distance += temp * temp; } return distance; } This takes quite a bit of time to execute. On my system it takes about 80 seconds to complete. How can I optimize this, while ensuring that it would also scale to larger number of data samples? As you can see, I've tried using PLINQ and parallel for loops, which did help (without these, it was taking about 120 seconds). What else can I do? I've read about KD-trees being efficient for KNN in general, but every source I read stated that they're not efficient for higher dimensions. I also found this Stack Overflow discussion about this, but it seems like this is 3 years old, and I was hoping that someone would know about better solutions to this problem by now. I've looked at machine learning libraries in C#, but for various reasons I don't want to call R or C code from my C# program, and some other libraries I saw were no more efficient than the code I've written. Now I'm just trying to figure out how I could write the most optimized code for this myself. I cannot reduce the number of dimensions using PCA or something. For this particular model, 25 dimensions are required. 
Also, I did track the execution time using a profiler, and it seems that more than 60% of the runtime is spent in the GetDistance() function, which is why I was wondering whether there exists a different algorithm using a different data structure that does this more optimally. Answer: var distances = new double[trainNumber][]; for (var i = 0; i < trainNumber; i++) { distances[i] = new double[2]; // Will store both distance and index in here } This is a code smell. You shouldn't use a jagged double array to store an array of distances and indexes. Despite the comment, what you're doing is unclear, and it's very confusing to have a variable named distances that stores both distances and indexes. The only justification for this would be if you actually had hard profiling evidence that it caused a significant speedup. Make a separate class (or struct, if you're worried about overhead) with members double distance; int index; and then trainInfo (the former distances) should just be a trainNumber-sized array of that type. Also, since you only need the top K elements, you don't need to sort the whole list (n log n time). You ought to be able to do it with a partial sort (actual code sample) which is almost linear-speed. There are also parallel algorithms for this; you could probably get a speedup using PLINQ with a custom aggregate. On to the next refactoring. foreach (var voter in votingDistances) { if (trainClasses[(int)voter[1]] == 1) yea++; else nay++; } This code is also crying out for LINQ. How about yea = votingDistances.AsParallel().Count(voter=>trainClasses[voter.index] == 1); nay = votingDistances.Count - yea; And I'd refactor this: for (var i = 0; i < sample1.Count; i++) { var temp = sample1[i] - sample2[i]; distance += temp * temp; } to this: var differences = sample1.AsParallel().Zip(sample2,(s1,s2)=>s1-s2); distance = differences.Sum(x=>x*x); though you could get improved performance from a custom aggregate.
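On the partial-sort point: selecting only the K smallest distances is cheaper than a full O(n log n) sort. The post's code is C#, but the shape of the fix is language-independent; here is a small Python sketch (toy data, hypothetical function name) using a bounded selection via `heapq.nsmallest` instead of ordering all n distances:

```python
import heapq

def knn_classify(train_samples, train_classes, test_sample, k):
    """Classify one test sample by majority vote among its k nearest neighbours.

    heapq.nsmallest keeps only k candidates at a time (a bounded heap),
    which is the 'partial sort' the review recommends instead of
    sorting the entire distance array.
    """
    def sq_dist(a, b):
        # Squared Euclidean distance -- same trick as GetDistance() in the post:
        # sqrt is monotonic, so it can be skipped when only ranking matters.
        return sum((x - y) ** 2 for x, y in zip(a, b))

    nearest = heapq.nsmallest(
        k,
        range(len(train_samples)),
        key=lambda i: sq_dist(train_samples[i], test_sample),
    )
    yea = sum(1 for i in nearest if train_classes[i] == 1)
    return 1 if yea > k - yea else 0

# Toy example: class 1 clusters near (1, 1), class 0 near (0, 0).
train = [(0.0, 0.1), (0.1, 0.0), (1.0, 1.1), (0.9, 1.0), (1.1, 0.9)]
labels = [0, 0, 1, 1, 1]
print(knn_classify(train, labels, (1.0, 1.0), k=3))  # 1
```

The same bounded-selection idea maps onto the suggested C# refactoring (a struct of distance and index plus a partial sort or size-K heap) without changing the classification result.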
{ "domain": "codereview.stackexchange", "id": 29183, "tags": "c#, performance, machine-learning" }
Why does scattering depend on spin?
Question: I'm reading about giant magnetoresistance (GMR), and the most important feature of this phenomenon is the spin dependence of the electron scattering inside a magnetised lattice. However, I don't quite understand why electrons with spin parallel to the magnetisation (hence, to the majority-spin direction of the electrons) scatter less than those with spin antiparallel to the magnetisation (so, parallel to the minority-spin). I'd appreciate it if someone could clear this up for me. Answer: A qualitative explanation is given in this GMR link: The probability of scattering depends upon the number of available quantum states for the electron to scatter into, and that depends strongly on the relative direction of the electron's spin and the magnetic field inside the ferromagnet. The more states that are available, the higher the probability of scattering, and the higher the electrical resistance. If the spin and magnetic field are anti-parallel, more states are available for electron scattering, so the electrical resistance is larger than if the spin and the magnetic field are parallel (see diagram). This is the basic idea of spin-dependent scattering. If you have access to a library, here is a publication on scattering cross sections of electrons: The spin dependences of the inelastic scattering cross section (inverse mean free path) and the elastic scattering cross section are calculated for polarized electrons scattered from oriented atoms in the Born-Ochkur approximation with a view to understanding spin-dependent scattering in ferromagnets. In the medium-to-high-energy range (≳ 100 eV) the elastic scattering for parallel spins is greater than for antiparallel spins, while the inelastic cross section for parallel spins is less than for antiparallel. Elastic spin dependence appears to be greater than inelastic, and the exchange effects fall off rapidly with increasing energy. The relation of this atomistic scattering approach to solid-state models is discussed.
In a classical frame I would think of it as follows: electrons with spin antiparallel to the main field tend to be attracted to the magnetic domains that create the field, and thus slow down, whereas the parallel ones are repelled from all sides and can move ahead unimpeded.
{ "domain": "physics.stackexchange", "id": 11282, "tags": "quantum-spin, scattering" }
What will be the reading of a barometer inside an airtight box?
Question: Suppose that I enclose a barometer inside an airtight (but not vacuum) box. Will the pressure reading change? Answer: Yes, it can change. This depends on how deformable the walls are, but no matter how strong, if the external pressure is high enough then the walls will bend inward and compress the air inside, raising the internal pressure as measured by the barometer. As usual, it's useful to consider the limiting cases to examine this behaviour. On one side, suppose that your 'air sealed box' is simply an impermeable membrane, maybe hung on a wooden frame or something. Then if the atmospheric pressure increases, this will be transferred to the inside. On the opposite side, suppose that you make a sturdy metal chamber where the walls themselves resist deformation. In this case, then everyday changes in atmospheric pressure are unlikely to change the internal pressure much, but one can turn things up quantitatively (but not qualitatively!) by submersing the box to the bottom of the ocean. There the external pressure won't crush the box (we hope) but it will compress some parts slightly, and that will raise the internal pressure as measured by the barometer.
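The wall-compliance argument can be made quantitative with Boyle's law, assuming isothermal compression of the trapped air (a simplification I'm adding; the answer itself stays qualitative): if the external pressure squeezes the box volume from $V$ to $V - \Delta V$, the internal reading rises to $pV/(V - \Delta V)$.

```python
def internal_pressure(p0, v0, dv):
    """Isothermal ideal-gas estimate (Boyle's law): p0 * v0 = p1 * (v0 - dv)."""
    return p0 * v0 / (v0 - dv)

# A stiff box losing 0.1% of its volume barely changes the barometer reading;
# a compliant one losing 20% changes it substantially.
print(internal_pressure(101325.0, 1.0, 0.001))  # ~101426 Pa
print(internal_pressure(101325.0, 1.0, 0.2))    # ~126656 Pa
```

This matches the two limiting cases in the answer: a membrane-like box transmits nearly all of the external change, while a rigid chamber transmits almost none.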
{ "domain": "physics.stackexchange", "id": 44570, "tags": "pressure" }
Demographic model for admixed African Americans
Question: I am fairly new to genomics and population genetics, and I am reading this article, 'Demographic history and rare allele sharing among human populations': https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3142009/pdf/pnas.201019276.pdf In this article they propose a demographic model in Figure 4 on page 11986. However, I am confused about how African Americans, who are known to have both African and European ancestry, fit into this demographic model. And if I am to consider the African American population, what are the split time between the two populations, the time they merge into an admixed population, and the admixture proportion? My apologies if it's a really basic question, but any insights would be appreciated. Answer: Just to further my answer, the problem with molecular dating within a species (I don't do humans [excepting immunology], but it's the same theory) is that it requires an assumption of clonality (bifurcating trees). Mendelian species recombine towards panmixia, which will make any attempt to date via de novo point mutation appear far younger than it really is, or, for panmixia, incalculable (it breaks the central assumption). Dating through admixture requires a lot of extra data and/or assumptions, for example the generation time, putative admixture per generation, and shifts in population dynamics. Maybe you can calculate some of those requirements, but a lot will remain assumptions. I have not read the paper you suggested, because I'm not involved in human genetics [outside immunology], but it is possible the authors may (and I stress may) have simplified an assumption that could be difficult to estimate. It is not clear what the objective of the paper was. Quantitative analysis is extremely important in modern population genetics/evolutionary theory because historically it was overlooked and resulted in lots of hypotheses which were difficult to test.
{ "domain": "bioinformatics.stackexchange", "id": 974, "tags": "phylogenetics" }
Is 99% Isopropyl Alcohol sold in stores really 99%?
Question: My understanding is that an azeotropic mixture of isopropanol and water is 91%. This makes sense as to why there are so many brands of rubbing alcohol sold at 91%. There are also some sold as 99%, which as far as I know is impossible to reach through distillation. Is there another process that the manufacturers are using to reach 99%, or is it just marketing lies? Answer: You certainly can get high purity isopropanol, e.g. 99.99% from Fisher Scientific. However, that is about US$50/l for high performance liquid chromatography (HPLC) grade alcohol. The isopropanol can be synthesized without water by hydrogenation of acetone, or water could be removed from the azeotrope using a desiccant. Membrane separation is also efficient. At home, table salt ($\ce{NaCl}$) can be used to remove some water. That said, there are a few reasons it's unlikely to find isopropanol at much greater strength than 91% at your local pharmacy: It is more expensive. It offers little advantage in home use over 91%. In fact, 70% ethanol or isopropanol is a more effective antiseptic than 91%. (Reagent grade alcohol is recommended for cleaning precision optics.) Water from air would slowly infiltrate most thin-walled plastic bottles, causing swelling (and possibly rupture) of the container. After opening, water would be more rapidly absorbed until it approached 91%.
{ "domain": "chemistry.stackexchange", "id": 11629, "tags": "concentration, alcohols, boiling-point, vapor-pressure, distillation" }
C++ code for Priority Search Tree
Question: I'm trying to implement a Priority Search Tree(PST) following this note: http://cs.brown.edu/courses/cs252/misc/resources/lectures/pdf/notes07.pdf. For convenience, I copied the related peuso-code here: and the note has also provided an example for creating a PST, I just copied the final tree here: The following is my code. #include <iostream> #include <vector> #include <algorithm> #include <set> #include <queue> template <typename E> struct Point2D { static int used_count; E _x; E _y; Point2D() : _x(0), _y(0) { used_count++; std::cout << "Point2D(), used_count = " << used_count << "\n"; } Point2D(const E& x, const E& y) : _x(x), _y(y) { used_count++; std::cout << "Point2D(const E& x, const E& y), used_count = " << used_count << "\n"; } Point2D(int arr[]) : _x(arr[0]), _y(arr[1]) { used_count++; std::cout << "Point2D(int arr[]), used_count = " << used_count << "\n"; } Point2D(const Point2D& other) { // copy constructor used_count++; std::cout << "Point2D(const Point2D& other), used_count = " << used_count << "\n"; _x = other._x; _y = other._y; } Point2D(Point2D& other) { // copy constructor used_count++; std::cout << "Point2D(Point2D& other), used_count = " << used_count << "\n"; _x = other._x; _y = other._y; } Point2D& operator=(const Point2D& other) { // assignment operator _x = other._x; _y = other._y; return *this; } Point2D& operator=(Point2D& other) { // assignment operator _x = other._x; _y = other._y; return *this; } Point2D(Point2D&& other) { // move constructor if (this != &other) { used_count++; std::cout << "Point2D(Point2D&& other), used_count = " << used_count << "\n"; _x = other._x; _y = other._y; } } virtual ~Point2D() { used_count--; std::cout << "~Point2D(), used_count = " << used_count << "\n"; } Point2D operator-(const Point2D& other) { return Point2D(_x - other._x, _y - other._y); } Point2D operator-(Point2D& other) { return Point2D(_x - other._x, _y - other._y); } bool operator<(const Point2D& other) { return (_y < other._y) || (_y 
== other._y && _x < other._x); } }; template <typename E> class PSTMultiset { public: struct Element { std::string _name; Point2D<E> _point; Element(const std::string& name, int arr[]) : _name(name), _point(arr) {} }; struct ElementComparatorY { bool operator()(const Element& l, const Element& r) { return (l._point._y > r._point._y) || (l._point._y == r._point._y && l._point._x > r._point._x); } }; struct ElementComparatorX { bool operator()(const Element& l, const Element& r) { return (l._point._x < r._point._x) || (l._point._x == r._point._x && l._point._y < r._point._y); } }; std::multiset<Element, ElementComparatorY> _points; typedef typename std::multiset<Element, ElementComparatorY>::iterator Iterator; int _size; PSTMultiset(int size) : _size(size) { std::cout << "PSTMultiset(int size)\n"; } PSTMultiset(std::string names[], E points[][2], int size) : PSTMultiset(size) { // Nx2 points for (int i = 0; i < _size; i++) { _points.insert(Element(names[i], points[i])); } std::cout << "PSTMultiset(std::string names[], E points[][2])\n"; } virtual ~PSTMultiset() { Clear(); std::cout << "~PSTMultiset()\n"; } void Print(const std::string& title) { std::cout << title << "\n"; for (auto iter = _points.begin(); iter != _points.end(); iter++) { const Element& p = *iter; std::cout << p._name << ": (" << p._point._x << ", " << p._point._y << ")\n"; } } class Node { public: Element _element; Node *_parent; Node *_left; Node *_right; Node(const Element& element, Node* parent) : _element(element), _parent(parent), _left(nullptr), _right(nullptr) { } friend std::ostream& operator<<(std::ostream& out, Node* node) { if (node == nullptr) { return out; } if (node->_left != nullptr) { out << "L(" << node->_left->_element._name << ")_"; } out << node->_element._name << ": (" << node->_element._point._x << ", " << node->_element._point._y << ")"; if (node->_right != nullptr) { out << "_R(" << node->_right->_element._name << ")"; } return out; } }; Node *_root; Node* Build(Node* parent, 
std::multiset<Element, ElementComparatorY>& points) { if (points.size() == 0) return nullptr; if (points.size() == 1) { Node *node = new Node(*(points.begin()), parent); return node; } auto first = points.begin(); std::multiset<Element, ElementComparatorX> remainPoints; auto iter = points.begin(); iter++; // remove the first point (the point with greatest Y) for (; iter != points.end(); iter++) { remainPoints.insert(*iter); } auto mid_iter = remainPoints.begin(); std::advance(mid_iter, remainPoints.size() / 2); std::multiset<Element, ElementComparatorY> leftPoints(remainPoints.begin(), mid_iter); std::multiset<Element, ElementComparatorY> rightPoints(mid_iter, remainPoints.end()); Node *node = new Node(*first, parent); node->_left = Build(node, leftPoints); node->_right = Build(node, rightPoints); return node; } void Build() { _root = Build(nullptr, _points); } void Clear() { // clear the tree by level-order traverse if (_root == nullptr) return; std::queue<Node*> q; q.push(_root); int height = 1; int levelSize = q.size(); while (!q.empty()) { Node *node = q.front(); std::cout << node << " "; if (node->_left != nullptr) { q.push(node->_left); } if (node->_right != nullptr) { q.push(node->_right); } delete node; q.pop(); levelSize--; if (levelSize == 0) { height++; levelSize = q.size(); std::cout << "\n"; } } std::cout << "Height: " << height << "\n"; } }; template<> int Point2D<int>::used_count = 0; int main() { std::string names[] = {"A", "B", "C", "D", "E", "F", "G", "H", "I", "J", "K", "M", "N"}; int arr[][2] = {{15, 7}, {16, 2}, {12, 1}, {14, -1}, {10, -2}, {-1, 9}, {6, 4}, {7, 6}, {-2, 5}, {2, 3}, {4, 0}, {9, -3}, {1, 8}}; int size = sizeof(arr) / sizeof(arr[0]); PSTMultiset<int> *t = new PSTMultiset<int>(names, arr, size); // print the multiset { t->Print("before"); } // build PST { t->Build(); } delete t; } The output is as follows: PSTMultiset(int size) Point2D(int arr[]), used_count = 1 Point2D(Point2D&& other), used_count = 2 ~Point2D(), used_count = 1 
Point2D(int arr[]), used_count = 2 Point2D(Point2D&& other), used_count = 3 ~Point2D(), used_count = 2 Point2D(int arr[]), used_count = 3 Point2D(Point2D&& other), used_count = 4 ~Point2D(), used_count = 3 Point2D(int arr[]), used_count = 4 Point2D(Point2D&& other), used_count = 5 ~Point2D(), used_count = 4 Point2D(int arr[]), used_count = 5 Point2D(Point2D&& other), used_count = 6 ~Point2D(), used_count = 5 Point2D(int arr[]), used_count = 6 Point2D(Point2D&& other), used_count = 7 ~Point2D(), used_count = 6 Point2D(int arr[]), used_count = 7 Point2D(Point2D&& other), used_count = 8 ~Point2D(), used_count = 7 Point2D(int arr[]), used_count = 8 Point2D(Point2D&& other), used_count = 9 ~Point2D(), used_count = 8 Point2D(int arr[]), used_count = 9 Point2D(Point2D&& other), used_count = 10 ~Point2D(), used_count = 9 Point2D(int arr[]), used_count = 10 Point2D(Point2D&& other), used_count = 11 ~Point2D(), used_count = 10 Point2D(int arr[]), used_count = 11 Point2D(Point2D&& other), used_count = 12 ~Point2D(), used_count = 11 Point2D(int arr[]), used_count = 12 Point2D(Point2D&& other), used_count = 13 ~Point2D(), used_count = 12 Point2D(int arr[]), used_count = 13 Point2D(Point2D&& other), used_count = 14 ~Point2D(), used_count = 13 PSTMultiset(std::string names[], E points[][2]) before F: (-1, 9) N: (1, 8) A: (15, 7) H: (7, 6) I: (-2, 5) G: (6, 4) J: (2, 3) B: (16, 2) C: (12, 1) K: (4, 0) D: (14, -1) E: (10, -2) M: (9, -3) Point2D(const Point2D& other), used_count = 14 Point2D(const Point2D& other), used_count = 15 Point2D(const Point2D& other), used_count = 16 Point2D(const Point2D& other), used_count = 17 Point2D(const Point2D& other), used_count = 18 Point2D(const Point2D& other), used_count = 19 Point2D(const Point2D& other), used_count = 20 Point2D(const Point2D& other), used_count = 21 Point2D(const Point2D& other), used_count = 22 Point2D(const Point2D& other), used_count = 23 Point2D(const Point2D& other), used_count = 24 Point2D(const Point2D& other), 
used_count = 25 Point2D(const Point2D& other), used_count = 26 Point2D(const Point2D& other), used_count = 27 Point2D(const Point2D& other), used_count = 28 Point2D(const Point2D& other), used_count = 29 Point2D(const Point2D& other), used_count = 30 Point2D(const Point2D& other), used_count = 31 Point2D(const Point2D& other), used_count = 32 Point2D(const Point2D& other), used_count = 33 Point2D(const Point2D& other), used_count = 34 Point2D(const Point2D& other), used_count = 35 Point2D(const Point2D& other), used_count = 36 Point2D(const Point2D& other), used_count = 37 Point2D(const Point2D& other), used_count = 38 Point2D(const Point2D& other), used_count = 39 Point2D(const Point2D& other), used_count = 40 Point2D(const Point2D& other), used_count = 41 Point2D(const Point2D& other), used_count = 42 Point2D(const Point2D& other), used_count = 43 Point2D(const Point2D& other), used_count = 44 Point2D(const Point2D& other), used_count = 45 Point2D(const Point2D& other), used_count = 46 Point2D(const Point2D& other), used_count = 47 Point2D(const Point2D& other), used_count = 48 Point2D(const Point2D& other), used_count = 49 Point2D(const Point2D& other), used_count = 50 Point2D(const Point2D& other), used_count = 51 Point2D(const Point2D& other), used_count = 52 Point2D(const Point2D& other), used_count = 53 ~Point2D(), used_count = 52 ~Point2D(), used_count = 51 Point2D(const Point2D& other), used_count = 52 Point2D(const Point2D& other), used_count = 53 Point2D(const Point2D& other), used_count = 54 Point2D(const Point2D& other), used_count = 55 Point2D(const Point2D& other), used_count = 56 Point2D(const Point2D& other), used_count = 57 Point2D(const Point2D& other), used_count = 58 ~Point2D(), used_count = 57 ~Point2D(), used_count = 56 ~Point2D(), used_count = 55 ~Point2D(), used_count = 54 ~Point2D(), used_count = 53 ~Point2D(), used_count = 52 ~Point2D(), used_count = 51 ~Point2D(), used_count = 50 ~Point2D(), used_count = 49 ~Point2D(), used_count = 48 
~Point2D(), used_count = 47 ~Point2D(), used_count = 46 ~Point2D(), used_count = 45 ~Point2D(), used_count = 44 Point2D(const Point2D& other), used_count = 45 Point2D(const Point2D& other), used_count = 46 Point2D(const Point2D& other), used_count = 47 Point2D(const Point2D& other), used_count = 48 Point2D(const Point2D& other), used_count = 49 Point2D(const Point2D& other), used_count = 50 Point2D(const Point2D& other), used_count = 51 Point2D(const Point2D& other), used_count = 52 Point2D(const Point2D& other), used_count = 53 Point2D(const Point2D& other), used_count = 54 Point2D(const Point2D& other), used_count = 55 Point2D(const Point2D& other), used_count = 56 Point2D(const Point2D& other), used_count = 57 Point2D(const Point2D& other), used_count = 58 Point2D(const Point2D& other), used_count = 59 ~Point2D(), used_count = 58 ~Point2D(), used_count = 57 Point2D(const Point2D& other), used_count = 58 Point2D(const Point2D& other), used_count = 59 Point2D(const Point2D& other), used_count = 60 Point2D(const Point2D& other), used_count = 61 Point2D(const Point2D& other), used_count = 62 Point2D(const Point2D& other), used_count = 63 Point2D(const Point2D& other), used_count = 64 ~Point2D(), used_count = 63 ~Point2D(), used_count = 62 ~Point2D(), used_count = 61 ~Point2D(), used_count = 60 ~Point2D(), used_count = 59 ~Point2D(), used_count = 58 ~Point2D(), used_count = 57 ~Point2D(), used_count = 56 ~Point2D(), used_count = 55 ~Point2D(), used_count = 54 ~Point2D(), used_count = 53 ~Point2D(), used_count = 52 ~Point2D(), used_count = 51 ~Point2D(), used_count = 50 ~Point2D(), used_count = 49 ~Point2D(), used_count = 48 ~Point2D(), used_count = 47 ~Point2D(), used_count = 46 ~Point2D(), used_count = 45 ~Point2D(), used_count = 44 ~Point2D(), used_count = 43 ~Point2D(), used_count = 42 ~Point2D(), used_count = 41 ~Point2D(), used_count = 40 ~Point2D(), used_count = 39 ~Point2D(), used_count = 38 ~Point2D(), used_count = 37 ~Point2D(), used_count = 36 ~Point2D(), 
used_count = 35 ~Point2D(), used_count = 34 ~Point2D(), used_count = 33 ~Point2D(), used_count = 32 ~Point2D(), used_count = 31 ~Point2D(), used_count = 30 ~Point2D(), used_count = 29 ~Point2D(), used_count = 28 ~Point2D(), used_count = 27 ~Point2D(), used_count = 26 L(N)_F: (-1, 9)_R(A) ~Point2D(), used_count = 25 L(I)_N: (1, 8)_R(H) ~Point2D(), used_count = 24 L(E)_A: (15, 7)_R(B) ~Point2D(), used_count = 23 I: (-2, 5)_R(J) ~Point2D(), used_count = 22 L(K)_H: (7, 6)_R(G) ~Point2D(), used_count = 21 E: (10, -2)_R(M) ~Point2D(), used_count = 20 L(C)_B: (16, 2)_R(D) ~Point2D(), used_count = 19 J: (2, 3) ~Point2D(), used_count = 18 K: (4, 0) ~Point2D(), used_count = 17 G: (6, 4) ~Point2D(), used_count = 16 M: (9, -3) ~Point2D(), used_count = 15 C: (12, 1) ~Point2D(), used_count = 14 D: (14, -1) ~Point2D(), used_count = 13 Height: 5 ~PSTMultiset() ~Point2D(), used_count = 12 ~Point2D(), used_count = 11 ~Point2D(), used_count = 10 ~Point2D(), used_count = 9 ~Point2D(), used_count = 8 ~Point2D(), used_count = 7 ~Point2D(), used_count = 6 ~Point2D(), used_count = 5 ~Point2D(), used_count = 4 ~Point2D(), used_count = 3 ~Point2D(), used_count = 2 ~Point2D(), used_count = 1 ~Point2D(), used_count = 0 From this result, it seems the PST has been built correctly by the code. But I'm not sure if the code is efficient. If you have time, could you please check the code? I'd greatly appreciate any suggestions and comments. Thanks in advance. Answer: About using underscores. Avoid starting identifiers with an underscore, or using double underscores. Some uses of underscores are reserved, see this question. If you really want some way to distinguish private and public member variables, consider prefixing with m_ or adding a single underscore at the end.
Useless overloads I see you often have two overloads for member functions, such as: Point2D& operator=(const Point2D& other) {...} Point2D& operator=(Point2D& other) {...} Unless you plan to modify other, there is no need for the second version. Make member functions const if they do not modify the object You should mark functions themselves as const if they do not make any modifications. For example: bool operator<(const Point2D& other) const { return (m_y < other.m_y) || (m_y == other.m_y && m_x < other.m_x); } Your set is actually a map Your class name is PSTMultiset, but instead of a set you actually implement a map from a Point2D<E> to a std::string. And your internal use of std::multiset can thus be replaced with std::multimap, and you then also no longer need to declare a struct Element: template <typename E> class PSTMultimap { std::multimap<Point2D<E>, std::string> m_points; class Node { ... Point2D<E> m_key; std::string m_value; ... }; ... }; Inside Build() you want to create a std::multimap with a custom comparison function to sort on the x coordinate first: struct Point2DComparatorX { bool operator()(const Point2D<E>& l, const Point2D<E>& r) { return (l.m_x < r.m_x) || (l.m_x == r.m_x && l.m_y < r.m_y); } }; Node* Build(Node* parent, std::multimap<Point2D<E>, std::string>& points) { ... std::multimap<Point2D<E>, std::string, Point2DComparatorX> remainPoints; ... }; Consider not hardcoding the use of Point2D inside your PST The only thing PSTMultimap needs to know about the type of keys is how to order them in two ways. It doesn't need to know you are sorting two-dimensional points. So it might be nicer if you could do something like: template<typename Key, typename Value, typename Compare1 = std::less<>, typename Compare2 = std::less<>> class PSTMultimap { std::multimap<Key, Value, Compare1> m_points; public: PSTMultimap(Key keys[], Value values[], int size) ... class Node { ... Key m_key; Value m_value; ... 
}; Node* Build(Node* parent, std::multimap<Key, Value>& points) { ... std::multimap<Key, Value, Compare2> remainPoints; ... } ... }; And then let the caller worry about providing the right comparison functions: std::string names[] = {"A", "B", ...}; Point2D<int> points[] = {{15, 7}, {16, 2}, ...}; int size = ...; template<typename E> struct Point2DComparatorX { bool operator()(const Point2D<E>& l, const Point2D<E>& r) { return (l.m_x < r.m_x) || (l.m_x == r.m_x && l.m_y < r.m_y); } }; PSTMultimap<Point2D<int>, std::string, std::less<Point2D<int>>, Point2DComparatorX<int>> t(names, points, size); Alternative constructors Your constructor requires you to store the keys and values in two arrays. But what if you have the information in a different container? It might make sense to provide a constructor that takes two iterators to key/value std::pairs, like so: template<typename Iterator> PSTMultimap(const Iterator &begin, const Iterator &end) { for (Iterator it = begin; it != end; ++it) { m_points.insert({it->first, it->second}); } } And then use it, for example, like so: std::vector<std::pair<Point2D<int>, std::string>> points = { {{15, 7}, "A"}, {{16, 2}, "B"}, ... }; PSTMultimap<...> t(points.begin(), points.end()); Consider allowing points to be added and removed from an existing PST Just like regular STL containers, it would be nice to be able to insert() and erase() elements. Even better would be to make it look like a regular STL container. Make Build() automatic I understand that you want to be able to print the situation before and after ordering the elements, but in production use, you don't need that, and it is much nicer if your class builds the PST automatically without having to manually call Build(). Also, this will remove the need to temporarily store all the elements in a std::multimap<> member variable until Build() is called.
Use std::unique_ptr to manage memory Instead of having raw pointer variables and calling new and delete manually, consider using std::unique_ptr. Use this for PSTMultimap's root and Node's left and right member variables. Improving efficiency A major issue with your code is that you build a lot of temporary maps, and even do some unnecessary sorting. When building the PST, what you want to do at each recursive step is: Get the max element using Compare1, this becomes the root. Sort the remaining elements using Compare2. Split the remaining elements in two, recurse each half, and add those as children. Return the root. Consider storing all the elements in a std::vector instead, and for each call to Build(), give it the begin and end position in this vector to use. Then the above steps become, more concretely: Use std::max_element() using Compare1 on the given range, then std::swap() it to the start of the range. Alternatively, you could use std::partial_sort(begin, begin + 1, end). Use std::sort() from begin + 1 to end using Compare2. Find the midpoint, and recurse each half (begin + 1 to mid_iter and mid_iter to end). So there's just one vector throughout the whole building process, of which smaller and smaller parts get sorted, without needing to allocate more and more maps. It also avoids full sorts just to get the maximum element.
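The single-vector build strategy from the efficiency section can be sketched compactly. A Python sketch (not the C++ from the post; names are mine) of the same three steps over a shared list: partial-select the max-y element to the front of the range, sort the rest of the range by x, and recurse on the two halves:

```python
def build_pst(points, lo=0, hi=None):
    """Build a priority search tree in-place over points[lo:hi].

    points is a flat list of (x, y) pairs shared by all recursive calls,
    mirroring the single std::vector in the review.  Returns nested tuples
    (point, left, right): the root of each subtree is the point with the
    largest y, and the remainder is split by x.
    """
    if hi is None:
        hi = len(points)
    if lo >= hi:
        return None
    # Step 1: swap the max-y element to the front of the range
    # (the equivalent of std::max_element + std::swap).
    m = max(range(lo, hi), key=lambda i: (points[i][1], points[i][0]))
    points[lo], points[m] = points[m], points[lo]
    # Step 2: sort the remainder of the range by x
    # (the equivalent of std::sort(begin + 1, end, Compare2)).
    points[lo + 1:hi] = sorted(points[lo + 1:hi])
    # Step 3: recurse on each half of the remainder.
    mid = lo + 1 + (hi - (lo + 1)) // 2
    return (points[lo],
            build_pst(points, lo + 1, mid),
            build_pst(points, mid, hi))

pts = [(15, 7), (16, 2), (12, 1), (14, -1), (10, -2), (-1, 9), (6, 4)]
tree = build_pst(pts)
print(tree[0])  # (-1, 9) -- the point with the greatest y becomes the root
```

No temporary containers are allocated along the way; only ever-smaller slices of the one list are reordered, which is exactly the saving the review describes.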
{ "domain": "codereview.stackexchange", "id": 39265, "tags": "c++, stl" }
Off position robot model - Inverse Kinematics
Question: I had to make a Unity3D robot model (ABB IRB 1600-6R/6DOF) that, given a desired end effector transformation matrix, would calculate and rotate the robot joints to the appropriate angles (inverse kinematics computation). I found some code in Robotics Toolbox for MATLAB that, let's say that you trust me, actually calculates the needed angles (it's the general "offset" case in ikine6s.m) - but for a different zero angle position than my chosen one, which is corrected using the appropriate offsets. So, I have set up my 3D robot model in Unity3D correctly, the angles are correct, I give the same parameters to Robotics Toolbox in MATLAB and the results are the same, I plot the robot stance in MATLAB to see it (it's on position), I then run the code in Unity3D and the robot model seems to move to the stance I saw in MATLAB, but it is off position: the end effector is away from its desired position. Am I missing something? The scaling is correct. I have subtracted a translation (equal to the distance from the bottom of the model's base contact with the floor to the start of the next link, as MATLAB doesn't calculate it) from the Y component of the desired position of the end effector (in its homogeneous transformation matrix I use the identity matrix as the rotation part, so we do not care about that part). Here are some pictures showing my case (say Px, Py, Pz is my desired EE position): MATLAB - this is the plot of the results of the MATLAB ikine6s with input Px, Py, Pz in the corresponding translation part of the desired homogeneous transform matrix: Unity3D - this is what I get for the same input and angles in Unity3D; the EE is off position (it should be half inside the white sphere): Answer: It is hard to make out from the perspectives that you have included, but it seems that the MATLAB screenshot has a rotation of approx 45 degrees around the global z axis.
Taking a look at the grid in the Unity screenshot, it seems that the robot also has a 45 degree angle with the global vertical axis (y-axis). I would suggest the following: Check if the units match. MATLAB uses radians while Unity in most cases uses degrees for rotation. Check the succession of child and parent game objects in Unity. Make sure that the root coordinate system is the one you think it is and that each linkage is the child of the previous one. Make sure that the coordinate systems where each linkage meets the next one in Unity correspond to the MATLAB ones. This might be tricky, due to different coordinate systems. In Unity each game object has its own coordinate system, and there are no rules on how these are oriented; Unity itself uses a left-handed convention. The coordinate system in MATLAB is right handed and it is oriented according to the DH convention. Make sure that the coordinate systems at the joints in Unity are the same as in MATLAB, or that the joint angles are correctly transformed. Make sure that both systems use the same handedness; adjust if not. Look up which Denavit-Hartenberg convention is used in MATLAB (proximal, distal, modified, classic) and follow that in Unity. If everything else fails (or you just do not want to bother with coordinate systems) and it is not important to squeeze out every bit of performance in Unity, you can use the built-in inverse kinematics in Unity. UPDATE: the built-in IK in Unity seems to only support humanoid models. Look in the Asset Store for a free one which supports serial structures. Troubleshooting: Take the MATLAB forward model. Get the position and orientation of each joint coordinate system in the global frame. Draw these in Unity in a global frame. See which is the first one that is not located at its corresponding joint. Fix it, then go forward until the last one is in place (at the EE).
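As a concrete illustration of the handedness point — assuming MATLAB's plots use a z-up right-handed frame and Unity its usual y-up left-handed frame (verify against your actual setup) — swapping two axes both flips handedness and maps z-up to y-up:

```cpp
// Simple 3-vector for illustration.
struct Vec3 { double x, y, z; };

// Map a point from a z-up right-handed frame (as in MATLAB plots) into a
// y-up left-handed frame (Unity's convention) by swapping y and z.
// Swapping any two axes flips the handedness of the frame.
Vec3 MatlabToUnity(const Vec3& p) { return {p.x, p.z, p.y}; }
```

The same swap must be applied consistently to every joint frame, not just the end effector, or the chain will still come out twisted.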
{ "domain": "robotics.stackexchange", "id": 1168, "tags": "inverse-kinematics, 3d-model" }
Definition of Ohm in SI basic units in words
Question: One way Wikipedia defines Ohm is (this is also taught in school): $$1\Omega =1{\dfrac {{\mbox{V}}}{{\mbox{A}}}}$$ They describe this definition in words, too: The ohm is defined as a resistance between two points of a conductor when a constant potential difference of 1.0 volt, applied to these points, produces in the conductor a current of 1.0 ampere, the conductor not being the seat of any electromotive force. The definition of Ohm in SI basic units is: $$1\Omega = 1{\dfrac {{\mbox{kg}}\cdot {\mbox{m}}^{2}}{{\mbox{s}}^{3}\cdot {\mbox{A}}^{2}}}$$ It's really hard for me to get that this definition is correct. It's clear that mathematical calculations confirm this definition. But how do you describe the definition of the SI in words like that paragraph on Wikipedia? Edit: How would you describe it? Although it is not common to do it that way, I think describing it that way could be very interesting. Answer: I think the short answer is, you don't. The reason we call the unit of force a Newton and not a kg m/s$^2$ is because it is convenient and it expresses the relation you want to convey when used elsewhere (e.g., $F=-kx$ for a spring). Similarly, it is convenient to "hide" the MKS base units into a single term, the potential $V$ in this case, so that the formula is easier to remember and that the relation is conveyed, in this case the relation between potential difference, current, and resistance.
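Spelling out the "mathematical calculations" mentioned in the question, the base-unit form follows mechanically by unwinding the derived units:

```latex
1\,\mathrm{V} = 1\,\frac{\mathrm{W}}{\mathrm{A}}
             = 1\,\frac{\mathrm{J}}{\mathrm{A}\cdot\mathrm{s}}
             = 1\,\frac{\mathrm{kg}\cdot\mathrm{m}^{2}}{\mathrm{s}^{3}\cdot\mathrm{A}}
\quad\Longrightarrow\quad
1\,\Omega = 1\,\frac{\mathrm{V}}{\mathrm{A}}
          = 1\,\frac{\mathrm{kg}\cdot\mathrm{m}^{2}}{\mathrm{s}^{3}\cdot\mathrm{A}^{2}}
```

Each step substitutes one derived unit (watt, joule, coulomb) by its definition, which is why a word description of the final form reads so unnaturally.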
{ "domain": "physics.stackexchange", "id": 56811, "tags": "electrical-resistance, dimensional-analysis, si-units" }
Coding a Variable Delay with LFO (MATLAB/C)
Question: I am trying to code a Variable Delay (chorus/flanger) effect in MATLAB using a paradigm that would be friendly for porting to a lower level language like C. What I currently have is a working echo/delay that uses read/write indices to write into a buffer. I'm now trying to use an LFO (low-frequency sine wave) to modulate the delay time but can't seem to figure it out. Sometimes my errors sound "cool" but they aren't correct or what I was trying to do. % Create LFO lfo = abs(sin(2*pi*lfo_rate*[0:length(output)]/fs)); % Perform the Echo Loop for i=1:length(x_t) % Write Sample Into Delay Buffer buffer(writer) = x_t(i); % Get All Delay Into Output if i <= length(x_t) output(i) = x_t(i) + (b_n * buffer(reader)); elseif i > length(x_t) output(i) = b_n * buffer(reader); end % Circular Buffer writer = writer + 1; if writer > length(buffer) writer = 1; end reader = reader + 1; if reader > length(buffer) reader = 1; end end I've been trying to use the LFO vector to modulate the reader's position, but I'm pretty much stuck, and the stuff I can find online about this sort of thing isn't too helpful either. Answer: The best source on this topic with C solutions for flanger and chorus is the DSP book by Orfanidis: Orfanidis For MATLAB code examples on audio effects (including flanger on page 48) see the file below: Marshall
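For what it's worth, here is a minimal sketch of the core technique — recompute the delay in samples from the LFO every tick, and read the buffer at a fractional position with linear interpolation. It is written in C++-style code to match the porting goal; the parameter names and the unipolar LFO shape are illustrative choices, not taken from the cited books:

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

// Flanger/chorus: the delay time is modulated per sample by a sine LFO,
// and the fractional read position is linearly interpolated.
std::vector<double> Flanger(const std::vector<double>& x, double fs,
                            double base_delay_s, double depth_s,
                            double lfo_rate_hz, double mix) {
    const double kPi = 3.14159265358979323846;
    const int buf_len = static_cast<int>((base_delay_s + depth_s) * fs) + 2;
    std::vector<double> buffer(buf_len, 0.0);
    std::vector<double> y(x.size(), 0.0);
    int writer = 0;
    for (std::size_t n = 0; n < x.size(); ++n) {
        buffer[writer] = x[n];
        // Current delay in (fractional) samples, swept by a unipolar LFO.
        double lfo = 0.5 * (1.0 + std::sin(2.0 * kPi * lfo_rate_hz * n / fs));
        double delay = (base_delay_s + depth_s * lfo) * fs;
        // Fractional read position behind the write index, wrapped.
        double pos = writer - delay;
        while (pos < 0.0) pos += buf_len;
        int i0 = static_cast<int>(pos);
        double frac = pos - i0;
        int i1 = (i0 + 1) % buf_len;
        double delayed = (1.0 - frac) * buffer[i0] + frac * buffer[i1];
        y[n] = x[n] + mix * delayed;
        writer = (writer + 1) % buf_len;
    }
    return y;
}
```

The key differences from the fixed echo above: there is only one write index, the read position is derived from it each sample (writer minus the current delay), and because that delay is rarely an integer number of samples, interpolation between the two neighbouring buffer cells is what keeps the sweep free of zipper noise.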
{ "domain": "dsp.stackexchange", "id": 3060, "tags": "matlab, audio" }
Largest LCM of partitions of a number
Question: Suppose there is a number N which can be divided into two positive non zero numbers X and Y i.e N = X + Y. There will be N/2 pairs of (X,Y). I want to find the largest LCM(Least common multiple) of X & Y Example 5 = (1 + 4) and 5 = (2 + 3) Since LCM(1,4) = 4 < LCM(2,3) = 6 the largest LCM will be LCM(2,3) = 6. CODE #include <stdio.h> int lcm(int, int); int main(void) { int counter = 0; int lowerBound = 0, upperBound, input, result; // input is the n printf("input = "); scanf("%d",&input); int max = -1; upperBound = input; // at first upperBound will be the input for(counter=0; counter < input/2; counter++) // we will iterate till the input/2 terms { lowerBound++; upperBound--; // in every iteration we will compute the lcm of lowerBound and upperBound // so 1st iteration : compute lcm(1, n-1) // 2nd iteration : compute lcm(2, n-2) // 3rd iteration : compute lcm(3, n-3) // .......after ((n/2)-1)th iteration..... // if n is even : compute lcm(n/2, n/2) // if n is odd : compute lcm((n/2)-1, n/2) result = lcm(lowerBound,upperBound); // store the result if(result >= max) // if result is maximum update the maximum { max = result; } } printf("output = %d",max); return 0; } int lcm(int x, int y) { int n1 = x, n2 = y; while(n1 != n2) { if(n1 > n2) n1 = n1 - n2; else n2 = n2 - n1; } return((x*y) / n1); } OUTPUT 1 input = 10 output = 21 OUTPUT 2 input = 11 output = 30 UPDATE 1 After Aseem Bansal's answer I try to find a more optimize way to solve the problem and i came up with this code #include<stdio.h> int main(void) { int input, result = 0; printf("Enter the input = "); scanf("%d",&input); while(input <= 0) // if input is negative ask again { printf("Enter a positive non zero number :"); scanf("%d",&input); } int middle = (input/2); if(input%2 != 0) // if the input is an odd number { result = (middle * (middle+1)); } else // if input is an even number { result = middle%2 == 0 ? 
((middle-1) * (middle+1)) : ((middle-2) * (middle+2)); } printf("result = %d",result); return 0; } OUTPUT 1 Enter the input = 13 result = 42 OUTPUT 2 Enter the input = 12 result = 35 OUTPUT 3 Enter the input = 10 result = 21 EXPLANATION For any odd number, middle and middle+1 are consecutive and hence coprime, so the GCD will be 1. Therefore the LCM will be the product of middle and middle+1. But for an even number, middle and middle+1 do not sum to the input; the only middle pair is (middle, middle), whose LCM is just middle. So we need to find the nearest coprime numbers. If middle is even then the nearest coprime numbers are middle-1 and middle+1, otherwise the nearest coprime numbers are middle-2 and middle+2. The product will be the result. NOTE: here middle is floor(input/2). Can it be more optimized? Any review is welcome. Answer: I am not sure about the overall algorithm but there are better ways for finding LCM. You should use a different algorithm. Try Euclid's algorithm to find the GCD. Then calculate result (your LCM) in main() as result = (lowerbound * upperbound)/gcd(lowerbound, upperbound); The problem with your LCM function is that subtraction isn't the best way for finding the GCD which you are currently doing. It will take too long. Just consider the case of x = 1 and y = 100000. It should be easy to see that the number of subtractions is much higher in this case and the program would be very slow. If you don't want to change the algorithm (which you should) then the least optimization that I can think of is to not declare n1 and n2 separately in the function lcm. Just use x and y. Do the final division by the product in the main function. It will at least save you 2 variables. EDIT I came up with a much better optimization. In your main loop instead of going from the outside pairs i.e. (1, n-1) to the inside pairs you would do much better by going from the inside. Example 5 = (1,4) and 5 = (2,3). The maximum is found at 5 = (2,3). 7 = (1,6), 7 = (2,5) and 7 = (3,4). The maximum is at 7 = (3,4). The answer is in the middle.
You'll get it without a loop. But there is a big problem. This reasoning works only for odd numbers. I tried writing down numbers on paper and solved it by hand. I'll need some more time to find out a pattern for even numbers. But for odd numbers this is probably the best optimization. EDIT2 In the main function instead of if(result >= max) you should be using if(result > max). What is the use of replacing the number with the same number? One less operation. You don't need to initialize counter at the beginning. You are doing that in the for loop. EDIT3: For the case of even numbers the maximum lcm will be either lcm((input/2)-2, (input/2)+2) or lcm((input/2)-1, (input/2)+1). It's easy to find which will be maximum without calculating both of them. If input/2 is even then lcm((input/2)-1, (input/2)+1) should be maximum otherwise it should be lcm((input/2)-2, (input/2)+2). EDIT After OP's Update 1 How about this as better code #include<stdio.h> int main(void) { int input, result; printf("Enter the input = "); scanf("%d",&input); while (input <= 0) { printf("Enter a positive non zero number :"); scanf("%d",&input); } // (x % 2) used as a condition in C means checking x for odd number int middle = input/2; result = input % 2 ? middle * (middle + 1) : (middle % 2 ? middle * middle - 4 : middle * middle - 1); printf("Result = %d",result); return 0; } Removed unnecessary braces, spaces, comparisons and comments. Comments are needed but commenting obvious things is just noise. I prefer single space on both sides of operators.
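The remainder-based Euclid step the answer recommends can be sketched like this (an illustration, not the reviewed code):

```cpp
// Euclid's algorithm by remainder: far fewer iterations than the repeated
// subtraction in the original lcm(), especially for pairs like (1, 100000).
int Gcd(int a, int b) {
    while (b != 0) {
        int t = a % b;
        a = b;
        b = t;
    }
    return a;
}

// Divide before multiplying to reduce the risk of intermediate overflow.
int Lcm(int a, int b) { return (a / Gcd(a, b)) * b; }
```

Subtraction-based GCD takes on the order of max(a, b) steps in the worst case, while the remainder version takes a number of steps logarithmic in the inputs.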
{ "domain": "codereview.stackexchange", "id": 4101, "tags": "c" }
Error Running Catkin_Make
Question: After installing/uninstalling some packages from source using rosinstall, I can no longer run catkin_make without error. I've tried removing the packages I installed, removing/reinstalling the catkin package, running wstool update, and other things. The error message makes reference to catkin_EXTRAS_DIR, but I can't seem to find any information on what this is. Can anyone help me understand what I can do to fix this? What is catkin_EXTRAS_DIR? pi@raspberrypi:~/ros_catkin_ws $ catkin_make Base path: /home/pi/ros_catkin_ws Source space: /home/pi/ros_catkin_ws/src Build space: /home/pi/ros_catkin_ws/build Devel space: /home/pi/ros_catkin_ws/devel Install space: /home/pi/ros_catkin_ws/install #### #### Running command: "make cmake_check_build_system" in "/home/pi/ros_catkin_ws/build" #### -- +++ catkin -- Using CATKIN_DEVEL_PREFIX: /home/pi/ros_catkin_ws/devel -- Using CMAKE_PREFIX_PATH: /home/pi/ros_catkin_ws/build;/home/pi/ros_catkin_ws/build/cmake -- Using PYTHON_EXECUTABLE: /usr/bin/python -- Using Debian Python package layout -- Using empy: /usr/bin/empy -- Using CATKIN_ENABLE_TESTING: ON -- Call enable_testing() -- Using CATKIN_TEST_RESULTS_DIR: /home/pi/ros_catkin_ws/build/test_results -- Found gtest sources under '/usr/src/gtest': gtests will be built -- Using Python nosetests: /usr/bin/nosetests-2.7 -- catkin 0.6.18 -- BUILD_SHARED_LIBS is on CMake Error at catkin/cmake/assert.cmake:3 (message): Assertion failed: catkin_EXTRAS_DIR (value is '') Call Stack (most recent call first): catkin/cmake/catkin_workspace.cmake:34 (assert) CMakeLists.txt:28 (catkin_workspace) -- Configuring incomplete, errors occurred! See also "/home/pi/ros_catkin_ws/build/CMakeFiles/CMakeOutput.log". See also "/home/pi/ros_catkin_ws/build/CMakeFiles/CMakeError.log". 
Makefile:9532: recipe for target 'cmake_check_build_system' failed make: *** [cmake_check_build_system] Error 1 Invoking "make cmake_check_build_system" failed Thanks, Justin Originally posted by jwasserstein on ROS Answers with karma: 3 on 2016-07-31 Post score: 0 Original comments Comment by ahendrix on 2016-07-31: Please post your error message as text and format it with the "preformatted text" (010101) button so that it is searchable and readable. Answer: It's not entirely clear exactly what you've done to get here, but it's clear that your workspace is kind of messed up. catkin_EXTRAS_DIR is one of the variables used internally by catkin during the build process; if it's not set it means that some file for catkin is corrupt or empty. The first steps at troubleshooting would be to remove the build and devel directories in your workspace, and since the CMakeLists.txt link at the root of your workspace seems messed up, you should remove that too. A fresh build will re-create all of these. After that, next steps will depend on how you installed ROS. If you did a binary install, you should reinstall any apt packages you suspect are damaged with apt-get install --reinstall <package>. If you did a source install, you should make sure that the source files are still intact, and then run the build steps (catkin_make_isolated, etc) again. Originally posted by ahendrix with karma: 47576 on 2016-08-01 This answer was ACCEPTED on the original site Post score: 0 Original comments Comment by jwasserstein on 2016-08-01: I tried removing build, devel, and CMakeLists.txt, then building again, but this didn't work. I ended up re-installing ROS which fixed the issue. I also had some boot issues which I attributed to a corrupt SD card, so that might be what caused this. Thanks for your help.
{ "domain": "robotics.stackexchange", "id": 25412, "tags": "catkin-make" }
ROS Installation on Fedora 19
Question: I'm trying to install ROS Groovy on Fedora 19, following the instructions at wiki/groovy/Installation/Fedora. However, running the line $ rosdep install --from-paths src --ignore-src --rosdistro groovy -y results in the following error: ERROR: the following packages/stacks could not have their rosdep keys resolved to system dependencies: pcl: No definition of [libusb-1.0-dev] for OS version [schrödinger’s] locate libusb- gives: /usr/lib/libusb-0.1.so.4 /usr/lib/libusb-0.1.so.4.4.4 /usr/lib/libusb-1.0.so.0 /usr/lib/libusb-1.0.so.0.1.0 /usr/lib64/libusb-0.1.so.4 /usr/lib64/libusb-0.1.so.4.4.4 /usr/lib64/libusb-1.0.so.0 /usr/lib64/libusb-1.0.so.0.1.0 The problem was how to get the letter 'ö' readable in the yaml file because rosdep resolve is giving this error: ERROR: No definition of [libusb-1.0-dev] for OS version [schrödinger’s]. I solved this by changing the OS name in /etc/system-release to schrodinger and then editing the yaml file. Originally posted by atp on ROS Answers with karma: 529 on 2013-07-08 Post score: 1 Answer: You need to add rosdep rules for Fedora 19. It's correctly detecting the OS correctly but the latest definition for libusb-1.0 is for spherical. https://github.com/ros/rosdistro/blob/master/rosdep/base.yaml How to submit changes is documented here: http://ros.org/doc/independent/api/rosdep/html/contributing_rules.html Originally posted by tfoote with karma: 58457 on 2013-07-09 This answer was ACCEPTED on the original site Post score: 1 Original comments Comment by tfoote on 2013-07-23: You say that this doesn't work for you. What did you do to try it? What errors did you get? Comment by atp on 2013-07-23: see above. basically, how can I get the 'ö' readable in the yaml file? Comment by cottsay on 2014-05-10: For the record, this is fixed in the latest release of rospkg: https://github.com/ros-infrastructure/rospkg/pull/54
{ "domain": "robotics.stackexchange", "id": 14845, "tags": "ros, rosdep, rospkg, fedora, ros-groovy" }
Insert value into multiple locations in a list
Question: Code insertAt :: a -> [Int] -> [a] -> [a] insertAt x indices xs = go 0 xs where go _ [] = [] go j (y:ys) | nextIndex `elem` indices = y : x : go (nextIndex + 1) ys | otherwise = y : go nextIndex ys where nextIndex = j + 1 Questions How can I make this code more efficient? How can I make the code easier to follow? Is a recursive approach the correct one? Do the Prelude or Data.List have any functions that could make this more readable/efficient? Answer: Is it intended that insertAt 999 [1] [1..10] inserts 999 at index 1, but insertAt 999 [0] [1..10] does nothing. Shouldn't it insert a value at index 0? Also, it looks like certain other insertions don't work correctly: > insertAt 999 [2,5,6,7] [1..10] [1,2,999,3,4,999,5,999,6,7,8,9,10] Note that this inserts only at indices 2, 5, and 7 and appears to skip index 6. My guess is that you probably wanted something more like: insertAt :: a -> [Int] -> [a] -> [a] insertAt x indices xs = go 0 xs where go _ [] = [] go j (y:ys) | j `elem` indices = x : go (j+1) (y:ys) | otherwise = y : go (j+1) ys which handles these cases correctly: λ> insertAt 999 [0] [1..10] [999,1,2,3,4,5,6,7,8,9,10] λ> insertAt 999 [2,5,6,7] [1..10] [1,2,999,3,4,999,999,999,5,6,7,8,9,10] It doesn't insert anything for insertAt 999 [10] [1..10], but if you want that to insert at the end, then you need a slight modification: insertAt :: a -> [Int] -> [a] -> [a] insertAt x indices xs = go 0 xs where go j yall | j `elem` indices = x : go (j+1) yall go j (y:ys) = y : go (j+1) ys go _ [] = [] which works like so: λ> insertAt 999 [0] [1..10] [999,1,2,3,4,5,6,7,8,9,10] λ> insertAt 999 [2,5,6,7] [1..10] [1,2,999,3,4,999,999,999,5,6,7,8,9,10] λ> insertAt 999 [10] [1..10] [1,2,3,4,5,6,7,8,9,10,999] λ> insertAt 999 [10,11,12] [1..10] -- multiple consecutive indices off the end [1,2,3,4,5,6,7,8,9,10,999,999,999] I think this version is quite readable. 
The main efficiency problem is that, if indices is more than a few elements, the repeated linear traversals of the elem function through the indices list will be slow. A second efficiency problem is that if all the insertions are near the beginning, you will end up traversing the entire input list even after all the insertions have been performed. You can improve the performance substantially by using an IntSet. (A plain Set from Data.Set would work, too, but IntSets are even faster for sets of Ints.) import qualified Data.IntSet as IntSet insertAt :: a -> [Int] -> [a] -> [a] insertAt x indices = go 0 where indices' = IntSet.fromList indices go j yall | j `IntSet.member` indices' = x : go (j+1) yall go j (y:ys) = y : go (j+1) ys go _ [] = [] You could also pass the IntSet as a parameter to go and delete indices as they are used. This might be faster, but you'd need to benchmark it to be sure: import qualified Data.IntSet as IntSet insertAt :: a -> [Int] -> [a] -> [a] insertAt x indices = go 0 (IntSet.fromList indices) where go j indices' yall | j `IntSet.member` indices' = x : go (j+1) (IntSet.delete j indices') yall go j indices' (y:ys) = y : go (j+1) indices' ys go _ _ [] = [] This would also allow you to stop inserting when the list of indices is empty, but again, only benchmarking will tell whether or not this is faster: import qualified Data.IntSet as IntSet insertAt :: a -> [Int] -> [a] -> [a] insertAt x indices = go 0 (IntSet.fromList indices) where go _ indices' yall | IntSet.null indices' = yall go j indices' yall | j `IntSet.member` indices' = x : go (j+1) (IntSet.delete j indices') yall go j indices' (y:ys) = y : go (j+1) indices' ys go _ _ [] = [] Personally, I think I probably would have approached it by writing a go function that operates on the current index and a sorted list of indices: insertAt x indices = go 0 (sort indices) This way, we can directly compare the current index with the next insertion point in the sorted list: go j (i:is) (y:ys) | i 
== j = ... This variant would look something like: insertAt :: a -> [Int] -> [a] -> [a] insertAt x indices = go 0 (sort indices) where go _ [] yall = yall go j (i:idxs) yall | j == i = x : go (j+1) idxs yall | j > i = go j idxs yall -- delete duplicate indices go j idxs (y:ys) = y : go (j+1) idxs ys go _ _ [] = [] This avoids those efficiency problems I mentioned earlier without having to switch to a different data structure. Sorting the index list and consuming it as insertions are made will avoid any traversals through the index list, since you're only ever looking at the list's head. Also, when the index list is empty, the go _ [] yall = yall case will immediately return the remaining list without further processing. I'm not sure that there's any clever, non-recursive method that uses folds or other functions from Data.List and ends up being better than the above. You can replace the recursion with unfoldr. For example, given the version from above: insertAt :: a -> [Int] -> [a] -> [a] insertAt x indices xs = go 0 xs where go j yall | j `elem` indices = x : go (j+1) yall go j (y:ys) = y : go (j+1) ys go _ [] = [] you can rewrite this as: insertAt :: a -> [Int] -> [a] -> [a] insertAt x indices lst = unfoldr step (0, lst) where step (j, lst) | j `elem` indices = Just (x, (j+1, lst)) step (j, y:ys) = Just (y, (j+1, ys)) step (_, []) = Nothing and the other versions can be rewritten similarly. This doesn't particularly improve readability, and I doubt it's more efficient. Another completely different approach might be to intercalate the insertion value into a sequence of sublists generated from the indexes.
So, for example, given the indexes [2,5,6,7], you might want to first break the input list [1..10] up into sublists between the insertions: [[1,2],[3,4],[],[],[5,6,7,8,9,10]] It might be easiest to first convert a sorted list of indices into a list of sizes for all the sublists except the last: indicesToChunks :: [Int] -> [Int] indicesToChunks indices = zipWith (\i j -> i-j-1) indices (-1 : indices) giving: > indicesToChunks [2,5,6,7] [2,2,0,0] and then write a function to break a list into sublists based on these sublist sizes. It's pretty easy to write recursively: chunks :: [Int] -> [a] -> [[a]] chunks [] lst = [lst] chunks (n:ns) lst = a : chunks ns b where (a,b) = splitAt n lst Then, you can write: insertAt :: a -> [Int] -> [a] -> [a] insertAt x indices = intercalate [x] . chunks (indicesToChunks (sortu indices)) where -- unique elements in sorted order sortu = map head . group . sort The definition of sortu is weird, but it's a reasonable method of sorting and removing duplicates. (The nub function in Data.List removes duplicates, but doesn't do it efficiently for sorted lists, the way map head . group does.) Note that this version behaves a little differently than the others, if there are insertions at the end of the list that skip indexes. The earlier versions all insert only one 999 at the end of the list in this example: > insertAt 999 [10,1000] [1..10] [1,2,3,4,5,6,7,8,9,10,999] treating the 1000 as an out-of-bounds index. However, the intercalate-based version inserts two 999s: > insertAt 999 [10,1000] [1..10] [1,2,3,4,5,6,7,8,9,10,999,999]
{ "domain": "codereview.stackexchange", "id": 41389, "tags": "beginner, haskell, linked-list, functional-programming" }
Scraping information from a webpage not knowing it's last page number
Question: I've written some code in VBA to scrape names and phone numbers from a webpage that has spread across some pages I don't wish to know of. The main interesting thing with this scraper is that it only needs to know the first page number, then it traverses across all the pages and fetches the information I've mentioned above. I tried to make it error-free. Here is what I did: Sub Yellowpage_Parser() Const mlink = "https://www.yellowpages.com/search?search_terms=pizza&geo_location_terms=San%20Francisco%2C%20CA&page=" Dim http As New XMLHTTP60, html As New HTMLDocument Dim post As HTMLHtmlElement Do y = y + 1 With http .Open "GET", mlink & y, False .send html.body.innerHTML = .responseText End With For Each post In html.getElementsByClassName("info") With post.getElementsByClassName("n")(0).getElementsByTagName("span") If .Length Then x = x + 1: Cells(x, 1) = .item(0).innerText End With With post.getElementsByClassName("phones phone primary") If .Length Then Cells(x, 2) = .item(0).innerText End With Next post Loop While InStr(http.responseText, "next ajax-page") Set html = Nothing MsgBox "Collected totals are " & ActiveSheet.UsedRange.Rows.Count End Sub Answer: I tried to make it error-free. Except this is the Web, and whether a server located somewhere on the globe will respond with something you expect, is completely out of your control. This instruction: .Open "GET", mlink & y, False Succeeds under normal circumstances, but one day the site will be down for maintenance, or whatever - and you'll get a run-time error, be it here or at the .Send call. Code that doesn't handle errors is code that is written for the "happy path" - it's code that works well, until it doesn't. And then, when one thing goes wrong, everything bursts into flames, in a quite inelegant way. There are ways to gracefully handle run-time errors. Use them. On Error GoTo CleanFail Looks like you like scraping stuff.
That's great, but at one point you need to solve the more generic problem, and move the URL from a local Const to a parameter. Consider implementing the website-specifics as interface implementations. @Interface Option Explicit Public Function Parse(ByVal url As String) As VBA.Collection End Function And when you start implementing interfaces in VBA, you'll find that VBA will refuse to compile when you have an implemented public interface member with an underscore in its name - so you might as well drop that habit now. The indentation is wrong. Get the latest Rubberduck and use its Smart Indenter. Rubberduck will also warn you about other things, e.g. multiple declarations in a single instruction, single-letter identifiers, and As New, which literally makes an indestructible object - which is usually a very bad idea. Try adding this instruction: Set http = Nothing Debug.Print http Is Nothing If you expected that to print True, you've fallen prey to the As New "feature". Best stay away from that when you want to control your objects' life time.
{ "domain": "codereview.stackexchange", "id": 26637, "tags": "vba, web-scraping" }
What are the various definitions for $\rm SNR$, and their associated methods for measuring it?
Question: The definition of $\rm SNR$ seems to be somewhat of a tower of Babel in industry. What definitions of $\rm SNR$ are there (feel free to cite applications), and how exactly can it be measured for those applications? My specific questions on $\rm SNR$ are: How may we measure $\rm SNR$ for a communications system, if we have not yet been able to attain optimal bit sampling timing, and all we have to work with are all the signals in the receiver up to and including the envelope of the I and Q channels? See this post for context. Once we attain optimal bit sampling, and softbits are attained, how best to measure $\rm SNR$ (or $E_b/N_0$)? One way I use is: $$ 10\log_{10}\left[ \frac{\textrm{mean}\left\{\lvert s_n \rvert^2\right\}}{\textrm{var}\left\{\lvert s_n \rvert\right\}}\right], $$ however I understand this is not suitable for low $\rm SNR$ cases. What other ways exist? Answer: It's not practical to come up with a comprehensive list of definitions of signal-to-noise ratio, because different measures are relevant in different applications. Here are a few common measures that I've come across and/or used in the past (you'll find that they are bent toward communications applications): "Pure" SNR: I call this "pure" because it is a literal interpretation of the term "signal to noise ratio." This measure is just the ratio of the power in the signal of interest to the total noise power over the signal's bandwidth. Any noise present outside of the signal bandwidth can be filtered out without removing any signal power, so it can be ignored. This definition isn't often used for digital communications; there are more useful quantities that I reference below. SNR is more relevant for analog modulation. $\frac{E_b}{N_0}$: This is the most common measure of signal to noise ratio used for digital communications.
Referred to in the comments above by Dilip as the "BEND" ratio, Eb/N0 is the ratio of the energy per (information) bit ($E_b$) to the (assumed to be white) noise power spectral density ($N_0$) over the signal bandwidth. This is often used as a figure of merit for digital communications systems because it is normalized by the system's symbol rate and modulation format (since it is the energy per bit). This allows very different system implementations to be compared for efficiency on a level playing field, in an apples-to-apples fashion. In some other specific applications, you might also see $\frac{E_s}{N_0}$ (energy per symbol instead of energy per bit in the numerator) or $\frac{E_c}{N_0}$. I've seen two different meanings for the latter: first, in coded communications systems, you might see this, where $E_c$ stands for the energy per coded bit, in contrast to $E_b$, which is the energy per information bit. Another meaning: in direct-sequence spread spectrum systems, you might see $E_c$ refer to the energy per chip instead. $\frac{C}{N_0}$: This quantity is often termed "carrier to noise ratio." One place that I've seen this used as a common measure of signal quality is in GPS-related literature. This is very similar to the "pure" definition of SNR, except that the (again, assumed to be white) noise power spectral density value $N_0$ is used in lieu of the total noise power over the signal bandwidth. Since bandwidth is a somewhat squishy term that has many different definitions (there is no one particular measure of what a signal's "bandwidth" is that is accepted to be correct), this ratio is somewhat more precise and less ambiguous than the "pure" SNR metric.
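For an uncoded system these measures interconvert in the usual way (with $M$-ary modulation, bit rate $R_b$, and noise bandwidth $B$):

```latex
E_s = E_b \log_2 M,
\qquad
\mathrm{SNR} = \frac{C}{N} = \frac{E_b}{N_0} \cdot \frac{R_b}{B}
```

The second relation follows from $C = E_b R_b$ (power is energy per bit times bits per second) and $N = N_0 B$; with coding, $E_b$ must additionally be scaled by the code rate to get $E_c$.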
{ "domain": "dsp.stackexchange", "id": 66, "tags": "noise, bpsk, snr" }
Get title from database based on member country
Question: I asked this question before. However, I realized that I had to edit some of my code after I got good answers. Therefore, I'm asking a new question here. I have the following database: Each member may have multiple addresses (1:n) stored in table address. It's possible that one member has multiple addresses within one country. Further, each member may have multiple titles (m:n) mapped in table title2memb. Each entry idRank has a name which can be found in title. I want to get the first N members that have at least one address in the country "US" and I want to display all their titles. Here is working code, using two JOINs and two where statements for N=2: <?php $db = new mysqli("localhost","root","","example"); $sql = "SELECT DISTINCT m.name, m.id FROM member m INNER JOIN address a ON m.id = a.idMember AND a.country='US' LIMIT 2"; $membersResult = $db->query($sql); while($member = $membersResult->fetch_assoc()){ $id = $member['id']; echo $member['name'] . ": "; $sub_sql = "SELECT t.name FROM title t INNER JOIN title2memb tm ON t.id = tm.idRank AND tm.idMember='$id'"; $sub_result = $db->query($sub_sql); while($rank = $sub_result->fetch_assoc()) { echo $rank['name'] . " "; } echo "<br>"; } ?> And as a result I get: Bernd: Bee Tom: Bear Dog Is it really necessary to have 2 nested while loops here? Or can I get both in one SQL? Some remarks: I had to use the DISTINCT keyword in my first SQL, otherwise I would get Bernd twice. I have avoided using prepared statements and encoding the input to make the code more readable. I know that the version presented here is vulnerable to XSS. Answer: Firstly, for questions like this it would be helpful to provide an SQL fiddle. I mocked one up to test on; it only takes a few minutes but would be much faster for you since you have the table definitions. Is it really necessary to have 2 nested while loops here? Or can I get both in one SQL?
You can absolutely get everything you need in one SQL query without nested queries. In fact, you should. Performing database queries within a loop is never ideal and it's always a better idea to get the data you need (even if slightly more than you need) then filter it down with PHP to avoid having to do multiple queries. The logic I want to get the first N members that have at least one address in the country "US" and I want to display all their titles. OK - so you're aiming for two things - the member's name and their titles. Your conditions are based on the address table data, so you should start your query there: SELECT `idMember` FROM `address` WHERE `country` = 'US' GROUP BY `idMember` LIMIT 2 Your at least one address in the country "US" condition is covered by WHERE country = 'US', and the GROUP BY will limit your duplicated rows. If you want to see how many addresses each member has (or filter on it) you can add COUNT(*) to the select clause to see. You will adjust the LIMIT as required. This gives you the member IDs that you need to retrieve titles for. From here, just query the tables you need and join the results above: SELECT `member`.`id`, `member`.`name`, `title`.`name` FROM `title` INNER JOIN `title2memb` ON `title2memb`.`idRank` = `title`.`id` INNER JOIN ( SELECT `idMember` FROM `address` WHERE `country` = 'US' GROUP BY `idMember` LIMIT 2 ) AS `us_members` ON `us_members`.`idMember` = `title2memb`.`idMember` INNER JOIN `member` ON `member`.`id` = `us_members`.`idMember`; Note that I've added the member's ID here because you'll use it to group the result set with PHP, as MySQL as a relational database is not able to return multi-dimensional results (see NoSQL/MongoDB/DynamoDB etc if you want that!). I have avoided using prepared statements and encoding the input to make the code more readable. I know that the here presented version is vulnerable to XSS. Noted - won't comment on that. 
To implement, you can do something like this: $country = 'US'; $limit = 2; $sql = ' SELECT `member`.`id`, `member`.`name`, `title`.`name` AS `title_name` FROM `title` INNER JOIN `title2memb` ON `title2memb`.`idRank` = `title`.`id` INNER JOIN ( SELECT `idMember` FROM `address` WHERE `country` = "' . $country . '" GROUP BY `idMember` LIMIT ' . $limit . ' ) AS `us_members` ON `us_members`.`idMember` = `title2memb`.`idMember` INNER JOIN `member` ON `member`.`id` = `us_members`.`idMember`; '; $membersResult = $db->query($sql); // Use PHP to group the results by the member ID $members = array(); while ($member = $membersResult->fetch_assoc()) { // Track the member's name and an array of titles against their ID (which is unique) if (!array_key_exists($member['id'], $members)) { $members[$member['id']] = array( 'name' => $member['name'], 'titles' => array() ); } // Add the title $members[$member['id']]['titles'][] = $member['title_name']; } This will give you a simple structured array like so: Array ( [1] => Array ( [name] => Bernd [titles] => Array ( [0] => Bee ) ) [3] => Array ( [name] => Tom [titles] => Array ( [0] => Bear [1] => Dog ) ) ) Now, loop that array and output the data how you want to: foreach ($members as $memberId => $member) { echo sprintf('%s: %s<br>', $member['name'], implode(' ', $member['titles'])); } Example output (from a browser): Bernd: Bee Tom: Bear Dog General formatting While we're here, you should consider a few best practices: Use single quotes always, unless you require double quotes Choose a rule for indentation and stick to it. Four spaces is generally accepted as "the norm" for PHP (see while contents) Add spaces before and after brackets in control structures (see while) Space out arguments in function calls (see $db)
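A quick way to sanity-check the single-query approach is to reproduce the schema in SQLite (the original uses MySQL, but the join and grouping logic are the same; the table contents here are my own guesses matching the example output):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE member (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE address (idMember INTEGER, country TEXT);
    CREATE TABLE title (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE title2memb (idMember INTEGER, idRank INTEGER);

    INSERT INTO member VALUES (1, 'Bernd'), (2, 'Alice'), (3, 'Tom');
    INSERT INTO address VALUES (1, 'US'), (1, 'US'), (2, 'DE'), (3, 'US');
    INSERT INTO title VALUES (1, 'Bee'), (2, 'Bear'), (3, 'Dog');
    INSERT INTO title2memb VALUES (1, 1), (3, 2), (3, 3);
""")

rows = con.execute("""
    SELECT member.id, member.name, title.name AS title_name
    FROM title
    INNER JOIN title2memb ON title2memb.idRank = title.id
    INNER JOIN (
        SELECT idMember FROM address
        WHERE country = 'US'
        GROUP BY idMember
        LIMIT 2
    ) AS us_members ON us_members.idMember = title2memb.idMember
    INNER JOIN member ON member.id = us_members.idMember
""").fetchall()

# Group the flat result set by member id, as in the PHP loop above
members = {}
for member_id, name, title_name in rows:
    members.setdefault(member_id, {"name": name, "titles": []})
    members[member_id]["titles"].append(title_name)

for m in members.values():
    print("%s: %s" % (m["name"], " ".join(m["titles"])))
```

This prints lines like `Bernd: Bee` and `Tom: Bear Dog`, matching the expected output of the original nested-loop version.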
{ "domain": "codereview.stackexchange", "id": 20161, "tags": "php, mysqli, join" }
Which option is better for readability when dealing with blank strings?
Question: I'm working off a specification that says a bunch of fields within a record (one record being one line) must be blank. For example, it says that chars 6-14 must be blank. I am building a class hierarchy structure that represents the layout of the file, and within this class hierarchy I'm including every field for each type of record, including the blank ones. Basically I want to know if I should make each field return a blank literal, or if I should pad string.Empty with spaces. Example: public string ImmediateDestinationRoutingNumber { get { return " "; } } Versus public string ImmediateDestinationRoutingNumber { get { return string.Empty.PadRight(8, ' '); } } The second option of padding string.Empty looks the most readable to me, but I don't know if the runtime cost is significant enough to worry about. Answer: Instead of returning a " " or string.Empty.PadRight(8, ' ') you should extract the values to some meaningful constants. The easiest way would be for your example to just use the overloaded constructor of the String class like private const string eightSpaces = new string(' ', 8); this makes it clear for Mr./Mrs. Maintainer what it stands for. But because this seems to produce CS0133: The expression being assigned to <constant> must be constant. you should just use a readonly string like private readonly string eightSpaces = new string(' ', 8); With your first version, every bug that needs solving will involve counting the spaces being returned. This property is also poorly named: it doesn't represent an ImmediateDestinationRoutingNumber but rather ImmediateDestinationRoutingNumberSpaces.
{ "domain": "codereview.stackexchange", "id": 11857, "tags": "c#, .net, comparative-review" }
Filtering users by criteria in Ruby on Rails
Question: I am a fairly new Ruby on Rails developer. I want to improve my coding skills, so I used Code Climate to do some review on my code. It flagged this method as complex. Is it characterised as complex because it has several "actions/tasks" in a single method? Would it be better if I extracted some code segments into a different method? Is there something else I am not seeing? def search filter_mapping = {"cost" => "rcost", "quality" => "rquality", "time" => "rtime", "experience" => "rexperience", "communication" => "rcommunication"} @filters = params[:filter].split(",") rescue [] @sort = params[:sort] @user_type = params[:s_user_type] @skills = params[:s_skills] @location = params[:location] @max_rate = params[:max_rate] @availability = params[:availability] @users = User.scoped.joins(:user_skills) @users = @users.where('user_type = ?', @user_type) if @user_type.present? @users = @users.where('user_skills.skill_id in (?)', @skills.map(&:to_i)) if @skills.present? && @skills.size > 0 @users = @users.where('availability = ?', @availability) if @availability.present? @users = @users.where('location_country = ?', @location) if @location.present? @users = @users.where('rate < ?', @max_rate.to_i) if @max_rate.present? @users = @users.page(params[:page]).per(PER_PAGE) @filters.each do |f| if filter_mapping.keys.include?(f) @users = @users.order("#{filter_mapping[f]} desc") end end @users = @users.order('id asc') if @filters.empty? @advanced_link = @location.blank? && @max_rate.blank? && @availability.blank? render :index end Update I figured out that I can extract the scopes into a method like this: def get_users_where_scopes @users = User.scoped.joins(:user_skills) @users = @users.where('user_type = ?', @user_type) if @user_type.present? @users = @users.where('user_skills.skill_id in (?)', @skills.map(&:to_i)) if @skills.present? && @skills.size > 0 @users = @users.where('availability = ?', @availability) if @availability.present?
@users = @users.where('location_country = ?', @location) if @location.present? @users = @users.where('rate < ?', @max_rate.to_i) if @max_rate.present? @users = @users.page(params[:page]).per(PER_PAGE) end and then call it with @users = get_users_where_scopes(). But now the complexity of this method seems wrong to me. Answer: I'd say to first make a service object to keep the controller lean and clean, and to give yourself a place to put all the logic without fear of polluting the controller. Plus: It's reusable! # app/services/user_search.rb class UserSearch ORDER_MAPPING = { "cost" => "rcost", "quality" => "rquality", "time" => "rtime", "experience" => "rexperience", "communication" => "rcommunication" }.freeze def initialize(params) @params = params end def results @results ||= begin records = User.scoped.joins(:user_skills) records = scope(records) records = order(records) end end private def param(key) @params[key] if @params[key].present? end def scope(scoped) scoped = add_scope(scoped, 'user_type = ?', param(:user_type)) scoped = add_scope(scoped, 'user_skills.skill_id in (?)', skill_ids) scoped = add_scope(scoped, 'availability = ?', param(:availability)) scoped = add_scope(scoped, 'location_country = ?', param(:location)) scoped = add_scope(scoped, 'rate < ?', max_rate) end def add_scope(scope, sql, *params) params.all?(&:present?) ? scope.where(sql, *params) : scope end def order(scope) terms = sanitized_order_terms || default_order_terms terms.reduce(scope) { |ordered, term| ordered.order(term) } end def sanitized_order_terms terms = param(:filter).to_s.split(",") terms = terms.map { |term| ORDER_MAPPING[term] } terms = terms.compact terms if terms.any? end def default_order_terms ["id asc"] end def skill_ids param(:s_skills).try(:map, &:to_i) end def max_rate param(:max_rate).try(:to_i) end end I've intentionally kept the pagination in the controller, as it's pretty independent of the scoping and ordering.
However, it'd be simple to add as arguments to the #results method. In your controller: def search @users = UserSearch.new(params).results.page(params[:page]).per(PER_PAGE) advanced_params = %w(location max_rate availability).map { |p| params[p] } @advanced_link = advanced_params.all?(&:blank?) render :index end I'd probably pick a more direct way of determining the @advanced_link, such as sending an advanced parameter along, and simply looking at that instead of the implicit state you have now. I have no idea what Code Climate thinks of the code above, but I imagine it'll be happier.
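One pitfall worth calling out in this scope-chaining style: ActiveRecord's where returns a new relation rather than mutating the receiver, so the return value must always be kept. A minimal plain-Ruby sketch (with a hypothetical FakeScope class standing in for an ActiveRecord relation, so it runs without Rails) demonstrates the conditional-chaining pattern:

```ruby
# FakeScope stands in for an ActiveRecord relation: every #where call
# returns a NEW object, so dropping the return value loses the condition.
class FakeScope
  attr_reader :conditions

  def initialize(conditions = [])
    @conditions = conditions
  end

  def where(sql, *params)
    FakeScope.new(@conditions + [[sql, params]])
  end
end

# Apply a condition only when every parameter is present (non-nil,
# non-empty), returning a scope either way so calls can be chained.
def add_scope(scope, sql, *params)
  present = params.all? { |p| !p.nil? && p != '' }
  present ? scope.where(sql, *params) : scope
end

scope = FakeScope.new
scope = add_scope(scope, 'user_type = ?', 'dev')
scope = add_scope(scope, 'rate < ?', nil)            # skipped: blank param
scope = add_scope(scope, 'availability = ?', 'full')
```

After these calls, scope.conditions holds exactly the two applied conditions; the blank-parameter one is silently skipped.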
{ "domain": "codereview.stackexchange", "id": 7222, "tags": "ruby, ruby-on-rails, active-record" }
Why does the shoreline sometimes recede prior to a Tsunami?
Question: It is well known among regular beach goers that a sudden shoreline drawback is often a warning sign for an impending Tsunami. My understanding of Tsunamis is that they form as a result of the seafloor abruptly changing, causing a local vertical displacement of water at the site above the disruption, which initiates the wave. How does this process ultimately result in the shoreline often receding prior to the Tsunami reaching the coast? Answer: It has nothing to do with the geological cause of the tsunami. Instead, it's a result of the way waves propagate. You can see the same effect on ordinary wind-generated ocean waves — the waterline draws back before each wave peak arrives and washes up the beach. Tsunamis are much bigger waves, in terms of both amplitude and wavelength, so the effect is more dramatic. The particles in some surface waves, including wind waves and Rayleigh waves (a component of what is often called ground roll), move in a circular or elliptical motion — in the case of a wind wave the motion is clockwise if the wave is traveling from left to right (see this animated comparison for Rayleigh waves). The 'backwards' motion in the trough results in the drawback you are asking about.
{ "domain": "earthscience.stackexchange", "id": 2713, "tags": "ocean, tsunami" }
Church Numerals - implement one, two, and addition
Question: Given the following exercise: Exercise 2.6 In case representing pairs as procedures wasn't mind-boggling enough, consider that, in a language that can manipulate procedures, we can get by without numbers (at least insofar as nonnegative integers are concerned) by implementing 0 and the operation of adding 1 as (define zero (lambda (f) (lambda (x) x))) (define (add-1 n) (lambda (f) (lambda (x) (f ((n f) x))))) This representation is known as Church numerals, after its inventor, Alonzo Church, the logician who invented the λ-calculus. Define one and two directly (not in terms of zero and add-1). (Hint: Use substitution to evaluate (add-1 zero)). Give a direct definition of the addition procedure + (not in terms of repeated application of add-1). I wrote this solution. I'm not 100% certain what the correct answer is, and I'm not sure of the best way to check my solution, so it's possible that I have made a mistake in my answer. Please let me know what you think. ;given definitions (define zero (lambda (f) (lambda (x) x))) (define (add-1 n) (lambda (f) (lambda (x) (f ((n f) x))))) ; exercise 2.6: define one and two directly - ; not in terms of zero or add-1 (define one (lambda (f) (lambda (x) (f (f x))))) (define two (lambda (f) (lambda (x) (f ((f (f x)) x))))) (define (church-plus a b) (define (church-num n) (cond ((= 0 a) (lambda (x) x)) ((= 1 a) (lambda (f) (lambda (x) (f ((n f) x))))) (else (church-num (- n 1))))) (church-num (+ a b))) Answer: Church defined numerals as the repeated application of a function. The first few numerals are defined thus: (define zero (lambda (f) (lambda (x) x))) (define one (lambda (f) (lambda (x) (f x)))) (define two (lambda (f) (lambda (x) (f (f x))))) ... and so on.
To see how one is derived, begin by defining one as (add-1 zero) and then perform the applications in order as shown in the steps below (I have written each function to be applied and its single argument on separate lines for clarity): ;; Givens (define add-1 (lambda (n) (lambda (f) (lambda (x) (f ((n f) x)))))) (define zero (lambda (f) (lambda (x) x))) (define one (add-1 ;; substitute with definition of add-1 zero)) (define one ((lambda (n) ;; apply this function to argument zero (lambda (f) (lambda (x) (f ((n f) x))))) zero)) (define one (lambda (f) (lambda (x) (f ((zero ;; substitute with definition of zero f) x))))) (define one (lambda (f) (lambda (x) (f (((lambda (f) ;; apply this function to argument f (lambda (x) x)) f) x))))) (define one (lambda (f) (lambda (x) (f ((lambda (x) ;; apply this function to argument x x) x))))) (define one (lambda (f) (lambda (x) (f x)))) ;; we're done! You may try your hand at two in a similar fashion. =) By definition, Church numerals are the repeated application of a function. If this function were the integer increment function and the initial argument were 0, then we could generate all natural numbers 0, 1, 2, 3, .... This is an important insight. Thus, a Church numeral is a function that takes one argument, the increment function, and returns a function that also takes one argument, the additive identity. Thus, if we were to apply a Church numeral to the integer increment function (lambda (n) (+ 1 n)) (or simply add1 in Scheme), which we then apply to the integer additive identity 0, we would get an integer equivalent to the Church numeral as a result. In code: (define (church-numeral->int cn) ((cn add1) 0)) A few tests: > (church-numeral->int zero) 0 > (church-numeral->int one) 1 > (church-numeral->int two) 2 > (church-numeral->int (add-1 two)) 3 > (church-numeral->int (add-1 (add-1 two))) 4 A Church-numeral addition should take two Church numerals as input, and not integers, as in your code. 
We use the insight above about increment functions and additive identities to notice that integer addition is simply repeated incrementing. If we wish to add integers a and b, we begin with the number b and increment it a times to get a + b. The same applies to Church numerals. Instead of using integer increment function and integer additive identity, we use the Church increment function add-1 and we begin with a Church numeral as the additive identity. Thus, we can implement Church-numeral addition as: (define (plus cn-a cn-b) ((cn-a add-1) cn-b)) A few examples: (define three (plus one two)) (define four (plus two two)) (define five (plus two three)) (define eight (plus three five)) ... and associated tests: > (church-numeral->int three) 3 > (church-numeral->int four) 4 > (church-numeral->int five) 5 > (church-numeral->int eight) 8
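As a cross-check (not part of the SICP exercise), the same definitions transcribe directly into Python lambdas, where they can be evaluated:

```python
# Church numerals as repeated function application, transcribed from the
# Scheme definitions above
zero = lambda f: lambda x: x
one = lambda f: lambda x: f(x)
two = lambda f: lambda x: f(f(x))

def add_1(n):
    # Church increment: apply f one more time than n does
    return lambda f: lambda x: f(n(f)(x))

def plus(a, b):
    # apply the Church increment a times, starting from b
    return a(add_1)(b)

def church_to_int(cn):
    # apply the integer increment function cn times to 0
    return cn(lambda k: k + 1)(0)

print(church_to_int(plus(two, add_1(two))))  # 2 + 3 = 5
```

The church_to_int helper plays the same role as church-numeral->int in the Scheme tests above.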
{ "domain": "codereview.stackexchange", "id": 224, "tags": "lisp, scheme, sicp" }
Thought experiment and $E=mc^2$
Question: Suppose we have an indestructible box that doesn't let through any matter or radiation or whatever. In the box, there is matter which is evenly distributed (state A). The energy content of the box is thus $$E=mc^2$$ where $m$ is the rest mass of the matter$^1$. Now suppose the box contains 50% matter and 50% antimatter that reacts in such a way that the outcome of the annihilation is 100% radiation (photons, state B). As energy is conserved, the energy of state B should be $$E=pc$$ where $p$ is the sum over the absolute values of all photon momenta. Question: Could an external observer distinguish between state A and state B? According to $E^2=p^2c^2+m^2c^4 = \text{const}$, state B should behave just like a box with (evenly distributed) mass $m=p/c$, i.e. exactly like state A? What confuses me: Energy from mass (density) does not look the same in the stress-energy-tensor of the Einstein field equations. Even if the momentum vectors are cancelling each other out (which they should), leaving only the time component (energy), what happens with the immense radiation pressure of state B which does not cancel out? Does this lead to observable$^2$ differences between A and B? Where is pressure and stress being accounted for in $E^2=p^2c^2+m^2c^4$? Or is that equation just an approximation that assumes no pressure? $^1$Assuming thermal energy, pressure and stress etc. can be neglected. $^2$ Observable from outside the box. Answer: You did not explicitly state it, but I assume that you also intend the material of the box to be very rigid such that, when the pressure increases inside, the resulting increased stress does not produce any measurable increased strain. I.e. the size of the box is unchanged before and after. This scenario is well studied in the literature for a spherical box. When the pressure inside the box increases there is a stress produced in the walls. That stress is a tension in the walls.
The increased tension in the shell effectively cancels out the increased pressure inside the box. The result is that there is no change outside the box. This is what we would expect due to Birkhoff’s theorem. I am not aware of any investigation with non-spherical geometries. It certainly is possible that there would be an external change of some sort due to dipole or quadrupole effects, but certainly it would be small since there would be no monopole change.
{ "domain": "physics.stackexchange", "id": 82535, "tags": "special-relativity, mass-energy, observers, stress-energy-momentum-tensor, thought-experiment" }
Wavelets / Identifying bursty signals
Question: I thought my network was experiencing outages. I collected bandwidth (Mbps) measurements every 30 min. A cumulative probability distribution shows the probability that bandwidth will exceed any given value (e.g. P(x > X)). However, this was not very insightful. A continuous wavelet transform using PyWavelets did not show any strong peaks (i.e. heat map of cwt in scale and time). That meant to me that there was not a consistent duration for the outages. What would be some ways to analyze the signal to see the time scale of outages? [Figures in the original post: sample data, cumulative probability, CWT using PyWavelets] Answer: I thought my network was experiencing outages. I collected bandwidth (Mbps) measurements every 30 min. . . . What would be some ways to analyze the signal to see the time scale of outages? An "outage" is a random event. Therefore, looking for regularity in outages might be hinting at something more widely already known. The answers to the questions in the comments are very much dependent on the objectives. But I will try to make this a bit more constructive. For example: The CDF presented is over 1400 instances at 30 minute intervals and it therefore gives you the probability of sustained download at a particular speed over 1400*30 minutes or about 30 days (if the horizontal axis depicts the index of the measurement). If you were asking if your line would sustain a download speed over the next 15 minutes, the answer would be "I don't know", because you have not measured anything more frequently than every 30 minutes. On their own, wavelets and discrete Fourier analysis cannot tell you much. They can tell you that a particular segment of the signal suddenly requires many more components to describe it and this usually happens at discontinuities but this helps in recognising a "spike" which you can already do here by thresholding (for example).
Rather, if you suspect that there is some regularity in the outages and you want to estimate it, then what you can do is some form of time series decomposition. You can run autocorrelation for example and this will already give you a handle on periodicity. Or a full blown discrete Fourier transform (or even wavelet transform) to determine periodic components more accurately. By determining the Trend and Seasonality components of your signal, you can then create "filters" to discover areas where it veers off its usual course. (And then of course, having discovered the timestamps of those events, you can analyse (or try to predict) the time that passes between them and see if there is anything useful in there). But notice here, we always need a time reference. For example, when you assess Trend, a trend over what time scale? This will then determine how often you need to observe the signal. Hope this helps.
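As a sketch of the autocorrelation idea (using a synthetic signal, since the original sample data isn't reproduced here): if outages recur with a roughly fixed period, the normalised autocorrelation function shows a clear peak at the lag corresponding to that period.

```python
def autocorr(x, max_lag):
    """Normalised autocorrelation of a 1-D signal (pure Python)."""
    n = len(x)
    mean = sum(x) / n
    var = sum((v - mean) ** 2 for v in x)
    acf = []
    for lag in range(max_lag + 1):
        c = sum((x[t] - mean) * (x[t + lag] - mean) for t in range(n - lag))
        acf.append(c / var)
    return acf

# Synthetic "bandwidth" series: one sample per 30 min, with an outage
# (zero throughput) every 16 samples, i.e. every 8 hours
signal = [10.0] * 128
for t in range(0, 128, 16):
    signal[t] = 0.0

acf = autocorr(signal, 32)
# The hidden 8-hour periodicity shows up as peaks at lags 16 and 32,
# while the acf stays near zero at other nonzero lags.
```

The same estimate scales to real measurements; the lag of the first strong peak (times the 30-minute sampling interval) is the candidate outage period.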
{ "domain": "dsp.stackexchange", "id": 8841, "tags": "fft, python, wavelet" }
Original proof that "almost all decision problems are uncomputable"?
Question: Who gave the original proof that "almost all decision problems are uncomputable"? Any hint at the original paper appreciated, thanks! Answer: So the answer to my original question is the following: Because the statement is a corollary of the two facts that (1) every program (Turing machine) is a finite binary string, i.e., a natural number, and (2) every decision problem is a function $f: \mathbb{N} \to \{0,1\}$ and hence corresponds to a number $x \in [0,1)$ (and every such $x$ indeed encodes a meaningful decision problem, see Sasho Nikolov's first comment above), the question about the original author / paper of this simple corollary is not that interesting. (And the origins of the mentioned facts themselves I can trace back, of course.) EDIT: I add here that this is the answer modulo the objection by Damiano Mazza (see in the comments above), which I need to check/think about first.
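The counting argument behind those two facts can be written out in a couple of lines (my own sketch, not attributed to any particular paper):

```latex
% Programs are finite binary strings, so there are only countably many;
% decision problems correspond to infinite binary sequences, of which
% there are uncountably many:
\begin{align*}
  |\{\text{Turing machines}\}| \;\le\; |\{0,1\}^{*}| &= \aleph_0, \\
  |\{\text{decision problems}\}| \;=\; |\{0,1\}^{\mathbb{N}}| &= 2^{\aleph_0}.
\end{align*}
% Hence the computable decision problems form a countable subset of an
% uncountable set: almost all decision problems are uncomputable.
```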
{ "domain": "cstheory.stackexchange", "id": 4141, "tags": "computability" }
Conway's Game of Life implemented in Python
Question: I'm a Python novice and decided to give Game of Life a try in Python 3: import os import time import curses class Cell: newlabel = '' def __init__(self,row,col,label): self.row = row self.col = col self.label = label def updatecell(self,celllist, boardsize): liveneighbours = self.getliveneighbours(celllist, boardsize) if liveneighbours < 2 or liveneighbours > 3: #Kills cell if it has <2 or >3 neighbours self.newlabel = ' ' elif liveneighbours == 2 and self.label == ' ': #Cell doesn't come alive if it is dead and only has 2 neighbours self.newlabel = ' ' else: #Cell comes alive if it is already alive and has 2/3 neighbours or if it is dead and has 3 neighbours self.newlabel = 0 def getliveneighbours(self,celllist,boardsize): count = 0 for row in range(self.row-1,self.row+2): try: celllist[row] except: #Handles vertical wrapping. If there's an error, it checks the wrapped cell if row < 0: row += boardsize[0] else: row -= boardsize[0] for col in range(self.col-1,self.col+2): try: #Handles horizontal wrapping. 
If there's an error, it checks the wrapped cell celllist[row][col] except: if col < 0: col += boardsize[1] else: col -= boardsize[1] if not celllist[row][col].label: count += 1 if not self.label: #Subtracts the cell from the neighbours count return count-1 else: return count def updatelabel(self): #Updates the cell's label after all cell updates have been processes self.label = self.newlabel class Board: celllist = {} #Dict of rows def __init__(self, rows, columns): self.rows = rows self.columns = columns self.generate() def printboard(self): #Prints the board to the terminal using curses for num, row in self.celllist.items(): line = '' for col, cell in enumerate(row): line += str(cell.label) terminal.addstr(num, 0, line) terminal.refresh() def generate(self): #Adds all the cells to the board for row in range(self.rows-1): self.celllist[row] = [] for col in range(self.columns): self.celllist[row].append(Cell(row,col,' ')) def updateboard(self): #Prompts each cell to update and then update their label for row in self.celllist: for cell in self.celllist[row]: cell.updatecell(self.celllist, (self.rows-1, self.columns)) for row in self.celllist: for cell in self.celllist[row]: cell.updatelabel() if __name__ == "__main__": terminal = curses.initscr() # Opens a curses window curses.noecho() curses.cbreak() terminal.nodelay(1) #Don't wait for user input later rows, columns = os.popen('stty size', 'r').read().split() board = Board(int(rows), int(columns)) board.celllist[6][8].label = 0 board.celllist[7][9].label = 0 board.celllist[7][10].label = 0 board.celllist[8][8].label = 0 board.celllist[8][9].label = 0 while 1: board.printboard() time.sleep(0.1) board.updateboard() char = terminal.getch() if char == 113: #Checks for ASCII Char code for q and then breaks loop break curses.endwin() #Closes curses window I've written it to work based on your terminal size and I've made the sides wrap around because it was the only solution I could think of. 
Also, at the end, I've included a glider example as a test. Questions: Is my implementation pythonic enough (particularly concerning ranges and my constant use of iteration)? Are the data structures that I've used for the celllist (arrays in a dict) suitable? Is curses a good way of displaying the game? Would pyGame be better? Is the overall code style good? To further develop this, is there a better algorithm I could implement or a better method (other than wrapping the sides) of displaying all the cells? Answer: Running your code through pycodestyle will highlight some issues to make the code more familiar to Pythonistas. Other than that: while 1 should be written as while True (Explicit is better than implicit) Rather than checking for a specific character to quit I would just assume that people know about Ctrl-c Use as easily readable names as possible. updatecell, for example, could be update_cell or even update since it's implicit that it's the Cell that's being updated. Usually if __name__ == "__main__": is followed by simply main() or sys.exit(main()) (probably not relevant in your case). This makes the code more testable. Aim to minimize the amount of top-level code in general. The board is really a matrix, so it would IMO be better to represent it using a list of lists or ideally a matrix class. A dict has no intrinsic order, so semantically it's the wrong thing to use in any case. Somebody else will have to answer this. This is too subjective to answer. Some suggestions: Try to take into account the fact that the Game of Life board is infinite in all directions. Ideally your implementation should handle this, but you could also use it as a stop condition - when things get too close to the edge to be able to figure out the next step with certainty the program could stop. You can avoid conversions between numbers and strings by using numbers or even booleans everywhere (for example stored in a value or state field rather than label). 
You can then write a separate method to display a Board, converting values to whatever pair of characters you want, and possibly with decorations. Rather than updating Cells individually by saving their new value you can replace the entire Board with a new one at each step. The time.sleep() currently does not take into account that the Board update time might change. This is unlikely to be a problem for a small board, but if you want completely smooth frames you'll want to either use threads or a tight loop which checks whether it's time to print the current Board yet.
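To illustrate the "replace the entire Board with a new one at each step" suggestion, here is a sketch of a pure step function over a list-of-lists board with the same wrap-around behaviour (the names are my own, not from the original code):

```python
def step(board):
    """Return the next generation as a new list-of-lists board (1 = alive)."""
    rows, cols = len(board), len(board[0])

    def live_neighbours(r, c):
        # Wrap around both edges with modular indexing instead of try/except
        return sum(
            board[(r + dr) % rows][(c + dc) % cols]
            for dr in (-1, 0, 1)
            for dc in (-1, 0, 1)
            if (dr, dc) != (0, 0)
        )

    new_board = []
    for r in range(rows):
        new_row = []
        for c in range(cols):
            n = live_neighbours(r, c)
            # Alive next step: exactly 3 neighbours, or alive with exactly 2
            new_row.append(1 if n == 3 or (n == 2 and board[r][c]) else 0)
        new_board.append(new_row)
    return new_board
```

A blinker (three live cells in a row) oscillates with period two, which makes a handy unit test for the rules.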
{ "domain": "codereview.stackexchange", "id": 30051, "tags": "python, beginner, python-3.x, game-of-life, curses" }
Assigning balls to bins with constraints
Question: Let $S= \{ b_{11}, b_{12}, b_{21}, b_{22}, b_{31}, b_{32},\dots, b_{n1}, b_{n2} \}$ be a set of $2n$ balls grouped in $n$ pairs, and $T = \{ B_1, B_2, \dots, B_m\}$ be a set of $m$ bins with capacities $\overline B_{1}, \overline B_{2}, \dots, \overline B_{m}$, respectively. Considering that each ball is stored in exactly one bin, the problem is to find the maximum number of balls that can be stored in those $m$ bins under the following two constraints. First, for each ball $b_{ij}$ there is a set of bins $\hat B_{ij}$ such that $b_{ij}$ can only be stored in a bin in $\hat B_{ij}$. Second, balls forming a pair (e.g., $b_{21}$ and $b_{22}$) cannot be stored in the same bin. My attempt consists of modelling this problem as a max-flow problem and then running the Ford–Fulkerson algorithm. I've built a graph $G$ with starting node $s$ and ending node $t$. The first layer of nodes are the $2n$ balls, and my second layer of nodes are the $m$ bins. The first layer and second layer are connected to match the first constraint and the capacities of those edges are all equal to one. All the nodes of the second layer are connected to the ending node $t$, and their capacities are $\overline B_{1}, \overline B_{2}, \dots, \overline B_{m}$. I don't know how to connect the starting node $s$ to the first layer of nodes and what the capacity of those edges should be. It seems that it doesn't matter, but I'm not sure. Also, I don't know how to handle the second constraint. Answer: You are on the right track. Connect the starting node $s$ to each node in the first layer with an edge of capacity one. Since the balls $b_{i1}$ and $b_{i2}$ cannot be stored in the same bin, for each bin $X$ in $\hat B_{i1}\cap\hat B_{i2}$, instead of connecting $b_{i1}$ and $b_{i2}$ directly to the node $X$ in the second layer, construct a node $c_{i,X}$ that connects to $X$ with capacity 1. connect $b_{i1}$ to $c_{i,X}$ with an edge of capacity one.
connect $b_{i2}$ to $c_{i,X}$ with an edge of capacity one.
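A sketch of the whole construction in Python (the node names and the Edmonds-Karp max-flow implementation are mine; the gadget node $c_{i,X}$ is exactly as described above):

```python
from collections import defaultdict, deque

def max_flow(cap, s, t):
    """Edmonds-Karp max flow on a nested capacity dict cap[u][v]."""
    flow = 0
    while True:
        # BFS for a shortest augmenting path in the residual graph
        parent = {s: None}
        queue = deque([s])
        while queue and t not in parent:
            u = queue.popleft()
            for v, c in cap[u].items():
                if c > 0 and v not in parent:
                    parent[v] = u
                    queue.append(v)
        if t not in parent:
            return flow
        # Recover the path and push the bottleneck capacity along it
        path, v = [], t
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        bottleneck = min(cap[u][v] for u, v in path)
        for u, v in path:
            cap[u][v] -= bottleneck
            cap[v][u] += bottleneck
        flow += bottleneck

def build_network(allowed, bin_caps):
    """allowed[i] = (allowed bins for ball i1, for ball i2); bin_caps[j] = capacity."""
    cap = defaultdict(lambda: defaultdict(int))
    for j, c in enumerate(bin_caps):
        cap[('bin', j)]['t'] = c
    for i, (a1, a2) in enumerate(allowed):
        shared = set(a1) & set(a2)
        for k, bins in ((1, a1), (2, a2)):
            cap['s'][('ball', i, k)] = 1
            for j in bins:
                if j in shared:
                    # gadget node: at most one ball of pair i may enter bin j
                    cap[('ball', i, k)][('c', i, j)] = 1
                    cap[('c', i, j)][('bin', j)] = 1
                else:
                    cap[('ball', i, k)][('bin', j)] = 1
    return cap

# One pair whose two balls are both restricted to bin 0 (capacity 2):
# the gadget caps the pair's flow into that bin at one ball
print(max_flow(build_network([({0}, {0})], [2]), 's', 't'))  # prints 1
```

The max-flow value equals the maximum number of balls that can be stored; without the gadget node, the same instance would wrongly report 2.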
{ "domain": "cs.stackexchange", "id": 19123, "tags": "algorithms, graphs, optimization, discrete-mathematics, network-flow" }
Thermodynamic energy balance
Question: A fan that consumes 20 W of electric power when operating is claimed to discharge air from a ventilated room at a rate of 0.25 kg/s at a discharge velocity of 8 m/s. Is this claim reasonable? I am getting a maximum possible air velocity of around 12.64 m/s, so the claim seems reasonable to me, but all the solutions available on the internet say the maximum possible velocity is 6.3 m/s, so it's not reasonable. Please help. The internet solution for the question: Answer: Other than the algebraic mistake and difficulties in properly analyzing the fan pointed out by @Daniel Hatton, I would like to add a note on the intent of formulating the problem like this (which is incomplete by design), reproduced from "Thermodynamics: An Engineering Approach" by Cengel and Boles. As per the first law of thermodynamics, the energy is conserved as it is converted from one form to another, and so, there is nothing wrong with the conversion of all the electrical energy into kinetic energy of air, for a steady-state system: $$ \dot{Q} - \dot{W} = \dot{m}_{\text{air}} (\Delta \text{internal energy} + \Delta\text{potential energy} + \Delta\text{kinetic energy})$$ Now, our ideal case is that there is no heat in or out of our control volume, $\dot{Q} = 0$, no change in internal energy of air and no change in potential energy. That leaves us with: $$ -\dot{W} = \dot{m}_{\text{air}} (\Delta\text{kinetic energy}) = \frac{1}{2}\dot{m}_{\text{air}}(v_{\text{out}}^2 - v_{\text{in}}^2)$$ What if we had an imaginary situation where the inlet flow were completely stagnant $v_{\text{in}} = 0$, then as per the first law, all the electrical $20 \ \text{J/s}$ would be converted to kinetic energy of the stagnant inlet flow: $$- \dot{W} = \frac{1}{2} \dot{m}_{\text{air}} v_{\text{out}}^2 = - (-20) \text{J/s}$$ $$ v_{\text{out}} = \sqrt{\frac{2 \times 20}{ 0.25 }} = 12.649 \ \text{m/s}$$ So, the first law has no objection to air velocity reaching 12.649 m/s, but this is the outlet velocity upper bound.
Any analysis that obtains a higher velocity violates the first law. Now as per our first law analysis (and under same assumptions), the following holds: Someone tells you the outlet velocity of this fan is 8 m/s. It could be. Someone tells you the outlet velocity is 13.0 m/s, now that's impossible. So, the purpose of this problem is just to demonstrate upper bounds enforced by first law of thermodynamics. And the second law has a whole different say!
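The first-law bound above is easy to check numerically. A minimal sketch, using only the numbers given in the problem (20 W, 0.25 kg/s, stagnant inlet):

```python
# Hedged numeric check of the first-law upper bound discussed above.
import math

W_dot = 20.0   # W, electrical power, assumed fully converted to kinetic energy
m_dot = 0.25   # kg/s, mass flow rate of air

# -W_dot = 0.5 * m_dot * v_out**2  (Q = 0, v_in = 0, no change in U or PE)
v_max = math.sqrt(2.0 * W_dot / m_dot)
print(f"Upper bound on outlet velocity: {v_max:.3f} m/s")  # ≈ 12.649 m/s

# The claimed 8 m/s would only require:
P_needed = 0.5 * m_dot * 8.0**2
print(f"Power needed for 8 m/s: {P_needed:.1f} W")  # 8.0 W < 20 W, so plausible
```

The second print makes the same point from the other direction: accelerating 0.25 kg/s of stagnant air to 8 m/s needs only 8 W of the available 20 W.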
{ "domain": "engineering.stackexchange", "id": 5236, "tags": "mechanical-engineering, thermodynamics" }
Proof of correctness of divide and conquer clique algorithm
Question: I have the following divide and conquer algorithm that finds a clique in an undirected graph $G = (V, E)$:

CLIQUE(G):
1) Number the vertices V as 1, 2, 3, ..., n where n = |V|
2) If n = 1, return V
3) Divide V into V1 = {1, 2, ..., [n/2]} and V2 = {[n/2]+1, ..., n}
4) Letting G1 be the subgraph induced by V1 and G2 the subgraph induced by V2, find C1 = CLIQUE(G1) and C2 = CLIQUE(G2)
5) Combine the results in the following way:
   5.1. Initialize C1+ as C1 and C2+ as C2
   5.2. For every vertex v ∈ C2, if v is connected to all vertices in C1+ then add v to C1+
   5.3. For every vertex v ∈ C1, if v is connected to all vertices in C2+ then add v to C2+
   5.4. Return the bigger of C1+ and C2+

The algorithm is fixed and cannot be changed. I need to prove that it always returns a subgraph of G that is a clique (I don't know how to do it) and find its time complexity. Regarding the time complexity, analyzing the algorithm I arrived at the recurrence $T(n) = n + n \cdot m + 2T(n/2)$, which following the master theorem leads to an $O(n \cdot m)$ complexity, but I found that clique algorithms are exponential, so I must have done something wrong. Thanks. Answer: Assuming you wanted your algorithm to return a maximum clique, here's a small counterexample: Let $G$ be the triangle $abc$, plus the two isolated points $d$ and $e$. Suppose that $V$ is divided so that $V_1 = \{a, d\}$ and $V_2 = \{b, c, e\}$. Then CLIQUE($G_1$) will return either $a$ or $d$; suppose it returns $d$. CLIQUE($G_2$) will return (at best; i.e., if it were in fact implemented correctly) the edge $bc$. But neither $b$ nor $c$ is adjacent to every vertex in $C_1$ (in fact neither is adjacent to any vertex in $C_1$, since $C_1 = \{d\}$ and $d$ is an isolated point), and likewise $d$ is not adjacent to every (in fact any) vertex in $C_2$ (since $d$ remains an isolated point). Thus step 5 fails to find a bigger clique, despite the fact that $abc$ is a clique of size 3.
This counterexample also shows that the algorithm can fail to find a maximal clique, which is a much weaker requirement.
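The counterexample can be checked mechanically. This is a sketch, not the asker's code: the adjacency-set representation and the `combine` helper (my rendering of step 5) are assumptions.

```python
# Mechanical check of the counterexample: triangle {a, b, c} plus isolated d, e.
adj = {
    'a': {'b', 'c'}, 'b': {'a', 'c'}, 'c': {'a', 'b'},
    'd': set(), 'e': set(),
}

def combine(c1, c2):
    """Step 5 of CLIQUE: greedily grow each half's clique with the other's vertices."""
    c1p, c2p = set(c1), set(c2)
    for v in c2:
        if c1p <= adj[v]:   # v is connected to all vertices currently in C1+
            c1p.add(v)
    for v in c1:
        if c2p <= adj[v]:   # v is connected to all vertices currently in C2+
            c2p.add(v)
    return max(c1p, c2p, key=len)

# Suppose the recursive calls returned C1 = {d} and C2 = {b, c}, as in the answer.
result = combine({'d'}, {'b', 'c'})
print(result)  # {'b', 'c'}: size 2, missing the size-3 triangle {a, b, c}
```

Note that with a luckier split, e.g. C1 = {a} and C2 = {b, c}, the same combine step does recover the triangle, which is why the algorithm's failure depends on how V is divided.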
{ "domain": "cs.stackexchange", "id": 7345, "tags": "algorithm-analysis, correctness-proof, divide-and-conquer" }
how image_proc helps in ar_pose
Question: I noticed that ar_pose subscribes only to camera_info and image_raw, while image_proc doesn't publish image_raw. So I wonder, what is its purpose in ar_pose? By the way, I noticed that the surrounding lighting affects the ar_pose performance greatly. Is there anything I can look at to reduce the effect of the lighting or increase ar_pose performance (e.g. detection distance etc.)? Originally posted by chao on ROS Answers with karma: 153 on 2014-01-02 Post score: 1 Answer: "image_proc" is simply a collection of tools to process images. Your camera driver should publish the camera info and an image (typically, by convention, we name this image topic "image_raw" as it is typically not rectified, etc., as might possibly be done using the tools found in image_proc). You can however remap the "image_raw" topic to any image topic you like, so long as you have a corresponding "camera_info" topic to match that image topic. The "camera_info" topic is needed to get the camera intrinsics necessary to project the tags found in the image into 3D (thus, if you use camera info which does not correspond to the image topic used, you will still find tags, but they will be at improper locations). As for what role image_proc might play here, it can be used to perform rectification of the raw camera image, which might be useful in correcting lens deficiencies. As for lighting, there is a threshold level that can be adjusted, but that is about it. If you need greater distance or better low-light performance, you'll typically need to look for a better camera. Originally posted by fergs with karma: 13902 on 2014-01-05 This answer was ACCEPTED on the original site Post score: 0
{ "domain": "robotics.stackexchange", "id": 16562, "tags": "ar-pose" }
Why does ada (adaboost) in R return different training error graphs and variable importance plots when running the same function multiple times?
Question: Question says most of it. I created a matrix of descriptors, set the vectors of responses, and input a set number of iterations. Each time I run the function with the same exact inputs, I get the same confusion table, but I get a different training error graph and different variable importance plots. Answer: Boosting, together with bagging, falls into the realm of so-called ensemble models: you randomly draw a sample from the data, fit a model, adjust your predictions, and sample once again. Unless your samples are fixed (for example, by setting a random seed before each run), every time you run the algorithm you'll get slightly different results.
{ "domain": "datascience.stackexchange", "id": 477, "tags": "r" }
Mutation rate breakdown by original and mutated nucleotide for Coronaviridae
Question: In a discussion of the mutations S:Q677H and S:Q677P in SARS-CoV-2 it was mentioned that the mutations leading to this result are "against the tendency" of preferred mutations on the nucleotide level. It was also stated as a well-known fact that the rate of mutations depends on the original and mutated base. So I'd like to know the mutation rates for all possible base pairs in Human Coronaviruses, or a pointer to a reference where I can look them up. A less specific but still applicable reference is also welcome, as well as relative mutation frequencies given in some arbitrary units. EDIT in response to comment by @David: This is the relevant part of a preprint by Hodcroft et al. at the top of page 5: The amino acid Q changes to H due to a mutation at nucleotide position 23593. Notably, in four of these six lineages, the mutation changes from G to U, whereas in the other two, it changes from G to C (FIG 2B). In contrast, the S:Q677P variant occurs by virtue of an A to C change at position 23592. All mutations leading to Q677H or Q677P involve transversions. Hence, their spontaneous occurrence is generally disfavored relative to transitions. In SARS-CoV-2 samples from human infections, A to C and G to C transversions occur at only ~10% the frequency of C to U transitions, while G to U mutations are more common, occurring half as frequently as C to U. (Wright, Lakdawala and Cooper, 2020) (Ratcliff and Simmonds, 2021). EDIT 2: Learning more about the terminology, I am looking for a Nucleotide Substitution Model or a Mutation Profile in the described environment, especially in a best fit of the empirical data. Answer: I finally found a paper with numbers, it is Host-directed editing of the SARS-CoV-2 genome by Tobias Mourier, Mukhtar Sadykov, Michael J. Carr, Gabriel Gonzalez, William W. Hall, and Arnab Pain. The authors give relative numbers in percent, so all mutations sum to 100%.
C>U is by far the most frequent one, making up 36.9% of all mutations, followed by G>U 17.6%, U>C 12.6%, A>G 10.9%, and G>A 10.6%. The rest are much rarer: C>A 2.6%, A>U 2.4%, A>C 1.6%, G>C 1.6%, U>G 1.5%, U>A 1.4%, and finally the rarest one, C>G 0.3%.
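As a quick consistency check, the percentages listed above can be tabulated and summed; the transition/transversion split also falls out directly. The numbers are copied from the answer (Mourier et al.) and should be treated as illustrative of that dataset only:

```python
# Relative mutation frequencies quoted above, as a table.
freq = {
    'C>U': 36.9, 'G>U': 17.6, 'U>C': 12.6, 'A>G': 10.9, 'G>A': 10.6,
    'C>A': 2.6, 'A>U': 2.4, 'A>C': 1.6, 'G>C': 1.6, 'U>G': 1.5,
    'U>A': 1.4, 'C>G': 0.3,
}
total = sum(freq.values())
print(f"Total: {total:.1f}%")  # ≈ 100% (up to rounding)

# Transitions (purine<->purine, pyrimidine<->pyrimidine) vs transversions:
transitions = {'C>U', 'U>C', 'A>G', 'G>A'}
ti = sum(v for k, v in freq.items() if k in transitions)
tv = total - ti
print(f"Transitions: {ti:.1f}%, transversions: {tv:.1f}%")  # 71.0% vs 29.0%
```

The four transitions alone account for about 71% of all observed substitutions in this dataset, matching the general statement in the Hodcroft et al. quote that transversions are disfavored.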
{ "domain": "biology.stackexchange", "id": 11810, "tags": "mutations, coronavirus, nucleotide" }
Allocating flows in a network while avoiding a particular node
Question: I am considering a network with the max flow problem in a particular situation. I have a set of flows which should pass through a certain node A, and another set of flows which should avoid A and pass through another node B. The flows can cross each other at any other arbitrary node. Under what category of network flow problems should this be considered? Answer: Flows with more than one "thing" flowing are known as "multicommodity flows". The basic definitions assume that every thing can flow through every vertex and edge. However, the standard way of solving these problems is linear programming, and you could easily modify the normal multicommodity flow program to deal with your situation, just by substituting zero for the appropriate variables.
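To make the "substitute zero for the appropriate variables" idea concrete: forcing one commodity to avoid a node is equivalent to deleting that node (i.e., fixing all of its flow variables to zero) before solving that commodity's flow problem. The following is an illustrative single-commodity sketch, not the LP formulation the answer refers to; the graph, capacities, and helper names are made up, and the max-flow routine is a plain BFS-augmenting-path (Edmonds–Karp) implementation.

```python
from collections import deque

def max_flow(cap, s, t):
    """Edmonds–Karp max flow on a nested capacity dict cap[u][v]."""
    residual = {u: dict(vs) for u, vs in cap.items()}
    residual.setdefault(t, {})
    for u, vs in cap.items():          # ensure reverse edges exist with 0 capacity
        for v in vs:
            residual.setdefault(v, {}).setdefault(u, 0)
    flow = 0
    while True:
        parent = {s: None}             # BFS for a shortest augmenting path
        q = deque([s])
        while q and t not in parent:
            u = q.popleft()
            for v, c in residual[u].items():
                if c > 0 and v not in parent:
                    parent[v] = u
                    q.append(v)
        if t not in parent:
            return flow
        path, v = [], t                # recover the s->t path
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        push = min(residual[u][v] for u, v in path)
        for u, v in path:              # push the bottleneck along the path
            residual[u][v] -= push
            residual[v][u] += push
        flow += push

def without_node(cap, banned):
    """Fix all flow through `banned` to zero by deleting the node outright."""
    return {u: {v: c for v, c in vs.items() if v != banned}
            for u, vs in cap.items() if u != banned}

# Made-up example: two disjoint unit-capacity paths s->A->t and s->B->t.
cap = {'s': {'A': 1, 'B': 1}, 'A': {'t': 1}, 'B': {'t': 1}, 't': {}}
print(max_flow(cap, 's', 't'))                     # 2
print(max_flow(without_node(cap, 'A'), 's', 't'))  # 1: the flow must avoid A
```

In the actual multicommodity LP, the same effect is achieved without touching the graph: for the commodity that must avoid A, constrain every variable representing flow into or out of A to be zero.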
{ "domain": "cs.stackexchange", "id": 7687, "tags": "graphs, network-flow" }
Computing the Poynting vector?
Question: Approaching the following problem: A plane monochromatic electromagnetic wave with wavelength $\lambda = 2.0 cm$, propagates through a vacuum. Its magnetic field is described by $ > \vec{B} = ( B_x \hat{i} + B_y \hat{j} ) \cos(kz + ωt) $, where $B_x = 1.9 \times 10^{-6} T, B_y = 4.7 \times 10^{-6} T$, and $\hat i$ and $\hat j$ are the unit vectors in the $+x$ and $+y$ directions, respectively. What is $S_z$, the $z$-component of the Poynting vector at $(x = 0, y = 0, z = 0)$ at $t = 0$? Question 1 What is the Poynting vector in a general sense? i.e. Abstractly, what am I trying to compute? I know it is described as the directional energy flux density of an electromagnetic field. But, it is not traveling through a symmetric surface or anything of this nature so what am I trying to compute here? Question 2 How do I actually compute the value I am looking for? I know $\vec S \equiv \frac{\vec E \times \vec B}{ \mu_0}$ where $\vec E$ is the electric field, $\vec B$ is the magnetic field, and $\mu_0 = 4 \pi \times 10^{-7}$. But, I was under the impression that the magnetic field and the electric field are always perpendicular, why isn't $S_z = \frac{ c}{ \mu_0}$ since we are in a vacuum and $E = cB$? Answer: The Poynting vector $\mathbf{S}$ represents the flow of energy in an EM field. Specifically, if $u$ is the energy density of the field, the Poynting vector satisfies the continuity equation for it: $$\frac{\partial u}{\partial t}+\nabla\cdot\mathbf{S}=0$$ in vacuum. (This is Poynting's theorem.) In your particular problem, $E$ and $B$ are perpendicular and their cross product is proportional to the product of their amplitudes. Thus $$S_z={c\over\mu_0}B^2.$$ You then have to use your knowledge of $B$ to work out $S$.
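Plugging the given amplitudes into $|S| = cB^2/\mu_0$ gives a concrete number. A small sketch (note one subtlety glossed over here: with a $\cos(kz + \omega t)$ dependence the wave travels toward $-z$, so the sign of $S_z$ depends on that direction; only the magnitude is computed below):

```python
import math

mu0 = 4e-7 * math.pi        # vacuum permeability, T·m/A
c = 2.998e8                 # speed of light, m/s
Bx, By = 1.9e-6, 4.7e-6     # T; at (0, 0, 0) and t = 0 the cosine factor is 1

B2 = Bx**2 + By**2          # |B|^2, since the two components are in phase
S = c * B2 / mu0            # |S| = c B^2 / mu0  (using E = cB and E ⟂ B)
print(f"|S| ≈ {S:.3e} W/m^2")  # on the order of 6.1e3 W/m^2
```

This also answers the asker's second question numerically: $S$ is not simply $c/\mu_0$; the factor $B^2 \approx 2.57\times10^{-11}\ \mathrm{T^2}$ is what brings the huge $c/\mu_0$ down to a few kW/m².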
{ "domain": "physics.stackexchange", "id": 4167, "tags": "homework-and-exercises, electromagnetism, poynting-vector" }
Complexity of special instance of Knapsack
Question: Consider the following algorithmic task: Given: primes $p_1,\dots,p_k$, and positive integers $e_1,\dots,e_k$, and a positive integer $M$ Find: integers $d_1,\dots,d_k$ that maximize $\prod_i p_i^{d_i}$, subject to the requirements that $0 \le d_i \le e_i$ and $\prod_i p_i^{d_i} \le M$. What is the complexity of this problem? Is it NP-hard? This problem is equivalent to the following: Given: primes $p_1,\dots,p_k$, and positive integers $e_1,\dots,e_k$, and a positive integer $M$ Find: integers $d_1,\dots,d_k$ that maximize $\sum_i d_i \log p_i$, subject to $0 \le d_i \le e_i$ and $\sum_i d_i \log p_i \le \log M$. This can be seen to be an instance of bounded knapsack with $k$ items, where the $i$th item has weight $\log p_i$ and value $\log p_i$ and can be used at most $e_i$ times, and where the capacity of the knapsack is $\log M$. (See the footnote below for a more careful statement.) Thus, my problem is a special case of bounded knapsack. It follows that my problem is in NP. But is my problem NP-hard? For instance, is there a way to reduce from bounded knapsack to my problem? It's not clear, because my problem limits the weights and values to be something specific related to the log of primes, rather than arbitrary values. Footnote: strictly speaking, the bounded knapsack requires item weights and values to be integers, whereas $\log p_i$ isn't an integer. However, we can fix this up by picking a sufficiently large constant $K$ and using $\lfloor K \log p_i \rfloor$ as the weight and value of the $i$th item, and use $\lfloor K \log M \rfloor$ as the capacity of the knapsack. With this fix-up, my problem now becomes a special case of bounded knapsack, or reduces to bounded knapsack. Answer: Your problem is at least as hard as the known-prime-factor version of this problem. Shor's reduction also applies to that version, and can be made effective by using this answer rather than the Wikipedia link in Shor's answer.
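For small instances the problem can be solved by brute force, which is handy for sanity-checking any cleverer algorithm. This is an illustrative sketch with made-up inputs, not a claim about the problem's complexity:

```python
# Brute force: maximize prod p_i^{d_i} subject to 0 <= d_i <= e_i and product <= M.
from itertools import product

def best_divisor(primes, exps, M):
    best = 1  # d_i = 0 for all i is always feasible
    for ds in product(*(range(e + 1) for e in exps)):
        val = 1
        for p, d in zip(primes, ds):
            val *= p ** d
        if val <= M and val > best:
            best = val
    return best

# Tiny example: p = (2, 3, 5), e = (3, 2, 1), M = 40.
# Feasible values are the divisors of 2^3 * 3^2 * 5 = 360 that are <= 40.
print(best_divisor((2, 3, 5), (3, 2, 1), 40))  # 40 = 2^3 * 5
```

The search space is $\prod_i (e_i + 1)$ exponent vectors, so this is only usable for toy cases; the question is precisely whether anything fundamentally better than knapsack-style algorithms exists.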
{ "domain": "cs.stackexchange", "id": 5288, "tags": "algorithms, optimization, np-hard" }
Hide some topics when launching a node?
Question: Some nodes I launch publish topics I never intend to use, and they just clutter up the namespace. Is there a way to prevent a node from publishing a topic at launch time? Originally posted by drewm1980 on ROS Answers with karma: 258 on 2015-09-10 Post score: 1 Answer: Is there a way to prevent a node from publishing a topic at launch time? Other than editing the source of the node in question? No. If you are worried about collisions and / or multiple nodes publishing to the same topic, and you'd like to avoid that, you could make use of remapping or namespacing. Originally posted by gvdhoorn with karma: 86574 on 2015-09-10 This answer was ACCEPTED on the original site Post score: 0
{ "domain": "robotics.stackexchange", "id": 22603, "tags": "ros, node, topic" }
Formation of peracetic acid from acetic acid and hydrogen peroxide and its stability in their presence
Question: I've been trying to find out as much as I can about peracetic acid, especially regarding its use as a sanitizer. In the Wikipedia entry it notes that Peracetic acid is always sold in solution as a mixture with acetic acid and hydrogen peroxide to maintain its stability. This is definitely borne out by commercial examples: Loeffler Lerasept PAA (a sanitizer I've used before) lists 31% hydrogen peroxide, 17% peracetic acid, 16% acetic acid and 1% phosphonic acid among its ingredients. But, my understanding was that mixing acetic acid and hydrogen peroxide makes peracetic acid. From the same Wikipedia article: It forms upon treatment of acetic acid with hydrogen peroxide. So my question is: what exactly keeps the excess hydrogen peroxide from reacting with the excess acetic acid to simply form more peracetic acid? Does the phosphonic acid inhibit this reaction? Is there something else not listed that does this? Or is it just some bit of chemistry I'm not aware of? Answer: The formation of peracetic acid from acetic acid and hydrogen peroxide is an equilibrium reaction, and so in order for the peracetic acid to remain at a constant concentration as desired, acetic acid and hydrogen peroxide must also be present so that the reaction can be at equilibrium. $$\ce{CH3COOH + H2O2 <=> CH3COOOH + H2O}$$ what exactly keeps the excess hydrogen peroxide from reacting with the excess acetic acid to simply form more peracetic acid? Nothing. The hydrogen peroxide reacts with the acetic acid to form peracetic acid and water, but the peracetic acid and water also react to reform hydrogen peroxide and acetic acid. At equilibrium the rates of these reactions are equal and so the concentrations of the species do not change. At room temperature the equilibrium constant is somewhere around 2.5 (see references), which indicates that significant amounts of reactants and products are present, supporting the reasoning above.
The paper Chin J Proc Eng 2008 February, 8 (1), 35–41 has some data on the values of the equilibrium constant and the variation in concentration of peracetic acid formed. Another value for the equilibrium constant on page vii of Unis, Melod (2010) Peroxide reactions of environmental relevance in aqueous solution. Doctoral thesis, Northumbria University.
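A rough numeric illustration of that equilibrium: with $K \approx 2.5$ and assumed starting amounts (1 M each of acetic acid and hydrogen peroxide, no products, water treated as a solute, ideal behavior), the extent of reaction can be found by bisection. All of these simplifications are crude for a concentrated commercial mixture, so this is only meant to show that substantial amounts of every species coexist at equilibrium:

```python
# Solve K = [PAA][H2O] / ([AcOH][H2O2]) for the extent of reaction x.
K = 2.5            # room-temperature equilibrium constant quoted above
a0 = b0 = 1.0      # initial AcOH and H2O2, mol/L (illustrative)
p0 = w0 = 0.0      # initial PAA and H2O (crudely treating water as a solute)

def f(x):
    # f(x) = 0 at equilibrium; f is increasing on [0, min(a0, b0))
    return (p0 + x) * (w0 + x) - K * (a0 - x) * (b0 - x)

lo, hi = 0.0, min(a0, b0)
for _ in range(100):           # bisection to machine precision
    mid = 0.5 * (lo + hi)
    if f(mid) < 0:
        lo = mid
    else:
        hi = mid
x = 0.5 * (lo + hi)
print(f"extent of reaction x ≈ {x:.3f} M")  # ≈ 0.61
```

With $x \approx 0.61$, about 0.39 M of each reactant remains alongside 0.61 M of product, i.e. neither side of the equilibrium is depleted, which is exactly the answer's point.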
{ "domain": "chemistry.stackexchange", "id": 2969, "tags": "organic-chemistry, equilibrium" }
rqt_console doesn't display all messages
Question: My post is a duplicate of this post which was closed for unknown reasons. ROS users keep commenting under that post which means that the problem still exists! Basically, rqt_console does not display all the messages from all nodes even though rqt_logger_level "sees" the nodes and the proper logging levels are set up. Any solution to this problem? Is there any workaround? Originally posted by mac137 on ROS Answers with karma: 83 on 2021-02-10 Post score: 8 Answer: Hello, I was having a similar issue today. I was able to fix it by going to the rqt_console settings in the GUI, and then switching the Rosout Topic in the drop down menu from /rosout_agg to /rosout. Originally posted by aborger with karma: 106 on 2021-07-15 This answer was ACCEPTED on the original site Post score: 9 Original comments Comment by ros_lucas on 2022-02-05: Thank you!! This is the solution for me and my ROS version is noetic. Comment by wudi24k on 2022-08-10: Could you please show how to fix it in the GUI? I am a beginner. I can not find switching operator. Comment by ilan on 2023-03-16: it worked for me as well. In simple words when the rqt_console pops up there is small setting icon on the top right corner. I changed the /rosout_agg to /rosout first and then run the turtle_sim_node and it worked out.
{ "domain": "robotics.stackexchange", "id": 36071, "tags": "ros, logging" }
Not enough memory for operations with Pandas
Question: Wes McKinney, the author of Pandas, writes in his blog that "... my rule of thumb for pandas is that you should have 5 to 10 times as much RAM as the size of your dataset. So if you have a 10 GB dataset, you should really have about 64, preferably 128 GB of RAM if you want to avoid memory management problems." I frequently use Pandas with datasets not much smaller than my RAM (16GB). So I wonder, what are some practical implications of these "memory management problems"? Could anyone provide more insights into this? Does it mean it will store data in virtual memory on disk and therefore be very slow? Answer: When Pandas hits its maximum RAM limit, the process is killed; there is no gradual performance degradation, just a SIGKILL signal that stops the process completely. Speed of processing has more to do with the CPU and RAM speed, i.e. DDR3 vs DDR4, latency, SSD vs HDD, among other things. Pandas has a strict memory limit, but there are options other than just increasing RAM if you need to process large datasets. 1.- Dask There are certain limitations in Dask. Dask cannot parallelize within individual tasks. As a distributed-computing framework, Dask enables remote execution of arbitrary code, so Dask workers should be hosted within a trusted network only. A Dask tutorial: https://medium.com/swlh/parallel-processing-in-python-using-dask-a9a01739902a 2.- Jax 3.- Feather Format Language agnostic, so it's usable in R and Python, and can reduce the memory footprint of storage in general. 4.- Decreasing memory consumption natively in Pandas Reducing the number of bits used to encode a column helps, especially when you use tree-based algorithms to process the data later. This is a script popularized by Kaggle.
import numpy as np
import pandas as pd

def reduce_mem_usage(df, verbose=True):
    numerics = ['int16', 'int32', 'int64', 'float16', 'float32', 'float64']
    start_mem = df.memory_usage().sum() / 1024**2
    for col in df.columns:
        col_type = df[col].dtypes
        if col_type in numerics:
            c_min = df[col].min()
            c_max = df[col].max()
            if str(col_type)[:3] == 'int':
                if c_min > np.iinfo(np.int8).min and c_max < np.iinfo(np.int8).max:
                    df[col] = df[col].astype(np.int8)
                elif c_min > np.iinfo(np.int16).min and c_max < np.iinfo(np.int16).max:
                    df[col] = df[col].astype(np.int16)
                elif c_min > np.iinfo(np.int32).min and c_max < np.iinfo(np.int32).max:
                    df[col] = df[col].astype(np.int32)
                elif c_min > np.iinfo(np.int64).min and c_max < np.iinfo(np.int64).max:
                    df[col] = df[col].astype(np.int64)
            else:
                if c_min > np.finfo(np.float16).min and c_max < np.finfo(np.float16).max:
                    df[col] = df[col].astype(np.float16)
                elif c_min > np.finfo(np.float32).min and c_max < np.finfo(np.float32).max:
                    df[col] = df[col].astype(np.float32)
                else:
                    df[col] = df[col].astype(np.float64)
    end_mem = df.memory_usage().sum() / 1024**2
    if verbose:
        print('Memory usage after optimization is: {:.2f} MB'.format(end_mem))
        print('Decreased by {:.1f}%'.format(100 * (start_mem - end_mem) / start_mem))
    return df
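pandas also has a built-in way to get much of the same effect without a hand-rolled loop: pd.to_numeric with its downcast argument. A small sketch (the column names and data are made up):

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({
    "small_ints": np.arange(100, dtype="int64"),  # values 0..99 fit in int8
    "floats": np.linspace(0.0, 1.0, 100),         # float64 by default
})
before = df.memory_usage(deep=True).sum()

# downcast="integer" / "float" picks the smallest dtype that can hold the values
df["small_ints"] = pd.to_numeric(df["small_ints"], downcast="integer")
df["floats"] = pd.to_numeric(df["floats"], downcast="float")

after = df.memory_usage(deep=True).sum()
print(df.dtypes.to_dict())            # small_ints -> int8, floats -> float32
print(f"{before} -> {after} bytes")   # noticeably smaller footprint
```

One caveat in either approach: downcasting floats to float16/float32 loses precision, which may or may not matter for your later computations.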
{ "domain": "datascience.stackexchange", "id": 8111, "tags": "python, pandas" }
Does an iron bar become heavier when magnetized?
Question: Say I have a piece of iron bar. Would the same iron bar weigh more when magnetized (assuming no outside forces like a magnetic field, etc.), and the scale is non-magnetic? I think you know what I'm saying. Answer: Yes, a magnetic field will contribute to the mass of an object because it has energy, and in special relativity mass is the length of the energy-momentum four-vector: $m^2 c^4 = E^2 - (pc)^2$. The static field has zero momentum, but it has energy; therefore, from the above formula, it has an invariant mass. The densest magnetic fields are of a few tesla. I copy the answer to a similar question which gives an estimate: Let's say you have a lab magnet that fills 1000 cm3 of space with a very high field of 10^5 G. That field has an energy of (roughly) 10^12 ergs. That corresponds to a mass of 10^-9 grams. So for most purposes we just forget about it.
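The quoted estimate is easy to reproduce in SI units. A sketch, assuming the same values as the quote (10 T = 10^5 G filling 1000 cm³); the result lands within a factor of a few of the quote's round numbers, as expected for an order-of-magnitude estimate:

```python
import math

mu0 = 4e-7 * math.pi   # vacuum permeability, T·m/A
c = 2.998e8            # speed of light, m/s
B = 10.0               # T  (= 1e5 G)
V = 1e-3               # m^3 (= 1000 cm^3)

u = B**2 / (2 * mu0)   # energy density of a magnetic field, J/m^3
E = u * V              # total field energy, J
m = E / c**2           # equivalent mass, kg

# E ≈ 4e4 J ≈ 4e11 erg (quote: "roughly 10^12 ergs"),
# m ≈ 4.4e-13 kg ≈ 4e-10 g (quote: "10^-9 grams")
print(f"E ≈ {E:.2e} J ≈ {E * 1e7:.0e} erg, m ≈ {m * 1e3:.1e} g")
```

Either way, the conclusion stands: the mass contribution of even an extreme laboratory field is hopelessly below anything a scale could detect.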
{ "domain": "physics.stackexchange", "id": 27315, "tags": "magnetic-fields" }
Submitting form using dynamic variable with query
Question: I have a form that I submit with these variables: category_time, date_month, date_year, and status_time. Here is an example of different form possibilities:

CATEGORY_TIME 1
DATE_MONTH Month:
DATE_YEAR Year:
FIELDNAMES DATE_MONTH,DATE_YEAR,SUBMIT
STATUS_TIME Select:
SUBMIT submit
----------------
CATEGORY_TIME Choose:
DATE_MONTH Month:
DATE_YEAR Year:
FIELDNAMES DATE_MONTH,DATE_YEAR,SUBMIT
STATUS_TIME Pending
SUBMIT submit
----------------
CATEGORY_TIME Choose:
DATE_MONTH Month:
DATE_YEAR 2016
FIELDNAMES DATE_MONTH,DATE_YEAR,SUBMIT
STATUS_TIME Select:
SUBMIT submit

After I submit the form, depending on the variables, it outputs a query. For example, with the first possibility (only category_time chosen), it will output the second if statement. As you can see, a lot of if statements will be needed. Is there an easier way to make this work without so many if statements, depending only on the variables that are chosen, even if I have to use JavaScript to make it work better?

<cfif structKeyExists(form, "category_time") || structKeyExists(form, "date_month") || structKeyExists(form, "date_year") || structKeyExists(form, "status_time") >
    <cfquery ....>
        SELECT *
        FROM work_timeline
        <cfif (category_time eq 'ALL' || category_time eq 'Choose:') && date_month eq 'Month:' && date_year eq 'Year:' && status_time eq 'Select:' >
            order by year(date_time) desc, month(date_time) desc
        </cfif>
        <cfif category_time neq 'ALL' && category_time neq 'Choose:' && date_month eq 'Month:' && date_year eq 'Year:' && status_time eq 'Select:' >
            where category_time = #category_time#
        </cfif>
        <cfif structKeyExists(form, "category_time") && date_month neq 'Month:' && date_year eq 'Year:' && category_time neq 'Choose:' and category_time neq 'ALL' && status eq 'Select:'>
            where category_time = #category_time# and month(date_time)=#date_month#
        </cfif>
        more if statements .....
    </cfquery>
</cfif>

Answer: A few things you could do to simplify the code: Rather than examining every form field to determine if the form was submitted, a single check of, say, the submit button should suffice for most purposes. Avoid using strings like "ALL" or "Choose:" for the form field values. Those are fine for display purposes, but the field values should be something more generic, like an empty string. That makes for simpler, more uniform validation code. For example, if the default is an empty string, a simple LEN() check is all you need to determine if a particular field was populated. (You can also use other boolean functions like IsNumeric or IsDate, when more appropriate.) Consider using cfparam to assign a default for all fields (except the submit button). That ensures the fields always exist, eliminating the need to use structKeyExists every time you access a field. With the above changes, you could also simplify the WHERE clause logic. Start with an expression that is always true. Then add additional filters based on which form fields were populated. NB: Using raw client-supplied values in queries puts your database at risk of SQL injection. Always use cfqueryparam on all parameters. (I do not know the data types of your database columns, so modify the cfsqltype's as needed.)

<cfparam name="FORM.category_time" default="">
<cfparam name="FORM.date_month" default="">
...
<cfif structKeyExists(FORM, "submit")>
    <cfquery ....>
        SELECT Column1, Column2, ....
        FROM work_timeline
        WHERE 1 = 1
        <!--- Filter on FORM.category_time when populated --->
        <cfif LEN(FORM.category_time)>
            AND category_time = <cfqueryparam value="#FORM.category_time#" cfsqltype="cf_sql_integer">
        </cfif>
        <!--- Filter on FORM.date_month when populated --->
        <cfif LEN(FORM.date_month)>
            AND month(date_time) = <cfqueryparam value="#FORM.date_month#" cfsqltype="cf_sql_integer">
        </cfif>
        ...
        more cfif conditions
    </cfquery>
</cfif>

As an aside, regarding SQL: There is nothing technically wrong with using SELECT *, but in most cases it is preferable to supply a list of the columns you need. It makes the code more readable, and also helps avoid pulling back extra data columns that are never used. Using functions on an indexed database column usually prevents the database from utilizing the index. While it may not be feasible for your current query, when possible it is best to revise expressions like this: WHERE month(column) = 12 AND year(column) = 2015 .. to use a more index-friendly expression instead. For example:

WHERE column >= '2015-12-01' -- Begin date at midnight
AND column < '2016-01-01'    -- Day AFTER end date at midnight
{ "domain": "codereview.stackexchange", "id": 17184, "tags": "coldfusion, cfml" }
What is the strongest oxidising agent?
Question: I searched for the strongest oxidising agent and I found different results: $\ce{ClF3}$, $\ce{HArF}$, and $\ce{F2}$ were among them. Many said $\ce{ClF3}$ is the most powerful, as it oxidises everything, even asbestos, sand, and concrete, and can easily set fire to almost anything, fires which can't be stopped; it can only be stored in Teflon. And $\ce{HArF}$ could be a very powerful oxidant due to its high instability as a compound of argon with fluorine, but was it ever used as such? What compound is actually used as an oxidising agent and was proven to be stronger than others, for example by standard reduction potential? Answer: Ivan's answer is indeed thought-provoking. But let's have some fun. IUPAC defines oxidation as: The complete, net removal of one or more electrons from a molecular entity. My humble query is thus - what better way is there to remove an electron than combining it with a literal anti-electron? Yes, my friends, we shall seek to transcend the problem entirely and swat the fly with a thermonuclear bomb. I submit as the most powerful entry, the positron. Since 1932, we've known that ordinary matter has a mirror image, which we now call antimatter. The antimatter counterpart of the electron ($\ce{e-}$) is the positron ($\ce{e+}$). To the best of our knowledge, they behave exactly alike, except for their opposite electric charges. I stress that the positron has nothing to do with the proton ($\ce{p+}$), another class of particle entirely. As you may know, when matter and antimatter meet, they release tremendous amounts of energy, thanks to $E=mc^2$. For an electron and positron with no initial energy other than their individual rest masses of $\pu{511 keV c^-2}$ each, the most common annihilation outcome is: $$ \ce{e- +\ e+ -> 2\gamma}$$ However, this process is fully reversible in quantum electrodynamics; it is time-symmetric. The opposite reaction is pair production: $$ \ce{2\gamma -> e- +\ e+ }$$ A reversible reaction?
Then there is nothing stopping us from imagining the following chemical equilibrium: \begin{align} \ce{e- +\ e+ &<=> 2\gamma} & \Delta_r G^\circ &= \pu{-1.022 MeV} =\pu{-98 607 810 kJ mol^-1} \end{align} The distinction between enthalpy and Gibbs free energy in such subatomic reactions is completely negligible, as the entropic factor is laughably small in comparison under any reasonable conditions. I am just going to brashly consider the above value as the standard Gibbs free energy change of reaction. This enormous $\Delta_r G^\circ$ corresponds to an equilibrium constant $K_\mathrm{eq} = 3 \times 10^{17276234}$, representing a somewhat product-favoured reaction. Plugging this into the Nernst equation, the standard electrode potential for the "reduction of a positron" is then $\mathrm{\frac{98\ 607\ 810\ kJ\ mol^{-1}}{96\ 485.33212\ C\ mol^{-1}} = +1\ 021\ 998\ V}$. Ivan mentions in his answer using an alpha particle as an oxidiser. Let's take that further. According to NIST, a rough estimate for the electron affinity of a completely bare darmstadtium nucleus ($\ce{Ds^{110+}}$) is $\pu{-204.4 keV}$, so even a stripped superheavy atom can't match the oxidising power of a positron! ... that is, until you get to $\ce{Ust^{173+}}$ ...
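The quoted figures can be re-derived in a few lines. A sketch; the tiny discrepancies from the answer's numbers come from rounding the electron rest energy to exactly 511 keV:

```python
import math

F = 96485.33212             # C/mol, Faraday constant (= e * N_A)
R, T = 8.314462618, 298.15  # gas constant J/(mol K), room temperature K

# e- + e+ -> 2 gamma releases 2 x 511 keV = 1.022 MeV per event;
# per mole of events that is 1.022e6 eV * F, since F = e * N_A.
dG = -1.022e6 * F           # J/mol
print(f"ΔG ≈ {dG / 1e3:,.0f} kJ/mol")  # ≈ -98,608,000 kJ/mol

log10K = -dG / (math.log(10) * R * T)  # from K = exp(-ΔG / RT)
print(f"log10(K) ≈ {log10K:.4e}")      # ≈ 1.728e7, i.e. K ~ 10^17,280,000

E0 = -dG / F                # n = 1 electron transferred per event
print(f"E° ≈ {E0:,.0f} V")  # ≈ +1,022,000 V
```

Note how the electrode potential collapses back to the per-particle energy: one electron per event means E° in volts is just the annihilation energy in electron-volts, about a megavolt.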
{ "domain": "chemistry.stackexchange", "id": 16570, "tags": "inorganic-chemistry, redox" }
What does the number of bits of a microprocessor mean?
Question: The Intel 8085 is an 8-bit processor whereas the 8086 is a 16-bit processor; what does the number of bits specify? Answer: How many bits wide its registers are: usually, its data registers (the ones which can load and store data and perform logical and arithmetical functions). The 8086 has 16-bit data and address registers, and a 16-bit data path to external memory. The 8088 has 16-bit data and address registers, but only an 8-bit data path to external memory. Because it is internally identical to the 8086, it is also considered to be a 16-bit processor. The 8085 has 8-bit data registers and 16-bit address registers. The address registers (apart from SP) are made up of pairs of 8-bit data registers. The fact that it is possible to add one address register to another isn't enough for the processor to be considered "16-bit".
{ "domain": "cs.stackexchange", "id": 6773, "tags": "computer-architecture" }
Neatly printing integers interspersed with spaces
Question: The next iteration is here. I have this small method for printing long integers neatly. For example:

neatify(123L) = "123"
neatify(1234L) = "1 234"
neatify(12345L) = "12 345"
...

The code:

import java.util.Scanner;

public class Main {

    public static String neatify(final long number, final int groupLength) {
        final String str = Long.toString(number);
        if (groupLength < 1) {
            return str;
        }
        final char[] charArray = str.toCharArray();
        final StringBuilder sb = new StringBuilder();
        for (int i = 0; i < charArray.length; ++i) {
            if (i != 0 && (charArray.length - i) % groupLength == 0) {
                sb.append(' ');
            }
            sb.append(charArray[i]);
        }
        return sb.toString();
    }

    public static String neatify(final long number) {
        return neatify(number, 3);
    }

    public static void main(final String... args) {
        final Scanner scanner = new Scanner(System.in);
        while (scanner.hasNextLong()) {
            System.out.println(neatify(scanner.nextLong()));
        }
    }
}

Answer:

Usability

- Do you really need the groupLength? I'm not aware of any number-writing scheme where it is fixed, but not 3[*].
- On the other hand, the separator is different in a lot of countries. You can see an example for thousands separators here (or on Wikipedia). For example, Canada uses a space, Italy uses ".", the US uses ",", and Switzerland uses "'". It might be nice to add the separator as a parameter instead of hardcoding the space. You can still use a space as the default.
- Your code doesn't deal that well with negative numbers: -123 becomes "- 123", and -123456 becomes "- 123 456". I would not expect a space after the "-".

[*] In India it seems to be written e.g. as 15,00,000, which doesn't use 3, but which also isn't a fixed length.

Misc

- If you start your loop at 1 and prepend sb.append(charArray[0]);, you can save the i != 0 check you perform each time.
{ "domain": "codereview.stackexchange", "id": 13525, "tags": "java, strings, reinventing-the-wheel" }
Understanding the interference pattern
Question: I have some problems with understanding the interference pattern. The light from the source passes through the slit and then through a Fresnel biprism. As a result, I can observe the following interference pattern: How do I determine the total number of fringes? There are narrow dark and bright bands that are parts of the wider dark and bright bands. So what exactly do I observe? Which of those bands are the interference fringes? Thank you in advance. Answer: As @EmilioPisanty has pointed out, there is a fringe pattern which modulates the intensity of the two-virtual-source interference pattern. In your image the two-virtual-source interference pattern is composed of the equally and closely spaced fringes. I have enlarged your image and indicated the positions of some of these fringes. The modulating fringe pattern is the diffraction pattern that one gets from a straight edge, which in this arrangement is the apex of the biprism. If one side of the biprism were covered over, then you would get a displaced (because of the refraction by the prism) straight-edge diffraction pattern, and you will note that these fringes are not equally spaced. So it is the displaced diffraction pattern produced by the apex of the prism which modulates the intensity of the two-virtual-source interference pattern. This pair of images shows the interaction of the edge and two-virtual-source patterns rather well. The top image is just the "normal" Fresnel biprism fringe pattern. To produce the second image, the region of overlap of light from the two virtual sources was restricted by only allowing light from the very central portion of the biprism to pass through. One can now clearly see the edge diffraction pattern in the regions where the light from the two virtual sources was not allowed to overlap.
{ "domain": "physics.stackexchange", "id": 48191, "tags": "optics, interference" }
Compressibility and the form of Newton's second law in fluid mechanics
Question: In deriving Euler's equations for fluid mechanics, in particular $$f=\rho \partial_t v +\rho v\cdot \nabla v$$ for some body force $f$ (e.g. Landau & Lifschitz 2.3), one assumes the continuity equation $\partial_t\rho=-\nabla\cdot\left(\rho v\right)$ and Newton's second law in the form $F=ma$, so $$f=\rho D_t v$$ where $D_t=\partial_t+v\cdot \nabla$ is the total time derivative (e.g. Landau & Lifschitz 2.1). If one instead uses $F=\dot p$, so that $$f = D_t\left(\rho v\right),$$ then in order to get our initial equation back one must assume $D_t \rho=0$; but of course $D_t \rho=\partial_t\rho+v\cdot \nabla\rho=-\rho\nabla\cdot v$ by the continuity equation, and so we have assumed the flow is incompressible. Landau implies that he has not assumed incompressibility at this point, so how is it that one can choose the first form for Newton's second law without loss of generality? (If your answer would appeal to stress-energy tensors or Navier-Stokes etc., if possible please show how the assumption isn't implicitly made.) To be more clear: as I have shown above, if you say that $f = D_t\left(\rho v\right)$ and also that $f=\rho \partial_t v +\rho v\cdot \nabla v$, then it follows that the flow is incompressible. I don't think that this does imply incompressibility, so why is the assumption false? Answer: The main problem with continuum equations is that they are just a "lucky guess" of macroscopic dynamics otherwise governed by microscopic ones. You have encountered a moment where this "guessing" isn't entirely consistent. Before delving into the technical derivation: the absence of $D_t \rho$ in Newton's equation is just a consequence of the fact that the actual force acts on microscopic particles whose mass doesn't change. Hence, only the macroscopic velocity of the fluid is affected by the force. Let us define a one-particle phase-space distribution function $f(\vec{x},\vec{p},t)$.
We have the Boltzmann equation, derived straightforwardly from classical mechanics (this is actually the only point where $\vec{F}=\dot{\vec{p}}$ enters): $$\frac{df}{dt} = \partial_t f + \{f,H\} = \partial_t f + \frac{\vec p \cdot \nabla_x f}{m} + \vec{F} \cdot \nabla_p f$$ Without collisions with other particles we would just have $df/dt = 0$, but with interaction we can describe the ensemble as N copies of a one-particle distribution with a collision term: $$\partial_t f + \frac{\vec p \cdot \nabla_x f}{m} + \vec{F} \cdot \nabla_p f = \delta_t f|_c$$ We can now integrate the whole equation in momentum space $\int d^3p$ to get the continuity equation $$\partial_t \rho + \nabla \cdot (\rho \vec{v}) = \delta_t \rho|_c$$ with naturally $\rho \equiv m \int f d^3p$ and $\vec{v} = <\vec{p}>/\rho = \int \vec{p} f d^3 p/\rho$. The term $\delta_t \rho|_c$ is non-zero for example in chemical reactions. Integrating the Boltzmann equation again, this time as $\int \vec{p} ... d^3 p$, we get $$\partial_t(\rho v^j) + \sum \partial_{x_i} T_{ij} - \frac{\rho}{m} F^j = \delta_t p^j|_c$$ where again $\delta_t p|_c$ is zero for momentum-conserving collisions and no chemical reactions. The new symbol is $T_{ij} = <p_i p_j>/m = \int p_i p_j f d^3p/m$, and in it you can identify $\rho v^i v^j$, the isotropic pressure ($<\sum p_i^2>/3\rho$) and the viscous stress ($(<p_i p_j>-<p_i><p_j>)/\rho$). For simplicity, let us now drop the viscous stress; using the continuity equation we can then get $$\frac{\rho}{m} \vec{F} = \rho D_t \vec{v} + \nabla P ,$$ or a much more "Newton's law" form $$\vec{F} = m D_t \vec{v} + \frac{m}{\rho} \nabla P.$$ So Newton's law in a continuum is derivable only as an intuition of how the sum of forces on individual particles will act on the average properties of the ensemble. It all makes sense when you consider that $\vec{f}$ is not exactly the force but the force times the number of particles per volume.
However, notice that this is the equation for the mean velocity; the force also affects higher moments ($\int \vec{p} \vec{p}...d^3p$) of the momentum equation, and this is important for example in plasma physics. In non-extreme fluids the higher-moment equations are usually neglected, as the higher moments of $\delta_t f|_c$ do not vanish so easily and are actually very difficult to derive in a non-perturbative regime (which is the case of a tightly packed fluid rather than a loose plasma or gas). As a consequence, observed macroscopic properties of the fluid such as incompressibility are plugged in as extra conditions to get a practical model. My derivations are very brief; for details see e.g. http://www.astronomy.ohio-state.edu/~dhw/A825/notes2.pdf (with a different notation). EDIT: Combining my memory of the phase-space notation with the notation of the linked text, I made some possibly quite confusing typos. Now the equations should all be correct. I also added a commentary with the much clearer form of the final "Newton's law".
{ "domain": "physics.stackexchange", "id": 22173, "tags": "newtonian-mechanics, fluid-dynamics" }
Procedural wave generation, frequency shift issue
Question: Sorry for the long post, but I want to try to get all the details in here in the hope it will help. I've been working on building a procedural audio system for use in XNA projects. First, this is the general structure of the procedural system. WaveFragments: generate the actual wave forms for a specified duration, based on frequency, using a phase accumulator. Waves: complete wave forms, generated by joining wave fragments together. DynamicWaves: use a wave fragment to generate small buffers of waveform data, which are pushed into a DynamicSoundEffectInstance to be played. My problem is (I think) with my wave generation. At first it seems to work OK, but then after a certain time the sound's frequency alters slightly, and then back again later. For example, a tone of C3 (130.81f) will increase in frequency slightly (but noticeably) at around 20 seconds and then shift back again at 77 seconds. (I've not yet had the patience to see if it shifts again later; I've gone as far as 3 minutes without it shifting up again.) I'm going to assume it's an issue with the way I'm generating the wave data with my phase accumulator. Here is my generation of the PhaseDelta based on the provided frequency:

_PhaseDelta = (float)((Math.PI * 2 * _Frequency) / (float)samplesPerSecond);

and here is the calculation I use for the generation of the wave:

for (int i = 0; i < numSamples; i++)
{
    _PhaseAccumulator = MathHelper.Lerp(_PhaseAccumulator, _PhaseAccumulator + _PhaseDelta, _PhaseSmoothness);
    data[i] = (float)Math.Sin(_PhaseAccumulator) * _Amplitude;
}

Thanks in advance for any input that anyone might have. It may be the most glaringly obvious fix, but at this point, after hours of dull single tones, my brain is too melted to be getting anywhere with fixing this now. Now, I've tried it without lerping the accumulator (which obviously produces the clicks at the start), but it still produces the shift. I've also tried lerping the phase delta gradually to its new value, again to no avail. (EDIT: I have also tried throwing lerp out of the window and just adding the phase delta each sample; again, no dice.) So, my question is: any ideas on why this weird pitch shift is happening? In practice it isn't a huge issue; the system was only ever intended for short notes to be stacked together for old-video-game-style music, or a tone that would likely change frequency often. But I'd feel more comfortable knowing that the wave generation is accurate and wasn't accumulating some error somewhere, sending it way off track. Answer: I suspect that the accumulator value is growing very large, which causes the float's resolution to degrade. To avoid this you need to occasionally reduce the accumulator to manageable levels. One way to do that is to subtract 2*pi whenever the accumulator exceeds 2*pi.
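The answer's diagnosis can be demonstrated numerically. This is a hedged Python/NumPy sketch (not the original XNA/C# code): it emulates a float32 phase accumulator like the one above, with and without the suggested 2*pi wrap, and measures both against a float64 reference:

```python
import math
import numpy as np

def tone_f32(freq, fs, n, wrap):
    """Sine tone from a float32 phase accumulator, optionally wrapped."""
    two_pi = np.float32(2 * np.pi)
    delta = np.float32(2 * np.pi * freq / fs)
    phase = np.float32(0.0)
    out = np.empty(n)
    for i in range(n):
        out[i] = math.sin(float(phase))
        phase = np.float32(phase + delta)
        if wrap and phase >= two_pi:
            phase -= two_pi          # keep the accumulator small
    return out

# float64 reference: its phase error stays negligible over this duration
fs, freq, n = 8000, 130.81, 160_000          # 20 seconds of C3
t = np.arange(n) / fs
ref = np.sin(2 * np.pi * freq * t)

err_plain = np.abs(tone_f32(freq, fs, n, wrap=False) - ref).max()
err_wrap = np.abs(tone_f32(freq, fs, n, wrap=True) - ref).max()
```

The unwrapped accumulator drifts badly once the phase grows into the thousands of radians (where a float32 step is quantised coarsely), while the wrapped one stays close to the reference, which matches the pitch shift appearing only after tens of seconds.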
{ "domain": "dsp.stackexchange", "id": 235, "tags": "frequency, wave" }
Is this an inertial frame of reference in relativistic context?
Question: What I've learned from our special relativity lectures is that an inertial frame of reference in relativity is one that experiences no gravitational forces. Also, it is a frame where if a particle is initially at rest, it will remain at rest; and if it is initially in motion, it will remain in motion with constant speed in the same direction. My question is whether my understanding is right about the following: An elevator in free fall is NOT an inertial frame of reference, because it experiences gravitational forces. A rocket that accelerates upward with acceleration g = 9.81 m/s$^2$ IS an inertial frame of reference, since the earth pulls it down with the same acceleration, so this cancels and there's effectively zero net acceleration, so it doesn't experience any gravitational forces. I'm sorry if my trains of thought sound stupid. I'm just really struggling with relativity. Answer: Special relativity and general relativity have different views about inertial frames, but in some ways the general relativity take on them is (perhaps surprisingly) easier to explain. So I'll start with GR, then extend the description to SR. In general relativity there are usually no global inertial frames, i.e. it is impossible to construct a frame that looks inertial everywhere. However, it is always possible to construct a locally inertial frame. This is a frame that looks inertial as long as you consider only the spacetime in your immediate vicinity. To check if your frame is locally inertial you surround yourself with a sphere of test particles, then watch to see what happens to them. If the sphere of test particles remains unchanged in relation to you, then your frame is locally inertial. However, if the sphere of particles moves with respect to you, then your frame is non-inertial because it's accelerating. Finally, if the sphere of particles changes shape, then the frame is non-inertial because there are tidal forces acting.
Now see how this definition applies to your two specific questions: (1) If you and the test particles are falling down an elevator shaft, then you and the particles both accelerate with the same acceleration of $g$. That means your frame is locally inertial. Exactly this reasoning applies to an astronaut aboard the International Space Station. Suppose you're in a part of the ISS where there are no windows. If you throw a ball it's going to travel in a straight line at constant speed, even though the ISS is moving in a circular orbit around the Earth. If you don't look out of the windows you couldn't tell you weren't floating freely in space far from any masses. (2) If you're in a rocket hovering at a fixed distance above the Earth, then when you let go of the test particles they'll fall to the floor. This is an accelerated frame, not an inertial frame. If you throw a ball it won't travel in a straight line at constant speed. So actually the answers to your questions are the other way around to what you thought. Incidentally, if you look more closely at the falling elevator frame in your question (1), you'll realise it isn't inertial either. The acceleration due to gravity changes with distance from the centre of the Earth, so the test masses nearer the centre will accelerate slightly faster while the ones farther from the centre of the Earth will accelerate slightly slower. The result is that your sphere of test particles changes shape and gets stretched out into an oval. This is an example of tidal forces. However, if the size of your test sphere is small the tidal forces will be small. If we make the radius of the sphere really tiny the tidal forces will be undetectably small. This is what we mean by locally inertial - the frame looks inertial if we consider a small enough region of it. Now let's consider special relativity. In this case there is no gravity, so if you were in the elevator shaft you'd just float there.
In this case your sphere of test particles would not move with respect to you no matter how big you made the sphere. So this frame is inertial, but unlike the GR case it's globally inertial. Since there's no gravity you could make your test sphere light years in size and it still wouldn't change with time. The absence of gravity affects the rocket as well. With no gravity the rocket wouldn't hover above the Earth but instead would go shooting off into outer space at an acceleration of 9.81 m/s$^2$. However, inside the rocket you couldn't tell the difference. When you released your test particles they'd still fall to the floor in the same way, so your frame is still non-inertial. Actually, the fact that you couldn't tell the difference between gravity and the rocket accelerating away is a key part of general relativity, and it's called the equivalence principle.
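To put a rough number on the tidal stretching described above, here is a quick back-of-the-envelope check (Python, standard textbook values for $G$, the Earth's mass and radius) of how much $g$ differs across a 2 m elevator cabin near the Earth's surface:

```python
G = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
M = 5.972e24    # mass of the Earth, kg
R = 6.371e6     # radius of the Earth, m

g_bottom = G * M / R**2           # at the cabin floor
g_top = G * M / (R + 2.0)**2      # 2 m higher, at the ceiling
dg = g_bottom - g_top             # tidal difference across the cabin
```

The difference comes out around $6\times10^{-6}\ \mathrm{m/s^2}$ - utterly negligible over a cabin, which is exactly why the falling elevator looks inertial "locally" even though it is not inertial globally.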
{ "domain": "physics.stackexchange", "id": 23221, "tags": "special-relativity, inertial-frames" }
How many psychoacoustical critical bands are there?
Question: I understand that the human ear works as a bank of band-pass filters, and I have a doubt regarding the number of such band-pass filters in the filter bank. I came across these two possible explanations of how it happens: (1) There are no fixed bands, and the critical bandwidth of a sound depends on which audio frequency the ear is exposed to. In such a case, the audio frequency could be treated as the center frequency of a band-pass filter, and another frequency occurring in the same band gets masked out. (2) There are 24 fixed band-pass filters with their corresponding fixed bandwidths, independent of the audio frequency the ear is exposed to. In such a case too, frequencies occurring in the same band sound like a single frequency and the difference goes undetected. Which one of them is correct? Do the critical bands exist only at the 24 fixed values, or do they take continuous values? Answer: Neither. The basilar membrane in your ear performs a frequency-to-place transformation which is then picked up by an array of thousands of hair cells (cilia). Therefore there are thousands of heavily overlapped bands. The processing that follows is very complex and not completely understood. The shape of the effective band-pass filter at each location on the membrane can be measured in animals, and the masking curves can be measured by listening tests. Masking is measured by increasing the amplitude of a test tone relative to the masking tone until it is audible, so the situation is a bit different from what you assumed in your question (where you assumed that if the masker and the test tone were in the same band, you only hear one tone). The actual result is that as the test tone approaches the masker frequency, it must have a higher amplitude before it can be detected. Bob
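To illustrate the answer's point that the effective filter bandwidth varies continuously with centre frequency rather than jumping between 24 fixed bands, here is Zwicker's widely quoted analytic approximation of the critical bandwidth. It is a curve fit to listening-test data, so treat the exact constants as approximate:

```python
def critical_bandwidth_hz(f_hz):
    """Zwicker's approximation of the critical bandwidth (in Hz)
    around centre frequency f_hz; reasonable for ~100 Hz - 10 kHz."""
    return 25.0 + 75.0 * (1.0 + 1.4 * (f_hz / 1000.0) ** 2) ** 0.69
```

It gives roughly 100 Hz at low centre frequencies and about 160 Hz around 1 kHz, widening steadily above that: a smooth function of frequency, not a 24-step staircase.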
{ "domain": "dsp.stackexchange", "id": 6523, "tags": "audio, speech-processing, music, audio-processing" }
Direction of acceleration at highest point
Question: When a ball is thrown vertically upwards, what will be the direction of acceleration at the highest point (where velocity is zero)? Upwards, downwards, or arbitrary?
{ "domain": "physics.stackexchange", "id": 22643, "tags": "newtonian-mechanics, forces, acceleration, free-body-diagram" }
Neural network for time series forecasting with an auxiliary data
Question: Let's say we have two data sets. The first is the close-price time series, and we want to predict its future values. The second is the volume for each price in the first data set; we do not want to predict it, but to use this data to help predict future price values. What kind of neural network is suitable for this task? My guess is that this may be an LSTM with some changes. Please advise. Answer: You can use RNN architectures like LSTM and GRU. RNNs take an input vector at each time step, so you can add your extra data to the input vector. Your input shape will be batch_size x sequence_length x num_of_features.
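A minimal NumPy sketch of the shaping the answer describes - toy arrays and made-up numbers, no actual network - stacking price and volume into the batch_size x sequence_length x num_of_features layout an LSTM/GRU layer expects:

```python
import numpy as np

# Toy stand-ins for the two series (100 time steps each).
prices = np.linspace(100.0, 110.0, 100)
volumes = np.linspace(1e3, 2e3, 100)

# Put the auxiliary series alongside the target as a second feature.
features = np.stack([prices, volumes], axis=-1)      # shape (100, 2)

# Slide a window of 20 steps over the series; each window is one sample.
seq_len = 20
X = np.stack([features[i:i + seq_len]
              for i in range(len(features) - seq_len)])
y = prices[seq_len:]        # next close price after each window
```

Here X has shape (80, 20, 2), i.e. 80 samples of 20 time steps with 2 features each, and y holds the corresponding next-step prices; X and y can then be fed to any recurrent model.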
{ "domain": "datascience.stackexchange", "id": 9430, "tags": "neural-network, time-series, lstm" }
Strange Grassmann double integration
Question: I can understand why, because the integration over Grassmann variables has to be translationally invariant too, one has $$ \int d\theta = 0 $$ and $$ \int d\theta \theta = 1, $$ but I don't see where the rule for this double integration $$ \int d^2 \theta \bar{\theta}\theta = -2i $$ comes from. So can somebody explain to me how this is motivated and/or derived?
{ "domain": "physics.stackexchange", "id": 7272, "tags": "supersymmetry, grassmann-numbers" }
Random number generation seeding in C++
Question: I wrote a small function to return a random number in a given range:

int random_in_range(int min, int max) {
    std::random_device rd;
    std::mt19937 rng(rd());
    std::uniform_int_distribution<int> uni(min, max);
    return uni(rng);
}

But I read somewhere that you should only seed a random number generator once, leading me to believe that the function should really be:

std::random_device rd;
std::mt19937 rng(rd());

int random_in_range(int min, int max) {
    std::uniform_int_distribution<int> uni(min, max);
    return uni(rng);
}

I later tested both to see if one was clearly better than the other (in terms of randomness) and got results which do not make things any clearer. First example result with 10 runs, making a decision of 1 or 0:

for (int i = 0; i < 10; i++) { cout << first_example(0, 1); }
>0100100001

The second example result with 10 runs, making a decision of 1 or 0:

for (int i = 0; i < 10; i++) { cout << second_example(0, 1); }
>1011000110

The two results don't seem too strange, leaving me confused about how I should initialize random number generators. Basically, what I am asking is: which of these two examples (or something else if both are wrong) would be used in order to guarantee the lowest amount of bias? Answer: If you were going to get a number from random_device at every call, you might as well just use it directly:

int random_in_range(int min, int max) {
    std::random_device rd;
    std::uniform_int_distribution<int> uni(min, max);
    return uni(rd);
}

std::random_device is intended to be a front-end for a truly random bit source. The major shortcoming is that in many cases it has fairly limited bandwidth, so you'd prefer to avoid calling it every time you need a number.
If you do want to use mt19937 (a perfectly fine idea in many cases) I'd personally use a function object instead of a function:

class random_in_range {
    std::mt19937 rng;
public:
    random_in_range() : rng(std::random_device()()) {}

    int operator()(int low, int high) {
        std::uniform_int_distribution<int> uni(low, high);
        return uni(rng);
    }
};

This does have some shortcomings though: people may use a temporary of this type in a loop:

for (int i = 0; i < 10; i++)
    std::cout << random_in_range()(0, 1);

...which puts you back where you started. You need to do something like:

random_in_range r;
for (int i = 0; i < 10; i++)
    std::cout << r(0, 1);

...to get the results you want (i.e., seed once, call multiple times).
{ "domain": "codereview.stackexchange", "id": 15338, "tags": "c++, random" }
What part of the photons emitted from a star are from black body radiation and what part originate from fusion reactions?
Question: What part of the photons emitted from a star comes from blackbody radiation and what part originates from fusion reactions? To my understanding these are the two sources of luminosity for a star, so I'm just wondering which phenomenon accounts for the majority of photons that come from a star.
{ "domain": "physics.stackexchange", "id": 84049, "tags": "photons, astrophysics, thermal-radiation, stars, luminosity" }
Elementary question about global supersymmetry of a worldsheet
Question: I'm reading chapter 4 of the book by Green, Schwarz and Witten. They consider an action $$ S = -\frac{1}{2\pi} \int d^2 \sigma \left( \partial_\alpha X^\mu \partial^\alpha X_\mu - i \bar \psi^\mu \rho^\alpha \partial_\alpha \psi_\mu \right), \tag{4.1.2}, $$ where $\psi^\mu$ are Majorana spinors, $$ \rho^0 = \begin{pmatrix} 0 & -i\\ i & 0 \end{pmatrix}, \qquad \rho^1 = \begin{pmatrix} 0 & i\\ i & 0 \end{pmatrix},\tag{4.1.3} $$ $$ \bar \psi = \psi^\dagger \rho^0. $$ It is claimed that this action is invariant under the following infinitesimal transformations \begin{align} \delta X^\mu &= \bar \varepsilon \psi^\mu,\\ \delta \psi^\mu &= -i \rho^\alpha \partial_\alpha X^\mu \varepsilon, \tag{4.1.8} \end{align} where $\varepsilon$ is a constant (doesn't depending on worldsheet coordinates) anticommuting Majorana spinor. I can't prove it. Can you show me where I'm wrong? $$ \delta \left( \partial_\alpha X^\mu \partial^\alpha X_\mu \right) = 2 \partial_\alpha X^\mu \partial^\alpha \bar \psi^\mu \varepsilon $$ (I used $\bar \chi \psi = \bar \psi \chi$ identity). \begin{multline} \delta \left( -i \bar \psi^\mu \rho^\alpha \partial_\alpha \psi_\mu \right) = -i \overline{\left(-i \rho^\alpha \partial_\alpha X^\mu \varepsilon\right)} \rho^\beta \partial_\beta \psi_\mu -i \bar \psi^\mu \rho^\alpha \partial_\alpha \left( -i \rho^\beta \partial_\beta X_\mu \varepsilon \right)\\ = - \overline{\rho^\beta \partial_\beta \psi_\mu} \rho^\alpha \partial_\alpha X^\mu \varepsilon - \bar \psi^\mu \rho^\alpha \partial_\alpha \rho^\beta \partial_\beta X_\mu \varepsilon. 
\end{multline} Note that \begin{multline} \overline{\rho^\beta \partial_\beta \psi_\mu} = \partial_\beta \psi_\mu^\dagger (\rho^\beta)^\dagger \rho^0 \equiv \partial_0 \psi_\mu^\dagger (\rho^0)^\dagger \rho^0 + \partial_1 \psi_\mu^\dagger (\rho^1)^\dagger \rho^0\\ = \partial_0 \psi_\mu^\dagger \rho^0 \rho^0 - \partial_1 \psi_\mu^\dagger \rho^1 \rho^0 = \partial_0 \psi_\mu^\dagger \rho^0 \rho^0 + \partial_1 \psi_\mu^\dagger \rho^0 \rho^1 \equiv \partial_\beta \bar \psi_\mu \rho^\beta. \end{multline} So \begin{multline} \delta \left( -i \bar \psi^\mu \rho^\alpha \partial_\alpha \psi_\mu \right) = - \partial_\beta \bar \psi_\mu \rho^\beta \rho^\alpha \partial_\alpha X^\mu \varepsilon - \bar \psi^\mu \rho^\alpha \partial_\alpha \rho^\beta \partial_\beta X_\mu \varepsilon\\ \equiv - \partial_\alpha \bar \psi_\mu \rho^\alpha \rho^\beta \partial_\beta X^\mu \varepsilon - \bar \psi^\mu \rho^\alpha \partial_\alpha \rho^\beta \partial_\beta X_\mu \varepsilon. \end{multline} How the variation can vanish? I don't see any chance. I'll remind that the symmetry is global, so we even can't integrate by parts. Answer: Hints: The Majorana spinor is real. For instance $\bar{\psi}=\psi^T\rho^0$ without complex conjugation. The SUSY transformation $\delta{\cal L}$ of the Lagrangian density ${\cal L}$ does not have to vanish. It is enough if it is a total divergence. See the notion of quasi-symmetry, cf. e.g. this and this Phys.SE posts.
{ "domain": "physics.stackexchange", "id": 22392, "tags": "homework-and-exercises, string-theory, lagrangian-formalism, supersymmetry, fermions" }
FSK: non-integer vs integer modulation index, peculiar peaks
Question: I've been experimenting with frequency shift keying (FSK) and found a peculiar pattern I cannot explain. I noticed that there is a significant change in the shape of the power spectral density of the FSK signals, depending only on their modulation index. For readability I split the discussion into the two cases, integer and non-integer modulation index: Integer modulation index: The two characteristic FSK flanks are symmetrical and they show very strong peaks in their middle. The distance between those two peaks is exactly the shift of the FSK signal. The image below nicely shows those peaks and the symmetrical FSK flanks in the PSD and STFT of one FSK signal. Non-integer modulation index: The two characteristic FSK peaks are not symmetrical; there is considerable skewness to be observed. The peaks found with an integer modulation index are not present. The image below shows a similar example with a non-integer modulation index. I wonder how that difference in the PSD of the signals comes to pass? Answer: With integer modulation indices, each symbol is actually an integer number of full oscillations. You DFT that, and get a sharp, discrete spectrum of tones, convolved with the pulse shape. With non-integer indices, you get "cut-off" oscillations. That is, we need to distinguish between two cases: Continuous-phase FSK: your next oscillation starts with a phase determined by the previous symbol(s) (as the phase continues where you left off). Non-continuous-phase FSK: your symbol starts with a specific phase, no matter what the end phase of the last symbol was. In the non-continuous case, you get a PSK-like spectral component (because you suddenly change the phase). If you've been (virtually) letting the subcarrier oscillator run through while the other one was active, you'd only need to convolve that PSK spectrum with Dirac impulses to form your spectrum.
If you "stopped" it, and always started a symbol with the same phase, you'd even get the PSK effect when constantly sending the same symbol. If you then also send both possible symbols alternatingly, you'd have to convolve that with the abrupt symbol phase change, so you'd get a slightly biased PSD. I think this is what we're seeing here. EDIT: Found out this is CP-FSK: Well, the fact that you don't have full oscillations definitely explains why you don't get Diracs in the spectrum - especially since you're not really observing the PSD (which is an "invisible" property of a stochastic process), but a PSD estimate, very likely based simply on the mag² of a DFT - in other words, oscillations that don't fit into the DFT size an integer number of times cannot ever be sharp spikes. In reality, this poses no problem (because for reliable detection, you need signal power, not peak height, and the power within the "broadened" peaks is, thanks to Parseval, the same). The asymmetry is an interesting effect here, and I wouldn't have expected it to be so clear (it should have been there, by the pure fact that the support of your subcarrier shapes in the frequency domain would not be limited to "their" half of all frequencies, just not as clearly). The fact that the "right" half of the spectrum seems to be consistently higher worries me a bit - is it possible there are channel-model/analog reasons for this?
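The integer-index spectral lines can be reproduced in a few lines of Python (a sketch, not the asker's processing chain): generate baseband continuous-phase FSK with a phase accumulator and compare how much of the total power the strongest FFT bin captures for an integer versus a non-integer modulation index h:

```python
import numpy as np

def peak_power_fraction(h, n_sym=4096, sps=8, seed=0):
    """Fraction of total power in the strongest FFT bin of a random
    baseband CPFSK signal with modulation index h (sps samples/symbol)."""
    rng = np.random.default_rng(seed)
    symbols = rng.choice([-1.0, 1.0], size=n_sym)
    # Instantaneous frequency +/- h/(2T) in cycles per sample (T = sps).
    freq = np.repeat(symbols, sps) * h / (2.0 * sps)
    phase = 2.0 * np.pi * np.cumsum(freq)    # continuous phase
    x = np.exp(1j * phase)
    psd = np.abs(np.fft.fft(x)) ** 2
    return psd.max() / psd.sum()
```

For h = 1 roughly a quarter of the power sits in each of the two tone bins (the strong discrete peaks of the integer-index plot), while for h = 0.7 the strongest bin holds only a sliver of the power: the lines vanish into a skewed continuous spectrum, just as in the non-integer PSD.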
{ "domain": "dsp.stackexchange", "id": 8059, "tags": "digital-communications, modulation, fsk" }
Force due to pressure on side of box versus weight of fluid inside box
Question: I recently solved a problem that for some reason I have no intuition about. Imagine a box of width $w$, length $l$, and height $h$ filled with a fluid of density $\rho$ on the earth (so under the influence of gravity). Assume that $l > w$, so we can refer to a "long side" of the box (see attached figure). Compare the force on the long side of the box with the weight of the fluid. So, the weight of the fluid in the box is given by $W = \rho g V = \rho g lwh$. Meanwhile, we can find that the force on the long side of the box is given by $F=\frac{1}{2}\rho g lh^2$. (This is just the average pressure on the long side of the box times the area of that side.) Then, looking at the ratio of the force on the long side of the box to the weight of the fluid, we find $F/W = h/(2w)$. This result is what I find puzzling. If I make a very narrow (small $w$) but tall (large $h$) box, for a small amount of fluid, the force on the long side can still be very large (much larger than the weight of the fluid). If I make the long side of the box so it can slide (causing $w$ to change), for small enough $w$ and large enough $h$, I won't be able to hold the side of the box in place, even if the volume of fluid (e.g. water) that the box can hold is very small. We can further decrease $w$ and/or increase $h$ to the point that something like a tractor or bulldozer still won't be able to keep the wall of the box from sliding out. One thing that makes me feel a little better about this is knowing that for very small $w$, if the wall of the box slides out just slightly, the water level in the box will drop drastically and therefore so will the force that the wall applies on me or the bulldozer or whatever. But still, this idea seems strange. Does anyone have a good way to think about this to make it more intuitive? 
Answer: The other answer is more detailed and related more specifically to details of your question, but maybe this will create some intuition: It does not take much water to create a great amount of force. For instance, you can float a ship in one cup of water if you make a container that is the same shape as the ship (below the water line) but offset by such an amount that the space between ship and the container has volume one cup. The total weight of the water would be slight, but the upward component of the force over the area of the ship would be equal to the weight of the ship.
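A two-line numeric check of the ratio derived above (water in SI units; the box dimensions are arbitrary examples): a 1 cm wide, 2 m tall box pushes on its long wall with 100 times the weight of the water it holds.

```python
RHO, G = 1000.0, 9.81  # water density (kg/m^3), gravity (m/s^2)

def wall_force(l, h):
    # average pressure (rho*g*h/2) times the area of the long side (l*h)
    return 0.5 * RHO * G * l * h ** 2

def fluid_weight(l, w, h):
    return RHO * G * l * w * h

l, w, h = 1.0, 0.01, 2.0  # long, very narrow, tall
ratio = wall_force(l, h) / fluid_weight(l, w, h)  # equals h / (2*w)
```

Note that `l` cancels out of the ratio, so only how narrow and how tall the box is matters.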
{ "domain": "physics.stackexchange", "id": 25227, "tags": "pressure, fluid-statics" }
Derivation of the diffusion coefficient?
Question: The diffusion coefficient relates the particle flux $J$ to the gradient in the number density (of the 'labelled' particles) $\frac{\partial \bar n}{\partial z}$ such that: $$J=-D \frac{\partial \bar n}{\partial z}$$ I have seen a number of places* give an approximate derivation of $D$. All rely on the statement that the mean number of particles travelling from above the boundary at $z=z_0$ is related to $n(z_0+\lambda)$ where $\lambda$ is the mean free path length. I cannot, however, see why collisions come into such (approximate) derivations and therefore where the use of $\lambda$ is justified. Please can someone explain this to me? *For example The mathematical theory of non-uniform gases 3rd ed by Chapman and Cowling page 102 Answer: The collision sets the time scale over which the particle can travel freely on average. Based on this (semiclassical) picture, you can use the relation you quoted to calculate the diffusion constant.
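For what it's worth, here is the rough one-dimensional version of the argument those references make (a sketch with order-unity factors dropped, assuming a mean speed $\bar v$): particles crossing the plane $z=z_0$ from above last collided, on average, a distance $\lambda$ away, so they carry the density $\bar n(z_0+\lambda)$; those from below carry $\bar n(z_0-\lambda)$. The net flux is then $$J \approx \tfrac{1}{2}\bar v\,\bar n(z_0-\lambda) - \tfrac{1}{2}\bar v\,\bar n(z_0+\lambda) \approx -\bar v \lambda \frac{\partial \bar n}{\partial z},$$ so $D \sim \bar v \lambda$ (the careful average over directions in three dimensions gives $D = \tfrac{1}{3}\bar v \lambda$). Collisions enter precisely because $\lambda$ is the distance over which a particle still "remembers" the density at its last scattering.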
{ "domain": "physics.stackexchange", "id": 28019, "tags": "diffusion, kinetic-theory" }
What do the derivatives in these Hamilton equations mean?
Question: I have a Hamiltonian: $$H=\dot qp - L = \frac 1 2 m\dot q^2+kq^2\frac 1 2 - aq$$ In a system with one coordinate $q$ (where $L$ is the Lagrangian). One of the Hamilton equations is: $$\dot q =-\frac {\partial H} {\partial p}$$ But when I try to derive $H$ with respect to $p$, I get very confused. What is the derivative of $q$ with respect to $p=m\dot q$, for instance? When I boil it right down, my confusion stems from the fact that I realize I don't know what that partial derivative means. A partial derivative of a multi-variable function should be taken with respect to an index (you just specify which variable, thought of as a "slot" in the function, you're deriving with respect to). I suppose I'm not clear on what multi-variable function $H$ represents (I mean, $q$ and $\dot q$ are functions of $t$, so you could say it's a one-variable function...), or how I should interpret $p$ as a variable. I have similar difficulties with the equation $\dot p=\frac {\partial H} {\partial q}$, although I think I can understand $\frac {\partial H} {\partial t} = \frac {dL} {dt}$. The left hand side should give $m\dot q\ddot q + kq\dot q - a\dot q$, right? Answer: Let's restrict the discussion to one spatial dimension for simplicity. What's going on with partial derivatives? The Lagrangian is a function of two real variables. We commonly label these variables $q, \dot q$ because of their physical significance. For example, the Lagrangian for a one-dimensional simple harmonic oscillator is \begin{align} L(q, \dot q) = \frac{1}{2}m \dot q^2 -\frac{1}{2}k q^2 \end{align} Notice that we could just as easily have written \begin{align} L(\heartsuit, \clubsuit) = \frac{1}{2}m\clubsuit^2 - \frac{1}{2}k\heartsuit^2, \end{align} because we are just using labels for the two slots that can be any symbols we choose. However, it is convenient to stick to a particular labeling, because then, we can use some convenient notation for the partial derivatives.
For example, if we use the $q, \dot q$ labeling, then the expressions \begin{align} \frac{\partial L}{\partial q}, \qquad \frac{\partial L}{\partial \dot q} \end{align} mean the derivatives of $L$ with respect to its first and second arguments ("slots") respectively. But notice that if we have used the second labeling above, we just as easily could have written \begin{align} \frac{\partial L}{\partial \heartsuit}, \qquad \frac{\partial L}{\partial \clubsuit} \end{align} for the same derivatives. What exactly is the hamiltonian...really? Now, the Hamiltonian is also a function of two real variables, and we conventionally call them $q$ and $p$, but how is this function generated from a given Lagrangian $L$? Well we need to be careful here because this is where physicists tend to really abuse notation. What we do, is we first define a function $\bar p$ (the canonical momentum conjugate to $q$) as a certain derivative of the Lagrangian: \begin{align} \bar p = \frac{\partial L}{\partial \dot q}, \end{align} where I am using the conventional labeling for the arguments of the lagrangian. I put a bar on $p$ here to avoid the common abuses of notation you'll see in physics, and to emphasize the actual mathematics of what's going on. Notice, in particular, that in this conventional labeling, $\bar p$ is a function of two real variables $q$ and $\dot q$. Next, we write the relation \begin{align} p = \bar p(q, \dot q), \end{align} and we invert it to write $\dot q$ in terms of $q$ and $p$, so now we have \begin{align} \dot q = \text{some expression in terms of $q$, and $p$} = f(q,p), \end{align} Finally, we define \begin{align} H(q,p) = p f(q,p) - L(q, f(q,p)). \end{align} Notice, again, that we just as easily could have labeled the arguments of $H$ with whatever labels we wanted, but once we do this, we typically stick to that labeling in which case the same remarks that we made above for the derivatives of the Lagrangian can be made here. 
Note that intuitively what's happening here is that the Hamiltonian is defined "as a function of $q$ and $p$"; you should never be writing it "as a function of $q$ and $\dot q$." Example. Consider, again, the one-dimensional simple harmonic oscillator. We have \begin{align} \bar p (q, \dot q) = \frac{\partial L}{\partial \dot q}(q, \dot q) = m \dot q \end{align} So now the relation $\bar p (q, \dot q) = p$ is \begin{align} m \dot q = p \end{align} and therefore inversion to write $\dot q$ in terms of $q$ and $p$ is super easy in this case: \begin{align} \dot q = \frac{p}{m} = f(q,p). \end{align} It follows that \begin{align} H(q,p) &= p f(q,p) - L(q, f(q,p)) \\ &= p(p/m) - \frac{1}{2}m(p/m)^2 +\frac{1}{2}kq^2 \\ &= \frac{p^2}{2m} + \frac{1}{2}kq^2 \end{align} Now, taking derivatives with respect to $q$ and $p$ simply means taking derivatives with respect to the first and second arguments of this function of two real variables. What about Hamilton's equations etc.? Now that we know what the Hamiltonian is and how it's computed, let's address equations like Hamilton's equations: \begin{align} \dot q = \frac{\partial H}{\partial p}, \qquad \dot p = -\frac{\partial H}{\partial q}, \end{align} Again, your confusion is not surprising because physicists are notorious for abusing notation in these instances and not pointing that out. To interpret this properly, we note that in the Hamiltonian formulation, the state of the system at any given time $t$ consists of a pair $(q(t),p(t))$ giving the value of the position of the system and of its canonical momentum at that time $t$. Actually, in order to avoid perpetuating common confusions, let's use a different notation and write $(\gamma_q(t), \gamma_p(t))$ for the state of the system at time $t$ and reserve $q$ and $p$ for labels of the argument of $H$.
Then Hamilton's equations are really saying that if the pair $(\gamma_q(t), \gamma_p(t))$ is a physical motion realized by the system, then \begin{align} \dot \gamma_q(t) = \frac{\partial H}{\partial p}(\gamma_q(t), \gamma_p(t)), \qquad \dot \gamma_p(t) = -\frac{\partial H}{\partial q}(\gamma_q(t), \gamma_p(t)). \end{align} for all $t$. In other words, we get a system of coupled, first order ODEs for the functions $\gamma_q(t), \gamma_p(t)$. You can see, because of this notation, that for example, $\dot q$ in Hamilton's equations is a different beast than $\dot q$ as used to label the arguments of the Lagrangian. In the former case, it is a function, in the latter case, it is just a label. You can always stave-off this ambiguity by using different notations for these animals in the different contexts as I have done here. However, once you know what you're doing, you can happily once again revert back to overloading the symbols you're using, and you probably won't make a mistake either procedurally, or conceptually. In fact, in practice almost everyone who knows what he's doing does this because it's faster.
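To make the "labels vs. functions" distinction concrete, here is a small self-contained numeric sketch (plain Python; the mass, spring constant, amplitude, and time below are arbitrary choices): it takes the SHO Hamiltonian derived above and checks Hamilton's equations by finite differences along the known analytic trajectory $(\gamma_q(t), \gamma_p(t))$.

```python
import math

m, k = 2.0, 8.0
omega = math.sqrt(k / m)

def H(q, p):
    # the function of two real variables derived above
    return p * p / (2 * m) + 0.5 * k * q * q

# the analytic SHO motion: gamma_q(t) = A cos(wt), gamma_p(t) = m * d/dt gamma_q(t)
A = 1.5
def gamma_q(t): return A * math.cos(omega * t)
def gamma_p(t): return -m * A * omega * math.sin(omega * t)

# partial derivatives of H with respect to its second and first slots,
# approximated by central differences
def dH_dp(q, p, eps=1e-6):
    return (H(q, p + eps) - H(q, p - eps)) / (2 * eps)

def dH_dq(q, p, eps=1e-6):
    return (H(q + eps, p) - H(q - eps, p)) / (2 * eps)

t = 0.37
q_t, p_t = gamma_q(t), gamma_p(t)
qdot = -A * omega * math.sin(omega * t)           # d/dt gamma_q
pdot = -m * A * omega ** 2 * math.cos(omega * t)  # d/dt gamma_p
```

On the trajectory, `qdot` matches `dH_dp(q_t, p_t)` and `pdot` matches `-dH_dq(q_t, p_t)`, which is exactly the statement of Hamilton's equations in the "careful" notation above.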
{ "domain": "physics.stackexchange", "id": 14511, "tags": "lagrangian-formalism, hamiltonian-formalism" }
Simple login test, translated from java to python
Question: I'm reading on how to write proper testing suites here. So I'm trying to follow the selenium example in the docs which is in Java; I'm trying to translate it to Python since my app is written in Python. So I translated the example like so: class LoginPage(unittest.TestCase): FE_URL = os.getenv('FE_URL') SERVER_URL = os.getenv('SERVER_URL') def __init__(self, driver): self.selenium = driver def find_button_by_text(self, text): buttons = self.selenium.find_elements_by_tag_name("button") for btn in buttons: if text in btn.get_attribute("innerHTML"): return btn def login_page(self): self.selenium.get(self.FE_URL) WebDriverWait(self.selenium, MAX_WAIT).until( EC.presence_of_element_located((By.CLASS_NAME, "login")) ) return self.selenium def type_username(self, username): username_locator = self.selenium.find_element_by_name("user[email]") username_locator.send_keys(username) return self.selenium def type_password(self, password): password_locator = self.selenium.find_element_by_name("user[password]") password_locator.send_keys(password) return self.selenium def submit_login(self): login_locator = self.find_button_by_text("Continue") login_locator.click() return self.selenium def submit_login_expecting_failure(self): self.login_page() self.submit_login() return self.selenium def login_as(self, username, password): login_page = self.login_page() self.type_username(username) self.type_password(password) return self.submit_login() and then the actual test is here: class MyTest(unittest.TestCase): USER_NAME = os.getenv('USER_NAME') PASSWORD = os.getenv('PASSWORD') @classmethod def setUpClass(cls): super(MyTest, cls).setUpClass() cls.selenium = WebDriver() # cls.selenium = webdriver.Firefox() cls.wait = WebDriverWait(cls.selenium, MAX_WAIT) @classmethod def tearDownClass(cls): cls.selenium.quit() super(MyTest, cls).tearDownClass() def test_login(self): login_page = LoginPage(self.selenium) main_page = login_page.login_as(self.USER_NAME, self.PASSWORD) 
WebDriverWait(main_page, MAX_WAIT).until( EC.presence_of_element_located((By.LINK_TEXT, "Create alert")) ) def test_failed_login(self): login_page = LoginPage(self.selenium) page = login_page.submit_login_expecting_failure() alert = WebDriverWait(page, MAX_WAIT).until( EC.visibility_of_element_located((By.CLASS_NAME, "alert-danger")) ) self.assertIn("Invalid Email or Password", alert.text) if __name__ == "__main__": unittest.main() The test works. Did I understand this correctly that the driver is set up in the actual test class and not in the LoginPage? Did I hide the actual mechanics of the test correctly? I am using WebDriverWait in the LoginPage class to wait till the page is loaded. I see this as kind of an assert replacement but I am not sure how else to wait for the page to have finished loading. Answer: I think the biggest miss here is that LoginPage, though it is perhaps fine as a test utility class, is clearly not a TestCase and should not inherit from that. This loop: buttons = self.selenium.find_elements_by_tag_name("button") for btn in buttons: if text in btn.get_attribute("innerHTML"): return btn should not be needed, and you should be able to write a single selector that accomplishes the same thing. Without access to your DOM I don't know what that's going to be, precisely. Your pattern of return self.selenium isn't particularly useful, since MyTest already has a reference to its own self.selenium; so all of those functions can just be None-returns. For Python 3 you should no longer be passing parameters into super().
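To illustrate the two main review points (drop the TestCase inheritance, replace the button loop with one selector), here is a minimal sketch; the FakeDriver stub and the XPath are illustrative assumptions, not your real DOM, and the old-style find_element_by_xpath matches the API style used in the question.

```python
class LoginPage:
    """Plain page object -- no unittest.TestCase inheritance."""
    def __init__(self, driver):
        self.driver = driver

    def submit_login(self):
        # one selector instead of looping over every <button>
        self.driver.find_element_by_xpath(
            '//button[contains(., "Continue")]').click()
        return self.driver

class FakeDriver:
    """Tiny stand-in for a WebDriver, just to exercise the wiring."""
    def __init__(self):
        self.clicked = []

    def find_element_by_xpath(self, selector):
        outer = self

        class Element:
            def click(self):
                outer.clicked.append(selector)

        return Element()

driver = LoginPage(FakeDriver()).submit_login()
```

A stub like FakeDriver is also handy for unit-testing the page object itself without launching a browser.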
{ "domain": "codereview.stackexchange", "id": 41762, "tags": "python-3.x, selenium, integration-testing" }
Clearing up my confusion about the static coefficient of friction
Question: I had a question about calculating the static coefficient of friction of an object on an inclined plane (classic physics question). We can simplify those calculations to $\mu = \tan \theta$. Since $\theta$ can be $0$ (i.e. the object is on a flat surface), does this mean that the minimum static coefficient is always $0$? Also, when calculating the static friction coefficient, is it best to take the average? Like take the maximum static coefficient and divide it by 2 to obtain the average. Or should I just take the maximum static coefficient? Thanks a lot for the help. Answer: The other answers are fine. This one is just to give you another perspective that may help in combination with the other answers. First of all, the static friction force parallel to a surface always matches the opposing force parallel to the surface up until the maximum possible static friction force is reached, at which point relative motion between the object and the surface is impending. So if you think of the object resting on a horizontal surface, there is no force parallel to that surface for the static friction force to oppose. So the static friction force is zero. Now if you start to increase the angle of the surface relative to the horizontal, the component of the gravitational force acting on the object down and parallel to the incline increases to $mgsin\theta$. The static friction force acting up the incline increases an equal amount preventing a net force downward and a downward acceleration of the object, as long as the maximum possible static friction force is not exceeded, which equals $\mu N$ where $N$ is the normal force to the incline. 
We can simplify those calculations to $\mu = \tan \theta$ The following inequality means that in order for impending motion not to occur at an incline angle of $\theta$, the coefficient of static friction has to satisfy $$\mu \ge \tan\theta$$ or, equivalently, $$\theta \le \tan^{-1}\mu$$ So, for example, if the coefficient of static friction is $\mu = 0.5$, then impending motion of the object will not occur as long as $\theta \lt 26.57^\circ$ Also, when calculating the static friction coefficient, is it best to take the average? Like take the maximum static coefficient and divide it by 2 to obtain the average. Or should I just take the maximum static coefficient? There is no average static friction force. As indicated above, the static friction force matches the opposing force on the object until the maximum possible static friction force is reached. That maximum force is based on a single value of the coefficient of static friction. Hope this helps.
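The whole answer boils down to a one-line comparison, sketched below (the values are arbitrary examples): the block stays put as long as $\tan\theta \le \mu$, and the critical angle is $\tan^{-1}\mu$.

```python
import math

def slides(mu, theta_deg):
    """True if a block on an incline at theta_deg would start to slide.
    m*g cancels out, so only tan(theta) vs. mu matters."""
    return math.tan(math.radians(theta_deg)) > mu

mu = 0.5
critical_deg = math.degrees(math.atan(mu))  # roughly 26.57 degrees
```

Below the critical angle static friction fully balances gravity along the incline; above it, impending motion becomes actual motion.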
{ "domain": "physics.stackexchange", "id": 76698, "tags": "newtonian-mechanics, friction" }
Can hot Jupiters cause solar flares?
Question: I'm very new at Astronomy, and my knowledge is sparse. I've tried to be conscientious about my Wikipedia research but there's going to be a lot of things I don't know. Thanks for your patience. BACKGROUND I had read that hot Jupiters can cause superflares due to magnetic recombination. But checking Wikipedia I came across the following: Not all planetary transits can be detected by Kepler, since the planetary orbit may be out of the line of sight to Earth. However, the hot Jupiters orbit so close to the primary that the chance of a transit is about 10%. If superflares were caused by close planets the 279 flare stars discovered should have about 28 transiting companions; none of them actually showed evidence of transits, effectively excluding this explanation. - Wikipedia QUESTION Given this, and among modern astronomers in general, is the theory that hot Jupiters can cause superflares on the outs? Or is the article just saying that hot Jupiters can't explain these particular stars, while the model remains valid elsewhere? Do we have any solid evidence of a gas giant causing solar flares, super or otherwise? Answer: Just going off the Wikipedia article you posted, it says the hot Jupiter superflare theory was abandoned. The flares were initially explained by postulating giant planets in very close orbits, such that the magnetic fields of the star and planet were linked. The orbit of the planet would warp the field lines until the instability released magnetic field energy as a flare. However, no such planet has shown up as a Kepler transit and this theory has been abandoned.
{ "domain": "astronomy.stackexchange", "id": 2770, "tags": "star, gas-giants, solar-flare, hot-jupiter" }
Matrix multiplication
Question: Below is the code that I've written for matrix multiplication: import java.text.DecimalFormat; import java.util.InputMismatchException; import java.util.Scanner; public class MatrixMultiplication { private static final String TERMINATED_MESSAGE = "Terminated" + "," + " "; private static final String INVALID_MATRIX_DIMENSION_ERROR_MESSAGE = TERMINATED_MESSAGE + "Invalid matrix dimension!"; private static final String MATRIX_MISMATCH_ERROR_MESSAGE = TERMINATED_MESSAGE + "First matrix column and second matrix row must be same!"; private static final String INVALID_INPUT_ERROR_MESSAGE = TERMINATED_MESSAGE + "Invalid Input!"; private static Scanner scanner; private static DecimalFormat decimalFormat; private static int firstMatrixRows; private static int firstMatrixcolumns; private static int secondMatrixRows; private static int secondMatrixColumns; private static double firstMatrix[][]; private static double secondMatrix[][]; private static double resultMatrix[][]; private static boolean errorFlag; public static void main(String[] args) { initialize(); if (!errorFlag) getInput(); if (!errorFlag) calculateProduct(); if (!errorFlag) displayResult(); } private static void initialize() { scanner = new Scanner(System.in); decimalFormat = new DecimalFormat("#.##"); errorFlag = false; try { System.out.print("Number of Rows in First Matrix: "); firstMatrixRows = scanner.nextInt(); System.out.print("Number of Columns in First Matrix: "); firstMatrixcolumns = scanner.nextInt(); System.out.print("Number of Rows in Second Matrix: "); secondMatrixRows = scanner.nextInt(); System.out.print("Number of Columns in Second Matrix: "); secondMatrixColumns = scanner.nextInt(); } catch (InputMismatchException ime) { System.out.println(INVALID_INPUT_ERROR_MESSAGE); errorFlag = true; return; } firstMatrix = new double[firstMatrixRows][firstMatrixcolumns]; secondMatrix = new double[secondMatrixRows][secondMatrixColumns]; if (firstMatrixRows == 0 || firstMatrixcolumns == 0 || 
secondMatrixRows == 0 || secondMatrixColumns == 0) { System.out.println(INVALID_MATRIX_DIMENSION_ERROR_MESSAGE); errorFlag = true; return; } else if (firstMatrixcolumns != secondMatrixRows) { System.out.println(MATRIX_MISMATCH_ERROR_MESSAGE); errorFlag = true; return; } resultMatrix = new double[firstMatrixRows][secondMatrixColumns]; } private static void getInput() { System.out.println("Enter the first matrix (" + firstMatrixRows + " x " + firstMatrixcolumns + ") :"); readValues(firstMatrix); System.out.println("Enter the second matrix (" + secondMatrixRows + " x " + secondMatrixColumns + ") :"); readValues(secondMatrix); } private static void readValues(double matrix[][]) { for (int i = 0; i < matrix.length; i++) { for (int j = 0; j < matrix[0].length; j++) { try { matrix[i][j] = scanner.nextDouble(); } catch (InputMismatchException ime) { System.out.println(INVALID_INPUT_ERROR_MESSAGE); errorFlag = true; return; } } } } private static void calculateProduct() { for (int i = 0; i < firstMatrixRows; i++) { for (int j = 0; j < secondMatrixColumns; j++) { for (int k = 0; k < secondMatrixRows; k++) { resultMatrix[i][j] = resultMatrix[i][j] + (firstMatrix[i][k] * secondMatrix[k][j]); } } } } private static void displayResult() { System.out.println("First Matrix:"); printMatrix(firstMatrix); System.out.println("Second Matrix:"); printMatrix(secondMatrix); System.out.println("Result Matrix (Product):"); printMatrix(resultMatrix); } private static void printMatrix(double matrix[][]) { for (int i = 0; i < matrix.length; i++) { for (int j = 0; j < matrix[0].length; j++) { System.out.print(decimalFormat.format(matrix[i][j]) + "\t\t"); } System.out.println(); } System.out.println(); } } What are your thoughts on this? Can this code be optimized or done using any other simpler logic? Any suggestions and thoughts are welcome. Answer: Static and void All your variables are static. All your methods return void. This is not good. Java is an object-oriented language. 
You're not using it that way. You're using it more as a procedural language by having everything as static and using only void methods. Although this works (apparently), you're losing flexibility. I have a mission for you: Remove all the below lines from your program and use them as local variables rather than static fields. private static Scanner scanner; private static DecimalFormat decimalFormat; private static int firstMatrixRows; private static int firstMatrixcolumns; private static int secondMatrixRows; private static int secondMatrixColumns; private static double firstMatrix[][]; private static double secondMatrix[][]; private static double resultMatrix[][]; private static boolean errorFlag; To get you started I have a few suggestions: displayResult method can take three parameters: firstMatrix, secondMatrix, and, you guessed it: resultMatrix. calculateProduct can take two parameters: firstMatrix, secondMatrix, and return the result matrix. getInput can be modified into only reading one matrix, then you can use getInput("Enter the first matrix", firstMatrix); Use firstMatrix.length and firstMatrix[0].length to determine the width and height of a matrix. Your own Matrix class And finally, this is a major suggestion that partially goes against the above suggestions: Replace double[][] with MyMatrix that you create as your own class. The MyMatrix class itself can contain the getInput method and a public MyMatrix multiply(MyMatrix otherMatrix) method. It can also contain double[][] matrixData The class can contain an output method for outputting its internal matrixData. It can also contain, if you want, private final int columns; and private final int rows; This is what I would ultimately recommend, as it will allow you to add an add method, and a whole lot of other matrix-specific methods such as calculateInverseMatrix. Constants The only variables I can accept being static are the ones marked final (the constants).
I would however make a minor change to one of them, as there's no reason to use string concatenation here: "Terminated" + "," + " ". private static final String TERMINATED_MESSAGE = "Terminated, "; Use exceptions rather than errorFlag if (firstMatrixRows == 0 || firstMatrixcolumns == 0 || secondMatrixRows == 0 || secondMatrixColumns == 0) { throw new IllegalStateException(INVALID_MATRIX_DIMENSION_ERROR_MESSAGE); } else if (firstMatrixcolumns != secondMatrixRows) { throw new IllegalStateException(MATRIX_MISMATCH_ERROR_MESSAGE); } Throwing an exception is to be used for exceptional cases. You should perhaps consider creating your own Exception class, and ask yourself if you want it to be a checked or unchecked exception. Once you have created your own MyMatrix class and restructured your program a bit to use more object orientation (remember my challenge, get rid of all those static variables!) I hope that you will write a follow-up question and that I will say "Well done! You did it!".
{ "domain": "codereview.stackexchange", "id": 7423, "tags": "java, optimization, algorithm, matrix" }
What kind of problems is the DQN algorithm good and bad for?
Question: I know this is a general question, but I'm just looking for intuition. What are the characteristics of problems (in terms of state-space, action-space, environment, or anything else you can think of) that are well solvable with the family of DQN algorithms? What kind of problems are not well fit for DQNs? Answer: I don't currently have much practical experience with DQN, but I can partially answer this question based on my theoretical knowledge and other info that I found. DQN is typically used for: discrete action spaces (although there have been attempts to apply it to continuous action spaces, such as this one); discrete and continuous state spaces; problems where the optimal policy is deterministic (an example where the optimal policy is not deterministic is rock-paper-scissors); and off-policy learning (Q-learning is an off-policy algorithm, but the point is that, if you have a problem/application where data can only be or has been gathered by a policy that is unrelated to the policy you want to estimate, then DQN is suitable, though there are other off-policy algorithms, such as DDPG). This guide also states that DQN is slower to train, but more sample efficient than other approaches, due to the use of the experience replay buffer. Moreover, if you have a small state and action space, it is probably a good idea to just use tabular Q-learning (i.e. no function approximation), given that it is guaranteed to converge to the optimal value function. See also this and this questions and this article (which compares DQN with policy gradients).
{ "domain": "ai.stackexchange", "id": 2448, "tags": "neural-networks, reinforcement-learning, dqn, applications" }
With an MLP (regression), is it appropriate to initialize bias in the final layer to be a value near the expected mean?
Question: For instance, when predicting IQ in a population you would expect the mean to be 100. If you initialize the bias in the final layer you are basically giving the network a head start, telling it in what range of values it should be guessing. Another way of getting at this would be to predict a scaled outcome value with a mean of 0, which would make the standard bias initializer value of 0 be spot-on. Are there any guiding rules/norms when it comes to this situation? Are both methods appropriate? Answer: Cool idea! Looks like it really helps. Training starts with smaller error, so you might be able to train the model in shorter time with this trick. You should be okay as long as you initialise the weights properly. I trained model with different biases and plotted error at the start and the end of of the training. Expected mean of the data is 100. See code below. from keras.models import Sequential from keras.layers import Dense from keras.initializers import Constant, Zeros, Ones from keras.metrics import mean_squared_error import matplotlib.pyplot as plt import numpy as np def getData(): n = 200 X = np.random.randn(n, 2) Y = 20 * X[:,0] + 10 * X[:,1] + 100 return X, Y def getModel(bias): m = Sequential() m.add(Dense(1, input_shape=(2,), bias_initializer=bias)) m.compile('adam', loss='mse') return m X, Y = getData() constants = [0, 1, 10, 50, 100, 150, 200] loss_at_start = [] loss_at_end = [] for c in constants: m = getModel(Constant(c)) m.fit(X,Y, epochs=20, validation_split=0.2, validation_steps=20, steps_per_epoch=1000) loss_at_start.append(m.history.history['loss'][0]) loss_at_end.append(m.history.history['loss'][-1]) plt.plot(constants, loss_at_start) plt.plot(constants, loss_at_end) plt.xlabel('Bias') plt.ylabel('Loss') plt.legend(['Loss at training start','Loss at training end']) plt.title('Expected mean: 100') plt.show()
{ "domain": "datascience.stackexchange", "id": 5045, "tags": "machine-learning, deep-learning, keras" }
Most pythonic way to combine elements of arbitrary lists into a single list
Question: I have a list of lists which represents the options I can chose from. I have a sequence of indices which represent which option lists I want to take elements from, and in which order. E.g. if I have choices = [ [ 1, 2, 3 ], [ 4, 5, 6 ], [ 7, 8, 9 ] ] sequence = [ 2, 0, 1, 1 ] I want my output to be [7, 8, 9, 1, 2, 3, 4, 5, 6, 4, 5, 6] #index 2 index 0 index 1 index 1 I have found three possible solutions: choice = sum( ( choices[i] for i in sequence ), [] ) choice = reduce( operator.add, ( choices[i] for i in sequence ) ) choice = [ element for i in sequence for element in choices[i] ] I would like to know which of these do people find the most pythonic and to know if there are any other elegant solutions. Answer: Unless I actually needed the whole list at once, I would probably use itertools for this: from itertools import chain choice = chain.from_iterable(choices[i] for i in sequence) If you do need the list, you can still use this with an explicit conversion: choice = list(chain.from_iterable(choices[i] for i in sequence)) Note: this fits pretty close to Nobody's suggestion - here chain.from_iterable is flatten and the generator expression is making the sample.
{ "domain": "codereview.stackexchange", "id": 8474, "tags": "python, performance" }
Why is the electric field perpendicular to a circuit zero?
Question: Griffiths briefly addresses the uniformity of the electric field in a wire using Laplace's equation by assuming that the electric field is always parallel to the wire (i.e. $\mathbf{E}\cdot\hat{n}=0$). On the cylindrical surface, $\mathbf{J}\cdot\hat{n}=0$, else charge would be leaking out into the surrounding space (which we assume to be nonconducting). Therefore $\mathbf{E}\cdot\hat{n}=0$, and hence $\partial V/\partial n = 0$. Having specified V or its normal derivative on all surfaces, the potential is uniquely determined. -Griffiths, Introduction to Electrodynamics I am likely missing something, but this seems like extremely fallacious logic. The current density perpendicular to the wire is obviously zero, but by Ohm's law, $$\mathbf{J}=\sigma\mathbf{E}$$ If the current density $\mathbf{J}$ is equal to zero, and, as he says, the surrounding medium is non-conductive, then doesn't an essentially zero conductivity and essentially zero current density allow $\mathbf{E}\cdot\hat{n}$ to be whatever it wants? And then for my question. If this is true, there must be some mechanism by which $\mathbf{E}\cdot\hat{n}$ is kept equal to zero so the Laplace's solution holds. The horizontal motion of electrons (i.e. electron buildup on the surface of the wire negating any perpendicular component) seems the obvious choice, but given the slow drift velocity of the electrons and the near-c speed of the "current wave-front", can this mechanism act fast enough? The horizontal motion of electrons in a wire might just be fast enough to achieve this. And does this account for the uniformity of the electric field in the wire, as well? Thanks Answer: The statement "On the cylindrical surface $\mathbf{J}\cdot\hat{n}=0$..." refers to just inside the wire so $\sigma\neq0$ and $\mathbf{E}$ cannot be whatever it wants.
{ "domain": "physics.stackexchange", "id": 32545, "tags": "electromagnetism, electric-current, classical-electrodynamics" }
Hidden shift problem as a benchmarking function
Question: I encountered the hidden shift problem as a benchmarking function to test the quantum algorithm outlined in this paper (the problem also features here). There are two oracle functions $f$, $f'$ : $\mathbb{F}_{2}^{n} \rightarrow \{ \pm 1 \}$ and a hidden shift string $s \in \mathbb{F}_{2}^{n}$. It is promised that $f$ is a bent (maximally non-linear) function, that is, the Hadamard transform of $f$ takes values $\pm 1$. It is also promised that $f'$ is the shifted version of the Hadamard transform of $f$, that is $$f'(x \oplus s) = 2^{-\frac{n}{2}} \sum_{y \in \mathbb{F}^{n}_{2}} (-1)^{x.y}f(y) \quad \forall x \in \mathbb{F}_{2}^{n} $$ The oracles are diagonal $n$ qubit unitary operators such that $O_{f}|x \rangle = f(x) |x \rangle$ and $O_{f'}|x \rangle = f'(x) |x \rangle$ for all $x \in \mathbb{F}^{n}_{2}$. It is stated that $|s\rangle = U|0^{n}\rangle$, $U \equiv H^{\otimes n} O_{f'} H^{\otimes n} O_{f} H^{\otimes n}$. I am struggling with this calculation. Here's what I did. $$H^{\otimes n} O_{f'} H^{\otimes n} O_{f} H^{\otimes n}|0^{n}\rangle$$ $$= H^{\otimes n} O_{f'} H^{\otimes n} 2^{-\frac{n}{2}}\sum_{x \in \mathbb{F}^{n}_{2}} f(x) |x \rangle$$ $$= H^{\otimes n} O_{f'} 2^{-n}\sum_{x \in \mathbb{F}^{n}_{2}} f(x) \sum_{y \in \mathbb{F}^{n}_{2}} (-1)^{x.y} |y \rangle $$ $$= H^{\otimes n} 2^{-n} \sum_{x \in \mathbb{F}^{n}_{2}} f(x) \sum_{y \in \mathbb{F}^{n}_{2}} (-1)^{x.y} f'(y) |y \rangle $$ $$= 2^{-\frac{3n}{2}} \sum_{x \in \mathbb{F}^{n}_{2}} f(x) \sum_{y \in \mathbb{F}^{n}_{2}} (-1)^{x.y} f'(y) \sum_{z \in \mathbb{F}^{n}_{2}} (-1)^{y.z} |z \rangle $$ I am not sure whether I have the correct expression and if I do, I have no idea how to simplify this large expression to get $|s\rangle$.
Answer: Well, you can simplify $$ H^{\otimes n} O_{f} H^{\otimes n}|0^{n}\rangle = H^{\otimes n} 2^{−\frac{n}{2}}\sum_{x \in \mathbb{F}^{n}_{2}} f(x) |x \rangle = $$ $$ = ~2^{−n}\sum_{x \in \mathbb{F}^{n}_{2}} f(x) \sum_{y \in \mathbb{F}^{n}_{2}} (-1)^{x.y} |y \rangle = ~2^{−n}\sum_{x \in \mathbb{F}^{n}_{2}}\sum_{y \in \mathbb{F}^{n}_{2}} f(x) (-1)^{x.y} |y \rangle = $$ $$ = ~2^{−\frac{n}{2}} \sum_{y \in \mathbb{F}^{n}_{2}}\left(~2^{−\frac{n}{2}}\sum_{x \in \mathbb{F}^{n}_{2}} f(x) (-1)^{x.y} \right) |y \rangle = ~2^{−\frac{n}{2}} \sum_{y \in \mathbb{F}^{n}_{2}} f′(y\oplus s)|y \rangle $$ On the other hand $$ O_{f'} H^{\otimes n} |s\rangle = O_{f'} ~2^{−\frac{n}{2}} \sum_{y \in \mathbb{F}^{n}_{2}} (-1)^{s.y}|y \rangle = ~2^{−\frac{n}{2}} \sum_{y \in \mathbb{F}^{n}_{2}} (-1)^{s.y}f'(y)|y \rangle $$ Now, if $|s\rangle = H^{\otimes n} O_{f′} H^{\otimes n} O_{f} H^{\otimes n} |0\rangle$ then $ O_{f′} H^{\otimes n} |s\rangle = H^{\otimes n} O_{f} H^{\otimes n} |0\rangle$. Hence it must be $$ ~2^{−\frac{n}{2}} \sum_{y \in \mathbb{F}^{n}_{2}} f′(y\oplus s)|y \rangle = ~2^{−\frac{n}{2}} \sum_{y \in \mathbb{F}^{n}_{2}} (-1)^{s.y}f'(y)|y \rangle , $$ i.e. $$ f′(y\oplus s) = (-1)^{s.y}f'(y) $$ Well, the last equality just can't be true for a general bent $f'$ and $s$,$y$, because you can deduce $f'(y)=f'(s)$, hence $f'$ must be constant. So, there is a mistake somewhere. Probably with notations or definitions. A deeper look is needed. EDIT. Ok, after some digging I found a mistake :) Authors of the first paper not quite carefully restated results of the source paper (your second link). In fact, $f'$ must be a Hadamard transform of the shifted $f$, and not a shift of the Hadamard transform of $f$! 
That is, it must be $$ f'(x) = ~2^{−\frac{n}{2}} \sum_{y \in \mathbb{F}^{n}_{2}} (-1)^{x.y}f(y\oplus s) $$ or, equivalently, by changing $x$,$y$ $$ f'(y) = ~2^{−\frac{n}{2}} \sum_{x \in \mathbb{F}^{n}_{2}} (-1)^{x.y}f(x\oplus s) $$ With this correct $f'$ we have $$ ~2^{−\frac{n}{2}} \sum_{y \in \mathbb{F}^{n}_{2}}\left(~2^{−\frac{n}{2}}\sum_{x \in \mathbb{F}^{n}_{2}} f(x) (-1)^{x.y} \right) |y \rangle = ~2^{−\frac{n}{2}} \sum_{y \in \mathbb{F}^{n}_{2}} (-1)^{y.s}f′(y)|y \rangle $$ and everything coincides (in the context of my previous calculations).
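With the corrected definition of $f'$, the identity $|s\rangle = U|0^n\rangle$ is easy to verify numerically. The sketch below is my own (not from the original exchange); it uses the $n=2$ bent function $f(x) = (-1)^{x_0 x_1}$ and the shift $s = (1,0)$:

```python
import numpy as np

n, N = 2, 4
s = 0b10  # hidden shift (1, 0) encoded as an integer bitstring

def dot(a, b):
    # inner product over F_2: parity of the bitwise AND
    return bin(a & b).count("1") % 2

def f(x):
    # a bent function on F_2^2: f(x) = (-1)^(x0*x1)
    x0, x1 = (x >> 1) & 1, x & 1
    return (-1) ** (x0 * x1)

def fprime(y):
    # Hadamard transform of the *shifted* f (the corrected definition)
    return sum((-1) ** dot(x, y) * f(x ^ s) for x in range(N)) / np.sqrt(N)

H1 = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
H = np.kron(H1, H1)                             # H tensor H
Of = np.diag([float(f(x)) for x in range(N)])   # O_f |x> = f(x)|x>
Ofp = np.diag([fprime(y) for y in range(N)])    # O_f' |y> = f'(y)|y>

state = H @ Ofp @ H @ Of @ H @ np.eye(N)[:, 0]  # U|00>
print(np.round(state, 6))  # amplitude 1 on index s = 2, i.e. the state |s>
```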
{ "domain": "quantumcomputing.stackexchange", "id": 683, "tags": "quantum-algorithms, quantum-state, mathematics" }
Why does rotation make black holes smaller?
Question: A non-rotating black hole has a Schwarzschild radius of $2GM/c^2$. A rotating black hole of the same mass has a smaller outer horizon radius, down to a limit of $GM/c^2$ at the fastest possible rotation. Why does faster rotation shrink the outer event horizon? I would expect rotation to be a form of energy, which would increase the black hole's effective mass, and I would expect the larger mass to result in a larger event horizon. Clearly this intuition is wrong, so what's a more accurate intuition?

Answer: First, let us note that "horizon radius" (at least when talking about black holes without spherical symmetry) is a coordinate-dependent term, so it is better to use a coordinate-independent measure of size such as "event horizon area". Then we can say that ...

A rotating black hole of the same mass has a smaller area of event horizon.

Next,

Why does faster rotation shrink the outer event horizon?

It is more correct to say that spinning up the black hole increases its mass but does not lead to a corresponding increase in horizon area.

I would expect rotation to be a form of energy, which would increase the black hole's effective mass, ...

This is indeed correct.

... and I would expect the larger mass to result in a larger event horizon.

And this is wrong, because this additional rotational energy is located outside the event horizon, and so does not lead to an increase in horizon area. A useful notion when discussing the rotational energy of a black hole is the irreducible mass, which can be seen as the mass of a nonrotating (and uncharged) black hole with the same area of event horizon.
For more discussion of irreducible mass see this answer, but here let us mention that in an idealized situation of reversible changes in black hole states it is possible to take a nonrotating black hole with mass $M_\text{irr} $ and “spin it up” by transferring to it angular momentum $J$ without increasing its irreducible mass, so that the full mass of this rotating black hole $$ M^{2}=M_\text{irr}^{2}+{\frac {J^{2}c^{2}}{4M_\text{irr}^{2}G^{2}}}.$$ Subsequently (again in an idealized situation) it would be possible to extract all the rotational energy and arrive back at a nonrotating black hole with the same $M_\text{irr} $. Of course, losses would make perfectly reversible processes impossible, so at each stage of (classical) evolution the area of event horizon (and thus irreducible mass $M_\text{irr} $) would be increasing ($\delta M_\text{irr} > 0$) but this increase (at least in principle) could be made quite small.
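As a numerical illustration of this answer (my own sketch, using the standard Kerr horizon formulas $r_+ = GM/c^2 + \sqrt{(GM/c^2)^2 - a^2}$ with $a = J/(Mc)$ and $A = 4\pi(r_+^2 + a^2)$): spinning a hole of fixed mass $M$ up to the extremal limit halves its horizon area, and the irreducible mass comes out as $M/\sqrt{2}$, consistent with the formula above.

```python
import numpy as np

G = 6.674e-11        # m^3 kg^-1 s^-2
c = 2.998e8          # m/s
M = 10 * 1.989e30    # a 10 solar-mass black hole, kg

def horizon_area(M, J):
    # Kerr outer-horizon area A = 4*pi*(r_+^2 + a^2), with a = J/(M c)
    a = J / (M * c)
    rg = G * M / c**2
    r_plus = rg + np.sqrt(max(rg**2 - a**2, 0.0))  # guard against rounding at extremality
    return 4 * np.pi * (r_plus**2 + a**2)

A_schw = horizon_area(M, 0.0)
A_extremal = horizon_area(M, G * M**2 / c)  # maximal spin J = G M^2 / c

print(A_extremal / A_schw)  # 0.5: same mass, half the horizon area

# irreducible mass from the area: M_irr = sqrt(c^4 A / (16 pi G^2))
M_irr = np.sqrt(c**4 * A_extremal / (16 * np.pi * G**2))
print(M_irr / M)            # 1/sqrt(2), about 0.707
```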
{ "domain": "physics.stackexchange", "id": 97399, "tags": "general-relativity, gravity, black-holes, angular-momentum, event-horizon" }
Calculating the ground states of an Ising Hamiltonian on a real quantum computer
Question: I have followed this tutorial and based on it, I've written the following function in qiskit, which can explicitly calculate the ground states of a transverse-field Ising Hamiltonian.

    from qiskit import *
    import numpy as np

    def Hamiltonian(n,h):
        pow_n=2**n
        qc = np.empty(2*n-1, dtype=object)
        #Creating the quantum circuits that are used in the calculation of the Hamiltonian based on the number of qubits
        for i in range(0, 2*n-1): #2n-1 is the number of factors on the n-site Hamiltonian
            qr = QuantumRegister(n)
            qc[i] = QuantumCircuit(qr) #create quantum circuits for each factor of the Hamiltonian
            #print(i)
            if (i<=n-2): #for the first sum of the Hamiltonian
                qc[i].z(i)   #value of current spin
                qc[i].z(i+1) #and value of its neighboring spin
            else: #for the second sum of the Hamiltonian
                qc[i].x(2*n-2-i) #2*n-2 gives the proper index since counting starts at 0

        #Run each circuit in the simulator
        simulator = Aer.get_backend('unitary_simulator')
        result = np.empty(2*n-1, dtype=object)
        unitary = np.empty(2*n-1, dtype=object)
        Hamiltonian_Matrix=0

        #Get the results for each circuit in unitary form
        for i in range(0, 2*n-1):
            result[i] = execute(qc[i], backend=simulator).result()
            unitary[i] = result[i].get_unitary()
            #print(unitary[i])
            #And calculate the Hamiltonian matrix according to the formula
            if (i<=n-2):
                Hamiltonian_Matrix=np.add(Hamiltonian_Matrix,-unitary[i])
            else:
                Hamiltonian_Matrix=np.add(Hamiltonian_Matrix,-h*unitary[i])

        print("The",pow_n,"x",pow_n, "Hamiltonian Matrix is:")
        print(Hamiltonian_Matrix)

        #Now that we have the Hamiltonian
        #find the eigenvalues and eigenvectors
        w, v = np.linalg.eig(Hamiltonian_Matrix)
        print("Eigenvectors")
        print(v)
        print("Eigenvalues")
        print(w)

        minimum=w[0]
        min_spot=0
        for i in range(1, pow_n):
            if w[i]<minimum:
                min_spot=i
                minimum=w[i]
        print(min_spot)

        groundstate = v[:,min_spot]
        #the probability to measure each basic state of n qubits
        probability = np.square(groundstate).real
        print("The probability for each of the",pow_n,"base states is:")
        print(probability)
        print("The probabilities for each of the",pow_n,"base states add up to:")
        print ("%.2f" % np.sum(probability))

My problem with this piece of code I've written is that it can only run on a unitary simulator. To my understanding (which may lack some of the underlying physics), the Hamiltonian itself is not a "purely" quantum calculation, since there are additions to be made which cannot be expressed with a quantum (unitary) gate, and this is why the resulting Hamiltonian matrix is also not unitary. For example, if you run Hamiltonian(3, 1), the Hamiltonian matrix is:

    [[-2.+0.j -1.+0.j -1.+0.j  0.+0.j -1.+0.j  0.+0.j  0.+0.j  0.+0.j]
     [-1.+0.j  0.+0.j  0.+0.j -1.+0.j  0.+0.j -1.+0.j  0.+0.j  0.+0.j]
     [-1.+0.j  0.+0.j  2.+0.j -1.+0.j  0.+0.j  0.+0.j -1.+0.j  0.+0.j]
     [ 0.+0.j -1.+0.j -1.+0.j  0.+0.j  0.+0.j  0.+0.j  0.+0.j -1.+0.j]
     [-1.+0.j  0.+0.j  0.+0.j  0.+0.j  0.+0.j -1.+0.j -1.+0.j  0.+0.j]
     [ 0.+0.j -1.+0.j  0.+0.j  0.+0.j -1.+0.j  2.+0.j  0.+0.j -1.+0.j]
     [ 0.+0.j  0.+0.j -1.+0.j  0.+0.j -1.+0.j  0.+0.j  0.+0.j -1.+0.j]
     [ 0.+0.j  0.+0.j  0.+0.j -1.+0.j  0.+0.j -1.+0.j -1.+0.j -2.+0.j]]

Does this mean that there is no way for this approach to run on a real quantum computer where all you can do is measurements on the qubits? I've seen different approaches online such as QAOA or the use of transformations, but I thought if it's so easy to do it with unitaries and some additions, there should be a way to do it with measurements as well.

Answer: You can definitely run this on a real quantum computer! In your snippet above you mixed circuits and operators. A circuit is only used for the ansatz of your ground state, not for representing the operators. The website you provided talks about the Hamiltonian in terms of the Pauli X and Z matrices; $\hat\sigma^x$ and $\hat\sigma^z$. If you want to compute the ground state of your Hamiltonian you need to use this Pauli representation and not convert them into a matrix. Here's a short example you can generalize to your above case.
Say we have the transverse field Ising chain, but to simplify things, we assume only two sites. Then we can write the Hamiltonian as $$ \hat H = -\hat\sigma^z_0 \otimes \hat\sigma^z_1 - h(\hat\sigma^x_0 + \hat\sigma^x_1). $$ Now we associate each site with a qubit and then we can write the above matrix in a one-to-one correspondence in Qiskit (I'm using Qiskit 0.25.0):

    # opflow is Qiskit's module for creating operators like yours
    from qiskit.opflow import Z, X, I  # Pauli Z, X matrices and identity

    h = 0.25  # or whatever value you have for h
    H = -(Z ^ Z) - h * ((X ^ I) + (I ^ X))

The ^ in Qiskit means we're using a tensor product. Also note that I've expanded the notation $\hat\sigma^x_0$ to X ^ I since we act on two qubits and by $\hat\sigma^x_0$ we implicitly mean that nothing happens to the second qubit. Now you can go ahead and use this representation of the Hamiltonian to run on a real quantum computer. If you want to compute the ground state using Qiskit, you can use the VQE class. To run that you also need to select an ansatz and an optimizer, but that's very easy in Qiskit. For instance

    from qiskit.providers.aer import QasmSimulator
    from qiskit.algorithms import VQE
    from qiskit.algorithms.optimizers import COBYLA
    from qiskit.circuit.library import EfficientSU2

    # you can swap this for a real quantum device and keep the rest of the code the same!
    backend = QasmSimulator()

    # COBYLA usually works well for small problems like this one
    optimizer = COBYLA(maxiter=200)

    # EfficientSU2 is a standard heuristic chemistry ansatz from Qiskit's circuit library
    ansatz = EfficientSU2(2, reps=3)

    # set the algorithm
    vqe = VQE(ansatz, optimizer, quantum_instance=backend)

    # run it with the Hamiltonian we defined above
    result = vqe.compute_minimum_eigenvalue(H)

    # print the result (it contains lots of information)
    print(result)

This will simulate the quantum computer but you can run exactly the same piece of code on real hardware by changing the backend.
On a real device, each of the Hamiltonian summands, $\hat\sigma^z_0 \otimes \hat\sigma^z_1$, $\hat\sigma^x_0$ and $\hat\sigma^x_1$, will in general be measured individually. That is because we can only measure the expectation value of an operator that is diagonal in the computational basis (in the Z-basis). Thus, we need to apply basis transformations to the terms that are not already diagonal (in this case the Pauli-X terms). If you want to know more about this I would suggest you have a look at the Qiskit textbook or this stackoverflow question.
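The basis-change trick in that last paragraph can be sketched concretely (my own illustration, not part of the original answer): since $X = HZH$, the $X$ expectation value of a state equals the $Z$ expectation value of the Hadamard-rotated state, which is exactly what a device estimates from Z-basis measurement counts.

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
X = np.array([[0.0, 1.0], [1.0, 0.0]])
Z = np.diag([1.0, -1.0])

# X is diagonalised by the Hadamard: X = H Z H
assert np.allclose(X, H @ Z @ H)

# so <psi|X|psi> equals the Z expectation of the rotated state H|psi>
rng = np.random.default_rng(0)
psi = rng.normal(size=2) + 1j * rng.normal(size=2)
psi /= np.linalg.norm(psi)

direct = np.real(psi.conj() @ X @ psi)
probs = np.abs(H @ psi) ** 2            # Z-basis measurement probabilities after H
from_measurement = probs[0] - probs[1]  # (+1)*P(0) + (-1)*P(1)
print(np.isclose(direct, from_measurement))  # True
```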
{ "domain": "quantumcomputing.stackexchange", "id": 2661, "tags": "programming, qiskit, hamiltonian-simulation, qaoa" }
Electrostatic adhesion instead of glue. Is it possible?
Question: I am thinking about the way to attach the printed photographs to the wall but not using the frame. And the most interesting idea for me is the use of electrostatics. In addition I have found the following video that shows this process in action: http://youtu.be/r0u85k5EpcE Also I think that something like this is used in some kinds of wall-climbing robots. So does somebody know how it works? I suppose that the materials of the objects are very important here. Probably it works because the wall and photo are made of paper, and plastic between them gets charged and consequently attracts paper. What other materials or combinations of materials can provide this effect? And how is the electrostatic charge transferred to the material (plastic foil and paper)? I know only about objects gaining electrostatic charge after rubbing them. But in the stick from this video it must be done by some simple electric circuit...

Answer: I've never done the experiment with the plastic film, but as a child I remember that if I rubbed a balloon on my pullover it would stick to the wall. The reason is the one you suggest. When you rub the balloon on your pullover you charge it. When you bring the balloon close to the wall the charge on the balloon polarises the surface of the wall, i.e. it repels like charges and attracts unlike charges, so the surface of the wall acquires a charge opposite to the balloon. The opposite charges on the balloon and the surface of the wall attract, so the balloon sticks to the wall. The problem with using this to fix pictures to the wall is that the charge gradually leaks away so the balloon/picture will fall off. My recollection is that the balloon would stay stuck to the wall for about an hour. As you say, objects may be charged by rubbing them and this is called triboelectricity. Alternatively you can charge objects by touching them with something else that is charged.
Because like charges repel, if you touch a charged object to an uncharged object there will be a tendency for the charge to spread over both objects. To what extent this happens depends on the conductivity of the objects. I imagine the plastic film is a poor conductor and this is why you have to rub the Fun-Fly-Stick all over it. I had never heard of the Fun-Fly-Stick but a quick Google suggests it's a type of Van de Graaff generator.
{ "domain": "physics.stackexchange", "id": 6198, "tags": "electrostatics" }
Accelerating at $\rm 1,000,000 \, m/s^2$ to $\rm 1 \, m/s$?
Question: What would it look like if someone was accelerated at $\rm 1,000,000 \, m/s^2$ to $\rm 1 \, m/s$? Would they die? Would their body stay intact? My guess is that the answer might be more interesting if we make the assumption that not all parts of the body are accelerated uniformly as you might find in a freefall situation? It's entirely possible that the answer to this question is highly trivial, but when I asked my physics-oriented friends, they all seemed to disagree with each other. Some said that it would be similar in effect to someone dying after smacking into the ground after a long fall, while others claimed that in fact the situation was different from this in subtle ways. Thoughts on any of this would be greatly appreciated!

Answer: Probably you wouldn't even notice. The thing is that if all points of a body are equally accelerated then all points of that body will retain their relative positions and therefore it will keep its shape and integrity. The problem starts when you have differential accelerations, i.e., different parts of the body are moving differently. For example: let's consider a guy with his back in contact with a wall. Suddenly, the wall moves, pushing him forward with an arbitrary acceleration for some time. Since the wall is in contact with his back, the back will start moving first and only after a certain time will the chest follow. This will cause different displacements of his back and chest. Let's imagine that after the acceleration his back has been displaced by 40 cm and his chest by 30 cm. The result will be the same as being crushed by a car crusher for 10 cm. The reverse of this happens to someone falling from a height. The part that makes the first contact with the ground stops but the opposite side continues moving, crushing him.
Therefore, just as in a car crusher, the key here is displacement. The displacement caused by a constant acceleration (force) is $$\rm d= \frac{1}{2}at^2 \tag{1}$$ and the time elapsed during acceleration is $$ \rm t=v/a \tag{2}$$ meaning that, knowing the final speed $\rm v$, the displacement caused by a constant force is $$ \rm d=\frac{v^2}{2a} \tag{3}$$ which in your situation gives $$ \rm d=\frac{1^2 \,m^2/s^2}{2\cdot10^6 \,m/s^2}=0.5\cdot10^{-6}\,m=0.5 \,µm$$ In other words, you will end up with a half-micrometer-amplitude shock wave bouncing in your body and dissipating. A cool result from equation (3) is that, contrary to the expected result, if you keep the same end speed $\rm v$, the higher the acceleration, the more harmless it gets, to the point that an infinite acceleration would cause no harm at all. This happens because displacement is quadratic with time and only linear with the acceleration. To keep the same end velocity, with higher and higher accelerations, one would need a quadratically smaller time, resulting in smaller displacements.
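Equation (3) is easy to play with numerically (a sketch of my own, not from the original answer); note how raising the acceleration while holding the end speed fixed shrinks the displacement:

```python
def stopping_displacement(v, a):
    # d = v^2 / (2 a): displacement accumulated while reaching speed v
    # under a constant acceleration a, as in equation (3)
    return v ** 2 / (2 * a)

print(stopping_displacement(1.0, 1e6))  # 5e-07 m = 0.5 micrometres, as in the answer
print(stopping_displacement(1.0, 1e9))  # a thousand times the acceleration, a thousandth the displacement
```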
{ "domain": "physics.stackexchange", "id": 44594, "tags": "newtonian-mechanics, kinematics, acceleration, velocity, biology" }
How to use gazebo plugins found in gazebo_ros [ROS2 Foxy gazebo11]
Question: I am trying to use the gazebo_ros_state plugin found inside the gazebo_ros folder in gazebo_ros_pkgs. I have included this inside my world file similar to the demo also found in gazebo_ros folder...

    <plugin name="gazebo_ros_state" filename="libgazebo_ros_state.so">
      <ros>
        <namespace>/demo</namespace>
        <argument>model_states:=model_states_demo</argument>
      </ros>
      <update_rate>1.0</update_rate>
    </plugin>

I also tried to see if the migration guide found here (https://github.com/ros-simulation/gazebo_ros_pkgs/wiki/ROS-2-Migration:-Entity-states and https://github.com/ros-simulation/gazebo_ros_pkgs/wiki/ROS-2-Migration:-gazebo_ros_api_plugin) gave any info on it however nothing there seemed to offer any help. This is the error I am getting when I try and start my world...

    [Err] [Model.cc:1097] Model[cartpole] is attempting to load a plugin, but detected an incorrect plugin type. Plugin filename[libgazebo_ros_state.so] name[gazebo_ros_state]

libgazebo_ros_state.so is found inside my lib folder just like my other plugins. One more thing to note is that I have other plugins found in the gazebo_plugin folder that work and do not give an error like this...

Originally posted by Dawson on ROS Answers with karma: 11 on 2020-07-11

Post score: 1

Original comments

Comment by BorgesJVT on 2020-09-15: I am having the same problem. Any updates here?

Comment by guru_florida on 2020-09-27: I took the gazebo_ros_state_demo.world straight from the repo. Strangely, I get the same error but regardless it seems to work and I get the two service endpoints /demo/get_entity_state and /demo/set_entity_state and they do correctly return model data.

Comment by sherlockshome221 on 2021-06-24: I have one observation. The gazebo_ros_state plugin is registered as a WORLD_PLUGIN type here. The error in the OP is coming from model.cc which is looking for MODEL_PLUGIN type here.
So, it seems the problem here is, the addition of the plugin in the sdf file is not having the desired effect of specifying a WORLD_PLUGIN. I wonder if the <plugin> tag added in the OP is under the <model> or under <world>. Not sure if that's a fix to the problem. But sharing a thought as I am trying to fix the same too.

Answer: Following up on my comment above: The gazebo_ros_state plugin is registered as a WORLD_PLUGIN type here. The error in the OP is coming from model.cc which is looking for MODEL_PLUGIN type here. So, the problem here is, the addition of the plugin in the sdf file is not having the desired effect of specifying a WORLD_PLUGIN. Adding the <plugin> under the <world> tag directly like below works without issues:

    <world name="default">
      <plugin name="gazebo_ros_state" filename="libgazebo_ros_state.so">
        <ros>
          <namespace>/demo</namespace>
          <argument>model_states:=model_states_demo</argument>
        </ros>
        <update_rate>1.0</update_rate>
      </plugin>
    </world>

Starting this world with gazebo my_world_file.world --verbose and then doing a ros2 service list | grep entity gives:

    /demo/get_entity_state
    /demo/set_entity_state

Originally posted by sherlockshome221 with karma: 46 on 2021-06-24

This answer was ACCEPTED on the original site

Post score: 2

Original comments

Comment by sherlockshome221 on 2021-06-24: Related answer here
{ "domain": "robotics.stackexchange", "id": 35261, "tags": "ros, gazebo, ros2, gazebo-plugins, gazebo-ros" }
Gazebo skid steer drive and navigation stack
Question: I want to use the gazebo skid steering drive plugin for my 4-wheeled robot and then use the navigation stack like in the ROS Tutorials. The question is: would it work, or do I need something else (maybe another controller)? Now I've created the urdf of my robot with the skid steering drive controller. I see that it publishes to odom and tf, and it seems to be ground truth, but I'm not sure about that because I didn't find a lot of information. Also, I want to see the code of the skid steering plugin, but I found only an old version =(

Originally posted by denzist on ROS Answers with karma: 11 on 2014-04-28

Post score: 1

Answer: I've solved it following this tutorial: http://gazebosim.org/wiki/Tutorials/1.9/ROS_Control_with_Gazebo

Originally posted by Dani C with karma: 126 on 2014-04-29

This answer was ACCEPTED on the original site

Post score: 1
{ "domain": "robotics.stackexchange", "id": 17810, "tags": "gazebo, navigation" }
excel-vba function to filter an array and return an array
Question: I have a function to filter a 2-d array. It works. But I am not sure if the following mechanism is a sensible concept. The idea is:

- loop over the input array which is received as input_arr(y,x)
- store the values in temp_arr(x,y), since in order to resize the y-dimension it needs to be the last dimension, so I have to transpose the values as I store them
- once the loop is done, transpose temp_arr(x,y) which becomes the return value

    Public Function filter_arr(input_arr As Variant, col_index As Long, filter_value As Variant) As Variant
        'the input_arr might be indexed starting from 0 or 1
        Dim n1 As Long, n2 As Long
        n1 = LBound(input_arr, 2)
        n2 = UBound(input_arr, 2)
        Dim temp_arr() As Variant
        Dim y As Long, x As Long, count As Long
        count = 0
        If (LBound(input_arr, 1) = 0) Then
            For y = LBound(input_arr, 1) To UBound(input_arr, 1)
                If (input_arr(y, col_index) = filter_value) Then
                    ReDim Preserve temp_arr(n1 To n2, 0 To count)
                    For x = n1 To n2
                        temp_arr(x, count) = input_arr(y, x)
                    Next x
                    count = count + 1
                End If
            Next y
        Else 'if LBound(input_arr, 1) = 1
            For y = LBound(input_arr, 1) To UBound(input_arr, 1)
                If (input_arr(y, col_index) = filter_value) Then
                    count = count + 1
                    ReDim Preserve temp_arr(n1 To n2, 1 To count)
                    For x = n1 To n2
                        temp_arr(x, count) = input_arr(y, x)
                    Next x
                End If
            Next y
        End If
        filter_arr = Application.Transpose(temp_arr)
    End Function

Edit: adding some context

Imagine that I have a ws in an .xlam file that contains product information which I've added as an add-in. I have a function like =getProductData() which returns an array containing that data.

So excel has a SORT function which works great with this kind of thing.
I can do =SORT(getProductData(), 2, 1) to return my product data sorted by the 2nd column.

But excel's FILTER function needs me to specify the data itself as its second parameter, not just its index, like in SORT. I can use INDEX to do something like

    =SORT(FILTER(getProductData(),INDEX(getProductData(),,3)="red",""),2,1)

to filter my product data by the 3rd column if the value = red, then sort by the 2nd column of the resulting array. But I was hoping to do something like

    =SORT(filter_arr(getProductData(),3,"red"),2,1)

to do the same.

Naturally I'm still just playing around to determine what I can pull off - maybe using INDEX is the way to go and I'm trying to reinvent the wheel

Answer: ReDim Preserve and Transpose are not free. I use this pattern when I need to filter an array:

- Declare a 1-D array (Indices) with the same number of rows as the 2D array
- When a match is found add the match's index to the Indices array and increment the counter
- If there is matching data, declare a 2-D array (Results) with the same dimensions of the Input array but a row count equal to the counter
- Iterate over the indices in the Indices array and add the matching values to the Results
- Return the Results
- If there are no matches return an empty array

Quick Filter

    Public Function QuickFilter(input_arr As Variant, col_index As Long, filter_value As Variant) As Variant
        Dim r As Long
        Dim Count As Long
        Dim Indices As Variant
        ReDim Indices(LBound(input_arr, 1) To UBound(input_arr, 1))

        Count = LBound(input_arr, 1) - 1
        For r = LBound(input_arr, 1) To UBound(input_arr, 1)
            If input_arr(r, col_index) = filter_value Then
                Count = Count + 1
                Indices(Count) = r
            End If
        Next

        Dim Results As Variant
        Dim c As Long
        Dim index As Long
        If Count >= LBound(input_arr, 1) Then
            ' we want results to hold the entire row
            ReDim Results(LBound(input_arr, 1) To Count, LBound(input_arr, 2) To UBound(input_arr, 2))
            For r = LBound(input_arr, 1) To Count
                index = Indices(r)
                For c = LBound(input_arr, 2) To UBound(input_arr, 2)
                    Results(r, c) = input_arr(index, c)
                Next
            Next
            QuickFilter = Results
        Else
            QuickFilter = Array()
        End If
    End Function
{ "domain": "codereview.stackexchange", "id": 40466, "tags": "vba, excel" }
Superposition of waves and energy transferred
Question: A wave transfers energy. I was reading about the superposition of two waves, where the amplitudes of the two waves added up to produce a resultant wave. So I began wondering about the energy that is now being transferred in this resultant wave. Is it the sum of the energy being transferred by both waves? Please help me understand.

Answer: When two waves, propagating in a linear medium, interfere with each other, the amplitudes of individual points within the region of interference could add or subtract, but this does not affect the flow of energy. We can show it in a simple example below: Point A is in the region of interference and its amplitude will be affected by both waves. Point B is beyond the region of interference and should not be affected by wave S2. This is because the amplitude at B is defined by a superposition of the two waves, i.e., it has to be the sum of S1 and S2 at point B. Since the amplitude of S2 at B is zero (or negligible), the amplitude at B is affected by S1 only. The same could be said about all points of wave S1 beyond the region of interference. If so, we have to conclude that S2 would not affect S1 beyond the region of interference and therefore will not change the flow of energy of S1. We could come to a similar conclusion, if we took into account that the waves don't get reflected while propagating in a uniform linear medium, which means that no energy is coming back and, therefore, it should continue moving forward, unaffected by other waves in that medium. The sound wave moving in the air will be reflected by a wall, but not by another sound wave.
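A quick numerical check of this conclusion (my own sketch, using the 1-D linear wave equation, whose total energy is $\int \tfrac{1}{2}(u_t^2 + c^2 u_x^2)\,dx$): two counter-propagating Gaussian pulses pass through each other, and the total energy is the same before, during, and after the overlap.

```python
import numpy as np

c = 1.0
x = np.linspace(-20, 20, 4001)
dx = x[1] - x[0]

def dgauss(xi):
    # derivative of a Gaussian pulse exp(-xi^2)
    return -2 * xi * np.exp(-xi**2)

def total_energy(t):
    # u(x,t) = f(x - c t + 5) + g(x + c t - 5): two pulses heading toward each other
    fp = dgauss(x - c*t + 5.0)   # f'(x - ct)
    gp = dgauss(x + c*t - 5.0)   # g'(x + ct)
    u_t = c * (-fp + gp)
    u_x = fp + gp
    return np.sum(0.5 * (u_t**2 + c**2 * u_x**2)) * dx

energies = [total_energy(t) for t in (0.0, 5.0, 10.0)]  # t = 5 is maximal overlap
print(energies)  # all three values agree: interference does not destroy energy
```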
{ "domain": "physics.stackexchange", "id": 48056, "tags": "energy, waves, superposition" }
Password protected Joomla administrator folder with Python
Question: I am trying to make a basic auth password protected folder for Joomla. This script will simply ask for username and password to secure administrator folder in Joomla and creates .htaccess and .pass_file. I want you to review it and offer suggestions so that I can do it more efficiently and learn to manage code.

    #!/usr/bin/env python
    import os
    import pwd
    import sys

    def ok():
        os.chdir("/home/" + username);
        os.system('htpasswd -c .pass_file '+ user);
        os.chdir("/home/" + username + "/public_html/administrator/");
        os.system('echo AuthUserFile /home/' + username + '/.pass_file > .htaccess');
        os.system('echo AuthName Restricted >> .htaccess');
        os.system('echo AuthType Basic >> .htaccess');
        os.system('echo require valid-user >> .htaccess');
        print "=" *40;
        quit();

    username = raw_input("Enter your cpanel username: ");
    try:
        pwd.getpwnam(username);
    except KeyError:
        print "User not found.";
        quit();

    print "=" * 40;
    user = raw_input("Enter the username you want to create for authentication: ");
    print "=" * 40;
    print "Your username will be "+ user +" for authentication";
    print "=" * 40;

    filename="/home/"+username+"/public_html/administrator/.htaccess";
    if os.path.isfile(filename):
        yes = set(['yes','y', 'ye', ''])
        no = set(['no','n'])
        response = raw_input("Filename exists. Overwrite??");
        ans = response.lower();
        if ans in no:
            print "Sorry we didn't secure the administrator folder. :(";
            quit();
        elif ans in yes:
            ok();
        else:
            sys.stdout.write("Please respond with 'yes' or 'no'. Quiting...");
            quit();

Answer: My suggestions would be: Rewrite ok() using python I/O rather than running echo in a subprocess. We're passing in arguments because they are no longer going to be global variables after some of the edits below.
    def ok(passfile, htaccess, user):
        os.system("htpasswd -c %s %s" % (passfile, user))
        with open(htaccess, 'w') as f:
            f.write('AuthUserFile %s\n' % (passfile))
            f.write('AuthName Restricted\n')
            f.write('AuthType Basic\n')
            f.write('require valid-user\n')
        print "=" *40
        quit()

Consider using the python module https://pypi.python.org/pypi/htpasswd instead of calling htpasswd in a subprocess. For example:

    import htpasswd

    with htpasswd.Basic("/path/to/user.db") as userdb:
        try:
            userdb.add("bob", "password")
        except htpasswd.basic.UserExists, e:
            print e

Take all the top level lines outside any function and put them in a main() routine at the top of the file (after the imports). Like so (I'm also doing some tweaks here to make things more portable and pythonic):

    #!/usr/bin/env python
    import os
    import pwd
    import re
    import sys

    def main(args):
        username = raw_input("Enter your cpanel username: ");
        try:
            pwd.getpwnam(username);
        except KeyError:
            print "User not found.";
            quit();
        print "=" * 40;
        user = raw_input("Enter the username you want to create for authentication: ");
        print "=" * 40;
        print "Your username will be "+ user +" for authentication";
        print "=" * 40;

        # Note we're building up our file paths without ever using '/' or
        # '\'. By letting python build our paths for us, they'll be portable
        homedir = os.getenv("HOME")
        passfile = os.path.join(homedir, ".pass_file")
        htaccess = os.path.join(homedir, "public_html", "administrator", ".htaccess")

        # If htaccess already exists, we ask user whether to overwrite.
        # If the user says "no", we say sorry and call quit().
        # If the user says "yes", we break out of the loop and call ok().
        # Otherwise, we keep telling the user to answer 'yes' or 'no'
        # until they do.
        #
        # If htaccess does not exist, I think we want to create it. The
        # original code did not address this case.
        if os.path.isfile(htaccess):
            while True:
                ans = raw_input("File %s exists. Overwrite??" % htaccess);
                # re.findall() will return a list of regexp matches. If there
                # are no matches, the list is empty and the if evaluates to
                # False. Note the regexp handles mixed case and abbreviations.
                if re.findall('[Nn][Oo]?', ans):
                    print "Sorry we didn't secure the administrator folder. :(";
                    quit();
                elif re.findall('[Yy]?[Ee]?[Ss]?', ans):
                    break
                else:
                    sys.stdout.write("Please respond with 'yes' or 'no'.");
        ok(passfile, htaccess, user)
        quit();

Then at the bottom of the file,

    if __name__ == '__main__':
        main(sys.argv)

By using a main() routine and calling it from the bottom of the file if __name__ contains the string '__main__', you can import this file into other files and reuse the code by calling the functions. With lines at the top level of the file, outside functions, those lines would get run if you ever imported the file. Following this convention consistently makes debugging and code reuse much easier.
{ "domain": "codereview.stackexchange", "id": 7414, "tags": "python, beginner, authentication" }
What is a functional group?
Question: I know that a functional group gives a definitive property to an organic compound. But my textbook claims carboxylic acid is a functional group. Isn't carboxyl the functional group, with the formula -COOH? Is the textbook wrong, or am I missing something? And similarly, isn't hydroxyl the functional group of alcohols, rather than alcohol being the functional group? Could you please clear this up? Answer: Your book's statement that "carboxylic acid is a functional group" doesn't seem quite right to me. I think it's better to say that a carboxylic acid is an organic molecule containing a carboxyl functional group. Similarly, I wouldn't say that an alcohol is a functional group, but that an organic molecule containing a hydroxyl group is an alcohol. Of course you can have a single large molecule with multiple functional groups that could be called, at once, for example, an alcohol, a ketone and a carboxylic acid, because it contains a hydroxyl functionality, a carbonyl functionality and a carboxyl functionality. I hope this addresses what you were asking; don't hesitate to ask if I missed something.
{ "domain": "chemistry.stackexchange", "id": 7677, "tags": "organic-chemistry" }
Balanced Max-2-SAT NP-Hardness
Question: The Balanced Max-2-SAT is a special case of Max-2-SAT (each clause is a disjunction of exactly 2 literals) in which for every variable $x$, there is a $k$ such that $x$ appears positive exactly $k$ times in the clauses, and negative exactly $k$ times in the clauses. I'm looking for a published proof of NP-hardness for this problem, for references. It is implied NP-hard by [this paper], but I can't find the original NP-hardness proof. Is it a consequence of the hardness of Max-2-SAT? Also, as a bonus, Balanced Max-2-SAT($k$) is the subcase in which every variable appears exactly $k$ times positive and exactly $k$ times negative. Is there some known $k$ for which NP-hardness is published? I know that Balanced Max-3-SAT(2) is hard, but I found nothing for the $2$ case. Answer: I think it is possible to reduce Max Cut to this problem: given a graph $G(V,E)$, make a variable for each vertex, and for each edge $(u,v)\in E$ make the two clauses $(x_u\lor x_v)$ and $(\neg x_u\lor\neg x_v)$. In the resulting instance, $x_u$ appears $d(x_u)$ times positive and the same number of times negative. Since Max Cut is still NP-hard on regular graphs, this proves that the version for which $k$ is the same for all variables is also hard.
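The reduction in this answer can be sanity-checked by brute force on a small graph. The sketch below is mine, not from the original post, and the function names are invented for illustration. Each edge contributes two clauses, and an assignment satisfies exactly |E| + cut(assignment) of them (both clauses of an edge when its endpoints differ, one clause otherwise), so maximising satisfied clauses is the same as maximising the cut.

```python
from itertools import product

def maxcut_to_balanced_max2sat(edges):
    # A literal (v, True) means x_v; (v, False) means NOT x_v.
    # Each edge (u, v) yields (x_u or x_v) and (not x_u or not x_v),
    # so every vertex of degree d appears d times positive, d times negative.
    clauses = []
    for u, v in edges:
        clauses.append(((u, True), (v, True)))
        clauses.append(((u, False), (v, False)))
    return clauses

def num_satisfied(clauses, assign):
    # A clause is satisfied if at least one of its literals is.
    return sum(any(assign[v] == sign for v, sign in cl) for cl in clauses)

# 4-cycle: |E| = 4 and the max cut is 4, so the optimum satisfies 4 + 4 = 8.
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
clauses = maxcut_to_balanced_max2sat(edges)
best = max(num_satisfied(clauses, dict(enumerate(bits)))
           for bits in product([False, True], repeat=4))
print(best)  # 8
```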
{ "domain": "cstheory.stackexchange", "id": 3633, "tags": "np-hardness, sat, max2sat" }
Why do horse rides in a merry-go-round swing outward?
Question: In circular motion, like the horse rides in a merry-go-round, there is only one force acting on the ride: the centripetal force that pulls it inward. I'm struggling to explain why the horse rides swing outward instead. Answer: The only REAL force is the centripetal force. A merry-go-round is a non-inertial frame of reference, and there are inertial forces present in it too: the centrifugal force, the Euler force and the Coriolis force. The first one, for example, from the point of view of an observer sitting in the middle of the merry-go-round, pushes the object (the horse in this case) away from the center. As the ride starts, the Euler force will be the apparent force pushing the person to the back of the horse, and as the ride comes to a stop, it will be the apparent force pushing the person towards the front of the horse. You can read about these forces and the Coriolis force, which is a major player in the Earth's climate and weather changes, in the provided links or in analytical mechanics books like "Classical Dynamics of Particles and Systems" by Thornton and Marion, or Fowles' Analytical Mechanics.
{ "domain": "physics.stackexchange", "id": 38867, "tags": "newtonian-mechanics, reference-frames, centrifugal-force" }
General Subscriber's Callback in C++
Question: Hi all, Is there any way to make one general callback in C++ that can be executed for every subscriber? I mean, every callback needs the type of the message, but I want one callback that is generally and dynamically executable for all subscribers. I can do this in Python but need to do it in C++. Has someone done this? Thanks in advance, regards, safdar Originally posted by safzam on ROS Answers with karma: 111 on 2011-09-20 Post score: 1 Original comments Comment by safzam on 2011-09-20: I mean, for example, I have two different nodes publishing: one publishes an int type message on topic1 and the other publishes on topic2. Now I need to make two subscribers, s1 for topic1 and s2 for topic2. But I want to have one callback for both s1 and s2. Is it possible in C++? regards Comment by tfoote on 2011-09-20: Can you be a little clearer about what you mean by a generally and dynamically executable callback? The roscpp implementation is designed to take advantage of C++ types and requires you to know the type of the function to call back. Maybe your use case or what you are doing in python would help. Answer: If both topics publish the same message type, you can definitely use the same callback for both subscriptions, although you won't be able to tell which message came from which topic.
Otherwise, you could always delegate the work to a third function, like this:

void processInt (int a)
{
  // Lots of computation here
}

void callback1 (MyMessageConstPtr msg)
{
  processInt (msg->int_value);
}

void callback2 (MyOtherMessageConstPtr msg)
{
  processInt (msg->other_int_value);
}

int main (int argc, char** argv)
{
  ros::init (argc, argv, "my_node");
  ros::NodeHandle node;
  ros::Subscriber s1 = node.subscribe<MyMessage>("topic1", 1, callback1);
  ros::Subscriber s2 = node.subscribe<MyOtherMessage>("topic2", 1, callback2);
  ros::spin();
  return 0;
}

Originally posted by roehling with karma: 1951 on 2011-09-21 This answer was ACCEPTED on the original site Post score: 1 Original comments Comment by Lorenz on 2011-09-21: In C++ you need to know the data types when writing the program if you want to work with the data in the messages. Comment by safzam on 2011-09-21: Hi, I have done it the same way before, but it needs as many functions as we have different data types, which I don't know in advance. So I want to avoid having so many functions. Is there another way?
{ "domain": "robotics.stackexchange", "id": 6730, "tags": "ros, callbacks" }
Custom matrix object in C++
Question: I have been getting back into C++ today, after a few years of Python and R. I am completely rusty, but decided to create a matrix object to re-familiarise. In reality I wouldn't use this class, since I know Boost has matrix objects, but it's good practice! It can be compiled with g++ -std=c++11 matrix.cpp -o m. Any thoughts or comments are most welcome.

#include <iostream>
#include <vector>

class matrix {
    // Class: matrix
    //
    // Definition:
    //     creates a 2d-matrix object as a vector of vectors and
    //     populates it with (int) zeros.
    //
    // Methods: get_value, assign_value, get_row.

    // create vector of vectors
    std::vector<std::vector<int> > m;

public:
    // constructor for class, matrix dimension (rows=X, cols=Y).
    matrix(int X, int Y) {
        m.resize(X, std::vector<int>(Y, 0));
    }

    class row {
        // class for matrix row object. Pass in a
        // vector and overload `[]`.
        std::vector<int> _row;
    public:
        // constructor
        row(std::vector<int> r) : _row(r) { }

        // overload `[]` to return y element.
        // note `.at()` does a range check and will throw an error
        // if out of range
        int operator[](int y) {
            return _row.at(y);
        }
    };

    // overload [] to return x element
    row operator[](int x) {
        return row(m.at(x));
    }

    int get_value(int x, int y) {
        // Function: get_value
        // Definition: returns value `v` of element
        //     `xy` in matrix `M`.
        return m[x][y];
    }

    void assign_value(int x, int y, int v) {
        // Function: assign_value
        // Definition: assigns value `v` to element
        //     `xy` in matrix.
        m[x][y] = v;
    }

    std::vector<int> get_row(int y, int X) {
        // Function: get_row
        // Definition: returns a vector object with the row
        //     of x-values of length X at y.
        std::vector<int> ROW;
        for (int i=y; i<=y; i++) {
            for (int j=0; j<=X-1; j++) {
                ROW.push_back(m[i][j]);
            }
        }
        return ROW;
    }
};

int main() {
    // specify matrix dimensions
    int N = 10; // number of rows
    int M = 10; // number of cols

    // create a matrix object
    matrix mm(N, M);

    // print elements
    int i, j;
    for (i=0; i<=N-1; i++) {
        for (j=0; j<=M-1; j++) {
            std::cout << mm[i][j];
        }
        std::cout << std::endl;
    }

    // grab a value and print it to console
    std::cout << mm.get_value(0,0) << std::endl;

    // assign a new value (v = 1) to element (0,0)
    mm.assign_value(0,0,1);

    // re-print the updated matrix
    for (i=0; i<=N-1; i++) {
        for (j=0; j<=M-1; j++) {
            std::cout << mm[i][j];
        }
        std::cout << std::endl;
    }

    // `get_row()` test
    std::vector<int> R = mm.get_row(0, M);
    for (int i : R) {
        std::cout << i << ' ';
    }
}

Answer: That's some nicely presented code. I found it very easy to read and understand. A vector of rows isn't the best structure for a matrix. The reason is that each vector has its storage elsewhere, so you lose locality of access. A better structure is a flat array (or vector) of elements, and a knowledge of the stride from one row to the next. (We can make the stride be the same as the row length, for simplicity; separate members for width and stride can be useful in more advanced scenarios.)

std::vector<int> m;
std::size_t width;

public:
    // Constraint: x * y must not overflow size_t
    matrix(std::size_t x, std::size_t y)
        : m(x*y, 0), width{x}
    { }

I've made the dimensions be size_t, as that's the natural type for a size or count in C++. Now, when we need to index into the array, we need to multiply the y value by width and add x:

int get_value(std::size_t x, std::size_t y)
{
    return m[x + y*width];
}

We can improve on this by returning a reference to the value. Instead of having a "get" and "set" method, we have a single method (for now), and we can give it a more convenient name:

int& operator()(std::size_t x, std::size_t y)
{
    return m[x + y*width];
}

This means that instead of having to write

mm.assign_value(0,0,1);

we can instead use the more intuitive

mm(0,0) = 1;

Now it's time to admit to a slight lie above. We actually need two methods, because if we have a const matrix, we should be allowed to read, but not write, its elements. So we also need:

const int& operator()(std::size_t x, std::size_t y) const
{
    return m[x + y*width];
}

For printing, it's helpful to provide an operator<<(). Mine would look like this:

friend auto& operator<<(std::ostream& os, const matrix& m)
{
    for (std::size_t row = 0; row < m.height; ++row) {
        for (std::size_t col = 0; col < m.width; ++col) {
            os << m.m[col + row*m.width] << ' ';
        }
        os << '\n';
    }
    return os;
}

I added a height member to make this easier. With these changes, see how much easier it is to use:

#include <iostream>

int main()
{
    // create a matrix object
    matrix mm(4,6);

    // print elements
    std::cout << mm;

    // grab a value and print it to console
    std::cout << mm(0,0) << std::endl;

    // assign a new value (v = 1) to element (0,0)
    mm(0,0) = 1;

    // re-print the updated matrix
    std::cout << mm;
}

Here's the full version of matrix after my edits:

#include <ostream>
#include <vector>

class matrix
{
    std::vector<int> m;
    std::size_t width;
    std::size_t height;

public:
    matrix(std::size_t x, std::size_t y)
        : m(x*y, 0), width{x}, height{y}
    { }

    int& operator()(std::size_t x, std::size_t y)
    {
        return m[x + y*width];
    }

    const int& operator()(std::size_t x, std::size_t y) const
    {
        return m[x + y*width];
    }

    friend auto& operator<<(std::ostream& os, const matrix& m)
    {
        for (std::size_t row = 0; row < m.height; ++row) {
            for (std::size_t col = 0; col < m.width; ++col) {
                os << m.m[col + row*m.width] << ' ';
            }
            os << '\n';
        }
        return os;
    }
};

Further exercises

If you actually want a public get_row() (and get_column()), these will need new implementations, perhaps copying values.
Think about providing a get_subarray(x, y, width, height) to give a view of part of the matrix - you'll need new members for offset and stride. See how we can now more easily implement get_row() and get_column() using this new method.
Make the matrix a template, so we can have elements of whatever type we choose, rather than only int.
{ "domain": "codereview.stackexchange", "id": 32044, "tags": "c++, matrix" }
What makes a machine learning algorithm a low variance one or a high variance one?
Question: Some examples of low-variance machine learning algorithms include linear regression, linear discriminant analysis, and logistic regression. Examples of high-variance machine learning algorithms include decision trees, k-nearest neighbors, and support vector machines. Source: What makes a machine learning algorithm a low variance one or a high variance one? For example, why do decision trees, k-NNs and SVMs have high variance? Answer: What this is talking about is how good a machine learning algorithm is at "memorizing" the data. Decision trees, by their nature, tend to overfit very easily, because they can separate the space along very non-linear curves, especially if you get a very deep tree. Simpler algorithms, on the other hand, tend to separate the space along linear hypersurfaces, and therefore tend to under-fit the data and may not give very good predictions, but may behave better on new unseen data which is very different from the training data.
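The variance gap the answer describes can be seen numerically with a toy experiment (my own sketch, not from the original post): train a 1-nearest-neighbour regressor (a high-variance learner) and an ordinary least-squares line (a low-variance learner) on many resampled training sets, and compare how much their predictions at one fixed point fluctuate across resamples.

```python
import random
import statistics

random.seed(0)

def make_dataset(n=40):
    # y = 2x + noise; a simple 1-D regression task.
    xs = [random.uniform(0, 1) for _ in range(n)]
    return [(x, 2 * x + random.gauss(0, 0.5)) for x in xs]

def predict_1nn(train, x):
    # High-variance learner: copies the label of the nearest point.
    return min(train, key=lambda p: abs(p[0] - x))[1]

def predict_ls_line(train, x):
    # Low-variance learner: least-squares line through the data.
    n = len(train)
    mx = sum(p[0] for p in train) / n
    my = sum(p[1] for p in train) / n
    sxx = sum((p[0] - mx) ** 2 for p in train)
    sxy = sum((p[0] - mx) * (p[1] - my) for p in train)
    return my + (sxy / sxx) * (x - mx)

# Variance of each learner's prediction at x = 0.5 over many training
# sets drawn from the same distribution.
preds_knn, preds_lin = [], []
for _ in range(200):
    train = make_dataset()
    preds_knn.append(predict_1nn(train, 0.5))
    preds_lin.append(predict_ls_line(train, 0.5))

print(statistics.pvariance(preds_knn) > statistics.pvariance(preds_lin))  # True
```

The 1-NN prediction inherits the full label noise of whichever point happens to be nearest, while the fitted line averages over all 40 points, so its prediction barely moves between resamples.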
{ "domain": "ai.stackexchange", "id": 2119, "tags": "machine-learning, linear-regression, statistical-ai, decision-trees, bias-variance-tradeoff" }
car with steering control urdf model: teb_local_planner
Question: I need to design a car model with steering control. For this, several packages are available, of which the teb_local_planner (car-like model) seems to be the most suitable. But I am unable to find any URDF model of the car-like robot in the package files (as shown in the video of the car-like robot). Does that mean we need to use our own car-like model? UPDATE: I need to design a car-like robot model with steering control. My first doubt is that the launch file of the "teb_local_planner" does not include any URDF model, so how is the teb_local_planner working? And if I need to use my own robot model, then what changes are required? Also, I have seen the use of the URDF exporter plug-in for SolidWorks, which helps to get a URDF model from a 3D model. Can I get a car-with-steering model with this SolidWorks plug-in? (as there are many 3D models of cars available in the SolidWorks community). And if it is not possible to extract the URDF of a steering machine, then which tutorial or source should I follow for this? Kindly help in this regard. Thank you. Originally posted by Ayush Sharma on ROS Answers with karma: 41 on 2017-04-07 Post score: 0 Original comments Comment by zeal on 2017-04-14: As I know, we use teb_local_planner for 2D navigation, so there only exists a footprint configuration in teb; you can define the car's footprint and min_turning_radius in the yaml file. You can see the tutorial: http://wiki.ros.org/teb_local_planner/Tutorials/Planning%20for%20car-like%20robots Answer: My approach to simulating a car-like robot was to develop an SDF model, because SDF has a revolute2 joint that allows the wheels to rotate around two different axes simultaneously. In my case, URDF is made just for RViz visualization and to define the transformations between the robot's frames. Hope it helps. Any doubt, feel free to ask. Originally posted by agbj with karma: 83 on 2017-04-17 This answer was ACCEPTED on the original site Post score: 0
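The SDF approach described in the answer might look roughly like the fragment below. This is only an illustrative sketch: the joint and link names, axes and limits are my own assumptions, not from the original post. The point is that SDF's revolute2 joint has two axes, which is what lets a single front-wheel joint both steer and spin.

```xml
<!-- Sketch of one front wheel that both steers and rolls, using SDF's
     revolute2 joint: <axis> is the steering axis, <axis2> the spin axis. -->
<joint name="front_left_wheel_joint" type="revolute2">
  <parent>chassis</parent>
  <child>front_left_wheel</child>
  <axis>
    <xyz>0 0 1</xyz>            <!-- steer about the vertical axis -->
    <limit>
      <lower>-0.6</lower>       <!-- steering angle limits (rad) -->
      <upper>0.6</upper>
    </limit>
  </axis>
  <axis2>
    <xyz>0 1 0</xyz>            <!-- wheel rolls about its hub -->
  </axis2>
</joint>
```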
{ "domain": "robotics.stackexchange", "id": 27552, "tags": "ros" }
Fireplace window: Can one influence how fast soot is building up?
Question: I'm sorry for such a stupid question, but after googling for half an hour I know two things: There are at least a zillion pages, blog posts and articles about how to keep your fireplace window clean or how to clean it. There is not a single page which explains what I'm looking for and which doesn't make claims without supporting them with scientific facts. Therefore, here is the short form of my question: Assuming I have a fireplace with a window, which looks like this after a while: Further assume I'm able to keep all other factors constant (like type of wood, temperature of the fire etc.); is there something I can apply or do to the cleaned window to prevent soot from building up for a longer time? The reason I'm asking is this article I stumbled over, which claims that normal corn starch can be used to fill the pores of the glass. This left me with more questions: does each glass (especially mine) have pores of a size that starch particles can fit in? I read somewhere that starch has a diameter between 2-170 µm depending on the type. I'm making a fire where I burn wood! Why wouldn't the starch simply burn and itself create the soot? As we have known for a while now, making a surface really smooth doesn't mean it stays clean. With a size of 10-20 µm for the lotus papillae, it seems to fall in the same range as starch. And for the diligent, here are two more sub-questions: How does soot buildup work? I mean, why does the soot stick so hard to the window? After all, it's just little particles floating around and landing on the window. (I know it's called burning in, but how does the heat work on an already burned particle?) Is there some connection to the pores (if any) in the glass in this process? Update: After using my fireplace for one winter, I'd like to make a recommendation based on my experience so far. What matters is that I can easily remove the window from my fireplace for cleaning. In case the window is fixed, cleaning might become a bit messy.
Let me bring it down to 3 tips: My current method is to use a cleaner for ceramic glass cooktops. This works like a charm. In case someone wants to know the specific product: it's a cleaning stone for glass ceramic which is probably only available in Germany, but I guess something similar can be found all around the world. This thing comes with a sponge and you just have to rub the wet sponge a few times over the cleaner to apply some. Cleaning the window with this is done in less than a minute, even if there is a lot of soot. If you don't want to invest any money, the most simple and very effective cleaning method is to use the ash itself to scrub the window. Simply take water and a newspaper; if possible make the window itself wet or, better, rinse it with water. Take a sheet of newspaper, make it wet and apply ash directly from the fireplace itself. It's really surprising how easily the soot can be scrubbed away with this method. Last tip: Heat! As already said in the accepted answer, soot is basically the deposition of incomplete combustion products from a flame. There is no doubt and I have tested this very often: a dirty window can partially be cleaned by making a hot fire with very dry wood and a lot of air. Therefore, try to fill your firebox with the maximum possible amount of wood, open all air valves and let it burn for a while. Answer: I don't have a fireplace or wood stove, so I don't have a practical technique that I know works to prevent soot buildup on glass, but I can answer some of your questions about glass and soot. To start, I should add a caveat here: while many fireplaces use tempered glass windows, wood stoves generally have glass-ceramic windows to withstand the increased heat. As far as surface properties go for this, glass-ceramic isn't dramatically different from normal glass. does each glass (especially my one) have pores with a size that starch particles can fit in?
I read somewhere that starch has a diameter between 2-170 µm depending on the type. While special techniques exist for making porous glasses, normal glass is not porous and float glass (the standard technique for fabricating flat glass panes) produces quite smooth pieces even without polishing (hundreds of nm peak-to-peak), so probably whole starch particles are not getting stuck in the surface of the glass and the surface roughness probably has little impact on soot adhesion in this case. I'm making a fire where I burn wood! Why wouldn't the starch simply burn and create itself the soot? Unless the flame is right up to the glass, it probably doesn't get hot enough to really burn starch (if any is even there). That site didn't compare it directly with not applying starch so it's not clear that it makes any difference. How does soot building work? I mean, why does the soot stick so hard at the window? After all it's just little particles flowing around and landing on the window. (I know it's called burning in, but how does the heat work on a already burned particle?) Soot is basically the deposition of incomplete combustion products from a flame. A wood flame appears yellow or orange because tiny particles of mostly carbon are incandescing. There is insufficient oxygen to fully combust the carbon in the fuel to $\ce{CO2}$, so the combustion results in carbon molecules in various states of oxidation from small polymers to elemental carbon. These species are really only stable as gases within the heat of the flame itself and will thus readily deposit onto cold surfaces, like the window glass. Many modern wood stoves seem to have "air-wash" systems that flow air past the inside of the window to reduce how much soot can actually reach the surface. What is actually deposited as soot is a complex mixture, but what is emitted from the flame is related to how much air is available and the temperature of the flame (the two are also connected with each other). 
If the oxygen supply is increased, more complete combustion is favoured so less carbon species that can deposit as soot are formed. The temperature that is attained influences the composition of soot particles: higher temperatures produce more aliphatic and less aromatic compounds. The practical significance is that reducing the flame intensity by limiting oxygen can greatly increase the production of soot. (and may have some effect on how well it adheres to the glass, though I wasn't able to find anything concrete to that effect) From the non-scientific sources I glanced through, it seems to be well known that excessive soot buildup can be due to wet fuel or low airflow and that burning a very hot fire can help make soot easier to remove. As for the corn starch, it may indeed be leaving some deposit that hinders soot adhesion (there are commercial cleaners that claim to leave a film behind that makes soot easier to remove), the act of applying it may be removing existing soot that's too small to see but new soot adheres readily too, or it may do nothing. Perhaps you might want to try applying corn starch to half of a freshly-cleaned window to see if there is a noticeable difference.
{ "domain": "chemistry.stackexchange", "id": 9215, "tags": "organic-chemistry, everyday-chemistry" }
Minimal set of invariants to specify a Kepler orbit
Question: In the Kepler problem, we know that there are various invariants, including: Energy Angular momentum vector Runge-Lenz vector All together these consist of 7 parameters. On the other hand, the orbital elements consist of only 6 parameters, and one of them (the mean anomaly) is just a time reference, so if we ignore time offsets there are only 5 parameters needed to describe an orbit. So there should be some relationships among the invariants so that there are only 5 degrees of freedom. The angular momentum vector and Runge-Lenz vector have to be perpendicular. So if I specify these two vectors, that's 5 degrees of freedom. Does that fully specify the orbit? Can the energy be calculated from them? What about if I specify the angular momentum and the energy? That's only 4 degrees of freedom; what else is needed? What if I specify the Runge-Lenz vector and the energy? Again, it's only 4 degrees of freedom, so what's the last component needed to nail down the orbit? Answer: You're correct, that there must be 5 independent constants of the motion, allowing closed 1-dimensional orbits in the 6-dimensional phase space. I'll show now that from the LRL vector, $\textbf{A}$, you can get only one additional conserved quantity. The magnitude of $\textbf{A}$, depending on scaling can be chosen to correspond to the eccentricity of the orbit $e$. This can be found from the orbital energy and angular momentum through $$e=\sqrt{1+ \frac{2 E h^2}{\mu^2}}.$$ Hence the magnitude of the LRL vector can contribute no additional constant of the motion, and we must look at its direction. 
Through the definition $$\textbf{A} = \textbf{p}\times\textbf{L}-mk \hat{\textbf{r}},$$ we see that $\textbf{A}$ must be perpendicular to $\textbf{L}$, as $$\textbf{A}\cdot \textbf{L} =\textbf{L}\cdot \left( \textbf{p}\times\textbf{L}\right)- \textbf{L}\cdot \left( mk \hat{\textbf{r}}\right) = \textbf{p}\cdot\left( \textbf{L} \times \textbf{L} \right) - mk \left( \textbf{p}\cdot (\hat{\textbf{r}}\times \textbf{r})\right) = 0,$$ where I've cyclically permuted the scalar triple products, to show they must both identically vanish. This statement, is that $\textbf{A}\cdot \textbf{L}=0$, or that $\textbf{A}$ lies in the plane of motion of the orbit. Since $\textbf{A}$ is constrained to lie in a plane already specified by the angular momentum, $\textbf{A}$ can only contribute one extra independent conserved quantity, namely the direction it points in the plane of the orbit. You can show that $\textbf{A}$ in fact points along the symmetry axis that all Keplerian orbits possess. So we have shown $\textbf{A}$ can only contribute a single additional conserved quantity. We get a conserved quantity from energy, and 3 conserved quantities from angular momentum, giving us a total of 5 conserved quantities.
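The two facts established in the answer, that $\textbf{A}\cdot\textbf{L}=0$ and that $|\textbf{A}|$ is fixed by $E$ and $h$, can be checked numerically for a sample state vector. This is a quick sketch of mine (not from the original post), in units with $m = k = 1$, so that $\textbf{A} = \textbf{v}\times\textbf{L} - \hat{\textbf{r}}$ and $|\textbf{A}| = e$:

```python
import math

def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

def dot(a, b):
    return sum(x*y for x, y in zip(a, b))

def norm(a):
    return math.sqrt(dot(a, a))

# Sample bound state (any r, v with E < 0 works).
r = (1.0, 0.0, 0.0)
v = (0.0, 1.2, 0.0)

L = cross(r, v)                           # specific angular momentum
E = dot(v, v) / 2 - 1 / norm(r)           # specific orbital energy
r_hat = tuple(x / norm(r) for x in r)
A = tuple(a - b for a, b in zip(cross(v, L), r_hat))  # LRL vector

h = norm(L)
e_from_energy = math.sqrt(1 + 2 * E * h * h)

print(abs(dot(A, L)) < 1e-12)                 # True: A lies in the orbit plane
print(abs(norm(A) - e_from_energy) < 1e-12)   # True: |A| is fixed by E and h
```

So the seven components of $(E, \textbf{L}, \textbf{A})$ carry only five independent numbers, as the answer argues.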
{ "domain": "physics.stackexchange", "id": 42976, "tags": "orbital-motion, celestial-mechanics, degrees-of-freedom, invariants, laplace-runge-lenz-vector" }
Do immature fruits perform photosynthesis?
Question: Most immature fruits are green: peppers, pine cones, plums, lots of them. I want to know if the green is from chlorophyll in the cells. Do the fruit cells perform photosynthesis? When you cover a green stem or leaf, it will turn pale and stretch. That is because the stems have little need for chlorophyll in the dark, which is why they are pale. They stretch because the auxins in the stems are not destroyed by the photons, so the stems stretch out and topple over. If a green fruit is covered, will it turn pale and stretch like that? Answer: The green pigment is indeed chlorophyll, and the fruits do perform photosynthesis. It's not just "fruiting" plants that do this either. Some shading experiments I saw estimated that up to 30% of the sugars assimilated into the barley ears (basically the grain) can come from photosynthesis occurring in those organs.
{ "domain": "biology.stackexchange", "id": 266, "tags": "botany, plant-physiology, photosynthesis" }
Yellowish tinge during titration of oxalic acid with potassium permanganate
Question: One of our chemistry practicals involves finding the molarity of an oxalic acid solution by titrating it with standard $\ce{KMnO4}$ solution. The oxalic acid solution is heated and a little sulfuric acid is added to the titration flask before starting the titration. It was my personal observation that adding the $\ce{KMnO4}$ solution rapidly causes a temporary appearance of a light yellow colour. Slow and gradual addition of $\ce{KMnO4}$ does not lead to this observation. If sulfuric acid is not added to the titration flask, a yellowish tinge also appears as we get closer to the end point. What is the reason for this yellowish tinge? Is there any intermediate species formed (maybe a complex one) that leads to this observation? Answer: Without knowing the concentrations of the involved substances and the values of some other parameters, we can only guess what has happened during your experiment. The intended analytical reaction is the reduction of purple permanganate to colourless $\ce{Mn^2+}$: $$\ce{MnO4- + 8H+ + 5e- <=> Mn^2+ + 4H2O}$$ If you add the permanganate solution too quickly, the reduction may be incomplete because not enough reducing agent (here: oxalic acid) or acid $(\ce{H+})$ is available in the reaction zone: $$\ce{MnO4- + 4H+ + 3e- <=> MnO2 + 2H2O}$$ The observed colour may be caused by a cloud of finely dispersed particles of dark brown manganese(IV) oxide $(\ce{MnO2})$, which dissolves when the solution is well mixed again so that the intended reaction continues. The same reaction may occur when you don't add enough sulfuric acid, since $\ce{H+}$ is consumed during the reaction (8 mol $\ce{H+}$ per 1 mol $\ce{MnO4-}$).
{ "domain": "chemistry.stackexchange", "id": 6487, "tags": "experimental-chemistry, redox, titration, transition-metals, carbonyl-compounds" }
ROS2 - tf2_ros::TransformBroadcaster and rclcpp_lifecycle::LifecycleNode
Question: I need to create a TransformBroadcaster inside my LifecycleNode, but the constructor of tf2_ros::TransformBroadcaster requires a rclcpp::Node::SharedPtr. This may be a silly question, but I cannot fill the missing part:

std::shared_ptr<tf2_ros::TransformBroadcaster> mTransformPoseBroadcaster;
mTransformPoseBroadcaster = std::make_shared<tf2_ros::TransformBroadcaster>(??????????);

Thank you for help Walter Originally posted by Myzhar on ROS Answers with karma: 541 on 2018-09-05 Post score: 0 Answer: The tf2_ros interface, which pre-dates lifecycle nodes and the node interfaces in rclcpp, would need to be updated to take the "node interfaces" that it uses, which are the common element between Node and LifecycleNode. For example, maybe it should instead take a pointer to rclcpp::node_interfaces::NodeTopicsInterface and a pointer to an instance of rclcpp::node_interfaces::NodeParametersInterface instead of a pointer to a node. A separate question is whether or not tf2_ros should create lifecycle topics or normal topics (should it work only in certain lifecycle states or always)? I'd recommend opening an issue on the ros2/geometry2 issue tracker (https://github.com/ros2/geometry2/issues) asking about this. I'm not sure if anyone will have time to implement this for you, but if you're interested in contributing this feature to tf2, that would probably be a quicker way to get a solution for this. Originally posted by William with karma: 17335 on 2018-09-05 This answer was ACCEPTED on the original site Post score: 1 Original comments Comment by Myzhar on 2018-09-06: Yes, I think that it should be similar to executors that get a pointer to NodeBaseInterface when you add nodes.
Added issue: https://github.com/ros2/geometry2/issues/70 Comment by Myzhar on 2018-09-06: Actually I'm using a simple trick: I publish the TF2 as normal message and I made a "normal" node that subscribes and broadcasts using tf2_ros::TransformBroadcaster Comment by Myzhar on 2018-09-15: An update: using the node that subscribes to tf2 messages and broadcasts them it works, but there is a latency issue as i described here: https://answers.ros.org/question/303377/ros2-tf2-broadcasting-very-slow/
{ "domain": "robotics.stackexchange", "id": 31714, "tags": "ros2" }