Thermal expansion of both liquid and glass tube
Question: I'm a bit confused about thermal expansion in the case in which both a liquid and the container expand. I will describe an example situation to expose the problem. Consider a cylindrical glass tube (linear thermal expansion coefficient $\alpha$) that contains liquid (volume thermal expansion coefficient $\beta$). The height of the tube is $h_{t,0}$ and the height of the liquid inside of it is $h_{l,0}$. If the temperature changes by an amount $\Delta T$, what is the new height of the liquid? If the cylindrical tube is provided with a measuring scale, what is the new height of liquid measured from the scale? The relation I would use is $$\frac{\Delta V}{V_0}\approx\frac{\Delta h}{h_0} +\frac{\Delta A}{A_0},$$ which comes from $$(V_0+\Delta V)=(h_0+\Delta h) \cdot(A_0+\Delta A)$$ after neglecting higher-order terms. To find the new "absolute" height of the liquid I would simply consider the change in volume $\Delta V_{l}=V_{l,0} \beta \Delta T$, and then the change in the area of the cylinder $\Delta A_{t}=A_{t,0} 2 \alpha \Delta T$. Then I would write $$\frac{\Delta h_{l}}{h_{l,0}} =\frac{\Delta V_{l}}{V_{l,0}}- \frac{\Delta A_{t}}{A_{t,0}}=(\beta-2\alpha) \Delta T$$ So actually in this case I would not consider the change in height of the tube, since I'm looking for the absolute change in height of the liquid. To get the new height of liquid "relative to the tube" I would consider the "relative change in volume" $$\Delta V_{l,relative}=\Delta V_{l}-\Delta V_{t}=(V_{l,0} \beta- V_{t,0} 3\alpha)\Delta T$$ Here is my main doubt: does this "relative" change already take into account the fact that both the area and the height of the tube change? 
If so, considering this "relative change" I can write $$\frac{\Delta h_{l,relative}}{h_{l,0}}= \frac{\Delta V_{l,relative}}{V_{l,0}}$$ because, "relative to the tube", the only thing that can change is the height of the liquid and the base area is "constant" (in fact the change in area of the liquid is the same as that of the tube). I'm not very convinced about this last consideration. Are these two processes correct, or are there any mistakes (conceptual or otherwise)? Any suggestion is highly appreciated. Answer: You already have the answer when you write $$\frac{\Delta h}{h} = (\beta -2\alpha)\Delta T$$ What you do after that is unnecessary and does not make sense. You have already said that the height of the tube is irrelevant, so the height of the liquid "relative to the tube" is meaningless. If initially the liquid fills the tube completely and you want to know how much liquid spills out, use $$\frac{\Delta V}{V} = (\beta - 3\alpha)\Delta T$$ In response to your comment: I think what you are trying to do is calculate the new volume reading of the liquid on the scale on the tube. For this you should use the same (volume) formula, since the scale is marked in units of cc or $cm^3$. So if the reading on the scale was initially $V_0$ cc then after expansion of the liquid and the glass tube the reading will be $V_1$ cc where $$V_1 - V_0 = V_0 (\beta - 3\alpha)\Delta T.$$
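As a quick numerical sanity check of the two formulas (a sketch of mine, not from the post; the coefficient and size values below are rough, assumed numbers for mercury in a glass tube):

```python
# Illustrative values (assumed, not from the question):
alpha = 9e-6     # 1/K, linear expansion coefficient of glass
beta = 1.8e-4    # 1/K, volume expansion coefficient of mercury
dT = 50.0        # K, temperature change

# Absolute height change of the liquid column (cross-section grows as 2*alpha):
h_l0 = 20.0      # cm, initial liquid height
dh = h_l0 * (beta - 2 * alpha) * dT

# Change in the reading on the tube's volume scale (the scale itself
# expands in all three dimensions, hence 3*alpha):
V0 = 10.0        # cc, initial reading
dV_reading = V0 * (beta - 3 * alpha) * dT

print(f"dh = {dh:.4f} cm, reading change = {dV_reading:.4f} cc")
```

Both effects are tiny, which is why the distinction between $2\alpha$ and $3\alpha$ only matters for precision instruments such as thermometers.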
{ "domain": "physics.stackexchange", "id": 32188, "tags": "homework-and-exercises, thermodynamics, temperature, volume" }
How can the $PV^{\gamma}$ equation be used here? If it can't be used then how do you solve this problem?
Question: A motor tyre has a pressure of $3$ atm at a temperature of $27^\circ C$. If the tyre suddenly bursts, what is the resulting temperature? First of all, I believe this is not a quasi-static process, and by "suddenly" I don't think there is any equilibrium maintained. This question is given as a homework problem under the section on Applications of the First Law of Thermodynamics. There was a solution on some other website to a very similar problem, but they use the $PV^{\gamma}=constant$ equation and also the ideal gas equation. But I don't see how they are able to use it, since this is not a quasi-static process. Answer: In my judgment, your assessment is correct. The subsequent expansion of the air is going to be irreversible. To a first approximation, the expansion takes place against constant atmospheric pressure, and the behavior is essentially the same as if the air were contained in an insulated cylinder with a massless piston. Of course, the part that ends up outside the carcass will, after a while, come to equilibrium with the surrounding air. But the air remaining inside the carcass of the tire will heat up much more slowly, and so, at least for the purposes of this problem, the expansion can be considered adiabatic (and, as indicated previously, against constant atmospheric pressure).
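A sketch of the calculation this answer points to (my own working, under the stated assumptions: ideal diatomic gas, irreversible adiabatic expansion against constant atmospheric pressure). The first law per mole gives $C_v(T_2-T_1) = -P_{atm}(v_2-v_1)$ with $v = RT/P$, which can be solved for $T_2$ in closed form:

```python
R = 8.314        # J/(mol K)
Cv = 2.5 * R     # molar Cv for a diatomic ideal gas (air) -- an assumption
T1 = 300.0       # K, i.e. 27 C
P1 = 3.0         # atm, initial tyre pressure
P2 = 1.0         # atm, constant external pressure after the burst

# First law, adiabatic, irreversible: Cv*(T2 - T1) = -R*(T2 - T1*P2/P1),
# hence T2 = T1 * (Cv + R*P2/P1) / (Cv + R)
T2 = T1 * (Cv + R * P2 / P1) / (Cv + R)
print(f"T2 = {T2:.1f} K")    # about 243 K

# For comparison, the reversible P*V^gamma formula (which the question
# rightly suspects does not apply here) gives a lower temperature:
gamma = 3.5 / 2.5
T2_rev = T1 * (P2 / P1) ** ((gamma - 1) / gamma)
print(f"reversible formula would give {T2_rev:.1f} K")   # about 219 K
```

The gap between the two results is exactly the point of the answer: the irreversible expansion does less work on the surroundings than a reversible one, so the gas cools less.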
{ "domain": "physics.stackexchange", "id": 68942, "tags": "homework-and-exercises, thermodynamics" }
Lifting 250 watts of heat using some heat exchanger/heat sink
Question: I am new to thermal management. I want to lift about 250 W of heat from the hot end of a thermoelectric/Peltier cooler (TEC). The temperature of the hot side of the TEC is 30°C. I want to use some kind of heat exchanger/chiller to accomplish this task. I am using the equation $$ P = h S (T_s-T_f) $$ where $P$ = heat to be removed = 250 W $h$ = heat transfer coefficient $S$ = area of contact between hot side of TEC & fluid (air/water) used for convective heat transfer $T_s$ = temperature of hot end $T_f$ = temperature of fluid (air/water) I do not know how to calculate $h$. If I calculate that, it will give me the temperature of the fluid. How do I compute $h$? And is my approach correct? Answer: This is a convective heat transfer problem you're asking about. Calculating the coefficient isn't normally possible. What you can do is read up on stuff like fin geometry on heat sinks, and CFD. The long-term solution here is to derive an equation for your system, something like P = f(X) where X is (for example) the length of a fin. Then, perform a CFD simulation for X = 1 to get P for that particular geometry. Then you can scale your fin until you get the right value for P. It gets a bit more complicated because you probably shouldn't use dimensioned parameters for something like this. Instead, consider something like the Reynolds number of the system. Some more reading to help you get started: Heat Transfer From a Fin Convective Heat Transfer
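One practical reframing (a sketch of mine, with made-up numbers, not from the answer): rather than computing $h$ from first principles, first compute the conductance $hS$ the heat sink must provide, then check what area that implies for typical handbook ranges of $h$:

```python
P = 250.0    # W, heat to remove from the TEC hot side
Ts = 30.0    # C, hot-side temperature
Tf = 20.0    # C, assumed fluid (coolant) temperature

hS = P / (Ts - Tf)        # W/K, required h*S product (thermal conductance)
print(f"required h*S = {hS:.0f} W/K")   # 25 W/K

# Rough order-of-magnitude textbook ranges for h, in W/(m^2 K):
#   free air ~ 5-25, forced air ~ 25-250, water ~ 500-10000
for h in (25.0, 250.0, 1000.0):
    print(f"h = {h:6.0f} W/(m^2 K)  ->  S = {hS / h:.4f} m^2")
```

With a 10 K temperature difference this already shows why forced air needs a large finned surface while a water loop gets away with a small contact plate; the exact $h$ still has to come from correlations or CFD, as the answer says.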
{ "domain": "engineering.stackexchange", "id": 1091, "tags": "mechanical-engineering, thermodynamics, heat-transfer, heating-systems, heat-exchanger" }
Is this a bed bug? I found it crawling on me in hotel bed
Question: Is this a bed bug? It doesn't have the stripes that I saw in other pictures. I'm traveling, and for 5 days I was sleeping in different hotel rooms. What should I do if it is? I searched the bed and found no more. Answer: As already stated in the comments, the animal you found is a tick (Wikipedia). Since they are known to carry diseases, you should make sure you have not been bitten or even still have one sucking your blood. So search your body. If you happen to find one still attached, do NOT squeeze it, as this may stimulate fluids to flow from the tick into your body (bad). Instead get it out asap using tweezers or, even better, specialized tools. If you can't do it yourself, ask another person, preferably a doctor. If you have been bitten, watch for symptoms of an infection such as a red circle. If in doubt always see a doctor, since you might require antibiotic treatment. NOTE: usually nothing happens, so don't worry too much, but be alert for changes in the skin around bites (if bitten, that is). EDIT: since hotel beds are not a typical habitat of ticks, you probably carried it there yourself.
{ "domain": "biology.stackexchange", "id": 6023, "tags": "species-identification" }
How do you go from watts to lumens?
Question: Let's say you have a light source and, using a solar panel or photoelectric diode, you can absorb all the emitted light, which would produce some amount of power outputted by the solar panel. How would you then calculate the lumens? And I'm not talking about some rule-of-thumb calculation like every 10 watts is 80 lumens. Answer: That's a two-part process: The solar panel produces a certain wattage output The electrical energy is converted to light energy Simplistically, solar panels are rated in efficiency at specific temperature and light path conditions. This is provided by all reputable manufacturers and measured in Watts output per 1000 Watts insolation under specified conditions. If desired you can go into much more detail regarding response across the spectrum, efficiencies at various temperatures and more, BUT in most cases a simple Watts per kW of insolation under specified conditions is adequate. Typical figures are in the 17%-22% range for conversion of solar to electrical energy. Wikipedia AM1.5 air mass specification NREL Even more detail on conditions Buy your own miniature test sun here Much more - image search here Light output in lumen from a given radiator (typically an LED) is also specified by manufacturers - usually in lumen per electrical watt input. This too is under specified conditions. Typically either at 20 degrees C - which is almost always lower than actual die temperature due to electrical heating - or, for more modern lighting orientated devices, at say 90C or 105C - which corresponds more closely to actual die temperatures during operation in higher power LEDs. Output is also at specified voltage and current input - and this is often at values lower or substantially lower than maximum ratings. Typical modern "white" LEDs have outputs in the 120 - 200 lumen per Watt range. Some more recent devices have outputs of 220 lumen/watt and a very few are higher. Lumen ratings are dependent on wavelength as they are related to eye response. 
A deep-blue, near-UV source will have extremely low lumen/watt ratings, and output at such short wavelengths is often specified in mW of light energy. Maximum lumens per mW are achieved at the peak spectral response wavelength of the human eye - nominally 555 nm wavelength. This is a "sort of yellowy green colour" - and the reason why some fire-engines are coloured "sort of yellowy green" rather than red. Lumen per Watt - image search. Each image is a related webpage here The relative sensitivity of the (standardised) human eye at various wavelengths of light is shown in the image below. This image is from this page Sensitivity of the human eye To avoid confusion, I recommend starting with considering light bright enough to see colours. The photopic curve relates to light above about 5 mW per square metre - where the "cones" in the centre of the eye give colour vision. This is about the level of light in a dimly lit corner of a room with a single central light bulb. The scotopic curve relates to very low level light where the rods in the periphery of the eye give mainly monochrome vision. Peak bright-light response is defined as 683 lumens for a source of 1 Watt of 555 nanometre light (1 Watt spread over one square metre gives 683 lux). At very low light levels the eye operates in a monochrome viewing mode and somewhat different results are obtained. Lumen / Lux / Candela comparison here Useful What is lumens? How to Choose the Right Lighting Lumens? And Response of the eye to light
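To make the watts-to-lumens relation concrete, here is a minimal sketch (mine, not from the answer; the $V(\lambda)$ entries are rough textbook values, for illustration only) of the photometric sum $\Phi_v = 683 \cdot \sum_\lambda V(\lambda)\,P(\lambda)$:

```python
# Approximate photopic luminosity values V(lambda); V = 1 at the eye's
# peak sensitivity (~555 nm).  Rough illustrative numbers only.
V = {450: 0.038, 555: 1.000, 650: 0.107}

def lumens(spectrum_w):
    """spectrum_w maps wavelength in nm -> radiant power in watts."""
    return 683.0 * sum(V[wl] * p for wl, p in spectrum_w.items())

# 1 W of monochromatic 555 nm light: the theoretical maximum, 683 lm.
print(lumens({555: 1.0}))
# The same radiant watt of deep red at 650 nm yields far fewer lumens.
print(lumens({650: 1.0}))
```

A real source has a continuous spectrum, so the sum becomes an integral of $V(\lambda)$ against the measured spectral power distribution; the principle is unchanged.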
{ "domain": "physics.stackexchange", "id": 92582, "tags": "optics, energy, visible-light, dimensional-analysis, si-units" }
Fast triangulation of unordered point clouds - adding new clouds
Question: Hello, I am happily using the GreedyProjectionTriangulation class as demonstrated in the pcl tutorial 4.2. Section IV of the paper Marton, Rusu, Beetz - On Fast Surface Reconstruction Methods for Large and Noisy Datasets claims the possibility of inserting new point clouds into the existing mesh. I had a look at the files gp3.h and reconstruction.h contained in pcl/include/pcl/surface/ but I can't seem to find any function that would allow adding new points to the mesh. Is such functionality already implemented? More generally, to what extent does the Diamondback implementation cover the (impressive) functionalities described in the paper (automatic alignment of new clouds, noise correction, density correction...)? Thanks in advance for your help, Raph Originally posted by raphael favier on ROS Answers with karma: 1382 on 2011-03-08 Post score: 1 Answer: This question has been answered in the PCL mailing list. See the 5th message in this conversation thread. Originally posted by Abhijit with karma: 26 on 2011-03-08 This answer was ACCEPTED on the original site Post score: 1
{ "domain": "robotics.stackexchange", "id": 4991, "tags": "pcl, pointcloud" }
What is the form of the $n$-th order term of the perturbation series of an eigenvalue?
Question: Suppose I have a matrix given by a sum $A=D+\epsilon B$, where $D$ is diagonal and $\epsilon$ is small, and I want the eigenvalues of $A$ as power series in $\epsilon$. The leading order is just the eigenvalues of $D$, the first corrections are the diagonal elements of $B$, and the second order is also well known. I would like to know the particular form of the $n$-th order term in the eigenvalue perturbation series. Apparently it can be written as a sum over partitions, but I can't find this anywhere. Answer: The answer can be found in Kato's book Perturbation theory for linear operators. I will use Kato's notation. In fact I will answer a more general question where you have an operator which depends analytically on a parameter $x$ (your $\epsilon$). Let such an operator be $T(x)$ and let $$ T(x) = \sum_{n=0}^{\infty} x ^n T^{(n)} $$ such that the series converges in a neighborhood of $x=0$. I also call $T=T^{(0)}$. In your case you simply have $T^{(n)}=0$ for $n\ge 2$. We seek the perturbation series of an eigenvalue $\lambda$ of $T$. This means that there exists an eigenprojector $P$ of $T$ such that $$ TP = \lambda P +D, $$ where $D$ is a nilpotent term that may arise from the Jordan decomposition. If $m = \mathrm{dim} P$ is the dimension of the range of $P$, then $D^m=0$. Note that for a non-degenerate eigenvalue ($m=1$) we necessarily have $D=0$. Define also $Q=1-P$ and the reduced resolvent $$ S = \lim_{z\to \lambda} Q (T - z)^{-1} Q. $$ Lastly, let's define $$ S^{(0)} = -P, \ \ S^{(n)} = S^n, \ \ S^{(-n)} = - D^n, \ \mathrm{for}\ n\ge 1. $$ Let $P(x)$ be the eigenprojector of $T(x)$ analytically connected to $P$. 
Then one has the following series: $$ (T(x) - \lambda) P(x) = D + \sum_{n=1}^{\infty} x^n \tilde{T}^{(n)} $$ with $$ \tilde{T}^{(n)} = - \sum_{p=1}^{\infty} (-1)^p \sum_{\mathcal{A}} S^{(k_1)} T^{(n_1)} S^{(k_2)} \cdots S^{(k_p)} T^{(n_p)} S^{(k_{p+1})}, $$ where $\mathcal{A}$ corresponds to the indices satisfying the following constraint $$ \mathcal{A} = \left \{ \sum_{i=1}^p n_i = n ; \sum_{j=1}^{p+1} k_j = p; n_j \ge 1; k_j \ge -m+1 \right \}. $$ In the non-degenerate case ($m=1$) this provides the final answer, i.e. \begin{eqnarray} \lambda(x) &=& \lambda + \sum_{n=1}^{\infty} x^n \lambda^{(n)} \\ \lambda^{(n)} &=& \mathrm{Tr} \tilde{T}^{(n)}. \end{eqnarray} Note that in this case one must have $D=0$. Moreover taking the trace already kills many terms because of the cyclic property of the trace and noting that $SP = PS = 0$. To make contact with possibly more familiar expressions, note that for a self-adjoint unperturbed operator $T$, the reduced resolvent should look familiar: $$ S = \sum_{\lambda_j \neq \lambda} \frac{ |j\rangle \langle j|}{ \lambda_j - \lambda}, $$ where I called here $\lambda_j$ and $|j\rangle$ the eigenvalues and eigenvector of $T$.
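As a numerical cross-check of the non-degenerate case (a sketch of mine with NumPy, not from the answer): for $A = D + \epsilon B$ the familiar low-order terms are $\lambda^{(1)} = B_{ii}$ and $\lambda^{(2)} = \sum_{j\neq i} B_{ij}B_{ji}/(D_{ii}-D_{jj})$, and the exact eigenvalue should agree with the truncated series to $O(\epsilon^3)$:

```python
import numpy as np

D = np.diag([1.0, 2.0, 4.0])
B = np.array([[0.0, 1.0, 0.5],
              [1.0, 0.3, 0.2],
              [0.5, 0.2, -0.4]])   # symmetric, so eigvalsh applies
eps = 1e-3
i = 0   # track the eigenvalue that starts at D[0, 0] = 1

exact = np.sort(np.linalg.eigvalsh(D + eps * B))[i]
second = sum(B[i, j] * B[j, i] / (D[i, i] - D[j, j])
             for j in range(3) if j != i)
series = D[i, i] + eps * B[i, i] + eps ** 2 * second

# The residual is dominated by the missing third-order term, ~eps^3.
print(abs(exact - series))
```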
{ "domain": "physics.stackexchange", "id": 44235, "tags": "perturbation-theory, eigenvalue" }
Parsing JSON using jsoncpp
Question: I left questions in the comments strategically, since I think it's easier to answer questions this way. Basically what my small class does is abstracts away functionality that was originally in a function. This class reads json form a file and creates an object which I can use to traverse the json. I understand that if something can be made into a function it shouldn't be made into a class, but I need to practice. Below I outlined things I'm looking for. Choose one or choose all. Looking for: Did I use references "&" correctly #include header files correctly A way to initialize object like so: Root rate("test.json", ["query"]["results"]["rate"]) (syntax can be different) Best practice advice Not looking for (at least not yet): Exceptions and Error handling Advice form C programmers root.h #ifndef ROOT_H #define ROOT_H // Should header files always be included in root.h as opposed to root.cpp? #include <fstream> #include <string> // Seems like I do not need to include this string container, why? //is it because json/json.h contains it? #include "json/json.h" // Would I ever create a class with a dependency like this one? // jsoncpp.sourceforge.net/annotated.html class Root { private: std::ifstream m_json; public: Json::Value m_root; Json::Value m_query; Root(const std::string&); ~Root(); }; #endif // ROOT_H root.cpp #include "root.h" Root::Root(const std::string & filename) : m_json(filename, std::ifstream::binary) // Is std::ifstream::binary ok to put it in here like this ^? // It's working, but would that be good practice? { m_json >> m_root; m_json.close(); // Do I need .close() here? } Root::~Root(){} main.cpp #include <iostream> #include "root.h" int main() { Root rate("test.json"); rate.m_query = rate.m_root["query"]["items"]; // Is it ok to assign member to a member like so, // as opposed to just a variable? // How can I instantiate my object like the line below? 
// Root rate("test.json", ["query"]["results"]["rate"]); // Syntax does not have to match precisely? for(const auto & it : rate.m_query) { std::cout << it << std::endl; } } test.json { "query": { "count": 3, "items": [ {"item": "4"}, {"item": "3"}, {"item": "2"} ] } } Answer: // Should header files always be included in root.h as opposed to You should include all header files that are required (no more than are required). In your case you use the following types in the header file. std::ifstream std::string Json::Value So you should include the appropriate header file for these types. #include <fstream> #include <string> #include "json/json.h" The only curve ball is that if you only use a type reference then you can technically get away with a forward declaration (rather than including the header file). So you could use a forward declaration of std::string, but unfortunately it's not your class and you don't actually know how to forward declare it because that is not defined in the standard, so you have to include the header file. // Seems like I do not need to include this string container, why? It may be included indirectly via <fstream> or "json/json.h". BUT you should still include the <string> header file because these dependencies may not always hold (on different platforms or different versions of the compiler it may include things differently). So do not assume because it worked on this platform it will always work. Think worst case and explicitly include <string> The include guard seems a bit too generic: ROOT_H Seems like somebody else may use this. You need to make sure your guards are unique. I always use guards that include the namespace // I own the domain name thorsanvil.com // and I put all my classes in the namespace ThorsAnvil // So my include guards go like this. #define THORSANVIL_<Optional Nested Namespace>_ROOT_H // Is std::ifstream::binary ok to put it in here like this ^? // It's working, but would that be good practice? Sure. 
That's totally fine. You just have to understand what it means and the difference between binary and text mode (the default). In Text mode: => the "Platform Specific" "End of Line Sequence" is converted into '\n' when reading from a file. In Binary mode: no conversion is performed and you get the raw bytes. So in the constructor: Root::Root(const std::string & filename) : m_json(filename, std::ifstream::binary) { m_json >> m_root; m_json.close(); // Do I need .close() here? // Yes probably. // Closing it releases the related OS resources. // If you don't the resource will be held until // the whole object is destroyed. } I don't see the need for the m_json object to be part of the object. It is only ever going to be used in the constructor. Once the data is loaded it will never be used again. So just declare it as an automatic variable that is local to the constructor. Root::Root(const std::string & filename) { std::ifstream m_json(filename, std::ifstream::binary); m_json >> m_root; } // Now we don't need `close()` because the object goes out of scope // and the `std::ifstream` destructor calls close for us. // Is it ok to assign member to a member like so, // as opposed to just a variable? It is. But why? It seems much more logical to use a local variable. Json::Value query = rate.m_root["query"]["items"]; Prefer '\n' over std::endl The difference is that std::endl forces the buffer to flush. The buffer will flush automatically when it needs to. Forcing it to flush is only going to make your code inefficient. // How can I instantiate my object like the line below? // Root rate("test.json", ["query"]["results"]["rate"]); You can define the operator[]. You can make it take a string as an argument. class Root { public: Json::Value operator[](std::string const& index) { return m_root[index]; } // Other stuff in your class. }; Now it can be used: Root json("file.name"); Json::Value rate = json["query"]["results"]["rate"];
{ "domain": "codereview.stackexchange", "id": 18963, "tags": "c++, c++11, parsing, json" }
ROS Answers SE migration: Kinect amcl
Question: Hi everybody! I am testing amcl on a KUKA youBot equipped with a Kinect. Following this tutorial http://www.ros.org/wiki/navigation/Tutorials/Navigation%20Tuning%20Guide I move the robot with a joypad on the known map to check the amcl. This is what I obtain: http://www.youtube.com/watch?v=Bn0Bde0BjSs As you can see the correction for translation is quite good; the problem is that it doesn't turn enough to correct the odometry in rotation. Here is my amcl file <launch> <node pkg="amcl" type="amcl" name="amcl"> <!-- Publish scans from best pose at a max of 10 Hz --> <param name="base_frame_id" value="/base_footprint"/> <param name="odom_model_type" value="omni"/> <param name="odom_alpha5" value="0.5"/> <param name="transform_tolerance" value="0.2" /> <param name="gui_publish_rate" value="10.0"/> <param name="laser_max_beams" value="60"/> <param name="min_particles" value="1000"/> <param name="max_particles" value="10000"/> <!--param name="kld_err" value="0.05"/--> <param name="kld_z" value="0.99"/> <param name="odom_alpha1" value="0.5"/> <param name="odom_alpha2" value="0.5"/> <!-- translation std dev, m --> <param name="odom_alpha3" value="0.5"/> <param name="odom_alpha4" value="0.5"/> <param name="laser_z_hit" value="0.95"/> <param name="laser_z_short" value="0.15"/> <param name="laser_z_max" value="0.03"/> <param name="laser_z_rand" value="0.01"/> <param name="laser_sigma_hit" value="0.002"/> <param name="laser_lambda_short" value="0.1"/> <param name="laser_model_type" value="likelihood_field"/> <param name="laser_likelihood_max_dist" value="4.0"/> <param name="update_min_d" value="0.05"/> <param name="update_min_a" value="0.05"/> <param name="odom_frame_id" value="/odom"/> <param name="resample_interval" value="1"/> <param name="recovery_alpha_slow" value="0.001"/> <param name="recovery_alpha_fast" value="0.1"/> <param name="initial_pose_x" value="0.35"/> <param name="initial_pose_y" value="1.08"/> <param name="initial_pose_a" value="1.57"/> 
<param name="first_map_only" value="true"/> <param name="laser_max_range" value="4.0"/> <param name="laser_min_range" value="0.6"/> </node> </launch> The youBot odometry is not so good, so I gave a high weight to the odom parameters and I tried to trust the laser much more with a very low "laser_sigma_hit" parameter. Any suggestion to make it work better? Thank you guys Originally posted by Lorenzo on ROS Answers with karma: 76 on 2011-11-03 Post score: 2 Answer: Thanks for supplying the bag file; that always helps in trying to debug this sort of issue. It looks like the robot is consistently underreporting the actual rotation. I watched the scans accumulate in the odom frame in rviz (as described here), and would estimate that it's underreporting by 5-10 degrees per rotation. amcl's odometry model assumes zero-mean noise. It has a hard time correcting for systematic bias, which your robot seems to exhibit. If your robot is indeed always underreporting rotation (more data might be required to verify that), then I would try to correct the odometry before sending it to amcl. You might try a variation on the TurtleBot calibration, which uses a wall as a fixed feature to compare to. Or you might just inflate the reported yaw differences by 1-3%. Either way, if there's a consistent bias, it should be possible to do a one-time correction (not a periodic re-calibration). Such a fix could probably be incorporated into the YouBot odometry calculation somewhere. If fixing the odometry is not feasible, I would try tuning amcl's odometry parameters. In this situation, odom_alpha1 is probably the most important one to increase, with odom_alpha4 coming next. I would leave the other parameters low (at their defaults). I would not increase confidence on the laser model; the Kinect isn't really giving you high-quality laser scans. Instead, I would take the laser model configuration from turtlebot_navigation. 
Originally posted by Brian Gerkey with karma: 2916 on 2011-11-08 This answer was ACCEPTED on the original site Post score: 5
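A minimal sketch (mine, not from the answer; the 2% factor is a hypothetical calibration result) of the suggested one-time yaw correction, applied to each odometry increment before it is handed to amcl:

```python
import math

YAW_SCALE = 1.02   # hypothetical: robot underreports rotation by ~2%

def correct_yaw_step(yaw_corrected, yaw_raw_prev, yaw_raw_new):
    """Inflate one raw yaw increment and accumulate, wrapped to [-pi, pi)."""
    d = yaw_raw_new - yaw_raw_prev
    d = (d + math.pi) % (2 * math.pi) - math.pi   # unwrap across +/-pi
    y = yaw_corrected + YAW_SCALE * d
    return (y + math.pi) % (2 * math.pi) - math.pi

# A reported quarter turn becomes slightly more after correction:
print(correct_yaw_step(0.0, 0.0, math.pi / 2))   # ~1.602 rad vs 1.571 raw
```

Because the correction multiplies increments rather than absolute headings, it accumulates to the right total over a full rotation and survives the $\pm\pi$ wraparound.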
{ "domain": "robotics.stackexchange", "id": 7180, "tags": "navigation, kinect, youbot, amcl, 2d-pose-estimate" }
Sign Up layout in android
Question: I have created a sign up layout as image below. Does this part of code looked bad since the dp I used for the floating button are quite large. android:layout_marginTop="110dp" android:layout_marginLeft="70dp" If yes, how can I change it ? <?xml version="1.0" encoding="utf-8"?> <LinearLayout xmlns:android="http://schemas.android.com/apk/res/android" android:layout_width="match_parent" android:layout_height="wrap_content" xmlns:app="http://schemas.android.com/apk/res-auto" android:orientation="vertical"> <RelativeLayout android:layout_width="match_parent" android:layout_height="wrap_content" android:orientation="horizontal" android:gravity="center"> <de.hdodenhof.circleimageview.CircleImageView android:layout_marginTop="25dp" android:id="@+id/imgProfilePicture" android:layout_width="110dp" android:layout_height="130dp" app:civ_border_width="1dp" app:civ_border_color="@color/colorPrimary"/> <android.support.design.widget.FloatingActionButton app:fabSize="mini" android:layout_marginTop="110dp" android:layout_marginLeft="70dp" android:id="@+id/fab" android:layout_width="wrap_content" android:layout_height="wrap_content" app:elevation="2dp" android:src="@drawable/camera" android:layout_alignRight="@+id/imgProfilePicture"/> </RelativeLayout> <TextView android:layout_marginLeft="35dp" android:backgroundTint="@color/colorPrimary" android:layout_marginTop="30dp" android:id="@+id/textViewUserId" android:layout_width="match_parent" android:layout_height="wrap_content" android:text="USER ID "/> <EditText android:layout_width="match_parent" android:layout_height="wrap_content" android:layout_marginLeft="30dp" android:layout_marginRight="30dp" android:id="@+id/editTextUserId" /> <TextView android:layout_marginLeft="30dp" android:backgroundTint="@color/colorPrimary" android:layout_marginTop="25dp" android:id="@+id/textViewUserName" android:layout_width="match_parent" android:layout_height="wrap_content" android:text="USERNAME "/> <EditText 
android:layout_width="match_parent" android:layout_height="wrap_content" android:layout_marginLeft="30dp" android:layout_marginRight="30dp" android:id="@+id/editTextUsername" /> <TextView android:layout_marginLeft="30dp" android:backgroundTint="@color/colorPrimary" android:layout_marginTop="25dp" android:id="@+id/textViewPassword" android:layout_width="match_parent" android:layout_height="wrap_content" android:text="PASSWORD "/> <EditText android:layout_width="match_parent" android:layout_height="wrap_content" android:layout_marginLeft="30dp" android:layout_marginRight="30dp" android:id="@+id/editTextUserPassword" /> <LinearLayout android:layout_width="match_parent" android:layout_height="wrap_content" android:orientation="horizontal" > <TextView android:layout_marginLeft="30dp" android:backgroundTint="@color/colorPrimary" android:layout_marginTop="25dp" android:id="@+id/textViewCourse" android:layout_width="wrap_content" android:layout_height="wrap_content" android:text="COURSE "/> <TextView android:layout_marginLeft="65dp" android:layout_marginRight="30dp" android:backgroundTint="@color/colorPrimary" android:layout_marginTop="25dp" android:id="@+id/textViewPhoneNum" android:layout_width="match_parent" android:layout_height="wrap_content" android:text="PHONE NUMBER "/> </LinearLayout> <LinearLayout android:layout_width="match_parent" android:layout_height="match_parent" android:orientation="horizontal"> <Spinner android:layout_width="wrap_content" android:layout_height="wrap_content" android:layout_marginLeft="30dp" android:layout_marginRight="30dp" android:id="@+id/spinner"/> <EditText android:layout_width="match_parent" android:layout_height="wrap_content" android:layout_marginRight="30dp" android:id="@+id/editTextPhoneNum"/> </LinearLayout> </LinearLayout> Answer: Its not that bad, but in this case you might have to test it in different screen sizes Android devices, to see if it fits exactly where you want. 
In my opinion, a better approach is to use FrameLayout instead of RelativeLayout. By using a frame layout, you can use gravity center | bottom and thereby remove the marginTop. The code goes like this: <FrameLayout android:layout_width="match_parent" android:layout_height="wrap_content" android:gravity="center"> <de.hdodenhof.circleimageview.CircleImageView android:layout_marginTop="25dp" android:id="@+id/imgProfilePicture" android:layout_width="110dp" android:layout_height="130dp" app:civ_border_width="1dp" app:civ_border_color="@color/colorPrimary" android:layout_gravity="center"/> <android.support.design.widget.FloatingActionButton android:id="@+id/fab" android:layout_width="wrap_content" android:layout_height="wrap_content" android:layout_alignRight="@+id/imgProfilePicture" android:src="@mipmap/ic_launcher" app:elevation="2dp" app:fabSize="mini" android:layout_gravity="center|bottom" android:layout_marginLeft="50dp" /> </FrameLayout> And the marginLeft of 50dp is there because the width of your CircleImageView is 110dp, so 50dp is approximately half, which places the button where you want it.
{ "domain": "codereview.stackexchange", "id": 32315, "tags": "java, android, xml, layout" }
Is the protein in teardrops still attached to cells, or is it released and free-flowing?
Question: A ScienceDaily article says that the protein in teardrops can kill bacteria. But how does it reach the bacteria? Answer: I am not sure I understand your question. According to the article you mention, the proteins in teardrops kill the bacteria which are invading the eye (i.e. bacteria also present in the teardrops): "Those jaws chew apart the walls of the bacteria that are trying to get into your eyes and infect them." EDIT: These proteins are enzymes called lysozymes. They are free-flowing proteins in human tears. These proteins are actively produced in the lacrimal glands and actively secreted into the lacrimal liquid.
{ "domain": "biology.stackexchange", "id": 107, "tags": "human-biology, molecular-biology, proteins" }
Rolling Ball and its angular momentum
Question: I would like to know your opinion about a question. Everything is idealized. This means there are no friction losses or other forces which would ruin the beauty and simplicity of this question, which aims to demonstrate a specific law or rule. Question A: A ball with a moment of inertia $I$ and a linear speed (speed of the center of mass) $V$ rolls without any slipping (pure rolling: translation plus rotation) and then this ball climbs an inclined plane. The question is: at the top of the plane, the ball finds itself at rest (thus no translation and no rotation). How do you justify this in relation to the conservation of angular momentum? Personally I do not understand this question, since there is obviously a torque being applied by the gravitational force while climbing the plane and thus it reduces the angular velocity to 0 (this is a question from an exam, so I guess I misunderstood it), whereas we know that the angular momentum $L$ is given by $L = I\omega$. How is it conserved, and how do you answer this question? Answer: You are absolutely right in saying that the angular momentum isn't conserved. Let's say we observe the ball rising up the inclined plane from its instantaneous axis of rotation; then we would see that the ball experiences a torque due to gravity ($|\boldsymbol{\tau}|=mgR\sin\theta$, where $m$ is the mass of the ball, $R$ is its radius, and $\theta$ is the incline angle). This torque reduces the ball's angular momentum and thus the ball comes to rest. Incidentally, if friction is absent on the inclined plane, then the ball's $\boldsymbol{\omega}$ will not change when it reaches the top of the inclined plane. Only the velocity will change. Thus in the center of mass frame, the angular momentum ($I\boldsymbol{\omega}$) will be conserved. However, angular momentum may not be conserved in any other general reference frame.
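To put a number on that deceleration (a sketch of mine, with made-up values): about the instantaneous axis through the contact point, the parallel-axis theorem gives $L = (I + mR^2)\,\omega$, and gravity supplies a constant torque $|\tau| = mgR\sin\theta$, so the ball's angular momentum is driven to zero in a finite time:

```python
import math

m = 0.5                      # kg, ball mass (assumed)
R = 0.1                      # m, ball radius (assumed)
I = 0.4 * m * R ** 2         # solid sphere about its own center
theta = math.radians(30.0)   # incline angle (assumed)
v0 = 2.0                     # m/s, initial speed of the center of mass
g = 9.81                     # m/s^2

omega0 = v0 / R                       # rolling without slipping
L0 = (I + m * R ** 2) * omega0        # about the instantaneous axis
t_stop = L0 / (m * g * R * math.sin(theta))   # constant torque kills L
print(f"time to stop: {t_stop:.3f} s")

# Cross-check via linear dynamics: a = g*sin(theta) / (1 + I/(m*R^2))
a = g * math.sin(theta) / (1 + I / (m * R ** 2))
print(f"v0 / a = {v0 / a:.3f} s")   # algebraically the same result
```

The agreement of the two routes is the point: "torque about the contact point reduces L to zero" and "gravity decelerates the rolling ball" are the same statement.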
{ "domain": "physics.stackexchange", "id": 67172, "tags": "newtonian-mechanics, reference-frames, rotational-dynamics" }
Calculating the Total number of States for a microcanonical system
Question: Please note before flagging: I do not need help solving this, as the math is simple algebra. Where I am lost is understanding what the math means and why/how it is applied. Problem 2.4 from Reif, Fundamentals of Statistical and Thermal Physics: Consider an isolated system consisting of a large number $N$ of virtually non-interacting localized (not translating) particles of spin $½$. Each particle has magnetic moment $\mu$ which can point either parallel or antiparallel to an applied magnetic field $H$. The energy of the system is $E=-(n_1 - n_2)\mu H$ where $n_1$ is the number of particles with spin parallel and $n_2$ is the number of particles with spin antiparallel to $H$. a) Consider the energy range $E \rightarrow E+ \delta E$ where $\delta E$ is very small compared to $E$ but is microscopically large, i.e., $\delta E \gg \mu H$. What is the total number of states, $\Omega (E)$, of the system lying in this energy range? I am given from the book that the total number of states is $\Omega (E)=\omega(E) \delta E$ where $\omega (E)=\frac{N!}{n_1 ! n_2 !} = \frac{N!}{n_1 ! (N-n_1)!}$. The density of states, $\omega(E)$, is easy enough to understand and calculate. However, I do not understand what $\delta E$ is, how to calculate it, or why I multiply the density of states by it. Answer: The density of states is defined such that, in a continuous distribution (which is valid here because the energy is large), the number of states between $E_1$ and $E_2$ is $$N = \int_{E_1}^{E_2}dE \ \Omega(E)$$ where $\Omega(E)$ is the density of microstates at energy $E$. In the case of the infinitesimal range $\delta E$, the function varies negligibly over the range, so the (differential) number of states (leaving out mathematical rigor) is simply the DOS multiplied by the range $\delta E$. That's just mathematics. The same goes for charge density in electrostatics, fluid density, etc.
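The relation "number of states = density of states times δE" can be checked numerically. Below is a minimal sketch (my own illustration; the values of N and δE are assumed, and the density of states is taken as the binomial multiplicity divided by the level spacing 2μH, consistent with the integral definition in the answer):

```python
from math import comb

# Numerical check of Omega(E) = omega(E) * dE for the spin-1/2 system
# (my own illustration; N, mu*H and dE are assumed values, not from Reif).
# Levels: E = -(2*n1 - N)*mu*H with multiplicity C(N, n1); adjacent levels
# are spaced 2*mu*H apart, so omega(E) = C(N, n1) / (2*mu*H) per unit energy.
N = 10_000
mu_H = 1.0                # mu*H in arbitrary energy units
spacing = 2 * mu_H        # level spacing, 2*mu*H

def n1_of(E):
    # invert E = -(2*n1 - N)*mu*H
    return round((N - E / mu_H) / 2)

E0 = 0.0                  # window start (n1 = N/2)
levels = 10               # window width dE = 10 * spacing, i.e. dE >> mu*H

# exact count: total multiplicity of the levels inside [E0, E0 + dE)
exact = sum(comb(N, n1_of(E0 + k * spacing)) for k in range(levels))
# density-of-states estimate: omega(E0) * dE = C(N, n1(E0)) * (dE / spacing)
approx = comb(N, n1_of(E0)) * levels
print(approx / exact)     # close to 1: the estimate is excellent for large N
```

Because the multiplicity varies negligibly over the window when N is large, the single-level multiplicity times the number of levels in δE reproduces the exact count, which is the point of the answer.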
{ "domain": "physics.stackexchange", "id": 31696, "tags": "homework-and-exercises, thermodynamics, statistical-mechanics" }
Carrier density and size of Fermi surface
Question: I think I've heard that a large Fermi surface implies a large carrier concentration and a small Fermi surface implies a small carrier concentration, but I am not really sure what the relationship between the two is. Answer: Luttinger's theorem relates the volume enclosed by the Fermi surface to the particle density.
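As a free-electron illustration of that relation (my own sketch, with an assumed Fermi wavevector, not part of the original answer): counting two spin states per k-point, the density is twice the Fermi-sphere volume divided by (2π)³, which in 3D reduces to n = k_F³/(3π²).

```python
import math

# Free-electron sketch of Luttinger's relation (k_F is an assumed, typical
# metallic value): n = 2 * V_FermiSphere / (2*pi)**3, the factor 2 for spin.
k_F = 1.2e10                                  # Fermi wavevector in 1/m
n = 2 * (4 / 3 * math.pi * k_F**3) / (2 * math.pi)**3
# algebraically this equals k_F**3 / (3 * pi**2)
print(f"{n:.3e} carriers per m^3")            # ~6e28, a typical metallic density
```

So a larger Fermi surface (larger k_F) directly means a larger carrier density, which is the intuition the asker had heard.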
{ "domain": "physics.stackexchange", "id": 79701, "tags": "solid-state-physics, electronic-band-theory, fermi-energy, charge-carrier" }
2D SLAM with gmapping and openni_kinect
Question: I know it's possible to do 2D SLAM with Kinect using slam_gmapping, e.g. because of this post. Could anyone please tell me how exactly to do that? I've already installed pointcloud_to_laserscan, slam_gmapping, turtlebot and turtlebot_apps. But after running roslaunch turtlebot_navigation gmapping_demo.launch all I'm getting is an info saying: "Still waiting on map". What should I execute, and in which order, to obtain a map like in turtlebot_navigation's tutorial? OK, I think I partially got it. There were some errors in the default pointcloud_to_laserscan launch file. My working version below.

<launch>
  <!-- kinect and frame ids -->
  <include file="$(find openni_camera)/launch/openni_node.launch"/>
  <!-- Kinect -->
  <node pkg="nodelet" type="nodelet" name="openni_manager" output="screen" respawn="true" args="manager"/>
  <!-- fake laser -->
  <node pkg="nodelet" type="nodelet" name="kinect_laser" args="load pointcloud_to_laserscan/CloudToScan openni_camera">
    <param name="output_frame_id" value="/openni_depth_frame"/>
    <remap from="cloud" to="cloud_throttled"/>
  </node>
  <!-- throttling -->
  <node pkg="nodelet" type="nodelet" name="pointcloud_throttle" args="load pointcloud_to_laserscan/CloudThrottle openni_camera">
    <param name="max_rate" value="2"/>
    <remap from="cloud_in" to="/camera/depth/points"/>
    <remap from="cloud_out" to="cloud_throttled"/>
  </node>
</launch>

When I now run rosrun gmapping slam_gmapping I get a warning saying:

[ WARN] [1300374330.893690231]: MessageFilter [target=/odom ]: Dropped 100.00% of messages so far. Please turn the [ros.gmapping.message_notifier] rosconsole logger to DEBUG for more information.
[DEBUG] [1300374332.317853661]: MessageFilter [target=/odom ]: Removed oldest message because buffer is full, count now 5 (frame_id=/kinect_depth_frame, stamp=1300374331.992896)
[DEBUG] [1300374332.318014019]: MessageFilter [target=/odom ]: Added message in frame /kinect_depth_frame at time 1300374332.311, count now 5
[DEBUG] [1300374332.376768593]: MessageFilter [target=/odom ]: Removed oldest message because buffer is full, count now 5 (frame_id=/kinect_depth_frame, stamp=1300374332.060879)

I think the problem might be that there is no tf tree between /openni_camera and /map defined - how do I achieve this? Any help appreciated, Tom. Originally posted by tom on ROS Answers with karma: 1079 on 2011-03-16 Post score: 2 Answer: Hi Tom, This does not answer your gmapping question, but the launch file you posted above does not work for me. It runs without error but it does not produce a "/scan" topic for the fake laser scan. In other words, run your launch file, then in a separate terminal run the command "rostopic list | grep scan". It should return "/scan" but in my case it returns nothing.
The following modified version of your launch file does work for me:

<launch>
  <!-- kinect and frame ids -->
  <include file="$(find openni_camera)/launch/openni_node.launch"/>
  <!-- openni manager -->
  <node pkg="nodelet" type="nodelet" name="openni_manager" output="screen" respawn="true" args="manager"/>
  <!-- throttling -->
  <node pkg="nodelet" type="nodelet" name="pointcloud_throttle" args="load pointcloud_to_laserscan/CloudThrottle openni_manager">
    <param name="max_rate" value="2"/>
    <remap from="cloud_in" to="/camera/depth/points"/>
    <remap from="cloud_out" to="cloud_throttled"/>
  </node>
  <!-- fake laser -->
  <node pkg="nodelet" type="nodelet" name="kinect_laser" args="load pointcloud_to_laserscan/CloudToScan openni_manager">
    <param name="output_frame_id" value="/openni_depth_frame"/>
    <remap from="cloud" to="cloud_throttled"/>
  </node>
</launch>

The important difference is that the two nodelets are now loaded into openni_manager rather than openni_camera. Using this launch file, I can add a Laser Scan display in RViz, select the "scan" topic, and see the scan points. Regarding your gmapping question, you don't mention what kind of robot you are using. Is it an iRobot Create? If so, are you using an IMU with it? Also, have you run through all the Navigation tutorials starting here: http://www.ros.org/wiki/navigation/Tutorials? --patrick Originally posted by Pi Robot with karma: 4046 on 2011-03-17 This answer was ACCEPTED on the original site Post score: 4 Original comments Comment by tom on 2011-03-20: I'd like to use SLAM to navigate a quadrotor UAV in an indoor env and create a floor plan. I'd like to do it with an IMU and a Kinect. However an IMU on a VTOL w/o ext aiding drifts in yaw direction. So I guess a scan matcher must be used to provide yaw correction. Do I stand a chance / any hints? Comment by Pi Robot on 2011-03-18: Yeah, the navigation stuff has a lot of parts but is very nicely laid out in the tutorials.
Then, to build a simple robot to do all the cool gmapping stuff, you could use an iRobot Create or something home grown based on the ArbotiX or Serializer controllers, to name a couple. Comment by tom on 2011-03-18: Ok, I'll go through the tutorial above, probably everything becomes clear then. Thanks all. Comment by fergs on 2011-03-18: Just a Kinect is not enough -- gmapping requires an odometry tf transformation. You might look at using the canonical_scan_matcher to estimate an odometry transformation -- although I somewhat doubt that the Kinect data will work with it (due to the lack of features, because of the narrow fov). Comment by tom on 2011-03-17: Thanks Patrick. I'm not using any robot, I just have a Kinect sensor. Is it not enough to create a map of a non-changing environment? For the beginning it would suffice to obtain a map by just rotating the sensor in the yaw axis assuming no pitch or roll or translation. Is it not possible?
{ "domain": "robotics.stackexchange", "id": 5091, "tags": "navigation, kinect, openni-kinect, slam-gmapping, gmapping" }
Reciprocal space in the context of x-ray diffraction and a crystal lattice?
Question: Could someone please explain what reciprocal space means in the context of x-ray diffraction of a crystal lattice? Answer: The unit cell space and reciprocal space are Fourier transforms of each other. The unit cell indicates the stacking space between crystal elements. The reciprocal space is a similar kind of vectorial representation of the diffracted X-rays. The following reference explains it quite well, although it is not a simple concept. Unless you are smarter than me, you may have to read it 2 or 3 times to 'get it'. Reciprocal Lattices and Diffraction
{ "domain": "earthscience.stackexchange", "id": 874, "tags": "crystallography" }
What is the etymology of "bridge cam gauge"?
Question: Pictured below is a tool called a "bridge cam gauge", mentioned in this answer. I wonder why the words "bridge" and "cam" are used in its name. I found a definition of "cam" that might be relevant: A curved wedge, movable about an axis, used for forcing or clamping two pieces together. As I understand it, the 'cam' here is the flat roundish beaky plate that you can rotate about its hinge. But what about "bridge"? Wiktionary offers a variety of senses, which makes it hard for a non-native speaker of English. Also asked on ELU SE. P.S. It is also called a "cam type weld gauge" Answer: See here http://www.newmantools.com/gauge/wghowto.htm#wg4 (gauge type WG-4) for how it is used. It works the same way as a cam. The rotating part (marked "undercut or reinforcement" in your second picture) has a pointed end (marked with the arrow) that "follows" the profile of the parts to be welded, like the follower on a conventional cam, as in the graphic in your Wikipedia link. I think the "bridge" part just means that the gauge has two "feet" that are in contact with the part being measured, and "bridges the gap" between them. See http://marinenotes.blogspot.co.uk/2012/08/sketch-and-describe-bridge-gauge-how-is.html for a different type of "bridge gauge", which doesn't have a cam.
{ "domain": "engineering.stackexchange", "id": 923, "tags": "terminology" }
Image of concave mirror when object is farther than the focal point
Question: I have a set of concave/convex mirrors/lenses, and I have drawn out all of the ray tracing diagrams for each possible combination and type of image each optical element can produce. When I look at the ray tracing diagrams then reproduce the situation in the real world with myself as the "object", all the scenarios make sense to me except one: when I as the object stand beyond the focal point of a concave mirror. The image ends up as "real" and appears to be in front of the mirror. Now I understand that in the situation above, I as the object would stand at 6m and this would result in the image appearing at 12m. So if I placed a wall or screen at the 12m mark, the image would form on that surface (just like a movie theater screen).... at least I think I interpreted that correctly. But I still see myself when I look in the mirror?! If the image is supposed to form and come into focus behind me, what in the world am I seeing when looking at the surface of the mirror? I still see myself, just inverted. The surface of the concave mirror still has a reflection on it. It doesn't look quite like a normal flat mirror, in that the image doesn't really look like it's "inside" the mirror, more like it's right on the surface of it. It's really frustrating because every other possible arrangement of optical elements makes total sense to me, except this one. Any suggestions on how to think about it? Thank you for your time! Answer: You are just getting mixed up with what it means for an image to form and what it means to actually see an image with your own eyes. A good place to start is a flat mirror. What happens if you draw rays like in the image you have posted? You will not find all of the rays converging to a single point, even though you can certainly see something when you look in the mirror. As you mentioned before, we can think of putting a screen at the "image location", but for a flat mirror there is no such place; you could not form an image on a screen. 
What is going on? Why can you see something from a mirror if there is no image location? Well, your eyes are lenses, and they form images on your retina. So while there is no image being formed at the location of your eye, the light can be focused to then form an image on your retina. So, when we talk about real images being formed, we are not talking about the image one would see if their eyes were in that location. We are just talking about a place where light rays from an object converge after interacting with the mirror / lens. If you want to describe what you see through your eyes, you have to add in the additional lenses that make up your eyes.
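For reference, the 6 m / 12 m numbers in the question are consistent with the mirror equation 1/d_o + 1/d_i = 1/f with f = 4 m (the focal length is inferred from those figures, not stated in the post). A quick sketch:

```python
# Mirror-equation sketch for the numbers in the question; the focal length
# f = 4 m is inferred from "object at 6 m, image at 12 m" (1/6 + 1/12 = 1/4).
# Convention: d_i > 0 is a real image in front of a concave mirror, and a
# negative magnification means the image is inverted.
def image_distance(d_o, f):
    return 1.0 / (1.0 / f - 1.0 / d_o)

d_o, f = 6.0, 4.0
d_i = image_distance(d_o, f)
m = -d_i / d_o
print(round(d_i, 6), round(m, 6))   # 12.0 -2.0: real, inverted, as observed
```

The real image at 12 m is where a screen would catch a focused picture; what the asker's eye sees at the mirror is, as the answer explains, the eye's own lens re-imaging those diverging-or-converging rays onto the retina.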
{ "domain": "physics.stackexchange", "id": 79112, "tags": "optics, reflection, geometric-optics" }
Required energy to metabolize fat
Question: Tl;dr: What is the net joule value of fat if used for energy provision, compared to e.g. carbohydrates? So does 1 Joule of fat intake result in 1 - X Joule of energy provided to the cell, or does it work in a completely different way? My understanding is that if I take in one Joule of fat, my body takes in the energy needed to move ~100 g of mass up by one meter (Wikipedia). Now for my body to use that energy, it needs to convert it to some sort of glucose, right? From the Lipid Metabolism Wikipedia article I get that my fat needs to become a triglyceride to serve as an energy source. There are some steps involved, roughly the following:

- gastric peristalsis (basically muscle contractions to mix it all up)
- peristaltic contractions in the gut (again, mixing is key)
- lipase, also working in the gut, to finally create triglyceride
- then the fat gets "packaged" in chylomicrons and put into the bloodstream
- releasing energy to the cell involves lipase unpacking the chylomicrons, then splitting of the triglyceride into glycerin (the stuff we want) and fatty acids

Now this seems like a really cumbersome process to me, and I wonder what the net joule value is that arrives at the cell. Is it significant, or is this process so efficient that it doesn't matter anyway? Answer: So, what I was looking for, but was lacking the knowledge of, was the concept of "Thermic Effect of Food". I really didn't know how to phrase my question well, due to the fact that I tackled the problem from the wrong angle. Direct quote from the Wikipedia article, as it answers the question best:

The thermic effect of food is the energy required for digestion, absorption, and disposal of ingested nutrients. Its magnitude depends on the composition of the food consumed:
- Carbohydrates: 5 to 15% of the energy consumed [7]
- Protein: 20 to 35% [7]
- Fats: at most 5 to 15% [8]
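The quoted ranges translate directly into net figures. A rough sketch of that arithmetic (my own, per 1000 J ingested, using only the percentages quoted above):

```python
# Rough arithmetic on the thermic-effect ranges quoted above: net energy
# available per 1000 J ingested is intake * (1 - TEF fraction).
intake = 1000.0  # joules
tef = {
    "carbohydrates": (0.05, 0.15),
    "protein":       (0.20, 0.35),
    "fat":           (0.05, 0.15),
}
for nutrient, (lo, hi) in tef.items():
    print(f"{nutrient}: net {(1 - hi) * intake:.0f} to {(1 - lo) * intake:.0f} J")
# carbohydrates and fat: roughly 850 to 950 J net; protein: 650 to 800 J net
```

So the "cumbersome" digestion of fat costs at most on the order of 15% of the ingested energy, comparable to carbohydrates.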
{ "domain": "biology.stackexchange", "id": 9103, "tags": "biochemistry, human-physiology" }
Query performance of this Select Where each Column is a Select
Question: I would like to know how to improve the performance of the following query:

SELECT Kunde.Nachname, Kunde.Vorname, Kunde.Debitorennummer,
       sum(Abmeldestatus) AS Abmeldestatus,
       Ab1.Fruehstuck, Best1.Menu1, Best2.Menu2, Ab2.Vesper, Ab3.Abendbrot
FROM Kunde
LEFT OUTER JOIN (
    SELECT EssenTyp AS Fruehstuck, RefKundeId
    FROM Abbestellungen
    WHERE ([date] = @Datum) AND (EssenTyp = 0)
) AS Ab1 ON Ab1.RefKundeId = Kunde.KundeId
LEFT OUTER JOIN (
    SELECT BestellDetails.Anzahl AS Menu1, Bestellung.RefKundeId
    FROM Bestellung
    INNER JOIN BestellDetails ON Bestellung.BestellId = BestellDetails.RefBestellId
    WHERE (BestellDetails.Datum = @Datum) AND (BestellDetails.SpaltenNr = 0)
) AS Best1 ON Best1.RefKundeId = Kunde.KundeId
LEFT OUTER JOIN (
    SELECT BestellDetails.Anzahl AS Menu2, Bestellung.RefKundeId
    FROM Bestellung
    INNER JOIN BestellDetails ON Bestellung.BestellId = BestellDetails.RefBestellId
    WHERE (BestellDetails.Datum = @Datum) AND (BestellDetails.SpaltenNr = 1)
) AS Best2 ON Best2.RefKundeId = Kunde.KundeId
LEFT OUTER JOIN (
    SELECT EssenTyp AS Vesper, RefKundeId
    FROM Abbestellungen
    WHERE ([date] = @Datum) AND (EssenTyp = 1)
) AS Ab2 ON Ab2.RefKundeId = Kunde.KundeId
LEFT OUTER JOIN (
    SELECT EssenTyp AS Abendbrot, RefKundeId
    FROM Abbestellungen
    WHERE ([date] = @Datum) AND (EssenTyp = 2)
) AS Ab3 ON Ab3.RefKundeId = Kunde.KundeId
LEFT OUTER JOIN (
    SELECT AbmeldeId AS Abmeldestatus, RefKundenId
    FROM Abmelden
    WHERE (StartDate <= @Datum) AND (EndDate IS NULL OR EndDate >= @Datum)
) AS Abm ON Abm.RefKundenId = Kunde.KundeId
INNER JOIN dbo.iter_intlist_to_tbl(@Kunden) AS i ON Kunde.KundeId = i.number
GROUP BY Kunde.Nachname, Kunde.Vorname, Kunde.Debitorennummer,
         Ab1.Fruehstuck, Best1.Menu1, Best2.Menu2, Ab2.Vesper, Ab3.Abendbrot

The query above takes 13 seconds to execute for 100 Kunden, which is not acceptable. My function iter_intlist_to_tbl takes a list of numbers (IDs) and creates a table with them.
I build my SELECT from multiple sub-SELECTs, which are OUTER JOINed to return what I expect. The result looks like:

Nachname | Vorname | Debitorennummer | Fruehstuck | Menu1 | Menu2 | Vesper | Abendbrot
--------------------------------------------------------------------------------------
Schmiedt | Lee     | 123456789       | NULL       | 1     | NULL  | 1      | 1
Müller   | Marie   | 123456700       | 1          | NULL  | NULL  | NULL   | NULL

Execution plan: Answer: You could try to use a PIVOT statement in order to convert values of several rows into columns. I show it for Abbestellungen.EssenTyp here. You will have to combine it with the rest of your query.

SELECT RefKundeId,
       [0] AS Fruehstuck,
       [1] AS Vesper,
       [2] AS Abendbrot
FROM (SELECT RefKundeId, EssenTyp
      FROM Abbestellungen
      WHERE [date] = @Datum) a
PIVOT (
    MAX (EssenTyp)
    FOR EssenTyp IN ([0], [1], [2])
) AS pvt;

I am not sure which value you want to return for the EssenTyp columns. Your query returns the EssenTyp id, but your example result set returns 1. So maybe you want a count? Then you would have to replace MAX(EssenTyp) by SUM(1) or COUNT(EssenTyp). If you are storing a number of meals, say in a column Anzahl (German 'number'), then write SUM(Anzahl). In any case you need an aggregate function for the pivot operation. Will it be faster? I don't know. Make experiments. Instead of making two independent pivot queries for Abbestellungen and Bestellung, you could combine the two in a UNION query and pivot them together:

SELECT RefKundeId,
       [0] AS Fruehstuck,
       [1] AS Vesper,
       [2] AS Abendbrot,
       [10] AS Menu1,
       [11] AS Menu2
FROM (
    SELECT RefKundeId, EssenTyp AS nr, 1 AS value
    FROM Abbestellungen
    WHERE [date] = @Datum
    UNION ALL
    SELECT B.RefKundeId, D.SpaltenNr + 10 AS nr, D.Anzahl AS value
    FROM Bestellung B
    INNER JOIN BestellDetails D ON B.BestellId = D.RefBestellId
    WHERE D.Datum = @Datum
) A
PIVOT (
    SUM (value)
    FOR nr IN ([0], [1], [2], [10], [11])
) AS pvt;
{ "domain": "codereview.stackexchange", "id": 15194, "tags": "performance, sql, sql-server" }
I Cannot endure anymore. Gazebo Team, could you please carefully release your project
Question: With all due respect: if a powerful piece of software causes countless problems through careless releases and maintenance, it becomes totally useless, and this is Gazebo. Could you please kindly have a look at the tutorials and release instructions on your website? Almost none of them works!

waiting for namespaces..... when executing
Unable to locate package gazeboXX.... when installing

Come on, what is the point of releasing your project and instructions when the users cannot get them to work? Thank you for your attention. Originally posted by puav on Gazebo Answers with karma: 3 on 2016-03-31 Post score: -5 Original comments Comment by Ben B on 2016-04-01: -1. People here are happy to answer specific questions you have -- general complaints are really not constructive. If you have a question on how to do a particular thing: ask. If you have a feature request, make it. If you have a bug report, file it: https://bitbucket.org/osrf/gazebo/issues?status=new&status=open Answer: Sorry to hear that you had a bad experience. Hopefully we'll be able to assist you. I'll need a bit more information though. Could you update your post with answers to these questions?

What version of Gazebo are you using?
What operating system are you using (such as Ubuntu Trusty, or OSX)?
What tutorial(s) did you try to follow?
What command did you execute that resulted in "waiting for namespaces....."?
What command did you execute that resulted in "Unable to locate package gazeboXX...."?

Every release we run through our tutorials to make sure they work. Sounds like we missed something, and we'll try to correct this situation as fast as possible. Thanks for letting us know about your problem. ** Update ** So far you have said that you followed the tutorial exactly, and still can't install gazebo7. If this is the case, then something strange is going on.
Can you please post the output of the following commands:

sudo sh -c 'echo "deb http://packages.osrfoundation.org/gazebo/ubuntu-stable `lsb_release -cs` main" > /etc/apt/sources.list.d/gazebo-stable.list'

wget http://packages.osrfoundation.org/gazebo.key -O - | sudo apt-key add -

sudo apt-get update (just post the last couple of lines of this one)

sudo apt-get install gazebo7

Originally posted by nkoenig with karma: 7676 on 2016-04-01 This answer was ACCEPTED on the original site Post score: 3 Original comments Comment by nkoenig on 2016-04-04: The author of this question had Ubuntu 12.04, not 14.04 as originally reported. Therefore they could not install gazebo7 as indicated here (http://gazebosim.org/#status).
{ "domain": "robotics.stackexchange", "id": 3898, "tags": "installation" }
Removing whitespaces in a string
Question: I wrote this function to remove whitespace from strings. Please help me improve it. I intend to use the function for a Big Integer ADT.

#include <iostream>
#include <string>
#include <string.h>
#include <stdlib.h> // needed for malloc; missing in the original

void rs(char* str){
    int i(0);
    int j(0);
    // copy characters down over the spaces, in place
    while((*(str + i) = *(str + j++)) != '\0')
        if(*(str + i) != ' ')
            i++;
    return;
}

int main() {
    std::string str = "Hello World";
    char* result = strcpy((char*)malloc(str.length()+1), str.c_str());
    rs(result);
    std::cout << result << std::endl;
    free(result); // the original leaked this allocation
    return 0;
}

Answer:

char* result = strcpy((char*)malloc(str.length()+1), str.c_str());

This seems more suited for a C program than a C++ one. In fact, with C++11, you don't need to write your own function. Behold: std::remove_if. Note: std::remove_if removes all elements satisfying specific criteria from the range [first, last) and returns a past-the-end iterator for the new end of the range. A call to remove is typically followed by a call to the container's erase method, which erases the unspecified values and reduces the physical size of the container to match its new logical size. This is what you would do:

str.erase(std::remove_if(str.begin(), str.end(), ::isspace), str.end());

I am not sure if this is the safest or most efficient method, but I think it's definitely an improvement on what you have done.
{ "domain": "codereview.stackexchange", "id": 9941, "tags": "c++, beginner, strings" }
Arduino IDE does not find msg header
Question: I'm trying to learn rosserial_arduino. I've worked through the tutorials and am now going back and walking through them with my own code. My initial test code will use rosserial_arduino to turn 8 LEDs on and off. I've created the package (arduino) and a custom message called 8Leds.msg. I added the lines per the tutorials to package.xml and CMakeLists.txt. I source my files and run catkin_make. Everything seems to compile properly and the 8Leds.h file is created in devel/include/arduino. However, when I try to compile the sketch for the Arduino I get this error:

In file included from light_8_leds_test.ino:2:0:
/home/jcicolani/catkin_ws/devel/include/arduino/8Leds.h:42:18: fatal error: string: No such file or directory
compilation terminated.

I don't get it. It's even showing the exact path and file name. Thoughts? Originally posted by jcicolani on ROS Answers with karma: 3 on 2014-08-12 Post score: 0 Answer: You need to run the following again every time you change or add msg files in your workspace:

cd <sketchbook>/libraries
rm -rf ros_lib
rosrun rosserial_arduino make_libraries.py .

See: http://wiki.ros.org/rosserial_arduino/Tutorials/Arduino%20IDE%20Setup#Install_ros_lib_into_the_Arduino_Environment Note that your Arduino code needs to be compiled against the headers in your /libraries folder, not against those in your catkin workspace.... Originally posted by Wolf with karma: 7555 on 2014-08-13 This answer was ACCEPTED on the original site Post score: 3
{ "domain": "robotics.stackexchange", "id": 19024, "tags": "ros, arduino, msg, compilation" }
What's wrong with the reduction from integer programming to linear programming?
Question: I'm confused about polynomial-time reduction and NP-hardness. Let's say that the following integer programming is NP-hard. $\min_{x \in K} f(x)$, where $K$ is a finite subset of $\mathbb{N}$. But it is a special case of the following linear programming. $\min_{x \in X} f(x)$, where $X$ is a compact subset of $\mathbb{R}$. Then there is a polynomial-time reduction from IP to a special case of LP, where the solution space is restricted to integer points in $X$. Then LP is NP-hard, which is wrong. What is wrong with this logic, especially with the reduction part? Answer: To be precise, a polynomial-time algorithm for a linear program requires $X$ to be a convex set in $\mathbb{R}$ (not simply a subset of $\mathbb{R}$). Since $K \subset \mathbb{N}$ is not a convex set, an algorithm for a linear program might not work for an integer linear program. To see why decreasing the domain of the input space might not always simplify things, consider the following simple problems:

P1: Given a set $S$ of $n$ positive integers, find an integer in $S$ with minimum value.
P2: Given a set $S$ of $n$ positive integers, find an integer in $S \cup \{0\}$ with minimum value.

To solve P1, the algorithm needs to go through every element of $S$ (in the worst case). Therefore, its time complexity is $\Theta(n)$. To solve P2, the algorithm can simply output $0$ without doing any operation.
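The P1/P2 example generalizes: restricting a feasible set can make a problem harder, which is why "IP is a special case of LP" does not hand you an LP algorithm for IP. A toy sketch of the same phenomenon (my own, not from the post):

```python
# Toy illustration (mine, not from the post): minimize the same objective
# over an interval vs. over only its integer points. The continuous optimum
# is not attained at any integer, so the "relaxed" solver does not solve
# the restricted problem.
f = lambda x: (x - 1.5) ** 2
relaxed_opt = 1.5                          # minimizer over [0, 3], by calculus
integer_opt = min(range(4), key=f)         # brute force over {0, 1, 2, 3}
print(f(relaxed_opt), integer_opt, f(integer_opt))   # 0.0 1 0.25
```

The relaxed optimum 1.5 carries no certificate for the integer problem; in general, solving the restriction needs a genuinely different (and here NP-hard) computation.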
{ "domain": "cs.stackexchange", "id": 19465, "tags": "complexity-theory, polynomial-time-reductions" }
Single class which holds response and error message
Question: I am working on a library which will make an HTTP call to my REST service based on inputs passed to this library. Whatever response comes back from the service (a successful response, a failure from the service, or something that happened in this library itself, like a timeout or another error), I return a response object to the customer, which the customer can inspect and iterate over accordingly. My response object is like this as of now; this is what I return to the customer, and the customer will use this object and iterate over it to figure out whether it is a successful response or some failure, and use it accordingly.

public class DataResponse {
    // response data
    private final String response;
    private final String date;
    private final String confidence;
    private final String mask;
    // what are the errors?
    private final ErrorCode error;
    // whether it is a successful response or not
    private final StatusCode status;

    public DataResponse(String response, String date, String confidence, String mask, ErrorCode error, StatusCode status) {
        this.response = response;
        this.date = date;
        this.confidence = confidence;
        this.mask = mask;
        this.error = error;
        this.status = status;
    }

    // getters here
}

And below is my ErrorCode class, which contains all the errors that can happen at the client level or the service level, so that the customer can know what happened.
public enum ErrorCode {
    OK(200, "NONE", "Response is success."),
    NO_CONTENT(204, "No Content", "Response is success."),
    CLIENT_TIMEOUT(1007, "Timeout", "Timeout has occurred on the Client.");
    // some more error messages, keeping it short to show the idea

    private final int code;
    private final String status;
    private final String description;

    private ErrorCode(int code, String status, String description) {
        this.code = code;
        this.status = status;
        this.description = description;
    }

    // getters here

    @Override
    public String toString() {
        return code + " : " + status;
    }
}

And below is the StatusCode class, on the basis of which the customer will decide whether it is a successful response or not.

public enum StatusCode {
    SUCCESS,
    ERROR;
}

Customers of this library will use the DataResponse object in two ways: first, they can serialize the DataResponse object to JSON as per their needs, but that's up to them. Second, they will use the DataResponse object as it is and just iterate over it normally. So I wanted to make sure, from both perspectives, whether the current design is good, and if not, what the improvements could be. Please review my code. If you have any questions, please ask. If you have any comments, please give them. Answer: Is that error, or status, or both, or neither? This API is a bit confusing. You have ErrorCode and StatusCode enums. But ErrorCode contains things like:

OK(200, "NONE", "Response is success."),
NO_CONTENT(204, "No Content", "Response is success."),
CLIENT_TIMEOUT(1007, "Timeout", "Timeout has occurred on the Client.");

So two of the "error codes" indicate success.

public enum StatusCode {
    SUCCESS,
    ERROR;
}

... And StatusCode can indicate success or error. ... The confusing nature of these elements makes it difficult to understand the API and the distinction between its building blocks. Your error codes resemble HTTP status codes.
Why not go a bit further in following HTTP practices and have just one Status enum that contains status codes like in the HTTP protocol, some of which are success and others error?

DataResponse

I like that the class is immutable. But I'm concerned about the long parameter list of the constructor, especially considering that the first 4 values are all of String type, which can lead to errors such as mistaken ordering. I would suggest adding a builder that has well-named setters for the parameter values, possibly allowing some sensible defaults. Lastly, the type of the date field is a String. It would be good to add a JavaDoc about the required format. (For that matter, a JavaDoc for the other String parameters would be good too.)
{ "domain": "codereview.stackexchange", "id": 17413, "tags": "java, error-handling, http, library, rest" }
OpenCL implementations of IQZZ and IDCT for MJPEG
Question: I am using this code for MJPEG decoding and I am trying to make two functions (IQZZ and IDCT) run faster on the GPU (NVIDIA Tesla k20c). I am using the OpenCL framework to accomplish this task. I have already successfully offloaded these functions to the GPU and am getting the expected output. However, the output video is very slow after offloading the code to the GPU. My .cl file is as follows: /******************************* IDCT *************************************/ void idct_1D(__local int *Y); __kernel void IDCT(__global int* input, __global uchar* output) { unsigned int kid= get_global_id(0); __local int Y[64]; int k= get_global_id(0); int l; int lid= get_global_id(1); __local int Yc[8]; if (k < 8) { for (l = 0; l < 8; l++) { Y(k, l) = SCALE(input[(k << 3) + l], S_BITS); } idct_1D(&Y(k, 0)); } if (lid < 8) { for (k = 0; k < 8; k++) { Yc[k] = Y(k, lid); } idct_1D(Yc); for (k = 0; k < 8; k++) { int r = 128 + DESCALE(Yc[k], S_BITS + 3); r = r > 0 ? (r < 255 ? r : 255) : 0; X(k, lid) = r; } } } void idct_1D(__local int *Y) { int z1[8], z2[8], z3[8]; but(Y[0], Y[4], z1[1], z1[0]); rot(1, 6, Y[2], Y[6], &z1[2], &z1[3]); but(Y[1], Y[7], z1[4], z1[7]); z1[5] = CMUL(sqrt2, Y[3]); z1[6] = CMUL(sqrt2, Y[5]); but(z1[0], z1[3], z2[3], z2[0]); but(z1[1], z1[2], z2[2], z2[1]); but(z1[4], z1[6], z2[6], z2[4]); but(z1[7], z1[5], z2[5], z2[7]); z3[0] = z2[0]; z3[1] = z2[1]; z3[2] = z2[2]; z3[3] = z2[3]; rot(0, 3, z2[4], z2[7], &z3[4], &z3[7]); rot(0, 1, z2[5], z2[6], &z3[5], &z3[6]); but(z3[0], z3[7], Y[7], Y[0]); but(z3[1], z3[6], Y[6], Y[1]); but(z3[2], z3[5], Y[5], Y[2]); but(z3[3], z3[4], Y[4], Y[3]); } /*---------------IQZZ----------------------------*/ __kernel void iqzz_block(__global int in[64], __global int out[64], __global uchar table[64]) { uint index= get_global_id(0); int priv_in[64]; uchar priv_table[64]; int priv_out[64]; if (index < 64) { priv_in[index]= in[index]; priv_table[index]= table[index]; priv_out[G_ZZ[index]] = priv_in[index] * 
priv_table[index]; out[G_ZZ[index]]= priv_out[G_ZZ[index]]; } } For IDCT, I simply copied and pasted constants from the .c file. I haven't included the constants in my query for conciseness. Details of the constants can be found here. In main.c, I have simply substituted the function calls with OpenCL commands to transfer data to the device, execute the kernel there and transfer the results back to the CPU. My main.c looks like this: /* Get Platform */ ret= clGetPlatformIDs(1, &platform_id, &ret_num_platforms); /* Get Device */ ret= clGetDeviceIDs(platform_id, CL_DEVICE_TYPE_GPU, 1, &device_id, &ret_num_devices); /* Create Context */ context = clCreateContext(0, 1, &device_id, NULL, NULL, &ret); /* Create Command Queue */ command_queue = clCreateCommandQueue(context, device_id, 0, &ret); /* Create kernel from source */ program = clCreateProgramWithSource(context, 1, (const char **)&source_str, (const size_t *)&source_size, &ret); ret= clBuildProgram(program, 1, &device_id, NULL, NULL, NULL); //--------kernel for iqzz-----------// kernel= clCreateKernel(program, "iqzz_block", &ret); //-------kernel for idct-----------// cos_kernel= clCreateKernel(program, "IDCT", &ret); cl_mem block_GPU = clCreateBuffer(context, CL_MEM_READ_WRITE, 64 * sizeof(cl_int), NULL, &ret); //This will serve as the output buffer for iqzz cl_mem DCT_Input = clCreateBuffer(context, CL_MEM_READ_WRITE| CL_MEM_COPY_HOST_PTR, 64 * sizeof(cl_int), unZZ_MCU, &ret); chk(ret, "clCreateBuffer"); //Output buffer cl_mem DCT_Output = clCreateBuffer(context, CL_MEM_READ_WRITE| CL_MEM_COPY_HOST_PTR, (MCU_sx * MCU_sy * max_ss_h * max_ss_v) + 4, YCbCr_MCU_ds[component_index] + (64 * chroma_ss), &ret); //Regular code from main.c follows............ case M_SOS: //regular code from main.c....... //The Relevant part starts here......
for (index_X = 0; index_X < nb_MCU_X; index_X++) { for (index_Y = 0; index_Y < nb_MCU_Y; index_Y++) { for (index = 0; index < SOS_section.n; index++) { int component_index = component_order[index]; int nb_MCU = ((SOF_component[component_index].HV>> 4) & 0xf)*(SOF_component[component_index].HV & 0x0f); for (chroma_ss = 0; chroma_ss < nb_MCU; chroma_ss++) { unpack_block(movie, &scan_desc,index, MCU); /////--------------Transfer data to buffers----------------//////////// ret = clEnqueueWriteBuffer(command_queue, block_GPU, CL_TRUE, 0, 64 * sizeof(cl_int), MCU, 0, NULL, NULL); ret = clEnqueueWriteBuffer(command_queue, qtable_GPU, CL_TRUE, 0, 64 * sizeof(cl_uchar), DQT_table[SOF_component[component_index].q_table], 0, NULL, NULL); cl_mem qtable_GPU = clCreateBuffer(context, CL_MEM_READ_WRITE, 64 * sizeof(cl_uchar), NULL, &ret); /* Set OpenCL kernel arguments */ ret = clSetKernelArg(kernel, 0, sizeof(cl_mem), (void *)&block_GPU); ret = clSetKernelArg(kernel, 1, sizeof(cl_mem), (void *)&DCT_Input); ret = clSetKernelArg(kernel, 2, sizeof(cl_mem), (void *)&qtable_GPU); start_time = wtime(); size_t global=64; size_t local= 16; ret = clEnqueueNDRangeKernel(command_queue, kernel, 1, NULL, &global, &local, 0, NULL, NULL); run_time += wtime() - start_time; //Copy result from device to host ret = clEnqueueReadBuffer(command_queue, DCT_Input, CL_TRUE, 0, 64 * sizeof(cl_int), &unZZ_MCU, 0, NULL, NULL); /////---------------IDCT-----------------////// ret = clSetKernelArg(cos_kernel, 0, sizeof(cl_mem), (void *)&DCT_Input); ret |= clSetKernelArg(cos_kernel, 1, sizeof(cl_mem), (void *)&DCT_Output); //No. 
of work-items const size_t globalForInverseDCT[2]= {8, 8}; ret = clEnqueueNDRangeKernel(command_queue, cos_kernel, 1, NULL, &globalForInverseDCT, &localForInverseDCT, 0, NULL, NULL); ret = clEnqueueReadBuffer(command_queue, DCT_Output, CL_TRUE, 0, (MCU_sx * MCU_sy * max_ss_h * max_ss_v) + 4, YCbCr_MCU_ds[component_index] + (64 * chroma_ss), 0, NULL, NULL); } upsampler(YCbCr_MCU_ds[component_index],YCbCr_MCU[component_index],Horizontal,Vertical,max_ss_h,max_ss_v); } //more code which is not immediately relevant follows...... } How can I modify my iqzz and idct kernels to make them run faster on the GPU? The details of my GPU are as follows: DEVICE_NAME = Tesla K20c DEVICE_VENDOR = NVIDIA Corporation DEVICE_VERSION = OpenCL 1.2 CUDA DRIVER_VERSION = 352.21 DEVICE_MAX_COMPUTE_UNITS = 13 DEVICE_MAX_CLOCK_FREQUENCY = 705 DEVICE_GLOBAL_MEM_SIZE = 5032706048 CL_DEVICE_ERROR_CORRECTION_SUPPORT: yes CL_DEVICE_LOCAL_MEM_TYPE: local CL_DEVICE_LOCAL_MEM_SIZE: 48 KByte CL_DEVICE_MAX_CONSTANT_BUFFER_SIZE: 64 KByte CL_DEVICE_QUEUE_PROPERTIES: CL_QUEUE_OUT_OF_ORDER_EXEC_MODE_ENABLE CL_DEVICE_QUEUE_PROPERTIES: CL_QUEUE_PROFILING_ENABLE Answer: Indent your loop bodies. Actually check ret - you're uselessly assigning and discarding it every time. Use better variable names: avoid single letters (Y, k, l) and generic names (index). It appears all the work of your code is inside four nested loops: Try to vectorise - rewrite the inner block to operate on multiple pixels/components/chromas simultaneously. Optimise the iteration order for caching and branch prediction. Pull up anything that doesn't actually have to be in the inner loop (like the runtime calculations?) The main problem with offloading to a GPU is that the data transfer is very slow. You need to minimise the number of copies to/from the GPU and take maximum advantage of its parallelism.
If you cannot do that then it will always be faster to stay on the CPU (especially with SIMD); even if the individual operations are slower it can still get through the data quicker.
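The vectorisation advice applies directly to IQZZ: the kernel body is a pure per-coefficient multiply plus a permutation, so many 64-coefficient blocks can be processed with one fused array operation instead of a per-block loop. A rough NumPy sketch of the idea (not OpenCL; the zig-zag order and quantisation table below are random placeholders standing in for the real G_ZZ and DQT_table):

```python
import numpy as np

# Placeholder tables: in the real decoder these would be G_ZZ and DQT_table.
rng = np.random.default_rng(0)
G_ZZ = rng.permutation(64)             # stand-in zig-zag permutation
qtable = rng.integers(1, 32, size=64)  # stand-in quantisation table

def iqzz_batched(blocks):
    """Dequantise and de-zig-zag a whole batch of 64-coefficient blocks."""
    out = np.empty_like(blocks)
    out[:, G_ZZ] = blocks * qtable     # one fused operation for all blocks
    return out

blocks = rng.integers(-128, 128, size=(1000, 64))
batched = iqzz_batched(blocks)

# Reference: the scalar per-block, per-coefficient loop from the kernel.
ref = np.empty_like(blocks)
for b in range(blocks.shape[0]):
    for i in range(64):
        ref[b, G_ZZ[i]] = blocks[b, i] * qtable[i]
```

The same restructuring pays off on the GPU: upload many MCUs with a single clEnqueueWriteBuffer and launch one kernel over all of them, rather than one transfer and one launch per 64-int block.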
{ "domain": "codereview.stackexchange", "id": 28065, "tags": "performance, c, image, signal-processing, opencl" }
Cirq.simulate expectation value of a Hamiltonian
Question: I want to simulate the final state of an ansatz in cirq using simulate. Now I want to calculate the expectation value of a Hamiltonian. How do I do this? I can only find simulator.run examples in cirq. But I want to access the wavefunction and therefore would need simulator.simulate. Is there a function in cirq I can use or how could I do this? Answer: You can get the wavefunction from cirq.final_state_vector(circuit). Then you can define your observable as an instance of cirq.PauliSum, on which you will be able to use the expectation_from_state_vector() method to get the expectation value.
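For reference, the expectation value described here is just $\langle\psi|H|\psi\rangle$ contracted against the simulated state vector. A minimal NumPy sketch of that final step (using a hand-built two-qubit Hamiltonian for illustration, not a cirq.PauliSum):

```python
import numpy as np

# Pauli matrices
I2 = np.eye(2)
X = np.array([[0.0, 1.0], [1.0, 0.0]])
Z = np.diag([1.0, -1.0])

# Example Hamiltonian: H = Z(x)Z + 0.5 * X(x)I
H = np.kron(Z, Z) + 0.5 * np.kron(X, I2)

# Pretend this came from the simulator: |psi> = (|00> + |11>)/sqrt(2)
psi = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2)

expectation = np.real(np.conj(psi) @ H @ psi)
print(expectation)  # <psi|H|psi> = 1.0 for this state and Hamiltonian
```

In cirq itself, expectation_from_state_vector performs this same contraction for you (it additionally needs a qubit-to-index map).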
{ "domain": "quantumcomputing.stackexchange", "id": 2152, "tags": "programming, hamiltonian-simulation, cirq" }
Lorentz invariance of the integration measure
Question: This is in regards to the Lorentz invariance of a classical scalar field theory. We assume that the action, $S= \int d^4 x\, \mathcal{L}$, is invariant under a Lorentz transformation. How do you prove that the integration measure $d^4 x$ is Lorentz invariant? Answer: It's invariant because the Lorentz group is $SO(3,1)$ and the letter "S" stands for "special" which mathematically means the condition $$\det M = +1.$$ But the determinant is exactly the coefficient by which the volume form gets multiplied when the coordinates are Lorentz-transformed: $$ x \to M\cdot x\quad \Rightarrow \quad d^4 x \to \det M \cdot d^4 x $$ (this determinant-based transformation rule may also be derived if one views the volume form as an antisymmetric tensor with 4 indices) so if the determinant is equal to $+1$, the measure doesn't change. Well, $d^4 x$ is usually interpreted as $|d^4 x|$, so it's actually invariant under the whole $O(3,1)$, including the matrices with $\det M =-1$. And the condition $\det M=\pm 1$ (with "OR") follows from the orthogonality condition itself, so the adjective "special" is really unnecessary when we're already focusing on pseudo-orthogonal matrices.
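A quick numerical check of the determinant argument (a sketch; the boost velocity $\beta = 0.6$ is an arbitrary choice):

```python
import numpy as np

beta = 0.6                            # arbitrary v/c for the example
gamma = 1.0 / np.sqrt(1.0 - beta**2)

# Boost along x in (t, x, y, z) coordinates
M = np.array([
    [gamma,        gamma * beta, 0.0, 0.0],
    [gamma * beta, gamma,        0.0, 0.0],
    [0.0,          0.0,          1.0, 0.0],
    [0.0,          0.0,          0.0, 1.0],
])

# d^4x picks up a factor det M under x -> M x; for a boost this is
# gamma^2 (1 - beta^2) = 1 exactly.
det = np.linalg.det(M)
print(det)
```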
{ "domain": "physics.stackexchange", "id": 5802, "tags": "special-relativity, lagrangian-formalism, lorentz-symmetry, action" }
Find all non-isomorphic graphs with a particular degree sequence
Question: I have a degree sequence and I want to generate all non-isomorphic graphs with that degree sequence, as fast as possible. The only way I found is generating the first graph using the Havel-Hakimi algorithm and then getting other graphs by permuting all pairs of edges and trying to use an edge switching operation (E={{v1,v2},{v3,v4}}, E'= {{v1,v3},{v2,v4}}; this does not change vertex degrees). Are there any faster algorithms? Answer: To the best of my knowledge, nothing considerably better is known. For instance, your problem was part of the Combinatorics REGS 2012 program at Illinois. With that being said, I can think of three possible engineering approaches that might work: Use what you have (perhaps as a baseline for benchmarking other approaches). Use the suggestion of Yuval: use the configuration model to generate all such graphs, and check for isomorphism between them. For instance, you can use nauty, which is fast and highly optimized. If your degree sequences are of length $n$ for some small $n$ (or you can afford to do a lot of preprocessing), generate all $n$-vertex graphs. For this, you can either download them all from McKay's homepage, or use e.g., geng. If you have additional information, you might use an even more specific generator such as plantri (geng has options too). Once you have the graphs, iterate over them and check whether they have the right degree sequence. By the way, Sage also has much of this functionality built-in, see e.g., here (for instance, search for "Generate all graphs with a specified degree sequence").
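For reference, the Havel-Hakimi construction mentioned in the question can be sketched in a few lines (a simple illustrative implementation, not optimised; it returns one realisation of the sequence as an edge list, or None if the sequence is not graphical):

```python
def havel_hakimi(degrees):
    """Build one graph realising the degree sequence, or return None."""
    # Pair each degree with a vertex label so edges can be recorded.
    nodes = sorted(enumerate(degrees), key=lambda p: -p[1])
    edges = []
    while nodes and nodes[0][1] > 0:
        v, d = nodes.pop(0)            # vertex of largest remaining degree
        if d > len(nodes):
            return None                # not graphical
        for i in range(d):             # connect it to the next d vertices
            u, du = nodes[i]
            if du == 0:
                return None            # not graphical
            nodes[i] = (u, du - 1)
            edges.append((v, u))
        nodes.sort(key=lambda p: -p[1])
    return edges

print(havel_hakimi([2, 2, 2]))   # a triangle
print(havel_hakimi([3, 1]))      # None: not graphical
```

Starting from the edge list it returns, the edge-switching walk described in the question explores the other realisations of the same sequence.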
{ "domain": "cs.stackexchange", "id": 6598, "tags": "algorithms, graphs" }
Meaning of Free (Arbitrary Abstract Algebra Term)
Question: I'm currently learning abstract algebra and the word free appears (free monoid, free vector space) throughout the literature. Is there a general (and simple) definition of the word (and possibly examples in functional programming) or is it merely a highly overloaded word? Many Thanks! Answer: Yes, and in fact there is even a general notion of an algebraic theory, and thus algebraic object. An algebraic theory consists of: a sort; constants and function symbols; algebraic axioms, that is universally quantified sentences which may contain equality, but no other logical symbols (i.e. $\exists,\vee,\to,\neg$). It is worth noting that the theory of fields is not algebraic, thus there are no free fields. So the theory of monoids $\mathcal M$ has a sort $M$, constant $e : M$, function symbol $\mu : M \times M \to M$, and axioms $\forall x.\mu(x,e)=x$ $\forall x.\mu(e,x)=x$ $\forall x,y,z.\mu(\mu(x,y),z)=\mu(x,\mu(y,z))$ In general, given an algebraic theory $T$ we may consider the set of terms in this theory. Categorically, a free algebra for $T$ is the initial object in the category of algebras for $T$. The standard way to construct the free algebra is to take the term model for $T$, that is the underlying set of the free algebra is the set of terms of $T$ modulo the equivalence relation generated by the axioms of $T$. So what we mean by a free structure in the theory $T$ generated by the set $B$ is to consider a free algebra for the theory $T_B$ containing an additional constant symbol for each element of $B$. This description is concrete but a bit complicated. There is a much simpler but more abstract construction using the theory of monads, because an algebraic theory corresponds precisely to a monad.
An example for monoids in a functional programming language could look as follows, by interpreting the theory of monoids as a monad: type 'a mon = | Id | Bin of 'a mon * 'a mon | Var of 'a let return (x : 'a) : 'a mon = Var x let rec bind (x : 'a mon) (f : 'a -> 'b mon) : 'b mon = match x with | Id -> Id | Bin (s,t) -> Bin (bind s f, bind t f) | Var u -> f u let rec eq (a_eq : 'a -> 'a -> bool) (s : 'a mon) (t : 'a mon) : bool = match (s,t) with | (Bin (Id,u), v) | (Bin (u,Id), v) | (u, Bin (Id,v)) | (u, Bin (v,Id)) -> eq a_eq u v | (Bin (Bin (u1,u2),u3), v) -> eq a_eq (Bin (u1, Bin (u2,u3))) v | (u, Bin (Bin (v1,v2),v3)) -> eq a_eq u (Bin (v1, Bin (v2,v3))) | (Id,Id) -> true | (Bin (u1,v1), Bin (u2,v2)) -> (eq a_eq u1 u2) && (eq a_eq v1 v2) | (Var x, Var y) -> a_eq x y | _ -> false type two = A | B let two_eq x y : bool = match (x,y) with | (A,A) | (B,B) -> true | _ -> false Then the type two mon with the relation eq two_eq gives the free monoid on two elements. The function return : two -> two mon gives us the two generators return A and return B. The function bind : two mon -> (two -> 'b mon) -> 'b mon tells us that a map from the free monoid on two elements to another (in this case free) monoid is uniquely determined by what the elements of two are mapped to.
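Concretely, modulo the relation eq above, an element of the free monoid is determined by the sequence of generators it contains, so the free monoid on a set can also be realised as finite words (lists) over it, with concatenation as multiplication and the empty word as the unit. A small Python sketch of this normal-form view (the helper name lift is my own, purely illustrative):

```python
# Free monoid on a set B, realised as lists over B.
unit = []

def mul(s, t):
    return s + t                      # concatenation

def inject(x):                        # the 'return' / generator map
    return [x]

def lift(f):
    """Extend f : B -> M to a monoid homomorphism from words to (M, op, e)."""
    def hom(word, op, e):
        acc = e
        for x in word:
            acc = op(acc, f(x))
        return acc
    return hom

# Monoid laws hold definitionally for lists:
a, b = inject("A"), inject("B")
print(mul(mul(a, b), a) == mul(a, mul(b, a)))   # associativity
print(mul(unit, a) == a == mul(a, unit))        # unit laws

# Universal property: lift each generator to 1 in the monoid (int, +, 0),
# giving the word-length homomorphism.
length = lift(lambda _x: 1)
print(length(mul(a, mul(b, a)), lambda m, n: m + n, 0))
```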
{ "domain": "cs.stackexchange", "id": 17697, "tags": "functional-programming, category-theory, algebra" }
PhD or master in physics
Question: I am 35 years old and I've just got a Bachelor of Science. I will pursue higher education in Physics but I have my doubts about whether a PhD or a master's is the better choice for me. My areas of interest are Plasma-Fusion Physics, Condensed Matter Physics, and Applied Physics. If I enter a PhD program, it will take me around 6 years to complete it and probably I would be kind of old to take advantage of it...on the other hand, if I choose the master (which is shorter in time) it probably won't help 'cause Universities and Companies in these areas mostly look for doctors. These are my doubts and concerns. Please help with your expertise. Answer: I am currently a 5th year PhD student (almost done, I hope) and I would recommend going after the Masters degree. I believe this is a very subjective matter so other answers may be completely different and in the end you're going to have to decide on your own. But I would like to give reasons for my opinion below. I am not trying to persuade you one way or the other but just offering my opinion. This is not to say I regret getting my PhD. I am very happy that I have come this far in my degree. I have recently started my own job search and I have seen that any job that you want to get with a physics PhD you can get with a physics Masters. So what's the big deal with the PhD? What I've noticed is that your starting salary is higher for PhDs. This is especially true for government positions. A PhD might get you started around 80K whereas a masters may be around 70K. The other main difference is that when you take the job with a PhD you can apply for promotions more quickly (perhaps after only 2 yrs on the job), but with a masters it might take longer (4 to 5 yrs). Note these numbers are really for gov't jobs; I can't say I have such specific knowledge of the private sector. Another issue with the PhD is that some private-sector companies will think you are too academic. That you can't think in terms of the real world.
They actually prefer the masters degree because this shows you can handle the extra work, and that you're not completely academically minded. My advice would be to think about what you want to do with your PhD. If you think you want to stay in academia then you MUST get the PhD. If you want a private sector job a masters is plenty good. Not to mention, if you start a PhD and find out you don't like doing research you might just want to leave after 3-4 yrs with a masters anyway. I also know that a lot of physics departments will not accept masters-only students. So you do have to apply for the PhD program. That is fine. Apply, see how it goes...you might love it, but if it's not for you, I have known plenty of people that leave with their Masters. Again, this all really depends on how you want to apply the degree.
{ "domain": "physics.stackexchange", "id": 10966, "tags": "education" }
Why is earth on an axial tilt of 23.4 degrees?
Question: What were the influences for this to occur? Did some things play a bigger part than others? Answer: An impact, possibly the same impact that caused material from Earth to fly off to space and create the Moon, tilted the Earth to $23.5^\circ$.
{ "domain": "physics.stackexchange", "id": 12804, "tags": "earth, solar-system" }
What causes molecules to vibrate when exposed to electromagnetic radiation
Question: TL;DR: why do atoms gain kinetic energy when hit by a photon? I'm trying to understand the process that converts light into heat. I found poor explanations that do not cover the whole process. A photon hits an atom of a molecule. The electron is excited and moved to a higher energy state. Assuming the energy is not re-emitted, the overall charge of the atom did not change (?) (?) causes the molecules to vibrate. Are the vibrations caused by the change in charge between the molecules? I assume that the atoms are not ionized, since that would cause the matter to conduct electricity, wouldn't it? Am I missing something? Answer: Even though photons have zero rest mass, they carry a finite amount of energy and momentum. A photon's energy & momentum are given by $$E = h\nu\space,\space \space p = \frac{h}{\lambda}$$ where $\nu$ and $\lambda$ are the frequency and wavelength of the photon respectively. If the photon is absorbed by an atom and not re-emitted, the energy of the photon is completely transferred to the atom. These kinds of collisions are inelastic. The atom takes up the energy as kinetic or vibrational energy. If a new photon is emitted immediately after photon absorption with the same energy as that of the incident photon, the overall collision is said to be elastic. There won't be a change in energy; however, a change in momentum is allowed. However, sometimes the emitted photon doesn't necessarily have the same energy as that of the incident photon, and this kind of collision (overall process) is also called an inelastic collision (Raman scattering & Compton scattering). This missing energy is taken up by the atom as kinetic energy or vibrational energy. The answer is highly simplified. The energy gained from the photon could be used for other purposes such as pair production of particles, knocking electrons off, etc. Also, the mechanisms of absorption and emission are a bit complicated. References: Photon-Atom Interactions - MIT OCW
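To get a feel for the magnitudes, here is a quick calculation of $E = h\nu$ and $p = h/\lambda$ for a visible-light photon (the 500 nm wavelength is an arbitrary example):

```python
# Photon energy and momentum for a 500 nm (green) photon.
h = 6.62607015e-34   # Planck constant, J*s (exact SI value)
c = 2.99792458e8     # speed of light, m/s (exact SI value)

lam = 500e-9         # wavelength, m
nu = c / lam         # frequency, Hz

E = h * nu           # energy, J (about 4e-19 J, i.e. roughly 2.5 eV)
p = h / lam          # momentum, kg*m/s

print(E, p)          # note E = p*c, as required for a massless particle
```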
{ "domain": "physics.stackexchange", "id": 33028, "tags": "electromagnetism, photons, atoms" }
Can a black hole "supernova"?
Question: In layman terms: nothing ever escapes the pull of a black hole, not even light; when a supermassive star reaches the end of its life you get a supernova; sometimes the "remains" of these stars can turn into black holes. My question is: can a black hole "supernova" the same way a large star does? What I'm saying is, could this process occur in reverse? I.e.: black hole -> supernova -> particles form dust cloud -> new star EDIT: Apologies if any of this sounds stupid (or vague), I'm just curious. Answer: No, it cannot. A black hole is something vastly different from a star. It's vastly different from anything else. It's not a thing, really, but more like a portion of very distorted spacetime. Nothing escapes from it simply because there is no way out - spacetime is distorted in such a way that all trajectories lead to the center. Now, there is a mechanism where radiation is generated just outside the black hole, sucking energy from its gravitational field and therefore from its mass. This is called Hawking radiation. It does not come from inside the BH, but because of the way it's created, it leeches the BH's energy / mass. In time, this would diminish the mass of the BH. It turns out, the smaller the BH, the stronger the Hawking radiation. This in turn makes the BH lose mass even faster, so the HR gets even stronger, and so on. It's a vicious cycle. If the black hole is small enough, the process turns into an explosion - HR gets so intense, the whole mass of the black hole is converted into radiation at once. It's not a supernova, not as big as that, but it is a powerful explosion. The only issue here is - HR is very weak when BHs are big. So regular BHs will take an incredibly long time to slim down, if ever, to the tiny size where HR spirals out of control. Basically this never happens in reality. TLDR: Not a supernova, just a huge bomb, but not very likely.
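To put rough numbers on "HR is very weak when BHs are big": the Hawking temperature is $T = \hbar c^3/(8\pi G M k_B)$, which falls off as $1/M$. A quick sketch (the $10^{12}$ kg mass is an arbitrary "small black hole" example):

```python
import math

hbar = 1.054571817e-34   # reduced Planck constant, J*s
c = 2.99792458e8         # speed of light, m/s
G = 6.67430e-11          # gravitational constant, m^3 kg^-1 s^-2
kB = 1.380649e-23        # Boltzmann constant, J/K
M_sun = 1.989e30         # solar mass, kg

def hawking_temperature(M):
    """Hawking temperature (K) of a black hole of mass M (kg)."""
    return hbar * c**3 / (8 * math.pi * G * M * kB)

print(hawking_temperature(M_sun))   # ~6e-8 K: utterly negligible
print(hawking_temperature(1e12))    # ~1e11 K: tiny holes radiate fiercely
```

A solar-mass hole sits far below the 2.7 K microwave background, so today it absorbs more than it radiates; only vastly lighter holes radiate appreciably.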
{ "domain": "astronomy.stackexchange", "id": 3765, "tags": "star, black-hole, gravity, supernova, explosion" }
Entropy - does Heat Death occur in a closed system
Question: Does heat death occur in a closed system? (Assuming you can theoretically have some sort of "closed system".) Answer: There is a degree to which this is just terminology, but in cosmology a distinction is sometimes made between the Heat Death and the Big Freeze. The Big Freeze is the point at which the universe reaches absolute zero, while the Heat Death is the point at which the entire universe has a constant temperature. These are not necessarily the same thing, because a de Sitter universe possesses a cosmological horizon and this will emit Hawking radiation. That means a de Sitter universe will never cool to absolute zero - only to the temperature of the Hawking radiation. However the distinction I make here is far from standard and you'll see the two terms being used interchangeably. The point of all this is that a closed system is never going to cool to absolute zero because obviously if it's closed heat can't escape from it. However it can attain a state of uniform temperature. So it can attain a state of Heat Death, but only for a subset of commonly used meanings of the term Heat Death.
{ "domain": "physics.stackexchange", "id": 20088, "tags": "thermodynamics, entropy" }
Ratio of electrons and protons in Universe
Question: What is the relation between the amounts of electrons and protons in the Universe? I expect that the Universe should not be charged, so the estimation is 1:1. But then, why should there be the same amount of leptons and quarks/3? (if I generalize the question a bit) I remember that lepton and baryon numbers are conserved, which sounds to me like we speak about two completely different species. That have the same number of members by chance... Answer: The Universe is indeed electrically neutral at the cosmological length scales which means that the total charge of the positively charged particles is equal to (minus) the total charge of the negatively charged particles. However, one must be more careful about what these particles are. Electrons and protons are the two dominant charged particle species. However, the Universe also contains other charged particles including antiprotons, positrons, and, less importantly, some unstable particles. But if one ignored all charged particles except for protons and electrons, $N_e=N_p$ would really arise from the neutrality of the Universe, and it is approximately obeyed by the Universe around us, anyway. (Most of the electrons and protons in the Universe exist in the form of hydrogen atoms, anyway.) If one talks about the number of quarks, the counting is different. A proton contains 3 (valence) quarks so the number of (valence) quarks is $N_q=3N_p+\dots $. However, protons aren't the only particles that contain quarks. There are lots of neutrons, so neglecting all other hadrons, $N_q=3N_p+3N_n\gt 3N_p\approx 3N_e$. However, I have mentioned that most of the electrons and protons in the Universe come in the form of hydrogen-1 which has no neutrons, so the number of neutrons in the Universe is much smaller than the number of the protons, and this term may be approximately neglected. However, heavy elements – like those on Earth – actually contain a greater number of neutrons than protons. And neutron stars are full of neutrons.
One must be careful about this counting.
{ "domain": "physics.stackexchange", "id": 95416, "tags": "electrons, quarks, protons, leptons" }
Why can any wave function be written as the sum of parity eigenstates?
Question: I read in Zettili's Book of Quantum Mechanics that any wave function $\psi(\vec{r})$ can be written as a sum of the parity operator's eigenfunctions, since its eigenstates $ |\psi_+\rangle$ and $ |\psi_-\rangle$ form a complete set, then $\psi(\vec{r}) = \psi_+ (\vec{r}) + \psi_- (\vec{r}) . $ But I still can't see why. Could you help me, please? Answer: Any function $f(x)$ can be written as the sum of an even function $f_+(x)$ and an odd function $f_-(x)$, $$f_+(x) = \frac{f(x) + f(-x)}{2}, \quad f_-(x) = \frac{f(x) - f(-x)}{2},$$ because $$f_+(x) + f_-(x) = f(x).$$ Here $f_+$ is manifestly even and $f_-$ is manifestly odd, i.e. they are eigenfunctions of parity with eigenvalues $+1$ and $-1$. The statement in Zettili is just the generalization of this to three dimensions.
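The decomposition is easy to check numerically for any concrete function (a sketch; the choice of $f$ below is arbitrary and deliberately has no symmetry of its own):

```python
import numpy as np

def f(x):
    return np.exp(x) + 3 * x          # arbitrary function with no symmetry

def f_plus(x):                        # even (parity +1) part
    return (f(x) + f(-x)) / 2

def f_minus(x):                       # odd (parity -1) part
    return (f(x) - f(-x)) / 2

x = np.linspace(-3, 3, 101)
print(np.allclose(f_plus(x) + f_minus(x), f(x)))   # the sum recovers f
print(np.allclose(f_plus(-x), f_plus(x)))          # f_plus is even
print(np.allclose(f_minus(-x), -f_minus(x)))       # f_minus is odd
```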
{ "domain": "physics.stackexchange", "id": 64982, "tags": "quantum-mechanics, wavefunction, parity" }
Trace distance between mixed state and pure state vs trace distance between their purifications
Question: Let $\rho$ be a mixed state and $\vert\psi\rangle\langle\psi\vert$ be a pure state on some Hilbert space $H_A$ such that $$\|\rho - \vert\psi\rangle\langle\psi\vert \|_1 \leq \varepsilon,$$ where $\|A\|_1 = \text{Tr}\sqrt{A^\dagger A}$. Does there exist a purification $\vert\phi\rangle\langle\phi\vert$ of $\rho$ and a (trivial) purification $\vert\psi\rangle\langle\psi\vert\otimes\vert 0\rangle\langle 0\vert$ living on Hilbert space $H_A\otimes H_B$ such that $$\|\left(\vert\phi\rangle\langle\phi\vert - \vert\psi\rangle\langle\psi\vert\otimes\vert 0\rangle\langle 0\vert\right) \|_1 \leq \varepsilon? \tag{1}$$ I am aware that the fidelity between purifications can be bounded using Uhlmann's theorem. One can then use the Fuchs van de Graaf inequalities and come back to trace distance but the resulting bound in (1) goes as $\sqrt{\varepsilon}$. Can one do better and achieve (1)? Answer: Sanity check: the statement is indeed true when $\rho$ is a pure state. We can start by finding the singular values of the combination of purified systems, which I will write as $|\phi\rangle$ and $|\Psi\rangle$. Given the Hermitian matrix $M=|\phi\rangle\langle\phi|-|\Psi\rangle\langle\Psi|$, we can look for eigenvectors of the form $\alpha|\phi\rangle+\beta|\Psi\rangle$ (this is much easier when we have a sum of projectors, not a difference like we have here). Defining $z=\langle\phi|\Psi\rangle$, the eigenvalue equation reads \begin{align} M\left(\alpha|\phi\rangle+\beta|\Psi\rangle\right)&=|\phi\rangle\left(\alpha+\beta z\right)+|\Psi\rangle\left(-\beta-\alpha z^*\right)\\ &=\lambda\left(\alpha|\phi\rangle+\beta|\Psi\rangle\right). 
\end{align} The eigenvalues are given by $$\lambda=\frac{\alpha+\beta z}{\alpha}=-\frac{\beta+\alpha z^*}{\beta},$$ which means that the eigenvectors must obey $$\alpha=\beta\frac{-1\pm\sqrt{1-|z|^2}}{z^*};$$ we thus have the unnormalized eigenvectors and eigenvalues $$|v_\pm\rangle=\left(-1\pm\sqrt{1-|z|^2}\right)|\phi\rangle+z^*|\Psi\rangle,\qquad \lambda_\pm=\mp\sqrt{1-|z|^2}.$$ The entire matrix $M$ thus has two degenerate singular values $$\sigma_1=\sigma_2=\sqrt{1-\left|\langle\phi|\Psi\rangle\right|^2}.$$ This means that we can write your purification's trace distance as $$\big|\big||\phi\rangle\langle\phi|-|\Psi\rangle\langle\Psi|\big|\big|_1=2\sqrt{1-\left|\langle\phi|\Psi\rangle\right|^2}.$$ To achieve the smallest value of $\big|\big||\phi\rangle\langle\phi|-|\Psi\rangle\langle\Psi|\big|\big|_1$, we require the largest value of $\left|\langle\phi|\Psi\rangle\right|^2$. Now, this expression for the singular values uses the overlap between two purified states, so we can connect it to the fidelity! Labelling all viable purifications of $\rho$ by $|\phi_\rho\rangle$, we know that $$F(\rho,\psi)=\max_{|\phi_\rho\rangle}\left|\langle\phi_\rho|\Psi\rangle\right|^2,$$ where we have fixed our purification $|\Psi\rangle$ of $|\psi\rangle$ without loss of generality. So we can collect our results and write $$\min_{|\phi_\rho\rangle}\big|\big||\phi_\rho\rangle\langle\phi_\rho|-|\Psi\rangle\langle\Psi|\big|\big|_1=2\sqrt{1-F(\rho,\psi)}\stackrel{?}{\leq}\epsilon.$$ Our question is now how to relate $F(\rho,\psi)$ and $\epsilon$. We know that $1-F(\rho,\psi)\leq {\epsilon}/2$, so the question is by how much can it be smaller. 
We can always write this fidelity as $F(\rho,\psi)=\langle \psi|\rho|\psi\rangle.$ Choosing some fiducial state $\rho=p|\psi\rangle\langle\psi|+(1-p)|\psi_\perp\rangle\langle\psi_\perp|$ with orthonormal $|\psi\rangle$ and $|\psi_\perp\rangle$, we can calculate (2 times) the trace distance $$\big|\big|\rho-|\psi\rangle\langle\psi|\big|\big|_1=2\left(1-p\right)$$ and the fidelity $$F(\rho,\psi)=p.$$ Since $$2\sqrt{1-p}\geq 2\left(1-p\right),$$ we have a counterexample where $$\min_{|\phi_\rho\rangle}\big|\big||\phi_\rho\rangle\langle\phi_\rho|-|\Psi\rangle\langle\Psi|\big|\big|_1 > \big|\big|\rho-|\psi\rangle\langle\psi|\big|\big|_1,$$ so it does not seem like the desired property always holds for all $\epsilon$. In fact, it seems as though this counterexample saturates the $\sqrt{\epsilon}$ condition that we have been trying to beat.
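The counterexample is also easy to check numerically (a sketch for the arbitrary choice $p = 0.9$, with $|\psi\rangle$ and $|\psi_\perp\rangle$ taken as computational basis states):

```python
import numpy as np

p = 0.9
psi = np.array([1.0, 0.0])            # |psi>
psi_perp = np.array([0.0, 1.0])       # |psi_perp>

rho = p * np.outer(psi, psi) + (1 - p) * np.outer(psi_perp, psi_perp)

# Trace norm ||rho - |psi><psi| ||_1 = sum of singular values = 2(1-p)
diff = rho - np.outer(psi, psi)
trace_norm = np.sum(np.linalg.svd(diff, compute_uv=False))

# For pure |psi>, F(rho, psi) = <psi|rho|psi> = p
fidelity = psi @ rho @ psi
best_purified = 2 * np.sqrt(1 - fidelity)   # min over purifications

print(trace_norm)       # 2(1-p) = 0.2
print(best_purified)    # 2*sqrt(1-p) ~ 0.632, strictly larger
```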
{ "domain": "quantumcomputing.stackexchange", "id": 2794, "tags": "quantum-state, information-theory, fidelity, linear-algebra, trace-distance" }
Unable to run gazebo 1.0.0 RC2
Question: Hi there, I have just installed Ubuntu and ROS and gazebo-1.0.0-RC2 but when I run the gzserver command this is what I get: Gazebo multi-robot simulator, version 1.0.0-RC2 Copyright (C) 2011 Nate Koenig, John Hsu, Andrew Howard, and contributors. Released under the Apache 2 License. http://gazebosim.org Warning: no world filename specified, using default world Msg Waiting for master Msg Connected to gazebo master @ http://localhost:11345 Segmentation fault Is there any log or anything else where I can try and figure out what's causing the Segmentation fault? Previous to this I was having problems, gzserver repeated this message several times: Xlib: extension "GLX" missing on display ":0.0". Insufficient GL support I solved it by installing OpenGL... and that's when the Segfault appeared. So now I don't know if this introduced the current error or it has nothing to do with it. Thanks for your time! Miguel. Update: My Ubuntu installation died and I had to reinstall everything... Now I'm not getting any Xlib errors or segfaults, but instead this is what's happening: Gazebo multi-robot simulator, version 1.0.0-RC2 Copyright (C) 2011 Nate Koenig, John Hsu, Andrew Howard, and contributors. Released under the Apache 2 License. http://gazebosim.org Warning: no world filename specified, using default world Msg Waiting for master Msg Connected to gazebo master @ http://localhost:11345 Exception [RenderEngine.cc:473] unable to find rendering system Error [Rendering.cc:37] Failed to load the Rendering engine subsystem unable to find rendering system Exception [RenderEngine.cc:473] unable to find rendering system Exception [RenderEngine.cc:473] unable to find rendering system Exception [Sensors.cc:38] Unable to load the rendering engine terminate called after throwing an instance of 'gazebo::common::Exception' Aborted I'm going to investigate, but I wanted to post it here first. Thanks again! Miguel.
Update 2: I tried everything that I could think of, and I'm still getting those errors about the rendering system... Any ideas? :( Thanks, Miguel. Originally posted by Capelare on ROS Answers with karma: 202 on 2012-02-08 Post score: 0 Original comments Comment by Capelare on 2012-02-12: (Weird... I accidentally double posted the comment and now I can't delete one without having the other deleted as well, so I'm editing this comment instead of deleting it. I think the problem is that the 6th comment is neither showed nor listed as "more comments"). Comment by Capelare on 2012-02-12: BTW, I didn't understand very well what you said about the ros wrapped gazebo Hsu, because I don't have Fuerte installed (it's not listed as a requirement at gazebosim.org...). Right now I don't even have Electric or any other ros version! Comment by Capelare on 2012-02-12: I've updated the question after I've been forced to reinstall Ubuntu. Now I have a clean install of 11.10 with just the additional nVidia drivers and all the dependencies listed on the Requeriments page. And gazebo-1.0.0-RC2, of course. Comment by hsu on 2012-02-09: Can you make a backtrace? There could be a problem in Fuerte that causes segfault with the ros wrapped gazebo. Specifically when the plugin to get package paths is invoked (e.g. see https://code.ros.org/svn/ros-pkg/stacks/simulator_gazebo/trunk/gazebo/scripts/gazebo with the -p options). Comment by Capelare on 2012-02-09: For example, right now I am on a i5-650 @ 3.2GHz with 8GB of RAM and an ATI RV730 PRO (Radeon HD 4650)... Comment by Capelare on 2012-02-09: It's not that easy... Because I have Ubuntu 11.04 installed on a USB flash drive that I boot in different computers. And it doesn't work in any of the machines. Comment by tfoote on 2012-02-08: Can you please add a description of your computer hardware? Comment by hsu on 2012-02-22: can you post how you are starting gazebo? 
Comment by Capelare on 2012-02-22: Of course, I start gazebo by running 'gzserver', as indicated here: http://gazebosim.org/wiki/quick_start Comment by hsu on 2012-02-22: also, did you install using deb, tarball or mercurial? Comment by Capelare on 2012-02-22: I used the .deb file. Should I try otherwise? :\ Comment by hsu on 2012-02-22: can you also post the output of 'glxinfo'? Comment by Capelare on 2012-02-23: Here you go: http://pastebin.com/bi8LUaiq Comment by hsu on 2012-02-23: safe to say glx is not an issue. Please double check to make sure your OGRE_RESOURCE_PATH points to where the ogre libraries are installed? Comment by Capelare on 2012-02-23: Hi, I'm not right now at the computer I use for all of this testing (e.g. glxinfo says "X Error of failed request..." on this machine), but I thought it wouldn't matter for something as simple as checking if a system variable points to the right location... correct me if I'm wrong. That said... Comment by Capelare on 2012-02-23: OGRE_RESOURCE_PATH doesn't point to where the ogre libraries are, because it doesn't point anywhere... I can't test if these will solve the problem until tomorrow afternoon (~14h from now) because of the lack of GLX on this machine, but I'll update this as soon as I can. Thanks for your time! :) Comment by hsu on 2012-02-23: I updated step 2 in http://gazebosim.org/wiki/quick_start, so it now says "source /usr/share/gazebo-[gazebo version]/setup.sh", which is more correct. Comment by Capelare on 2012-02-23: Anyway I thought that Gazebo didn't use OGRE_RESOURCE_PATH since C-Turtle... At least I think I read something about that somewhere :? Comment by hsu on 2012-02-23: true if you install the ros wrapper version (install: http://www.ros.org/wiki/electric/Installation/Ubuntu + "sudo apt-get install ros-fuerte-simulator-gazebo" + http://ros.org/wiki/simulator_gazebo/Tutorials). The instructions on gazebosim.org are for standalone system install, needs more setup.
Comment by Capelare on 2012-02-23: Oh fuck, I didn't know that. Besides, don't you think it'd be a good idea to add this additional setup you are talking about to the gazebosim.org "quick start" and/or "install" guides? :-\ Comment by Capelare on 2012-02-23: Oh wait! I didn't think that setup.sh would export OGRE_RESOURCE_PATH... AFTER running setup.sh I'd get this: http://pastebin.com/9pa5BL4S So I think I should modify setup.sh to point to /usr/lib/OGRE, am I wrong? Comment by Capelare on 2012-02-24: Yeah, after I changed the OGRE_RESOURCE_PATH to /usr/lib/OGRE everything seems to work fine. BTW, /usr/lib/OGRE was the default install dir for apt-get install libogre-dev... so maybe you should change (or just add) that path in the setup.sh in order to prevent this from happening to others :? Comment by Capelare on 2012-02-24: Oh, if you want to write the whole thing as an answer I'll gladly mark it as correct! Thanks! :) Answer: The instructions on gazebosim.org are for a standalone system install of gazebo. (see my comment below about the ros-ified gazebo stack). Your problem above might be fixed by the updated step 2 in quick_start: source /usr/share/gazebo-[gazebo version]/setup.sh should configure the environment variables necessary for things to work correctly. In this case, your system is missing the correct OGRE_RESOURCE_PATH. On another note, it looks like you are trying to install the gazebo simulator directly on your system (outside of the ros context) for using it with ROS. Let me try to clarify a little bit about the difference between the ros simulator_gazebo stack and the gazebosim project. The ros simulator_gazebo stack wraps a hand-picked stable version of gazebo from the gazebo simulator project, and provides additional ros plugins/tools for using gazebo inside the ros environment. By installing ros-fuerte-simulator-gazebo, you don't have to worry about installing gazebo via the instructions on gazebosim.org, but rather just follow along the ros wiki tutorials.
However, if you are interested in helping out with testing/developing the underlying simulator software, then proceed with the system install as described on the gazebo simulator project installation instruction page and try out the quick start steps. Note the two installs can exist side by side with some careful finessing of installation locations and system paths. Hope this is not too confusing. Originally posted by hsu with karma: 5780 on 2012-02-24 This answer was ACCEPTED on the original site Post score: 2 Original comments Comment by Capelare on 2012-03-02: @hsu if anyone installs ros-fuerte-simulator-gazebo should they also install ros-fuerte-desktop*? and/or ros-electric-desktop? Also, the link you posted to the simulator_gazebo tutorials is way different than the ones at gazebosim... isn't ros-fuerte-simulator-gazebo the same as gazebosim? Comment by Capelare on 2012-03-02: Ok, I figured it out myself, but please tell me if I'm right/wrong. In order to use the simulator as it's explained at the gazebosim wiki, one should source /setup.sh and then /stacks/simulator_gazebo/gazebo/setup.bash. After that gzserver and gzclient work fine. (?) Comment by hsu on 2012-03-02: sorry, I realized my earlier answer might be too confusing, so I removed the second part. I'll try to put this information on the gazebo wiki instead. The basics are explained in the updated gazebo package manifest (https://code.ros.org/svn/ros-pkg/stacks/simulator_gazebo/trunk/gazebo/manifest.xml) Comment by Capelare on 2012-03-02: Thanks! Anyway I decided to go with the ros-wrapped gazebo version because I think it'd be easier to upgrade :) So what I'm doing now is setting up the environment for ros-fuerte and for simulator_gazebo, and then I launch "rosrun gazebo gzserver" instead of gzserver directly ;-) Comment by Capelare on 2012-03-02: It's not confusing at all, don't worry. It's just I don't know what I really want to do, so I'm trying things out.
What I want to accomplish is a Lego NXT simulator, and I thought it'd be a good idea to write plugins for gazebo (and use the NXT stack probably). So you tell me, do I want to use ROS? Comment by Capelare on 2012-03-02: I think I do want (and need?) to use it, and I'd like to use Fuerte and/or the newest gazebo, but I'm having problems with the whole 'lack of tutorials/documentation for these two' thing. Or maybe I'm just too clumsy to find them!
{ "domain": "robotics.stackexchange", "id": 8160, "tags": "gazebo, simulation, simulator-gazebo" }
Is there a way for me to not have to build and source every time I open a new terminal
Question: Hello all, Thank you for your time and consideration! I'm using ROS Melodic on the Ubuntu 18.04 distribution and every time I open a new terminal, or even a new tab in a terminal, I have to re-build and re-source my workspace ("catkin build [package names]" and "source devel/setup.bash"). I used a docker image at university and I didn't have to do this. I'm wondering if there is something I can do so that once I build my workspace, it's built in all terminals. Please forgive any terminology misuse, I'm fairly new to ROS and welcome any feedback! Cheers, Derek Originally posted by derekboase on ROS Answers with karma: 3 on 2022-02-11 Post score: 0 Answer: Just run these commands in your home directory and you should no longer have to retype source devel/setup.bash: echo "source <your_ws>/devel/setup.bash" >> ~/.bashrc source ~/.bashrc Remember that you will have to run your launch files and packages in your home directory for this to take effect. Also, I'm not sure you have to type catkin build every time; maybe just catkin_make once when there is a new package in your_ws. I would also recommend checking out the ROS wiki. Originally posted by bribri123 with karma: 36 on 2022-02-11 This answer was ACCEPTED on the original site Post score: 1 Original comments Comment by derekboase on 2022-02-11: Absolutely right, worked brilliantly. Thank you!
{ "domain": "robotics.stackexchange", "id": 37434, "tags": "ros, ros-melodic, catkin, build, source" }
Can the CKM quark mixing matrix be derived?
Question: Does the standard model make exact predictions about what the values of the CKM matrix are, or are they inferred from experiment? Answer: The Standard Model has $18$ free parameters that cannot be evaluated directly from the theory. Among these $18$ parameters are the three mixing angles $\theta_{12}, \theta_{13}, \theta_{23}$ and the phase angle $\delta$ of the CKM matrix. So, no, the $4$ parameters that define the CKM matrix cannot be evaluated directly from the theory but are found through various experiments.
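To make the "form from theory, numbers from experiment" point concrete, here is a short sketch (my illustration, not part of the answer) that assembles the standard-parametrization CKM matrix from illustrative angle values of roughly the measured size — the numeric inputs are assumptions for the example, not precision values — and checks that the result is unitary by construction:

```python
import numpy as np

# Illustrative inputs of roughly the measured magnitude (assumptions):
th12, th13, th23, delta = 0.227, 0.0037, 0.042, 1.2  # radians

s12, s13, s23 = np.sin([th12, th13, th23])
c12, c13, c23 = np.cos([th12, th13, th23])
ph = np.exp(1j * delta)

# Standard parametrization of the CKM matrix:
V = np.array([
    [c12 * c13, s12 * c13, s13 / ph],
    [-s12 * c23 - c12 * s23 * s13 * ph, c12 * c23 - s12 * s23 * s13 * ph, s23 * c13],
    [s12 * s23 - c12 * c23 * s13 * ph, -c12 * s23 - s12 * c23 * s13 * ph, c23 * c13],
])

# Unitarity holds for ANY choice of the four parameters -- the theory
# fixes the form, experiment fixes the numbers:
print(np.allclose(V @ V.conj().T, np.eye(3)))  # True
```

Whatever the four measured values turn out to be, only the matrix entries change; unitarity is guaranteed by the parametrization itself.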
{ "domain": "physics.stackexchange", "id": 95241, "tags": "particle-physics, standard-model, quarks" }
Proving that a language is not Recursive
Question: I have the following language: T = {M | there exists w such that M accepts w within |w| steps} I am trying to prove that this language is not recursive and that it is recursively enumerable. To prove that it is NOT recursive I considered the following steps: 1) Assume that it was recursive 2) Consider input w of infinite length 3) Meaning, we get infinite steps that M can perform before halting. So we get the halting problem when considering this infinite input, and we are saying that we can decide it 4) halting problem is not decidable. Contradiction What do you think about this proof? I'm not sure that I can consider an input of infinite length, and I'm not sure whether Turing machines in general should deal with inputs of infinite length. I tried to search for answers with no luck so far. Any help would be much appreciated. Thanks. Answer: Proving that the language is in RE is quite easy: simulate the machine on every input $w\in \Sigma^*$, in minlex order, for $|w|$ steps. If an input is accepted - accept. Proving that the language is undecidable can be done by reduction from $$A_{TM}=\{<M,w>: M\text{ accepts }w\}.$$ A possible reduction: Given input $<M,w>$, construct a new TM $M'$ that works as follows: given input $x$, simulate $M$ on $w$. If $M$ accepts, accept. If $M$ rejects, reject, and of course if $M$ does not halt then neither does $M'$. Now, if $<M,w>\in A_{TM}$, then $M$ accepts $w$ within $t$ steps, for some $t\in \mathbb{N}$. Thus, for a word $x$ with $|x|>>t$ (the size of $|x|$ needed depends on the manner of simulation, but can be done with $O(t\log t)$), the simulation of $M$ on $w$ will terminate within $|x|$ steps, and $M$ will accept $w$, so $M'$ will accept $x$, and $<M'>\in T$. Conversely, if $<M,w>\notin A_{TM}$, then $M'$ will not accept any input, so $<M'>\notin T$, and you are done.
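The "simulate in minlex order" semi-decider from the answer can be sketched in a few lines. This is my own illustration, not the answer's code: `accepts_within` is an assumed interface for a step-bounded simulator of the fixed machine M (such a simulator always halts, which is exactly why T is in RE), and the toy machine at the bottom is also made up for the demonstration:

```python
from itertools import count, product

def in_T(accepts_within, alphabet="01", max_len=None):
    """Semi-decide membership of M in T: search for a w that M accepts
    within |w| steps, enumerating words in minlex (length-lexicographic)
    order, starting from length 1 for simplicity.  Returns the first
    witness w; loops forever if none exists, unless max_len is set
    (added purely so the sketch can be demonstrated)."""
    lengths = count(1) if max_len is None else range(1, max_len + 1)
    for n in lengths:
        for letters in product(alphabet, repeat=n):
            w = "".join(letters)
            if accepts_within(w, len(w)):  # bounded simulation: always halts
                return w
    return None

# Toy "machine": accepts exactly the words starting with '1',
# and does so in a single step.
toy = lambda w, steps: steps >= 1 and w.startswith("1")

print(in_T(toy))  # prints: 1   ('0' is tried first and rejected)
```

If no witness exists, the search never terminates — which is fine for recognizability, and is exactly the gap the undecidability reduction exploits.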
{ "domain": "cs.stackexchange", "id": 5220, "tags": "computability, turing-machines, proof-techniques, undecidability" }
Why do free-falling particles converge onto the Hubble flow?
Question: I'm currently reading the book Cosmology by Daniel Baumann, and in Chapter 2, I encountered a claim that I was unable to prove. To provide some context to my question, let's start with the expression it provides for the physical velocity of a particle: $$\vec{v}_{phys}=\dfrac{d\vec{r}_{phys}}{dt}=\dfrac{d}{dt}(a\vec{r})=\dot{a}\vec{r}+a\dot{\vec{r}}=\dfrac{\dot{a}}{a}a\vec{r}+a\dot{\vec{r}}=H\vec{r}_{phys}+\vec{v}_{pec}$$ where: $a=a(t)$ is the scale factor, which measures the expansion of the universe. $H=\dfrac{\dot{a}}{a}$ is the Hubble parameter. $H\vec{r}_{phys}$ is the Hubble flow. $\vec{v}_{pec}=a\dot{\vec{r}}$ is the peculiar velocity of the particle. Later on, it asks the reader to prove that the physical three-momentum, defined as $p^2=g_{ij}P^iP^j$, satisfies: $p=\dfrac{mv}{\sqrt{1-(v/c)^2}}$ $p$ is proportional to $a^{-1}$ This is easy to do, using the information that the geodesic equation provides, but then the book states that: Since $p\propto a^{-1}$, free-falling particles converge onto the Hubble flow. Why? I understand this would mean that, for free-falling particles, $\vec{v}_{pec}$ tends to zero as time passes, and I also suppose that what the book calls the "physical peculiar velocity" $v$, which appears in the expression of $p$, is equal to $\dot{\vec{r}}$. But I don't see how to conclude from this that $\vec{v}_{pec}$ tends to zero.
Edited to provide more details: If we algebraically invert that expression of the three-momentum $p$ in terms of the velocity $v$, we obtain the following expression for $v$ in terms of $p$: $$v=\dfrac{p}{\sqrt{m^2+(p/c)^2}}$$ If we consider most cosmological objects and massive particles to be non-relativistic, we can neglect the $(p/c)^2$ term in the denominator and say that, since $p\propto a^{-1}$: $$v\simeq\dfrac{p}{m}\propto p\propto a^{-1}$$ But then, if we consider that $v^2=g_{ij}\dot{x}^i\dot{x}^{j}$ denotes the same velocity $v$ as $\dot{\vec{r}}$ in the expression of the physical velocity, this means that: $v_{pec}=a\cdot v\propto a\cdot a^{-1}=1$ So, $v_{pec}$ would not decrease with the expansion of the universe, which is what I interpreted from the phrase "free-falling particles converge onto the Hubble flow". Where is my mistake here? Answer: The velocity $v_{\text{pec}} = a \, \dot{r}$ is the velocity as measured by the local stationary observer located at $r$, close to the moving particle. This velocity is restricted by causality: $v_{\text{pec}} \le 1$ (in units of $c \equiv 1$). The velocity $v_{\text{rec}} = \dot{a} \, r$ isn't bounded since $r$ can be as large as you want (depending on the spatial geometry considered). So when $r$ is large, you get $v_{\text{rec}} \gg v_{\text{pec}}$, which defines the "Hubble flow" at large distances. In this case, the peculiar motion isn't very relevant and you may set $v_{\text{pec}} \approx 0$.
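The damping of the locally measured velocity can be seen with a tiny numerical illustration (my own sketch, with made-up numbers in units where $c = m = 1$): take the physical momentum redshifting as $p \propto a^{-1}$ and invert $p = m v/\sqrt{1-(v/c)^2}$:

```python
import numpy as np

m, c = 1.0, 1.0                     # illustrative units
a = np.array([1.0, 2.0, 4.0, 8.0])  # growing scale factor
p = 0.5 / a                         # physical momentum redshifts as 1/a
v = p / np.sqrt(m**2 + (p / c)**2)  # inverse of p = m v / sqrt(1 - v^2/c^2)

print(v)  # monotonically decreasing: the measured peculiar velocity damps away
```

The quantity that decays here is the locally measured velocity (the answer's $v_{\text{pec}}$), which is why the particle "converges onto the Hubble flow" even though the coordinate velocity $\dot{r}$ bookkeeping in the question suggests otherwise.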
{ "domain": "physics.stackexchange", "id": 93085, "tags": "special-relativity, cosmology" }
Why does pOH increase when pH decreases?
Question: Let's say you add $\ce{HCl}$ to water. The $\ce{H+}$ ion concentration increases and that causes a decrease in $\mathrm{pH}$. But why would the $\mathrm{pOH}$ increase? I can't see why added $\ce{H+}$ ions will decrease the $\ce{OH-}$ concentration? Is this because of the water auto-ionization? Is it because, according to Le Chatelier principle, when you add an amount of a component, the reaction will go in the other direction to counteract the disturbance, thus decreasing the concentration of $\ce{OH-}$ to form water? Answer: At a particular temperature, the $K_{\text{eq}}$ for the following reaction (yes, it's the auto-protolysis of water): $$\ce{H2O(l) <=> H+(aq) + OH-(aq)}$$ will be constant. Note that $K_\mathrm{w}=K_{\text{eq}}\times[\ce{H2O}]=\ce{[H+(aq)][OH-(aq)]}$ is called the auto-protolysis constant of water. It is considered a constant because concentration of water $([\ce{H2O}])$ is assumed to be constant. So, you can directly observe from here, that since $\ce{[H+(aq)][OH-(aq)]}=K_\mathrm{w}=\text{constant}$ at a particular temperature, if $\ce{[H+(aq)]}$ increases (i.e. $\mathrm{pH}$ decreases), $[\ce{OH-(aq)}]$ must decrease (i.e. $\mathrm{pOH}$ must increase). Your interpretation of the Le Chatelier's principle is also correct. In fact, it is actually what is happening behind the scenes. When you add more acid (assuming a 100% ionized acid), the $Q$ value of the auto-protolysis of water increases, causing the reaction to shift backward. However, the backward shift is unable to decrease the concentration of $\ce{H+}$ ions as much as it decreases the concentration of $\ce{OH-}$ ions. See explanation below: Consider adding $\pu{0.01M}$ $\ce{HCl}$ to pure water (at $\pu{25^\circ C}$) at $t=0$. 
This is what happens until equilibrium is reached at $t=t_0$: $$ \begin{array}{cccccc} &\ce{H2O(l) &<=>& H+(aq)&+&OH-(aq)}\\\hline t<0&-&&\pu{10^-7M}&&\pu{10^-7M}\\ t=0&-&&\pu{(10^-7+10^-2)M}&&\pu{10^-7M}\\ t=t_0&-&&(10^{-7}+10^{-2}-x)~\pu{M}&&(10^{-7}-x)~\pu{M}\\ \end{array} $$ Now, if you actually solve for $x$ at $t=t_0$, you'll find $x$ to be just slightly below $10^{-7}$ (to an excellent approximation, $x = 10^{-7}-10^{-12}$). Notice that, for the $\ce{H+}$ ions, this means a decrease in concentration from $\pu{0.0100001 M}$ at $t=0$ to $\approx\pu{0.0100000 M}$ at $t=t_0$. This is a hardly noticeable change. However, it is a catastrophe for the $\ce{OH-}$ ions, whose concentration is reduced by a factor of about one hundred thousand, from the initial $\pu{10^-7 M}$ down to $\approx\pu{10^-12 M}$! The change in $\mathrm{pOH}$ is, hence, much more pronounced.
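The algebra above can be checked numerically. The sketch below (mine, not part of the original answer) solves the charge-balance quadratic for a fully dissociated $\pu{0.01 M}$ acid; writing the root as $[\ce{H+}] = (C+\sqrt{C^2+4K_\mathrm{w}})/2$ avoids the catastrophic cancellation you would get by subtracting nearly equal floating-point numbers:

```python
import math

Kw = 1e-14  # autoprotolysis constant of water at 25 °C
C = 0.01    # mol/L of fully dissociated HCl added

# Charge balance [H+] = [OH-] + C with [OH-] = Kw/[H+] gives
# [H+]^2 - C*[H+] - Kw = 0; take the positive root:
H = (C + math.sqrt(C * C + 4 * Kw)) / 2
OH = Kw / H

print(f"[H+]  = {H:.6e} M")   # ≈ 1.000000e-02
print(f"[OH-] = {OH:.6e} M")  # ≈ 1.000000e-12
print(f"[OH-] dropped by a factor of {1e-7 / OH:.0f}")  # ≈ 100000
```

So $\mathrm{pH}$ moves from 7 to 2 while $\mathrm{pOH}$ moves from 7 to 12, with the product $[\ce{H+}][\ce{OH-}]$ pinned at $K_\mathrm{w}$ throughout.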
{ "domain": "chemistry.stackexchange", "id": 9853, "tags": "physical-chemistry, acid-base, ph" }
Explicit calculation of bosonic string Weyl invariance at one loop
Question: I have been trying to do all the calculations in the Green, Schwarz and Witten Superstring Theory textbook. At the end of chapter 3, the authors do a one-loop calculation of Weyl invariance for the bosonic string, in sections 3.4.2 and 3.4.5. In the latter section more fields were included, and not much calculation detail was given. The actions are $S_1=\frac{-1}{4\pi\alpha'}\int d^2\sigma\sqrt h h^{\alpha \beta}\partial_\alpha X^\mu \partial_\beta X^\nu g_{\mu \nu}$ $S_2=\frac{-1}{4\pi\alpha'}\int d^2\sigma \epsilon^{\alpha \beta}\partial_\alpha X^\mu \partial_\beta X^\nu B_{\mu \nu}$ $S_3=\frac{1}{4\pi}\int d^2\sigma \sqrt h \Phi (X^\rho) R^{(2)}$ I consider those one-loop calculations to be very good exercises, but as a beginner in string theory I find myself unable to do them. Therefore may I ask if there are some notes/papers in the literature that give explicit calculations or point out the key steps? Thank you very much! Answer: Thanks to @Trimok, the reference he provided gives some detailed calculation. To see one of the missing Feynman diagrams (you can draw others once you understand that) and a more detailed calculation of the leading order (not one loop), you can also check http://www.itp.phys.ethz.ch/research/qftstrings/archive/13FSProseminar/LEEre_Guns which gives more useful math tricks.
{ "domain": "physics.stackexchange", "id": 10900, "tags": "string-theory, renormalization" }
Is there a Lorentz invariant approximation to General Relativity?
Question: Since General Relativity is the most accurate description of gravity, is there any possible way to derive a Lorentz invariant theory from: $$R_{\mu\nu}-\frac{1}{2} g_{\mu\nu}R+\Lambda g_{\mu\nu}=kT_{\mu\nu}$$ Assuming the metric is in the form: $g_{\mu\nu}=\eta_{\mu\nu}+h_{\mu\nu}$ where $\eta_{\mu\nu}$ is the flat spacetime metric and $h_{\mu\nu}$ is a disturbance in the metric. If there is a derivation for a Lorentz invariant theory, what would each component of the metric correspond to? Thanks Answer: If we take a flat background and perturb around it, $g_{\mu\nu}=\eta_{\mu\nu} + h_{\mu\nu}$, the Riemann tensor takes the form, $$R_{\alpha\beta\gamma\delta} = \frac12 (h_{\alpha\delta,\beta\gamma}+ h_{\beta\gamma,\alpha\delta} - h_{\alpha\gamma,\beta\delta}-h_{\beta\delta,\alpha\gamma}),$$ where commas denote covariant derivatives, which reduce to partial derivatives since they are taken with respect to the background, and in this case the background is just $\eta_{\mu\nu}$. Noting that $g^{\mu\nu}=\eta^{\mu\nu}-h^{\mu\nu}$ to first order (a good exercise to show this using the properties you know about a metric), and writing $h \equiv \eta^{\alpha\beta}h_{\alpha\beta}$ for the trace, the linearized Einstein field equations read, $$\partial^\alpha\partial_\nu {h}_{\mu\alpha} + \partial_\mu \partial^\alpha h_{\nu\alpha} - \partial^\alpha\partial_\alpha h_{\mu\nu} - \partial_\mu\partial_\nu h-\eta_{\mu\nu}(\partial^\alpha\partial^\beta h_{\alpha\beta} - \partial^\alpha\partial_\alpha h) = 16\pi T_{\mu\nu}.$$ We have yet to choose a gauge, and the choice often depends on the problem at hand. It can at times reduce it to simply the wave equation, which one can solve with Green's functions. Knowing the form of the curvature tensor, you can compute the action $S \sim \int \mathrm{d}^dx \sqrt{|g|}R$ and find a Lorentz invariant theory, which can in fact be quantised as well, like with other field theories, though it leads to a non-renormalisable field theory.
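Because the linearized Riemann tensor is built purely from second partials of a symmetric $h_{\mu\nu}$, its algebraic symmetries can be verified mechanically. The following numerical check is my own illustration, not part of the original answer; random numbers stand in for the derivatives $h_{\mu\nu,\gamma\delta}$:

```python
import numpy as np

rng = np.random.default_rng(0)

# Random stand-in for h_{mu nu, gamma delta}: symmetric in (mu, nu)
# because h is symmetric, and in (gamma, delta) because partials commute.
D = rng.normal(size=(4, 4, 4, 4))
D = D + D.transpose(1, 0, 2, 3)  # symmetrize the metric indices
D = D + D.transpose(0, 1, 3, 2)  # symmetrize the derivative indices

def R(a, b, c, d):
    # (1/2)(h_{ad,bc} + h_{bc,ad} - h_{ac,bd} - h_{bd,ac})
    return 0.5 * (D[a, d, b, c] + D[b, c, a, d] - D[a, c, b, d] - D[b, d, a, c])

Riem = np.array([[[[R(a, b, c, d) for d in range(4)] for c in range(4)]
                  for b in range(4)] for a in range(4)])

assert np.allclose(Riem, -Riem.transpose(1, 0, 2, 3))  # antisymmetric in (a,b)
assert np.allclose(Riem, -Riem.transpose(0, 1, 3, 2))  # antisymmetric in (c,d)
assert np.allclose(Riem, Riem.transpose(2, 3, 0, 1))   # pair-exchange symmetry
# First Bianchi identity R_{a[bcd]} = 0:
assert np.allclose(Riem + Riem.transpose(0, 2, 3, 1)
                        + Riem.transpose(0, 3, 1, 2), 0)
print("all linearized Riemann symmetries hold")
```

Any index typo in the linearized formula breaks at least one of these assertions, which makes the check a handy sanity test when working through such derivations by hand.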
{ "domain": "physics.stackexchange", "id": 65632, "tags": "general-relativity, special-relativity, approximations, linearized-theory" }
Traditional definition for heat capacity assume reversible processes?
Question: I believe it is widely accepted that the general definition for heat capacity is as follows $$C \equiv \frac{\delta Q}{\textrm{d} T}.$$ One also finds that it is widely taken that $$C_p = \left(\frac{\partial H}{\partial T}\right)_P,$$ and $$C_V = \left(\frac{\partial U}{\partial T}\right)_V.$$ It seems to me these latter two results were derived for reversible processes. Is it then that heat capacities are intrinsic to the material and should be path independent, meaning that the path used in the derivation is irrelevant? From the first law $$ \textrm{d} U = \delta Q + \delta W,$$ and for the Legendre-transformed variable, the enthalpy, $$ \textrm{d} H = \delta Q + \delta W + p\textrm{d} V + V \textrm{d} p.$$ It seems then that you must assume reversibility, and ignore composition, so that the work is only $\delta W = -p\textrm{d}V,$ and then $$ \textrm{d} U = \delta Q - p \textrm{d}V,$$ $$ \textrm{d} H = \delta Q + V \textrm{d} p.$$ The two heat capacities clearly follow. Otherwise it is true that as long as the process is quasi-static $$ \textrm{d} U = T \textrm{d}S - p \textrm{d}V,$$ $$ \textrm{d} H = T \textrm{d}S + V \textrm{d} p.$$ If one could write the heat capacity as $$ C = T \frac{\textrm{d} S}{\textrm{d} T},$$ without having to assume a reversible process, then again the relations would fall out. However, to do so seems to require a reversible process so that $$T \textrm{d} S = \delta Q,$$ for it is known that for an irreversible process $$T \textrm{d} S > \delta Q.$$ I would think that these results aren't path dependent, but I am not sure how to justify this thought. Answer: The internal energy, enthalpy, and entropy are physical properties of a material, independent of path. So the differences in these quantities between two closely neighboring thermodynamic equilibrium states are independent of the path between these states (which could have been very tortuous and irreversible). Therefore, the heat capacities defined in terms of U, H, and S are path independent.
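One way to make the answer's point concrete for $C_p$, without invoking reversibility: between two equilibrium end states at the same pressure, with only $pV$ work done against a constant external pressure $P$, the first law alone fixes the heat in terms of the state function $H$. This short derivation is my own illustration, not part of the original answer:

```latex
% First law for a process (possibly irreversible) against constant
% external pressure P, with only pV work:
\Delta U = Q - P\,\Delta V
% H = U + pV is a state function; between end states both at pressure P:
\Delta H = \Delta U + P\,\Delta V = Q
% Hence, for a small temperature change at fixed pressure,
% Q/\Delta T \to C_p, independent of how the path between the
% two equilibrium states was traversed:
C_p = \left(\frac{\partial H}{\partial T}\right)_P
```

The same style of argument at constant volume ($\Delta V = 0$, so $Q = \Delta U$) gives $C_V = (\partial U/\partial T)_V$ without assuming reversibility either.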
{ "domain": "physics.stackexchange", "id": 93245, "tags": "thermodynamics, work, reversibility" }
Why do we need sulfuric acid in creating alkyl bromide and not in creating alkyl iodide from alcohol and hydrogen halide?
Question: i) $\ce{R-CH_2-OH + HBr ->[H_2SO_4] R-CH_2-Br + H_2O}$ ii) $\ce{R-CH_2-OH + HI -> R-CH_2-I + H_2O}$ What role does sulfuric acid play in the first reaction? Why are we not using it in the second one? Answer: The synthesis of alkyl halides from the corresponding aliphatic alcohols using concentrated hydrohalogen acids was investigated by Klein, Zhang and Jiang.[1] They note: [W]e found that the reflux of 1-butanol ($\pu{2.34 g}, \pu{31.5 mmol}$) with $48~\%$ hydrogen bromide ($\pu{7 mL}$) for $\pu{4 h}$ on a $\pu{120 °C}$ oil bath only gave low yield of 1-bromobutane ($54~\%$) and moderate purity ($93~\%$). The yield of 1-bromobutane was improved to $82~\%$ with $90~\%$ purity by adding additional sulfuric acid ($98~\%, \pu{1 mL}$). Interestingly, we found that 1-iodobutane could be directly synthesized using 1-butanol and hydriodic acid ($\ce{HI}, 57~\%$) by reflux without adding an additional acid in $80~\%$ yield with $98~\%$ purity. They do not offer any reasoning why switching from $\ce{HBr}$ to $\ce{HI}$ gives better yields. However, we may use our chemical reasoning to deduce the reason. As has been pointed out and explained multiple times on this site, the acidity of hydrohalogen acids increases from fluorine to iodine: $\ce{HI}$ is a stronger acid than $\ce{HBr}$ is. This means that hydrogen iodide should protonate a greater percentage of alcohol molecules than hydrogen bromide — and it requires the collision of a protonated alcohol and the halide anion for the reaction to proceed ($\mathrm{S_N2}$ mechanism). Remember that all hydrohalogen acids are gases at room temperature and standard pressure and thus need to be dissolved in water to give the actual acidic solution. While sulphuric acid itself may be less acidic than both pure $\ce{HBr}$ and $\ce{HI}$, adding it to the solution will increase the solution’s overall acidity since it comes with no added water. 
Hydrogen iodide is in itself acidic enough to promote the reaction, as the experimental evidence shows, so only in the case of hydrogen bromide is additional acidity needed for the reaction to occur. Interestingly, the authors also state that alcohols with more than five carbon atoms give worse yields due to their low solubility in $\ce{HI}$. This can be overcome by adding phosphoric acid to the mixture. Again, phosphoric acid may not be strong but it features a low water content. The added acidity may protonate more alcohol molecules, allowing the $\mathrm{S_N2}$ reaction to proceed. I point to the other answers as to why using sulphuric acid instead of phosphoric acid is a bad idea in the case of hydrogen iodide. Thus, to sum up:
- hydrogen iodide is sufficiently acidic to protonate the corresponding alcohol and drive the reaction. More acids are not needed.
- hydrogen bromide is not acidic enough to drive the reaction. Adding sulphuric acid protonates the alcohol partially and allows the reaction to proceed.
- in the case of weakly soluble alcohols and hydrogen iodide, phosphoric acid may be used instead since it does not oxidise iodide (while sulphuric acid does).
[1]: S. M. Klein, C. Zhang, Y. L. Jiang, Tetrahedron Lett. 2008, 49, 2638–2641. DOI: 10.1016/j.tetlet.2008.02.106.
{ "domain": "chemistry.stackexchange", "id": 8525, "tags": "organic-chemistry, reaction-mechanism, reaction-control" }
A* path finding getting the node neighbors
Question: Is there any way I can minimise this code. The code will get the neighbours of the current node and point the arrow to the current node

private boolean isDiagonal(int x, int y, Node node) {
    if (x == -1 && y == 1) {
        node.arrow.setDrawable("bottom-right");
        return true;
    } else if (x == 1 && y == 1) {
        node.arrow.setDrawable("bottom-left");
        return true;
    } else if (x == -1 && y == -1) {
        node.arrow.setDrawable("top-right");
        return true;
    } else if (y == -1 && x == 1) {
        node.arrow.setDrawable("top-left");
        return true;
    }
    // Determine here if it's cross
    else if (x == -1 && y == 0) {
        node.arrow.setDrawable("right");
        return true;
    } else if (x == 1 && y == 0) {
        node.arrow.setDrawable("left");
        return true;
    } else if (x == 0 && y == -1) {
        node.arrow.setDrawable("up");
        return true;
    } else if (y == 1 && x == 0) {
        node.arrow.setDrawable("down");
        return true;
    }
    return false;
}

(output screenshot omitted)

Answer: Just as I answered yesterday in BFS in a grid with wall breaking saldo in Java: define a direction and use it:

enum Direction {
    // define directions for each pair of offsets
    NORTH(-1, 0, "up"),
    NORTHEAST(-1, 1, "top-right"),
    SOUTH(1, 0, "...."),
    ... ;

    public final int offsetX;
    public final int offsetY;
    public final String drawableValue;

    private Direction(int offsetX, int offsetY, String drawableValue) {
        this.offsetX = offsetX;
        this.offsetY = offsetY;
        this.drawableValue = drawableValue;
    }
};

private boolean isDiagonal(int x, int y, Node node) {
    for (Direction dir : Direction.values())
        if (x == dir.offsetX && y == dir.offsetY) {
            node.arrow.setDrawable(dir.drawableValue);
            return true;
        }
    return false;
}

Apart from that, the method-name is a lie: it does NOT check for diagonals only, so you should find a name which describes the actual action. (And just because this is code-review and someone will be tempted to say: add braces to the for loop: no. This is the style I prefer, I know that other people like it in other ways. ;-))
{ "domain": "codereview.stackexchange", "id": 24035, "tags": "java, pathfinding, a-star" }
How many Newtons are in a kg exactly?
Question: I understand that it is supposed to equal the gravity coefficient, which is defined as exactly 9.80665. But when i search google i saw one result saying 9.80665002864, and at first i thought it may have just been a precision error, but when i ran a google search for the number 9.80665002864 then i get plenty of results, all having to do with Newton conversion but none at all had anything to do with gravity. Is it just 9.80665 and all these websites made the same precision error? Or is a kg/newton not the same as exact gravity? Answer: There are a couple of misunderstandings in your questions, but I think I can see what is ultimately being asked. If you don't mind, I need to tidy up your question just a bit: How many Newtons are in a kilogram, exactly. I thought that it was 9.80665 by definition. However, I found numerous sources on the web that seem to give the exact same answer for g, the acceleration due to gravity. Most all of them seem to give 9.80665002864 meters/sec^2. What gives? First off, the others posting answers to this question are ultimately correct - there really is no direct conversion between kilograms and Newtons. Kilograms is a unit of mass (how much matter is in an object) while Newtons is a measure of the force that the Earth's gravity exerts on the object. These two may seem like they are exactly the same thing, but they are just different enough to be completely different things. Now, if we limit our question to "how many Newtons would a 1kg object weigh at the Earth's surface", then yes I suppose there is a conversion between the two, and yes that conversion is about 9.81 m/s^2. Now, on to the part you asked about the number of decimal places, accuracy, and g being "defined" as 9.80665. The fact is that g has never had any defined value to speak of (unlike the speed of light, but that's another story). 
Most of the best measurements at the Earth's surface put the g number at about 9.81, but this may vary around a bit because the Earth isn't exactly a sphere, and so forth. All of these web pages that seem to be giving you the same value of 9.80665002864 must be getting their information from the same source. Besides, I'm not sure what leads anyone to think that g can be calculated to this many decimal places - moving around from continent to continent will result in different g values that differ by roughly 1%. Hope this helps.
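For the narrow "weight at the Earth's surface" reading of the question, the arithmetic is a single multiplication. The value 9.80665 m/s² below is the conventional "standard gravity" used by most conversion tables; the helper function itself is mine, added just for illustration:

```python
G_STANDARD = 9.80665  # m/s^2, the conventional "standard gravity"

def weight_newtons(mass_kg, g=G_STANDARD):
    """Weight in newtons of a mass (kg) at local gravitational acceleration g."""
    return mass_kg * g

print(weight_newtons(1.0))        # 9.80665 N under standard gravity
print(weight_newtons(1.0, 9.78))  # 9.78 N where the local g is lower
```

The second call is the whole point of the answer: the conversion factor is local g, not a universal constant, so a 1 kg mass weighs slightly different amounts at different places on Earth.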
{ "domain": "physics.stackexchange", "id": 41793, "tags": "newtonian-gravity, mass, units, weight" }
Can we call the deterministic equivalent of an NFA finite?
Question: Finite automata is defined as a simple machine having small memory. A deterministic equivalent of an NFA with n states will have O(2^n) states, so the number of states grows exponentially. So can we always call the deterministic equivalent a finite automaton just by saying that, as n is finite, 2^n will also be finite, even though the actual memory will be very large, or is there some limitation? Answer: Finite automata is defined as a simple machine having small memory. In fact, they are not defined in this way. You can look up the actual definition on Wikipedia. Even as an informal definition, the adjective small is inappropriate here — in fact, DFAs and NFAs have constant memory. You are using the letter $n$ to represent the number of states in a finite automaton. The letter $n$ is usually reserved for the input length. The number of states in a finite automaton doesn't depend on the length of the input. On the contrary, the number of states is fixed ahead of time, and the finite automaton then uses exactly the same number of states for any input, of any length. Perhaps it will be less confusing if you use a different symbol, say $m$. ... the actual memory will be large ... In theoretical computer science, there are no arbitrary constants thrown around. Today, $2^{50}$ bytes of memory can be considered large, but next year it might be $2^{60}$ bytes of memory. The same theory should account for both. In theoretical computer science, the formal meaning of large (or, more accurately, super-constant) is "tending to infinity with $n$", where $n$ is the input length. Finite automata use a fixed amount of memory which is independent of the input length, and so they never use a large amount of memory, as far as theorists are concerned. In practice, some automata are too big to actually implement. But this is a practical restriction rather than a restriction on the actual definition.
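The fact that the determinized automaton stays finite (at most 2^m states for an m-state NFA, fixed before any input is read) can be made concrete with the textbook subset construction. This sketch is mine, not the answer's; the example NFA, which accepts binary strings whose second-to-last symbol is 1, is an assumption chosen just to show the bound:

```python
from itertools import chain

def nfa_to_dfa(states, alphabet, delta, start, accept):
    """Textbook subset construction.  `delta` maps (state, symbol) to a
    set of states.  The DFA's states are frozen subsets of the NFA's
    states, so at most 2^m of them exist for an m-state NFA -- always
    finite, just possibly exponentially larger."""
    start_set = frozenset([start])
    dfa_delta, todo, seen = {}, [start_set], {start_set}
    while todo:
        S = todo.pop()
        for a in alphabet:
            T = frozenset(chain.from_iterable(delta.get((q, a), ()) for q in S))
            dfa_delta[(S, a)] = T
            if T not in seen:
                seen.add(T)
                todo.append(T)
    dfa_accept = {S for S in seen if S & accept}
    return seen, dfa_delta, start_set, dfa_accept

# 3-state NFA over {0,1} accepting strings whose 2nd-to-last symbol is 1:
delta = {(0, "0"): {0}, (0, "1"): {0, 1}, (1, "0"): {2}, (1, "1"): {2}}
states, *_ = nfa_to_dfa({0, 1, 2}, "01", delta, 0, {2})
print(len(states))  # prints: 4 reachable subset-states, within the 2^3 = 8 bound
```

Only the reachable subsets are materialized, so in practice the blow-up is often far below the worst-case 2^m — but even the worst case is a fixed, input-independent number, which is the answer's point.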
{ "domain": "cs.stackexchange", "id": 11702, "tags": "automata, finite-automata, nondeterminism" }
Why is the acceleration of the boulder and pallet half that of the truck
Question: While working on some homework I accidentally seem to have reached a bit of a mental block on this relatively simple problem. A 5000-lb truck is used to lift a 1000-lb boulder sitting on a 200-lb pallet A. Knowing the acceleration of the truck is $\text{1 m/s}^2$. Determine (a) the horizontal force between tires and the ground, (b) the force between the boulder and the pallet. I'm not asking to solve this, I can actually find multiple copies of the solution online. But all of them just take as an additional assumption: $$a_{boulder}=a_{pallet}$$ This makes sense as they are essentially connected for the benefit of this problem (I'm also not really sure at what point that assumption breaks down). My question is: how do we arrive at $$ a_{boulder} = a_{pallet} = \text{0.5 m/s}^2 $$ I'm assuming that (given the image accompanying this problem) we are making an assumption that for each time interval $t$ the truck moves some distance $x$, and the rope is doubled up and only moves $\frac{1}{2}x$. However I don't necessarily feel satisfied by that and trying to model that assumption in my head isn't convincing me. Are my assumptions here valid? Is there another way of looking at this that might be more understandable than the mental model I've constructed? Answer: All these additional assumptions can be called hidden constraints of the problem. $a_{boulder}=a_{pallet}$ because they always have the same height in your problem, i.e. $h_{boulder}=h_{pallet}$. Take second derivatives on both sides and you will conclude $a_{boulder}=a_{pallet}$. As for why $a_{pallet}=1/2 a_{truck} = 0.5 m/s^2$, consider conservation of rope length. If your truck moves a distance $x_{truck}$, the horizontal part of the rope must also increase by the same length, and thus the vertical rope length must decrease by the same length.
Since the vertical rope is doubled over, decreasing its length by $x_{truck}$ only raises the pallet and boulder by a height of $h_{pallet} = 1/2 x_{truck}$. Take second derivatives on both sides and you will conclude $a_{pallet}=1/2a_{truck}$.
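The chain of reasoning (halved displacement, then two time derivatives) can be checked numerically. This is a sketch under the stated constraint $h_{pallet} = \tfrac{1}{2}x_{truck}$, with the truck undergoing constant acceleration $a = 1\ \text{m/s}^2$:

```python
def second_derivative(f, t, dt=1e-3):
    # central finite-difference estimate of f''(t)
    return (f(t + dt) - 2 * f(t) + f(t - dt)) / dt**2

a_truck = 1.0                              # m/s^2, given in the problem
x_truck = lambda t: 0.5 * a_truck * t**2   # truck displacement from rest
h_pallet = lambda t: x_truck(t) / 2        # rope constraint: half the displacement

print(second_derivative(h_pallet, 2.0))    # ~0.5 m/s^2, i.e. a_truck / 2
```

Differentiating the constraint twice, rather than reasoning about a single time interval, is what makes the factor of one half exact at every instant.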
{ "domain": "physics.stackexchange", "id": 67202, "tags": "homework-and-exercises, newtonian-mechanics, kinematics, acceleration" }
Which electronic transition of the hydrogen spectrum corresponds to "the third line from the red end"?
Question: In the Bohr series of lines of the hydrogen spectrum, the third line from the red end corresponds to which one of the following inner-orbit jumps of the electron for Bohr orbits in an atom of hydrogen? (A) $3\to2$ (B) $5\to2$ (C) $4\to1$ (D) $2\to5$ Only one option is correct. My approach I just drew the electron transition diagram [diagram omitted]. Then, I counted the third line from the red end, i.e., from the left side starting from the first line of the particular series. I then cross-checked with the given options. Surprisingly, two options matched, i.e., option B as well as C. I don't know how to decide which is the correct answer out of B and C, since only one answer is correct. There are no other details mentioned in the question and thus I am confused. Answer: Since only one answer is correct, we must consider the transitions in the visible region only, i.e., the Balmer series. If multiple answers were allowed, it would be correct to choose both options B and C. The transition $n_2 \to n_1$ can be easily calculated using the following formula; it is not necessary to draw the transitions, which would be time-consuming for higher series. $$n_2=n_1 + n$$ where $n$ represents the $n^{th}$ line in a particular series. Since here we are talking about the Balmer series, $n_1=2$ and for the third line $n=3$. Substituting the values $$n_2=2+3=5$$ Hence the required transition is given by option B.
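The counting rule can be combined with the Rydberg formula to confirm that the third Balmer line really is visible. A small sketch (the Rydberg constant value is approximate; the helper name is my own):

```python
R = 1.097e7  # Rydberg constant in m^-1 (approximate)

def balmer_line(k):
    """Transition and wavelength (nm) of the k-th Balmer line from the red end."""
    n1, n2 = 2, 2 + k          # the counting rule: n2 = n1 + k
    lam_m = 1 / (R * (1 / n1**2 - 1 / n2**2))
    return n2, lam_m * 1e9

n2, lam = balmer_line(3)
print(n2, round(lam, 1))  # the 5 -> 2 transition, roughly 434 nm (visible violet)
```

For comparison, `balmer_line(1)` gives the familiar $3\to2$ H-alpha line near 656 nm, consistent with counting from the red end.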
{ "domain": "chemistry.stackexchange", "id": 12514, "tags": "nomenclature, spectroscopy" }
Which of the following sampling methods can be used to sample x(t) such that this signal can be uniquely recovered from its samples?
Question: Assume that a continuous-time signal $x(t)$, with its frequency content in the band $1612\leq|F|\leq2015$ (Hz), is sampled at the rate $F_s=806$ samples per second. Which of the following sampling methods can be used to sample $x(t)$ such that this signal can be uniquely recovered from its samples? A. Uniform first-order sampling B. Uniform second-order sampling C. None of the above I have searched a lot for the concept of uniform $n^{th}$-order sampling here but haven't found anything useful or related; to be honest, I don't know the concepts behind those sampling techniques. Can someone provide a full explanation for this MCQ, please? Answer: B): $f_s \geq 2B = 2\cdot(2015-1612) = 2\cdot 403 = 806$. "Second-order" bandpass sampling is described in this paper (or older, here). It's sampling $x(t)$ at a lower sampling rate, $M$ times, each with a different offset, then keeping only (carefully selected) 2 of the $M$ sequences: $$ \begin{align} x_A(n) &= x(n / f_0) \\ x_B(n) &= x(n / f_0 + k) \\ x(n) &= x_A(n) + x_B(n) \end{align} $$ The trick is in appropriately choosing $f_0$ and $k$. Graphically, the paper shows $x(t)$'s spectrum, and that spectrum sampled at $f_0 = 2B$ (showing one half of one of the sequences); sampling in time corresponds to periodizing in frequency. From this it's clear that an appropriate combination of samplings of $x(t)$ at a rate $f_s \geq 2B$ will uniquely represent $x(t)$. This theoretical minimum average sampling rate, $2B$, is called the Nyquist-Landau rate. The motivation is described in the paper's introduction: Uniform sampling of $x(t)$ at the Nyquist rate $2(f_L + B)$ will be impractical when the frequency is high because this will increase the power consumption of the ADC applied in the sampling operation, resulting in reduced overall system efficiency. But don't be misled; the Nyquist frequency imposes a fundamental limit on representing variations with a finite number of samples.
A consequence is, to recover $x(t)$, we require specifically designed interpolation functions that use the knowledge of $f_L$ and $B$. The dependence on $B$ follows directly from inability to represent a greater range of variations with fewer samples.
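A quick sanity check of the rates involved, as a sketch in Python (the band edges are taken from the question):

```python
f_L, f_H = 1612.0, 2015.0   # band edges in Hz, from the question
B = f_H - f_L               # bandwidth: 403 Hz

nyquist_landau = 2 * B      # minimum average rate for bandpass sampling schemes
lowpass_nyquist = 2 * f_H   # rate demanded by the ordinary lowpass theorem

print(B, nyquist_landau, lowpass_nyquist)  # 403.0 806.0 4030.0
```

So $F_s = 806$ Hz sits exactly at the Nyquist-Landau bound $2B$, far below the $2 f_H = 4030$ Hz a naive lowpass treatment would require; recovering $x(t)$ at that average rate calls for a bandpass scheme such as the second-order sampling described above.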
{ "domain": "dsp.stackexchange", "id": 10416, "tags": "discrete-signals, sampling, nyquist, analog-to-digital" }
Why is the electric field inside a charged conductor zero?
Question: When a neutral conductor is positively charged, the excess charges move to the boundary. Initially, inside the conductor there exists a field due to these excess charges which, as per my reading, results in the rearrangement of mobile electrons to cancel the initial field. My doubt is how exactly the electrons are redistributed without leading to another electric field inside the conductor. Answer: The net electric field inside a conductor is always zero, as you have said. This is basically due to the presence of a lot of free electrons in the conductor. Consider a conducting plate kept in a uniform electric field perpendicular to the plane of the conductor. The electric field exerts a force F = -eE on the free electrons of the conductor, pulling the electrons to the left side (the direction opposite to the electric field) of the conductor and creating a negative charge accumulation on that side. This in turn creates a positive charge accumulation on the other side of the conductor. This accumulation starts to create an electric field opposite in direction to the existing electric field (that is, the new field opposes the already existing one). The accumulation increases, and so does the opposing electric field, until both fields become equal in magnitude. At that point the new opposing field completely cancels out the existing one (only inside the conductor), so the net electric field inside the conductor becomes 0. All this happens within nanoseconds, so the conductor isn't noticeably affected by any new electric field coming across it: it is cancelled out almost instantly. Getting to your question: the excess charges create an electric field inside the conductor. The mobile electrons then rearrange, as you have said, to create another electric field which totally cancels out the existing field inside the conductor. It is the net electric field that is zero inside a conductor.
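The balance described above can be illustrated with the infinite-sheet model (a hypothetical numerical sketch, not from the original answer): a slab in an external field $E_{ext}$ develops surface charge densities $\pm\sigma$ on its faces, and that pair of charged sheets contributes a field $\sigma/\varepsilon_0$ between them, opposing $E_{ext}$. Equilibrium is reached at $\sigma = \varepsilon_0 E_{ext}$:

```python
eps0 = 8.854e-12          # vacuum permittivity, F/m

E_ext = 100.0             # external field, V/m (arbitrary test value)
sigma = eps0 * E_ext      # induced surface charge density at equilibrium

# Two oppositely charged sheets each contribute sigma/(2*eps0) between them,
# both directed against E_ext, so the induced field inside is sigma/eps0.
E_induced = sigma / eps0
E_net_inside = E_ext - E_induced
print(E_net_inside)       # ~0: the interior field is fully cancelled
```

Any smaller $\sigma$ would leave a residual field that keeps driving electrons, which is exactly why the accumulation stops only when cancellation is complete.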
{ "domain": "physics.stackexchange", "id": 82933, "tags": "electrostatics, electric-fields, conductors" }
Reading multiple signed digits from serial port
Question: I'm using the Simulink Support Package for Arduino to read serial data from port2 on an Arduino Due. My plan is to read signed integers (-415 for example) representing motor speed and feed them to the PID controllers as in the image. From the far end I'm sending delimited data in the following shape: [image omitted]. The MATLAB function in Simulink is supposed to read the received ASCII characters and add them to a variable until it reaches the end character '>'. I'm using the following simple code just to give the output on both the Right and Left to check if I'm receiving the correct data; however, I'm not. function [Right ,Left] = fcn(data,status) SON = '<'; EON = '>'; persistent receivedNumber; receivedNumber = 0; persistent negative; negative = false; if(status ==1) switch(data) case EON if (negative) receivedNumber = -1*receivedNumber; else receivedNumber = 1*receivedNumber; end case SON receivedNumber = 0; negative = false; case {'0','1','2','3','4','5','6','7','8','9'} receivedNumber = 10*receivedNumber; receivedNumber = receivedNumber + double((data - 48)); case '-' negative = true; end end Right = receivedNumber; Left = receivedNumber; end Can anybody tell me if there are any other approaches to read multiple signed digits in Simulink? Take into consideration that I have to use the support package for Arduino, since my PID controllers are already configured in Simulink and interfaced with port2 on the Arduino (which will be connected to a BeagleBone Black). Answer: It looks like you're doing this in a very round-about manner. You said, The MATLAB function in Simulink is supposed to read the received ASCII characters and add them to a variable until it reaches the end character '>'. It looks like you are thinking the "signed numbers" are coming in as a string that you'll read one digit at a time, where the entire string is delimited by a start and stop magic number (the '<' and '>' symbols).
A couple of things I would point out: First and foremost, typically a signed integer is just sent as a byte or (more typically) a series of bytes. For example, a 16-bit signed integer (short signed integer) is composed of two 8-bit bytes. To convert to the signed 16-bit integer, just use the MATLAB function int16. You're doing it the especially long and painful way of sending a string, with delimiters, then reading that string in one digit at a time, converting the characters to numbers, then multiplying the current number by 10 every time you want to concatenate a new digit. Probably where you're running into more trouble code-wise is that you're not using the MATLAB persistent keyword correctly. From the documentation If the persistent variable does not exist the first time you issue the persistent statement, it is initialized to the empty matrix. This is the check you should be making. If the variable you want to be persistent is empty, then initialize it to zero. You are currently declaring it persistent and then immediately setting it to zero every function call! Your code is: persistent receivedNumber; receivedNumber = 0; when it should be: persistent receivedNumber; if isempty(receivedNumber) receivedNumber = 0; end This way, receivedNumber only gets set to zero IF IT IS EMPTY (meaning it was just created). In all other cases it should be retaining the value it had and thus would be NOT empty. For the record, you need to do the same thing for negative; they're both being implemented incorrectly. Now, about your other question, Can anybody tell me if there are any other approaches to read multiple signed digits in Simulink? Yes, there are a couple things you could do. First, you could just concatenate the string and convert to double when you're done.
That would look something like the following: if(status ==1) switch(data) case EON output = str2double(receivedNumber); case SON receivedNumber = ''; otherwise receivedNumber(end+1) = data; end end This code creates output by converting the running receivedNumber to a 'double' format when you hit the end-of-number delimiter. On receiving the start-of-number delimiter, receivedNumber is reinitialized to an empty character array. Any other input appends whatever the current character is to the string. In going through the code looking for your output variable, I noticed that you have Left and Right, which output your current receivedNumber. I'd point out that, generally speaking, receivedNumber is an intermediate result. I would create a persistent variable called output instead, assign Left and Right to the output, and then only update output when you have a full number to report. You could further "bulletproof" the function by checking that the output of the str2double command is, in fact, a number, and that you didn't pick up a mis-transmitted byte somewhere along the way that would look something like -1Q65. Right now you don't have any error checking on your inputs; if you get a mis-transmitted byte that is not a valid digit then you just skip handling it and then treat the next valid digit as correct. This is probably not what you should be doing; if, for example, you should have gotten -1065 and got -1Q65 instead, your code would output: 0; 1; 1; 16; 165; -165; The first zero is the output when you get '<', then you output 1 when you receive it, then you output 1 again because you don't handle Q, then you output 16, then 165, then you apply the negative sign when you hit the end delimiter >.
The way I would suggest doing it instead would be: persistent output; persistent receivedNumber; if isempty(output) output = 0; receivedNumber = ''; end if status == 1 switch(data) case '<' receivedNumber = ''; case '>' tempResult = str2double(receivedNumber); if ~isnan(tempResult) output = tempResult; end otherwise receivedNumber(end+1) = data; end end Now your output is all zeros until the number is done being parsed, then it's the output number. The more important thing here is that, when you start receiving the next number, the output continues to be the previous number until the current number is done being parsed. If there is a malformed byte somewhere, the str2double conversion will output a NaN (Not A Number), and the ~isnan() (not isnan; MATLAB uses ~ instead of !) check catches the mis-created number and the result is discarded. Consider your previous code getting -1065, -1Q65, and then -1064. There's almost no difference there, so I wouldn't expect much response from your PID controller, but the way you're handling your feedback will give you nothing but headaches. You would get: 0; 1; 10; 106; 1065; -1065; 0; 1; 1; 16; 165; -165; 0; 1; 10; 106; 1064; -1064; There are 18 numbers there, only three of them have the correct sign, and only two of them are actually correct. Granted you were given a faulty input, but what should your system do in that circumstance? Error out, discard the input, or pass the faulty input as truth? Whatever your decision on how an error should be handled, for control purposes, you should only be releasing complete and valid data. :EDIT: I just re-read my answer, and thought I'd reiterate that the persistent keyword isn't being used correctly. All of my examples of what your output would be are assuming you fix that. Otherwise, your code would output only whatever the current digit is.
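The final MATLAB approach can be mirrored in plain Python for offline testing of the framing logic (a hypothetical sketch, not part of the original answer): keep the last complete valid value, and discard malformed frames such as -1Q65.

```python
class DelimitedIntParser:
    """Byte-at-a-time parser for frames like '<-1065>'.

    Mirrors the persistent-output idea: `output` holds the last complete,
    valid value; malformed frames (e.g. '<-1Q65>') are discarded.
    """

    def __init__(self):
        self.buf = None   # None means "not inside a <...> frame"
        self.output = 0   # last complete, valid value

    def feed(self, ch):
        if ch == '<':
            self.buf = ''                     # start a new frame
        elif ch == '>':
            if self.buf is not None:
                try:
                    self.output = int(self.buf)   # int() rejects '-1Q65'
                except ValueError:
                    pass                          # discard malformed frame
            self.buf = None
        elif self.buf is not None:
            self.buf += ch                    # accumulate sign and digits
        return self.output

p = DelimitedIntParser()
readings = [p.feed(c) for c in '<-1065><-1Q65><-1064>']
print(readings[-1])  # -1064; the malformed middle frame never leaks out
```

Feeding the same three frames from the answer's example, the consumer only ever sees 0, -1065, and -1064: complete, validated values, never partial digits.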
{ "domain": "robotics.stackexchange", "id": 1247, "tags": "arduino, matlab, serial" }
quantization of angular momentum
Question: What is the most direct way of observing the quantization of angular momentum? Answer: Rotational spectroscopy seems like a pretty obvious demonstration of quantised angular momentum. If you look at the microwave absorption spectrum of a diatomic molecule (with a non-zero dipole moment) you'll find equally spaced absorption lines. These arise because the angular momentum can only change in integral jumps of $\hbar$.
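The equal spacing follows from the rigid-rotor energies $E_J = B\,J(J+1)$: the $J \to J+1$ absorption lines sit at $2B(J+1)$, so consecutive lines are separated by $2B$. A sketch in arbitrary units ($B = 1$; for a real molecule $B$ is its rotational constant):

```python
def rot_energy(J, B=1.0):
    # rigid-rotor level: E_J = B * J * (J + 1)
    return B * J * (J + 1)

def absorption_line(J, B=1.0):
    # photon energy for the J -> J+1 transition: 2B(J+1)
    return rot_energy(J + 1, B) - rot_energy(J, B)

lines = [absorption_line(J) for J in range(5)]
spacings = [b - a for a, b in zip(lines, lines[1:])]
print(lines)     # [2.0, 4.0, 6.0, 8.0, 10.0]
print(spacings)  # every gap is exactly 2B
```

A classical rotor could absorb at any frequency; the ladder of equally spaced lines is the direct fingerprint of angular momentum changing in discrete steps.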
{ "domain": "physics.stackexchange", "id": 7269, "tags": "quantum-mechanics, angular-momentum" }
How does energy (from fusion reactions still inside the sun) still have gravitational attraction?
Question: In this answer they say: While the conversion of mass matter† to energy in the Sun's core now represents a loss of mass proper matter, it turns out that that energy (trapped in the Sun and slowly diffusing towards the surface) will have the same gravitational attraction as the matter it came from until it actually escapes the Sun! Now I expect that when mass is converted to energy, it has become an electromagnetic wave that can propagate through matter or empty space. My question is: How does energy (from fusion reactions still inside the sun) still have gravitational attraction? Answer: I think it is best not to think of energy having a gravitational attraction, rather that energy, like mass, (since they are equivalent keeping in mind c) will curve spacetime. Electric charges, magnets, and electromagnetic fields will curve spacetime, Scientific American. So whether it is mass or energy, they both will affect the curvature of spacetime, and the sun's total impact on spacetime will not change.
{ "domain": "astronomy.stackexchange", "id": 4328, "tags": "the-sun, gravity, mass" }
Efficient algorithm for this optimization problem? Dynamic programming?
Question: I've created a diagram that depicts what I'm trying to accomplish. In the input sequence, the nodes are as close together as possible. But I want the white nodes to be as close to their respective black nodes as possible. The edges between nodes can be lengthened to try to minimize this error. They cannot be shortened. So, 1 -> 2 can be no less than 4, for example. I've included a possible solution. The edges that have been lengthened are labeled. Note that lengthening an edge shifts all the nodes to its right. This axis is continuous, but I could possibly discretize it if that helps. I'm thinking a dynamic programming approach could work here but I'm not sure - I was never very good with DP. What's the fastest running algorithm that can solve this? Can this be categorized / re-framed as a well-known problem? Answer: This is just an extension to @Sébastien Loisel's answer. Notice that minimizing $\sum_i (x_i-y_i)^2$ subject to $x_i-x_{i-1}\ge c_i$ is equivalent, after substituting $\tilde x_i = x_i - C_i$ with $C_i = \sum_{j\le i} c_j$, to minimizing $\sum_i (\tilde x_i-(y_i-C_i))^2$ subject to $\tilde x_i\geq \tilde x_{i-1}$. Let $a_i=y_i-C_i$; then this is precisely the isotonic regression problem. There exists an $O(n)$ time algorithm.
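The standard $O(n)$ method for the resulting isotonic regression is the pool-adjacent-violators algorithm (PAVA). The sketch below is not from the thread (PAVA is a standard algorithm, and the variable names are my own); it fits a nondecreasing sequence in least squares and then maps it back to node positions via the cumulative minimum gaps:

```python
def pava(a):
    """Least-squares nondecreasing fit to `a` (pool adjacent violators)."""
    blocks = []  # each block: [mean, weight, count]
    for v in a:
        blocks.append([v, 1.0, 1])
        # merge while the last two blocks violate monotonicity
        while len(blocks) > 1 and blocks[-2][0] > blocks[-1][0]:
            m2, w2, c2 = blocks.pop()
            m1, w1, c1 = blocks.pop()
            w = w1 + w2
            blocks.append([(m1 * w1 + m2 * w2) / w, w, c1 + c2])
    return [m for m, _, c in blocks for _ in range(c)]

def place_nodes(y, c):
    """Positions x minimizing sum (x_i - y_i)^2 with gaps x_i - x_{i-1} >= c_i."""
    C, s = [], 0.0
    for ci in c:
        s += ci
        C.append(s)                       # cumulative minimum gaps
    fit = pava([yi - Ci for yi, Ci in zip(y, C)])
    return [f + Ci for f, Ci in zip(fit, C)]

x = place_nodes(y=[0.0, 3.0, 5.0], c=[0.0, 4.0, 4.0])
print(x)  # gaps are >= 4 and positions stay as close as possible to targets
```

In the tiny example both gap constraints are tight, so the whole chain rigidifies and shifts to the least-squares position, which is exactly the "pooling" behaviour PAVA implements.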
{ "domain": "cs.stackexchange", "id": 4484, "tags": "algorithms, dynamic-programming" }
Script that tells you the amount of base required to neutralise acidic nootropic
Question: Function Gives the mass needed of specific basic (pH above 7) substance in order to neutralise the pH of a specific acidic substance. from decimal import Decimal as d # m = mass # n = moles # M = molar mass MkHCO3 = 100.11 MnaKHCO3 = 84.007 MphenHCl = 215.67 moles = lambda m,M: m/M nPhenHCl = moles(1.54, MphenHCl) mSubstance = lambda x,y: x*y mKHCO3 = mSubstance(MkHCO3, nPhenHCl) mNaHCO3 = mSubstance(MnaKHCO3, nPhenHCl) nl = '\n' print(f'{nl}KHCO3:') print(f'{d(mKHCO3):.2f}g{nl}') print(f'NaHCO3:') print(f'{d(mNaHCO3):.2f}g{nl}') Output KHCO3: 0.71g NaHCO3: 0.60g Background I know using a named lambda defeats the purpose but the function was so small I couldn't resist. This script is just for me which is why I hardcoded the value of the nootropic. I use single variables because I'm a chemist and that's how these equations are. If it was for others I wouldn't do these things. I usually write bash scripts to do things on the filesystem, I could do that with python but there's too much overhead, bash is better suited for that task. In doing so I write scripts (like the above) with a single use in mind. Which is why classes are hard for me to grasp. I've not yet come up with an idea big enough to use a class I think that's my problem, also because I don't practice much and online example are silly and don't apply to the real world. The above code is probably useless to use as a class, is that right? I need ideas on when a class would be required or how you'd even turn the above code into a class or if it's just a waste of time. Class Idea, too much? If I compiled a bunch of acid bases and made a class for that? if you have a weak acid (vinegar) plus a strong base (sodium hydroxide) then you have to use logs and more complex equations, for a weak acid + weak base, (or strong acid + weak base and vice versa) you also have rate equations which will tell you how long these reactions will take. 
I could also add reaction feasibility stuff which will tell you if a reaction is possible. Other things like deriving the pressure and temperature needed for a reaction to occur. Could I have a class for that or is it too much? For the curious, because of the shambles that went on in the comments (now unavailable to see): all this info is really not needed, that's why I said 'for the curious'. Neutralisation NaHCO3(aq) + phen-HCl(aq) -> phen(aq) + NaCl(aq) + H2O(l) + CO2(g) aq means aqueous (something dissolved in water) Aqueous bicarbonate salt + aqueous medicine salt -> aqueous medicine + water + gaseous carbon dioxide From the equation you can see that the reactants are 1:1. Simpler Neutralisation HCl(aq) + NaOH(aq) -> NaCl(aq) + H2O(l) You might notice most medicines are salts; med-HCl, med-phosphate etc. This is for long shelf life, or because the medicine might smell really bad, among other reasons. Moles (n) is just a ratio. The moles function gives a decimal which you can multiply by the other compound's molar mass to find out what mass of it is needed to react completely and form the products. Why you have to multiply n by molar mass Molar mass differs for each atom/compound (or say particles, to group them all) because their weights vary; they have fewer or more protons, neutrons and electrons. Molar mass contains the same number of particles for any atom/molecule/compound per mol: molarMass = mass which has 6.022*10^23 particles. This is Avogadro's number, or NA, and is what 1 mol means; mol is different from moles. 1 mol = mass which contains NA particles. moles = mass/molarMass = ratio. So you multiply n by the other compound's molar mass to get the mass that contains an equal number of particles. Demystifying moles (n) Edited, I was sleepy and had nonsense before. x = £20 per 100g y = £12 per 50g z = what the price of y should be. Where x and y are the same products. You want to know the price per 1g. So you can know z.
x[ratio] = 1 / 100 = 0.01 x[price per 1g] = x[ratio] * x[price] = £0.20 per 1g y[ratio] = 1 / 50 = 0.02 y[price per 1g] = y[ratio] * y[price] = £0.24 per 1g z[price per 50g] = x[price per 1g] * 50 = £10.00 per 50g That's all moles is, and you probably do it to compare small vs large product prices to see which is cheaper/how much you save. Ionic and covalent bonding KHCO3 is an inorganic compound. K+ binds to bicarbonate, HCO3-, creating a salt, potassium bicarbonate; this is an ionic bond (two opposite charges attracting). Salts have ionic bonds and break up in water (excluding some crystals), giving ions; atoms/molecules with charges. This is why salt water is more conductive than water. More on ions Water exists in an equilibrium: 2H2O <-> H3O+ + OH- (lying almost entirely on the left). When electricity is passed through water, it ionises much more, causing the position of equilibrium to shift to the right. The bonding in KHCO3: K(+)(-O3-C-H) C is bonded covalently (the dashes -) to each of the oxygen atoms and the hydrogen atom. This bond is strong: electrons are shared, held by the electrostatic attraction between the shared electrons and the nuclei, and it requires high energy to break, so these bonds remain unchanged in water. This is the way salts work, and why they ionise in water. Take NaCl: it's Na(+)Cl(-). You add water and it ionises to Na(+) + Cl(-). The free Na(+) is what allows you to taste salt. H2O2 represents Hydrogen Peroxide. When used as a variable name it doesn't have the same meaning as in chemistry. It is just a name, and the underlying object is a number. It doesn't have chemical properties. So the variables contain the molar mass of the compound. The only physical property I'm interested in is the molar mass, which is a number. You can calculate the mass for a reaction very easily; chemical properties have nothing to do with the calculation, only physical properties do, like mass in this case.
Example of 2:1 reaction 2H2O2(aq) -> 2H2O(l) + O2(g) If this is reversed, take the molar mass of both of the reactants; then say for O2 you have a mass of 10g: O2[n] = O2[mass] / O2[molarMass] H2O[mass required to completely react with O2] = O2[n] * (H2O[molarMass] * 2) That was all done without chemical properties, only physical properties. Might be obvious but to the fervent few well I hope this calms you, for the curious I hope you learned something. Conclusion I used to add about half a teaspoon of baking soda or potassium bicarbonate, this gave a salty or bitter taste respectively and was irritating to the throat. Using the calculated masses for the bases there is hardly a taste and no irritation. Answer: I'll demonstrate two directions for your code. The first is a simplification and output addition - since the code is already so short, I believe that it could be made more legible with units baked into the variable names, so that you can follow along with unit arithmetic just by reading the names nearly as you would on paper. This changes the following: Remove decimal; you don't need it here. Add friendly names for compounds in comments. Consider using the Unicode point for subscript-2 and 3 in your output. Consider using milligrams instead, since they can show more accuracy with fewer characters in this case. Since clarity is of paramount importance in pharmacology, write out your two reactions in full. Don't bother making a constant for a newline. SI convention is to have a space between the quantity and the unit. More broadly: you're usually writing code on your own (which isn't the end of the world). Seeking a review is a great idea. Adding clear comments, variable names and output content is an important step to branching out so that you aren't the only person who understands your code.
Having most of the context for your calculations in your head is fine until it isn't: when their complexity increases, when time passes and (like we all do) things are forgotten, or when you want to share this with someone else. Units without functions # Units g = 1 mg = 1e3 * g mol = 1 mmol = 1e3 * mol # Molar masses KHCO3_g_mol = 100.115 # Potassium bicarbonate NaHCO3_g_mol = 84.0066 # Sodium bicarbonate phenHCl_g_mol = 215.67 # Phenibut hydrochloride # Quantities for neutralization reactions phenHCl_g = 1.54 phenHCl_mol = phenHCl_g / phenHCl_g_mol KHCO3_mol = phenHCl_mol NaHCO3_mol = phenHCl_mol KHCO3_g = KHCO3_g_mol * KHCO3_mol NaHCO3_g = NaHCO3_g_mol * NaHCO3_mol print(f' KHCO₃ + Phen-HCl → Phen + KCl + H₂O + CO₂') print(f'{KHCO3_mol * mmol/mol:6.3f} + {phenHCl_mol * mmol/mol:8.3f} (mmol)') print(f'{KHCO3_g * mg/g:6.1f} + {phenHCl_g * mg/g:8.0f} (mg)') print() print(f'NaHCO₃ + Phen-HCl → Phen + NaCl + H₂O + CO₂') print(f'{NaHCO3_mol * mmol/mol:6.3f} + {phenHCl_mol * mmol/mol:8.3f} (mmol)') print(f'{NaHCO3_g * mg/g:6.1f} + {phenHCl_g * mg/g:8.0f} (mg)') KHCO₃ + Phen-HCl → Phen + KCl + H₂O + CO₂ 7.141 + 7.141 (mmol) 714.9 + 1540 (mg) NaHCO₃ + Phen-HCl → Phen + NaCl + H₂O + CO₂ 7.141 + 7.141 (mmol) 599.9 + 1540 (mg) Units with functions # Units g = 1 mg = 1e3 * g mol = 1 mmol = 1e3 * mol # Molar masses KHCO3_g_mol = 100.115 # Potassium bicarbonate NaHCO3_g_mol = 84.0066 # Sodium bicarbonate phenHCl_g_mol = 215.67 # Phenibut hydrochloride # Quantities for neutralization reactions phenHCl_g = 1.54 phenHCl_mol = phenHCl_g / phenHCl_g_mol def show_neutralization(salt_name: str, salt_g_mol: float, chloride_name: str) -> None: salt_mol = phenHCl_mol salt_g = salt_mol * salt_g_mol print(f'{salt_name:>6} + Phen-HCl → Phen + {chloride_name} + H₂O + CO₂') print(f'{salt_mol * mmol/mol:6.3f} + {phenHCl_mol * mmol/mol:8.3f} (mmol)') print(f'{salt_g * mg/g:6.1f} + {phenHCl_g * mg/g:8.0f} (mg)') print() def main() -> None: show_neutralization(salt_name='KHCO₃', 
salt_g_mol=KHCO3_g_mol, chloride_name='KCl') show_neutralization(salt_name='NaHCO₃', salt_g_mol=NaHCO3_g_mol, chloride_name='NaCl') if __name__ == '__main__': main() KHCO₃ + Phen-HCl → Phen + KCl + H₂O + CO₂ 7.141 + 7.141 (mmol) 714.9 + 1540 (mg) NaHCO₃ + Phen-HCl → Phen + NaCl + H₂O + CO₂ 7.141 + 7.141 (mmol) 599.9 + 1540 (mg) Classes Make your classes immutable via NamedTuple. This demonstrates a simple two-class system for substance definitions and reactant instances. You could throw in a lot more complexity, such as Equation, but that's not really needed. from typing import NamedTuple # Units g = 1 mg = 1e3 * g mol = 1 mmol = 1e3 * mol class Substance(NamedTuple): short_name: str long_name: str g_mol: float # molar mass def __str__(self) -> str: return self.short_name class Reactant(NamedTuple): substance: Substance mol: float @classmethod def from_mass(cls, substance: Substance, g: float) -> 'Reactant': return cls(substance, g/substance.g_mol) @property def g(self) -> float: # mass in grams return self.mol * self.substance.g_mol def __str__(self) -> str: return f'{self.substance}: {self.g * mg/g:.1f} mg' KHCO3 = Substance('KHCO₃', 'potassium bicarbonate', 100.115) NaHCO3 = Substance('NaHCO₃', 'sodium bicarbonate', 84.0066) PhenHCl = Substance('Phen-HCl', 'phenibut hydrochloride', 215.67) def main() -> None: phenhcl = Reactant.from_mass(PhenHCl, g=1.54) khco3 = Reactant(KHCO3, phenhcl.mol) nahco3 = Reactant(NaHCO3, phenhcl.mol) print('KHCO₃ + Phen-HCl → Phen + KCl + H₂O + CO₂') print(khco3) print(phenhcl) print() print('NaHCO₃ + Phen-HCl → Phen + NaCl + H₂O + CO₂') print(nahco3) print(phenhcl) if __name__ == '__main__': main() KHCO₃ + Phen-HCl → Phen + KCl + H₂O + CO₂ KHCO₃: 714.9 mg Phen-HCl: 1540.0 mg NaHCO₃ + Phen-HCl → Phen + NaCl + H₂O + CO₂ NaHCO₃: 599.9 mg Phen-HCl: 1540.0 mg
{ "domain": "codereview.stackexchange", "id": 44852, "tags": "python" }
List subclasses of a class
Question: Description Given a class, return all its subclasses (recursively). As you can see I've eliminated recursion using a stack. What I want reviewed Is there a better way to do this? How can I make this code more generic and easier to use? Is it pythonic? Better way to eliminate recursion? Code def all_subclasses(cls): if cls == type: raise ValueError("Invalid class - 'type' is not a class") subclasses = set() stack = [] try: immediate_subclasses = cls.__subclasses__() except (TypeError, AttributeError) as ex: raise ValueError("Invalid class" + repr(cls)) from ex for subclass in immediate_subclasses: stack.append(subclass) while stack: sub = stack.pop() subclasses.add(sub) try: sub_subclasses = sub.__subclasses__() except (TypeError, AttributeError) as _: continue if sub_subclasses: stack.extend(sub_subclasses) return list(subclasses) Tests import unittest from class_util import all_subclasses def names(classes): return sorted([cls.__name__ for cls in classes]) class A: @classmethod def all_subclasses(cls): return all_subclasses(cls) class B(A): pass class C(B): pass class D(C): pass class E(C): pass class F(E, C): pass class AllSublassesTestCase(unittest.TestCase): def test_nested_classes(self): self.assertEqual(names(A.all_subclasses()), ["B", "C", "D", "E", "F"]) def test_work_with_buitins(self): self.assertTrue(names(all_subclasses(dict))) self.assertTrue(names(all_subclasses(tuple))) self.assertTrue(names(all_subclasses(list))) def test_value_error_is_raised_on_invalid_classes(self): self.assertRaises(ValueError, all_subclasses, type) self.assertRaises(ValueError, all_subclasses, "") self.assertRaises(ValueError, all_subclasses, None) self.assertRaises(ValueError, all_subclasses, []) if __name__ == "__main__": unittest.main() Answer: While working on the stack you are using stack.extend. You can also use this in the part where you add the immediate subclasses. 
There is no need to check if a list is empty before using extend, if it is empty it will just do nothing. If you don't need the exception, just don't catch it with as _. Not sure if you should be doing if cls is type instead of if cls == type. def all_subclasses(cls): if cls == type: raise ValueError("Invalid class - 'type' is not a class") subclasses = set() stack = [] try: stack.extend(cls.__subclasses__()) except (TypeError, AttributeError) as ex: raise ValueError("Invalid class" + repr(cls)) from ex while stack: sub = stack.pop() subclasses.add(sub) try: stack.extend(sub.__subclasses__()) except (TypeError, AttributeError): continue return list(subclasses) One way to optimize this further is to make sure you don't visit a class multiple times: while stack: sub = stack.pop() subclasses.add(sub) try: stack.extend(s for s in sub.__subclasses__() if s not in subclasses) except (TypeError, AttributeError): continue This should prevent having to visit (almost) every class twice with convoluted hierarchies like this: A / \ B C \ / D / | | \ E F G ...
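Putting the reviewer's suggestions together, here is a self-contained sketch of the deduplicated iterative traversal. The A/B/C/D classes below are a stand-in diamond hierarchy for illustration, not the question's test classes:

```python
def all_subclasses(cls):
    """Iteratively collect every (transitive) subclass of cls, visiting each once."""
    if cls is type:
        raise ValueError("Invalid class - 'type' is not a class")
    try:
        stack = list(cls.__subclasses__())
    except (TypeError, AttributeError) as ex:
        raise ValueError("Invalid class " + repr(cls)) from ex
    seen = set()
    while stack:
        sub = stack.pop()
        seen.add(sub)
        # Only push subclasses we have not collected yet, so diamond
        # hierarchies (D inheriting from both B and C) are visited once.
        stack.extend(s for s in sub.__subclasses__() if s not in seen)
    return list(seen)

# A tiny diamond hierarchy to exercise the traversal.
class A: pass
class B(A): pass
class C(A): pass
class D(B, C): pass

print(sorted(c.__name__ for c in all_subclasses(A)))  # each class appears once
```

D is reachable from A through both B and C, but the `s not in seen` filter ensures it is pushed onto the stack only once.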
{ "domain": "codereview.stackexchange", "id": 34161, "tags": "python, python-3.x, object-oriented, inheritance" }
Python rospy Script doesn't end running
Question: Hey there, I already talked to some people in the #python IRC channel. My aim was to close a while loop with Ctrl-C, so I used a try/except statement. The problem is that the while loop doesn't really close. If I put a print "whatever" in the except block, it doesn't get printed, though the code in the while loop doesn't get executed anymore. People said that my code itself seems correct. So I used the faulthandler to dig deeper into the problem, and it seems to have something to do with ROS. But I'm not deep enough into programming to understand what is going wrong here, or maybe it's just a simple mistake I'm making... Anyways, here is my code: #!/usr/bin/env python import time import sys #Ueye imports import ids #ROS imports import rospy from sensor_msgs.msg import Image #OpenCV imports import cv2 from cv_bridge import CvBridge, CvBridgeError import faulthandler import signal faulthandler.enable() faulthandler.register(signal.SIGUSR1) # +++++++++++++++ ROS node settings +++++++++++++++ # This node runs under the name "NeedleDetector" rospy.init_node('VideoPublisher', anonymous=True) # and publishes the following topics: VideoRaw = rospy.Publisher('Kamera_Rohbild', Image, queue_size=10) EdgesRaw = rospy.Publisher('Kamera_Kantenbild', Image, queue_size=10) # ++++++++++++++++++++++++++++++++++++++++++++++++++++++ cam = cv2.VideoCapture('output.avi') # ++++++++++++++++ Image-processing settings +++++++++++++++ # Settings for the Gaussian blur gaussian_blur_ksize = (5, 5) gaussian_blur_sigmaX = 4 # Threshold values for the Canny operator threshold1 = 55 threshold2 = 75 # +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ br = CvBridge() try: while cam.isOpened(): # Grab the next frame meta, frame = cam.read() # Apply the Gaussian blur to suppress some of the noise frame_gaus = cv2.GaussianBlur(frame, gaussian_blur_ksize, gaussian_blur_sigmaX) # Convert to a grayscale image for the Canny algorithm
frame_gray = cv2.cvtColor(frame_gaus, cv2.COLOR_BGR2GRAY) # Convert to the edge image frame_edges = cv2.Canny(frame_gray, 55, 75) # Convert the NumPy arrays into ROS messages frame_msg = br.cv2_to_imgmsg(frame, "rgb8") frame_msg_edges = br.cv2_to_imgmsg(frame_edges, "mono8") # Publish the messages VideoRaw.publish(frame_msg) EdgesRaw.publish(frame_msg_edges) time.sleep(0.1) except KeyboardInterrupt: print "1" rospy.is_shutdown() cam.close() print "2" And here is what the faulthandler said: Thread 0x00007f8f02b25700: File "/usr/lib/python2.7/threading.py", line 358 in wait File "/opt/ros/indigo/lib/python2.7/dist-packages/rospy/impl/tcpros_pubsub.py", line 418 in _run File "/usr/lib/python2.7/threading.py", line 763 in run File "/usr/lib/python2.7/threading.py", line 810 in __bootstrap_inner File "/usr/lib/python2.7/threading.py", line 783 in __bootstrap Thread 0x00007f8eea43b700: File "/usr/lib/python2.7/threading.py", line 358 in wait File "/opt/ros/indigo/lib/python2.7/dist-packages/rospy/impl/tcpros_pubsub.py", line 418 in _run File "/usr/lib/python2.7/threading.py", line 763 in run File "/usr/lib/python2.7/threading.py", line 810 in __bootstrap_inner File "/usr/lib/python2.7/threading.py", line 783 in __bootstrap Thread 0x00007f8ee9c3a700: File "/usr/lib/python2.7/threading.py", line 358 in wait File "/opt/ros/indigo/lib/python2.7/dist-packages/rospy/impl/tcpros_pubsub.py", line 418 in _run File "/usr/lib/python2.7/threading.py", line 763 in run File "/usr/lib/python2.7/threading.py", line 810 in __bootstrap_inner File "/usr/lib/python2.7/threading.py", line 783 in __bootstrap Thread 0x00007f8f02324700: File "/usr/lib/python2.7/socket.py", line 224 in meth File "/opt/ros/indigo/lib/python2.7/dist-packages/rospy/impl/tcpros_base.py", line 650 in write_data File "/opt/ros/indigo/lib/python2.7/dist-packages/rospy/impl/tcpros_pubsub.py", line 426 in _run File "/usr/lib/python2.7/threading.py", line 763 in run File
"/usr/lib/python2.7/threading.py", line 810 in __bootstrap_inner File "/usr/lib/python2.7/threading.py", line 783 in __bootstrap Thread 0x00007f8f03326700: File "/usr/lib/python2.7/socket.py", line 202 in accept File "/opt/ros/indigo/lib/python2.7/dist-packages/rospy/impl/tcpros_base.py", line 153 in run File "/usr/lib/python2.7/threading.py", line 763 in run File "/usr/lib/python2.7/threading.py", line 810 in __bootstrap_inner File "/usr/lib/python2.7/threading.py", line 783 in __bootstrap Thread 0x00007f8f03d73700: File "/usr/lib/python2.7/threading.py", line 358 in wait File "/opt/ros/indigo/lib/python2.7/dist-packages/rospy/impl/registration.py", line 297 in run File "/opt/ros/indigo/lib/python2.7/dist-packages/rospy/impl/registration.py", line 275 in start File "/usr/lib/python2.7/threading.py", line 763 in run File "/usr/lib/python2.7/threading.py", line 810 in __bootstrap_inner File "/usr/lib/python2.7/threading.py", line 783 in __bootstrap Thread 0x00007f8f04574700: File "/usr/lib/python2.7/SocketServer.py", line 155 in _eintr_retry File "/usr/lib/python2.7/SocketServer.py", line 236 in serve_forever File "/opt/ros/indigo/lib/python2.7/dist-packages/rosgraph/xmlrpc.py", line 284 in _run File "/opt/ros/indigo/lib/python2.7/dist-packages/rosgraph/xmlrpc.py", line 212 in run Current thread 0x00007f8f1bea1740: File "/opt/ros/indigo/lib/python2.7/dist-packages/sensor_msgs/msg/_Image.py", line 145 in serialize File "/opt/ros/indigo/lib/python2.7/dist-packages/rospy/msg.py", line 152 in serialize_message File "/opt/ros/indigo/lib/python2.7/dist-packages/rospy/topics.py", line 1018 in publish File "/opt/ros/indigo/lib/python2.7/dist-packages/rospy/topics.py", line 834 in publish File "VideoPublisher.py", line 66 in <module> I would really appreciate if someone takes a look at my problem! Feel free to ask questions, i had quite some trouble to conclude 4 hours of work in an appropriate way. 
Originally posted by Holzroller on ROS Answers with karma: 62 on 2015-02-26 Post score: 0 Answer: You are not using rospy.is_shutdown() correctly. Try moving it into the condition of the while-loop. Compare with the example in the tutorial at http://wiki.ros.org/ROS/Tutorials/WritingPublisherSubscriber%28python%29 Originally posted by slivingston with karma: 254 on 2015-02-27 This answer was ACCEPTED on the original site Post score: 2 Original comments Comment by Holzroller on 2015-02-28: oh i see. so i implemented it this way : while cam.isOpened() and not rospy.is_shutdown(): ... and it is working! so if i press ctrl-c now, rospy gets shutdown? Or what does trigger the script to close the ros node? Comment by slivingston on 2015-02-28: Yes, by default Ctrl-C causes an interruption signal (SIGINT) that is caught by rospy, which then starts a shutdown procedure. You can use different signal handlers or add hooks if you want. Consult http://wiki.ros.org/rospy/Overview/Initialization%20and%20Shutdown
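The accepted fix boils down to a control-flow pattern: check the shutdown flag in the loop condition every iteration instead of hoping a KeyboardInterrupt reaches the main thread. A minimal sketch of that shape, with a hypothetical FakeNode standing in for rospy (the real call is `rospy.is_shutdown()`), so it runs without ROS installed:

```python
import itertools


class FakeNode:
    """Stand-in for rospy: is_shutdown() flips to True after a few checks."""
    def __init__(self, shutdown_after):
        self._countdown = shutdown_after

    def is_shutdown(self):
        self._countdown -= 1
        return self._countdown < 0


def publish_loop(node, frames):
    """Mirrors the corrected structure:
    while cam.isOpened() and not rospy.is_shutdown(): ..."""
    published = []
    it = iter(frames)
    while not node.is_shutdown():   # flag checked every iteration
        frame = next(it)
        published.append(frame)     # stand-in for VideoRaw.publish(frame_msg)
    return published


node = FakeNode(shutdown_after=3)
print(publish_loop(node, itertools.count()))  # stops after 3 frames even though the source is infinite
```

The loop exits cleanly as soon as the flag flips, which is exactly what moving `rospy.is_shutdown()` into the while-condition achieves.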
{ "domain": "robotics.stackexchange", "id": 21004, "tags": "ros, python, script, rospy" }
prove/disprove regularity of languages
Question: Let $L_1 \in REG$ and $L_2 \notin REG$. Prove or disprove: $\forall L_1, L_2:\ L_1^C \cup L_2\in REG \lor L_2\setminus L_1\in REG$. I think it can be disproved, but I found it very hard to disprove, because: if $L_2 \subseteq L_1$ then $L_2\setminus L_1 = \emptyset$, and if $L_1 \subseteq L_2$ then $ L_1^C \cup L_2 = (L_1 \cap L_2^C)^C = (L_1 \setminus L_2)^C = \Sigma ^* $, and if $L_1 \cap L_2 = \emptyset$ then $ L_1^C \cup L_2 = L_1^C$; in any other scenario, every counter-example that I tried to construct was too complicated for me to disprove the regularity of both languages. I know that at least one of the two languages must be non-regular, and I proved it. Answer: Let $\Sigma=\{0,1\}$ and denote by $|w|_1$ the number of occurrences of $1$ in $w \in \Sigma^*$. Pick $L_1 = \{w \in \Sigma^* \mid |w|_1 \equiv 1 \pmod 2\}$ and $L_2 = \{0^n1^n \mid n \ge 0\}$. Suppose that $L' = L_1^C \cup L_2$ were regular. Then a necessary condition for $L'$ to be regular is for $L^{(1)} = L' \setminus L_1^C = L_2 \setminus L_1^C = \{ 0^{2k+1}1^{2k+1} \mid k \ge 0 \}$ to be regular. Moreover, let $L^{(0)} = L_2 \setminus L_1 = \{ 0^{2k}1^{2k} \mid k \ge 0 \}$. Both $L^{(1)}$ and $L^{(0)}$ are not regular. Suppose towards a contradiction that at least one of them, say $L^{(i)}$, were regular, and let $p$ be any even integer that is not smaller than the pumping length of $L^{(i)}$. Consider the word $0^{p+i}1^{p+i} \in L^{(i)}$. By the pumping lemma we know that there is some choice of $1 \le k \le p$ such that $0^{p+i-k} 0^{kh} 1^{p+i} \in L^{(i)}$ for every integer $h \ge 0$. Picking $h=0$ yields $0^{p+i-k} 1^{p+i} \in L^{(i)}$, but this is a contradiction.
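The contradiction step of the answer can be checked mechanically for the $i = 0$ case: pumping down (the $h = 0$ choice) removes zeros from the 0-block and the resulting word fails a membership test for $L^{(0)} = \{0^{2k}1^{2k}\}$. A small sketch, where p = 4 is a purely illustrative stand-in for the pumping length:

```python
def in_L0(w):
    """Membership in {0^(2k) 1^(2k) : k >= 0}."""
    n = len(w) // 2
    return len(w) % 2 == 0 and n % 2 == 0 and w == "0" * n + "1" * n

p = 4                    # hypothetical pumping length, for illustration only
w = "0" * p + "1" * p    # the word 0^p 1^p from the proof
assert in_L0(w)

# Any decomposition w = xyz with |xy| <= p and |y| >= 1 keeps y inside the
# 0-block, so pumping down removes k zeros for some 1 <= k <= p.
for k in range(1, p + 1):
    pumped_down = "0" * (p - k) + "1" * p
    assert not in_L0(pumped_down)   # the pumped word leaves the language

print("no decomposition survives pumping down")
```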
{ "domain": "cs.stackexchange", "id": 18682, "tags": "formal-languages, regular-languages" }
Nuclear Magnetic Resonance
Question: Can you recommend a good book on NMR for beginners, or something along those lines? I know QM and classical electrodynamics. Answer: Abragam - The principles of nuclear magnetism Levitt - Spin dynamics, Basics of nuclear magnetic resonance Slichter - Principles of Magnetic Resonance
{ "domain": "physics.stackexchange", "id": 33613, "tags": "quantum-mechanics, nuclear-physics, resource-recommendations, resonance" }
Radially Accelerated Observers on Schwarzschild Spacetime
Question: I'll first provide a little bit of context, and then proceed to the question. Feel free to jump ahead to the question if you feel so, I'll try to keep all the absolutely essential information over there. You do not need to address anything on the context, but I believe it can be helpful to know what I was doing when the issue arose. Context A while ago I saw the question How can we explain the position of Mann's planet when travelling on Miller's planet in Interstellar movie? and it made me want to compute the time dilation relative to the stationary spaceship for three astronauts: a stationary astronaut (which is just a computation of the gravitational redshift), a free falling astronaut (gravitational redshift and a kinematic correction), and an astronaut who's returning to the ship. Interstellar uses a Kerr black hole, but I'm doing computations on Schwarzschild for simplicity. To consider the astronaut returning, I decided to consider an observer moving with an acceleration proportional to the one needed to keep someone static. The intuition was "If I need to maintain my ship's rockets at this power to keep the ship stationary, if I double their power I should start moving away from the black hole". There would also be the bonus that setting the proportionality constant to $0$ or $1$ should recover my previous results, and hence I'd have a simple way to check the computations. However, something weird happened once I started actually doing the math. Question The equation of motion for an observer with acceleration $a^\mu$ is $$u^\mu \nabla_\mu u^\nu = a^\nu,$$ where $u^\mu$ is the four-velocity. If we desire the four-velocity to stay normalized during the trajectory (and, as far as I know, we always do), then we must have $$ \begin{align} 0 &= u^\mu \nabla_\mu (u^\nu u_\nu), \\ &= 2 u^\nu u^\mu \nabla_\mu u_\nu, \\ &= 2 u^\nu a_\nu. 
\tag{1} \end{align} $$ Now suppose we want to consider an observer on Schwarzschild spacetime moving radially with acceleration $$a^\mu = \frac{\alpha M}{r^2} \left(\frac{\partial}{\partial r}\right)^\mu,$$ where $M$ is the black hole's mass and $\alpha$ is some arbitrary proportionality constant. Notice that $\alpha = 0$ leads to a freely-falling observer and $\alpha = 1$ leads to a static observer. Now, given this acceleration, Eq. (1) implies that either $\alpha = 0$ or $\dot{r} = 0$. However, the second case consists of a static observer, i.e., $\alpha = 1$. Hence, it seems that $\alpha = 2$, for example, is not possible. Assuming I did not do anything wrong (please correct me if that is the case), what is the physical meaning behind this? Answer: After reading the comment by A.V.S. and thinking for a while, I decided to tackle a simpler problem: observers moving in a single direction in Minkowski spacetime. More specifically, instead of considering observers with the acceleration vector I proposed earlier in Schwarzschild, let us think of observers in Minkowski spacetime with acceleration given by $$a^\mu = \alpha \left(\frac{\partial}{\partial x}\right)^\mu.$$ The argument in the original post still applies in this situation and leads to the conclusion that either $\alpha = 0$ or $\dot{x} = 0$. The special symmetries of Minkowski spacetime end up forcing $\alpha = 0$ in both situations. Let us then ignore this fact for a while and just try to solve for the four-velocity anyway. For this acceleration, the equations of motion will end up being $$\left\lbrace\begin{aligned} \ddot{t} &= 0, \\ \ddot{x} &= \alpha. \end{aligned}\right.$$ Solving them leads to $$\left\lbrace\begin{aligned} t(\tau) &= t_0 + \dot{t}_0 \tau, \\ x(\tau) &= x_0 + \dot{x}_0 \tau + \frac{\alpha}{2} \tau^2. \end{aligned}\right.$$ Let us pick, for simplicity, $x_0 = t_0 = 0$ and also set $\dot{x}_0 = 0$. The condition $u^\mu u_\mu = -1$ then implies $\dot{t}_0 = 1$.
Hence, we get to $$\left\lbrace\begin{aligned} t(\tau) &= \tau, \\ x(\tau) &= \frac{\alpha}{2} \tau^2. \end{aligned}\right.$$ So far, it might seem that we are just doing calculations blindly, but notice that with so simple equations we can write the spatial coordinate in terms of the time coordinate. We get $$x(t) = \frac{\alpha}{2} t^2,$$ which means that the velocity as measured in this particular choice of frame (defined by $x_0 = t_0 = 0$ and $\dot{x}_0 = 0$) is $$\frac{\text{d} x}{\text{d} t} = \alpha t,$$ which will surpass the speed of light within coordinate time $t = \frac{c}{\alpha}.$ Hence, as suggested by A.V.S.'s comment, only a few specific observers can have accelerations which do not depend on time. As an observer accelerates through space, they also need to accelerate through time so that inertial observers still measure their speeds to be less than the speed of light. In other words, the physical interpretation for the problem I was facing is that causality forbids arbitrary accelerations in a single spatial direction through spacetime.
{ "domain": "physics.stackexchange", "id": 85396, "tags": "general-relativity, acceleration, observers" }
Why is the melting point of p-dichlorobenzene higher than those of o-dichlorobenzene and m-dichlorobenzene?
Question: I was going through alkyl and aryl halides and noted the following statement in my textbook: p-dichlorobenzene has higher melting point and solubility than those of o-dichlorobenzene and m-dichlorobenzene. I thought over it but could not come to a conclusion. Why is the above assertion true? Answer: Generally, the melting point of para-isomer is quite higher than that of ortho- or meta-isomers. This is due to the fact that it has a symmetrical structure and therefore, its molecules can easily pack closely in the crystal lattice. As a result, intermolecular forces of attraction are stronger and therefore, greater energy is required to break its lattice and its melts at higher temperature. This question is most likely to be answered easily. If you consider solubility. These are insoluble in water but soluble in organic solvents. Generally, para-isomers are highly soluble in organic solvents than ortho-isomers. This is interesting question. Every compound (more specifically solids) does not dissolve in a given compound. In general, a liquid dissolves in a liquid if the intermolecular interactions are similar. This is in accordance with the rule "like dissolves like," ionic or polar compounds dissolve more readily in polar solvents. Non-polar compounds (covalent or organic) are soluble in non-polar solvents. If you consider p-dichlorobenzene, it has zero dipole moment and thus is more non-polar than o-dichlorobenzene which has dipole moment of about $\pu{2.54 D}$. So, non-polar p-dichlorobenzene dissolves more readily than o-dichlorobenzene in non-polar (organic) solvents.
{ "domain": "chemistry.stackexchange", "id": 8197, "tags": "organic-chemistry, solubility, melting-point" }
openni_tracker on ubuntu11.10 on VMware
Question: Hello. I'm very new to Linux and ROS. I have installed Ubuntu 11.10 on VMware. I installed ROS following the guide here "http://www.ros.org/wiki/electric/Installation/Ubuntu" and openni_kinect from here "http://www.ros.org/wiki/openni_kinect". Now when I run "roscore" in one terminal (it runs successfully) and "rosrun openni_tracker openni_tracker" in another terminal, nothing happens (even when I stand in front of the sensor). When I run "rosnode list" in a third terminal, the "/openni_tracker" node is added to the list. When I run "rostopic list" there are no new topics. After a while of nothingness, an error appears and it stops. The error is: "[ERROR] [1336588766.792536891]: InitFromXml failed: Xiron OS got an event timeout!" The "/openni_tracker" node is gone after that. Each time I try to run it again, I get an error almost immediately. The error is: "[ERROR] [1337177233.598450808]: InitFromXml failed: Got a timeout while waiting for a network command to complete!". The "/openni_tracker" node is again created for a short period. Also, whenever I try to launch other things and something doesn't work, it fails with some sort of timeout error, so I guess it's the same problem. Please help. Originally posted by nnyJoh on ROS Answers with karma: 1 on 2012-05-16 Post score: 0 Answer: Hi there, The Kinect device does not seem to work in a VM. The only solution I know is to dual boot Ubuntu. Best regards. Originally posted by Hansg91 with karma: 1909 on 2012-05-17 This answer was ACCEPTED on the original site Post score: 0
{ "domain": "robotics.stackexchange", "id": 9416, "tags": "openni, openi-tracker" }
Dendrodendritic synapse through axodendritic synapse at same dendrite?
Question: Reading Wikipedia's article on dendrodendritic synapses, I find that: Dendrodendritic synapses are activated in a similar fashion to axodendritic synapses in that they use a chemical synapse. These chemical synapses receive a depolarizing signal from an incoming action potential, which results in an influx of calcium ions that permits release of neurotransmitters to propagate the signal to the postsynaptic cell. There is also evidence of bi-directionality in signaling at dendrodendritic synapses. Ordinarily, one of the dendrites will display inhibitory effects while the other will display excitatory effects. Now I was wondering: does a dendrodendritic synapse occur only when the neuron reaches the threshold action potential and "fires", or is it possible that a dendrodendritic synapse occurs as a result of, for example, an axodendritic synapse at the same dendrite (even though the neuron doesn't fire)? Edit Clearly, we can see that the dendrite has multiple 'endings'. I was under the impression that a synapse can happen at any such 'ending'. Assuming that a synapse happens at one ending of the dendrite, is it viable that anything happens to a neighbouring neuron that is connected with a dendrodendritic synaptic connection to the other ending of the dendrite? Answer: I'm not sure if you are asking whether an axon that sends a signal to a dendrite would then have that signal released from the dendrite to another dendrite. I don't see how this is possible, seeing as the dendrite is already paired with the axon, and downstream of said dendrite there would be a neuron. A dendrodendritic synapse is two dendrites of two different neurons in contact with each other. For there to be a signal, an action potential must have been reached. A synapse is a junction between two neurons where chemical signaling occurs, but it is not the definition of a signal occurring, if that makes sense. It seems like you are confused about the location and anatomy of a synapse.
There wouldn't be an axodendritic synapse followed by a dendrodendritic synapse and then the neuron. It would either be axodendritic or dendrodendritic. Now, there can be multitudes of connections between neurons, but I don't think that is what you were asking. Edit: Seeing your edit makes your question much clearer. If the signal from the axodendritic synapse excites the neuron enough to reach action potential, then that signal would be sent out through its synapses, including the dendrodendritic ones. So I guess in this way it was a "consequence" of the axodendritic synapse. Though I would rather you think of it as the action potential of one neuron sending a signal through its connections to the neighboring neuron, thus causing that neuron to reach action potential and send the signal further along. It's not really a "consequence" of the synapse; the synapse is more of a pathway for the signal to travel. The stimulus for the action potential is what causes a signal to fire and travel through a synapse. Hopefully that answers your question.
{ "domain": "biology.stackexchange", "id": 4755, "tags": "neuroscience, neurophysiology, action-potential, synapses" }
Haskell MultiWayIf extension: When is it considered useful syntactic sugar?
Question: The new MultiWayIf extension (available with GHC 7.6) allows guard syntax in an if: {-# LANGUAGE MultiWayIf #-} fn :: Int -> Int -> String fn x y = if | x == 1 -> "a" | y < 2 -> "b" | otherwise -> "c" But I don't find it better than the old way: fn :: Int -> Int -> String fn x y | x == 1 = "a" | y < 2 = "b" | otherwise = "c" Does it make sense in other cases? Answer: Unsurprisingly, it's mainly useful when the if is not the top-level expression. Say: forM_ [1..100] $ \i -> putStrLn $ if | i `mod` 15 == 0 -> "FizzBuzz" | i `mod` 3 == 0 -> "Fizz" | i `mod` 5 == 0 -> "Buzz" | otherwise -> show i I happen across similar cases quite often - up to this point this required either deep if trees or mis-using case in some way.
{ "domain": "codereview.stackexchange", "id": 3564, "tags": "haskell, formatting" }
Why do stars start off burning deuterium?
Question: Given hydrogen is a lighter and more abundant element, why do most baby stars start off burning deuterium? Answer: The threshold temperature at which D-burning becomes significant is significantly lower than that to produce significant energy from the pp H-burning chain. Deuterium is found as a trace element in the gas from which stars form. It is depleted from the entire star well before the hydrogen burning pp chain (or CNO cycle in more massive stars) begins. Essentially, D-burning is the faster second step in the pp-chain, the slowest initial step being the formation of new deuterium in the first place. Thus, whilst D-burning takes place at temperatures of $10^6$ K, the pp chain does not start significant energy production until temperatures exceed $10^7$ K. The underlying reason that D-burning happens more easily, is that getting two singly positively charged nuclei together is only part of the problem and is similar for isolated protons or deuterons - however a diproton is unstable and falls apart before fusion can take place unless one of the protons can turn into a neutron via a weak-force interaction to form a stable deuteron. If the deuterium nuclei are already formed, then their fusion with another proton occurs relatively quickly without having to go through the slow first step.
{ "domain": "physics.stackexchange", "id": 58725, "tags": "astrophysics" }
How to solve the trajectory equation using quadratic drag formula?
Question: I am doing a project on ballistics and projectiles and I was reading this link here and it states that the ballistics equation cannot be integrated analytically, and has to be integrated numerically. Can anyone explain why? Answer: When we say a differential equation can be solved we normally mean the solution can be written as a closed form expression, which is summarised as: In mathematics, a closed-form expression is a mathematical expression that uses a finite number of standard operations. It may contain constants, variables, certain well-known operations (e.g., + − × ÷), and functions (e.g., nth root, exponent, logarithm, trigonometric functions, and inverse hyperbolic functions), but usually no limit, differentiation, or integration. The set of operations and functions may vary with author and context. But this is the exception rather than the rule. The vast majority of differential equations have solutions that cannot be written as a closed form expression. This doesn't mean they can't be solved, only that that the solutions are more complicated than the small number of functions that the closed form allows. For example many differential equations will have solutions that are gamma functions or Bessel functions or one of the many other functions that are not generally considered to belong to a closed form solution. This is a somewhat artificial distinction. After all why should we say that the sine function is closed form while the gamma function is not? At the end of the day it comes down to convention. For example our (scientific) calculators all have a button for sine but not a button for gamma, so we tend to included sine in closed form expressions but exclude gamma. We could define a new function foo as the solution to the ballistics equation, then claim the ballistics equation has a closed form solution because the solution is foo. However this isn't very helpful as none of the computers I use have a foo function. 
That's why we just say the ballistics equation has no closed form solution. Note that the ballistics equation does have a closed form solution for the special case where the trajectory is vertical. It's only the general case that has no closed form solution.
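Because the general trajectory has no closed-form solution, in practice one integrates it numerically. Below is a minimal sketch of the standard quadratic-drag model, x'' = -k*v*vx and y'' = -g - k*v*vy (this model is the usual one, not taken from the linked page), stepped with simple fixed-step Euler; the drag coefficient, launch velocity, and step size are made-up values for illustration:

```python
import math

def simulate(vx, vy, k=0.02, g=9.81, dt=1e-3):
    """Integrate a projectile with quadratic drag until it returns to y = 0.
    Returns the horizontal range. Plain Euler stepping, kept simple for clarity."""
    x = y = 0.0
    while True:
        v = math.hypot(vx, vy)       # speed, couples the x and y equations
        ax = -k * v * vx             # drag opposes the velocity direction
        ay = -g - k * v * vy
        x += vx * dt
        y += vy * dt
        vx += ax * dt
        vy += ay * dt
        if y <= 0.0 and vy < 0.0:    # back at launch height on the way down
            return x

no_drag = simulate(30.0, 30.0, k=0.0)
with_drag = simulate(30.0, 30.0, k=0.02)
vacuum_range = 2 * 30.0 * 30.0 / 9.81   # closed form exists only for k = 0
print(no_drag, vacuum_range, with_drag)
```

The k = 0 run reproduces the analytic vacuum range (the one special case with a closed form), while the drag run can only be obtained numerically, which is the point of the answer.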
{ "domain": "physics.stackexchange", "id": 93149, "tags": "newtonian-mechanics, computational-physics, projectile, drag" }
"ERROR: Stack's declared dependencies are missing calculated dependencies"
Question: I run this command: rosrun release create.py wu_ros_tools 0.1.0 electric using the electric version of the release script. The stack file I'm using is here in my repository. However, after the above command checks out the repo, I get this error. ERROR: Stack's declared dependencies are missing calculated dependencies: geometry visualization diagnostics_monitors common_msgs ...but I DID declare those stacks, didn't I? Originally posted by David Lu on ROS Answers with karma: 10932 on 2012-09-13 Post score: 0 Answer: Might there be local changes in your working copy? Originally posted by tfoote with karma: 58457 on 2012-09-14 This answer was ACCEPTED on the original site Post score: 0 Original comments Comment by David Lu on 2012-09-16: Must've just de-synched. Works now.
{ "domain": "robotics.stackexchange", "id": 11017, "tags": "ros, ros-release, release" }
Found the following, but they're either not files, or not executable:
Question: Hi all! rosrun using_markers basic_shapes Couldn't find executable named basic_shapes below /home/tunc/catkin_wws/src/using_markers rosrun using_markers basic_shapes.cpp Found the following, but they're either not files, or not executable /home/tunc/catkin_wws/src/using_markers/src/basic_shapes.cpp I have problem like this. Can somebody help me? Thanks... Originally posted by tuncguclu on ROS Answers with karma: 1 on 2016-03-22 Post score: 0 Answer: You haven't compiled your workspace, this is why it only finds the .cpp and not the executable. You have to run a catkin_make and have your CMakeLists.txt and package.xml configured correctly. Check out section 4.3 and the last lines of section 4.1 here. Originally posted by mgruhler with karma: 12390 on 2016-03-22 This answer was ACCEPTED on the original site Post score: 1 Original comments Comment by tuncguclu on 2016-03-23: Ok but when i try to run catkin_make an error like this occurs The specified base path "/home/tunc/catkin_wws/src/using_markers" contains a package but "catkin_make" must be invoked in the root of workspace. Comment by mgruhler on 2016-03-23: You need to run catkin_make in /home/tunc/catkin_wws/, not in /home/tunc/catkin_wws/src/. Comment by tuncguclu on 2016-03-23: catkin_make worked. I know i am asking too much :D but when i try to run rosrun using_markes basic_shapes it says Error: package 'using_markes' not found How do i handle that? Comment by mgruhler on 2016-03-23: Did you source /home/tunc/catkin_wws/devel/setup.bash after catkin_make? Comment by tuncguclu on 2016-03-23: No but i did now and still says the same thing. Comment by mgruhler on 2016-03-23: You need to do this in every terminal you use a ROS command. Can you past the output of echo $ROS_PACKAGE_PATH as well as the package.xml and CMakeLists.txt of your package? Please edit your question and format the files respectively using the preformatted text button (the one with 1s and 0s).
{ "domain": "robotics.stackexchange", "id": 24205, "tags": "ros, rviz, ros-jade" }
building opencv dependent ros packages from source?
Question: I am trying to get ros-kinetic-desktop onto Ubuntu 16.04 - ARM. However, I would like to build the opencv library 3.1.0 with CUDA support and use it for ROS applications instead of the custom ROS package opencv3. Looking at the reverse dependencies of ros-kinetic-opencv3, I am assuming that I need to build the packages rqt-image-view and cv-bridge from source and link them against the new opencv that I am building. However, I am clueless about how to do this. Could someone please confirm whether the two packages mentioned above are the only ones that need to be built, and I would appreciate it if you could also guide me through the procedure. Thank you. Originally posted by sam26 on ROS Answers with karma: 231 on 2017-04-05 Post score: 0 Answer: Hey, why do you want to use version 3.1.0? At the moment the opencv3 package of ROS uses 3.2.0. Therefore, when you compile 3.2.0 with CUDA support, a build of rqt-image-view and cv-bridge should not be necessary, or am I wrong about this? Take a look at the opencv3 package page. Originally posted by Mondi with karma: 51 on 2017-04-06 This answer was ACCEPTED on the original site Post score: 1 Original comments Comment by sam26 on 2017-04-06: Oh. You mean that instead of building two packages that depend on opencv, I could just fetch the source code for opencv3 and add Cuda support while building it. That makes sense but I'm not sure if it works. I'll probably give it a try. But if anyone who is sure can confirm it, that'd be assuring
{ "domain": "robotics.stackexchange", "id": 27525, "tags": "robotic-arm, ros, opencv3" }
OSError when running roscore
Question: I am new to ROS and trying to install everything correctly and test that roscore is working. I know there are previous questions like this, but none were able to fix my problem. I get the following error messages when running: Traceback (most recent call last): File "/opt/ros/indigo/lib/python2.7/dist-packages/roslaunch/__init__.py", line 307, in main p.start() File "/opt/ros/indigo/lib/python2.7/dist-packages/roslaunch/parent.py", line 279, in start self.runner.launch() File "/opt/ros/indigo/lib/python2.7/dist-packages/roslaunch/launch.py", line 654, in launch self._setup() File "/opt/ros/indigo/lib/python2.7/dist-packages/roslaunch/launch.py", line 630, in _setup self._launch_master() File "/opt/ros/indigo/lib/python2.7/dist-packages/roslaunch/launch.py", line 394, in _launch_master validate_master_launch(m, self.is_core, self.is_rostest) File "/opt/ros/indigo/lib/python2.7/dist-packages/roslaunch/launch.py", line 83, in validate_master_launch if not rosgraph.network.is_local_address(m.get_host()): File "/opt/ros/indigo/lib/python2.7/dist-packages/rosgraph/network.py", line 176, in is_local_address local_addresses = ['localhost'] + get_local_addresses() File "/opt/ros/indigo/lib/python2.7/dist-packages/rosgraph/network.py", line 219, in get_local_addresses for iface in netifaces.interfaces(): OSError: [Errno 22] Invalid argument Originally posted by njames8 on ROS Answers with karma: 1 on 2017-05-10 Post score: 0 Original comments Comment by gvdhoorn on 2017-05-11: Can you clarify which OS this is? If this is Win10 with WSL, make sure to run a very recent build. Comment by njames8 on 2017-05-11: Yes Windows 10 and I got Ubuntu. Not sure why python is in the error messages Answer: Please see #q238646. You'll need to run a really recent Win10 build. Possibly even at least the Creators Update or an insiders build. Also: could you please for the future remember to always mention at least the OS that you are running, and in cases of Win10/WSL mention that? 
I know the idea is that WSL is as-close-as-possible (sort-of) to a regular Linux kernel+userland, but that is not the case right now and there are multiple areas in which bugs/unimplemented features in WSL actually cause problems with software that otherwise is perfectly ok. Without such information we cannot help you. Edit: Not sure why python is in the error messages Well .. roscore and large parts of ROS are written in Python, so that would explain it. Edit2: Could you please explain what a recent Win10 build is and how I can get this correctly installed on my machine? Sorry, I'm very new to ROS. This is not really a ROS-issue: Windows 10 is currently still being developed and the problem you report is in the Windows Subsystem for Linux (WSL). The ability to use the Bash shell as MS calls it was only added in relatively recent releases of Windows 10. The problem is that it is not a finished piece of functionality. Lots of things are still missing. Especially pieces that ROS needs to work under Win10/WSL. Only in very recent releases has WSL matured enough for it to run ROS successfully. And most of those releases are not available without being on the Windows Insider program, and then specifically in the fast ring. Bare-bones ROS appears to work relatively ok on Windows 10 build 16188, which was only released a week ago (2017-05-04). The error message that you included in your post suggests to me that you are on a relatively old release of Windows 10, as that specific issue was resolved in WSL a couple of releases ago. To check which version you are running: win + r, then type winver and press enter. The dialogue that pops up should tell you the version and the build. Originally posted by gvdhoorn with karma: 86574 on 2017-05-11 This answer was ACCEPTED on the original site Post score: 0 Original comments Comment by njames8 on 2017-05-11: Could you please explain what a recent Win10 build is and how I can get this correctly installed on my machine? 
Sorry, I'm very new to ROS.. Comment by njames8 on 2017-05-11: I looked at the referenced question above and couldnt really understand where to go from here Comment by njames8 on 2017-05-11: Mine is: Version 1607 (OS Build 14393.1198). So I need a newer build correct? If so, how can I get that? Thanks again for the help! Comment by gvdhoorn on 2017-05-11: Yes, you need a much more recent build. Yours is almost ancient (when it comes to WSL at least). As this is really no longer a ROS issue, I would advise you to search around a bit for how to get into the Insiders program. Be aware of all the consequences of that though.
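As a quick sanity check (assuming Python is available inside the WSL shell), the kernel release string usually reveals whether you are inside WSL at all. This is only a heuristic sketch based on the marker strings Microsoft has used, not an official API:

```python
import platform

def wsl_info():
    """Return (release, looks_like_wsl) for the running kernel.

    On WSL1 builds the kernel release string typically contains
    'Microsoft' (e.g. '4.4.0-43-Microsoft'); on WSL2 it contains
    'microsoft-standard'. On native Linux neither marker appears.
    """
    release = platform.release()
    looks_like_wsl = "microsoft" in release.lower()
    return release, looks_like_wsl

if __name__ == "__main__":
    release, is_wsl = wsl_info()
    print(release, "(WSL)" if is_wsl else "(not WSL)")
```

For the actual Windows build number you still need `winver` on the Windows side, as described above.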
{ "domain": "robotics.stackexchange", "id": 27868, "tags": "roscore" }
Why does momentum need learning rate?
Question: If a momentum optimizer independently keeps a custom "inertia" value for each weight, then why do we ever need to bother with the learning rate? Surely, momentum would catch its magnitude up pretty quickly to any needed value anyway, so why bother scaling it with a learning rate? $$v_{dw} = \beta v_{dw} +(1-\beta)dW$$ $$W = W-\alpha v_{dw}$$ Where $\alpha$ is the learning rate (0.01 etc) and $\beta$ is the momentum coefficient (0.9 etc) Edit Thanks for the answer! To put it more plainly: momentum controls "how well we retain" the movement, and learning rate is "how fast do we reGain" the movement Answer: To answer the first question about why we need the learning rate even if we have momentum, let's consider an example in which we are not using the momentum term. The weight update is therefore: $ \Delta w_{ij} = \frac{\partial E}{\partial w_{ij}} \cdot l $ where: $ \Delta w_{ij} $ is the weight update $ \frac{\partial E}{\partial w_{ij}} $ is the gradient of the error with respect to the weight $ l \space $ is the learning rate coefficient Our weight update is determined by the gradient of our current error with respect to the weight at node $ ij $. Therefore, our prior weight deltas are not factored into our weight update equation. If we were to eliminate the learning rate, our weights would not update. Now let's consider an example using the momentum term in its derived form: $ \Delta w_{ij} = (\frac{\partial E}{\partial w_{ij}} \cdot l) + (\mu \cdot \Delta w^{t-1}_{ij}) $ where: $ \mu $ is the momentum coefficient $ \Delta w^{t-1}_{ij} $ is the weight update of node $ ij $ from the previous epoch Now we are factoring the previous weight delta into our weight update equation. In this form, it is easier to see that the learning rate and momentum are effectively independent terms. However, without a learning rate, our weight delta would still be zero.
Now you might ask: what if we remove the learning rate after getting an initial momentum value so that momentum is the sole influence on the weight delta? This destroys the backpropagation algorithm. The objective of backprop is to optimize the weights to minimize error. We achieve this minimization by adjusting the weights according to the error gradient. Momentum, on the other hand, aims to improve the rate of convergence and to avoid local minima. The momentum term does not explicitly include the error gradient in its formula. Therefore, momentum by itself does not enable learning. If you were to only use momentum after establishing an initial weight delta, the weight update equation would look as such: $ \Delta w_{ij} = (\mu \cdot \Delta w^{t-1}_{ij}) $ and: $ \lim_{t \to \infty} \Delta w^t_{ij} = \begin{cases} 0 & | \space \mu < 1 \space \lor \space (\mu = 1 \space \land \space \Delta w^{t=0}_{ij} < 1) \\ 1 & | \space \mu = 1 \land \space \Delta w^{t=0}_{ij} = 1\\ \infty & | \space otherwise \end{cases} $ Although there exists a scenario where the weight delta approaches zero, this descent is not based on the error gradient and is in fact predetermined by the momentum coefficient and the initial weight delta: this weight delta does not achieve our objective to minimize the error and is therefore useless. TL;DR: The learning rate is critical for updating the weights to minimize error. Momentum is used to help the learning rate, but not to replace it.
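The interaction between the two coefficients is easy to see numerically. Below is a minimal sketch (a toy quadratic, not the question's network) using exactly the update rule from the question, $v \leftarrow \beta v + (1-\beta)\,dW$ and $W \leftarrow W - \alpha v$, on $E(w) = w^2$:

```python
def train(alpha, beta, w=5.0, steps=200):
    """Gradient descent with momentum on E(w) = w^2 (gradient 2w),
    using the update from the question:
        v <- beta*v + (1 - beta)*grad
        w <- w - alpha*v
    """
    v = 0.0
    for _ in range(steps):
        grad = 2.0 * w
        v = beta * v + (1.0 - beta) * grad
        w = w - alpha * v
    return w

# With a zero learning rate the weight never moves, whatever the momentum:
print(train(alpha=0.0, beta=0.9))   # stays at 5.0
# With a small learning rate the weight converges toward the minimum at 0:
print(train(alpha=0.1, beta=0.9))
```

With $\alpha = 0$ the velocity $v$ still builds up, but it is never applied to $w$, which is the point of the answer: momentum shapes the step, the learning rate is what actually takes it.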
{ "domain": "datascience.stackexchange", "id": 2967, "tags": "neural-network, backpropagation, momentum" }
Status of Raghavendra's algorithm for solving linear systems in finite fields
Question: In 2012, Lipton wrote a blog entry about a new algorithm for solving linear systems over finite fields by Prasad Raghavendra. The link to Raghavendra's draft paper on the topic is now dead, and I can't find anything on the subject on Raghavendra's website. Is the result correct? Is a write-up available anywhere? Thanks! Answer: The paper by Raghavendra is now also published and available here under the title: Correlation Decay and Tractability of CSPs, appeared in the 43rd International Colloquium on Automata, Languages, and Programming (ICALP 2016). A related article has appeared in the Electronic Colloquium on Computational Complexity, Report No. 7 (2015), available here.
{ "domain": "cstheory.stackexchange", "id": 4128, "tags": "linear-algebra, finite-fields" }
Window function for signal with 3-5 sine waves inside
Question: Should I apply some window before the FFT if I want to find 5 frequencies in a signal? The signal is purely 5 sinusoids + some noise generated during recording on an analog cable. Answer: Windowing, in the context of the DFT, is used to reduce the side-lobes or spectral leakage which happens because you work with finite length discrete signals. Now, if the signal is a sum of sinusoids that are periodic in the observation time (that means that each sinusoid completes an integer number of periods), then the DFT will be a series of delta functions with no side-lobes. However, in a more realistic scenario, the DFT of your signal will be affected by the spectral leakage. In that case, it is possible that some of the side-lobes will have greater magnitude than some of the sinusoids, and if you try to select the 5 frequencies with the greatest magnitude, you will get the wrong results. To sum it up simply: Yes, you should use some kind of windowing. It doesn't actually matter much which window function you use. As suggested, a Hamming window is a good choice.
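The leakage effect described above is easy to demonstrate. A minimal sketch (assuming NumPy is available) with a tone that does not complete an integer number of cycles in the window, comparing rectangular (no window) against Hamming far away from the tone:

```python
import numpy as np

N = 256
n = np.arange(N)
# 20.5 cycles in the window: not periodic in the observation time,
# so the un-windowed (rectangular) DFT leaks badly into distant bins.
x = np.sin(2 * np.pi * 20.5 * n / N)

def far_leakage(window):
    """Worst leaked magnitude in bins 60-120, relative to the spectral peak."""
    spec = np.abs(np.fft.rfft(x * window))
    return spec[60:121].max() / spec.max()

rect = far_leakage(np.ones(N))
hamm = far_leakage(np.hamming(N))
print(f"rectangular leakage: {20 * np.log10(rect):.1f} dB")
print(f"hamming leakage:     {20 * np.log10(hamm):.1f} dB")
```

The Hamming-windowed spectrum leaks far less into distant bins, which is exactly what protects a "pick the 5 largest peaks" strategy from picking a side-lobe.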
{ "domain": "dsp.stackexchange", "id": 2434, "tags": "fft, window-functions" }
Create Form Controls based on the properties of a class
Question: Is there a better way to create Form Controls for properties of a class? Right now, I iterate through all properties of a class and have a method to create the Form Control for that property based on the type of the property. private void AddControl(PropertyInfo p, string name = null, Color? c = null, int count = 0) { Label newLabel = new Label(); TableLayoutPanel tp = new TableLayoutPanel(); Control ctrl = new Control(); tp.ColumnCount = 2; tp.ColumnStyles.Add(new ColumnStyle(SizeType.Absolute, 200F)); tp.ColumnStyles.Add(new ColumnStyle(SizeType.Absolute, 215F)); tp.Margin = new Padding(0); tp.Name = "tp"; tp.RowCount = 1; tp.RowStyles.Add(new RowStyle(SizeType.Absolute, 28F)); tp.Size = new System.Drawing.Size(380, 28); tp.Location = new Point(0, 14 + num * 28); newLabel.Name = "lbl_" + num; newLabel.Text = GetPropertyAttributes(p); // gets a custom attributes for the property newLabel.AutoSize = true; newLabel.Anchor = AnchorStyles.Left; tp.Controls.Add(newLabel); ToolTip myToolTip = new ToolTip(); switch (p.PropertyType.Name) { case "String": ctrl = new TextBox(); ctrl.Tag = "String"; ctrl.Size = new System.Drawing.Size(170, 20); goto case "common"; case "Boolean": ctrl = new CheckBox(); goto case "common"; case "Array[]": Console.WriteLine("Array"); break; case "Int32": ctrl = new TextBox(); ctrl.Tag = "Int"; ctrl.Size = new System.Drawing.Size(40, 20); goto case "common"; case "Double": ctrl = new TextBox(); ctrl.Tag = "Double"; ctrl.Size = new System.Drawing.Size(40, 20); goto case "common"; case "IList`1": if (ClassExists(p.Name)) // ClassExists checks if the property is a class itself { ctrl = new Button(); ctrl.Text = "Add"; ctrl.Tag = p; ctrl.AutoSize = true; ctrl.Click += new EventHandler(AddButton_Click); } else { // TODO create gridview } goto case "common"; case "common": ctrl.Anchor = AnchorStyles.Left; ctrl.Name = name ?? 
GetPropertyAttributes(p); ctrl.Font = new Font("Consolas", 9); ctrl.KeyPress += new KeyPressEventHandler(textBoxKeyPress); if (c != null) { tp.BackColor = c ?? Color.Black; } tp.Controls.Add(ctrl); formControls.Add(tp); // List<Control> break; default: ctrl = new Button(); ctrl.Text = "Add"; ctrl.Tag = p; ctrl.AutoSize = true; ctrl.Click += new EventHandler(AddButton_Click); goto case "common"; } } private void AddControls() { foreach (Control ctrl in formControls) { MainPanel.Controls.Add(ctrl); // MainPanel is a Panel Control } } Can it be simplified? I may have to add more types to it. Answer: In my comment I suggested a class that could do what you want it to do out of the box. However sometimes having a custom piece of code to show things in a better way is the goal. I can see this going in two different ways. One is to use a variation of the Visitor pattern and a list, and the other is to use a dictionary with Type as the key, and the class that represents how to view it as the value. The first way you could make the IVisitor interface one of two ways. public interface IVisitor { //option 1 void Accept(Control control); //option 2 bool Accept(PropertyInfo property, Control control); } then to implement you'd make a new class per type that you want to show a specific way. Here is an example with both. public class StringVisitor : IVisitor { public bool Accept(PropertyInfo property, Control control) { if(property.PropertyType != typeof(string)) return false; Accept(control); return true; } public void Accept(Control control) { control = new TextBox(); control.Tag = "String"; control.Size = new System.Drawing.Size(170, 20); } } Now in your class you have two options, make a List<IVisitor> or make a Dictionary<Type, IVisitor>. Here is an example of what I mean.
It is by no means complete (and might even have a compiler error as I did it in Notepad++) public class PropertyViewer { private readonly List<IVisitor> visitors1; private readonly Dictionary<Type, IVisitor> visitors2; public PropertyViewer() { visitors1 = new List<IVisitor> { new StringVisitor(), //... }; visitors2 = new Dictionary<Type, IVisitor>() { { typeof(string), new StringVisitor() }, //... }; } private void AddControl(PropertyInfo p, string name = null, Color? c = null, int count = 0) { Control control = null; //option 1 foreach(var visitor in visitors1) { if(visitor.Accept(p, control)) break; } //option 2 if(visitors2.ContainsKey(p.PropertyType)) { visitors2[p.PropertyType].Accept(control); } //do your common control stuff here. } } There are other ways as well but that should encapsulate your work some and make it easier to find where a mistake is. Because of the nature of your work on this it won't be super easy to test that everything is visually pleasing, but the code should be encapsulated enough to make it easy to adjust what you need adjusted.
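The Dictionary<Type, IVisitor> idea above is language-agnostic. As a rough illustration (hypothetical names, and Python rather than C#, just to show the shape of the dispatch), the long switch statement collapses into a lookup table of per-type factories:

```python
# Hypothetical sketch of the Dictionary<Type, IVisitor> idea: map each
# property type to a small factory that builds the matching control.
def make_text_box(prop_name):
    return {"kind": "TextBox", "tag": "String", "width": 170}

def make_check_box(prop_name):
    return {"kind": "CheckBox"}

def make_numeric_box(prop_name):
    return {"kind": "TextBox", "tag": "Int", "width": 40}

CONTROL_FACTORIES = {
    str: make_text_box,
    bool: make_check_box,
    int: make_numeric_box,
}

def control_for(prop_name, prop_type):
    """Look up the factory for a property type; fall back to an Add button,
    mirroring the default branch of the original switch."""
    factory = CONTROL_FACTORIES.get(
        prop_type, lambda p: {"kind": "Button", "text": "Add"})
    control = factory(prop_name)
    control["name"] = prop_name
    return control

print(control_for("Title", str))    # {'kind': 'TextBox', ...}
print(control_for("Items", list))   # falls back to the Button branch
```

Adding a new type is then one dictionary entry instead of a new case label, which is the maintainability win both options above are after.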
{ "domain": "codereview.stackexchange", "id": 15914, "tags": "c#, winforms, reflection" }
Effect of stacking diffraction gratings
Question: I have a basic question about the effect of transmitting a laser beam through multiple diffraction gratings. Suppose a diffraction grating was used to produce many spots as follows: Would adding a second grating of the same pattern result in more spots/maximas? If so, would the resulting number be $n^2$ where $n$ is the original number of spots/maximas? If not, what would the result be? Answer: You can get your own answer by sending the beam through the grating, then reflecting the multiple beams back through the grating from a mirror. The answer is that each new beam is independently diffracted by the second grating.
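Under the small-angle, thin-grating approximation the answer's point can be made countable: each grating adds a transverse kick of $m\,\lambda/d$ in its ruling direction, so the output directions are the sums of the two order kicks. A sketch (assuming two identical gratings with orders $-k\ldots k$; for parallel rulings many combined orders coincide, so the count is $2n-1$, not $n^2$; crossed rulings give the full $n^2$ grid):

```python
from itertools import product

def spot_count(orders, parallel=True):
    """Count distinct output directions after two identical thin gratings.

    Small-angle approximation: each grating adds a transverse kick of
    m * (lambda/d) in its ruling direction, m in `orders`.  For parallel
    gratings the kicks add along one axis (many orders coincide); for
    crossed gratings they lie along perpendicular axes (no overlap).
    """
    if parallel:
        return len({m1 + m2 for m1, m2 in product(orders, orders)})
    return len({(m1, m2) for m1, m2 in product(orders, orders)})

orders = range(-2, 3)          # 5 spots from one grating (n = 5)
print(spot_count(orders))                  # parallel: 9  (= 2*n - 1)
print(spot_count(orders, parallel=False))  # crossed: 25  (= n**2)
```

The overlapping orders in the parallel case still differ in intensity (several paths feed the same spot), which is consistent with each beam being independently diffracted by the second grating.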
{ "domain": "physics.stackexchange", "id": 67315, "tags": "optics, experimental-physics, diffraction" }
Detection of closed timelike curves (CTC)
Question: I have a question. Are there any methods for detecting CTC? Is there a possibility of designing a modern physics experiment in order to confirm / refute the possibility of CTC at any given time in a given space? (Only in a specific space). I found the publication 'Detection of closed timelike curves' (W. B. Bonnor) but only two pages are freely available. Answer: Classically, (and with classically distinguishable particles) there is no method. Just like there is no method to prove that the universe is bounded spatially. If you travel and things look the same, you can't prove they are the same, the universe could just be repetitive spatially and/or temporally. I know that it is very problematic for the general case (if there are any in the Universe?). But when we consider a very limited space (e.g. in the laboratory), can we prove that there is no CTC in this limited space? Every experiment and observation has some finite precision. So classically, according to GR, you could have a very tiny spinning black hole that is so small that the odds of it getting close enough to your detectors to be detected is super small and yet inside it could be a region with time travel. But the time travel region is inside an event horizon, so you wouldn't notice. The precision issue is like trying to prove the mass of a photon is zero. With a good experiment you can prove that it must be tiny, and with better experiments you could prove it must be even more tiny. But you can't prove it is zero. You can exclude tiny black holes. But there could always be even more tiny ones that weren't ruled out. So whatever precision you have there could be small enough black holes that aren't ruled out. So no experiment can rule out time travel. So relax, and don't worry about it. If there is time travel but it doesn't affect you, then it's no big deal.
{ "domain": "physics.stackexchange", "id": 39434, "tags": "experimental-physics, time-travel, closed-timelike-curve" }
Credit card validator using Luhn Algorithm
Question: I'm looking for feedback or tips to maybe make my code more readable or faster or just general tips to help get myself started with app making with Python. import sys # check for valid input if len(sys.argv) == 2: try: int(sys.argv[1]) if(len(str(sys.argv[1])) == 16): pass else: print("Not 16 digits!") sys.exit() except ValueError: print('Not an integer!') sys.exit() else: print('Not enough or too many command line arguments! \n Proper use \"python Check.py <credit card number here> \" ') sys.exit() def main(): # put the digits into a list number = convertToList(sys.argv[1]) sum = cardCheck(number) if (sum%10 == 0): print('Valid Card!') else: print('Invalid Card!') #converts initial passed int variable to list def convertToList(num): numStr = str(num) numList = [] for digit in numStr: numList.append(int(digit)) return (numList) def cardCheck(digitList, count = 0): sum = 0 #if digit is every second digit multiply by 2 if(count%2 == 0 & count < 15): digitList[count] = (digitList[count] * 2) #if is 2 digit number after multiplication if(digitList[count] >= 10): digitList[count] = addDigits(digitList[count]) cardCheck(digitList, count + 1) else: cardCheck(digitList, count + 1) #progresses program elif(count < 15): cardCheck(digitList, count + 1) else: return 0 for digits in digitList: sum += int(digits) return sum #resolve 2 digit number conflict by adding the digits of the number and returning it def addDigits(num): list = str(num) sum = 0 for digits in list: sum += int(digits) return sum if __name__ == '__main__': main() Answer: For making your code faster, we can choose more Pythonic ways in some parts of it. First, let's take a look at convertToList function. The goal of this function is to split digits of a number to a list of int values. I want to follow your algorithm and do this by converting the number to the str and then splitting it. 
I want to do this by "List Comprehension": def convert_to_list(num): result = [int(x) for x in str(num)] return result We made some changes here. First of all, I changed the name of the function from camelCase (convertToList) to snake_case (convert_to_list) because according to the Python style guide, it is the better way. You can read more about the Python style guide at PEP8. Next change is I replaced all your code with a single line list comprehension. The first advantage, we have written less code. Less code means probably fewer bugs. But the second advantage here is this code is so much faster. How much? I have written a simple script for it; the result is that on average, the second version is 1.7 times faster. Now let's move on and take another look at addDigits function. I want to choose pythonic way again here: def sum_of_digits(number): num_list = convert_to_list(number) return sum(num_list) Like the previous time, I changed the name. I think this name is clearer and everyone could tell what this code does. For converting the number to a list of digits, I used convert_to_list function instead of writing the whole code again. We are using functions to avoiding duplication, so I think it's a bad idea to write the same code here. For calculating the sum of digits in a list, I strongly recommend you that always use built-in function sum. It's faster, you don't need to write new code and every Python programmer can tell what you are doing at first glance. This code is somehow 1.2 times faster than previous. Now let's go to the beginning of your code. We want to parse command line parameters and be sure that the input is correct. Even though we only call those codes once, I think it is a great favor to the readability of the code to move those lines in a separate function. 
from re import fullmatch def get_input_from_cmd(args_list): if len(args_list) != 2: raise Exception("You should enter a 16-digit number as input argument") return args_list[1] def is_input_valid(input_str): return bool(fullmatch(r"\d{16}", input_str)) I separated your code into two functions. The first function gets the argv list as an input parameter and if its length is equal to 2, returns the second parameter. Else, it will raise an Exception. There are a lot of people out there who are against exceptions and I agree with most of their reasons. But when we want our program to stop when bad input comes in, I think using exceptions is the best way. The second function simply uses the fullmatch function of the re module (fullmatch rather than search, because search would also accept longer strings that merely contain 16 digits somewhere). It checks that the input string consists of exactly 16 digits. If that is true, True will be returned. Otherwise, the False value is what you get. Now you can change your main function like this: def main(): input_string = get_input_from_cmd(sys.argv) if is_input_valid(input_string): digits_list = convert_to_list(input_string) card_checking_sum = card_check(digits_list) if card_checking_sum % 10 == 0: print('Valid Card!') else: print('Invalid Card!') else: print("Invalid Card number") What we do is if the card number is not a 16-digit number, code in the last else will execute. Otherwise, the code in the first if will run. That looks nicer to me. So now let's go to the last function, the cardCheck. def card_check(digits_list, count=0): result = 0 if count % 2 == 0: digits_list[count] *= 2 if digits_list[count] >= 10: digits_list[count] = sum_of_digits(digits_list[count]) if count < 15: card_check(digits_list, count + 1) else: return 0 result += sum(digits_list) return result There were some problems in your code that I tried to fix. First, you don't need to put if conditions in parentheses. In python, don't need means you should not. Second, if you run the same code in if and else; you should take that part of code out of the if statement.
That is what I did with the card_check(digits_list, count + 1) line. That line was repeated needlessly. In the end, for logical operations, you should use operators like and and or, not & and |. Here is the full code. I hope that helps you: from re import fullmatch import sys def get_input_from_cmd(args_list): if len(args_list) != 2: raise Exception("You should enter a 16-digit number as input argument") return args_list[1] def is_input_valid(input_str): return bool(fullmatch(r"\d{16}", input_str)) def convert_to_list(num): result = [int(x) for x in str(num)] return result def sum_of_digits(number): num_list = convert_to_list(number) return sum(num_list) def card_check(digits_list, count=0): result = 0 if count % 2 == 0: digits_list[count] *= 2 if digits_list[count] >= 10: digits_list[count] = sum_of_digits(digits_list[count]) if count < 15: card_check(digits_list, count + 1) else: return 0 result += sum(digits_list) return result def main(): input_string = get_input_from_cmd(sys.argv) if is_input_valid(input_string): digits_list = convert_to_list(input_string) card_checking_sum = card_check(digits_list) if card_checking_sum % 10 == 0: print('Valid Card!') else: print('Invalid Card!') else: print("Invalid Card number") if __name__ == '__main__': main()
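For reference, the whole check also fits in a few lines without recursion. This is a minimal sketch (not the reviewed code) that processes digits from the right, as the standard Luhn description does:

```python
def luhn_valid(number: str) -> bool:
    """Standard Luhn check: from the rightmost digit, double every
    second digit, sum the digits of the doubled results, and require
    the total to be divisible by 10."""
    if not number.isdigit():
        return False
    total = 0
    for i, ch in enumerate(reversed(number)):
        d = int(ch)
        if i % 2 == 1:          # every second digit from the right
            d *= 2
            if d > 9:
                d -= 9          # same as summing the two digits
        total += d
    return total % 10 == 0

print(luhn_valid("4111111111111111"))  # True  (a well-known test number)
print(luhn_valid("4111111111111112"))  # False
```

Working from the right also makes the function correct for card numbers that are not exactly 16 digits long, which the index-0,2,4... approach above silently assumes.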
{ "domain": "codereview.stackexchange", "id": 34686, "tags": "python, beginner, checksum" }
Debug Messages in Gazebo 4?
Question: I have a robot model that worked on Gazebo 2.2.3 on a different computer that I've been trying to port over to Gazebo 4. I copied the entire folder over to Gazebo 4's model directory and changed the SDF tags to 1.5 for both .sdf and .config files. For some reason, whenever I try to spawn the robot in Gazebo 4 through the menu, gzclient just terminates and gives me a generic "process has died" message. Is there some kind of debug mode in Gazebo 4 that will allow me to determine what exactly is wrong with my model or Gazebo? I'm going to ask another question related to the Robot. Originally posted by K. Zeng on Gazebo Answers with karma: 103 on 2014-10-23 Post score: 0 Answer: start gazebo with $ gazebo --verbose Originally posted by AndreiHaidu with karma: 2108 on 2014-10-23 This answer was ACCEPTED on the original site Post score: 2
{ "domain": "robotics.stackexchange", "id": 3658, "tags": "gazebo-4" }
how to change ros(map) coordinate system to html5 canvas coordinate system
Question: We want to design a web monitor for the robot's location. We have built the indoor map through ROS and generated a map picture; the yaml file parameters are as follows: [resolution: 0.050000 origin: [-24.45000, -28.000000, 0.000000] negate: 0 occupied_thresh: 0.65 free_thresh: 0.196], and the html5 canvas should continuously show the robot's location on the indoor map. But we do not know how to convert the ros (map) coordinate system to the html5 canvas coordinate system. Hope that experienced people can give me some ideas or algorithms, thank you. Originally posted by alvinhu on ROS Answers with karma: 11 on 2017-09-25 Post score: 1 Original comments Comment by sr_corey on 2019-06-24: Did you ever figure this out? I am attempting to do the same thing. I want to create my own custom map component so we can do more custom actions with the map, such as creating travel lanes with user input on the map. The problem is that I don't know how to convert the ROS coordinates to canvas coordinates and can't find any information about how the robot_pose_publisher goes about doing so. Answer: For anyone still having this issue, check out http://answers.ros.org/question/205521/robot-coordinates-in-map/. To paraphrase: In the case where the origin of /map and your grid 0,0 are the same and there is no rotation between the two, you need a pixel to real world units (meters) scale factor. OccupancyGrid has resolution in meters/cell. So pose.x / resolution = cell_x. Originally posted by sr_corey with karma: 16 on 2019-06-25 This answer was ACCEPTED on the original site Post score: 0
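The conversion in the accepted answer can be written out explicitly. A sketch using the yaml values from the question, assuming no rotation between /map and the image and a 1:1 drawing of the map image on the canvas; the one extra step beyond pose / resolution is flipping the row index, because the canvas origin is top-left with y pointing down while map y points up:

```python
RESOLUTION = 0.05                    # meters per cell, from the map yaml
ORIGIN_X, ORIGIN_Y = -24.45, -28.0   # map origin in meters, from the yaml

def world_to_canvas(x, y, image_height_px):
    """Convert a /map pose (meters) to canvas pixel coordinates.

    Assumes the map image is drawn 1:1 on the canvas and there is no
    rotation between the frames.  The canvas origin is top-left with y
    pointing down, so the row index must be flipped.
    """
    px = (x - ORIGIN_X) / RESOLUTION
    py = image_height_px - (y - ORIGIN_Y) / RESOLUTION
    return px, py

# The map origin itself lands at the bottom-left corner of the image:
print(world_to_canvas(-24.45, -28.0, image_height_px=1024))  # (0.0, 1024.0)
```

The same two lines translate directly into the JavaScript that draws on the canvas; `image_height_px` is the pixel height of the generated map picture.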
{ "domain": "robotics.stackexchange", "id": 28924, "tags": "navigation, mapping" }
Calculate prime numbers
Question: This is a program to calculate the largest prime number. Are there any possible improvements in style/speed/accuracy? public static int? LargestPrime(int max) { if (max < 0) return null; int largestPrime = 0; for (int i = 0; i <= max; i++) { bool? isPrime = IsPrime(i); if ((bool)isPrime) largestPrime = i; if (i % 100000 == 0) Console.WriteLine(largestPrime.ToString("N0")); } return largestPrime; } private static List<int> listPrimes = new List<int>() { 2, 3 }; private static int listPrimeMax; private static int listPrimesCount = 100000; private static bool firstA = true; private static bool firstB = true; public static bool? IsPrime(int n) { if (listPrimes.Count < listPrimesCount) { // from wiki // function is_prime(n : integer) // if n ≤ 1 // return false // else if n ≤ 3 // return true // else if n mod 2 = 0 or n mod 3 = 0 // return false // let i ← 5 // while i×i ≤ n // if n mod i = 0 or n mod(i + 2) = 0 // return false // i ← i + 6 //return true if (n < 0) return null; if (n <= 1) return false; if (n <= 3) return true; if (n % 2 == 0 || n % 3 == 0) return false; for (int i = 5; i*i <= n; i += 6) { if (n % i == 0 || n % (i + 2) == 0) return false; } if (listPrimes.Count < listPrimesCount) { listPrimes.Add(n); listPrimeMax = n; } return true; } else { if(firstA) { Console.WriteLine("in listPrimes"); firstA = false; } foreach (int i in listPrimes) { if (n % i == 0) return false; if (i * i > n) break; } for (int i = listPrimeMax + 2; i*i <= n; i += 2) { if (n % i == 0) return false; } if (listPrimes.Count < 100 * listPrimesCount) { listPrimes.Add(n); listPrimeMax = n; } else if(firstB) { Console.WriteLine("in 100*listPrimes"); firstB = false; } return true; } } Answer: public static int? LargestPrime(int max) { if (max < 0) return null; int largestPrime = 0; for (int i = 0; i <= max; i++) { bool? 
isPrime = IsPrime(i); if ((bool)isPrime) largestPrime = i; if (i % 100000 == 0) Console.WriteLine(largestPrime.ToString("N0")); } return largestPrime; } Omitting braces {}, although they might be optional, won't do you any good in the long run because it makes your code error-prone. If an argument of a method isn't correct like max < 0 you should throw either an ArgumentException or better an ArgumentOutOfRangeException. You are iterating from 0 to max to check if the value is a prime, but I wonder what else could the number be? Either it is a prime or it isn't a prime, there is no third way so it doesn't make sense that IsPrime() returns a nullable bool. bool? IsPrime(int n) Here you are iterating from 5 to i*i<n for each number which is passed to the method. So let us assume that we pass max = 10.000.000 (the dots are there for clarity) this if (n % i == 0 || n % (i + 2) == 0) return false; in the for loop in the IsPrime() method is executed 16.194.513 times. By using a sieve you could pretty much speed this thing up like so static int CalculateLargestPrime(int maxPrime) { bool[] isComposite = new bool[maxPrime + 1]; for (int x = 2; x * x <= maxPrime; x++) { if (!isComposite[x]) { for (int y = x * x; y <= maxPrime; y = y + x) { isComposite[y] = true; } } } for (int i = maxPrime; i >= 2; i--) { if (!isComposite[i]) { return i; } } return -1; } taken and adjusted from https://codereview.stackexchange.com/a/62158/29371
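For readers following along in Python rather than C#, the same sieve idea can be sketched as follows (a rough equivalent, not part of the original review):

```python
def largest_prime_up_to(max_prime):
    """Sieve of Eratosthenes: mark composites once, then scan down
    for the largest unmarked (prime) value."""
    if max_prime < 2:
        return -1
    is_composite = [False] * (max_prime + 1)
    x = 2
    while x * x <= max_prime:
        if not is_composite[x]:
            # Start at x*x: smaller multiples were marked by smaller primes.
            for y in range(x * x, max_prime + 1, x):
                is_composite[y] = True
        x += 1
    for i in range(max_prime, 1, -1):
        if not is_composite[i]:
            return i
    return -1

print(largest_prime_up_to(100))  # 97
```

The sieve does O(n log log n) work once, instead of re-running trial division for every candidate, which is where the speedup in the answer comes from.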
{ "domain": "codereview.stackexchange", "id": 22395, "tags": "c#, algorithm, .net, primes" }
Is Newton's first law a special case of a more general law?
Question: I was reading my freshman physics textbook (Fundamentals of Physics by Jearl Walker), and the book says that Newton's first law only applies in a special frame of reference: Newton's first law is not true in all reference frames, but we can always find reference frames in which it (as well as the rest of Newtonian mechanics) is true. Such special frames are referred to as inertial reference frames, or simply inertial frames. I have multiple questions about this paragraph: What are frames of reference? What do they mean? I was not able to find a definition that I can understand at my current level. However, without knowing what a frame of reference is, I attempted to come up with another definition: Assume that there's a set of all possible frames $R$, then we can rewrite the first law this way: There exists a frame $r \in R$ such that, in the frame $r$ the following is always true: $$a = 0 \iff f = 0$$ What do you think of my definition? Can we find other frames with different laws? Can we prove they exist? Answer: A reference frame is simply a system of co-ordinates measured relative to a specific point, which is the origin in that reference frame. Often we use Cartesian co-ordinates in each reference frame (we don't have to, but this makes it simpler to define what we mean by a "straight line") and we rotate the co-ordinates in each reference frame so that the $x,y,z$ axes are aligned (again, we don't have to, but it makes life simpler). And we choose the origin in each reference frame so that all of the origins coincide at some specific time, which we call $t=0$. We can then identify a particular point (or event) in spacetime by its co-ordinates and time relative to reference frame $A$ - say $(x_A, y_A, z_A, t)$. In another reference frame $B$ the same event will have different co-ordinates $(x_B, y_B, z_B, t)$.
Note that because we are considering Newtonian mechanics here, the value of the time co-ordinate $t$ is the same in all reference frames - there is a universal time. If we were considering relativistic mechanics then $t$ would depend on the reference frame as well. We can track the $(x_A, y_A, z_A)$ co-ordinates of some object $O$ in reference $A$ - in general these will depend on time $t$. If the $(x_A, y_A, z_A)$ co-ordinates of $O$ are constant (i.e. do not depend on $t$) then we say that $O$ is at rest relative to reference frame $A$. If the $(x_A, y_A, z_A)$ co-ordinates of $O$ depend linearly on time $t$ (so if $x_A(t) = x_A(0) + vt$ etc. ) then we say that $O$ is moving at a constant velocity relative to frame $A$. By observing the co-ordinates of different events in reference frames $A$ and $B$, we can deduce a set of relations between the two sets of co-ordinates, and these relations hold for all events in spacetime. For example, if frame $B$ is moving relative to frame $A$ with constant velocity $v$ parallel to the $x$ axis then $x_A = x_B + vt \\ y_A = y_B \\ z_A = z_B$ This is called a Galilean transformation. But if frame $B$ is accelerating relative to frame $A$ with constant acceleration $a$ parallel to the $x$ axis then $x_A = x_B + \frac 1 2 at^2 \\ y_A = y_B \\ z_A = z_B$ and this is no longer a Galilean transformation. If we have an object $O$ with no forces acting on it then we can define a reference frame $F_O$ in which this object is at rest (simply define the origin of the reference frame to be wherever that object is). Newtons' first law then says that any other object on which no forces act will either be at rest or will move with a constant velocity relative to reference frame $F_O$. And this will also be true in any other reference frame that is related to $F_O$ by a Galilean transformation. However, Newton's first law will not be true in a reference frame that is related to $F_O$ by a non-Galilean transformation. 
In a reference frame that is accelerating relative to $F_O$ for example, then $O$ will appear to be accelerating even though there are no forces acting on it.
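A quick numerical illustration of the answer's point (my own sketch, not part of the original answer): an object moving at constant velocity in frame $A$ still moves at constant velocity after a Galilean transformation into frame $B$, but appears to accelerate in a non-Galilean, accelerating frame $C$.

```python
import numpy as np

t = np.linspace(0.0, 10.0, 101)   # universal Newtonian time
x_A = 2.0 + 3.0 * t               # object moving at a constant 3 m/s in frame A

# Galilean transformation into frame B moving at v = 5 m/s along x:
x_B = x_A - 5.0 * t

# Non-Galilean transformation into frame C accelerating at a = 1 m/s^2:
x_C = x_A - 0.5 * 1.0 * t**2

# Second finite differences approximate acceleration: they vanish only
# in the inertial frames A and B, not in the accelerating frame C.
print(np.allclose(np.diff(x_A, 2), 0.0),
      np.allclose(np.diff(x_B, 2), 0.0),
      np.allclose(np.diff(x_C, 2), 0.0))
```

The trajectory in frame C violates Newton's first law even though no force acts on the object.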
{ "domain": "physics.stackexchange", "id": 71844, "tags": "newtonian-mechanics, reference-frames, inertial-frames, definition" }
How to set limits of Y-axes in countplot?
Question: df in my program happens to be a dataframe with these columns : df.columns '''output : Index(['lat', 'lng', 'desc', 'zip', 'title', 'timeStamp', 'twp', 'addr', 'e', 'reason'], dtype='object')''' When I execute this piece of code: sns.countplot(x = df['reason'], data=df) # output is the plot below but if I slightly tweak my code like this : p = df['reason'].value_counts() k = pd.DataFrame({'causes':p.index,'freq':p.values}) sns.countplot(x = k['causes'], data = k) So essentially I just stored the 'reason' column values and their frequencies as a series in p and then converted them to another dataframe k, but this new countplot doesn't have the right range of Y-axis for the given values. My doubts happen to be: Can we set the Y-axis of the second countplot to its appropriate limits? Why does the second countplot differ from the first one when I just separated the specific column I wanted to graph and plotted it separately? Answer: Countplot from seaborn will not work as you expect. When you calculate the frequencies, you want to plot the values in p.values as they appear. Countplot will take a dataframe where labels are not aggregated and then count each one of them, as it did in the first case. So countplot will be appropriate for the case where your dataframe looks like: index | reason | 0 EMS 1 EMS 2 Traffic 3 Fire 4 Fire 5 EMS 6 Traffic ... In the second case you already have your frequencies: index | reason | EMS 10 Traffic 21 Fire 15 Then countplot will just count the lines and it will be one for each, that is why your plot looks like that. To solve your problem you could just plot using .plot from pandas: df['reason'].value_counts(normalize=True).plot(kind='bar') Where the parameter normalize=True will show normalized frequencies instead of raw count values.
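A minimal illustration of why countplot behaves this way, using a made-up 'reason' column (the values are assumed, not from the original df):

```python
import pandas as pd

# Hypothetical unaggregated column - the shape countplot expects:
df = pd.DataFrame({'reason': ['EMS', 'EMS', 'Traffic', 'Fire', 'EMS']})

p = df['reason'].value_counts()
print(p['EMS'])   # counted from the raw rows

# After aggregating, each cause occupies exactly one row, so counting
# rows per label (which is all countplot does) yields 1 for every cause:
k = pd.DataFrame({'causes': p.index, 'freq': p.values})
print(k.groupby('causes').size().to_dict())
```

The aggregated frame already holds the frequencies in 'freq', so a bar plot of those values (as in the answer) is the right tool, not another count.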
{ "domain": "datascience.stackexchange", "id": 4762, "tags": "python, dataframe, matplotlib, seaborn" }
Why can two target superconducting processors with the same layout get different transpilation results?
Question: I compile and run the same small Grover circuit for 6 different processors. ibmq_santiago, manila and bogota all have a linear layout, while quito, belem and lima have qubits arranged in a T-shape. I get transpiled circuits using transpile(circuit, backend=target_backend, seed_transpiler=10, optimization_level=3). I expected the resulting circuits to have the same structure for processors with the same layout. However, they do differ. For example, the circuit for Santiago has a depth of 73 and includes 38 cX gates, while the one for Manila has a depth of 70 but with 41 cX gates. There are similar differences between other processors. I thought it could be due to some randomness inside of the code, but the results always return the same. Does the Qiskit transpiler take into account average errors or other parameters that could affect this? Answer: The maximum optimization level (level 3) is noise-aware. That means that it considers the noise reported by the backend to allocate qubits and tries to maximise fidelity. As a consequence, transpiling for different backends might end up with different allocations. From the level 3 passmanager documentation (emphasis mine): Level 3 pass manager: heavy optimization by noise adaptive qubit mapping and gate cancellation using commutativity rules and unitary synthesis.
{ "domain": "quantumcomputing.stackexchange", "id": 3053, "tags": "qiskit, ibm-quantum-devices" }
Why Do Sausages Always Split Lengthwise?
Question: Sausages universally split parallel to the length of the sausage. Why is that? Answer: This behaviour is well explained by Barlow's formula, even though the English Wikipedia article is incomplete in this context. The German version, on the other hand, gives the full picture (which I will quote in the following). The walls of a pipe (or a similar cylindrical container, say, a sausage) experience two types of stresses: Tangential ($\sigma_{\rm{t}}$) and axial ($\sigma_{\rm{a}}$). For given pressure $p$, diameter $d$ and wall-thickness $s$, the individual stresses can approximately be calculated from $$\sigma_{\rm{t}} = \frac { p \cdot d } { 2 \cdot s }$$ and $$\sigma_{\rm{a}} = \frac { p \cdot d } { 4 \cdot s }.$$ Here, you can directly see that the tangential stress will always be larger, which is why it is likely that cracks in the container/sausage will first form in this direction. In fact, this is why the first formula is often stated on its own, just as it is the case in the English Wikipedia article. Fun fact: The sausage example is used by many German students as a mnemonic helping to remember which of the stresses is larger. As a result, the formulas are often called "Bockwurstformeln" (sausage formulas). Edit: In response to the comments below, I will try to summarize some details about the above formulas: The formulas do not directly indicate how and where the container will split. Assuming that the tensile strength is identical in all directions, we can see that there will be a greater release of tension when the crack propagates length-wise (see the video posted by JoeHobbit and the comment by LDC3). A real sausage will of course have various imperfections, which is why the crack path will not be straight in practice.
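The factor-of-two relation between the two stresses follows directly from the formulas above; the numbers below are purely illustrative (not from the answer):

```python
# Barlow-type thin-wall stresses: tangential (hoop) and axial,
# for internal pressure p, diameter d and wall thickness s (SI units).
def tangential_stress(p, d, s):
    return p * d / (2.0 * s)

def axial_stress(p, d, s):
    return p * d / (4.0 * s)

# Assumed "sausage-like" values: 20 kPa internal pressure, 2 cm diameter,
# 0.5 mm casing thickness.
p, d, s = 2.0e4, 0.02, 0.5e-3
print(tangential_stress(p, d, s), axial_stress(p, d, s))
# The tangential stress is exactly twice the axial one, independent of
# p, d and s - hence the lengthwise split.
```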
{ "domain": "physics.stackexchange", "id": 12845, "tags": "everyday-life, fracture" }
Fresnel Zones-How are they Formed?
Question: How are Fresnel Zones formed? What phenomena of light allow ellipsoid areas to be in phase? I've tried reading articles, but they more or less introduce me to characteristics of light, and then tell me that Fresnel Zones exist. How does the circular wave coming out of one antenna "bend back" to the other one in the shape of an ellipse? Answer: When you derive a Huygens-Fresnel propagator (which is how actual wavefronts propagate according to Maxwell's equations), a Fresnel zone is really the difference (in phase) between surfaces of equal phase on the propagating wavefront and a plane slicing or tangent to that surface of equal phase. These Fresnel zones are defined when propagating a plane wave incident on some circular aperture. This picture shows the concept, except the Fresnel zones are shown for a finite conjugate (i.e. the phase difference between a propagating wave and a point where we want the zone plate to act as a "lens"). Basically, the Fresnel zone is the phase difference map between something like a spherical wavefront and a plane tangent to that wavefront. That is exactly what it is physically and conceptually. A zone plate (manufactured on a plane) can then be used to change the propagation properties of the light wave. I say something like a spherical wavefront above because that is a good approximation; the actual Green's function for the propagator is the derivative of a spherical wavelet, which is required to satisfy the boundary conditions properly. The figure above shows how canceling the additive and subtractive parts of the wavefront can result in "lens type" behavior.
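For the antenna side of the question, the radius of the $n$-th Fresnel zone ellipsoid at distances $d_1$ and $d_2$ from the two antennas is commonly computed as $r_n = \sqrt{n\lambda d_1 d_2/(d_1+d_2)}$; here is a small sketch with assumed link parameters (this formula and example are mine, not from the answer):

```python
import math

def fresnel_radius(n, wavelength, d1, d2):
    """Radius of the n-th Fresnel zone at a point d1 from one antenna
    and d2 from the other (standard link-budget formula)."""
    return math.sqrt(n * wavelength * d1 * d2 / (d1 + d2))

# Assumed example: 2.4 GHz link, evaluated at the midpoint of a 1 km path.
lam = 3e8 / 2.4e9                       # wavelength in metres
r1 = fresnel_radius(1, lam, 500.0, 500.0)
print(round(r1, 2))                     # first-zone radius in metres
```

The zone boundary is where the detour path via that point is $n\lambda/2$ longer than the direct path, which is exactly the ellipsoid geometry asked about.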
{ "domain": "physics.stackexchange", "id": 10795, "tags": "optics, antennas" }
PSD in MATLAB, problem with the coefficient $\frac{1}{2\pi}$
Question: Using the FFT to mimic the Fourier transform, I have this question. The definition that I use for the PSD $S(f)$ is: $$S(f)=\frac{1}{2\pi}\int_{-\infty}^{\infty}\exp\left(-i\tau2\pi f\right)r(\tau)d\tau$$ where $r(t)$ is the correlation function. The FFT in MATLAB uses this formula $$X(k) = \sum_{n=1}^N x(n)\exp\left(-j2\pi\frac{(k-1)(n-1)}{N}\right), \qquad 1 \leq k \leq N.$$ So in order to compute the PSD I did: S=abs(fft(r))*delta_t where delta_t is the time step. In other words, translating the code into a formula, I did: $$S(k)=\sum_{n=1}^{N}\exp\left(-i2\pi \frac{(k-1)(n-1)}{N}\right)r(n)\Delta T$$ After using this formula I applied this, checking the result $$\int_{-\infty}^{\infty}S(f)df=r.m.s. \big\{x(t)\big\}$$ where $x(t)$ is the original signal. This formula is perfectly verified. Now the problem is that I noticed that in MATLAB's fft there is no coefficient $\frac{1}{2\pi}$. Can you give me an explanation of why this happens? I know that there are a lot of different coefficients that I can use for the Fourier transform pair, but I don't know if this is related to my problem. Thanks Answer: I would simply say that it doesn't matter so much. The Fourier transformation and its inverse are a pair; the two formulas are entangled and require a normalization factor, which is up to convention. If your theory/model does not require a special convention, it is basically a degree of freedom that you can choose to fit your needs.
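One way to see that no $\frac{1}{2\pi}$ is needed here: that factor belongs to conventions written in angular frequency $\omega=2\pi f$, while with the kernel $\exp(-i2\pi f\tau)$ (frequency in Hz) the transform pair is already symmetric. A discrete Parseval check of the code's scaling X = fft(x)*delta_t (my own sketch, in NumPy rather than MATLAB):

```python
import numpy as np

# Discrete Parseval check of the scaling X = fft(x) * dt:
#   sum(|x|^2) * dt  ==  sum(|X|^2) * df,   with df = 1 / (N * dt).
rng = np.random.default_rng(0)
N, dt = 1024, 0.01
x = rng.standard_normal(N)

X = np.fft.fft(x) * dt
df = 1.0 / (N * dt)

lhs = np.sum(np.abs(x) ** 2) * dt
rhs = np.sum(np.abs(X) ** 2) * df
print(np.isclose(lhs, rhs))   # energies match with no extra 1/(2*pi)
```

If an extra $1/(2\pi)$ were inserted on one side, this energy balance would fail by that factor, which is exactly the consistency check the questioner performed.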
{ "domain": "dsp.stackexchange", "id": 7700, "tags": "matlab, fft, power-spectral-density" }
What's the proper way to determine whether dose response curves +/- another variable (besides dose) are statistically different?
Question: I have an experiment wherein I measured uptake of a certain molecule into cells when delivered using Carrier A vs Carrier B. In other words, for example, I delivered 1, 2, 3, 4, 8, and 10 nmoles of DrugX to 6 wells of cells per dosage, HOWEVER in 3 wells DrugX was delivered using Carrier A and in the other 3 wells DrugX was delivered using Carrier B. I would like to statistically determine whether the dose-response curve I get when I used Carrier A for delivery is any different than the dose-response curve I get when I used Carrier B for delivery. My initial thought was to run a paired T-test, but then I realized I would have to report results at each point and I would like to report on the curve as a whole. My next thought was to use a two-factor ANOVA, however from what it looks like, when it determines whether or not there is variance due to the change in carrier it looks at the average and variance of all the responses. Another thought I had would be to calculate area under the curve, but I'm not sure how. I finally settled on running a nonlinear regression on the Carrier A data, then on the Carrier B data, then on the dataset as a whole, and compared the resulting fit curves. I got a nice p-value, however the $R^2$ values are lower than I'd like (about 0.9) because the curves didn't model especially well. What would you do? What is the industry standard? Answer: Good call on not using multiple t-tests. The reason being, assuming a 95% confidence interval, each time you run the test you are "allowing" a 5% possibility of making a Type I error - this means that you incorrectly reject the null hypothesis in favour of the alternative hypothesis. If you run one t-test, there's a five percent chance of making an error. If you run two t-tests, there's nearly a ten percent chance of making an error. Whether you use a regression model or ANOVA does not matter, as the mathematics behind each is pretty much equivalent.
What you really want to look for, is the slopes of both models. If the two models have a statistically significant difference between their slopes, then this is indicating an interaction effect. Two examples of (some, but not all) possible interaction effects: Carrier A & B have a similar effect on dose-response at 2mmol. But as the dosage approaches 10mmol, Carrier B causes a dose-response that is nearly double that of Carrier A. Above 5mmol, Carrier B creates a greater dose-response. Below 5mmol Carrier A creates a greater dose response. At 5mmol the dose-response is nearly equivalent.
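As a sketch of the slope-comparison idea, the difference in slopes shows up as the coefficient on a dose × carrier product column in the design matrix. The data below are made up, and a real analysis would use a statistics package that also reports a standard error and p-value for that coefficient; this only illustrates where the interaction lives:

```python
import numpy as np

# Illustrative (made-up) data: carrier B has a steeper dose-response slope.
dose    = np.tile([1, 2, 3, 4, 8, 10], 2).astype(float)
carrier = np.repeat([0.0, 1.0], 6)            # 0 = Carrier A, 1 = Carrier B
resp    = 1.0 + 2.0 * dose + 1.5 * dose * carrier

# Fit  response ~ b0 + b1*dose + b2*carrier + b3*(dose*carrier):
X = np.column_stack([np.ones_like(dose), dose, carrier, dose * carrier])
beta, *_ = np.linalg.lstsq(X, resp, rcond=None)
print(np.round(beta, 3))   # beta[3] recovers the extra slope of carrier B
```

A nonzero (statistically significant) beta[3] is precisely the interaction effect described above: the two carriers' dose-response lines diverge as dose increases.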
{ "domain": "biology.stackexchange", "id": 6928, "tags": "statistics" }
RattleHiss (fizzbuzz in python)
Question: This is an iterative review. The previous iteration can be found here. The next iteration can be found here I've now been programming in python for all of 4 hours. At 70 lines it's rather verbose for a FizzBuzz. If you can suggest a more pythonic way to structure it, that would be awesome. FizzBuzz.py def print_fizz_buzz_output(start_end_pair, pair_list): start_num, end_num = start_end_pair for num in range(start_num, end_num + 1): has_divisor = False for divisor, text in pair_list: if num % divisor == 0: has_divisor = True print(text, end='') if has_divisor: print() # New Line else: print(str(num)) def ask_divisor_text_pairs(): exit_flag = False while not exit_flag: divisor_is_valid = False while not divisor_is_valid: divisor = input('Divisor? ') try: divisor = int(divisor) divisor_is_valid = True except ValueError: print('Invalid input for divisor. Divisor must be a whole number. Please try again.') text = input('Text? ') yield (divisor, text) while True: continue_response = input('Input Another Divisor (y/n)? ') if continue_response in ('N', 'n'): exit_flag = True break elif continue_response in ('Y', 'y'): exit_flag = False break else: print('Invalid response. Please input "y" or "n"') def ask_iteration_range() -> tuple: start_num_is_valid = False while not start_num_is_valid: start_num = input('Start Number? ') try: start_num = int(start_num) start_num_is_valid = True except ValueError: print('Invalid input for start number. Must be a whole number. Please try again.') end_num_is_valid = False while not end_num_is_valid: end_num = input('End Number? ') try: end_num = int(end_num) if end_num >= start_num: end_num_is_valid = True else: raise ValueError except ValueError: print('Invalid input for end number. Must be a whole number greater than or equal to the start number.' ' Please try again.') return start_num, end_num print_fizz_buzz_output(ask_iteration_range(),list(ask_divisor_text_pairs())) Example Input/Output Start Number? 1 End Number? 
20 Divisor? 3 Text? Rattle Input Another Divisor (y/n)? y Divisor? 5 Text? Hiss Input Another Divisor (y/n)? n 1 2 Rattle 4 Hiss Rattle 7 8 Rattle Hiss 11 Rattle 13 14 RattleHiss 16 17 Rattle 19 Hiss Answer: Flags, flags everywhere There are constructs in Python that can reduce the amount of needed flags like the for ... else blocks or the try .. else construct. break and return should also be your friends: def print_fizz_buzz_output(start_end_pair, pair_list): start_num, end_num = start_end_pair for num in range(start_num, end_num + 1): has_divisor = False for divisor, text in pair_list: if num % divisor == 0: has_divisor = True print(text, end='') if has_divisor: print() # New Line else: print(str(num)) def ask_divisor_text_pairs(): while True: while True: divisor = input('Divisor? ') try: divisor = int(divisor) except ValueError: print('Invalid input for divisor. Divisor must be a whole number. Please try again.') else: break text = input('Text? ') yield (divisor, text) while True: continue_response = input('Input Another Divisor (y/n)? ') if continue_response in ('N', 'n'): return elif continue_response in ('Y', 'y'): break else: print('Invalid response. Please input "y" or "n"') def ask_iteration_range() -> tuple: while True: start_num = input('Start Number? ') try: start_num = int(start_num) except ValueError: print('Invalid input for start number. Must be a whole number. Please try again.') else: break while True: end_num = input('End Number? ') try: end_num = int(end_num) except ValueError: print('Invalid input for end number. Must be a whole number.' ' Please try again.') else: if end_num < start_num: print('Invalid input for end number. Must be greater than or equal to the start number. 
Please try again.') else: break return start_num, end_num print_fizz_buzz_output(ask_iteration_range(),list(ask_divisor_text_pairs())) Factorize out common behaviour Next we see that you make extensive use of constructs like: while True: value = input('Something') try: result = convert(value) except ValueError: print('Error message') else: break You can factorize that out: def ask_something(prompt, type_=int, error_message='Invalid input'): while True: value = input(prompt) try: return type_(value) except ValueError: print(error_message) def print_fizz_buzz_output(start_end_pair, pair_list): start_num, end_num = start_end_pair for num in range(start_num, end_num + 1): has_divisor = False for divisor, text in pair_list: if num % divisor == 0: has_divisor = True print(text, end='') if has_divisor: print() # New Line else: print(str(num)) def ask_divisor_text_pairs(): while True: divisor = ask_something('Divisor? ', error_message='Invalid input for divisor. Divisor must be a whole number. Please try again.') text = input('Text? ') yield (divisor, text) while True: continue_response = input('Input Another Divisor (y/n)? ') if continue_response in ('N', 'n'): return elif continue_response in ('Y', 'y'): break else: print('Invalid response. Please input "y" or "n"') def ask_iteration_range() -> tuple: start_num = ask_something('Start Number? ', error_message='Invalid input for start number. Must be a whole number. Please try again.') while True: end_num = ask_something('End Number? ', error_message='Invalid input for end number. Must be a whole number. Please try again.') if end_num < start_num: print('Invalid input for end number. Must be greater than or equal to the start number. Please try again.') else: break return start_num, end_num print_fizz_buzz_output(ask_iteration_range(),list(ask_divisor_text_pairs())) Build strings before printing them The last flag we didn't take care of is in the print_fizz_buzz_output function. 
For this one, we need to see the inner for loop as something equivalent to: text_for_divisor = [] for divisor, text in pair_list: if num % divisor == 0: text_for_divisor.append(text) if text_for_divisor: print(''.join(text_for_divisor)) else: print(num) So we got rid of the flag but the construct is inneficient. Better use a list-comprehension instead: text_for_divisor = [text for divisor, text in pair_list if num % divisor == 0] print(''.join(text_for_divisor) if text_for_divisor else num) Use if __name__ == '__main__' @Dex'ter already told you, but trully, you want to take that habit. When you jump into an interactive session and try to debug your functions, you don't want to be prompted about what are your bounds. You want to be able to directly do: >>> import FizzBuzz as fb >>> fb.print_fizz_buzz_output((1, 50), [(3, 'Fizz'), (5, 'Buzz')]) Better signature I would use print_fizz_buzz_output(begin, end, pair_list) instead of requiring a tuple as the begin/end pair, this feel more natural. You would then need to change your main call to print_fizz_buzz_output(*ask_iteration_range(), list(ask_divisor_text_pairs())) Proposed improvements def ask_something(prompt, type_=int, error_message='Invalid input'): while True: value = input(prompt) try: return type_(value) except ValueError: print(error_message) def print_fizz_buzz_output(begin, end, pair_list): for num in range(begin, end + 1): text_for_divisors = [text for divisor, text in pair_list if num % divisor == 0] print(''.join(text_for_divisors) if text_for_divisors else num) def ask_divisor_text_pairs(): while True: divisor = ask_something('Divisor? ', error_message='Invalid input for divisor. Divisor must be a whole number. Please try again.') text = input('Text? ') yield (divisor, text) while True: continue_response = input('Input Another Divisor (y/n)? ') if continue_response in ('N', 'n'): return elif continue_response in ('Y', 'y'): break else: print('Invalid response. 
Please input "y" or "n"') def ask_iteration_range() -> tuple: start_num = ask_something('Start Number? ', error_message='Invalid input for start number. Must be a whole number. Please try again.') while True: end_num = ask_something('End Number? ', error_message='Invalid input for end number. Must be a whole number. Please try again.') if end_num < start_num: print('Invalid input for end number. Must be greater than or equal to the start number. Please try again.') else: break return start_num, end_num if __name__ == '__main__': print_fizz_buzz_output(*ask_iteration_range(), list(ask_divisor_text_pairs()))
{ "domain": "codereview.stackexchange", "id": 22312, "tags": "python, beginner, python-3.x, fizzbuzz" }
How Can I Backpropagate My Network with PPO
Question: I am trying to implement PPO to my reinforcement agents. I have a classic neural network that represents the policy. I didn't quite understand how the PPO updates the network, according to what? There needs to be target values right? Or at least a loss function? For example in DQN a second network gives target Q values and the backpropagation is done according to that values for example with MSE loss function. But I don't know where I should put the returned value from PPO function. I don't get it. How can I backpropagate according to PPO values? Should I treat it like a target values of the network? If I should not, what are the target values or what is the loss? Answer: I didn't quite understand how the PPO updates the network, according to what? PPO implements what is called a policy gradient algorithm. What policy gradient is essentially doing is updating the policy in the direction of better return (return is the discounted sum of rewards during an episode). Let's say the policy is parametrized by neural network weights $\phi$. Then the weights update in the most basic policy gradient algorithm is computed as: $$\phi = \phi + \alpha \nabla \log \pi_{\phi}(a_t,s_t) \cdot V_t,$$ where $V_t$ is the return. In case of PPO, $V_t$ is actually estimated (generalized) advantage function implemented by another neural network. This second neural network is often called critic, while your policy network is called an actor. Meaning that the actor is taking decisions, while the critic helps to train the actor by providing "an advice" in which direction to move. There needs to be target values right? Or at least a loss function? The loss function in PPO is $$L^{PPO} = L^{PG} + c_1 L^{VF} - c_2 S,$$ where $L^{PG}$ is policy gradient loss calculated for actor performance, i.e. how good were actor's actions. 
In a simple policy gradient algorithm it would be $L^{PG} = -\sum_{t} \log \pi_{\phi}(a_t|s_t) \cdot V_t$, while in PPO it is a bit more sophisticated $$ L^{PG}(\phi) = \sum_t \max \left( -r_t(\phi) \hat A_t, -\text{clip}(r_t(\phi), 1 - \epsilon, 1 + \epsilon) \hat A_t \right),$$ where $\hat A_t$ is empirical advantage function (the same as $V_t$ before) and $r_t(\phi) = \frac{\pi_{\phi}(a_t|s_t)}{\pi_{\phi_{\text{old}}}(a_t|s_t)}$ is the probability ratio of how better is the new policy is compared to the old one. $L^{VF} = (V_{\theta}(s_t) - V_t^{\text{targ}})^2$ is value function loss function ($\theta$ are critic network parameters). This one is calculated for critic performance, i.e. how good it is able to estimate advantages. And $S$ is what is called "entropy" bonus (to increase the entropy, i.e. to stimulate exploration). Should I treat it like a target values of the network? No, not necessary. Note also that unlike DQN, PPO is an on-policy algorithm, which means that it throws away the episode information (states, rewards, actions) as soon as it is done with it. For more details, see this excellent post about PPO details. There are also three Youtube videos explaining the subject. And I also wrote a post implementing PPO from scratch (first using REINFORCE to illustrate the simplest policy gradient algorithm). Also don't hesitate to read the original PPO paper.
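The clipped term $L^{PG}$ above fits in a few lines; this NumPy version is an illustrative sketch only (a real implementation would use an autodiff framework such as PyTorch so the loss can be backpropagated through the policy network):

```python
import numpy as np

def ppo_clip_loss(ratio, adv, eps=0.2):
    """Per-sample clipped surrogate: max(-r*A, -clip(r, 1-eps, 1+eps)*A)."""
    unclipped = -ratio * adv
    clipped = -np.clip(ratio, 1.0 - eps, 1.0 + eps) * adv
    return np.maximum(unclipped, clipped)

# With a positive advantage, pushing the ratio beyond 1+eps gains nothing:
print(ppo_clip_loss(np.array([1.5]), np.array([1.0])))   # capped at -(1+eps)*A
# Inside the clip range the ordinary surrogate is recovered:
print(ppo_clip_loss(np.array([1.1]), np.array([1.0])))
```

The clipping is what keeps the updated policy from moving too far from the old one in a single update, which is PPO's central trick.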
{ "domain": "ai.stackexchange", "id": 3931, "tags": "neural-networks, reinforcement-learning, proximal-policy-optimization" }
Why do only honey bees collect & store honey?
Question: Among insects, most flying insects, such as butterflies, feed by sucking nectar from flowers, yet none of them store honey except the honey bee. Why is that? Answer: Many different bees also make honey, not just honey bees (Apis species). There are carpenter bees, bumblebees and, as Geraldo said, there are honey pot ants. Only honey bees make any great quantity of honey. The other species make far less and it's a different consistency. Honey bees live in large colonies and honey is stored to feed developing larvae as well as feeding workers and the queen through winter. Other bees are solitary except for a few bumblebee species, but their colonies are much smaller - usually fewer than 50. Bumblebees make far less honey. There's no need to store honey for winter as the queen and workers die in late fall. Only a newly hatched and fertilized queen will survive, hibernating through winter underground. Hence, storing more than the immediate needs of queen, larvae and workers would be pointless. You mentioned butterflies sipping nectar. A butterfly's lifespan is very brief, and it only sips nectar to provide energy for fertilization and egg laying. Then, generally, they die. Larvae have been provided for already, since the eggs hatch on the appropriate plant leaves as their food.
{ "domain": "biology.stackexchange", "id": 7440, "tags": "entomology" }
True examples of common variation due to Mendelian Inheritance
Question: Classic examples of mendelian inheritance are genetic diseases such as sickle-cell anemia, Tay-Sachs, cystic fibrosis, and xeroderma pigmentosa. For some of these diseases, it is believed that they evolved because the genes gave a beneficial effect that outweighed the rare occurrence of the disease (e.g sickle cell and protection against malaria). However, all of these are examples of uncommon phenotypes, usually less than 1% of the relevant population (where it is most common) has those diseases. I'm interested in examples of mendelian inheritance where variation is common (all of the relevant phenotypes have, say, >5% frequency or something like that). Consider eye color. In European populations both blue and brown eyes are common phenotypes. Many people have probably experienced a typical high-school class of biology, where this was introduced as an example of mendelian inheritance with brown eyes being dominant and blue eyes being recessive. However, modern studies (for example, genome-wide association studies) show that things like eye, hair and skin color are all polygenic traits. See for example (Sulem, ..., Stefansson, 2007). Throughout the years I've heard of other traits that some said were examples of mendelian inheritance. Being able to roll the tongue, for example. This question was inspired by this recent question where absence of teeth (Hypodontia) was taught to the OP as an example of a dominant trait. The current answer says that hypodontia is probably an example of a polygenic trait that interacts with certain environmental effects. This got me wondering whether there even are examples of common mendelian variation. I don't quite know the best way to phrase it, but my question is the following: Are there any examples of human phenotypes that are due to mendelian inheritance where all of the phenotypes are common? Take eye color as an example again. 
By common, I mean that both blue eyes and brown eyes are common (over, say, 5% in for example Sweden). On the other hand, for example Tay-Sachs disease occurs only about 1 in 3,500 among Ashkenazi Jews where it is most common, according to Wikipedia. Way less than even 1%. Preferably I want it to be verified by modern genomic studies (such as GWAS) that it is an example of mendelian inheritance. If such an example does not exist, then I will accept any answer that provides modern, relevant scientific literature that explores this topic. For example, it might explain why we would not expect to see common variation due to mendelian inheritance. Answer: Earwax consistency is an example of a monogenic trait with dimorphic phenotypes in humans -- "wet" and "dry" -- where both phenotypes are globally common, though allele ratios are highly variable between human populations. The population genetics and molecular basis of this trait are discussed at length by Dr. John H. McDonald as part of a series on the myths of human genetics (archived version here). I reformat his discussion in my answer. In The dimorphism in human normal cerumen, Matsunaga summarizes 30 years of research into the earwax of Japanese families, with two major observations: Dry earwax parents never have wet earwax children, suggesting the genetics underlying the wet phenotype are dominant to those underlying the dry phenotype. Because the allele for wet earwax is relatively rare in the sampled families, it is likely that most wet earwax parents are heterozygotes. Following from this, if the trait is monogenic with two allele variants, we expect a 3:1 ratio of wet to dry children in wet × wet parent crosses, and a 1:1 ratio of wet to dry children wet × dry parent crosses. Indeed, this is observed. 
$$\begin{array}{c|c|c|} \text{parents} & \text{wet children} & \text{dry children} \\ \hline \text{w×w} & 35 & 12 \\ \hline \text{w×d} & 205 & 195 \\ \hline \text{d×d} & 0 & 634 \\ \hline \end{array}$$ These results were later corroborated in Native Americans -- Cerumen in American Indians: Genetic Implications of Sticky and Dry Types $$\begin{array}{c|c|c|} \text{parents} & \text{wet children} & \text{dry children} \\ \hline \text{w×w} & 32 & 6 \\ \hline \text{w×d} & 20 & 9 \\ \hline \text{d×d} & 0 & 42 \\ \hline \end{array}$$ And in a different Japanese cohort -- Distribution and inheritance of earwax types: a study on inhabitants in Awa District, Chiba Prefecture $$\begin{array}{c|c|c|} \text{parents} & \text{wet children} & \text{dry children} \\ \hline \text{w×w} & 27 & 3 \\ \hline \text{w×d} & 137 & 109 \\ \hline \text{d×d} & 0 & 345 \\ \hline \end{array}$$ In the early 2000s, the molecular basis of human earwax consistency was elucidated in three major publications. Toshida et al. mapped a single locus on chromosome 16 Yoshiura et al. identified a SNP at position 538 in the coding region of ABCC11. At this site, G corresponds to a wet earwax phenotype, whereas A leads to a dry earwax phenotype. A less common allele representing a deletion of 27 nucleotides in exon 29 also yields a dry phenotype. Toyoda et al. found that the 538G>A transition encodes arginine in place of glycine, resulting in a loss of glycosylation in the mutant protein, leading to deficient cellular localization, rapid degradation, and loss of function. To directly address your question, Are there any examples of human phenotypes that are due to mendelian inheritance where all of the phenotypes are common? 
I point you to the tables above, as well as Table 1 from Cerumen Phenotypes in Certain Populations of Eurasia and Africa, which shows that the frequency of the dry allele ranges from 59% to 89% across diverse human populations -- To conclude, human earwax consistency is a dimorphic trait controlled by a single gene, where both phenotypes are common across human populations.
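The 3:1 prediction for the w×w crosses can be checked against the pooled counts from the quoted tables; this back-of-the-envelope calculation is mine, not from the answer:

```python
# Pool the w x w offspring counts from the three studies quoted above
# and compare with the 3:1 wet-to-dry expectation via a chi-squared
# statistic (1 degree of freedom).
wet = 35 + 32 + 27
dry = 12 + 6 + 3
total = wet + dry

expected_wet, expected_dry = 0.75 * total, 0.25 * total
chi2 = ((wet - expected_wet) ** 2 / expected_wet
        + (dry - expected_dry) ** 2 / expected_dry)
print(total, round(chi2, 2))   # chi2 stays below the 3.84 cutoff at the 5% level
```

So the pooled data do not reject the simple one-gene, dominant-wet model (consistent with the heterozygote assumption stated in the answer).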
{ "domain": "biology.stackexchange", "id": 10516, "tags": "genetics, molecular-genetics, human-genetics, literature" }
How could a neutron star collapse into a black hole?
Question: White dwarfs usually do not collapse, as they have electron degeneracy pressure due to the Pauli exclusion principle. However, if one accretes mass beyond the Chandrasekhar limit, it is energetically favorable for the electrons to combine with protons and form neutrons. This gives us a neutron star. However, neutron stars usually do not collapse into black holes due to neutron degeneracy pressure. How is it possible that beyond the TOV (Tolman-Oppenheimer-Volkoff) limit, the Pauli exclusion principle no longer prevents the collapse? Shouldn't it still prevent neutrons, which are fermions, from being compressed together any further? I've seen answers involving quark stars, but those are purely hypothetical. What is the most accepted explanation for this? Answer: The scenario you describe may occur. On the other hand it may actually be that neutronisation in a white dwarf is the trigger for a thermonuclear type Ia supernova. You may be misunderstanding the Pauli Exclusion Principle (PEP). The PEP states that no two fermions can occupy the same quantum state, not that they cannot occupy the same space or be compressed to whatever density you like. The quantum states here consist of two spin states for every possible momentum state. In a degenerate gas, all these states are filled up to the Fermi energy. All that happens when the neutron star gets smaller (or collapses) is that the Fermi energy just keeps increasing as the neutron density climbs, and the neutron degeneracy pressure just keeps increasing as a consequence. However, in General Relativity, pressure (like mass/energy) is a source of gravitational curvature and actually increases the required pressure gradient needed to support the star. At a certain threshold radius - a small factor larger than the Schwarzschild radius - a point of instability is reached where increasing the pressure is actually counter-productive.
Beyond this, you can make the pressure as large as you like and it will not prevent the formation of a black hole. Even inside the BH there is not necessarily a problem with the PEP. You can compress fermions to infinite density so long as they can have infinite momentum.
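The "Fermi energy just keeps increasing" point can be made concrete with a short numeric sketch. This is not from the answer: it uses the non-relativistic Fermi-gas formula $E_F = \frac{\hbar^2}{2m_n}(3\pi^2 n)^{2/3}$, which is only indicative at these densities (neutrons become relativistic well before collapse), but it shows that nothing in the PEP caps the density.

```python
# Illustrative sketch (assumption: non-relativistic degenerate neutron gas).
# E_F = (hbar^2 / 2m) * (3*pi^2*n)^(2/3) never saturates: compressing the gas
# just raises E_F and the degeneracy pressure, so the PEP sets no density ceiling.
import math

HBAR = 1.054571817e-34   # reduced Planck constant, J*s
M_N = 1.67492749804e-27  # neutron mass, kg

def fermi_energy(n):
    """Fermi energy (J) of a degenerate neutron gas at number density n (m^-3)."""
    return HBAR**2 / (2 * M_N) * (3 * math.pi**2 * n) ** (2 / 3)

# Nuclear density ~ 2.3e17 kg/m^3 corresponds to n ~ 1.4e44 m^-3
for n in (1e44, 1e45, 1e46):
    print(f"n = {n:.0e} m^-3  ->  E_F = {fermi_energy(n) / 1.602e-13:.1f} MeV")

# Doubling the density multiplies E_F by 2^(2/3); there is no cap:
assert math.isclose(fermi_energy(2e44) / fermi_energy(1e44), 2 ** (2 / 3))
```

The instability described in the answer is therefore gravitational, not a failure of this pressure to grow.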
{ "domain": "astronomy.stackexchange", "id": 2572, "tags": "black-hole, general-relativity, stellar-evolution, neutron-star, degenerate-matter" }
Relative motion-Acceleration
Question: This is my first post here and I'm a complete beginner at this, so please excuse me if I'm asking too basic a question. This question is about the classical boat-and-river problem. Say a boat travels at 10 m/s in a water channel. The water speed relative to the ground is 0, so the boat travels at 10 m/s relative to the ground. Now suddenly, the water in the channel starts to flow at 10 m/s in the opposite direction (say this happens over 10 seconds, so the acceleration is 1 m/s^2). Since after a while the boat's speed relative to the ground becomes 0, from the ground-based observer's point of view the boat has undergone a deceleration. My question is: is this deceleration always necessarily equal to minus the water's acceleration? In other words, what is the velocity of the boat with respect to the ground an infinitesimal time dt after the water has started to accelerate? PS: What I'm trying to understand is what happens when an aircraft or watercraft gets hit by a gust or similar disturbance. Answer: "My question is: is this deceleration always necessarily equal to minus the water's acceleration?" The answer is no. Acceleration/deceleration is controlled by the fluid resistance $f$. Typically: $$f=kv\qquad \text{for low speed}\\ f=kv^2\qquad \text{for high speed}$$ where $v$ is the speed of the object (boat, airplane, car, ...) relative to the fluid and $k$ is a coefficient. Fluid resistance depends on (relative) speed, but for low speeds the dependence is linear, as shown. At almost zero (relative) speed there is almost no fluid resistance left, so the deceleration decreases with speed. The constant $k$ encapsulates the aerodynamics/streamlinedness of the object, in other words the geometry. A flat-bottomed boat will catch less water than a boat with a deeper, more vertically flat front. Also, the amount of the boat (the area) sticking below the surface (which depends on weight and load) determines this geometry.
$k$ furthermore contains other factors such as viscosity $\mu$ (the "thickness" of the fluid; fluid resistance of course depends on the type of fluid) and density $\rho$ (air density changes with pressure and therefore with altitude). Another thing to be aware of is that the fluid speed is not necessarily constant throughout the water stream. The surface speed can be very different from the speed at the edges or deeper in the water. This suddenly makes the value of $v$ very complicated, so complicated that it might vary over different parts of the boat, so that different parts experience different fluid resistances. All in all, fluid dynamics is not a simple discipline and usually relies on experimental and computational results rather than formulae and calculations. The descriptions and formulae in this answer give some guidelines of what to expect.
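A toy simulation makes both points concrete. The numbers and the model are assumed for illustration, not taken from the answer: a boat of mass $m$ with constant engine thrust tuned to hold 10 m/s through still water, linear drag $f = k\,v_{rel}$, and the water ramping from 0 to $-10$ m/s over 10 s as in the question.

```python
# Hypothetical sketch: the boat's deceleration at the instant the gust begins is
# nowhere near the water's 1 m/s^2; the boat only drifts toward 0 m/s over ground
# on the drag time scale m/k (here 10 s), driven by the *relative* speed.
m, k = 500.0, 50.0           # boat mass (kg) and drag coefficient (kg/s) -- assumed
thrust = k * 10.0            # thrust that holds 10 m/s relative to still water
dt = 0.01                    # Euler time step, s

v, t = 10.0, 0.0             # boat velocity over ground; starts at 10 m/s
while t < 120.0:
    v_w = -min(t, 10.0)      # water ramps 0 -> -10 m/s over the first 10 s
    v_rel = v - v_w          # boat speed relative to the water
    a = (thrust - k * v_rel) / m   # net acceleration: thrust minus linear drag
    if t == 0.0:
        a0 = a               # boat's acceleration the instant the gust begins
    v += a * dt
    t += dt

print("a at t=0:", a0, "m/s^2")   # essentially 0, not the water's -1 m/s^2
print("v at t=120 s:", v, "m/s")  # settles near 0 over ground (10 m/s vs. water)
```

So the answer's "no": the boat's deceleration is set by the drag on the relative flow, and only asymptotically does the ground-frame velocity reach 0.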
{ "domain": "physics.stackexchange", "id": 34828, "tags": "newtonian-mechanics, drag, relative-motion" }
Why does heat added to a system cause an increase in entropy that is independent of the amount of particles in the system?
Question: Say we have two gas containers of $N_{2}$ at the same temperature of $300~\text{K}$, one containing $10^{23}$ particles and the other containing $10^{13}$ particles. If we add a quantity of heat to both containers, and the volume of these containers remains constant, then the rise in entropy is given by $dS = \dfrac{dQ}{T}$. Why will both containers, with a different particle count, experience the same increase in entropy? I tried to prove that the entropy increase is equal by working out some equations but couldn't follow through, and also fell short of a conceptual explanation. My attempt at an equation: where volume is constant, $dQ = C_{V}\, dT$ and $C_{V} = \alpha N k_B$, where $N$ is the number of particles in the container and $k_B$ is the Boltzmann constant. Thus, $dS = \frac{\alpha N k_B\, dT}{T}$ and $\Delta S = \alpha N k_B \ln\left(\frac{T_{f}}{T_{i}}\right).$ My attempt was to show that two different $\Delta S$'s are really the same, assuming that the two processes have the same $T_{i}$ and $\alpha$. My attempt didn't produce anything meaningful. Both perspectives, one from an equation and one conceptual, appreciated! Answer: Why does heat added to a system cause an increase in entropy that is independent of the amount of particles in the system? Short answer: it doesn't. The systems won't end up with the same entropy. Your intuition is correct that the change in entropy depends on the number of particles. The reason why you can't just reason directly from $dS = \delta Q/T$ is that, first of all, the temperature is changing, and secondly, the temperature is changing at a different rate relative to the amount of energy added via heat, because the systems have different heat capacities $C_V$. That is, the same amount of heat added causes a different change in temperature, which makes $\Delta S$ different.
Calculation We will assume an ideal gas for these calculations (or, at least, that the specific heat at constant volume is independent of temperature). During an isochoric process, the change in entropy is given exactly by $$\Delta S = \int_i^f \frac{\delta Q}{T} = \int_{T_i}^{T_f} \frac{n\bar{c}_V}{T}dT = n\bar{c}_V\ln\left(\frac{T_f}{T_i}\right),$$ where $\bar{c}_V$ is the molar specific heat of the system. The final temperature is related to the initial temperature via $$ Q = n\bar{c}_V(T_f - T_i),$$ in which case we can write $$\Delta S = n\bar{c}_V\ln\left(1 + \frac{Q}{n\bar{c}_VT_i}\right).$$ It should be pretty clear that this expression definitely depends on the value of $n$, and therefore the change in entropy will be different for the two cases.
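A quick numeric check of this formula with the question's two particle counts. The heat input $Q = 10~\mathrm{J}$ is an arbitrary assumed value, and $\bar{c}_V = \frac{5}{2}R$ is used for diatomic $\mathrm{N}_2$:

```python
# Evaluate dS = n*cV*ln(1 + Q/(n*cV*Ti)) for the two containers in the question.
import math

R = 8.314          # gas constant, J/(mol K)
N_A = 6.022e23     # Avogadro's number, 1/mol
c_V = 2.5 * R      # molar specific heat at constant volume, diatomic ideal gas
T_i = 300.0        # initial temperature, K
Q = 10.0           # heat added to each container, J (assumed value)

def delta_S(N_particles):
    n = N_particles / N_A
    return n * c_V * math.log(1 + Q / (n * c_V * T_i))

dS_big = delta_S(1e23)
dS_small = delta_S(1e13)
print(f"1e23 particles: dS = {dS_big:.4g} J/K")    # close to Q/T_i = 0.0333 J/K
print(f"1e13 particles: dS = {dS_small:.4g} J/K")  # orders of magnitude smaller
```

The large container barely warms, so its $\Delta S$ is nearly $Q/T_i$; the tiny container's temperature shoots up, and its $\Delta S$ is vastly smaller, confirming that $\Delta S$ depends on $n$.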
{ "domain": "physics.stackexchange", "id": 26263, "tags": "thermodynamics, temperature, entropy" }
How do I know which equations can be treated as differential equations and which can't?
Question: I'm sometimes mystified by the use of differentials in physics. I don't understand which formulas, on which occasions, can be thought of as differential equations and which cannot. While discussing work done by a piston during an isothermal process, my textbook does not treat $PV=nRT$ as a differential equation. Let me illustrate how $W$ is derived: $$W=\int_{V_1}^{V_2}P\mathrm{d}V=\int_{V_1}^{V_2}\frac{nRT}{V}\mathrm{d}V=nRT\ln\frac{V_2}{V_1}$$ My question is, why could I not treat the ideal gas law as a differential equation and say $P\mathrm{d}V=nR\mathrm{d}T$? If I could, I'd then say: $$W=\int_{V_1}^{V_2}P\mathrm{d}V=\int_{T_1}^{T_1}nR\mathrm{d}T=0$$ since $\mathrm{d}T=0$ during an isothermal process. The result is erroneous, but why can I not argue this way? Why can't $PV=nRT$ be treated as a differential equation? How do I know which equations can be treated as such and which can't? Answer: $$PV=nRT\tag{1}$$ cannot be considered a differential equation, simply because it contains no differentials. Now, you can't just go and differentiate that equation as $$P\mathrm{d}V=nR\mathrm{d}T$$ because $V=f(n,P,T)$ and $T=g(n,P,V)$, where $f$ and $g$ are multi-variable functions; to differentiate them you would have to use partial derivatives ($\partial$). Differentiating $(1)$ properly, you need to apply the product rule: $$\mathrm{d}(PV)=\mathrm{d}(nRT)$$ Assuming $n=\text{constant}$: $$P\mathrm{d}V+V\mathrm{d}P=nR\mathrm{d}T$$ But to derive the work done in an isothermal expansion/compression we simply use the general definition of work: $$\mathrm{d}W=F(x)\mathrm{d}x$$ It's easy to show that for a piston $F(x)\mathrm{d}x=P\mathrm{d}V$, so: $$\mathrm{d}W=P\mathrm{d}V$$ Then extract $P$ from the Ideal Gas Law. Regarding your title question: differential equations (DEs) typically arise to describe dynamic problems, where change, often (but not exclusively) in time, occurs. Let's take a simple example. A mass $m$ sits on a rough incline, motionless.
This is a static problem and requires no DEs. Now we apply sufficient force on the mass for it to start moving. Newton's Second Law now states: $$F_{net}=ma$$ or: $$F_{net}=m\frac{\mathrm{d}v}{\mathrm{d}t}$$ This is of course a DE which allows us to calculate the rate of change of the velocity $v$.
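Going back to the isothermal piston: a numeric sanity check (example values assumed: 1 mol of ideal gas at 300 K expanding from 0.01 m^3 to 0.02 m^3) confirms that integrating $P\,\mathrm{d}V$ along the isotherm reproduces $nRT\ln(V_2/V_1)$, not the zero predicted by the naive $P\,\mathrm{d}V = nR\,\mathrm{d}T$ route:

```python
# Compare numerical integration of P dV along an isotherm with nRT*ln(V2/V1).
import math

R = 8.314                  # J/(mol K)
n, T = 1.0, 300.0          # 1 mol at 300 K (assumed example values)
V1, V2 = 0.01, 0.02        # m^3

def P(V):
    return n * R * T / V   # ideal gas law with T held fixed on the isotherm

# Trapezoid-rule integration of P dV
steps = 100_000
dV = (V2 - V1) / steps
W_num = sum(0.5 * (P(V1 + i * dV) + P(V1 + (i + 1) * dV)) * dV
            for i in range(steps))
W_exact = n * R * T * math.log(V2 / V1)

print(W_num, W_exact)      # the two agree, and neither is zero

# The correct differential of PV = nRT is P dV + V dP = nR dT. Along the
# isotherm dT = 0, so P dV = -V dP: the two terms cancel, which is exactly
# why dropping the V dP term (the naive route) gives the wrong answer.
```

The work comes out around 1.7 kJ, clearly nonzero, showing concretely that the ideal gas law constrains the path but is not itself the differential relation to integrate.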
{ "domain": "physics.stackexchange", "id": 76277, "tags": "soft-question, mathematics, calculus" }