Interval scheduling problem with priorities
Question: I have a problem that is similar to the interval scheduling algorithm but it involves priorities. My data sets consist of the following data: Cars with the start and end time of parking, along with one or more attributes (e.g. electric vehicle, motorcycle, handicapped). Parking spots along with zero or more attributes and a lot number. Attributes with their priorities. For example, if the property handicapped is given a value of 1, cars that have that attribute should be assigned a parking spot first. Attributes are hard constraints; the priorities of the attributes determine the order of assignment. There is no overnight parking, so I have divided the data into buckets of days. Start and end times are in increments of 5 minutes (not sure if this is important). To be considered a valid assignment, a car's attributes must be a subset of the attributes for the assigned spot. See examples below. Objectives This problem comes from overhauling an existing algorithm which, after observing how users interact with the system, could definitely use improvement. My first step is to get something going that can produce one or more possible solutions that satisfy all of the provided attribute constraints. For example, a limo cannot be assigned to a motorcycle spot. There may not be a complete solution given the inputs: if there are 5 electric vehicles but only 4 spots, the algorithm should still try to assign 4 of them (the 4 that have the highest priority). Given multiple solutions, the "best" solution would minimize the number of open lots at any given time (ideally all the cars parked in the same lot). Even if it is only a small block of time in the middle of the day, the lot can still be closed to minimize the cost of security guards. 
Example input/output Set 1 Attributes: [bus: -1; electric: -2; handicapped: -2] Cars: [C1: bus, electric; C2: handicapped; C3: electric] Spots: [P1: bus, electric; P2: bus, electric; P3: electric, handicapped] Valid assignments: [C1-P1, C2-P3, C3-P2] and [C1-P2, C2-P3, C3-P1] Set 2 Attributes: [bus: -1; electric: -2; handicapped: -2] Cars: [C1: bus, handicapped; C2: bus; C3: electric] Spots: [P1: bus, handicapped, electric; P2: bus, electric; P3: handicapped] Valid assignments: [C1-P1, C2-P2, C3-null] Spot 1 is the only spot that can accommodate car 1. Both cars 2 and 3 can take spot 2, but priority is given to the bus, leaving car 3 unassigned. Set 3 Attributes: [bus: -1; electric: -2; handicapped: -2] Cars: [C1: bus, electric; C2: electric; C3: bus] Spots: [P1: bus, electric; P2: electric; P3: electric, handicapped] Valid assignments: [C1-P1, C2-P2, C3-null] or [C1-P1, C2-P3, C3-null] There are two buses but only one bus parking spot. Since C1 has a greater priority sum, it is assigned to the available spot even though C3 could have taken it. Verifying a solution For each assigned car A, if any, verify that the spot P it has been assigned to has all of A's attributes. In other words, Attributes(A) is a subset of Attributes(P). For each unassigned car B, let X be the set of spots in the input data that meet the car's attribute criteria. If one or more spots in X are unassigned, abort these steps and mark the solution as invalid. If one or more cars assigned to spots in X have a greater maximum priority than MaxPriority(B), abort these steps and mark the solution as invalid. Let Z be the subset of cars assigned to spots in X where the maximum priority of the car = MaxPriority(B). If one or more cars in Z have a greater priority sum than SumPriority(B), abort these steps and mark the solution as invalid. What I have tried Find all the valid parking spots for each car. Sort each parking spot list in ascending order of the sum of priorities for the parking spot. 
Sort the list of cars in descending order of sum of priorities. Attempt to assign each car in order of the sorted parking spots. If the spot is taken for that time then try the next one and so on. I am hoping to make this more efficient by taking into account the interval for each car, as it currently isn't being taken into account when sorting. I stumbled upon Google Optimization Tools and it looks similar to the nurse scheduling problem but with more constraints. A key difference is that each shift in the NSP is defined whereas the intervals in my problem can partially overlap. Questions How can I model the problem? Are tools like Google OR-Tools or pyschedule appropriate for solving this? Answer: The problem is probably fairly hard. One approach is to formulate it as an instance of integer linear programming. Divide the time period into short time segments. Let $x_{i,j,t}$ be a zero-or-one variable, with the intended meaning that car $i$ is assigned to parking spot $j$ at time segment $t$. Also let $y_i$ be a zero-or-one variable, with the intended meaning that car $i$ is assigned a parking spot (somewhere). Then you can express each of your requirements as a set of linear inequalities on these $x$'s: If car $i$ needs a parking spot for the time window $t_0..t_1$, then add the constraint $\sum_j x_{i,j,t_0}=y_i$, to require that it is assigned exactly one slot if $y_i$ says it should be. Here the sum is over all parking spots $j$ that are compatible with car $i$ (given their attributes). Also, add the constraint $x_{i,j,t_0} = x_{i,j,t_0+1} = \cdots = x_{i,j,t_1}$ for all $j$, to indicate that if car $i$ is assigned to parking spot $j$, then it should be there for its entire time window. To take into account that you can't have two cars parking in the same spot at the same time, add the constraint $\sum_i x_{i,j,t} \le 1$ for all $j,t$. 
To take into account the priorities, if cars $i,i'$ are both vying for a parking spot at the same time $t$, and if $i$ has higher priority than $i'$ for parking spot $j$, then add the constraint $x_{i,j,t} \ge x_{i',j,t}$. (Add this for all times $t$ in the intersection of their time windows.) Finally, maximize the objective function $\sum_i y_i$. This is a system of linear inequalities, with a linear objective function, so it can be solved using an off-the-shelf integer linear programming (ILP) solver. Keep in mind that solving ILP can take exponential time in the worst case, so on large problems, it's possible that the ILP solver might take a very long time. However, the hope is that if your problem is not too large, then an ILP solver might be able to find a good solution in a reasonable amount of time.
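For instances as small as the example sets, the hard constraints above can be cross-checked by brute force before investing in an ILP model. The sketch below (plain Python; `compatible` and `best_assignment` are hypothetical helper names, and all cars are assumed to overlap in time so each spot holds at most one car) enumerates injective car-to-spot mappings and keeps one that maximizes the number of parked cars, i.e. the objective $\sum_i y_i$; the priority tie-breaking rules and the open-lot objective are deliberately left out:

```python
from itertools import permutations

def compatible(car_attrs, spot_attrs):
    # Hard constraint from the question: Attributes(car) must be a
    # subset of Attributes(spot).
    return car_attrs <= spot_attrs

def best_assignment(cars, spots):
    """Return a dict car -> spot maximizing the number of parked cars.

    cars, spots: dict name -> set of attributes. All intervals are
    assumed to overlap, so a spot can hold at most one car.
    """
    names, spot_names = list(cars), list(spots)
    # Pad with None so cars can stay unassigned when spots run out.
    padded = spot_names + [None] * max(0, len(names) - len(spot_names))
    best, best_count = {}, -1
    for perm in permutations(padded, len(names)):
        assign = {car: spot for car, spot in zip(names, perm)
                  if spot is not None and compatible(cars[car], spots[spot])}
        if len(assign) > best_count:
            best, best_count = assign, len(assign)
    return best
```

With the cars and spots entered in the order shown in example Set 2, this returns C1-P1 and C2-P2 with C3 unassigned, matching the expected output; an ILP solver replaces this factorial enumeration at realistic sizes.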
{ "domain": "cs.stackexchange", "id": 11138, "tags": "algorithms, optimization, scheduling, integer-programming" }
Are there plans to detect life on Earth from the outer solar system?
Question: This has been kicking around in my head for a while. We've been detecting planets for decades by observing regular dips in starlight from many light years away as a planet transits its host star. I've often wondered whether we have considered staring at our own planet the same way to see if we can get our "Eureka! We found life!" moment. After reading Carl Sagan detected life on Earth 30 years ago—here's how his experiment is helping us search for alien species today, I started searching online for more experiments aimed at detecting life on Earth. I came across Hubble Makes the First Observation of a Total Lunar Eclipse By a Space Telescope, where we were able to detect ozone in Earth's atmosphere. So, the Galileo spacecraft detected life as it flew by Earth thousands of miles away, and Hubble stared at the moon during an eclipse, but that's a bit like knocking on the door to see if someone is alive, cosmologically speaking. Detecting those same signatures from trillions of miles away is such a shockingly different order of magnitude that it renders those experiments fun and informative, but not small-scale replicas of how we typically find exoplanets. Are there plans to detect life on Earth the same way we think we can detect life on exoplanets, but on a much smaller scale? I'm imagining something at Neptune-ish distances as a test of how well these biosignatures can be detected at longer distances for an Earth-sized planet. (Is this even a worthwhile experiment, given the logistics mentioned in Space telescope located in outer solar system?) I also realize I'm asking to build a telescope and send it 2.8 billion miles away just to snap a selfie, but we've already spent quite a bit of money building observatories to detect transiting exoplanets. I would be curious to know whether these signatures are even detectable from light hours away, much less many light years away. 
Answer: Somebody can perhaps fill in the details, but both the Galileo and OSIRIS-REx spacecraft analysed light received from the Earth and looked for the spectroscopic signatures of carbon dioxide, oxygen, methane and ozone. See for example here. However, these were data obtained from relatively close to the Earth. Plans to observe the Earth in almost exactly the way we might be able to analyse exoplanets around other stars are being discussed. For example, Mayorga et al. (2021) argue that it is essential that we send a spacecraft beyond the Earth-Sun L2 point so that it can look back and examine the Earth's atmosphere using transit spectroscopy in exactly the same way that JWST is doing for exoplanets. A pointer to the kind of science we are talking about comes from the observations of Venus as a transiting planet that were made (from Earth) in 2004 and 2012 (e.g., Hedelt et al. 2011; Ehrenreich et al. 2012; Chiavassa et al. 2015), though there has not been as much work on this as I imagined.
{ "domain": "astronomy.stackexchange", "id": 7150, "tags": "observational-astronomy, telescope, exoplanet, astrobiology" }
Why is there no color shift on the photo of the M87 black hole?
Question: Last year, the first photo of a black hole in Messier 87 was published: (Source: EHT) It is quite obvious that about the lower half of the accretion disk is brighter. This question (or rather, the answers) explains that this is caused by Doppler beaming. Since Doppler beaming arises partly from the Doppler effect, I would expect to see a blueshift in the brighter areas and a redshift in the fainter areas, similar to what the original rendering of the black hole from the movie "Interstellar" looked like: (Source) So why is this not the case for the photo of the M87 black hole? Answer: The picture isn't a "colour" picture - it is monochrome, i.e. it is obtained at a single microwave wavelength of 1.3 mm, and so not at any wavelength you could see (Akiyama et al. 2019). There is therefore no spectral information that would reveal the expected Doppler effect. Any difference of colour in the "false-colour picture" is purely related to the intensity of the emission, not its wavelength. If one were able to obtain coverage at multiple wavelengths, then you might expect a "bluer" (i.e. shorter average wavelength) colour to be associated with the brightest regions.
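For intuition, the brightness asymmetry can be reproduced with the special-relativistic Doppler factor alone. The numbers below are illustrative, not fitted M87 parameters, and `doppler_factor` is a hypothetical helper:

```python
import math

def doppler_factor(beta, theta):
    """delta = 1 / (gamma * (1 - beta*cos(theta))), with beta = v/c and
    theta the angle between the gas velocity and the line of sight."""
    gamma = 1.0 / math.sqrt(1.0 - beta ** 2)
    return 1.0 / (gamma * (1.0 - beta * math.cos(theta)))

# Gas at half the speed of light, approaching vs. receding from us:
d_app = doppler_factor(0.5, 0.0)      # ~1.73: blueshifted and boosted
d_rec = doppler_factor(0.5, math.pi)  # ~0.58: redshifted and dimmed
```

Beamed intensity scales roughly as $\delta^{3}$, so this toy case gives an approaching/receding brightness ratio of about $3^3 = 27$, yet a single-wavelength image records none of the accompanying frequency shift, which is exactly the point of the answer.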
{ "domain": "astronomy.stackexchange", "id": 5120, "tags": "black-hole, photography, supermassive-black-hole, doppler-effect, m87" }
Celestial Time-Keeping and Navigation
Question: Say, for sake of argument, someone was randomly transported in time and space. Would it be possible for them to determine their location on Earth and the current time using just observations of the stars? They have no charts or equipment (not even anything to write with, so all calculations must be done in their head), but have a working knowledge of constellations and the location of other celestial bodies. Their tolerance for accuracy is pretty wide: The location can be as vague as "the north-west of the North American coast", "central Australia", etc. (other things, such as geographical landmarks and seasonal conditions, can refine this, but we're not interested in that); The time can be +/-100 years. (If it's possible to do better with mental calculations, then all well and good!) If this is possible, could you outline the procedure for doing so? Answer: Century First, you'd have to watch through a night to see if Polaris wobbles - currently, the radius is about 1°, I think, but that changes with precession (and nutation, but that's small enough to ignore). Once you know that, you can try to find a point in the sky that stays still all the time (as Polaris nearly does in our time). This is the celestial pole, the direction Earth's axis points to. It moves in a circle with a ~23° radius (= the obliquity of the ecliptic, currently ~23.4°). The center of the circle is roughly between Polaris and Vega, which makes it easy to find. Then, you have to see whether Polaris is approaching it or receding from it on its way along the "precession circle". If you can measure the angle $\alpha$ it has travelled along this circle, you can estimate which century you have been transported to: $year = 2000 + \alpha \frac {26000} {360}$ If you take exact values and do all the calculations, and especially the measurements, very precisely, you might get more than just the century. 
If you really want to prepare, print this and pin it somewhere you will see it every day: https://en.wikipedia.org/wiki/File:Precession_N.gif Beware that if you are transported more than 13,000 years back or forth, you will guess wrong. In this case, you'll have to take the proper motion of the stars into account - you'll have to know not only the sky but also the motion of the stars (or at least a few) very well, which is the first thing that is really hard to do. Maybe you'd have to invent something with sticks for the measurements. Year I guess you need to know the planets' positions (especially Jupiter, because it's bright and not too fast) at the time you are transported to in order to derive the year - but you'd have to be a walking ephemeris for this. Time of the year The time of the year is actually much easier than the year - most of the time, you can even feel it. If you still need to calculate it, it is a direct function of the position of the sun in the sky: You need to know the orientation of the ecliptic in the sky and the equinox of our time - in Pisces; you can see it in the image in that Wikipedia link: the intersection of "0°" and the ecliptic. You measure the time in hours $h$ that passes from when either the spring or the fall equinox point (of our time) is at midheaven until the sun is at midheaven (noon). The current date of the year, measured (roughly) from January to December in values from 0 to 1, is: $ y = h/24-0.2+\frac \alpha {360}$ If you took the fall equinox, you have to add or subtract 0.5, whichever fits. Without the $-0.2$, it would be measured from spring equinox (~March 20th) to spring equinox. Geographical latitude The altitude of the celestial pole mentioned above over the ground (as long as it's horizontal) directly gives the geographical latitude: $\phi$ equals the pole's angle above the horizon. Geographical longitude Sorry, that's the thing I could not do.
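The century estimate above is simple enough to sanity-check numerically; `estimated_year` is a hypothetical name, and the ~26,000-year precession period is the round figure used in the answer:

```python
def estimated_year(alpha_deg):
    """Year implied by the angle alpha (in degrees) that the celestial
    pole has travelled along the precession circle since its position
    in the year 2000; positive alpha means forward in time."""
    return 2000 + alpha_deg * 26000 / 360

# No measured motion means you are (roughly) still in our era; a full
# 360 degrees puts you one whole precession cycle (~26,000 years) away.
```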
{ "domain": "physics.stackexchange", "id": 8937, "tags": "astronomy, soft-question, experimental-physics, celestial-mechanics" }
How do Precipitation Reactions behave in the Absence of Gravity?
Question: How do precipitation reactions behave in the absence of gravity, say on the International Space Station (ISS)? I have seen water taking the shape of a sphere, and not that of the container, in space due to its surface tension. It takes a spherical shape in order to minimize the energy due to forces of surface tension. In space all directions are equivalent and hence there is no notion of up or down. In precipitation reactions, generally, the precipitate formed either floats on the surface or settles down. I know the solid particles will be formed in space due to the chemical reaction between the reactants. What will happen to the solid particles (precipitate): will they concentrate at the centre of the sphere of the reaction mixture (solution), will they come to the surface, will they remain suspended, or is there any other possibility? Has such an experiment been done on the ISS to date? For reference, this is how water looks in the absence of gravity: Please note: Water is coloured to enhance visibility Answer: In a gravity-free environment, all directions are equivalent except in the very vicinity of the surface(*), where only a few water molecules lie between a particle and the surface. Therefore, in the bulk volume, the precipitate would stay where it is, unless some currents exist for whatever reason. At the very surface, it depends on the nature of the precipitate. If it is rather hydrophilic, water molecules would tend to surround it, so it would be pulled from the surface inwards. If it is rather hydrophobic, water molecules would tend to expel it, so it would be pushed outwards to the surface. If it is far enough (tens of molecules) from the surface, all effects become spherically symmetric, except one.... (*) ..... in the very, very long term, a precipitate denser than water would become more concentrated near the centre by the "bubble's" own gravity, and vice versa. I am not aware of such an ISS experiment, and to me it makes little sense to do one: in the microgravity context, it would be a very long-term experiment.
{ "domain": "chemistry.stackexchange", "id": 12724, "tags": "precipitation" }
multiple turtlebots on wireless network
Question: I have 10 turtlebots (version 1, the white one) that I'm trying to operate on a wifi network (g wireless standard). Any three of them together work fine, but as soon as I add a fourth, the turtlebot_node starts throwing lots of error messages about not being able to read data from the iCreate properly, and the system grinds to a halt. I switched it over to a wired network and got all 10 running great with a peak network bandwidth of 9KBps, which isn't really that much. Is there something specific to the turtlebot_node that struggles with message latency or wifi in general? Right now my alternatives are: rewrite the turtlebot_node to make it more tolerant of infrequent messages; combine all the nodes on each turtlebot using nodelets; reorganize the system as a multi-master network, with one master on each turtlebot. Originally posted by andrewlybarger on ROS Answers with karma: 1 on 2013-06-20 Post score: 0 Original comments Comment by Rafael on 2013-06-24: Hi, how are they running in the same network? Are they running independently or can they share topics? Comment by andrewlybarger on 2013-06-26: I'm quite certain that it is possible to run 4 or more turtlebots together; the question is whether the stock turtlebot_node needs to be modified, or the network/ros configuration. Right now all of the turtlebots are using a single ros master, which is running on my desktop machine. Comment by Rafael on 2013-06-26: Thanks Andrew, I have a question set up regarding how to do that single-ros-master multi-robot config. I would be very grateful if you could please take a look at it and give me an idea of how you got this to work. I tried it with namespaces following a suggestion here, but it has been a dead end. Answer: I suggest that you debug a little bit deeper into the root cause. I know of demos with 4 or more TurtleBots on the same wireless network. Originally posted by tfoote with karma: 58457 on 2013-06-20 This answer was ACCEPTED on the original site Post score: 0
{ "domain": "robotics.stackexchange", "id": 14646, "tags": "ros, wireless, wifi, turtlebot" }
Calculating the dissimilarity between term frequency vectors
Question: Given that a document is an object represented by what is called a term frequency vector, how can we calculate the dissimilarity between term frequency vectors? Answer: There are several ways to measure the relationship between vector representations in NLP, such as the cosine distance (which you can apply as a quick proof of concept) or the L2 (Euclidean) distance; both quantify how far apart the vectors are in the vector space they lie in. Nevertheless, to associate geometric distance with semantic similarity, it is interesting to apply word embeddings, with which you get lower-dimensional vectors learned directly from your data.
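As a concrete starting point, here is a minimal stdlib sketch of cosine dissimilarity (1 minus cosine similarity) between two term-frequency vectors; the function name and the convention of returning 1.0 for an all-zero vector are choices of this sketch, not a standard:

```python
import math

def cosine_dissimilarity(u, v):
    """1 - cos(u, v) for equal-length term-frequency vectors u, v."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    if norm_u == 0 or norm_v == 0:
        return 1.0  # convention: an empty document matches nothing
    return 1.0 - dot / (norm_u * norm_v)
```

Documents with proportional term counts get dissimilarity 0 (cosine ignores document length), while documents sharing no terms get 1.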
{ "domain": "datascience.stackexchange", "id": 10172, "tags": "nlp, data-mining" }
Why do we need to convert revolutions into radians before we multiply with meters?
Question: When we are calculating tangential acceleration, I notice that we have to convert the $\alpha$ that was given in units of rev/min^2 into units of rad/min^2 before we are able to multiply this value by a value in units of $m$, such as in the following picture: Why is this unit conversion necessary? Why can't we multiply rev/min^2 by $m$? Answer: It's because the circumference of a circle with radius $r$ is $2\pi r$. It's easier to see with velocities, so I'll stick to those for now. If you want to know the tangential velocity of a particle rotating around some point, you have to divide the distance it travels by the time it takes to do that. At $1\,\mathrm{rpm}$, the distance traveled in $1\,\mathrm{min}$ is exactly the circumference, which is $2\pi r$. At $2\,\mathrm{rpm}$, we travel around the circumference twice per minute, so $4\pi r$. With units, and angular velocity $\omega$, that is $$ v_\mathrm{T}~\mathrm{[m/min]} = \omega~\mathrm{[rev/min]} \cdot 2\pi\,r~\mathrm{[m]} $$ So the $2\pi$ really comes from calculating the circumference using the radius. Radians already have this feature contained in their definition. The distance traveled by something having moved $1\,\mathrm{rad}$ along a circle of radius $r$ is exactly $r$ meters. That makes using radians often very convenient. For the acceleration it's a bit harder to visualize, but the principle is exactly the same.
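The whole conversion is one multiplicative factor of 2π rad/rev; a tiny sketch (the function name is made up for illustration):

```python
import math

REV_TO_RAD = 2 * math.pi  # one revolution sweeps an angle of 2*pi radians

def tangential_acceleration(alpha_rev_per_min2, radius_m):
    """a_t = alpha * r in m/min^2, after converting alpha from
    rev/min^2 to rad/min^2; skipping the 2*pi factor would silently
    understate the answer by a factor of ~6.28."""
    return (alpha_rev_per_min2 * REV_TO_RAD) * radius_m
```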
{ "domain": "physics.stackexchange", "id": 79155, "tags": "newtonian-mechanics, rotational-kinematics, dimensional-analysis, units" }
Electrolysis of Anhydrous Sodium Hypochlorite
Question: Considering liquid anhydrous sodium hypochlorite: $$\ce{NaClO (l)}$$ and assuming the ions are free, what would be produced if this were electrolysed? I've only found information for its hydrated counterpart (a.k.a. bleach). And while I know the half equation for sodium: $$\ce{Na+ + e- -> Na}$$ I cannot find any half equation for the hypochlorite ion, $\ce{ClO-}$, apart from in acid, or even what the product would be (maybe $\ce{Cl2O}$ or $\ce{Cl2O2}$?) Answer: Obviously, sodium does not have a choice other than getting reduced. Oxidising it to sodium(II) is not an easy feat, considering that the electron would have to be extracted from a core orbital. That means the hypochlorite must be oxidised. Usually, an oxidation of hypochlorite will yield chlorate, $\ce{ClO3-}$. If we attempt to form a redox half-reaction out of that, we can only use $\ce{ClO-}$ as a charge balancing agent: $$\begin{align}\ce{ClO- \phantom{\ce{+ 4 ClO-}} &-> ClO3- + 4e-}\tag{Ox1}\\ \ce{ClO- + 4 ClO- &-> ClO3- + 4 e-}\tag{Ox2}\end{align}$$ If we do this, we have four extraneous chlorine atoms and two extraneous oxygen atoms. Thus, it makes most sense to do the mass balance by adding $\ce{Cl2O}$: $$\ce{ClO- + 4 ClO- -> ClO3- + 4 e- + 2 Cl2O}\tag{Ox3}$$ However, it will be difficult to confirm this. Sodium hypochlorite is not stable, especially not at elevated temperatures, especially not in pure form, especially not anhydrous sodium hypochlorite (sensing a pattern?). You would need a certain amount of recklessness to attempt this and sufficient luck to be able to report the result. That is possibly why I cannot find any scientific publication on the topic; so take the equation with two grains (or more) of salt.
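Since this kind of hand-balancing is easy to get wrong, the arithmetic behind (Ox3) can be double-checked mechanically; this is pure bookkeeping of atoms and charge on each side, not chemistry software:

```python
# 5 ClO- -> ClO3- + 4 e- + 2 Cl2O   (the combined half-reaction Ox3)
left = {"Cl": 5, "O": 5, "charge": 5 * (-1)}
right = {
    "Cl": 1 + 2 * 2,          # one ClO3- plus two Cl2O
    "O": 3 + 2 * 1,
    "charge": -1 + 4 * (-1),  # one ClO3- plus four electrons
}
balanced = left == right  # mass and charge both balance
```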
{ "domain": "chemistry.stackexchange", "id": 6841, "tags": "electrochemistry, ionic-compounds, electrolysis" }
start position is off the global costmap? Navigation
Question: Hi guys, I'm completely lost using the navigation stack. I have created a launch file: <launch> <master auto="start"/> <!-- Run the map server --> <node name="map_server" pkg="map_server" type="map_server" args="$(find navigation_for_segbot)/maps/bwi_test_world.yaml"/> <!-- Run AMCL --> <include file="$(find amcl)/examples/amcl_omni.launch" /> <node pkg="move_base" type="move_base" respawn="false" name="move_base" output="screen"> <rosparam file="$(find navigation_for_segbot)/params/local_costmap_params.yaml" command="load" /> <rosparam file="$(find navigation_for_segbot)/params/global_costmap_params.yaml" command="load" /> <rosparam file="$(find navigation_for_segbot)/params/base_local_planner_params.yaml" command="load" /> </node> </launch> Then I tried to send a goal, but the robot just rotates around without actually moving anywhere: [ INFO] [1370824628.007088563, 316.393000000]: Subscribed to Topics: [ INFO] [1370824628.233852157, 316.545000000]: MAP SIZE: 0, 200 [ INFO] [1370824628.238549672, 316.549000000]: Subscribed to Topics: [ INFO] [1370824628.449820351, 316.686000000]: Sim period is set to 0.05 [ WARN] [1370824628.463985213, 316.696000000]: Trajectory Rollout planner initialized with param meter_scoring not set. Set it to true to make your settins robust against changes of costmap resolution. [ WARN] [1370824381.338229900, 95.549000000]: The robot's start position is off the global costmap. Planning will always fail, are you sure the robot has been properly localized? I believe the last line above is the error. But why? Originally posted by Gazer on ROS Answers with karma: 146 on 2013-06-09 Post score: 0 Answer: It seems to me that your map is not loaded correctly. MAP SIZE is 0,200, so this obviously leads to the robot being off the costmap, as the costmap's dimension is 0 in the x-direction. Are width and height set correctly in the bwi_test_world.yaml? 
Originally posted by mgruhler with karma: 12390 on 2013-06-11 This answer was ACCEPTED on the original site Post score: 2
{ "domain": "robotics.stackexchange", "id": 14484, "tags": "ros" }
Form of affinely parametrized geodesic equation under an arbitrary coordinate transformation?
Question: This question is based off of Chapter 3 of Hobson M.P., G. P. Efstathiou, A. N. Lasenby, General Relativity: An Introduction for Physicists (2006). The exact question is Q3.15. Not for homework, though - I'm studying for exams. Consider an affinely parameterised geodesic $x^a(u)$ with affine parameter $u$. The geodesic satisfies: $$ \frac{d^2x^a}{du^2} + \Gamma^{a}_{bc}\frac{dx^b}{du}\frac{dx^c}{du} = 0$$ Under an arbitrary coordinate transformation $x^a \rightarrow x'^a$, the form of the equation should be unchanged. How do I show this? I am a little mixed up: does an arbitrary coordinate transformation just entail a change of parameter, e.g. $u \rightarrow u'$, or is the parameter changed to the new coordinate, e.g. $x^a(u) \rightarrow x'^a(x^a)$? How do the coordinate vectors (e.g. $e^a$) change under a transformation? Some thoughts I had on this: If a coordinate transformation were just a parameter change, could I just represent the derivatives as: $$ \frac{dx^a}{du} \rightarrow \frac{dx^a}{du}\frac{du}{du'}$$ And the connection changes to: $$\Gamma^{a}_{bc} \rightarrow \Gamma'^{a}_{bc}=\frac{\partial x'^a }{\partial x^d }\frac{\partial x^f }{\partial x'^b }\frac{\partial x^g }{\partial x'^c } \Gamma^{d}_{fg}+\frac{\partial x'^a }{\partial x^d }\frac{\partial^2 x^d }{\partial x'^c \partial x'^b }$$ and then insert into the differential geodesic equation and evaluate? Or do I need to specify a general coordinate transformation equation? e.g. a linear transformation $x'^1= \alpha x^1 + \beta$ and then insert this into the original geodesic differential equation? A quick explanation of the form of a geodesic $x^a(u)$ (i.e. is this similar to an equation of a line like $y=mx+c$) and what happens under arbitrary coordinate transformations in general, would be very useful in understanding this. Any help is greatly appreciated. 
Answer: The geodesic equation is $$ \frac{ d^2 x^\lambda }{ d\tau^2} + \Gamma^\lambda_{\mu\nu}(x) \frac{ d x^\mu }{ d \tau} \frac{ d x^\nu }{ d \tau} = 0 ~. $$ Under $x^\mu \to x'^\mu(x)$, we note that the Christoffel symbol does not transform like a tensor. Rather, $$ \Gamma'^\lambda_{\mu\nu}(x') =\frac{ \partial x^\alpha}{ \partial x'^\mu} \frac{ \partial x^\beta}{ \partial x'^\nu} \left[ \frac{\partial x'^\lambda}{\partial x^\rho} \Gamma^\rho_{\alpha\beta}(x) - \frac{ \partial^2 x'^\lambda }{ \partial x^\alpha \partial x^\beta } \right] $$ We also have \begin{align} \frac{d x'^\lambda }{ d\tau} &= \frac{ \partial x'^\lambda }{ \partial x^\rho } \frac{ d x^\rho }{ d\tau}~, \\ \qquad \frac{d^2 x'^\lambda }{ d\tau^2} &= \frac{d}{d\tau} \left( \frac{ \partial x'^\lambda }{ \partial x^\rho } \frac{ d x^\rho }{ d\tau} \right) = \frac{ \partial x'^\lambda }{ \partial x^\rho } \frac{ d^2x^\rho }{ d\tau^2} + \frac{ \partial^2 x'^\lambda }{ \partial x^\alpha \partial x^\beta } \frac{ d x^\alpha }{ d\tau} \frac{ d x^\beta }{ d\tau} ~. 
\end{align} Thus, $$ \Gamma'^\lambda_{\mu\nu}(x') \frac{ d x'^\mu }{ d \tau} \frac{ d x'^\nu }{ d \tau} = \left[ \frac{\partial x'^\lambda}{\partial x^\rho} \Gamma^\rho_{\alpha\beta}(x) - \frac{ \partial^2 x'^\lambda }{ \partial x^\alpha \partial x^\beta } \right] \frac{ d x^\alpha }{ d \tau} \frac{ d x^\beta }{ d \tau} $$ Putting this altogether, we find \begin{align} &\frac{ d^2 x'^\lambda }{ d\tau^2} + \Gamma'^\lambda_{\mu\nu}(x') \frac{ d x'^\mu }{ d \tau} \frac{ d x'^\nu }{ d \tau} \\ &\qquad= \frac{ \partial x'^\lambda }{ \partial x^\rho } \frac{ d^2x^\rho }{ d\tau^2} + \frac{ \partial^2 x'^\lambda }{ \partial x^\alpha \partial x^\beta } \frac{ d x^\alpha }{ d\tau} \frac{ d x^\beta }{ d\tau} + \left[ \frac{\partial x'^\lambda}{\partial x^\rho} \Gamma^\rho_{\alpha\beta}(x) - \frac{ \partial^2 x'^\lambda }{ \partial x^\alpha \partial x^\beta } \right] \frac{ d x^\alpha }{ d \tau} \frac{ d x^\beta }{ d \tau} \\ &\qquad= \frac{ \partial x'^\lambda }{ \partial x^\rho } \left[ \frac{ d^2x^\rho }{ d\tau^2} + \Gamma^\rho_{\alpha\beta}(x) \frac{ d x^\alpha }{ d \tau} \frac{ d x^\beta }{ d \tau} \right] \\ \end{align}
{ "domain": "physics.stackexchange", "id": 41700, "tags": "general-relativity, differential-geometry, coordinate-systems, geometry, geodesics" }
Find all words in a dictionary that can be made with a string of characters (Recursion/Binary Search)
Question: I'm working on an algorithm that could take in a string of 20 random characters, and display to the user every word in a dictionary that can be successfully made with those letters, regardless of length. If the string is "made", it would return "mad", "made", etc. However, the execution time is extremely poor with my current method. I've been recommended to give the Trie structure a shot. But seeing how Java doesn't have it built in, I wanted to see if there's a better approach to this algorithm, or if I should look at implementing my own Trie structure. I currently use a Binary Search implementation to check for prefixes, found through recursion, and see if a certain recursive path should be continued or not. private ArrayList<String> dict = new ArrayList<>(); private Set<String> possibleWords = new HashSet<>(); private void getAllValidWords(String letterPool, String currWord) { //Add to possibleWords when valid word if (letterPool.equals("")) { // No action to be done. } else if(currWord.equals("")){ //Will run only the initial time the method is called. for (int i = 0; i < letterPool.length(); i++) { //Get the individual letters that will become the first letter of a word String curr = letterPool.substring(i, i+1); //Delete the single letter from letterPool String newLetterPool = (letterPool.substring(0, i) + letterPool.substring(i+1)); if(inDict(curr)){ possibleWords.add(curr); } boolean prefixInDic = binarySearch(curr); if(prefixInDic){ //If the prefix isn't found, don't continue this recursive path. 
getAllValidWords(newLetterPool, curr); } } } else { //Every time we add a letter to currWord, delete from letterPool for(int i=0; i<letterPool.length(); i++){ String curr = currWord + letterPool.substring(i, i+1); String newLetterPool = (letterPool.substring(0, i) + letterPool.substring(i+1)); if(inDict(curr)){ possibleWords.add(curr); } boolean prefixInDic = binarySearch(curr); if(prefixInDic){ //If the prefix isn't found, don't continue this recursive path. getAllValidWords(newLetterPool, curr); } } } } private boolean binarySearch(String word){ int max = dict.size() - 1; int min = 0; int currIndex; boolean result = false; while(min <= max) { currIndex = (min + max) / 2; if (dict.get(currIndex).startsWith(word)) { result = true; break; } else if (dict.get(currIndex).compareTo(word) < 0) { min = currIndex + 1; } else if(dict.get(currIndex).compareTo(word) > 0){ max = currIndex - 1; } else { result = true; break; } } return result; } Answer: Check if you are solving the right problem! Most times when code gets horribly slow, you are solving the wrong problem, or the problem in an ineffective way. Try to think of different approaches. It's hard when you are already down a certain path, but try to take some distance and approach from another angle. Alternative solution You should reverse the problem and see for each word in the dictionary if it can be made from the letters. It then becomes a linear search through all the words. Which is nice :) You can use anagram code like counting the frequency of the letters of your input to quickly check if a word can be matched. A match is when the frequencies of all letters in a word are at least matched by the input. You can quickly stop in the matching code if you find a frequency of a character that is not covered. Example I used this list: https://raw.githubusercontent.com/dwyl/english-words/master/words.txt The program below executes within a second on my laptop. 
Input: made Output: a ad ade adm ae am amd ame d da dae dam dame de dea dem dema dm dma dme e ea ead eam ed eda edam edm em ema emad emda m ma mad made mae maed md mde me mea mead med meda Program: public class Words { public static void main( String[] args ) throws IOException { List<String> list = Files.readAllLines( new File( "/home/raudenaerde/words.txt" ).toPath(), Charset.defaultCharset() ); List<String> lowercase = list.stream().map( s -> s.toLowerCase() ).filter( s->s.chars().allMatch(Character::isLetter)).collect( Collectors.toList() ); System.out.println( "Read " + lowercase.size() + " words" ); findOptions( "made", lowercase ); } private static void findOptions( String string, List<String> lowercase ) { int[] freq = toFreq( string ); for ( String l : lowercase ) { int[] freqIn = toFreq( l ); if ( matches( freq, freqIn ) ) { System.out.println( l ); } } } /** * Returns true if all the frequencies of the letters match. * * @param freq * @param freqIn * @return */ private static boolean matches( int[] freq, int[] freqIn ) { for ( int i = 0; i < 26; i++ ) { if ( freq[i] == 0 && freqIn[i]>0) { return false; } else if (freq[i] < freqIn[i]) { return false; } } return true; } /** * Encode a word into a frequency array. int[0] = #a's, int[1] = #b's etc. * * @param string * @return */ private static int[] toFreq( String string ) { int[] freq = new int[26]; for ( char c : string.toCharArray() ) { if ( ( c - 'a' ) >= 0 && ( c - 'a' ) < 26 ) { freq[c - 'a']++; } } return freq; } }
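Since the question specifically asked about tries, here is a minimal sketch of that alternative (in Python rather than Java, and not taken from the answer above): a trie prunes a recursive search the moment a prefix has no continuation, giving the same effect as the binary-search prefix check without repeated scans. The tiny word list is made up for the example.

```python
from collections import Counter

# Minimal trie sketch for the letter-pool search (illustrative only;
# a real run would load a full dictionary instead of this tiny list).

class TrieNode:
    def __init__(self):
        self.children = {}   # letter -> TrieNode
        self.is_word = False

def build_trie(words):
    root = TrieNode()
    for word in words:
        node = root
        for ch in word:
            node = node.children.setdefault(ch, TrieNode())
        node.is_word = True
    return root

def find_words(letter_pool, root):
    found = set()

    def dfs(node, pool, prefix):
        if node.is_word:
            found.add(prefix)
        for ch, child in node.children.items():
            if pool[ch] > 0:        # prune: letter not available any more
                pool[ch] -= 1
                dfs(child, pool, prefix + ch)
                pool[ch] += 1       # backtrack

    dfs(root, Counter(letter_pool), "")
    return found

words = ["mad", "made", "dame", "mead", "dam", "demo"]  # hypothetical dictionary
print(sorted(find_words("made", build_trie(words))))
# ['dam', 'dame', 'mad', 'made', 'mead']
```

The trie never explores a branch whose prefix is not in the dictionary, which is exactly the pruning the question's `binarySearch(curr)` check was trying to achieve.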
{ "domain": "codereview.stackexchange", "id": 26154, "tags": "java, performance, recursion, binary-search, trie" }
How to spawn multiple nodes in same C++ process
Question: Hello there, I want to spawn multiple nodes within the same C++ process. To this end I used boost/thread. All seems to work, except that always one of the nodes is not showing up in 'rosnode list'. This is the code that runs the threads and creates the nodes: static boost::mutex mutex; static boost::once_flag cv_thread_flag = BOOST_ONCE_INIT; void worker0(int argc, char* argv[]) { mutex.lock(); ros::init(argc, argv, "optical_flow_visualization0"); optical_flow_analyser::visualize::of_visualize vis("asdf", 0); mutex.unlock(); ros::spin(); } void worker1(int argc, char* argv[]) { mutex.lock(); ros::init(argc, argv, "optical_flow_visualization1"); optical_flow_analyser::visualize::of_visualize vis("asdf", 1); mutex.unlock(); ros::spin(); } int main(int argc, char* argv[]) { boost::call_once(cv_thread_flag, &cv::startWindowThread); boost::thread worker_thread_0(worker0, argc, argv); boost::thread worker_thread_1(worker1, argc, argv); worker_thread_0.join(); worker_thread_1.join(); return 0; } Sample output of 'rosnode list': % rosnode list /gazebo /optical_flow_visualization1 (...) The internal workings of the node consist of subscribing to some image_transport topics and displaying them with cv::imshow() in the callback function. As stated above this does indeed work as it should, but the fact that only one of the nodes is found by rosnode tells me that I might be doing something wrong here. So I wonder: Is this a sane way of spawning multiple nodes? If not: how should I do it instead? The background of this is that I want to process multiple camera streams from Gazebo, and I'd like to have it so, that I can flexibly spawn processing nodes, depending on how many camera streams there are. Originally posted by nitschej on ROS Answers with karma: 35 on 2011-08-25 Post score: 3 Answer: I am surprised this seemed to work at all. I would not expect a single process to provide more than one node. I recommend implementing each thread as a nodelet. 
They are well-supported and the nodelet manager can dynamically load additional threads for the process when needed. Many camera drivers provide device nodelets, allowing you to display shared copies of the images published. The image pipeline also provides nodelets for similar reasons. Originally posted by joq with karma: 25443 on 2011-08-25 This answer was ACCEPTED on the original site Post score: 5
{ "domain": "robotics.stackexchange", "id": 6520, "tags": "ros, c++, threads, nodes" }
Are there any drawbacks to having immutable DNA during lifetime?
Question: Imagine that there is a way to prevent cell DNA from being changed once the zygote has been formed. Would this have any negative effects on the target organism? Does DNA change during the lifetime as part of normal operation, or are DNA changes always due to mutations or malfunction? Answer: Semantics of the post This question might arguably be a better fit for WorldBuilding.SE. The difficulty with this kind of what-if-the-world-were-different question is that it is unclear exactly how different the world would be, and hence what consequences to draw from the scenario. The other issue is of course that the answer cannot be drawn from empirical studies but can only be inspired by current knowledge and make untestable predictions. Because definitions of epigenetics vary and some of them might be extraordinarily broad, I will assume that by mutation you are referring to any change in the DNA sequence, excluding epigenetic changes. Related post Does our DNA change during our lives? is a related post. What do somatic mutations do? Adaptive Immunity For species having some form of adaptive immunity, somatic mutations are used for producing a large diversity of antibodies that can be tested on the self before being released into the body in search of antigens to bind to. In the absence of somatic mutations, adaptive immunity is pretty much nonexistent. This point is probably the most important and has been raised by @Polisetty in the comments. Cancer Somatic mutations cause cancers. More generally speaking, somatic mutations create genetic diversity within the body, creating opportunities for selfish cells to proliferate at the expense of the rest of the body. In the absence of somatic mutations, there would be no (or at least far fewer) cancers. This will affect selection for increased lifespan.
Somatic mutations on genetic diversity Because a fraction of the mutations that are transmitted to the offspring occur before the separation between the somatic lines and the germline, the absence of any somatic mutation would result in a lower mutation rate. This would result in lower genetic diversity overall. It would also affect selection pressures that depend upon the mutation rate and may have other consequences that are hard to predict. Also, and more importantly, many species undergo asexual reproduction, such as budding in plants. In some species, the entire genetic diversity is caused only by somatic mutations. In the absence of such mutations, no evolution would be possible. What mutations are inherited If a parent has a mutation in the lineage giving rise to half the gametes, then this mutation will necessarily be passed to half of the offspring. For this reason, offspring tend to share new mutations even if their parent did not have them. The absence of somatic mutations would remove this effect, which I would predict would have little impact on anything!
{ "domain": "biology.stackexchange", "id": 6218, "tags": "genetics, dna, molecular-genetics" }
6 out of 49 Lottery
Question: In Germany we have a lottery with the following rules: You have to guess 6 numbers. The numbers shall be less than 50 and greater than 0. A number occurs only one time in a game. So you cannot guess the same number multiple times in one game. If you have guessed 6 numbers right you win the game. I wanted to know how many attempts it would take to have 6 right numbers and wrote this example. It works but unfortunately the example is very slow and needs a lot of time because it is very unlikely to have 6 numbers right. How could I improve the speed? #include <time.h> #include <stdlib.h> #include <stdio.h> int play(int attempt[]) { // generate draw int draw[6]; draw[0] = rand() % 49 + 1; for(int i = 1; i < 6; i++) { int random_number; generate: random_number = rand() % 49 + 1; for(int j = 0; j < i; j++) { if(random_number == draw[j]) { goto generate; } } draw[i] = random_number; } // compare draw with attempt int compared = 0; for(int i = 0; i < 6; i++) { if(attempt[i] == draw[i]) { compared++; } } if(compared == 6) { return 1; } else { return 0; } } int main() { srand (time(NULL)); int attempt[] = {1, 2, 3, 4, 5, 6}; long long int counter = 1; while(!play(attempt)) { counter++; } printf("You only needed %lld attempts to get 6 right numbers!\n", counter); } By the way: after around 5 minutes the program was finished and I needed only 1,487,592,156 attempts! Answer: Bug Winning a 6/49 game is, of course, unlikely. The probability of any single ticket having all six numbers correct is $$\dfrac{1}{\binom{49}{6}} = \dfrac{6!\,(49-6)!}{49!} = \dfrac{1}{13983816}$$ But your code required $1.5\times10^9$ draws to produce a win, which is 100 times more than the expected $1.4\times10^7$ draws. Why? Because your comparison loop… // compare draw with attempt int compared = 0; for(int i = 0; i < 6; i++) { if(attempt[i] == draw[i]) { compared++; } } … requires the chosen numbers to be in the same order! Thus, a win is 720× less likely in your simulation than in a real-life 6/49 game.
Performance and style Your // generate draw loop has a special case: // generate draw int draw[6]; draw[0] = rand() % 49 + 1; for(int i = 1; i < 6; i++) { … You could easily eliminate the special case: // generate draw int draw[6]; for(int i = 0; i < 6; i++) { … You also have a goto to handle cases where you draw the same number twice. Well structured code should not have gotos. As an alternative to generating random numbers and checking for conflicts, consider doing a partial shuffle instead. Suggested solution #include <stdio.h> #include <stdlib.h> #include <time.h> const int *lotto_draw(int r) { static int balls[] = { 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49 }; const static int n = sizeof(balls) / sizeof(balls[0]); // Partial Fisher-Yates shuffle on the last r elements for (int i = n - 1; i >= n - r; i--) { int j = rand() % (i + 1); int swap = balls[j]; balls[j] = balls[i]; balls[i] = swap; } return balls + n - r; } int play(int attempt[], int r) { const int *draw = lotto_draw(r); // compare draw with attempt int correct = 0; for (int i = 0; i < r; i++) { for (int j = 0; j < r; j++) { if (attempt[i] == draw[j]) { correct++; } } } return (correct == r); } int main() { srand(time(NULL)); int attempt[] = {1, 2, 3, 4, 5, 6}; int r = sizeof(attempt) / sizeof(attempt[0]); long long int counter; for (counter = 1; !play(attempt, r); counter++); printf("You only needed %lld attempts to get %d right numbers!\n", counter, r); }
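The probabilities quoted above are easy to reproduce. The following sketch (not part of the original answer) recomputes the number of tickets and the 720× ordering penalty:

```python
import math

# Probability bookkeeping for the 6/49 game (sketch, not the original answer's code).
unordered = math.comb(49, 6)              # 13,983,816 distinct tickets
ordered = unordered * math.factorial(6)   # position-sensitive: 49*48*...*44 outcomes

print(unordered)             # expected draws, on average, to win the real game
print(ordered)               # expected draws for the question's ordered comparison
print(ordered // unordered)  # the 720x factor pointed out in the review
```

With the ordering bug, the expected wait is about $10^{10}$ draws, so the question's 1.5 billion attempts were actually a lucky run.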
{ "domain": "codereview.stackexchange", "id": 27046, "tags": "c, random" }
Searching two different elements in a same list
Question: I have a list of conversion histories, where information about user channels is stored. For example user may have app for iOS and Android installed. Each time user logs in, I store a conversion history, so that there might be multiple rows with shared data (e.g. client is always iOS). I need to get the latest usage of iOS and Android in this case, as you can see code is pretty much in 'serial' way, so I wonder how this part can be optimized. Any help is appreciated. private List<UserChannel.AppUser> findLatestAppClients( List<Conversion> conversions ) { Optional<Conversion> iosClient = conversions.stream().filter(c -> IOS.clientName().equals(c.getClient())).findFirst(); Optional<Conversion> androidClient = conversions.stream().filter(c -> ANDROID.clientName().equals(c.getClient())).findFirst(); List<UserChannel.AppUser> appUsers = new ArrayList<>(); if (iosClient.isPresent()) { appUsers.add(new UserChannel.AppUser(iosClient.get().getClient(), iosClient.get().getClientId())); } if (androidClient.isPresent()) { appUsers.add(new UserChannel.AppUser(androidClient.get().getClient(), androidClient.get().getClientId())); } return appUsers; } Answer: There's no problem with the performance. Java 8 filtering actually doesn't scan the whole stream, so the processing will be done only until the first element meeting the predicate is hit. So you're fine this way. 
However, I would do a bit of refactoring here (note that the helper needs the list passed in, since the original method receives it as a parameter): private List<UserChannel.AppUser> findLatestAppClients( List<Conversion> conversions) { Optional<Conversion> iosClient = findFirstWithName(conversions, IOS.clientName()); Optional<Conversion> androidClient = findFirstWithName(conversions, ANDROID.clientName()); List<UserChannel.AppUser> appUsers = new ArrayList<>(); addIfPresent(appUsers, iosClient); addIfPresent(appUsers, androidClient); return appUsers; } private Optional<Conversion> findFirstWithName(List<Conversion> conversions, String name){ return conversions.stream().filter(c -> name.equals(c.getClient())).findFirst(); } private void addIfPresent(List<UserChannel.AppUser> appUsers, Optional<Conversion> conversion){ if (conversion.isPresent()) { appUsers.add(new UserChannel.AppUser(conversion.get().getClient(), conversion.get().getClientId())); } }
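For what it's worth, the two searches can also be merged into a single pass that remembers the first conversion per client. Sketched below in Python rather than Java, purely to show the shape of the idea; the client names and the (client, client_id) tuples are stand-ins for the post's Conversion objects.

```python
# Single-pass alternative, sketched in Python. The client names and the
# (client, client_id) tuples stand in for the post's Conversion objects.

def find_latest_app_clients(conversions, wanted=("iOS", "Android")):
    first_seen = {}
    for client, client_id in conversions:
        if client in wanted and client not in first_seen:
            first_seen[client] = (client, client_id)
        if len(first_seen) == len(wanted):
            break                 # early exit once every wanted client is found
    return [first_seen[c] for c in wanted if c in first_seen]

history = [("iOS", 11), ("Web", 99), ("iOS", 12), ("Android", 42)]
print(find_latest_app_clients(history))   # [('iOS', 11), ('Android', 42)]
```

In Java the same shape would be a single loop filling a map keyed by client name; for two client types the difference is negligible, but it scales better if the list of wanted clients grows.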
{ "domain": "codereview.stackexchange", "id": 19804, "tags": "java, android, collections" }
Set of $\mathsf{NP}$-hard languages closed under set inclusion?
Question: As the title says, my question is whether the set of $\mathsf{NP}$-hard languages is closed under set inclusion, i.e. whether for any $\mathsf{NP}$-hard language $L$, all subsets of $L$ are also $\mathsf{NP}$-hard. This question is related since $\emptyset$ is not $\mathsf{NP}$-hard as there is nothing we could map "yes"-instances to and $\emptyset \subseteq L$ for all $\mathsf{NP}$-hard $L$. However, what about non-trivial subsets, i.e. languages $L'$ of the form $\emptyset \neq L' \subsetneq L$ for $\mathsf{NP}$-hard $L$? We know that $\emptyset \neq 2SAT \subsetneq SAT$ and while $SAT$ is $\mathsf{NP}$-hard, $2SAT$ is in $\mathsf{P}$. This suggests to me that whether the set of $\mathsf{NP}$-hard languages is closed under "non-trivial" inclusions depends on whether $\mathsf{P} = \mathsf{NP}$. Am I mistaken here? In one sentence, my question is this: is the set of $\mathsf{NP}$-hard languages closed under nontrivial set inclusion (1) assuming $\mathsf{P} = \mathsf{NP}$ and (2) assuming $\mathsf{P} \neq \mathsf{NP}$? Answer: Indeed, if NP=P then every nontrivial problem is NP-hard, and in particular every nontrivial subset of an NP-hard problem (thus closing the set under inclusion). If $\mathsf{P} \neq \mathsf{NP}$, the set is not closed under nontrivial inclusion, and the question's own example shows it: $2SAT$ is a nonempty proper subset of the $\mathsf{NP}$-hard language $SAT$, yet $2SAT \in \mathsf{P}$, so assuming $\mathsf{P} \neq \mathsf{NP}$ it is not $\mathsf{NP}$-hard.
{ "domain": "cs.stackexchange", "id": 2881, "tags": "complexity-theory, np-hard, closure-properties" }
Why do X-ray telescopes have to be in space?
Question: I have read this question: For x-rays the (HUP limit) Δx becomes smaller than the distances between the lattice distances of atoms and molecules, and the photon will interact only if it meets them on its path, because most of the volume is empty of targets for the x-ray wavelengths of the photon. Why do X-rays go through things? As far as I understand, X-rays are one of the most penetrating forms of electromagnetic radiation. They should easily penetrate Earth's atmosphere just like visible light. Then why do all X-ray telescopes have to be in space? The image is from the DK Smithsonian Encyclopedia. The only thing I found about this says something about atmospheric absorption, but does not go into detail about why X-rays get absorbed more than other wavelengths (like visible light). So basically I am asking: why are X-rays one of the most penetrating in solids, but one of the least penetrating in gases? Answer: X-rays (and gamma rays) are quite penetrating. They can pass through solid matter with much less attenuation than visible light, for example. But that doesn't mean that the attenuation is zero. Put enough "stuff" in the way, and the energy is eventually scattered or absorbed. In the case of the atmosphere, it's "just" air, but there is quite a bit of it. The depth of the atmosphere is plenty to stop almost all UV/X/gamma radiation. In fact most types of EM radiation are blocked by the atmosphere. But our eyes see only the transparency in visible light. The small molecules that make up most of the atmosphere ($N_2$, $O_2$, $Ar$) take a lot of energy to excite. It turns out that visible light is just shy of the energy to do this efficiently, so interactions are very rare. More energetic forms (including X-rays) can ionize these molecules, absorbing or scattering the radiation. Given a thick enough layer, almost all the incoming radiation is removed.
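The "enough stuff in the way" point can be made roughly quantitative with the Beer–Lambert law $I = I_0\,e^{-(\mu/\rho)\sigma}$, where $\sigma$ is the column density. The sketch below uses order-of-magnitude assumptions of my own (sea-level column density $\approx P/g \approx 10^3\ \mathrm{g/cm^2}$, and a mass attenuation coefficient for air of a few $\mathrm{cm^2/g}$ near 10 keV); the exact numbers are illustrative, not from the original answer:

```python
import math

# Rough Beer-Lambert estimate of X-ray transmission through the whole atmosphere.
# Both numbers are order-of-magnitude assumptions, chosen for illustration only.
column_density = 101325 / 9.81 / 10.0   # g/cm^2 from P/g; roughly 1.0e3 g/cm^2
mu_over_rho = 5.0                       # cm^2/g for air near 10 keV (approximate)

optical_depth = mu_over_rho * column_density
transmission = math.exp(-optical_depth)  # underflows to 0.0: nothing gets through

print(round(optical_depth))   # thousands of e-foldings of attenuation
print(transmission)           # 0.0 -- hence X-ray telescopes go to space
```

Even if the assumed coefficient is off by an order of magnitude, the optical depth stays in the hundreds, so the conclusion — essentially zero X-ray flux at ground level — is robust.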
{ "domain": "physics.stackexchange", "id": 94040, "tags": "electromagnetic-radiation, atomic-physics, material-science, atmospheric-science, x-rays" }
Transformation of the derivative of the scalar field in Ramond's book about QFT
Question: In the book by Pierre Ramond about quantum field theory, he explores in chapter 1.4 (p.13) the behavior of fields under Poincaré transformations. He starts by explaining that infinitesimal transformations have the following effect on an arbitrary function: $$f'(x') = f(x) + \delta_0 f(x) + \delta x^\mu \partial_\mu f(x)\tag{1}$$ with $\delta_0 f := f'(x) - f(x)$. The transformation of $x^\mu$ is found to be: $$\delta x^\mu = \frac{i}{2} \epsilon^{\rho\sigma} L_{\rho\sigma} x^\mu = \epsilon^{\mu\rho} x_\rho \tag{2}$$ Then he shows that, for a scalar field, the spin part of the transformation has to vanish by comparing $$ \phi'(x')=\phi(x) \iff \delta_0 \phi = - \frac{i}{2} \epsilon^{\rho\sigma} L_{\rho\sigma} \phi \tag{3}$$ with the general form of the Lorentz transformation $$\delta_0 \text{ (something)} = -\frac{i}{2} \epsilon^{\rho\sigma} M_{\rho\sigma} \text{ (something)} \tag{4}$$ with $M_{\rho\sigma} = L_{\rho\sigma} + S_{\rho\sigma}$ and $L_{\rho\sigma} = i(x_\rho \partial_\sigma - x_\sigma \partial_\rho)$. So we conclude that the scalar field must have spin-$0$. Then he goes on to show that the spin part does not vanish for the derivative of a scalar field $\partial_\mu \phi$. There I have a hard time understanding how he shows that. The transformation of $\partial_\mu \phi$ is given by: $\begin{align} \delta \partial_\mu \phi &= \left[ \delta,\partial_\mu \right]\phi + \partial_\mu \delta\phi \\ &= \left[ \delta x^\nu \partial_\nu , \partial_\mu \right] \phi \\ &= \epsilon^{\nu\rho} x_\rho \partial_\mu \partial_\nu \phi - \epsilon^\nu_{\ \mu} \partial_\nu \phi \tag{5}\end{align}$ Somehow, in Ramond only the 2nd term survives. Is that because we consider the second derivative as the next order? Or does that somehow vanish?
Then he just states that we find: $$\delta_0 \partial_\mu \phi = -\frac{i}{2} \epsilon^{\rho\sigma} L_{\rho\sigma} \partial_\mu \phi - \frac{i}{2} \left(\epsilon^{\rho\sigma} S_{\rho\sigma} \right)_ \mu^{\ \nu} \partial_\nu \phi \tag{6}$$ with $$\left(S_{\rho\sigma} \right)_\mu^{\ \nu} = i \left( g_{\rho\mu} g^\nu_{\ \sigma} - g_{\sigma \mu} g^\nu_{\ \rho} \right). \tag{7}$$ I could not manage to show that. So my question would be: how did that happen? What spin does the derivative carry, and why? Answer: So here is the answer to about 3/4 of my own question. First, let me detail the calculation of $\delta \partial_\mu \phi$: $$\begin{align} \delta \partial_\mu \phi &= \left[\delta,\partial_\mu \right] \phi + \partial_\mu \underbrace{\delta \phi}_{=0} \\ &= \underbrace{\left[ \delta_0,\partial_\mu \right]}_{=0} \phi + \left[\delta x^\nu \partial_\nu , \partial_\mu \right] \phi \\ &= \delta x^\nu \partial_\mu \partial_\nu \phi - \partial_\mu \left( \delta x^\nu \partial_\nu \phi \right) \tag{8} \end{align}$$ Now we can use eq. (1) (see OP) and also obtain: $$\delta \partial_\mu \phi = \delta_0 \partial_\mu \phi + \delta x^\nu \partial_\nu \partial_\mu \phi \tag{9}$$ Equating eq. (8) and (9) gives us: $$\delta_0 \partial_\mu \phi = -\partial_\mu \left( \delta x^\nu \partial_\nu \phi \right) \tag{10}$$ Now we want to try to massage this expression so that we can retrieve the expression given in eq. (6) in the OP. 
This is actually straightforward: $$\begin{align} \delta_0 \partial_\mu \phi &= -\partial_\mu \left( \delta x^\nu \partial_\nu \phi \right) \\ &= - \partial_\mu \delta x^\nu \partial_\nu \phi - \delta x^\nu \partial_\mu \partial_\nu \phi \\ &= -\frac{i}{2} i \left( \epsilon_\mu^{\ \nu} \partial_\nu - \epsilon^\nu_{\ \mu} \partial_\nu \right) \phi - \frac{i}{2} \epsilon^{\rho\nu} \underbrace{i \left(x_\rho \partial_\nu - x_\nu \partial_\rho \right)}_{=L_{\rho\nu}} \partial_\mu \phi \\ &= -\frac{i}{2} \epsilon^{\rho\sigma} \underbrace{i \left( g_{\rho\mu} g_\sigma^{\ \nu} - g_{\sigma\mu} g_\rho^{\ \nu} \right)}_{=\left(S_{\rho\sigma}\right)_\mu^{\ \nu}} \partial_\nu \phi - \frac{i}{2} \epsilon^{\rho\nu} L_{\rho\nu} \partial_\mu \phi \\ &= -\frac{i}{2} \epsilon^{\rho\sigma} \left(S_{\rho\sigma}\right)_\mu^{\ \nu} \partial_\nu \phi - \frac{i}{2} \epsilon^{\rho\nu} L_{\rho\nu} \partial_\mu \phi \tag{11} \end{align}$$ which is exactly what was wanted in the OP. The only thing that I still cannot answer is, how do we know that this is a spin $1$ object (and is that even the case?)?
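On the remaining question — what spin the derivative carries — a standard observation (textbook material, not part of the original post) settles it: the matrices in eq. (7) are precisely the Lorentz generators in the defining (four-vector) representation, since for any four-vector $V_\mu$ an infinitesimal Lorentz transformation reads $$\delta_0 V_\mu = -\frac{i}{2}\,\epsilon^{\rho\sigma}\left(S_{\rho\sigma}\right)_\mu^{\ \nu} V_\nu, \qquad \left(S_{\rho\sigma}\right)_\mu^{\ \nu} = i\left( g_{\rho\mu}\, g_\sigma^{\ \nu} - g_{\sigma\mu}\, g_\rho^{\ \nu} \right),$$ which is exactly the spin part appearing in (6). Restricting to the rotation subgroup, the spin matrices $S^i = \frac{1}{2}\epsilon^{ijk}S_{jk}$ satisfy $\vec S^2 = s(s+1)\,\mathbb{1} = 2\cdot\mathbb{1}$ on the spatial components, so $s=1$: the gradient $\partial_\mu\phi$ transforms as a four-vector and carries spin $1$.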
{ "domain": "physics.stackexchange", "id": 59934, "tags": "special-relativity, angular-momentum, field-theory, bosons, poincare-symmetry" }
Implementation of mergesort in C
Question: I have implemented mergesort in C. Any advice on making it more compact? My merge function seems less than fully optimal. //mergeSort in C #include <stdio.h> #include <stdlib.h> void merge(int [], int[], int[], int, int); void mergesort(int[], int); int main() { int unsorted[] = {4, 1, 3, 0, 10, 2, 5, 5}; int size = sizeof(unsorted)/sizeof(int); mergesort(unsorted, size); printf("The sorted array is: "); for(int i = 0; i < size; i++) { printf("%d, ", *(unsorted+i)); } return 0; } void merge(int *original, int* first, int* second, int len1, int len2) { int i = 0; int firPtr = 0; int secPtr = 0; while(i < (len1+len2)) { if(firPtr == len1) { original[i] = second[secPtr++]; } else if(secPtr == len2) { original[i] = first[firPtr++]; } else if(first[firPtr] < second[secPtr]) { original[i] = first[firPtr++]; } else { original[i] = second[secPtr++]; } i++; } } void mergesort(int unsorted[], int size) { if(size <= 1 || unsorted == NULL) { return; } int *first = (int *)malloc((size/2)*sizeof(int)); int *second = (int *)malloc((size - size/2)*sizeof(int));; int mid = size/2; for(int i = 0; i < mid; i++) { *(first+i) = *(unsorted+i); } //Common Error/Note to self: Make sure when initializing j to //not 0, the code block truly requires that. for(int j = (mid); j < size; j++) { *(second+(j-mid)) = *(unsorted+j); } mergesort(first, mid); mergesort(second, size - mid); merge(unsorted, first, second, mid, size - mid); free(first); free(second); } Answer: Use for loop This while loop in merge can be easily rewritten as a for loop: int i = 0; // ... while(i < (len1+len2)) { // ... i++; } The benefit of a for loop is that it limits the scope of the variable i, preventing misuses outside, and it makes the i++ easier to see, impossibly to forget: for (int i = 0; i < (len1+len2); i++) { // ... 
} Unnecessary recalculation It's ironic to see size / 2 repeated 2 times, and then again when storing in a variable: int *first = (int *)malloc((size/2)*sizeof(int)); int *second = (int *)malloc((size - size/2)*sizeof(int));; int mid = size/2; You could as well reorder these statements and reuse mid: int mid = size/2; int *first = (int *)malloc(mid * sizeof(int)); int *second = (int *)malloc((size - mid) * sizeof(int)); Naming In merge, the variables firPtr and secPtr are poor names for two reasons: "Ptr" suffix is typically used for pointers. But these variables are not pointers, these are indexes. So the suffix should be "Index" "fir" and "sec" are meaningless fragments, simply spelling them out to "first" and "second" would make them naturally more readable Pointer indexing In some places you use array-style indexing with pointers: original[i] = first[firPtr++]; ... while in many other places you use pointer arithmetics: printf("%d, ", *(unsorted+i)); // ... *(first+i) = *(unsorted+i); I suggest to use simply array-style indexing consistently everywhere. Usability As I pointed out on your other questions, the program can become more interesting if it can take input from the command line rather than a hardcoded array. You could rework main easily to that end: int main(int argc, char ** argv) { int size = argc - 1; int * unsorted = (int *)malloc(size * sizeof(int)); for (int i = 1; i < argc; ++i) { unsorted[i - 1] = atoi(argv[i]); } // ... free(unsorted); } Minor points No need for the return 0 at the end of main. The compiler adds that automatically. The program doesn't print a newline after printing the sorted array. This makes the output look strange when running in a terminal. I suggest to add a newline at the end: puts("");
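To summarize the structure the review is steering toward (split at a mid computed once, merge by walking two indexes), here is a compact sketch in Python — an illustration of the logic, not the reviewed C code:

```python
# Compact mergesort mirroring the reviewed structure: split at mid (computed
# once), recurse, then merge by walking two indexes. Sketch only, not the C code.

def merge(first, second):
    merged, i, j = [], 0, 0
    while i < len(first) and j < len(second):
        if first[i] <= second[j]:
            merged.append(first[i]); i += 1
        else:
            merged.append(second[j]); j += 1
    merged.extend(first[i:])    # at most one of these two extends is non-empty
    merged.extend(second[j:])
    return merged

def mergesort(values):
    if len(values) <= 1:
        return list(values)
    mid = len(values) // 2      # computed once, reused for both halves
    return merge(mergesort(values[:mid]), mergesort(values[mid:]))

print(mergesort([4, 1, 3, 0, 10, 2, 5, 5]))   # [0, 1, 2, 3, 4, 5, 5, 10]
```

Note the merge handles the "one side exhausted" case by appending the tail once at the end, instead of checking `firPtr == len1` / `secPtr == len2` on every iteration as the original C does.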
{ "domain": "codereview.stackexchange", "id": 18454, "tags": "beginner, c, mergesort" }
minimum number of states for the regular expression
Question: Could somebody please tell me if there is a way to create a DFA with 8 states for the regular expression $$(111 + 11111)^*$$ I was able to create a DFA with 8 states, but the source where I saw the question gives the answer as 9 states. And if possible, could you please show me the DFA? Thanks Answer: We can summarize this unary language by the list of possible lengths of words: $$ 0,3,5,6,8,9,10,11,12,\ldots. $$ Now let us look at (left) shifts of this list: $$ 0,3,5,6,8-\infty \\ 2,4,5,7-\infty \\ 1,3,4,6-\infty \\ 0,2,3,5-\infty \\ 1,2,4-\infty \\ 0,1,3-\infty \\ 0,2-\infty \\ 1-\infty \\ 0-\infty $$ There are 9 different shifts, and so 9 different states in the minimal DFA. In this case, the minimal DFA is very simple: it has 9 states $s_0,\ldots,s_8$, with arrows $s_i \to s_{i+1}$ (for $i < 8$) and $s_8\to s_8$, starting state $s_0$, and accepting states $s_0,s_3,s_5,s_6,s_8$. This sort of reasoning works for all unary languages, though sometimes the minimal DFA won't be a simple path. For languages of the form $(1^{n_1}+\cdots+1^{n_k})^*$, the minimal DFA is in fact always a path, and assuming (without loss of generality) that $\mathrm{gcd}(n_1,\ldots,n_k)=1$, the number of states exactly equals $2$ plus the maximal integer not representable as a sum of non-negative multiples of $n_1,\ldots,n_k$ (in our case, $7$).
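The 9-state answer can be sanity-checked by brute force: simulate the path DFA $s_0 \to \cdots \to s_8$ (self-loop on $s_8$, accepting states $\{s_0,s_3,s_5,s_6,s_8\}$) and compare it with direct membership in $(111+11111)^*$, i.e. word lengths of the form $3a+5b$. A quick Python sketch:

```python
# Check the 9-state minimal DFA for (111 + 11111)* against brute force.
# States s0..s8 form a path with a self-loop on s8; accepting: {0, 3, 5, 6, 8}.

ACCEPTING = {0, 3, 5, 6, 8}

def dfa_accepts(n):
    state = 0
    for _ in range(n):              # read n copies of the single symbol '1'
        state = min(state + 1, 8)   # s8 loops back to itself
    return state in ACCEPTING

def in_language(n):
    # 1^n is in (111 + 11111)* iff n = 3a + 5b for some a, b >= 0
    return any((n - 5 * b) % 3 == 0 for b in range(n // 5 + 1))

assert all(dfa_accepts(n) == in_language(n) for n in range(200))
print("DFA agrees with the language for all lengths below 200")
```

The only rejected lengths are 1, 2, 4, and 7 — and 7 is the maximal non-representable integer the answer mentions, giving $7 + 2 = 9$ states.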
{ "domain": "cs.stackexchange", "id": 3941, "tags": "finite-automata, regular-expressions" }
Generalized momentum conjugate and potential $U(q, \dot q)$
Question: In Goldstein's "Classical Mechanics" (first ed.), I have read that if $q_j$ is a cyclic coordinate, its generalized momentum conjugate $p_j$ is constant. He obtained that starting from Lagrange's equation: $$\frac {d}{dt} \frac {\partial L}{\partial \dot q_j}- \frac {\partial L}{\partial q_j}=0.$$ But this Lagrange's equation refers to a conservative system. What would happen if I considered a system in which the potential is $U(q, \dot q)$? Answer: OP wrote(v1): What would happen if I considered a system in which the potential is [velocity-dependent] $U(q, \dot q)$? Well, if OP already knows that the generalized force$^1$ $$\tag{1} Q_j~=~\frac {d}{dt} \frac {\partial U}{\partial \dot q^j}- \frac {\partial U}{\partial q^j}$$ is given in terms of a velocity-dependent potential $U=U(q, \dot q, t)$, this means that Lagrange's equations $$\tag{2}\frac {d}{dt} \frac {\partial T}{\partial \dot q^j}- \frac {\partial T}{\partial q^j}~=~Q_j,$$ can be written as $$\tag{3}\frac {d}{dt} \frac {\partial L}{\partial \dot q^j}- \frac {\partial L}{\partial q^j}~=~0, \qquad L~=~T-U.$$ As long as one has Lagrange's equations (3), then it is still true that if $q^j$ is a cyclic coordinate $\frac {\partial L}{\partial q^j}=0$, then its generalized momentum conjugate $p_j:=\frac {\partial L}{\partial \dot q^j}$ is a constant of motion. $^1$ Here we consider for simplicity a system with only one type of generalized force. In practice, there may be several types of forces (e.g. gravity force, Lorentz force, etc.). The generalization is straightforward.
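A concrete example of such a velocity-dependent potential — standard textbook material, added here for illustration rather than taken from the original post — is a charge $q$ in an electromagnetic field (Gaussian units): $$U = q\phi - \frac{q}{c}\,\dot{\vec r}\cdot\vec A, \qquad L = \frac{1}{2}m\dot{\vec r}^{\,2} - q\phi + \frac{q}{c}\,\dot{\vec r}\cdot\vec A.$$ If $x$ is cyclic (both $\phi$ and $\vec A$ independent of $x$), the conserved quantity is the canonical momentum $$p_x = \frac{\partial L}{\partial \dot x} = m\dot x + \frac{q}{c}A_x,$$ which is constant even though the mechanical momentum $m\dot x$ alone need not be.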
{ "domain": "physics.stackexchange", "id": 5273, "tags": "classical-mechanics, lagrangian-formalism, momentum, potential-energy, conservative-field" }
Finding the eigenfunctions of the $\hat{\vec l}.\hat{\vec s}$ operator for a single p-electron
Question: I'm trying to calculate the SO-coupling for a single p-electron ($l=1$, $s=\frac{1}{2}$) in the uncoupled representation. This comes down to calculating these matrix elements: $$\left\langle nlm_lsm_s\left|\hat{\vec l}.\hat{\vec s}\right|nlm_l^{'}sm_s^{'}\right\rangle$$ It can be easily found that: $$\hat{\vec l}.\hat{\vec s} = \frac{1}{2}(\hat{ l_+}\hat{ s_-}+\hat{ l_-}\hat{ s_+})+\hat{l_z}\hat{s_z}$$ Which gives me the following matrix representation of $\hat{\vec l}.\hat{\vec s}$: \begin{bmatrix} \frac{1}{2} & 0 & 0 & 0 & 0 & 0 \\ 0 & -\frac{1}{2} & \frac{1}{\sqrt{2}} & 0 & 0 & 0 \\ 0 & \frac{1}{\sqrt{2}} & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & \frac{1}{\sqrt{2}} & 0 \\ 0 & 0 & 0 & \frac{1}{\sqrt{2}} & -\frac{1}{2} & 0 \\ 0 & 0 & 0 & 0 & 0 & \frac{1}{2} \end{bmatrix} I recognize this is a diagonal block matrix of the form \begin{bmatrix} A & 0 \\ 0 & A \end{bmatrix} And I've found the eigenvalues this way ($\lambda=-1$ and $\lambda=\frac{1}{2}$) Now I've tried solving for the eigenvectors but I think I'm doing something wrong because the equations I get are trivially equal to zero (The eigenket is represented by a column vector $(x,y,z)$, the calculation is for the $\lambda = \frac{1}{2}$ eigenvalue): $(A-\lambda I)\ |\phi\rangle=0$ leads to the following equations: $$-y+z\frac{1}{\sqrt 2}=0$$ $$\frac{1}{\sqrt 2}y-\frac{1}{2}z=0$$ Which in turn gives the trivial equations: $$z=\frac{2}{\sqrt 2}y$$ $$y= 0, z=0$$ I don't see what I'm doing wrong, I get similar trivial equations when calculating the other eigenvalue. Answer: The zero vector is always a (trivial) solution of the eigenvector equation. You are looking for a non-zero solution, in fact a normalized solution, so your equations are $z=\sqrt{2}y$ and $y^2 + z^2 = 1$. I get $y=1/\sqrt{3}$, $z=\sqrt{2/3}$.
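As a quick numerical check of this (a sketch using only the standard library, not part of the original answer): the vector $(0,\,1/\sqrt{3},\,\sqrt{2/3})$ from the answer is indeed an eigenvector of the question's $3\times 3$ block with eigenvalue $1/2$.

```python
import math

# The 3x3 block A of the l.s matrix from the question (upper-left block).
s = 1 / math.sqrt(2)
A = [[0.5, 0.0, 0.0],
     [0.0, -0.5, s],
     [0.0, s, 0.0]]

def matvec(m, v):
    return [sum(m[i][j] * v[j] for j in range(3)) for i in range(3)]

# Candidate eigenvector for lambda = 1/2: (0, 1/sqrt(3), sqrt(2/3)).
v = [0.0, 1 / math.sqrt(3), math.sqrt(2 / 3)]
Av = matvec(A, v)

assert all(abs(Av[i] - 0.5 * v[i]) < 1e-12 for i in range(3))   # A v = v / 2
assert abs(sum(c * c for c in v) - 1.0) < 1e-12                 # normalized
print("A v = v/2 with a normalized v: eigenvector confirmed")
```

The same check with $v = (0,\,\sqrt{2/3},\,-1/\sqrt{3})$ and eigenvalue $-1$ confirms the other eigenvector of the block.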
{ "domain": "physics.stackexchange", "id": 36314, "tags": "angular-momentum, quantum-spin, eigenvalue" }
Milking a COM type library: "fun" with COM reflection
Question: Once upon a time, there was a duck that wanted to know where and how user code was calling into the VBA standard library and Excel object model. To match the rest of its API, the poor little duck had to dig through pages and pages and pages of MSDN documentation, and instantiate a Declaration object for each and every single module, class, enum, function, property, event and whatnot. For example, the ColorConstants built-in module from the VBA standard library would be hard-coded as a series of Declaration fields, and provided to rubberduck through a static getter that used reflection to pull all the members: public static IEnumerable<Declaration> Declarations { get { if (_standardLibDeclarations == null) { var nestedTypes = typeof(VbaStandardLib).GetNestedTypes(BindingFlags.NonPublic).Where(t => Attribute.GetCustomAttribute(t, typeof(CompilerGeneratedAttribute)) == null); var fields = nestedTypes.SelectMany(t => t.GetFields()); var values = fields.Select(f => f.GetValue(null)); _standardLibDeclarations = values.Cast<Declaration>(); } return _standardLibDeclarations; } } //...
private class ColorConstantsModule { private static readonly QualifiedModuleName ColorConstantsModuleName = new QualifiedModuleName("VBA", "ColorConstants"); public static readonly Declaration ColorConstants = new Declaration(new QualifiedMemberName(ColorConstantsModuleName, "ColorConstants"), VbaLib.Vba, "VBA", "ColorConstants", false, false, Accessibility.Global, DeclarationType.Module); public static Declaration VbBlack = new ValuedDeclaration(new QualifiedMemberName(ColorConstantsModuleName, "vbBlack"), ColorConstants, "VBA.ColorConstants", "Long", Accessibility.Global, DeclarationType.Constant, "0"); public static Declaration VbBlue = new ValuedDeclaration(new QualifiedMemberName(ColorConstantsModuleName, "vbBlue"), ColorConstants, "VBA.ColorConstants", "Long", Accessibility.Global, DeclarationType.Constant, "16711680"); public static Declaration VbCyan = new ValuedDeclaration(new QualifiedMemberName(ColorConstantsModuleName, "vbCyan"), ColorConstants, "VBA.ColorConstants", "Long", Accessibility.Global, DeclarationType.Constant, "16776960"); public static Declaration VbGreen = new ValuedDeclaration(new QualifiedMemberName(ColorConstantsModuleName, "vbGreen"), ColorConstants, "VBA.ColorConstants", "Long", Accessibility.Global, DeclarationType.Constant, "65280"); public static Declaration VbMagenta = new ValuedDeclaration(new QualifiedMemberName(ColorConstantsModuleName, "vbMagenta"), ColorConstants, "VBA.ColorConstants", "Long", Accessibility.Global, DeclarationType.Constant, "16711935"); public static Declaration VbRed = new ValuedDeclaration(new QualifiedMemberName(ColorConstantsModuleName, "vbRed"), ColorConstants, "VBA.ColorConstants", "Long", Accessibility.Global, DeclarationType.Constant, "255"); public static Declaration VbWhite = new ValuedDeclaration(new QualifiedMemberName(ColorConstantsModuleName, "vbWhite"), ColorConstants, "VBA.ColorConstants", "Long", Accessibility.Global, DeclarationType.Constant, "16777215"); public static Declaration VbYellow = 
new ValuedDeclaration(new QualifiedMemberName(ColorConstantsModuleName, "vbYellow"), ColorConstants, "VBA.ColorConstants", "Long", Accessibility.Global, DeclarationType.Constant, "65535"); } This was very much unfortunate, because not only was it a fairly ugly and not-quite-justified use of reflection, it meant that the VBE add-in would only ever know of hard-coded declarations, and hard-coding the declarations for every member of every possible VBA host application that could ever run Rubberduck, would be beyond ridiculous - even the thought of it is ludicrous. Imagine the above, times 700. Now imagine you want to add a Declaration for every parameter of every function out there: the result would be an unmaintainable pile of hard-coded goo. So I decided to scratch that and go with a wildly different approach instead. Not any less ludicrous though. When hosted in MS-Excel 2010, the below code yields 37,135 built-in declarations in... well, about two thirds of a second: VBA declarations added in 77ms Excel declarations added in 582ms stdole declarations added in 2ms 37135 built-in declarations added. It's being used like this, in the RubberduckParser.ParseParallel method: if (!_state.AllDeclarations.Any(declaration => declaration.IsBuiltIn)) { // multiple projects can (do) have same references; avoid adding them multiple times! 
var references = projects.SelectMany(project => project.References.Cast<Reference>()) .GroupBy(reference => reference.Guid) .Select(grouping => grouping.First()); foreach (var reference in references) { var stopwatch = Stopwatch.StartNew(); var declarations = _comReflector.GetDeclarationsForReference(reference); foreach (var declaration in declarations) { _state.AddDeclaration(declaration); } stopwatch.Stop(); Debug.WriteLine("{0} declarations added in {1}ms", reference.Name, stopwatch.ElapsedMilliseconds); } Debug.WriteLine("{0} built-in declarations added.", _state.AllDeclarations.Count(d => d.IsBuiltIn)); } So, it works wonderfully well - too well even (the resolver code wasn't quite ready to handle that many declarations). Given a Reference, we load its COM type library, start iterating its types and member, and yield return a Declaration as soon as we have enough information to provide one. using System; using System.Collections.Generic; using System.Runtime.InteropServices; using System.Runtime.InteropServices.ComTypes; using Microsoft.Vbe.Interop; using Rubberduck.VBEditor; using FUNCFLAGS = System.Runtime.InteropServices.ComTypes.FUNCFLAGS; using TYPEDESC = System.Runtime.InteropServices.ComTypes.TYPEDESC; using TYPEKIND = System.Runtime.InteropServices.ComTypes.TYPEKIND; using FUNCKIND = System.Runtime.InteropServices.ComTypes.FUNCKIND; using INVOKEKIND = System.Runtime.InteropServices.ComTypes.INVOKEKIND; using PARAMFLAG = System.Runtime.InteropServices.ComTypes.PARAMFLAG; using TYPEATTR = System.Runtime.InteropServices.ComTypes.TYPEATTR; using FUNCDESC = System.Runtime.InteropServices.ComTypes.FUNCDESC; using ELEMDESC = System.Runtime.InteropServices.ComTypes.ELEMDESC; using VARDESC = System.Runtime.InteropServices.ComTypes.VARDESC; namespace Rubberduck.Parsing.Symbols { public class ReferencedDeclarationsCollector { /// <summary> /// Controls how a type library is registered. 
/// </summary> private enum REGKIND { /// <summary> /// Use default register behavior. /// </summary> REGKIND_DEFAULT = 0, /// <summary> /// Register this type library. /// </summary> REGKIND_REGISTER = 1, /// <summary> /// Do not register this type library. /// </summary> REGKIND_NONE = 2 } [DllImport("oleaut32.dll", CharSet = CharSet.Unicode)] private static extern void LoadTypeLibEx(string strTypeLibName, REGKIND regKind, out ITypeLib TypeLib); private static readonly IDictionary<VarEnum, string> TypeNames = new Dictionary<VarEnum, string> { {VarEnum.VT_DISPATCH, "DISPATCH"}, {VarEnum.VT_VOID, string.Empty}, {VarEnum.VT_VARIANT, "Variant"}, {VarEnum.VT_BLOB_OBJECT, "Object"}, {VarEnum.VT_STORED_OBJECT, "Object"}, {VarEnum.VT_STREAMED_OBJECT, "Object"}, {VarEnum.VT_BOOL, "Boolean"}, {VarEnum.VT_BSTR, "String"}, {VarEnum.VT_LPSTR, "String"}, {VarEnum.VT_LPWSTR, "String"}, {VarEnum.VT_I1, "Variant"}, // no signed byte type in VBA {VarEnum.VT_UI1, "Byte"}, {VarEnum.VT_I2, "Integer"}, {VarEnum.VT_UI2, "Variant"}, // no unsigned integer type in VBA {VarEnum.VT_I4, "Long"}, {VarEnum.VT_UI4, "Variant"}, // no unsigned long integer type in VBA {VarEnum.VT_I8, "Variant"}, // LongLong on 64-bit VBA {VarEnum.VT_UI8, "Variant"}, // no unsigned LongLong integer type in VBA {VarEnum.VT_INT, "Long"}, // same as I4 {VarEnum.VT_UINT, "Variant"}, // same as UI4 {VarEnum.VT_DATE, "Date"}, {VarEnum.VT_DECIMAL, "Currency"}, // best match? {VarEnum.VT_EMPTY, "Empty"}, {VarEnum.VT_R4, "Single"}, {VarEnum.VT_R8, "Double"}, }; private string GetTypeName(ITypeInfo info) { string typeName; string docString; // todo: put the docString to good use? 
int helpContext; string helpFile; info.GetDocumentation(-1, out typeName, out docString, out helpContext, out helpFile); return typeName; } public IEnumerable<Declaration> GetDeclarationsForReference(Reference reference) { var projectName = reference.Name; var path = reference.FullPath; var projectQualifiedModuleName = new QualifiedModuleName(projectName, projectName); var projectQualifiedMemberName = new QualifiedMemberName(projectQualifiedModuleName, projectName); var projectDeclaration = new Declaration(projectQualifiedMemberName, null, null, projectName, false, false, Accessibility.Global, DeclarationType.Project); yield return projectDeclaration; ITypeLib typeLibrary; LoadTypeLibEx(path, REGKIND.REGKIND_NONE, out typeLibrary); var typeCount = typeLibrary.GetTypeInfoCount(); for (var i = 0; i < typeCount; i++) { ITypeInfo info; typeLibrary.GetTypeInfo(i, out info); if (info == null) { continue; } var typeName = GetTypeName(info); var typeDeclarationType = GetDeclarationType(typeLibrary, i); QualifiedModuleName typeQualifiedModuleName; QualifiedMemberName typeQualifiedMemberName; if (typeDeclarationType == DeclarationType.Enumeration || typeDeclarationType == DeclarationType.UserDefinedType) { typeQualifiedModuleName = projectQualifiedModuleName; typeQualifiedMemberName = new QualifiedMemberName(projectQualifiedModuleName, typeName); } else { typeQualifiedModuleName = new QualifiedModuleName(projectName, typeName); typeQualifiedMemberName = new QualifiedMemberName(typeQualifiedModuleName, typeName); } var moduleDeclaration = new Declaration(typeQualifiedMemberName, projectDeclaration, projectDeclaration, typeName, false, false, Accessibility.Global, typeDeclarationType, null, Selection.Home); yield return moduleDeclaration; IntPtr typeAttributesPointer; info.GetTypeAttr(out typeAttributesPointer); var typeAttributes = (TYPEATTR)Marshal.PtrToStructure(typeAttributesPointer, typeof (TYPEATTR)); //var implements = GetImplementedInterfaceNames(typeAttributes, info); 
for (var memberIndex = 0; memberIndex < typeAttributes.cFuncs; memberIndex++) { IntPtr memberDescriptorPointer; info.GetFuncDesc(memberIndex, out memberDescriptorPointer); var memberDescriptor = (FUNCDESC) Marshal.PtrToStructure(memberDescriptorPointer, typeof (FUNCDESC)); var memberNames = new string[255]; // member name at index 0; array contains parameter names too int namesArrayLength; info.GetNames(memberDescriptor.memid, memberNames, 255, out namesArrayLength); var memberName = memberNames[0]; var funcValueType = (VarEnum)memberDescriptor.elemdescFunc.tdesc.vt; var memberDeclarationType = GetDeclarationType(memberDescriptor, funcValueType); var asTypeName = string.Empty; if (memberDeclarationType != DeclarationType.Procedure && !TypeNames.TryGetValue(funcValueType, out asTypeName)) { asTypeName = funcValueType.ToString(); //TypeNames[VarEnum.VT_VARIANT]; } var memberDeclaration = new Declaration(new QualifiedMemberName(typeQualifiedModuleName, memberName), moduleDeclaration, moduleDeclaration, asTypeName, false, false, Accessibility.Global, memberDeclarationType, null, Selection.Home); yield return memberDeclaration; var parameterCount = memberDescriptor.cParams - 1; for (var paramIndex = 0; paramIndex < parameterCount; paramIndex++) { var paramName = memberNames[paramIndex + 1]; var paramPointer = new IntPtr(memberDescriptor.lprgelemdescParam.ToInt64() + Marshal.SizeOf(typeof (ELEMDESC))*paramIndex); var elementDesc = (ELEMDESC) Marshal.PtrToStructure(paramPointer, typeof (ELEMDESC)); var isOptional = elementDesc.desc.paramdesc.wParamFlags.HasFlag(PARAMFLAG.PARAMFLAG_FOPT); var asParamTypeName = string.Empty; var isByRef = false; var isArray = false; var paramDesc = elementDesc.tdesc; var valueType = (VarEnum) paramDesc.vt; if (valueType == VarEnum.VT_PTR || valueType == VarEnum.VT_BYREF) { //var paramTypeDesc = (TYPEDESC) Marshal.PtrToStructure(paramDesc.lpValue, typeof (TYPEDESC)); isByRef = true; var paramValueType = (VarEnum) paramDesc.vt; if 
(!TypeNames.TryGetValue(paramValueType, out asParamTypeName)) { asParamTypeName = TypeNames[VarEnum.VT_VARIANT]; } //var href = paramDesc.lpValue.ToInt32(); //ITypeInfo refTypeInfo; //info.GetRefTypeInfo(href, out refTypeInfo); // todo: get type info? } if (valueType == VarEnum.VT_CARRAY || valueType == VarEnum.VT_ARRAY || valueType == VarEnum.VT_SAFEARRAY) { // todo: tell ParamArray arrays from normal arrays isArray = true; } yield return new ParameterDeclaration(new QualifiedMemberName(typeQualifiedModuleName, paramName), memberDeclaration, asParamTypeName, isOptional, isByRef, isArray); } } for (var fieldIndex = 0; fieldIndex < typeAttributes.cVars; fieldIndex++) { IntPtr ppVarDesc; info.GetVarDesc(fieldIndex, out ppVarDesc); var varDesc = (VARDESC) Marshal.PtrToStructure(ppVarDesc, typeof (VARDESC)); var names = new string[255]; int namesArrayLength; info.GetNames(varDesc.memid, names, 255, out namesArrayLength); var fieldName = names[0]; var fieldValueType = (VarEnum)varDesc.elemdescVar.tdesc.vt; var memberType = GetDeclarationType(varDesc, typeDeclarationType); string asTypeName; if (!TypeNames.TryGetValue(fieldValueType, out asTypeName)) { asTypeName = TypeNames[VarEnum.VT_VARIANT]; } yield return new Declaration(new QualifiedMemberName(typeQualifiedModuleName, fieldName), moduleDeclaration, moduleDeclaration, asTypeName, false, false, Accessibility.Global, memberType, null, Selection.Home); } } } //private IEnumerable<string> GetImplementedInterfaceNames(TYPEATTR typeAttr, ITypeInfo info) //{ // for (var implIndex = 0; implIndex < typeAttr.cImplTypes; implIndex++) // { // int href; // info.GetRefTypeOfImplType(implIndex, out href); // ITypeInfo implTypeInfo; // info.GetRefTypeInfo(href, out implTypeInfo); // var implTypeName = GetTypeName(implTypeInfo); // yield return implTypeName; // //Debug.WriteLine(string.Format("\tImplements {0}", implTypeName)); // } //} private DeclarationType GetDeclarationType(ITypeLib typeLibrary, int i) { TYPEKIND typeKind; 
typeLibrary.GetTypeInfoType(i, out typeKind); DeclarationType typeDeclarationType = DeclarationType.Control; // todo: a better default if (typeKind == TYPEKIND.TKIND_ENUM) { typeDeclarationType = DeclarationType.Enumeration; } else if (typeKind == TYPEKIND.TKIND_COCLASS || typeKind == TYPEKIND.TKIND_INTERFACE || typeKind == TYPEKIND.TKIND_ALIAS || typeKind == TYPEKIND.TKIND_DISPATCH) { typeDeclarationType = DeclarationType.Class; } else if (typeKind == TYPEKIND.TKIND_RECORD) { typeDeclarationType = DeclarationType.UserDefinedType; } else if (typeKind == TYPEKIND.TKIND_MODULE) { typeDeclarationType = DeclarationType.Module; } return typeDeclarationType; } private DeclarationType GetDeclarationType(FUNCDESC funcDesc, VarEnum funcValueType) { DeclarationType memberType; if (funcDesc.invkind.HasFlag(INVOKEKIND.INVOKE_PROPERTYGET)) { memberType = DeclarationType.PropertyGet; } else if (funcDesc.invkind.HasFlag(INVOKEKIND.INVOKE_PROPERTYPUT)) { memberType = DeclarationType.PropertyLet; } else if (funcDesc.invkind.HasFlag(INVOKEKIND.INVOKE_PROPERTYPUTREF)) { memberType = DeclarationType.PropertySet; } else if (funcValueType == VarEnum.VT_VOID) { memberType = DeclarationType.Procedure; } else if (funcDesc.funckind == FUNCKIND.FUNC_PUREVIRTUAL) { memberType = DeclarationType.Event; } else { memberType = DeclarationType.Function; } return memberType; } private DeclarationType GetDeclarationType(VARDESC varDesc, DeclarationType typeDeclarationType) { var memberType = DeclarationType.Variable; if (varDesc.varkind == VARKIND.VAR_CONST) { memberType = typeDeclarationType == DeclarationType.Enumeration ? DeclarationType.EnumerationMember : DeclarationType.Constant; } else if (typeDeclarationType == DeclarationType.UserDefinedType) { memberType = DeclarationType.UserDefinedTypeMember; } return memberType; } } } Should I break it down further? How readable / maintainable is it? 
I left some //todo comments in there, so I'll be coming back to this code in a number of weeks/months - what am I going to be regretting? Answer: I think you can make this a tiny bit clearer: if (!_state.AllDeclarations.Any(declaration => declaration.IsBuiltIn)) { // multiple projects can (do) have same references; avoid adding them multiple times! var references = projects.SelectMany(project => project.References.Cast<Reference>()) .GroupBy(reference => reference.Guid) .Select(grouping => grouping.First()); foreach (var reference in references) { var stopwatch = Stopwatch.StartNew(); var declarations = _comReflector.GetDeclarationsForReference(reference); foreach (var declaration in declarations) { _state.AddDeclaration(declaration); } stopwatch.Stop(); Debug.WriteLine("{0} declarations added in {1}ms", reference.Name, stopwatch.ElapsedMilliseconds); } You don't actually need to construct the whole group at all. You can use a HashSet instead: var deduper = new HashSet<Guid>(); var references = projects .SelectMany(project => project.References.Cast<Reference>()); foreach (var reference in references) { if (!deduper.Add(reference.Guid)) { continue; } // do your stuff. } In general a DistinctBy extension method comes in damn handy: public static IEnumerable<T> DistinctBy<T, TKey>(this IEnumerable<T> source, Func<T, TKey> keySelector) { if (source == null) { throw new ArgumentNullException(nameof(source)); } if (keySelector == null) { throw new ArgumentNullException(nameof(keySelector)); } var deduper = new HashSet<TKey>(); return source.Where(item => deduper.Add(keySelector(item))); } You can get really fancy and pass in an IEqualityComparer<T> if you want to. Which means you could simply do: var references = projects .SelectMany(project => project.References.Cast<Reference>()) .DistinctBy(r => r.Guid); foreach (var reference in references) { You'll notice that I prefer to keep SelectMany simple and add the call to Cast later. 
That's a personal preference thing but I find it easier to scan that way. Although that's still true, as you note in the comments, you can't do that here :) GetDeclarationsForReference seems much too long. You should break it up. e.g. for (var paramIndex = 0; paramIndex < parameterCount; paramIndex++) { yield return CreateParameterDeclaration(/* lots of parameters */); } It's a pain to do it but you'll thank yourself in the long run! Not much of a review... I'll hopefully take a closer look later in the week.
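The DistinctBy idea is language-agnostic; as an illustration only (names are my own), the same lazy, first-wins dedupe-by-key pattern can be sketched as a Python generator:

```python
# Python analogue of the C# DistinctBy extension method: keep the first item
# seen for each key, lazily, without building the groups.
def distinct_by(source, key_selector):
    """Yield each item whose key has not been seen before (first one wins)."""
    seen = set()
    for item in source:
        key = key_selector(item)
        if key not in seen:
            seen.add(key)
            yield item

references = [
    {"guid": "A", "name": "VBA"},
    {"guid": "B", "name": "Excel"},
    {"guid": "A", "name": "VBA"},   # duplicate reference from another project
]
unique = list(distinct_by(references, lambda r: r["guid"]))
print([r["name"] for r in unique])  # ['VBA', 'Excel']
```

As in the C# version, the set membership test does the deduplication in O(1) per item, so no intermediate grouping is ever materialized.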
{ "domain": "codereview.stackexchange", "id": 18896, "tags": "c#, reflection, rubberduck, com" }
Wannier Hamiltonian in Momentum Space
Question: In connection with a previous question, we can write the one-particle Hamiltonian in the Wannier basis acting on a general vector $v$ as: $$ \langle\vec{R},\,\lambda|\hat{H}|v\rangle = \sum_{\lambda',\,\vec{R}'}\sum_{\vec{k}}\langle\psi_{\lambda',\,\vec{k}}|v\rangle\exp\left(i\vec{k}\cdot\vec{R}'\right)\langle\vec{R},\,\lambda|\hat{H}|\vec{R}',\,\lambda'\rangle \tag{1}$$ where $\left\{|\vec{R},\lambda\rangle\right\}_{\vec{R},\lambda}$ is the basis of Wannier functions ($\lambda$ indexes the band) and $|\psi_{\vec{k},\lambda}\rangle$ are the Bloch wave-functions (eigenfunctions of both the Hamiltonian, with eigenvalues indexed by $\lambda$, and the translation-by-lattice-vector operator, with eigenvalues indexed by $\vec{k}$). In this sense we can think of $\left(\sum_{\vec{k}}\langle\psi_{\lambda',\,\vec{k}}|v\rangle\exp\left(i\vec{k}\cdot\vec{R}'\right)\right)=:v_{\lambda',\vec{R}'}$ as the expansion coefficients of $|v\rangle$ in the Wannier basis $\left\{|\vec{R},\lambda\rangle\right\}_{\vec{R},\lambda}$, and the first equation gets written in a natural matrix way as: $$ (\hat{H} |v\rangle)_{\lambda,\vec{R}} = \sum_{\lambda',\,\vec{R}'} H_{\lambda,\vec{R},\,\lambda',\vec{R}'} v_{\lambda',\vec{R}'} \tag{2} $$ My question is: how does one apply the Bloch decomposition to (2)?
By Bloch decomposition, I mean that we write $\psi_{\vec{k},\lambda}(\vec{r})=\exp\left(i\vec{k}\cdot\vec{r}\right)u_{\lambda,\vec{k}}\left(\vec{r}\right)$, where $u_{\lambda,\vec{k}}\left(\vec{r}\right)$ is periodic in $\vec{r}$, and plug this into the Schrödinger equation to get an eigenvalue equation for $u_{\lambda,\vec{k}}\left(\vec{r}\right)$ alone: $$ \left[-\frac{\hbar^2}{2m}(\vec{\nabla}+i\vec{k})^2+V\left(\vec{r}\right)\right]u_{\lambda,\vec{k}}\left(\vec{r}\right) = E_{\lambda}\left(\vec{k}\right) u_{\lambda,\vec{k}}\left(\vec{r}\right) \tag{3}$$ So the question is how to obtain the Bloch basis of $H_{\lambda,\vec{R},\,\lambda',\vec{R}'}$, so as to be able to write (2) only for the periodic "$u$" part of the $|v\rangle$ coefficients in the Wannier basis. In other words, how to write $$ \underbrace{\left[-\frac{\hbar^2}{2m}(\vec{\nabla}+i\vec{k})^2+V\left(\vec{r}\right)\right]}_{H_{\vec{k}}(\vec{r})=\mbox{Bloch-decomposed Hamiltonian}}u_{\lambda,\vec{k}}\left(\vec{r}\right) \tag{4}$$ in the Wannier basis, but acting on a general superposition corresponding to the periodic part of $|v\rangle$ instead of on the "eigenfunction" $u_{\lambda,\vec{k}}\left(\vec{r}\right)$. Answer: So it turns out that the attempt is misguided, because the Wannier functions don't act on the $u$ functions: the $u$ functions live in a different vector space. The correct way to proceed from equation (2) is to apply Bloch's theorem to it once more, noting that the matrix $H_{\lambda,\vec{R},\lambda',\vec{R}'}$ is invariant under $H_{\lambda,\vec{R},\lambda',\vec{R}'}\mapsto H_{\lambda,\vec{R}+\vec{R}'',\lambda',\vec{R}'+\vec{R}''}$, so by Bloch's theorem we should expect the eigenvectors of the matrix $H_{\lambda,\vec{R},\lambda',\vec{R}'}$ to be simultaneous eigenvectors of the operator that translates by $\vec{R}''$.
As a result we could write a general eigenvector $\psi_{\lambda,\vec{R}}$ of $H_{\lambda,\vec{R},\lambda',\vec{R}'}$ as $$\psi_{\lambda,\vec{R},\vec{k}}=\exp\left(i\vec{k}\cdot\vec{R}\right)u_{\lambda,\vec{R},\vec{k}}$$ where $u_{\lambda,\vec{R},\vec{k}}$ obey the condition $$u_{\lambda,\vec{R}+\vec{R}',\vec{k}}=u_{\lambda,\vec{R},\vec{k}}$$ for any Bravais lattice vector $\vec{R}'$. However, this condition means that there is no need to carry on the index $\vec{R}$ for the $u$'s as they are all the same across the entire lattice. Thus, a general eigenvector of $H_{\lambda,\vec{R},\lambda',\vec{R}'}$ can be written as: $$\psi_{\lambda,\vec{R},\vec{k}}=\exp\left(i\vec{k}\cdot\vec{R}\right)u_{\lambda,\vec{k}}$$ Plug this into equation (2), "cancel out" the $\exp\left(i\vec{k}\cdot\vec{R}\right)$ on both sides and get something similar to equation (3) in the Wannier basis.
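The last step can be sanity-checked numerically: any lattice-translation-invariant matrix is diagonalized by plane waves. Below is a hypothetical 1-D, single-band toy model with made-up parameters (not taken from the question), just to illustrate the statement:

```python
import cmath
import math

# Toy model: H_{R,R'} = eps0 on the diagonal and t on nearest neighbours,
# on a periodic ring of N sites, so H is invariant under lattice translations.
N, eps0, t = 6, 0.5, -1.0

def H(r, rp):
    d = (r - rp) % N
    if d == 0:
        return eps0
    if d in (1, N - 1):
        return t
    return 0.0

# Bloch's theorem for the matrix: the plane waves psi_R = e^{ikR} are
# eigenvectors, with band energy E(k) = eps0 + 2 t cos(k).
for m in range(N):
    k = 2 * math.pi * m / N
    psi = [cmath.exp(1j * k * R) for R in range(N)]
    Hpsi = [sum(H(R, Rp) * psi[Rp] for Rp in range(N)) for R in range(N)]
    Ek = eps0 + 2 * t * math.cos(k)
    assert all(abs(Hpsi[R] - Ek * psi[R]) < 1e-9 for R in range(N))
print("plane waves diagonalize the translation-invariant Wannier-basis H")
```

Here the $u$ part is a single number per band (the ring has one site per cell), which is exactly the "no need to carry the index $\vec{R}$" observation above.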
{ "domain": "physics.stackexchange", "id": 17227, "tags": "quantum-mechanics, solid-state-physics" }
Android Turtlebot Teleop Source Code Structure (Hydro)
Question: Hi guys. I'm working on the code structure of the teleop for the Turtlebot (Hydro). Is there any way to find out what code is being sent out when I shift the joystick up, down, left, etc.? I understand that the joystick itself is being imported from: import org.ros.android.view.VirtualJoystickView; To the best of my knowledge, I can't really figure out which part of the whole source code actually has the code that I'm looking for. I would like to play around with the GUI itself. Well, let's say just inserting 4 directional buttons (up, down, left, right) to move the turtlebot instead of the virtual joystick. What is the code that needs to be sent out here for each button? How may I go about doing this? Any help will be greatly appreciated. Thank you! :) Originally posted by syaz nyp fyp on ROS Answers with karma: 167 on 2014-08-03 Post score: 2 Answer: Not sure if you are looking for this? You have to publish Twist messages with velocity info after detecting a touch on the buttons. Here is a code extract; refer to this code for further info: import org.ros.message.geometry_msgs.Twist; import org.ros.node.topic.Publisher; private Publisher<org.ros.message.geometry_msgs.Twist> publisher; private org.ros.message.geometry_msgs.Twist currentVelocityCommand = new org.ros.message.geometry_msgs.Twist(); public boolean onTouchEvent(MotionEvent event) { // other stuff case MotionEvent.ACTION_UP: { currentVelocityCommand.linear.x = linearV; // currentVelocityCommand.linear.y = 0; currentVelocityCommand.linear.z = 0; currentVelocityCommand.angular.x = 0; currentVelocityCommand.angular.y = 0; currentVelocityCommand.angular.z = -angularV; // publisher.publish(currentVelocityCommand); } } Originally posted by bvbdort with karma: 3034 on 2014-08-04 This answer was ACCEPTED on the original site Post score: 1 Original comments Comment by syaz nyp fyp on 2014-08-04: @bvbdort, your invaluable help eventually led me to this..
(which was what i was looking for) : http://docs.ros.org/hydro/api/android_core/html/VirtualJoystickView_8java.html http://docs.ros.org/hydro/api/android_core/html/VirtualJoystickView_8java_source.html thank you so much for the help :)
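Stripped of the Android/rosjava plumbing, the four-button idea is just a lookup from button to the (linear.x, angular.z) pair you write into the Twist message. A language-agnostic sketch in Python — the speed values are placeholders I chose, and the actual publishing would go through your node's Twist publisher:

```python
# Sketch only: button-to-velocity mapping behind a 4-button teleop pad.
LINEAR_SPEED = 0.2    # m/s, assumed
ANGULAR_SPEED = 0.8   # rad/s, assumed

BUTTON_TWISTS = {
    "up":    (LINEAR_SPEED, 0.0),    # drive forward
    "down":  (-LINEAR_SPEED, 0.0),   # drive backward
    "left":  (0.0, ANGULAR_SPEED),   # rotate counter-clockwise
    "right": (0.0, -ANGULAR_SPEED),  # rotate clockwise
    "stop":  (0.0, 0.0),             # published on button release
}

def twist_for(button):
    """Return the (linear.x, angular.z) pair to publish for a button press."""
    return BUTTON_TWISTS[button]

print(twist_for("up"))  # (0.2, 0.0)
```

All other Twist fields stay zero for a differential-drive base, matching the answer's code, which only sets linear.x and angular.z.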
{ "domain": "robotics.stackexchange", "id": 18881, "tags": "ros, turtlebot, ros-hydro, source, teleop" }
Can rigor mortis change the anatomical position in which a person died?
Question: I've been told as an undergrad in anthropology that the flexed position of the body in which some Neanderthal skeletons were found indicates that they were deliberately buried. Apart from the good preservation of the remains, one of the arguments for deliberate burial was that if the individual had died in this very flexed position during sleep, rigor mortis or post-mortem bloating would have caused the body, if not already buried, to extend (see e.g. Villa's [1989:325] comments to Gargett 1989). Barring sudden death by rockfall and ceiling collapse, this suggests that the body was completely buried before rigor mortis set in, or was deliberately placed in this sleeping position after rigor mortis faded by those that buried it. But I thought I had heard in a biology course that rigor mortis stiffens the muscles, but does not contract them (see e.g. Faux et al. 2006). If someone died during sleep, could rigor mortis really straighten a highly flexed death position? Answer: Rigor mortis does not cause movement. It causes rigor (Latin for stiffness.) This is only a comparison, but think of it this way. Superglue doesn't shrink or change the shape of the connection of the two things glued together; it just holds them there with powerful 'stiffness'. So imagine that at death, part of our muscle cells exuded superglue. You would get stiff (the muscle to bone/fascia connections would not change) but you would not move. Immediately after death, some muscles relax; most obviously the eyelids open partially, the mouth, hands, and sphincters relax (you won't have to pry a gold coin from someone's dead hands like in the movies.) But aside from general relaxation, you very closely stay in the position you were in when you die. Physiologically, this is roughly how muscles work: muscle cells are filled with filaments, actin and myosin (I'm leaving out tropomyosin and troponin for brevity.) 
Contraction occurs when the myosin can bind to the actin, break the bond and reform it on the next spot on actin, and repeat, like crawling along the actin. ATP and Calcium ions are necessary for this crawling movement. In death, ATP is quickly depleted (aerobically then anaerobically), whereas Calcium floods the cell, and the bonds within each muscle cell which were formed when death occurred become stable. The muscle does not contract any more; it just stiffens. But since stiffening requires contraction in life, it's called contraction in death. As a parallel, touch your fist to your shoulder. Without resistance, there is some contraction there. Now keep it there while someone tries to move it; that requires much more forceful contraction, but the position of your arm doesn't change (remember, you're keeping your fist to your shoulder.) All this is thanks to the number and stability of those tiny bonds between actin and myosin. Those tiny bonds develop and stabilize in death. But they don't last forever, hence rigor reverses. Rigor mortis has been studied extensively for an astonishing number of reasons including forensic studies and how it affects the meat we eat. I have seen a lot of death; one of my duties was to "pronounce" people dead. Sometimes when someone died during the night in a nursing home, a nurse would wait until morning to call - what difference did it make, anyway? - and I saw bodies in rigor. They were not all drawn up as in a fetal position. The nurses would usually straighten the patients out before rigor set in to make it more dignified and easier to transport the patient. If rigor caused movement, they wouldn't be straightened yet stiff (rigor is something one needs to document, as is lividity.) Rigor Mortis
{ "domain": "biology.stackexchange", "id": 6767, "tags": "death, anthropology, forensics, decomposition" }
calculation monocationic elements
Question: How do I calculate the number of monocationic millimoles? If I have concentrations of $SiO_2$, $TiO_2$, $K_2O$, $MgO$, $MnO$, $Na_2O$, etc., how can I get $Na^+$, $Mg^{2+}$, $Mn^{2+}$, ..? This is part of an attempt to calculate the weathering intensity scale suggested by Meunier et al. (2013), but it is a question applicable to any kind of use of oxides, so I post it apart from that one. See a related WIS question. Reference: Meunier, A.; Caner, L.; Hubert, F.; El Albani, A. and Prêt, D. (2013). The weathering intensity scale (WIS): An alternative approach of the Chemical Index of Alteration (CIA). American Journal of Science, 313(2), p.113-143. doi: 10.2475/02.2013.03 Answer: In these oxides, oxygen is divalent negative. An oxide has to be charge balanced, meaning the total has to be zero. Therefore, something like CaO means you have Ca2+O2-. Something like SiO2 is going to be Si4+2O2-. Something like Na2O is going to be 2Na+O2-. As you can see, the charges all sum to zero, given that oxygen is negative two. Let's work out an example: you have 1 mole each of K2O, MgO, and ZrO2. You will have one mole of Mg2+, one mole of Zr4+, and two moles of K+.
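A minimal sketch of this bookkeeping, assuming you already have millimoles of each oxide (the cation counts come straight from the formula units; converting concentrations to mmol via molar mass is left out):

```python
# Millimoles of oxide -> millimoles of cation, via the cation count per
# formula unit (Na2O -> 2 Na+, K2O -> 2 K+, SiO2 -> 1 Si4+, ...).
CATIONS_PER_OXIDE = {
    "SiO2": ("Si4+", 1),
    "TiO2": ("Ti4+", 1),
    "K2O":  ("K+",   2),
    "MgO":  ("Mg2+", 1),
    "MnO":  ("Mn2+", 1),
    "Na2O": ("Na+",  2),
    "ZrO2": ("Zr4+", 1),
}

def cation_mmol(oxide_mmol):
    """Map {oxide: mmol} to {cation: mmol} by multiplying by the cation count."""
    out = {}
    for oxide, mmol in oxide_mmol.items():
        cation, count = CATIONS_PER_OXIDE[oxide]
        out[cation] = out.get(cation, 0.0) + count * mmol
    return out

# the worked example from the answer, with 1 mmol of each oxide
print(cation_mmol({"K2O": 1.0, "MgO": 1.0, "ZrO2": 1.0}))
```

This reproduces the answer's example: 1 mmol each of K2O, MgO, and ZrO2 gives 2 mmol K+, 1 mmol Mg2+, and 1 mmol Zr4+.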
{ "domain": "earthscience.stackexchange", "id": 1245, "tags": "geochemistry, atmospheric-chemistry, biogeochemistry" }
Conical vs Simple Pendulum
Question: I don't understand why the tension $T$ in a conical pendulum and a simple pendulum are different. In a simple pendulum, one would say that the tension of the rope is $T=mg \cos(\theta)$. (Figure: simple pendulum — http://n8ppq.net/site_n8ppq/Physics/pendulum_files/image001.gif) However, in a conical pendulum (describing a circular motion), $mg=T \cos(\theta)$. The only difference I see in the set-up of the two cases is that in the second one there is a velocity component that makes the bob go around in a circle. I know that in the conical pendulum, the component $T \sin(\theta)$ would give the centripetal acceleration of the circular motion. I've seen this everywhere. The two cases look pretty much the same to me, so I would be tempted to say one of them (rather, the second one) is wrong. Answer: Both pendulums are correct in their respective situations. We must remember that Newton's second law dictates that the vector sum of forces on an object must be equal to the mass of the object times the acceleration of the object. \begin{equation} \sum_n F_n = ma \end{equation} In the first pendulum the object is swinging side to side, so we know that the acceleration of the object is orthogonal to the arm of the pendulum, pointing at an angle $\theta$ below the horizontal towards the center of oscillation of the pendulum. This means that the forces in line with the arm of the pendulum must be equal and opposite, since there is no motion in this direction, and we see that $T=mg\cos{\theta}$ is true for this pendulum. For the second (conical) pendulum the object is moving at the same vertical height in a circular path of radius $r$. This tells us that the acceleration of the object points horizontally inward at an angle of $\pi/2-\theta$ with respect to the arm of the pendulum.
We also know that for circular motion Newton's second law can be rewritten as \begin{equation} \sum_n F_n = \frac{mv^2}{r} \end{equation} Since there is no downward acceleration in the conical pendulum the vertical forces must be in equilibrium such that $T\cos{\theta}=mg$. Moral of the story? Your choice of coordinate axes is important and net acceleration must be accounted for when making free-body diagrams.
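The two results can be put side by side numerically. A small sketch with assumed values (m = 1 kg, θ = 30°, g = 9.81 m/s²); note that $T = mg\cos\theta$ for the simple pendulum holds exactly at a turning point of the swing, where the speed — and hence the $mv^2/L$ term along the rod — is zero:

```python
import math

def simple_pendulum_tension(m, theta, g=9.81):
    """T = m g cos(theta): rod-direction balance at a turning point (v = 0)."""
    return m * g * math.cos(theta)

def conical_pendulum_tension(m, theta, g=9.81):
    """T cos(theta) = m g: vertical balance for steady circular motion."""
    return m * g / math.cos(theta)

m, theta = 1.0, math.radians(30)
print(round(simple_pendulum_tension(m, theta), 3))   # 8.496
print(round(conical_pendulum_tension(m, theta), 3))  # 11.328
```

The contrast makes the point of the answer concrete: for the same angle, one tension is smaller than $mg$ and the other is larger, because the acceleration (and hence the force balance) is different in the two situations.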
{ "domain": "physics.stackexchange", "id": 12521, "tags": "newtonian-mechanics, reference-frames, centrifugal-force, centripetal-force, free-body-diagram" }
Why to use non-inductive resistances in Callendar-Griffiths bridge?
Question: Worsnop, in his Advanced Practical Physics for Students, states that: All the resistances should be 'non-inductive', for in this method it will be seen that the galvanometer is permanently connected in the circuit, and the battery takes the position in the sliding contact. This is essential, for we must balance the resistance of the platinum at the temperature which is fixed by the surroundings, so that the current should not pass for any appreciable time and cause a heating in the spiral. The accompanying figures are shown below. But I think that having inductance in the circuit is instead beneficial for the experiment, because the inductance will delay the time for the current to reach its full value, preventing heating to some extent in the PRT coil. Also, when the contact is removed the current will go to zero immediately. Question: So, who is correct? Answer: As you will see from the circuit diagram, the resistors are used in a variation of the Wheatstone bridge arrangement. With such an arrangement a balance point — zero current through the galvanometer — needs to be found. This is achieved by tapping a jockey (a spade-ended conducting contact) along a uniform resistance wire, $A$, until the zero-current position is found. Inductance in the circuit will produce an emf whenever any of the currents change, which happens as a null deflection on the galvanometer is being sought while the jockey is removed from and touched onto the resistance wire. This would make finding the null position more difficult. You may think that a steady reading of zero current is easy to find directly, but it is easier to find the "exact" zero-current position by looking for no deflection between two positions, one on either side, for which there are small deflections in opposite directions.
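The balance condition itself is purely resistive, which is why inductance only matters during the make-and-break transients as the jockey taps the wire — once balanced, the ratio of the arms alone decides the null. A sketch of the idealized steady-state bridge (generic Wheatstone arms with hypothetical values, not the exact Callendar-Griffiths layout):

```python
# A source V feeds two dividers R1-R3 and R2-R4; the galvanometer reads the
# voltage between the divider midpoints. The null condition is purely
# resistive: V_g = 0 exactly when R1 * R4 == R2 * R3.
def galvanometer_voltage(V, R1, R2, R3, R4):
    """Detector voltage for an ideal (infinite-impedance) galvanometer."""
    return V * (R3 / (R1 + R3) - R4 / (R2 + R4))

# balanced: 100 * 20 == 200 * 10
print(abs(galvanometer_voltage(2.0, 100.0, 200.0, 10.0, 20.0)) < 1e-12)  # True
# a slightly different platinum resistance unbalances the bridge
print(galvanometer_voltage(2.0, 100.0, 200.0, 10.0, 21.0) != 0.0)        # True
```

During the transient, an inductive arm adds an L·di/dt term to this balance, so the galvanometer can deflect even at the true resistive null — which is exactly the difficulty the answer describes.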
{ "domain": "physics.stackexchange", "id": 61529, "tags": "electric-circuits, electrical-resistance, electronics, inductance, electrical-engineering" }
Polynomial size Boolean circuit for counting number of bits
Question: Given a natural number $n \geq 1$, I am looking for a Boolean circuit over $2n$ variables, $\varphi(x_1, y_1, \dots, x_n, y_n)$, such that an assignment makes the circuit output true if and only if it satisfies $$\sum_{i = 1}^{i = n} (x_i + y_i) \not\equiv n \bmod 3$$ I should specify that I am looking for a Boolean circuit, not necessarily a Boolean formula as it is usually written in Conjunctive Normal Form (CNF). This is because when written in CNF, a formula like the one above has a trivial representation where the number of clauses is approximately $\frac{4^n}{3}$, as it contains a clause for every assignment $(x_1, y_1, \dots, x_n, y_n)$ whose bits sum to a value congruent with $n \bmod 3$. Constructing such a formula would therefore take exponential time. I have been told that this formula admits a Boolean circuit of size polynomial in $n$. However, so far I have been unable to find it. I could use some help; thanks. Answer: You can write a straight-line program (an equivalent way to define a circuit) that computes the Boolean variables $z_{i,j}=[x_1+y_1+\cdots+x_i+y_i \equiv j \pmod{3}]$ (where $i=1,\ldots,n$ and $j=0,1,2$, with the second index taken modulo 3 below) as follows: $z_{1,0} = \lnot x_1 \land \lnot y_1$. $z_{1,1} = (x_1 \land \lnot y_1) \lor (\lnot x_1 \land y_1)$. $z_{1,2} = x_1 \land y_1$. $z_{i+1,j} = (z_{i,j} \land \lnot x_{i+1} \land \lnot y_{i+1}) \lor (z_{i,j-1} \land ((x_{i+1} \land \lnot y_{i+1}) \lor (\lnot x_{i+1} \land y_{i+1}))) \lor (z_{i,j-2} \land x_{i+1} \land y_{i+1})$ The output of the circuit is $o = \lnot z_{n,n \bmod 3}$. The circuit has size $O(n)$.
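The straight-line program can be checked exhaustively for small $n$. A direct Python transcription (using `x != y` as shorthand for the XOR $(x \land \lnot y) \lor (\lnot x \land y)$):

```python
from itertools import product

def circuit(xs, ys):
    """Evaluate the O(n) straight-line program from the answer."""
    n = len(xs)
    # z[j] is True iff x1 + y1 + ... + xi + yi ≡ j (mod 3), for the current i
    z = [not xs[0] and not ys[0], xs[0] != ys[0], xs[0] and ys[0]]
    for i in range(1, n):
        x, y = xs[i], ys[i]
        z = [(z[j] and not x and not y)
             or (z[(j - 1) % 3] and (x != y))
             or (z[(j - 2) % 3] and (x and y))
             for j in range(3)]
    return not z[n % 3]

# exhaustive check against the defining condition, for small n
for n in range(1, 5):
    for bits in product([False, True], repeat=2 * n):
        xs, ys = bits[0::2], bits[1::2]
        assert circuit(xs, ys) == ((sum(xs) + sum(ys)) % 3 != n % 3)
print("straight-line program agrees with the spec for n = 1..4")
```

Each iteration of the loop adds a constant number of gates, which is the $O(n)$ size bound stated in the answer.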
{ "domain": "cs.stackexchange", "id": 14344, "tags": "complexity-theory, logic, boolean-algebra, polynomials" }
Quantum Phase Estimation - Should be getting exact answer
Question: Having read the Qiskit demonstration in the Qiskit textbook on how to implement Quantum Phase Estimation, I tried to do so on PennyLane's framework. My code pretty well follows what was done in Qiskit, with a few nuances according to PennyLane, but when I run it over many shots, I get varying answers. For reference, I was implementing the T-Gate (exactly what is done in Qiskit textbook). While my code yields varying possibilities, Qiskit's strictly obtains 001 (which, through post-processing, shows that the applied phase was 1/8). Perhaps there is something wrong with the program I wrote? Is the code supposed to always yield 001? dev = qml.device('default.qubit', wires = 4, shots=1) @qml.qnode(dev) def circuit(): qml.PauliX(wires = 3) for qubit in range(3): qml.Hadamard(qubit) def t_gate(j): qml.T(wires = j) repetitions = 1 n = len(dev.wires) - 1 for x in range(n-1, -1, -1): for i in range(repetitions): qml.ctrl(t_gate, control = x)(3) repetitions *= 2 def ops1(wires = [0, 2]): qml.templates.QFT(wires = [0, 2]) qml.adjoint(ops1)(wires = [0, 2]) return qml.sample() fig, ax = qml.draw_mpl(circuit)() fig.show() for i in range(0, 10): print(circuit()) Results: [0 0 1 1] [0 1 1 1] [1 1 1 1] [0 1 1 1] [1 1 1 1] [0 0 1 1] [1 0 1 1] [0 0 1 1] [0 0 1 1] [0 0 1 1] Answer: Your circuit is very close, with only minor modifications needed to match the result from the Qiskit textbook. The counting registers are the first three wires of the circuit; so we need to apply the inverse QFT to wires 0, 1, and 2. Similarly, we want to measure samples only from wires 0, 1, and 2. 
Here is an updated version of your code with these two changes: import pennylane as qml import numpy as np dev = qml.device("default.qubit", wires=4, shots=10) @qml.qnode(dev) def circuit(): qml.PauliX(wires=3) for qubit in range(3): qml.Hadamard(wires=qubit) repetitions = 1 for x in range(2, -1, -1): for i in range(repetitions): qml.ControlledPhaseShift(np.pi / 4, wires=[x, 3]) repetitions *= 2 qml.adjoint(qml.QFT)(wires=[0, 1, 2]) return qml.sample(wires=[0, 1, 2]) print(qml.draw(circuit)()) print(circuit()) This gives the results: 0: ──H────────────────────────────────────────────────────────────────────────────────────────────╭ControlledPhaseShift(0.785)──╭ControlledPhaseShift(0.785)──╭ControlledPhaseShift(0.785)──╭ControlledPhaseShift(0.785)──╭QFT⁻¹──╭┤ Sample[basis] 1: ──H────────────────────────────────╭ControlledPhaseShift(0.785)──╭ControlledPhaseShift(0.785)──│─────────────────────────────│─────────────────────────────│─────────────────────────────│─────────────────────────────├QFT⁻¹──├┤ Sample[basis] 2: ──H──╭ControlledPhaseShift(0.785)──│─────────────────────────────│─────────────────────────────│─────────────────────────────│─────────────────────────────│─────────────────────────────│─────────────────────────────╰QFT⁻¹──╰┤ Sample[basis] 3: ──X──╰ControlledPhaseShift(0.785)──╰ControlledPhaseShift(0.785)──╰ControlledPhaseShift(0.785)──╰ControlledPhaseShift(0.785)──╰ControlledPhaseShift(0.785)──╰ControlledPhaseShift(0.785)──╰ControlledPhaseShift(0.785)───────────┤ [[0 0 1] [0 0 1] [0 0 1] [0 0 1] [0 0 1] [0 0 1] [0 0 1] [0 0 1] [0 0 1] [0 0 1]] matching the expected result from applying phase estimation. Note that I also made some other minor modifications: Rather than looping over the QNode executions, if you set shots=N in the device, you will get a somewhat significant speed boost! 
While qml.ctrl(qml.T, control=x)(wires=3) works, I have slightly modified this to use qml.ControlledPhaseShift, simply because it prints out slightly nicer in the circuit drawer. qml.T and qml.QFT are directly callable, so they can be passed directly to qml.ctrl and qml.adjoint; no need to wrap them in functions :)
{ "domain": "quantumcomputing.stackexchange", "id": 3393, "tags": "programming, pennylane" }
How to handle disconnection/reconnection to the ROS master
Question: I have written a ROS node, just publishing to a topic, using roscpp that runs on a separate machine to the ROS master. It is based on the simple publisher/subscriber tutorials. If the ROS master gets restarted my node no longer appears when I query the new master with rosnode list. It needs to be able to detect that the ROS master has gone down and re-advertise its topics when it comes back up. I haven't managed to get this to work without killing and restarting my entire process. I can detect that the connection to the ROS master has dropped by periodically checking ros::master::check() (would love a better way). When it returns true again after the master comes back up, none of the following work: Call .advertise() on the old NodeHandle and get a new publisher. I thought this would work but it doesn't seem to. Create a new NodeHandle and re-advertise on that. Re-run ros::init. Calling ros::shutdown and then ros::init core dumps. Also: I'm assuming ros::master::check() returns false if the connection is interrupted as well as if the master is restarted; do I need to handle that case differently, or will re-advertising on the same master be OK? Originally posted by techno74 on ROS Answers with karma: 90 on 2015-02-03 Post score: 6 Original comments Comment by EpicZa on 2018-03-21: Ever find a solution to this? Answer: Wrapping the ros::init() and NodeHandle creation seems to work. In a bigger project where I have passed the ROS NodeHandle around and attached publishers and things to it I can't get it to work yet.
main.c #include "ros/ros.h" #include "std_msgs/String.h" #include "rosConnection.H" #include <sstream> #include <iostream> using namespace std; int main(int argc, char **argv) { int count = 0; bool reconnect = false; do { rosConnectionHandler_t rc (argc, argv); ros::Publisher chatter_pub = rc.nodeHandle ()->advertise<std_msgs::String>("chatter", 1000); ros::Rate loop_rate(10); while (ros::ok() && ros::master::check()) { std_msgs::String msg; stringstream ss; ss << "hello world " << count; msg.data = ss.str(); cout << msg.data << endl; chatter_pub.publish(msg); ros::spinOnce(); loop_rate.sleep(); ++count; } cout << "ROS OK: " << (ros::ok()?"true":"false") << endl; cout << "MASTER UP: " << (ros::master::check()?"true":"false") << endl; reconnect = !ros::master::check() && ros::ok(); if (reconnect) cout << "Attempting to reconnect" << endl; } while (reconnect); return 0; } rosConnection.H #pragma once #include <ros/ros.h> class rosConnectionHandler_t { public: rosConnectionHandler_t (int argc_, char** argv_) { ros::init (argc_, argv_, "testnode"); nh = new ros::NodeHandle(); } ~rosConnectionHandler_t () { delete nh; } ros::NodeHandle *nodeHandle() { return nh; }; private: ros::NodeHandle *nh; }; Originally posted by techno74 with karma: 90 on 2015-02-05 This answer was ACCEPTED on the original site Post score: 1 Original comments Comment by VictorLamoine on 2018-03-30: This works but for some reason ROS_INFO_STREAM, ROS_WARN_STREAM and ROS_ERROR_STREAM does not seem to work after the first re-connection
{ "domain": "robotics.stackexchange", "id": 20783, "tags": "ros, rosmaster, roscpp" }
Writing scalar quantum field as mode expansion form for interacting theory
Question: We know that for the Klein-Gordon equation, the quantum field can be written in the form $$\phi(\mathbf{x},t) = \int \frac{d^3p}{(2\pi)^3}\frac{1}{\sqrt{2\omega_p}}[a_p e^{-ipx} + a^\dagger_p e^{ipx}]$$ It was stated somewhere that in an interacting theory, I can write the scalar field in the same form as this for a fixed time $t$, with the creation and annihilation operators replaced by $a_p(t)$ and $a_p^\dagger (t)$, which satisfy the same algebra as in the free theory. The reason given was that the Hilbert space is the same at every time due to time-translation invariance, which I don't understand as a justification of this. Can someone please elaborate on this point? Thank you. Answer: Let's understand this statement in the Hamiltonian formalism, where the KG equation is equivalent to having the free scalar field Hamiltonian and the Heisenberg equations of motion for the free fields. Then $\phi(\vec{x},t) = \int \frac{d^3 p}{(2\pi)^3}\frac{1}{\sqrt{2\omega_p}}\left( a_p e^{ipx} + a^\dagger_p e^{-ipx}\right)$ and the canonical conjugate $\pi(\vec{x},t) = \dot{\phi}(\vec{x},t)$, are the most general solution. Now let's consider an interacting Hamiltonian $H = H_0 + \lambda V$, and DEFINE $$\Phi(\vec{x},t)\equiv e^{iHt}e^{-iH_0 t} \phi(\vec{x},t) e^{iH_0 t}e^{-iHt}$$ $$\Pi(\vec{x},t)\equiv e^{iHt}e^{-iH_0 t} \pi(\vec{x},t) e^{iH_0 t}e^{-iHt}$$ Then it is straightforward to show that $\Phi$ and $\Pi$ satisfy the canonical commutation relations, as well as the new interacting Heisenberg equations (notice that in the definition we use $H(\phi,\pi)$ and in the Heisenberg equations $H(\Phi,\Pi)$; we are allowed to do so because both are equal!). (Hint: to see that the new fields satisfy the full Heisenberg equations, notice that $\Phi(\vec{x},t) = e^{iHt}\phi(\vec{x},0)e^{-iHt}$) So in this sense they are the interacting fields, written in terms of the free fields.
Then we conclude that $$\Phi(\vec{x},t) = \int \frac{d^3 p}{(2\pi)^3}\frac{1}{\sqrt{2\omega_p}}\left( \mathbb{a}(t)_p e^{ipx} + \mathbb{a}(t)^\dagger_p e^{-ipx}\right)$$ Where $$\mathbb{a}(t)_p\equiv e^{iHt}e^{-iH_0 t} a_p e^{iH_0 t}e^{-iHt}$$ and $\mathbb{a}_p(t)$ satisfies the required commutation relations as a consequence of its parents $\Phi$ and $\Pi$ doing so, or as can be verified directly using those of $a_p$. Notice that this description is particularly useful for weakly coupled theories, since then $\mathbb{a}_p = a_p + \mathcal{O}(\lambda,a_p^2)$, then all our particle spectrum can be inferred from that of the free theory, unlike when this expansion is no longer valid, and the new creation operator can create states completely different in nature from what's contained in the free theory.
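To spell out the hint given in the answer above (my own filling-in of the omitted step): the free fields evolve with $H_0$, so $\phi(\vec{x},t) = e^{iH_0 t}\phi(\vec{x},0)e^{-iH_0 t}$, and the definition collapses to $$\Phi(\vec{x},t) = e^{iHt}e^{-iH_0 t}\,\phi(\vec{x},t)\,e^{iH_0 t}e^{-iHt} = e^{iHt}\,\phi(\vec{x},0)\,e^{-iHt}.$$ Differentiating then gives $$\partial_t \Phi(\vec{x},t) = i\,e^{iHt}\,[H,\phi(\vec{x},0)]\,e^{-iHt} = i\,[H,\Phi(\vec{x},t)],$$ which is exactly the Heisenberg equation of motion for the full Hamiltonian.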
{ "domain": "physics.stackexchange", "id": 20230, "tags": "quantum-field-theory, klein-gordon-equation" }
Problems conjectured but not proven to be easy
Question: We have many problems, like factorization, that are strongly conjectured, but not proven, to be outside P. Are there any questions with the opposite property, namely, that they are strongly conjectured but not proven to be inside P? Answer: Two decades ago, one of the plausible answers would be primality testing: there were algorithms that ran in randomized polynomial time, and algorithms that ran in deterministic polynomial time under a plausible number-theoretic conjecture, but no known deterministic polynomial-time algorithms. In 2002, that changed with a breakthrough result by Agrawal, Kayal, and Saxena that primality testing is in P. So, we can no longer use that example. I would put polynomial identity testing as an example of a problem that has a good chance of being in P, but where no one has been able to prove it. We know of randomized polynomial-time algorithms for polynomial identity testing, but no deterministic algorithms. However, there are plausible reasons to believe that the randomized algorithms can be derandomized. For instance, in cryptography it is strongly believed that highly secure pseudorandom generators exist (e.g., AES-CTR is one reasonable candidate). And if that is true, then polynomial identity testing should be in P. (For instance, use a fixed seed, apply the pseudorandom generator, and use its output in lieu of random bits; it would take a tremendous conspiracy for this to fail.) This can be made formal using the random oracle model; if we have hash functions that can be suitably modelled by the random oracle model, then it follows that there is a deterministic polynomial-time algorithm for polynomial identity testing. For more elaboration of this argument, see also my answer on a related subject and my comments on a related question.
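To make the randomized side of this concrete, here is a toy sketch (mine, not from any of the cited work) of the standard Schwartz-Zippel identity test: evaluate both polynomials, given as black boxes, at random points of a large prime field; distinct low-degree polynomials can agree at a random point only with small probability.

```python
import random

def probably_identical(p, q, num_vars, trials=20, prime=2**61 - 1):
    """Randomized identity test for black-box polynomials p, q over
    F_prime. A disagreement at any point proves p != q (no error
    possible); agreement on all trials means p == q except with
    probability at most (d/prime)**trials for total degree d,
    by the Schwartz-Zippel lemma."""
    for _ in range(trials):
        point = [random.randrange(prime) for _ in range(num_vars)]
        if p(*point) % prime != q(*point) % prime:
            return False
    return True

f = lambda x, y: (x + y) ** 2
g = lambda x, y: x * x + 2 * x * y + y * y   # the same polynomial as f
h = lambda x, y: x * x + y * y               # differs from f by 2xy

print(probably_identical(f, g, 2))  # -> True
print(probably_identical(f, h, 2))  # -> False (except with tiny probability)
```

The derandomization idea mentioned above amounts to replacing the random points with the output of a pseudorandom generator run from a fixed seed.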
{ "domain": "cs.stackexchange", "id": 7254, "tags": "complexity-theory, polynomial-time" }
Nitration of N-phenylbenzamide
Question: Why, during nitration of N-phenylbenzamide, is the $\ce{NO2}$ group directed to the para-position of the ring attached to the nitrogen atom? Okay, I understand that this is due to the para-directing effect of the $\ce{>NH}$ group. But the carbonyl $\ce{>C=O}$ group has a meta-directing effect, so why is $\ce{NO2}$ not directed to the meta-position of the benzene ring attached to the carbonyl group? In the following reaction (1) $\ce{NO2}$ group at meta position w.r.t. Ring 2 (2) $\ce{NO2}$ group at para position w.r.t. Ring 1 (3) $\ce{NO2}$ group at para position w.r.t. Ring 2 (4) $\ce{NO2}$ group at meta position w.r.t. Ring 1 Answer: You should not be looking at $\ce{-NH-CO\bond{-}}$ as two different functional groups, i.e. as $\ce{-NH-}$ and $\ce{-CO-}$, since in $\ce{-(NH-CO)\bond{-}}$ the $\ce{N}$'s lone pair is in resonance with the carbonyl group. The left-hand phenyl ring is more activated than the right-hand phenyl ring, hence the electrophile preferentially attacks the left-hand phenyl ring. The amide group is too weak a base to be protonated by the acid, and therefore it does not direct the electrophile to the meta position. The ortho position is hindered due to steric hindrance from the other phenyl ring. Therefore the attack takes place at the para position of the left-hand phenyl ring, so the answer should be option (2).
{ "domain": "chemistry.stackexchange", "id": 11797, "tags": "organic-chemistry" }
Is (rest) mass quantized?
Question: I learned today in class that photons and light are quantized. I also remember that electric charge is quantized as well. I was thinking about these implications, and I was wondering if (rest) mass was similarly quantized. That is, if we describe certain finite irreducible masses $x$, $y$, $z$, etc., then all masses are integer multiples of these irreducible masses. Or do masses exist along a continuum, as charge and light were thought to exist on before the discovery of photons and electrons? (I'm only referring to invariant/rest mass.) Answer: There are a couple different meanings of the word that you should be aware of: In popular usage, "quantized" means that something only ever occurs in integer multiples of a certain unit, or a sum of integer multiples of a few units, usually because you have an integer number of objects each of which carries that unit. This is the sense in which charge is quantized. In technical usage, "quantized" means being limited to certain discrete values, namely the eigenvalues of an operator, although those discrete values will not necessarily be multiples of a certain unit. As far as we know, mass is not quantized in either of these ways... mostly. But let's leave that aside for a moment. For fundamental particles (those which are not known to be composite), we have tabulated the masses, and they are clearly not multiples of a single unit. So that rules out the first meaning of quantization. As for the second, there is no known operator whose eigenvalues correspond to (or even are proportional to) the masses of the fundamental particles. Many physicists suspect that such an operator exists and that we will find it someday, but so far there is no evidence for it, and in fact there is basically no concrete evidence that the masses of the fundamental particles have any particular significance. This is why I would not say that mass is quantized. When you consider composite particles, though, things get a little trickier. 
Much of their mass comes from the kinetic energy and binding energy of the constituents, not from the masses of the constituents themselves. For instance, only a small part of the mass of the proton comes from the masses of its quarks. Most of the proton's mass is actually the kinetic energy of the quarks and gluons. These particles are moving around inside the proton even when the proton itself is at rest, so their energy of motion contributes to the rest mass of the proton. There is also a contribution from the potential energy that all the constituents of the proton have by virtue of being subject to the strong force. This contribution, the binding energy, is actually negative. When you put together the mass energy of the quarks, the kinetic energy, and the binding energy, you get the total energy of what we call a "bound system of $\text{uud}$ quarks." Why not just call it a proton? Well, there is actually a particle exactly like the proton but with a higher mass, the delta baryon $\Delta^+$. Technically, a $\text{uud}$ bound system could be either a proton or a delta baryon. But we've observed that when you put these three quarks together, you only ever get $\mathrm{p}^+$ (with a mass of $938\ \mathrm{MeV/c^2}$) or $\Delta^+$ (with a mass of $1232\ \mathrm{MeV/c^2}$). You can't get any old mass you want. This is a very strong indication that the mass of a $\text{uud}$ bound state is quantized in the second sense. Now, the calculations involved are very complicated, so I'm not sure if the operator which produces these two masses as eigenvalues can be derived in detail, but there's basically no doubt that it does exist. You can take other combinations of quarks, or even include leptons and other particles, and do the same thing with them - that is, given any particular combination of fundamental particles, you can make some number of composite particles a.k.a. bound states, and the masses of those particles will be quantized given what you're starting from. 
But in general, if you start without assuming the masses of the fundamental particles, we don't know that mass is quantized at all.
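To put a number on the proton example above, here is a back-of-the-envelope sketch. The current-quark masses used (u about 2.2 MeV/c², d about 4.7 MeV/c²) are approximate, scheme-dependent values that are my assumption here, not taken from the answer:

```python
m_u, m_d = 2.2, 4.7         # approximate current-quark masses, MeV/c^2
m_proton = 938.3            # proton mass, MeV/c^2

quark_mass = 2 * m_u + m_d  # a uud bound state
fraction = quark_mass / m_proton
print(f"quark masses: {quark_mass:.1f} MeV/c^2, "
      f"about {100 * fraction:.0f}% of the proton mass")
# -> quark masses: 9.1 MeV/c^2, about 1% of the proton mass
```

The remaining roughly 99% is the kinetic and (negative) binding energy described above.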
{ "domain": "physics.stackexchange", "id": 30384, "tags": "quantum-mechanics, particle-physics, mass, discrete, binding-energy" }
Overflow of integer counter in distributed systems
Question: I've just been introduced to Paxos. There is a notion of a value that is incremented each time a new proposal is sent, to provide an order for proposals, something like a timestamp. What happens when this long or integer value overflows and starts from zero again? The proposed sequence number will become lower than the old one and every proposal will be rejected. Thank you. Answer: In general: the Paxos algorithm uses unbounded integers to tag data. In practice, however, every integer handled by the processors is bounded by some constant $2^b$ where $b$ is the integer memory size. Yet, if every integer variable is initialized to a very low value, the time needed for any such variable to reach the maximum value $2^b$ is actually way larger than any reasonable system's timescale. For instance, counting from $0$ to $2^{64}$ by incrementing every nanosecond takes roughly 500 years to complete. Such a long sequence is said to be practically infinite. Self-stabilizing systems: One particular aspect of self-stabilizing systems is the need to re-examine the assumption concerning the use of (practically) unbounded time-stamps. While in practice it is reasonable for Paxos to assume that a bounded value, represented by 64 bits, is a natural (unbounded) number, for all practical considerations, in the scope of self-stabilization the 64-bit value may be corrupted by a transient fault to its maximal value at once, and still recovery following such a transient fault must be guaranteed. Source: Self-Stabilizing Paxos
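The arithmetic behind the "roughly 500 years" figure is a one-liner:

```python
counter_max = 2**64
seconds = counter_max / 10**9            # one increment per nanosecond
years = seconds / (365.25 * 24 * 3600)
print(f"about {years:.0f} years to overflow a 64-bit counter")
# -> about 585 years to overflow a 64-bit counter
```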
{ "domain": "cs.stackexchange", "id": 7094, "tags": "distributed-systems, arithmetic" }
Computing an approximate value of Pi via Monte Carlo method in Java with streams
Question: I have this short program that attempts to compute an approximate value of \$\pi\$: package net.coderodde.fun; import java.awt.geom.Point2D; import java.util.Arrays; import java.util.Objects; import java.util.Random; import java.util.stream.Collectors; import java.util.stream.IntStream; /** * This class computes an approximate value of Pi. * * @author Rodion "rodde" Efremov * @version 1.6 (Feb 23, 2018) */ public class MonteCarloPiComputer { /** * The default radius of the simulated circle. */ private static final double DEFAULT_RADIUS = 0.5; /** * The random number generator. */ private final Random random; public MonteCarloPiComputer(Random random) { this.random = Objects.requireNonNull( random, "The input random number generator is null."); } public MonteCarloPiComputer() { this(new Random()); } /** * Computes an approximate value of Pi via a Monte Carlo method. The method * creates {@code samples} random points, computes the percentage of all * points within the radius from the center of the simulated square and * multiplies it by {@code 4.0}. * * @param samples the number of points to create. * @param radius the radius of the simulated circle. * @return an approximate value of Pi. */ public double computeApproximateValueOfPi(int samples, double radius) { Point2D.Double center = new Point2D.Double(radius, radius); double squareSideLength = 2.0 * radius; long numberOfPointsWithinCircle = IntStream.range(0, samples) .mapToObj( (i) -> { return new Point2D.Double( squareSideLength * random.nextDouble(), squareSideLength * random.nextDouble()); }) .filter((point) -> { return point.distance(center) < radius; }).count(); return (4.0 * numberOfPointsWithinCircle) / samples; } /** * Computes an approximate value of Pi via a Monte Carlo method with default * radius. * * @param samples the number of points to create. * @return an approximate value of Pi. 
*/ public double computeApproximateValueOfPi(int samples) { return computeApproximateValueOfPi(samples, DEFAULT_RADIUS); } public static void main(String[] args) { MonteCarloPiComputer computer = new MonteCarloPiComputer(); for (int samples = 100_000; samples <= 1_000_000; samples += 100_000) { double approximation = computer.computeApproximateValueOfPi(samples); double percentage = approximation / Math.PI; System.out.print(String.format("%7d: ", samples)); System.out.print(String.format("%10f", approximation)); System.out.println( String.format( ", percentage from exact Pi: %10f", (100.0 * percentage))); } } } Critique request I would like to hear any comments and improvement suggestions. Answer: You have some superfluous imports. I also think that you can rename computeApproximateValueOfPi() to approximatePi() without losing any clarity. The simulation is complicated by the fact that the circle is centered at an adjustable point (radius, radius). You could just as easily use a unit circle centered at the origin, and generate points in a square with x in the range [0, 1) and y in the range [0, 1). The computation is mathematically equivalent, with less shifting and scaling. Furthermore, there is no need to instantiate a Point2D for each generated point. You can use .distance(x, y). Better yet, avoid computing the square root by using .distanceSq(x, y). The output routine in main() could be improved. Your percentage variable doesn't actually store a percentage as its name suggests; the 100× happens in the formatting instead. Splitting up the formatting into several System.out.print() calls defeats the purpose of String.format(). I'd combine them all into one System.out.format() call. Finally, the "percentage from" wording implies that you are calculating the difference; the calculation that you actually performed is what I would call "percentage of". 
import java.awt.geom.Point2D; import java.util.Objects; import java.util.Random; import java.util.stream.IntStream; public class MonteCarloPiComputer { /** * The random number generator. */ private final Random random; public MonteCarloPiComputer(Random random) { this.random = Objects.requireNonNull( random, "The input random number generator is null." ); } public MonteCarloPiComputer() { this(new Random()); } /** * Computes an approximate value of Pi via a Monte Carlo method. The method * creates {@code samples} random points in the upper-right quadrant, * computes the fraction of all points within the radius from the origin * and multiplies it by {@code 4.0}. * * @param samples the number of points to create. * @return an approximate value of Pi. */ public double approximatePi(int samples) { Random r = this.random; Point2D.Double origin = new Point2D.Double(); long pointsWithinUnitArc = IntStream.range(0, samples) .filter(i -> origin.distanceSq(r.nextDouble(), r.nextDouble()) < 1) .count(); return (4.0 * pointsWithinUnitArc) / samples; } public static void main(String[] args) { MonteCarloPiComputer computer = new MonteCarloPiComputer(); for (int samples = 100_000; samples <= 1_000_000; samples += 100_000) { double approximation = computer.approximatePi(samples); double pctDiff = 100 * (approximation - Math.PI) / Math.PI; System.out.format("%7d: %10f, deviation from exact Pi: %+10f%%%n", samples, approximation, pctDiff ); } } }
{ "domain": "codereview.stackexchange", "id": 29589, "tags": "java, numerical-methods" }
What do moles and moles of various subatomic particles gathered together look like?
Question: I wonder whether it is even possible to find the answer. If it is impossible to find out, why? Do moles of neutrons basically look like a neutron star? If so, what does one look like? How about moles and moles of protons and electrons gathered together? What would be their electromagnetic attraction? What would be some of their properties? What would be the explosive repulsion if you gathered a mole of electrons or protons together in one spherical ball? Answer: If you had a mole of electrons and a mole of protons and put them together, they would make hydrogen. The transition from ions to ground-state atoms would release 13.6 eV/atom or about 1300 kJ/mol. This mole of hydrogen would have a mass of one gram. For comparison, combustion of 1 kg of gasoline releases about 44 MJ of heat; your completely-ionized hydrogen would win in a fight. Hydrogen atoms and hydrogen molecules don't interact strongly with visible light, so your sample would be invisible. If you had a mole of free charges (electrons or protons, but not both) you would have a lot more energy. How much is a famous problem in electrostatics: the self-energy of a uniformly-charged sphere. You find it by computing the energy needed to assemble each thin shell that makes up the sphere and integrating over the volume; the solution is that the energy of the sphere is $$ U = \frac35 \frac1{4\pi\epsilon_0} \frac{Q^2}{R} = \frac 35 \frac{\alpha\hbar c}{e^2} \frac{Q^2}{R} $$ for charge $Q$, sphere radius $R$, fine structure constant $\alpha$. A mole of electrons in a sphere with one meter radius would have energy \begin{align} U &= \frac 35 \frac{ \rm 200\,eV\,nm }{137} \frac{ (6\times10^{23})^2 }{ 10^9\rm\,nm} \left( {}\times \frac{ \rm1.6\times10^{-19}\,J }{\rm 1\,eV} \right) \\ &\approx 10^{19}\rm\,J \end{align} Don't stand too close.
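Redoing the final estimate in SI units, as a quick sketch (constants are CODATA values):

```python
from math import pi

EPS0 = 8.8541878128e-12  # vacuum permittivity, F/m
E = 1.602176634e-19      # elementary charge, C
N_A = 6.02214076e23      # Avogadro's number

Q = N_A * E              # charge of one mole of electrons, C
R = 1.0                  # sphere radius, m

# Self-energy of a uniformly charged sphere: U = (3/5) Q^2 / (4 pi eps0 R)
U = 0.6 * Q**2 / (4 * pi * EPS0 * R)
print(f"U = {U:.2e} J")  # -> U = 5.02e+19 J
```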
{ "domain": "physics.stackexchange", "id": 22927, "tags": "electrons, protons, neutrons, neutron-stars, visualization" }
Formatting a long filesystem path in Python
Question: This is a follow-up to Log probe requests of WiFi devices, focusing on a specific element of the code. I have this line of code: airport_path = "/System/Library/PrivateFrameworks/Apple80211.framework/Versions/Current/Resources/airport" It seems to be too long for the Python style standard. Can I shorten it or make it multiline? Answer: Constants should be named in ALL_CAPS, by convention. You could split the string literal across lines, such as AIRPORT_PATH = "/System/Library/PrivateFrameworks/Apple80211.framework/" \ "Versions/Current/Resources/airport" But PEP 8 also says that its own rules should not be applied dogmatically, and this is one place where I think a violation of the line-length limit is justified. It's not that much longer than 80 columns, and breaking up the string makes it less readable and searchable.
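As an aside, since the constant happens to be a filesystem path, another way to keep lines short is to build it with os.path.join (a matter of taste, and only applicable to paths):

```python
import os.path

AIRPORT_PATH = os.path.join(
    "/System", "Library", "PrivateFrameworks", "Apple80211.framework",
    "Versions", "Current", "Resources", "airport",
)
print(AIRPORT_PATH)
# -> /System/Library/PrivateFrameworks/Apple80211.framework/Versions/Current/Resources/airport
```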
{ "domain": "codereview.stackexchange", "id": 12589, "tags": "python, beginner" }
[ERROR] Client wants topic /xxx but our version has /yyy
Question: Hello all, I am trying to execute some trajectories on my robot using the simple_trajectory.cpp tutorial. I am able to see my controller using rostopic echo /r_arm_controller and it shows me everything I want. However when I execute my moves I get this error: [ERROR] [1360184059.110097449]: Client [/right_joint_trajectory_action] wants topic /r_arm_controller/joint_trajectory_action/goal to have datatype/md5sum [control_msgs/FollowJointTrajectoryActionGoal/8f3e00277a7b5b7c60e1ac5be35ddfa2], but our version has [pr2_controllers_msgs/JointTrajectoryActionGoal/aee77e81e3afb8d91af4939d603609d8]. Dropping connection. It looks pretty self-explanatory but I wanted some clarification. So my action client wants [control_msgs/FollowJointTrajectoryActionGoal] on the topic, and my side publishes [pr2_controllers_msgs/JointTrajectoryActionGoal]. Which of these should I conform to? I am not using a PR2 robot so I am thinking of using control_msgs, but I want to know if one is better than the other. Thanks in advance. Kind Regards, Martin Originally posted by MartinW on ROS Answers with karma: 464 on 2013-02-06 Post score: 1 Answer: I found the answer: pr2_controllers_msgs/JointTrajectoryActionGoal is an old method of using the joint trajectory action control. I changed over the tutorial code to use: #include <control_msgs/FollowJointTrajectoryAction.h> instead of: #include <pr2_controllers_msgs/JointTrajectoryAction.h> and any calls like: pr2_controllers_msgs::JointTrajectoryGoal to: control_msgs::FollowJointTrajectoryGoal Works fine, but now I'm encountering a different run-time error; however, I don't know the cause of this one yet. I will try to update if I find an answer! Regards, Martin Originally posted by MartinW with karma: 464 on 2013-02-06 This answer was ACCEPTED on the original site Post score: 0
{ "domain": "robotics.stackexchange", "id": 12766, "tags": "ros" }
Counting number of stereo isomer
Question: I am having trouble counting the stereoisomers in this structure: There are no chiral carbons. The double bonds mean there may be cis-trans geometrical isomerism, but the groups are different, so that approach failed. Answer given is: 4 Answer: Though all four groups around each double bond are different, geometrical isomerism is still present. When the groups are not the same, the highest-priority group on each carbon is determined through the CIP sequence rules. If the two highest-priority groups are on the same side of the double bond (syn position), the isomer is called the Z-isomer. If the two highest-priority groups are on opposite sides of the double bond (anti position), that isomer is called the E-isomer. Here there will be four combinations, i.e. (2Z,6Z); (2Z,6E); (2E,6Z); (2E,6E). Thus, it gives four possible geometrical isomers.
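The counting step is mechanical: each stereogenic double bond independently contributes a Z or E descriptor, so a sketch like this enumerates all $2^2 = 4$ combinations (locants 2 and 6 as in the answer):

```python
from itertools import product

bond_locants = [2, 6]  # positions of the two stereogenic double bonds
isomers = [tuple(f"{locant}{descriptor}"
                 for locant, descriptor in zip(bond_locants, combo))
           for combo in product("ZE", repeat=len(bond_locants))]
print(isomers)
# -> [('2Z', '6Z'), ('2Z', '6E'), ('2E', '6Z'), ('2E', '6E')]
```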
{ "domain": "chemistry.stackexchange", "id": 9711, "tags": "stereochemistry" }
How can one find an element in a Merkle tree?
Question: How can one find an element in a Merkle tree as efficiently as possible? Each internal node has a hash value. So I thought: first, hash the value being searched for, and if a node has exactly that hash, follow it down to its leaf. But this is only correct at depth 2, not in all cases, because each internal node stores the hash of the concatenation of its child nodes, and by the avalanche effect the concatenated hash value is unpredictable. So I cannot compute the hash value to compare against higher up in the tree. Answer: Merkle trees are not designed to support efficient lookup. The best you can do is search the entire tree (all the leaves) and check each node for a match. This is $O(n)$ time. If you want to be able to efficiently look up an item in a Merkle tree, construct a separate "index": i.e., a separate hashtable (or binary search tree) storing the mapping from hash value to position in the tree.
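A minimal sketch of the suggested side index (the tree construction and names here are my own, using SHA-256 and a power-of-two number of leaves):

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def build_tree(leaves):
    """Hash the leaves, record a digest -> leaf-position index, then fold
    pairwise up to the Merkle root. The index is built as a by-product of
    the O(n) construction, so lookups later cost nothing extra to enable."""
    level = [h(leaf) for leaf in leaves]
    index = {digest: pos for pos, digest in enumerate(level)}
    while len(level) > 1:
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0], index

def find(index, value):
    """O(1) expected lookup via the side index, versus O(n) for a scan."""
    return index.get(h(value))  # leaf position, or None if absent

leaves = [b"alpha", b"bravo", b"charlie", b"delta"]
root, index = build_tree(leaves)
print(find(index, b"charlie"))  # -> 2
print(find(index, b"echo"))     # -> None
```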
{ "domain": "cs.stackexchange", "id": 7108, "tags": "data-structures, hash" }
Database credentials and connector including encryption
Question: I'm designing a small Java desktop application to interact with my database and this is a very important part of it as a majority of the operations will be involving the SQL Server 2012 database. I am using the SQL Server JDBC driver (v6.0) provided by Microsoft. I wrote a class to hold credentials used to connect, and another one to provide a connection to be able to interact with the database. I'm using Java 8. Looking for any and all improvements. ConnectionCredentials import javax.crypto.Cipher; import javax.crypto.spec.SecretKeySpec; import java.security.Key; import java.security.MessageDigest; /** * Hold and make available credentials to connect to SQL database. */ public class ConnectionCredentials { private String serverName = ""; private String databaseName = ""; private String username = ""; private Key passwordKey; private byte[] encryptedPassword; /** * Constructor * @param serverName name/address of the server * @param databaseName name of the database * @param username user/login name on the database * @param password user/login password on the database (plain text) */ public ConnectionCredentials(String serverName, String databaseName, String username, String password) { this.serverName = serverName; this.databaseName = databaseName; this.username = username; encryptedPassword = encrypt(password); } /** * Encrypts password prior to storing in a ConnectionCredentials object * @param password * @return the encrypted password */ private byte[] encrypt(String password) { // Encryption code based on // http://stackoverflow.com/a/32583766/3626537 byte[] encrypted = {}; try { MessageDigest digester = MessageDigest.getInstance("MD5"); digester.update(String.valueOf(password).getBytes("UTF-8")); byte[] digest = digester.digest(); passwordKey = new SecretKeySpec(digest, "AES"); Cipher cipher = Cipher.getInstance("AES"); cipher.init(Cipher.ENCRYPT_MODE, passwordKey); encrypted = cipher.doFinal(password.getBytes()); } catch(Exception exc) { 
exc.printStackTrace(); } return encrypted; } /** * Format credentials to a String. * @return the credentials formatted as a String */ @Override public String toString() { return String.format( "serverName: %s | databaseName: %s | username: %s | password: HIDDEN", getServerName(), getDatabaseName(), getUsername() ); } /* * Field getters and setters */ public String getServerName() { return serverName; } public void setServerName(String serverName) { this.serverName = serverName; } public String getDatabaseName() { return databaseName; } public void setDatabaseName(String databaseName) { this.databaseName = databaseName; } public String getUsername() { return username; } public void setUsername(String username) { this.username = username; } public Key getPasswordKey() { return passwordKey; } public byte[] getEncryptedPassword() { return encryptedPassword; } public void setPassword(String password) { encryptedPassword = encrypt(password); } } DatabaseConnector import javax.crypto.Cipher; import java.sql.Connection; import java.sql.DriverManager; /** * Connect to SQL Server database using ConnectionCredentials object. */ public class DatabaseConnector { private String connectionUrl = ""; private ConnectionCredentials credentials; /** * Constructor * @param credentials ConnectionCredentials object containing the required configuration to connect to the database server instance. */ public DatabaseConnector(ConnectionCredentials credentials) { this.credentials = credentials; } /** * Get a JDBC connection using ConnectionCredentials object. 
* @param credentials the ConnectionCredentials object to use to open JDBC connection * @return the JDBC connection */ public Connection getJdbcConnection(ConnectionCredentials credentials) { Connection jdbcConnection = null; try { Class.forName("com.microsoft.sqlserver.jdbc.SQLServerDriver"); jdbcConnection = DriverManager.getConnection( getConnectionUrl(credentials), credentials.getUsername(), decrypt(credentials.getEncryptedPassword()) ); } catch(Exception exc) { exc.printStackTrace(); } return jdbcConnection; } /** * Builds the required connection URL required by DriverManager. * @param credentials ConnectionCredentials object containing the required configuration to connect to the database server instance. * @return the connection URL built from the credentials */ private String getConnectionUrl(ConnectionCredentials credentials) { String serverName = credentials.getServerName(); String databaseName = credentials.getDatabaseName(); String username = credentials.getUsername(); connectionUrl = String.format( "jdbc:sqlserver://%s;databaseName=%s;", serverName, databaseName ); return connectionUrl; } /** * Decrypts data encrypted by ConnectionCredentials, e.g., a password. 
* @param encrypted the encrypted data * @return the decrypted data */ private String decrypt(byte[] encrypted) { String decrypted = ""; StringBuilder stringBuilder = new StringBuilder(); try { Cipher cipher = Cipher.getInstance("AES"); cipher.init(Cipher.DECRYPT_MODE, credentials.getPasswordKey()); decrypted = new String(cipher.doFinal(encrypted)); } catch(Exception exc) { exc.printStackTrace(); } return decrypted; } } Example usage ConnectionCredentials credentials = new ConnectionCredentials(server, database, username, password); DatabaseConnector connector = new DatabaseConnector(credentials); Connection connection = connector.getJdbcConnection(credentials); try { Statement statement = connection.createStatement(); ResultSet resultSet = statement.executeQuery("select Id, Name, Organization from PsychoProductions.Persons;"); while(resultSet.next()) { System.out.printf( "Id: %d Name: %s Organization: %s%n", resultSet.getInt("Id"), resultSet.getString("Name"), resultSet.getString("Organization") ); } } catch(Exception exc) { exc.printStackTrace(); } Answer: Encrypts password prior to storing in a ConnectionCredentials object Why are you encrypting the password in this object? There are very few things attacks that it would actually prevent that I can see here. Maybe you're trying to prevent unauthorised classes from accessing the used credentials object? Well, the credentials object is only ever used inside of DatabaseConnector, as is not exposed; unless your application leaks it to other parts, this should not be an issue. This could be prevented simply by ensuring that you never expose an instance of the credentials object anywhere. Maybe you're worried about memory reading attacks? As far as I understand, in Java, strings are by default interned by the compiler. This means that they are stored on the heap - anyone could just as easily access the password in memory. Only literals are interned. 
(Thanks, @h.j.k.) This is ignoring the interesting reflection tricks they could use to access the private field anyway. Furthermore, encrypting the password before immediately passing it to the database connector and having the connector decrypt it is architecturally poor: you're assuming that any password passed to the connector was encrypted with a particular algorithm. This doesn't make it very reusable at all! My suggestion? Don't encrypt the password. It makes you feel safe, but it is not actually doing anything for you. Instead, make sure that the credentials class touches as few places as it needs to, and heavily restrict its access. There's a rather insidious bug caused by how you've arranged encrypt in your constructor. First off, as a rule of thumb, constructors should not throw. Given your types are correct (which the compiler should check), your constructor should only assign fields. In your example, you have this:

public ConnectionCredentials(String serverName, String databaseName, String username, String password) {
    this.serverName = serverName;
    this.databaseName = databaseName;
    this.username = username;
    encryptedPassword = encrypt(password);
}

/**
 * Encrypts password prior to storing in a ConnectionCredentials object
 * @param password
 * @return the encrypted password
 */
private byte[] encrypt(String password) {
    // Encryption code based on
    // http://stackoverflow.com/a/32583766/3626537
    byte[] encrypted = {};
    try {
        ...snip...
    } catch(Exception exc) {
        exc.printStackTrace();
    }
    return encrypted;
}

If the snipped block threw an exception (which you try/caught, presumably to avoid throwing in the constructor), a really bad bug would occur: from the user's perspective, nothing would happen until they tried to connect to the database. When they tried to do this, they would find that encrypted had been set to an empty byte array, the connection would fail, and then that would throw. This bug would take a while to track down. Do not do this.
If you must encrypt your password, do it in an object that constructs ConnectionCredentials. Use catch sparingly. There are seldom occasions where it is a good idea to try/catch, as many exceptions will be catastrophic errors (exceptions should not be used for control flow, after all). You should let the exception propagate up until something can handle it (logging it to the console and leaving the object in an uninitialised state is not handling it). If the program cannot recover, let it crash. Note that in Java, if a constructor throws, it will leave the object in a semi-initialised state. This may cause further bugs. So, if the constructor throwing is a bad idea and catching the exception is a bad idea, what do you do? You move the action that is causing the throw to another location.
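The "move the throwing action to another location" advice can be sketched as a static factory (hypothetical class and method names, and SHA-256 swapped in for the original MD5 step purely for illustration):

```java
import java.nio.charset.StandardCharsets;
import java.security.GeneralSecurityException;
import java.security.MessageDigest;

// Sketch of the suggested shape: the constructor only assigns fields;
// anything that can fail happens in a static factory that surfaces the
// failure immediately instead of swallowing it.
public final class SafeCredentials {
    private final String username;
    private final byte[] passwordDigest;

    private SafeCredentials(String username, byte[] passwordDigest) {
        this.username = username;              // assignment only, cannot fail
        this.passwordDigest = passwordDigest;
    }

    public static SafeCredentials of(String username, String password) {
        try {
            // If hashing fails, the caller finds out right here at the
            // call site, not later at connection time.
            MessageDigest digester = MessageDigest.getInstance("SHA-256");
            byte[] digest = digester.digest(password.getBytes(StandardCharsets.UTF_8));
            return new SafeCredentials(username, digest);
        } catch (GeneralSecurityException e) {
            throw new IllegalStateException("could not hash password", e);
        }
    }

    public String username() {
        return username;
    }

    public byte[] passwordDigest() {
        return passwordDigest.clone();         // defensive copy
    }
}
```

With this shape, a failure happens loudly at SafeCredentials.of(...), and no half-built object with an empty byte array is ever left behind.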
{ "domain": "codereview.stackexchange", "id": 20643, "tags": "java, sql-server" }
When Phobos collides with Mars (or breaks up), will it have any noticeable effects on Earth?
Question: When Phobos meets its predicted doom in a few tens of millions of years, whether it breaks up or collides with the surface of Mars, will this change have any noticeable effects (permanent or temporary) on the Earth? Answer: No. Phobos is small - about 22 km across (its mean radius is roughly 11 km) - the size of a small city. Mars (and Phobos) is so far away that a Phobos impact will not affect Earth much. (The Earth-Mars distance ranges from about 55 billion meters at closest approach to nearly 400 billion meters, depending on where the two planets are in their orbits.) When Phobos reaches the Roche limit it will break up and become a thin light-grey planetary ring around Mars for a few million years. When it breaks up, (almost) all the debris should stay near Mars, because the escape velocity from its orbit around Mars is greater than the velocity imparted to the boulders of Phobos as it self-destructs.
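As a side note, a back-of-the-envelope estimate of where that Roche limit sits, using the rigid-body formula and rough published densities (the numbers below are my own assumptions, not from the answer):

```python
# Rigid-body Roche limit for Phobos around Mars:
#   d = R_mars * (2 * rho_mars / rho_phobos) ** (1/3)
# Radius and densities are rough published values.
R_MARS_KM = 3390.0     # mean radius of Mars, km
RHO_MARS = 3933.0      # bulk density of Mars, kg/m^3
RHO_PHOBOS = 1876.0    # bulk density of Phobos, kg/m^3

d_km = R_MARS_KM * (2 * RHO_MARS / RHO_PHOBOS) ** (1 / 3)
print(d_km)  # roughly 5.5 thousand km from Mars' centre

# Phobos currently orbits at about 9376 km, so it still has some way
# to spiral in before the ring forms.
```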
{ "domain": "astronomy.stackexchange", "id": 1441, "tags": "earth, mars, phobos" }
What is the purpose of standardization in machine learning?
Question: I'm just getting started with learning about K-nearest neighbor and am having a hard time understanding why standardization is required. Reading through, I came across a section saying When independent variables in training data are measured in different units, it is important to standardize variables before calculating distance. For example, if one variable is based on height in cms, and the other is based on weight in kgs then height will influence more on the distance calculation. Since K-nearest neighbor is just a comparison of distances apart, why does it matter if one of the variables has values of a larger range, since it is what it is? Considering 3 points A, B & C with x,y co-ordinates (x in cm, y in grams) A(2,2000), B(8,9000) and C(10,20000), the ranking of the points by distance from the origin, for example (or from any other point), will be the same whether the y values are in grams, pounds, tonnes or any other combination of units for both x and y - so where's the need to standardise? Every example or Q&A I see brushes through with the same statement of 'one variable influencing the other' without a real example of how this might occur. Again, how does one know when this influence is too much as to call for standardization? Also, what exactly does standardization do to the values? One of the formulas does it by Xs = (X-mean)/(max-min). Where does such a formula come from and what is it really doing? Hopefully someone can offer me a simplified explanation or give me a link to a site or book that explains this in simple terms for beginners. Answer: Considering 3 points A, B & C with x,y co-ordinates (x in cm, y in grams) A(2,2000), B(8,9000) and C(10,20000), the ranking of the points by distance from the origin, for example (or from any other point), will be the same whether the y values are in grams, pounds, tonnes or any other combination of units for both x and y - so where's the need to standardise?
This is true for the example you provided, but not for Euclidean distance between points in general. Look at this example:

def euclidian_distance(a, b):
    return ((a[0] - b[0])**2 + (a[1] - b[1])**2)**0.5

a1 = 10   # 10 grams
a2 = 10   # 10 cm
b1 = 10   # 10 grams
b2 = 100  # 100 cm
c1 = 100  # 100 grams
c2 = 10   # 10 cm

# using (grams, cm)
A_g_cm = [a1, a2]
B_g_cm = [b1, b2]
C_g_cm = [c1, c2]
print('[g, cm] A-B:', euclidian_distance(A_g_cm, B_g_cm))
print('[g, cm] A-C:', euclidian_distance(A_g_cm, C_g_cm))

# using (kg, cm)
A_kg_cm = [a1/1000, a2]
B_kg_cm = [b1/1000, b2]
C_kg_cm = [c1/1000, c2]
print('[kg, cm] A-B:', euclidian_distance(A_kg_cm, B_kg_cm))
print('[kg, cm] A-C:', euclidian_distance(A_kg_cm, C_kg_cm))

# using (grams, m)
A_g_m = [a1, a2/100]
B_g_m = [b1, b2/100]
C_g_m = [c1, c2/100]
print('[g, m] A-B:', euclidian_distance(A_g_m, B_g_m))
print('[g, m] A-C:', euclidian_distance(A_g_m, C_g_m))

# using (kg, m)
A_kg_m = [a1/1000, a2/100]
B_kg_m = [b1/1000, b2/100]
C_kg_m = [c1/1000, c2/100]
print('[kg, m] A-B:', euclidian_distance(A_kg_m, B_kg_m))
print('[kg, m] A-C:', euclidian_distance(A_kg_m, C_kg_m))

Output:

[g, cm] A-B: 90.0
[g, cm] A-C: 90.0
[kg, cm] A-B: 90.0
[kg, cm] A-C: 0.09000000000000001
[g, m] A-B: 0.9
[g, m] A-C: 90.0
[kg, m] A-B: 0.9
[kg, m] A-C: 0.09000000000000001

Here you can clearly see that the choice of units influences the ranking, hence the need for standardization.
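As for the formula quoted in the question, Xs = (X-mean)/(max-min) is mean normalisation: it centres each feature on its mean and divides by its range, which makes the result independent of the unit the feature was recorded in. A small sketch:

```python
def rescale(values):
    # (x - mean) / (max - min): the formula quoted in the question
    m = sum(values) / len(values)
    span = max(values) - min(values)
    return [(v - m) / span for v in values]

weights_g = [10, 10, 100]               # grams
weights_kg = [w / 1000 for w in weights_g]

# The rescaled values are essentially identical whichever unit we start
# from (a change of unit multiplies mean and range by the same factor),
# so distances computed on them no longer depend on the unit choice.
print(rescale(weights_g))
print(rescale(weights_kg))
```

After this transform, the grams-vs-kilograms ambiguity that drove the ranking flips in the example above disappears.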
{ "domain": "datascience.stackexchange", "id": 5884, "tags": "machine-learning, python, classification, normalization" }
Is mass an observable in Quantum Mechanics?
Question: One of the postulates of QM mechanics is that any observable is described mathematically by a hermitian linear operator. I suppose that an observable means a quantity that can be measured. The mass of a particle is an observable because it can be measured. Why then the mass is not described by a linear hermitian operator in QM? Answer: In non-relativistic quantum mechanics the mass can, in principle, be considered an observable and thus described by a self-adjoint operator. In this sense a quantum physical system may have several different values of the mass and a value is fixed as soon as one performs a measurement of the mass observable, exactly as it happens for the momentum for instance. However, it is possible to prove that, as the physical system is invariant under Galileian group (or Galilean group as you prefer), a superselection rule arises, the well-known Bargmann mass superselection rule. It means that coherent superpositions of pure states with different values of the mass are forbidden. Therefore the whole description of the system is always confined in a fixed eigenspace of the mass operator (in particular because all remaining observables, including the Hamiltonian one, commute with the mass operator). In practice, the mass of the system behaves just like a non-quantum, fixed parameter. This is the reason, barring subtle technicalities (non-separability of the Hilbert space if the spectrum of the mass operator is continuous), why the mass can be considered a fixed parameter rather than a self-adjoint operator in non-relativistic quantum mechanics. In relativistic quantum mechanics the picture is quite different. First of all, one has to distinguish between elementary systems (elementary free particles in with Wigner's defintion) and compound (interacting) systems. The formers are defined as irreducible (strongly continuous) unitary representations of Poincaré group. 
Each such representation is identified by a set of numbers defining the eigenvalues of some observables which attains constant values in the representation because of the irreducibility requirement. The nature of these numbers depend on the structure of the group one is considering. Each such observable, in the irreducible Hilbert space of the system has the form $\lambda I$ where $\lambda$ is a fixed real number. Referring to the Poincaré group, the mass operator turns out to be one of these elementary observables. Therefore, in relativistic quantum mechanics, the elementary systems must have the trivial mass operator, which as before, can be considered as a fixed, non-quantum parameter. The picture changes dramatically if one focuses on compound systems: there the mass is simply the energy operator evaluated in the rest frame of the system. It generally shows a mixed spectrum made of a continuous part, due to the "relative" kinetic energy and, below that, a point spectrum describing the possible masses of the overall system. ADDENDUM. As Arnold Neumaier pointed out to me, neutrinos appear to have non-fixed values of the mass (i.e. the mass operator is not trivial) in view of the presence of the weak interaction. In my view, it is disputable if they can be considered elementary particles since they include weak interaction in their description. Surely they are elementary from a purely physical viewpoint. Maybe Wigner's description is physically inappropriate.
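To put the last paragraph in symbols (standard relativistic conventions, stated here without proof): the squared mass is the quadratic Casimir of the Poincaré group,

$$ M^{2}c^{4} \;=\; H^{2} - c^{2}\,\mathbf{P}^{2}, $$

so in the rest frame ($\mathbf{P} = 0$) the mass operator reduces to $M = H/c^{2}$, the energy operator, whose point spectrum below the continuous part lists the possible masses of the compound system.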
{ "domain": "physics.stackexchange", "id": 93785, "tags": "quantum-mechanics, mass, operators, observables" }
Orphaned star systems and intergalactic travel (in fictional context)
Question: Context: I'm looking for some authority on a particular idea that I brought up in regards to a new update brought out for a space travel sim game, Elite Dangerous. The vessels in that game can travel between star systems, requiring fuel - they can only jump between star systems, and have a limit on how many light years they can travel in one jump. The latest update added a vessel that can jump up to 500 light years. Vessels can't travel continuously to a location, it has to be point to point, i.e. teleportation (sort of). The question: Given the context above, can it be estimated that there are enough high-orbiting or orphaned star systems between the edge of the Milky Way and the edge of the Canis Major dwarf galaxy for one of these vessels to traverse a 'corridor' of 500 ly (or closer) intervals of star systems between the two galaxies? Answer: The Canis Major dwarf galaxy is about 8 kpc from the Sun, but is only 8 degrees below the Galactic plane (and further out than the Sun). So it is about 42,000 light years from the Galactic centre and about 1150 light years below the plane. This is almost within the disc of the Galaxy itself. The Galactic disc has a density that varies pseudo-exponentially in both radial distance from the centre and with height above the plane: $$ n \propto \exp(-r/R_0)\exp(-z/Z_0),$$ where $R_0$ and $Z_0$ are appropriate numbers for the radial scale length and the height scale length respectively, and $r$ and $z$ are the radial and vertical coordinates with respect to the Galactic centre. In this coordinate system, the position of the Sun is about $r=25,000$ light years and $z \sim 0$ light years. The appropriate numbers for the scale-lengths/heights are $R_0 \sim 10,000$ light years and $Z_0 \sim 1000$ light years. The density of stars in the solar neighbourhood is about 0.1 pc$^{-3}$ or about $3\times 10^{-3}$ stars per cubic light year.
We can now scale this number according to the equation above, to work out the approximate density in the vicinity of the Canis Major dwarf galaxy, which will be lower than in the solar neighbourhood. $$ \frac{n_{\rm CM}}{n_{\rm Sun}} \simeq \exp[(r_{\rm Sun} - r_{\rm CM})/R_0]\exp[(z_{\rm Sun}-z_{\rm CM})/Z_0],$$ from which we get $n_{\rm CM} \simeq 0.06 n_{\rm Sun} = 2\times 10^{-4}$ stars per cubic light year. To work out the average distance between stars we take the reciprocal of the cube root, $$n_{\rm CM}^{-1/3} = 18\ {\rm light\ years}.$$ So, if you have a range of 500 light years per "jump" you have no problem skipping all the way to the Canis Major dwarf galaxy.
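The arithmetic in the answer can be checked directly with the numbers quoted above:

```python
import math

R0, Z0 = 10_000.0, 1_000.0   # radial scale length / scale height, light years
n_sun = 3e-3                  # stars per cubic light year near the Sun

r_sun, z_sun = 25_000.0, 0.0
r_cm, z_cm = 42_000.0, 1_150.0   # Canis Major dwarf (magnitudes of r and z)

# Density ratio from the double-exponential disc model
ratio = math.exp((r_sun - r_cm) / R0) * math.exp((abs(z_sun) - abs(z_cm)) / Z0)
n_cm = ratio * n_sun              # stars per cubic light year near the dwarf
spacing = n_cm ** (-1 / 3)        # mean interstellar spacing, light years

print(ratio, n_cm, spacing)  # ratio ~ 0.06, spacing ~ 18 light years
```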
{ "domain": "astronomy.stackexchange", "id": 4437, "tags": "star, galactic-dynamics, space-travel" }
What is the relation between Hilbert space constructed from the GNS construction and the standard Hilbert space-state?
Question: I recently started reading Algebraic quantum mechanics. So I have no knowledge of the subject. In the GNS construction we construct the Hilbert space of states as follows, We endow the algebra of observables $\mathfrak{A}$ with an inner product using the state $\omega$ which is a linear functional on the space of observables. This inner product may be degenerate. (non zero element might have zero norm in this inner product) Remove these null vectors by quotienting the null space $\mathfrak{N}$ hence giving a positive definite inner product on $\mathfrak{A}/\mathfrak{N}$. Completing this space we get a Hilbert space. The algebra of observables acts naturally on this Hilbert space. How is the state used to give an inner product? In this case how does the operators corresponding to the observables operate on this Hilbert space? How is it related to the standard Hilbert space state formulism? Answer: Each element of the Hilbert space is a Cauchy sequence of equivalence classes of operators. So $\vec v=([a_1],[a_2],\dots)$ where $[a]=\{A\in\mathfrak A: \omega(A-a)=0\}$ and where $(\mathcal C_1,\mathcal C_2,\dots)$ is the specific function (sequence) that maps $n\mapsto \mathcal C_n$ and where the sequence is Cauchy. So now you have an operator $B$ and a vector $\vec v=([a_1],[a_2],\dots)$ and the obvious operation is $B\vec v=([Ba_1],[Ba_2],\dots)$ but you need to show it is well defined. Firstly that it didn't depend on the representative of the equivalence class, that $\omega(a-b)=0$ implies $\omega(Ba-Bb)=0$ and secondly that$([Ba_1],[Ba_2],\dots)$ is Cauchy. Though if it isn't, then you could just say that $\vec v=([a_1],[a_2],\dots)$ isn't in the domain of the unbounded operator. How does it correspond to observation? The same as always, the measurement sends a vector to its orthogonal projection onto an eigenspace. The relative frequency of getting a particular eigenspace is the ratio of the squared norm before and after the projection. 
Technically the space of Cauchy sequences still won't be a Hilbert space because we didn't finish the completion. Given two Cauchy sequences $([a_1],[a_2],\dots)$ and $([b_1],[b_2],\dots)$ we identify them with the same vector in the Hilbert space if $([a_1-b_1],[a_2-b_2],\dots)$ has zero as a limit (and we have to show that this definition is well defined). So a vector in the Hilbert space is a set of Cauchy sequences. Each Cauchy sequence has values which themselves are sets of operators. So $\vec v=[([a_1],[a_2],\dots)]$ where the outer $[\,]$ identifies two Cauchy sequences if the difference has zero as a limit. And the inner $[\,]$ identifies two operators if their difference has zero as the result of $\omega$, and the $(\,)$ just denotes a sequence by listing the values of the sequence in order (and I might be using the axiom of choice in my choice of notation by denoting each equivalence class by a representative). This means the operator also has to be shown to be well defined on two Cauchy sequences that are identified.
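For reference, the inner product in step 1 of the question is built directly from the state $\omega$ (standard GNS notation):

$$ \langle a, b \rangle \;=\; \omega(a^{*}b), \qquad \mathfrak{N} \;=\; \{\, a \in \mathfrak{A} : \omega(a^{*}a) = 0 \,\}, $$

and on the quotient $\mathfrak{A}/\mathfrak{N}$ the algebra acts by left multiplication, $\pi(b)[a] = [ba]$, which is the action used above.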
{ "domain": "physics.stackexchange", "id": 29141, "tags": "quantum-mechanics, mathematical-physics, operators, hilbert-space" }
Book Recommendations for Global General Relativity
Question: Can someone please recommend books that deal with the techniques used to apply General Relativity at Global scales? Einstein's Field Equations are local statements, so is there a technique or a whole book that deals with how to get global information using them? Answer: The book by Hawking and Ellis, The large-scale structure of spacetime, or Wald's general relativity may be what you are looking for. Global methods in GR were used by Roger Penrose in the sixties to establish the singularity theorem of black holes, and are a combination of topological techniques. These methods were further expanded by Hawking and Penrose to prove that the big bang is a space-time singularity, and further employed by Hawking to establish the properties of the absolute horizons of black holes. Rather understandably, the book by Hawking and Ellis, which was written in 1973, draws much material from Hawking's pioneering work on the topic, while Wald's book is a self-contained, slightly more modern introduction to much of the same material and techniques. Since then, global methods have become a major topic of research, at the frontier between mathematics and physics. Depending on your level of sophistication you may read either a chapter like Causal structure and global geometry in Wikipedia, or some rather nice lecture notes given at Columbia. Just remember to brush up on your understanding of Raychaudhuri's equation ;-)
{ "domain": "physics.stackexchange", "id": 37081, "tags": "general-relativity, resource-recommendations, topology" }
Why model in tensor flow model zoo have low mAP?
Question: I read in a paper and an article that the SSD model achieved above 70% mAP, but when I browse through the TensorFlow model zoo, the mAP of SSD is around 30-40% - see this link: https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/tf2_detection_zoo.md Why is it so much lower than the mAP in the paper? Answer: You may want to check whether the 0.70 mAP reported for SSD is for the COCO dataset. I'm guessing it is for a much easier dataset like VOC2007, as reported in the original paper. As far as I know, none of the current SOTA detection models have been able to achieve 0.70 mAP on COCO. See: https://paperswithcode.com/sota/object-detection-on-coco
{ "domain": "ai.stackexchange", "id": 3541, "tags": "tensorflow" }
Some clarifications about the ideas of representation of a group
Question: I started to study group theory but I have many doubts about the topic, so I'd like to share my current understanding together with some questions, my aim is to understand the general ideas and concepts about the topic more than the specific calculations, so answers with little math will be appreciated. I've seen that a representation of a group is a homomorphism, that means: let $G$ and $H$ be two groups, and $f$ a map from $G$ to $H$ such that $\forall g\in G \Rightarrow f(g)\in H$. Then $f$ is a homomorphism if $\forall g_1,g_2\in G \Rightarrow f(g_1g_2)=f(g_1)f(g_2)$. Now what I understood about this is that a representation conserves the information of the operation inside the group, so $G$ and $H$ may be different but the operations of the two groups follow the same rules. Does it mean that if I know just the operation rules of $G$ and $H$ I can't distinguish them? If yes, can you give me an example of what these abstract rules are? Then an isomorphism is an homomorphism that is also bijective, which means that if two groups have an isomorphism, one is equivalent to the other and the only way to distinguish them is to look at their elements. So, if a representation would be an isomorphism, I would understand its meaning as another way to express the same group. But, since it's just an homomorphism, I'm struggling in understanding why it's useful to know the representations of a group. Answer: I will answer a couple of your questions in the hope that this helps you on the way of understanding more about this subject. I will explain it in general ideas and not concrete theorems. Groups can be anything from abstract spaces to for example the vector spaces we all know such as $\mathbb{R}^n$. A homomorphism allows us to compare groups and map certain properties from one to the other. For example, consider the group $\mathbb{Z}$,+ (the group of integers equipped with the standard sum). 
We can define a function $\phi: \mathbb{Z},+ \to \mathbb{Z}_5,+: a \mapsto a \mod{5}$ for example, with $\mathbb{Z}_5$ being the additive group of integers modulo 5 and $\phi$ in this case the canonical map. Modulo groups are very prominent in group theory so you should get to know them! In this case, the operator + is the same for both groups but they are indeed very different and are distinguishable (for example $\mathbb{Z}$ has infinitely many elements while $\mathbb{Z}_5$ only has 5). You can check with the definition you included that this is a homomorphism. For example: $\phi(2+6) = \phi(8) = 3$ and also $\phi(2)+\phi(6) = 2 + 1=3$. From this arises the question: what about groups that are more similar to each other? For this we use the isomorphism. For the homomorphism to be bijective, this would also require, for example, that the number of elements be equal. When a new group is encountered, existing properties of a different group can be instantly applied if it is known that an isomorphism exists. An isomorphism guarantees that the groups "behave" in the same way, you could say. In our previous example, you could find all information about $\mathbb{Z}_5$ coming from $\mathbb{Z}$ but not the other way around. The isomorphism gives a path in both directions. An example of an isomorphism would be $\psi: \mathbb{Z}_3 \times\mathbb{Z}_2 \to \mathbb{Z}_6:(a,b) \mapsto (4a + 3b) \mod{6}$; you can check this out in the wikipedia I linked above. We can in fact get all information of the multiplicative group of any number by "only" using information out of the multiplicative groups of its factors in the prime factorisation. It is a one-to-one map. The inverse can be found using the opposite direction of the maps in that wiki-page (which might not be a "normal" function but just a map from point to point).
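A few lines of code can verify that a CRT-style map of this kind - here $\psi(a,b) = (4a+3b) \bmod 6$, chosen because $4 \equiv 1 \pmod 3$, $4 \equiv 0 \pmod 2$, $3 \equiv 0 \pmod 3$, $3 \equiv 1 \pmod 2$ - really is a bijective homomorphism $\mathbb{Z}_3 \times \mathbb{Z}_2 \to \mathbb{Z}_6$:

```python
from itertools import product

def psi(a, b):
    # CRT-style map Z_3 x Z_2 -> Z_6
    return (4 * a + 3 * b) % 6

pairs = list(product(range(3), range(2)))

# Bijective: the six outputs are exactly the six elements of Z_6
images = [psi(a, b) for a, b in pairs]
print(sorted(images))  # [0, 1, 2, 3, 4, 5]

# Homomorphism: psi of the componentwise sum equals the sum of images in Z_6
ok = all(
    psi((a1 + a2) % 3, (b1 + b2) % 2) == (psi(a1, b1) + psi(a2, b2)) % 6
    for (a1, b1), (a2, b2) in product(pairs, pairs)
)
print(ok)  # True
```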
{ "domain": "physics.stackexchange", "id": 66970, "tags": "group-theory, representation-theory, mathematics" }
Why is the limiting operator in the CFT state-operator correspondence well-defined, and why is conformal symmetry necessary?
Question: Consider a Euclidean CFT in radial quantisation, and let $S$ be the unit sphere centred on the origin. The state-operator correspondence says that any state $\Psi_S$ living on $S$ can be prepared by a path integral with an insertion only at the origin. It is proved as follows (see Sec 4.6 in Tong): Let $S_r$ be the sphere of radius $r$ centred on the origin. Then we can evolve the state $\Psi_S$ radially to get some state $\Psi_r$ living on $S_r$. This evolution can be written as a path integral on the annulus $r\leq |\mathbf{x}|\leq 1$: \begin{align} \tag{1}\Psi_S[\phi_S] &= \int D\phi_{r} \space\Psi_r[\phi_r] \int_{\phi|_{S_r}=\phi_r}^{\phi|_S=\phi_S}D\phi e^{-S} \\ \tag{2}&=\int_{\phi|_S = \phi_S} D\phi \space\space\Psi_{S_r}\big[\phi|_{S_r}\big]\space e^{-S}. \end{align} (I'm working in the wavefunctional picture, where $\Psi_S$ is a functional of the field configuration $\phi_S$ on $S$, and likewise for $\Psi_r$. The second line follows since integrating over $\phi_r$ removes the inner boundary condition.) Eq (2) tells us that any state $\Psi_S$ can be prepared by a path integral on the annulus $r\leq|\mathbf{x}|\leq 1$ with some appropriate insertion $\Psi_r[\phi|_{S_r}]$ on the inner boundary. Now take $r\to 0$, so the inner boundary shrinks to the origin. Hence we conclude that $\Psi_S$ can be prepared by an insertion at the origin. Question 1: How exactly is the $r\to 0$ limit defined? Each $\Psi_r$ is a functional depending on $\phi|_{S_r}$, whereas in the $r\to 0$ limit we generally expect to obtain an insertion depending on not just $\phi(0)$ but also its derivatives. I think the existence of an appropriate limit is the exact content of what people call the "state-operator correspondence", but I can't find any reference addressing this. Question 2: Why does the QFT need to be conformal for the above to work? Even without conformal symmetry, the path integral still defines some map from states on $S$ to states on $S_r$. 
Then we can take $r\to 0$ as before and the rest of the argument seems to work fine, showing that all states can be prepared by an insertion at the origin. In other words, I'm claiming that radial quantisation is perfectly well-defined in an arbitrary QFT. Unlike in CFT, in general the 1-parameter family of evolution maps from $S$ to $S_r$ will not be the exponential of some conserved charge, but I don't see why this would pose a problem to the above argument. I'm asking these two questions together because I suspect their answers might be related. For example, perhaps the limiting procedure in the state-operator correspondence is well-defined precisely when the theory is conformal. EDIT: another confusing point is that usually in QFT operators have to be smeared: they're not well-defined "at a point". So I don't know whether the state-operator correspondence is even well-defined, at least the way it's normally written. Answer: It is not true that any state on the sphere can be prepared by a local operator at the origin. For example, take the state $|\Psi\rangle=\phi(x)\phi(-x)|0\rangle$, defined on the unit sphere, where $0<|x|<1$ and $\phi$ is some local scalar operator. If it were true that $|\Psi\rangle=\mathcal{O}(0)|0\rangle$ for some local operator $\mathcal{O}$, then the correlation function $\langle 0|\mathcal{O}'(y)|\Psi\rangle$, where $\mathcal{O}'$ is a local operator, would be regular for all $|y|>0$ (the definition $\langle 0|\mathcal{O}'(y)|\Psi\rangle$ makes sense only for $|y|>1$, but since correlation functions are analytic, this is enough to ask questions also at $|y|<1$). But of course we know that $\langle 0|\mathcal{O}'(y)|\Psi\rangle=\langle 0|\mathcal{O}'(y)\phi(x)\phi(-x)|0\rangle$ has singularities at $y=\pm x$ and no singularity at $y=0$. As Connor Behan points out in the comments, only the dilatation eigenstates can be represented by local operators at a point.
For dilatation eigenstates, there is no limit to be taken as they evolve trivially. To make the argument precise, one needs to define what you mean by a local operator. The objection raised by the OP to Connor Behan's comment is addressed by the following observation. The dilatation eigenstates do not span the Hilbert space, they only span a dense subset. In other words, to reproduce a completely general state, one needs to take infinite sums of dilatation eigenstates. These sums will converge in the Hilbert space, but not in the space of local operators (by the argument in the beginning of this answer).
{ "domain": "physics.stackexchange", "id": 93783, "tags": "quantum-field-theory, operators, hilbert-space, conformal-field-theory, path-integral" }
Rendered Image Denoising
Question: I am learning about "Image Denoising using Autoencoders". So, now I want to build and train a model. Hence, when I read into how Nvidia generated the dataset, I came across: We used about 1000 different scenes and created a series of 16 progressive images for each scene. To train the denoiser, images were rendered from the scene data at 1 sample per pixel, then 8, 16, 32, 64, 128, 256, 512, 1024, 2048, 4096, 8192, 16384, 32768, 65536, and 131072 samples per pixel. I was trying to understand: 1) what is meant by rendering images at n samples per pixel? 2) How to do this in Python to generate the dataset? I have read some articles regarding this but could not form a confident opinion from a Data Science perspective. https://area.autodesk.com/tutorials/what-is-sampling/ Any leads would be much appreciated! Thanks Answer: Your link is to a paid course :) In ray tracing, too few samples will generate noisy images like the one at the top of this page. In fact, this link with the picture answers your question: https://chunky.llbit.se/path_tracing.html 2) Ray tracing is hard... but not impossible; Google for "python ray tracing module"... But something that looks close is easy: https://stackoverflow.com/questions/22937589/how-to-add-noise-gaussian-salt-and-pepper-etc-to-image-in-python-with-opencv Although on actual ray-traced images the noise varies with surface slope and environment. If you still want ray-traced noisy images, it is better to find tutorials for 3D modelling programs, like "ray tracing in 3D Studio MAX tutorial"
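To make point 1) concrete without a renderer: "n samples per pixel" means averaging n independent Monte Carlo estimates of each pixel, so the noise standard deviation shrinks like 1/sqrt(n). A rough Python sketch that mimics this with synthetic Gaussian noise (the noise model and the numbers are made up for illustration, not Nvidia's pipeline):

```python
import random

def render_at_spp(clean_value, spp, noise_sigma=0.5, seed=0):
    """Simulate Monte Carlo rendering of one pixel: average `spp` noisy
    samples. Each sample is the true value plus zero-mean Gaussian noise,
    so the average converges to the true value (std ~ 1/sqrt(spp))."""
    rng = random.Random(seed)
    samples = [clean_value + rng.gauss(0.0, noise_sigma) for _ in range(spp)]
    return sum(samples) / spp

clean = 0.5                               # "true" radiance of a pixel
noisy_1 = render_at_spp(clean, 1)         # 1 spp: a very noisy estimate
noisy_4096 = render_at_spp(clean, 4096)   # 4096 spp: close to clean
```

Pairing a low-spp image with a high-spp image of the same scene gives exactly the (noisy, clean) training pairs the quoted passage describes, even though real path-tracing noise is scene-dependent, as noted above.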
{ "domain": "datascience.stackexchange", "id": 7062, "tags": "python, autoencoder, image, nvidia" }
Is it true that the self-force prevents a classical particle from falling into a Coulomb potential? What is the physical explanation of this result?
Question: In 1943 CJ Eliezer published a paper claiming that the self-force prevents a zero angular momentum particle from ever reaching the center of an attractive Coulomb potential (and what's more that it can collide with a repulsive potential). As stated in the paper this result is somewhat counterintuitive, but the reasoning seems like a relatively straightforward differential equation argument. In thinking about this, the understanding of the self-force that I came to is that while you can derive it purely from energy and momentum conservation (and thus it must be valid in any theory of classical charged point particles), the resulting differential equation is better thought of as a consistency condition than as equations of motion (i.e. only solutions of the 3rd order self-force equation correspond to point particle sources that have solutions to Maxwell's equations that lose or gain the correct amounts of energy and momentum at the point particle corresponding to its motion). And while one would like to be able to define a dynamical system of a point particle coupled to the electromagnetic field with physically plausible boundary conditions, even eliminating runaway solutions doesn't prevent you from being forced to include waves coming in from past infinity (i.e. in pre-acceleration solutions, which do not have to have runaway behavior, the particle will move before an external force is applied, meaning that it must be gaining energy from radiation from past infinity). Assuming that Eliezer's result is correct, it seems like every trajectory of a particle in a stationary Coulomb potential similarly requires radiation adding energy to the particle (otherwise you can prove with a simple energy argument that it must fall in). So the question is, is my interpretation of the dynamics of the self-force correct and is there a physical or intuitive explanation for this extremely pathological behavior in the presence of a Coulomb potential? 
Answer: is my interpretation of the dynamics of the self-force correct and is there a physical or intuitive explanation for this extremely pathological behavior in the presence of a Coulomb potential? Eliezer makes his argument based on the equation with the Lorentz-Abraham-Dirac term. This term was originally (Lorentz) devised as an approximate way to account for the action of charged sphere on itself (one charged part acts on another charged part and as a result, there is a net force). His derivation shows the LAD term is only approximate way to account for the interaction of the parts. Similarly in antenna theory it is possible to show that third derivative is only approximate way to model internal interactions. Moreover, there are well-known cases where the model based on the LAD term fails completely (run-aways, preaccelerations). All this holds for particles with finite charge density (the particle has non-zero dimensions). If the particle is truly a point, there is no valid reason to even try to apply the LAD term to it. Its derivation is not valid for point particles (Dirac's paper has a "derivation" that is based on wrong premise - Poynting expressions for point particles). People have tried anyway and they consistently failed - there are always some fishy excuses made to make the edifice work apparently. Consistent theories of charged point particles were described many times long time ago, e.g. by Frenkel: J. Frenkel, Zur Elektrodynamik punktfoermiger Elektronen, Zeits. f. Phys., 32, (1925), p. 518-534. http://dx.doi.org/10.1007/BF01331692 In English, this article also explains it concisely: R. C. Stabler, A Possible Modification of Classical Electrodynamics, Physics Let- ters, 8, 3, (1964), p. 185-187. http://dx.doi.org/10.1016/S0031-9163(64)91989-4
{ "domain": "physics.stackexchange", "id": 21829, "tags": "electromagnetism, classical-mechanics, electromagnetic-radiation, classical-electrodynamics, classical-field-theory" }
Generating all possible permutations of the string
Question: This generates all possible permutations of the string. I am also unable to understand the time complexity of this problem. Can someone please explain the complexity to me? import java.util.ArrayList; public class permutations { public ArrayList<String> performPermutations(String s){ ArrayList<String> arrayList = new ArrayList<String>(); if (s == null) { return null; } else if (s.length() == 0) { arrayList.add(""); return arrayList; } else { for (int i = 0; i < s.length(); i++) { ArrayList<String> remaining = performPermutations(s.substring(0, i) + s.substring(i + 1)); for (int j = 0; j < remaining.size(); j++) { arrayList.add(s.charAt(i) + remaining.get(j)); } } return arrayList; } } public static void main(String[] args) { permutations p = new permutations(); ArrayList<String> arr = p.performPermutations("abcde"); for(int i = 0; i<arr.size();i++) { System.out.println(arr.get(i)); } } } Answer: There are quite a few ways this code can be improved: All class names should start with a capital, and be in CamelCase. Specifically, the class permutations should be Permutations. That way it doesn't look like a variable, as it does here: permutations p = new permutations(); performPermutations method Why does it declare a return type of ArrayList<String>? Is there anything special about the implementation of the list? It should just be List<String>. You have an unnecessary if-else structure, since each if statement returns a value. You can remove all of the elses. In the first if statement, you return null. This is a bad idea in many cases. An empty list would be more favorable, just in case any calling code wants to operate on it immediately afterwards. (Notice that your main method would fail with a NullPointerException if the passed-in string was null). You can combine the first and second if statements into one. This would be your "base" case: if (s == null || s.isEmpty()) { return arrayList; } Never compare a length to 0, use isEmpty() instead. 
Also, s is a poor name for a method parameter. Try to be creative and descriptive. Finally, there's no need to add an empty string if the list is empty. There's a lot of existing posts on string permutations you may find interesting. Seeing a recursive call embedded in two for loops is probably unnecessarily complex. How this method could look (but can still be improved): public List<String> performPermutations(final String s){ final List<String> result = new ArrayList<String>(); if (s == null) { return result; } for (int i = 0; i < s.length(); i++) { final List<String> remaining = performPermutations(s.substring(0, i) + s.substring(i + 1)); for (int j = 0; j < remaining.size(); j++) { result.add(s.charAt(i) + remaining.get(j)); } } return result; }
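On the unanswered complexity question: the method satisfies the recurrence T(n) = 1 + n·T(n-1) for the number of calls, and it outputs n! strings of length n, so the total work is Θ(n · n!). A quick empirical check of both counts, written as a hypothetical Python transcription of the same algorithm (Python only for brevity; the counting is language-independent):

```python
from math import factorial

def perms(s, counter):
    """Hypothetical Python transcription of the Java method above,
    with `counter` tracking the number of recursive calls."""
    counter[0] += 1
    if not s:
        return [""]
    out = []
    for i in range(len(s)):
        for rest in perms(s[:i] + s[i + 1:], counter):
            out.append(s[i] + rest)
    return out

counter = [0]
result = perms("abcd", counter)
# n! output strings, and T(n) = 1 + n*T(n-1) calls, i.e. sum_k n!/k!,
# which is roughly e * n!
assert len(result) == factorial(4)
assert counter[0] == sum(factorial(4) // factorial(k) for k in range(5))
```

So the call count grows like e·n! and the output alone costs Θ(n·n!) characters; no implementation can beat the factorial term because that is the size of the answer.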
{ "domain": "codereview.stackexchange", "id": 9791, "tags": "java, combinatorics" }
Higgs to 4 lepton decay width
Question: I am a bit confused with a statement I read. It talks about the $H\to 4\ell$ decay width (Higgs to 4 leptons). Now, the Higgs can decay through different modes. But, as far as I know it doesn't decay directly to 4 leptons (at least not at leading order). But anyway, it can also decay (generally) to Z Z* which then decay to 4 leptons. So, my question is: what does the h->4l decay width mean? Should I combine the decay width of the intermediate Z -> dilepton decays with the decay width of Higgs to Z Z to get the h->4l decay width? Answer: In general, one needs to compute the matrix elements of the whole diagram: the first one I have drawn in that answer to another question, then sum it with the matrix element for the second one, and any other relevant ones. Indeed $Z$ bosons are typically off-shell when the invariant mass of the final state is below twice the mass of $Z$. So, no, you can't just add decay widths.
{ "domain": "physics.stackexchange", "id": 41148, "tags": "quantum-field-theory, particle-physics, standard-model, higgs, large-hadron-collider" }
Why is this assumption made in deriving time dilation?
Question: My book makes the following assumptions (as far as I understood) in deriving length contraction and time dilation from the Lorentz transformation: Suppose there is an inertial frame $S$ and another frame $S'$ which has a velocity $v$ relative to $S$. There is a rod whose starting point is $a'$ and ending point is $b'$ in $S'$ and so $b'-a'$ is its proper length, and so its length can be calculated in $S$ using the Lorentz transformation; in using the transformation, the assumption that $t_2=t_1$ is made. As far as I understand, this assumption is based upon the fact that the measurements of both points have to be simultaneous in the $S$ frame. But in the case of deriving time dilation, the book assumes that $a'=b'$. However I find this very confusing. The proper time between two events in $S'$ is $t_2 '- t_1 '$ and so we can calculate the time difference in $S$ using the Lorentz transformation. Then why is $a'=b'$ assumed instead of $a=b$ as in the former derivation? I am quite new to SR and so forgive me if it seems a really silly question. Answer: I agree that the explanation as it stands now is a little misleading. The reason for this, I feel, is because it doesn't speak of specific events that we're taking into consideration. The proper time interval $\Delta\tau$ between two events is defined as the time interval measured by an observer for whom both events occur at the same location. In other words, it is the time interval between two events that can be measured by the same clock. Thus, it isn't so much an assumption as a definition. Just as the difference in the end-points of a moving object in $S$ can't be called its length unless these points were measured simultaneously, the difference in time intervals between two events in $S^\prime$ can't be called ``proper time'' unless the events occur at the same location in space. 
You could, of course, try to find a relationship between the time intervals of events that don't occur at the same point in $S^\prime$, but the corresponding time interval in $S$ will then also depend on the spatial separation of these events in $S^\prime$. This is not a useful quantity, however, since different observers would disagree about the numerical value of this time interval. Further reading: Why is the time interval between two events measured by two synchronised clocks seperated by a distance not proper?, and JohnRennie's answer therein. You might already know this, but I like to think of time dilation in terms of this simple thought experiment: Consider a `light' clock, which we make using a rod and an emitter and detector of light. A pulse of light is emitted at one end of the rod, reflected at the other end, and detected back where it was emitted. Let us place this clock in the frame $S^\prime$ where it is moving with respect to $S$ with a velocity $v$. A light clock at rest in $S^\prime$, observed from $S$. The light pulse emitted at one end of the rod is reflected at the other end and detected at back at the point of emission. Alice sits near the emitter/detector in $S^\prime$ and measures the time between emission and detection. This is the proper time between those events, as they occur at the same place. Bob, an observer in $S$, also measures the time interval between emission and detection. However, while the spatial coordinates of these events in $S^\prime$ are the same ($x^\prime_A$), they are different when viewed from $S$, as the clock is moving with respect to an observer in $S$. Alice sees the light traverse the length of the rod twice and be detected after some time $\Delta t^\prime$. We would like to relate this to the time interval that Bob measures. 
In order to relate these two observations, let us consider the two events: the emission and the detection of the light pulse. We can easily see that $\Delta x^\prime = 0$, and so using the appropriate (inverse) Lorentz Transformation, \begin{equation*} \begin{aligned} \Delta t &= \gamma \left( \Delta t^\prime + \frac{v}{c^2}\Delta x^\prime \right)\\ \text{i.e. } \Delta t &= \gamma \Delta t^\prime \end{aligned} \end{equation*} Thus, $\Delta t > \Delta t^\prime$; in other words, intervals of time measured by Bob in $S$ would appear to take longer than the same intervals as measured by Alice in $S^\prime$.
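A quick numeric sanity check of $\Delta t = \gamma \Delta t^\prime$ (in Python, with an arbitrary illustrative speed):

```python
from math import sqrt

def gamma(v, c=1.0):
    """Lorentz factor 1/sqrt(1 - v^2/c^2); v in units of c by default."""
    return 1.0 / sqrt(1.0 - (v / c) ** 2)

# Alice's clock ticks dt' = 1 between emission and detection (dx' = 0),
# with S' moving at an illustrative v = 0.6c relative to S.
dt_prime = 1.0
dt = gamma(0.6) * dt_prime   # inverse Lorentz transform with dx' = 0
assert dt > dt_prime         # Bob's interval is dilated: gamma(0.6) = 1.25
```

The factor 1.25 here is just the textbook value for $v = 0.6c$; any $0 < v < c$ gives $\gamma > 1$ and hence $\Delta t > \Delta t^\prime$.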
{ "domain": "physics.stackexchange", "id": 69178, "tags": "special-relativity, time-dilation" }
Linear dilatation
Question: $$ \delta L = L \alpha \delta\theta$$ is the equation of linear dilatation (approximately). A doubt just arose for me about this equation: see this image; imagine that the bar initially had its ends on the red lines, and then the bar expanded by, let's say, L/6 on each side. Do I need to use $$ \delta L = L/6 $$ or $$ \delta L = L/3 $$ ? Mathematically I know it should be the second option, but I am not sure if I am interpreting it right. Maybe if the bar is free at its sides, expansion in one direction occurs together with expansion in the opposite direction, so that $\delta L$ in the equation is just with respect to one side. Answer: Temperature is a monotonic function of the average molecular kinetic energy of a substance. When a substance is heated, molecules begin to vibrate and move more, usually creating more distance between themselves. When you heat a rod, the length of the rod will increase (in a homogeneous manner). In the formula, $\delta L$ is the change in length of the rod, i.e. final minus initial. In your example, if you heat up the rod, the length increases for each part (the distance between the molecules changes) and thus the total change in length is $L/3$. Note that we are talking about linear expansion, so it's not possible that the length will increase differently at different parts of the rod. So it doesn't matter how you take the increment. Just mark two points before and after the heating and note the increment.
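A numeric sketch with made-up values, to see that the $\delta L$ in the formula is the total change (both sides together), not the per-side growth:

```python
# Made-up numbers: alpha roughly that of steel, L and dtheta arbitrary.
alpha = 12e-6    # 1/K, linear expansion coefficient
L = 2.0          # m, initial length
dtheta = 50.0    # K, temperature rise

dL = L * alpha * dtheta   # total elongation: 0.0012 m
per_side = dL / 2         # a free bar grows symmetrically about its centre
```

So if each end moves out by some amount, the $\delta L$ that belongs in the formula is the sum of both end displacements, matching the $L/3$ answer above.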
{ "domain": "physics.stackexchange", "id": 73273, "tags": "thermodynamics" }
Can carbonic acid form a salt with pH < 7?
Question: Please read edit before answering Background I am currently studying in 10th grade. This question appeared in my science exam - Which acid does not form an acidic salt? Phosphoric acid Carbonic acid Hydrochloric acid Sulphuric acid I understood that all acids other than carbonic acid are strong, and the question is expecting me to answer carbonic acid, as at our level we are only taught SA + SB -> Neutral salt WA + SB -> Basic salt SA + WB -> Acidic salt My Efforts Because there was no mention of the WA + WB reaction in my book, while this was being taught in class, out of curiosity I searched online and found out that to find the nature of a salt of WA and WB, it is required to compare their dissociation constants (correct me if I am wrong here). I felt that there must be a base weaker than carbonic acid so that their reaction would result in an acidic salt. I have tried to find such bases online but because of my lack of understanding of this topic, I am unable to do so. Question Can carbonic acid form an acidic salt? If yes, then upon reacting with which base? Edit: By acidic and basic salts, I mean salts with pH < 7 and pH > 7 respectively. I did not know acidic and basic salts mean something else in chemistry. At our class level we are taught the meaning of acidic and basic salts in this context only Edit 2: Phosphoric acid is not a strong acid, my bad for writing it as such Answer: Both ammonium and sodium bicarbonate solutions are basic, as the anion $\ce{HCO3-}$ is a stronger base than $\ce{NH4+}$ is an acid, not even talking about $\ce{Na+}$. Carbonic acid could form a mildly acidic salt in the sense of forming an acidic solution with a base weaker than $\ce{HCO3-}$, e.g. with pyridine or aniline. But their water solutions would be highly hydrolyzed to the respective base and $\ce{H2CO3(aq)/CO2(aq)}$, as such bases need a very acidic pH. Carbon dioxide would be gradually escaping as gas. 
$$\ce{CO2(aq) + H2O(l) <=> H2CO3(aq)} \tag{1}$$ $$\ce{H2CO3(aq) + H2O(l) <=> HCO3-(aq) + H3O+(aq)}\tag{2}$$ $$\ce{C5H5N(aq) + H3O+(aq) <=> C5H5NH+(aq) + H2O(l)}\tag{3}$$ Very formally, the salt formula can be written as $\ce{(C5H5NH)HCO3}$, as it would to a large extent hydrolyze toward the left side of reactions (2), (3), giving pyridine and carbon dioxide. An acidic salt (aside from the pH of its solution) means a salt where not all acidic hydrogens of a multiprotic acid are neutralized. Like $\ce{KHSO4}$ (very acidic solution) $\ce{KH2PO4}$ (acidic solution) $\ce{NaHCO3}$ (mildly basic solution) $\ce{Na2HPO4}$ (basic solution) As the anion of such a partially neutralized acid is both an acid ($\ce{HA- <=> H+ + A^2-}$) and a base ($\ce{HA- + H+ <=> H2A}$), the pH of its solution depends on which of them is stronger. Similarly, basic salts are a combination of a salt and a hydroxide; e.g. bleaching powder is a complex combination of hydrated calcium chloride, hypochlorite and hydroxide. Wikipedia: Commercial calcium hypochlorite consists of anhydrous $\ce{Ca(ClO)2}$, dibasic calcium hypochlorite $\ce{Ca3(ClO)2(OH)4}$ (also written as $\ce{Ca(ClO)2·2Ca(OH)2}$), and dibasic calcium chloride $\ce{Ca3Cl2(OH)4}$ (also written as $\ce{CaCl2·2Ca(OH)2}$). Or there is $\ce{CuCO3 . Cu(OH)2}$ in the mineral malachite. How can someone figure out which of the bases is stronger between $\ce{HCO3-}$ and pyridine? And if we have the assertion that pyridinium bicarbonate will be acidic in nature, then is the reason that pyridine is a weaker base than $\ce{HCO3-}$, and hence the pyridinium ion is a stronger acid than $\ce{HCO3-}$? The values of acidity/basicity constants can be found tabulated. The most accessible way is to get them from the Wikipedia pages of the respective compounds; various chemistry-dedicated sites also list them for many compounds. 
If there is an equilibrium $$\ce{HA(aq) <=> H+(aq) + A-(aq)}$$ or $$\ce{B(aq) + H+(aq) <=> BH+(aq)},$$ then the acidity of $\ce{HA}$ or $\ce{BH+}$ and the basicity of $\ce{A-}$ or $\ce{B}$ are linked by the equation $$K_\mathrm{a} \cdot K_\mathrm{b} = K_\mathrm{w},$$ resp. (at $25\ ^\circ\mathrm{C}$) $$\mathrm{p}K_\mathrm{a} + \mathrm{p}K_\mathrm{b} = 14.$$ If there are two bases, then the stronger one forms by neutralization a weaker acid. Pyridinium with $\mathrm{p}K_\mathrm{a} = 5.23$ is a stronger acid than $\ce{CO2(aq)}$ with $\mathrm{p}K_\mathrm{a1}=6.35$, therefore pyridine is a weaker base than $\ce{HCO3-}$.
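The comparison in that last paragraph can be reproduced numerically from the quoted pKa values via $\mathrm{p}K_\mathrm{a} + \mathrm{p}K_\mathrm{b} = 14$, e.g. in Python:

```python
PKW = 14.0   # pKw of water at 25 degrees C

def pkb_of_conjugate_base(pka_of_acid):
    """pKb of A- given pKa of HA (or pKb of B given pKa of BH+)."""
    return PKW - pka_of_acid

pkb_pyridine = pkb_of_conjugate_base(5.23)   # pKa of pyridinium (quoted above)
pkb_bicarb   = pkb_of_conjugate_base(6.35)   # pKa1 of CO2(aq) (quoted above)

# Smaller pKb = stronger base: HCO3- (7.65) beats pyridine (8.77), so the
# pyridinium ion is the stronger acid and pyridinium bicarbonate is acidic.
assert pkb_bicarb < pkb_pyridine
```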
{ "domain": "chemistry.stackexchange", "id": 17926, "tags": "acid-base, ph, salt" }
At what frequency does the medium of air change from acoustic to light?
Question: I'm calculating the max doppler frequency of a fading channel in an in-room environment and looking at different carrier frequencies. Calculated as follows: F = vf/c Where F -> max doppler shift v -> Velocity of object f -> carrier freq c -> speed of waves in medium At lower frequencies (ultrasound), I use the speed of sound. At higher frequencies (GSM range), I use the speed of light. How do I know when the acoustic medium ends and the light medium begins? Answer: Acoustic and electromagnetic waves are totally different. There is no overlap whatsoever. You can obviously have very long EM waves; the best example I can think of would be old long-wave radio. In my country they still transmit at 225 kHz, which is quite easy to achieve with a sound wave too, but their nature is different. An EM wave propagates in vacuum while a sound wave does not. An EM wave is a transverse wave while sound is a longitudinal wave (mostly).
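So for the practical calculation in the question, the wave speed c is chosen by the physical type of the carrier, not by its frequency. A sketch with illustrative numbers (the speeds and carriers are my own picks, not from the question):

```python
def max_doppler_shift(v, f, c):
    """Maximum Doppler shift F = v*f/c for object speed v, carrier f, wave speed c."""
    return v * f / c

v = 1.5  # m/s, e.g. a person walking indoors (illustrative)

# Acoustic carrier: use the speed of sound in air (~343 m/s).
f_ultrasound = max_doppler_shift(v, 40e3, 343.0)   # 40 kHz carrier -> ~175 Hz

# Electromagnetic carrier: use the speed of light.
f_gsm = max_doppler_shift(v, 900e6, 3.0e8)         # 900 MHz carrier -> 4.5 Hz
```

Note how the acoustic channel sees a much larger Doppler spread despite the far lower carrier frequency, purely because c is about a million times smaller.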
{ "domain": "physics.stackexchange", "id": 10922, "tags": "speed-of-light, doppler-effect" }
rqt_gui error in perspective manager in ROS noetic
Question: I run my code flawlessly in ROS Melodic but in ROS Noetic, I am having the following error (maybe because of python3): Traceback (most recent call last): File "/opt/ros/noetic/lib/rqt_gui/rqt_gui", line 13, in <module> sys.exit(main.main()) File "/opt/ros/noetic/lib/python3/dist-packages/rqt_gui/main.py", line 61, in main return super( File "/opt/ros/noetic/lib/python3/dist-packages/qt_gui/main.py", line 614, in main perspective_manager.import_perspective_from_file( File "/opt/ros/noetic/lib/python3/dist-packages/qt_gui/perspective_manager.py", line 360, in import_perspective_from_file self._convert_values(data, self._import_value) File "/opt/ros/noetic/lib/python3/dist-packages/qt_gui/perspective_manager.py", line 422, in _convert_values self._convert_values(groups[group], convert_function) File "/opt/ros/noetic/lib/python3/dist-packages/qt_gui/perspective_manager.py", line 422, in _convert_values self._convert_values(groups[group], convert_function) File "/opt/ros/noetic/lib/python3/dist-packages/qt_gui/perspective_manager.py", line 422, in _convert_values self._convert_values(groups[group], convert_function) [Previous line repeated 1 more time] File "/opt/ros/noetic/lib/python3/dist-packages/qt_gui/perspective_manager.py", line 419, in _convert_values keys[key] = convert_function(keys[key]) File "/opt/ros/noetic/lib/python3/dist-packages/qt_gui/perspective_manager.py", line 429, in _import_value return QByteArray.fromHex(eval(value['repr(QByteArray.hex)'])) File "<string>", line 1, in <module> TypeError: arguments did not match any overloaded call: QByteArray(): too many arguments QByteArray(int, str): argument 1 has unexpected type 'str' QByteArray(Union[QByteArray, bytes, bytearray]): argument 1 has unexpected type 'str' Any idea on this? Or have you encountered this before? 
Thanks Originally posted by abata on ROS Answers with karma: 33 on 2020-11-10 Post score: 3 Original comments Comment by mikessut on 2020-11-11: I'm hitting exact same error with rqt. I'm able to get it to run by changing /opt/ros/noetic/lib/python3/dist-packages/qt_gui/perspective_manager.py:429 to return value['repr(QByteArray.hex)']. Still get a bunch of errors, but it at least runs. Comment by abata on 2020-11-11: My launch file was using a .perspective file. Also, when I tried to load it manually in rqt. The same errors happened. It was not necessary for me to load that file, then I modified the launch file :) Answer: Hey, I also encountered the same problem when porting some existing code to the ROS noetic. In my case I had a launch file which invoked rqt_gui with a so-called perspective file, like this: <node name="rqt_gui" pkg="rqt_gui" type="rqt_gui" respawn="false" args="--perspective-file $(find cyton_gamma_300_gazebo)/config/gazebo_effort_controller.perspective"/> The perspective file in turn looked similar to the following: ..., "mainwindow": { "keys": { "geometry": { "type": "repr(QByteArray.hex)", "repr(QByteArray.hex)": "QtCore.QByteArray('01d9d')", "pretty-print": " : I : I " }, "state": { "type": "repr(QByteArray.hex)", "repr(QByteArray.hex)": "QtCore.QByteArray('000000ff')", "pretty-print": " Lrqt_topic__TopicPlugin__1__TopicWidget Xrqt_publisher__Publisher__1__PublisherWidget 6MinimizedDockWidgetsToolbar " } }, .... We are interested in the key-value pairs looking similar to the following: "repr(QByteArray.hex)": "QtCore.QByteArray('000000ff')", and change it to the following: "repr(QByteArray.hex)": "b'000000ff'", I.e. after replacing the value with only byte content, like b'000000ff' and removing QtCore.QByteArray() creation, the rqt_gui is able to start without errors. No patching of noetic code required. Hope this helps! 
Originally posted by selyunin with karma: 96 on 2021-01-06 This answer was ACCEPTED on the original site Post score: 7 Original comments Comment by M Usamah Shahid on 2021-01-29: This worked perfectly for me. Thanks a lot
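The manual edit described above can also be scripted; a throwaway Python sketch (it assumes the values follow exactly the QtCore.QByteArray('<hex>') pattern shown above, so back up the .perspective file before rewriting it):

```python
import re

# Matches the problematic serialized values, e.g. QtCore.QByteArray('000000ff')
PATTERN = re.compile(r"QtCore\.QByteArray\('([0-9a-fA-F]*)'\)")

def fix_perspective_text(text):
    """Rewrite every QtCore.QByteArray('<hex>') value into the b'<hex>'
    form that Noetic's perspective loader accepts (the manual edit above)."""
    return PATTERN.sub(lambda m: "b'{}'".format(m.group(1)), text)

before = "\"repr(QByteArray.hex)\": \"QtCore.QByteArray('000000ff')\","
after = fix_perspective_text(before)
```

Running the function over the whole file contents and writing the result back performs the same substitution on every affected key; applying it a second time is a no-op, since the rewritten values no longer match the pattern.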
{ "domain": "robotics.stackexchange", "id": 35740, "tags": "ros, rqt-gui" }
Sort a Python list of strings where each item is made with letters and numbers
Question: I have a set of strings that all consist of one or two letters followed by one or two digits. I need to sort by the letters then by the numeric value of the digits. My solution is quite inefficient in terms of performance; how is it possible to improve it? import re unsorted_list = ['A1', 'A11', 'A12', 'A2', 'A3', 'B1', 'B12', 'EC1', 'EC21'] expected_result = ['A1', 'A2', 'A3', 'A11', 'A12', 'B1', 'B12', 'EC1', 'EC21'] unsorted_dict = {} for item in unsorted_list: match = re.match(r"([A-Z]+)([0-9]+)", item, re.I) letters_in_item = match.groups()[0] numbers_in_item = int(match.groups()[1]) if letters_in_item in unsorted_dict: unsorted_dict[letters_in_item].append(numbers_in_item) else: unsorted_dict[letters_in_item] = [numbers_in_item] sorted_dict = dict(sorted(unsorted_dict.items())) result = [] for key in sorted_dict: for value in sorted(sorted_dict[key]): result.append(key + str(value)) assert result == expected_result Answer: Because we need to sort the data, the best we can do in terms of performance is to settle for \$O(N \log N)\$. What you have here is called natural sort. Apart from some libraries that you might use (e.g. natsort - which also handles more complex cases), I'm going to try to clean up your existing implementation a bit. NOTES: Using a dictionary doesn't add any value or improvements to your code / algorithm; it rather adds overhead. As a rule of thumb, use dictionaries when you need to associate values with keys, so you can look them up efficiently (by key) later on; You don't need to use sorted multiple times; You could separate your code logic into different functions (benefits: easier to read, test, reuse, etc); Type-hints are always welcome; First, let's take out the regex you have: COMPILED = re.compile(r"([A-Z]+)([0-9]+)", re.I) For me, the biggest benefit to using re.compile() is being able to separate definition of the regex from its use. It might also offer some speed improvements but those cases are quite rare. 
Now, what do we need to do? Split letters group from digits group (regex will take care of this); Sort by letters group then by digits group; Normalize data; From what I can see you already know about sorted() but didn't use it to its fullest. Let's try and change that. Having the first two points done (split + sort) is as simple as: def nat_sort(items): # for me it feels more natural to use re.findall() instead of # re.match + re.group items_groups = [ COMPILED.findall(item)[0] for item in items ] return sorted(items_groups, key=lambda group: (group[0], int(group[1]))) In the above, items_groups is going to look like this: [('A', '1'), ('A', '11'), ('A', '12'), ('A', '2'), ('A', '3'), ('B', '1'), ('B', '12'), ('EC', '1'), ('EC', '21')] The next line is just going to sort our items by the first group (letters), then by the last one (digits), giving us the following: [('A', '1'), ('A', '2'), ('A', '3'), ('A', '11'), ('A', '12'), ('B', '1'), ('B', '12'), ('EC', '1'), ('EC', '21')] items_groups could also be sorted in-place to avoid creating an extra list: def nat_sort(items): items_groups = [ COMPILED.findall(item)[0] for item in items ] items_groups.sort(key=lambda group: (group[0], int(group[1]))) return items_groups The remaining thing we have to do is to transform the data we have into our desired format: def main(): items = ['A1', 'A11', 'A12', 'A2', 'A3', 'B1', 'B12', 'EC1', 'EC21'] return [''.join(item) for item in nat_sort(items)] Full code: import re COMPILED = re.compile(r"([A-Z]+)([0-9]+)", re.I) def nat_sort(items): items_groups = [ COMPILED.findall(item)[0] for item in items ] items_groups.sort(key=lambda group: (group[0], int(group[1]))) return items_groups def main(): items = ['A1', 'A11', 'A12', 'A2', 'A3', 'B1', 'B12', 'EC1', 'EC21'] return [''.join(item) for item in nat_sort(items)] if __name__ == '__main__': print(main()) Or as recommended in the comments, have everything contained within the same function (BONUS: type-hints included): import re 
from typing import List COMPILED = re.compile(r"([A-Z]+)([0-9]+)", re.I) def nat_sort(items: List[str]) -> List[str]: items_groups = [ COMPILED.findall(item)[0] for item in items ] items_groups.sort(key=lambda group: (group[0], int(group[1]))) return [''.join(item) for item in items_groups] def main() -> List[str]: items = ['A1', 'A11', 'A12', 'A2', 'A3', 'B1', 'B12', 'EC1', 'EC21'] return nat_sort(items) if __name__ == '__main__': print(main()) Not sure if this comes with an improvement in terms of complexity (my complexity analysis is quite rusty) but my suggestion has \$O(N \log N)\$ time complexity and \$O(N)\$ space complexity. There are probably better, easier-to-read, more intuitive ways of doing this but this felt natural to me.
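One of those "probably better" ways alluded to at the end: pass a key function straight to sorted() and skip the intermediate tuple list entirely (a sketch using the same regex idea):

```python
import re
from typing import List, Tuple

def natural_key(item: str) -> Tuple[str, int]:
    """Split e.g. 'EC21' into ('EC', 21) so tuples sort letters-then-number."""
    letters, digits = re.match(r"([A-Za-z]+)([0-9]+)", item).groups()
    return letters, int(digits)

items = ['A1', 'A11', 'A12', 'A2', 'A3', 'B1', 'B12', 'EC1', 'EC21']
result = sorted(items, key=natural_key)
# result == ['A1', 'A2', 'A3', 'A11', 'A12', 'B1', 'B12', 'EC1', 'EC21']
```

Since sorted() calls the key function once per element and never rebuilds the strings, this keeps the same O(N log N) time and O(N) space while reading closer to the problem statement.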
{ "domain": "codereview.stackexchange", "id": 43073, "tags": "python, python-3.x, sorting" }
kmeans++ for arbitrary metric spaces and general potential function
Question: I was reading this popular paper "k-means++: The Advantages of Careful Seeding". It appeared in SODA 2007. Since this technique is the most popular clustering technique, I am hoping that my question can be answered. I found two versions of the paper (which I find contradictory): http://ilpubs.stanford.edu:8090/778/1/2006-13.pdf https://theory.stanford.edu/~sergei/papers/kMeansPP-soda.pdf I have a problem with section 5 ("Generalization") of the paper. The section describes a generalization of the kmeans++ algorithm for an arbitrary metric space with a general potential function $ \Phi^{[\ell]} \equiv \sum_{x \in \mathcal{X}} \min_{c \in \mathcal{C}} \| x-c\|^{\ell}$, where $||x-c||$ denotes the distance in any metric space, $\mathcal{X}$ is the data-set, and $\mathcal{C}$ is the center set of size $k$. Consider Lemma 5.3 of the first version. It says: "For a cluster $A$, if we choose a point $p$ uniformly at random, then the expected cost of the cluster (with $p$ as a center) is at most $4 \cdot OPT(A)$". Before stating this lemma, they explicitly say that this result is independent of the value of $\ell$. However, a contradictory result is mentioned in Lemma 5.1 of the second version, which says: "For a cluster $A$, if we choose a point $p$ uniformly at random, then the expected cost of the cluster (with $p$ as a center) is at most $2^{\ell} \cdot OPT(A)$". So far, I agree with the second version of the paper. However, it is possible that the first version came later with corrections. If the first version's result is also correct, how can it be proved? Note: Both these versions are highly cited and appear at the top in a Google search, so I doubt they are incorrect. Also, neither version mentions anything about corrections made to the paper. Answer: Here's an example that suggests that the stronger claim in the earlier version (Lemma 5.3) is false. 
I've only given a cursory look at the papers, so please check this carefully to make sure I am understanding correctly, thanks. Consider a cluster $X$ consisting of a rooted star: a root $r$ and $n-1$ nodes $v_1,v_2,\ldots, v_{n-1}$ such that $d(r, v_i) = 1$ for each $i$, and $d(v_i, v_j) = 2$ for each $i, j$ with $i\ne j$. $OPT$ takes the center to be $r$, at cost $\sum_{i=1}^{n-1} 1^\ell = n-1$. But suppose the center $c$ is chosen at random. Then with probability $1-1/n$ the center is one of the $v_i$'s (not the root), and then the cost is $1+\sum_{j\ne i} 2^\ell \ge (n-1)2^\ell$. So the expected cost is at least $(1-1/n) 2^\ell \,OPT$. BTW, the published version of the manuscript appears to be here: https://dl.acm.org/doi/abs/10.5555/1283383.1283494
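The star example's arithmetic is easy to sanity-check numerically. The sketch below (an editorial addition; the values of $n$ and $\ell$ are arbitrary choices) computes the expected cost of a uniformly random center in the rooted-star cluster and compares it to $OPT$:

```python
# Rooted-star cluster: root r with d(r, v_i) = 1 for each of the n-1
# leaves, and d(v_i, v_j) = 2 between distinct leaves.
def expected_cost_ratio(n, ell):
    opt = n - 1                       # center at the root: (n-1) * 1^ell
    cost_root = n - 1                 # picked with probability 1/n
    cost_leaf = 1 + (n - 2) * 2**ell  # picked with probability (n-1)/n
    expected = (cost_root + (n - 1) * cost_leaf) / n
    return expected / opt

# For ell = 3 the ratio approaches 2^3 = 8, exceeding the constant
# bound of 4 claimed in the earlier version; for ell = 2 it tends to 4.
print(expected_cost_ratio(1000, 3))
print(expected_cost_ratio(1000, 2))
```

For large $n$ the ratio is $(2 + (n-2)2^\ell)/n \to 2^\ell$, matching the $(1-1/n)\,2^\ell\,OPT$ lower bound in the answer.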
{ "domain": "cstheory.stackexchange", "id": 5245, "tags": "approximation-algorithms, randomized-algorithms, clustering, metric-spaces" }
Problem with subscribing to a topic that contains slash
Question: I noticed something weird that seems like a bug. When a subscriber listens to a topic that contains a slash and that a publisher advertises the "parent" topic (the part before the slash), then the subscriber changes the topic to what it listens to match the publisher. Here is an example. I open three terminals that will exchange on topic /parent/child. Step 1: In Terminal 1, I start rostopic echo /parent/child Step 2: In Terminal 2, I start rostopic echo /parent Step 3: In Terminal 3, I start a node with publishers for /parent and parent/child Step 4: In Terminal 3, the node publishes on both topics The result is that only the subscriber in Terminal 2 gets the message. In addition, rxgraph shows that Terminal 1 is then subscribing to /parent instead of /parent/child. In Terminal 1, I get this instead of the message: no field named [/child] What's happening here? Here is the code: #include <ros/ros.h> #include <std_msgs/String.h> int main(int argc, char** argv) { ros::init(argc, argv, "MyPublisher"); ros::NodeHandle nh("~"); ros::Rate oneSecond(1); // Creates a publisher to see if it replicates the bug ros::Publisher pubParent = nh.advertise<std_msgs::String>("/parent", 10); // Waits for the subscribers to connect oneSecond.sleep(); ros::Publisher pubChild = nh.advertise<std_msgs::String>("/parent/child", 10); // Waits for the subscribers to connect oneSecond.sleep(); std_msgs::String msgChild; msgChild.data = "Hello World! (on /parent/child)"; pubChild.publish(msgChild); ROS_INFO("Message published (on /parent/child)"); std_msgs::String msgParent; msgParent.data = "Hello World! 
(on /parent)"; pubParent.publish(msgParent); ROS_INFO("Message published (on /parent)"); // Waits for the user to manually stop the node (to give time to see the update in rxgraph) ros::spin(); return 0; } Here is the info about the nodes: Terminal 1: Node [/rostopic_27422_1352897219007] Publications: * /rosout [rosgraph_msgs/Log] Subscriptions: * /parent [std_msgs/String] Services: None Pid: 27422 Connections: * topic: /rosout * to: /rosout * direction: outbound * transport: TCPROS * topic: /parent * to: /MyPublisher (http://dfki-benoit:33822/) * direction: inbound * transport: TCPROS Terminal 2: Node [/rostopic_27405_1352897218194] Publications: * /rosout [rosgraph_msgs/Log] Subscriptions: * /parent [std_msgs/String] Services: None Pid: 27405 Connections: * topic: /rosout * to: /rosout * direction: outbound * transport: TCPROS * topic: /parent * to: /MyPublisher (http://dfki-benoit:33822/) * direction: inbound * transport: TCPROS Terminal 3: Node [/MyPublisher] Publications: * /parent [std_msgs/String] * /parent/child [std_msgs/String] * /rosout [rosgraph_msgs/Log] Subscriptions: None Services: * /MyPublisher/get_loggers * /MyPublisher/set_logger_level Pid: 27455 Connections: * topic: /rosout * to: /rosout * direction: outbound * transport: TCPROS * topic: /parent * to: /rostopic_27405_1352897218194 * direction: outbound * transport: TCPROS * topic: /parent * to: /rostopic_27422_1352897219007 * direction: outbound * transport: TCPROS EDIT (12/12/12): Following the answer from @Dirk Thomas, I tried his Python script. The behavior that I observe is the following. If I start the standard Python listener and rostopic echo, they both receive the messages on /parent/child. However, if I start only rostopic echo, then it "switches" to /parent and tries to read the field /child. In addition, rostopic echo does not print the WARNING: topic [/parent/child] does not appear to be published yet if the standard listener was started before. 
These observations mean that the presence of a standard listener changes the behavior of rostopic echo. In my opinion, this is unintuitive and most importantly not "repeatable". It means that my scripts using rostopic echo would yield different results if my colleagues started some listeners on the ROS network, for example. Originally posted by Benoit Larochelle on ROS Answers with karma: 867 on 2012-11-12 Post score: 1 Original comments Comment by Lorenz on 2012-11-12: What's the output of rosnode info on the two subscriber nodes? Comment by dornhege on 2012-11-12: Does the subscription for 1 change in between the other steps? I cannot reproduce this with rostopic echo/pub. For me in step 4 both rostopic echo get the message. Comment by Benoit Larochelle on 2012-11-13: No, the rosnode info for Terminal 1 does not change between steps, other than adding the publisher when I start it in Step 2. The subscriber in Terminal 1 thus starts on /eoi. Comment by dornhege on 2012-11-13: Can you reproduce the exact steps (after a roscore restart) using standard messages? This looks like a weird bug to me. Comment by Benoit Larochelle on 2012-11-13: Ok, I'll come up with a minimal example that shows this behavior Comment by dornhege on 2012-11-14: I can confirm this by following step 1 and 2 and then rostopic pub -r 5 /parent/ std_msgs/String -- "{data: Hello}". It only works after a roscore restart. When a publisher for /parent and /parent/child are present before the subscribers are started everything works fine. Comment by dornhege on 2012-11-14: Either you are not allowed to have a topic at /parent while there is one at /parent/child (thus parent becoming a namespace) or this is a bug that should be filed. In any case something is wrong as I can get both subscriptions to run depending on the order. It should be always or never. Comment by Lorenz on 2012-11-14: I think that's definitely a bug and should be filed. 
Even if that's desired behavior, it is not intuitive and at least a warning or error should be thrown. Nice catch... Comment by Benoit Larochelle on 2012-11-15: I filed this issue: https://github.com/ros/ros_comm/issues/24 Comment by Benoit Larochelle on 2012-12-11: I would suggest that rostopic echo take a -f option to list the fields to be displayed within a topic. Not providing this option would display all fields (standard current behavior) Answer: As guessed by felix k before, this is intended behavior of rostopic echo. It subscribes to parent topics and attempts to extract a field with that name from the messages. When the subscription is performed in a node it will not automagically subscribe to the parent topic. You can try the following Python script sub.py with the argument /parent or /parent/child instead of invoking rostopic: import sys import rospy from std_msgs.msg import String def callback(data): rospy.loginfo(rospy.get_caller_id() + 'Incoming message: %s',data.data) rospy.init_node('listener', anonymous=True) rospy.Subscriber(sys.argv[1], String, callback) rospy.spin() --- Originally posted by [Dirk Thomas](https://answers.ros.org/users/2575/dirk-thomas/) with karma: 16276 on 2012-11-21 This answer was **ACCEPTED** on the original site Post score: 0 --- ### Original comments **Comment by [Benoit Larochelle](https://answers.ros.org/users/216/benoit-larochelle/) on 2012-12-11**:\ I added the results of an experiment with this script to the question. I'd be interested to know if you expected this behavior, or if something weird is going on. **Comment by [felix k](https://answers.ros.org/users/606/felix-k/) on 2012-12-12**:\ My guess: With the python subscriber the `/full/topic` is announced to the system, despite missing any publisher. The `rostopic echo` sees this (when started afterwards) and accepts it as the topic, therefore no warning.
{ "domain": "robotics.stackexchange", "id": 11714, "tags": "ros, namespace, topic, publisher" }
Stationary states in quantum mechanics
Question: I know that the general one-dimensional Schroedinger equation is given by: $$-\frac{\hbar^2}{2m} \frac{\partial^2\Psi(x,t)}{\partial x^2} + U(x)\Psi(x,t) = i\hbar \frac{\partial \Psi(x,t)}{\partial t} $$ The source I am using mentions: "If the potential energy function is nonzero, these sinusoidal waves do not satisfy the Schroedinger equation". Why? I thought the reason why the general equation included $U(x)$ was exactly to address this problem. Then it continues with: "However, we can still write the wave function for a state of definite energy $E$ in the following form" $$\Psi(x,t) = \psi(x)e^{iEt/\hbar} $$ I understand that this comes from $$ \Psi(x,t) = Ae^{i(kx - \omega t)} $$ But why must a state of definite energy have a nonzero potential energy? Answer: The form $\Psi(x,t)=\psi(x)e^{-iEt/\hbar}$ does not "come" from $Ae^{i(kx-\omega t)}$ but from the basic ansatz $\Psi(x,t)=\psi(x)\Phi(t)$ used to solve partial differential equations. The form $\Psi(x,t)=\psi(x)\Phi(t)$ is used to convert the partial differential equation to a pair of ordinary differential equations connected by a separation constant. Inserting $\Psi(x,t)=\psi(x)\Phi(t)$ into the time-dependent Schrodinger equation produces a result easily rearranged to \begin{align} \frac{1}{\psi(x)}\left(-\frac{\hbar^2}{2m}\frac{d^2\psi(x)}{dx^2}+U(x)\psi(x)\right)=\frac{1}{\Phi(t)}\left(i\hbar \frac{d\Phi(t)}{dt}\right)=E \end{align} with $E$ the separation constant (to be identified with the energy). It is possible to solve the $\Phi$ equation immediately as it is independent of the potential: $$ \Phi(t)=e^{-iEt/\hbar}\, , $$ from which $\Psi(x,t)=\psi(x)e^{-iEt/\hbar}$ follows, with $\psi(x)$ a solution to the time-independent Schrodinger equation $$ -\frac{\hbar^2}{2m}\frac{d^2\psi(x)}{dx^2}+U(x)\psi(x)=E\psi(x)\, . \tag{1} $$ The form $\Psi(x,t)=\psi(x)e^{-iEt/\hbar}$ thus holds for arbitrary potential $U(x)$ provided $\psi(x)$ satisfies (1).
{ "domain": "physics.stackexchange", "id": 39506, "tags": "quantum-mechanics, wavefunction, schroedinger-equation" }
Work done on person by escalator when person is climbing up the stairs
Question: Suppose a person walks up the stairs of an escalator while it (the escalator) is moving upwards. What is the work done by the escalator on the person? Is it the same as if the person were standing still? I think it is not the same because the person is pushing the stairs down, so the normal force must be greater than the weight of the person, which means the escalator does greater work. I think that part of this greater work is converted into kinetic energy, since the person is moving with respect to the escalator. Is my reasoning correct? Answer: The escalator has to perform more work in the moment the person is accelerating upwards, but then again less work when the person stops at the top, i.e. decelerates. In total, the work performed by the escalator is therefore the same as if the person had been standing still the whole time.
{ "domain": "physics.stackexchange", "id": 43982, "tags": "newtonian-mechanics, forces, energy, reference-frames, work" }
Speed up file search
Question: I want to change the conversion from List<string> to the StringCollection type. To make files move faster. try { List<string> Picture = new List<string>(); List<string> PaThS = new List<string>(); string[] SVF = { Environment.GetFolderPath(Environment.SpecialFolder.ApplicationData), Environment.GetFolderPath(Environment.SpecialFolder.LocalApplicationData) }; foreach (var paths in SVF) PaThS.AddRange(Directory.GetDirectories(paths)); foreach (var e in PaThS) Picture.AddRange(Directory.GetFiles(e, "*.jpg", SearchOption.AllDirectories)); } catch{} } Actually by subject. Modify the StringCollection Answer: Hard disk accesses are measured in ms. RAM accesses in ns, i.e. they are roughly one million times faster! This means that List<string> manipulations (which happen in RAM) are much much faster than calls to Directory.GetDirectories or Directory.GetFiles that access the storage (hard disk, usb stick, SSD ...). Replacing List<string> by StringCollection won't make any difference! List<string> and StringCollection both store data in the main memory. They don't move files and they don't search files. It makes no sense to first add data to an array just to copy it to a list afterwards. Add it to the list directly with a collection initializer: var paths = new List<string> { Environment.GetFolderPath(Environment.SpecialFolder.ApplicationData), Environment.GetFolderPath(Environment.SpecialFolder.LocalApplicationData) }; What kind of crazy casing is PaThS? The usual C# style for variables is camelCase, i.e. the first character is lower case, succeeding words start with an upper-case letter.
{ "domain": "codereview.stackexchange", "id": 25253, "tags": "c#" }
Bubble sorting in C
Question: I've been into algorithms lately, so I wanted to apply a simple algorithm using a language of my choice, which was C in this case. I've implemented the bubblesort algorithm (for strings) in a simple program: #include <stdio.h> #include <stdbool.h> #include <string.h> #define NUM_NAMES (5) void sort(char ** sorted, char ** strs, const size_t size, const bool ascending) { // using the bubble sort algorithm sorted[0] = strs[0]; char ** f = strs; for(int u=0; u < size; ++u) { for(int i = 0; i < size - 1; ++i) { if (strcmp(sorted[i], f[i+1]) <= 0) { // sorted[i] is first sorted[i+1] = f[i+1]; } else { // f[i+1] is first char *temp = f[i]; // just in case f == sorted, they'll point to the same thing .. sorted[i] = f[i+1]; sorted[i+1] = temp; } } f = sorted; } if (!ascending) { // reverse it ! char *reversed[size]; // temporarily int i1 = 0, i2 = size - 1; while (i1 < size && i2 >= 0) { // one condition would do. Only to be thread-safe reversed[i1] = sorted[i2]; i1++;i2--; } for(int i = 0; i < size; ++i) sorted[i] = reversed[i]; // putting it to sorted } } void printNames(char * q, char ** names, int num) { printf("\t%s\n", q); for(int i = 0; i < num; ++i) printf("%d: %s\n", i+1, names[i]); for(int i = 0; i < 30; ++i) printf("="); printf("\n"); } int main(int argc, char const *argv[]) { char * names[] = { "This should be Second", "This should be First", "This should be before the last", "Wait .. That's the last!", "This should be Third" }; char *names_ordered[NUM_NAMES]; printNames("Original", names, NUM_NAMES); sort(names_ordered, names, NUM_NAMES, true); printNames("Ascending", names_ordered, NUM_NAMES); sort(names_ordered, names, NUM_NAMES, false); printNames("Descending", names_ordered, NUM_NAMES); return 0; } I want to know if there's a problem with the sort function, especially in the reversing part, because I think that that's not efficient. 
Answer: Reversing is not very efficient indeed (but who cares about an extra linear pass when bubble sort itself is quadratic?). I would rather account for the requested order during the comparison: result = strcmp(...); if (!ascending) result = -result; Initialization f = strs is very confusing, because later on f is reinitialized to sorted. I'd initialize it to sorted always, as close to use as possible. Something like for(int u=0; u < size; ++u) { char ** f = sorted; for(int i = 0; i < size - 1; ++i) { ... } } One-character names, especially unmotivated like f, u and q should be avoided. You really have to figure out what the variable is, and name it accordingly.
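The first suggestion, folding the requested order into the comparison instead of reversing afterwards, can be sketched as follows (in Python rather than C, purely to keep the sketch short; the function name is illustrative):

```python
def bubble_sort(strs, ascending=True):
    """Bubble sort that honors the direction in the comparison itself,
    so no separate reversing pass is needed."""
    out = list(strs)  # don't mutate the caller's list
    n = len(out)
    for _ in range(n):
        for i in range(n - 1):
            # compare adjacent items; flip the test for descending order
            if ascending:
                swap = out[i] > out[i + 1]
            else:
                swap = out[i] < out[i + 1]
            if swap:
                out[i], out[i + 1] = out[i + 1], out[i]
    return out

names = ["beta", "alpha", "gamma"]
print(bubble_sort(names))                    # ['alpha', 'beta', 'gamma']
print(bubble_sort(names, ascending=False))   # ['gamma', 'beta', 'alpha']
```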
{ "domain": "codereview.stackexchange", "id": 14997, "tags": "algorithm, c, sorting" }
Moment of inertia of rods
Question: OK, so I'm extremely comfortable with calculating the moment of inertia of continuous bodies, but how do we do it for a system that is not continuous? For example, if 3 rods of mass $m$ and length $l$ are joined together to form an equilateral triangle, what will be the moment of inertia about an axis passing through its centre of mass, perpendicular to the plane? I know that the moment of inertia of each rod is $ml^2/12$ and that the c.o.m. is at the centroid. Also, if 2 rods form a cross, then to calculate the moment of inertia about their point of intersection, would it be correct to sum up the individual moments of inertia of the rods? Answer: The moment of inertia for a system of $n$ point masses, $m_i$, at distances $r_i$ from the pivot is simply: $$ I = \sum m_i r_i^2 \tag{1} $$ We normally calculate $I$ by integration, i.e. we take each point mass to be an infinitesimal element of our continuous object and integrate to add up the moments of inertia of all those elements. In your case let's call the three rods $A$, $B$ and $C$, then our initial equation (1) can be written as: $$ I = \sum m_{Ai} r_{Ai}^2 + \sum m_{Bi} r_{Bi}^2 + \sum m_{Ci} r_{Ci}^2 $$ where all we've done is divide up our sum into the infinitesimal parts that belong to the three masses. But from equation (1) we know that $I_A = \sum m_{Ai} r_{Ai}^2$, and likewise for $B$ and $C$, so the total moment of inertia is just: $$ I = I_A + I_B + I_C $$ So just calculate the separate moments of inertia for all the objects in your system then add them together. In your particular case the objects are identical, so the total is just the moment of inertia of a single rod (about the triangle's centre, via the parallel-axis theorem) multiplied by three.
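The additivity argument can be sanity-checked numerically for the equilateral triangle: discretize each rod into point masses and apply equation (1) directly. The sketch below is an editorial addition; $m = l = 1$ are arbitrary choices, and the analytic value per rod follows from $ml^2/12$ plus the parallel-axis shift by the apothem $l/(2\sqrt{3})$:

```python
import math

def triangle_inertia(m=1.0, l=1.0, npts=20_000):
    """Moment of inertia of three uniform rods forming an equilateral
    triangle, about the perpendicular axis through the centroid,
    computed as a plain sum m_i * r_i^2 over discretized point masses."""
    R = l / math.sqrt(3.0)  # centroid-to-vertex distance
    verts = [(R * math.cos(a), R * math.sin(a))
             for a in (math.pi / 2,
                       math.pi / 2 + 2 * math.pi / 3,
                       math.pi / 2 + 4 * math.pi / 3)]
    total = 0.0
    for k in range(3):
        (x0, y0), (x1, y1) = verts[k], verts[(k + 1) % 3]
        dm = m / npts
        for j in range(npts):
            t = (j + 0.5) / npts  # midpoint rule along the rod
            x = x0 + t * (x1 - x0)
            y = y0 + t * (y1 - y0)
            total += dm * (x * x + y * y)
    return total

# Analytic value: each rod contributes m*l^2/12 + m*(l/(2*sqrt(3)))^2
# = m*l^2/6, so the three rods give 3 * m*l^2/6 = m*l^2/2.
print(triangle_inertia())   # ~0.5
```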
{ "domain": "physics.stackexchange", "id": 16862, "tags": "homework-and-exercises, moment-of-inertia" }
How to Correct wrong Environment setup?
Question: I was trying to set up my environment variables and typed in the following code: echo "source /opt/ros/electric/setup.bash" >> ~/.bashrc source ~/.bashrc Later I realised I only had groovy, so I changed electric to groovy in the above, but whenever I try to run ROS or Baxter this message gets displayed: bash: /opt/ros/electric/setup.bash: No such file or directory My question is: how do I disable the wrong environment setup command so that the irritating message is not displayed again and again? Thanks in advance. Originally posted by GoBaxter on ROS Answers with karma: 15 on 2014-09-30 Post score: 0 Answer: Just remove all lines containing electric from your ~/.bashrc with a text editor. The second message seems to indicate that roscore is not running. Originally posted by dornhege with karma: 31395 on 2014-09-30 This answer was ACCEPTED on the original site Post score: 0 Original comments Comment by GoBaxter on 2014-09-30: Where is ~/.bashrc located? Comment by dornhege on 2014-09-30: in your home directory Comment by GoBaxter on 2014-09-30: Thanks. That solved my problem.
{ "domain": "robotics.stackexchange", "id": 19573, "tags": "ros, environment, baxter" }
Hardy-Weinberg Color-blind
Question: In a city, 4% of the male population is color blind. What fraction of the females are (a) color-blind carriers, (b) color blind? Suppose the city is in Hardy-Weinberg equilibrium. My progress: 4% of males are color blind => $p=F(cb~allele)=0.04$ and therefore $q=F(not~cb)=0.96$. Since HW equilibrium stands, the allele frequencies among females are the same as among males. Then (a) $2pq=2*0.04*0.96=0.0768=7.68\%$ and (b) $q^2=0.9216=92.16\%$. Am I correct? Answer: A: wild-type allele / a: color-blind allele. Because color blindness is recessive and X-linked, your assumption $p=F(a)=4\%$ is correct, as men have only one copy of the allele. Subsequently $F(A)=q=1-p=0.96$ is also correct. Therefore: a) $F(Aa)=2pq=7.68\%$ is correct, and b) is wrong: a is the color-blind allele and $F(a)=0.04$, therefore it's $p^2=0.04^2=0.0016=0.16\%$. 92% color blind among females seems a bit high :).
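The corrected arithmetic is short enough to spell out as code (an editorial sketch; the frequencies come from the question):

```python
# X-linked recessive trait: the allele frequency equals the frequency of
# affected males, since males carry a single X chromosome.
p = 0.04          # color-blind allele (a)
q = 1 - p         # wild-type allele (A)

carriers = 2 * p * q   # heterozygous females, Aa
affected = p ** 2      # homozygous females, aa

print(f"carriers: {carriers:.2%}")  # 7.68%
print(f"affected: {affected:.2%}")  # 0.16%
```

Swapping $p$ and $q$ in part (b), as the question did, is exactly what produces the implausible 92% figure.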
{ "domain": "biology.stackexchange", "id": 3739, "tags": "genetics, homework, hardy-weinberg" }
Compile-time data structure generator
Question: In response to another recent question I mentioned that one mechanism to avoid runtime overhead for creating a data structure was to create it at compile time and use it directly. Since there was some interest in that concept, I thought I would write code to show how that might be done and submit it for review here. The theory There are two pieces to this code. The first is code that is used to construct an AVL tree of arbitrary data. For illustration purposes, the data inserted into the tree are the names of the months (in English) and the number of days in each month. The month name is used as the key for lookup and the number of days is the associated data retrieved. This first piece of code constructs the tree and then emits it as a C source code representing the resulting structure. The second piece of code is the part that would be compiled and linked with the object file resulting from the first piece. makeavl.c #include <stdio.h> #include <stdlib.h> #include <string.h> typedef struct node { char *key; int data; struct node *left; struct node *right; } node; node *new_node(char *key, int data) { node *p = malloc(sizeof(*p)); if (p != NULL) { p->key = key; p->data = data; p->left = NULL; p->right = NULL; } return p; } int max(int a, int b) { return a > b ? 
a : b; } int nodecount(node *p) { int count = 0; if (p != NULL) { ++count; count += nodecount(p->left); count += nodecount(p->right); } return count; } int relheight(node *p, int count) { if (p == NULL) return count; return max(relheight(p->left, count+1), relheight(p->right, count+1)); } int height(node * p) { return relheight(p, 0); } node *rotate_right(node * p) { node *q = p->left; p->left = q->right; q->right = p; return q; } node *rotate_left(node * p) { node *q = p->right; p->right = q->left; q->left = p; return q; } node *balance(node * p) { if (height(p->left) - height(p->right) == 2) { if (height(p->left->right) > height(p->left->left)) p->left = rotate_left(p->left); return rotate_right(p); } else if (height(p->right) - height(p->left) == 2) { if (height(p->right->left) > height(p->right->right)) p->right = rotate_right(p->right); return rotate_left(p); } return p; } node *insert(node * p, char *key, int data) { if (p == NULL) return new_node(key, data); int keycmp = strcmp(key, p->key); if (keycmp < 0) p->left = insert(p->left, key, data); else if (keycmp > 0) p->right = insert(p->right, key, data); else p->data = data; return balance(p); } void free_tree(node * p) { if (p == NULL) return; free_tree(p->left); free_tree(p->right); free(p); } void emit(FILE * out, node * p, const char *name, int *count) { if (p != NULL) { fprintf(out, "/* %d */ ", *count); fprintf(out, "{ \"%s\", %d, ", p->key, p->data); ++(*count); if (p->left) { fprintf(out, "&%s[%d], ", name, *count); } else { fprintf(out, "NULL, "); } if (p->right) { fprintf(out, "&%s[%d] },\n", name, *count + nodecount(p->left)); } else { fprintf(out, "NULL },\n"); } emit(out, p->left, name, count); emit(out, p->right, name, count); } } void makeC(FILE *out, node *p, const char *name) { int count = 0; fprintf(out, "node %s[] = {\n", name); emit(out, p, name, &count); fprintf(out, "};\n"); } int main() { node *root = NULL; char *months[12] = { "January", "February", "March", "April", "May", "June", 
"July", "August", "September", "October", "November", "December" }; int days[12] = { 31, 28, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31 }; for (int i = 0; i < 12; ++i) { root = insert(root, months[i], days[i]); } makeC(stdout, root, "caltree"); free_tree(root); } useavl.c #include <stdio.h> #include <stdlib.h> #include <string.h> typedef struct node { char *key; int data; struct node *left; struct node *right; } node; node *search(node * p, char *key) { if (p == NULL) return NULL; int keycmp = strcmp(key, p->key); if (keycmp < 0) return search(p->left, key); else if (keycmp > 0) return search(p->right, key); else return p; } int main(void) { #include "staticavl.c" node *n = search(caltree, "February"); printf("There are %d days in %s.\n", n->data, n->key); } staticavl.c (generated file) node caltree[] = { /* 0 */ { "March", 31, &caltree[1], &caltree[8] }, /* 1 */ { "January", 31, &caltree[2], &caltree[6] }, /* 2 */ { "August", 31, &caltree[3], &caltree[4] }, /* 3 */ { "April", 30, NULL, NULL }, /* 4 */ { "February", 28, &caltree[5], NULL }, /* 5 */ { "December", 31, NULL, NULL }, /* 6 */ { "June", 30, &caltree[7], NULL }, /* 7 */ { "July", 31, NULL, NULL }, /* 8 */ { "October", 31, &caltree[9], &caltree[11] }, /* 9 */ { "May", 31, NULL, &caltree[10] }, /* 10 */ { "November", 30, NULL, NULL }, /* 11 */ { "September", 30, NULL, NULL }, }; Sample output There are 28 days in February. The advantage to doing things this way is that only the lookup function needs to be in the useavl program, so if it's an embedded system and especially if the data structure is large and complex, the burden for building the structure is shifted to compile time (or technically, to runtime for makeavl). It's possible, and indeed probable, that makeavl and useavl would run on different machines. Comments welcome. Answer: It's a little tricky to provide feedback on this code, while at the same time remembering that it's just an example. 
(As nhgrif mentioned in the comments, "What if it's a leap year?") In real life, we'd never want to use a binary search tree; at least we'd prefer to store the data in a sorted array and use binary search — i.e., we'd strength-reduce "dereferencing operations on pointers" to "arithmetic operations on indices", because the latter are much cheaper on our usual hardware. In fact, for just 12 pieces of data, we'd be more likely to use linear search, or perhaps a perfect hash table. However, let's say we had a whole ton of data to store in this data structure, and for some reason we really needed it to be stored in a binary search tree with real pointers. (I can't think of such an application, but I'm willing to stipulate that one exists.) Then your program organization, IMHO, still leaves something to be desired. int main(void) { #include "staticavl.c" node *n = search(caltree, "February"); printf("There are %d days in %s.\n", n->data, n->key); } I would greatly prefer this to be written as #include "month_data.h" int main(void) { month_data *n = get_month_data("February"); printf("There are %d days in %s.\n", n->days, n->name); } where "month_data.c" would consist of the same pieces of code you wrote, just rearranged to put the module boundary in the right place: // month_data.h typedef struct month_data { const char *name; int days; } month_data; month_data *get_month_data(const char *name); // month_data.c struct node { struct month_data data; struct node *left; struct node *right; }; static struct node *search(struct node *p, const char *key) { if (p == NULL) return NULL; int keycmp = strcmp(key, p->data.name); if (keycmp < 0) return search(p->left, key); else if (keycmp > 0) return search(p->right, key); else return p; } static struct node caltree[] = { /* 0 */ { "March", 31, &caltree[1], &caltree[8] }, /* 1 */ { "January", 31, &caltree[2], &caltree[6] }, /* 2 */ { "August", 31, &caltree[3], &caltree[4] }, /* 3 */ { "April", 30, NULL, NULL }, /* 4 */ { "February", 
28, &caltree[5], NULL }, /* 5 */ { "December", 31, NULL, NULL }, /* 6 */ { "June", 30, &caltree[7], NULL }, /* 7 */ { "July", 31, NULL, NULL }, /* 8 */ { "October", 31, &caltree[9], &caltree[11] }, /* 9 */ { "May", 31, NULL, &caltree[10] }, /* 10 */ { "November", 30, NULL, NULL }, /* 11 */ { "September", 30, NULL, NULL }, }; month_data *get_month_data(const char *name) { struct node *n = search(caltree, name); return n ? &n->data : NULL; } The advantage of this organization is that we've put the module boundary at a natural place: the month_data module is now concerned solely with getting the data for a particular month. The notion of "searching a tree" is encapsulated into it as an implementation detail, one that the user of the module doesn't have to worry about. Nor does the compiler have to worry about it, because we're no longer passing a struct node * across the module boundary; the compiler is free to inline and optimize our use of the now-static data structure. And we're free to change the internal implementation of get_month_data; if profiling proves that a linear search in a lookup table would be faster, then we can make that change, without even requiring the client to recompile their main.o module. Using your old code, they'd have to not only recompile useavl.o, but rewrite it, unless you went out of your way to reuse the same struct node interface. (And a good engineer would also want to rename useavl.o, since it would then no longer be using an AVL tree. That's a lot of needless code-shoveling, which we could have avoided by using the right abstractions from the beginning.)
{ "domain": "codereview.stackexchange", "id": 14926, "tags": "c, embedded" }
Why is the velocity different for different points on a rolling wheel?
Question: Let's take the following example. According to the above example, the velocity at the top portion of the wheel is maximum while the velocity at the lower portion is minimum. But I think it should be the same at both parts (just opposite in direction). Why are they different? Answer: You have to remember that the entire wheel is also moving. Think of this. Where the wheel meets the ground, the velocity of the contact point must be 0, otherwise the wheel would be skidding. Another way of looking at it is that at the contact point the forward velocity of the wheel is cancelled by the backward velocity of the point. On the other hand, at the top of the wheel these velocities add together: the velocity of the entire wheel with respect to the ground, plus the velocity of that point with respect to the centre of the wheel. I once tested this when I drove behind a truck that was trailing a rope on the road. I drove one of my front wheels over the rope and instantly the rope broke. It had to break because one end of the rope was moving at the speed of the truck, while the other was stationary between the road and my tyre.
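The velocity addition described in the answer is easy to tabulate: each rim point moves with the centre-of-mass velocity plus the rotational contribution $\omega \times r$. A short sketch (an editorial addition; $v$ and $R$ are arbitrary values, rolling without slipping so $\omega = v/R$):

```python
import math

def rim_speed(v, R, theta):
    """Speed of a rim point at angle theta (measured from the top of the
    wheel), for a wheel of radius R rolling rightward without slipping
    at speed v. v_point = v_cm + omega x r, with omega = v / R."""
    omega = v / R
    # point position relative to the centre: (R sin(theta), R cos(theta));
    # the spin is clockwise for rightward rolling, so the rotational part
    # is (omega R cos(theta), -omega R sin(theta))
    vx = v + omega * R * math.cos(theta)
    vy = -omega * R * math.sin(theta)
    return math.hypot(vx, vy)

v, R = 1.0, 0.5
print(rim_speed(v, R, 0.0))       # top of the wheel: speeds add to 2v
print(rim_speed(v, R, math.pi))   # contact point: speeds cancel to 0
```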
{ "domain": "physics.stackexchange", "id": 35730, "tags": "rotational-dynamics, velocity, angular-velocity" }
Viola Jones Algorithm
Question: Can the Viola-Jones algorithm be used to detect facial emotion? It is used to create Haar-cascade files for object and face detection, but what confuses me is whether it can be trained for emotion detection. If not, what algorithms can I use, and what are the mathematical bases (i.e. what mathematics should I be studying)? Answer: An introduction to the Haar features is provided in the YouTube video. The video indicates the VJ face detector leverages a selected combination of Haar features (convolutional kernels) to detect facial features (weak classifiers), such as the nose bridge. The binary outputs of the weak classifiers are summed to determine whether the window contains a face. The ability of a VJ algorithm to detect emotion would rely on the ability to assign a set of Haar features (kernels) to recognize features associated with a particular emotion label (surprise, anger, content, fear). It is conceivable that the initial stage of an emotion classifier could use the VJ algorithm to identify a face, with additional stages to classify the emotion.
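The Haar-feature evaluation that the cascade relies on can be sketched in pure Python: build an integral image once, and then any rectangle sum costs only four lookups. This is an editorial sketch of the mechanism only; a real VJ detector additionally runs AdaBoost over thousands of such features and chains the resulting weak classifiers into a cascade.

```python
def integral_image(img):
    """Summed-area table: ii[y][x] = sum of img over rows < y, cols < x."""
    h, w = len(img), len(img[0])
    ii = [[0] * (w + 1) for _ in range(h + 1)]
    for y in range(h):
        row_sum = 0
        for x in range(w):
            row_sum += img[y][x]
            ii[y + 1][x + 1] = ii[y][x + 1] + row_sum
    return ii

def rect_sum(ii, x, y, w, h):
    """Sum of pixels in the rectangle with top-left (x, y) and size w x h."""
    return ii[y + h][x + w] - ii[y][x + w] - ii[y + h][x] + ii[y][x]

def two_rect_haar(ii, x, y, w, h):
    """Two-rectangle (edge) Haar feature: left half minus right half."""
    half = w // 2
    return rect_sum(ii, x, y, half, h) - rect_sum(ii, x + half, y, half, h)

# Toy image: bright left half, dark right half -> strong edge response.
img = [[9, 9, 1, 1] for _ in range(4)]
ii = integral_image(img)
print(two_rect_haar(ii, 0, 0, 4, 4))   # 72 - 8 = 64
```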
{ "domain": "ai.stackexchange", "id": 782, "tags": "algorithm, image-recognition, emotional-intelligence, math, facial-recognition" }
Can gamma radiation cause transmutation?
Question: In irradiation of food for sterilisation, is the gamma radiation absorbed by the food? If so, can it theoretically cause production of radioisotopes? Or does it ionise atoms in the food? Answer: Answer to a More General Form of Your Question The answer to the question Can gamma radiation make non-radioactive stuff radioactive? is yes, but only if the gamma ray has enough energy. "Enough" is different for all atoms, but for most of the lighter ones it's over 2 MeV. What basically happens is that the gamma ray carries so much momentum that, when it hits a proton or neutron in the nucleus of an atom, it can knock that proton or neutron right out of it. Answer to Your Specific Question The answer to the question In irradiation of food for sterilization, is gamma radiation absorbed by the food? If so, can it theoretically cause production of radioisotopes? Or does it ionize atoms in the food? is no in the grand majority of cases. Either way it ionizes the atoms in the food (that is how it sterilizes, after all), but the gamma rays used in most food irradiation are produced by 60Co and therefore have a max energy of ~1.3 MeV. If you recall the answer to the more general question above, these gamma rays are not high enough in energy to cause photodisintegration in any of the light atoms of food. In some cases, however, the radiation is produced in a high-energy accelerator. If the energies are high enough, the gamma rays can start to induce such nuclear reactions. These high-energy accelerators are not really used for food sterilization that much as far as I know, so most of the time the bolded answer above holds true.
{ "domain": "chemistry.stackexchange", "id": 5553, "tags": "radioactivity, electromagnetic-radiation" }
Is mass a hash function?
Question: In what sense is mass a hash function? Classically, it appears names can be avoided and replaced by numeric values in order to cut down on complexity. For instance, if I am running a grocery store, rather than storing the names of all the packaged products in a database, and putting bar codes or other unique identifiers on any of the packages, I could simply keep a high precision scale at all of the registers. Then, as long as the precision is high enough (say picograms), I can just use the mass of each packaged food item to quickly identify it. Of course you could do the same thing for a set of particles in a physics model. For instance, if one wanted to put the Standard Model on a floppy disk, and needed to save a few bytes, rather than putting the names “electron neutrino”, “muon neutrino”, and “tau neutrino”, one might instead just use their masses in eV to identify them (why use English anyways?). I’m wondering if there is a precise sense in which this analogy maps onto a physics model, say in the AdS/CFT correspondence. Like, if particles on the boundary are defined in terms of entangled qubit states, perhaps there is a (I hesitate to use the word quantum but a quantum) analog of a hash function that would allow the model to pick out and label particular particles in the model. References appreciated if the question is too vague. Answer: In a good “hash function,” unique inputs map to unique outputs. However, even among the twenty-ish fundamental particles on the Standard Model poster, there are already several “hash collisions” with the same mass:
the photon’s mass is zero
the gluon’s mass is zero
the graviton’s mass is zero
“Oh, those don’t count, those are force-carrying particles,” you say. However, while we have in the 21st century discovered that all three neutrino species have different masses, we have not ruled out that one neutrino species still has mass zero. “Which neutrino species might have mass zero?” you ask, editing your table. 
Answer: the lightest neutrino is a mixture of the electron, muon, and tau neutrino flavor states. It’s either the one with the most electron neutrino, or the one with the least electron neutrino, depending on the hierarchy. The electron neutrino being emitted by your banana does not have a mass. That’s different from “the electron neutrino is massless”: the correct statement is that the electron neutrino is a coherent superposition of three particles which have different masses. We can break the degeneracy among the massless force-carrying particles by noting that, in a superconducting material, the photon acquires an effective mass. This means that, within a superconductor, electromagnetism is a short-range rather than long-range interaction, and is related to the expulsion of magnetic fields. Unfortunately for the hash-table approach, the effective mass of a photon in a superconductor depends on the material’s composition, temperature, and other properties. For that matter, the color force mediated by gluons is also subject to medium modifications. But here the medium modifications go the other way. The gluon is massless only in the interior of a proton, neutron, or other hadron. In the vacuum, the gluon acquires an effective mass and the color force becomes a short-range force. Our vacuum is a color superconductor. The matter particles are subject to medium modifications, too. In some perfect-crystal semiconductors, it’s awkward and complicated to describe charge transport in terms of electrons hopping from nucleus to nucleus. It’s more parsimonious to identify the quantized properties of the collective motion of the electron ocean as “quasiparticles” whose effective mass is typically comparable to, but different from, the electron mass. If these quasiparticles have negative charge we sometimes still call them “electrons” (or perhaps “dressed electrons”). 
If they have positive charge, we call them “holes.” In a material with both positive and negative charge carriers, the holes and the dressed electrons act a little like particles and antiparticles, forming in oppositely-charged pairs and annihilating each other if they get too close. The effective masses of electrons and holes are usually not the same, unlike particles and antiparticles. You can explain this away because a semiconductor crystal has a large matter excess over antimatter, so it’s reasonable to expect that matter and antimatter should behave differently in that environment. But of course, our vacuum also has $\mathit{CP}$ violation: matter and antimatter behave differently enough that the visible part of our universe is made entirely of matter, with only incidental antimatter. It’s surprisingly hard to rule out the idea that our vacuum is a “false vacuum” and that our particles are quasiparticle excitations in the “true vacuum.” Like a fish who never understands he is in water, perhaps we cannot see that our universe’s “real matter” is hiding the dark sector. And there’s another medium effect (or lack-of-medium effect) in QCD: the light quarks, for whom confinement is the strongest, have very weak constraints on their masses. The “dressed quark masses” are around $\rm \frac13\,GeV$, because nucleons are made of three valence quarks. However, the “bare quark” masses are much closer to $\rm2\,MeV$. Your idea of mass as a hash function only really works for the easy cases.
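The "hash collision" point above can be made literal with a small Python dictionary keyed on mass; the particle list and rounded masses here are illustrative, not tabulated data:

```python
# Using mass (in MeV) as a dictionary key "hash-collides" for the
# massless force carriers, which is the answer's central objection.
particles_by_mass = {}
for name, mass_mev in [("photon", 0.0), ("gluon", 0.0), ("graviton", 0.0),
                       ("electron", 0.511), ("muon", 105.7)]:
    particles_by_mass.setdefault(mass_mev, []).append(name)

print(particles_by_mass[0.0])  # ['photon', 'gluon', 'graviton']
```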
{ "domain": "physics.stackexchange", "id": 83791, "tags": "quantum-information, mass" }
Human Readable filesizes
Question: I am re-inventing the wheel and would like to lay down a comprehensive filesizes function. The only issue is that I guess bit-wise operators are more efficient but I'll be darned if I can understand them. What would you do to optimize this function? function humanFileSize($bytes = 0, $f = 1024) { $f = (int)$f; $i = 'i'; $newsize = 0; $units = 'bytes'; if($f === 1000) { $i = ''; } else { $f = 1024; } if($bytes < $f) { $newsize = $bytes; } elseif($bytes < ($f * $f)) { $newsize = ($bytes / $f); $units = ($f === 1024 ? 'K' : 'k').$i.'B'; } elseif($bytes < ($f * $f * $f)) { $newsize = ($bytes / $f / $f); $units = 'M'.$i.'B'; } elseif($bytes < ($f * $f * $f * $f)) { $newsize = ($bytes / $f / $f / $f); $units = 'G'.$i.'B'; } else { $newsize = $bytes; } return number_format($newsize, 2, '.', ',').' '.$units; } Answer: A better method would be to store the suffixes in an array: $suffix = [ '', 'K', 'M', 'G', 'T', 'P' ]; and use a loop to compute the denomination: $idx = 0; while ($bytes > $f) { $bytes = $bytes / $f; $idx++; } and then, use a string formatter: return sprintf( "%.2f %s%sB", $bytes, $suffix[$idx], $i ); You can see it in action here. The function would be: function humanFileSize($bytes = 0, $f = 1024) { $f = (int) $f; $i = ($f === 1024) ? 'i' : ''; $suffix = [ '', 'K', 'M', 'G', 'T', 'P' ]; $idx = 0; while ($bytes > $f) { $bytes = $bytes / $f; $idx++; } return sprintf( "%.2f %s%sB\n", $bytes, $suffix[$idx], $i ); }
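For comparison, the same suffix-table-plus-loop approach sketched in Python (the function name and the `>=` boundary are my choices; the reviewed PHP uses `>`, which leaves exactly 1024 bytes unscaled, and its `sprintf` would also render sub-kilobyte sizes as e.g. "500.00 iB" because the `i` infix is appended unconditionally):

```python
# Loop-based humanizer, mirroring the reviewed answer's suffix-array idea.
def human_file_size(num_bytes, base=1024):
    i = "i" if base == 1024 else ""            # binary units take the 'i' infix
    suffixes = ["", "K", "M", "G", "T", "P"]
    size = float(num_bytes)
    idx = 0
    while size >= base and idx < len(suffixes) - 1:
        size /= base
        idx += 1
    unit = suffixes[idx] + (i if idx else "")  # no bare "iB" for small sizes
    return f"{size:.2f} {unit}B"

print(human_file_size(1536))       # 1.50 KiB
print(human_file_size(1024 ** 3))  # 1.00 GiB
```

Bounding `idx` by the suffix table length also keeps absurdly large inputs from indexing past the array, which neither original handles.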
{ "domain": "codereview.stackexchange", "id": 12497, "tags": "php, reinventing-the-wheel" }
Why are 2D block matrices considered irreducible?
Question: I am currently reading Applications of Group Theory by M.S. Dresselhaus. On page 18, irreducible representations are defined and an example is given as follows: The author claims that these representations are irreducible. However, for E and A the $\Gamma_2$ 2D representation is in block form, and it can be seen that it could be expressed in terms of the $\Gamma_1$ and $\Gamma_1'$ representations as follows $\begin{pmatrix} \Gamma_1 & 0\\ 0 & \Gamma_1' \end{pmatrix}$, so why is $\Gamma_2$ considered irreducible? I am using the last line of the definition here. Answer: The representation of the group (in this case) is given by a set of six matrices. Reducibility is a property of the representation as a whole, not the individual matrices that form it. So, you can't say that the $E$ matrix on its own is (ir)reducible, or that $A$ on its own is (ir)reducible. You have to look at the set of six matrices as a whole, and whether they can all be simultaneously block-diagonalised. If they can, then the entire representation is reducible; if they can't, then the entire representation is irreducible. In this case, $E$ and $A$ are block-diagonal, but the other matrices $B$, $C$, $D$, and $F$ are not. You could find some change of basis such that one or more of the latter become diagonal. However, it's not possible to find a change of basis which simultaneously diagonalises all six matrices. So, the representation $\Gamma_2$ is irreducible. Another way of looking at it is that you are trying to decompose $\Gamma_2 = \Gamma_1 \oplus \Gamma_1'$. This is indeed satisfied for the $E$ and $A$ matrices, but not for the others. So, it's not possible to say that the representations add up, because the representation consists of all six matrices, and all six matrices must add up (technically, form the correct direct sum) for this to be the case. 
The cited definition is actually very rigorous and does mention this (emphasis mine): If by one and the same equivalence transformation, all the matrices in the representation of a group can be made to acquire the same block form, then the representation is said to be reducible [...]
{ "domain": "chemistry.stackexchange", "id": 16362, "tags": "group-theory" }
Binary to decimal converter in JavaScript
Question: This exercise with a test suite is from here. I did not intend to use ES6 features, it's plain old JavaScript. That's why I seek advice more on the good practices, performance side, than modern syntax. Basically anything to make this code more elegant, without ES6. // Constructor var Binary = function(binary) { this.binary = binary; this.base = 2; }; // Transform the string input in a reversed array Binary.prototype.reverseInput = function() { return this.binary.split('').reverse(); }; // Handle the conversion to decimal Binary.prototype.toDecimal = function() { var output = 0; binaryArray = this.reverseInput(); if (this.invalidInput()) { return output; } for (var i = 0, j = binaryArray.length; i < j; i++) { output += binaryArray[i] * Math.pow(this.base, i); } return output; } // Check if any character is not a 1 or a 0 Binary.prototype.invalidInput = function() { for (var i = 0, j = this.binary.length; i < j; i++) { if (this.binary[i] != '0' && this.binary[i] != '1') { return true; } } } module.exports = Binary; Answer: Flawed exercise Before I begin, I would like to point out that the exercise is flawed: The task is not really converting the input from a binary to a decimal representation. Rather, it is to convert it to a number, whose conventional string representation is rendered in base 10. The test suite included this test: xit('11010 is decimal 26', function() { expect(new Binary('11010').toDecimal()).toEqual(26); }); … which could also have been written as: xit('11010 is twenty-six', function() { expect(new Binary('11010').toDecimal()).toEqual(0x1a); }); It is bad practice to return 0 for new Binary('junk').toDecimal(). Acceptable results would be to either throw an exception or return Number.NaN. Review Your invalidInput function either returns true or falls off the end. Good practice would be to name it isInvalidInput, and have it explicitly return true or false. Better yet, reverse the sense and call it isValidInput. 
Even better, don't expose the validation function as a method at all — you don't really need one. this.binary.split('').reverse() and Math.pow(…) seem extravagant. Suggested solution // Constructor var Binary = function(binary) { this.binary = binary; this.base = 2; }; Binary.prototype.toDecimal = function() { // <-- Method name is a misnomer! var output = 0; for (var i = 0; i < this.binary.length; i++) { output *= 2; switch (this.binary.charAt(i)) { case '0': break; case '1': output += 1; break; default: return 0; // <-- This sucks, but the test suite requires it! } } return output; }
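The Horner-style accumulation in the suggested solution, transliterated to Python as a quick sketch (returning 0 on bad input only because the exercise's test suite demands it, as noted above):

```python
def binary_to_number(s):
    """Accumulate left-to-right: double the total, then add the current bit."""
    output = 0
    for ch in s:
        output *= 2
        if ch == "1":
            output += 1
        elif ch != "0":
            return 0  # the flawed test-suite requirement discussed above
    return output

print(binary_to_number("11010"))  # 26
```

This avoids both the reversed-array copy and the repeated `Math.pow` calls of the original, since each pass through the loop implicitly supplies the next power of two.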
{ "domain": "codereview.stackexchange", "id": 21224, "tags": "javascript, object-oriented, programming-challenge, reinventing-the-wheel, number-systems" }
How can EXP^P be characterized?
Question: I had a question about EXP^P (EXPTIME with access to a P oracle). I thought I had read somewhere that EXP = EXP^P, and that seemed fairly intuitive to me: I thought "adding polynomial power to something that already has exponential power must not change a lot". Now I am questioning myself: I was searching for something that verified that EXP equals EXP^P, but I can't find anything like that. I am thinking that I probably just assumed those classes were equal and they actually are not (or it is still not known). Does anyone have some hints about how to characterize this class? Or if there is a known relationship between EXP and EXP^P? Answer: You are right, EXP = EXP^P. Suppose that you have an oracle $f$ that is in $P$. This means that whatever computation the oracle $f$ is doing, can be done by some polynomial-time algorithm, let's call it $B$. Consider any algorithm $A$ in EXP^P that uses the oracle $f$. We can modify $A$ so that any time it would have called $f$, it instead runs the polynomial-time algorithm $B$ (as a subroutine) without using the oracle. In this way, we have obtained a new algorithm that doesn't use any oracle. What is the running time of this new algorithm? It is at most the running time of $A$, times the running time of $B$, or in other words, at most an exponential function times a polynomial function. One can easily show that this is also an exponential function (for example, $2^x \times x^7 = O(4^x)$, since $x^7 = O(2^x)$). Therefore, we have obtained a new algorithm that also runs in exponential time, and doesn't use an oracle, so this new algorithm must be in EXP. In this way, we can convert any algorithm in EXP^P to an algorithm in EXP. This shows that every problem in EXP^P is also in EXP. The other direction (every problem in EXP is also in EXP^P) is trivial (just ignore the oracle and never use it), so this shows that EXP = EXP^P.
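The $2^x \times x^7 = O(4^x)$ step in the answer can be spot-checked numerically; the crossover point 37 below is just where the inequality happens to start holding for this particular pair of functions:

```python
# Check that 2**x * x**7 <= 4**x (equivalently x**7 <= 2**x)
# once x is large enough, as the big-O claim requires.
def dominated(x):
    return 2**x * x**7 <= 4**x

print(all(dominated(x) for x in range(37, 500)))  # True
print(dominated(36))                              # False: 36**7 > 2**36
```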
{ "domain": "cs.stackexchange", "id": 19742, "tags": "complexity-theory, np, complexity-classes, oracle-machines" }
Difference between a wavelet transform and a wavelet decomposition
Question: I'm confused about the difference between a wavelet transform and a wavelet decomposition is. For example load woman [cA1,cH1,cV1,cD1] = dwt2(X,'db1'); [c,s] = wavedec2(X,2,'db1'); What's the difference between these two matlab commands, and when would you want to do one over the other? Answer: I don't think there is any difference. The documentation for dwt2 says Single-level discrete 2-D wavelet transform The dwt2 command performs a single-level two-dimensional wavelet decomposition... While the documentation for wavedec2 says Multilevel 2-D wavelet decomposition The difference is that dwt2 is single-level (produces a single A, H, V, D output): and wavedec2 is multilevel (produces array C output, which contains multiple A, H, V, D inside it):
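To see the single-level vs. multilevel relationship outside MATLAB, here is a hand-rolled 1-D Haar sketch in Python (my own toy functions, not `dwt2`/`wavedec2`; the 2-D versions behave the same way per level):

```python
# A "multilevel decomposition" (wavedec-style) is just the single-level
# transform (dwt-style) applied recursively to the approximation band.
import math

def haar_step(x):
    """Single level: approximation cA and detail cD (even-length input)."""
    s = math.sqrt(2)
    cA = [(x[i] + x[i + 1]) / s for i in range(0, len(x), 2)]
    cD = [(x[i] - x[i + 1]) / s for i in range(0, len(x), 2)]
    return cA, cD

def haar_multilevel(x, levels):
    """Repeatedly split the approximation; collect one detail band per level."""
    details = []
    for _ in range(levels):
        x, cD = haar_step(x)
        details.append(cD)
    return x, details  # final approximation + all detail bands, like wavedec

signal = [4, 6, 10, 12, 8, 6, 5, 5]
cA1, cD1 = haar_step(signal)               # what dwt gives (one level)
cA2, bands = haar_multilevel(signal, 2)    # what wavedec gives (two levels)
print(cA1)
print(cA2)
```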
{ "domain": "dsp.stackexchange", "id": 1592, "tags": "wavelet" }
Webapp Substrate script for WordPress on Debian-Nginx
Question: The following script creates a webapp substrate for WordPress webapps/websites on Ubuntu-Nginx with PHP-FPM and MySQL environments, with Certbot, while all software is uncustomized. Such substrate includes, based on the 1 or more domains given as an argument ( /opt/nwsm.sh domain1.tld domain2.tld): /etc/nginx/sites-available/example.com.conf. /etc/nginx/sites-enabled/ symlink. Appropriate DB user and instance, named example.com. Suitable wp-config.php file. I especially hope to learn how I could shorten this code in at least 10 rows, if at all. Installing the script Just copy and paste in Bash and it will be created under /opt/nwsm.sh. cat <<-"NWSM" > /opt/nwsm.sh #!/bin/sh for domain; do cat <<-WEBAPPCONF > "/etc/nginx/sites-available/${domain}.conf" server { root /var/www/html/${domain}; server_name ${domain} www.${domain}; location ~* \.(jpg|jpeg|png|gif|ico|css|js|ttf|woff|pdf)$ { expires 365d; } location / { index index.php index.html index.htm fastcgi_index; try_files $uri $uri =404 $uri/ /index.php?$args; } location ~ \.php$ { fastcgi_pass unix:/var/run/php/php7.0-fpm.sock; fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; include fastcgi_params; } } WEBAPPCONF ln -s /etc/nginx/sites-available/${domain}.conf /etc/nginx/sites-enabled/ echo "Please enter database user password for user ${domain}: " read -s waps echo "Please enter database root password" mysql -u root -p <<-MYSQL create user "${domain}"@"localhost" identified by "${waps}"; create database ${domain}; GRANT ALL PRIVILEGES ON ${domain}.* TO ${domain}@localhost; MYSQL certbot --nginx -d ${domain} -d www.${domain} done cd /var/www/html/ wget http://wordpress.org/latest.tar.gz tar xzvf latest.tar.gz && rm latest.tar.gz mv wordpress ${domain} cp /var/www/html/${domain}/wp-config-sample.php /var/www/html/${domain}/wp-config.php echo "1/1: Please enter the password of the site's DB user." 
&& read -s dbup sed -i "s/database_name_here/${domain}"/g /var/www/html/${domain}/wp-config.php sed -i "s/username_here/${domain}"/g /var/www/html/${domain}/wp-config.php sed -i "s/password_here/${dbup}"/g /var/www/html/${domain}/wp-config.php chown www-data:www-data /var/www/html/* -R find /var/www/html/* -type d -exec chmod 755 {} \; find /var/www/html/* -type f -exec chmod 644 {} \; systemctl restart nginx.service NWSM chmod +x /opt/nwsm.sh Using the script Call the script with a domain and tld as an argument, say: /opt/nwsm.sh example.com Answer: The outer heredoc construct isn't really required. You do not verify that you have command line arguments. You do not assign your command line arguments to the variable domain. You use the variable domain when it is uninitialised. You make a number of assumptions, any one of which could bite you hard. You assume that the nginx package has been installed and that it has created the paths and other things you require. You assume that the MySQL packages have been installed. You assume that the certbot package has been installed. You assume that the /var/www/html directory exists. You assume that the www-data user exists. You assume that this is the first time that the script has been run for domain. It will have unexpected results if domain already exists. Think about those assumptions and what would happen if they were incorrect and things didn't exist as you expect. Your variable names are not good - why, for example, is the database user's password assigned to waps and not something like dbpassword? It's not clear if you are planning on passing more than one domain to the script per invocation. If you only plan one domain per invocation you do not need the for loop. You assume that all of your commands will complete correctly. You really should check the exit status of those that could reasonably be expected to fail. For example the wget could fail for many reasons. 
If it does then everything after it should not be run. What happens if certbot fails or mysql or tar or ... Your chown -R... command assumes that nothing else is installed in /var/www/html or that everything installed there should be owned by www-data, this may not be the case, use /var/www/html/${domain}. Your find commands are the same. Your find commands will exec chmod for every file found. It is more efficient to use find ... -exec {} + which optimises the size of the command line and reduces the number of execs. I especially hope to learn how I could shorten this code in at least 10 rows, if at all. How sweet, I can see it being considerably larger.
{ "domain": "codereview.stackexchange", "id": 29031, "tags": "bash, linux, wordpress, installer, nginx" }
Automate the Boring Stuff Chapter 10 - Filling gaps in a sequence
Question: The project outline: Write a program that finds all files with a given prefix, such as spam001.txt, spam002.txt, and so on, in a single folder and locates any gaps in the numbering (such as if there is a spam001.txt and spam003.txt but no spam002.txt). Have the program rename all the later files to close this gap. My solution: import re from pathlib import Path def filename_change(marker, stem, extension, pattern_len): correct_number = str(marker).zfill(pattern_len) new_filename = f"{stem}{correct_number}.{extension}" return Path(new_filename) def check_gaps(basedir, prefix_pattern): sequence = [] marker = 0 pattern_len = len(prefix_pattern) filename_reg = re.compile(r"(.*)(\d{%s})\.(.*)" % pattern_len) for filename in basedir.glob("*"): match = filename_reg.search(str(filename)) if not match: continue sequence.append(filename) sequence.sort() for filename in sequence: match = filename_reg.search(str(filename)) stem = match.group(1) prefix = match.group(2) extension = match.group(3) marker = marker + 1 if int(prefix) != marker: new_filename = filename_change(marker, stem, extension, pattern_len) filename.rename(new_filename) def main(): while True: basedir = Path(input("Please enter a folder to search: ")) if not basedir.is_dir(): print("This path does not exist.") continue prefix_pattern = input("Enter a pattern to find. It must be zero padded (e.g. '001', '01', not '1'): ") check_gaps(basedir, prefix_pattern) if __name__ == '__main__': main() I assumed that the order of the sequence was important and so opted to rename every file after the gap rather than filling the gaps with files from the end of the list. This seems fine for small sequences but I wondered if there was a more efficient way when dealing with hundreds of files. Answer: Forewords I do believe that you misunderstood the prefix part of the task and that your implementation treats the number or index of the files as the prefix and that you call the prefix stem in your code. 
It thus results in spam001.txt, spam101.txt and unrelated007.txt being renamed into spam001.txt, spam002.txt and unrelated003.txt. A prefix being something that comes before the information, I do believe that spam, in the example, is the prefix of the files we need to rename. And the prefix will act as a filter on which "series" of files the script needs to deal with. In your implementation, though, you mostly use the user provided information to know how to pad the resulting filename with zeroes. This is an interesting question as the task at hand does not make explicit how it should be dealt with, but I will talk about it later. User input As you will learn to automate (boring) stuff, you’ll find out that an interactive script like yours, which asks the user for the further information it needs to proceed, is quite tedious to automate or even test in quick succession. A better and more common approach is to provide options on the command-line and implement a parser within your script. Python provides the argparse module for such tasks. You can easily say that you want to provide a folder to your script and (optionally?) a prefix to filter the files within that folder. Iteration sequence = [] for … in …: … sequence.append(…) Is a code smell; you would be better off writing a list-comprehension: sequence = [filename for filename in basedir.glob('*') if filename_reg.search(str(filename))] You can even use the capabilities of glob to incorporate the prefix in your search and simplify further the extraction of the index by using a simple slice instead of the re module: sequence = [filename for filename in basedir.glob(prefix + '*') if filename.stem[len(prefix):].isnumeric()] This makes sure that you match any spamXXX.txt while excluding both morespamYYY.txt and spammerZZZ.txt. You also use a construct like: marker = 0 for … in …: … marker = marker + 1 … Which is a convoluted way around enumerate. 
Path manipulations Using pathlib to manipulate filenames is a powerful tool that you don't leverage much. You apply the regex against the whole path instead of only the name or even the stem of the file, which leads to an expression more complex than it needs to be, and more difficult to handle. When renaming the path, you also recreate a whole Path from scratch instead of only changing the relevant part through means of with_name or with_stem. This adds up in terms of complexity for the reader of your code compared to, e.g.: new_name = f'{prefix}{expected_index:03}' filename.rename(filename.with_stem(new_name)) Also note the :03 format specifier when turning the expected_index (marker in your code) into a string, this feels clearer than str(expected_index).zfill(3). And you can still parametrize it: new_name = f'{prefix}{expected_index:0{padding}}'. Padding Because filenames may or may not have their index padded with leading zeroes, I find it quite tricky to have a "one-size-fits-all" kind of approach. Using .isnumeric on the remaining characters of the filename after the prefix is a first step to detect indexes of any length, but this says nothing about whether or not the resulting filenames should be padded or not. We could compute the len of the digits at the end of the filename and use that as the padding of the resulting filename, but if they aren't padded at all, a filename with a two-digits index (say spam11.txt) being renamed to a one-digit index (spam08.txt) will keep an extra leading zero, which is unexpected. So I guess the best course of action here is to let the user tell whether or not the script need to guess for the padding to apply or just use no padding at all. Proposed improvements """ Check and fill gaps in a directory. Providing a prefix, scan a folder for files numbered after this prefix and remove gaps by re-numbering them if necessary. 
""" import argparse from pathlib import Path def check_and_fill_gaps(folder: Path, prefix: str = '', ignore_padding: bool = False) -> None: files = sorted( (int(index), path) for path in folder.glob(prefix + '*') if (index := path.stem[len(prefix):]).isnumeric() ) for expected_index, (index, path) in enumerate(files, start=1): if index != expected_index: padding = 0 if ignore_padding else len(path.stem) - len(prefix) path.rename(path.with_stem(f'{prefix}{expected_index:0{padding}}')) def folder(value: str) -> Path: f = Path(value) if not f.is_dir(): raise argparse.ArgumentError(f'{value} is not an existing directory') return f def command_line_parser() -> argparse.ArgumentParser: parser = argparse.ArgumentParser(description=__doc__) parser.add_argument('folder', type=folder, help='folder in which run the scan and replace process') parser.add_argument('-p', '--prefix', default='', help='search for files of the form `prefixXXX.ext` where XXX is any amount of digits') parser.add_argument('-n', '--ignore-padding', action='store_true', help='do not try to 0-pad resulting filenames when renaming') return parser if __name__ == '__main__': args = command_line_parser().parse_args() check_and_fill_gaps(**vars(args)) Note the use of the walrus operator to avoid extracting the index from path.stem twice. 
Example usage Using the following test folder: $ ls -1 testing morespam42.txt spam103.txt spam10.txt spam11.txt spam19.txt spam1.txt spam21.txt spam23.csv spam2.txt spam3.txt spam6.txt spam7.txt spammer007.xls We can filter with padding guessed: $ python fill_gaps.py -p spam testing; ls -1 testing morespam42.txt spam011.txt spam06.txt spam07.txt spam08.txt spam09.txt spam10.csv spam1.txt spam2.txt spam3.txt spam4.txt spam5.txt spammer007.xls Or without: $ python fill_gaps.py -p spam -n testing; ls -1 testing morespam42.txt spam10.csv spam11.txt spam1.txt spam2.txt spam3.txt spam4.txt spam5.txt spam6.txt spam7.txt spam8.txt spam9.txt spammer007.xls Handling adding a fixed padding to every filename is left as an exercise to the reader.
{ "domain": "codereview.stackexchange", "id": 44018, "tags": "python" }
Lagrangian for a system of particles
Question: I want to know if, in general, is it true that the Lagrangian describing the energies of the center of mass of a system of particles is the same as the Lagrangian written for the energies corresponding to every particle in the system. The specific problem I am trying to solve is that of two particles of mass $m$ constrained to move inside a sphere of radius $b$, with the restriction that the two particles are always a distance $2a$ apart from each other (say, due to a massless rigid rod). When I write the kinetic energy for the center of mass of the system, I don't get the same expression as the one corresponding to the two individual particles: $T_{CM}=\frac{1}{2}(2m)R^{2}(\dot{\phi}^{2}\sin^{2}\theta+\dot{\theta}^{2}) \qquad \text{where}\qquad \theta=\frac{\theta_2+\theta_1}{2}, \quad R^2=b^{2}-a^{2}$ $T_{\text{individual}}=\frac{1}{2}mb^{2}(\dot{\phi}_{1}^{2}\sin^{2}\theta_1+\dot{\phi}_{2}^{2}\sin^{2}\theta_{2}+\dot{\theta}_{1}^{2}+\dot{\theta}_{2}^{2})$ Answer: No. The answer to the general question is NO. $L=T-V$. This is not true for either $T$ or $V$, although in this particular problem there is no potential energy. The total kinetic energy (sum for each particle) is equal to the CM energy plus the kinetic energy "relative to the CM" for each particle. In this particular problem, your $T_{individual}$ is correct, but you did not enforce the constraint. There are only 3 degrees of freedom, instead of 4. Your $T_{CM}$ is incorrect even if considered as CM energy. The constraint is not modeled correctly.
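The decomposition the answer invokes (total kinetic energy = centre-of-mass part + motion relative to the CM) is König's theorem; as a sketch, writing $\vec r_i = \vec R + \vec r_i\,'$ with $\sum_i m_i \vec r_i\,' = 0$:

```latex
T \;=\; \sum_i \tfrac{1}{2} m_i \dot{\vec r}_i^{\,2}
  \;=\; \tfrac{1}{2} M \dot{\vec R}^{\,2}
  \;+\; \sum_i \tfrac{1}{2} m_i \dot{\vec r}_i\,'^{\,2},
\qquad M = \sum_i m_i,
```

where the cross term $\dot{\vec R}\cdot\sum_i m_i \dot{\vec r}_i\,'$ vanishes by the definition of the centre of mass. For the two-particle rod problem this means $T_{\text{individual}} \neq T_{CM}$ unless the relative-motion term happens to vanish, which it does not in general.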
{ "domain": "physics.stackexchange", "id": 33699, "tags": "homework-and-exercises, classical-mechanics, lagrangian-formalism" }
Is the triplet state of the cyclopentadienyl cation really aromatic?
Question: While checking the ASAP articles of the Journal of the American Chemical Society I came across a very interesting contribution by Costa et al. who synthesised a $\ce{BF3}$ stabilised cyclopentadienyl cation 1 and confirmed that its singlet state was indeed antiaromatic as predicted by various definitions.[1] The authors, however, also included the following sentence: According to Baird’s rule, “the lowest triplet state for $4n$ rings is aromatic since the bonding energy is significantly greater than for the diradical reference structure”.[1] This statement confuses me. In my introductory organic chemistry classes, I was taught the $4n+2$ rule along with simplified Hückel analyses to predict the relative energy levels. It was always pointed out that $4n$ (if planar and a single π system) represents an unstable, antiaromatic state which the system will avoid by distortion. This was explained by the triplet, biradical structure of the $4n$ systems. Yet here, from Costa et al., I read that $4n$ triplets are actually aromatic. How can I reconcile this new information with what I was taught previously? Reading the article and the references has not been able to help me. Reference: [1]: Costa, P.; Trosien, I.; Mieres-Perez, J.; Sander, W. Isolation of an antiaromatic singlet cyclopentadienyl zwitterion. J. Am. Chem. Soc. 2017, 139 (37), 13024–13030. DOI: 10.1021/jacs.7b05807. Answer: TL;DR Yes. For cyclic conjugated systems in the $T_1$ state (i.e. lowest energy triplet excited state), $4n$ electrons means aromatic and $4n+2$ means anti-aromatic. The $T_1$ state of the cyclopentadienyl cation, which has 4 pi-electrons, is therefore aromatic. Aromaticity The (quite understandable) confusion here arises over the interpretation of the word "aromatic" or "anti-aromatic". These terms can be used differently depending on context, and there is no unified physical description of (anti)aromaticity (although progress has been made in recent years, e.g. 
nucleus-independent chemical shifts, etc.) When anybody claims that something is aromatic, it is therefore imperative to check the context, and to figure out what their basis for assigning aromaticity is. In this case, the assignment of the triplet excited state of the cyclopentadienyl cation as aromatic by Costa et al.1 is based on Baird's rules. In turn, looking up Baird's article2 reveals that his assignment of (anti)aromaticity is based on the concept of Dewar resonance energies (DREs). The concept of DREs has been explained by Baird elsewhere.3 Essentially, it is based on the comparison of idealised cyclic structures with a reference acyclic structure; so, benzene is compared with hexatriene, and cyclobutadiene ($D_\mathrm{4h}$) is compared with butadiene. For the cyclopentadienyl cation, the relevant comparison is with a penta-2,4-dien-1-yl cation. The question thus arises as to how this comparison can be made. The way that is commonly taught, at introductory level, is to use Hückel molecular orbital (HMO) theory. So, let us use cyclobutadiene as an illustration. Note that within HMO theory the singlet and triplet configurations are of the same energy, as electron-electron repulsions are not taken into account. The resonance integral $\beta$ is negative. So, singlet cyclobutadiene is anti-aromatic because it has less π-bonding energy than the singlet state of its acyclic counterpart butadiene. However, triplet cyclobutadiene is aromatic because it has more π-bonding energy than triplet butadiene does. There are two things of note here: firstly, the Jahn–Teller distortion in cyclobutadiene isn't relevant for the purposes of finding whether it is aromatic or not; the aromaticity is assigned based on a hypothetical planar structure with no bond length alternation. Secondly, the aromaticity is determined with respect to the corresponding electronic state of the acyclic reference. 
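The HMO bookkeeping just described can be reproduced with the standard closed-form Hückel eigenvalues; a sketch in Python (my own helper names; sums are in units of $|\beta|$, so larger means more π bonding):

```python
# Hückel sanity check: compare pi-bonding energy of square cyclobutadiene
# vs. butadiene in the singlet and lowest triplet configurations.
import math

def chain_levels(n):
    """Hückel eigenvalues (units of |beta|) for a linear polyene of n atoms."""
    return sorted((2 * math.cos(k * math.pi / (n + 1)) for k in range(1, n + 1)),
                  reverse=True)

def ring_levels(n):
    """Hückel eigenvalues for a regular n-membered ring."""
    return sorted((2 * math.cos(2 * math.pi * k / n) for k in range(n)),
                  reverse=True)

def pi_energy(levels, occupancies):
    return sum(n_e * lam for n_e, lam in zip(occupancies, levels))

butadiene, cyclobutadiene = chain_levels(4), ring_levels(4)
singlet = (2, 2, 0, 0)   # two doubly occupied MOs
triplet = (2, 1, 1, 0)   # one electron promoted

print(round(pi_energy(cyclobutadiene, singlet), 3),   # 4.0 < 4.472: antiaromatic
      round(pi_energy(butadiene, singlet), 3))
print(round(pi_energy(cyclobutadiene, triplet), 3),   # 4.0 > 3.236: aromatic
      round(pi_energy(butadiene, triplet), 3))
```

The inequalities flip between the two spin states, which is exactly the sign pattern Baird's rule summarises.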
That is to say, the triplet state of cyclobutadiene is compared with the triplet state of butadiene and not the singlet state of butadiene. Perturbational MO theory Qualitatively, we have reached the correct conclusion. However, this was not the approach that Baird used. We will restrict our discussion now to that of triplet states, since it is relevant to the question. Baird used a more sophisticated method involving perturbation theory. That is, Baird used MO theory to calculate the stabilisation/destabilisation that would occur upon bringing together two radicals to form either the cyclic compound or the acyclic compound. This is confusing, so a bit more explanation is in order. The triplet state of butadiene can be described by the promotion of a π electron to a π* electron. Upon this promotion, the C=C bond twists until it has a dihedral angle of 90°, which corresponds to the breakage of the π bond. In the triplet state of butadiene (1 ($T_1$)), the π systems of the allyl and methyl radical fragments are orthogonal to each other; they are simply joined by a C–C σ bond. Hence the total π-bonding energy of 1 ($T_1$) is simply equal to the sum of the individual π-bonding energies of 2 and 3. On the other hand, the triplet state of cyclobutadiene (4 ($T_1$)) can still be considered to be an allyl fragment plus a methyl fragment.4 The difference is that, due to the planarity of the ring, the two radical systems are forced to overlap in a π manner (depicted by the red lines). Now, if this overlap is stabilising in nature, then the π-system of 4 ($T_1$) is more stable than the π-system of 1 ($T_1$), and therefore we can say that 4 ($T_1$) is aromatic. Conversely, if the overlap is destabilising, then it is anti-aromatic. Finally, we get to the crux of the answer. (I make no apology for taking so long to get to it - I thought the buildup was pertinent.) 
Baird first constructed the MOs of the radical fragments 2 and 3, and labelled each MO with their symmetry under a reflection. The mirror plane is drawn in light blue; antisymmetric MOs are in red and symmetric MOs in green. Then, Baird identified two types of orbital interactions in the cyclic structures that could lead to either stabilisation or destabilisation. The first type is called a Type I interaction, and it involves the overlap of the two SOMOs on the two fragments. In this case, since the two SOMOs are of different symmetries, they cannot overlap and therefore the Type I interaction leads to neither stabilisation nor destabilisation. The second type of interaction (I leave the reader to guess what it was called) involves the interaction of the SOMO on one fragment with doubly occupied or empty orbitals on the other fragment. In this case, the SOMO of the allyl radical does not find any symmetry match on the methyl radical, and so there is again neither stabilisation nor destabilisation. However, the SOMO of the methyl radical can overlap with both symmetric MOs of the allyl radical; this is a stabilising interaction. Overall, therefore, the π-type overlap between the two radical fragments leads to a favourable interaction. As such, triplet cyclobutadiene is considered to be aromatic. For the cyclopentadienyl cation, one can construct an analogous diagram, starting from an ethene radical cation and an allyl radical. Again the SOMOs do not match in symmetry, so the Type I interaction is necessarily zero. And again, the Type II interaction is stabilising; this time, both SOMOs find symmetry matches on the partner fragment. Based on this analysis, the triplet state of the cyclopentadienyl cation is indeed aromatic. Triplet benzene At this stage one may wonder: when two orbitals interact, the only possibility of destabilisation is if both orbitals are fully filled. 
Since both interactions involve orbital overlap between not-fully-filled orbitals, how can any of them be destabilising, and how can anything be anti-aromatic? To address this issue, and for the sake of completeness, let us analyse the case of benzene. We expect the triplet state of this 6π system to be anti-aromatic. The interaction diagram may be constructed from two allyl radicals: First, we see that by symmetry, the Type II interaction is identically zero. Now, we turn to the Type I interaction. Surely the overlap of two SOMOs must be a bonding interaction? The astute reader will notice, though, that we are talking about a triplet state of benzene. The overlap of the two SOMOs still forms a bonding and an antibonding orbital, but if we were to put both electrons into the bonding orbital, we would obtain a singlet state. As such, the Type I interaction leads to both the bonding and antibonding MOs having one electron each. Since the antibonding MO is more antibonding than the bonding MO is bonding, this is a net destabilisation. Hence, triplet benzene may be assigned anti-aromaticity. Further reading The concept of excited state aromaticity has been reviewed by Rosenberg et al.5, which is a good read (I found the explanation more accessible than that in Baird's original article2). Notes and references Costa, P.; Trosien, I.; Mieres-Perez, J.; Sander, W. Isolation of an Antiaromatic Singlet Cyclopentadienyl Zwitterion. J. Am. Chem. Soc. [Online early access]. DOI: 10.1021/jacs.7b05807. Baird, N. C. Quantum organic photochemistry. II. Resonance and aromaticity in the lowest 3ππ* state of cyclic hydrocarbons. J. Am. Chem. Soc. 1972, 94 (14), 4941–4948. DOI: 10.1021/ja00769a025. Baird, N. C. Dewar resonance energy. J. Chem. Educ. 1971, 48 (8), 509–513. DOI: 10.1021/ed048p509. 
Technically, there is one hydrogen less on both fragments, but the C–H bonds do not contribute to the π framework so they can be ignored - after all, we never bothered taking the C–H bonds into account, not even in the Hückel approach. The point is that whether it is a methyl fragment or a methylene fragment, the π system remains the same (likewise for the allyl fragment). Rosenberg, M.; Dahlstrand, C.; Kilså, K.; Ottosson, H. Excited State Aromaticity and Antiaromaticity: Opportunities for Photophysical and Photochemical Rationalizations. Chem. Rev. 2014, 114 (10), 5379–5425. DOI: 10.1021/cr300471v.
{ "domain": "chemistry.stackexchange", "id": 8877, "tags": "organic-chemistry, aromaticity" }
S-matrix expansion for the $\phi^4$ theory and the interaction picture
Question: My question is about the perturbative expansion of the S-matrix using Dyson's expansion. Let the Lagrangian density of the $\phi^4$ theory be \begin{equation} \mathcal{L} = \frac{1}{2}\left[\partial_\mu\phi(x)\right]^2 - \frac{m^2}{2}\phi(x)^2 - \frac{\lambda}{4!}\phi(x)^4, \end{equation} from which we find the Hamiltonian density \begin{equation} \mathcal{H}(x) = \underbrace{\frac{1}{2}\left[ \dot\phi^2 + (\nabla\phi)^2 +m^2\phi^2 \right]}_{\mathcal{H}_0} + \underbrace{\frac{\lambda}{4!}\phi^4}_{\mathcal{H}_1}. \end{equation} Now, to expand \begin{equation} S = T\mathrm{e}^{-i\int\mathrm{d}^4z\mathcal{H}_I(z)}, \end{equation} we clearly need to identify $\mathcal{H}_I$ in the interaction picture. This is where I start to get confused. The Hamiltonian $H_I$ is given by \begin{equation} H_I = \mathrm{e}^{iH_0t}H_1\mathrm{e}^{-iH_0t}, \end{equation} which reduces to $H_I=H_1$ only if $[H_0,H_1]=0$. However, all of the textbooks I've seen simply use $\mathcal{H}_1$ (as above) for $\mathcal{H}_I$ in computing the S-matrix. Here is my question: It is not clear to me that $[H_0,H_1]=0$, which would surely require $[\mathcal{H}_0,\mathcal{H}_1]=0$ (?), which implies that $[\partial_\mu\phi,\phi]=0$. However, this latter commutator does not seem to give zero when the mode expansion is used. Furthermore, I am not entirely sure how the condition $[H_0,H_1]=0$ is inferred from the Hamiltonian density. Does one need to evaluate $[\mathcal{H}_0(x),\mathcal{H}_1(x)]$, IE at the same point in spacetime? I know that this is a small point, but I hate not knowing the logic going from one step to the next, so if anyone could shed light on the justification for using $\mathcal{H}_1$ for $\mathcal{H}_I$, I would be very appreciative. Answer: In general, $[H_0, H_1] \ne 0$ and $H_1 \ne H_I$. L&B describe the interaction picture in terms of state-vectors, which I'm not the biggest fan of, so we'll work from the point of the field operators instead. 
The fields in the interaction picture, $\phi_I$, are defined in terms of the Schrodinger-picture fields by $$ \phi_I(t, \vec{x})= e^{iH_0(t-t_0)} \phi_S(t_0, \vec{x})e^{-iH_0(t-t_0)} \tag{1} $$ In the Heisenberg picture, the fields evolve with the full Hamiltonian $H$: the above equation is defined using $H_0$ which is the first-order approximation to the full Hamiltonian when we expand around $\lambda = 0$, since we're working perturbatively. You shouldn't think of eq. $(1)$ as an equation for time evolution, but rather as a definition for convenience. The mode expansion for $\phi_S$ is: $$ \phi_S(\vec{x})=\int\frac{d^3 k}{(2\pi)^3}\frac1{\sqrt{2\omega_k}}\left(a(\vec{k})e^{i\vec{k}\cdot\vec{x}}+a^\dagger(\vec{k})e^{-i\vec{k}\cdot\vec{x}}\right) $$ and consequently, $\phi_I$ is (note carefully the four-vector notation, in contrast to the previous equation) $$ \phi_I(x)=\int\frac{d^3 k}{(2\pi)^3}\frac1{\sqrt{2\omega_k}}\left(a(\vec{k})e^{-ikx}+a^\dagger(\vec{k})e^{ikx}\right) $$ where $x_0 = \Delta t = t - t_0$ and $k^\mu$ is on-shell, so $\phi_I$ enjoys a free mode expansion. It is these $\phi_I$ that the interaction Hamiltonian in the interaction picture (annoyingly, both these concepts have the same name) $H_I$ in the S-matrix is expanded in terms of, so while it looks similar visually to $H_1$ (which uses $\phi_S$), the interaction picture encodes some of the time-dependence and thus hides a lot of the details. When you set up the differential equation for $U(t, t_0)$: $$ i\frac{\partial U(t, t_0)}{\partial t} = H_I(t)U(t, t_0), $$ $H_I$ is defined by wedging $H_1$ in between the same $e^{\pm iH_0(t-t_0)}$ as the fields, so you can see that $$H_I = e^{iH_0(t-t_0)} H_1 e^{-iH_0(t-t_0)} = e^{iH_0(t-t_0)} \phi^4_S(t_0, \vec{x})e^{-iH_0(t-t_0)} \\= e^{iH_0(t-t_0)} \phi_S(t_0, \vec{x})e^{-iH_0(t-t_0)}e^{iH_0(t-t_0)}\phi_Se^{-iH_0(t-t_0)}...\phi_Se^{-iH_0(t-t_0)} = \phi_I^4,$$ with the obvious integrals and prefactors.
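The last step, that conjugation by $e^{\pm iH_0(t-t_0)}$ distributes over the product $\phi_S^4$ because the inserted factors $e^{-iH_0\Delta t}e^{iH_0\Delta t}$ cancel, can be checked numerically with finite-dimensional stand-ins. This is only a toy check (random $5\times 5$ matrices, nothing field-theoretic), and note it holds even though $[H_0,\phi]\neq 0$:

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)

# Finite-dimensional stand-ins: H0 Hermitian (so exp(i H0 t) is unitary),
# phi an arbitrary operator that does NOT commute with H0
A = rng.standard_normal((5, 5)) + 1j * rng.standard_normal((5, 5))
H0 = (A + A.conj().T) / 2
phi = rng.standard_normal((5, 5)) + 1j * rng.standard_normal((5, 5))

t = 0.7
U = expm(1j * H0 * t)        # e^{+i H0 t}
Udag = expm(-1j * H0 * t)    # e^{-i H0 t} = U^{-1}

phi_I = U @ phi @ Udag                            # "interaction picture" phi
lhs = U @ np.linalg.matrix_power(phi, 4) @ Udag   # conjugate phi^4 as a whole
rhs = np.linalg.matrix_power(phi_I, 4)            # conjugate each factor
print(np.allclose(lhs, rhs))  # True
```

So no commutation assumption is needed for $H_I = \phi_I^4$ (up to the prefactors); only unitarity of the conjugating factors.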
{ "domain": "physics.stackexchange", "id": 75287, "tags": "quantum-mechanics, quantum-field-theory, lagrangian-formalism, s-matrix-theory" }
Can we use prefixes like iso, neo, etc in IUPAC nomenclature of organic compounds?
Question: Shouldn't the name of the compound in the picture be 5-(1-ethyl-2-methylpropyl) nonane instead of the given name? Answer: According to the current version of Nomenclature of Organic Chemistry – IUPAC Recommendations and Preferred Names 2013 (Blue Book) various prefixes are retained for use in general nomenclature. However, many prefixes are no longer recommended. Trivial, common, and traditional prefixes have always been an integral part of organic nomenclature. However, as systematic nomenclature develops and becomes widely used, many of these prefixes fall by the wayside. Accordingly, each set of IUPAC recommendations contains fewer of these traditional prefixes. For example, the prefixes “isobutyl”, “isopentyl”, “neopentyl” were contained in the 1993 recommendations but are no longer recommended as approved prefixes. The prefix “isopropyl” is still retained for use in general nomenclature; however, for the preferred IUPAC name (PIN), the preferred prefix is “propan-2-yl”. (The prefix “1-methylethyl” may be used in general nomenclature). Anyway, the name “5-(1-ethyl-1-isopropyl)nonane”, which is given in the picture in the question, is clearly wrong since no parent structure is given for the substituents 1-ethyl and 1-isopropyl: “5-(1-ethyl-1-isopropyl what?)nonane” If you really want to use such a name including the substituents 1-ethyl and 1-isopropyl, the corresponding systematic name would be “5-(1-ethyl-1-isopropylmethyl)nonane”. However, such a name would be highly unusual and not in accordance with the basic principles of IUPAC nomenclature. The name “5-(1-ethyl-2-methylpropyl)nonane”, which is proposed in the question, describes the correct compound and may be used in general nomenclature. However, the PIN for this compound is “5-(2-methylpentan-3-yl)nonane” since prefixes of the “alkanyl type” are preferred if the free valence is not in position 1 of the substituent group (see this similar question).
{ "domain": "chemistry.stackexchange", "id": 5768, "tags": "organic-chemistry, nomenclature" }
Why is the distance in non-local entanglement for teleportation limited by time
Question: In a recent paper, researchers described a system that can teleport a quantum state without the need for the entangled particles to actually "meet" each other. I'm reading, in particular, Traveling without moving: Quantum communication scheme transfers quantum states without transmitting physical particles (Stuart Mason, phys.org). The author states that (emphasis mine) scientists in China at Harbin Institute of Technology, Yanbian University and Changchun University demonstrated what is known as a counterfactual approach in which quantum information can be transferred between two distant participants without sending any physical particles between them. The researchers accomplished this by entangling two nonlocal qubits with each other without interaction – meaning that the present scheme can transport an unknown qubit in a nondeterministic manner without prior entanglement sharing or classical communication between the participants. Moreover, the scientists state that their approach provides a new method for creating entanglement that allows two qubits to be entangled without interaction between them. [...] Theoretically," Zhang acknowledges, "a galactic or intergalactic internet may be possible based on the present scheme, which would require a so-called long-arm intra- or intergalactic interferometer and a quantum obstructing object with very long coherent time. Obviously, however, it's currently unpractical to construct a long-arm interferometer, and there is no known quantum state with such a very long coherent time. I'm not sure what they're talking about with the long-arm interferometer, but if they can really transport the quantum information without the entangled particles meeting, then why would how long the coherence can be maintained determine the "distance", since "nothing" is traveling between the two separate locations? Answer: Not really. 
It is indeed perfectly possible to entangle two particles A and B without using direct interactions between them, the easiest example being to entangle A with some ancilla system C, carry C over to B, and then swap the states of B and C, which will transfer the A-C entanglement onto the A-B pair. This is of course not magical at all, and the approach in that paper is not very different from this. In particular, their approach still requires some ancillary system to travel between A and B, or for A and B to send one ancillary system each to a halfway location. Zhang's claim of an "intergalactic entangled internet" cannot be achieved without reliable intergalactic transport of photons. For more details, see Zhang et al.'s actual paper, Counterfactual quantum-information transfer without transmitting any physical particles. Q.Guo et al. Sci. Rep. 5, 8416 (2015). It's open access and the figures clearly show photons propagating (repeatedly) between Alice and Bob. This is not to say, however, that the scheme in question isn't weird. The way this "counterfactual entanglement" works is roughly as follows: Start with two excited systems in separate locations (say, two single ions in separate ion traps, a few meters apart), and allow them to decay and emit a photon. Collect the photons and put them in optical fibres. Bring the optical fibres together and connect them to the input ports of a 50:50 beam splitter. Put single-photon detectors on the output ports of the beam splitter. If you detect a single click on only one of the detectors, you know that one of the atoms has decayed to the ground state $|g⟩$ and that the other is still in the excited state $|e⟩$. However, because there is a beam splitter between the detector and the atoms, you cannot know which is which. Even better, as it turns out, is that if you go through the math for it, the ions end up being entangled with each other, despite never having "met". 
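The math behind that last claim can be sketched with a toy model. The snippet below is my own simplification (each ion either keeps its excitation or emits exactly one photon into its fibre, an ideal lossless 50:50 beam splitter, photon-number-resolving detectors): conditioning on a single click at one detector leaves the two ions in a Bell state.

```python
import numpy as np

s = 1 / np.sqrt(2)
# 50:50 beam splitter on photon Fock states |n_a, n_b> (n <= 1 each),
# written in the output basis |n_c, n_d>:  a -> (c+d)/sqrt2,  b -> (c-d)/sqrt2
bs = {
    (0, 0): {(0, 0): 1.0},
    (1, 0): {(1, 0): s, (0, 1): s},
    (0, 1): {(1, 0): s, (0, 1): -s},
    (1, 1): {(2, 0): s, (0, 2): -s},   # (c†² - d†²)/2 acting on the vacuum
}

# Each ion decays into its own fibre: (|e>|0 photons> + |g>|1 photon>)/sqrt2
state = {}  # (ion1, ion2, n_c, n_d) -> amplitude
for ion1, n_a in [("e", 0), ("g", 1)]:
    for ion2, n_b in [("e", 0), ("g", 1)]:
        for photons, amp in bs[(n_a, n_b)].items():
            key = (ion1, ion2) + photons
            state[key] = state.get(key, 0.0) + 0.5 * amp

# Condition on exactly one click at detector c and none at d
cond = {k[:2]: v for k, v in state.items() if k[2:] == (1, 0)}
p_click = sum(abs(v) ** 2 for v in cond.values())
ions = {k: v / np.sqrt(p_click) for k, v in cond.items()}
print(p_click)  # 0.25
print(ions)     # {('g','e'): 0.707..., ('e','g'): 0.707...} -> a Bell state
```

With this sign convention, conditioning on the other detector instead gives the antisymmetric combination $(|ge⟩ - |eg⟩)/\sqrt 2$; either way, the which-ion information is erased by the beam splitter and the ions come out entangled.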
So this is in some sense magical, or at least it's better than the first scheme I mentioned. It's important to note, however, that even here the statement that both ions are entangled looks a lot stronger than it actually is. In particular, you cannot use entanglement to communicate, either faster or slower than light. The only 'weird' thing that entanglement allows you to do is to perform simultaneous measurements whose outputs will exhibit bizarre sorts of nonlocal correlations (i.e. the correlations are stronger than would be possible using hidden local variables, but not strong enough to signal). If you do that, though, in the end you are only making a complicated statement about the correlations between measurements of the two ions and the photons they emitted, which is much more mundane than what you started with.
{ "domain": "physics.stackexchange", "id": 20330, "tags": "quantum-entanglement, decoherence, quantum-teleportation" }
What exactly is an aerolite?
Question: I have found definitions calling it a stony meteorite, calling it a meteor of silicate, a granite meteorite, or even calling it a metallic rock from space and more. What is the actual definition? Answer: An aerolite is, exactly, a dated term from protoscience practitioners. The term has been completely superseded by modern workers, in modern times, using modern data. The term dates back to the time when meteorites- as “rocks from the sky”- were still in doubt by many. That you still encountered the term “aerolite” in 2023 is a tribute to record preservation and good curation practices, not to the word being useful or relevant in any way to us modern practitioners. You can regard the word as dutifully as you regard rule by nobility, using ships-of-the-line, to fight Barbary pirates. All date to the same period.
{ "domain": "astronomy.stackexchange", "id": 6941, "tags": "terminology, meteorite, naming" }
A better upper bound to a recursive function
Question: It's my first question here. I have an algorithm to calculate the factorial of a given number $n$ recursively, but without using any multiplication like the usual one; only sums are available. I've ended up with an algorithm with a complexity function $T(n) = 1 + nT(n - 1)$. Expanding the recurrence relation gives: $T(n) = 1 + n T(n - 1)$ $T(n) = 1 + n[1 + (n - 1)T(n - 2)]$ $T(n) = 1 + n + n^2 T(n - 2) - n T(n - 2)$ $T(n) = 1 + n + n^2[1 + (n - 2) * T(n - 3)] - n T(n - 2)$ $T(n) = 1 + n + n^2 + n^3T(n - 3) - n T(n - 2) - n^2T(n - 3)$ Because of the subtractions, I'm not able to reach a good upper bound for $T(n)$. Any tips? Before asking the question, I managed to show that $T(n) \in \mathcal{O}(n^n)$, which is a loose upper bound, but I'm not able to figure out a tighter one. Answer: Solving your recurrence, assuming the base case $T(0) = 1$, we get $$ \begin{align*} T(n) &= 1 + nT(n-1) \\ &= 1 + n + n(n-1)T(n-2) \\ &= 1 + n + n(n-1) + n(n-1)(n-2)T(n-3) \\ &= \cdots \\ &= 1 + n + n(n-1) + \cdots + n(n-1)\cdots2 + n! \\ &= n! \left(\frac{1}{0!} + \frac{1}{1!} + \frac{1}{2!} + \frac{1}{3!} + \cdots + \frac{1}{n!}\right) \\ &= n! \left(e - O\left(\frac{1}{(n+1)!}\right)\right) \\ &= e \cdot n! - O\left(\frac{1}{n}\right). \end{align*} $$ In particular, $T(n) = \Theta(n!) = \Theta(\sqrt{n}(n/e)^n)$.
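The closed form can be sanity-checked against the recurrence directly; a quick sketch with exact arithmetic via Fraction, using the base case $T(0)=1$ assumed above:

```python
import math
from fractions import Fraction
from functools import lru_cache

@lru_cache(maxsize=None)
def T(n):
    # the recurrence from the question, base case T(0) = 1
    return 1 if n == 0 else 1 + n * T(n - 1)

for n in range(13):
    # closed form n! * (1/0! + 1/1! + ... + 1/n!), computed exactly
    closed = math.factorial(n) * sum(Fraction(1, math.factorial(k))
                                     for k in range(n + 1))
    assert T(n) == closed

print(T(10), T(10) / math.factorial(10))  # 9864101, ratio -> e as n grows
```

The ratio $T(n)/n!$ is the partial sum of $\sum 1/k!$, so it converges to $e$ extremely fast, consistent with $T(n) = \Theta(n!)$.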
{ "domain": "cs.stackexchange", "id": 8753, "tags": "recurrence-relation, recursion" }
Why is it so much more energy intensive to compress hydrogen than methane?
Question: Why do you need 13.8 MJ/kg (9% of energy content) to compress hydrogen to 200 bar, but only 1.4 MJ/kg (2.5% of energy content) for methane? I looked into compressibility factors and the compressibility factor for methane is way lower than for hydrogen (up until high pressures). Does this determine how much work is required? Answer: The essential reason is that a kilogram of hydrogen contains 8 times as many molecules as a kilogram of methane (because the mass of a hydrogen molecule is about 1/8 of the mass of a methane molecule). If we assume, for the sake of argument, that the compression is isothermal (constant temperature, $T$) the work needed to compress a sample of $N$ molecules of an ideal gas from pressure $p_1$ to pressure $p_2$ is $$\text{Work}=Nk_BT \ln \left(\frac{p_2}{p_1}\right)\ \ \ \ [k_B= \text{Boltzmann's constant}]$$ So if the gases were ideal, 8 times more work would be needed per kilogram for the hydrogen, but at such high pressures the gases are far from ideal. Intermolecular forces and the finite volumes occupied by the molecules are significant and different for different gases. That would account for why the ratio of work needed is not exactly 8:1
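Under the isothermal ideal-gas assumption in the answer, the per-kilogram work can be computed directly. The numbers below are only this idealised estimate (real compressors are multi-stage and the gases are non-ideal at 200 bar, which is part of why the question's figures differ), but the roughly 8:1 ratio from the molar masses is already visible:

```python
import math

R = 8.314        # J/(mol K)
T = 298.0        # K (isothermal assumption)
p_ratio = 200.0  # compress from 1 bar to 200 bar

def isothermal_work_per_kg(molar_mass_g_per_mol):
    moles_per_kg = 1000.0 / molar_mass_g_per_mol
    return moles_per_kg * R * T * math.log(p_ratio)  # J/kg, ideal gas

w_h2 = isothermal_work_per_kg(2.016)    # hydrogen
w_ch4 = isothermal_work_per_kg(16.04)   # methane
print(w_h2 / 1e6, w_ch4 / 1e6)  # ~6.5 MJ/kg vs ~0.82 MJ/kg
print(w_h2 / w_ch4)             # ~7.96, exactly the molar-mass ratio
```

The ideal-gas work per kilogram scales as $1/M$, so the ratio is just $M_{\ce{CH4}}/M_{\ce{H2}} \approx 8$; deviations of the real-world figures from this come from non-ideality and the compression process used.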
{ "domain": "physics.stackexchange", "id": 67216, "tags": "thermodynamics, pressure, molecules, gas" }
How to simulate Temperature and create an office room
Question: 1) I want to create a room from a greyscale image. I have seen two methods using Gazebo: with the tag, but with this I can't apply a texture to the side of the wall, only from the height; and with the tag, but this doesn't work and I don't understand why. The sdf file is: <?xml version="1.0" ?> model://sun 0 0 20 model://ground_plane 0 0 1 0 0 0 2 2 2 2 2 2 1 0 0 1 1 0 0 1 <!-- --> <model name="image_map"> <static>true</static> <pose> 0 0 0 0 0 0</pose> <link name="link"> <collision name="collision"> <geometry> <image> <uri>file://media/materials/textures/room4.jpg</uri> <scale>20</scale> <threshold>200</threshold> <height>3</height> <granularity>1</granularity> </image> </geometry> </collision> <visual name="visual"> <geometry> <image> <uri>file://media/materials/textures/room4.jpg</uri> <scale>20</scale> <threshold>200</threshold> <height>3</height> <granularity>1</granularity> </image> </geometry> </visual> </link> </model> 2) I want to simulate a physical quantity like temperature in the environment and measure it with a sensor. Is this possible to do? Are there any projects available? Thank you for your answer Originally posted by Luigi on Gazebo Answers with karma: 1 on 2013-09-25 Post score: 0 Answer: The geometry model is broken in Gazebo, see this issue. Temperature is currently not implemented in Gazebo. The easiest solution is to write a WorldPlugin that outputs temperature data. Example world plugin. Every update cycle, your world plugin could get the position of your robot, and then output a temperature value to a topic. Originally posted by nkoenig with karma: 7676 on 2013-09-25 This answer was ACCEPTED on the original site Post score: 0 Original comments Comment by GuidoG on 2013-10-30: How can I read the value published on the topic? Comment by nkoenig on 2013-11-07: You can create a subscriber to a topic. From the command line, you can use the gztopic -h command.
{ "domain": "robotics.stackexchange", "id": 3466, "tags": "gazebo" }
What is the simplest known NP-Complete problem for testing P=NP solutions?
Question: About a year and a half ago I asked this question regarding $\mathsf{P}=\mathsf{NP}$. The answers have helped me understand the problem tremendously and since then I've dabbled further into the topic. With that stated, it is my understanding that $\mathsf{NP}$-complete problems are such that if a solution for $\mathsf{P}=\mathsf{NP}$ were found for that specific problem, then all $\mathsf{NP}$ problems could be solved using the same rules for resolving $\mathsf{NP}$. With that stated, what is the simplest $\mathsf{P}=\mathsf{NP}$ problem outlined to date that is $\mathsf{NP}$-complete? In other words, what is the most basic of problems that one could test a theoretical $\mathsf{P}=\mathsf{NP}$ solution against? I'm aware of many of the examples such as the Traveling Salesman or Knapsack problems but I assume there could be even simpler scenarios where all properties of the $\mathsf{P}=\mathsf{NP}$ or $\mathsf{P}\ne\mathsf{NP}$ dilemma are present. Answer: Since all NP-complete problems are basically equivalent, it's hard to say which one is the easiest. SAT was one of the original problems and has important practical applications, so it is really well studied. Writing fast SAT-solvers seems like a reasonably interesting hobby to me and getting started is not terribly hard. Integer Linear Programming is similarly important and well studied. The only objective way to discern NP-complete problems that I know of is their approximability. Some problems are hard to approximate, for others you can get almost arbitrarily good solutions in polynomial time. Subset sum is both very simple to understand and "simple" to solve. Simple to solve means that it has an easy Pseudo-polynomial algorithm and is easy to approximate. However, I don't know about any research involving the problem.
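To illustrate the last point about subset sum, here is a minimal sketch of its pseudo-polynomial dynamic program (my own illustration, assuming non-negative integers). It runs in $O(n \cdot \text{target})$ time, which is polynomial in the numeric value of the target but exponential in its bit length, which is why the problem is nonetheless NP-complete:

```python
def subset_sum(nums, target):
    # Pseudo-polynomial DP over reachable sums:
    # O(len(nums) * target) time, O(target) space; assumes nums >= 0.
    reachable = {0}
    for x in nums:
        reachable |= {s + x for s in reachable if s + x <= target}
    return target in reachable

print(subset_sum([3, 34, 4, 12, 5, 2], 9))   # True (e.g. 4 + 5)
print(subset_sum([3, 34, 4, 12, 5, 2], 30))  # False
```

The same set-of-reachable-sums idea also underlies the standard FPTAS for subset sum, which is one concrete sense in which the problem is "simple" among NP-complete problems.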
{ "domain": "cs.stackexchange", "id": 6885, "tags": "complexity-theory, np-complete, p-vs-np" }
how to migrate bag file containing roslib/Log and visualization_msgs/MarkerArray
Question: Hi! I want to migrate a bag file recorded with Diamondback to Electric. The bag contains roslib/Log messages, which seem to be missing in Electric, plus visualization_msgs/MarkerArray, for which there are no migration rules. So I run rosbag fix xxx.bag xxx-fixed.bag and after a while I am confronted with the message: * From: roslib/Log [acffd30cd6b6de30f120938c17c593fb] To: Unknown 1 rules missing: * From: roslib/Log [acffd30cd6b6de30f120938c17c593fb] To: Unknown * From: visualization_msgs/MarkerArray [f10fe193d6fac1bf68fad5d31da421a7] To: visualization_msgs/MarkerArray [90da67007c26525f655c1c269094e39f] 1 rules missing: * From: visualization_msgs/Marker [bc7602ad2ba78f4cbe1c23250683bdc0] To: visualization_msgs/Marker [18326976df9d29249efc939e00342cde] Try running 'rosbag check' to create the necessary rule files or run 'rosbag fix' with the '--force' option. So I do as it wishes and run rosbag check 2011-06-24-19-30-36.bag -g myRule.bmr which results in: WARNING: Within rule [GENERATED.update_visualization_msgs_MarkerArray_f10fe193d6fac1bf68fad5d31da421a7] cannot migrate from subtype [Marker] to [Marker].. The following migrations need to occur: * From: roslib/Log [acffd30cd6b6de30f120938c17c593fb] To: Unknown 1 rules missing: * From: roslib/Log [acffd30cd6b6de30f120938c17c593fb] To: Unknown * From: visualization_msgs/MarkerArray [f10fe193d6fac1bf68fad5d31da421a7] To: visualization_msgs/MarkerArray [90da67007c26525f655c1c269094e39f] 1 rules missing: * From: visualization_msgs/Marker [bc7602ad2ba78f4cbe1c23250683bdc0] To: visualization_msgs/Marker [18326976df9d29249efc939e00342cde] The message type roslib/Log appears to have moved. Please enter the type to migrate it to. > What am I supposed to do now? And does anyone have a migration rule for the MarkerArrays or know how to auto-generate them? 
Originally posted by daniel_maier on ROS Answers with karma: 290 on 2012-02-21 Post score: 1 Answer: roslib/Log migrated to rosgraph_msgs/Log. I don't know how to migrate the Marker. It may be the case that the auto-rule generator just works on it. (Make sure to back up your data before you give it a go.) Originally posted by kwc with karma: 12244 on 2012-02-22 This answer was ACCEPTED on the original site Post score: 1
{ "domain": "robotics.stackexchange", "id": 8332, "tags": "ros, rosbag, markerarray, ros-electric, roslib" }
Male and female voice spectrum
Question: I have a set of data which consists of male and female voices. They are pronunciations of the same sentence. What's the appropriate method for getting an average spectrum for the male and female voices separately and comparing them? I can take the FFT of each voice but what's the next step to get an average spectrum? Also Welch's method can be applied to one voice each time. I know that this is in fact a random process and each of the voices is a realization but how can we estimate PSD using these realizations? Edit: Thanks to Marcus Müller, here is the result which I've got using $$\frac{1}{N}( |\text{FFT}(x_1)|^2 + \dots + |\text{FFT}(x_n)|^2)$$ And here is the code: male = fft(k.'); %k: male voices male1 = (abs(male)).^2; male2 = sum(male1.')/36; %36: number of male voices female = fft(s.'); %s: female voices female1 = (abs(female)).^2; female2 = sum(female1.')/13; %13: number of female voices plot(f , fftshift(male2) , f , fftshift(female2)) legend({'male','female'},'Location','southwest') It would be really nice to see the other ways for estimation of PSD in this case. Answer: in my opinion and limited experience, you can't just compare spectra of male and female voices to identify which is which. 2 decades ago i worked on two products called PurePitch and PitchDoctor that were able to shift the pitch of a voice independently of shifting the formants. it also had the ability to increase or reduce the amount of pitch inflection of the voice (all the way to monotone). the products had factory-supplied "presets", one was called "testosterone" and the other was called "estrogen", that could convincingly make a female voice sound masculine or a male voice sound feminine. anyway, i don't remember the numerical parameters that were researched literally by my boss in usage of the alpha version of the product. but it was something like 300 or 400 cents difference between the male and female voices and less than that for the difference between formants. 
i think you might need a pitch detector and get lotsa words and do some statistics on the pitch to get a good machine idea of whether the pitch is or was more likely male or female.
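A minimal version of such a pitch detector can be sketched with a plain autocorrelation peak search. This is only a rough illustration for clean voiced signals (the 120 Hz test tone and the 60-400 Hz search range are my own assumptions; real speech needs framing, voicing detection, and statistics over many frames, as suggested above):

```python
import numpy as np

def estimate_pitch(x, fs, fmin=60.0, fmax=400.0):
    # crude autocorrelation pitch estimator for a single voiced frame
    x = x - np.mean(x)
    ac = np.correlate(x, x, mode="full")[len(x) - 1:]   # lags 0..N-1
    lo, hi = int(fs / fmax), int(fs / fmin)
    lag = lo + int(np.argmax(ac[lo:hi]))                # strongest periodicity
    return fs / lag

fs = 16000
t = np.arange(fs) / fs                                  # one second of signal
tone = np.sin(2 * np.pi * 120 * t) + 0.5 * np.sin(2 * np.pi * 240 * t)
print(estimate_pitch(tone, fs))  # close to 120 Hz
```

Running such an estimator over many voiced frames per speaker and comparing the pitch distributions is the kind of statistic the answer has in mind, rather than comparing raw magnitude spectra.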
{ "domain": "dsp.stackexchange", "id": 9645, "tags": "power-spectral-density, speech-processing, sound, voice" }
AntiTrypsin Enzyme
Question: In a smoking patient, is the lung over digested because of a combination of smoking and a defect in the antitrypsin gene (prevents digestion from protease)? Or does smoking act the same as a patient with a defect in the gene? Answer: Wikipedia has a fine article on the disease. https://en.wikipedia.org/wiki/Alpha_1-antitrypsin_deficiency from that article Alpha-1 antitrypsin (A1AT) is produced in the liver, and one of its functions is to protect the lungs from neutrophil elastase, an enzyme that can disrupt connective tissue. Neutrophil proteases are active anywhere neutrophils are. People who don't smoke can get COPD just from having the double recessive form of this disease (no active alpha-1 antitrypsin). But if that is the case with you and you also smoke you get accelerated COPD. COPD is already an inflammatory disease associated with mucus and neutrophil action and the damage this causes is no doubt augmented with the lack of protective alpha-1 antitrypsin. If you suffer this condition you can get infusions of alpha-1 antitrypsin - sort of like type 1 diabetics get insulin.
{ "domain": "biology.stackexchange", "id": 7957, "tags": "physiology, pathology, lungs" }
What was the large green yellow thing streaking across the sky?
Question: I just saw a large green yellow streak cross the sky. It looked like a shooting star but way bigger. It seemed closer too. I’m in North Garden VA. It was heading north. It lasted for 5-7 seconds. Answer: There was a fireball visible in VA at about 10pm EDT on the 28th of July https://fireballs.imo.net/members/imo_view/event/2022/4424 Other observers suggest it lasted about 3.5 seconds and was as bright as the full moon. The green colour was also mentioned by some. Videos are on Twitter. A fireball is a bright shooting star. It is produced the same way as a normal shooting star: a stone in space hitting the atmosphere. A brighter one is formed from a larger stone. A green colour would suggest the presence of nickel in the stone. The "closeness" is an optical illusion. All meteors start about 100 km above the ground. But your brain associates "brightness" with "closeness", so brighter ones seem nearer.
{ "domain": "astronomy.stackexchange", "id": 6501, "tags": "identify-this-object, meteor-shower" }
Surprising tutorial behavior when using model->SetLinearVel. Also, where can I learn more?
Question: I've got questions about behavior seen in the model-plugin tutorial: http://gazebosim.org/tutorials?tut=plugins_model&cat=write_plugin

When I compile and run it, the box slides along the ground for a bit and then starts to turn over. Why does this happen? Is the ground bumpy or something? It appears smooth. The behavior is also surprising because the plugin uses SetLinearVel(), so I thought that would constrain the linear velocity of the box's center of mass, but when it starts to flip over the z-component of velocity must be non-zero. Any comments about this discrepancy?

Finally, can you tell me where I can learn about what functions are available for links and models so that I can learn to write plugins controlling multi-link bodies?

Originally posted by raequin on Gazebo Answers with karma: 165 on 2016-01-14
Post score: 1

Original comments

Comment by raequin on 2016-01-14: Maybe SetLinearVel has been changed "to refer to setting the velocity state (like an initial condition), but not turning on a persistent motor controller" like SetVelocity has, according to this answer: http://answers.gazebosim.org/question/8371/kobukiturtlebot-doesnt-move-when-given-velocity-commands-in-gazebo-5/?answer=8377#post-id-8377

Answer: To answer the first question, the box turns over because of the friction coefficient, which has a default value of 1. Update your collision element as below to prevent rolling:

<collision name="collision">
  <geometry>
    <box>
      <size>1 1 1</size>
    </box>
  </geometry>
  <surface>
    <friction>
      <ode>
        <mu>0</mu>
        <mu2>0</mu2>
      </ode>
    </friction>
  </surface>
</collision>

And remember: even though the tutorial moves the block with Model::SetLinearVel, which forces the CG's linear motion in a straight line, physics dictates that there is still a downward gravity force, an upward normal contact force, and the corresponding tangential friction forces. Ultimately, the combination of forces imparts a rolling moment about the CG of the box, causing it to tumble.
Originally posted by hsu with karma: 1873 on 2016-01-16 This answer was ACCEPTED on the original site Post score: 2
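The rolling-moment explanation above can be checked with back-of-the-envelope statics. This is an illustrative sketch (not Gazebo code, and not from the original answer): it assumes a uniform rigid cube dragged at constant speed on flat ground, with the contact normal force allowed to shift at most half a side length toward the leading edge. Under those assumptions the mass, gravity, and side length all cancel, and the cube can begin to tumble once the friction coefficient reaches 1, which is exactly Gazebo's default.

```python
# Illustrative statics sketch: why a dragged cube tips when mu >= 1.
# Assumes a uniform cube of side L with its CG at height L/2, dragged at
# constant speed, and a normal force that can shift at most L/2 forward.

def tips_over(mu: float) -> bool:
    """Return True if kinetic friction can produce a net rolling moment.

    Friction torque about the CG:            mu * m * g * (L / 2)
    Max restoring torque (normal force at
    the leading edge):                        m * g * (L / 2)
    m, g, and L cancel, leaving the condition mu >= 1.
    """
    return mu >= 1.0

print(tips_over(1.0))  # default friction: the box can start to tumble
print(tips_over(0.0))  # mu/mu2 set to 0: it just slides flat
```

With mu exactly 1 the condition is marginal, so in simulation the box slides for a while before solver noise and contact perturbations tip it over, matching the behavior the question describes.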
{ "domain": "robotics.stackexchange", "id": 3854, "tags": "gazebo" }