Why is the TEB controller not available in ROS 2 Humble?
Question: On the navigation plugin setup document, the TEB controller is mentioned, but I can't find any documentation for setting up this controller in Nav2 Humble. And there is no humble branch in the teb_local_planner repository. So I copied the TEB parameters from foxy and launched the navigation node, but it told me the TEB controller plugin can't be found. [controller_server-4] [FATAL] [1697706023.360686729] [controller_server]: Failed to create controller. Exception: According to the loaded plugin descriptions the class teb_local_planner::TebLocalPlannerROS with base class type nav2_core::Controller does not exist. Declared types are dwb_core::DWBLocalPlanner nav2_mppi_controller::MPPIController nav2_regulated_pure_pursuit_controller::RegulatedPurePursuitController nav2_rotation_shim_controller::RotationShimController So why is the TEB controller not available in ROS 2 Humble? Or did I miss something? Thanks. Answer: There was a recent post on the ROS Discourse about a new MPPI controller. That post mentions: I would be inundated with messages regarding TEB if I did not address it here. While TEB is quite a fantastic piece of software, the maintainer has since moved on from his academic career into a position where it is not possible to maintain it into the future. As such, its largely unmaintained and represents techniques which are bordering on a decade old. This work is very modern and state of the art in the same vertical of techniques as TEB (MPC) so it meets the same need that TEB serves. With MPPI’s modern methodology, performance, active maintenance, and documentation, I hope you all will agree with me over time that this is an improvement. There are certain characteristics of the Elastic Band approaches that I did not like in a production setting that are not present in this work. So I conclude that TEB is indeed not supported anymore. 
EDIT: that thread also mentions the upcoming ROSCon 2023 presentation On Use of Nav2 MPPI Controller which is scheduled today, so you might want to watch the live feed (or later on the recording, when it gets published). EDIT 2: In the description of the ROSCon talk, MPPI is referred to as "the functional successor of the TEB and DWB controllers", so it seems that MPPI is effectively considered as the replacement for TEB: We introduce the Nav2 project's MPPI Local Trajectory Planner. It is the functional successor of the TEB and DWB controllers, providing predictive time-varying trajectories reminiscent of TEB while providing tunable critic functions similar to DWB.
{ "domain": "robotics.stackexchange", "id": 38787, "tags": "ros2, ros-humble, plugin, teb-local-planner, nav2" }
Why can the dispersion relation for a linear chain of atoms (connected by springs) be written as $\omega(k)=c_s \lvert k\rvert$?
Question: On the German Wikipedia site (right under "Akustische Moden"), the dispersion relation for a linear chain of atoms (connected by springs): $$\omega(k)=2 \sqrt{\frac{K}{M}} \left \vert \sin{\frac{ka}{2}}\right \vert$$ is approximated as: $$ \omega (k)\approx c_s \lvert k\rvert$$ for small $k$. ($c_s$ is the speed of sound). Why are we allowed to do that? Answer: Because by expanding the sine term in a Taylor series, you get $\sin(x)\approx x - \frac{x^3}{6} +\cdots$ So, for small values of $k$, you are allowed to keep just the linear term.
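A quick numerical check makes the approximation concrete. The sketch below uses units chosen for illustration (assuming $K/M = 1$ and $a = 1$) and compares the exact dispersion with the linear form $c_s\lvert k\rvert$, where $c_s = a\sqrt{K/M}$:

```python
import math

# Exact dispersion of the monatomic chain (assumed units: K/M = 1, a = 1)
def omega(k, K_over_M=1.0, a=1.0):
    return 2.0 * math.sqrt(K_over_M) * abs(math.sin(k * a / 2.0))

c_s = 1.0  # speed of sound: a * sqrt(K/M) in these units
for k in (1.0, 0.1, 0.01):
    exact, linear = omega(k), c_s * abs(k)
    # The relative error shrinks like (ka)^2 / 24, from the Taylor remainder
    print(k, exact, linear, abs(exact - linear) / exact)
```

The printed relative error drops by roughly two orders of magnitude each time $k$ shrinks by a factor of ten, exactly as the dropped $x^3/6$ term predicts.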
{ "domain": "physics.stackexchange", "id": 31202, "tags": "solid-state-physics, phonons, dispersion" }
Why did they expect astronaut Scott Kelly's telomere shortening to accelerate? (they got longer!)
Question: The NPR News article and podcast Scientists Share Results From NASA's Twins Study says: SCOTT KELLY (NASA Astronaut): You know, the symptomatic stuff is fine. I don't have any long-term negative feelings, physically, from being in space. Now, there's the things you can't feel. And hopefully, I will never learn that those are a problem. GREENE (Host): Those things you can't feel - well, it turns out they are as small as the protective structures at the ends of his chromosomes. MARTIN (Host): Yeah. These are called telomeres, and normally, they get shorter with age. But what about in space? SUSAN BAILEY (Principal Investigator): What we wanted to do was evaluate telomere length in both of the twins before and after so that we could see, you know, where they started and then where they ended. MARTIN: Susan Bailey was one of the scientists who answered this question. She expected the stresses of space to shorten telomeres quicker. BAILEY: And, in fact, we saw exactly the opposite thing - that during spaceflight, he had many more long telomeres than he did before he went up. So that really couldn't have been more of a surprise to us. See also Radiation Biologist Dr. Susan Bailey Studies the Cellular Clocks of Astronaut Twins Question: Why did investigators initially believe that Scott Kelly's year in space would accelerate the rate of telomere loss, relative to his baseline rate and the rate of his identical twin brother on the ground? What would be the postulated mechanisms? Answer: Dr. Bailey wrote a short piece that hints at the reasons behind why she expected what she expected: Telomeres are the ends of chromosomes that protect them from damage and from “fraying” – much like the end of a shoestring. Telomeres are critical for maintaining chromosome and genome stability. However, telomeres naturally shorten as our cells divide, and so also as we age. 
The rate at which telomeres shorten over time is influenced by many factors, including oxidative stress and inflammation, nutrition, physical activity, psychological stresses and environmental exposures like air pollution, UV rays and ionizing radiation. Thus, telomere length reflects an individual’s genetics, experiences and exposures, and so are informative indicators of general health and aging... Our study proposed that the unique stresses and out-of-this-world exposures the astronauts experience during spaceflight – things like isolation, microgravity, high carbon dioxide levels and galactic cosmic rays – would accelerate telomere shortening and aging. To test this, we evaluated telomere length in blood samples received from both twins before, during and after the one year mission.
{ "domain": "biology.stackexchange", "id": 9691, "tags": "molecular-biology, radiation, telomere" }
Next Palindrome
Question: I've written a function which, given a number string, returns the next largest palindrome in string form. For example, if the input string is "4738", the output should be "4774". The original problem description is found here. The function seems to be working for all the tests I have thrown at it so far, but the problem is that it is timing out when I submit it at the website above. Which parts are slow, and how can I improve the performance of this function?

# Algorithm: If replacing the right half of the original number with the
# mirror of the left half results in a bigger number, return that.
# Otherwise, increment the left half of the number, then replace the
# right half of the original number with the mirror of the new left half
# and return.
def next_palindrome(n):
    X = len(n)>>1
    Y = X+(len(n)&1)
    first = n[:X][::-1]
    second = n[Y:]
    Z = n[:Y]
    if int(first) > int(second): return Z+first
    else:
        bar = str(int(Z)+1)
        return bar+bar[:X][::-1]

Answer: Your indentation is a little odd. In Python, typically you'd never put two statements on the same line... unless you were code golfing, in which case you'd probably put all the statements on one single line and/or use 1-space indents.

def next_palindrome(n):
    X = len(n)>>1
    Y = X+(len(n)&1)
    first = n[:X][::-1]
    second = n[Y:]
    Z = n[:Y]
    if int(first) > int(second):
        return Z+first
    else:
        bar = str(int(Z)+1)
        return bar+bar[:X][::-1]

The biggest time sink here is going to be the line int(first) > int(second), where you're taking two big strings and converting them to integers. That involves a lot of character comparisons and a lot of multiplications by 10... and with the inputs the grader is probably giving you, it'll involve bignum math, which means memory allocation. You don't actually need to convert the strings to int (or bignum), since you know that first and second are both strings of digits! 
You can do this instead:

assert Y >= X, "by construction"
assert len(second) >= len(first), "by construction"
if len(first) == len(second) and first > second:
    assert int(first) > int(second), "because ASCII"

I also suspect that first = n[:X][::-1] is not the most efficient way to reverse the first X characters of n in Python. I would try something like

first = n[X-1::-1]

But getting rid of the ints speeds up your program by a huge factor, so that the above suggestion is completely negligible. I used the following program to benchmark your code:

import timeit
setup = '''
your code
'''
print(timeit.timeit('next_palindrome("1234"*10000)', number=1000, setup=setup))

Using int(first) > int(second): 4.452 seconds. Using X == Y and first > second: 0.019 seconds. P.S.: Notice that your code ignores leading zeros in both the input and the output. It thinks that the palindrome following 0110 is 22, not 111; and the palindrome following 99 is 101, not 00100. This is probably fine, but you might enjoy thinking about how to "fix" that issue, too.
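Pulling the answer's suggestions together, a sketch of the revised function might look like the following (assuming, as the original effectively does, digit strings of length at least two with no leading zeros):

```python
def next_palindrome(n):
    # Assumes n is a digit string of length >= 2 with no leading zeros,
    # the same cases the original handles.
    X = len(n) >> 1        # size of each half
    Y = X + (len(n) & 1)   # left half including any middle digit
    first = n[X - 1::-1]   # reversed left half
    second = n[Y:]
    # Equal-length digit strings compare correctly as plain strings,
    # so no int() conversion of the halves is needed.
    if first > second:
        return n[:Y] + first
    bar = str(int(n[:Y]) + 1)
    return bar + bar[X - 1::-1]

print(next_palindrome("4738"))  # 4774
print(next_palindrome("99"))    # 101
```

The only int() left operates on the short left half, so the bignum conversions that dominated the original runtime are gone.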
{ "domain": "codereview.stackexchange", "id": 16920, "tags": "python, performance, programming-challenge, palindrome" }
ROS package for EKF fusion of imu and lidar
Question: I have a 3d lidar and an imu, both of which give a pose estimate. I need to get an EKF-fused pose output combined from both of them. I couldn't find a ROS package which does that. There is ETHZ's ethzasl_sensor_fusion which does it for a camera and an imu, but not for a lidar. Hope someone can direct me to one if it exists, or tell me how to do it using some available template. I also came across robot_localization (which I guess assumes odometry data comes from wheel encoders). I was wondering if I could perform scan matching on the lidar data to get an odometry estimate and feed it to this package along with the imu data. I guess the EKF needs to know which sensor we are using when computing the Kalman gain and other matrices, so theory doesn't support this. But I was wondering if this is still practically feasible, given that lidar odometry can be assumed to be coming from wheel encoders. Originally posted by Harsh2308 on ROS Answers with karma: 80 on 2018-04-01 Post score: 1 Answer: If your 3d scan matching is giving you a pose estimate you can feed that directly into robot_localization (in this section of the documentation they go over what kind of data r_l can accept - it sounds like you'd be feeding PoseWithCovarianceStamped messages). Originally posted by stevejp with karma: 929 on 2018-04-02 This answer was ACCEPTED on the original site Post score: 0 Original comments Comment by Harsh2308 on 2018-04-02: yes but the thing is a pose estimate from a lidar, a pose estimate from a camera, and a pose estimate from wheel encoders are not the same thing. Correct me if I am wrong here. Because the EKF/UKF fusion step requires an underlying model, i.e. of the sensor from which you are computing the pose estimate. Comment by Akash Purandare on 2018-04-03: If you are giving Pose Estimates from various sensors located at different locations, you need to publish the transforms from the sensors to your base_link frame to make it work in the robot_localization package. 
Otherwise, you cannot do any localization. Transforms are very important for them.
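To illustrate why feeding several pose sources into one filter is practically workable, here is a toy one-dimensional Kalman measurement update fusing two independent pose readings. This is only a generic sketch of the idea, not how robot_localization is implemented, and the measurement values and variances are made up for illustration:

```python
# Toy 1-D Kalman measurement update: each source just needs a value z
# and a variance R; the update does not care whether z came from lidar
# scan matching, an IMU, or wheel encoders.
def fuse(x, P, z, R):
    K = P / (P + R)                  # Kalman gain: prior vs. measurement trust
    return x + K * (z - x), (1 - K) * P

x, P = 0.0, 1e6                      # vague prior on position
x, P = fuse(x, P, 2.0, 0.5)          # "lidar" pose reading, variance 0.5
x, P = fuse(x, P, 2.4, 1.0)          # "imu" pose reading, variance 1.0
print(x, P)                          # fused estimate lies between the readings
```

The fused estimate lands between the two measurements, weighted toward the lower-variance source, and the posterior variance is smaller than either measurement's alone.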
{ "domain": "robotics.stackexchange", "id": 30510, "tags": "ros, imu, sensor-fusion, ros-kinetic" }
Analogous Library to OpenCV for Audio Processing / Analysis
Question: I understand OpenCV is the de facto library for programming image processing in C/C++; I'm wondering if there is a C or C++ library like that for audio processing. I basically want to filter raw waves from a microphone, and analyze them with some machine learning algorithms. But I may eventually also need: Multiplatform audio capture and audio playback DSP - Audio filters Tone detection Tonal property analysis Tone synthesis Recognition given some recognition corpus and model Speech / music synthesis Any advice would be appreciated. Answer: Consider the following: clam-project.org: CLAM (C++ Library for Audio and Music) is a full-fledged software framework for research and application development in the Audio and Music Domain. It offers a conceptual model as well as tools for the analysis, synthesis and processing of audio signals. MARF: MARF is an open-source research platform and a collection of voice/sound/speech/text and natural language processing (NLP) algorithms written in Java and arranged into a modular and extensible framework facilitating addition of new algorithms. MARF can run distributedly over the network and may act as a library in applications or be used as a source for learning and extension. aubio: aubio is a tool designed for the extraction of annotations from audio signals. Its features include segmenting a sound file before each of its attacks, performing pitch detection, tapping the beat and producing midi streams from live audio.
{ "domain": "dsp.stackexchange", "id": 11362, "tags": "image-processing, audio" }
Is the change in enthalpy (ΔH) for dissolution of urea in water positive or negative?
Question: To test the properties of a fertilizer, $\pu{15.0g}$ of urea, $\ce{NH2CONH2_{(s)}}$, is dissolved in $\pu{150 mL}$ of water in a simple calorimeter. A temperature change from $\pu{20.6^\circ C}$ to $\pu{17.8^\circ C}$ is measured. Calculate the molar enthalpy of solution for the fertilizer urea. I worked through this question by finding $Q = mc\Delta T$, and then dividing $Q$ by the moles of urea present. I can tell the process is endothermic because $\Delta T$ is negative; however, my answer for $\Delta H$ comes out as negative, which would only make sense if this were an exothermic reaction. I'm not sure where I am wrong, to be honest. Here is my work: $$ \begin{align} \Delta H &= \frac{(\pu{150ml}) \times (\pu{1g mL^{-1}}) \times (\pu{4.18J g^{-1} K ^{-1}}) \times (\pu{-2.8 K})} {(\pu{15g}/\pu{60.07g})}\\ &= \pu{-7030.59J/mol}\\ &= \pu{-7.03kJ/mol} \end{align} $$ TL;DR - the question asks for $\Delta H$ of an endothermic process; not sure if my answer should be positive or negative. Answer: The sign of $Q$ depends on the perspective. The water temperature decreased because it "lost" heat. The process of dissolving urea required energy; it "gained" energy. If I give you a penny, should that be +1 or -1 penny? Well, it depends who you ask. In your answer, you are missing a negative sign: $\Delta H = -Q$, given the way you start out with $Q$ from the perspective of the water.
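The sign bookkeeping can be made explicit in a few lines. This is just the arithmetic from the question, with the $\Delta H = -Q$ step included:

```python
# Calorimetry for urea dissolving in water, with the signs made explicit
m_water = 150.0            # g (150 mL of water at 1 g/mL)
c_water = 4.18             # J/(g*K)
dT = 17.8 - 20.6           # K; negative, because the water cooled down
n_urea = 15.0 / 60.07      # mol of urea dissolved

q_water = m_water * c_water * dT   # heat change of the water: negative
dH = -q_water / n_urea             # molar enthalpy of solution: positive
print(round(dH / 1000, 2))         # about +7.03 kJ/mol, i.e. endothermic
```

The magnitude matches the work in the question; only the sign flips once the heat lost by the water is attributed, with opposite sign, to the dissolution process.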
{ "domain": "chemistry.stackexchange", "id": 11871, "tags": "thermodynamics, water, aqueous-solution, enthalpy" }
Conditional merge with Ruby to include a parameter if it hasn't been included
Question: I have the following function, and I have a hunch it can be written more concisely in Ruby.

def template_params
  filtered_params = params[:template].permit!
  filtered_params[:published] ||= false
  filtered_params
end

The end result is that filtered_params has published: false if the params hash doesn't have published as a key. I'm wondering if there is some kind of conditional merge w/ Ruby to turn that code into a 1-liner. Essentially, the params hash includes "what has been changed" and when :published becomes false (it's a checkbox that gets unchecked), it naturally doesn't get included in the params hash. I wanted to pass in the fact that :published became false when it got unchecked. I do not need to pass in published: false by default, because whatever the default is, is fine. It is the changed state that I need, but from false to true. The alternative works: if the checkbox is checked, it becomes true and gets passed through without issue. This is more an exercise in how I can turn this bit of code into something more "elegant" and maybe a 1-liner. As it stands, it does exactly what I need it to do. Answer: You want to merge params with one or more default values. How about, well, #merge?

{"published" => false}.merge(params[:template].permit!)

Only trick, if you can call it a trick, is that the default hash must use string keys, not symbols, despite the params object accepting either. The resulting hash will also be keyed with strings only. Alternatively, you can use HashWithIndifferentAccess to get around this:

defaults = HashWithIndifferentAccess.new({ published: false })
filtered = defaults.merge(params[:template].permit!)

And filtered will be a HashWithIndifferentAccess much like params produces. All that said, the better way to handle all this is to send the right parameters in the first place. With the code above you're always assuming that published should be set to false unless told otherwise. 
But there could be all sorts of reasons why only a subset of parameters is sent; if published is intentionally left out, it'll suddenly be filled in - perhaps wrongly - by the controller. Relatedly, if the parameter is required, raise an exception if it's missing and tell the client. It's not the controller's job to fill in the blanks when faced with incomplete information. Conversely, it's the client's job to send the parameters that are necessary or required. As I said in a comment, if you use Rails' regular checkbox helpers, they'll produce a hidden input with the checkbox's "unchecked" value, in addition to the checkbox input itself. If the checkbox is checked, it'll override the former, unchecked value. This is the typical (and better) way to solve this. So in short, if you want to merge two hashes, use #merge. But in this case, you probably don't want to do that.
{ "domain": "codereview.stackexchange", "id": 21101, "tags": "ruby, ruby-on-rails" }
Find if one string is a valid anagram of another, in Scala
Question: Question is taken from leet code. Problem Given two strings s and t, write a function to determine if t is an anagram of s. Examples Input: s = "anagram", t = "nagaram" Output: true Input: s = "rat", t = "car" Output: false Here is my implementation in Scala:

import scala.collection.immutable.HashMap

object ValidAnagram extends App {
  def validAnagram(s1: String, s2: String): Boolean = {
    if (s1 == s2) true
    else if (s1.length != s2.length) false
    else {
      val lookupTable = s1.foldLeft(HashMap[Char, Int]()) ((m, c) =>
        m ++ HashMap(c -> (if (m contains c) m(c) + 1 else 1)))
      println(lookupTable)
      (s2.foldLeft(lookupTable) { (m, c) =>
        if (m contains c) {
          val count = m(c)
          if (count > 1) { m + (c -> (count - 1)) } else m - c
        } else m // we can return here if not functional
      }).isEmpty
    }
  }

  println(validAnagram(args(0), args(1)))
}

Answer: Short circuit! Check the lengths first. In the case that they aren't the same, you don't need to incur a linear pass for an equality check. Don't check the hashmap for an entry! It's good that you know m(c) can throw if there's a missing key; luckily, you can use Map#getOrElse and provide a default -- in this case, it makes sense to provide a 0 default since you haven't seen the character. Compare the counts from each string directly! I find the subtracting of the counts to be very confusing. You'll need to remove keys from the counter if any of the values go to 0, which lowers the signal to noise ratio of the code. It's better to just build a second counter. If you decide to build two counters and compare them, you'll need to do the same fold logic twice. This is really messy to write, so you won't want to do this twice. You may want to pimp out the String class. 
You can do this in the following way:

implicit class Counter(s: String) {
  def countCharacters: Map[Char, Int] =
    s.foldLeft(Map.empty[Char, Int])({ case (acc, c) => acc + (c -> (acc.getOrElse(c, 0) + 1)) })
}

You can see it in action here:

scala> "hello".countCharacters
res0: Map[Char,Int] = Map(h -> 1, e -> 1, l -> 2, o -> 1)

So now you can compare the counts with a clean, non-repeated API. Pulling this all together, you can write:

object ValidAnagram extends App {
  // when I see a string, implicitly construct a `Counter` instance
  implicit class Counter(s: String) {
    // this will be "added" to the String API via an implicit class construction
    def countCharacters: Map[Char, Int] =
      s.foldLeft(Map.empty[Char, Int])({ case (acc, c) => acc + (c -> (acc.getOrElse(c, 0) + 1)) })
  }

  def validAnagram(s1: String, s2: String): Boolean =
    if (s1.length != s2.length) false
    else if (s1 == s2) true
    else s1.countCharacters == s2.countCharacters

  println(validAnagram(args(0), args(1)))
}
{ "domain": "codereview.stackexchange", "id": 32816, "tags": "strings, interview-questions, functional-programming, scala" }
Have there been multiple aboriginal species of dogs?
Question: In "Variation under Domestication", Darwin writes that: I may here state, that, looking to the domestic dogs of the whole world, I have, after a laborious collection of all known facts, come to the conclusion that several wild species of Canidae have been tamed, and that their blood, in some cases mingled together, flows in the veins of our domestic breeds. From this, I understand that several species of dogs evolved independently and half-civilised man tamed them later on, but I want to confirm this. Does Darwin argue that there have been multiple species of beings that resemble the shape of what we now call "dogs", or that there has been one meta-species of aboriginal dogs, which branched off into several wild varieties? Answer: That's one of — quite a few, we have to say — the mistakes Darwin made in his edition from 1859 (I have to confess that this is the only edition I've read, and I reckon this is the only edition anyone should read). This mistake is even more striking if you realize that he failed to apply the very same reasoning he had made just a few pages earlier, when discussing variation in pigeons: he explained that, even if counterintuitive, all different pigeons with all different features descend from the same wild species. Then, when he talks about dogs a few pages later, he incomprehensibly makes the same mistake he had just accused his readers of committing. Darwin is not the only one: Lorenz made the same unfounded claim, namely that the domestic dog would have two different origins, one from Canis lupus and another one from Canis latrans. We have to agree that Lorenz was way more bold than Darwin when it comes to making unfounded claims. The fact is that, today, we're pretty sure (the scientific and statistical meaning of sure is being used here) that all domestic dogs (Canis lupus), from the Chihuahua to the German Shepherd, descend from a population of wolves (also Canis lupus) from East Asia. 
Here is a Nature paper from Peter Savolainen, one of the best researchers on this subject: Wang, G., Zhai, W., Yang, H. et al. Out of southern East Asia: the natural history of domestic dogs across the world. Cell Res 26, 21–33 (2016) doi:10.1038/cr.2015.147
{ "domain": "biology.stackexchange", "id": 10187, "tags": "evolution, species-identification, species" }
Change detection
Question: I have a question related to change detection. The application domain is robotics/planning. Background/setting: There is a sensor detecting distance from an obstacle (ultrasonic / sonar sensor) at a specific position (x, y, theta) in the environment. It returns some reading at regular time intervals. Let's say the reading is R, and over a period of time it records R+ or R- (+/- means variation due to sensor inaccuracies). Case 1: I introduce an additional object between the sensor and the obstacle at a distance D (D < R), so that at the next instance D is detected and returned. Case 2: I remove the original obstacle and now the next obstacle is at D' (D' > R), and at the next instance D' is returned. Question: Is there a way to exactly (or with high probability) say that a change occurred NOW (when I add or remove an obstacle)? Most change analysis algorithms consider a run length before the change point and some data after the change point and indicate the position where the change occurred. But none I have read so far say the change happened NOW; even the "online" algorithms seem to need some burn-in data. EDIT: Ultimate goal: I want to implement a method that takes the data vector and returns whether the latest data point was a change point. A possible solution/hack: Since my work involves streaming data, this is the approach I am currently taking.

1. Read a window of data from the end of the stream (for now, my window size is 20 values).
2. Run bcp (from R) on this window.
3. Check the posterior probability of a change at location 18 (for all the runs I just had, the last value is NA, hence I ignore that; and since the data is zero-indexed (calling R from Python using rpy2), the position turns out to be 18 for a window size of 20).
4. Set a threshold of 70% for the posterior probability (for now in my experimental setting this works fine; I may have to work on getting a proper threshold later).
5. If the posterior probability at location 18 > 70%, return TRUE, indicating the recent data point has a different mean ("change detected"); else return FALSE.

This may not be the most efficient way of doing it, but it is doing its job for now. I am using this approach to carry my work forward. I will update the thread if I find a better approach. Thank you all for the help! Answer: Consider how an algorithm might detect a change. You're observing instances of some random variable, $X_1,X_2,\dots,X_{k-1}$. Suddenly (and unknown to you) at $X_k$ something about the distribution of $X$ changes. Now your observations $X_k,\dots,X_n$ are different in some way. You want to know what $k$ is based on your observations alone. In order to detect the change, you have to have some idea of what 'before' might look like so you can have confidence that 'after' is really different. So, yes, all change detection algorithms will use some run length before and after the true change to make a decision (edit: actually, you don't need run length before and after; you could just have an assumption about the data-generating process. Maybe you say it's normal with mean 0 and variance 1, and your first observation is 5000 - you don't need run length to know you're wrong somewhere). Anything else would be an even wilder kind of predicting the future. It seems like the real concern might be the latency of signal detection. You'd like the sensor to detect it after just a few instances of the data after the true change point. So my question is, do you really need it to work now? It seems reasonable to me that you're not interested in the number of data points, but the time it takes to gather them. If you have a sensor that updates 100,000 times a second, 100 data points isn't a huge deal.
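For readers without R/bcp at hand, the windowed check can be approximated in plain Python. This is a deliberately simple stand-in (a z-score test of the newest reading against the rest of the window), not the Bayesian bcp computation described above, and the distance values are made up for illustration:

```python
from statistics import mean, stdev

def latest_point_is_change(window, z_thresh=3.0):
    # Flag the newest reading if it lies more than z_thresh standard
    # deviations from the mean of the earlier points in the window.
    *history, latest = window
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return latest != mu
    return abs(latest - mu) / sigma > z_thresh

steady = [100.2, 99.8, 100.1, 100.0, 99.9, 100.3, 100.1, 100.0]
jump = steady[:-1] + [60.0]   # an obstacle appears at distance 60
print(latest_point_is_change(steady), latest_point_is_change(jump))
```

Like the bcp hack, this only answers "did the latest point change?" with respect to a short history, so it inherits the same caveat from the answer below: some notion of 'before' is still required.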
{ "domain": "datascience.stackexchange", "id": 2240, "tags": "python, statistics, anomaly-detection" }
Going from a circuit to the quantum state output of the circuit
Question: I'm looking at the following lecture notes where we start with the circuit below for some state $\vert\psi\rangle_L$ that picks up an error to become $E\vert\psi\rangle_L$ It is later claimed in the notes that the syndrome extraction part of the circuit can be represented by the following operation on $E\vert\psi\rangle_L$. $$E|\psi\rangle_{L}|0\rangle_{A} \rightarrow \frac{1}{2}\left(I_{1} I_{2}+Z_{1} Z_{2}\right) E|\psi\rangle_{L}|0\rangle_{A}+\frac{1}{2}\left(I_{1}I_{2}-Z_{1} Z_{2}\right) E|\psi\rangle_{L}|1\rangle_{A}$$ How does one see this? I can write the Hadamard and the control $Z_1Z_2$ gates as 8x8 matrices but this seems like a tedious way to do it. The alternative is to express the control $Z_1Z_2$ gates using something like this answer. However, I was unable to do it this way either. So the question is - how do I see that the following line is true just by looking at the circuit? $$E|\psi\rangle_{L}|0\rangle_{A} \rightarrow \frac{1}{2}\left(I_{1} I_{2}+Z_{1} Z_{2}\right) E|\psi\rangle_{L}|0\rangle_{A}+\frac{1}{2}\left(I_{1}I_{2}-Z_{1} Z_{2}\right) E|\psi\rangle_{L}|1\rangle_{A}$$ Answer: Let's represent controlled $Z_1Z_2$ gate in the projector formalism, as described in this answer: $$C_AZ_1Z_2 = |0\rangle\langle0|_A I_1I_2 + |1\rangle\langle1|_A Z_1Z_2 $$ This just tells you to apply identity gates to qubits 1 and 2 if the ancilla is in the $|0\rangle$ state and to apply Z gates to qubits 1 and 2 if the ancilla is in the $|1\rangle$ state - which is the definition of the controlled gate. 
Now let's apply this to the state $\color{blue}{|+\rangle}_{A}E|\psi\rangle_{L}$ (this is the state of the system after the first Hadamard gate of syndrome extraction): $$C_AZ_1Z_2 \big( \color{blue}{|+\rangle}_{A}E|\psi\rangle_{L} \big) = \big( \color{blue}{|0\rangle\langle0|}_A I_1I_2 + \color{blue}{|1\rangle\langle1|}_A Z_1Z_2 \big) \bigg( \frac{1}{\sqrt2}\color{blue}{(|0\rangle + |1\rangle)}_AE|\psi\rangle_{L} \bigg) = $$ $$= \frac{1}{\sqrt2} \big( \color{blue}{|0\rangle}_A \otimes I_1I_2 E|\psi\rangle_{L} + \color{blue}{|1\rangle}_A \otimes Z_1Z_2 E|\psi\rangle_{L} \big)$$ Finally, apply the last Hadamard gate to the ancilla; after that the state of the system becomes $$\frac{1}{\sqrt2} \big( \color{blue}{|+\rangle}_A \otimes I_1I_2 E|\psi\rangle_{L} + \color{blue}{|-\rangle}_A \otimes Z_1Z_2 E|\psi\rangle_{L} \big) = $$ $$= \frac{1}{2} \big( \color{blue}{(|0\rangle + |1\rangle)}_A \otimes I_1I_2 E|\psi\rangle_{L} + \color{blue}{(|0\rangle - |1\rangle)}_A \otimes Z_1Z_2 E|\psi\rangle_{L} \big) = $$ (after reordering the terms and grouping same ancilla states together) $$= \frac{1}{2} \color{blue}{|0\rangle}_{A} \otimes \left(I_{1} I_{2}+Z_{1} Z_{2}\right) E|\psi\rangle_{L} + \frac{1}{2} \color{blue}{|1\rangle}_{A} \otimes \left(I_{1}I_{2}-Z_{1} Z_{2}\right) E|\psi\rangle_{L}$$ which is exactly the state you need to get.
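If you'd still like a sanity check without doing the 8x8 algebra by hand, the identity can be verified numerically. The sketch below assumes NumPy is available, orders the tensor factors as ancilla ⊗ qubit 1 ⊗ qubit 2, and uses a random two-qubit state to stand in for $E|\psi\rangle_L$:

```python
import numpy as np

I4 = np.eye(4)
Z = np.diag([1.0, -1.0])
H = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2)

ZZ = np.kron(Z, Z)
HA = np.kron(H, I4)  # Hadamard acting on the ancilla only
# Controlled Z1Z2 in the projector formalism: |0><0| (x) I + |1><1| (x) Z1Z2
CZZ = np.kron(np.diag([1.0, 0.0]), I4) + np.kron(np.diag([0.0, 1.0]), ZZ)

rng = np.random.default_rng(0)
psi = rng.standard_normal(4) + 1j * rng.standard_normal(4)
psi /= np.linalg.norm(psi)               # stands in for E|psi>_L

ket0, ket1 = np.eye(2)
out = HA @ CZZ @ HA @ np.kron(ket0, psi)  # H, then controlled-Z1Z2, then H

expected = 0.5 * (np.kron(ket0, (I4 + ZZ) @ psi) +
                  np.kron(ket1, (I4 - ZZ) @ psi))
print(np.allclose(out, expected))
```

The check passes for any input state, which is exactly the claim: the circuit maps $E|\psi\rangle_L|0\rangle_A$ to the projector-weighted superposition above.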
{ "domain": "quantumcomputing.stackexchange", "id": 896, "tags": "quantum-gate, error-correction" }
What is the contribution of viruses to the evolution of mankind?
Question: I'm interested in horizontal gene transfer in bacteria, viruses, and organisms such as Bdelloid Rotifers. I've just read in Carl Zimmer's 'A Planet of Viruses' the following passage: As a host cell manufactures new viruses, it sometimes accidentally adds some of its own genes to them. The new viruses carry the genes of their hosts as they swim through the ocean, and they insert them, along with their own, into the genomes of their new hosts. By one estimate, viruses transfer a trillion trillion genes between host genomes in the ocean every year. It's interesting to consider the scale of DNA-swapping that has occurred given the frequency by which it happens and the evolutionary timescale. Are there any examples of genes in the human genome that we know were deposited by viruses that would have given an evolving human a physical/mental advantage? Where did they come from? What benefit did they provide? I'm interested in genetic additions from non-human-ancestor species, rather than the transfer of genes that occurred as mutations from other humans. Answer: The processes that control the germline of metazoans (multicellular animals) are highly regulated compared to those of single-celled bacteria and eukaryotes, as well as plants. At this point there are no clear stories of gene transfer into a complex animal, though there are some for plants: "animals and fungi seem to be largely unaffected, with a few exceptions, while lateral gene transfer frequently occurs in protists with phagotrophic lifestyles, possibly with rates comparable to prokaryotic organisms." Bacteria, fungi and plants are more permissive and more susceptible to gene transfer, and it probably is more important to their evolutionary path. It's been estimated that as much as eight percent of the human genome has been affected by viral integration. But viral genomes are highly selected against carrying non-essential material - other genes rarely come along for the ride, it seems. 
What is probably more influential is that viral insertions could participate in rewiring the regulatory network of animal cells, not adding genes, but modifying the conditions under which they are active.
{ "domain": "biology.stackexchange", "id": 384, "tags": "dna, virology, human-genetics, human-genome" }
(1+1)d collapsing null-shell?
Question: I am trying to understand the following Penrose diagram (from https://arxiv.org/abs/1507.03489) According to the authors, it is depicting the formation of a (1+1)d black hole from a collapsing null shell. But to me it looks like it is simply a null ray that moves from past minus infinity to future plus infinity. Why does a black hole form there? Answer: A collapsing null shell in $\text{(1+1)D}$ is two oncoming photons meeting each other in the center of the diagram. They move up in time on the diagram, one (the dotted line) from left to right; the other (the red line) from right to left. Once they meet, a black hole is formed initially at the intersection point (event) in the center of the diagram. Then the event horizon expands (as a continuation of the same two lines up on the diagram) at the speed of light. As the authors state, "the horizon is the future light cone of the [central] point". Therefore this black hole expands at the speed of light to consume more and more space until the entire space becomes an infinite line of the spacelike singularity (the wavy line) in the infinite future of external observers, but in a finite proper time of those falling through the horizon. A black hole is formed, because, when the photons meet, they create a concentration of energy in a small space, similarly to a shell in $\text{(3+1)D}$ collapsing to its Schwarzschild radius. However, note that simply writing the Einstein field equations in $\text{(1+1)D}$ does not work, as both the spacetime curvature and stress-energy tensor vanish. I am not sure what approach these authors employ, but often an alternative theory of gravity is used in $\text{(1+1)D}$, for example, the direct $\text{(1+1)D}$ analog of a theory of gravity in $\text{(3+1)D}$ proposed by Nordström in $\text{1913}$: https://en.wikipedia.org/wiki/Nordstr%C3%B6m%27s_theory_of_gravitation
{ "domain": "physics.stackexchange", "id": 92152, "tags": "general-relativity, black-holes, spacetime, event-horizon, causality" }
Can a program that requires feedback be considered an AI?
Question: If I create a program which takes an input, gives an output and then requires a response to let it know whether the answer it gave was any good, does it count as AI? If not, what is the process of AI? Does it not always need specific parameters? For example, I ask it "Who is the president of the USA?", and I have programmed it to look for news articles in SEOs and remove the "Who" part, is that AI? Answer: There is no "process of AI" as such. There are many, many different approaches to AI, different ones of which are used in specific applications. As to whether a purely trial and error approach could be considered AI... I'd offer up a qualified "maybe". If you do nothing but an exhaustive scan of the solution space, for every trial, then I'd say "No, it's not really any kind of AI". OTOH, if you're using a knowledge-base of some sort and applying some kind of reasoning (even if it's a heuristic), and if you have a system that somehow learns from the feedback from the user and gets "smarter" over time, then you're likely working on something that could be considered AI. All of that said, the exact definition of what is and isn't AI is somewhat fuzzy. One popular definition is something like "any technology that allows a computer to do something well that currently only humans can do well". So if you're doing something that fits that description, it's possibly an aspect of AI. And consider again that most people don't really consider "brute force" solutions to be AI.
{ "domain": "ai.stackexchange", "id": 995, "tags": "ai-design" }
How to populate pandas series w/ values from another df?
Question: I need help figuring out how to populate a series of one dataframe w/ specific values from another dataframe. Here's a sample of what I'm working with: df1 = pd.DataFrame({'Year':[1910, 1911, 1912], 'CA':[2.406, 2.534, 2.668], 'HI':[0.804, 0.821, 0.832]}) df2 = pd.DataFrame({'State':['CA', 'CA', 'CA', 'HI', 'HI'], 'Year':[1910, 1910, 1911, 1911, 1911]}) df2['Population'] = pd.Series() *I'm trying to populate df2['Population'] w/ the corresponding populations from df1 (i.e. the population of a specific state from a specific year) How can I do this? Answer: Here is one solution: df2['Population'] = df2.apply(lambda x: df1.loc[x['Year'] == df1['Year'], x['State']].reset_index(drop=True), axis=1) The idea is that for each row of df2 we use the Year column to tell us which row of df1 to access, and then State to select the column. Afterwards we reset the index of the result to prevent pandas from keeping the columns separate.
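An alternative to the row-wise apply, sketched below under the same sample data, is to reshape df1 from wide to long with melt and then merge on the shared keys; this is usually faster and easier to read for larger frames (variable name long_df1 is my own, for illustration):

```python
import pandas as pd

df1 = pd.DataFrame({'Year': [1910, 1911, 1912],
                    'CA': [2.406, 2.534, 2.668],
                    'HI': [0.804, 0.821, 0.832]})
df2 = pd.DataFrame({'State': ['CA', 'CA', 'CA', 'HI', 'HI'],
                    'Year': [1910, 1910, 1911, 1911, 1911]})

# Wide -> long: one row per (Year, State) pair with its population
long_df1 = df1.melt(id_vars='Year', var_name='State', value_name='Population')

# Left merge keeps every row of df2 and attaches the matching population
df2 = df2.merge(long_df1, on=['Year', 'State'], how='left')
```

Since each (Year, State) pair is unique in long_df1, the left merge cannot duplicate rows of df2.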
{ "domain": "datascience.stackexchange", "id": 3510, "tags": "pandas" }
Modularity of transcription factors
Question: I attended a seminar about neurogenesis that presented results for PAX6 as an important TF that contains 3 domains with very distinct patterns of downstream expression. The speaker ended up saying that PAX6 can be considered as 3 TFs in one protein, acting independently in different situations: Relative roles of the different Pax6 domains for pancreatic alpha cell development Does anybody know of more examples like this? Answer: The NF-κB family of transcription factors is very modular, with different combinations having different effects. The active (nuclear, DNA-bound) TF is a dimer, composed variously of RelA/p65, RelB, c-rel, NFKB1/p50, and/or NFKB2/p52 subunits. For example, the "canonical" p65/p50 dimer is activated in response to stimulants like TNF-α (tumor necrosis factor alpha, released in response to inflammatory signals like the presence of pathogens) and LPS (lipopolysaccharide from the cell walls of Gram-negative bacteria), while the RelB/p52 dimer plays an important role in the development of B cells (the immune system component that produces antibodies). The AP-1 transcription factor is also heterodimeric, containing proteins from the Jun, Fos, JDP, and ATF families, and there are numerous other examples of multimeric transcription factor complexes. This recent article in Nature Immunology (disclaimer: I was not involved in that research, but my old lab did similar work) shows DNA sequence-specific binding by different NF-κB dimers. Polymerism in general, and dimerism in particular, are quite common modes of transcriptional activation and regulation. The large number of ways in which a relatively small number of transcription factors can be combined allows for the exquisite control of genes, responding to a huge variety of cellular situations. Unfortunately, it also means that mutations in key, common components can result in transformation, unrestricted growth, and the generation of tumors.
{ "domain": "biology.stackexchange", "id": 386, "tags": "transcription, gene-regulation" }
C# Update account based on user order data
Question: The method works, but I would like to know if there is any way to make it more readable or optimized. I have user data (I want to import/update it). Accounts are found by user data. User order data is entered into the system. userData.Id - user Id; userData.OrderNumber - user order number. The "UpdateAccount" method below is used by the user. The user enters their Id and OrderNumber into the system (there are FirstName, LastName, etc., but they are irrelevant here because they are used in the UpdateAllUserDataToAccount method). Therefore, it is necessary to discover whether an account with the relevant data exists in the system. You can update account data if the account has userData.OrderNumber or userData.Id. If no account under userData.OrderNumber or userData.Id is found in the system then create a new account and update its data. Do not allow anything if userData.OrderNumber or userData.Id is in a different account (they cannot exist in the system in different accounts). An account can have userData.OrderNumber in the system and not userData.Id because the employee registered the parcel but did not have a user id. An account can have userData.Id in the system and not userData.OrderNumber because the user has previously been registered in the system. The user data userData.OrderNumber and userData.Id are unique and belong to only one user. bool _isSpecialData - to indicate that the user is special (set before this code). private void UpdateAccount(UserModel userData) { Account accountById = _accountController.GetAccountById(userData.Id); Account accountByNumber = _accountController.GetAccountByNumber(userData.Number); bool _isSpecialData = accountByNumber != null ? 
accountByNumber.Vip : false; if (accountById != null) { if (accountByNumber != null) { if (accountById.Id == accountByNumber.Id) { if (_isSpecialData) { AddPartUserDataToAccount(userData, accountByNumber); if (userData.Status == Blocked) return; } } else { _log.Error($"User data can not be in different accounts"); return; } } else { AddPartUserDataToAccount(userData, accountById); } } else { if (accountByNumber != null) { if (accountByNumber.Id == null) { accountByNumber.Id = userData.Id; if (_isSpecialData) { AddPartUserDataToAccount(userData, accountByNumber); if (userData.Status == Blocked) return; } } else { _log.Error($"accountByNumber.Id can be just with the same value as userData.Id or with null (because it was not set in first place)"); return; } } else { CreateNewAccount(userData); } } UpdateAllUserDataToAccount(userData); } Answer: Well, what I would do is create a model from the validation ifs. You have 3 paths in the if: when accountById is null and accountByNumber is null; when accountById is not null; and the else. Look at the code and see how the TypeCase property lets you identify each case from your if statement. private void UpdateAccount(UserModel userData) { var result = ProcessUpdateAccount(userData); if (result.HasError) _log.Error(result.ErrorText); if (result.AllowGlobalUpdate) UpdateAllUserDataToAccount(userData); } the main function public UpdateResult ProcessUpdateAccount(UserModel userData){ var firstAccount = _accountController.GetAccountById(userData.Id); var secondAccount = _accountController.GetAccountByNumber(userData.Number); if (firstAccount == null && secondAccount == null) { CreateNewAccount(userData); return new UpdateResult() { TypeCase = 1, Title = "new account", HasError = false, ErrorText = "", AllowGlobalUpdate = true }; } if (firstAccount != null) return FirstAccountPath(userData, firstAccount, secondAccount); //second account path return SecondAccountPath(userData, secondAccount); } so the accountById path public UpdateResult FirstAccountPath( UserModel userData, Account firstAccount, Account secondAccount){ if (secondAccount == null) { AddPartUserDataToAccount(userData, firstAccount); return new UpdateResult() { TypeCase = 2, Title = "First OK Second Not Exists", HasError = false, ErrorText = "", AllowGlobalUpdate = true }; } if (firstAccount.Id != secondAccount.Id) { return new UpdateResult() { TypeCase = 3, Title = "", HasError = true, ErrorText = "User data can not be in different accounts", AllowGlobalUpdate = false }; } if (secondAccount.Vip){ AddPartUserDataToAccount(userData, secondAccount); if (userData.Status == Blocked) return new UpdateResult() { TypeCase = 4, Title = "", HasError = false, ErrorText = "", AllowGlobalUpdate = false }; return new UpdateResult() { TypeCase = 5, Title = "VIP", HasError = false, ErrorText = "", AllowGlobalUpdate = true }; } //No case in code path return new UpdateResult() { TypeCase = 6, Title = "No Case in code Path", HasError = false, ErrorText = "", AllowGlobalUpdate = true }; } and finally public UpdateResult SecondAccountPath( UserModel userData, Account secondAccount){ if (secondAccount.Id != null) { return new UpdateResult() { TypeCase = 7, Title = "", HasError = true, ErrorText = "accountByNumber.Id can be just with the same value as userData.Id or with null (because it was not set in first place)", AllowGlobalUpdate = false }; } secondAccount.Id = userData.Id; if (secondAccount.Vip) { AddPartUserDataToAccount(userData, secondAccount); if (userData.Status == Blocked) return new UpdateResult(){ TypeCase = 8, Title = "", HasError = false, ErrorText = "", AllowGlobalUpdate = false }; return new UpdateResult() { TypeCase = 9, Title = "", HasError = false, ErrorText = "", AllowGlobalUpdate = true }; } //No case in code path return new UpdateResult() { TypeCase = 10, Title = "No Case in code Path", HasError = false, ErrorText = "", AllowGlobalUpdate = true }; } so the UpdateResult class public class UpdateResult { public int TypeCase {get;set;} public string Title {get;set;} public bool HasError {get;set;} public string ErrorText {get;set;} public bool AllowGlobalUpdate {get;set;} }
{ "domain": "codereview.stackexchange", "id": 39494, "tags": "c#, .net" }
Determining the equilibrium constant of a reaction
Question: I'm looking at the lecture slide of my thermodynamics class, and have a question about the equilibrium constant: for the reaction $$ \mathrm{SO_4^{2-} + 2H^+ \rightleftharpoons H_2SO_4}. $$ The equilibrium constant is given as $$ K(\tau)=\frac{[H^+]^2[SO_4^{2-}]}{[H_2SO_4]}. $$ I'm wondering why the concentration of product is in the denominator? I've seen a couple of examples about writing the equilibrium constant but I'm still confused about which side of the chemical reaction should be in the numerator and denominator. Answer: The concentration form of an equilibrium constant for a general reaction where activity coefficients are unity is defined as $$K_{EQ,C}^\star = \prod\ C_j^{\nu_j}$$ The terms $C_j$ are the molar concentrations of species $j$. The terms $\nu_j$ are the reaction coefficients. The reaction coefficients are the stoichiometric coefficients with negative signs for reactants and positive signs for products. For example, for N$_2$ + 3H$_2$ $\leftrightharpoons$ 2NH$_3$, we would write $$K_{EQ,C}^\star = C_{N_2}^{-1}\ C_{H_2}^{-3}\ C_{NH_3}^{2} $$ This approach results in a form leading to the common expression: products divided by reactants. This format is never violated as the definition of an equilibrium constant for a reaction.
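The products-over-reactants convention can be made concrete with a small numeric sketch of the ammonia example from the answer; the concentrations below are made-up values, purely for illustration:

```python
# K = prod(C_j ** nu_j), with nu_j negative for reactants, positive for products.
# Toy (made-up) molar concentrations for N2 + 3 H2 <=> 2 NH3:
conc = {'N2': 0.50, 'H2': 1.20, 'NH3': 0.30}
nu   = {'N2': -1,   'H2': -3,   'NH3': 2}

K = 1.0
for species, c in conc.items():
    K *= c ** nu[species]

# The same thing written out explicitly: products divided by reactants
K_explicit = conc['NH3']**2 / (conc['N2'] * conc['H2']**3)
```

Both forms agree by construction; the signed-exponent product is just the general way of writing "products over reactants".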
{ "domain": "physics.stackexchange", "id": 78600, "tags": "thermodynamics, physical-chemistry, equilibrium" }
Does the Higgs field also bend in gravitational space-time fields?
Question: I hope I'm formulating this correctly: if the Higgs field is responsible for generating particle mass by interaction, does it bend in/around the gravitational space-time of amassed particles "given" gravity by the Higgs field? If so, does this bending of the field change its "mass-giving" interaction properties? I'm not thinking about black holes per se, since they are a singularity, but for example the gravitational bend in space-time caused by stars. Thank you for taking the time to read or answer this question. Answer: The Higgs field is responsible for giving mass to the gauge bosons of the weak interaction and also to the massive particles in the standard model of particle physics. The macroscopic masses of composite particles like protons, neutrons and nuclei are mainly the result of the invariant masses of summed four vectors of the elementary particles that compose them; it is not just the sum of the masses of the particles, because special relativity reigns at that level. The Higgs field contribution to the mass is small. The proton mass is about a GeV, while the valence quarks add up to a few MeV. It is the sea of quarks and antiquarks and gluons with their four vectors that generates the measured proton mass (attempts at modeling with lattice QCD). It is at the classical level that masses can be summed and Archimedes' principle applied. At that level the Higgs field is not relevant. At cosmological times, once quantization of gravity and a unified theory is found, it might be reasonable to expect variations in the effect of the Higgs at symmetry breaking times with respect to the gravitational fields at that time, but after symmetry breaking the situation is stable, as far as lensing etc. goes. It is the classical mechanics masses that apply.
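The claim that the valence-quark masses are a tiny fraction of the proton mass is easy to check with rough numbers; the quark and proton masses below are approximate textbook-scale values, used only for illustration:

```python
# Approximate current-quark and proton masses in MeV (rough values)
m_u = 2.2        # up quark
m_d = 4.7        # down quark
m_proton = 938.3 # proton

# Proton = uud, so sum the valence current-quark masses
valence = 2 * m_u + m_d      # about 9 MeV
fraction = valence / m_proton
```

The fraction comes out around one percent, consistent with the answer's point that almost all of the proton's mass is QCD binding energy rather than a Higgs-generated quark mass.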
{ "domain": "physics.stackexchange", "id": 51655, "tags": "particle-physics, gravity, higgs" }
openni_camera error "bad parameter"
Question: I am able to use the openni_camera node with the command "roslaunch openni_camera openni_node.launch". However, if I try to use any other nodes that require openni_camera (such as openni_tracker or rviz), openni_camera fails. Previously, I had a different error that was solved, but I do not know if this new error is a result of the solution or not. This is what is output. The problem is in the last few lines. robot@peoplebot2:~$ roslaunch openni_camera openni_node.launch ... logging to /home/robot/.ros/log/dd0d89ac-a8ae-11e0-85d5-a088b4402bbc/roslaunch-peoplebot2-4636.log Checking log directory for disk usage. This may take awhile. Press Ctrl-C to interrupt Done checking log file disk usage. Usage is <1GB. started roslaunch server http://peoplebot2:56174/ SUMMARY ======== PARAMETERS * /rosdistro * /openni_node1/use_indices * /openni_node1/depth_registration * /openni_node1/image_time_offset * /openni_node1/depth_frame_id * /openni_node1/depth_mode * /openni_node1/debayering * /rosversion * /openni_node1/projector_depth_baseline * /openni_node1/rgb_frame_id * /openni_node1/depth_rgb_translation * /openni_node1/depth_time_offset * /openni_node1/image_mode * /openni_node1/shift_offset * /openni_node1/device_id * /openni_node1/depth_rgb_rotation NODES / openni_node1 (openni_camera/openni_node) kinect_base_link (tf/static_transform_publisher) kinect_base_link1 (tf/static_transform_publisher) kinect_base_link2 (tf/static_transform_publisher) kinect_base_link3 (tf/static_transform_publisher) auto-starting new master process[master]: started with pid [4650] ROS_MASTER_URI=http://localhost:11311 setting /run_id to dd0d89ac-a8ae-11e0-85d5-a088b4402bbc process[rosout-1]: started with pid [4663] started core service [/rosout] process[openni_node1-2]: started with pid [4675] process[kinect_base_link-3]: started with pid [4676] process[kinect_base_link1-4]: started with pid [4682] process[kinect_base_link2-5]: started with pid [4689] process[kinect_base_link3-6]: 
started with pid [4690] [ INFO] [1310052981.236117554]: [/openni_node1] Number devices connected: 1 [ INFO] [1310052981.236264275]: [/openni_node1] 1. device on bus 002:07 is a Xbox NUI Camera (2ae) from Microsoft (45e) with serial id 'B00362205337039B' [ INFO] [1310052981.240329298]: [/openni_node1] searching for device with index = 1 [ INFO] [1310052981.304115108]: [/openni_node1] Opened 'Xbox NUI Camera' on bus 2:7 with serial number 'B00362205337039B' [ INFO] [1310052981.336603405]: rgb_frame_id = '/openni_rgb_optical_frame' [ INFO] [1310052981.341586992]: depth_frame_id = '/openni_depth_optical_frame' terminate called after throwing an instance of 'openni_wrapper::OpenNIException' what(): virtual void openni_wrapper::OpenNIDevice::startImageStream() @ /tmp/buildd/ros-diamondback-openni-kinect-0.2.1/debian/ros-diamondback-openni-kinect/opt/ros/diamondback/stacks/openni_kinect/openni_camera/src/openni_device.cpp @ 158 : starting image stream failed. Reason: Bad Parameter sent to the device! [openni_node1-2] process has died [pid 4675, exit code -6]. log files: /home/robot/.ros/log/dd0d89ac-a8ae-11e0-85d5-a088b4402bbc/openni_node1-2*.log Originally posted by qdocehf on ROS Answers with karma: 208 on 2011-07-15 Post score: 0 Answer: no activity > 1 month, closing Originally posted by kwc with karma: 12244 on 2011-09-02 This answer was ACCEPTED on the original site Post score: 0
{ "domain": "robotics.stackexchange", "id": 6147, "tags": "kinect, openni-camera" }
Why does the neutron have an electric dipole moment?
Question: As I understand it, the neutron is believed to have an electric dipole moment (the so-called "$nEDM$"), the precise value of which hasn't been pinned down, although we've been able to place upper limits on its value, experimentally (see). In order for any "object" to have a dipole moment, there must be some separation between the object's positive and negative charge distributions. How could this be, though, when the neutron is an uncharged particle? What constitutes the charge distribution (are we simply talking about tiny differences due to the "location" of quarks within the neutron and, if so, does this mean that the proton, too, has a dipole moment)? Why do we have reason to believe a meaningful "$nEDM$" exists, and how is it possible for such a thing to exist? If the discussion needs to involve "technical" details and terminology to lead to a satisfactory answer, I'm not at all opposed to researching further as needed. Answer: $ \newcommand{\CP}{\mathit{CP}} \newcommand{\fm}{\text{fm}} \newcommand{\efm}{\,e\,\fm} $The current state of the art is that we predict the neutron has a nonzero electric dipole moment, but seventy years of efforts to measure the neutron's EDM have been consistent with zero at higher and higher precision. Dipole moments are measured with units of $\text{charge}\times\text{length}$, so a "natural unit" for the neutron's EDM would involve the fundamental charge $e$ and the neutron's radius, a femtometer. In these units the current upper limit is $d_\text{n} \lesssim 10^{-13}\efm$; that is, any nonzero dipole moment is contributing at the sub-part-per-trillion level of the scale you'd predict from dimensional analysis. It's worth taking a minute to think about just how small that is. If you wanted to create a dipole moment this size with two hypothetical unit charges in the neutron's core, a convenient visualization is that $10^{-13}$ times the Earth's radius gives the wavelength of red light. 
This is a visualization used by people who think of the EDM as a property of the shape of the neutron's charge distribution. Or suppose you wanted to think in terms of Feynman diagrams for vacuum polarization, where an electrically neutral particle spends part of its time as a virtual pair of charged particles. You would probably have to draw at least ten trillion such diagrams for the neutron (including diagrams for the electromagnetic, strong, and weak interactions) before you found one that made a net contribution to the neutron's EDM, because those diagrams involve particles with unit-scale charges and neutron-sized lengths. In order for any "object" to have a dipole moment, there must some separation between the object's positive and negative charge distributions. How could this be, though, when the neutron is an uncharged particle? The neutron has zero net charge, but it has a nontrivial charge structure which is best explained using the quark model. The most obvious consequence of the neutron's internal charge structure is its nonzero magnetic dipole moment, whose natural unit $\mu_N = e\hbar / 2m_\text{nucleon}$ correctly sets the scale. Note that the value of the nuclear magneton is $\mu_N \approx \frac{1}{10}\efm \cdot c$, so another way to point out the astonishing smallness of the neutron's EDM is to slip into $c=1$ language and say that $d_\text{n}$ is at least a trillion times smaller than $\mu_\text{n}$. The reason for the shocking asymmetry between the electric and magnetic sectors is that $\vec d_\text{n}$ and $\vec \mu_\text{n}$ behave differently under $\CP$ transformations. The nucleon is the ground state of quantum chromodynamics, so all of QCD's symmetries correspond to good quantum numbers for the nucleon. The electric monopole and magnetic dipole are $\CP$-even, so nucleons can have nonzero values for those moments. 
The magnetic monopole and the electric dipole are $\CP$-odd, so only a hypothetical $\CP$-odd particle could have nonzero values for those moments at the "natural" scale. (This related answer begins with an important clarification.) Why do we have reason to believe a meaningful neutron EDM exists? The $\CP$ transformation is a mathematical procedure which transforms our model of a matter particle into our model of its antimatter partner. When we say "this theory is symmetric under $\CP$," we mean, explicitly or implicitly, that the theory predicts matter and antimatter will evolve the same way under the same conditions. We therefore have experimental evidence that our actual universe is not symmetric under $\CP$: our universe is full of matter baryons, but antimatter baryons occur only rarely and briefly. And in the spirit of Gell-Mann's totalitarian principle, if there is $\CP$ violation anywhere in the universe, then there is $\CP$ violation in the neutron. It leaks in by the same mechanism as do corrections to the magnetic dipole moment, a mechanism which was already mentioned above in the context of "vacuum polarization." If we have the same vacuum today that we had during baryogenesis, then whatever $\CP$ violation gave us a matter-filled universe is still happening deep down in the belly of a neutron interacting with an electric field. And if we can detect that $\CP$ violation in the neutron, it will tell us something about the baryogenesis epoch. (Here are some related answers I've written about using the same trick to study the $P$-violating weak interaction.) We have already discovered some violation of $\CP$ symmetry in the Standard Model. That known $\CP$ violation corresponds to a Standard Model prediction for the neutron's EDM of $d_\text{n}^\text{SM} \sim 10^{-17}\efm$, which is four orders of magnitude below the current limit. 
However, the current Standard Model also underpredicts the $\CP$-violating baryon asymmetry of the universe, and by a comparable factor. This makes the current generation of nEDM experiments very interesting, whether they finally see a nonzero effect or whether they continue to set more-stringent upper limits. How is it possible for such a thing to exist? We don't know. We know the mechanics. The net-neutral neutron has a complicated electric charge distribution, which can be polarized and which could in principle be polarized permanently. (This is a slightly different argument than for the net-charged proton and electron, whose permanent EDMs are forbidden by the same symmetry arguments but for which an EDM could be modeled as a displacement between the charge distribution and the mass distribution.) But we don't know the details. The community which measures neutron EDMs like to think of themselves as "theory killers," because there are dozens of interesting ideas about physics which have been abandoned for predicting unphysically large neutron EDMs. Next on the chopping block is supersymmetry; a good fraction of the phase space for supersymmetric $\CP$ violation has already been ruled out by the continued non-observation of a permanent neutron electric dipole moment.
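The "Earth's radius times $10^{-13}$" visualization earlier in the answer is easy to verify with a one-line arithmetic check; the mean Earth radius is taken as 6.371×10⁶ m:

```python
earth_radius_m = 6.371e6          # mean Earth radius in meters
scaled_m = earth_radius_m * 1e-13 # shrink by the EDM suppression factor

# Convert to nanometers; red light is roughly 620-750 nm
wavelength_nm = scaled_m * 1e9
```

The result is about 640 nm, squarely in the red part of the visible spectrum, as the answer states.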
{ "domain": "physics.stackexchange", "id": 76906, "tags": "particle-physics, neutrons, dipole, dipole-moment" }
Exists Method Implementation for Multidimensional Array in C#
Question: To determine whether the specified array contains a specific element or not, Array.Exists can be used if the given array is one-dimensional. I am attempting to implement an Exists method for multidimensional array cases. The experimental implementation static class MDArrayHelpers { public static bool Exists<T>(Array array, Predicate<T> match) where T : unmanaged { if (ReferenceEquals(array, null)) { throw new ArgumentNullException(nameof(array)); } Type elementType = array.GetType().GetElementType(); if (!elementType.Equals(typeof(T))) { throw new System.InvalidOperationException(); } foreach (var element in array) { if (match((T)element)) { return true; } } return false; } } Test cases Predicate<int> isOne = delegate (int number) { return number == 1; }; Predicate<int> isFour = delegate (int number) { return number == 4; }; Console.WriteLine("One dimensional case"); int[] array1 = new int[] { 1, 2, 3 }; Console.WriteLine($"Is one existed in {nameof(array1)}: {Array.Exists(array1, isOne)}"); Console.WriteLine($"Is four existed in {nameof(array1)}: {Array.Exists(array1, isFour)}"); Console.WriteLine(""); Console.WriteLine("Two dimensional case"); int[,] array2 = { { 0, 1 }, { 2, 3 } }; Console.WriteLine($"Is one existed in {nameof(array2)}: {MDArrayHelpers.Exists(array2, isOne)}"); Console.WriteLine($"Is four existed in {nameof(array2)}: {MDArrayHelpers.Exists(array2, isFour)}"); Console.WriteLine(""); Console.WriteLine("Three dimensional case"); int[,,] array3 = { { { 0, 1 }, { 2, 3 } }, { { 0, 1 }, { 2, 3 } } }; Console.WriteLine($"Is one existed in {nameof(array3)}: {MDArrayHelpers.Exists(array3, isOne)}"); Console.WriteLine($"Is four existed in {nameof(array3)}: {MDArrayHelpers.Exists(array3, isFour)}"); Console.WriteLine(""); Console.WriteLine("Four dimensional case"); int[,,,] array4 = { { { { 0, 1 }, { 2, 3 } }, { { 0, 1 }, { 2, 3 } } }, { { { 0, 1 }, { 2, 3 } }, { { 0, 1 }, { 2, 3 } } } }; Console.WriteLine($"Is one existed in {nameof(array4)}: 
{MDArrayHelpers.Exists(array4, isOne)}"); Console.WriteLine($"Is four existed in {nameof(array4)}: {MDArrayHelpers.Exists(array4, isFour)}"); The output of the test code above: One dimensional case Is one existed in array1: True Is four existed in array1: False Two dimensional case Is one existed in array2: True Is four existed in array2: False Three dimensional case Is one existed in array3: True Is four existed in array3: False Four dimensional case Is one existed in array4: True Is four existed in array4: False If there is any possible improvement, please let me know. Answer: You can simply use the LINQ Enumerable.Any extension method. bool result = array4.Cast<int>().Any(x => x == 1); This works for any collection implementing IEnumerable<T> and also for enumerations created algorithmically by C# Iterators. According to the C# Programming Guide (Arrays): Array types are reference types derived from the abstract base type Array. Since this type implements IEnumerable and IEnumerable<T>, you can use foreach iteration on all arrays in C#. This statement is, however, misleading, since we need to cast the array here. Obviously, we only have access to the non-generic interface IEnumerable. LINQ Extension methods found in the Enumerable Class apply to IEnumerable<T> or IEnumerable.
{ "domain": "codereview.stackexchange", "id": 40928, "tags": "c#, array, generics, reflection" }
Parameterization of an arbitrary element of $U(2)_L \times U(2)_R$ (Chiral symmetry with two quarks)
Question: When you write down the Lagrangian for two quarks: \begin{equation} \mathcal{L}_\text{QCD}^0 = -\frac{1}{4} G_{\mu\nu}^a G^{a\mu\nu}+ \bar\Psi i \gamma^\mu D_\mu \Psi \end{equation} you find a $U(2)_L \times U(2)_R$ global symmetry because you can rewrite it: \begin{equation} \mathcal{L}_\text{QCD}^0 = -\frac{1}{4} G_{\mu\nu}^a G^{a\mu\nu}+ \mathcal{L}_\text{QCD}^L+\mathcal{L}_\text{QCD}^R \end{equation} with $\mathcal{L}_\text{QCD}^{L,R} = \bar\Psi_{L,R} i \gamma^\mu D_\mu \Psi_{L,R}$ An arbitrary element of $U(2)_L \times U(2)_R$ can be written: \begin{equation} (g_L, g_R) = \left(e^{ i \gamma +i \gamma^i \frac{\sigma_i}{2}},e^{ i \delta +i \delta^i \frac{\sigma_i}{2} }\right) \end{equation} where the $\sigma_i$s are the Pauli matrices. But you could, in principle, rewrite this element as: \begin{equation} (g_L, g_R) = \left(e^{ i \alpha} e^{ i \beta } e^ {i \alpha^i \frac{\sigma_i}{2}} e^{i \beta^i \frac{\sigma_i}{2}},e^{ i \alpha} e^{ - i \beta } e^{i \alpha^i \frac{\sigma_i}{2}} e^{-i \beta^i \frac{\sigma_i}{2}}\right) \end{equation} That expression shows that one can factor two $U(1)$s and obtain: \begin{equation} U(2)_L \times U(2)_R = SU(2)_L \times SU(2)_R \times U(1)_V \times U(1)_A \end{equation} What I don't understand is how to obtain explicitly the second expression of $(g_L, g_R)$ starting from the first one. Answer: Let's see what relation we can find between $\alpha, \beta, \alpha^i, \beta^i$ and $\gamma, \delta, \gamma^i, \delta^i$. First, using the Baker-Campbell-Hausdorff lemma, we deduce two things: $$\alpha + \beta = \gamma \text{ and } \alpha - \beta = \delta$$ because $\mathbb{1}$ commutes with $\sigma$. 
And $$e^{i\vec{\alpha}\cdot \vec{\sigma}} = \mathbb{1}\text{cos }\alpha + i\sigma\cdot \hat{\alpha}\,\text{sin } \alpha$$ The latter gives us that $$ e^{i\alpha\cdot \sigma}e^{\pm i\beta\cdot \sigma} =\left( \mathbb{1}\text{cos }a + i \sigma\cdot\hat{a}\,\text{sin } a\right)\left(\mathbb{1}\text{cos }b \pm i \sigma\cdot\hat{b}\,\text{sin } b\right) $$ $$ = \left(\text{cos } a \text{ cos b} \mp (\sigma\cdot \hat{a})(\sigma\cdot\hat{b})\text{ sin } a \text{ sin }b \right) + i \left(\sigma\cdot\hat{a}\text{ sin } a\text{ cos }b \pm \sigma \cdot\hat{b}\text{ sin } b\text{ cos }a \right) $$ $$ = e^{i\gamma\cdot\sigma} \text{ or } e^{i\delta\cdot\sigma} $$ Then you verify the ansatz suggested by user40085; $\vec{a} = (\vec{\gamma}+ \vec{\delta})/2$ and $\vec{b} = (\vec{\gamma}- \vec{\delta})/2$
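The closed-form identity $e^{i\vec{\alpha}\cdot\vec{\sigma}} = \mathbb{1}\cos\alpha + i(\hat{\alpha}\cdot\vec{\sigma})\sin\alpha$ used in this answer can be checked numerically. A minimal sketch (plain Python, Taylor-series matrix exponential; the vector components are arbitrary test values, not from the question):

```python
import math

# Pauli matrices and the identity, as 2x2 tuples of rows
I2 = ((1, 0), (0, 1))
SX = ((0, 1), (1, 0))
SY = ((0, -1j), (1j, 0))
SZ = ((1, 0), (0, -1))

def madd(A, B):
    return tuple(tuple(x + y for x, y in zip(ra, rb)) for ra, rb in zip(A, B))

def mscale(c, A):
    return tuple(tuple(c * x for x in row) for row in A)

def mmul(A, B):
    return tuple(tuple(sum(A[i][k] * B[k][j] for k in range(2))
                       for j in range(2)) for i in range(2))

def mexp(A, terms=40):
    """Matrix exponential by truncated Taylor series (fine for small 2x2 input)."""
    out, term = I2, I2
    for n in range(1, terms):
        term = mscale(1 / n, mmul(term, A))
        out = madd(out, term)
    return out

def closed_form(ax, ay, az):
    """cos|a| * I + i sin|a| * (a_hat . sigma) -- the identity quoted above."""
    a = math.sqrt(ax * ax + ay * ay + az * az)
    H = madd(madd(mscale(ax, SX), mscale(ay, SY)), mscale(az, SZ))
    return madd(mscale(math.cos(a), I2), mscale(1j * math.sin(a) / a, H))

# exp(i * a.sigma) computed both ways should agree entry by entry
ax, ay, az = 0.3, -0.7, 0.5
H = madd(madd(mscale(ax, SX), mscale(ay, SY)), mscale(az, SZ))
series = mexp(mscale(1j, H))
closed = closed_form(ax, ay, az)
```

The identity holds because $(\hat{\alpha}\cdot\vec{\sigma})^2 = \mathbb{1}$, so the exponential series splits into a cosine and a sine part.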
{ "domain": "physics.stackexchange", "id": 20386, "tags": "homework-and-exercises, symmetry, quantum-chromodynamics, quarks, chirality" }
How to determine which car/truck will pull given their acceleration, top speed etc?
Question: So let's say two cars were having a tug of war, pulling each other, and we know all about the cars: their top speed, acceleration, towing capacity and all. How can we determine which car will pull which, and at what speed? I hope this is the right forum to ask this question. I'm looking for an equation. Answer: I think there are an awful lot of variables here. There’s the maximum torque that can be generated at the drive wheel(s). There’s the traction rating of the tires. There’s the weight of the vehicle: the greater the weight force, the greater the maximum static friction force between the tires and the road surface (maximum force without slipping), all other things being equal. Overall, given the above and maybe additional variables, I think the vehicle that can develop the most torque at the drive wheel(s) without slipping will be the winner. Hope this helps.
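The answer's main point — whoever can sustain the larger tire force without slipping wins — reduces to a back-of-the-envelope comparison. A sketch, assuming all driven wheels carry the vehicle's weight and the engine torque is not the limit (the masses and friction coefficients below are made up for illustration):

```python
G = 9.81  # m/s^2

def max_pull_force(mass_kg, mu):
    """Largest force the tires can transmit before slipping: F = mu * m * g
    (all-wheel drive, level ground, traction-limited rather than torque-limited)."""
    return mu * mass_kg * G

# hypothetical vehicles
truck = max_pull_force(3000, 0.7)  # heavy truck, ordinary tires
car = max_pull_force(1200, 0.9)    # lighter car, stickier tires
```

Here the truck wins despite the car's better tires, because weight dominates the friction limit.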
{ "domain": "physics.stackexchange", "id": 63058, "tags": "newtonian-mechanics, forces" }
Complex organic molecules
Question: I am studying astronomy and came across the following term in the astrochemistry course called 'complex organic molecules' or also written as COMs. My question is: What is exactly meant with these molecules? Is it just a molecule with more than one carbon atom? Answer: tl;dr: two different definitions. Astronomy: multiple carbon atoms in molecule. Chemistry: polymer Interestingly enough, after reading about COMs here, as well as reading the Wikipedia page and the corresponding arXiv paper, it seems like chemists and astronomers have different definitions of what a complex organic molecule should be! As far as I knew, in chemistry complex organic molecules were long polymers, such as proteins, which were composed of thousands upon thousands of amino acid units. In the astronomy paper, however, they cite other types of molecules. $\ce{CH3OH, CH3CHO, HCOOCH3 and CH3OCH3}$, all cited as "complex" (haha) organic molecules in the paper, would appear to chemists as relatively simple molecules. (I read the paper, because it piqued my interest that something like a protein could be found in space). I then read the Springer article. The term “complex organic molecules” is used differently in astronomy and chemistry. In astronomy, complex organic molecules are molecules with multiple carbon atoms such as benzene and acetic acid. These molecules have been detected in interstellar space with radio telescopes. In chemistry, “complex organic molecules” refer to polymer-like molecules such as proteins.
{ "domain": "chemistry.stackexchange", "id": 11244, "tags": "organic-chemistry, molecular-structure, molecules" }
Robot won't work properly, claw will only go one way and will only open or close once per cycle
Question: We're students trying to make a clawbot for a Science Seminar class. However, for some reason, whenever we try to move the arm or the claw in a certain way, it will lock up and only move that direction. Code attached. Please help. #pragma config(Motor, port1, frWheel, tmotornormal, openLoop, reversed) //Setting up the motors #pragma config(Motor, port5, brWheel, tmotornormal, openLoop, reversed) #pragma config(Motor, port3, flWheel, tmotornormal, openLoop) #pragma config(Motor, port4, blWheel, tmotornormal, openLoop) #pragma config(Motor, port10, Arm, tmotornormal, openLoop) #pragma config(Motor, port6, Claw, tmotornormal, openLoop) task main() { int a = 0; //Arm integer int c = 0; //Claw integer while(true) { motor[frWheel] = vexRT(Ch2); //Wheels motor[brWheel] = vexRT(Ch2); motor[flWheel] = vexRT(Ch3); motor[blWheel] = vexRT(Ch3); if(a >= -30 && a <= 30) { if(vexRT[Btn8D] == 1) //If arm down button pressed... { motor[Arm] = --a; //then arm will go down. } else if(vexRT[Btn8U] == 1) { motor[Arm] = ++a; } else(vexRT[Btn8U] == 0 && vexRT[Btn8D] == 0); { motor[Arm] = a; } } else { } if(c <= 30 && c >= -30) { if(vexRT[Btn7U] == 1) //If claw up button pressed... { motor[Claw] = ++c; //Claw will open. } else if(vexRT[Btn7D] == 1) { motor[Claw] = --c; } else(vexRT[Btn7D] == 0 && vexRT[Btn7U] == 0); { motor[Claw] = c; } } else { } } } Answer: Your conditional logic needs to be improved. Consider the following scenario: a is initialized to zero. The operator presses Btn8D (arm down); a is decremented to -1, then -2, etc. If the button continues to be pressed, a eventually reaches -30. Depending on the loop rate, this may happen very quickly. When a == -30 the statement if(a >= -30 && a <= 30) is still true, so a is decremented to -31. Recovery is not possible because the conditional will never be true again. Your code needs to handle this corner condition. Also, when is the vexRT[] array updated?
If it is not continuously updated with the values of the button presses, then you will need to update it at the top of your while loop.
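The corner condition the answer describes can be avoided by clamping before stepping, rather than gating the whole block on the old value. A hedged sketch of the idea in Python (the RobotC translation is analogous; `btn_up`/`btn_down` stand in for the joystick buttons):

```python
def step_arm(a, btn_up, btn_down, lo=-30, hi=30):
    """Step the arm counter only while the result stays inside [lo, hi].
    Unlike gating on the old value, a counter pinned at lo can still
    recover upward on the next press."""
    if btn_up and a < hi:
        a += 1
    elif btn_down and a > lo:
        a -= 1
    return a

# drive to the lower limit, then recover
a = 0
for _ in range(100):
    a = step_arm(a, btn_up=False, btn_down=True)
a_after_down = a                                   # pinned at -30, never -31
a_recovered = step_arm(a, btn_up=True, btn_down=False)
```

With this structure the counter can never leave the valid range, so the "stuck at -31" lockup cannot occur.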
{ "domain": "robotics.stackexchange", "id": 1262, "tags": "robotic-arm, wheeled-robot, c, vex" }
What happens to bones that do not heal?
Question: I would like to know the steps the body takes to heal a broken bone and what happens if it cannot heal properly. I am not looking for advice or explanations on resting and not doing activity, I am genuinely interested in understanding the outcome if activity is not stopped. Here is the scenario: A rock climber develops a pain in the lower part of their middle finger. The pain is not so bad so the climber continues to climb for four weeks, but the pain has not gone away. The climber decides to go to the doctor fearing they have an A2 Pulley tear. To their surprise the Doctor's tests show no A2 pulley tear but rather a minor stress fracture of the Proximal Phalanx. The doctor says stop climbing, however the climber has several trips planned and does not experience any pain while climbing nor a loss of strength or power while climbing. The only time pain is felt is when the lower part of the middle finger is squeezed. Because of this the climber decides to continue climbing as they just love it so much. Question: What will happen to a stress fracture in the finger if adequate rest is not given? I understand the first thing the body does is try to stabilize the break and then starts to repair. What happens if it is continually used and not given the chance to properly heal? Will the climber just have a sore spot on their finger for the rest of their life or will it progressively get worse and worse? Answer: Fracture healing occurs in several steps: haemorrhage: blood and surrounding cells fill the space created by the fracture. fibrous callus: chondrocytes colonize the fracture space, with neovascularization.
bony callus: osteocytes colonize the fracture space and rearrange in woven bone remodelling: in long bones, woven bone is remodelled into lamellar bone and creates a new haversian system If the fracture is not immobilized enough (not enough rest), several things can happen that are mutually exclusive: normal healing: luckily, the bone is not too much mobilized and is able to heal normally fracture displacement: the bone heals, but in an incorrect alignment. This gets partially corrected over time, but the correction mechanism is limited, especially in adults. pseudarthrosis: the worst case. instead of normal healing, cartilage grows to protect the bony extremities that are mobilized and prevents further healing. This causes pain, and will require surgery in most cases.
{ "domain": "biology.stackexchange", "id": 2665, "tags": "human-anatomy, bone-biology" }
Is it possible to get the source code for a research paper?
Question: Is it possible to get the source code for a research paper? In particular, I want to see the source for this paper https://arxiv.org/abs/2304.03442 Answer: Generally speaking, authors sometimes add a link to an official implementation of their code in their papers or in the Arxiv code section. Otherwise it is worthwhile to check out this website https://paperswithcode.com or do a search with the paper's name on GitHub. For your specific paper I did not find an implementation on paperswithcode, but on GitHub there are plenty of unofficial implementations. https://github.com/search?q=Generative%20Agents%3A%20Interactive%20Simulacra%20of%20Human%20Behavior&type=repositories
{ "domain": "ai.stackexchange", "id": 3856, "tags": "research, resource-request, open-source" }
Lorentz Force Dimension Analysis
Question: A charged particle in the presence of charges and currents experiences a force due to electric (E) and magnetic (B) fields. It is described by the Lorentz force: $$F = e (E + v × B )$$ where e is the charge of the particle and v is the instantaneous velocity of the particle. If we use the units of the International System and do unit analysis, how can we conclude that in the end it has units of force (Newton)? Answer: As we expand the relation we get $F = eE + evB$, where $e$ = charge = $[M^0L^0T^1A^1]$ and $E$ = Electric Field = $[M^1L^1T^{-3}A^{-1}]$. Multiplying, $eE$ = $[M^1L^1T^{-2}A^0]$, which is the dimension of force. Similarly, v = velocity = $[M^0L^1T^{-1}A^0]$ and B = Magnetic Field = $[M^1L^0T^{-2}A^{-1}]$. Multiplying, $evB$ = $[M^1L^1T^{-2}A^0]$. Thus, we prove that the Lorentz force $F=e(E+vB)$ has the dimension Newton $(N)$.
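The bookkeeping above can be mechanized: multiplying quantities adds the exponents of the (M, L, T, A) base dimensions. A minimal sketch of the same check:

```python
def dim_mul(*dims):
    """Multiplying quantities adds their (M, L, T, A) dimension exponents."""
    return tuple(sum(exps) for exps in zip(*dims))

# dimension exponents as (M, L, T, A), matching the answer above
CHARGE = (0, 0, 1, 1)      # e : [T A]
E_FIELD = (1, 1, -3, -1)   # E : [M L T^-3 A^-1]
VELOCITY = (0, 1, -1, 0)   # v : [L T^-1]
B_FIELD = (1, 0, -2, -1)   # B : [M T^-2 A^-1]
FORCE = (1, 1, -2, 0)      # N : [M L T^-2]
```

Both terms of the Lorentz force come out with the exponents of force, as claimed.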
{ "domain": "physics.stackexchange", "id": 70409, "tags": "electromagnetism, units, dimensional-analysis" }
Are there two different spinors for the same spin state?
Question: Let's say $ \begin{bmatrix} 1\\ 0\\ \end{bmatrix} $ and $ \begin{bmatrix} 0\\ 1\\ \end{bmatrix} $ are the eigenvectors of $\hat S_z$; is the state $ -1\begin{bmatrix} 1\\ 0\\ \end{bmatrix} +0\begin{bmatrix} 0\\ 1\\ \end{bmatrix} = \begin{bmatrix} -1\\ 0\\ \end{bmatrix} $ the same spin state as $ \begin{bmatrix} 1\\ 0\\ \end{bmatrix} $? If yes, why do we have two different spinors to indicate the same state? Answer: Yes they are, and not only those two but there are infinitely many vectors representing the same physical state. Don't forget that physical states in QM are represented by rays in Hilbert space. So in general, any state $|\psi\rangle$ represents the same physical situation as $e^{i\phi}|\psi\rangle$. In your case you are just picking the particular case of $\phi = \pi$.
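That $|\psi\rangle$ and $e^{i\phi}|\psi\rangle$ are physically identical can be seen by computing an expectation value for both spinors from the question. A small sketch (with $\hbar = 1$, pure stdlib):

```python
def expval_sz(psi):
    """<psi|Sz|psi> / <psi|psi> for a 2-component spinor, with hbar = 1,
    using Sz = diag(1/2, -1/2) in the eigenbasis above."""
    a, b = psi
    norm = abs(a) ** 2 + abs(b) ** 2
    return 0.5 * (abs(a) ** 2 - abs(b) ** 2) / norm

up = [1, 0]         # Sz eigenvector with eigenvalue +1/2
minus_up = [-1, 0]  # the same ray, rescaled by the phase e^{i*pi} = -1
```

Any observable built this way is insensitive to the overall phase, which is exactly why both spinors describe one state.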
{ "domain": "physics.stackexchange", "id": 66878, "tags": "quantum-mechanics, hilbert-space, angular-momentum, quantum-spin, spinors" }
Is the product between proportionality constant and vector defined?
Question: $$ \delta {U} = - \int F \cdot ds = -k \int s \cdot ds = -1/2 ks^2 \tag{i}$$ In ($i$), is there a dot product between the spring constant $k$ and the deviation $s$? Correct me if I am wrong. Answer: No, $k$ is a scalar quantity so a dot product between it and a vector is undefined. It's just normal multiplication. Proper notation for what you wrote above is actually: $$ \delta {U} = - \int \vec{F} \cdot d\vec{s} = -k \int\vec{s} \cdot d\vec{s} = -\frac{1}{2} ks^2 \tag{i}$$ (assuming that s and ds are in the same direction). That should make clearer where the actual dot product occurs.
{ "domain": "physics.stackexchange", "id": 81040, "tags": "homework-and-exercises, newtonian-mechanics, potential-energy, vectors" }
What does a wing do that an engine can't?
Question: This isn't a question of how a wing works -- vortex flow, Bernoulli's principle, all of that jazz. Instead, it's a question of why we need a wing at all. A wing produces lift, but why is that necessary? I got to this by thinking of an airplane at a coarse level. The wing produces lift through some interesting physics, but it needs energy to do this. The engine is what ultimately provides all of this energy (let's assume no headwind, and in "ultimately" I'm not including chemical energy in the fuel, yadda yadda "it all comes from the sun"). That means the engine pushes enough air, and fast enough, to (a) offset gravity and (b) still propel the plane forward. So the question is: why can't we just angle the engine down a bit and get the same effect? To slightly reword: why do wings help us divert part of an engine's energy downward in a way that's more efficient than just angling the engine? One answer is that we can do exactly that; I'm guessing it's what helicopters and VTOL airplanes like the Harrier do. But that's less efficient. Why? One analogy that comes to mind is that of a car moving uphill. The engine doesn't have the strength to do it alone, so we use gears; for every ~2.5 rotations the engine makes, the wheel makes one, stronger rotation. This makes intuitive sense to me: in layman's terms, the gears convert some of the engine's speed-energy into strength-energy. Is this analogy applicable -- is the wing on a plane like the gearbox in my transmission? And if so, what's the wing doing, more concretely? If a gear converts angular speed to increased force, what X does a wing convert to what Y? None of the answers I could guess at satisfied my intuition. If the wing converts horizontal speed to vertical speed, tipping the engine downward would seem to have the same effect. 
If it's changing the volume/speed of the air (more air blown slower, or less air blown faster), it would still have to obey the conservation of energy, meaning that the total amount of kinetic energy of the air is the same -- again suggesting that the engine could just be tipped down. EDIT In thinking about this more from the answers provided, I've narrowed down my question. Let's say we want a certain amount of forward force $S$ (to combat friction and maintain speed) and a certain amount of lift $L$ (to combat gravity and maintain altitude). If we tilt our engine, the forces required look like this: The total amount of force required is $F = \sqrt{S^2 + L^2}$. That seems pretty efficient to me; how can a horizontal engine + wing produce the same $S$ and $L$ with a smaller $F'$? Answer: Let's look at the relationship between momentum and energy. As you know, for a mass $m$ kinetic energy is $\frac12mv^2$ and momentum is $mv$ - in other words energy is $\frac{p^2}{2m}$ Now to counter the force of gravity we need to transfer momentum to the air: $F\Delta t = \Delta(mv)$ The same momentum can be achieved with a large mass, low velocity as with small mass, high velocity. But while the momentum of these two is the same, THE ENERGY IS NOT. And therein lies the rub. A large wing can "move a lot of air a little bit" - meaning less kinetic energy is imparted to the air. This means it is a more efficient way to stay in the air. This is also the reason that long thin wings are more efficient: they "lightly touch a lot of air", moving none of it very much. Trying to replicate this efficiency with an engine is very hard: you need compressors for it to work at all (so you can mix air with fuel and have the thrust come out the back) and this means you will have a small volume of high velocity gas to develop thrust. That means a lot of energy is carried away by the gas. Think about the noise of an engine - that's mostly that high velocity gas. 
Now think of a glider: why is it so silent? Because a lot of air moves very gently. I tried to stay away from the math but hope the principle is clear from this.
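The momentum-versus-energy argument can be made concrete: for a fixed downward momentum transfer $p$, the kinetic energy left in the air is $p^2/2m$, so moving more mass more slowly costs less energy. A quick numerical sketch (the masses and momentum below are made-up illustration values, not aircraft data):

```python
def energy_for_momentum(p, m):
    """Kinetic energy E = p^2 / (2m) carried away by air of mass m
    that received total momentum p."""
    return p ** 2 / (2 * m)

P = 1000.0                           # same momentum transfer in both cases
wing = energy_for_momentum(P, 500)   # wing: a lot of air, moved gently
jet = energy_for_momentum(P, 5)      # tilted engine: little air, moved fast
```

Same momentum, a hundredfold difference in wasted energy — which is the answer's point about why the wing is the more efficient way to push air down.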
{ "domain": "physics.stackexchange", "id": 81012, "tags": "aerodynamics, aircraft, lift" }
Redox potentials in photosynthesis light dependent stage
Question: In my lecture notes, it states ...there is a significant thermodynamic problem due to the respective redox potentials of the half reactions: H2O<--> 1/2 O2 + 2H+ +2e- pE=+0.82V NADP+ +2H+ +2e- <--> NADPH +H+ pE=-0.32V Could someone please explain what the 'pE' is? I thought it would be just ordinary redox potential, however the redox potential for water is apparently +1.23V not 0.82V. I cannot find anywhere a source stating the figures given above or what they mean (I have also consulted Biochemistry, Stryer et al). Could it have something to do with this being the potential in the conditions present in a typical chloroplast as opposed to standard conditions? Answer: You are right about the conditions in the chloroplast stroma vs. standard conditions. The pH is high (i.e., low H+) in the chloroplast compartment where the reaction takes place (within the stroma), so the pE is shifted from the nominal "pE0", which is +1.23V (effectively you push the equation to the right by low concentration of H+). See this for a textbook-like summary of the important reactions and this for a quick problem on the math behind pE in different conditions, as well as this for more on the stroma.
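The pH shift the answer describes follows from the Nernst equation: for $\tfrac{1}{2}\mathrm{O_2} + 2\mathrm{H^+} + 2e^- \rightarrow \mathrm{H_2O}$, with two protons per two electrons, the potential drops by about 59 mV per pH unit from the standard +1.23 V. A sketch (25 °C, ideal behavior assumed):

```python
def o2_h2o_potential(ph, e0=1.23, slope=0.0592):
    """Nernst-corrected potential for 1/2 O2 + 2H+ + 2e- -> H2O:
    E = E0 - 0.0592 * pH at 25 C (two H+ and two e- give a
    slope of 0.0592 V per pH unit)."""
    return e0 - slope * ph

e_ph7 = o2_h2o_potential(7)  # close to the +0.82 V quoted in the notes
```

So the lecture-note value of +0.82 V is simply the standard +1.23 V evaluated near physiological pH 7, consistent with the answer.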
{ "domain": "biology.stackexchange", "id": 6460, "tags": "photosynthesis, energy, biochemistry, thermodynamics" }
I can't clean inside my bag file with python
Question: In my code i am trying to clean my bag file in every 10 seconds and then trying to write again, but it doesn't work. I couldn't find any rosbag function to clean inside my bag file. I tried with python functions but it didn't work too. Here is my code. #!/usr/bin/env python3 # -*- coding: UTF-8 -*- import sys import rospy import rosbag from geometry_msgs.msg import Twist import time current_time = time.time() last_time=time.time() filename='test2.bag' print(current_time) bag = rosbag.Bag(filename, 'w') def move(msg): global last_time current_time = time.time() if current_time - last_time >10: last_time=time.time() with open(filename, 'r+') as f: #f.read() f.truncate() print(f.encoding) #f.close() print("outside") else: bag.write('/cmd_vel', msg) print("inside") def main(args): rospy.init_node('bag', anonymous=True) rospy.Subscriber("/cmd_vel",Twist,move) rospy.spin() bag.close() print("Closing...") if __name__ == '__main__': main(sys.argv) Originally posted by anonymous60874 on ROS Answers with karma: 1 on 2022-04-01 Post score: 0 Original comments Comment by ljaniec on 2022-04-01: What kind of errors/issues have you encountered? Can you add their description and terminal output etc.? Comment by anonymous60874 on 2022-04-01: I encountered with this error. "UnicodeDecodeError: 'utf-8' codec can't decode byte 0xcc in position 40: invalid continuation byte" After i post this, i solved problem. I've removed truncate function and added "os.remove()". I guess deleting file and create it again is better solution. Here is new code: if current_time - last_time >10: last_time=time.time() os.remove(filename) bag = rosbag.Bag(filename, 'w') print("outside") Answer: You can not just modify the underlying file being used by the bag object. If you want to start a new one, close the existing bag object and create a new bag object. Originally posted by Mike Scheutzow with karma: 4903 on 2022-04-01 This answer was ACCEPTED on the original site Post score: 1
{ "domain": "robotics.stackexchange", "id": 37549, "tags": "python, rosbag" }
Iterative uniform recombination of integer lists in python with numba
Question: I have a list of N integers and I iteratively take two integers at random, which are not at the same position in the list, and uniformly recombine their binary representations, e.g. in intList = [1,4,1,5] (N=4) the numbers on the second and third position are chosen, aka 4 & 1, and their binary representations 4=100 & 1=001 are uniformly recombined. Uniformly recombined means that I choose either the 1 or the 0 of the first position, one of the two 0's of the second position and either the 0 or 1 of the third position. This could result in 000 or 101 or 100 or 001. The result is saved as an integer in the list. In each iteration I do this recombination for all N integers. This happens in a function with a numba decorator. My code is: @nb.njit() def comb(): iterations = 100000 N = 1000 intList = list(range(N)) # this is just an example of possible integer lists l = 10 # length of the largest binary representation. intList_temp = [0]*N for _ in range(iterations): for x3 in range(N): intList_temp[x3] = intList[x3] for x3 in range(N): randint1 = random.randint(0, N - 1) randint2 = random.randint(0, N - 1) while randint1 == randint2: randint1 = random.randint(0, N - 1) randint2 = random.randint(0, N - 1) a = intList[randint1] b = intList[randint2] c = a ^ ((a ^ b) & random.randint(0, (1 << l) - 1)) intList_temp[x3] = c for x3 in range(N): intList[x3] = intList_temp[x3] return intList print(timeit(lambda: comb(), number=1)) >>>2.59s My question is, can this be improved? Answer: No significant performance improvement but cleaner code. Because temp_list is overwritten element-wise you can create it once and then leave it. At the end of each iteration you can then copy the entire list into int_list for the next iteration. Similarly, you can simplify creating the random ints a and b a bit.
There are a lot of ways that get close to this in numpy, but nothing I can find that beats the naive sampling works directly with numba, and numba beats any overall solution I can think of without it. Unfortunately (or maybe fortunately, depending on your view), the numba compiler is compiling it down to the best solution I can think of for both your original version and this. Only difference is readability: @nb.njit() def comb(int_list, l, iterations): n = len(int_list) temp_list = int_list for _ in range(iterations): for i in range(n): a, b = 0, 0 while a == b: a = random.randint(0, n - 1) b = random.randint(0, n - 1) temp_list[i] = a ^ ((a ^ b) & random.randint(0, l)) int_list = temp_list return int_list print(timeit(lambda: comb(list(range(1000)), (1 << 10) - 1, 100000), number=10))
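The bit trick at the heart of both versions — `a ^ ((a ^ b) & mask)` — picks every bit of the child independently from one of the two parents. A standalone sketch with a property check (no numba needed; function names are made up for illustration):

```python
import random

def uniform_crossover(a, b, nbits, rng=random):
    """Child takes parent b's bit wherever mask has a 1 (a ^ (a^b) = b),
    and parent a's bit wherever mask has a 0."""
    mask = rng.getrandbits(nbits)
    return a ^ ((a ^ b) & mask)

def bits_come_from_parents(a, b, child, nbits):
    """Every bit of child equals the corresponding bit of a or of b."""
    return all(
        (child >> i) & 1 in {(a >> i) & 1, (b >> i) & 1}
        for i in range(nbits)
    )
```

Because the mask bits are independent fair coin flips, each child bit is drawn uniformly from the two parent bits — exactly the "uniform recombination" described in the question.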
{ "domain": "codereview.stackexchange", "id": 40058, "tags": "python, performance, numba" }
Return office details from multiple tables
Question: The function itself is just returning as void. But, the point of the posted code is not about what the function is returning. It is the code related to using T-SQL and C# to return data from a SQL database that I would like reviewed. Especially the way the using statements are structured. public void GetOffice(int syncID) { string strQry = @" Select so.SyncID, so.title From Offices o Left Outer Join SyncOffices so On so.id = o.SyncID Where o.SyncID = @syncID "; using (SqlConnection conn = new SqlConnection(Settings.ConnectionString)) { using (SqlCommand objCommand = new SqlCommand(strQry, conn)) { objCommand.CommandType = CommandType.Text; objCommand.Parameters.AddWithValue("@syncID", syncID); conn.Open(); SqlDataReader rdr = objCommand.ExecuteReader(); if (rdr.Read()) { this.OfficeName= rdr.GetString(1); } rdr.Close(); } } } Answer: First of all, kudos for using a Parameter and not concatenating the value into your T-SQL string. You're not disposing all IDisposable objects. SqlDataReader should be disposed as well. Now this makes it quite a bunch of nested using scopes, which you could rework like this: using (var connection = new SqlConnection(Settings.ConnectionString)) using (var command = new SqlCommand(sql, connection)) { command.CommandType = CommandType.Text; command.Parameters.AddWithValue("@syncID", syncId); connection.Open(); using (var reader = command.ExecuteReader()) { if (reader.Read()) { this.OfficeName = reader.GetString(1); } } } Note: Usage of var for implicit typing makes the code easier to read (IMO), if you're using C# 3.0+ Disemvoweling is bad. There's no reason to call a variable rdr over reader. Use meaningful names, always. Hungarian notation is evil. There's no reason to prefix a string with str. Stick to camelCasing for locals - that includes parameters, so syncID becomes syncId. The code assumes the query only returns 1 row, but the query isn't written to explicitly select a single row. This could lead to unexpected results. 
Given some IList<string> results = new List<string>();: using (var reader = command.ExecuteReader()) { while (reader.Read()) { results.Add(reader.GetString(1)); } } You could then do this.OfficeName = results.Single(); (which would blow up if no rows were returned). One thing that strikes me, is that you're selecting 2 fields, but only using 1, which makes this reader.GetString(1) statement look surprising. If you don't need to select the SyncID field, remove it from your query and do reader.GetString(0) instead. Finally, the T-SQL itself: Select so.SyncID, so.title From Offices o Left Outer Join SyncOffices so On so.id = o.SyncID Where o.SyncID = @syncID Could look like this: SELECT so.Title FROM Offices o LEFT JOIN SyncOffices so ON so.id = o.SyncID WHERE o.SyncID = @syncID Or, in a string: var sql = "SELECT so.Title FROM Offices o LEFT JOIN SyncOffices so ON so.Id = o.SyncId WHERE o.SyncId = @syncId"; The line breaks make it look weird, and since it's not too long of a query, I think it would make the code better to have it on a single line.
{ "domain": "codereview.stackexchange", "id": 6177, "tags": "c#, sql-server, ado.net" }
Grassmann's variables under integration
Question: If $\eta$ is a Grassmann variable, due to invariance under translations we get that, $$\int d\eta\ \eta = 1 \tag1$$ Nevertheless, for being Grassmann's, $\eta$ satisfies $\eta^2 = 0$. Differentiating this condition you get, $$d(\eta^2) = 2\eta d\eta \equiv 0 \Rightarrow \int d\eta\ \eta = 0 \tag2$$ So, Eq. (2) obtained just via definition of Grassmann variable goes against Eq. (1) that comes out from translation invariance. But I've seen the use of Eq. (1) in all books about fermions' path integral, so what is the thing that I'm misunderstanding? Answer: I am not sure that $d(\eta^2)$ is defined at all. But if it is, then, in my opinion, you should write it in this way $$ d(\eta^2) = d(\eta\eta) = d\eta\ \eta + \eta\ d\eta $$ So you get not $\eta\ d\eta = 0$, but natural anticommutation of $\eta$ and $d\eta$: $d\eta\ \eta + \eta\ d\eta = 0$. I think the latter equality is usual for Grassmann integrals.
{ "domain": "physics.stackexchange", "id": 54937, "tags": "quantum-field-theory, path-integral, fermions, grassmann-numbers" }
What makes light to be a special part of the electromagnetic spectrum that it has a particle?
Question: Why does only light have photons while x-ray or micro/radio waves don't? If we build a device that can iterate over all frequencies, what will be so special about the light range that it will start to generate photons while in all other ranges no photons will be made? Are there special 0 mass particles for x-rays and radio waves or do they also generate photons? (the latter is highly unlikely since radio doesn't travel in a straight line) Answer: Light and all electromagnetic frequencies emerge from a superposition of photons, the underlying quantum mechanical state of electrodynamics, QED. Are there special 0 mass particles for x-rays and radio waves They are photons of mass zero and energy equal to $h\nu$, where $\nu$ is the frequency of the wave and $h$ is the Planck constant. or do they also generate photons? (the latter is highly unlikely since radio doesn't travel in a straight line) They are composed of zillions of photons, at all frequencies of electromagnetic radiation. It is the classical electromagnetic wave that is deflected easily by obstacles. This means radio frequency photons will have time-dependent build-ups of superposition into the classical radio wave, but the mathematics is there and it is photons all the way down to zero frequency.
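Every band of the spectrum is carried by photons of energy $E = h\nu$; a radio photon simply carries vastly less energy per quantum than a visible one, which is why the classical wave picture dominates at low frequency. A quick sketch:

```python
H_PLANCK = 6.62607015e-34  # J*s (exact in the 2019 SI)

def photon_energy(freq_hz):
    """Energy of a single photon, E = h * nu."""
    return H_PLANCK * freq_hz

e_green = photon_energy(5.5e14)  # visible green light, roughly 550 THz
e_radio = photon_energy(1.0e8)   # FM radio, roughly 100 MHz
```

The green photon carries millions of times more energy than the radio photon, even though both are the same kind of massless particle.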
{ "domain": "physics.stackexchange", "id": 50146, "tags": "electromagnetism, visible-light, electromagnetic-radiation, photons" }
Why am I getting different result from KDL forward kinematics and TF transform enquiry?
Question: Hi. I am trying to perform forward kinematics for one of the arm using KDL Chainfksolverpos_recursive. The code works fine and I am getting a reasonable result. But when I compare the results with the "rosrun tf tf_echo /frame1 /frame2" result for the same root and tip links, the results are different. Has anyone come across this problem and know the reason for this discrepancy. bool gotTree=kdl_parser::treeFromFile (file_path, tree_); if (!gotTree) ROS_ERROR("Failed to parse urdf file"); else { ROS_DEBUG_STREAM("Successfully created kdl tree"); std::cerr << "number of joint: " << tree_.getNrOfJoints() << std::endl; std::cerr << "number of links: " << tree_.getNrOfSegments() << std::endl; } const sensor_msgs::JointStateConstPtr initJoints = ros::topic::waitForMessage<sensor_msgs::JointState>("/robot/joint_states", n); ROS_INFO("Joint States published"); // ROS_INFO("length of joint states %d", initJoints->name.size()); tree_.getChain("base", "right_upper_elbow", chain_); // tree_.getChain("base", "right_wrist", chain_); if (chain_.getNrOfJoints() == 0) { ROS_INFO("Failed to initialize kinematic chain"); } else { ROS_INFO("No of joints in the chain: %u", chain_.getNrOfJoints()); } KDL::JntArray joint_pos= KDL::JntArray(chain_.getNrOfJoints()); // joint_pos.resize(chain_.getNrOfSegments()); KDL::Frame cart_pos; // ROS_INFO("total Joints(i): %d", initJoints->name.size()); ROS_INFO("chain_.getNrOfSegments(j): %d", chain_.getNrOfSegments()); ROS_INFO("chain_.getNrOfJoints(): %d", chain_.getNrOfJoints()); int k=0; for (int i=0; i<initJoints->name.size(); i++) { for(int j=0; j< chain_.getNrOfSegments(); j++) { // ROS_INFO("just checking1: i= %d, j:%d", i, j); KDL::Segment segmnt=chain_.getSegment(j); ROS_INFO("%d.getName(): %s", j, segmnt.getJoint().getName().c_str()); if (!segmnt.getJoint().getName().compare(initJoints->name[i]) ) { if(segmnt.getJoint().getType()!=KDL::Joint::None) { ROS_INFO("segment %s went fine", segmnt.getJoint().getName().c_str()); 
joint_pos(k)=initJoints->position[i]; ROS_INFO("k: %d, name: %s, initJoints->position[%d]: %f",k, initJoints->name[i].c_str(), i, initJoints->position[i]); k++; } } // ROS_INFO("just checking3: i= %d, j:%d", i, j); } } KDL::ChainFkSolverPos_recursive fksolver = KDL::ChainFkSolverPos_recursive(chain_); KDL::Frame cartpos; bool kinematics_status; kinematics_status = fksolver.JntToCart(joint_pos,cartpos); if(kinematics_status>=0) { // ROS_INFO("fk: x: %f, y: %f, z: %f", cartpos.p[0], cartpos.p[1], cartpos.p[2]); ROS_INFO("fk: x: %f, y: %f, z: %f", cartpos.p.x(), cartpos.p.y(), cartpos.p.z()); } else ROS_INFO("JntToCart did not work :("); Originally posted by olchandra on ROS Answers with karma: 7 on 2017-04-05 Post score: 0 Original comments Comment by Stefan Kohlbrecher on 2017-04-06: An example would be helpful I think. Is there some small discrepancy or are both results completely different? In any case, both KDL and tf are used in a lot of projects, so it is likely that something is off with the invocation (for which, again, code would be good to see). Comment by olchandra on 2017-04-06: In the code that I have added to the question, I am using Baxter robot in gazebo. The output from the JntToCart seems logical but not the same. I have tried with different tip links and the discrepancy increases with the number of joints in the chain. For just one joint it is almost the same as TF. Answer: Hi, It seems to be a bit late but if anyone else needs this. I followed your method and after getting bad results too, I changed two things. The first, which I think causes your problem, is the condition !segmnt.getJoint().getName().compare(initJoints->name[i]), which I transform as segmnt.getJoint().getName().compare(initJoints->name[i]) == 0, because String1.compare(String2) does not return a bool. The second, which is not important for good results (but I'm telling it to understand my code below), is the order of your loops.
If your joint_state message, like mine, is not necessarily ordered like the chain and could include other joints, it is more efficient, I think, to invert these two "for" loops and break when you find the match. Anyway, this doesn't really matter, and this is my code, which seems to work : urdf::Model model; if (!model.initParam("robot_description")) return -1; KDL::Tree tree; if (!kdl_parser::treeFromUrdfModel(model, tree)) { ROS_ERROR("Failed to extract kdl tree from xml robot description"); return -1; } const sensor_msgs::JointStateConstPtr initJoints = ros::topic::waitForMessage<sensor_msgs::JointState>("/joint_states", n); KDL::Chain chain; tree.getChain("base_link", "xtion_wrist_link", chain); KDL::JntArray joint_pos= KDL::JntArray(chain.getNrOfJoints()); int k = 0; for(int i = 0; i < chain.getNrOfSegments(); i++) { KDL::Segment segment = chain.getSegment(i); for (int j = 0; j < initJoints->name.size(); j++) { if (segment.getJoint().getName().compare(initJoints->name[j]) == 0) { if(segment.getJoint().getType() != KDL::Joint::None) { joint_pos(k) = initJoints->position[j]; k++; break; } } } } KDL::ChainFkSolverPos_recursive fksolver = KDL::ChainFkSolverPos_recursive(chain); KDL::Frame frame; bool kinematics_status; kinematics_status = fksolver.JntToCart(joint_pos, frame); if (kinematics_status >= 0) { cout << "fk -> x : " << frame.p.x() << " y : " << frame.p.y() << " z : " << frame.p.z() << endl; } else cout << "JntToCart did not work" << kinematics_status << endl; code output : fk -> x : 0.426696 y : -0.426238 z : 0.887562 rosrun tf tf_echo output : At time 122.154 - Translation: [0.427, -0.426, 0.888] Originally posted by billus with karma: 16 on 2020-10-19 This answer was ACCEPTED on the original site Post score: 0
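A minimal, language-agnostic sketch of the reordering logic from the accepted answer may help: the joint_states message is not necessarily ordered like the kinematic chain, so positions must be mapped to chain order by joint name before calling the FK solver. This Python sketch uses made-up joint names and does not touch the KDL API itself:

```python
def chain_joint_positions(chain_joint_names, joint_state_names, joint_state_positions):
    """Order joint positions to match the kinematic chain.

    Mirrors the answer's loop structure: walk the chain's joints in
    order and, for each, look up the matching entry in the
    joint_states message, so the resulting array is ordered the way
    the FK solver expects.
    """
    lookup = dict(zip(joint_state_names, joint_state_positions))
    return [lookup[name] for name in chain_joint_names]

# joint_states messages are often NOT ordered like the chain:
state_names = ["right_e1", "right_s0", "right_s1", "right_e0"]   # hypothetical names
state_pos = [0.4, 0.1, 0.2, 0.3]
chain_order = ["right_s0", "right_s1", "right_e0", "right_e1"]
print(chain_joint_positions(chain_order, state_names, state_pos))  # [0.1, 0.2, 0.3, 0.4]
```

In the C++ answer the same mapping is done with the nested loops plus break; a dictionary lookup is simply the idiomatic equivalent in Python.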
{ "domain": "robotics.stackexchange", "id": 27529, "tags": "ros" }
A multi label text classification problem
Question: I'm looking to solve a multi-label text classification problem but I don't really know how to formulate it correctly so I can look it up. Here is my problem: Say I have the document "I want to learn NLP. I can do that by reading NLP books or watching tutorials on the internet. That would help me find a job in NLP." I want to classify the sentences into 3 labels (for example) objective, method and result. The result would be: objective: I want to learn NLP method: I can do that by reading NLP books or watching tutorials on the internet. result: That would help me find a job. As you will have noticed, it's not a classical classification problem, since the classification here depends on the document structure (unless I'm wrong?) Any idea of the keywords to better describe the problem, or how I might solve it? Many thanks! Answer: Based on some discussions and on the comments, the conclusion is that this problem could rather be considered as one of the following NLP tasks (some of which are pretty similar): Q&A (as suggested by @Akavall too) Intent Classification (or NER) One-shot Learning Semantic Role Labeling Sequence Labeling (as suggested by @Erwan) Thanks!
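One way to see why this is about document structure rather than sentence content: even a trivial baseline that ignores the words entirely and labels sentences purely by position reproduces the example's labels. This is only a toy illustration of the sequence-labeling view, not a recommended method:

```python
def position_baseline(sentences):
    """Label sentences by their position in the document.

    Deliberately naive: first sentence -> objective, last -> result,
    everything in between -> method. A real sequence labeler would
    learn such structural regularities from features instead of
    hard-coding them.
    """
    labels = []
    for i, _ in enumerate(sentences):
        if i == 0:
            labels.append("objective")
        elif i == len(sentences) - 1:
            labels.append("result")
        else:
            labels.append("method")
    return labels

doc = [
    "I want to learn NLP.",
    "I can do that by reading NLP books or watching tutorials on the internet.",
    "That would help me find a job in NLP.",
]
print(position_baseline(doc))  # ['objective', 'method', 'result']
```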
{ "domain": "datascience.stackexchange", "id": 10667, "tags": "nlp, multiclass-classification, text-classification, language-model" }
How to remove effect of topography on Air temperature in Excel?
Question: I traversed a route in a city. The route has different elevation. So I need to remove the effect of elevation from my air temperature measurements. The measurements done by temperature data logger mounted over a car and a GPS in it. So, the air temperature joined to location which has elevation. As result, I have air temperature and elevation fields loaded up into Excel. All I need to do it to correct the effect of elevation on my air temperature data, which I do not know how to do that. Answer: Definitely an interesting question... You need to adjust your temperature data to a common elevation by using the lapse rate. The lapse rate can be determined as the temperature change over a standard interval in elevation change. For instance, find the lowest elevation on your traveled route, and note the temperature. Then for each location for which you have both elevation and temperature, determine the elevation difference and temperature difference to that lowest elevation location. The lapse rate will be derived as the temperature difference per unit elevation difference. In other words, the lapse rate will be noted as so many degrees per 100 meters change in elevation. The lapse rate determined for the locations on your traversed route can be averaged and your data reduced to a common elevation on the basis of the average lapse rate. My assumption is that the determined lapse rates would likely be relatively close in value when expressed as a temperature change for given standard elevation change. Now, with the lapse rate known, your temperature data can be reduced to a common elevation. Nevertheless, a consideration would be if you noticed that the lapse rate was considerably different from location to location. Consequently, you may wish to simply adjust each location to a common elevation by using the lapse rate derived for that location based on a comparison with that location's nearest neighbor. 
You can derive the local lapse rate by comparison of nearby locations, and then reduce your various measurements to a common elevation for comparison. Lapse rate is typically a feature of an airmass. The lapse rate is dependent on water vapor content in the air. Commonly, the lapse rate is taken as a specific value for dry air, or for moist air. You can compare the lapse rate you have derived with these values to get a sense of whether the air is moist or dry. Consequently, you can determine if the lapse rate is related to the region or area for which your temperature measurements were made. For instance, near the coast the lapse rate may be lower due to moisture in the air. Farther inland, in dryer air, the lapse rate may be higher due to less moisture in the air. An interesting question. Thanks for asking.
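If you prefer to script the correction instead of doing it in Excel, the procedure described above (derive a lapse rate against the lowest station, average it, then reduce every reading to a common elevation) looks roughly like this; the elevations and temperatures are invented for illustration:

```python
def average_lapse_rate(elevations_m, temps_c):
    """Average lapse rate (deg C per metre) relative to the lowest point.

    For each measurement, take (temperature difference)/(elevation
    difference) to the lowest-elevation measurement, then average.
    """
    i0 = min(range(len(elevations_m)), key=lambda i: elevations_m[i])
    rates = [
        (temps_c[i] - temps_c[i0]) / (elevations_m[i] - elevations_m[i0])
        for i in range(len(elevations_m))
        if i != i0 and elevations_m[i] != elevations_m[i0]
    ]
    return sum(rates) / len(rates)

def reduce_to_common_elevation(elevations_m, temps_c, ref_elev_m, lapse_rate):
    """Adjust each reading to ref_elev_m via T_ref = T - rate * (z - ref)."""
    return [t - lapse_rate * (z - ref_elev_m)
            for z, t in zip(elevations_m, temps_c)]

elev = [100.0, 200.0, 300.0]   # metres (illustrative)
temp = [20.0, 19.35, 18.70]    # deg C, i.e. -0.65 C per 100 m
rate = average_lapse_rate(elev, temp)
print(round(rate * 100, 2))    # -0.65 (deg C per 100 m)
adjusted = reduce_to_common_elevation(elev, temp, 100.0, rate)
print([round(t, 2) for t in adjusted])  # all 20.0 after correction
```

With real data you would inspect the spread of the per-station rates before averaging, as the answer suggests, and fall back to nearest-neighbour rates if they vary a lot.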
{ "domain": "earthscience.stackexchange", "id": 2578, "tags": "temperature, topography, air" }
Unable to install ROS in Ubuntu 14.04 LTS
Question: I am unable to install ROS on my Ubuntu 14.04 LTS. Below is the error I am facing This code block was moved to the following github gist: https://gist.github.com/answers-se-migration-openrobotics/4b82b30994acfd18785cbf9ea5914c6f is it the problem of the ubuntu version or did I miss anything I have followed all steps provided in the installation guide any help is highly appreciated Originally posted by Raghu Parvatha on ROS Answers with karma: 23 on 2014-04-27 Post score: 1 Answer: hydro not supported for ubuntu 14.04 try to install indigo Originally posted by Hamid Didari with karma: 1769 on 2014-04-27 This answer was ACCEPTED on the original site Post score: 6 Original comments Comment by Hansg91 on 2014-04-27: Indigo isn't officially released yet either. I had success installing ros_comm from source and installing the rest through apt-get. Comment by Raghu Parvatha on 2014-04-28: Hansg91 can you please help me in installation process. Comment by Raghu Parvatha on 2014-04-28: hamid thank you for your reply but as Hansg said even I am unable to install the package I am getting below error message admin@admin:~$ sudo apt-get install ros-indigo-desktop-full Reading package lists... Done Building dependency tree Reading state information... Done E: Unable to locate package ros-indigo-desktop-full Comment by Tom Moore on 2014-04-28: ros-indigo-desktop-full is not available yet. For now, you can just do sudo apt-get install ros-indigo-* It'll have far more packages than you'll probably want/need, but you'll be sure to have everything. :) Comment by Hamid Didari on 2014-04-28: did you update your sources.list? Comment by William on 2014-04-28: The from source instructions should work on 14.04 unless there are problems compiling against newer versions of libraries in 14.04: http://wiki.ros.org/hydro/Installation/Source However hydro isn't supported for 14.04, so it may not work without some debugging.
{ "domain": "robotics.stackexchange", "id": 17799, "tags": "rosinstall" }
TapeEquilibrium Codility implementation not achieving 100%
Question: Given the following task description from here : A non-empty zero-indexed array A consisting of N integers is given. Array A represents numbers on a tape. Any integer P, such that 0 < P < N, splits this tape into two non-empty parts: A[0], A[1], ..., A[P − 1] and A[P], A[P + 1], ..., A[N − 1]. The difference between the two parts is the value of: |(A[0] + A[1] + ... + A[P − 1]) − (A[P] + A[P + 1] + ... + A[N − 1])| In other words, it is the absolute difference between the sum of the first part and the sum of the second part. For example, consider array A such that: A[0] = 3 A[1] = 1 A[2] = 2 A[3] = 4 A[4] = 3 We can split this tape in four places: P = 1, difference = |3 − 10| = 7 P = 2, difference = |4 − 9| = 5 P = 3, difference = |6 − 7| = 1 P = 4, difference = |10 − 3| = 7 Write a function: def solution(A) that, given a non-empty zero-indexed array A of N integers, returns the minimal difference that can be achieved. For example, given: A[0] = 3 A[1] = 1 A[2] = 2 A[3] = 4 A[4] = 3 the function should return 1, as explained above. Assume that: N is an integer within the range [2..100,000]; each element of array A is an integer within the range [−1,000..1,000]. Complexity: expected worst-case time complexity is O(N); expected worst-case space complexity is O(N), beyond input storage (not counting the storage required for input arguments). Elements of input arrays can be modified. I decided to implement an algorithm that with 2 counters/pointers (one from leftmost and another one from rightmost of the array input) representing the total sum of values that the pointer(s) have traversed. The process works by first deciding which pointer to move closer to the other in each iteration, which is looking the next element directly next to the location of the current pointer, attempt to temporarily sum the value to the pointer, and then find the absolute difference between the other pointer. 
The absolute difference is also calculated for the other pointer, and then compared against the other pointer's temporarily accumulated value, to find out which one yields the lower absolute difference. The pointer move that yields the lowest absolute difference then performs the actual summation to the pointer, and that particular pointer moves for that iteration. The following is my code:

from math import fabs

def solution(A):
    l_ptr = 0
    r_ptr = A.__len__() - 1
    l_sum = A[l_ptr]
    r_sum = A[r_ptr]
    while l_ptr < r_ptr - 1:
        if fabs(l_sum + A[l_ptr + 1] - r_sum) > fabs(l_sum - (r_sum + A[r_ptr - 1])):
            r_ptr -= 1
            r_sum += A[r_ptr]
        else:
            l_ptr += 1
            l_sum += A[l_ptr]
    return (int)(fabs(l_sum - r_sum))

In the test cases, I didn't manage to achieve 100% accuracy, and I'm not sure exactly why, but I think perhaps it has something to do with the pointers not being able to look at array elements that are several steps away at a given iteration, and the possibility of having negative values. The following are the test cases it fails on according to Codility:

small_random (random small, length = 100): WRONG ANSWER, got 269, expected 39
large_ones (large sequence, numbers from -1 to 1, length = ~100,000): WRONG ANSWER, got 228, expected 0
large_random (random large, length = ~100,000): WRONG ANSWER, got 202635, expected 1

Obviously, given that it's an evaluation on a hidden test set, the actual input data for the tests are not given, so it's more difficult for me to figure out what part of my algorithm is incorrect and why. Could someone provide an explanation (preferably with sample input data) of what I misunderstood? Thank you in advance. Answer: Algorithm: Your algorithm works for input with only positive integers. But it may not work with some input that contains negative numbers; for example, it gives an incorrect result for: [-1, 1, 1, -1, -2] Why does it work for all positive numbers?
At any point in your loop, you basically have:

leftsum: the sum of elements on the left so far
leftnext: the next element on the left
rightsum: the sum of elements on the right so far
rightnext: the next element on the right

When you know that all remaining elements in the middle are non-negative, then you can safely decide whether to take the left or the right, by minimizing the difference between leftsum + leftnext and rightsum + rightnext. This is safe, because all the remaining elements are non-negative, therefore the difference can only shrink, or otherwise be minimal. But when there can be negative numbers in the middle, you don't have such knowledge, and it can be impossible to decide which side to advance. Consider this alternative that's simple and intuitively easy to understand, and is guaranteed to give the correct result:

Set left to the first element
Set right to the total sum - left
Initialize mindiff to the absolute difference of left and right
Iterate from the 2nd element until the -1th: add to left and subtract from right and update mindiff (A quick tip for writing the loop: for value in A[1:-1]: ...)
return mindiff

Technique: It's strange you did from math import fabs when you don't need floating point math to solve this problem. You can use abs instead of fabs. Instead of A.__len__() it's more natural to use len(A).
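The step-by-step alternative sketched in the answer translates to just a few lines of Python; the two inputs below are the worked example from the question and the negative-number counterexample from the answer:

```python
def solution(A):
    """Minimal |left - right| over all split points, in O(N).

    Keep a running left-hand sum; the right-hand sum is always
    total - left, so each split point is evaluated in O(1).
    """
    left = A[0]
    right = sum(A) - left
    mindiff = abs(left - right)
    for value in A[1:-1]:
        left += value
        right -= value
        mindiff = min(mindiff, abs(left - right))
    return mindiff

print(solution([3, 1, 2, 4, 3]))     # 1 (the worked example)
print(solution([-1, 1, 1, -1, -2]))  # 0 (the counterexample with negatives)
```

Unlike the two-pointer greedy approach, this examines every split point, so negative values in the middle cannot mislead it.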
{ "domain": "codereview.stackexchange", "id": 43176, "tags": "python, python-2.x" }
Colour properties of nickel complexes
Question: According to Crystal Field theory (CFT), complexes with stronger ligands must absorb light with higher frequency and hence would transmit the corresponding complementary colour. The colour of $(\ce{[Ni(H2O)2(en)2]^2+})$ is blue-purple and the colour of $(\ce{[Ni(H2O)6]^2+})$ is green, whereas the colour of $(\ce{[Ni(en)3]^2+})$ is purple. (Where $\ce{en ->}$ ethylenediamine.) (According to NCERT chemistry class 12) But the above example goes against the CFT statement. Is any plausible explanation available? Answer: OP's argument: According to Crystal Field Theory(CFT), complexes with stronger ligands must absorb light with higher frequency hence would transmit corresponding complementary color. First of all, the different colors we see in these solutions are not emission colors. These solutions do not emit, as AChem pointed out in his comment. As he correctly put it, the colors are due to the transmitted light minus the absorbed portion of the spectrum. In other words, the color you see in a solution of a transition metal compound is due to the wavelengths of the visible light that aren't absorbed by the $\mathrm{3d}$ electronic transitions of the metal. For example, nickel(II) aqueous complexes $(\ce{[Ni(H2O)6]^2+})$ often absorb strongly in the red region of the visible spectrum, so the resulting color observed is green. That means the complexes with weaker ligands such as $\ce{H2O}$ absorb light with lower frequency (larger wavelengths) such as red (see a representation of a color wheel below): That means: if the color of the solution is light blue $(\ce{[Ni(H2O)4(en)]^2+})$, it absorbs orange light; if blue $(\ce{[Ni(H2O)2(en)2]^2+})$, it absorbs orange-yellow light; and if pink (to me it looks pink, not purple) $(\ce{[Ni(en)3]^2+})$, it absorbs yellow-green light (if it is purple, it still absorbs yellow light).
If you look at the absorbed light, you'd see frequencies of yellow-green (or yellow) $\gt$ orange-yellow $\gt$ orange $\gt$ red, in good agreement with your statement. Note: According to Ref. 1, the $\ce{Ni^2+}$ complexes exhibit a broad band in the region $\pu{15000-23000 cm-1}$ with a second band in the range of $\pu{23000-27000 cm-1}$. These bands are assigned to the transitions $\ce{^1A_{1g} -> ^1A_{2g}}$ and $\ce{^1A_{1g} -> ^1B_{1g}}$. Also, the color change due to the number of ethylenediamine molecules $(\ce{H2N-CH2CH2-NH2})$ was recently published as an undergraduate experiment (Ref. 2). References: 1. Sangamesh A. Patil and Vasant H. Kulkarni, “Complexes of nickel(II) with ethylenediamine and 1,3-diaminopropane derivatives of 2,2′-dihydroxychalkones,” Inorganica Chimica Acta 1983, 73, 125-129 (DOI: https://doi.org/10.1016/S0020-1693(00)90836-3). 2. Mauro Ravera, Alessandro Nucera, and Elisabetta Gabano, “Freshening up Old Methods for New Students: A Colorful Laboratory Experiment to Measure the Formation Constants of Ni(II) Complexes Containing Ethane-1,2-Diamine,” J. Chem. Educ. 2022, 99(3), 1473–1478 (DOI: https://doi.org/10.1021/acs.jchemed.1c01186).
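The frequency ordering claimed above (yellow-green > orange-yellow > orange > red) follows directly from nu = c/lambda; the wavelengths below are rough mid-band values chosen for illustration, not values from the cited papers:

```python
C = 2.998e8  # speed of light, m/s

def frequency_thz(wavelength_nm):
    """nu = c / lambda, reported in THz."""
    return C / (wavelength_nm * 1e-9) / 1e12

# Rough mid-band wavelengths (nm) for the absorbed colours discussed:
absorbed = {
    "yellow-green": 560,
    "orange-yellow": 590,
    "orange": 610,
    "red": 650,
}
for name, wl in absorbed.items():
    print(f"{name:14s} {frequency_thz(wl):6.1f} THz")
# Shorter wavelength -> higher frequency, matching the ordering in the text.
```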
{ "domain": "chemistry.stackexchange", "id": 17315, "tags": "coordination-compounds, color, crystal-field-theory" }
Typical raw sensor data?
Question: It might be a silly question, but I would like to know for the sake of curiosity. I would like to see real sensor data for 2D case. I can't afford the cost of Laser sensors so far, but I'm sure that there is some recorded real data that allows me to understand and interpret this data. I know that data that is acquired by robot is represented in the robot's frame. For 2D, I should get the range and bearing to an obstacle. I've simulated the procedure in Matlab, but now I want see the real data. Thanks in advance. Originally posted by CroCo on ROS Answers with karma: 155 on 2014-02-02 Post score: 0 Answer: You should be able to find a number of bagfiles floating around with recorded laser + odometry. A few examples: I have posted a number of such bagfiles (mostly from lower-cost lasers like the Neato XV-11 and Hokuyo URG): http://fergy.me/slam MIT has also posted bagfiles acquired on a PR2: http://projects.csail.mit.edu/stata/downloads.php These include a lot more than just laser data, so they are probably quite large. Originally posted by fergs with karma: 13902 on 2014-02-02 This answer was ACCEPTED on the original site Post score: 1 Original comments Comment by CroCo on 2014-05-17: @fergs, thank you so much.
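On the question's point about range and bearing in the robot's frame: each beam in a 2D scan is a (range, bearing) pair, and the corresponding Cartesian point in the sensor frame is x = r*cos(theta), y = r*sin(theta). A sketch with synthetic beams (not data from the linked bagfiles):

```python
import math

def scan_to_points(ranges, angle_min, angle_increment):
    """Convert a 2D laser scan (ranges plus evenly spaced bearings)
    to (x, y) points in the robot/sensor frame — the same layout a
    ROS LaserScan message encodes."""
    points = []
    for i, r in enumerate(ranges):
        theta = angle_min + i * angle_increment
        points.append((r * math.cos(theta), r * math.sin(theta)))
    return points

# Three synthetic beams: 90 deg right, straight ahead, 90 deg left
pts = scan_to_points([1.0, 2.0, 0.5], -math.pi / 2, math.pi / 2)
for x, y in pts:
    print(f"x={x:+.3f}  y={y:+.3f}")
```

This is the kind of transformation you would apply to the recorded laser messages in those bagfiles to visualise the obstacles around the robot.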
{ "domain": "robotics.stackexchange", "id": 16856, "tags": "ros, sensor" }
No inductor - any induced EMF when switch flipped on?
Question: My instructor put in the notes a picture of a loop without an inductor - just a battery with voltage $\epsilon$, a switch, and resistor with resistance $R$. He explains that as soon as the switch is flipped, an emf is created that will oppose the change in flux. With an inductor, I can see how this would happen. But this made me wonder what happens when there isn't an inductor. Does current immediately reach $\epsilon\ /\ R$? Or is there some back emf that opposes flux change? It would make sense thinking about it that even without an inductor, there would be a resistance to flux change inside the closed loop, so it wouldn't immediately reach $I = \epsilon\ /\ R$. Answer: The answer to your question is that there is always inductive coupling between the legs of such a loop. Beyond that, transmission line theory answers your question of what happens. Basically, there's parasitic inductance and capacitance everywhere you go, and they ensure you never see instantaneous jumps in current.
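To make the answer's point quantitative: with a parasitic inductance $L$ in the loop, the current after the switch closes follows the standard series-RL solution $i(t) = (\epsilon/R)(1 - e^{-Rt/L})$, so it starts at zero and only approaches $\epsilon/R$ asymptotically, never jumping. The component values below are illustrative:

```python
import math

def rl_current(t, emf, R, L):
    """Current in a series RL loop after the switch closes at t = 0."""
    return (emf / R) * (1.0 - math.exp(-R * t / L))

emf, R, L = 9.0, 3.0, 1e-6   # volts, ohms, henries (a tiny parasitic L)
tau = L / R                  # time constant
for t in (0.0, tau, 5 * tau):
    print(f"t = {t:.2e} s   i = {rl_current(t, emf, R, L):.4f} A")
# i(0) = 0; i -> emf/R = 3 A within a few time constants.
```

With parasitic inductances in the nanohenry-to-microhenry range, the rise is over in nanoseconds to microseconds, which is why the current appears to reach $\epsilon/R$ "immediately" on everyday timescales.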
{ "domain": "physics.stackexchange", "id": 39697, "tags": "electromagnetism, electricity, magnetic-fields, electric-circuits, inductance" }
Number of Different AVL Tree
Question: I am studying the related question https://stackoverflow.com/questions/13500560/number-of-ways-to-create-an-avl-tree-with-n-nodes-and-l-leaf-node but it's not general enough. In fact, we want to know: with N keys, how many different AVL trees can we make? We know that with N=1 key there is 1 AVL tree, and with N=2 keys there are 2 different AVL trees, but can we give a recurrence formula in general, for example for N=4, N=5 and so on? Answer: Let $a_{n,h}$ denote the number of AVL trees with $n$ nodes and height $h$. It is straightforward to get a recurrence for $a_{n,h}$: $$a_{n,h} = \sum_{k=1}^n \bigl(a_{k-1,h-1}a_{n-k,h-1} + a_{k-1,h-1}a_{n-k,h-2} + a_{k-1,h-2}a_{n-k,h-1}\bigr),~ n\geq h > 1,$$ with the initial conditions $a_{n,h} = 0$, if $h>n$ or $h\in\{0,1\}, n\neq h$, and $a_{0,0} = a_{1,1} = 1$. The numbers you are looking for then are $a_n = \sum_h a_{n,h}$. Unfortunately there apparently is no known closed form for this sequence. The OEIS has a list of the first 1000 terms and Maple + Mathematica code to compute further ones (via the recursion).
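The recurrence above drops straight into a memoized function (the "avl_count" helper name is mine); its totals for $N = 1, \dots, 5$ come out as 1, 2, 1, 4, 6, consistent with the small cases discussed:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def a(n, h):
    """Number of AVL trees with n nodes and height h (the answer's recurrence)."""
    if n == 0 and h == 0:
        return 1
    if n == 1 and h == 1:
        return 1
    if h > n or h <= 1:
        return 0
    total = 0
    for k in range(1, n + 1):  # the k-th smallest key is the root
        total += (a(k - 1, h - 1) * a(n - k, h - 1)
                  + a(k - 1, h - 1) * a(n - k, h - 2)
                  + a(k - 1, h - 2) * a(n - k, h - 1))
    return total

def avl_count(n):
    """a_n = sum over h of a_{n,h}."""
    return sum(a(n, h) for h in range(n + 1))

print([avl_count(n) for n in range(1, 6)])  # [1, 2, 1, 4, 6]
```

Note that a 3-node chain has height 3 and balance factor 2 at the root, so it is rightly excluded: a(3, 3) = 0.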
{ "domain": "cs.stackexchange", "id": 2910, "tags": "algorithms, graphs, data-structures, trees, binary-trees" }
Angular momentum representation
Question: It is well know that, using position representation $$\langle r\lvert L\rvert \psi\rangle =r \times (-i\hbar\nabla\langle r|\psi\rangle )=r \times (-i\hbar\nabla\psi(r)).$$ However, I read from some books that if $L$ is acting on some position ket directly, then $$L|r\rangle ~=~ r \times (i\hbar\nabla|r\rangle).$$ Can anyone explain the latter equation regarding the missing "-" sign? Answer: As Richard points out, you can derive the second equation by setting $\psi$ to be a position eigenstate in the first one. Doing that, you turn the general case $$\langle \mathbf{r}\lvert \mathbf{L}\rvert \psi\rangle =\mathbf{r} \times (-i\hbar\nabla\langle \mathbf{r}|\psi\rangle )$$ into the relation $$\langle \mathbf{r}\lvert \mathbf{L}\rvert \mathbf{r}'\rangle =\mathbf{r} \times (-i\hbar\nabla_\mathbf{r}\langle \mathbf{r}|\mathbf{r}'\rangle) =\mathbf{r} \times \left(-i\hbar\nabla_\mathbf{r}\delta(\mathbf{r}-\mathbf{r}')\right). $$ In here, you can change the $\mathbf{r}$'s into $\mathbf{r}'$s using the fact that both vectors are equal at the support of the delta function. Thus you can change $\mathbf{r}\times$ for $\mathbf{r}'\times$, but the derivative is a bit trickier: since the argument of the delta fuction is $\mathbf{r}-\mathbf{r}'$, its derivatives w.r.t. $\mathbf{r}$ differ from its derivatives w.r.t. $\mathbf{r}'$ by a sign, and you must change $\nabla_\mathbf{r}$ for $-\nabla_{\mathbf{r}'}$. With this, then, $$\langle \mathbf{r}\lvert \mathbf{L}\rvert \mathbf{r}'\rangle =\mathbf{r} \times \left(-i\hbar\nabla_\mathbf{r}\delta(\mathbf{r}-\mathbf{r}')\right) =\mathbf{r}' \times \left(+i\hbar\nabla_{\mathbf{r}'}\delta(\mathbf{r}-\mathbf{r}')\right) =\mathbf{r}' \times \left(+i\hbar\nabla_{\mathbf{r}'}\langle \mathbf{r}|\mathbf{r}'\rangle\right). $$ Once it is in this form, you simply have a global factor of $\langle\mathbf{r}|$, which you can simply "cancel out". 
(More formally, since the $|\mathbf{r}\rangle$ are a complete set, the projections on the $\langle \mathbf{r}|$ completely determine any vector. Or, if you prefer, simply multiply the equation by $|\mathbf{r}\rangle$ and integrate over all $\mathbf{r}$.) Doing that, then, and dropping the primes, you get, finally $$\mathbf{L}\rvert \mathbf{r}\rangle =\mathbf{r} \times \left(+i\hbar\nabla_{\mathbf{r}}|\mathbf{r}\rangle\right) \tag1$$ as you wanted to get. I must say, though that this relation is not particularly useful. What is useful, though, is its adjoint relation, which you can get from the original $$ \langle \mathbf{r}\lvert \mathbf{L}\rvert \psi\rangle =\left(\mathbf{r} \times (-i\hbar\nabla)\langle \mathbf{r}|\right)|\psi\rangle $$ by simply "cancelling out" $|\psi\rangle$. (Or, more formally, by noting that the linear functionals on both sides coincide for all $|\psi\rangle$, and must therefore be equal as linear functionals.) This gives simply $$ \langle \mathbf{r}\lvert \mathbf{L} =\mathbf{r} \times (-i\hbar\nabla_\mathbf{r})\langle \mathbf{r}|, \tag 2$$ which is evidently the adjoint of (1). (What's remarkable is that the vector calculus remains valid.) The reason I say that this is the form that's actually useful is that you very, very rarely deal with position ket $|\mathbf{r}\rangle$, as they are very much not physical states, but you do deal regularly with position bras $\langle \mathbf{r}|$, as they are an essential ingredient in well-written position representations. The form (2) then lets you find the position-representation wavefunction of the transformed vector $\mathbf{L}|\psi\rangle$ from the original wavefunction $\langle \mathbf{r}|\psi\rangle$. This is analogous to the way to make precise the intuition that $\mathbf{p}$ equals the derivative $-i\hbar \nabla$, by considering its actions on bras instead of kets, to get $$\langle \mathbf{r}|\mathbf{p}=-i\hbar\nabla_\mathbf{r}\langle \mathbf{r}|,$$ as I've said before in this answer. 
While this looks slightly unintuitive at first, it is actually more useful if you use it right.
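The step above that replaces $\nabla_\mathbf{r}$ by $-\nabla_{\mathbf{r}'}$ is just the chain rule applied to the argument $\mathbf{r}-\mathbf{r}'$; a quick numerical sanity check in 1D, using a narrow Gaussian as a smooth stand-in for the delta function:

```python
import math

def f(x, xp, eps=1e-2):
    """Narrow Gaussian in (x - x'), a smooth stand-in for delta(x - x')."""
    return math.exp(-((x - xp) ** 2) / eps)

def d_dx(x, xp, h=1e-6):
    """Central-difference derivative with respect to x."""
    return (f(x + h, xp) - f(x - h, xp)) / (2 * h)

def d_dxp(x, xp, h=1e-6):
    """Central-difference derivative with respect to x'."""
    return (f(x, xp + h) - f(x, xp - h)) / (2 * h)

x, xp = 0.30, 0.27
print(d_dx(x, xp), d_dxp(x, xp))  # equal magnitude, opposite sign
```

Because the function depends only on the combination $x - x'$, differentiating with respect to $x'$ flips the sign, exactly as in the delta-function manipulation above.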
{ "domain": "physics.stackexchange", "id": 10227, "tags": "quantum-mechanics, angular-momentum, operators, hilbert-space" }
Why do the melting points of Group 15 elements increase upto Arsenic but then decrease upto Bismuth?
Question: The boiling points of group 15 elements increase on going down the group (or, as size increases) but the same is not true for the melting points. The melting points increase from $\ce{N}$ to $\ce{As}$ and then decrease from $\ce{As}$ to $\ce{Bi}$. Why is this the case? Answer: A lower melting point is an indication of a less organised solid structure. So the question is actually: What in the structures of arsenic and phosphorus makes them less prone to melting? Let’s start off by separating nitrogen from the entire group. It is a prime example of a diatomic gas such as hydrogen with a very inert $\ce{N#N}$ triple bond. As throughout the periodic table, multiple bonds with s and p orbitals between atoms of the same element are magnitudes more stable in the second period than anywhere else due to much better p-orbital overlap (smaller size, smaller interatomic distance). Therefore, we can consider phosphorus, arsenic, antimony and bismuth as more or less similar, while nitrogen is the odd one out. Phosphorus has four stable allotropes with vastly different structures. The white phosphorus allotrope, $\ce{P4}$ molecules, basically falls into the same class as dinitrogen and melts at around 40 degrees. Red, purple and black phosphorus build up network-like structures in two dimensions, either in tube-like chains (red) or as ‘wavy graphite’ (black). Only the black modification is clearly defined. Red phosphorus has a melting point of around $600~\mathrm{^\circ C}$ while black phosphorus sublimes in vacuum at $400~\mathrm{^\circ C}$ — probably meaning that its melting point at standard pressure is much higher, maybe even higher than arsenic’s sublimation point. I found no references to liquid phosphorus, so I have no idea whether the bonds of the network structure need be broken to liquefy or not. Figure 1: crystal structure of black phosphorus, taken from Wikipedia.
Henceforth, we have established a structure element which we will be sticking to with only moderate modifications. The transition phosphorus — arsenic — antimony — bismuth is more or less the non-metal—metal transition. What explicitly changes is the distance between the sheets as shown in table 1. $$\textbf{Table 1: }\text{Comparison of bond lengths and inter-sheet contact distances for pnictogens}$$ $$\begin{array}{lrrrr} \hline \text{ } & \ce{P_{black}} & \ce{As} & \ce{Sb} & \ce{Bi} \\ \hline \ce{E-E}\text{-bond/pm} & 223.0 & 251.7 & 290.8 & 307.1 \\ \ce{E\bond{...}E}\text{-contact/pm} & - & 311.9 & 335.4 & 352.8 \\ \text{d}_\text{contact}/\text{d}_\text{bond} & \approx 1.5 & 1.239 & 1.153 & 1.149 \\ \hline \end{array}$$ So while the phosphorus structure can be described well as consisting of semi-isolated sheets stacked above each other; when we have reached bismuth, these sheets are much closer to an overall three-dimensional network. Therefore, bismuth atoms are in a distorted octahedric environment as other metals would be. Arsenic, where the sheets are still well separated, exhibits almost metallic conductivity perpendicular to the sheets. Putting this together and taking the extreme points, we can say: The phosphorus structure is highly organised in that two directions are strictly discernible. Melting along one of these is no problem (perpendicular to the sheets) but melting within a sheet is a huge problem as $\ce{P-P}$ bonds would need to be broken. The bismuth structure no longer has a true distinction of directions; all three axes are more or less equal. Therefore, it takes similar energies to break bonds in one direction as it does to break them in another direction meaning the overall process is easiest. Furthermore, liquid phases are generally those that display a high local ordering but a low long-range ordering. The high local ordering can be preserved in bismuth even if the long-range ordering present in the solid state is lost. 
Conversely, the local ordering in phosphorus is a lot stricter and cannot easily be transferred into a state where long-range ordering is low. Finally, note that non-metals whose solid phase does not consist of small, well defined molecules generally have higher melting points than metals. (Compare carbon's sublimation point of c. $3600~\mathrm{^\circ C}$.)
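The $\text{d}_\text{contact}/\text{d}_\text{bond}$ ratios in Table 1 can be recomputed directly from the listed distances; a quick check:

```python
def sheet_ratio(bond_pm, contact_pm):
    """Ratio of inter-sheet contact distance to intra-sheet bond length."""
    return round(contact_pm / bond_pm, 3)

# (E-E bond length, E...E inter-sheet contact) in pm, from Table 1
data = {
    "As": (251.7, 311.9),
    "Sb": (290.8, 335.4),
    "Bi": (307.1, 352.8),
}
for element, (bond, contact) in data.items():
    print(f"{element}: d_contact/d_bond = {sheet_ratio(bond, contact)}")
# The ratio falls toward 1 down the group: the sheets merge into a
# three-dimensional network, consistent with the metallic transition described.
```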
{ "domain": "chemistry.stackexchange", "id": 14466, "tags": "periodic-trends, melting-point, boiling-point" }
Why aren't cat hybrids common?
Question: Why aren't hybrids of different species of felinae (say, hybrid of domestic cat with manul, cheetah, ocelot, puma or panther) widespread? Why were they not artifically created? Answer: In order to form a hybrid, substantial genetic similarity between the organisms is required. To understand why, the successful formation of a zygote from the gametes of the two parents (i.e. fertilisation) in higher animals like mammals requires that the genomes of the organisms be reasonably similar (or homologous; see for instance human fertilisation). This is because successful development of the fetus appears to require that a very specific combination of genes and proteins be present and in the right relative ratios (it's helpful to remember that the genome is a highly analogue computer (see here also), not just a set of instructions to build an organism) - think of the significant effects of simply having an extra copy of one chromosome in Down syndrome, or the fact that triploidy or quadruploidy are often tickets to the death of the human embryo (and so having half of your genome from two different animals could lead to an imbalance in the proteins required to develop normally). I'll be addressing mainly the feasibility of obtaining a hybrid at all; of course in order to become widespread a hybrid would need to be fertile (although see the sterile mule) which would require even more genetic similarity, specifically in order that the chromosomes from each parent can interact, they have to be similar enough that the proteins regulating this process will recognize them as homologous, and enable a process like chromosomal crossover to take place during the meiosis occurring in gametogenesis. To answer your question about felinae and panthera first, several panthera genomes have been sequenced, and (see here) the tiger for instance has only 95.6% similarity with the domestic cat and diverged 10.8 MYA. 
For comparison, human and gorilla have 94.8% similarity and diverged 8.8 MYA, so that puts into perspective why you wouldn't expect a tiger-cat hybrid, or for that matter any felinae-panthera hybrids. Those dates are from the linked paper. (Edit: It appears from wikipedia as referred to by the OP above that there actually is a puma-leopard, i.e. felinae-panthera, hybrid which I wasn't aware of (although all of the reports are old, so there is no genetic evidence to prove that it was in fact a hybrid). I assume this may be an exception to the rule; what I was trying to show is why such hybrids are not necessarily to be expected, and as I said below, I'm more surprised about how many hybrids are possible. It does appear though that this particular hybrid is prone to dwarfism, which indicates that it probably does not have a particularly viable genetic makeup, and I think it was sterile). With regards to your question about felinae hybrids, if you go to Timetree, then it will tell you that Panthera tigris and Felis catus (domestic cat) diverged 14.4 MYA, and by comparison Aciconyx jubatus (cheetah) and Felis catus diverged 9.4 MYA, so it seems reasonable to suppose that they can't hybridise. (But note that P. tigris and P. leo still diverged 6.4 MYA and they can hybridise: the relatedness of two organisms isn't the only thing at work, and a better measure is the genetic similarity; this paper mentions that hybridisation ability correlates both with genetic similarity and time since divergence). Interestingly, this paper actually indicates that the percentage alignment of the cheetah genome to the domestic cat genome is only 91.1% with 93.6% similarity to the tiger, indicating that the cheetah may be a specialised outgroup. For a more familiar comparison to all of these, Canis lupus which is the ancestor of the domestic dog, dingo, and the gray wolf only arose around 700,000 YA according to wikipedia, although I do not know the exact sequence similarity. 
Domestic dogs (everything from chihuahuas to great Danes) only diverged 40,000 YA, so the different cats are only superficially similar by this comparison. There's a diagram on wikipedia shown below which nicely illustrates the different hybrids within felinae (although some seem dubious), so you can see there are actually quite a lot, and although most of the cats involved haven't been fully sequenced, presumably some of these hybrids cross some of the boundaries which I've mentioned above. They do actually include a successful hybrid between a domestic cat and an ocelot (although I can't find another source for this, so it might not be possible), and a hybrid with a jaguarundi (which appears to be closely related to the cheetah) is described as dubious here. Unfortunately, I don't know if either of those cats has been completely sequenced. Ultimately predicting hybridization is a complicated field. This image below shows a more detailed phylogeny of the cats to illustrate the evolutionary timescale: And for a comparison to all of that, this image below (from here) shows the phylogeny of the dog (which you might be more familiar with in terms of hybridisation), and if you realise that the whole part of the tree in B from the bush dog down diverged 9-10 MYA (see here - comparable to the whole cat family) and that the bush dog despite its name is extremely dissimilar to modern dogs (for instance it only has 74 chromosomes compared to the domestic dog's 78), then this will illustrate that the evolution of the modern hybridisable dog family (in C in the image) is tightly packed into a much much shorter time than the entire cat branch. All of the names like D1 on the right are modern dogs. With regards to your question about artificial creation, I think a point to remember is that a lot of these cats are rare/endangered and although such hybrids are still done, there is also criticism of such proceedings. 
Anyway, I think a good take-home point is that the cat family is actually very genetically old and diverse. I think it's more surprising how many cats can actually be hybridised.
{ "domain": "biology.stackexchange", "id": 6141, "tags": "zoology, hybridization, artificial-selection, feline" }
Peskin & Schroeder Chapter 3.1 EoM Lorentz Invariant under Lorentz Invariant Lagrangian
Question: From Peskin & Schroeder QFT page 35: The Lagrangian formulation of field theory makes it especially easy to discuss Lorentz invariance. And equation of motion is automatically Lorentz invariant by the above definition if it follows from a Lagrangian that is a Lorentz scalar. This is an immediate consequence of the principle of least action: If boosts leave the Lagrangian unchanged, the boost of an extremum in the action will be another extremum. Could anyone please help me translate the statement of this paragraph into a rigorous mathematical proof with symbols (and, in addition, to generalize it to proper orthochronous Lorentz transformations and not just boosts)? Maybe as warm up: for boosts, how does one show that the boost of an extremum in the action will be another extremum? Answer: In a (classical) lagrangian field theory, the configuration space $\mathcal C$ of the system is a space of field configurations. A field configuration (or just "field" for short) is usually taken to be a function $\phi:M\to T$ where $M$ is a manifold and $T$ is some set, often a manifold or vector space or both, called the target space of the field. The configuration space $\mathcal C$ is then taken to be some sufficiently smooth (when a notion of smoothness can be defined) subset of the set of all possible fields. The lagrangian is then a function $L:\mathcal C\to\mathcal C$; \begin{align} \phi\mapsto L[\phi] \end{align} Namely, the lagrangian maps a particular field configuration to another field configuration. Often, one considers a field theory for which the lagrangian can be written as a local density, but this is not strictly speaking necessary. The action of the theory can then be defined as the integral of $L[\phi]$ over $M$; \begin{align} S[\phi] = \int_M \, d^Dx\,L[\phi](x). \end{align} Note. My terminology and notation here are a bit non-standard in some contexts.
For example, in relativistic physics (field theory) the Lagrangian will usually map a field configuration $\phi$ to a function $L[\phi]$ of time, and then this function of time will be integrated to yield the action. It's not hard in practice to translate between conventions. One can then define what it means for the action to possess symmetry. In particular, given a mapping $F:\mathcal C\to \mathcal C$ of the manifold on which field configurations are defined to itself, one says that the action is invariant under $F$ provided \begin{align} S[F(\phi)] = S[\phi] \end{align} for all $\phi\in\mathcal C$. For "continuous" transformations, one can also define notions of symmetry that don't involve full invariance, but let's keep the discussion simple at this point. Example. A common toy theory considered as the first example in most relativistic field theory texts is that of a single, free, real Lorentz scalar defined on Minkowski space (I'll use metric signature $+ - - -$). In this case, we have \begin{align} M = \mathbb R^{3,1}, \qquad T = \mathbb R \end{align} and $\mathcal C$ is a space of sufficiently smooth functions $\phi:\mathbb R^{3,1}\to\mathbb R$ that satisfy certain desired boundary conditions. The Lagrangian of such a theory is \begin{align} L[\phi](x) = \mathscr L(\phi(x), \partial_0\phi(x), \dots \partial_3\phi(x)) \end{align} where $\mathscr L$ is the Lagrangian density defined by \begin{align} \mathscr L(\phi, \partial_0\phi, \dots, \partial_3\phi) &= \frac{1}{2}\partial_\mu\phi\partial^\mu\phi - \frac{1}{2}m^2\phi^2. \end{align} Given any Lorentz transformation $\Lambda$, one can define a transformation $F_\Lambda:\mathcal C\to\mathcal C$ as follows: \begin{align} F_\Lambda(\phi)(x) = \phi(\Lambda^{-1} x). \end{align} A short computation then shows that the Lagrangian is a Lorentz scalar under this transformation, namely \begin{align} L[F_\Lambda(\phi)](x) = L[\phi](\Lambda^{-1}x). 
\end{align} In fact, this is essentially done for you on page 36 of Peskin. It follows from this that the action is invariant under $F$; \begin{align} S[F_\Lambda(\phi)] = \int_{\mathbb R^{3,1}} d^4x\, L[\phi](\Lambda^{-1}x) = \int_{\mathbb R^{3,1}} d^4x\, L[\phi](x) = S[\phi]. \end{align} since the measure $d^4 x$ is Lorentz-invariant. Notice, in particular, that the fact that the Lagrangian transformed as a Lorentz scalar (namely precisely in the same way as the scalar field $\phi$ was defined to transform) immediately led to invariance of the action. Furthermore, suppose that $\phi$ is a field configuration that leads to stationary action, then we can also show that a Lorentz-transformed field leads to a stationary action using Lorentz invariance of the action. To see this, recall that the variational derivative in the direction of a field configuration $\eta$ is defined as follows: \begin{align} \delta_\eta S[\phi] = \frac{d}{d\epsilon}S[\phi+\epsilon\eta]\Big|_{\epsilon=0} \end{align} Now, suppose that $\phi$ is a stationary point of the action, namely that $\delta_\eta S[\phi] = 0$ for all admissible $\eta$, then for all such $\eta$ we have \begin{align} \delta_{F_\Lambda(\eta)} S[F_\Lambda(\phi)] &= \frac{d}{d\epsilon} S[F_\Lambda(\phi) + \epsilon F_\Lambda(\eta)]\Big|_{\epsilon = 0} \\ &= \frac{d}{d\epsilon} S[F_\Lambda(\phi+\epsilon\eta)]\Big|_{\epsilon = 0} \\ &= \frac{d}{d\epsilon} S[\phi+\epsilon\eta]\Big|_{\epsilon = 0} \\ &= 0 \end{align} Now set $\eta = F_\Lambda^{-1}(\xi)$, then the computation we just performed shows that \begin{align} \delta_{\xi} S[F_\Lambda(\phi)] =0. \end{align} for all admissible field configurations $\xi$. In other words, the Lorentz transformed scalar is also a stationary point of the action. Notice that this demonstration holds for any Lorentz transformation, not just boosts. Addendum. As pointed out in the comments, the argument at the end about variational derivatives hinges on linearity of $F_\Lambda$.
This can be demonstrated as follows: \begin{align} F_\Lambda(a\phi+b\psi)(x) &= (a\phi+b\psi)(\Lambda^{-1}x) \\ &= a\phi(\Lambda^{-1}x) + b\psi(\Lambda^{-1}x) \\ &= aF_\Lambda(\phi)(x) + bF_\Lambda(\psi)(x). \end{align} Let me make some remarks about the mapping $F:\mathcal C\to \mathcal C$; a symmetry of the action. If there exists a mapping $f_T:T\to T$ on the target space that induces this mapping, namely \begin{align} F(\phi)(x) = f_T(\phi(x)), \end{align} then $F$ is called an internal symmetry. On the other hand, if there is a mapping $f_M:M\to M$ on the base manifold $M$ that induces this mapping, namely \begin{align} F(\phi)(x) = \phi(f_M(x)), \end{align} then $F$ is called a base manifold symmetry (or more commonly a spacetime symmetry since in the context of relativistic field theory, the base manifold is a spacetime like Minkowski space.) Furthermore, the mapping $F:\mathcal C \to\mathcal C$ on the field configuration space is often, as in the scalar field example, a group action of some group $G$ on $\mathcal C$. This means that to each $g\in G$, we associate a mapping $F_g:\mathcal C\to \mathcal C$ such that the mapping $g\mapsto F_g$ is a homomorphism of the group $G$. In practice, the group $G$ is sometimes a group of symmetries that naturally acts on the base manifold, and sometimes $G$ is a group of symmetries that naturally acts on the target space (or even both, when $M=T$). In any event, this group action is usually obtained by composing a target space group action $(f_T)_g:T\to T$ with a base manifold group action $(f_M)_g:M\to M$.
More explicitly, for each $g\in G$, we can define mappings $(F_T)_g:\mathcal C\to \mathcal C$ and $(F_M)_g:\mathcal C\to\mathcal C$ as follows: \begin{align} (F_T)_g(\phi)(x) = (f_T)_g(\phi(x)), \qquad (F_M)_g(\phi)(x) = \phi((f_M)_g(x)) \end{align} and then the full group action $F_g:\mathcal C\to\mathcal C$ is defined by the composition of these two; \begin{align} F_g = (F_T)_g\circ (F_M)_g \end{align} or more explicitly \begin{align} F_g(\phi)(x) = (f_T)_g(\phi((f_M)_g(x))). \end{align} Now, this is all a bit abstract, so let's write out what all of these objects would be for the scalar field example: \begin{align} G &= \mathrm{SO}(3,1) \\ g &= \Lambda\\ (f_T)_g(\phi(x)) &= \phi(x) \\ (f_M)_g(x) &= \Lambda^{-1}x \\ (F_T)_g(\phi)(x) &= \phi(x) \\ (F_M)_g(\phi)(x) &= \phi(\Lambda^{-1}x) \\ F_g(\phi)(x) &= \phi(\Lambda^{-1}x) \end{align} Notice that $f_T$ is simply the identity mapping on the target space. This is precisely what we mean when we say that the scalar field is a Lorentz scalar. On the other hand, for a Lorentz vector, the target space itself would be Minkowski space $\mathbb R^{3,1}$, and the target space group action would be \begin{align} (f_T)_\Lambda(A(x)) = \Lambda A(x), \end{align} namely, there is an internal symmetry in which the vector indices on the field transform non-trivially. In components, which is how you'll see this written in Peskin for example, the right hand side would be written as $\Lambda^\mu_{\phantom\mu\nu} A^\nu(x)$.
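The two computational facts the answer leans on -- that a Lorentz transformation satisfies $\Lambda^T\eta\Lambda=\eta$, and that the measure $d^4x$ is invariant because the Jacobian of $x\mapsto\Lambda^{-1}x$ has unit magnitude -- are easy to check numerically. A minimal sketch (the helper name `boost_x` is my own):

```python
import numpy as np

eta = np.diag([1.0, -1.0, -1.0, -1.0])  # Minkowski metric, signature (+,-,-,-)

def boost_x(beta):
    """Standard Lorentz boost along x with velocity beta (units with c = 1)."""
    gamma = 1.0 / np.sqrt(1.0 - beta**2)
    B = np.eye(4)
    B[0, 0] = B[1, 1] = gamma
    B[0, 1] = B[1, 0] = -gamma * beta
    return B

B = boost_x(0.6)
# Defining property of a Lorentz transformation: Lambda^T eta Lambda = eta
print(np.allclose(B.T @ eta @ B, eta))            # True
# det Lambda = 1, so the change of variables x -> Lambda^{-1} x leaves d^4x invariant
print(np.isclose(np.linalg.det(B), 1.0))          # True
```

The same checks pass for rotations and products of rotations and boosts, i.e. for any proper orthochronous Lorentz transformation.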
{ "domain": "physics.stackexchange", "id": 10419, "tags": "quantum-field-theory, special-relativity, lagrangian-formalism" }
interpretation of $\{H,L^2\}$
Question: In Hamiltonian mechanics, we show $\{H,L_z\}=0$, which can be interpreted as the conservation of angular momentum around $Oz$. Following the same idea, how can we interpret $\{H,L^2\}$? Is the interpretation the same (or only similar) as in quantum mechanics for $[H,L_z]$ and $[H,L^2]$? Answer: Dear Isaac, yes, $\{H,L^2\}=0$ holds for spherically symmetric Hamiltonians and it means that the magnitude of the angular momentum will be conserved in time. More precisely, the squared magnitude of the angular momentum is conserved, but classically it's the same thing. The time derivative of any observable, whether it's composite or not, is given by its Poisson bracket with the Hamiltonian - all these things are just a classical limit of "commutators" (Poisson brackets) with the corresponding "operators" (observables). If $\{H,L_x\}=\{H,L_y\}=\{H,L_z\}=0$, then one may also prove $\{H,L^2\}=0$. This is simplest to prove in the quantum mechanical language. If $H$ commutes with $L_z$, it also commutes with $L_z^2$, and similarly it commutes with $L_x^2$ and $L_y^2$ - and with the sum of these three terms, which is $L^2$. The interpretation of classical physics and quantum mechanics is always different, but the maths of the Poisson brackets directly reflects the quantum commutators. In quantum mechanics, all these things can only be measured probabilistically and so on; this portion of quantum mechanics is universal. In the case of the angular momentum, it's useful to talk about $L^2$ and $L_z$ in quantum mechanics - and we usually no longer add $L_x$ or $L_y$. The pair $L^2$ and $L_z$ commutes with each other, $[L^2,L_z]=0$, but commutators such as $[L_x,L_z]$ are not zero but rather $-i\hbar L_y$, and so on. Also, $L^2$ has eigenvalues which are always a bit greater than the maximum eigenvalue of $L_z^2$: you can't ever rotate a nonzero angular momentum exactly into the $z$-direction.
Equivalently, the uncertainty principle always guarantees that $L_x$ and $L_y$ can't be simultaneously zero when $L_z$ is nonzero. The eigenvalues of $L^2$ are $l(l+1)\hbar^2$ for integer values of $l$, while the eigenvalues of $L_z$ are $m\hbar$ for integer $m$ between $-l$ and $+l$, where the latter $l$ is the same $l$ used in the formula for the eigenvalue of $L^2$.
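Both classical brackets can be verified symbolically for a generic spherically symmetric Hamiltonian $H = p^2/2 + V(r)$ (unit mass for simplicity; the function name `pb` and the setup are my own):

```python
import sympy as sp

x, y, z, px, py, pz = sp.symbols('x y z p_x p_y p_z')
q, p = [x, y, z], [px, py, pz]
r = sp.sqrt(x**2 + y**2 + z**2)
V = sp.Function('V')                       # arbitrary central potential
H = (px**2 + py**2 + pz**2) / 2 + V(r)     # spherically symmetric Hamiltonian

Lx = y*pz - z*py
Ly = z*px - x*pz
Lz = x*py - y*px
L2 = Lx**2 + Ly**2 + Lz**2

def pb(A, B):
    """Canonical Poisson bracket {A, B} in the coordinates (q, p)."""
    return sum(sp.diff(A, q[i])*sp.diff(B, p[i])
               - sp.diff(A, p[i])*sp.diff(B, q[i]) for i in range(3))

print(sp.simplify(pb(H, Lz)))  # 0
print(sp.simplify(pb(H, L2)))  # 0
```

The cancellation happens identically in $V'(r)$, i.e. it holds for any central potential, which is the classical statement that both $L_z$ and $L^2$ are conserved.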
{ "domain": "physics.stackexchange", "id": 1029, "tags": "quantum-mechanics, hamiltonian-formalism" }
Finding the percent volume of each gas in the gas mixture
Question: A gas mixture of $\ce{H2}$ and $\ce{N2}$ weighs $\pu{2.00 g}$ and has a volume of $\pu{10.0 L}$ at $\pu{700 mmHg}$ and $\pu{63.0 °C}$. Calculate the vol.% and the partial pressures of the two gases in the mixture. I have only found the total number of moles using the formula $PV=nRT$, but I can't get any further! Any ideas? Answer: Let's call $x$ the number of moles of $\ce{H2}$, and $y$ the number of moles of $\ce{N2}$. The mass of the gas mixture is: $$2x + 28y = \pu{2 g}$$ The total number of moles is $$n = x + y = \frac{PV}{RT} = \frac{700}{760}\times\pu{101325 Pa}\times\frac{\pu{0.01 m3}}{\pu{8.314 J K-1 mol-1}\times\pu{336 K}} = \pu{0.3341 mol}$$ Here you have two equations in two unknowns, which can be solved by substituting $x = 0.3341 - y$. The first equation then yields: $$2 = 2x + 28y = 0.6682 - 2y + 28y = 0.6682 + 26y$$ $$y = \frac{2 - 0.6682}{26} = \pu{0.0512 mol}$$ $$x = \pu{0.3341 mol} - y = \pu{0.3341 mol} - \pu{0.0512 mol} = \pu{0.2829 mol}$$ $$\frac{x}{n} = \frac{0.2829}{0.3341} = 0.8467$$ So the mole fraction of $\ce{H2}$ in the mixture is $84.67$%. If you want the volume percent of $\ce{H2}$, you must calculate the volume of $\ce{H2}$ alone, which is $$V(\ce{H2}) = xRT/P = 0.2829\,RT/P$$ As the total volume is $V = nRT/P = 0.3341\,RT/P$, the volume percent of $\ce{H2}$ is $\frac{0.2829}{0.3341} = 84.67$% (for ideal gases, mole fraction and volume fraction coincide). By the same reasoning, the partial pressure of $\ce{H2}$ is $84.67$% of $\pu{700 mmHg}$, which is $\pu{592.7 mmHg}$.
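The arithmetic in the answer can be reproduced in a few lines (a sketch; the variable names are mine):

```python
# Ideal-gas bookkeeping for the H2/N2 mixture (units: SI, pressure in Pa).
R = 8.314                 # J K^-1 mol^-1
T = 336.0                 # K  (63 °C, rounded as in the answer)
Vol = 0.010               # m^3 (10.0 L)
P = 700 / 760 * 101325    # Pa  (700 mmHg)

n = P * Vol / (R * T)     # total moles from PV = nRT
# mass balance 2x + 28y = 2 g together with mole balance x + y = n
y = (2 - 2 * n) / 26      # mol N2
x = n - y                 # mol H2

print(round(n, 4))        # 0.3341
print(round(100 * x / n, 2))   # 84.67  -> vol% H2
print(round(700 * x / n, 1))   # 592.7  -> partial pressure of H2 in mmHg
```

The N2 numbers follow immediately as the complements: $15.33$% and $\pu{107.3 mmHg}$.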
{ "domain": "chemistry.stackexchange", "id": 15119, "tags": "inorganic-chemistry, gas-laws" }
Moment of Inertia of an Equilateral Triangular Plate
Question: I was reading about moment of inertia on Wikipedia and thought it was weird that it had common values for shapes like the tetrahedron and cuboid but not triangular prisms or triangular plates, so I tried working it out myself. I will post my attempt below, but for some reason I cannot find any source online that confirms or denies my solution. Please let me know if you find anything wrong with it. Thanks. Q: What is the moment of inertia of an equilateral triangular plate of uniform density $\rho$, mass $M$, side length $L$, rotating about an axis perpendicular to the triangle's plane and passing through its center? First I modeled an equilateral triangle using three lines with its center of geometry at the origin as follows: $x=\frac{1}{\sqrt{3}}y-\frac{1}{3}L \\ x=\frac{1}{3}L-\frac{1}{\sqrt{3}}y \\ y=-\frac{\sqrt{3}}{6}L$ I used the fact that the circumradius of an equilateral triangle is $\frac{\sqrt3}{3}L$ and that its height is $\frac{\sqrt{3}}{2}L$ . Next, using the definition of moment of inertia ($I$) and with the help of Wolfram Alpha, I obtained the following result: $$I=\int r^2 dm=\rho \int r^2 dA\\ =\rho \int_{-\frac{\sqrt{3}}{6}L}^{\frac{\sqrt{3}}{3}L} \int_{\frac{1}{\sqrt{3}}y-\frac{1}{3}L}^{\frac{1}{3}L-\frac{1}{\sqrt{3}}y} x^2+y^2 dxdy\\ =\frac{\rho}{16 \sqrt{3}}L^4=(\frac{4M}{\sqrt{3} L^2})(\frac{L^4}{16\sqrt{3}})\\ =\frac{1}{12}ML^2$$ Answer: I can confirm your result. I can also suggest you a neater way to derive it inspired by David Morin - Introduction to Classical Mechanics, check it out in a library if you have access. The main idea is to use the symmetry of the equilateral triangle and split it into 4 smaller equilateral triangles like this Now analyse how the moment of inertia changes when we rescale its mass and sidelength, i.e.
if $ i = \alpha ml^2$ and $L = 2l, M = 4m$, then $I = \alpha (4m) (2l)^2 = 16i$, where $m$ is the mass of a small triangle, $l$ is the sidelength of a small triangle and the capital $M, L$ correspond to the larger triangle. But the moment of inertia of the big triangle can be also split into $4$ moments of inertia. Be aware that we need to use the parallel axis theorem for the $3$ triangles which enclose the central triangle. Hence, $$I = 16i = 4i + 3m\left(\frac{l\sqrt{3}}{3}\right)^2,$$ which reduces to $i = \frac{1}{12} m l^2.$
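The double integral from the question can also be checked symbolically; a sketch:

```python
import sympy as sp

x, y, L, rho = sp.symbols('x y L rho', positive=True)

# Same region as in the question: equilateral triangle of side L, centroid
# at the origin, axis perpendicular to the plane through the centroid.
inner = sp.integrate(x**2 + y**2,
                     (x, y/sp.sqrt(3) - L/3, L/3 - y/sp.sqrt(3)))
I = rho * sp.integrate(inner, (y, -sp.sqrt(3)*L/6, sp.sqrt(3)*L/3))

M = rho * sp.sqrt(3) / 4 * L**2        # mass = density * area of the plate
print(sp.simplify(I / (M * L**2)))     # 1/12
```

So $I = \frac{1}{12}ML^2$, agreeing with both the direct integration and the scaling argument.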
{ "domain": "physics.stackexchange", "id": 48390, "tags": "homework-and-exercises, newtonian-mechanics, geometry, moment-of-inertia" }
Entanglement entropy of 1D chiral Fermion
Question: I was told that the entanglement entropy $S_E$ on the ground state of a (1+1)D conformal field theory (CFT) follows the logarithmic behavior $S_E=\frac{c}{12}\ln L$ where $L$ is the length scale between the entanglement cuts. I do not know how the CFT works, so I would like to convince myself by starting from a special case, say the chiral (and free) fermion. My question is how to calculate the entanglement entropy of a 1D chiral fermion using the 2nd quantization language, without referring to bosonization or mapping to CFT? Here is my attempt to approach the problem. Suppose we have a chiral fermion chain described by the Hamiltonian $H=\sum_k k c_k^\dagger c_k$. Consider the ground state (at zero temperature): it would be $|\psi\rangle=\prod_{k<0}c_k^\dagger |0\rangle$. The density matrix can be constructed from the ground state as $\rho=|\psi\rangle\langle\psi|$. Then I should make entanglement cuts to separate the system into sectors A and B. Tracing out the fermion degrees of freedom in B gives the reduced density matrix $\rho_A=\mathrm{Tr}_B\rho$. Then I was supposed to diagonalize $\rho_A$ to find the entanglement spectrum and evaluate the entanglement entropy. But when I tried to work out the details, I got stuck in the last several steps. Let me illustrate what I have obtained so far. First, to understand the structure of $\rho$, I started from the correlation function, and found $$\begin{split} \mathrm{Tr}\rho c_{x_1}^\dagger c_{x_2} &= \langle c_{x_1}^\dagger c_{x_2} \rangle\\ &=\sum_{k_1 k_2}\langle c_{k_1}^\dagger c_{k_2}\rangle e^{i(k_2 x_2-k_1x_1)}\\ &=\sum_{k<0} e^{ik(x_2-x_1)}\\ &\simeq \frac{i}{x_1-x_2},\end{split}$$ where $x_1$ and $x_2$ are two spatial coordinates restricted to the sector A.
So on the one-particle subspace, the density matrix should be $$\rho_{A1}=\int_0^L\mathrm{d}x_1\int_0^L\mathrm{d}x_2\;c_{x_1}^\dagger\frac{i}{x_1-x_2}c_{x_2}.$$ I also noticed that in the zero particle subspace, the density matrix is simply the identity $\rho_{A0}=1$. So I would generalize that in the $n$ particle space, the density matrix should be $\rho_{An}=\rho_{A1}^n$. (Let me know if I am wrong here.) Then the reduced density matrix would be $$\rho_A=\sum_{n=0}^{\infty}\rho_{An}=(1-\rho_{A1})^{-1}.$$ So here is where I stopped. I cannot figure out how to diagonalize the reduced density matrix $\rho_A$. Even for $\rho_{A1}$, I do not know how to deal with it. The entanglement cuts break the spatial translational symmetry, and I cannot do the diagonalization by Fourier transform to momentum space. Even if I try some numerics by discretization, the eigenvalues vary from negative to positive, and I cannot figure out a clue. I would appreciate it very much if anyone could help me out from here. Answer: A useful reference is http://arxiv.org/pdf/0906.1663.pdf, by Peschel and Eisler. A common approach is to make use of the fact that the two-point function you calculated is independent of whether one uses the full density matrix or the reduced density matrix, provided one looks at operators that are local to the region that one is not tracing over. If one then assumes that the reduced density matrix is also Gaussian, there is a simple relationship that can be constructed between the eigenvalues of the reduced density matrix and the eigenvalues of the two-point function, treated as a matrix indexed by position. (See equation (17) in the reference above.) To actually use this approach, I would put the fermion on a lattice, which introduces the usual fermion doubling problem. So I would not be able to study chiral fermions...
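As a concrete illustration of the correlation-matrix method in that reference (applied to a non-chiral lattice chain, since, as noted, a lattice regularization doubles the chiral fermion), here is a sketch; all parameter choices are mine. The entanglement spectrum comes from the eigenvalues $\nu$ of the two-point function $C_{ij}=\langle c_i^\dagger c_j\rangle$ restricted to region A:

```python
import numpy as np

N, LA = 200, 40                        # chain length and subsystem size (my choice)

# Nearest-neighbour hopping Hamiltonian on an open chain
H = np.zeros((N, N))
for i in range(N - 1):
    H[i, i + 1] = H[i + 1, i] = -1.0

eps, U = np.linalg.eigh(H)
occ = U[:, eps < 0]                    # half filling: occupy negative-energy modes
C = occ @ occ.conj().T                 # C_ij = <c_i^dagger c_j> in the ground state

# Restrict C to region A and diagonalize (Peschel-Eisler, eq. (17) route)
nu = np.linalg.eigvalsh(C[:LA, :LA])
nu = nu.clip(1e-12, 1 - 1e-12)         # guard the logs against rounding
S = -np.sum(nu * np.log(nu) + (1 - nu) * np.log(1 - nu))
print(S > 0)                           # True: a finite entanglement entropy
```

Increasing `LA` shows the expected logarithmic growth $S \sim \frac{c}{6}\ln L_A$ for a single cut of a $c=1$ chain, which is the lattice counterpart of the CFT formula in the question.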
{ "domain": "physics.stackexchange", "id": 15115, "tags": "condensed-matter, quantum-entanglement, fermions, second-quantization" }
Which ring expansion in cyclobutyl(cyclopropyl)methanol is favourable?
Question: Predict major product: In this question, first I've protonated the OH group, and then water is removed to form a carbocation. Now I have to expand the ring, but I'm confused which ring to choose: the 3-membered ring or the 4-membered ring? Note: I can make an alkene (which would be the final product of the reaction) by removing an alpha hydrogen if it is known which ring to expand. Answer: It is clear to me that no matter which ring you open first, you'd get the same product, A. Michael Lautman gave the correct product, first opening the cyclopropyl ring. Michael's mechanism is fair, but I'm reluctant to say it is the preferred path because it gives you a bicyclobutyl $2^\circ$ carbocation as an intermediate, which needed an extra 1,2-hydride shift to get stabilized as a $3^\circ$ carbocation (a total of 5 steps). However, if you had opened the cyclobutyl ring first, you'd get the same final product A through a relatively low-energy cyclopropylcyclopentane carbocation (also a total of 5 steps), even though it also needed a 1,2-hydride shift (in a relatively more stable cyclopentyl $2^\circ$ carbocation, compared to the $2^\circ$ carbocation in the cyclobutyl ring in Michael's mechanism) to get stabilized as a $3^\circ$ carbocation. The complete mechanism is depicted below: This mechanism is supported by the following reference: G. K. Surya Prakash, V. Prakash Reddy, G. Rasul, J. Casanova, G. A. Olah, “The Search for Persistent Cyclobutylmethyl Cations in Superacidic Media and Observation of the Cyclobutyldicyclopropylmethyl Cation,” J. Am. Chem. Soc. 1998, 120(51), 13362–13365 (https://doi.org/10.1021/ja9828962).
{ "domain": "chemistry.stackexchange", "id": 12093, "tags": "organic-chemistry, carbocation, rearrangements" }
Calculi for a computability class
Question: Proving two pushdown automata equivalent is undecidable. But proving two finite state machines equivalent is decidable. You also cannot write a programming language that allows expressing the complete set of pushdown automata, with nothing more powerful. First question: Can a language exist that expresses up to the power of finite state machines but nothing higher? Next. Working at the power of Turing machines we have the lambda calculus which with a little bit of sugar is a nice programming language. In addition there are weaker forms of the lambda calculus that are not Turing complete such as the simply typed lambda calculus. Second question: What calculi exist that cannot express problems above finite state machines, but a programming language could be built on top of? Kind of like the lambda calculus. Grammars can be restricted to only allow regular languages. This of course does not change the fact that context free grammars can represent regular languages. Third question: Even though context free grammars allow the expression of regular languages, do the methods used to check if a grammar is regular allow the expression of all regular languages? Such as those talked about in "https://en.wikipedia.org/wiki/Regular_grammar". These questions are all directly related. Pardon how convoluted this is. Answer: You are asking several questions. Let me answer them one by one, though not in the same order. Even though context free grammars allow the expression of regular languages, do the methods used to check if a grammar is regular allow the expression of all regular languages? You are misunderstanding the article. One does not check that a grammar is regular. Rather, we define a grammar as regular if it satisfies the constraints described in the article. Regular grammars indeed generate all (and only) regular languages, and it's not too hard to show this by interpreting them as NFAs (exercise).
As an aside, the following is undecidable: given a context-free grammar, check whether the language it generates is regular. Can a language exist that expresses up to the power of finite state machines but nothing higher? Yes, for example the language of regular expressions, the language of DFAs, the language of NFAs, the language of regular grammars, and so on. What calculi exist that cannot express problems above finite state machines, but a programming language could be built on top of. Kind of like the lambda calculus. This question is not really well defined, and it is very easy to cheat. A reasonably natural example is finite automata – if you add two stacks you get a model having the same power as a full-fledged Turing machine.
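To illustrate the decidability of finite-state-machine equivalence mentioned in the question, here is a sketch of the standard product-construction procedure (the dictionary encoding of DFAs is my own; transition functions are assumed total):

```python
from collections import deque

def equivalent(start1, acc1, d1, start2, acc2, d2, alphabet):
    """Decide whether two DFAs accept the same language by a BFS over the
    product automaton: they differ iff some reachable state pair disagrees
    on acceptance. Each d is a dict (state, symbol) -> state."""
    seen = {(start1, start2)}
    queue = deque([(start1, start2)])
    while queue:
        p, q = queue.popleft()
        if (p in acc1) != (q in acc2):
            return False          # a distinguishing word reaches this pair
        for a in alphabet:
            nxt = (d1[(p, a)], d2[(q, a)])
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return True

# "even number of a's" vs. a renamed copy of itself -> equivalent
d_even = {('e', 'a'): 'o', ('o', 'a'): 'e'}
d_copy = {('x', 'a'): 'y', ('y', 'a'): 'x'}
print(equivalent('e', {'e'}, d_even, 'x', {'x'}, d_copy, ['a']))  # True
# "odd number of a's" vs. "even number of a's" -> not equivalent
print(equivalent('e', {'o'}, d_even, 'x', {'x'}, d_copy, ['a']))  # False
```

The BFS visits at most |Q1|·|Q2| state pairs, so the procedure always terminates — exactly the property that fails for pushdown automata.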
{ "domain": "cs.stackexchange", "id": 6023, "tags": "computability, finite-automata, formal-grammars, programming-languages, pushdown-automata" }
Does the current size of the cosmological sound horizon play a relevant role in the universe?
Question: I am doing some interactive plots about cosmological horizons and in my research I stumbled upon the sound horizon, the baryonic acoustic oscillations and how it had an impact on the formation of the first structures in the universe. Some people have calculated the radius of the sound horizon at the time of recombination using the same equation used in the particle horizon but with the speed of "sound", which itself is a function of the density of matter and radiation. But I've never seen anyone actually plot the radius into the future like some have done with other cosmological horizons. Is there a reason for that? Does it not play an important role in the universe anymore? Was it only important at the time of recombination? Answer: The sound horizon is the same today as at recombination. Before the time of recombination, the universe was full of free protons and electrons. Due to the electric charges of these particles, collisions were frequent. Via these collisions, sound waves were able to propagate. The idea of a sound wave is that a region of excess density exerts pressure on surrounding regions, squeezing them and making them exert pressure on their surroundings in turn. But this connection between density and pressure requires frequent particle collisions. During recombination, the free protons and electrons combined to make atoms. Collisions essentially stopped, since the atoms are electrically neutral. So the sound waves froze in place. Technically, they froze in comoving coordinates; the structure left by the sound waves continued to expand, following the expansion of the universe, but the sound waves stopped propagating. This is observationally confirmed since 2005, when astronomers detected these sound waves in the spatial distribution of galaxies. The sound horizon is, in comoving units, the same in the galaxy distribution as in the cosmic microwave background.
{ "domain": "physics.stackexchange", "id": 97188, "tags": "cosmology, acoustics, space-expansion, event-horizon" }
Why does the moon sometimes appear out-of-place?
Question: Quite often I go out in the morning and I'm in Milton Keynes, so I would expect the moon to rise in the east and set in the west. Sometimes at about 2, 3, 4 o’clock in the morning the moon is low in the east. I was just wondering how that worked out? Answer: Well, let’s zoom out for a bit and imagine you're off of the Earth and you’re looking at the Earth and the moon from space. So you have the Earth as the bigger of the two bodies sitting let’s say, in the centre and the moon is in orbit around the Earth. So the moon goes around the Earth and the moon takes a month to do a complete lap of the Earth and get back to where it started, 28 days to do a complete orbit of Earth. Also, inside the moon’s orbit, the Earth is turning and the Earth takes 24 hours to do a complete circle. So therefore, as the Earth turns then it’s going to see the moon from one side of the Earth go across the sky and then down on the other side. So, you're going to see the moon rise and set. But because the moon is also doing a lap around the Earth, the moon is going to appear at different points in the sky at different times of the day and night. So sometimes the moon will be up during the day.
{ "domain": "physics.stackexchange", "id": 5611, "tags": "moon" }
Pairing of electrons in Nickel in presence of strong ligand(tetrahedral)
Question: For tetrahedral CFSE, Nickel's last 6 electrons will go in the t2g set of orbitals. Since it is already the higher energy orbital, pairing should NOT happen, right? However, all internet sources I found for $\ce{Ni(CN)4^{2-}}$ show that pairing happens in the t2g. Shouldn't electrons not pair and follow Hund's rule?
{ "domain": "chemistry.stackexchange", "id": 18020, "tags": "coordination-compounds" }
Can an energy-momentum four vector include the quantities of all objects in a closed system?
Question: Say I have a particle moving along the $x$-axis in the Earth's reference frame. It decays into an upsilon and a proton, each of which has an energy of 60 GeV. They are traveling in opposite directions. The proton has a mass of 1 (or 1GeV/c^2) and the upsilon has a mass of 10 (or 10GeV/c^2). My question is; can I set the four-vector of the original particle as: $(E, Px, Py, Pz)$ And the four-vector of the decay particles as one general vector: $(E', Px', Py', Pz')$ Such that $E'=120$GeV, the total energy of the two decay particles? Or, to find the energy and momentum of each particle, would I have to have two separate four-vectors and calculate them using the inner product? Answer: Among the properties of vectors is that they have an addition operation, so you can certainly add two or more four-vectors together. More over that is a useful operation: the result represents the total energy and momentum of the system. But it goes one step further: the (invariant) mass of a system is found from the square of the system's four-momentum just like the (invariant) mass of a particle is found from the square of its four-momentum.
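Using the numbers from the question (energies in GeV, masses in GeV/c², units with $c=1$; treating "opposite directions" as motion along $\pm x$ is my reconstruction), the additivity of four-momenta and the invariant mass of the sum can be sketched as:

```python
import math

def four_momentum(E, m, direction):
    """Four-momentum (E, px, py, pz) for motion along x, with |p| from E^2 = p^2 + m^2."""
    p = math.sqrt(E**2 - m**2) * direction
    return (E, p, 0.0, 0.0)

def add(v, w):
    # component-wise sum: the four-momentum of the whole system
    return tuple(a + b for a, b in zip(v, w))

def invariant_mass(v):
    E, px, py, pz = v
    return math.sqrt(E**2 - px**2 - py**2 - pz**2)

proton  = four_momentum(60.0,  1.0, +1)   # moving in +x
upsilon = four_momentum(60.0, 10.0, -1)   # moving in -x
total = add(proton, upsilon)

print(total[0])                           # 120.0  -> total energy in GeV
print(round(invariant_mass(total), 2))    # 120.0  -> mass of the parent particle
```

So the single summed four-vector $(E', P'_x, P'_y, P'_z)$ with $E'=120$ GeV is legitimate, and its square directly gives the parent's invariant mass; you only need the separate four-vectors if you want each daughter's momentum individually.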
{ "domain": "physics.stackexchange", "id": 52482, "tags": "special-relativity, particle-physics, momentum, conservation-laws, inertial-frames" }
Joy to control servo
Question: Hi, I'm using Hydro. I've been working on and off with this code to control a servo using rosserial with an Arduino, using a PS3 controller and the Joy node. I wrote my code based off of this tutorial. Here is my code #include <ros/ros.h> #include <std_msgs/UInt16.h> #include <sensor_msgs/Joy.h> class Servo { public: Servo(); private: void joyCallback(const sensor_msgs::Joy::ConstPtr& joy); ros::NodeHandle nh_; // Joystick tuning params int linear_, angular_; double l_scale_, a_scale_; // Joystick dev to listen to ros::Subscriber joy_sub; // Robot bits to control ros::ServiceClient create_client; ros::ServiceClient servo_client; ros::Publisher deg_pub_; // I want to publish an angle as std_msgs/UInt16 ros::Subscriber joy_sub_; }; Servo::Servo(): linear_(1), angular_(2) { nh_.param("axis_linear", linear_, linear_); nh_.param("axis_angular", angular_, angular_); nh_.param("scale_angular", a_scale_, a_scale_); nh_.param("scale_linear", l_scale_, l_scale_); deg_pub_ = nh_.advertise<std_msgs::UInt16>("servo", 1); joy_sub_ = nh_.subscribe<sensor_msgs::Joy>("joy", 10, &Servo::joyCallback, this); } void Servo::joyCallback(const sensor_msgs::Joy::ConstPtr& joy) { std_msgs::UInt16 deg; deg = l_scale_*joy->axes[linear_]; deg_pub_.publish(deg); } int main(int argc, char** argv) { ros::init(argc, argv, "servo_joy"); Servo servo_joy; ros::spin(); } I get some errors when I try to build it. 
One is /home/donni/catkin_ws/src/rosberry_pichoptor/src/servo.cpp:55:35: error: no match for ‘operator=’ in ‘deg = (((Servo*)this)->Servo::l_scale_ * ((double)(& joy)->boost::shared_ptr<T>::operator-> [with T = const sensor_msgs::Joy_<std::allocator<void> >]()->sensor_msgs::Joy_<std::allocator<void> >::axes.std::vector<_Tp, _Alloc>::operator[] [with _Tp = float, _Alloc = std::allocator<float>, std::vector<_Tp, _Alloc>::const_reference = const float&, std::vector<_Tp, _Alloc>::size_type = unsigned int](((unsigned int)((Servo*)this)->Servo::linear_))))’ I guess this means you can't use = with UInt16? The second error is /opt/ros/hydro/include/std_msgs/UInt16.h:55:8: note: no known conversion for argument 1 from ‘double’ to ‘const std_msgs::UInt16_<std::allocator<void> >&’ This is also about UInt. I think my code is using UInt and float interchangeably. I don’t know how to get around this. I'm also unclear on how the Arduino node will read the Joy values. Is there a way to do the calculations with doubles and change them to UInt? Originally posted by dshimano on ROS Answers with karma: 129 on 2014-10-02 Post score: 1 Answer: The std_msgs/UInt16 message is a wrapper around an integer type, not an integer type. This means that you can't assign directly into it; rather you have to set its data field. Try: deg.data = l_scale_*joy->axes[linear_]; Originally posted by ahendrix with karma: 47576 on 2014-10-02 This answer was ACCEPTED on the original site Post score: 3 Original comments Comment by dshimano on 2014-10-07: Thanks, that fixed it!
{ "domain": "robotics.stackexchange", "id": 19600, "tags": "ros, arduino, joy" }
Getting a "system is computationally singular" error in sleuth
Question: I am analysing 142 samples belonging to 6 batches. Additionally, those samples belong to 72 strains, which means that for most of the strains there are two samples. I could fit simple models (for strain and batches for instance), but when I get to the "full" model (~batch+strain), I get the following error: so <- sleuth_fit(so, ~strain+batch, 'full') Error in solve.default(t(X) %*% X) : system is computationally singular: reciprocal condition number = 5.2412e-19 I should point out that of the 72 strains, only 15 have samples in distinct batches. This means that most strains (57) have both samples in the same batch. Is the error due to an unknown bug or rather to the experimental design? Does it mean that the information on batches cannot be used? Thanks EDIT I've posted the experimental design in a gist batch strain replica batch_1 strain_41 1 batch_4 strain_41 2 batch_1 strain_28 1 batch_4 strain_28 2 batch_1 strain_26 1 [...] Answer: You should be able to remove any one of the following strains to end up with a rank-sufficient model matrix: 5, 10, 12, 13, 14, 15, 19, 26, 28, 3, 30, 32, 36, 39, 41, 45, 46, 49, 5, 50, 52, 53, 58, 59, 60, 69, 8. As an aside, you can figure this sort of thing out as follows (I read your dataframe into the d object): > m = model.matrix(~batch+strain, d) > dim(m) # 142 rows, 77 columns, so maximum rank is 77 > qr(m)$rank # 76, so just barely rank insufficient > #see if we can remove a single column and still get rank 76 > colnames(m)[which(sapply(1:77, function(x) qr(m[,-x])$rank) == 76)] You obviously don't want to remove the batch columns or the intercept. The normal tricks that you can sometimes use to get around this issue with case-control studies don't appear to help here, which is why I would just drop a strain and call it done. Keep in mind that your power is still likely terrible. I generally recommend at least 6 replicates per group (scale down the number of groups to fit your budget).
EDIT: Once the desired strain is removed from the model matrix, the matrix can be passed directly to the sleuth_fit function to obtain the full model: > m = m[, -9] # whatever column to drop to get the appropriate rank > so <- sleuth_fit(so, m, 'full')
{ "domain": "bioinformatics.stackexchange", "id": 113, "tags": "rna-seq, r, differential-expression" }
What is fully connected layer additive bias?
Question: I'm going to use PyTorch specifically but I suspect my question applies to deep learning & CNNs in general therefore I choose to post it here. Starting at this point in this video and subsequently: https://www.youtube.com/watch?v=JRlyw6LO5qo&t=1370s George H. explains that the PyTorch function torch.nn.Linear with the bias parameter set to False then makes the torch.nn.Linear functionally equivalent (other than GPU support of course) to the following NumPy line: x = np.dot(weights, x) + biases Note that in torch.nn.Linear bias by default is set to True: https://pytorch.org/docs/stable/generated/torch.nn.Linear.html Here is the PyTorch documentation for the bias parameter: bias – If set to False, the layer will not learn an additive bias. Default: True Can anybody please explain what "additive bias" is? In other words, what additional steps is PyTorch doing if the torch.nn.Linear bias parameter is set to True? Surprisingly I was not able to find much on this topic upon Googling. Answer: You probably misunderstood the video; it is not said that a linear layer with Bias set to False is equivalent to: x = np.dot(weights, x) + biases Because that is not true, a layer without Bias is equivalent to x = np.dot(weights, x) The way he recreates the layer without Bias is actually with the following function: x = x.dot(l1) # X = W1.X First linear layer x = np.maximum(x, 0) # X = ReLU(X) x = x.dot(l2) # X = W2.X Second linear layer Setting Bias=True means the layer has a second bias term that it adds after multiplying the weights with the input (as in the formula you quoted): x = np.dot(weights, x) + biases
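A quick NumPy sketch of the distinction (the layer sizes and random values below are illustrative, not PyTorch itself):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=4)        # input vector for a 4 -> 3 linear layer
W = rng.normal(size=(3, 4))   # learned weight matrix
b = rng.normal(size=3)        # the learned "additive bias"

y_bias = W @ x + b     # what a linear layer with bias=True computes
y_nobias = W @ x       # what a linear layer with bias=False computes

# The only difference between the two is the additive term b
assert np.allclose(y_bias - y_nobias, b)
```

So "additive bias" is just the constant vector added after the matrix multiplication; with bias=False the layer learns only the weights.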
{ "domain": "datascience.stackexchange", "id": 9714, "tags": "neural-network" }
Exponential of ladder operators acting on vacuum state
Question: How would I solve expressions of the following nature: $$<0|e^{Vt(a+a^\dagger)}|0>$$ and $$<0|e^{\omega aa^\dagger t}|0>~?$$ My intuition is that I have to expand the exponential as a power series but I can't understand how to deal with the pre-factors. Answer: Note that $aa^{\dagger}|0>=a|1>=|0>$. With this, it's easy to see that $<0|aa^{\dagger}|0> = 1$, and then $<0|(aa^{\dagger})^2|0>=1$ and so on. You can generalize this for any $N \geq 0$: $$ <0|\left( \omega t a a^{\dagger} \right)^{N}|0> \ = \ (\omega t)^N <0|\left(a a^{\dagger} \right)^{N}|0> \ = \ (\omega t)^N $$ This means that: $$ <0|e^{\omega t a a^{\dagger}}|0> \ = \ \sum_{N=0}^{\infty} \frac{<0|\left( \omega t a a^{\dagger} \right)^{N}|0>}{N!} \ = \ \sum_{N=0}^{\infty} \frac{(\omega t)^N}{N!} \ = \ e^{\omega t} $$ The process is the same for $<0|e^{Vt(a+a^{\dagger})}|0>$: Find an expression valid for any $N \geq 0$ for $<0|\left(Vt(a+a^{\dagger})\right)^N|0>$ and plug it into the power series.
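Both expectation values can be checked numerically in a truncated Fock basis (a sketch: the truncation size and parameter values are arbitrary, the matrix exponential is done by a plain Taylor series to stay self-contained, and the closed form $e^{(Vt)^2/2}$ for the second expression is the result of carrying out the suggested exercise):

```python
import numpy as np

N = 20                                      # Fock-space truncation
a = np.diag(np.sqrt(np.arange(1.0, N)), 1)  # annihilation: a|n> = sqrt(n)|n-1>
ad = a.T                                    # creation operator
vac = np.zeros(N); vac[0] = 1.0             # the vacuum |0>

def expm(M, terms=80):
    """Matrix exponential by truncated Taylor series (fine for these norms)."""
    out, term = np.eye(len(M)), np.eye(len(M))
    for n in range(1, terms):
        term = term @ M / n
        out = out + term
    return out

# <0| e^{w t a a^dagger} |0> = e^{w t}, since (a a^dagger)^N |0> = |0>
wt = 0.7
val1 = vac @ expm(wt * (a @ ad)) @ vac
assert abs(val1 - np.exp(wt)) < 1e-8

# <0| e^{V t (a + a^dagger)} |0> = e^{(V t)^2 / 2} via the same power-series route
vt = 0.3
val2 = vac @ expm(vt * (a + ad)) @ vac
assert abs(val2 - np.exp(vt**2 / 2)) < 1e-6
```

The odd powers of $(a+a^\dagger)$ have vanishing vacuum expectation value, which is why only even powers survive in the second series.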
{ "domain": "physics.stackexchange", "id": 50121, "tags": "quantum-mechanics, homework-and-exercises, hilbert-space, second-quantization, coherent-states" }
What are the known relationships between rotation of planets/moons and their distance to Sun?
Question: What are the known relationships between rotation of planets/moons and their distance to the Sun? Or any other known attributes? For example, the sidereal year for planets is directly related to their mass and their distance from the sun, but what about sidereal day? What are the known relationships for it? Answer: The length of the solar day is related to the sidereal day and the speed at which an object orbits the Sun, which in turn is dependent on its orbital radius. The sidereal year is not related to the mass of the orbiting object. Any object at the same radius in a circular orbit will have the same year. It IS dependent on the mass of the central object, though. Objects orbit faster around a more massive object if orbital radius is held constant.
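The mass-independence of the year is easy to check with a circular-orbit force balance (a sketch; standard values for $G$, the solar mass, and 1 au are assumed):

```python
import math

G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
M_sun = 1.989e30   # mass of the central body (the Sun), kg
r = 1.496e11       # orbital radius (1 au), m

# Circular orbit: G M m / r^2 = m v^2 / r, so the orbiting mass m cancels
v = math.sqrt(G * M_sun / r)           # orbital speed, same for any orbiter
T_days = 2 * math.pi * r / v / 86400   # the year: about 365 days at 1 au
```

The orbiting body's mass never enters, while the central mass $M$ does, matching the answer's two claims.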
{ "domain": "physics.stackexchange", "id": 3150, "tags": "orbital-motion, solar-system, rotation" }
Is it possible for an electron to jump up from other than the ground state by absorbing a photon?
Question: My thoughts are that this is possible, that is an electron can go from n=2 to n=3 states, however due to such a low probability this is not observed (the electron moves to the ground state as fast as possible so unlikely that it would absorb a photon in that time). Or is this type of transition forbidden? Answer: It's perfectly well allowed, but the n=2 state is often not highly populated, unless there is some process in place to promote electrons from the ground state to n=2. Even when there is, there is often some process that allows the n=2 electrons to return to n=1 fairly quickly, so the pumping of electrons from n=1 to n=2 needs to be maintained to make the n=2 to n=3 transition easy to observe. This type of process is commonly observed in time-resolved spectroscopy aka "pump-probe spectroscopy".
{ "domain": "physics.stackexchange", "id": 43072, "tags": "quantum-mechanics, spectroscopy" }
What is the correct Lewis structure of NH3O?
Question: Choose the most appropriate Lewis structure for $\ce{NH3O}$ among the two: and The oxygen in the second case has 3 nonbonding pairs around it. I chose the first as an answer, yet the second seems to be the correct one. Can anyone please explain why? Answer: Technically, both the structures $\ce{H2N-OH}$ and $\ce{H3N\bond{->}O}$ may exist. However, in reality the hydrogen atom is rather prone to migration and the second structure is not favorable. So, for a compound with composition $\ce{NH3O}$, the correct structure would be $\ce{H2N-OH}$. The structure of the second type is stable for the compound $\ce{NOF3}$ and may be observed for amine oxides like $\ce{(C2H5)3N\bond{->}O}$ or pyridine N-oxide. Why $\ce{H2N-OH}$ and not $\ce{H3N\bond{->}O}$? One can come up with several explanations for that, but I would like to focus on one: charge distribution. In $\ce{H3N\bond{->}O}$, the nitrogen has to carry a formal positive (and actually very real) charge, while for $\ce{H2N-OH}$, the structure has no formal charges. Still, according to some sources[1] up to 20% of hydroxylamine in water exists as ammonia oxide $\ce{H3N\bond{->}O}$, probably due to stabilization by hydrogen bonds. Somewhat similar uncertainty may be found for phosphorus compounds (like hypophosphorous acid) and sulfur compounds (like sulfinic acids). Kirby, A. J.; Davies, J. E.; Fox, D. J.; Hodgson, D. R. W.; Goeta, A. E.; Lima, M. F.; Priebe, J. P.; Santaballa, J. A.; Nome, F. Ammonia oxide makes up some 20% of an aqueous solution of hydroxylamine. Chem. Commun. 2010, 46 (8), 1302. DOI: 10.1039/b923742a.
{ "domain": "chemistry.stackexchange", "id": 4924, "tags": "lewis-structure" }
Use cases for area of parallelogram (from vector addition)
Question: In first-year physics classes, you learn how to add vectors using the "Tip-to-Tail" or "Parallelogram" method. In my Calculus 3 class we learned you can find the area of a Parallelogram by using the cross-product from 3 of its vertices. (A = |PQ x QR|) My question is twofold: 1.) Is this PQR method of constructing a parallelogram the same thing as the tip-to-tail method of adding vectors? i.e. You can use |PQ x QR| to find the area of a Tip-To-Tail parallelogram, right? 2.) When would someone be interested in the area of a parallelogram that results from the addition of two vectors? Answer: The parallelogram that shows up in the "tip-to-tail" method is the same parallelogram you find the area of using the cross product. However, these are distinct operations. One concerns the sum of two vectors, and the other a product between two vectors. It just happens that the same parallelogram shows up when you draw a picture. The area of a parallelogram is the magnitude of the cross product of the vectors that lie along two adjacent sides. It really has nothing to do with the sum of the two vectors. So your second question is closer to "When is the cross product useful in physics?". Here is an incomplete list of when you might want to use the cross product.
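Both points can be illustrated in a few lines of NumPy (the vertex coordinates below are made up for the example):

```python
import numpy as np

# Three vertices of a parallelogram; the fourth would be P + (R - Q)
P = np.array([0.0, 0.0, 0.0])
Q = np.array([2.0, 0.0, 0.0])
R = np.array([3.0, 4.0, 0.0])

PQ = Q - P   # one side
QR = R - Q   # the adjacent side

# (1) tip-to-tail: placing QR at the tip of PQ gives the diagonal PQ + QR,
#     which closes the very same parallelogram
diagonal = PQ + QR   # equals R - P

# (2) area from the cross product of the two adjacent sides
area = np.linalg.norm(np.cross(PQ, QR))
```

The sum produces the diagonal of the figure; the cross product produces its area — same parallelogram, two different operations, as the answer says.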
{ "domain": "physics.stackexchange", "id": 94320, "tags": "vectors, vector-fields, geometry" }
roscd cannot find a catkin_make'd package
Question: I run ros kinetic on Ubuntu 16.04. I try to use openpose ros wrapper on my platform. I share the error log below. Best, robolab3@robolab3-Inspiron-7566:~/catkin_ws$ catkin_make Base path: /home/robolab3/catkin_ws Source space: /home/robolab3/catkin_ws/src Build space: /home/robolab3/catkin_ws/build Devel space: /home/robolab3/catkin_ws/devel Install space: /home/robolab3/catkin_ws/install #### #### Running command: "make cmake_check_build_system" in "/home/robolab3/catkin_ws/build" #### #### #### Running command: "make -j8 -l8" in "/home/robolab3/catkin_ws/build" #### [ 0%] Built target sensor_msgs_generate_messages_cpp [ 0%] Built target _image_recognition_msgs_generate_messages_check_deps_Recognitions ... [ 92%] Built target image_recognition_msgs_generate_messages_eus Scanning dependencies of target openpose_wrapper [ 92%] Built target image_recognition_msgs_generate_messages [ 94%] Building CXX object image_recognition/openpose_ros/CMakeFiles/openpose_wrapper.dir/src/openpose_wrapper_mock.cpp.o [ 96%] Linking CXX shared library /home/robolab3/catkin_ws/devel/lib/libopenpose_wrapper.so [ 96%] Built target openpose_wrapper Scanning dependencies of target openpose_ros_node [ 98%] Building CXX object image_recognition/openpose_ros/CMakeFiles/openpose_ros_node.dir/src/openpose_ros_node.cpp.o [100%] Linking CXX executable /home/robolab3/catkin_ws/devel/lib/openpose_ros/openpose_ros_node [100%] Built target openpose_ros_node robolab3@robolab3-Inspiron-7566:~/catkin_ws$ roscd openpose_ros roscd: No such package/stack 'openpose_ros' robolab3@robolab3-Inspiron-7566:~/catkin_ws$ roslaunch openpose_ros_node [openpose_ros_node] is not a launch file name The traceback for the exception was written to the log file robolab3@robolab3-Inspiron-7566:~/catkin_ws$ Originally posted by tolga-uni-lu on ROS Answers with karma: 33 on 2018-03-22 Post score: 0 Original comments Comment by gvdhoorn on 2018-03-22: Please only include the console output that actually shows the 
error message in future questions. Answer: $ catkin_make $ roscd openpose_ros $ roslaunch openpose_ros_node I don't see a source devel/setup.bash in your copy-pasted console text. You need to "activate" your workspace after you've built it using catkin_make, otherwise utilities like roscd and roslaunch will not know where to find the new packages that you just built. Edit: I only now see this: $ roslaunch openpose_ros_node roslaunch takes either one or two arguments: a direct path to a .launch file a package name and a (package relative path to a) .launch file Your invocation is neither: openpose_ros_node is not a package name (that would probably be openpose_ros), and it's also not a launch file name. From the build output you show, it would appear openpose_ros_node is a ROS node (ie: an executable binary). Those you should start with rosrun $pkg $node_name, not roslaunch. Originally posted by gvdhoorn with karma: 86574 on 2018-03-22 This answer was ACCEPTED on the original site Post score: 0 Original comments Comment by tolga-uni-lu on 2018-03-25: Thank you for your answer. How can I activate it? I already exported it on .bashrc. Comment by gvdhoorn on 2018-03-25: I already exported it on .bashrc Doing it that way, your workspace is only sourced (ie: activated) when you start a new bash session (ie: open a new terminal). Any pkgs that you add in such a session will not be found, until you catkin_make and source devel/setup.bash yourself. Or .. Comment by gvdhoorn on 2018-03-25: .. when you start a new terminal. A simple rule is this: add or create a new package -> build your workspace and source $CATKIN_WS/devel/setup.bash. If you don't, your new package(s) won't be found. Comment by tolga-uni-lu on 2018-04-10: Hello, I had this issue previously and solved it the way you showed. Today I changed the name of my node to make it easier to understand. I cleaned with catkin_make clean, rebuilt with catkin_make, and sourced devel/setup.bash.
However now ros cannot locate my renamed node, nor can it build it. Comment by tolga-uni-lu on 2018-04-10: ERROR: cannot launch node of type [gesture_detector/gesture_listener.py]: gesture_detector ROS path [0]=/opt/ros/kinetic/share/ros ROS path [1]=/home/robolab3/catkin_ws/src ROS path [2]=/opt/ros/kinetic/share :~/catkin_ws$ ls src/gesture_detector/scripts/ gesture_listener.py Comment by gvdhoorn on 2018-04-10: Delete your build and devel folders, rebuild your workspace, then try again. Comment by gvdhoorn on 2018-04-10: If this is a Python script, make sure that it has executable permissions set as well. Comment by tolga-uni-lu on 2018-04-10: I solved it by going inside catkin_ws/src/gesture_detector and executing catkin_create_package. I did not need to call chmod +x after this.
{ "domain": "robotics.stackexchange", "id": 30419, "tags": "ros, ros-kinetic, ubuntu, ubuntu-xenial" }
Far Field Approximation in Young's Double Slit Experiment
Question: I am studying some things surrounding Young's double-slit experiment and am trying to understand the derivations. The part that is not clear to me is the far-field approximation. That is, I understand what it means, but am failing to obtain the same equation as the tutorial. We start with a wave of wavelength $\lambda = \frac{2 \pi}{k}$ incident on a plate with two pinholes. Each pinhole or slit acts like a source of wavelength $\lambda$. The resultant wave at a point with distances $r_1, r_2$ from the slits is $\frac{e^{i(kr_1-\omega t)}}{r_1} + \frac{e^{i(kr_2-\omega t)}}{r_2}$ The far-field approximation we make is $r_1,r_2 \gg d$, where $d$ is the distance between the slits. The expression for the resultant wave should be $2 \frac{e^{i(kr-\omega t)}}{r} \cos(\frac{k d}{2}\theta)$, where $r = \frac{r_1 + r_2}{2}$ and $\theta$ is the small angle of deviation from the normal to the screen on which the slits are located. It is the latter expression that I would like to obtain. Any advice or hint (preferred) is appreciated. Answer: You asked for a hint... express your equations as $r_1 = r+\delta$ and $r_2 = r-\delta$; then note that the intensity term ($1/r_1$ and $1/r_2$) will basically be the same for both (replace as above, and the $\delta$ term will vanish), and things will fall into place. You might need to be reminded that $e^{i\phi} = \cos\phi + i\sin\phi$. I will leave it as an exercise to see how $\delta$ relates to $d$, $\lambda$ and $\theta$... as Emilio Pisanty points out in the comment, you may need to remember that for small $\theta$, $\theta \approx \sin\theta \approx \tan\theta$.
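Following the hint, with $r_1 = r + \delta$, $r_2 = r - \delta$ and $1/r_1 \approx 1/r_2 \approx 1/r$, the identity $e^{ik\delta} + e^{-ik\delta} = 2\cos(k\delta)$ does the rest. A quick numeric sanity check of that step (the values of $k$, $r$, $\delta$ are arbitrary and dimensionless):

```python
import numpy as np

k = 5.0      # wavenumber
r = 2.0      # mean distance (r1 + r2) / 2
delta = 0.3  # half the path difference: r1 = r + delta, r2 = r - delta

# Sum of the two waves, with 1/r1 and 1/r2 both approximated by 1/r
lhs = np.exp(1j * k * (r + delta)) / r + np.exp(1j * k * (r - delta)) / r

# The compact form from the question (spatial part, omitting e^{-i omega t})
rhs = 2 * np.exp(1j * k * r) / r * np.cos(k * delta)

assert abs(lhs - rhs) < 1e-12
```

With $\delta \approx \frac{d}{2}\sin\theta \approx \frac{d}{2}\theta$ for small $\theta$, $\cos(k\delta)$ becomes the $\cos(\frac{kd}{2}\theta)$ factor in the target expression.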
{ "domain": "physics.stackexchange", "id": 43472, "tags": "optics, waves, double-slit-experiment, interference, diffraction" }
Why is it possible to expand the SM Higgs field in its second component only
Question: In the lecture, the professor said something along the lines of: "After a suitable gauge transformation, the standard model higgs field can be expanded as $$\phi =\left(\begin{array}{c} 0 \\ v+H(x) \end{array}\right)$$ ". Now, the argument I have been able to scramble from different sources goes along these lines: We can write small higgs field excitations as $$\tilde\phi =\left(\begin{array}{c} i \theta_1(x) + \theta_2(x) \\ v+H(x) - i \theta_3(x) \end{array}\right)$$ An appropriately chosen local $SU(2)_L$ transformation transforms this into the above form where all $\theta=0$. Thus, by applying an appropriate local $SU(2)_L$ transformation to all elements of the Lagrangian, we can use this form without loss of generality. I have the following two issues with this argumentation: First, how do I know that the first statement is true? Secondly and most importantly, wouldn't any symmetry transformation that corrects the higgs weak spinor into the above form, given a random but fixed excitation of the higgs field, also mess up the spinors of the left handed leptons? In other words, how is it possible to find a local gauge transformation on the Lagrangian that corrects any $\tilde \theta$ into $\theta$, but does not change the form of $ \bar L =\left(\begin{array}{c} \bar\nu \\ \bar e \end{array}\right)$? Answer: The first assumption is that whatever vev the higgs picks up is constant in space, because this has less energy than one that increases the kinetic term in the Lagrangian. So we can do one global transformation to make the vev be in the second component only. You can imagine doing this prior to symmetry breaking, if you know what it is going to be ahead of time, and since the other fields are invariant, bob's yer uncle. Stated differently, the pre-symmetry-breaking electrons and neutrinos are not the ones we observe, so we just label whatever remains as electrons and neutrinos.
"Without loss of generality", we work in an electron-neutrino (global) basis in which the higgs starts out with only the second component of the vev being nonzero and real. If you buy that part, then it is just a matter of showing that you can perform a gauge transformation that gauges away all the other components of the Higgs except the real part of the second component. This gauge transformation will of course mix $\nu$ and $e$ spatially, but you can say that when we perform the path integral we have a gauge redundancy, and so we only integrate along a slice that obeys some gauge fixing condition. The components of $L$ might as well be labeled $L_1$ and $L_2$. It's only after we've chosen a gauge that we decide, hey, let's name them $\nu$ and $e$.
{ "domain": "physics.stackexchange", "id": 10598, "tags": "standard-model, symmetry-breaking, higgs" }
What factors must one consider choosing an NN structure?
Question: Suppose we have a classification problem and we wish to solve the problem by a neural network. What factors must one consider choosing an NN structure? e.g. feed-forward, recurrent and other available structures. Answer: Here is a list of parameters you should take into consideration (to name some): The learning algorithm (gradient descent is the most explained of them) The number of layers (3 or 4 layers is usually enough: input, 1 or 2 hidden, output). The output layer depends on your output (e.g., if you want to classify yes/no then your output layer consists of two nodes). The same applies to the input layer. However, you may consider the case of using only a subset of the input for the learning. For instance, you may think that your problem is affected by $k$ variables. However, if you take $k'$ of them then you may get better results. Number of nodes in the hidden layers (you select that by trial and error) Number of training iterations (not too many, to avoid over-fitting) Size of training/testing data (there are some known rules like the 80:20 for example) The type of the function used at the nodes (neurons) (e.g. $f(x) = 1/(1+e^{-x})$ or $f(x) = \tanh(x)$), usually the first is sufficient. An important issue is the pre-processing and post-processing of data (this is common in all pattern recognition techniques). You may for instance convert your data by applying a certain function $f$ and then run your experiments. Note: given the many parameters you need to deal with, it is a good approach to use a search algorithm to select the best parameters for you. It had better be a heuristic search algorithm (e.g. a genetic algorithm) if you have a very large number of parameter settings to deal with (which is usually the case). Note: use the Matlab NN library or Weka (open source). They would have all these parameters for you. In fact, Weka has many other learning algorithms. Note: Perhaps, you may want to use other algorithms then.
If this is the case, try support vector machines. There was a historical battle between these two algorithms (in the 1990s). SVM won it! (I am not being very scientific here).
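As a small illustration of the two common node functions mentioned in the list above (a sketch; the logistic function is written with $e^{-x}$, and the sample points are arbitrary):

```python
import numpy as np

def sigmoid(x):
    # logistic activation: squashes the real line into (0, 1)
    return 1.0 / (1.0 + np.exp(-x))

x = np.linspace(-4.0, 4.0, 9)
s = sigmoid(x)
t = np.tanh(x)

assert ((s > 0) & (s < 1)).all()    # sigmoid range (0, 1)
assert ((t > -1) & (t < 1)).all()   # tanh range (-1, 1)
# tanh is just a shifted and rescaled sigmoid
assert np.allclose(t, 2 * sigmoid(2 * x) - 1)
```

Either works as a node function; the two are related by tanh(x) = 2·sigmoid(2x) − 1, so the choice mostly affects the output range.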
{ "domain": "cs.stackexchange", "id": 867, "tags": "artificial-intelligence, neural-networks, neural-computing" }
Is it possible to obtain antiwater from antihydrogen and antioxygen atoms? And how would its properties compare with ordinary water?
Question: I am interested in experimental physics and am looking for information about the above question. Answer: Research has created antihydrogen, and that is about it for the present as far as antimatter in bulk, which one would need for antiwater. Scientists in the US produced a clutch of antihelium particles, the antimatter equivalents of the helium nucleus, after smashing gold ions together nearly 1bn times at close to the speed of light. They were gone as soon as they appeared, but for a fleeting moment they were the heaviest particles of antimatter a laboratory has seen. If you look at the nuclear binding energy plot, oxygen needs a lot of antinucleons to materialize. Present research has just seen antihelium.
{ "domain": "physics.stackexchange", "id": 42129, "tags": "antimatter, molecules, matter" }
How is a physical qubit measured and how is the result interpreted?
Question: To my understanding, most of the qubits we use today are just Josephson junctions coupled to a resonator that drives the qubits into different states using microwave pulses. The same resonators are also used to read the information from qubits. As shown in this picture: So I was wondering how a qubit is read out when we measure it, and how the result is interpreted. What does the raw information from a qubit look like? Answer: What follows turned out to be a rather technical explanation, so I'll start with the main point: The qubit state can change the resonator's state, and the resonator's state can be easily measured only if there is a large difference in frequencies between the qubit and the resonator. Let's model a qubit as a two-level system and a resonator as a harmonic oscillator. We need to give both a characteristic frequency, so let's call the qubit frequency $\omega_q$ and the resonator frequency $\omega_r$. We also need to characterize the strength of the interaction (how fast energy is transferred from one thing to another), and that's usually called $g$. Now we need to describe the dynamics of the qubit and resonator together. That's done by breaking the Hamiltonian down into three parts: qubit energy, resonator energy, and interaction strength. $$H_{\text{qubit}} = \frac{1}{2} \hbar \omega_q \sigma_z $$ $$H_{\text{resonator}} = \hbar \omega_r \ a^\dagger a $$ $$H_I = \hbar g ( \sigma_+a + a^\dagger \sigma_-)$$ $$ H = H_{\text{qubit}} + H_{\text{resonator}} + H_I $$ This is the Jaynes-Cummings Hamiltonian. Note that the rotating wave approximation was used. Pauli operators are given by $\sigma$, and creation and annihilation operators are $a^\dagger$ and $a$. Now, if the frequencies of the resonator and the qubit are the same (that's called the "resonance condition"), then they will exchange energy back and forth, much like an LC circuit or potential and kinetic energy in a pendulum.
That's not ideal for measuring a qubit (you want to avoid as much as possible the measurement apparatus influencing the qubit state), so we make the frequency of the resonator very far away from the qubit's. This is called a dispersive interaction. Formally, if $\omega_r - \omega_q = \Delta$, the Hamiltonian (now in the interaction picture) can be expressed as $$ H = \hbar \frac{g^2}{\Delta} ( |1\rangle \langle 1| + a^\dagger a \sigma_z ) $$ If the interaction between the qubit and the resonator is dispersive, and if the resonator is driven to a coherent state $|\alpha\rangle$, then the qubit states of $|0\rangle$ and $|1\rangle$ can be distinguished. Examine the evolution of two initial states, $|0\rangle|\alpha\rangle$ and $|1\rangle|\alpha\rangle$: $$ e^{-iHt/\hbar}|0\rangle|\alpha\rangle = |0\rangle|\alpha e^{-i\frac{g^2}{\Delta}t } \rangle $$ $$ e^{-iHt/\hbar}|1\rangle|\alpha\rangle = |1\rangle|\alpha e^{i\frac{g^2}{\Delta}t } \rangle $$ This is the point of the dispersive interaction: the coherent state of the resonator changes according to the state of the qubit, without having energy exchanged directly. You can see that the coherent state gets shifted counterclockwise for a 0 and clockwise for a 1. Furthermore, it's a simple matter to measure the state of a coherent resonator, since it can be described classically. This is only an outline of the complete answer. I've assumed knowledge of basic quantum mechanics, changing reference frames (interaction picture, Schrodinger picture, etc), coherent states, and the rotating wave approximation. I also glossed over the derivation of the dispersive Hamiltonian, which you can find in "Introductory Quantum Optics" by Gerry and Knight.
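The practical upshot of the last two equations can be sketched with a few lines of arithmetic (all numbers below are illustrative choices, not values from the answer):

```python
import numpy as np

g = 2 * np.pi * 100e6    # qubit-resonator coupling, rad/s (illustrative)
Delta = 2 * np.pi * 2e9  # detuning omega_r - omega_q, rad/s (dispersive: Delta >> g)
chi = g**2 / Delta       # the dispersive shift g^2 / Delta

alpha = 2.0              # coherent-state amplitude driving the resonator
t = 20e-9                # evolution time, s

alpha_if_0 = alpha * np.exp(-1j * chi * t)  # resonator state when the qubit is |0>
alpha_if_1 = alpha * np.exp(+1j * chi * t)  # resonator state when the qubit is |1>

# The two "pointer states" separate in phase by 2*chi*t; measuring the phase
# of the resonator field is what reads out the qubit.
phase_sep = np.angle(alpha_if_1 / alpha_if_0)
```

The raw signal is thus the phase (or quadratures) of the microwave field leaking out of the resonator, which ends up rotated one way for $|0\rangle$ and the other way for $|1\rangle$.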
{ "domain": "quantumcomputing.stackexchange", "id": 757, "tags": "architecture, experimental-realization, superconducting-quantum-computing" }
Does a smaller half-life mean that the rate of energy/radiation release is faster?
Question: I had a discussion with someone about using radioactive elements as sources of energy on board long interplanetary voyages. Actually we were talking about going to the moon, and what our source of energy would be. He said that the element that is used should have a long half-life, to have enough of it for the duration of the flight. I said that it would be better to have a short half-life, because the flight is a few tens of hours, and a long one would mean that on average it takes a long time for it to supply significant energy. I want to ask you: ----Assumption to be taken: half-life is not probabilistic.---- Does a smaller half-life mean that the rate of energy/radiation release is faster? The definition of half-life is the amount of time it takes for half an amount (in terms of mass) of a radioactive substance to disintegrate. Since this mass is lost by radiation (energy), does that mean that an inverse proportionality between half-life and rate of energy release exists? My questions are: If one has (x) mass of radioactive element (1) which has a half-life (a) and (x) mass of radioactive element (2) which has a half-life (a/2). Will element (2) release energy faster than element (1)? If question 1 is true, then does element (2) release energy twice as fast as element (1)? Answer: A shorter half-life is not a guarantee of a faster rate of energy release: just of a faster rate of disintegration. You need to multiply that by the energy released per disintegration to get the energy release per second. All other things being equal (same atomic mass, same energy released), you need to bring less material (mass) along if you try to generate a certain amount of energy, if that material has a shorter half life. The simple way to see that: if the total energy needed is released by 3000 atoms disintegrating, and the journey takes one half life, then I need to bring 6000 atoms along (half disintegrate).
But if the half-life is half as long, I only need to bring 4000 atoms: 2000 disintegrate during the first half of the flight, and another 1000 during the second half. That demonstrates another problem: if you need constant power from your source, you need a longer half-life... In other words - it depends.
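The bookkeeping in this answer can be checked numerically. A minimal Python sketch (assuming, as above, equal energy per disintegration and a purely exponential decay law):

```python
import math

def atoms_needed(total_decays, trip_halflives):
    # N0 such that N0 * (1 - 2**-k) atoms disintegrate during a trip
    # lasting k half-lives.
    return total_decays / (1 - 2 ** -trip_halflives)

# A trip lasting one half-life of isotope 1 lasts two half-lives of
# isotope 2, whose half-life is half as long.
print(atoms_needed(3000, 1))  # 6000.0 atoms of isotope 1
print(atoms_needed(3000, 2))  # 4000.0 atoms of isotope 2

def initial_activity(n_atoms, half_life):
    # Disintegrations per unit time at t = 0: lambda * N0, lambda = ln2 / T.
    return math.log(2) / half_life * n_atoms
```

Note that the shorter-lived isotope starts the trip with the higher activity, which is exactly the "constant power needs a longer half-life" problem mentioned above.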
{ "domain": "physics.stackexchange", "id": 37118, "tags": "radioactivity, half-life" }
Meaning of $\lambda_1$ factor in Eurocodes
Question: I'm reading here on buckling resistance of members according to Eurocode. Non-dimensional slenderness is defined as: $$\bar{\lambda}=\frac{\lambda}{\lambda_1}$$ where $$\lambda_1=\pi \sqrt{\frac{E}{f_y}}$$ $$\lambda = \frac{L_{cr}}{i}$$ The plain $\lambda$ is by definition the slenderness of the column, the ratio of its effective length to its radius of gyration. But what is the meaning of $\lambda_1$? No explanation besides its definition is given in the text. It is some factor by which we can divide the plain lambda or slenderness to make it "non-dimensional", but why does it take the form it does? What meaning does the square root of the ratio of elastic modulus to yield stress multiplied by pi have? Answer: $\lambda_1$ is the minimum slenderness ratio at which buckling will be the dominant condition for a purely axially loaded element. Interestingly, it is an intrinsic property of the material, not the geometry. Its derivation is simple: $$\begin{align} P_{crit} &= \dfrac{\pi^2 EI}{L^2} \\ \dfrac{P_{crit}}{A} &= \pi^2 E \cdot \dfrac{I}{L^2A} \\ \sigma_{crit} &= \pi^2 E \cdot \dfrac{i^2}{L^2} \\ \sigma_{crit} \equiv f_y &= \pi^2 E \cdot \dfrac{1}{\lambda_1^2} \\ f_y &= \dfrac{\pi^2 E}{\lambda_1^2} \\ \therefore \lambda_1 &= \pi\sqrt{\dfrac{E}{f_y}} \end{align}$$ From this we can calculate a slenderness ratio for steel and another for aluminium, above which any element made of each material will fail by buckling, and below which they will fail by simple compression. And that's all that's described by $\bar\lambda$: if it's greater than 1 (the element's slenderness ratio is greater than $\lambda_1$), the element will fail by buckling; if lower, by compression. If exactly equal to 1, it'll do both simultaneously. In reality, codes usually have "fuzzier" rules for elements with $\bar\lambda \approx 1$, since there can be interactions between both failure states which lead to ugly math.
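The derivation above is easy to check numerically; a small sketch (the S235 values E = 210 000 MPa and f_y = 235 MPa are textbook numbers, not taken from the question):

```python
import math

def lambda_1(E, f_y):
    # Slenderness at which the Euler critical stress equals the yield stress.
    return math.pi * math.sqrt(E / f_y)

def non_dimensional_slenderness(L_cr, i, E, f_y):
    return (L_cr / i) / lambda_1(E, f_y)

# S235 steel: lambda_1 comes out near the classic value ~93.9.
print(round(lambda_1(210_000, 235), 1))  # 93.9

# A column with L_cr/i = 100 has lambda-bar > 1: buckling governs.
print(non_dimensional_slenderness(3000, 30, 210_000, 235))
```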
{ "domain": "engineering.stackexchange", "id": 3702, "tags": "structural-engineering" }
Electron degeneracy in white dwarfs
Question: Consider a plasma in a star. Now in a plasma electrons are so excited that they can no longer be held by the electromagnetic field of the nucleus. But when we talk about the cores of red giants, or about white dwarfs themselves, they have electron degeneracy pressure, which is due to the potential well caused by the electromagnetic field itself. Does that mean that those electrons once again come under the influence of the electromagnetic force? But doesn't that go against the definition of a plasma? Answer: Electron degeneracy pressure is not caused by any electromagnetic interaction. It is an "ideal" effect that would be present in any high-density gas of indistinguishable, non-interacting fermions. By constraining the fermions to have high density (with gravity in this case), you force electrons to occupy states well above zero momentum and kinetic energy, since the Pauli Exclusion Principle forbids more than one electron from occupying the same quantum state. It is this kinetic energy, even at low temperature, that produces degeneracy pressure.
{ "domain": "physics.stackexchange", "id": 65899, "tags": "electromagnetism, potential, plasma-physics, stars, white-dwarfs" }
Does making any material into wool transform it into a thermal insulator?
Question: This question originates from the existence of a number of wools made of different materials, such as glass or rock, that are used as thermal insulators for e.g. buildings. I wanted to ask a more general question about whether texture alone can make something a good insulator but I was not sure it was not too broad. So, there is steel wool, for example. Although its main use is abrasion, does it provide thermal insulation or not? My intuition would be that the air trapped in the wool adds insulation but steel is a good conductor. Any general thoughts here? Answer: The thermal insulation of these various wools stems largely from the air contained (trapped) in them. Air is a fairly good thermal insulator. Especially if the air is contained in cellular structures, such as in expanded polystyrene or expanded polyethylene, because restricting the movement of the air reduces conduction/convection through it. It also helps if the 'matrix' material is a good thermal insulator, and that's why steel wool will be a poorer insulator than e.g. glass wool: steel is a good thermal conductor, while glass is not.
{ "domain": "physics.stackexchange", "id": 73682, "tags": "insulators" }
Running Floyd-Warshall algorithm on graph with negative cost cycle
Question: I am trying to find the answer to the following question for the Floyd-Warshall algorithm. Suppose Floyd-Warshall algorithm is run on a directed graph G in which every edge's length is either -1, 0, or 1. Suppose that G is strongly connected, with at least one u-v path for every pair u,v of vertices, and that G may have a negative-cost cycle. How large can the final entries A[i,j,n] be, in absolute value (n denotes number of vertices). Choose the smallest number that is guaranteed to be a valid upper bound? There is the following answers: +∞ n^2 n - 1 2^n I have ruled out 3. (n-1) and 1. (+∞) since if a graph has a negative cost cycle, the absolute final value of a path including a negative cycle can be increased further than n-1. The answer also cannot be +∞ since the algorithm stops after a finite number of steps. But I am having trouble between answers 2. and 4. I am more inclined to 4. since I have run some test cases, and final values seemed to comply to an exponential growth. But I cannot find a proof for it. Answer: I think the answer is 4, and here is why. Assume the graph is fully connected with all edges having weight of -1. Now let's consider the three loops of Floyd-Warshall algorithm: for k = 1 to n: for i = 1 to n: for j = 1 to n: Since -2 is "shorter" than -1, after we finish k = 1, the weight for i -> k = 1 -> j is -2 for most i and j (exceptions would be i = k and j = k). After we finish k = 2, the weight for i -> k = 2 -> j is -4 for most i and j. This is because i -> 1 -> 2 -> 1 -> j is the shortest, giving us -4. And so on and so on for the exponential growth. Floyd-Warshall algorithm does not guarantee that we will find a simple shortest path, that is, a path containing only one instance of each vertex.
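A quick experiment supports answer 4. The sketch below runs the tabular (non-in-place) version of Floyd-Warshall, matching the A[i,j,k] recurrence in the question, on a complete graph whose edges all have weight -1, so every cycle is negative; only loose bounds are asserted, since the exact values depend on n:

```python
def floyd_warshall(dist):
    # Tabular version: round k reads only round k-1 values, matching the
    # A[i,j,k] = min(A[i,j,k-1], A[i,k,k-1] + A[k,j,k-1]) recurrence.
    n = len(dist)
    A = [row[:] for row in dist]
    for k in range(n):
        prev = [row[:] for row in A]
        for i in range(n):
            for j in range(n):
                A[i][j] = min(prev[i][j], prev[i][k] + prev[k][j])
    return A

n = 7
# Complete graph: every ordered pair gets an edge of weight -1.
dist = [[0 if i == j else -1 for j in range(n)] for i in range(n)]
final = floyd_warshall(dist)
largest = max(abs(d) for row in final for d in row)
print(largest)  # grows roughly like 2**n, far beyond n**2 = 49
```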
{ "domain": "cs.stackexchange", "id": 1840, "tags": "algorithms, graphs, shortest-path" }
Thrust to Weight ratio in Space with an offset CoM
Question: With regards to this thread, Thrust center in space My question is, if the thrust to weight ratio was increased so that it was much higher than the weighted mass of the sphere (ship), would the sphere then not start to follow a straight trajectory along the axis of the thruster. Answer: As can be inferred from the Wikipedia article on center of percussion, the governing equations of a rod of mass $m$ and moment of inertia $I$ with a rocket of force $F$ attached perpendicular to the rod at a distance $b$ from the center of mass are $$m\ddot{\mathbf{r}}=R_\theta\mathbf{F}\\I\ddot{\theta}=Fb$$ where $$\mathbf{F}=(0,F)$$ is the force vector applied by the rocket when $\theta=0$, where $\theta$ is the angle the rod makes with the $x$-axis, and $R_\theta$ is the 2D counterclockwise rotation matrix of angle $\theta$. With initial conditions $\mathbf{r}(0)=\dot{\mathbf{r}}(0)=(0,0)$ and $\theta(0)=\dot{\theta}(0)=0$ this gives $$\theta(t)=\frac{b F t^2}{2 I}$$ $$\mathbf{r}(t)=\left(\frac{-\sqrt{\pi } t \sqrt{b F I} S\left(\sqrt{\frac{b F}{I\pi}} t\right)+I \left(-\cos \left(\frac{b F t^2}{2 I}\right)\right)+I}{b m},\frac{\sqrt{\pi } t \sqrt{b F I} C\left(\sqrt{\frac{b F}{\pi I}} t\right)-I \sin \left(\frac{b F t^2}{2 I}\right)}{b m}\right)$$ where $C$ and $S$ are the Fresnel C and Fresnel S functions. Notice that the expression is invariant under the transformation $F\rightarrow\lambda F$, $t\rightarrow \lambda^{-1/2}t$. This means that the path is unchanged when you alter the thrust to weight ratio. In short, changing the thrust to weight ratio actually does not cause the trajectory of the off-center rocket-rod to change, despite the initial physical intuition that this would cause the system's path to "straighten out".
For reference, here is a plot of the center of mass trajectory for some particular choice of constants: ParametricPlot[{( i - i Cos[(b F t^2)/(2 i)] - Sqrt[b] Sqrt[F] Sqrt[i] Sqrt[\[Pi]] t FresnelS[(Sqrt[b] Sqrt[F] t)/(Sqrt[i] Sqrt[\[Pi]])])/(b m), ( Sqrt[i] (Sqrt[b] Sqrt[F] Sqrt[\[Pi]] t FresnelC[(Sqrt[b] Sqrt[F] t)/(Sqrt[i] Sqrt[\[Pi]])] - Sqrt[i] Sin[(b F t^2)/(2 i)]))/(b m)} /. {i -> 1, b -> 1, F -> 2, m -> 1}, {t, 0, 5}]
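Those closed-form expressions translate directly into a short numerical check (using SciPy's `fresnel`, which follows the same convention as Mathematica's `FresnelS`/`FresnelC`); the constants `b = I = m = 1` are arbitrary choices:

```python
import numpy as np
from scipy.special import fresnel

def com_trajectory(t, F, b=1.0, I=1.0, m=1.0):
    # Closed-form centre-of-mass path of the rocket-rod.
    S, C = fresnel(np.sqrt(b * F / (I * np.pi)) * t)
    theta = b * F * t**2 / (2 * I)
    x = (-np.sqrt(np.pi * b * F * I) * t * S - I * np.cos(theta) + I) / (b * m)
    y = (np.sqrt(np.pi * b * F * I) * t * C - I * np.sin(theta)) / (b * m)
    return x, y

# Invariance under F -> lam*F, t -> t/sqrt(lam): same point on the same
# path, so raising the thrust does not straighten the trajectory.
lam = 4.0
p1 = com_trajectory(2.0, 1.0)
p2 = com_trajectory(2.0 / np.sqrt(lam), lam)
print(np.allclose(p1, p2))  # True
```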
{ "domain": "physics.stackexchange", "id": 12487, "tags": "newtonian-mechanics, angular-momentum, rotational-dynamics, conservation-laws, torque" }
Putting numbers into words
Question: I have some embarrassingly long code which puts into words any number up into the trillions. As a newbie, and understanding that shorter, non-repetitive code is best, I am looking for suggestions on how to reduce this code to a respectable quantity. I realize that there is a lot of repetition in it. However, depending on the number being evaluated, the math looks a little different with each rep, so I am not sure if I can reduce that. I have tried rewriting it solely as an if/else (without the recursion) but it quickly becomes just as bad if not worse. class :: Fixnum def in_words(number) if number < 0 # No negative numbers. return 'Please enter a number that isn\'t negative.' end if number == 0 return 'zero' end numString = '' # This is the string we will return. onesPlace = ['one', 'two', 'three', 'four', 'five', 'six', 'seven', 'eight', 'nine'] tensPlace = ['ten', 'twenty', 'thirty', 'forty', 'fifty', 'sixty', 'seventy', 'eighty', 'ninety'] teenagers = ['eleven', 'twelve', 'thirteen', 'fourteen', 'fifteen', 'sixteen', 'seventeen', 'eighteen', 'nineteen'] #-----------------------------------------trillions left = number write = left/1000000000000 left = left - (write*1000000000000) if number > 999999999999 if write < 10 #1,00 - 9,000 millions = onesPlace[write - 1] numString = numString + millions + ' trillion' end if (write > 9) && (write < 20) #10,000 - 19,000 if write == 10 millions = tensPlace[write - 10] numString = numString + millions + ' trillion' else millions = teenagers[write - 11] #11 because the length of teenagers is only 9 so 15-11 = 4 and 'thirteen' is in numString = numString + millions + ' trillion' end end if (write > 19) && (write < 1000) #here i have to use recursion to get the first two/three digets --> 19,000,000 - 999,000,000 millions = in_words(write) numString = numString + millions + ' trillion' end if left > 0 numString = numString + ' ' end #-----------------------------------------billions #left = number write = 
left/1000000000 left = left - (write*1000000000) if number > 999999999 if write < 10 #1,00 - 9,000 millions = onesPlace[write - 1] numString = numString + millions + ' billion' end if (write > 9) && (write < 20) #10,000 - 19,000 if write == 10 millions = tensPlace[write - 10] numString = numString + millions + ' billion' else millions = teenagers[write - 11] #11 because the length of teenagers is only 9 so 15-11 = 4 and 'thirteen' is in numString = numString + millions + ' billion' end end if (write > 19) && (write < 1000) #here i have to use recursion to get the first two/three digets --> 19,000,000 - 999,000,000 millions = in_words(write) numString = numString + millions + ' billion' end if left > 0 numString = numString + ' ' end # ----------------------------------------millions write = left/1000000 left = left - (write*1000000) if number > 999999 if write < 10 #1,00 - 9,000 millions = onesPlace[write - 1] numString = numString + millions + ' million' end if (write > 9) && (write < 20) #10,000 - 19,000 if write == 10 millions = tensPlace[write - 10] numString = numString + millions + ' million' else millions = teenagers[write - 11] #11 because the length of teenagers is only 9 so 15-11 = 4 and 'thirteen' is in numString = numString + millions + ' million' end end if (write > 19) && (write < 1000) #here i have to use recursion to get the first two/three digets --> 19,000,000 - 999,000,000 millions = in_words(write) numString = numString + millions + ' million' end if left > 0 numString = numString + ' ' end #-----------------------------------------thousands write = left/1000 left = left - (write*1000) if number > 999 if write < 10 #1,00 - 9,000 thousands = onesPlace[write - 1] numString = numString + thousands + ' thousand' end if (write > 9) && (write < 20) #10,000 - 19,000 if write == 10 thousands = tensPlace[write - 10] numString = numString + thousands + ' thousand' else thousands = teenagers[write - 11] #11 because the length of teenagers is only 9 so 
15-11 = 4 and 'thirteen' is in numString = numString + thousands + ' thousand' end end if (write > 19) && (write < 1000) #here i have to use recursion to get the first two/three digits --> 19,000 - 999,000 thousands = in_words(write) numString = numString + thousands + ' thousand' end if left > 0 numString = numString + ' ' end end end end end # ------------- hundreds write = left/100 left = left - (write*100) if write > 0 hundreds = in_words(write) numString = numString + hundreds + ' hundred' if left > 0 numString = numString + ' ' end end # ---------------- tens write = left/10 #stop here and return #numString = numString + left = left - write*10 if write > 0 if ((write == 1) and (left > 0)) numString = numString + teenagers[left-1] left = 0 else numString = numString + tensPlace[write-1] end if left > 0 numString = numString + '-' end end write = left left = 0 if write > 0 numString = numString + onesPlace[write-1] end numString end end#class puts 95202824653012.in_words(95202824653012) Another problem I have with it is that it is a method added to the Fixnum class and I would like to be able to call it directly on self without an argument, for example 3457278.in_words instead of 3457278.in_words(3457278). The problem seems to be that the recursion needs an argument when it is called, and therefore the method needs one. Or is there a way around that? Answer: Rather than rewrite your whole solution, I'll point out more idiomatic ways to do this stuff in Ruby. Recursion You don't need an argument to do recursion, self is enough. class ::Fixnum def in_words # ... millions = write.in_words # ... end end String Arrays %w{} is handy for making arrays containing one-word strings ones_place = %w{one two three four five six seven eight nine} Long Numbers _ can be used like a comma (or period, if you're European) in long numbers.
billion = 1_000_000_000 For powers of 10, you can also use exponentiation trillion = 10 ** 12 Modulus % gets the remainder of a number after dividing. write = left / 10 ** 9 left = left % 10 ** 9 Assignment Shortcut x = x <op> y can be written as x <op>= y where <op> is any operator. numString += ' ' left %= 10 ** 9 Loops Once you incorporate all of those, you should be able to see how that huge nested if can be turned into a tiny [12, 9, 6, 3].each do |power| break if self < 10 ** power # ... end
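Putting those pieces together, the whole method collapses to one scale table plus recursion. Here is the idea sketched in Python rather than Ruby (negative-number handling omitted), just to show the shape of the refactor; a Ruby version would reopen `Fixnum` exactly as above:

```python
# One table of scales replaces the four copy-pasted trillion/billion/
# million/thousand blocks; recursion spells out the leading digits.
ONES = ("one two three four five six seven eight nine ten eleven twelve "
        "thirteen fourteen fifteen sixteen seventeen eighteen nineteen").split()
TENS = "twenty thirty forty fifty sixty seventy eighty ninety".split()
SCALES = [(10**12, "trillion"), (10**9, "billion"),
          (10**6, "million"), (10**3, "thousand"), (100, "hundred")]

def in_words(n):
    if n == 0:
        return "zero"
    parts = []
    for value, name in SCALES:
        if n >= value:
            parts.append(in_words(n // value) + " " + name)
            n %= value
    if n >= 20:
        tens, n = divmod(n, 10)
        parts.append(TENS[tens - 2] + ("-" + ONES[n - 1] if n else ""))
        n = 0
    if n:
        parts.append(ONES[n - 1])
    return " ".join(parts)

print(in_words(95202824653012))
```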
{ "domain": "codereview.stackexchange", "id": 12746, "tags": "ruby, numbers-to-words" }
Algebra and algebraic data types
Question: Which of the well-known structures of modern algebra (monoids, groups, rings etc) can be expressed as algebraic data types (ADTs)? Presumably a free monoid can be considered to be isomorphic to the familiar Nil, Cons construction for lists. Can finitely-presented monoids be represented as an ADT? If ADTs can't model structures having inverses, then is there a generalisation which can? Answer: In my understanding, algebraic data types are basically types whose terms arise as the terms freely constructed by an algebraic specification: the operations of this specification being the term-constructors. From this point of view, it seems to me that the only possible structures you can represent with algebraic data types are the free ones, that is those algebraic structures where no axiom is required: for instance magmas. Observe that the fact you can represent free monoids (that is, monoids of strings/lists) as algebraic data types comes from the fact that lists are ADTs for the algebraic specification with a $0$-ary operation (namely $\text{Nil}$ or $[]$) and a binary operation ($cons$ or $(:)$). As long as you work in the setting of simple types you cannot add constraints (that is, equations) to your data types, hence you cannot represent more general algebraic structures. A possible way to solve this problem is to use dependent type systems with an identity type: in these type systems you can represent algebraic structures because equations (constraints) become terms (they are the same as programs). Hope this helps.
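For concreteness, here is a free magma over a set of generator names, written as an ADT-style definition in Python (frozen dataclasses playing the role of the two term constructors); since no associativity axiom can be imposed, `(a*b)*c` and `a*(b*c)` remain distinct terms:

```python
from dataclasses import dataclass
from typing import Union

@dataclass(frozen=True)
class Leaf:
    name: str            # a generator

@dataclass(frozen=True)
class Node:
    left: "Magma"        # the binary operation, freely applied
    right: "Magma"

Magma = Union[Leaf, Node]

t1 = Node(Node(Leaf("a"), Leaf("b")), Leaf("c"))   # (a*b)*c
t2 = Node(Leaf("a"), Node(Leaf("b"), Leaf("c")))   # a*(b*c)
print(t1 == t2)  # False: with no equations, the two terms stay distinct
```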
{ "domain": "cstheory.stackexchange", "id": 3563, "tags": "ds.data-structures, algebra" }
Seeming violation: wave travelling faster than the speed of light
Question: Consider the basic relation $$E=\sqrt{(pc)^2+(mc^2)^2}.$$ Every particle possesses a wave nature and it depends on the situation in which one among the two is perceptible... Consider a particle with rest mass $m$. If we consider the speed of De-Broglie Waves, as usual for a wave $$v_{wave}=\nu \lambda.$$ And since we are taking relativistic effects into account, let's write $$\lambda =\frac{h}{\gamma mv}$$ where $\gamma$ denotes the Lorentz factor $\gamma =1/\sqrt{1-(v/c)^2}$, and $v$ the speed of the particle. Now clearly the energy of the wave could be written as $E=h \nu$. And for the particle, Energy is equal to $\gamma mc^2$. So clearly $$h \nu =\gamma mc^2.$$ Now plugging into $v_{wave}=\nu \lambda$, we get $$v_{wave}=\frac{\gamma mc^2}{h}\frac{h}{\gamma mv},$$ or $$v_{wave}=\frac{c^2}{v}.$$ Doesn't this seem to go against what we know, that the velocity of the wave is less than or equal to $c$? So can anyone point out what's the mistake here? Does this have anything to do with phase or group velocity? Answer: What you have calculated is the phase velocity, $v_p$, of the de Broglie wave associated with the particle. The phase velocity can be greater than $c$, and indeed it is always greater than $c$. The velocity of the particle is the group velocity, $v_g$, and as you have demonstrated the two are linked by: $$ v_p v_g = c^2 $$ The group velocity must always be less than $c$ and that implies the phase velocity must always be greater than $c$.
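The algebra in the question is easy to verify numerically; a small sketch for an electron at v = 0.6c (CODATA constants; the speed is an arbitrary choice):

```python
import math

c = 299_792_458.0           # m/s
h = 6.62607015e-34          # J s
m = 9.1093837015e-31        # electron rest mass, kg

def phase_velocity(v):
    gamma = 1.0 / math.sqrt(1.0 - (v / c) ** 2)
    lam = h / (gamma * m * v)         # relativistic de Broglie wavelength
    nu = gamma * m * c ** 2 / h       # E = h*nu with E = gamma*m*c^2
    return nu * lam                   # v_wave = nu * lambda

v = 0.6 * c                           # particle (group) velocity
vp = phase_velocity(v)
print(vp > c)                         # True: the phase velocity exceeds c
print(vp * v / c ** 2)                # ~1.0: v_p * v_g = c^2
```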
{ "domain": "physics.stackexchange", "id": 61380, "tags": "quantum-mechanics, special-relativity, waves, faster-than-light, phase-velocity" }
Why does the wavelength in curly hair differ?
Question: Different people and animals have different wavelengths in the curls in their hair. I understand what purposes hair serves in protecting the skin from light and bugs. I am wondering about the eventual physiological consequences of curly vs straight hairs. Do the curls help in cooling, blocking light or catching wind over flat hair? Does the wavelength in curly hair have intrinsic properties that physically would give an animal advantages in climates where straight hair would not? I will be happy to accept another answer. Answer: My hypothesis: when the curls are left ungroomed they will form knots. These knots form a barrier that works similarly to woven wool. Unlike straight hair, the mat of knots (dreadlocks) collects dirt and forms a denser material that insulates against direct light and mosquitoes better than straight material. The wavelength does play a part as well. The smaller the wavelength the tighter the knot. The tightest knots are commonly found in places that have the most sunlight per year. Straight hair tends to keep heat in more, and ice builds up less on it.
{ "domain": "biology.stackexchange", "id": 5679, "tags": "evolution, hair, heat" }
How to reduce tilting in self-made gimbal
Question: I made a gimbal at home with PVC. Used some cement filled PVC pieces as hanging weight to make the phone steady. But when I shoot with it, I feel some tilting movements in the video due to the momentum of these weights. How can I reduce this? Is it really possible to make a gimbal at home which can produce some decently stabilised videos? image of gimbal Answer: I'm going to guess at what's happening from your description. In general, the "moment" on both sides of the pivot (weight x distance) should be close to equal. You can have slightly more weight at the bottom so it wants to stay upright. Inertia is what keeps things from tilting when you start or stop moving. If you have weight just at the bottom (or a lot more weight, like a "zero" weight phone at the top and a concrete filled pipe at the bottom), you have a pendulum, and it will react to every motion. For mass, you could fill both the top and bottom pipes with equal amounts of concrete. Weigh them when the concrete has dried if you can measure that weight to the ounce, or use a balance and some additional small weights to see which one is heavier and by how much. Your phone will add a small amount of weight to the top, as will the bracket you're using to hold it. If the weight of one pipe will be a few ounces heavier than the weight of the other plus the phone and bracket, you're in business. Otherwise, attach a small weight to the bottom pipe so it will be a few ounces heavier than the top including the phone and bracket. Also, be precise matching the lengths of the arms to the top and bottom pipes. The phone doesn't weigh much, but if you mount it above the top pipe, the weight will be farther from the pivot, which will magnify it. It would be better to mount it under the top pipe to minimize that, or even better, centered between the top and bottom pipes. If you have a gimbal in multiple planes, that would complicate putting the phone on the inside.
You could compensate for the phone's distance by adding a little more weight at the bottom.
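The moment balance described above is simple arithmetic; here is a sketch where all numbers are made-up placeholders (grams, centimetres), not measurements from the rig in the question:

```python
# Masses (g) on each side of the pivot, at matched arm lengths (cm).
phone, bracket = 180.0, 40.0        # hypothetical top-side additions
top_pipe, bottom_pipe = 300.0, 300.0
d_top, d_bottom = 15.0, 15.0        # matched arm lengths, as advised

top_moment = (top_pipe + phone + bracket) * d_top
bottom_moment = bottom_pipe * d_bottom

# Extra bottom mass so the bottom ends up slightly heavier (a ~60 g
# margin keeps it upright without turning the rig into a pendulum).
margin_mass = 60.0
extra = (top_moment - bottom_moment) / d_bottom + margin_mass
print(f"add about {extra:.0f} g to the bottom pipe")  # add about 280 g
```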
{ "domain": "engineering.stackexchange", "id": 2917, "tags": "mechanical-engineering" }
Kaluza-Klein metric and Ricci scalar?
Question: The metric is \begin{equation} ds^2 = G^D_{MN}dx^M dx^N = G_{\mu\nu}dx^\mu dx^\nu + G_{dd}(dx^d + A_\mu dx^\mu)^2. \end{equation} Then \begin{equation} G^D = \begin{bmatrix} G_{\mu\nu} + G_{dd}A_\mu A_\nu& G_{dd}A_\mu\\ G_{dd}A_\nu& G_{dd} \end{bmatrix}. \end{equation} In the above $G^D_{MN}$ is $D = d+1$ dimensional metric and $G_{\mu\nu}$ is $d$-dimensional metric. How can I get the Ricci scalar \begin{equation} R = R_d - 2e^{-\sigma}\nabla^2e^\sigma - \frac14e^{2\sigma}F_{\mu\nu}F^{\mu\nu} \end{equation} from the above metric? Here $R_d$ is constructed from $G_{\mu\nu}$ and $R$ is constructed from $G^D_{MN}$ and $G_{dd} = e^{2\sigma}$. Answer: This answer is essentially an elaboration of Prahar's answer in 5D Ricci Curvature but with explicit details of how to determine the coefficients. It is an entirely straightforward albeit exceedingly tedious calculation. Let us denote the $D$ dimensional quantities with a $\; \tilde{}\; $ and the $d$-dimensional quantities without one. We start by looking at the mass-dimensions of the various quantities. These are $-1$ for $x^\mu, x^d, r $, $0$ for $\tilde G_{\mu\nu}, G_{\mu\nu}, A^\mu, \sigma$, and $2$ for $\tilde R, R, F_{\mu\nu}$. The most general combination of mass dimension two that we could use to expand $\tilde R$, keeping in mind that we have a $d$-dimensional diffeomorphism so that we should have scalars under that and also that we have a gauge symmetry, i.e. under $x^d\longrightarrow x^d + \lambda$ we have $A_\mu \longrightarrow A_\mu - \partial \lambda$, so that $A_\mu$ can only appear in gauge invariant combinations, is $$ \tilde R = a R + b F_{\mu\nu}F^{\mu\nu} + c (\nabla \sigma)^2 + d \sigma \nabla^2 \sigma + e\nabla^2 \sigma + f \tag{1} $$ Here $a,b,c,d,e$ and $f$ are to be determined. They can depend on $\sigma$ as that is a scalar in $d$-dimensional space time, but they cannot depend on $G_{\mu\nu}$ or on $A_\mu$, as any such dependence is already in $R$ and $F_{\mu\nu}F^{\mu\nu}$ respectively. 
What else can we say? Under a scaling \begin{align} x^d \longrightarrow &\, \lambda x^d \nonumber\\ A_\mu \longrightarrow &\, \lambda A_\mu \nonumber\\ e^{2\sigma} \longrightarrow&\, \lambda^{-2} e^{2\sigma} \label{eq:c8hgehkoe} \end{align} and keeping $x^\mu$ fixed, with $\lambda$ a constant, the line element $ds^2 = G^d_{\mu\nu} dx^\mu dx^\nu + e^{2\sigma} (dx^d + A_\mu dx^\mu)^2$ is manifestly invariant. Under this scaling we obviously have \begin{align} F_{\mu\nu} F^{\mu\nu} \longrightarrow &\, \lambda^2 F_{\mu\nu} F^{\mu\nu} \nonumber\\ \nabla_\mu \sigma \longrightarrow& \, \nabla_\mu \sigma \end{align} The latter relation follows from $\sigma \longrightarrow \sigma - \ln \lambda$. How does the curvature scale under this? It turns out that $\tilde R$ remains unchanged under this scaling, and so does $R$ as well. This is not too hard to see if we investigate the scaling of the metric components. The metric is \begin{align} \tilde G_{\mu\nu} = &\,G_{\mu\nu} +e^{2\sigma} A_\mu A_\nu \nonumber\\ \tilde G_{\mu d} = &\, e^{2\sigma} A_\mu \nonumber\\ \tilde G_{dd} = &\,e^{2\sigma} \end{align} The first thing we notice is that the determinant of the $D$ dimensional metric is very simply related to the determinant of the $d$ dimensional metric. One easily works out that $$ \det \tilde G = e^{2\sigma} \det G $$ The second observation is that the inverse of $\tilde G_{MN}$ is very simple \begin{align} \tilde G^{\mu\nu} = &\,G^{\mu\nu} \nonumber\\ \tilde G^{\mu d} = &\, - A^\mu \nonumber\\ \tilde G^{dd} = &\,e^{-2\sigma} + A_\mu A^\mu \end{align} Here $G^{\mu\nu}$ is the inverse of $G_{\mu\nu}$. It is easily checked that $\tilde G_{MN} \tilde G^{NK} =\delta_M^K$.
From these expressions for the metric we find the scaling \begin{align} \tilde G_{\mu\nu} \longrightarrow &\, \tilde G_{\mu\nu} \nonumber\\ \tilde G_{\mu d} \longrightarrow &\, \lambda \tilde G_{\mu d} \nonumber\\ \tilde G_{dd} \longrightarrow &\, \lambda^2\tilde G_{dd} \end{align} and for the inverse metric \begin{align} \tilde G^{\mu\nu} \longrightarrow &\, \tilde G^{\mu\nu} \nonumber\\ \tilde G^{\mu d} \longrightarrow &\, \lambda^{-1} \tilde G^{\mu d} \nonumber\\ \tilde G^{dd} \longrightarrow &\, \lambda^{-2}\tilde G^{dd} \end{align} We see that any upper index $^d$ gives a factor $\lambda^{-1}$ and any lower index $_d$ gives a factor $\lambda$. As the Ricci scalar is formed from the curvature tensor with all indices contracted and any $^d$ can only contract with a $_d$ we conclude that $\tilde R$ indeed remains unchanged. $R$ is also unchanged as both $\tilde G_{\mu\nu}$ and $\tilde G^{\mu\nu}$ are invariant under our scaling. Let us now go back to (1). The LHS is invariant under our scaling. So the RHS must be invariant as well. Recall that $a,b,c,d,e$ and $f$ are still allowed to be functions of $\sigma$, but as $R$ is invariant and $a(\sigma) R$ must also be invariant we need to have that $a$ is independent of $\sigma$. Similarly as $F_{\mu\nu} F^{\mu\nu}$ scales as $\lambda^2$, we must have that $b$ scales as $\lambda^{-2}$ and so $b\propto e^{2\sigma}$. Finally $\nabla \sigma$ is invariant, so $c,d$ and $e$ must be independent of $\sigma$. We have thus established that \begin{align} \tilde R = \alpha R + \beta e^{2\sigma} F_{\mu\nu}F^{\mu\nu} + \gamma (\nabla \sigma)^2 + \delta \sigma \nabla^2 \sigma + \varepsilon \nabla^2 \sigma + f \tag{2} \end{align} for some constants $\alpha, \beta,\gamma, \delta$ and $\varepsilon$. We can still have $f$ to be a function of $\sigma$. To fix the constants we need to work out the expression for $\tilde R$. This starts with getting the connections.
The non-zero connections turn out to be \begin{align} \tilde \Gamma^\lambda_{\mu\nu} =&\, \Gamma^{\lambda}_{\mu\nu} -e^{2\sigma}\partial^\lambda \sigma A_\mu A_\nu + \frac{1}{2}e^{2\sigma} \big( A_\nu F_{\mu}^{\;\;\lambda} + A_\mu F_{\nu}^{\;\;\lambda} \big) \nonumber\\ \tilde \Gamma^\lambda_{\mu d} = & \, - e^{2\sigma} \partial^\lambda \sigma A_\mu + \frac{1}{2} e^{2\sigma} F_{\mu}^{\;\;\lambda} \nonumber\\ \tilde \Gamma^\lambda_{d d} = & \, -e^{2\sigma} \partial^\lambda \sigma \nonumber\\ \tilde \Gamma^d_{\mu\nu} =&\,\frac{1}{2} \Big(\nabla_\mu A_\nu + \nabla_\nu A_\mu\Big) + \frac{1}{2} e^{2\sigma}\Big( A^\rho A_\nu F_{\rho\mu} +A^\rho A_\mu F_{\rho\nu} \Big)\nonumber\\ & + \partial_\mu \sigma A_\nu + \partial_\nu\sigma A_\mu +e^{2\sigma}A_\mu A_\nu A^\rho \partial_\rho \sigma \nonumber\\ \tilde \Gamma^d_{\mu d} = & \, \frac{1}{2} e^{2\sigma} A^\rho F_{\rho\mu} + e^{2\sigma} A_\mu A^\rho\partial_\rho \sigma +\partial_\mu\sigma \nonumber\\ \tilde \Gamma^d_{d d} = & \, e^{2\sigma} A^\mu \partial_\mu \sigma \end{align} If you have problems deriving this, let me know and I will give more details. Let us set $A^\mu=\sigma=0$. In that case we simply have $\tilde R = R$ and so we find that $\alpha=1$. Next, let us take $G_{\mu\nu}= \delta_{\mu\nu}$ and $A_\mu=0$, leaving only $\sigma$ free. 
The only non-vanishing metric component with upper indices and connections are then \begin{align} \tilde G^{\mu\nu} =&\, \delta^{\mu\nu} \nonumber\\ \tilde G^{dd} =&\, e^{-2\sigma} \nonumber\\ \tilde \Gamma^\mu_{dd} =&\, -e^{2\sigma} \partial^\mu \sigma\nonumber\\ \tilde \Gamma^d_{\mu d} =&\, \partial_\mu\sigma \end{align} The Ricci scalar thus reduces to \begin{align} \tilde R=&\, \tilde G^{LM} \left( \partial_N \tilde \Gamma^N_{ML} -\partial_M \tilde \Gamma^N_{NL} +\tilde \Gamma^N_{NK} \tilde \Gamma^K_{ML} -\tilde \Gamma^N_{MK} \tilde \Gamma^K_{NL} \right) \tag{3} \\ =&\, \tilde G^{\lambda\mu} \left( \partial_N \tilde \Gamma^N_{\mu\lambda} -\partial_\mu \tilde \Gamma^N_{N\lambda} +\tilde \Gamma^N_{NK} \tilde \Gamma^K_{\mu\lambda} -\tilde \Gamma^N_{\mu K} \tilde \Gamma^K_{N\lambda} \right) \nonumber\\ & + \tilde G^{dd} \left( \partial_N \tilde \Gamma^N_{dd} -\partial_d \tilde \Gamma^N_{Nd} +\tilde \Gamma^N_{NK} \tilde \Gamma^K_{dd} -\tilde \Gamma^N_{dK} \tilde \Gamma^K_{Nd} \right) \nonumber\\ =&\, \tilde G^{\lambda\mu} \left(-\partial_\mu \tilde \Gamma^d_{d\lambda} -\tilde \Gamma^d_{\mu d} \tilde \Gamma^d_{d\lambda} \right) +\tilde G^{dd} \left( \partial_\nu \tilde \Gamma^\nu_{dd} +\tilde \Gamma^d_{d\kappa} \tilde \Gamma^\kappa_{dd} -\tilde \Gamma^\nu_{dd} \tilde \Gamma^d_{\nu d} -\tilde \Gamma^d_{d\kappa} \tilde \Gamma^\kappa_{dd} \right) \nonumber\\ =&\, -\partial_\mu \partial^\mu\sigma -\partial_\mu \sigma \partial^\mu \sigma + e^{-2\sigma} \left[ \partial_\mu \left(-e^{2\sigma} \partial^\mu \sigma \right) - \partial_\mu \sigma \left(-e^{2\sigma} \partial^\mu \sigma\right)\right] \nonumber\\ =&\, -\partial_\mu \partial^\mu\sigma -\partial_\mu \sigma \partial^\mu \sigma -2\partial_\mu \sigma \partial^\mu \sigma -\partial_\mu \partial^\mu \sigma + \partial_\mu \sigma \partial^\mu \sigma \nonumber\\ =&\, -2( \partial_\mu \partial^\mu \sigma + \partial_\mu\sigma \partial^\mu\sigma) \end{align} From (2) we have for this choice of metric \begin{align} \tilde R = 
\gamma (\partial \sigma)^2 + \delta \sigma\partial^2\sigma + \varepsilon \partial^2 \sigma + f \end{align} and we thus find that $\gamma=\varepsilon=-2$ and $\delta=f=0$. To link this to the expression in (8.1.8) note that \begin{align} e^{-\sigma} \partial^2 e^{\sigma} = &\, e^{-\sigma} \partial_\mu \Big( \partial_\mu \sigma e^\sigma \Big) = \partial_\mu\partial^\mu \sigma + \partial_\mu\sigma \partial^\mu \sigma \end{align} and $e^{-\sigma} \nabla^2 e^{\sigma}$ is just the covariant expression of this. Finally, let us take $G_{\mu\nu}= \delta_{\mu\nu}$ and $\sigma=0$, leaving only the $A_\mu$ free. Here we don't have to do any calculations as the theory we have is just a Euclidean $d$ dimensional flat spacetime with an Abelian gauge field $A_\mu$ and we know that the action reduces to $-\frac{1}{4} F_{\mu\nu} F^{\mu\nu}$ so that $\beta =-\frac{1}{4}$. Let us check this. The only non-vanishing metric components with upper indices and connections are in this case \begin{align} \tilde G^{\mu\nu} =&\, \delta^{\mu\nu} \nonumber\\ \tilde G^{\mu d} = &\, -A^\mu \nonumber\\ \tilde G^{dd} =&\, 1+ A_\mu A^\mu \nonumber\\ \end{align} and \begin{align} \tilde \Gamma^\lambda_{\mu\nu} =&\, \frac{1}{2} \big( A_\nu F_{\mu}^{\;\;\lambda} + A_\mu F_{\nu}^{\;\;\lambda} \big) \nonumber\\ \tilde \Gamma^\lambda_{\mu d} = & \, \frac{1}{2} F_{\mu}^{\;\;\lambda} \nonumber\\ \tilde \Gamma^d_{\mu\nu} =&\,\frac{1}{2} \Big(\partial_\mu A_\nu + \partial_\nu A_\mu\Big) + \frac{1}{2} \Big( A^\rho A_\nu F_{\rho\mu} +A^\rho A_\mu F_{\rho\nu} \Big)\nonumber\\ \tilde \Gamma^d_{\mu d} = & \, \frac{1}{2} A^\rho F_{\rho\mu} \end{align} Let us do the four terms of the Ricci scalar (3) separately. Because $G_{\mu\nu} =\delta_{\mu\nu}$ we do not have to make a distinction between upper and lower indices in $d$-spacetime and will move them all downstairs, but we do need to be careful with the order.
We start with \begin{align} \tilde G^{LM} \partial_N \tilde \Gamma^N_{ML} =&\,\tilde G^{LM} \partial_\nu \tilde \Gamma^\nu_{ML} = \tilde G^{\lambda M} \partial_\nu \tilde \Gamma^\nu_{M\lambda} +\tilde G^{dM} \partial_\nu \tilde \Gamma^\nu_{Md} \nonumber\\ =&\,\tilde G^{\lambda \mu} \partial_\nu \tilde \Gamma^\nu_{\mu\lambda} + \tilde G^{\lambda d} \partial_\nu \tilde \Gamma^\nu_{d\lambda} + \tilde G^{d\mu} \partial_\nu \tilde \Gamma^\nu_{\mu d}+\tilde G^{dd} \partial_\nu \tilde \Gamma^\nu_{dd} \nonumber\\ =&\, \frac{1}{2}\delta_{\lambda \mu} \partial_\nu \big( A_\mu F_{\lambda\nu} + A_\lambda F_{\mu \nu} \big) -\frac{1}{2} A_\lambda \partial_\nu F_{\lambda \nu} -\frac{1}{2} A_\mu \partial_\nu F_{\mu \nu} \nonumber\\ =&\, \frac{1}{2} \Big( \partial_\nu A_\mu F_{\mu\nu} +A_\mu \partial_\nu F_{\mu\nu} +\partial_\nu A_\mu F_{\mu\nu} + A_\mu \partial_\nu F_{\mu\nu} - A_\mu \partial_\nu F_{\mu \nu} - A_\mu \partial_\nu F_{\mu \nu} \Big) \nonumber\\ =&\, \partial_\nu A_\mu F_{\mu\nu} = \frac{1}{2} F_{\mu\nu}(\partial_\nu A_\mu -\partial_\mu A_\nu) = -\frac{1}{2} F_{\mu\nu} F_{\mu\nu} \end{align} where in the last line we have antisymmetrised the result. 
Next we have \begin{align} - \tilde G^{LM} \partial_M \tilde \Gamma^N_{NL} =&\,- \tilde G^{L\mu} \partial_\mu \tilde \Gamma^N_{NL} = - \tilde G^{L\mu} \partial_\mu \tilde \Gamma^\nu_{\nu L}- \tilde G^{L\mu} \partial_\mu \tilde \Gamma^d_{dL} \nonumber\\ =&\, - \tilde G^{\lambda\mu} \partial_\mu \tilde \Gamma^\nu_{\nu \lambda }- \tilde G^{d\mu} \partial_\mu \tilde \Gamma^\nu_{\nu d }- \tilde G^{\lambda\mu} \partial_\mu \tilde \Gamma^d_{d\lambda} - \tilde G^{d\mu} \partial_\mu \tilde \Gamma^d_{dd} \nonumber\\ =&\, -\frac{1}{2} \delta_{\lambda\mu} \partial_\mu\big(A_\nu F_{\lambda\nu} + A_\lambda F_{\nu\nu}\big) +\frac{1}{2} A_\mu \partial_\mu F_{\nu\nu} -\frac{1}{2} \delta_{\lambda\mu}\partial_\mu \big(A^\rho F_{\rho \lambda} \big)\nonumber\\ =&\, \frac{1}{2} \Big( -\partial_\mu A_\nu F_{\mu\nu} - A_\nu \partial_\mu F_{\mu\nu} -\partial_\mu A_\rho F_{\rho\mu} - A_\rho \partial_\mu F_{\rho\mu} \Big) =0 \end{align} The third and fourth terms with the products of the connections are a bit more work, albeit straightforward. The result is \begin{align} \tilde G^{LM} \tilde \Gamma^N_{NK} \tilde \Gamma^K_{ML} = &\,0 \nonumber\\ -\tilde G^{LM} \tilde \Gamma^N_{MK} \tilde \Gamma^K_{NL} =&\, \frac{1}{4} F_{\mu\nu} F_{\mu\nu} \end{align} Bringing the four terms together we conclude that \begin{align} \tilde R = -\frac{1}{2} F_{\mu\nu} F_{\mu\nu}+ \frac{1}{4} F_{\mu\nu} F_{\mu\nu} = -\frac{1}{4} F_{\mu\nu} F_{\mu\nu} \end{align} and hence $\beta=-1/4$.
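As a cross-check of the $\sigma$-only result above, convention (3) for the Ricci scalar can also be evaluated symbolically. Below is a sketch with sympy for the two-dimensional warped metric $\mathrm{diag}(1, e^{2\sigma(x)})$ (the pure-$\sigma$ case restricted to one ordinary dimension plus the extra one), which should reproduce $\tilde R = -2(\sigma'' + \sigma'^2)$:

```python
import sympy as sp

# Warped 2D metric diag(1, e^{2*sigma(x)}): the sigma-only case of the text,
# restricted to one ordinary dimension plus the extra one.
x, y = sp.symbols('x y')
coords = [x, y]
sigma = sp.Function('sigma')(x)
g = sp.Matrix([[1, 0], [0, sp.exp(2 * sigma)]])
ginv = g.inv()
dim = 2

def Gamma(l, m, n):
    """Christoffel symbol Gamma^l_{mn} of the metric g."""
    return sp.Rational(1, 2) * sum(
        ginv[l, k] * (sp.diff(g[k, m], coords[n])
                      + sp.diff(g[k, n], coords[m])
                      - sp.diff(g[m, n], coords[k]))
        for k in range(dim))

def Ricci(m, n):
    """Ricci tensor R_{mn} with the same index placement as equation (3)."""
    return (sum(sp.diff(Gamma(l, m, n), coords[l]) for l in range(dim))
            - sum(sp.diff(Gamma(l, l, m), coords[n]) for l in range(dim))
            + sum(Gamma(l, l, k) * Gamma(k, m, n)
                  for l in range(dim) for k in range(dim))
            - sum(Gamma(l, n, k) * Gamma(k, l, m)
                  for l in range(dim) for k in range(dim)))

R = sp.simplify(sum(ginv[m, n] * Ricci(m, n)
                    for m in range(dim) for n in range(dim)))
expected = -2 * (sp.diff(sigma, x, 2) + sp.diff(sigma, x)**2)
print(sp.simplify(R - expected))  # 0
```

The same scaffolding, extended to higher dimensions, can be used to check the gauge-field case as well.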
{ "domain": "physics.stackexchange", "id": 68491, "tags": "general-relativity, differential-geometry, metric-tensor, gauge-theory, kaluza-klein" }
genetic difference subpopulations vs movement rate
Question: Someone told me that if a geneticist finds no significant difference between 2 subpopulations that have temporal spatial overlap it might be that they are still almost closed (no connection). Is this right? Can I deduce anything about the individual exchange rate between populations and the genetic similarity/difference? For instance, one would need 5% of individuals to switch populations to keep them almost genetically identical. Does this differ between species (let's say fish and mammals)? Thanks. ps: any references to this would be great Answer: [..] if a geneticist finds no significant difference between 2 subpopulations that have temporal spatial overlap This isn't clear. Significant difference between what? What statistical hypothesis are you testing for which one would find no significant difference (accept the null hypothesis)? Are you maybe thinking about a t-test for a difference between the average phenotypic traits? it might be that they are still almost closed (no connection). Is this right? Do you mean no gene flow (aka. no migration)? Can I deduce anything about the individual exchange rate between populations and the genetic similarity/difference? "Individual exchange rate" is typically called "migration" or "gene flow". To some extent, yes you can. But the question is really broad and will be hard to answer in the general sense. I will just provide a very common methodology here. You can compute $F_{ST}$, a statistic of population divergence. A very good paper explaining the math of $F_{ST}$ is Nei (1973). I recommend computing $F_{ST}$ following Weir and Cockerham (1984). Then, if you can assume a finite island model (equal migration rate $m$ between any two populations) and a constant and known population size $N$, then $$F_{ST} = \frac{1}{1 + 4N(m+\mu) \frac{d}{d-1}},$$ where $d$ is the number of demes (number of islands). This equation can be found in many places.
Slatkin (1995) is a very good paper, but note that its final solution differs a little bit from the one I presented (by a square term) for reasons explained in Charlesworth (1998). You can solve for $m$ by plugging in the observed $F_{ST}$ and $N$. Note that depending on the markers used, $\mu$ will be negligible in comparison to $m$. You might want to have a look at the very related post How could one calculate the gene flow between two populations?
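That inversion is easy to script. Here is a sketch with made-up numbers (`migration_rate` is a hypothetical helper of my own, not from any package):

```python
def migration_rate(fst, N, d, mu=0.0):
    """Invert F_ST = 1 / (1 + 4*N*(m + mu) * d/(d - 1)) for the
    migration rate m (finite island model, d demes of size N)."""
    return (1.0 / fst - 1.0) * (d - 1) / (4.0 * N * d) - mu

# e.g. an observed F_ST of 0.05 among d = 10 demes of size N = 1000
m = migration_rate(fst=0.05, N=1000, d=10)
print(round(m, 6))  # about 0.004, i.e. roughly 4 migrants per 1000 per generation
```

Feeding the recovered $m$ back into the island-model formula reproduces the observed $F_{ST}$, which is a quick way to check the inversion.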
{ "domain": "biology.stackexchange", "id": 7840, "tags": "genetics, movement" }
Modified shortest path problem
Question: For a given graph $G=(V,E)$ and a given weight function $w$, let's say we define the new weight of a path $p = (v_1, \dots, v_{k+1})$ to be the regular weight minus the heaviest edge in that path, i.e. $$w^*(p)=\sum_{i=1}^{k} w(v_i,v_{i+1}) - \max\{w(v_i,v_{i+1}) \mid 1\leq i \leq k\}$$ The complexity needs to be $O(V+E\log V)$. Obviously I thought about Dijkstra: define a new weight function such that the shortest path according to that weight function is the shortest path we are looking for, and then just run Dijkstra's algorithm on the graph using the new weight function. However, I can't think of such a function. Does anyone have an idea? Thanks in advance. Answer: Notice that if you are allowed to traverse one edge without paying its weight, then the shortest path is exactly what you need. Create 2 copies of $G$: $G_1,G_2$. For every edge $e=(v,u)\in G$ also add an edge $(v_1,u_2)$ between the node $v$ of $G_1$ and the node $u$ of $G_2$, and set those edges' weights to $0$. Now, find a shortest path between $s_1$ and $t_2$ (using Dijkstra)
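The two-copy construction can be sketched directly. This is a hedged sketch with my own naming (`shortest_drop_heaviest`); the graph is taken as undirected, and the second layer index means "the free edge has already been used":

```python
import heapq

def shortest_drop_heaviest(n, edges, s, t):
    """Shortest path from s to t where the heaviest edge on the path is
    free: run Dijkstra on two copies of the graph, where any one edge may
    be crossed at cost 0 while jumping from copy 0 to copy 1."""
    adj = [[] for _ in range(n)]
    for u, v, w in edges:            # undirected; adapt for directed graphs
        adj[u].append((v, w))
        adj[v].append((u, w))
    INF = float('inf')
    dist = [[INF, INF] for _ in range(n)]
    dist[s][0] = 0
    pq = [(0, s, 0)]
    while pq:
        d, u, layer = heapq.heappop(pq)
        if d > dist[u][layer]:
            continue                          # stale queue entry
        for v, w in adj[u]:
            if d + w < dist[v][layer]:        # pay for the edge, stay put
                dist[v][layer] = d + w
                heapq.heappush(pq, (d + w, v, layer))
            if layer == 0 and d < dist[v][1]: # skip this edge's weight once
                dist[v][1] = d
                heapq.heappush(pq, (d, v, 1))
    return dist[t][1]
```

For the path $0\to1\to2$ with weights 10 and 1 this returns 1, since the weight-10 edge is the one dropped.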
{ "domain": "cs.stackexchange", "id": 16301, "tags": "algorithms, graphs, shortest-path, dijkstras-algorithm" }
In scuba diving, are nitrogen narcosis and high pressure nervous syndrome the same thing?
Question: In training for scuba diving, they tell you that when you're below 100 ft or so you have to watch out for changes in mental state that resemble drunkenness. The cause of these mental disturbances is called nitrogen narcosis, and it has something to do with the increased pressure on the nitrogen component of the gas that you're breathing out of your tank. I just read this article from the what-if section of XKCD (halfway down the page, under the header about Michael Phelps) that mentioned a very similar sounding disorder called high pressure nervous syndrome. Are nitrogen narcosis and high pressure nervous syndrome related, or are the effects of high pressure nitrogen and high pressure by itself separable? Answer: These are different things. Nitrogen Narcosis, or more commonly Gas Narcosis, is the narcotic effect of gases like Oxygen and Nitrogen under pressure. The effects are most prominent below 30 m, and commercial/technical divers doing dives to 40/50 m on air tend to experience the effects most. (Narcosis is managed by adding an inert gas - in most cases Helium - to air to form a mixture referred to as Trimix (Helium/Oxygen/Nitrogen); this then produces an effect as if the diver is diving to a shallower depth and is referred to as Equivalent Narcotic Depth. In some cases Heliox (Helium & Oxygen) was also used.) High Pressure Nervous Syndrome is an interesting aspect of deep diving and used to be called Helium Jitters/Tremors, as the accepted view was that it was caused by breathing Helium mixtures below 150 m. Some people hold this to be true, and others claim that it has nothing to do with Helium but rather with the pressure disrupting the flow of electric signals through the nervous system. What is interesting is that they found that if you used Trimix (Helium/Nitrogen/Oxygen) rather than Heliox (Helium/Oxygen), the onset happened later or was less severe. And the slower your descent, the slower the onset of the symptoms.
Some Docs to Read: Narcosis - Relatively simple document on Narcosis Narcosis - "Narcosis is not unique to nitrogen; however, it can occur with many of the so-called “noble” or inert gases, with the exception of helium. Add to this the fact that other inert gases each have their own brand of narcotic effects at depth, and you have a complicated picture for technical and commercial divers. One of these rare gases, argon, for example, has about twice the narcotic potency of nitrogen, but helium has very weak narcotic properties and is less soluble than nitrogen in body tissues." HPNS - Relatively Simple doc on HPNS HPNS - Very technical discussion on gasses, from a HPNS and Narcosis view point. References: Bennett, P.B. 1982b. The high pressure nervous syndrome in man. In: The Physiology and Medicine of Diving and Compressed Air Work. (P.B. Bennett and D.H. Elliot, eds), Balliere-Tindall, London. pp. 262-296. Bennett, P.B. 1990. Inert gas narcosis and HPNS. In: Diving Medicine, Second Edition (A.A. Bove and J.C. Davis, eds.).W.B. Saunders Company, Philadelphia. pp. 69-81. Bennett, P.B, R. Coggin, and J. Roby. 1981. Control of HPNS in humans during rapid compression with trimix to 650 m (2132 ft). Undersea Biomed. Res., 8(2): 85-100.
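The Equivalent Narcotic Depth mentioned above has a simple metric rule of thumb. A sketch under common assumptions (10 m of seawater per atmosphere, and oxygen counted as narcotic like nitrogen, which not every training agency does):

```python
def end_metres(depth_m, he_fraction):
    """Equivalent narcotic depth (metres) of a helium-based mix,
    treating everything except helium as narcotic."""
    return (depth_m + 10.0) * (1.0 - he_fraction) - 10.0

# a 60 m dive on a trimix with 45% helium "feels" like a ~28.5 m air dive
print(end_metres(60, 0.45))
```

Under these assumptions, raising the helium fraction lowers the equivalent depth linearly, which is why trimix blends are chosen per planned depth.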
{ "domain": "biology.stackexchange", "id": 2710, "tags": "neuroscience, pathology, human-physiology, breathing" }
How can I get big bubbles when I electrolyse water with detergent?
Question: I've made carbon electrodes by putting 12V through a couple of short 15Ω pencils, but when I try to make hydrogen and oxygen bubbles from soapy water, I get tiny bubbles, smaller than table salt. If I move the electrodes closer together, it's the same tiny bubbles, but a bit faster. These don't catch fire very convincingly, and my daughter is unimpressed. How can I get larger bubbles? Different electrodes? Higher voltage? Just shove the AC power in straight from the wall? :) Answer: This might be tricky. The demos I've seen mostly rely on collecting the hydrogen and oxygen, separately or together, and then bubbling them through a soap solution. If you've got the soap in the electrolyte, you might have a hard time getting anything more than two piles of foam, one of which is slightly flammable. I doubt the hydrogen foam would even rise in air; the volume percentage of gas is too low.
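To see why the bubble stream is so feeble, a back-of-the-envelope Faraday's-law estimate helps. The 0.4 A current below is my own guess for a 12 V supply through resistive pencil electrodes, not a measured value:

```python
F = 96485.0  # Faraday constant, C/mol

def h2_ml_per_min(current_A, molar_volume_L=24.0):
    """Hydrogen volume produced at the cathode per minute, taken as an
    ideal gas near room conditions; 2 electrons are needed per H2."""
    mol_per_s = current_A / (2.0 * F)
    return mol_per_s * molar_volume_L * 1000.0 * 60.0

print(round(h2_ml_per_min(0.4), 2))  # roughly 3 mL of H2 per minute
```

A few millilitres per minute, dispersed as a mist of tiny bubbles through the soap solution, is consistent with the underwhelming ignition described in the question.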
{ "domain": "chemistry.stackexchange", "id": 15424, "tags": "electrolysis" }
Diagonalization of a hamiltonian for a quantum wire with proximity-induced superconductivity
Question: I'm trying to diagonalize the Hamiltonian for a 1D wire with proximity-induced superconductivity. In the case without a superconductor it's all fine. However, with a superconductor I don't get the correct result for the energy spectrum of the Hamiltonian in Tudor D. Stanescu and Sumanta Tewari. “Majorana fermions in semiconductor nanowires: fundamentals, modeling, and experiment.” Journal of Physics: Condensed Matter 25, no. 23 (2013): 233201. arXiv:1302.5433 [cond-mat.supr-con]. given by $$ H = \eta_{k}\tau_{z} + B\sigma_{x} + \alpha k\sigma_{y}\tau_{z} + \Delta\tau_{x} $$ Here $\sigma$ and $\tau$ are the Pauli matrices for the spin and particle-hole space. Now the correct result is: $E^{2}_{k} = \Delta^{2} + \eta_{k}^{2} + B^{2} + \left(\alpha k\right)^{2} \pm 2\sqrt{B^{2}\Delta^{2} + \eta^{2}_{k}B^{2} + \eta^{2}_{k}\left(\alpha k\right)^{2}}$ Now, my problem is that I don't know how I can bring the Hamiltonian in the correct matrix form for the calculation of the eigenvalues. If I try it with the upper Hamiltonian I have completely wrong results for the energy spectrum. I believe my mistake is the interpretation of the Pauli matrices $\tau$ but I don't know how I can write the Hamiltonian in the form to get the correct eigenvalues. Answer: Here is a cute little trick I've often found pretty handy: just keep squaring your matrices until they're diagonal! In this case you're going to have to make use of the standard identities of Pauli matrices $$\left\{ \sigma_{i},\sigma_{j}\right\} =2\delta_{ij}$$ You also need to make use of the fact that the different “species” of Pauli matrices, $\sigma$ and $\tau$, won’t “see” each other. In other words, when you’re working through the algebra, Pauli matrices of different species can pass through each other as if they were scalars. 
Anyways, the given Hamiltonian is $$H = \eta_{k}\tau_{z}+B\sigma_{x}+\alpha k\sigma_{y}\tau_{z}+\Delta\tau_{x}$$ As I mentioned above, we first square it: $$H^2 = \eta_{k}^{2}\tau_{z}^{2}+B^{2}\sigma_{x}^{2}+\left(\alpha k\right)^{2}\sigma_{y}^{2}\tau_{z}^{2}+\Delta^{2}\tau_{x}^{2}+2B\eta_{k}\tau_{z}\sigma_{x}+2\alpha k\eta_{k}\sigma_{y}\tau_{z}^{2}+2\Delta B\sigma_{x}\tau_{x}+\Delta\eta_{k}\left\{ \tau_{z},\tau_{x}\right\} +\alpha kB\left\{ \sigma_{x},\sigma_{y}\right\} \tau_{z}+\alpha k\Delta\sigma_{y}\left\{ \tau_{z},\tau_{x}\right\}$$ Now, using the anticommutator identity for either species of Pauli matrices, the above expression simplifies to $$H^2 = \eta_{k}^{2}+B^{2}+\left(\alpha k\right)^{2}+\Delta^{2}+2B\eta_{k}\tau_{z}\sigma_{x}+2\alpha k\eta_{k}\sigma_{y}+2\Delta B\sigma_{x}\tau_{x}$$ For reasons that will become obvious shortly, we rearrange the above expression in the following way and square it $$\left(H^{2}-\eta_{k}^{2}-B^{2}-\left(\alpha k\right)^{2}-\Delta^{2}\right)^{2}=\left(2B\eta_{k}\tau_{z}\sigma_{x}+2\alpha k\eta_{k}\sigma_{y}+2\Delta B\sigma_{x}\tau_{x}\right)^{2}$$ Expanding out that further we get $$\left(H^{2}-\eta_{k}^{2}-B^{2}-\left(\alpha k\right)^{2}-\Delta^{2}\right)^{2} = 4B^{2}\eta_{k}^{2}\tau_{z}^{2}\sigma_{x}^{2}+4\left(\alpha k\right)^{2}\eta_{k}^{2}\sigma_{y}^{2}+4\Delta^{2}B^{2}\sigma_{x}^{2}\tau_{x}^{2}+4\alpha kB\eta_{k}^{2}\tau_{z}\left\{ \sigma_{x},\sigma_{y}\right\} +4\Delta B^{2}\eta_{k}\sigma_{x}^{2}\left\{ \tau_{x},\tau_{z}\right\} +4\alpha k\eta_{k}\Delta B\left\{ \sigma_{x},\sigma_{y}\right\} \tau_{x}$$ Once again, using the anticommutators identities we get $$\left(H^{2}-\eta_{k}^{2}-B^{2}-\left(\alpha k\right)^{2}-\Delta^{2}\right)^{2}=4B^{2}\eta_{k}^{2}+4\left(\alpha k\right)^{2}\eta_{k}^{2}+4\Delta^{2}B^{2}$$ Note that the above expression contains only diagonal matrices; we have effectively diagonalized the Hamiltonian. 
This can be made more explicit by writing $$H^{2}=\left[\eta_{k}^{2}+B^{2}+\left(\alpha k\right)^{2}+\Delta^{2}\pm2\sqrt{B^{2}\eta_{k}^{2}+\left(\alpha k\right)^{2}\eta_{k}^{2}+\Delta^{2}B^{2}}\right]\mathbb{I}_{4\times4}$$ Now, from the above expression, it's not hard to figure out that $$E_{k}^{2}=\eta_{k}^{2}+B^{2}+\left(\alpha k\right)^{2}+\Delta^{2}\pm2\sqrt{B^{2}\eta_{k}^{2}+\left(\alpha k\right)^{2}\eta_{k}^{2}+\Delta^{2}B^{2}}$$ Forgive me if you're wondering: "with all this algebra, how is that a trick?" Well, this was a tough example. But this trick is pretty general whenever your Hamiltonian consists of matrices (or their tensor products) which satisfy the Clifford algebra. I'm sure you can pick a much simpler example where this trick will really be a trick. For example, you can check the Hamiltonian in equation (51) of: Martin Leijnse and Karsten Flensberg. "Introduction to topological superconductivity and Majorana fermions." Semiconductor Science and Technology 27, no. 12 (2012): 124003 (also on arXiv), where you can simply compute the eigenvalues (equation (52)) in your head using this trick.
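The closed-form spectrum is easy to check numerically. A minimal sketch follows; the parameter values are arbitrary, and the spin ⊗ particle-hole tensor ordering is my own convention (the eigenvalues do not depend on it):

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.diag([1.0 + 0j, -1.0])
I2 = np.eye(2)

eta, B, ak, D = 0.3, 0.7, 0.5, 0.4   # eta_k, B, alpha*k, Delta (arbitrary)

# H = eta tau_z + B sigma_x + alpha k sigma_y tau_z + Delta tau_x,
# with sigma on the first tensor factor and tau on the second
H = (eta * np.kron(I2, sz) + B * np.kron(sx, I2)
     + ak * np.kron(sy, sz) + D * np.kron(I2, sx))

base = eta**2 + B**2 + ak**2 + D**2
root = 2 * np.sqrt(B**2 * D**2 + eta**2 * B**2 + eta**2 * ak**2)

E2 = np.sort(np.linalg.eigvalsh(H)**2)
expected = np.sort([base - root, base - root, base + root, base + root])
assert np.allclose(E2, expected)
print("spectrum matches the E_k^2 formula")
```

Each branch of the square root appears twice among the eigenvalues of $H^2$, exactly as the projector structure of the squaring trick predicts.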
{ "domain": "physics.stackexchange", "id": 8969, "tags": "energy, condensed-matter, eigenvalue" }
Reaction of cyclooctatetraene with sulfuric acid
Question: Problem The correct statement is (A) P & Q are aromatic compound and Q has $\mathrm{sp^3}$-hybridized carbon atom. (B) P is aromatic with 10 π electrons and Q is aromatic with 2 π electrons. (C) P is aromatic and Q is anti-aromatic. (D) P is aromatic and Q is non-aromatic. Answer (A) P & Q are aromatic compound and Q has sp3-hybridized carbon atom. Question I got P easily; the two $\ce{K}$ atoms would lose two electrons and form the cyclooctatetraene dianion, which will become planar and aromatic with 10 π electrons. However, I could not find what Q will be. My first step was a proton getting attached to one carbon of any of the four $\ce{C=C}$, which will give $\mathrm{sp^3}$ hybridization on that carbon which accepted the proton. There will now be another adjacent carbon with a positive charge on it, which will be delocalized through the 6 π bonds. However, the presence of $\mathrm{sp^3}$ carbon in between will prevent complete conjugation throughout the ring. I don't think the next step can be the attack of $\ce{HSO4^-}$ as a nucleophile, because $\ce{HSO4^-}$ is a stable anion and hence a very poor nucleophile. Water isn't mentioned, so I couldn't assume attack of $\ce{OH^-}$ as a nucleophile either. In any case, I don't see how Q can be aromatic. What am I missing? Answer: This is a rather unusual and interesting case of aromaticity, which has been given a special name: homoaromaticity. The Wikipedia page does a quite nice job of explaining what's going on. As you state, protonation of cyclooctatetraene generates a $\ce{C8H9^{+}}$ cation containing an $\mathrm{sp^3}$-hybridised carbon atom between all the other $\mathrm{sp^2}$-hybridised carbons. However, what's surprising is that this does not imply aromaticity must be broken. If you comfortably understand that the cyclooctatetraenide anion $\ce{C8H8^{2-}}$ is aromatic, you probably also know that the tropylium cation $\ce{C7H7^{+}}$ is also aromatic. 
As it turns out, this aromaticity is sufficiently strong that if you add a $\mathrm{sp^3}$-hybridised $\mathrm{CH_2}$ linkage, the cation pushes the $\mathrm{sp^3}$ carbon out of the way to allow through-space conjugation and maintain aromaticity. The homotropylium cation is exactly what cyclooctatetraene rearranges into upon protonation. The evidence for the aromatic nature of the homotropylium cation is ironclad. NMR spectrometry is one of the best techniques to determine the presence and strength of aromaticity in a compound, due to aromatic ring currents. In the homotropylium cation, the hydrogen atoms in the $\mathrm{CH_2}$ linkage are magnetically inequivalent, and indeed, starkly so - their $\mathrm{^1H}$ chemical shifts differ by over 6 ppm. In particular, one hydrogen atom lies above the ring and is heavily shielded by the aromatic ring current, generating a signal at -0.73 ppm which is very unusual for a carbocation. It would be excellent to study the crystal structure of the homotropylium cation, but there do not seem to be any experimental data for examples containing the parent ion, only derivatives such as a hydroxylated homotropylium cation (in fact, I'm surprised the parent ion hasn't been crystallised as a carborane superacid salt). Nevertheless, structural evidence for aromaticity can also be found in the derivatives, in the approximate coplanarity of the $\mathrm{sp^2}$-hybridised carbons and the unusual distance between the $\mathrm{sp^2}$ carbons directly attached to the $\mathrm{sp^3}$ carbon (longer than almost any C-C single bond, but far too short for isolated carbon atoms in the absence of aggressive steric constraints). For some more information, check out this reference: The homotropylium ion and homoaromaticity, Ronald F. Childs, Accounts of Chemical Research, 1984, 17 (10), 347-352 DOI: 10.1021/ar00106a001
{ "domain": "chemistry.stackexchange", "id": 15827, "tags": "organic-chemistry, reaction-mechanism, aromaticity" }
Why do nanoparticles have a different color than their macro counterparts?
Question: It astonished me when I learned that at nanoscale, gold is no longer "gold",rather it's red. Other elements' nanoparticles also have a different color than their macro counterparts. Why does this phenomenon occur? Answer: There is a fairly simple explanation of why small-enough particles are different from bulk materials. Once the particle becomes similar in size to the wavelengths of light involved then quantum effects start to matter for how the particle behaves. The actual mechanisms that give specific colour may vary, but the main point is that the size of the particle comes to dominate the bulk properties of the material. In a bulk semiconductor, for example, the electronic properties depend on the band gap between occupied electron levels and unoccupied electron levels. When the particle is small enough, however, the electrons are more constrained and become more like an electron confined to a small box. This can mean that the possible energy levels of the electrons are determined by the physical size of the box not the bulk properties of the material. Hence any emission or absorption of light will be different to the bulk material. The additional physical constraints impose a different set of energy levels on the electrons. The actual details can be quite complicated but the basic idea is that when particles are small compared to the wavelengths of light, you have to take into account the extra constraints on the system when calculating possible energy levels involved in electron transitions which are what causes colour.
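The "electron in a box" picture from this answer can be made quantitative. A rough sketch using the 1D particle-in-a-box levels (real nanoparticles are three-dimensional and involve effective masses, so treat the numbers as order-of-magnitude only):

```python
H_PLANCK = 6.626e-34   # Planck constant, J s
M_E = 9.109e-31        # electron mass, kg
EV = 1.602e-19         # J per eV

def box_gap_eV(L_nm, n=1):
    """Spacing between levels n and n+1 of a 1D box of width L_nm:
    E_n = n^2 h^2 / (8 m L^2), so the gap scales as 1/L^2."""
    L = L_nm * 1e-9
    E = lambda k: k**2 * H_PLANCK**2 / (8 * M_E * L**2)
    return (E(n + 1) - E(n)) / EV

# shrinking the particle widens the gap: absorption shifts to the blue
for L in (20.0, 5.0, 2.0):
    print(f"{L:5.1f} nm -> {box_gap_eV(L):.4f} eV")
```

The inverse-square dependence on size is the key point: halving the particle diameter quadruples the level spacing, which is why the colour of a nanoparticle tracks its size rather than the bulk band structure.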
{ "domain": "chemistry.stackexchange", "id": 6652, "tags": "nanotechnology" }
Salt analysis (cation)
Question: I'm reading about salt analysis and I've got some questions: Image source: pdf Why are some sulphides (group II) precipitated in presence of HCl while others (group IV) in presence of ammonia solution? Can NaOH be used in place of ammonia solution? What's the role of NH4Cl in testing for group III? Will Na2CO3 work for group V? (The groups here are not related to groups of periodic table.) Answer: Let me answer your three questions. Some sulphides are soluble in acidic solutions (group IV). Some are insoluble in the same sort of acidic solution (group II). It is a question of solubility product. It can be taken as an experimental result, a fact. If no $\ce{NH4Cl}$ were present in the solution tested for Group III, the precipitates $\ce{Al(OH)3}$ and $\ce{Fe(OH)3}$ may remain in a colloidal state, and will cross the filter paper. This is a serious drawback. In the presence of enough ions, like $\ce{NH4^+}$ and $\ce{Cl-}$, the colloid is transformed into a precipitate. $\ce{NaOH}$ can replace ammonia solution, but it will prevent the precipitation of $\ce{Al(OH)3}$ in group III and $\ce{Zn(OH)2}$ in group IV. These precipitates will redissolve in $\ce{NaOH}$ to form a solution of aluminate $\ce{Al(OH)4^-}$ and zincate $\ce{[Zn(OH)4]^{2-}}$. So the method would not detect $\ce{Al}$ or $\ce{Zn}$.
{ "domain": "chemistry.stackexchange", "id": 17082, "tags": "experimental-chemistry, salt" }
Fermion anti-commutation relations
Question: The fermion anti-commutation relations are given as $$\{\psi_{\alpha}({\bf x},t),\psi_{\beta}^{\dagger}{(\bf x'},t)\} = \delta_{\alpha,\beta} \, \delta({\bf x} - {\bf x'}).$$ I am interested in determining $\{\psi_{\alpha}({\bf x},t),{\bar \psi}{(\bf x'},t) \psi({\bf x'},t)\}$. Does $\{\psi_{\alpha}({\bf x},t),{\bar \psi}_{\beta} ({\bf x'},t)\}$ simplify to anything? In general you have $\{\psi_{\alpha}({\bf x},t),(\psi^{\dagger}({\bf x'},t) \, \gamma^0)_{\beta}\}$ which is equal to $$\{\psi_{\alpha}({\bf x},t),\psi^{\dagger}_{\rho}({\bf x'},t) \gamma^0_{\rho\beta}\},$$ with the sum over $\rho$ assumed. In the energy representation, for example, it is straightforward to check that the $\gamma^0_{\rho\beta}$ can be taken outside the anti-commutator, but how do you show this in general (if instead of $\gamma^0$ you had, say, $\gamma^1$ then this is not so obvious since the $\gamma_1$ involves the Pauli matrix $\sigma_1$ and the spinor $\psi$ also involves $\sigma$ so it doesn't look easy to see that it would be true in this case)? Answer: As you correctly state: \begin{align} \{\psi_{\alpha}({\bf x},t),{\bar \psi}_{\beta} ({\bf x'},t)\} &= \{\psi_{\alpha}({\bf x},t),(\psi^{\dagger}({\bf x'},t) \, \gamma^0)_{\beta}\} = \{\psi_{\alpha}({\bf x},t),\psi^{\dagger}_{\rho}({\bf x'},t) \gamma^0_{\rho\beta}\} \\ &= \psi_{\alpha}({\bf x},t)\psi^{\dagger}_{\rho}({\bf x'},t) \gamma^0_{\rho\beta} + \psi^{\dagger}_{\rho}({\bf x'},t) \gamma^0_{\rho\beta} \psi_{\alpha}({\bf x},t) \\ &= \gamma^0_{\rho\beta}\left(\psi_{\alpha}({\bf x},t) \psi^{\dagger}_{\rho}({\bf x'},t) + \psi^{\dagger}_{\rho}({\bf x'},t) \psi_{\alpha}({\bf x},t)\right) \\ &= \gamma^0_{\rho\beta} \{\psi_{\alpha}({\bf x},t),\psi^{\dagger}_{\rho}({\bf x'},t) \} \\ &= \gamma^0_{\alpha\beta}\delta(\mathbf{x}-\mathbf{x}') \end{align} After explicitly writing indices on everything, we are just dealing with products of (Grassman) numbers. 
$\gamma^0_{\alpha\beta}$ commutes with any other element, so it can be taken out. The commutation relations between $\psi$ and $\bar{\psi}\psi$ should be expressed as commutators, because $\psi$ is a fermion and $\bar{\psi}\psi$ is a boson. Using the equation above and $\{\psi_\alpha(\mathbf{x},t),\psi_\beta(\mathbf{x}',t)\}=0$ we get \begin{align} [\psi_{\alpha}({\bf x},t),{\bar \psi}({\bf x'},t) \psi({\bf x'},t)] =& [\psi_{\alpha}({\bf x},t),{\bar \psi}_\beta({\bf x'},t) \psi_\beta({\bf x'},t)] \\ =& \psi_{\alpha}({\bf x},t){\bar \psi}_\beta({\bf x'},t) \psi_\beta({\bf x'},t) - {\bar \psi}_\beta({\bf x'},t)\psi_\beta({\bf x'},t) \psi_{\alpha}({\bf x},t) \\ =& \{\psi_{\alpha}({\bf x},t),{\bar \psi}_\beta({\bf x'},t)\} \psi_\beta({\bf x'},t) - {\bar \psi}_\beta({\bf x'},t)\psi_{\alpha}({\bf x},t) \psi_\beta({\bf x'},t) \\ &- {\bar \psi}_\beta({\bf x'},t)\psi_\beta({\bf x'},t) \psi_{\alpha}({\bf x},t) \\ =& \gamma^0_{\alpha\beta}\delta(\mathbf{x}-\mathbf{x}') \psi_\beta({\bf x'},t) + {\bar \psi}_\beta({\bf x'},t)\psi_\beta({\bf x'},t) \psi_{\alpha}({\bf x},t) - {\bar \psi}_\beta({\bf x'},t)\psi_\beta({\bf x'},t) \psi_{\alpha}({\bf x},t) \\ =& \delta(\mathbf{x}-\mathbf{x}')\,\big(\gamma^0\psi({\bf x'},t)\big)_\alpha \end{align} where in the second-to-last step we used $\{\psi_\alpha,\psi_\beta\}=0$ to rewrite $-{\bar \psi}_\beta\psi_{\alpha} \psi_\beta$ as $+{\bar \psi}_\beta\psi_\beta \psi_{\alpha}$, so that the last two terms cancel.
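As a finite-dimensional sanity check of this operator algebra, the same manipulation can be run with explicit matrices. Below is a sketch (my own construction, not from the original answer) using a two-mode Jordan-Wigner representation; a generic constant matrix $\Gamma$ stands in for $\gamma^0$, and the discrete analogue of the result is $[c_\alpha, c^\dagger_\rho \Gamma_{\rho\beta} c_\beta] = \Gamma_{\alpha\beta}\, c_\beta$:

```python
import numpy as np

# Two-mode Jordan-Wigner fermions as a finite-dimensional analogue of the
# field operators; a generic matrix Gamma stands in for gamma^0.
a = np.array([[0., 1.], [0., 0.]])   # single-mode annihilator
Z = np.diag([1., -1.])
I2 = np.eye(2)

c = [np.kron(a, I2), np.kron(Z, a)]  # c_1, c_2 with the JW string
cd = [m.conj().T for m in c]

anti = lambda A, B: A @ B + B @ A
comm = lambda A, B: A @ B - B @ A

# canonical anticommutators {c_i, c_j^dagger} = delta_ij
for i in range(2):
    for j in range(2):
        assert np.allclose(anti(c[i], cd[j]), (i == j) * np.eye(4))

# the bilinear c^dagger Gamma c, with Gamma an arbitrary constant matrix
rng = np.random.default_rng(0)
Gamma = rng.standard_normal((2, 2))
bilinear = sum(Gamma[r, b] * cd[r] @ c[b] for r in range(2) for b in range(2))

# [c_alpha, c^dagger Gamma c] = (Gamma c)_alpha
for al in range(2):
    rhs = sum(Gamma[al, b] * c[b] for b in range(2))
    assert np.allclose(comm(c[al], bilinear), rhs)
print("CAR bilinear commutator identity verified")
```

For $\Gamma$ the identity this reduces to the familiar number-operator relation $[c_i, \hat n_j] = \delta_{ij} c_i$.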
{ "domain": "physics.stackexchange", "id": 36461, "tags": "homework-and-exercises, quantum-field-theory, fermions, anticommutator" }
Constructing W-algebras
Question: I am following the algorithm in W-algebras with two and three generators, in order to construct consistent (anti-)commutator relations for a particular W-algebra. I am considering $W(2,4,4)$ where both dimension four operators are fermionic. I have two questions related to the method introduced in the paper, namely: They use Jacobi identities (e.g. $\{\Psi_m,\{\Psi_n,\Psi_\ell\}\} + \mathrm{permutations} = 0$) to fix some of the arbitrary constants, but this isn't sufficient to constrain all of them. How do they calculate the rest? (There is some step in the example involving computing determinants but it isn't clear how this relates to the notation of the previous section.) For fermionic operators, when computing $\{\Psi_m,\{\Psi_n,\Psi_\ell\}\}$, due to the terms that appear in the inner anti-commutator, you encounter stuff like $\{\Psi_m, L_{n+\ell}\}$ but normally, we use a commutator in this case, as one element is even and one is odd. So when computing the Jacobi identity with fermionic operators, should one actually use $[,\}$ as opposed to $\{,\}$? Answer: As it turns out, even though the authors of the paper I linked claim there are benefits to working with the commutators rather than the OPEs, I found using the OPEs much simpler, and didn't need the machinery provided in the algorithm in the paper. In my case, the two fermionic generators carry an $\mathfrak{sl}(2)$ charge which we require the OPE to respect, and so I simply wrote down all possible terms that may arise order by order, i.e. $$\Psi^+(z) \Psi^{-}(z) = \frac{a_1}{z^8}\mathbb{I} + \frac{a_2}{z^6}T + \frac{a_3}{z^5}\partial T + \frac{a_4}{z^4}\partial^2 T + \frac{a_5}{z^4}T^2 + \dots$$ using combinations of derivatives and $T$ with arbitrary constants $a_i$. Imposing the Jacobi results in some trivial constraints already satisfied (e.g. taking $TTT$) but others (e.g. $\Psi^+ \Psi^{-}T$) led to constraints on the $a_i$ up to null states. 
A useful paper for carrying this out is An Algorithmic Approach to Operator Product Expansions, W-Algebras and W-Strings.
{ "domain": "physics.stackexchange", "id": 64666, "tags": "mathematical-physics, conformal-field-theory, research-level" }
Get the common values between two text files
Question: I'm just trying to get the common values from two distinct files called file1.txt and file2.txt, where I'm using Python sets to get the results, and this works. However, I'd like to see if there is a better way to do this.

#!/grid/common/pkgs/python/v3.6.1/bin/python3

def common_member():
    a_set = dataset1
    b_set = dataset2
    if (a_set & b_set):
        print(a_set & b_set)
    else:
        print("No common elements")

with open('file1.txt', 'r') as a:
    dataset1 = set(a)
with open('file2.txt', 'r') as b:
    dataset2 = set(b)

common_member()

file1.txt:

teraform101
azure233
teraform221
teraform223
teraform224

file2.txt:

teraform101
azure109
teraform223
teraform226
teraform225
azure233

Result:

{'teraform101\n', 'azure233\n', 'teraform223\n'}

Answer: Global variables (dataset1 and dataset2) are bad. I'd rather see you write no function at all than a function that accepts its inputs through global variables:

with open('file1.txt') as file1, open('file2.txt') as file2:
    print((set(file1) & set(file2)) or "No common elements")

If you do write a function, then it should accept its inputs as parameters. Name the function's parameters however you want, but avoid unnecessary reassignments like a_set = dataset1 and b_set = dataset2.

def print_common_members(a, b):
    """
    Given two sets, print the intersection, or "No common elements".
    """
    print((a & b) or "No common elements")

with open('file1.txt') as file1, open('file2.txt') as file2:
    dataset1 = set(file1)
    dataset2 = set(file2)
    print_common_members(dataset1, dataset2)
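One wrinkle visible in the question's Result is that each element keeps its trailing newline. If that is unwanted, a small variant (my own sketch, not part of the review) strips it while building the sets:

```python
def common_members(path_a, path_b):
    """Return the set of newline-stripped lines present in both files."""
    with open(path_a) as a, open(path_b) as b:
        return ({line.rstrip("\n") for line in a}
                & {line.rstrip("\n") for line in b})
```

With the sample files this yields {'teraform101', 'azure233', 'teraform223'}, i.e. the same intersection without the '\n' noise.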
{ "domain": "codereview.stackexchange", "id": 31922, "tags": "python, python-3.x, file" }
Systematic name of OCS (COS)
Question: What is the systematic name of carbonyl sulfide? My professor commented that the systematic name was "very odd" so he didn't bother to mention it. Is it really that odd? And what is it, actually? I found the name "Thioxomethanone". Is that really odd? From my limited knowledge, I've seen the root "thiol" before in describing something with a sulfur in it. So at least the root doesn't seem weird. I also see "oxo." There's an O in OCS. Answer: Wikipedia lists the IUPAC name as "carbon oxide sulfide." Odd? Sounds kind of logical to me.
{ "domain": "chemistry.stackexchange", "id": 10531, "tags": "nomenclature" }