Standards for drawing chemical molecules
Question: I'll preface this by saying I haven't had a chemistry class in 10 years and molecular structures are not my thing. But I'm submitting a revision to a journal paper where I need to include figures of molecules and I need to make sure I get it right so I don't get nit-picked apart for it. Is there a standard/guide/list of rules for how to create chemical molecule figures like the one below? I'm looking for a comprehensive list of rules but as an example of things I would need to know: Should I include the carbon atoms in the ring drawing? At what angle should the entire molecule be drawn (I've seen rigid-body rotated versions of the same molecule before)? Rather than answering those specific questions, I'd like to know if anybody (IUPAC maybe?) has a standard for drawing these figures.

Answer: As it happens, there are a set of quite involved documents from the "IUPAC Chemical Nomenclature and Structure Representation Division". There's a list here, but to avoid later link-rot, they are currently:

- Jonathan Brecher: Graphical representation of stereochemical configuration (IUPAC Recommendations 2006), 2006, Vol. 78, Issue 10, pp. 1897-1970
- Richard M. Hartshorn, Evamarie Hey-Hawkins, René Kalio and G. Jeffery Leigh: Representation of configuration in coordination polyhedra and the extension of current methodology to coordination numbers greater than six (IUPAC Technical Report), 2007, Vol. 79, Issue 10, pp. 1779-1799
- W. Mormann and K.-H. Hellwich: Structure-based nomenclature for cyclic organic macromolecules (IUPAC Recommendations 2008), 2008, Vol. 80, Issue 2, pp. 201-232
- Jonathan Brecher: Graphical representation standards for chemical structure diagrams (IUPAC Recommendations 2008), 2008, Vol. 80, Issue 2, pp. 277-410
- Andrey Yerin, Edward S. Wilks, Gerard P. Moss and Akira Harada: Nomenclature for rotaxanes and pseudorotaxanes (IUPAC Recommendations 2008), 2008, Vol. 80, Issue 9, pp. 2041-2068
{ "domain": "chemistry.stackexchange", "id": 743, "tags": "molecules, structural-formula, reference-request" }
Termination of Moore's algorithm to minimize deterministic automata
Question: I use Moore's algorithm to minimize a DFA, as summarized below. We start with the complete graph with vertices Q and edges E = { {p, q} | p ≠ q, p, q ∈ Q }. We mark all edges {p, q} ∈ E with p ∈ F and q ∉ F. As long as there exist marked edges in E, we repeat the following procedure: we choose some marked edge {p′, q′} ∈ E, then we mark all unmarked edges {p, q} ∈ E with {p′, q′} = {p · a, q · a} for some a ∈ Σ; afterwards we remove the edge {p′, q′} from E. All marked edges are eventually removed. Then I guess the minimal automaton is obtained by joining the states linked by the remaining unmarked edges. Is that true? If so, what about this case? States: {1, 2, 3}, Alphabet: {a, b}, Start: 1, Final: {3}, Transitions: {(1, a, 2), (2, b, 3)}. By Moore's algorithm, at the end of the loop we will have one unmarked edge between states 1 and 2. Following my guess above, the next step is joining both states. But the result doesn't make sense. Answer: Most minimization algorithms assume the automaton to be complete, i.e. for every state p and symbol a ∈ Σ there is a transition (p, a, q). Hence, for this case, we need to add one more state as a sink state. The DFA becomes: States: {1, 2, 3, 4}, Alphabet: {a, b}, Start: 1, Final: {3}, Transitions: {(1, a, 2), (1, b, 4), (2, a, 4), (2, b, 3), (3, a, 4), (3, b, 4), (4, a, 4), (4, b, 4)}. By Moore's algorithm: picking edge {3, 4} causes us to mark edges {1, 2} and {2, 4} via the b-transitions; picking edge {2, 4} causes us to mark edge {1, 4} via the a-transitions. Now we don't have any unmarked edges left. Hence, no states have to be joined.
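The marking procedure from the answer can be sketched in a few lines; this is a minimal Python model (all names are my own) run on the completed 4-state DFA with the sink state:

```python
# Edge-marking (Moore/table-filling style) on the completed 4-state DFA.
# States are marked distinguishable; unmarked pairs at the end are equivalent.
from itertools import combinations

states = [1, 2, 3, 4]
alphabet = ["a", "b"]
final = {3}
delta = {(1, "a"): 2, (1, "b"): 4, (2, "a"): 4, (2, "b"): 3,
         (3, "a"): 4, (3, "b"): 4, (4, "a"): 4, (4, "b"): 4}

edges = {frozenset(e) for e in combinations(states, 2)}
# initial marking: exactly one endpoint is a final state
marked = {e for e in edges if len(e & final) == 1}
work = set(marked)
while work:
    pq = work.pop()
    # mark every unmarked edge whose image under some letter is the chosen edge
    for e in edges - marked:
        p, q = tuple(e)
        if any(frozenset({delta[(p, a)], delta[(q, a)]}) == pq for a in alphabet):
            marked.add(e)
            work.add(e)

unmarked = edges - marked
print(unmarked)  # set(): every pair is distinguishable, no states are joined
```

Running it confirms the answer: after completing the DFA with the sink, no unmarked edges survive, so the automaton is already minimal.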
{ "domain": "cs.stackexchange", "id": 5749, "tags": "algorithms, automata" }
Declaring multiple arrays in PHP with one line
Question: Is it possible to declare multiple arrays in PHP in one line?

$userAnswers = array();
$questionIDs = array();
$sqlAnswers = array();
$sqlAnswersQuery = array();
$correctAnswers = array();

Any cleaner ways of doing this? NOTE: The contents of these arrays are all DIFFERENT. I don't think setting them equal to each other would work. Answer: There is a way. Whether you like it or not is another question:

$userAnswers = $questionIDs = $sqlAnswers = $sqlAnswersQuery = $correctAnswers = array();

I'm not a fan of this. It isn't easy to read, especially for the-new-guy-who-doesn't-know-this-code. This works for small numbers of variables, but even then with caution:

$hasErrors = $hasWarnings = false;

I think the way you should declare the variables as arrays depends on how you set the first values; we can't see enough code to answer based on that.
{ "domain": "codereview.stackexchange", "id": 14243, "tags": "php, array" }
Send array of three numbers from ROS to Arduino?
Question: Hey all. We are currently working on getting a serial communication through ROS to an Arbotix-M type board through an FTDI adapter. We have looked into rosserial and other packages but it seems rather complicated and unnecessary since we only want to send an array of three integers to the Arbotix board. We're thinking something like: ROS-side: Serial write int test[1, 2, 1]; Arduino-side: Serial.read(something). Is this possible or something similar? Thanks in advance! Originally posted by MartinSA on ROS Answers with karma: 11 on 2016-12-06 Post score: 1 Answer: Thank you for your answers! I found a possible solution. As there would be no need for two-way communication and no ROS on the Arduino, all I needed to do was basically pass three integers over the serial port. Turned out there was a much simpler way to do that, simply by importing the "serial" library in the Python script in ROS. The transmitting end in Python (ROS):

import serial
import time

arduino = serial.Serial('/dev/ttyUSB0', 115200, timeout=1)
time.sleep(2)
arduino.write(integer)

Receiving end on the Arduino:

if (Serial.available() > 0) {
    Serial.read();
    integer = Serial.parseInt();
}

Hope this can help others in the future. Originally posted by MartinSA with karma: 11 on 2016-12-08 This answer was ACCEPTED on the original site Post score: 0
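Since Serial.parseInt() on the Arduino side parses ASCII digits, the three values can simply be sent as a text line. Here is a hedged sketch of the transmit side; the encode_triplet helper, port name, and baud rate are my own assumptions, not from the post:

```python
# Sketch: encode three integers as an ASCII line that Serial.parseInt()
# on the Arduino can consume one number at a time.
def encode_triplet(values):
    """Encode a sequence of integers as a space-separated ASCII line."""
    return (" ".join(str(v) for v in values) + "\n").encode("ascii")

payload = encode_triplet([1, 2, 1])
print(payload)  # b'1 2 1\n'

# Actual transmission would use pyserial (commented out so the sketch
# stays self-contained; port and baud rate are assumptions):
# import serial, time
# arduino = serial.Serial("/dev/ttyUSB0", 115200, timeout=1)
# time.sleep(2)            # wait for the board to reset after the port opens
# arduino.write(payload)
```

Note that opening the port typically resets the board, hence the two-second sleep in the original answer before the first write.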
{ "domain": "robotics.stackexchange", "id": 26408, "tags": "ros, arduino, serial, arbotix" }
Do all hadrons experience the strong nuclear force?
Question: In nuclear physics, nuclear force, also known as the residual strong force, is mediated by pions exchanged between protons and neutrons. It doesn't seem like this should be limited to protons and neutrons, though, since the mechanism by which pions are exchanged comes up from the fact that protons and neutrons are made of quarks. Would hadrons with quarks other than up and down quarks and antiquarks exchange more exotic mesons? Follow-up question: If all hadrons experience the nuclear force, do different hadrons experience it differently, and if so, how? Answer: the residual strong force […] is mediated by pions exchanged between protons and neutrons. This is a pretty big oversimplification of the strong nuclear force. The pion, as the lightest member of the meson spectrum, can be associated with the longest-range part of the residual strong force. But if you’re also interested in the phenomenon that nuclear matter has constant density, you already have to go further up in the meson spectrum than the pion to find a repulsive interaction. In many-body systems or in high-energy interactions, the meson-exchange picture rapidly stops being a useful way to make quantitative predictions. Your question about “all hadrons” suggests you are also curious about long-range interactions between mesons. That’s basically impossible to measure directly, because all mesonic hadrons are short-lived. It’s one thing to build an accelerator that makes a beam of pions or kaons; it’s a different thing altogether to make two such accelerators and point the beams at each other. Beam-beam interaction experiments, like the Large Hadron Collider, make use of stored beams of stable particles. (A possible exception was a neutron-neutron scattering length measurement which used the simultaneous detonation of two nuclear bombs in the same underground cavity. 
Its results were never published; I have heard variously that a blast door failed and the DAQ was destroyed, that the experiment was proposed but never attempted, and that the whole thing is an extremely niche urban legend.) To the extent there are residual meson-meson interactions, virtual mesons will participate in them as well, and those meson-meson interactions show up as modifications to your model of the meson-mediated nuclear force. That recursive relationship gets messy fast. For a taste of how complicated it is, look for literature about whether the “fictitious $\sigma$” (renamed $f_0(500)$ in the current PDG) is a “real” meson or a two-pion bound state. Long-lived baryons absolutely participate in the nuclear force, illustrated most clearly by hypernuclei made of protons, neutrons, and exotic baryons. (In practice there’s just one hyperon, for the same reason as the absence of meson-meson colliders.) For short-lived baryons, it’s not clear whether the meson-mediated approximation would be useful under any circumstances. The effective range of the pion-mediated force is related to the pion’s mass: $$ r_\pi = \frac{\hbar c}{m_\pi c^2} \approx 1.3\rm\,fm \approx r_\text{nucleon} $$ An unstable state with lifetime $\tau$ has an intrinsic uncertainty $\Gamma$, or width, in its total energy, $$ \Gamma \tau \approx \hbar $$ which you can think of as a kind of energy-time version of the uncertainty principle. If we use $\hbar = c = 1$ units so that energy, mass, (inverse) time, and (inverse) distance are all measured in the same units, you might say that a particle whose decay width is larger than the pion mass, $\Gamma > m_\pi$, will probably have already decayed in the time it takes for light to cross a nucleon. It’s not clear (to me) what it would mean to “interact” with a particle which has already decayed before you finish touching it.
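The range estimate in the answer is easy to check numerically: with ħc ≈ 197.33 MeV·fm and the charged-pion mass m_π c² ≈ 139.57 MeV (standard PDG-level values, not taken from the answer), the ratio comes out around 1.4 fm, the same nucleon-radius scale as the quoted ≈1.3 fm:

```python
# Numerical check of the pion-range estimate r_pi = (hbar*c) / (m_pi * c^2).
hbar_c = 197.327          # MeV * fm
m_pi_c2 = 139.57          # MeV, charged pion mass

r_pi = hbar_c / m_pi_c2   # fm
print(f"{r_pi:.2f} fm")   # ~1.41 fm, about the size of a nucleon
```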
{ "domain": "physics.stackexchange", "id": 83554, "tags": "particle-physics, nuclear-physics, quarks, strong-force, pions" }
Using numpy with rospy tutorial - no module found
Question: Hi, I am following this tutorial. In Step 3, when I tried to run the listener, it threw the following error:

~/catkin_ws/src/numpy_tutorial$ rosrun numpy_tutorial numpy_listener.py
Traceback (most recent call last):
  File "/home/user/catkin_ws/src/numpy_tutorial/scripts/numpy_listener.py", line 7, in
    from rospy.numpy_pkg import numpy_msg
ImportError: No module named numpy_pkg

I tried to get the module and provide it to ROS, but to no avail. Can anyone help me fix this, please? Thanks! Originally posted by achus on ROS Answers with karma: 1 on 2016-04-24 Post score: 0 Answer: There could be many reasons.
- Make sure you have numpy installed (it should be already).
- Make sure you ran catkin_make in your catkin workspace.
- Make sure you did source devel/setup.bash in every terminal where you want to use rosrun etc.
- Make sure roscore is running.
Originally posted by robopo with karma: 72 on 2016-04-24 This answer was ACCEPTED on the original site Post score: 0 Original comments Comment by achus on 2016-04-24: Thanks for the reply. Those possibilities had already been ruled out. The issue was that I kept my script file numpy_listener.py in the numpy_tutorials/scripts directory. Till then I believed that all .py scripts go in /scripts while all .cpp files go in /src. My bad! Thanks again.
{ "domain": "robotics.stackexchange", "id": 24446, "tags": "ubuntu-trusty, ubuntu" }
Does ROS Groovy have any problems running on VirtualBox with Ubuntu 12.10?
Question: Does ROS Groovy have any problems running on VirtualBox with Ubuntu 12.10? Originally posted by shyamalschandra on ROS Answers with karma: 73 on 2013-01-04 Post score: 0 Original comments Comment by Eric Perko on 2013-01-04: Could you be more specific about what you want to run in VirtualBox? Basic roscpp? Navigation? Rviz? Gazebo? etc Answer: In general ROS programs run fine inside Virtual Box, except for programs which use hardware accelerated video. The effectiveness of accelerated video depends a lot on your setup, host and hardware. The other thing to be careful about when working in a VM is the availability and connectivity of the networking. The problems are mostly about making sure that you have configured things right, not that it won't work. Originally posted by tfoote with karma: 58457 on 2013-01-06 This answer was ACCEPTED on the original site Post score: 1
{ "domain": "robotics.stackexchange", "id": 12270, "tags": "ros, ros-groovy, virtualbox" }
Apogalacticon and Perigalacticon
Question: What are the apogalacticon and perigalacticon distances of the Sun's orbit around the Milky Way? The general terms seem to be apoapsis and periapsis. My greatest efforts at Googling have failed miserably. If you can provide references as well, please do! Answer: According to this website, we need another 15 million years until perigalacticon: "Recall the Sun's motion ... 30 degrees toward the galactic centre from circular motion, and 23 degrees upward out of the galactic plane. We will be at perigalacticon (closest to the nucleus) in 15 million years. At apogalacticon the Sun is seven percent farther out." During the galactic year of about 250 million years our solar system isn't orbiting on a Kepler ellipse around the galactic center, but is instead oscillating up and down through the galactic plane. This vertical oscillation cycles 3.5 times per galactic year. We are now about 27,000 lightyears away from the galactic center, close to perigalacticon. The difference between apogalacticon and perigalacticon is a little more than 4,000 lightyears (15% of 27,000 lightyears). The 15% number is calculated from the 7% figure by $1.07/0.93 \approx 1.15$, following the definition of eccentricity and Kepler's 1st law.
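The percentages quoted above can be sanity-checked in a couple of lines; the 27,000 lightyear distance and the 7% figure are taken directly from the answer:

```python
# Sanity check of the apogalacticon/perigalacticon ratio from the answer.
r_now = 27_000                 # lightyears, current distance to the centre
apo_over_peri = 1.07 / 0.93    # apo is 7% farther out, peri 7% closer in

print(round(apo_over_peri, 2))   # 1.15, i.e. about a 15% spread
print(round(0.15 * r_now))       # ~4050 lightyears, "a little more than 4,000"
```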
{ "domain": "astronomy.stackexchange", "id": 2454, "tags": "orbit, the-sun, galaxy" }
Yet another dynamic array design in C
Question: I've seen a few similar posts about dynamic arrays in C implemented as macros, but I tried a new approach to make it look more like a template, wrapped in a big macro. However, I need a review for suggestions or improvements. Here is the trivial implementation:

dynarray_t.h

#ifndef DYNARRAY_T_H
#define DYNARRAY_T_H

#include <stdlib.h> /* malloc, calloc, realloc */

// in case initsize is 0 or less we will assert
#define DARRAY(T, N, INITSIZE, MOD)                           \
static const char __attribute__((unused))                     \
    N##_sassertsizeless[INITSIZE <= 0 ? -1 : 1];              \
typedef struct                                                \
{                                                             \
    size_t size, count;                                       \
    T* pData;                                                 \
} N##_t;                                                      \
MOD N##_t* self_##N;                                          \
                                                              \
static N##_t* N##_t##_init(void)                              \
{                                                             \
    N##_t* pN = (N##_t*)malloc(sizeof(N##_t));                \
    if (!pN) return 0x00;                                     \
    else {                                                    \
        pN->pData = (T*)calloc(INITSIZE, sizeof(T));          \
        if (!pN->pData) { free(pN); return 0x00; }            \
        else {                                                \
            pN->count = 0;                                    \
            pN->size = INITSIZE;                              \
            return pN; }                                      \
    }                                                         \
}                                                             \
                                                              \
static void N##_t##_wiffull(N##_t* _this)                     \
{                                                             \
    if (!(_this->count < _this->size - 1)) {                  \
        T* t = (T*)realloc(_this->pData,                      \
                           sizeof(T) * _this->size * 2);      \
        if (t) {                                              \
            _this->pData = t;                                 \
            _this->size *= 2;                                 \
        }                                                     \
    }                                                         \
}                                                             \
                                                              \
static void N##_t##_resizeto(N##_t* _this, size_t ns)         \
{                                                             \
    if (ns > _this->size - 1) {                               \
        T* t = (T*)realloc(_this->pData,                      \
                           sizeof(T) * ns * 2);               \
        if (t) {                                              \
            _this->pData = t;                                 \
            _this->size = ns * 2;                             \
        }                                                     \
    }                                                         \
}                                                             \
                                                              \
static void N##_t##_add(T item, N##_t* _this)                 \
{                                                             \
    N##_t##_wiffull(_this);                                   \
    *(_this->pData + _this->count) = item;                    \
    _this->count++;                                           \
}                                                             \
                                                              \
static T* N##_t##_getat(unsigned int idx, N##_t* _this)       \
{                                                             \
    if (idx < _this->count)                                   \
        return &_this->pData[idx];                            \
    else return 0x00;                                         \
}                                                             \
                                                              \
static void N##_t##_cleanup(N##_t* _this)                     \
{                                                             \
    if (_this) {                                              \
        if (_this->pData) free(_this->pData);                 \
        _this->pData = 0x00;                                  \
        free(_this);                                          \
        _this = 0x00;                                         \
    }                                                         \
}                                                             \
static void N##_t##_add_at(T item, size_t idx, N##_t* _this)  \
{                                                             \
    N##_t##_resizeto(_this, idx);                             \
    *(_this->pData + idx) = item;                             \
    _this->count++;                                           \
}

#endif // DYNARRAY_T_H

And some
simple example usage:

#include "dynarray_t.h"
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define BUFF_SZ 83

typedef struct _str_t {
    char data[BUFF_SZ];
} str_t;

DARRAY(str_t, readBuff, 101,);

int main(void)
{
    int i;
    self_readBuff = readBuff_t_init();           // init
    for (i = 0; i < 100; i++) {                  // fill
        str_t t = {{0}};
        snprintf(t.data, sizeof(t.data), "Test line [%d]", i);
        readBuff_t_add(t, self_readBuff);
    }
    int s = self_readBuff->size;
    for (i = 0; i < self_readBuff->size; i++) {  // read element at(index)
        printf("%s\r\n", readBuff_t_getat(i, self_readBuff)->data);
    }
    readBuff_t_cleanup(self_readBuff);
    return 0;
}

Also, please keep this to the C language only! I am not interested in talking about C++; I am quite aware how to do this with templates. I need something similar for C, so please give me advice on the design, or spot pitfalls if any. Answer: Use Common Definitions Rather Than Hard Coded Values: I agree with @pm100 about NULL; it is much more common to use NULL rather than 0x00. Very early C++ compilers also used NULL rather than nullptr. Since stdlib.h is already included, the exit constants EXIT_SUCCESS and EXIT_FAILURE are available; this would make the code more readable and maintainable. Most modern C and C++ compilers will add a final return 0; to the code, so the return in main() isn't strictly necessary. Prefer size_t When the Variable Can Be Used As an Index: In main the variable i should be declared as size_t rather than int. If you compile with -Wall you will find that the comparison between i and self_readBuff->size yields a type-mismatch warning between int and size_t. In the declaration of N##_t##_getat(unsigned int idx, N##_t* _this) the unsigned int should also be size_t. Prefer Local Variables Over Global Variables: I would suggest a separate macro to define the variable of the proper type so that it can be used in a function rather than having a global variable. In main() it would be better if self_readBuff was declared locally rather than as a static variable globally.
The variable self_##N is not used anywhere else globally. Only Code What is Necessary: The header file string.h is not necessary and slows down compile time. The variable s in main() is never referenced: int s = self_readBuff->size; Keep it Simple: I would have defined each function as a separate macro and then included all of them in a single macro, for ease of debugging and possible separate use. It will also make the code easier to maintain if each function can be maintained separately.
{ "domain": "codereview.stackexchange", "id": 39053, "tags": "c, array" }
Could you have a quartic or square degree Celsius or other degree on a temperature scale raised to any power?
Question: Could you have a quartic or square degree Celsius, or any other degree on a temperature scale raised to a power? The units of the Stefan-Boltzmann constant are watts per square meter per quartic kelvin, so it is possible with kelvins. But a kelvin is not a type of degree, although it is a unit of temperature, and is in fact the SI unit of temperature. Please note that I'm not sure about any of this; your feedback in the comments would be welcome. I'm not sure "per quartic kelvin" is optimal, though it is sometimes used; perhaps "per kelvin to the power four" is better. I know that nomenclature can be a controversial and emotive subject, even in science. Searching the Internet turned up no trace of a square degree of temperature, and Wikipedia has no disambiguation page for "square degree" and says, "A square degree (deg²) is a non-SI unit measure of solid angle. Other denotations include sq. deg. and (°)². Just as degrees are used to measure parts of a circle, square degrees are used to measure parts of a sphere." So it's possible to write it, but it only means a square degree of angle, according to Wikipedia. https://en.wikipedia.org/wiki/Square_degree So my question is: could you have a quartic or square degree Celsius, or any other degree on a temperature scale raised to a power? Answer: If you were measuring coefficients of thermal expansion, where the length at some temperature is $$ L(T) = L(T_\text{ref}) \times \big( 1 + (T-T_\text{ref}) \cdot\alpha\big) $$ then the unit for $\alpha$ is $\rm K^{-1}$ or $\rm(°C)^{-1}$. This online reference uses units of $\frac{\rm \mu m}{\rm m\cdot K}$, because the typical scale for $\alpha$ is a part-per-million expansion per degree. If you were interested in the rate of change of $\alpha$ with temperature, you'd measure $\frac{\mathrm d\alpha}{\mathrm d T}$, which would have units of $\rm K^{-2}$.
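As a concrete illustration of the $\rm K^{-1}$ unit at work (the material and numbers below are my own textbook-level assumptions, not from the answer): aluminium has α ≈ 23 ppm/K, so a 1 m bar warmed by 10 K grows by about 0.23 mm.

```python
# Illustrative use of a coefficient of thermal expansion in units of 1/K.
# alpha for aluminium (~23 ppm/K) is an assumed, approximate value.
alpha = 23e-6   # 1/K
L_ref = 1.0     # m, reference length
dT = 10.0       # K, temperature rise

dL = L_ref * alpha * dT            # metres of expansion
print(f"{dL * 1e3:.2f} mm")        # ~0.23 mm
```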
{ "domain": "physics.stackexchange", "id": 84611, "tags": "dimensional-analysis, units, si-units" }
Trouble Installing ROS 2 Humble on Ubuntu 20.04 with arm64/x86_64 Architecture
Question: I've been encountering an issue while attempting to install ROS 2 Humble on my Ubuntu 20.04 system. My machine supports both arm64 and x86_64 architectures. Despite following the official installation instructions closely, I run into the "unable to locate package ros-humble-desktop" error every time I try to execute the installation command. Here are the steps I've followed based on the instructions from the ROS.org page for Humble: 1. Added the ROS 2 repository and keyring as follows: sudo apt update && sudo apt install curl -y sudo curl -sSL https://raw.githubusercontent.com/ros/rosdistro/master/ros.key -o /usr/share/keyrings/ros-archive-keyring.gpg echo "deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/ros-archive-keyring.gpg] http://packages.ros.org/ros2/ubuntu $(. /etc/os-release && echo $UBUNTU_CODENAME) main" | sudo tee /etc/apt/sources.list.d/ros2.list > /dev/null 2. Updated and upgraded my package lists: sudo apt update sudo apt upgrade 3. Attempted to install ROS 2 Humble packages: sudo apt install ros-humble-desktop However, this results in the "unable to locate package ros-humble-desktop" error. Interestingly, I came across a similar issue on ROS Answers where a user encountered package location errors due to their system running on an ARMhf platform. However, after checking my system using dpkg --print-architecture and lscpu, I confirmed that my system is on arm64 and x86_64, which should be compatible with ROS 2 Humble according to REP 2000. I'm reaching out to see if anyone has faced a similar issue or could offer guidance on resolving this error. Could this problem be related to my prior attempts to uninstall ROS 2 Foxy and clean my system? Or is there something else I'm missing in the setup process for ROS 2 Humble on Ubuntu 20.04? Any advice or suggestions would be greatly appreciated. Thank you in advance for your help! 
Answer: Normally, ROS 2 Humble binary packages are only available for Ubuntu 22.04, unless you have built a separate container environment. From the installation page: binaries are only created for the Tier 1 operating systems listed in REP-2000. If you are not running one of the following operating systems, you may need to build from source or use a container solution to run ROS 2 on your platform. ROS 2 binary packages are provided for:
- Ubuntu Linux - Jammy Jellyfish (22.04): Debian packages (recommended) or "fat" archive
- RHEL 8: RPM packages (recommended) or "fat" archive
- Windows (VS 2019)
Or you could try a source install: https://docs.ros.org/en/humble/Installation/Alternatives/Ubuntu-Development-Setup.html
The current Debian-based target platforms for Humble Hawksbill are:
- Tier 1: Ubuntu Linux - Jammy (22.04) 64-bit
- Tier 3: Ubuntu Linux - Focal (20.04) 64-bit
- Tier 3: Debian Linux - Bullseye (11) 64-bit
{ "domain": "robotics.stackexchange", "id": 38996, "tags": "ros2, ros-humble, installation, troubleshooting" }
Recurrent action on concurrent collection
Question: I have a bunch of concurrent threads carrying out operations which return alerts. The alerts have to be persisted to a DB, but this cannot be done concurrently as it may cause duplicate alerts to be created. To avoid duplicate alerts I have written a class containing a ConcurrentBag<T> with a Timer that triggers an Action<T> (in this case, sending my pending alerts to the DB) at a set interval. I'm wondering whether what I wrote will cause problems in execution in the future. If alerts get lost due to an unexpected exception, that would be problematic. I'd appreciate comments/criticisms/suggestions for improvement. Here's the code:

class PeriodicConcurrentBag<T>
{
    public int Count { get { return this.items.Count(); } }
    public bool HasItems { get { return this.items.Any(); } }
    public bool IsStopped { get; private set; }

    private int timerPeriod;
    private Timer timer;
    private ConcurrentBag<T> items;
    private Action<IEnumerable<T>> actionOnItems;

    public PeriodicConcurrentBag(Action<IEnumerable<T>> actionOnItems, int periodInSeconds)
    {
        this.timerPeriod = periodInSeconds * 1000;
        this.actionOnItems = actionOnItems;
        this.items = new ConcurrentBag<T>();
    }

    public void Start()
    {
        this.IsStopped = false;
        this.timer = new Timer(this.InternalPeriodicAction, null, this.timerPeriod, Timeout.Infinite);
    }

    public void Stop()
    {
        this.IsStopped = true;
    }

    public void AddItem(T item)
    {
        this.items.Add(item);
    }

    public void AddItems(IEnumerable<T> items)
    {
        this.items.AddRange(items);
    }

    private void InternalPeriodicAction(object state)
    {
        if (!this.IsStopped)
        {
            var currentItems = Interlocked.Exchange(ref this.items, new ConcurrentBag<T>());
            this.actionOnItems(currentItems);

            if (!this.IsStopped)
                this.Start();
        }
    }
}

EDIT: The AddRange() method is an extension method I created for ConcurrentBag. Answer: Your Stop method does not actually stop the timer. This is really counter-intuitive and wastes CPU. There is also a race condition.
If someone calls Stop in between these two lines:

if (!this.IsStopped)
    this.Start();

the Stop call will be ignored. The moral of the story: just take the lock. Timer implements IDisposable, so you should call Dispose when you no longer need it (in the Stop method, for example). I would refactor actionOnItems into a public event. int for the time period is ambiguous and limiting: I cannot use your class if I want to send updates every 0.5 seconds, and I can easily make an error by assuming that the int represents milliseconds. You can eliminate those problems by replacing it with TimeSpan. Edit: Also, your AddRange method raises some red flags. I assume it is not an atomic operation, and you add items to the bag one by one. What will happen if AddRange is called right before Interlocked.Exchange(ref this.items, new ConcurrentBag<T>()); ? Do you think the new items will go to the new bag or to the old one? How is your extension method synchronized with the rest of your code? You are walking on really thin ice here. If you have limited experience with multi-threading, I suggest you start simple. Remove ConcurrentBag, use a regular list, but lock every single access point (both public AND private, methods AND properties). Once this is working, test whether it meets your performance requirements. If it does, leave it at that. There is no point in fighting over nanoseconds if you do not actually need those nanoseconds and you get extremely bug-prone code as a result.
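The reviewer's "just take the lock" advice translates directly to other languages. Here is a minimal Python analog (all names are my own, not from the post) in which every access point, including the buffer swap and the re-arm in the periodic flush, holds the same lock, so Stop can never race with a restart:

```python
import threading

class PeriodicBuffer:
    """Lock-everything analog of the reviewed class: simple, not the fastest."""

    def __init__(self, action, period_s):
        self._action = action
        self._period = period_s
        self._items = []
        self._lock = threading.Lock()
        self._timer = None
        self._stopped = True

    def start(self):
        with self._lock:
            self._stopped = False
            self._schedule()

    def _schedule(self):
        # caller must hold the lock
        self._timer = threading.Timer(self._period, self._flush)
        self._timer.daemon = True
        self._timer.start()

    def stop(self):
        with self._lock:
            self._stopped = True
            if self._timer:
                self._timer.cancel()

    def add(self, item):
        with self._lock:
            self._items.append(item)

    def _flush(self):
        with self._lock:
            if self._stopped:           # stop() can no longer be "ignored"
                return
            batch, self._items = self._items, []
            self._schedule()            # re-arm while still holding the lock
        if batch:
            self._action(batch)         # run the (slow) action outside the lock
```

The point is the structure, not the performance: once correctness is established under a single lock, finer-grained synchronization can be reintroduced where measurements justify it.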
{ "domain": "codereview.stackexchange", "id": 23007, "tags": "c#, .net, concurrency" }
Electrolysis of aqueous lead nitrate
Question: When going through one of my chemistry textbooks, I saw that the electrolysis of aqueous lead nitrate led to oxygen being formed at the anode and lead being formed at the cathode. However, in school, I was taught that only metals less reactive than hydrogen will form at the cathode. This electrolysis contradicts what I learned, as lead is more reactive than hydrogen and should remain as ions in solution. Therefore, my question is: what forms at the cathode during electrolysis of aqueous lead nitrate, and why? Answer: Firstly, the reactivity being referred to is not reactivity in the sense of the reactivity series of metals, but rather in the sense of standard electrode potentials (standard reduction potentials). This comes into play when setting up an electrolytic cell, in determining the reaction that will take place at the cathode/anode. In this case, the reaction that requires the least amount of work (or voltage in this case, since voltage is essentially a measure of work per unit charge) will take place. So, assuming that this is done aqueously, if the reduction/oxidation reaction to take place requires more voltage than the reduction/oxidation of the water, then the water will simply be reduced/oxidized instead. (I think this is where the reactivity of hydrogen comes into play in how you were taught.) So to answer your question more directly, it is actually lead that forms at the cathode. Edit: To add some detail, the reduction potential of H2O is -0.8277 V, while that of the lead(II) ion is -0.13 V. In this case, less energy means closer to zero, the negative sign being essentially irrelevant here. Hope this helped.
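The comparison at the end of the answer can be written out explicitly: the half-reaction with the least negative standard reduction potential is the one favoured at the cathode. The potentials below are the values quoted in the answer.

```python
# Pick the cathode product as the half-reaction with the least negative
# standard reduction potential (values from the answer, in volts).
E_reduction = {
    "H2O -> H2": -0.8277,   # reduction of water
    "Pb2+ -> Pb": -0.13,    # reduction of lead(II)
}

cathode_product = max(E_reduction, key=E_reduction.get)
print(cathode_product)  # Pb2+ -> Pb: lead is deposited at the cathode
```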
{ "domain": "chemistry.stackexchange", "id": 15017, "tags": "electrochemistry, aqueous-solution" }
Time recovery algorithm and a symbol with samples
Question: I am working on my graduation thesis. The topic is a receiver implementation in an FPGA, and I am doing an implementation of symbol timing recovery. For this part I have decided to choose the Gardner algorithm. I have found discussions here about the Gardner, Mueller & Muller and other algorithms. All the posts were useful, but I still have the following questions. I have noticed that my implementation of the Gardner algorithm doesn't work as intended in the case of a big initial phase offset. Could you please suggest what I can do in such a case? I've already started a simulation in MATLAB, generating a signal with N symbols. Each symbol has to have 8 samples (as given by my supervisor). I did the first part:

Input_signal = randi([0 15],2000,1); % mod order = 16, 2000 symbols

How can I implement the next part of this task, which is to create 8 samples for each symbol? EDIT 1: I have used the CORDIC algorithm and defined the amplitude and phase of each sample. Using them I can find the sample with the maximum amplitude in each symbol. How can I use this information in implementing the timing recovery algorithm? Answer: Preamble: This answer is about timing recovery in the sense of symbol synchronization, i.e. finding the proper sampling phase of a baseband signal. Based on the stated requirement of only 8 samples per symbol, I will assume that you are employing a fully digital approach to timing recovery. That means that you have no control over the sampling times of the ADC. The fully digital approach looks like this (this nice picture is from slides here): The "digital processor" is your FPGA-based implementation of the timing recovery algorithm, including the selected Gardner algorithm to find the phase error. The "sampler" is an ADC with a fixed sampling rate of 8 samples per symbol. The "analog processor" is the analog front end, implemented on your board outside of the FPGA. And "signal in" is a baseband signal from your target communication channel.
In order to verify an implementation of your algorithm, you will want to model it, e.g. using a MATLAB program or a Python script (whatever is more convenient). With real signal samples from the ADC you will be able to see the details of how your algorithm works in the long run, which otherwise is difficult to simulate in RTL and may be hard to debug when programmed on the board. With a synthetic signal you can vary the parameters of the signal to see how the algorithm reacts to it, and you can also plot a sigmoid (S-curve) for the TED, which can be very helpful. About the TED: a timing error detector (TED) is intended to detect a phase error in sampling. Based on the sign of its output, the timing recovery scheme will incrementally shift the sampling phase by a little step value, until the timing error becomes zero (or close to zero). This step value is usually selected to be 1-2% of a symbol period. Note that with a fully digital timing recovery approach, where there is no actual control of the sampling phase, an interpolator is used instead to fill the gaps between the signal samples. As a result, the phase shift is applied to the recreated version of the signal and not the real one. The sampling phase error is defined by the phase difference between the transmitter, which generates symbols, and the base sampling frequency of the ADC (actual sampling frequency / 8). Ideally the algorithm should be able to converge for an arbitrary initial value of that error. The answer to the question: in order to adequately model the initial phase error, the model must include not only the digital processor, but also the sampler. This means that the input signal for the model should be continuous or oversampled. The degree of oversampling will define the resolution of your tests, i.e. how many different initial values you can provide. The goal can be 50-100 points per symbol period.
If you intended to use a real signal as the input of your model, you would use a logic analyzer to capture the signal at the input of the ADC at a high enough sampling rate. Alternatively, you may want to use a synthetic signal that models a real one. Then you have to create its mathematical model. Usually you will start with random symbol values (as you did), then apply the modulation scheme (skip this if the symbol is a signal level), then apply a pulse shape (if the target transmitter does), then apply the distortions/pulse shaping relevant to your intended target communication channel and analog front end (e.g. a low-pass filter can model the distortion of the signal in a twisted pair). As you can see, the particular way of creating a synthetic signal will strongly depend on the type of the target communication channel. Edit: Here is an illustration of what the resulting input data $x(k)$ for the TED may look like. It shows two different cases. You can see that they differ in the phase of ADC sampling. The examples also show that the selection of 2 samples out of 8, which is necessary for the Gardner TED, is arbitrary. The black lines are an exemplary eye diagram of the signal, the green line is the optimal sampling point and the blue lines are the ADC samples.
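To make the modeling advice concrete, here is a minimal NumPy sketch (my illustration, not from the original answer: the rectangular 2-PAM pulses, the one-symbol moving average standing in for the analog front end, and all sizes are assumptions for the example). It builds an oversampled synthetic signal, models the fixed-rate ADC by decimating at different phases, and shows that the sign of the averaged Gardner TED output indicates the direction of the timing error:

```python
import numpy as np

rng = np.random.default_rng(0)

OS = 48        # fine-grained points per symbol (resolution of the phase tests)
SPS = 8        # ADC samples per symbol, as required
NSYM = 4000

# Synthetic "analog" signal: random 2-PAM symbols as rectangular pulses,
# smoothed by a one-symbol moving average standing in for the analog front end.
symbols = rng.choice([-1.0, 1.0], size=NSYM)
fine = np.convolve(np.repeat(symbols, OS), np.ones(OS) / OS, mode="same")

def sample_with_phase(offset):
    """Model the fixed-rate ADC: SPS samples per symbol, starting at a given
    fine-grained phase offset (0 <= offset < OS)."""
    return fine[offset::OS // SPS]

def gardner_error(x):
    """Average Gardner TED output: (strobe - previous strobe) * midpoint sample."""
    errs = [(x[k] - x[k - SPS]) * x[k - SPS // 2]
            for k in range(SPS, len(x) - SPS, SPS)]
    return float(np.mean(errs))

print(gardner_error(sample_with_phase(OS // 2)))      # ~0: correct sampling phase
print(gardner_error(sample_with_phase(OS // 2 + 6)))  # positive: sampling late
print(gardner_error(sample_with_phase(OS // 2 - 6)))  # negative: sampling early
```

With a real design you would replace the synthetic waveform with captured ADC input, and sweep the offset over all fine-grained values to plot the TED's full S-curve.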
{ "domain": "dsp.stackexchange", "id": 9828, "tags": "digital-communications, symbol-timing" }
Why don't the kinematic equations agree with calculating velocity one second at a time?
Question: A car is moving at a velocity of $10 \, \text{m}/\text{s}$. After point $A$ no acceleration is provided. By simple measurement, the acceleration is found to be $-1 \, \text{m}/\text{s}^2$. Using the standard equations: $$v = u + at, \; v=0, \; u = 10,$$ we arrive at $t = 10 \,\text{s}$. $$S = ut + 0.5\, at^2 = 50 \,\text{m},$$ i.e., the car stops at $50\, \text{m}$ from point $A$. However, by manual calculation, the car travels the following distance before coming to a stop: $10 \, \text{m}$ at $t=0$, $9 \, \text{m}$ at $t =1$, etc. (since $a = -1$, $v$ reduces by $1 \, \text{m}/\text{s}$ each second), so we get $S = 10 + 9 + \cdots + 1 = 55 \, \text{m}$. Where am I going wrong here? Answer: The thing that is going wrong with your manual calculation is that you are taking the velocity to be constant in every interval, i.e., you are taking the velocity to be $10\,\text{m}/\text{s}$ from $0$ to $1\,\text{s}$, $9\,\text{m}/\text{s}$ from $1\,\text{s}$ to $2\,\text{s}$ and so on, which is incorrect. The velocity is continuously decreasing. You may calculate like this: at $t=0$, $v=10\,\text{m}/\text{s}$ and $a=-1\,\text{m}/\text{s}^2$, which means that from $t=0$ to $t=1$ the car has travelled a distance $$S=10\times1+\frac{1}{2}(-1)\times 1^2 = 10-\frac{1}{2},$$ and the velocity has become $9\,\text{m}/\text{s}$ at $t=1\,\text{s}$. So, from $t=1$ to $t=2$, the car has travelled a distance $$S=9\times1+\frac{1}{2}(-1)\times 1^2 = 9-\frac{1}{2},$$ and so on. In total: $$\left(10-\frac{1}{2}\right)+\left(9-\frac{1}{2}\right)+\cdots+\left(1-\frac{1}{2}\right)=55-5=50.$$
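The two bookkeeping schemes are easy to compare numerically; a quick Python sketch (illustrative, not part of the original answer):

```python
# u = 10 m/s, a = -1 m/s^2; v runs 10, 9, ..., 1 at the start of each second.
naive = sum(v * 1 for v in range(10, 0, -1))                        # v held constant each second
exact = sum(v * 1 + 0.5 * (-1) * 1 ** 2 for v in range(10, 0, -1))  # s = vt + a*t^2/2 per second

print(naive)  # 55   (overestimates: ignores the deceleration within each second)
print(exact)  # 50.0 (matches S = ut + a*t^2/2 applied over the full 10 s)
```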
{ "domain": "physics.stackexchange", "id": 13296, "tags": "homework-and-exercises, kinematics" }
How to use LSD Slam with a USB Webcam
Question: How do you use LSD Slam with a generic USB webcam accessible at /dev/video1? The README's quickstart guide works flawlessly for me with Indigo. However, their instructions for using live_slam with a video source are: rosrun lsd_slam_core live_slam /image:=<yourstreamtopic> /camera_info:=<yourcamera_infotopic> but they don't explain what the values for <yourstreamtopic> and <yourcamera_infotopic> should be. How do I fill these in appropriately? I'm assuming one involves referencing the camera device, but there's no further documentation. Also, how would I launch the lsd_slam_* nodes with roslaunch? I tried creating mypackage/launch/webcam_lsd_slam.launch <launch> <node name="webcam" pkg="usb_cam" type="usb_cam_node" output="screen" > <param name="video_device" value="/dev/video1" /> <param name="image_width" value="640" /> <param name="image_height" value="480" /> <param name="pixel_format" value="mjpeg" /> <param name="camera_frame_id" value="usb_cam" /> <param name="io_method" value="mmap"/> </node> <node name="mapper" pkg="lsd_slam_core" type="live_slam" respawn="false" output="screen"> <remap from="image" to="/usb_cam/image_raw" /> </node> <node name="viewer" pkg="lsd_slam_viewer" type="viewer" respawn="false" output="screen" /> </launch> but I get the error: ERROR: cannot launch node of type [usb_cam/usb_cam_node]: usb_cam ROS path [0]=/opt/ros/indigo/share/ros ROS path [1]=/home/user/rosbuild_ws/mypackage ROS path [2]=/opt/ros/indigo/share ROS path [3]=/opt/ros/indigo/stacks ERROR: cannot launch node of type [lsd_slam_core/live_slam]: lsd_slam_core ROS path [0]=/opt/ros/indigo/share/ros ROS path [1]=/home/user/rosbuild_ws/mypackage ROS path [2]=/opt/ros/indigo/share ROS path [3]=/opt/ros/indigo/stacks ERROR: cannot launch node of type [lsd_slam_viewer/viewer]: lsd_slam_viewer ROS path [0]=/opt/ros/indigo/share/ros ROS path [1]=/home/user/rosbuild_ws/mypackage ROS path [2]=/opt/ros/indigo/share ROS path [3]=/opt/ros/indigo/stacks Presumably because the build didn't 
create true ROS packages for lsd_slam_core or lsd_slam_viewer. Predictably, rospack find lsd_slam_core finds nothing. How do I fix that? Per the build instructions, the lsd_slam_core and lsd_slam_viewer folders were located in /home/user/rosbuild_ws/mypackage/lsd_slam, so I tried relocating them to /home/user/rosbuild_ws/mypackage, but that did nothing. Originally posted by Cerin on ROS Answers with karma: 940 on 2015-05-03 Post score: 0 Original comments Comment by inflo on 2015-05-03: good starting point: http://answers.ros.org/question/9089/what-driver-should-i-use-for-my-usb-camera/ Comment by zzzZuo on 2016-08-09: Hi Cerin, I want to do the same things with you.I add a launch file which is the same with you.But when I ran it, there have some error. And I put it in the answers 4, plz give me some advise. Thank you in advance! Answer: I had the lsd_slam project in the wrong location. I created a workspace like: ~/workspace src lsd_slam_test lsd_slam when it should have been like: ~/workspace src lsd_slam lsd_slam_test After that, the next problem I ran into was that lsd_slam can't handle color video, so I had to convert it to mono using an image_proc node: <!-- This will read a camera and show a streaming feed in a display window. --> <launch> <!-- Activate the color webcam. --> <node name="usb_cam" pkg="usb_cam" type="usb_cam_node" output="screen" > <param name="video_device" value="/dev/video1" /> <param name="image_width" value="640" /> <param name="image_height" value="480" /> <param name="pixel_format" value="mjpeg" /> <param name="camera_frame_id" value="usb_cam" /> <param name="io_method" value="mmap" /> </node> <!-- Convert the color webcam's output to mono. --> <node name="to_mono_node1" pkg="image_proc" type="image_proc" ns="usb_cam" /> <!-- Display the mono stream. 
--> <node name="image_view" pkg="image_view" type="image_view" respawn="false" output="screen"> <remap from="image" to="/usb_cam/image_mono" /> <param name="autosize" value="true" /> </node> </launch> After that, lsd_slam ran correctly. Unfortunately, the performance and accuracy of the resulting depth map was unusably poor. Originally posted by Cerin with karma: 940 on 2015-05-14 This answer was ACCEPTED on the original site Post score: 0 Original comments Comment by jossy on 2015-07-02: Hello Cerin, I want to run lsd_slam using my webcam. I am newbie ROS and don't understand how you do. I use ubuntu 12.04 and fuerte. my launch file is under fuerte_workspace.roslaunch lsd_slam_core test.launch cannot locate [test.launch] in package [lsd_slam_core]
{ "domain": "robotics.stackexchange", "id": 21602, "tags": "ros, slam, navigation, webcam, lsd-slam" }
Why isn't the Heisenberg uncertainty principle stated in terms of spacetime?
Question: As I understand it, there are two "versions" of the Heisenberg uncertainty principle: Position-Momentum uncertainty \begin{equation} \sigma_x \sigma_p \geq \frac{\hbar}{2} \end{equation} where $[\hat{x},\hat{p}] = i\hbar$ implies no quantum state can be both a position and momentum eigenstate. and then Time-Energy uncertainty \begin{equation} \sigma_T \sigma_E \geq \frac{\hbar}{2} \end{equation} I don't understand why time and space are separated. Why isn't $\hat{x}$ an operator that represents information about position in spacetime? We could call this operator $\hat{s}$. Presumably if there is a $\sigma_T$, there must be a time operator $\hat{T}$. If so, why is there a time operator devoid of any reference to spacetime? I read this 2010 article by John Baez suggesting not everyone agrees that the time-energy uncertainty version is valid. But isn't that one of the ideas that lets us believe virtual particles can exist? My Question There were several rhetorical questions here, in the sense that I intended them as food for thought. The one I want answered is: Why isn't the Heisenberg uncertainty principle stated in terms of spacetime? Answer: The problem of including time as an operator rather than a parameter in Quantum Mechanics is what led to the development of Quantum Field Theory. I.e., the position operator was demoted to a parameter rather than time being promoted to an operator. The two uncertainty principles you quote are entirely different. The first (position/momentum) principle is the Heisenberg uncertainty principle. It is fundamental in Quantum Mechanics and follows from the commutation relations you quoted. The second (energy/time) "principle" basically just comes from approximations in rate/scattering theory. It is not "as fundamental".
{ "domain": "physics.stackexchange", "id": 19569, "tags": "spacetime, operators, heisenberg-uncertainty-principle" }
Why do we use information gain over accuracy as the splitting criterion in decision trees?
Question: In decision tree classifiers, most algorithms use information gain as the splitting criterion. We select the feature with maximum information gain to split on. I think that using accuracy instead of information gain is a simpler approach. Is there any scenario where accuracy doesn't work and information gain does? Can anyone explain what the advantages are of using information gain over accuracy as the splitting criterion? Answer: Decision trees are generally prone to over-fitting and accuracy doesn't generalize well to unseen data. One advantage of information gain is that -- due to the factor $-p\log(p)$ in the entropy definition -- leaves with a small number of instances are assigned less weight ($\lim_{p \rightarrow 0^{+}} p\log(p) = 0$) and it favors dividing data into bigger but homogeneous groups. This approach is usually more stable and also chooses the most impactful features close to the root of the tree. EDIT: Accuracy is usually problematic with unbalanced data. Consider this toy example:

Weather  Wind    Outcome
Sunny    Weak    YES
Sunny    Weak    YES
Rainy    Weak    YES
Cloudy   Medium  YES
Rainy    Medium  NO
Rainy    Strong  NO
Rainy    Strong  NO
Rainy    Strong  NO
Rainy    Strong  NO
Rainy    Strong  NO
Rainy    Strong  NO
Rainy    Strong  NO
Rainy    Strong  NO
Rainy    Strong  NO
Rainy    Strong  NO
Rainy    Strong  NO
Rainy    Strong  NO

Weather and wind both produce only one incorrect label and hence have the same accuracy of 16/17. However, given this data, we would assume that weak winds (seen in 75% of the YES cases) are more predictive of a positive outcome than sunny weather (seen in 50% of the YES cases). That is, wind teaches us more about both outcomes. Since there are only a few data points with positive outcomes, we favor wind over weather, because wind is more predictive on the smaller label set, which we hope gives us a rule that is more robust to new data. The entropy of the outcome is $-\frac{4}{17}\log_2\frac{4}{17}-\frac{13}{17}\log_2\frac{13}{17} \approx 0.79$. 
The conditional entropy of the outcome given weather is $\frac{14}{17}\left(-\frac{1}{14}\log_2\frac{1}{14}-\frac{13}{14}\log_2\frac{13}{14}\right) \approx 0.31$ (the sunny and cloudy branches are pure), which leads to an information gain of about $0.48$. Similarly, wind gives a higher information gain of about $0.67$.
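A short script can recompute the entropies and information gains directly from the printed table (an illustrative check, not part of the original answer):

```python
from collections import Counter
from math import log2

# (weather, wind, outcome) rows of the toy example
rows = ([("Sunny", "Weak", "YES")] * 2
        + [("Rainy", "Weak", "YES"), ("Cloudy", "Medium", "YES"), ("Rainy", "Medium", "NO")]
        + [("Rainy", "Strong", "NO")] * 12)

def entropy(labels):
    """Shannon entropy of a list of class labels, in bits."""
    n = len(labels)
    return -sum(c / n * log2(c / n) for c in Counter(labels).values())

def info_gain(rows, col):
    """Entropy of the outcome minus the weighted entropy after splitting on column col."""
    base = entropy([r[-1] for r in rows])
    cond = sum(cnt / len(rows) * entropy([r[-1] for r in rows if r[col] == val])
               for val, cnt in Counter(r[col] for r in rows).items())
    return base - cond

print(round(info_gain(rows, 0), 2))  # gain of splitting on weather
print(round(info_gain(rows, 1), 2))  # gain of splitting on wind (larger, so wind is preferred)
```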
{ "domain": "datascience.stackexchange", "id": 1158, "tags": "machine-learning, classification, decision-trees, information-theory" }
I ran rosrun gazebo_ros debug and I get this at the end: signal SIGFPE?
Question: [ INFO] [1400250293.615927480]: Finished loading Gazebo ROS API Plugin. [ INFO] [1400250293.617035719]: waitForService: Service [/gazebo/set_physics_properties] has not been advertised, waiting... Msg Waiting for master Msg Connected to gazebo master @ Msg Publicized address: 192.168.10.234 [New Thread 0xa9df8b40 (LWP 10393)] Program received signal SIGFPE, Arithmetic exception. 0xb1e28fd2 in ?? () from /usr/lib/i386-linux-gnu/libdrm_radeon.so.1 Originally posted by Moussa on ROS Answers with karma: 1 on 2014-05-16 Post score: 0 Answer: This is a floating point error; it means that a program tried to do an illegal floating-point operation, such as dividing by zero or infinity. Since the library in question is libdrm_radeon, which is part of the graphics driver, I suspect this is a either a rendering bug or a bug in your graphics drivers. You may want to try upgrading your graphics drivers to the official ATI drivers. Originally posted by ahendrix with karma: 47576 on 2014-05-16 This answer was ACCEPTED on the original site Post score: 0
{ "domain": "robotics.stackexchange", "id": 17972, "tags": "ros" }
Dominations under oracles which is closed under complement?
Question: Edited at 2010/11/29: As John Watrous has mentioned, the class $\mathsf{C^O}$ may not be well-defined. After reading some earlier posts, I will try to restate my question in an unambiguous way. Let $\mathsf{O}$ be a complexity class that is closed under complement, i.e. $\mathsf{O} = \mathsf{coO}$. Also we assume that logspace, $\mathsf{L}$, is a subset of $\mathsf{O}$. When does the equality $\mathsf{L^O} = \mathsf{O}$ hold? We define $\mathsf{L^O}$ as the languages accepted by logspace oracle machines with an $\mathsf{O}$ oracle, where queries are written on a separate oracle tape not restricted to the logspace bound, and after each query the tape is automatically erased. We know that $\mathsf{NL} = \mathsf{coNL}$ by the Immerman–Szelepcsényi theorem, and we have $\mathsf{L^{NL}} = \mathsf{NL}$. Before the era of Reingold, when nobody knew whether $\mathsf{SL} = \mathsf{L}$, Nisan and Ta-Shma proved that $\mathsf{SL}$ is closed under complement. They also showed that $\mathsf{L^{SL}} = \mathsf{SL}$ in the same paper. In the paper "Directed Planar Reachability Is in Unambiguous Log-Space" by Bourke, Tewari and Vinodchandran, they claim in Corollary 4.3 that $\mathsf{L^{UL \cap coUL}} = \mathsf{UL \cap coUL}$. Clearly $\mathsf{UL \cap coUL}$ is closed under complement, but does this equality hold so trivially? Do we have any easy conditions to decide if $\mathsf{L^O}$ and $\mathsf{O}$ are in fact the same? By easy conditions I mean that we only have to check some properties of $\mathsf{O}$ to decide if they are equal, without using the definitions of the classes to prove the inclusion $\mathsf{L^O} \subseteq \mathsf{O}$. Another related question would be: Do we have any oracle $\mathsf{O}$ such that $\mathsf{L^O} \neq \mathsf{O}$? Answer: Edit: In revision 1, I wrote an embarrassingly complicated answer. The answer below is much simpler and stronger than the older answer. 
Edit: Even the “simplified” answer in revision 2 was more complicated than necessary. Let $\mathsf{O}$ be a complexity class that is closed under complement, i.e. $\mathsf{O}=\mathsf{coO}$. Also we assume that the logspace, $\mathsf{L}$, is a subset of $\mathsf{O}$. […] Do we have any oracle $\mathsf{O}$ such that $\mathsf{L^O} \neq \mathsf{O}$? Yes. Let $\mathsf{O}=\mathsf{RE}\cup\mathsf{coRE}$, where $\mathsf{RE}$ is the class of recursively enumerable languages. Then $\mathsf{O}$ is closed under complement and $\mathsf{O}$ contains $\mathsf{L}$. However, note that $\mathsf{L^O}=\mathsf{L^{RE}}$, and in particular $\mathsf{L^O}$ has a complete language under polynomial-time many-one reducibility. On the other hand, $\mathsf{O}=\mathsf{RE}\cup\mathsf{coRE}$ does not have a complete language under polynomial-time reducibility because $\mathsf{RE}\neq\mathsf{coRE}$. Therefore, $\mathsf{L^O}\neq\mathsf{O}$.
{ "domain": "cstheory.stackexchange", "id": 431, "tags": "cc.complexity-theory, complexity-classes, oracles" }
Difference in surface temperature between the Northwest Atlantic and Northeast Atlantic
Question: Why are the surface waters of the Northwest Atlantic Ocean colder than the surface waters of the Northeast Atlantic Ocean? Answer: The main reason is that the Gulf Stream transports warm surface water from the tropics, driven by the thermohaline circulation. The tropical trade winds push surface water towards the western Atlantic and build up stress. Further north the dominant wind direction is towards the east. The Coriolis force also helps to move the warm water masses towards the east. The Labrador Current, on the other hand, brings cold water down the western Atlantic Ocean. The thermohaline turnover also occurs further south in the western Atlantic Ocean.
{ "domain": "earthscience.stackexchange", "id": 1136, "tags": "ocean, oceanography, temperature, ocean-currents" }
How is it that fructose has a different metabolic pathway than glucose, yet glucose is converted to fructose?
Question: Fructose is described as having a different metabolic pathway (a more fat-inducing one) than glucose (see: http://healthyeating.sfgate.com/difference-between-sucrose-glucose-fructose-8704.html), as it can only be metabolized by the liver and does not impact insulin. What I don't understand is that in glycolysis, a paramount step of metabolizing glucose, it is very quickly converted to fructose-6-phosphate. Why does the body want to convert glucose into fructose? Answer: The metabolic pathway you are talking about is how fructose is converted into energy and how its concentration in the blood is regulated. It is indeed true that the blood fructose level does not affect the blood insulin level, and this is why it is more "fat-inducing": it cannot be effectively regulated like glucose. It is also true that in glycolysis, glucose is converted to fructose-6-phosphate. However, this should not be confused with fructose itself. The fructose phosphate that fructolysis produces is fructose-1-phosphate, not fructose-6-phosphate.
{ "domain": "chemistry.stackexchange", "id": 3007, "tags": "organic-chemistry, biochemistry, chemical-biology, carbohydrates" }
Find rectangle of minimum area where dimensions are larger than minimum
Question: Problem: Given a collection $S$ containing $|S|=n$ rectangles defined by dimensions $(x,y)\in R^2$ (width and height of rectangles are real numbers), find the rectangles with the minimum area ($A_i = x_i * y_i$) where $(x_i \geq a)$ and $(y_i \geq b$) for any $(a,b) \in R^2$. The naive solution $F(S,a,b)$ will solve this with $O(n)$ runtime complexity and $O(1)$ memory complexity: loop through all the rectangles that are larger than the minimum required values $(a,b)$ for $(x,y)$, remember the one with the smallest area $x*y$ and return that. This requires no preparation or indexing of any kind, just a simple loop (sorting according to area beforehand won't improve the $O(n)$ worst-case runtime complexity). Can this problem be solved with an algorithm with faster than $O(n)$ runtime complexity and up to O(n) memory complexity? Answer: One simple solution uses k-d trees. In preprocessing, prepare a sorted list $L$ of $x_i y_i$ as well as a three dimensional k-d tree $T$ storing the points $(x_i,y_i,x_iy_i)$; this can be done in time $O(n\log n)$ and takes up space $O(n)$. Given $a,b$, we do binary search on $L$ to find the least $c$ such that $T$ contains a point $(x_i,y_i,x_iy_i)$ with $x_i \geq a,y_i \geq b,x_iy_i \leq c$; each such query takes time $O(n^{2/3})$, for a total query time of $O(n^{2/3}\log n)$. It's not clear whether we actually need a three dimensional k-d tree; if a two dimensional one suffices, then the running time could get down to $\tilde{O}(n^{1/2})$.
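For reference, the O(n)-time, O(1)-memory baseline described in the question can be sketched as follows (my illustration; the function name is assumed):

```python
def min_area_rect(rects, a, b):
    """Among rectangles (x, y) with x >= a and y >= b, return the one of minimum
    area x*y, or None if no rectangle qualifies. O(n) time, O(1) extra memory."""
    best = None
    for x, y in rects:
        if x >= a and y >= b and (best is None or x * y < best[0] * best[1]):
            best = (x, y)
    return best
```

A sub-linear query structure such as the k-d-tree scheme in the answer can then be validated against this baseline on random inputs.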
{ "domain": "cs.stackexchange", "id": 3086, "tags": "algorithms, time-complexity, search-algorithms, space-complexity" }
Why are asteroids so much richer in precious metals than Earth's crust?
Question: Did the majority of Earth's precious metals sink below the crust during Earth's formation? Answer: This is in part marketing hype by wanna-be asteroid mining companies. That said, some asteroids are suspected to be richer in precious metals than is the Earth's crust. For example, the Earth's crust is significantly depleted in gold compared to the solar system as a whole. I wrote about the reasons why this is the case at physics.stackexchange.com. Gold and related precious metals are siderophiles, which means "iron-loving". When the Earth differentiated, the iron and nickel that sank to the center of the Earth took other siderophiles with them. In a sense, the precious metals are more siderophilic than is iron itself. Gold et al. easily dissolve in molten iron. Precious metals are so chemically inert that they do not readily combine to form compounds with other elements. There is a lot more gold and other precious metals in the Earth's core than there is in all of the asteroids combined.
{ "domain": "astronomy.stackexchange", "id": 6394, "tags": "asteroids" }
How to avoid code duplication in ordering system
Question: Here is the piece of order system. And this is hierarchy of order classes in this system Public MustInherit Class BaseItem Public Property Id() As Guid Public Property Sku() As String Public Property Quantity() As String Public Property TransactionId() As Integer? End Class Public Class SoOrderItem Inherits BaseItem Public Property BatchNumber() As String Public Property Lottable1() As String Public Property Lottable2() As String Public Property Lottable3() As String Public Property Lottable4() As String End Class Public Class AsnOrderItem Inherits BaseItem Public Property AdditionalInfo() As String End Class Public Class BaseOrder Public Property OrderReference() As String Public Property PoNumber() As String Public Property CustomerVat() As String Public Property CustomerReference() As String Public Property TransactionType() As String End Class Public Class SoOrder Inherits BaseOrder Public Property ConsigneeAddress() As String Public Property ConsigneeContact() As String Public Property ConsigneeName() As String Public Property OrderItems() As New List(Of SoOrderItem) End Class Public Class AsnOrder Inherits BaseOrder Public Property Account() As Account Public Property SupplierName() As String Public Property OrderItems() As New List(Of AsnOrderItem) End Class And here is confirm classes for them Public Class SoConfirmation Private _itransactionService As ITransactionService Private Sub AssignNewTransactionsToOrderItems(soOrders As IEnumerable(Of SoOrder)) For Each order In soOrders For Each item In order.OrderItems If (Not item.TransactionId.HasValue) Then Dim newTransactionId = _itransactionService.GenerateTransactionId() _itransactionService.AssignTransactionToOrderItem(order.OrderReference, item.Sku, item.Id, newTransactionId) item.TransactionId = newTransactionId End If Next Next End Sub Private Sub FilterForConfirmation(soOrders As IEnumerable(Of SoOrder)) For Each order In soOrders order.OrderItems.RemoveAll(Function(i) 
_itransactionService.IsTransactionProcessedSuccesfully(order.OrderReference, i.Id)) Next End Sub End Class Public Class AsnConfirmation Private _itransactionService As ITransactionService Private Sub AssignNewTransactionsToOrderItems(asnOrders As IEnumerable(Of AsnOrder)) For Each asn In asnOrders For Each item In asn.OrderItems If (Not item.TransactionId.HasValue) Then Dim newTransactionId = _itransactionService.GenerateTransactionId() _itransactionService.AssignTransactionToOrderItem(asn.OrderReference, item.Sku, item.Id, newTransactionId) item.TransactionId = newTransactionId End If Next Next End Sub Private Sub FilterForConfirmation(asnOrders As IEnumerable(Of AsnOrder)) For Each order In asnOrders order.OrderItems.RemoveAll(Function(i) _itransactionService.IsTransactionProcessedSuccesfully(order.OrderReference, i.Id)) Next End Sub End Class As you can see, the confirmation classes have a lot of duplicated code and I don't know how to avoid it... I will appreciate any help. Answer: You are right that you have code duplication where no duplication should be. 
As far as I can see from the code you have posted, nothing really changes if you would write a class that would handle it as a BaseOrder, since all the properties you are referring to are available on the BaseOrder (except for OrderItems, however this could be retrieved through an extra method implementing it) So you could write your class more like Public MustInherit Class BaseOrderConfirmation Private _itransactionService As ITransactionService Protected MustOverride Function GetOrderItems(order as BaseOrder) As IEnumerable(Of BaseItem) Protected Sub AssignNewTransactionsToOrderItems(orders As List(Of BaseOrder)) For Each order In orders For Each item In GetOrderItems(order) If (Not item.TransactionId.HasValue) Then Dim newTransactionId = _itransactionService.GenerateTransactionId() _itransactionService.AssignTransactionToOrderItem(order.OrderReference, item.Sku, item.Id, newTransactionId) item.TransactionId = newTransactionId End If Next Next End Sub Protected Sub FilterForConfirmation(orders As IEnumerable(Of BaseOrder)) For Each order In orders GetOrderItems(order).RemoveAll(Function(i) _itransactionService.IsTransactionProcessedSuccesfully(order.OrderReference, i.Id)) Next End Sub End Class This would now be an abstract class (ie: a class that cannot be instantiated), from which the GetOrderItems method has to be implemented still. Also note, that this change means that all methods now changed from Private modifier to the Protected modifier In the implementing classes you can then implement something like: Public Class SoOrderConfirmation Inherits BaseOrderConfirmation Protected Overrides Function GetOrderItems(order as BaseOrder) as List(Of BaseItem) return DirectCast(order, SoOrder).OrderItems End Function End Class Preferably though, the GetOrderItems method shouldn't return anything that can be modified. I wish I could create it as an IEnumerable(Of BaseItem) instead, however, that would not work with your FilterForConfirmation method. 
As I do not know how you intend to filter, I chose to keep the List(Of BaseItem). The filter also doesn't seem to filter, but rather seems to remove items from that list, so without knowing how you are using it, I chose to stick with List(Of BaseItem).
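The same template-method refactoring reads naturally in other languages too; here is a minimal Python sketch of the pattern (the names and the stub service interface are illustrative, not taken from the original VB.NET code):

```python
from abc import ABC, abstractmethod

class BaseOrderConfirmation(ABC):
    """Holds the shared confirmation logic once; subclasses only say how to
    reach the item list of their concrete order type."""

    def __init__(self, transaction_service):
        self._svc = transaction_service

    @abstractmethod
    def get_order_items(self, order):
        """Return the mutable item list of a concrete order."""

    def assign_new_transactions(self, orders):
        # Identical for all order types: only item access differs.
        for order in orders:
            for item in self.get_order_items(order):
                if item.transaction_id is None:
                    tid = self._svc.generate_transaction_id()
                    self._svc.assign_transaction(order.reference, item.sku, item.id, tid)
                    item.transaction_id = tid

    def filter_for_confirmation(self, orders):
        for order in orders:
            items = self.get_order_items(order)
            # In-place filter, mirroring RemoveAll on the original list.
            items[:] = [i for i in items
                        if not self._svc.is_processed(order.reference, i.id)]

class SoOrderConfirmation(BaseOrderConfirmation):
    def get_order_items(self, order):
        return order.order_items
```

Each additional order type (e.g. an ASN confirmation) then needs only its own one-line `get_order_items` override.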
{ "domain": "codereview.stackexchange", "id": 27083, "tags": "object-oriented, .net, vb.net" }
Difference between SNR and PSNR
Question: I understand that SNR is the ratio of signal power to noise power, i.e., how much the original image is affected by the added noise. In PSNR, we take the square of the peak value in the image (for an 8-bit image, the peak value is 255) and divide it by the mean square error. SNR and PSNR are both used to measure the quality of an image after reconstruction, and I understand that the higher the SNR or PSNR, the better the reconstruction. What I don't understand is how SNR and PSNR differ in what they tell us about the reconstructed image. What does the PSNR of an image tell us that the SNR of the same image can't? Simply put, how does the conclusion drawn from PSNR differ from that drawn from SNR? Answer: Let's start with the mathematical definitions. Discrete signal power is defined as $$P_s = \sum_{n=-\infty}^{\infty}\left|s[n]\right|^2.$$ We can apply this notion to noise $w$ on top of some signal to calculate $P_w$ in the same way. The signal-to-noise ratio (SNR) is then simply $$P_{SNR}=\frac{P_s}{P_w}.$$ If we've received a noise-corrupted signal $x[n] = s[n]+w[n]$ then we compute the SNR as follows: $$P_{SNR}=\frac{P_s}{P_w} = \frac{P_s}{\sum_n\left|x[n]-s[n]\right|^2}.$$ Here $\sum_n\left|x[n]-s[n]\right|^2$ is simply the total squared error between the original and corrupted signals. Note that if we scaled the definition of power by the number of points in the signal, this would have been the mean squared error (MSE), but since we're dealing with ratios of powers, the result stays the same. Let us now interpret this result. It is the ratio of the power of the signal to the power of the noise. Power is in some sense the squared norm of your signal: it shows how much squared deviation from zero you have on average. You should also note that we can extend this notion to images by summing over both the rows and columns of your image, or simply by stretching your entire image into a single vector of pixels and applying the one-dimensional definition. 
You can see that no spatial information is encoded into the definition of power. Now let's look at the peak signal-to-noise ratio. This definition is $$P_{PSNR}=\frac{\max(s^2[n])}{\text{MSE}}.$$ If you stare at this for long enough you will realize that this definition is really the same as that of $P_{SNR}$ except that the numerator of the ratio is now the maximum squared intensity of the signal, not the average one. This makes the criterion less strict. You can see that $P_{PSNR} \ge P_{SNR}$ and that they will only be equal to each other if your original clean signal is constant everywhere, and with maximum amplitude. Notice that although the variance of a constant signal is null, its power is not; the level of such a constant signal does make a difference in SNR but not in PSNR. Now, why does this definition make sense? It makes sense because in the case of SNR we're looking at how strong the signal is compared to how strong the noise is. We assume that there are no special circumstances. In fact, this definition is adapted directly from the physical definition of electrical power. In the case of PSNR, we're interested in the signal peak because we can be interested in things like the bandwidth of the signal, or the number of bits we need to represent it. This is much more content-specific than pure SNR and can find many reasonable applications, image compression being one of them. Here we're saying that what matters is how well high-intensity regions of the image come through the noise, and we're paying much less attention to how we're performing under low intensity.
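The difference is easy to see in code; a small NumPy sketch (my illustration, with power summed over all pixels as in the definitions above, and dB scaling for both ratios):

```python
import numpy as np

def snr_db(clean, noisy):
    """Ratio of total signal power to total noise power, in dB."""
    noise = noisy - clean
    return 10 * np.log10(np.sum(clean ** 2) / np.sum(noise ** 2))

def psnr_db(clean, noisy, peak=255.0):
    """Peak intensity squared over the mean squared error, in dB."""
    mse = np.mean((noisy - clean) ** 2)
    return 10 * np.log10(peak ** 2 / mse)

# A flat mid-gray "image" corrupted by a constant offset of 10:
clean = np.full((8, 8), 100.0)
noisy = clean + 10.0
print(snr_db(clean, noisy))   # 20.0 dB: signal power 100^2 vs noise power 10^2
print(psnr_db(clean, noisy))  # ~28.1 dB: 255^2 vs an MSE of 100 -- higher, as expected
```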
{ "domain": "dsp.stackexchange", "id": 7625, "tags": "image-processing" }
Which specific field to study to know more about genes and genomes?
Question: I recently got interested in these specific subjects: 1. Evolution, 2. DNA, genomes, and their structure, 3. Abiogenesis. I am a software engineer with a good fundamental understanding of math & physics, and I was thinking of reading some biology in my free time. A free course or a book that can build a broader understanding of these fields of biology would definitely help. Regards Answer: I believe that you're looking for fields with names like "genomics", "genome evolution", "population genetics", "molecular evolution", and possibly "computational biology"/"bioinformatics". Each of these is a huge field with a large literature. I'll try to recommend some resources, but they're not exhaustive. For a simple mathematical introduction to population genetics and microevolution, I like Joe Felsenstein's book (free online). Joe furthermore has a lot of slides up, along with audio tracks, for courses he's taught covering several of these areas. He is coming at it from the perspective of a mathematically sophisticated computational biologist with an interest in designing software for phylogenetic inference, so it might be a point of view that works for you. I'm sure that there are lots of other similar resources, but I'll admit that I'm personally fond of Joe's style. It does look like there are online courses for genomics intros; I don't know how good they are. The field changes so quickly that it may be hard for them to keep up. MIT has a computational biology course up; I imagine it's probably pretty good. Abiogenesis is a bit harder, but you might look into the field of exobiology/astrobiology; here is a list of online courses.
{ "domain": "biology.stackexchange", "id": 10487, "tags": "evolution, gene, genomes" }
How to make a language homoiconic
Question: According to this article the following line of Lisp code prints "Hello world" to standard output. (format t "hello, world") Lisp, which is a homoiconic language, can treat code as data in this way: Now imagine that we wrote the following macro: (defmacro backwards (expr) (reverse expr)) backwards is the name of the macro, which takes an expression (represented as a list), and reverses it. Here’s "Hello, world" again, this time using the macro: (backwards ("hello, world" t format)) When the Lisp compiler sees that line of code, it looks at the first atom in the list (backwards), and notices that it names a macro. It passes the unevaluated list ("hello, world" t format) to the macro, which rearranges the list to (format t "hello, world"). The resulting list replaces the macro expression, and it is what will be evaluated at run-time. The Lisp environment will see that its first atom (format) is a function, and evaluate it, passing it the rest of the arguments. In Lisp achieving this task is easy (correct me if I'm wrong) because code is implemented as list (s-expressions?). Now take a look at this OCaml (which is not a homoiconic language) snippet: let print () = let message = "Hello world" in print_endline message ;; Imagine you want to add homoiconicity to OCaml, which uses a much more complex syntax compared to Lisp. How would you do that? Does the language has to have a particularly easy syntax to achieve homoiconicity? EDIT: from this topic I found another way to achieve homoiconicity which is different from Lisp's: the one implemented in the io language. It may partially answer this question. Here, let’s start with a simple block: Io> plus := block(a, b, a + b) ==> method(a, b, a + b ) Io> plus call(2, 3) ==> 5 Okay, so the block works. The plus block added two numbers. Now let’s do some introspection on this little fellow. 
Io> plus argumentNames ==> list("a", "b") Io> plus code ==> block(a, b, a +(b)) Io> plus message name ==> a Io> plus message next ==> +(b) Io> plus message next name ==> + Hot holy cold mold. Not only can you get the names of the block params. And not only can you get a string of the block’s complete source code. You can sneak into the code and traverse the messages inside. And most amazing of all: it’s awfully easy and natural. True to Io’s quest. Ruby’s mirror can’t see any of that. But, whoa whoa, hey now, don’t touch that dial. Io> plus message next setName("-") ==> -(b) Io> plus ==> method(a, b, a - b ) Io> plus call(2, 3) ==> -1 Answer: You can make any language homoiconic. Essentially you do this by 'mirroring' the language (meaning for any language constructor you add a corresponding representation of that constructor as data, think AST). You also need to add a couple of additional operations like quoting and unquoting. That's more or less it. Lisp had that early on because of its easy syntax, but W. Taha's MetaML family of languages showed that it's possible to do for any language. The whole process is outlined in Modelling homogeneous generative meta-programming. A more lightweight introduction to the same material is here.
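OCaml itself has no built-in quoting, but the mirroring idea (code reflected as a data structure, plus quote/unquote operations) can be demonstrated in any language with an AST library. A Python sketch (my own illustration, not from the answer) that replays the Io session's operator swap:

```python
import ast

# "Quote": parse source into its data representation (the mirror of the code).
tree = ast.parse("a + b", mode="eval")

# Traverse the mirrored structure, much like `plus message next` in Io.
op_node = tree.body                  # BinOp(left=Name('a'), op=Add(), right=Name('b'))
print(type(op_node.op).__name__)     # -> Add

# Rewrite the operator node in place, turning a + b into a - b.
op_node.op = ast.Sub()
ast.fix_missing_locations(tree)

# "Unquote": compile the modified data back into code and evaluate it.
minus = compile(tree, "<ast>", "eval")
print(eval(minus, {"a": 2, "b": 3}))  # -> -1, same as Io's plus call(2, 3) after setName("-")
```

The point is the same one the answer makes: the AST types are the "mirror" of the language's constructors, and quote/unquote are `parse` and `compile`.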
{ "domain": "cs.stackexchange", "id": 7313, "tags": "programming-languages, functional-programming" }
Stack - Implemented as Singly Linked List
Question: Note: I do know that Python libraries provide a Linked List and Stack. This implementation has been done to practice Python and some of the data structures and Algorithms. I have implemented Stack as a Singly Linked-List, feel free to make suggestions. Note: code works Targeted Big O: search: O(n), Push and Pop: O(1) Methods: push(value)- for insertion pop()- for removing is_empty()- to check if empty peek()- look at what head holds without poping stack_search(value)-find a value length()-get size Classes: class Node: def __init__(self, value): self.data = value self.front = None class Stack: def __init__(self): self.head = None self.count = 0 def push(self, value): #make new node new_node = Node(value) self.count += 1 if self.head is not None: #make new node point to the old node new_node.front = self.head # make head point to the new element self.head = new_node def is_empty(self): if self.head is None: return True else: return False def pop(self): if not self.is_empty(): self.count -= 1 temp = self.head.data # make head point to the node after the node that will be poped self.head = self.head.front return temp else: print("can not pop items from an empty list") def peek(self): return self.head.data def __iter__(self): # start from the node that head refers to node = self.head while node: yield node node = node.front def stack_search(self, value): # start from the head p = self.head while p is not None: # make p reference to next node if p.front is not None: if p.data == value: print("Found value") return p.data p = p.front else: print("fail") return 0 def length(self): return self.count Test: from stack_Queue import Stack def main(): print("-------Test Stack----------") print("------Test push-------") my_stack = Stack() test_list = [1, 2, 3, 4, -2000000, 'a', 500000, 50] for i in test_list: my_stack.push(i) print("-------Push Done------") print("-------Dump stack-----") for i in my_stack: print(i.data) print("-----Dump stack Done---") print("----check 
count-------") print("should be: 8") print(my_stack.length()) print("-----check count Done--") print("-------Test pop $ print remaining stack--------") my_stack.pop() for i in my_stack: print(i.data) print("-----Test pop Done-----") print("-----Test search-------") x = my_stack.stack_search('a') print(x) print("---Test search Done---") print("-----Test pop - full stack & print what is being poped-----") while my_stack.length() > 0: x = my_stack.pop() print(x) print("-----Test pop Done-----") print("-----Test Empty Stack-----") for i in my_stack: print(i.data) print("---------Done---------") if __name__ == "__main__": main() Result: -------Test Stack---------- ------Test push------- -------Push Done------ -------Dump stack----- 50 500000 a -2000000 4 3 2 1 -----Dump stack Done--- ----check count------- should be: 8 8 -----check count Done-- -------Test pop $ print remaining stack-------- 500000 a -2000000 4 3 2 1 -----Test pop Done----- -----Test search------- Found value a ---Test search Done--- -----Test pop - full stack & print what is being poped----- 500000 a -2000000 4 3 2 1 -----Test pop Done----- -----Test Empty Stack----- ---------Done--------- Process finished with exit code 0 Answer: Your stack_search function re-implements a loop over your linked list. Instead of that you can use your __iter__ itself to keep things DRY. Plus it should return True or False instead of 1 and 0. The Pythonic way would be to implement __contains__ to check if your container type contains an item. Then you could simply do 'a' in my_stack. def __contains__(self, value): return value in (node.data for node in self) Instead of a length method, implement __len__. This will work with Python's built-in len(). Plus after defining this you won't need the method is_empty either, as __len__ can be used to check for falsiness: if my_stack: .... You could also define __bool__ based on self.head for checking falsiness instead of using __len__, but it would be redundant in my opinion. 
pop() should raise an exception when the stack is empty; for example, list.pop, dict.pop and deque.pop all raise an error in that case. Stack().peek() currently raises an AttributeError. Instead of that you should be raising some custom exception when the stack is empty.
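Putting the review's suggestions together, a revised class might look like the following sketch (the EmptyStackError name is my own choice, not from the review, and __iter__ deliberately yields values rather than Node objects so callers never touch internal nodes):

```python
class Node:
    def __init__(self, value):
        self.data = value
        self.front = None


class EmptyStackError(Exception):
    """Raised on pop/peek of an empty stack (name is a hypothetical choice)."""


class Stack:
    def __init__(self):
        self.head = None
        self.count = 0

    def push(self, value):
        new_node = Node(value)
        new_node.front = self.head   # works for the empty stack too: front = None
        self.head = new_node
        self.count += 1

    def pop(self):
        if self.head is None:
            raise EmptyStackError("pop from empty stack")
        value = self.head.data
        self.head = self.head.front
        self.count -= 1
        return value

    def peek(self):
        if self.head is None:
            raise EmptyStackError("peek at empty stack")
        return self.head.data

    def __iter__(self):              # yield stored values, newest first
        node = self.head
        while node:
            yield node.data
            node = node.front

    def __contains__(self, value):   # enables: 'a' in my_stack
        return any(item == value for item in self)

    def __len__(self):               # enables: len(my_stack) and truthiness
        return self.count
```

With this, `if my_stack:` replaces is_empty(), `value in my_stack` replaces stack_search, and popping an empty stack fails loudly instead of printing.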
{ "domain": "codereview.stackexchange", "id": 27157, "tags": "python, python-3.x, linked-list, stack" }
Schrodinger equation: If $V(x)=V(-x)$ then prove that $\psi(x)=\psi(-x) $ or $\psi(x)=-\psi(-x)$
Question: The title explains itself. If the potential is an even function, then prove that the wave function is either odd or even. I set $-x$ in the Schrodinger equation and found that $\psi(-x)$ is also a solution of the equation, therefore any linear combination of $\psi(x)$ and $\psi(-x)$ is also a solution, but I couldn't get any further than that. Any help is appreciated. Answer: As stated, that statement is false. Consider as a counterexample the infinite potential well centered at the origin with periodic boundary conditions. A plane wave with the appropriate momentum is an eigenstate of the Hamiltonian, but is neither even nor odd. Notice, however, that each allowed energy level is twofold degenerate - you can have a plane wave propagating to the left or to the right, and these states are linearly independent. Therefore, rather than taking the plane waves as a basis, we can consider the even and odd linear combinations of them at each momentum (corresponding to cosines and sines). By considering this basis instead of the plane wave basis, we have what you want - energy eigenstates which are also eigenstates of the parity operator (which flips the sign of $x$). This is the key point - it's not that every possible energy eigenstate is even or odd, but rather that you can always find a basis of energy eigenstates which are. To help with your proof of this, note that if $\psi(x)$ is an energy eigenstate with energy $E$, then so is $\phi(x)=\psi(-x)$. From there, taking the even and odd linear combinations of these states yields energy eigenstates which are even and odd, respectively.
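To spell out the construction in the last paragraph: since $\hat{H}\psi(x)=E\psi(x)$ and $\hat{H}\phi(x)=E\phi(x)$ with $\phi(x)=\psi(-x)$, define

$$\psi_\pm(x)\equiv\psi(x)\pm\psi(-x).$$

By linearity $\hat{H}\psi_\pm=E\psi_\pm$, while

$$\psi_\pm(-x)=\psi(-x)\pm\psi(x)=\pm\,\psi_\pm(x),$$

so $\psi_+$ is even and $\psi_-$ is odd. Since $\psi=\tfrac{1}{2}(\psi_++\psi_-)$, at least one of the two combinations is nonzero, which gives the basis of definite-parity energy eigenstates described above.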
{ "domain": "physics.stackexchange", "id": 59882, "tags": "homework-and-exercises, wavefunction, potential, schroedinger-equation, parity" }
How can a NN guess the velocity of an object from a video source?
Question: In the case of a video stream, I'd like to detect the speed (an approximation) of an object that is moving. What would be the best approach to take? I am thinking of 3 methodologies, though I've not found a lot of resources to read about them. Method 1: I could feed Frame images to my CNN and pinpoint the object in the frame. Then calculate the difference of the position boxes. Frame 1: [ o ] ---> NN --> 3 Frame 2: [ o ] ---> NN --> 7 7-3 = 4 -> so dX = +4 between 1 frame, so I can estimate the dT (if I know the sampling rate of the frames) Method 2: Is there any way where the NN can take as a feed-input the context of the previous frame and make the calculation itself? video lapse 1sec: [ oooo ] ---> NN --> 4m/sec Method 3: What if I can control the shutter speed of my camera, could I calculate the velocity of an object by the motion blur? Frame image: [ ---o ] ---> NN --> 4m/$shutterSpeed Any relevant resource to read would help a lot. Answer: That is not possible without external labeled data (i.e., each video would have to be labeled with the object speed). Without labels, the neural network would be unable to learn the object's velocity because the object might be moving in any direction relative to the camera. For example, an object moving directly away from the camera would be getting smaller without any motion blur.
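For Method 1, once some detector yields per-frame positions, the speed estimate itself is simple finite-difference arithmetic. A sketch (the function name and calibration factor are my own, not from the question):

```python
def estimate_speed(positions, fps, meters_per_pixel=1.0):
    """Finite-difference speed estimate (Method 1's arithmetic).

    positions: per-frame (x, y) object centers from any detector.
    fps: frame rate of the stream, so dT between frames is 1 / fps.
    meters_per_pixel: calibration factor; with the default of 1.0 the
    result is in pixels per second rather than meters per second.
    """
    speeds = []
    for (x0, y0), (x1, y1) in zip(positions, positions[1:]):
        dist = ((x1 - x0) ** 2 + (y1 - y0) ** 2) ** 0.5   # pixels per frame
        speeds.append(dist * fps * meters_per_pixel)       # -> units per second
    return sum(speeds) / len(speeds)

# An object shifting 4 px per frame at 30 fps moves at 120 px/s.
print(estimate_speed([(0, 0), (4, 0), (8, 0)], fps=30))   # -> 120.0
```

As the answer notes, converting pixels per second into real-world speed still needs external information (camera calibration or labeled data); the arithmetic above only covers the image-plane part.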
{ "domain": "datascience.stackexchange", "id": 7260, "tags": "neural-network" }
CFG for $L=\{a^m b^n c^k | m,n,k > 0, k\neq m+n\}$
Question: I started learning CFG and I'm trying to find CFG for this language, but I have no idea where to start and I can't seem to find this one online anywhere. It would be great help, if someone could show me how to do it. Thank you! Answer: You can verify that the following CFG recognizes the language $\{a^nb^mc^k|n,m,k>0, n+m=k\}$: $S \rightarrow aXc | aSc$ $X \rightarrow bXc | bc$ Now your language $L$ can be written as $L = L_1 \cup L_2$ where $L_1 = \{a^nb^mc^k|n,m,k>0, n+m>k\}$ and $L_2 = \{a^nb^mc^k|n,m,k>0, n+m<k\}$. You can then find a CFG for $L_1$ and $L_2$, based on the grammar I gave you above.
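As a sanity check on the grammar above (my own addition, not part of the answer), one can enumerate every string it derives up to a length cap (each rewrite strictly grows the sentential form, so the search terminates) and compare against the definition $\{a^nb^mc^{n+m}\}$:

```python
RULES = {"S": ["aXc", "aSc"], "X": ["bXc", "bc"]}

def language(max_len):
    """Every terminal string of length <= max_len derivable from S."""
    terminals, frontier = set(), {"S"}
    while frontier:
        nxt = set()
        for form in frontier:
            # Locate the (single) nonterminal, if any.
            i = next((k for k, c in enumerate(form) if c.isupper()), None)
            if i is None:                     # no nonterminal left: a word
                terminals.add(form)
                continue
            for rhs in RULES[form[i]]:
                new = form[:i] + rhs + form[i + 1:]
                if len(new) <= max_len:       # rewrites only grow, safe to prune
                    nxt.add(new)
        frontier = nxt
    return terminals

expected = {"a" * n + "b" * m + "c" * (n + m)
            for n in range(1, 5) for m in range(1, 5) if 2 * (n + m) <= 8}
print(language(8) == expected)   # -> True
```

The same check applied to candidate grammars for $L_1$ and $L_2$ is a cheap way to catch mistakes before writing a proof.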
{ "domain": "cs.stackexchange", "id": 18172, "tags": "context-free, formal-grammars" }
Find all subsets of an int array whose sums equal a given target
Question: I am trying to implement a function below: Given a target sum, populate all subsets, whose sum is equal to the target sum, from an int array. For example: Target sum is 15. An int array is { 1, 3, 4, 5, 6, 15 }. Then all satisfied subsets whose sum is 15 are as follows: 15 = 1+3+5+6 15 = 4+5+6 15 = 15 I am using java.util.Stack class to implement this function, along with recursion. GetAllSubsetByStack class import java.util.Stack; public class GetAllSubsetByStack { /** Set a value for target sum */ public static final int TARGET_SUM = 15; private Stack<Integer> stack = new Stack<Integer>(); /** Store the sum of current elements stored in stack */ private int sumInStack = 0; public void populateSubset(int[] data, int fromIndex, int endIndex) { /* * Check if sum of elements stored in Stack is equal to the expected * target sum. * * If so, call print method to print the candidate satisfied result. */ if (sumInStack == TARGET_SUM) { print(stack); } for (int currentIndex = fromIndex; currentIndex < endIndex; currentIndex++) { if (sumInStack + data[currentIndex] <= TARGET_SUM) { stack.push(data[currentIndex]); sumInStack += data[currentIndex]; /* * Make the currentIndex +1, and then use recursion to proceed * further. */ populateSubset(data, currentIndex + 1, endIndex); sumInStack -= (Integer) stack.pop(); } } } /** * Print satisfied result. i.e. 
15 = 4+6+5 */ private void print(Stack<Integer> stack) { StringBuilder sb = new StringBuilder(); sb.append(TARGET_SUM).append(" = "); for (Integer i : stack) { sb.append(i).append("+"); } System.out.println(sb.deleteCharAt(sb.length() - 1).toString()); } } Main class public class Main { private static final int[] DATA = { 1, 3, 4, 5, 6, 2, 7, 8, 9, 10, 11, 13, 14, 15 }; public static void main(String[] args) { GetAllSubsetByStack get = new GetAllSubsetByStack(); get.populateSubset(DATA, 0, DATA.length); } } Output in Console is as follows: 15 = 1+3+4+5+2 15 = 1+3+4+7 15 = 1+3+5+6 15 = 1+3+2+9 15 = 1+3+11 15 = 1+4+2+8 15 = 1+4+10 15 = 1+5+2+7 15 = 1+5+9 15 = 1+6+8 15 = 1+14 15 = 3+4+6+2 15 = 3+4+8 15 = 3+5+7 15 = 3+2+10 15 = 4+5+6 15 = 4+2+9 15 = 4+11 15 = 5+2+8 15 = 5+10 15 = 6+2+7 15 = 6+9 15 = 2+13 15 = 7+8 15 = 15 Please help me with the following things: How can I improve this code to reduce the time spent in recursion? Is sorting the int array (from high to low) before recursion a better way? Is there a way to improve the code without using recursion? Answer: There are three reasonable responses here: yes, your recursion code can be improved for performance. yes, part of that improvement can come from sorting the data. yes, there's a way to refactor the code to not use recursion, and it may even be faster. Bearing that in mind, this answer becomes 'complicated'. Basic performance improvements for current code: if (sumInStack == TARGET_SUM) { print(stack); } can easily be: if (sumInStack >= TARGET_SUM) { if (sumInStack == TARGET_SUM) { print(stack); } // there is no need to continue when we have an answer // because nothing we add from here on in will make it // add to anything less than what we have... return; } I dislike any recursive function that relies on external (outside-the-method) values. In your case, the sumInStack is external. This makes the target hard to 'see'. 
Additionally, if we do sort the data, there are some benefits we can have, and a way to restructure the recursion to make it do less work (since we can guarantee that all values after a point have certain properties...): consider the method (assuming sorted data): public void populateSubset(final int[] data, int fromIndex, final int[] stack, final int stacklen, final int target) { if (target == 0) { // exact match of our target. Success! printResult(Arrays.copyOf(stack, stacklen)); return; } while (fromIndex < data.length && data[fromIndex] > target) { // take advantage of sorted data. // we can skip all values that are too large. fromIndex++; } while (fromIndex < data.length && data[fromIndex] <= target) { // stop looping when we run out of data, or when we overflow our target. stack[stacklen] = data[fromIndex]; populateSubset(data, fromIndex + 1, stack, stacklen + 1, target - data[fromIndex]); fromIndex++; } } You would call this function with: Arrays.sort(data); populateSubset(data, 0, new int[data.length], 0, 15); So, that is 'can the code be improved?' and 'will sorting help?' As for the 'unrolled' (no recursion) version of the system, it can be done. It would require three int[] arrays: int[] data = {....} int[] sum = new int[data.length]; int[] indices = new int[data.length]; int depth = 0; int lastindex = -1; The sum and indices arrays act like a stack, and the depth is how deep the stack is (again, assume sorted data): Arrays.sort(data); while (depth >= 0) { lastindex++; if (lastindex == data.length) { // we have run out of data. do { // walk up the stack till we find some data. depth--; } while (depth >= 0 && (lastindex = indices[depth] + 1) >= data.length); } if (depth >= 0) { ..... you then add your code in here to check the target, keep it updated in your 'stack'. go down a level and move on.... } }
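The pruning ideas in this answer (sort first, carry a shrinking remaining target, succeed when it reaches zero, skip values larger than what remains) are language-independent. Here is a rough Python sketch of the same recursion, not a translation of the exact Java above; it sorts high-to-low, matching the question's suggestion:

```python
def subsets_summing_to(data, target):
    """All subsets of `data` (positive ints) summing to `target`.

    Uses the answer's pruning: sort the data, carry the remaining
    target down the recursion, skip values larger than what is left,
    and stop a branch as soon as the target hits zero.
    """
    data = sorted(data, reverse=True)
    results = []

    def recurse(start, remaining, chosen):
        if remaining == 0:                   # exact match: record and stop
            results.append(list(chosen))
            return
        for i in range(start, len(data)):
            if data[i] > remaining:          # too big for what's left: skip
                continue
            chosen.append(data[i])
            recurse(i + 1, remaining - data[i], chosen)
            chosen.pop()

    recurse(0, target, [])
    return results

print(subsets_summing_to([1, 3, 4, 5, 6, 15], 15))
# -> [[15], [6, 5, 4], [6, 5, 3, 1]]
```

This reproduces the three subsets from the question's first example (15, 4+5+6, and 1+3+5+6).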
{ "domain": "codereview.stackexchange", "id": 15916, "tags": "java, array, recursion, mathematics, combinatorics" }
Is there an algorithm to find all connected sub-graphs of size K?
Question: I was just wondering if there is an efficient algorithm that, given an undirected graph G, finds all the sub-graphs whose size is k (or less)? I searched around, and only found problems about finding the connected components. But I am interested in the smaller and more local connected sub-graphs. To clarify, the graph of interest (road networks) has n vertices, and has a relatively low degree (4 to 10). I am interested in finding/enumerating all connected sub-graphs with size k (in terms of nodes), e.g. by listing the vertices of each. k is relatively small. Answer: There can be exponentially many such subgraphs, so any such algorithm will necessarily be slow. To enumerate all of them, choose any number $i$ in the range $[1,k]$, choose any subset $S$ of $i$ of the vertices, discard all edges that have an endpoint not in $S$, choose any subset of the remaining edges, then check if the graph with vertex set $S$ and the chosen edge subset is connected; if not, discard the graph; if yes, output the subgraph. If you implement each "choose" with a for-loop that enumerates over all possibilities, this will enumerate over all graphs. There are standard ways to enumerate all subsets of a set. You can make it a bit more efficient by choosing the edges in a particular order: for each $i \in [1,k]$: for each subset $S$ of exactly $i$ of the vertices: (*) let $E_1 = \{(u,v) \in E : u \in S, v \in S\}$ and $T := \emptyset$. for each $v \in S$: let $E_2 = \{(u,v) \in E : u \in S\}$. if $E_2$ is empty and $T$ has no edge incident on $v$, go to the next iteration of the loop marked (*). if $E_2$ is non-empty: choose a non-empty subset $E_3$ of $E_2$. set $T := T \cup E_3$ and $E_1 := E_1 \setminus E_2$. if the graph $(S,T)$ with vertex set $S$ and edge set $T$ is connected, output it. I don't know how to guarantee polynomial delay, but this might be fine for your particular application.
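For the asker's variant (listing vertex sets of size at most $k$ whose induced subgraph is connected, rather than all edge subsets), the brute-force loop structure looks like this in Python; as the answer warns, it is exponential in general, but workable for small $k$ on low-degree graphs:

```python
from itertools import combinations

def connected_vertex_sets(n, edges, k):
    """All vertex subsets of size <= k inducing a connected subgraph.

    Brute force, following the answer's scheme: choose a vertex subset,
    restrict to the induced edges, keep it if it is connected.
    """
    adj = {v: set() for v in range(n)}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)

    def connected(vs):
        vs = set(vs)
        start = next(iter(vs))
        seen, stack = {start}, [start]
        while stack:                       # DFS restricted to vs
            for w in adj[stack.pop()] & vs:
                if w not in seen:
                    seen.add(w)
                    stack.append(w)
        return seen == vs

    return [S for size in range(1, k + 1)
            for S in combinations(range(n), size)
            if connected(S)]

# Path graph 0-1-2: the connected sets of size <= 2 are vertices and edges.
print(connected_vertex_sets(3, [(0, 1), (1, 2)], 2))
# -> [(0,), (1,), (2,), (0, 1), (1, 2)]
```

Smarter enumerations (growing each set outward from a seed vertex) avoid testing disconnected candidates at all, but the sketch above matches the answer's structure.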
{ "domain": "cs.stackexchange", "id": 17259, "tags": "graphs" }
Is self-energy $\mathrm{Im}\Sigma^r<0$ always true?
Question: Consider a one-particle retarded Green's function $$G^r(\alpha)=[\omega+i\eta-\varepsilon(\alpha)-\Sigma^r(\alpha)]^{-1}$$ with self-energy $\Sigma^r(\alpha)$ for some quantum number $\alpha$. It is argued that $-\mathrm{Im}\Sigma^r>0$ always holds as it signifies the quasiparticle lifetime. Is this true always? Answer: Let us consider the following: $\Sigma^r$ is a retarded self-energy, i.e., it is given by expressions like $$\Sigma^r(\omega)=\sum_k |V_k|^2G^r(\omega).$$ Here $V_k$ might be replace by vortex parts with more complex structure, but the rpesence of the retarged Green's function, $\Im \left[G^r\right]<0$ suggests that $\Im [\Sigma^r]<0$. $G^r$ is a retarded Green's function - it would not be the case, if we had $\Im \left[\Sigma^r\right]>0$. I cannot say, if there is a rigorous mathematical proof that $\Im[\Sigma^r]\leq 0$, for any type of dispersions laws and interactions. However, if this relation breaks, we are facing the breakdown of the causality in our calculations, which means that some of the premises of our derivations are wrong - the initial Hamiltonian requires revision (keeping in mind that sometimes the breakdowns of theory are meaningful - e.g., many quantities diverge near phase transitions).
{ "domain": "physics.stackexchange", "id": 85580, "tags": "quantum-field-theory, condensed-matter, solid-state-physics, greens-functions, self-energy" }
can't get the right spectrum in scilab
Question: I'm trying to write a simple script that should plot the spectrum in Scilab. To test it I use a sine function at 440 Hz, so that I get my Dirac peak at that position. My problem is that it doesn't work and I don't understand why. Here's the code: Fs = 8000; f = 440; t = 0:1/Fs:1; y = sin(2*%pi*f*t); nf = 1024; // number of points in the DFT Y = fft(y) f = Fs/2 * linspace(0,1,nf/2+1); clf(); plot(f,abs(Y(1:nf/2+1))); and this is what I get: any idea why I get this? Answer: Your fft vector Y has the same length as your input signal y, because you just specified nf without letting fft() know what your desired FFT length is. This is why your peak does not appear at 440 Hz. Your vector Y does not correspond to the frequencies in f; it's just a matter of correct scaling of the x-axis. EDIT: I do not know Scilab, so I don't know how to pass the desired FFT length to fft(). If you can't do that, you just need to make your time-domain signal have the desired FFT length.
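The same point in NumPy rather than Scilab (a sketch; Scilab's fft may take the length argument differently): either hand the FFT an explicit length, or derive the frequency axis from the length the transform actually has.

```python
import numpy as np

Fs = 8000                      # sampling rate, Hz
f0 = 440                       # tone frequency, Hz
t = np.arange(0, 1, 1 / Fs)    # 8000 samples = 1 second
y = np.sin(2 * np.pi * f0 * t)

# Option A: make the FFT length explicit (here the signal is cropped to nf
# samples, so the 440 Hz line lands between bins and smears a little).
nf = 1024
Y = np.fft.fft(y, nf)
f_axis = np.linspace(0, Fs / 2, nf // 2 + 1)

# Option B: transform the whole signal and build the axis from its true length.
Yfull = np.fft.fft(y)
f_full = np.fft.fftfreq(len(y), d=1 / Fs)
peak = f_full[np.argmax(np.abs(Yfull[: len(y) // 2]))]
print(peak)   # -> 440.0, because the axis now matches the data length
```

The original script's mistake was mixing the two: a full-length Y plotted against an nf-point axis.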
{ "domain": "dsp.stackexchange", "id": 1579, "tags": "matlab, discrete-signals" }
Determining if a system is Linear
Question: Having $h[n]=u_0[n]+0.8u_0[n-1]+1.6u_0[n-2]$ and $x[n]=(u_1[n]-u_1[n-3])$ The goal is to determine if the system $y[n]=h[n]*x[n]$ is linear. I know that I would have to test it for Homogeneity and Additivity in order to determine if the system is linear or not, but the actual process of testing it is getting me confused. My professor did it the following way: $$y_1[n]=h[n]*x_1[n]$$ $$y_2[n]=h[n]*x_2[n]$$ $$h[n]*(\alpha x_1[n]+\beta x_2[n])$$ $$y[n]=h[n]*\alpha x_1[n]+h[n]*\beta x_2[n]$$ Wasn't this insufficient to prove linearity? Wouldn't we have to test it with the actual values of $x[n]$ and $h[n]$? Answer: To show that a system $$y[n] = T\{x[n]\} $$ is linear, you have to show that it's additive and homogeneous, which is codified in $$T\{ a ~x_1[n] + b ~ x_2[n] \} = a ~T\{x_1[n]\} + b ~T\{x_2[n] \} $$ for all complex $a,b$ and all $x_1,x_2$. The property set above is satisfied for a system given by: $$y[n] = T\{x[n]\} = \sum_{k=-\infty}^{\infty} h[k] x[n-k] = h[n] \star x[n] $$ where $\star$ is the convolution operator and $h[n]$ is some sequence to be defined (incidentally it will be the impulse response of the given LTI system). Of course you didn't have to prove it, as it's already known that if the input-output relation of a system is given by a convolution sum, i.e. $y[n] = h[n] \star x[n]$, then it's by definition an LTI (linear and time-invariant) system. Indeed, the input-output relation of any LTI system is given by the convolution sum...
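A quick numerical check of the superposition property for this convolution system (assuming, as the notation suggests, that $u_0$ is the unit impulse and $u_1$ the unit step, so $h=[1,0.8,1.6]$ and $x_1=[1,1,1]$; $x_2$ is an arbitrary second input of my own choosing):

```python
def conv(h, x):
    """Direct-form discrete convolution, y = h * x."""
    y = [0.0] * (len(h) + len(x) - 1)
    for i, hv in enumerate(h):
        for j, xv in enumerate(x):
            y[i + j] += hv * xv
    return y

h = [1.0, 0.8, 1.6]     # u0[n] + 0.8 u0[n-1] + 1.6 u0[n-2]
x1 = [1.0, 1.0, 1.0]    # u1[n] - u1[n-3]: a three-sample pulse
x2 = [0.0, 2.0, -1.0]   # an arbitrary second input
a, b = 3.0, -2.0

mixed = [a * p + b * q for p, q in zip(x1, x2)]
lhs = conv(h, mixed)                                             # T{a x1 + b x2}
rhs = [a * p + b * q for p, q in zip(conv(h, x1), conv(h, x2))]  # a T{x1} + b T{x2}
ok = all(abs(p - q) < 1e-12 for p, q in zip(lhs, rhs))
print(ok)   # -> True: the convolution system is additive and homogeneous
```

Of course a numerical spot-check is not a proof; the professor's symbolic argument is the proof, and it needs no particular values of $x[n]$ or $h[n]$ because it holds for every sequence.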
{ "domain": "dsp.stackexchange", "id": 8313, "tags": "linear-systems" }
How does it make sense for the universe to have started from a big bang?
Question: It has been said that the Big Bang started from a singularity. Think about a balloon radially growing over time. Fix a time $t_0, t_1 > 0$, and let $M_0, M_1$ be two balloons at time $t_0, t_1$ respectively. I can find a two-parameter diffeomorphism $\phi(t_0, t_1): M_0 \rightarrow M_1$. However, I cannot find a diffeomorphism if I let $t_0 = 0$ and $t_1 > 0$, i.e. $\phi(0, t_1): \{*\} \rightarrow M_1$. In what sense should I interpret a homotopy between initial state (big bang) and final state (the current universe)? Is it even true that the Big Bang started from a singularity? Answer: The singularity at the start of the universe in the Big Bang model is not supposed to be understood as part of the smooth manifold of spacetime, precisely for this reason. The time function on spacetime does not actually assign a "point" to $t = 0$. It's undefined (otherwise spacetime wouldn't be a Lorentzian manifold), and the same is true if you take an FLRW universe and try to keep the initial spatial slice 3d - since the scale factor goes to zero, the manifold is not Lorentzian there. If you want to model the initial singularity of the Big Bang as part of spacetime, you need to consider more general models of spacetime than a Lorentzian manifold.
{ "domain": "physics.stackexchange", "id": 88268, "tags": "general-relativity, cosmology, spacetime, big-bang, singularities" }
Leetcode 14. Longest Common Prefix beats only ~50% of C++ solutions
Question: The task is to find the longest common prefix. While I had no trouble finding the solution, I'm interested in the statistics of my code: Runtime: 7 ms, faster than 56.36% of C++ online submissions for Longest Common Prefix. Memory Usage: 9.3 MB, less than 54.46% of C++ online submissions for Longest Common Prefix. Although these statistics are to be taken with a grain of salt: 08/06/2022 12:06 Accepted 8 ms 9.1 MB cpp 08/06/2022 12:06 Accepted 6 ms 9.3 MB cpp 08/06/2022 12:00 Accepted 7 ms 9.3 MB cpp So apparently my code can be optimized in 1 or 2 ways. How can I make it faster? And how can I make it more memory efficient? From what I see, I only allocate memory once, for the storage of the string I need to return. Granted, if the first string happens to be gargantuan and the other strings are small, this could be an issue, and reserving only the size of the minimum-length string would improve this. So maybe let's focus a bit more on the performance side. Try it on godbolt! #include <iostream> #include <vector> std::string f(std::vector<std::string>& strs){ std::string common = ""; common.reserve(strs[0].size()); for( std::size_t i = 0; i < strs[0].size(); ++i ){ for( auto const& str : strs ){ if( str.size() < i || str[i] != strs[0][i] ){ return common; } } common += strs[0][i]; } return common; } int main(){ std::vector<std::string> list = {"hh", "hhho", "hhh"}; std::cout << f(list) << "\n"; } Answer: Precalculate the size of the shortest string In the inner loop you are checking str.size() < i. Consider that the longest common prefix cannot be longer than the shortest string. 
I would first try to calculate the size of the shortest string, as this avoids the size check in the inner loop of the actual algorithm: std::string f(std::vector<std::string>& strs){ std::size_t min_size = SIZE_MAX; for (auto& str: strs) min_size = std::min(min_size, str.size()); for(std::size_t i = 0; i < min_size; ++i) for (auto& str: strs) if(str[i] != strs[0][i]) return str.substr(0, i); return strs[0].substr(0, min_size); } Memory access pattern The algorithm is quite simple, and I strongly suspect the bottleneck is how fast it can read the strings from memory. If you have a large number of strings which don't fit into the CPU cache, then the order in which you check things against each other might matter a lot. RAM access latency can be quite high, but this latency can be hidden by the CPU by looking at your memory access patterns, and prefetching memory based on that pattern. Contemporary CPUs can handle code reading and writing to a few areas in RAM simultaneously, but if for example you are comparing 100 strings, it will not be able to track that, and thus will either not prefetch (bad) or prefetch the wrong things (even worse). So it might be interesting to check only two strings against each other at a time, as that is much more likely to keep the prefetcher happy: std::string f(std::vector<std::string>& strs){ std::size_t min_size = SIZE_MAX; for (auto& str: strs) { min_size = std::min(min_size, str.size()); for (std::size_t i = 0; i < min_size; ++i) if(str[i] != strs[0][i]) { min_size = i; break; } if (min_size == 0) break; } return strs[0].substr(0, min_size); } However, this strategy might work slower than your solution if everything already fits into the cache. You could probably find an even better solution that does something in-between the two strategies: consider checking the first cache line worth of bytes from each string against each other, then the next cache line worth of bytes if necessary, and so on. 
Avoid checking a given string against itself Both in your code and in my code above, the inner loop will check all strings against the first string. But that includes checking the first string against itself, which is unnecessary. Check multiple characters in one go You are checking individual characters against each other, but on contemporary computers, the CPU typically has registers that are 64 bits wide, and can thus hold 8 characters. There are even vector registers that are larger; with AVX512 you can have 64 characters in one register! For long strings, this might allow you to speed up the algorithm substantially. Say you compare 8 characters at a time, stored in uint64_ts. Then if two of those uint64_ts are equal, you know that all 8 characters are the same, and you can skip to the next set of 8 characters. But if they are not equal, you can still quickly find which of the 8 characters was the first that is not equal by XORing both uint64_ts together, and using std::countl_zero() (since C++20) or ffs() on the result.
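The shortest-string bound translates directly to other languages; a Python sketch of the same idea (my own illustration, including the no-self-comparison point from the section above):

```python
def longest_common_prefix(strs):
    """Min-length-first version of the answer's first suggestion:
    the common prefix can never be longer than the shortest string,
    so bound the scan once instead of size-checking in the inner loop."""
    if not strs:
        return ""
    min_size = min(len(s) for s in strs)
    for i in range(min_size):
        ch = strs[0][i]
        for s in strs[1:]:          # skip strs[0]: no self-comparison
            if s[i] != ch:
                return strs[0][:i]
    return strs[0][:min_size]

print(longest_common_prefix(["hh", "hhho", "hhh"]))          # -> hh
print(longest_common_prefix(["flower", "flow", "flight"]))   # -> fl
```

The wide-register tricks from the last section have no direct Python analogue, but the structural improvements (single bound computation, no self-comparison) carry over as-is.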
{ "domain": "codereview.stackexchange", "id": 43708, "tags": "c++, performance, programming-challenge" }
Is BQP equal to BPP with access to an Abelian hidden subgroup oracle?
Question: Is BQP equal to BPP with access to an Abelian hidden subgroup oracle? Answer: Like many complexity-class separations, our best guess is that the answer is that BPP^{HSP} != BQP, but we can only prove this rigorously relative to oracles. This separation was observed by Scott Aaronson in this blog post where he observed that the welded-tree speedup of Childs, Cleve, Deotto, Farhi, Gutmann and Spielman was not contained in SZK. On the other hand, BPP^{HSP} is contained in SZK, at least if the goal is to determine the size of the hidden subgroup. This includes even the abelian HSP, although I'm not sure how exactly to find the generators of an arbitrary hidden subgroup in SZK. The reason we can decide the size of the hidden subgroup is that if f:G->S has hidden subgroup H, and we choose g uniformly at random from G, then f(g) is uniformly random over a set of size |G|/|H|. In particular, f(g) has entropy log|G| - log|H|. And entropy estimation is in SZK.
{ "domain": "cstheory.stackexchange", "id": 214, "tags": "complexity-classes, quantum-computing" }
Plot the sum frequency generation spectrum using convolution MATLAB
Question: I am attempting to calculate the spectrum of a pulse that has undergone sum-frequency generation (in this case it is a Gaussian, so it is correct to also say frequency doubling/second harmonic generation). The SHG signal in the frequency domain is given as $$E_{SHG}(2\omega) = E_1(\omega)*E_2(\omega)$$ Therefore a signal's SHG spectrum is just an autoconvolution of the original spectrum. However, I am unfamiliar with the practical use of discrete convolution and do not know how to transform the new x-axis into a suitable vector for plotting. clear all; close all; dt = 0.01; x = 200:dt:1000; %Frequency axis (THz) %Generate Stokes Profile width_stokes = 20; % center frequency f = 500; Es = exp(-(x-f).^2/width_stokes^2); Es=Es./max(Es); plot(x,Es); title("Stokes spectrum"); SHG = conv(Es,Es,'same'); SHG = SHG./max(SHG); figure % New x-axis for SHG plot x1 = (1:length(SHG)); plot(x1,SHG) xlabel('frequency (A.U.)') Answer: You have a Gaussian centered at 500 THz. We would expect the convolution to have a single Gaussian centered at 1000 THz. A linear convolution of two sequences of N points each will have a length of 2*N-1 samples. You have the added complication that your frequency vectors don't start at 0 Hz. One way to fix this would be to have them simply start at zero: dt = 0.01; x = 0:dt:1000; %Frequency axis (THz) starting at 0 N = length(x); freqAxisConvolution = dt*(0:2*N-2); Alternatively, you can just calculate the offset. If both vectors were unit impulses (starting at 200 THz), the convolution would be a unit impulse (starting at 400 THz), so the offset is simply the sum of the individual offsets.
In other words, if each vector spans from 200 THz to 2000 THz, the convolution will span from 400 THz to 4000 THz. Here is the full thing %% close all dt = 0.01; x = 200:dt:1000; %Frequency axis (THz) %Generate Stokes Profile width_stokes = 20; % center frequency f = 500; Es = exp(-(x-f).^2/width_stokes^2); Es=Es./max(Es); plot(x,Es); title("Stokes spectrum"); % discrete convolution produces 2*N-1 output samples SHG = conv(Es,Es,'full'); SHG = SHG./max(SHG); figure % X axis: spans the sum of the original axes N = length(Es); freqAxis = 2*x(1)+(0:2*N-2)*dt; plot(freqAxis,SHG) grid on xlabel('frequency in THz');
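The same bookkeeping in NumPy rather than MATLAB (a sketch, with a coarser dt than the question's so the direct convolution stays fast): 'full' mode returns the 2*N-1 samples, and the new axis starts at twice the original offset.

```python
import numpy as np

dt = 0.1                                   # coarser than the question's 0.01, for speed
x = np.arange(200, 1000 + dt, dt)          # frequency axis in THz, as in the question
Es = np.exp(-((x - 500) ** 2) / 20 ** 2)   # Gaussian centered at 500 THz

SHG = np.convolve(Es, Es, mode="full")     # linear convolution: 2N-1 samples
SHG = SHG / SHG.max()

N = len(Es)
freq_axis = 2 * x[0] + np.arange(2 * N - 1) * dt   # offsets add: starts at 400 THz

peak = freq_axis[np.argmax(SHG)]
print(round(peak, 6))   # -> 1000.0, the frequency-doubled line
```

Note that mode='same' (as in the original script) would crop the result to N samples centered on the output, which is why the doubled line seemed to sit in the wrong place: 'full' plus the shifted axis keeps everything consistent.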
{ "domain": "dsp.stackexchange", "id": 10820, "tags": "convolution" }
Fiducial based robot relocalization for AMCL
Question: I want to use a transform from a fiducial marker to re-localize my odometry in AMCL. https://github.com/UbiquityRobotics/fiducials is what I intend to use. The idea is to have a fixed fiducial on the map, demarcating the home location. Once the fiducial is detected, the robot's current position is relocalized based on the tf from the tracker. Originally posted by chrissunny94 on ROS Answers with karma: 142 on 2018-04-23 Post score: 0 Original comments Comment by jayess on 2018-04-23: What is your question? Comment by chrissunny94 on 2018-04-23: Once the robot sees an AR_tracker, it should relocalize. It's kind of like a 'homing' mechanism. When AMCL starts, it starts at some random coordinate, post which the odometry estimation is often wrong. I want to put in a correction mechanism. Comment by jayess on 2018-04-23: Ok, but what is your question? You're giving statements, but no question. Comment by chrissunny94 on 2018-04-25: @jarvisschultz, thanks, that solved the problem. I didn't see the initialpose before.
{ "domain": "robotics.stackexchange", "id": 30721, "tags": "navigation, ros-kinetic, amcl" }
Why is the square of the neutrino mass negative?
Question: Why is the square of the neutrino mass negative? In arXiv:hep-ph/0009291 this is explained by giving the example of: $$m^2_{\nu_e} = -2.5 \pm 3.3~\text{eV}^2 \tag{1}$$ "The negative value of the neutrino mass-square simply means:" $$E^2/c^2 -p^2=m^2_{\nu_e} c^2 < 0$$ "The right-hand side in Eq. (3) can be rewritten as ($-m_s^2 c^2$), then $m_s$ has a positive value." What is the meaning of $m_s$? This isn't explained in this article, and it makes no sense to me that the value for the square of a mass could be negative. A similar question has been posted in Negative Neutrino Mass squared and in Negative Mass Square; I do not understand the answers given. Answer: The mass squared of the electron neutrino obviously cannot be negative as then the neutrino would have an imaginary mass, so the obvious conclusion is that there is some unknown (probably systematic) error in the experiment. The latest measurements I am aware of are the results from the KATRIN experiment, and while they do still give a negative value $m^2 = -1.0_{-1.1}^{+0.9}~\text{eV}^2$ this is not significantly different from $0$. The paper you link is basically claiming the electron neutrino does in effect have an imaginary mass because it travels faster than light. This is not an explanation that most of us find convincing, especially since experimental measurements of the electron neutrino speed from the supernova 1987A give a result consistent with the speed of light, not faster than light. If you are interested to read a detailed discussion of the subject I recommend the paper Neutrino mass limit from tritium beta decay by E. W. Otten, C. Weinheimer. This gives a very detailed discussion of the tritium experiment and possible sources of error, though without coming to any firm conclusion. Note that this paper predated the KATRIN results, though it does discuss the KATRIN experiment.
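The kinematic point can be illustrated with made-up numbers (natural units, $c = 1$): whenever the fitted momentum exceeds the fitted energy, the inferred mass squared $m^2 = E^2 - p^2$ comes out negative.

```python
# Illustration only (hypothetical values in eV, natural units c = 1):
# the quantity extracted from the data is m^2 = E^2 - p^2, and a fit
# can push it below zero even though a physical mass squared cannot be.
E = 10.0   # fitted energy (made-up value)
p = 10.1   # fitted momentum (made-up value), slightly larger than E
m_squared = E**2 - p**2   # negative: "tachyonic" kinematics
```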
{ "domain": "physics.stackexchange", "id": 71316, "tags": "particle-physics, experimental-physics, faster-than-light, neutrinos, tachyon" }
Does measuring an observable $\hat{\theta}$ for a QM system in a state $|\psi\rangle$ preserve the expansion coefficients of $|\psi\rangle$?
Question: Consider an idealised hydrogen atom in the state $|\psi\rangle=\frac{1}{\sqrt{6}}(2|1,0,0\rangle-|2,1,0\rangle+|2,1,1\rangle)$ where $|n,l,m\rangle$ are the normalised eigenstates of the Hamiltonian $\hat{H}$ and of $\hat{\vec{L}}^{2}, \hat{L}_z$, where $\hat{\vec{L}}$ is the angular momentum operator. Assume now that we measure $\hat{L}_z$ with outcome $m=0$. What state is the system in after the measurement? Solution from my lecture notes: By the form of $|\psi\rangle$, the state after the measurement is $|\phi\rangle=\lambda(2|1,0,0\rangle-|2,1,0\rangle)$ with $\lambda$ the normalisation constant (so $\lambda=1/\sqrt{5}$). Question: Why are the $|n,l,m\rangle$ in the expansion of $|\phi\rangle$ the only ones that appear in the expansion of $|\psi\rangle$, and why are their coefficients preserved? Why not the other $|n,l,m\rangle$ with $n=1,2,\ldots$ and $l=0,1,\ldots,n-1$? I can see that their dot product with $|\psi\rangle$ vanishes and hence adding such a state to $|\phi\rangle$ does not change the probability of measuring this state. How do I continue? Answer: Strictly speaking, yes. As mentioned in the comments, a measurement can be modeled by the application of a projection operator onto some subspace of the Hilbert space corresponding to the measurement result. However, projection operators do not preserve normalization, so if $P$ projects onto the $m=0$ subspace, then $$P|\psi\rangle =\frac{1}{\sqrt{6}}\big(2|1,0,0\rangle-|2,1,0\rangle\big)$$ which clearly is no longer normalized. This isn't a problem, because physical states are only defined up to a multiplicative constant, meaning that $|\phi\rangle$ and $\lambda |\phi\rangle$ represent exactly the same physical state. In that sense, representing the state of your system as a normalized vector is just a useful convention which you don't need to follow.
If you do choose to follow that convention, then you can re-normalize your state after the projective measurement by multiplying by an overall normalization constant, in this case given by $\sqrt{6/5}$. Obviously that changes the coefficients, but not the physical state.
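The projection-then-renormalize bookkeeping can be sketched numerically (a toy model, not a general QM library):

```python
import math

# A state is a map from (n, l, m) quantum numbers to amplitudes.
psi = {(1, 0, 0):  2 / math.sqrt(6),
       (2, 1, 0): -1 / math.sqrt(6),
       (2, 1, 1):  1 / math.sqrt(6)}

# P projects onto the m = 0 eigenspace: drop every component with m != 0,
# leaving the surviving coefficients untouched.
projected = {k: v for k, v in psi.items() if k[2] == 0}

# The projected vector is no longer normalized ...
norm = math.sqrt(sum(v * v for v in projected.values()))
# ... so re-normalize; the overall constant here is sqrt(6/5).
phi = {k: v / norm for k, v in projected.items()}
```

The resulting coefficients are $2/\sqrt{5}$ and $-1/\sqrt{5}$, matching the lecture notes' $\lambda = 1/\sqrt{5}$.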
{ "domain": "physics.stackexchange", "id": 80547, "tags": "quantum-mechanics, measurements, eigenvalue, hydrogen, quantum-measurements" }
Efficient String permutations calculation in Java
Question: In trying to compute permutations of a given String, what can I do to further improve the code below for memory and/or time efficiency? import java.util.ArrayList; import java.util.List; public class Main { public static void main(String[] args) { permutateString("ABC"); } private static void permutateString(String str) { List<String> results = permutations(new StringBuilder(str), new ArrayList<>()); System.out.println("Number of permutations: " + results.size() + "\n" + "Permutations: \n" + results); } private static List<String> permutations(StringBuilder input, List<String> permutationsList) { if (input.length() == 0 || input.length() == 1) { permutationsList.add(input.toString()); } else { char prefix; StringBuilder substring; for (int i=0; i<input.length(); i++) { prefix = input.charAt(i); substring = new StringBuilder(input).deleteCharAt(i); for (String str : permutations(substring, new ArrayList<>())) { permutationsList.add(String.valueOf(prefix) + str); } } } return permutationsList; } } Answer: Why are you giving permutations a permutationsList parameter that you will both mutate and return? The purpose of this method is to generate the permutations of the given input; it shouldn't need to take any list as a parameter, and should just return the result. (Also, note that this method is only called by giving it a new array list, which hints that it isn't really needed). Also, why does it take a StringBuilder as a parameter? This method doesn't try to build a String with it; it just gets its length and the character at a given index. So pass it a String. Therefore, consider this instead: private static List<String> permutations(String input) { List<String> permutationsList = new ArrayList<>(); // ... return permutationsList; } And with this small change, we can clarify the code more. We would have List<String> permutationsList = new ArrayList<>(); if (input.length() == 0 || input.length() == 1) { permutationsList.add(input); } else { // ...
} return permutationsList; (We don't need to call toString() anymore since the argument is now a String). The first part handles the base condition of the recursion. Notice that it builds a new list, only to add a single element to it and return it directly. Instead, we can use the optimized Collections.singletonList(o) which directly returns a list containing the single given element. Now we can have: if (input.length() == 0 || input.length() == 1) { return Collections.singletonList(input); } List<String> permutationsList = new ArrayList<>(); // ... return permutationsList; No need for an else-clause, and the use of the early return saves one level of indentation and adds to clarity. Then, we can move on to the scope of the variables. So far, the code is: char prefix; StringBuilder substring; for (int i=0; i<input.length(); i++) { prefix = input.charAt(i); substring = new StringBuilder(input).deleteCharAt(i); // ... } Why declare the variables outside of the for loop? They are only used inside of it; variables should have the tightest scope possible. Instead, consider: for (int i=0; i<input.length(); i++) { char prefix = input.charAt(i); StringBuilder substring = new StringBuilder(input).deleteCharAt(i); // ... } which saves lines of code, makes sure prefix and substring have a minimal scope, and is overall clearer. Finally, the inner for loop calculates String.valueOf(prefix) each time, when the result will always be the same. char prefix = input.charAt(i); // ... for (String str : permutations(substring)) { permutationsList.add(String.valueOf(prefix) + str); } This could be done a single time and reuse the result with String prefix = String.valueOf(input.charAt(i)); // ...
for (String str : permutations(substring)) { permutationsList.add(prefix + str); } With all that, the end result is: private static List<String> permutations(String input) { if (input.length() == 0 || input.length() == 1) { return Collections.singletonList(input); } List<String> permutationsList = new ArrayList<>(); for (int i = 0; i < input.length(); i++) { String prefix = String.valueOf(input.charAt(i)); String substring = new StringBuilder(input).deleteCharAt(i).toString(); for (String str : permutations(substring)) { permutationsList.add(prefix + str); } } return permutationsList; } Other comments: permutateString is a wrong name. The word "permutate" doesn't exist, it should be "permute". But actually, even that would be a wrong name: the method doesn't actually permute the given String. It prints the permutations of it. The difference is that, with a method named permuteString, one would expect it to return the permutations, except this one doesn't, since it prints them. Consider renaming it printPermutationsOf. If you worry about the memory and time efficiency of the algorithm, then you're likely not using the right tool here. Building all the permutations will be memory intensive (with all the objects to build), and very slow (complexity is O(n!), where n is the length of the String). It is only practical for short Strings: there are already more than 3 million permutations for a String of length 10. Depending on the use-case, you will either want to filter some of them, or even use a completely different approach that doesn't require building any permutations.
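The complexity point is easy to verify empirically; a small Python check (used here just for brevity, not part of the reviewed Java) counts the permutations produced for growing inputs:

```python
import itertools
import math

# For a string of n distinct characters there are n! permutations,
# which is why building them all is only practical for short inputs.
for n in range(1, 8):
    s = "ABCDEFGHIJ"[:n]
    assert len(list(itertools.permutations(s))) == math.factorial(n)

# The answer's figure for length 10: already more than 3 million.
count_for_ten = math.factorial(10)
```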
{ "domain": "codereview.stackexchange", "id": 22114, "tags": "java, algorithm, strings, combinatorics" }
Mineral Hardness Scales
Question: I am doing research into lunar regolith's hardness in relation to abrasion of lunar tools and need to choose a hardness scale (either Mohs, or Rockwell A, B, or C). Is either of these metrics more legitimate/accurate than the other? Answer: Rockwell,Brinell, Knoop, Vickers, etc are for materials like metals that have at least a little ductility as the rely on plastic deformation. Mohs is for brittle minerals although it is somewhat qualitative. Abrasion of tools is not that simple anyway. For example : aluminum is softer than many metals but aluminum castings are very abrasive to cutting tools as the high silicon content combines with the aluminum to make hard abrasive particles in the soft matrix.
{ "domain": "engineering.stackexchange", "id": 2405, "tags": "mechanical-engineering, materials" }
ROS2 fails to start: 'failed to load any RMW implementations'
Question: I installed ROS Humble on a machine following the installation steps. Now, I want to help people install this version of ROS on different machines without having to follow the whole installation steps. My company policy unfortunately does not allow me to use Docker. Then I simply: created a target folder (ros2_depends) copied to ros2_depends all the dependencies installed on the first machine (chocolatey, CMake and OpenSSL-Win64 folders) copied this folder to the target machine with ROS2 updated the PATH and other variables such as CHOCO_PATH or OPENSSL_CONF to point to the ros2_depends folder. also copied C:\Python38 directly from the first machine to the new machine (as some ROS files like ros2-script.py assume Python is located in C:\Python38; no choice with this one, I could not make it part of ros2_depends) Then, when I try to run ROS2 (ros2 run demo_nodes_py listener), I get the error: Traceback (most recent call last): File "C:\ROS2\ros2-windows\lib\demo_nodes_py\listener-script.py", line 33, in <module> sys.exit(load_entry_point('demo-nodes-py==0.20.3', 'console_scripts', 'listener')()) File "C:\ROS2\ros2-windows\Lib\site-packages\demo_nodes_py\topics\listener.py", line 35, in main rclpy.init(args=args) File "C:\ROS2\ros2-windows\Lib\site-packages\rclpy\__init__.py", line 89, in init return context.init(args, domain_id=domain_id) File "C:\ROS2\ros2-windows\Lib\site-packages\rclpy\context.py", line 72, in init self.__context = _rclpy.Context( rclpy._rclpy_pybind11.RCLError: Failed to initialize init options: failed to load any RMW implementations, at C:\ci\ws\src\ros2\rmw_implementation\rmw_implementation\src\functions.cpp:125, at C:\ci\ws\src\ros2\rcl\rcl\src\rcl\init_options.c:75 [ros2run]: Process exited with failure 1 Any idea what could be wrong? Note: I originally posted this to stackoverflow, but I feel like it's more appropriate here. When answered, I'll update or close the stackoverflow post.
Answer: The problem was because OpenSSL-Win64's bin folder was not added to the PATH. Adding it to the PATH fixes the issue.
{ "domain": "robotics.stackexchange", "id": 38650, "tags": "ros2, installation" }
Retrieve Father and Child structure in a Database Table
Question: I have a table with this schema: CREATE TABLE [td].[MyTable] ( [ID] [int] NOT NULL ,[FatherID] [int] NULL ) (Note: I have excluded all the columns not relevant to the discussion) I receive an [ID] as input, and I need to collect the corresponding record and all of its fathers in one single output. Every record has the ID of its father stored in the [FatherID] column. There is only one root element, and it is recognized by [FatherID] = 0. This is the current full working query: IF OBJECT_ID('tempdb..#MyTempTable') IS NOT NULL DROP TABLE #MyTempTable DECLARE @ID CHAR(11) SET @ID = 13192 SELECT * INTO #MyTempTable FROM [MyTable] WHERE [ID] = @ID SELECT @ID = [FatherID] FROM #MyTempTable WHERE [ID] = @ID WHILE @ID <> 0 BEGIN INSERT INTO #MyTempTable SELECT * FROM [MyTable] WHERE [ID] = @ID SELECT @ID = [FatherID] FROM #MyTempTable WHERE [ID] = @ID END SELECT * FROM #MyTempTable ORDER BY [ID] The performance is fine, but I want to improve the readability. Any suggestions? Answer: Firstly, you use both the SELECT [XYZ] INTO [ABC] and the INSERT INTO [ABC] SELECT [XYZ] syntax in the same query; using one would make the query much more consistent and easier to read. Personally, I prefer using INSERT INTO [ABC] SELECT [XYZ]; you do have to explicitly create the temporary table though, but that usually makes it clearer what is happening. If you'd prefer, you can change DECLARE @ID CHAR(11) SET @ID = 13192 to DECLARE @ID CHAR(11) = 13192 And you should really explicitly drop the #MyTempTable at the end of the query. Lastly, the biggest change I would suggest is to completely remove the selects from outside of the while; they are completely redundant. If you remove those selects from outside the while loop, then the first iteration of the while does exactly the same as the selects on the outside would have done.
Here is what we end up with: IF OBJECT_ID('tempdb..#MyTempTable') IS NOT NULL DROP TABLE #MyTempTable DECLARE @ID CHAR(11) = 13192 CREATE TABLE #MyTempTable(ID INT, FatherID INT) WHILE @ID <> 0 BEGIN INSERT INTO #MyTempTable SELECT * FROM [MyTable] WHERE [ID] = @ID SELECT @ID = [FatherID] FROM #MyTempTable WHERE [ID] = @ID END SELECT * FROM #MyTempTable ORDER BY [ID] DROP TABLE #MyTempTable Here is an SQL Fiddle.
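The loop's logic is language-agnostic; a minimal Python sketch of the same ancestor walk (made-up IDs, with 0 as the root sentinel) can help when reasoning about the query:

```python
# Sketch of what the WHILE loop computes (Python for brevity, made-up rows):
# starting from an input ID, follow FatherID links until the root
# sentinel 0 is reached, collecting every record along the way.
father = {13192: 512, 512: 87, 87: 0}   # hypothetical ID -> FatherID rows

def ancestors(start, father):
    chain = []
    current = start
    while current != 0:
        chain.append(current)
        current = father[current]
    return chain
```

In T-SQL itself, the same walk is often written as a recursive common table expression, which removes the explicit WHILE loop entirely.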
{ "domain": "codereview.stackexchange", "id": 9176, "tags": "sql, tree, sql-server" }
Calculating the Electric Potential through Computational Physics
Question: I am working on a computational physics project which focuses on finding both the electric potential and field of certain surfaces via the relaxation method. Things are going quite well; however, I worry that the program I created is not actually performing the method. For example: Consider a 7x7 metal box, whose potential is -1 on one face and 1 on the opposite. There is no charge, so the partial differential equation (PDE) satisfies Laplace's equation. I wrote the program (C++) in such a way that the PDE can be solved as a two dimensional problem. Here are the segments of the program which create the array, set the boundary conditions, and perform the relaxation method.
const int N = 7; // assign value for numerical array
// create arrays
double V[N][N];  // final solution
double Vn[N][N]; // initial guess
double Vdel = 0; // difference
fstream outputFile;
int i, j = 0;
// initial values for V
for (i = 0; i < N; i++) {
    for (j = 0; j < N; j++) {
        V[i][j] = 0;
    }
}
// Give boundary condition for V
for (i = 0; i < N; i++) {
    for (j = 0; j < N; j++) {
        V[i][0] = -1;
        V[i][6] = 1;
    }
}
// Relaxation method
for (int iter = 0; iter < 10 && Vdel < 0.00001; iter++) {
    for (i = 0; i < N; i++) {
        for (j = 1; j < N-1; j++) {
            Vn[i][j] = (V[i+1][j] + V[i-1][j] + V[i][j+1] + V[i][j-1])*0.25; // main calculation
            Vdel += (V[i][j] - Vn[i][j]); // updates Vdel
        }
    }
    for (i = 0; i < N; i++) {
        for (j = 1; j < N-1; j++) {
            V[i][j] = Vn[i][j]; // updates V
        }
    }
}
The final output I get for the potential is:
-1, -0.508618, -0.173784, 0.0958632, 0.339737, 0.48976, 1
-1, -0.599484, -0.254206, 0.0505583, 0.335934, 0.619303, 1
-1, -0.635114, -0.294113, 0.0246417, 0.334137, 0.65152, 1
-1, -0.64686, -0.311775, 0.00798507, 0.324452, 0.652638, 1
-1, -0.640549, -0.314112, -0.00537884, 0.303049, 0.634579, 1
-1, -0.601225, -0.298744, -0.0184381, 0.258545, 0.582629, 1
-1, -0.465607, -0.261202, -0.0281748, 0.166941, 0.437392, 1
When in actuality it should equal something like
-1, -.67, -.33, 0, .33, .67, 1
-1, -.67, -.33, 0, .33, .67, 1
-1, -.67, -.33, 0, .33, .67, 1
-1, -.67, -.33, 0, .33, .67, 1
-1, -.67, -.33, 0, .33, .67, 1
-1, -.67, -.33, 0, .33, .67, 1
-1, -.67, -.33, 0, .33, .67, 1
Any thoughts? It would be greatly appreciated if anyone had any advice on the topic. P.S. (Apologies if this is the wrong forum to post this question; let me know and I can correct it immediately) Answer: You do not seem to have specified the boundary conditions along the other two sides of the square. Without this information the solution is not unique.
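A quick way to see the answer's point is to redo the relaxation with all four boundaries specified; a minimal Python/NumPy sketch (an assumed setup, not the asker's exact C++) that also holds the top and bottom edges at the expected linear ramp converges to the uniform gradient:

```python
import numpy as np

N = 7
ramp = np.linspace(-1.0, 1.0, N)   # -1, -2/3, -1/3, 0, 1/3, 2/3, 1
V = np.zeros((N, N))
V[:, 0], V[:, -1] = -1.0, 1.0      # the two faces the asker fixed
V[0, :], V[-1, :] = ramp, ramp     # the two sides that were left floating

# Jacobi relaxation: each interior point becomes the average of its
# four neighbours, repeated until the grid stops changing.
for _ in range(500):
    Vn = V.copy()
    Vn[1:-1, 1:-1] = 0.25 * (V[2:, 1:-1] + V[:-2, 1:-1]
                             + V[1:-1, 2:] + V[1:-1, :-2])
    V = Vn
```

With the boundary fully specified, every row approaches -1, -2/3, -1/3, 0, 1/3, 2/3, 1.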
{ "domain": "physics.stackexchange", "id": 84444, "tags": "electrostatics, computational-physics" }
Stored Procedure calculating employee earnings
Question: I have the following stored procedure in a C# win forms application which calculates employee earnings based on attendance as follows. Note that a shift is 12 hours and employees mark attendance for in and out of each shift. Also, the salary period is from beginning to end of a month (1st to 28th / 30th / 31st). Related tables are: Employee (emp_id, initials, surname, basic_sal, budj_allowance) Attendance (emp_id, in_time, out_time, shift) Rank (rank_id, shift_rate) Calculations Work Days - This is the number of days a particular employee has worked, and this value is taken from the Attendance table. Day Offs - An employee is entitled to a maximum of 4 day offs per month, and if more than four days have been taken, the remaining days will be marked as "Leave days". No of Extra Shifts - This value is taken from this formula: [Total Shifts - total days worked] Basic Salary - This is taken from the employee master table Budgetary Allowance - All employees are paid Rs.1,000/- as budgetary allowance No Pay Days - This is calculated from the formula [(No of days in the month-04) - days worked] Less No Pay Amount - This is calculated from the formula [((Basic Salary + Budgetary Allowance) / (No of Days in the month-04)) x No Pay Days] Amount for the EPF - This is calculated from the formula [Basic Salary + Budgetary Allowance - Less No Pay Amount] Overtime Amount - This is calculated from the formula [Amount for the EPF - (Extra Shift Rate x Work Days)] CREATE PROCEDURE [dbo].[sp_Earnings] @fromDate datetime, @toDate datetime -- Add the parameters for the stored procedure here AS BEGIN -- Declaring a variable to hold the no of days in the month. DECLARE @No_of_days int SELECT @No_of_days = DATEDIFF(day,@fromDate,DATEADD(day,1,(@toDate))) -- Declaring a constant to hold no of off days allowed in a month DECLARE @Day_offs_allowed int SELECT @Day_offs_allowed=4 --This is a reference to identify month and year of every record.
example - **"APR2014"** DECLARE @SalRef char(20) SELECT @SalRef= REPLACE(STUFF(CONVERT(varchar(12),CONVERT(date,@fromDate,107),106),1,3,''),' ','') -- SET NOCOUNT ON added to prevent extra result sets from -- interfering with SELECT statements. SET NOCOUNT ON; -- Insert statements for procedure here SELECT Employee.Emp_ID, Employee.Initials + ', ' + Employee.Surname AS Name, COUNT(DISTINCT CONVERT(DATE, Attendance.in_time)) AS work_days, CASE WHEN (@No_of_days - (COUNT(DISTINCT CONVERT(DATE, Attendance.in_time))) >= @Day_offs_allowed) THEN @Day_offs_allowed ELSE (@No_of_days - (COUNT(DISTINCT CONVERT(DATE, Attendance.in_time)))) END AS day_offs, CASE WHEN (@No_of_days - (COUNT(DISTINCT CONVERT(DATE, Attendance.in_time))) >= @Day_offs_allowed) THEN @No_of_days - (COUNT(DISTINCT CONVERT(DATE, Attendance.in_time))) - @Day_offs_allowed ELSE 0 END AS leave_days, COUNT(Attendance.shift) - COUNT(DISTINCT CONVERT(DATE, Attendance.in_time)) AS extra_shifts, Rank.Shift_Rate, (COUNT(Attendance.shift) - COUNT(DISTINCT CONVERT(DATE, Attendance.in_time)))* rank.Shift_Rate AS Extra_Shift_Amount, employee.Basic_Sal, employee.budj_allowance, (@No_of_days-@Day_offs_allowed)- COUNT(DISTINCT CONVERT(DATE, Attendance.in_time)) AS no_pay_days, CONVERT(DECIMAL(10,2),(((employee.basic_sal+employee.budj_allowance) / (@No_of_days-@Day_offs_allowed) )) * ((@No_of_days-@Day_offs_allowed)- COUNT(DISTINCT CONVERT(DATE, Attendance.in_time)))) AS less_no_pay_amt, employee.basic_sal+employee.budj_allowance-CONVERT(DECIMAL(10,2),((employee.basic_sal+employee.budj_allowance) / (@No_of_days-@Day_offs_allowed) ) * ((@No_of_days-@Day_offs_allowed)- COUNT(DISTINCT CONVERT(DATE, Attendance.in_time))))AS amt_for_epf, CONVERT(DECIMAL(10,2),((Rank.Shift_Rate*(COUNT(DISTINCT CONVERT(DATE, Attendance.in_time))))-((((employee.basic_sal+employee.budj_allowance)-(((employee.basic_sal+employee.budj_allowance) / (@No_of_days-@Day_offs_allowed)) * (@No_of_days-@Day_offs_allowed- COUNT(DISTINCT CONVERT(DATE, 
Attendance.in_time))))))))) AS over_time_amt, @salRef AS Reference FROM Employee INNER JOIN Attendance ON Employee.Emp_ID = Attendance.EID INNER JOIN Point ON Attendance.PID = Point.PID INNER JOIN Rank ON Employee.Rank = Rank.Rank_ID WHERE Attendance.in_time BETWEEN CONVERT(DATETIME, @fromDate, 102) AND CONVERT(DATETIME, @toDate, 102) GROUP BY Employee.Emp_ID, Employee.Initials + ', ' + Employee.Surname, Rank.Shift_Rate, Employee.Basic_Sal, Employee.budj_allowance ORDER BY Employee.Emp_ID END Questions: Can this be further optimized? Are there any notable flaws? Is a stored procedure suitable for this requirement? Answer: Here are my thoughts on your proc. The Good: Good job commenting the sections that are not so obvious to figure out. I had very little difficulty understanding it; your code is much cleaner than the average SQL post on this site. I would say a stored procedure is the correct type of database object for this, since it sounds like it will be called regularly based on its nature. Improvements: Even though the SQL engine is set up so that you don't always need to use the delimiter ;, it is good practice to use it explicitly. DECLARE @No_of_days int becomes DECLARE @No_of_days int; etc. throughout. To avoid errors when creating a proc, it is a good idea to DROP PROCEDURE IF EXISTS [dbo].[sp_Earnings]; for instance. This may just be personal preference, but I think SET is less ambiguous for setting variables, so SELECT @Day_offs_allowed=4; would become SET @Day_offs_allowed=4;. This helps to differentiate them from nested queries and such. I think this is a bit odd: SELECT @SalRef= REPLACE(STUFF(CONVERT(varchar(12),CONVERT(date,@fromDate,107),106),1,3,''),' ',''). I would be tempted to instead use SET @SalRef = CONCAT( DATEPART(Mm, @fromDate), '/', DATEPART(Yy, @fromDate) ). This reference will look slightly different, e.g. "04/2014", but achieves the same purpose more elegantly, plus you can sort the references numerically more easily. A point on formatting.
When using long nested statements it is good practice to use line breaks and tabs to make them easier to read. For example: (COUNT(Attendance.shift) - COUNT(DISTINCT CONVERT(DATE, Attendance.in_time)))* rank.Shift_Rate AS Extra_Shift_Amount Becomes: (COUNT (Attendance.shift) - COUNT( DISTINCT CONVERT(DATE, Attendance.in_time) ) )* rank.Shift_Rate AS Extra_Shift_Amount, Other than that, I think your code is good.
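The arithmetic in the question's "Calculations" list can be sanity-checked outside the database; a small Python sketch with made-up figures (not real payroll data):

```python
# Hypothetical inputs for a 30-day month (all values made up):
days_in_month = 30
day_offs_allowed = 4
work_days = 24
basic_sal = 30000.0
budj_allowance = 1000.0   # the flat Rs. 1,000 budgetary allowance

# Formulas from the question's "Calculations" section:
payable_days = days_in_month - day_offs_allowed            # 26
no_pay_days = payable_days - work_days                     # 2
less_no_pay = (basic_sal + budj_allowance) / payable_days * no_pay_days
amt_for_epf = basic_sal + budj_allowance - less_no_pay
```

Cross-checking a few such hand-computed rows against the proc's output is a cheap way to catch off-by-one errors in the date arithmetic.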
{ "domain": "codereview.stackexchange", "id": 8064, "tags": "performance, sql, sql-server, stored-procedure" }
What exactly is a Fluorescent lamp?
Question: A fluorescent tube (home-based) works on the principle of discharge of electricity through gases, as far as I can tell (I don't know much about cathode rays or gas discharge). What happens inside the tube to create this pink color? Wikipedia says that it is because the mercury vapor is absorbed and its spectrum is changed. Do all fluorescent tubes (especially Compact Fluorescent Lamps, which also use mercury) have this same pink color when they are dying?
{ "domain": "physics.stackexchange", "id": 4538, "tags": "everyday-life, atomic-physics" }
How to apply two notch filters simultaneously
Question: I have a sound file that I need to apply a notch filter to in Matlab. Comparing to the original file, I have two frequencies which are the noise. Applying a notch filter like this one: wo = 1750/(44100/2); bw = wo/35; [b,a] = iirnotch(wo,bw); fvtool(b,a); f1 = filter(b, a, f); This will attenuate the frequency at 1750 Hz. However, there is one more frequency that is causing trouble at 820 Hz. What I can do is reapply the same notch to the "new" version: wo = 820/(44100/2); bw = wo/35; [b,a] = iirnotch(wo,bw); fvtool(b,a); f2 = filter(b, a, f1); and f2 will be my clean file. However, I need to have one vector "b" and one vector "a" that will clean all the noise in one go. Can I apply two notches simultaneously? I am very new to this kind of stuff. Thanks. Answer: You want to cascade the two filters, which is equivalent to convolving the $a$ and $b$ coefficients of the individual notch filters to obtain the coefficients of the resulting filter with two notches.
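The cascade-by-convolution step can be sketched in Python/SciPy (parameter mapping from the Matlab call is assumed: bw = wo/35 corresponds to a quality factor Q = 35):

```python
import numpy as np
from scipy import signal

fs = 44100.0
b1, a1 = signal.iirnotch(1750.0, Q=35.0, fs=fs)  # first notch
b2, a2 = signal.iirnotch(820.0, Q=35.0, fs=fs)   # second notch

# Cascading two LTI filters multiplies their transfer functions,
# i.e. convolves their coefficient vectors:
b = np.convolve(b1, b2)
a = np.convolve(a1, a2)
# f2 = signal.lfilter(b, a, f)   # one pass now removes both tones
```

The combined (b, a) then behaves like running the two filter calls back to back, with zeros of the frequency response at both notch frequencies.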
{ "domain": "dsp.stackexchange", "id": 7077, "tags": "matlab, filter-design, infinite-impulse-response" }
What makes diazo compounds so unstable and explosive?
Question: I once had an Orgo TA refer to a diazo compound as "diazo-boom-boom" (the technical term). I have always been curious as to the reason behind the instability and reactivity. According to Wikipedia Some of the most stable diazo compounds are α-diazoketones and α-diazoesters since the negative charge is delocalized into the carbonyl. In contrast, most alkyldiazo compounds are explosive What is it about the alkyldiazos that makes them so much more unstable? There doesn't appear to be any bond strain or other factors. Naively, I can say that having a resonance structure should make it marginally more stable. Answer: Well, your question is equivalent to “what is it about α-diazoketones that makes them so much more stable?”, which is easier to see. Compared to an alkyldiazo, the α-diazoketone has a resonance structure in which the negative charge goes to the ketone’s oxygen (and far away from the positively-charged nitrogen atom): Because the oxygen is a quite electronegative element, the resonance form is quite stable and explains the extra stability of α-diazoketones. It is for the same reason that the protons in position α to the ketones are always more acidic than alky chain protons. Coming back to alkyldiazo compounds, you have to realize that merely being able to write a resonance structure does not intrinsically imply stabilization: the resonance structure has to have some intrinsic stability factor. In the alkyldiazo, the resonance form you wrote is a carbanion, which is considered quite unfavourable unless it has a further stabilizing factor. Moreover, the most common reactions gives N2, which is a very stable compound… the reaction is thermodynamically very favourable.
{ "domain": "chemistry.stackexchange", "id": 22, "tags": "organic-chemistry, reaction-mechanism, explosives" }
A static array implementation in C++
Question: I'm implementing a basic array data structure with basic functionalities. #include <iostream> #include "barray.h" BArray::BArray(int init_size) { b_array = new int[init_size](); array_size = init_size; } BArray::BArray(int init_size, int init_val) { b_array = new int[init_size]; array_size = init_size; for(int i = 0; i < init_size; ++i) b_array[i] = init_val; } BArray::BArray(const BArray & rhs) { array_size = rhs.array_size; b_array = new int[array_size]; for(int i = 0; i < array_size; ++i) b_array[i] = rhs[i]; } BArray::~BArray() { delete [] b_array; } int BArray::getSize() const { return array_size; } int BArray::operator[](int index) const { return *(b_array + index); } int& BArray::operator[](int index) { return *(b_array + index); } BArray& BArray::operator=(const BArray& rhs) { if(this == &rhs) return *this; array_size = rhs.array_size; delete [] b_array; b_array = new int[array_size]; for(int i = 0; i < array_size; ++i) b_array[i] = rhs[i]; return *this; } std::ostream& operator<< (std::ostream& out, const BArray& arr) { for(int i = 0; i < arr.array_size; ++i) out << arr[i] << " "; return out; } And the header file #ifndef BARRAY_H #define BARRAY_H class BArray { public: BArray() = delete; //Declare the default constructor as deleted to avoid //declaring an array without specifying its size. BArray(const int init_size); BArray(int init_size, int init_val); //Constructor that initializes the array with init_val. BArray(const BArray & rhs); //Copy constructor. ~BArray(); //Destructor. int operator[](int index) const; //[] operator overloading for "reading" index value. int& operator[](int index); //[] operator overloading for "setting" index value. BArray& operator=(const BArray& rhs); //Copy assignment operator. //Utility functions. int getSize() const; //Friend functions. 
friend std::ostream& operator<< (std::ostream& out, const BArray& arr); private: int* b_array; int array_size; }; #endif // BARRAY_H Can you please give me a feedback on what is missing, what is wrong and what is good? I mean in terms of memory allocation, operators overloading, etc... Is this is the best way this class can be implemented? Edit: This is how I tested the code #include <iostream> #include "barray.h" int main() { //BArray invalid_instance; //Default constructor is deleted. BArray barr(5); //Declaring an array of size 5, this initializes all values to zeros. std::cout << barr[2] << std::endl; barr[2] = 15; //Setting index 2 to 15. std::cout << barr[2] << std::endl; //Reading out value of index 2. BArray anotherArray(barr); //Copy constructor. std::cout << anotherArray[2] << std::endl; anotherArray[3] = 8; BArray assignArray = anotherArray; //Copy assignment operator. std::cout << assignArray[2] << std::endl; std::cout << assignArray; //Printing out array values. return 0; } Answer: The default-ctor won't be implicitly declared as there are user-declared ctors. When you can define a default-ctor with reasonable behavior, consider doing so. If you use in-class initializers to 0 resp. nullptr for the members, it can even be explicitly defaulted, making the class trivially default-constructible. Top-level const on arguments in a function declaration is just useless clutter. Consider investing in move-semantics to avoid costly copies. If the allocation in one of the ctors throws, your dtor will be called on indeterminate members, which is undefined behavior. Use mem-initialisers or pre-init b_array to nullptr to fix it. Your copy-assignment also has pathological behavior in the face of exceptions. That aside, it pessimises the common case in favor of self-assignment. Read up on the copy-and-swap idiom. As a bonus, you get an efficient swap() out of it. Using a std::unique_ptr for the member would allow you to significantly simplify the memory-handling. 
Keep to the common interface-conventions. Failure to follow them makes generic code nigh impossible. Specifically, .getSize() should be .size(). You are missing most of the expected interface, specifically iterators (normal, constant, reverse), the related typedefs, and .data(). Better names for the private members would be array and count.
{ "domain": "codereview.stackexchange", "id": 34528, "tags": "c++, array, memory-management" }
BCS wave function in Neutron stars
Question: I've heard mentioned in various classes that neutron stars, like superconductors, are described by BCS theory. I know that in superconductors a key element in forming Cooper pairs is a net attractive force between the electrons which would normally repel one another. That attractive force is accounted for via lattice vibrations (phonons) created and "absorbed" by electrons. So my question is: what provides the attractive force between neutrons? Just gravity? If it is true that neutron stars follow BCS theory, by what means was someone able to verify that? Answer: I'm afraid the actual situation is much more complicated than you've been told. For one thing, the superconductivity does not occur between neutrons, but between quarks themselves. The topic of high density QCD is a very cool interplay of condensed matter and high energy physics, and a very nice review is available by Frank Wilczek. However, that article does need some background in QCD and superconductivity simultaneously to appreciate. A shortened version might go something like this: Inspiration: free fermions are incredibly unstable to superconductivity, in that any attractive interaction will cause it (in fact, there's an old theorem by Peierls (?) that almost all interactions (even repulsive ones) will cause superconductivity if you cool far enough). In QCD, quarks naturally attract already! So at sufficiently high density and low temperature, we can imagine that QCD can cause a strong attractive instability to a Fermi gas of quarks. Complications: 6 flavours, chiralities, masses of quarks are different, etc. Simplification by complication: realise that the normal state of QCD (i.e. 3- and 2-quark combinations) is just that: one possible state. Other phases of QCD exist, and we can study the phase boundaries and so forth even if we can't compute things exactly (universality saves the day!).
We find that at really high densities, quarks pair up to give a background diquark condensate, through which single quarks move and, via the Anderson-Higgs mechanism, gain a large mass by eating some Goldstone modes. All gluons become gapped (again, Anderson-Higgs), apart from one which mixes with the photon. The symmetries of this solution are actually the same as those of normal matter --- replace baryons with quarks (+ their diquark condensed background) and mesons with diquarks; this suggests that they are really the same phase in theory. In practice, getting from one to another requires some other phases in the middle, which are more complicated and arise due to the quark masses and the number of flavours, etc.
{ "domain": "physics.stackexchange", "id": 1154, "tags": "cosmology, condensed-matter, superconductivity" }
starter physical robot base recommendations?
Question: I am looking to graduate from a simulator to actual physical hardware with ROS support. Could anyone please recommend popular (and cheaper) options for a base? I want to get a base where I could implement and experiment with slam/navstack. Pioneer P3 DX (USD 4K) and Turtlebot (USD 2K) are on the expensive side for an individual starter. Originally posted by hmrobo on ROS Answers with karma: 1 on 2017-10-15 Post score: 0 Original comments Comment by gvdhoorn on 2017-10-16: I'm not sure, but I feel this is more suitable for some discussion (as there isn't any single answer that is the answer). Perhaps a post on discourse.ros.org would be better? Answer: How about the TurtleBot3 Burger? It is about $600. Some useful links: http://www.robotis.us/turtlebot-3/ http://turtlebot3.robotis.com/en/latest/ Originally posted by Martin Peris with karma: 5625 on 2017-10-15 This answer was ACCEPTED on the original site Post score: 0 Original comments Comment by hmrobo on 2017-10-15: Thanks, Martin. That is interesting but still on pre-order/Nov delivery currently. Wondering if that's the cheapest or are there any other options including DIY.
{ "domain": "robotics.stackexchange", "id": 29086, "tags": "ros, turtlebot, pioneer-3dx" }
catkin-cmake-isolated fails to build moveit_core
Question: I'm trying to compile moveit_core with catkin_make_isolated but it failed because it could not find a config file for fcl. -- catkin 0.5.74 CMake Error at /opt/ros/groovy/share/catkin/cmake/catkinConfig.cmake:72 (find_package): Could not find a package configuration file provided by "fcl" with any of the following names: fclConfig.cmake fcl-config.cmake I have already installed fcl, as fcl.pc is present in /usr/local/lib/pkgconfig. Is there a way to use it instead of the cmake config file? Originally posted by Fabien R on ROS Answers with karma: 90 on 2013-10-18 Post score: 0 Answer: You need to make sure that fcl has installed one of the two .cmake files into the install location, and that the location is on the CMAKE_PREFIX_PATH correctly if you want to use a non-standard location. Originally posted by tfoote with karma: 58457 on 2013-10-19 This answer was ACCEPTED on the original site Post score: 0 Original comments Comment by Fabien R on 2013-10-20: Finally, I added a cmake file based on FindPkgConfig.cmake to make it work. Comment by silentwf on 2014-10-29: @Fabien R Could you post your cmake file and where you placed it? I'm also facing the same problem as you are. Comment by Fabien R on 2015-01-22: It seems that I copied the file to /opt/ros/groovy/share/fcl/cmake and renamed it as fcl-config.cmake. But a cleaner way is to build fcl source as a package (with check_install for instance) and to use: find_package(PkgConfig) pkg_check_modules(FCL fcl) include_directories( ${FCL_INCLUDE_DIRS})
{ "domain": "robotics.stackexchange", "id": 15911, "tags": "ros, moveit, compilation, debian, ros-groovy" }
Non-interchangeability of time-like intervals
Question: I am reading Landau's Volume 2 of the course of theoretical physics. I have a doubt after reading the first few pages of it which I explain below. Landau first defines intervals and on pages 5 and 6 shows that two events having a time-like interval between them can never occur simultaneously in any reference system. Then he goes on to construct a 2D space-time graph (for visualization) with an event O occurring at (0,0,0,0). Then he considers any event which occurs in the future in that frame and is time-like w.r.t. the origin, and says on page no. 7, But two events which are separated by a time-like interval cannot occur simultaneously in any reference system. Consequently, it is impossible to find a reference frame in which any of the events in region aOc occurred "before" the event O, i.e. at time t<0. The argument above only proves that the interval squared must be positive, i.e. that the events can't be simultaneous. But if I replace the time difference in the original frame with its negative in my proposed frame, and let the spatial distance between the events be the same in both frames, then I get in my proposed frame an interval which is still time-like, but in which the order of the events is reversed. Am I making some gross error, or has Landau missed some argument? Answer: If you believe that (a) timelike separated events cannot be simultaneous in any reference frame, and (b) the set of inertial frames is (in some appropriate sense) a continuous set, then L&L's conclusion follows. After all, if there were two frames in which the order of two timelike separated events differed, then by continuously transforming one frame into another, you could find one in which they were simultaneous. But without some such additional assumption, you're right that the conclusion doesn't logically follow. There are coordinate systems that preserve the spacetime interval but flip the direction of time, such as the substitution $t\to -t$ that you mention.
As BebopbutUnsteady observes in a comment to Karsus Ren's answer, we often use the term "orthochronous Lorentz transformation" to refer to a transformation that preserves the direction of time. The full group of Lorentz transformations (i.e., of all transformations that preserve the interval) includes both orthochronous and non-orthochronous components, which are not connected to each other. Physically, we usually only consider the orthochronous ones. You do have to be careful with the terminology: sometimes people use "Lorentz transformation" to mean just the orthochronous ones; sometimes it's the full group. By the way, pretty much the same thing applies to spatial reflections: is $x\to -x$ (leaving $y,z,t$ unchanged) a Lorentz transformation? After all, it preserves the spacetime interval. Often we refer to non-reflecting Lorentz transformations as "proper." So when people are being careful with their terminology they often refer to "proper orthochronous Lorentz transformations."
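To make the $t\to -t$ example fully explicit, here is the one-line check (a small sketch in the notation $\Delta t = t_2 - t_1$, $\Delta\vec{x} = \vec{x}_2 - \vec{x}_1$):

```latex
% Time reversal preserves the spacetime interval but flips the sign of
% every time difference, so it reverses the temporal order of events:
s^2 \;=\; c^2\,\Delta t^2 - |\Delta\vec{x}|^2
\;\;\xrightarrow{\;t\,\to\,-t\;}\;\;
c^2\,(-\Delta t)^2 - |\Delta\vec{x}|^2 \;=\; s^2 ,
\qquad
\Delta t \;\longrightarrow\; -\Delta t .
```

So invariance of the interval alone does not fix the time order; it is the restriction to orthochronous transformations, motivated by the continuity argument, that does.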
{ "domain": "physics.stackexchange", "id": 1280, "tags": "special-relativity" }
salt noise in an image
Question: Suppose an image has noise, but it is exclusively salt noise. What effect will dilation have? I am looking for a reference so I can answer this question. Does anyone have any idea? Answer: To answer your 1st question, dilation will enlarge the white spots in your image over the darker spots, which may be useful if you only have tiny dark specks (pepper noise) you would like to remove in your image. However, you are referring to salt noise, so if the spots were white crumbs in a dark image, then all you are doing is making the crumbs bigger, which is what you don't want. What I would recommend is to use either opening or closing (or both), depending on your application. These are a combination of dilations and erosions so that the minor parts of the image are removed while keeping the original portions of the image intact. For example, here is an image for a penguin (with salting): Notice how the image has both black and white salting. Running both an opening and closing algorithm (OpenCV) on this penguin, we remove the black and white salts, and get a nice, smoother image without the spots: Also, here is my OpenCV code that does this: #include "opencv2/highgui/highgui.hpp" #include "opencv2/imgproc/imgproc.hpp" #include <iostream> using namespace cv; using namespace std; int main( ) { Mat image,dst; image = imread("saltnoise.png", CV_LOAD_IMAGE_GRAYSCALE); // Create a structuring element int erosion_size; cout << "What element size?" << '\n'; cin >> erosion_size; imshow("Salted", image); Mat element = getStructuringElement(cv::MORPH_ELLIPSE, cv::Size(2 * erosion_size + 1, 2 * erosion_size + 1), cv::Point(erosion_size, erosion_size) ); //Opening and Closing erode(image,image,element); dilate(image,image,element); dilate(image,image,element); erode(image,image,element); imshow("Unsalted", image); waitKey(0); return 0; }
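To see the answer's point concretely, here is a minimal pure-Python sketch (no OpenCV; the 7x7 test image, the 3x3 structuring element, and the clipped border handling are all simplifying assumptions): dilation makes isolated white salt pixels grow, whereas an opening (erosion followed by dilation) removes them.

```python
# Toy demonstration: 3x3 dilation grows white "salt" pixels, while an
# opening (erosion then dilation) wipes them out. Windows are clipped
# at the image borders for simplicity.

def dilate(img):
    """Binary dilation with a 3x3 square structuring element."""
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            out[y][x] = max(
                img[yy][xx]
                for yy in range(max(0, y - 1), min(h, y + 2))
                for xx in range(max(0, x - 1), min(w, x + 2))
            )
    return out

def erode(img):
    """Binary erosion with a 3x3 square structuring element."""
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            out[y][x] = min(
                img[yy][xx]
                for yy in range(max(0, y - 1), min(h, y + 2))
                for xx in range(max(0, x - 1), min(w, x + 2))
            )
    return out

def white_count(img):
    return sum(map(sum, img))

# A dark 7x7 image with two isolated white salt pixels.
img = [[0] * 7 for _ in range(7)]
img[1][1] = 1
img[4][5] = 1

dilated = dilate(img)        # each salt pixel becomes a 3x3 white blob
opened = dilate(erode(img))  # opening: isolated salt pixels vanish

print(white_count(img), white_count(dilated), white_count(opened))
# prints: 2 18 0
```

Each isolated salt pixel grows into a 9-pixel blob under dilation, exactly the "making the crumbs bigger" effect, while the opening removes the salt entirely.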
{ "domain": "dsp.stackexchange", "id": 3321, "tags": "image-processing, noise" }
Are extra dimensions timelike or spacelike?
Question: In special relativity there is a clear difference between spatial and temporal dimensions of spacetime due to the Minkowski metric diag(-1,1,1,1). In higher dimensional theories (10- and 26-dimensional string theories) does this asymmetry continue with additional dimensions being specifically time- or space-like or is there no clear difference? Answer: From Polchinski's String Theory, Chapter 1: We want to study the classical and quantum dynamics of a one-dimensional object, a string. The string moves in $D$ flat spacetime dimensions, with metric $\eta_{\mu \nu} = \mathrm{diag}(-,+,+,\cdots,+)$. So all additional dimensions are spacelike. Strictly speaking Polchinski is only talking about bosonic string theory at this point, but I believe the same applies to superstring theories as well. (It's been a long time since I thought about this in any detail.)
{ "domain": "physics.stackexchange", "id": 60221, "tags": "spacetime, string-theory, metric-tensor, causality, compactification" }
FIR code implementation question
Question: I have found this piece of code in the Infineon DSP library (asm code): Fir_Blk_16() ... ;(ACC)=(ACC)+h(i)*x(n-i)<<1 as a comment. I don't understand why the result is shifted (<<1)? Is this for rounding purposes? Thank you in advance, Anton Answer: The shifting is most likely related to the fixed-point representation. If the input, filter coefficients and accumulator are in different "Qs" (the Q format, i.e. the number of fraction bits used as the fixed-point scale factor), shifts are used to adjust. See for example: https://en.wikipedia.org/wiki/Q_(number_format)
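The Q-format adjustment the answer mentions can be made concrete. Below is a sketch in plain Python standing in for the DSP's integer datapath (whether this is exactly what the Infineon core does is an assumption): multiplying two Q15 values as integers yields a Q30 product, and the `<<1` realigns it to a Q31 accumulator.

```python
# Fractional fixed-point sketch: Q15 * Q15 = Q30, and shifting the
# product left by one bit yields Q31, matching a Q31 accumulator.
# The sample values are arbitrary.

def to_q15(x):
    """Encode a float in [-1, 1) as a Q15 integer (1 sign + 15 fraction bits)."""
    return int(round(x * (1 << 15)))

def from_q31(x):
    """Decode a Q31 integer back to a float."""
    return x / (1 << 31)

h = to_q15(0.5)      # a filter coefficient, 0x4000
xn = to_q15(0.25)    # a sample, 0x2000

prod_q30 = h * xn           # plain integer multiply: 15+15 fraction bits
prod_q31 = prod_q30 << 1    # the "<<1" from the assembly comment

print(hex(h), hex(xn), from_q31(prod_q31))
# prints: 0x4000 0x2000 0.125
```

With the shift, the accumulator value decodes to the true product 0.5 × 0.25 = 0.125; decoding the raw Q30 product as Q31 would instead give 0.0625, i.e. half the correct value.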
{ "domain": "dsp.stackexchange", "id": 11807, "tags": "dsp-core" }
Publisher Subscriber not working
Question: Hi everyone, I have been using ROS for quite a while but am suddenly stuck with a very weird problem. In a part of my code, the subscriber to a publisher is not working within the code segment. If I try to use rostopic echo, it works fine but as soon as I try running the subscriber node, not only does it give no result but also after disconnecting the subscriber node if I try to perform rostopic echo it does not work anymore. This is a very strange problem and any help would be very much appreciated. I am attaching the subscriber publisher nodes (much simplified versions) #include <ros/ros.h> #include <ros/callback_queue.h> #include <visualization_msgs/Marker.h> #include <std_msgs/String.h> #include <sensor_msgs/LaserScan.h> #include <nav_msgs/Odometry.h> #include <fstream> #include <sstream> #include <string> #include <stdlib.h> #include <algorithm> #include <cstdlib> #include <vector> #include <math.h> #include <std_msgs/MultiArrayLayout.h> #include <std_msgs/MultiArrayDimension.h> #include <std_msgs/Float32MultiArray.h> #include <std_msgs/Int8.h> #include "list_velocity_tracking.h" #define PI 3.14159265 using namespace std; std_msgs::Int8 lane_indicator; void laneCallback(const std_msgs::Int8::ConstPtr& lc); int main(int argc, char** argv) { ros::init(argc, argv, "velocit_estimation_list"); ros::NodeHandle n; ros::NodeHandle n1; pub2 = n.advertise<std_msgs::Int8>("lane_change_indicator", 5); int count = 0; lane_indicator.data = 0; while(ros::ok()) { pub2.publish(lane_indicator); } } and the subscriber node #include <ros/ros.h> #include <ros/callback_queue.h> #include <visualization_msgs/Marker.h> #include <std_msgs/String.h> #include <sensor_msgs/LaserScan.h> #include <nav_msgs/Odometry.h> #include <fstream> #include <sstream> #include <string> #include <stdlib.h> #include <algorithm> #include <cstdlib> #include <vector> #include <math.h> #include <std_msgs/MultiArrayLayout.h> #include <std_msgs/MultiArrayDimension.h> #include 
<std_msgs/Float32MultiArray.h> #include <std_msgs/Int8.h> #include "list_velocity_tracking.h" #define PI 3.14159265 using namespace std; void laneCallback(const std_msgs::Int8::ConstPtr& lc); int main(int argc, char** argv) { ros::init(argc, argv, "velocit_estimation_list"); ros::NodeHandle n; ros::NodeHandle n1; lane_lock = n.subscribe("lane_change_indicator", 10, laneCallback); int count = 0; while(ros::ok()) { ros::spinOnce(); } } void laneCallback(const std_msgs::Int8::ConstPtr& lc) { cout<<"inside callback\n"; } And yes, I have declared lane_lock in the header file as a ROS subscriber. Thanks a lot in advance. Originally posted by Ashesh Goswami on ROS Answers with karma: 36 on 2015-03-19 Post score: 0 Answer: Thanks a lot. I ran rqt_graph and figured out that the problem was with an inactive node, i.e. I was running this node through another inactive node and hence the topic didn't have a clear path from the publisher to the subscriber. The problem is solved now. Originally posted by Ashesh Goswami with karma: 36 on 2015-03-19 This answer was ACCEPTED on the original site Post score: 1
{ "domain": "robotics.stackexchange", "id": 21170, "tags": "ros, roscpp, publisher" }
Perform one simulation step
Question: Hello everyone! I want to interface ROS and Gazebo to perform a robot control task. However, I want to have full control of the simulation evolution, by arbitrating the simulation steps. To be more specific, I publish a simulation clock in ROS and then: I want to tell Gazebo to execute the next simulation step, gather data, perform calculations over them and publish control outputs for the next simulation steps. Afterwards, run the next simulation step and start over. I found this question, but it is outdated and has broken links. Is there anything more recent on this topic? Thanks, George Originally posted by Georacer on Gazebo Answers with karma: 37 on 2015-12-07 Post score: 0 Answer: Hi there, as @scpeters mentioned in the other question you could publish gazebo world_control messages on the ~/world_control topic. You start the simulation paused and you send a message with step = true to step once in the simulation. Cheers, Andrei Originally posted by AndreiHaidu with karma: 2108 on 2015-12-07 This answer was ACCEPTED on the original site Post score: 0
{ "domain": "robotics.stackexchange", "id": 3838, "tags": "gazebo" }
Having trouble figuring out how loss was calculated for SQuAD task in BERT paper
Question: The BERT Paper https://arxiv.org/pdf/1810.04805.pdf Section 4.2 covers the SQuAD training. So from my understanding, there are two extra parameters trained: two vectors with the same dimension as the hidden size, so the same dimensions as the contextualized embeddings in BERT. They are S (for start) and E (for end). For each, a softmax is taken with S and each of the final contextualized embeddings to get a score for the correct start position. And the same thing is done for E and the correct end position. I get up to this part. But I am having trouble figuring out how they did the labeling and the final loss calculation, which is described in this paragraph: "and the maximum scoring span is used as the prediction. The training objective is the loglikelihood of the correct start and end positions." What do they mean by "maximum scoring span is used as the prediction"? Furthermore, how does that play into "The training objective is the loglikelihood of the correct start and end positions"? From this source: https://ljvmiranda921.github.io/notebook/2017/08/13/softmax-and-the-negative-log-likelihood/ it says the log-likelihood is only applied to the correct classes. So we are only calculating the softmax for the correct positions, not any of the incorrect positions. If this interpretation is correct, then the loss will be Loss = -Log( exp(S*T(correctStart)) / Sum(exp(S*Ti)) ) - Log( exp(E*T(correctEnd)) / Sum(exp(E*Ti)) ) Answer: From your description it sounds like for every position $i$ in the input text the model predicts $$p_S(i) = \mathbb P(\text{correct start position is } i)$$ and $$p_E(i) = \mathbb P(\text{correct end position is } i).$$ Now let $\hat s = \arg\max_i p_S(i)$ and $\hat e = \arg\max_i p_E(i)$ be the most probable start and end positions (according to the model). Then by "maximum scoring span is used as the prediction" they just mean that they output $(\hat s, \hat e)$ when predicting.
Then “The training objective is the loglikelihood of the correct start and end positions” means that if the correct start and end positions are $s^*$ and $e^*$, they try to maximize the predicted probability of $s^*$ and $e^*$. If the start and end positions are independent then this is equal to $p_S(s^*) p_E(e^*)$ and taking the negative log the loss becomes $$L(e^*, s^*) = -\log p_S(s^*) -\log p_E(e^*).$$
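To make the objective concrete, here is a toy numeric sketch of the procedure the answer describes (plain Python; the embeddings, the vectors S and E, and the gold span are made up, and the span prediction takes independent argmaxes as in the answer, rather than the paper's constrained search over spans with start ≤ end):

```python
# Toy SQuAD-style objective: softmax the dot products S.T_i and E.T_i
# over positions, then take the negative log-likelihood of the gold
# start and end positions. All numbers are invented for illustration.
import math

def softmax(scores):
    m = max(scores)                          # shift for numerical stability
    exps = [math.exp(s - m) for s in scores]
    z = sum(exps)
    return [e / z for e in exps]

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

# Contextualized token embeddings T_i, and the start/end vectors S, E.
T = [[0.1, 0.2], [0.9, 0.4], [0.3, 0.8], [0.2, 0.1]]
S = [1.0, 0.0]
E = [0.0, 1.0]

p_start = softmax([dot(S, t) for t in T])    # p_S(i) over positions
p_end = softmax([dot(E, t) for t in T])      # p_E(i) over positions

# Training objective: negative log-likelihood of the gold positions.
s_star, e_star = 1, 2
loss = -math.log(p_start[s_star]) - math.log(p_end[e_star])

# Prediction: the most probable start and end positions.
pred = (max(range(len(T)), key=lambda i: p_start[i]),
        max(range(len(T)), key=lambda i: p_end[i]))
print(pred, round(loss, 4))
```

Here only the probabilities at the gold positions enter the loss, but the softmax normalizers still involve every position, which is how the incorrect positions are pushed down during training.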
{ "domain": "datascience.stackexchange", "id": 5058, "tags": "machine-learning, nlp, loss-function" }
What's the logical counterpart to jumps with arguments on CPS terms?
Question: It's well known that the CPS (continuation-passing style) translation often employed in compilers corresponds to double negation translation under the Curry-Howard isomorphism. Though often the target language of a CPS translation is the same as the source language, sometimes it's a specialized language which only allows terms in CPS form (i.e., there are no direct style functions anymore). See, e.g., this or this. As an example, consider Thielecke's CPS-calculus, where commands are defined as either jumps or bindings: $$b ::= x\langle\vec{x}\rangle\ |\ b\ \{\ x\langle\vec{x}\rangle = b\ \}$$ And one-hole contexts (commands with holes) are defined as follow: $$C ::= [-]\ |\ C \ \{\ x\langle\vec{x}\rangle = b\ \}\ |\ b\ \{\ x\langle\vec{x}\rangle = C\ \}$$ If we try to see these languages under the Curry-Howard isomorphism, we don't have implication anymore, but rather we use negations and products alone. The typing rules for such languages demonstrate we're trying to derive a contradiction: $$\frac{\color{orange}{\Gamma\vdash} k{:}\ \color{orange}{\neg\vec{\tau}}\quad\quad\color{orange}{\Gamma\vdash}\vec{x}{:}\ \color{orange}{\vec{\tau}}}{\color{orange}{\Gamma\vdash} k\langle\vec{x}\rangle}(J)$$ $$\frac{\color{orange}{\Gamma,}k{:}\ \color{orange}{\neg\vec{\tau}\vdash} b\quad\quad\color{orange}{\Gamma,}\vec{x}{:}\ \color{orange}{\vec{\tau}\vdash} c}{\color{orange}{\Gamma\vdash} b\ \{\ k\langle\vec{x}\rangle=c\ \}}(B)$$ (Note that these look similar to the (AXIOM) and (CUT) rules from linear logic, though on the other side of the sequent: we have a conjunction rather than a disjunction.) Reduction rules in intermediate languages such as the ones above allow jumps to be performed to bound continuations, immediately replacing arguments (hence the name "jump with arguments" sometimes employed). 
For the CPS-calculus, this can be represented by the following reduction rule: $$\frac{}{C[\color{blue}{k\langle \vec{x}\rangle}]\ \{\ k\langle\color{red}{\vec{y}}\rangle=\color{red}c\ \} \longrightarrow C[\color{red}{c[\color{blue}{\vec{x}}/\vec{y}]}]\ \{\ k\langle\color{red}{\vec{y}}\rangle=\color{red}c\ \}}$$ $$\frac{a\longrightarrow b}{C[a]\longrightarrow C[b]}$$ ...though similar languages have similar notions of jump. I'm not totally sure, but I believe that the reduction rule would correspond to a cut inference rule similar to the following (quickly sketched): ...where we're allowed to copy a bound proof tree and replace the jump subtree with it (in the example above, replacing subtree a with a copy of subtree b, though with a different context). I'm interested in how such an intermediate language could be seen by the Curry-Howard isomorphism. So, my actual question is twofold: Has a similar implication-free subset of some logic (e.g., propositional logic) been studied somewhere? I mean, has a "logic without implication" been proposed? What is the equivalent of a jump with arguments in logic? Assuming the cut rule I sketched above is correct (and it corresponds to the reduction rule), has something similar to it appeared elsewhere? Answer: Such a logic of continuations (or a syntax of continuation that arose from logical considerations) would be Laurent's “polarised linear logic” (LLP): Olivier Laurent, Étude de la polarisation en logique (2002). A good explanation of what is going on from a categorical perspective is given in Melliès and Tabareau, Resource modalities in tensor logic (2010). A detailed description of the correspondence between LLP and CPS along the lines of your question appears in my PhD thesis (2013) (Chapter III, pp.91-95,153-199). (There are a lot of other references in this area; the bibliographies should provide you with a good starting point.)
The two rules (J) and (B) you wrote are derived as follows (in the notations of Laurent's PhD thesis): \begin{array}{c} \dfrac{\dfrac{\vdash\mathcal{N},!(P_{1}^{\bot}\mathbin{⅋}\cdots\mathbin{⅋}P_{n}^{\bot})\qquad\dfrac{\dfrac{\vdash\mathcal{N},P_{1}\quad\cdots\quad\vdash\mathcal{N},P_{n}}{\vdash\mathcal{N},\dots,\mathcal{N},P_{1}\otimes\cdots\otimes P_{n}}}{\vdash\mathcal{N},\dots,\mathcal{N},?(P_{1}\otimes\cdots\otimes P_{n})}}{\vdash\mathcal{N},\mathcal{N},\dots,\mathcal{N}}}{\vdash\mathcal{N}}\\ \dfrac{\dfrac{\vdash\mathcal{N},?(P_{1}\otimes\dots\otimes P_{n})\qquad\dfrac{\dfrac{\vdash\mathcal{N},P_{1}^{\bot},\dots,P_{n}^{\bot}}{\vdash\mathcal{N},P_{1}^{\bot}\mathbin{⅋}\cdots\mathbin{⅋}P_{n}^{\bot}}}{\vdash\mathcal{N},!(P_{1}^{\bot}\mathbin{⅋}\cdots\mathbin{⅋}P_{n}^{\bot})}}{\vdash\mathcal{N},\mathcal{N}}}{\vdash\mathcal{N}} \end{array} LLP is a sequent calculus and as such is finer-grained: it makes explicit structural rules and (what corresponds to) left-introduction of negation. Visible above, it also internalizes duality (all formulae on the right-hand side, like in linear logic). While Laurent does not cite Thielecke's works, Thielecke's PhD thesis previously suggested that having a “duality functor” could be useful to clarify the duality aspects of CPS. Laurent's graphical syntax will let you interpret linear substitution, but one would have to look at the details to see if it is exactly the same as the one you mention.
{ "domain": "cstheory.stackexchange", "id": 5427, "tags": "lo.logic, type-theory, pl.programming-languages, curry-howard, continuations" }
A finite automaton accept no string
Question: How can a finite automaton over {0,1} accept no string? I can only think of s->a->q->F, where the set of final states F is empty. Is that right? Answer: A finite automaton does not accept any string if and only if all its final states are unreachable from the start state. This is true if there is no final state at all, or if no final state can be reached from the start state.
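The answer's criterion is easy to mechanize; here is a small sketch (plain Python, with made-up machines; transition labels are omitted because emptiness depends only on which states are reachable, not on which symbols reach them):

```python
# Emptiness check: the language of a DFA/NFA is empty iff no accepting
# state is reachable from the start state.

def accepts_nothing(start, finals, delta):
    """delta maps a state to an iterable of successor states."""
    seen, stack = {start}, [start]
    while stack:                       # depth-first reachability search
        q = stack.pop()
        for r in delta.get(q, ()):
            if r not in seen:
                seen.add(r)
                stack.append(r)
    return seen.isdisjoint(finals)

# s -> a -> q, with final state F unreachable (q has no outgoing edge).
delta = {"s": ["a"], "a": ["q"], "q": []}
print(accepts_nothing("s", {"F"}, delta))   # True: language is empty

# Same machine plus an edge q -> F: now F is reachable.
delta2 = {"s": ["a"], "a": ["q"], "q": ["F"]}
print(accepts_nothing("s", {"F"}, delta2))  # False
```

This also covers the question's case directly: with an empty set of final states, `seen.isdisjoint(finals)` is trivially true, so the language is empty no matter what the transition graph looks like.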
{ "domain": "cs.stackexchange", "id": 4214, "tags": "automata, finite-automata" }
Did the Double Asteroid Redirection Test (DART) trigger an answer for the Fermi paradox?
Question: The successful implementation of the Double Asteroid Redirection Test (DART - https://en.wikipedia.org/wiki/Double_Asteroid_Redirection_Test) was an awesome feat of engineering. However, hypothetically speaking here, if we assume there is a higher level of intelligence within our galaxy/universe, then there is a chance that they have mapped all the known comets and asteroids. Does this then bear on the Fermi paradox (https://en.wikipedia.org/wiki/Fermi_paradox), as we have just signalled that another intelligent form of life exists? Would they pick up that we have altered the course of an asteroid via DART? Answer: Unlikely. We've done so many more conspicuous things. Nuclear explosions, artificial light, radio broadcasts, agriculture, atmospheric modification, ...
{ "domain": "physics.stackexchange", "id": 94011, "tags": "general-relativity, astronomy, collision" }
Is there any algorithm that implements wavelet?
Question: Is there any quantum algorithm that implements the wavelet transform (like there is the Quantum Fourier Transform)? I've tried looking online, but couldn't find any; I wonder if something like this exists. Thank you. Answer: Could you be looking for this: Quantum Wavelet Transforms: Fast Algorithms and Complete Circuits? (this links to arXiv.) In particular, this paper presents efficient circuits for the Haar and Daubechies wavelet transforms.
{ "domain": "quantumcomputing.stackexchange", "id": 2017, "tags": "qiskit, hamiltonian-simulation" }
Finding the first-order perturbation of the energy of a hydrogen atom due to Spin Orbit coupling
Question: I am given an exercise on perturbation theory involving an electron in a hydrogen atom in the presence of a constant magnetic field $\vec{B} = B_z \hat{z}$. Due to Zeeman effect and Spin-Orbit coupling, the term \begin{equation} \Delta H = H_{SO} + H_Z = \frac{1}{2m^2 c^2} \frac{1}{r} \frac{\mathrm{d}V}{\mathrm{d}r} \vec{L} \cdot \vec{S} + \mu_B B_z (L_z + 2S_z) \end{equation} must be added to the unperturbed Hamiltonian $H_0$, with $V(r) = -e^2/r$ and $\mu_B$ is the Bohr magneton. Supposing that the second term dominates (Paschen-Back effect), I am asked to determine the first order perturbation of the energy spectrum, thus treating only the spin-orbit coupling as a perturbation. To this end, I have to evaluate the perturbation matrix with entries \begin{equation} \langle l', \frac{1}{2},m', m_s' | H_{SO}| l, \frac{1}{2},m, m_s \rangle. \end{equation} In the solution, only the diagonal entries are computed. I suppose that this might be due to the operator $\vec{L} \cdot \vec{S}$ commuting with $L^2$, $L_z$ and $S_z$, that is to say \begin{equation} [\vec{L}\cdot \vec{S}, L^2] = 0, \quad [\vec{L}\cdot \vec{S}, L_z] = 0, \quad [\vec{L}\cdot \vec{S}, S_z] = 0. \end{equation} This would indeed require that \begin{equation} l'= l, \quad m'=m, \quad m_s' = m_s, \end{equation} leaving only the diagonal terms to compute. I re-expressed $\vec{L}\cdot \vec{S}$ as \begin{equation} \vec{L} \cdot \vec{S} = \frac{1}{2} \left( (\vec{L}+\vec{S})^2 - \vec{L}^2 - \vec{S}^2 \right) \end{equation} but still failed to show that the above commutation relations hold. Am I missing something? Could/Should I use (rotational) symmetry instead? Answer: the operator $\vec{J}^2=(\vec{L}+\vec{S})^2$ (and likewise $\vec{L}\cdot\vec{S}$) does not commute with $L_z$ and $S_z$ separately, but only with the combined $J_z = L_z + S_z$. 
You can see this also from symmetry considerations: rotating just the spatial angular momentum or just the spin will not preserve the scalar product $\vec{L}\cdot\vec{S}$. In order for this term to be a scalar under rotations, the rotations must be both in the coordinate space (angular momentum) and the spin space, together. The reason that just the diagonal elements are computed is due to the nature of perturbation theory. In the first order in perturbation theory, we are just concerned with the degenerate states within the subspace that share the same energy. Once you incorporated the Zeeman term $\mu_B B(L_z+2S_z)$ into $H$, it lifted (some of) the degeneracy between states with different $m, m_s$. It has not completely disappeared, though: the states with $|m_s = 1/2, m\rangle$ and $|m_s=-1/2, m+2\rangle$ (if $m$ allows such a change) would still have the same energy. However, you can work out that $\vec{L}\cdot\vec{S}$ cannot connect these states, because $J_z$ is still preserved, and they belong to different values of $J_z$.
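The separate commutators the question asks about can also be computed directly from $[L_i, L_j] = i\hbar\,\epsilon_{ijk}L_k$, the analogous relations for $\vec{S}$, and $[L_i, S_j] = 0$ (a short check, filling in the step the question was missing):

```latex
[\vec{L}\cdot\vec{S},\, L_z]
  = [L_x, L_z]\,S_x + [L_y, L_z]\,S_y
  = i\hbar\,(L_x S_y - L_y S_x) \;\neq\; 0 ,
\qquad
[\vec{L}\cdot\vec{S},\, S_z]
  = i\hbar\,(L_y S_x - L_x S_y) ,
% so the two contributions cancel in the sum:
[\vec{L}\cdot\vec{S},\, J_z]
  = [\vec{L}\cdot\vec{S},\, L_z] + [\vec{L}\cdot\vec{S},\, S_z] = 0 .
```

This is exactly why the proposed commutation relations with $L_z$ and $S_z$ separately could not be shown: they do not hold, while the one with $J_z = L_z + S_z$ does.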
{ "domain": "physics.stackexchange", "id": 85363, "tags": "quantum-mechanics, homework-and-exercises, atomic-physics, perturbation-theory, spin-orbit" }
What, in simplest terms, is gauge invariance?
Question: I am a mathematics student with a hobby interest in physics. This means that I've taken graduate courses in quantum dynamics and general relativity without the bulk of undergraduate physics courses and the sheer volume of education into the physical tools and mindset that the other students who took the course had, like Noether's theorem, Lagrangian and Hamiltonian mechanics, statistical methods, and so on. The courses themselves went well enough. My mathematical experience more or less made up for a lacking physical understanding. However, I still haven't found an elementary explanation of gauge invariance (if there is such a thing). I am aware of some examples, like how the magnetic potential is unique only up to a (time-)constant gradient. I also came across it in linearised general relativity, where there are several different perturbations to the spacetime metric that give the same observable dynamics. However, to really understand what's going on, I like to have simpler examples. Unfortunately, I haven't been able to find any. I guess, since "gauge invariance" is such a frightening phrase, no one uses that phrase when writing for a high school student. So, my (very simple) question is: In many high school physics calculations, you measure or calculate time, distance, potential energy, temperature, and other quantities. These calculations very often depend only on the difference between two values, not the concrete values themselves. You are therefore free to choose a zero to your liking. Is this an example of gauge invariance in the same sense as the graduate examples above? Or are these two different concepts?
Answer: The reason that it's so hard to understand what physicists mean when they talk about "gauge freedom" is that there are at least four inequivalent definitions that I've seen used: Definition 1: A mathematical theory has a gauge freedom if some of the mathematical degrees of freedom are "redundant" in the sense that two different mathematical expressions describe the exact same physical system. Then the redundant (or "gauge dependent") degrees of freedom are "unphysical" in the sense that no possible experiment could uniquely determine their values, even in principle. One famous example is the overall phase of a quantum state - it's completely unmeasurable and two vectors in Hilbert space that differ only by an overall phase describe the exact same state. Another example, as you mentioned, is any kind of potential which must be differentiated to yield a physical quantity - for example, a potential energy function. (Although some of your other examples, like temperature, are not examples of gauge-dependent quantities, because there is a well-defined physical sense of zero temperature.) For physical systems that are described by mathematical structures with a gauge freedom, the best way to mathematically define a specific physical configuration is as an equivalence class of gauge-dependent functions which differ only in their gauge degrees of freedom. For example, in quantum mechanics, a physical state isn't actually described by a single vector in Hilbert space, but rather by an equivalence class of vectors that differ by an overall scalar multiple. Or more simply, by a line of vectors in Hilbert space. (If you want to get fancy, the space of physical states is called a "projective Hilbert space," which is the set of lines in Hilbert space, or more precisely a version of the Hilbert space in which vectors are identified if they are proportional to each other.) 
I suppose you could also define "physical potential energies" as sets of potential energy functions that differ only by an additive constant, although in practice that's kind of overkill. These equivalence classes remove the gauge freedom by construction, and so are "gauge invariant." Sometimes (though not always) there's a simple mathematical operation that removes all the redundant degrees of freedom while preserving all the physical ones. For example, given a potential energy, one can take the gradient to yield a force field, which is directly measurable. And in the case of classical E&M, there are certain linear combinations of partial derivatives that reduce the potentials to directly measurable ${\bf E}$ and ${\bf B}$ fields without losing any physical information. However, in the case of a vector in a quantum Hilbert space, there's no simple derivative operation that removes the phase freedom without losing anything else. Definition 2: The same as Definition 1, but with the additional requirement that the redundant degrees of freedom be local. What this means is that there exists some kind of mathematical operation that depends on an arbitrary smooth function $\lambda(x)$ on spacetime that leaves the physical degrees of freedom (i.e. the physically measurable quantities) invariant. The canonical example of course is that if you take any smooth function $\lambda(x)$, then adding $\partial_\mu \lambda(x)$ to the electromagnetic four-potential $A_\mu(x)$ leaves the physical quantities (the ${\bf E}$ and ${\bf B}$ fields) unchanged. (In field theory, the requirement that the "physical degrees of freedom" are unchanged is phrased as requiring that the Lagrangian density $\mathcal{L}[\varphi(x)]$ be unchanged, but other formulations are possible.) This definition is clearly much stricter - the examples given above in Definition 1 don't count under this definition - and most of the time when physicists talk about "gauge freedom" this is the definition they mean. 
In this case, instead of having just a few redundant/unphysical degrees of freedom (like the overall constant for your potential energy), you have a continuously infinite number. (To make matters even more confusing, some people use the phrase "global gauge symmetry" in the sense of Definition 1 to describe things like the global phase freedom of a quantum state, which would clearly be a contradiction in terms in the sense of Definition 2.) It turns out that in order to deal with this in quantum field theory, you need to substantially change your approach to quantization (technically, you need to "gauge fix your path integral") in order to eliminate all the unphysical degrees of freedom. When people talk about "gauge invariant" quantities under this definition, in practice they usually mean the directly physically measurable derivatives, like the electromagnetic tensor $F_{\mu \nu}$, that remain unchanged ("invariant") under any gauge transformation. But technically, there are other gauge-invariant quantities as well, e.g. a uniform quantum superposition of $A_\mu(x) + \partial_\mu \lambda(x)$ over all possible $\lambda(x)$ for some particular $A_\mu(x)$. See Terry Tao's blog post for a great explanation of this second sense of gauge symmetry from a more mathematical perspective. Definition 3: A Lagrangian is sometimes said to possess a "gauge symmetry" if there exists some operation that depends on an arbitrary continuous function on spacetime that leaves it invariant, even if the degrees of freedom being changed are physically measurable. Definition 4: For a "lattice gauge theory" defined on local lattice Hamiltonians, there exists an operator supported on each lattice site that commutes with the Hamiltonian. In some cases, this operator corresponds to a physically measurable quantity. The cases of Definitions 3 and 4 are a bit conceptually subtle so I won't go into them here - I can address them in a follow-up question if anyone's interested.
Update: I've written follow-up answers regarding whether there's any sense in which the gauge degrees of freedom can be physically measurable in the Hamiltonian case and the Lagrangian case.
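To make Definition 2 concrete, here is a small numerical sketch (my own illustration, not part of the original answer) checking that the measurable fields $\vec{E} = -\nabla\phi - \partial_t\vec{A}$ and $\vec{B} = \nabla\times\vec{A}$ are untouched by the gauge transformation $\phi \to \phi - \partial_t\lambda$, $\vec{A} \to \vec{A} + \nabla\lambda$; the potentials and $\lambda$ below are made-up smooth test functions:

```python
from math import sin, cos, exp

# Made-up smooth potentials (phi, A) and an arbitrary gauge function lam
def phi(t, x, y, z): return sin(x) * t + y * z
def A(t, x, y, z):   return (t * y, cos(z), x * z)
def lam(t, x, y, z): return exp(0.1 * x) * sin(t + y) + z * z

H = 1e-5  # step for central differences

def d(f, i, p):
    """Numerical partial derivative of f w.r.t. coordinate i at p = (t, x, y, z)."""
    a, b = list(p), list(p)
    a[i] += H
    b[i] -= H
    return (f(*a) - f(*b)) / (2 * H)

def E(phi, A, p):
    """E = -grad(phi) - dA/dt, componentwise."""
    return [-d(phi, i + 1, p) - d(lambda *q: A(*q)[i], 0, p) for i in range(3)]

def B(A, p):
    """B = curl A."""
    c = lambda i: (lambda *q: A(*q)[i])
    return [d(c(2), 2, p) - d(c(1), 3, p),
            d(c(0), 3, p) - d(c(2), 1, p),
            d(c(1), 1, p) - d(c(0), 2, p)]

# Gauge transformation: phi -> phi - d(lam)/dt, A -> A + grad(lam)
def phi2(*p): return phi(*p) - d(lam, 0, p)
def A2(*p):   return tuple(A(*p)[i] + d(lam, i + 1, p) for i in range(3))

p = (0.3, 1.1, -0.4, 0.7)
dE = max(abs(u - v) for u, v in zip(E(phi, A, p), E(phi2, A2, p)))
dB = max(abs(u - v) for u, v in zip(B(A, p), B(A2, p)))
print(dE < 1e-3, dB < 1e-3)  # True True
```

The invariance holds because mixed partial derivatives commute ($\partial_i\partial_t\lambda = \partial_t\partial_i\lambda$) and the curl of a gradient vanishes; here that shows up as differences at the level of floating-point noise.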
{ "domain": "physics.stackexchange", "id": 32200, "tags": "gauge-invariance" }
Name of device with air capsule inside water
Question: Sorry for the title being so vague. I'm looking for the name of something in physics to do with pressure. It is a bottle of water with a small capsule with air inside it. When the bottle is squeezed, the capsule sinks, but when it is left alone, the capsule floats. Could anyone tell me what this is called, and hopefully how it works as well? Answer: It's called a Cartesian diver. When you squeeze the bottle to increase the pressure, the dropper compresses and becomes more dense. If this increased density exceeds the density of water then the diver will sink.
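A back-of-the-envelope sketch of the mechanism (my own addition, with made-up illustrative numbers): treat the trapped air with Boyle's law and compare the diver's average density with that of water:

```python
# Illustrative numbers only (not measured values)
RHO_WATER = 1000.0   # kg/m^3
P0 = 101_325.0       # ambient pressure, Pa

m_diver = 1.0e-3     # mass of the dropper, kg
V_solid = 0.4e-6     # volume of its solid parts, m^3
V_air0 = 0.7e-6      # trapped air volume at pressure P0, m^3

def diver_density(P):
    """Average density of the diver at bottle pressure P.

    Boyle's law: the trapped air compresses to V_air0 * P0 / P,
    so squeezing the bottle raises the average density.
    """
    V_air = V_air0 * P0 / P
    return m_diver / (V_solid + V_air)

for P in (P0, 1.2 * P0, 1.5 * P0):
    rho = diver_density(P)
    print(f"P = {P:9.0f} Pa -> {rho:6.1f} kg/m^3 -> "
          + ("sinks" if rho > RHO_WATER else "floats"))
```

With these numbers the diver floats at ambient pressure but sinks once the squeeze raises the pressure by a few tens of percent.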
{ "domain": "physics.stackexchange", "id": 39928, "tags": "pressure, terminology, fluid-statics" }
Proof relevance vs. proof irrelevance
Question: I want to use Agda to help me write proofs, but I am getting contradictory feedback about the value of proof relevance. Jacques Carette wrote a Proof-relevant Category Theory in Agda library. But some seem to think (perhaps here, but I was told elsewhere) that proof relevance can be problematic. 1-Category Theory is supposed to be proof irrelevant (and I guess above two categories, this is no longer the case?) I even heard that one may not get the same results if one uses proof relevant category theory. At the same time I believe the Category Theory in the HoTT book and the implementation in Cubical Agda are proof irrelevant (as the HomSets are Sets, i.e., have only one way of being equal). When should I be happy to have proof relevance? When should I rather choose a proof irrelevant library or proof assistant? What are the advantages of each? Would proof irrelevance be problematic as I move to two categories? Answer: There are several possible notions of proof relevance. Let us consider three similar situations: An element of a sum $\Sigma (x : A) . P(x)$ is a pair $(a, p)$ where $a : A$ and $p$ is a proof of $P(a)$. An element of $\Sigma (x : A) . \|P(x)\|$, where $\|{-}\|$ is propositional truncation, is a pair $(a, q)$ where $a : A$ and $q$ is an equivalence class of proofs of $P(a)$ (where any two proofs of $P(a)$ are considered equivalent). In set theory, an element of the subset $\{x \in A \mid \phi(x)\}$, where $\phi(x)$ is a logical formula, is just $a \in A$ such that $\phi(a)$ holds. The first situation is proof relevant because we get full access to the proof $p$, and in particular we may analyze $p$. The third situation is proof irrelevant because we get access just to $a \in A$ but have no further information as to why $\phi(a)$ holds, just that it does. The second situation looks like proof irrelevance, but is actually a form of restricted proof relevance: we do not delete the proof of $P(a)$ but just control its uses with truncation.
That is, from $q$ we may extract a representative proof $p$ of $P(a)$, so long as the choice of $p$ is irrelevant. There is a crucial difference between the third and the second situation, for having restricted access to $p$ is not at all the same as not having access at all. Here is a concrete example. Given $f : \mathbb{N} \to \mathbb{N}$, define $$ Z(f) = \Sigma(x : \mathbb{N}) . \Pi (y : \mathbb{N}) . \mathrm{Id}(f(x + y), 0) $$ An element of $Z(f)$ is a pair $(m, p)$ witnessing the fact that $f(n)$ is zero for all $n \geq m$. Given $f$ with this property, we want to define the sum $S(f) = f(0) + f(1) + f(2) + \cdots$, which of course should be a natural number since eventually the terms are all zero. But proof relevance matters: We may define $S : (\Sigma (f : \mathbb{N} \to \mathbb{N}) . Z(f)) \to \mathbb{N}$ by $$S(f, (m, p)) = f(0) + \cdots + f(m)$$ We may define $S : (\Sigma (f : \mathbb{N} \to \mathbb{N}) . \|Z(f)\|) \to \mathbb{N}$ by $$S(f, |(m,p)|) = f(0) + \cdots + f(m),$$ where $|(m,p)|$ is the truncated witness of $Z(f)$. This is a valid definition because using a different representative $(m',p')$ leads to the same value (as we just end up adding fewer or more zeroes). Imagining that in type theory we had proof irrelevant subset types $$\frac{\vdash a : A \qquad \vdash p : P(a)}{\vdash a : \{x : A \mid P(x)\}}$$ we cannot define $S : \{f : \mathbb{N} \to \mathbb{N} \mid Z(f)\} \to \mathbb{N}$ because we have no information that would allow us to limit the number of terms $f(0), f(1), f(2), \ldots$ that need to be added. (There are other things we can do, but that is beside the point here.) As long as one works in type theory, the only truly proof irrelevant judgements are judgemental equalities. We never define subset types, such as the one above, because that ruins many good properties of type theory (although it would be interesting to investigate this direction).
In the old days type theory did not have propositional truncation or any other form of quotienting, and so one was forced to work in a completely proof relevant way all the time. This is unsatisfactory because it fails to capture properly a great deal of mathematical reasoning. People invented setoids to deal with the problem, and later on introduced propositional truncation (and other forms of quotienting). You ask whether 1-categories are "proof relevant". Well, everything in type theory is proof relevant; the only question is how we deal with having too much proof relevance. Concretely, in a 1-category $\mathcal{C}$, equality of morphisms $f, g : A \to B$ should be "irrelevant" in the sense that it never matters how $f$ and $g$ are equal, only that they are. In HoTT this is expressed by requiring that $\mathrm{Id}(f,g)$ have at most one element, which amounts to $\mathrm{Hom}_\mathcal{C}(A,B)$ being an h-set. In setoid-based formulations of category theory, one needs to account for this phenomenon in some other way, or else one is secretly doing something other than 1-category theory. But I never liked the setoid approach (or Bishop's notion of sets, for that matter), so I will let someone else explain why and how it all makes sense.
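The $S(f)$ example above can be mimicked in ordinary Python (a loose analogy of my own, since Python has no proofs): the witness $m$ is genuinely used to compute the sum, yet every valid witness yields the same answer, which is exactly the side condition that truncation enforces:

```python
def f(n):
    return max(0, 5 - n)  # zero for all n >= 5

def S(f, m):
    """Sum f(0) + ... + f(m), given a witness m past which f vanishes."""
    return sum(f(i) for i in range(m + 1))

# The witnesses 5, 8 and 100 are all valid, and all give the same value,
# so the particular choice of representative witness is irrelevant.
print(S(f, 5), S(f, 8), S(f, 100))  # 15 15 15
```

With no witness at all (the "subset type" reading), there would be no bound at which to stop summing, which is the point of the answer's example.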
{ "domain": "cstheory.stackexchange", "id": 5166, "tags": "ct.category-theory, proof-theory, proof-assistants, homotopy-type-theory, agda" }
Does the Large Hadron Collider produce material residues?
Question: In the LHC, particles are accelerated until they collide, releasing their energy in the form of many other particles. My question is this: What happens to all those new particles and to the old particles that aren't destroyed in collision? Do they dissolve in the nether, are they absorbed by the container, or are they simply so insignificant that they are left inside? Answer: The collisions produce a shower of decay chain particles that impinge on the calorimeters (sensors that detect particle energy) surrounding the experiments. Particles in the beam that are not scattered every which-way by collisions can be diverted to a beam dump. Beam dumps are large targets, several meters long, that are designed to absorb the astonishing kinetic energy of the beam. I recall during the CMS (Compact Muon Solenoid, one of the LHC experiments) talk broadcast today that the CMS calorimeter pixels have been noticeably degrading over time due to the high energy particles hitting them. This is probably because the incident particles are of sufficient energy to introduce crystallographic defects into the lead tungstate crystals used in the calorimeters.
{ "domain": "physics.stackexchange", "id": 3922, "tags": "particle-physics, large-hadron-collider" }
What species is this large bug? (South India)
Question: I found this bug in South India. What species is it? The tiles in the picture are 5x5 centimeters each. The bug was moving relatively slowly and didn't seem to fear me much. I tried asking it directly, but it kept claiming to be a feature. Answer: I suppose it's a whip scorpion. Whip scorpion (order Uropygi, sometimes Thelyphonida), any of approximately 105 species of the arthropod class Arachnida that are similar in appearance to true scorpions except that the larger species have a whiplike telson, or tail, that serves as an organ of touch and has no stinger. The second pair of appendages, the pedipalps, are spiny pincers, and the third pair are long feelers. It is most common in India and Japan to New Guinea.
{ "domain": "biology.stackexchange", "id": 5983, "tags": "species-identification" }
Predicting pressure inside a container based on temperature
Question: I'm a mathematician and computer scientist, and for this particular problem I would benefit from some chemical expertise. Suppose I fill a container up with some liquid propane. I believe this is done under very cold conditions so that the propane remains liquid while the container is being filled. It is then sealed off. Suppose now the inside of the container changes temperature. I would like to calculate the pressure that would be exerted on the walls of the container at any given internal temperature $T$. Intuitively, I would think that the pressure would be a function of temperature and some initial conditions (how full is the tank initially)? If I filled up the tank to 95% capacity with liquid propane, and then heated the tank, I would expect the resulting pressure on the walls of the container to be much higher than if I filled up the tank to only 1% capacity and did the same thing. However, the only resources I've been able to find so far relate to the vapour pressure, which is a function of only temperature and this seems incomplete. Can anybody point me in the right direction on how to approach this problem? Some factors that seem to complicate things are: The containers always seem to contain some level of liquid (due to internal pressure?), the rest is gas. Does the liquid have an effect on the pressure exerted on the walls? Can I just ignore it? If, rather than fill up the tank with liquid at very cold temperatures, the tank is just directly filled with gas, how does this change things? My guess is that given the initial pressure and temperature you can predict the pressure at a different temperature, but I think the fact that some of the gas becomes liquid at some point is tripping me up and I'm not sure how to approach the problem. Answer: The problem with answering this question is that propane is not remotely close to being a permanent gas at typical room temperature. 
This means, in practice, that it is easily liquefied at room temperature with a little pressure and you can't, therefore, use the simple gas law equation to work out the pressure exerted by a given amount of the gas in a vessel of fixed volume. In consequence, the tank will almost certainly contain some liquid (as a butane-filled cigarette lighter will at very modest pressures despite butane being a gas at normal temperatures and pressures). You can, therefore, estimate the pressure inside the vessel as being the vapour pressure of propane at the given temperature (which ignores any other gas included in the vessel, though this may be a good approximation as the filling process is likely to sweep out any other gas). The Wikipedia entry on propane has a convenient chart of the vapour pressure and you can look up the value for a given temperature. The amount of the vessel filled won't make much of a difference to this as long as there is some liquid and some gas in the vessel. Because simple gas laws don't do a good job of predicting the vapour/gas equilibrium, this isn't a gas-law question and the best answer will always be empirical (see the Wikipedia page).
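To put numbers on the lookup, the vapour-pressure curve can be fitted with an Antoine-type equation. The sketch below is my addition, not the answerer's; the constants are of the kind reported for propane in standard data tables (roughly valid between 278 and 360 K), but verify them against a current source before relying on the values:

```python
# Antoine-type fit: log10(P/bar) = A - B / (T/K + C).
# Constants as reported for propane in standard tables (~278-360 K);
# treat them as illustrative and double-check a current data source.
A, B, C = 4.53678, 1149.36, 24.906

def propane_vapour_pressure(T_kelvin):
    """Approximate vapour pressure of propane in bar at temperature T (Kelvin)."""
    return 10 ** (A - B / (T_kelvin + C))

# While both liquid and gas are present, the tank pressure is roughly
# this vapour pressure, almost regardless of how full the tank is.
for T in (273.15, 298.15, 323.15):
    print(f"{T - 273.15:5.1f} degC -> {propane_vapour_pressure(T):5.2f} bar")
```

This reproduces the familiar ballpark figures (roughly 5 bar at 0 °C rising to well over 15 bar at 50 °C), consistent with the chart the answer points to.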
{ "domain": "chemistry.stackexchange", "id": 17688, "tags": "gas-laws, phase, gas-phase-chemistry" }
Finding the average kinetic energy of the molecules using thermodynamics
Question: I've a simple problem: In a balloon with volume $V=0.05m^3$, there is $0.12 kmol$ of gas $\frac{m}{\mu}=0.12kmol$, under pressure $P=0.6*10^7Pa$. Find the average kinetic energy of the molecules. The first thing that came to my mind was the formula: $PV=\frac{2}{3}*N_a*<E_{ki}>$, where $N_a$ is the Avogadro constant and $<E_{ki}>$ is the average kinetic energy; however, when I took a look at the solution, I saw it presented this way: From $PV=\frac{2}{3}E_k=\frac{2}{3}N<E_{ki}>$ and $\frac{m}{\mu}=\frac{N}{N_a}$ $\Rightarrow$ $PV = \frac{2m}{3\mu}N_a<E_{ki}>; <E_{ki}> = \frac{3\mu PV}{2mN_a}$ I've checked the formula in my textbook, and it's written the way I thought. Is the solution provided in my textbook wrong, or have I missed some concept? Answer: Your equation is for one mole of gas and the textbook equation is for $n =\frac {N}{N_a}=\frac{m}{\mu}$ moles of gas. Note that $\frac {N}{N_a}$ and $\frac{m}{\mu}$ have no units, they are dimensionless.
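Plugging the stated values into the textbook's formula gives a quick numerical check (my addition):

```python
AVOGADRO = 6.02214076e23  # molecules per mole

P = 0.6e7   # Pa
V = 0.05    # m^3
n = 0.12e3  # 0.12 kmol = 120 mol

# PV = (2/3) N <E_ki>  with  N = n * N_A  molecules
N = n * AVOGADRO
E_avg = 1.5 * P * V / N
print(f"<E_ki> = {E_avg:.2e} J")  # ~6.2e-21 J
```

The result is a few times $10^{-21}$ J per molecule, the expected order of magnitude for a gas at a few hundred kelvin.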
{ "domain": "physics.stackexchange", "id": 37251, "tags": "homework-and-exercises, thermodynamics, molecules" }
What are the dimensions of the generic quadrotor in the hector quadrotor package?
Question: Hello, The description page (http://wiki.ros.org/hector_quadrotor_description) talks about quadrotor_hokuyo_utm30lx.urdf.xacro providing the description, but I am a little bit confused reading the file: where specifically, or under which tag, do I find the dimensions? By dimensions I mean the height, rotor radius, weight, max distance from the center, etc. Adi Originally posted by fabritya on ROS Answers with karma: 13 on 2018-08-25 Post score: 0 Answer: Geometry is defined by the meshes in the hector_quadrotor_description/meshes/quadrotor folder. The Blender, .stl and .dae files all contain the same geometry, just in different formats. The easiest option to look at is probably the .stl file; this can be opened/imported with many tools. You can even view it online on GitHub. Inertial parameters are defined via the respective tags in the URDF/xacro files. For the basic quadrotor you can look at line 11 in quadrotor_base.urdf.xacro. Originally posted by Stefan Kohlbrecher with karma: 24361 on 2018-08-26 This answer was ACCEPTED on the original site Post score: 2 Original comments Comment by fabritya on 2018-08-26: Thank you Stefan for your quick response. I think I got the values I needed. One final question: Are all the units in SI units? So those values would be (kg*m^2)
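As a side note, the inertial tags the answer mentions are plain XML, so they can be pulled out with the Python standard library. The fragment below is a hypothetical stand-in (the real numbers live in quadrotor_base.urdf.xacro), and, answering the comment, URDF values are indeed in SI units (kg, m, kg·m²):

```python
import xml.etree.ElementTree as ET

# Hypothetical URDF fragment of the kind found in quadrotor_base.urdf.xacro;
# the numbers here are placeholders, not the package's actual values.
urdf = """
<robot name="quadrotor">
  <link name="base_link">
    <inertial>
      <mass value="1.477"/>
      <inertia ixx="0.01152" ixy="0.0" ixz="0.0"
               iyy="0.01152" iyz="0.0" izz="0.0218"/>
    </inertial>
  </link>
</robot>
"""

root = ET.fromstring(urdf)
for link in root.iter("link"):
    inertial = link.find("inertial")
    mass = float(inertial.find("mass").get("value"))  # kg (URDF uses SI units)
    inertia = inertial.find("inertia").attrib         # entries in kg*m^2
    print(link.get("name"), mass, inertia["ixx"], inertia["izz"])
```

The same pattern works on the real file once xacro has been expanded to plain URDF.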
{ "domain": "robotics.stackexchange", "id": 31641, "tags": "ros, ros-kinetic, collada-urdf, hector-quadrotor" }
Is there a place on the equipotential surface where a charge feels no electric force?
Question: I'm given a graph of an equipotential surface where I need to find a place where a charge feels no electric force. I feel like it will be where the voltage is zero, which would be 2G on the graph, right? If not, can someone explain to me where? Answer: I feel like it will be where the voltage is zero Imagine that, instead of voltage, the height is the same along any closed contour. If you think clearly about this, you'll realize that wherever the lines of equal height are closely spaced, the height is changing rapidly - the slope is large there. Where the lines are spaced far apart, the slope is almost flat there. At a peak or a valley, the slope is zero. Where, on such a map, would a ball not accelerate 'downhill'? Essentially, the answer is wherever the slope is zero (wherever the ground is flat). In the electrostatic case, the force on a charged particle is due to an electric field (not potential) where the electric field is essentially the slope of the electric potential. Given, the above, do you think the particle experiences no force where the potential is zero or where the slope of the potential is zero?
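The distinction can be seen numerically in one dimension (my own illustration, arbitrary units): the field, and hence the force on a charge, is minus the slope of the potential, so it vanishes where the potential is flat, not where the potential is zero:

```python
# Illustrative 1-D potential: V = 0 at x = 1, but the slope is zero at x = 0
def V(x):
    return x * x - 1.0

def E(x, h=1e-6):
    """Field = minus the numerical slope of the potential at x."""
    return -(V(x + h) - V(x - h)) / (2 * h)

print(f"where V = 0  (x=1): E = {E(1.0):.3f} -> the charge feels a force")
print(f"where V flat (x=0): E = {abs(E(0.0)):.3f} -> the charge feels no force")
```

At $x=1$ the potential is zero yet the field is $-2$, while at the minimum $x=0$ the field vanishes even though $V=-1$ there.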
{ "domain": "physics.stackexchange", "id": 16499, "tags": "homework-and-exercises, electrostatics" }
My Pythonic take on Psexec
Question: I've created a little program that basically does the same thing as psexec. It will connect to a host via computer name, or IP address, run the given command, and log the output into a file (little extra for myself). I would like some critique on what I've done, what can I do better, what did I do well, etc.. Some key points I would like to look at are (of course critique everything): Logging to a file, is there better ways to do so? Connecting and executing a shell command quicker import wmi import sys import socket import subprocess import logging import getpass import os import time from colorlog import ColoredFormatter log_level = logging.INFO logger_format = "[%(log_color)s%(asctime)s %(levelname)s%(reset)s] %(log_color)s%(message)s%(reset)s" logging.root.setLevel(log_level) formatter = ColoredFormatter(logger_format, datefmt="%I:%M:%S") stream = logging.StreamHandler() stream.setLevel(log_level) stream.setFormatter(formatter) LOGGER = logging.getLogger('pyshellConfig') LOGGER.setLevel(log_level) LOGGER.addHandler(stream) def create_log(dest_ip, host_ip, command, data): """ :param dest_ip: Destination IP address :param host_ip: IP address of where the command was run :param command: Command that was run :param data: Output of the command Write the output of the run command to a log file for further analysis """ log_path = "C:\\Users\\{}\\AppData\\pyshell".format(getpass.getuser()) if not os.path.isdir(log_path): os.mkdir(log_path) with open("{}\\{}_LOG.LOG".format(log_path, __file__), "a+") as log: log.write("![{}][LOG FROM:{} TO:{}]COMMAND RUN: {} OUTPUT:{}\n".format(time.strftime("%m-%d-%Y::%H:%M:%S"), host_ip, dest_ip, command, data)) def connect(hostname, command=None): """ :param hostname: Computer name or IP address :param command: Shell command to be run on the host Connect to the host and run a shell command on their system """ LOGGER.info("Attempting to connect to host: '{}'".format(hostname)) try: connection = wmi.WMI(hostname) # Attempt to 
connect to the hostname if connection: LOGGER.info("Connected successfully, running command: '{}'".format(' '.join(command))) call = subprocess.Popen(command, shell=False, stdout=subprocess.PIPE, stderr=subprocess.PIPE) output = call.communicate("Running remote command...") # Start running the command and save output to variable create_log(socket.gethostbyname(hostname), socket.gethostbyname(socket.gethostname()), command, output) LOGGER.info("Command completed successfully.") except Exception, e: # I don't think there's really anyway of getting the exact error from the shell error_message = "Failed to connect due to %s. " % e error_message += "\nThis could mean that '%s' is not the correct " % hostname error_message += "computer name, or that the host has refused it. " error_message += "You can try to connect via IP address, if the " error_message += "user is able to get it." LOGGER.fatal(error_message) if __name__ == '__main__': help_page = """python shell.py [HOST] [COMMAND]""" error_message = "" try: if not sys.argv[1::]: # User ARGV for the arguments to more match Psexec error_message += "You did not supply a hostname to run against\n{}".format(help_page) LOGGER.fatal(error_message) exit(1) elif not sys.argv[2::]: error_message += "You did not supply a command to run\n{}".format(help_page) LOGGER.fatal(error_message) exit(1) connect(sys.argv[1].upper(), command=sys.argv[2::]) except IndexError: pass Answer: Keep your imports grouped. From PEP 8. Imports should be grouped in the following order: standard library imports related third party imports local application/library specific imports Follow Python's EAFP principle instead of checking if something exists. Plus when it comes to folder creation it will also help you avoid race-condition. import errno try: os.mkdir(log_path) except Exception as e: if e.errno != errno.EEXIST: # If error is something other than file # path already exists then handle it here. 
pass Note my use of except Exception as e here instead of except Exception, e. The latter syntax is deprecated and has been removed in Python 3. Hence go for the as-based format. Your try-except block is huge. Cover the minimum statements at once; it will help you narrow down the actual problem easily and you will be able to show a much better error message to the user. Have multiple try-excepts as long as they don't make the code too unreadable. Instead of dealing with sys.argv you could use the argparse module to handle command line args in a much better way. The error_message = "" statement is not required. You can simply define it wherever you need it instead of defining a default value. [1::] and [2::] can be simply written as [1:] and [2:] respectively. The IndexError at the end is redundant because you're already checking for if not [1::] earlier and exiting the script right away, hence you would never hit that exception. Note that slicing never raises IndexError. call.communicate("Running remote command...") returns both STDOUT and STDERR. You can consider using them separately. Plus you could also check the status code by doing call.returncode. A non-zero exit code means an error. log_path = "C:\\Users\\{}\\AppData\\pyshell".format(getpass.getuser()) A hard-coded path is not a good idea. Perhaps ask the user for a path? You can also get the path to the user's home directory using os.path.expanduser. log_path = r"{home}\AppData\pyshell".format(home=os.path.expanduser('~')) Note that I also added r"" so that we can simply use a single backslash.
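To illustrate the argparse suggestion, here is a minimal sketch of how the script's interface could look (the names and demo arguments are my own, not from the original code):

```python
import argparse

# Sketch of a psexec-style CLI using argparse instead of raw sys.argv checks
parser = argparse.ArgumentParser(
    prog="shell.py",
    description="Run a command on a remote host, psexec-style.")
parser.add_argument("host", help="computer name or IP address")
parser.add_argument("command", nargs="+", help="shell command to run")

args = parser.parse_args(["MYHOST", "ipconfig", "/all"])  # demo argv
print(args.host, args.command)  # MYHOST ['ipconfig', '/all']
```

With this, missing arguments automatically produce a usage message and a non-zero exit status, replacing both manual sys.argv checks and the redundant IndexError handler.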
{ "domain": "codereview.stackexchange", "id": 24140, "tags": "python, python-2.x, shell" }
Load TurtleBot3 on Rviz
Question: I'm trying to launch turtlebot3_bringup and turtlebot3_remote on PC with this command line: $ roslaunch turtlebot3_bringup turtlebot3_remote.launch The result: [turtlebot3_remote.launch] is neither a launch file in package [turtlebot3_bringup] nor is [turtlebot3_bringup] a launch file name The traceback for the exception was written to the log file What can I do? Link to the tutorial: http://emanual.robotis.com/docs/en/platform/turtlebot3/bringup/#load-a-turtlebot3-on-rviz ROS: Kinetic OS: Ubuntu Xenial (16.04) TurtleBot3 Waffle Pi Originally posted by fish24 on ROS Answers with karma: 28 on 2019-07-25 Post score: 0 Original comments Comment by ashutosh08 on 2019-07-26: Are you sure you are running the command in the host where you have run roscore? Comment by fish24 on 2019-07-26: I have not worked in the workspace. I managed to solve the problem. Thank you for the help! Answer: I solved it. Before opening a new terminal and running $ roslaunch turtlebot3_bringup turtlebot3_remote.launch, go into your workspace and source the setup file; in my case: $ cd ~/catkin_ws $ source devel/setup.bash Originally posted by fish24 with karma: 28 on 2019-07-26 This answer was ACCEPTED on the original site Post score: 0 Original comments Comment by duck-development on 2019-07-26: so please mark the question as solved
{ "domain": "robotics.stackexchange", "id": 33518, "tags": "rviz, ros-kinetic" }
A grid and a menu walked into a program
Question: A program that creates a grid 10x10 and assigns a random number in each tile. It then asks you if you want to: create a new grid view the current one count how many of each number there are in the grid sum the rows sum the columns exit the program import java.util.Arrays; import java.util.Random; import java.util.Scanner; public class tenxten { static int numberRows = 10; static int numberColumns = 10; static int [][] grid = new int [numberColumns][numberRows]; private static int randomInt(int from, int to) { Random rand = new Random(); return rand.nextInt(to - from + 1) + from; } private static void amountOfSpecificNumbers() { int[] numbers = new int[numberColumns * numberRows]; for (int i = 1; i < 10; i++) { for (int y = 0; y < 10; y++) { for (int x = 0; x < 10; x++) { if (grid[y][x] == i) { numbers[i] += i; } } } System.out.println(" " + numbers[i] / i + " " + i + "s" ); } } private static void sumOfColumns() { int sumOfColumns[] = new int[numberColumns]; for (int x = 0; x < numberColumns; x++) { for (int y = 0; y < numberRows; y++) { sumOfColumns[y] += grid[x][y]; } } System.out.println(Arrays.toString(sumOfColumns)); } private static void sumOfRows() { int sumOfRows[] = new int[numberColumns]; for (int x = 0; x < numberColumns; x++) { for (int y = 0; y < numberRows; y++) { sumOfRows[x] += grid[x][y]; } } System.out.println(Arrays.toString(sumOfRows)); } private static void newField() { for (int x = 0; x < numberColumns; x++) { for (int y = 0; y < numberRows; y++) { int randomNumber = (randomInt(1, 10)); grid[x][y] = randomNumber; if (randomNumber < 10) { System.out.print(" " + randomNumber + " "); } else { System.out.print(randomNumber + " "); } } System.out.println(); } } private static void showField() { for (int x = 0; x < numberColumns; x++) { for (int y = 0; y < numberRows; y++) { if (grid[x][y] < 10) { System.out.print(" " + grid[x][y] + " "); } else { System.out.print(grid[x][y] + " "); } } System.out.println(); } } private static int 
readInt(Scanner scanner){ int choice = 0; while(choice > 6 || choice < 1) { System.out.println("Pleas enter number 1, 2, 3, 4, 5, or 6"); while (!scanner.hasNextInt()) { System.out.println("That's not even a number"); System.out.println("Pleas enter number 1, 2, 3, 4, 5, or 6"); scanner.next(); } choice = scanner.nextInt(); } return choice; } public static void main(String[] args) { newField(); while(true) { System.out.println("What do you want to do?"); System.out.println("1. Get a new field"); System.out.println("2. Show current field"); System.out.println("3. Count the numbers in the current field"); System.out.println("4. Sum all rows"); System.out.println("5. Sum all columns"); System.out.println("6. Exit program"); Scanner scanner = new Scanner(System.in); int choice = readInt(scanner); if (choice == 1){ newField(); } else if (choice == 2){ showField(); } else if (choice == 3){ amountOfSpecificNumbers(); } else if (choice == 4){ sumOfRows(); } else if (choice == 5){ sumOfColumns(); }else { return; } } } } Answer: public class tenxten { Java classes should start with a capital letter, and according to Java conventions should be named with something called "PascalCase". A name like TenXTen would adhere to that convention. static int numberRows = 10; static int numberColumns = 10; These are effectively used as constants (they do not change). Therefore they can be: private static final int NUMBER_ROWS = 10; private static final int NUMBER_COLUMNS = 10; (Constants are by convention named with ALL_CAPS_AND_UNDERLINES) static int [][] grid = new int [numberColumns][numberRows]; At one place you write grid[y][x] and in others grid[x][y] Luckily for you, it has the same dimensions so you won't notice, but should it be the following? static int [][] grid = new int [numberRows][numberColumns]; Random rand = new Random(); You are currently creating one Random each time you are generating a number. 
Random objects are meant to be re-used (for "better randomization" - I know it sounds fuzzy but trust me on this one). for (int y = 0; y < 10; y++) { for (int x = 0; x < 10; x++) { and for (int i = 1; i < 10; i++) { for (int y = 0; y < 10; y++) { for (int x = 0; x < 10; x++) { Use the constants for the upper bound for x and y here. newField has some duplication from showField. It might be better to remove the output from newField and call the methods like this: newField(); showField(); I believe your readInt method can be rewritten using do-while, int choice; do { System.out.println("Pleas enter number 1, 2, 3, 4, 5, or 6"); while (!scanner.hasNextInt()) { System.out.println("That's not even a number"); System.out.println("Pleas enter number 1, 2, 3, 4, 5, or 6"); scanner.next(); } choice = scanner.nextInt(); } while (choice > 6 || choice < 1); return choice; Your amountOfSpecificNumbers() method can be simplified in a couple of ways: use numbers[i]++; instead of numbers[i] += i; and you won't have to divide by i in the output. don't use the outer loop, use a loop after the nested loop instead int[] numbers doesn't need to be that big, it is currently 100 in size but only needs to be 11 (the grid values run from 1 to 10, so index 10 must be valid). private static void amountOfSpecificNumbers() { int[] numbers = new int[11]; for (int y = 0; y < NUMBER_ROWS; y++) { for (int x = 0; x < NUMBER_COLUMNS; x++) { int value = grid[y][x]; numbers[value]++; } } for (int i = 1; i < numbers.length; i++) { System.out.println(" " + numbers[i] + " " + i + "s" ); } } A little nitpick: Sometimes you are writing int[] array and sometimes int array[]. While both work in Java, I would recommend sticking to one (I personally prefer int[] array) Finally, imagine if you would have the requirement to handle more than one TenXTen grid at a time. Your program would really need TenXTen as an independent class in that case. (Currently, it doesn't need it, but it would be useful). 
Many of your methods are returning void and doing the output inside the method. It is a better idea to return the values required for the output, and do the output outside the method itself. Imagine a TenXTen grid ...who said it has to be 10 x 10 at all times? Consider the name NumberGrid... anyway, consider a class with these methods: void generate() int[] amountOfSpecificNumbers() int[] sumOfColumns() int[] sumOfRows() void showField() Then you would be able to use these methods for example like the following: public static void main(String[] args) { NumberGrid grid = new NumberGrid(20, 10); grid.showField(); System.out.println(Arrays.toString(grid.sumOfColumns())); grid.generate(); } etc... you might want to read up on Java Classes and Objects for that.
{ "domain": "codereview.stackexchange", "id": 11736, "tags": "java, beginner, matrix" }
Does nuclear fusion of light nuclei occur in fire or boiling water?
Question: At the temperature range of ordinary fire or maybe even ordinary boiling water, is the Coulomb potential between light atomic nuclei occasionally overcome to give way to fusion? Basically, how are the nuclear fusion cross sections calculated and are they a function of temperature? Answer: In principle, the answer should be yes. At any given temperature, the particles will have a distribution of speeds. Those in the tail of the distribution might have enough energy to fuse. However, the probability of this event would be extremely low because the number of particles with the required (HIGH!) energy is very low.
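To put a rough number on "extremely low" (a back-of-the-envelope sketch, not a cross-section calculation — the ~100 keV effective barrier and the temperatures below are illustrative assumptions), the Boltzmann factor $e^{-E/k_BT}$ alone tells the story:

```python
# Rough Boltzmann-suppression estimate; the ~100 keV barrier figure and the
# temperatures below are illustrative assumptions, not a real cross section.
K_B = 8.617e-5  # Boltzmann constant, eV/K

def suppression_exponent(barrier_eV, temperature_K):
    """Return E/(k_B T), the exponent in the exp(-E/kT) tail suppression."""
    return barrier_eV / (K_B * temperature_K)

BARRIER = 1.0e5  # ~100 keV: order of magnitude for light-nucleus Coulomb barriers
for label, T in [("boiling water (373 K)", 373.0),
                 ("flame (1500 K)", 1500.0),
                 ("solar core (1.5e7 K)", 1.5e7)]:
    x = suppression_exponent(BARRIER, T)
    print(f"{label}: tail fraction ~ exp(-{x:.3g})")
```

Even at the solar core the bare exponent comes out around 77 — fusion there relies on quantum tunneling (the Gamow factor) and enormous particle numbers. At flame temperatures the exponent is of order $10^5$, so "yes in principle, never in practice" is the right reading.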
{ "domain": "physics.stackexchange", "id": 30194, "tags": "quantum-mechanics, nuclear-physics, plasma-physics, fusion" }
SWAP test and density matrix distinguishability
Question: Let us either be given the density matrix \begin{equation} |\psi\rangle\langle \psi| \otimes |\psi\rangle\langle \psi| , \end{equation} for an $n$ qubit pure state $|\psi \rangle$ or the maximally mixed density matrix \begin{equation} \frac{\mathbb{I}}{2^{2n}}. \end{equation} I am trying to analyze the following algorithm to distinguish between these two cases. We plug the $2n$ qubit state we are given into the circuit of a SWAP test. Then, following the recipe given in the link provided, if the first qubit is $0$, I say that we were given two copies of $|\psi \rangle$, and if it is $1$, we say we were given the maximally mixed state over $2n$ qubits. What is the success probability of this algorithm? Is it the optimal distinguisher for these two states? The optimal measurement ought to be an orthogonal one (as the optimal Helstrom measurement is an orthogonal measurement). How do I see that the SWAP test implements an orthogonal measurement? Answer: First of all, let us compute the probability of success of this algorithm. If you are given the state $|\psi\rangle\langle\psi|\otimes|\psi\rangle\langle\psi|$, the SWAP test will return the state $|0\rangle$ with probability $1$, which is the probability of success of the algorithm in this case. Let us now consider the second case. 
The initial state is: $$\rho_0=\frac{1}{2^{2n}}\sum_{i,j}|0,i,j\rangle\langle0,i,j|$$ The first gate to be applied is: $$\mathbf{H}\otimes \mathbf{I}\otimes\mathbf{I}=\frac{1}{\sqrt{2}}\sum_{a,b,x,y}(-1)^{a\cdot b}|a,x,y\rangle\langle b,x,y|.$$ The resulting state is thus given by: $$\rho_1=\frac{1}{2}\frac{1}{2^{2n}}\sum_{a,i,j,b}|a,i,j\rangle\langle b,i,j|$$ We now apply the $\mathbf{CSWAP}$ gate, whose expression is: $$\mathbf{CSWAP}=\sum_{x,y}|0,x,y\rangle\langle0,x,y|+\sum_{x,y}|1,x,y\rangle\langle1,y,x|$$ The resulting state is: $$\rho_2=\frac{1}{2}\frac{1}{2^{2n}}\sum_{i,j}\left(|0,i,j\rangle\langle0,i,j|+|0,i,j\rangle\langle1,j,i|+|1,j,i\rangle\langle0,i,j|+|1,j,i\rangle\langle1,j,i|\right)$$ Finally, we apply the Hadamard gate on the first qubit once again, which results in the state: $$\rho_3=\frac{1}{4}\frac{1}{2^{2n}}\sum_{i,j}\left(\sum_{a,b}|a,i,j\rangle\langle b,i,j|+\sum_{a,b}(-1)^b|a,i,j\rangle\langle b,j,i|+\sum_{a,b}(-1)^a|a,j,i\rangle\langle b,i,j|+\sum_{a,b}(-1)^{a\oplus b}|a,j,i\rangle\langle b,j,i|\right)$$ We're interested in the diagonal coefficients of $\rho_3$ that can be written as $|0,i,j\rangle\langle0,i,j|$. Summing them would give us the probability of measuring $|0\rangle$. This probability is thus given by: $$\mathbb{P}[|0\rangle]=\frac{1}{4}\frac{1}{2^{2n}}\left(\sum_{i,j}1+\sum_{i}1+\sum_{i}1+\sum_{i,j}1\right)=\frac12+\frac{1}{2^{n+1}}.$$ All in all, this algorithm distinguishes these two states with probability $\frac34-\frac{1}{2^{n+2}}$. Now, let $T$ denote the trace distance between these two states. We know that the optimal probability of distinguishing these states is given by $\frac12(1+T)$. Let $U$ be a quantum gate such that $U|0\rangle=|\psi\rangle$. 
$T$ is then also equal to the trace distance between $\left(U^\dagger\otimes U^\dagger\right)\left(|\psi\rangle\langle\psi|\otimes|\psi\rangle\langle\psi|\right)\left(U\otimes U\right)=|0\rangle\langle0|\otimes|0\rangle\langle0|$ and $\frac{1}{2^{2n}}\left(U^\dagger\otimes U^\dagger\right)\mathbf{I}\left(U\otimes U\right)=\frac{1}{2^{2n}}\mathbf{I}$. $T$ is then easily seen to be: $$T=\frac12\sum_i\left|\lambda_i\right|=\frac12\left(1-\frac{1}{2^{2n}}+\sum_{i=1}^{2^{2n}-1}\frac{1}{2^{2n}}\right)=1-\frac{1}{2^{2n}}$$ which means that the maximal probability of distinguishing these states is $1-\frac{1}{2^{2n+1}}$. Thus, the SWAP test has a sub-optimal probability of success. Intuitively, this is due to the fact that the probability of measuring $|0\rangle$ is always larger than or equal to $\frac12$, which upper-bounds the probability of success with $\frac34$. Note however that this reasoning works assuming you know what $|\psi\rangle$ is. Otherwise, the initial density matrix in the first case is $\frac{1}{2^{n-1}\left(2^n+1\right)}P_{\text{Sym}^2}\left(\mathbb{C}^{2^n}\right)$ as explained in this answer, with $P_{\text{Sym}^2}\left(\mathbb{C}^{2^n}\right)$ being the projector on the symmetric subspace of $\mathbb{C}^{2^n}$ with $2$ copies.
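The probability $\mathbb{P}[|0\rangle]=\frac12+\frac{1}{2^{n+1}}$ derived above can be sanity-checked numerically via the standard SWAP-test identity $\mathbb{P}[|0\rangle]=\frac12\left(1+\operatorname{Tr}(S\rho)\right)$, where $S$ swaps the two $n$-qubit registers. For $\rho=\mathbb{I}/2^{2n}$, $\operatorname{Tr}(S\rho)$ is just the fraction of basis pairs $(i,j)$ fixed by the swap:

```python
def p0_maximally_mixed(n):
    """P[first qubit = 0] in a SWAP test fed the maximally mixed 2n-qubit state.

    Uses P[0] = (1 + Tr(S rho)) / 2 with S the register-swap permutation
    (i, j) -> (j, i); for rho = I / 2^(2n), Tr(S rho) = (# fixed points) / 2^(2n).
    """
    d = 2 ** n
    fixed = sum(1 for i in range(d) for j in range(d) if (j, i) == (i, j))
    return 0.5 * (1 + fixed / (d * d))

for n in range(1, 6):
    assert abs(p0_maximally_mixed(n) - (0.5 + 2.0 ** -(n + 1))) < 1e-12
```

With a uniform prior, the overall success probability then follows as $\frac12\cdot 1+\frac12\left(1-\mathbb{P}[|0\rangle]\right)=\frac34-\frac{1}{2^{n+2}}$, matching the answer.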
{ "domain": "quantumcomputing.stackexchange", "id": 3197, "tags": "quantum-state, quantum-algorithms, quantum-operation, density-matrix" }
Quantum field theory why is it a trace?
Question: So in Peskin and Schroeder, when computing the amplitude of $e^+e^-\rightarrow\mu^+\mu^-$, summing up over spin they write \begin{align}\sum_{s,s'}\bar{v}^{s'}_a(p_2)\gamma^\mu_{ab}u^s_b(p_1)\bar{u}^{s}_c(p_1)\gamma^\nu_{cd}v^{s'}_d(p_2) & =(\not{p}_2-m)_{da}\gamma^\mu_{ab}(\not{p}_1+m)_{bc}\gamma_{cd}^\nu\\ &= \textrm{tr}[(\not{p}_2-m)\gamma^\mu(\not{p}_1+m)\gamma^\nu] \end{align} Why is that a trace? As I understand it, $a$, $b$, $c$, $d$ are the matrix indices, and this is the combination that makes a trace. Can someone please clarify, I can't find the answer anywhere. Answer: I was puzzled by this same thing when I took QFT classes several years back. After thinking about it, the reason is so trivial as to not merit an explanation in the literature, especially Peskin and Schroeder. Look at the LHS of your first equation: $\begin{align}\sum_{s,s'}\bar{v}^{s'}_a(p_2)\gamma^\mu_{ab}u^s_b(p_1)\bar{u}^{s}_c(p_1)\gamma^\nu_{cd}v^{s'}_d(p_2) & =(\not{p}_2-m)_{da}\gamma^\mu_{ab}(\not{p}_1+m)_{bc}\gamma_{cd}^\nu\\ &= \textrm{tr}[(\not{p}_2-m)\gamma^\mu(\not{p}_1+m)\gamma^\nu] \end{align}$ $\bar{v}$ is a 1 X 4 matrix, $\gamma$ is a 4 X 4 matrix and $u$ is a 4 X 1 matrix. The product of these three matrices in this order is a 1 X 1 matrix. But the trace of a 1 X 1 matrix is equal to the matrix element itself! And because of the property of the trace, tr$(A*B*C)$ = tr$(C*A*B)$ = tr$(B*C*A)$. The same reasoning applies to the next three terms.
{ "domain": "physics.stackexchange", "id": 31547, "tags": "homework-and-exercises, quantum-electrodynamics, feynman-diagrams, trace" }
Automatically registering a class using header-only templated classes
Question: I'm trying to reduce the boilerplate of a lot of header-only classes I'm using. Each of these classes must go through a registration step. I want this step to be defined in the same file as the class definition. I came to the following solution (-std=c++17 required): #include <iostream> #include <typeinfo> template<typename T> bool __autoRegisteringFunction() { std::cout << "Registering: " << typeid(T).name() << std::endl; return true; } #define REGISTER_CLASS(TYPE) \ inline static const bool __registered = __autoRegisteringFunction<TYPE>(); class Foo { REGISTER_CLASS(Foo) }; class Bar { REGISTER_CLASS(Bar) }; int main() { return 0; } While doing some research, I realized it was even possible to skip the class name in the macro by being clever (inspired from this library): #include <iostream> #include <typeinfo> template<typename T> bool __autoRegisteringFunction() { std::cout << "Registering: " << typeid(T).name() << std::endl; return true; } template<typename T> struct __AutoRegisteringClass { inline static const bool __registered = __autoRegisteringFunction<T>(); }; #define REGISTER_CLASS() \ const void* __autoRegisteringMethod() const { \ return &__AutoRegisteringClass<decltype(*this)>::registered; \ } class Foo { REGISTER_CLASS() }; class Bar { REGISTER_CLASS() }; int main() { return 0; } Note that I simplified the example. The macro I plan to use needs more parameters, not requiring to pass the class name would both avoid possible typos and reduce repeatability. That's why I'm interested in the 2nd solution. From what I tested it works quite well. I still have a few questions: Is the first solution safe from the "static initialization order fiasco"? Is using a dummy method (never called) for instantiating a template thanks to decltype(*this) a common practice (or a known trick) in C++ meta-programming? Would you consider the given implementations well defined by standards and thus globally safe to use? 
Could the pattern I'm willing to use be somehow simplified? Answer: Questions: Is the first solution safe from the "static initialization order fiasco"? I hate that term. It is not relevant as there are no ordering issues in the code. The "static initialization order problem" happens when a static storage duration object depends on another static storage duration object that is in another compilation unit (and thus it is hard to guarantee the order of initialization without knowing things about the compiler and thus things are not portable easily). Note: There is a simple solution to this problem once you know it exists. Is using a dummy method (never called) for instantiating a template thanks to decltype(*this) a common practice (or a known trick) in C++ meta-programming? Yes. Would you consider the given implementations well defined by standards and thus globally safe to use? Very close: The double underscore breaks some rules. But can be fixed. Not sure if the static inline const is guaranteed to not require declaration, I think that is an optimization. But can be fixed. Could the pattern I'm willing to use be somehow simplified? There is always somebody out there that can come up with something better. Code Review: This function should be inline (it is obviously in a header file). template<typename T> bool __autoRegisteringFunction() { std::cout << "Registering: " << typeid(T).name() << std::endl; return true; } All identifiers with a double underscore are reserved. The rules for underscores at the beginning of identifiers are complex; prefer not to use them at the start (even if you know the rules as everybody else does not and will get it wrong). Again the double underscore. template<typename T> struct __AutoRegisteringClass { inline static const bool __registered = __autoRegisteringFunction<T>(); }; I am assuming the advantage here is that this does not take any space in the object while the original method does. Again the double underscore. 
#define REGISTER_CLASS() \ const void* __autoRegisteringMethod() const { \ return &__AutoRegisteringClass<decltype(*this)>::registered; \ } // ^^^ Forgot double underscore here. No need for return 0; in main. int main() { return 0; }
{ "domain": "codereview.stackexchange", "id": 42605, "tags": "c++, c++17, template-meta-programming" }
Motor controller calibration
Question: I bought 2 brushed motor controllers from China to use with my hobby-weight battle robot (http://www.banggood.com/ESC-Brushed-Speed-Controller-For-RC-Car-Truck-Boat-320A-7_2V-16V-p-915276.html). These are intended for use with my 2 cordless drill motors which will be driving the left and right wheel respectively. The robot will therefore be steered in "tank mode" by varying the speed and direction of rotation of the 2 motors using the two joysticks on my Turnigy 9x transmitter. My question is: I have seen videos on youtube where people calibrate brushless motor controllers (ESCs) using some system of pushing the joystick on a standard transmitter forward and listening to tones and then doing the same for reverse and so on. However when I asked the suppliers about a similar procedure for these brushed controllers, all they could say is that they did not need calibration. The exact words were "It seems that you're talking about transmitter for copters,but this ESC is for RC car or boat. You pull the trigger, it goes forward, you push the trigger, it reverse. And you don't need to calibrate it, just plug it, then it can work." My transmitter is not one of those gun shaped ones used for cars. So am I in trouble with these controllers or should they work correctly out of the box as the supplier seems to be implying? You may fairly ask why have I not just tried this out and the simple answer is that my LIPO charger has not yet arrived and I therefore cannot power anything up as yet. Answer: The calibration in other ESCs is used to compensate for different ranges of the input that different transmitters/receivers produce -- if you push the throttle all the way forward, your system will generate some width of PWM pulses. This might not be identical to full throttle on other system, so you can calibrate to use the full range of the ESC. It should definitely not be critical to calibrate the range.
{ "domain": "robotics.stackexchange", "id": 479, "tags": "motor" }
Non-linear Regression
Question: For example, suppose I have a data set which looks like: [[x,y,z], [1,2,5], [2,3,8], [4,5,14]] It's easy to find the theta parameters from this tiny data set, which is theta = [1,2,0] z = 1*x + 2*y + 0 But what if my data set is non-linear? Suppose: [[x,y,z], [1,2,6], [2,3,15]] If I choose the mapping function to be: z = xy+yy It would return the theta parameters: theta = [1,1,0] So my question is how to choose such a mapping function for data sets which vary over time, as in a recommender system where the user ratings vary with time, in order to reduce the cost. I've recently gone through regularization. Are there any other ideas for reducing the cost? Answer: To answer your first question about non-linear regression: I believe your problem of choosing a mapping function for non-linear regression can be solved by using Support Vector Machines. SVMs can learn non-linear mapping functions in a kernel-induced feature space. What this means is that in SVMs, the basic idea is to map the input data X into some high-dimensional feature space f using a non-linear mapping (kernel) and then do linear regression in this feature space. To learn more about non-linear regression and kernels, you can read this. Secondly, regularization is a technique that is used to solve the over-fitting problem. This usually happens when you use a very dense model for your training set or you train the model for far too many steps. In this case, while the accuracy on your training set is high, the model performs very poorly on unseen data. Hence, when you add regularization, it helps reduce the cost function. Regularization is of two types, L1 and L2; the difference lies in the power of the weight coefficients. These should be enough for your SVM-based models. To reduce overfitting-induced high cost, you can also use BatchNormalization and Dropout algorithms. Hope this helps :)
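To make the question's example concrete: once a feature map such as z = θ₀·(xy) + θ₁·(y²) is fixed, finding θ is ordinary linear least squares on the mapped features. A minimal sketch using plain normal equations (no library; the choice of feature map is the assumption here):

```python
def fit_linear(features, targets):
    """Solve the 2x2 normal equations (F^T F) theta = F^T z exactly."""
    a = sum(f[0] * f[0] for f in features)   # entries of F^T F
    b = sum(f[0] * f[1] for f in features)
    c = sum(f[1] * f[1] for f in features)
    r0 = sum(f[0] * z for f, z in zip(features, targets))  # F^T z
    r1 = sum(f[1] * z for f, z in zip(features, targets))
    det = a * c - b * b
    return ((c * r0 - b * r1) / det, (a * r1 - b * r0) / det)

data = [(1, 2, 6), (2, 3, 15)]                  # (x, y, z) from the question
phi = [(x * y, y * y) for x, y, _ in data]      # chosen non-linear feature map
theta = fit_linear(phi, [z for _, _, z in data])
print(theta)  # -> (1.0, 1.0), i.e. z = x*y + y*y
```

The hard part — which the answer addresses with kernels — is that here the feature map was picked by hand; kernel methods let you work in a rich feature space without choosing the monomials explicitly.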
{ "domain": "datascience.stackexchange", "id": 4336, "tags": "linear-regression, objective-function" }
Delete duplicate entries with lower IDs
Question: I have a function that deletes duplicate entries. The highest ID is kept and the older ones are removed. function: DELETE [tableName] FROM [tableName] INNER JOIN (SELECT * , ROW_NUMBER() OVER (PARTITION BY [fork_id] ORDER BY ID DESC) AS RowNumber FROM [tableName]) Numbered ON [tableName].ID = Numbered.ID WHERE RowNumber > 1 For example, it changes |------|---------|--------| | ID | fork_id | Car | |------|---------|--------| | 1 | 2 | AUDI | <--- removed | 2 | 1 | AUDI | | 3 | 2 | BMW | |------|---------|--------| to |------|---------|--------| | ID | fork_id | Car | |------|---------|--------| | 2 | 1 | AUDI | | 3 | 2 | BMW | |------|---------|--------| The problem with that query is that the execution time limit is exceeded when we have many rows (more than 50k) in the table. I have a primary key on the ID column. On the SQL server, I have a limitation on execution time. A connection can be cut off by the server for a number of reasons: Idle connection longer than 5 minutes. Long running query. Long running open transaction. Excessive resource usage. sources Answer: This would probably be better on DBA. You can use a CTE. An index on fork_id should help. with cte as ( SELECT ROW_NUMBER() OVER (PARTITION BY [fork_id] ORDER BY ID DESC) AS RowNumber FROM [tableName] ) delete from cte WHERE RowNumber > 1 Optimize select * from cte WHERE RowNumber > 1 If that is fast, it is a volume thing and you could delete in batches
{ "domain": "codereview.stackexchange", "id": 26072, "tags": "sql, time-limit-exceeded, sql-server, t-sql" }
Trajectories using Polar Coordinates
Question: Once I asked my teacher how to find the trajectory of any particle that is acted upon by any force (generally). He told me that I couldn't do it as I did not know polar coordinate geometry as of then, but now I've finally realized the effectiveness of polar coordinates and can solve simple polar geometry problems, like finding the polar equations for electric equipotentials or the polar equation of electric field lines, etc. Now I want to know how to use polar coordinates to find the trajectory of any given particle acted upon by a set amount of varying/steady forces, for example the trajectory of a particle kept in the vicinity of two charged particles. I know I begin by selecting the origin. Then I select any assumed trajectory. Then on that trajectory I need to take a small element $dl$ and find its components along the direction of the position vector $\vec{r}$, $\vec{dl}_r$, and one along the direction of increasing angle $\theta$, $\vec{dl}_{\theta}$. Then I need to find the forces along the given directional vectors. Finally I relate $\vec{dl}_r$, $\vec{dl}_{\theta}$, $\vec{F}_r$ and $\vec{F}_{\theta}$, and I get a relation between $\theta$ and $r$ which will be the equation of the trajectory. Are my steps correct? If not, can you guide me to any reference on the net helping me to gain knowledge as to how to go about my problem? Answer: You use a computer --- you pick the initial position and velocity, then you find the force (in x,y coordinates) and therefore the acceleration $a$, then you pick a small timestep $\epsilon$, and you add $\epsilon a$ to $v$, and $\epsilon v$ to $x$, and repeat. Once you find the solution, you make $\epsilon$ smaller until it stops changing, and this is the answer. Your teacher is probably incompetently remembering that for any power-law central force, you can solve this in polar coordinates. This is possible using conservation of angular momentum for a central force, and then integrating the radial motion equation directly. 
This is useless for more complicated forces, like the many charges you describe. It is impossible to find a solution by any method other than computational approximation. To prove this, note that the particle's position can encode data like in a computer, and an arbitrary force can do arbitrary computations on this data, which means that the only way to figure out what it does is to simulate it directly.
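The recipe above — pick a small $\epsilon$, add $\epsilon a$ to $v$ and $\epsilon v$ to $x$, repeat — is Euler time-stepping. A minimal sketch for a unit test mass attracted by a single fixed inverse-square center (all constants set to 1 for illustration; for the two-charge case you would simply sum the two force contributions):

```python
import math

def euler_orbit(steps, eps):
    """Euler-step a trajectory around a fixed attractive 1/r^2 center at origin."""
    x, y = 1.0, 0.0          # start on a circular orbit of radius 1
    vx, vy = 0.0, 1.0        # circular speed for k = m = 1
    for _ in range(steps):
        r = math.hypot(x, y)
        ax, ay = -x / r**3, -y / r**3   # a = -r_hat / r^2
        vx += eps * ax                   # add eps*a to v ...
        vy += eps * ay
        x += eps * vx                    # ... then eps*v to x, and repeat
        y += eps * vy
    return x, y

eps = 1e-3
x, y = euler_orbit(int(2 * math.pi / eps), eps)   # integrate one orbital period
print(math.hypot(x, y))  # remains close to 1.0 for small eps
```

As the answer says, the check is to shrink eps and confirm the trajectory stops changing; a smaller step keeps the numerical orbit closer to the true circle.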
{ "domain": "physics.stackexchange", "id": 3486, "tags": "forces, electrostatics, coordinate-systems" }
Changes in physical properties in homologous series : Solubility
Question: In the text that I'm reading it is stated that: As the molecular mass increases in any homologous series, a gradation in physical properties is seen. This is because the melting points and boiling points increase with increasing molecular mass. Other physical properties such as solubility in a particular solvent also show a similar gradation. My question is regarding the last line; what kind of gradation in solubility is observed? Increase or decrease? Answer: My question is regarding the last line, what kind of gradation in solubility is observed? Increase or decrease? Both Solubility is not one single property, but a collection of related properties. Solubility in water is different from solubility in ethanol, which is different from solubility in benzene. The cause of the gradation of (all of these) properties is a systematic change in intermolecular forces. The number and strength of these forces increase as molecular mass increases, which increases the boiling points and melting points. For solubility, we care about the interplay between the solute and the solvent, so we also need to know the type of intermolecular force (polar vs. nonpolar) in addition to the total strength. For the homologous series of alcohols below, I have listed their water solubilities (mined from UC Davis's ChemWiki and from Wikipedia articles). Notice that as the number of $\ce{CH2}$ units increases, the water solubility decreases. This decrease occurs because the strength of the nonpolar intermolecular forces (London dispersion) increases with each added $\ce{CH2}$, until the nonpolar interactions overpower the polar interactions (dipole-dipole and hydrogen bonding) of the alcohol functional group. Water is a polar solvent, and with each added $\ce{CH2}$, the alcohol molecules begin to have stronger attraction to themselves than to water, and the solubility decreases. 
Solubility of the homologous series of linear n-alkanols in water Methanol $\ce{CH3OH}$ - miscible (infinite) Ethanol $\ce{CH3CH2OH}$ - miscible (infinite) n-propanol $\ce{CH3CH2CH2OH}$ - miscible (infinite) n-butanol $\ce{CH3CH2CH2CH2OH}$ - 80 g / L n-pentanol $\ce{CH3CH2CH2CH2CH2OH}$ - 22 g / L n-hexanol$\ce{CH3CH2CH2CH2CH2CH2OH}$ - 5.9 g / L At the same time, the solubility of this homologous series of alcohols increases in nonpolar solvents like hexane and benzene. Those data are not readily available, but a quick experiment will verify it. The increased amount of nonpolar interactions in the alcohols increases their affinity with nonpolar solvents.
{ "domain": "chemistry.stackexchange", "id": 370, "tags": "organic-chemistry, solubility" }
Find pair of complex numbers with maximal sum
Question: Given two lists of complex numbers, is there an efficient algorithm to choose one element from each list such that the magnitude of their sum is maximal? Answer: Observations: The pairing mechanism creates the Minkowski sum of the two sets. The convex hull of the Minkowski sum is the Minkowski sum of the convex hulls. (See e.g. previously linked Wikipedia article). Given a set of points in the plane, the point furthest from the origin must be a vertex of the convex hull. The convex hull of a set of $n$ points can be found in $O(n \lg n)$ time. (In fact in $O(n \lg h)$ time, where $h$ is the number of points on the convex hull). For two convex polygons P and Q in the plane with $m$ and $n$ vertices, their Minkowski sum is a convex polygon with at most $m + n$ vertices and may be computed in time $O(m + n)$ by a very simple procedure... given in the Wikipedia article. Combining these observations gives an $O(n \lg n)$ algorithm.
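These observations are easy to check numerically. A pure-Python sketch (treating complex numbers as 2-D points, with Andrew's monotone-chain hull, and a brute-force comparison rather than the linear-time Minkowski-sum merge):

```python
import random

def hull(points):
    """Andrew's monotone chain: convex hull vertices of a set of 2-D points."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts
    def half(seq):
        out = []
        for p in seq:
            # pop while the turn out[-2] -> out[-1] -> p is not strictly left
            while len(out) >= 2 and ((out[-1][0] - out[-2][0]) * (p[1] - out[-2][1])
                    - (out[-1][1] - out[-2][1]) * (p[0] - out[-2][0])) <= 0:
                out.pop()
            out.append(p)
        return out[:-1]
    return half(pts) + half(pts[::-1])   # lower hull + upper hull

random.seed(0)
A = [(random.uniform(-5, 5), random.uniform(-5, 5)) for _ in range(200)]
B = [(random.uniform(-5, 5), random.uniform(-5, 5)) for _ in range(200)]

def mag2(p, q):
    """Squared magnitude of the sum p + q."""
    return (p[0] + q[0]) ** 2 + (p[1] + q[1]) ** 2

brute = max(mag2(a, b) for a in A for b in B)
via_hulls = max(mag2(a, b) for a in hull(A) for b in hull(B))
assert abs(brute - via_hulls) < 1e-9   # the farthest pair lies on the hulls
```

The check only confirms that the maximizing pair consists of hull vertices; the full O(n lg n) algorithm would additionally merge the two hulls' edge sequences by angle instead of trying all hull-vertex pairs.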
{ "domain": "cs.stackexchange", "id": 12630, "tags": "algorithms, computational-geometry" }
Why do the diagrams in $\Gamma[\Phi]$ differ from those in $\Phi\Gamma^{\rm int}_{\Phi}[\Phi]$ only by numerical prefactors?
Question: Suppose $W$ is the generator of connected Feynman diagrams in $\Phi^4$ theory. We define $$\Gamma[\Phi]=W[j]-W_jJ,\tag{13.37}$$ where $$W_jJ=\int{dxW_j(x)j(x)}\tag{13.38}$$ and $$ \Phi\equiv\frac{\delta W[j]}{\delta j(x)}.\tag{13.39}$$ Now we define $$\Gamma^{int}[\Phi] \equiv \Gamma[\Phi]-\frac{1}{2} \Phi iG_0^{-1}\Phi, \tag{13.51}$$ where $G_0$ is the bare propagator. The claim is that the diagrams in $\Gamma[\Phi]$differ from those in $\Phi\Gamma^{int}_{\Phi}[\Phi]$ only by numerical prefactors, where $$\Gamma^{int}_{\Phi}[\Phi]=\frac{\delta \Gamma^{int}[\Phi]}{\delta \Phi(x)}\tag{13.41}$$ This is done in Kleinert's Chapter 13: Notes on formal perturbation theory. Why is this claim true? Answer: Kleinert is observing below eq. (13.64) that the Euler vector field$^1$ $$ V~:=~\int \!d^Dx~\Phi(x)\frac{\delta}{\delta \Phi(x)}$$ counts$^2$ the number of $\Phi$-powers in each term of the effective action $\Gamma[\Phi]$. So e.g. $V[\Phi^n]~=~n\Phi^n$, and so forth. -- $^1$Note that Kleinert is using deWitt's condensed notation, cf. e.g. eq. (13.38). $^2$ In the heat of the argument, Kleinert overlooks the quadratic free part $\Gamma_0=\Gamma[\Phi]-\Gamma^{\rm int}[\Phi]$, but that is anyway trivial to account for.
{ "domain": "physics.stackexchange", "id": 57868, "tags": "quantum-field-theory, lagrangian-formalism, feynman-diagrams, 1pi-effective-action" }
Cross compile ROS to ARM (OC8 Module)
Question: Hello, I need to cross compile ROS and, after that, use SLAM. I have an OC8 Modulee (ARM9) with a embedded linux kernel (www.opencontroller.com/modules.html). I've read so much about it, including everything about cross compiling in ros.org. Some idea how can I do that? Really need help. Regards Originally posted by Cássio on ROS Answers with karma: 16 on 2013-08-01 Post score: 0 Answer: I'd suggest looking into the Meta ROS project: https://github.com/bmwcarit/meta-ros Originally posted by tfoote with karma: 58457 on 2014-06-24 This answer was ACCEPTED on the original site Post score: 0
{ "domain": "robotics.stackexchange", "id": 15140, "tags": "slam, navigation, compile, gmapping, slam-gmapping" }
when I only give command 'fit', my class does 'transform' too
Question: I have created 2 classes, first of which is: away_defencePressure_idx = 15 class IterImputer(TransformerMixin): def __init__(self): self.imputer = IterativeImputer(max_iter=10) def fit(self, X, y=None): self.imputer.fit(X) return self def transform(self, X, y=None): imputed = self.imputer.transform(X) X['away_defencePressure'] = imputed[:,away_defencePressure_idx] return X and the second one is home_chanceCreationPassing_idx = 3 class KneighborImputer(TransformerMixin): def __init__(self): self.imputer = KNNImputer(n_neighbors=1) def fit(self, X, y=None): self.imputer.fit(X) return self def transform(self, X, y=None): imputed = self.imputer.transform(X) X['home_chanceCreationPassing'] = imputed[:,home_chanceCreationPassing_idx] return X When I put IterImputer() in a pipeline and fit_transform, the outcome is: ******************** Before Imputing ******************** 7856 49.166667 12154 44.666667 10195 48.333333 18871 57.333333 267 48.833333 Name: home_chanceCreationPassing, dtype: float64 # of null values 70 ******************** After Imputing ******************** 7856 49.166667 12154 44.666667 10195 48.333333 18871 57.333333 267 48.833333 Name: home_chanceCreationPassing, dtype: float64 # of null values 0 It works fine. 
But then if I put the two imputers into one pipeline as follows and fit: p = Pipeline([ ('imputerA', IterImputer()), ('imputerB', KneighborImputer()) ]) X = X_train.copy() p.fit(X) even without transforming display(X.head()) print('# of null values', X.isnull().sum()) the outcome would be like home_buildUpPlaySpeed home_buildUpPlayDribbling home_buildUpPlayPassing home_chanceCreationPassing home_chanceCreationCrossing home_chanceCreationShooting home_defencePressure home_defenceAggression home_defenceTeamWidth away_buildUpPlaySpeed away_buildUpPlayDribbling away_buildUpPlayPassing away_chanceCreationPassing away_chanceCreationCrossing away_chanceCreationShooting away_defencePressure away_defenceAggression away_defenceTeamWidth 7856 50.833333 44.5 37.666667 49.166667 55.000000 48.166667 49.333333 43.000000 53.166667 61.333333 56.0 51.333333 67.000000 58.333333 57.166667 55.000000 47.166667 53.000000 12154 59.333333 69.0 42.666667 44.666667 59.166667 52.333333 40.333333 41.833333 52.666667 47.000000 54.0 41.166667 60.833333 53.833333 54.833333 49.666667 47.500000 56.500000 10195 58.000000 54.0 57.666667 48.333333 53.833333 55.833333 34.833333 60.333333 53.166667 56.333333 41.5 42.333333 52.166667 51.666667 57.166667 46.333333 53.666667 53.333333 18871 61.833333 54.5 58.000000 57.333333 55.000000 49.500000 47.833333 48.000000 57.000000 59.000000 64.0 57.333333 52.500000 63.000000 58.666667 46.500000 47.666667 60.833333 267 49.166667 52.0 46.500000 48.833333 55.833333 47.666667 53.666667 53.833333 54.666667 59.666667 45.0 60.333333 54.666667 58.833333 61.333333 51.500000 57.500000 56.500000 # of null values home_buildUpPlaySpeed 0 home_buildUpPlayDribbling 0 home_buildUpPlayPassing 0 home_chanceCreationPassing 70 home_chanceCreationCrossing 0 home_chanceCreationShooting 0 home_defencePressure 0 home_defenceAggression 0 home_defenceTeamWidth 0 away_buildUpPlaySpeed 0 
away_buildUpPlayDribbling 0 away_buildUpPlayPassing 0 away_chanceCreationPassing 0 away_chanceCreationCrossing 0 away_chanceCreationShooting 0 away_defencePressure 0 away_defenceAggression 0 away_defenceTeamWidth 0 dtype: int64 So the thing is that only by doing 'fit', the second-to-last step is committed! And the last step is committed when I do 'transform'. Does anyone know why such a thing happens? Answer: Imputer fit() : provides statistics for the imputer, i.e., fits the data to the imputer transform() : imputes and fills the missing values fit_transform() : Fit to data, then transform it. Pipeline A Pipeline applies a list of transforms: every intermediate step is fit and then transformed, so that its output can feed the next step; only the final estimator needs to implement just fit. So to answer your question: when you call p.fit, the pipeline has to transform the output of imputerA before it can fit imputerB, which is why the second-to-last step's transform runs during fit. And because your transform writes into the passed DataFrame in place, those changes show up in your original X.
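The mechanism is easy to reproduce without sklearn. A minimal stand-in mimicking Pipeline.fit's logic (fit then transform every intermediate step, bare fit on the last — a sketch of the behavior, not sklearn itself) shows both why the first step's transform runs during fit and why an in-place transform leaks into the caller's data:

```python
class LoggingStep:
    """Toy transformer that records which of its methods get called."""
    def __init__(self, name, log):
        self.name, self.log = name, log
    def fit(self, X):
        self.log.append(f"{self.name}.fit")
        return self
    def transform(self, X):
        self.log.append(f"{self.name}.transform")
        X["imputed_by"] = self.name   # in-place mutation, like the question's code
        return X

def pipeline_fit(steps, X):
    """Mimic sklearn's Pipeline.fit: intermediate steps are fit-and-transformed."""
    for step in steps[:-1]:
        X = step.fit(X).transform(X)  # output feeds the next step
    steps[-1].fit(X)                  # final step: fit only

log = []
X = {"value": [1, 2, 3]}              # stand-in for the DataFrame
pipeline_fit([LoggingStep("A", log), LoggingStep("B", log)], X)
print(log)                 # ['A.fit', 'A.transform', 'B.fit']
print("imputed_by" in X)   # True: the in-place write is visible in the caller's X
```

Returning a fresh copy from transform (instead of mutating the argument) would keep the original X untouched even though the transform still runs.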
{ "domain": "datascience.stackexchange", "id": 10129, "tags": "machine-learning, python, scikit-learn, pipelines" }
Using analog values with Algebraic Normal Form?
Question: Algebraic normal form (ANF) is a way of describing digital circuits made up of AND and XOR gates. Below is an example of an ANF expression which evaluates to true if two or more of its three inputs are true ($\oplus$ being XOR, implicit multiplication being AND):

$x_0x_1 \oplus x_0x_2 \oplus x_1x_2$

When a digital circuit is expressed this way, you can evaluate it as a polynomial, taking 1 or 0 as inputs for the variables, doing a multiplication for an AND gate, an addition for an XOR gate, and taking the final result modulo 2:

$y = x_0x_1 + x_0x_2 + x_1x_2$

You can verify the truth table by plugging in 0s and 1s for the various $x$ parameters and seeing that it comes out to the correct values.

What I'm curious about is: what if we don't use whole numbers? I'm sure that's a well-studied thing, but I haven't been able to find any information about it. It seems sort of like fuzzy logic, but fuzzy logic is a well-defined thing with different operations. Here are some values plugged in and their outputs:

\begin{array}{|c|c|c|c|c|} \hline x_0 & x_1 & x_2 & \text{output} & \text{output} \bmod 2 \\ \hline 0 & 0 & 0 & 0 & 0 \\ 1 & 1 & 0 & 1 & 1 \\ 1 & 1 & 1 & 3 & 1 \\ 0.5 & 0.5 & 0 & 0.25 & 0.25\\ 0.9 & 0.9 & 0.9 & 2.43 & 0.43 \\ 1.0 & 1.0 & 0.5 & 2.0 & 0.0 \\ \hline \end{array}

The analog-valued inputs give output that seems especially wrong in the last two rows. It seems like maybe this just "doesn't work", but it also feels like maybe it does work, or does do something interesting, perhaps with some modifications. Has anyone come across non-digital values used in ANF or similar? Thanks!

Answer: Perhaps the most appropriate way of thinking of algebraic normal form is as the following statement: Every function from $\mathbb{Z}_2^n$ to $\mathbb{Z}_2$ can be represented uniquely as a multilinear polynomial.
Here $\mathbb{Z}_2$ is the field with two elements, and a polynomial is multilinear if all monomials are products of distinct variables (so you can't have $x_i^2$ in a monomial). The upshot is that the modulo 2 operation is not the most natural way of understanding the computation involved in algebraic normal form; rather, addition is done over the field $\mathbb{Z}_2$. It seems rather unnatural to apply the modulo 2 operation to real numbers in this context. One generalization of algebraic normal form to real numbers is the Fourier expansion over the Boolean cube: Every function from $\{-1,1\}^n$ to $\mathbb{R}$ can be represented uniquely as a multilinear polynomial (with real coefficients). Here $\mathbb{R}$ is the field of real numbers. Another generalization is the Hermite expansion in Gaussian space: Every function from $\mathbb{R}^n$ to $\mathbb{R}$ can be approximated arbitrarily well by a polynomial (with real coefficients). Here arbitrarily well means up to $\epsilon$ distance in $L^2$ with respect to Gaussian measure (e.g., see this Wikipedia article).
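As a concrete check of the two representations (a small illustrative script, not part of the answer), here is the majority function from the question both as a GF(2) multilinear polynomial and in its real multilinear (Fourier) form over $\{-1,1\}^3$. Unlike the mod-2 trick, the Fourier form also behaves sensibly at fractional inputs:

```python
from itertools import product

def maj_anf(b0, b1, b2):
    """ANF over GF(2): x0*x1 XOR x0*x2 XOR x1*x2, with bits in {0, 1}."""
    return (b0*b1 + b0*b2 + b1*b2) % 2

def maj_fourier(x0, x1, x2):
    """The unique multilinear polynomial agreeing with majority on {-1, 1}^3."""
    return (x0 + x1 + x2 - x0*x1*x2) / 2

# The two forms agree under the encoding bit b -> (-1)**b = 1 - 2*b
for b0, b1, b2 in product((0, 1), repeat=3):
    signs = tuple(1 - 2*b for b in (b0, b1, b2))
    assert 1 - 2*maj_anf(b0, b1, b2) == maj_fourier(*signs)

# Unlike the mod-2 evaluation, the Fourier form interpolates smoothly:
print(maj_fourier(0.0, 0.0, 0.0))  # 0.0 -- the "undecided" midpoint
```

This is exactly the Fourier expansion over the Boolean cube mentioned above: the analog behavior the question is looking for lives in the multilinear polynomial itself, without any modulo operation.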
{ "domain": "cs.stackexchange", "id": 7858, "tags": "computation-models, digital-circuits" }
FMCW Radar timing of transmission and reception
Question: I have a question regarding the implementation of FMCW radars that seems rather basic to me, but for some reason I am a bit unclear about this.

So I understand that in FMCW radar, the transmitter and receiver are constantly, simultaneously transmitting and receiving chirp signals, respectively. The received signal is then multiplied with a copy of the transmit signal to deramp/dechirp it, and a subsequent FFT on this IF/beat signal yields the range estimates.

Now what I am wondering: is there some sort of synchronization going on between the transmission of the chirp and the deramping in the receiver? Let's say I start transmitting at a certain time instant; does the receiver side start collecting samples at the exact same moment and multiply in the chirp copy after the corresponding number of samples have been collected?

As there will be a time delay to, let's say, a point target in the scene, the received chirp and the reference copy would have a certain offset when they are multiplied together. Then, if I understand correctly, the received signal would be similar to the transmit chirp, but shifted in a sense, with the end of it reappearing at the beginning (coming from the next transmitted and received chirp).

I guess to summarize my question: is the "input buffer" of samples at the receiver side cleared out before the beginning of a new chirp transmission, so that data recording and transmission start at the exact same time? Or is it rather that both transmitting and receiving run freely and a running window of the receive signal is collected and multiplied with the reference? From my understanding, it would matter at what point in time we multiply the reference chirp with the received signal, since this would cause a different delay.

I hope my question makes sense; every tip is appreciated!

Cheers, Lucas

Answer: These are excellent questions, and it shows that you're really trying to dig into the details.
In FMCW, or most radar systems for that matter, the time of transmission must be known so that you can decide at what time to open the receiver to determine range. In the case of FMCW, theoretically you transmit and receive at the exact same time. In the real world, there might be a slight delay. On top of that, some of the first few samples may have transients from filtering, and are thrown away.

"Let's say I start transmitting at a certain time instant; does the receiver side start collecting samples at the exact same moment and multiply in the chirp copy after the corresponding number of samples have been collected?"

This is essentially what you want, given the caveats mentioned above. You transmit at some time $t_0$, wait for some period which is usually close to the chirp length $T$, then multiply the signal by the reference to yield the beat frequencies after the DFT.

"Is the 'input buffer' of samples at the receiver side cleared out before the beginning of a new chirp transmission, so that data recording and transmission start at the exact same time? Or is it rather that both transmitting and receiving run freely and a running window of the receive signal is collected and multiplied with the reference?"

This depends. If you want to perform Doppler processing (or simply coherent integration), you commit enough memory to fit the $N$ pulses you want to collect. However, for every chirp that is generated and transmitted, the only real time that matters is the difference between the time we transmit, $t_0$, and the end of the listening period, which is roughly $t_0 + T$. The absolute value of $t_0$ is not all that important; it can be zero or whatever timestamp the system may give, since it's the delay that matters. Whether or not the transmitter and receiver run freely is really a design choice and comes down to the technology.
However, in an actual implementation, the decision to transmit and receive a chirp is discrete, meaning that each chirp is commanded separately as many times as desired. This is really semantics, because you could also design a system that is always running, where you make the decision when to actually sample the return signals. At the end of the day, it all ends up being the same thing.
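A minimal complex-baseband sketch of the dechirp step the answer describes (parameters invented for illustration; edge effects from the first $\tau_d$ seconds are ignored, i.e. $\tau_d \ll T$): multiplying the transmit chirp by the conjugate of a delayed copy yields a tone at the beat frequency $f_b = S\,\tau_d$, where $S$ is the chirp slope.

```python
import numpy as np

fs = 1e6          # sample rate (Hz)
T = 1e-3          # chirp duration (s) -> 1000 samples
S = 2e8           # chirp slope (Hz/s)
td = 50e-6        # round-trip delay to the target (s)

t = np.arange(0, T, 1/fs)
tx = np.exp(1j * np.pi * S * t**2)           # transmitted chirp
rx = np.exp(1j * np.pi * S * (t - td)**2)    # delayed echo (full overlap assumed)

beat = tx * np.conj(rx)                      # dechirp / deramp
spectrum = np.abs(np.fft.fft(beat))
f_beat = np.argmax(spectrum) * fs / len(t)

print(f_beat)  # 10000.0 Hz, matching S * td
```

If the reference multiplication were offset in time, that offset would simply add to $\tau_d$ and shift the beat frequency, which is exactly why the synchronization asked about in the question matters.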
{ "domain": "dsp.stackexchange", "id": 12068, "tags": "discrete-signals, signal-analysis, frequency-spectrum, continuous-signals, radar" }
Problem connecting to docker container using gzclient
Question: I'm trying to connect to gzserver running in a Docker container and I'm running into trouble starting gzclient. I can run the demo pendulum simulation from Gazebo's Docker Hub page, and I can get the logs just fine and play them back. The problem is connecting to gzserver from the host machine. The strange thing is, I can start gzserver just fine, but I can't seem to start up gzclient. It just hangs on startup and the splash screen never goes away. Usually if I leave it for ~10 minutes, it will exit with the error below.

With the container running, port 11345 published, and $GAZEBO_MASTER_URI=localhost:11345, gazebo will not start because it detects a conflict with the other server.

$ gazebo --verbose debug:=true
Gazebo multi-robot simulator, version 8.0.0
Copyright (C) 2012 Open Source Robotics Foundation.
Released under the Apache 2 License.
http://gazebosim.org

[Msg] Waiting for master.
Gazebo multi-robot simulator, version 8.0.0
Copyright (C) 2012 Open Source Robotics Foundation.
Released under the Apache 2 License.
http://gazebosim.org

[Err] [Master.cc:96] EXCEPTION: Unable to start server[bind: Address already in use]. There is probably another Gazebo process running.
[Err] [Master.cc:96] EXCEPTION: Unable to start server[bind: Address already in use]. There is probably another Gazebo process running.

When you stop the Docker container, gazebo and gzserver will start.

$ gazebo --verbose --play ./logs/log/*/gzserver/state.log debug:=true
Gazebo multi-robot simulator, version 8.0.0
Copyright (C) 2012 Open Source Robotics Foundation.
Released under the Apache 2 License.
http://gazebosim.org

[Msg] Waiting for master.
Gazebo multi-robot simulator, version 8.0.0
Copyright (C) 2012 Open Source Robotics Foundation.
Released under the Apache 2 License.
http://gazebosim.org

[Msg] Waiting for master.
[Msg] Connected to gazebo master @ http://127.0.0.1:11345
[Msg] Publicized address: 192.168.0.5
[Msg] Log playback:
Log Version: 1.0
Gazebo Version: 8.0.0
Random Seed: 3419886574
Log Start Time: 0 388000000
Log End Time: 10 741000000
[Msg] Connected to gazebo master @ http://127.0.0.1:11345
[Msg] Publicized address: 192.168.0.5

However, if you try to run gzclient while the Docker container is up, you get the following result:

$ gzclient --verbose debug:=true
Gazebo multi-robot simulator, version 8.0.0
Copyright (C) 2012 Open Source Robotics Foundation.
Released under the Apache 2 License.
http://gazebosim.org

[Msg] Waiting for master.
[Msg] Connected to gazebo master @ http://127.0.0.1:11345
[Msg] Publicized address: 192.168.0.5
[Wrn] [Publisher.cc:141] Queue limit reached for topic /gazebo/default/user_camera/pose, deleting message. This warning is printed only once.

And then after ~10 minutes:

libc++abi.dylib: terminating with uncaught exception of type boost::exception_detail::clone_impl<boost::exception_detail::error_info_injector<boost::lock_error> >: boost: mutex lock failed in pthread_mutex_lock: Invalid argument
Abort trap: 6

I've tried every viable combination of the environment variables I know; this one is the only one where gzclient doesn't exit with an error code. I've tried every trick with Docker that I know, and it isn't a problem with published ports: I can access static webpages from containers on the same Docker network just fine. gzserver is running fine, because I can run simulations in the container where it is running. It's a matter of not being able to connect to the server from the host. I'm hoping someone has experience with this. Any suggestions?

Originally posted by ajthor on Gazebo Answers with karma: 11 on 2017-03-24
Post score: 1

Original comments
Comment by sloretz on 2017-03-30: What's the command you're using to create/start the docker container?
Comment by jetdillo on 2017-04-02: Are you running gzserver and gzclient on the same machine or different ones? There may be port-forwarding and IP routing issues that need to be dealt with if you're not on the same machine (even if the remote machine is on the same subnet as the one running the container).

Answer: Could you publish the output of the container when you run gzserver --verbose? I did a quick test and I managed to connect my gzclient to the gzserver running in the container. In particular, I'm interested in the line that says "Publicized address:". In my case:

[Msg] Waiting for master.
[Msg] Connected to gazebo master @ http://127.0.0.1:11345
[Msg] Publicized address: 10.0.0.2

I connected to the gzserver using:

GAZEBO_MASTER_URI=http://10.0.0.2:11345 gzclient

Originally posted by Carlos Agüero with karma: 626 on 2017-03-28
This answer was ACCEPTED on the original site
Post score: 0

Original comments
Comment by ajthor on 2017-03-28: The output from starting gzserver is up there in the second block.

Gazebo multi-robot simulator, version 8.0.0
Copyright (C) 2012 Open Source Robotics Foundation.
Released under the Apache 2 License.
http://gazebosim.org

[Msg] Waiting for master.
[Msg] Connected to gazebo master @ http://127.0.0.1:11345
[Msg] Publicized address: 192.168.0.5

And changing the server to point to the host's IP, 192.168.0.5, doesn't help, either.
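The accepted answer boils down to pointing gzclient at the address the server actually publicizes, not at localhost. As an illustrative sketch of that workflow (the image tag and resulting IP are assumptions for illustration, not commands taken from the thread):

```shell
# Start the server container (image tag assumed; adjust to your setup)
docker run -d --name gzserver gazebo:gzserver8

# Ask Docker which IP the container got on its bridge network
docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' gzserver
# e.g. 172.17.0.2

# Point gzclient at that address rather than at localhost
GAZEBO_MASTER_URI=http://172.17.0.2:11345 gzclient --verbose
```

Note that Gazebo's transport layer may use additional dynamically assigned ports beyond 11345, so publishing only 11345 with -p may not be sufficient; on Linux, running the container with --net=host sidesteps that issue entirely.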
{ "domain": "robotics.stackexchange", "id": 4069, "tags": "docker" }
Expectation and autocorrelation for modulated sinusoid
Question: Given $$ Y(t) = A X(t) \cos(\omega t + \phi) $$ where $X(t)$ is a zero-mean WSS (wide-sense stationary) process and $\phi \sim \mathrm{Unif}(0, 2\pi)$. Suppose $X(t)$ and $\phi$ are independent. I want to compute the mean and autocorrelation functions of $Y(t)$.

My attempt is (using the independence of $X(t)$ and $\phi$, and $E(X(t)) = 0$): $$ E(Y(t))= E(A X(t) \cos(\omega t + \phi)) = A \cdot E(X(t)) \cdot E(\cos(\omega t + \phi)) = 0$$

Similarly we have $$ R_Y(t,t+\tau)= E(A X(t) \cos(\omega t + \phi) \cdot A X(t+\tau) \cos(\omega (t + \tau) + \phi)) = 0$$

Is this a correct use of independence? If not, why?

Answer: Your use of independence for computing $E[Y(t)]$ is correct. However, for the autocorrelation you get $$\begin{align}E[Y(t)Y(t+\tau)]&=A^2E[X(t)X(t+\tau)\cos(\omega t+\phi)\cos(\omega(t+\tau)+\phi)]\\&=A^2E[X(t)X(t+\tau)]\cdot E[\cos(\omega t+\phi)\cos(\omega(t+\tau)+\phi)]\\&=A^2R_X(\tau)E[\cos(\omega t+\phi)\cos(\omega(t+\tau)+\phi)]\tag{1}\end{align}$$

Note that in general $X(t)$ and $X(t+\tau)$ are not independent, so $E[X(t)X(t+\tau)]\neq E[X(t)]E[X(t+\tau)]$. You can compute the expectation in $(1)$ by using a trigonometric identity to write the product of the two cosines as a sum of two cosines. If you do it right, it should turn out that $E[Y(t)Y(t+\tau)]$ only depends on $\tau$ but not on $t$.
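A quick numerical check of the remaining expectation (this completion is not part of the original answer): using $\cos a\cos b=\tfrac12[\cos(a-b)+\cos(a+b)]$, the $\cos(\omega(2t+\tau)+2\phi)$ term averages to zero over $\phi\sim\mathrm{Unif}(0,2\pi)$, leaving $\tfrac12\cos(\omega\tau)$ and hence $R_Y(\tau)=\tfrac{A^2}{2}R_X(\tau)\cos(\omega\tau)$.

```python
import numpy as np

w, tau = 2.0, 0.3
phi = np.linspace(0, 2*np.pi, 10_000, endpoint=False)  # uniform phase grid

def avg_over_phase(t):
    """E[cos(wt + phi) cos(w(t + tau) + phi)] averaged over uniform phi."""
    return np.mean(np.cos(w*t + phi) * np.cos(w*(t + tau) + phi))

# Matches 0.5*cos(w*tau) and is independent of t, as the answer predicts
for t in (0.0, 0.7, 5.1):
    assert abs(avg_over_phase(t) - 0.5*np.cos(w*tau)) < 1e-9
```

The grid average is exact here because the oscillating term sums to zero over any uniform sampling of a full period.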
{ "domain": "dsp.stackexchange", "id": 11900, "tags": "modulation, statistics, random-process, stochastic, probability" }
Why is eye accommodation necessary?
Question: Why is eye accommodation necessary when an infinite number of light rays come from a specific point of an object? Couldn't we use any pair of rays we need, so that the curvature of the lens need not be changed?

Answer: Your eyeball is a screen: it will record (send a message to the brain about) each and every ray of light that hits it. Ideally, the situation you describe is identical to one where we do not need a lens at all. If we do add a lens, all it does is provide a larger field of view; we could still get by without it.

Now imagine a situation where rays were coming randomly from every direction and hitting every part of your retina. All you would see is blurred white light. (This is actually a medical condition; see Aphakia.) It is similar to what happens if you try to use a projector in a very brightly lit room or outdoors on a sunny day.

Now consider the opposite extreme, where we have a pinhole camera. Although we get a sharp image, it is very faintly lit, and our retina isn't sensitive enough to pick up all the information from this tiny amount of light. (Try making a fist with a tiny hole and looking through it: the world is still visible, but very faintly lit, almost dark.)

So what the accommodation of the eye actually does is:
1. Prevent light from places other than where we want to focus from reaching our eyes.
2. Make sure that whatever light is there is all focused where it is supposed to be, ensuring the brightest image possible.
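The answer above argues mostly from aperture and brightness; the focusing half of the argument can be made concrete with the thin-lens equation. Since the lens-to-retina distance $d_i$ is fixed by the eyeball's geometry, $1/f = 1/d_o + 1/d_i$ forces the focal length $f$ to change as the object distance $d_o$ changes, and that change is accommodation. (A back-of-envelope sketch; the 17 mm image distance is an assumed textbook value for a simplified reduced eye, not a figure from the answer.)

```python
def required_focal_length_mm(d_object_mm, d_image_mm=17.0):
    """Thin-lens equation 1/f = 1/d_o + 1/d_i, solved for f (all in mm)."""
    return 1.0 / (1.0/d_object_mm + 1.0/d_image_mm)

f_far = required_focal_length_mm(float('inf'))  # distant object: f = 17.0 mm
f_near = required_focal_length_mm(250.0)        # reading distance ~25 cm: f ~ 15.9 mm
print(f_far, f_near)  # the lens must shorten its focal length for near objects
```

With a fixed-curvature lens, only one object distance would land in focus on the retina; everything else would blur, which is exactly why accommodation is necessary.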
{ "domain": "physics.stackexchange", "id": 43678, "tags": "optics, lenses, vision" }
2-player dice game
Question: This is a game for two users who roll 2 dice 5 times. If the total of the dice is even, the player gains 10 points; if it is odd, they lose 5. If there is a tie after the five rounds, the users have to roll one die each to determine the winner.

###### IMPORTING RANDOM AND TIME & DEFINING VARIABLES ######

import random
import time

i = 0
Player1Points = 0
Player2Points = 0
Player1Tiebreaker = 0
Player2Tiebreaker = 0
Winner_Points = 0

###### LOGIN CODE ######
### This sets logged in to False, and then makes sure the username and password are correct before allowing them to continue ###

logged_in1 = False
logged_in2 = False

while logged_in1 == False:
    username = input('What is your username? ')
    password = input('What is your password? ')
    if username == 'User1' or username == 'User2' or username == 'User3' or username == 'User4' or username == 'User5':
        if password == 'password':
            print('Welcome, ', username, ' you have been successfully logged in.')
            logged_in1 = True
            user1 = username
        else:
            print('Incorrect password, try again')
    else:
        print('Incorrect username, try again')

while logged_in2 == False:
    username = input('What is your username? ')
    password = input('What is your password? ')
    if username == 'User1' or username == 'User2' or username == 'User3' or username == 'User4' or username == 'User5':
        if password == 'password':
            print('Welcome, ', username, ' you have been successfully logged in.')
            logged_in2 = True
            user2 = username
        else:
            print('Incorrect password, try again')
    else:
        print('Incorrect username, try again')

###### DEFINING ROLL ######
### Makes the dice roll for the player and works out the total for that roll ###

def roll():
    points = 0
    die1 = random.randint(1,6)
    die2 = random.randint(1,6)
    dietotal = die1 + die2
    points = points + dietotal
    if dietotal % 2 == 0:
        points = points + 10
    else:
        points = points - 5
    if die1 == die2:
        die3 = random.randint(1,6)
        points = points + die3
    return(points)

###### DICE ROLL ######
### This rolls the dice 5 times for the players, and then adds up the total. If the scores are equal, it starts a tie breaker and determines the winner off that ###

for i in range(1,5):
    Player1Points += roll()
    print('After this round ', user1, 'you now have: ', Player1Points, ' Points')
    time.sleep(1)
    Player2Points += roll()
    print('After this round ', user2, 'you now have: ', Player2Points, ' Points')
    time.sleep(1)

if Player1Points == Player2Points:
    while Player1Tiebreaker == Player2Tiebreaker:
        Player1Tiebreaker = random.randint(1,6)
        Player2Tiebreaker = random.randint(1,6)
    if Player1Tiebreaker > Player2Tiebreaker:
        Player2Points = 0
    elif Player2Tiebreaker > Player1Tiebreaker:
        Player1Points = 0

###### WORKING OUT THE WINNER ######
### This checks which score is bigger, then creates a tuple for my leaderboard code (gotten off Stack Overflow) ###

if Player1Points > Player2Points:
    Winner_Points = Player1Points
    winner_User = user1
    winner = (Winner_Points, user1)
elif Player2Points > Player1Points:
    Winner_Points = Player2Points
    winner = (Winner_Points, user2)
    winner_User = user2

print('Well done, ', winner_User, ' you won with ', Winner_Points, ' Points')

###### CODE TO UPLOAD ALL SCORES TO A FILE ######
### This will store the winner's username and score in a text file ###

winner = (Winner_Points, ',', winner_User)
f = open('Winner.txt', 'a')
f.write(''.join(winner))
f.write('\n')
f.close()

###### CODE TO LOAD, UPDATE AND SORT LEADERBOARD ######
### This loads the leaderboard into an array, then compares the scores just gotten and replaces it ###

f = open('Leaderboard.txt', 'r')
leaderboard = [line.replace('\n','') for line in f.readlines()]
f.close()

for idx, item in enumerate(leaderboard):
    if item.split(', ')[1] == winner[1] and int(item.split(', ')[0]) < int(winner[0]):
        leaderboard[idx] = '{}, {}'.format(winner[0], winner[1])
    else:
        pass

### This sorts the leaderboard in reverse, and then rewrites it ###

leaderboard.sort(reverse=True)
with open('Leaderboard.txt', 'w') as f:
    for item in leaderboard:
        f.write("%s\n" % item)

This was for my NEA task in computer science, which I have now finished; if anyone has suggestions on how I could have made it better, they will be greatly appreciated. So, please suggest how I can improve it!

Answer:

Login

Make a function login and move all the code to do with logged_in1 into it.

Use while not logged_in1 rather than while logged_in1 == False.

Rather than using val == 'a' or val == 'b' you can use val in ('a', 'b').

Rather than nesting ifs you can use a guard clause:

# bad
if username in ('User1', 'User2', 'User3', 'User4', 'User5'):
    # code
else:
    print('Incorrect username, try again')

# good
if username not in ('User1', 'User2', 'User3', 'User4', 'User5'):
    print('Incorrect username, try again')
    continue
# code

You can use str.format or f-strings (3.6+) to make your prints easier to read:

print('Welcome, {} you have been successfully logged in.'.format(username))
print(f'Welcome, {username} you have been successfully logged in.')

You can use while True and break from the loop if you log in successfully. Alternately, as you have a function, you can return out of the function.

This changes your login code to:

def login():
    while True:
        username = input('What is your username? ')
        password = input('What is your password? ')
        if username not in ('User1', 'User2', 'User3', 'User4', 'User5'):
            print('Incorrect username, try again')
            continue
        if password != 'password':
            print('Incorrect password, try again')
            continue
        print(f'Welcome, {username} you have been successfully logged in.')
        return username

user1 = login()
user2 = login()

Roll

Reduce the amount of empty lines in roll.

You can use a ternary statement to reduce the number of lines:

# from
if (die1 + die2) % 2 == 0:
    change = 10
else:
    change = -5

# to
change = 10 if (die1 + die2) % 2 == 0 else -5

You can calculate points in one go, so it's easier to see what it is.

It's easier to read points += 1, which is the same as points = points + 1.

It's common practice to put a space after commas in function calls.

return is a keyword, not a function, so you should remove the brackets.

This changes roll to:

def roll():
    die1 = random.randint(1, 6)
    die2 = random.randint(1, 6)
    change = 10 if (die1 + die2) % 2 == 0 else -5
    points = die1 + die2 + change
    if die1 == die2:
        points += random.randint(1, 6)
    return points

Game

It's common in Python to use snake_case, so I suggest changing Player1Points to player1_points.

Note that range(1, 5) only loops four times; since the game is five rounds, use range(5).

I'd change the code so that it returns, for both players, a tuple of the points scored and whether they won.

Winner

It'd be easier to build this code in a main function, so I'd move the calls to login and game there.

It'd be simpler if you focus on just making a winner tuple, rather than separate winner_user and winner_points variables:

def main():
    user1 = login()
    user2 = login()
    (player1, player1_win), (player2, player2_win) = game(user1, user2)
    if player1_win:
        winner = (player1, user1)
    else:
        winner = (player2, user2)
    print(f'Well done, {winner[1]} you won with {winner[0]} Points')

Winner and Scoreboard

It'd be easier if you used an f-string to format the line to write to the file.

You should use with when using files.

You should add some more functions to get, mutate and write the leaderboard.

You should add an if __name__ == '__main__': guard to ensure your code doesn't run when you don't want it to.

This gets the following code:

import random
import time

def login():
    while True:
        username = input('What is your username? ')
        password = input('What is your password? ')
        if username not in ('User1', 'User2', 'User3', 'User4', 'User5'):
            print('Incorrect username, try again')
            continue
        if password != 'password':
            print('Incorrect password, try again')
            continue
        print(f'Welcome, {username} you have been successfully logged in.')
        return username

def roll():
    die1 = random.randint(1, 6)
    die2 = random.randint(1, 6)
    change = 10 if (die1 + die2) % 2 == 0 else -5
    points = die1 + die2 + change
    if die1 == die2:
        points += random.randint(1, 6)
    return points

def game(user1, user2):
    player1_points = 0
    player2_points = 0
    for _ in range(5):
        player1_points += roll()
        print(f'After this round {user1} you now have: {player1_points} Points')
        time.sleep(1)
        player2_points += roll()
        print(f'After this round {user2} you now have: {player2_points} Points')
        time.sleep(1)
    player1_tiebreaker = 0
    player2_tiebreaker = 0
    if player1_points == player2_points:
        while player1_tiebreaker == player2_tiebreaker:
            player1_tiebreaker = random.randint(1, 6)
            player2_tiebreaker = random.randint(1, 6)
    player1_win = (player1_points + player1_tiebreaker) > (player2_points + player2_tiebreaker)
    return (player1_points, player1_win), (player2_points, not player1_win)

def add_winner(winner):
    with open('Winner.txt', 'a') as f:
        f.write(f'{winner[0]},{winner[1]}\n')

def get_leaderboard():
    with open('Leaderboard.txt', 'r') as f:
        return [line.replace('\n', '') for line in f.readlines()]

def update_leaderboard(leaderboard, winner):
    for idx, item in enumerate(leaderboard):
        if item.split(', ')[1] == winner[1] and int(item.split(', ')[0]) < int(winner[0]):
            leaderboard[idx] = '{}, {}'.format(winner[0], winner[1])
    leaderboard.sort(reverse=True)

def save_leaderboard(leaderboard):
    with open('Leaderboard.txt', 'w') as f:
        for item in leaderboard:
            f.write(f'{item}\n')

def main():
    user1 = login()
    user2 = login()
    (player1, player1_win), (player2, player2_win) = game(user1, user2)
    if player1_win:
        winner = (player1, user1)
    else:
        winner = (player2, user2)
    print(f'Well done, {winner[1]} you won with {winner[0]} Points')
    add_winner(winner)
    leaderboard = get_leaderboard()
    update_leaderboard(leaderboard, winner)
    save_leaderboard(leaderboard)

if __name__ == '__main__':
    main()
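One further check the review doesn't mention: roll() is a pure function of the RNG, so it is easy to property-test. From the rules, the worst case is an odd total of 3 (3 - 5 = -2) and the best case is double sixes plus a 6 on the bonus die (12 + 10 + 6 = 28), so every result must land in [-2, 28]. The function is restated here so the snippet is self-contained:

```python
import random

def roll():
    die1 = random.randint(1, 6)
    die2 = random.randint(1, 6)
    change = 10 if (die1 + die2) % 2 == 0 else -5
    points = die1 + die2 + change
    if die1 == die2:
        points += random.randint(1, 6)
    return points

random.seed(0)  # deterministic run for testing
results = [roll() for _ in range(10_000)]
assert all(-2 <= r <= 28 for r in results)  # derived bounds always hold
assert min(results) == -2                   # worst case is reachable
assert max(results) == 28                   # best case is reachable
```

Seeding the RNG this way also makes the game function itself testable without mocking.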
{ "domain": "codereview.stackexchange", "id": 32280, "tags": "python, python-3.x, game, dice" }