List all checked boxes from webpage
Question: This is part of a Tampermonkey script that generates a box in the top right of a webpage and generates a list of checked boxes from the specified list selector (list chosen through buttons). The script works as expected; however, I am not a web guy and do not pretend to be one. My main source of contention is how poorly designed the UI is. Any advice on improving the code and UI would be much appreciated. var zNode = document.createElement ('div'); zNode.innerHTML = '<h3>Create List of Items</h3><h6><b>Not Selected<span></span>Selected</b></h6>' + '<button id="hospNotSelectBtn" type="button">Hospitals</button><span></span>' + '<button id="hospSelectBtn" type="button">Hospitals</button><br>' + '<button id="deptNotSelectBtn" type="button">Departments</button><span></span>' + '<button id="deptSelectBtn" type="button">Departments</button><br>' + '<button id="jobNotSelectBtn" type="button">Job Titles</button><span></span>' + '<button id="jobSelectBtn" type="button">Job Titles</button><br>' + '<button id="jobFunctionNotSelectBtn" type="button">Job Functions</button><span></span>' + '<button id="jobFunctionSelectBtn" type="button">Job Functions</button>' ; zNode.setAttribute ('id', 'myContainer'); document.body.appendChild (zNode); GM_addStyle ( multilineStr ( function () {/*! #myContainer { position: absolute; top: 0; right: 0; font-size: 20px; color: white; background: green; border: 3px outset black; margin: 2px; padding: 2px 2px; text-align: center; } #myContainer span { margin: 0 10px; } */} ) ); //Function provided by Brock Adams of SE function multilineStr (dummyFunc) { var str = dummyFunc.toString (); str = str.replace (/^[^\/]+\/\*!?/, '') // Strip function () { /*! .replace (/\s*\*\/\s*\}\s*$/, '') // Strip */ } .replace (/\/\/.+$/gm, '') // Double-slash comments wreck CSS. Strip them. ; return str; } Answer: In order to simplify the user interface I switched to using a drop down list. 
It has minimized the amount of visual interference the original script had on the webpage and streamlined the code. var zNode = document.createElement ('div'); zNode.innerHTML = `<h5>Selection Report</h5><select name='selection' id='numbers'> <option value="Choose" selected="selected">Choose Option</option> <option value="#ctl00_CPH_lsD_pnlListSelector input:checked">Departments Selected</option> <option value="#ctl00_CPH_lsD_pnlListSelector input:not(:checked)">Departments Not Selected</option> <option value="#ctl00_CPH_lstSelectorPosition3_pnlListSelector input:checked">Hospitals Selected</option> <option value="#ctl00_CPH_lstSelectorPosition3_pnlListSelector input:not(:checked)">Hospitals Not Selected</option> <option value="#ctl00_CPH_lsJT_pnlListSelector input:checked">Job Titles Selected</option> <option value="#ctl00_CPH_lsJT_pnlListSelector input:not(:checked)">Job Titles Not Selected</option> <option value="#ctl00_CPH_lsJF_pnlListSelector input:checked">Job Functions Selected</option> <option value="#ctl00_CPH_lsJF_pnlListSelector input:not(:checked)">Job Functions Not Selected</option> </select>`; zNode.setAttribute ('id', 'myContainer'); document.body.appendChild (zNode); GM_addStyle ( ` #myContainer { position: absolute; top: 0; right: 0; font-size: 20px; color: white; background: green; border: 3px outset black; margin: 2px; padding: 2px 2px; text-align: center; } #myContainer span { margin: 0 10px; } #hospNotSelectBtn { cursor: pointer; } #hospSelectBtn { cursor: pointer; } #deptNotSelectBtn { cursor: pointer; } #deptSelectBtn { cursor: pointer; } #jobSelectBtn { cursor: pointer; } #jobNotSelectBtn { cursor: pointer; } #jobFunctionSelectBtn { cursor: pointer; } #jobFunctionNotSelectBtn { cursor: pointer; } #myAlertBox { font-size: small; background: white; border: 5px solid green; padding: 4px; position: absolute; top: 280px; right: 80px; max-width: 300px; white-space:pre-wrap; }`);
{ "domain": "codereview.stackexchange", "id": 27403, "tags": "javascript, html, css" }
Create a simple C++ client Application to control KUKA's Robot-arm LBR iiwa via FRI
Question: Until now I have been programming the robot using Java on KUKA's IDE "KUKA Sunrise.Workbench"; what I want to do is control the robot arm via my C++.Net application (I would use a camera or Kinect to get commands). I'm reading the documents provided by Kuka, but as I'm a bit in a hurry, I want to understand how a C++ client application (running on my laptop) can send/receive information to/from the robot's controller "KUKA Sunrise Cabinet" (running the server application) via FRI. I still have issues grasping the whole mechanism. A simple application (Server/Client) source code with explanation (or a schematic) would be more than helpful. Answer: I have a library called grl which integrates control of a KUKA iiwa in C++. Right now the most reliable mechanism I've found for control is to receive state over FRI, then send state via Java. All the tools necessary to do this are integrated into grl. While I've been able to receive state over FRI nicely, sending FRI commands so the robot drives reliably has proven more complicated. I'm close to a working implementation and I have a few simple test applications that move a single joint correctly. Once the bugs are worked out it should be very easy to use, and I'm hopeful it will work well. For a specific function controlling FRI see KukaFRIClientDataDriver.cpp. Unfortunately I've found the direct API KUKA provides to be a bit difficult to use as well, so I'm implementing functions that communicate over the underlying protobuf network messages and UDP. While this isn't a 100% answer, it is a solution that is 90% of the way to completion. Update: FRI based control using grl is now working with the Sunrise Connectivity Suite 1.7.
Here is an example of how to use it in the simplest case of KukaFRIClientDataDriverTest.cpp: // Library includes #include <string> #include <ostream> #include <iostream> #include <memory> #include <thread> #include "grl/KukaFRI.hpp" #include "grl/KukaFriClientData.hpp" #include <boost/log/trivial.hpp> #include <cstdlib> #include <cstring> #include <boost/asio.hpp> #include <vector> using boost::asio::ip::udp; #include <chrono> /// @see https://stackoverflow.com/questions/2808398/easily-measure-elapsed-time template<typename TimeT = std::chrono::milliseconds> struct periodic { periodic():start(std::chrono::system_clock::now()){} template<typename F, typename ...Args> typename TimeT::rep execution(F func, Args&&... args) { auto duration = std::chrono::duration_cast< TimeT> (std::chrono::system_clock::now() - start); auto count = duration.count(); if(count > previous_count) func(std::forward<Args>(args)...); previous_count = count; return count; } std::chrono::time_point<std::chrono::system_clock> start; std::size_t previous_count = 0; }; enum { max_length = 1024 }; int main(int argc, char* argv[]) { periodic<> callIfMinPeriodPassed; try { std::string localhost("192.170.10.100"); std::string localport("30200"); std::string remotehost("192.170.10.2"); std::string remoteport("30200"); std::cout << "argc: " << argc << "\n"; if (argc !=5 && argc !=1) { std::cerr << "Usage: " << argv[0] << " <localip> <localport> <remoteip> <remoteport>\n"; return 1; } if(argc ==5){ localhost = std::string(argv[1]); localport = std::string(argv[2]); remotehost = std::string(argv[3]); remoteport = std::string(argv[4]); } std::cout << "using: " << argv[0] << " " << localhost << " " << localport << " " << remotehost << " " << remoteport << "\n"; boost::asio::io_service io_service; std::shared_ptr<KUKA::FRI::ClientData> friData(std::make_shared<KUKA::FRI::ClientData>(7)); std::chrono::time_point<std::chrono::high_resolution_clock> startTime; BOOST_VERIFY(friData); double
delta = -0.0001; /// consider moving joint angles based on time int joint_to_move = 6; BOOST_LOG_TRIVIAL(warning) << "WARNING: YOU COULD DAMAGE OR DESTROY YOUR KUKA ROBOT " << "if joint angle delta variable is too large with respect to " << "the time it takes to go around the loop and change it. " << "Current delta (radians/update): " << delta << " Joint to move: " << joint_to_move << "\n"; std::vector<double> ipoJointPos(7,0); std::vector<double> offsetFromipoJointPos(7,0); // length 7, value 0 std::vector<double> jointStateToCommand(7,0); grl::robot::arm::KukaFRIClientDataDriver driver(io_service, std::make_tuple(localhost,localport,remotehost,remoteport,grl::robot::arm::KukaFRIClientDataDriver::run_automatically) ); for (std::size_t i = 0;;++i) { /// use the interpolated joint position from the previous update as the base if(i!=0 && friData) grl::robot::arm::copy(friData->monitoringMsg,ipoJointPos.begin(),grl::revolute_joint_angle_interpolated_open_chain_state_tag()); /// perform the update step, receiving and sending data to/from the arm boost::system::error_code send_ec, recv_ec; std::size_t send_bytes_transferred = 0, recv_bytes_transferred = 0; bool haveNewData = !driver.update_state(friData, recv_ec, recv_bytes_transferred, send_ec, send_bytes_transferred); // if data didn't arrive correctly, skip and try again if(send_ec || recv_ec ) { std::cout << "receive error: " << recv_ec << "receive bytes: " << recv_bytes_transferred << " send error: " << send_ec << " send bytes: " << send_bytes_transferred << " iteration: "<< i << "\n"; std::this_thread::sleep_for(std::chrono::milliseconds(1)); continue; } // If we didn't receive anything new that is normal behavior, // but we can't process the new data so try updating again immediately. if(!haveNewData) { std::this_thread::sleep_for(std::chrono::milliseconds(1)); continue; } /// use the interpolated joint position from the previous update as the base /// @todo why is this? 
if(i!=0 && friData) grl::robot::arm::copy(friData->monitoringMsg,ipoJointPos.begin(),grl::revolute_joint_angle_interpolated_open_chain_state_tag()); if (grl::robot::arm::get(friData->monitoringMsg,KUKA::FRI::ESessionState()) == KUKA::FRI::COMMANDING_ACTIVE) { #if 1 // disabling this block causes the robot to simply sit in place, which seems to work correctly. Enabling it causes the joint to rotate. callIfMinPeriodPassed.execution( [&offsetFromipoJointPos,&delta,joint_to_move]() { offsetFromipoJointPos[joint_to_move]+=delta; // swap directions when a half circle was completed if ( (offsetFromipoJointPos[joint_to_move] > 0.2 && delta > 0) || (offsetFromipoJointPos[joint_to_move] < -0.2 && delta < 0) ) { delta *=-1; } }); #endif } KUKA::FRI::ESessionState sessionState = grl::robot::arm::get(friData->monitoringMsg,KUKA::FRI::ESessionState()); // copy current joint position to commanded position if (sessionState == KUKA::FRI::COMMANDING_WAIT || sessionState == KUKA::FRI::COMMANDING_ACTIVE) { boost::transform ( ipoJointPos, offsetFromipoJointPos, jointStateToCommand.begin(), std::plus<double>()); grl::robot::arm::set(friData->commandMsg, jointStateToCommand, grl::revolute_joint_angle_open_chain_command_tag()); } // vector addition between ipoJointPosition and ipoJointPositionOffsets, copying the result into jointStateToCommand /// @todo should we take the current joint state into consideration? 
//BOOST_LOG_TRIVIAL(trace) << "position: " << state.position << " us: " << std::chrono::duration_cast<std::chrono::microseconds>(state.timestamp - startTime).count() << " connectionQuality: " << state.connectionQuality << " operationMode: " << state.operationMode << " sessionState: " << state.sessionState << " driveState: " << state.driveState << " ipoJointPosition: " << state.ipoJointPosition << " ipoJointPositionOffsets: " << state.ipoJointPositionOffsets << "\n"; } } catch (std::exception& e) { std::cerr << "Exception: " << e.what() << "\n"; } return 0; } Here is the Java side of the application: package friCommunication; import static com.kuka.roboticsAPI.motionModel.BasicMotions.positionHold; import static com.kuka.roboticsAPI.motionModel.BasicMotions.ptp; import java.util.concurrent.TimeUnit; import java.util.concurrent.TimeoutException; import com.kuka.connectivity.fri.FRIConfiguration; import com.kuka.connectivity.fri.FRIJointOverlay; import com.kuka.connectivity.fri.FRISession; import com.kuka.roboticsAPI.applicationModel.RoboticsAPIApplication; import com.kuka.roboticsAPI.controllerModel.Controller; import com.kuka.roboticsAPI.deviceModel.LBR; import com.kuka.roboticsAPI.geometricModel.CartDOF; import com.kuka.roboticsAPI.motionModel.controlModeModel.CartesianImpedanceControlMode; /** * Creates a FRI Session. 
*/ public class FRIHoldsPosition_Command extends RoboticsAPIApplication { private Controller _lbrController; private LBR _lbr; private String _hostName; @Override public void initialize() { _lbrController = (Controller) getContext().getControllers().toArray()[0]; _lbr = (LBR) _lbrController.getDevices().toArray()[0]; // ********************************************************************** // *** change next line to the FRIClient's IP address *** // ********************************************************************** _hostName = "192.170.10.100"; } @Override public void run() { // configure and start FRI session FRIConfiguration friConfiguration = FRIConfiguration.createRemoteConfiguration(_lbr, _hostName); friConfiguration.setSendPeriodMilliSec(4); FRISession friSession = new FRISession(friConfiguration); FRIJointOverlay motionOverlay = new FRIJointOverlay(friSession); try { friSession.await(10, TimeUnit.SECONDS); } catch (TimeoutException e) { // TODO Auto-generated catch block e.printStackTrace(); friSession.close(); return; } CartesianImpedanceControlMode controlMode = new CartesianImpedanceControlMode(); controlMode.parametrize(CartDOF.X).setStiffness(100.0); controlMode.parametrize(CartDOF.ALL).setDamping(0.7); // TODO: remove default start pose // move to default start pose _lbr.move(ptp(Math.toRadians(10), Math.toRadians(10), Math.toRadians(10), Math.toRadians(-90), Math.toRadians(10), Math.toRadians(10),Math.toRadians(10))); // sync move for infinite time with overlay ... _lbr.move(positionHold(controlMode, -1, TimeUnit.SECONDS).addMotionOverlay(motionOverlay)); //_lbr.moveAsync(ptp(Math.toRadians(-90), .0, .0, Math.toRadians(90), .0, Math.toRadians(-90), .0)); // TODO: remove default start pose // move to default start pose _lbr.move(ptp(Math.toRadians(10), Math.toRadians(10), Math.toRadians(10), Math.toRadians(-90), Math.toRadians(10), Math.toRadians(10),Math.toRadians(10))); // done friSession.close(); } /** * main.
* * @param args * args */ public static void main(final String[] args) { final FRIHoldsPosition_Command app = new FRIHoldsPosition_Command(); app.runApplication(); } }
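The handshake the C++ and Java sides perform over FRI — the controller sends a monitoring message every period and the client must answer it with a command message in the same period — can be illustrated with a toy UDP loopback sketch. This is not the real FRI protocol (FRI exchanges protobuf-encoded messages; FRI's default port is 30200, but here an ephemeral port is used); the "monitor:"/"command:" message strings and function names are made up purely for illustration.

```python
# Toy loopback sketch of the FRI-style request/response cycle:
# controller sends state, client must answer with a command.
import socket
import threading

def toy_controller(sock, client_addr, n_cycles=3):
    # Stand-in for the Sunrise Cabinet side of the exchange.
    for i in range(n_cycles):
        sock.sendto(f"monitor:{i}".encode(), client_addr)  # state out
        cmd, _ = sock.recvfrom(1024)                       # command back
        assert cmd.decode().startswith("command:")

def toy_client(sock, n_cycles=3):
    # Stand-in for the C++ client's update_state() loop.
    seen = []
    for _ in range(n_cycles):
        msg, addr = sock.recvfrom(1024)        # blocking receive of state
        seen.append(msg.decode())
        sock.sendto(b"command:hold", addr)     # reply within the period
    return seen

client = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
client.bind(("127.0.0.1", 0))                  # ephemeral port for the demo
client.settimeout(5.0)
server = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
server.settimeout(5.0)
t = threading.Thread(target=toy_controller, args=(server, client.getsockname()))
t.start()
msgs = toy_client(client)
t.join()
server.close()
client.close()
print(msgs)   # ['monitor:0', 'monitor:1', 'monitor:2']
```

The point of the sketch is the strict alternation: the client never sends unprompted, it only answers each monitoring packet, which is why the C++ loop above retries immediately when no new data has arrived.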
{ "domain": "robotics.stackexchange", "id": 766, "tags": "robotic-arm, c++" }
where is indigo rqt source
Question: Hi, I am using ubuntu 14.04 and indigo devel. I am trying to build rqt from the source and on the github [link], it doesn't have an indigo branch, so where can I download the rqt for indigo from the source? (https://github.com/ros-visualization/rqt) Also, I noticed there are a lot of package sources that do not have an indigo branch, such as rqt and qt_gui_core, etc. So, what should I do if I need the indigo source files? Originally posted by frodyteen on ROS Answers with karma: 62 on 2018-04-27 Post score: 0 Answer: Repositories might not have branches for each and every ROS version. Sometimes the same code is being used for different ROS distributions and then only a single branch (commonly with the name of the older ROS distro) exists. You can find the information you are looking for by looking into the rosdistro database. In the Indigo distribution file you will find an entry for rqt which specifies the URL and branch name to be used in that ROS distro: https://github.com/ros/rosdistro/blob/df65c25ff7445aa818953c1e778fffabedb4f3d6/indigo/distribution.yaml#L13013-L13033 So the branch you are looking for is groovy-devel: rqt: doc: type: git url: https://github.com/ros-visualization/rqt.git version: groovy-devel release: packages: - rqt - rqt_gui - rqt_gui_cpp - rqt_gui_py - rqt_py_common tags: release: release/indigo/{package}/{version} url: https://github.com/ros-gbp/rqt-release.git version: 0.4.8-0 source: type: git url: https://github.com/ros-visualization/rqt.git version: groovy-devel status: maintained As an alternative you could also use rosinstall_generator to fetch the information for you: rosinstall_generator --rosdistro indigo rqt --upstream-development - git: local-name: rqt uri: https://github.com/ros-visualization/rqt.git version: groovy-devel Originally posted by Dirk Thomas with karma: 16276 on 2018-04-27 This answer was ACCEPTED on the original site Post score: 0 Original comments Comment by frodyteen on 2018-04-28: Thank you!
{ "domain": "robotics.stackexchange", "id": 30748, "tags": "ros, ubuntu, ubuntu-trusty, rqt, ros-indigo" }
A circuit that is net charged
Question: What differences would you measure if a circuit were significantly charged negatively? Would the resistance change? To be clear, I mean that excess electrons are added to the system. The circuit can be of any kind you can imagine. Answer: There are broadly three classes of materials: conductor, semiconductor, insulator. The conductor contains a LOT of electrons per unit volume. If you were to charge it, you would add a few more electrons. How many? Let's take copper. It has roughly $8.5\cdot 10^{28}$ electrons per $m^3$. If you have a wire of radius $r$ the number of electrons scales with $r^2$ and capacitance scales with $\log{r}$. So the thinner the wire, the more important the effect of surface charge on total number of electrons. I will leave it up to you to calculate how thin a wire would have to be before surface electrons contribute significantly to the measured resistance. A quick "back of the iPhone" estimate: For a macroscopic wire capacitance might be a few 100 pF per meter so you could get about $10^9$ electrons per meter on the surface. That would be roughly the same number of electrons as we can get in a 1 nm diameter wire (assuming that at that curvature a wire can hold 1 Volt without discharge to the air - which seems unlikely...) Good luck measuring that. For semiconductors and insulators the number of charge carriers is smaller. This will make the math slightly more favorable. But note that surface effects (would surface electrons even contribute to conduction?) would be very important to consider - the number of electrons in an insulator does not tell the whole story (there are plenty of electrons but they are not free to move. Not at all obvious it would be different for surface charge).
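The answer's back-of-the-envelope estimate can be put into numbers. The 300 pF/m capacitance and 1 V potential below are assumed round values in the spirit of the answer's "a few 100 pF per meter" guess, not measured figures:

```python
import math

n_cu = 8.5e28          # conduction electrons per m^3 in copper (from the answer)
e = 1.602e-19          # elementary charge, C

def bulk_electrons_per_meter(radius_m):
    """Conduction electrons per meter of wire length."""
    return n_cu * math.pi * radius_m**2

def surface_electrons_per_meter(cap_per_m=300e-12, volts=1.0):
    """Excess electrons per meter stored as surface charge at C*V."""
    return cap_per_m * volts / e

r = 0.5e-9   # a 1 nm diameter wire
print(f"bulk:    {bulk_electrons_per_meter(r):.1e} electrons/m")
print(f"surface: {surface_electrons_per_meter():.1e} electrons/m")
```

With these assumptions the bulk count in a 1 nm wire comes out around 7e10 electrons per meter against roughly 2e9 excess surface electrons, supporting the answer's point: even for an absurdly thin wire the excess charge is a small perturbation on the conduction-electron population.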
{ "domain": "physics.stackexchange", "id": 23394, "tags": "electric-circuits, charge" }
Bit-parallelism and NFA simulation
Question: In several papers I have read that Bit-parallel pattern matching is an NFA-simulation. My questions are: 1- Is it true in general? Or, are there any restrictions? 2- As any regular expression can be converted to an NFA, how is Bit-parallelism able to handle some regex like: a?5 Update: Bit-parallel pattern matching is a family of well-known pattern matching algorithms in the literature of hardware-based pattern matching. It was introduced by Baeza-Yates and Gonnet (A New Approach to Text Searching, Communications of the ACM, 35(10):74–82, 1992; PDF) and has gained more attention recently, for example in Faro and Lecroq, Twenty Years of Bit-Parallelism in String Matching (Festschrift for Bořivoj Melichar, pp. 72–101; PDF). In these papers there are several statements like: "Bit-parallelism is indeed particularly suitable for the efficient simulation of non-deterministic automata.", second reference, page 2. Answer: on p4-5 of your 2nd ref by Faro and Lecroq they write: Online string matching algorithms (hereafter simply string matching algorithms) can be divided into three classes: algorithms which solve the problem by making use only of comparisons between characters, algorithms which make use of deterministic automata and algorithms which simulate nondeterministic automata using bit-parallelism. however, while sounding general, these statements seem to be wrt "within the realm of string matching algorithms". in other words there is massive use/research of bit parallelism of NFAs for string matching algorithms in particular, but have not seen many bit-parallelism ideas applied to arbitrary NFAs, even though it is clearly applicable. here are two refs that turned up that do consider bitwise operations on a more general/arbitrary case (apparently the focus of your question). in the 2nd case, "vector algorithms" are capable of/nearly the same as "bit parallelism".
Implementing WS1S via Finite Automata / Glenn, Gasarch, sec 4, "internal representation of Automata" we do this with elementary bitwise operations Fast Implementations of Automata Computations / Bergeron, Hamel sec 2.2 "solving more complex automata" In this section, we establish a sufficient condition for the existence of a vector algorithm for an automaton.
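The core bit-parallel NFA simulation these papers build on is the Shift-And algorithm of Baeza-Yates and Gonnet: bit i of a single machine word records whether the NFA state "the last i+1 text characters match the first i+1 pattern characters" is active, and every active state is advanced in one shift-and-mask step. A minimal sketch in Python, where integers act as arbitrary-width bit vectors:

```python
def shift_and(text, pattern):
    """Shift-And exact matching: simulate the pattern NFA in one word.
    Bit i of `state` is set iff the NFA state for a prefix of length
    i+1 is currently active."""
    m = len(pattern)
    # B[c]: bitmask of the pattern positions that hold character c
    B = {}
    for i, c in enumerate(pattern):
        B[c] = B.get(c, 0) | (1 << i)
    state = 0
    hits = []
    for pos, c in enumerate(text):
        # advance all active states in parallel; '| 1' restarts at state 0
        state = ((state << 1) | 1) & B.get(c, 0)
        if state & (1 << (m - 1)):       # accepting state is active
            hits.append(pos - m + 1)     # report match start index
    return hits

print(shift_and("abracadabra", "abra"))  # [0, 7]
```

Extended variants of this same update rule (adding further masks per step, as surveyed by Faro and Lecroq) handle character classes and optional or repeated characters, which is how bit-parallel methods cover restricted regular expressions beyond plain strings.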
{ "domain": "cs.stackexchange", "id": 5516, "tags": "automata, finite-automata, pattern-recognition, matching, dfa" }
Orange ring in a black hole image
Question: What exactly is the origin of the orange ring around M$87$? I understand that the image was not taken in the visible light range. The colors are therefore artificial. I also read that the image shows the shadow of the black hole on a brighter region of space which is "glowing" gas. Still I wonder about the following: Why is the bright region circularly shaped and centered around the black hole? Is the bright region at the same distance as the black hole or much further away? Answer: The bright region is known as a "photon ring". It is light that is heading towards us from a radius of around $1.5 r_s$ around the black hole, where $r_s = 2GM/c^2$ is the Schwarzschild radius of the black hole. So yes, the light is certainly coming to us from the immediate environs of the black hole and so from the same distance. The light travelling towards us is warped by the distortion of space-time caused by the black hole. The warping acts like a magnifying glass, meaning we see the photon ring as larger - with a radius of $2.6r_s$. The reason that we see a ring at all is that the plasma surrounding the black hole is "geometrically thick, but optically thin" at the 1.3 mm wavelengths used in the observations. This means that mm-waves are generated by fast-moving electrons in the plasma that is being accreted onto the black hole and the plasma exists over the whole of the region imaged (and beyond), but that most of the emitted light will escape self-absorption. The latter property is key. When viewing such a plasma, the brightness depends on the density of the plasma and the path length of the sightline we have into it. This matters greatly near a black hole, because the densest plasma will be nearest the black hole but any light that is emitted and heads inside the location of the "photon sphere" at $1.5 r_s$ will end up in the black hole, possibly after orbiting many times, and is lost.
Light emitted outwards from dense plasma inside or at the photon sphere may orbit many times and then escape from the edge of the photon sphere. Light emitted just outside the photon sphere can be bent towards us on trajectories that graze the photon sphere. The result is a concentration of light rays that appear to emerge from the photon sphere and which we view as a circular ring. The ring is intrinsically narrow but is made fuzzy in the Event Horizon Telescope images by the limited (but amazing) instrumental resolution. Inside the ring is relative darkness. There is some light coming towards us from this direction - from plasma between us and the black hole, but it is much fainter than the concentrated light from the photon ring. Much of the light that would have come to us from that direction has fallen into the black hole and hence it is referred to as the "black hole shadow". The ring and the shadow should (according to General Relativity) be perfectly circular for a non-spinning, spherically symmetric black hole. The spherical symmetry is broken for a spinning Kerr black hole and small ($\leq 10$%) departures from circularity might be expected (e.g. see section 9 of paper VI in the Event Horizon Telescope series on M87). The spin of the black hole drags material around it and is thought to be responsible for the asymmetric brightness distribution of the ring, through Doppler boosting in the direction of forward motion. The observed ring is not the accretion disk The apparent radius of something residing in a Schwarzschild metric, when viewed from infinity is given by $$ R_{\rm obs} = R \left(1 - \frac{R_s}{R}\right)^{-1/2}\ ,$$ where $R_s$ is the Schwarzschild radius $2GM/c^2$. This enlargement is due to gravitational lensing and the formula is correct down to the "photon sphere" at $R =1.5 R_s$. Most of the light in the EHT image comes from the photon sphere. 
It is therefore observed to come from a radius $$ R_{\rm obs} =\frac{3R_s}{2}\left(1 - \frac{2}{3}\right)^{-1/2} = \frac{\sqrt{27}}{2}R_s\ .$$ This is almost precisely what is observed if the black hole has the mass inferred from independent observations of the motions of stars near the centre of M87. By contrast, the accretion disk would be truncated at the innermost stable circular orbit, which is at $3R_s$ and would appear to be at $3.7R_s$ as viewed from the Earth (or larger for co-rotating material around a spinning black hole), significantly bigger than the ring that is observed. So we might expect disk emission to come from further out. Nevertheless, there is inflow from the disk and General Relativistic simulations involving magnetic fields do show some emissivity in a broader disc-like structure around the black hole. A set of simulations was done as part of the analysis of the EHT image and is described in paper V of the EHT M87 series. Fig. 1 of this paper shows an intrinsic image (i.e. prior to blurring with the instrumental resolution) that provides a reasonable fit to what is seen (see below). In all cases the emission is dominated by the photon ring and the direct contribution of the accretion disk/flow is much lower. A direct quote from that paper: The central hole surrounded by a bright ring arises because of strong gravitational lensing (e.g., Hilbert 1917; von Laue 1921; Bardeen 1973; Luminet 1979). The so-called "photon ring" corresponds to lines of sight that pass close to (unstable) photon orbits (see Teo 2003), linger near the photon orbit, and therefore have a long path length through the emitting plasma. The Figure above is from paper V of the EHT data release on M87. It shows the observations (left), a General Relativistic simulation (center), and the same simulation blurred by the instrumental resolution of the Event Horizon Telescope (right). The dominant feature is the photon ring.
A weak disk contribution (or rather inflow from the disk) is seen in the simulation, but contributes little to the observed ring seen in the observations.
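The two apparent radii quoted in this answer follow directly from the lensing formula $R_{\rm obs} = R\,(1 - R_s/R)^{-1/2}$; a quick numerical check (working in units of $R_s$):

```python
import math

def apparent_radius(R, Rs=1.0):
    """Apparent radius, seen from infinity, of an emitter at radius R
    (R >= 1.5*Rs) in the Schwarzschild metric; units of Rs."""
    return R / math.sqrt(1.0 - Rs / R)

# Photon sphere at 1.5 Rs -> sqrt(27)/2 ~ 2.6 Rs, as stated above
print(apparent_radius(1.5))   # 2.598...
# ISCO at 3 Rs -> ~3.7 Rs, the quoted apparent disk truncation radius
print(apparent_radius(3.0))   # 3.674...
```

Both values match the figures in the answer, confirming that the observed ring size is consistent with the photon sphere rather than the inner edge of the accretion disk.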
{ "domain": "physics.stackexchange", "id": 57520, "tags": "black-holes, astrophysics, astronomy, imaging" }
Why is the consumer/producer biomass ratio higher in the oceans?
Question: According to Bar-on et al. (2018) (see figure 2), the terrestrial consumer/producer biomass ratio is 20/450 while the marine consumer/producer biomass ratio is 5/1 (in Gton of carbon). Why is the consumer/producer ratio much higher in the oceans? Is it perhaps because marine animals are more efficient? Or is terrestrial plant biomass underexploited by surface animals? Bar-On, Y.M., Phillips, R. and Milo, R., 2018. The biomass distribution on Earth. Proceedings of the National Academy of Sciences, 115(25), pp.6506-6511. Answer: The paper you cited suggests an explanation: Such inverted biomass distributions can occur when primary producers have a rapid turnover of biomass [on the order of days (34)], while consumer biomass turns over much more slowly [a few years in the case of mesopelagic fish (35)]. Thus, the standing stock of consumers is larger, even though the productivity of producers is necessarily higher. Previous reports have observed inverted biomass pyramids in local marine environments (36, 37). An additional study noted an inverted consumer/producer ratio for the global plankton biomass (16). To explain/clarify this a bit further: there are two main processes that affect the consumer/producer ratio. One, which you have identified, is efficiency (how much of the biomass is transferred from one trophic level to the next). The other is turnover rate (rate at which biomass leaves a trophic level), which is the reciprocal of the residence time (the average length of time that a unit of biomass spends in a trophic level before transitioning to another trophic level/compartment). At equilibrium, the biomass in a trophic level is equal to (inflow/turnover rate) or (inflow $\times$ residence time). If consumers at a particular level consume one tonne of biomass per month (assume 100% efficiency for now) and the residence time is 6 months, the standing stock or quantity of biomass at that level will be 6 tonnes. 
Taking some of the numbers from the paragraph above: suppose the residence time for phytoplankton is 4 days while of consumers is 400 days. If the consumers have a 10% efficiency for taking up phytoplankton biomass, then the consumer biomass will be $(400~\textrm{days}/4~\textrm{days}) \times 10\% = 10$ times higher than the phytoplankton biomass.
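The arithmetic of this worked example can be written out explicitly. The simplification that all producer output is consumed (100% of production flows to the consumer level before the 10% efficiency is applied) is an assumption of the toy calculation:

```python
def standing_stock(inflow, residence_time):
    """Equilibrium biomass in a trophic level: inflow x residence time."""
    return inflow * residence_time

# Numbers from the worked example: producers turn over in ~4 days,
# consumers in ~400 days, with 10% transfer efficiency.
production = 1.0                                      # biomass units per day
producers = standing_stock(production, 4)             # 4.0
consumers = standing_stock(0.10 * production, 400)    # 40.0
print(consumers / producers)                          # 10.0 -> inverted pyramid
```

Even though the consumers receive only a tenth of the biomass flow, their hundredfold longer residence time leaves a standing stock ten times larger, which is the inverted-pyramid effect the answer describes.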
{ "domain": "biology.stackexchange", "id": 11075, "tags": "ecology, marine-biology" }
Formation of carbocation on reaction with strong acid in presence of halogen
Question: Shouldn't the carbocation be formed away from the -I effect of the Cl atom for more stability. Backbonding is achieved in a later step after hydride shift. Answer: That would result in a primary carbocation. Due to substituent groups stabilizing the carbocation's empty p-orbital in what's known as hyperconjugation, primary carbocations are less stable than secondary. Here are some heats of formation from the Active Thermochemical Tables (version 1.130): t-Butylium (tertiary carbocation): 710.72 $\pm$ 0.98 kJ/mol sec-Butylium (secondary carbocation): 769.0 $\pm$ 1.6 kJ/mol n-Butylium (primary carbocation): 803.1 $\pm$ 1.2 kJ/mol While tertiary carbocations are more stable than secondary ones, an electron-withdrawing group such as a chloro group destabilizes the carbocation.
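The stability ordering can be read off as differences between the quoted heats of formation (a lower heat of formation means a more stable cation):

```python
# ATcT heats of formation quoted above, in kJ/mol
dHf = {"tert": 710.72, "sec": 769.0, "prim": 803.1}

print(dHf["sec"] - dHf["tert"])   # ~58.3 kJ/mol: tertiary more stable than secondary
print(dHf["prim"] - dHf["sec"])   # ~34.1 kJ/mol: secondary more stable than primary
```

Both gaps are large on the scale of reaction energetics, which is why the hydride shift to the secondary (and eventually tertiary) position dominates over forming the primary carbocation.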
{ "domain": "chemistry.stackexchange", "id": 17929, "tags": "halides" }
Direct sum in Eq.(25) of *A Grand Unification of Quantum Algorithms* should be changed into $\sum_\lambda$?
Question: I have a problem when I read this paper; has anyone happened to read it who can help me solve the problem? The problem is just about the notation used in the paper. The authors stated that $U=\left[\begin{array}{cc}\mathcal{H} & \sqrt{I-\mathcal{H}^{2}} \\ \sqrt{I-\mathcal{H}^{2}} & -\mathcal{H}\end{array}\right]$, where $\mathcal{H}=\sum_{\lambda} \lambda|\lambda\rangle\langle\lambda|$. But later, they stated again in Eq. (25): $$ \begin{aligned} U &=\bigoplus_{\lambda}\left[\begin{array}{cc} \lambda & \sqrt{1-\lambda^{2}} \\ \sqrt{1-\lambda^{2}} & -\lambda \end{array}\right] \otimes|\lambda\rangle\langle\lambda| \\ &=\bigoplus_{\lambda}\left[\sqrt{1-\lambda^{2}} X+\lambda Z\right] \otimes|\lambda\rangle\langle\lambda| \\ &=: \bigoplus_{\lambda} R(\lambda) \otimes|\lambda\rangle\langle\lambda|, \end{aligned} $$ My question is, should the direct sum be changed into $\sum_\lambda$? Because the dimension of the direct-sum form does not seem to match that of the original $U$. Answer: Yes, it seems that two possible notations, either $$ \sum_\lambda R(\lambda)\otimes |\lambda\rangle\langle\lambda| $$ or $$ \bigoplus_\lambda R(\lambda) $$ have been merged (the second needs to be interpreted as 'in the basis $\{|\lambda\rangle\}$'). As you say, I think you're better off with the normal sum.
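The dimension point is easy to verify numerically: for a diagonal 2×2 $\mathcal{H}$, the ordinary sum $\sum_\lambda R(\lambda)\otimes|\lambda\rangle\langle\lambda|$ reproduces the 4×4 block matrix $U$ exactly, while a bare direct sum of the $R(\lambda)$ carries no projector factor at all. A small numpy check (the eigenvalues 0.3 and -0.6 are arbitrary choices for illustration):

```python
import numpy as np

lams = np.array([0.3, -0.6])            # arbitrary eigenvalues of H
H = np.diag(lams)
S = np.sqrt(np.eye(2) - H @ H)          # sqrt(I - H^2)
U_block = np.block([[H, S], [S, -H]])   # the paper's 2x2 block form

U_sum = np.zeros((4, 4))
for k, lam in enumerate(lams):
    R = np.array([[lam, np.sqrt(1 - lam**2)],
                  [np.sqrt(1 - lam**2), -lam]])
    proj = np.outer(np.eye(2)[k], np.eye(2)[k])   # |lambda><lambda|
    U_sum += np.kron(R, proj)                     # ordinary sum, not oplus

print(np.allclose(U_sum, U_block))              # True: the sum matches U
print(np.allclose(U_sum @ U_sum.T, np.eye(4)))  # True: U is unitary
```

Each $R(\lambda)\otimes|\lambda\rangle\langle\lambda|$ is already a 4×4 matrix supported on its own eigenspace, so the ordinary sum gives the correct dimension; a further direct sum would double-count it.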
{ "domain": "quantumcomputing.stackexchange", "id": 3321, "tags": "quantum-algorithms" }
ElectroQuasiStatic (EQS) regime - struggling with understanding the 1st order correction - computation of $\vec{H}$
Question: I’ve the following problem: given a dipole with the moment $\vec{p} = p\hat{z}\cos(\omega t)$, I was tasked first with finding the field $\vec{E}$, justifying the ElectroQuasiStatic (EQS) regime, and then finding the first-order correction: $\vec{H}$. I’ve managed to find the dipole field with the dot product formula, and generally am able to compute curls - however their solution with Ampère's law baffles me a little. First of all, if $\nabla \times \vec{H} = j \omega \epsilon_0 \vec{E}$, won’t that mean that $\vec{E} = \frac{-j}{\omega \epsilon_0} \cdot \nabla \times \vec{H}$ as opposed to $+j$ like we see in the answers? Also, why does $\vec{H}$ only have a component in the $\hat{\phi}$ direction? Answer: Physically, you have an oscillating electric dipole, so you would expect EM radiation. However, by the method you are employing, you are interested in the "near" field, namely, $$ r\gg d \quad r\ll \frac{c}{\omega} $$ In this case, you can indeed take the electric field given by the quasi-static approximation, namely calculate it as in the static case with the instantaneous distribution of charge. For the magnetic field, in this regime, it is zero since, instantaneously, there is no current. The next leading term is therefore induced by the electric field. The Maxwell displacement current has at every point the plane symmetry with respect to the plane perpendicular to $\hat\phi$, so $H$ is parallel to $\hat \phi$, being a pseudovector respecting the same symmetries. There is no sign issue; the fourth and fifth lines are just: $$ E_r = \frac{-j}{\epsilon_0\omega}(\nabla\times H)_r \\ E_\theta = \frac{-j}{\epsilon_0\omega}(\nabla\times H)_\theta $$ where the expressions for $E$ and $\nabla\times H$ are substituted. Note that to get the full solution, you'll need to look at the induced electric field with Faraday's law giving a correction to the electric field, and repeat the process indefinitely. 
What you are effectively doing is writing the solution as a power series in $c$ or, to make it dimensionless, in powers of $\frac{c}{r\omega}$. It turns out that you can solve this problem exactly; it is given in the section Dipole radiation, and you can check that your results match the leading-order terms in powers of $c$. Hope this helps.
{ "domain": "physics.stackexchange", "id": 94353, "tags": "electromagnetism, electric-fields, maxwell-equations" }
Bosonic representation of delta function for Grassmann-even quantity
Question: Suppose I have 2 Grassmann scalars $\theta$ and $\bar{\theta}$ and form the bosonic quantity $X = \bar{\theta}\theta$. Is there a purely bosonic representation of the delta function $\delta(X - \overline{\theta}\theta)$, which can be used in integrals to replace $$f(\bar{\theta}\theta) = \int dX \;\delta(X - \bar\theta\theta) f(X).$$ (By purely bosonic I mean that we do not introduce two new Grassmann scalars and write $X = \bar{\eta}\eta$.) Answer: $$ \begin{align} \int_{\mathbb{R}^{1|0}} \! dX ~\delta(X - \bar\theta\theta) f(X) ~=~&\int_{\mathbb{R}^{1|0}} \! dX ~\delta(X) f(X+ \bar\theta\theta)\cr ~=~&\int_{\mathbb{R}} \! dX_B ~\delta(X_B) f(X_B+ \bar\theta\theta)\cr ~=~&f(\bar{\theta}\theta). \end{align}$$ In the 1st equality, we used that an integral measure is translation invariant. In the 2nd equality, we used that an integral over a Grassmann-even real supernumber $X\in\mathbb{R}^{1|0}$ is by definition given by the corresponding integral over its body $X_B\in\mathbb{R}$, cf. Ref. 1. In the 3rd equality, we used the defining property of the Dirac delta distribution. References: Bryce DeWitt, Supermanifolds, Cambridge Univ. Press, 1992; p.7.
{ "domain": "physics.stackexchange", "id": 95198, "tags": "supersymmetry, calculus, dirac-delta-distributions, grassmann-numbers, superalgebra" }
Neural network q learning for tic tac toe - how to use the threshold
Question: I am currently programming a Q-learning neural network that does not work. I have previously asked a question about inputs and have sorted that out. My current idea as to why the program does not work has to do with the threshold value, a variable specific to neural-network Q-learning. Basically, the threshold is a value between 0 and 1; you then generate a random number between 0 and 1, and if this random number is larger than the threshold you pick a completely random choice, otherwise the neural network chooses by finding the largest Q value. My question is about this threshold value: I am currently implementing it as starting at almost 0, then increasing linearly until it reaches 1 by the time the program has reached the final iteration. Is this correct? The reason I suspect this is incorrect is that when plotting an error graph from training the neural network, the program does not learn at all, but when the threshold reaches almost 1 it starts to learn very fast, and if you run more iterations after it reaches 1, all the game sets in the replay memory become the same and the error is basically 0 from there on in. Any feedback is greatly appreciated, and if this question is unclear in any way just let me know and I will try to fix it. Thank you to anyone who helps out. Answer: You are effectively implementing $\epsilon$-greedy action selection. The usual way to represent this in RL, at least that I am familiar with, is not as a "threshold" for the probability of choosing the best estimated action, but as a small probability, $\epsilon$, of not choosing the best estimated action. For consistency with the RL literature that I know, I will use the $\epsilon$-greedy form, so instead of considering what happens as your threshold rises from 0 to 1, I will consider what happens when $\epsilon$ drops from 1 to 0. It is the same thing. 
I hope you can either adjust to using $\epsilon$ or mentally convert the rest of this answer so it is about your threshold . . . When monitoring Q-Learning, you have to be careful how you measure success. Monitoring the behaviour on the learning games will give you slightly off feedback. The agent will make exploratory moves (with probability $\epsilon$), and the results from a learning game might involve the agent losing even though it already has a policy good enough to not lose from the position where it started exploring. If you want to measure how well the agent has learned the game, you have to stop the training stage and play some games with $\epsilon$ set to $0$. I suspect this could be one problem - that you are measuring results from behaviour during training (note this would work with SARSA) In addition, choosing values that are too high or low for your problem will reduce the speed of learning. High values interfere with Q-learning because it has to reject some of data from exploratory moves, and the agent will rarely see a full game played using its preferred policy. Low values stifle learning because the agent does not explore different options enough, just repeating the same game play when there might be better moves that it has not tried. For Tic Tac Toe and Q-learning I would suggest picking a value of $\epsilon$ between $0.01$ and $0.2$ In fact, with Q-learning there is no need to change the value of $\epsilon$. You should be able to pick a value, say $0.1$, and stick with it. The agent will still learn an optimal policy, because Q-learning is an off-policy algorithm.
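The $\epsilon$-greedy rule described above can be sketched in a few lines of Python (the function name is illustrative, not from the question's code):

```python
import random

def epsilon_greedy(q_values, epsilon):
    """With probability epsilon explore (pick a random action); otherwise
    exploit by picking the action with the largest Q value."""
    if random.random() < epsilon:
        return random.randrange(len(q_values))
    return max(range(len(q_values)), key=lambda a: q_values[a])

# During evaluation, set epsilon = 0 so the agent always follows its policy:
best = epsilon_greedy([0.1, 0.9, 0.3], 0.0)  # always returns 1
```

Keeping `epsilon` fixed at something like `0.1` during training, and calling the function with `epsilon=0.0` when measuring performance, matches the advice above.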
{ "domain": "datascience.stackexchange", "id": 2461, "tags": "machine-learning, neural-network, q-learning" }
validation accuracy and loss increase
Question: I am training a generic LSTM based autoencoder to get the sentence embeddings, the bleu score is the accuracy metric. The model is coded to output the same number of tokens as the length of labels, hence the losses are calculated using cross-entropy loss between the output of token and the corresponding label token and added to a total loss to be backpropagated The embeddings size is 1000 throughout. Here are the logs:- Training Epoch: 1/20, Training Loss: 78.32446559076034, Training Accuracy: 0.23442341868755373 Validation Epoch: 1/20, Validation Loss: 75.11487170562003, Validation Accuracy: 0.28851715943634565 Training Epoch: 2/20, Training Loss: 60.702940691499734, Training Accuracy: 0.3043263558919579 Validation Epoch: 2/20, Validation Loss: 68.58432596068359, Validation Accuracy: 0.337459858582381 Training Epoch: 3/20, Training Loss: 51.62519727157313, Training Accuracy: 0.35618672202599283 Validation Epoch: 3/20, Validation Loss: 64.17064862141332, Validation Accuracy: 0.37158793060235135 Training Epoch: 4/20, Training Loss: 44.40417488866389, Training Accuracy: 0.4094415453046547 Validation Epoch: 4/20, Validation Loss: 61.230048799977716, Validation Accuracy: 0.3955376494828317 Training Epoch: 5/20, Training Loss: 38.78325418571326, Training Accuracy: 0.46050421873328257 Validation Epoch: 5/20, Validation Loss: 59.78918521062842, Validation Accuracy: 0.4063787247291398 Training Epoch: 6/20, Training Loss: 33.65953556655257, Training Accuracy: 0.5193937894102788 Validation Epoch: 6/20, Validation Loss: 58.64455007580877, Validation Accuracy: 0.41980867690343776 Training Epoch: 7/20, Training Loss: 29.35849161540994, Training Accuracy: 0.5831378755700898 Validation Epoch: 7/20, Validation Loss: 58.26881152131025, Validation Accuracy: 0.4261582422867802 Training Epoch: 8/20, Training Loss: 25.244888168760856, Training Accuracy: 0.6488748581642462 Validation Epoch: 8/20, Validation Loss: 57.62903963564669, Validation Accuracy: 0.43286079887479756 
Training Epoch: 9/20, Training Loss: 22.05663261861035, Training Accuracy: 0.7039174093261202 Validation Epoch: 9/20, Validation Loss: 58.09752491926684, Validation Accuracy: 0.4399501875046306 Training Epoch: 10/20, Training Loss: 19.248559526880282, Training Accuracy: 0.7486352249548112 Validation Epoch: 10/20, Validation Loss: 58.613073462421454, Validation Accuracy: 0.4470900014647744 Training Epoch: 11/20, Training Loss: 16.95602631587501, Training Accuracy: 0.7857343322245365 Validation Epoch: 11/20, Validation Loss: 58.38435334806304, Validation Accuracy: 0.44778823347334884 Training Epoch: 12/20, Training Loss: 14.74661236426599, Training Accuracy: 0.8136944976817879 Validation Epoch: 12/20, Validation Loss: 59.63633590068632, Validation Accuracy: 0.45206057264928495 Training Epoch: 13/20, Training Loss: 13.507415059699248, Training Accuracy: 0.8299945959036563 Validation Epoch: 13/20, Validation Loss: 60.149887264208886, Validation Accuracy: 0.4512303133278385 Training Epoch: 14/20, Training Loss: 12.026118357521792, Training Accuracy: 0.8491757446561087 Validation Epoch: 14/20, Validation Loss: 59.89944394497038, Validation Accuracy: 0.45497359431776685 Training Epoch: 15/20, Training Loss: 10.785567499923806, Training Accuracy: 0.8628473173326144 Validation Epoch: 15/20, Validation Loss: 61.482036528946125, Validation Accuracy: 0.45541000266481596 Training Epoch: 16/20, Training Loss: 9.373574649788727, Training Accuracy: 0.8767987081840235 Validation Epoch: 16/20, Validation Loss: 62.18386231796834, Validation Accuracy: 0.4580630794998584 Training Epoch: 17/20, Training Loss: 8.5658748998932, Training Accuracy: 0.8878869616990712 Validation Epoch: 17/20, Validation Loss: 63.56435154233743, Validation Accuracy: 0.4606744393166781 Training Epoch: 18/20, Training Loss: 7.807730126944895, Training Accuracy: 0.8960175152587504 Validation Epoch: 18/20, Validation Loss: 63.88373188037895, Validation Accuracy: 0.4606897915210869 Training Epoch: 19/20, Training 
Loss: 6.829077819740428, Training Accuracy: 0.9038927070366026 Validation Epoch: 19/20, Validation Loss: 65.59262917371629, Validation Accuracy: 0.4639800374912485 Training Epoch: 20/20, Training Loss: 6.152266260986982, Training Accuracy: 0.9090036335609419 Validation Epoch: 20/20, Validation Loss: 66.84154795008956, Validation Accuracy: 0.4672414105594907 Here are the accuracy and loss vs. epoch graphs: I want to know why it is that the validation loss and accuracy are both increasing. Answer: An increase in validation loss while training loss is decreasing is an indicator that your model overfits. Check out this article for an easy-to-read general explanation. In the context of autoencoders this means your neural net almost reproduces the input. Try to reduce overfitting by applying regularization, e.g. add dropout, add input noise, or use fewer layers or fewer nodes per layer (not all at once but one by one).
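Beyond the regularizers listed above, logs like these also lend themselves to early stopping: halt training once validation loss stops improving. A minimal sketch, not from the answer (the helper name and `patience` value are illustrative; the loss values are rounded from the question's log, whose minimum is at epoch 8):

```python
def should_stop(val_losses, patience=3):
    """Stop when the last `patience` validation losses are all no better
    than the best loss seen before them."""
    if len(val_losses) <= patience:
        return False
    best_before = min(val_losses[:-patience])
    return min(val_losses[-patience:]) >= best_before

# First 11 validation losses from the question's log (rounded): the best value
# is at epoch 8, so with patience=3 training would stop after epoch 11.
val = [75.11, 68.58, 64.17, 61.23, 59.79, 58.64, 58.27, 57.63, 58.10, 58.61, 58.38]
assert should_stop(val) is True
```

Restoring the weights from the best validation epoch (epoch 8 here) would then give the least-overfit model.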
{ "domain": "datascience.stackexchange", "id": 6841, "tags": "deep-learning, nlp, lstm" }
Mean estimation for nested location data
Question: I want to estimate the average income for a location. I have nested data in the following way: a block is inside a neighborhood, which is inside a zipcode, which is inside a district, which is inside a region, which is inside a state. I want to estimate the average income at the block level, and the issue is that I don't have much data at that level. I have much more data at the state level, but it is not such a good approximation. How would you deal with this problem? Are there any ways to incorporate the uncertainty of not having many data points at the block level? Are there any Bayesian frameworks that allow us to incorporate data from all levels? Is it possible that mixed models are able to do so? If you explain a method, it would be great if you could also point to a Python package that implements it! Thanks! Answer: I don't know if that is the case, but if some kind of continuity assumption is realistic, you could try to move away from categorical variables (block) to continuous variables (longitude and latitude). Then, if you have information on two neighboring blocks, you could interpolate those values with, say, a spline. Of course, this can also be fitted into a machine learning model with predictors such as the average income of blocks within distance < x. And if you don't have data on nearby blocks, then your state average might be the next best approximation. Your state-level data can serve as a predictor and also as validation. Also, plotting your data always helps build some intuition.
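The partial-pooling idea behind the Bayesian and mixed-model frameworks the question asks about can be sketched without any library: shrink each block's sample mean toward the well-estimated state mean, with less shrinkage the more block data you have. This is not from the answer; the function name and the `prior_strength` pseudo-count are illustrative:

```python
def shrunk_block_mean(block_incomes, state_mean, prior_strength=10.0):
    """Empirical-Bayes-style estimate: a weighted average of the block's
    sample mean and the state mean. With no block data this returns the
    state mean; with lots of block data it approaches the block mean."""
    n = len(block_incomes)
    if n == 0:
        return state_mean
    block_mean = sum(block_incomes) / n
    w = n / (n + prior_strength)
    return w * block_mean + (1 - w) * state_mean

# A block with 90 observations is weighted 90/(90+10) = 0.9 toward its own mean
est = shrunk_block_mean([100.0] * 90, state_mean=50.0)  # 0.9*100 + 0.1*50 = 95.0
```

For the full hierarchical version (pooling across all six nesting levels at once, with uncertainty estimates), packages such as `statsmodels` (MixedLM) or PyMC are the usual Python choices.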
{ "domain": "datascience.stackexchange", "id": 7547, "tags": "statistics, feature-engineering, feature-extraction, bayesian" }
Confluence of beta expansion
Question: Let $\to_\beta$ be $\beta$-reduction in the $\lambda$-calculus. Define $\beta$-expansion $\leftarrow_\beta$ by $t'\leftarrow_\beta t \iff t\to_\beta t'$. Is $\leftarrow_\beta$ confluent? In other words, do we have that for any $l,d,r$, if $l \to_\beta^* d\leftarrow_\beta^* r$, then there exists $u$ such that $l\leftarrow_\beta^* u \to_\beta^* r$? Keywords: Upward confluence, upside down CR property I started by looking at the weaker property: local confluence (i.e. if $l \to_\beta d\leftarrow_\beta r$, then $l\leftarrow_\beta^* u \to_\beta^* r$). Even if this were true, it would not imply confluence since $\beta$-expansion is non-terminating, but I thought that it would help me understand the obstacles. (Top) In the case where both reductions are at top-level, the hypothesis becomes $(\lambda x_1.b_1)a_1\rightarrow b_1[a_1/x_1]=b_2[a_2/x_2]\leftarrow (\lambda x_2.b_2)a_2$. Up to $\alpha$-renaming, we can assume that $x_1\not =x_2$, and that neither $x_1$ nor $x_2$ is free in those terms. (Throw) If $x_1$ is not free in $b_1$, we have $b_1=b_2[a_2/x_2]$ and therefore have $(\lambda x_1.b_1)a_1=(\lambda x_1.b_2[a_2/x_2])a_1\leftarrow(\lambda x_1.(\lambda x_2.b_2)a_2)a_1\rightarrow (\lambda x_2.b_2)a_2$. A naive proof by induction (on $b_1$ and $b_2$) for the case (Top) would be as follows: If $b_1$ is a variable $y_1$, If $y_1=x_1$, the hypothesis becomes $(\lambda x_1.x_1)a_1\rightarrow a_1=b_2[a_2/x_2]\leftarrow (\lambda x_2.b_2)a_2$, and we indeed have $(\lambda x_1.x_1)a_1=(\lambda x_1.x_1)(b_2[a_2/x_2])\leftarrow (\lambda x_1.x_1)((\lambda x_2.b_2)a_2)\rightarrow (\lambda x_2.b_2)a_2$. If $y_1\not=x_1$, then we can simply use (Throw). The same proofs apply if $b_2$ is a variable. For $b_1=\lambda y.c_1$ and $b_2=\lambda y.c_2$, the hypothesis becomes $(\lambda x_1.\lambda y.c_1)a_1\rightarrow \lambda y.c_1[a_1/x_1]=\lambda y.c_2[a_2/x_2]\leftarrow (\lambda x_2.\lambda y.c_2)a_2$ and the induction hypothesis gives $d$ such that $(\lambda x_1.c_1)a_1\leftarrow d\rightarrow (\lambda x_2.c_2)a_2$ which implies that $\lambda y.(\lambda x_1.c_1)a_1\leftarrow \lambda y.d\rightarrow \lambda y.(\lambda x_2.c_2)a_2$. Unfortunately, we do not have $\lambda y.(\lambda x_2.c_2)a_2\rightarrow (\lambda x_2.\lambda y.c_2)a_2$. (This makes me think of $\sigma$-reduction.) A similar problem arises for applications: the $\lambda$s are not where they should be. Answer: Two counterexamples are: $(\lambda x. b x (b c)) c$ and $(\lambda x. x x) (b c)$ (Plotkin). $(\lambda x. a (b x)) (c d)$ and $a ((\lambda y. b (c y)) d)$ (Van Oostrom). The counterexample detailed below is given in The Lambda Calculus: Its Syntax and Semantics by H.P. Barendregt, revised edition (1984), exercise 3.5.11 (vii). It is attributed to Plotkin (no precise reference). I give an incomplete proof which is adapted from a proof by Vincent van Oostrom of a different counterexample, in Take Five: an Easy Expansion Exercise (1996) [PDF]. The basis of the proof is the standardization theorem, which allows us to consider only beta expansions of a certain form. Intuitively speaking, a standard reduction is a reduction that makes all of its contractions from left to right. More precisely, a reduction is non-standard iff there is a step $M_i$ whose redex is a residual of a redex to the left of the redex of a previous step $M_j$; “left” and “right” for a redex are defined by the position of the $\lambda$ that is eliminated when the redex is contracted. The standardization theorem states that if $M \rightarrow_\beta^* N$ then there is a standard reduction from $M$ to $N$. Let $L = (\lambda x. b x (b c)) c$ and $R = (\lambda x. x x) (b c)$. Both terms beta-reduce to $b c (b c)$ in one step. 
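The one-step reductions of $L$ and $R$ to the common reduct $b\,c\,(b\,c)$ can be checked mechanically. A small sketch in Python (naive substitution without $\alpha$-renaming, which is safe here since no binder is shadowed and $b$, $c$ are free):

```python
# Terms: ('var', name) | ('lam', name, body) | ('app', fun, arg)
def subst(t, name, val):
    kind = t[0]
    if kind == 'var':
        return val if t[1] == name else t
    if kind == 'lam':
        return t if t[1] == name else ('lam', t[1], subst(t[2], name, val))
    return ('app', subst(t[1], name, val), subst(t[2], name, val))

def beta_top(t):
    """Contract a top-level redex (lambda x. M) N -> M[x := N]."""
    assert t[0] == 'app' and t[1][0] == 'lam'
    return subst(t[1][2], t[1][1], t[2])

b, c, x = ('var', 'b'), ('var', 'c'), ('var', 'x')
L = ('app', ('lam', 'x', ('app', ('app', b, x), ('app', b, c))), c)
R = ('app', ('lam', 'x', ('app', x, x)), ('app', b, c))
target = ('app', ('app', b, c), ('app', b, c))  # b c (b c)
assert beta_top(L) == target and beta_top(R) == target
```

The claim of the counterexample is then that no common ancestor reduces to both `L` and `R`, which is what the argument below sets out to prove.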
Suppose that there is a common ancestor $A$ such that $L \leftarrow_\beta^* A \rightarrow_\beta^* R$. Thanks to the standardization theorem, we can assume that both reductions are standard. Without loss of generality, suppose that $A$ is the first step where these reductions differ. Of these two reductions, let $\sigma$ be the one where the redex of the first step is to the left of the other, and write $A = C_1[(\lambda z. M) N]$ where $C_1$ is the context of this contraction and $(\lambda z. M) N$ is the redex. Let $\tau$ be the other reduction. Since $\tau$ is standard and its first step is to the right of the hole in $C_1$, it cannot contract at $C_1$ nor to the left of it. Therefore the final term of $\tau$ is of the form $C_2[(\lambda z. M') N']$ where the parts of $C_1$ and $C_2$ to the left of their holes are identical, $M \rightarrow_\beta^* M'$ and $N \rightarrow_\beta^* N'$. Since $\sigma$ starts by reducing at $C_1$ and never reduces further left, its final term must be of the form $C_3[S]$ where the part of $C_3$ to the left of its hole is identical to the left part of $C_1$ and $C_2$, and $M[z \leftarrow N] \rightarrow_\beta^* S$. Observe that each of $L$ and $R$ contains a single lambda which is to the left of the application operator at the top level. Since $\tau$ preserves the lambda of $\lambda z. M$, this lambda is the one in whichever of $L$ or $R$ is the final term of $\tau$, and in that term the argument of the application is obtained by reducing $N$. The redex is at the toplevel, meaning that $C_1 = C_2 = C_3 = []$. If $\tau$ ends in $R$, then $M \rightarrow_\beta^* z z$, $N \rightarrow_\beta^* b c$ and $M[z \leftarrow N] \rightarrow_\beta^* (\lambda x. b x (b c)) c$. If $N$ has a descendant in $L$ then this descendant must also reduce to $b c$ which is the normal form of $N$. In particular, no descendant of $N$ can be a lambda, so $\sigma$ cannot contract a subterm of the form $\check{N} P$ where $\check{N}$ is a descendant of $N$. 
Since the only subterm of $L$ that reduces to $b c$ is $b c$, the sole possible descendant of $N$ in $L$ is the sole occurrence of $b c$ itself. If $\tau$ ends in $L$, then $M \rightarrow_\beta^* b z (b c)$, $N \rightarrow_\beta^* c$, and $M[z \leftarrow N] \rightarrow_\beta^* (\lambda x. x x) (b c)$. If $N$ has a descendant in $R$ then this descendant must also reduce to $c$ by confluence. At this point, the conclusion should follow easily according to van Oostrom, but I'm missing something: I don't see how tracing the descendants of $N$ gives any information about $M$. Apologies for the incomplete post, I'll think about it overnight.
{ "domain": "cs.stackexchange", "id": 12523, "tags": "lambda-calculus, term-rewriting" }
Why do we need enthalpy?
Question: In thermodynamics, the enthalpy of a system is defined as the sum of the internal energy of the system and the product of its pressure and volume. Since it is just a combination of other state properties of the system, why need we define it at all? Answer: We define everything in physics because it's useful. In this case it's useful when a fluid flows in a steady state system and you want to look at energy flows. If a fluid flows through a box, and has a change in specific enthalpy $\Delta h$, with a flow rate of $\dot m$, then the power transferred to the fluid is $\dot m \, \Delta h$ Sometimes all you know is the energy change, which can tell you the change in enthalpy, but without more information, you don't know how much is heat and how much is pressure. So by using enthalpy you can simplify the problem.
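The steady-flow power relation above reduces to a one-line calculation. A minimal sketch in Python (function name and numbers are illustrative, not from the answer):

```python
def power_from_enthalpy(m_dot, h_in, h_out):
    """Power transferred to a steadily flowing fluid: m_dot * (h_out - h_in).
    m_dot in kg/s, specific enthalpies in J/kg -> result in W."""
    return m_dot * (h_out - h_in)

# e.g. 2 kg/s of fluid whose specific enthalpy rises by 100 kJ/kg
power = power_from_enthalpy(2.0, 400e3, 500e3)  # 200e3 W, i.e. 200 kW
```

The convenience is exactly the point of the answer: one measured quantity, $\Delta h$, captures both the internal-energy change and the pressure-volume flow work without separating them.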
{ "domain": "physics.stackexchange", "id": 42649, "tags": "thermodynamics" }
Direction of velocity vector in 3D space
Question: According to a well-known textbook (Halliday & Resnick), the direction of a velocity vector, $\vec v$, at any instant is the direction of the tangent to a particle's path at that instant, as is illustrated below in 2D. According to the same textbook, the same holds for 3D. However, the tangent to a curve in 3D is not a line, but a plane! A vector could be in a plane and still take on any direction between $0^{\circ}$ and $360^{\circ}$ within that plane. How do we define and determine the direction of $\vec v$ given a particle's path in 3D? (If possible, include illustrations in your answers). Answer: An example with figures, expanding on @JEB's answer. Consider the regular (smooth) curve with parametric equation \begin{equation} \mathbf{x}\left(t\right)=\bigl[x_{1}\left(t\right),x_{2}\left(t\right),x_{3}\left(t\right)\bigr]=\left(5\cos t,5\sin t,2t\right) \tag{01} \end{equation} The parameter $\:t\:$ would represent time in case this curve is the trajectory of a particle. Now, the vector \begin{equation} \dfrac{\mathrm d\mathbf{x}}{\mathrm dt}=\Biggl(\dfrac{\mathrm dx_{1}}{\mathrm dt},\dfrac{\mathrm dx_{2}}{\mathrm dt},\dfrac{\mathrm dx_{3}}{\mathrm dt}\Biggr)=\left(-5\sin t,5\cos t,2\right) \tag{02} \end{equation} is tangent to the curve at the point $\:\mathbf{x}\left(t\right)\:$ and well-defined without any indeterminacy. In the case of particle motion this is the velocity vector of the particle. In order to normalize this vector we have \begin{equation} \left\Vert\dfrac{\mathrm d\mathbf{x}}{\mathrm dt}\right\Vert=\sqrt{29} \tag{03} \end{equation} This norm, the speed of the particle, is a function of $\:t\:$ in general. Here it happens to be constant. 
From (02) and (03) we produce the unit vector \begin{equation} \mathbf{t}=\dfrac{\dfrac{\mathrm d\mathbf{x}}{\mathrm dt}}{\left\Vert\dfrac{\mathrm d\mathbf{x}}{\mathrm dt}\right\Vert}=\sqrt{\frac{1}{29}}\left(-5\sin t,5\cos t,2\right) \tag{04} \end{equation} The vector $\:\mathbf{t}\left(t\right)\:$ is the unit tangent vector to the curve at point $\:\mathbf{x}\left(t\right)$. \begin{equation} \boxed{\:\mathbf{t}=\sqrt{\frac{1}{29}}\left(-5\sin t,5\cos t,2\right)\:} \tag{05} \end{equation} Differentiating again we have \begin{equation} \dfrac{\mathrm d\mathbf{t}}{\mathrm dt}=\sqrt{\frac{1}{29}}\left(-5\cos t,-5\sin t,0\right) \tag{06} \end{equation} a vector normal to $\:\mathbf{t}\:$ with norm \begin{equation} \left\Vert\dfrac{\mathrm d\mathbf{t}}{\mathrm dt}\right\Vert=5\sqrt{\frac{1}{29}} \tag{07} \end{equation} Again this norm is a function of $\:t\:$ in general. From (06) and (07) we produce the unit vector \begin{equation} \mathbf{n}=\dfrac{\dfrac{\mathrm d\mathbf{t}}{\mathrm dt}}{\left\Vert\dfrac{\mathrm d\mathbf{t}}{\mathrm dt}\right\Vert}=\left(-\cos t,-\sin t,0\right) \tag{08} \end{equation} The vector $\:\mathbf{n}\left(t\right)\:$ is the principal normal unit vector to the curve at point $\:\mathbf{x}\left(t\right)$. 
\begin{equation} \boxed{\:\mathbf{n}=\left(-\cos t,-\sin t,0\right)\vphantom{\sqrt{\frac{1}{29}}}\:} \tag{09} \end{equation} Finally we construct the unit vector \begin{equation} \mathbf{b}=\mathbf{t}\boldsymbol{\times}\mathbf{n}=\sqrt{\frac{1}{29}} \begin{bmatrix} \mathbf{e}_{1} & \mathbf{e}_{2} & \mathbf{e}_{3}\vphantom{\dfrac{\dfrac{}{}}{}}\\ -5\sin t & 5\cos t & 2\vphantom{\dfrac{\dfrac{}{}}{}}\\ -\cos t & -\sin t & 0 \vphantom{\dfrac{\dfrac{}{}}{}} \end{bmatrix} =\sqrt{\frac{1}{29}}\left(2\sin t,-2\cos t,5\right) \tag{10} \end{equation} so \begin{equation} \boxed{\:\mathbf{b}=\sqrt{\frac{1}{29}}\left(2\sin t,-2\cos t,5\right)\vphantom{\sqrt{\frac{1}{29}}}\:} \tag{11} \end{equation} The vector $\:\mathbf{b}\left(t\right)\:$ is the unit binormal vector to the curve at point $\:\mathbf{x}\left(t\right)$. The triad of vectors $\:\left(\mathbf{t},\mathbf{n},\mathbf{b}\right)\:$ forms a right-handed orthonormal triplet, as shown in Figures, called the moving trihedron. For the three planes, sides of the trihedron, we have the following terminology \begin{align} \text{plane }\left(\mathbf{t},\mathbf{n}\right) & =\textit{Osculating plane} \tag{12a}\\ \text{plane }\! \left(\mathbf{n},\mathbf{b}\right) & =\textit{Normal plane} \tag{12b}\\ \text{plane }\left(\mathbf{b},\mathbf{t}\right) & =\textit{Rectifying plane} \tag{12c} \end{align} (The original answer linked 3D images and videos illustrating the moving trihedron.)
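The unit tangent and the constant speed $\sqrt{29}$ from equations (02)-(05) can be cross-checked numerically with a finite difference; a short sketch (the parameter value and step size are arbitrary):

```python
import numpy as np

# Position on the helix of Eq. (01)
x = lambda t: np.array([5*np.cos(t), 5*np.sin(t), 2*t])

t0 = 1.234                            # arbitrary parameter value
h = 1e-6                              # finite-difference step
v = (x(t0 + h) - x(t0 - h)) / (2*h)   # numerical velocity vector
speed = np.linalg.norm(v)

# Compare with the exact results: speed sqrt(29), unit tangent of Eq. (05)
T_exact = np.array([-5*np.sin(t0), 5*np.cos(t0), 2]) / np.sqrt(29)
assert abs(speed - np.sqrt(29)) < 1e-6
assert np.allclose(v / speed, T_exact, atol=1e-6)
```

The same finite-difference trick applied to the unit tangent itself would recover the principal normal of equation (09).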
{ "domain": "physics.stackexchange", "id": 44736, "tags": "kinematics, velocity, vectors, differentiation" }
Why the mesons $\pi^0$ and $\eta$ are "degenerate"
Question: I don't know anything about elementary particles, but reading the book by Zeidler on p158, table 2.8, one can observe the quark content of baryons and mesons. It is strange that the $\pi^0$ meson has either(?) quark components $u\overline{u}$ and $d\overline{d}$ and similarly $\eta$ has components $u\overline{u}$, $d\overline{d}$ and $s\overline{s}$. Why do these hadrons appear to be "degenerate", i.e. there are two/three possible options in which the particle can be built? Furthermore, why are the pairs $u\overline{u}$ and $d\overline{d}$ repeated in both mesons? Answer: They are not degenerate in the technical sense: the η has three times the mass of the π. They are different blends, with differing symmetries, of similar quark-antiquark pairs. But not quite, as you see from the quantum-mechanical wavefunctions of these valence quarks--the second and third rows of this list. These wavefunctions, $\mathrm{\tfrac{u\bar{u} - d\bar{d}}{\sqrt{2}}}$, (π), and $\mathrm{\tfrac{u\bar{u} + d\bar{d} - 2s\bar{s}}{\sqrt{6}}}$, (η), are superpositions of two-fermion states, but they are only evocative of their symmetry structure; in addition, there are zillions, indefinite numbers, of "sea" quark-antiquark pairs, and gluons, comprising the actual mesons, implied but omitted. There are technical explanations of the mass difference, but they might not make sense to you. You might think of them as different constructions you may arrange out of a small number of the same lego blocks.
{ "domain": "physics.stackexchange", "id": 71328, "tags": "elementary-particles, mesons" }
Problem in verifying zero blocking property of a zero of a multiple input multiple output system
Question: I have a multiple input multiple output control system, in particular a two-input two-output system. I want to find the zero state direction and the zero input direction of the system, which has a zero at $s=0$. So, I have found them with the following code: s = tf('s'); P = 1/(s+5); C = 6/s; S = 1/(1+P*C); T = P*C/(1+P*C); G_2 = [T S; S -S]; %transfer matrix S = ss(G_2) Smr = minreal(S) A = Smr.A B = Smr.B C = Smr.C D = Smr.D trz = tzero(A,B,C,D) RSM_1 = [0*eye(2)-A B; -C D] rRSM_1 = rank(RSM_1) null = null(RSM_1) %verifying zero blocking property x0 = [-0.4388; -0.1482]; u0 = [-0.0000; 0.8863]; t = linspace(0,5); u = exp(t); [y,x] = lsim(A,B*u0,C,D*u0,u,t,x0); plot(t,y) axis([0 5 -1 1]) xlabel('time, seconds') title('response to x_{0} and u(t)=u_{0}e^{t}') and I want to verify the zero blocking property, which means that if I apply an input $u(t)=u_0e^{t}$ starting from $x(0)=x_0$, where $u_0$ is the input direction and $x_0$ is the state direction, the output is zero. I have found it analytically in this way: I know that, since the system zeros of a MIMO transfer function are those values $s = z$ which make the system matrix $S(z)$ lose rank, there exists at least one direction $u_0$ (zero input direction) and an n-dimensional vector $x_0$ (zero state direction), not simultaneously zero, such that $\begin{pmatrix} zI-A & -B\\ C&D \end{pmatrix}$ $\begin{bmatrix} x_0\\ u_0 \end{bmatrix}$ $=$ $\begin{bmatrix} 0\\ 0 \end{bmatrix}$ And by solving this system we obtain the zero state direction $x_0$ and the zero input direction $u_0$. In order to make the question even clearer I can make an example using a different transfer function which has a simpler minimal realization. The example is taken from the paper linked here, and it can be found at the bottom of that paper. 
The system in this case is: $P(s)=\begin{bmatrix} \frac{2}{s^2+3s+2} & \frac{2s}{s^2+3s+2}\\ \frac{-2s}{s^2+3s+2}& \frac{-2}{s^2+3s+2} \end{bmatrix}$ Its minimal realization is: $A=\begin{bmatrix} -1 &0 &0 \\ 0&-2 &0 \\ 0& 0 & -2 \end{bmatrix}$ $B=\begin{bmatrix} 2 &-2 \\ -2& 4\\ -4& 2 \end{bmatrix}$ $C=\begin{bmatrix} 1 &1 &0 \\ 1 &0 &1 \end{bmatrix}$ $D=\begin{bmatrix} 0 &0 \\ 0& 0 \end{bmatrix}$ and after using the same code, which can also be found in the paper, the simulation for verifying the zero blocking property is: s = tf('s'); P = [2/(s^2+3*s+2) 2*s/(s^2+3*s+2); -2*s/(s^2+3*s+2) -2/(s^2+3*s+2)] S = ss(P) Smr = minreal(S) A = Smr.A B = Smr.B C = Smr.C D = Smr.D trz = tzero(A,B,C,D) RSM_1 = [eye(3)-A B; -C D] rRSM_1 = rank(RSM_1) null(RSM_1) x0 = [0.5345; -0.5345; -0.5345]; u0 = [0.2673; -0.2673]; t = linspace(0,5); u = exp(t); [y,x] = lsim(A,B*u0,C,D*u0,u,t,x0); plot(t,y),grid axis([0 5 -1 1]) xlabel('time, seconds') title('response to x_{0} and u(t)=u_{0}e^{t}') and the output of this is: which is correct, since the output goes to zero. But what I obtain with my code is: I am still working on this, and by using the Matlab command isstable(T) it tells me that the closed loop is not stable. But if I run the step response, step(T), I find: what am I doing wrong? Answer: The transmission zeros of a MIMO system are obtained by solving \begin{align} \dot{x}^* = A\,x^* + B\,u^*, \\ y = C\,x^* + D\,u^*, \end{align} for $x^*$ and $u^*$ such that $y = 0$. 
By using the Laplace variable $s$ for the time derivative, that system of equations can also be written as $$ \begin{bmatrix} 0 \\ 0 \end{bmatrix} = \begin{bmatrix} A - s\,I & B \\ C & D \end{bmatrix} \begin{bmatrix} x^* \\ u^* \end{bmatrix}, $$ which can be seen as the generalized eigenvalue problem $\mathcal{A}\,v = \lambda\,\mathcal{B}\,v$, with $\lambda$ equivalent to $s$, $$ \mathcal{A} = \begin{bmatrix} A & B \\ C & D \end{bmatrix}, \quad \mathcal{B} = \begin{bmatrix} I & 0 \\ 0 & 0 \end{bmatrix}, \quad v = \begin{bmatrix} x^* \\ u^* \end{bmatrix}. $$ I believe that tzero in MATLAB also solves such a generalized eigenvalue problem to obtain the transmission zeros (however, by using eig yourself the associated eigenvectors are more accurate than the nullspace calculations). The associated solution in time for both $x(t)$ and $u(t)$ can be obtained using $$ \begin{bmatrix} x(t) \\ u(t) \end{bmatrix} = e^{\lambda\,t}\, \begin{bmatrix} x^* \\ u^* \end{bmatrix}, $$ so you can't just pick any value for $\lambda$ when generating such a solution. You could now simulate the system using $x(0)=x^*$ as the initial condition and the expression $u(t) = e^{\lambda\,t}u^*$ with lsim. However, I believe that lsim uses zero-order hold for the input, so the actual applied input will deviate slightly, causing the output $y$ to deviate away from zero with that approach. 
Instead, if you look at the analytical solution, both the output and the error in the state dynamics should remain practically zero (for non-positive $\lambda$): n = length(A); m = length(D); [V,L] = eig([A B; C D], diag([ones(1,n) zeros(1,m)])); ind = 1; % Index of associated eigenvalue (avoid infinite eigenvalues) lambda = L(ind,ind); x0 = V(1:n, ind); u0 = V(1+n:n+m, ind); N = 1e3; t = linspace(0, 5, N); x = @(t) x0 * exp(lambda* t); u = @(t) u0 * exp(lambda* t); y = @(t) C * x(t) + D * u(t); e = @(t) A * x(t) + B * u(t) - lambda* x(t); Y = zeros(m, N); E = zeros(n, N); for k = 1 : N Y(:,k) = y(t(k)); E(:,k) = e(t(k)); end figure, plot(t, E) figure, plot(t, Y)
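For readers without MATLAB, the location of the transmission zero can be cross-checked in plain Python (a minimal sketch I am adding, not part of the original answer): the Rosenbrock system matrix $[A - sI,\ B;\ C,\ D]$ of the quoted minimal realization must lose rank at a transmission zero, so its determinant vanishes there.

```python
# Cross-check of the transmission zero of the realization quoted above:
# det([A - s I, B; C, D]) must vanish at a transmission zero. For this
# system the zero sits at s = 1, matching the e^t zero-blocking input.
def det(m):
    # Laplace expansion along the first row; fine for small integer matrices
    if len(m) == 1:
        return m[0][0]
    total = 0
    for j, pivot in enumerate(m[0]):
        minor = [row[:j] + row[j + 1:] for row in m[1:]]
        total += (-1) ** j * pivot * det(minor)
    return total

A = [[-1, 0, 0], [0, -2, 0], [0, 0, -2]]
B = [[2, -2], [-2, 4], [-4, 2]]
C = [[1, 1, 0], [1, 0, 1]]
D = [[0, 0], [0, 0]]

def rosenbrock_det(s):
    top = [[A[i][j] - (s if i == j else 0) for j in range(3)] + B[i] for i in range(3)]
    bottom = [C[i] + D[i] for i in range(2)]
    return det(top + bottom)

print(rosenbrock_det(1))  # 0 -> s = 1 is a transmission zero
print(rosenbrock_det(0))  # nonzero -> s = 0 is not
```

With integer entries the arithmetic is exact, so the zero determinant at $s=1$ is not a rounding artifact.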
{ "domain": "dsp.stackexchange", "id": 8397, "tags": "signal-analysis, control-systems" }
How to identify the sign of a derived nondimensional parameter and its physical meaning?
Question: I think that a nondimensional group is ordinarily defined to be a positive value in a physical problem. But in some particular cases, we may need to decide the sign of a derived dimensionless parameter. For example, I have defined the nondimensional gravity as $$G=\frac{gL^3}{\nu^2},$$ where $g$, $L$ and $\nu$ are the gravitational acceleration, length scale, and kinematic viscosity. Then I derived another dimensionless group and defined it as a Rayleigh number, because of the similarity in form, $$Ra=\frac{gL^3\beta\Theta}{\alpha \nu},$$ where $\beta$ and $\alpha$ are the thermal expansion coefficient and diffusivity. Note that the temperature scale $\Theta$ here is given by a fraction instead of a temperature difference in the system ($\Theta=\theta_0-\theta_{ref.}$). The problem (my confusion) is that in the normal definition $Ra$ (or $\Theta$) increases with $\theta_0$; however, the $\Theta$ defined by the fraction decreases as $\theta_0$ increases. My question is: how can I identify the sign of my redefined $Ra$ correctly so as to be consistent with the usual definition? Is it appropriate if I simply add a minus sign in front of $Ra$? Thank you! Answer: Let's consider Rayleigh-Bénard convection as a natural convective system.
The governing equations are the incompressible Navier-Stokes equations with the Boussinesq approximation as a body force accounting for buoyant forces due to temperature variations: $$\boldsymbol{v}\cdot\boldsymbol{\nabla}\boldsymbol{v}=\nu\nabla^2\boldsymbol{v} + \beta g\left(T-T_r\right)\boldsymbol{e}_y$$ and the thermal advection-diffusion equation: $$\boldsymbol{v}\cdot\boldsymbol{\nabla}T=\alpha\nabla^2T$$ $\boldsymbol{v}$ is the velocity field, $T$ is the temperature field, $\nu$ is the kinematic viscosity, $\alpha$ is the thermal diffusivity, $\beta$ is the coefficient of thermal expansion, $g$ is the gravitational acceleration, $T_r$ is the reference temperature at which $\beta$ was determined and $\boldsymbol{e}_y=\left(0,-1,0\right)$ is the unit vector in the direction of the acceleration due to gravity. Let's non-dimensionalize to determine the relevant dimensionless numbers: $$\tilde{y}=\frac{y}{H}\quad\tilde{\boldsymbol{v}}=\frac{\boldsymbol{v}}{U}\quad\tilde{T}=\frac{T-T_r}{\Delta T}$$ where $H$ is the length scale of the system (channel height), $U$ is the velocity scale and $\Delta T$ is the imposed temperature difference over the channel height. The non-dimensionalized governing equations become: $$\tilde{\boldsymbol{v}}\cdot\tilde{\boldsymbol{\nabla}}\tilde{\boldsymbol{v}}=\frac{1}{\mathrm{Re}}\tilde{\nabla}^2\tilde{\boldsymbol{v}} + \mathrm{\Pi}\tilde{T}\boldsymbol{e}_y$$ $$\tilde{\boldsymbol{v}}\cdot\tilde{\boldsymbol{\nabla}}\tilde{T}=\frac{1}{\mathrm{Pe}}\tilde{\nabla}^2\tilde{T}$$ with the relevant dimensionless numbers identified as: $$\mathrm{Re}=\frac{UH}{\nu}\quad\mathrm{Pr}=\frac{\nu}{\alpha}\quad\mathrm{Pe}=\mathrm{Re}\mathrm{Pr}\quad\mathrm{\Pi}=\frac{\beta gH\Delta T}{U^2}$$ Note: $\mathrm{\Pi}$ at the moment is just some arbitrary dimensionless number which has not been identified yet. Unlike in forced convection, the velocity scale $U$ is not imposed but is due to the dynamics of the buoyant system.
The system becomes interesting once inertial terms become similar to viscous terms such that $\mathrm{Re}\sim 1$. We can then identify: $$U\sim\frac{\nu}{H}$$ and the non-dimensionalized governing equations become: $$\tilde{\boldsymbol{v}}\cdot\tilde{\boldsymbol{\nabla}}\tilde{\boldsymbol{v}}=\tilde{\nabla}^2\tilde{\boldsymbol{v}} + \mathrm{\Pi}\tilde{T}\boldsymbol{e}_y$$ $$\tilde{\boldsymbol{v}}\cdot\tilde{\boldsymbol{\nabla}}\tilde{T}=\frac{1}{\mathrm{Pe}}\tilde{\nabla}^2\tilde{T}$$ $$\mathrm{Pe}=\mathrm{Pr}\quad\mathrm{\Pi}=\frac{\beta gH^3\Delta T}{\nu^2}=\frac{\mathrm{Ra}}{\mathrm{Pr}}$$ where we have identified the Rayleigh number, $\mathrm{Ra}$, in the definition of $\mathrm{\Pi}$: $$\mathrm{Ra}=\frac{\beta gH^3\Delta T}{\nu\alpha}$$ To answer your (original) questions: If $\beta$ is a constant, then the definition of $\mathrm{Ra}$ contains a temperature difference, not a temperature ratio. However, for ideal gases we can identify $\beta\sim\frac{1}{T_m}$ (where $T_m$ is the averaged temperature) and then it indeed becomes a ratio. The result is still that increasing the temperature difference across the channel results in an increase of the Rayleigh number. A sign in a dimensionless number is irrelevant information for a dimensional analysis. We want to know how different mechanisms (inertia vs viscous forces, etc.) relate to each other such that we may simplify the problem. The sign of course plays a crucial role in the equations, and as such it is included through the unit vector $\boldsymbol{e}_y$. It is a good habit to always define the terms in dimensionless numbers as absolute values.
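To make the sign bookkeeping above concrete, here is a small numeric sketch; the property values are rough textbook numbers for water near 20 C that I am assuming for illustration, not values taken from the answer.

```python
# Ra = beta * g * H^3 * dT / (nu * alpha). The sign is carried entirely by
# the temperature difference dT: flipping which wall is hot flips the sign
# of Ra but not its magnitude, and the magnitude |Ra| is what the
# dimensional analysis actually compares against the critical value.
def rayleigh(beta, g, H, dT, nu, alpha):
    return beta * g * H**3 * dT / (nu * alpha)

beta, nu, alpha = 2.1e-4, 1.0e-6, 1.4e-7   # 1/K, m^2/s, m^2/s (assumed values)
Ra_heated_below = rayleigh(beta, 9.81, 0.01, +10.0, nu, alpha)
Ra_heated_above = rayleigh(beta, 9.81, 0.01, -10.0, nu, alpha)
print(round(Ra_heated_below))               # ~1.5e5, well above critical (~1708)
print(Ra_heated_below == -Ra_heated_above)  # True: only the sign differs
```

The classical critical value of about 1708 for onset of convection between rigid plates applies to $|\mathrm{Ra}|$; the sign only records which side is heated.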
{ "domain": "physics.stackexchange", "id": 30915, "tags": "dimensional-analysis" }
Most general second-rank symmetric tensor in Einstein theory
Question: I am reading MTW page 407, Exercise 17.1. (a) Show that the most general second-rank, symmetric tensor constructable from Riemann and $g$, and linear in Riemann, is $$a R_{\alpha\beta} + b R g_{\alpha\beta} + \Lambda g_{\alpha\beta}\tag{17.10} $$ where $a$, $b$, and $\Lambda$ are constants. How can I show this statement? As far as I know, the only second-rank symmetric tensors available in GR are $g_{\alpha\beta}$ and $R_{\alpha\beta}$, so their linear combination might be a nice guess, but I think that is not enough to prove the statement. Answer: Consider a contraction between $R_{\mu\nu\rho\sigma}$ and any number of $g$'s, such that the resulting tensor has two indices. Without loss of generality we can assume that each $g$ is contracted with $\text{Riemann}$ on both indices or not at all, and that no $g$ is contracted with another $g$. The first is because contracting once just raises an index, the second because contracting $g$'s with each other either produces a constant or a new $g$. Thus the only possibilities are $$R_{\mu\nu\rho\sigma}g^{\mu\rho}g^{\nu\sigma}g_{\alpha\beta}$$ and $$R_{\mu\nu\rho\sigma}g^{\nu\sigma}.$$ Strictly speaking, we should also include the tensors corresponding to permuting the indices on $R_{\mu\nu\rho\sigma}$, but by the symmetry properties of the Riemann tensor, these either vanish or are proportional to the tensors above.
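The final "vanish or are proportional" step can be illustrated numerically. This is my own sketch: it uses the identity matrix as a stand-in metric and only imposes the pair symmetries $R_{abcd}=-R_{bacd}=-R_{abdc}=R_{cdab}$ (the first Bianchi identity is not needed for these two checks).

```python
# Build a random rank-4 tensor with the Riemann pair symmetries, then check:
#  - contracting over an antisymmetric index pair vanishes identically;
#  - the Ricci-type contraction over (b, d) is a symmetric 2-index tensor.
import random

n = 3
raw = [[[[random.random() for _ in range(n)] for _ in range(n)]
        for _ in range(n)] for _ in range(n)]

def sym(t):
    """Impose antisymmetry in each index pair and symmetry under pair exchange."""
    out = [[[[0.0] * n for _ in range(n)] for _ in range(n)] for _ in range(n)]
    for a in range(n):
        for b in range(n):
            for c in range(n):
                for d in range(n):
                    v = (t[a][b][c][d] - t[b][a][c][d]
                         - t[a][b][d][c] + t[b][a][d][c])
                    w = (t[c][d][a][b] - t[d][c][a][b]
                         - t[c][d][b][a] + t[d][c][b][a])
                    out[a][b][c][d] = (v + w) / 8.0
    return out

R = sym(raw)
# Contraction over the antisymmetric first pair vanishes
zero = [[sum(R[a][a][c][d] for a in range(n)) for c in range(n)] for d in range(n)]
# Ricci-type contraction gives a symmetric tensor
ricci = [[sum(R[a][b][c][b] for b in range(n)) for c in range(n)] for a in range(n)]
print(max(abs(zero[d][c]) for d in range(n) for c in range(n)))                      # 0.0
print(all(abs(ricci[a][c] - ricci[c][a]) < 1e-12 for a in range(n) for c in range(n)))  # True
```

This mirrors the counting argument: up to the symmetries, every two-index contraction of Riemann either dies or reduces to the Ricci tensor.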
{ "domain": "physics.stackexchange", "id": 17760, "tags": "homework-and-exercises, general-relativity, metric-tensor, curvature" }
PHP program to generate random phone number
Question: I created a PHP program to generate random phone numbers. A valid phone number must be exactly 11 digits in length and must start with one of the values listed in the $a array. How can I improve it? Specifically, how can I improve its readability and its performance, assuming I want to generate millions of results. <?php function rand_(){ $digits_needed=7; $random_number=''; // set up a blank string $count=0; while ( $count < $digits_needed ) { $random_digit = mt_rand(0, 8); $random_number .= $random_digit; $count++; } return $random_number; } $a=array('0812', '0813' ,'0814' ,'0815' ,'0816' ,'0817' ,'0818','0819','0909','0908'); $i=0; while($i<21){ $website = $a[mt_rand(0, count($a) - 1)]; print $website . rand_().'<br>'; $i++; } Answer: Use accurate and meaningful names for functions and variables to improve readability. Condense your numeric loop conditions into a for loop instead of a while loop. Avoid declaring single-use variables. Make functions versatile by passing output-altering arguments with the call. Write default argument values when possible/reasonable so that, in some cases, no arguments need to be passed for the most common scenario. Avoid counting a static array in a loop. If you need to count it, count it once before entering the loop and reference the count variable. If you prefer mt_rand's randomness, use it in place of my array_rand() call below, but obey #6 about counting. Obey PSR Coding Standards. Spacing and tabbing don't improve performance, but they sure make your code easier to read.
Some suggested applications of my generalized advice: Code: (Demo) function randomNumberSequence($requiredLength = 7, $highestDigit = 8) { $sequence = ''; for ($i = 0; $i < $requiredLength; ++$i) { $sequence .= mt_rand(0, $highestDigit); } return $sequence; } $numberPrefixes = ['0812', '0813', '0814', '0815', '0816', '0817', '0818', '0819', '0909', '0908']; for ($i = 0; $i < 21; ++$i) { echo $numberPrefixes[array_rand($numberPrefixes)] , randomNumberSequence() , "\n"; } Possible Output: 08161776623 08157676208 08188430651 08187765326 08176077144 09087477073 08127415352 08191681262 08168828747 08195023836 08198008111 09096738254 08162004285 08166810731 08130133373 09093214002 08154125422 08160702315 08143817877 08194806336 08133183466
{ "domain": "codereview.stackexchange", "id": 34763, "tags": "php, random" }
Would magnetic flux be necessary for analogous systems?
Question: I learned about the analogy between mechanical and electrical systems a few months back (with the help of Feynman). Yesterday, my professor was lecturing on this topic, when she said that electrical systems can be written in two ways - either by using the voltage source, or by using the current source (drawing the equivalent analogs, and writing differential equations). I've always thought current sources to be abstract constructs for the electrical engineers to solve problems in their boring way, node analysis and stuff, since it's voltage that drives the free electrons (hence, current). Anyway, my confusion was about something called a "Force-Current" analogous system, where the situation has gone crazy. Here's my (rather ugly) drawing of her example. We were asked to draw the equivalent electrical analogs. (It's a convention that whenever we make use of a current source, we analyze the circuit using nodes.) I can understand the $F\to V$ system alright, because it makes use of the analog: $x\to q$ $m\to L$ $c\to R$ $k\to 1/C$ But it becomes hard to perceive when the current source $F\to I$ comes into play. Now, the analogs are crazy... (Note: this analog wasn't discussed by Feynman) $x\to \phi$ $m\to C$ $c\to 1/R$ $k\to 1/L$ Is it right, actually? I'm skeptical about this kind of mapping. In this case, $G=1/R$ is conductance (which is the thing analogous to the damping factor $c$). Would this mean that conductance is the actual resistance here? And, weirder still, capacitance is analogous to mass. The DEs for this system are somewhat long, so I'll choose an RLC circuit to express my point. In terms of $F\to V$ (using $x\to q$): $$V_R=R \frac{dq}{dt},\ \ V_L=L\frac{d^2 q}{dt^2},\ \ V_C=\frac{q}C$$ Now, using $F\to I$ (using $x\to \phi$): $$I_R=\frac{1}R\frac{d\phi}{dt},\ \ I_L=\frac{\phi}{L},\ \ I_C=C\frac{d^2 \phi}{dt^2}$$ The $x\to q$ was fine, because the motion of charge is current, and that's the basis for all electrical systems.
But, the $x\to \phi$ is inconceivable. The magnetic flux, how $\phi$ changes, induced emf, etc. are needed for inductors. Yeah, but how can they be applied to resistors or capacitors? With this framework for writing the differential equations, it seems as if magnetic flux is the clockwork behind the working of such a system. This horror made me think that the $F\to I$ analog (using $\phi$) is just another abstract mathematical construct for writing differential equations. Am I right? Is it reasonable at all? Answer: I think there is something wrong with your mapping. Looking at http://lpsa.swarthmore.edu/Analogs/ElectricalMechanicalAnalogs.html , I see the following table: This is inconsistent with the mapping you are showing. I can understand this table - I can't understand yours. I think an error crept in - which would reasonably explain your confusion. Looking forward to comments!
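One way to see why such mappings are at least self-consistent: after substituting the analog quantities, all three systems reduce to the same second-order ODE, so a single integrator covers them all. A minimal sketch with arbitrary parameter values (my own, not taken from the question or answer):

```python
# Integrate inertia*y'' + damping*y' + stiffness*y = drive with a
# semi-implicit Euler step. The same routine covers the mechanical system
# (m, c, k, F), a force-voltage analog (L, R, 1/C, V), and a
# force-current analog (C, 1/R, 1/L, I) once the parameters are mapped.
def simulate(inertia, damping, stiffness, drive, t_end=5.0, dt=1e-4):
    y, ydot = 0.0, 0.0
    for _ in range(int(t_end / dt)):
        yddot = (drive - damping * ydot - stiffness * y) / inertia
        ydot += yddot * dt   # update velocity first (semi-implicit Euler)
        y += ydot * dt
    return y

x_final = simulate(2.0, 0.5, 3.0, 1.0)    # m x'' + c x' + k x = F
q_final = simulate(2.0, 0.5, 3.0, 1.0)    # L q'' + R q' + (1/C) q = V with L=m, R=c, 1/C=k
phi_final = simulate(2.0, 0.5, 3.0, 1.0)  # C p'' + (1/R) p' + (1/L) p = I with C=m, 1/R=c, 1/L=k
print(x_final == q_final == phi_final)    # True: one ODE, three physical readings
```

Whether the state variable is read as position, charge, or flux linkage is bookkeeping; the dynamics are identical once the coefficients are mapped.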
{ "domain": "physics.stackexchange", "id": 15320, "tags": "electric-circuits, electrical-engineering, linear-systems" }
librviz display sensor_msgs/LaserScan
Question: Hi, I want to use librviz (http://pr.willowgarage.com/downloads/groovy-rviz-api-for-review/rviz/html/index.html) in a Qt GUI, subscribe to a topic and visualize some data. I followed the tutorial http://docs.ros.org/indigo/api/librviz_tutorial/html/index.html and it works fine. From the tutorial I've got the following: render_panel_ = new rviz::RenderPanel(); lvw->QRvizLayout->addLayout( controls_layout ); lvw->QRvizLayout->addWidget( render_panel_ ); manager_ = new rviz::VisualizationManager( render_panel_ ); render_panel_->initialize( manager_->getSceneManager(), manager_ ); manager_->initialize(); grid_ = manager_->createDisplay( "rviz/Grid", "adjustable grid", true ); Now, I want to subscribe to the topic "fts/laserscan_front", which is publishing "sensor_msgs/LaserScan" messages. Is there any tutorial out there on how to subscribe to a topic and visualize the messages with librviz and Qt? Originally posted by Roman2508 on ROS Answers with karma: 11 on 2014-09-02 Post score: 1 Answer: The same way you created your rviz::Display* grid_ (assuming this is what you did), you should create an additional display in your code for each corresponding display you would add if you were using RViz itself. For instance, to add a LaserScan display, you should add to your header file: rviz::Display* laser_; Then, in your source file, you should do something like: laser_ = manager_->createDisplay("rviz/LaserScan", "Laser scan", true); // configure rviz/LaserScan laser_->subProp("Topic")->setValue("fts/laserscan_front"); laser_->subProp("Style")->setValue("Boxes"); You can add the display in RViz and check which settings/properties you would be able to change there. Then just adapt the lines of code above to set the properties you want. The same idea works for any other display type you can add in RViz. Originally posted by Murilo F. M.
with karma: 806 on 2014-09-02 This answer was ACCEPTED on the original site Post score: 2 Original comments Comment by lucasw on 2016-02-01: It would be great to get an example like this into http://docs.ros.org/jade/api/librviz_tutorial/html/index.html - the grid is too simple, a second tutorial with a Display that subscribes to a topic would make things much clearer. Comment by double on 2017-09-19: Hello Lucasw, this link only show one tutorial that librviz shows grid. Could you please provide the second tutorial again? Comment by lucasw on 2017-09-19: I never made a second tutorial, I do have some example code here though: https://github.com/lucasw/rviz_camera_stream/blob/librviz_node/librviz_camera/src/panel.cpp
{ "domain": "robotics.stackexchange", "id": 19269, "tags": "ros, librviz" }
Interpretation of surface integral of vector field over surface
Question: Is it correct to interpret the surface integral of a vector function $\mathbf{v}$ over four sides of a cube as the rate of flow of fluid (in mass per unit time) that would flow out of the cube when those sides are opened, given that the cube has an "infinite" amount of fluid (so it won't run out), and that $\mathbf{v}$ gives the rate of flow of fluid in mass per unit time per unit area? Answer: What you are looking for is called flux, specifically this definition. The answer is: yes. The $\mathbf{v}$, usually denoted $\mathbf{j}$, is named the current density in electromagnetism and simply flux (the other definition of it, anyway) in other fields, and you can define it for mass, electrical charge, heat, number of particles, etc. It's defined as the amount of matter that will pass through an infinitesimal surface with normal vector $\hat{\mathbf{j}}$ per infinitesimal time, divided by the surface area and the time (notice this definition matches yours). Furthermore, via the continuity equation, you can connect this integral to the actual change in quantity over a specific volume as well.
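That interpretation is easy to verify numerically. A minimal sketch (my own, not from the answer): for $\mathbf{v}=(x,y,z)$, the divergence theorem predicts an outward flux equal to $\nabla\cdot\mathbf{v}$ times the volume, i.e. $3$ for the unit cube.

```python
# Approximate the outward flux of v through the six faces of the unit cube
# by summing v . n over small square patches (midpoint rule on each face).
def cube_flux(v, steps=50):
    h = 1.0 / steps
    total = 0.0
    for axis in range(3):                             # face pairs perpendicular to x, y, z
        for side, sign in ((0.0, -1.0), (1.0, 1.0)):  # outward normal points along -/+ axis
            other = [k for k in range(3) if k != axis]
            for i in range(steps):
                for j in range(steps):
                    p = [0.0, 0.0, 0.0]
                    p[axis] = side
                    p[other[0]] = (i + 0.5) * h
                    p[other[1]] = (j + 0.5) * h
                    total += sign * v(*p)[axis] * h * h   # v . n dA on this patch
    return total

print(cube_flux(lambda x, y, z: (x, y, z)))  # -> 3.0 (up to rounding), = div(v) * volume
```

Each patch contributes the mass-per-time crossing it; summing over all faces gives the total rate of flow out of the cube, exactly the picture in the question.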
{ "domain": "physics.stackexchange", "id": 59740, "tags": "fluid-dynamics, vector-fields, calculus" }
Why is peripheral vision not bleached by daylight?
Question: In daylight, rods are known to be bleached: we have to wait some time after going into darkness before scotopic vision becomes effective. But, as I understand, peripheral vision is also mostly due to rods, since away from the fovea, cone density rapidly declines. But I wonder then: if rods normally saturate in bright light, why does peripheral vision still work in daylight? Answer: Short answer In photopic lighting, peripheral vision is mediated by cones. Background The rods are indeed saturated in daylight, and even at twilight (source: Nature). However, the cones are active, and although their density in the periphery is low, they are still present (Fig. 1). Hence, peripheral vision in photopic lighting conditions is mediated by cones. Because of their low density in the periphery, however, visual acuity is low (Kolb, 2012). Fig. 1. Rod & cone densities in the retina. (Kolb, 2012) Reference - Kolb, Photoreceptors, In: Webvision: The Organization of the Retina and Visual System. Utah University
{ "domain": "biology.stackexchange", "id": 9199, "tags": "neuroscience, neurophysiology, vision, sensation" }
Issue with 1D particle in a box position expectation value
Question: In the simplest particle-in-a-box experiment, a particle is confined to a potential where $V(x)=0$ on the interval $[0,a]$ and $V(x) = \infty$ otherwise. Then the energy eigenvalue equation, $\hat{H} \psi_n = E_n \psi_n$, is solved, where $\hat{H} = -\frac{\hbar^2}{2m} \frac{\partial^2}{\partial x^2}$, and the energy eigenstates are found to be $\psi_n(x) = \sqrt{\frac{2}{a}} \sin(k_n x)$. This is usually one of the very first exercises in an introductory physics class. Now, these eigenstates were found using the Hamiltonian, so they are eigenstates of the Hamiltonian. I see a lot of places on the internet that then try to find an expectation value for position, $\langle x \rangle$, using the eigenstates of the Hamiltonian. But this Hamiltonian operator and the position operator don't commute. Therefore, these two operators don't have a common set of eigenstates. Therefore, you shouldn't be able to use the eigenstates of the Hamiltonian to calculate the position expectation value. Is this right, or am I missing something? Answer: Just because something is not an eigenstate does not mean you cannot, or should not, calculate expectation values of an operator in that state. It is just that in the case of an eigenstate, the calculation becomes particularly easy. You will also notice that the variance about that expectation value does not vanish, but for an eigenstate it does.
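This is easy to check numerically (a minimal sketch I am adding, not from the answer): $\langle x\rangle = a/2$ is perfectly well defined in every energy eigenstate, while the variance stays strictly positive, precisely because the eigenstates of $\hat H$ are not position eigenstates.

```python
# Evaluate <x^power> in the eigenstates psi_n(x) = sqrt(2/a) sin(n pi x / a)
# of the infinite well by a midpoint-rule integral over [0, a].
import math

def expectation(n, a=1.0, power=1, steps=20000):
    h = a / steps
    total = 0.0
    for k in range(steps):
        x = (k + 0.5) * h
        prob = (2.0 / a) * math.sin(n * math.pi * x / a) ** 2  # |psi_n(x)|^2
        total += (x ** power) * prob * h
    return total

for n in (1, 2, 3):
    mean = expectation(n)
    var = expectation(n, power=2) - mean ** 2
    print(n, round(mean, 4), round(var, 4))  # mean is a/2 = 0.5 for every n; var > 0
```

A nonzero variance is exactly the signature of taking an expectation value in a state that is not an eigenstate of the operator in question.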
{ "domain": "physics.stackexchange", "id": 44441, "tags": "quantum-mechanics, hamiltonian" }
on uwsim, orientation from topic pose and topic imu is different. Why?
Question: On uwsim, the orientation from the pose topic and the imu topic is different. I set the initial yaw of the AUV to 0 by modifying the yaml file. The quaternion from the pose topic is x=0 y=0 z=0 w=1. The quaternion from the imu topic is x=0 y=0 z=1 w=0. They are different! Why? Originally posted by ZYS on ROS Answers with karma: 108 on 2016-02-29 Post score: 0 Answer: Hi, The pose topic (I guess you are talking about /g500/pose) is used to send information between the dynamics and the simulator. So the dynamics is sending the pose to the visualizer. This pose is referenced from the robot's initial position (stated in the URDF, or 3D file). Meanwhile, the IMU topic is a simulated sensor that provides rotation with respect to the simulated world (plus some noise), so the quaternion you obtain from here has an additional rotation which is dependent on the world-to-initial-robot-position transform (offsetr inside simParams in the config .xml files). The short answer is they are different because they mean different things. Originally posted by Javier Perez with karma: 486 on 2016-03-01 This answer was ACCEPTED on the original site Post score: 0
{ "domain": "robotics.stackexchange", "id": 23949, "tags": "ros, uwsim" }
[solved]jsk_boundingBOXARRAY can't publish
Question: my source code is this #include <ros/ros.h> #include <tf/transform_broadcaster.h> #include <tf/transform_listener.h> #include <geometry_msgs/PointStamped.h> #include <pcl_ros/point_cloud.h> #include <pcl_ros/transforms.h> #include <jsk_recognition_msgs/BoundingBoxArray.h> #include <pharos_msgs/object_custom_msg.h> #include <pharos_msgs/object_custom_msgs.h> /// Global variables /// ///------//// class velodyne_ { public: ros::NodeHandlePtr nh; ros::NodeHandlePtr pnh; velodyne_() { nh = ros::NodeHandlePtr(new ros::NodeHandle("")); pnh = ros::NodeHandlePtr(new ros::NodeHandle("~")); /// subscribe uses this; publish takes the message type - keep this in mind sub_custom_msg = nh->subscribe("pharos_object_msg", 10 , &velodyne_::BOX_CB,this); pub_box_msg = nh->advertise<jsk_recognition_msgs::BoundingBoxArray>("object_box",10); } void BOX_CB(const pharos_msgs::object_custom_msgs &input_msg){ /// callback function jsk_recognition_msgs::BoundingBoxArray BOXS; for(int i=0; i<input_msg.objects.size(); i++){ jsk_recognition_msgs::BoundingBox box; box.label = i+1; box.pose.position.x = input_msg.objects[i].min_x; box.pose.position.y = input_msg.objects[i].min_y; box.pose.position.z = input_msg.objects[i].min_z; box.dimensions.x = input_msg.objects[i].max_x; box.dimensions.y = input_msg.objects[i].max_y; box.dimensions.z = input_msg.objects[i].max_z; box.header.frame_id ="velodyne"; box.header.stamp = input_msg.header.stamp; BOXS.boxes.push_back(box); } BOXS.header.stamp=input_msg.header.stamp; BOXS.header.frame_id = "velodyne"; pub_box_msg.publish(BOXS); } protected: ros::Subscriber sub_custom_msg; ros::Publisher pub_box_msg; }; int main(int argc, char **argv) { ros::init(argc, argv, "object_box"); // initialize the node name ROS_INFO("started object_box"); ROS_INFO("SUBTOPIC : "); ROS_INFO("PUBTOPIC : "); velodyne_ hello; ros::spin(); } I want to show a box around each object. For example, I know min_x,y,z and max_x,y,z, so I have the 8 corner points of the bounding box. How can I draw a bounding box around an object with the jsk msg?
I can publish something, but the size and position don't match. Originally posted by dnjsxor564 on ROS Answers with karma: 11 on 2018-08-19 Post score: 0 Answer: You are filling the box incorrectly. If you look at BoundingBox, you need to give a pose (position) that is the centre point of the box, and dimensions that are the length, width and height of the box. It should look like the following: box.pose.position.x = (max_x + min_x) / 2; box.pose.position.y = (max_y + min_y) / 2; box.pose.position.z = (max_z + min_z) / 2; box.dimensions.x = (max_x - min_x); box.dimensions.y = (max_y - min_y); box.dimensions.z = (max_z - min_z); This should solve it. Originally posted by Choco93 with karma: 685 on 2018-08-21 This answer was ACCEPTED on the original site Post score: 0 Original comments Comment by dnjsxor564 on 2018-08-22: thank you!! it is helpful to me!! Comment by Choco93 on 2018-08-22: if it solved your issue kindly mark the answer as correct/accepted
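The min/max-to-centre arithmetic above is the whole fix, and it can be sanity-checked without ROS. The helper name below is mine, not part of jsk_recognition_msgs:

```python
# Convert the min/max corners of an axis-aligned box into the
# (center, dimensions) pair that jsk_recognition_msgs/BoundingBox expects:
# pose.position is the centre point, dimensions are the side lengths.
def box_from_extents(mins, maxs):
    center = tuple((lo + hi) / 2.0 for lo, hi in zip(mins, maxs))
    dims = tuple(hi - lo for lo, hi in zip(mins, maxs))
    return center, dims

center, dims = box_from_extents((0.0, -1.0, 0.5), (2.0, 1.0, 1.5))
print(center, dims)  # (1.0, 0.0, 1.0) (2.0, 2.0, 1.0)
```

Feeding the min corner as the pose and the max corner as the dimensions, as in the question's code, shifts the box and inflates it, which matches the "size and position don't match" symptom.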
{ "domain": "robotics.stackexchange", "id": 31578, "tags": "ros-kinetic" }
Is the Clifford group perfect (equals its own commutator subgroup)?
Question: Let $ Cl_n $ be the Clifford group on $n$ qubits. What is the commutator subgroup of $ Cl_n $? It is definitely not all of $ Cl_n $ since $ Cl_n $ is not perfect. My guess is that the abelianization of $ Cl_n $ is an elementary abelian $ 2 $-group of rank $ 2n $, in other words the vector space $ \mathbb{F}^{2n} $. So I would imagine the commutator subgroup is something like the symplectic group $ Sp_{2n}(2) $ since $ Cl_n $ is roughly made of two parts: a symplectic part and a $ 2 $-group part. That's kind of a vague statement but an example of what I mean is that quotienting the Clifford group by the Pauli group gives the symplectic group. Answer: $$ \newcommand{\Sp}{\mathrm{Sp}} \newcommand{\Cl}{\mathrm{Cl}} \newcommand{\F}{\mathbb{F}} \newcommand{\Z}{\mathbb{Z}} $$ Note that the symplectic group $\Sp_{2n}(2)$ is not a subgroup of $\Cl_n(2)$ (see Is the Clifford group a semidirect product?). Claim: Suppose $p$ prime and $n\in\mathbb N$ are such that $\Sp_{2n}(p)$ is perfect. Then the projective Clifford group $\overline\Cl_n(p)$ is perfect. Note that $\Sp_{2n}(p)$ is perfect except for $(n,p)\in\{ (1,2), (1,3), (2,2) \}$. Proof: The situation is arguably simpler when the local prime dimension $p$ is not $2$. Then the Clifford group is the semidirect product $\Cl_n(p) \simeq \mathcal{P}_n(p) \rtimes \Sp_{2n}(p)$ where $\mathcal{P}_n(p)$ is the generalized Pauli group. This is not the case for $p=2$, but the argumentation still works in this case. We always have that $\mathcal{P}_n(p)$ is a normal subgroup and $\Cl_n(p) / \mathcal{P}_n(p) \simeq \Sp_{2n}(p)$. In terms of cosets, we have for any $U,V\in\Cl_n(p)$: $$ [U\mathcal{P}_n(p), V \mathcal{P}_n(p)] = [U,V] \mathcal{P}_n(p). $$ Note that we can alternatively write the cosets as $C_g$, labelled by elements $g\in\Sp_{2n}(p)$. The isomorphism above then implies $[C_g,C_h]=C_{[g,h]}$. Up to a phase, we can write any element of $\mathcal{P}_n(p)$ as $w(a)$ where $a\in\F_p^{2n}$. 
Any $U\in C_g$ acts as $U w(a) U^\dagger \propto w(g(a))$. Then: $$ [U,w(a)] = (U^{-1} w(a)^{-1} U) w(a) \propto w(g(a))^{-1} w(a). $$ If $\Sp_{2n}(p)$ is perfect, the commutators $[U,V]$ for $U,V\in\Cl_n(p)$ generate an arbitrary Clifford unitary, up to a Pauli operator (and a phase), by 1), i.e. we can write any $C\in\Cl_n(p)$ as $C = \alpha w(b) [U,V]$ with $\alpha\in Z(\Cl_n(p))$. If $b=0$, we're done, hence let us assume that $b\neq 0$. Since $\Sp_{2n}(p)$ acts transitively on $\F_p^{2n}\setminus 0$, we can use 2) to eliminate at least the Pauli operator $w(b)$ by a commutator of an element in $\Cl_{n}(p)$ and $\mathcal{P}_n(p)$. Concretely, we can choose $a\neq 0,-b$ and then find a $g\in\Sp_{2n}(p)$ such that $g(a) = a+b$. For any $W\in C_g$ we then have $[W,w(a)] \propto w(b)^{-1}$ and hence $C = \alpha' [W,w(a)] [U,V]$. Thus, we have shown that any element in $\Cl_n(p)$ is a product of commutators, up to a global phase. Remark: I think this could hold in a non-projective version if $p\neq 2$, because we can then also try to eliminate the phase $\alpha \in \Z_p$. However, it's a bit more complicated and seems to depend on $p \mod 4$ (as indicated by the determinant of $H$).
{ "domain": "quantumcomputing.stackexchange", "id": 3745, "tags": "mathematics, clifford-group" }
Ros on fedora can't communicate publisher/subscriber
Question: Hello, I'm using the fedora repo given by @cottsay. I might be doing something stupid, but it seems to me that messaging is not working properly. If I do: roscore then rostopic echo /search then rostopic pub -1 /search std_msgs/Bool 'data : False' Nothing is received. I get the right number of nodes if I do a rosnode list, and when using rostopic info search I can see a node subscribing to the topic. I might be doing something incredibly stupid, but so far I have not found the problem and I don't really know what to do to debug it... EDIT : Still having that problem. This is my repolist : $ yum repolist Modules complémentaires chargés : langpacks, refresh-packagekit bumblebee/20 bumblebee-nonfree/20 fedora/20/x86_64 google-talkplugin planetccrma/20/x86_64 planetcore/20/x86_64 rpmfusion-free/20/x86_64 rpmfusion-free-updates/20/x86_64 rpmfusion-nonfree/20/x86_64 rpmfusion-nonfree-updates/20/x86_64 russianfedora-free/20/x86_64 russianfedora-free-updates/20/x86_64 russianfedora-nonfree/20/x86_64 russianfedora-nonfree-updates/20/x86_64 smd-ros-shadow-fixed/20/x86_64 smd-ros-staging/20/x86_64 *updates/20/x86_64 virtualbox/20/x86_64 I did yum install ros-hydro-desktop-full. I think it might depend on my computer and not on ROS itself, but I have no idea what causes this. Here is a screenshot of what I'm doing : And nothing is received.
Here is my .bashrc as well concerning hydro : alias hydro='source /opt/ros/hydro/setup.bash && source /home/malcolm/ros_ws/catkin_hydro/devel/setup.bash && export EDITOR='nano' && export ROS_PACKAGE_PATH=/home/malcolm/ros_ws/catkin_hydro/src:/opt/ros/hydro/share:/opt/ros/hydro/stacks and the environment variables of my system : declare -x ROSLISP_PACKAGE_DIRECTORIES="/home/malcolm/ros_ws/catkin_hydro/devel/share/common-lisp" declare -x ROS_DISTRO="hydro" declare -x ROS_ETC_DIR="/opt/ros/hydro/etc/ros" declare -x ROS_MASTER_URI="http://localhost:11311" declare -x ROS_PACKAGE_PATH="/home/malcolm/ros_ws/catkin_hydro/src:/opt/ros/hydro/share:/opt/ros/hydro/stacks" declare -x ROS_ROOT="/opt/ros/hydro/share/ros" declare -x ROS_TEST_RESULTS_DIR="/home/malcolm/ros_ws/catkin_hydro/build/test_results" EDIT : It seems more like my system is strangely slow. I've launched a bunch of nodes just now, and each time I try to get the tf with this command : cd /var/tmp && rosrun tf view_frames && evince frames.pdf I either have no tf data, or partial tf data, not always the same... which makes me think that it may just be very slow... Originally posted by Maya on ROS Answers with karma: 1172 on 2014-07-03 Post score: 0 Original comments Comment by cottsay on 2014-07-03: Are you sure the command you posted is correct? The usage for rostopic pub is: Usage: rostopic pub /topic type [args...] so I used the command: rostopic pub -1 /search std_msgs/Bool 'data : False' which was successful. What are you trying to do with the --? Comment by Maya on 2014-07-11: Whoops, I had the same thing again today... Just to be sure, do I just need smd-ros-shadow-fixed/20/x86_64 and smd-ros-staging/20/x86_64 as repos, or am I missing something? I'm reinstalling it now... Comment by Maya on 2014-07-14: I'm pretty sure you're right and it has nothing to do with the repo, since after some trying the problem tends to appear and go depending on when.
When it's working, it seems kinda slow to me compared to my Ubuntu machine, and I'm wondering if it has something to do with it. I'll try your suggestion next time. Answer: I'm fairly confident that this has nothing to do with repos. The next time things act up, try exporting ROS_IP=127.0.0.1 on each terminal and trying again. Originally posted by cottsay with karma: 311 on 2014-07-14 This answer was ACCEPTED on the original site Post score: 0
{ "domain": "robotics.stackexchange", "id": 18495, "tags": "ros, roscore, publisher, fedora" }
How to fight underfitting in a deep neural net
Question: When I started with artificial neural networks (NN) I thought I'd have to fight overfitting as the main problem. But in practice I can't even get my NN to pass the 20% error rate barrier. I can't even beat my score on random forest! I'm seeking some very general or not so general advice on what one should do to make a NN start capturing trends in data. For implementing the NN I use the Theano Stacked Auto Encoder with the code from the tutorial, which works great (less than 5% error rate) for classifying the MNIST dataset. It is a multilayer perceptron with a softmax layer on top, with each hidden layer being pre-trained as an autoencoder (fully described in the tutorial, chapter 8). There are ~50 input features and ~10 output classes. The NN has sigmoid neurons and all data are normalized to [0,1]. I tried lots of different configurations: number of hidden layers and neurons in them (100->100->100, 60->60->60, 60->30->15, etc.), different learning and pre-train rates, etc. And the best thing I can get is a 20% error rate on the validation set and a 40% error rate on the test set. On the other hand, when I try to use Random Forest (from scikit-learn) I easily get a 12% error rate on the validation set and 25%(!) on the test set. How can it be that my deep NN with pre-training behaves so badly? What should I try? Answer: The problem with deep networks is that they have lots of hyperparameters to tune and a very small solution space. Thus, finding good ones is more of an art than an engineering task. I would start with a working example from the tutorial and play around with its parameters to see how the results change - this gives a good intuition (though not a formal explanation) about the dependencies between parameters and results (both final and intermediate).
Also, I found the following papers very useful: Visually Debugging Restricted Boltzmann Machine Training with a 3D Example A Practical Guide to Training Restricted Boltzmann Machines They both describe RBMs, but contain some insights on deep networks in general. For example, one of the key points is that networks need to be debugged layer-wise - if the previous layer doesn't provide a good representation of the features, further layers have almost no chance of fixing it.
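The layer-wise debugging advice can be made concrete. Below is a toy, pure-Python sketch (my own illustration, not the author's Theano setup; the sizes, data, and the crude numerical-gradient training are all made up) that pre-trains a single tied-weight autoencoder layer and tracks its reconstruction error, which is the per-layer number worth watching:

```python
import math, random

# Toy tied-weight autoencoder; all sizes and the data are illustrative.
random.seed(0)
n_in, n_hid = 6, 3
data = [[random.random() for _ in range(n_in)] for _ in range(30)]
W = [random.uniform(-0.5, 0.5) for _ in range(n_in * n_hid)]  # flattened weights

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def recon_loss(W):
    # mean squared reconstruction error over the dataset
    total = 0.0
    for x in data:
        h = [sigmoid(sum(W[j * n_in + i] * x[i] for i in range(n_in)))
             for j in range(n_hid)]
        xr = [sigmoid(sum(W[j * n_in + i] * h[j] for j in range(n_hid)))
              for i in range(n_in)]
        total += sum((a - b) ** 2 for a, b in zip(x, xr))
    return total / len(data)

# Crude pre-training: numerical gradient descent with backtracking.
before = recon_loss(W)
eps, lr = 1e-4, 0.5
for _ in range(25):
    base = recon_loss(W)
    grad = [(recon_loss(W[:k] + [W[k] + eps] + W[k + 1:]) - base) / eps
            for k in range(len(W))]
    step = [w - lr * g for w, g in zip(W, grad)]
    if recon_loss(step) < base:
        W = step
    else:
        lr *= 0.5  # backtrack if the step overshoots
after = recon_loss(W)
```

If `after` refuses to drop below `before`, the representation feeding any deeper layers is already bad, which is exactly the failure mode the answer warns about.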
{ "domain": "datascience.stackexchange", "id": 660, "tags": "neural-network, deep-learning" }
Evaluating $n \otimes_A n^*$ in $SU(n)$
Question: In "Quantum Field Theory in a Nutshell" pg. 424 the author (Zee) writes: $$(n\oplus n^*)\otimes_A(n \oplus n^*)\quad\cong\quad(n^2-1)\oplus 1 \oplus n(n-1)/2 \oplus ((n(n-1))/2)^*$$ From what I understand, to do the $n\otimes_A n$ we look at the contractions with the Levi-Civita tensor e.g. in SU(3) $$3\otimes_A 3=\varepsilon^{ijk}\psi_i\phi_j=3^*$$ but I don't understand how to do the same for $(n \otimes_A n^*)\oplus(n^* \otimes_A n)$ to get the adjoint (if it is actually the adjoint and not some anti-symmetric version?) and singlet reps. Please can someone explain? Note: I have recently been asking similar questions on MSE but apparently the notation used here is only common in physics - that, along with its relation to the standard model, is why I thought it was best to post this one here. Answer: Hint: Use the distributive law for tensor products. Then it boils down to e.g. $${\bf n} \otimes_A {\bf n} \quad\cong\quad \begin{array}{c} [~~]\cr [~~] \end{array} \quad\cong\quad {\bf\frac{n(n-1)}{2}},\tag{1}$$ and $${\bf n} \otimes_A {\bf n}^{\ast} \quad\oplus\quad {\bf n}^{\ast} \otimes_A {\bf n} \quad\cong\quad {\bf n}^{\ast}\otimes {\bf n}$$ $$\quad\cong\quad \begin{array}{c} [~~]\cr [~~]\cr \vdots\cr [~~] \end{array} \otimes [~~] \quad\cong\quad \begin{array}{c} [~~]\cr [~~]\cr \vdots\cr [~~]\cr [~~] \end{array} \quad\oplus\quad \begin{array}{cc} [~~]&[~~]\cr [~~]\cr \vdots\cr [~~]\end{array} \quad\cong\quad {\bf 1} \quad\oplus\quad {\bf(n^2-1)},\tag{2}$$ etc. (The number of boxes in each term of eq. (2) is supposed to be $n$.)
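As a quick sanity check on the quoted decomposition (my addition, not part of the original answer): the dimensions must match, since $\dim V = 2n$ for $V = n \oplus n^*$ and the antisymmetric square of a $d$-dimensional space has dimension $d(d-1)/2$.

```python
# dim of the antisymmetric square of a d-dimensional space is d(d-1)/2
def antisym_dim(d):
    return d * (d - 1) // 2

for n in range(2, 20):
    lhs = antisym_dim(2 * n)                    # (n + n*) has dimension 2n
    rhs = (n**2 - 1) + 1 + 2 * antisym_dim(n)   # adjoint + singlet + two antisym reps
    assert lhs == rhs                           # both equal n(2n - 1)
```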
{ "domain": "physics.stackexchange", "id": 45886, "tags": "homework-and-exercises, group-theory, representation-theory, lie-algebra" }
partition function of the U=0 Hubbard model
Question: I'm trying to derive the following partition function for the U=0 Hubbard model: $Z=\prod_\mathbf{k}(1+e^{-\beta(\epsilon_\mathbf{k}-\mu)})$ My try was to use: $Z=\sum_{\sigma,\mathbf{k}} <\sigma,\mathbf{k}|e^{-\beta H}|\sigma,\mathbf{k}>$ where $H=\sum_{\mathbf{k},\sigma}(\epsilon_{\mathbf{k}}-\mu)c_{\mathbf{k},\sigma}^\dagger c_{\mathbf{k},\sigma}$; and to expand the exponential. However I seem to get something like: $Z=2\sum_{\mathbf{k}}e^{-\beta(\epsilon_{\mathbf{k}}-\mu)}$ Answer: Let us write the Hamiltonian as $$ H = \sum_{\mathbf{k}} \sum_{\sigma=1,2} \xi_\mathbf{k} \hat{n}_\mathbf{k,\sigma},$$ where $\xi_\mathbf{k} = \epsilon_\mathbf{k} - \mu$, and $\hat{n}_{\mathbf{k}\sigma} = c_{\mathbf{k}\sigma}^\dagger c_{\mathbf{k}\sigma}$. Now let's compute the partition function $$ Z = \mathrm{Tr} \left[\mathrm{e}^{-\beta H}\right] = \mathrm{Tr} \left[\bigotimes_{\mathbf{k},\sigma} \mathrm{e}^{-\beta \xi_\mathbf{k} \hat{n}_\mathbf{k\sigma}}\right] = \prod_\mathbf{k} \prod_{\sigma = 1,2} \mathrm{Tr}_\mathbf{k,\sigma}\left[\mathrm{e}^{-\beta \xi_\mathbf{k} \hat{n}_\mathbf{k\sigma}}\right].$$ The second equality follows because the Hamiltonian is a sum over independent modes, so its exponential is a tensor product. In the third equality I turned the trace of a tensor product into the product of partial traces $\mathrm{Tr}_\mathbf{k,\sigma}$ over the different Hilbert spaces of the independent modes. Now the problem is easy, because we know the eigenvalues of $\hat{n}_\mathbf{k\sigma}$ are $n_\mathbf{k\sigma} = 0,1$, so $$ \mathrm{Tr}_\mathbf{k,\sigma}\left[\mathrm{e}^{-\beta \xi_\mathbf{k} \hat{n}_\mathbf{k\sigma}}\right] = \sum_{n_\mathbf{k\sigma} = 0,1}\mathrm{e}^{-\beta \xi_\mathbf{k} n_\mathbf{k\sigma}} = 1 + \mathrm{e}^{-\beta \xi_\mathbf{k}}.$$ Putting it all together you get $$ Z = \prod_\mathbf{k}\prod_{\sigma = 1,2}\left( 1 + \mathrm{e}^{-\beta \xi_\mathbf{k}}\right),$$ which is the usual way of writing the partition function. 
However, you can also note that the energies $\xi_\mathbf{k}$ are independent of the spin index $\sigma$, allowing you to write $$ Z = \prod_\mathbf{k}\left( 1 + \mathrm{e}^{-\beta \xi_\mathbf{k}}\right)^2.$$ This must be the case since the free energy $\sim \log Z$ is additive for the two independent spin species.
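The product formula is easy to cross-check numerically (a quick sketch with made-up $\xi_\mathbf{k}$ values, not part of the original derivation): enumerate every occupation pattern $n_{\mathbf{k}\sigma} \in \{0,1\}$ for a few modes and compare the brute-force trace with the product form.

```python
import itertools, math

beta = 1.3
xi = [-0.5, 0.2, 1.0]  # illustrative xi_k values for three k-points

# each xi_k carries two spin species, so duplicate every mode
modes = [e for e in xi for _ in range(2)]

# brute-force trace: sum exp(-beta * E) over all 2^6 occupation patterns
Z_sum = sum(math.exp(-beta * sum(n * e for n, e in zip(occ, modes)))
            for occ in itertools.product((0, 1), repeat=len(modes)))

# product formula derived above
Z_prod = math.prod((1 + math.exp(-beta * e)) ** 2 for e in xi)

assert abs(Z_sum - Z_prod) < 1e-9 * Z_prod
```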
{ "domain": "physics.stackexchange", "id": 28572, "tags": "statistical-mechanics, condensed-matter, many-body" }
ca_driver deploy catkin_make
Question: Hello community, I try to deploy ca_driver from create_autonomy to multiple devices. Problem: catkin_make install did not install ca_driver to install-dir. Bug or feature ? Any suggestions ? Thanks Cheers Chrimo Originally posted by ChriMo on ROS Answers with karma: 476 on 2016-12-11 Post score: 0 Answer: Problem: catkin_make install did not install ca_driver to install-dir. This is actually not a problem with catkin_make: the current CMakeLists.txt of ca_driver (here) simply does not contain any install(..) statements. Running catkin_make install will therefore not install anything, as there are no instructions to do so. I'd report this to the AutonomyLab people over at their issue tracker. Originally posted by gvdhoorn with karma: 86574 on 2016-12-11 This answer was ACCEPTED on the original site Post score: 1 Original comments Comment by ChriMo on 2016-12-11: I've copied the binary manually to destination. Thanks for the fast answer Cheers Chrimo Comment by gvdhoorn on 2016-12-12: It would be really nice if you could report this to their issue tracker, as that would make this work for other users as well. Comment by jacobperron on 2016-12-12: Expect an update to create_autonomy tonight with install rules (with some other small changes I've been meaning to get to as well).
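For reference, install rules in a catkin CMakeLists.txt look roughly like the following (a hypothetical sketch; the target and directory names are assumptions, and this is not the actual fix that later landed in create_autonomy):

```cmake
# Hypothetical install rules; target/directory names are assumptions.
install(TARGETS ca_driver
  RUNTIME DESTINATION ${CATKIN_PACKAGE_BIN_DESTINATION}
)
install(DIRECTORY launch/
  DESTINATION ${CATKIN_PACKAGE_SHARE_DESTINATION}/launch
)
```

Without statements like these, `catkin_make install` has nothing to copy into the install dir, which is the behaviour reported above.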
{ "domain": "robotics.stackexchange", "id": 26456, "tags": "ros, create-autonomy" }
Kinetic energy for a hydrostatic fluid versus a nonhydrostatic fluid
Question: I am reading in my textbook that the kinetic energy of a non-hydrostatic fluid parcel is given by $\boldsymbol{v}^2/2$, where $\boldsymbol{v}$ is the velocity with components $(u,v,w)$, but for a hydrostatic fluid parcel the kinetic energy is defined as $\boldsymbol{u}^2/2$, where $\boldsymbol{u}=(u,v)$. I assume that the hydrostatic equation thus makes vertical velocity negligible compared to horizontal velocities - but I cannot wrap my head around why this occurs, or how the hydrostatic balance of a fluid reduces vertical motions. My one guess is that a fluid balanced in the vertical does not deviate much from this balance, thus reducing vertical motions, but I'm not sure about this, since there is no vertical velocity term $w$ in the hydrostatic balance. Answer: Hydrostatics is a branch of fluid dynamics dedicated to fluids at rest. Furthermore, the property you describe would be a property of the flow, not the fluid itself. In simple shear flow (Couette flow), for example, you may assume that there will be no vertical motion throughout the entire domain because there can't be a vertical velocity component at either the top or the bottom boundary. As you seem to come from the field of geoscience and meteorology, I will use that as an example. Atmospheric transport In atmospheric transport there are huge differences in latitudinal, meridional and interhemispheric transport rates, as they have different causes, mechanisms and different boundary conditions. Boundary conditions: The bottom of the earth acts approximately like a solid no-slip wall where all the velocities approach zero. The top is not constrained in the latitudinal and meridional (horizontal) directions, but in the vertical direction the velocity can be assumed zero due to gravity (the fluid can't really leave the field of gravity). The latitudinal motion (around $10 \frac{m}{s}$) is caused by the rotation of the earth and the inertia of the air as well as local differences in atmospheric pressure.
The meridional flow (around $1 \frac{m}{s}$) is mainly driven by differences in atmospheric pressure and is thus a lot lower. The vertical motion ($0.001-0.01 \frac{m}{s}$, though it can be higher locally due to buoyancy) is mainly a reaction to the other two motions due to continuity reasons (convergence and divergence in the horizontal direction must be equilibrated by vertical motion). You can see that the vertical motion can most likely be regarded as negligible, as it is several orders of magnitude smaller than the other two components. A note on simplifications In the end such assumptions about the flow are merely acts of desperation: Realistic flows are impossible to solve analytically, and time-resolved 3D simulations - if you neglect RANS simulations - are computationally too expensive to solve numerically for most applications as well. You will try to reduce complexity by making assumptions about the nature of the fluid and flow (incompressible flow or fluid, neglecting certain terms as they have a small order of magnitude, reducing dimensions, etc.). Nonetheless this will be far from true for most applications. In particular, aerodynamic flows will be highly turbulent, and a symmetry can only be imposed for time-averages but not for velocity fluctuations. For example in car aerodynamics - if obtaining estimates by hand analytically - you will assume that your car geometry is symmetric ("indefinitely long") and look at a 2D slice. You will further apply simplified concepts such as stationary inviscid (Euler equations) flow. The viscous effects are negligibly small for $Re \to \infty$ far from the boundaries but not close to the walls; a car won't be sufficiently modelled by a 2D model, as you will have things like a horseshoe vortex, and the flow won't be stationary either. Nonetheless you might be able to get good estimates.
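Plugging in the magnitudes quoted above shows just how safe dropping $w$ from the kinetic energy is (a back-of-the-envelope sketch of my own, using the answer's illustrative numbers):

```python
# Representative magnitudes from the answer (m/s); w taken at the top of its range.
u, v, w = 10.0, 1.0, 0.01

ke_full = 0.5 * (u**2 + v**2 + w**2)   # v^2/2 with all three components
ke_horiz = 0.5 * (u**2 + v**2)         # u^2/2, the hydrostatic form
rel_err = (ke_full - ke_horiz) / ke_full

assert rel_err < 1e-5  # dropping w changes the kinetic energy by less than 0.001%
```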
{ "domain": "physics.stackexchange", "id": 62605, "tags": "fluid-dynamics, fluid-statics" }
What is the first non-vanishing multipole moment of this configuration?
Question: Imagine that you have a triangle where each side has the length $a$ and a charge $q$ sitting at every vertex. Additionally, we have a charge $-3q$ sitting in the center of the triangle. What is the first non-vanishing multipole moment? I thought that there cannot be a net charge or dipole moment, but I was uncertain about the quadrupole moment? Answer: Spherical Multipole Expansion The first nonvanishing moment is the quadrupole moment. Assuming that you are looking for an exterior multipole expansion, the potential from the charge distribution $\rho$ can be expanded as $$V(\mathbf{r})=\frac{1}{4\pi\epsilon_0}\sum_{L=0}^\infty\sum_{m=-L}^LI_{Lm}(\mathbf{r})\langle R_{Lm},\rho\rangle$$ where $I_{Lm},R_{Lm}$ are the real solid irregular and regular harmonics, respectively (alternately, you can use the complex solid harmonics, but since $\rho$ is real, there is no need to), and $\langle,\rangle$ denotes inner product. From this we obtain the first several moments (in Mathematica): SphericalHarmonicYr[L_, m_, \[Theta]_, \[Phi]_] := Piecewise[{{(I*(SphericalHarmonicY[L, m, \[Theta], \[Phi]] - (-1)^m* SphericalHarmonicY[L, -m, \[Theta], \[Phi]]))/Sqrt[2], m < 0}, {SphericalHarmonicY[L, 0, \[Theta], \[Phi]], m == 0}, {(SphericalHarmonicY[L, -m, \[Theta], \[Phi]] + (-1)^m* SphericalHarmonicY[L, m, \[Theta], \[Phi]])/Sqrt[2], m > 0}}]; SolidHarmonicRr[l_, m_, r_, \[Theta]_, \[Phi]_] := Limit[Sqrt[(4*Pi)/(2*l + 1)]*R^l* SphericalHarmonicYr[l, m, \[Theta], \[Phi]], R -> r]; position = {{0, 0, 0}, {a/Sqrt[3], Pi/2, 0}, {a/Sqrt[3], Pi/2, (2*Pi)/3}, {a/Sqrt[3], Pi/2, (4*Pi)/3}}; charge = {-3*q, q, q, q}; Q[L_, m_] := Sum[charge[[i]]*SolidHarmonicRr[L, m, ##] & @@ position[[i]], {i, 4}]; QQ = Simplify[Table[Q[L, m], {L, 0, 2}, {m, -L, L}]] {{0}, {0, 0, 0}, {0, 0, -((a^2 q)/2), 0, 0}} This means that the potential due to the configuration can be well-approximated by $$\begin{array}{||}\hline V(\mathbf{r})\approx 
-\frac{1}{4\pi\epsilon_0}\frac{qa^2}{2}I_{2,0}(\mathbf{r})=-\frac{qa^2 (3 \cos (2 \theta )+1)}{32 \pi r^3 \epsilon _0}\\\hline \end{array}$$ which numerically agrees with the exact potential to within $\approx20\%$ at a radius of $|\mathbf{r}|=10a$, and agrees to within $\approx1.5\%$ at $|\mathbf{r}|>100a$. For reference, the regular real solid harmonics are given by $$R_{Lm}(\mathbf{r})=\sqrt{\frac{4\pi}{2L+1}}r^L\begin{cases} \sqrt{2}(-1)^m\text{Im}\left[Y_L^{|m|}(\theta,\phi)\right]&m<0\\ Y_L^0(\theta,\phi)&m=0\\ \sqrt{2}(-1)^m\text{Re}\left[Y_L^{|m|}(\theta,\phi)\right]&m>0 \end{cases}\\ =(-1)^mr^L\sqrt{\frac{(L-|m|)!}{(L+|m|)!}}P_L^{|m|}(\cos(\theta))\times\begin{cases} \sqrt{2}\sin(|m|\phi)&m<0\\ 1&m=0\\ \sqrt{2}\cos(|m|\phi)&m>0 \end{cases}$$ and the irregular harmonics $I_{Lm}$ are the same, but with $r^L$ replaced by $r^{1-L}$. Cartesian Multipole Expansion As vesofilev answered, the traceless Cartesian quadrupole tensor can be obtained via $$\mathbf{Q}=\langle3\mathbf{r}\otimes\mathbf{r}-|\mathbf{r}|^2\mathbf{I},\rho\rangle$$ which in Mathematica becomes position = Prepend[Table[ a/Sqrt[3] {Cos[2 \[Pi] k/3], Sin[2 \[Pi] k/3], 0}, {k, 3}], {0, 0, 0}]; charge = {-3 q, q, q, q}; f[r_] := 3 r\[TensorProduct]r - (r.r) IdentityMatrix[3]; Q = Sum[charge[[k]] f[position[[k]]], {k, 4}] // MatrixForm // TeXForm $$\mathbf{Q}=\left( \begin{array}{ccc} \frac{a^2 q}{2} & 0 & 0 \\ 0 & \frac{a^2 q}{2} & 0 \\ 0 & 0 & -a^2 q \\ \end{array} \right).$$ The potential can then be approximated as $$V(\mathbf{r})\approx \frac{1}{4\pi\epsilon_0}\frac{\mathbf{r}^\mathsf{T}\mathbf{Q}\mathbf{r}}{2|\mathbf{r}|^5}.$$
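The Cartesian result is easy to verify without Mathematica; here is a line-for-line translation of that quadrupole computation into plain Python (my addition, with $a = q = 1$ for simplicity):

```python
import math

# Expected result from the answer: Q = diag(1/2, 1/2, -1) in units of a^2 q.
a, q = 1.0, 1.0
positions = [(0.0, 0.0, 0.0)] + [
    (a / math.sqrt(3) * math.cos(2 * math.pi * k / 3),
     a / math.sqrt(3) * math.sin(2 * math.pi * k / 3),
     0.0)
    for k in range(1, 4)
]
charges = [-3 * q, q, q, q]

# Q_ij = sum over charges of q_k * (3 x_i x_j - |r|^2 delta_ij)
Q = [[0.0] * 3 for _ in range(3)]
for c, r in zip(charges, positions):
    r2 = sum(x * x for x in r)
    for i in range(3):
        for j in range(3):
            Q[i][j] += c * (3 * r[i] * r[j] - (i == j) * r2)
```

The monopole and dipole contributions vanish by symmetry, so this traceless tensor is indeed the first non-vanishing moment.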
{ "domain": "physics.stackexchange", "id": 13737, "tags": "homework-and-exercises, electrostatics, multipole-expansion" }
Equilateral and One-of-n encoding
Question: I was reading AI For Humans Vol. 1 by Jeff Heaton when I came across the terms "equilateral encoding" and "one-of-n encoding." The explanations unfortunately made no sense to me and the reddit threads on the Web are blocked by my Internet provider (I use a high-school machine). Is anyone here able to provide basic explanations regarding the two procedures for me? Thanks in advance. Answer: Please check this book for the normalization info: https://jamesmccaffrey.wordpress.com/2014/06/03/neural-networks-using-c-succinctly/ It is freely available and gives a good explanation of encoding with code examples. Check the part: "Effects Encoding and Dummy Encoding"
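For what it's worth, "one-of-n" encoding is simply what is elsewhere called one-hot encoding; a minimal sketch of my own is below (equilateral encoding, which instead maps $n$ classes onto $n-1$ dimensions at equal pairwise distances, is not shown):

```python
def one_of_n(label, classes):
    # one output per class: 1.0 for the matching class, 0.0 everywhere else
    return [1.0 if c == label else 0.0 for c in classes]

classes = ["red", "green", "blue"]
assert one_of_n("green", classes) == [0.0, 1.0, 0.0]
assert sum(one_of_n("blue", classes)) == 1.0   # exactly one hot position
```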
{ "domain": "ai.stackexchange", "id": 471, "tags": "autoencoders" }
BigDecimal made simple
Question: Here is my take on (a lighter version of) Java's BigDecimal. I originally wrote this as a replacement for BigDecimal in order to improve performance during serialisation and data manipulation. public class SimpleDecimal implements Comparable<SimpleDecimal> { private long value = 0L; private int fractionalPrecision = 0; private boolean debug = false; public static SimpleDecimal simplify(SimpleDecimal o) { if (!((o.getFractionalPrecision() > 0) && ((o.getValue() % 10) == 0))) { return o; } SimpleDecimal sd = new SimpleDecimal(o); do { sd.setValue(sd.getValue() / 10); sd.setFractionalPrecision(sd.getFractionalPrecision() - 1); } while ((sd.getFractionalPrecision() > 0) && ((sd.getValue() % 10) == 0)); return sd; } public static SimpleDecimal adjust(SimpleDecimal o, int fractionalPrecision) { if (o.getFractionalPrecision() == fractionalPrecision) { return o; } if (o.getFractionalPrecision() > fractionalPrecision) { return simplify(o); } SimpleDecimal sd = new SimpleDecimal(o); sd.incFractionalPrecision(fractionalPrecision - sd.getFractionalPrecision()); return sd; } // Various constructors public SimpleDecimal(int value) { this(String.valueOf(value)); } public SimpleDecimal(long value) { this(String.valueOf(value)); } public SimpleDecimal(float value) { this(String.valueOf(value)); } public SimpleDecimal(double value) { this(String.valueOf(value)); } /** * This method will expect standard representation of numerical value as returned by * String.valueOf(...) 
* @param value */ public SimpleDecimal(String value) { int fpsl = value.lastIndexOf('.'); String strValue; if (fpsl > 0) { strValue = value.substring(0, fpsl) + value.substring(fpsl + 1); } else { strValue = value.substring(fpsl + 1); } setValue(Long.valueOf(strValue)); if (fpsl < 0) { fpsl = 0; } else { fpsl = value.length() - fpsl - 1; } setFractionalPrecision(fpsl); } public SimpleDecimal(SimpleDecimal o) { this(o.getValue(), o.getFractionalPrecision()); } public SimpleDecimal add(SimpleDecimal augend) { SimpleDecimal result = new SimpleDecimal(this); int fp = result.getFractionalPrecision(); if (fp < augend.getFractionalPrecision()) { fp = augend.getFractionalPrecision(); result = adjust(result, fp); } else { augend = adjust(augend, fp); } result.setValue(result.getValue() + augend.getValue()); return result; } public SimpleDecimal subtract(SimpleDecimal subtrahend) { SimpleDecimal result = new SimpleDecimal(this); int fp = result.getFractionalPrecision(); if (fp < subtrahend.getFractionalPrecision()) { fp = subtrahend.getFractionalPrecision(); result = adjust(result, fp); } else { subtrahend = adjust(subtrahend, fp); } result.setValue(result.getValue() - subtrahend.getValue()); return result; } public SimpleDecimal multiply(SimpleDecimal multiplicand) { SimpleDecimal result = new SimpleDecimal(this); result.setValue(result.getValue() * multiplicand.getValue()); result.setFractionalPrecision(result.getFractionalPrecision() + multiplicand.getFractionalPrecision()); return result; } public SimpleDecimal divide(SimpleDecimal divisor) { SimpleDecimal result = new SimpleDecimal(this); int fp = result.getFractionalPrecision(); if (fp < divisor.getFractionalPrecision()) { fp = divisor.getFractionalPrecision(); result = adjust(result, fp + fp); } else { result = adjust(result, fp + fp); divisor = adjust(divisor, fp); } result.setValue(result.getValue() / divisor.getValue()); result.setFractionalPrecision(fp); return result; } public SimpleDecimal simplify() { return 
simplify(this); } @Override public int compareTo(SimpleDecimal o) { SimpleDecimal sd1 = new SimpleDecimal(this); return compare(sd1, o); } // If you need to handle big values use BigDecimal instead public static int compare(SimpleDecimal o1, SimpleDecimal o2) { int result = 0; if (o1.getFractionalPrecision() == o2.getFractionalPrecision()) { return (int) (o1.getValue() - o2.getValue()); } SimpleDecimal sd1 = simplify(o1); SimpleDecimal sd2 = simplify(o2); int maxFractionalPrecision = sd1.getFractionalPrecision(); if (sd2.getFractionalPrecision() > maxFractionalPrecision) { maxFractionalPrecision = sd2.getFractionalPrecision(); } sd1 = adjust(sd1, maxFractionalPrecision); sd2 = adjust(sd2, maxFractionalPrecision); if (sd1.getFractionalPrecision() == sd2.getFractionalPrecision()) { return (int) (sd1.getValue() - sd2.getValue()); } return result; } @Override public String toString() { String str; if (isDebug()) { // Use this for debugging str = debugToString(); } else { // Use this for normal output str = stdToString(); } return str; } @Override public boolean equals(Object o) { if (null == o) return false; if (!this.getClass().isInstance(o)) return false; SimpleDecimal sd = (SimpleDecimal) o; return equals(sd); } public boolean equals(SimpleDecimal o) { boolean result = false; if (compareTo(o) == 0) { result = true; } return result; } public SimpleDecimal movePointLeft(int n) { SimpleDecimal sd = new SimpleDecimal(this); sd.incFractionalPrecision(n); return sd; } public SimpleDecimal movePointRight(int n) { SimpleDecimal sd = new SimpleDecimal(this); sd.decFractionalPrecision(n); return sd; } // Standard getters/setters public long getValue() { return value; } public void setValue(long value) { this.value = value; } public int getFractionalPrecision() { return fractionalPrecision; } public void setFractionalPrecision(int fractionalPrecision) { this.fractionalPrecision = fractionalPrecision; } public void incFractionalPrecision() { int fractionalPrecision = 1; 
incFractionalPrecision(fractionalPrecision); } public synchronized void incFractionalPrecision(int fractionalPrecision) { for (int i = 0; i < fractionalPrecision; i++) { this.value *= 10; } this.fractionalPrecision += fractionalPrecision; } public void decFractionalPrecision() { int fractionalPrecision = 1; decFractionalPrecision(fractionalPrecision); } public synchronized void decFractionalPrecision(int fractionalPrecision) { if (fractionalPrecision > this.fractionalPrecision) { fractionalPrecision = this.fractionalPrecision; } for (int i = 0; i < fractionalPrecision; i++) { this.value /= 10; } this.fractionalPrecision -= fractionalPrecision; } public boolean isDebug() { return debug; } public void setDebug(boolean debug) { this.debug = debug; } private SimpleDecimal(long value, int fractionalPrecision) { super(); this.value = value; this.fractionalPrecision = fractionalPrecision; } private String stdToString() { String strValue = String.valueOf(getValue()); String str; if (getFractionalPrecision() > 0) { StringBuilder sb = new StringBuilder(); for (int i = strValue.length(); i < getFractionalPrecision() + 1; i++) { sb.append('0'); } sb.append(strValue); strValue = sb.toString(); int dpp = strValue.length() - getFractionalPrecision(); int p = 0; str = ((getValue() >= 0) ? "" : strValue.substring(0, ++p)) + ((dpp == p) ? "0" : strValue.substring(p, dpp)) + '.' + strValue.substring(dpp); } else { str = strValue; } return str; } private String debugToString() { return "SimpleDecimal [" + "value=" + value + ", fractionalPrecision=" + fractionalPrecision + "]"; } } Answer: Your assertion that this is a replacement for BigDecimal, is.... ambitious. You are limited to about 18 significant figures, and, for values that exceed that, your code will throw odd exceptions (a NumberFormatException, as far as I can tell). Additionally, your class is not immutable, which is slightly concerning, since by convention, Java Number classes are immutable.
You override the equals method, but do not override the hashCode() method, which means you cannot safely use this value in a HashSet, HashMap, or many other constructs. The simplify method is duplicated as both a public static method, as well as an instance method. There should be only one (the instance method, I presume), or the other should not be public. The adjust method looks broken: public static SimpleDecimal adjust(SimpleDecimal o, int fractionalPrecision) { if (o.getFractionalPrecision() == fractionalPrecision) { return o; } if (o.getFractionalPrecision() > fractionalPrecision) { return simplify(o); } SimpleDecimal sd = new SimpleDecimal(o); sd.incFractionalPrecision(fractionalPrecision - sd.getFractionalPrecision()); return sd; } If the o's precision is greater than the input, you simplify it, but, simplify may do nothing..... so, the adjust does nothing? Also, what is it adjusting? The method is badly named, and I suspect it should just go away. Your class does not extend java.lang.Number. This is not really essential, but, it would be nice. Your primitive-based constructors do things the wrong way around... they all convert the primitives to String, then convert the string back to the source value. This is slow. You should be converting directly to long, and then have a more comprehensive String constructor. All your arithmetic methods are adjusting the values to be stored in the SimpleDecimal as the calculation is done. This is against the immutable concept for Java numbers. You should instead calculate what the number-parts should be, and then construct the final result from those parts with no additional adjustment. One final observation, your code is not what I would consider to be 'production ready'. It does not deal with common issues that arise, like, the value -.01 should be legal, but I believe will cause your code to fail, and fail in a consistent way with other failure modes. 
Additionally, I have, in the past, attempted to write a mutable version of BigDecimal to reduce the memory impact it has.... and failed. The code is complex, and I fear your SimpleDecimal has oversimplified too many things, and made too many compromises and assumptions.
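The equals/hashCode contract the reviewer flags is language-agnostic; here is the same pitfall sketched in Python with a hypothetical `Dec` class (my illustration, not the Java code above). Objects that compare equal must hash equal, otherwise hash-based containers misbehave; Python at least refuses to hash once `__eq__` is defined without `__hash__`, whereas Java silently breaks HashMap/HashSet lookups.

```python
class Dec:
    """Tiny scaled-integer decimal: value / 10**precision."""
    def __init__(self, value, precision):
        self.value, self.precision = value, precision

    def _normalized(self):
        # strip trailing zeros so 150/10^1 and 15/10^0 share one canonical form
        v, p = self.value, self.precision
        while p > 0 and v % 10 == 0:
            v //= 10
            p -= 1
        return (v, p)

    def __eq__(self, other):
        return self._normalized() == other._normalized()

    def __hash__(self):
        # derived from the SAME canonical form used by __eq__, as the contract requires
        return hash(self._normalized())

assert Dec(150, 1) == Dec(15, 0)              # 15.0 == 15
assert len({Dec(150, 1), Dec(15, 0)}) == 1    # the set sees one value, as it should
```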
{ "domain": "codereview.stackexchange", "id": 9481, "tags": "java, reinventing-the-wheel, fixed-point" }
Difference between "controlled release" vs "prolonged release"
Question: My sister is using the epilepsy drug Tegretol, which has both CR and retard (the previous naming for prolonged release, according to https://www.medicines.org.uk/emc/product/5932/smpc) versions in different countries. Given the active ingredient is the same (carbamazepine), would there be any difference between the effects of the two drugs in terms of preventing seizures? Here are the boxes for the two versions: Answer: The first box contains Tegretol Retard, which is an old name for what is now known as Tegretol Prolonged Release (medicines.org.uk). The second box contains Tegretol CR, which means controlled-release (PubMed). Drugs.com says: Your medicine is called Tegretol 200mg prolonged-release Tablets/Tegretol CR 200mg Tablets or Tegretol 400mg prolonged-release Tablets/Tegretol CR 400mg Tablets but will be referred to as Tegretol prolonged-release Tablets throughout this leaflet. Concluding from that, Tegretol CR200 is the same as Tegretol Retard, which is now known as Tegretol Prolonged Release. That said, it is a doctor who prescribes drugs and who can answer such questions.
{ "domain": "biology.stackexchange", "id": 9996, "tags": "pharmacology, medicine" }
Can we make a PDA with only 1 state which accepts a^n and n is odd?
Question: We can have acceptance with an empty stack, but in that case our stack at the start has to have an A in it, otherwise this machine will accept the empty string, which is wrong. But can we have a non-empty stack at the start? Is my method right? Answer: But can we have a non-empty stack at the start? By the formal definition, a PDA has an initial stack symbol $Z$ with which a computation starts. The idea is to use three stack symbols $Z, Z_{even}, Z_{odd}$ to keep track of whether the number of $a$s read so far is even or odd. If the PDA reaches the end of the string and the topmost symbol is $Z_{odd}$, then empty the stack.
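The construction in the answer can be checked with a tiny simulation (my own sketch; the final ε-move that empties the stack is collapsed into a step after the input loop). Note there is no state variable at all: the parity lives entirely on the stack.

```python
def accepts(s):
    # single-state PDA, stack alphabet {Z, E, O}; accept by empty stack
    stack = ["Z"]
    for ch in s:
        if ch != "a" or not stack:
            return False
        top = stack.pop()
        # reading one 'a' flips the recorded parity: Z or E -> O, O -> E
        stack.append("O" if top in ("Z", "E") else "E")
    # epsilon-move: pop the top symbol only if it records an odd count
    if stack == ["O"]:
        stack.pop()
    return not stack

assert accepts("a") and accepts("aaa")
assert not accepts("") and not accepts("aa") and not accepts("b")
```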
{ "domain": "cs.stackexchange", "id": 9858, "tags": "automata, pushdown-automata" }
jQuery script to toggle an element and handle a Close button
Question: First of all: I am a total javascript beginner and therefore I am asking you to rate my script and tell me whether it's okay or just a big mess. Note: it does work, but I guess it could be improved. My main goal was to create a script that can be used multiple times and does not depend on any class or id name (that's why I am using the data attribute). $(function(){ var toggleOpen = $('*.[data-toggle="open"]'); var toggleContent = $('*.[data-toggle="content"]'); var toggleClose = $('*.[data-toggle="close"]'); var toggleSpeed = 500; //Set height of toggle content to avoid animation jumping toggleContent.css('height', toggleContent.height() + 'px'); //Find content to toggle function findNextContent(target){ var nextContent = target.parent().parent().parent().find(toggleContent); return nextContent; } //Toggle content function slideToggle(target){ target.stop(true,true).slideToggle(toggleSpeed); } //Close toggled content function closeToggle(target){ target.slideUp(toggleSpeed); } //On Open Click toggleOpen.click(function(){ var clicked = $(this); var nextContent = findNextContent(clicked); //Check if hidden to either scroll to bottom or not if(nextContent.is(':hidden')){ slideToggle(nextContent); smoothScrolling(toggleClose); }else{ slideToggle(nextContent); } return false; }); //On Close click toggleClose.click(function(){ var clicked = $(this); var nextContent = findNextContent(clicked); closeToggle(nextContent); return false; }); }); What it does: It toggles an element and it also takes care of a separate close button. I am glad for any feedback - be it positive or negative! Answer: function findNextContent(target){ var nextContent = target.parent().parent().parent().find(toggleContent); return nextContent; } You could possibly use the jQuery closest function here.
Or you could remove the variable and just do return target.parent().parent().parent().find(toggleContent); toggleClose.click(function(){ var clicked = $(this); var nextContent = findNextContent(clicked); closeToggle(nextContent); return false; }); There is no need for the clicked variable. You can just write findNextContent($(this)); You might also want to look into the aria-expanded attribute.
{ "domain": "codereview.stackexchange", "id": 842, "tags": "javascript, performance, beginner, jquery" }
Does this statement make any sense?
Question: I am asking this question completely out of curiosity. The other day, my roommate, by mistake, used 'Light year' as a unit of time instead of distance. When I corrected him (pedantic, much), he said the following: "Units are relative. And according to Fourier Transforms, units can be changed so Light year is a unit of time." That got me thinking and I read up on Fourier transforms on Wikipedia but couldn't find anything about using a unit in one domain as a unit for another measurement. I do agree that units are relative (particularly base units, e.g. the meter), but does his statement make any sense? EDIT Thank you everyone for all the answers. It isn't so much to rub it in or prove a point as it is to understand the concept better. Anyways, this is his response after I showed him this thread. Any comments would be appreciated. His response: Nevermind, for the first time I accept I was wrong. BUT using lightyears to measure time is possible. My example didn't make sense because I was wrong when I mentioned that I'm still measuring dist. If you have a signal in time domain and ...take the FT, I get a signal which DOES NOT HAVE to be in frequency domain. Clarify this to the guy who posted last. Now the new signal is in a domain defined by me and so is its units. This signal although not equal to the original signal, still represents that if ya take an inverse FT. So, the idea of time will still be there. Now coming back to our case: lightyears here is not the lightyears you are used to read when dealing with distance. It represents time. Answer: This doesn't make much sense: a light year is in any case a unit of distance. What is common is to use "reduced units", for example units where $c=1$ (speed of light) or $h=2\pi$. But in these cases the opposite would happen: you would say "year" to mean a distance. Or for example you say "has a mass of xyz MeV" instead of "$MeV/c^2$".
About the Fourier transform: this allows one to go from the so-called "time domain" (even if "time" is not always the usual time) to the "frequency domain" involving ... frequencies. But as you can see, this cannot change the definition of a light year.
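The "reduced units" point is easy to make quantitative (a small illustration of my own): in units where $c = 1$, a distance and the light-travel time across it are the same number, so converting between the two is just multiplying or dividing by $c$.

```python
c = 299_792_458.0            # speed of light, m/s
year_s = 365.25 * 24 * 3600  # one Julian year in seconds
light_year_m = c * year_s    # the *distance* light covers in one year

# The only sense in which "light year" names a time: the travel time at speed c
# across that distance, which is, by construction, one year again.
t = light_year_m / c
assert abs(t - year_s) < 1e-6
```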
{ "domain": "physics.stackexchange", "id": 52, "tags": "si-units, fourier-transform" }
What are streamlines and pathlines
Question: I asked a question earlier about having a vector field and a starting point, and then making a parametric curve that starts at the starting point, such that the derivative at any point on the curve equals the vector field at that point. Anyway, somebody mentioned streamlines and path lines; what are they? Answer: From "Fluid Mechanics by Kundu": "In the Eulerian description, three types of curves are commonly used to describe fluid motion: streamlines, path lines, and streak lines. These are defined and described here assuming that the fluid velocity vector, $\vec{u}$, is known at every point of space and instant of time throughout the region of interest. Streamlines, path lines, and streak lines all coincide when the flow is steady. A streamline is a curve that is instantaneously tangent to the fluid velocity throughout the flow field. In unsteady flows the streamline pattern changes with time. In Cartesian coordinates, if $\vec{ds}=(dx, dy, dz)$ is an element of arc length along a streamline and $\vec{u} = (u, v, w)$ is the local fluid velocity vector, then the tangency requirement on $\vec{ds}$ and $\vec{u}$ leads to: $\frac{dx}{u}=\frac{dy}{v}=\frac{dz}{w}$ and $\vec{u} \times \vec{ds} = 0$ because $\vec{ds}$ and $\vec{u}$ are locally parallel. Integrating the above equation in both the upstream and downstream directions from a variety of reference locations allows streamlines to be determined throughout the flow field. A path line is the trajectory of a fluid particle of fixed identity. It is defined as $\vec{x} = \vec{r}(t,\vec{r}_{0},t_{0})$. The equation of the path line for the fluid particle launched from $\vec{r}_{0}$ at $t_{0}$ is obtained from the fluid velocity $\vec{u}$ by integrating: $\frac{\vec{dr}}{dt}= [\vec{u}(\vec{x},t)]_{x=r}=\vec{u}(\vec{r},t)$ subject to the requirement $\vec{r}(t_{0})=\vec{r}_{0}$. Other path lines are obtained by integrating the above equation from different values of $\vec{r}_{0}$ or $t_{0}$.
" I will not quote the streak lines part since you did not ask for it, but if you are interested I can add it or you can read the book. The text is specific to fluid dynamics, but this is true for every vector field $\vec{u}$, so if you have a vector field $\vec{F}(\vec{x},t)$ and $\vec{G}(\vec{x},t)=\frac{DF}{Dt}$, where $\frac{D}{Dt}$ is the convective/material derivative, the same description applies.
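The pathline integration described in the quoted passage can be sketched numerically. Below is a toy illustration (my own example, not from the book) with a made-up unsteady field $\vec{u}=(1,t)$, integrating $d\vec{r}/dt=\vec{u}(\vec{r},t)$ with plain forward Euler; for this field the streamlines at a frozen instant are straight lines of slope $t$, while the pathline curves as $t$ advances.

```python
import numpy as np

# Toy unsteady velocity field u = (1, t): my own example, not from the book.
def velocity(x, y, t):
    return np.array([1.0, t])

def pathline(r0, t0, t1, steps=10000):
    """Forward-Euler integration of dr/dt = u(r, t) from r0 at t0 up to t1."""
    r = np.array(r0, dtype=float)
    dt = (t1 - t0) / steps
    t = t0
    for _ in range(steps):
        r += velocity(r[0], r[1], t) * dt
        t += dt
    return r

# For this field, starting at the origin at t=0: x(t) = t, y(t) = t^2/2,
# so the pathline reaches approximately (2, 2) at t = 2.
end = pathline((0.0, 0.0), 0.0, 2.0)
```

Swapping in a frozen time `t` inside `velocity` (instead of advancing it) would trace a streamline of the instantaneous field instead.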
{ "domain": "physics.stackexchange", "id": 100528, "tags": "vector-fields, differential-equations" }
Using fft to calculate vsdft
Question: We are currently implementing the velocity synchronous discrete Fourier transform (vsdft) for order tracking rotating machinery in Python. The formula is similar to dft. Simplified, it is given by $$ \text{VSDFT}(\Omega) = C\sum_{n=0}^{N-1} x[n] \omega[n] e^{j\Omega \theta[n]} $$ where $C$ is a constant and $$ \omega = \frac{d\theta}{dt} $$ Since dft is given by $$ F_k = \sum_{n=0}^{N-1} f_n\cdot e^{-i 2 \pi k n / N} $$ and we wanted to use the optimized FFT implementation in numpy, we were wondering whether there is a way to write the input f so that we could use fft directly. However, my math skills are failing me. Can any of you see a way or any pointer to implement the above VSDFT without having to reimplement the Cooley-Tukey FFT algorithm? Reference to the original paper can be found here. Answer: I believe the short answer is no. The paper is behind a paywall so I didn't look at it. In the abstract is the following blurb: " use interpolation techniques to resample the signal at constant angular increments, followed by a common discrete Fourier transform to shift from the angular domain to the order domain." In which they are describing other techniques. Looking at your equations, that is precisely what needs to be done to $\theta[n]$ in order to convert it into a regular DFT, suitable for an FFT. The FFT exploits symmetries in the even spacing of the exponents in order to save computations. With a varying $\theta[n]$, those symmetries aren't there so you aren't going to be able to implement an FFT algorithm. You don't tell us what $x[n]$ is either. How large is N? Hope this helps at least a little. Ced
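For reference, the direct $O(N^2)$ evaluation is straightforward. The quick numpy sanity check below (my own sketch; note it uses the DFT's $e^{-j\cdot}$ sign convention rather than the $e^{+j\cdot}$ of the quoted formula) shows that with uniform $\theta[n]$ and constant $\omega$ the sum collapses to an ordinary DFT bin — exactly the even-spacing symmetry the FFT needs, and exactly what a varying $\theta[n]$ destroys.

```python
import numpy as np

# Direct O(N^2) VSDFT-style sum (constant C dropped, DFT sign convention);
# variable names are my own, not from the paper.
def vsdft_bin(x, omega_inst, theta, Omega):
    return np.sum(x * omega_inst * np.exp(-1j * Omega * theta))

N = 64
n = np.arange(N)
x = np.cos(2 * np.pi * 3 * n / N)

# Degenerate case: uniform angle increments and constant rotation speed.
theta = 2 * np.pi * n / N
omega_inst = np.ones(N)
bins = np.array([vsdft_bin(x, omega_inst, theta, k) for k in range(N)])

# With theta[n] = 2*pi*n/N this is exactly the DFT, so np.fft.fft agrees;
# with nonuniform theta[n] this equality (and the FFT speedup) is lost.
agrees = np.allclose(bins, np.fft.fft(x))
```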
{ "domain": "dsp.stackexchange", "id": 9329, "tags": "fft, dft, python, numpy" }
How much gravity to make lead flow?
Question: Lead is solid at room temperature. But it is quite soft, so under strong enough gravity, it should be possible to pour it out of a container like sand. What gravity is that? Let's do a Fermi estimate. I know little of material science, but this looks like a job for the shear modulus. A dimensional analysis tells me the shear modulus is equivalent to a pressure. Assuming a cube of lead 10 cm on one side, the pressure at the bottom is $$ pressure = \frac{density \times length^3 \times gravity}{length^2} $$ If deformation happens when $\text{pressure} > \text{shear modulus}$, we get $$ gravity > \frac{shear\ modulus}{density \times length} = \frac{5.6 \times 10^9\ Pa}{11000\ kg.m^{-3}\times 0.1\ m} \simeq 5 \times 10^6 m.s^{-2} $$ Which is about a million times the gravity on earth. That seems high. Did I get something wrong? If not, does anybody have a more precise estimate (or know how to get one)? Answer: The answer seems highish, but not too crazy: max mountain heights scale as $1/g$, so under 5 million G a mountain would be a few cm. But lead probably starts changing earlier: the yield limit and compressive strength are 4-12 MPa rather than in the GPa range. Plugging that in gives 3,600 to 11,000 m/s$^2$. Now we are in the few hundred to a thousand G range. (Also, why shear modulus? Isn't Young's modulus the more obvious choice? Still, the yield limit is what really matters here.)
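Plugging the numbers from the post into $g > \text{strength}/(\rho L)$ is a one-liner; this sketch just reproduces both the shear-modulus estimate from the question and the yield-strength range from the answer.

```python
# Values taken from the question and answer above.
rho = 11000.0                      # kg/m^3, density of lead
L = 0.1                            # m, cube edge
shear_modulus = 5.6e9              # Pa (question's figure)
yield_low, yield_high = 4e6, 12e6  # Pa (answer's range)

g_shear = shear_modulus / (rho * L)                        # ~5.1e6 m/s^2
g_yield = (yield_low / (rho * L), yield_high / (rho * L))  # ~3.6e3 .. ~1.1e4 m/s^2
```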
{ "domain": "physics.stackexchange", "id": 79317, "tags": "newtonian-mechanics, newtonian-gravity, material-science, fermi-problem" }
Quantum state space constructing operator
Question: If I use British money the amounts I can have are isomorphic to $\mathbb{Z}_{\geq0}$ (in pennies). If I also use Australian money, if I want to think about the amount I have in total, I can use addition and make a vector space. The possible amounts I can have are isomorphic to $\mathbb{Z}_{\geq0} \times \mathbb{Z}_{\geq0}$ (in pennies and cents). $\times$ fairly clearly describes how to build the more complicated space out of the simpler ones. It also carries across the structure and even semantics (if I use vector space $\times$ rather than set $\times$). Is there an equivalent operator for quantum spaces? ie an operator that if given two singletons/single points/values returns a Bloch sphere, and so forth. It would be perfect if there is an operator like this specifically for quantum mechanics. But if there is only a mathematical one that lacks semantics or some structure it would still be great to learn of it. I exclude $\otimes^\text{note 1}$ on Hilbert spaces as an answer because as far as I understand this includes unphysical degrees of freedom and I am looking for one that captures the construction exactly. Edit - note 1: to make sense in the context of this question $\otimes$ should be replaced with $\oplus$. Answer: I exclude $\otimes$ on Hilbert spaces as an answer That's a shame, because the tensor product of the underlying Hilbert spaces really is the correct constructor for the joint state space of quantum systems. because as far as I understand this includes unphysical degrees of freedom To the degree that it does $-$ i.e. in that the Hilbert space includes a global phase and a normalization constant $-$ those degrees of freedom are trivial to add back in. More technically, the 'true' state space of a $d$-dimensional quantum system is the complex projective space $$ \mathbb{C}\mathbf{P}^{d-1} = (\mathbb{C}^{d}\setminus\{0\})/\mathbb{C}^\times $$ of all complex $d$-ples modulo a complex amplitude (i.e. 
modulo a global phase and normalization). If you have two independent quantum systems, with state spaces $\mathbb{C}\mathbf{P}^{d_1-1}$ and $\mathbb{C}\mathbf{P}^{d_2-1}$, and you want to find the space of states that they can occupy jointly (i.e. including entangled states; if you don't want to include them then you just do a set-theoretic $\times$ and you're done) then first you recover the $\mathbb C^d$ structure, which you can always do automatically; you then form the tensor product $\mathbb C^{d_1}\otimes \mathbb C^{d_2}\cong \mathbb C^{d_1 d_2}$, and finally you re-take the projection to $\mathbb{C}\mathbf{P}^{d_1 d_2-1} = (\mathbb C^{d_1}\otimes \mathbb C^{d_2}\setminus\{0\})/\mathbb{C}^\times$ to get the joint state space. For this construction to make sense, there are two key things to note: You don't actually need a unique way to get up from the projective space $\mathbb{C}\mathbf{P}^{d-1}$ to the base Hilbert space $\mathbb{C}^{d}$: the former already comes equipped with the latter, and you don't need to do anything with it. What you do need to do, in order to deal with the unphysical re-specification of the normalization and phase, is to show that you do have a well-defined way to take two states of the subsystems, $[u]\in \mathbb{C} \mathbf{P}^{d_1-1}$ and $[v]\in\mathbb{C}\mathbf{P}^{d_2-1}$, and get a uniquely-defined state of the joint system. To do that, what you have to show is that the choices of representatives within the classes $[u]$ and $[v]$ do not matter: i.e. that if $u_1,u_2\in [u]$ and $v_1,v_2\in[v]$ are different representatives, with $u_2=\lambda u_1$ and $v_2=\mu v_1$, then the classes they give rise to, $[u_1\otimes v_1]$ and $$ [u_2 \otimes v_2] = [(\lambda u_1) \otimes (\mu v_1)] = [\lambda\mu (u_1\otimes v_1)] = [u_1\otimes v_1] $$ are the same $-$ which they are, because the combined multiplier $\lambda\mu$ gets ignored when we take the $/\mathbb C^\times$ equivalence in the final step. 
On the other hand, it is important to realize that not all states in the joint state space are of this form, because entangled states are simply not separable as the tensor product of two individual system states. Those cannot be constructed explicitly within the joint state space given only the classes: i.e. for the entangled state "the even superposition of system 1 in $[u]$ and system 2 in $[v]$, superposed with system 1 in $[w]$ and system 2 in $[x]$", that description is not sufficient, because in forming the class $$\left[\frac{1}{\sqrt{2}}(u\otimes v + w\otimes x)\right]$$ the choices of representatives from $[u]$ and $[w]$ (resp. $[v]$ and $[x]$) do affect which class you end up in. This is, however, not an artifact of the math, and it has the physical backing that you've failed to specify a common phase reference for $[u]$ and $[w]$ (resp. $[v]$ and $[x]$) and you're not even able to form meaningful single-state superpositions $[u+w]$ or $[v+x]$ without that common phase reference. What does that mean? Basically, that while the projective space $\mathbb{C} \mathbf{P}^{d-1}$ does trim out some 'unphysical' aspects of the description from the vector-space picture, that doesn't mean that you can discard the vector-space structure in $\mathbb{C}^{d}$, which is essential to formulate the full ontology of the system. And finally, this brings me to my final comment on your question: "an operator that if given two singletons/single points/values returns a Bloch sphere" $-$ What you're describing there is not the joint state space of two systems whose individual descriptions are singletons: if you have two systems whose state spaces $\mathbb{C} \mathbf{P}^{0} = \{[|\mathrm{s}⟩]\}$ only allow for a single state $|\mathrm{s}⟩$ on each system, then the only possible state for the system is $$ [|\mathrm{s}⟩\otimes|\mathrm{s}⟩], $$ i.e. 
the state in which each system is in its only allowed state; in other words, the joint state space is $$ (\mathbb C^{1}\otimes \mathbb C^{1}\setminus\{0\})/\mathbb{C}^\times = (\mathbb C^{1}\setminus\{0\})/\mathbb{C}^\times = \mathbb{C}\mathbf{P}^{0}, $$ which is also a singleton. The process that you describe, "given two singletons/single points/values returns a Bloch sphere", does not correspond to the joint state space of two subsystems, but rather to the expanded state space of a single system which is allowed to inhabit superpositions of states from space 1 and from space 2. This is, physically, a very different situation, but your vague description of what actual physical situations you do (and don't) want to consider makes it very hard to comment more accurately. (And no, I'm unlikely to expand this answer if you update your question to add in those specifications. You ask a flawed question, you get an imperfect answer.) If what you want is not the joint states of two subsystems but rather the superposition states of two possible (sets of) states, then the operator you want is not the tensor product $\otimes$, but rather the direct sum $\oplus$ of the Hilbert spaces that underlie each state space. Here, however, it is important to note that, in contrast with the tensor-product structure, the de-projectivization procedure $$ \mathbb C\mathbf P^{d-1} \to \mathbb C^d, $$ together with its attendant addition of "unphysical" information which was not present in the original projective space, cannot be ignored as it was before. The reason for this is that if you're taking superpositions, then the "unphysical" information contained in the "global" phase is no longer unphysical, because the phase is no longer global, and you need a way to specify the relative phase of the superpositions. 
Put another way, there is an infinity of possible ways to form superpositions of the physical states $[|0⟩]$ and $[|1⟩]$, depending on their relative phase; and, if you don't know the phase, then you've completely decohered and you're not doing quantum mechanics $-$ you're doing classical mechanics, and you're just forming a probability distribution over a larger set of states. Further reading: They don't cover joint-space constructors in detail (I imagine because they consider it a bit too basic), but if what you're looking for is a formalized geometric approach to the states of quantum systems, the book you probably want to be reading is Geometry of Quantum States: An Introduction to Quantum Entanglement, by Ingemar Bengtsson and Karol Życzkowski (Cambridge University Press, 2017).
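The well-definedness argument for the product of two classes is easy to check numerically. In this sketch (my own, using qubits) rescaling each representative by arbitrary $\lambda,\mu\in\mathbb{C}^\times$ only rescales the tensor product by $\lambda\mu$, i.e. it stays in the same projective class.

```python
import numpy as np

rng = np.random.default_rng(0)

# Representatives of two projective states (points of CP^1), unnormalized.
u = rng.normal(size=2) + 1j * rng.normal(size=2)
v = rng.normal(size=2) + 1j * rng.normal(size=2)

# Re-choose representatives: arbitrary nonzero complex rescalings.
lam = 2.0 * np.exp(1j * 0.7)
mu = 0.5 * np.exp(-1j * 1.3)

w1 = np.kron(u, v)             # u (x) v
w2 = np.kron(lam * u, mu * v)  # (lam u) (x) (mu v)

# w2 = (lam * mu) * w1: the same ray, hence the same point of CP^3.
same_class = np.allclose(w2, lam * mu * w1)
```

The failure for entangled states is visible in the same setting: `np.kron(u, v) + np.kron(w, x)` changes by more than an overall factor when `u` and `w` are rescaled independently.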
{ "domain": "physics.stackexchange", "id": 51075, "tags": "quantum-mechanics, terminology, foundations" }
Use a Monitor like a Semaphore?
Question: When using monitors for most concurrency problems, you can just put the critical section inside a monitor method and then invoke the method. However, there are some multiplexing problems wherein up to n threads can run their critical sections simultaneously. So we can say that it's useful to know how to use a monitor like the following: monitor.enter(); runCriticalSection(); monitor.exit(); What can we use inside the monitors so we can go about doing this? Side question: Are there standard resources tackling this? Most of what I read involves only putting the critical section inside the monitor. For semaphores there is "The Little Book of Semaphores". Answer: Monitors and semaphores accomplish the same task: ensuring that, among n processes/threads, each enters its critical section atomically. Monitors take an object-oriented approach to this, which, for example, makes the code easier to read. In this sense a monitor sits at a higher level than a semaphore, and you can use a semaphore to implement a monitor. A monitor has a counter variable in the same way a semaphore does, and the V and P of a semaphore are very much like the wait and signal of a monitor, where wait and signal are used on the monitor's condition variables. They can be used to implement the same functionality as a semaphore, even though there are some minor differences: for example, a signal with no process waiting to be unblocked has no effect on the condition variable. Along with that, locks are also used within the procedure. While in most cases you don't want to, you can use a monitor's condition variables and lock outside of it (monitor.lock, monitor.condition_i), basically doing the normal things we would do within a procedure but on the outside, not within its function. This is assuming you want to have the critical section outside of the monitor; otherwise there is no need to do this: // monitor enter monitor.lock.acquire(); while(...) 
monitor.condition_i.wait( monitor.lock ); runCriticalSection(); // monitor exit monitor.condition_j.signal(); monitor.lock.release();
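The enter/exit pseudocode above can be made concrete. Here is a minimal Python rendering (my own sketch) of a counting "multiplex" semaphore built from a monitor-style lock plus condition variable, letting at most n threads run their critical sections at once.

```python
import threading
import time

class MonitorSemaphore:
    """Counting semaphore built from a lock + condition variable."""
    def __init__(self, n):
        self._count = n
        self._cond = threading.Condition(threading.Lock())

    def enter(self):                 # like a semaphore's P / wait
        with self._cond:
            while self._count == 0:  # re-check after every wakeup
                self._cond.wait()
            self._count -= 1

    def exit(self):                  # like a semaphore's V / signal
        with self._cond:
            self._count += 1
            self._cond.notify()

sem = MonitorSemaphore(2)            # at most 2 threads inside at once
meter = threading.Lock()
cur = peak = 0

def worker():
    global cur, peak
    sem.enter()
    with meter:
        cur += 1
        peak = max(peak, cur)
    time.sleep(0.01)                 # pretend to run the critical section
    with meter:
        cur -= 1
    sem.exit()

threads = [threading.Thread(target=worker) for _ in range(5)]
for t in threads: t.start()
for t in threads: t.join()
# peak never exceeds 2: the semaphore multiplexed the critical section
```

The `while` loop around `wait()` mirrors the `while(...)` in the pseudocode above: a woken thread must re-check the count, since another thread may have grabbed the slot first.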
{ "domain": "cs.stackexchange", "id": 3440, "tags": "concurrency, synchronization" }
Wilson Loop in AdS/CFT
Question: In AdS/CFT correspondence one can compare results in $\mathcal{N}=4$ SYM with type IIB string theory in $AdS_5 \times S^5$. One of the observables for which it's possible to get non-perturbative results is the Wilson loop. Maldacena proposed in his paper that on the string side the expectation value of the Wilson loop is the area of the fundamental string that ends on the contour of the loop. The metric of $AdS_5 \times S^5$ is given by: $dS^2 = \alpha' [ \frac{U^2}{R^2}(dt^2 + dx_i dx_i) + \frac{R^2}{U^2} dU^2 + d\Omega_5^2]$ From which one can obtain the induced metric on the worldsheet: $h_{\alpha \beta} = G_{MN} \frac{\partial X^M}{\partial \sigma^{\alpha}}\frac{\partial X^N}{\partial \sigma^{\beta}}$ and in the large N limit one gets the area as the Nambu-Goto action: $S = \frac{1}{2\pi \alpha'} \int d\tau d\sigma \sqrt{det(h_{\alpha \beta})}$ Choosing a static gauge: $\tau = t$, $\sigma = x$, and fixing the coordinates in $S^5$ as constants $\theta^I_0$ one gets: $ S = \frac{T}{2\pi} \int dx \sqrt{U^4/R^4 + (\partial_x U)^2}$ From which we need to solve the Euler-Lagrange equations. In the paper Maldacena says that as the action does not depend on $x$ the canonical momentum is conserved, and one gets the condition: $\frac{U^4}{\sqrt{U^4/R^4 + (\partial_x U)^2}} = constant$ But I can't exactly grasp what he is doing to get there; does anyone know? Answer: This is just classical mechanics. We have a Lagrangian ${\cal L}(U,\partial_x U)$. Since the Lagrangian does not depend on time (here: $x$), the Hamiltonian $$ {\cal H} = \frac{\partial {\cal L}}{\partial (\partial_x U)} (\partial _x U) -{\cal L} $$ is conserved. Presto.
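Spelling out the conserved-Hamiltonian step, consistent with the formulas above:

```latex
% With \mathcal{L} = \sqrt{U^4/R^4 + (\partial_x U)^2} and no explicit x-dependence,
% the "Hamiltonian" conjugate to x is conserved:
\mathcal{H}
  = \frac{\partial \mathcal{L}}{\partial(\partial_x U)}\,\partial_x U - \mathcal{L}
  = \frac{(\partial_x U)^2}{\sqrt{U^4/R^4 + (\partial_x U)^2}}
    - \sqrt{U^4/R^4 + (\partial_x U)^2}
  = -\,\frac{U^4/R^4}{\sqrt{U^4/R^4 + (\partial_x U)^2}}
% Since R is constant, this is exactly the quoted condition
% U^4 / \sqrt{U^4/R^4 + (\partial_x U)^2} = \text{constant}.
```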
{ "domain": "physics.stackexchange", "id": 29602, "tags": "homework-and-exercises, string-theory, ads-cft, wilson-loop" }
Why are there some inconsistencies with the underlying principle of center of gravity and rotational inertia?
Question: COG: The lower the center of gravity, the more stable an object is. Rotational Inertia: The farther the concentration of mass from the defined axis of rotation, the more resistance the object has to changes in its rotational motion. I'm quite confused between these two statements. Answer: The difference is in how you are halting rotation once it starts. In a car, a low center of gravity means in effect that you have several "arms" extending outwards (the tires) to push against a solid surface every time the car tries to turn over, say in a sharp curve. The more horizontal those arms and the farther they extend outwards, the more stable your vehicle will be. That argues for making your center of gravity lower. There's another twist to it that I'll get into below. For rotational inertia, think of those long poles that people who walk tightropes usually carry. The farther out those poles extend mass (sometimes they have heavy balls on the ends for example), the greater the rotational inertia, and the more opportunity the walker will have to correct her balance by pushing against that inertia. So in that case, the rotational inertia directly contributes to her ability to remain stable. So there is an example of how, in the absence of a solid surface to push on, rotational inertia can be directly used to keep from tipping over. The twist I mentioned for vehicles is this: They too can benefit from the rotational inertia strategy, but usually don't bother! The reason is that it produces funny-looking vehicles that most folks would not want to be seen driving. Take two big Harley-Davidson motorcycles, each with the same front-to-back wheel separation as a certain small car, and each with exactly half the car's mass. Also, adjust their centers of gravity to be the same in height and front-to-back position as for the small car. Now weld them together using lightweight bars, so that their wheels are the same distance apart left-to-right as in the small car. 
(I suggest getting permission from the Harley owners first; they can be so picky about such things...) You now have two vehicles with about the same mass, the same wheel patterns, and the same centers of gravity. The only difference is that in one case most of the mass is located close to the center of gravity (the small car) whereas in the other case most of the mass is located near either set of wheels (the dual Harley). In terms of physics, this means that despite having identical masses, wheel positions, and even centers of gravity, the two vehicles are not identical, because the dual Harley will have much higher rotational inertia. So if you put them into an extreme race course designed to test flip-over stability, which vehicle do you think will win? Yep: The dual Harley, for the same reason that a tight-rope walker is more stable with a pole than without one. So, bottom line: Both having long "push arms" against a solid surface and having higher rotational inertia can help stabilize a ground vehicle. It's just that the pushing option is so much easier and gives such better-looking vehicles that no one bothers with the additional edge given by concentrating most of the mass over the wheels. Addendum: The Dual Harley is not any better at roll resistance! I made an error! The Dual Harley will not do any better on curves than the same-mass vehicle! I'm flagging myself instead of editing out the error, though, since it's kind of an interesting error. I would have been fine if I had suggested this: Take your vehicle and add two long poles with weights on their ends sticking out both sides (passers-by beware!). That would have kept the vehicle more stable in the same way as for the tightrope walker. It also would have made turns harder, though, since the same added rotational inertia would resist left-right turns just as well as rolls! 
Now in the case of the Dual Harley, there's nothing wrong with my assertion that distributing mass towards the wheels gives higher overall rotational inertia -- it does. But there's a more subtle error in why it doesn't provide any additional roll resistance. Anyone see it? I'll leave it open as a problem for someone else for now, and answer it if no one else does.
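The mass-distribution point can be quantified with a point-mass toy model (my own numbers, not from the answer): the same total mass and the same center of gravity, but very different roll-axis moments of inertia.

```python
# Point-mass toy model, hypothetical numbers: 1000 kg total, half-track 0.75 m.
m_total = 1000.0   # kg, total vehicle mass
d = 0.75           # m, distance of each "Harley" from the roll axis

# Small car, idealized: all mass on the centerline -> roll inertia ~ 0.
I_car = 0.0

# Dual Harley: two m/2 point masses at +/- d from the roll axis.
I_dual = 2 * (m_total / 2) * d**2   # = m_total * d^2 = 562.5 kg m^2
```

Same total mass, same center of gravity, yet the split-mass configuration carries hundreds of kg·m² of roll inertia where the idealized centerline car carries none.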
{ "domain": "physics.stackexchange", "id": 85323, "tags": "inertia, stability" }
How is the term "solar system" defined? Could confirmation of a new planet lead to a change in this definition?
Question: Voyager 1 (https://en.wikipedia.org/wiki/Voyager_1) is approximately 140 AU away, and is considered to have left the solar system. Now, the layman's definition of solar system provided by Google is: ...the collection of eight planets and their moons in orbit around the sun, together with smaller bodies in the form of asteroids, meteoroids, and comets. And from Wikipedia: The Solar System[a] is the gravitationally bound system of the Sun and the objects that orbit it, either directly or indirectly,[b] including the eight planets and five dwarf planets as defined by the International Astronomical Union (IAU). At this point, Voyager 1 is well beyond the aphelion of all known planets, so according to the definitions above, so at least colloquially, saying that it has left the Solar System seems to make sense. However, according to Wikipedia, it was only officially considered to have left the Solar System once it left the heliopause. This is not really in conflict with the above definitions, as, to my knowledge, the heliopause extends beyond the orbit of all the original 8/9 planets at all times. The Encyclopedia Britannica (https://www.britannica.com/science/heliopause) states that the heliopause extends for 123AU. However, the hypothesised Planet 9/X is conjectured to have an aphelion of approximately 1,200 AU and a Perihelion of 200 AU (https://en.wikipedia.org/wiki/Planet_Nine). This would seem to bring the two competing definitions of "solar system" discussed here into sharp conflict. Which definition would likely win out? Could the confirmation of such a planet force a reckoning on whether Voyager 1 has truly left the solar system? Edit: I wanted to add a more authoritative source that says Voyager 1 has indeed left the solar system. Voyager 1 is the first man-made object to leave our solar system and pass into interstellar space. 
Scientists confirmed this finding a year later after studying Voyager’s data, which showed clear changes in the plasma or ionized gas right outside of the solar bubble. https://www.nasa.gov/directorates/heo/scan/images/history/August2012_2.html Answer: The definition of the edge of the Solar System is not one of particular scientific importance, so it is not particularly well-defined. There are a number of plausible definitions, all reasonable. I've listed the ones which occur to me below, in order of increasing size: (1) the pancake-shaped volume which contains the orbits of the major planets; (2) the sphere big enough to contain the major planets; (3) the volume inside the solar bubble; (4) the thickish disk which contains the Kuiper Belt; (5) the sphere which contains the Oort Cloud; (6) the Sun's Hill sphere; (7) the irregularly-shaped volume of space where the Sun is the closest star. As you note, the discovery of a Planet X would change some of them, but not others. How you choose to define the edge of the Solar System depends a lot on what you intend to use it for. I very much doubt that any one definition will win out, at least not until the definition of the Solar System makes an important difference somewhere: in government, commerce, or science. So I think no clear answer to your question is possible.
{ "domain": "astronomy.stackexchange", "id": 3221, "tags": "solar-system" }
Real and imaginary part of the solution to the Laplace Equation violates uniqueness?
Question: I am trying to solve for the magnetic vector potential on $\mathbb{R}^2$. I have used the phasor formulation of Maxwell's equations and therefore I believe I am solving the equation on $\mathbb{C}^2$. The equation I have derived is $$\nabla^2\hat{A}(r,\phi)=-\frac{\mu_0}{r}\delta(r-r')\delta(\phi-\phi')$$ where $\hat{A}(r,\phi)$ is the complex magnitude of the solution $A(r,\phi)=\Re({\hat{A}(r,\phi)e^{j \omega t}})$. I believe that the transformation to phasor form is mostly a mathematical convenience and that it should have no effect on my final solution. However, when I have a solution $\hat{A}(r,\phi)$ that satisfies the Poisson equation and Dirichlet boundary conditions ($\hat{A}(r\to \infty,\phi)=0$), I will then have two real solutions, namely $A_1(r,\phi)=\frac{\hat{A}e^{j \omega t}+\hat{A}^*e^{-j \omega t}}{2}=\Re({\hat{A}(r,\phi)e^{j \omega t}})$ and $A_2(r,\phi)=\frac{\hat{A}e^{j \omega t}-\hat{A}^*e^{-j \omega t}}{2 j}=\Im({\hat{A}(r,\phi)e^{j \omega t}})$. My difficulty is in choosing which one of these is the "correct" solution (i.e. why I should choose one and disregard the other), and also why this doesn't violate the uniqueness of the Laplace equation with Dirichlet boundary conditions. My understanding is that if I solve the Poisson equation on $\mathbb{R}^2$ with Dirichlet boundary conditions then I am guaranteed a unique solution (See Wiki: Uniqueness Theorem for Poisson Equation). Answer: You're presumably looking to solve the (real) equation $$ \nabla^2 A(r,\phi,t)=-\mu_0 J(r,\phi,t) $$ and have chosen a source of the form $J(r, \phi, t) = \frac{1}{r}\delta(r-r')\delta(\phi-\phi') \cos( \omega t)$. When you go to the complex domain, you have implicitly replaced $\Re (e^{i \omega t})$ in the "real source" with $e^{i \omega t}$ as a "complex source"; and so when you solve the problem and want to find the "real potential" from the "complex potential", you have to undo this step. 
In other words, if $J(r, \phi)e^{i \omega t}$ really stands for $\Re (J(r, \phi)e^{i \omega t} )$, then $\hat{A}(r, \phi)e^{i \omega t}$ really stands for $\Re (\hat{A}(r, \phi)e^{i \omega t} )$. To see this more formally, consider the solutions to two real equations: \begin{align} \nabla^2 A_r(r,\phi,t)&=-\mu_0 J(r,\phi) \cos \omega t \tag{1} \\ \nabla^2 A_i(r,\phi,t)&=-\mu_0 J(r,\phi) \sin \omega t \tag{2} \end{align} The complex function $A_c(r, \phi, t) \equiv A_r + i A_i$ will then satisfy (by linearity) $$ \nabla^2 A_c(r,\phi,t)=-\mu_0 J(r,\phi) e^{i \omega t} \tag{3} $$ So one way to solve both (1) and (2) above is to solve (3) for $A_c$, and then take the real and imaginary parts of $A_c$. The real part of $A_c$ gives the response to the real part of the source, and the imaginary part of $A_c$ gives the response to the imaginary part of the source. (In typical situations we're not interested in this latter response, but it's still a valid solution to Poisson's equation for that source.) See this excellent answer from Emilio Pisanty for more perspective on this last paragraph.
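The "real part of the response to the real part of the source" statement is just linearity of a real operator, which a small numerical sketch makes concrete. This toy (my own, not from the answer) uses a 1D discrete Laplacian with Dirichlet ends as a stand-in for $\nabla^2$.

```python
import numpy as np

# A real linear operator: 1D discrete Laplacian with Dirichlet ends
# (a toy stand-in for the Poisson operator discussed above).
N = 50
Lap = (np.diag(-2.0 * np.ones(N))
       + np.diag(np.ones(N - 1), 1)
       + np.diag(np.ones(N - 1), -1))

# A complex source (the spatial part of J e^{i omega t} at some instant).
s_c = np.exp(1j * np.linspace(0.0, 2.0 * np.pi, N))

# Solve the complex problem once...
A_c = np.linalg.solve(Lap, s_c)

# ...then, because Lap has real entries, Re(A_c) solves the real-source
# problem and Im(A_c) solves the imaginary-source one -- eqs. (1)-(3) above.
re_ok = np.allclose(Lap @ A_c.real, s_c.real)
im_ok = np.allclose(Lap @ A_c.imag, s_c.imag)
```

Both solutions exist and are unique; uniqueness is not violated because they solve two different real problems (cosine source vs. sine source).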
{ "domain": "physics.stackexchange", "id": 96162, "tags": "electromagnetism, magnetic-fields, complex-numbers" }
Are there two varieties of dandelion leaves?
Question: This is in India. I noticed a dandelion plant which had the familiar leaves. Right next to it there was another plant which had similar flowers and white tufts, but the leaves were different. Is the second plant also a dandelion? I'm asking because I was considering feeding the leaves to my budgies, but needed to know if it is a dandelion, to avoid having the birds consume a toxic plant species. Did a Google search, but all of them show the arrow-style leaves only as dandelion. Answer: Although the leaves are similar, neither of these is a dandelion (Taraxacum species), because: 1. Dandelions have leaves in a rosette, not distributed along a stem. 2. Dandelions have a stem with a single flower at the end, not multiple flowers. 3. The shape of the closed flower heads after flowering is too pointed. Dandelions usually have the pappus (white hairs) sticking out a little. These distinctions can be clearly seen in the picture below: I am not familiar with the flora of India, but I think these could be Sonchus species. Sonchus is known to have variable leaves. For comparative purposes, see what Sonchus looks like
{ "domain": "biology.stackexchange", "id": 7902, "tags": "taxonomy" }
Clean up improperly formatted phone numbers
Question: This is a problem from Exercism here, the goal is to clean up badly formatted phone numbers. The rules are: If the phone number is less than 10 digits assume that it is a bad number If the phone number is 10 digits assume that it is good If the phone number is 11 digits and the first number is 1, trim the 1 and use the first 10 digits If the phone number is 11 digits and the first number is not 1, then it is a bad number If the phone number is more than 11 digits assume that it is a bad number A function should pretty-print the numbers in the form (123) 456-7890. Here is my solution. I'm very new to Clojure and am sure it's not idiomatic in parts. For example, should I have stored the different parts of the number as separate variables? I would also be interested in tips on how to format the code to make it easier to read, I'm only aware of lispy-multiline which I find a bit too aggressive. All feedback much appreciated. (ns phone-number) (defn digit-not-one-or-zero [digit] (and (not (= \0 digit)) (not (= \1 digit)))) (defn clean-number [num-string] ;; strip all non-digit characters from a string (filter #(Character/isDigit %) num-string)) (defn check-valid [num-string] ;; number is valid only if: ;; a) length is between 10-11 chars ;; b) if 11 chars, first digit must be 1 ;; c) (after stripping first digit if 11), first and fourth can't be 0 or 1 (let [clean-string (clean-number num-string) len (count clean-string)] (if (or (< len 10) (> len 11)) false (if (= len 11) (if (= \1 (first clean-string)) (check-valid (rest clean-string)) false) (if (and (digit-not-one-or-zero (first clean-string)) (digit-not-one-or-zero (nth clean-string 3))) true false))))) (defn number [num-string] (if (check-valid num-string) (let [clean-string (clean-number num-string)] (if (= (count clean-string) 10) (apply str (clean-number clean-string)) (apply str (rest (clean-number clean-string))))) "0000000000")) (defn area-code [num-string] ;; get the first 3 chars of the cleaned 
up number (apply str (take 3 (number num-string)))) (defn pretty-print [num-string] (let [formatted-number (number num-string)] (apply str (concat "(" (area-code num-string) ")" " " (subs formatted-number 3 6) "-" (subs formatted-number 6))))) Answer: I suggest you to look at Clojure Cheatsheet and get familiar with the functions dedicated for working with strings. There is also the library clojure.string, containing a few more. With these functions, you won't have to convert your string into sequence and then back repeatedly. Take for example your pretty-print- subs returns string, concat returns sequence, apply str returns string again. Pick the correct data structure at the beginning, use dedicated functions and convert as little as possible. Use :require in ns to include clojure.string library: (ns phone-number (:require [clojure.string :as s]) (:gen-class)) I will start with the area-code. This function could be improved with ->> (thread-last macro) and clojure.string/join like this: (defn area-code [num-string] (->> (number num-string) (take 3) s/join)) But you can also use subs: (defn area-code [num-string] (subs (number num-string) 0 3)) Next, pretty-print. You can use format here: (defn pretty-print [num-string] (let [validated (number num-string)] (format "(%s) %s-%s" (area-code validated) (subs validated 3 6) (subs validated 6)))) Next, digit-not-one-or-zero. You usually create a set with these elements and call it on your argument, something like: (defn digit-not-one-or-zero [digit] (not (#{\0 \1} digit))) Or even shorter: (def digit-not-one-or-zero (complement #{\0 \1})) Your clean-number is ok (I would just add s/join at the end to ensure it will return a string, as any other function here), unless you want to use some regex: (defn clean-number [num-string] (s/replace num-string #"\D" "")) Next, number and check-valid. Don't use nested ifs- learn about cond, condp and case and use these. When can be also useful here. Don't use (if ... true false). 
In Clojure, all values are logically true or false. The only "false" values are false and nil - all other values are logically true. So, take this example: (if (and (digit-not-one-or-zero (first clean-string)) (digit-not-one-or-zero (nth clean-string 3))) true false) You can just write: (and (digit-not-one-or-zero (first clean-string)) (digit-not-one-or-zero (nth clean-string 3))) check-valid is too long- you should break it into smaller functions. So, here is my version you can study. Note how using short functions, idioms and proper flow-control macros increased readability: (ns phone-number (:require [clojure.string :as s]) (:gen-class)) (def allowed-digits (complement #{\0 \1})) (defn clean-number [num-string] (s/replace num-string #"\D" "")) (defn validate10 [s] (and (allowed-digits (.charAt s 0)) (allowed-digits (.charAt s 3)) s)) (defn validate11 [s] (when (= \1 (.charAt s 0)) (validate10 (subs s 1)))) (defn validate [num-string] (condp = (count num-string) 11 (validate11 num-string) 10 (validate10 num-string) false)) (defn number [num-string] (if-let [valid (validate (clean-number num-string))] valid "0000000000")) (defn area-code [num-string] (subs (number num-string) 0 3)) (defn pretty-print [num-string] (let [validated (number num-string)] (format "(%s) %s-%s" (area-code validated) (subs validated 3 6) (subs validated 6)))) EDIT: Alternative solution with regexes: (defn number [num-string] (if-let [valid (->> (s/replace num-string #"\D" "") (re-matches #"1?([^01]..[^01].{6})") second)] valid "0000000000")) (defn area-code [num-string] (subs (number num-string) 0 3)) (defn pretty-print [num-string] (s/replace (number num-string) #"(\d{3})(\d{3})(\d{4})" "($1) $2-$3"))
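The single-regex version packs all the validation into one pattern: an optional leading 1, then ten digits whose first and fourth are neither 0 nor 1. A quick sketch of the same logic in Python (Python only for illustration; the function names here are mine, mirroring the Clojure ones):

```python
import re

def clean_number(num_string):
    """Strip everything but digits, then validate in one regex pass:
    optional leading 1, then 10 digits whose 1st and 4th are not 0/1.
    Invalid input maps to the "0000000000" placeholder, as in the post."""
    digits = re.sub(r"\D", "", num_string)
    m = re.fullmatch(r"1?([^01]..[^01].{6})", digits)
    return m.group(1) if m else "0000000000"

def pretty_print(num_string):
    n = clean_number(num_string)
    return "({}) {}-{}".format(n[:3], n[3:6], n[6:])
```

Because `digits` contains only digit characters, the `[^01]` classes behave exactly like "any digit except 0 or 1" here.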
{ "domain": "codereview.stackexchange", "id": 44232, "tags": "strings, formatting, clojure" }
Empty Costmap using PointCloud2
Question: When I run rostopic echo /move_base/local_costmap/costmap, the costmap data is filled with zeros even though it has a pointcloud2 observation source. Below are all the costmap params and the two launch files I run at startup. costmap_common_params.yaml: obstacle_range: 25 raytrace_range: 25 footprint: [[1.65, 1.02], [2, 0], [1.65, -1.02], [-1.65,-1.02], [-1.65,1.02]] footprint_padding: 1 inflation_radius: 6 max_obstacle_height: 100 observation_sources: velodyne velodyne: {sensor_frame: velodyne, data_type: PointCloud2, topic: velodyne_points, marking: true, clearing: true, min_obstacle_height: -10, obstacle_range: 25, raytrace_range: 25} plugins: - {name: obstacle, type: "costmap_2d::ObstacleLayer"} - {name: inflation, type: "costmap_2d::InflationLayer"} obstacle: track_unknown_space: true global_costmap_params.yaml: global_costmap: global_frame: map robot_base_frame: base_link update_frequency: 1.0 static_map: false rolling_window: true local_costmap_params.yaml: local_costmap: global_frame: odom robot_base_frame: base_link update_frequency: 1.0 publish_frequency: 0.5 static_map: false rolling_window: true width: 10.0 height: 10.0 resolution: 0.025 origin_x: 0.0 origin_y: 0.0 min_obstacle_height: -10.0 move_base.launch: <launch> <master auto="start"/> <!-- Run the map server --> <!-- <node name="map_server" pkg="map_server" type="map_server" args="$(find map_server)/tutorialMap.pgm 0.05"/> --> <!--- Run AMCL --> <!-- <include file="$(find amcl)/examples/amcl_omni.launch" /> --> <node pkg="move_base" type="move_base" respawn="false" name="move_base" output="screen"> <rosparam file="$(find bolt_2dnav)/costmap_common_params.yaml" command="load" ns="global_costmap" /> <rosparam file="$(find bolt_2dnav)/costmap_common_params.yaml" command="load" ns="local_costmap" /> <rosparam file="$(find bolt_2dnav)/local_costmap_params.yaml" command="load" /> <rosparam file="$(find bolt_2dnav)/global_costmap_params.yaml" command="load" /> <rosparam file="$(find bolt_2dnav)/base_local_planner_params.yaml" command="load" /> <param name="odom_frame_id" value="odom"/> <param name="base_frame_id" value="base_link"/> <param name="global_frame_id" value="map"/> </node> </launch> robot_configuration.launch: <launch> <node pkg="robot_setup_tf" type="odom" name="odom" output="screen"> </node> <node pkg="robot_setup_tf" type="tf_broadcaster" name="tf_broadcaster" output="screen"> </node> <!-- <param name ="/use_sim_time" value="true"/> --> <node pkg="tf" type="static_transform_publisher" name="map_odom_tf_broadcaster" args="0 0 0 0 0 0 1 map odom 10" /> </launch> Originally posted by reben on ROS Answers with karma: 1 on 2017-11-20 Post score: 0 Answer: You need to add the observation sources inside of the plugin that should use them. Thus, move those into the obstacle plugin like so: plugins: - {name: obstacle, type: "costmap_2d::ObstacleLayer"} - {name: inflation, type: "costmap_2d::InflationLayer"} obstacle: observation_sources: velodyne velodyne: {sensor_frame: velodyne, data_type: PointCloud2, topic: velodyne_points, marking: true, clearing: true, min_obstacle_height: -10, obstacle_range: 25, raytrace_range: 25} track_unknown_space: true Originally posted by mgruhler with karma: 12390 on 2017-11-30 This answer was ACCEPTED on the original site Post score: 0 Original comments Comment by reben on 2017-11-30: Thanks for the response. I tried this solution but it seems to be outputting empty still.
{ "domain": "robotics.stackexchange", "id": 29408, "tags": "navigation, costmap" }
The electron potential energy of a single electron (in $H^+_2$ ) at any point
Question: In my book it says: We can choose a convenient reference by noting that the coulomb force at infinite distance is zero. It makes sense for this case, then, to choose $r=\infty$ as the vacuum level $E_v$: $E_p(r=\infty)=E_v$ (here, the subscript p denotes potential) When the nuclei are allowed to approach each other, an electron would be influenced by both nuclei according to Coulomb's law. The electron potential energy of a single electron (in $H^+_2$) at any point is now $E_p=E_v-\frac{q^2}{(4\pi\epsilon_0)r_1}-\frac{q^2}{(4\pi\epsilon_0)r_2}$ where $r_1$ and $r_2$ are the distances between the electron and each of the two nuclei, respectively. But I can't understand this. As I know, the potential energy's formula is $E_p=\int_C\vec{F}\bullet\vec{dr}$ And, by superposition, we can get $E_p=E_1+E_2=\int_C(\vec{F}_1+\vec{F}_2)\bullet\vec{dr}$ where $\vec{F_1}$ and $E_1$ are the force and potential energy due to one charge, and $\vec{F_2}$, $E_2$ are due to the other charge. So I think $E_1=E_v-\frac{q^2}{(4\pi\epsilon_0)r_1}$, $E_2=E_v-\frac{q^2}{(4\pi\epsilon_0)r_2}$ Thus, I think $E_p=2E_v-\frac{q^2}{(4\pi\epsilon_0)r_1}-\frac{q^2}{(4\pi\epsilon_0)r_2}$, not $E_p=E_v-\frac{q^2}{(4\pi\epsilon_0)r_1}-\frac{q^2}{(4\pi\epsilon_0)r_2}$ What am I misunderstanding? Answer: (a) First, you can add any constant you want to the energy without changing the physics, so even if you decide to add $E_v$ to each of $E_1$ and $E_2$ so that $E_p=E_1+E_2=2E_v+\cdots$, you could always decide to subtract $E_v$ from $E_p$ to get the expression in the book. So in that sense, there is not really a right answer to this question.
(b) If you want to interpret what the book is saying physically, then the correct energy superposition you should do is with three sources, not two: \begin{eqnarray} {\rm Potential\ Energy} &=& {\rm Energy\ of\ Charge\ 1} + {\rm Energy\ of\ Charge\ 2} + {\rm Vacuum\ Energy} \\ E_p &=& E_1 + E_2 + E_v \\ &=& -\frac{q^2}{4\pi \epsilon_0 r_1}-\frac{q^2}{4\pi \epsilon_0 r_2} + E_v \end{eqnarray}
{ "domain": "physics.stackexchange", "id": 100443, "tags": "electromagnetism, potential-energy" }
Does distance traveled by a vehicle after its engine has been switched off depend on its mass at all?
Question: A vehicle moving with some velocity on a rough horizontal road finally comes to rest after its engine has been turned off. Intuitively, it seems a vehicle with greater mass would stop sooner because it would experience a greater friction force, but if we go by the work-energy theorem as follows, it's clear that the distance covered does not depend on the vehicle's mass at all: $$KE_{final} - KE_{initial} = W_{friction}$$ $$KE_{final}=0\\ KE_{initial}=\frac12Mv^2\\ W_{friction}=-\mu Mgd\\ \text{Therefore, }d=\frac{v^2}{2\mu g}$$ Is my notion valid? Answer: This is admittedly a late answer. Hopefully it will clear up some of the confusion. If you ignore aerodynamic drag, ignore that the coefficient of rolling friction varies with load, and ignore a number of other factors such as friction between the axle and the bearings that support it, then yes, stopping time / stopping distance is independent of vehicle load. This is completely unrealistic reasoning. Things get a bit more complex if you don't ignore those factors. Consider a pickup with or without a load of dirt. I'll start by assuming a constant coefficient of drag and by ignoring everything but aerodynamic drag. The only horizontal force acting on the vehicle is drag, and that depends only on speed (for a fixed drag coefficient, it is proportional to the square of the speed). Per these assumptions, a fully loaded pickup and empty pickup moving at the same speed suffer exactly the same drag force. Since acceleration is inversely proportional to mass, the loaded pickup takes longer to come to rest than the empty pickup. Most people drive their pickups with the tailgate closed and the bed uncovered. This makes for a rather non-aerodynamic vehicle. Some people solve this by replacing the tailgate with an open mesh net, others solve this by adding a bed cover. I'll assume a typical pickup owner who hasn't done either. Filling the bed with dirt makes the vehicle more aerodynamic.
This decrease in the coefficient of drag will make the loaded pickup take even longer to come to a stop than the simple analysis above suggests. The above ignores friction between the tires and the road. There will be a significant increase in contact area between tire and road after filling the pickup with dirt. This increased contact area increases the coefficient of rolling friction. This will act to make the loaded vehicle come to rest more quickly than the empty vehicle. Whether the loaded pickup has a longer or shorter stopping distance than the empty truck depends on which of aerodynamic drag or rolling friction predominates. Anyone who has hauled a load of dirt with a pickup should know they need to reinflate the tires after taking on that load. Driving with a heavy load and tires squished flat to the ground is a good recipe for trouble. Reinflating the tires so that the contact area is more or less the same as it was prior to taking on the load reduces the coefficient of rolling friction. This makes drag and rolling friction work in concert with one another. The loaded truck takes a good deal more time or distance to come to a stop than does the empty truck. Similar concerns sometimes come into play during the launch of a rocket. Rockets with a high thrust to weight ratio oftentimes need to coast for a while between the shutdown of one stage and the ignition of the next. When that's the case, the separation of the burnt-out stage doesn't occur when the stage shuts down. That separation instead is delayed until shortly before the next stage is supposed to ignite. Keeping the burnt-out stage attached until the last minute reduces drag acceleration by a considerable margin.
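The question's idealized friction-only result is easy to confirm numerically: the mass cancels and never appears in the formula. A quick sketch (the μ and v values below are arbitrary examples of mine):

```python
def stopping_distance(v, mu, g=9.81):
    """d = v^2 / (2*mu*g), from (1/2) M v^2 = mu M g d; M cancels out."""
    return v ** 2 / (2 * mu * g)

# Same speed and friction coefficient, any mass: same distance.
d = stopping_distance(v=20.0, mu=0.7)
```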
{ "domain": "physics.stackexchange", "id": 16237, "tags": "energy, mass, everyday-life, friction, work" }
fuerte's supported python is 2.6 that's not supported on Precise
Question: I noticed (while I'm looking at this question) that the official python version in fuerte is 2.6 (link to REP-003), which is no longer supported in Ubuntu Precise (there's a way to install 2.7 though). I'm just curious how this decision was made. Was Lucid prioritized as an earlier stable distro? Is there a link to a discussion? Originally posted by 130s on ROS Answers with karma: 10937 on 2013-04-18 Post score: 0 Answer: My interpretation of REP-0003 is that Python 2.6 is the minimum version supported by Fuerte. Probably, that entry should have explicitly stated "Python 2.6 and 2.7". Since Precise came out shortly after Fuerte was released, it's easy to understand how that might have been overlooked at the time. All Fuerte Precise release testing was done with Python 2.7, so I doubt there are many serious problems due to the newer version, which is highly compatible with 2.6, anyway. Originally posted by joq with karma: 25443 on 2013-04-19 This answer was ACCEPTED on the original site Post score: 1 Original comments Comment by 130s on 2013-04-20: Ah, release timing of fuerte and Ubuntu explains. I also found in the same REP that says "Our intent with Python is to track the minimum version provided in the supported Ubuntu platforms, " so I guess you're right.
{ "domain": "robotics.stackexchange", "id": 13882, "tags": "ros, python2.7, ubuntu, ros-fuerte, ubuntu-precise" }
Aliasing correspondence with respect to DFT
Question: Given a sampling frequency Fs, let's say we plot the magnitude of the fft of a temporal signal $x$ for different frequencies above the Nyquist frequency, to show the effect of aliasing: Fs=10; t=0:1/Fs:1-(1/Fs); f=6; %7,8,9, ... to see aliasing starting at 6, since 6 is just above the Nyquist freq x=sin(2*pi*f*t); freq_range=Fs*((0:length(x)-1)/length(x)); plot(freq_range,abs(fft(x))); Can someone help formulate mathematically to which lower frequencies a given frequency f > Fs/2 will be aliased? Experimentally, we can see that (for integer frequencies; I don't know if it really makes sense to use fractional ones?): 6 -> 4 7 -> 3 8 -> 2 9 -> 1 10 -> 0 <- should be zero? but the spectrum goes crazy and there are peaks everywhere. Why? Then one would think that this would be periodic, i.e. f=11 -> f_aliased=1, etc. back to zero in decreasing order; then again some strange things happen at 15, then it goes on cyclically etc. What is the relation? Also, w.r.t. spectral resolution ($\Delta f = F_s/N$, let $N$ be the number of samples, i.e. length(x)), one could ask if the next frequency above the Nyquist is in fact $f_{Nyq}+\Delta f = (F_s/2)+\Delta f$? But in this example the spectral resolution is $\Delta f = 1$, right? Answer: If you do spectral analysis, it is typically better to use WAY more points than 10 in the time domain.
{ "domain": "dsp.stackexchange", "id": 7807, "tags": "fft, aliasing" }
Batch-wise Inference to speed up MuZero's MCTS
Question: Context: I've implemented MuZero for the game Tic-tac-toe. Unfortunately, the self-play and training is very slow (like 10 hours until it plays quite well). I ran the Python profiler to find the parts that take the most time. The result is that most time is spent doing Monte Carlo tree searches, specifically querying the neural networks for the next hidden state and the value and policy predictions. While running self-play, my GPU is only on 30% load and my CPU is on max load (single process since GIL). Note: In the MuZero paper, they have done some fancy stuff like scaling some gradients, which I haven't implemented yet. This will probably also result in a small speedup. What I'm trying to do: I want to speed up the self-play by running multiple MCTS's in different threads so that they pause whenever they want to query the neural network until enough other threads have queries for the network. Then I put all the queries into a batch and send the batch to the network. Once I have the results, I return them to each thread, and they continue until they try to query the network again. Reason: Let's say I want to play 100 self-play games. I do 50 simulations per move, the average game length is 7, and the observation shape is (3,3,3). With my current approach, this would result in 100 * 7 * 50 = 35000 network queries of shape (1,3,3,3). With the approach described above, I could run all 100 games at once and batch the network queries, resulting in 7 * 50 = 350 network queries of shape (100,3,3,3)
Answer: Batching: A Good Idea You're right, batching is a great way to speed up AlphaZero or MuZero self-play! Your proposed solution of running multiple games in parallel is the easiest way to achieve some batching. There are other solutions, most notably Virtual loss, which allows you to get batches even from a single tree search. You can even combine both approaches. Implementation Details Unfortunately, I don't know of any PyTorch features that help with this, and in general Python is not the best language for concurrency and multithreading. The way I implemented this in kZero (in Rust): There is a single thread responsible for NN execution that listens to a channel/queue of work items. Once it has collected enough items, it evaluates the batch and sends the results back to each work item through a dedicated temporary response channel that was part of the original item. Different games run on different threads or async tasks, send their evaluation requests to the main thread, and wait for the response. The way to implement this in Python is with either threading and queues or with asyncio and its built-in queues. See some of the diagrams in the kZero readme for some graphics to illustrate this. An alternative is to share a single thread between multiple games and write a state machine to switch between them yourself. Effectively, this means you're implementing your own async-like system, this is tricky to get right! I did this at first but later switched to using async proper to simplify the code a lot. Yet another alternative is to do what mctx does and let JAX "magically" handle all batching and concurrency. I'm not sure how well this actually works in practice, I haven't used JAX myself yet. Deadlocks and Race Conditions You can avoid the potential deadlock of batches never filling by just running enough games at once! 
For example, if you're only running a single NN execution thread with a batch size of 100, spawning at least 100 games will make deadlocks impossible. It's best to spawn at least twice as many games to get some concurrency, that way, by the time the first 100 requests have been evaluated, the next 100 are ready to go, and the NN thread never has to wait long. The way to avoid deadlocks and race conditions caused by concurrency bugs is to write correct code! This is tricky to get right and basically an entire domain by itself. I've found that only communicating between threads using (concurrency-supporting) queues makes it relatively easy to write concurrency-bug-free code.
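The queue-based design described above can be sketched with asyncio and its built-in queues. This is a minimal toy version with assumed names (the `evaluate` lambda stands in for the neural network; a real implementation would add a timeout so partial batches still get evaluated):

```python
import asyncio

async def nn_worker(queue, batch_size, evaluate_batch):
    """Collect batch_size (state, future) pairs, evaluate them together,
    and resolve each game's future with its own result."""
    while True:
        batch = [await queue.get()]
        while len(batch) < batch_size:
            batch.append(await queue.get())
        states, futures = zip(*batch)
        for fut, result in zip(futures, evaluate_batch(list(states))):
            fut.set_result(result)

async def play_game(queue, state):
    """One self-play 'game': submit a request, then await the batched result."""
    fut = asyncio.get_running_loop().create_future()
    await queue.put((state, fut))
    return await fut

async def main():
    queue = asyncio.Queue()
    # Stand-in for the network: value prediction = state * 2.
    evaluate = lambda states: [s * 2 for s in states]
    worker = asyncio.create_task(nn_worker(queue, 4, evaluate))
    # Run at least batch_size concurrent games so batches always fill.
    results = await asyncio.gather(*(play_game(queue, s) for s in range(8)))
    worker.cancel()
    return results

results = asyncio.run(main())
```

With 8 concurrent games and a batch size of 4, the worker evaluates exactly two full batches and no request ever waits indefinitely, illustrating the deadlock-avoidance point above.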
{ "domain": "ai.stackexchange", "id": 4149, "tags": "reinforcement-learning, pytorch, performance, batch-learning, muzero" }
Gravity and Collision of two continuous mass distributions
Question: How could one explain the collision of two continuous mass distributions in view of gravitation (Newtonian and general relativity)? Answer: In the context of classical mechanics, your question is probably ill-formed. Since point masses have no physical extent, the gravitational force increases without bound as r approaches zero, which it will indeed do because they'll only collide when they're superposed. Infinite force on a finite mass implies infinite acceleration (F = ma), which implies infinite velocity, which is inconsistent with special relativity.
{ "domain": "physics.stackexchange", "id": 76, "tags": "general-relativity, classical-mechanics" }
Increase Evaporation from oceans
Question: I would like to ask you about evaporation from sea water. Can you increase evaporation by spraying seawater into the air? Is there any difference compared to freshwater spraying (aerosol particles)? Answer: Yes, you can increase evaporation by spraying the sea water as an aerosol, because this increases surface area. It's the same with fresh water too, but why would you want to do it? It wouldn't have any noticeable effect on the humidity of the general area. The only useful effect that I can see concerns fresh water only. Spraying the water from a pond or small lake as fine droplets would increase the oxygen content and benefit the fish. Hot weather depletes the oxygen in the water and sometimes fish become distressed, so if you want to save your valuable koi carp it might be a good idea. In the ocean it doesn't work; the sea is too vast.
{ "domain": "earthscience.stackexchange", "id": 1953, "tags": "meteorology, oceanography, water, climatology, evaporation" }
How to identify the middle of the first trough in a waveform?
Question: (Asked on CrossValidated before I found this community) I have audio files and timestamped fragment transcriptions that I have force-aligned with third-party software. Unfortunately the software places the end of each fragment right before the start of the next fragment, so if you try to play just one fragment from the audio file it will bleed over into the next fragment. Here's a visualization of the waveform: The red line is the automatically produced timestamp for the end of one fragment and the start of another. The left and right sides are 2 seconds of audio on either side of that timestamp. As you can see, it's placed right up against the next time audio starts to output. The blue and purple lines are two different algorithms I've used to try to identify the middle of the trough. In this case the blue one worked, but in others it doesn't: The data is a list of floating-point numbers ranging from 0 to 1. They represent a normalized and bucketed version of decoded 48 kHz mp3 audio. As you can see, in some cases there's a range of near-zero values. In others there are varying degrees of "low volume", but always a visually identifiable trough. Can anyone recommend a method or algorithm for working backwards from the red line and finding the middle of the first trough? I am a general software developer and unfortunately don't know much of the math that applies to this domain. I'm about 90% sure there's some method that does exactly this and I just need the name of it. Answer: Compute a moving average or a moving median $p_a[n]$ of the signal power, with a long enough window that it doesn't get affected by spurious dips in power. Find the point at which $p_a[n]$ dips below a certain threshold that you define, and the point where it goes back above that threshold. That's your "trough" window. Finding the middle point in that window is trivial. These things are trial-and-error based, especially with a hard threshold (you might want to experiment with a dynamic threshold based on previous median/average values too).
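A sketch of that recipe, walking backwards from the given timestamp (names, window length, and threshold are all assumptions of mine and would need tuning, as the answer notes):

```python
def trough_midpoint(power, end_index, window=50, threshold=0.05):
    """Walk backwards from end_index over a moving average of the power,
    find the span that stays below threshold, and return its midpoint.
    Returns None if no trough is found."""
    def avg(i):
        # Moving average over the `window` samples ending at index i.
        lo = max(0, i - window + 1)
        return sum(power[lo:i + 1]) / (i + 1 - lo)

    below_end = None
    for i in range(end_index, -1, -1):
        if avg(i) < threshold:
            if below_end is None:
                below_end = i                 # first dip below the threshold
        elif below_end is not None:
            return (i + 1 + below_end) // 2   # back above: trough is i+1..below_end
    return None if below_end is None else below_end // 2
```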
{ "domain": "dsp.stackexchange", "id": 12316, "tags": "signal-analysis, audio, mp3" }
Projectile acceleration with quadratic drag and coriolis force
Question: I'm wondering if these equations for the acceleration of a projectile in the x, y and z directions are correct, if we take into account the square law of resistance and the Coriolis force. The projectile would be launched at a certain angle from the northern hemisphere from north to south, with some initial velocity. Do I just combine them to get the equations of motion? Thanks for the help. Answer: I don't think your equations are consistent with your coordinate system. Let the projectile be launched only in the x-z plane. Then $\vec{\Omega} = (-\Omega \cos(\phi), 0, \Omega \sin(\phi))$ in your coordinate system ($\phi$ is the angle from the equator, x points south, y east, z outward), so $\dot{\vec{r}} \times \vec{\Omega} = (\dot{y} \Omega \sin(\phi), -\dot{z} \Omega \cos(\phi) - \dot{x} \Omega \sin(\phi), \dot{y} \Omega \cos(\phi))$. Hence, your Coriolis equations should be $$\ddot{x} = 2 \Omega v_y \sin(\phi)$$ $$\ddot{y} = -2\Omega(v_z \cos(\phi) + v_x \sin(\phi))$$ $$\ddot{z} = 2 \Omega v_y \cos(\phi)$$ Gravity should not be included in the Coriolis equations. Assuming that your $g$ is the observed free-fall acceleration (gravity force + centrifugal), that your object has the same cross-sectional area and drag coefficients in all three x, y, z directions (and $v_x$ in equation 3 should be $v_z$), then you can use Newton's Law $\ddot{\vec{r}} = \vec{g} + \text{ coriolis } + \vec{F}_{drag}/M$ for each coordinate (let $\gamma = C_d \rho A/(2M)$) to get your equations of motion: $$\ddot{x} = 2 \Omega v_y \sin(\phi) - \gamma |v| v_x$$ $$\ddot{y} = -2 \Omega (v_z \cos(\phi) + v_x \sin(\phi)) - \gamma |v| v_y$$ $$\ddot{z} = 2 \Omega v_y \cos(\phi) - g - \gamma |v| v_z$$
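The combined equations of motion can then be integrated numerically. A simple forward-Euler sketch (every parameter value below is a made-up example of mine; a real computation would use a finer step or RK4):

```python
import math

def simulate(v0, angle_deg, lat_deg, gamma=1e-4, g=9.81,
             omega=7.292e-5, dt=0.01):
    """Integrate the combined drag + Coriolis equations of motion.
    Coordinates: x south, y east, z up; launch in the x-z plane.
    gamma = C_d * rho * A / (2 M). Returns (x, y) at landing."""
    phi = math.radians(lat_deg)
    a = math.radians(angle_deg)
    vx, vy, vz = v0 * math.cos(a), 0.0, v0 * math.sin(a)
    x = y = z = 0.0
    while True:
        v = math.sqrt(vx**2 + vy**2 + vz**2)
        ax = 2 * omega * vy * math.sin(phi) - gamma * v * vx
        ay = -2 * omega * (vz * math.cos(phi) + vx * math.sin(phi)) - gamma * v * vy
        az = 2 * omega * vy * math.cos(phi) - g - gamma * v * vz
        vx += ax * dt; vy += ay * dt; vz += az * dt
        x += vx * dt; y += vy * dt; z += vz * dt
        if z <= 0.0 and vz < 0.0:   # back at launch height, descending
            return x, y
```

For a southward launch in the northern hemisphere the dominant $-2\Omega v_x \sin\phi$ term deflects the projectile to the right of its motion, i.e. west (negative y in this frame).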
{ "domain": "physics.stackexchange", "id": 89909, "tags": "newtonian-mechanics, projectile, drag, coriolis-effect" }
How do we actually feel special theory of relativity?
Question: Consider a thought experiment for understanding the special theory of relativity, in which a person is travelling in a spaceship at a very high speed with respect to an observer standing outside the spaceship. They both have mechanical clocks and both are measuring the speed of light, but the instruments to measure the speed of light are set up inside the spaceship only. An emitter is set up in the spaceship at point A and a photoreceiver at point B, such that A and B are aligned with the direction of motion of the spaceship. The person inside the spaceship starts the emitter, notes the time of reception at the receiver, and calculates the speed of light, since the distance AB is known; at the same time, the person outside calculates the speed of light by watching the same emitter and receiver while the inside person is calculating. Now I have a doubt: will the light hit the receiver at the same instant for both observers? If not, why not? Why does the person standing outside not feel that time has slowed down? Will the times shown by the two mechanical stopwatches be the same, since the clock outside will also slow down, and as it ticks slower than the clock inside, will both ultimately end up in the same position, as the outside one has more time as well? What does slowing down of time actually mean in this case? And what actually is time? Answer: Here are some comments which might help. If you are on a spaceship moving at 99% of the speed of light, your wristwatch will run slow but so will your brain, so to you in the spaceship, time seems to run normally. But now if you leave your wristwatch in the spaceship and take up a position outside the spaceship, standing still as the spaceship zooms by you at 99% of the speed of light with your wristwatch visible through a window, you will notice its hands moving very slowly. From this frame of reference, the clock in the speeding spaceship appears to be running slow.
Special relativity gives you the precise equations you need to figure out the "true" passage of time experienced by moving and nonmoving clocks.
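Those "precise equations" boil down to the Lorentz factor $\gamma = 1/\sqrt{1 - v^2/c^2}$: one tick of the moving clock takes $\gamma$ ticks of the stationary observer's clock. A quick sketch:

```python
import math

def gamma(beta):
    """Lorentz factor for speed v = beta * c."""
    return 1.0 / math.sqrt(1.0 - beta ** 2)

# At 99% of c, one second on the ship's clock takes about 7.09 seconds
# as measured by the outside observer.
dilation = gamma(0.99)
```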
{ "domain": "physics.stackexchange", "id": 79205, "tags": "special-relativity, reference-frames, observers" }
Partition function of 3D quantum harmonic oscillator
Question: The following discussions are for isotropic quantum harmonic oscillators, which have the energy eigenvalues $$E=\left(\sum_{i}^{N}n_i+\frac{N}{2}\right)\hbar \omega$$ where $N$ = number of dimensions. I have two interrelated doubts regarding the partition function calculation for the above system. First One oscillator in 3D is equivalent to three independent 1D oscillators, so that if the partition function (P.F.) for a 1D harmonic oscillator is $$Z_{1D}=\sum_{n=0}^{\infty} e^{-\beta E_n}$$ then for the 3D harmonic oscillator it becomes $$Z_{3D}=(Z_{1D})^3 \tag{1}$$ On the other hand, $$Z_{3D}=\sum_{n=0}^{\infty}g(n) e^{-\beta E_n} \tag{2}$$ where here $n=n_x+n_y+n_z$ and $g(n)=\frac{(n+1)(n+2)}{2}$. On plugging in the value of the degeneracy, it is not easy to show whether (1) and (2) are equal. I tried solving it but was not getting (2) to come out the same as (1). Shouldn't (1) and (2) be the same, or is there a problem with how I did the above steps? If they indeed are the same, what manipulation is needed to show that? Second If, after calculating the partition function for $N$ independent 1D harmonic oscillators, we try to take indistinguishability into account by dividing by a factor of $N!$, we get: $$Z_{1D}^{N}=\frac{1}{N!}(Z_{1D})^N$$ If we now have to do the same for $N$ 3D oscillators, we can instead consider $3N$ independent 1D oscillators, so that: $$Z_{3D}^{N}=\frac{1}{(3N)!}(Z_{1D})^{3N}$$ But I have read that it is instead correctly calculated as: $$Z_{3D}^{N}=\frac{1}{N!}((Z_{1D})^3)^N$$ which has a different denominator. Which one of the above is correct, and how? Answer: The partition function of the 1D harmonic oscillator is $$\eqalign{ {\cal Z}_{\rm 1D}&=\sum_{n=0}^{+\infty} e^{-\beta\hbar\omega\big(n+1/2\big)}\cr &=\lambda^{1/2}\sum_{n=0}^{+\infty}\lambda^n\cr &={\lambda^{1/2}\over 1-\lambda} }$$ where $\lambda=e^{-\beta\hbar\omega}$. Consider now the 3D harmonic oscillator.
First, one can note that the system is equivalent to three independent 1D harmonic oscillators: $${\cal Z}_{\rm 3D}=\big({\cal Z}_{\rm 1D}\big)^3 ={\lambda^{3/2}\over (1-\lambda)^3}$$ On the other hand, using your equation (2), we get after some algebra, $$\eqalign{ {\cal Z}_{\rm 3D}&=\sum_{n=0}^{+\infty} g(n) e^{-\beta\hbar\omega\big(n+3/2\big)}\cr &=\lambda^{3/2}\sum_{n=0}^{+\infty} {(n+2)(n+1)\over 2}\lambda^n\cr &={\lambda^{3/2}\over 2}{\partial^2\over\partial\lambda^2}\Big[ \sum_{n=0}^{+\infty} \lambda^{n+2}\Big] \cr &={\lambda^{3/2}\over 2}{\partial^2\over\partial\lambda^2}\Big[ \lambda^2\sum_{n=0}^{+\infty} \lambda^n\Big] \cr &={\lambda^{3/2}\over 2}{\partial^2\over\partial\lambda^2}\Big[ {\lambda^2\over 1-\lambda}\Big] \cr &=\lambda^{3/2}\Big[{1\over 1-\lambda}+{2\lambda\over(1-\lambda)^2} +{\lambda^2\over(1-\lambda)^3}\Big]\cr &={\lambda^{3/2}\over(1-\lambda)^3}\cr }$$ i.e. the same result, as expected! In the Einstein solid, one considers $N$ atoms oscillating around their equilibrium position. In this simple model, two atoms are not expected to exchange their positions, so the atoms should be considered as distinguishable. Each atom is reduced to a 3D harmonic oscillator, equivalent to three independent 1D harmonic oscillators associated with the three directions. They also should be considered as distinguishable. As a conclusion, you should not divide the partition function by $(3N)!$, nor by $N!$.
{ "domain": "physics.stackexchange", "id": 83995, "tags": "statistical-mechanics, harmonic-oscillator, partition-function" }
Data Processing equality variation
Question: Let $\rho_{AB}$ be a state and $T: B \rightarrow C$ be a CPTP map with $\sigma_{AC}= T(\rho_{AB})$. It is well known that $H_{\infty}(A \vert B)_{\rho} \geq H_{\infty}(A \vert C)_{\sigma}$ (aka data processing.) Furthermore, equality holds when $T=V$ is an isometry giving $H_{\infty}(A \vert B)_{\rho} = H_{\infty}(A \vert C)_{V\rho V^{\dagger}}$. My question is, does the result hold if we consider $V^{\dagger}$ instead of $V$, that is, is the following equality true? $$H_{\infty}(A \vert C)_{\sigma} = H_{\infty}(A \vert B)_{V^\dagger \sigma V}$$. Answer: No, the co-isometry map $\sigma \to V^\dagger \sigma V$ is not trace preserving. In the worst case you can have something like this. Take an isometry $V: \mathbb{C}^2 \rightarrow \mathbb{C}^3$ which just embeds $\mathbb{C}^2$ into $\mathbb{C}^3$ in the standard way i.e., $$ V = |0\rangle \langle 0 | + |1 \rangle \langle 1|. $$ This is an isometry $V^\dagger V = \mathbb{I}_{\mathbb{C}^2}$. But imagine now that we have a state $\sigma = |3 \rangle \langle 3|$ on $\mathbb{C}^3$, then $$ V^\dagger \sigma V = 0. $$ Which would result in an infinite min-entropy.
{ "domain": "quantumcomputing.stackexchange", "id": 3508, "tags": "quantum-operation, entropy, linear-algebra, min-entropy" }
Where do sublinear time algorithms fit in the picture of complexity theory?
Question: I know that there are complexity classes that are "below" P, like L or NL and stuff like that. But I was curious to know where sublinear time algorithms fit in this picture, and maybe how they relate to the P vs NP problem, or if they have their own version of something similar. I am aware that this question might lead to a list of open problems in sublinear time algorithms or something similar, which is a fine answer to my question. I am also curious to know what we know about this theory and what we don't know, or what open problems there are to tackle. [I tried adding the tag sublinear time algorithms but I am not sure if it exists. Feel free to add a tag that might help] Answer: Sublinear time algorithms (as well as sublinear space algorithms) are indeed an active area of research. The field of property testing covers much of the sublinear time regime, and I'll point you to the property testing blog to learn about recent work in the area. The question about complexity theory relating to this is very interesting, and is also an active area of research. P vs NP might not exactly be the right analogy here, but you're right that the boundary between computation and verification is something where sublinearity changes things. In particular, you can look at a PCP as "kind of" doing something sublinear, in that it only inspects a few bits of a long proof in order to check the prover's claim. More generally, there has been recent work on prover-verifier systems where the verifier runs in sublinear time. Some references that are worth perusing: Interactive proofs of proximity: delegating computation in sublinear time Non-Interactive Proofs of Proximity
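(My own toy illustration, not part of the answer.) A concrete example of the sublinear regime is the classic binary-search-based sortedness tester from property testing: it inspects only O(samples · log n) entries, always accepts a sorted array of distinct values, and rejects arrays that are far from sorted with high probability.

```python
import random

def sortedness_tester(a, samples=30, seed=0):
    """Spot-check whether `a` (assumed to contain distinct values) is sorted.
    Reads only O(samples * log n) entries of `a`."""
    rng = random.Random(seed)
    n = len(a)
    for _ in range(samples):
        i = rng.randrange(n)
        v = a[i]
        lo, hi = 0, n - 1
        found = None
        while lo <= hi:              # binary search for v, as if `a` were sorted
            mid = (lo + hi) // 2
            if a[mid] == v:
                found = mid
                break
            elif a[mid] < v:
                lo = mid + 1
            else:
                hi = mid - 1
        if found != i:               # search failed or landed elsewhere: not sorted
            return False
    return True

print(sortedness_tester(list(range(100))))        # True: sorted arrays always pass
print(sortedness_tester(list(range(100))[::-1]))  # rejected with overwhelming probability
```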
{ "domain": "cstheory.stackexchange", "id": 2447, "tags": "cc.complexity-theory, ds.algorithms" }
Understanding this metaphor involving e-mails, chaos and phase transitions
Question: I asked this question on the English Stack Exchange and people advised me to try to get the answer here. I can't get the idea of the metaphor in the last sentence of the following quote: Instead, email operates more like chaos theory: at some point the time/energy required crosses a critical threshold, an unpredictable, invisible boundary. It undergoes a phase transition, like ice changing to water and then to steam. The parameters change and the effects explode, cascading across the rest of your workflow with mounting consequences. Answer: As somebody who works in the field of chaos theory (for whatever that’s worth), I confirm Dmckee’s assessment: There is no reasonable relation to any concepts from chaos theory. There is, however, an attempt in your quote to relate this to the phenomenon of criticality – which is not chaos theory, but like chaos theory is related to the field of complex systems. Applied to e-mails, the concept of criticality can be summarised as follows: Let $φ$ be the average number of e-mails sent as a consequence of a given e-mail. If $φ<1$ and there is no mechanism (other than other e-mails) causing people to send e-mails, e-mails will eventually die out. If $φ>1$, there is an exponentially growing cascade of e-mails and we will eventually drown in them. Therefore, there is a critical point at $φ=1$ separating the two phases described above and marking a phase transition. However, I cannot see any reasonable connection of the above concept to your quote. The time a given person spends on e-mails does not affect $φ$ in general and thus there is no critical point to cross when increasing the time you spend on e-mails. Moreover, this is neither unpredictable nor invisible nor is there a cascade involving the rest of one’s workflow. Finally, I fail to see any other way to apply the concept of criticality to e-mails.
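(My own illustration, not part of the answer.) The criticality picture can be made concrete with a minimal branching-process simulation: each e-mail triggers a geometrically distributed number of replies with mean $φ$, and the cascade size changes qualitatively as $φ$ crosses 1.

```python
import random

def cascade_size(phi, cap=10_000, rng=None):
    """Total number of e-mails descending from one seed e-mail, when each
    e-mail triggers a geometric number of replies with mean phi."""
    rng = rng or random.Random(0)
    p = phi / (1 + phi)              # Geometric(p) offspring: mean p/(1-p) = phi
    frontier, total = 1, 1
    while frontier and total < cap:
        replies = 0
        for _ in range(frontier):
            while rng.random() < p:  # draw one geometric offspring count
                replies += 1
        total += replies
        frontier = replies
    return total

def mean_cascade(phi, trials=500, seed=0):
    rng = random.Random(seed)
    return sum(cascade_size(phi, rng=rng) for _ in range(trials)) / trials

print(mean_cascade(0.5))             # subcritical: ~2 (theory: 1/(1 - phi))
print(mean_cascade(1.5, trials=50))  # supercritical: cascades explode up to the cap
```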
{ "domain": "physics.stackexchange", "id": 28155, "tags": "thermodynamics, soft-question, phase-transition, chaos-theory, critical-phenomena" }
Formulas related to "Cooling a cup of coffee with help of a spoon"
Question: I was recently going through some of the top voted questions on the thermodynamics tab. And I came across Cooling a cup of coffee with help of a spoon. I found this question really interesting. In fact, I plan on carrying out my own experiment related to it. However, to write my experiments and results in a formal research paper for my school, I must have an investigation involving a few independent variables and one dependent variable. Therefore, I cannot work with comparing methods of cooling coffee, as they are not quantifiable. My dependent variable would be time, since I want to find the fastest method of cooling coffee. What are the dependent variables I can use? It would be great if you guys could provide some relevant thermodynamics formula related to cooling rate which can be applied to this scenario of cooling a cup. I have thought about optimal stirring speed or area of stirrer, but I have yet to find a mathematical relationship between them and the time taken for cooling. Note: Although this question is tagged as homework, it is not. It is just a request for help in an experiment I may carry out in school. Note: I am not restricting methods of cooling to only methods using spoons. The method used for cooling a cup of coffee does not need to include a spoon. Answer: It would be great if you guys could provide some relevant thermodynamics formula related to cooling rate which can be applied to this scenario of cooling a cup. The 'go-to' law for this kind of cooling is Newton's Cooling Law: $$\boxed{\frac{\text{d}Q}{\text{d}t}=-hA\Delta T}$$ where: $\frac{\text{d}Q}{\text{d}t}$ is the heat energy loss per unit of time (aka the heat flux) of the cup. Obviously the higher the heat flux, the faster the cup will cool down (faster temperature loss). $\Delta T$ is the temperature difference between the 'hot' object (the cup) and the 'cold' object (the surrounding air).
$A$: the surface area shared between the 'hot' object (the cup) and the 'cold' object (the surrounding air). The heat flows from hot to cold through that surface. $h$: the heat transfer coefficient. Having understood that in order to maximize cooling we need to maximize the product of the three factors on the RHS of the equation, we can now suggest some factors to investigate. $\Delta T$ can be maximized by minimizing the temperature of the air. Experiment by cooling the cup in the refrigerator vs. normal ambient air. It is well known that the heat transfer coefficient $h$ can be increased by introducing turbulence, both inside the cup and outside of it. Experiment with controlled stirring, perhaps at different speeds. Experiment with air fans blowing on the cup. Greater surface area promotes heat flux and thus cooling rate. Note that a sphere has the lowest surface area to volume ratio of all regular shapes. Elongated cylindrical shapes will have higher surface area to volume ratios. Consider experimenting with cooling fins to boost $A$. With these suggestions, carry out some screening experiments to identify the most important factors affecting cooling rate. Finally, by combining those, the fastest cooling rate scenario can be determined. This kind of investigation is very well suited for the application of Factorial Experimental Designs (FED). In this approach factors suspected to influence one or more measured responses ('cooling rate' in our case) are identified. To each factor is assigned two values (symbolically represented by $-$ and $+$). We've identified 4 factors and can assign two values to each of them. This allows the running of a so-called $2^{4-1}$ FED, which requires a mere $8$ experiment runs. The FED matrix lists, for each of the $8$ runs, the $-$/$+$ level of each factor ($A$, $B$, $C$, etc. represent the factors). Once the $8$ runs have been performed, the effects of each factor, as well as first order interactions (e.g. $AC$), are easily calculated.
The advantage of this kind of experimental design is that it yields a high degree of information compared to one-variable-at-a-time designs.
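(My own sketch, not part of the answer.) The $2^{4-1}$ design matrix can be generated programmatically; note that the generator $D = ABC$ is the standard choice for a half fraction, not something the answer specifies.

```python
from itertools import product

# Build a 2^(4-1) fractional factorial design: a full 2^3 design in A, B, C,
# with the fourth factor aliased as D = A*B*C (standard half-fraction generator).
levels = (-1, +1)
rows = []
for a, b, c in product(levels, repeat=3):
    d = a * b * c
    rows.append((a, b, c, d))

print(" run   A   B   C  D=ABC")
for i, (a, b, c, d) in enumerate(rows, 1):
    print(f"{i:4d} {a:+4d}{b:+4d}{c:+4d}{d:+6d}")

assert len(rows) == 8  # a mere 8 runs instead of the full 2^4 = 16
```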
{ "domain": "physics.stackexchange", "id": 60866, "tags": "homework-and-exercises, thermodynamics, temperature, home-experiment" }
Finding maximal factorization of regular languages
Question: Let language $\mathcal{L} \subseteq \Sigma^*$ be regular. A factorization of $\mathcal{L}$ is a maximal pair $(X,Y)$ of sets of words with $X \cdot Y \subseteq \mathcal{L}$ and $X \neq \emptyset \neq Y$, where $X \cdot Y = \{xy \mid x \in X, y \in Y\}$. $(X,Y)$ is maximal if for each pair $(X',Y') \neq (X,Y)$ with $X'\cdot Y' \subseteq \mathcal{L} $ either $X \not \subseteq X'$ or $Y \not \subseteq Y'$. Is there a simple procedure to find out which pairs are maximal? Example: Let $\mathcal{L} = \Sigma^∗ab \Sigma^∗$. The set $F = \{u, v, w\}$ is computed: $u =(\Sigma^∗, \Sigma^∗ab\Sigma^∗)$ $v = (\Sigma^∗a\Sigma^∗, \Sigma^∗b\Sigma^∗)$ $w = (\Sigma^∗ab\Sigma^∗, \Sigma^∗) $ where $\Sigma = \{a,b\}$. Another example: $\Sigma = \{a, b\}$ and $\mathcal{L} = \Sigma^*a\Sigma$ Factorization set $F = \{q, r, s, t\}$ with $q = (\Sigma^*, \mathcal{L})$ $r = (\Sigma^*a, \Sigma + \mathcal{L})$ $s = (\Sigma^*aa, \epsilon + \Sigma + \mathcal{L})$ $t = (\mathcal{L}, \epsilon + \mathcal{L}) $ Answer: As suggested in the comments to the question, I will try to give an (unfortunately partial) answer to the question, at least to the extent that I have understood the problem myself (this implies that you may well find mistakes, and if you find a way to more briefly or clearly explain one of the below points, feel free to edit the answer accordingly): First, one should note that we do not actually have to compute the universal automaton of a language if we want to compute the factorizations of a language. From the paper mentioned in my comment¹, there is a 1-1 correspondence between left and right factors of a regular language, that is, given a left factor of the language, the corresponding right factor is uniquely determined and vice versa. More precisely, we have the following: Let $(X,Y)$ be a factorization of $L$.
Then $$Y = \bigcap_{x \in X}x^{-1}L, \qquad X = \bigcap_{y \in Y}Ly^{-1},$$ that is, any left factor is an intersection of right quotients, and any right factor is an intersection of left quotients. Conversely, any intersection of left quotients of $L$ is a right factor of $L$, and any intersection of right quotients of $L$ is a left factor of $L$. Note that for a regular language, there is only a finite set of left and right quotients, and thus our problem reduces to computing the left and right quotients of the language, and then computing their $\cap$-stable closure, that is, a minimal superset of the quotients that is closed under intersection. These are then precisely the right factors and left factors, and then it is usually easy to see for which pairs $X \cdot Y \subseteq L$.
For $L$, we see that our Nerode-equivalence classes are $N_1$, the set of words not containing $ab$ as a factor and ending with $a$, $N_2$, the set of words ending with $b$ and not containing $ab$ as a factor, and $N_3$, the set of words containing $ab$ as a factor, that is, $N_3 = L$. They can be augmented with the following sets (that is, these are the left quotients of the words in the respective classes): $S_1 = x^{-1}L$ for $x$ in $N_1$ consists of all words in $L$ (any word can be augmented with a word containing $ab$ as a factor and thus becomes a word in $L$) and $b\Sigma^\ast$, that is $S_1 = L \cup b\Sigma^\ast$ $S_2 = x^{-1}L$ for $x$ in $N_2$ is the language itself, that is, $S_2 = L$ and $S_3 = x^{-1}L$ for $x$ in $N_3$ is obviously $\Sigma^\ast$. That is, we have found three right factors of $L$. As $S_2\subset S_1\subset S_3$, their $\cap$-stable closure is trivially $\{S_1,S_2,S_3\}$, and those are then precisely the right factors. Hence, our factorization set $\mathcal{F}_L$ is of the form $(P_1,S_1),(P_2,S_2),(P_3,S_3)$. Now, for the left factors $P_i$, we use the equations of the beginning of this answer: $$ P_i = \bigcap_{x\in S_i} Lx^{-1}. $$ For $P_1$, this yields $L \cup \Sigma^\ast a$, for $P_2$ we get $\Sigma^\ast$ and for $P_3$, we obtain $L$. You can see this by inspection (the most popular excuse for being too lazy to state a formal proof) or by explicitly computing the right quotients (which is fairly analogous, although not completely, to computing the left quotients). Our factorizations are thus given by $\mathcal{F}_L = \{u,v,w\}$ where $u = (P_1,S_1) = (\Sigma^\ast ab \Sigma^\ast \cup \Sigma^\ast a, \Sigma^\ast ab \Sigma^\ast \cup b\Sigma^\ast)$ $v = (P_2, S_2) = (\Sigma^\ast, \Sigma^\ast ab \Sigma^\ast)$ and $w = (P_3, S_3) = (\Sigma^\ast ab \Sigma^\ast, \Sigma^\ast)$ Summary To summarize (as you were asking for a simple procedure): For computing the factorizations of a language $L$, first compute the left quotients of $L$.
You can do so, in the language of the paper, by constructing a minimal DFA $A$ for $L$ and then for each state $q$ in $A$ (corresponding, as a Nerode-equivalence class, to a left quotient) compute the future of $q$ in $A$, thus obtaining one left quotient of the language for each state. The collection of left quotients obtained in this way yields, in general, a subset $S_R$ of the right factors. Compute then the $\cap$-stable closure of $S_R$, which can be done in practice by forming the intersection of any subset of $S_R$ and adding any subset obtained in this way to $S_R$. The set $S_R$ together with all the intersections from the previous step is then the set of right factors of $L$. In order to obtain the left factors, we can compute the right quotients of $L$. These are sets of the form $Ly^{-1}$, for $y\in \Sigma^\ast$. Now, these are again only finitely many, and for $x\neq y$, we have $Ly^{-1} = Lx^{-1}$ if and only if for all $u\in \Sigma^\ast$, $ux \in L \Leftrightarrow uy \in L$, that is, they can be appended to words in the language with precisely the same set of strings. To compute $Lx^{-1}$, consider those states $q$ in $A$ such that $x$ is contained in the future of $q$. The union of the pasts of those states constitutes one right quotient. Find all these quotients. You know you are done when you have found as many left factors as you have right factors. Find those pairs of left and right factors $X,Y$ such that $X\cdot Y \subseteq L$. This is $\mathcal{F}_L$. ¹ The Universal Automaton by Lombardy and Sakarovitch (in Texts in Logic and Games, Vol 2: Logic and Automata: History and Perspectives, 2007)
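(My own code, not part of the answer.) A small computational sketch of the summary procedure; it approximates languages by their words up to a length bound, which is enough to separate the quotients of this worked example $L = \Sigma^\ast ab \Sigma^\ast$.

```python
from itertools import product

SIGMA = "ab"
MAXLEN = 4  # truncation bound; arbitrary, but large enough for this example

def words_upto(k):
    for n in range(k + 1):
        for t in product(SIGMA, repeat=n):
            yield "".join(t)

def in_L(w):                          # L = Sigma* ab Sigma*
    return "ab" in w

WORDS = list(words_upto(MAXLEN))

# Left quotients x^{-1}L, truncated to words of length <= MAXLEN;
# distinct quotients correspond to Nerode classes of L (here: N1, N2, N3).
left_quotients = {frozenset(u for u in WORDS if in_L(x + u)) for x in WORDS}

# cap-stable closure: close the set of quotients under intersection
def closure(sets):
    sets = set(sets)
    changed = True
    while changed:
        changed = False
        for s in list(sets):
            for t in list(sets):
                st = s & t
                if st and st not in sets:
                    sets.add(st)
                    changed = True
    return sets

right_factors = closure(left_quotients)   # here a chain, so already closed

# For each right factor Y, the matching left factor is X = intersection of L y^{-1}
def left_factor(Y):
    X = set(WORDS)
    for y in Y:
        X &= {x for x in WORDS if in_L(x + y)}
    return frozenset(X)

factorizations = [(left_factor(Y), Y) for Y in right_factors]
print(len(factorizations))   # 3 factorizations, matching u, v, w above
```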
{ "domain": "cs.stackexchange", "id": 1661, "tags": "algorithms, regular-languages, optimization" }
How to calculate the number of folds present in a protein
Question: Suppose I have a number of PDB files of proteins. How can I get the number of folds present in these proteins? Is the fold count derivable from the PDB files? If so, how? Answer: There are various ways that you could do this. CATH is a hierarchical classification system SCOP is another such system with a different hierarchy PTGL is the protein topology graph library Tops motif will scan PDB files and match patterns against them How you actually apply these tools or look up within these systems is documented on their websites. However, it depends on what you want to do - look up published structures, or new experimental data. I should point out that writing software from scratch to determine the fold of a protein may be tricky.
{ "domain": "biology.stackexchange", "id": 6052, "tags": "biochemistry, proteins, structural-biology, protein-folding, protein-structure" }
Validation generator in Autoencoder returning NaN
Question: I am trying to build a fairly simple autoencoder using Keras on the OpenImages dataset. Here is the architecture of the ae: Layer (type) Output Shape Param # ================================================================= conv3d_1 (SeparableConv2D) (None, 64, 64, 64) 283 _________________________________________________________________ max_pool_1 (MaxPooling2D) (None, 32, 32, 64) 0 _________________________________________________________________ batch_norm_1 (BatchNormaliza (None, 32, 32, 64) 256 _________________________________________________________________ sep_conv2d_2 (SeparableConv2 (None, 32, 32, 32) 2656 _________________________________________________________________ max_pool_2 (MaxPooling2D) (None, 16, 16, 32) 0 _________________________________________________________________ batch_norm_2 (BatchNormaliza (None, 16, 16, 32) 128 _________________________________________________________________ sep_conv2d_3 (SeparableConv2 (None, 16, 16, 32) 1344 _________________________________________________________________ max_pool_3 (MaxPooling2D) (None, 8, 8, 32) 0 _________________________________________________________________ batch_norm_3 (BatchNormaliza (None, 8, 8, 32) 128 _________________________________________________________________ flatten (Flatten) (None, 2048) 0 _________________________________________________________________ bottleneck (Dense) (None, 64) 131136 _________________________________________________________________ reshape (Reshape) (None, 8, 8, 1) 0 _________________________________________________________________ conv_2d_transpose_1 (Conv2DT (None, 16, 16, 32) 320 _________________________________________________________________ batch_norm_4 (BatchNormaliza (None, 16, 16, 32) 128 _________________________________________________________________ conv_2d_transpose_2 (Conv2DT (None, 32, 32, 32) 9248 _________________________________________________________________ batch_norm_5 (BatchNormaliza (None, 32, 32, 32) 128 
_________________________________________________________________ conv_2d_transpose_3 (Conv2DT (None, 64, 64, 64) 18496 _________________________________________________________________ batch_norm_6 (BatchNormaliza (None, 64, 64, 64) 256 _________________________________________________________________ sep_conv2d_4 (SeparableConv2 (None, 64, 64, 3) 771 ================================================================= Total params: 165,278 Trainable params: 164,766 Non-trainable params: 512 I am then defining generators that flow from a directory where I have downloaded the images: train_data_dir = 'open_images/train/' validation_data_dir = 'open_images/validation/' batch_size = 128 train_datagen = ImageDataGenerator(rescale=1./255) test_datagen = ImageDataGenerator(rescale=1./255) train_generator = train_datagen.flow_from_directory( train_data_dir, target_size=(64, 64), batch_size=batch_size, class_mode=None) validation_generator = test_datagen.flow_from_directory( validation_data_dir, target_size=(64, 64), batch_size=batch_size, class_mode=None) And here is the model build step: def fixed_generator(generator): for batch in generator: yield (batch, batch) num_epochs = 10 steps_per_epoch = 120 autoencoder.fit_generator( fixed_generator(train_generator), steps_per_epoch=steps_per_epoch, epochs=num_epochs, validation_data=fixed_generator(validation_generator), validation_steps=100 ) When I run this code it seems like something is going wrong with the validation step because it only returns NaN: Epoch 1/10 120/120 [==============================] - 241s 2s/step - loss: 0.0468 - val_loss: nan Epoch 2/10 120/120 [==============================] - 239s 2s/step - loss: 0.0278 - val_loss: nan Epoch 3/10 120/120 [==============================] - 240s 2s/step - loss: 0.0248 - val_loss: nan Epoch 4/10 120/120 [==============================] - 241s 2s/step - loss: 0.0234 - val_loss: nan Epoch 5/10 120/120 [==============================] - 240s 2s/step - loss: 0.0226 - 
val_loss: nan Epoch 6/10 120/120 [==============================] - 241s 2s/step - loss: 0.0221 - val_loss: nan Epoch 7/10 120/120 [==============================] - 242s 2s/step - loss: 0.0217 - val_loss: nan Epoch 8/10 120/120 [==============================] - 240s 2s/step - loss: 0.0213 - val_loss: nan Epoch 9/10 120/120 [==============================] - 240s 2s/step - loss: 0.0210 - val_loss: nan Epoch 10/10 120/120 [==============================] - 242s 2s/step - loss: 0.0207 - val_loss: nan Also when the validation generator code is run it prints: Found 0 images belonging to 0 classes. There are definitely images in that directory though. Any idea what might be going on? Edit: If you want to be convinced there are images in the folder... ubuntu@ip-172-16-1-35:~$ ls -l open_images/validation/ | head total 12661044 -rw-r--r-- 1 ubuntu ubuntu 290621 Jul 10 2018 0001eeaf4aed83f9.jpg -rw-r--r-- 1 ubuntu ubuntu 375363 Jul 10 2018 0004886b7d043cfd.jpg -rw-r--r-- 1 ubuntu ubuntu 462817 Jul 10 2018 000595fe6fee6369.jpg -rw-r--r-- 1 ubuntu ubuntu 302326 Jul 10 2018 00075905539074f2.jpg -rw-r--r-- 1 ubuntu ubuntu 970275 Jul 10 2018 0007cebe1b2ba653.jpg -rw-r--r-- 1 ubuntu ubuntu 614095 Jul 10 2018 0007d6cf88afaa4a.jpg -rw-r--r-- 1 ubuntu ubuntu 415082 Jul 10 2018 0008e425fb49a2bf.jpg -rw-r--r-- 1 ubuntu ubuntu 359851 Jul 10 2018 0009bad4d8539bb4.jpg -rw-r--r-- 1 ubuntu ubuntu 186452 Jul 10 2018 000a045a0715d64d.jpg Answer: please make your data like this format to work with flow_from_directory.
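(Expanding on the answer: Keras' `flow_from_directory` requires at least one class subdirectory inside the directory you point it at — it scans `validation_data_dir/<class>/*.jpg`, not loose files. With the images directly in `open_images/validation/` it finds "0 images belonging to 0 classes", and the empty validation generator is what makes `val_loss` come out as NaN.) A minimal way to do the restructuring, where the class folder name `images` is my arbitrary choice:

```python
import os
import shutil

def move_into_class_subdir(data_dir, class_name="images"):
    """Move loose image files in `data_dir` into `data_dir/<class_name>/` so
    that flow_from_directory can find them (it requires one folder per class,
    even for a class-less autoencoder setup with class_mode=None)."""
    target = os.path.join(data_dir, class_name)
    os.makedirs(target, exist_ok=True)
    for name in os.listdir(data_dir):
        src = os.path.join(data_dir, name)
        if os.path.isfile(src):          # skip the target directory itself
            shutil.move(src, os.path.join(target, name))
    return target

# move_into_class_subdir('open_images/train/')
# move_into_class_subdir('open_images/validation/')
```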
{ "domain": "datascience.stackexchange", "id": 6213, "tags": "deep-learning, keras, tensorflow, autoencoder" }
Array-based queue implementation using C
Question: I wrote this header for generic use of a queue. The one thing I'm wondering is if I understood the usage of void*. I hope that somebody can teach me some conventions for coding in C. /* * array-based queue implementation by using void* * written by kidkkr * May 6 '16 */ #ifndef QUEUE_H #define QUEUE_H #define QUEUE_CAPACITY 10 #include <stdio.h> typedef struct { void* data[QUEUE_CAPACITY]; int head; int tail; int size; } Queue; void initQueue(Queue* pQueue) { pQueue->head = 0; pQueue->tail = -1; pQueue->size = 0; } void enqueue(Queue* pQueue, void* item) { if (pQueue->size == QUEUE_CAPACITY) // when queue is full { printf("Queue is full\n"); return; } else { (pQueue->tail)++; (pQueue->tail) %= QUEUE_CAPACITY; (pQueue->data)[pQueue->tail] = item; (pQueue->size)++; } } void* dequeue(Queue* pQueue) { // Return NULL when queue is empty // Return (void*)item at the head otherwise. void* item = NULL; if (isEmpty(&pQueue)) { printf("Queue is empty\n"); } else { item = (pQueue->data)[pQueue->head]; (pQueue->head)++; (pQueue->head) %= QUEUE_CAPACITY; (pQueue->size)--; } return item; } int isEmpty(Queue* pQueue) { return pQueue->size == 0; } #endif Answer: Your code looks nice and nifty. However, I have a couple of suggestions. 1 I would change the type of head, tail and size from int to size_t. 2 void initQueue(Queue* pQueue) { pQueue->head = 0; pQueue->tail = -1; pQueue->size = 0; } The semantics is that you first update the value of tail and then use it as an index at which you enqueue a new data item. If you specify that you first insert at tail and only after that update it, you effectively get rid of negative value range, i.e., size_t will do just fine. 3 In all functions operating on the queue you should have a sanity check that the input queue pointer is not NULL. 4 void enqueue(Queue* pQueue, void* item) { if (pQueue->size == QUEUE_CAPACITY) // when queue is full { printf("Queue is full\n"); return; } else ...
I would #include <stdbool.h> and return true if the enqueuing was successful and false otherwise. Also, it is bad practice to print to standard output in a data structure function/algorithm. 5 For debugging purposes you could roll a separate function that neatly prints the contents of your queue. Summa summarum All in all, I had this in mind: queue.h #ifndef QUEUE_H #define QUEUE_H #include <stdbool.h> #include <stdio.h> #define QUEUE_CAPACITY 10 typedef struct { void* data[QUEUE_CAPACITY]; size_t head; size_t tail; size_t size; } Queue; bool initQueue(Queue* pQueue) { if (!pQueue) { return false; } pQueue->head = 0; pQueue->tail = 0; pQueue->size = 0; return true; } int isEmpty(Queue* pQueue) { return pQueue && pQueue->size == 0; } bool enqueue(Queue* pQueue, void* item) { if (!pQueue || pQueue->size == QUEUE_CAPACITY) // when queue is full { return false; } pQueue->data[pQueue->tail] = item; pQueue->tail = (pQueue->tail + 1) % QUEUE_CAPACITY; pQueue->size++; return true; } void* dequeue(Queue* pQueue) { // Return NULL when queue is empty // Return (void*)item at the head otherwise.
void* item; if (!pQueue || isEmpty(pQueue)) { return NULL; } item = pQueue->data[pQueue->head]; pQueue->head = (pQueue->head + 1) % QUEUE_CAPACITY; pQueue->size--; return item; } void debugPrint(Queue* pQueue) { size_t index; size_t tmp; if (!pQueue) { printf("null"); return; } printf("["); if (pQueue->size >= 1) { printf("%d", (int) pQueue->data[pQueue->head]); } for (index = 1; index < pQueue->size; ++index) { tmp = (pQueue->head + index) % QUEUE_CAPACITY; printf(", %d", (int) pQueue->data[tmp]); } printf("]"); } #endif main.c #include "queue.h" int main(int argc, const char * argv[]) { int i; Queue q; initQueue(&q); for (i = 0; i < QUEUE_CAPACITY; ++i) { debugPrint(&q); puts(""); enqueue(&q, (void*) i); } for (i = QUEUE_CAPACITY; i < 3 * QUEUE_CAPACITY; ++i) { debugPrint(&q); puts(""); dequeue(&q); enqueue(&q, (void*) i); } for (i = 0; i < QUEUE_CAPACITY; ++i) { debugPrint(&q); puts(""); dequeue(&q); } while (!isEmpty(&q)) { debugPrint(&q); dequeue(&q); } debugPrint(&q); return 0; }
{ "domain": "codereview.stackexchange", "id": 19994, "tags": "c, queue" }
Are there any probabilistic models for the likelihood of finding a rogue planet closer to us than Proxima Centauri?
Question: There are some articles that claim there could be more rogue planets than stars in our galaxy, such as this one. Now if this were true one might expect that there is a rogue planet closer to the Earth than the star Proxima Centauri. Have any models been built regarding the probability of this? And/or perhaps a curve of mass of rogue object, distance from Earth, and probability? Answer: I've found a paper(1) with estimates based on extrapolation of known data for stellar-mass objects toward smaller values, using a power law probability distribution: Sumi et al.[4] used microlensing data to estimate the ratio of the number density of Jupiter-mass unbound exoplanets, nj , and the number density of main sequence stars n⋆, yielding an estimate nJ / n⋆ = 1.9(+1.3/−0.8) for their power law model. The stellar number density is well known from luminosity data [9], yielding an estimate for nJ , nJ = 6.7(+6.4/−3.0) × 10^−3 ly^−3 (1) and thus an estimate for the expected mean distance to the nearest Jupiter mass nomadic planet, DJ , with DJ = 3.28(+0.7/−0.6) ly , (2) the mean minimum distance being ∼77% of the distance to Proxima Centauri. The error margin is huge, especially when extending the model to poorly constrained low mass objects: In order to predict the number densities of nomadic exoplanets with masses much smaller than that of Jupiter it is necessary to extrapolate the power law density models into mass regimes not yet well constrained by microlensing [13], leading to the three order of magnitude uncertainty in the number density of Earth-mass nomads in Figure 1 and the factor of almost 6 uncertainty in the distance to the nearest Earth-mass nomad seen in Figure 2. Then their model points to these expected minimum distances, for the closest object of a given mass, taking the mass of an equivalent solar system object for comparison.
If these estimates are correct, we should expect many planetary-mass objects to be found closer to us than Proxima Centauri:

Analog         Mass (M_Jupiter)   Expected Rmin (ly)
Earth          0.003              1.85 (+2.99/−1.01)
Uranus         0.046              2.41 (+2.02/−0.99)
Neptune        0.054              2.45 (+1.95/−0.99)
Saturn         0.299              2.91 (+1.24/−0.84)
Jupiter        1                  3.28 (+0.71/−0.65)
super-Jupiter  10                 4.52 (+1.16/−1.61)

(The paper also presents these estimates in graph form.) References: (1) Eubanks, T. M. (2014). Nomadic Planets Near the Solar System.
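(A back-of-envelope check of my own, not from the answer.) The ~3.3 ly figure for Jupiter-mass nomads follows directly from the quoted number density via the characteristic nearest-neighbour radius $(\tfrac{4}{3}\pi n)^{-1/3}$:

```python
import math

n_jupiter = 6.7e-3   # number density of Jupiter-mass nomads, per cubic light-year (eq. 1)

# Radius of a sphere that contains, on average, one object:
# (4/3) * pi * n * R^3 = 1  =>  R = (4*pi*n/3)^(-1/3)
r_nearest = (4 * math.pi * n_jupiter / 3) ** (-1 / 3)

print(f"{r_nearest:.2f} ly")   # ~3.29 ly, consistent with the paper's 3.28 ly
d_proxima = 4.246               # distance to Proxima Centauri in light-years
print(f"{100 * r_nearest / d_proxima:.1f}% of the way to Proxima Centauri")  # the ~77%
```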
{ "domain": "astronomy.stackexchange", "id": 6537, "tags": "rogue-planet" }
What is meant by precision?
Question: I've found two different meanings of precision: Meaning 1 (from a Grade 11 Physics textbook, India): Precision tells us to what resolution or limit a quantity is measured. Meaning 2 (from a Grade 11 Chemistry textbook, India): Precision refers to the closeness of various measurements for the same quantity. It's really confusing to have two different meanings for the same term. I think textbooks can't be wrong. Does this mean that precision in physics is different from precision in chemistry?
{ "domain": "physics.stackexchange", "id": 81584, "tags": "terminology, definition, measurements, error-analysis" }
Is the temperature of the universe rising?
Question: I know the temperature of the universe is decreasing due to its expansion after the big bang, but I came across this article in AOP (please note I don't have access to the journal, so I have just read the abstract), and after reading it I am quite confused. A news outlet states that: The study by the Ohio State University Center for Cosmology and AstroParticle Physics shows that the "universe is getting hotter". This big revelation came amid the scientists' restless examinations of the thermal history of the universe over the last 10 billion years. It has also been stated that:
[Edit: check this] Answer: There is a difference between the "temperature of the universe" and the temperature of the cosmic microwave background radiation (CMBR). The former can be changed by physical processes going on in the universe, for example the conversion of gravitational potential energy, or the release of nuclear binding energy, into the thermal energy of particles. The CMBR temperature on the other hand is fixed when it is formed and is modified only by the expansion history of the universe; it represents the temperature of a blackbody radiator with the same spectrum as the CMBR. In the study you refer to, the "temperature of the universe" is the density-weighted mean electron temperature, and is of order $10^6$ K. These electrons have been heated via a variety of physical processes, ultimately linked to the formation of clusters of galaxies, galaxies and stars (for example supernovae, or collisionless shock heating in gravitationally accelerated flows - Kravtsov & Yepes 2000; Bykov et al. 2008), and have cooling times that are long compared with the age of the universe. In contrast, the CMBR spectrum was formed about 400,000 years after the big bang, was essentially fixed at that point (at around 3000 K), and is only modified subsequently by the universe's expansion history, which stretches the wavelengths leading to a cooling temperature (currently 2.7 K). The two temperatures would have been similar around the epoch when the CMBR was formed but have diverged since then because the matter became effectively transparent to the CMBR and decoupled. According to the paper that is referred to in the question, the density-weighted mean electron temperature has increased by about a factor of 3 between $z=1$ and the present day; from $7\times 10^5$ K to $2\times 10^6$ K. Over the same period, the CMBR would have cooled from 5.4 K to 2.7 K.
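For concreteness, the CMBR figures in the last sentence follow the standard redshift scaling of the background temperature:

$$ T_{\mathrm{CMBR}}(z) = T_0\,(1+z), \qquad T_{\mathrm{CMBR}}(z=1) = 2.7\ \mathrm{K}\times(1+1) = 5.4\ \mathrm{K}, $$

with $T_0 = 2.7$ K the present-day value.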
{ "domain": "physics.stackexchange", "id": 73325, "tags": "cosmology, temperature, space-expansion, universe, observable-universe" }
GUI based control in ROS
Question: I am trying to develop an application in which a robot has multiple tasks. For example task 1 can be "move left arm", task 2 can be "move both the arms", task 3 can be "start recording from the camera" etc. I have written the code for doing all these tasks in ROS using c++ and python. At present, I can visualize the robot and camera output inside RViz. However, I want to create some GUI elements such as Buttons. Button 1 can be "move left arm", similarly button 2 can be "move both the arms" etc. I am not able to find documentation on adding GUI elements in RViz. I am looking for suggestions and references. Thank you very much. Originally posted by ravijoshi on ROS Answers with karma: 1744 on 2017-11-07 Post score: 0 Answer: You are going to need to create a custom rviz plugin. There are several tutorials. I think you'll likely want to first look at the New Dockable Panel tutorial, which shows how to make a TeleopPanel as an example: http://docs.ros.org/lunar/api/rviz_plugin_tutorials/html/panel_plugin_tutorial.html Originally posted by tfoote with karma: 58457 on 2018-01-11 This answer was ACCEPTED on the original site Post score: 2 Original comments Comment by ravijoshi on 2018-01-14: Thank you very much. Meanwhile, I shifted to RQT and developed a custom plugin. It was quite easy as it was based on Qt in Python. I will surely check your suggestion too.
{ "domain": "robotics.stackexchange", "id": 29299, "tags": "rviz" }
Enthalpy & compressor work for an ideal gas
Question: The isentropic compressor work for an ideal gas is given by: γRΔT/(γ - 1). This equation is usually reduced to CpΔT, since Cp = γR/(γ - 1). So, even though there is a pressure rise in the compressor, we are calculating work by using the specific heat at constant pressure, which is fine if we only look at this as a slight mathematical manipulation. But another way to view this situation would be that since work for steady flow devices such as compressors = ΔH (change in enthalpy) & since enthalpy is a state function, therefore it is independent of the path chosen and depends only on end-states. Therefore, the use of the constant pressure heating path to calculate the work of the compressor may be justified, but the problem is that in the constant pressure heating path, the end states would not be the same as those during isentropic compression. Sure, we might achieve the same temperature values T1 and T2 in both paths, but in the isentropic compression path, the initial pressure value would be P1 and finally P2, whereas in the constant pressure heating path we would have some pressure p both in the beginning and at the end. So, how can we justify the use of the equation CpΔT to calculate compressor work from the state function perspective? The source of my confusion arises from the definition of state. How do we define it? Answer: You're right that the isentropic compressor work is the difference of enthalpy between the initial and final states (which are defined by the particular values of the state variables P, V, T). And indeed, ΔH = Cp ΔT (assuming Cp does not depend much on T). However, this formula is always valid for a perfect gas and does not assume that the pressure is constant. Indeed, Cp is called the heat capacity at constant pressure, but see it simply as a constant with a particular value. You do not necessarily need the transformation to be isobaric to use this constant at some point in your reasoning.
The "constant pressure" terminology just comes from the fact that for a perfect gas, for a transformation at constant pressure, the heat Q is equal to ΔH = Cp ΔT. Of course, in the isentropic transformation, pressure is not constant, but the formula above is still correct.
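To make the state-function argument explicit: for an ideal gas, enthalpy depends on temperature alone (taking U = C_V T, with constant heat capacities per mole), so the end-state pressures are irrelevant:

$$ H = U + PV = C_V T + RT = (C_V + R)\,T = C_P T \quad\Longrightarrow\quad \Delta H = C_P\,\Delta T. $$

This is why the same $C_P \Delta T$ applies whether the path is isobaric or isentropic; only the end-state temperatures matter.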
{ "domain": "physics.stackexchange", "id": 50378, "tags": "thermodynamics, work" }
Converting ångström spectral dimension to galaxy speed (km/s)
Question: I have a spectral cube (in FITS format) whose spectral dimension is in ångströms. The sampling along the spectral dimension is 0.28 Å (CDELT=0.28). The observation in the cube is Hα emission of a galaxy at redshift 0.1. How can I convert the sampling to km/s? Answer: If you want bins that have an equal step in km/s then you will have to rebin in steps of equal log wavelength. On this scale the rest wavelength of H alpha is at zero km/s and then each bin will represent an increment in velocity with respect to that. See How do I apply a velocity shift to a wavelength array with uniform logarithmic spacing?
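As a rough numerical sketch (assuming the Hα rest wavelength of 6562.8 Å, and the z = 0.1 and CDELT = 0.28 Å from the question; the function names are illustrative):

```python
C_KMS = 299792.458  # speed of light in km/s

def channel_width_kms(cdelt_aa, lam_rest_aa, z):
    """Velocity width of one channel at the redshifted line centre."""
    lam_obs = lam_rest_aa * (1.0 + z)   # observed wavelength of the line
    return C_KMS * cdelt_aa / lam_obs   # dv = c * dlambda / lambda

def log_rebinned_wavelengths(lam_start_aa, dv_kms, n_channels):
    """Wavelength grid with a constant step in km/s (equal log-lambda bins)."""
    ratio = 1.0 + dv_kms / C_KMS
    return [lam_start_aa * ratio ** k for k in range(n_channels)]
```

With these numbers each 0.28 Å channel corresponds to about 11.6 km/s at the redshifted line; rebinning onto the logarithmic grid then gives channels of exactly equal velocity width, as the answer describes.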
{ "domain": "astronomy.stackexchange", "id": 1675, "tags": "spectroscopy, spectra, units" }
Calculate angle between planes
Question: I have written working code calculating the angle between the adjacent planes. I read subsequently from standard input: n - amount of triangles, m - amount of vertices ind[] - indices of the vertices given coord[] - coordinates of the vertices given The output is supposed to be the maximum angle between the adjacent planes in rad. Function calculate_angle() iterates over the amount of triangles so that I compare only new ones with the old ones (for z in range(k, n)) list_result = [i for i in index1 if i in index2] used for the adjacency detection: it asks whether the indices of the first triangle's coordinates are equal to the second's. If there are at least 2 of them - we start to calculate the normals to the triangles (the angle between surfaces is equal to the angle between their normals) However, if the number of triangles is more than 10⁴ it starts to work very slowly: import numpy as np import math import sys import cProfile, pstats, io pr = cProfile.Profile() pr.enable() num_triangles, num_vertices = input().split() n = int(num_triangles) # amount of triangles m = int(num_vertices) # amount of vertices ind = [] coord = [] angles_list =[] for i in range(n): ind.append([int(j) for j in input().split()]) # indices of the vertices given for j in range(m): coord.append([float(k) for k in input().split()]) # coordinates of the vertices given def unit_vector(v): # Returns the unit vector of the vector. 
return v/ np.sqrt(v[0]*v[0]+v[1]*v[1]+v[2]*v[2]) def angle_between(v1, v2): v1_u = unit_vector(v1) v2_u = unit_vector(v2) return math.acos(max(min(np.dot(v1_u, v2_u), 1), -1)) # (np.clip(np.dot(v1_u, v2_u), -1.0, 1.0)) def calculate_angle(): for k in range(0, n): for z in range(k, n): index1 = ind[k] index2 = ind[z] trignum =0 list_result = [i for i in index1 if i in index2] if (ind[k] != ind[z])&(len(list_result) >= 2)&(trignum <= 3): trignum = trignum + 1 n1 = angle_norm(index1) n2 = angle_norm(index2) ang = angle_between(n1, n2) angles_list.append(ang) return max(angles_list) def cross(a, b): c = [a[1]*b[2] - a[2]*b[1], a[2]*b[0] - a[0]*b[2], a[0]*b[1] - a[1]*b[0]] return c def angle_norm(triangle_index): p0 = coord[triangle_index[0]] p1 = coord[triangle_index[1]] p2 = coord[triangle_index[2]] V1 = np.array(p1) - np.array(p0) V2 = np.array(p2) - np.array(p0) an = cross(V1, V2) return an print(calculate_angle()) pr.disable() s = io.StringIO() sortby = 'tottime' ps = pstats.Stats(pr, stream=s).sort_stats(sortby) ps.print_stats() print(s.getvalue()) The analysis results: 0.0 12130580 function calls in 26.836 seconds Ordered by: internal time ncalls tottime percall cumtime percall filename:lineno(function) 1048576 6.038 0.000 6.038 0.000 /Users/mac/PycharmProjects/untitled3/Course:21(unit_vector) 1284 5.309 0.004 5.309 0.004 {built-in method builtins.input} 4194304 4.645 0.000 4.645 0.000 {built-in method numpy.core.multiarray.array} 1048576 3.304 0.000 10.566 0.000 /Users/mac/PycharmProjects/untitled3/Course:55(angle_norm) 1048576 2.617 0.000 2.617 0.000 /Users/mac/PycharmProjects/untitled3/Course:49(cross) 1 2.090 2.090 21.516 21.516 /Users/mac/PycharmProjects/untitled3/Course:33(calculate_angle) 524288 0.996 0.000 8.224 0.000 /Users/mac/PycharmProjects/untitled3/Course:26(angle_between) 524288 0.531 0.000 0.531 0.000 {built-in method numpy.core.multiarray.dot} 819840 0.487 0.000 0.487 0.000 /Users/mac/PycharmProjects/untitled3/Course:39(<listcomp>) 524288 0.329 
0.000 0.329 0.000 {built-in method builtins.min} 524289 0.249 0.000 0.249 0.000 {built-in method builtins.max} 524288 0.095 0.000 0.095 0.000 {built-in method math.acos} 819840 0.080 0.000 0.080 0.000 {built-in method builtins.len} 525571 0.055 0.000 0.055 0.000 {method 'append' of 'list' objects} 1 0.008 0.008 0.008 0.008 {built-in method builtins.print} 1280 0.002 0.000 0.002 0.000 /Users/mac/PycharmProjects/untitled3/Course:16(<listcomp>) 1284 0.000 0.000 0.000 0.000 {method 'split' of 'str' objects} 3 0.000 0.000 0.000 0.000 /Users/mac/PycharmProjects/untitled3/Course:18(<listcomp>) 1 0.000 0.000 0.000 0.000 /Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/codecs.py:318(decode) 1 0.000 0.000 0.000 0.000 {built-in method _codecs.utf_8_decode} 1 0.000 0.000 0.000 0.000 {method 'disable' of '_lsprof.Profiler' objects} Here's what I already tried to optimise: I got rid of a couple of np built-in functions, e.g. np.cross() and np.linalg.norm(), that gave me a couple of seconds. It was for z in range(1, n); I changed 1 to k in order not to take already calculated triangles into account. I also tried to make the input faster, but to no avail. (Used map, used sys.std) Please, can someone tell me how to make it significantly faster? I'm not well-acquainted with graphs, and I have a bad feeling about this... Answer: One immediate piece of advice is to precompute angle_norm for each triangle. That alone will give you some boost. It also lets you reformulate the problem as finding the diameter of a point set. It is a classical problem, and the generic case is quite hard. You may want to look here for an introduction and references. However, in your case you deal with normalized vectors; their endpoints lie on the unit sphere, and the problem reduces to a simpler 2D case, which admits an \$O(n\log{n})\$ solution. PS About precomputing normals: normals = [] def compute_normals(triangles): for t in triangles: normals.append(unit_vector(angle_norm(t))) def calculate_angle(): .... 
if (....): .... # Already computed! # n1 = angle_norm(index1) # n2 = angle_norm(index2) ang = angle_between(normals[index1], normals[index2]) angles_list.append(ang)
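As a sketch of the precomputation idea (not the O(n log n) diameter algorithm, just the normal precomputation, vectorized with NumPy; the function names are illustrative):

```python
import numpy as np

def unit_normals(coords, tris):
    """All triangle unit normals in one vectorized pass.
    coords: (m, 3) float array of vertex coordinates
    tris:   (n, 3) int array of vertex indices per triangle"""
    p = coords[tris]                    # (n, 3, 3): the 3 vertices of each triangle
    v1 = p[:, 1] - p[:, 0]
    v2 = p[:, 2] - p[:, 0]
    n = np.cross(v1, v2)                # one cross-product call for all triangles
    return n / np.linalg.norm(n, axis=1, keepdims=True)

def angle_between_triangles(normals, i, j):
    """Angle between the planes of triangles i and j, in radians."""
    d = np.clip(np.dot(normals[i], normals[j]), -1.0, 1.0)
    return np.arccos(d)
```

Computing every normal once up front replaces the repeated per-pair `angle_norm` calls that dominate the profile above.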
{ "domain": "codereview.stackexchange", "id": 25550, "tags": "python, algorithm, python-3.x, time-limit-exceeded, computational-geometry" }
ROS kinetic with Ubuntu 4.13.0: Code doesn't invoke the callback method (poseHandler::poseHandlerCallBack(const turtlesim::PoseConstPtr& posePtrNew))
Question: #include <ros/ros.h> #include <turtlesim/Pose.h> #include <geometry_msgs/Twist.h> #include <std_srvs/Empty.h> #include <signal.h> #include <termios.h> #include <stdio.h> float initialX; float initialY; float initialTheta; class poseHandler { public: poseHandler(); turtlesim::PoseConstPtr currPosePtr; turtlesim::Pose currPose; void printPose(); void poseHandlerCallBack(const turtlesim::PoseConstPtr& posePtrNew); }; poseHandler::poseHandler() { } void poseHandler::printPose() { ROS_INFO("x=%f,y=%f", currPose.x, currPose.y); ROS_INFO("y=%f,", currPose.y); ROS_INFO("theta=%f,", currPose.theta); ROS_INFO("lin_vel=%f,", currPose.linear_velocity); ROS_INFO("ang_vel%f\n", currPose.angular_velocity); } void poseHandler::poseHandlerCallBack(const turtlesim::PoseConstPtr& posePtrNew) { currPosePtr = posePtrNew; currPose.x = currPosePtr->x; currPose.y = currPosePtr->y; currPose.theta = currPosePtr->theta; currPose.linear_velocity = currPosePtr->linear_velocity; currPose.angular_velocity = currPosePtr->angular_velocity; ROS_INFO("poseHandlerCallBack!\n"); } int main(int argc, char** argv) { poseHandler pH; ros::init(argc, argv, "draw_ractangle"); ros::NodeHandle nh; ros::Subscriber pose_sub = nh.subscribe<turtlesim::Pose> ("/turtlesim/turtle1/pose",1000, &poseHandler::poseHandlerCallBack, &pH); initialX = pH.currPose.x; initialY = pH.currPose.y; initialTheta = pH.currPose.theta; pH.printPose(); ros::Publisher twist_pub = nh.advertise<geometry_msgs::Twist> ("turtle1/cmd_vel", 100); ros::Rate loop_rate(1); geometry_msgs::Twist twist_obj; twist_pub.publish(twist_obj); while(ros::ok()) { ROS_INFO("target = %f", (initialX+2)); if(pH.currPose.x < (initialX + 2)) { twist_obj.linear.x = 0.4; twist_obj.angular.z = 0.0; ROS_INFO("Turtle move forward\n"); } else { twist_obj.linear.x = 0.0; twist_obj.angular.z = 0; ROS_INFO("Turtle Stopped\n"); } pH.printPose(); twist_pub.publish(twist_obj); loop_rate.sleep(); ros::spinOnce(); ROS_INFO("spinOnce() called\n"); } } Originally 
posted by Mohini on ROS Answers with karma: 23 on 2018-07-01 Post score: -2 Answer: There is a while loop inside your main loop where the code will get stuck. The callbacks are only processed when you call the ros::spinOnce function, so it will never call the callback when it's stuck inside that inner loop. It's not clear why you would want a while loop there anyway, you can possibly change it to an if condition to make it stop after you get to the desired position. Also, you might never get a message with an exact value of initialX + 1 in the pose message, so you may want to change it to an inequality check instead of equality. Originally posted by kartikmohta with karma: 308 on 2018-07-02 This answer was ACCEPTED on the original site Post score: 2 Original comments Comment by billy on 2018-07-02: +1 for reading his code. Comment by Mohini on 2018-07-04: Thank you for the reply. I've changed my code. now spinOnce() is executed, yet the call back function is not invoked. The objective of the code is to move the turtle a couple of units and make it stop. (I could not put the revised code in the comment, so edited original code) Comment by kartikmohta on 2018-07-04: You might want to check the topic name for the pose subscriber. Just do a rostopic list after launching the turtlesim to check the exact names. You're subscribing to turtlesim/turtle1/pose but the actual topic might be /turtle1/pose. Comment by Mohini on 2018-07-04: yey! it worked. I had the topic name right to start with. but to get to work I tried different names and forgot to revert to the original code. thanks for the help!
{ "domain": "robotics.stackexchange", "id": 31141, "tags": "ros-kinetic" }
Is the language of all $a^n$ for which $n$ has an even number of digits in 10-base system regular?
Question: Is the language $ L = \{a^n ~| ~n \text{ has even number of digits in 10-base system}\} $ regular? My approach: let $ p $ be the constant from the Pumping Lemma. Choose the smallest $ n $ which has an even number of digits in the 10-base system but is bigger than or equal to $ p $, that is $ n = 10^{2m - 1} $ for appropriate $ m $. According to the Lemma, $ z = a^n = a^{10^{2m - 1}} = uvw $, and $ |uv| \leq p $, with $ v\neq \varepsilon $. So, we pump down $ v $, and we get $ z' $, which should be in $ L $ (by the Lemma). That is, $ uw $ should be in the language. If $ |v| $ is smaller than $ 10^{2m-1} - 10^{2m-3} $, then we reach a contradiction. If $ |v| $ is bigger, then we pump it up 10 times, to reach a contradiction again. I hope this is alright. Answer: A slightly simpler way is to say that if you consider a string $x\in L$ of size $p$, then there is a non-empty substring $v$ of $x$ that you can pump without getting out of the language, if it is regular. Then you can pump it in $x$ until it exceeds a size $10^{2m}$, which has to be greater than $p$, such that $10^{2m+1}-10^{2m} > |v|$. And you necessarily get the wrong number of digits. In other words, you try to find in the sequence of possible sizes a gap that is too wide to be jumped by pumping with a constant size $|v|$. This very simple reasoning also works with the context-free pumping lemma. Wide gaps that cannot be jumped by pumping exist as soon as the empty gaps can increase in size beyond any bound. This depends on the definition of the language. But using the pumping lemma is often a bit more sophisticated.
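The width of that gap can be written down explicitly. The word lengths in $L$ are exactly the integers with an even number of decimal digits:

$$ n \in \bigcup_{k \ge 1} \left[\,10^{2k-1},\; 10^{2k}-1\,\right], $$

so between consecutive valid blocks lies the forbidden interval $[10^{2k},\, 10^{2k+1}-1]$ of width $9\cdot 10^{2k}$. Pumping changes the length in steps of $|v| \le p$, so once $9\cdot 10^{2k} > p$ no amount of pumping can cross from one valid block to the next without landing inside the gap.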
{ "domain": "cs.stackexchange", "id": 4587, "tags": "formal-languages, regular-languages, pumping-lemma" }
What's read depth in VCF?
Question: There is an attribute defined as DP (Total Depth) in the VCF file format. For example, http://www.1000genomes.org/node/101. What does this attribute mean? If I get a value like DP=128, how can that be interpreted? Answer: So the general sequencing analysis pipeline is to generate a bunch of read sequences, figure out how they match to a reference genome and align them, then call any differences between the reference genome and the sequencing reads. The following image, from the Broad Institute's Integrative Genomics Viewer (IGV), may help you visualize what is happening here. As you can see in the image, the human reference DNA is found at the top of the window, and the sequence of bases for the current selection, at the very bottom of the window. Then each of the gray bars that you see below that is an individual sequencing read that matches to that particular location along the human DNA. When you use a variant caller like Freebayes, it will list out information about locations along the reference genome that had differences in your actual sequences. For instance, as you bring up, DP or read depth. Looking back at the IGV image, you can see a location that had a lot of 'G' showing in the sequence reads, where the reference DNA has 'A'. Variant callers will pick up on this and list it as a possible variant. They will also tell you the read depth at that location, which is the total number of sequence reads overlapping that position. In this case there are 21 gray bars (sequence reads) overlapping at that position, meaning that DP=21. Even though it is not listed in the VCF example to which you refer, variant callers can also give other data like AO, which is the number of times the alternate allele was observed, and RO, which is the number of times the reference base was observed. This information allows you to decide whether or not an identified variant was a sequencing error, or a real variant. 
For instance, if DP=1, AO=1, RO=0, that means there was only a single read at that location and it was not what is listed in the reference genome. Because the depth is so low, and the error rate of a next-gen sequencer is somewhere between 0.5-5%, you cannot say whether this is a mutation or just a sequencing error. However, as DP increases, you gain more confidence in whatever was called. So if DP=10, AO=10, RO=0, you can be quite confident that the alternate base you read at that location is not just a sequencing error, because you read this location 10 times and every time you saw the same variant. Let me know if I can clarify further.
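A minimal sketch, not tied to any particular VCF library, of pulling DP/AO/RO out of an INFO string and turning them into an alternate-allele fraction:

```python
def parse_info(info):
    """Parse a VCF INFO string such as 'DP=21;AO=18;RO=3' into a dict.
    Flag-type entries (no '=') are stored as True."""
    fields = {}
    for item in info.split(';'):
        if '=' in item:
            key, value = item.split('=', 1)
            fields[key] = value
        elif item:
            fields[item] = True
    return fields

def alt_fraction(info):
    """Fraction of reads supporting the alternate allele, AO / DP."""
    fields = parse_info(info)
    return int(fields['AO']) / int(fields['DP'])
```

With the DP=10, AO=10, RO=0 example above, the alternate fraction is 1.0, which is why the call is so confident.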
{ "domain": "biology.stackexchange", "id": 4173, "tags": "bioinformatics" }
Equations of motion for an object with normal and angular acceleration
Question: I am currently coding something with a moving object and can't figure out the physics with my school knowledge only. I hope this question is not below the standards of this forum. I have a moving object at ($x_0, y_0, \theta_0, v_0, \omega_0$), where $x$ and $y$ are the 2D-coordinates, $\theta$ the angle (to the x axis), $v$ the forward velocity and $\omega$ the angular velocity. I want to accelerate the object with a forward acceleration of $a$ and an angular acceleration of $\alpha$, and I want to calculate the new position after a time $\Delta t$. If $\alpha = 0$ (just as an intermediate step), the new state should be $$ x_1 = x_0 + \Delta t (v_0 \cos \theta_0) + 0.5 a (\Delta t^2) \\ y_1 = y_0 + \Delta t (v_0 \sin \theta_0) + 0.5 a (\Delta t^2) \\ \theta_1 = \theta_0 + \omega_0 \Delta t \\ v_1 = v_0 + a \Delta t \\ \omega_1 = \omega_0 $$ I am not sure if this is correct, but this is all I have. And I can't figure out how to add the angular acceleration to these equations, and I guess $\omega$ should be in the equations for $x$ and $y$ too. Answer: You need to keep track of 3 coordinates (2 positions and 1 rotation) and their derivatives. What you are missing is the current velocity vector (2 components), which is not necessarily along the orientation direction unless constrained to do so. Your best bet is a symplectic integrator such as $$\begin{aligned} \begin{pmatrix} vx_1 \\ vy_1 \\ \omega_1 \end{pmatrix} & = \begin{pmatrix} vx_0 \\ vy_0 \\ \omega_0 \end{pmatrix} + \begin{pmatrix} a \cos\theta_0 \\ a\sin\theta_0 \\ \alpha \end{pmatrix} \Delta t \\ \begin{pmatrix} x_1 \\ y_1 \\ \theta_1 \end{pmatrix} & = \begin{pmatrix} x_0 \\ y_0 \\ \theta_0 \end{pmatrix} + \begin{pmatrix} vx_1 \\ vy_1 \\ \omega_1 \end{pmatrix} \Delta t \\ t_1 & = t_0 + \Delta t \end{aligned}$$ and take small steps to make it appear smooth. The above will not artificially add energy to the system like an Euler integrator does. 
The difference is to take the velocity step first and then use the new velocity in the position step. If the acceleration is constant for the entire step, the exact equations are $$\begin{pmatrix} x_1 \\ y_1 \\ \theta_1 \end{pmatrix} = \begin{pmatrix} x_0 \\ y_0 \\ \theta_0 \end{pmatrix}+ \frac{1}{2} \begin{pmatrix} vx_0 \\ vy_0 \\ \omega_0 \end{pmatrix} \Delta t + \frac{1}{2} \begin{pmatrix} vx_1 \\ vy_1 \\ \omega_1 \end{pmatrix} \Delta t \\$$ the problem with this is that typically the constant acceleration is a bad assumption.
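A minimal sketch of the semi-implicit (symplectic) Euler step described above, assuming the state is stored as a tuple (x, y, theta, vx, vy, omega):

```python
import math

def step(state, a, alpha, dt):
    """One semi-implicit (symplectic) Euler step.
    a is the forward acceleration, alpha the angular acceleration."""
    x, y, theta, vx, vy, omega = state
    # Velocity update first, using the current heading theta.
    vx += a * math.cos(theta) * dt
    vy += a * math.sin(theta) * dt
    omega += alpha * dt
    # Position update uses the *new* velocities (this is the key difference
    # from explicit Euler, and what keeps the energy from drifting upward).
    x += vx * dt
    y += vy * dt
    theta += omega * dt
    return (x, y, theta, vx, vy, omega)
```

Calling this in a loop with a small dt reproduces the matrix update in the answer one step at a time.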
{ "domain": "physics.stackexchange", "id": 11373, "tags": "homework-and-exercises, newtonian-mechanics" }
Nasty Age Printing Method
Question: I have this ugly age printing method. Can you do better? def age(birth_date) today = Date.today years = today.year - birth_date.year months = today.month - birth_date.month (months -1) if today.day < birth_date.day if years == 0 && months == 1 age = "#{months} month" elsif years == 0 && months > 1 age = "#{months} months" elsif years > 0 && months > 0 age = years == 1 ? "1 year, #{months} months" : "#{years} years, #{months} months" elsif years > 0 && months < 0 months = months + 12 years = years - 1 age = "#{months} months" if years == 0 age = "1 year, #{months} months" if years == 1 age = "#{years} years, #{months} months" if years == 1 end age end Answer: Some notes: (months -1) if today.day < birth_date.day. Note that this is doing nothing, months - 1 is evaluated and dropped. There is an age = in every branch of the conditional, and finally age is returned. That's unnecessary and not idiomatic; in Ruby, conditionals are expressions, so we don't need to create a variable. About this: age = "#{months} months" if years == 0 age = "1 year, #{months} months" if years == 1 It's preferable not to use in-line conditionals when you are dealing with assignments or expressions (it's ok if you are performing side-effects, e.g. fail("error") if x < 0). In general follow functional programming guidelines, especially when dealing with logic (if you do some programming that does not deal with logic, please let me know ;-)). It should look (you could also use a case) something like this: age = if years == 0 "#{months} months" elsif years == 1 "1 year, #{months} months" ... end I'd write: def age(birth_date) today = Time.zone.today total_months = (today.year*12 + today.month) - (birth_date.year*12 + birth_date.month) years, months = total_months.divmod(12) strings = [[years, "year"], [months, "month"]].map do |value, unit| value > 0 ?
[value, unit.pluralize(value)].join(" ") : nil end strings.compact.join(", ") end Note that, to keep it simple, this code (as yours) ignores the day of the month, which gives a slightly wrong answer.
{ "domain": "codereview.stackexchange", "id": 3597, "tags": "ruby, ruby-on-rails" }
How does the Earley Parser store possible parses of an ambiguous sentence?
Question: I've got a pretty basic question concerning the Earley parser: In case of syntactic ambiguity ( S -> NP VP(V NP(NP PP)) vs. S -> NP VP(VP((V NP) PP) ), are both parses stored in one chart or in two? The grammar I'm talking about is the following: S -> VP NP VP -> V VP VP -> VP PP NP -> NP PP NP -> Det N PP -> P NP so while parsing you could either attach a PP to an NP or to a VP. My question in detail is how the graphical chart would look, meaning the positions of predict, scan and complete. I was assuming that both parses would be stored in one (big) chart. So S' will then be found in, let's say, s[0][8] and s[0][16]? Is that right? An attached image or link with a graphical chart parsing through an ambiguous sentence would help. Answer: This is an example of what the graphical chart looks like when there are n tokens in the string: $\begin{array}{} [1,1] & [1,2] & [1,3] & \cdots & [1,n] \\ & [2,2] & [2,3] & \cdots & [2,n] \\ & & [3,3] & \cdots & [3,n] \\ & & & \ddots & \vdots \\ & & & & [n,n] \end{array} $ Because $S$ is the root of the grammar, the two parses of $S$ will be stored in the $[1,n]$ cell of the chart. Here, as you can already guess, $[i,j]$ stands for the position indices of the parses, in other words, the span. In the case of $S$, the positions span the whole string. If the positions do not span the whole string, what you have is a set of valid constituent parses in that span. So the answer to your question is: Possible parses are stored in the same chart; they may be stored in different cells of the chart if their spans are different. If their spans are the same, they are stored in the same cell in the chart. Btw, I think the first two rules in your grammar do not look right. Maybe you want S -> NP VP and VP -> V NP instead.
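Not an Earley implementation, just a sketch of the storage scheme the answer describes: one chart keyed by span, with ambiguous analyses of the same span sharing a cell (the entries below are written by hand for illustration, not produced by a parser run):

```python
from collections import defaultdict

# chart[(i, j)] collects every completed analysis covering exactly tokens
# i..j.  The two PP attachments of an 8-token sentence therefore land in
# the same cell, (1, 8).
chart = defaultdict(set)

chart[(1, 8)].add("S -> NP (VP (V (NP (NP PP))))")  # PP attached to the NP
chart[(1, 8)].add("S -> NP (VP (VP (V NP)) PP)")    # PP attached to the VP
chart[(2, 8)].add("VP -> V (NP (NP PP))")           # a sub-span constituent
```

Spans differing in either endpoint get their own cells, which is why partial constituents and full parses never collide.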
{ "domain": "cs.stackexchange", "id": 3625, "tags": "algorithms, parsers, ambiguity" }
Character Sprint Spaghetti Code
Question: I finally managed to get the code to work properly. It makes the player object move faster if the user double taps the arrow key within a few 'ticks' (new frames) of the 1st stroke. - User presses and has a limited time to press again to sprint the object. If the time runs out the user has to try again. However, I believe it'd be better to be working with time, in milliseconds, and not 'ticks', as what I intend involves lots of key combinations that have x seconds to be pressed. I would like to know the redundancies in the code, what can be made into a class, and something better than screen refresh to time things up (delays?). package { import flash.display.Sprite; import flash.display.Graphics; import flash.events.Event; import flash.events.MouseEvent; import flash.ui.Keyboard; import flash.events.KeyboardEvent; public class Main extends Sprite { var keys:Array = []; var player:Sprite = new Sprite(); var sprint, condition1, condition2, timerStart, keyUp, keyDown:Boolean; var timer:int; public function Main():void { player.graphics.beginFill(0x000000); player.graphics.drawCircle(0, 0, 25); player.graphics.endFill; addChild(player); player.x = 100; player.y = 250; player.addEventListener(Event.ENTER_FRAME, update); stage.addEventListener(KeyboardEvent.KEY_DOWN, onKeyDown); stage.addEventListener(KeyboardEvent.KEY_UP, onKeyUp); } function update(e:Event):void { if ((timerStart)&&(timer>0)) { timer --; condition1 = true; } if (keyUp) { if ((condition1) && (timer > 0)) { condition2 = true; } if (sprint) { timerStart = false; condition1 = false; condition2 = false; sprint = false; timer = 7; } if (timer <= 0) { timerStart = false; condition1 = false; condition2 = false; sprint = false; timer = 7; } } if (keys[Keyboard.RIGHT]) { timerStart = true; if ((condition1) && (condition2)) { sprint = true; } if (sprint) { player.x += 7; } else player.x += 2; } function onKeyDown(e:KeyboardEvent):void { keys[e.keyCode] = true; keyUp = false; keyDown = true; } function
onKeyUp(e:KeyboardEvent):void { keys[e.keyCode] = false; keyUp = true; keyDown = false; } } // class } // package Answer: From a once-over: Indentation is terrible; consider using a tool like jsbeautifier.org, it works for ActionScript as well. Keep working with Event.ENTER_FRAME <- It is the better approach in my mind. Always be on the lookout for copy-pasted code; this: if (sprint) { timerStart = false; condition1 = false; condition2 = false; sprint = false; timer = 7; } if (timer <= 0) { timerStart = false; condition1 = false; condition2 = false; sprint = false; timer = 7; } could be if (sprint || timer <= 0) { timerStart = false; condition1 = false; condition2 = false; sprint = false; timer = 7; } Do not track keyDown; you do not use it, and its value is always the opposite of keyUp. condition1 and condition2 are very unfortunate names; you should try to be more descriptive. This code could use some well-named constants and indenting: if (sprint) { player.x += 7; } else player.x += 2; I would rather see var FAST_SPEED = 7, NORMAL_SPEED = 2; if (isSprinting) { player.x += FAST_SPEED; } else { player.x += NORMAL_SPEED; } or if I wanted to be fancy player.x += isSprinting ? FAST_SPEED : NORMAL_SPEED;
{ "domain": "codereview.stackexchange", "id": 7205, "tags": "beginner, classes, actionscript-3" }
Is this thought regarding work and potential energy correct?
Question: I am really confused about this. The potential energy acquired by an object is equal to $mgh$, if we are on any planet, where $g$ is the planet's gravitational acceleration, which directly comes from the definition of work, that is $W = F.s$. But, suppose I am holding a book, and I lift it with a force greater than its weight $mg$, will it acquire more potential energy than before, just because $F_{\rm hand} > F_{\rm book\,weight}$? In addition to this, I want to ask this. Suppose I am standing in space, near a planet, where there is no air resistance, no external disturbance, just me and the planet. And suppose I am standing on the planet, and there is a book on the ground, and there is a gravitational field between us, which is attracting me and the planet. Now, if I pick up the book and lift it to a certain height with a force greater than its weight on the planet, will it have acquired more potential energy? (Remember there is no drag, no resistance, nothing) Answer: If you apply a greater force than $mg$ you will do more work and the book will accelerate, so as well as gaining gravitational potential energy it will also gain kinetic energy. The work that you do will equal the sum of the gain in gravitational potential energy and the gain in kinetic energy.
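The answer's energy balance, written out for a constant lifting force $F > mg$ applied over a height $h$:

$$ W_{\rm hand} = F h = mgh + \Delta KE, \qquad \Delta KE = (F - mg)\,h > 0, $$

so the book arrives at height $h$ with the same potential energy $mgh$ either way; the extra work shows up as kinetic energy, not as extra potential energy.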
{ "domain": "physics.stackexchange", "id": 31566, "tags": "newtonian-mechanics, forces, work, potential-energy" }
Pose transformation between links
Question: I have a robot with 8 links. How do I get the pose transformation between the links? (I have a camera plugin, and I wish to get the pose of one of the links of the camera, which isn't exposed) I have gone through tf2 transformations, however, they provide the transformation between frames and not links. For instance, when I move a link in the Gazebo simulator, I can see that the pose of the link changes in RVIZ, however, when I use the command rosrun tf tf_echo frame1 frame2 Where frame2 is the link that is moving. (I am using robot_state_publisher for this). It gives a constant value. I suspect that, even though the links are moving, the frames seem to be constant. This can be verified, when I set my fixed frame to frame2, there seems to be no relative movement. What is the relationship between links and frames? How do I get the relative pose of two links using tf? Originally posted by ThreeForElvenKings on ROS Answers with karma: 80 on 2020-05-09 Post score: 0 Original comments Comment by gvdhoorn on 2020-05-09: Frames are links, links are frames. If the transform you retrieve remains constant, it probably is (transforms tell you the translation + rotation body-relative (so if you ask for the transform from frame1 to frame2, you'll get it relative to frame1). Comment by ThreeForElvenKings on 2020-05-09: Hello. I can however, see that my link position values are changing. i.e. when I view my link states in rviz, I can see that link2 (which as you say is the same as frame2) is constantly changing while link1 (world) is constant. Yet the transformation command I give always gives a constant value. I reckon that my frames are being static, despite links moving. Comment by gvdhoorn on 2020-05-09: It would help if you could show both a screenshot of rqt_graph and a copy-paste of whatever urdf/xacro or other sources of TF you are using. Otherwise we'll just end up guessing. Comment by ThreeForElvenKings on 2020-05-09: Hello. I solved the error.
There was a problem with my joint state publisher. Apologies Comment by gvdhoorn on 2020-05-10: Please post the solution you found as an answer here. Instead of closing the question, we will then mark your answer as the answer, which will signal much more clearly that your issue was actually resolved. Comment by ThreeForElvenKings on 2020-05-10: Apologies, I am new to the community. I have done that. Thanks Answer: Hi all. I have solved the issue. There was a problem with my joint state publisher. As it turns out, commands to move were not reflected in the joint states, as Gazebo had treated it independently. On loading the controllers properly, I could visualize the movements in Rviz, and was consequently able to get the correct transformations. Cheers! Originally posted by ThreeForElvenKings with karma: 80 on 2020-05-10 This answer was ACCEPTED on the original site Post score: 1 Original comments Comment by jstm on 2023-03-29: "able to get the correct transformations" how did you do this? I'd like to do the same
{ "domain": "robotics.stackexchange", "id": 34930, "tags": "ros-melodic" }
Are any genes over a billion years old?
Question: Are there any genes (for any organism) for which we can say with confidence that they are over a billion years old? Answer: Ribosomal RNA (rRNA) genes are shared among all living organisms, including lineages that diverged from one another well over a billion years ago. Bacteria, Archaea, and Eukaryotes diverged over 3 billion years ago, and their rRNAs are virtually identical to one another. http://www.biologyreference.com/Ar-Bi/Archaea.html Any genes that are shared between plants, animals and fungi would also be over a billion years old.
{ "domain": "biology.stackexchange", "id": 10111, "tags": "evolution, gene" }
What is the relation between electrons and photons?
Question: What is the relation between electrons and photons? Why do atoms get excited when their electrons come in contact with photons? Why do electrons go from a higher to lower energy level when emitting a photon? Answer: Photons are quanta, or particles, of light. The bare minimum piece of light. Consider them like "packets" of light. Einstein theorized and proved the Wave-Particle Duality which described the nature of light as both a wave and a particle, and this is how photons were originally speculated upon. What came from this however, was the Photoelectric effect, which proposed (and was proved) that photons can cause electrons to be emitted from a material because photons have momentum. Because photons have momentum, they have Energy, and you will be able to measure the amount of Work done on a material (although nearly insignificant in non-quantum realms). $E = hf$ The energy threshold for a material to release electrons is entirely dependent on that material, and if the incident photons carry enough energy, that material will release electrons. You can measure the kinetic energy of the emitted electrons: $KE_{max} = hf - W$
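The two photoelectric equations quoted in the answer make a tidy worked example; the work function value below is an assumption chosen for illustration, not a tabulated constant:

```python
# Sketch of the photoelectric-effect arithmetic: photon energy E = h*f
# and maximum electron kinetic energy KE_max = h*f - W.
h = 6.626e-34  # J*s, Planck constant
f = 1.0e15     # Hz, frequency of the incident light (illustrative)
W = 3.2e-19    # J, assumed work function of the material (illustrative)

E_photon = h * f        # energy carried by one photon
KE_max = E_photon - W   # max kinetic energy of an ejected electron

# Electrons are only emitted when the photon energy exceeds the
# material's work function.
emitted = E_photon > W
print(E_photon, KE_max, emitted)
```

With these numbers each photon carries about 6.6e-19 J, comfortably above the assumed threshold, so electrons are ejected.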
{ "domain": "physics.stackexchange", "id": 12451, "tags": "photons, electrons" }
Simple State Machine and Transition Table
Question: The goal is to have well defined state transitions, and the ability to provide the next event to execute. I'd like to know if this is a proper implementation of State Machine, considering how states and transitionTable are defined, and how I handle event as input and output via nextEvent. In many examples I cannot clearly define verbiage for state, so I defined them as verbs (notice ing suffix), as if it represents the ongoing progress of workflow. This may be wrong.. I also defined multiple events that could occur for a single state. For example, GetDeviceStatus and GetIntegrityStatus events both occur in RetrievingStatus state. This is also the case for Downloading state. You can see in the cases, when determining the next event, I need to check what the previous state was first. If flawed, what are the pitfalls to my design, and how could it be improved? thanks. enum ExampleState { case Initiate case Authorizing case RetrievingStatus case Downloading case Confirming case End } enum ExampleEvent { case InitiateSequence case GetAuthToken case GetDeviceStatus case GetIntegrityStatus case DownloadFromServer case DownloadToDevice case Confirm } class StateMachine { var oldState: ExampleState! var currentState: ExampleState! var currentEvent: ExampleEvent! var table: [ExampleState: [ExampleEvent: ExampleState]] = [.Initiate: [.InitiateSequence: .Authorizing], .Authorizing: [.GetAuthToken: .RetrievingStatus], .RetrievingStatus: [.GetDeviceStatus: .Downloading, .GetIntegrityStatus: .Confirming], .Downloading: [.DownloadFromServer: .Downloading, .DownloadToDevice: .RetrievingStatus], .Confirming: [.Confirm: .End]] func nextEvent(event: ExampleEvent) -> EventExecutor? { let transitionState = table[currentState]![event]! 
let oldState = currentState switch (transitionState) { case .Initiate: currentState = .Initiate return InitiateSequence() case .Authorizing: currentState = .Authorizing return GetAuthToken() case .RetrievingStatus: currentState = .RetrievingStatus switch (oldState) { case .Authorizing: return GetDeviceStatus() case .Downloading: return GetIntegrityStatus() default: return nil } case .Downloading: currentState = .Downloading switch (oldState) { case .RetrievingStatus: return DownloadFromServer() case .Downloading: return DownloadToDevice() default: return nil } case .Confirming: currentState = .Confirming return Confirm() case .End: currentState = .End return nil } } } Use case @objc func rxExecute() { publisher .map { self.stateMachine.nextEvent(event: $0) } // $0 == event that just completed .flatMap { $0!.rxExecute() } // execute next event from state machine output .subscribe(subscriber) // handles errors/success from completed event } Answer: First, there are a few things about the design: Does your machine have something like an 'error' that is not handled (e.g. by throwing an exception), or do you have only 'outcomes' that are all handled in the same manner? Do you expect branches in the FSM? Is it possible for an FSM method to fail? If so, how is it handled? E.g. ignored (say we rely on the caller to retry on a timer), raise a failure event and queue it for execution, or use the return value as a guard to branch in the state machine method. Do you expect your code to be maintained over a 20+ year period? After many attempts to improve existing FSMs, making my own, trying different frameworks and making my own frameworks too, I came to a trivially simple conclusion (aka the 'dumb' FSM pattern): each event should be represented by a function and each function should contain a switch by state; or every state should be represented by a function and every function should contain a switch by event. In the long run this is the cheapest solution. E.g.
if you do not have branches/failures, one can write a simple table-driven FSM; however, once we get the first branch/failure to handle, we will need to either make it far more complicated or throw it away and rewrite it using the 'dumb' approach.
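The 'dumb' pattern is easy to sketch in Python; the state and event names below are simplified stand-ins for the Swift enums above, not the reviewed API:

```python
# Minimal sketch of the 'dumb' FSM pattern: one function per event,
# each containing an explicit switch on the current state. Unexpected
# events raise immediately, which is where failure handling plugs in.
class Machine:
    def __init__(self):
        self.state = "initiate"

    def initiate_sequence(self):
        # switch by state, inside the event function
        if self.state == "initiate":
            self.state = "authorizing"
        else:
            raise ValueError(f"InitiateSequence not valid in {self.state}")

    def got_auth_token(self):
        if self.state == "authorizing":
            self.state = "retrieving_status"
        else:
            raise ValueError(f"GetAuthToken not valid in {self.state}")

m = Machine()
m.initiate_sequence()
m.got_auth_token()
print(m.state)  # retrieving_status
```

The upside of this verbose style is that branches, guards, and per-transition error handling each have an obvious place to live, which a pure transition table lacks.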
{ "domain": "codereview.stackexchange", "id": 38349, "tags": "object-oriented, swift, state-machine" }
Double every 2nd item in a list
Question: Some exercise required me to write a piece of code to double every second item in a list. My first thought was just using cons and I came up with the following: doubleSecond :: Num a => [a] -> [a] doubleSecond [] = [] doubleSecond [x] = [x] doubleSecond (x:y:xs) = x : (y * 2) : doubleSecond xs While it's straightforward and does work, it just doesn't look that elegant and I'm curious how this could be rewritten in a more elegant fashion :) Answer: Built-ins and generality It is a good first try, but I discourage explicit recursion and suggest a larger use of built-ins, Haskell provides you with incredibly powerful and general functions. Also writing something a bit more general will make you familiar with first class functions (the core of the Haskell experience). Decomposition As always, I am going to divide the task into smaller, more manageable parts: Find out how to apply a function to each second item of a list. Give multiply_by_two as argument to the function above. mapSecond mapSecond :: (a -> a) -> [a] -> [a] mapSecond f = zipWith ($) (cycle [id, f]) Given a function and a list, apply the function to each second item of the list. This function is quite advanced, let me explain it: zipWith: given a function and two lists applies the function to each pair. If you are familiar with Python, zip = zipWith (,) Prelude> zipWith (++) ["Hello", "Foo"] ["World", "Bar"] ["HelloWorld","FooBar"] $: given a function and two arguments, applies it to the arguments. Prelude> ($) (*) 2 5 10 cycle: given a list repeats it forever. Prelude> take 10 $ cycle [1,2,3] [1,2,3,1,2,3,1,2,3,1] The second argument is omitted, as it can be 'curried' (simplified) away. The main function is now: main = print $ ( mapSecond (* 2) ) [1,2,3,4,5,6]
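For readers more comfortable with Python, the same zip-with-a-cycle trick can be sketched as follows (a rough analogue, not the Haskell answer itself):

```python
# Python analogue of mapSecond: pair the list with a cycling sequence
# of functions [identity, f] and apply each function to its element.
from itertools import cycle

def map_second(f, xs):
    return [g(x) for g, x in zip(cycle([lambda v: v, f]), xs)]

print(map_second(lambda v: v * 2, [1, 2, 3, 4, 5, 6]))  # [1, 4, 3, 8, 5, 12]
```

As in the Haskell version, `cycle` supplies an infinite id, f, id, f, … stream and `zip` truncates it to the length of the input list.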
{ "domain": "codereview.stackexchange", "id": 16794, "tags": "haskell" }
Why do Magnesium and Lithium form *covalent* organometallic compounds?
Question: Lithium and magnesium are Group 1 and Group 2 elements respectively. Elements of these groups are highly ionic, and I've never heard of them forming significantly covalent inorganic compounds. Yet these elements form a variety of organometallic compounds ($\ce{PhLi}$, the whole family of Grignard reagents, etc). Organometallic compounds have significant covalent character (i.e., the bond can be called covalent) in the carbon–metal bond. What's so special about carbon that makes these elements form covalent bonds? Answer: The character of the bond is determined by the electronegativity of the atoms. Speaking of bonds as purely ionic or covalent is not always correct - usually it is more correct to say that a bond has ionic or covalent characteristics. So comparing the difference in electronegativities gives us the following: $$\begin{array}{cc}\hline \text{Difference in electronegativity} & \text{Type of bond} \\ \hline < 0.4 & \text{Non-polar covalent} \\ 0.4 \mathrm{-} 1.7 & \text{Polar covalent} \\ >1.7 & \text{Ionic} \\ \hline \end{array}$$ At the upper end of the polar covalent spectrum, the bonds frequently have both covalent and ionic characteristics. For organometallic compounds, the difference in electronegativity between Li and C is $1.57$. So while this is still in the polar covalent range, it is also close to ionic. Similarly, the difference between Mg and C is $1.24$ - again, a very polar covalent bond. Compare this to the difference between H and C ($0.35$) - a non-polar covalent bond. So to answer your question, the thing that is "special" about carbon is that it has a fairly low electronegativity compared to the chalcogens and halogens. Granted, bonds with carbon are also going to be weaker than in say LiCl, but that's what makes organometallic compounds actually work to form carbon-carbon bonds. (The electronegativity values came from wikipedia's great chart)
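The answer's table lends itself to a quick script; the electronegativity values below are the commonly tabulated Pauling values the answer quotes:

```python
# Sketch: classify bond character from the difference in Pauling
# electronegativities, using the thresholds from the answer's table.
chi = {"Li": 0.98, "Mg": 1.31, "C": 2.55, "H": 2.20}

def bond_character(a, b):
    diff = abs(chi[a] - chi[b])
    if diff < 0.4:
        return "non-polar covalent"
    elif diff <= 1.7:
        return "polar covalent"
    return "ionic"

print(bond_character("Li", "C"))  # polar covalent (diff = 1.57)
print(bond_character("Mg", "C"))  # polar covalent (diff = 1.24)
print(bond_character("H", "C"))   # non-polar covalent (diff = 0.35)
```

The Li–C and Mg–C differences land near the top of the polar covalent range, matching the answer's point that these bonds are covalent but strongly polarized.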
{ "domain": "chemistry.stackexchange", "id": 7715, "tags": "organic-chemistry, ionic-compounds, organometallic-compounds, covalent-compounds" }
Cross validation Vs. Train Validate Test
Question: I have a doubt regarding the cross validation approach and train-validation-test approach. I was told that I can split a dataset into 3 parts: Train: we train the model. Validation: we validate and adjust model parameters. Test: never seen before data. We get an unbiased final estimate. So far, we have split into three subsets. Until here everything is okay. Attached is a picture: Then I came across the K-fold cross validation approach and what I don’t understand is how I can relate the Test subset from the above approach. Meaning, in 5-fold cross validation we split the data into 5 and in each iteration the non-validation subset is used as the train subset and the validation is used as test set. But, in terms of the above mentioned example, where is the validation part in k-fold cross validation? We either have validation or test subset. When I refer myself to train/validation/test, that “test” is the scoring: Model development is generally a two-stage process. The first stage is training and validation, during which you apply algorithms to data for which you know the outcomes to uncover patterns between its features and the target variable. The second stage is scoring, in which you apply the trained model to a new dataset. Then, it returns outcomes in the form of probability scores for classification problems and estimated averages for regression problems. Finally, you deploy the trained model into a production application or use the insights it uncovers to improve business processes. As an example, I found the Sci-Kit learn cross validation version as you can see in the following picture: When doing the splitting, you can see that the algorithm that they give you, only takes care of the training part of the original dataset. So, in the end, we are not able to perform the Final evaluation process as you can see in the attached picture. Thank you! 
Answer: If k-fold cross-validation is used to optimize the model parameters, the training set is split into k parts. Training happens k times, each time leaving out a different part of the training set. Typically, the error of these k models is averaged. This is done for each of the model parameters to be tested, and the model with the lowest error is chosen. The test set has not been used so far. Only at the very end is the test set used to test the performance of the (optimized) model.
# example: k-fold cross validation for hyperparameter optimization (k=3)
original data split into training and test set:
|---------------- train ---------------------| |--- test ---|
cross-validation: test set is not used, error is calculated from validation set (k-times) and averaged:
|---- train ------------------|- validation -| |--- test ---|
|---- train ---|- validation -|---- train ---| |--- test ---|
|- validation -|----------- train -----------| |--- test ---|
final measure of model performance: model is trained on all training data and the error is calculated from test set:
|---------------- train ---------------------|--- test ---|
In some cases, k-fold cross-validation is used on the entire data set if no parameter optimization is needed (this is rare, but it happens). In this case there would not be a validation set and the k parts are used as a test set one by one. The error of each of these k tests is typically averaged.
# example: k-fold cross validation
|----- test -----|------------ train --------------|
|----- train ----|----- test -----|----- train ----|
|------------ train --------------|----- test -----|
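The split in the first diagram can be sketched with the standard library alone; the index logic here is illustrative, not scikit-learn's implementation:

```python
# Sketch: hold out a test set once, then rotate a validation fold
# through the remaining training data (k = 3).
def kfold_indices(n, k):
    """Yield (train_idx, val_idx) pairs for k folds over n samples."""
    fold_sizes = [n // k + (1 if i < n % k else 0) for i in range(k)]
    start = 0
    for size in fold_sizes:
        val = list(range(start, start + size))
        val_set = set(val)
        train = [i for i in range(n) if i not in val_set]
        yield train, val
        start += size

data = list(range(12))
test = data[9:]        # final, untouched test set
train_pool = data[:9]  # used only for cross-validation

folds = list(kfold_indices(len(train_pool), 3))
print(len(folds))   # 3
print(folds[0][1])  # [0, 1, 2] -- first validation fold
```

Each sample in the training pool appears in exactly one validation fold, while the held-out test set never enters the loop, matching the diagrams above.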
{ "domain": "datascience.stackexchange", "id": 11435, "tags": "machine-learning, cross-validation" }
How to determine if a regular language L* exists
Question: I'm trying to make sense of regular languages, operations on them, and Kleene operations. Let's say I define a language with the alphabet {x, y}. Let's further say that I place the restriction that there can be no string in the language that contains the substring 'xx'. Thus, my language could be expressed as L = {y, xy, yx}, since that language conforms to the definition. Could I then argue that there is no language L* since L* could contain LL? That is, can a particular but arbitrarily chosen finite language that conforms to the definition exist, but since LL can't be in L*, L* cannot exist? Or must any L necessarily omit anything preventing L* from existing? Answer: Your confusion seems to stem from the incorrect assumption that if a language $L$ has a certain property, then $L^*$ must also have that property. Consider this simpler example. Over the 1-symbol alphabet $\{a\}$ consider the property $L$ contains no words of length more than three Then one language with this property is $L=\{a, aaa\}$. Then by the definition of the $^*$ operator $$\begin{align} L^*&=\{\epsilon\}\cup L\cup LL\cup LLL \cup\dotsm\\ &=\{\epsilon\}\cup\{a, aaa\}\cup\{aa,aaaa, aaaaaa\}\cup\dotsm\\ &=\{\epsilon,a,aa,aaa,aaaa,aaaaa,aaaaaa,\dotsc\} \end{align}$$ In other words, given this $L$, $L^*$ consists of all finite strings of $a$s, which certainly doesn't have the property that $L$ did. Now of course there are some properties of a language $L$ which are shared by $L^*$: being regular is one such, consisting of just the empty string is another, and consisting of only even-length strings is yet another.
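The answer's point can be checked concretely by enumerating LL for the question's example language:

```python
# Sketch: closure under concatenation does not preserve the
# 'no xx substring' property.
L = {"y", "xy", "yx"}

# Build L.L, the set of all pairwise concatenations.
LL = {u + v for u in L for v in L}

# 'yx' + 'xy' = 'yxxy' contains the forbidden substring, so L* (which
# includes LL) escapes the restriction -- yet L* still exists.
violators = {w for w in LL if "xx" in w}
print(sorted(violators))  # ['yxxy']
```

L* is perfectly well defined here; it simply is not a subset of the 'no xx' language, illustrating that a property of L need not carry over to L*.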
{ "domain": "cs.stackexchange", "id": 5955, "tags": "formal-languages, regular-languages, kleene-star" }
How does Melatonin put you to sleep? How is this compound radio-protective?
Question: Melatonin (IUPAC semi-formal name: N-acetyl-5-methoxytryptamine) is a hormone produced in humans, animals and some plants. It's mainly famous for being the drug suggested to people with sleeping problems, jet lag and even insomnia (in trivial cases, still experimental). Recent studies have shown this compound's role as an anti-oxidant and a protection for mitochondrial DNA. Melatonin interacts with the immune system; and is clinically proven to have anti-inflammatory effects. It can act as a radio-protective agent... Hold it there. This is chemistry, not biology. First of all, how is Melatonin radio-protective? And what is in that structure that makes it radio-protective? 2nd, what reactions occur in the body, in the presence of Melatonin, that cause sleepiness (resources have claimed the chemical function to be a mystery but I doubt that's true)? Most of the info above is available at the Wikipedia article on melatonin. (Image credit: http://cenblog.org/the-haystack/files/2011/06/melatonin.png) Answer: Melatonin is not "radio-protective" in the sense that it blocks radiation or anything. As you note, it's an antioxidant. One of the ways in which ionizing radiation can interact with the body is by providing enough energy to break chemical bonds, producing highly reactive radical species that can cause biological damage by reacting with various things in the body. Melatonin can easily be oxidized by many of these reactive species, preventing them from reacting with anything else. This paper shows the products for the reaction with radio-generated hydroxyl radicals. The role melatonin plays in sleep is fairly well known (though the effects of exogenous melatonin, less so). Without going into too much detail, there are melatonin receptors in the body, found in cell membranes in the brain and a few other places (G protein-coupled receptors).
When these bind melatonin on the outside of the cell, their conformation changes, further activating signalling processes on the inside of the cell. The effects of almost all signalling molecules in the body work in this way. The molecules activate some kind of receptor, which then activates a sequence of other signals, eventually resulting in biological effects.
{ "domain": "chemistry.stackexchange", "id": 2461, "tags": "organic-chemistry, biochemistry, drugs" }
Recursive Fibonacci generator
Question: I decided to write a function that would generate an index of the Fibonacci sequence making use of Python's mutable default parameter. It's inspired by a question I answered on Stack Overflow where someone was converting a script with this idea from JavaScript to Python and I wanted to expand on it to see how I could handle it. I wanted to basically try calling an index of the list and if it doesn't exist, try generating it from the previous two indices. However if they don't exist, then keep going back until two existing cached results are found and work off of those. I wanted to guard against errors from exceeding the recursion depth without necessarily rejecting a number. For instance, calling fib(1800) immediately exceeds (my) recursion depth, but my while loop will allow you to get the result. For values that prove too high, I reject them with a ValueError. Though this does print every line of the recursive call's error. I'd like to know how I can handle this raise better to avoid that. I'd also like to know if I could actually increase the viable number, if there's a good way to calculate what is a reasonable number better. I currently halve the recursion limit value mostly for safety but I'm sure that's not the best way. I'm also interested to know about any potential dangers of the cache getting too large and how those should be dealt with. from sys import getrecursionlimit rec = getrecursionlimit() * 0.9 def fib(n, rec=rec, memo=[0,1]): """Return the sum of the fibonacci sequence up until n rec is the recursion limit, and should be left as default. memo is the cached list of fibonacci values. raises a ValueError if your value is too high. 
If that happens, call lower values to build up the cache first.""" while (n - len(memo)) >= rec: rec /= 2 try: fib(int(len(memo) + rec), rec) except RuntimeError: raise ValueError("Input is too high to calculate.") try: return memo[n] except IndexError: # Calculate the answer since it's not stored memo.append(fib(n - 1) + fib(n - 2)) return memo[n] Answer: As pointed out, top-down recursive approaches are almost never a good idea when it comes to Fibonacci numbers. Consider the alternative iterative approach:
def fib(n):
    while len(fib.cache) <= n:
        fib.cache.append(fib.cache[-1] + fib.cache[-2])
    return fib.cache[n]
fib.cache = [0, 1]
Notice how the memoization cache is stored as an attribute of the function, which is the closest thing in Python to having a static variable inside a function.
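As a further sketch (not from the original thread), a bottom-up loop computes a single value without any recursion, sidestepping the recursion-limit concern raised for inputs like fib(1800):

```python
# Bottom-up Fibonacci: no recursion, no recursion limit, O(1) extra
# memory. It trades away the cross-call cache of the memoized version.
def fib_iter(n):
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

print(fib_iter(10))  # 55
fib_iter(1800)       # runs fine: no recursion depth is involved
```

This also addresses the question's worry about the cache growing without bound, since nothing persists between calls.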
{ "domain": "codereview.stackexchange", "id": 15318, "tags": "python, recursion, fibonacci-sequence" }
ROS QoS vs ROS2 QoS
Question: Is there anyway to have similar QoS policies to ROS (as the ones in ROS2) ? Or is this a DDS specific feature? Originally posted by KiD on ROS Answers with karma: 13 on 2020-05-30 Post score: 0 Answer: This is a DDS specific feature (and actually one of the reasons DDS was chosen as the default middleware for ROS 2). The things you can change to the ROS 1 "QoS" (in quotes, as it isn't really the same thing) are specified in the TransportHints structure. Originally posted by gvdhoorn with karma: 86574 on 2020-05-31 This answer was ACCEPTED on the original site Post score: 1
{ "domain": "robotics.stackexchange", "id": 35027, "tags": "ros" }