If gases assume the volume of the container that they are in, then wouldn't a gas that isn't enclosed in a container have an infinite volume?
Question: But if the volume was infinite for gases not enclosed in a container, then how would the ideal gas law even function? Thanks to anyone who can point out the piece of this puzzle I am not considering! Answer: Consider a case. Suppose you opened an airtight jar containing an ideal gas, and suppose that by some mysterious power you were able to observe the molecules of the gas. What would you see on opening the jar? Obviously you would see the molecules dissipating in space (the correct word is diffusing). Suppose that on opening they initially occupied 100 ml, then 200 ml, then 300 ml... Thus at every point in time the molecules occupy a finite volume; the value may be very large, but it is still finite. So we say a gas "will occupy infinite volume", but only after "infinite time".
{ "domain": "chemistry.stackexchange", "id": 12687, "tags": "gas-laws" }
Comparing pKa of hydrocarbons
Question: We have to compare their acidic strength, and we know acidic strength is directly proportional to stability of the anion. So the first compound is more acidic due to its being aromatic after deprotonation. The second most acidic is the fifth compound, because of the $\mathrm{\% s}$ character of its $\ce{C-H}$ bond. And the least acidic is the sixth compound because its anion is antiaromatic. But how would we compare the second, third and fourth compounds? Source: MS Chouhan; Advanced Problems in Organic Chemistry 11th Edition; Chapter 1 General Organic Chemistry problem 27 Answer: Nice work! You clearly understand how factors like aromaticity/antiaromaticity and % s-character in a $\ce{C-H}$ bond can affect acidity. You can use Coulson's Theorem to estimate the % s-character in the other molecules. In its general form Coulson's Theorem describes the relationship between the angle $\ce{X-Y-Z}$ in a molecule and the hybridization of the $\ce{Y-X}$ and $\ce{Y-Z}$ bonds according to the following equation: $$\mathrm{1+\lambda_{XY}\lambda_{YZ}\cos(\Theta_{XYZ})=0}$$ (where $\mathrm{\lambda}$ represents the square root of the hybridization index of a bond, e.g. $\mathrm{\lambda=\sqrt{3}}$ and the hybridization index $\mathrm{=3}$ for an $\mathrm{sp^3}$ hybridized bond). In molecule III, where the two $\ce{C-H}$ bonds are equivalent, Coulson's Theorem simplifies to $$\mathrm{1+\lambda^2\cos(117^\circ)=0}$$ and we find that $\mathrm{\lambda^2 = 2.2}$, or the $\ce{C-H}$ bond in III is $\mathrm{sp^{2.2}}$ hybridized. As the hybridization index gets smaller, there is more s-character in the $\ce{C-H}$ bond, which results in a more stable carbanion and consequently increased acidity. This is consistent with our thinking that, in general, acidity follows the order $\mathrm{sp > sp^2 > sp^3}$; or, the larger the $\ce{X-C-H}$ bond angle, the more acidic the $\ce{C-H}$ bond.
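The simplified Coulson equation above is easy to check numerically. A short sketch (the 117° angle and the resulting hybridization index are taken from the answer; the variable names are my own):

```python
import math

# Coulson's Theorem for two equivalent C-H bonds: 1 + λ² cos(θ) = 0
theta = math.radians(117)       # X-C-X angle from the answer
lam_sq = -1 / math.cos(theta)   # hybridization index λ²

print(round(lam_sq, 1))         # prints 2.2, i.e. an sp^2.2 C-H bond
```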
{ "domain": "chemistry.stackexchange", "id": 4877, "tags": "organic-chemistry, acid-base, hydrocarbons" }
Linear regression as a hylomorphism
Question: A hylomorphism consists of an anamorphism followed by a catamorphism. Is it possible to express linear regression as a hylomorphism? Answer: You can indeed see linear regression as arising from a fixed point computation, but it is better to think of it as related to transitive closure computations than to folds or unfolds. Regression is about minimising the mean squared error. A regression model has a matrix of inputs (the design matrix) $X$, a vector of outputs $Y$ and a vector of coefficients $\beta$, with a model $$ Y = X\beta + e $$ where $e$ is a vector of errors. If we view $e$ as a function of $\beta$, we can write: $$ e(\beta) = Y - X\beta $$ Then the squared error (as a function of the coefficients $\beta$) is given by the dot product $$ SE(\beta) = e^T e $$ A bit of vector calculus will tell you that $SE(\beta)$ is minimised when $$ \beta = (X^T X)^{-1} X^T Y $$ The matrix inversion in the equation above is secretly a fixed point calculation. To get an intuition for why, you need to recall two facts. First, the star in Kleene algebra (i.e., models of regular expressions) corresponds to iteration. You can see this in the characterizing equation: $$A^\ast = I + A A^\ast$$ Next, square matrices with elements valued in a Kleene algebra form a Kleene algebra. So if you think of the equation above in terms of linear algebra, you can sort of see it as saying: $$\begin{array}{lcl} A^\ast &=& I + A A^\ast \\ A^\ast - A A^\ast & = & I \\ A^\ast (I - A) & = & I \\ A^\ast &=& (I-A)^{-1} \\ \end{array}$$ So matrix inversion has a fundamental connection to the asteration operation in Kleene algebra. As another source of intuition, if you think of a square Boolean matrix as representing the edge relation of a graph, the Kleene star of the matrix represents reachability in the graph -- and graph reachability is very obviously a fixed point property. This is all quite handwavy, but these ideas have been developed rigorously. 
They were first (I think) introduced to computer science by Roland Backhouse and B.A. Carré, and were developed by Robert Tarjan in his 1979 paper A Unified Approach to Path Problems. (Tarjan also points out this means that many graph algorithms can be seen as sparse matrix computations!)
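The closed-form solution $\beta = (X^T X)^{-1} X^T Y$ is easy to sanity-check numerically. A minimal sketch on synthetic data (all names and values here are illustrative, not from the answer), comparing the normal equations against a standard least-squares solver:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))                     # design matrix
beta_true = np.array([1.0, -2.0, 0.5])
Y = X @ beta_true + 0.01 * rng.normal(size=50)   # outputs with small noise

# beta = (X^T X)^{-1} X^T Y  -- the normal equations
beta = np.linalg.solve(X.T @ X, X.T @ Y)

# agrees with a standard least-squares solver
assert np.allclose(beta, np.linalg.lstsq(X, Y, rcond=None)[0])
```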
{ "domain": "cstheory.stackexchange", "id": 5137, "tags": "linear-algebra, recursion" }
Capacitors in parallel final potential difference
Question: I was provided with the following problem. So I first calculated the total capacitance for (i), which was $$4.5 + 1.5 = 6.0\ \mu F$$ Now part (ii) is the question I'm struggling with. I know that the charge on the $4.5\ \mu F$ capacitor is $6.3 \times 4.5 = 28.35\ \mu C$. So why is the p.d. across both capacitors equal to $$V = \frac{Q}{C} = \frac{28.35\times 10^{-6}}{(4.5+1.5)\times 10^{-6}} = 4.7\ \mathrm{V}$$ I understand that the charge has to be the same on both plates, but why do you add the capacitance values together? Wouldn't the different values of capacitance (C) mean that $V = \frac{Q}{C}$ is different across each capacitor? Answer: If the circuit is drawn differently, it can be observed that the capacitors are actually in parallel. Hence the equation is correct, assuming the capacitors are set up as standard in the circuit above.
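The charge-conservation argument in the question can be checked in a few lines (a sketch using the values from the problem):

```python
C1, C2 = 4.5e-6, 1.5e-6   # capacitances in farads
V0 = 6.3                  # initial p.d. across C1, in volts

Q = C1 * V0               # charge on C1 before connection: 28.35 µC, conserved
V = Q / (C1 + C2)         # shared p.d. once the capacitors are in parallel: 4.725 ≈ 4.7 V
```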
{ "domain": "physics.stackexchange", "id": 41067, "tags": "homework-and-exercises, electric-circuits, capacitance, batteries" }
Rotate 3d vector value into a single axis using a rotation quaternion
Question: I want to rotate the whole value of a 3d vector into one axis using quaternion rotations. The reason behind this is that I want to align the X and Y axes of my smartphone with the X and Y axes of my vehicle in order to detect lateral and longitudinal acceleration separately on these two axes. Therefore I want to detect the first straight acceleration of the car and rotate the whole acceleration value into the heading axis (X-axis) of the phone, assuming straight forward motion. How do I achieve this? Answer: Use the average X,Y acceleration as the acceleration vector. Get the angle between this vector and (1,0). Create a rotation quaternion about the vertical axis from this angle, something like (cos(angle/2), 0, 0, -sin(angle/2)). Multiply all future acceleration readings by this quaternion.
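The answer's recipe is terse, so here is a hedged sketch of what I understand it to mean: compute the heading angle of the measured acceleration, build a quaternion for a rotation of minus that angle about the vertical (z) axis, and rotate all subsequent readings by it. All names and values here are illustrative:

```python
import numpy as np

def quat_mul(q, r):
    # Hamilton product of two quaternions (w, x, y, z)
    w1, x1, y1, z1 = q
    w2, x2, y2, z2 = r
    return np.array([
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 - x1*z2 + y1*w2 + z1*x2,
        w1*z2 + x1*y2 - y1*x2 + z1*w2])

def rotate(q, v):
    # rotate vector v by unit quaternion q: v' = q v q*
    qv = np.array([0.0, v[0], v[1], v[2]])
    qc = q * np.array([1.0, -1.0, -1.0, -1.0])
    return quat_mul(quat_mul(q, qv), qc)[1:]

a = np.array([3.0, 4.0, 0.0])        # measured acceleration in the phone frame
angle = np.arctan2(a[1], a[0])       # heading of the acceleration vector
# rotation by -angle about z, i.e. (cos(angle/2), 0, 0, -sin(angle/2))
q = np.array([np.cos(angle / 2), 0.0, 0.0, -np.sin(angle / 2)])
aligned = rotate(q, a)               # the whole magnitude lands on the x-axis: (5, 0, 0)
```

The same quaternion q would then be applied to every later accelerometer sample to separate longitudinal (x) from lateral (y) acceleration.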
{ "domain": "robotics.stackexchange", "id": 1099, "tags": "sensors, accelerometer, rotation" }
Relative acceleration with pulleys
Question: I have tried this question every way I can think of, but in the equation for particle $L$, $g$ cancels every time. Could someone show me how to do it correctly or tell me what I am doing wrong? Thanks. The Question: A string with negligible mass passes over a smooth pulley $V$ with a particle $A$ of mass $18\ \mathrm{kg}$ on one end of the string and a pulley $J$ of negligible mass on the other end. Another string with negligible mass passes over pulley $J$ and has a particle $K$ of mass $12\ \mathrm{kg}$ on one end and a particle $L$ of mass $9\ \mathrm{kg}$ on the other end. Show the common acceleration of $A$ and $J$, then show the relative acceleration of $K$ and $L$ to $J$. So far I have worked out that the tension in the top string is equal to twice the tension in the bottom string: since pulley $J$ is massless, $T-2S=0$, so $$T=2S$$ I then plug this into $18g-T=18a$ to get $18g-2S=18a$. From that equation I get $S=9g-9a$. After that I plug the value of $S$ into the equations for $K$ and $L$; then in the equation for $L$, $g$ is cancelled out and I am stuck. Answer: The accelerations of K and L will be different from one another and also from the particle A. To solve this, let $a_p$ be the acceleration of the pulley J and $a_r$ the relative acceleration of the particles K and L in the frame of reference of the pulley J. Then the net acceleration of the particle K will be $a_p-a_r$ and that of particle L will be $a_p+a_r$. Now apply the equations for the particles K and L and the acceleration term won't cancel out. (You can interchange the accelerations of the two particles; the only change it will bring is the sign of the answer, and that will tell you the actual direction of the particles K and L.) For K: $$S-12g=12(a_p-a_r)$$ and for L: $$S-9g=9(a_p+a_r)$$ Solve these and you will get your answer.
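The answer's equations, together with the asker's equations for A and for the massless pulley J, form a linear system that can be solved directly. A sketch (the matrix layout and variable names are my own; signs follow the conventions in the answer):

```python
import numpy as np

g = 9.8
# Unknowns: T, S, a_p, a_r
#   T - 2S = 0                 (massless pulley J)
#   18g - T = 18 a_p           (particle A / pulley J)
#   S - 12g = 12 (a_p - a_r)   (particle K)
#   S -  9g =  9 (a_p + a_r)   (particle L)
A = np.array([[1.0, -2.0,   0.0,   0.0],
              [1.0,  0.0,  18.0,   0.0],
              [0.0,  1.0, -12.0,  12.0],
              [0.0,  1.0,  -9.0,  -9.0]])
b = np.array([0.0, 18 * g, 12 * g, 9 * g])

T, S, a_p, a_r = np.linalg.solve(A, b)
# a_p = -g/15 and a_r = 2g/15; the negative sign just means the assumed
# direction for a_p was the wrong one, as the answer's parenthetical notes.
```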
{ "domain": "physics.stackexchange", "id": 9105, "tags": "homework-and-exercises, newtonian-mechanics, acceleration, relative-motion" }
How to create a unitary matrix from a circuit
Question: I've been trying to find a way to analytically find the unitary matrix of a circuit, but I can't find the resources to do so. How can I do so? Answer: I will provide a short, incomplete answer on how to do it, but also refer you to resources that I would highly recommend you look at for a more complete picture. This will be an important skill for you to learn, so I would recommend taking your time to master it. How to compute the unitary matrix of a circuit: Suppose a quantum circuit of $n$ qubits has $g$ unitary gates. Label these gates $U_1, U_2, \dots, U_g$, in the order they occur in the circuit (if some gates occur simultaneously, it doesn't matter how you order those ones). Each of these gates has a representation as a $2^n \times 2^n$ unitary. To get $U$, just multiply these in reverse order: $U = U_g U_{g-1} \dots U_1$. The reversal comes from the way functions are composed in standard math notation. So how do you get the matrices $U_i$ in the first place? If it's a single-qubit gate, you can take the tensor product of the matrix for the single qubit with the identity on the rest of them, minding the ordering. Or, for any gate, you can look at how it acts on computational basis states. Resources: If any of the above seems confusing or leaves you with further questions, I would recommend the following resources to develop a strong foundation: The Understanding Quantum Information and Computation course has video lectures and written material. It's free, available online, and at an introductory level, assuming background in linear algebra and complex numbers. By the end of Lesson 3, you should have the tools you need to answer your own question. More advanced, but considered the standard resource, is Nielsen and Chuang's Quantum Computation and Quantum Information. Check out especially Chapters 2 and 4.
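As a small illustration of the multiply-in-reverse-order rule, here is a sketch for a hypothetical two-qubit circuit (a Hadamard on the first qubit followed by a CNOT, i.e. the standard Bell-state circuit; this example is mine, not from the answer):

```python
import numpy as np

I = np.eye(2)
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])

U1 = np.kron(H, I)   # gate 1: H on qubit 0, identity on qubit 1
U2 = CNOT            # gate 2: CNOT with qubit 0 as control

U = U2 @ U1          # reverse order: the last gate applied is leftmost

# U is unitary and maps |00> to the Bell state (|00> + |11>)/sqrt(2)
assert np.allclose(U.conj().T @ U, np.eye(4))
ket00 = np.array([1, 0, 0, 0])
assert np.allclose(U @ ket00, np.array([1, 0, 0, 1]) / np.sqrt(2))
```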
{ "domain": "quantumcomputing.stackexchange", "id": 4750, "tags": "unitarity" }
What elements and/or substances without water are liquid at room temperature?
Question: I was thinking about liquids, and I started to wonder about these related questions: 1) Besides mercury, what elements are naturally liquid at room temperature? 2) What naturally found substances/mixtures that do not contain $\ce{H2O}$ are liquid at room temperature? Answer: This question is a bit broad in terms of the sheer number of chemical compounds and mixtures that are liquid at room temperature. Examples include: Compounds: acids, bases, many hydrocarbons (e.g. hexane) and many more. Mixtures: crude oil, aqua regia and many more. In terms of elements, there are only two that are liquid at room temperature (say about 20 °C or 293 K): mercury (as you identified) and bromine. Francium, cesium, gallium and rubidium are close, with melting points at 300 K, 301.59 K, 303.3 K and 312.46 K respectively. LennTech provides a list: The elements of the periodic table sorted by melting point.
{ "domain": "chemistry.stackexchange", "id": 3350, "tags": "solutions, phase" }
(BASIC) Deriving Hamiltonian from equations of motion
Question: Suppose that some particle can be described by a system of differential equations with respect to displacement and time. Is there a general procedure for extracting the Hamiltonian of such a system? Answer: Nope. Many systems' equations of motion are not equivalent to Hamilton's equations for any Hamiltonian, so there is no Hamiltonian formulation for such systems. Search "non-Hamiltonian system" for examples. Such systems usually include friction, dissipation, or something else that violates conservation of energy. (However, some systems with friction or dissipation can still be described by a Hamiltonian that depends explicitly on time, or has position-momentum cross terms - e.g. http://www.hep.princeton.edu/~mcdonald/examples/damped.pdf.) Even if you do somehow know that your equations of motion correspond to some Hamiltonian, I do not believe there is any known general procedure for reconstructing that Hamiltonian, unless of course your equations of motion are simple, like $\dot{q} = p / m,\ \dot{p} = -dV(q)/dq$.
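For the simple case mentioned at the end, the reconstruction is direct: match the given equations of motion to Hamilton's equations and integrate,

$$\dot{q} = \frac{p}{m} = \frac{\partial H}{\partial p}, \qquad \dot{p} = -\frac{dV}{dq} = -\frac{\partial H}{\partial q} \quad\Longrightarrow\quad H(q,p) = \frac{p^2}{2m} + V(q),$$

up to an additive constant.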
{ "domain": "physics.stackexchange", "id": 43654, "tags": "hamiltonian-formalism, differential-equations" }
ROS on Ubuntu XX.10 versions
Question: Is ROS installation supported on Ubuntu xx.10 (16.10, 18.10, 20.10) versions? Originally posted by electrophod on ROS Answers with karma: 277 on 2020-12-12 Post score: 0 Answer: (You tagged this melodic and kinetic, so I'll answer for ROS 1, but the same would apply to ROS 2.) The answer will depend a bit on what you mean by "is ROS installation supported". If you are asking whether you can use apt to install ROS using binary packages provided by the ROS buildfarm, then: no, ROS binary packages are typically only provided for Ubuntu LTS releases. If your question was actually whether it would be possible to install ROS by whatever means, then: maybe. You could try building it from source. This is a maybe, as it's possible you'll run into issues with dependencies having changed between the officially supported OS and your target OS, which may require patches to the sources. For information on which specific platform + OS combinations are supported, see REP-3: Target Platforms for ROS 1 and REP-2000: ROS 2 Releases and Target Platforms for ROS 2. Each ROS release lists the Ubuntu (and other OS) versions for which official support is provided. To run ROS applications on an OS which is not officially supported, Docker is often used. Originally posted by gvdhoorn with karma: 86574 on 2020-12-13 This answer was ACCEPTED on the original site Post score: 2
{ "domain": "robotics.stackexchange", "id": 35865, "tags": "ros, ros-melodic, installation, ros-kinetic, ubuntu" }
Dataset with some mislabeled data (around 1%)
Question: I have a dataset with around 1% of the data mislabeled. It is a multi-label problem and I want to find a way to correct those incorrect labels. Assuming that the amount of mislabeled data is low, I divided the dataset into train/test and trained a classifier, taking care that the classifier does not overfit. Once I knew that the accuracy on the test set was as high as possible, I evaluated the whole dataset using the classifier, and the result is a new set of labels which I assume are the corrected labels. Is this the correct approach to solve a problem like this? Answer: I think it's a reasonable approach, but currently it seems that you have no way to check whether the new labels are correct or not. You should at least check that the new labels don't introduce more errors than they fix. Ideally you would re-annotate a random sample of instances, keeping both the old (possibly erroneous) labels and the new ones. Then you can use this sample as a test set and evaluate the two following points: most/all instances for which the new label is the same as the old label should be predicted with this label (otherwise it means your method changes correct labels); most instances for which the new label is different from the old label should be predicted with the new label (otherwise it means your method doesn't fix the wrong labels). The problem with this approach is that you need to annotate a large sample, since you need a reasonable number of wrong labels, which are only present in 1% of the data. If re-annotating a large sample is not possible, you could try a kind of bootstrapping approach: run your method, then take a sample of instances whose predicted label differs from the old label. Among these label changes, count how many are correct. This approach requires less manual annotation effort since you don't need a large random sample; however, it would miss the cases of a wrong label that is not changed by the classifier.
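A sketch of the asker's approach on synthetic data, using scikit-learn (all names and parameters here are illustrative assumptions, not from the post). Out-of-fold predictions stand in for the train/test split, so that no instance is predicted by a model that saw its possibly wrong label at training time; the instances where the prediction disagrees with the given label are the candidates one would re-annotate, as the answer suggests:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_predict

# synthetic data with ~1% of labels flipped to simulate mislabeling
X, y_true = make_classification(n_samples=1000, random_state=0)
y_given = y_true.copy()
flip = np.random.default_rng(0).choice(len(y_given), size=10, replace=False)
y_given[flip] = 1 - y_given[flip]

# out-of-fold predictions: the proposed "corrected" labels
y_new = cross_val_predict(LogisticRegression(max_iter=1000), X, y_given, cv=5)

# candidate corrections = instances where the prediction disagrees
disagree = np.flatnonzero(y_new != y_given)
```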
{ "domain": "datascience.stackexchange", "id": 11447, "tags": "machine-learning, classification" }
Multiple nodelets in the same package
Question: I am trying to create a package that performs multiple computer vision tasks. I want to nodeletize both tasks to take advantage of zero-copy passing of the images. Working off the examples below, I have been able to create a single nodelet successfully, but am running into naming issues when trying to catkin_make my package with a second nodelet. http://www.clearpathrobotics.com/assets/guides/ros/Nodelet%20Everything.html https://github.com/ros/common_tutorials/tree/indigo-devel/nodelet_tutorial_math http://wiki.ros.org/nodelet/Tutorials http://wiki.ros.org/usb_cam I was surprised that I couldn't find any examples of packages with multiple nodelets to reference. Is this because it's bad practice? If not, does anyone have an example of what the CMakeLists.txt and nodelets.xml files should look like for multiple nodelets? Originally posted by shoemakerlevy9 on ROS Answers with karma: 545 on 2018-08-29 Post score: 3 Answer: One thing that helps me is to write each nodelet/node so that it can be built either (or indeed both!) ways. Put all your functionality into a class with a constructor that takes both the public and private node handles. 
We then have this template that generates a thin interface class between the Nodelet code and our implementation:

namespace nodelet_helper {
  template<typename T>
  class TNodelet : public nodelet::Nodelet {
  public:
    TNodelet() {};
    void onInit() {
      NODELET_DEBUG("Initializing nodelet");
      m_theT = std::unique_ptr<T>(new T(getNodeHandle(), getPrivateNodeHandle()));
    }
  private:
    std::unique_ptr<T> m_theT;
  };
} // End nodelet_helper namespace

Then, to use it we have code like this:

namespace my_vision_tasks {
  using VisionTaskOneNodelet = nodelet_helper::TNodelet<VisionTaskOne>;
  using VisionTaskTwoNodelet = nodelet_helper::TNodelet<VisionTaskTwo>;
}

PLUGINLIB_EXPORT_CLASS(my_vision_tasks::VisionTaskOneNodelet, nodelet::Nodelet)
PLUGINLIB_EXPORT_CLASS(my_vision_tasks::VisionTaskTwoNodelet, nodelet::Nodelet)

Finally, the XML is:

<library path="lib/libstructure_from_motion_v2_nodelet">
  <class name="my_vision_tasks/VisionTaskOneNodelet" type="my_vision_tasks::VisionTaskOneNodelet" base_class_type="nodelet::Nodelet">
    <description>Task one</description>
  </class>
  <class name="my_vision_tasks/VisionTaskTwoNodelet" type="my_vision_tasks::VisionTaskTwoNodelet" base_class_type="nodelet::Nodelet">
    <description>Task two</description>
  </class>
</library>

Originally posted by KenYN with karma: 541 on 2018-08-29 This answer was ACCEPTED on the original site Post score: 0
{ "domain": "robotics.stackexchange", "id": 31675, "tags": "ros-kinetic, nodelet" }
Initializing metadata and then updating it from a background thread
Question: I am working on creating a scheduler which will connect to a Cassandra database, extract data from a few tables and store it in a few variables. From my main thread I will then use those variables by calling getters on them from the CassUtil class. Basically, I cache the result in memory, and a single background thread that runs every 15 minutes keeps updating the cache. Here is my code, which makes a connection to the Cassandra cluster and then loads stuff into the variables processMetadata, procMetadata and topicMetadata. I then call getters on these three variables from my main thread to get the data:

public class CassUtil {
  private static final Logger LOGGER = Logger.getInstance(CassUtil.class);

  private final ScheduledExecutorService scheduler = Executors.newScheduledThreadPool(1);

  private List<ProcessMetadata> processMetadata = new ArrayList<>();
  private List<ProcMetadata> procMetadata = new ArrayList<>();
  private List<String> topicMetadata = new ArrayList<>();

  private Session session;
  private Cluster cluster;

  private static class Holder {
    private static final CassUtil INSTANCE = new CassUtil();
  }

  public static CassUtil getInstance() {
    return Holder.INSTANCE;
  }

  private CassUtil() {
    List<String> servers = TestUtils.HOSTNAMES;
    String username = TestUtils.loadCredentialFile().getProperty(TestUtils.USERNAME);
    String password = TestUtils.loadCredentialFile().getProperty(TestUtils.PASSWORD);
    PoolingOptions opts = new PoolingOptions();
    opts.setCoreConnectionsPerHost(HostDistance.LOCAL, opts.getCoreConnectionsPerHost(HostDistance.LOCAL));
    Builder builder = Cluster.builder();
    cluster = builder
        .addContactPoints(servers.toArray(new String[servers.size()]))
        .withRetryPolicy(DowngradingConsistencyRetryPolicy.INSTANCE)
        .withPoolingOptions(opts)
        .withReconnectionPolicy(new ConstantReconnectionPolicy(100L))
        .withLoadBalancingPolicy(
            DCAwareRoundRobinPolicy
                .builder()
                .withLocalDc(
                    !TestUtils.isProduction()
                        ? "DC2"
                        : TestUtils.getCurrentLocation().get().name().toLowerCase())
                .build())
        .withCredentials(username, password).build();

    try {
      session = cluster.connect("testkeyspace");
      StringBuilder sb = new StringBuilder();
      Set<Host> allHosts = cluster.getMetadata().getAllHosts();
      for (Host host : allHosts) {
        sb.append("[");
        sb.append(host.getDatacenter());
        sb.append(host.getRack());
        sb.append(host.getAddress());
        sb.append("]");
      }
      LOGGER.logInfo("CONNECTED SUCCESSFULLY TO CASSANDRA CLUSTER: " + sb.toString());
    } catch (NoHostAvailableException ex) {
      LOGGER.logError("error= ", ExceptionUtils.getStackTrace(ex));
    } catch (Exception ex) {
      LOGGER.logError("error= " + ExceptionUtils.getStackTrace(ex));
    }
  }

  // start a background thread which runs every 15 minutes
  public void startScheduleTask() {
    scheduler.scheduleAtFixedRate(new Runnable() {
      public void run() {
        try {
          processMetadata = processMetadata(true);
          topicMetadata = listOfTopic(TestUtils.GROUP_ID);
          procMetadata = procMetadata();
        } catch (Exception ex) {
          LOGGER.logError("error= ", ExceptionUtils.getStackTrace(ex));
        }
      }
    }, 0, 15, TimeUnit.MINUTES);
  }

  // called from main thread to initialize the metadata
  // and start the background thread
  public void initializeMetadata() {
    processMetadata = processMetadata(true);
    topicMetadata = listOfTopic(TestUtils.GROUP_ID);
    procMetadata = procMetadata();
    startScheduleTask();
  }

  public List<String> listOfTopic(final String consumerName) {
    List<String> listOfTopics = new ArrayList<>();
    String sql = "select topics from topic_metadata where id=1 and consumerName=?";
    try {
      // get data from cassandra
    } catch (Exception ex) {
      LOGGER.logError("error= ", ExceptionUtils.getStackTrace(ex), ", Consumer Name= ", consumerName);
    }
    return listOfTopics;
  }

  public List<ProcessMetadata> processMetadata(final boolean flag) {
    List<ProcessMetadata> metadatas = new ArrayList<>();
    String sql = "select * from process_metadata where id=1 and is_active=?";
    try {
      // get data from cassandra
    } catch (Exception ex) {
      LOGGER.logError("error= ", ExceptionUtils.getStackTrace(ex), ", active= ", flag);
    }
    return metadatas;
  }

  public List<ProcMetadata> procMetadata() {
    List<ProcMetadata> metadatas = new ArrayList<>();
    String sql = "select * from schema where id=1";
    try {
      // get data from cassandra
    } catch (SchemaParseException ex) {
      LOGGER.logError("schema parsing error= ", ExceptionUtils.getStackTrace(ex));
    } catch (Exception ex) {
      LOGGER.logError("error= ", ExceptionUtils.getStackTrace(ex));
    }
    return metadatas;
  }

  public void shutdown() {
    LOGGER.logInfo("Shutting down the whole cassandra cluster");
    if (null != session) {
      session.close();
    }
    if (null != cluster) {
      cluster.close();
    }
  }

  public Session getSession() {
    if (session == null) {
      throw new IllegalStateException("No connection initialized");
    }
    return session;
  }

  public Cluster getCluster() {
    return cluster;
  }

  public List<ProcessMetadata> getProcessMetadata() {
    return processMetadata;
  }

  public List<String> getTopicMetadata() {
    return topicMetadata;
  }

  public List<ProcMetadata> getProcMetadata() {
    return procMetadata;
  }
}

And here is my initializer code, which calls the initializeMetadata() method. I am using Spring here:

@Singleton
@DependencyInjectionInitializer
public class TestInitializer {
  public TestInitializer() {
    LOGGER.logInfo("Initializer called.");
    CassUtil.getInstance().initializeMetadata();
  }

  @PostConstruct
  public void postInit() {
    LOGGER.logInfo("PostInit called");
    // doing some stuff
    // accessing those three variables by calling getters on them from the CassUtil class
  }

  @PreDestroy
  public void shutdown() {
    LOGGER.logInfo("Shutdown called");
    // doing some stuff
  }
}

I wanted to see if my CassUtil class can be improved in any way. My main idea is to access the processMetadata, procMetadata and topicMetadata variables from the main thread without calling Cassandra every time; instead the data should be loaded from a cache that is updated every 15 mins. 
So I need a background thread which runs every 15 minutes, extracts data from the Cassandra tables and populates these variables, which I can then use from the main thread. Is there a better way? I am using Java 7. Answer: Suggestions for your solution. I have a few suggestions for how I would improve it. By the way, it's a really good job, what you have done.

1. Externalize the SQL queries

String sql = "select * from schema where id=1";

In my opinion there should be as few hardcoded strings in the code as possible. When you need to change the table name from schema to new_schema you will have to recompile the whole project. That's OK for one or two times, but what about 20 times? I would pull my hair out :).

2. Refactor the shutdown method

I would rather vote for:

if (null != cluster) {
    cluster.close(); // closing the Cluster also closes the Sessions created from it
}

3. Constructor

The CassUtil constructor has 40+ lines; I think you should break it into a few methods. At least building the cluster is a candidate for a private method.

4. Rename methods to follow the JavaBeans naming convention

listOfTopic -> getTopics
processMetadata -> getProcessMetadata
...

Tip #1: Since you are using Spring, I would think about making CassUtil a Spring-managed bean. (If you are using CassUtil in non-Spring components, you can skip the following.)

@Configuration
public class AppConfig {

    // by default a @Bean is in singleton scope, so you can delete Holder
    // and make the CassUtil constructor public
    @Bean
    public CassUtil cassUtil() {
        return new CassUtil();
    }
}

With that you are able to inject it (@Autowired private CassUtil cassandraUtil) wherever you want (in a Spring component, of course). I would create a new Spring @Component, say CassandraProcessor, and put all the service methods (the getters) there, along with the scheduled method (don't forget @EnableScheduling):

@Scheduled(fixedRate = 900000)
public void startScheduleTask() {
    processMetadata = processMetadata(true);
    topicMetadata = listOfTopic(TestUtils.GROUP_ID);
    procMetadata = procMetadata();
}

This is executed on startup of the Spring application, so I suppose you don't need initializeMetadata at all. After implementing this approach, when you need listOfTopic you just @Autowired private CassandraProcessor cassandraProcessor and call cassandraProcessor.listOfTopic().

Tip #2: Another option is to use Ehcache or a similar cache manager with a cache loader; then you wouldn't have to manage the scheduling at all.
{ "domain": "codereview.stackexchange", "id": 22798, "tags": "java, cassandra" }
My Kinect camera stopped working (no device detected)
Question: Update: Assuming there are no USB 2.0 to 3.0 adapters (since 3.0 was supposed to be backwards compatible), does Ubuntu 13.10's updated kernel offer any hope? I am running 13.04. Update: I tried reinstalling everything and then running freenect_launch. I first get the waiting message below, then I plug the device in and get what follows. The USB ports are 3.0. [ INFO] [1390668700.050552116]: No devices connected.... waiting for devices to be connected [ INFO] [1390668703.054328712]: Number devices connected: 1 [ INFO] [1390668703.054588759]: 1. device on bus 000:00 is a Xbox NUI Camera (2ae) from Microsoft (45e) with serial id '0000000000000000' [ INFO] [1390668703.056447295]: Searching for device with index = 1 [ INFO] [1390668703.095462065]: No matching device found.... waiting for devices. Reason: [ERROR] Unable to open specified kinect Update: There seems to be some discussion about a shared library causing some issues; I looked for it on my machine and it looks a bit confused. Any thoughts on this? sudo glview glview: error while loading shared libraries: libfreenect.so.0.2: cannot open shared object file: No such file or directory viki@viki:~$ glview Kinect camera test Number of devices found: 1 Could not open device sudo find / -name libfreenect.so -ls 15341498 0 lrwxrwxrwx 1 viki viki 18 Jan 11 13:08 /home/viki/catkin_ws/src/KinectLibs/libfreenect/build/lib/fakenect/libfreenect.so -> libfreenect.so.0.2 12716955 0 lrwxrwxrwx 1 viki viki 18 Jan 11 13:08 /home/viki/catkin_ws/src/KinectLibs/libfreenect/build/lib/libfreenect.so -> libfreenect.so.0.2 11141534 0 lrwxrwxrwx 1 root root 18 Oct 9 21:08 /opt/ros/hydro/lib/fakenect/libfreenect.so -> libfreenect.so.0.1 10883699 0 lrwxrwxrwx 1 root root 18 Oct 9 21:08 /opt/ros/hydro/lib/libfreenect.so -> libfreenect.so.0.1 9438521 0 lrwxrwxrwx 1 root root 18 May 25 2012 /usr/lib/x86_64-linux-gnu/fakenect/libfreenect.so -> libfreenect.so.0.1 7355096 0 lrwxrwxrwx 1 root root 18 May 25 2012 /usr/lib/x86_64-linux-gnu/libfreenect.so -> libfreenect.so.0.1 7480477 0 
lrwxrwxrwx 1 root root 18 Jan 11 13:08 /usr/local/lib64/fakenect/libfreenect.so -> libfreenect.so.0.2 7357524 0 lrwxrwxrwx 1 root root 18 Jan 11 13:08 /usr/local/lib64/libfreenect.so -> libfreenect.so.0.2 viki@viki:~$ Update: There is a CD with my Kinect. Is there anything useful on it to address this issue? Update: If I unplug the device and run glview again I get: Kinect camera test Number of devices found: 0 From what I can see this is a known problem with the Linux kernel since 2012, but little has been done to address it. I tried several solutions in the blogs but none seem to work. Update: I tried the following utility: freenect-glview Kinect camera test Number of devices found: 1 Could not open device Update: lsusb -v Bus 003 Device 021: ID 045e:02ae Microsoft Corp. Xbox NUI Camera Device Descriptor: bLength 18 bDescriptorType 1 bcdUSB 2.00 bDeviceClass 0 (Defined at Interface level) bDeviceSubClass 0 bDeviceProtocol 0 bMaxPacketSize0 64 idVendor 0x045e Microsoft Corp. idProduct 0x02ae Xbox NUI Camera bcdDevice 2.05 iManufacturer 2 Microsoft iProduct 1 Xbox NUI Camera iSerial 3 0000000000000000 bNumConfigurations 1 Configuration Descriptor: bLength 9 bDescriptorType 2 wTotalLength 32 bNumInterfaces 1 bConfigurationValue 1 iConfiguration 0 bmAttributes 0xc0 Self Powered MaxPower 16mA Interface Descriptor: bLength 9 bDescriptorType 4 bInterfaceNumber 0 bAlternateSetting 0 bNumEndpoints 2 bInterfaceClass 255 Vendor Specific Class bInterfaceSubClass 0 bInterfaceProtocol 255 iInterface 0 Endpoint Descriptor: bLength 7 bDescriptorType 5 bEndpointAddress 0x81 EP 1 IN bmAttributes 1 Transfer Type Isochronous Synch Type None Usage Type Data wMaxPacketSize 0x0bc0 2x 960 bytes bInterval 1 Endpoint Descriptor: bLength 7 bDescriptorType 5 bEndpointAddress 0x82 EP 2 IN bmAttributes 1 Transfer Type Isochronous Synch Type None Usage Type Data wMaxPacketSize 0x0bc0 2x 960 bytes bInterval 1 Device Qualifier (for other device speed): bLength 10 bDescriptorType 6 bcdUSB 2.00 
bDeviceClass 0 (Defined at Interface level) bDeviceSubClass 0 bDeviceProtocol 0 bMaxPacketSize0 64 bNumConfigurations 1 Device Status: 0x0001 Self Powered Update: Is there anyway to further debug this problem of openni not recognizing my device plugged in via the USB port. Or is there some kind of work around? Update: as requested. (http://answers.ros.org/upfiles/13892061651366488.png) viki@viki:~$ roswtf Loaded plugin tf.tfwtf No package or stack in context ================================================================================ Static checks summary: Found 2 warning(s). Warnings are things that may be just fine, but are sometimes at fault WARNING You are missing core ROS Python modules: rosrelease -- WARNING You are missing Debian packages for core ROS Python modules: rosrelease (python-rosrelease) -- Found 2 error(s). ERROR Not all paths in ROS_PACKAGE_PATH [/home/viki/catkin_ws/src:/opt/ros/hydro/share:/opt/ros/hydro/stacks] point to an existing directory: * /opt/ros/hydro/stacks ERROR Not all paths in PYTHONPATH [/home/viki/catkin_ws/devel/lib/python2.7/dist-packages:/opt/ros/hydro/lib/python2.7/dist-packages] point to a directory: * /home/viki/catkin_ws/devel/lib/python2.7/dist-packages ================================================================================ Beginning tests of your ROS graph. These may take awhile... analyzing graph... ... done analyzing graph running graph rules... ... done running graph rules running tf checks, this will take a second... ... tf checks complete Online checks summary: No errors or warnings Update tried recommendation in comment. I even tried putting in leading zeros nothing seems to work. roslaunch openni_launch openni.launch device_id:=3@0 Still getting device not connected. [ INFO] [1389205418.542387296]: No devices connected.... waiting for devices to be connected Update: I tried putting in 3@4 for the device number but it says then it can not find device 3@4. 
[ INFO] [1389131675.359641075]: No matching device found.... waiting for devices. Reason: std::string openni2_wrapper::OpenNI2Driver::resolveDeviceURI(const string&) @ /tmp/buildd/ros-hydro-openni2-camera-0.1.1-0raring-20131113-2050/src/openni2_driver.cpp @ 623 : Invalid device number 1, there are 0 devices connected I was using my kinetic camera and it worked fine with rviz. But a few months later not sure what changed openni_launch says (no device detected) If I run openni2_launch it says (Invalid device number, 0 devices connected. When I run lsusb I get the following output: Any thoughts on how to proceed. Not sure what has changed since I am using the same launch file as installed. Bus 001 Device 002: ID 8087:0024 Intel Corp. Integrated Rate Matching Hub Bus 002 Device 002: ID 8087:0024 Intel Corp. Integrated Rate Matching Hub Bus 003 Device 002: ID 045e:02c2 Microsoft Corp. Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub Bus 002 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub Bus 003 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub Bus 004 Device 001: ID 1d6b:0003 Linux Foundation 3.0 root hub Bus 001 Device 003: ID 05ac:8509 Apple, Inc. FaceTime HD Camera Bus 002 Device 003: ID 0424:2513 Standard Microsystems Corp. Bus 003 Device 003: ID 045e:02ad Microsoft Corp. Xbox NUI Audio Bus 003 Device 004: ID 045e:02ae Microsoft Corp. Xbox NUI Camera Bus 002 Device 009: ID 05ac:821d Apple, Inc. Bus 002 Device 004: ID 0a5c:4500 Broadcom Corp. BCM2046B1 USB 2.0 Hub (part of BCM2046 Bluetooth) Bus 002 Device 005: ID 05ac:8242 Apple, Inc. IR Receiver [built-in] Bus 002 Device 006: ID 05ac:0252 Apple, Inc. Internal Keyboard/Trackpad (ANSI) Originally posted by rnunziata on ROS Answers with karma: 713 on 2014-01-07 Post score: 1 Answer: It sounds like you have USB 3.0 problems, since you can't use the Kinect on a USB 3.0 port (in general, sometimes it just works, sometimes it doesn't). Is it possible to disable USB 3.0 in your BIOS on a Mac? 
If not, try changing UsbInterface in /etc/openni/GlobalDefaults.ini to some other value (just try the four other options).

Originally posted by Tim Sweet with karma: 267 on 2014-01-25
This answer was ACCEPTED on the original site
Post score: 3

Original comments

Comment by Athoesen on 2014-01-25: Oh wow, I didn't even see that the first time. But he said that it had worked before, I think. Anyways, yes: 3.0 = no bueno.

Comment by Tim Sweet on 2014-01-25: Yeah, I don't know why it would suddenly stop working... that happened to me once, I never found out the root problem, but it was fixed by changing the UsbInterface field. Maybe some kind of kernel update?

Comment by rnunziata on 2014-01-25: The USB line from my Mac GlobalDefaults.ini: ; USB interface to be used. 0 - FW Default, 1 - ISO endpoints, 2 - BULK endpoints. Default: Arm - 2, other platforms - 1. Any one look better than the others?

Comment by Tim Sweet on 2014-01-25: A USB 3.0 to 2.0 adapter won't work, I've tried that with a USB 2.0 hub. I suggest just trying each one of those interfaces; the GlobalDefaults file is loaded when OpenNI is run (i.e. with roslaunch openni_launch or your own variant), so just modify the file, try to start it, repeat.

Comment by Tim Sweet on 2014-01-25: Your update with the most recent output tells me the interface (from GlobalDefaults.ini) might be the problem: it seems able to identify the camera on the bus, but won't get its serial number, so it can't talk to the camera. Can you disable USB 3.0 in OpenFirmware (see: http://goo.gl/XJpAvQ)?

Comment by rnunziata on 2014-01-26: Thanks Tim. I have been scanning for any info on how to turn USB off/on in the BIOS. Not something I normally do, and it would be great to see instructions or a demo. But I cannot seem to find anything. Any idea where I can look?

Comment by Tim Sweet on 2014-01-26: I've never owned a Mac so I have no clue... I only use one for video editing :) I'll ask around in my lab and see if anyone knows. It might not be possible.
Did the UsbInterface thing not work?

Comment by rnunziata on 2014-01-26: I came across this, which is encouraging but does not tell me how to do this on a Mac: http://rog.asus.com/forum/showthread.php?37159-How-do-I-disable-or-delete-the-USB-3.0-drivers. Thank you.
{ "domain": "robotics.stackexchange", "id": 16599, "tags": "ros, kinect, openni, openni2" }
Joint world pose for each step of simulation
Question: How would I obtain joint world pose data for each step in a simulation? I have a model plugin that can get the joint position before the simulation is run, but how do I get the joint world pose for each simulation step? Thanks!

Originally posted by jc on Gazebo Answers with karma: 5 on 2018-08-13
Post score: 0

Answer: If you have a function that gets the joint world pose before simulation, I assume you are calling it in the Load() function of the plugin. You can then, in the same way, call your function in the OnUpdate() loop. By my understanding, that gets called every iteration. You can find a more detailed description at: Sensor OnUpdate

Originally posted by Veztak with karma: 36 on 2018-08-14
This answer was ACCEPTED on the original site
Post score: 2

Original comments

Comment by jc on 2018-08-14: Thanks for the help! Outside of the tutorials, where is this documented?
{ "domain": "robotics.stackexchange", "id": 4312, "tags": "gazebo-plugin" }
How do we know gravitational lensing is caused by gravity and not by a magnetic field
Question: Is it possible that other factors could be contributing to the lensing effects we observe, particularly magnetic field disruptions? Light has a frequency, and my understanding is that a magnet can distort that by introducing other frequencies. Thank you, and sorry if my question shows great ignorance! I know very little. Answer: Magnetic fields only affect the trajectories of charged particles. Photons have no charge, so light is not deflected nor is its frequency changed by any magnetic fields that we can produce on Earth. If it was, we would see optical effects around magnets, electric motors, MRI machines etc. It is possible that light is affected by the immensely strong magnetic fields around magnetars, but these are small compared to a galaxy and are not on a big enough scale to produce the deflections that we see in gravitational lensing.
{ "domain": "physics.stackexchange", "id": 90595, "tags": "electromagnetism, general-relativity, gravitational-lensing" }
Why is Olympus Mons the largest volcano in the whole solar system?
Question: Why is it that the volcanoes found in the Tharsis Montes region near the Martian equator (one of which is Olympus Mons) are so much larger than those found on Earth? In comparison, Hawaii's Mauna Loa, the tallest volcano on Earth, only rises 10 km above the sea floor. Olympus Mons rises three times higher than Earth's highest mountain peak, Mount Everest. What makes these volcanoes rise to such enormous heights on Mars, compared to those found on Earth and the rest of the Solar System?

Answer: This is mostly due to the fact that Mars does not have plate tectonics. Therefore the plate stays above the hotspot without moving, allowing magma to rise and pile up at the same place for millions and millions of years. Above the Hawaii hotspot, the oceanic plate is moving, so volcanism tends to drift away with time (actually the volcanism happens at the exact same place from a mantle point of view, but its surface expression moves with the plate). That is why rocks of the Hawaiian-Emperor seamount chain are older in the West and younger in the East. This is true even at the scale of Hawaii island itself, where Kohala and Mauna Kea are extinct, while volcanism (or rather the plate) has shifted to Mauna Loa, Kīlauea and Kamaʻehuakanaloa. Imagine if all the magma comprising these islands had piled up at the same place: it could have built a gigantic volcano like Olympus Mons! Well... not really. There is another parameter to account for: a theoretical limit to how high a mountain can possibly get, because of the compressive strength of rock (or glacial erosion in some theories). See for instance the answers to these questions: How high can a mountain possibly get? Why is Mauna Kea taller than the maximum height possible on Earth? On Earth the limit is ~10 km. So even if magma kept piling up at the same place, the resulting mountain would start to laterally spread or collapse. But on Mars the gravitational acceleration is lower, making the limit much higher.
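The gravity argument in the last paragraph can be turned into a rough number. The sketch below (all figures are assumed round values, not taken from the answer) scales the ~10 km terrestrial limit by the ratio of surface gravities:

```python
# Back-of-the-envelope: for a fixed rock strength S and density rho,
# h_max ~ S / (rho * g), so the height limit scales inversely with gravity.
g_earth = 9.81            # m/s^2
g_mars = 3.71             # m/s^2 (assumed round value)
h_earth_limit_km = 10.0   # approximate terrestrial limit quoted above

h_mars_limit_km = h_earth_limit_km * g_earth / g_mars
print(round(h_mars_limit_km, 1))  # ~26 km, the same order as Olympus Mons
```

This is only a scaling argument, but it shows why a Martian edifice can plausibly grow more than twice as tall as anything on Earth before spreading or collapsing.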
{ "domain": "earthscience.stackexchange", "id": 2174, "tags": "volcanology, mountains, mars, mountain-building" }
Is this problem #P-hard and why?
Question: Problem: In a directed graph $G=(V,E)$, each edge $e\in E$ is associated with a weight $w_e$ which is geometrically distributed with a parameter $p$, i.e. $P(w_e=i)=p(1-p)^{i-1}, i\geq 1$. $s,t$ are two nodes in $G$ and $k$ is a positive integer. What is the probability of the event that the shortest path from $s$ to $t$ has length at least $k$? I feel this problem should be #P-hard, but I have no idea how to prove its #P-hardness. I know I should choose a known #P-complete problem and reduce it to the problem above, but I don't know which one to choose.

Answer: I show a reduction from positive partitioned 2-DNF assignment counting (see the proof of 5.1 in [http://www.vldb.org/conf/2004/RS22P1.PDF] for details and the inspiration for my proof). This problem asks to count the number of valuations that satisfy a formula of the form $\Phi: \bigvee_{(i,j) \in E} X_i Y_j$ with $E \subseteq \mathbb{N}^2$, where the $X_i$ and $Y_j$ are pairwise distinct variables.

Represent the positive partitioned 2-DNF as one vertex $x_i$ per variable $X_i$ and one vertex $y_j$ per variable $Y_j$, and an edge from $x_i$ to $y_j$ for each $(i, j) \in E$. I add a source vertex $s$ and an edge from $s$ to each $x_i$ with parameter $p=1/2$, so with probability $1/2$ the length is $1$ and with probability $1/2$ it is $>1$. Likewise I add an edge from each $y_j$ to the target vertex $t$ with the same distribution. Up to the exact value of edges with length $>1$, there is a clear probability-preserving bijection between valuations of the $X_i$ and $Y_j$ and possible worlds of this graph: for a valuation $\nu$, the corresponding graph is the one where the probabilistic edge adjacent to $x_i$ has length $1$ iff $\nu(X_i) = 1$ and length $>1$ otherwise, and likewise for the $y_j$. I set $k=3$. I claim that the probability that there is a path of length $3$ from $s$ to $t$ is the number of valuations that satisfy $\Phi$ divided by $2^N$, where $N$ is the number of variables.
Indeed this is clear, as a valuation satisfies $\Phi$ iff some adjacent pair of $X_i$ and $Y_j$ is true, iff the edges incident to the corresponding $x_i$ and $y_j$ both have length $1$ in the corresponding world. Note that the exact length of any edge of length $>1$ is irrelevant, as the structure of the graph ensures it can never be part of a path of length $3$ from $s$ to $t$: there are only edges from $s$ to the $x_i$, from the $x_i$ to the $y_j$, and from the $y_j$ to $t$.
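To make the correspondence concrete, here is a brute-force check on a toy instance of my own choosing (not from the answer): for $\Phi = X_1Y_1 \vee X_2Y_1$, the fraction of possible worlds containing an $s$–$t$ path of length $3$ equals the fraction of satisfying valuations.

```python
from itertools import product

# Toy instance (my own choice): Phi = X1 Y1  v  X2 Y1, with E = {(1,1), (2,1)}.
E = [(1, 1), (2, 1)]
xs, ys = [1, 2], [1]
n_vars = len(xs) + len(ys)

# 1) Count satisfying valuations of Phi directly.
sat = 0
for bits in product([0, 1], repeat=n_vars):
    val_x = dict(zip(xs, bits[:len(xs)]))
    val_y = dict(zip(ys, bits[len(xs):]))
    sat += any(val_x[i] and val_y[j] for i, j in E)

# 2) Enumerate possible worlds of the reduction graph.  Each probabilistic
# edge (s -> x_i and y_j -> t) has length 1 with probability 1/2; the proof
# shows the exact ">1" value is irrelevant, so 2 stands in for all of them.
prob_edges = [('s', f'x{i}') for i in xs] + [(f'y{j}', 't') for j in ys]
len3_worlds = 0
for lengths in product([1, 2], repeat=len(prob_edges)):
    w = dict(zip(prob_edges, lengths))
    # an s-t path of length 3 exists iff, for some clause (i, j), both
    # probabilistic edges around the fixed length-1 edge x_i -> y_j are 1
    len3_worlds += any(w[('s', f'x{i}')] == 1 and w[(f'y{j}', 't')] == 1
                       for i, j in E)

p_len3 = len3_worlds / 2 ** len(prob_edges)
print(sat, p_len3)   # 3 0.375, i.e. p_len3 == sat / 2**n_vars
```

For this formula, $3$ of the $8$ valuations satisfy $\Phi$, and exactly $3$ of the $8$ worlds admit a length-$3$ path, as the bijection predicts.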
{ "domain": "cstheory.stackexchange", "id": 3491, "tags": "cc.complexity-theory, np-hardness, complexity-classes" }
What if our galaxy didn't have a SMBH?
Question: From my understanding, it is believed that almost every big galaxy, and especially spiral galaxies, has a supermassive black hole (SMBH) at the center. Also, from what I've read, an SMBH isn't required for a galaxy to exist since, in layman's terms, the matter inside a galaxy, such as its gas clouds and star formation, will keep it gravitationally in check. Whether I am right or wrong, would there be a significant difference if our galaxy did not have an SMBH at the center? Significant enough for noticeable differences even here on Earth?

Answer: The supermassive black hole (SMBH) in the center of the Milky Way (MW) — called Sgr A* [Sagittarius A-star] — has no direct impact on our galaxy. Its mass is only a few million Solar masses, and if you remove it$^\dagger$, it will only affect the most central stars, which would suddenly continue in straight paths out through the MW. These stars would almost surely not hit any other stars or something like that (since stars are really, really far apart), but some of them have velocities high enough that they may escape the MW. If Sgr A* weren't there to begin with, things might look a little different. There seems to be a relation between the mass of a galaxy's SMBH and the velocity dispersion of the stars in its central bulge (the so-called M-sigma relation), so a MW without Sgr A* would mean a more ordered center. Our Solar System is located in the disk, far from the center, and there is evidence that SMBHs have little impact on the disk (Gebhardt et al. 2001). However, in their early phase (as an active galactic nucleus), their extreme luminosities cause galactic superwinds which blow out gas and may quench star formation (Tombesi et al. 2015).

$^{^\dagger}$Removing the Milky Way's SMBH is left as an exercise for the reader.
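The "may escape the MW" claim can be sanity-checked with a quick estimate (all numbers below are assumed round values of my own, not from the answer): a star on a tight orbit around Sgr A* keeps its orbital speed if the black hole suddenly vanishes, and that speed far exceeds the Galaxy's escape speed.

```python
import math

# Rough check with assumed round numbers: the circular speed of a star
# at an S2-like distance from Sgr A*.
G = 6.674e-11                 # m^3 kg^-1 s^-2
M_sgr_a = 4.0e6 * 1.989e30    # ~4 million solar masses, in kg
r = 120 * 1.496e11            # ~120 AU, in m
v_escape_kms = 550.0          # commonly quoted MW escape speed (assumed)

v_circ_kms = math.sqrt(G * M_sgr_a / r) / 1e3
print(round(v_circ_kms))      # several thousand km/s
assert v_circ_kms > v_escape_kms   # such a star would indeed leave the Galaxy
```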
{ "domain": "astronomy.stackexchange", "id": 1049, "tags": "galaxy, supermassive-black-hole" }
What is difference between intersection over union (IoU) and intersection over bounding box (IoBB)?
Question: Can someone give a detailed explanation of IoU and IoBB, along with the differences between them?

Answer: The Intersection over Bounding Box is the Intersection over Union (IoU) for object detection tasks, where you have a bounding box. There are many tasks (e.g. image segmentation) where you have an IoU (the predicted segment vs the actual segment), but there are no bounding boxes.
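For the bounding-box case, the metric is simple to compute directly. A minimal sketch (the corner-coordinate box format is my own convention here):

```python
def iou(box_a, box_b):
    """Intersection over Union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix1 = max(box_a[0], box_b[0])
    iy1 = max(box_a[1], box_b[1])
    ix2 = min(box_a[2], box_b[2])
    iy2 = min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)   # 0 if boxes are disjoint
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

print(iou((0, 0, 2, 2), (1, 1, 3, 3)))   # 1 unit of overlap over 7 of union
```

For segmentation, the same ratio is taken over pixel sets rather than rectangles, which is exactly the distinction the answer draws.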
{ "domain": "datascience.stackexchange", "id": 8955, "tags": "image-classification, image-recognition, object-detection" }
How to deal with multiple categorical data set
Question: Please tell me how to deal with the sex, smoker and region columns. Should I perform one-hot encoding for all of them?

Answer: Simply yes. Before that you may want to check how correlated those features are, so you can deselect redundant features, but in general you are right: starting with one-hot encoding is a good choice. What may need more inspection later is the number of different regions. Then you end up with many sparse features, for which you need to reduce dimensionality. If you provide more information on the whole project I can provide more insight.
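As a sketch of what one-hot encoding does to such columns (the rows below are made up; in practice pandas.get_dummies or scikit-learn's OneHotEncoder does the same job):

```python
# Hypothetical rows from an insurance-style dataset (made up for illustration)
rows = [
    {"sex": "female", "smoker": "yes", "region": "southwest"},
    {"sex": "male",   "smoker": "no",  "region": "northeast"},
]

def one_hot(rows, columns):
    """Expand each categorical column into 0/1 indicator features."""
    categories = {c: sorted({r[c] for r in rows}) for c in columns}
    return [
        {f"{c}_{v}": int(r[c] == v) for c in columns for v in categories[c]}
        for r in rows
    ]

encoded = one_hot(rows, ["sex", "smoker", "region"])
print(encoded[0])  # e.g. {'sex_female': 1, 'sex_male': 0, 'smoker_no': 0, ...}
```

The "many sparse features" caveat in the answer is visible here: a region column with dozens of values would add dozens of mostly-zero indicator columns.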
{ "domain": "datascience.stackexchange", "id": 6906, "tags": "regression" }
What is the height of liquid risen when a solid cylinder is inserted into a tub of water?
Question: Problem: In a cylindrical tub of area $A$ and height $H$, water is up to a level of $H/2$. A solid cylinder of length $L$ and area $A/5$ is inserted into the tub and it is floating vertically with an $L/4$ portion immersed. What is the increase in water level?

My Solution: Let us assume the new level to be $H'$ and conserve the volume of water: $$\frac{H}{2}A = \frac{L}{4}\left(A-\frac{A}{5}\right) + \left(H'-\frac{L}{4}\right)A$$ from which I got $H' = H/2 + L/20$, which even matches the answer given in the book. But in the solution given in the book the author has taken a different approach, which is what I want to understand.

Author's Solution: Let $x$ be the height of liquid risen above the previous level ($H/2$). Volume of water displaced by the cylinder $\left(= \frac{L}{4}\cdot\frac{A}{5}\right) = xA$. Therefore $x=L/20$ and the new level of water is $H/2 + L/20$.

I didn't understand how he equated the volume of water displaced by the cylinder to $xA$.

Answer: To appreciate that $xA$ is indeed the volume of water displaced, redraw the figure with the immersed part of the cylinder moved to the bottom (just for ease of calculation of the volume of the displaced water). In this new picture the top layer of thickness (height) $x$ contains the displaced water and has a volume of $xA$. By the way, your method is correct too. It has a typo on the LHS of the first equation ($H$ should be replaced by $H/2$).
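Both routes can be checked numerically. The sketch below plugs in arbitrary sample dimensions (my own choice, any consistent values work) and verifies that the conservation equation and the displaced-volume shortcut give the same rise:

```python
# Arbitrary sample dimensions (assumed; any values with L/4 < H/2 work)
A, H, L = 10.0, 8.0, 4.0
immersed = L / 4                     # immersed length of the floating cylinder
water = (H / 2) * A                  # initial water volume

# Route 1 (question's method): volume conservation,
#   (H' - L/4) * A + (L/4) * (A - A/5) = (H/2) * A,  solved for H'
h_new = (water - immersed * (A - A / 5)) / A + immersed

# Route 2 (author's method): rise = displaced volume / tub area
rise = immersed * (A / 5) / A

assert abs(h_new - (H / 2 + L / 20)) < 1e-12
assert abs(rise - L / 20) < 1e-12
```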
{ "domain": "physics.stackexchange", "id": 38265, "tags": "homework-and-exercises, fluid-statics" }
Will a precessing spinning wheel fall down if there is no friction at all?
Question: If there were no friction at all, would a spinning wheel held up by one end of its axle precess forever without falling down? I just asked another question about the same problem: Direction of torque precession of a spinning wheel. Since it seems to be good practice on Stack Exchange not to ask several questions in one post, I split them up into two questions. However, if I am wrong, feel free to merge these questions.

Answer: Yes, it keeps precessing forever. As you see, the change of angular momentum $$\frac{\text{d}\vec{L}}{\text{d}t} = \vec{\tau}$$ is always perpendicular to the angular momentum itself, which means that the angular momentum's direction is changed, while its magnitude is constant. Note the mathematical analogy with velocity and acceleration in the case of circular motion with constant speed: $$\frac{\text{d}\vec{v}}{\text{d}t} = \vec{a}_\text{cp}$$
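A quick numerical illustration of this (a toy model of my own, with an assumed constant precession rate): integrating $\mathrm{d}\vec{L}/\mathrm{d}t = \omega_p\,\hat{z}\times\vec{L}$, a rate of change always perpendicular to $\vec{L}$, leaves $|\vec{L}|$ constant while the direction precesses about the vertical.

```python
import math

# Toy integration: dL/dt = omega_p * (z_hat x L), always perpendicular to L.
omega_p = 2.0                      # assumed precession rate, rad/s

def deriv(L):
    # omega_p * (z_hat x L) = omega_p * (-L_y, L_x, 0)
    return (-omega_p * L[1], omega_p * L[0], 0.0)

def add(a, b, s):                  # component-wise a + s*b
    return tuple(ai + s * bi for ai, bi in zip(a, b))

def rk4_step(L, dt):
    k1 = deriv(L)
    k2 = deriv(add(L, k1, dt / 2))
    k3 = deriv(add(L, k2, dt / 2))
    k4 = deriv(add(L, k3, dt))
    return tuple(L[i] + dt / 6 * (k1[i] + 2 * k2[i] + 2 * k3[i] + k4[i])
                 for i in range(3))

L = (1.0, 0.0, 0.5)                # tilted angular momentum (arbitrary)
L0 = math.sqrt(sum(c * c for c in L))
dt = 1e-3
for _ in range(10_000):            # 10 seconds of precession
    L = rk4_step(L, dt)

assert abs(math.sqrt(sum(c * c for c in L)) - L0) < 1e-9   # magnitude constant
assert abs(L[2] - 0.5) < 1e-12                             # vertical part untouched
```

The wheel's tip traces a circle: the direction keeps turning, the magnitude never decays, exactly as in the circular-motion analogy.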
{ "domain": "physics.stackexchange", "id": 97793, "tags": "classical-mechanics, angular-momentum, gyroscopes, rigid-body-dynamics, precession" }
Question about the input of libviso2
Question: Hi everybody. I am working on a project using a single camera for navigation, which will use libviso2 to compute monocular odometry. I have read the posts http://answers.ros.org/question/46018/do-i-have-to-rectifycrop-images-before-process-them-with-viso2_ros-mono_odometry/ and http://wiki.ros.org/image_proc, and I know that the input image should be rectified before being fed to libviso2. What does 'rectify' mean? Does it mean undistorting the image, or something else? I would like to know in more detail what the word 'rectify' means. Thank you, everybody.

Originally posted by jornwong on ROS Answers with karma: 3 on 2014-11-12
Post score: 0

Answer: Yes, rectify means exactly to undistort the image. Please check the image_proc node for more details.

Originally posted by IvanV with karma: 329 on 2014-11-12
This answer was ACCEPTED on the original site
Post score: 0

Original comments

Comment by jornwong on 2014-11-13: Thank you for your answer, Ivan.
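To make "rectify = undistort" concrete, here is a toy one-coefficient radial distortion model (the coefficient and test point are made up; image_proc does the real version for every pixel, using the distortion coefficients from the camera calibration):

```python
# Toy Brown-style radial distortion in normalized image coordinates.
k1 = -0.2   # assumed radial distortion coefficient

def distort(x, y):
    """Forward (lens) distortion: what the raw camera image applies."""
    f = 1 + k1 * (x * x + y * y)
    return x * f, y * f

def undistort(xd, yd, iters=20):
    """Invert the distortion by fixed-point iteration (the 'rectify' step)."""
    x, y = xd, yd
    for _ in range(iters):
        f = 1 + k1 * (x * x + y * y)
        x, y = xd / f, yd / f
    return x, y

x, y = 0.3, 0.2
xu, yu = undistort(*distort(x, y))
assert abs(xu - x) < 1e-9 and abs(yu - y) < 1e-9   # rectification undoes the lens
```

After this correction, straight lines in the world map to straight lines in the image, which is what feature-based odometry like libviso2 assumes.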
{ "domain": "robotics.stackexchange", "id": 20022, "tags": "ros" }
Why Can Electrons be Modelled as Classical Spins?
Question: Although electrons are spin-$1/2$ particles described by the Pauli matrices, the Ising model treats electrons as classical spins. As a result, the Ising model does not describe anything physical, but its results are good enough to approximate many properties of materials. Why can such a model, which treats a purely quantum mechanical effect as a classical one, describe physical systems well? Is there a reason why we can approximately treat the electron spins as classical spins?

Answer: There is an interesting post here which I think answers this question: What is the difference between classical and quantum Ising model? For the Ising model specifically, the post says that the dynamics are equivalent to those of the classical problem because all the operators commute with each other.
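The "all the operators commute" statement can be checked directly on a tiny two-site example (matrices written out by hand, no libraries needed): every $Z_iZ_j$ and $Z_i$ term commutes with every other, while a transverse-field term $X_i$ does not.

```python
# Pauli matrices and identity as plain nested lists
Z = [[1, 0], [0, -1]]
X = [[0, 1], [1, 0]]
I2 = [[1, 0], [0, 1]]

def kron(A, B):
    """Kronecker product of two square matrices."""
    n, m = len(A), len(B)
    return [[A[i // m][j // m] * B[i % m][j % m]
             for j in range(n * m)] for i in range(n * m)]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def commutes(A, B):
    AB, BA = matmul(A, B), matmul(B, A)
    return all(AB[i][j] == BA[i][j]
               for i in range(len(A)) for j in range(len(A)))

ZZ = kron(Z, Z)       # an Ising coupling term
Z1 = kron(Z, I2)      # a longitudinal-field term
X1 = kron(X, I2)      # a transverse-field term

print(commutes(ZZ, Z1))  # True:  the Ising Hamiltonian is effectively classical
print(commutes(ZZ, X1))  # False: a transverse field makes it genuinely quantum
```

Since all terms of the Ising Hamiltonian are diagonal in the same ($\sigma^z$) basis, its eigenstates are labelled by classical spin configurations, which is exactly why the classical treatment works.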
{ "domain": "physics.stackexchange", "id": 83945, "tags": "statistical-mechanics, angular-momentum, electrons, ising-model, spin-models" }
$kx$ is the number of wavelengths per $2\pi x$-length segment. But what is $\vec{r}\cdot \vec{k}$?
Question: If $k$ is the wavenumber of a wave and $x$ is a length, then $kx$ is «the number of wavelengths per $2\pi x$-length segment». I have seen the quantity $\vec{r}\cdot \vec{k}$ appear in many formulas in physics, but I have not been able to interpret it in a literal sense. Can you provide a useful literal interpretation of it?

Answer: In 1 dimension I would formulate it a little differently: if $k$ is the wavenumber, then $kx$ is the phase difference between position $0$ and position $x$. (Remember $2\pi = 360° =$ one period.) Then you can carry over this statement almost unchanged to 3 dimensions: if $\vec{k}$ is the wavenumber vector (perpendicular to the wave fronts), then $\vec{k}\cdot\vec{r}$ is the phase difference between position $\vec{0}$ and position $\vec{r}$. Here the scalar product also correctly accounts for non-parallel vectors $\vec{k}$ and $\vec{r}$.
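A quick numerical check of this interpretation (the sample vectors are my own, chosen so that $\vec{k}\cdot\vec{r} < \pi$ and the phases compare directly):

```python
import math

# For a plane wave psi(r) = exp(i k.r), the phase difference between the
# origin and position r is k.r.
k = (2.0, 1.0, 0.5)    # wavevector, perpendicular to the wavefronts (assumed)
r = (0.5, -1.0, 3.0)   # position (assumed)

k_dot_r = sum(ki * ri for ki, ri in zip(k, r))          # = 1.5 here

psi_r = complex(math.cos(k_dot_r), math.sin(k_dot_r))   # exp(i k.r); psi(0) = 1
phase = math.atan2(psi_r.imag, psi_r.real)              # phase relative to origin

assert abs(phase - k_dot_r) < 1e-12   # the scalar product IS the phase difference

# and k.r / (2*pi) counts the wavelengths between the two wavefronts
print(k_dot_r / (2 * math.pi))
```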
{ "domain": "physics.stackexchange", "id": 63168, "tags": "waves, vectors, wavelength" }
Client Server Stocks application
Question: I was given a task to build a client-server application, using any technology I want. Here are the requirements:

- To simplify the process, the server would have an in-memory stock list and there would be a random data generator to update the stock data
- Each client will have his own stock list
- Each client can add stocks to its list
- To simplify the process, each client will use polling to get the stock prices, but the server needs to return only the changed stocks and not the whole list
- There will be an option to add new stocks to the repository

I am focusing more on the server side here, and for now the client is a console app. Each client generates a token (GUID) and sends it to the server as his ID. I used the Nancy HTTP server as the back end. I would like you to please comment on the correctness of my implementation (OOP design, efficient and safe server implementation) as if it were a code review for your team. I would appreciate any comments or questions.

1. Server Project

ServerModule.cs

namespace StocksApp { public class ServerModule : Nancy.NancyModule { public ServerModule() { Post["/User/{id}"] = parameters => CreateUser(parameters); Get["/UserShares/{id}"] = parameters => GetUserShares(parameters); Post["/AddShareToUser/{id}/{share}"] = parameters => RegisterUserToShare(parameters); Post["/AddShareToRepository/{id}"] = parameters => AddShareToRepository(parameters); } private dynamic AddShareToRepository(dynamic parameters) { string id = parameters["id"].ToString(); if (!string.IsNullOrEmpty(id)) { if (!RepositoriesFactory.StocksRepository.AddStock(id)) { return JsonConvert.SerializeObject(new SimpleResponse() { IsSuccess = false }); } return JsonConvert.SerializeObject(new SimpleResponse() { IsSuccess = true }); } return JsonConvert.SerializeObject(new SimpleResponse { IsSuccess = false, Message = "Id null" }); } private static dynamic RegisterUserToShare(dynamic parameters) { try { string id = parameters["id"].ToString(); if (!string.IsNullOrEmpty(id))
{ User user = RepositoriesFactory.UsersRepository.GetUser(id); if (user.RegisterToStockUpdated(parameters["share"])) { return JsonConvert.SerializeObject(new SimpleResponse() { IsSuccess = true }); } return JsonConvert.SerializeObject(new SimpleResponse() { IsSuccess = false, Message = "Could not register user" }); } return JsonConvert.SerializeObject(new SimpleResponse { IsSuccess = false, Message = "Id null" }); } catch (Exception e) { return JsonConvert.SerializeObject(new SimpleResponse { IsSuccess = false, Message = e.ToString() }); } } private static dynamic GetUserShares(dynamic parameters) { try { string id = parameters["id"].ToString(); if (!string.IsNullOrEmpty(id)) { User user = RepositoriesFactory.UsersRepository.GetUser(id); string jsonData = JsonConvert.SerializeObject(new StockListResponse { Stocks = user.GetUpdatedStocks(), IsSuccess = true }); return jsonData; } return JsonConvert.SerializeObject(new StockListResponse { IsSuccess = false, Message = "Id null" }); } catch (Exception e) { return JsonConvert.SerializeObject(new StockListResponse { IsSuccess = false, Message = e.ToString() }); } } private static dynamic CreateUser(dynamic parameters) { try { string id = parameters["id"].ToString(); if (!string.IsNullOrEmpty(id)) { RepositoriesFactory.UsersRepository.AddUser(id); return JsonConvert.SerializeObject(new SimpleResponse { IsSuccess = true }); } return JsonConvert.SerializeObject(new SimpleResponse { IsSuccess = false, Message = "Id null" }); } catch (Exception e) { return JsonConvert.SerializeObject(new SimpleResponse { IsSuccess = false, Message = e.ToString() }); } } } } RepositoriesFactory.cs namespace StocksApp { public static class RepositoriesFactory { public static IUsersRepository UsersRepository { get;} public static IStocksRepository StocksRepository{ get; } static RepositoriesFactory() { StocksRepository = new StocksRepository(); UsersRepository = new UsersRepository(StocksRepository); } } } IStocksRepository namespace 
StocksApp.Repositories { public interface IStocksRepository { void StartUpdats(); IEnumerable<string> StockList { get; } Stock GetStock(string stockID); bool AddStock(string id); } } StocksRepository.cs namespace StocksApp.Repositories { public class StocksRepository : IStocksRepository { public ConcurrentDictionary<string, Stock> Stocks { get; set; } public Stock GetStock(string stockID) { if (Stocks.ContainsKey(stockID)) { return Stocks[stockID]; } return null; } public IEnumerable<string> StockList => Stocks.Keys; public StocksRepository() { Stocks = new ConcurrentDictionary<string, Stock>( new Dictionary<string, Stock> { {"INTC", new Stock("INTC") }, {"GOOG", new Stock("GOOG") }, {"MSC", new Stock("MSC") }, {"AMD", new Stock("AMD") }, {"AAPL", new Stock("AAPL") }} ); } public bool AddStock(string id) { return Stocks.TryAdd(id, new Stock(id)); } public void StartUpdats() { Random r = new Random(); Task.Run(() => { while (true) { int index = r.Next(0, Stocks.Keys.Count); double value = Math.Round(r.NextDouble() * 50, 2); string stockName = StockList.ElementAt(index); Console.WriteLine($"{stockName} was updated to {value}"); Stocks[stockName].UpdateValue(value); Thread.Sleep(500); } }); } } } IUsersRepository.cs namespace StocksApp.Repositories { public interface IUsersRepository { void AddUser(string id); User GetUser(string id); void RegisterForUpdated(string userID, string stockID); } } UsersRepository.cs namespace StocksApp.Repositories { public class UsersRepository : IUsersRepository { private readonly IStocksRepository _stockRepository; private Dictionary<string, User> _users = new Dictionary<string, User>(); public UsersRepository(IStocksRepository stockRepository) { _stockRepository = stockRepository; } public void AddUser(string id) { _users[id] = new User("id",_stockRepository); } public User GetUser(string id) { if (_users.ContainsKey(id)) { return _users[id]; } return null; } public void RegisterForUpdated(string userID, string stockID) { if 
(_users.ContainsKey(userID)) { _users[userID].RegisterToStockUpdated(stockID); } } } } User.cs namespace StocksApp.Repositories { public class User { private readonly IStocksRepository _stockRepository; private readonly ConcurrentDictionary<string,Stock> _registeredStocks = new ConcurrentDictionary<string,Stock>(); private readonly IDictionary<string, double> _changedStocks = new Dictionary<string, double>(); private readonly object _locker = new object(); public string ID { get; } public User(string id, IStocksRepository stockRepository) { _stockRepository = stockRepository; ID = id; } public bool RegisterToStockUpdated(string stockID) { //so we dont register twice. if (_registeredStocks.ContainsKey(stockID)) { return false; } Stock stock = _stockRepository.GetStock(stockID); if (stock != null) { stock.Updated += Stock_Updated; _registeredStocks.TryAdd(stockID,stock); return true; } return false; } private void Stock_Updated(Stock stock) { lock (_locker) { _changedStocks[stock.Name] = stock.Value; } } public IDictionary<string, double> GetUpdatedStocks() { IDictionary<string, double> returnValue; lock (_locker) { returnValue = new Dictionary<string, double>(_changedStocks); _changedStocks.Clear(); } return returnValue; } } } there is also the bootstrapper for the console server application Program.cs namespace StocksApp { class Program { private readonly IStocksRepository _stocksRepository; private readonly IUsersRepository _usersRepository; private string _url = "http://localhost"; private int _port = 8080; private NancyHost _nancy; public Program(IStocksRepository stocksRepository, IUsersRepository usersRepository) { _stocksRepository = stocksRepository; _usersRepository = usersRepository; var uri = new Uri($"{_url}:{_port}/"); var configuration = new HostConfiguration() { UrlReservations = new UrlReservations() { CreateAutomatically = true } }; _nancy = new NancyHost(configuration,uri); } private void Start() { _nancy.Start(); _stocksRepository.StartUpdats(); 
Console.WriteLine($"Started listening on port {_port}"); Console.ReadKey(); _nancy.Stop(); } static void Main(string[] args) { var p = new Program(RepositoriesFactory.StocksRepository, RepositoriesFactory.UsersRepository); p.Start(); } } }

2. SharedProject

Stock.cs

namespace StoksApp.Shared.Entities { public class Stock { public string Name { get; set; } public double Value { get; set; } public event Action<Stock> Updated = delegate { }; public Stock(string name) { Name = name; } public void UpdateValue(double value) { Value = value; Updated(this); } } }

SimpleResponse.cs

namespace StoksApp.Shared.Entities { public class SimpleResponse { public bool IsSuccess { get; set; } public string Message { get; set; } } }

StockListResponse.cs

namespace StoksApp.Shared.Entities { public class StockListResponse : SimpleResponse { public IDictionary<string, double> Stocks { get; set; } } }

Answer: For me, the code looks well structured so far. Nancy is a great web framework, I like it :). However, some remarks:

ServerModule.cs

Some code fragments like return JsonConvert.SerializeObject(new SimpleResponse { IsSuccess = false, Message = "Id null" }); or return JsonConvert.SerializeObject(new SimpleResponse { IsSuccess = false, Message = e.ToString() }); (and others) are redundant. I would extract such fragments to separate methods to increase readability.

Logging

You are just returning the error message in case of exceptions. I would really suggest logging exceptions and any special behaviors on the server side. Otherwise you are not able to analyze errors in production, and analyzing errors in QA is also cumbersome without logging!

RepositoriesFactory.cs

It is good that the repositories are already abstracted so they can be easily replaced with "real" implementations. However, the RepositoriesFactory has nothing to do with a factory - it is just a static container! I would try to drop the class completely and use dependency injection instead.
Nancy has its own DI framework and also supports the use of other frameworks. StocksRepository.cs Even if StartUpdats is just a dummy method that produces test data, I would suggest starting the task with the option TaskCreationOptions.LongRunning; otherwise a thread-pool thread (which is meant for short-running actions) will be blocked. Furthermore, I would add a ContinueWith handler that logs the exception in case of failure; otherwise exceptions are silently ignored because the task is not awaited. User It looks a little bit strange to me (violating the SRP) that the user gets the stock repository and registers event handlers for changes. I would create another class that gets both repositories, registers itself for changes, and updates user objects if anything changes.
{ "domain": "codereview.stackexchange", "id": 26565, "tags": "c#, http, server" }
Evolution of a state vector: Why is the action of $N$ equivalent to the action of $UNU^{†}$?
Question: There is another question asked about this on Stack Exchange, but I did not find any answers there that fully answered the question. In Gottesman's paper "The Heisenberg Representation of Quantum Computers", he says: Suppose we have a quantum computer in the state $| \psi \rangle$, and we apply the operator $U$. Then $$UN |\psi \rangle = UNU^{†}U |\psi \rangle$$ The paper states that the operator $UNU^{†}$ acts on states in the same way that $N$ did before the operation. I don't understand this. I understand that $UN$ and $UNU^{†}U$ act on the state in the same way. However, $N$ is acting on the state $| \psi \rangle$ whereas $UNU^{†}$ is acting on the state $U |\psi \rangle$. Unless $U$ and $N$ commute, I don't understand how the actions of $N$ and $UNU^{†}$ are equivalent. Answer: I think it probably helps to understand what Gottesman is trying to do with the operator $N$ (later in the paper). He wants to start with some state $|\psi\rangle$, but instead of directly describing the state $|\psi\rangle$, he wants to specify it in terms of some operators $\{N_i\}$ for which $$ N_i|\psi\rangle=|\psi\rangle. $$ If you have enough $N_i$, then you no longer need to state $|\psi\rangle$: the set $\{N_i\}$ combined with the above relation is sufficient to implicitly define $|\psi\rangle$. Then we want to ask what happens to the state $|\psi\rangle$ when it evolves under $U$. It becomes $|\tilde\psi\rangle=U|\psi\rangle$. How do we describe it in terms of some new operators $\{\tilde N_i\}$? $$ \tilde N_i|\tilde\psi\rangle=|\tilde\psi\rangle $$ But how are the $\tilde N_i$ related to $N_i$ and $U$? We have \begin{align*} |\tilde\psi\rangle&=U|\psi\rangle \\ &=UN|\psi\rangle \\ &=(UNU^\dagger)U|\psi\rangle \\ &=(UNU^\dagger)|\tilde\psi\rangle. \end{align*} So, we see that $\tilde N_i=UN_iU^\dagger$. In the same way that we didn't need to write down $|\psi\rangle$, and instead relied on $\{N_i\}$, we never have to write down $|\tilde\psi\rangle$.
We just update our description of the $N_i$ to $\tilde N_i=UN_iU^\dagger$.
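This stabilizer bookkeeping is easy to verify numerically. A minimal NumPy sketch (my own illustrative choices of $N = Z$, $|\psi\rangle = |0\rangle$, and $U = H$, the Hadamard gate):

```python
import numpy as np

# Single-qubit operators
Z = np.array([[1, 0], [0, -1]], dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)

# |psi> = |0> is stabilized by N = Z, i.e. Z|0> = |0>
psi = np.array([1, 0], dtype=complex)
assert np.allclose(Z @ psi, psi)

# Evolve the state: |psi~> = H|0> = |+>
psi_t = H @ psi

# Update the operator instead: N~ = H Z H^dagger, which equals X
N_t = H @ Z @ H.conj().T
assert np.allclose(N_t, X)

# N~ fixes the evolved state, even though Z alone does not
assert np.allclose(N_t @ psi_t, psi_t)
assert not np.allclose(Z @ psi_t, psi_t)
```

Note that $Z$ and $H$ do not commute; the point is precisely that the conjugated operator, rather than the original one, stabilizes the evolved state.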
{ "domain": "quantumcomputing.stackexchange", "id": 5487, "tags": "quantum-gate, quantum-state, quantum-operation, unitarity" }
Create map without gmapping/laser sensor
Question: Hello everyone. I want to create a map, let's say of a rectangle-shaped room/area. I am using a TurtleBot 3 but without a laser sensor. Is it possible to create a map without any laser sensor, or, let's say, if I give the coordinate range of my area, can that represent the border line of a map? Or are there any custom maps which can acquire and modify their dimensions according to my usability? Originally posted by enthusiast.australia on ROS Answers with karma: 91 on 2019-08-12 Post score: 0 Original comments Comment by PG_GrantDare on 2019-08-12: To generate an actual map you will need some sensor to sense the environment. The answer by @billy describes how you can trick it by drawing your own map. This approach, however, assumes that you accurately describe the operating environment. If you choose to use this for navigation, you cannot avoid actual obstacles, as it will assume your map is accurate. Comment by enthusiast.australia on 2019-08-13: Yes, I want to use it for navigation. In the map, can I also include actual static obstacles? For example, if I make a map of a square-shaped area of 4 square meters, is it possible to include a static obstacle, let's say a square-shaped area of 0.2 square meters, and then use this map for navigation? Answer: As per @billy's answer, you can use map_server to load an existing map into your environment. To do this, assuming you know the details of the environment you are trying to describe, you can use an image editor to draw and represent the Occupancy Grid Map (OGM). To accompany this you will require a .yaml file that defines parameters about how map_server loads your map and interprets it.
The YAML file looks like so:

image: testmap.png
resolution: 0.1
origin: [0.0, 0.0, 0.0]
occupied_thresh: 0.65
free_thresh: 0.196
negate: 0

In order, these define: the location of the image; the resolution in meters/pixel (this is based on how you choose to draw the image); the (0, 0, 0) origin of your map (set this to wherever your robot will consistently start in your drawn map, or you could initialize your robot's pose at a point other than the map origin); the threshold above which a pixel is considered occupied (leave this default if your map is black and white); the threshold below which a pixel is considered free (leave this default if your map is black and white); and negate, which if true inverts occupied and free (leave this default). The restriction of not having environmental sensory data will affect navigation in that you won't have dynamic avoidance. You are restricted to whatever map you draw. Ensure you represent your environment accurately, or you might collide with obstacles you have not added. Using a global planner, your path will be generated to navigate the environment you have specified in your OGM. The local control is where sensory data is usually important. If you use a blank costmap, then a local planner will control a local path to achieve the global plan without dynamic avoidance. Without dynamic information about obstacles, the navigation will assume there are none and will control the robot to achieve the global plan, which has already accounted for the static obstacles you have defined in your OGM. I can understand that a LIDAR might be expensive; I have personally chosen to use an Xbox Kinect, as it is rather cheap and can sense a 3D environment. You do not have to use 3D environments if you choose this path and can squash everything to 2D. Please consider choosing an answer :) Originally posted by PapaG with karma: 161 on 2019-08-15 This answer was ACCEPTED on the original site Post score: 1
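To complement the hand-drawn approach, here is a minimal pure-Python sketch that generates such a map programmatically: a 4 m × 4 m bordered room with a 0.2 m × 0.2 m static obstacle, written in the plain-text PGM format that map_server also accepts (the file names, the 0.05 m/px resolution, and the obstacle position are my own choices):

```python
# Writes a 4 m x 4 m room with a 0.2 m x 0.2 m obstacle as a PGM
# occupancy image plus a matching map_server YAML file.

res = 0.05                      # meters per pixel
side = int(4.0 / res)           # 80 px -> 4 m x 4 m room
grid = [[254] * side for _ in range(side)]   # 254 = free (near white)

# 1-px border walls (0 = occupied / black)
for i in range(side):
    grid[0][i] = grid[-1][i] = grid[i][0] = grid[i][-1] = 0

# 0.2 m x 0.2 m static obstacle, here placed near the center
obs = int(0.2 / res)            # 4 px
for r in range(40, 40 + obs):
    for c in range(40, 40 + obs):
        grid[r][c] = 0

with open("testmap.pgm", "w") as f:          # plain (P2) PGM
    f.write(f"P2\n{side} {side}\n255\n")
    for row in grid:
        f.write(" ".join(map(str, row)) + "\n")

with open("testmap.yaml", "w") as f:
    f.write("image: testmap.pgm\n"
            f"resolution: {res}\n"
            "origin: [0.0, 0.0, 0.0]\n"
            "occupied_thresh: 0.65\n"
            "free_thresh: 0.196\n"
            "negate: 0\n")
```

With negate 0 and the default thresholds, the 0-valued border and obstacle pixels are read as occupied and the 254-valued interior as free.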
{ "domain": "robotics.stackexchange", "id": 33610, "tags": "ros, ros-kinetic" }
Do some materials change optical characteristics under stress/strain?
Question: My (basic) understanding of lenses is that their refraction is largely determined by the material (index of refraction) and shape. It seems possible to make lenses out of deformable materials, whose shape/curvature I assume you can change to alter the lens behavior. My question is: are there lens materials where changing the stress/strain/density/mechanical deformation, without changing the outer shape, can change the lens behavior? Apologies if this is badly worded, and I'm open to suggestions for better framings. I'm pretty new to this. Answer: It is possible (by propagating sound waves into a solid, for example) to create internal stress and change optical properties. These changes are rather subtle; however, polarized light reveals strain. Other than some concern on the part of lens manufacturers (the presence of internal strains can lower the quality of the image), this does not have many imaging applications. The color fringes seen in stressed plastic indicate refraction changes on the order of a part per million in the refractive index. That's not effective in making a lens, but it suffices for building a grating, and acousto-optic modulators are possible with interesting properties. While it is not mentioned in the question, a strong electric field can change the refraction of a material (solid, liquid, or gas), and much work on the Kerr and Pockels effects is done in order to switch and steer laser beams.
{ "domain": "physics.stackexchange", "id": 37770, "tags": "material-science, optical-materials" }
Why does acetone not behave like its computational values?
Question: I am trying to simulate the excited state of acetone. I ran TDDFT for it both in the gas phase and in the solvated state in water (both implicit and explicit water). The experimental data say that acetone undergoes an n->π* transition, which has a longer wavelength in the gas phase ($\lambda_{\mathrm{max}} \approx 276~\mathrm{nm}$) and a shorter wavelength in the solvated state ($\lambda_{\mathrm{max}} = 265~\mathrm{nm}$). I started with the gas phase and expected a wavelength of about $276~\mathrm{nm}$ ($4.49~\mathrm{eV}$), but surprisingly I got $\lambda_{\mathrm{max}} = 136.172~\mathrm{nm}$ ($9.326~\mathrm{eV}$)! I really cannot understand why there is such a big discrepancy! Here is my GAMESS input file. What is wrong with this molecule?

! File created by the GAMESS Input Deck Generator Plugin for Avogadro
 $BASIS GBASIS=N311 NGAUSS=6 $END
 $CONTRL SCFTYP=RHF RUNTYP=ENERGY TDDFT=EXCITE DFTTYP=B3LYP $END
 $CONTRL ICHARG=0 MULT=1 $END
 $TDDFT NSTATE=9 $END
 $STATPT OPTTOL=0.0005 NSTEP=99 METHOD=RFO UPHESS=MSP HSSEND=.T. $END
 $SYSTEM MWORDS=1000 PARALL=.TRUE. $END
 $SCF DIRSCF=.T. DIIS=.T. DAMP=.T. $END
 $DATA
Title
C1
O 8.0 0.00000 -1.27900 0.00300
C 6.0 -0.00000 -0.05800 0.00100
C 6.0 1.29700 0.69100 -0.00000
C 6.0 -1.29800 0.69000 -0.00000
H 1.0 1.35900 1.32900 -0.90600
H 1.0 1.35900 1.33200 0.90300
H 1.0 2.15700 -0.01300 0.00100
H 1.0 -2.15700 -0.01400 0.00100
H 1.0 -1.35900 1.32900 -0.90600
H 1.0 -1.35900 1.33200 0.90300
 $END

Answer: It seems that the results of the calculations are more or less fine and the OP just misinterpreted the NIST data. As I said in my comment above, NIST does not claim that $\lambda_{\mathrm{max}}=276 \, \mathrm{nm}$. Clearly only a small region of wavelength is shown on the graph, and in the paper[1] referenced on the NIST page it is said (emphasis mine): There are two diffuse ultraviolet bands; the first at 2800 Å.
is very weak with its oscillator strength $f \sim 0.0004$ and the second at about 1900 Å is moderately intense with a maximum extinction coefficient $\epsilon_{\mathrm{m}} \sim 1000$. McMurry has identified the 2800 Å. band as a forbidden $\pi^* \leftarrow n$ transition involving excitation of a non-bonding $\ce{O}$ electron to an anti-bonding $\pi$ orbital between the $\ce{C}$ and $\ce{O}$ of the carbonyl group. A high-resolution photoabsorption spectrum in the energy range 3.7-10.8 eV can be found, for instance, in this recent study[2] and is more or less consistent with the result of the OP's calculations. Noel S. Bayliss, Eion G. McRae, J. Phys. Chem., 1954, 58 (11), 1006–1011. M. Nobre, A. Fernandes, F. Ferreira da Silva, R. Antunes, D. Almeida, V. Kokhan, S. V. Hoffmann, N. J. Mason, S. Eden, P. Limão-Vieira, Phys. Chem. Chem. Phys., 2008, 10, 550-560. (available at researchgate.net)
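For reference, converting between the wavelengths and energies quoted above is a one-liner via $E = hc/\lambda$ with $hc \approx 1239.84\ \mathrm{eV\,nm}$ (a sketch; the band positions are the ones given in the text):

```python
HC_EV_NM = 1239.841984  # hc in eV*nm

def nm_to_ev(lam_nm):
    """Photon energy in eV for a wavelength in nm."""
    return HC_EV_NM / lam_nm

# The weak n->pi* band at ~2800 A (280 nm) and the stronger band at ~1900 A
print(round(nm_to_ev(280.0), 2))  # 4.43 eV
print(round(nm_to_ev(190.0), 2))  # 6.53 eV
# The 276 nm value quoted in the question
print(round(nm_to_ev(276.0), 2))  # 4.49 eV
```

Both experimental bands sit well below the computed ~9.3 eV state, consistent with the answer's point that the 276 nm band is not the absorption maximum of the full spectrum.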
{ "domain": "chemistry.stackexchange", "id": 3402, "tags": "quantum-chemistry, computational-chemistry, spectroscopy, spectrophotometry" }
Compilers: How to see "the number of grammars where there exists a string that has at least two different left-most derivations"?
Question: Could someone tell why "G1 and G3 are ambiguous" and how to see whether a string has at least two different left-most derivations in general? Answer: Although the problem of detecting whether a grammar is ambiguous is, in general, undecidable, for toy grammars like this it is usually pretty easy to find ambiguities by simply enumerating the possible (left-most) derivations until you derive the same sentence in two ways. For example, $G_1$ has just three productions $S\to a S b \mid S b \mid c$, and none of them has more than one non-terminal on the right hand side. So there are only four derivations of three steps, and it's easy to see that two produce the same sentence. $$\begin{align}S&\to a S b \to a a S b b \to a a c b b \; (P_1, P_1, P_3)\\ S&\to a S b \to a S b b \to a c b b\; (P_1, P_2, P_3) \\ S&\to S b \to a S b b \to a c b b\; (P_2, P_1, P_3) \\ S&\to S b \to S b b \to c b b\; (P_2, P_2, P_3)\\ \end{align} $$ $G_3$ does have a production which produces two non-terminals, so there are a lot more short derivations. Even so, it shouldn't take you very long to find two derivations for the same sentence. Proving that a grammar is not ambiguous is not so easy. One possibility is to create a conflict-free parsing table, using any standard algorithm. Not all unambiguous grammars are deterministic (and fewer are deterministic with a single lookahead) but if you do manage to find a conflict-free parser, then the grammar was definitely unambiguous.
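The enumeration described above can be mechanized. A small recursive sketch for $G_1$ (productions numbered P1 = $aSb$, P2 = $Sb$, P3 = $c$ as in the answer; the four-step cutoff is an arbitrary choice):

```python
from collections import defaultdict

# G1: S -> aSb | Sb | c   (terminals lowercase, 'S' the only nonterminal)
productions = {"S": ["aSb", "Sb", "c"]}

def leftmost_derivations(form, steps, limit=4):
    """Yield (sentence, derivation) pairs reachable within `limit` steps."""
    if "S" not in form:
        yield form, tuple(steps)
        return
    if len(steps) >= limit:
        return
    i = form.index("S")                 # leftmost nonterminal
    for k, rhs in enumerate(productions["S"]):
        yield from leftmost_derivations(
            form[:i] + rhs + form[i + 1:], steps + [k + 1], limit)

by_sentence = defaultdict(list)
for sentence, deriv in leftmost_derivations("S", []):
    by_sentence[sentence].append(deriv)

ambiguous = {s: ds for s, ds in by_sentence.items() if len(ds) > 1}
print(ambiguous)  # includes 'acbb' with derivations (1, 2, 3) and (2, 1, 3)
```

Any sentence that shows up with two distinct leftmost derivations, such as acbb here, witnesses the ambiguity of the grammar.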
{ "domain": "cs.stackexchange", "id": 17763, "tags": "formal-grammars, compilers, parsers" }
Kraus decomposition for a non-trace-preserving operation: shouldn't we have $0 \leq \sum_k E_k^{\dagger} E_k \leq I$?
Question: In Nielsen & Chuang, on page 368, the following theorem is stated: The map $\mathcal{E}$ satisfies axioms A1, A2, A3 if and only if $$\mathcal{E}(\rho)=\sum_k E_k \rho E_k^{\dagger}$$ where $\sum_k E_k^{\dagger} E_k \leq I$. Axiom A2 is convex linearity, axiom A3 is complete positivity, and axiom A1 is: Axiom A1: $0 \leq \operatorname{Tr}(\mathcal{E}(\rho)) \leq 1$. Shouldn't $\sum_k E_k^{\dagger} E_k \geq 0$ be added to the theorem as well, to ensure that the trace can never be negative? So in the end we would have: $$0 \leq \sum_k E_k^{\dagger} E_k \leq I$$ Answer: It's true for any matrix $A$ that $A^\dagger A\ge 0$. This is because $(A^\dagger A v,v)=(Av, Av)\ge 0$, where $(,)$ is the inner product and $v$ is any vector.
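A quick NumPy check of both facts: $A^\dagger A \geq 0$ for a random complex matrix, and $0 \leq \sum_k E_k^\dagger E_k \leq I$ for a non-trace-preserving Kraus set (here the single "no-decay" operator of amplitude damping; the damping parameter and random seed are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(0)

# A^dagger A is positive semidefinite for any complex matrix A
A = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
gram = A.conj().T @ A
assert np.all(np.linalg.eigvalsh(gram) >= -1e-12)  # Hermitian -> real eigenvalues

# Non-trace-preserving example: keep only the "no decay" Kraus operator
# of amplitude damping (as in a post-selected evolution)
g = 0.3
E0 = np.array([[1.0, 0.0], [0.0, np.sqrt(1 - g)]])
S = E0.conj().T @ E0
w = np.linalg.eigvalsh(S)
assert np.all(w >= 0) and np.all(w <= 1 + 1e-12)  # 0 <= sum E^dag E <= I
print(w)  # eigenvalues of sum_k E_k^dagger E_k, here [0.7, 1.0]
```

So the lower bound holds automatically for any Kraus set, which is why only the upper bound needs to be stated.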
{ "domain": "quantumcomputing.stackexchange", "id": 1186, "tags": "textbook-and-exercises, quantum-operation, nielsen-and-chuang, kraus-representation" }
Are fields spatially quantised?
Question: Electromagnetic waves are composed of disturbances in the electric and magnetic fields, which I have heard described thus: each point in the two fields is a vector, and it is not that these points move but that the vectors change in direction and magnitude, and these changes are passed from one point to the next in a manner that collectively forms a wave. With or without this example, the concept of 'points' in a field, or of there being one vector value in one area and another vector value in another area, suggests that the field is spatially quantised, i.e. in order for there to be discrete points in a field, the field has to be divided up into discrete amounts. Firstly, is this correct? I understand that the use of 'points' to explain concepts is semantic and not necessarily a true representation of reality, but again, how are we to assign different vectors to different places in a field if it is continuous? Secondly, if fields are spatially quantised, is this because spacetime is quantised? The electromagnetic field is itself quantised in that photons exist, so would the division of the space that field exists in mean spacetime is the thing that is divided into discrete packets? Answer: No, this isn't correct. In conventional physics, both classical and quantum, spacetime is continuous and fields have a value at every point of this continuum. For example, classical electric and magnetic fields are just vector-valued functions of $t,x,y,z$. In quantum electrodynamics the quantized electromagnetic field is still defined on continuous spacetime. The same applies to all seventeen quantum fields in the Standard Model. However, it is common to discretize spacetime, and have fields defined only at discrete points, as a calculational technique, such as in lattice gauge theory. It is also common to believe that a future theory of quantum gravity will involve the quantization of spacetime itself.
In loop quantum gravity areas and volumes are quantized, but this is not considered conventional physics.
{ "domain": "physics.stackexchange", "id": 63037, "tags": "quantum-field-theory, spacetime, discrete" }
Software that determines whether a molecule can exist and draws it from a formula?
Question: I have run calculations that predict atomic configurations. As a simple example, in a system that contains H and O, I might get a list like:

Configurations for the O atom
O H
0 2
2 6

This would tell me that in my system, there's one O atom bonded to two H atoms, and there's also a molecule made up of three O atoms bonded to each other and six H atoms. My real systems are more complex. I am not really a chemist and have no idea whether some of the molecules I have calculated actually exist and what they would look like if they did. Does anyone know if there's software available that might help me with this; i.e., where I can enter a potential formula and the software draws the molecule and tells me whether it exists? I have so far checked out ChemDraw Professional and Avogadro and it doesn't look like they do what I need. Thank you for any tips. Answer: I believe this website will be of use to you: http://molview.org/ It appears to take names, formulas, SMILES, etc. If the name/formula/SMILES ID pops up in the search bar, it will draw it for you. Its database seems quite large; I use it for some pretty big drug molecules. It draws 2D and 3D images, and it also does single/double/triple bonds.
{ "domain": "chemistry.stackexchange", "id": 12531, "tags": "quantum-chemistry, coordination-compounds, molecular-structure, molecules, software" }
Gravitational slingshot of light using a black hole/massive object
Question: Wikipedia has this page on gravity assists using planets. In some cases this effect was used to accelerate the spacecraft to a higher velocity. This diagram shows this in a very oversimplified manner. That got me thinking that if light is affected by gravity, and if it slingshots around the black hole/massive object, can't it gain a higher speed than $c$? What limitations are stopping it from doing this? Forgive me if the answer to this question is pretty straightforward or of the staring-you-in-the-face kind. I haven't fully understood the mechanics of the gravitational slingshot yet, but I couldn't wait to ask this. Note: If it is possible (I highly doubt that!), could you provide an explanation using Newtonian mechanics? I'm not very familiar with general relativity because I'm in high school. Answer: Newtonian mechanics are out of the question, but at least I can explain without using grad-level general relativity. At the same time, I'm quite sure of the answer. The mechanics internal to the black hole are, indeed, difficult. But if you think about it, we can basically draw a system boundary around the black hole. Its gravitational influence actually goes on forever, but we'll use the approximation of the sphere of influence. Once the photon is far enough, it's basically no longer affected by the black hole. That means that the black hole just acts like a mirror as far as we're concerned. To see this, we need to consider the case of a stationary black hole, and adjust this by reference frame transformations. In the reference frame of the black hole, there is no frequency shift of the photon between entering and exiting. Only momentum is transferred. As the photon gets close to the circular orbit, its energy has increased a great deal, but it will give this energy back to the gravity well as it exits. Just like driving down a hill and then back up.
In order for me to convince you that the photon's energy does not change from entering to exiting, I will argue that the black hole's kinetic energy does not change. Since we're in the black hole's reference frame, its velocity is zero. With a differential change to velocity, $v^2$ will be effectively zero. We can use the reverse logic by assuming the photon does transfer energy, and show a contradiction. Since the black hole gains no kinetic energy, if the photon's energy changed, that must be exhibited in some property of the black hole. But there is no property subject to change. The entering and exiting processes are time-symmetric, so the black hole's rest mass cannot have changed. Hopefully I have convinced you that this black hole operates identically to a mirror. With that info, we can simply apply the relativistic Doppler effect. $$1 + z=\sqrt{\frac{1+U/c}{1-U/c}} $$ This equation describes the relativistic redshift factor, $z$, of a photon when you transfer from one reference frame to another receding at relative velocity $U$ along the photon's direction of motion. We must apply that twice, because the black hole is a moving reference frame. Here is my description of the sequence of events: (1) the photon moves toward the black hole in the lab frame with initial frequency $f_1$; (2) the black hole observes the photon's frequency as $f_1'$, shifted by $U$, in its reference frame; (3) the black hole emits the photon at the same frequency in its reference frame, formally $f_1'=f_2'$; (4) the lab frame observes $f_2$, which is $f_2'$ shifted by $U$. This would be true for any mirror moving at relativistic velocity. The situation you described is just a fancy mirror. I don't have your numbers, but for closure, I'll give this equation (from the definition of the redshift factor): $$1+z = \frac{f'}{f}$$ You might need to give some more thought to the sign of $U$ in applications here. Basically, apply these equations such that $f_1'<f_1$ and $f_2'>f_2$.
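A numeric sketch of the two-step bookkeeping for such a "fancy mirror". I use the standard longitudinal relativistic Doppler factor for a head-on approach (an assumption about the intended geometry; the speed and frequency are arbitrary):

```python
import math

def doppler_factor(beta):
    """Longitudinal relativistic Doppler factor for an approaching
    source/observer pair moving at speed beta = U/c."""
    return math.sqrt((1 + beta) / (1 - beta))

def reflected_frequency(f1, beta):
    """Apply the shift twice: lab -> mirror frame gives f1', the mirror
    re-emits at f2' = f1', and the lab then observes f2."""
    return f1 * doppler_factor(beta) ** 2

f1 = 5e14     # Hz, a visible-light frequency
beta = 0.1    # mirror (black hole) approaching at 0.1 c
f2 = reflected_frequency(f1, beta)
print(f2 / f1)  # (1 + 0.1) / (1 - 0.1) = 1.222..., a net blueshift
```

Reversing the sign of beta gives the corresponding net redshift, which is exactly the sign bookkeeping the last paragraph warns about.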
{ "domain": "physics.stackexchange", "id": 8939, "tags": "general-relativity, gravity, electromagnetic-radiation, speed-of-light, faster-than-light" }
Is there anything to stop an image being projected onto the side walls in a pinhole camera/camera obscura?
Question: I always see diagrams of how a camera obscura works where the projected image neatly stops before or at the edges of the wall opposite the pinhole. But when I look at pictures and videos of rooms converted into a camera obscura (usually as a fun experiment), the image often continues projecting onto the roof, side walls and floor. For example, there is a great picture here demonstrating this: Walk-In Camera Obscura - Empty Kitchen Even though we're primarily interested in the wall directly opposite the pinhole (since in a camera, that's usually where the film or sensor is), doesn't it also mean those diagrams are not entirely accurate? Should they look less like this: And more like this? Or even like this? For I have heard in various places that the angle of view can even be as much as 180°: (Apologies for my crude diagrams, but I think they get the point across) (I originally posted this question on the Mathematics Stack Exchange, but decided it might be better here instead. I hope this is the right place for it) Answer: By placing a tube in front of the pinhole you should be able to confine the solid angle that gets imaged into the room, such that only the wall is illuminated.
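The tube can be sized with basic trigonometry: in the small-pinhole limit, a pinhole at the back of a tube of radius r and length L accepts rays within a half-angle of arctan(r/L). A sketch (the room dimensions and tube size are arbitrary assumptions):

```python
import math

def half_angle_deg(r, length):
    """Half-angle (degrees) of the accepted light cone for a tube of
    radius r and the given length, pinhole at the back."""
    return math.degrees(math.atan(r / length))

# Wall 4 m from the pinhole, 3 m wide: the image cone must stay
# within arctan(1.5 / 4) of the axis to light only that wall.
needed = half_angle_deg(1.5, 4.0)      # ~20.6 degrees
tube = half_angle_deg(0.05, 0.20)      # 5 cm radius, 20 cm long tube
print(needed, tube, tube <= needed)    # the tube confines the image
```

Any tube with r/L no larger than the wall's half-width divided by its distance keeps the projection off the side walls.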
{ "domain": "physics.stackexchange", "id": 94626, "tags": "optics, visible-light, geometric-optics, geometry" }
Unable to install ROS on Ubuntu 12.04
Question: Hi there! I recently upgraded to ubuntu 12.04 but I am not able to install ros-fuerte. "apt-get install" yields: Reading package lists... Done Building dependency tree Reading state information... Done Some packages could not be installed. This may mean that you have requested an impossible situation or if you are using the unstable distribution that some required packages have not yet been created or been moved out of Incoming. The following information may help to resolve the situation: The following packages have unmet dependencies: ros-fuerte-desktop-full : Depends: ros-fuerte-slam-gmapping (= 1.2.7-s1336014066~oneiric) but it is not going to be installed Depends: ros-fuerte-simulator-gazebo (= 1.6.7-s1336006714~oneiric) but it is not going to be installed Depends: ros-fuerte-visualization (= 1.8.11-s1335981752~oneiric) but it is not going to be installed Depends: ros-fuerte-vision-opencv (= 1.8.2-s1335952329~oneiric) but it is not going to be installed Depends: ros-fuerte-perception-pcl (= 1.2.2-s1335953254~oneiric) but it is not going to be installed Depends: ros-fuerte-image-pipeline (= 1.8.2-s1335953718~oneiric) but it is not going to be installed Depends: ros-fuerte-stage (= 1.6.6-s1336041641~oneiric) but it is not going to be installed Depends: ros-fuerte-image-transport-plugins (= 1.8.0-s1335952476~oneiric) but it is not going to be installed Depends: ros-fuerte-visualization-tutorials (= 0.6.3-s1336026792~oneiric) but it is not going to be installed Depends: ros-fuerte-laser-pipeline (= 1.4.4-s1335976222~oneiric) but it is not going to be installed Depends: ros-fuerte-navigation (= 1.8.1-s1335992190~oneiric) but it is not going to be installed E: Unable to correct problems, you have held broken packages. How can I overcome this problem? Thanks! Originally posted by sciarp on ROS Answers with karma: 129 on 2012-05-08 Post score: 1 Answer: That happened to me, too. 
I had a non-fuerte-compatible version of boost that apt-get would not install recursively for some reason. Try to apt-get install one of the packages it's complaining about. It'll probably give you another list of packages that it depends on. If you keep trying to install the dependencies, it'll eventually stop complaining and install it. In my case, installing the one boost library satisfied apt enough to install the desktop-full package without any more errors, but you might have to traverse the dependency tree a bit more. Good luck! Originally posted by thebyohazard with karma: 3562 on 2012-05-09 This answer was ACCEPTED on the original site Post score: 5
{ "domain": "robotics.stackexchange", "id": 9310, "tags": "ros-fuerte, ubuntu-precise, ubuntu" }
How to find the big O notation of $\log^b x$
Question: How would you determine big O notation for $\log^b x$? I don't think you can simply say $O(\log^b x)$, can you? If you can, then here is a better question: $x^3 + \log^b x$. How would you know if it's $O(x^3)$ or something else depending on the $b$ value? Answer: How would you determine big o notation for this? I don't think you can simply do O(log(x)^b) can you? $\mathcal O\left(\log^b x\right)$ or $\mathcal O\left(\left(\log x\right)^b\right)$ is correct. x^3 + log(x)^b Assuming $b$ is a constant. You always take the fastest growing term in a polynomial. $T(x)=\mathcal O\left(\log^b x\right)$ is called polylogarithmic time. In this case, $\mathcal O\left(x^3\right)$ grows faster than $\mathcal O\left(\log^b x\right)$. You can see a list of different complexities (sorted from lowest to highest) here.
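A quick numeric sketch of why the $x^3$ term dominates even a large polylog (the exponent $b = 10$ is an arbitrary choice; for small $x$ the polylog term can actually be bigger, but the ratio still diverges):

```python
import math

b = 10

def poly(x):
    return x ** 3

def polylog(x):
    return math.log(x) ** b

# The ratio x^3 / (log x)^b grows without bound, so
# x^3 + (log x)^b = Theta(x^3) for any constant b.
for x in (10, 10**3, 10**6, 10**12):
    print(x, poly(x) / polylog(x))
```

At x = 10 the ratio is below 1 (the polylog briefly wins), but by x = 10^6 it is already in the millions.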
{ "domain": "cs.stackexchange", "id": 19103, "tags": "asymptotics" }
Including ResourceBundle in enum in order to display enum constant name in different languages
Question: I'm working on spring web application which has to be accessible both in polish and english. So generally, it's all about internationalization. Of course, internationalization is based on getting values by keys from properties files, that are named according to locale id e.g messages_pl. As I'm taking advantage of enums in context of status, categories etc. used by most of my domain entities, I want to display to user enum's constant name according to mentioned above languages. I've already implemented some solution which is based on interface and default method within. I'd like to know your opinion on my approach. Basically I'm trying to make my code to speak for itself (be readable) and be reusable (not to mention about good design). PreferedContactForm: public enum PreferedContactForm implements EnumResourceBundleAware { EMAIL, MOBILE, BOTH; public String handleDisplayedMessage() { return this.handleDisplayedMessage(this); } @Override public String retrieveMessagePattern() { return "prefered.contact.form.{0}"; }} EnumResourceBundleAware: public interface EnumResourceBundleAware { default <E extends Enum<E>> String handleDisplayedMessage(E value) { ResourceBundle resourceBundle = ResourceBundleLoader.load(); String formattedValueAsStr = value.name().toLowerCase().replace('_', '.'); String displayedMessageCode = MessageFormat.format(retrieveMessagePattern(), formattedValueAsStr); return resourceBundle.getString(displayedMessageCode); } String retrieveMessagePattern();} I hope my english is not too bad. Thanks in advance for any opinion or other prefered solutions. @UP (Second solution) Thanks slowy for your opinion and tips (it was helpful). I implemented second solution, where i separated translation responsibility from enums by implementing some sort of service class, which is called EnumTranslator. Of course, I can call above mentioned serivce class from my views (i'm using thymeleaf template engine for rendering my html views). 
As with first solution, i look forward to your opinion. EnumTranslator interface: public interface EnumTranslator { public String searchTranslationForEnum(Enum enm); public String loadTranslationKeyForEnum(Enum enm); } EnumTranslatorSupport (it's my approach to call classes that implment interfaces): public class EnumTranslatorSupport implements EnumTranslator { @Override public String searchTranslationForEnum(Enum enm) { ResourceBundleLoader resourceBundleLoader = ResourceBundleLoader.create(); ResourceBundle resourceBundle = resourceBundleLoader.load(); String enumTranslationKey = loadTranslationKeyForEnum(enm); String foundTranslationForEnum = resourceBundle.getString(enumTranslationKey); if(StringUtils.isBlank(foundTranslationForEnum)) { throw new TranslationForEnumNotFoundException("There is no translation for enum"); } return foundTranslationForEnum; } @Override public String loadTranslationKeyForEnum(Enum enm) { String enumNamespace = enm.getClass().getSimpleName(); String enumNamePreparedForTranslationKey = enm.name().replace('_', ' ').toLowerCase(); String translationKey = ""; for(int i = 0, max = enumNamespace.length(); i < max; i++) { char ch = enumNamespace.charAt(i); translationKey += (Character.isUpperCase(ch)) ? ((i == 0) ? Character.toLowerCase(ch) : "." + Character.toLowerCase(ch)) : ch; } translationKey = translationKey.concat(".").concat(enumNamePreparedForTranslationKey); return translationKey; } } And i would like also to refer to data layer implemented for my web application in context of enum persist. So basically, i'm saving my enums with a little help from my custom converter for enums. The point is, that i want to save lowercased enum constant name along with underscore replaced with single space. To depict it better, here is an example: Enum: IN_PROGRESS Enum constant name, which is saved in database: in progress I know it can look different, but for the time being it works as expected. Answer: Well, when I read it first, it made sense. 
But when I take a closer look, man there's some fancy stuff going on there! public String handleDisplayedMessage() { return this.handleDisplayedMessage(this); } Dude, that's an endless loop. You just Stack-Overflow-Errored my VM :P When I use the API, I do something like this: String msg = PreferedContactForm.MOBILE.handleDisplayedMessage(PreferedContactForm.EMAIL); System.out.println(msg); Now, that's some dubious stuff, isn't it? Why can I call the method on 'MOBILE' and pass 'EMAIL'? This made me literally laugh xD. And what displayed message? Displayed where? Where do I get this display message? What does it do with it? 'getTranslationForEnum' or something would make much more sense. I can also do that: String msgPattern = PreferedContactForm.EMAIL.retrieveMessagePattern(); Now what? What can I do with that? Do I need that? Is this ... important? I would have chosen 'getMessageKeyPrefix' or something like that. Why must this be exposed anyway? And this part: default <E extends Enum<E>> String handleDisplayedMessage(E value) { With the generic, this misses the whole point of having the 'EnumResourceBundleAware': you can pass any enum! It should be this: default String handleDisplayedMessage(EnumResourceBundleAware value) { Why didn't you just make a simple class with a static method, which takes a prefix and an enum value? If you choose to save the message key as 'class.name' + "." + enumValue, for instance 'pack.age.PreferedContactForm.MOBILE=Mobile', you wouldn't even use the messageKeyPrefix! And you can get rid of MessageFormat. Why not something like this: public static String getTranslationForEnum(EnumResourceBundleAware value) And the API would be something like TranslationThingerThing.getTranslationForEnum(PreferedContactForm.MOBILE); When it comes to design, I wouldn't put the functionality of translation into an enum, even if it's inherited. And to have a default implementation in an interface still bothers me; it feels like an oxymoron to me.
The functionality of translation itself is a 'concern', and should be separated (See what I did there?). Also, let's assume the enum value can be saved in a backend; you then also pass translation functionality to the persistence layer. Not a big fan of that. And considering testing: here's the usual 'having an abstract class' problem. How do I test abstraction? You could either use an existing implementation and call it good, or write an implementation for your test case only. It's not too bad, but it's not sexy either. My main problem with abstraction is that if you mix abstraction with implementation, as in your case where your abstract method calls a method from the implementation, the method you want to test is not tested cohesively. The 'having an interface for enums to indicate that they are translatable' thing I have seen before, but with the main goal of having a test case that crawls the classpath for those enums, to verify that all enums are translated. Another variant I've seen was to have those enums annotated. Hope this helps, slowy
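For what it's worth, the key-derivation logic in loadTranslationKeyForEnum above is easy to prototype outside Java. Here is a Python sketch of the same camel-case-to-dotted conversion (my own transliteration for illustration, not code from the post):

```python
def translation_key(class_name, constant_name):
    # "PreferedContactForm" -> "prefered.contact.form", mirroring the
    # character loop in loadTranslationKeyForEnum above
    parts = []
    for i, ch in enumerate(class_name):
        if ch.isupper() and i > 0:
            parts.append(".")
        parts.append(ch.lower())
    # enum constant: underscores become spaces, then lowercase,
    # matching the persistence format described in the question
    return "".join(parts) + "." + constant_name.replace("_", " ").lower()
```

Prototyping the mapping like this makes it easy to eyeball the generated resource-bundle keys before committing to them in Java.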
{ "domain": "codereview.stackexchange", "id": 27175, "tags": "java, enum, localization" }
Is it possible to recover Classical Mechanics from Schrödinger's equation?
Question: Let me explain in detail. Let $\Psi=\Psi(x,t)$ be the wave function of a particle moving in a unidimensional space. Is there a way of writing $\Psi(x,t)$ so that $|\Psi(x,t)|^2$ represents the probability density of finding a particle in classical mechanics (using a Dirac delta function, perhaps)? Answer: Sure you can! This is actually a simple but very interesting result, and it is usually shown in quantum mechanics courses. It's called the Ehrenfest theorem, and I won't prove it here, but I'll copy the result from Sakurai's Modern Quantum Mechanics (1991). You can check the mathematical details there, or in many other books. If you have a Hamiltonian of the form $$H = \frac{p^2}{2\,m}+V(x)$$ you can prove that, in the Heisenberg picture, $$m \frac{\mathrm{d}^2x}{\mathrm{d}t^2} = -\nabla V(x) .$$ If you now take the expectation value of that equation (for certain state kets), you get $$m \frac{\mathrm{d}^2\langle x \rangle}{\mathrm{d}t^2} = \frac{\mathrm{d}\langle p \rangle}{\mathrm{d}t} = -\langle \nabla V(x) \rangle .$$ This result is valid in both the Heisenberg and the Schrödinger picture. If you want to recover the classical limit, you need to say that the region where the wavefunction is significantly nonzero is much smaller than the scale of variations of the potential. In that case, you can identify the center of the wavefunction with the position of the particle, and $\langle \nabla V(x) \rangle $ turns into $\nabla V(\langle x \rangle) $. What this means, conceptually, is that the center of the wavefunction will move according to the classical laws if you can't "see" that your object/particle is not a material point, and if your potential is also classical, in that it doesn't have variations that are comparable to the "size" of the wavefunction.
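As a quick numerical illustration of the theorem (my own sketch, not part of the original answer): evolve a displaced Gaussian packet in a harmonic well with a split-step Fourier integrator, in units where $\hbar = m = \omega = 1$, and check that $\langle x \rangle$ follows the classical trajectory $x(t) = x_0 \cos t$, so after half a period it sits at $-x_0$.

```python
import numpy as np

# grid and an initial Gaussian packet displaced from the well centre
N, x0 = 1024, 2.0
x = np.linspace(-20, 20, N, endpoint=False)
dx = x[1] - x[0]
k = 2 * np.pi * np.fft.fftfreq(N, d=dx)
psi = np.exp(-0.5 * (x - x0) ** 2).astype(complex)
psi /= np.sqrt(np.sum(np.abs(psi) ** 2) * dx)

V = 0.5 * x ** 2                  # harmonic potential, hbar = m = omega = 1
dt, steps = np.pi / 2000, 2000    # evolve for half a period, t = pi

for _ in range(steps):            # Strang-split step: V/2, T, V/2
    psi *= np.exp(-0.5j * V * dt)
    psi = np.fft.ifft(np.exp(-0.5j * k ** 2 * dt) * np.fft.fft(psi))
    psi *= np.exp(-0.5j * V * dt)

mean_x = np.sum(x * np.abs(psi) ** 2) * dx   # expectation value of position
# a classical particle released at x0 reaches -x0 after half a period
```

Here the potential varies slowly on the scale of the packet, so the classical-limit condition discussed above is met and $\langle x \rangle$ lands at $-x_0$ to good accuracy.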
{ "domain": "physics.stackexchange", "id": 96230, "tags": "quantum-mechanics, classical-mechanics, schroedinger-equation, faq" }
How does one convert between Modified Julian Date (MJD) and a standard (mm/dd/yr, hr:mm:ss)
Question: I looked online and couldn't see an actual formula or anything, so I figured I'd ask here. If I had an MJD like the following: 59145.6678 How would I convert that to a month, day, and year with the hour and minutes? Thanks in advance! Answer: A Modified Julian Date is just a Julian Date minus 2,400,000.5. So you could just add 2,400,000.5 to the MJD, and use existing Julian Date routines to do the conversion. Here are some routines which break a JD into Month, Day, Year and vice versa. But, if you're using any modern language which has built-in date routines, I highly recommend you use those instead to avoid the many special cases when dealing with dates. Most languages will accept the time in seconds since Jan 1, 1970 ("Unix Time") as an input to date routines, and the functions below will convert a MJD to/from Unix Time in milliseconds: function ModifiedJulianDateFromUnixTime(t){ return (t / 86400000) + 40587; } function UnixTimeFromModifiedJulianDate(jd){ return (jd-40587)*86400000; } The above functions are in JavaScript, but it is trivial to convert to nearly any other language. Some languages accept seconds rather than milliseconds, so divide by 1000 for those. For example, in JavaScript, the code below will get all of the components. date = new Date(UnixTimeFromModifiedJulianDate(59145.6678)); day=date.getDate(); year=date.getYear()+1900; month=date.getMonth()+1; hour=date.getHours(); minutes=date.getMinutes(); seconds=date.getSeconds(); milli=date.getMilliseconds(); console.log(year,month,day,hour,minutes,seconds,milli); Returns: 2020 10 23 12 1 37 920 You must be careful about any dates prior to Oct 15, 1582 (the start of the Gregorian Calendar). What answer is correct depends a lot on what your intentions are. The routines in the link above assume anything before the Gregorian Calendar is in the Julian Calendar, which matches what JPL Horizons does.
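A note on the output above: the JavaScript Date getters report local time, so the 12:01 shown is the answerer's local clock (consistent with UTC-4); in UTC the same instant is 16:01:37.92. A Python sketch (my own) of the same conversion, working directly from the MJD epoch of 1858-11-17 00:00 UTC instead of going through Unix time:

```python
from datetime import datetime, timedelta, timezone

# MJD = JD - 2400000.5, so MJD 0 falls on 1858-11-17 00:00 UTC
MJD_EPOCH = datetime(1858, 11, 17, tzinfo=timezone.utc)

def mjd_to_datetime(mjd):
    """Modified Julian Date -> timezone-aware UTC datetime."""
    return MJD_EPOCH + timedelta(days=mjd)

def datetime_to_mjd(dt):
    """Timezone-aware datetime -> Modified Julian Date (float days)."""
    return (dt - MJD_EPOCH) / timedelta(days=1)

dt = mjd_to_datetime(59145.6678)   # 2020-10-23 16:01:37.92 UTC
```

The same pre-1582 caveat applies: Python's datetime uses the proleptic Gregorian calendar throughout, so early dates will not match Julian-calendar conventions.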
{ "domain": "astronomy.stackexchange", "id": 6687, "tags": "observational-astronomy, star, telescope, amateur-observing, data-analysis" }
What is "Laplacian image space"?
Question: I have been working through Beyond Face Rotation: Global and Local Perception GAN for Photorealistic and Identity Preserving Frontal View Synthesis where the authors make the following claim (subsection 2.2.1): ...the Laplacian space is more robust to illumination changes and more indicative for face structure. What is Laplacian space in this context? How is it robust to illumination changes? Answer: They should really be more clear about what they mean, but I expect they're using a Laplacian pyramid. As more evidence, they cite: Denton et al, "Deep generative image models using a laplacian pyramid of adversarial networks." The idea is: store a very low resolution copy of your image, and a series of "difference" images. Each difference image tells you what to add to the lower resolution copy to get the next higher resolution version of the image. You can imagine that lots of values of the difference will be close to zero. The "mean intensity" and therefore "illumination" is only really stored at that lowest resolution copy, and doesn't really (usually, they hope) affect the gradient of the image, which is what the Laplacian pyramid stores. That's why those authors say it's not sensitive to illumination changes. Does that make sense?
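To make the pyramid idea concrete, here is a minimal NumPy sketch (my own illustration, using crude block-average downsampling rather than the Gaussian filtering used in practice). Note how adding a constant "illumination" offset to the image changes only the low-resolution residual, leaving every difference image untouched:

```python
import numpy as np

def downsample(img):
    # crude low-pass + decimate: average each 2x2 block
    h, w = img.shape
    return img.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def upsample(img):
    # nearest-neighbour expansion back to double the size
    return np.repeat(np.repeat(img, 2, axis=0), 2, axis=1)

def laplacian_pyramid(img, levels):
    pyramid, current = [], img.astype(float)
    for _ in range(levels):
        small = downsample(current)
        pyramid.append(current - upsample(small))  # difference image
        current = small
    pyramid.append(current)  # low-res residual holds the mean intensity
    return pyramid

def reconstruct(pyramid):
    current = pyramid[-1]
    for diff in reversed(pyramid[:-1]):
        current = upsample(current) + diff
    return current
```

Reconstruction is exact by construction, since each difference image is defined as what was lost in the downsample step.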
{ "domain": "datascience.stackexchange", "id": 6980, "tags": "deep-learning, computer-vision, 3d-reconstruction" }
HPV. How do viruses persist outside the body?
Question: The main route of transmission of human papillomavirus (HPV) is generally believed to be sexual. While fomites have been postulated for inexplicable infections, sexual health professionals regularly counter that lying respondents are the more likely explanation. Research into HPV seems to have mostly stalled, and there are few in-depth in vitro studies into the viability of the virus outside the body, the infectivity of fomites or their infective dose that I could find. There are some footnotes and references to animal models, with claims that HPV can remain infective at least 7 days after desiccation, but no upper limits. There is some more precise research, but understanding its significance is difficult for a layman. For example, research published in Nature stated HPV-16 "can remain viable and capable of transducing the reporter construct upon stimulation of cell cycle re-entry for at least 2 weeks, but most likely for no longer than 4 weeks." How can one understand the viability and limits of viruses, and HPV in particular, outside the body? To clarify: Despite the fanciful language of the bounty I'm not looking for certainty, but general principles of environmental virology and an overview of statistical observations in regards to HPV in particular. Note: The various aspects which influence the persistence of viruses in the environment spoken of below are summarized with some additional detail in a literature review published in 2010 in Food and Environmental Virology. Answer: As far as I can tell this topic, specifically for human papillomavirus (HPV), hasn't been fully investigated. There is some literature on it (as cited by OP), but it is sparse, making this an open field for someone to investigate. As such there is no certainty about how long a sample can persist in the environment, as a huge number of factors come into it.
However, this is a difficult topic to investigate - how do you mimic in the laboratory the conditions under which you would find the virus in a natural shedding? In terms of HPV, no-one seems to have fully investigated just how the virus is deposited, i.e. how it is shed in a natural manner from warts. It is assumed in the literature that it is in the form of keratinised squamous epithelial cells with virus inside, and probably naked virus too. These will have different profiles of decay in the various environments because of the natures of the different forms. In terms of environments for persistence, there are a huge number of factors that might play into it. Here are a few of the known significant ones: heat (max temperature and min temperature as well as freeze/thaw cycles), light (particularly UV from the sun), humidity (overall in the environment and local to the place where the virions are), salinity (beach?), chemical (e.g. chlorine at a pool), how the virus is deposited, and last but not least, what the virus is in (a skin cell? mucus? naked virion?) HPV is a non-enveloped virion containing double-stranded DNA. The virion is composed of 72 capsomers, each consisting of 5 copies of the structural protein L1, and forms a structure with icosahedral symmetry. The capsid is capable of self-assembly under the right conditions, which means that it is likely quite stable (relative to some other viruses), as it can potentially re-form if damaged by heat or something similar. The virus enters the cell through abrasions or micro-traumas of the skin surface, where it invades the squamous epithelium and only grows in those keratinized cells. It can also invade and grow in epithelial tissues of some mucosa such as the nasal passages, throat and vagina. Some of these tissues are shed containing virus through abrasion and natural shedding of the epithelial surfaces.
Now that we have a bit of the biology out of the way: How do we test for persistence of a virus in the laboratory? Actually it is quite simple: generally you grow and purify the virus (or a surrogate similar virus) and then place some as droplets on the surface you wish to test. You then subject it to the conditions that you want to mimic and see how long you can detect it for. The methods of detection vary depending on the virus and study. Some will use molecular detection methods like the ability to PCR amplify genetic material from the virus as a surrogate for genetic damage making the virus incapable of replication. The other approach most commonly used is to try to propagate the virus from swabs taken from the surface it was applied to. This doesn't work for all viruses, as some are not culturable, or are very difficult to culture (e.g. for SARS-CoV-2, only about 30% of positive samples can be cultured, but for measles virus very close to 100% can be cultured). If the virus is not culturable, it may be that it can get into the cell and either replicate but not escape the cell, or that replication is incomplete, so genes are expressed but no virions are formed. If either of those two options is the case, then the presence/viability of the virus on the surface can be determined by looking for activity of one or more viral genes. I can't give you landmarks for HPV, and the exact sequence of events is unknown for any of the viruses I know anything about, but in general viruses tend to be non-viable before you can no longer detect genetic material. Exactly how long each of these takes depends highly on the virus. For instance, take two quite topical viruses - SARS-CoV-2 and influenza, both enveloped (less stable) RNA viruses with similar size and structures and similar routes of transmission.
Viable influenza persists for about 2 weeks on stainless steel, but only 1 week on cotton, yet you can detect the RNA for up to 17 weeks, while CoV-2 only persisted for 1 week on steel, but RNA could be detected for the full duration of the study (though I can tell you personally, from my as-yet-unpublished CoV-2 results: >14 days on some surfaces under some conditions at room temp).
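Persistence experiments like those described here are typically summarised by fitting a first-order (log-linear) decay to the viable titre. A sketch of that arithmetic with purely hypothetical numbers (real decay constants for HPV on a given surface would have to come from the kinds of studies discussed above):

```python
import math

def time_to_fraction(half_life_days, fraction):
    # first-order decay: N(t) = N0 * 2**(-t / half_life)
    # solve N(t)/N0 = fraction for t
    return -half_life_days * math.log2(fraction)

# hypothetical: if the viable titre halved every 2 days, a 1000-fold
# drop (often near the culture detection limit) would take ~20 days
t_drop = time_to_fraction(2.0, 1e-3)
```

This also shows why RNA/DNA detection outlasts viability: the genetic-material signal decays with a much longer effective half-life than infectivity does.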
{ "domain": "biology.stackexchange", "id": 11341, "tags": "virology, cancer, infection, infectious-diseases" }
How to publish an array of 3D points for visualization in RVIZ? [Solved]
Question: Hello all, I did all the basic tutorials of visualization_msgs for publishing points as different shaped markers. Is it possible for anyone to let me know how to publish an array of points so that I can see them in rviz in real time? I have an array of 20 points in 3D and I want to publish them as visualization_msgs/MarkerArray. But I have no idea how to do that. It will be a great help if anyone can provide a snippet in c++ for this purpose. Thanks, Prasanna. Originally posted by PKumars on ROS Answers with karma: 92 on 2016-04-06 Post score: 0 Answer: You can use a simple Marker (no MarkerArray!) and use the Points type (http://wiki.ros.org/rviz/DisplayTypes/Marker#Points_.28POINTS.3D8.29). The Marker (http://docs.ros.org/jade/api/visualization_msgs/html/msg/Marker.html) has a points member that can be used to pass a list of points to RVIZ. If you want to have a nicer output, you could also use the SPHERE_LIST, although all spheres will have the same color. Originally posted by NEngelhard with karma: 3519 on 2016-04-06 This answer was ACCEPTED on the original site Post score: 1 Original comments Comment by PKumars on 2016-04-06: Thanks for your quick reply. What if the points are not fixed and are changing, and then I have just an array? I know only that it is of 3D type and I don't want to change the points manually. The points are also not linear in spacing. I mean to say that the points of my array can be anywhere. Comment by NEngelhard on 2016-04-06: You can just republish your points after their position changed. How often do you want to update the position in RVIZ? Comment by PKumars on 2016-04-06: I want to publish all the points at once. Is it possible? And then delete in the next iteration. Comment by NEngelhard on 2016-04-06: Yes. You send one marker that contains your 20 points. If you publish again (with the same namespace and id), the old points are replaced by the new ones.
Comment by PKumars on 2016-04-06: I'm sorry, but I can't figure out how to do that. I followed http://wiki.ros.org/rviz/Tutorials/Markers%3A%20Points%20and%20Lines this tutorial and used only points and not lines. I'm still unable to solve this problem. Comment by PKumars on 2016-04-07: Thanks for your kind and detailed suggestion. I solved the problem and I'm posting that as an answer.
{ "domain": "robotics.stackexchange", "id": 24325, "tags": "ros, c++, rviz, markerarray, visualization-msgs" }
Standing sound wave tube
Question: If there was a standing sound wave tube and a flammable gas was introduced then ignited, would the combustion be more forceful and more efficient, since it's following a standing wave, than just a gas ignited within a tube? Answer: Although your terms are not precise, I have a feeling that what you are describing is a Rubens' Tube. Yes, the combustion reflects the standing wave as shown below (from Wikipedia): Recall that sound is made up of pressure waves, and so a standing sound wave means there is larger pressure at some points and lower pressure at others. At the high pressure points, the fuel is pushed out through the opening faster, resulting in a taller flame. Also, because the density is increased, more fuel comes out and combustion is enhanced. The opposite is true at the lower pressure regions.
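To see where the tall and short flames would sit, one can sketch the pressure amplitude of the standing wave along the tube. This is an illustration with assumed numbers (1 m tube, fifth harmonic); real Rubens tubes are driven by a speaker, so the effective boundary conditions are messier:

```python
import numpy as np

L = 1.0            # tube length in metres (assumed)
n = 5              # harmonic number: resonance when L = n * wavelength / 2
v = 343.0          # speed of sound in air, m/s
wavelength = 2.0 * L / n
freq = v / wavelength          # drive frequency for this mode, ~857.5 Hz
k = 2.0 * np.pi / wavelength

x = np.linspace(0.0, L, 1001)
amplitude = np.abs(np.cos(k * x))   # pressure amplitude of the standing wave

# pressure antinodes (tall flames, per the answer) sit where |cos(kx)| = 1
antinodes = x[np.isclose(amplitude, 1.0, atol=1e-6)]
```

For the fifth harmonic this gives six evenly spaced antinodes, L/n apart, matching the evenly spaced flame peaks seen in Rubens tube photos.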
{ "domain": "physics.stackexchange", "id": 12562, "tags": "waves, acoustics, propulsion" }
Error when creating snaps for ros subscriber/publisher tutorial
Question: I am very new to snaps. I am following this tutorial to get started with ROS and snaps. https://insights.ubuntu.com/2017/03/22/distributing-a-ros-system-among-multiple-snaps/ I was successful in creating the ros-base snap that contains roscore and the basic ROS packages. When I try creating a snap for ros-app, I get an error: Unable to find package path: "/home/user/snap-apps/ros-app/parts/ros-app/src/src" The problem, I guess, is that it is looking for the sub-directory src/src instead of just src/. Do I need to edit the snapcraft.yaml file to change some paths? Originally posted by prarobo on ROS Answers with karma: 35 on 2017-04-28 Post score: 0 Original comments Comment by kyrofa on 2017-04-28: Which version of Snapcraft are you using? Comment by prarobo on 2017-05-01: Snapcraft version: 2.29 Answer: It works for me, so it's possible you accidentally diverged from the tutorial somewhere along the line (e.g. the Snapcraft project isn't in the root of the workspace, etc.). That tutorial is actually a demo in the Snapcraft source tree, I suggest you start with that. Originally posted by kyrofa with karma: 347 on 2017-05-01 This answer was ACCEPTED on the original site Post score: 0 Original comments Comment by prarobo on 2017-05-04: I will try again and let you know.
{ "domain": "robotics.stackexchange", "id": 27748, "tags": "ros" }
Implications of proper Lorentz transformations having determinant one
Question: I'm presently studying the proper Lorentz transformations of a simple 1+1D Minkowski spacetime. What I know is that the matrices corresponding to those transformations have a determinant of +1. $$ \det\left[\begin{array}{cc} \gamma & -\gamma \frac{v}{c^2} \\ -\gamma v & \gamma \end{array}\right] = +1 $$ What I don't know is what this implies in physical terms. Is there any practical reason why areas of a spacetime diagram should remain constant after applying such transformations? Are areas describing any meaningful quantity? Answer: In $(t,x)$ coordinates, a boost acts as $$ \begin{pmatrix} t \\ x \end{pmatrix} \mapsto \begin{pmatrix} \cosh \phi & \sinh \phi \\ \sinh \phi & \cosh \phi \end{pmatrix} \begin{pmatrix} t \\ x \end{pmatrix}. $$ where $\phi$ is the rapidity (just a more convenient way to parameterize boosts), and $\cosh$ and $\sinh$ are just the "hyperbolic trigonometric functions," i.e. $$ \cosh \phi = \frac{1}{2} ( e^\phi + e^{- \phi} ) = \gamma \\ \sinh \phi = \frac{1}{2} ( e^\phi - e^{- \phi} ) = \gamma v $$ which we use because they satisfy the identity $$ (\cosh \phi)^2 - (\sinh \phi)^2 = 1 $$ just like how $$ (\gamma)^2 - (\gamma v)^2 = 1 $$ in units where $c = 1$. Anyway, in lightcone coordinates $$ u = t - x \\ v = t + x $$ a boost acts as $$ \begin{pmatrix} u \\ v \end{pmatrix} \mapsto \begin{pmatrix} e^{-\phi} & 0 \\ 0 & e^{\phi} \end{pmatrix} \begin{pmatrix} u \\ v \end{pmatrix} $$ where to get the above formula we just used the definitions $$ \cosh \phi + \sinh \phi = e^\phi \\ \cosh \phi - \sinh \phi = e^{- \phi}. $$ If you want $e^\phi$ in terms of $v$, then $$ e^\phi = \cosh \phi + \sinh \phi = \gamma + \gamma v = \frac{1 + v}{\sqrt{1 - v^2}} = \sqrt{ \frac{ 1 + v}{1 - v} }. $$ The point is that this transformation elongates the $v$ coordinate while squashing the $u$ coordinate in such a way that the total $uv$ area is preserved.
Note that $$ uv = t^2 - x^2 $$ which is the square of the invariant spacetime interval for someone who starts at $(0,0)$ and ends up at the event with lightcone coordinates $(u,v)$. So the invariance of this area is exactly the invariance of the (squared) proper time under a Lorentz transformation, as one might expect.
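A quick numerical check of both statements (my own sketch): the boost matrix has unit determinant, and the lightcone product $uv$, hence the $t^2 - x^2$ area, is unchanged because $u$ scales by $e^{-\phi}$ while $v$ scales by $e^{\phi}$:

```python
import numpy as np

phi = 0.7   # arbitrary rapidity
boost = np.array([[np.cosh(phi), np.sinh(phi)],
                  [np.sinh(phi), np.cosh(phi)]])

t, x = 3.0, 1.0
t2, x2 = boost @ np.array([t, x])

u, v = t - x, t + x            # lightcone coordinates before the boost
u2, v2 = t2 - x2, t2 + x2      # and after
```

Changing `phi` or the event coordinates leaves all of the checks below intact, which is the point: the area preservation is a property of the boost, not of the particular event.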
{ "domain": "physics.stackexchange", "id": 78536, "tags": "special-relativity, spacetime, inertial-frames" }
Favored Conditions of Bacterial Growth
Question: I have read that bacteria "thrive" in warm places. Naturally, I am very interested in why this is the case. Humans, for instance, also thrive in relatively warm conditions; if it's too cold or too warm, we die. However, why would this also apply to bacteria? Answer: The idea that bacteria thrive in warm places is mostly biased from a human perspective. Human researchers will tend to care more about bacteria that cause human disease. These pathogenic bacteria can replicate in/on humans at ~37 °C (which is warm). As Dexter points out, bacteria can grow at other temperatures. But I'd go deeper and consider the proteins (and other macromolecules) in the cell. For example, the enzymes that carry out many important processes for the bacteria (such as the DNA polymerases that build the DNA) are most active at certain temperatures: in E. coli or humans or dogs that's ~37 °C; for a thermophile it's warmer. See below:
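One common way to quantify "most active at certain temperatures" is the Q10 temperature coefficient: the factor by which a reaction (or growth) rate increases per 10 °C rise. A rough sketch (Q10 near 2 is a typical textbook value, and the rule only holds below the enzyme's or organism's optimum, above which proteins start to denature and the rate collapses):

```python
def rate_at(temp_c, rate_ref=1.0, temp_ref_c=20.0, q10=2.0):
    # Q10 rule of thumb: rate scales by q10 for every 10 degree C rise;
    # only meaningful below the optimum temperature
    return rate_ref * q10 ** ((temp_c - temp_ref_c) / 10.0)
```

With these illustrative numbers, a mesophile's enzymes run roughly twice as fast at 30 °C as at 20 °C, and about 3.2 times as fast at 37 °C, which is part of why warm, host-like temperatures look so favourable for the pathogens we study most.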
{ "domain": "biology.stackexchange", "id": 4720, "tags": "microbiology, bacteriology" }
How does a traffic light sense the proximity of vehicles?
Question: Some traffic lights don't operate periodically but instead detect when a car is close by and then turn green. I have heard that they use a magnetic sensor embedded in the road to sense cars as they come near. Is this correct? Do they use other means as well? Answer: As others stated before, induction loops are the primary and most reliable method: the coils (usually just several loops of wire) are embedded in the road and fed a given frequency from a generator; in the presence of metal the frequency of the LC circuit changes, and the sensor circuitry detects the change of frequency, producing a presence signal. In some cases these may fail to detect bicycles, but they are by far the most common as they aren't affected by weather (or more precisely, the detection circuit tunes in to slow changes of frequency caused by weather) and are immune to accidental false positives. Note the loops can be localized (~2 m size) or cover a lengthy part of a lane. Detection is performed by cards like these: and by induction loops made with wire laid in grooves like these: or placed in pipes under the road surface at construction time (in the photo is a loop for tram detection, but pre-built loops are similar) Video detection: cameras connected to a specialized card, with "detection zones" defined through specialized software, detect the vehicles. They are vulnerable to bad weather and tend to produce false positives from the glare of car headlights, shadows of vehicles in the neighboring lane and such, but in certain cases, primarily where the road surface makes installing detection loops impossible (gravel, or a bad road surface), they are preferred. Additionally, the video detection cards are significantly more expensive than cards for detection loops.
There are a few lesser-used techniques like geomagnetic (detecting changes in the magnetic field; these largely depend on the size of the vehicle, so a large truck can trigger a sensor in the neighboring lane, but they are more durable), radar (detects only moving vehicles*, but is frequently used to detect pedestrians as they rarely stay immobile), laser (measuring the distance to the road surface; a vehicle in the way changes the measured distance. Quite reliable, but only point detection, no area detection). Pictured below is a geomagnetic sensor: and radar sensors (short range for pedestrians and bicycles, and long range, for cars): I heard of pneumatic and piezoelectric, but I've never seen these in use for traffic control - probably problems of wear and durability; I know these are used for automated barriers for parking lots, but they obviously support an order of magnitude lower traffic. For city transport traffic, the vehicles are equipped with an on-board computer with a short-range radio (up to 500 m) and GPS, and they broadcast messages about entering pre-defined "checkpoints" to the traffic system, along with data about intended turn direction, delay against schedule and some others, allowing the controller to prioritize. An alternative is a system that feeds vehicle positions to a central unit, which then contacts controllers with messages about prioritizing these vehicles. Last but not least, cameras/sensors detecting strobe lights of a specific frequency give immediate priority to oncoming emergency vehicles (and take a photo of the vehicle in question, to prevent abuse). Controllers can communicate with each other and share their detector states, so two controllers can use each other's detectors, for example when they are a short way away from each other. Two induction loops a short distance (~1 m) from each other are used to determine the speed and length of vehicles, making adapting to longer or slower vehicles possible.
Another application of pairs of detection loops near to each other is in directional detectors: based on the order in which the neighboring loops are activated, one can determine the direction the vehicle is moving. This is rarely used for cars, but if a single rail line with trams (street cars) moving in both directions crosses a road, the same two pairs of detectors can activate the green light for the vehicle and then register that it finished crossing the street, regardless of its direction, as the pairs can generate "approaching / departing" signals. A special "virtual" detector composed of two loops in one lane at a considerable distance from each other measures the length of the queue of cars, allowing prediction of the time necessary to vacate the lane (and making "time countdown displays" viable). Another special type of detector is a "blocking" one, placed either in the middle of the crossing (camera) or behind it, on the "departing" lane (usually a detection loop); its purpose is to delay/block entry until the crossing is vacated, or to prevent blocking the crossing if a traffic jam has formed in the "exit" lane and new vehicles would be unable to depart. Note this is the "standard" set, but since the controllers can accept a standardized 24V/'contact' signal, any generic source can be used, for example an infrared remote control to enable that one specific direction which is used in 0.1% of cases, activated by the owner of the house with a driveway right into the crossing, or by a manual trigger from a factory gate to enable a truck to enter/leave, or whatever need arises. Below is a generic 16-input/16-output card. These are usually used for pedestrian buttons (and lamps), but they can provide signals from arbitrary sources and control arbitrary end-point devices. In some cities detectors work in "pairs" of two types; for example, detection loops are very reliable for detecting vehicles, but mechanical stress from heavy transport can damage them, and repairing them is not a trivial matter.
The card can detect a damaged loop (usually open circuit -> no frequency, or short circuit -> very high frequency), and in such a case the controller starts using a backup sensor, for example radar or laser. And just a screenshot from one of the controllers, showing the map with detectors displaying their state live (blue = active). Note the detector on the far right: it doesn't belong to this controller; it's composite data from a neighboring controller, so that the short road connecting the two doesn't get congested - as long as there are cars waiting in the potential congestion zone, no more will be allowed into it from the other directions. *Note that while radar detectors can only detect cars in motion, that doesn't mean they can't be used as a standalone solution (rather than "just support"). Sometimes the induction loops are placed at the wrong locations as well (for various reasons, incompetence of the investor not the least of them), so cars stop behind/between them and don't trigger them during a red light. This is still not a very big problem, as any detector can be set as one with "memory". Any vehicle even momentarily activating such a detector causes it to keep the active state until the green light on the associated lane; it then acts as normal ("forgetful") during the green light. Also note this is the default behavior for pedestrian pushbuttons. Of course this is not ideal, as a vehicle may get stuck right outside the detection zone exactly during the change from green to red, or (say, due to the driver's fault) miss the whole green cycle altogether. Still, these are relatively rare cases, especially since another approaching vehicle will usually trigger the detector anyway.
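The loop-detection principle from the first paragraph can be sketched numerically: the loop and its capacitor form an LC resonator, and the eddy currents induced in a car's steel underbody reduce the loop's effective inductance, raising the resonant frequency the detector card watches. Component values below are illustrative, not taken from any real card:

```python
import math

def resonant_freq(inductance_h, capacitance_f):
    # LC resonance: f = 1 / (2*pi*sqrt(L*C))
    return 1.0 / (2.0 * math.pi * math.sqrt(inductance_h * capacitance_f))

f_empty = resonant_freq(100e-6, 100e-9)   # ~50 kHz with no vehicle present
f_car = resonant_freq(99e-6, 100e-9)      # eddy currents cut L by ~1%
shift = f_car - f_empty                   # the card triggers on this shift
```

Since $f \propto 1/\sqrt{L}$, a 1% drop in inductance gives roughly a 0.5% upward frequency shift, which is the kind of slow-vs-fast change the detection circuit distinguishes from weather drift.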
{ "domain": "engineering.stackexchange", "id": 33, "tags": "electrical-engineering, civil-engineering, traffic-light" }
How much torque is needed?
Question: If we wanted to open a gate, what is the maximum force that a man can apply at a distance of 1 m from the pivoted joint? I want the torque in order to determine the motor and gearbox that can be used to automate the gate. Answer: The maximum push force for an adult male is 818 N according to NASA - https://msis.jsc.nasa.gov/sections/section04.htm#Figure%204.9.3-6 Applying this at a distance of 1 m from the pivot corresponds to a torque of: $$T=F*d=818\text{ N} * 1\text{ m}=818\text{ Nm}$$ This is likely much more torque than is required to automate the gate, however - your calculations should be based on the mass and shape of the gate, and any speed requirements for opening time and/or acceleration, rather than on the force of a human.
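For the mass-and-speed-based sizing the answer recommends, a minimal sketch (my own, with assumed numbers): treat the gate as a uniform slab hinged at one edge, so its moment of inertia is $I = m w^2/3$, and require it to sweep 90 degrees in a given time under constant angular acceleration. Friction, wind load, and gearbox losses would add margin on top of this figure:

```python
import math

def gate_motor_torque(mass_kg, width_m, swing_rad=math.pi / 2, open_time_s=10.0):
    # uniform gate hinged at one edge: I = m * w^2 / 3
    inertia = mass_kg * width_m ** 2 / 3.0
    # theta = 0.5 * alpha * t^2 with constant angular acceleration
    alpha = 2.0 * swing_rad / open_time_s ** 2
    return inertia * alpha

# hypothetical 100 kg, 3 m wide gate opening 90 degrees in 10 s
torque = gate_motor_torque(100.0, 3.0)   # ~9.4 N*m
```

Even for a fairly heavy gate this comes out around 9.4 N*m, two orders of magnitude below the 818 N*m human-push figure, which illustrates the answer's point that the human force is a poor basis for motor sizing.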
{ "domain": "engineering.stackexchange", "id": 2604, "tags": "torque" }
Eigenvalue of a vector in a subspace
Question: Consider a quantum system with a Hamiltonian whose eigenstates are $\{|\phi_1\rangle,|\phi_2\rangle,|\phi_3\rangle\}$, with associated eigenvalues $\{\lambda_a,\lambda_a,\lambda_b\}$. My notes state that any vector in the subspace spanned by $\{|\phi_1\rangle,|\phi_2\rangle\}$ has the corresponding eigenvalue $\lambda_a$. This seems like an obvious statement, but I would like to know how I could prove it (if there is a way to do so). Answer: Suppose we have an arbitrary state $|\phi\rangle$ in the subspace spanned by $\{|\phi_1\rangle,|\phi_2\rangle\}$. What this means is that we can write $|\phi\rangle$ as a superposition (linear combination) of these two states: $|\phi\rangle = a|\phi_1\rangle + b|\phi_2\rangle$, with $|a|^2 + |b|^2 = 1$ for normalization. Now, we can find the eigenvalue of $|\phi\rangle$ by applying the Hamiltonian, as follows: $H|\phi\rangle = H(a|\phi_1\rangle + b|\phi_2\rangle) = aH|\phi_1\rangle + bH|\phi_2\rangle = a\lambda_a|\phi_1\rangle + b\lambda_a|\phi_2\rangle = \lambda_a(a|\phi_1\rangle + b|\phi_2\rangle) = \lambda_a|\phi\rangle$ So, we have shown that any state in the subspace spanned by $\{|\phi_1\rangle,|\phi_2\rangle\}$ has eigenvalue $\lambda_a$.
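The same argument can be checked numerically in a basis where the Hamiltonian is diagonal (a toy example of my own, with made-up eigenvalues):

```python
import numpy as np

H = np.diag([2.0, 2.0, 5.0])       # lambda_a = 2 (degenerate), lambda_b = 5
phi = np.array([0.6, 0.8, 0.0])    # normalised combo inside the lambda_a subspace
mixed = np.array([1.0, 0.0, 1.0]) / np.sqrt(2)   # mixes the two subspaces
```

Any combination confined to the degenerate subspace comes back scaled by the shared eigenvalue, while a vector mixing the two subspaces is not an eigenvector at all, which is exactly what the algebra above shows.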
{ "domain": "physics.stackexchange", "id": 78182, "tags": "quantum-mechanics, homework-and-exercises, linear-algebra, eigenvalue, quantum-states" }
Do rotational degrees of freedom contribute to temperature?
Question: Recently I came across a problem where I was asked to calculate the temperature increase of a certain number of moles of N2 gas confined in a room. However, I found that only translational kinetic energy was considered in the treatment of temperature. My question is: why don't we include rotational degrees of freedom when calculating the temperature increase? Answer: If you start with a monatomic gas then the only degrees of freedom available are the three translational degrees of freedom. Each of them absorbs $\tfrac{1}{2}kT$ of energy, so the specific heat (at constant volume) is $\tfrac{3}{2}k$ per atom or $\tfrac{3}{2}R$ per mole. If you move to a diatomic molecule there are two rotational modes as well - only two extra modes because rotation about the axis of the molecule has energy levels too widely spaced to be excited at normal temperatures. Each of those two rotational degrees of freedom will soak up another $\tfrac{1}{2}kT$, giving a specific heat of $\tfrac{5}{2}k$ per molecule or $\tfrac{5}{2}R$ per mole. But the rotational energy levels are quantised with energies $E = 2B, 6B, 12B$ and so on, where $B$ is the rotational constant for the molecule: $$ B = \frac{\hbar^2}{2\mu d^2} $$ where $\mu$ is the reduced mass and $d$ is the bond length. So these rotational energy levels will only be populated when $kT$ is a lot greater than $B$ - say 10 to 100 times greater. You can look up the rotational constant of nitrogen, or it's easy enough to calculate, and the result is: $$ B \approx 3.97 \times 10^{-23} \text{J} $$ which is about $3k$. So as long as the temperature is above say $30K$ the rotational modes will be excited and nitrogen will have a specific heat of $\tfrac{5}{2}R$. If you go down to temperatures of $3K$ and below then the specific heat will fall to $\tfrac{3}{2}R$ just like a monatomic gas.
The specific heat of nitrogen at constant volume is 0.743 kJ/(kg.K), and converting this to J/(mol.K) we get 20.8 J/(mol.K), and this is indeed 2.50R (to three significant figures). A commenter mentions that the vibrations of the nitrogen molecule will contribute to the specific heat, and indeed they will. However the energy of the first vibrational mode is 2359 cm$^{-1}$, which converted to non-spectrogeek units is $4.7 \times 10^{-20}$ J or about $3400k$. So the vibrational mode isn't going to contribute to the specific heat until the temperature gets above 3400K.
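The numbers quoted in this answer are easy to reproduce. A quick sketch (in Python; the N2 bond length of 109.8 pm and the nitrogen atomic mass are assumed textbook values) evaluates $B = \hbar^2/2\mu d^2$:

```python
import math

# Physical constants (SI units)
hbar = 1.054571817e-34  # reduced Planck constant, J s
k_B = 1.380649e-23      # Boltzmann constant, J/K
amu = 1.66053907e-27    # atomic mass unit, kg

# N2 parameters (assumed: bond length ~109.8 pm, atomic mass ~14.003 u)
m_N = 14.003 * amu
mu = m_N * m_N / (m_N + m_N)   # reduced mass of a homonuclear diatomic
d = 109.8e-12                  # bond length, m

# Rotational constant B = hbar^2 / (2 mu d^2)
B = hbar**2 / (2 * mu * d**2)
print(B)          # ~3.97e-23 J, matching the value in the answer
print(B / k_B)    # ~2.9 K, so rotations are fully excited at room temperature
```

Dividing $B$ by $k$ gives the characteristic temperature directly, confirming the "about $3k$" estimate above.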
{ "domain": "physics.stackexchange", "id": 23948, "tags": "thermodynamics, rotational-kinematics, degrees-of-freedom" }
Display all files in a folder (object) along with nested subdirectories
Question: Follow-up question: Display all files in a folder (object) along with nested subdirectories part 2 Task: Given a main directory/folder, list all the files from it, and if this directory has other nested sub-directories, list the files from them as well. Background history: long story short - I got an internship with no past experience in asp.net-mvc-5 and a nuisance of an assignment (yet a fulfilling one), and now I want to optimize my code, given the content is to be used for our intern job portal. Path to a functional solution: In my quest for a solution I looked at various recursive programs written in other languages. However, they all made use of methods, and after further research I found out about @functions in Razor. I came to the conclusion that people have different opinions on when, and if ever, they should be used. That led me to choose a different route to finding a solution. I decided to go with stacks in order to store a collection of previous sub-directories as I worked my way down each individual folder to display its contents. I would greatly appreciate feedback. And should there be any problems with my post regarding community rules, let me know. PS: My English isn't quite developed yet, so should you have any recommendations on how I should describe things, please be specific and concrete. 
@foreach (var parentFolder in Model) { Stack<Folder> folderStack = new Stack<Folder>(); folderStack.Push(parentFolder); var currentFolder = folderStack.Pop(); int dummyCounter = 1; //Parent folder <div class="row"> <div class="col-sm-2"> <a class="btn" role="button" data-toggle="collapse" href="#@currentFolder.Id" aria-expanded="false" aria-controls="@currentFolder.Id"> <span class="@GlyphionCategoryIcon"></span> </a> </div> <div class="col-sm-5">@currentFolder.Id</div> <div class="col-sm-5">@currentFolder.Name</div> </div> <div class="collapse" id="@currentFolder.Id"> @if (currentFolder.FoldersContained != 0) { do { //Prevents a copy of the parent folder, otherwise this display nested folders if (dummyCounter != 1) { <div class="row"> <div class="col-sm-2"> <a class="btn" role="button" data-toggle="collapse" href="#@currentFolder.Id" aria-expanded="false" aria-controls="@currentFolder.Id"> <span class="@GlyphionCategoryIcon"></span> </a> </div> <div class="col-sm-5">@currentFolder.Id</div> <div class="col-sm-5">@currentFolder.Name</div> </div> } // Create a collapse div using bootstrap 3.3.7 <div class="collapse" id="@currentFolder.Id"> @if (currentFolder.FoldersContained > 0) { for (int i = currentFolder.FoldersContained; i > 0; i--) { //Pushes all nested directories into my stack //in reverse inorder to display the top directory folderStack.Push(currentFolder.Folders[i - 1]); dummyCounter++; } } @if (currentFolder.FilesContained != 0) { // Should they contain any files, display them foreach (var file in currentFolder.Files) { <div class="row"> <div class="col-sm-2"> <a class="btn" role="button" href="@webUrl@file.Url" target="_blank"> <span class="@GlyphionPaperIcon"></span> </a> </div> <div class="col-sm-5">@file.Id</div> <div class="col-sm-5">@file.Name</div> </div> } } </div> //Ends the while loop if (folderStack.Count == 0) { dummyCounter = 0; } //Prepares the next nested folder object if (folderStack.Count != 0) { currentFolder = folderStack.Pop(); } // I 
make use of a dummy counter in order to break the loop // should there no longer be any nested directories and files // left to display } while (dummyCounter != 0); } //Finally, display all files in the parent folder, should there be any @if (parentFolder.FilesContained != 0) { foreach (var file in parentFolder.Files) { <div class="row"> <div class="col-sm-2"> <a class="btn" role="button" href="@webUrl@parentFolder.Url" target="_blank"> <span class="@GlyphionPaperIcon"></span> </a> </div> <div class="col-sm-5">@parentFolder.Id</div> <div class="col-sm-5">@parentFolder.Name</div> </div> } } </div> } Output: (dummy data, folders expanded) Answer: I have just two comments: The name dummyCounter is really terrible; you should find something more appropriate like currentDepth. But you actually don't need this at all: you can use the folderStack and just ask it whether it's not empty with folderStack.Any(). You use the same HTML snippet four times (!) <div class="row"> <div class="col-sm-2"> <a class="btn" role="button" data-toggle="collapse" href="#@currentFolder.Id" aria-expanded="false" aria-controls="@currentFolder.Id"> <span class="@GlyphionCategoryIcon"></span> </a> </div> <div class="col-sm-5">@currentFolder.Id</div> <div class="col-sm-5">@currentFolder.Name</div> </div> This should be a partial view that you can reuse instead of copy-pasting it everywhere. The values that are changing can be passed via its own model.
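For what it's worth, the traversal itself can be separated from the markup entirely. Here is a language-neutral sketch (Python; the Folder shape is a hypothetical stand-in for the view model) of the same explicit-stack depth-first walk, yielding data instead of emitting HTML:

```python
from dataclasses import dataclass, field

# Hypothetical stand-in for the view model's Folder/File objects
@dataclass
class Folder:
    name: str
    files: list = field(default_factory=list)
    folders: list = field(default_factory=list)

def walk(root):
    """Iterative depth-first traversal with an explicit stack,
    yielding (depth, kind, name) tuples instead of emitting markup."""
    stack = [(root, 0)]
    while stack:
        folder, depth = stack.pop()
        yield (depth, "folder", folder.name)
        for f in folder.files:
            yield (depth + 1, "file", f)
        # Push children in reverse so the first child is visited first
        for child in reversed(folder.folders):
            stack.append((child, depth + 1))

root = Folder("docs", files=["readme.txt"],
              folders=[Folder("img", files=["logo.png"]), Folder("src")])
for depth, kind, name in walk(root):
    print("  " * depth + kind + ": " + name)
```

The same separation applies in Razor: keep the traversal in the model or controller, and render each yielded (folder, depth) item with a reusable partial view.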
{ "domain": "codereview.stackexchange", "id": 32980, "tags": "c#, performance, asp.net-mvc-5, razor" }
Does the ros publisher publish in bytes?
Question: Hi all, Following is a simple publishing example I copied from the ROS tutorials. Is msg transmitted over TCP as bytes? or as string? For efficient transfer, I would like to know if I need to convert 'count' into bytes before publishing. std_msgs::String msg; double count = 12.76578589776736376983983231112; std::stringstream ss; ss << "hello world " << count; msg.data = ss.str(); ROS_INFO("%s", msg.data.c_str()); /** * The publish() function is how you send messages. The parameter * is the message object. The type of this object must agree with the type * given as a template parameter to the advertise<>() call, as was done * in the constructor above. */ chatter_pub.publish(msg); Originally posted by aswin on ROS Answers with karma: 528 on 2013-05-27 Post score: 0 Original comments Comment by weiin on 2013-05-27: Not quite sure what you mean by converting "hello world" into bytes before publishing. You can only publish the msg in the correct format (http://www.ros.org/doc/api/std_msgs/html/msg/String.html), which in this case is a string Comment by aswin on 2013-05-27: I am sorry. I meant how many bytes is 'count' converted to? Or in short, how many bytes are published in total? Comment by Bill Smart on 2013-05-28: strlen(msg.data.c_str()) + 1 + some amount of network headers Answer: Not sure what you mean by converting to bytes, since everything is represented as bytes internally. You're constructing a string on the line ss << "hello world " << count; so that's what you're sending. The string stream operator will take count, turn it into a string, append it to "hello world " and return it through the c_str() call. If you want to know how long that string is, then use strlen(). The double needs sizeof(double) bytes (probably 8, if your machine is like mine). Any string representation of a double with more than 8 characters will use more space. If you really want to send a double, create a message type with a double field. 
More than that is getting sent, though, since the underlying networking layers are adding protocol wrappers to everything that gets sent out. In general, I'd advise against trying to optimize things at this level, unless you're seeing a performance problem. Unless you understand linux networking and how ROS uses it at a pretty fine-grained level, you might end up doing a lot of work for little gain. Originally posted by Bill Smart with karma: 1263 on 2013-05-28 This answer was ACCEPTED on the original site Post score: 0 Original comments Comment by aswin on 2013-05-28: I later realized that this was not a ROS question. Anyway, as you mentioned there are more than 8 bytes that are sent for "count". A message field with double field is good, but when the message includes bool, int, string etc, this leads to more messages unnecessarily. Comment by aswin on 2013-05-28: The key is to convert double, short etc... into a hex byte representation before appending to string stream, and then sending it. In this case count will always occupy 8 bytes. For people working on intel architecture this will not cause latency. I use a single core ARM Comment by Bill Smart on 2013-05-28: I'm not sure why you're packing things into a string, since defining a new message with all of the fields you need will accomplish the same thing at the same storage cost, but without you having to do the packing yourself. Comment by aswin on 2013-05-29: Agree that it has the same storage cost. However, with a different message for each datatype, this would mean I will have to maintain 5 to 6 times the number of messages & topics I have in a distributed system. There is also slight overhead due to extra headers and checksum. Comment by Bill Smart on 2013-05-30: I understand now: you're looking to send a message with a dynamic data type, right? There's a strong typing in ROS messages, so you're going to have to pull some tricks to do this (like the one you suggest above). 
Comment by aswin on 2013-05-30: On another note, one should not use stringstream for packing in such applications i.e. at byte level. Presence of bytes such as 0x0A in the stream screws up the unpacking process
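To make the size comparison in this exchange concrete, here is a sketch (in Python rather than C++, purely for illustration) contrasting the stringstream-style text encoding with a fixed-width binary encoding of the same double:

```python
import struct

count = 12.76578589776736376983983231112

# Text representation, as the stringstream approach produces
as_text = "hello world " + repr(count)

# Fixed-width binary representation: one IEEE-754 double is always 8 bytes
as_bytes = struct.pack("<d", count)
print(len(as_bytes))        # 8
print(len(repr(count)))     # the decimal text alone is ~18 characters

# Round-trip: unpacking recovers the double exactly, with no precision loss
(decoded,) = struct.unpack("<d", as_bytes)
print(decoded == count)     # True
```

Note the caveat from the last comment: raw binary can contain any byte value (including 0x0A), so text-oriented framing cannot delimit it safely; a fixed-width or length-prefixed layout is needed, which is exactly what a custom ROS message with a float64 field provides.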
{ "domain": "robotics.stackexchange", "id": 14324, "tags": "ros, subscribe, publish, msg" }
Mapping by gmapping isn't correct
Question: Hi, I'm doing the mapping process by gmapping. At the beginning, my robot can recognize a wall very well, but when i spin the robot and face into empty space, it suddenly appears a line on the map in Rviz, although there is nothing there. What's the problem? Is there any parameter in gmapping launch file I should adjust? Originally posted by Kevin1719 on ROS Answers with karma: 58 on 2021-10-02 Post score: 0 Original comments Comment by Mike Scheutzow on 2021-10-03: Please edit your description to provide more information. Which gmapping package are you using (give us a link)? What sensor input is your robot using to generate a current pose? You can edit your question using the "edit" button at the end of your description. Comment by Kevin1719 on 2021-10-03: Oh, I have solved the problem, but thanks for your reply, I will close the answer. Comment by gvdhoorn on 2021-10-05: @Kevin1719: we're really happy you solved your problem, but we're less happy you didn't take 5 minutes to post an answer so future readers of your question who might be struggling with the same problem could benefit from what you found out. If everyone on ROS Answers would only post "oh I figured it out" but then not explain what they figured out, there would be no point in having this forum here (as there would be no usable information on it). Please take the time to do that and post what you did as an answer. Then accept your own answer. Comment by Kevin1719 on 2021-10-05: Yes, thank you for reminding me :) Comment by gvdhoorn on 2021-10-05: Please don't close questions if/once they've been answered. Accepting the answer suffices. Answer: Problem solved: the problem was caused because I set the joint xyz attribute too low and it seem to be inside the robot, so the laser beams couldn't get through the robot chassis and were looking strange. Setting it higher, above the base link solved the problem. 
Originally posted by Kevin1719 with karma: 58 on 2021-10-05 This answer was ACCEPTED on the original site Post score: 0
{ "domain": "robotics.stackexchange", "id": 36975, "tags": "ros, navigation, laser, gmapping" }
Comparing different asymptotic notations
Question: Suppose we have the worst-case running times of 3 algorithms: A = $O(n\log n)$ B = $O(n\sqrt{n})$ C = $\Theta(n)$ In my opinion, it is not possible to determine the best solution, since we don't know how C grows. I'd like to confirm whether that's correct. Answer: Well, we do know how algorithm C's running time grows - it's linear, and it's a two-sided (lower and upper) bound. However there is still not enough information to choose the best (meaning: fastest in practice) algorithm here, because: We know only upper bounds for algorithms A and B - they might actually be more efficient than C, we just don't know that yet. We don't know the constants hidden in the bounds - they might make algorithm C less practical than A or B for limited problem sizes. Using the given information we can say only that for some sufficiently large problem size the algorithm C might win over algorithms A and B.
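The answer's second point, hidden constants, can be made concrete with a toy calculation. The constants below (1 for A, 100 for C) are invented purely for illustration:

```python
import math

def cost_A(n):          # assumed n*log2(n) with hidden constant 1
    return n * math.log2(n)

def cost_C(n):          # assumed linear with hidden constant 100
    return 100 * n

# For every practical input size, the "worse" O(n log n) algorithm is faster
for n in [2**10, 2**20, 2**50]:
    print(n, cost_A(n) < cost_C(n))   # True in all three cases

# C only overtakes A once log2(n) exceeds 100, i.e. for n > 2**100
print(cost_A(2**200) > cost_C(2**200))  # True
```

So even a genuine Θ(n) bound says nothing about which algorithm wins at the problem sizes one actually cares about.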
{ "domain": "cs.stackexchange", "id": 13832, "tags": "complexity-theory, time-complexity, asymptotics, big-o-notation" }
Is it necessary, in the standard model, that the number of quark generations equals the number of lepton generations, i.e. 3?
Question: This question showed up in my particle physics exam and I'm not sure of the answer. I said no, since I can't see any reason why they must be equal. But if my answer is right, I find it odd that it's just a coincidence that they're both 3. Answer: The honest answer to your question is ''No''! In the standard model (SM), the number of fermion generations appears as an arbitrary parameter, meaning that a mathematically consistent theory can be built up using any number of fermion generations. Therefore, in order to answer the question we may need to go beyond the standard model. In the quest for such a model (or models), first let's rephrase your question in the following way: Is there any extension of the standard model in which the number of fermion generations can in any way be explained through the internal consistency of the model? (We don't have the answer within the framework of the SM.) It's good to start from the SM itself. Let's go back to the sixties. If we make a table of the fermions discovered up to 1965, it looks like: \begin{eqnarray} \text{Lepton}& : & \begin{pmatrix} \nu_{e}\\ e \end{pmatrix},\quad \begin{pmatrix} \nu_{\mu}\\ \mu\end{pmatrix} \\ \text{Quark} & : & \begin{pmatrix} u \\ d \end{pmatrix},\qquad s \end{eqnarray} Anyone can see with the naked eye how ''ugly'' this table looks! In fact, it was James Bjorken and Sheldon Glashow who proposed the existence of the charm ($c$) quark in order to restore the ''quark-lepton symmetry''. The table now looks more symmetric and beautiful: \begin{eqnarray} \text{Lepton}& : & \begin{pmatrix} \nu_{e}\\ e \end{pmatrix},\quad \begin{pmatrix} \nu_{\mu}\\ \mu\end{pmatrix} \\ \text{Quark} & : & \begin{pmatrix} u \\ d \end{pmatrix},\qquad \begin{pmatrix} c \\ s \end{pmatrix} \end{eqnarray} The charm quark was later discovered during the November Revolution of 1974. The lesson is that these two physicists were guided by the notion of symmetry in order to restore order in the realm of fermions. 
Later, the GIM mechanism provided an explanation for the non-existence of FCNCs in the SM, taking the charm quark into account. The very existence of three generations of quarks is necessary for CP violation, and also for the anomaly cancellations that make the SM mathematically consistent. But the underlying symmetry (if it really exists) which may ensure the equal numbers of quarks and leptons is yet to be discovered. A story that goes beyond the SM: Back in the nineties an extension of the SM was proposed by F. Pisano and V. Pleitez based on the gauge group $SU(3)_{L}\times U(1)_{Y}$. Their model accommodates the standard fermions in multiplets of this gauge group, which must include some new fermions. This model has a remarkable feature. As we already know, a consistent quantum field theory must be free from gauge anomalies; without that, the theory becomes ill-defined. In the case of the SM the anomalies get cancelled in a miraculous (or should we say ugly?) way. But the model with gauge group $SU(3)_{c}\times SU(3)_{L}\times U(1)_{Y}$ has the interesting feature that each generation of fermions is anomalous, yet with three generations the anomalies cancel. In other words, electroweak interactions based on a gauge group $SU(3)_{L}\times U(1)_{Y}$, coupled to the QCD gauge group $SU(3)_{c}$, can predict the number of generations to be a multiple of three. (For technical details one can read this paper.) But this comes at the cost of incorporating a right-handed neutrino into the game. In fact, one may find other models with the same feature. GUT considerations: In a recent paper Pritibhajan Byakti et al. proposed a grand unified theory based on the gauge group $SU(9)$. The model uses fermions in antisymmetric representations only, and the consistency of the model demands that the number of fermion generations is three. Nevertheless, like all GUTs, it also comes with some superheavy gauge bosons, which can trigger baryon-number non-conserving processes. 
The upshot is that perhaps we will be able to explain the quark-lepton symmetry at the price of some new physics (maybe new particles) that lives beyond the SM.
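The anomaly-cancellation argument can be illustrated with one of the simplest SM consistency conditions: the electric charges of one complete generation, with each quark counted three times for colour, must sum to zero. A quick check with exact fractions:

```python
from fractions import Fraction as F

# Electric charges of one SM generation
quark_charges = [F(2, 3), F(-1, 3)]   # up-type, down-type
lepton_charges = [F(0), F(-1)]        # neutrino, charged lepton

N_c = 3  # colour factor: each quark comes in three colours

total = N_c * sum(quark_charges) + sum(lepton_charges)
print(total)   # 0 - drop either the quarks or the leptons and it fails
```

The cancellation works generation by generation, which is why the SM alone cannot fix the number of generations; the models above tie the cancellation across generations instead.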
{ "domain": "physics.stackexchange", "id": 34960, "tags": "standard-model, quarks, leptons" }
Is this a carpenter ant?
Question: I stumbled upon a few hundred of these swarming around two different places in my living room today. They kind of look like a carpenter ant to me, but I'm not sure. Also, I'd love to know how you identified the ant. // EDIT, per a helpful suggestion, I'm in Los Angeles, CA, US. We just had rain come through within the last 4-5 days and it has since dried up. Answer: No, this is not a carpenter ant...it's possibly a Pavement Ant (Tetramorium caespitum) - only it's a male & hence winged....for identification...follow these guidelines- Pavement ant workers are small, 1/8-inch to 3/16-inch long, and blackish brown with light-colored legs and two spines at the end of the thorax. A distinguishing character, visible with a hand lens or microscope, is the series of fine parallel grooves on the head and thorax.
{ "domain": "biology.stackexchange", "id": 8490, "tags": "entomology, species-identification, ant" }
Show that for the QM harmonic oscillator $\langle m |x^3 |n\rangle =0$
Question: I have to demonstrate the following result: $$ \langle m |x^3 |n\rangle = \int_{-\infty}^{+\infty}\Psi_m x^3 \Psi_n =0 $$ Unless $m=n-3$, $m=n-1$, $m=n+1$, or $m=n+3$. I tried using the normalized wave function $\left(\Psi_n =\frac{e^{-\frac{x^2}{2}}H_n(x)}{(\sqrt{\pi}2^n n!)^{1/2}}\right)$ on the integral, but I couldn't get a result. I also tried to solve using linear algebra, but I was unable to develop correctly (using some results from the book Mathematical Physics, Butkov). Answer: This looks like an easy problem to solve using the ladder operators. We recall that $|n\rangle \propto (a^\dagger)^n |0\rangle$ and ${\hat x} \propto a + a^\dagger$. Thus, \begin{align} \langle m | {\hat x}^3 |n\rangle &\propto \langle 0| a^m(a+a^\dagger)^3 (a^\dagger)^n | 0 \rangle \\ &=\langle 0| a^m [ a^3 + 3 a^2 a^\dagger + 3 a (a^\dagger)^2 + (a^\dagger)^3 + x a + y a^\dagger ] (a^\dagger)^n | 0 \rangle \end{align} In the last equality, I have commuted all the $a$'s past the $a^\dagger$ using the commutator $[a,a^\dagger]=1$ which gives us some extra terms as shown above. Now, here is the crucial statement: $\langle 0 |a^m (a^\dagger)^n | 0 \rangle \propto \delta_{mn}$. I am going to leave a proof of this to you. Then, if you look at all the terms that appear above, we note that they are of the form $$ \langle 0| a^{m+3}(a^\dagger)^n | 0 \rangle , \langle 0| a^{m+2}(a^\dagger)^{n+1} | 0 \rangle , \langle 0| a^{m+1}(a^\dagger)^{n+2} | 0 \rangle , \langle 0| a^{m}(a^\dagger)^{n+3} | 0 \rangle , \langle 0| a^{m+1}(a^\dagger)^n | 0 \rangle , \langle 0| a^{m}(a^\dagger)^{n+1} | 0 \rangle $$ Then, using the crucial statement above (which you must prove) we see that unless $m=n\pm1,n\pm3$ all terms above vanish so we'll simply get zero. Non-zero answers are possible only for the values of $m$ above.
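The selection rule is easy to verify numerically. The sketch below (plain Python, truncated basis, units in which $m\omega/\hbar = 1$ so that $\hat x = (a + a^\dagger)/\sqrt{2}$) builds the matrix of $\hat x$ in the number basis and cubes it:

```python
import math

N = 8  # truncated basis size: |0> ... |7>

# Position operator in the number basis:
# <m|x|n> = sqrt(n/2) delta_{m,n-1} + sqrt((n+1)/2) delta_{m,n+1}
x = [[0.0] * N for _ in range(N)]
for n in range(N):
    if n - 1 >= 0:
        x[n - 1][n] = math.sqrt(n / 2)
    if n + 1 < N:
        x[n + 1][n] = math.sqrt((n + 1) / 2)

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(N)) for j in range(N)]
            for i in range(N)]

x3 = matmul(x, matmul(x, x))

# <m|x^3|n> is nonzero only when |m - n| is 1 or 3 (truncation distorts
# magnitudes near n = N-1 but cannot create entries off these diagonals)
for m in range(N):
    for n in range(N):
        if abs(m - n) not in (1, 3):
            assert abs(x3[m][n]) < 1e-12
print(x3[0][3])   # e.g. <0|x^3|3> is nonzero (~0.87 in these units)
```

Since the truncated $x$ has entries only on the $\pm1$ off-diagonals, its cube can only populate the $\pm1$ and $\pm3$ off-diagonals, mirroring the operator argument above.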
{ "domain": "physics.stackexchange", "id": 74828, "tags": "quantum-mechanics, homework-and-exercises, hilbert-space, wavefunction" }
Problem using hector exploration controller
Question: Hello, I have been trying to use the hector navigation stack on my wheeled robot. I run the hector_mapping node and hector_exploration_node before running the hector_exploration_controller. I try to do this in two ways: holding the laser sensor myself and deliberately trying to follow the generated path, and mounting the sensor on my robot and letting it follow the path on its own. However, I have encountered several problems: The robot started drifting away, which caused errors in the map. When I was holding the sensor, the goal-reached information was not shown even though I tried to follow the path as closely as possible. When the robot is moving on its own to follow the path, it tends to just rotate about the z-axis (perpendicular to the ground) almost all of the time. May I know what I did wrong? Below is a picture of me using the exploration controller showing the map in rviz. The red arrow indicates the position and orientation of the robot. As you can see, the robot pose has drifted away from its initial position, and this happens when the robot performs rotating movements only. EDIT: The lidar I use is a URG-04LX-UG01; the turning speed is 0.28 rad/s and the moving speed is 0.19 m/s. 
Below is the launch file for my hector mapping: <launch> <node pkg="hector_mapping" type="hector_mapping" name="hector_mapping" output="screen"> <param name="use_tf_scan_transformation" value="true" /> <param name="use_tf_pose_start_estimate" value="false" /> <param name="scan_topic" value="scan" /> <param name="pub_map_odom_transform" value="true"/> <param name="map_frame" value="map" /> <param name="base_frame" value="base_link" /> <param name="odom_frame" value="base_link" /> <!-- Map size / start point --> <param name="map_resolution" value="0.075"/> <param name="map_size" value="512"/> <param name="map_start_x" value="0.5"/> <param name="map_start_y" value="0.5" /> <param name="laser_z_min_value" value="-2.5" /> <param name="laser_z_max_value" value="3.5" /> <!-- Map update parameters --> <param name="update_factor_free" value="0.4"/> <param name="update_factor_occupied" value="0.7" /> <param name="map_update_distance_thresh" value="0.02"/> <param name="map_update_angle_thresh" value="0.02" /> <param name="scan_subscriber_queue_size" value="25" /> <param name="map_multi_res_levels" value ="2"/> </node> <node pkg="tf" type="static_transform_publisher" name="base_to_laser_broadcaster" args="0 0 0 0 0 0 base_link laser 100" /> </launch> Originally posted by zero1985 on ROS Answers with karma: 100 on 2014-01-27 Post score: 1 Original comments Comment by lukelu on 2020-09-11: We were wondering how you made hector_exploration_node run, since we got the below error message: "Do not call canTransform or lookupTransform with a timeout unless you are using another thread for populating data. Without a dedicated thread it will always timeout. If you have a seperate thread servicing tf messages, call setUsingDedicatedThread(true) on your Buffer instance." Hope to hear your expertise, thanks. Answer: This looks more like a pose estimation problem (e.g. issue with hector_slam) than a problem with the exploration system. 
Can you edit your post with some info on your robot setup (LIDAR used, rotating how fast etc.). Note that performance with URG-04LX is not as good as with UTM-30 and movement will have to be smoother/slower. /edit: Some of your parameters look like they could cause problems. I'd recommend setting map_update_distance_thresh and map_update_angle_thresh to a value above the jitter that might be in the pose estimate due to sensor noise. Something like 0.2 and 0.4 (meters and rad, respectively) should work (or the default values). I'd also keep update_factor_occupied at its default value of 0.9. How did you come up with these parameter changes? Originally posted by Stefan Kohlbrecher with karma: 24361 on 2014-01-27 This answer was ACCEPTED on the original site Post score: 1 Original comments Comment by zero1985 on 2014-01-27: thank you for the answer, I have already updated my question and also provided information on my launch file. Comment by zero1985 on 2014-02-10: thanks for the reply. The parameter settings, if I remember correctly, refer to some post on ROS Answers, but I couldn't find the post now. As I couldn't get a proper map previously (it would become a lot of maps with different orientations stacked together), I found that changing map_update_distance_thresh and map_update_angle_thresh to a very small value helps negate this problem. Anyway, I changed the values per your recommendation but the same problem still occurs; is it due to the limitation of the URG-04LX? Comment by Stefan Kohlbrecher on 2014-02-11: Depends on your point of view whether that's a limitation of the URG-04LX or of hector_mapping :) I know some people got reasonable results with URG-04LX in some scenarios, but not sure about the exact settings they used. Comment by zero1985 on 2014-02-12: Thank you for your reply. 
By the limitation I mean the short range and error %. I am using a URG-04LX-UG01, which yields higher error compared to the URG-04LX; will this cause a problem if my sensor is too far away from a detectable object? As for hector_mapping, I think it is a great package and good to use :). It's just that I can't set it up correctly myself. Do you have any guidelines or suggestions for tuning the settings? Comment by zero1985 on 2014-02-16: Is it possible for me to send you my ROS bag file of the navigation?
{ "domain": "robotics.stackexchange", "id": 16777, "tags": "hector" }
Is my computer powerful enough?
Question: Hello community, My team is constructing an underwater glider that is going to be controlled by a PC104 computer. It is going to have to control/read from servos, pressure sensors, orientation sensors, and such while computing the all the autopilot information. This computer has a 500MHz processor and 256MB of DDR DRAM. Will it be powerful enough to run everything? It is having trouble with the turtle simulation in the tutorial; especially the mimicking turtle... Thanks Originally posted by jamethy on ROS Answers with karma: 11 on 2011-07-22 Post score: 1 Original comments Comment by jamethy on 2011-07-25: PS Thanks for answering and so quickly! Comment by jamethy on 2011-07-25: Also, I didn't think about the removal of the GUI during operation (I guess I need to get through the tutorial!). I'm sure this will improve performance. I just wanted to make sure that ROS is a good choice for us before we put all our effort into it, as we are set on using the PC104. Comment by jamethy on 2011-07-25: The autopilot system isn't really set in stone as of yet, but eventually this glider will be set loose in a large lake for a couple of days and it needs to be able to know its position and stay on a course (which we will presumably set). Comment by dornhege on 2011-07-23: I think you need to elaborate on what you want to do algorithmically. As for the turtle: This should be a very simple example, but I guess you ran the GUI with it, which was taking all the performance. Comment by Stefan Kohlbrecher on 2011-07-22: Sounds like a Geode LX800 board to me. I can recommend using a small Atom based board (for example FitPC2) instead. We did that on our KidSize humanoid robots 2 years ago and are quite happy with the improved performance. FitPC2 definitely works well with ROS. Comment by Eric Perko on 2011-07-22: Could you elaborate on "computing the all the autopilot information"? Do you need full 3D (6dof) navigation support w/ obstacle avoidance? 
Just, say, keep the glider going "straight and level"? Answer: Closing this question as it has not had activity in over a month. In general, this question is not answerable by the community, as you are the expert on your own software needs. It is possible to use ROS without adding significant overhead to what you are doing, so it is likely that your own software will be the bottleneck. I.e., if your system can run Linux, it can run ROS. Originally posted by kwc with karma: 12244 on 2011-09-02 This answer was ACCEPTED on the original site Post score: 0
{ "domain": "robotics.stackexchange", "id": 6233, "tags": "ros, cpu" }
How to classify data which is spiral in shape?
Question: I have been messing around in tensorflow playground. One of the input data sets is a spiral. No matter what input parameters I choose, no matter how wide and deep a neural network I make, I cannot fit the spiral. How do data scientists fit data of this shape? Answer: There are many approaches to this kind of problem. The most obvious one is to create new features. The best feature I can come up with is to transform the coordinates to polar coordinates. I have not found a way to do this in the playground, so I just created a few features that should help with it (sin features). After 500 iterations it saturates and fluctuates at a score of 0.1. This suggests that no further improvement will happen, and most probably I should make the hidden layer wider or add another layer. It is no surprise that after adding just one neuron to the hidden layer you easily get 0.013 after 300 iterations. A similar thing happens when adding a new layer (0.017, but after a significantly longer 500 iterations; also no surprise, as it is harder to propagate the errors). Most probably you can play with the learning rate or do adaptive learning to make it faster, but that is not the point here.
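The feature-engineering idea can be demonstrated without a neural network at all: in polar coordinates an Archimedean spiral straightens out, so two interleaved arms become separable by a single threshold. In the synthetic sketch below the unwound angle is known by construction (in practice, unwrapping the angle from atan2 is the hard part):

```python
import math

# Synthetic two-arm spiral: arm k has theta = r + k*pi, so in Cartesian
# coordinates the arms interleave, but theta - r is 0 or pi exactly.
def spiral_point(r, arm):
    theta = r + arm * math.pi
    return (r * math.cos(theta), r * math.sin(theta), theta, arm)

data = [spiral_point(0.5 + 0.1 * i, arm)
        for i in range(60) for arm in (0, 1)]

correct = 0
for x, y, theta, arm in data:
    r = math.hypot(x, y)                      # radius recovered from (x, y)
    predicted = 1 if theta - r > math.pi / 2 else 0
    correct += (predicted == arm)
print(correct, "/", len(data))                # every point classified correctly
```

In raw (x, y) space no line separates the arms; in the engineered (r, theta) space a single linear threshold does, which is why trigonometric/polar features make the playground task easy.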
{ "domain": "ai.stackexchange", "id": 2859, "tags": "neural-networks, machine-learning, tensorflow, regression" }
Is it possible to prove closure of decidable languages under union and intersection, using enumerators?
Question: We can use multi-tape enumerators. (Of course it is not valid to use Turing machines directly, despite the fact that any enumerator has an equivalent TM.) What we need is to prove that if $A$ and $B$ are decidable then $A\cup B$ and $A\cap B$ are also decidable. Can it be done? If so, what is a possible approach? Answer: Given a decidable language $L$ consider the following enumerator: EnumL loop on strings x over 0 and 1 in canonical/lexicographic order if x in L then print x end end This procedure enumerates/emits all strings in $L$ in lexicographic order. So you can decide if $x \in L$ by running EnumL until it prints the first string of length $|x| + 1$. If it has printed $x$ by that time then $x \in L$, otherwise $x \notin L$. Thus, using both enumerators EnumA and EnumB you can decide $x \in A\cup B$ and $x \in A\cap B$. In other words, if one of EnumA or EnumB eventually emits $x$ then $x \in A\cup B$, and if both EnumA and EnumB emit $x$ then $x \in A\cap B$.
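The construction can be prototyped directly. The sketch below (Python; the example languages A and B are arbitrary decidable languages chosen for illustration) mirrors EnumL and the length-$|x|+1$ stopping rule (note that the stopping rule quietly assumes the language contains arbitrarily long strings, otherwise the decider must bound the search differently):

```python
from itertools import product

def enum(in_lang):
    """Enumerate strings over {0,1} in length-lexicographic order,
    yielding exactly those in the language (mirrors EnumL above)."""
    length = 0
    while True:
        for tup in product("01", repeat=length):
            s = "".join(tup)
            if in_lang(s):
                yield s
        length += 1

def member(x, enumerator):
    """Decide x in L by running the enumerator until it first emits
    a string longer than x."""
    for s in enumerator:
        if s == x:
            return True
        if len(s) > len(x):
            return False

A = lambda s: s.count("1") % 2 == 0      # even number of 1s
B = lambda s: s.endswith("0")            # ends in 0

x = "110"
in_union = member(x, enum(A)) or member(x, enum(B))
in_inter = member(x, enum(A)) and member(x, enum(B))
print(in_union, in_inter)   # True True: "110" has two 1s and ends in 0
```

Combining the two deciders with "or" / "and" is exactly the closure argument in the answer.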
{ "domain": "cs.stackexchange", "id": 9791, "tags": "undecidability, closure-properties, enumeration" }
Does iron(III) sulfate react with copper?
Question: To my understanding, there should be an oxidation-reduction reaction: $$ \ce{2 Fe^3+ + Cu → 2Fe^2+ + Cu^2+} $$ However, I always see the process using $\ce{FeCl3}$ to etch copper, but I never see people using $\ce{Fe2(SO4)3}$ to do this. Is it because such a reaction does not occur, or is it because $\ce{FeCl3}$ is acidic in aqueous solution? Answer: In $\ce{FeCl3}$ etching of copper, the chloride ion is extremely important. Chloride ions coordinate to $\ce{Fe^3+}$, $\ce{Cu+}$ and $\ce{Cu^2+}$. See Copper Etching in Ferric Chloride Ind. Eng. Chem., 1959, 51 (3), pp 288–290 for the relative concentrations of the particular chloride complexes. In contrast, if $\ce{Fe2(SO4)3}$ were used, there would only be aqueous ions. At the copper surface, there is only a one-electron oxidation of $\ce{Cu}$ to $\ce{Cu+}$. As the above article explains, without coordination of chloride, the $\ce{Cu+}$ would be essentially insoluble. The rate-limiting step of the etching reaction is diffusion of ions from the surface of the copper. Coordination of chloride provides solubility and enables diffusion. This is the main reason that $\ce{FeCl3}$ is needed.
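As a side note on the question itself: with textbook standard reduction potentials (assumed values: $\ce{Fe^3+}/\ce{Fe^2+}$ ≈ +0.77 V, $\ce{Cu^2+}/\ce{Cu}$ ≈ +0.34 V) the sulfate reaction is thermodynamically favorable, so the answer's point is about speciation and kinetics, not whether the redox reaction can occur at all:

```python
# Standard reduction potentials in volts (assumed textbook values)
E_Fe = 0.77   # Fe3+ + e-  -> Fe2+
E_Cu = 0.34   # Cu2+ + 2e- -> Cu

# Overall cell reaction: 2 Fe3+ + Cu -> 2 Fe2+ + Cu2+
E_cell = E_Fe - E_Cu
print(E_cell)                 # ~ +0.43 V, positive, so spontaneous

# Delta G = -n F E  (n = 2 electrons transferred, F = 96485 C/mol)
n, F_const = 2, 96485
dG = -n * F_const * E_cell
print(round(dG / 1000, 1), "kJ/mol")   # about -83 kJ/mol
```

A clearly negative ΔG confirms that $\ce{Fe^3+}$ can oxidize copper even without chloride; what chloride adds is the soluble $\ce{Cu+}$ complexes that make the etch fast in practice.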
{ "domain": "chemistry.stackexchange", "id": 2525, "tags": "inorganic-chemistry, redox, aqueous-solution, ions, transition-metals" }
Simple object oriented design of student system
Question: I have created a simple system to get hands-on experience with OOP design and some features in Java. The system is: A ClassOfStudents contains Students A Student contains a list of scores A Student can be a PartTimeStudent or FullTimeStudent A score is valid only if it is between 20 and 100 I have: Used an enum for gender as it is a set of constants Created a user-defined exception to check a business rule Made Student abstract to make it extensible Used compareTo to reverse the natural sort order and sort students Queries Please go through the code and feel free to suggest any OOP design enhancements. I am new to OOP and really want to understand how to make code more extensible, reusable and secure. Some of the questions I encountered while coding are: I don't want to create an object when the score is not valid. I can achieve this by validating the score before creating the object, but is there any other way? I am throwing a user-defined exception and catching it immediately. Is that good practice? I don't want to interrupt the flow. I am using logging for the first time. Is this a good way to log? I tried to implement as many OOP concepts as possible but am unable to think of a place for an interface. Please suggest a good use case for an interface. How can I improve the exception handling (robustness)? Is there any other way I can add the student to studentList in the ClassOfStudents whenever a new student is created? Also, please suggest some new feature I can add to learn more OOP/Java concepts. I posted a similar question earlier and got a great response. I haven't implemented some of those suggestions here (like returning immutable lists and not using throws on main) but I grasped the concepts. import java.util.*; import java.util.logging.*; //I am trying to write an extensible class. So I have declared Student as abstract. 
//FullTimeStudent and PartTimeStudent sub classes //Moreover a Student need to be a FullTime are PartTime so Student object cannot be created. //I am using protected to scoreList. It is a good way? class ClassOfStudents { private List<Student> studentList = new ArrayList<Student>(); public void addStudent(Student student) { studentList.add(student); } public List<Student> getStudentList() { return studentList; } } abstract class Student implements Comparable<Student> { private String name; private Address address; private Gender gender; protected List<Score> scoreList = new ArrayList<Score>(); //Because the subclass need to access Student(String name) { this.name = name; this.gender = Gender.UNKNOWN; } Student(String name, Gender gender) { this.name = name; this.gender = gender; } public void setName(String name) { this.name = name; } public String getName() { return name; } public String toString() { return name+" "+gender; } public void addScore(Score score) { scoreList.add(score); } public List<Score> getScores() { return scoreList; } // Reverse of natural order of String. 
public int compareTo(Student otherStudent) { return -1 * this.name.compareTo(otherStudent.getName()); } public abstract boolean checkScores(); } //Inheritance class FullTimeStudent extends Student { FullTimeStudent(String name) { super(name); } FullTimeStudent(String name, Gender gender) { super( name, gender); } public boolean checkScores() { for(Score score : scoreList) { if (score.getStatus() == false) return false; } return true; } } //Inheritance class PartTimeStudent extends Student { PartTimeStudent(String name) { super(name); } PartTimeStudent(String name, Gender gender) { super( name, gender); } public boolean checkScores() { int countScoreFail = 0; for(Score score : scoreList) { if (score.getStatus() == false) countScoreFail++; } System.out.println(countScoreFail); if (countScoreFail >= 3) return false; else return true; } } class Address { private String streetAdress1; private String phoneNumber; private String zipCode; //Constructor, Setters and getters of Address } enum Gender { MALE,FEMALE,OTHER,UNKNOWN; } // Score can be between 20 to 100. //Score can only be incrmented / decremented by 1. //If Score < 40 , then status is false. Else true //I dont want to create a object when score is not valid. I can do this by checking for score before using new. But is there any other way? //I am throwing an user defined exception and catching it immediately, is it a good practice. I dont want to disturb the flow and continue? //I am using logging for the first time. Is it the good way to write this? 
class Score { private int score; private boolean status = false; Score(int score) throws scoreException { setScore(score); } public void setScore(int score) throws scoreException { if(score < 20 || score > 100) { try{ System.out.println("Invalid Score!!!"); throw new scoreException(); } catch(scoreException e) { Logger logger = Logger.getLogger("myLogger"); logger.log( Level.FINE,"Hello logging"); } } else { this.score = score; if(score >= 40) status = true; } } public int getScore() { return score; } public boolean getStatus() { return status; } public String toString() { return score+" "+status; } } class scoreException extends Exception { public String toString() { return "Entered Marks are not valid"; } } public class Test{ public static void main(String []args)throws scoreException { //Polymorphism ClassOfStudents c1 = new ClassOfStudents(); Student s1 = new FullTimeStudent("John"); Student s2 = new PartTimeStudent("Nancy",Gender.FEMALE); c1.addStudent(s1); c1.addStudent(s2); List<Student> studentList = c1.getStudentList(); Collections.sort(studentList); for(Student student : studentList) { System.out.println(student); } //************************* s1.addScore(new Score(10)); s1.addScore(new Score(50)); s1.addScore(new Score(30)); System.out.println("Student is "+s1); //Even for invalid score objects are created. I dont want them to be created. System.out.println("Printing content of all the scores of student"); for(Score score : s1.getScores()) { System.out.println(score); } System.out.println("Are all scores greater than 40?? ::"+s1.checkScores()); //**************************** System.out.println("Student is "+s2); s2.addScore(new Score(10)); s2.addScore(new Score(50)); s2.addScore(new Score(30)); //Even for invalid score objects are created. I dont want them to be created. System.out.println("Printing content of all the scores of student"); for(Score score : s2.getScores()) { System.out.println(score); } System.out.println("Are all scores greater than 40?? 
::"+s2.checkScores()); } } Answer: Bracing style You have used mostly the Allman-style bracing approach, but there are momentary lapses in your try-catch and the Test class. In fact, I will also suggest introducing braces for the if-statements. Regardless of the styles you choose, please be consistent on this front. :) Constructor chaining Student(String name) { this.name = name; this.gender = Gender.UNKNOWN; } Student(String name, Gender gender) { this.name = name; this.gender = gender; } One of these two constructors should be chained to the other, which will facilitate in making your fields final as well: class Student { private final String name; private final Gender gender; Student(String name) { this(name, Gender.UNKNOWN); } Student(String name, Gender gender) { this.name = name; this.gender = gender; } // ... } Reverse comparisons If you happen to be on Java 8, you can make use of Comparator.reverseOrder() to do this for you: private static final Comparator<String> REVERSE_COMPARATOR = Comparator.reverseOrder(); public int compareTo(Student otherStudent) { // return -1 * this.name.compareTo(otherStudent.getName()); return REVERSE_COMPARATOR.compare(name, otherStudent.name); } Can a full-time students become part-time, and vice versa? Your current implementations for FullTime and PartTime students are fine, but you may also want to consider the relationships between the types of students, the common fields/methods/properties of students, and how the scores are 'checked'. An alternative solution is to instead consider full-timers or part-timers as only a status to any Student, similar to what you are doing for gender now: // switching to Java bracing convention for illustration enum StudentType { FULL_TIME { @Override public boolean checkScores(Student student) { ... } }, PART_TIME { @Override public boolean checkScores(Student student) { ... 
} }; public abstract boolean checkScores(Student student); } In this case, a Student (which is now non-abstract) can be toggled between full-time and part-time status, and arguably any Collection of students can be easily filtered by checking student.getType() == StudentType.FULL_TIME, instead of FullTimeStudent.class.isInstance(student). Of course, if you start to have more specific methods for each type of student, then inheritance will become the better modeling approach. What's in a Score? As it stands, I'm not too sure about the usefulness of your Score class. It is nothing but a wrapper over an int now, and even the boolean status can be easily derived from the score. In any case, MAG's answer offers some suitable enhancements to the class, so I'll suggest taking a look at that. //I am throwing an user defined exception and catching it immediately, //is it a good practice. I dont want to disturb the flow and continue? //I am using logging for the first time. Is it the good way to write this? Throwing a specific Exception and catching that does seem... a little odd. It is as if you are trying to control execution flow... Anyway, ScoreException (note: PascalCase for the class name) also seems to be a redundant class, as the built-in IllegalArgumentException ought to be good enough to convey the same exception cause. As for logging, you may want to take a look at logging frameworks such as SLF4J to handle logging in your codebase. On a related note, this StackOverflow question provides some useful insight regarding the pros and cons of the java.util.logging.* classes that you have adopted.
{ "domain": "codereview.stackexchange", "id": 15805, "tags": "java, object-oriented, design-patterns" }
Merging union Observables
Question: I have a scenario where I need to execute observables that depend on the result of the first one. However I need to keep the result of the first observable. I couldn't find any extension that would help me do this. For instance SelectMany does a projection of the first observable, discarding the source results. Therefore I made my own extension: public static IObservable<T> MergeWithResultPropagation<T>(this IObservable<T> src, Func<T, IEnumerable<IObservable<T>>> elems) { return src.SelectMany(result => elems(result).Union(new[] { Observable.Return(result) })) .SelectMany(r => r); } Here's a dummy example: Observable.Range(1, 3) .MergeWithResultPropagation(item => new[]{ Observable.Return(item * 2), Observable.Return(item * 3) }).Dump(); Is this an adequate way to solve the problem? Did I miss an extension method that does this? Answer: Union vs Concat As you admitted, there is no need to use the Union extension because no two items will ever have the same value. Concat would be more appropriate because it'll better show what is going on. The difference between the two is that [ 1, 2 ].Union([2, 3]) = [1, 2, 3] whereas the same with Concat would be [ 1, 2 ].Concat([2, 3]) = [1, 2, 2, 3]. Because of the unclear intent I had a hard time understanding this short code: the description says something other than what the implementation does. Zip I don't know any extension that could do the same job but I think this one could be expressed more cleanly by first producing the results and then zipping each result with the corresponding item that led to it, using the Zip extension. 
I also think that it's nicer to use Enumerable.Repeat rather than new []{} return src .Select(x => elems(x)) .Zip(src, (results, x) => results.Concat(Enumerable.Repeat(Observable.Return(x), 1))) .SelectMany(z => z) .SelectMany(x => x); Functional To make it even cleaner I suggest encapsulating the Concat in its own method so the final extension could be: public static IObservable<T> MergeWithResultPropagation3<T>(this IObservable<T> values, Func<T, IEnumerable<IObservable<T>>> factory) { return values .Select(x => factory(x)) .Zip(values, AppendValue()) .SelectMany(z => z) .SelectMany(x => x); Func<IEnumerable<IObservable<T>>, T, IEnumerable<IObservable<T>>> AppendValue() { return (results, value) => results.Concat(Enumerable.Repeat(Observable.Return(value), 1)); } } yield return As an alternative to new []{} and Concat you could make the helper work with yield return public static IObservable<T> MergeWithResultPropagation3<T>(this IObservable<T> values, Func<T, IEnumerable<IObservable<T>>> factory) { return values .Select(x => factory(x)) .Zip(values, (results, value) => AppendValue(results, value)) .SelectMany(z => z) .SelectMany(x => x); IEnumerable<IObservable<T>> AppendValue(IEnumerable<IObservable<T>> results, T value) { foreach (var result in results) yield return result; yield return Observable.Return(value); } }
{ "domain": "codereview.stackexchange", "id": 25137, "tags": "c#, extension-methods, observer-pattern, system.reactive" }
JavaScript OOP calculator
Question: I've been trying to learn and implement object oriented programming in JavaScript. Could you please provide some advice and feedback (primarily on the application of OOP, overall code efficiency and proper use of patterns)? Code explanation: The code consists of a calculator constructor which has basic functions: add, subtract, divide, multiply, clear and equal. The input method captures the content of each click, assigns it to the input variable and runs some basic filters. If the input is a number, it's appended to the variable number as a string. If any of the operators is clicked, the contents of the variable number and the operator are added to inputArray. This creates a list of numbers and operators in sequence. If the '=' sign is clicked, the equal method is run, which loops through inputArray and applies division and multiplication first, followed by addition and subtraction. Once the equal method completes, inputArray is left with one number, the result, which is displayed on the screen. The printEquation method simply prints out the contents of inputArray on the screen. 
Here's the equation that has been typed so far: function Calculator() { "use strict"; var inputArray = [], operations = ["x", "/", "+", "-"], number = "", i, that = this, equation = document.getElementById("equation"), display = document.getElementById("display"); display.textContent = "0"; this.add = function(a, b) { var c = inputArray[a] + inputArray[b]; inputArray[a] = c; inputArray.splice(a + 1, 2); i -= 2; }; this.substract = function(a, b) { var c = inputArray[a] - inputArray[b]; inputArray[a] = c; inputArray.splice(a + 1, 2); i -= 2; }; this.divide = function(a, b) { var c = inputArray[a] / inputArray[b]; if (isNaN(c)) { c = 0; } inputArray[a] = c; inputArray.splice(a + 1, 2); i -= 2; }; this.multiply = function(a, b) { var c = inputArray[a] * inputArray[b]; inputArray[a] = c; inputArray.splice(a + 1, 2); i -= 2; }; this.equal = function() { for (i = 0; i < inputArray.length; i += 1) { if (inputArray[i] === "/") { that.divide(i - 1, i + 1); } if (inputArray[i] === "x") { that.multiply(i - 1, i + 1); } } for (i = 0; i < inputArray.length; i += 1) { if (inputArray[i] === "+") { that.add(i - 1, i + 1); } if (inputArray[i] === "-") { that.substract(i - 1, i + 1); } } display.textContent = inputArray[0]; }; this.clear = function() { inputArray = []; number = ""; display.textContent = "0"; equation.textContent = ""; }; this.printEquation = function() { equation.textContent = ""; for (i = 0; i < inputArray.length; i += 1) { equation.textContent += inputArray[i]; } }; this.input = function(e) { var input = e.target.textContent; var testInput = operations.indexOf(input) === -1 ? 
false : true; //Add a zero if operator is clicked without any input if (testInput && number === "") { number = "0"; } //Run clear if equal is clicked without any input if (input === "=" && inputArray.length === 0) { this.clear; } else if (testInput) { inputArray.push(parseInt(number, 10)); inputArray.push(input); number = ""; display.textContent = "0"; that.printEquation(); } else if (input === "C") { that.clear(); } else if (input === "=") { if (number !== "") { inputArray.push(parseInt(number, 10)); number = ""; that.printEquation(); that.equal(); } else { inputArray.pop(); number = ""; that.equal(); } } else { number += input; display.textContent = number; } }; } //Initialise calculator var calci = new Calculator(); var nodes = document.getElementById("calBtn").childNodes; for (var i = 0; i < nodes.length; i++) { if (nodes[i].nodeName.toLowerCase() === "span") { nodes[i].addEventListener("click", calci.input) } } * { -webkit-box-sizing: border-box; box-sizing: border-box; font: 500 1.1em sans-serif; background-color: mintcream; color: darkslategray; } h1, h2 { font: 500 sans-serif; } h1 { font-size: 1.5em; } .wrapper { margin: 0 auto; text-align: center; } #calculator { width: 330px; height: auto; background-color: blanchedalmond; border: 1px solid gray; padding: 2px; text-align: center; margin: 0 auto; } #calBtn { background-color: inherit; } #calBtn span { display: inline-block; background-color: lightgray; width: 71px; height: 50px; line-height: 50px; text-align: center; vertical-align: middle; margin: 5px; cursor: pointer; outline: none; -webkit-user-select: none; -moz-user-select: none; -ms-user-select: none; user-select: none; } #calBtn span:hover { -webkit-box-shadow: none; box-shadow: none; border: 0.5px solid gray; } .screen { width: 95%; height: auto; border: 0.5px solid gray; margin: 15px auto 10px auto; } #calculator, #calBtn span, .screen { -webkit-box-shadow: 1px 1px 2px darkslateblue; box-shadow: 1px 1px 2px darkslateblue; border-radius: 5px; } 
#equation, #display { display: block; width: 93%; height: 40px; line-height: 40px; margin: 0 auto; text-align: right; padding: 1px 0; } @media only screen and (max-width: 768px) { #calBtn span { height: 71px; line-height: 71px; } } <body> <div class="wrapper"> <div id="calculator"> <div class="screen"> <span id="equation"></span> <span id="display"></span> </div> <div id="calBtn"> <span>7</span><!-- --><span>8</span><!-- --><span>9</span><!-- --><span>/</span><!-- --><br><!-- --><span>4</span><!-- --><span>5</span><!-- --><span>6</span><!-- --><span>x</span><!-- --><br><!-- --><span>1</span><!-- --><span>2</span><!-- --><span>3</span><!-- --><span>-</span><!-- --><br><!-- --><span>0</span><!-- --><span>C</span><!-- --><span>=</span><!-- --><span>+</span> </div> </div> </div> </body> Answer: This is a really cool snippet! I like that you were able to avoid using eval. After a quick glance I only have a few minor suggestions. Separate the logic from the interface. You separated them to some degree in that you have the event handlers attached manually after constructing the calculator (even though the event handler is defined in the calculator constructor). I would instead make 2 constructors, a Calculate constructor that can take an expression and evaluate it, and a Calculator constructor that sets up the interface and the event handlers. That way you will be able to use that sweet calculator code in other projects, or extend it for use with a more advanced calculator. Clear the results of the previous equation before moving the current one to the top row. I believe that is fixed by fixing the typo in your input function where you do this.clear; instead of this.clear(); that is not necessary. You're using that variable instead of just using this in several places. In fact I don't see any places where that is needed at all, just stick with this and leave that alone. Consider taking advantage of prototype. 
Using this to assign methods makes us feel like we're writing classes so it feels natural, but JS doesn't have classes*, so embrace the prototype. *ES6 actually does have classes.
{ "domain": "codereview.stackexchange", "id": 26201, "tags": "javascript, object-oriented, calculator" }
Are there gaps present between lines in a continuous line spectrum?
Question: It might seem counter-intuitive for gaps to be present in a "continuous spectrum", but according to Planck the energy carried by a photon is quantised and can take only discrete values, so the wavenumber should also be quantised and take only discrete values. Does that mean that with an ultra-powerful hypothetical microscope we should be able to see gaps between discrete lines in a continuous spectrum? (Hypothetical because how would you see gaps between light using light? Wouldn't Heisenberg be disappointed, especially when the gap is very minute? Or maybe we might use sensitive detectors to see if there are gaps between photons, or something else I haven't considered.) Answer: Quantisation does not work like that. In the case of light it is more like requiring that a pile of debris is made of discrete rocks (photons), but those rocks can be any size (energy). The uncertainty principle means that even if you arrange the rocks in a "spectrum" of sizes from planetary core to dust, these sizes will blur into one another and the spectrum will be smooth.
{ "domain": "physics.stackexchange", "id": 70612, "tags": "quantum-mechanics, visible-light, electromagnetic-radiation, spectroscopy, discrete" }
Do objects have energy because of their charge?
Question: My gut feeling tells me things should have energy because of their charge, like they have energy because of their mass. Is this possible? Has it been shown? If not then what is missing to make such an equivalence possible? Answer: The problem with your question, and the reason you have so many comments asking for clarification, is that energy is a slippery concept. Generally speaking we are interested in energy differences. So, for example, if you consider a two charged particles it's easy to calculate the energy change as you bring them together. By contrast, if you have a universe with just single electron in it, it's not at all clear what you mean by the energy of the electron. One of the comments referred to the electron self energy, but classically this is infinite. Even if you consider quantum mechanics the self energy is infinite until you turn it into a difference. But let me suggest a way of looking at it that you might find interesting. NB this isn't an answer, because I'm not sure your question has an answer as it stands, but it is one perspective. Although we normally consider energy differences, we consider mass to be absolute. After all, a body can be massless or have a finite rest mass, and this is generally unambiguous. But we know that energy and mass are related by Einstein's famous equation $E = mc^2$, so if the charge on an electron increases its energy it must also increase its mass. Mass comes in two flavours: inertial mass and gravitational mass (Einstein tells us these are the same thing). We can't do much with the inertial mass because we don't have an uncharged electron to compare to a charged electron, but we can look at the gravitational mass. The gravitational field of an isolated, spherically symmetric, charged object (like an electron) is given by the Reissner-Nordström metric. This is somewhat opaque for the non-nerd, but let's ask a simple question: how does the escape velocity for the charged body depend on the charge? 
The escape velocity is given by: $$ v = \sqrt{\frac{2G}{r} \left( M - \frac{Q^2}{2r} \right)} $$ where $M$ is the mass of the object and $Q$ is its charge. However this tells us something rather strange. As you increase the charge the escape velocity decreases, and in fact if you increase the charge enough the escape velocity falls to zero. So a charged body has a lower gravity than an uncharged body of identical mass. Now it almost certainly makes no sense to describe an electron as a Reissner-Nordström black hole. Apart from anything else its event horizon would be many orders of magnitude smaller than the Planck length and you'd expect some so far unknown theory of quantum gravity to take over from General Relativity and change its predictions. Nevertheless, you could use the above reasoning to claim that a charged electron actually has a lower energy than an uncharged one would. Now there's an unexpected result :-)
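To make the trend concrete, here is a small numerical sketch of that escape-velocity formula in geometrized units (G = c = 1); the mass, charge and radius values are arbitrary, chosen only to illustrate the behaviour:

```python
from math import sqrt

def escape_velocity(M, Q, r, G=1.0):
    """Escape velocity outside a Reissner-Nordstrom body (geometrized units).
    M, Q, r are illustrative values, not physical data for any real object."""
    inside = (2.0 * G / r) * (M - Q**2 / (2.0 * r))
    return sqrt(inside) if inside > 0 else 0.0

r, M = 10.0, 1.0
speeds = [escape_velocity(M, Q, r) for Q in (0.0, 2.0, 4.0)]
# increasing charge -> decreasing escape velocity,
# reaching zero once Q**2 >= 2*M*r at this radius
```

With these numbers the speeds come out strictly decreasing, and a large enough charge drives the escape velocity to zero, as the answer describes.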
{ "domain": "physics.stackexchange", "id": 20007, "tags": "electromagnetism, special-relativity, charge, mass-energy" }
Karhunen-Loève transform question
Question: I have read a bit about the Karhunen-Loève Transform (KLT) and its application to the field of seismic data processing. The method as I understand it (it is actually mostly used in image processing) is based on decomposing the data using SVD on its covariance matrix and projecting the data back after manipulating one or more of the covariance matrix's eigenvalues. One of the assumptions, when using it on seismic data to remove random noise, is that the features to keep should be aligned in time. If this is not the case, then one has to correct for the observed move-out. For example, in the top picture below 400 time samples (y-axis) for more than 50 data traces (x-axis) are shown. Each trace represents data recorded on one geophone. There are multiple "events" in the data represented by horizontal lines across the figure. There is no time delay for the observed event across the traces. The picture at the bottom is the result after applying the KLT to the data, by zeroing the smallest eigenvalue of the data's covariance matrix. My question is, why the assumption of time alignment of the "events" (horizontal features)? The method fails if the events occur along a line with a slope, that is, if the same event is observed with a delay from one trace to the next. Thanks Answer: There is something that is not clear about what you have done with the data, namely how you form the random vectors on which to perform the SVD (or EVD) of the covariance matrix. 1 - The KLT can be successfully used on a one-dimensional signal (only one geophone): take frames of $M$ samples, estimate a covariance matrix from them, and then perform an eigenvalue decomposition, or take the SVD of the data matrix (which is equivalent); then you sort the eigenvalues (or singular values) in descending order and remove the smallest ones according to a particular criterion. With this you will remove part of the noise. 
Based on the process I described, there is no assumption about any correlation between the signals on the geophones (in reality I think there will be, but I am not an expert in geology), so if you apply the same processing to every signal, it doesn't matter what delay you have between particular events; the process will still remove noise. 2 - But if you construct your random vector with one sample from each geophone taken at the same time instant, then the time delay might affect the correlation between the samples, and applying the KLT might not reduce the noise as expected. Approach number 1 is called temporal PCA (principal component analysis) and the second one is called spatial PCA, assuming you put the geophones in different locations. Personally I think it should not fail even if you do spatial PCA: even though there will be a time delay, the samples from different geophones at the same time instants will have some degree of correlation, unless you put them hundreds of miles apart. If they are all in the same region, they will each sense approximately the same signal, contaminated with noise and probably convolved with the impulse response of the geographical region, but as long as there is correlation, you can apply the KLT.
{ "domain": "dsp.stackexchange", "id": 1620, "tags": "denoising, covariance" }
What would happen to an isolated block of material
Question: I was thinking recently about what might happen if you were to place a block of material in the middle of a complete vacuum. Obviously there's never going to be a way to achieve such a scenario, but what would happen if you were to put a block of, let's say, steel at 100 °C in a vacuum such that the block is not in contact with any material connected to the containment, and arrange things so that outside energy input is minimized? I assume the block would lose heat/vibrational energy, but what would be the mechanism for such an energy loss, and on what time scale would the block reach, let's say, 0 °C? Let me know if there's anything I can add to make the question more clear. Answer: The block of steel would lose energy via black-body radiation. All objects at a temperature above absolute zero emit radiation, according to the principles of black-body radiation. A steel block at 100 degrees C will radiate in the infrared. A typical blackbody spectrum is shown below. Notice how the peak frequency gets smaller as the object's temperature decreases. Radiation passes through a vacuum just fine.
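The time scale can be roughed out from the Stefan-Boltzmann law. The sketch below assumes an illustrative 10 cm steel cube with guessed material constants (density, heat capacity, emissivity are ballpark assumptions, not measured values), radiating to essentially empty space:

```python
# Radiative cooling in vacuum with negligible surroundings (T_env ~ 0):
# dT/dt = -eps*sigma*A*T**4 / (m*c_p), which integrates to
# t = m*c_p/(3*eps*sigma*A) * (Tf**-3 - Ti**-3).
SIGMA = 5.670e-8                      # Stefan-Boltzmann constant, W m^-2 K^-4
side = 0.10                           # m, assumed 10 cm cube
rho, c_p, eps = 7850.0, 490.0, 0.3    # steel density, heat capacity, emissivity (assumed)

m = rho * side**3                     # kg
A = 6 * side**2                      # radiating surface area, m^2
Ti, Tf = 373.15, 273.15               # 100 C -> 0 C in kelvin

t_seconds = m * c_p / (3 * eps * SIGMA * A) * (Tf**-3 - Ti**-3)
t_hours = t_seconds / 3600.0          # of order ten hours for these numbers
```

The exact figure depends strongly on the assumed emissivity and geometry, but it shows the cooling happens on a timescale of hours, not seconds or years.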
{ "domain": "physics.stackexchange", "id": 44229, "tags": "energy, vacuum, vibrations" }
Cell dye in red fluorescence spectrum
Question: There are some cell dyes available for staining cells intracellularly or at the membrane. I am aware of dyes in other channels (CFSE / PKH26), but I want to stain cells for flow cytometry with a 'red' dye. I have looked at a few options from Thermo but did not find sufficient data on any of them. Specifically, I am planning to stain a single cell population with two different dyes, treat each stained population somewhat differently in between, and then culture them together and distinguish the read-outs later. Antibody staining for a surface marker would be possible with Fab fragments (the antibodies themselves may interfere with cell activation during culturing). Still, I would rather use a dye like CFSE or PKH26 but in the red spectrum, as this may be cheaper and allow secondary use with other cell lines. I have multiple cytometers available and the specific emission wavelength doesn't matter, as we can pick whatever is generally possible (LSR/Fortessa). Thus, I am looking for a dye with emission in the red fluorescence spectrum which has staining protocols available that can be adapted for my purposes. Is anybody willing to share their experiences? Edit Clarified that I need a 'no-antibody' staining Clarified the question with regard to the biological question and FACS Answer: I've used the ThermoFisher brand of CellTrace dyes. Specifically, I co-cultured two populations of cells, one with CellTrace Far Red in the APC channel and another with CellTrace Yellow in the PE channel. My choice was in part guided by a product review by another lab touting the relatively low cytotoxicity of these products. The protocol is quite straightforward. It's also available in CFSE and Violet, though I haven't tested those. These dyes are also optimized for lymphocytes, so I'd recommend titrating if you're staining anything else. The dyes are quite bright, and so there were issues with costaining APC-Cy7, for example. 
I ended up leaving my CellTrace dyes as the only markers on their respective lasers and stacking my phenotyping on the 355 nm and 405 nm lasers. For compensation, you might just stick with cells; the product manual for CellTrace, however, says: "The CellTrace™ reagents readily diffuse into cells and bind covalently to intracellular amines, resulting in stable, well-retained fluorescent staining that can be fixed with aldehyde fixatives." So if you use the ArC amine-reactive comp beads for LIVE/DEAD kits, you might be able to just make comp beads with those. I'll verify this when I'm back at work if need be, as I haven't done it myself, but it's a nice thought!
{ "domain": "biology.stackexchange", "id": 6974, "tags": "flow-cytometry, materials" }
Possible to get transfer function coefficients from window?
Question: I am hoping to use scipy.signal.filtfilt() to smooth some signals in Python, and wanted to build the filter based on a window like a Hann window or whatever. E.g.: import scipy.signal.windows as windows window = windows.hann(filter_width) But standard filters don't just take in windows, they take in numerator and denominator transfer function coefficient arrays a and b: data_smoothed = scipy.signal.filtfilt(b, a, data_noisy) Is there a way to calculate the transfer function coefficients a and b from a window? I like filtfilt() more than straight-up convolution with the window because it has a lot of useful features baked in. Answer: What you are describing is an FIR filter, such that all the denominator coefficients are zero, save the leading one, a[0]=1. So you could do something like: data_smoothed = scipy.signal.filtfilt(window, 1, data_noisy) There is one notable point. The DC gain of an FIR filter is equal to the sum of its coefficients. Your window is likely normalized to a peak of 1, so the sum is probably higher than 1, which means your filter will have gain at low frequencies. You would want to divide all the coefficients by the sum of the window to keep the gain at unity.
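The normalization step can be sketched in plain Python (the window values themselves need no scipy; the Hann formula below is the standard symmetric form, matching what `scipy.signal.windows.hann` produces):

```python
import math

def hann(M):
    # symmetric Hann window of length M, same shape as scipy.signal.windows.hann(M)
    return [0.5 - 0.5 * math.cos(2 * math.pi * k / (M - 1)) for k in range(M)]

width = 11
window = hann(width)

# Normalize so the FIR filter has unity DC gain:
# for an FIR filter the DC gain equals the sum of the b coefficients.
total = sum(window)
b = [w / total for w in window]
a = [1.0]  # FIR: the denominator is just the leading 1

dc_gain = sum(b)  # now 1 (up to float rounding)
```

With these coefficients the smoothing call becomes `scipy.signal.filtfilt(b, a, data_noisy)`; note that `filtfilt` applies the filter forward and backward, so the effective magnitude response is squared.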
{ "domain": "dsp.stackexchange", "id": 9190, "tags": "filters, python, convolution, smoothing" }
How to estimate uncertainty of measurements of equivalent widths?
Question: I'm measuring equivalent widths of absorption lines using a spectrum of a star. I make two or three measurements of each line by making reasonable gaussian fits of the line with IRAF's splot tool. Then I calculate the mean of the measurements, which serves as my final equivalent width estimate. What is a good way of estimating the uncertainty of this measurement? My current method: I'm currently using half of the range for the uncertainty. For example, if I made two measurements 10 and 16 mÅ (milliangstrom), then the mean is 13 mÅ and the uncertainty is 3 mÅ. This gives the estimate of equivalent width to be 13±3 mÅ. Do you see any problems with this method of estimating uncertainty? Answer: Yes there is a problem. You seem to be trying to derive an uncertainty in the measurement of EW by doing repeated measurements of the same data? This can only give you the uncertainty associated with your measurement technique (i.e. where you define the limits of the line and how you set the continuum level) - the systematic error you might call it (although there can be other systematic errors inherent to EW measurements, like whether you subtracted the sky or scattered light in your spectrograph correctly for example). What it does not do is evaluate the uncertainty in the EW caused by the quality or signal-to-noise ratio of the data itself. You might assess this using some rule-of-thumb formulae for a Gaussian line, e.g. $$\Delta EW \simeq 1.5 \frac{\sqrt{fp}}{{\rm SNR}},$$ (eqn 6 of Cayrel de Strobel 1988) where $f$ is the FWHM of the spectral line (in wavelength units), $p$ is the size of one pixel in wavelength units and SNR is the signal-to-noise ratio of the data in an average pixel.
Or you could take a synthetic spectrum and add some artificial noise to it with the appropriate properties and measure the EW of several randomisations of the same spectrum, taking the standard deviation of your EW measurements to indicate the EW uncertainty for a particular level of signal-to-noise ratio. If this statistical uncertainty is not negligible, then you would need to add it to any systematic uncertainties associated with your analysis of the spectrum. As far as the latter is concerned then your suggested method does give some indication of what that error might be, though I suspect it will overestimate the 1-sigma uncertainty.
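As a quick numerical illustration of the rule-of-thumb formula (the FWHM, pixel size and SNR below are made-up example values, not from any particular instrument):

```python
import math

def ew_uncertainty(fwhm, pixel, snr):
    # Cayrel de Strobel (1988), eqn 6: dEW ~ 1.5 * sqrt(f * p) / SNR
    # fwhm and pixel in the same wavelength units; result comes out in those units
    return 1.5 * math.sqrt(fwhm * pixel) / snr

# e.g. a line with FWHM 0.10 Angstrom sampled at 0.02 Angstrom/pixel at SNR = 100
d_ew = ew_uncertainty(0.10, 0.02, 100)   # in Angstroms
d_ew_mA = d_ew * 1000                    # in milliangstroms
```

For these example numbers the statistical floor is about 0.7 mÅ, far smaller than the ±3 mÅ repeatability in the question, so in that case the measurement-technique scatter would dominate.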
{ "domain": "astronomy.stackexchange", "id": 3462, "tags": "spectroscopy" }
Trade off between width and depth of free BDDs for total functions
Question: Terminology: A binary decision diagram is a directed acyclic graph with one source (root) and two sinks ($A$ and $B$). Each non-sink node is labeled by an integer $i \in \{1,...,n\}$ and has out-degree 2 (one edge labeled '0', the other '1'). Visiting a node with label $i$ corresponds to querying the bit $x_i$ of the input and following the $0$-edge if $x_i = 0$ and the $1$-edge otherwise. When you arrive at a sink, you output its value $A$ or $B$ as the solution to the function represented by the BDD. The depth of a BDD is the length of the longest path from source to sink. A slice at depth $k$ is the set of all nodes that are at distance $k$ from the root. The width of a BDD is the size of the largest slice. The size of a BDD is the total number of nodes. A BDD is free if there are no constraints on the labeling of the nodes. This is in contrast to the more commonly studied restriction of Ordered BDD, which does not allow you to label a vertex $i \leq j$ if an ancestor is labeled $j$. Question: Given a total boolean function $f$, is there a trade off between the depth and width (or depth and size) of the BDDs representing it? Let $D(f)$ be the query complexity of $f$, and let BDD mean a BDD representing $f$. Are there cases where every BDD of depth $D(f)$ has a strictly larger width than a BDD of larger depth? Motivation: This question is a follow up to: Trade off between time and query complexity The hope is that by looking at a more rigid model such as BDDs we might be able to build intuition for that question in the interesting case of total functions. For a free BDD the depth corresponds directly to query complexity. The width is usually seen as a measure of space not time, so this question is also a restriction of time-space trade offs. However, the hope is that learning some trade offs between depth and width/size can help build similar things for query-vs-time in the circuit model.
Note that the case of partial functions is not interesting, since the Kothari-Fitzsimons construction from the previous question can be modified to give a separation for BDDs (although not an arbitrary one; at most exponential). The case of the better studied ordered BDDs is also less interesting, because it is easy to show that the depth of an ordered BDD does not correspond to query complexity. You can give examples of total functions where any ordered BDD has exponentially higher depth than the function's query complexity. Answer: There are known functions on $n$ variables where depth $n$ branching programs (as non-oblivious BDDs are usually called) require exponential size. The book to read is by Wegener (Branching Programs and Binary Decision Diagrams). Stronger lower bounds are given by Ajtai, and there is some newer work, but I'm not sure which is best to read.
{ "domain": "cstheory.stackexchange", "id": 1032, "tags": "cc.complexity-theory, query-complexity, binary-decision-diagrams" }
Does a Mobius resistor have zero inductance? How would you calculate the inductance?
Question: Wikipedia describes a Möbius resistor as follows, and the Patent for this device gives a similar description. A Möbius resistor is an electrical component made up of two conductive surfaces separated by a dielectric material, twisted 180° and connected to form a Möbius strip. As with the Möbius strip, once the Möbius resistor is connected up in this way it effectively has only one side and one continuous surface. Its connectors are attached at the same point on the circumference but on opposite surfaces. It provides a resistor that has no residual self-inductance, meaning that it can resist the flow of electricity without causing magnetic interference at the same time. (a) Does such a Mobius resistor really have “no residual self-inductance” as claimed? (b) If it does have a definite non-zero value of inductance, then how is this calculated? (c) Are the claimed advantages of this resistor because it is constructed as a Mobius strip? Answer: The inductance can be calculated, but it is first necessary to look at the behavior at very fast timescales of a ns or so. Clearly the two faces of the strip form a transmission line and so, at short timescales, the resistor appears as two transmission lines in parallel. At short timescales, each line will look like a resistor of value equal to the characteristic impedance, Z, of the line. So at short time scales, this device looks like a resistor of value Z/2. But for typical construction of such a resistor with loop diameter of say 30 mm, this initial behavior will be gone in a ns or so, and then the resistance will be set by the resistivity, width and thickness and length of the conductive strips. I’m sure that when the inventor claims “no inductance”, then he means on time scales of more than a ns, after which the input to the device no longer looks like the input to a transmission line. OK, so how might we calculate the inductance, valid for time scales greater than the transit time around the loop? 
Referring to Fig 1, which is a plan view, we see 2 identical loops in parallel, colored black and green, with the twist at the bottom of the sketch, near point P. The loops are identical, but the current travels in different directions around the loops. Thus the magnetic fields (almost) cancel, and the device looks (almost) non-inductive. One description is that the resistor is two anti-phased inductors wired in parallel, with a coupling coefficient almost equal to one, so that nearly all of the flux created by one coil passes through the other. That's fine as a qualitative description, but I asked how the inductance could be calculated. Perhaps there are a number of ways, but I offer the following. Refer to point P on Fig 1, where the inner and outer conductors "cross over" due to the twist. It is assumed that the length of the twisted section is small compared to the overall length of the loop. Symmetry tells us that this point, on both conductors, is always at a potential equal to half of the voltage applied to the resistor. As P is always at the same voltage (half of the applied voltage) on both conductors, it follows that we could electrically connect the two loops together at this point, and this would have no effect on the operation of the circuit. Fig 2 shows the equivalent circuit thus created by connecting the loops together at this point. Note that the small, horizontal link at P in Fig 2 is not actually doing anything. From symmetry considerations there will be no current through this link – if there was, then in which direction would the current be? And as there is no current through this link, it can be removed without changing the operation of the circuit. In Fig 3, this small link has thus been removed.
Also, the two long, thin loops have been straightened, which will also not affect the operation of the circuit, noting that the Bfield from these thin loops is very localized in the small gap between conductors, so the field from one does not interfere with the other (or with anything else nearby), so we are free to straighten or bend these loops as we choose without affecting the operation of the circuit. So we end up with the equivalent circuit of Fig 3, where the resistor input terminals are still connected to two loops in parallel, but now the loops in question are long and very thin, magnetically separate, and their inductance is easily calculated with application of Ampere's Law. Fig 4 illustrates the method for finding the inductance of one of these long, thin loops. Provided the strip width and length are large compared to the separation, then to a good approximation, the Bfield is strong and uniform between the conductors, and zero everywhere else. Ampere's Law states: $$\oint \vec{B}\cdot d\vec{l} = \mu_0 I$$ where $\oint \vec{B}\cdot d\vec{l}$ is a line integral around a closed loop, $I$ is the current passing through the loop, and $\mu_0$ is the magnetic permeability of free space. The line integral is easily calculated, because there is a constant field B along the bottom of the rectangular integration path, the vertical legs of the path are of negligible length, and the field is zero along the top of the path. $$\oint \vec{B}\cdot d\vec{l} = \mu_0 I$$ $$BW = \mu_0 I$$ $$B=\frac{\mu_0 I}{W}$$ Total flux through the loop = $\Phi = BA = BTC = \frac{\mu_0 ITC}{W}$ where $T$ is the conductor separation and $C$ is the loop length, in this case half of the total Mobius loop length. Finally, $L \text{(Henry)} = \frac{\Phi}{I}$ (can be found in any textbook) $L = \frac{\mu_0 TC}{W}$ (for one loop) $$\bf L_\text{mobius} = \frac{\mu_0 TC}{(2W)}$$ (because there are 2 identical loops in parallel) How beautifully simple.
So to minimize the inductance, you need small strip separation, small length of Mobius loop, and a wide strip. The Bfield from these narrow loops is very localized in the small gap between conductors, so the field from one does not interfere with the other (or with anything else nearby) and the Mobius loop can thus be bent into an ellipse or other shape without affecting the inductance seen between the terminals. OK. So the formula shows that the inductance is not inherently zero as claimed, but just how small will it be for typical construction? Assume the following dimensions. $T = 0.05 mm = 0.05\times 10^{-3} m$ $C = 47 mm = 0.047 m$ (corresponds to D=30mm) $W = 10 mm = 0.01 m$ $\mu_0 = 1.26\times 10^{-6}$ (for free space, as no magnetic materials are present). $L = \mu_0 TC/(2W)$ $L = 1.48\times 10^{-10}$ Henry = 0.148 nH That is a very low inductance to be sure, ideal for fast current sensing. To exploit such low inductance will require Kelvin sense terminals (4-wire resistor), which is easily accomplished; otherwise the inductance of the lead-in wires will far exceed that of the resistor itself. To put the value of 0.15 nH in perspective, the package inductance of the mosfet source connection on a TO220 power semiconductor package is about 5 nH, measured 6 mm from the die, so 0.15 nH is an absurdly small value of inductance for an electronic component. I will later add more text to show a better method of constructing low-inductance resistors, that I have been using for decades with good results.
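Plugging the worked example's numbers into the final formula (a direct transcription of the calculation above, nothing new assumed):

```python
MU0 = 1.25663706e-6  # H/m, permeability of free space

def mobius_inductance(T, C, W):
    # L = mu0 * T * C / (2 * W): strip separation T, half-loop length C,
    # strip width W, all in metres; result in henries
    return MU0 * T * C / (2 * W)

L = mobius_inductance(T=0.05e-3, C=0.047, W=0.01)
L_nH = L * 1e9  # ~0.15 nH, matching the hand calculation
```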
{ "domain": "physics.stackexchange", "id": 90604, "tags": "electric-circuits, electrical-resistance, inductance, electronics, electrical-engineering" }
Can we explain physical similarities between Black Scholes PDE and the Mass Balance PDE (e.g. Advection-Diffusion equation)?
Question: Both the Black-Scholes PDE{*} and the Mass/Material Balance PDE have a similar mathematical form of the PDE which is evident from the fact that on change of variables from the Black-Scholes PDE we derive the heat equation (a specific form of the Mass Balance PDE) in order to find an analytical solution to the Black-Scholes PDE. I feel there should be some physical similarity between the two phenomena which control these two analogous PDEs (i.e. Black-Scholes and Mass/Material Balance). My question is, can one relate these two phenomena physically through their respective PDEs? I hope my question is clear, if not please let me know. Thanks. *PDE=Partial Differential Equation Answer: This was intended to be a comment, but is too long so I will post it as an answer. First of all, a disclaimer: I am a physicist and all that I know about quantitative finance comes from self-learning, so please feel free to correct me if I am mistaken (also in the physics stuff, of course!). I have been doing a little research, and perhaps you are right in your last point. The Black-Scholes PDE relies on the assumptions that (i) the option price is a continuous function of time and the underlying asset and (ii) the current stock price follows a [geometric] Brownian motion. From the physics point of view, there is a deep connection between Brownian motion and the diffusion equation which is exemplified in the famous Einstein relation. As the heat equation is a particular form of the diffusion equation, it is not so surprising that the heat kernel appears in some of the solutions of the Black-Scholes PDE. However, there is no physical similarity between these two phenomena, as neither describes a physical process; they are only based on the same mathematics [stochastic calculus]. As a curious fact, I was reading not long ago the book When Genius Failed: The Rise and Fall of Long-Term Capital Management. There it is said that Myron Scholes and especially Robert C.
Merton developed all their option pricing theory mimicking physical models [taken from research papers and books on statistical mechanics] and having faith in the so-called "efficient market hypothesis". The history of LTCM is well known in finance: they lost billions of dollars after having been leveraged $250$ to $1$ [that is, they invested $250$ dollars for every $1$ dollar they actually had].
{ "domain": "physics.stackexchange", "id": 2646, "tags": "heat, conservation-laws, diffusion, mass" }
How to find 5 repeated values in O(n) time?
Question: Suppose you have an array of size $n \geq 6$ containing integers from $1$ to $n − 5$, inclusive, with exactly five repeated. I need to propose an algorithm that can find the repeated numbers in $O(n)$ time. I cannot, for the life of me, think of anything. I think sorting, at best, would be $O(n\log n)$? Then traversing the array would be $O(n)$, resulting in $O(n^2\log n)$. However, I'm not really sure if sorting would be necessary as I've seen some tricky stuff with linked list, queues, stacks, etc. Answer: The solution in fade2black's answer is the standard one, but it uses $O(n)$ space. You can improve this to $O(1)$ space as follows: Let the array be $A[1],\ldots,A[n]$. For $d=1,\ldots,5$, compute $\sigma_d = \sum_{i=1}^n A[i]^d$. Compute $\tau_d = \sigma_d - \sum_{i=1}^{n-5} i^d$ (you can use the well-known formulas to compute the latter sum in $O(1)$). Note that $\tau_d = m_1^d + \cdots + m_5^d$, where $m_1,\ldots,m_5$ are the repeated numbers. Compute the polynomial $P(t) = (t-m_1)\cdots(t-m_5)$. The coefficients of this polynomial are symmetric functions of $m_1,\ldots,m_5$ which can be computed from $\tau_1,\ldots,\tau_5$ in $O(1)$. Find all roots of the polynomial $P(t)$ by trying all $n-5$ possibilities. This algorithm assumes the RAM machine model, in which basic arithmetic operations on $O(\log n)$-bit words take $O(1)$ time. Another way to formulate this solution is along the following lines: Calculate $x_1 = \sum_{i=1}^n A[i]$, and deduce $y_1 = m_1 + \cdots + m_5$ using the formula $y_1 = x_1 - \sum_{i=1}^{n-5} i$. Calculate $x_2 = \sum_{1 \leq i < j \leq n} A[i] A[j]$ in $O(n)$ using the formula $$ x_2 = (A[1]) A[2] + (A[1] + A[2]) A[3] + (A[1] + A[2] + A[3]) A[4] + \cdots + (A[1] + \cdots + A[n-1]) A[n]. $$ Deduce $y_2 = \sum_{1 \leq i < j \leq 5} m_i m_j$ using the formula $$ y_2 = x_2 - \sum_{1 \leq i < j \leq n-5} ij - \left(\sum_{i=1}^{n-5} i\right) y_1. $$ Calculate $x_3,x_4,x_5$ and deduce $y_3,y_4,y_5$ along similar lines.
The values of $y_1,\ldots,y_5$ are (up to sign) the coefficients of the polynomial $P(t)$ from the preceding solution. This solution shows that if we replace 5 by $d$, then we get (I believe) a $O(d^2n)$ algorithm using $O(d^2)$ space, which performs $O(dn)$ arithmetic operations on integers of bit-length $O(d\log n)$, keeping at most $O(d)$ of these at any given time. (This requires careful analysis of the multiplications we perform, most of which involve one operand of length only $O(\log n)$.) It is conceivable that this can be improved to $O(dn)$ time and $O(d)$ space using modular arithmetic.
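The whole pipeline can be sketched as follows (a direct implementation of the power-sum outline, using Newton's identities for the step from $\tau_1,\ldots,\tau_5$ to the polynomial coefficients; `find_five_repeats` is just an illustrative name, and for brevity the $\sum i^d$ terms are computed with a loop rather than the closed-form formulas the answer assumes):

```python
def find_five_repeats(A):
    # A contains 1..n-5, with exactly five distinct values appearing twice
    n = len(A)
    m = n - 5
    # tau[d] = power sums of the five repeated values, d = 1..5
    tau = [0] * 6
    for d in range(1, 6):
        tau[d] = sum(a ** d for a in A) - sum(i ** d for i in range(1, m + 1))
    # Newton's identities: elementary symmetric polynomials e[1..5] from tau[1..5]
    e = [1, 0, 0, 0, 0, 0]
    for k in range(1, 6):
        acc = 0
        for j in range(1, k + 1):
            acc += (-1) ** (j - 1) * e[k - j] * tau[j]
        e[k] = acc // k  # always divides exactly
    # P(t) = t^5 - e1 t^4 + e2 t^3 - e3 t^2 + e4 t - e5; try all n-5 candidates
    return [t for t in range(1, m + 1)
            if t**5 - e[1]*t**4 + e[2]*t**3 - e[3]*t**2 + e[4]*t - e[5] == 0]

repeats = find_five_repeats([1, 2, 3, 4, 5, 6, 1, 2, 3, 4, 5])
```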
{ "domain": "cs.stackexchange", "id": 9934, "tags": "algorithms, arrays, searching" }
What is the significance of different modes?
Question: By "modes" I here refer to TE (transverse electrical; s-polarized) and TM (transverse magnetic; p-polarized) modes. After going through these notes (p 7-8) I somewhat understand the TE and TM modes: The idea is that any vector can be resolved into two components, one parallel and one perpendicular with respect to the chosen resolution axis. Along the same lines, we can think that there is a plane wave incident on a plane or a slab. $$\vec{E} = \vec{E}_{0}(x,y) e^{-ik_{z}z} $$ Our $yz$-plane or slab is chosen (by convention) as $\hat{y}$. We resolve $\vec{E}$ into two vectors, one along $\hat{y}$ (TE mode) and another perpendicular to it (TM mode). At the end we just add the contribution from both to get the total $\vec{E}$. My question is: How does the TEM (transverse electromagnetic) mode fit into this? How do we see the following: We have that $E_{z} = 0$, $H_{z}\ne 0$ for a TE mode and $H_{z} = 0$, $E_{z}\ne 0$ for a TM mode. Physical significance is what I am looking for. I checked a lot of references (using normal and transverse as separate modes!) but these only add to my confusion regarding these modes. Answer: You're confused because there are two separate concepts here: the plane interface problem, and waveguide modes. The definition for what is TE or TM is completely different between the two cases. Confusing, I agree. In waveguide modes, the definition is based on what is transverse to the direction of propagation in the waveguide. So TE modes have an electric field completely transverse to the direction of propagation, with a non-zero transverse magnetic field. TM modes have a magnetic field completely transverse to the direction of propagation, with a non-zero transverse electric field. In TEM modes both the electric and magnetic field have zero components in the transverse direction. For the plane interface problem, the meaning of transverse is different.
Now we're concerned with transverse to the plane of incidence, not the direction of propagation. It's kind of an opposite definition. So TE modes (s-polarized) have electric field completely transverse to this plane. This plane is the xz-plane in your notes, so the electric field is y-polarized, and the magnetic field has no polarization in the y-direction. TM modes (p-polarized) have magnetic fields completely transverse to this plane (y-polarized) and electric fields have no components in the y-direction. This definition of TE/TM for the plane interface problem can vary. I believe I have seen the definition of TE and TM switched so that they describe quantities that are transverse to the direction of propagation. This makes the definition more consistent with the definition used in waveguide modes. As per usual, keeping track of conventions is half the battle. Some references: Chapter 8 of Classical Electrodynamics by Jackson (that's based on my super old 2nd edition; I'm guessing it's true for the 3rd edition as well). A good writeup from Rutgers. Some slides from MIT. Elements of Electromagnetics by Sadiku, chapter 12 in the second edition. Advanced Engineering Electromagnetics by Balanis, chapter 8.
{ "domain": "physics.stackexchange", "id": 38789, "tags": "electromagnetism, waveguide" }
Is it required to quantize wavelet coefficients before one derives features from?
Question: I am using wavelet coefficients for feature extraction in a classification problem. As the wavelet coefficient values are real, positive and negative, is it required to quantize them before feature extraction? Answer: Not sure what you mean by feature extraction and what the nature of the signal is. Assuming the signal is an image and the features are texture features like GLCM (Gray-Level Co-Occurrence Matrix) and NGTDM (Neighborhood Gray Tone Difference Matrix) matrices etc., you will have to quantise the input to a manageable number of levels. I usually apply the inverse wavelet transform to the coefficients to obtain the desired image, e.g., approximation, diagonal, horizontal and vertical detail images. Then I quantise the reconstructed image (to e.g. 32 or 64 levels) and then derive the NGTDM (or GLCM) matrix from the quantized input.
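The gray-level quantization step can be sketched like this (a simple uniform quantizer; the choice of 32 levels and the sample values are mine, purely for illustration):

```python
def quantize(values, levels):
    # uniformly map real-valued pixels/coefficients to integer bins 0..levels-1
    lo, hi = min(values), max(values)
    if hi == lo:
        return [0] * len(values)
    scale = levels / (hi - lo)
    # clamp so the maximum value lands in the top bin instead of bin `levels`
    return [min(levels - 1, int((v - lo) * scale)) for v in values]

pixels = [-1.5, -0.2, 0.0, 0.7, 2.3]   # e.g. a reconstructed detail image, flattened
binned = quantize(pixels, 32)          # integer levels, ready for GLCM/NGTDM counting
```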
{ "domain": "dsp.stackexchange", "id": 4162, "tags": "wavelet" }
Project Euler #50 Consecutive prime sum
Question: I'm having trouble optimising the project euler #50 exercise, it runs for around 30-35 seconds, which is terrible performance. The prime 41, can be written as the sum of six consecutive primes: 41 = 2 + 3 + 5 + 7 + 11 + 13 This is the longest sum of consecutive primes that adds to a prime below one-hundred. The longest sum of consecutive primes below one-thousand that adds to a prime, contains 21 terms, and is equal to 953. Which prime, below one-million, can be written as the sum of the most consecutive primes? The catch here is that it's not necessary to start from 2, the sum of consecutive primes which add to 953 are starting from 7. Here's my code : static void Main(string[] args) { int max = 0; int maxCount = 1; List<int> primes = new List<int>(); Stopwatch sw = Stopwatch.StartNew(); bool[] allNumbers = SetPrimes(1000000); for (int i = 0; i < allNumbers.Length; i++) { if (allNumbers[i]) { primes.Add(i); } } foreach (int prime in primes) { int startingIndex = 0; while (primes[startingIndex] < prime/maxCount) { int n = prime; int j = startingIndex; int sum = 0; int count = 0; while (n > 0) { sum += primes[j]; n -= primes[j]; j++; count++; } if (sum == prime) { if (count > maxCount) { maxCount = count; max = prime; } } startingIndex++; } } sw.Stop(); Console.WriteLine(max); Console.WriteLine($"Time to calculate : {sw.ElapsedMilliseconds}"); Console.ReadKey(); } private static bool[] SetPrimes(int max) { bool[] localPrimes = new bool[max + 1]; for (int i = 2; i <= max; i++) { localPrimes[i] = true; } for (int i = 2; i <= Math.Sqrt(max); i++) { if (localPrimes[i]) { for (int j = i * i; j <= max; j += i) { localPrimes[j] = false; } } } return localPrimes; } Answer: Even though your prime number generator is not the bottleneck, you should not do this: for (int i = 2; i <= Math.Sqrt(max); i++) ^^^^^^^^^ Do not calculate the same square root in every loop iteration. Sqrt is an expensive enough operation that you don't want to call unnecessarily. 
Calculate it only once, store that in a variable and use that in the loop. I couldn't completely analyze your algorithm, but I ran through the first few steps with the debugger. It looks like you're doing a lot of unnecessary work. For each prime, you start by looking for a sum of length 1, then length 2, then 3, etc. Even if you have already found a sum of length 100, you always start at 0 again. I'm guessing that's where your bottleneck is. You want to find the longest anyway, so why not start at the maximum length and shorten it as you go? You can stop as soon as you find one. (there are a lot more possible sums of length 2 than of length 500) My algorithm works like this: We have a sum of primes: p(1), p(2), p(3), ... p(n-1), p(n) See if this sum is a prime also (by doing a binary search on the prime numbers) If it's not, check the next sum of the same length by subtracting p(1) and adding p(n+1). Keep doing this until we find a prime or until the sum becomes greater than 1000000. Then, shorten the length by 1 by subtracting the last prime, so we get the sum p(1),p(2),p(3)...p(n-1) and check each sum of this length, etc.
(This code is a few years old and might still be a little sloppy) public int Solve050() { const int Limit = 1000000; int[] primes = WhateverPrimeGenerator.PrimesUpTo(Limit).ToArray(); int sum = 0, length = 0; //Find the maximum possible length by adding up primes, while sum < Limit while (sum < Limit) { int newSum = sum + primes[length]; if (newSum >= Limit) break; sum = newSum; length++; } int answer = 0; for (; length > 1; length--) { answer = FindPrime(primes, Limit, sum, length - 1); if (answer > 0) break; sum -= primes[length - 1]; } return answer; } //Tries to find a prime of sum-length defined by lastIndex private static int FindPrime(int[] primes, int maxSum, int sum, int lastIndex) { int result = 0; int index = lastIndex + 1; for (int firstIndex = 0; lastIndex < primes.Length && sum <= maxSum; firstIndex++, lastIndex++) { index = Array.BinarySearch(primes, index, primes.Length - index, sum); if (index > 0) result = primes[index]; //Prime found if (index < 0) index = ~index; sum = sum - primes[firstIndex] + primes[lastIndex + 1]; } return result; }
{ "domain": "codereview.stackexchange", "id": 20489, "tags": "c#, performance, programming-challenge, primes" }
Maximizing a submodular function of two sets with different size constraints
Question: I have two totally distinct domains (apples and oranges) and I have a function $f$ that takes a set of objects from the first domain and a set of objects from the second domain and returns a real number. $f(S,T)$ has the following interesting properties: fixing $T$, it is non-negative, submodular and monotone w.r.t. $S$; fixing $S$, it is non-negative, submodular and monotone w.r.t. $T$. I want to maximize $f(S,T)$ with two cardinality constraints $|S| = s$ and $|T| = t$. How can I do that? If I consider the product space, the function is monotone and submodular. Thus I can apply the standard greedy algorithm. Dealing with the two different size constraints might not be an issue: adding $(a, x)$ and $(a, y)$ in sequence allows me to increase $|T|$ without increasing $|S|$. The question is whether the $1-1/e$ approximation still holds. Answer: The problem is likely to be hard to approximate. The densest bipartite subgraph problem can be cast as a special case. Given a bipartite graph $(V,E)$ where $V=V_1 \uplus V_2$, define $f(S,T)$ for $S \subseteq V_1, T \subseteq V_2$ to be the number of edges between $S$ and $T$. Then $f$ satisfies the desired property. In fact $f(S,\cdot)$ is modular and so is $f(\cdot,T)$. If $s=t=k$ then we are asking for a $k$ by $k$ densest bipartite subgraph problem. Only a polynomial ratio approximation is known, and under some assumptions this problem can be shown to be hard.
{ "domain": "cstheory.stackexchange", "id": 1715, "tags": "ds.algorithms, optimization, submodularity" }
Does an ion thrust engine consume more energy as it speeds up?
Question: This question goes to a very basic non-understanding of mine that I have had in the back of my mind for ages - I just read the following here: ion thrusters are capable of propelling a spacecraft up to 90,000 meters per second (over 200,000 miles per hour (mph)). To put that into perspective, the space shuttle is capable of a top speed of around 18,000 mph. The tradeoff for this high top speed is low thrust (or low acceleration). Thrust is the force that the thruster applies to the spacecraft. Modern ion thrusters can deliver up to 0.5 Newtons (0.1 pounds) of thrust, which is equivalent to the force you would feel by holding nine U.S. quarters in your hand. So when it hits the top speed what is the bottleneck? The logical thing to me is that it takes more and more electricity to maintain the 0.1 pounds of thrust, but if this is the case, does this not violate the premise that you cannot tell how fast you are going without something to compare to? In other words, if I turn the engine on and then off again repeatedly, should I expect different results from one time to the next? I know I'm confused about something very basic here - that's why I'm asking. Answer: Does an ion thrust engine consume more energy as it speeds up? The answer to this question is no. So when it hits the top speed what is the bottleneck? The bottleneck is that the vehicle runs out of propellant. The problem is described by the rocket equation, $$\frac {\Delta v}{v_e} = \ln\frac{m_{\text{initial}}}{m_{\text{final}}}$$ Where $m_{\text{final}}$ is the final mass of the rocket: the masses of the structures that previously held the propellant, the engines, the power plants, the structure of the rocket itself, and finally, the payload; $m_{\text{initial}}$ is the initial mass of the rocket, the final mass plus the mass of the propellant; $v_e$ is the velocity of the exhaust relative to the vehicle; and $\Delta v$ is the change in velocity that results from using the propellant.
Note the logarithm on the right hand side of the rocket equation. Adding more propellant has an ever decreasing effect on the change in the rocket's velocity. Another way to look at the rocket equation is to look at the proportion of the initial mass that is propellant: $$\frac{m_{\text{propellant}}}{m_{\text{initial}}} = 1 - \exp\left(-\frac{\Delta v}{v_e}\right)$$ This means that attaining a $\Delta v$ equal to twice the exhaust velocity requires that 86.5% of the initial mass be propellant. This is quite doable. On the other hand, attaining three times the exhaust velocity requires that 95% of the initial mass be propellant. This is just barely possible from an engineering perspective. Anything beyond that is not. Single stage rockets have an upper limit on the change in velocity that is somewhere between two to three times the exhaust velocity. There are ways to overcome the tyranny of the rocket equation. One approach is to use a multi-stage rocket. The math described above pertains to single stage rockets. A single stage rocket using traditional chemical-based techniques cannot achieve orbital velocity from the Earth's surface thanks to that limit of two to three times exhaust velocity. The rocket equation changes a bit for multi-stage rockets. Another approach is to use a better kind of propellant, one with a higher exhaust velocity. That's what makes ion engines so appealing.
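The propellant-fraction formula makes the "two to three times exhaust velocity" limit easy to check numerically:

```python
import math

def propellant_fraction(dv, ve):
    # fraction of the initial mass that must be propellant for a given delta-v,
    # from m_prop / m_initial = 1 - exp(-dv / ve)
    return 1.0 - math.exp(-dv / ve)

f2 = propellant_fraction(2.0, 1.0)  # dv = 2 * ve -> ~86.5% propellant
f3 = propellant_fraction(3.0, 1.0)  # dv = 3 * ve -> ~95.0% propellant
```

The function is scale-free: only the ratio dv/ve matters, which is why the limit is stated in multiples of exhaust velocity.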
{ "domain": "physics.stackexchange", "id": 25277, "tags": "velocity, rocket-science, propulsion" }
What are the best known upper bounds and lower bounds for computing O(log n)-Clique?
Question: Input: a graph with n nodes, Output: A clique of size $O(\log n)$, Providing links to references would be great Answer: The best known upper bound is essentially $n^{O(\log n)}$. You can improve a little on the constant factor in the big-O using fast matrix multiplication, but that's about it. There are a lot of algorithmic references on the $k$-clique problem which describe this reduction; it originates from papers of Itai and Rodeh, and of Nešetřil and Poljak. See http://en.wikipedia.org/wiki/Clique_problem If you could solve $\log n$-clique in $n^{\varepsilon \log n}$ for every $\varepsilon > 0$, then you could also solve 3SAT in subexponential time. This can be seen as a "lower bound" to further progress. One way to prove this is to first show that if $\log n$-clique is solvable in $n^{\varepsilon \log n}$ for every $\varepsilon > 0$, then MaxCut on $n$ nodes is solvable in $2^{\varepsilon n}$ time for every $\varepsilon > 0$. This follows directly from a theorem in my ICALP'04 paper that relates the time complexity of MaxCut to the time complexity of $k$-clique. From there, one can appeal to standard reductions to reduce 3SAT to MaxCut, showing that subexponential MaxCut implies subexponential 3SAT. In terms of unconditional lower bounds, nothing nontrivial is known, to my knowledge. We don't even know how to show that $O(\log n)$-clique isn't solvable with an algorithm that runs in linear time and uses only logarithmic workspace.
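To make the brute-force upper bound concrete, here is an illustrative Python sketch (my own, not from the answer) of the naive enumeration, which does roughly $n^k$ work, i.e. $n^{O(\log n)}$ when $k = O(\log n)$:

```python
from itertools import combinations
from math import log2

def find_clique(adj, k):
    """Brute-force k-clique: try all C(n, k) vertex subsets and check
    that every pair inside the subset is adjacent."""
    nodes = range(len(adj))
    for subset in combinations(nodes, k):
        if all(adj[u][v] for u, v in combinations(subset, 2)):
            return subset
    return None

# Toy instance: K4 plus one isolated vertex; ask for a clique of size ~log2(n).
n = 5
adj = [[1 if i != j and i < 4 and j < 4 else 0 for j in range(n)]
       for i in range(n)]
k = max(1, round(log2(n)))   # k = 2 for n = 5
print(find_clique(adj, k))   # → (0, 1)
```

The fast-matrix-multiplication improvement mentioned in the answer shaves the constant in the exponent of this enumeration but does not change its $n^{O(\log n)}$ character.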
{ "domain": "cstheory.stackexchange", "id": 65, "tags": "ds.algorithms, reference-request, graph-theory, lower-bounds, upper-bounds" }
How to extract probabilities from Kraus representation?
Question: Consider a quantum operation described by Kraus operators $K_1, ..., K_n$. As I understand it, the effect of this operation on a density matrix $\rho$ can be described as $ \mathcal{E}(\rho)= \sum_{i}p(i)\rho_i$, where $\rho_i$ is a possible state of the system after the operation and $p(i)$ is the probability of that state. If I only have Kraus operators, can I still infer the possible states and their probabilities? Each term $K_i\rho K^{\dagger}_i$ in the operator-sum representation of the quantum operation seems to incorporate both the potential outcome and its probability. Is there a way to extract each of them? Answer: We can indeed rewrite $\mathcal{E}(\rho)=\sum_iK_i\rho K_i^\dagger$ as $\mathcal{E}(\rho)=\sum_ip(i)\rho_i$ by setting $p(i):=\mathrm{tr}(K_i\rho K_i^\dagger)$ and $\rho_i:=\frac{K_i\rho K_i^\dagger}{p(i)}$. Note that $\mathrm{tr}(\rho_i)=1$ and $\sum_ip(i)=\mathrm{tr}(\rho \sum_iK_i^\dagger K_i)=1$, so we can interpret $\rho_i$ as states and $p(i)$ as probabilities. That said, the probabilities $p(i)$ generally depend on the input state $\rho$. However, if $\mathcal{E}$ is a unitary mixture, i.e. if every $K_i$ is a scalar multiple of a unitary operator, $K_i=\alpha_i U_i$, then $p(i)=\mathrm{tr}(K_i\rho K_i^\dagger)=|\alpha_i|^2$ is independent of $\rho$. Finally, note that $\rho_i$ and $p(i)$ are not unique since the Kraus representation is not unique.
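A short NumPy sketch (my own illustration, using the amplitude-damping channel as an example set of Kraus operators) shows how to extract the $p(i)$ and $\rho_i$ defined in the answer:

```python
import numpy as np

# Amplitude-damping channel, chosen here purely for illustration;
# any Kraus set with sum_i K_i† K_i = I works the same way.
gamma = 0.3
K = [np.array([[1.0, 0.0], [0.0, np.sqrt(1 - gamma)]]),
     np.array([[0.0, np.sqrt(gamma)], [0.0, 0.0]])]

rho = np.array([[0.5, 0.5], [0.5, 0.5]])   # |+><+| as the input state

probs, states = [], []
for Ki in K:
    unnormalized = Ki @ rho @ Ki.conj().T
    p = np.trace(unnormalized).real        # p(i) = tr(K_i rho K_i†)
    probs.append(p)
    states.append(unnormalized / p)        # rho_i, a proper density matrix

# The weighted mixture reproduces the channel output, and probabilities sum to 1.
assert np.isclose(sum(probs), 1.0)
assert np.allclose(sum(p * s for p, s in zip(probs, states)),
                   sum(Ki @ rho @ Ki.conj().T for Ki in K))
```

Re-running this with a different `rho` changes `probs`, which illustrates the answer's point that the $p(i)$ generally depend on the input state.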
{ "domain": "quantumcomputing.stackexchange", "id": 4701, "tags": "textbook-and-exercises, quantum-operation, nielsen-and-chuang, kraus-representation" }
Synchronizing messages without time headers
Question: Hi all, I am trying to synchronize subscriptions to a few topics with custom messages. armj_cmd_pos_sub = message_filters.Subscriber("/arm_controller/position_command", JointPositions) armj_cmd_vel_sub = message_filters.Subscriber("/arm_controller/velocity_command", JointVelocities) armj_cmd_eff_sub = message_filters.Subscriber("/arm_controller/effort_command", JointTorques) armj_states_sub = message_filters.Subscriber("/joint_states", JointState) It gave me AttributeError: 'JointPositions' object has no attribute 'header' I am suspecting the /arm_controller/position_command topic. The type of the /arm_controller/position_command topic is brics_actuator/JointPositions and the content is Poison poisonStamp JointValue[] positions and the content of JointValue is time timeStamp #time of the data string joint_uri string unit #if empy expects si units, you can use boost::unit float64 value My question is: how do I get rid of the attribute error? How do I synchronize topics when their messages don't have headers? Is there a way to add headers without meddling with the message files? Thanks in advance. Originally posted by whiterose on ROS Answers with karma: 148 on 2013-04-12 Post score: 1 Answer: Without extra work, you can't. Simply because the filter needs to know what to synchronize. The main problem here is that the message you pointed out doesn't use a Header, but a custom stamp. At least there is one, so probably the easiest method is to use the filters manually. I haven't checked the API, but I'd try to not directly use the subscriber as an input, but instead put your own "FixHeaderForMyCustomMsg"-Filter in between that just constructs a data type that contains your message + a header (that you fill from the stamp). Given Python's typing, you should be able to just pass that into the TimeSynchronizer. Meddling with built-in messages probably won't work as the types are defined using slots (probably for efficiency), so you can't change that.
Originally posted by dornhege with karma: 31395 on 2013-04-12 This answer was ACCEPTED on the original site Post score: 1 Original comments Comment by whiterose on 2013-04-14: Does that mean the following steps: (1) Subscribe to the topic and get the message (2) republish the message with another topic with a message with header (3) And subscribe to the new topic ? Or is there a better way? Comment by dornhege on 2013-04-14: No. There is no need to republish. Basically message_filters are filters, so just take the message and put it in something that has a header. That something you should be able to put into the message filter.
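To illustrate dornhege's suggestion, here is a heavily hedged, rospy-free sketch of the wrapping idea. The field access (`positions[0].timeStamp`) and the wrapper itself are my assumptions about the brics_actuator layout, and in a real node this logic would live in a `message_filters.SimpleFilter` subclass that re-emits the wrapped message via `signalMessage()`:

```python
from types import SimpleNamespace

def with_header(msg):
    """Wrap a headerless brics_actuator-style message in an object that
    exposes the .header.stamp attribute the time synchronizer expects.
    The stamp is copied from the first JointValue's timeStamp field
    (an assumption about the message layout)."""
    stamp = msg.positions[0].timeStamp
    return SimpleNamespace(header=SimpleNamespace(stamp=stamp), msg=msg)

# Mock demonstration with stand-in objects instead of real ROS messages:
joint = SimpleNamespace(timeStamp=123.456, joint_uri="arm_joint_1",
                        unit="rad", value=0.5)
cmd = SimpleNamespace(positions=[joint])
wrapped = with_header(cmd)
assert wrapped.header.stamp == 123.456
assert wrapped.msg is cmd
```

Because Python is duck-typed, anything with a `header.stamp` attribute should satisfy the synchronizer, which is exactly the point of the answer.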
{ "domain": "robotics.stackexchange", "id": 13796, "tags": "python, custom-message, synchronization" }
Testing if array is heap
Question: This method tests whether an array of integers is a max binary heap. This means each node is greater than or equal to its children; each node has at most 2 children; the leaf nodes are filled from left to right. This is for heaps that do not use element 0. See here. /*returns true if array satisfies heap property for heap from [START, END]*/ private boolean isHeap(final int[] heap, final int START, final int END) { for(int i = START; i <= Math.ceil(END / 2); i++) { if(2 * i + 1 <= END && heap[i] < heap[2 * i + 1]) return false; else if(2 * i <= END && heap[i] < heap[2 * i]) return false; } return true; } Answer: Possible bug The method isHeap takes an array along with a start and end index. Although the code does try to take into account potential out of bounds by checking 2 * i + 1 <= end and 2 * i <= end, there is no check that end is strictly lower than heap.length. As such, out of bounds array indexes can still occur: isHeapReview(new int[] { 0 }, 0, 1); would throw a java.lang.ArrayIndexOutOfBoundsException: 1. There are multiple solutions depending on the intended usage of the method: You can defend from such a case by re-defining end to be the minimum of the given end and heap.length - 1 with Math.min(end, heap.length - 1). You can check if end is greater than or equal to heap.length and, if true, throw an IllegalArgumentException. In the same way, there is no check that start is a positive integer. Those checks should be added to it too. Code style Watch your indentation style and braces. The following if(2 * i + 1 <= END && heap[i] < heap[2 * i + 1]) return false; else if(2 * i <= END && heap[i] < heap[2 * i]) return false; doesn't use curly braces and is not indented properly. Even if they are redundant, it is best to explicitly add the curly braces, as they prevent future possible issues.
Use this style instead: if (2 * i + 1 <= end && heap[i] < heap[2 * i + 1]) { return false; } else if (2 * i <= end && heap[i] < heap[2 * i]) { return false; } where the braces were added, indentation is fixed, spaces are added after if; all of this contributes to easier-to-read code. Simplification end / 2 performs integer division and will return an integer, so invoking Math.ceil on it will have no effect. You can remove it; the code already loops from start to end / 2 inclusive. Also, since end is expected to be lower than or equal to heap.length - 1, you can remove the 2 * i <= END check in: if (2 * i <= END && heap[i] < heap[2 * i]) This will always be true since i's maximal value is end / 2. With this change, you can even refactor the if statement to: if (2 * i + 1 <= END && heap[i] < heap[2 * i + 1] || heap[i] < heap[2 * i]) { return false; } without the need for an else if statement. It does make the line a tiny bit longer, but it is short enough to typically fit on a screen and is pretty direct. Namings Don't write the parameters in upper-case; only use this for constants. START should really be start; in the same way, END should be end.
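For illustration, here is a hypothetical Python translation of the reviewed method with the suggested fixes applied (bounds checks, dropped ceiling, merged condition); the 1-based layout with element 0 unused mirrors the original:

```python
def is_max_heap(heap, start, end):
    """Max-heap check on heap[start..end], 1-based layout (element 0 unused),
    with the bounds checks suggested in the review."""
    if start < 1 or end >= len(heap):
        raise ValueError("start/end out of range")
    for i in range(start, end // 2 + 1):
        # 2*i <= end always holds for i <= end // 2, so heap[2*i] needs no guard.
        if (2 * i + 1 <= end and heap[i] < heap[2 * i + 1]) or heap[i] < heap[2 * i]:
            return False
    return True

# Element 0 is a placeholder in this layout.
assert is_max_heap([None, 9, 5, 8, 1, 2, 7], 1, 6)
assert not is_max_heap([None, 1, 5, 8], 1, 3)
```

Integer division (`end // 2`) makes the dropped `Math.ceil` explicit, and the out-of-range guard turns the review's `ArrayIndexOutOfBoundsException` scenario into a clear error.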
{ "domain": "codereview.stackexchange", "id": 21217, "tags": "java, heap" }
What is lithium pyroborate?
Question: This component is mentioned in an article called "Novel geopolymer materials containing borate and phosphate structural units". I've never heard about this before, and a google search didn't really give an answer it seems. Answer: Lithium pyroborate ($\ce{Li2B4O7}$) is the salt of lithium ($\ce{Li}$) and pyroboric acid ($\ce{H2B4O7}$). Nomenclature The prefix ortho- designates an acid with the maximum number of hydroxyl ($\ce{OH}$) groups [reference]. For boron ($\ce{B}$), orthoboric acid is $\ce{H3BO3}$ (also written as $\ce{B(OH)3}$ to show that there are $3$ hydroxyl groups). The prefix pyro- here seems to not match with the usual usage, so I will not be explaining the usual usage here. One common feature between this and the usual usage is that their molecular formulas are multiples of the ortho acid's molecular formulas, minus some water molecules: $$\ce{4H3BO3 -> H2B4O7 + 5H2O}$$ Gallery Some proposed structures of lithium pyroborate: Some proposed structures of pyroboric acid: The above are modified from two proposed structures of sodium pyroborate, with the first one referenced here and the second one referenced here.
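As a small aside, the dehydration equation above can be sanity-checked mechanically; this Python snippet (purely illustrative) verifies the atom balance of $\ce{4H3BO3 -> H2B4O7 + 5H2O}$:

```python
from collections import Counter

# Atom counts per formula unit, hard-coded from the formulas in the text.
H3BO3 = Counter(H=3, B=1, O=3)
H2B4O7 = Counter(H=2, B=4, O=7)
H2O = Counter(H=2, O=1)

def scale(counter, n):
    """Multiply every atom count in a formula by a stoichiometric coefficient."""
    return Counter({el: n * c for el, c in counter.items()})

lhs = scale(H3BO3, 4)                    # 4 H3BO3
rhs = scale(H2B4O7, 1) + scale(H2O, 5)   # H2B4O7 + 5 H2O
assert lhs == rhs                        # the equation is balanced
```

Both sides come out to 12 H, 4 B, and 12 O, matching the "four ortho units minus five waters" description in the answer.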
{ "domain": "chemistry.stackexchange", "id": 6438, "tags": "inorganic-chemistry" }
Available energy for potential life on Titan
Question: There is an interesting discussion about whether there could be life on Saturn's moon Titan. (For example here.) Such life could use the reaction of hydrogen and acetylene, which are being produced by photolysis in Titan's atmosphere. I would like to know what total energy is available for such life per square meter per day, or at least how much hydrogen is produced in Titan's atmosphere per day. There are some estimates in the article McKay and Smith, Icarus, 2005, but I have trouble translating the quantities they use for the estimate into the total amount of hydrogen produced per unit of time or energy produced per unit of time. Answer: I can give a rough answer to your question and some brief thoughts on the paper, and I invite correction from anyone smarter than me. Your article talks about energy per mole, but energy of production on a daily basis would be a product of solar energy. To produce hydrogen in Titan's atmosphere requires UV light, from this article, UV light of 1600 Angstroms (160 nm) (Source) Quote: Methane is a carbon atom surrounded by four hydrogen atoms, and it can be broken apart by ultraviolet light at wavelengths of about 1600 Angstroms. The fragments, or radicals, that are produced from this process are very chemically reactive. They’re things like CH, CH2, and in some cases CH3. Only a small percentage of the sun's light is in that range. 8%-10% of the sun's light is in the UV range, but maybe only 1%-2% in the upper UV range of 160 nm. (Source) The same Wiki article, using Saturn as a guide, Saturn receives between 13.4 and 16.7 watts per square meter (Source), so figure an estimate of 15 watts per square meter total solar energy, 1%-2% of that strong enough to pull a hydrogen atom off Methane, so 0.15 - 0.3 watts per square meter. That might not sound like much, but on a moon the size of Titan, that's a lot of square meters so the low wattage isn't a big deal.
That's a peak estimate though as some of the hydrogen would recombine with the CH3 it split from, some energy would likely be converted to heat, and perhaps the biggest problem, mentioned in the article above, a share of the hydrogen, perhaps the lion's share, would simply be lost off the planet. In the atmospheres of the giant planets, the hydrogen stays around because of the high gravity of those planets. Their atmospheres are primarily hydrogen anyway, and after the fragmentation occurs, the products sink into the deeper atmosphere, and methane is reconstituted. It's a complete chemical cycle. On Titan, as far as we understand, that does not occur. Because the gravity is low, hydrogen should escape. The Voyager ultraviolet spectrometer saw a corona of hydrogen around Titan, which is a good indication that hydrogen is escaping. Now, if hydrogen is going away, the products that can be made from the methane are going to have a higher carbon to hydrogen ratio than methane itself, and you're not going to be able to remake methane. So as far as we understand the photochemistry, Titan should be destroying methane and making more carbon-rich products, the simplest of which are acetylene – C2H2 – and ethane – C2H6. These are made directly from methane. This is my problem with the article. I don't see any way the hydrogen finds its way down to the planet's surface to be used in the way the article suggests, life in the lakes of Titan. It's an interesting idea and the people who wrote it are probably PhD-educated and I'm not, but I suspect the majority of hydrogen produced high in Titan's atmosphere would remain high in its atmosphere and, over time, be lost to the solar wind. It does raise the question, because Titan should lose its Methane over time, why does it still have so much Methane - a very good question. Let's look at the timing. Huygens landed on Titan January 14th, 2005.
Your article was written 14 January 2005; revised 18 April 2005 and here's an article from a few months later, November 2005. http://www.spaceref.com/news/viewpr.html?pid=18410 The origin of methane in Titan's atmosphere is a mystery because it gets broken down by sunlight and particle radiation from space in the upper atmosphere. If surface lakes and pools were the only source, all of Titan's methane would be lost by this mechanism in less than a hundred million years, a short time for a moon that's been around since the formation of the solar system 4.5 billion years ago. Components of the methane molecules react with each other and atmospheric nitrogen. As they descend, they form larger and heavier molecules that comprise the orange haze that blankets the moon. Because Titan is very cold (292 degrees below zero F, or minus 180 degrees Celsius) these heavy compounds condense and rain out on the surface. "We have determined that Titan's methane is not of biological origin, so it must be replenished by geologic processes on Titan, perhaps venting from a supply in the interior that could have been trapped there as the moon formed." So, I'd chalk this one up to timing. January-April 2005, life on Titan was a viable theory, which has since been de-theoried, or, whatever the correct term is.
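The answer's arithmetic can be sketched in a few lines of Python; the Titan radius used below (~2575 km) is my own assumed figure, not from the answer:

```python
import math

solar_at_saturn = 15.0      # W/m^2, mid-range of the 13.4-16.7 quoted above
hard_uv_fraction = 0.015    # 1-2% of sunlight hard enough to photolyze methane

# Flux per square meter that can drive the photolysis.
uv_flux = solar_at_saturn * hard_uv_fraction   # falls in the 0.15-0.3 W/m^2 range

# Scaled over Titan's sunlit cross-section (radius ~2575 km, assumed),
# even this tiny flux is a large absolute power.
titan_radius_m = 2.575e6
total_uv_power = uv_flux * math.pi * titan_radius_m ** 2
print(f"{uv_flux:.3f} W/m^2, {total_uv_power:.2e} W total")
```

The total comes out to a few terawatts, which is the quantitative sense behind "a lot of square meters so the low wattage isn't a big deal."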
{ "domain": "astronomy.stackexchange", "id": 1015, "tags": "life, titan" }
How to Aggregate Multiple Gate Fidelities
Question: The fidelity of a qubit is nicely defined here and gate fidelity as "the average fidelity of the output state over pure input states" (defined here). How can one combine the fidelities of two (or more) gates to get a combined total gate fidelity? As in, if a qubit is operated on by two (or more) gates, how can we calculate the expected fidelity of the qubit (compared to its original state) after being operated on by those gates if all we know is the gate fidelity of each gate? I imagine it is deducible from the definition of qubit fidelity... I haven't been able to figure it out. I also did a lot of searching online and couldn't find anything. I prefer the definition on the Wikipedia page: $F(\rho, \sigma)=\left|\left\langle\psi_{\rho} \mid \psi_{\sigma}\right\rangle\right|^{2}$ for comparing the input state to the output state. It is easy to work with. A solution explained in these terms is much preferred. Answer: I don't know if you can exactly compute the combined total gate fidelity since the noise processes reducing the fidelity of each gate individually might compose in nontrivial ways. However if you know the individual gate fidelities and those fidelities satisfy certain properties, then you can bound the total gate fidelity. This is the "chaining property for fidelity" (e.g. Nielsen and Chuang Section 9.3). Suppose you intend to apply $U_1$ to $\rho$ as the first gate in a sequence, but the actual operation you apply is the CPTP map $\mathcal{E}_1(\rho)$ which is some noisy version of $U_1$. A natural way to measure the error in the operation you applied is: $$ E(U_1, \mathcal{E}_1) = \max_\rho D(U_1 \rho U_1^\dagger, \mathcal{E}_1(\rho)) $$ where $D(\rho, \sigma) = \arccos \sqrt{F(\rho, \sigma)}$ is a possible choice for $D$, but you can use any metric over quantum states.
Finding the maximum distance between $U_1 \rho U_1^\dagger$ and $\mathcal{E}_1(\rho)$ over density matrices $\rho$ tells you the worst possible outcome you can get from your noisy implementation of the gate. Then, if you define the error similarly for $U_2$ and its noisy implementation $\mathcal{E}_2$ then you can guarantee that $$ E(U_2 U_1, \mathcal{E}_2 \circ \mathcal{E}_1) \leq E(U_1,\mathcal{E}_1) + E(U_2, \mathcal{E}_2 ) $$ which says that the worst case error for applying both of your gates is no worse than the sum of the worst case errors for applying the gates individually. Unfortunately the fidelity $F(\rho, \sigma) =\text{Tr}( \rho \sigma)$ that you give isn't a proper metric over states so you can't substitute that into the chaining property above.
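Here is a numerical sketch of the chaining bound (my own illustration). It models noise as a small over-rotation about the gate's own axis, a unitary noise model chosen because the worst-case error of each gate is then exactly $\varepsilon/2$ and the bound is easy to state:

```python
import numpy as np

rng = np.random.default_rng(0)

def rot(axis, angle):
    """Single-qubit rotation exp(-i * angle * (axis . sigma) / 2)."""
    X = np.array([[0, 1], [1, 0]], dtype=complex)
    Y = np.array([[0, -1j], [1j, 0]])
    Z = np.array([[1, 0], [0, -1]], dtype=complex)
    n = np.asarray(axis, dtype=float)
    n = n / np.linalg.norm(n)
    H = n[0] * X + n[1] * Y + n[2] * Z
    return np.cos(angle / 2) * np.eye(2) - 1j * np.sin(angle / 2) * H

def D(psi, phi):
    """The metric D = arccos(sqrt(F)) for pure states, F = |<psi|phi>|^2."""
    return np.arccos(min(1.0, abs(np.vdot(psi, phi))))

# Intended gates, and "noisy" versions that over-rotate by eps; for this model
# the worst-case error E(U, noisy U) equals eps/2.
eps1, eps2 = 0.10, 0.05
U1, E1 = rot([0, 0, 1], 0.7), rot([0, 0, 1], 0.7 + eps1)
U2, E2 = rot([1, 0, 0], 1.1), rot([1, 0, 0], 1.1 + eps2)

bound = eps1 / 2 + eps2 / 2
for _ in range(200):
    psi = rng.normal(size=2) + 1j * rng.normal(size=2)
    psi /= np.linalg.norm(psi)
    lhs = D(U2 @ U1 @ psi, E2 @ E1 @ psi)
    # Chaining: E(U2 U1, E2∘E1) <= E(U1, E1) + E(U2, E2)
    assert lhs <= bound + 1e-9
```

The inequality never fails on random states, which is the content of the chaining property: the combined error is at most the sum of the individual worst-case errors.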
{ "domain": "quantumcomputing.stackexchange", "id": 2259, "tags": "quantum-gate, fidelity" }
How does the number of braces in a balsa wood tower affect the load capacity?
Question: I am deciding between two design ideas for a project in which I have to build a balsa wood tower that can maximize load weight and minimize structural weight (maximize efficiency). I've included a picture with the two designs I am deciding between. I am wondering which design would be able to hold greater weight, whether increasing the number of braces changes the load capacity, and in which way it changes (linear correlation, exponential correlation, etc.). More specifically, I'm considering a load weight of 145 N, and I am looking for the values of the internal member forces, especially those before buckling (assuming a static system). My understanding of this subject only extends to truss calculations, considering diagonal bracing (e.g. Howe Truss), which is why I am posting on this forum. Further, if you'd be willing to give a brief explanation of these diagrams from Bracing for Stability, I'd really appreciate it. Answer: Elements under compression such as the vertical columns in your tower can collapse in two very different ways. The first is via simple crushing of the member. This happens when the applied load generates an internal stress in the member which is higher than the member's strength. The second is via buckling. In this case, infinitesimal imperfections in the structure make it "easier" for the member to bow away from the load. Get a plastic straw or a piece of paper and try to place either under compression. You'll notice they just "jump" to the side. However, once the compression is removed, the straw or piece of paper will "jump" right back into its original shape as if nothing'd happened. 
The theoretical equation for the buckling load (often called Euler buckling) is $$P_e = \dfrac{\pi^2EI}{(kL)^2}$$ where $E$ is the member's modulus of elasticity, $I$ is its second moment of area (aka moment of inertia), and $kL$ is the member's effective length ($L$ is the unbraced length and $k$ is a coefficient which depends on the member's boundary conditions; in your case, you could conservatively assume $k=1$). Obviously, the real-world buckling load is much lower than $P_e$, since real-world imperfections are actually quite significant, not infinitesimal. Braces aren't meant to carry any of the applied load. Their purpose is only to guarantee that the principal members (in this case, the vertical columns) do not buckle under compression. So a properly braced structure will instead collapse due to crushing or global buckling (where the entire structure buckles as if it were one member, which the braces can't help against). To find the unbraced length for your columns, you could rework Euler's buckling load equation to give you $L$ given the other values (which you can find online). However, given how that equation is highly theoretical (and not used directly in actual engineering), it is probably best to merely create prototypes to see at which length the parts start to buckle and then adopt that.
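A small Python sketch of the Euler buckling formula; the material and geometry numbers below are assumed placeholder values for a square balsa strut, not data from the question:

```python
import math

def euler_buckling_load(E, I, L, k=1.0):
    """Theoretical Euler buckling load P_e = pi^2 * E * I / (k*L)^2."""
    return math.pi ** 2 * E * I / (k * L) ** 2

# Illustrative (assumed) numbers for a 1/8" square balsa strut:
E = 3.0e9            # Pa, modulus of elasticity of balsa (rough value)
b = 3.175e-3         # m, side of the square cross-section
I = b ** 4 / 12      # second moment of area of a square section
L = 0.15             # m, unbraced length

P = euler_buckling_load(E, I, L)        # on the order of 10 N for these values
```

Because $P_e \propto 1/L^2$, a brace at mid-height that halves the unbraced length quadruples the theoretical buckling load, which is exactly why braces raise capacity without carrying the applied load themselves.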
{ "domain": "engineering.stackexchange", "id": 1987, "tags": "structural-engineering, design, statics" }
Multithread reading and processing when working with HDF5 files
Question: I have a program that should read and process about 500,000 files in the format hdf5, each of them containing about 400 data points representing the coordinates of carbon atoms in a sheet of graphene. Since I have an HDD, the process of reading is slow and as such I don't want to delay the reading process as it waits for the computation to finish. My idea was moving the reading process onto a new thread so that new files are read all the time and processed whenever they are available in the main thread. My implementation of this was spawning an std::thread for reading and std::moveing a std::vector<std::promise<DataEntry*>> of structs I have defined that are then used in the main thread through their respective futures. Is there a more efficient way of implementing this multithreading? It feels horribly inefficient to have to make 500,000 promises and futures. I suppose it would be easy to delete the futures once used on the main thread but I don't know how I would go about freeing the promises that are still on the reading thread. Apart from that, are there any other ways in general of improving my code, especially in terms of performance? 
utility.h #pragma once #include <vector> #include <future> #include <chrono> #include <string> #include <iostream> namespace porous { struct Timer { high_resolution_clock::time_point start; high_resolution_clock::time_point end; std::string message; Timer(std::string msg) { message=msg; start = high_resolution_clock::now(); } void now() { end = high_resolution_clock::now(); int64_t duration = std::chrono::duration_cast<std::chrono::milliseconds>(end-start).count(); std::cout << message << " took " << duration << "ms\n"; } ~Timer() { end = high_resolution_clock::now(); int64_t duration = std::chrono::duration_cast<std::chrono::milliseconds>(end-start).count(); std::cout << message << " took " << duration << "ms\n"; } }; struct double2 { double x,y; }; struct DataEntry { double energy; int n_points; double2* position; DataEntry(int n):n_points(n) { position = new double2[n_points]; } ~DataEntry() { delete[] position; } }; } typedef std::vector<std::promise<porous::DataEntry*>> promised_entries_t; typedef std::vector<std::future<porous::DataEntry*>> future_entries_t; reader.h #pragma once #include <string> #include <filesystem> #include <chrono> #include <mutex> #include <future> #include <vector> #include <H5Cpp.h> #include "utility.h" using namespace porous; using namespace H5; using std::filesystem::directory_iterator; using std::filesystem::directory_entry; namespace porous { class Reader { private: std::vector<std::string> files; std::thread reader_thread; void threaded_read(promised_entries_t entries); promised_entries_t entry_promises; const char* next(); public: Reader(std::string basedir); future_entries_t read_all(); DataEntry* read(std::string path); //called from worker thread void detach(); }; } reader.cpp #include "reader.h" #include "utility.h" #include <iostream> namespace fs = std::filesystem; using namespace std; using namespace porous; Reader::Reader(string path) { try { Timer t("[Reader] init"); for (auto& entry : fs::directory_iterator(path)) { 
files.push_back(entry.path().string()); } cout << "There are " << files.size() << " to read\n"; } catch (std::exception& e) { cout << e.what() << endl; } } DataEntry* Reader::read(string path) { H5File f; try{ f = H5File(path, H5F_ACC_RDONLY); } catch(...) { cout << "this file did not read:\n" << path << endl; return NULL; } DataSet coords_ds = f.openDataSet("coordinates"); DataSpace coords_space = coords_ds.getSpace(); hsize_t dims1[2]; coords_space.getSimpleExtentDims(dims1,NULL); const int coord_dimension = (int)dims1[0]; DataEntry* entry = new DataEntry(coord_dimension); const DataSet energy_ds = f.openDataSet("energy"); energy_ds.read(&entry->energy, PredType::NATIVE_DOUBLE); hsize_t offset[2] = {0,1}; hsize_t count[2] = {coord_dimension,2}; coords_space.selectHyperslab(H5S_SELECT_SET,count,offset); hsize_t offset_output[2] = {0,0}; hsize_t dim_output[2] = {coord_dimension,2}; DataSpace output_space(2, dim_output); output_space.selectHyperslab(H5S_SELECT_SET,dim_output,offset_output); coords_ds.read(entry->position, PredType::NATIVE_DOUBLE, output_space,coords_space); f.close(); return entry; } void Reader::threaded_read(promised_entries_t entries) { for (int i = 0; i < files.size(); i++) { entries[i].set_value(this->read(files[i])); } } future_entries_t Reader::read_all() { future_entries_t zukunft; for(int i = 0; i < files.size(); i++) { std::promise<DataEntry*> pr; zukunft.push_back(pr.get_future()); entry_promises.push_back(std::move(pr)); } reader_thread = std::thread(&Reader::threaded_read, this, std::move(entry_promises)); return zukunft; } void Reader::detach() { reader_thread.detach(); } main.cpp #include <iostream> #include <string> #include <chrono> #include "utility.h" #include "reader.h" using namespace porous; using namespace std; using namespace std::chrono_literals; int main() { using namespace porous; Reader r("../../big-graphene/test"); future_entries_t dat = r.read_all(); for (auto& fut_dat : dat) { DataEntry* e = fut_dat.get(); 
std::this_thread::sleep_for(1s);//simulate some long computation //cout << e->energy << endl; delete e; } r.detach(); return 0; } Answer: user673679’s answer covers pretty much all of the suggestions I thought to make about the actual code as presented. But it doesn’t answer the primary question: “Is there a more efficient way of implementing this multithreading?” To answer that, I’m going to do a higher-level review, a review not of the actual code, but rather of the design. That’s going to require that I make some guesses and assumptions, because there’s so much code missing, and no real information about what the ultimate goals of the code are supposed be. So this design review is going to be necessarily vague. A note about the C++ standard library (and particularly the threading stuff) Before I begin, I want to give some guidance about the C++ standard library, and particularly the threading sub-library. The C++ standard library is very different from the standard libraries of most popular languages. Most languages try to give you a fully-complete, high-level, universally-usable set of libraries as part of their standard library—basically, most languages want you to use their standard libraries for everything that they cover, and only use third-party libraries for stuff that isn’t important enough to be worthy of being part of their standard library. The upshot of that is that you can do most things “out of the box” in those languages—it’s rare to need a third-party library. The downside is that their standard libraries tend to be HUGE… often there’s only a single implementation because it would be too big a task for anyone to try to re-implement the whole thing. The C++ standard library goes another way. It does not try to be all things to all people. 
Instead, it’s quite spartan, focusing on providing only a small set of vocabulary types, and key facilities that you need to do many/most things, but are either too hard or simply impossible to roll your own portably. You can use the standard library directly if it happens to solve your problem, but the main goal of the C++ standard library is to give you the tools you need to write good, high-level libraries… not to be those libraries itself. In other words, you shouldn’t look at the C++ standard library’s threading stuff and say, “okay, everything I need to solve my problem should be in here”. You should think about what high-level tools you need, and then see what the C++ standard library provides to help you build those tools. Or, even better, find a third-party library that’s already done that. The reason why I explained all this will become clear in a moment. So let’s get started. The review I’d say your intuition—that it’s horribly inefficient to make a half-million promises and futures—is spot on. Of course, if you need to do that, then, well, that’s that; if you need it, you need it, and you just have to accept that it’s gonna take time. But the million-object question here is: do you need those promises and futures? Promises and futures are an excellent way to model values that will be coming from somewhere at any time, but that you don’t immediately need because you can do other stuff if the values aren’t available. That is, you have a task, and you need a value within that task… but not necessarily immediately—you can do other stuff while waiting for that value. That… doesn’t really sound like what you’re doing, does it? You’re not really doing a task where you can do other stuff while waiting on a value. What you’re doing is generating values, and you want to be able to work with them as soon as they come available, even if all the values aren’t ready yet. To me, that sounds like the tool you want is a concurrent queue. 
And this is where we circle back to what I said about the C++ standard library. Unfortunately, there is no concurrent queue in the standard library. (Yet! There’s actually one proposed for the future.) You could write one yourself, using the threading stuff in the standard library, or, better, you can use a third-party library. There are a lot out there; Boost has one, for example. There are even lock-free concurrent queues out there, if you like (though they sometimes come with limitations on use, like only a single producer, or they’re less efficient because they do a lot of dynamic allocation).

So what would the code look like if you used a queue? Well, assuming a queue with an interface like the Boost one (which is similar to what’s proposed for the future C++ standard library), it might look like this:

    // The reader thread function.
    //
    // Pretty simple. Takes the path and a reference to the queue, and just
    // iterates through the items in the path, reading the files, and pushing
    // the data to the queue.
    //
    // Once it's done, closes the queue.
    auto reader_thread_func(std::filesystem::path path, queue_t<DataEntry>& entries)
    {
        for (auto&& p : std::filesystem::directory_iterator{path})
            entries.push(read(p.path()));

        entries.close();
    }

    // The main thread.
    //
    // First, we create the queue.
    auto entries = queue_t<DataEntry>{};

    // Then we create the reader thread.
    auto reader_thread = std::thread{reader_thread_func, "/path/to/files", std::ref(entries)};

    // At this point, the reader thread is busy reading the data files in the
    // background. The data being read will be coming in entry-by-entry.
    //
    // So let's start using the entries as they come in:
    while (true)
    {
        auto entry = DataEntry{};

        if (auto result = entries.wait_pop(entry); result == queue_op_status::success)
        {
            // We just got another entry!
            //
            // Do whatever computation you want with it. I'll just use what you
            // wrote:
            std::this_thread::sleep_for(1s); // simulate some long computation
            // cout << entry.energy << endl;
        }
        else if (result == queue_op_status::empty)
        {
            // The queue is empty, but not closed.
            //
            // This means we're processing entries faster than we're reading
            // them. So we have to wait until an entry becomes available. We'll
            // just have this thread give up its time, and then try again.
            std::this_thread::yield();
        }
        else if (result == queue_op_status::closed)
        {
            // The queue is empty, and closed.
            //
            // We're done.
            break;
        }
    }

    // Clean up.
    reader_thread.join();

There’s no error handling above (it wouldn’t take much to add), but basically, that’s everything you need.

The neat thing about using the right tool for the job is that you usually get additional benefits. In this case, a concurrent queue is really the right tool for the job, and it comes with benefits aplenty. For starters, you no longer have to allocate all the data entries—you can just use them more or less as they come in, and then discard them. That means no need for a half-million element vector. In fact, you can even restrict the maximum number of data entries to read to keep memory use bounded; the reader thread can simply wait until you’ve finished with a data entry and are ready for more. Also, you could even have multiple reader threads—perhaps even a thread pool of reader threads, so you can read like 4 or 8 or more files at a time.

Okay, but I’ve made an assumption here (remember, there’s a lot of guesswork here for me, because you haven’t given enough information). I’m assuming that you don’t actually need all half-million data entry objects at the same time. I’m assuming you can read each entry, then discard it.
But if that assumption is wrong—if you have to read the data entries, do some computation on them one-by-one as they come in, and then later do another computation on them altogether—then you can’t simply use a queue (because with a queue, you’d be discarding each entry after you processed it). But the basic idea still applies. You could use a vector instead of a queue, and do all the concurrency stuff manually. Or, perhaps better, you could make a “concurrent queue view” wrapped around a vector, and use the queue view while the data is coming in, and still have the vector of all entries at the end. What’s “best” depends entirely on the details, and I don’t know the details.

Summary

Promises and futures are the wrong tool for this job. They’re for situations where you’re waiting on a value, but you can do other stuff while you wait. Your situation is that you have a bunch of values coming in, and you want to start using them as soon as possible, without waiting for all the values to finish being read. You’re not really doing anything else while waiting on each value… you’re just waiting on the next value. That situation usually means you want to use a queue.

Unfortunately, the standard library doesn’t have a concurrent queue (yet). So you either have to roll your own, or use a third-party library. That’s pretty normal when it comes to C++ and its standard library. It doesn’t come with everything built-in, and all the bells and whistles you could ever need. It’s pretty spartan, only providing the bare minimum necessary so that actually useful libraries can be built on top of it.

So get/make a concurrent queue, and then just use one thread (or multiple threads!) to read in the data, while another thread (or, again, multiple threads!) keeps pumping the queue for whatever data is available. You’ll spare yourself from having to allocate a half-million promises and a half-million futures, and get a more flexible and powerful abstraction to boot.
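For what it’s worth, the producer/consumer shape described above is easy to prototype before committing to a particular C++ queue library. Here is a minimal Python sketch (my own illustration, not part of the original answer) using the standard library’s thread-safe queue.Queue; since Python’s queue has no close(), a sentinel value stands in for it:

```python
import queue
import threading

SENTINEL = None  # stands in for queue_t::close(), which queue.Queue lacks

def reader_thread_func(paths, entries):
    # Producer: "read" each file and push its data onto the queue.
    for p in paths:
        entries.put(f"data from {p}")  # stand-in for read(p)
    entries.put(SENTINEL)              # signal that no more entries are coming

entries = queue.Queue(maxsize=8)       # bounded, so memory use stays bounded
paths = [f"file_{i}.dat" for i in range(5)]
reader = threading.Thread(target=reader_thread_func, args=(paths, entries))
reader.start()

results = []
while True:
    entry = entries.get()              # blocks until an entry is available
    if entry is SENTINEL:
        break                          # producer is done and queue is drained
    results.append(entry)              # per-entry computation goes here

reader.join()
```

A bounded maxsize plays the same role as restricting the number of in-flight entries mentioned above: the producer blocks when the consumer falls behind, keeping memory use in check.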
{ "domain": "codereview.stackexchange", "id": 40073, "tags": "c++, performance, multithreading, processing" }
How to use opencv-contrib modules in ROS Indigo?
Question: I am using ROS Indigo on Ubuntu 14.04. I installed OpenCV 3 for ROS with apt-get:

    sudo apt-get install ros-indigo-opencv3

However, I would like to use modules from the opencv_contrib repository (https://github.com/Itseez/opencv_contrib.git). Is there an easy way to install and use these within ROS? Initially, I had a standalone OpenCV 3 installation with the contrib modules, but I had to uninstall it because of referencing issues when using ROS.

EDIT: I think the installation of OpenCV 3 is discussed in point 5 of http://wiki.ros.org/vision_opencv. I actually ran this script, but I'm not sure what to do after this.

Originally posted by donald on ROS Answers with karma: 61 on 2015-10-23

Post score: 0

Answer: I can use some contrib modules in ROS Indigo, such as dnn and rgbd, but on the other hand sfm is not available.

Originally posted by francisco.dominguez@urjc.es with karma: 16 on 2016-06-24

This answer was ACCEPTED on the original site

Post score: 0

Original comments

Comment by vishal@leotechsa on 2018-10-02: how do you use the contrib modules in ROS Indigo?
{ "domain": "robotics.stackexchange", "id": 22827, "tags": "ros, opencv, vision-opencv, ros-indigo" }
Training Deep Nets on an Ordinary Laptop
Question: Would it be possible for an amateur who is interested in getting some "hands-on" experience in designing and training deep neural networks to use an ordinary laptop for that purpose (no GPU), or is it hopeless to get good results in reasonable time without a powerful computer/cluster/GPU? To be more specific, the laptop's CPU is an Intel Core i7 5500U, fifth generation, with 8GB RAM.

Now, since I haven't specified what problems I would like to work on, I'll frame my questions in a different way: which deep architectures would you recommend that I try to implement with my hardware, such that the following goal is achieved: acquiring intuition and knowledge about how and when to use techniques that were introduced in the past 10 years and were essential to the rise of deep nets (such as understanding of initialisations, drop-out, rmsprop, just to name a few).

I read about these techniques, but of course without trying them out myself I wouldn't know exactly how and when to implement them in an effective way. On the other hand, I'm afraid that if I try using a PC which isn't strong enough, then my own learning rate will be so slow that it would be meaningless to say that I've acquired any better understanding. And if I try using these techniques on shallow nets, maybe I wouldn't be building the right intuition.

I imagine the process of (my) learning as follows: I implement a neural net, let it train for up to several hours, see what I've got, and repeat the process. If I do this once or twice a day, I would be happy if after, say, 6 months I will have gained practical knowledge which is comparable to what a professional in the field should know.

Answer: Yes, a laptop will work just fine for getting acquainted with some deep learning projects: you can pick a smallish deep learning problem and gain some tractable insight using a laptop, so give it a try.
The Theano project has a set of tutorials on digit recognition that I've played with and modified on a laptop. Tensorflow also has a set of tutorials. I let some of the longer runs go overnight, but nothing was intractable.

You might also consider availing yourself of AWS or one of the other cloud services. For 20-30 dollars you can perform some of the bigger calculations in the cloud on some sort of elastic computing node. The secondary advantage is that you can then list AWS or other cloud services as a skill on your resume :-)

Hope this helps!
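As a taste of what "trying these techniques out" can look like even without a GPU, here is a tiny sketch of inverted dropout (one of the techniques the question mentions) in plain Python. This is my own illustration, not part of the answer; real code would use numpy or a framework:

```python
import random

def inverted_dropout(activations, keep_prob, rng):
    # Inverted dropout: randomly zero each unit with probability
    # 1 - keep_prob, and scale the survivors by 1 / keep_prob so the
    # expected activation is unchanged (no rescaling needed at test time).
    return [a / keep_prob if rng.random() < keep_prob else 0.0
            for a in activations]

rng = random.Random(42)
activations = [1.0] * 100_000
dropped = inverted_dropout(activations, keep_prob=0.8, rng=rng)

zero_fraction = dropped.count(0.0) / len(dropped)
mean = sum(dropped) / len(dropped)
# zero_fraction comes out close to 0.2, while the mean stays close to the
# original 1.0 -- which is exactly the point of the 1/keep_prob scaling.
```

Experiments at this scale run in milliseconds on any laptop, which is the sense in which intuition-building is tractable without special hardware.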
{ "domain": "datascience.stackexchange", "id": 812, "tags": "machine-learning, deep-learning" }
Questions on the movie Gravity (2013)
Question: To start off, I'm not a physicist but a programmer, and I only had a few years of physics education in high school, so I'm sorry if I'm asking a stupid question. I just finished watching the movie Gravity (2013) and I found online already a whole list of things wrong with the movie, but there are two things that I think are wrong that I couldn't find:

The premise of the movie is that a satellite is hit by a missile, which creates debris, setting off a chain reaction leading to the catastrophic destruction of space stations and space shuttles. Is it true that debris can stay in orbit at the same height as space stations, while having a very large relative speed to the space stations? It was my understanding that if the velocity of objects changes, then so does their orbit altitude (given a circular orbit), right?

In the end, the main character gets to the Chinese space station, which is already deorbiting although it seems structurally intact. Is it possible for the intact station to deorbit, simply by getting hit a little bit (it couldn't have been much since it was still mostly intact)?

Answer: Is it true that debris can stay in orbit at the same height as space stations, while having a very large relative speed to the space stations?

Most LEO objects orbit in the same direction, namely the direction the Earth rotates. This is done to take advantage of the initial speed supplied by the Earth's rotation. So the main velocity difference would likely be due to different inclinations (i.e., not in the same plane) or elliptical vs. circular orbits.

Let's make it simple and assume that two objects, E and I, both have circular orbits at the same altitude, thus they have the same orbital speed. However, let us assume object I has an inclined orbit at some angle $\alpha$ to the equatorial plane and that they orbit in the same azimuthal direction.
Now when we move into the frame of, say, the equatorial orbiting object, the inclined object will appear to be coming from ahead of you. If the inclined orbit is from south-to-north (north-to-south), then object I would appear to be coming at you from the south (north) and from ahead of you. This is just from the vector subtraction, where:

$$ \mathbf{V}_{I} - \mathbf{V}_{E} = -\mathbf{V}_{rel} \tag{1} $$

or

$$ \mathbf{V}_{E} = \mathbf{V}_{I} + \mathbf{V}_{rel} \tag{2} $$

where $\lvert \mathbf{V}_{E} \rvert = \lvert \mathbf{V}_{I} \rvert$ and we have defined:

$$ \begin{align} \mathbf{V}_{E} & \sim V_{E} \ \hat{\mathbf{x}} \tag{3a} \\ \mathbf{V}_{I} & \sim V_{I} \left( \cos{\alpha} \ \hat{\mathbf{x}} + \sin{\alpha} \ \hat{\mathbf{y}} \right) \tag{3b} \end{align} $$

Note: I really should be doing this in spherical trigonometry, but let's only concern ourselves with the immediate region of interaction so we can take the limit as the radius of curvature goes to infinity.

As you can see, $\mathbf{V}_{rel}$ is given by:

$$ \mathbf{V}_{rel} = V_{E} \left[ \left( 1 - \cos{\alpha} \right) \hat{\mathbf{x}} - \sin{\alpha} \ \hat{\mathbf{y}} \right] \tag{4} $$

which shows that $\lim_{\alpha \rightarrow 0} \mathbf{V}_{rel} = 0$, as expected. The maximum of $\lvert \mathbf{V}_{rel} \rvert$ occurs in the limit as $\alpha \rightarrow \pi/2$, which is $\sim \sqrt{2} \ V_{E}$.

The typical orbital speed of a LEO object is ~7-8 km/s (or ~25,200-28,800 kph = ~15,700-17,900 mph), so the maximum impact speed is going to be ~10-11 km/s (or ~35,600-40,700 kph = ~22,100-25,300 mph). So the short answer to your question is yes. If the objects were orbiting in opposite directions, then the maximum of $\lvert \mathbf{V}_{rel} \rvert$ would increase, of course. Regardless, unless $\alpha \ll 1$, $\lvert \mathbf{V}_{rel} \rvert$ is going to be large for all intents and purposes.
Is it possible for the intact station to deorbit, simply by getting a little bit hit (it couldn't have been much since it was still mostly intact)? I am going to guess no because of my response to the previous part. A space station has a great deal of linear and angular momentum. To exert enough force and torque to change those would require high impact speeds. Since most of the space station's outer hull is very thin, the end result would not be the entire object being "shoved" to a different altitude. Rather, the impacting object would probably just tear through the station and/or ablate on impact. More Important Note In the movie, the characters are able to see the objects moving towards them. If we assume that the objects are moving at, say, $\alpha \sim \pi/6$ to the space station's plane, then $\lvert \mathbf{V}_{rel} \rvert \sim \sqrt{2 - \sqrt{3}} V_{E}$. If we assume the same values as above for LEO orbits, then $\lvert \mathbf{V}_{rel} \rvert$ ~3.6-4.1 km/s (or ~13,000-14,900 kph = ~8,100-9,200 mph). These values correspond to Mach ~ 10-12 at sea level. For comparison, the muzzle velocity of a bullet from a high powered rifle varies from Mach ~ 1.8-3.6, depending on the caliber and model. I am inclined to think that were this situation to really occur, a human would not be able to see the incident objects unless they were very large because of the lighting conditions and the speed at which the objects were moving. A more accurate portrayal would have shown parts of the space station just disappearing and/or exploding. There would not have been the nice, slow moving objects flying by that destroyed everything. Update When we look at the full range of muzzle velocities from a list of cartridges, we find handguns/pistols range from ~304–515 m/s and rifles range from ~600–1392 m/s. I plotted the relative speed between two orbiting objects as a function of inclination (i.e., Equation 4 above) shown in the following image. 
As shown, after even only ~3-4 degrees the speed differences exceed most handgun round muzzle speeds and after ~11 degrees nearly all civilian rifle rounds.
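The plotted curve is just the magnitude of Equation (4), $\lvert \mathbf{V}_{rel} \rvert = V \sqrt{2 - 2\cos{\alpha}}$. A quick way to reproduce the figures quoted in the answer (a sketch; the 7.5 km/s orbital speed is simply a value in the middle of the 7-8 km/s LEO range, my choice rather than the answer's):

```python
import math

def relative_speed(v_orbit, alpha):
    # Magnitude of Equation (4): two same-altitude circular orbits
    # inclined at angle alpha have |V_rel| = v * sqrt(2 - 2*cos(alpha)).
    return v_orbit * math.sqrt(2.0 - 2.0 * math.cos(alpha))

v = 7.5  # km/s, in the middle of the ~7-8 km/s LEO range quoted above

head_on_max = relative_speed(v, math.pi / 2)  # alpha = 90 deg: sqrt(2) * v
movie_case = relative_speed(v, math.pi / 6)   # alpha = 30 deg
# head_on_max comes out around 10.6 km/s and movie_case around 3.9 km/s,
# consistent with the ~10-11 km/s and ~3.6-4.1 km/s figures in the answer.
```

Sweeping alpha over a few degrees with this function also reproduces the handgun/rifle comparison: the relative speed passes typical muzzle velocities after only a few degrees of inclination.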
{ "domain": "physics.stackexchange", "id": 26833, "tags": "newtonian-mechanics, angular-momentum, orbital-motion, satellites" }
Mechanical energy and fluids
Question: Suppose we have two tanks connected by a tube with a tap in the middle. One tank is filled with water to a height of $h$ and the other is empty. The tap is then opened and the water 'levels itself'. I had problems with understanding how such a system brings about a loss of $\frac{mgh}{4}$ in the water's potential energy, but I do understand very well now. However, isn't the water here a body under the sole effect of gravity? Shouldn't, provided no energy is lost to the surroundings or converted to internal energy, mechanical energy be conserved in such a way that the loss in K.E. is gain in P.E. and vice versa? That is, the kinetic energy used to transport the water will all eventually be turned into P.E, provided no energy 'losses' occur. I know that every body strives to reach a state of low potential energy, which is why water flattens out in contact with certain surfaces, but I'm still confused as to what becomes of the lost P.E. and would appreciate help. Thanks. Answer: If you had an ideal fluid (with zero viscosity), then the difference in potential energy would appear as kinetic energy of the fluid (mostly the one in the originally empty tank). In other words, you would have some sort of fluid motion in that tank (probably some large-scale vortices, and possibly others, depending on the parameters of the experiment; you could also have turbulent flow, generating a cascade of smaller and smaller-scale vortices) which, in the absence of viscosity, would persist forever. With viscosity, kinetic energy is going to be dissipated and turned into heat and any motion would eventually cease in the asymptotic limit. And, yes, again depending on exactly how this experiment is conducted, you could have an oscillating solution where potential energy is converted into kinetic energy and back into potential energy, analogous to what happens with a mechanical pendulum. 
However, in most cases your fluid system has many, many more degrees of freedom than a simple mechanical pendulum, and energy will consequently spread within a far more complex, potentially even infinite-dimensional configuration space.
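For reference, the $\frac{mgh}{4}$ figure from the question follows from tracking centers of mass, assuming two identical tanks of uniform cross-section. A quick check (my own sketch, not part of the answer):

```python
def column_potential_energy(mass, height, g=9.81):
    # A uniform column of water has its center of mass at half its height.
    return mass * g * (height / 2.0)

m, h, g = 1.0, 1.0, 9.81

# Before opening the tap: all the water (mass m) in one tank, height h.
pe_before = column_potential_energy(m, h, g)

# After levelling: mass m/2 in each of the two tanks, each of height h/2.
pe_after = 2.0 * column_potential_energy(m / 2.0, h / 2.0, g)

loss = pe_before - pe_after
# loss equals m*g*h/4: this is the energy that the answer says must end up
# as kinetic energy and, with viscosity, eventually as heat.
```
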
{ "domain": "physics.stackexchange", "id": 36472, "tags": "newtonian-mechanics, classical-mechanics, energy, gravity" }
Why don't photons split up into multiple lower energy versions of themselves?
Question: A photon could spontaneously split up into two or more versions of itself, and all the conservation laws I'm aware of would not be violated by this process. (I think.) I've given this some thought, and a system consisting of multiple lower energy photons would have a significantly higher number of micro-states (and consequently higher entropy) than one consisting of a single photon with that much energy. This would make the process more favorable. Why does this not happen?

Answer: After the hypothetical split, two photons with the same energy would be propagating at an angle, which is fine as far as momentum conservation goes. But then there would be a rest frame in which the angle between them is 180 degrees. Now, if you stay in this rest frame and go back in time to before the split, your single photon would be at rest. However, that is not possible: according to relativity, the speed of light is constant in all frames. Thus, there can be no split of a single photon into two in vacuum (i.e. without momentum transfer during the split).

Mathematically, the reason is that the Lorentz group is non-compact, which means that the parameter $\gamma$ can take any value from $[1, \infty)$ but not infinity itself, which would correspond to a coordinate frame moving at light speed with all massive particles having infinite kinetic energy.
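The same conclusion can be reached directly from four-momentum conservation: a photon's invariant mass is zero, while any pair of non-collinear photons has $M^2 = 2E_1 E_2 (1 - \cos{\theta}) > 0$, so the pair cannot carry the four-momentum of a single photon unless $\theta = 0$. A small numeric sketch of this (units with $c = 1$; my own illustration, not from the answer):

```python
import math

def invariant_mass_squared(photons):
    # photons: list of (E, px, py) with c = 1 and E = |p| for each photon.
    # The invariant mass squared of the system is E_total^2 - |p_total|^2.
    E = sum(p[0] for p in photons)
    px = sum(p[1] for p in photons)
    py = sum(p[2] for p in photons)
    return E * E - (px * px + py * py)

def photon(E, angle):
    # A massless particle: momentum magnitude equals its energy.
    return (E, E * math.cos(angle), E * math.sin(angle))

# A single photon is massless:
single = invariant_mass_squared([photon(2.0, 0.0)])

# Two photons at a nonzero opening angle have positive invariant mass,
# so they cannot come from the split of one photon:
pair_at_angle = invariant_mass_squared([photon(1.0, 0.1), photon(1.0, -0.1)])

# Only the degenerate collinear case keeps the invariant mass at zero:
pair_collinear = invariant_mass_squared([photon(1.0, 0.0), photon(1.0, 0.0)])
```
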
{ "domain": "physics.stackexchange", "id": 94777, "tags": "electromagnetism, photons, entropy, quantum-electrodynamics" }
What is the Hilbert dimension of a Fock space?
Question: Quantum field theory in curved spacetimes is often described in the algebraic approach, which consists of describing observables as elements of a certain $*$-algebra. To recover the notion of a Hilbert space, one represents this algebra as operators acting on said Hilbert space. Given a state on the algebra, the GNS construction allows one to obtain a particular representation of this algebra. Given two different states, such as the Minkowski and Rindler vacua in Minkowski spacetime, it might happen that the representations are not unitarily equivalent. What I find curious, though, is the existence of the following theorem in Functional Analysis (see Kreyszig's Introductory functional analysis with applications Theorem 3.6-5) Two Hilbert spaces $H$ and $\tilde{H}$, both real or both complex, are isomorphic if and only if they have the same Hilbert dimension. The Hilbert dimension is the cardinality of an orthonormal basis of the Hilbert space. Now my question is: how to make sense of this? For example, both the Minkowski and Rindler vacua lead to representations in Fock spaces. Isn't a Fock space always separable, and hence has Hilbert dimension $\aleph_0$? Shouldn't then any two Fock space representations be unitarily equivalent? In particular, shouldn't the Minkowski and Rindler vacua lead to unitarily equivalent representations? Why don't they? Answer: You need to distinguish two different notions of isomorphism here: An isomorphism of Hilbert spaces and an isomorphism of representations of algebras on Hilbert spaces. All Hilbert spaces of the same cardinality are isomorphic as Hilbert spaces, and indeed the usual Fock spaces of quantum field theory are all separable and infinite-dimensional, i.e. have the same cardinality. But what you're looking at are not merely "Hilbert spaces", but representations of the algebra of canonical commutation relations between the quantum fields of your theory. 
In addition to an isomorphism of Hilbert spaces $U : H_1 \to H_2$, for unitary equivalence of representations we have two representations $\pi_i : A\to \mathfrak{gl}(H_i)$ (where $A$ is the algebra of fields or any other equivalent presentation of the CCR), and $U$ is required to be an intertwiner between these, i.e. $$ \pi_2(a)U = U\pi_1(a)$$ for all $a\in A$. The statement that certain spaces like the Minkowski and Rindler Fock spaces are not unitarily equivalent is not about the non-existence of an isomorphism of Hilbert spaces $U$; it's about no such $U$ fulfilling the intertwiner condition.
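A finite-dimensional toy version of this distinction (my own illustration, not part of the answer): take $H_1 = H_2 = \mathbb{C}^2$ with $\pi_1(a) = \mathrm{diag}(1, 0)$ and $\pi_2(a) = \mathrm{diag}(2, 0)$. The Hilbert spaces are trivially isomorphic, but for diagonal matrices the intertwiner condition reads entrywise $d_2[i]\,U[i][j] = d_1[j]\,U[i][j]$, forcing $U[i][j] = 0$ wherever $d_2[i] \neq d_1[j]$:

```python
def allowed_entries(d2, d1):
    # For diagonal pi2(a) = diag(d2) and pi1(a) = diag(d1), the condition
    # pi2(a) U = U pi1(a) says d2[i]*U[i][j] == d1[j]*U[i][j] entrywise,
    # so U[i][j] may be nonzero only where d2[i] == d1[j].
    return [[d2[i] == d1[j] for j in range(len(d1))] for i in range(len(d2))]

# pi1(a) = diag(1, 0), pi2(a) = diag(2, 0):
allowed = allowed_entries([2.0, 0.0], [1.0, 0.0])
# allowed == [[False, False], [False, True]]: the entire first row of U is
# forced to zero, so U cannot be unitary (or even invertible). The Hilbert
# spaces are isomorphic, yet the representations are not unitarily equivalent.
```

The infinite-dimensional Fock-space case is of course far subtler, but the mechanism is the same: an isomorphism of spaces exists, while an isomorphism compatible with the algebra action does not.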
{ "domain": "physics.stackexchange", "id": 99261, "tags": "quantum-field-theory, hilbert-space, mathematical-physics, qft-in-curved-spacetime" }
List of major Open Problems in Computational Complexity and their Likelihood?
Question: I remember reading an article/paper (or perhaps a talk, most probably by Scott Aaronson) where he lists the major open problems and their likelihood of being true or false in a table/graph. This is listed along with the 'surprise factor' of each result if it's true/false. I am unable to locate the article though. I wonder if someone remembers the article and can help. I am aware of a similar one by Ryan Williams but I am looking for the other one.

P.S. I know it's a silly request. Apologies. But, still need it.

Answer: Perhaps you are looking for the diagram on slide 12 of this talk by Scott Aaronson. Scott has given the talk many times, and not all versions contain the slide. Note that except for P vs NP, it does not contain any open problems, but apart from that, it appears to match your description.
{ "domain": "cs.stackexchange", "id": 14093, "tags": "complexity-theory, reference-request" }