error: "invoking make failed"
Question: Hello, when I run catkin_make I get the following errors:

Scanning dependencies of target gira
[100%] Building CXX object prueba/CMakeFiles/gira.dir/src/gira.cpp.o
Linking CXX executable /home/turtlebot/catkin_ws/devel/lib/proyecto/gira
CMakeFiles/gira.dir/src/gira.cpp.o: In function `main':
gira.cpp:(.text+0x55): undefined reference to `ros::init(int&, char**, std::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, unsigned int)'
gira.cpp:(.text+0xa5): undefined reference to `ros::NodeHandle::NodeHandle...

Any idea what could be the mistake? Thanks.

The executable gira.cpp:

#include "ros/ros.h"
#include "geometry_msgs/Twist.h"

int main(int argc, char **argv)
{
  ros::init(argc, argv, "gira");
  ros::NodeHandle n;
  ros::Publisher vel_pub_ = n.advertise<geometry_msgs::Twist>("cmd_vel", 1);
  geometry_msgs::Twist vel;
  ros::Rate loop_rate(10);
  while (ros::ok())
  {
    vel.linear.x = 0.1;   // velocidad de avance (forward speed)
    vel.angular.z = 0.3;  // velocidad de giro (turning speed)
    vel_pub_.publish(vel);
    loop_rate.sleep();
  }
  return 0;
}

The CMakeLists.txt (catkin-specific configuration and build):

cmake_minimum_required(VERSION 2.8.3)
project(prueba)
find_package(catkin REQUIRED COMPONENTS
  geometry_msgs
  image_transport
  nav_msgs
  roscpp
  sensor_msgs
  std_msgs
)
catkin_package(
  INCLUDE_DIRS include
  LIBRARIES prueba
  CATKIN_DEPENDS geometry_msgs image_transport nav_msgs roscpp sensor_msgs std_msgs turtlebot_node
  DEPENDS system_lib
)

## Specify additional locations of header files
## Your package locations should be listed before other locations
include_directories(include ${catkin_INCLUDE_DIRS})

## Declare a cpp library
add_library(prueba src/${PROJECT_NAME}/prueba.cpp)

## Declare a cpp executable
add_executable(prueba_node src/prueba_node.cpp)
add_executable(gira src/gira.cpp)

## Add cmake target dependencies of the executable/library
## as an example, message headers may need to be generated before nodes
add_dependencies(prueba_node prueba_generate_messages_cpp)

## Specify libraries to link a library or executable target against
target_link_libraries(prueba_node ${catkin_LIBRARIES})

Originally posted by albarranco on ROS Answers with karma: 11 on 2013-07-15
Post score: 0

Original comments
Comment by Asfandyar Ashraf Malik on 2013-07-15: It seems that the problem is with your code. Without the code, it's quite hard to figure out the solution to this problem.

Answer: Undefined references indicate linker problems. It seems like the linker cannot find any ROS-related symbols. Are you sure you have your package.xml and CMakeLists.txt set up properly?

Originally posted by ipso with karma: 1416 on 2013-07-15
This answer was ACCEPTED on the original site
Post score: 0

Original comments
Comment by albarranco on 2013-07-16: I've only added add_executable(gira src/gira.cpp) to CMakeLists.txt. Do I have to change something else in the CMakeLists.txt or package.xml?
Comment by ipso on 2013-07-16: Then please post the contents of your CMakeLists.txt (add it to your original question).
Comment by albarranco on 2013-10-18: Solved, I forgot to add target_link_libraries(gira ${catkin_LIBRARIES}) to CMakeLists.txt. Thanks
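Spelled out, the fix from the final comment: every executable that uses roscpp needs its own target_link_libraries line, mirroring what the file already does for prueba_node (a sketch of the relevant CMakeLists.txt lines):

```cmake
add_executable(gira src/gira.cpp)
# Without this line the linker never sees the roscpp/geometry_msgs libraries,
# which is exactly what produces "undefined reference to ros::init(...)".
target_link_libraries(gira ${catkin_LIBRARIES})
```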
{ "domain": "robotics.stackexchange", "id": 14932, "tags": "ros" }
Telling grep to treat N as [ATCG]
Question: Okay, so I'm using grep to try and get a preview of some trimming operations that are not going as expected. Let's say that my sequence in the FastQ file is:

ATNGCNATCG

What I want to do is:

grep "ATCGCTATCG" my.fastq

...and match the sequence given above. Surely there is some way, or some existing tool that I can use, besides doing:

grep "[A|N][T|N][C|N][G|N][C|N]...etc."

Answer: If you want to stick to grep, use a scripting language such as Perl to generate the regex programmatically. For example:

perl -le 'print join "", map "[${_}N]", split //, $ARGV[0];' ATCGCTATCG

Prints:

[AN][TN][CN][GN][CN][TN][AN][TN][CN][GN]

You can use it in grep like so:

grep '[AN][TN][CN][GN][CN][TN][AN][TN][CN][GN]' <<< ATNGCNATCG

Prints:

ATNGCNATCG

If that works for you, you could make it into a little bash function that also runs the grep. Add these lines to your ~/.bashrc:

grepN(){
    seq="$1"
    file="$2"
    pattern=$(perl -le 'print join "", map "[${_}N]", split //, $ARGV[0];' "$seq")
    grep "$pattern" "$file"
}

You can now run:

grepN ATCGCTATCG my.fastq

Of course, this is not a good idea since the sequence might be split across different lines, but that's what you were doing originally.
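The same pattern generation done by the Perl one-liner can be sketched in Python, in case that's more familiar (a hypothetical helper, not from the original thread):

```python
import re

def n_tolerant_pattern(seq):
    """Turn ATCG... into [AN][TN][CN][GN]... so each position also matches N."""
    return "".join("[{}N]".format(base) for base in seq)

pattern = n_tolerant_pattern("ATCGCTATCG")
print(pattern)                                  # [AN][TN][CN][GN][CN][TN][AN][TN][CN][GN]
print(bool(re.search(pattern, "ATNGCNATCG")))   # True
```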
{ "domain": "bioinformatics.stackexchange", "id": 1764, "tags": "fastq, trimming, grep" }
Subscribe from C++ external file
Question: Hi! I am a new user of Gazebo, and maybe this is a trivial question, but I was trying to subscribe to a topic from a C++ file, so I basically followed the custom messages tutorial. If I have a plugin and want to subscribe to a topic, the line that I need is:

subJointStates = nodeController->Subscribe("~/robot/JointState", &UpdateJointState, this);

When I tried to subscribe from the C++ file, I couldn't use "this", since I don't have a class. I have tried replacing "this" with a boolean value, because the declaration of Subscribe wants a boolean. I tried both of these:

subJointStates = nodeController->Subscribe("~/robot/JointState", &UpdateJointState, true);
subJointStates = nodeController->Subscribe("~/robot/JointState", &UpdateJointState, false);

But it looks like it's not working. Should I try a different way to subscribe from an external C++ file?

Thank you,
Victor

Originally posted by vcparedesc on Gazebo Answers with karma: 43 on 2013-02-15
Post score: 0

Answer: My mistake!! Sorry, I realized that "~/robot/JointState" resolves to `gazebo/<node_parent_name>/robot/JointState`. However, I was in another node, so "~/robot/JointState" was pointing to `gazebo/<another_node_name>/robot/JointState`. Just in case a beginner like me gets confused by the same simple error.

Originally posted by vcparedesc with karma: 43 on 2013-02-15
This answer was ACCEPTED on the original site
Post score: 1

Original comments
Comment by Bharadwaj Ramesh on 2013-05-07: Hi, I am trying to write a simple subscriber. I am pretty much a beginner with Gazebo and with C++. Here is my question: http://answers.gazebosim.org/question/2676/simple-subscribing-plugin/ Can you please help me with it? Any help is appreciated. Thanks
{ "domain": "robotics.stackexchange", "id": 3043, "tags": "gazebo" }
Simple 3 layers Neural Network cannot be trained
Question:

# 3-layer neural network
from __future__ import division  # must be the first statement in the file
import numpy as np

def nonlin(x, deriv=False):  # activation function
    if deriv:
        return np.exp(x)/(1+np.exp(x))**2
    return 1/(1+np.exp(-x))

X = np.array([[0,0,1],
              [0,1,1],
              [1,1,1],
              [1,0,0],
              [0,1,0],
              [1,1,0]])
Y = np.array([[0,1,0,1,1,0]]).T

np.random.seed(1)
l0 = X
syn0 = 2*np.random.random((3,30)) - 1
syn1 = 2*np.random.random((30,1)) - 1

for i in xrange(60000):
    l0 = X
    l1 = nonlin(np.dot(l0, syn0))
    l2 = nonlin(np.dot(l1, syn1))
    l2_error = Y - l2
    l2_delta = l2_error * nonlin(l2, deriv=True)
    l1_error = l2_delta.dot(syn1.T)
    l1_delta = l1_error * nonlin(l1, deriv=True)
    syn0 += l0.T.dot(l1_delta)
    syn1 += l1.T.dot(l2_delta)

print l2

I have been messing with the neural network implementation at https://iamtrask.github.io/2015/07/12/basic-python-network/

This is the output of the code:

[[1.85572928e-04]
 [9.99755942e-01]
 [5.21248255e-09]
 [9.99767481e-01]
 [9.99963580e-01]
 [2.07334909e-04]]

I expect something like Y:

[[0]
 [1]
 [0]
 [1]
 [1]
 [0]]

What could have possibly gone wrong here?

Answer: The results you are getting are the following:

[[1.85572928e-04]  = 0.000185572928       ~ 0
 [9.99755942e-01]  = 0.999755942          ~ 1
 [5.21248255e-09]  = 0.000000000521248255 ~ 0
 [9.99767481e-01]  = 0.999767481          ~ 1
 [9.99963580e-01]  = 0.999963580          ~ 1
 [2.07334909e-04]] = 0.000207334909       ~ 0

These are indeed very close to your expected results. You are computing and predicting floating point numbers, not binary zeros and ones. You could, for example, add a simple rule that accepts values below a threshold as zero and above the threshold as one.
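The thresholding rule suggested in the answer can be written in one NumPy line (0.5 is an assumed cutoff; the answer does not specify one):

```python
import numpy as np

# The network's sigmoid outputs from the question
l2 = np.array([[1.85572928e-04],
               [9.99755942e-01],
               [5.21248255e-09],
               [9.99767481e-01],
               [9.99963580e-01],
               [2.07334909e-04]])

# Collapse each sigmoid output to a hard 0/1 label at the 0.5 cutoff
labels = (l2 > 0.5).astype(int)
print(labels.ravel())  # [0 1 0 1 1 0]
```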
{ "domain": "datascience.stackexchange", "id": 3336, "tags": "machine-learning, python, neural-network" }
Why can the $1$-point correlation function be made to vanish?
Question: The $1$-point correlation function in any theory, free or interacting, can be made to vanish by a suitable rescaling of the field $\phi$. I would like to understand this statement. With the above goal in mind, consider the following theory: $$\mathcal{L} = \frac{1}{2}\left((\partial\phi)^{2}-m^{2}\phi^{2}\right)+\frac{g}{2}\phi\partial^{\mu}\phi\partial_{\mu}\phi.$$ What criteria (on the Lagrangian $\mathcal{L}$) is used to determine the value of the field $\phi_{0}$ such that the transformation $\phi \rightarrow \phi + \phi_{0}$ leads to a vanishing $1$-point correlation function $$\langle \Omega | \phi(x)| \Omega \rangle=0~?$$ Answer: The 1-point function is constant in spacetime because of translation invariance, i.e. $\langle \phi(x)\rangle = \phi_0\in\mathbb{R}$ for all $x\in\mathbb{R}^4$. Obviously, the 1-point function of $\phi'(x) := \phi(x) - \phi_0$ is zero since the expectation value is linear. So $\phi\mapsto \phi' = \phi - \phi_0$ gets rid of the non-zero 1-point function. This works for all Poincaré-invariant Lagrangians.
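Written out, the linearity step in the answer is just (assuming the vacuum is normalized, $\langle\Omega|\Omega\rangle = 1$):

```latex
\langle \Omega|\phi'(x)|\Omega\rangle
  = \langle \Omega|\phi(x)|\Omega\rangle - \phi_0\,\langle\Omega|\Omega\rangle
  = \phi_0 - \phi_0
  = 0.
```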
{ "domain": "physics.stackexchange", "id": 35454, "tags": "quantum-field-theory, lagrangian-formalism, renormalization, vacuum, correlation-functions" }
Groovy script failed in Krel_reconfigure-jobs
Question: Hi, I was trying to set up a buildfarm for my group. I have run import_upstream successfully. Then I triggered "Krel_reconfigure-jobs". It took a lot of time to finish configuring tons of jobs. After this, the job failed when it ran the generated groovy files, complaining that it was unable to resolve imports. The tail of the log is pasted below. I was also thinking about writing a tutorial about some of the problems I met configuring the buildfarm (https://gist.github.com/prclibo/90fc9587f8382c069a3fb525291d2a39). Maybe after I succeed in setting up the buildfarm I can finish this.

......
18:06:17 Configuration for jobs: Ksrc_uX__zeroconf_avahi_suite__ubuntu_xenial__source, Kbin_uX64__zeroconf_avahi_suite__ubuntu_xenial_amd64__binary
18:06:17 Configuration for jobs: Ksrc_dJ__zeroconf_avahi_suite__debian_jessie__source, Kbin_dJ64__zeroconf_avahi_suite__debian_jessie_amd64__binary
18:06:21 Writing groovy script '/tmp/reconfigure_jobs/reconfigure_jobs.groovy' to reconfigure 4 views and 5704 jobs
18:06:22 + echo # END SECTION
18:06:22 # END SECTION
18:06:23 ERROR: Build step failed with exception
18:06:23 org.codehaus.groovy.control.MultipleCompilationErrorsException: startup failed:
18:06:23 Script1.groovy: 12: unable to resolve class org.apache.xml.serialize.OutputFormat
18:06:23  @ line 12, column 1.
18:06:23 import org.apache.xml.serialize.OutputFormat
18:06:23 ^
18:06:23
18:06:23 Script1.groovy: 13: unable to resolve class org.apache.xml.serialize.XMLSerializer
18:06:23  @ line 13, column 1.
18:06:23 import org.apache.xml.serialize.XMLSerializer
18:06:23 ^
18:06:23
18:06:23 Script1.groovy: 1: unable to resolve class difflib.DiffUtils
18:06:23  @ line 1, column 1.
18:06:23 import difflib.DiffUtils
18:06:23 ^
18:06:23
18:06:23 3 errors
18:06:23
18:06:23 at org.codehaus.groovy.control.ErrorCollector.failIfErrors(ErrorCollector.java:310)
18:06:23 at org.codehaus.groovy.control.CompilationUnit.applyToSourceUnits(CompilationUnit.java:946)
18:06:23 at org.codehaus.groovy.control.CompilationUnit.doPhaseOperation(CompilationUnit.java:593)
18:06:23 at org.codehaus.groovy.control.CompilationUnit.compile(CompilationUnit.java:542)
18:06:23 at groovy.lang.GroovyClassLoader.doParseClass(GroovyClassLoader.java:298)
18:06:23 at groovy.lang.GroovyClassLoader.parseClass(GroovyClassLoader.java:268)
18:06:23 at groovy.lang.GroovyShell.parseClass(GroovyShell.java:688)
18:06:23 at groovy.lang.GroovyShell.parse(GroovyShell.java:700)
18:06:23 at groovy.lang.GroovyShell.parse(GroovyShell.java:736)
18:06:23 at groovy.lang.GroovyShell.parse(GroovyShell.java:727)
18:06:23 at org.jenkinsci.plugins.scriptsecurity.sandbox.groovy.SecureGroovyScript.evaluate(SecureGroovyScript.java:165)
18:06:23 at hudson.plugins.groovy.SystemGroovy.run(SystemGroovy.java:95)
18:06:23 at hudson.plugins.groovy.SystemGroovy.perform(SystemGroovy.java:59)
18:06:23 at hudson.tasks.BuildStepMonitor$1.perform(BuildStepMonitor.java:20)
18:06:23 at hudson.model.AbstractBuild$AbstractBuildExecution.perform(AbstractBuild.java:779)
18:06:23 at hudson.model.Build$BuildExecution.build(Build.java:205)
18:06:23 at hudson.model.Build$BuildExecution.doRun(Build.java:162)
18:06:23 at hudson.model.AbstractBuild$AbstractBuildExecution.run(AbstractBuild.java:534)
18:06:23 at hudson.model.Run.execute(Run.java:1728)
18:06:23 at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:43)
18:06:23 at hudson.model.ResourceController.execute(ResourceController.java:98)
18:06:23 at hudson.model.Executor.run(Executor.java:404)
18:06:23 Build step 'Execute system Groovy script' marked build as failure
18:06:23 Sending e-mails to: ros-buildfarm-testfarm@googlegroups.com
18:06:23 ERROR: Could not connect to SMTP host: localhost, port: 25
18:06:24 javax.mail.MessagingException: Could not connect to SMTP host: localhost, port: 25;
18:06:24 nested exception is:
18:06:24 java.net.ConnectException: Connection refused (Connection refused)
18:06:24 at com.sun.mail.smtp.SMTPTransport.openServer(SMTPTransport.java:1934)
18:06:24 at com.sun.mail.smtp.SMTPTransport.protocolConnect(SMTPTransport.java:638)
18:06:24 at javax.mail.Service.connect(Service.java:295)
18:06:24 at javax.mail.Service.connect(Service.java:176)
18:06:24 at javax.mail.Service.connect(Service.java:125)
18:06:24 at javax.mail.Transport.send0(Transport.java:194)
18:06:24 at javax.mail.Transport.send(Transport.java:124)
18:06:24 at hudson.tasks.MailSender.run(MailSender.java:131)
18:06:24 at hudson.tasks.Mailer.perform(Mailer.java:170)
18:06:24 at hudson.tasks.BuildStepCompatibilityLayer.perform(BuildStepCompatibilityLayer.java:78)
18:06:24 at hudson.tasks.BuildStepMonitor$1.perform(BuildStepMonitor.java:20)
18:06:24 at hudson.model.AbstractBuild$AbstractBuildExecution.perform(AbstractBuild.java:779)
18:06:24 at hudson.model.AbstractBuild$AbstractBuildExecution.performAllBuildSteps(AbstractBuild.java:720)
18:06:24 at hudson.model.Build$BuildExecution.post2(Build.java:185)
18:06:24 at hudson.model.AbstractBuild$AbstractBuildExecution.post(AbstractBuild.java:665)
18:06:24 at hudson.model.Run.execute(Run.java:1753)
18:06:24 at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:43)
18:06:24 at hudson.model.ResourceController.execute(ResourceController.java:98)
18:06:24 at hudson.model.Executor.run(Executor.java:404)
18:06:24 Caused by: java.net.ConnectException: Connection refused (Connection refused)
18:06:24 at java.net.PlainSocketImpl.socketConnect(Native Method)
18:06:24 at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:339)
18:06:24 at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:200)
18:06:24 at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:182)
18:06:24 at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
18:06:24 at java.net.Socket.connect(Socket.java:580)
18:06:24 at com.sun.mail.util.SocketFetcher.createSocket(SocketFetcher.java:286)
18:06:24 at com.sun.mail.util.SocketFetcher.getSocket(SocketFetcher.java:231)
18:06:24 at com.sun.mail.smtp.SMTPTransport.openServer(SMTPTransport.java:1900)
18:06:24 ... 18 more
18:06:24 Finished: FAILURE

Originally posted by Bo Li on ROS Answers with karma: 1 on 2017-07-21
Post score: 0

Original comments
Comment by gvdhoorn on 2017-07-21: Sending e-mails to: ros-buildfarm-testfarm@googlegroups.com ERROR: Could not connect to SMTP host: localhost, port: 25 — please fix this also: disable notification emails for both maintainers and committers in all your *.yaml files in your fork of ros_buildfarm_config ..
Comment by gvdhoorn on 2017-07-21: .. and make sure to update the email addresses listed under the notification_emails keys.
Comment by Bo Li on 2017-07-22: @gvdhoorn Yes, thanks for the reminder.

Answer: According to the error messages you are missing some Java packages. Until recently the buildfarm downloaded e.g. diffutils on demand. But as of https://github.com/ros-infrastructure/ros_buildfarm/pull/418 the Jenkins Groovy plugin provides that library already. Your instructions mention that you are using an older version of Jenkins and therefore likely older plugins. You might want to try using the same versions as on the ROS buildfarm (currently Jenkins 2.46.2 and Groovy 2.0). Please also note that there is an active effort to update the buildfarm_deployment to use Xenial instead of Trusty. That upgrade will also move us from Java 7 to Java 8, which will allow us to upgrade to a newer Jenkins version which requires Java 8.
Originally posted by Dirk Thomas with karma: 16276 on 2017-07-21
This answer was ACCEPTED on the original site
Post score: 0

Original comments
Comment by Bo Li on 2017-07-22: Thanks for the reply. Switched to Jenkins 2.46.2 and the Groovy plugin is 2.0, but I still get the same error. Is there a way to check why the Groovy plugin is not downloading the libraries? I checked /var/lib/jenkins/plugins/groovy/WEB-INF/lib and see commons-exec-1.2.jar groovy.jar.
Comment by Bo Li on 2017-07-23: The above issue is solved by following buildfarm_deployment/issues/147 and also downloading apache-xml-xerces.jar. Thanks again for the hint. Then after approving some scripts I reached another error "com.thoughtworks.xstream.mapper.CannotResolveClassException: hudson.plugins.view.dashboard.Dashboa
Comment by Bo Li on 2017-07-23: The full error is updated at the end of the tutorial gist. Just wanna seek help again :)
Comment by Bo Li on 2017-07-23: OK, finally got it working. Thanks for the answer. The later error requires the dashboard view plugin.
Comment by Dirk Thomas on 2017-07-24: If you provision the machines manually please make sure to include all the plugins and configurations from the buildfarm_deployment repo (in this case https://github.com/ros-infrastructure/buildfarm_deployment/blob/e63a990e292175360d58de247b5af43bfc013239/master/manifests/site.pp#L128).
{ "domain": "robotics.stackexchange", "id": 28410, "tags": "ros, buildfarm" }
Intermediate axis theorem in higher dimensions
Question: The intermediate axis theorem states that the rotation of an object around its first and third principal axes is stable, while rotation around its second principal axis (or intermediate axis) is not. What is the analogue of this theorem in higher dimensions, where rotations are no longer described by axes but instead by bivectors? Answer: This answer doesn't show the whole derivation, but it indicates how to set it up and what the result looks like. (I haven't seen this in the literature before, so let the reader beware: nobody has double-checked my derivation.) Treat the rigid body as a conglomerate of pieces. Let $m_n$ be the mass of the $n$th piece, and let $\mathbf{b}_n$ denote its displacement from the body's center of mass (in body-fixed coordinates). Define the square matrix $$ M = \sum_n m_n \mathbf{b}_n\mathbf{b}_n^T $$ where $T$ means transpose. This definition makes sense in any number $D$ of spatial dimensions. When $D=3$, it's different than what we usually call the moment-of-inertia tensor, but it's closely related. The stability analysis uses a $D$-dimensional version of Euler's equation, which can be written $$ \{\dot W,M\}+[W^2,M]=0 $$ with $\{A,B\} = AB+BA$ and $[A,B]=AB-BA$ and $W=R^T\dot R$, where $R$ is the time-dependent $D\times D$ rotation matrix that relates the body-fixed coordinate system to an inertial coordinate system, and $\dot R$ is the time-derivative of $R$. This is the equation of motion for a freely rotating rigid body (no external torque), expressed in a body-fixed coordinate system. The quantity $W$ is the angular velocity bivector. The square matrix $W$ is antisymmetric, and the square matrix $M$ is symmetric. Work in a basis where $M$ is diagonal, and assume that the $D$ eigenvalues of $M$ are all distinct so that stability can be analyzed using first-order perturbation theory. 
According to first-order perturbation theory (if I didn't make any mistakes), rotation in the $j$-$k$ plane is stable only if the quantities $$ \lambda_\ell := \frac{(M_{\ell\ell}-M_{jj})(M_{\ell\ell}-M_{kk})}{ (M_{\ell\ell}+M_{jj})(M_{\ell\ell}+M_{kk})} $$ are positive for all $\ell\neq j,k$. This is possible only if the two factors in the numerator are either both positive or both negative for all $\ell\neq j,k$, which in turn is possible only if $M_{jj}$ and $M_{kk}$ are either the two largest or the two smallest components of $M$ (in a basis where $M$ is diagonal). In other words, assuming no degeneracies, there are only two planes in which rotational motion is stable: One is the $j$-$k$ plane for which $M_{jj}$ and $M_{kk}$ are the two largest components of $M$ (in a diagonal basis), and one is the $j$-$k$ plane for which $M_{jj}$ and $M_{kk}$ are the two smallest components of $M$. This is consistent with the familiar situation in $D=3$.
{ "domain": "physics.stackexchange", "id": 61702, "tags": "newtonian-mechanics, classical-mechanics, rotational-kinematics, rotation, stability" }
Heat preserving performance of container relative to content
Question: This question has been addressed in the case of a thermos bottle: Performance of a thermos bottle relative to contents. I am asking the question again without the hypothesis that it is a thermos bottle.

Given a container with a warm liquid inside ("warm" meaning warmer than the medium surrounding the container), will it cool faster, slower, or equally fast when it is half-full as when it is full?

To simplify the analysis, it is assumed that the opening and cap of the container have the same properties with respect to heat as the rest of the container, so that they may be ignored in the analysis.

Answer: I started asking myself this question because I was somewhat unsatisfied with the answers to the previous question "Performance of a thermos bottle relative to contents", concerning a very specific kind of container. There were assumptions made in the answers. Even though these assumptions were fair, given that a thermos bottle is a rather precise and well-known object, the reasoning in the answers did not make them explicit. I tried to avoid that by reasoning explicitly about the cork of the bottle, but I still used properties of the bottle shape without saying so explicitly (I became aware of it later). And even though my assumption was close to the actual facts, a thermos bottle is not a cylinder. The bottom is often somewhat spherical, which could have called for an extra line of justification (how high should the bottle be compared to the radius of the bottom half-sphere, unless the top is also considered a half-sphere?). Sometimes, we also make (explicit or implicit) assumptions that are not needed.

The other thing that bothered me is that people will often vote for simple answers they understand quickly (not necessarily the best answer, or even a correct one). At least that is the feeling I get. If warranted, this would justify making lots of unstated assumptions when answering. Not to mention the fact that fast answers get a better chance at upvotes, when acceptable.

Then, considering the thermos question, I started wondering about what could make our statements wrong, and what could be the assumptions that are often made implicitly, just for that kind of problem (though I actually made one or two explicit in stating this new problem). Here are some such assumptions, probably an incomplete list (other ideas are welcome):

- role of the cork: can it be ignored as not significant?
- homogeneity of the bottle sides: is it the same kind of material all over?
- shape of the container: is it just a bottle, which we tend to assume?
- heat conductivity of the bottle side: is it isotropic?
- uniformity of liquid cooling: well, that is always wrong, but liquid conductivity is so efficient that it seems a good approximation. Is it?

Then I wondered whether falsifying these assumptions could also falsify the conclusion. Once I had satisfied myself that it was the case, I asked the question, carefully stated so as not to induce any assumption (for example by always using the word "container" instead of "bottle"). I was not trying to trap anyone, only experimenting. I unfortunately got few reactions (thank you to those who did react). I should have started that more anonymously, as some users clearly wondered what I was after (comments welcome).

So here is my answer. The picture is a cross-section image of a container that will cool faster when it is full than when it is half-full. It consists of a large disk on top of a sphere, with the same volume, so that only the sphere contains liquid when it is half-full. The opening between them is large so that heat can flow easily between the two parts when it is full. Clearly, the disk has a very large surface-to-volume ratio and will act effectively as a radiator to cool the liquid it contains, while the sphere has the smallest possible ratio and will not cool fast.

However, when the container is full, heat will flow through the liquid from the sphere to the disk, so that all the liquid content will cool rather fast, though the disk will get cool faster than the sphere. If the container is only half-full, only the sphere contains warm liquid, with a small surface/volume ratio. Hence it will cool more slowly. If the container is large enough, this should be sufficient.

It can be improved by remarking that a horizontal disk shape is not very good for convection heat transfer. Replacing the hollow disk with an inverted bell shape would work better. You can further reinforce the effect by using, for the side of the container, a material that is more heat-conductive transversally than laterally, so that heat goes out quickly but is not conducted efficiently from one part of the container side to another. That avoids heat being transferred to the disk when the container is half-full, and still permits the disk to efficiently cool the liquid it contains when full. It can be easily produced by using a heat-insulating material with copper nails piercing it at close regular intervals. The same effect could be achieved with a bottle having a standard shape, but with insulation only in the bottom part. This is somewhat close to what I said about the effect of a conductive cork in the thermos case.

Of course, this is no major discovery in elementary physics. At best a moderately easy puzzle game. But it may be telling about our reasoning process.

Coming back to the remark about votes: I am wondering whether the initial downvote for that question (without an explicitly related explanatory comment) was motivated by such an unstated assumption. Was it really justified? Now, it is possible that people who are very proficient in a field will vote more accurately. It is probably more often the case, but I think not always (I do have one example in mind, not from physics).
Proficiency is an ill-defined concept, and schools of thought are often biased, even in hard sciences.
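The surface-to-volume argument above can be checked with a few illustrative numbers (the dimensions are made up for the sketch, not taken from the answer): a sphere and a wide, flat cylinder ("disk") holding the same volume of liquid expose very different cooling surfaces.

```python
import math

V = 1e-3  # one litre of liquid, in cubic metres

# Sphere of volume V: the smallest possible surface/volume ratio
r = (3 * V / (4 * math.pi)) ** (1 / 3)
sphere_area = 4 * math.pi * r**2

# Wide, flat cylinder ("disk") of the same volume: radius chosen large
R = 0.15  # assumed disk radius in metres
h = V / (math.pi * R**2)
disk_area = 2 * math.pi * R**2 + 2 * math.pi * R * h

print(sphere_area / V)  # ~48 m^-1
print(disk_area / V)    # ~155 m^-1: the disk sheds heat several times faster
```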
{ "domain": "physics.stackexchange", "id": 8590, "tags": "thermodynamics, heat, everyday-life, thermal-radiation, thermal-conductivity" }
/tf sends messages at different timestamp
Question: I have a /tf message that seems to send messages at different timestamps. I added comments "// Look here" to highlight the stamps in secs, and I'm mostly intrigued by the one with the comment "// Look here !!!", as its timestamp in secs is absolutely different. However, the next one after that follows the same pattern as the majority. Why does this happen?

---
transforms:
- header:
    seq: 0
    stamp:
      secs: 1305031234   // Look here
      nsecs: 763107844
    frame_id: /kinect
  child_frame_id: /openni_camera
  transform:
    translation:
      x: -0.0112272525515
      y: 0.0502554918469
      z: -0.0574041954071
    rotation:
      x: 0.129908557896
      y: -0.141388223098
      z: 0.681948549539
      w: 0.705747343414
- header:
    seq: 0
    stamp:
      secs: 1305031234   // Look here
      nsecs: 763107844
    frame_id: /world
  child_frame_id: /kinect
  transform:
    translation:
      x: 1.28183615112
      y: 0.60716934967
      z: 1.60902340698
    rotation:
      x: -0.226407331155
      y: -0.209261051914
      z: 0.662704621347
      w: 0.682474993971
---
transforms:
- header:
    seq: 0
    stamp:
      secs: 1551776935   // Look here !!!
      nsecs: 528851222
    frame_id: world
  child_frame_id: camera_pose
  transform:
    translation:
      x: 0.0543486074629
      y: -0.0490532501685
      z: -0.083467717153
    rotation:
      x: 0.0249910632545
      y: -0.369232593093
      z: 0.0535471565154
      w: 0.927456436165
---
transforms:
- header:
    seq: 0
    stamp:
      secs: 1305031234   // Look here
      nsecs: 773104542
    frame_id: /kinect
  child_frame_id: /openni_camera
  transform:
    translation:
      x: -0.0112272525515
      y: 0.0502554918469
      z: -0.0574041954071
    rotation:
      x: 0.129908557896
      y: -0.141388223098
      z: 0.681948549539
      w: 0.705747343414

Originally posted by murdock on ROS Answers with karma: 75 on 2019-03-05
Post score: 0

Original comments
Comment by EdwardNur on 2019-03-05: Can you post your tree?
Comment by gvdhoorn on 2019-03-05: It's likely there are either multiple hosts involved that don't have their clocks synchronised, or a bag is being played together with live nodes.
Comment by murdock on 2019-03-05: @gvdhoorn, what do you mean by live nodes? @EdwardNur, updated my question with the tree.
Comment by gvdhoorn on 2019-03-05: "live nodes" as in: nodes that publish messages that are not part of the bag. That is of course ok, but then use_sim_time should probably be used. If the bag contains messages "from the past" (as they probably all do), then not setting use_sim_time will cause "real time" to be mixed with old ..
Comment by gvdhoorn on 2019-03-05: .. stamps, which can lead to issues such as the one you show here. Whether that is the case here depends of course on you: only you know how you configured your system.
Comment by gvdhoorn on 2019-03-05: According to the screenshot of your TF tree, it would appear that you have rosbag play running and camera_pose is being broadcast by the /Mono node. Are you specifying --clock? /Mono seems to be using wall clock, not the clock from the bag.
Comment by murdock on 2019-03-05: Nope, not using clock. Will check it out.
Comment by murdock on 2019-03-14: @gvdhoorn, setting --clock didn't work. I get the same issue.

Answer: I looked up this answer. Just setting use_sim_time alone didn't work in the beginning. I completely stopped the roscore, then set the sim_time param, and then let all of my stuff run, and it seemed to work.

Originally posted by murdock with karma: 75 on 2019-03-14
This answer was ACCEPTED on the original site
Post score: 0
{ "domain": "robotics.stackexchange", "id": 32587, "tags": "ros-indigo" }
What liquid crystals are used most frequently in displays?
Question: Is there a specific chemical that is used frequently in the production of liquid crystal displays? During my internet research so far, it seems as if the specific chemical composition of the liquid crystal does not matter very much. This website gives a list of apparently commonly used LCs. Are there one or two chemicals that are used in practically all common LCDs? Answer: The article Liquid Crystal Display: Environment & Technology (2013) provides a detailed list of chemicals often used in LCDs. There are a myriad of chemicals that are used, each serving a specific purpose for the functionality of LCDs. However, for liquid crystals (LCs) themselves: The composition of LC includes a bicyclohexyl compound (35-50% by weight), a cyclohexyl phenyl compound (15-25% by weight), a bicyclohexyl phenyl compound (20-25% by weight), and a cyclohexyl biphenyl compound (15-20% by weight). The article states that there are typically 10-25 compounds used to manufacture LCs, with many mixtures used, depending on the application the LCD is used for. The variations in mixtures result in differing chemical properties of alkyl or alkoxy side chains. The back light unit has an interesting component: The backlight unit (BLU) major constituent of LCD contains hazardous mercury to operate.
{ "domain": "engineering.stackexchange", "id": 200, "tags": "chemical-engineering, optics, liquid" }
Lorentz algebra and its generators
Question: I'm reading Maggiore's book A Modern Introduction to Quantum Field Theory and I'm getting a bit confused when he writes about Lorentz algebra: $$K^i = J^{i0},$$ $$J^{i}=\frac{1}{2}\epsilon^{ijk}J^{jk},$$ $$[J^{i}, J^{j}] = i\epsilon^{ijk} J^{k},$$ $$[J^{i}, K^j] =i\epsilon^{ijk} K^k. $$ Then he states that $K^i$ is a spatial vector due to the last commutation relation. Is that the way a spatial vector transform under the $SO(3)$ algebra? If yes why? Answer: From Claude Cohen-Tannoudji, Volume 2, X.D.1: (...) an observable $\textbf{V}$ is a vector if its three components $V_x, V_y$ and $V_z$ in an orthonormal frame $Oxyz$ satisfy the following commutation relations: $$ \tag{4-a} [J_x,V_x] = 0$$ $$ \tag{4-b} [J_x,V_y] = i \hbar V_z$$ $$ \tag{4-c} [J_x, V_z] = -i \hbar V_y$$ as well as those obtained by cyclic permutation of the indices $x,y$ and $z$. In your notation, these relations can be more compactly written as $$ \tag{1} [J_i,V_j] = i \epsilon_{ijk} V_k$$ or (in a more formal, less rigorous way) $$ \textbf{J} \times \textbf{V} = i \hbar \textbf{V}.$$ In other words, (1) are the defining relations of a vector operator $\textbf{V}$. Other information about vector operators can be found on this wikipedia article and this physics.se answer.
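Relation (1) is easy to check numerically. Here is a sketch using the standard spin-1 angular momentum matrices with $\hbar=1$ (the matrices and tolerances are my own illustrative choices, not from either book quoted above); since $\textbf{J}$ is itself a vector operator, it must satisfy (1) with $\textbf{V} = \textbf{J}$:

```python
import numpy as np

# Spin-1 representation of the rotation generators (hbar = 1),
# in the |m=+1>, |m=0>, |m=-1> basis.
s = 1 / np.sqrt(2)
Jx = s * np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=complex)
Jy = s * np.array([[0, -1j, 0], [1j, 0, -1j], [0, 1j, 0]], dtype=complex)
Jz = np.diag([1, 0, -1]).astype(complex)
J = [Jx, Jy, Jz]

def comm(A, B):
    return A @ B - B @ A

# Levi-Civita symbol eps_ijk
eps = np.zeros((3, 3, 3))
for i, j, k in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    eps[i, j, k], eps[j, i, k] = 1.0, -1.0

# Check [J_i, J_j] = i eps_ijk J_k, i.e. relation (1) with V = J.
for i in range(3):
    for j in range(3):
        rhs = 1j * sum(eps[i, j, k] * J[k] for k in range(3))
        assert np.allclose(comm(J[i], J[j]), rhs)
print("[J_i, J_j] = i eps_ijk J_k verified for spin 1")
```

The same loop would pass for any operator triple transforming as a vector under rotations, which is exactly the defining property quoted from Cohen-Tannoudji.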
{ "domain": "physics.stackexchange", "id": 18694, "tags": "special-relativity, vectors, lie-algebra, representation-theory, lorentz-symmetry" }
Generic Graph using Adjacency matrix - Java
Question: The main purpose of representing Graph using adjacency matrix method is, to check the vertex and its neighbor's existence in constant time proportional to \$\mathcal{O}(n)\$. In the various tutorials I have seen, Graphs contain only integer vertices and it becomes straight forward to represent them in a \$v \times v\$ integer 2D array to map the vertices. In real world, we might have to store a custom type Object as the graph vertex. For this, I have created a Graph implementation with Adjacency matrix. Can you please let me know of any feedback / improvements? //V - type of Object stored on graph vertices public class GraphAM<V> { //Maps vertex with its adjacency matrix index. O(1) to retrieve index of a vertex private Map<V, Integer> vertices; //To get vertex using index at O(1) time private List<V> verticesLookup; //adjacency matrix private int[][] adj; private int index; public GraphAM(int numVertices) { adj = new int[numVertices][numVertices]; index = 0; vertices = new HashMap<>(); verticesLookup = new ArrayList<>(); } public void addEdge(V from, V to) { addVertex(from); addVertex(to); int fromIndex = vertices.get(from); int toIndex = vertices.get(to); adj[fromIndex][toIndex] = 1; } private void addVertex(V v) { if(!vertices.containsKey(v)) { vertices.put(v, index); verticesLookup.add(index, v); index++; } } public void bfs(V start) { Queue<V> queue = new LinkedList<>(); boolean[] visited = new boolean[vertices.size()]; queue.add(start); int index = vertices.get(start); visited[index] = true; while(!queue.isEmpty()) { V v = queue.poll(); System.out.print(v + " "); List<V> adjacentVertices = getAdjacentVertices(v); for(V a : adjacentVertices) { int adjInd = vertices.get(a); if(!visited[adjInd]) { queue.add(a); visited[adjInd] = true; } } } } public void dfs(V start) { boolean[] visited = new boolean[vertices.size()]; dfs(start, visited); } private void dfs(V v, boolean[] visited) { System.out.print(v + " "); int index = vertices.get(v); visited[index] 
= true; List<V> adjacentVertices = getAdjacentVertices(v); for(V a : adjacentVertices) { int aIndex = vertices.get(a); if(!visited[aIndex]) { dfs(a, visited); } } } private List<V> getAdjacentVertices(V v) { int index = vertices.get(v); List<V> result = new ArrayList<>(); for(int i=0; i<adj[index].length; i++) { if(adj[index][i] == 1) { result.add(verticesLookup.get(i)); } } return result; } } Main class class Main { public static void main(String[] args) { GraphAM<Integer> g = new GraphAM<>(4); g.addEdge(0, 1); g.addEdge(0, 2); g.addEdge(1, 2); g.addEdge(2, 0); g.addEdge(2, 3); g.addEdge(3, 3); System.out.println("Following is Breadth First Traversal "+ "(starting from vertex 2)"); g.bfs(2); System.out.println("\nFollowing is Depth First Traversal "+ "(starting from vertex 2)"); g = new GraphAM<>(4); g.addEdge(0, 1); g.addEdge(0, 2); g.addEdge(1, 2); g.addEdge(2, 0); g.addEdge(2, 3); g.addEdge(3, 3); g.dfs(2); } } Answer: You can simplify the mapping you are using to get from vertex to it's associated object. Instead of using a List<V>, which can only guarantee \$\mathcal{O}(n)\$ lookup time, you could just use an array. Comments restating what's known should be removed, comments that can be made obsolete through refactoring should prompt that refactoring: public class GraphAM<V> { private Map<V, Integer> vertexToIndex; private V[] indexToVertex; private int[][] adjacencyMatrix; private int index; vertices (aka vertexToIndex can be eagerly initialized. all private fields (apart from index) should be marked final. Instead of passing numVertices to the constructor, you should consider passing a Collection<V>, which removes the need to deal with adding vertices in addEdge. Finally instead of using a boolean[] for visited, a Set<V> is significantly more obvious. It might come with a minor performance penalty though. Finally the search functions do not search for anything... 
they're just performing an exhaustive traversal and are accordingly useless in their current form...
{ "domain": "codereview.stackexchange", "id": 29931, "tags": "java, graph" }
The Morra game implementation in Haskell
Question: Me and my friends play a variant of Morra when drinking. The rules are as follows: The players sit in a circle and hold one of their hands in front of them. One player guesses a number. All the players (including this one) then hold out between 0 and 5 fingers on their hand. If there are as many fingers held out as the player guessed, he is "saved", leaves the circle and doesn't have to play anymore. Next turn another player tries to guess (otherwise the game just goes on with the player staying in the circle). The last player to remain in the circle has to drink. I'm just learning about monad transformers, so I thought it would be nice to implement this simple game. module Morra where import Control.Monad.Trans.State import Control.Monad.Trans import Control.Monad import Data.List import Data.Char main :: IO () main = do putStrLn "Welcome to Morra. The last to remain has to drink." putStr "Please input the names of players: " psraw <- getLine putStrLn "" ([p], n) <- execStateT (play 0) (words psraw, 0) putStrLn $ p ++ " has lost, after " ++ show n ++ " rounds. (S)he has to take a shot." play :: Int -> StateT ([String], Int) IO () play p = do (ps, n) <- get let currp = ps !! p if length ps == 1 then put (ps, n) else do liftIO $ putStr $ "It's now " ++ currp ++ "'s turn. How many fingers will be help up? " a <- liftIO safeGetLine putNewLine liftIO $ putStrLn "[INPUT PHASE] Each player now chooses, how many fingers does (s)he hold up." fs <- liftIO $ getFingers (length ps) liftIO $ putStr "[END OF INPUT PHASE]" putNewLine if sum fs == a then do liftIO $ putStrLn $ currp ++ " has guessed right, he's out." putNewLine let newps = delete currp ps put (newps, n + 1) play ((p + 1) `mod` length newps) else do liftIO $ putStrLn $ currp ++ " hasn't guessed right, the game goes on!" 
putNewLine put (ps, n + 1) play ((p + 1) `mod` length ps) where putNewLine = liftIO $ putStrLn "" getFingers :: Int -> IO [Int] getFingers n = replicateM n helper where helper = do putStr "Input the number of fingers you hold up: " fs <- safeGetLine putStrLn "Pass the computer to the next player now. [ENTER]" getLine return fs safeGetLine :: IO Int safeGetLine = do x <- getLine if all isNumber x then return $ read x else do putStrLn "Please input a number" safeGetLine Firstly, you can see the strings I print are pretty long, longer than the 80 chars limit. What is the common approach to tackle these and keep all the lines below 80 chars? And secondly, please point out anything that could be made more elegant, whatever that might be (even changing the underlaying StateT). I'm also open to new ideas and challenges. Answer: You don't really use the StateT to your advantage. Your play function can be written as play :: Int -> ([String], Int) -> IO ([String], Int) play p (ps, n) = do let currp = ps !! p if length ps == 1 then return (ps, n) else do putStr $ "It's now " ++ currp ++ "'s turn. How many fingers will be help up? " a <- safeGetLine putNewLine putStrLn "[INPUT PHASE] Each player now chooses, how many fingers does (s)he hold up." fs <- getFingers (length ps) putStr "[END OF INPUT PHASE]" putNewLine if sum fs == a then do putStrLn $ currp ++ " has guessed right, he's out." putNewLine let newps = delete currp ps play ((p + 1) `mod` length newps) (newps, n + 1) else do putStrLn $ currp ++ " hasn't guessed right, the game goes on!" putNewLine play ((p + 1) `mod` length ps) (ps, n + 1) where putNewLine = putStrLn "" That's not really surprising, since State s a is more or less just an abstraction of s -> (s, a). Since a is () in your case, we can just get rid of it and end up with s -> IO s. We can use State(T) to our advantage if we have several Stateful functions. However, we only have play. 
Speaking about play, there are several nitpicks: You use length several times, although a single length call would suffice. You should use guess instead of a as name. If we apply both, we end up with play :: Int -> StateT ([String], Int) IO () play p = do (ps, n) <- get let currp = ps !! p let plength = length ps unless (plength == 1) $ do liftIO $ putStr $ "It's now " ++ currp ++ "'s turn. How many fingers will be help up? " guess <- liftIO safeGetLine putNewLine liftIO $ putStrLn "[INPUT PHASE] Each player now chooses, how many fingers does (s)he hold up." fs <- liftIO $ getFingers plength liftIO $ putStr "[END OF INPUT PHASE]" putNewLine if sum fs == guess then do liftIO $ putStrLn $ currp ++ " has guessed right, he's out." putNewLine put (delete currp ps, n + 1) play ((p + 1) `mod` (plength - 1)) else do liftIO $ putStrLn $ currp ++ " hasn't guessed right, the game goes on!" putNewLine put (ps, n + 1) play ((p + 1) `mod` plength) where putNewLine = liftIO $ putStrLn "" We didn't gain that much, did we? So let's split play: type Player = String type Turn = Int type MorraT m a = StateT ([Player], Turn) m a type Morra a = MorraT IO a play :: Morra () play = do pcount <- getPlayerCount unless (pcount == 1) $ do currp <- getNextPlayer guess <- guessFingers currp fingers <- getFingers if fingers == guess then do putStrLn' $ currp ++ " has guessed right, they're out!" removePlayer currp else do putStrLn' $ currp ++ " hasn't guessed right, the game goes on!" increaseTurnCount play where putStrLn' xs = liftIO $ putStrLn xs >> putStrLn "" That's easy enough to undestand. We also added some types so that we can change our program in a single line, for example if you want to change the StateT to something else later. Now we need our helper functions: players :: Monad m => MorraT m [Player] players = fst <$> get That's a convenient small function and makes it possible to change our Morra later without changing too many functions. 
For our getNextPlayer we want to get the next player as well as queue them to the back of the line. A list isn't the perfect data structure here, by the way, but we keep it for simplicity: getNextPlayer :: Monad m => MorraT m Player getNextPlayer = do (p:ps) <- players modify $ \(_, n) -> (ps ++ [p], n) return p guessFingers looks exactly how you'd imagine it. It doesn't need to be in Morra, as it does not inspect the state, but let's keep it there for simplicity, again: guessFingers :: Player -> Morra Int guessFingers p = do liftIO $ putStrLn $ p ++ ", what's your guess?" liftIO $ safeGetLine Your variant of getFingers only tells a person to give the PC to the next player, but it does not tell them who the next player is going to be. That can be fixed by using forM instead of replicateM: getFingers :: Morra Int getFingers = do ps <- players fmap sum $ forM ps $ \p -> do liftIO $ putStrLn $ "How many fingers do you hold up, " ++ p ++ "?" liftIO safeGetLine removePlayer looks the same as yours: removePlayer :: Monad m => Player -> MorraT m () removePlayer p = modify $ \(ps, n) -> (delete p ps, n) Note that a setPlayers or withPlayers function would make both this and getNextPlayer simpler. increaseTurnCount :: Monad m => MorraT m () increaseTurnCount = modify $ \(ps, n) -> (ps, n + 1) That's it. We now have several functions that can be used independently but act on the same state. By the way, the single increaseTurnCount hints that the amount of Turns isn't really part of the State, but could be used as the return value of play instead. For completeness, we can provide runMorra :: [Player] -> IO (Player, Int) runMorra ps = do ([p], n) <- execStateT play (ps, 0) return (p, n) One last remark on safeGetLine.
Instead of all isNumber use readMaybe from Text.Read: safeGetLine :: IO Int safeGetLine = do x <- getLine case readMaybe x of Just n -> return n Nothing -> putStrLn "Please input a number" >> safeGetLine Otherwise your safeGetLine isn't safe, since isNumber also returns True on Unicode Characters in the 'Number, Other' Category.
{ "domain": "codereview.stackexchange", "id": 28302, "tags": "beginner, haskell" }
DiscountManager class in C#
Question: I built a class in C#, DiscountManager, which is responsible for calculating a customer discount based on years of loyalty. I want to refactor it and am seeking any suggestions for conciseness and efficiency. public class DiscountManager { public decimal Calculate(decimal amount, int type, int years) { decimal result = 0; decimal disc = (years > 5) ? (decimal)5/100 : (decimal)years/100; if (type == 1) { result = amount; } else if (type == 2) { result = (amount - (0.1m * amount)) - disc * (amount - (0.1m * amount)); } else if (type == 3) { result = (0.7m * amount) - disc * (0.7m * amount); } else if (type == 4) { result = (amount - (0.5m * amount)) - disc * (amount - (0.5m * amount)); } return result; } } Answer: I would recommend to move discountForLoyaltyInPercentage and switch statement into their own methods. The issue with the current method is that it has multiple purposes : Calculating and setting the Loyalty Discount Percentage. Calculating and setting the General Discount Percentage. Applying All discounts to the price, and return the result. If you move each point above to one method, it would be easier to read, extend, and to have a proper handling. public static class DiscountManager { /// <summary> /// Calculates and returns the discount percentage /// based on the time period of the account (number of years since created). /// </summary> /// <param name="timeOfHavingAccountInYears"></param> /// <returns></returns> public static decimal GetLoyaltyDiscountPercentage(int timeOfHavingAccountInYears) { return timeOfHavingAccountInYears > 5 ? 
0.05m : timeOfHavingAccountInYears / 100.00m; } /// <summary> /// Calculates and returns the general discount /// based on the <see cref="AccountStatus"/> /// </summary> /// <param name="accountStatus"></param> /// <returns></returns> private static decimal GetDiscountPercentage(AccountStatus accountStatus) { switch(accountStatus) { case AccountStatus.SimpleCustomer: return 0.10m; case AccountStatus.ValuableCustomer: return 0.30m; case AccountStatus.MostValuableCustomer: return 0.50m; default: return 0.00m; } } /// <summary> /// Applying the discounts (if any) on the price and returns the final price (after discounts). /// </summary> /// <param name="price"></param> /// <param name="accountStatus"></param> /// <param name="timeOfHavingAccountInYears"></param> /// <returns></returns> public static decimal ApplyDiscount(decimal price , AccountStatus accountStatus , int timeOfHavingAccountInYears) { decimal loyaltyDiscountPercentage = GetLoyaltyDiscountPercentage(timeOfHavingAccountInYears); decimal discountPercentage = GetDiscountPercentage(accountStatus); decimal priceAfterDiscount = price * (1.00m - discountPercentage); decimal finalPrice = priceAfterDiscount - ( loyaltyDiscountPercentage * priceAfterDiscount ); return finalPrice; } } since the methods don't need separate instances, and the nature of the class is unchangeable, making the class static would be more appropriate. You can then reuse it : var finalPrice = DiscountManager.ApplyDiscount(price, accountStatus, timeOfHavingAccountInYears); Now, this would be fine for a small discount system, but for a larger-scale discount system, you may need to use abstractions to make each discount with its own properties. (so you can manage discounts on accounts, products, season discounts .. etc).
Something like this : public abstract class AccountDiscount { public abstract decimal Discount { get; } public virtual decimal GetDiscountedPrice(decimal price) { return price * (1.00m - Discount); } } public class DefaultDiscount : AccountDiscount { public override decimal Discount => 0.00m; } public class SimpleCustomerDiscount : AccountDiscount { public override decimal Discount => 0.10m; } public class LoyaltyDiscount : AccountDiscount { public override decimal Discount { get; } public LoyaltyDiscount(int totalYears) : base() { Discount = totalYears > 5 ? 0.05m : totalYears / 100.00m; } } then you can add each discount to an account and store that in the database level, which will help you achieve this : public decimal ApplyDiscounts(IEnumerable<AccountDiscount> discounts, decimal price) { decimal finalPrice = price; foreach(var discount in discounts) { finalPrice = discount.GetDiscountedPrice(finalPrice); } return finalPrice; } public decimal ApplyDiscountsByAccountId(int accountId, decimal price) { List<AccountDiscount> discounts = _someRepository.GetAccountDiscounts(accountId); return ApplyDiscounts(discounts, price); } as here the discounts would be already linked to an account, and you only need to handle the logic between them, so you can pass an accountId and the system will populate the account profile along with its registered discounts. This is just an example of how it can be implemented on a larger scale.
{ "domain": "codereview.stackexchange", "id": 42192, "tags": "c#, formatting" }
'Simple' capacitor problem
Question: I posted this question because I have a problem in grasping the connection between force on a charge and voltage potential equilibrium. So the problem is the following: we have a charged capacitor disconnected from the battery. If a negative capacitor plate is divided into two pieces but still connected to each other with a wire and we do the same to the positive plate, the charge should be equally distributed. Now, if we decide to bring one half of the negative plate closer to the half part of the positive plate we changed potential but as far as the charge on the plate isn't moving the far negative plate-half and the closer negative plate-half should equally act on the positive pieces as the electric field is uniform and not dependent on distance between plates. There shouldn't be any difference in attraction between all plates. Now, a current should flow through the wire when one part of the capacitor is brought closer as the voltage is different but on the other hand it shouldn't if we take that the electric field on the charge between plates doesn't change with distance (in an ideal case)? Answer: Because the plates are connected by wires, they remain an equipotential surface. We can therefore safely assign both left plates (-) with $V=0$ and both right plates (+) with $V=V_0$. This MUST be true in both pictures A and B. Now in what follows let $u$ denote the upper plates separated by distance $d_u$ and $l$ denote the lower plates, separated by distance $d_l$ with $d_u<d_l$ in situation B. In situation B, $E_u = {\sigma_u\over\epsilon_0}$, and similarly for $E_l = {\sigma_l\over\epsilon_0}$. $V_0 = {\sigma_u\over \epsilon_0}d_u = {\sigma_l\over \epsilon_0}d_l$ ${\sigma_u\over\sigma_l} = {d_l\over d_u}$ Since the areas of each plate is identical, ${Q_u \over Q_l} = {d_l \over d_u}$. The charges are not equal, and the fields are not equal. 
Only the voltage difference between the plates remains equal since the left plates must be equipotential, and the right plates must be equipotential.
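The charge ratio derived above is easy to check numerically. A minimal sketch, with assumed illustrative values for $V_0$, the half-plate area, and the two separations (none of these numbers come from the question):

```python
eps0 = 8.854e-12   # vacuum permittivity, F/m
V0 = 10.0          # assumed potential difference fixed by the connecting wires, V
A = 1.0e-2         # assumed area of each half-plate, m^2
d_u, d_l = 1.0e-3, 2.0e-3   # assumed separations of the upper and lower halves, m

# The wires force both pairs to the same V0, so sigma = eps0 * V0 / d on each.
Q_u = (eps0 * V0 / d_u) * A
Q_l = (eps0 * V0 / d_l) * A

print(Q_u / Q_l)   # d_l/d_u = 2.0: the closer pair holds more charge
```

The ratio depends only on $d_l/d_u$; the difference $Q_u - Q_l$ is exactly the charge that had to flow through the wire while one half was moved closer.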
{ "domain": "physics.stackexchange", "id": 80273, "tags": "forces, electric-fields, charge, potential, capacitance" }
How to calculate the process correctly using Feynman diagrams?
Question: The general approach of perturbation theory is familiar to me, only a slight clarification. We describe the interaction of two particles. For this, one must first take into account the contribution of the zero-order approximation diagram (without interaction), and then the contribution with one or two interaction elements (first approximation), and so on. Do I understand correctly? Answer: The question is a bit vague, thus the answer will also be quite vague. First of all when you say that you want to "calculate the process" you have to be more specific. What do you want do to? For example: You may want to calculate the probability that the system will go from state $1$ to state $2$(in the case you mentioned, these states are two-particle states). In this case, what you want to do is calculate the $S$-matrix. On the other hand you may want to calculate the Green function for your system. From here you can determine renormalization, life-time... Any one of these cases can be calculated using Feynman diagrams. They basically represent the pictorial method to do perturbation theory. For this, one must first take into account the contribution of the zero-order approximation diagram (without interaction) Yes. The zeroth-order approximation represents the case if the particles wouldn't be interacting at all. Thus, in perturbation theory this represents the simplest (zeroth-order) approximation. The corresponding Feynman diagrams do not have any vertices. The number of vertices represent the order of perturbation theory. Thus first order approximation Feynman diagrams would only have one vertex. This is the lowest non-trivial (interacting) approximation. If you want to go even further, then you would need Feynman diagrams with two vertices. In theory if you accounted infinitely many diagrams you would get the perfectly correct result. 
Although it turns out that even if you could sum all the diagrams, it is not so straightforward to obtain the exact result, because the series you get with Feynman diagrams is in fact an asymptotic expansion. This means that in the case when you have many diagrams, you should not just sum them, but you need to do something called a Pade transformation, which is probably advanced for you at this point in time. For example, in QED this becomes important if you have more than 137 diagrams (which you will never have)...
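The asymptotic-expansion point can be illustrated with a toy model rather than QED (this is the classic Stieltjes/Euler example, not a field-theory calculation; x = 0.1 is an assumed value): the integral $E(x)=\int_0^\infty e^{-t}/(1+xt)\,dt$ has the divergent formal series $\sum_n (-1)^n n!\, x^n$, whose partial sums first improve and then blow up past the optimal truncation order $n \sim 1/x$:

```python
import math
import numpy as np

x = 0.1

# "Exact" value by trapezoid quadrature of E(x) = int_0^inf e^{-t}/(1 + x t) dt
t = np.linspace(0.0, 60.0, 600_001)        # e^{-60} tail is negligible
f = np.exp(-t) / (1.0 + x * t)
exact = float(np.sum((f[:-1] + f[1:]) / 2) * (t[1] - t[0]))

def partial_sum(N):
    """Partial sum of the divergent formal series sum_n (-1)^n n! x^n."""
    return sum((-1) ** n * math.factorial(n) * x**n for n in range(N + 1))

err10 = abs(partial_sum(10) - exact)   # near the optimal order n ~ 1/x = 10
err30 = abs(partial_sum(30) - exact)   # far past it: the series has diverged
print(err10 < 1e-3, err30 > 1.0)       # -> True True
```

Adding "more diagrams" (more terms) helps only up to a point; past it the error grows, which is why resummation tricks like Pade approximants are needed.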
{ "domain": "physics.stackexchange", "id": 75155, "tags": "quantum-field-theory, feynman-diagrams, perturbation-theory, interactions" }
Thermodynamics of Forming Peptide Bonds
Question: Which of the following shows the correct changes in thermodynamic properties for a chemical reaction in which amino acids are linked to form a protein? A) +ΔH, +ΔS, +ΔG B) +ΔH, -ΔS, -ΔG C) +ΔH, -ΔS, +ΔG D) -ΔH, -ΔS, +ΔG E) -ΔH, +ΔS, +ΔG Answer: C I know that dehydration is endergonic (+ΔG). I have two questions: Why is entropy decreasing? In the beginning of the reaction, we have two amino acids. By dehydration, we have at the end two molecules: the two amino acids together and the H2O molecule. The same number of particles before and after. Why is enthalpy change positive? You made a bond between the two amino acids. That is exothermic. Answer: The energy used to catalyze the peptidyl transferase reaction is from the breakage of the bond between the amino acid in question, and the aminoacyl-tRNA it's attached to. The two reactions are coupled by the ribosome. The ribosome can then lower the entropy by positioning of the molecules (including water) in the active site as described here. So we have our reaction ΔG = ΔH - TΔS Your ΔG is positive because your reaction is endergonic, and the ΔH is positive because the peptide bond is the system, and you're absorbing energy to form it. The entropy is decreasing as such (removing energy = removing heat from outside the system), and the actual reaction for the formation of a peptide bond is unfavorable. As I mention above, this unfavorable reaction is coupled to a favorable reaction in the hydrolysis of an aminoacyl-tRNA to make it possible. What I forgot to mention above is that the ribosome is making this more favorable as well by decreasing the activation entropy for the reaction, and that's what they describe in the linked journal.
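The sign bookkeeping in ΔG = ΔH − TΔS can be checked with a one-liner; the magnitudes below are assumed purely for illustration (only the signs matter for choosing C):

```python
def gibbs(dH, dS, T):
    """Gibbs free-energy change: dG = dH - T*dS."""
    return dH - T * dS

dH = 10_000.0   # J/mol, assumed: forming the bond here absorbs energy (+dH)
dS = -50.0      # J/(mol K), assumed: ordering in the active site (-dS)
T = 310.0       # K, roughly physiological temperature

dG = gibbs(dH, dS, T)
print(dG > 0)   # -> True: +dH and -dS force +dG, i.e. endergonic (choice C)
```

With +ΔH and −ΔS the two terms can never cancel: −T·ΔS is positive for any positive temperature, so ΔG is positive regardless of the actual magnitudes.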
{ "domain": "biology.stackexchange", "id": 3428, "tags": "proteins, amino-acids, thermodynamics" }
creating rostopic
Question: Hi, I am confused in how to create ROSTOPICs. In case of ROSMSG,one has to create a msg folder and then .msg file .Does one has to do the similar process for creating ROSTOPIC ? Originally posted by SaurabhR on ROS Answers with karma: 5 on 2016-08-02 Post score: 0 Answer: A topic is "created" (and will show up when using ROS introspection tools such as rostopic) as soon as a publisher or subscriber to that topic is created within a ROS node. From the Python and C++ creating a publisher/subscriber tutorials: C++: ros::Publisher chatter_pub = n.advertise<std_msgs::String>("chatter", 1000); Python: pub = rospy.Publisher('chatter', String, queue_size=10) Originally posted by jarvisschultz with karma: 9031 on 2016-08-02 This answer was ACCEPTED on the original site Post score: 2 Original comments Comment by SaurabhR on 2016-08-02: What is "chatter " here? Is it the name of topic? Comment by ahendrix on 2016-08-02: Yes. "chatter" is the name of the topic.
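Once a node containing such a publisher is running, the standard ROS 1 command-line tools will confirm the topic exists (using the chatter topic from the snippets above); no msg folder or .msg file is involved unless you are defining a new message type:

```shell
# The topic exists as soon as the publisher is created inside the node:
rostopic list            # /chatter should appear
rostopic echo /chatter   # stream the published std_msgs/String messages
rostopic info /chatter   # shows the message type, publishers and subscribers
```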
{ "domain": "robotics.stackexchange", "id": 25424, "tags": "rostopic" }
Gravity near surface of a large body
Question: The question I want to ask is what point is there in using $$ F = G {m_1 m_2 \over r^2} $$ when we don't talk about point masses, but one of the masses is a sphere of radius r. I'm currently not good enough at maths to calculate that myself. In particular, let's take a vector form for point of mass: $$ \overrightarrow{F} = G {m_1 m_2 \over |r|^2 } {\overrightarrow{r} \over |r|} $$ and a differential form, (okay, bear with me, this is probably incorrect already: ) $$ d\overrightarrow{F} = G {\rho_1 m_2 \over |r|^2 } {\overrightarrow{r} \over |r|} dV $$ Now we should integrate the vectors over volume V, which is a sphere with center at distance r from the point of our calculations. How would I go about such calculations? If the scope of maths involved exceeds what's welcome on physics.se, could you help me rephrase the question for mathematics.se, or point me to resources that would let me solve this? (assume uniform distribution of mass - asteroids etc.) Answer: The way you would go about calculating the gravitational forces from a sphere of radius R is to start by looking at a homogeneous ring and calculating its gravitational pull on a fixed point at a certain distance from the circle's center on the rings center axis. Then you look at the gravitational forces from a spherical layer of an infinitesimal thickness, consisting of the rings from before with some infinitesimal width $\rho d\phi$ if $\rho$ is the radius of the spherical layer and $d\phi$ is the angular extent of the rings width. You'll find that the attraction from such a spherical layer with mass $M$ is the same as for a point mass $M$ located at the center of the spherical layer. And since a filled homogeneous sphere is built up from such elementary spherical layers (each with mass $dM$) the total sphere will have the same property: for gravitational calculations with Newton's law, the sphere could just be replaced by a point mass located at its center. 
The whole calculation is a bit long to write out completely here, but if this is too short, I could expand this answer. Also, if the sphere is not homogeneous, corrections need to be made. I'm not entirely sure how that's done though. EDIT: the link provided by John Rennie explains in (much) more detail what I'm talking about.
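The ring-by-ring integration described in the answer can also be carried out numerically. A sketch with assumed unit values (G = M = R = 1, field point at r = 2.5): each ring at polar angle θ has radius R sin θ and mass dM = (M/2) sin θ dθ, sits at axial distance z = r − R cos θ from the field point, and pulls along the axis like any ring does; summing them reproduces the point-mass value GM/r²:

```python
import numpy as np

G, M, R = 1.0, 1.0, 1.0     # assumed unit values
r = 2.5                     # field point outside the shell (r > R)

# Slice the shell into rings at polar angle theta.
theta = np.linspace(0.0, np.pi, 200_001)
z = r - R * np.cos(theta)                 # axial distance from ring plane
a = R * np.sin(theta)                     # ring radius
# On-axis field of a ring of mass dM: G * dM * z / (z^2 + a^2)^(3/2)
integrand = G * (M / 2) * np.sin(theta) * z / (z**2 + a**2) ** 1.5

# Trapezoid rule, written out explicitly
dtheta = theta[1] - theta[0]
g_shell = float(np.sum((integrand[:-1] + integrand[1:]) / 2) * dtheta)

print(abs(g_shell - G * M / r**2) < 1e-8)   # shell == point mass at its center
```

Repeating this with other r > R gives the same agreement, while the analytic calculation (sketched in the answer) shows it exactly; for r < R the same integral gives zero, the other half of the shell theorem.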
{ "domain": "physics.stackexchange", "id": 5639, "tags": "newtonian-gravity" }
Which came first: leptons or baryons?
Question: If you had a big collection of neutrons, they could decay into protons, electrons, and neutrinos through beta decay since protons are the only stable baryons. After a while, you should get hydrogen out of that. Assuming some process to account for baryogenesis, this would mean that the universe could start with a whole bunch of neutral baryons and generate electrons as needed without having to worry about pair production to get its leptons. With this in mind, is there any evidence that hadrons were created first and then, in turn, created leptons (regardless of evidence for baryogenesis itself)? Or does the current theory suggest that leptons and hadrons were created contemporaneously in the very early universe? Answer: There exists a standard model of cosmology, i.e. accepted as the current status of research, called the Big Bang. This makes extensive use of the known interactions of particle physics encapsulated in the standard model, and assumes a singularity at the beginning of the universe where the energy seen in the universe now was originally generated. I like this display of the history of the universe as understood at present. From the hypothesized quantum gravitational state from the beginning to the end of the inflation period, it is all a model that reconciles observed behavior of the cosmic microwave background radiation data to a big bang model. From 10^-32s to 1μs the standard model of particle physics is used, where this is a soup of all the particles in the table appearing and disappearing in pair productions, so there is a quark-gluon plasma which, with the expansion cooling, allows the creation of stable protons. There will also be, by symmetry, neutrons in the mix, but they will be decaying and recreated as long as the average energy of the soup, leptons and hadrons, is enough to allow for inverse beta decay.
That is where baryogenesis is necessary, the asymmetry between protons and antiprotons, and where particle physics does not have the answer yet. Assuming it happens, then it is at the next stage, in nucleosynthesis, that the neutrons are captured into stable nuclei and matter as we know it exists. So the answer is that in the current model of the universe, leptons and quarks (hadrons) came at the same time. Stable baryons made out of quarks came after 1 ms.
{ "domain": "physics.stackexchange", "id": 41337, "tags": "particle-physics, cosmology, big-bang, baryons, leptons" }
A simple Vector implementation
Question: Just made a simple Vec class, nothing fancy but I am open to suggestions for improvements. Of course, this is not suppose to replace or used for the same tasks as std::vector. This is more like opencv's Vec and eventually I want to include this in a bigger project of mine. #include <cmath> #include <vector> #include <initializer_list> #include <cassert> #define VEC_ASSERT(x) assert(x) template<typename T, unsigned int C> class Vec { public: typedef T dataType; typedef T& dataType_ref; typedef const T& dataType_cref; // Empty Constructor Vec(); // Single-Arg Constructor explicit Vec(T v); // From std::vector Constructor Vec(const std::vector<T>& v); // From std::initializer_list Constructor Vec(const std::initializer_list<T>& l); // Main Constructor template<typename ... Args> explicit Vec(T v, Args&& ... args); // Get vector dimensions unsigned int dim() const; // Get vector length double length() const; // Get vectors dist double dist(const Vec<T, C>& v) const; // Get the cross product (3D Vectors only) Vec<T, C> cross(const Vec<T, C>& v) const; // Get the dot product double dot(const Vec<T, C>& v) const; // Get ortho vector (2D vectors only) Vec<T, C> ortho() const; // Normalize vector values Vec<T, C> norm() const; // Rotate (2D Vectors only) Vec<T, C> rotate(double angle) const; // Rotate on x-axis (3D Vectors only) Vec<T, C> rotateX(double angle) const; // Rotate on y-axis (3D Vectors only) Vec<T, C> rotateY(double angle) const; // Rotate on z-axis (3D Vectors only) Vec<T, C> rotateZ(double angle) const; // Convert to std::vector std::vector<dataType> to_std_vector() const; // Cast template<typename TT, unsigned int CC = C> Vec<TT, CC> to() const; // Access vector values dataType_ref operator[](int index); dataType_ref operator()(int index); dataType_cref operator[](int index) const; dataType_cref operator()(int index) const; // Vector Operations with Scalars Vec<T, C> operator+(T v); Vec<T, C> operator-(T v); Vec<T, C> operator*(T v); Vec<T, C> 
operator/(T v); Vec<T, C>& operator+=(T v); Vec<T, C>& operator-=(T v); Vec<T, C>& operator*=(T v); Vec<T, C>& operator/=(T v); // Vector Operations with Vectors Vec<T, C> operator+(const Vec<T, C>& v); Vec<T, C> operator-(const Vec<T, C>& v); Vec<T, C> operator*(const Vec<T, C>& v); Vec<T, C> operator/(const Vec<T, C>& v); private: // Recursive pusher (used by constructor) template<typename ... Args> void push(T v, Args&& ... args); // Base pusher void push(T v); // Vector values dataType values[C]; // Index for Vector pusher unsigned int idx; }; template<typename T, unsigned int C> Vec<T, C>::Vec() { for ( unsigned int i = 0; i < C; ++i ) this->values[i] = 0; } template<typename T, unsigned int C> Vec<T, C>::Vec(T v) { for ( unsigned int i = 0; i < C; ++i ) this->values[i] = v; } template<typename T, unsigned int C> Vec<T, C>::Vec(const std::vector<T>& v) { VEC_ASSERT(v.size() <= C); for ( unsigned i = 0; i < v.size(); ++i ) this->values[i] = v[i]; } template<typename T, unsigned int C> Vec<T, C>::Vec(const std::initializer_list<T>& l) { VEC_ASSERT(l.size() <= C); unsigned i = 0; for ( auto it : l ) this->values[i++] = it; } template<typename T, unsigned int C> template<typename ... Args> Vec<T, C>::Vec(T v, Args&& ... args) { this->idx = 0; this->values[idx] = v; this->push(args ...); } template<typename T, unsigned int C> template<typename ... Args> void Vec<T, C>::push(T v, Args&& ... 
args) { VEC_ASSERT(this->idx + 1 < C); this->values[++(this->idx)] = v; this->push(args ...); } template<typename T, unsigned int C> void Vec<T, C>::push(T v) { VEC_ASSERT(this->idx + 1 < C); this->values[++(this->idx)] = v; } template<typename T, unsigned int C> unsigned int Vec<T, C>::dim() const { return C; } template<typename T, unsigned int C> double Vec<T, C>::length() const { double result = 0; for ( unsigned int i = 0; i < C; ++i ) result += this->values[i] * this->values[i]; return std::sqrt(result); } template<typename T, unsigned int C> double Vec<T, C>::dist(const Vec<T, C>& v) const { Vec<T, C> result; for ( unsigned int i = 0; i < C; ++i ) result[i] = this->values[i] - v[i]; return result.length(); } template<typename T, unsigned int C> Vec<T, C> Vec<T, C>::cross(const Vec<T, C>& v) const { VEC_ASSERT(C == 3); Vec<T, C> result; result[0] = this->values[1] * v[2] - this->values[2] * v[1]; result[1] = this->values[2] * v[0] - this->values[0] * v[2]; result[2] = this->values[0] * v[1] - this->values[1] * v[0]; return result; } template<typename T, unsigned int C> double Vec<T, C>::dot(const Vec<T, C>& v) const { double result = 0.0; for ( unsigned int i = 0; i < C; ++i ) result += this->values[i] * v[i]; return result; } template<typename T, unsigned int C> Vec<T, C> Vec<T, C>::ortho() const { VEC_ASSERT(C == 2); return Vec<T, C>(this->values[1], -(this->values[0])); } template<typename T, unsigned int C> Vec<T, C> Vec<T, C>::norm() const { VEC_ASSERT(this->length() != 0); Vec<T, C> result; for ( unsigned int i = 0; i < C; ++i ) result[i] = this->values[i] * (1.0 / this->length()); return result; } template<typename T, unsigned int C> Vec<T, C> Vec<T, C>::rotate(double angle) const { VEC_ASSERT(C == 2); double theta = angle / 180.0 * M_PI; double c = std::cos(theta); double s = std::sin(theta); double x = this->values[0] * c - this->values[1] * s; double y = this->values[0] * s + this->values[1] * c; return Vec<T, C>(x, y); } template<typename T, unsigned int C> Vec<T, C> Vec<T, 
C>::rotateX(double angle) const { VEC_ASSERT(C == 3); double theta = angle / 180.0 * M_PI; double c = std::cos(theta); double s = std::sin(theta); double x = this->values[0]; double y = this->values[1] * c - this->values[2] * s; double z = this->values[1] * s + this->values[2] * c; return Vec<T, C>(x, y, z); } template<typename T, unsigned int C> Vec<T, C> Vec<T, C>::rotateY(double angle) const { VEC_ASSERT(C == 3); double theta = angle / 180.0 * M_PI; double c = std::cos(theta); double s = std::sin(theta); double x = this->values[0] * c + this->values[2] * s; double y = this->values[1]; double z = -(this->values[0]) * s + this->values[2] * c; return Vec<T, C>(x, y, z); } template<typename T, unsigned int C> Vec<T, C> Vec<T, C>::rotateZ(double angle) const { VEC_ASSERT(C == 3); double theta = angle / 180.0 * M_PI; double c = std::cos(theta); double s = std::sin(theta); double x = this->values[0] * c - this->values[1] * s; double y = this->values[0] * s + this->values[1] * c; double z = this->values[2]; return Vec<T, C>(x, y, z); } template<typename T, unsigned int C> auto Vec<T, C>::to_std_vector() const -> std::vector<dataType> { return std::vector<dataType>(&this->values[0], &this->values[0] + C); } template<typename T, unsigned int C> template<typename TT, unsigned int CC> Vec<TT, CC> Vec<T, C>::to() const { Vec<TT, CC> result; for ( unsigned int i = 0; i < std::min(C, CC); ++i ) result[i] = static_cast<TT>(this->values[i]); return result; } template<typename T, unsigned int C> auto Vec<T, C>::operator[](int index) -> dataType_ref { VEC_ASSERT(index < C); return this->values[index]; } template<typename T, unsigned int C> auto Vec<T, C>::operator()(int index) -> dataType_ref { VEC_ASSERT(index < C); return this->values[index]; } template<typename T, unsigned int C> auto Vec<T, C>::operator[](int index) const -> dataType_cref { VEC_ASSERT(index < C); return this->values[index]; } template<typename T, unsigned int C> auto Vec<T, C>::operator()(int index) const -> 
dataType_cref { VEC_ASSERT(index < C); return this->values[index]; } template<typename T, unsigned int C> Vec<T, C> Vec<T, C>::operator+(T v) { Vec<T, C> result; for ( unsigned int i = 0; i < C; ++i ) result[i] = this->values[i] + v; return result; } template<typename T, unsigned int C> Vec<T, C> Vec<T, C>::operator-(T v) { Vec<T, C> result; for ( unsigned int i = 0; i < C; ++i ) result[i] = this->values[i] - v; return result; } template<typename T, unsigned int C> Vec<T, C> Vec<T, C>::operator*(T v) { Vec<T, C> result; for ( unsigned int i = 0; i < C; ++i ) result[i] = this->values[i] * v; return result; } template<typename T, unsigned int C> Vec<T, C> Vec<T, C>::operator/(T v) { VEC_ASSERT(v != 0); Vec<T, C> result; for ( unsigned int i = 0; i < C; ++i ) result[i] = this->values[i] / v; return result; } template<typename T, unsigned int C> Vec<T, C>& Vec<T, C>::operator+=(T v) { for ( unsigned int i = 0; i < C; ++i ) this->values[i] += v; return *this; } template<typename T, unsigned int C> Vec<T, C>& Vec<T, C>::operator-=(T v) { for ( unsigned int i = 0; i < C; ++i ) this->values[i] -= v; return *this; } template<typename T, unsigned int C> Vec<T, C>& Vec<T, C>::operator*=(T v) { for ( unsigned int i = 0; i < C; ++i ) this->values[i] *= v; return *this; } template<typename T, unsigned int C> Vec<T, C>& Vec<T, C>::operator/=(T v) { VEC_ASSERT(v != 0); for ( unsigned int i = 0; i < C; ++i ) this->values[i] /= v; return *this; } template<typename T, unsigned int C> Vec<T, C> Vec<T, C>::operator+(const Vec<T, C>& v) { Vec<T, C> result; for ( unsigned int i = 0; i < C; ++i ) result[i] = this->values[i] + v[i]; return result; } template<typename T, unsigned int C> Vec<T, C> Vec<T, C>::operator-(const Vec<T, C>& v) { Vec<T, C> result; for ( unsigned int i = 0; i < C; ++i ) result[i] = this->values[i] - v[i]; return result; } template<typename T, unsigned int C> Vec<T, C> Vec<T, C>::operator*(const Vec<T, C>& v) { Vec<T, C> result; for ( unsigned int i = 0; i < C; ++i ) 
result[i] = this->values[i] * v[i]; return result; } template<typename T, unsigned int C> Vec<T, C> Vec<T, C>::operator/(const Vec<T, C>& v) { for ( unsigned int i = 0; i < C; ++i ) VEC_ASSERT(v[i] != 0); Vec<T, C> result; for ( unsigned int i = 0; i < C; ++i ) result[i] = this->values[i] / v[i]; return result; } typedef Vec<int, 2> Vec2i; typedef Vec<int, 3> Vec3i; typedef Vec<int, 4> Vec4i; typedef Vec<unsigned int, 2> Vec2u; typedef Vec<unsigned int, 3> Vec3u; typedef Vec<unsigned int, 4> Vec4u; typedef Vec<float, 2> Vec2f; typedef Vec<float, 3> Vec3f; typedef Vec<float, 4> Vec4f; typedef Vec<double, 2> Vec2d; typedef Vec<double, 3> Vec3d; typedef Vec<double, 4> Vec4d; typedef Vec<char, 2> Vec2c; typedef Vec<char, 3> Vec3c; typedef Vec<char, 4> Vec4c; typedef Vec<unsigned char, 2> Vec2uc; typedef Vec<unsigned char, 3> Vec3uc; typedef Vec<unsigned char, 4> Vec4uc; Answer: Finally a real vector class :P. Any reason why you use VEC_ASSERT instead of just assert? I don't really see the advantage of doing so. using declarations are nicer than typedefs IMO: using dataType = T; If you use exceptions, mark functions that don't or shouldn't throw noexcept. Make use of the injected-class-name: Vec cross(const Vec &v) const; You should implement all the @= operators for vectors. Also unary -. Use standard algorithms: std::fill(values, values + C, 0); // default constructor, second one too std::copy(v.begin(), v.end(), values); // std::vector constructor, init list std::inner_product(v.begin(), v.end(), v.begin(), 0.0); // length/dot std::transform(values, values + C, result.values, [length](const auto& value) { return value / length; }); // norm, operator@ assert(std::all_of(v.values, v.values + C, [](const auto& value) { return value != 0; })); // operator/ You don't have to use this-> everywhere you know :). You should consider using assert messages: assert(v[0] == 0 && "the first element must be 0!"); No love for long double and signed char? They don't have aliases. 
You don't fill the rest of the elements to 0 in your std::vector and initializer list constructors, or in to. You don't need push at all by changing your signature a bit and doing: template <typename T, unsigned int C> template <typename... Args> Vec<T, C>::Vec(Args&&... args) : values{static_cast<T>(std::forward<Args>(args))...} { static_assert(sizeof...(Args) <= C, "too many arguments to vector"); } There are techniques to avoid implementing similar code between operator@ and operator@=: friend Foo operator+(Foo lhs, const Foo& rhs) { lhs += rhs; return lhs; } Foo& operator+=(const Foo& rhs) { // do logic return *this; } More very good advice on operator overloading can be found on cppreference. If you want, you can reduce the code duplication of the cv-qualified operator[]s by using const_cast. Please prohibit creating a vector of length 0! :) You could provide rvalue overloads of operator[] to enable efficient moving, but you don't have to. This is overkill, mostly used in the standard library. Have a look at std::optional::operator* to see this in action. You might want to consider adding a constructor that takes a pair of iterators, so that Vec can be initialized by anything really, not just a std::vector. Consider making everything constexpr.
{ "domain": "codereview.stackexchange", "id": 31788, "tags": "c++, vectors" }
Strong empirical falsification of quantum mechanics based on vacuum energy density
Question: It is well known that the observed energy density of the vacuum is many orders of magnitude less than the value calculated by quantum field theory. Published values range between 60 and 120 orders of magnitude, depending on which assumptions are made in the calculations. Why is this not universally acknowledged as a strong empirical falsification of quantum mechanics? Answer: Experimentally, based on cosmological observations, there seems to be a vacuum energy (the "dark energy" component of the cosmological energy budget), with a certain value. At the present epoch, there seems to be about three times as much vacuum/dark energy as there is "dark matter" and about fifteen or twenty times as much dark energy as there is visible matter. This concentration of dark energy poses two very serious puzzles, but neither of them is at all suggestive of a breakdown of quantum mechanics. The first problem, mentioned in the question, is the "hierarchy" problem. There is no quantum mechanical prediction for the absolute energy density of the vacuum. However, it is possible to make some very crude "guesstimates" about this quantity. We know that some new fundamental physics must take over at the Planck energy scale $E_{P}$, where gravitational interactions are in the deeply quantum regime. We may therefore guess that the vacuum energy density is proportional to $E_{P}^{4}$. (This is certainly not a prediction of quantum mechanics though. Strictly, according to quantum field theory, without including gravity, the energy of the vacuum is unobservable and therefore not even well defined.) The problem with the $\propto E_{P}^{4}$ guess for the vacuum energy is that it is off by 275 nepers or so. But that does not falsify quantum mechanics, since our guess was not based on rigorous quantum theory anyway. 
The other puzzle with the vacuum energy density (the "coincidence" problem) is that its value at the present cosmological epoch is pretty close to the energy density of matter in the universe, even though there is no a priori reason why the two should be related. The fact that the (light plus dark) matter and dark energy densities are relatively close suggests that what we are observing as apparent vacuum energy might very well be something else entirely anyway. But what that "something else" might be, no one knows.
{ "domain": "physics.stackexchange", "id": 65614, "tags": "quantum-mechanics, quantum-field-theory, energy, vacuum, cosmological-constant" }
Can the equivalence principle be safely used in non-relativistic mechanics?
Question: Imagine an ideal pendulum in a train. While the train is in uniform motion, Newton's laws apply within the train, and we can easily write down the equations of motion for the pendulum. Now assume the train is being uniformly accelerated with acceleration $a$. If we would like to approach this directly using Newton's laws, we would probably have to consider the force accelerating the train, the reaction force on the pivot of the pendulum, and so on. I think this would get quite cumbersome. Naively invoking the equivalence principle as in general relativity, this becomes easy again: let $\vec g$ be the gravitational acceleration and $\vec a$ the acceleration of the train; then in the train this would give us an equivalent gravitational field with uniform value $\vec g - \vec a$, and an equilibrium position in the direction of this vector, which corresponds to the angle $\theta_0 = \arctan\frac{-a}g$, and we conclude that the equation of motion for the angle $\theta - \theta_0$ is the same as that for an ordinary pendulum with gravitational acceleration $\sqrt{g^2 + a^2}$. It is not at all clear (to me) that this reasoning is justified by Newton's laws. My questions: Can the equivalence principle (possibly in some restricted form) be safely invoked in non-relativistic mechanics? Can its validity be derived from Newton's laws? Answer: Yes, it can be. In classical mechanics the principle is nothing but the statement that inertial and gravitational masses are identical. As a consequence, all inertial forces can be mathematically interpreted as gravitational forces using the standard mathematical machinery of Newtonian mechanics. Your solution is correct.
{ "domain": "physics.stackexchange", "id": 74932, "tags": "newtonian-mechanics, classical-mechanics, newtonian-gravity, harmonic-oscillator, equivalence-principle" }
hector_quadrotor_demo goes unstable when robot gets high enough
Question: Related to a ros package, so originally asked on answers.ros.. but can anyone see a reason why a simulation would become unstable and subsequently crash because of the position of a robot in Gazebo? See video and another video here. For reference in this is with roslaunch hector_quadrotor_demo outdoor_flight_gazebo.launch and rostopic pub /cmd_vel geometry_msgs/Twist '[0,0,0.1]' '[0,0,0]' Originally posted by SL Remy on Gazebo Answers with karma: 319 on 2012-12-09 Post score: 0 Original comments Comment by nkoenig on 2012-12-10: Which version of Gazebo are you using? Comment by SL Remy on 2012-12-11: Gazebo 1.6.16 (from ubuntu precise ros-fuerte packages) Comment by nkoenig on 2012-12-12: Just for clarity, the above version number is not the version of Gazebo. That is the version of the ROS simulator_gazebo package. The actual Gazebo version is 1.0.2. Answer: So far the answer seems related to the controller for the quadrotor.. the problem only appears to occur if both the imuTopic and the stateTopic are subscribed. Still not sure why.. but when the controller reads data directly from gazebo link->GetWorldPose() and link->GetWorldLinearVel() instead of subscribing to the data that gazebo is publishing, the model is much more well behaved. Originally posted by SL Remy with karma: 319 on 2012-12-13 This answer was ACCEPTED on the original site Post score: 0 Original comments Comment by SL Remy on 2012-12-13: Both of those topics are set by default in the quadrobor ubuntu debs.
{ "domain": "robotics.stackexchange", "id": 2856, "tags": "physics" }
Force/torque on a current loop due to its own magnetic field?
Question: A current loop generates a magnetic field all around itself, as shown in the picture. My question is: does this magnetic field produce mechanical effects (force or torque) on the loop itself? If so, what effects are involved? If we take two opposite elements of the loop and evaluate the force on each element due to this magnetic field, the resultant is not zero, as shown in the figure below. It seems like there is a net force downwards! But how can that happen? Will the loop start moving just because of its own magnetic field? Answer: The answer lies in the figure itself. The field lines point in opposite directions at diametrically opposite points on the loop. Hence the forces will be equal and opposite at diametrically opposite points on the loop, by symmetry along the axis of the loop. So, in effect the loop stays put, since there is no net force acting on the loop as a whole. However, there are forces at each and every point on the loop if you consider every single point on the loop individually. But as a whole, the loop feels no force at all.
{ "domain": "physics.stackexchange", "id": 36887, "tags": "electromagnetism, magnetic-fields, electric-current, magnetic-moment" }
Creating a map with gmapping using kinect and fake odometry data
Question: Hi, the title speaks for itself, what I've done so far is creating this launchfile, which seems to do everything right - at least "rostopic echo /scan" echos something. <launch> <!-- kinect and frame ids --> <include file="$(find openni_launch)/launch/openni.launch"/> <!-- openni manager --> <node pkg="nodelet" type="nodelet" name="openni_manager" output="screen" respawn="true" args="manager"/> <!-- throttling --> <node pkg="nodelet" type="nodelet" name="pointcloud_throttle" args="load pointcloud_to_laserscan/CloudThrottle openni_manager"> <param name="max_rate" value="2"/> <remap from="cloud_in" to="/camera/depth/points"/> <remap from="cloud_out" to="cloud_throttled"/> </node> <!-- fake laser --> <node pkg="nodelet" type="nodelet" name="kinect_laser" args="load pointcloud_to_laserscan/CloudToScan openni_manager"> <param name="output_frame_id" value="/openni_depth_frame"/> <remap from="cloud" to="cloud_throttled"/> </node> </launch> Furthermore I followed the instructions to create fake odometry data. found here This also seems to work....so I tried this, but all I get is "Waiting for map". Please feel free to ask for further information, if needed. Originally posted by Flowers on ROS Answers with karma: 342 on 2012-09-12 Post score: 0 Answer: Give this a try. You don't need odometry for it. Originally posted by allenh1 with karma: 3055 on 2012-09-12 This answer was ACCEPTED on the original site Post score: 1 Original comments Comment by Flowers on 2012-09-13: Maybe I should have mentioned that: The fake odometry data are just needed for tests - the robot I want to use publishes odometry data. Any ideas why my solution does not work? thx
{ "domain": "robotics.stackexchange", "id": 10996, "tags": "slam, navigation, kinect, slam-gmapping, gmapping" }
How to model simple URDF objects?
Question: I'm trying to model a simple hammer for use in Gazebo; I'm using ROS Fuerte in Ubuntu 12.04. It doesn't have to be terribly complicated, it's just to do some grasping stuff with the NASA Robonaut r2 model (which runs just fine). I've tried modeling a simple rectangle with a static joint to a cylinder in URDF, but I can't get anything to spawn. It either says it can't communicate or crashes Gazebo outright. I feel like I'm stumbling in the dark; there's precious little I can find for how to do object modeling for Gazebo, and any help would be greatly appreciated. Originally posted by cosmic_cow on Gazebo Answers with karma: 1 on 2012-10-14 Post score: 0 Original comments Comment by hsu on 2013-01-11: If you are using ROS Fuerte version, does this work for you? http://www.ros.org/wiki/simulator_gazebo/Tutorials/SpawningObjectInSimulation Answer: Try grabbing the latest Gazebo (version 1.2). The default install will allow you drag-and-drop a hammer into simulation. Here is the SDF we use for the hammer: <?xml version="1.0"?> <gazebo version="1.2"> <model name="hammer"> <static>false</static> <link name="link"> <collision name="collision"> <geometry> <mesh> <uri>model://hammer/meshes/hammer.dae</uri> </mesh> </geometry> </collision> <visual name="visual"> <geometry> <mesh> <uri>model://hammer/meshes/hammer.dae</uri> </mesh> </geometry> </visual> </link> </model> </gazebo> An improvement can be made by using simple shapes for the collision object rather than the mesh itself. Originally posted by nkoenig with karma: 7676 on 2012-10-18 This answer was ACCEPTED on the original site Post score: 0
{ "domain": "robotics.stackexchange", "id": 2769, "tags": "gazebo, gazebo-model" }
How much does the weight of urban structures (buildings) affect the compaction (permeability, porosity, density) of alluvial sediments below a city?
Question: I live in Van, Turkey. Van City is situated on an alluvial plain beside Lake Van. The general geological structure of the area can be seen on pages 43-44 in this conference book. One of the images is below. According to the article in the link, the sediment layers have a slope of 15-30 degrees toward the lake. With the construction of every building, the pressure over the sediments increases. Obviously, this increase is greater in the downtown area, which has a lot of apartment blocks (mainly 5 to 7 story high reinforced concrete buildings). The downtown area is roughly in the middle of the plain. I want to know how much the weight of urban structures (buildings) affects the compaction (permeability/porosity/density?) of alluvial sediments below a city, and how this affects the flow of underground water. Answer: This question might be more appropriate for the SE Engineering site. The thing about soils is that they vary from place to place. Some are sandy, some contain more clay than others, and the thicknesses of layers are also variable. All this influences how different soils react to surface loading stresses, such as from building foundations/footings. Immediately below the footings the soils will experience the greatest stress increases. As discussed in this document, particularly from page 10 onwards, the magnitude of stress the soil experiences from footings decreases with depth - see the effect of a point load $Q$, applied to the surface, on the upper right of page 12. At a depth $z$, the vertical stress $\sigma_v$ has a certain value; at a depth of $2z$, the value of $\sigma_v$ is lower. One reason why the effect of surface loads decreases with depth is the nature of soil particles and how they lie in relation to each other. Soil particles are not uniform in size or shape, so they rarely lie directly on top of each other. There are gaps between soil particles, called pores. The pores can hold ground water. 
Compaction and consolidation of soils will reduce the volume (size) of pore spaces. Foundation loadings will only affect the soil beneath them. Because soil particles do not lie directly on top of each other, one particle may lie above two or more particles, and the stress experienced by the upper soil particle will be transferred to the other particles. Provided the soil is deep enough, the effect of this is to transfer, over depth, vertical stresses into horizontal stresses. Ultimately, if a soil profile is deep enough, the stress the lower reaches of the soil experience due to urban development will be small and will not affect ground water movement.
{ "domain": "earthscience.stackexchange", "id": 1913, "tags": "sedimentology, groundwater, pressure, stress" }
Is there a distinction between rest mass and relativistic mass for photons?
Question: I'm trying to reconcile how photons do and don't have mass, and the distinction seems to come from the frame of reference. As far as I understand, if you were to somehow stop a photon relative to an observer so that it was at rest, you wouldn't be able to measure its mass, probably for multiple paradoxical reasons, although maybe in some weird scenario you could decohere perpendicular photons and measure the effects. But anyway, photons can't stop, so they can't have rest mass, I guess. However, how do we know which causes the other? Do they not have rest mass because they must stay in motion? Or must they stay in motion because they formed in such a way that they never could have had rest mass to begin with? And then, how do they have mass simply by not being at rest? Answer: Photons don't have rest mass because if they did, then they would have an infinite amount of energy. For any particle with mass, it requires an infinite amount of energy to accelerate it to the speed of light. This can be seen from the formula for the energy of a particle, which is of the form $E = \gamma m c^2$. $\gamma$ approaches infinity as the velocity goes to $c$, which means that $E$ approaches infinity too. Therefore, we see that the property we call "rest mass" must have a value of 0 for all photons (at least if we define the quantity "rest mass" to be whatever the quantity $\frac{E}{\gamma c^2}$ approaches as the velocity goes to $c$). For a similar reason, we see that all particles with 0 rest mass must move at the speed of light, for if they didn't, then according to the formula above, they would also have 0 energy. But no physical particle can ever have 0 energy, so we see that they must move at the speed of light to be a physical particle in the first place. But I should note that nothing causes anything here. All we have done is conclude that for both statements "we have a physical particle" and "the particle has 0 rest mass" to hold, the particle must move at c. 
At least, this is how I think about it. Perhaps someone else has different thoughts.
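The same conclusion can be read off the relativistic energy-momentum relation, which stays finite for massless particles:

```latex
% Valid for any free particle:
E^2 = (pc)^2 + (mc^2)^2 .
% For a massive particle at rest (p = 0), this gives E = mc^2.
% For m = 0 it collapses to
E = pc ,
% finite and nonzero, which is consistent with E = \gamma m c^2 only in
% the limit v \to c, where \gamma \to \infty while m = 0.
```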
{ "domain": "physics.stackexchange", "id": 69188, "tags": "special-relativity, photons, mass, inertial-frames" }
WinSCP IDisposable Wrapper
Question: I have written a simple wrapper for WinSCP in C#. I have written it to simplify the SFTP connections I will need to perform. public class WinSCPConn : IDisposable { //private fields private SessionOptions sessionOptions; private TransferOptions transferOptions; private Session session; //public properties public bool IsSessionOpen { get { return session.Opened; } } public WinSCPConn(string hostname, string username, string password) { sessionOptions = new SessionOptions(); sessionOptions.Protocol = Protocol.Sftp; sessionOptions.HostName = hostname; sessionOptions.PortNumber = 22; sessionOptions.Password = password; sessionOptions.UserName = username; sessionOptions.Timeout = new TimeSpan(0, 3, 0); sessionOptions.GiveUpSecurityAndAcceptAnySshHostKey = true; transferOptions = new TransferOptions(); transferOptions.TransferMode = TransferMode.Binary; try { session = new Session(); session.ExecutablePath = Properties.Settings.Default.WinSCPPath; } catch (SessionLocalException) { throw; } } public void Open() { try { if (session.Opened == false) { session.Open(sessionOptions); } } catch (SessionRemoteException) { throw; } } public TransferOperationResult SendFile(string SourceFile, string DestFile) { TransferOperationResult result; try { result = session.PutFiles(SourceFile, DestFile, false, transferOptions); return result; } catch (SessionRemoteException) { throw; } } public void CreateDirectory(string FolderPath) { try { if (!session.FileExists(FolderPath)) { session.CreateDirectory(FolderPath); } } catch (SessionRemoteException) { throw; } } public void Close() { if (session.Opened == true) { session.Dispose(); } } public void Dispose() { Close(); } } It would be used as follows: using (WinSCPConn conn = new WinSCPConn("host", "username", "password")) { conn.Open(); conn.CreateDirectory("/path/"); conn.SendFile(@"C:\file.txt","/path/file.txt"); } My questions are as follows: How is my exception handling? Is there too much going on in the constructor? 
Answer: Timeout sessionOptions.Timeout = new TimeSpan(0, 3, 0); Instead of using the TimeSpan constructor, I would instead use the TimeSpan.FromX methods. In this case, it would be TimeSpan.FromMinutes: sessionOptions.Timeout = TimeSpan.FromMinutes(3); It is a little more obvious that the timeout is 3 minutes now. Port sessionOptions.PortNumber = 22; For this, I see two options: Take port as a parameter Omit the line If you are always using the default port, you can omit the assignment. According to the WinSCP API Doc, leaving the port to the default of 0 will cause it to use the protocol default (22 for SFTP). Keep in mind that always is not always always :) Session Creation try { session = new Session(); session.ExecutablePath = Properties.Settings.Default.WinSCPPath; } catch (SessionLocalException) { throw; } This part actually has two problems with it: the catch does not do anything aside from re-throw the exception you are throwing an exception in the constructor of an IDisposable type, which breaks using statements For the first point, you could just remove the try/catch. It only has a purpose if you are planning on doing something in the catch. This is true for all of your try/catch blocks. The next point could be solved with lazy initialization of the session. If you change the if statement within Open to do a null check, you can create and open the session there. 
public void Open() { if (session == null) { session = new Session(); session.ExecutablePath = Properties.Settings.Default.WinSCPPath; session.Open(sessionOptions); } } Perhaps more robust, though, is just to add a separate if check and do the creation/opening separately: public void Open() { if (session == null) { session = new Session(); session.ExecutablePath = Properties.Settings.Default.WinSCPPath; } if (session.Opened == false) { session.Open(sessionOptions); } } IDisposable There are a couple of things you may want to address with your IDisposable implementation: suppress finalization support cases where Dispose is never called allow sub-classes to override disposal behavior or seal the class The first one is pretty easy: public void Dispose() { Close(); GC.SuppressFinalize(this); } The second one depends largely on how the WinSCP Session class is written. You may need to ensure that the session is closed even if Dispose is never called, though you only need to worry about actually disposing of it when Dispose is called. For the last one, I would just mark the class as sealed, since you have neither virtual nor protected members. Otherwise, you should provide a virtual Dispose method. Generally, when doing so, your empty Dispose remains non-virtual, and you would provide an additional protected overload which takes in a boolean. For more information, see: Code analysis rule CA1063 and the MSDN page on Implementing a Dispose Method.
{ "domain": "codereview.stackexchange", "id": 11401, "tags": "c#, .net, wrapper" }
Entropy: two explanations for the same quantity?
Question: I studied thermodynamics, and I saw the following definition for entropy: $$ \Delta S = \int_1^2 \frac{\text{d}Q}{T} $$ which we use to calculate $\Delta S$ for different types of transformations. In the last lecture we started to talk about entropy as a measurement of disorder and information. The form of the entropy becomes: $$ S = k \ln{W} $$ where $W$ is the number of microstates. At this point I felt lost, and searching the internet only increased my confusion. I really can't see the relation between the two formulations of the same quantity. How are they related? Does disorder mean disorder in the microscopic structure of matter? What is the "information" carried by entropy? Answer: The way to understand the relation between the two definitions is to consider two systems which are touching, so that they exchange energy. The energy exchanged is called "heat" when it is random and microscopic. Start with the definition in terms of microstates. The entropy is the log of the number of microstates, so there is an $S_1(E)$ for system one, and $S_2(E)$ for system 2. The total number of microstates is the product of the number of states in each of system 1 and 2, so you get that the logarithm is additive $$ S(E) = S_1(E_1) + S_2(E-E_1)$$ Now you ask, what is the condition that $S(E)$ is at a maximum? This determines when you reach equilibrium. The condition is that $$ {\partial S \over \partial E_1} = 0 = S_1'(E_1) - S_2'(E-E_1)$$ So the equilibrium condition is that the derivative of the entropy with respect to energy must be equal for the two systems. We define the thermodynamic temperature to be the reciprocal of this derivative, and one concludes that two systems are at the same temperature, and so in thermal equilibrium, when the rate of increase of entropy with energy is equal for the two. Then you ask, what is the change in entropy in a system when you add a quantity of energy dQ to the system? 
By the definition of the derivative, it is $$ {\partial S\over \partial E} dQ = {dQ\over T} $$ There is nothing more to it than that. The issue is to make sure that the thermodynamic concept is identical to the intuitive concept of temperature, and for this it helps to verify that for an ideal gas, the thermodynamic temperature is (up to a universal constant) the product of the pressure and volume divided by the number of particles in the gas. To verify this, you can just count the microstates, and differentiate.
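That verification can be sketched as follows (this is the standard ideal-gas microstate count, with additive constants suppressed): $$ S(E,V,N) = Nk\left[\ln\frac{V}{N} + \frac{3}{2}\ln\frac{E}{N}\right] + \text{const} $$ so that $$ \frac{1}{T} = \frac{\partial S}{\partial E} = \frac{3Nk}{2E}, \qquad \frac{P}{T} = \frac{\partial S}{\partial V} = \frac{Nk}{V} \;\Longrightarrow\; T = \frac{PV}{Nk}, $$ which is indeed the product of pressure and volume divided by the number of particles, up to the universal constant $k$.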
{ "domain": "physics.stackexchange", "id": 4375, "tags": "thermodynamics, statistical-mechanics, entropy, information, disorder" }
Is there a difference in Earth's magnetic field between day and night?
Question: Is there a difference in Earth's magnetic field between day and night? Answer: It looks like there is some difference between the day and night geomagnetic field, and one of the causes is mentioned in @JEB's comment. From Yamazaki, Earth, Planets and Space (2022) 74:99: The Earth's upper atmosphere is weakly ionized, as it receives energy inputs from the Sun in the form of electromagnetic waves. Ionized particles interact with neutrals by collisions and move through the ambient geomagnetic field, which gives rise to an electromotive force to support electric fields and currents. The process is known as the ionospheric wind dynamo, or simply ionospheric dynamo, and it is the dominant production mechanism of ionospheric electric fields and currents at middle and low latitudes during geomagnetically quiet periods (e.g., Richmond 1995a; Heelis 2004). The dynamo currents flow mainly on the dayside at E-region altitudes (ca 90–150 km), where the electrical conductivity of the ionosphere is greatest. At night, the ionospheric conductivity is smaller by about two orders of magnitude (e.g., Richmond 2011). Thus, the currents are also much weaker and have a negligible effect on the geomagnetic field on the ground. The daytime presence and nighttime absence of the magnetic effect associated with ionospheric dynamo currents lead to daily variation of the geomagnetic field measured at ground stations. Geomagnetic daily variation is smooth and regular in appearance on geomagnetically quiet days when high-frequency geomagnetic disturbances associated with geomagnetic storms and substorms are absent, and is often referred to as solar-quiet (Sq) variation (e.g., Campbell 1989; Yamazaki and Maute 2017).
{ "domain": "physics.stackexchange", "id": 89995, "tags": "visible-light, magnetic-fields, geomagnetism" }
Macroing program with Java
Question: When you press CTRL you activate the "Register" mode. In this mode, when you click, the program stores the X and Y of the mouse. To turn it off, you press CTRL again. When you turn "Register" mode off, a message option box pops up asking "Perform actions?". If you respond "YES", the program will click on the registered coordinates. I'm new to programming and Java. This code is a mess and I could use some feedback. public class Gui extends JFrame { private JPanel mousePanel; private JLabel statusBar; private JLabel keyBar; public boolean ctrl; List<Integer> xList = new ArrayList<Integer>(); List<Integer> yList = new ArrayList<Integer>(); public int[] x; public int[] y; public Gui() { super("Program"); mousePanel = new JPanel(); mousePanel.setBackground(Color.WHITE); add(mousePanel, BorderLayout.CENTER); statusBar = new JLabel("No events"); keyBar = new JLabel("No key events"); add(keyBar, BorderLayout.NORTH);; add(statusBar, BorderLayout.SOUTH); HandlerClass handler = new HandlerClass(); mousePanel.addMouseListener(handler); mousePanel.addMouseMotionListener(handler); this.addKeyListener(handler); } public void Click(int x, int y) throws AWTException { Robot bot = new Robot(); bot.mouseMove(x, y); bot.mousePress(InputEvent.BUTTON1_MASK); bot.mouseRelease(InputEvent.BUTTON1_MASK); } private class HandlerClass implements MouseListener, MouseMotionListener, KeyListener { //Mouse Listener public void mouseClicked(MouseEvent event) { statusBar.setText(String.format("Clicked at %d, %d", event.getX(), event.getY())); if(ctrl) { xList.add(MouseInfo.getPointerInfo().getLocation().x); yList.add(MouseInfo.getPointerInfo().getLocation().y); } } public void mousePressed(MouseEvent event) { statusBar.setText(String.format("You are pressing the mouse at %d, %d", event.getX(), event.getY())); } public void mouseReleased(MouseEvent event) { statusBar.setText(String.format("Released at %d, %d", event.getX(), event.getY())); } public void mouseEntered(MouseEvent event) { 
statusBar.setText(String.format("Mouse entered at %d, %d", event.getX(), event.getY())); mousePanel.setBackground(Color.RED); } public void mouseExited(MouseEvent event) { statusBar.setText(String.format("Mouse exited at %d, %d", event.getX(), event.getY())); mousePanel.setBackground(Color.WHITE); } //Mouse Motion public void mouseDragged(MouseEvent event) { statusBar.setText(String.format("Dragging mouse at %d, %d", event.getX(), event.getY())); } public void mouseMoved(MouseEvent event) { statusBar.setText(String.format("Moving mouse at %d, %d", event.getX(), event.getY())); } //Key Listener public void keyPressed(KeyEvent e) { if(e.getKeyCode() == e.VK_CONTROL && !(ctrl)){ keyBar.setText("CTRL ON"); ctrl = true; } else if(e.getKeyCode() == e.VK_CONTROL && ctrl) { keyBar.setText("CTRL OFF"); ctrl = false; if(JOptionPane.showOptionDialog(null, "Perform actions?", "", JOptionPane.YES_NO_OPTION, JOptionPane.WARNING_MESSAGE, null, null, null) == JOptionPane.YES_OPTION) { int index = 0; for(int actionX : xList) { try { Click(actionX, yList.get(index)); } catch (AWTException e1) { e1.printStackTrace(); } index++; try { Thread.sleep(2000); } catch (InterruptedException e1) { // TODO Auto-generated catch block e1.printStackTrace(); } } } } } public void keyReleased(KeyEvent e) { } public void keyTyped(KeyEvent e) { } } } Answer: The biggest problem I found with your code is that IT DOES NOT DISPLAY ANYTHING. The JFrame(String title) constructor, according to Java SE 8 Documentation: Creates a new, initially invisible Frame with the specified title. So you have to put setVisible(true) somewhere in your constructor. Also, when it does show, it is so small that you can see nothing.
You should also set the size you want it to be and if you want it to be resizable or not in the constructor, like this: setSize(500, 500); setResizable(true); Also, here: List<Integer> xList = new ArrayList<Integer>(); List<Integer> yList = new ArrayList<Integer>(); I suggest you use LinkedList instead, because ArrayList uses arrays to store variables, and since arrays have a limited space, when you run out, it will create a new array and move everything from the old one to the new one. This takes a lot of time, and LinkedList doesn't do that. You should also add a serial version ID, because that will then save the time the compiler requires to generate one for you, like this: private static final long serialVersionUID = 1L; Other than that, your code is good. Final Code: import java.awt.AWTException; import java.awt.BorderLayout; import java.awt.Color; import java.awt.MouseInfo; import java.awt.Robot; import java.awt.event.InputEvent; import java.awt.event.KeyEvent; import java.awt.event.KeyListener; import java.awt.event.MouseEvent; import java.awt.event.MouseListener; import java.awt.event.MouseMotionListener; import java.util.LinkedList; import java.util.List; import javax.swing.JFrame; import javax.swing.JLabel; import javax.swing.JOptionPane; import javax.swing.JPanel; public class Test { public static void main(String[] args) { Gui gui = new Gui(); } } class Gui extends JFrame { private static final long serialVersionUID = 1L; private JPanel mousePanel; private JLabel statusBar; private JLabel keyBar; public boolean ctrl; List<Integer> xList = new LinkedList<Integer>(); List<Integer> yList = new LinkedList<Integer>(); public int[] x; public int[] y; public Gui() { super("Program"); setVisible(true); setSize(500, 500); setResizable(true); mousePanel = new JPanel(); mousePanel.setBackground(Color.WHITE); add(mousePanel, BorderLayout.CENTER); statusBar = new JLabel("No events"); keyBar = new JLabel("No key events"); add(keyBar, BorderLayout.NORTH); 
add(statusBar, BorderLayout.SOUTH); HandlerClass handler = new HandlerClass(); mousePanel.addMouseListener(handler); mousePanel.addMouseMotionListener(handler); this.addKeyListener(handler); } public void Click(int x, int y) throws AWTException { Robot bot = new Robot(); bot.mouseMove(x, y); bot.mousePress(InputEvent.BUTTON1_MASK); bot.mouseRelease(InputEvent.BUTTON1_MASK); } private class HandlerClass implements MouseListener, MouseMotionListener, KeyListener { public void mouseClicked(MouseEvent event) { statusBar.setText(String.format("Clicked at %d, %d", event.getX(), event.getY())); if (ctrl) { xList.add(MouseInfo.getPointerInfo().getLocation().x); yList.add(MouseInfo.getPointerInfo().getLocation().y); } } public void mousePressed(MouseEvent event) { statusBar.setText(String.format( "You are pressing the mouse at %d, %d", event.getX(), event.getY())); } public void mouseReleased(MouseEvent event) { statusBar.setText(String.format("Released at %d, %d", event.getX(), event.getY())); } public void mouseEntered(MouseEvent event) { statusBar.setText(String.format("Mouse entered at %d, %d", event.getX(), event.getY())); mousePanel.setBackground(Color.RED); } public void mouseExited(MouseEvent event) { statusBar.setText(String.format("Mouse exited at %d, %d", event.getX(), event.getY())); mousePanel.setBackground(Color.WHITE); } public void mouseDragged(MouseEvent event) { statusBar.setText(String.format("Dragging mouse at %d, %d", event.getX(), event.getY())); } public void mouseMoved(MouseEvent event) { statusBar.setText(String.format("Moving mouse at %d, %d", event.getX(), event.getY())); } public void keyPressed(KeyEvent e) { if (e.getKeyCode() == e.VK_CONTROL && !(ctrl)) { keyBar.setText("CTRL ON"); ctrl = true; } else if (e.getKeyCode() == e.VK_CONTROL && ctrl) { keyBar.setText("CTRL OFF"); ctrl = false; if (JOptionPane.showOptionDialog(null, "Perform actions?", "", JOptionPane.YES_NO_OPTION, JOptionPane.WARNING_MESSAGE, null, null, null) == 
JOptionPane.YES_OPTION) { int index = 0; for (int actionX : xList) { try { Click(actionX, yList.get(index)); } catch (AWTException exc) { exc.printStackTrace(); } index++; try { Thread.sleep(2000); } catch (InterruptedException exc) { exc.printStackTrace(); } } } } } public void keyReleased(KeyEvent e) { } public void keyTyped(KeyEvent e) { } } }
{ "domain": "codereview.stackexchange", "id": 11213, "tags": "java, beginner, swing" }
Detection of sulphide in solution also containing sulphate
Question: The reagent(s) that can selectively precipitate $\ce{S^2-}$ from a mixture of $\ce{S^2-}$ and $\ce{SO4^2-}$ in aqueous solution is(are): (A) $\ce{CuCl2}$ (B) $\ce{BaCl2}$ (C) $\ce{Pb(OOCCH3)2}$ (D) $\ce{Na2[Fe(CN)5NO]}$ Answer: (A) or (A) and (C) Why is (C) considered a correct option even though lead sulphide forms a black precipitate and lead sulphate forms a white precipitate? Question source: JEE Advanced 2016 Answer: I will add one thing that may be the reason behind why (C) is an acceptable answer choice. Yes, both $\ce{PbS}$ and $\ce{PbSO4}$ are insoluble. However, if you look at the $K_\mathrm{sp}$ for both compounds, that of $\ce{PbS}$ is about $\pu{3.2e-28}$ while the $K_\mathrm{sp}$ for $\ce{PbSO4}$ is $\pu{1.3e-8}$. When you have a mixture of ions, the compound with the lowest $K_\mathrm{sp}$ will precipitate first, given that the stoichiometric ratios are the same. In this case, since $\ce{PbS}$ has a lower $K_\mathrm{sp}$ than $\ce{PbSO4}$ and they have the same stoichiometric ratio, $\ce{PbS}$ will precipitate first. This technique is called fractional precipitation: you separate two ions, in our case $\ce{S^{2-}}$ and $\ce{SO4^{2-}}$, by precipitating one of them selectively, taking advantage of their differing $K_\mathrm{sp}$ values. This is the likely reason why choosing (A) alone is acceptable and choosing (A) and (C) is also acceptable.
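The fractional-precipitation argument can be checked with a quick back-of-the-envelope calculation. The $K_\mathrm{sp}$ values are the ones quoted above; the 0.1 M anion concentration is an assumption purely for illustration:

```python
# Ksp values quoted in the answer above.
KSP_PBS = 3.2e-28    # PbS   <=> Pb2+ + S2-
KSP_PBSO4 = 1.3e-8   # PbSO4 <=> Pb2+ + SO4(2-)

def pb_to_precipitate(ksp, anion_conc):
    """[Pb2+] at which a 1:1 lead salt just begins to precipitate."""
    return ksp / anion_conc

# Assume both anions are present at 0.1 M (illustrative choice).
pb_for_sulphide = pb_to_precipitate(KSP_PBS, 0.1)
pb_for_sulphate = pb_to_precipitate(KSP_PBSO4, 0.1)

# PbS starts forming at a Pb2+ concentration roughly 20 orders of magnitude
# lower than PbSO4, so sulphide is removed long before sulphate precipitates.
assert pb_for_sulphide < pb_for_sulphate
```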
{ "domain": "chemistry.stackexchange", "id": 15477, "tags": "experimental-chemistry, analytical-chemistry, identification" }
Reproducing electricity
Question: We all know reproducing solar energy is possible. The same holds for mechanical energy (air, water, coal): they are all reproducible. But what about other types of light? A diode's light, for example, differs from solar light tremendously. So can we convert that type of light back into electricity? That is, can we convert other types of energy back into electricity after the consumer has been supplied? For instance, converting noise back to EAS and then from EAS to electricity? Answer: If I understand your question correctly, the answer is "yes". For most energy conversion processes, the "inverse" process exists. Typically though, as you go from one to the other, and back again, you will lose some efficiency - think of it as the universe's entropy increasing at every step of the way. Specifically, with regard to your two examples: The light from an LED consists of photons - once they leave the LED there is no way of knowing whence they came. And so if you hit a photodiode with LED light, you will induce a current. If you illuminated the same diode with photons from the sun, filtered by wavelength, you would get the same current per photon. Specific photo cells (like CdTe "solar panels") may or may not be sensitive to the wavelength of the LED in question - but that is not a matter of the source of the light itself, just the frequency. Which could be generated by the sun or any other light source. And just like a loudspeaker can convert electricity into sound, you can in fact use a loudspeaker as a microphone. This type of microphone is sometimes called "dynamic" to distinguish it from a condenser-style microphone. The earphones commonly used with iPods etc. are good (enough) microphones - I am aware of at least one medical instrument that was developed in Israel that used one ear bud of headphones as a source of sound, and the other as the receiver (microphone).
By moving a coil in a magnetic field, you induce a current - just as driving a current through the coil will cause it to move. The efficiency is often quite poor - but the process is possible.
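One point worth making explicit: the per-stage efficiency losses compound multiplicatively, so a round trip through several conversions keeps only the product of the stage efficiencies. A toy illustration (the per-stage numbers below are invented for the sketch, not measured values):

```python
from math import prod

# Hypothetical round trip: electricity -> LED light -> photodiode -> electricity.
stage_efficiency = {
    "LED emission": 0.40,        # invented figure
    "photodiode capture": 0.25,  # invented figure
}

round_trip = prod(stage_efficiency.values())
# Only 0.40 * 0.25 = 10% of the original electrical energy survives.
```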
{ "domain": "physics.stackexchange", "id": 24021, "tags": "electromagnetism, energy, electricity, acoustics" }
Existence of good error correcting codes
Question: I recently asked this question and got an answer from Yuval Filmus stating that we can build a solution using error-correcting codes. More specifically, I'm looking for error-correcting codes (over a binary alphabet) with constant non-zero relative rate $R>0$, and as high a relative distance $\delta$ as possible. I know that using this theorem, we can achieve any $\delta < \frac{1}{2}$. As pointed out by Yuval Filmus's answer to my last question, this is the best $\delta$ we can hope for. Where can I find a proof that there is no binary error-correcting code with a relative distance bigger than $\frac{1}{2}$? Answer: This is known as the Plotkin bound.
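For reference, one common form of the bound (a standard statement, reproduced here from memory rather than from the answer): a binary code of length $n$ with minimum distance $d > n/2$ has at most $2\lfloor d/(2d-n) \rfloor$ codewords. Asymptotically, $$ R(\delta) \le 1 - 2\delta + o(1) \quad \text{for } 0 \le \delta \le \tfrac{1}{2}, \qquad R(\delta) = 0 \quad \text{for } \delta \ge \tfrac{1}{2}, $$ so any family of binary codes with relative distance $\delta \ge \frac{1}{2}$ must have vanishing rate.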
{ "domain": "cs.stackexchange", "id": 18525, "tags": "reference-request, error-correcting-codes" }
What is the difference between deterministic and confluent?
Question: I understand deterministic to mean that a function, for some input, will always give the same output, and these inputs and outputs can be sets of values represented by a predicate. I understand confluent as convergence of a rewriting system, i.e. the rewritten terms always converge to some term, which could also represent a predicate. It seems like these definitions are very similar in what they achieve. Would all deterministic systems be confluent and vice versa? Or is determinism really about the paths taken and exact timings of computations to reach an answer, i.e. must a deterministic algorithm always have the same trace of paths for every run? Also, there is a notion of choice in non-deterministic algorithms; how does this fit in? I feel like these definitions should work across sequential and concurrent systems, but for concurrent systems the exact timings are less important as the scheduler controls this. Answer: If a binary relation is confluent and terminating, then the map from initial state to final state is total and deterministic. The converse also holds. If a binary relation is confluent, the binary relation need not be a function. You'll have to decide what you mean by "deterministic": whether you refer to what the original state ultimately leads to (then the answer to your question is yes, as mentioned in the first paragraph of this answer) or whether you refer to the behavior in a single step of the system (then the answer is no, as mentioned in the second paragraph of this answer).
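The distinction can be made concrete with a tiny abstract rewriting system (the four-state relation below is my own illustration, not taken from the answer): the single-step relation is non-deterministic, yet the system is confluent and terminating, so the map from initial state to normal form is a deterministic function.

```python
# Step relation as a successor map: state 0 can rewrite to 1 or 2
# (a non-deterministic single step), but both paths join at 3.
REL = {0: [1, 2], 1: [3], 2: [3], 3: []}

def normal_forms(start, rel):
    """All terminal (irreducible) states reachable from `start`."""
    seen, stack, terminals = set(), [start], set()
    while stack:
        state = stack.pop()
        if state in seen:
            continue
        seen.add(state)
        if not rel[state]:          # no successors: a normal form
            terminals.add(state)
        stack.extend(rel[state])
    return terminals

assert len(REL[0]) == 2             # two possible single steps from 0 ...
assert normal_forms(0, REL) == {3}  # ... yet a unique final result
```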
{ "domain": "cs.stackexchange", "id": 21229, "tags": "nondeterminism, term-rewriting" }
How are chickens affected by light?
Question: When I was young, I was told never to put a fluorescent light in a hen house because it continuously turns on and off. Extra lighting is used, as far as I know, to 'extend' a chicken's day. Human eyes don't have the refresh frequency to notice that, but there are animals who have a frame-rate of up to 200 Hz (so they actually see the lights turn off and back on again). I was told chickens are among those animals and that they'd stop laying eggs when continuously exposed to interrupted lighting. However, my neighbour has fluorescent lighting in his hen house. So either the above is bogus, or I have my facts mixed up. So, how (if at all) are chickens affected by light? Answer: It seems like they are not affected by fluorescent light frequency. I did not find anything about their visual sampling rate. Their hearing is between 0-200Hz with an average of 86Hz so I guess the visual sampling rate is under this, but that's just a guess. We conclude that at the illumination levels used in this experiment, the hens did not perceive the flicker of low-frequency light or they perceived it but did not find it aversive. Low-frequency fluorescent light does not appear to adversely affect the welfare of hens. 1996 - Laying hens do not have a preference for high-frequency versus low-frequency compact fluorescent light sources It concludes that there is no evidence that fluorescent or high pressure sodium lighting, irrespective of intensity or spectral distribution, has any consistent detrimental effect on growth, food utilization, reproductive performance, mortality, behaviour or live bird quality in either domestic fowl or turkeys, nor in the egg production of geese. 1998 - Responses of domestic poultry to various light sources A monochromatic (LED) light can be more beneficial according to this: A significant reduction in egg production was observed in all 880nm groups; no differences in egg production and quality were found in the other groups. 
Feed consumption was significantly lower by 7% in all 0.01 W/m2 groups. We suggest that an important reduction in rearing costs of laying hens may be obtained by using this system. 1998 - New monochromatic light source for laying hens
{ "domain": "biology.stackexchange", "id": 2940, "tags": "zoology, ornithology, light, chickens" }
What is the meaning of step (e) in the prioritized sweeping algorithm? Why is P calculated like that?
Question: Following is the "Prioritized Sweeping" algorithm in Sutton-Barto's RL book (page 170). What is the meaning of step (e) in the prioritized sweeping algorithm? More importantly, why is P calculated like that? Answer: $P$ tells you how "off" the evaluation for $Q(S, A)$ is. If the difference between $Q(S,A)$ (current best guess) and $R + \gamma\max_a Q(S', a)$ (update-value for $Q(S, A)$ since you received $R$ and were moved to $S'$ when you did $A$) is large, then it is an indication that this is a state-action-pair you should consider to learn more about, and $(S, A)$ will be close to the front of the queue. On the other hand, if $P$ is small, there is "nothing new to learn", and no need to prioritize this state-action-pair.
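A minimal sketch of step (e) and the subsequent queue insertion in code: compute the priority and push the pair onto a max-priority queue when it exceeds a threshold. The names `GAMMA`, `THETA`, and the toy Q-table are my own illustration; only the priority formula comes from the algorithm.

```python
import heapq

GAMMA = 0.95   # discount factor (assumed value)
THETA = 1e-4   # insertion threshold (assumed value)

def priority(Q, s, a, r, s_next):
    """Step (e): P = |R + gamma * max_a' Q(S', a') - Q(S, A)|."""
    target = r + GAMMA * max(Q[s_next].values())
    return abs(target - Q[s][a])

# Toy tabular action-value function (values are illustrative).
Q = {"s0": {"left": 0.0, "right": 0.2},
     "s1": {"left": 1.0, "right": 0.5}}

pqueue = []
p = priority(Q, "s0", "right", r=0.0, s_next="s1")  # |0 + 0.95*1.0 - 0.2| = 0.75
if p > THETA:
    # heapq is a min-heap, so negate p to pop the largest priority first.
    heapq.heappush(pqueue, (-p, ("s0", "right")))
```

A large $P$ means the current estimate $Q(S,A)$ is far from its one-step target, so the pair lands near the front of the queue, exactly as the answer describes.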
{ "domain": "ai.stackexchange", "id": 3273, "tags": "reinforcement-learning, prioritized-sweeping" }
Why do high tides occur simultaneously on opposite sides of the Earth?
Question: Most explanations for high tides say high tides come from water being attracted by the moon (2/3) and the sun (1/3). Attraction occurring in the direction of the moon is visible on the side close to the moon below: Source: This question However, why does water also move away on the opposite side of the Earth? And moreover, why does this high tide on the opposite side also occur when the Moon and Sun are in conjunction and therefore nothing can attract water on the far side? Source Answer: First of all, tides are not as simple as the "two-bulge" simplification. In reality, the diagram shown is misleading. The two bulges appear assuming an ocean of constant depth covers the entire surface of Earth. Clearly that is not the case, and in the diagram you can see the continents. Considering the different sizes of the basins and the distinct frictional characteristics in each location, the resulting tidal effect is much more complex. The difference in phase and amplitude is shown here, and it clearly shows that the tide varies for the same longitude. That wouldn't be the case in the simple explanation above. Source Wikipedia. Looking at this tidal animation from TPXO is also illustrative. The simple "two-bulge" explanation would result in a pure two-peak daily tide. That is certainly not the case in places like the Gulf of Mexico. As mentioned in Camilo Rada's answer, the bulges are a consequence of the tidal force. This apparent force results from the difference in strength of the gravitational field. The result is that Earth's body is stretched toward and away from the center of mass of the Earth-Moon system. The water thus adjusts to this difference in geopotential, giving rise to the tides. A more intuitive explanation is given in Project Earth Science: Physical Oceanography The explanation of the two-bulge tide comes from the fact that the Moon and Earth form a two-body system that rotates about an axis located within Earth.
The bulge of water on the side of Earth that faces the Moon is easily explained. It is due to the gravitational attraction between the Moon and Earth, including the water on Earth. This attraction pulls water toward the Moon and creates a “bulge” on the surface of Earth. The bulge on the other side of Earth is due to inertia. Inertia is the tendency of an object at rest to stay at rest and the tendency of a body in motion to continue its motion in a straight line. There is an inertial tendency resulting from the rotation of the Earth-Moon system for objects (water among them) to move away from both sides of Earth—the side facing toward the Moon and the side facing away. The model demonstrates that the effect of things moving away from Earth is much greater on the side facing away from the Moon. Many textbooks and other sources use the concept of “centrifugal force”— which is actually a preconception—to explain the effects of inertia. According to this preconception, there is a force that acts on all objects that are in circular motion, and this force pushes or pulls the object out from the circle. There is no such force. The preconception arises from our own experience with circular motion. The gravitational forces of Earth, Sun and Moon cause a bulge of water on the nearest side and an equal bulge on the other side. Thus, in this simple scenario, the tide is composed of two bulges of water (four, in fact), traveling around the world as the world spins. When Moon and Sun aligned, their respective bulges add together to form "spring tides" every two weeks. When the Moon and Sun are at right angles, we encounter "neap tides", as the bulge of the sun adds to the low lunar tide, resulting in higher low tides but lower high tides. The limitations of this model are: It cannot explain that there are places without tides, with one daily high, and most with two tidal highs each day. 
Tidal height is not maximal at the Equator (and minimal at the poles) as the simplification suggests. High tide is not associated with the position of the Moon. It occurs at different times of the lunar cycle depending on the location. If continents are included, the tidal wave would reflect off the continental shelf as it reaches a continent. A tidal wave of almost equal magnitude would be propagating in the opposite direction, which is not observed. The tidal waves required for this model would have to travel at much faster speeds than are possible in reality. In reality, instead of running east to west as Earth rotates, tidal waves propagate in circles around islands and around certain points in the sea, called tidal nodes or amphidromic points. These nodes can be seen in the first figure from this answer. Thus, the tidal patterns in the ocean are a set of rotating standing waves. These waves have periods that represent the natural resonance periods of the ocean basins. They can be considered modes of "vibration" and can be decomposed using a Fourier decomposition. That is the source of the different tidal constituents that are currently used for tidal prediction.
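The near-symmetry of the two bulges in the tidal-force explanation can be checked numerically: subtract the Moon's pull at Earth's centre from its pull at the near and far surface points (constants in SI units; a spherical Earth is assumed for this sketch).

```python
G = 6.674e-11      # gravitational constant (m^3 kg^-1 s^-2)
M_MOON = 7.342e22  # Moon mass (kg)
D = 3.844e8        # mean Earth-Moon distance (m)
R = 6.371e6        # Earth radius (m)

def moon_pull(x):
    """Acceleration toward the Moon at distance x from the Moon's centre."""
    return G * M_MOON / x**2

centre = moon_pull(D)
near_residual = moon_pull(D - R) - centre  # positive: toward the Moon
far_residual = moon_pull(D + R) - centre   # negative: away from the Moon

# Both residuals have magnitude ~2*G*M*R/D^3 and differ by only a few percent,
# which is why the bulges on the two sides are nearly equal.
assert near_residual > 0 > far_residual
assert abs(near_residual + far_residual) < 0.1 * near_residual
```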
{ "domain": "earthscience.stackexchange", "id": 1720, "tags": "tides, astronomy" }
Do viruses compete with each other or even infect each other? (Virus vs Virus)
Question: I have read on a few websites that there can be competition between the viruses in a host for replication, nutrition, etc. Do viruses fight against each other, i.e. are there viruses that infect or attack other viruses directly or indirectly? If yes, can this property help in curing or limiting hazardous viruses such as HIV, using less hazardous viruses which can be cured and can compete with these hazardous viruses? Answer: From what I understand, virus-on-virus action is not particularly common, but it isn't unheard of. Consider the Sputnik virophage, which reproduces in amoeba cells that are already infected by a certain helper virus; Sputnik uses the helper virus's machinery for reproduction and inhibits replication of the helper virus. Also consider the Mavirus virophage, which is a double-stranded DNA virus that infects the marine phagotrophic flagellate Cafeteria roenbergensis in the presence of a second virus — Cafeteria roenbergensis virus. Mavirus can integrate into the genome of cells of C. roenbergensis, and thereby confer immunity to the population. As a last example, consider the Organic Lake virophage, which "[...] preys on Organic Lake phycodnaviruses, which in fact may rather belong to Mimiviridae than to Phycodnaviridae." So yes, viruses do at least occasionally attack viruses, inhibit their infectious potential, and grant a sort of immunity. I'm not aware of any applications of this in medicine for now, but maybe somebody else can add something to this if they know.
{ "domain": "biology.stackexchange", "id": 9203, "tags": "virology" }
Adaptive Merge Sort in C++
Question: As the title says, I'm trying to implement a merge sort algorithm in C++ that's also adaptive. This is a personal exercise, and I don't have any specific application in mind. My main goal is to write something that's succinct and easy to understand but also has reasonably good performance. I'm aware that implementations already exist: TimSort (C++ implementation), for example. And new enough versions of GCC's C++ library seem to implement std::stable_sort using an adaptive algorithm as well. I'm not looking to replace either of these, or beat them in performance (though I would be happy if I came close). So here is what I have. I'd be particularly interested to know of any bugs/special cases I've missed, or opportunities to improve the performance without increasing the complexity/code size too much. I've also tried to make good use of C++11 features (other than, of course, std::stable_sort itself), and if there are improvements that could be made on that front, I'd like to know as well. #include <algorithm> #include <iterator> #include <vector> /* * This algorithm borrows some ideas from TimSort but is not quite as * sophisticated. Runs are detected, but only in the forward direction, and the * invariant is stricter: each stored run must be no more than half the length * of the previous. * * As in TimSort, an already-sorted array will be processed in linear time, * making this an "adaptive" algorithm. */ template<typename Iter, typename Less> class MergeSort { private: typedef typename std::iterator_traits<Iter>::value_type Value; typedef typename std::vector<Value>::size_type Size; /* Inserts a single element into a sorted list */ static void insert_head (Iter head, Iter tail, Less less) { Iter dest; for (dest = head + 1; dest + 1 < tail; dest ++) { if (! 
less (* (dest + 1), * head)) break; } Value tmp = std::move (* head); std::move (head + 1, dest + 1, head); * dest = std::move (tmp); } /* Merges two sorted sub-lists */ static void do_merge (Iter head, Iter mid, Iter tail, Less less, std::vector<Value> & buf) { /* copy list "a" to temporary storage */ if (buf.size () < (Size) (mid - head)) buf = std::vector<Value> (std::make_move_iterator (head), std::make_move_iterator (mid)); else std::move (head, mid, buf.begin ()); auto a = buf.begin (); auto a_end = a + (mid - head); Iter b = mid; Iter dest = head; while (1) { if (! less (* b, * a)) { * (dest ++) = std::move (* a); if ((++ a) == a_end) break; } else { * (dest ++) = std::move (* b); if ((++ b) == tail) break; } } /* copy remainder of list "a" */ std::move (a, a_end, dest); } public: /* Top-level merge sort algorithm */ static void sort (Iter start, Iter end, Less less) { /* A list with 0 or 1 element is sorted by definition. */ if (end - start < 2) return; std::vector<Value> buf; /* The algorithm runs right-to-left (so that insertions are left-to-right). */ Iter head = end; /* Markers recording the divisions between sorted sub-lists or "runs". * Each run is at least 2x the length of its left-hand neighbor, so in * theory a list of 2^64 - 1 elements will have no more than 64 runs. */ Iter div[64]; int n_div = 0; do { Iter mid = head; head --; /* Scan right-to-left to find a run of increasing values. * If necessary, use insertion sort to create a run at 4 values long. * At this scale, insertion sort is faster due to lower overhead. */ while (head > start) { if (less (* head, * (head - 1))) { if (mid - head < 4) insert_head (head - 1, mid, less); else break; } head --; } /* Merge/collapse sub-lists left-to-right to maintain the invariant. */ while (n_div >= 1) { Iter tail = div[n_div - 1]; while (n_div >= 2) { Iter tail2 = div[n_div - 2]; /* * Check for the occasional case where the new sub-list is * longer than both the two previous. 
In this case, a "3-way" * merge is performed as follows: * * |---------- #6 ----------|- #5 -|---- #4 ----| ... * * First, the two previous sub-lists (#5 and #4) are merged. * (This is more balanced and therefore more efficient than * merging the long #6 with the short #5.) * * |---------- #5 ----------|-------- #4 -------| ... * * The invariant guarantees that the newly merged sub-list (#4) * will be shorter than its right-hand neighbor (#3). * * At this point we loop, and one of two things can happen: * * 1) If sub-list #5 is no longer than #3, we drop out of the * loop. #5 is still longer than half of #4, so a 2-way * merge will be required to restore the invariant. * * 2) If #5 is longer than even #3 (rare), we perform another * 3-way merge, starting with #4 and #3. The same result * holds true: the newly merged #3 will again be shorter * than its right-hand neighbour (#2). In this fashion the * process can be continued down the line with no more than * two sub-lists violating the invariant at any given time. * Eventually no more 3-way merges can be performed, and the * invariant is restored by a final 2-way merge. */ if ((mid - head) <= (tail2 - tail)) break; do_merge (mid, tail, tail2, less, buf); tail = tail2; n_div --; } /* * Otherwise, check whether the new sub-list is longer than half its * right-hand neighbour. If so, merge the two sub-lists. The * merged sub-list may in turn be longer than its own right-hand * neighbor, and if so the entire process is repeated. * * Once the "head" pointer reaches the beginning of the original * list, we simply keep merging until only one sub-list remains. 
*/ if (head > start && (mid - head) <= (tail - mid) / 2) break; do_merge (head, mid, tail, less, buf); mid = tail; n_div --; } /* push the new sub-list onto the stack */ div[n_div] = mid; n_div ++; } while (head > start); } }; template<typename Iter, typename Less> void mergesort (Iter start, Iter end, Less less) { MergeSort<Iter, Less>::sort (start, end, less); } template<typename Iter> void mergesort (Iter const start, Iter const end) { typedef typename std::iterator_traits<Iter>::value_type Value; mergesort (start, end, std::less<Value> ()); } GitHub repository: https://github.com/jlindgren90/mergesort Answer: Design A class with all static members! Seems like the wrong use case. We have namespaces for that type of thing. Code Review I like the comment at the top: /* * This algorithm borrows some ideas from TimSort but is not quite as * sophisticated. Runs are detected, but only in the forward direction, and the * invariant is stricter: each stored run must be no more than half the length * of the previous. * * As in TimSort, an already-sorted array will be processed in linear time, * making this an "adaptive" algorithm. */ Though I am unfamiliar with TimSort, it is easy to google. So a very nice comment all in all. I hate this comment though: /* Inserts a single element into a sorted list */ static void insert_head (Iter head, Iter tail, Less less) Especially when it does not seem to match the function. What element is being inserted here? After reading the code it seems like the element at the head of the range is sorted into place, as the elements [head + 1, tail) are already sorted. A better name or a better comment on what it does is needed. Preferably just a better function name. Not all iterators support the + operation or the < operation. This is why we have std::next or operator++, and iterators are usually tested with != or ==. Also, it looks like you are just doing a std::find_if(), so use the algorithm. for (dest = head + 1; dest + 1 < tail; dest ++) { if (!
less (* (dest + 1), * head)) break; } // I would rewrite as: auto loop = std::next(head); auto found = std::find_if(loop, tail, [&](Value const& v){ return !less(v, *head); }); This bit of code: Value tmp = std::move (* head); std::move (head + 1, dest + 1, head); * dest = std::move (tmp); is implemented by std::rotate(). Again I hate the comment. Not because it or the function are badly named. But because the comment does not give me any extra information. If it is not giving me information, it is actually worse than nothing as it will suffer from comment rot over time. The name of the function and its parameters should be your documentation. /* Merges two sorted sub-lists */ static void do_merge (Iter head, Iter mid, Iter tail, Less less, std::vector<Value> & buf) Using the operator - on iterators is not always supported. You should use std::distance(). Also, using a C-cast is not tolerated in any civilized world. Take your heathen ways and reform, sinner! C++ has its own set of cast operators that do this much better. In this case static_cast<>(). But if you use std::distance() you don't need it. Very clever. So clever I had to go through it a couple of times to convince myself it worked. This is where you may want to comment on being clever. /* copy list "a" to temporary storage */ if (buf.size () < (Size) (mid - head)) buf = std::vector<Value> (std::make_move_iterator (head), std::make_move_iterator (mid)); else std::move (head, mid, buf.begin ()); But a vector contains two sizes: size() and capacity(). There is no need to allocate a new vector just because the size has been exceeded: you can go until you reach capacity. But even then, why are you doing it manually? The vector is designed to do this stuff all internally in the most efficient way. You should just copy using move iterators and a back inserter. Let the vector sort out its own resizing (this will usually be more efficient).
buf.clear(); std::copy(std::make_move_iterator(head), std::make_move_iterator(mid), std::back_inserter(buf)); Using move iterators and correctly sizing the buffer will make the following code cleaner. auto a = buf.begin (); auto a_end = a + (mid - head); Iter b = mid; Iter dest = head; Sure, but this can be made much more readable: while (1) { if (! less (* b, * a)) { * (dest ++) = std::move (* a); if ((++ a) == a_end) break; } else { * (dest ++) = std::move (* b); if ((++ b) == tail) break; } } // I would write it like this: while(a != a_end && b != tail) { *dest++ = (! less (* b, * a)) ? std::move(*a++) : std::move(*b++); } OK. That's enough for one session. Seems like plenty that needs to be re-worked already.
{ "domain": "codereview.stackexchange", "id": 33626, "tags": "c++, c++11, sorting, mergesort" }
Can a second process execute when the Operating System is performing a context switch?
Question: A process was running on a uniprocessor system and is being context switched. While the scheduler is performing this context switch, can another process be allocated the CPU? Answer: (1) First, let's define our terms. From Wikipedia: In computing, a context switch is the process of storing and restoring the state (more specifically, the execution context) of a process or thread so that execution can be resumed from the same point at a later time. In other words, a context switch is the act of transferring control from one process to another. To recall the current process, the CPU has to take a snapshot of its state before allocating resources to the new process. (2) Now let's answer exactly what gets saved. Also from Wikipedia: registers and memory maps, updating various tables and lists, the PC, stack pointer, etc. So, pretty much everything in the CPU needs to be stored somewhere else before loading in a new process' state. Now let's consider your question by reasoning about two arbitrary processes, PID 1 and PID 2. PID 1 is preparing to hand the CPU to PID 2 and begins storing its state. Thus, all critical CPU elements are in use. One by one, as these elements are stored, they are freed (often the timing is more complex) and the CPU can allocate those resources to PID 2. Squeezing in an extra process would just cause overhead in the primary context switch between PID 1 and PID 2. You might even consider it a context switch inside of a context switch for any nontrivial process. Sure, if the ALU is open, it could probably do a quick calculation or two, but what good would those results do when not part of a process, which is far more than just a few calculations? Theoretically possible? Sure. Reasonable? No. Instead of trying to use that time "lost" during a context switch, innovations focus on classifying different degrees of context switches.
Basically, if you have a lightweight switch between processes, or perhaps threads, the entire processor is not held up while doing so. For more reading, I suggest https://en.wikipedia.org/wiki/Light-weight_process to see the difference in process types. Clarification: The scheduler is a kernel-space process.
{ "domain": "cs.stackexchange", "id": 5372, "tags": "operating-systems" }
How does a photon drive out the electrons in a solar cell?
Question: We know that solar cells work when a photon hits the n-type material: the photon's energy frees electrons in the n-type to generate a current. But we also know that when a photon hits an atom, it excites the electrons. So why doesn't the photon just excite the electron rather than drive it out? Or is it that in solar cells the photon gives the electron so much energy that, when it goes to a higher energy state and changes shell, it escapes the atom's shells and becomes a free charge carrier? Thanks, Bhavesh Answer: I think a simple view is this: The solar cell must have a PN junction, which is a junction between p-type (many holes, no electrons) and n-type (many electrons, no holes) materials. Right where they meet there is actually a "depletion width" within which there is hardly any of either. Within this region, as photons come in they generate electron-hole pairs, which really just means that an electron has been excited from the valence to conduction band, leaving a hole behind. The electron is then pushed back to the n-side by the "built-in" electric field, while the hole is pushed to the p-side. Think of this in terms of energy: both end up where their energy is lower, so electrons prefer the n-type material, while holes prefer the p-type. Some understanding of semiconductor doping and Fermi statistics helps greatly to understand this.
{ "domain": "physics.stackexchange", "id": 21296, "tags": "quantum-mechanics, energy, photons, photoelectric-effect, solar-cells" }
Why do puzzles like Masyu lie in NP?
Question: The puzzle is made up of (n x n) squares, so taking the puzzle as the problem instance, the input size would be n. Rules of Masyu: The goal is to draw a single continuous non-intersecting loop that properly passes through all circled cells. The loop must "enter" each cell it passes through from the center of one of its four sides and "exit" from a different side; all turns are therefore 90 degrees. White circles must be travelled straight through, but the loop must turn in the previous and/or next cell in its path. Black circles must be turned upon, but the loop must travel straight through the next and previous cells in its path. Answer: Because a problem is in NP if, and only if, "yes" instances have succinct certificates. Informally, this means that the solution to any instance is at most polynomially large in the size of the instance, and if you're given an instance and something that is claimed to be a solution to it, you can check in polynomial time that it really is a solution. More formally, a language $L\subseteq\Sigma^*$ is in NP if, and only if, there is a relation $R\subseteq \Sigma^*\times\Sigma^*$ such that: there is a polynomial $p$ such that, for all $(x,y)\in R$, $|y|\leq p(|x|)$; the problem of determining whether $(x,y)\in R$ is in P; $L = \{x\mid \exists y\,(x,y)\in R\}$. For example, consider graph 3-colourability. You can describe a graph on $n$ vertices just by writing out its adjacency matrix, which requires $n^2$ bits. If a graph is 3-colourable, you can describe a colouring in $2n$ bits: just list the colour of each vertex in turn (say, $00$ for red, $01$ for green, $10$ for blue). So, if a string $x$ describes a graph, and that graph is 3-colourable, each 3-colouring can be described in a string $y$ with length $2\sqrt{|x|}\leq 2|x|$.
Furthermore, if I give you a description of a graph and a description of a 3-colouring, you can easily check in polynomial time that it really is a 3-colouring: just check that adjacent vertices always have different colours. So, the relation $$\{(x,y)\mid x\text{ describes a graph $G$ and $y$ describes a 3-colouring of }G\}$$ proves that 3-colourability is in NP. So, to show that Masyu is in NP, we just need to construct the corresponding relation $R$. We can describe an $n\times n$ instance of Masyu in $2n^2$ bits: for each square in turn, write $00$ if it's blank, $01$ if it contains a black circle and $10$ if it's a white circle. We can describe a solution in $3n^2$ bits: for each square in turn, write $000$ if the square isn't on the solution path, and $100$, $101$, $110$ and $111$ if it's on the path and the next square is the one to the right, left, up and down, respectively. It's easy to check in polynomial time that a claimed solution really is a solution: just find a square that's on the path, follow the path and check it satisfies the criteria.
{ "domain": "cs.stackexchange", "id": 5646, "tags": "complexity-theory, np-complete, proof-techniques, np" }
Using Force instead of torque to calculate angular acceleration
Question: Suppose we have a solid cone undergoing small oscillations about its apex. We can find its equation of motion in $\theta$ using the moment of inertia, via $I\frac{d\omega}{dt} = -mgL\sin\theta$, where $L$ is the distance from the apex to the centre of mass and gravity, acting at the centre of mass, provides the net torque about the apex. But why can we not equate the tangential component of gravity with the acceleration of the centre of mass instead, as is done for a pendulum bob? I'm guessing it's to do with the fact that the tangential forces on the bob are known to be purely due to gravity because tension is radial, however this isn't the case with a cone? Answer: Because the hinge at the apex produces a force to keep the apex at rest, and that force has both horizontal and vertical components.
{ "domain": "physics.stackexchange", "id": 48481, "tags": "classical-mechanics, rotational-dynamics" }
JPL Horizons sending too many emails
Question: I submitted a job that, I think, should have returned 100 emails from JPL Horizons. I now have over 6000 emails and they are coming in at about four a minute. Anyone else had this problem? Anyone know how to stop the deluge? I have emailed JPL Horizons' default email address. Details Initial PID = 3073788 Thu May 4 09:40:31 2023 UTC Request started: COMMAND = '1;' '2;' '3;' '4; at 09:38 Latest email to arrive starts: Automated mail xmit by MAIL_REQUEST, PID= 272715 Thu May 4 04:16:09 2023 ++++++++++++++++++++++++++++++++ (part 1 of 1) +++++++++++++++++++++++++++++++ ******************************************************************************* JPL/HORIZONS 10 Hygiea (A849 GA) 2023-May-04 04:16:08 Initial request was for the first 100 asteroids ('1;' '2;' etc.) with these parameters: OBJ_DATA = 'YES' MAKE_EPHEM = 'YES' EPHEM_TYPE = 'ELEMENTS' CENTER = '500@0' REF_PLANE = 'ECLIPTIC' START_TIME = '2351-JUL-1 00:00' STOP_TIME = '2352-JUL-1 00:00' STEP_SIZE = '1 years' CSV_FORMAT = 'NO' ELM_LABELS = 'YES' TP_TYPE = 'ABSOLUTE' !$$EOF Answer: It turns out the "more robust" mail queue manager implemented April 21 has a condition where it can rediscover itself. That condition occurred in the middle of the night local time with your query. The good news is it was fixed yesterday. The bad news is 49K emails were already sent before someone else entered the mail queue and it let go and moved on normally. The "four per minute" is just your local system gradually leaking yesterday's pathology in. Recommend purging the accumulation from your delivery queue if you have that level of control. If not, redirect duplicate emails from Horizons to trash until it clears. If it is any consolation, about 150K messages were received here from your system explaining it can't deliver X for the last four hours, but it will keep trying for the next five days, each with a copy of the as-yet undelivered email.
{ "domain": "astronomy.stackexchange", "id": 6924, "tags": "ephemeris" }
If a non-deterministic Turing machine runs in f(n) space, then why does it run in 2^O(f(n)) time?
Question: Assume that f(n) >= n. If possible, I'd like a proof in terms of Turing machines. I understand the reason for machines that run on binary, because each "tape cell" is a bit holding either 0 or 1, but a Turing machine's tape cell could hold any of several symbols. I'm having trouble seeing why the base is '2' and not something like 'b' where 'b' is the number of types of symbols of the Turing machine tape. Answer: why the base is '2' and not something like 'b' where 'b' is the number of types of symbols of the Turing machine tape? Because it does not matter, in the sense that $2^{O(f(n))} = b^{O(f(n))}$ for any $b > 1$. Using the popular set-theoretic understanding of the big $O$-notation, we have $$\begin{align} 2^{O(f(n))}&=\{2^{g(n)}\mid g(n)\in O(f(n))\} \\ &=\{\left(b^{\log_b2}\right)^{g(n)}\mid g(n)\in O(f(n))\} \\ &=\{b^{h(n)}\mid \exists g(n),\ h(n)= (\log_b2)g(n),\ g(n)\in O(f(n))\} \\ &=\{b^{h(n)}\mid h(n)\in O(f(n))\} \\ &=b^{O(f(n))}\end{align}$$ By the way, there is no need to assume $f(n)\ge n$. For example, all above hold when $f(n) = \lceil\sqrt n\rceil$ or $f(n) = \lceil\log_2n\rceil$.
{ "domain": "cs.stackexchange", "id": 12941, "tags": "complexity-theory, turing-machines" }
Gazebo /clock publish rate
Question: Gazebo publishes /clock at 10Hz. Can this be increased and how? Thank you! Originally posted by fiji3119 on Gazebo Answers with karma: 3 on 2020-06-09 Post score: 0 Answer: /clock publishes a message once per every new state, i.e. at 1/max_step_size Hz. You should rather focus on why your simulation runs so slow than the clock publish rate (did you perhaps tweak max_step_size?). Nonetheless, if you still want to change the frequency, you can do that with the /gazebo/pub_clock_frequency parameter on the parameter server. Originally posted by nlamprian with karma: 833 on 2020-06-09 This answer was ACCEPTED on the original site Post score: 1 Original comments Comment by fiji3119 on 2020-06-09: I am using default value i.e max_step_size = 0.001 but when I run ros2 topic hz /clock, I get avg 10Hz. Shouldn't I expect 1000hz? Comment by nlamprian on 2020-06-09: So, things are different in ROS2. It seems that /clock indeed publishes at 10 Hz. You can find the answer to your question here. Comment by fiji3119 on 2020-06-10: Exactly what I was looking for. Thank you!
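For ROS 1 users, the parameter mentioned in the answer could be set in a launch file along these lines. Only the `/gazebo/pub_clock_frequency` parameter name comes from the answer; the surrounding launch structure and the included world file are assumed, typical gazebo_ros boilerplate that may differ on your system:

```xml
<!-- Hypothetical launch snippet: raise the /clock publish rate to 100 Hz.
     Only the parameter name is taken from the answer above; the include
     path is a common gazebo_ros example. -->
<launch>
  <param name="/gazebo/pub_clock_frequency" value="100" />
  <include file="$(find gazebo_ros)/launch/empty_world.launch" />
</launch>
```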
{ "domain": "robotics.stackexchange", "id": 4521, "tags": "ros, gazebo-9" }
How does flavor change in the weak interaction work with $Z$ and $W^{\pm,0}$?
Question: How does flavor change in the weak interaction work with $Z$ and $W^{\pm,0}$? I'm completely confused about how the weak interaction could just "approximately" conserve flavor while $u,d,s,c$ seemingly change at random. For example, (https://en.wikipedia.org/wiki/Weak_interaction#Charged-current_interaction) shows that $W^-$ mediates $d\rightarrow u$ and $c\rightarrow s$, where its decay contained $e^- +\overline{\nu}_e$... Is there a default makeup (composition) of $W,Z$ at all? How could $W,Z$ just flip flavors without constraints? Is there any rule at all? Answer: The rule is called quantum mechanics: it depends on the postulates and the mathematical formulation with wave functions leading to probabilities of interaction. If there is a non-zero probability for a specific interaction, it will happen, in order to fulfill the probability spectrum. Probabilities are controlled not only by the form of the wavefunctions in the boundary conditions of the interaction, but also by conservation laws: energy, momentum, angular momentum, and a plethora of conserved quantum numbers such as charge, baryon number, lepton number ... If these are conserved during the interaction, it will have a probability of happening. In the specific case, the weak interaction does not conserve flavor; charge and lepton number are conserved, so if it is energetically possible, the quarks can change flavor through the weak interaction. The W and Z are elementary point particles in the standard model of particle physics. They are gauge bosons and carry no flavor quantum numbers. In the link you gave, note the example diagram. Weak interactions do not conserve flavor, so there is flavor change to an energetically allowed one (allowed by the energy in the interaction). Baryon number, lepton number and charge are conserved in the weak interaction. The interaction shown has a probability of happening; that is why the neutron is not stable.
{ "domain": "physics.stackexchange", "id": 61303, "tags": "homework-and-exercises, particle-physics, definition" }
Returning a viewmodel
Question: In my ASP.NET MVC code, I like to use controller service classes for my controllers. These service classes contain methods for retrieving viewmodel objects. Here is an example controller snippet: public SubscriptionsController(ISubscriptionsControllerService service) { _service = service; } public ActionResult Index(Guid id) { return View("Subscriptions", _service.GetSubscriptionsViewModelOnGet(id)); } [HttpPost] public ActionResult Index(SubscriptionsViewModel viewModel) { _service.SaveSubscriptions(viewModel); return View("Subscriptions", _service.GetSubscriptionsViewModelOnPost(viewModel)); } As you can see, I have a method for retrieving the subscriptions viewmodel on a GET request, as well as the equivalent for a POST request. The POST method takes in an existing viewmodel object and updates any relevant data e.g. a list of subscription items, that need to be refreshed before passing back to the view. My question is whether the naming of the methods (GetSubscriptionsViewModelOnGet() and GetSubscriptionsViewModelOnPost()) makes sense. They seem OK to me, but I'm interested in other people's views. Answer: Why not name them both the same? The difference is the type of parameter you're passing. This leaves you free to do some method overloading: public SubscriptionsViewModel GetSubscriptionsViewModel(Guid id) { //GET Logic here... } public SubscriptionsViewModel GetSubscriptionsViewModel(SubscriptionsViewModel viewModel) { //POST Logic here... } Why call them the same? They both do the same: return a SubscriptionsViewModel.
{ "domain": "codereview.stackexchange", "id": 4119, "tags": "c#" }
Are the stars outside of the galactic plane in the galactic halo?
Question: The majority of the stars we see in the sky, like Pollux, are outside of the galactic plane. That means that all those stars we see are not in the galactic disk, and therefore are in the galactic halo. Where, then, is a star like Capella? Answer: The galactic disk, as Riley Jacob wrote, has a definite thickness. It's actually composed of a thin disk $\sim0.3\text{ kpc}$ thick and a thick disk $\sim1\text{ kpc}$ thick, at least (McMillan (2011) has models with data from the Sloan Digital Sky Survey). There's also a central bulge that is even thicker, as the following diagram (from here) shows: [diagram not reproduced] Pollux is $\sim10\text{ pc}$ away from the Solar System, which is about 1% of the thickness of the thick disk (Capella is barely farther). Essentially, it's in the same plane as the Solar System; that's an insignificant distance. There are stars in the thin disk, thick disk, and halo, which compose different populations based on metallicity. In the thin disk are Population I stars, which are high in metals. In the halo (and thin disk, to some extent) are Population II stars, which are lower in metals and on average older than Population I stars. The Sun is a Population I star in the disk. There may also be a sub-population of Population II stars in the thick disk. Halo stars certainly do not compose the majority of stars in the Milky Way galaxy. Most stars are contained within the thin disk, thick disk, and bulge.
{ "domain": "astronomy.stackexchange", "id": 1959, "tags": "star, galaxy, milky-way, stellar-dynamics" }
What lies at the very edge of the expanding universe?
Question: We all know that the universe is expanding at an accelerated rate, and it might appear much like a soap bubble. That is where the phrase "dark energy" comes from: its essence is unknown, and it is thought to have caused this acceleration. But that is not what this question is really about. If we could stand at the very edge of the expanding universe... What would we see just outside of it? Pure blackness or other expanding bubbles of multiverses? How about at the very edge? Would there be a membrane of some kind? What about at the inside of the edge? Of course this last question is easy to tackle because we would just see our own universe. Answer: What would we see just outside of it? Pure blackness or other expanding bubbles of multiverses? Outside our particle horizon we assume that everything is more or less the same as where we are, at least if the assumption of homogeneity and isotropy holds. If we live in a multiverse there might also be other laws of nature beyond our horizon, but there is no way to really test this. Anyway, more speculations on this can be found here and here. How about at the very edge? The universe has no edges. If it is finitely curved you always get back to where you started from when you move in a straight line (except if superluminal expansion confines you to a horizon smaller than the circumference of your dimension).
{ "domain": "physics.stackexchange", "id": 19874, "tags": "cosmology, universe, dark-energy, multiverse, observable-universe" }
What are the different notions of one-way functions?
Question: For instance, is a function that is computable but not invertible in log space a one-way function? What are the known definitions of one-way functions (especially the ones that do not invoke polynomials)? Answer: Usually one-way functions are used for crypto, and so you want no efficient adversary to be able to invert the function. Identifying efficient adversaries with randomized polynomial-time machines, you get the typical notion of security, which talks of randomized poly-time machines. But of course you can think of different security notions. For super-fast crypto, you may want the one-way function to be computable in restricted models. Here, a great result by Applebaum, Ishai, and Kushilevitz shows that a poly-time computable OWF implies an OWF where each output bit depends on just O(1) input bits (which is arguably one of the simplest computational models you can think of).
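For reference, the polynomial-time-based definition the answer alludes to can be written out as follows (the usual textbook formulation, given here as background rather than as a quote from the answer):

```latex
% Standard (strong) one-way function: easy to compute, hard to invert
% on average against polynomial-time adversaries.
f \colon \{0,1\}^* \to \{0,1\}^* \text{ is one-way if:}
\begin{itemize}
\item \textbf{Easy to compute:} there is a deterministic polynomial-time
      algorithm that computes $f(x)$ for every $x$;
\item \textbf{Hard to invert:} for every probabilistic polynomial-time
      adversary $A$ and every polynomial $p$, for all sufficiently large $n$,
      \[
        \Pr_{x \leftarrow \{0,1\}^n}\!\left[ f\!\left(A(1^n, f(x))\right) = f(x) \right]
        \;<\; \frac{1}{p(n)}.
      \]
\end{itemize}
```

Definitions that "do not invoke polynomials" replace the adversary class here with another resource bound, e.g. log-space inverters or the constant-locality model the answer mentions.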
{ "domain": "cstheory.stackexchange", "id": 47, "tags": "cc.complexity-theory, space-bounded, one-way-function" }
Maximum Heap Container
Question: I implemented this max-heap to satisfy the C++ Container requirements. I am fairly confident I implemented bubble_up, bubble_down, and sort correctly. It passes for small sets of numbers I give it. If you're wondering what bubble_down(iterator, iterator) does, it does the bubble down action but ignores positions past last (inclusive). In sort, the vector becomes sorted from right to left, so as the root bubbles down it will not interact with the already sorted part. Also note that after sorting you will want to call build to rebuild the heap. #include <iostream> #include <string> #include <vector> #include <iterator> #include <utility> #include <sstream> #include <cmath> template<typename Type> class max_heap { public: using value_type = Type; using reference = Type&; using const_reference = const Type&; using iterator = typename std::vector<Type>::iterator; using const_iterator = typename std::vector<Type>::const_iterator; using difference_type = typename std::vector<Type>::difference_type; using size_type = typename std::vector<Type>::size_type; max_heap() = default; ~max_heap() = default; max_heap(iterator begin, iterator end); max_heap(const max_heap<Type>& other); max_heap(max_heap<Type>&& other); iterator begin(); const_iterator begin() const; const_iterator cbegin() const; iterator end(); const_iterator end() const; const_iterator cend() const; reference operator=(max_heap<Type> other); reference operator=(max_heap<Type>&& other); bool operator==(const max_heap<Type>& other); bool operator!=(const max_heap<Type>& other); void swap(max_heap& other); template<typename T> friend void swap(max_heap<T>& lhs, max_heap<T>& rhs); size_type size() const; size_type max_size() const; bool empty() const; void sort(); void insert(value_type value); void remove(iterator elem); void remove_maximum(); reference maximum(); const_reference maximum() const; // build tree in linear time void build(); private: iterator parent_of(iterator child); iterator 
left_child_of(iterator parent); iterator right_child_of(iterator parent); void bubble_up(iterator elem); void bubble_down(iterator elem); void bubble_down(iterator elem, iterator last); std::vector<int> rep; }; template<typename Type> max_heap<Type>::max_heap(iterator begin, iterator end) : rep(begin, end) { build(); } template<typename Type> max_heap<Type>::max_heap(const max_heap<Type>& other) : rep(other.rep) { } template<typename Type> max_heap<Type>::max_heap(max_heap<Type>&& other) { std::swap(rep, other.rep); } template<typename Type> typename max_heap<Type>::iterator max_heap<Type>::begin() { return rep.begin(); } template<typename Type> typename max_heap<Type>::const_iterator max_heap<Type>::begin() const { return rep.begin(); } template<typename Type> typename max_heap<Type>::const_iterator max_heap<Type>::cbegin() const { return rep.begin(); } template<typename Type> typename max_heap<Type>::iterator max_heap<Type>::end() { return rep.end(); } template<typename Type> typename max_heap<Type>::const_iterator max_heap<Type>::end() const { return rep.end(); } template<typename Type> typename max_heap<Type>::const_iterator max_heap<Type>::cend() const { return rep.end(); } template<typename Type> typename max_heap<Type>::reference max_heap<Type>::operator=(max_heap<Type> other) { // copy-swap std::swap(rep, other.rep); return *this; } template<typename Type> typename max_heap<Type>::reference max_heap<Type>::operator=(max_heap<Type>&& other) { // copy-swap std::swap(rep, other.rep); return *this; } template<typename Type> bool max_heap<Type>::operator==(const max_heap<Type>& other) { return std::equal(begin(), end(), other.begin(), other.end()); } template<typename Type> bool max_heap<Type>::operator!=(const max_heap<Type>& other) { return !operator==(other); } template<typename Type> void max_heap<Type>::swap(max_heap<Type>& other) { std::swap(rep, other.rep); } template<typename Type> void swap(max_heap<Type>& lhs, max_heap<Type> rhs) { lhs.swap(rhs); } 
template<typename Type> typename max_heap<Type>::size_type max_heap<Type>::size() const { return rep.size(); } template<typename Type> typename max_heap<Type>::size_type max_heap<Type>::max_size() const { return rep.max_size(); } template<typename Type> bool max_heap<Type>::empty() const { return rep.empty(); } template<typename Type> void max_heap<Type>::build() { // skip leaf nodes const auto n = begin() + std::ceil(size() / 2); for (auto i = n; i >= begin(); --i) bubble_down(i); } template<typename Type> void max_heap<Type>::sort() { auto iter = end() - 1; while (iter >= begin()) { std::swap(*begin(), *iter); // bubble root down but ignore elements past iter bubble_down(begin(), iter); --iter; } } template<typename Type> typename max_heap<Type>::reference max_heap<Type>::maximum() { return *begin(); } template<typename Type> typename max_heap<Type>::const_reference max_heap<Type>::maximum() const { return *begin(); } template<typename Type> void max_heap<Type>::remove(iterator elem) { std::swap(*elem, *(end() - 1)); rep.resize(size() - 1); if (size() > 0) bubble_down(begin()); } template<typename Type> void max_heap<Type>::remove_maximum() { remove(begin()); } template<typename Type> typename max_heap<Type>::iterator max_heap<Type>::parent_of(iterator child) { // parent = floor((i - 1) / 2) const auto idx = std::distance(begin(), child); return begin() + (idx - 1) / 2; } template<typename Type> typename max_heap<Type>::iterator max_heap<Type>::left_child_of(iterator parent) { // left_child = 2i + 1 const auto idx = std::distance(begin(), parent); return begin() + (2 * idx) + 1; } template<typename Type> typename max_heap<Type>::iterator max_heap<Type>::right_child_of(iterator parent) { // right_child = 2i + 2 const auto idx = std::distance(begin(), parent); return begin() + (2 * idx) + 2; } template<typename Type> void max_heap<Type>::bubble_up(iterator elem) { auto child = elem; auto parent = parent_of(child); // bubble up while (child != parent && *child > 
*parent) { std::swap(*child, *parent); child = parent; parent = parent_of(parent); } } template<typename Type> void max_heap<Type>::bubble_down(iterator elem) { bubble_down(elem, end()); } template<typename Type> void max_heap<Type>::bubble_down(iterator elem, iterator last) { auto parent = elem; auto left_child = left_child_of(parent); auto right_child = right_child_of(parent); // stop at last while (left_child < last || right_child < last) { // find maximum of parent, left_child, right_child auto max = parent; if (left_child < last) if (*max < *left_child) max = left_child; if (right_child < last) if (*max < *right_child) max = right_child; // max_heap property fixed if (parent == max) break; // swap with the greater child std::swap(*parent, *max); parent = max; left_child = left_child_of(parent); right_child = right_child_of(parent); } } template<typename Type> void max_heap<Type>::insert(value_type value) { rep.push_back(value); bubble_up(end() - 1); } template<typename Type> std::ostream& operator<<(std::ostream& out, const max_heap<Type>& max_heap) { // output contents of max_heap if (max_heap.size() > 0) { std::cout << *max_heap.begin(); for (auto i = max_heap.begin() + 1; i < max_heap.end(); ++i) std::cout << ' ' << *i; } return out; } int main() { std::string line; // get set of integers from stdin do { std::cout << "insert set of integers: "; std::getline(std::cin, line); } while (line == ""); // parse stdin input into vector<int> std::stringstream stream(line); std::vector<int> arr; while (std::getline(stream, line, ' ')) { try { const int n = std::stoi(line); arr.push_back(n); } catch (...) { // ignore for now continue; } } // linear time to build max_heap max_heap<int> h(arr.begin(), arr.end()); std::cout << "h before: " << h << '\n'; h.sort(); std::cout << "h sorted: " << h << '\n'; h.build(); h.insert(22); std::cout << "h inserted: " << h << '\n'; } EDIT: I also realized I can std::move the value into the vector in insert. 
EDIT2: I realized that inside of build it should be const auto n = begin() + (size() / 2) - 1;. What is above is not strictly wrong but it may do one more bubble_down than is necessary. Answer: You have a container that can store anything template<typename Type> class max_heap but your underlying storage is this std::vector<int> rep; I'm somewhat shocked that you didn't notice this. Did you only test it on integers? You define everything out-of-line, but this makes the overall implementation quite a bit larger than it needs to be, while also making the code a little less pleasant to read. If you want to add or remove a function, you have to do it twice. In the class, you have a forward declaration iterator begin(); ...then you have the implementation later. template<typename Type> typename max_heap<Type>::iterator max_heap<Type>::begin() { return rep.begin(); } You could just do this iterator begin() { return rep.begin(); } In the sort function, you have this std::swap(*begin(), *iter); When swapping two objects whose type you don't know, you should allow ADL to find custom swap functions. This pattern is very common in generic code. The standard library does this as well. using std::swap; swap(*begin(), *iter); If you have a class with its own swap function like this namespace my { class swappable_thing {}; void swap(swappable_thing &, swappable_thing &) { std::cout << "Custom swap\n"; } } The following code does not call the custom swap function my::swappable_thing a, b; std::swap(a, b); but this does my::swappable_thing a, b; using std::swap; swap(a, b); C++ has algorithms devoted to dealing with heaps. You're not using them. If there is a standard algorithm that does what you need it to do, use it. Your code becomes much simpler when you use standard algorithms.
{ "domain": "codereview.stackexchange", "id": 32613, "tags": "c++, reinventing-the-wheel, heap, heap-sort" }
Discrete spacetime: what does it mean for spacetime fields and vacuum?
Question: Suppose we imagine that the space-time continuum is really made of some discrete stuff. This something could be some kind of particles or "substance"/matter/energy field. Then, I see two conceptual problems if we keep the broad picture of Nature made of quantum fields on a space-time substratum: What is "between" two atom of space-time or atoms of space (and time) if different? Vacuum? Can distances or neighbourhood be defined if no space and no time and no field is defined? Vacuum is generally believed to be the main lowest state of field theory, on space-time. Supposing there is no space-time, can we even define what vacuum is? Does vacuum exist without reference to a particular background independent theory? In summary, are field theory (including classical gravity) and the notion of vacuum in trouble if we assume that the spacetime continuum is a discrete set of "something"? What is vacuum in a discrete theory of spacetime? Answer: You should really look into Loop Quantum Gravity for a quantitative example. While unconfirmed and highly speculative, it does offer a toy example for a background independent quantum field theory, that is, a theory that describes the quantization of space-time rather than living on classical space-time. I'll try to answer your questions 1 and 2 from the point of view of Loop Quantum Gravity. But first, I want to clear up one very important conceptual point that is almost always misunderstood. According to LQG, space-time isn't exactly discrete, nor is it continuous. Instead it is quantum. Quantum objects have been known to consistently combine continuous and discrete properties; think e.g. of wave-particle duality. The same thing is going on with space-time. Let's consider a very simple thought experiment – imagine a quantum theory of space-time that has a minimal length $l_P$. 
Moreover, all lengths in the theory can only be integer multiples of $l_P$ (this is just a toy example, the formulas from LQG are similar in spirit but more complicated): $$ l = n l_p, \; n \in \mathbb{Z}, \; n \ge 0. $$ Naively, this violates Lorentz invariance. For example, if we boost this length by going to a moving reference frame, we expect it to Lorentz-contract according to $$ l' = \sqrt{1 - v^2} \cdot l, $$ where the value of the square root is continuous hence it can't be consistent with the discreteness of length... That, however, is completely wrong. Allow me an analogy to demonstrate the flaw in the argument above. Consider a spinning particle with angular momentum $J = j \hbar$. It is well known that the $z$-component of the angular momentum can only take discrete values ranging from $-j$ to $j$. Does this mean that rotational symmetry is broken? Not at all! There are still continuous rotations around, for example, the $x$-axis acting on the system, these are generated by $e^{i \varphi L_x}$ where $\varphi$ is the rotation angle and $L_x$ is the generator of the symmetry in the spin-$j$ representation. These symmetries act on states by changing the wavefunction components, but they don't change the discrete spectrum. The expectation values of observables therefore transform under continuous rotations continuously, while the spectrum remains discrete. A similar situation happens with the length spectrum. Let's denote the quantum states of spacetime with the length under considerations taking values $l = n l_P$ by $\left| n \right>$. We can define the length operator via $$ L \left| n \right> = n l_p \left| n \right>. $$ Real states are always superpositions of form $$ \left| \Psi \right> = \sum_n C_n \left| n \right>. $$ Imagine acting with a Lorentz boost on a state of this form. The generator of the boost will continuously change the values of $C_n$, but it won't touch the spectrum. 
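The angular-momentum analogy can be checked numerically. The sketch below is an illustration, not part of the original answer: it uses plain NumPy with spin 1 chosen for concreteness, conjugates $J_z$ by a continuous rotation about the $x$-axis, and confirms that the discrete spectrum is untouched while the expectation value in a fixed state varies smoothly.

```python
import numpy as np

# Spin-1 angular momentum matrices in the |m = +1, 0, -1> basis (hbar = 1)
Jz = np.diag([1.0, 0.0, -1.0])
Jx = np.array([[0.0, 1.0, 0.0],
               [1.0, 0.0, 1.0],
               [0.0, 1.0, 0.0]]) / np.sqrt(2)

def rotation(J, phi):
    """U = exp(-i * phi * J), built from the eigendecomposition of the Hermitian J."""
    w, V = np.linalg.eigh(J)
    return V @ np.diag(np.exp(-1j * phi * w)) @ V.conj().T

phi = 0.73  # arbitrary continuous rotation angle about the x-axis
U = rotation(Jx, phi)
Jz_rotated = U @ Jz @ U.conj().T

# The spectrum is the same discrete set {-1, 0, +1} for every phi...
spec_before = np.sort(np.linalg.eigvalsh(Jz))
spec_after = np.sort(np.linalg.eigvalsh(Jz_rotated))
print(np.allclose(spec_before, spec_after))  # True

# ...while the expectation value in a fixed state changes continuously (= cos(phi) here)
psi = np.array([1.0, 0.0, 0.0])  # the |m = +1> eigenstate
expectation = np.real(psi.conj() @ Jz_rotated @ psi)
print(expectation)
```

The same mechanism — continuous transformations acting on the amplitudes, never on the spectrum — is exactly what the boost argument above relies on.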
Alternatively, in the "Heisenberg" picture the state doesn't change at all, but the operator $L$ evolves continuously according to $$ i \frac{\partial}{\partial \varphi} L = \left[ L, K \right], $$ where $K$ is the boost operator. In either case, the expectation value contracts continuously: $$ \left< \Psi' \right| L \left| \Psi' \right> = \left< \Psi \right| L' \left| \Psi \right> = \sqrt{1 - v^2} \cdot \left< \Psi \right| L \left| \Psi \right>, $$ but the spectrum, including the "length gap" $l_P$, remains unchanged and discrete. Therefore, the existence of minimal length does not go against Lorentz symmetry in the quantum theory of gravity. At least not in this primitive way. Global Lorentz symmetries indeed don't exist in LQG, but that's not related to discreteness. In fact, global Lorentz symmetries also don't exist in classical General Relativity, unless unphysical constraints of asymptotic flatness are applied. Now to come to your questions. What is "between" two atom of space-time or atoms of space (and time) if different? Vacuum? Can distances or neighbourhood be defined if no space and no time and no field is defined? You'll need to study LQG to answer this question, but I'll try to give you a picture that emerges from applying loop quantization to General Relativity. It may appear superficial, so keep in mind that this structure isn't among the axioms of the theory, instead it can be obtained by a calculation. The quantum states of spacetime in LQG are very mysterious and still ill-understood. Those can be defined by considering a kernel of the so-called "Hamiltonian constraint operator", defined on another auxiliary Hilbert space called the kinematical Hilbert space (because it doesn't know about the dynamics of General Relativity). The kinematical Hilbert space $\mathcal{K}$ describes the quantum states of spatial geometry unconstrained by General Relativity. It is well understood and possesses a unique structure. 
The basis of states on $\mathcal{K}$ is given by spin networks. Those are 4-valent graphs (each node has 4 links adjacent to it), where links are labeled by irreducible projective representations of the "little group" $SO(3) \sim SU(2)$, which are just spins, i.e. half-integers $j$. The appearance of the little group has to do with the fact that states are defined at the boundary and not in the bulk; in fact, there's a slight resemblance with the holographic principle here. The nodes of the spin network are labeled by normalized intertwining operators, which are the $SU(2)$-invariant subspaces of $\mathcal{H}_{j_1}\otimes\mathcal{H}_{j_2}\otimes\mathcal{H}_{j_3}\otimes\mathcal{H}_{j_4}$ (here $\mathcal{H}_j$ is the spin-$j$ irrep of SU(2), and $j_{1\dots4}$ are the spins of the 4 links adjacent to the node). To every surface $S$ immersed in the 3-dimensional boundary, General Relativity associates a geometric area. For example, in classical General Relativity, $$ \mathcal{A}(S) = \intop_{S} d^2 x \sqrt{g'}, $$ where $g'$ is the induced metric given by $$ g'_{uv} = \frac{\partial X^{a}}{\partial x^{u}}\frac{\partial X^{b}}{\partial x^{v}} g_{ab}(X(x)). $$ In Loop Quantum Gravity, $\mathcal{A}(S)$ becomes a self-adjoint operator on $\mathcal{K}$. The spin network basis is particularly useful, because spin networks diagonalize area operators. In particular, the eigenvalue of area of a surface $S$ on the spin network state $\left| SN \right>$ is $$ \mathcal{A}(S) \left| SN \right> = 8 \pi l_P^2 \gamma \sum_{n} \sqrt{j_n (j_n + 1)} \left| SN \right>.$$ Here $l_P$ is the Planck length, $\gamma$ is the LQG-specific Barbero-Immirzi constant which is dimensionless and takes values of order $\gamma \sim 1$, and the sum is over the links of the spin network that intersect $S$. In LQG, area is quantized. The area spectrum is discrete. The whole spacetime is arranged such that you can't get a value of area which doesn't belong to the spectrum. 
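The spectrum formula can be evaluated directly. The snippet below is an illustrative check, not part of the original answer: it works in units of $l_P^2$ and uses the Barbero-Immirzi value $\gamma = \ln 2 / (\sqrt{3}\pi)$ that the answer fixes further down from black-hole entropy matching.

```python
import numpy as np

def area_eigenvalue(spins, gamma, l_P=1.0):
    """A = 8*pi*l_P^2*gamma * sum_n sqrt(j_n*(j_n+1)), summed over links crossing S."""
    return 8 * np.pi * l_P**2 * gamma * sum(np.sqrt(j * (j + 1)) for j in spins)

# Barbero-Immirzi constant fixed by matching Bekenstein's black-hole entropy
gamma = np.log(2) / (np.sqrt(3) * np.pi)

# Minimal area gap: a single intersecting link carrying spin j = 1/2
gap = area_eigenvalue([0.5], gamma)
print(gap)             # ~2.7726, i.e. 4*ln(2) in units of l_P^2
print(4 * np.log(2))   # the closed form quoted in the answer
```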
This is in no contradiction with relativity, for reasons outlined above. The minimal "area gap" that any physical surface can have is when among the links that intersect it all have spin $0$ (which is equivalent to saying they don't physically exist, because they don't contribute to physical area) except for one which has spin $1/2$: $$ \Delta \mathcal{A} = 4 \sqrt{3} \pi \gamma l_P^2. $$ If we substitute the value of $\gamma$, fixed by matching the numeric coefficient of the predicted black hole entropy with the Bekenstein's formula: $$ \gamma = \frac{\ln 2}{\sqrt{3} \pi}, $$ we get a distinctive prediction for the area gap: $$ \Delta \mathcal{A} = \left( 4 \ln 2 \right) l_P^2 \approx 2.77 l_P^2. $$ The nodes of the spin network can be interpreted as quantum tetrahedra, which are joined along common triangles – the links of the spin network. The areas of triangles are encoded by the spins, and the volumes of the tetrahedra are encoded by the intertwining operators. In reality (according to LQG), however, space is not a spin network, but a superposition of spin networks. It is easy to see – classical tetrahedra have 6 geometric degrees of freedom (6 lengths), but in LQG there's only 5 (4 spins and 1 intertwiner). Hence, quantum tetrahedra are always fuzzy. Geometry itself is noncommutative. Real tetrahedra on large scales are given by specific superpositions of spin networks that minimize the product of uncertainties between the last remaining 2 degrees of freedom of the tetrahedron (the volume and the dihedral angle). They are called Livine-Speziale coherent states. Vacuum is generally believed to be the main lowest state of field theory, on space-time. Supposing there is no space-time, can we even define what vacuum is? Does vacuum exist without reference to a particular background independent theory? Short answer is – no, the vacuum does not exist. 
The notion of energy does not exist as well (this is already apparent in GR with all of its energy paradoxes – it is possible to define gravitational energy only if GR is expanded around the flat space, which in turn excludes a lot of interesting solutions e.g. the cosmological FLRW solution). The dynamics of background independent theories is drastically different from anything else. It is in fact completely encoded in terms of constraints – for LQG this is the Hamiltonian constraint. It is expected (and in fact numerical simulations suggest this is true, see Rovelli's book for references) that among the solutions of the constraint there are those resembling classical geometries satisfying Einstein's equations. Among those, there should be the Minkowski space somewhere. In fact, there are two formulations of the Hamiltonian constraint operator that are presently known. One is the canonical formulation, which is defined in terms of matrix elements of the Hamiltonian constraint (or the so-called master constraint) on spin network states. This one is mathematically well-defined, but so far no one was able to prove that it gives General Relativity in the classical limit (and as far as I know there's indications that it may not be true). The other is the covariant formulation. Here in the spirit of path integrals, the projector on the subspace of solutions of the Hamiltonian constraint is defined in terms of sums over histories of spin networks. These are 2-complexes known as spinfoams. Links of the spin networks trace faces of spinfoams, nodes of the spin network trace edges of spinfoams, structural changes in the topology of the spin networks are encoded in the vertices of spinfoams. The spinfoam model for 4-dimensional LQG is called the EPRL model. 
In sharp contrast with the canonical formulation, it is not known if this model can be made mathematically well defined (amplitudes for individual spinfoams are always approximate, to get the precise answer we'd need to take the projective limit, for which it is unclear whether it has the right properties or even if it exists). However, it gives classical General Relativity in the classical limit with Livine-Speziale coherent states. To summarize, LQG is a toy example (which also has the potential to become realistic at some point) of truly quantum space-time. It looks very weird to a physicist who's studying it for the first time. The geometry itself is fuzzy and non-commutative. There is no time evolution, no well-defined notions of conserved energy, no unitarity. This, however, doesn't indicate a flaw in the formulation of the theory (not that there aren't any – there are plenty of flaws in the current understanding of LQG dynamics, but this isn't one of them). Instead, this is an indication that we should use completely new techniques to extract physical predictions. All physics is encoded in constraints, there are no evolution laws. But that also doesn't mean that the theory doesn't incorporate time evolution – it does. Only quantum things evolve with respect to each other, not with respect to an external time flow like in ordinary quantum field theories. This is very strange and counter intuitive, and we should not have expected any less from a theory of quantum gravity.
{ "domain": "physics.stackexchange", "id": 63685, "tags": "quantum-field-theory, spacetime, vacuum, discrete, loop-quantum-gravity" }
PyTorch: LSTM for time-series failing to learn
Question: I'm currently working on building an LSTM network to forecast time-series data using PyTorch. I tried to share all the code pieces that I thought would be helpful, but please feel free to let me know if there's anything further I can provide. I added some comments at the end of the post regarding what the underlying issue might be. From the univariate time-series data indexed by date, I created 3 date features and split the data into training and validation sets as below. # X_train weekday monthday hour timestamp 2015-01-08 17:00:00 3 8 17 2015-01-12 19:30:00 0 12 19 2014-12-01 15:30:00 0 1 15 2014-07-26 09:00:00 5 26 9 2014-10-17 20:30:00 4 17 20 ... ... ... ... 2014-08-29 06:30:00 4 29 6 2014-10-13 14:30:00 0 13 14 2015-01-03 02:00:00 5 3 2 2014-12-06 16:00:00 5 6 16 2015-01-06 20:30:00 1 6 20 8256 rows × 3 columns # y_train value timestamp 2015-01-08 17:00:00 17871 2015-01-12 19:30:00 20321 2014-12-01 15:30:00 16870 2014-07-26 09:00:00 11209 2014-10-17 20:30:00 26144 ... ... 2014-08-29 06:30:00 9008 2014-10-13 14:30:00 17698 2015-01-03 02:00:00 12850 2014-12-06 16:00:00 18277 2015-01-06 20:30:00 19640 8256 rows × 1 columns # X_val weekday monthday hour timestamp 2015-01-08 07:00:00 3 8 7 2014-10-13 22:00:00 0 13 22 2014-12-07 01:30:00 6 7 1 2014-10-14 17:30:00 1 14 17 2014-10-25 09:30:00 5 25 9 ... ... ... ... 2014-09-26 12:30:00 4 26 12 2014-10-08 16:00:00 2 8 16 2014-12-03 01:30:00 2 3 1 2014-09-11 08:00:00 3 11 8 2015-01-15 10:00:00 3 15 10 2064 rows × 3 columns # y_val value timestamp 2014-09-13 13:00:00 21345 2014-10-28 20:30:00 23210 2015-01-21 17:00:00 17001 2014-07-20 10:30:00 13936 2015-01-29 02:00:00 3604 ... ... 2014-11-17 11:00:00 15247 2015-01-14 00:00:00 10584 2014-09-02 13:00:00 17698 2014-08-31 13:00:00 16652 2014-08-30 12:30:00 15775 2064 rows × 1 columns Then, I transformed the values in the datasets by using MinMaxScaler from the sklearn library. 
scaler = MinMaxScaler() X_train_arr = scaler.fit_transform(X_train) X_val_arr = scaler.transform(X_val) y_train_arr = scaler.fit_transform(y_train) y_val_arr = scaler.transform(y_val) After converting these NumPy arrays into PyTorch Tensors, I created iterable datasets using TensorDataset and DataLoader classes provided by PyTorch. from torch.utils.data import TensorDataset, DataLoader from torch.autograd import Variable train_features = torch.Tensor(X_train_arr) train_targets = torch.Tensor(y_train_arr) val_features = torch.Tensor(X_val_arr) val_targets = torch.Tensor(y_val_arr) train = TensorDataset(train_features, train_targets) train_loader = DataLoader(train, batch_size=64, shuffle=False) val = TensorDataset(val_features, val_targets) val_loader = DataLoader(train, batch_size=64, shuffle=False) Then, I defined my LSTM Model and train_step functions as follows: class LSTMModel(nn.Module): def __init__(self, input_dim, hidden_dim, layer_dim, output_dim): super(LSTMModel, self).__init__() # Hidden dimensions self.hidden_dim = hidden_dim # Number of hidden layers self.layer_dim = layer_dim # Building your LSTM # batch_first=True causes input/output tensors to be of shape # (batch_dim, seq_dim, feature_dim) self.lstm = nn.LSTM(input_dim, hidden_dim, layer_dim, batch_first=True) # Readout layer self.fc = nn.Linear(hidden_dim, output_dim) def forward(self, x): # Initialize hidden state with zeros h0 = torch.zeros(self.layer_dim, x.size(0), self.hidden_dim).requires_grad_() # Initialize cell state c0 = torch.zeros(self.layer_dim, x.size(0), self.hidden_dim).requires_grad_() # We need to detach as we are doing truncated backpropagation through time (BPTT) # If we don't, we'll backprop all the way to the start even after going through another batch out, (hn, cn) = self.lstm(x, (h0.detach(), c0.detach())) # Index hidden state of last time step out = self.fc(out[:, -1, :]) return out def make_train_step(model, loss_fn, optimizer): # Builds function that performs a step in 
the train loop def train_step(x, y): # Sets model to TRAIN mode model.train() # Makes predictions yhat = model(x) # Computes loss loss = loss_fn(y, yhat) # Computes gradients loss.backward() # Updates parameters and zeroes gradients optimizer.step() optimizer.zero_grad() # Returns the loss return loss.item() # Returns the function that will be called inside the train loop return train_step Finally, I start training my LSTM model in mini-batches with AdamOptimizer for 20 epochs, which is already long enough to see the model is not learning. import torch.optim as optim input_dim = n_features hidden_dim = 64 layer_dim = 3 output_dim = 1 model = LSTMModel(input_dim, hidden_dim, layer_dim, output_dim) criterion = nn.MSELoss(reduction='mean') optimizer = optim.Adam(model.parameters(), lr=1e-2) train_losses = [] val_losses = [] train_step = make_train_step(model, criterion, optimizer) n_epochs = 20 device = 'cuda' if torch.cuda.is_available() else 'cpu' for epoch in range(n_epochs): batch_losses = [] for x_batch, y_batch in train_loader: x_batch = x_batch.unsqueeze(dim=0).to(device) y_batch = y_batch.to(device) loss = train_step(x_batch, y_batch) batch_losses.append(loss) training_loss = np.mean(batch_losses) train_losses.append(training_loss) with torch.no_grad(): batch_val_losses = [] for x_val, y_val in val_loader: x_val = x_val.unsqueeze(dim=0).to(device) y_val = y_val.to(device) model.eval() yhat = model(x_val) val_loss = criterion(y_val, yhat).item() batch_val_losses.append(val_loss) validation_loss = np.mean(batch_val_losses) val_losses.append(validation_loss) print(f"[{epoch+1}] Training loss: {training_loss:.4f}\t Validation loss: {validation_loss:.4f}") And this is the output: C:\Users\VS32XI\Anaconda3\lib\site-packages\torch\nn\modules\loss.py:446: UserWarning: Using a target size (torch.Size([1, 1])) that is different to the input size (torch.Size([64, 1])). This will likely lead to incorrect results due to broadcasting. Please ensure they have the same size. 
return F.mse_loss(input, target, reduction=self.reduction) [1] Training loss: 0.0505 Validation loss: 0.0315 [2] Training loss: 0.0317 Validation loss: 0.0315 [3] Training loss: 0.0317 Validation loss: 0.0315 [4] Training loss: 0.0317 Validation loss: 0.0315 [5] Training loss: 0.0317 Validation loss: 0.0315 [6] Training loss: 0.0317 Validation loss: 0.0315 [7] Training loss: 0.0317 Validation loss: 0.0315 [8] Training loss: 0.0317 Validation loss: 0.0315 [9] Training loss: 0.0317 Validation loss: 0.0315 [10] Training loss: 0.0317 Validation loss: 0.0315 [11] Training loss: 0.0317 Validation loss: 0.0315 [12] Training loss: 0.0317 Validation loss: 0.0315 [13] Training loss: 0.0317 Validation loss: 0.0315 [14] Training loss: 0.0317 Validation loss: 0.0315 [15] Training loss: 0.0317 Validation loss: 0.0315 [16] Training loss: 0.0317 Validation loss: 0.0315 [17] Training loss: 0.0317 Validation loss: 0.0315 [18] Training loss: 0.0317 Validation loss: 0.0315 [19] Training loss: 0.0317 Validation loss: 0.0315 [20] Training loss: 0.0317 Validation loss: 0.0315 Note 1: Looking at the warning given, I'm not sure if that's the real reason why the model is not learning well. After all, I'm trying to predict the future values in the time-series data; therefore, 1 would be a plausible output dimension. Note 2: To train the model in mini-batches, I relied on the class DataLoader. When iterating over the X and Y batches in both train and validation DataLoaders, the dimensions of x_batches were 2, while the model expected 3. So, I used PyTorch's unsqueeze function to match the expected dimension as in x_batch.unsqueeze(dim=0) . I'm not sure if this is how I should have gone about it, which could also be the issue. Answer: The issue was resolved once I used Tensor View to reshape the mini-batches for the features in the training and in the validation set. 
As a side note, view() enables fast and memory-efficient reshaping, slicing, and element-wise operations, by avoiding an explicit data copy. It turned out that in the earlier implementation torch.unsqueeze() did not reshape the batches into tensors with the dimensions (batch size, timesteps, number of features). Instead, the function unsqueeze(dim=0) returns a new tensor with a singleton dimension inserted at the 0th index. So, the mini-batches for the feature sets are shaped as follows x_batch = x_batch.view([batch_size, -1, n_features]).to(device) Then, the new training loop becomes: for epoch in range(n_epochs): batch_losses = [] for x_batch, y_batch in train_loader: x_batch = x_batch.view([batch_size, -1, n_features]).to(device) # <--- y_batch = y_batch.to(device) loss = train_step(x_batch, y_batch) batch_losses.append(loss) training_loss = np.mean(batch_losses) train_losses.append(training_loss) with torch.no_grad(): batch_val_losses = [] for x_val, y_val in val_loader: x_val = x_val.view([batch_size, -1, n_features]).to(device) # <--- y_val = y_val.to(device) model.eval() yhat = model(x_val) val_loss = criterion(y_val, yhat).item() batch_val_losses.append(val_loss) validation_loss = np.mean(batch_val_losses) val_losses.append(validation_loss) print(f"[{epoch+1}] Training loss: {training_loss:.4f}\t Validation loss: {validation_loss:.4f}") Here's the output: [1] Training loss: 0.0235 Validation loss: 0.0173 [2] Training loss: 0.0149 Validation loss: 0.0086 [3] Training loss: 0.0083 Validation loss: 0.0074 [4] Training loss: 0.0079 Validation loss: 0.0069 [5] Training loss: 0.0076 Validation loss: 0.0069 ... [96] Training loss: 0.0025 Validation loss: 0.0028 [97] Training loss: 0.0024 Validation loss: 0.0027 [98] Training loss: 0.0027 Validation loss: 0.0033 [99] Training loss: 0.0027 Validation loss: 0.0030 [100] Training loss: 0.0023 Validation loss: 0.0028
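The difference between the two reshapes is easy to see with plain NumPy, whose expand_dims and reshape mirror what torch.unsqueeze(dim=0) and Tensor.view do to the shapes here (batch size 64 and 3 features, as in the question):

```python
import numpy as np

batch_size, n_features = 64, 3
x_batch = np.zeros((batch_size, n_features))  # shape of one batch from the DataLoader

# unsqueeze(dim=0): singleton dimension at index 0 -> one sequence of 64 timesteps
wrong = np.expand_dims(x_batch, axis=0)
print(wrong.shape)   # (1, 64, 3) -> the LSTM emits a single prediction per batch

# view([batch_size, -1, n_features]): 64 sequences of one timestep each
right = x_batch.reshape(batch_size, -1, n_features)
print(right.shape)   # (64, 1, 3) -> one prediction per sample, matching y_batch
```

The (1, 64, 3) shape is what triggered the broadcasting warning: the model produced one output row while the target had 64.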
{ "domain": "datascience.stackexchange", "id": 8903, "tags": "python, deep-learning, time-series, lstm, pytorch" }
ROS Answers SE migration: rostest on pr2
Question: Is it possible to use rostest on the PR2? I'm running them fine in simulation with pr2_gazebo. When I run on the real PR2 (of course without pr2_gazebo) I see: [FATAL] [1405106407.155258687]: Could not load the xml from parameter server: robot_description [ERROR] [1405106407.155347537]: Could not load robot model. Are you sure the robot model is on the parameter server? My guess is it has something to do with the ROS IP address changing that rostest does, but I'm not sure. Many thanks for any help. Basic info: Ubuntu 12.04 ROS Groovy rosbuild python Originally posted by mbforbes on ROS Answers with karma: 70 on 2014-07-11 Post score: 1 Answer: rostest is specifically designed not to interfere with a running ROS system, and to be stateless. To achieve this, it runs its own roscore on a different port. There is currently no way to turn off this feature. What this means is that your test files must do everything to set up the parameters, environment and any simulation or robot that your test needs in order to run. I suspect that your original test had an include for the PR2 launch file, and you simply removed that when trying to run it on a real robot. This approach fails, because your nodes will now be running on a separate roscore, without a robot to talk to at all. Instead, I recommend setting up three launch/rostest files: (1) a core test file which includes the nodes you're testing, and your test nodes; (2) a simulation rostest file which sets up the simulation by including the appropriate PR2 gazebo launch files, and then includes your core test file (#1); (3) an optional PR2 rostest file which includes the PR2 launch file (the one normally run by robot start), and your core test file (#1). Your simulation test is then simply a matter of invoking rostest on file #2. If you want to run your tests on a real robot, you have a few choices: Start the robot normally, and use roslaunch to run your core test file (#1). This will run all of the regular nodes in that file, but not the test nodes. Or stop the robot, and use rostest to run your PR2 rostest file (#3). This file will then start all of the PR2 processes, and along with your nodes and tests, run them, and produce a test result. Originally posted by ahendrix with karma: 47576 on 2014-07-11 This answer was ACCEPTED on the original site Post score: 2 Original comments Comment by mbforbes on 2014-08-10: Thanks for the thorough answer! I think #3 was the key insight that helped me.
{ "domain": "robotics.stackexchange", "id": 18586, "tags": "rostest, pr2" }
What is the point of complex fields in classical field theory?
Question: I see a lot of books/lectures about classical field theory making use of complex scalar fields. However, why complex fields are used in the first place is often not really motivated. Sometimes one can read as a footnote that a complex field is in principle mathematically equivalent to two real fields (see also What is the difference between complex field and two scalar fields?), but then the author often goes on using a complex field anyway. This is confusing, because from quantum mechanics one learns that a complex quantity is not measurable. This is of course not the case in classical field theory, where both the real and the imaginary part must be simultaneously measurable quantities. I heard physically motivated reasons for using complex fields like: A complex scalar field represents different particles than a vector of two real fields. But this argument doesn't make sense in classical field theory; it is (if at all correct) only relevant in quantum field theory. Only a complex field can represent charged particles; real fields are necessarily neutral. A complex scalar field is a scalar and so it is by definition Lorentz invariant. A vector of two real fields is not Lorentz invariant and so one must use a complex field. But I'm unsure which of these reasons (if any) is really valid. What is the point of using complex fields in classical field theory? Answer: Two real scalar fields $\phi_1$ and $\phi_2$ satisfying an $SO(2)$ symmetry and one complex scalar field $\psi$ are equivalent. However, the latter is more convenient because the particles made by $\psi$ and $\psi^\dagger$ are each other's antiparticles. In the real case, the fields that have this property are $\phi_1 \pm i \phi_2$, so once you change basis from $\phi_1$ and $\phi_2$ to $\phi_1 \pm i \phi_2$ you've reinvented the complex scalar field. This is explained nicely starting from p.53 in Sidney Coleman's QFT notes. As you said, a complex quantity is not measurable in QM. 
And indeed $\psi$ is not an observable, which feels strange because quantum fields are often motivated at the start of a QFT course as local observables. Unfortunately this motivation isn't quite right, as we rarely measure quantum fields directly. For example, the number density, charge density, and current density for a charged complex scalar field are all field bilinears like $\psi^\dagger \psi$, and hence real.
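The equivalence in the first paragraph can be written out explicitly; a free massive scalar is used here for concreteness. With the conventional change of variables $$ \psi = \frac{\phi_1 + i\phi_2}{\sqrt{2}}, $$ the Lagrangian of two real scalars with an $SO(2)$ symmetry, $$ \mathcal{L} = \frac{1}{2}\partial_\mu\phi_1\,\partial^\mu\phi_1 + \frac{1}{2}\partial_\mu\phi_2\,\partial^\mu\phi_2 - \frac{m^2}{2}\left(\phi_1^2 + \phi_2^2\right), $$ becomes the complex scalar Lagrangian $$ \mathcal{L} = \partial_\mu\psi^*\,\partial^\mu\psi - m^2\,\psi^*\psi, $$ and the $SO(2)$ rotation of $(\phi_1, \phi_2)$ becomes the $U(1)$ phase $\psi \to e^{-i\alpha}\psi$, whose Noether charge is what distinguishes particles from antiparticles.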
{ "domain": "physics.stackexchange", "id": 28151, "tags": "classical-mechanics, field-theory, complex-numbers, classical-field-theory" }
The hermitian conjugate of anti-linear operator
Question: Some quantum mechanics books tell us that the definitions of the hermitian conjugate are If $\langle\psi|A\phi\rangle=\langle B\psi|\phi\rangle$ for linear operators, then $B=A^\dagger$ If $\langle\psi|C\phi\rangle=\langle D\psi|\phi\rangle^*$ for anti-linear operators, then $D=C^\dagger$ Why are definitions different between linear operators and anti-linear operators? Are they equivalent to the definition $Q^\dagger=(Q^T)^*$ in linear algebra, where $Q^T$ is the transpose of $Q$? Answer: Why are definitions different between linear operators and anti-linear operators? Let $\lambda$ be any complex scalar. Then $$\langle \psi|C(\lambda \phi)\rangle = \langle\psi|\overline \lambda C\phi\rangle = \overline \lambda\langle\psi|C\phi\rangle$$ by the definition of anti-linearity. If we use the definition we normally use for linear operators, then we would find $$\langle \psi | C(\lambda \phi)\rangle = \langle D\psi|\lambda \phi\rangle$$ but also that $$\overline \lambda \langle\psi|C\phi\rangle = \overline \lambda\langle D \psi|\phi\rangle = \langle D\psi|\overline \lambda \phi\rangle$$ which is inconsistent. Using the slightly modified definition fixes this problem. Are they equivalent to the definition $Q^\dagger=(Q^T)^*$ in linear algebra, where $Q^T$ is the transpose of $Q$? The first one is, yes. If you imagine two arbitrary column vectors $\mathbf x$ and $\mathbf y$ and matrices $A$ and $B$, then the first definition becomes $$\mathbf x^\dagger A \mathbf y = (B\mathbf x)^\dagger \mathbf y \implies B=A^\dagger$$ where the dagger denotes conjugate transposition. But this should be clear, since $$(B\mathbf x)^\dagger = \mathbf x^\dagger B^\dagger$$ so $$\mathbf x^\dagger A \mathbf y= \mathbf x^\dagger B^\dagger \mathbf y$$ If that equality holds for all $\mathbf x$ and $\mathbf y$, then $A=B^\dagger \iff A^\dagger = B$. 
The case of anti-linear operators is a bit more subtle because an anti-linear operator cannot be written as a matrix all by itself; it has to be a matrix plus a complex conjugation, and therefore cannot be realized if we restrict ourselves to standard matrix algebra.
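The finite-dimensional picture can still be checked numerically. In the sketch below (an illustration, not from the original answer; the matrix $M$ and the vectors are arbitrary random choices), the anti-linear operator is modeled as $C = MK$, with $K$ denoting complex conjugation, and its adjoint works out to $C^\dagger = M^T K$:

```python
import numpy as np

rng = np.random.default_rng(0)
M = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))
psi = rng.standard_normal(3) + 1j * rng.standard_normal(3)
phi = rng.standard_normal(3) + 1j * rng.standard_normal(3)

C = lambda x: M @ np.conj(x)        # anti-linear operator: conjugate, then apply M
C_dag = lambda x: M.T @ np.conj(x)  # its adjoint: conjugate, then apply M^T

# Anti-linearity: C(lambda * x) = conj(lambda) * C(x)
lam = 2.0 - 3.0j
print(np.allclose(C(lam * phi), np.conj(lam) * C(phi)))  # True

# The modified adjoint relation: <psi|C phi> = <C_dag psi|phi>^*
lhs = np.vdot(psi, C(phi))               # np.vdot conjugates its first argument
rhs = np.conj(np.vdot(C_dag(psi), phi))
print(np.allclose(lhs, rhs))             # True
```

Note that $M^T$ rather than $M^\dagger$ appears, because the conjugation $K$ supplies the missing complex conjugate — consistent with the extra conjugation in the $\langle D\psi|\phi\rangle^*$ definition above.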
{ "domain": "physics.stackexchange", "id": 54003, "tags": "quantum-mechanics, operators, hilbert-space, notation, complex-numbers" }
Can Young's double slit experiment be considered a holographic setup?
Question: As described in his landmark paper on holography, "A new microscopic principle", Denis Gabor sent coherent light through a transparent plate with some black letters on it. The light that was diffracted by the letters then interfered with non-diffracted waves. In Thomas Young's famous double slit experiment (or at least the way it is taught today), coherent light is sent through two small slits. Some light is diffracted when passing through and interferes with non-diffracted waves. Can this be seen as an in-line holographic setup? If I were to record the resulting pattern of light and dark, could I reconstruct the two slits? Answer: The answer is an unequivocal "yes": Young's double-slit experiment can be considered an in-line holographic setup. Strictly speaking, light going through one of the slits can be considered an object beam and light going through the other slit can be considered a reference beam. So, if you record the pattern and reconstruct using just the "reference beam", you will reconstruct the "object beam". In principle, that means you will reconstruct just one slit. However, you're very likely to obtain higher diffraction orders as well -- so you will probably see multiple slit reconstructions. Note that the first-order reconstruction, using the light from the original "reference slit" to illuminate the hologram, will produce a virtual image of the other slit, located in its original position. In an in-line setup in which the object is at the same distance from the recording plane as the (nominally) point reference source, there also typically will be a real image reconstructed at infinity.
{ "domain": "physics.stackexchange", "id": 49067, "tags": "optics, double-slit-experiment, hologram" }
How to view the edit history of the rplidar wiki page
Question: How to get the edit history about rplidar? Originally posted by kint on ROS Answers with karma: 53 on 2018-07-31 Post score: 0 Answer: MoinMoin displays editing history of pages on the Revision History page, which you can access for wiki/rplidar here: wiki/action/info/rplidar?action=info. Originally posted by gvdhoorn with karma: 86574 on 2018-07-31 This answer was ACCEPTED on the original site Post score: 0
{ "domain": "robotics.stackexchange", "id": 31420, "tags": "ros, rplidar" }
Map topic data not published on remote computer
Question: Hi all, I'm using ROS to run my own differential drive robot. Since my robot is powered by a UDOO board I'm using a minimal Ubuntu installation without a GUI, so I cannot run rviz on this device. For this reason I use a second laptop with Ubuntu in order to control my robot. Roscore and the complete navigation stack (AMCL, move_base, gmapping...) run on the UDOO board and only rviz runs on the remote laptop. In rviz I'm able to see the laser scan, the robot pose and other topics' data, but I'm not able to see the map data (the topic is there and is listed by rostopic list). In order to understand what is going on, I tried to show topic data from the remote system: rostopic echo scan shows me scan data. Instead, typing rostopic echo map does not show any data. The strange thing is that the same test, performed on the system running roscore, shows me data for the map topic too. It seems that the map data alone is not sent/received on the remote system! I'm using ROS Hydro, have already tried updating all packages on the two systems, and ran a successful network test. Any idea about this behavior? Thanks Ale UPDATE After some tests I noticed that some map topic data is published on the remote PC, but not everything sent by my navigation stack. Any idea about the possible reason? Originally posted by afranceson on ROS Answers with karma: 497 on 2014-11-01 Post score: 1 Original comments Comment by Wolf on 2014-11-07: Network bandwidth? CPU/memory used out? Answer: Maybe the message type of the /map topic is not built on the remote PC because the package containing it is not installed there!? Originally posted by Wolf with karma: 7555 on 2014-11-01 This answer was ACCEPTED on the original site Post score: 0 Original comments Comment by afranceson on 2014-11-02: Dear Wolf, I checked and the map topic exists on the remote PC too. After some tests I noticed that some map data is sent to the remote PC, but not all of the data published by the gmapping node.
Today I will make some test using another WiFi router just to be sure the problem is not caused by bad networking. Tha
{ "domain": "robotics.stackexchange", "id": 19918, "tags": "ros, navigation, mapping, ros-hydro, remote" }
Electroweak Interaction and SSB
Question: I read that above unification energy, on the order of 100 GeV, electromagnetic force and weak force would merge into a single electroweak force. I do not really understand how and when exactly two forces are getting unified in terms of the mass scale that is given. Going through Glashow-Weinberg-Salam's Electroweak model, I understood how the SSB $SU(2)_L\times U(1)_Y\rightarrow U(1)_{EM}$ gives rise to masses of $W^{\pm}, Z$ bosons ($\sim 100$ GeV), but I do not understand when exactly the theory is getting unified (or what exactly does it mean). Things that confuse me are when exactly the forces appear to be unified and when they get broken (in terms of energy scales). Answer: The link you are referencing indicates that the operative scale is $v = 246$ GeV, not 100; but if that's what you have in mind, the loose language "unified" indicates that, for ambient (thermal) energies above that $v$, the massive weak bosons W and Z mix with the photon mediating electromagnetism into the $SU(2)\times U(1)$ theory. This weak mixing/blending is metaphorically dubbed "unification", in the loose sense it represents inseparable aspects of the same structure, loosely as electric and magnetic forces blend into Maxwell's "unified" electromagnetism. It is a popular language thing.
{ "domain": "physics.stackexchange", "id": 97334, "tags": "particle-physics, symmetry-breaking, elementary-particles, weak-interaction, electroweak" }
Refactoring based on OPEN CLOSE PRINCIPLE- C#
Question: I have some code to write error logs to different places like the console, SignalR messages, or a text file. The code is as follows:

public class Logger
{
    public void WriteToLog(string message, LogType logType)
    {
        switch (logType)
        {
            case LogType.Console:
                Console.WriteLine(message);
                break;
            case LogType.SignalR:
                // Code to send message to SignalR Hub
                break;
            case LogType.File:
                // Code to write in .txt file
                break;
        }
    }
}

where my LogType is a simple enum, as below:

public enum LogType
{
    Console,
    SignalR,
    File
}

But when I think of the Open-Closed Principle, I am not getting an optimal solution for it. Edit: I need to refactor my code as per the Open-Closed Principle, so that I don't need to modify the WriteToLog method once any new LogType gets added. Answer: Create an interface which will implement your logic for logging:

public interface ILogger
{
    void LogMessage(string message);
}

Create a new class for each message type in place of the enum; at a later stage, if any new log type gets added, a respective class can be included without changing your existing logic:

public class SignalRLogger : ILogger
{
    public void LogMessage(string message)
    {
        throw new NotImplementedException(); // your implementation here
    }
}

Now, your implementation would be very similar to the below:

public class Logger
{
    ILogger _logger;

    public Logger(ILogger messageLogger)
    {
        _logger = messageLogger;
    }

    public void Log(string message)
    {
        _logger.LogMessage(message);
    }
}
{ "domain": "codereview.stackexchange", "id": 4809, "tags": "c#, object-oriented" }
Is it possible to allow some ROS command lines to be executed automatically when a Raspberry Pi is turned on?
Question: Hello, For example if I want to make the roscore and some launch programs (such as Crazyflie activation launch file or VICON camera launch file....) to automatically executed whenever I turn on the Raspberry Pi. Is it possible? How can I do that? This is because I have to do this all the time I work on my project, so I want it to be automatic. Thanks. Originally posted by thanhvu94 on ROS Answers with karma: 27 on 2017-05-01 Post score: 0 Answer: You could have a look at robot_upstart. Originally posted by NEngelhard with karma: 3519 on 2017-05-01 This answer was ACCEPTED on the original site Post score: 3
{ "domain": "robotics.stackexchange", "id": 27766, "tags": "ros" }
Ab initio effective potentials
Question: I have been reading a journal article on ab-initio pseudo-potentials, and I need some help understanding it. The article is C. F. Melius and W. A. Goddard, III. Phys. Rev. A 1974, 10, 1528. A summary of the article for those that do not have access follows. The authors present a way to construct a pseudo-potential. Basically, their argument goes like this (according to my understanding): Suppose that we are using the Hartree Fock method. We can represent the ground-state wavefunction of some atom as a function of the valence orbitals, core orbitals, and spin. $$\Psi= f (\phi_{core}, \varphi_{valence},X)$$ Note: in the article they use a weird letter instead of $f$. After we obtain the ground-state occupied orbitals, we solve for the valence orbitals. We obtain the valence orbital by solving the variational equations $$H^{HF}\varphi_{i}=\epsilon_i\varphi_{i}$$ Where $H^{HF}$ is the Hartree Fock Hamiltonian and is given by $$H^{HF}=-\frac{\nabla^2}{2}-\frac{Z}{r}+2J_{\alpha}-K_{\alpha}$$ For the valence orbital, $J_{\alpha}$ is equal to the classical Coulomb potential due to a charge density corresponding to the $\phi_{core}$. The exchange operator $K_{\alpha}$ is an integral operator resulting from the antisymmetric form of the wavefunction. The authors then state that the valence orbital of the ground-state wavefunction is NOT determined uniquely by the variational principle. This is because the solutions of the variational equations may not yield a wavefunction that is purely a valence orbital. In other words, the mixing of core orbitals with a valence orbital does not change its energy. Hence the solution $\varphi_i$ may be of the form $$\varphi_i=\varphi_{valence}+\sum_j c_j\phi_{core,j}$$ The authors then impose the constraint that $\varphi_{i}$ be orthogonal to other occupied orbitals. What I do not understand is this. How does orthogonality allow for a unique solution? 
Also later on, the authors state "Although this orthogonality restriction has the desired consequence of leading to a prescription for a unique valence orbital, there is no reason to consider $\varphi_i$ to have a special significance over any other combination of $\varphi_i$ with the various core orbitals." What do they mean by this statement? Answer: First off, I would like to note that you wrongly interpreted the very first equation. Below I quote the relevant part of the paper: In the Hartree-Fock (HF) approximation for, say, the $\ce{Na}$ atom, the ground-state wave function has the form $$ \Psi = \mathcal{A} (\Phi_{\mathrm{core}} \phi_\nu X) \, , \tag{1} $$ where $\Phi_{\mathrm{core}}$ is a product of (ten) spatial orbitals very similar to the orbitals of $\ce{Na+}$ and $\phi_\nu$ is the valence orbital (the one removed in ionizing to $\ce{Na+}$). ($X$ is an appropriate product of spin functions.) So, $\Psi$ is not just some function of spin orbitals designated by a weird letter as you said, it is, as usual, their antisymmetrized product, where I bet the calligraphic A symbol ($\mathcal{A}$) as usual stands for the antisymmetrizer. The only small difference is that a small calligraphic a is used in the text rather than the much more common capital one. Now to your actual question. Both core and valence orbitals are thought of in the derivation as the solutions of the Hartree-Fock equations, which have a very well-known feature: they are defined to within a unitary transformation (see, for instance, here). So, orbitals which are solutions of the Hartree-Fock equations are not unique: there are infinitely many different sets of orbitals that minimize the Hartree-Fock energy and you are free to choose any of these sets for any further work. This is, essentially, what is said in the paper: It will be important in our later analysis to note that the valence orbital in (1) is not determined uniquely by the variational principle. 
One can modify $\phi_\nu$ by mixing in an arbitrary amount of any core orbital (doubly occupied in $\Phi_{\mathrm{core}}$) without changing the energy. Just note that this is true not only for the valence orbital: all orbitals (i.e. core as well) are not uniquely determined by the variational principle, as I mentioned above. Now, if the goal is to have a unique valence orbital, then it can be achieved by restricting it to be orthogonal to core orbitals. In that case, when transforming orbitals, you cannot mix any amount of any core orbital into the valence one since it will immediately make it non-orthogonal with this particular core orbital. And since mixing in core orbitals is the only way you can change the valence one, orthogonality of the valence orbital implies its uniqueness. In order to obtain unique solutions for $\phi_\nu$, one generally requires that $\phi_\nu$ be orthogonal to the other occupied orbitals. And it is important to note (as it is done in the next sentence of the paper) that: Although this orthogonality restriction has the desired consequence of leading to a prescription for a unique valence orbital, there is no reason to consider the orthogonal valence orbital $\phi_\nu$, to have a special significance over any other combination of $\phi_\nu$, with the various core orbitals. Note, finally, that due to the orthogonality restriction you cannot also mix any amount of the valence orbital into the core ones, but you can mix core orbitals with themselves any way you like since it won't do any harm to their orthogonality with the valence orbital. Thus, while the valence orbital is uniquely defined by the orthogonality requirement, the core orbitals are still defined to within a unitary transformation.
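The "orthogonality implies uniqueness" argument can be illustrated with a toy numerical sketch (the 3-dimensional vectors here are purely illustrative, not from the paper): mixing core character into the valence vector gives an equally valid but different solution, and imposing orthogonality to the core projects that ambiguity back out.

```python
import numpy as np

# Toy model: one normalized "core" orbital and one "valence" orbital,
# represented as orthonormal vectors in a small basis.
core = np.array([1.0, 0.0, 0.0])
valence = np.array([0.0, 1.0, 0.0])

# Mixing in core character does not change the HF energy, so this is an
# equally valid solution of the variational equations:
mixed = valence + 0.7 * core
assert not np.isclose(core @ mixed, 0.0)   # no longer orthogonal to the core

# Imposing <core|phi> = 0 removes exactly the ambiguous component,
# recovering a single well-defined valence vector:
restored = mixed - (core @ mixed) * core
assert np.allclose(restored, valence)
```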
{ "domain": "chemistry.stackexchange", "id": 5246, "tags": "quantum-chemistry, computational-chemistry, density-functional-theory, ab-initio" }
Writing .ppm images to a file
Question: I am struggling with commenting and variable naming. My teacher says my comments need to be more explanatory and explicit. He says my variable names are also sometimes confusing. I was just wondering whether you could go through my code and see whether you are able to understand the comments and add in comments/edit where you feel I should add comments or improve them. Lastly, are there any general rules to follow with commenting?

class PPM(object):
    def __init__(self, infile, outfile):
        self.infile=infile
        self.outfile=outfile
        #Read in data of image
        data= open(self.infile,"r")
        datain=data.read()
        splits=datain.split(None, 4)
        #Header info and pixel info
        self.type=splits[0]
        self.columns=int(splits[1])
        self.rows=int(splits[2])
        self.colour=int(splits[3])
        #(Return a new array of bytes)
        self.pixels=bytearray(splits[4])

    def grey_scale(self):
            for row in range(self.rows):
                    for column in range(self.columns):
                            start = row * self.columns * 3 + column * 3
                            end = start + 3
                            r, g, b = self.pixels[start:end]
                            brightness = int(round( (r + g + b) / 3.0 ))
                            self.pixels[start:end] = brightness, brightness, brightness

    def writetofile(self):
        dataout= open(self.outfile, "wb")
        #Use format to convert back to strings to concatenate them and Those {} in the write function get's replaced by the arguments of the format function.
        dataout.write('{}\n{} {}\n{}\n{}'.format(self.type, self.columns, self.rows, self.colour, self.pixels))

sample = PPM("cake.ppm", "Replica.ppm")
sample.grey_scale()
sample.writetofile()

Answer: To steal an old quote: "There are 2 hard things in computer science. Naming, cache invalidation, and off-by-one errors". That being said, there is room for improvement here. Firstly, I'm assuming the class name, PPM, is short for Portable Pixmap Format. However, this isn't immediately obvious, and if you aren't familiar with that format (I'm not), it required a search. 
Hence, the first thing I'd do is change the name to something a bit more descriptive, and add a docstring explaining something about the format:

class PortablePixmap(object):
    '''A class encapsulating basic operations on images that use
    the portable pixmap format (PPM).
    '''

Python itself has a style guide known as PEP8 that you should try to follow as much as possible. Generally the convention in python is to name ClassesLikeThis, methods_like_this, and variables_like_this. Hence, another change I'd make is to rename infile and outfile to in_file and out_file respectively. Continuing on, the first comment under __init__ is fairly obvious:

#Read in data of image
data= open(self.infile,"r")
datain=data.read()

As a minor aside, try and keep the whitespace around operators like = consistent. Again, as per PEP8, these should be:

data = open(self.infile, "r")
data_in = data.read()

I'd also consider renaming data_in to something like raw_image_data. Back to the comments. The next line has no comment, but needs it far more than the previous 2 lines:

# Break up the image data into 4 segments because ...
splits = datain.split(None, 4)

The comment #(Return a new array of bytes) is both obvious and misleading: this is __init__; you're constructing the object - assigning to self.pixels isn't returning anything! For grey_scale, your indentation moves to 8 spaces instead of 4. Be careful with this - especially in Python, where whitespace can modify the semantics (meaning) of your program. This function should again have a docstring:

def grey_scale(self):
    '''Converts the supplied image to greyscale.'''

The final function, def writetofile(self): should again use _ as separators to make it easier to read. I'd also probably move the out_file parameter to this function, rather than passing it in __init__:

def write_to_file(self, output_location):
    '''Writes the image file to the location specified output location.

    The file is written in <some description of the file format>.
    '''

Watch the length of your comments (they should stick to the same line lengths as everything else in your program).

#Use format to convert back to strings to concatenate them and Those {} in the write function get's replaced by the arguments of the format function.

The comment itself is also difficult to understand: "convert back to string to concatenate them and Those {} ..." That made me do a double take. Try and write comments as complete sentences.
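Pulling the review's suggestions together, a minimal runnable sketch of just the greyscale step might look like the following (the function and variable names are illustrative, not a required API):

```python
def to_greyscale(pixels, rows, columns):
    '''Convert an RGB pixel buffer to greyscale in place.

    `pixels` is a flat bytearray of rows * columns RGB triples.
    '''
    for row in range(rows):
        for column in range(columns):
            start = (row * columns + column) * 3
            r, g, b = pixels[start:start + 3]
            # Average the three channels to get a single brightness value.
            brightness = int(round((r + g + b) / 3.0))
            pixels[start:start + 3] = (brightness,) * 3

# A 1x2 image: one pure-red pixel and one mixed pixel.
image = bytearray([255, 0, 0, 30, 60, 90])
to_greyscale(image, rows=1, columns=2)
assert list(image) == [85, 85, 85, 60, 60, 60]
```

Keeping the pixel arithmetic in a small free function like this also makes it trivial to unit test, which is hard to do when everything lives inside `__init__` and file I/O.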
{ "domain": "codereview.stackexchange", "id": 7123, "tags": "python, image, io" }
How to determine the summing range of the $j_{12}$ for $6j$ symbols?
Question: $$ |j_{1},j_{2}j_{3}(j_{23});j\rangle = \sum_{j_{12}}(-1)^{j_{1}+j_{2}+j_{3}+j}\hat{j}_{12}\hat{j}_{23} \begin{Bmatrix} j_{1}&j_{2}&j_{12}\\ j_{3}&j&j_{23} \end{Bmatrix} |j_{1}j_{2}(j_{12})j_{3};j\rangle $$ where $\hat{j}=\sqrt{2j+1}$. I want to write Mathematica code for this formula. However, I do not know what the summing range of $j_{12}$ is. But for a special case, for example, three spin-$\frac{1}{2}$, if they couple into total angular momentum $j=1/2$, we have the expression as follows. $$ |\frac{1}{2},(\frac{1}{2},\frac{1}{2})0;\frac{1}{2}\rangle = \frac{\sqrt{3}}{2}|(\frac{1}{2},\frac{1}{2}){\color{red}0},\frac{1}{2};\frac{1}{2}\rangle + \frac{1}{2}|(\frac{1}{2},\frac{1}{2}){\color{red}1},\frac{1}{2};\frac{1}{2}\rangle $$ Yes, calculating by hand, I can easily work out the values of $j_{12}$ I have to take. But for general values of $j_{1},j_{2},j_{3}$, how can I determine the range of $j_{12}$ when I write Mathematica code for this formula? Answer: By construction $j_{12}$ is the result of coupling $j_1$ and $j_2$ so the possible values of $j_{12}$ range from $\vert j_1-j_2\vert$ to $j_1+j_2$, incrementing in steps of 1. In your specific case $j_1=j_2=1/2$ so the range of $j_{12}$ is between $0$ and $1$ in steps of $1$, i.e. precisely $0$ and $1$.
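The question asks for Mathematica, but the coupling logic is language-independent; here is a sketch in Python (helper names are my own). Note that inside the $6j$ symbol, $j_{12}$ must additionally form a triangle with $j_3$ and $j$, or the symbol vanishes and the term drops out of the sum. `Fraction` is used so that half-integer spins are handled exactly:

```python
from fractions import Fraction

def coupling_range(a, b):
    # Allowed total momenta when coupling a and b: |a-b|, |a-b|+1, ..., a+b
    j = abs(a - b)
    values = []
    while j <= a + b:
        values.append(j)
        j += 1
    return values

def j12_values(j1, j2, j3, j):
    # j12 couples j1 with j2, and must also couple with j3 to the total j;
    # otherwise the 6j symbol is zero, so those terms can be skipped.
    allowed_with_total = coupling_range(j3, j)
    return [x for x in coupling_range(j1, j2) if x in allowed_with_total]

# Matches the three-spin-1/2 example in the question: j12 takes 0 and 1.
half = Fraction(1, 2)
assert j12_values(half, half, half, half) == [0, 1]
```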
{ "domain": "physics.stackexchange", "id": 74987, "tags": "angular-momentum, representation-theory" }
Total momentum in linear monoatomic chain
Question: Context: Solid state physics. Monoatomic linear chain. Question: To prove that the total momentum of the chain is zero. Attempted solution: I consider the sum: \begin{align*} p = \sum_{n=1}^{N} m \dot{u_n} \end{align*} where $u_n$ is the displacement from the equilibrium position of the $n$-th atom. The displacement is given by the formula: \begin{align*} u_n = u_0 \exp\left[-i\left(\omega t \pm k n a\right)\right] \end{align*} where $k$ is the wavevector, $n$ is the $n$-th atom and $a$ is the distance between atoms. If I substitute the above formula to the first sum, I get a result of the form: \begin{align*} p \sim \sum_{n=1}^{N} \exp(i k n a) \end{align*} I wonder, how could I prove that this is always zero ? If I treat it as the sum of a geometric series with $\alpha_1=\exp(ika)$ and $\lambda=\exp(ika)$, I still get a result that isn't necessarily zero. \begin{align*} S_{1\to N} = \alpha_1 \frac{\lambda^N-1}{\lambda - 1} = e^{ika} \frac{e^{ikNa}-1}{e^{ika}-1} \end{align*} If I further require that the first and last atoms are fixed, then $\exp(ikNa) = \exp(ika)$ and $S_{1\to N} = \exp(ika)$. Then \begin{align*} p = -i\omega u_0 m \exp(-i\omega t) \exp(ika) = -i\omega m \underbrace{u_0 \exp\left[-i(\omega t - k a)\right]}_{u_1} = -i \omega m u_{1} = 0 \end{align*} since we assumed that the 1st atom is fixed. Does this sound correct ? Answer: With $$u(n)=u_0e^{-i(\omega t+k n a)}$$ we have $$p=\sum_{n=1}^Nm\frac{d}{dt}u(n)=i\omega m u_0e^{-i\omega t}\left(\frac{1-e^{-iakN}}{1-e^{iak}}\right).$$ With cyclic boundary conditions we have $$u(N+1)=u(1)\Rightarrow k=\frac{2\pi j}{Na}\text{ for }j\in\mathbb{Z}$$ and inserting $k$ into $p$ gives $$p=i\omega m u_0e^{-i\omega t}\left(\frac{1-e^{-2 i \pi j}}{1-e^{\frac{2 i \pi j}{N}}}\right)=0.$$ Your method uses a slightly different boundary condition, but I think it's still valid.
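The geometric-series cancellation with cyclic boundary conditions is easy to verify numerically. A quick sketch (the values of $N$ and the mode index are chosen arbitrarily):

```python
import cmath

def total_momentum_factor(N, mode):
    """Sum of exp(i k n a) over the chain, with k a = 2*pi*mode/N
    as required by cyclic boundary conditions u(N+1) = u(1)."""
    return sum(cmath.exp(2j * cmath.pi * mode * n / N) for n in range(1, N + 1))

# For a mode index that is not a multiple of N, the sum (and hence the
# total momentum) vanishes:
for mode in range(1, 8):
    assert abs(total_momentum_factor(8, mode)) < 1e-9

# For mode = 0 (k = 0, a uniform translation) every term is 1 and nothing
# cancels -- this is the center-of-mass mode that does carry momentum:
assert abs(total_momentum_factor(8, 0) - 8) < 1e-9
```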
{ "domain": "physics.stackexchange", "id": 13080, "tags": "homework-and-exercises, solid-state-physics" }
Is a purely vertical or almost vertical orbital launch possible?
Question: Is it possible, for the sake of argument, to launch a payload into an orbit around the earth by putting almost all the energy going at a 90 degree angle? What velocity would it take, and what horizontal orbital burns would you need? Answer: As an orbit goes around the Earth, your challenge if you launch vertically is that once you reach the desired height you will then need to accelerate sideways to orbital velocity. For an L2 orbit, this is around 1km/s with respect to the Earth. So you need to carry all that fuel up with you, to then burn it. That's a vast amount of mass wasted in the initial launch, so realistically it isn't going to happen. In reality, the launch profiles used give you orbital velocity and height as efficiently as possible, to maximise payload.
{ "domain": "physics.stackexchange", "id": 7087, "tags": "gravity, rocket-science" }
What is the actual meaning of velocity?
Question: There's a scenario where a car is moving between two points A and B in such a way that it first goes 30 m north and then 20 m south in a time period of 10 seconds. Now the speed of the car comes out to be 5 m/s, while the velocity comes out to be 1 m/s in the north direction. So my doubt is: when I say that the speed of the car is 5 m/s and the velocity is 1 m/s towards north for the same car, isn't it conflicting? Throughout the journey the car was in one single motion, not two different motions, so how can the car travel 5 meters per second and also 1 meter per second north between the two points A and B? If we say that velocity tells us about the direction of the motion along with the magnitude, then why isn't the velocity just 5 m/s (the speed) plus some direction? How is it that the car is able to travel 5 meters per second and at the same time travel 1 meter per second in some direction between the same two points? (What happens to the remaining magnitude; where is it lost in the velocity?) Also, is it just that my understanding of the concept of velocity is wrong? If so, please provide a good definition explaining the concept of velocity. Thanks! Answer: Without getting into calculus, average velocity is total displacement divided by the total time of the journey. Average speed is total distance divided by total time. Total displacement is the arrow that points from the start position to the end position, all the little displacement vector arrows added together into one total vector arrow. Total distance can be what an odometer reads. It can be the number of steps taken, all of the little distances added together. It is the length of the path followed, regardless of direction. A person can run around a one-mile track and end up at the starting point. The person traveled a distance of one mile. If this takes ten minutes, the person's average speed is six miles per hour. Displacement is zero. 
The run around the track did not result in a total change of position at all. Although velocity during the run was never zero, the average velocity is zero. The eastward effects balance the westward effects. The northward effects balance the southward effects. We use both speed and velocity because each can be important, depending on what we are trying to accomplish.
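The numbers in the question can be checked directly with the two definitions above; a minimal sketch using the 30 m north / 20 m south trip over 10 s:

```python
# Leg displacements, taking north as positive (metres).
legs = [+30, -20]
total_time = 10  # seconds

distance = sum(abs(leg) for leg in legs)   # path length: 50 m
displacement = sum(legs)                   # net change of position: 10 m north

average_speed = distance / total_time         # 5.0 m/s
average_velocity = displacement / total_time  # 1.0 m/s, directed north

assert average_speed == 5.0
assert average_velocity == 1.0
```

Nothing is "lost": the two quantities answer different questions, one about the path length covered and one about the net change of position.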
{ "domain": "physics.stackexchange", "id": 94938, "tags": "homework-and-exercises, kinematics, velocity, calculus, displacement" }
Why doesn't Earth's atmosphere form bands due to different rotational speeds?
Question: If the Earth's atmosphere is rotating at the same speed as Earth, then the atmosphere must be rotating much faster at the equator than at the poles. If you spin a ball covered in oil, it will form rings. Also Jupiter has rings. So why doesn't the Earth have rings of weather too? Answer: The short answer is -- there are bands! They behave very similar to the bands on Jupiter, but are not as pronounced. And we don't have a really unappealing colored atmosphere to show us what the bands look like. Here is an example of what they look like (source): There are two bands along each side of the equator. Another set of bands starts 30 degrees north and south of the equator. And another band starts 30 degrees further north and south (at 60 degrees total). You'll also note that these differences in wind in the same direction of rotation also causes wind to form in the north-south direction. All of this is what drives the major weather systems. Consider the US. Weather systems will typically move from west to east. Atlantic hurricanes form in the tropical band off the coast of Africa. They form here because the wind is relatively calm and there is little north/south shearing. They then move westward in the tropical band while also moving north due to the Coriolis forces. As they move north, they begin to encounter the westerly winds that are characteristic of the mid-latitude cell. This will eventually turn them around so they move north-east along the US coastline until turning due-east and moving towards Europe (which in turn induces a southward drift due to Coriolis forces). Here are what several of these hurricane paths look like (source): These bands are not typically readily apparent. Mostly this is because our atmosphere is transparent so we have no way to "visualize" the bands. It is possible to sometimes capture bands however. 
A band of rainfall in the intertropical convergence zone around the equator is captured in this GOES satellite image (source): Also, these bands are climatological features and not meteorological features. This means their structure is not always apparent instantaneously but appear in a time-averaged view of the atmosphere. It turns out that NOAA released a time-lapsed video of 10 years worth of GOES-12 images and the bands become pretty apparent! @DavidHammen found another great video looking at the infrared signature caused by water vapor in the air by the GOES-13 satellite shows the bands better than looking at the visible cloud cover.
{ "domain": "physics.stackexchange", "id": 19877, "tags": "earth, atmospheric-science, weather" }
How do we code the matrix for a controlled operation knowing the control qubit, the target qubit and the $2\times 2$ unitary?
Question: Having n qubits, I want to build the unitary describing a controlled operation. Say, for example, you get as input a unitary, an index for a control qubit and another for a target. How would you code this unitary operation? Answer: Here's some pseudo code, where id(n) creates a $2^n\times 2^n$ identity matrix, and tensor(A,B,...) returns $A\otimes B\otimes\ldots$.

def cU(ctrl,targ,U,size):
    '''implement controlled-U with: control qubit ctrl, target qubit targ, within a set of size qubits'''
    #check input ranges
    assert 1<=ctrl<=size
    assert 1<=targ<=size
    assert ctrl<>targ
    assert ctrl,targ,size ∊ ℤ
    #ensure U is a 2x2 unitary
    assert U∊ℂ2x2
    assert U.U†=id(2)
    #the actual code
    if ctrl<targ:
        return id(size)+tensor(id(ctrl-1),id(1)-Z,id(targ-1-ctrl),U-id(1),id(size-targ))/2
    else:
        return id(size)+tensor(id(targ-1),U-id(1),id(ctrl-1-targ),id(1)-Z,id(size-ctrl))/2

However, remember that usually you're trying to calculate the action of a unitary on some state vector. It will be far more memory efficient to calculate that application directly, rather than first calculating the unitary matrix and applying it to the state vector. To understand where this formula came from, think about the two-qubit version, where the first qubit is the control qubit. You'd normally write the unitary as $$ |0\rangle\langle 0|\otimes\mathbb{I}+|1\rangle\langle 1|\otimes U. $$ Let's rewrite this as $$ (\mathbb{I}-|1\rangle\langle 1|)\otimes\mathbb{I}+|1\rangle\langle 1|\otimes U=\mathbb{I}\otimes\mathbb{I}+|1\rangle\langle 1|\otimes (U-\mathbb{I}). $$ It can be easier to write things in terms of Pauli matrices, so $$ |1\rangle\langle 1|=(\mathbb{I}-Z)/2. $$ To get the same unitary on a different number of qubits, you just need to pad with identity matrices everywhere.
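The $\mathbb{I} + |1\rangle\langle 1|_{\text{ctrl}}\otimes(U-\mathbb{I})_{\text{targ}}$ construction translates directly into NumPy. A sketch (function names are my own) that builds the full matrix and checks it reproduces CNOT when $U=X$:

```python
import numpy as np

def controlled_u(ctrl, targ, U, size):
    """Return the 2^size x 2^size matrix for U applied to qubit `targ`,
    controlled by qubit `ctrl` (both 1-indexed), via
    id + |1><1| at ctrl (x) (U - id) at targ (x) id elsewhere."""
    assert 1 <= ctrl <= size and 1 <= targ <= size and ctrl != targ
    P1 = np.array([[0, 0], [0, 1]], dtype=complex)   # |1><1| = (id - Z)/2
    factors = [np.eye(2, dtype=complex) for _ in range(size)]
    factors[ctrl - 1] = P1
    factors[targ - 1] = U - np.eye(2)
    correction = factors[0]
    for f in factors[1:]:
        correction = np.kron(correction, f)
    return np.eye(2 ** size, dtype=complex) + correction

X = np.array([[0, 1], [1, 0]], dtype=complex)
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=complex)
assert np.allclose(controlled_u(1, 2, X, 2), CNOT)

# The result is unitary, as it must be (CNOT is its own inverse):
cu = controlled_u(2, 1, X, 2)
assert np.allclose(cu @ cu.conj().T, np.eye(4))
```

As the answer notes, materialising the full $2^n\times 2^n$ matrix like this is only sensible for small $n$; applying the gate to a state vector directly scales far better.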
{ "domain": "quantumcomputing.stackexchange", "id": 317, "tags": "quantum-gate, programming, gate-synthesis" }
How does direction finding work when using a GSM mobile station and an additional handset?
Question: I am trying to understand the concept of a direction finder (DF) for mobile GSM devices. I have found the following description: http://www.pki-electronic.com/products/interception-and-monitoring-systems/gsm-direction-finder/ It seems to describe the following configuration: <IMSI catcher>-------- <additional handset (mobile)> | |-------- <target mobile device> According to the web site description, it is composed of an IMSI catcher (probably), i.e. an active base station, which forces the target mobile to transmit, and probably the attacker's base station (SDR radio) can detect the exact direction/signal strength of the attacked device. I am not sure how it works, but this is what I think: The base station (which is also a receiver) forces the target mobile to keep transmitting (by sending a silent SMS, which forces the mobile to transmit an ack). The additional handset is actually a second receiver. So we get 2 receivers (handset and base station), each of which can measure the signal strength. We can draw a map with 2 circles, and the points where they intersect will point us to the correct direction. Do you think this is how it works? Any comment is appreciated. EDIT: Can someone please give a description of how it is done and what the purpose of the handset is? Is it a receiver or just an antenna? Thanks. Answer: The fact that this thing only has a single antenna port and a loudness knob gives it away: it will just look for a transmitter in the band of interest, and you'll have to turn a directive antenna around until the tone gets loud. That's absolutely low-tech; it can't discern between GSM phones logged into the same network (and also active), and the whole setup will only work if the straightest path to the phone is also the one that delivers the most energy – which is not the case for e.g. a lot of the indoor and urban mobile channel scenarios that we use nowadays.
{ "domain": "dsp.stackexchange", "id": 5094, "tags": "frequency-spectrum" }
What does "n" stand for in n-akyldiols, n-alkanes, n-dicholoroalkanes?
Question: I have read two conflicting answers from a Google search: " 'n' in this context stands for normal or the latin equivalent. That is why, like other Latin and Greek abbreviations, it is in italics. It refers to straight chain alkanes. If we weren't dealing with n-alkanes we would have branched chain isomers to consider and potentially very different answers. It is old nomenclature from the days when adjacent carbons in a molecule were designated alpha,beta,gamma, etc and when acidity was given in 'N' units of normality." and " 'n' in science is like "x". It is just a reminder that a number needs to be there. You need to know that numbers can be replaced by x, x/y, n, or any variable and that groups can be replaced by their generic names." Answer: The "n" stands for "normal", which in this context means "straight-chain". It is not italicized, since it is not Latin, but English. In contrast, the prefixes "i" or "a" for "iso" and "anteiso" are Latin and are properly italicized, but rarely are. Your second choice, "n is like x", can be found in the subscripts of chemical formulae, where it does denote a variable amount.
{ "domain": "chemistry.stackexchange", "id": 6455, "tags": "nomenclature" }
Isothermal Irreversible process
Question: I'm in high school and wanted to get a few things cleared up. Isothermal process is defined as a thermodynamic process where temperature remains constant. Does this mean that temperature remains constant at every instant? Does an isothermal process have to be reversible? The Joule expansion of an ideal gas is an irreversible process where there is no net change in internal energy. Can it be called an isothermal process? If yes, is this the only way an isothermal irreversible process can be realised? What are other scenarios where such a process can be done? (Consider an ideal gas) Answer: Most people consider an isothermal irreversible process as one in which the system is held in contact with a constant temperature reservoir at the initial gas temperature throughout the process. This says nothing about the spatial and temporal variations in temperature interior to the system, only at its boundary with the reservoir. Even in the Joule expansion, except at the beginning and end, there can be temperature variations within the gas.
{ "domain": "physics.stackexchange", "id": 67788, "tags": "thermodynamics, reversibility" }
Spectrum of OFDM with raised cosine window
Question: I'm having trouble implementing OFDM with a raised cosine (RC) window in Matlab. I know how to generate an OFDM signal and how to show its spectrum; I just don't know how to generate the window extensions. Each OFDM symbol is extended by $TW$ samples at both ends to smooth the transitions between successive symbols; this is done mainly to improve the out-of-band spectrum and reduce the interference to adjacent channels. I'm hoping someone here knows how to do this. Answer: This spectral shaping technique is applied in the time domain, after adding the guard interval (GI). But I find it easier to add the GI and the cyclic extension for windowing in one step. Let $x(n)$ be an $N$-subcarrier OFDM symbol without guard interval. Then $W + G$ samples are copied to the beginning, accounting for the guard interval and windowing samples. Additionally, $W$ samples are copied to the end, also for windowing: $$ y(n) = \begin{cases} x(n+N) & \text{for} & -G-W \leq n \leq -1 \\ x(n) & \text{for} & 0 \leq n \leq N -1 \\ x(n-N) & \text{for} & N \leq n \leq N + W -1 \end{cases} $$ In the next step, the raised cosine function is applied to the first and last $W$ samples of $y(n)$, respectively. The windowing function $w(n)$ is given by: $$ w(n)= \begin{cases} \cos^2\left( \frac{n+G+1}{W-1} \frac{\pi}{2}\right) & \text{for} & -G-W \leq n \leq -G-1 \\ 1 & \text{for} & -G \leq n \leq N - 1 \\ \cos^2\left( \frac{n-N}{W-1} \frac{\pi}{2}\right) & \text{for} & N \leq n \leq N+W-1 \\ 0 & \text{otherwise} \end{cases} $$ $w(n)$ is similar but not equal to the transfer function of a raised cosine filter often used as an impulse shaper in single-carrier transmission systems.
The two differences are: (1) the raised cosine function is applied in time domain for OFDM systems and in frequency domain for single carrier systems and (2) the "flat top" is usually much longer for OFDM systems, whereas its length is in a fixed relation with the flanks' length, given by the roll-off factor, for single carrier systems. Finally, the OFDM symbol including GI and spectral shaping is calculated by $$ z(n) = w(n)y(n) $$ When transmitting several OFDM symbols $z_i(n)$, two consecutive symbols overlap at $W$ samples. The discrete transmit signal $u(n)$ is therefore given by $$ u(n)=\sum_{i=-\infty}^\infty z_i(n-i(N+G+W)) $$ The implementation in Matlab should now be straightforward by substituting $n$ with $n' = n + G+ W+ 1$ in the above equations. Leave a comment if not.
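The construction above can be sketched in a few lines of NumPy (Python rather than the asker's Matlab, but the port is one-to-one; `windowed_ofdm_symbol` is a hypothetical helper name). It builds $y(n)$ by cyclic extension and applies $w(n)$ using the array index $n' = n + G + W$:

```python
import numpy as np

def windowed_ofdm_symbol(X, G, W):
    """One OFDM symbol z(n) = w(n) y(n) from N subcarrier values X,
    guard length G and window length W (all in samples, W > 1)."""
    N = len(X)
    x = np.fft.ifft(X)                       # x(n), n = 0..N-1
    # y(n), n = -G-W..N+W-1: cyclic prefix of G+W samples, postfix of W
    y = np.concatenate([x[N - G - W:], x, x[:W]])
    n = np.arange(-G - W, N + W)             # "true" index of each sample
    w = np.ones(N + G + 2 * W)
    head = n <= -G - 1                       # rising raised-cosine flank
    tail = n >= N                            # falling flank
    w[head] = np.cos((n[head] + G + 1) / (W - 1) * np.pi / 2) ** 2
    w[tail] = np.cos((n[tail] - N) / (W - 1) * np.pi / 2) ** 2
    return w * y
```

Overlapping consecutive symbols by $W$ samples, as in the expression for $u(n)$, then produces the transmit signal.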
{ "domain": "dsp.stackexchange", "id": 820, "tags": "frequency-spectrum, ofdm" }
Equivalent of C-style "Static Local Variable" in Swift
Question: I'm porting some Obj-C code to Swift, and I've written the following code to allow me to deal with "static local variables", which do not exist in Swift. A static local variable has these requirements: it is shared between all instances that access it; it has one assignment method which only sets the value when it is first declared; it has another assignment method which sets its value normally (i.e. any time it is used). There has to be a better way than what I have coded. For starters, I know that using unsafeBitCast is not good practice. class Container<T:Any>{ var _memory:Any var memory:T { get { if let typed_value = self._memory as? T { // for value types, such as "String" return typed_value } else { // for types conforming to "AnyObject", such as "NSString" return unsafeBitCast( self._memory, T.self ) } } set { self._memory = newValue } } init( memory:Any ){ self._memory = memory } } var Static_Containers = [String:AnyObject]() func static_var <T:Any>( value:T, file: StaticString = __FILE__, line: UWord = __LINE__, col: UWord = __COLUMN__, fun: StaticString = __FUNCTION__ ) -> Container<T> { let unique_key = "FUNC_\(fun)__LINE\(line)_COL\(col)__FILE_\(file)" let needs_init = !contains( Static_Containers.keys, unique_key ) if needs_init { Static_Containers[unique_key] = Container<T>( memory:value ) } return Static_Containers[unique_key]! as! 
Container<T> } Here's a couple tests: func test_with_nsstring( str:NSString, init_only:Bool ) -> NSString { var stat_str = static_var( str ) if !init_only { stat_str.memory = str } return stat_str.memory } test_with_nsstring( "this should get set", true ) test_with_nsstring( "this should be ignored", true ) // only repeated declaration test_with_nsstring( "this should change the value", false ) test_with_nsstring( "as should this", false ) func test_with_int( i:Int, init_only:Bool ) -> Int { var stat_int = static_var( i ) if !init_only { stat_int.memory = i } return stat_int.memory } test_with_int( 0, true ) test_with_int( 1, true ) // only repeated declaration test_with_int( 2, false ) test_with_int( 3, false ) func test_with_optstr( optstr:String?, init_only:Bool ) -> String? { var stat_optstr = static_var( optstr ) if !init_only { stat_optstr.memory = optstr } return stat_optstr.memory } test_with_optstr( nil, true ) test_with_optstr( "this should be ignored", true ) // only repeated declaration test_with_optstr( "this should change the value", false ) test_with_optstr( "as should this", false ) When I test this code in a Playground, it seems to behave correctly. I'd just like a less nutty, and less brittle, way to accomplish this. Answer: The only work around for this that I've come up with thus far is something that looks like this: func foo() -> Int { struct Holder { static var timesCalled = 0 } return ++Holder.timesCalled; } It's pretty clunky in my opinion, and I'm not sure why Apple doesn't just allow straight static function variables, but this seems quite a bit cleaner than your approach.
{ "domain": "codereview.stackexchange", "id": 13569, "tags": "swift, static" }
Beyond the frequency cutoff in the Debye model
Question: I understand that when the wavelength is smaller than the atomic spacing, sound waves can't travel; hence, we need a frequency cutoff in the Debye model. But surely when it is the case, atoms are still oscillating; therefore, the oscillations must contribute some energy to the energy density. I am left wondering what exactly happens beyond the frequency cutoff. Are those oscillations so small that we can just ignore their very small energy contribution? Answer: "But surely when it is the case, atoms are still oscillating" I think not (if we don't count zero point energy). In the Debye model there are many frequencies corresponding to different modes of standing wave. When you reach the cut-off frequency, that's it: no more modes, no more ways of storing energy. You seem to want to revert to the Einstein model (individual atoms oscillating independently of each other's oscillations) beyond the Debye cut-off, but apart from the mixing of models, you may be forgetting that there's only one Einstein frequency, the supposed natural frequency of oscillation of independent atoms.
{ "domain": "physics.stackexchange", "id": 47182, "tags": "statistical-mechanics, solid-state-physics, frequency, phonons" }
Raising and Lowering Indices of a Perturbed Metric
Question: I have seen in GR that if a metric is a perturbation of some base metric $g^{(B)}_{\mu \nu}$ such that $g_{\mu \nu} = g^{(B)}_{\mu \nu} + h_{\mu \nu},$ then $g^{\mu \nu} = g^{(B) \mu \nu} - h^{\mu \nu}.$ Does this mean that $g^{(B) \mu \nu}$ is the inverse metric such that $ g^{(B) \alpha \beta} g^{(B)}_{\beta \gamma} = \delta^{\alpha}_{\gamma}$ and that $h^{\mu \nu}$ is obtained by raising two indices of $h_{\mu \nu}$ with $ g^{(B) \alpha \beta} $? (I haven't matched the indices on the last one, but hopefully you get my meaning: you raise one index with the inverse of the base metric, then raise the other one.) Answer: Yes; here you are working in the linear approximation, so all terms of higher order in $h$ are negligible, so $h^{\mu \nu} = g^{(B) \mu \alpha} g^{(B) \nu \beta} h_{\alpha \beta}$; if you replace $g^{(B)}$ by $g$, the difference will only be in terms of order $h^2, h^3$.
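This first-order statement is easy to check numerically; a small sketch (assuming a Minkowski base metric purely for illustration) verifies that $(g^{(B)}_{\mu\nu} + h_{\mu\nu})$ and $(g^{(B)\mu\nu} - h^{\mu\nu})$ are inverses up to $O(h^2)$ when both indices of $h$ are raised with the base metric:

```python
import numpy as np

# Check: (g_B + h)(g_B^{-1} - h^{..}) = 1 + O(h^2), with h^{..} raised
# using the base metric on both indices.
rng = np.random.default_rng(0)
g_B = np.diag([1.0, -1.0, -1.0, -1.0])      # base metric (Minkowski here)
g_B_inv = np.linalg.inv(g_B)

eps = 1e-4
h = eps * rng.standard_normal((4, 4))
h = 0.5 * (h + h.T)                          # h_{mu nu} is symmetric

h_up = g_B_inv @ h @ g_B_inv                 # raise both indices with g_B
approx_inv = g_B_inv - h_up

residual = (g_B + h) @ approx_inv - np.eye(4)
print(np.abs(residual).max())                # on the order of eps**2
```

The residual is quadratic in $h$, exactly as the linearized formula promises.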
{ "domain": "physics.stackexchange", "id": 69061, "tags": "general-relativity, differential-geometry, metric-tensor, approximations, linearized-theory" }
Encoding System that Assigns the Same Number of Bits to Each Character
Question: I am trying to get a binary string that has been converted from the text of a text file. I am able to get that, but the problem is that I need each character to be represented by the same number of bits, and that is not what I get (please see the Python code and corresponding output below). For example, the character i is represented by 1101001, which is 7 bits long, but the character ! is represented by 100001, which is 6 bits long. Is there any encoding/decoding system where each character takes the same number of bits? content = open('a.txt', 'r').read() test_str = content # using join() + ord() + format() ... Converting String to binary Binary = ' '.join(format(ord(i), 'b') for i in test_str) #Decimal=int(Binary, 2) # printing original string print("The original string is : " + str(test_str)) # printing result print("The string after Binary conversion : \n" + str(Binary)) Output: The original string is : Hi! Is there a solution? The string after Binary conversion : 1001000 1101001 100001 100000 1001001 1110011 100000 1110100 1101000 1100101 1110010 1100101 100000 1100001 100000 1110011 1101111 1101100 1110101 1110100 1101001 1101111 1101110 111111 Answer: The usual way to solve this problem is by adding leading zeroes. So i would still be represented by 1101001, while ! would be represented by 0100001. This is similar to how your digital clock might use 06:40 for 6:40, or 12:05 for 12:5.
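A minimal illustration of the zero-padding fix the answer describes, using Python's format spec `'08b'` (8 bits chosen here for byte alignment; 7 would equally work for pure ASCII):

```python
# Pad every character's code point to a fixed width of 8 bits
text = "Hi! Is there a solution?"
binary = ' '.join(format(ord(c), '08b') for c in text)
print(binary[:26])  # 01001000 01101001 00100001
```

For text that may contain non-ASCII characters, encode to bytes first (e.g. UTF-8) and format each byte instead of each code point.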
{ "domain": "cs.stackexchange", "id": 15667, "tags": "information-theory, coding-theory, encoding-scheme" }
Isn't the domain of a variable nothing but a constraint?
Question: In constraint programming we have variables and their domains, and then all the constraints; but if you look at the concept of a domain of a variable, it is nothing but another type of constraint: you are saying that this variable can take all these values. Is there any particular reason why a domain is defined as a separate concept from constraints? Answer: As you observe, restricting the domain of a variable has exactly the same effect as applying a unary constraint to it. One situation where you might prefer to use unary constraints rather than restricted domains is when you want to control very tightly the relations that are allowed to be used in constraints. For example, if you want to investigate the computational complexity of CSP with a particular class of constraint languages. On the other hand, such investigations often assume that all unary relations are included in the constraint language, which is equivalent to fixing a global domain but allowing the domain of any variable to be any subset of that. (This is known as the "conservative" case because of certain algebraic properties of the constraint languages.)
{ "domain": "cs.stackexchange", "id": 5128, "tags": "type-theory, typing, constraint-programming, curry-howard, constraint-satisfaction" }
Finding most common contiguous sub-lists in an array of lists
Question: Objective: Given a set of sequences ( eg: step1->step2->step3, step1->step3->step5) ) arranged in an array of lists, count the number of times every contiguous sub-lists occur Where I need your help: The code below works, but is very slow on my original dataset (100M sequences, with 100 unique steps). Can you help me make this more efficient? For larger datasets, is there a more efficient programming method than just brute force? My code currently depends on each element in the list being a single character. How can I adapt this code to handle multiple-character elements? Working code: from collections import Counter sequences = [['A','B'],['A','B','B'],['A','C','A','B']] counts = Counter() for sequence in sequences: input = "".join(sequence) for j in range(1,len(input)+1): counts = counts + Counter(input[i:i+j] for i in range(len(input)-(j-1))) print counts for x in counts: print x,":",counts[x]," times" Answer: 1. Write a test case When working on performance of code, the first thing to do is to make a reproducible test case. We'll need to have your code in a function: from collections import Counter def subsequence_counts_1(sequences): counts = Counter() for sequence in sequences: input = "".join(sequence) for j in range(1,len(input)+1): counts = counts + Counter(input[i:i+j] for i in range(len(input)-(j-1))) return counts and then we need some test data: import random import string def test_data(n, m, choices): """Return a list of n lists of m items chosen randomly from choices.""" return [[random.choice(choices) for _ in range(m)] for _ in range(n)] So let's try a small example: >>> data = test_data(50, 50, string.ascii_uppercase) >>> from timeit import timeit >>> timeit(lambda:subsequence_counts_1(data), number=1) 102.42408156394958 2. Don't build data structures by repeated addition The main problem with the code is this line: counts = counts + Counter(...) which has essentially the same effect as: new_counts = counts + Counter(...) 
counts = new_counts That is, it creates a new Counter object and populates it with the counts from both of the addends. In particular, this involves copying across the whole contents of counts into the new object. We can avoid all that copying by using the update method: def subsequence_counts_2(sequences): counts = Counter() for sequence in sequences: input = "".join(sequence) for j in range(1,len(input)+1): counts.update(input[i:i+j] for i in range(len(input)-(j-1))) return counts and this is a thousand times faster: >>> timeit(lambda:subsequence_counts_2(data), number=1) 0.08853063406422734 3. Further improvements There are a few more minor improvements that we could make: Instead of having separate iterations over i and j, we can iterate over both at the same time using itertools.combinations. Instead of calling Counter.update for each sequence, we could do all the work in one comprehension. This results in the following: from itertools import combinations def subsequence_counts_3(sequences): return Counter(seq[i:j] for seq in map(''.join, sequences) for i, j in combinations(range(len(seq) + 1), 2)) which is about 60% faster than subsequence_counts_2: >>> timeit(lambda:subsequence_counts_3(data), number=1) 0.052610712591558695 But it's still not going to be able to solve your problem in a reasonable amount of time: >>> data2 = test_data(100, 100, string.ascii_uppercase) >>> timeit(lambda:subsequence_counts_3(data2), number=1) 0.5441978382878006 So processing 100 million sequences of 100 characters would take more than half a million seconds, which is more than six days. 4. Using tuples If you want to handle other kinds of data, convert the sequences to tuples: def subsequence_counts_4(sequences): return Counter(seq[i:j] for seq in map(tuple, sequences) for i, j in combinations(range(len(seq) + 1), 2)) and then you can use any hashable items: >>> data3 = test_data(50, 50, list(range(10))) >>> subsequence_counts_4(data3)[(8, 8, 8)] 5
{ "domain": "codereview.stackexchange", "id": 16297, "tags": "python, performance, algorithm" }
How to show that a basis of the space of Dirac gamma-matrices is given by the following matrices?
Question: How to show that the 16 matrices $$ \mathbf E , \quad \gamma^{\mu}, \quad \gamma^{5} = \frac{i}{4}\gamma^{0}\gamma^{1}\gamma^{2}\gamma^{3}, \quad \eta^{\mu \nu} = -\frac{1}{4}\left(\gamma^{\mu}\gamma^{\nu} - \gamma^{\nu}\gamma^{\mu}\right), \quad \gamma^{5}\gamma^{\mu} \qquad (.1) $$ form a basis of the space of Dirac matrices, given that $$ [\gamma^{\mu}, \gamma^{\nu}]_{+} = 2g^{\mu \nu}\mathbf E ? $$ I don't want to calculate each of the matrices $(.1)$. Answer: So far I can see only 12 different nonzero matrices. Perhaps you missed $\gamma^5\gamma^\mu$. EDIT: There is a proof of linear independence of the 16 matrices, e.g., in Bogoliubov and Shirkov's Introduction to the Theory of Quantized Fields. They prove (using trace invariance under cyclic permutations and the anticommutation relations) that each of the matrices, except $\mathbf E$, has zero trace, assume that a nontrivial linear combination of the matrices vanishes, and show (by multiplying this linear combination separately by each of the matrices) that this assumption is in contradiction with the trace of each matrix (except $\mathbf E$) being zero.
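For a quick numerical sanity check (not a proof), one can build the 16 matrices in the standard Dirac representation and confirm that, flattened into vectors, they have rank 16. Note this sketch uses the conventional $\gamma^5 = i\gamma^0\gamma^1\gamma^2\gamma^3$ (the $i/4$ factor in the question looks like a typo) and the products $\gamma^\mu\gamma^\nu$ with $\mu<\nu$, which span the same space as the commutators:

```python
import numpy as np

# Dirac representation: gamma^0 = diag(1,1,-1,-1), gamma^i built from
# the Pauli matrices sigma_i.
I2 = np.eye(2)
s = [np.array([[0, 1], [1, 0]], dtype=complex),
     np.array([[0, -1j], [1j, 0]]),
     np.array([[1, 0], [0, -1]], dtype=complex)]

def block(a, b, c, d):
    return np.block([[a, b], [c, d]])

Z = np.zeros((2, 2), dtype=complex)
g = [block(I2, Z, Z, -I2)] + [block(Z, si, -si, Z) for si in s]  # g0..g3
g5 = 1j * g[0] @ g[1] @ g[2] @ g[3]

# The 16 candidates: E, 4 gammas, gamma5, 6 products g^mu g^nu (mu<nu,
# proportional to the commutators), 4 products gamma5 g^mu.
basis = [np.eye(4, dtype=complex)] + g + [g5]
basis += [g[mu] @ g[nu] for mu in range(4) for nu in range(mu + 1, 4)]
basis += [g5 @ gm for gm in g]

M = np.array([b.ravel() for b in basis])     # 16 x 16 matrix of vectors
print(np.linalg.matrix_rank(M))              # 16 -> linearly independent
```

Since the space of 4×4 complex matrices is 16-dimensional, rank 16 means these matrices are a basis.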
{ "domain": "physics.stackexchange", "id": 9849, "tags": "dirac-equation, matrix-elements" }
How do you know you can add thermal resistance but not thermal transmittance?
Question: I followed through the example on this Wikipedia page for calculating the thermal resistance of a composite material (a wall) composed of many layers: https://en.wikipedia.org/wiki/Thermal_transmittance#Calculating_thermal_transmittance Thickness Material Conductivity Resistance = thickness / conductivity — Outside surface — 0.04 K⋅m2/W 0.10 m (0.33 ft) Clay bricks 0.77 W/(m⋅K) 0.13 K⋅m2/W 0.05 m (0.16 ft) Glasswool 0.04 W/(m⋅K) 1.25 K⋅m2/W 0.10 m (0.33 ft) Concrete blocks 1.13 W/(m⋅K) 0.09 K⋅m2/W — Inside surface — 0.13 K⋅m2/W [...the] total resistance is 1.64 K⋅m2/W [...] thermal transmittance [is] 0.61 W/(m2⋅K). I understand intuitively that you cannot add the thermal transmittances together to get the final result, as this "does not make sense". But from a purely "dimensional analysis" point of view it is perfectly reasonable to add numbers with the same units (in this case W/(m2⋅K)). So I assume there is not some mathematical rule or heuristic that I can reference, and instead there must be some physics rule or heuristic that I can reference? I very nearly made this mistake and would like to know if there is some simple way to avoid making a similar mistake with other physical systems. Or is it just an inherent part of physics that mathematically you can do many things that do not make physical sense, and physics is the deliberate, systematic act of observing and experimenting to narrow down the number of "allowed" mathematical operations so as to only perform calculations that add value / cohere with reality? Perhaps I should post this to philosophy.stackexchange.com instead. Answer: The additivity of thermal resistances (and not thermal conductances) is derived from (1) our understanding of temperature (specifically, that a certain point can have only a single temperature), (2) conservation of energy, and (3) Fourier's law of conduction.
From this, we find that if two objects are placed in end-to-end contact, then any interface point has a single temperature, that energy entering (leaving) from the left side must equal the energy leaving (entering) from the right side, and that this energy flow must be $q=k_iA\frac{\Delta T_i}{L_i}=A\frac{\Delta T_i}{R_i}$ for object $i$, where $k$ is the thermal conductivity, $A$ is the cross-sectional area, $\Delta T$ is the temperature difference down the length $L$ of the object, and $R$ is the thermal resistance. The interface temperature is then found through algebra to be $$T_\text{interface}=\frac{\sum_i(k_iT_i/L_i)}{\sum_i (k_i/L_i)},$$ and so the heat flow is $q=A\frac{T_1-T_2}{\sum_i R_i}$, which holds generally for even $i>2$ if the end temperatures are taken as $T_1$ and $T_2$. Note that the thermal resistances (not the thermal conductances $1/R_i$) end up adding.
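The wall example from the question makes this concrete; a short sketch adds the layer resistances in series and only then inverts to get the transmittance (the layer data are the values quoted from Wikipedia):

```python
# Series thermal resistances add; the U-value is the reciprocal of the
# total. Surface layers come with resistances given directly; conductive
# layers give R = L / k.
layers = [
    ("outside surface", None, 0.04),
    ("clay bricks",     (0.10, 0.77), None),
    ("glasswool",       (0.05, 0.04), None),
    ("concrete blocks", (0.10, 1.13), None),
    ("inside surface",  None, 0.13),
]

total_R = 0.0
for name, layer, R in layers:
    if R is None:
        thickness, conductivity = layer
        R = thickness / conductivity     # R = L / k for a conductive layer
    total_R += R                         # resistances in series add

U = 1.0 / total_R                        # transmittance of the whole wall
print(f"R_total = {total_R:.2f} K.m2/W, U = {U:.2f} W/(m2.K)")
# R_total = 1.64 K.m2/W, U = 0.61 W/(m2.K)
```

Adding the transmittances instead would correspond to the layers sitting side by side as parallel heat paths, not stacked in series.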
{ "domain": "physics.stackexchange", "id": 90275, "tags": "thermal-conductivity" }
How can I connect OptiTrack and ROS together?
Question: Hi, I'm using Ubuntu 12.04 with ROS Indigo installed. I made a catkin workspace and cloned mocap_optitrack into it. I have another computer running Windows, with Motive installed. I want to get data into ROS from the OptiTrack system. Do I still need NatNet? Right now I don't know how to configure things. Could anyone tell me in detail how to configure Motive and mocap_optitrack? Must I use ros_vrpn_client? I'd appreciate it if you could help. PS: I followed the instructions at http://wiki.ros.org/mocap_optitrack and downloaded the package at https://github.com/h2r/mocap_optitrack. I can run /mocap_node successfully, and rqt_graph indicates that /mocap_node can send messages to pose_stamper_1. But when I run rostopic echo /rigid_body_1/pose, I get nothing. So I don't know if Motive really connects with ROS successfully. I don't know what to do next. Originally posted by Shusen on ROS Answers with karma: 27 on 2016-02-23 Post score: 0 Original comments Comment by ros_geller on 2016-02-24: I am using ros_vrpn_client for my OptiTrack with Motive setup, and it works perfectly fine. Just make sure they are on the same network with the firewall disabled and the Motive trackers activated. Of course you have to set up your launch file to match your Motive PC IP, port and tracker name. Comment by Shusen on 2016-02-24: Hi, I just followed your advice and configured my launch file: Motive PC IP, port, and tracker name, but in the terminal window it just said connection established, and no more, no messages at all. Is that a real connection? How can I see the specific data? Comment by ros_geller on 2016-02-25: Check to see if your node, ros_vrpn_client, is running with the command "rosnode list". Then check to see if the topic it publishes the data to is active with the command "rostopic list"; if you find your topic, named after your trackable id, then use "rostopic echo your_topic_name" to see the data. 
Comment by ros_geller on 2016-02-25: To use the data you have to make a node yourself and subscribe to the topic that ros_vrpn_client publishes to. See the ROS tutorials for help. Comment by Shusen on 2016-02-25: Thank you for your reply, I restarted my OptiTrack system and reconfigured it, and now I can get data using mocap_optitrack. Thanks! Comment by ros_geller on 2016-02-25: Cheers, glad it worked out. I'll post the comments as an answer, and then you can check it as solved. Comment by abhi15491 on 2017-06-13: hi, ros_geller can you tell me how to configure the launch file to match my Motive PC IP, port and tracker name. Comment by UbuntuROs on 2017-07-05: @ros_geller I ran rosnode list and vrpn_client_node did exist, but when I ran rostopic list it only showed me /rosout and /rosout_agg. Why is that? Answer: Follow the directions to configure Motive on your Windows system via the GUI on the Wiki page: http://wiki.ros.org/mocap_optitrack Originally posted by jackie with karma: 296 on 2016-02-23 This answer was ACCEPTED on the original site Post score: 1 Original comments Comment by Shusen on 2016-02-24: Thanks for your reply! I just followed the directions you told me, but when I run rosrun mocap_optitrack mocap_node it comes to this and crashes, I don't know why: 826 bytes received from 192.168.1.100:1511 Segmentation fault (core dumped) Comment by Shusen on 2016-02-24: How can I see the specific data from OptiTrack? Comment by jackie on 2016-02-24: Did you create a rigid body to publish and edit the mocap.yaml? Can you get a gdb backtrace of that segfault and post it as an issue on the GitHub repo for the package (https://github.com/ros-drivers/mocap_optitrack), to help the maintainers? Comment by Shusen on 2016-02-24: Yes, I created a rigid body and edited the mocap.yaml file: named rigid_body_1. Because I can't see any data, I don't know if it is a real connection. 
Comment by ros_geller on 2016-02-25: Try doing the same as I mentioned in my comment above, might work here too. Comment by Shusen on 2016-02-25: Thank you for your reply, I restarted my OptiTrack system and reconfigured it, and now I can get data. Thanks! Comment by UbuntuROs on 2017-07-05: @Shusen Could you please guide me through how to create a connection between Motive and ROS? I have ROS Indigo with Ubuntu 14.04 Comment by UbuntuROs on 2017-07-05: @jackie Could you please guide me through how to create a connection between Motive and ROS? I have ROS Indigo with Ubuntu 14.04 Comment by UbuntuROs on 2017-07-05: @ros_geller Could you please guide me through how to create a connection between Motive and ROS? I have ROS Indigo with Ubuntu 14.04
{ "domain": "robotics.stackexchange", "id": 23877, "tags": "ros, optitrack, mocap-optitrack" }
How to process natural language queries?
Question: I'm curious about natural language querying. Stanford has what looks to be a strong set of software for processing natural language. I've also seen the Apache OpenNLP library and the General Architecture for Text Engineering. There is an incredible number of uses for natural language processing, which makes the documentation of these projects difficult to absorb quickly. Can you simplify things for me a bit and, at a high level, outline the tasks necessary for performing a basic translation of simple questions into SQL? The first rectangle on my flow chart is a bit of a mystery. For example, I might want to know: How many books were sold last month? And I'd want that translated into Select count(*) from sales where item_type='book' and sales_date >= '5/1/2014' and sales_date <= '5/31/2014' Answer: Natural language querying poses many intricacies which can be very difficult to generalize. From a high level, I would start by trying to think of things in terms of nouns and verbs. So for the sentence: How many books were sold last month? You would start by breaking the sentence down with a parser, which will return a tree structure (a parse-tree image was shown here in the original answer). You can see that there is a subject, books, a compound verbal phrase indicating the past action of sell, and then a noun phrase where you have the time focus of a month. We can further break down the subject for modifiers: "how many" for books, and "last" for month. Once you have broken the sentence down you need to map those elements to the SQL language, e.g.: how many => count, books => book, sold => sales, month => sales_date (interval), and so on. 
Finally, once you have the elements of the language you just need to come up with a set of rules for how different entities interact with each other, which leaves you with: Select count(*) from sales where item_type='book' and sales_date >= '5/1/2014' and sales_date <= '5/31/2014' This is at a high level how I would begin, while almost every step I have mentioned is non-trivial and really the rabbit hole can be endless, this should give you many of the dots to connect.
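The mapping step can be illustrated with a deliberately tiny Python sketch; the lookup tables below are hypothetical stand-ins for what a real parser plus an ontology would produce, and the "last month" date-interval logic is omitted for brevity:

```python
# Toy keyword-to-SQL mapping, standing in for real parser output
noun_map  = {"books": ("item_type", "book")}   # subject -> column filter
verb_map  = {"sold": "sales"}                  # verb -> table
count_map = {"how many": "count(*)"}           # question word -> aggregate

def toy_query(question):
    q = question.lower()
    select = next(v for k, v in count_map.items() if k in q)
    table  = next(v for k, v in verb_map.items() if k in q)
    col, val = next(v for k, v in noun_map.items() if k in q)
    return f"select {select} from {table} where {col}='{val}'"

print(toy_query("How many books were sold last month?"))
# select count(*) from sales where item_type='book'
```

A real system would also need morphological normalization ("books" → "book"), synonym handling, and date arithmetic to produce the interval clause.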
{ "domain": "datascience.stackexchange", "id": 9561, "tags": "nlp" }
Is there a readily available liquid that's more dense than water and insoluble?
Question: I am looking for any liquid with a higher density than water that does not dissolve in or react with water, one that is readily available. Answer: Trichloroethylene is a common chemical solvent that has reasonably low toxicity; its density is ~1.5 g/ml. However, it is slowly becoming restricted. Tetrachloroethylene may also be considered. Dichloromethane has reasonably low toxicity, but its boiling point is too low. I would not use the liquids mentioned without a fume hood on a regular basis. One-time exposure is probably OK, but repeated exposure may lead to cancer. Chloroform may be considered, but I recommend avoiding it; it has cumulative liver toxicity. Tetrachloromethane should be avoided; it also has undesirable toxicity. Mercury MAY be considered: it is toxic if one is repeatedly exposed to its vapours, but it has very low vapour pressure and is reasonably cheap. But it has an annoying tendency to produce small drops that are very hard to collect, that migrate into the smallest holes, and that accumulate in the system, so it clearly requires a dedicated workplace. The liquids mentioned above are reasonably cheap, so if they are not restricted in your home country, you should be able to afford one. Unfortunately, those are all the ideas I have. Well, there are some more, but they either have an unreasonable cost or are too toxic. Bromine, for example, is toxic and, though it does not mix with water, partially dissolves in it.
{ "domain": "chemistry.stackexchange", "id": 1492, "tags": "water, solubility" }
Wigner-Ville Distribution (WVD) vs STFT for Spectral Analysis
Question: What are the pros and cons of using the Wigner-Ville Distribution for spectral analysis of a signal vs. taking the STFT? When is it appropriate to use one over the other, and is the WVD used in real-time implementations? Update: Since no answers were received for the first few days of this post, I tried asking ChatGPT the same question, and below is the response I got from it: The WVD is a time-frequency representation that provides a high degree of resolution in both time and frequency. It is useful for analyzing signals that have non-stationary properties, such as chirps and transient signals. The WVD can also distinguish between closely spaced frequency components, and can also show the instantaneous frequency of a signal. On the other hand, the STFT is a widely used method in signal processing, and is useful for analyzing signals that are stationary or have slow time-varying properties. It provides a high degree of frequency resolution, but its time resolution is limited. Unlike WVD, it is not able to distinguish between closely spaced frequency components, and it does not show the instantaneous frequency of a signal to the degree in which WVD can. The major disadvantage of the WVD is the computational cost. It has a computational complexity of O(N^2) which makes it impractical for large data sizes. Is the above response true or false? I had originally added this as a possible answer, removed it after an objection from a member (Jdip) over inaccuracy, and am adding it back as part of the question as requested by a member (OverLordGoldDragon) in the comments. Hope posting ChatGPT responses is not against the rules or anything. Peter K. also mentioned a link to his publication in the comments of the deleted answer, which I am adding here as well. Peter K. also mentioned in OverlordGoldDragon's answer that the WVD is not very useful in the presence of noise; a separate question has been added here for discussion on that in case anyone is interested. 
Answer: I preface this answer with that I know little about WVD and never worked with it, but do know time-frequency, and synchrosqueezing, which shares similarities. Part of my answer will be for SSQ. Re: ChatGPT The WVD is a time-frequency representation that provides a high degree of resolution in both time and frequency. No, oversimplified It is useful for analyzing signals that have non-stationary properties, such as chirps and transient signals. The WVD can also distinguish between closely spaced frequency components, and can also show the instantaneous frequency of a signal. Yes On the other hand, the STFT is a widely used method in signal processing, and is useful for analyzing signals that are stationary or have slow time-varying properties. So is DFT, misses the point It provides a high degree of frequency resolution, but its time resolution is limited. Nonsense, the whole point is we can tune it Unlike WVD, it is not able to distinguish between closely spaced frequency components, No and it does not show the instantaneous frequency of a signal to the degree in which WVD can. Yes The major disadvantage of the WVD is the computational cost. It has a computational complexity of O(N^2) which makes it impractical for large data sizes. No Re: dorian111 When increasing the sampling length to improve the frequency domain resolution, the time domain resolution will deteriorate. I can't tell if this refers to WVD or STFT. For STFT or any localized time-frequency method, it's wrong - the sampling rate, not duration, affects time resolution. WVD appears to have a global temporal operator, so it may be true there. For WVD, it is generally believed that it is not limited to the uncertainty principle and can achieve the maximum mathematical accuracy of frequency domain resolution. No method completely escapes Heisenberg, but it's true we can achieve practically perfect localization for certain classes of signals. 
The general conclusion is that the accuracy of WVD is much higher than that of STFT. No. This isn't even true for synchrosqueezing, which significantly improves upon WVD. The worst case in SSQ vs STFT is close, I can't say SSQ is better, and certainly not "much better". But it is true that the best case for SSQ is far superior. The disadvantage is that the frequency spectrum will appear pseudo-frequency when there are multiple frequency signals in the data. Unsure what this means, WVD is time-frequency, there's no "frequency spectrum" in the standard sense. It's true that introducing additional intrinsic modes worsens WVD, esp. with "quadratic interference" (that SSQ lacks). Compared with STFT, the calculation cost is much higher, and the performance is the difference between $O(N^2\log(N))$ and $O(kN\log(N))$. When the STFT sliding length is taken as the minimum limit of 1, k=N, and the performance of WVD is the same. Not necessarily. The compute burden depends on what we need WVD for, and whether we window. Of chief consideration is information, and how much we lose, which can be measured - and conversely, how much we gain by computing the full WVD as opposed to a part of it. The original MATLAB synchrosqueezing toolbox used n_fft=N, with logic that DFT is length N, and which most will agree is completely unnecessary. Without windowing, I imagine WVD is like a fancified Hilbert transform and struggles with more than one component - see Figure 4.18 and below. Windowing, particularly with kernels which make WVD complex, enables tremendous optimization, similar to CWT. These optimizations are unrealized in most code... for now. $$X(\tau,f)=\int_{-\infty}^\infty x(t)w(t-\tau)e^{-j2\pi f t}dt $$ This is a correct STFT formulation which is what most libraries implement, but I'd like to note that it's bad. $$W_x(t,f)=\int_{-\infty}^{\infty}x(t+\frac\tau{2})x^{*}(t-\frac\tau{2})e^{-j2\pi f\tau}d\tau $$ From this formula, something sticks out: $x(... \pm \tau)$. 
This screams boundary effects - a major disadvantage compared to STFT. Re: original question Two major advantages of the spectrogram (abs(STFT)) over WVD, or at least SSQ, are stability and sparsity. SSQ, as a feature, is quite brittle to noise (in some ways, yet also more robust in other ways, see related). Sparsity may come as a surprise, as SSQ claims that's its advantage over STFT - and it is - but the form of sparsity that matters a lot is subsampleability. Note, hop_size is just the subsampling factor for STFT. We can hop in the spectrogram because there's high redundancy along time, and doing so loses little information. Not the case with SSQ, which generates rough and spiky time-frequency geometries - subsampling it a lot means losing a lot, and not subsampling likewise means keeping too many data points to be useful for machine learning, not because of data size, but correlated features prone to overfitting. As I understand, WVD is more a measurement tool - it can be used to describe time-frequency characteristics of time-frequency kernels, e.g. wavelets. Though really I don't know where its applicability ends. Lastly, a third major advantage: STFT doesn't straight up invent signals: Figures 4.18 & 4.20, Wavelet Tour. Left is plain WVD, such interference is a dealbreaker for most applications. Right is windowed WVD, which attenuates interferences, but "reduces the time-frequency resolution" (under Eq 4.156).
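To make the quadratic kernel and the compute-cost discussion concrete, here is a minimal pure-Python sketch of a discrete (pseudo) Wigner-Ville distribution. It is my own illustration, not code from the post: lags are simply truncated at the signal boundaries, no lag window is applied, and the naive triple loop is $O(N^3)$ (FFT-ing over the lag axis per time instant would bring it to the $O(N^2\log N)$ discussed above).

```python
import cmath

# Minimal pseudo-WVD sketch (illustration only): lags truncated at the
# boundaries, no lag window. Because the lag sequence is conjugate-
# symmetric, the transform is real up to floating-point error.
def wvd(x):
    N = len(x)
    W = [[0.0] * N for _ in range(N)]
    for n in range(N):
        L = min(n, N - 1 - n)              # largest symmetric lag available
        for k in range(N):
            acc = 0j
            for m in range(-L, L + 1):
                # the quadratic kernel x(t + tau/2) * x*(t - tau/2)
                acc += (x[n + m] * x[n - m].conjugate()
                        * cmath.exp(-2j * cmath.pi * k * m / N))
            W[n][k] = acc.real
    return W

# A complex tone at f0 = 1/8 cycles/sample localizes at bin 2*f0*N = 16;
# the factor 2 reflects the WVD's well-known lag-axis frequency doubling.
N, f0 = 64, 1 / 8
x = [cmath.exp(2j * cmath.pi * f0 * n) for n in range(N)]
W = wvd(x)
peak_bin = max(range(N), key=lambda k: abs(W[N // 2][k]))
```

For a single tone the localization is essentially perfect, which matches the "instantaneous frequency" claim; adding a second component is where the cross-term interference shown in Mallat's Figure 4.18 appears.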
{ "domain": "dsp.stackexchange", "id": 11720, "tags": "matlab, signal-analysis, stft, time-frequency, real-time" }
Could gravity accelerate light?
Question: Gravity causes anything with energy to accelerate toward the source. Black holes, for example, have such strong gravity that they pull in light and don't let any escape. But can acceleration still apply to light? The speed of light is constant, of course, but why are photons affected by gravity yet aren't accelerated by it? Edit: My main question is why photons aren't affected in the same way as most other particles. I'm perfectly aware that it cannot surpass lightspeed, but I want to know what makes it unaffected by acceleration while other particles are affected. Answer: Photons are blue-shifted when attracted by gravity (I mean - moving towards a mass, not moving at right angles to the gravitational field like in an orbit). They can't go faster, but their energy goes up.
{ "domain": "physics.stackexchange", "id": 26391, "tags": "general-relativity, gravity, photons, speed-of-light, faster-than-light" }
Why do wave functions need to be normalized? Why aren't they normalized to begin with?
Question: Before I started studying quantum mechanics, I thought I knew what normalization was. Just pulling off Google, here's a definition that matches what I've understood normalization to mean: Normalization -- to multiply (a series, function, or item of data) by a factor that makes the norm or some associated quantity such as an integral equal to a desired value (usually 1). Most often I have seen normalization that normalizes to 1 or 100% or something like that. For instance, isn't putting things in percentages a kind of normalization? If I take a quiz and get 24 / 25 points, then I "normalize" this by saying I got 96%. That's what I understood normalization to be. Why I am Confused Now Ever since I started studying quantum mechanics, I have felt confused by the term normalization. Let me quote this portion from Griffiths to illustrate an example of how he uses the term: We return now to the statistical interpretation of the wave function, which says that $|\Psi(x,t)|^2$ is the probability density for finding the particle at point $x$, at time $t$. It follows that the integral of $|\Psi|^2$ must be 1 (the particle's got to be somewhere). \begin{equation} \int^{+\infty}_{-\infty} |\Psi(x,t)|^2 dx = 1 \end{equation} Without this, the statistical interpretation would be nonsense. However, this requirement should disturb you: After all, the wave function is supposed to be determined by the Schrödinger equation --- we can't go imposing an extraneous condition on $\Psi$ without checking that the two are consistent. Well, a glance at [the time-dependent Schrödinger equation] reveals that if $\Psi(x,t)$ is a solution, so too is $A\Psi(x,t)$, where $A$ is any (complex) constant. What we must do, then, is pick this undetermined multiplicative factor so as to ensure $\int^{+\infty}_{-\infty} |\Psi(x,t)|^2 dx = 1 $ is satisfied. This process is called normalizing the wave function. 
I get the idea we need the probability distribution $\rho$ to be 1 over the whole position space. That makes sense and is obvious. So the integral makes sense. But I don't understand a couple things: What was the wave function like prior to normalization? Why did it need to be normalized in the first place? To use my quiz analogy, why wasn't the test out of 100 points to begin which in which case no normalization would be needed. 96% would be 96 points. Why if $\Psi(x,t)$ is a solution, so too is $A\Psi(x,t)$? Perhaps an answer could comment on how my initial definition of normalization relates to normalizing the wave function. Also, if you like to write, adding a comment or two about Dirac normalization would be awesome. Answer: Let us take a canonical coin toss to examine probability normalization. The set of states here is $\{|H\rangle,|T\rangle\}$. We want them to occur in equal amounts on average, so we suggest a simple sum with unit coefficients: $$\phi=|H\rangle+|T\rangle$$ When looking at probabilities, we fundamentally care about ratios. Since the ratio of the coefficients is one, we get a 1:1 distribution. We simply define the unnormalized probability as $$P(\xi)=|\langle\xi|\phi\rangle|^2$$ Plugging the above state in, we see we get a probability of 1 for both states. The probability (as we normally think of it), is the unnormalized probability divided by the total probability: $$P(\xi)=\frac{|\langle\xi|\phi\rangle|^2}{\langle\phi|\phi\rangle}$$ If we make the conscious choice of $\langle\phi|\phi\rangle$ every time, we don't have to worry about this normalized definition. For your 2., note that the SE is linear. Thus $A\Psi$ is also a solution.
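The coin-toss argument above can be checked numerically. A small sketch of my own (plain Python dicts standing in for the two-state system, no QM library): only amplitude ratios matter once we divide by $\langle\phi|\phi\rangle$, so an arbitrary overall constant $A$ (allowed because the Schrödinger equation is linear) drops out of the normalized probability.

```python
# Normalized probability P(outcome) = |<outcome|phi>|^2 / <phi|phi>,
# as in the answer's final formula.
def prob(state, outcome):
    total = sum(abs(a) ** 2 for a in state.values())   # <phi|phi>
    return abs(state[outcome]) ** 2 / total

phi = {'H': 1 + 0j, 'T': 1 + 0j}                # |phi> = |H> + |T>
A = 3 - 4j                                      # arbitrary complex constant
scaled = {s: A * a for s, a in phi.items()}     # A|phi> is also a solution

p = prob(phi, 'H')              # 1/2
p_scaled = prob(scaled, 'H')    # still 1/2: |A|^2 cancels in the ratio
```

The same cancellation is why an unnormalized solution of the Schrödinger equation is harmless: normalizing just fixes $A$ once so the division no longer has to be carried around.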
{ "domain": "physics.stackexchange", "id": 20053, "tags": "quantum-mechanics, wavefunction, superposition, normalization, linear-systems" }
Solid hydrophilic transparent materials that don't dissolve in water
Question: I am looking for hydrophilic transparent materials which are solid at room temperature and that don't dissolve in water. I need these for an experiment in which I condense droplets on a plate made out of, or coated with, the material. Just thinking `chemically' I imagine that any polymer with a lot of hydroxyl groups will be hydrophilic (like polyvinyl alcohol), but PVA is water-soluble. Polyethylene terephthalate (PET) is reasonable, but still has a contact angle around 70 degrees (common polymer contact angles). Does anybody know transparent materials with contact angles closer to 0 that don't dissolve in water? An additional `demand' is that the material does not break easily (like glass). Answer: Several titanium dioxide coatings are commercially available, commonly used to kill bacteria or aid UV stability (I think). When exposed to light, the titanium dioxide has a contact angle of almost 0 degrees. You should be able to deposit a thin layer on something like a PVA or polycarbonate sheet. The commercial coatings are somewhat expensive, though, as far as I can tell, and the common process for doing it yourself involves heating the surface in a mid-temperature flame for 10-15 minutes, which excludes pretty much all plastics I know of that would otherwise be suitable.
{ "domain": "chemistry.stackexchange", "id": 379, "tags": "physical-chemistry, water, polymers, surface-chemistry" }
How do we prove that the initial velocity is equal to final velocity relative to centre of mass?
Question: For an elastic collision it is stated that each object's initial speed relative to the centre of mass is equal to its final speed relative to the centre of mass. How do we prove that? Answer: Start with the two pertinent conservation laws for elastic collisions: kinetic energy and momentum. Remember that momentum is a vector. In the center of mass frame, the total momentum is zero. That will get you started. Do the work for two particles first. As an aside you should try to show the total momentum is zero in the CoM frame by example by taking two different mass particles with different velocities colliding head-on in the lab. Calculate the velocity of the CoM, find the velocities of the particles relative to the CoM, then find the momentum.
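The suggested two-particle exercise can also be checked numerically. The sketch below (my own, not a proof) uses the standard lab-frame results of imposing conservation of momentum and kinetic energy simultaneously in 1-D, and shows that relative to the centre of mass each velocity merely flips sign, so each speed is preserved:

```python
# 1-D elastic collision: final velocities from conserving momentum
# and kinetic energy simultaneously (standard textbook result).
def elastic_1d(m1, u1, m2, u2):
    v1 = ((m1 - m2) * u1 + 2 * m2 * u2) / (m1 + m2)
    v2 = ((m2 - m1) * u2 + 2 * m1 * u1) / (m1 + m2)
    return v1, v2

m1, u1, m2, u2 = 2.0, 3.0, 1.0, 0.0
v1, v2 = elastic_1d(m1, u1, m2, u2)

# Centre-of-mass velocity -- unchanged by the collision (total momentum).
V = (m1 * u1 + m2 * u2) / (m1 + m2)

# Velocities relative to the CoM before and after: equal magnitude,
# opposite sign.
rel_before = (u1 - V, u2 - V)
rel_after = (v1 - V, v2 - V)
```

With these numbers $V = 2$, the relative velocities go from $(1, -2)$ to $(-1, 2)$, and both total momentum and kinetic energy are unchanged.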
{ "domain": "physics.stackexchange", "id": 20108, "tags": "homework-and-exercises, newtonian-mechanics, momentum, conservation-laws, collision" }
Why isn't the curvature scale in Robertson-Walker metric dynamic?
Question: $$ds^2=-c^2dt^2+a(t)^2 \left[ {dr^2\over1-k{r^2\over R_0^2}}+r^2d\Omega^2 \right]$$ This is the FRW metric; here k=0 for flat space, k=1 for spherical space, k=-1 for hyperbolic space. $R_0$ is the curvature scale for the corresponding maximally symmetric 3-dimensional space. Why can't $R_0$ be a dynamical $R(t)$? There seems to be no problem on symmetry grounds, since the time slice of the manifold is definitely still a maximally symmetric space, suiting the requirement of cosmology. Meanwhile this is a distinct new metric, because a dynamical $R_0$ can't be simply rescaled away. Answer: Your generalized metric fails to be homogeneous. It's isotropic, but only around the privileged center of the universe ($r=0$). One way of showing that is to compute the Ricci scalar curvature; you'll see that it depends on $r$ (but it's a nasty calculation). Informally, you could think of it this way. Imagine a pie-slice-shaped path with two straight edges of equal length that meet at $r=0$, and a curved edge connecting the other ends at fixed $r>0$. Along this path, there are a bunch of comoving galaxies, so as the universe expands, the whole path (made of galaxies) should expand uniformly. But the length of the straight edges depends on $R(t)$ while the length of the curved edge doesn't, so this can only happen if $R(t)$ is constant. Also note that if $k=+1$ then only $r$ coordinates that are $\le R(t)$ are meaningful, and therefore, if $R(t)$ is not constant, a comoving galaxy at some $r$ strictly between the minimum and maximum of $R(t)$ will find itself suddenly with "nowhere to be" (in its future or its past, or both).
{ "domain": "physics.stackexchange", "id": 97685, "tags": "cosmology, differential-geometry, metric-tensor, space-expansion, curvature" }
Find The One Element In An Array That is Different From The Others
Question: https://www.codewars.com/kata/find-the-stray-number/train/javascript I solved this challenge by sorting from least to greatest, and checking if the first element in the array matched the second element in the array. If the first element does not match, then the different element is the first element. If the first element does match, then the different element is the last element. function stray(numbers) { numbers = numbers.sort((a, b) => a - b); if (numbers[0] !== numbers[1]) { return numbers[0]; } else { return numbers[numbers.length - 1]; } } console.log(stray([17, 17, 3, 17, 17, 17, 17])); I'm wondering if / how this can be done with the filter() method instead? Answer: This is a similar idea to the other answers here, but the implementation is a bit different. First of all, we can assume that the array's length is at least 3, since it needs to have at least two of the same values and one different value. Let's start by handling the case where the stray value is not in the first element. We could simply write: a.find(v => v != a[0]) That is, find an element that's different from the first element. But what if the stray element comes first in the array? We can check if the first two elements differ. If they do, then the stray is either in the first or second position, so the third element is not a stray. In this case, we can check against the third element instead of the first; otherwise we check against the first element as before, thus: a.find(v => a[0] != a[1] ? v != a[2] : v != a[0]) This is a bit code-golfey and not very readable, so I wouldn't recommend it in production, but it may be of some interest as a curiosity. 
It may be worth noting that this solution appears to perform quite well, and can be further optimized by doing the inequality check on the first two elements before invoking find, and by using the third parameter to find to access the array, making the callback a pure function and eliminating the need to reference the array via the closed-over variable, for example: a.find(a[0] != a[1] ? (v, i, a) => v != a[2] : (v, i, a) => v != a[0])
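To answer the `filter()` part of the question directly, here is one sketch of my own (not from the kata): the stray is the value that occurs exactly once, i.e. the value whose first and last occurrence are at the same index. Note this approach is O(n²), since `indexOf`/`lastIndexOf` scan the array for every element, unlike the O(n log n) sort or O(n) `find` approaches.

```javascript
// filter() keeps only values occurring exactly once; the stray is the
// sole survivor. Quadratic, but arguably the most direct filter() answer.
const strayByFilter = (numbers) =>
  numbers.filter(
    (v) => numbers.indexOf(v) === numbers.lastIndexOf(v)
  )[0];

console.log(strayByFilter([17, 17, 3, 17, 17, 17, 17])); // 3
```

For the kata's small inputs the quadratic cost is irrelevant, but for large arrays the `find`-based solution above is the better choice.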
{ "domain": "codereview.stackexchange", "id": 35216, "tags": "javascript, programming-challenge, array" }
Minimal regex engine
Question: A few months back, I posted a state machine for review. Thanks to the feedback, I was able to greatly simplify the implementation. I am posting the revised version, together with the Regex class which calls it. I think I'm mainly looking for feedback on structure. The relationship between my Node class and StateMachine class feels a little tangled; I'm not always sure which method ought to belong to which class. I think the way I communicate the next token of my lexer is also cumbersome. state_machine.py class Node: def __init__(self, value): self.value = value self.children = set() def empty(self): return self.value == '' def add_child(self, other): self.children.add(other) def find_parent_of_terminal(self, terminal): """ We assume that there shall only be one node leading to the terminal and that there is only one terminal """ visited = set() to_explore = {self} while to_explore: current = to_explore.pop() visited.add(current) if terminal in current.children: # If this fails, then there is a bug in union, concat, or kleene assert len(current.children) == 1 return current to_explore.update({node for node in current.children if node not in visited}) return None def leads_to(self, value): """ Return True iff argument can be reached by traversing empty nodes """ return bool(self._get_node_if_reachable(value)) def _get_node_if_reachable(self, value): for node in self.children: while node and node.empty(): if node == value: return node node = node._get_node_if_reachable(value) return None def __repr__(self): result = '{} : ['.format(self.value) for node in self.children: result += str(node.value) + ', ' result += ']\n' return result def EmptyNode(): return Node('') class StateMachine: def __init__(self): self.initial = EmptyNode() self.terminal = EmptyNode() def __repr__(self): return str(self.initial) @staticmethod def from_string(source): dfa = StateMachine() nodes = [Node(char) for char in source] dfa.initial.add_child(nodes[0]) for i in range(len(source) 
- 1): nodes[i].add_child(nodes[i + 1]) nodes[-1].add_child(dfa.terminal) return dfa @staticmethod def from_set(chars): dfa = StateMachine() first = EmptyNode() penultimate = EmptyNode() dfa.initial.children = {first} for char in chars: char_node = Node(char) first.add_child(char_node) char_node.add_child(penultimate) penultimate.add_child(dfa.terminal) return dfa def concat(self, other): other.initial.find_parent_of_terminal(other.terminal).children = {self.terminal} self.initial.find_parent_of_terminal(self.terminal).children = other.initial.children return self def union(self, other): self.initial.children.update(other.initial.children) this_last = self.initial.find_parent_of_terminal(self.terminal) other_last = other.initial.find_parent_of_terminal(other.terminal) empty_node = EmptyNode() empty_node.add_child(self.terminal) this_last.children = {empty_node} other_last.children = {empty_node} return self def kleene(self): penultimate_node = self.initial.find_parent_of_terminal(self.terminal) dummy = EmptyNode() penultimate_node.children = {dummy} dummy.add_child(self.terminal) penultimate_node.add_child(self.initial) self.initial.add_child(dummy) return self def _get_next_state(self, nodes, value, visited=None): if visited is None: visited = set() result = set() for node in nodes: visited.add(node) for child in node.children: if child.empty() and child not in visited: result.update(self._get_next_state([child], value, visited)) elif child.value == value: result.add(child) return result def is_match(self, nodes): for node in nodes: if node .leads_to(self.terminal): return True return False def match(self, source): """ Match a target string by simulating a NFA :param source: string to match :return: Matched part of string, or None if no match is found """ result = '' last_match = None current = {self.initial} for char in source: next_nodes = self._get_next_state(current, char) if next_nodes: current = next_nodes result += char if self.is_match(current): last_match 
= result else: break if self.is_match(current): last_match = result return last_match regex.py import collections import enum import string from state_machine import StateMachine class Token(enum.Enum): METACHAR = 0 CHAR = 1 ERROR = 2 class LexResult(collections.namedtuple('LexResult', ['token', 'lexeme'])): def __bool__(self): return self.token != Token.ERROR class RegexLexer(object): metachars = '-|[]^().*' def __init__(self, pattern: str): self._pattern = pattern self._pos = 0 self._stack = [] def peek(self) -> LexResult: if self._pos >= len(self._pattern): return LexResult(Token.ERROR, '') next_char = self._pattern[self._pos] if next_char in self.metachars: token = Token.METACHAR else: token = Token.CHAR return LexResult(token, next_char) def _eat_token_type(self, token: Token) -> LexResult: next_match = self.peek() if next_match.token != token: return LexResult(Token.ERROR, next_match.lexeme) self._pos += 1 return next_match def _eat_token(self, match: LexResult) -> LexResult: next_match = self.peek() if next_match == match: self._pos += 1 return next_match return LexResult(Token.ERROR, next_match.lexeme) def mark(self): self._stack.append(self._pos) def clear(self): self._stack.pop() def backtrack(self): self._pos = self._stack.pop() def eat_char(self, char=''): if char: return self._eat_token(LexResult(Token.CHAR, char)) return self._eat_token_type(Token.CHAR) def eat_metachar(self, metachar): return self._eat_token(LexResult(Token.METACHAR, metachar)) class Regex(object): CHARACTERS = string.printable def __init__(self, pattern: str): """ Initialize regex by compiling provided pattern """ self._lexer = RegexLexer(pattern) self._state_machine = self.parse() def match(self, text: str) -> str: """ Match text according to provided pattern. 
Returns matched substring if a match was found, or None otherwise """ assert self._state_machine return self._state_machine.match(text) def parse(self): nfa = self.parse_simple_re() if not nfa: return None while True: self._lexer.mark() if not self._lexer.eat_metachar('|'): self._lexer.backtrack() return nfa next_nfa = self.parse_simple_re() if not next_nfa: self._lexer.backtrack() return nfa nfa = nfa.union(next_nfa) self._lexer.clear() def parse_simple_re(self): """ <simple-re> = <basic-re>+ """ nfa = self.parse_basic_re() if not nfa: return None while True: next_nfa = self.parse_basic_re() if not next_nfa: break nfa = nfa.concat(next_nfa) return nfa def parse_basic_re(self): """ <elementary-re> "*" | <elementary-re> "+" | <elementary-re> """ nfa = self.parse_elementary_re() if not nfa: return None next_match = self._lexer.peek() if not next_match or next_match.token != Token.METACHAR: return nfa if next_match.lexeme == '*': self._lexer.eat_metachar('*') return nfa.kleene() if next_match.lexeme == '+': self._lexer.eat_metachar('+') return nfa.union(nfa.kleene()) return nfa def parse_elementary_re(self): """ <elementary-RE> = <group> | <any> | <char> | <set> :return: DFA """ self._lexer.mark() nfa = self.parse_group() if nfa: self._lexer.clear() return nfa self._lexer.backtrack() if self._lexer.eat_metachar('.'): return StateMachine.from_set({x for x in self.CHARACTERS}) char = self._lexer.eat_char() if char: return StateMachine.from_string(char.lexeme) set_chars = self.parse_set() if not set_chars: return None return StateMachine.from_set(set_chars) def parse_group(self): """ <group> = "(" <RE> ")" :return: DFA """ if not self._lexer.eat_metachar('('): return None state_machine = self.parse() if not state_machine: return None if not self._lexer.eat_metachar(')'): return None return state_machine def parse_range(self) -> {str}: """ <range> = <CHAR> "-" <CHAR> """ first = self._lexer.eat_char() if not first: return set() if not self._lexer.eat_metachar('-'): return 
set() last = self._lexer.eat_char() if not last: return set() return {chr(x) for x in range(ord(first.lexeme), ord(last.lexeme) + 1)} def parse_set_item(self) -> {str}: """ <set item> = <range> | <char> """ self._lexer.mark() set_item = self.parse_range() if set_item: self._lexer.clear() return set_item self._lexer.backtrack() next_item = self._lexer.eat_char() return {next_item.lexeme} if next_item else set() def parse_set_items(self) -> {str}: """ <set items> = <set item>+ """ items = self.parse_set_item() if not items: return set() next_items = self.parse_set_item() while next_items: items.update(next_items) next_items = self.parse_set_item() return items def parse_positive_set(self) -> {str}: if not self._lexer.eat_metachar('['): return set() set_items = self.parse_set_items() if not set_items: return set() if not self._lexer.eat_metachar(']'): return set() return set_items def parse_negative_set(self) -> {str}: if not self._lexer.eat_metachar('['): return set() if not self._lexer.eat_metachar('^'): return set() set_items = self.parse_set_items() if not set_items: return set() if not self._lexer.eat_metachar(']'): return set() return set(string.printable).difference(set_items) def parse_set(self) -> {str}: """ Parse something like [a-z9] and return the set of allowed characters """ self._lexer.mark() set_items = self.parse_positive_set() if set_items: self._lexer.clear() return set_items self._lexer.backtrack() return self.parse_negative_set() Finally, a small set of unit tests to show usage: import unittest from state_machine import StateMachine from regex import Regex class TestStateMachine(unittest.TestCase): def test_union(self): state_machine = StateMachine.from_string('abc') state_machine = state_machine.union(StateMachine.from_string('def')) self.assertEqual(state_machine.match('abc'), 'abc') self.assertEqual(state_machine.match('def'), 'def') self.assertIsNone(state_machine.match('de')) def test_kleene(self): state_machine = 
StateMachine.from_string('abc') state_machine = state_machine.kleene() self.assertEqual(state_machine.match(''), '') self.assertEqual(state_machine.match('abc'), 'abc') self.assertEqual(state_machine.match('abcabc'), 'abcabc') self.assertEqual(state_machine.match('abcDabc'), 'abc') def test_concat(self): state_machine = StateMachine.from_string('ab') state_machine = state_machine.concat(StateMachine.from_string('cd')) self.assertEqual(state_machine.match('abcd'), 'abcd') self.assertEqual(state_machine.match('abcde'), 'abcd') self.assertIsNone(state_machine.match('abc')) class TestRegex(unittest.TestCase): def test_identifier_regex(self): regex = Regex('[a-zA-Z_][a-zA-Z0-9_]*') self.assertEqual(regex.match('a'), 'a') self.assertFalse(regex.match('0')) self.assertTrue(regex.match('a0')) self.assertEqual(regex.match('a0_3bd'), 'a0_3bd') self.assertEqual(regex.match('abd-43'), 'abd') def test_parentheses(self): regex = Regex('d(ab)*') self.assertEqual(regex.match('d'), 'd') self.assertEqual(regex.match('dab'), 'dab') self.assertEqual(regex.match('daa'), 'd') self.assertEqual(regex.match('dabab'), 'dabab') def test_union(self): regex = Regex('(ab*d)|(AG)') self.assertEqual(regex.match('adG'), 'ad') self.assertEqual(regex.match('AGfe'), 'AG') if __name__ == '__main__': unittest.main() Answer: Yay! You ran flake8 and followed PEP-8. Nice clean code. self.assertEqual(state_machine.match('abc'), 'abc') Ummm, this is arguably backwards. Convention for xUnit in many languages is to assertEqual(expected, computed). It can affect how the diagnostic output is displayed for a failure. state_machine = state_machine.union(StateMachine.from_string('def')) Choosing the name union for your public API is perhaps slightly confusing. "Union" is drawn from set theory, while "alternation" is the term the regex literature tends to use for |. state_machine = StateMachine.from_string('abc') The class name is perfectly clear, it's great. 
For a local variable that we'll be using a bunch, sm would have sufficed. You already have a line that verifies that .from_string() doesn't blow up, so consider combining two assignments on a single line: sm = StateMachine.from_string('abc').kleene() The Regex class is wonderfully straightforward. Pat yourself on the back. The peek method in the lexer is perhaps a little on the tricky side, and would benefit from comments about when we consume something or not. I'm looking for invariants on pos and the stack. I like the assert in find_parent_of_terminal, and its comment. to_explore.update({node for node in current.children if node not in visited}) That's just set difference, yes? children - visited Overall, looks good. Ship it!
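Spelling out the set-difference point (a small sketch of my own; `children` and `visited` stand in for the attributes in the original class):

```python
# The comprehension and the set-difference operator compute the same set.
children = {'a', 'b', 'c', 'd'}
visited = {'b', 'd'}

comprehension = {node for node in children if node not in visited}
difference = children - visited   # equivalent, and reads as intended
```

So the original line could be written as `to_explore.update(current.children - visited)`.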
{ "domain": "codereview.stackexchange", "id": 34067, "tags": "python, python-3.x, parsing, regex, reinventing-the-wheel" }
What is a chemical daily life experience that can be modelled by superior mathematics?
Question: I'm searching for a chemical daily life experience that can be modelled by superior mathematics, so that the origin of the equations involved can be easily explained to a beginner in chemistry. By superior mathematics I mean things like partial and ordinary differential equations, integral calculus, group theory, differential geometry. But I want to understand how the equations are derived. For example, in physics, I could model the trajectory of the rear wheel of a bicycle (a daily life experience) and I fully understand all the equations involved, without being a physicist. I want to find something similar in chemistry. Answer: Diffusion is one example from daily life. For example, the absorption of ethanol from the small intestine into the blood can be modeled using differential equations. The extraction of caffeine from ground coffee is also a diffusion phenomenon.
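As a concrete sketch of the ethanol-absorption example (my own illustration; the one-compartment model and the rate constant `k` are assumptions for demonstration, not physiological values): the amount $G$ in the gut obeys $dG/dt = -kG$, and what leaves the gut enters the blood, $dB/dt = +kG$. Integrating by forward Euler and comparing with the closed form $G(t) = G_0 e^{-kt}$:

```python
import math

# Forward-Euler integration of dG/dt = -k*G, dB/dt = +k*G.
def absorb(g0, k, t_end, dt):
    g, b, t = g0, 0.0, 0.0
    while t < t_end - 1e-12:
        dg = -k * g * dt
        g += dg
        b -= dg          # mass conservation: gut loss = blood gain
        t += dt
    return g, b

g0, k, t_end = 10.0, 0.5, 4.0          # assumed, illustrative values
g, b = absorb(g0, k, t_end, dt=1e-4)
exact = g0 * math.exp(-k * t_end)      # closed-form gut amount
```

This is the kind of derivation the asker wants: the equation comes straight from "rate of transfer is proportional to the amount still present", which is the same first-order form that Fick's law gives for the caffeine-extraction example.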
{ "domain": "chemistry.stackexchange", "id": 2164, "tags": "theoretical-chemistry" }
Finding Permutations with Exclusions
Question: Recently, I was given a puzzle by a friend of mine, which has 6 pieces. Giddy to try it out, I took it apart without batting an eye to follow what I was removing or moving or sliding. I've been struggling with this puzzle for over a year and a half now, and so thought I would write an algorithm that would list all the possible orientations or possibilities of pieces. To make it more general, I numbered each piece 1 through 6. But, each piece can be rotated and placed, so I extended this to 1 through 12. So now each piece has two numbers: 1 and 12 is one piece, 2 and 11, 3 and 10, etc... Now to the root of my question: Is it even possible to write a recursive algorithm to display all possible permutations of the 12 (6) pieces such that if piece 1 is used, 12 cannot be, and so forth? Reason I ask is because if this were recursive, we won't know what we picked above (unless we keep passing our previously picked items down the recursion tree) so that we can skip others. Or, even better, would it be better to write a non-recursive function? Just as a note: I am not looking for code, per se. Pseudocode, concepts, or pointers would be greatly appreciated. Relevant References: The Math behind Combinations with Restrictions The Math behind Permutations with Restrictions Just as a note, seeing as this may come up: I know the number of candidates is massive, but I've already written functions that tell me whether a given permutation can even be played on the board (do certain blocks collide?), whether at least one block is movable (we don't want gridlock), and whether the placement of blocks conforms to the size of the puzzle enclosure. All this will, hopefully, eliminate quite a bunch, and let me try the remaining games by hand. 
Answer: Generate all permutations of 1,...,6 (outer loop below) and all binary strings of length 6 (inner loop below), then combine each permutation from the first set with each string from the second:

for π in permutations(1,...,6):
    for x in 000000,...,111111:
        output C(π[1],x[1]),...,C(π[6],x[6])

where $C(n,0) = n$ and $C(n,1) = 13 - n$.
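The loops above can be sketched directly in JavaScript (the function names `permutations` and `enumerateOrientations` are my own, not from the answer):

```javascript
// Recursive permutation generator: all orderings of the given array.
function permutations(arr) {
  if (arr.length <= 1) return [arr.slice()];
  const result = [];
  for (let i = 0; i < arr.length; i++) {
    const rest = arr.slice(0, i).concat(arr.slice(i + 1));
    for (const perm of permutations(rest)) {
      result.push([arr[i], ...perm]);
    }
  }
  return result;
}

// Combine every permutation of 1..6 with every 6-bit mask; bit i of the
// mask decides which face of piece perm[i] is used (n or its twin 13 - n).
function enumerateOrientations(n = 6) {
  const base = Array.from({ length: n }, (_, i) => i + 1);
  const all = [];
  for (const perm of permutations(base)) {
    for (let mask = 0; mask < (1 << n); mask++) {
      all.push(perm.map((p, i) => ((mask >> i) & 1) ? 13 - p : p));
    }
  }
  return all;
}

const all = enumerateOrientations();
console.log(all.length); // 6! * 2^6 = 46080
```

By construction, no arrangement ever contains both `n` and `13 - n`, which is exactly the exclusion the asker wants, and the state passed down the recursion is just the remaining unplaced pieces.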
{ "domain": "cs.stackexchange", "id": 8226, "tags": "algorithms, recursion, permutations" }
Finding optimal mapping patterns that transform relational data into a linear format
Question: I'm trying to make my JavaScript more functional. I have used function composition and currying successfully on less complicated code, but I'm now at a point where I'm not sure how to approach it. Given the relational data structures below:

// Contains all node types and their property definitions, including any
// additional information like readable descriptions etc.
const nodes = [{
  id: '0ec7deff-91af-e911-812a-00155d0a5146',
  description: 'Airplanes',
  properties: [{
    id: '0fc7deff-91af-e911-812a-00155d0a5146',
    description: 'Weight'
  }, {
    id: '06c7deff-91af-e911-812a-00155d0a5146',
    description: 'Color'
  }]
}, {
  id: '278182d0-4ba2-4813-b74e-277a2296e864',
  description: 'Manufacturers',
  properties: [{
    id: '003e14d7-c11a-41e1-ad08-ab8896a3cb55',
    description: 'Boeing'
  }]
}]

// Contains only the nodes to be used and their relations (defined through
// children nodes) in the eventual output. The definitions above can be
// used to pull in additional data.
const rootNode = {
  id: '0ec7deff-91af-e911-812a-00155d0a5146',
  properties: [{
    id: '0fc7deff-91af-e911-812a-00155d0a5146',
  }],
  children: [{
    id: '278182d0-4ba2-4813-b74e-277a2296e864',
    properties: [{
      id: '003e14d7-c11a-41e1-ad08-ab8896a3cb55',
    }]
  }],
}

The rootNode is the starting point. It can contain an arbitrary number of children, including properties that need to be displayed. Each node needs to be checked for child nodes and added to the eventual result: a linear set to be displayed on the x axis, with the properties shown on a following row, also on the x axis.

[{
  id: '0ec7deff-91af-e911-812a-00155d0a5146',
  description: 'Airplanes',
  properties: [{
    id: '0fc7deff-91af-e911-812a-00155d0a5146',
    description: 'Weight'
  }]
}, {
  id: '278182d0-4ba2-4813-b74e-277a2296e864',
  description: 'Manufacturers',
  properties: [{
    id: '003e14d7-c11a-41e1-ad08-ab8896a3cb55',
    description: 'Boeing'
  }]
}]

The eventual output will be shown in a two-dimensional grid or table.
Airplanes    Manufacturers
Weight       Boeing

Or, in a more elaborate example, if I provided more data and a larger query:

Airplanes            Manufacturers
Weight Color Type    Boeing Lockhead

I built a function that works and uses recursion, but it just doesn't feel right. I want to pull it apart into smaller units. Note that lodash is used here, hence the shorthands etc.

function initializeNodeMapper(nodes) {
  const result = []
  return function mapNodes(node) {
    const usedPropertyIds = map(node.properties, 'id')
    const currentNode = find(nodes, ['id', node.id])
    const nodeWithReducedProperties = reduce(currentNode, (acc, value) => ({
      ...acc,
      properties: filter(value, property => indexOf(usedPropertyIds, property.id) !== -1),
    }), currentNode)
    result.push(nodeWithReducedProperties)
    if (node.children && node.children.length > 0) {
      forEach(node.children, mapNodes)
    }
    return result
  }
}

const mapData = initializeNodeMapper(nodes)
const mappedData = mapData(initialNode)

Using functional patterns, I want to find a better way of mapping and of testing this code.

Answer: Some slight improvements: you could do without the result array (less state). I think plain object destructuring is clearer than the lodash reduce method, which is confusing to me (unless I still don't understand it and my code actually isn't equivalent to yours).

const allNodes = [ ];
const rootNode = { };

function initializeNodeMapper(nodes) {
  return function mapNodes(node) {
    const propertyIds = node.properties.map(({ id }) => id);
    const currentNode = nodes.find(({ id }) => id === node.id);
    const nodeWithFilteredProperties = {
      ...currentNode,
      properties: currentNode.properties.filter(({ id }) =>
        propertyIds.includes(id)
      )
    };
    return node.children
      ? [nodeWithFilteredProperties].concat(node.children.map(mapNodes))
      : [nodeWithFilteredProperties];
  };
}

const mappedData = initializeNodeMapper(allNodes)(rootNode);

Update: Are you looking for something like this?
const nodes = [ ];
const rootNode = { };

const pipe = (...fns) => x => fns.reduce((v, f) => f(v), x);

const getFlattenedNodes = node =>
  node.children
    ? [node].concat(
        node.children.reduce(
          (acc, val) => acc.concat(getFlattenedNodes(val)),
          []
        )
      )
    : [node];

const getPropertyIds = node => node.properties.map(({ id }) => id);

const getNodeWithProperties = nodesWithProperties => node =>
  nodesWithProperties.find(({ id }) => id === node.id);

const getNodesWithFilteredProperties = propertyIdsAndNodesToBeFiltered =>
  propertyIdsAndNodesToBeFiltered.map(([propertyIds, nodeWithProperties]) => ({
    ...nodeWithProperties,
    properties: nodeWithProperties.properties.filter(({ id }) =>
      propertyIds.includes(id)
    )
  }));

const getPropertyIdsAndNodes = nodesWithProperties => flattenNodes =>
  flattenNodes.map(node => [
    getPropertyIds(node),
    getNodeWithProperties(nodesWithProperties)(node)
  ]);

const mappedData = pipe(
  getFlattenedNodes,
  getPropertyIdsAndNodes(nodes),
  getNodesWithFilteredProperties
)(rootNode);
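For a runnable illustration of the pipe-based approach, here is a sketch using the sample data from the question. `resolveNodes` is my own name condensing the answer's lookup-and-filter steps into one function, and `flatMap` replaces the nested reduce; this is one possible arrangement, not the answer author's exact code:

```javascript
const nodes = [
  {
    id: '0ec7deff-91af-e911-812a-00155d0a5146',
    description: 'Airplanes',
    properties: [
      { id: '0fc7deff-91af-e911-812a-00155d0a5146', description: 'Weight' },
      { id: '06c7deff-91af-e911-812a-00155d0a5146', description: 'Color' },
    ],
  },
  {
    id: '278182d0-4ba2-4813-b74e-277a2296e864',
    description: 'Manufacturers',
    properties: [
      { id: '003e14d7-c11a-41e1-ad08-ab8896a3cb55', description: 'Boeing' },
    ],
  },
];

const rootNode = {
  id: '0ec7deff-91af-e911-812a-00155d0a5146',
  properties: [{ id: '0fc7deff-91af-e911-812a-00155d0a5146' }],
  children: [
    {
      id: '278182d0-4ba2-4813-b74e-277a2296e864',
      properties: [{ id: '003e14d7-c11a-41e1-ad08-ab8896a3cb55' }],
    },
  ],
};

// Left-to-right function composition.
const pipe = (...fns) => x => fns.reduce((v, f) => f(v), x);

// Flatten the root node and all descendants into one linear array.
const getFlattenedNodes = node =>
  node.children
    ? [node].concat(node.children.flatMap(getFlattenedNodes))
    : [node];

// Look up each node's full definition and keep only the requested properties.
const resolveNodes = definitions => flat =>
  flat.map(node => {
    const wanted = node.properties.map(({ id }) => id);
    const full = definitions.find(({ id }) => id === node.id);
    return {
      ...full,
      properties: full.properties.filter(({ id }) => wanted.includes(id)),
    };
  });

const mappedData = pipe(getFlattenedNodes, resolveNodes(nodes))(rootNode);
console.log(JSON.stringify(mappedData, null, 2));
```

Note that using `flatMap` (or an explicit reduce-and-concat, as in the answer's pipe version) keeps the result linear; a plain `children.map(mapNodes)` would leave child results nested one level deep.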
{ "domain": "codereview.stackexchange", "id": 35487, "tags": "javascript, functional-programming, join, lodash.js" }