Closure of a CFL under specific operation
Question: Consider the following operation on language $L$: $\mathrm{inv}(L) = \{ xy^Rz \mid x,y,z\in \Sigma^*, xyz\in L \}$ I understand that if $L$ is regular, then $\mathrm{inv}(L)$ is regular too, and proved it by guessing when $y^R$ starts and running it on the inverse DFA. However, if $L$ is a CFL, then $\mathrm{inv}(L)$ is not, and I don't understand why. Can't we just also guess when $y^R$ starts, insert all of it into the stack, then simulate the PDA of $L$ on each item we take out, then continue on $z$ when it's empty? Thanks. Answer: You are right, regular languages are closed under inversion $\mathrm{inv}$, using some proper guesses, or see my answer to a relevant question here. Context-free languages are not closed under $\mathrm{inv}$. Your intuition does not work. We cannot push $y$ on the stack because it will make the 'real' stack below it inaccessible for the simulation. Here is a possible counterexample. Consider $L = \{ a^nb^nc^md^m \mid m,n\ge 1 \}$. Then try $\mathrm{inv}(L) \cap a^*c^*b^*d^* $.
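To spell out why the hinted counterexample works (this is the standard closure argument, sketched): pick the split $x = a^n$, $y = b^n c^m$, $z = d^m$, so that

```latex
xyz = a^n b^n c^m d^m \in L
\quad\Longrightarrow\quad
x\,y^R z = a^n \,(b^n c^m)^R\, d^m = a^n c^m b^n d^m \in \mathrm{inv}(L).
```

Checking the other possible splits shows these are the only strings of $\mathrm{inv}(L)$ matching the pattern $a^*c^*b^*d^*$ (any $a$'s or $d$'s inside $y$ would land in the wrong place after reversal), so $\mathrm{inv}(L) \cap a^*c^*b^*d^* = \{ a^n c^m b^n d^m \mid m,n\ge 1 \}$, a cross-serial-dependency language that fails the pumping lemma for context-free languages. Since CFLs are closed under intersection with regular languages, $\mathrm{inv}(L)$ cannot be context-free.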
{ "domain": "cs.stackexchange", "id": 10285, "tags": "context-free, computation-models, closure-properties, pushdown-automata, nondeterminism" }
TypeError: expected [string] but got [gazebo_msgs/ApplyBodyWrenchResponse]
Question: I use rosservice to drive the rrbot model; the Python code is as follows: #!/usr/bin/env python import rospy from geometry_msgs.msg import Pose, Quaternion, Point, PoseStamped, PoseWithCovariance, TwistWithCovariance, Twist, Vector3, Wrench from gazebo_msgs.srv import ApplyBodyWrench wrench = Wrench() wrench.torque.x = 0 wrench.torque.y = 5 wrench.torque.z = 0 rospy.wait_for_service ('/gazebo/apply_body_wrench') apply_body_wrench = rospy.ServiceProxy('/gazebo/apply_body_wrench', ApplyBodyWrench) resp = apply_body_wrench(apply_body_wrench(body_name = "rrbot::link3",reference_frame = "rrbot::link3", wrench = wrench,start_time = rospy.Time(0), duration = rospy.Duration(5))) Although this successfully drives the robot, the result does not contain the srv response information (such as success), and I get the following errors: Traceback (most recent call last): File "test.py", line 38, in <module> torque_node(point , wrench, start_time, duration) File "test.py", line 15, in torque_node resp = apply_body_wrench(apply_body_wrench(body_name = "rrbot::link3")) File "/opt/ros/melodic/lib/python2.7/dist-packages/rospy/impl/tcpros_service.py", line 439, in __call__ return self.call(*args, **kwds) File "/opt/ros/melodic/lib/python2.7/dist-packages/rospy/impl/tcpros_service.py", line 495, in call request = rospy.msg.args_kwds_to_message(self.request_class, args, kwds) File "/opt/ros/melodic/lib/python2.7/dist-packages/rospy/msg.py", line 121, in args_kwds_to_message raise TypeError("expected [%s] but got [%s]"%(data_class._slot_types[0], arg._type)) TypeError: expected [string] but got [gazebo_msgs/ApplyBodyWrenchResponse] Originally posted by 505852124@qq.com on ROS Answers with karma: 3 on 2020-01-30 Post score: 0 Answer: resp = apply_body_wrench(apply_body_wrench(..)) shouldn't that second apply_body_wrench be ApplyBodyWrench instead (and perhaps even ApplyBodyWrenchRequest)?
Originally posted by gvdhoorn with karma: 86574 on 2020-01-30 This answer was ACCEPTED on the original site Post score: 0 Original comments Comment by 505852124@qq.com on 2020-01-31: You are right. I have solved this problem. Thank you.
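The failure mode can be reproduced without ROS: a service proxy accepts the request fields as keyword arguments, but the nested call passes the inner call's *response* object as the first positional argument, which the proxy then tries to interpret as the `body_name` string. A minimal sketch of the pattern (`apply_body_wrench` here is a hypothetical stand-in, not the real rospy proxy):

```python
def apply_body_wrench(body_name=None, **fields):
    # Stand-in for rospy.ServiceProxy.__call__: the real proxy raises a
    # TypeError when a positional argument does not match the request type.
    if body_name is not None and not isinstance(body_name, str):
        raise TypeError("expected [string] but got [%s]" % type(body_name).__name__)
    return {"success": True}  # stand-in for ApplyBodyWrenchResponse

# Correct usage: call the proxy once, with keyword arguments only.
resp = apply_body_wrench(body_name="rrbot::link3")

# Buggy pattern from the question: the inner call's response object becomes
# the first positional argument of the outer call.
try:
    apply_body_wrench(apply_body_wrench(body_name="rrbot::link3"))
except TypeError as e:
    error = str(e)
```

So removing the outer, duplicated `apply_body_wrench(...)` wrapper yields a single call whose return value is the actual service response.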
{ "domain": "robotics.stackexchange", "id": 34353, "tags": "control, gazebo, force, ros-melodic, gazebo-ros" }
Robot missing color and wheels fall halfway through plane
Question: When I load any urdf model robot into gazebo's empty world, the color of the robot is not displayed. Here is an example of one of the robots which does not show up correctly: I also noticed an issue where the robot's wheels penetrate halfway through the plane of the world. I think it might be an issue of the position of my colliders for the wheels but I cannot be sure since I am new to urdf. <?xml version="1.0"?> <robot name="pioneer"> <link name="base_link"> <visual> <geometry> <box size="1 0.5 0.4"/> </geometry> <material name="red"> <color rgba="1 0 0 1"/> </material> </visual> <collision> <geometry> <box size="1 0.5 0.5"/> </geometry> </collision> <inertial> <mass value="10"/> <inertia ixx="1.0" ixy="0.0" ixz="0.0" iyy="1.0" iyz="0.0" izz="1.0"/> </inertial> </link> <link name="right_front_wheel"> <visual> <geometry> <cylinder length="0.1" radius="0.15"/> </geometry> <material name="black"> <color rgba="0 0 0 1"/> </material> <origin rpy="1.6 0 0" xyz="-0.2 0 0"/> </visual> <collision> <geometry> <cylinder length="0.1" radius="0.16"/> </geometry> </collision> <inertial> <mass value="10"/> <inertia ixx="1.0" ixy="0.0" ixz="0.0" iyy="1.0" iyz="0.0" izz="1.0"/> </inertial> </link> <link name="left_front_wheel"> <visual> <geometry> <cylinder length="0.1" radius="0.15"/> </geometry> <material name="black"/> <origin rpy="1.6 0 0" xyz="-0.2 0 0"/> </visual> <collision> <geometry> <cylinder length="0.1" radius="0.16"/> </geometry> </collision> <inertial> <mass value="10"/> <inertia ixx="1.0" ixy="0.0" ixz="0.0" iyy="1.0" iyz="0.0" izz="1.0"/> </inertial> </link> <link name="left_back_wheel"> <visual> <geometry> <cylinder length="0.1" radius="0.15"/> </geometry> <material name="black"/> <origin rpy="1.6 0 0" xyz="-0.2 0 0"/> </visual> <collision> <geometry> <cylinder length="0.1" radius="0.16"/> </geometry> </collision> <inertial> <mass value="10"/> <inertia ixx="1.0" ixy="0.0" ixz="0.0" iyy="1.0" iyz="0.0" izz="1.0"/> </inertial> </link> <link name="top">
<visual> <geometry> <box size="1.3 0.8 0.1"/> </geometry> <material name="black"/> <origin rpy="0 0 0" xyz="0 0 0"/> </visual> <collision> <geometry> <box size="1.3 0.8 0.1"/> </geometry> </collision> <inertial> <mass value="10"/> <inertia ixx="1.0" ixy="0.0" ixz="0.0" iyy="1.0" iyz="0.0" izz="1.0"/> </inertial> </link> <link name="right_back_wheel"> <visual> <geometry> <cylinder length="0.1" radius="0.15"/> </geometry> <material name="black"/> <origin rpy="1.6 0 0" xyz="-0.2 0 0"/> </visual> <collision> <geometry> <cylinder length="0.1" radius="0.16"/> </geometry> </collision> <inertial> <mass value="10"/> <inertia ixx="1.0" ixy="0.0" ixz="0.0" iyy="1.0" iyz="0.0" izz="1.0"/> </inertial> </link> <joint name="front_Left" type="fixed"> <parent link="base_link"/> <child link="left_front_wheel"/> <origin xyz="0.5 -0.3 -0.25"/> </joint> <joint name="front_Right" type="fixed"> <parent link="base_link"/> <child link="right_front_wheel"/> <origin xyz="0.5 0.3 -0.25"/> </joint> <joint name="back_Left" type="fixed"> <parent link="base_link"/> <child link="left_back_wheel"/> <origin xyz="-0.1 -0.3 -0.25"/> </joint> <joint name="back_Right" type="fixed"> <parent link="base_link"/> <child link="right_back_wheel"/> <origin xyz="-0.1 0.3 -0.25"/> </joint> <joint name="top_to_base" type="fixed"> <parent link="base_link"/> <child link="top"/> <origin xyz="0 0 0.25"/> </joint> </robot> Originally posted by avatarofwill13 on ROS Answers with karma: 11 on 2012-04-29 Post score: 0 Answer: I loaded your file up and everything looks fine for me. Maybe try updating? As far as the wheel positions, make sure you go through the urdf tutorials and understand how the coordinate systems are specified with respect to each other... then you'll see why your wheels are penetrating the floor. Originally posted by mcevoyandy with karma: 235 on 2012-04-30 This answer was ACCEPTED on the original site Post score: 0
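On the sinking wheels, the coordinate frames in the URDF above already tell the story: each wheel joint puts the wheel centre 0.25 m below the base_link origin, and the wheel collision cylinders have radius 0.16 m, so if the model spawns with base_link at the world origin the wheels reach 0.41 m below it and end up inside the ground plane. A quick geometry check (numbers taken from the URDF; the spawn-height fix is one assumption about how the model is being launched):

```python
# Values from the URDF above
wheel_joint_z = -0.25          # wheel centre relative to base_link (joint origin z)
wheel_collision_radius = 0.16  # collision cylinder radius

# Lowest point of a wheel, relative to base_link
lowest_point = wheel_joint_z - wheel_collision_radius  # 0.41 m below base_link

# Minimum spawn height of base_link for the wheels to rest on, not penetrate,
# a ground plane at z = 0
spawn_z = -lowest_point
```

So spawning the model with base_link at roughly z = 0.41 (or adding that offset to the spawn pose) keeps the collision cylinders above the plane.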
{ "domain": "robotics.stackexchange", "id": 9187, "tags": "gazebo, urdf" }
How do long rivers exist?
Question: I was recently on a long-haul flight over Siberia and it struck me as rather remarkable that something like the Lena River could exist. It seems very surprising that there's a route from some random spot near Lake Baikal over 2,800 miles or so to the Arctic Ocean, which is downhill all the way. Likewise, the Missouri River tells us that there's an even longer downhill-only route from some place in Montana to the Gulf of Mexico. But these things certainly do exist and it's a pattern repeated all over the world: the great majority of the Earth's surface is drained by rivers that run downhill to the sea. They don't get stuck and they don't seem to form huge numbers of lakes where they have to flood an area to raise the water level over some barrier. I'm aware of endorheic basins, which don't drain to the ocean but they only cover about 18% of the land surface, mostly in central Asia. Is the existence of long downhill routes more likely than I imagined? (I can't tell if I think it's more likely in mountainous areas than in flatter areas, which suggests I have poor intuition about this) Or is it just a coincidence of our planet? Did the rivers make the downhill paths themselves? Answer: Well, I can only agree that it is indeed amazing, and it doesn't get less amazing when some geology, hydrology and geomorphology is added to the amazement. All precipitation that falls on land must flow back to the oceans somehow, and unless it evaporates it will flow in rivers or as groundwater towards a lower hydrostatic level. Water cannot flow uphill, so it follows (and crafts) the slope in topography. Water will flow from mountains towards the ocean and, for geometric and morphological reasons, rivers join into a drainage pattern shaped by the geology and topography. Mountain ranges are often formed by orogeny, where tectonic plates collide.
Rivers start at high altitude, radiating out in all directions from mountains, but since collisions in the present continental settings are often on the rim of large plates (e.g. the Andes, Alps, Himalaya) and the rivers can't cross the range, they have to travel across the whole continental plate to reach ocean level. In the case of Asia, most large rivers start in the Himalaya (or other tectonically active regions, such as the Altai); in Europe, large rivers start in the Alps; in Africa, they start in the tectonically active rift zone. This map shows the ocean drainage dividers. The borders between the drainage areas are the lines where rivers start. Lakes can be understood as temporary stops in the flow, and they quickly fill up with vegetation and sediments as the velocity of the water decreases. In a geologic context, lakes are never very old, Baikal being the oldest at about 25 million years. Drainage patterns are often much older. So why are some rivers so long? Because of tectonic uplifts and orogenies. The answer to your question 'Did the rivers make the downhill paths themselves?' is yes, as erosion works toward the peneplain (low relief).
{ "domain": "earthscience.stackexchange", "id": 686, "tags": "hydrology, rivers, geomorphology" }
What exactly are UV lenses?
Question: Edmund Optics has these UV lenses listed on their site but I'm not quite sure what it means for a lens to be 'UV'. Do glass lenses have significant issues transferring UV (I would also appreciate the correct terminology for this behaviour) or is this simply about avoiding aberration? Answer: Yes, a lot of "regular" glasses absorb in the UV, meaning you can't use them as lenses without suffering losses (the terminology you're after is UV transmission, or transmittance). In addition, they probably have anti-reflective coatings, so that the loss due to reflection decreases as well as the absorption loss.
{ "domain": "physics.stackexchange", "id": 63946, "tags": "optics" }
Years, months, days, and ... weeks?
Question: Why do we divide time into weeks? Is there any celestial reason why humans do this? one year: earth revolution around the sun one month: moon revolution around the earth one week: 7 days = ??? one day: earth rotation about its axis Answer: The synodic period of the moon is $29.53$ days, a little shorter than a calendar month, which is on average about $30.4$ days. This is slightly longer than its orbital period, but corresponds to the periodic visual appearance of the moon as viewed from Earth. I mention this to make it clear that we should be forgiving of a little imprecision. Conventionally, the moon's appearance is divided into four phases: first quarter, full, last quarter, new. That means that on average, each phase lasts about $7.4$ days. Since calendars count days in integer amounts, a $7$-day period seems to be a natural choice. The social importance of the seven-day period in Western culture probably has much more to do with its religious significance in Abrahamic religions than astronomy per se (although it is certainly not unique to them). But its ultimate origin probably does lie in the natural division of the moon's appearance into four phases, which correspond to an apparent geocentric celestial longitude difference between the Moon and Sun of $0^\circ$, $90^\circ$, $180^\circ$, and $270^\circ$. Thus, the explicit answer to your question is: 1 week = 7 days = one lunar phase.
{ "domain": "astronomy.stackexchange", "id": 2931, "tags": "planet, time" }
Inverting a binary tree
Question: I have written this code to invert a binary tree. Thus if the original tree is: Then the tree returned should be: /** * Definition for a binary tree node. * struct TreeNode { * int val; * TreeNode *left; * TreeNode *right; * TreeNode(int x) : val(x), left(NULL), right(NULL) {} * }; */ class Solution { public: TreeNode* invertTree(TreeNode* root) { if(root==NULL) return NULL; /* Performing post-order traversal of the tree */ //Visiting the left node first if(root->left != NULL) invertTree(root->left); //Visiting the right node next if(root->right != NULL) invertTree(root->right); //Visiting the current node; swapping the left and the right nodes of the current node TreeNode* tn = new TreeNode(0); tn=root->right; root->right=root->left; root->left=tn; return root; } }; While the code runs correctly and returns the required output, I think I am invoking undefined behavior when the control hits the leaf nodes. Since, at the leaf, a new node is created and it is assigned (incorrectly) the right of the current node and so on. Could someone please clarify if what I am doing is indeed wrong? Note: The question and the images are taken from LeetCode.com Answer: Here you are creating a node unnecessarily: TreeNode* tn = new TreeNode(0); tn=root->right; You could write it more simply as: TreeNode* tn = root->right; Because you're not using the original value of tn anyway, you only need it as a work variable for swapping the left and right nodes. Also, your implementation doesn't use the return value of the recursive invertTree calls. You can use them better and get rid of the conditional statements, making the solution more compact: TreeNode* invertTree(TreeNode* root) { if (root == NULL) { return NULL; } TreeNode* work = root->right; root->right = invertTree(root->left); root->left = invertTree(work); return root; } I think I am invoking undefined behavior when the control hits the leaf nodes. No, you're not invoking "undefined behavior" when hitting the leaf nodes.
Actually, I think you meant to say intermediary nodes. At leaf nodes, the function receives a null node and you return null. At intermediary nodes, you allocate an unnecessary node, as explained earlier. The unnecessary allocation doesn't trigger undefined behavior, but since you never free the allocated memory, this is a memory leak, and indeed a problem if the program were run for a substantial amount of time, accumulating unreleased memory.
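The compact recursive version translates directly to other languages; a quick Python sketch (with a minimal `TreeNode` stand-in) to sanity-check the swap logic:

```python
class TreeNode:
    def __init__(self, val, left=None, right=None):
        self.val = val
        self.left = left
        self.right = right

def invert_tree(root):
    # Swap the children and recurse, mirroring the compact C++ version:
    # no extra allocation, and the recursion's return values are used.
    if root is None:
        return None
    root.left, root.right = invert_tree(root.right), invert_tree(root.left)
    return root

# A small tree: 1 with children 2 (left) and 3 (right); after inversion,
# the children are swapped.
tree = invert_tree(TreeNode(1, TreeNode(2), TreeNode(3)))
```

The tuple assignment plays the role of the `work` variable in the C++ code.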
{ "domain": "codereview.stackexchange", "id": 23568, "tags": "c++, tree" }
A common definition of a scalar
Question: Some dictionaries define a scalar as follows: A quantity, such as mass, length, or speed, that is completely specified by its magnitude and has no direction. -- The Free Dictionary However, it is my impression that in many contexts scalars can be signed, in which case their magnitude (their absolute value) does not specify its value. This definition is even used on a test question here. Is it true that this definition is inaccurate? Answer: The dictionary definition is wrong. For example, time is a scalar in Newtonian mechanics, and time can be negative. That means that time is not completely specified by its magnitude (absolute value). Other examples include charge, energy, and Celsius temperature. The definition could be improved by cutting "is completely specified by its magnitude" and clarifying "direction" to be "direction in space." We'd then have this definition: a scalar is something that has no direction in space, i.e., if you rotate it, it doesn't change.
{ "domain": "physics.stackexchange", "id": 15623, "tags": "definition, invariants" }
Why is ice more reflective than liquid water?
Question: Why is ice more reflective (has higher albedo) than liquid water? They're both the same substance (water). Is something quantum mechanical involved? Answer: In fact ice is slightly less reflective than water. The reflectivity is related to the refractive index (in a rather complicated way) and the refractive index of ice is 1.31 while the refractive index of water is 1.33. The slightly lower refractive index of ice will cause a slightly lower reflectivity. In both cases the reflectivity is about 0.05, i.e. at an air/water or air/ice surface about 5% of the light is reflected. Water generally has a relatively smooth surface, so the light falling on the water only gets a chance to reflect back once. Any light that doesn't reflect off the surface propagates down into the water where it is eventually absorbed and converted to heat. The end result is that a large body of water reflects only about 5% of the light. Ice is generally covered with some snow, and snow is made up of small ice crystals with air gaps between them. Light falling onto snow may be reflected at the first surface, but any light that isn't reflected will meet lots more ice/air interfaces as it travels through the snow, and at every surface more light will be reflected. The net result is that much more of the light is reflected from snow. So the difference isn't anything fundamental; it's just because water is continuous while snow isn't. It is possible to form an air-water dispersion, for example foam or fog. Both foams and fogs reflect light far more efficiently than a large body of water.
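The refractive-index comparison can be made concrete with the Fresnel reflectance at normal incidence, $R = \left(\frac{n_1-n_2}{n_1+n_2}\right)^2$ (a simplification: the ~5% figure above is an effective value over incidence angles; at normal incidence the numbers come out lower, but the ordering ice < water holds either way):

```python
def fresnel_normal(n1, n2):
    # Fresnel reflectance at normal incidence for an n1 -> n2 interface
    return ((n1 - n2) / (n1 + n2)) ** 2

r_water = fresnel_normal(1.0, 1.33)  # air -> water, ~2.0%
r_ice = fresnel_normal(1.0, 1.31)    # air -> ice, ~1.8%
```

Either way the single-interface reflectance is a few percent, and ice's is slightly smaller, which is the point of the answer.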
{ "domain": "physics.stackexchange", "id": 72778, "tags": "water, reflection, ice" }
Electric Flux Contradiction
Question: I am currently reading about electric flux; and from this passage I am reading, I am sensing a bit of a contradiction: "If the E-field is not perpendicular to the surface area, then the flux will be less than EA because less electric field lines will penetrate A. Consider the wedge shape surface below. The electric field lines are perpendicular to the surface area A' but not to A. Since the same number of electric field lines cross both surfaces, the flux must be the same through both surfaces." Clearly, the surface A is not perpendicular to the electric field, but surface A' is. So, the number of electric field lines passing through A should be less than the number passing through A', as they suggest in the passage before the picture. Yet, they go on to say that the number of electric field lines passing through each surface is the same. What is going on? Answer: It is a misleading diagram due to the way the field lines are drawn and the way the angle is defined. The angled surface has actually increased in area, thereby inadvertently keeping the flux the same. The usual definition is that $\theta$ is the angle between E and the surface normal, so $\theta$ is zero when E is perpendicular to the surface. At $\theta=0$, flux is proportional to A; at $\theta\ne0$, flux is proportional to $A\cos\theta$, until the field lines and surface are parallel ($A\cos 90^\circ=0$). This is a better (although somewhat exaggerated and not as pretty) picture: Flux through a surface at an angle
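The resolution in numbers: the slanted surface is larger by exactly the factor that the cosine removes, so both ways of computing the flux agree. A quick check with arbitrary example values ($\theta$ measured from the surface normal):

```python
import math

E = 2.0       # field magnitude
A_perp = 3.0  # area of A', the surface perpendicular to the field
theta = math.radians(30)

# The slanted surface A in the wedge is larger: A = A' / cos(theta)
A = A_perp / math.cos(theta)

flux_through_A = E * A * math.cos(theta)  # E A cos(theta)
flux_through_A_perp = E * A_perp          # E A'
```

The passage's claim that "the flux must be the same through both surfaces" is exactly this cancellation: the growth in area offsets the cosine factor.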
{ "domain": "physics.stackexchange", "id": 6354, "tags": "electrostatics, electric-fields" }
Why do magnetic fields act on moving free charges?
Question: I can understand why ferromagnets create a magnetic field around them, because of the orientation of the magnetic spin of their electrons, and how other permanent magnets can respond to that magnetic field, because the material is magnetized. However, why does a moving charge get deflected by a magnetic field? It's not like it's magnetized at all, I think, and it's even more counter-intuitive that the force exerted on the particle in question is perpendicular to the magnetic field, unlike what happens in electric or gravitational fields. Why do free charges and magnetized objects behave differently in a magnetic field, and why do moving free charges feel the field at all? Answer: The magnetic B-field is defined in terms of the Lorentz force exerted on a moving charged particle, such that a particle moving in an electromagnetic field experiences a force of $q\vec{E} +q\vec{v}\times\vec{B}$ (see here). When considered in the context of special relativity, there is only an electromagnetic field. What we choose to define as electric or magnetic fields are simply frame-dependent manifestations of that field - hence the velocity term in the Lorentz force. Starting with a basic idea of how electric fields work for charged particles, you can demonstrate that a magnetic component to the Lorentz force is required that acts perpendicularly to the velocity, using this type of argument, which I won't cut and paste here.
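The perpendicularity falls straight out of the cross product in the Lorentz force; a small numerical sketch with plain 3-vectors (arbitrary example values):

```python
def cross(a, b):
    # 3-vector cross product
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

def dot(a, b):
    return sum(x*y for x, y in zip(a, b))

def lorentz_force(q, E, v, B):
    # F = qE + q (v x B)
    c = cross(v, B)
    return tuple(q*E[i] + q*c[i] for i in range(3))

# With no electric field, the magnetic force is perpendicular to both v and B
q = 1.6e-19                 # charge of one proton, in coulombs
v = (1e5, 2e4, 0.0)         # m/s
B = (0.0, 0.0, 0.5)         # tesla
F = lorentz_force(q, (0.0, 0.0, 0.0), v, B)
```

So the magnetic part of the force does no work on the charge (it only bends the trajectory), which is one way the "perpendicular" behaviour differs from electric or gravitational forces.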
{ "domain": "physics.stackexchange", "id": 36572, "tags": "electromagnetism, magnetic-fields" }
How was the sinusoidal model for propagation developed?
Question: It's a little difficult to explain this question... but I'll try anyway. To the best of my knowledge, propagation models - audio, RF - are modelled as travelling in a sinusoidal form. Surely if a model was constructed based on observation, and then verified by experiment, the experiment would more often yield the desired result than not. Is the model for sinusoidal motion/waves deemed true because the equations are proved true by experiment? How do we know these waves are actually sinusoidal? How was this model developed? Answer: The waves are not necessarily sinusoidal, but any description of a function which is integrable (i.e., has a finite area or "energy") can be decomposed into superpositions (sums) of sines and cosines, or alternatively (and equivalently) complex exponentials. This is why they are shown as sine or cosine waves, because that is the simplest object to think about. In reality, they are a (possibly infinite) sum of sine and/or cosine waves. Also, you can build an antenna, and if you modulate it very carefully with a sine wave electrically, it will radiate a sine wave...
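The decomposition is easy to see numerically: a square wave looks nothing like a sinusoid, yet its Fourier series $\frac{4}{\pi}\sum_{k=0}^{\infty}\frac{\sin((2k+1)t)}{2k+1}$ converges to it. A sketch summing the first terms:

```python
import math

def square_wave_partial(t, n_terms):
    # Partial Fourier series of a unit square wave:
    # (4/pi) * sum over odd harmonics of sin((2k+1) t) / (2k+1)
    return (4 / math.pi) * sum(
        math.sin((2*k + 1) * t) / (2*k + 1) for k in range(n_terms)
    )

# At t = pi/2 the ideal square wave equals 1; the partial sums approach it
approx = square_wave_partial(math.pi / 2, 1000)
```

With more terms the sum hugs the square wave ever more closely (apart from the well-known Gibbs overshoot right at the jumps), which is the sense in which "sinusoidal" is the building block rather than the claim that every wave is a sine.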
{ "domain": "physics.stackexchange", "id": 5981, "tags": "waves" }
Unscented Kalman Filtering?
Question: I want to use an unscented kalman filter (UKF) in ROS to do state estimation. I can't find any ROS packages that implement an UKF so I've been looking around at filtering libraries for C++ that I could use within ROS. Does anyone have any recommendations? I've found the following libraries but many are poorly documented or inactive. Any suggestions? UKF support: Bayes++ bfilt No UKF support: Mobile Robot Programming Toolkit (The main page lists UKFs but the code doesn't appear to implement them) Orocos Bayesian Filtering Library KFilter Originally posted by Andrew Chambers on ROS Answers with karma: 96 on 2011-12-21 Post score: 5 Original comments Comment by Martin Peris on 2011-12-21: Hi Andrew! OpenCV implements Kalman Filter, Extended Kalman Filter (EKF) and is fully integrated with ROS. But I am afraid it doesn't implement UKF (as far as I know). Answer: Since you already know you want to do an unscented KF, I assume you understand the mathematics. I would suggest just implementing the math yourself using the matrix/vector capabilities of either OpenCV or Eigen libraries which are already part of ROS. Originally posted by Kevin with karma: 2962 on 2011-12-29 This answer was ACCEPTED on the original site Post score: 4
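If you do roll your own as the answer suggests, the core building block is small. A 1-D unscented-transform sketch in plain Python (van der Merwe-style scaled sigma points; a real filter adds the predict/update bookkeeping and uses Eigen/OpenCV matrices for n-dimensional states):

```python
import math

def unscented_transform_1d(mean, var, f, alpha=1e-3, beta=2.0, kappa=0.0):
    # Scalar unscented transform: propagate (mean, var) through f using
    # 2n+1 = 3 scaled sigma points (state dimension n = 1).
    n = 1
    lam = alpha**2 * (n + kappa) - n
    spread = math.sqrt((n + lam) * var)
    sigmas = [mean, mean + spread, mean - spread]
    wm0 = lam / (n + lam)                 # mean weight, centre point
    wc0 = wm0 + (1.0 - alpha**2 + beta)   # covariance weight, centre point
    wi = 1.0 / (2.0 * (n + lam))          # weight of each outer point
    ys = [f(s) for s in sigmas]
    new_mean = wm0 * ys[0] + wi * (ys[1] + ys[2])
    new_var = (wc0 * (ys[0] - new_mean) ** 2
               + wi * ((ys[1] - new_mean) ** 2 + (ys[2] - new_mean) ** 2))
    return new_mean, new_var

# Sanity check: for a linear f the transform is exact,
# mean 2*3 + 1 = 7 and variance 2^2 * 4 = 16.
new_mean, new_var = unscented_transform_1d(3.0, 4.0, lambda x: 2.0 * x + 1.0)
```

The point of the UKF over the EKF is that this same machinery gives good second-order accuracy through *nonlinear* f without computing Jacobians.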
{ "domain": "robotics.stackexchange", "id": 7707, "tags": "ros, filter, kalman" }
Why doesn't the gravitational energy in this system of evaporating and condensing water violate the second law of thermodynamics?
Question: Let's consider the following system shaped as in the picture below, in which the only fluid contained is water at room temperature. As far as I understand, the water should be in an equilibrium between its liquid and gaseous phases. While some of the liquid water at the bottom continuously evaporates due to vapor pressure, some of the water vapour molecules will cluster into droplets, causing condensation. Solid surfaces --such as the ceiling and walls of this system-- are likely sites for this condensation because they reduce the energy barrier that needs to be overcome for this nucleation to take place. However, when I try to bring gravity into the equation, I'm struck by what seems to me like a remarkable asymmetry. Any water droplets condensed against the ceiling of the container have greater potential gravitational energy than the liquid molecules at the bottom. The stalactite-esque spire protruding from the ceiling takes advantage of water's surface tension to direct a trickle of water onto a tiny waterwheel below, powering a tiny turbine. Going in the other direction, any evaporated water molecules that end up condensed against the ceiling seem to do so without any input of external energy. Gas molecules will travel in any direction throughout a container, spontaneously reaching the upper regions merely through their own energetic brownian motions, trading heat for gravitational energy if you will; while apparently decreasing entropy of the entire system over time, violating the 2nd law of thermodynamics while summoning Maxwell's Demon. That can't be right, right? NB: It deserves mention that condensation produces heat, whereas evaporation consumes heat. The resulting temperature differences should remain constant though, given that convection and conduction would keep the system in thermodynamic equilibrium between the sites of evaporation and condensation. Using thermally conductive materials in-between top and bottom (e.g. 
copper container walls) is just one measure that can be taken to minimize the temperature difference of this equilibrium. Answer: Why doesn't the gravitational energy in this system of evaporating and condensing water violate the second law of thermodynamics? This is the second law: The second law of thermodynamics states that the total entropy of an isolated system always increases over time, or remains constant in ideal cases where the system is in a steady state or undergoing a reversible process. Italics mine. In reality, creating an isolated system is an approximate process; one has to assume that conditions external to the system do not affect it. In the statement of your question you have already opened the system to gravity, so it is not a closed system and the force of the second law does not apply. This can be understood in the statistical formulation of entropy: [This definition] describes the entropy as being proportional to the natural logarithm of the number of possible microscopic configurations of the individual atoms and molecules of the system (microstates) which could give rise to the observed macroscopic state (macrostate) of the system. The constant of proportionality is the Boltzmann constant. Specifically, entropy is a logarithmic measure of the number of states with significant probability of being occupied. Introducing gravity into the problem introduces gravitons, the carriers of gravitational waves, and each gravitational interaction of a graviton with a putative drop generates extra microstates. As these come from the mass of the earth, the system by construction is not isolated, so the second law does not apply.
Now for the content of the question: at best, if it is true that condensation can happen at a fixed temperature with special materials, as you state in a comment to bpedit, you are transforming thermal energy to gravitational energy to kinetic energy, and it might go on for a long time, like those birds drinking water perpetually, until dissipation stops them. Dissipation would be the cooling from removing the tails of the distribution, and also the black-body radiation cooling the system. The distributions of kinetic energy of the water and of the vapor over it have long tails. It is the molecules from the tails that evaporate from the water and allow the droplets to reach the ceiling, i.e. acquire gravitational potential, and form the droplets on the ceiling surface (on the hypothesis that this can happen at constant temperature for special materials). When a molecule from the tail condenses into a drop, the average temperature of the gas drops by that tiny amount because it is no longer contributing to the average that defines the temperature. The same had happened when the molecule left the liquid. When the drop falls, all the molecules acquire back the kinetic energy, and if they drop into the water the steady temperature is maintained. If they hit the propeller of the turbine they give up the kinetic energy, and when they fall back into the liquid they do not restore the temperature to the previous value, because their kinetic energy, which left with the evaporation, has not been returned. So slowly the temperature falls, because it is connected with the root mean square of the velocities in the liquid. So thermal energy is turned into gravitational energy, which is turned into the kinetic energy of the turbine, and the temperature will fall to the point that droplets can no longer form on the ceiling (depending on the material). If such a material does not exist, the other answers are adequate.
{ "domain": "physics.stackexchange", "id": 37782, "tags": "thermodynamics, energy-conservation, phase-transition, thought-experiment, perpetual-motion" }
Array kept contiguous by swapping with the last element
Question: I made a class that encapsulates the common pattern of keeping an array contiguous when an element is removed by swapping the last element into its place. A few specific things I'm wondering about: Is it unsafe for non-POD types (because of the memcpy)? Is there a more efficient way to add a new object (so it doesn't have to be copied)? Should I worry about code bloat from capacity being a template parameter (N)? This was not written with C++11 in mind, except for a few little features supported by Visual Studio 2010 (like auto). Edit for clarification: I'm using aligned memory instead of a simple array of T so that I have control over when the objects in the array are created and destroyed. I'm using a memcpy to avoid calling a constructor every time something is swapped. SwapArray.h: #ifndef INCLUDE_GUARD_78F4FE5C_3E57_44C4_9E42_2574051D5A18 #define INCLUDE_GUARD_78F4FE5C_3E57_44C4_9E42_2574051D5A18 #include <boost/type_traits/alignment_of.hpp> #include <boost/type_traits/aligned_storage.hpp> namespace ktc { // an array that keeps all its elements contiguous by swapping removed elements // with the last element of the array. // // add and remove are both O(1). template <typename T, unsigned int N> class SwapArray { public: SwapArray(); ~SwapArray(); // number of elements currently in the array. unsigned int size() const; // max size. unsigned int capacity() const; // add a value to the end. void add( const T& val ); // remove the value at this index. void rem( unsigned int i ); // remove all. void clear(); T& operator[]( unsigned int i ); const T& operator[]( unsigned int i ) const; template <typename Compare> void sort( Compare compare ); private: typedef typename boost::aligned_storage < N * sizeof(T), boost::alignment_of<T>::value >::type AlignedMem; // aligned for type T. AlignedMem m_alignedMem; unsigned int m_size; // return the data for the i-th element. 
char* memAt( unsigned int i ); const char* memAt( unsigned int i ) const; }; } // namespace ktc #include "SwapArray.hpp" #endif SwapArray.hpp: #include <algorithm> #include <cstdio> #include <new> #include "ktcAssert.h" namespace ktc { template <typename T, unsigned int N> SwapArray<T,N>::SwapArray() { static_assert( N > 0, "SwapArray can't have capacity 0" ); m_size = 0; } template <typename T, unsigned int N> SwapArray<T,N>::~SwapArray() { clear(); } template <typename T, unsigned int N> void SwapArray<T,N>::clear() { for( unsigned int i = 0; i < m_size; i++ ) (*this)[i].~T(); m_size = 0; } template <typename T, unsigned int N> unsigned int SwapArray<T,N>::size() const { return m_size; } template <typename T, unsigned int N> unsigned int SwapArray<T,N>::capacity() const { return N; } template <typename T, unsigned int N> void SwapArray<T,N>::add( const T& val ) { ktcAssert( m_size < N ); // copy it onto the end. new( memAt(m_size) ) T(val); m_size++; } template <typename T, unsigned int N> void SwapArray<T,N>::rem( unsigned int i ) { ktcAssert( m_size > 0 ); ktcAssert( i < m_size ); // remove the element. char* r = memAt(i); T* rt = (T*) (r); rt->~T(); // if it wasn't the last element that was removed, swap the last element // into its spot. 
if( i < m_size - 1 ) { char* last = memAt( m_size - 1 ); ktcAssert( last != r ); memcpy( r, last, sizeof(T) ); } m_size--; } template <typename T, unsigned int N> T& SwapArray<T,N>::operator[]( unsigned int i ) { const auto* ct = static_cast<const SwapArray<T,N>*>( this ); return const_cast<T&>( ct->operator[](i) ); } template <typename T, unsigned int N> const T& SwapArray<T,N>::operator[]( unsigned int i ) const { ktcAssert( i < m_size ); return *( (T*) memAt(i) ); } template <typename T, unsigned int N> char* SwapArray<T,N>::memAt( unsigned int i ) { auto ct = static_cast<const SwapArray<T,N>*> (this); return const_cast<char*> ( ct->memAt(i) ); } template <typename T, unsigned int N> const char* SwapArray<T,N>::memAt(unsigned int i) const { ktcAssert( i < N ); return static_cast<const char*>( m_alignedMem.address() ) + (i * sizeof(T)); } template <typename T, unsigned int N> template <typename Compare> void SwapArray<T,N>::sort( Compare compare ) { if( size() < 2 ) return; std::sort( &(*this)[0], &(*this)[0] + size(), compare ); } } // namespace ktc Answer: Firstly, to answer your questions: Yes. memcpy makes this inherently unsafe for anything that isn't trivially copyable (which can be deduced using std::is_trivially_copyable, although this is part of C++11). Generally std::copy will use tag dispatch and will invoke memcpy when it is able to. You should always use std::copy over memcpy in C++. Yes, by having a variant of your add function that takes an rvalue reference and invoking std::move on the value you get: void add(T&& val) No. See here for the specifics. The main question I have with this class is why go through all the effort of using boost::aligned_storage for this? Using a simple std::array<T, N> (or even just T[N]) would simplify things, and the only cost you would incur is the initial default constructor calls. Given that your class isn't safe for non-trivial types anyway, the performance difference between the two is going to be something close to 0.
Also, although you're calling destructors on elements that have been removed, this is only destroying the object, and not reclaiming any of the memory. Using a std::array, you could do this without a lot of the mucking around that you have to do with ugly casts back and forth between char* and T*, without having to use placement new, and without having to manually call ~T. Edit: As per the C++ standard: § 3.9.2 For any object (other than a base-class subobject) of trivially copyable type T, whether or not the object holds a valid value of type T, the underlying bytes (1.7) making up the object can be copied into an array of char or unsigned char. If the content of the array of char or unsigned char is copied back into the object, the object shall subsequently hold its original value. The reason it is unsafe is due to the fact that it does a bit-for-bit copy of the original. If the original holds any memory allocated with new, then it will only get a copy of that pointer, and not the original - much like leaving the default copy-constructor default when you have new'ed memory is a bad idea. In this case it is even more dangerous, however. If you have a std::string as a member variable, for example, as long as that string is stack-allocated, then the default copy constructor will call the copy constructor for std::string and all will be well. This does not hold for memcpy - it will only get a copy of the pointer that the string points to, so when the original goes out of scope, you're going to get undefined behavior.
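To make the answer's first point mechanical, here is a hypothetical sketch (not the reviewed class; the name and members are mine): a static_assert on std::is_trivially_copyable turns the memcpy hazard into a compile error instead of silent corruption.

```cpp
#include <cassert>
#include <cstring>
#include <type_traits>

// Hypothetical sketch: enforce the safety requirement at compile time,
// so the memcpy-based swap-removal can never be instantiated for a
// type like std::string.
template <typename T, unsigned int N>
class TrivialSwapArray {
    static_assert(std::is_trivially_copyable<T>::value,
                  "memcpy-based storage is only safe for trivially copyable types");

public:
    void add(const T& val) {
        assert(size_ < N);
        data_[size_++] = val;
    }

    // Swap-remove: overwrite slot i with the last element. The bitwise
    // copy is fine here precisely because T is trivially copyable.
    void rem(unsigned int i) {
        assert(i < size_);
        if (i != size_ - 1)  // avoid a self-overlapping memcpy
            std::memcpy(&data_[i], &data_[size_ - 1], sizeof(T));
        --size_;
    }

    unsigned int size() const { return size_; }
    T& operator[](unsigned int i) { return data_[i]; }

private:
    T data_[N];
    unsigned int size_ = 0;
};
```

With this guard in place, instantiating the container with std::string fails to compile rather than corrupting state at run time.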
{ "domain": "codereview.stackexchange", "id": 9508, "tags": "c++, array" }
Book repository for storing books that are accessed by ISBN
Question: I want to implement a book repository using a map where books can be added, removed and updated. Books in this repository should be accessed by their ISBN which is an object property. The books should not be edited from the outside of the repository, because if a book’s ISBN is changed from the outside, the ISBN used as key is out of sync with the ISBN of the book. Another option is to use a list, so I don’t have a key that can be out of sync. I came up with the following solution. Book: public class Book { private String isbn; private String title; private String author; private double price; public Book(String isbn, String title, String author, double price) { this.isbn = isbn; this.title = title; this.author = author; this.price = price; } public Book(Book book) { this.isbn = book.isbn; this.title = book.title; this.author = book.author; this.price = book.price; } @Override public boolean equals(Object obj) { if (this == obj) return true; if (obj == null) return false; if (getClass() != obj.getClass()) return false; Book other = (Book) obj; if (isbn == null) { if (other.isbn != null) return false; } else if (!isbn.equals(other.isbn)) return false; return true; } // getters, setters and hashcode omitted } BookRepository: public class BookRepository { private Map<String, Book> books; public BookRepository() { books = new HashMap<String, Book>(); } public List<Book> getAllBooks() { return new ArrayList<Book>(books.values()); } public Book getBook(String isbn) { if (books.containsKey(isbn)) return new Book(books.get(isbn)); return null; } public boolean addBook(Book book) { Book copy = new Book(book); if (!books.containsKey(copy.getIsbn())) return books.put(copy.getIsbn(), copy) == null; return false; } public boolean deleteBook(String isbn) { return books.remove(isbn) != null; } public boolean updateBook(Book book) { Book copy = new Book(book); if (books.containsKey(copy.getIsbn())) return books.put(copy.getIsbn(), copy) != null; return false; } } Answer: 
I changed my code based on the answer of Antot and the answer of Ronan Dhellemmes. Because performance is not an issue, I'm not going to use an additional list to improve speed as described in this answer. Book import java.util.Objects; public class Book { private final String isbn; private String title; private String author; private double price; public Book(String isbn, String title, String author, double price) throws InvalidBookException { if (isbn == null) throw new InvalidBookException("ISBN cannot be null."); if (isbn.isEmpty()) throw new InvalidBookException("ISBN cannot be an empty string."); this.isbn = isbn; this.title = title; this.author = author; this.price = price; } public Book(Book book) { this(book.isbn, book.title, book.author, book.price); } @Override public String toString() { return String.format("Book [isbn=%s, title=%s, author=%s, price=%s]", isbn, title, author, price); } @Override public int hashCode() { return Objects.hashCode(this.isbn); } @Override public boolean equals(Object obj) { if (this == obj) return true; if (obj == null) return false; if (getClass() != obj.getClass()) return false; Book other = (Book) obj; return Objects.equals(this.isbn, other.isbn); } public String getIsbn() { return isbn; } public String getTitle() { return title; } public void setTitle(String title) { this.title = title; } public String getAuthor() { return author; } public void setAuthor(String author) { this.author = author; } public double getPrice() { return price; } public void setPrice(double price) { this.price = price; } } BookRepository import java.util.ArrayList; import java.util.HashMap; import java.util.List; import java.util.Map; public class BookRepository { private Map<String, Book> books; public BookRepository() { this.books = new HashMap<>(); } public Book get(String isbn) { Book copy = null; if (this.books.containsKey(isbn)) copy = new Book(this.books.get(isbn)); return copy; } public boolean add(Book book) { Book copy = new Book(book);
return this.books.putIfAbsent(copy.getIsbn(), copy) == null; } public boolean remove(String isbn) { return this.books.remove(isbn) != null; } public boolean update(Book book) { Book copy = new Book(book); return this.books.computeIfPresent(copy.getIsbn(), (isbn, existing) -> copy) != null; } public List<Book> getAll() { List<Book> list = new ArrayList<>(); for (Book book : this.books.values()) list.add(new Book(book)); return list; } public void removeAll() { this.books.clear(); } }
{ "domain": "codereview.stackexchange", "id": 26168, "tags": "java" }
Why are quantum gates unitary and not special unitary?
Question: Given that the global phases of states cannot be physically discerned, why is it that quantum circuits are phrased in terms of unitaries and not special unitaries? One answer I got was that it is just for convenience but I'm still unsure. A related question is this: are there any differences in the physical implementation of a unitary $U$ (mathematical matrix) and $ V: =e^{i\alpha}U$, say in terms of some elementary gates? Suppose there isn't (which is my understanding). Then the physical implementation of $c\text{-}U$ and $c\text{-}V$ should be the same (just add controls to the elementary gates). But then I get into the contradiction that $c\text{-}U$ and $c\text{-}V$ of these two unitaries may not be equivalent up to phase (as mathematical matrices), so it seems plausible they correspond to different physical implementations. What have I done wrong in my reasoning here, because it suggests now that $U$ and $V$ must be implemented differently even though they are equivalent up to phase? Another related question (in fact the origin of my confusion, I'd be extra grateful for an answer to this one): it seems that one can use a quantum circuit to estimate both the modulus and phase of the complex overlap $\langle\psi|U|\psi\rangle$ (see https://arxiv.org/abs/quant-ph/0203016). But doesn't this imply again that $U$ and $e^{i\alpha}U$ are measurably different? Answer: Even if you only limit yourself to special-unitary operations, states will still accumulate global phase. For example, $Z = \begin{bmatrix} i & 0 \\ 0 & -i \end{bmatrix}$ is special-unitary but $Z \cdot |0\rangle = i |0\rangle \neq |0\rangle$. If states are going to accumulate unobservable global phase anyways, what benefit do we get out of limiting ourselves to special unitary operations? are there any differences in the physical implementation of a unitary $U$ (mathematical matrix) and $V :=e^{i\alpha}U$, say in terms of some elementary gates? 
As long you're not doing anything that could make the global phases relevant, they can have the same implementation. But if you're going to do something like, uh- add controls to the elementary gates Yeah, like that. If you do stuff like that, then you can't ignore global phases. Controls turn global phases into relative phases. If you want to completely ignore global phase, you can't have a black box "add a control" operation modifier.
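This can be checked numerically. Below is an illustrative plain-Python sketch (no quantum library; all names are mine): on a bare state, $U$ and $V = e^{i\alpha}U$ have an overlap of modulus 1, but once a control qubit in superposition is added, the overlap modulus drops below 1, i.e. the two controlled operations are physically distinguishable.

```python
import cmath

def mat_vec(m, v):
    # plain matrix-vector product on nested lists
    return [sum(m[i][j] * v[j] for j in range(len(v))) for i in range(len(m))]

def controlled(u):
    # two-qubit block matrix [[I, 0], [0, U]]: apply U only when the
    # control qubit is |1>
    n = len(u)
    c = [[1.0 if i == j else 0.0 for j in range(2 * n)] for i in range(2 * n)]
    for i in range(n):
        for j in range(n):
            c[n + i][n + j] = u[i][j]
    return c

alpha = 1.0                                   # arbitrary global phase
U = [[1, 0], [0, -1]]                         # Pauli Z
V = [[cmath.exp(1j * alpha) * x for x in row] for row in U]

# On a bare state: U|psi> and V|psi> differ only by a global phase,
# so the overlap has modulus exactly 1 (physically indistinguishable).
psi = [1 / 2 ** 0.5, 1 / 2 ** 0.5]
a, b = mat_vec(U, psi), mat_vec(V, psi)
overlap = abs(sum(x.conjugate() * y for x, y in zip(a, b)))

# With a control qubit in superposition, the phase becomes relative:
# the overlap modulus drops below 1, so c-U and c-V are distinguishable.
s = [0.5, 0.5, 0.5, 0.5]                      # (|0>+|1>)(|0>+|1>)/2
ca, cb = mat_vec(controlled(U), s), mat_vec(controlled(V), s)
c_overlap = abs(sum(x.conjugate() * y for x, y in zip(ca, cb)))

print(round(overlap, 6), round(c_overlap, 6))  # 1.0 and ~0.8776
```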
{ "domain": "quantumcomputing.stackexchange", "id": 157, "tags": "quantum-gate, unitarity" }
Simple Python calculator 5
Question: I am really new to Python and programming in general and I am trying to improve doing some projects like this one. I would like to know how I can improve this the right way. """A simple calculator """ def additions(): print("ADDITION:") num1 = input("Give me your first number: ") num2 = input("Give me a second number: ") try: result = float(num1) + float(num2) print(result) except ValueError: print("INVALID") def subtractions(): print("SUBTRACTION:") num1 = input("Give me your first number: ") num2 = input("Give me a second number: ") try: result = float(num1) + float(num2) print(result) except ValueError: print("INVALID") def divisions(): print("DIVISION:") num1 = input("Give me your first number: ") num2 = input("Give me a second number: ") try: result = float(num1) + float(num2) print(result) except ValueError: print("INVALID") def multiplications(): print("MULTIPLICATION:") num1 = input("Give me your first number: ") num2 = input("Give me a second number: ") try: result = float(num1) + float(num2) print(result) except ValueError: print("INVALID") print("Hello to Simple Calculator ver.0.0003.") print("Type:\n 1. for Addition.\n 2. for Subtraction .\n 3. for Multiplication.\n 4. for Division. \n 0. to EXIT.") while True: try: user_input = int(input("What operation do you need? ")) except ValueError: print("INVALID!!!") continue if user_input == 1: additions() elif user_input == 2: subtractions() elif user_input == 3: multiplications() elif user_input == 4: divisions() elif user_input == 0: break Answer: For a beginner this is not bad at all. Consider factoring out repeated code into its own function, particularly the code that reads two input numbers and converts them to floats. In that function you could also include the printing of the operation title. Finally, consider putting your globally scoped code into a main function. 
The application can be even more abbreviated if you use the operator library and some simple tuple lookups: #!/usr/bin/env python3 from operator import add, sub, mul, truediv '''A simple calculator ''' def main(): ops = (('Addition', add), ('Subtraction', sub), ('Multiplication', mul), ('Division', truediv)) print('Hello to Simple Calculator ver.0.0003.') print('Type:') print('\n'.join(' %d. for %s' % (i+1, name) for i, (name, op) in enumerate(ops))) print(' 0. to exit') while True: try: user_input = int(input('What operation do you need? ')) except ValueError: print('Invalid input.') continue if user_input == 0: break elif 1 <= user_input <= 4: title, op = ops[user_input - 1] print('%s:' % title) try: num1 = float(input('Give me your first number: ')) num2 = float(input('Give me a second number: ')) print(op(num1, num2)) except ValueError: print('Invalid input.') else: print('Invalid input.') if __name__ == '__main__': main() You don't even need your operations to be separated into functions.
{ "domain": "codereview.stackexchange", "id": 32706, "tags": "python, beginner, python-3.x, calculator" }
Would gravity still act if all objects in a closed system somehow became stationary?
Question: I have been trying to understand the implications of general relativity. I unfortunately don't have a good knowledge of advanced topics and I may have made some silly assumptions. As far as I understand, spacetime dictates the trajectory of an object, and the object curves spacetime. Objects follow the shortest path, and it appears as if things are being pulled when instead they're just accelerating in a specific way due to the curvature. Gravity is a fictitious force. I'm confused about what would happen if we imagine a universe with two identical stationary objects. I'm guessing that because it's not actually possible for anything to be completely stationary (because we cannot reach absolute zero (uncertainty principle?)), these objects will move along the curvature. But if we consider it was possible for objects to be completely stationary, does this mean that these objects won't follow the curvature since they're not moving to begin with and it would appear as if gravity has stopped working. The objects stay stationary instead of crashing into each other. What would happen if there are two identical stationary objects and I apply a force to one of the objects, such that the direction of the force is perpendicular to the line connecting the two objects? I'm guessing that it should start to orbit the other object, but I also know that in this inertial frame of reference, since there are two objects, I shouldn't be able to tell who's moving. So the outcome of some force being applied should be symmetrical, so does this mean the objects would start to chase each other? But then, in a situation where the objects are not identical, if I move the heavier object, would it still appear as if the smaller object has started orbiting due to a force pulling it? But this sounds like movement in one stationary object has induced movement in another stationary object (assuming stationary objects were possible)? 
Answer: By 'stationary', you would mean 'stationary with respect to the spatial axes of a certain inertial frame of reference'. However, a stationary object would still 'move' along the time axis -- in fact, it will 'move' as fast as it can (=at the speed of light) along the time axis, if it is stationary spatially. The thought is that your four-dimensional 'speed' is always constant. The only difference between a moving body and a stationary one is the composition of their four-dimensional 'speed'; if it's stationary, then all of its '4D speed' will come from its temporal 'motion'; if it's not, then the 'speed' will be a combination of its spatial motion and temporal 'motion'. Of course, here the words 'move' and 'speed' should be interpreted somewhat metaphorically, for you cannot really make sense of things moving along the temporal axis. (or maybe you could.) Anyhow, from the four-dimensional point of view, you're always 'moving' at a constant 'speed', which is the speed of light, regardless of your spatial motion or temperature or whatever. (By the way, all you need to articulate this is just special relativity: Suppose that it took t seconds for an object S to move from a point A to a point B at a constant speed v in a certain inertial frame K. In the frame of reference of S (i.e., the frame of reference where S is always at the origin), let's say, the same journey took t' seconds. Then we could think of $\frac{t'}{t}$ as the 'temporal speed' of the object S relative to the frame K. It tells you, so to speak, how slow S's clock ticks with respect to a clock in K. Then there's the following special relativistic relation between S's 'temporal speed' and its 'spatial speed (v)': $(\frac{t'}{t})^2 + (\frac{v}{c})^2 = 1$ This is what I meant when I said your '4D speed' is always constant. You always follow some spacetime trajectory regardless of your velocity with respect to a certain frame.)
So, yes, gravity still works even in the physically impossible hypothetical situation where things have zero spatial motion with respect to a certain inertial frame. 2. Regarding the indiscernible objects: if you apply force to one of the objects but not to the other, then, of course, you can tell which one is which; one that experiences acceleration is the one that you pushed, and whether something is accelerated or not is not a relative matter in general relativity. What is completely relative is inertial motion (in SR) or geodesic motion (in GR). If a bucket of water is rotating, then everyone should agree that it's rotating, for angular motion is a form of acceleration and acceleration is not totally relative.
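The relation quoted above, $(\frac{t'}{t})^2 + (\frac{v}{c})^2 = 1$, is just the special-relativistic time-dilation factor in disguise, which a few lines of illustrative Python (the function name is mine) make explicit:

```python
import math

C = 299_792_458.0  # speed of light in m/s

def temporal_speed(v):
    """Return t'/t for spatial speed v: the time-dilation factor
    sqrt(1 - (v/c)^2), so that (t'/t)^2 + (v/c)^2 = 1."""
    return math.sqrt(1.0 - (v / C) ** 2)

for v in (0.0, 0.5 * C, 0.9 * C):
    tt = temporal_speed(v)
    total = tt ** 2 + (v / C) ** 2   # the constant '4D speed', squared
    print(f"v = {v / C:.1f}c   t'/t = {tt:.4f}   (t'/t)^2 + (v/c)^2 = {total:.6f}")
```

A spatially stationary object (v = 0) has temporal speed 1: all of its '4D speed' is along the time axis, exactly as the answer describes.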
{ "domain": "physics.stackexchange", "id": 71771, "tags": "general-relativity" }
denoising using soft thresholding or hard thresholding in MATLAB
Question: Let us consider the following code: clear all; clc; f1=10; f2=40; fs=100; ts=1/fs; t=0:ts:2.93; x=19*sin(2*pi*f1*t).*((t<0.25)+(t>1))+20*cos(2*pi*f2*t).*((t>=0.25)+(t<1))+1.5*randn(size(t)); plot(t,x); axis tight title('Signal'); xlabel('Time or Space'); The output of the plot is given. I would like to apply a denoising method using wavelets; generally I can compute the continuous wavelet transform using the cwt command, but how exactly can the procedure be done for denoising the signal and for reconstruction? Please help me, I just need a few MATLAB code lines for this. Thank you very much. EDITED: I have added the following command to my code: scales=1:32; wname = 'gaus4'; coefficients=cwt(x,scales,wname,'plot'); and got the result. I know methods like soft and hard thresholding; there are steps: • Apply the wavelet transform to the noisy signal to produce the noisy wavelet coefficients up to the level at which we can properly distinguish the PD occurrence. • Select an appropriate threshold limit at each level and a threshold method (hard or soft thresholding) to best remove the noise. • Inverse wavelet transform the thresholded wavelet coefficients to obtain a denoised signal. In my case the coefficients are a two-dimensional matrix, so how can I continue? I have tried the following code [XD,CXD,LXD] = wden(x,'sqtwolog','s', 'mln',4,'gaus4'); but there is an error: ************************************************ ERROR ... ------------------------------------------------ wfilters ---> The wavelet gaus4 is not valid! ************************************************ Error using wfilters (line 92) Invalid argument value. Error in wavedec (line 32) [Lo_D,Hi_D] = wfilters(IN3,'d'); Error in wden (line 72) [c,l] = wavedec(x,n,w); Please help me to finalize my work. Answer: The wden function should do exactly what you need: 1-D de-noising. The documentation states that the wavelet family must be orthogonal. The family you specified - Gaussian wavelets - is not orthogonal, thus it is not possible to use it for wavelet denoising.
Use any of the following wavelets: Haar Daubechies Symlets Discrete Meyer You'll find some information on the different wavelet families in the MATLAB help page on waveletfamilies.
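The question is in MATLAB (where, as far as I know, the wthresh function applies these rules for you), but the hard/soft thresholding step itself is simple enough to sketch in a few lines of plain Python, purely to show the two rules side by side:

```python
def hard_threshold(coeffs, thr):
    # hard rule: zero out coefficients at or below the threshold,
    # keep the rest unchanged
    return [c if abs(c) > thr else 0.0 for c in coeffs]

def soft_threshold(coeffs, thr):
    # soft rule: zero out small coefficients AND shrink the survivors
    # toward zero by thr, i.e. sign(c) * (|c| - thr)
    return [(1.0 if c > 0 else -1.0) * (abs(c) - thr) if abs(c) > thr else 0.0
            for c in coeffs]

coeffs = [5.0, -0.3, 2.0, 0.1, -4.0]
print(hard_threshold(coeffs, 1.0))  # [5.0, 0.0, 2.0, 0.0, -4.0]
print(soft_threshold(coeffs, 1.0))  # [4.0, 0.0, 1.0, 0.0, -3.0]
```

Hard thresholding keeps large coefficients intact; soft thresholding additionally shrinks them, which tends to give smoother reconstructions at the cost of slightly biased amplitudes.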
{ "domain": "dsp.stackexchange", "id": 2438, "tags": "matlab, wavelet" }
How can I install the 'ibmq_qasm_simulator' backend?
Question: I'm having trouble running the following cell: The system told me "The 'ibmq_qasm_simulator' backend is not installed in your system." I'm so confused. Is there a way I can install this backend in my system? Thanks! Answer: The ibmq_qasm_simulator is a cloud-based simulator. You need to say from qiskit import IBMQ provider = IBMQ.load_account() sim = provider.backends.ibmq_qasm_simulator
{ "domain": "quantumcomputing.stackexchange", "id": 1648, "tags": "programming, ibm-q-experience, ibm-quantum-devices, qasm" }
cv_bridge::Exception bayer_rggb8 vs 8UC3
Question: I'm publishing /camera/image_raw of type sensor_msgs/Image from PointgreyDriver: image_transport::SubscriberStatusCallback cb = boost::bind(&PointGreyCameraNodelet::connectCb, this); it_pub_ = it_->advertiseCamera("image_raw", 5, cb, cb); On the other hand I'm subscribing to that topic: this->sub = new image_transport::Subscriber(this->imageTransport->subscribe("/camera/image_raw",1000,&p_vision_apriltags::callback,this)); imgTmp = cv_bridge::toCvCopy(msg,sensor_msgs::image_encodings::TYPE_8UC3); The error I'm getting is an exception: terminate called after throwing an instance of 'cv_bridge::Exception' what(): [bayer_rggb8] is a color format but [8UC3] is not so they must have the same OpenCV type, CV_8UC3, CV16UC1 .... [AprilTagsDetector-1] process has died [pid 29207, exit code -6, cmd /home/jaouadros/catkin_ws/devel/lib/p_vision_apriltags/p_vision_apriltags __name:=AprilTagsDetector __log:=/home/jaouadros/.ros/log/da606018-a91e-11e7-80aa-0024d7d045f0/AprilTagsDetector-1.log]. log file: /home/jaouadros/.ros/log/da606018-a91e-11e7-80aa-0024d7d045f0/AprilTagsDetector-1*.log all processes on machine have died, roslaunch will exit When I change TYPE_8UC3 to bgr8 it shows frames with random uniform colors. What types should I be using? Normally TYPE_8UC3 works with image_raw! Originally posted by ROSkinect on ROS Answers with karma: 751 on 2017-10-04 Post score: 0 Answer: sensor_msgs::image_encodings::TYPE_8UC3 is not a color format, it is a data type with 3 channels of 8 bits each (but with no information about what each channel means), so cv_bridge::toCvCopy doesn't know how to transform from the bayer_rggb8 color format. I assume that what you really want is to transform the original image received in bayer_rggb8 format to the more commonly used BGR8 format.
In that case try the following: imgTmp = cv_bridge::toCvCopy(msg, sensor_msgs::image_encodings::BGR8); Now, BGR8 is a color format, it also has 3 channels of 8 bits each, but now you know that the first channel corresponds to Blue, the second corresponds to Green and the third corresponds to Red. If by doing this you get random uniform colors, then make sure that the original image published by the driver is actually in bayer_rggb8 format. Originally posted by Martin Peris with karma: 5625 on 2017-10-04 This answer was ACCEPTED on the original site Post score: 1 Original comments Comment by ROSkinect on 2017-10-05: BGR8 gives me a uniform color each frame. How can I make sure that it is of type bayer_rggb8? rostopic info /camera/image_raw doesn't give much information? Comment by Martin Peris on 2017-10-05: You should check the specifications of your camera, which camera are you using? Comment by ROSkinect on 2017-10-06: I'm using this camera Blackfly 0.9 MP Color GigE PoE
{ "domain": "robotics.stackexchange", "id": 29006, "tags": "ros" }
What is the Lagrangian of the Einstein field equations
Question: What is a Lagrangian such that the Euler-Lagrange equation (not sure if it's the correct form for this case) $$\frac{\partial \mathcal{L}}{\partial g_{\mu\nu}}=\partial_\lambda\frac{\partial \mathcal{L}}{\partial (\partial_\lambda g_{\mu\nu})}.$$ gives us the Einstein field equations? Answer: This is almost certainly answered elsewhere, but the Hilbert action, from which Einstein's equation can be derived, is: $$S = \int d^{4}x\;\left(\sqrt{|g|}\frac{1}{16\pi G}R + \mathcal{L}_{m}\right)$$ Taking the variation is pretty complicated (there are second derivatives of the metric in the action, and you have to deal with gauge invariance) and best looked up in a textbook, though. But note that, by this definition, we define $T_{ab} = \frac{\delta \mathcal{L}_m}{\delta g^{ab}}$
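For completeness, here is the standard variation sketch the answer alludes to (well-known textbook steps, not spelled out in the original). Using $\delta \sqrt{|g|} = -\tfrac{1}{2}\sqrt{|g|}\, g_{\mu\nu}\, \delta g^{\mu\nu}$ and the fact that $\delta R = R_{\mu\nu}\,\delta g^{\mu\nu} + (\text{total derivative})$, the gravitational part of the variation reduces to

```latex
\delta S_{\mathrm{grav}}
  = \frac{1}{16\pi G}\int d^{4}x\,\sqrt{|g|}
    \left(R_{\mu\nu} - \tfrac{1}{2}\,R\,g_{\mu\nu}\right)\delta g^{\mu\nu},
```

so setting $\delta S = 0$ against the matter variation gives $R_{\mu\nu} - \tfrac{1}{2}R\,g_{\mu\nu} \propto T_{\mu\nu}$; the exact factor ($8\pi G$) and sign depend on the normalization convention chosen for $T_{\mu\nu}$, a common one being $T_{\mu\nu} = -\frac{2}{\sqrt{|g|}}\frac{\delta(\sqrt{|g|}\,\mathcal{L}_m)}{\delta g^{\mu\nu}}$.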
{ "domain": "physics.stackexchange", "id": 89090, "tags": "general-relativity, lagrangian-formalism" }
What is the difference between TE and TI?
Question: I have done some research about the subject but I can't find the difference between Temperature Element (TE) and Temperature Indicator (TI). Here is an image of the P&ID of the project I am working on, if it can help clarify my question. Answer: The element does the measuring and the indicator shows the value. If you think of a simple thermometer, the bulb at the bottom measures the temperature and the glass tube indicates the value...
{ "domain": "engineering.stackexchange", "id": 4833, "tags": "mechanical-engineering, electrical-engineering, instrumentation" }
Is this an optimal implementation of merge sort?
Question: I am taking an algorithm course and we are to implement merge sort that handles a list of elements with an even or odd number of elements and handles duplication, so I wrote this function: void mergesort (int* list, int len) { if(len == 1) return; int i = len/2, j = len-i; int list1[i], list2[j]; for(int k=0;k<i;k++) { list1[k]= list[k]; list2[k]= list[i+k]; } if(len%2!=0) list2[j-1] = list[len-1]; mergesort(list1 , i); mergesort(list2 , j); int k=0,l=0; // k represents the counter over elements in list1 // l represents the counter over elements in list2 // k+l represents the counter over total elements in list while(k+l!=len) { if(k==i) { for(;l<j;l++) list[k+l] = list2[l]; return; } else if (l==j) { for(;k<i;k++) list[k+l] = list1[k]; } else if(list1[k]<list2[l]) { list[k+l]=list1[k]; k++; } else if(list1[k]>list2[l]) { list[k+l] = list2[l]; l++; } else { // handles duplication list[k+l] = list1[k]; k++; list[k+l] = list2[l]; l++; } } } I have 2 questions: How can I make this implementation more optimal (best possible performance)? When handling arrays of large lengths (1000000), what causes a segmentation fault? NOTE: I tried the function using a randomly generated array of length 1000 and it worked. Answer: Suspect that segmentation fault on large arrays occurs because the list1[] and list2[] ran out of space. With the recursive calls, code is heavily using the stack space. Use malloc() and free() for large arrays instead of VLA[] Memory allocation could be reduced. Via recursion, this takes > 2n (maybe 4n) memory space. At worst it should be 2n. Use size_t rather than int for an integer type that can handle all array indexes. // void mergesort (int* list, int len) void mergesort (int* list, size_t len) Cope with 0 length. // if(len == 1) return; if(len <= 1) return;
{ "domain": "codereview.stackexchange", "id": 11692, "tags": "optimization, algorithm, c, mergesort" }
How by the fact that mass ratios are identical can we conclude that mass is independent of the source of acceleration?
Question: I am currently studying Kleppner D., Kolenkow R. - An Introduction to Mechanics, and I am stuck at a point where they show, by carrying the rubber-band experiment further: By causing motion using springs and magnets or any other source, we found that ratio of acceleration, hence the mass ratio are identical no matter how we produce acceleration, provided that we do the same thing to each body Up to here I understood, but then they continue to conclude: Thus, mass turns out to be independent of the source of acceleration, but appears to be inherent property of the body. I cannot understand how the mass ratio being constant implies that mass does not depend on the source of acceleration (or force, as I understand it). After all, the ratio of accelerations is also identical, yet each acceleration depends on the force. Answer: I believe the author is saying that if two objects exhibit different accelerations when subjected to the same force, there must be some property that is determining how much that object should accelerate. As it turns out, as the force is increased ("no matter how we produce acceleration"), the same accelerations increase in the same ratio. I.e. if we double the force, both objects' accelerations double, but the ratio between their accelerations, such as one being three times larger than the other, stays the same. This is why the author concludes that this property, called "mass", must be an "inherent property of the body", and be independent of the conditions applied to the body (unlike, say, colour which changes with different lighting conditions). For example: say we apply a force of $1$N to two bodies, and see that they accelerate at $2\text{ms}^{-2}$ and $6\text{ms}^{-2}$. Then the author is saying that if we were to increase this force to, say, $2$N, then, although their accelerations would increase (here to $4\text{ms}^{-2}$ and $12\text{ms}^{-2}$), the ratio of their accelerations stays the same: $1:3$.
So there must be something inherent that tells the objects how much to accelerate - this is mass.
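A few lines of illustrative Python show the invariance directly; the masses are made up, chosen only so that F = 1 N reproduces the answer's 2 and 6 m/s² example:

```python
# masses chosen so that F = 1 N gives accelerations of 2 and 6 m/s^2,
# matching the answer's example -- the values are purely illustrative
m1, m2 = 0.5, 1.0 / 6.0

for force in (1.0, 2.0, 10.0):
    a1, a2 = force / m1, force / m2
    # each acceleration grows with the force, but their ratio does not
    print(f"F = {force:4.1f} N   a1 = {a1:5.1f}   a2 = {a2:5.1f}   a1/a2 = {a1 / a2:.4f}")
```

Whatever force is applied, a1/a2 stays at m2/m1 = 1/3: the force-independent ratio is exactly the inverse mass ratio, which is why mass can be read off as an inherent property.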
{ "domain": "physics.stackexchange", "id": 67350, "tags": "newtonian-mechanics, forces, mass, acceleration" }
Parse input string into vector of i16 in Rust
Question: I'm new to Rust and I'm working through the exercises found at the bottom of this page. The following code parses a space-delimited input string from the user into a Vec<i16>. If the input string is invalid, the code loops and prompts again. If the string is valid, it prints out the debug value of the Vec<i16> and prompts again. The code works, but I feel there is a more idiomatic Rust way to deal with this. Particularly, in the get_input function's looping and how it's assigning the return value. Behold. use std::io::Write; fn main() { while let Some(ary) = get_input() { println!("{:?}", ary); } println!("Peace!"); } fn get_input() -> Option<Vec<i16>> { let mut out: Option<Vec<i16>> = None; let mut valid = false; while !valid { // Get user input. print!("Enter series-> "); std::io::stdout().flush().unwrap(); let mut input = String::new(); std::io::stdin().read_line(&mut input).unwrap(); // Parse it. match input.trim() { "q" => { valid = true; out = None; } _ => { let parsed = parse_input(input); if let Ok(_) = parsed { out = Some(parsed.unwrap()); valid = true; } } } } out } fn parse_input(input: String) -> Result<Vec<i16>, std::num::ParseIntError> { input .split_whitespace() .map(|token| token.parse::<i16>()) .collect::<Result<Vec<i16>, _>>() } Answer: I appreciate that you have formatted your code in the current idiomatic Rust style; thanks! There's no reason to set the type of the out variable, type inference will handle it. There's no need for the out or valid variables at all. Switch to an infinite loop and return from it when you need to. It's likely overkill, but you are allocating and freeing the string for each loop. You could instead pull it outside of the loop to reuse it some, or pull the whole thing into a structure and reuse it for the entire program. You will need to ensure you clear the string before each use. Don't unwrap inside of an if let.
Instead, bind the result to a variable when doing the pattern matching and use it inside the block. (Clippy also tells you this, in a slightly different manner) There's no benefit to taking a String in parse_input; you don't reuse the allocation. Accept a &str instead. (Clippy also tells you this) You don't need any of the turbofish in parse_input; type inference knows what to do based on the return type. use std::io::Write; fn main() { while let Some(ary) = get_input() { println!("{:?}", ary); } println!("Peace!"); } fn get_input() -> Option<Vec<i16>> { loop { // Get user input. print!("Enter series-> "); std::io::stdout().flush().unwrap(); let mut input = String::new(); std::io::stdin().read_line(&mut input).unwrap(); // Parse it. match input.trim() { "q" => return None, input => { if let Ok(numbers) = parse_input(input) { return Some(numbers); } } } } } fn parse_input(input: &str) -> Result<Vec<i16>, std::num::ParseIntError> { input .split_whitespace() .map(|token| token.parse()) .collect() }
{ "domain": "codereview.stackexchange", "id": 27496, "tags": "rust" }
Why do we not consider $\bar{\nu}_L$ in $SU(5)$?
Question: In GUTs, why do we not consider $\bar{\nu}_L$ in the $\bar{5}\oplus 10$ representation of $SU(5)$? One says that there are 15 particles and antiparticles per generation, but as far as I can tell there are 16. Answer: There is no deep reason for this. When Georgi and Glashow discovered the first GUT model, they noticed that all standard model particles fit perfectly in the $ \bar{5}\oplus 10$ representation of $SU(5)$. That's why they proposed that it is a good idea to study this kind of model with $SU(5)$ as a GUT group. However, if you like, you can consider an $SU(5)$ model with fermions in the $ \bar{5}\oplus 10 \oplus 1$ representation, i.e. simply add the right-chiral neutrino by hand to the model. This is, in fact, exactly what you get when you consider $SO(10)$ as a GUT group. The $15$ standard model fermions plus the right-chiral neutrino fit perfectly in the $16$ of $SO(10)$. When you break $SO(10)$ to $SU(5)$ you get $$ 16 \to \bar{5}\oplus 10 \oplus 1 $$
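A dimension count (my addition, not part of the original answer) makes the 15-versus-16 bookkeeping explicit. Under the Standard Model subgroup $SU(3)\times SU(2)\times U(1)$ the decomposition of one generation is:

```latex
% Standard-model decomposition of the SU(5) fermion representations
% (one generation); hypercharges in the Y = Q - T_3 normalization.
\begin{align*}
\bar{5} &= (\bar{3},1)_{1/3} \oplus (1,2)_{-1/2}
        && d^c,\; L=(\nu,e) && 3+2 = 5 \\
10      &= (3,2)_{1/6} \oplus (\bar{3},1)_{-2/3} \oplus (1,1)_{1}
        && Q=(u,d),\; u^c,\; e^c && 6+3+1 = 10 \\
1       &= (1,1)_{0}
        && \nu^c \ (\text{the missing 16th state}) && 1
\end{align*}
```

So $\bar{5}\oplus 10$ holds exactly $5+10=15$ chiral states; the gauge-singlet $\nu^c$ is the extra state that completes the $16$ of $SO(10)$.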
{ "domain": "physics.stackexchange", "id": 40621, "tags": "particle-physics, beyond-the-standard-model, grand-unification" }
Maximum number of states in minimized DFA from NFA with $n$ states
Question: If an NFA with $n$ states is converted to an equivalent minimized DFA then what will be the maximum number of states in the DFA? Will it be $2^n$ or $2n$? Answer: The maximum number of states is $2^n$. Conversion from NFA to DFA is done by subset construction and the number of states of the resulting DFA is in the worst case $2^n$. Minimization of the resulting DFA in the worst case might not reduce the number of states. An example of this is an automaton that accepts strings over $\Sigma = \{0, 1\}$ which have $1$ as the $n$th symbol from the end. Of course, $n$ is a concrete number. An NFA has states $q_0...q_n$ and the following transition function: $$(q_0, 0) \rightarrow \{q_0\} \;\;\;\; (q_0, 1) \rightarrow \{q_0, q_1 \}$$ $$(q_i, 0) \rightarrow \{q_{i+1}\} \;\;\;\; (q_i, 1) \rightarrow \{q_{i+1}\} \;\;\; 1 \leq i \leq n-1$$ Intuitively, the corresponding DFA needs to remember the last $n$ symbols, since it does not know whether it has seen the end, which means there are $2^{n}$ states. For more details, I suggest looking into this book. It has a more detailed proof and generally covers these topics thoroughly.
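The blow-up is easy to check mechanically (my addition, not part of the answer): a short Python sketch of the subset construction on exactly this NFA. The number of reachable subsets comes out to $2^n$, and since those subsets are pairwise distinguishable (they disagree on some $n$th-from-the-end symbol), minimization cannot reduce it further.

```python
def nth_from_end_nfa(n):
    """NFA over {0,1} accepting strings whose n-th symbol from the end is 1.
    States are 0..n; start state 0; accepting state n has no outgoing moves."""
    delta = {(0, '0'): {0}, (0, '1'): {0, 1}}
    for i in range(1, n):
        delta[(i, '0')] = {i + 1}
        delta[(i, '1')] = {i + 1}
    return delta

def reachable_subsets(delta, start=frozenset({0})):
    """Subset construction: all DFA states reachable from the start subset."""
    seen, stack = {start}, [start]
    while stack:
        s = stack.pop()
        for sym in '01':
            t = frozenset(q for p in s for q in delta.get((p, sym), ()))
            if t not in seen:
                seen.add(t)
                stack.append(t)
    return seen

for n in (3, 4, 5):
    print(n, len(reachable_subsets(nth_from_end_nfa(n))))  # 3 8, 4 16, 5 32
```

Every reachable subset has the form $\{q_0\}\cup S$ with $S\subseteq\{q_1,\dots,q_n\}$ encoding which of the last $n$ symbols were $1$, which is where the $2^n$ comes from.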
{ "domain": "cs.stackexchange", "id": 2037, "tags": "automata, finite-automata, nondeterminism" }
Prefix Sums in Mutable 2D Array
Question: Suppose I have a 2D array M[n][n] of integers (in fact, binary is fine, but I doubt it matters). I am interested in repeated queries of the form: given a coordinate pair $k,l$, what is $$ \sum_{i = 0}^{k-1} \sum_{j = 0}^{l-1} M[i][j]? $$ Of course, all these values can be computed in $\mathcal O(n^2)$ time total, and after that queries take $\mathcal O(1)$. However, my array is mutable, and each time I change a value, the obvious solution requires a $\mathcal O(n^2)$ update. We can create a quad tree over M; the preprocessing takes $\mathcal O(n^2\log(n))$, and this allows us to do queries in $\mathcal O(n\log(n))$, and updates in $\mathcal O(\log(n))$. My question is: Can we improve significantly on the queries without sacrificing too much on the updates? I am especially interested in getting both the update and query operations sub-linear, and in particular getting them both to $\mathcal O(n^\epsilon)$. Edit: for some more information, although I think the problem is interesting even without this further restriction, I expect to do roughly $\mathcal O(n^3)$ queries, and about $\mathcal O(n^2)$ updates. The ideal goal is to get the full runtime down to about $\mathcal O(n^{3+\epsilon})$. Thus, a situation where an update takes $\mathcal O(n \log(n))$ while a query takes $\mathcal O(\log(n))$ would also be interesting to me. Answer: There is a relatively straightforward solution where each query and each update can be done in $O(\log^2 n)$ time. The data structure uses $O(n^2)$ space. We will have $\lg n$ "granularities" of data structure, one for each power of two $2^m$ such that $1 \le 2^m \le n$. The data structure for granularity $2^m$ stores the sums $$\sum_{i=k_0 \cdot 2^m}^{(k_0+1) \cdot 2^m -1} \sum_{j=0}^{l-1} M[i,j]$$ for each $k_0,l$. This data structure for granularity $2^m$ can in turn be represented using $n/2^m$ balanced trees (one for each possible value of $k_0$) to store prefix sums. 
Now to look up the prefix sum for $k,l$, we break up the interval $[0,k-1]$ into a union of intervals of power-of-two length; at most $\lg n$ intervals are needed. For each such interval of length $2^m$, we do a lookup into the data structure of granularity $2^m$. Thus queries can be answered by doing $O(\log n)$ lookups into a balanced tree, each of which takes $O(\log n)$ time, for a total time of $O(\log^2 n)$ per query. Updates can also be done in $O(\log^2 n)$ time. To update $M[i,j]$, for each granularity $2^m$, you update the appropriate balanced tree in the data structure of granularity $2^m$. This is $O(\log n)$ updates in $O(\log n)$ balanced trees; each such update takes $O(\log n)$ time, so the total time is $O(\log^2 n)$. Finally, the data structure of granularity $2^m$ contains $n/2^m$ trees, each taking up $O(n)$ space, so the total space usage is $O(n^2 \cdot (1 + 1/2 + 1/4 + \cdots)) = O(n^2)$.
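Not from the answer above, but worth noting for comparison: a two-dimensional Fenwick (binary indexed) tree is a standard alternative that reaches the same $O(\log^2 n)$ bounds for both point updates and prefix-rectangle queries, in $O(n^2)$ space. A minimal Python sketch:

```python
class Fenwick2D:
    """2D Fenwick tree: point update and prefix-rectangle sum, both O(log^2 n)."""
    def __init__(self, n):
        self.n = n
        self.t = [[0] * (n + 1) for _ in range(n + 1)]

    def update(self, i, j, delta):
        """Add delta to M[i][j] (0-based indices)."""
        a = i + 1
        while a <= self.n:
            b = j + 1
            while b <= self.n:
                self.t[a][b] += delta
                b += b & -b
            a += a & -a

    def query(self, k, l):
        """Sum of M[i][j] over 0 <= i < k, 0 <= j < l."""
        s, a = 0, k
        while a > 0:
            b = l
            while b > 0:
                s += self.t[a][b]
                b -= b & -b
            a -= a & -a
        return s

ft = Fenwick2D(4)
ft.update(0, 0, 5); ft.update(3, 2, -2); ft.update(1, 3, 7)
print(ft.query(4, 4))  # 10
```

With the asker's workload of roughly $O(n^3)$ queries and $O(n^2)$ updates this gives $O(n^3\log^2 n)$ total, close to the stated $O(n^{3+\epsilon})$ goal.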
{ "domain": "cs.stackexchange", "id": 10905, "tags": "complexity-theory, data-structures, trees" }
Binding Lists of Commits to a DataGridView in Winforms with an MVP architecture
Question: I'm using Winforms and I find this pattern showing up in my code a lot. It works, but I'm unsure if it's best. The goal is to pass an IList of items through a view interface and then bind that list to a DataGridView in the concrete GUI class. I've been accomplishing this by keeping a private BindingList<T> in the control, and converting the IList when getting/setting the properties. I can't help but feel that there's a better way to do this databinding. The GUI looks like this. The View interface. public interface IUnSyncedCommitsView { event EventHandler<EventArgs> Fetch; event EventHandler<EventArgs> Pull; event EventHandler<EventArgs> Push; event EventHandler<EventArgs> Sync; IList<ICommit> IncomingCommits { get; set; } IList<ICommit> OutgoingCommits { get; set; } } The UserControl code behind. [ExcludeFromCodeCoverage] public partial class UnSyncedCommitsControl : UserControl, IUnSyncedCommitsView { public UnSyncedCommitsControl() { InitializeComponent(); SetText(); } private void SetText() { CurrentBranchLabel.Text = RubberduckUI.SourceControl_CurrentBranchLabel; FetchIncomingCommitsButton.Text = RubberduckUI.SourceControl_FetchCommitsLabel; PullButton.Text = RubberduckUI.SourceControl_PullCommitsLabel; PushButton.Text = RubberduckUI.SourceControl_PushCommitsLabel; SyncButton.Text = RubberduckUI.SourceControl_SyncCommitsLabel; IncomingCommitsBox.Text = RubberduckUI.SourceControl_IncomingCommits; OutgoingCommitsBox.Text = RubberduckUI.SourceControl_OutgoingCommits; } private BindingList<ICommit> _incomingCommits; public IList<ICommit> IncomingCommits { get { return _incomingCommits; } set { _incomingCommits = new BindingList<ICommit>(value); this.IncomingCommitsGrid.DataSource = _incomingCommits; } } private BindingList<ICommit> _outgoingCommits; public IList<ICommit> OutgoingCommits { get { return _outgoingCommits; } set { _outgoingCommits = new BindingList<ICommit>(value); this.OutgoingCommitsGrid.DataSource = _outgoingCommits; } } public event 
EventHandler<EventArgs> Fetch; private void FetchIncomingCommitsButton_Click(object sender, EventArgs e) { var handler = Fetch; if (handler != null) { handler(this, e); } } public event EventHandler<EventArgs> Pull; private void PullButton_Click(object sender, EventArgs e) { var handler = Pull; if (handler != null) { handler(this, e); } } public event EventHandler<EventArgs> Push; private void PushButton_Click(object sender, EventArgs e) { var handler = Push; if (handler != null) { handler(this, e); } } public event EventHandler<EventArgs> Sync; private void SyncButton_Click(object sender, EventArgs e) { var handler = Sync; if (handler != null) { handler(this, e); } } } This is the pattern in particular that I'm really worried about. private BindingList<ICommit> _incomingCommits; public IList<ICommit> IncomingCommits { get { return _incomingCommits; } set { _incomingCommits = new BindingList<ICommit>(value); this.IncomingCommitsGrid.DataSource = _incomingCommits; } } Answer: A problem I see here is that the current approach will generate deeply nested BindingList objects if you repeatedly call the setter with a list you previously got from the getter. If you see what I mean. Probably you don't really do such thing in practice, but it's still ugly. I'm going to assume that you generally want the getter return the same list that was originally passed to the setter. That is, if you passed some list x to the setter, you want that list x back from the getter, not some wrapped(x) list. To make this work, you would need to keep the original list, or have a way to access it again if the setter puts it in a wrapper. 
It seems this can be done using a BindingSource: Create a BindingSource with a BindingList and null parameter, and keep it in a field. Bind your grid to this BindingSource. Make the list getter return the underlying list with bindingSource.List. Make the list setter replace the underlying list, either by replacing the list in the BindingList, or by replacing the BindingList in the BindingSource. This last item is a work in progress. I spent some time searching through the docs for it, but it seems harder than I expected. (Btw I found this discussion illuminating.) If all else fails, you could clear the underlying list and add all elements from the incoming list. Depending on how you use the getters and setters, you might want to do some defensive copies as appropriate. Unrelated to your main concern, the code duplication when calling the fetch/push/... handlers is not great, with the boilerplate null checks. It would be good to create a helper method CallHandlerIfNotNull or similar.
{ "domain": "codereview.stackexchange", "id": 17592, "tags": "c#, winforms, rubberduck, databinding" }
Why are there so many species of bats?
Question: I read that roughly 20% of all species of mammals are bats. Is there a good explanation for why bats have diversified so much compared to other mammals? Is it because bats' ability to fly allows them to fill niches that are completely out of reach (pun intended) of other mammals? If so, how is this reconciled with the coexistence of bat populations and bird populations? Answer: Indeed, good question. There's tiny bats, large bats, and so many different species of them! A clue can be found in the observation that each bat species is almost always restricted to a small geographical region. With this in mind, it has been shown that an animal lineage's ability to diversify quickly in new environments depends largely on its diet. Bats have an incredible variety of dietary sources. Plant nectar, insects (there's so many different kinds of insects...), blood-feeding, and more! So, as you can imagine, bat species are restricted because they have specialized diets and because they are very good at taking up new diets over evolutionary time. Unfortunately for them, it restricts them to a geographical location much of the time. Geographical restriction is always a good starting point for speciation events, and there have been many in the bat clade. But why bats, specifically, you might ask? Well - it is argued (below) that it is due to the characteristics of their skulls, rather than their wings. Dumont, E. R., Dávalos, L. M., Goldberg, A., Santana, S. E., Rex, K., & Voigt, C. C. (2012). Morphological innovation, diversification and invasion of a new adaptive zone. Proc. R. Soc. B, 279(1734), 1797-1805.
{ "domain": "biology.stackexchange", "id": 9322, "tags": "zoology" }
Mapping phone numbers to names
Question: Problem description: You are given a phone book that consists of your friends' names and their phone numbers. After that you will be given your friend's name as query. For each query, print the phone number of your friend. The first line will have an integer N denoting the number of entries in the phone book. Each entry consists of two lines: a name and the corresponding phone number. After these, there will be some queries. Each query will contain the name of a friend. Read the queries until end-of-file. For each case, print "Not found" without quotes, if the friend has no entry in the phone book. Otherwise, print the friend's name and phone number. See sample output for the exact format. Sample input: 3 sam 99912222 tom 11122222 harry 12299933 sam edward harry Sample output: sam=99912222 Not found harry=12299933 My solution: int main() { map<string, string> PhoneList; int n; string name,ph,str; while(n--) { cin>>name>>ph; PhoneList[name] = ph; } while( getline (cin,t) ) { cin>>str; auto it = PhoneList.find(str); if(it==PhoneList.end()) cout<<"Not Found\n"; else cout<<it->second; } return 0; } Answer: There are several issues with your program, some of them rendering it non-functional. As it is a very simple one, it does not take much time to fix it, but as already specified by Edward, you should submit only working code that requires reviewing. 1) Code aspect You should try to keep your code as organized and homogeneous as possible. 
Usually, it is not required for homework, but it is good in a programmer's life: int main() { map<string, string> PhoneList; int n; should be properly formatted like int main() { map<string, string> PhoneList; int n; Try to leave blanks between operators and operands, as this will increase readability: string name,ph,str; should be replaced with string name, ph, str; 2) Variable declaration closer to usage C++ allows you to declare your variables anywhere before they are actually used and it is a good idea to be as close as possible. The first part of your program may look like this: int n; cout << "Provide number of entries: "; cin >> n; map<string, string> PhoneList; while (n --) { string name, phone; // this is added just to see something relevant on the screen - it might be removed cout << "Name " << (n + 1) << ":\n"; cin >> name; cout << "Phone " << (n + 1) << ":\n"; cin >> phone; PhoneList[name] = phone; } 3) getLine requires a cin.ignore() before, as it will catch a newline for a previously entered phone (at least when testing manually) Finally, a version of your program with all improvements made: // Example program #include <iostream> #include <string> #include <map> using namespace std; int main() { int n; cout << "Provide number of entries: "; cin >> n; map<string, string> PhoneList; while (n --) { string name, phone; cout << "Name " << (n + 1) << ":\n"; cin >> name; cout << "Phone " << (n + 1) << ":\n"; cin >> phone; PhoneList[name] = phone; } string str; cin.ignore(); while(getline(cin, str)) { auto it = PhoneList.find(str); if (it == PhoneList.end()) cout << "Not Found\n"; else cout << it->second << "\n"; } } This is tested in cpp.sh
{ "domain": "codereview.stackexchange", "id": 17832, "tags": "c++, file-system, search, hash-map" }
Deriving Unitarity of $S$-matrix in 1D Quantum Mechanics
Question: I was studying about scattering across a one-dimensional unknown potential (pretty elementary Quantum Mechanics) and how, if we know the $S$-matrix of such a system, we can deduce an awful lot of information about the potential. Also, the $S$-matrix satisfies some properties. First, for the sake of notational clarity, let me define it. Suppose there exists a potential $V(x)$ such that it is zero everywhere except for some arbitrary function between $-a/2$ and $+a/2$. Now, by treating this as a time-independent problem in which plane waves hit the potential and are reflected or transmitted accordingly, I can write the wave function as follows: $$\psi(x)= \begin{cases} Ae^{ikx} + Be^{-ikx},& \text{for } x\leq -a/2\\ Ce^{ikx} + De^{-ikx},& \text{for } x\geq +a/2\\ \end{cases}$$ Now we create a $2\times 2$ matrix called the S-matrix which relates incoming amplitudes $A,D$ to outgoing amplitudes $B,C$ such that $$ \begin{pmatrix} B \\ C \\ \end{pmatrix}= \begin{pmatrix} S_{11} & S_{12} \\ S_{21} & S_{22} \\ \end{pmatrix} \begin{pmatrix} A \\ D \\ \end{pmatrix}$$ Now to prove that this matrix is unitary, many sources including Wikipedia use the fact that since the integral of probability density $\int_{-\infty}^{\infty}|\psi(x,t)|^2=1$ is time-independent, $J_{left}=J_{right}$ where $J_{left}$ and $J_{right}$ are probability currents to the left and right of the potential, which implies that $|A|^2-|B|^2=|C|^2-|D|^2$, which can then further be used to prove unitarity. My main question is: how did everyone deduce that $J_{left}=J_{right}$ and that the current inside the potential region is 0? How do I know there exists no probability for the particle to stay inside that region? And even if I know that, how can the above result be derived? Any sort of help would be really appreciated. Answer: The reason for the equality of the currents is particle conservation. 
You can start with the continuity equation $$\partial_t |\psi(x,t)|^2+ \partial_x j(x,t)=0;$$ integrating it over the central region we get $$\int_{x_L}^{x_R}dx \partial_t |\psi(x,t)|^2 + \int_{x_L}^{x_R}dx \partial_x j(x,t) = \frac{dQ(t)}{dt} +j(x_R,t) - j(x_L,t) = 0,$$ where we assumed that the potential is limited to the interval $[x_L, x_R]$ and $Q(t)$ is the charge in this region. Since we are dealing with a time-independent problem, $\partial_t |\psi(x,t)|^2 =0$, i.e. $$J_{right} = j(x_R,t) = j(x_L,t) = J_{left}.$$ Note that the continuity equation is directly derivable from the Schrödinger equation. Another tip: for calculating the scattering matrix, it is convenient to consider separately the waves incident from the left and from the right.
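As a numerical illustration (my addition, not from the answer; the delta-potential amplitudes below are the standard textbook ones, up to sign conventions): with $\beta = m\alpha/(\hbar^2 k)$ one has $t = 1/(1+i\beta)$ and $r = -i\beta/(1+i\beta)$, and the resulting $S$-matrix is unitary for every $\beta$, which encodes exactly the current-conservation statement $|A|^2-|B|^2=|C|^2-|D|^2$.

```python
# Sanity check: S-matrix unitarity for a delta potential V(x) = alpha*delta(x).
# beta = m*alpha/(hbar^2 * k) bundles all the physical constants.
def s_matrix(beta):
    t = 1.0 / (1.0 + 1j * beta)          # transmission amplitude
    r = -1j * beta / (1.0 + 1j * beta)   # reflection amplitude
    return r, t

for beta in (0.3, 1.0, 4.2):
    r, t = s_matrix(beta)
    # Unitarity of S = [[r, t], [t, r]] (symmetric potential) requires:
    assert abs(abs(r) ** 2 + abs(t) ** 2 - 1.0) < 1e-12       # probability conserved
    assert abs(r * t.conjugate() + t * r.conjugate()) < 1e-12  # cross terms cancel
print("S-matrix unitary for all tested beta")
```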
{ "domain": "physics.stackexchange", "id": 66343, "tags": "quantum-mechanics, hilbert-space, scattering, s-matrix-theory, unitarity" }
Why is X(0) the DC component
Question: Why exactly is X(0) the DC component of a signal? How is it equal to N times x(n)'s average value and why it is at X(0)? Answer: Follows from the DFT definition. It's defined as \begin{equation} X(k) = \sum_{n=0}^{N-1} x(n) e^{-j2\pi \frac{kn}{N}} \end{equation} So $X(0)$ is \begin{equation} X(0) = \sum_{n=0}^{N-1} x(n) e^{-j2\pi \frac{0 \cdot n}{N}} \end{equation} Having $k=0$ gives $e^0=1$ all the time so that \begin{equation} X(0) = \sum_{n=0}^{N-1} x(n) 1 \end{equation} Comparing this to the average \begin{equation} \overline{x} = \frac{1}{N} \sum_{n=0}^{N-1} x(n) \end{equation} shows that $X(0) = N \overline{x}$
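A quick numerical check (not part of the original answer), using a direct implementation of the DFT definition above rather than any particular FFT library:

```python
import cmath

def dft(x):
    """Direct O(N^2) evaluation of X(k) = sum_n x(n) * exp(-j 2 pi k n / N)."""
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N) for n in range(N))
            for k in range(N)]

x = [1.0, 2.0, -0.5, 3.5, 0.0, 1.0]
X = dft(x)
mean = sum(x) / len(x)
print(X[0].real, len(x) * mean)  # equal up to floating-point rounding: X(0) = N * average
```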
{ "domain": "dsp.stackexchange", "id": 1107, "tags": "dft" }
Error term formulation in Graph SLAM (conceptual doubt)
Question: I am reading A Tutorial on Graph-Based SLAM by Grisetti, Kummerle, Stachniss & Burgard. On page 5, the error function is introduced as follows $$e_{ij}(x_i, x_j) = z_{ij} - \hat{z}_{ij}(x_i, x_j)$$ here $z_{ij}$ is the mean of the virtual measurement and $\hat{z}_{ij}(x_i, x_j)$ is the prediction of the virtual measurement. The following image supplements the description. Algorithm 1 (on page 6) requires $e_{ij}$ as input. My doubt is regarding the calculation of $e_{ij}$. I need both $z_{ij}$ and $\hat{z}_{ij}$ to calculate $e_{ij}$. The evaluation of $\hat{z}_{ij}$ is dependent on the robot poses $x_i$ and $x_j$. In turn, these robot poses $x_i, x_j$ (as well as $z_{ij}$) are calculated using $z_{raw}$ (incrementally with Odometry?) and to calculate $\hat{z}_{ij}$ we are again going to (indirectly) use $z_{raw}$. And that does not make sense because then $e_{ij}=0$? Surely, I'm missing something about how $z_{ij}$ and $\hat{z}_{ij}$ differ! Kindly help me resolve the above doubt! Any concrete example of $z_{ij}$ and $\hat{z}_{ij}$ is appreciated. Answer: Good question. That is quite confusing in the beginning. Let's say you have an observation of a relative pose zij between two positions (or nodes) from your wheel odometry. Then, you accumulate the relative poses to create the trajectory of your robot. From the accumulated trajectory you can extract zij_hat. Given observation and prediction, the error function is eij = zij - zij_hat, which is zero initially except for the loop closure terms. For example, if you have 4 nodes with a loop closure at 4 and 1, your errors look like this. e12 = 0 e23 = 0 e34 = 0 e41 = big error! During the optimization, your initially accumulated trajectory moves, therefore, e12..e34 are not zero anymore. Optimization moves the nodes to reduce the error at e41, which results in distributing the error to other nodes. The paper you are reading is not a good tutorial (in my opinion) by the way.
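A concrete toy example (my own numbers, not from the answer): a 1D pose graph with four nodes, unit odometry steps, and a loop closure that disagrees with the accumulated trajectory. Solving the linear least-squares problem shows the single big loop-closure error being redistributed over all edges:

```python
# Toy 1D pose graph: nodes x1..x4 with x1 fixed at 0.
# Odometry says each step is +1; a loop closure says x4 - x1 = 2.7, not 3.
edges = [  # rows of A x = b, unknowns (x2, x3, x4)
    ([1, 0, 0], 1.0),    # odometry: x2 - x1 = 1
    ([-1, 1, 0], 1.0),   # odometry: x3 - x2 = 1
    ([0, -1, 1], 1.0),   # odometry: x4 - x3 = 1
    ([0, 0, 1], 2.7),    # loop closure: x4 - x1 = 2.7
]

def residuals(x):
    """Errors e = z - z_hat(x) for every edge, given pose estimates x."""
    return [b - sum(a * v for a, v in zip(row, x)) for row, b in edges]

x0 = [1.0, 2.0, 3.0]                           # initial guess from raw odometry
print([round(e, 3) for e in residuals(x0)])    # [0.0, 0.0, 0.0, -0.3]

# Solve the normal equations A^T A x = A^T b by Gauss-Jordan elimination.
AtA = [[sum(r[i] * r[j] for r, _ in edges) for j in range(3)] for i in range(3)]
Atb = [sum(r[i] * b for r, b in edges) for i in range(3)]
for i in range(3):
    p = AtA[i][i]
    AtA[i] = [v / p for v in AtA[i]]
    Atb[i] /= p
    for k in range(3):
        if k != i:
            f = AtA[k][i]
            AtA[k] = [v - f * w for v, w in zip(AtA[k], AtA[i])]
            Atb[k] -= f * Atb[i]
x = Atb
print([round(e, 3) for e in residuals(x)])     # the 0.3 error spreads evenly over all edges
```

Before optimization only the loop-closure residual is nonzero; after optimization the 0.3 discrepancy is shared as 0.075 per edge, exactly the behavior the answer describes.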
{ "domain": "robotics.stackexchange", "id": 1958, "tags": "slam" }
Why does the neutralisation of any strong acid in an aqueous solution by any strong base
Question: Why does the neutralisation of any strong acid in an aqueous solution by any strong base always result in a heat of reaction of approximately $-57\ \text{kJ mol}^{-1}$? Answer: Definition of enthalpy of neutralization: CHANGE IN ENTHALPY WHEN ACIDS AND BASES REACT TOGETHER AND FORM 1 MOLE OF H2O. Strong acids and bases have 100% dissociation in water, meaning that 1 mol of water is formed from the reaction of 1 mol of any strong acid with 1 mol of any strong base. Notice that the definition of the enthalpy of neutralization is based on moles of water formed; we don't care about the salts. Thus, the enthalpy of neutralization is constant regardless of the strong acids or bases used.
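A hedged worked example (illustrative numbers, not from the answer): the roughly constant $-57\ \text{kJ mol}^{-1}$ lets you estimate the temperature rise when, say, 50 mL of 1.0 M HCl is mixed with 50 mL of 1.0 M NaOH, approximating the dilute mixture as 100 g of water:

```python
# Illustrative calorimetry estimate using the ~57 kJ/mol enthalpy of neutralization.
n_water = 0.050 * 1.0        # mol of H2O formed (0.050 L of 1.0 M acid, base in equal amount)
q = n_water * 57e3           # J released
mass = 100.0                 # g, treating 100 mL of dilute solution as water
c = 4.18                     # J/(g K), specific heat of water
dT = q / (mass * c)
print(round(dT, 2))          # about 6.82 K of temperature rise
```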
{ "domain": "chemistry.stackexchange", "id": 10990, "tags": "acid-base" }
map resolution issue between explore and pr2_2dnav_slam
Question: Hi ! Another question about mapping this time : I'm doing SLAM with pr2. I roslaunch an empty_world headless where I load a pr2 and wg_walls. Then I launch rviz_move_base from pr2_navigation_global for visualisation, and pr2_2dnav.launch from pr2_2dnav_slam for laser scan and SLAM. Next is explore_slam.launch from explore_stage (but without the stage interface because I'm using rviz). I modified pr2_2dnav.launch to use slam_mapping-2.xml, which is the standard slam_mapping.xml file with the following changes : <param name="map_update_interval" value="10.0"/> <!-- was 30.0--> <param name="temporalUpdate" value="0.5"/> <!-- was -1.0 --> In explore_slam.launch, I commented out the stage node launch : <!-- <node pkg="stage" type="stageros" name="stage" args="$(find gazebo_worlds)/worlds/wg.world" respawn="false" output="screen"/> --> When running, I got an error from the pr2_2dnav_slam/pr2_2dnav.launch terminal : [ERROR] [1336401780.958141624, 48.135000000]: You cannot update a map with resolution: 0.0500, with a new map that has resolution: 0.1000 I looked in many launch and xml files related to these nodes but I can't find any explicit "map resolution" parameter that I could set to a value I want. More strange is that the "map resolution" parameter changes during the simulation in rviz (I can't edit it there - click for zoomed view) : Map w/ reso 0.05 http://anusite.free.fr/simumapreso0p05-6.png Map w/ reso 0.1 http://anusite.free.fr/Simu-mapreso0p1.png Is there any way to set the resolution permanently, or is it a normal feature of these nodes (but it's generating errors, so I don't think so) ? Update in comments. Originally posted by Erwan R. on ROS Answers with karma: 697 on 2012-05-09 Post score: 1 Answer: The problem looks to have been solved by increased laser range. I don't understand why, any idea ? Originally posted by Erwan R. with karma: 697 on 2012-05-14 This answer was ACCEPTED on the original site Post score: 0
{ "domain": "robotics.stackexchange", "id": 9312, "tags": "gazebo, navigation, mapping, parameters, resolution" }
Count pairs with given sum
Question: I want to find the number of the pairs in an array that equal a given sum. I've naively tried to implement brute force as a solution, but it is too slow and this task should not take more than one second. def getPairsCount(numbers, shouldEqualTo): count = 0 for i in range(len(numbers)): for j in range(i + 1, len(numbers)): if numbers[i] + numbers[j] == shouldEqualTo: count += 1 return count Answer: If you can use extra space : # O(n) running time / O(n) memory def get_pair_count(nums, target_sum): count = {} for num in nums: count[num] = count.get(num, 0) + 1 total_double = 0 for num in nums: complement = target_sum - num if complement in count: total_double += count[complement] if complement == num: total_double -= 1 return total_double // 2 source : http://www.geeksforgeeks.org/count-pairs-with-given-sum/ If you can't use more space you could try this version I just made (at your own risk) # O(n log n) running time / O(1) memory def get_pair_count_no_extra_memory(nums, target_sum): nums.sort() start = 0 end = len(nums) - 1 total = 0 while start < end: current_sum = nums[start] + nums[end] if current_sum == target_sum: start_count = 1 end_count = 1 special_case = False if nums[start] == nums[end]: special_case = True while start+1 < len(nums) and nums[start] == nums[start+1]: start_count += 1 start += 1 while end-1 >= 0 and nums[end] == nums[end-1]: end_count += 1 end -= 1 if special_case: total += ((start_count - 1) * start_count) // 2 else: total += start_count * end_count start += 1 end -= 1 elif current_sum < target_sum: start += 1 else: end -= 1 return total
{ "domain": "codereview.stackexchange", "id": 28513, "tags": "python, performance" }
What is the purpose of the hidden layers?
Question: Why would anybody want to use "hidden layers"? How do they enhance the learning ability of the network in comparison to the network which doesn't have them (linear models)? Answer: "Hidden" layers really aren't all that special... a hidden layer is really no more than any layer that isn't input or output. So even a very simple 3 layer NN has 1 hidden layer. So I think the question isn't really "How do hidden layers help?" as much as "Why are deeper networks better?". And the answer to that latter question is an area of active research. Even top experts like Geoffrey Hinton and Andrew Ng will freely admit that we don't really understand why deep neural networks work. That is, we don't understand them in complete detail anyway. That said, the theory, as I understand it goes something like this... successive layers of the network learn successively more sophisticated features, which build on the features from preceding layers. So, for example, an NN used for facial recognition might work like this: the first layer detects edges and nothing else. The next layer up recognizes geometric shapes (boxes, circles, etc.). The next layer up recognizes primitive features of a face, like eyes, noses, jaw, etc. The next layer up then recognizes composites based on combinations of "eye" features, "nose" features, and so on. So, in theory, deeper networks (more hidden layers) are better in that they develop a more granular/detailed representation of a "thing" being recognized.
{ "domain": "ai.stackexchange", "id": 2562, "tags": "neural-networks, deep-learning, deep-neural-networks, hidden-layers" }
Shell command-line interpreter with pipeline parsing
Question: Like many others I've been writing a shell command-line interpreter that can do a pretty decent pipeline parsing, first it splits the command at pipeline char (|), then it splits the substring at unquoted whitespace so that a pipeline e.g. $ ls -a -h -l|awk'{print $3}'|sort -n gets represented with a matrix A where A[i][j] is program (pipe) i and argument number j. The code produces the correct output except for quoted pipelines (e.g. $ echo 'foo|bar'|cat won't work) but that functionality I can add. What I want is to make the code more readable and maintainable now that it "works" (doesn't crash and makes the right output). main.c #define _XOPEN_SOURCE 500 #include <sys/stat.h> #include <stdio.h> #include <string.h> #include <stdlib.h> #include <signal.h> #include <sys/wait.h> #include "openshell.h" #include "errors.h" #include "do.h" #include "CommandEntry.h" #include <errno.h> #include <readline/readline.h> #include <unistd.h> #include <readline/history.h> #include <histedit.h> #ifdef SIGDET #if SIGDET == 1 int isSignal = 1; /*Termination detected by signals*/ #endif #endif static int count = 0; static FILE *sourcefiles[MAX_SOURCE]; /* * The special maximum argument value which means that there is * no limit to the number of arguments on the command line. */ #define INFINITE_ARGS 0x7fffffff static struct option options[] = { {"with_param", 1, 0, 'p'}, {"version", 0, 0, 'v'}, {"help", 0, 0, 'h'}, {0, 0, 0, 0} }; /* * The table of built-in commands. * A command is terminated wih an entry containing NULL values. * These commands should preferable by written in openshell */ static const CommandEntry commandEntryTable[] = { { "checkenv", do_checkenv, 1, INFINITE_ARGS, "Check environment variables", "" }, { "editenv", do_editenv, 3, INFINITE_ARGS, "do_editenv", "[txp]v arFileName fileName ..." 
}, { "cd", do_cd, 1, 2, "Change current directory", "[dirName]" }, { "exit", do_exit, 1, 2, "Exit from shell", "[exit value]" }, { "help", do_help, 1, 2, "Print help about a command", "[word]" }, { "killport", do_killport, 2, INFINITE_ARGS, "Send a signal to the specified process", "[-sig] pid ..." }, { NULL, 0, 0, 0, NULL, NULL } }; char *concat(char *s1, char *s2) { char *result = malloc(strlen(s1) + strlen(s2) + 1);//+1 for the zero-terminator //in real code you would check for errors in malloc here if (result == NULL) { fprintf(stderr, "malloc failed!\n"); return (char *) '0'; } strcpy(result, s1); strcat(result, s2); return result; } int testFn(const char *str) { return (str && *str && str[strlen(str) - 1] == '}') ? 1 : 0; } static int runCmd(const char *cmd) { const char *cp; pid_t pid; int status; struct command structcommand[15]; char **argv = 0; int argc = 1; bool pipe = false; char *string[75][75]; char *pString3[40]; char *pString2[40]; int n = 0; char **ptr1; char string1[75]; bool keep = false; char *pString1[75]; char *pString[75]; *pString1 = "\0"; *pString = "\0"; char temp[75] = {'\0'}; int w = 0; bool b = false; int j = 0; int i; int p = 0; char **ptr; char *tmpchar; char *cmdtmp; bool b1 = false; char *dest; int y = 0; i = 0; int h = 0; for (int x = 0; x < 75; x++) { /* for each pipeline */ for (int c = 0; c < 75; c++) { /* for each pipeline */ string[x][c] = '\0'; } } if (cmd) { for (cp = cmd; *cp; cp++) { if ((*cp >= 'a') && (*cp <= 'z')) { continue; } if ((*cp >= 'A') && (*cp <= 'Z')) { continue; } if (isDecimal(*cp)) { continue; } if (isBlank(*cp)) { continue; } if ((*cp == '.') || (*cp == '/') || (*cp == '-') || (*cp == '+') || (*cp == '=') || (*cp == '_') || (*cp == ':') || (*cp == ',') || (*cp == '\'') || (*cp == '"')) { continue; } } } if (cmd) { cmdtmp = malloc(sizeof(char *) * strlen(cmd) + 1); strcpy(cmdtmp, cmd); tmpchar = malloc(sizeof(char *) * strlen(cmd) + 1); if (tmpchar == NULL) { printf("Error allocating memory!\n"); /* print 
an error message */ return 1; /* return with failure */ } strcpy(tmpchar, cmd); ptr1 = str_split(pString3, cmdtmp, '|'); if (strstr(cmd, "|") == NULL) { /* not a pipeline */ makeArgs(cmd, &argc, (const char ***) &argv, pipe, 0, 0); for (j = 0; j < argc; j++) { string[0][j] = argv[j]; structcommand[i].argv = string[0]; /*process;*/ } n++; } else { for (i = 0; *(ptr1 + i); i++) { /* tokenize the input string for each pipeline*/ n++; /* save number of pipelines */ int e = 0; /* a counter */ *pString = "\0"; /* should malloc and free this? */ strcpy(string1, *(ptr1 + i)); if ((string1[0] != '\0') && !isspace(string1[0])) { /* this is neither the end nor a new argument */ ptr = str_split(pString2, *(&string1), ' '); /* split the string at the arguments */ h = 0; for (j = 0; *(ptr + j); j++) { /* step through the arguments */ /* the pipeline is in cmdtmp and the argument/program is in ptr[i] */ if (ptr + j && !b && strstr(*(ptr + j), "'")) { b = true; strcpy(temp, *(ptr + j)); if (y < 1) { y++; } } while (b) { if (*(ptr + j) && strstr(*(ptr + j), "'")) { /* end of quote */ b = false; if (y < 1) { string[i][j] = strcpy(temp, *(ptr + j)); } y = 0; } else if (*(ptr + j)) { /* read until end of quote */ string[i][j] = temp; continue; } else { b = false; break; } } if (ptr + j) { if (*(ptr + j)[0] == '{') { keep = true; } if (testFn(*(ptr + j))) { /* test for last char */ string[i][j - p] = concat(*pString1, *(ptr + j)); keep = false; free(*pString1); goto mylabel; } if (keep) { *pString1 = concat(*pString1, *(ptr + j)); *pString1 = concat(*pString1, " "); p++; } else { strcpy(temp, *(ptr + j)); b1 = false; int q = j; for (e = 0; *(ptr + q + e); e++) { /* step through the string */ b1 = true; if (*(ptr + e + q)) { *pString = concat(*pString, *(ptr + e + q)); *pString = concat(*pString, " "); } j = e; } if (makeArgs(*pString, &argc, (const char ***) &argv, pipe, i, h)) { for (int r = 0; argv[r] != NULL; r++) { dest = malloc(sizeof(char *) * strlen(argv[r]) + 1); *dest = '0'; 
strcpy(dest, argv[r]); string[w][r] = dest; } w++; } else { if (!b1) { /* no args (?) */ for (int r = 0; argv[r] != NULL; r++) { string[i][r] = argv[r]; } } } } } } mylabel: free(ptr); dump_argv((const char *) "d", argc, argv); } } free(ptr1); free(cmdtmp); free(tmpchar); } for (i = 0; i < n; i++) { for (j = 0; DEBUG && string[i][j] != NULL; j++) { if (i == 0 && j == 0) printf("\n"); printf("p[%d][%d] %s\n", i, j, string[i][j]); } structcommand[i].argv = string[i]; } fflush(NULL); pid = fork(); if (pid < 0) { perror("fork failed"); return -1; } /* If we are the child process, then go execute the string.*/ if (pid == 0) { /* spawn(cmd);*/ fork_pipes(n, structcommand); } /* * We are the parent process. * Wait for the child to complete. */ status = 0; while (((pid = waitpid(pid, &status, 0)) < 0) && (errno == EINTR)); if (pid < 0) { fprintf(stderr, "Error from waitpid: %s", strerror(errno)); return -1; } if (WIFSIGNALED(status)) { fprintf(stderr, "pid %ld: killed by signal %d\n", (long) pid, WTERMSIG(status)); return -1; } } return WEXITSTATUS(status); } /* The shell performs wildcard expansion on each token it extracts while parsing the command line. * Oftentimes, globbing will obviously not do anything (for example, ls just returns ls). * When you want nullglob behavior you'll have to know whether the glob function actually found any glob characters, though */ static void expandVariable(char *shellcommand) { char mystring[CMD_LEN]; char *cp; char *ep; strcpy(mystring, shellcommand); cp = strstr(mystring, "$("); if (cp) { *cp++ = '\0'; strcpy(shellcommand, mystring); ep = ++cp; while (*ep && (*ep != ')')) ep++; if (*ep == ')') *ep++ = '\0'; cp = getenv(cp); if (cp) strcat(shellcommand, cp); strcat(shellcommand, ep); } return; } int do_help(int argc, const char **argv) { const CommandEntry *entry; const char *str; str = NULL; if (argc == 2) str = argv[1]; /* * Check for an exact match, in which case describe the program. 
*/ if (str) { for (entry = commandEntryTable; entry->name; entry++) { if (strcmp(str, entry->name) == 0) { printf("%s\n", entry->description); printf("usage: %s %s\n", entry->name, entry->usage); return 0; } } } /* * Print short information about commands which contain the * specified word. */ for (entry = commandEntryTable; entry->name; entry++) { if ((str == NULL) || (strstr(entry->name, str) != NULL) || (strstr(entry->usage, str) != NULL)) { printf("%-10s %s\n", entry->name, entry->usage); } } return 0; } char s[] = "Interrupt\n"; char *input; void handler(int signum) { input = '\0'; if (write(fileno(stdin), s, sizeof s - 1)) { } else { } if (signum) { if (false); } else { } } /* * Try to execute a built-in command. * Returns TRUE if the command is a built in, whether or not the * command succeeds. Returns FALSE if this is not a built-in command. */ bool exec_builtin(const char *cmd) { const char *endCmd; const CommandEntry *entry; int argc; const char **argv; char cmdName[CMD_LEN]; /* * Look for the end of the command name and then copy the * command name to a buffer so we can null terminate it. */ endCmd = cmd; if (endCmd) while (*endCmd && !isBlank(*endCmd)) endCmd++; memcpy(cmdName, cmd, endCmd - cmd); cmdName[endCmd - cmd] = '\0'; /* * Search the command table looking for the command name. */ for (entry = commandEntryTable; entry->name != NULL; entry++) { if (strcmp(entry->name, cmdName) == 0) break; } /* * If the command is not a built-in, return indicating that. */ if (entry->name == NULL) { return false; } bool bo = false; /* * The command is a built-in. * Break the command up into arguments and expand wildcards. */ if (!makeArgs(cmd, &argc, &argv, bo, 0, 0)) { return true; } /* * Give a usage string if the number of arguments is too large * or too small. */ if ((argc < entry->minArgs) || (argc > entry->maxArgs)) { fprintf(stderr, "usage: %s %s\n", entry->name, entry->usage); return true; } /* * Call the built-in function with the argument list. 
*/ entry->func(argc, argv); return true; } /* * Parse and execute one null-terminated command line string. * This breaks the command line up into words, checks to see if the * command is an alias, and expands wildcards. */ int command(const char *cmd) { const char *endCmd; char cmdName[CMD_LEN]; freeChunks(); /* * Skip leading blanks. */ if (cmd) { while (isBlank(*cmd)) cmd++; /* * If the command is empty or is a comment then ignore it. */ if (cmd) if ((*cmd == '\0') || (*cmd == '#')) return 0; /* * Look for the end of the command name and then copy the * command name to a buffer so we can null terminate it. */ endCmd = cmd; if (endCmd) while (*endCmd && !isBlank(*endCmd)) endCmd++; memcpy(cmdName, cmd, endCmd - cmd); cmdName[endCmd - cmd] = '\0'; /* * Expand simple environment variables */ if (cmd) while (strstr(cmd, "$(")) expandVariable((char *) cmd); /* * Now look for the command in the builtin table, and execute * the command if found. */ if (exec_builtin(cmd)) { return 0; } /* * The command is not a built-in, so run the program along * the PATH list. */ return runCmd(cmd); } else return 0; } void getPath() { if (getenv("PATH") == NULL) { printf("'%s' is not set.\n", "PATH"); /* Default our path if it is not set. 
*/ putenv("PATH=/bin:/usr/bin:/sbin:/usr/sbin:/etc"); } else if (getenv("PATH")) { printf("'%s' is set to %s.\n", "PATH", (getenv("PATH"))); } } char *prompt(EditLine *e) { static char p2[] = "\1\033[36m$ \033[0m\1"; static char p[] = "$ "; return p; } int main(int argc, char *argv[]) { struct sigaction sh; /* char *shell_prompt[100];*/ sh.sa_handler = handler; sigemptyset(&sh.sa_mask); sh.sa_flags = 0; sigaction(SIGINT, &sh, NULL); int index = 0; int i; /* EditLine *el = el_init(argv[0], stdin, stdout, stderr); el_set(el, EL_PROMPT_ESC, &prompt, '\1'); el_set(el, EL_EDITOR, "emacs"); el_set(el, EL_BIND, "bind ^I el_complete");*/ rl_bind_key('\t', rl_complete); /*rl_parse_and_bind("bind ^I rl_complete");*/ /*rl_bind_key('\t', rl_complete);*/ HistEvent ev; History *myhistory; while (1) { index = 0; i = getopt_long(argc, argv, "p:vh", options, &index); if (i == -1) break; switch (i) { case 'p': { /* store_parameter(optarg); */ break; } case 'v': { printf("OpenShell version 0.1(a)\n"); printf("Version: %s\n", VERSION); exit(EXIT_SUCCESS); } case 'h': { printf("Usage: ./shell\n"); /*print_help();*/ exit(EXIT_SUCCESS); } default: { /* fprintf(stderr, "Error (%s): unrecognized option.\n", __FUNCTION__);*/ /* print_help();*/ return 1;/*RETURN_FAILURE;*/ } } } getPath(); char *shell_prompt; for (; ;) { shell_prompt = malloc(sizeof(char *) * 1024); snprintf(shell_prompt, sizeof(shell_prompt), "%s: $ ", getenv("USER")); input = readline(shell_prompt); if (input) add_history(input); command(input); free(input); free(shell_prompt); } return 0; } util.c #define _XOPEN_SOURCE 500 #include <sys/stat.h> #include <stdio.h> #include <unistd.h> #include <string.h> #include <sys/wait.h> #include <stdlib.h> #include "errors.h" #include <errno.h> #include <assert.h> #include "openshell.h" /* struct command { char *const *argv; };*/ static CHUNK *chunkList; /* * Free all chunks of memory that had been allocated since the last * call to this routine. 
*/ void freeChunks(void) { CHUNK *chunk; while (chunkList) { chunk = chunkList; chunkList = chunk->next; free((char *) chunk); } } /* * Allocate a chunk of memory (like malloc). * The difference, though, is that the memory allocated is put on a * list of chunks which can be freed all at one time. You CAN NOT free * an individual chunk. */ char *getChunk(int size) { CHUNK *chunk; if (size < CHUNK_INIT_SIZE) size = CHUNK_INIT_SIZE; chunk = (CHUNK *) malloc(size + sizeof(CHUNK) - CHUNK_INIT_SIZE); if (chunk == NULL) return NULL; chunk->next = chunkList; chunkList = chunk; return chunk->data; } bool find_less_program(char *path) { bool found = false; char *curr_path; const char program[] = "/less"; while (path && !found) { if ((curr_path = malloc(strlen(path) + sizeof(program))) != NULL) { strcpy(curr_path, path); strcat(curr_path, program); if (file_exist(curr_path)) { found = true; // we found the program } free(curr_path); path = strtok(NULL, ":"); } else { fprintf(stderr, "malloc failed!\n"); return false; } } return found; } /* Sort routine for list of fileNames. */ int nameSort(const void *p1, const void *p2) { const char **s1; const char **s2; s1 = (const char **) p1; s2 = (const char **) p2; return strcmp(*s1, *s2); } /* Routine to see if a text string is matched by a wildcard pattern. * Returns true if the text is matched, or false if it is not matched * or if the pattern is invalid. * * matches zero or more characters * ? matches a single character * [abc] matches 'a', 'b' or 'c' * \c quotes character c * Adapted from code written by Ingo Wilken. 
*/ bool match(const char *text, const char *pattern) { const char *retryPat; const char *retryText; int ch; bool found; retryPat = NULL; retryText = NULL; while (*text || *pattern) { ch = *pattern++; switch (ch) { case '*': retryPat = pattern; retryText = text; break; case '[': found = 0; while ((ch = *pattern++) != ']') { if (ch == '\\') ch = *pattern++; if (ch == '\0') return 0; if (*text == ch) found = 1; } if (!found) { pattern = retryPat; text = ++retryText; } /* fall into next case */ case '?': if (*text++ == '\0') return 0; break; case '\\': ch = *pattern++; if (ch == '\0') return 0; /* fall into next case */ default: if (*text == ch) { if (*text) text++; break; } if (*text) { pattern = retryPat; text = ++retryText; break; } return 0; } if (pattern == NULL) return 0; } return 1; } /* This will replace all occurrence of "str" with "rep" in "src"... */ void strreplace(char *src, char *str, char *rep) { char *p = strstr(src, str); do { if (p) { char buf[1024]; memset(buf, '\0', strlen(buf)); if (src == p) { strcpy(buf, rep); strcat(buf, p + strlen(str)); } else { strncpy(buf, src, strlen(src) - strlen(p)); strcat(buf, rep); strcat(buf, p + strlen(str)); } memset(src, '\0', strlen(src)); strcpy(src, buf); } } while (p && (p = strstr(src, str))); } char **str_split(char *a[], char *a_str, const char a_delim) { char **result = 0; size_t count = 0; char *tmp = a_str; char *last_comma = 0; char delim[2]; delim[0] = a_delim; delim[1] = 0; /* Count how many elements will be extracted. */ while (*tmp) { if (a_delim == *tmp) { count++; last_comma = tmp; } tmp++; } /* Add space for trailing token. */ count += last_comma < (a_str + strlen(a_str) - 1); /* Add space for terminating null string so caller knows where the list of returned strings ends. 
*/ count++; result = malloc(sizeof(char *) * count); if (result == NULL) { printf("Error allocating memory!\n"); //print an error message return result; //return with failure } if (result) { size_t idx = 0; char *token = strtok(a_str, delim); while (token) { assert(idx < count); *(result + idx++) = strdup(token); token = strtok(0, delim); } assert(idx == count - 1); *(result + idx) = 0; } return result; } /* Expand the wildcards in a fileName wildcard pattern, if any. * Returns an argument list with matching fileNames in sorted order. * The expanded names are stored in memory chunks which can later all * be freed at once. The returned list is only valid until the next * call or until the next command. Returns zero if the name is not a * wildcard, or returns the count of matched files if the name is a * wildcard and there was at least one match, or returns -1 if either * no fileNames matched or there was an allocation error. */ int expandWildCards(const char *fileNamePattern, const char ***retFileTable) { const char *last; const char *cp1; const char *cp2; const char *cp3; const char *possible_tilde; char *str; DIR *dirp; struct dirent *dp; unsigned long dirLen; int newFileTableSize; char **newFileTable; char dirName[PATH_LEN]; char *path; static int fileCount; static int fileTableSize; static char **fileTable; /* * Clear the return values until we know their final values. */ fileCount = 0; *retFileTable = NULL; /* * Scan the file name pattern for any wildcard characters. 
*/ cp1 = strchr(fileNamePattern, '*'); cp2 = strchr(fileNamePattern, '?'); cp3 = strchr(fileNamePattern, '['); /* * Scan the file name pattern for tilde */ possible_tilde = strchr(fileNamePattern, '~'); if (possible_tilde != NULL) { path = getenv("HOME"); if (path == NULL) { fprintf(stderr, "No HOME environment variable\n"); return 1; } strreplace((char *) fileNamePattern, "~", path); } /* * If there are no wildcard characters then return zero to * indicate that there was actually no wildcard pattern. */ if ((cp1 == NULL) && (cp2 == NULL) && (cp3 == NULL) && (possible_tilde == NULL)) return 0; /* * There are wildcards in the specified filename. * Get the last component of the file name. */ last = strrchr(fileNamePattern, '/'); if (last) last++; else last = fileNamePattern; /* * If any wildcards were found before the last filename component * then return an error. */ if ((cp1 && (cp1 < last)) || (cp2 && (cp2 < last)) || (cp3 && (cp3 < last))) { fprintf(stderr, "Wildcards only implemented for last file name component\n"); return -1; } /* * Assume at first that we are scanning the current directory. */ dirName[0] = '.'; dirName[1] = '\0'; /* * If there was a directory given as part of the file name then * copy it and null terminate it. */ if (last != fileNamePattern) { memcpy(dirName, fileNamePattern, last - fileNamePattern); dirName[last - fileNamePattern - 1] = '\0'; if (dirName[0] == '\0') { dirName[0] = '/'; dirName[1] = '\0'; } } /* * Open the directory containing the files to be checked. */ dirp = opendir(dirName); if (dirp == NULL) { perror(dirName); return -1; } /* * Prepare the directory name for use in making full path names. */ dirLen = strlen(dirName); if (last == fileNamePattern) { dirLen = 0; dirName[0] = '\0'; } else if (dirName[dirLen - 1] != '/') { dirName[dirLen++] = '/'; dirName[dirLen] = '\0'; } /* * Find all of the files in the directory and check them against * the wildcard pattern. 
*/ while ((dp = readdir(dirp)) != NULL) { /* * Skip the current and parent directories. */ if ((strcmp(dp->d_name, ".") == 0) || (strcmp(dp->d_name, "..") == 0)) { continue; } /* * If the file name doesn't match the pattern then skip it. */ if (!match(dp->d_name, last)) continue; /* * This file name is selected. * See if we need to reallocate the file name table. */ if (fileCount >= fileTableSize) { /* * Increment the file table size and reallocate it. */ newFileTableSize = fileTableSize + EXPAND_ALLOC; newFileTable = (char **) realloc((char *) fileTable, (newFileTableSize * sizeof(char *))); if (newFileTable == NULL) { fprintf(stderr, "Cannot allocate file list\n"); closedir(dirp); return -1; } fileTable = newFileTable; fileTableSize = newFileTableSize; } /* * Allocate space for storing the file name in a chunk. */ str = getChunk((int) dirLen + (int) strlen(dp->d_name) + 1); if (str == NULL) { fprintf(stderr, "No memory for file name\n"); closedir(dirp); return -1; } /* * Save the file name in the chunk. */ if (dirLen) { memcpy(str, dirName, dirLen); } strcpy(str + dirLen, dp->d_name); /* * Save the allocated file name into the file table. */ fileTable[fileCount++] = str; } /* * Close the directory and check for any matches. */ closedir(dirp); if (fileCount == 0) { fprintf(stderr, "No matches\n"); /* for(int i=0;i<fileCount;i++) { printf("fileTable %d %s", fileCount, fileTable[i]); }*/ return -1; } /* * Sort the list of file names. */ qsort((void *) fileTable, (size_t) fileCount, sizeof(char *), nameSort); /* * Return the file list and count. 
*/ *retFileTable = (const char **) fileTable; return fileCount; } int do_checkenv(int argc, const char **argv) { int status; int len = 1; char *grep[4]; char *tmp; int k; char *pagerValue; int pos = 0; int i = 0; struct command shellcommand[4]; char *pager_cmd[] = {"less", 0}; char *printenv[] = {"printenv", 0}; char *sort[] = {"sort", 0}; char *path_strdup; char *path_value; char *pathValue; pid_t pid; pathValue = getenv("PATH"); path_strdup = strdup(pathValue); path_value = strtok(path_strdup, ":"); if (find_less_program(path_value)) { pager_cmd[0] = "less"; } pagerValue = getenv("PAGER"); if (!pagerValue) { if (find_less_program(path_value)) { pager_cmd[0] = "less"; } else { pager_cmd[0] = "more"; } } else { pager_cmd[0] = pagerValue; } if (i == 1) { /* do nothing */ } else { for (k = 1; k < i; k++) { len += strlen(argv[k]) + 2; } tmp = (char *) malloc(len); tmp[0] = '\0'; for (k = 1; k < argc; k++) { pos += sprintf(tmp + pos, "%s%s", (k == 1 ? "" : "|"), argv[k]); } printf("tmp %s", tmp); grep[0] = "grep"; grep[1] = "-E"; grep[2] = tmp; grep[3] = NULL; shellcommand[0].argv = printenv; shellcommand[1].argv = grep; shellcommand[2].argv = sort; shellcommand[3].argv = pager_cmd; fflush(NULL); pid = fork(); if (pid < 0) { perror("fork failed"); return -1; } if (pid == 0) { fork_pipes(4, shellcommand); } /* * We are the parent process. * Wait for the child to complete. 
*/ status = 0; while (((pid = waitpid(pid, &status, 0)) < 0) && (errno == EINTR)); if (pid < 0) { fprintf(stderr, "Error from waitpid: %s", strerror(errno)); return -1; } if (WIFSIGNALED(status)) { fprintf(stderr, "pid %ld: killed by signal %d\n", (long) pid, WTERMSIG(status)); return -1; } return WEXITSTATUS(status); } return 1; } int do_editenv(int argc, const char **argv) { int r = 0; if (getpid() == 1) { fprintf(stderr, "You are the INIT process!\n"); return 1; } if (argc == 2) { r = atoi(argv[1]); } exit(r); return 1; } int do_cd(int argc, const char **argv) { const char *path; if (argc > 1) { path = argv[1]; } else { path = getenv("HOME"); if (path == NULL) { fprintf(stderr, "No HOME environment variable\n"); return 1; } } if (chdir(path) < 0) { perror(path); return 1; } return 0; } int do_exit(int argc, const char **argv) { int r = 0; if (getpid() == 1) { fprintf(stderr, "You are the INIT process!\n"); return 1; } if (argc == 2) { r = atoi(argv[1]); } exit(r); return 1; } int do_killport(int argc, const char **argv) { return 0; } /* * Take a command string and break it up into an argc, argv list while * handling quoting and wildcards. The returned argument list and * strings are in static memory, and so are overwritten on each call. * The argument list is ended with a NULL pointer for convenience. * Returns true if successful, or false on an error with a message * already output. */ bool makeArgs(const char *cmd, int *retArgc, const char ***retArgv, bool pipe, int g, int h) { const char *argument; char *cp; char *cpOut; char *newStrings; const char **fileTable; const char **newArgTable; int newArgTableSize; int fileCount; int len; int ch; int quote; bool quotedWildCards; bool unquotedWildCards; bool tilde; static int stringsLength; static char *strings; static int argCount; static int argTableSize; static const char **argTable; /* * Clear the returned values until we know them. 
*/ argCount = 0; *retArgc = 0; *retArgv = NULL; tilde = false; /* * Copy the command string into a buffer that we can modify, * reallocating it if necessary. */ len = strlen(cmd) + 1; if (len > stringsLength) { newStrings = realloc(strings, len); if (newStrings == NULL) { fprintf(stderr, "Cannot allocate string\n"); return false; } strings = newStrings; stringsLength = len; } memcpy(strings, cmd, len); cp = strings; /* * Keep parsing the command string as long as there are any * arguments left. */ while (*cp) { /* * Save the beginning of this argument. */ argument = cp; cpOut = cp; /* * Reset quoting and wildcarding for this argument. */ quote = '\0'; quotedWildCards = false; unquotedWildCards = false; /* * Loop over the string collecting the next argument while * looking for quoted strings or quoted characters, and * remembering whether there are any wildcard characters * in the argument. */ while (*cp) { ch = *cp++; /* if (ch == '|') { *cpOut++ = '|'; printf("** deal with it **%s", argument); char *tmpchar = malloc(sizeof(argument)); strcpy(tmpchar, argument); char *command[40]; tokens = cp; */ /*++tokens = "\0";* ;//str_split(command, tmpchar, '|');*/ /* calc = 1; continue;* }*/ /* * If we are not in a quote and we see a blank or a pipeline char then * this argument is done. */ if (isBlank(ch) && (quote == '\0')) break; /* check for tilde */ if (ch == '~') { tilde = true; } /* * If we see a backslash then accept the next * character no matter what it is. */ if (ch == '\\') { ch = *cp++; /* * Make sure there is a next character. */ if (ch == '\0') { fprintf(stderr, "Bad quoted character\n"); return false; } /* * Remember whether the quoted character * is a wildcard. */ if (isWildCard(ch)) quotedWildCards = true; *cpOut++ = ch; continue; } /* * If we see one of the wildcard characters then * remember whether it was seen inside or outside * of quotes. 
*/ if (isWildCard(ch)) { if (quote) quotedWildCards = true; else unquotedWildCards = true; } /* * If we were in a quote and we saw the same quote * character again then the quote is done. */ if (ch == quote) { quote = '\0'; continue; } /* * If we weren't in a quote and we see either type * of quote character, then remember that we are * now inside of a quote. */ if ((quote == '\0') && ((ch == '\'') || (ch == '"'))) { quote = ch; continue; } /* * Store the character. */ *cpOut++ = ch; } /* * Make sure that quoting is terminated properly. */ if (quote) { fprintf(stderr, "Unmatched quote character\n"); return false; } /* * Null terminate the argument if it had shrunk, and then * skip over all blanks to the next argument, nulling them * out too. */ if (cp != cpOut) *cpOut = '\0'; while (isBlank(*cp)) *cp++ = '\0'; /* * If both quoted and unquoted wildcards were used then * complain since we don't handle them properly. */ if (quotedWildCards && unquotedWildCards) { fprintf(stderr, "Cannot use quoted and unquoted wildcards\n"); return false; } if (tilde) { /* * Expand the argument into the matching filenames. */ fileCount = expandWildCards(argument, &fileTable); } /* * Expand the argument into the matching filenames or accept * it as is depending on whether there were any unquoted * wildcard characters in it. */ if (unquotedWildCards) { /* * Expand the argument into the matching filenames. */ fileCount = expandWildCards(argument, &fileTable); /* * Return an error if the wildcards failed to match. */ if (fileCount < 0) return false; if (fileCount == 0) { fprintf(stderr, "Wildcard expansion error\n"); return false; } } else { /* * Set up to only store the argument itself. */ fileTable = &argument; fileCount = 1; } /* * Now reallocate the argument table to hold the file name. 
*/ if (argCount + fileCount >= argTableSize) { newArgTableSize = argCount + fileCount + 1; newArgTable = (const char **) realloc(argTable, (sizeof(const char *) * newArgTableSize)); if (newArgTable == NULL) { fprintf(stderr, "No memory for arg list\n"); return false; } argTable = newArgTable; argTableSize = newArgTableSize; } if (1) { /* for(int i=0;i<argCount;i++){ }*/ /* * Copy the new arguments to the end of the old ones. */ /* copy_of_argv = malloc(sizeof(char *) * (argTableSize+fileCount));*/ /* memcpy(copy_of_argv,argTable, fileCount*sizeof(const char **));*/ /* int q; char **copy_of_argv = malloc(sizeof(char *) * (argc-1)); for (q = 0; q < argc - 1; q++) { copy_of_argv[q] = strdup(argv[q + 1]); } */ memcpy((void *) &argTable[argCount], (const void *) fileTable, (sizeof(const char **) * fileCount)); /* int i; copy_of_argv = malloc(sizeof(char *) * (argTableSize-1)); for (i = 0; i < argCount - 1; i++) { copy_of_argv[i] = strdup(argTable[i + 1]); } */ /* if (tokens) { int count = argCount + 1; memcpy((void *) &argTable[argCount], (const void *) tokens, (sizeof(const char **) * 20)); tokens = NULL; }*/ /* * Add to the argument count. */ argCount += fileCount; } } /* * Null terminate the argument list and return it. 
*/ if (tilde) --argCount; argTable[argCount] = NULL; *retArgc = argCount; *retArgv = argTable; return true; } /* int isBetweenQuotes(int pos, char *str) { return IBQplain(pos, str, 0); } int IBQsingle(int pos, char *str, int offset) { int escaped = 0; for (; str[offset]; ++offset) { if (!escaped) { switch (str[offset]) { case '\\': escaped = 1; case '\'': return IBQplain(pos, str, offset + 1); } } else { escaped = 0; } if (pos == offset) { return 1; } } } int IBQdouble(int pos, char *str, int offset) { int escaped = 0; for (; str[offset]; ++offset) { if (!escaped) { switch (str[offset]) { case '\\': escaped = 1; case '"': return IBQdouble(pos, str, offset + 1); } } else { escaped = 0; } if (pos == offset) { return 1; } } } int IBQplain(int pos, char *str, int offset) { char ch; if (pos == offset) return 0; // Not within quotes int escaped = 0; for (ch = str[offset]; ch; ch = str[++offset]) { if (!escaped) { switch (str[offset]) { '\'': return IBQsingle(pos, str, offset + 1); '"': return IBQdouble(pos, str, offset + 1); '\\': escaped = 1 } else { escaped = 0; } if (pos == offset) return escaped; // Not within quotes, but may be single-escaped } } } */ /* Helper function that spawns processes */ int spawn_proc(int in, int out, struct command *cmd) { pid_t pid; fflush(NULL); pid = fork(); if (pid == 0) { if (in != 0) { if (dup2(in, 0) < 0) err_syserr("dup2() failed on stdin for %s: ", cmd->argv[0]); close(in); } if (out != 1) { if (dup2(out, 1) < 0) err_syserr("dup2() failed on stdout for %s: ", cmd->argv[0]); close(out); } fprintf(stderr, "%d: executing %s\n", (int) getpid(), cmd->argv[0]); execvp(cmd->argv[0], cmd->argv); err_syserr("failed to execute %s: ", cmd->argv[0]); } else if (pid < 0) { err_syserr("fork failed: "); } else { /* printf("** we are the parent ***"); */ } return pid; } /* Helper function that forks pipes */ void fork_pipes(int n, struct command *cmd) { int i; int in = 0; int fd[2]; for (i = 0; i < n - 1; ++i) { if (pipe(fd) == -1) { 
err_syserr("Failed creating pipe"); } spawn_proc(in, fd[1], cmd + i); close(fd[1]); in = fd[0]; } if (dup2(in, 0) < 0) { err_syserr("dup2() failed on stdin for %s: ", cmd[i].argv[0]); } fprintf(stderr, "%d: executing %s\n", (int) getpid(), cmd[i].argv[0]); execvp(cmd[i].argv[0], cmd[i].argv); err_syserr("failed to execute %s: ", cmd[i].argv[0]); } /* helper function that determines whether a file exists */ int file_exist(char *filename) { struct stat buffer; return (stat(filename, &buffer) == 0); } void handle_sigchld(int sig) { int saved_errno = errno; while (waitpid((pid_t) (-1), 0, WNOHANG) > 0) { } errno = saved_errno; } Answer: Start with your runCmd method, that is one of the most unreadable methods I have ever seen and it CRIES to be broken up into several methods at least. Start with giving each variable a meaningful name, then break up as much as you can into sub methods to reduce the depth. As a general guideline, two nested control flow statements are already hard to read.
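The parsing model the question describes — split on the unquoted pipe character first, then tokenize each stage — can be sketched compactly in Python. This is an illustrative sketch, not the original C (names and structure are made up), but it shows what the answer asks for: one small function per concern, with the quoting state tracked in a single place. Backslash escapes are left out to keep it short.

```python
import shlex

def split_pipeline(cmd):
    """Split cmd on unquoted '|' into stages, then tokenize each stage.

    Returns a list of argv lists, one per pipe stage, so result[i][j] is
    argument j of program i -- the matrix layout described in the question.
    """
    stages, buf, quote = [], [], None
    for ch in cmd:
        if quote:                      # inside '...' or "..."
            buf.append(ch)
            if ch == quote:
                quote = None
        elif ch in "'\"":              # entering a quoted region
            quote = ch
            buf.append(ch)
        elif ch == '|':                # unquoted pipe: stage boundary
            stages.append(''.join(buf))
            buf = []
        else:
            buf.append(ch)
    stages.append(''.join(buf))
    # shlex.split handles per-stage word splitting and quote removal
    return [shlex.split(s) for s in stages]
```

With this split, the quoted-pipeline case the question mentions works as well: `split_pipeline("echo 'foo|bar'|cat")` yields `[['echo', 'foo|bar'], ['cat']]`.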
{ "domain": "codereview.stackexchange", "id": 19783, "tags": "c, parsing, shell" }
generating boxes from octomap::octree error
Question: I am using FCL, where I have an octree, and I want to generate boxes from the octree. I used the following code: octomap::OcTree* st_tree = new octomap::OcTree(0.1); octomap::Pointcloud st_cld; // I fill the tree for(int i = 0;i<msg.points.size();i++){ point3d endpoint((float) msg.points[i].x,(float) msg.points[i].y,(float) msg.points[i].z); st_cld.push_back(endpoint); } point3d origin(0.0,0.0,0.0); st_tree->insertPointCloud(st_cld,origin); st_tree->updateInnerOccupancy(); st_tree->writeBinary("static_occ.bt"); std::vector<CollisionObject*> boxes; generateBoxesFromOctomap(boxes, *st_tree); The function is defined as follows: void generateBoxesFromOctomap(std::vector<CollisionObject*>& boxes, octomap::OcTree& tree) { std::vector<boost::array<FCL_REAL, 6> > boxes_ = tree.toBoxes(); for(std::size_t i = 0; i < boxes_.size(); ++i) { FCL_REAL x = boxes_[i][0]; FCL_REAL y = boxes_[i][1]; FCL_REAL z = boxes_[i][2]; FCL_REAL size = boxes_[i][3]; FCL_REAL cost = boxes_[i][4]; FCL_REAL threshold = boxes_[i][5]; Box* box = new Box(size, size, size); box->cost_density = cost; box->threshold_occupied = threshold; CollisionObject* obj = new CollisionObject(boost::shared_ptr<CollisionGeometry>(box), Transform3f(Vec3f(x, y, z))); boxes.push_back(obj); } std::cout << "boxes size: " << boxes.size() << std::endl; } I get the following error: error: ‘class octomap::OcTree’ has no member named ‘toBoxes’ std::vector<boost::array<FCL_REAL, 6> > boxes_ = tree.toBoxes(); But I have included all the required libraries. How can I solve it? Originally posted by RSA_kustar on ROS Answers with karma: 275 on 2015-02-18 Post score: 0 Answer: There is no function toBoxes defined in OctoMap, so that code won't work. Best refer to the official OctoMap documentation. Originally posted by AHornung with karma: 5904 on 2015-02-21 This answer was ACCEPTED on the original site Post score: 1
{ "domain": "robotics.stackexchange", "id": 20912, "tags": "ros, octomap, fcl, boost, octree" }
If one endures the same form of pain over a long period of time, would the pain begin to lose intensity?
Question: Metaphorically speaking, if one endured the pain of constant burning for decades, would the pain slowly lose its strength? Answer: Under the assumption that the temperature experienced is not high enough to simply burn the nerves responsible for the pain (which would lead to insensible regions), yes, it would, due to what is called receptor desensitization. In the case of burning pain the receptor involved is the transient receptor potential channel V1 (TRPV1). It is the same receptor involved in the oral burning sensation of spiciness. Two scenarios are possible given your question, namely a constant stimulus over a long period and "oscillating" stimuli (i.e. a repetition of stimuli followed by non-stimulated periods). Constant stimulus I provided a similar answer to this question: What features cause mechano sensory adaptation?. This is the scenario where you keep the painful stimulus constant over a period of time. Here desensitization, i.e. a reduction in the pain feeling, is due to direct receptor desensitization. This happens because the receptor is internalized into the cell (receptor-mediated endocytosis) after being activated, and it takes some time for the receptor to reach the cell membrane again; if the stimulus remains constant, essentially no receptors will be left on the cell membrane. After decades, long-term desensitization will occur; its mechanism differs slightly from standard desensitization, as described hereafter. "Oscillating" stimuli In the case of chronic pain interrupted by periods of non-stimulation, long-term desensitization will occur. In this scenario, the concentration of receptors is reduced semi-permanently (i.e. expression of the receptor is reduced) in the receptor cells, reducing the pain signal. This is what happens with the spiciness sensation.
People eating regularly very spicy will physiologically be less sensitive to spices as they have less receptors sensing the spices provoking a reduced burning sensation. Just as a footnote, very long chronic pain will also change the brain wiring of the pain signal but as I don't know enough about neurobiology I will leave that appart.
{ "domain": "biology.stackexchange", "id": 4258, "tags": "pain" }
String manipulation in Python
Question: I have this very ugly piece of code which runs ethtool and then parses the output (Lilex is just a static class encapsulating subprocess and is not relevant to the question). I'm looking for suggestions to shorten the string manipulation. bash_command = "/sbin/ethtool -P " + identifier output = Lilex.execute_command(bash_command) mac_address = str(output) mac_address = mac_address.replace("Permanent address: ", "") mac_address = mac_address.replace("\\n", "") mac_address = mac_address.replace("'", "") mac_address = mac_address[1:].strip() This is example output produced by ethtool -P: Permanent address: 12:af:37:d0:a9:c8 I'm not sure why I'm replacing single quotes with nothing, but I'm sure I've seen the command output single quotes before, so that part needs to stay. An alternative suggestion (which is actually not much different): mac_address = mac_address \ .split(":")[1]\ .replace("\\n","") \ .replace("'","") \ .strip() Answer: How about this? mac_address = mac_address[19:].translate(str.maketrans("", "", "\n':")).strip()
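Another option, not tied to fixed offsets at all: match the MAC address pattern itself with a regular expression, which survives changes in the label text or a bytes-vs-str mixup (the helper name is mine, as a sketch independent of the original Lilex class):

```python
import re

def parse_mac(output):
    """Extract a MAC address from `ethtool -P` output.

    Works on plain strings as well as bytes objects (whose str() repr
    contains escape artifacts), since it only looks for the hex pattern.
    """
    match = re.search(r"(?:[0-9a-fA-F]{2}:){5}[0-9a-fA-F]{2}", str(output))
    if match is None:
        raise ValueError("no MAC address found in: %r" % output)
    return match.group(0)

print(parse_mac("Permanent address: 12:af:37:d0:a9:c8\n"))  # 12:af:37:d0:a9:c8
```

This removes the need for the mysterious single-quote stripping entirely: quotes, newlines, and prefixes simply never match the pattern.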
{ "domain": "codereview.stackexchange", "id": 35634, "tags": "python, python-3.x, strings" }
Materials with a high compressive strength to weight ratio
Question: We know about extremely strong materials such as carbon nanotubes. However, this is only in tension. What are some high strength-to-weight materials (both available and hypothetical) in uniaxial compression? Such a material would be useful in supertall structures and vacuum airships. From my research, PVC pipe seems to be a good bet at 100 MPa and only 1.3 g/cm^3. However, this is still inadequate for a vacuum airship. Beryllium is also a likely candidate as it is 1400 MPa and only 1.85 g/cm^3, making it doable in a vacuum airship. However, it is very toxic to work with (I don't know if bulk beryllium is safe) and expensive. Answer: Compressive strength to density ratio is not the most critical factor for vacuum balloons, as the most dangerous failure mode for vacuum balloons is buckling. The elasticity modulus to density squared ratio is more important. This issue is considered in my US patent application 20070001053 (11/517915) (written together with my coauthor) - http://akhmeteli.org/wp-content/uploads/2011/08/vacuum_balloons_cip.pdf . It is shown that no homogeneous shell can be both light enough to float in air and strong enough to withstand atmospheric pressure. However, finite element analysis shows that spherical sandwich structures made of commercially available materials can meet these requirements. The face sheets can be made of beryllium, boron carbide, or some other materials; the core can be made of aluminum honeycomb or some other materials. However, manufacturing of such vacuum balloons is not easy and has not been done yet.
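To see why the buoyancy constraint dooms a homogeneous shell, one can combine the float condition with the classical buckling pressure of a thin spherical shell under external pressure, p_cr = 2E(t/R)^2 / sqrt(3(1 - nu^2)). A rough sketch (the material numbers are approximate illustrations, and real shells buckle even earlier due to imperfections):

```python
import math

# Rough material numbers, assumed for illustration:
E = 300e9            # Young's modulus of beryllium, Pa
nu = 0.03            # Poisson's ratio of beryllium (unusually low)
rho_shell = 1850.0   # beryllium density, kg/m^3
rho_air = 1.2        # air density at sea level, kg/m^3
P_ATM = 101325.0     # atmospheric pressure, Pa

# Neutral buoyancy of a thin shell: 4*pi*R^2*t*rho_shell < (4/3)*pi*R^3*rho_air,
# which bounds the wall-thickness ratio t/R independently of R:
t_over_R = rho_air / (3 * rho_shell)

# Classical buckling pressure of a thin spherical shell:
p_cr = 2 * E * t_over_R**2 / math.sqrt(3 * (1 - nu**2))

print(f"max t/R for buoyancy: {t_over_R:.2e}")
print(f"buckling pressure at that t/R: {p_cr / P_ATM:.3f} atm")  # well below 1 atm
```

Even with beryllium's exceptional stiffness-to-density ratio, the shell thin enough to float buckles at a small fraction of an atmosphere, consistent with the answer's claim that only non-homogeneous (sandwich) walls can work.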
{ "domain": "physics.stackexchange", "id": 6150, "tags": "material-science" }
Why is wavefunction collapse always non-unitary?
Question: The 'wavefunction collapse' upon measurement is usually referred to as being a non-unitary transformation, since it does not preserve the norm of the state vector. Indeed, if a linear superposition like $\psi + \phi$ collapses to, let's say, just $\phi$, then $||\psi+\phi|| \neq ||\phi||$. But what if $\psi + \phi$ collapses into $\alpha \phi$, where $\alpha$ is such that $||\alpha \phi|| = ||\psi + \phi||$? Then the norm is preserved, and $\alpha \phi$ only differs from $\phi$ by a constant, so it represents the same state as $\phi$. Wouldn't this type of collapse be a unitary transformation, and if so, why can't all types of state collapse be treated like this? Answer: The reason is two-fold: 1) A unitary transformation does preserve the norm $\left\|\psi \right\|^2 = \langle \psi | \psi \rangle$, but not only the norm. 2) A quantum measurement must produce a state that is not affected by a repeat identical measurement. In general, a unitary transformation $U$, $U^\dagger U = U U^\dagger = I$, preserves overlaps: $$ \langle U\phi | U\psi\rangle = \langle \phi |U^\dagger U |\psi \rangle = \langle \phi | \psi \rangle $$ Say a normalized state reads $a\psi + b\phi$ before collapse, for orthogonal and normalized $\psi$ and $\phi$, $\langle \phi | \psi \rangle = 0$, $\langle \psi|\psi \rangle = \langle \phi|\phi \rangle = 1$, and $|a|^2 + |b|^2 = 1$ such that $\left\|a\psi + b\phi \right\|^2 = \langle a\psi + b\phi| a\psi + b\phi \rangle = 1$. Let $a\psi + b\phi$ collapse into $\phi$ upon measurement. By definition, a 2nd identical measurement must leave $\phi$ unchanged. If the collapse were a unitary evolution such that $U(a\psi + b\phi) = \phi$, then the same measurement on $\phi$ would have to result in $U\phi = \phi$. The unitary $U$ would indeed preserve the norm, since $\left\|a\psi + b\phi \right\| = \left\|U(a\psi + b\phi) \right\| = \left\|\phi \right\| = 1$. 
But $U$ should also preserve the overlap $\langle \phi | a\psi + b \phi \rangle = b$, whereas instead $$ \langle U\phi | U(a\psi + b \phi) \rangle = \langle \phi | \phi \rangle = 1 > b $$ Since we already took care of normalizations, there is no way to remove the above disagreement by a rescaling of $\phi$. So collapse cannot be unitary. A faster way to arrive at the same conclusion is to consider collapse from a mixed initial state $\rho$, $\rho \neq \rho^2$. The result of the collapse would still be a pure state, so in this case $U$ would have to take a mixed state into a pure state. But unitary transformations always take pure states into pure states, so again this cannot work.
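The overlap bookkeeping above can be checked numerically in a two-dimensional toy Hilbert space; the particular values a = 0.6, b = 0.8 are an arbitrary illustration:

```python
import numpy as np

# Two orthonormal states and a normalized superposition a*psi + b*phi.
psi = np.array([1.0, 0.0])
phi = np.array([0.0, 1.0])
a, b = 0.6, 0.8                  # |a|^2 + |b|^2 = 1
state = a * psi + b * phi

# Overlap of phi with the state before "collapse":
before = np.vdot(phi, state)     # = b = 0.8

# If collapse were a unitary U with U(state) = phi and U(phi) = phi,
# the overlap would have to be preserved; but after collapse it is
after = np.vdot(phi, phi)        # = 1

print(before.real, after.real)   # 0.8 != 1, so no such unitary exists
```

Both vectors have unit norm throughout, which makes concrete the point that norm preservation alone is not enough: it is the change of overlap that rules out a unitary collapse.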
{ "domain": "physics.stackexchange", "id": 25925, "tags": "quantum-mechanics, unitarity" }
Finding a partition with minimum "maximal length"
Question: We're given $n^2$ different points in $(0,1)$: $x_1< x_2 < \dots < x_{n^2}$. We are required to choose $n$ points $x_{i_1}<\dots<x_{i_n}$ such that the value of $\max\,\{x_{i_1}, x_{i_2}-x_{i_1}, x_{i_3}-x_{i_2},\dots,x_{i_n} - x_{i_{n-1}}, 1-x_{i_n}\} $ is minimal. It feels a lot like a dynamic programming problem, but I didn't manage to divide it into appropriate subproblems. Any ideas? Answer: This problem is slightly misleading in that one might wonder how to use the condition that the number of points to choose from is the square of the number of chosen points. That is actually a distraction. Here comes a hint: How about instead of $n^2$, you are given some $m$ different numbers, where $m\geq n$? This more general problem is not in any way harder than the original question. This problem can indeed be solved efficiently by dynamic programming. In case the previous hint is not enough, here are the subproblems stated explicitly. Can you compute the following function of integers $w$ and $k$, where $1\leq k \leq w$? $$ m(w,k) := \min_{i_0 = 0,\,i_k = w,\,i_1\lt\cdots<i_k} \max\{x_{i_1} - x_{i_0},\,x_{i_2}-x_{i_1},\,\cdots,\,x_{i_k} - x_{i_{k-1}}\} $$ What is the technique? We replace the last item $1-x_{i_n}$ with $x_{i_k} - x_{i_{k-1}}$, with the convention $x_0 = 0$ for the sentinel index $i_0 = 0$. We can compute subproblem $m(w,k)$ from subproblems $m(w',k-1)$ with $w'<w$. The original problem is just the special case $m(n^2+1, n+1)$ if we set $x_{n^2+1} = 1$.
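The recurrence translates directly into a bottom-up table. A Python sketch (the function name and the O(m^2 n) loop bounds are my own; note it accepts any m >= n points, which is exactly why the n^2 count is a distraction):

```python
def min_max_gap(xs, n):
    """Choose n of the sorted points xs in (0,1) minimizing the largest gap,
    counting the gaps to 0 and 1 at the ends."""
    x = [0.0] + sorted(xs) + [1.0]   # sentinels x_0 = 0 and x_{m+1} = 1
    m = len(xs)
    INF = float("inf")
    # dp[w][k] = m(w, k): min over index chains i_0 = 0 < i_1 < ... < i_k = w
    #            of the largest difference x_{i_j} - x_{i_{j-1}}
    dp = [[INF] * (n + 2) for _ in range(m + 2)]
    for w in range(1, m + 2):
        dp[w][1] = x[w]              # a single gap, from x_0 = 0 to x_w
        for k in range(2, min(w, n + 1) + 1):
            for w2 in range(k - 1, w):
                dp[w][k] = min(dp[w][k], max(dp[w2][k - 1], x[w] - x[w2]))
    return dp[m + 1][n + 1]          # n chosen points = n+1 gaps

print(min_max_gap([0.1, 0.2, 0.5, 0.9], 2))  # -> 0.5
```

For the sample call, every choice of two points leaves a gap of at least 0.5 (e.g. picking 0.2 and 0.5 leaves the gap 1 - 0.5), matching the brute-force optimum.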
{ "domain": "cs.stackexchange", "id": 11958, "tags": "algorithms, dynamic-programming" }
"roslaunch explore_stage explore.launch" problem
Question: Please help! I used explore_stage before, but after upgrading my Ubuntu it gives me this error: mohsen@mohsen-ThinkPad-R500:~$ roslaunch explore_stage explore.launch ... logging to /home/mohsen/.ros/log/e3e2f4c6-bca0-11e2-b4c1-001c259b7f6b/roslaunch-mohsen-ThinkPad-R500-5445.log Checking log directory for disk usage. This may take awhile. Press Ctrl-C to interrupt Done checking log file disk usage. Usage is <1GB. WARNING: ignoring defunct <master /> tag started roslaunch server http://mohsen-ThinkPad-R500:43843/ SUMMARY ======== PARAMETERS * /explore/close_loops * /explore/explore_costmap/base_scan/clearing * /explore/explore_costmap/base_scan/data_type * /explore/explore_costmap/base_scan/expected_update_rate * /explore/explore_costmap/base_scan/marking * /explore/explore_costmap/base_scan/observation_persistance * /explore/explore_costmap/base_scan/sensor_frame * /explore/explore_costmap/circumscribed_radius * /explore/explore_costmap/cost_scaling_factor * /explore/explore_costmap/global_frame * /explore/explore_costmap/height * /explore/explore_costmap/inflation_radius * /explore/explore_costmap/inscribed_radius * /explore/explore_costmap/lethal_cost_threshold * /explore/explore_costmap/map_type * /explore/explore_costmap/max_obstacle_height * /explore/explore_costmap/observation_sources * /explore/explore_costmap/obstacle_range * /explore/explore_costmap/origin_x * /explore/explore_costmap/origin_y * /explore/explore_costmap/publish_frequency * /explore/explore_costmap/raytrace_range * /explore/explore_costmap/resolution * /explore/explore_costmap/robot_base_frame * /explore/explore_costmap/rolling_window * /explore/explore_costmap/static_map * /explore/explore_costmap/track_unknown_space * /explore/explore_costmap/transform_tolerance * /explore/explore_costmap/update_frequency * /explore/explore_costmap/width * /explore/gain_scale * /explore/orientation_scale * /explore/potential_scale * /move_base/NavfnROS/transform_tolerance * 
/move_base/TrajectoryPlannerROS/acc_lim_th * /move_base/TrajectoryPlannerROS/acc_lim_x * /move_base/TrajectoryPlannerROS/acc_lim_y * /move_base/TrajectoryPlannerROS/dwa * /move_base/TrajectoryPlannerROS/escape_reset_dist * /move_base/TrajectoryPlannerROS/escape_reset_theta * /move_base/TrajectoryPlannerROS/goal_distance_bias * /move_base/TrajectoryPlannerROS/heading_lookahead * /move_base/TrajectoryPlannerROS/heading_scoring * /move_base/TrajectoryPlannerROS/heading_scoring_timestep * /move_base/TrajectoryPlannerROS/holonomic_robot * /move_base/TrajectoryPlannerROS/max_vel_th * /move_base/TrajectoryPlannerROS/max_vel_x * /move_base/TrajectoryPlannerROS/min_in_place_vel_th * /move_base/TrajectoryPlannerROS/min_vel_th * /move_base/TrajectoryPlannerROS/min_vel_x * /move_base/TrajectoryPlannerROS/occdist_scale * /move_base/TrajectoryPlannerROS/oscillation_reset_dist * /move_base/TrajectoryPlannerROS/path_distance_bias * /move_base/TrajectoryPlannerROS/sim_granularity * /move_base/TrajectoryPlannerROS/sim_time * /move_base/TrajectoryPlannerROS/simple_attractor * /move_base/TrajectoryPlannerROS/transform_tolerance * /move_base/TrajectoryPlannerROS/vtheta_samples * /move_base/TrajectoryPlannerROS/vx_samples * /move_base/TrajectoryPlannerROS/world_model * /move_base/TrajectoryPlannerROS/xy_goal_tolerance * /move_base/TrajectoryPlannerROS/yaw_goal_tolerance * /move_base/footprint * /move_base/global_costmap/base_scan/clearing * /move_base/global_costmap/base_scan/data_type * /move_base/global_costmap/base_scan/expected_update_rate * /move_base/global_costmap/base_scan/marking * /move_base/global_costmap/base_scan/observation_persistance * /move_base/global_costmap/base_scan/sensor_frame * /move_base/global_costmap/circumscribed_radius * /move_base/global_costmap/cost_scaling_factor * /move_base/global_costmap/global_frame * /move_base/global_costmap/height * /move_base/global_costmap/inflation_radius * /move_base/global_costmap/inscribed_radius * 
/move_base/global_costmap/lethal_cost_threshold * /move_base/global_costmap/map_type * /move_base/global_costmap/max_obstacle_height * /move_base/global_costmap/observation_sources * /move_base/global_costmap/obstacle_range * /move_base/global_costmap/origin_x * /move_base/global_costmap/origin_y * /move_base/global_costmap/publish_frequency * /move_base/global_costmap/raytrace_range * /move_base/global_costmap/resolution * /move_base/global_costmap/robot_base_frame * /move_base/global_costmap/rolling_window * /move_base/global_costmap/static_map * /move_base/global_costmap/transform_tolerance * /move_base/global_costmap/update_frequency * /move_base/global_costmap/width * /move_base/local_costmap/base_scan/clearing * /move_base/local_costmap/base_scan/data_type * /move_base/local_costmap/base_scan/expected_update_rate * /move_base/local_costmap/base_scan/marking * /move_base/local_costmap/base_scan/observation_persistance * /move_base/local_costmap/base_scan/sensor_frame * /move_base/local_costmap/circumscribed_radius * /move_base/local_costmap/cost_scaling_factor * /move_base/local_costmap/global_frame * /move_base/local_costmap/height * /move_base/local_costmap/inflation_radius * /move_base/local_costmap/inscribed_radius * /move_base/local_costmap/lethal_cost_threshold * /move_base/local_costmap/map_type * /move_base/local_costmap/max_obstacle_height * /move_base/local_costmap/observation_sources * /move_base/local_costmap/obstacle_range * /move_base/local_costmap/origin_x * /move_base/local_costmap/origin_y * /move_base/local_costmap/publish_frequency * /move_base/local_costmap/raytrace_range * /move_base/local_costmap/resolution * /move_base/local_costmap/robot_base_frame * /move_base/local_costmap/rolling_window * /move_base/local_costmap/static_map * /move_base/local_costmap/transform_tolerance * /move_base/local_costmap/update_frequency * /move_base/local_costmap/width * /rosdistro * /rosversion * /use_sim_time NODES / explore (explore/explore) 
fake_localize (tf/static_transform_publisher) move_base (move_base/move_base) stage (stage/stageros) auto-starting new master Exception AttributeError: AttributeError("'_DummyThread' object has no attribute '_Thread__block'",) in <module 'threading' from '/usr/lib/python2.7/threading.pyc'> ignored process[master]: started with pid [5461] ROS_MASTER_URI=http://localhost:11311 setting /run_id to e3e2f4c6-bca0-11e2-b4c1-001c259b7f6b Exception AttributeError: AttributeError("'_DummyThread' object has no attribute '_Thread__block'",) in <module 'threading' from '/usr/lib/python2.7/threading.pyc'> ignored process[rosout-1]: started with pid [5474] started core service [/rosout] Exception AttributeError: AttributeError("'_DummyThread' object has no attribute '_Thread__block'",) in <module 'threading' from '/usr/lib/python2.7/threading.pyc'> ignored process[stage-2]: started with pid [5488] Exception AttributeError: AttributeError("'_DummyThread' object has no attribute '_Thread__block'",) in <module 'threading' from '/usr/lib/python2.7/threading.pyc'> ignored process[fake_localize-3]: started with pid [5503] Exception AttributeError: AttributeError("'_DummyThread' object has no attribute '_Thread__block'",) in <module 'threading' from '/usr/lib/python2.7/threading.pyc'> ignored process[move_base-4]: started with pid [5517] Exception AttributeError: AttributeError("'_DummyThread' object has no attribute '_Thread__block'",) in <module 'threading' from '/usr/lib/python2.7/threading.pyc'> ignored [Loading /home/mohsen/fuerte_workspace/stage/world/willow-erratic.world]process[explore-5]: started with pid [5535] [Image "willow-full.pgm"[ INFO] [1368541001.031481439]: Subscribed to Topics: base_scan [ INFO] [1368541001.306642966]: Subscribed to Topics: base_scan ] warn: worldfile /home/mohsen/fuerte_workspace/stage/world/willow-erratic.world:21 : property [laser_return] is defined but not used (/home/mohsen/fuerte_workspace/stage/build/stage/libstage/worldfile.cc WarnUnused) [ 
INFO] [1368541002.140752250]: found 1 position/laser pair in the file [ WARN] [1368541003.030337928, 0.800000000]: Message from [/stage] has a non-fully-qualified frame_id [base_laser_link]. Resolved locally to [/base_laser_link]. This is will likely not work in multi-robot systems. This message will only print once. [ INFO] [1368541003.037880366, 0.800000000]: MAP SIZE: 1200, 1200 [ INFO] [1368541003.044607620, 0.800000000]: Subscribed to Topics: base_scan [ INFO] [1368541003.361674617, 1.200000000]: Sim period is set to 0.05 [explore-5] process has died [pid 5535, exit code -4, cmd /opt/ros/fuerte/stacks/exploration/explore/bin/explore __name:=explore __log:=/home/mohsen/.ros/log/e3e2f4c6-bca0-11e2-b4c1-001c259b7f6b/explore-5.log]. log file: /home/mohsen/.ros/log/e3e2f4c6-bca0-11e2-b4c1-001c259b7f6b/explore-5*.log [ WARN] [1368541003.752455595, 1.600000000]: The base_scan observation buffer has not been updated for 0.30 seconds, and it should be updated every 0.20 seconds. [ WARN] [1368541004.055580594, 1.900000000]: The base_scan observation buffer has not been updated for 0.60 seconds, and it should be updated every 0.20 seconds. [ WARN] [1368541004.173130739, 2.000000000]: Message from [/stage] has a non-fully-qualified frame_id [base_laser_link]. Resolved locally to [/base_laser_link]. This is will likely not work in multi-robot systems. This message will only print once. [move_base-4] process has died [pid 5517, exit code -4, cmd /opt/ros/fuerte/stacks/navigation/move_base/bin/move_base __name:=move_base __log:=/home/mohsen/.ros/log/e3e2f4c6-bca0-11e2-b4c1-001c259b7f6b/move_base-4.log]. log file: /home/mohsen/.ros/log/e3e2f4c6-bca0-11e2-b4c1-001c259b7f6b/move_base-4*.log ^C[fake_localize-3] killing on exit [stage-2] killing on exit ^[[A[rosout-1] killing on exit [master] killing on exit shutting down processing monitor... ... shutting down processing monitor complete done What should I do? 
Originally posted by Mohsen Hk on ROS Answers with karma: 139 on 2013-05-10 Post score: 0 Answer: Hello, you can find a temporary solution in this post. Also, the next time you write a question, try to phrase it a little more politely; it will be nicer to read. Thanks. Originally posted by martimorta with karma: 843 on 2013-05-14 This answer was ACCEPTED on the original site Post score: 0 Original comments Comment by Mohsen Hk on 2013-05-14: Hello. I am a beginner in English, and I was uneasy when I asked. Sorry, "Marti Morta" @};
{ "domain": "robotics.stackexchange", "id": 14133, "tags": "ros, navigation, move-base, explore" }
C# sitemap crawler
Question: Basically my piece of code is a sitemap crawler: it opens a sitemap that contains sub-sitemap listings over a date span. It opens each sub-sitemap and gets all the URLs (most of them are SEO URLs, but not all). As each sub-sitemap is enumerated, all of its URLs are put in a list called list2. A code overview looks like this: var dict = list2.ToDictionary(o => o, async o => await new WebClient { Credentials = new NetworkCredential(user, pass) } .DownloadStringTaskAsync(new Uri(o.Replace(liveUrl, devUrl) + end))); There is also a development site that contains the same data, but with some date-propagation delay; it does not have as hard a wall of cache as the live site does. Normally the page is 'heavy'; to mitigate that, a template page was created containing only the data I need. In addition, the dev site is only viewable on a particular domain after entering credentials. WebClient { Credentials = new NetworkCredential(user, pass) } .DownloadStringTaskAsync(new Uri(o.Replace(liveUrl, devUrl) + end)) Now I could open them one by one, but this would take an unreasonably long time. The problem I am having is that I cannot just open them all asynchronously either. The server deals fine with the first and second packs of 500+ URLs, but after that it starts to choke, sending 504s (which effectively means: your request is on hold for the next ~20 s). How can I set up a reasonable batch size? (Rx-Linq?) dict.Keys.ToList().ForEach(delegate(string o){ try { result.Add(new PublicationLinkData { Url = o, Title = dict[o].Result.FindTagValue("<div id=\"title\">", "</div>"), Date = DateTime.Parse(dict[o].Result.FindTagValue("<div id=\"time\">", "</div>")) } ); } catch (Exception ex) { sw.WriteLine("{0}\t{1}", item.AbsoluteUri, o); }}); dict.Clear(); result.Serialize(string.Format("d://sitemap/{0}-{1}.xml", @from, @to)); This basically says: loop over each result and get {URL, data, date}; if there are errors, log the sitemap and the URL with the problem. Finally, serialize the results. 
Answer: Managed to solve my problem decently. var dict = new Dictionary<string, string>(); Parallel.ForEach(list2, new ParallelOptions { MaxDegreeOfParallelism = 5 }, o => { try { var data = new WebClient { Credentials = new NetworkCredential(user, pass) } .DownloadString(new Uri(o.Replace(liveUrl, devUrl) + end)); result.Add(new PublicationLinkData { Url = item.AbsoluteUri, Title = data.FindTagValue("<div id=\"title\">", "</div>"), Date = DateTime.Parse(data.FindTagValue("<div id=\"time\">", "</div>")) } ); } catch (Exception ex) { sw.WriteLine("{0}\t{1}", item.AbsoluteUri, o); } }); Basically this runs up to 5 workers in parallel, each making synchronous requests.
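For comparison, the same bounded-concurrency idea sketched in Python: an asyncio.Semaphore plays the role of MaxDegreeOfParallelism, while the requests themselves stay asynchronous. The URLs and the sleep standing in for the real HTTP call are placeholders:

```python
import asyncio

async def fetch(url, sem):
    async with sem:                  # at most `limit` coroutines run this body
        await asyncio.sleep(0.01)    # placeholder for the real HTTP request
        return url, "<html>...</html>"

async def crawl(urls, limit=5):
    sem = asyncio.Semaphore(limit)
    # gather preserves input order, so results line up with urls
    return await asyncio.gather(*(fetch(u, sem) for u in urls))

results = asyncio.run(crawl([f"https://example.com/{i}" for i in range(20)]))
print(len(results))  # 20
```

The semaphore caps in-flight requests without splitting the work into fixed batches, so a slow URL never stalls a whole batch the way lock-step batching would.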
{ "domain": "codereview.stackexchange", "id": 12138, "tags": "c#, linq" }
Iterative Implementation of Towers of Hanoi
Question: Here is an implementation of Towers of Hanoi based on few observed patterns1 from the easier recursive solution: function [] = myTowersOfHanoi(N, from, to, alt) % Accepts three integers: N - number of disks % from - number of start tower, to - number of end tower, alt - free tower. % Returs string outputs with succesive moves to complete the task of solving % the Towers of Hanoi with N disks moved from tower with number stored % in the second argument to one with number in third arg. totalNumberOfMoves = (2 ^ N) - 1; M = generateDiskMoves(totalNumberOfMoves); % These are the paths of disks if N is odd. path1 = [from, alt, to]; % Path of disk with odd number: from->alt->to path2 = [from, to, alt]; % Path of disk with even number. currentPositions = ones(1, N); % index-disk number, value-number of moves len = numel(path1); % If N (numer of disks) is even the paths are swapped. if mod(N, 2) == 0 [path2, path1] = swapArrays(path1, path2); end % Solve for i = M from = -1; to = -1; if mod(i, 2) == 0 % if number of disk, i is even j = currentPositions(i); % j - number of moves for i-th disk % In C++ indexes: [0, size - 1] in Octave: [1, size] % so: mod(j - 1, len) + 1, to avoid index = 0. from = path1( mod(j - 1, len) + 1); % Cycle over 1->2->3 j = j + 1; to = path1( mod(j - 1, len) + 1); currentPositions(i) = j; % update moves of i-th disk else k = currentPositions(i); from = path2( mod(k - 1, len) + 1); k = k + 1; to = path2( mod(k - 1, len) + 1); currentPositions(i) = k; end disp(sprintf('Move disk %d from %d to %d.', i, from, to)) end end function [a2, a1] = swapArrays (a1, a2) [a2, a1] = deal(a1, a2); end % From: http://mathworld.wolfram.com/BinaryCarrySequence.html function [M] = generateDiskMoves(N) % Accepts integer: N - total number of moves. % Returns a 1xN integer array with the first N consecutive disk moves % in Tower of Hanoi where the index is the move number % and the value is the disk number. m - is discarded. 
[m, M] = Omega2(N); % Generate the first N terms of: "Binary Carry Sequence". M = M .+ 1; % Add one and get moves of disk in Tower of Hanoi. if N < 2 % Get only the first move. M = M(1); end end % From : https://oeis.org/A007814 function [m, M] = Omega2(n) % Accepts an integer: n. % Returns m: max power of 2 such that 2^m divides n, and % M: 1-by-K matrix where M(i) is the max power of 2 such % that 2^M(i) divides n. M = NaN * zeros(1, n); M(1) = 0; M(2) = 1; for k = 3 : n if M(k - 2) ~= 0 M(k) = M(k - k / 2) + 1; else M(k) = 0; end end m = M(end); end Input: Move 4 disks from 1st to 3rd peg, 2nd is free. myTowersOfHanoi(4, 1, 3, 2) Output: Move disk 1 from 1 to 2. Move disk 2 from 1 to 3. Move disk 1 from 2 to 3. Move disk 3 from 1 to 2. Move disk 1 from 3 to 1. Move disk 2 from 3 to 2. Move disk 1 from 1 to 2. Move disk 4 from 1 to 3. Move disk 1 from 2 to 3. Move disk 2 from 2 to 1. Move disk 1 from 3 to 1. Move disk 3 from 2 to 3. Move disk 1 from 1 to 2. Move disk 2 from 1 to 3. Move disk 1 from 2 to 3. I would appreciate your opinion and suggestions related to: MATLAB / Octave coding style and readability. thoughts on / possible improvements of the algorithm. 1. The observations were that firstly: the sequence of transitions could be described by a slightly modified formula: "Binary Carry Sequence" and secondly: individual disk transitions are following only two different cyclic paths which were based on the parity of the total number of the disks, N, and the parity of the number of the currently moving disk, i.e: Answer: Wow. Some legible matlab code. I'm impressed. Too often matlab seems to be a "write only" language, in the sense that perl regex line noise or Iverson's APL can be write only. No biggie, but I wouldn't mind seeing consistent comment formatting where (N, from, to, alt) appear in the left margin in each of four separate lines. Kudos for telling us about the args, anyway. 
In the matlab ecosystem this is maybe redundant, but speaking for myself I wouldn't mind seeing a reminder that there's no zero-origin going on here, by mentioning from > 0 or something. Saying it once would suffice - to & alt would clearly use the same convention. I see that later you spell this out in the "avoid index = 0" comment. typo: Returs Kudos on helpfully explaining that 2^N-1 is totalNumberOfMoves. Your figure was helpful. The "from -> alt -> to" comment is on the redundant side. Would you do the Gentle Reader a small favor, please, and bump the currentPositions and len assignments down slightly? Just a few lines. That way we have a full-line comment on the "odd" case, setting up dramatic tension for "what about even?", and the swapArrays immediately shows the even case. Switching from j to k for the path2 case was maybe a little odd. Wouldn't hurt to stick with j, as we always assign it a value at top of loop. Switching to k made me wonder if variable value needs to survive until some subsequent iteration. Renaming deal to swapArrays made sense, thank you. Comment for generateDiskMoves is very nice. Except I'd delete that "m is discarded" remark, as that's not part of the public API. Personally I view the comment for M .+ 1 as "% Convert to one-origin moves." The "Get only the first move" comment is accurate and helpful, but consider something stronger: "% The trivial case requires just one move." Omega2 accepts lowercase n. Consider using lowercase in the other functions. I had been thinking of upper as matrix and lower as scalar. I wouldn't mind seeing a comment that spells out whether disk 1 is smallest or biggest disk. As far as the algorithm goes, if results of a parity function were available, could you verify, or synthesize, the Omega2 results? Perhaps with less looping?
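To make that last point concrete: the Omega2 table is the ruler sequence, and each term can be synthesized with no looping at all from the lowest set bit of the move number. Sketched here in Python for brevity (the same bit trick is available through Octave's bit operations; the function names are mine):

```python
def ruler(k):
    """Exponent of the largest power of 2 dividing k (OEIS A007814),
    computed in O(1) from the lowest set bit instead of a recurrence."""
    return (k & -k).bit_length() - 1

def disk_moves(n_moves):
    """Disk moved on each step of Towers of Hanoi, one-origin disk numbers."""
    return [ruler(k) + 1 for k in range(1, n_moves + 1)]

print(disk_moves(15))  # [1, 2, 1, 3, 1, 2, 1, 4, 1, 2, 1, 3, 1, 2, 1]
```

The output matches the disk numbers in the question's sample run for N = 4, which is one way to verify the Omega2 implementation against an independent formula.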
{ "domain": "codereview.stackexchange", "id": 27336, "tags": "algorithm, matlab, tower-of-hanoi, octave" }
Normalizable wave functions?
Question: How can I test whether a wave function is normalizable? If you apply an operator to a wave function, sometimes the result will not be normalizable. But how can I find these wave functions that do not correspond to normalizable eigenstates of this operator? The Hamilton operator for the harmonic oscillator, I am told, has both a discrete and a continuous spectrum. The discrete spectrum is the eigenvalues, but how do I find the continuous spectrum? It's relevant because I am also told these are the non-normalizable values of the spectrum. Answer: You test a wave function for normalizability by integrating its square magnitude. If you get a finite result then it is normalizable. To spare you complicated integrations you can also take a simpler wave function that you know is normalizable and compare it using the usual arguments. An operator is not only defined by the mathematical operation it performs, but also by which space it acts on. The Hilbert space of square integrable functions is where quantum operators act. So by definition they take a square integrable function and give you a square integrable function. It can happen though that such an operator has eigenfunctions that are not in the set of square integrable functions. For some of these cases consistency requires that we extend the Hilbert space (or its dual) we work with. The eigenfunctions of the position and momentum operators fall into this category. See the Gelfand construction or rigged Hilbert space for details. The Hamiltonian of the harmonic oscillator as an operator on the quantum Hilbert space has only a discrete spectrum. If you define it to be an operator on a more general space that admits functions that are not square integrable then the spectrum may in fact be continuous. This is the perfect example for why an operator must always be stated with the space it acts on.
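The test in the first paragraph can also be run numerically: truncate the integral at ±L and watch whether it converges as L grows. A sketch comparing a Gaussian (bound-state-like) with a plane wave:

```python
import math

def norm_squared(psi, L, n=100_000):
    """Midpoint-rule approximation of the integral of |psi(x)|^2 over [-L, L]."""
    dx = 2 * L / n
    return sum(abs(psi(-L + (i + 0.5) * dx)) ** 2 for i in range(n)) * dx

gaussian = lambda x: math.exp(-x * x)    # normalizable: integral converges
plane_wave = lambda x: math.cos(5 * x)   # not normalizable: |psi|^2 averages 1/2

for L in (5, 50, 500):
    print(L, round(norm_squared(gaussian, L), 4), round(norm_squared(plane_wave, L), 1))
# The Gaussian column settles at sqrt(pi/2) ~ 1.2533 for every L;
# the plane-wave column keeps growing roughly like L, i.e. there is no finite norm.
```

This mirrors the physics: plane waves are eigenfunctions of momentum that live outside the Hilbert space of square-integrable functions, which is exactly the rigged-Hilbert-space situation the answer describes.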
{ "domain": "physics.stackexchange", "id": 4924, "tags": "quantum-mechanics, normalization" }
Whitecapping in ocean surface waves
Question: Even though the physics of wave breaking for ocean surface waves may not be well understood, what wave breaking is and what it looks like is no mystery to the average beach-goer. However, I am confused as to what "whitecapping" is, in the context of the "whitecapping dissipation" of waves that appears in the literature. What is whitecapping? Is it the white "stuff" that is created when a wave breaks? If so, what causes the white "stuff"? Is it the same as "foam", a term that is found in the literature as well? Are the terms "whitecapping" and "wave breaking" used synonymously? Are they in fact one and the same? Relevant references would also be appreciated. Answer: Whitecapping refers to the steepness-induced wave dissipation in deep water during which some air is entrained into the near-surface water, forming an emulsion of water and air bubbles (foam) that appears white. It occurs when the velocity of individual water particles near the wave crest exceeds the phase speed of the wave, causing the front face of the wave to become too steep and "break". Steeper and more vigorous breakers are more efficient at entraining air bubbles into the water column, and they generate more foam. Whitecapping is an essential process for air-sea gas exchange. In the wave modeling community, because current spectral wave prediction models do not resolve air entrainment by the breaking waves, we use the term "whitecapping" to describe all steepness-induced deep-water wave dissipation. For example, see Hasselmann (1974). Waves also break due to shoaling as they enter shallow water, and these are usually referred to as plunging breakers. While plunging breakers without doubt generate a lot of foam, we don't use the term whitecapping to describe them. Thus, whitecapping is a specific kind of wave breaking, as described in the first paragraph. See a comprehensive review by Cavaleri et al. (2007) for a description of all the processes represented by current wave models.
{ "domain": "earthscience.stackexchange", "id": 488, "tags": "ocean, waves, wave-modeling" }
Installing Sibelia for Ragout on Mac OSX
Question: I am trying to use Ragout: https://github.com/fenderglass/Ragout to fill the gaps in my de novo genome assembly. You can access the article freely here: https://www.ncbi.nlm.nih.gov/pubmed/24931998 For this, I first need to install Sibelia. I tried doing so by cloning the GitHub repository into output/software/ragout/ and then running the following command: git clone https://github.com/fenderglass/Ragout output/software/ragout python2 output/software/ragout/scripts/install-sibelia.py Installing Sibelia Downloading source... -- The CXX compiler identification is AppleClang 9.0.0.9000037 -- Check for working CXX compiler: /Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/c++ -- Check for working CXX compiler: /Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/c++ -- works -- Detecting CXX compiler ABI info -- Detecting CXX compiler ABI info - done -- Detecting CXX compile features -- Detecting CXX compile features - done -- The C compiler identification is AppleClang 9.0.0.9000037 -- Check for working C compiler: /Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/cc -- Check for working C compiler: /Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/cc -- works -- Detecting C compiler ABI info -- Detecting C compiler ABI info - done -- Detecting C compile features -- Detecting C compile features - done -- Performing Test HAVE_UNKNOWN_WALL -- Performing Test HAVE_UNKNOWN_WALL - Success -- Performing Test HAVE_UNKNOWN_FOMIT_FRAME_POINTER -- Performing Test HAVE_UNKNOWN_FOMIT_FRAME_POINTER - Success -- Looking for inttypes.h -- Looking for inttypes.h - found -- Looking for memory.h -- Looking for memory.h - found -- Looking for stddef.h -- Looking for stddef.h - found -- Looking for stdint.h -- Looking for stdint.h - found -- Looking for stdlib.h -- Looking for stdlib.h - found -- Looking for string.h -- Looking for 
string.h - found -- Looking for strings.h -- Looking for strings.h - found -- Looking for sys/types.h -- Looking for sys/types.h - found -- Performing Test HAVE_INLINE -- Performing Test HAVE_INLINE - Success -- Performing Test HAVE___INLINE -- Performing Test HAVE___INLINE - Success -- Performing Test HAVE___INLINE__ -- Performing Test HAVE___INLINE__ - Success -- Performing Test HAVE___DECLSPEC_DLLEXPORT_ -- Performing Test HAVE___DECLSPEC_DLLEXPORT_ - Failed -- Performing Test HAVE___DECLSPEC_DLLIMPORT_ -- Performing Test HAVE___DECLSPEC_DLLIMPORT_ - Failed -- Check size of uint8_t -- Check size of uint8_t - done -- Check size of int32_t -- Check size of int32_t - done -- Looking for PRId32 -- Looking for PRId32 - found -- Configuring done -- Generating done -- Build files have been written to: /Users/cr517/Documents/phd/project/sibelia-build/Sibelia-master/build Scanning dependencies of target divsufsort [ 4%] Building C object libdivsufsort-2.0.1/lib/CMakeFiles/divsufsort.dir/divsufsort.o [ 8%] Building C object libdivsufsort-2.0.1/lib/CMakeFiles/divsufsort.dir/sssort.o [ 12%] Building C object libdivsufsort-2.0.1/lib/CMakeFiles/divsufsort.dir/trsort.o [ 16%] Building C object libdivsufsort-2.0.1/lib/CMakeFiles/divsufsort.dir/utils.o [ 20%] Linking C static library libdivsufsort.a [ 20%] Built target divsufsort Scanning dependencies of target Sibelia [ 24%] Building CXX object CMakeFiles/Sibelia.dir/sibelia.cpp.o In file included from /Users/cr517/Documents/phd/project/sibelia-build/Sibelia-master/src/sibelia.cpp:8: In file included from /Users/cr517/Documents/phd/project/sibelia-build/Sibelia-master/src/postprocessor.h:10: In file included from /Users/cr517/Documents/phd/project/sibelia-build/Sibelia-master/src/include/seqan/align.h:43: In file included from /Users/cr517/Documents/phd/project/sibelia-build/Sibelia-master/src/include/seqan/sequence.h:103: 
/Users/cr517/Documents/phd/project/sibelia-build/Sibelia-master/src/include/seqan/sequence/string_set_dependent_generous.h:38:9: warning: 'SEQAN_SEQUENCE_STRING_SET_DEPENDENT_GENEROUS_H_' is used as a header guard here, followed by #define of a different macro [-Wheader-guard] #ifndef SEQAN_SEQUENCE_STRING_SET_DEPENDENT_GENEROUS_H_ ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ /Users/cr517/Documents/phd/project/sibelia-build/Sibelia-master/src/include/seqan/sequence/string_set_dependent_generous.h:39:9: note: 'SEQAN_SEQUENCE_STRING_SET_DEPENDENT_GENEROUSH_' is defined here; did you mean 'SEQAN_SEQUENCE_STRING_SET_DEPENDENT_GENEROUS_H_'? #define SEQAN_SEQUENCE_STRING_SET_DEPENDENT_GENEROUSH_ ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ SEQAN_SEQUENCE_STRING_SET_DEPENDENT_GENEROUS_H_ In file included from /Users/cr517/Documents/phd/project/sibelia-build/Sibelia-master/src/sibelia.cpp:8: In file included from /Users/cr517/Documents/phd/project/sibelia-build/Sibelia-master/src/postprocessor.h:10: In file included from /Users/cr517/Documents/phd/project/sibelia-build/Sibelia-master/src/include/seqan/align.h:44: In file included from /Users/cr517/Documents/phd/project/sibelia-build/Sibelia-master/src/include/seqan/score.h:48: In file included from /Users/cr517/Documents/phd/project/sibelia-build/Sibelia-master/src/include/seqan/score/score_matrix.h:40: In file included from /Users/cr517/Documents/phd/project/sibelia-build/Sibelia-master/src/include/seqan/file.h:70: In file included from /Users/cr517/Documents/phd/project/sibelia-build/Sibelia-master/src/include/seqan/system.h:82: /Users/cr517/Documents/phd/project/sibelia-build/Sibelia-master/src/include/seqan/system/system_sema.h:120:27: warning: 'sem_init' is deprecated [-Wdeprecated-declarations] SEQAN_DO_SYS(!sem_init(hSemaphore, 0, init)); ^ /usr/include/sys/semaphore.h:55:42: note: 'sem_init' has been explicitly marked deprecated here int sem_init(sem_t *, int, unsigned int) __deprecated; ^ 
/usr/include/sys/cdefs.h:176:37: note: expanded from macro '__deprecated' #define __deprecated __attribute__((deprecated)) ^ In file included from /Users/cr517/Documents/phd/project/sibelia-build/Sibelia-master/src/sibelia.cpp:8: In file included from /Users/cr517/Documents/phd/project/sibelia-build/Sibelia-master/src/postprocessor.h:10: In file included from /Users/cr517/Documents/phd/project/sibelia-build/Sibelia-master/src/include/seqan/align.h:44: In file included from /Users/cr517/Documents/phd/project/sibelia-build/Sibelia-master/src/include/seqan/score.h:48: In file included from /Users/cr517/Documents/phd/project/sibelia-build/Sibelia-master/src/include/seqan/score/score_matrix.h:40: In file included from /Users/cr517/Documents/phd/project/sibelia-build/Sibelia-master/src/include/seqan/file.h:70: In file included from /Users/cr517/Documents/phd/project/sibelia-build/Sibelia-master/src/include/seqan/system.h:82: /Users/cr517/Documents/phd/project/sibelia-build/Sibelia-master/src/include/seqan/system/system_sema.h:124:27: warning: 'sem_destroy' is deprecated [-Wdeprecated-declarations] SEQAN_DO_SYS(!sem_destroy(hSemaphore)); ^ /usr/include/sys/semaphore.h:53:26: note: 'sem_destroy' has been explicitly marked deprecated here int sem_destroy(sem_t *) __deprecated; ^ /usr/include/sys/cdefs.h:176:37: note: expanded from macro '__deprecated' #define __deprecated __attribute__((deprecated)) ^ 3 warnings generated. 
[ 28%] Building CXX object CMakeFiles/Sibelia.dir/postprocessor.cpp.o In file included from /Users/cr517/Documents/phd/project/sibelia-build/Sibelia-master/src/postprocessor.cpp:9: In file included from /Users/cr517/Documents/phd/project/sibelia-build/Sibelia-master/src/postprocessor.h:10: In file included from /Users/cr517/Documents/phd/project/sibelia-build/Sibelia-master/src/include/seqan/align.h:43: In file included from /Users/cr517/Documents/phd/project/sibelia-build/Sibelia-master/src/include/seqan/sequence.h:103: /Users/cr517/Documents/phd/project/sibelia-build/Sibelia-master/src/include/seqan/sequence/string_set_dependent_generous.h:38:9: warning: 'SEQAN_SEQUENCE_STRING_SET_DEPENDENT_GENEROUS_H_' is used as a header guard here, followed by #define of a different macro [-Wheader-guard] #ifndef SEQAN_SEQUENCE_STRING_SET_DEPENDENT_GENEROUS_H_ ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ /Users/cr517/Documents/phd/project/sibelia-build/Sibelia-master/src/include/seqan/sequence/string_set_dependent_generous.h:39:9: note: 'SEQAN_SEQUENCE_STRING_SET_DEPENDENT_GENEROUSH_' is defined here; did you mean 'SEQAN_SEQUENCE_STRING_SET_DEPENDENT_GENEROUS_H_'? 
#define SEQAN_SEQUENCE_STRING_SET_DEPENDENT_GENEROUSH_ ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ SEQAN_SEQUENCE_STRING_SET_DEPENDENT_GENEROUS_H_ In file included from /Users/cr517/Documents/phd/project/sibelia-build/Sibelia-master/src/postprocessor.cpp:9: In file included from /Users/cr517/Documents/phd/project/sibelia-build/Sibelia-master/src/postprocessor.h:10: In file included from /Users/cr517/Documents/phd/project/sibelia-build/Sibelia-master/src/include/seqan/align.h:44: In file included from /Users/cr517/Documents/phd/project/sibelia-build/Sibelia-master/src/include/seqan/score.h:48: In file included from /Users/cr517/Documents/phd/project/sibelia-build/Sibelia-master/src/include/seqan/score/score_matrix.h:40: In file included from /Users/cr517/Documents/phd/project/sibelia-build/Sibelia-master/src/include/seqan/file.h:70: In file included from /Users/cr517/Documents/phd/project/sibelia-build/Sibelia-master/src/include/seqan/system.h:82: /Users/cr517/Documents/phd/project/sibelia-build/Sibelia-master/src/include/seqan/system/system_sema.h:120:27: warning: 'sem_init' is deprecated [-Wdeprecated-declarations] SEQAN_DO_SYS(!sem_init(hSemaphore, 0, init)); ^ /usr/include/sys/semaphore.h:55:42: note: 'sem_init' has been explicitly marked deprecated here int sem_init(sem_t *, int, unsigned int) __deprecated; ^ /usr/include/sys/cdefs.h:176:37: note: expanded from macro '__deprecated' #define __deprecated __attribute__((deprecated)) ^ In file included from /Users/cr517/Documents/phd/project/sibelia-build/Sibelia-master/src/postprocessor.cpp:9: In file included from /Users/cr517/Documents/phd/project/sibelia-build/Sibelia-master/src/postprocessor.h:10: In file included from /Users/cr517/Documents/phd/project/sibelia-build/Sibelia-master/src/include/seqan/align.h:44: In file included from /Users/cr517/Documents/phd/project/sibelia-build/Sibelia-master/src/include/seqan/score.h:48: In file included from 
/Users/cr517/Documents/phd/project/sibelia-build/Sibelia-master/src/include/seqan/score/score_matrix.h:40: In file included from /Users/cr517/Documents/phd/project/sibelia-build/Sibelia-master/src/include/seqan/file.h:70: In file included from /Users/cr517/Documents/phd/project/sibelia-build/Sibelia-master/src/include/seqan/system.h:82: /Users/cr517/Documents/phd/project/sibelia-build/Sibelia-master/src/include/seqan/system/system_sema.h:124:27: warning: 'sem_destroy' is deprecated [-Wdeprecated-declarations] SEQAN_DO_SYS(!sem_destroy(hSemaphore)); ^ /usr/include/sys/semaphore.h:53:26: note: 'sem_destroy' has been explicitly marked deprecated here int sem_destroy(sem_t *) __deprecated; ^ /usr/include/sys/cdefs.h:176:37: note: expanded from macro '__deprecated' #define __deprecated __attribute__((deprecated)) ^ 3 warnings generated. [ 32%] Building CXX object CMakeFiles/Sibelia.dir/indexedsequence.cpp.o [ 36%] Building CXX object CMakeFiles/Sibelia.dir/util.cpp.o [ 40%] Building CXX object CMakeFiles/Sibelia.dir/outputgenerator.cpp.o [ 44%] Building CXX object CMakeFiles/Sibelia.dir/blockfinder.cpp.o [ 48%] Building CXX object CMakeFiles/Sibelia.dir/blockinstance.cpp.o [ 52%] Building CXX object CMakeFiles/Sibelia.dir/bifurcationstorage.cpp.o [ 56%] Building CXX object CMakeFiles/Sibelia.dir/bulgeremoval.cpp.o [ 60%] Building CXX object CMakeFiles/Sibelia.dir/dnasequence.cpp.o [ 64%] Building CXX object CMakeFiles/Sibelia.dir/edge.cpp.o [ 68%] Building CXX object CMakeFiles/Sibelia.dir/fasta.cpp.o [ 72%] Building CXX object CMakeFiles/Sibelia.dir/serialization.cpp.o [ 76%] Building CXX object CMakeFiles/Sibelia.dir/synteny.cpp.o [ 80%] Building CXX object CMakeFiles/Sibelia.dir/test/unrolledlisttest.cpp.o [ 84%] Building CXX object CMakeFiles/Sibelia.dir/platform.cpp.o [ 88%] Building CXX object CMakeFiles/Sibelia.dir/stranditerator.cpp.o [ 92%] Building CXX object CMakeFiles/Sibelia.dir/vertexenumeration.cpp.o [ 96%] Building CXX object 
CMakeFiles/Sibelia.dir/resource.cpp.o [100%] Linking CXX executable Sibelia [100%] Built target Sibelia [ 20%] Built target divsufsort [100%] Built target Sibelia Install the project... -- Install configuration: "Release" -- Installing: /Users/cr517/Documents/phd/project/sibelia-build/bin/Sibelia -- Installing: /Users/cr517/Documents/phd/project/sibelia-build/share/Sibelia/doc/NEWS.md -- Installing: /Users/cr517/Documents/phd/project/sibelia-build/share/Sibelia/doc/ANNOTATION.md -- Installing: /Users/cr517/Documents/phd/project/sibelia-build/share/Sibelia/doc/README.md -- Installing: /Users/cr517/Documents/phd/project/sibelia-build/share/Sibelia/doc/USAGE.md -- Installing: /Users/cr517/Documents/phd/project/sibelia-build/share/Sibelia/doc/INSTALL.md -- Installing: /Users/cr517/Documents/phd/project/sibelia-build/share/Sibelia/doc/SIBELIA.md -- Installing: /Users/cr517/Documents/phd/project/sibelia-build/share/Sibelia/doc/C-SIBELIA.md -- Installing: /Users/cr517/Documents/phd/project/sibelia-build/share/Sibelia/doc/LICENSE.txt -- Installing: /Users/cr517/Documents/phd/project/sibelia-build/share/Sibelia/doc/examples -- Installing: /Users/cr517/Documents/phd/project/sibelia-build/share/Sibelia/doc/examples/C-Sibelia -- Installing: /Users/cr517/Documents/phd/project/sibelia-build/share/Sibelia/doc/examples/C-Sibelia/Staphylococcus_aureus -- Installing: /Users/cr517/Documents/phd/project/sibelia-build/share/Sibelia/doc/examples/C-Sibelia/Staphylococcus_aureus/NCTC8325.fasta -- Installing: /Users/cr517/Documents/phd/project/sibelia-build/share/Sibelia/doc/examples/C-Sibelia/Staphylococcus_aureus/README.txt -- Installing: /Users/cr517/Documents/phd/project/sibelia-build/share/Sibelia/doc/examples/C-Sibelia/Staphylococcus_aureus/RN4220.fasta -- Installing: /Users/cr517/Documents/phd/project/sibelia-build/share/Sibelia/doc/examples/C-Sibelia/Staphylococcus_aureus/variant.vcf -- Installing: /Users/cr517/Documents/phd/project/sibelia-build/share/Sibelia/doc/examples/Sibelia 
-- Installing: /Users/cr517/Documents/phd/project/sibelia-build/share/Sibelia/doc/examples/Sibelia/Helicobacter_pylori -- Installing: /Users/cr517/Documents/phd/project/sibelia-build/share/Sibelia/doc/examples/Sibelia/Helicobacter_pylori/blocks_coords.txt -- Installing: /Users/cr517/Documents/phd/project/sibelia-build/share/Sibelia/doc/examples/Sibelia/Helicobacter_pylori/circos -- Installing: /Users/cr517/Documents/phd/project/sibelia-build/share/Sibelia/doc/examples/Sibelia/Helicobacter_pylori/circos/circos.conf -- Installing: /Users/cr517/Documents/phd/project/sibelia-build/share/Sibelia/doc/examples/Sibelia/Helicobacter_pylori/circos/circos.highlight.txt -- Installing: /Users/cr517/Documents/phd/project/sibelia-build/share/Sibelia/doc/examples/Sibelia/Helicobacter_pylori/circos/circos.highlight1.txt -- Installing: /Users/cr517/Documents/phd/project/sibelia-build/share/Sibelia/doc/examples/Sibelia/Helicobacter_pylori/circos/circos.highlight2.txt -- Installing: /Users/cr517/Documents/phd/project/sibelia-build/share/Sibelia/doc/examples/Sibelia/Helicobacter_pylori/circos/circos.highlight3.txt -- Installing: /Users/cr517/Documents/phd/project/sibelia-build/share/Sibelia/doc/examples/Sibelia/Helicobacter_pylori/circos/circos.highlight4.txt -- Installing: /Users/cr517/Documents/phd/project/sibelia-build/share/Sibelia/doc/examples/Sibelia/Helicobacter_pylori/circos/circos.image.conf -- Installing: /Users/cr517/Documents/phd/project/sibelia-build/share/Sibelia/doc/examples/Sibelia/Helicobacter_pylori/circos/circos.png -- Installing: /Users/cr517/Documents/phd/project/sibelia-build/share/Sibelia/doc/examples/Sibelia/Helicobacter_pylori/circos/circos.segdup.txt -- Installing: /Users/cr517/Documents/phd/project/sibelia-build/share/Sibelia/doc/examples/Sibelia/Helicobacter_pylori/circos/circos.sequences.txt -- Installing: /Users/cr517/Documents/phd/project/sibelia-build/share/Sibelia/doc/examples/Sibelia/Helicobacter_pylori/circos/circos.svg -- Installing: 
/Users/cr517/Documents/phd/project/sibelia-build/share/Sibelia/doc/examples/Sibelia/Helicobacter_pylori/coverage_report.txt -- Installing: /Users/cr517/Documents/phd/project/sibelia-build/share/Sibelia/doc/examples/Sibelia/Helicobacter_pylori/d3_blocks_diagram.html -- Installing: /Users/cr517/Documents/phd/project/sibelia-build/share/Sibelia/doc/examples/Sibelia/Helicobacter_pylori/genomes_permutations.txt -- Installing: /Users/cr517/Documents/phd/project/sibelia-build/share/Sibelia/doc/examples/Sibelia/Helicobacter_pylori/Helicobacter_pylori.fasta -- Installing: /Users/cr517/Documents/phd/project/sibelia-build/share/Sibelia/doc/examples/Sibelia/Helicobacter_pylori/README.txt -- Installing: /Users/cr517/Documents/phd/project/sibelia-build/share/Sibelia/doc/examples/Sibelia/Staphylococcus_aureus -- Installing: /Users/cr517/Documents/phd/project/sibelia-build/share/Sibelia/doc/examples/Sibelia/Staphylococcus_aureus/blocks_coords.txt -- Installing: /Users/cr517/Documents/phd/project/sibelia-build/share/Sibelia/doc/examples/Sibelia/Staphylococcus_aureus/circos -- Installing: /Users/cr517/Documents/phd/project/sibelia-build/share/Sibelia/doc/examples/Sibelia/Staphylococcus_aureus/circos/circos.conf -- Installing: /Users/cr517/Documents/phd/project/sibelia-build/share/Sibelia/doc/examples/Sibelia/Staphylococcus_aureus/circos/circos.highlight.txt -- Installing: /Users/cr517/Documents/phd/project/sibelia-build/share/Sibelia/doc/examples/Sibelia/Staphylococcus_aureus/circos/circos.image.conf -- Installing: /Users/cr517/Documents/phd/project/sibelia-build/share/Sibelia/doc/examples/Sibelia/Staphylococcus_aureus/circos/circos.png -- Installing: /Users/cr517/Documents/phd/project/sibelia-build/share/Sibelia/doc/examples/Sibelia/Staphylococcus_aureus/circos/circos.segdup.txt -- Installing: /Users/cr517/Documents/phd/project/sibelia-build/share/Sibelia/doc/examples/Sibelia/Staphylococcus_aureus/circos/circos.sequences.txt -- Installing: 
/Users/cr517/Documents/phd/project/sibelia-build/share/Sibelia/doc/examples/Sibelia/Staphylococcus_aureus/circos/circos.svg -- Installing: /Users/cr517/Documents/phd/project/sibelia-build/share/Sibelia/doc/examples/Sibelia/Staphylococcus_aureus/coverage_report.txt -- Installing: /Users/cr517/Documents/phd/project/sibelia-build/share/Sibelia/doc/examples/Sibelia/Staphylococcus_aureus/d3_blocks_diagram.html -- Installing: /Users/cr517/Documents/phd/project/sibelia-build/share/Sibelia/doc/examples/Sibelia/Staphylococcus_aureus/genomes_permutations.txt -- Installing: /Users/cr517/Documents/phd/project/sibelia-build/share/Sibelia/doc/examples/Sibelia/Staphylococcus_aureus/README.txt -- Installing: /Users/cr517/Documents/phd/project/sibelia-build/share/Sibelia/doc/examples/Sibelia/Staphylococcus_aureus/Staphylococcus.fasta Traceback (most recent call last): File "output/software/ragout/scripts/install-sibelia.py", line 111, in <module> sys.exit(main()) File "output/software/ragout/scripts/install-sibelia.py", line 107, in main return int(not install_deps(args.prefix)) File "output/software/ragout/scripts/install-sibelia.py", line 31, in install_deps return install_sibelia(prefix) File "output/software/ragout/scripts/install-sibelia.py", line 64, in install_sibelia shutil.copy(sibelia_bin_src, sibelia_bin_dst) File "/usr/local/Cellar/python/2.7.13/Frameworks/Python.framework/Versions/2.7/lib/python2.7/shutil.py", line 119, in copy copyfile(src, dst) File "/usr/local/Cellar/python/2.7.13/Frameworks/Python.framework/Versions/2.7/lib/python2.7/shutil.py", line 83, in copyfile with open(dst, 'wb') as fdst: IOError: [Errno 2] No such file or directory: '/Users/cr517/Documents/phd/project/lib/Sibelia' As you can see, I get a lot of errors. Could someone help me, please? 
I also tried to install Sibelia following their instructions here: https://github.com/bioinf/Sibelia cd Sibelia/build ➜ build git:(master) cmake ../src -- The CXX compiler identification is AppleClang 9.0.0.9000037 -- Check for working CXX compiler: /Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/c++ -- Check for working CXX compiler: /Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/c++ -- works -- Detecting CXX compiler ABI info -- Detecting CXX compiler ABI info - done -- Detecting CXX compile features -- Detecting CXX compile features - done -- The C compiler identification is AppleClang 9.0.0.9000037 -- Check for working C compiler: /Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/cc -- Check for working C compiler: /Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/cc -- works -- Detecting C compiler ABI info -- Detecting C compiler ABI info - done -- Detecting C compile features -- Detecting C compile features - done -- Performing Test HAVE_UNKNOWN_WALL -- Performing Test HAVE_UNKNOWN_WALL - Success -- Performing Test HAVE_UNKNOWN_FOMIT_FRAME_POINTER -- Performing Test HAVE_UNKNOWN_FOMIT_FRAME_POINTER - Success -- Looking for inttypes.h -- Looking for inttypes.h - found -- Looking for memory.h -- Looking for memory.h - found -- Looking for stddef.h -- Looking for stddef.h - found -- Looking for stdint.h -- Looking for stdint.h - found -- Looking for stdlib.h -- Looking for stdlib.h - found -- Looking for string.h -- Looking for string.h - found -- Looking for strings.h -- Looking for strings.h - found -- Looking for sys/types.h -- Looking for sys/types.h - found -- Performing Test HAVE_INLINE -- Performing Test HAVE_INLINE - Success -- Performing Test HAVE___INLINE -- Performing Test HAVE___INLINE - Success -- Performing Test HAVE___INLINE__ -- Performing Test HAVE___INLINE__ - Success -- Performing Test 
HAVE___DECLSPEC_DLLEXPORT_ -- Performing Test HAVE___DECLSPEC_DLLEXPORT_ - Failed -- Performing Test HAVE___DECLSPEC_DLLIMPORT_ -- Performing Test HAVE___DECLSPEC_DLLIMPORT_ - Failed -- Check size of uint8_t -- Check size of uint8_t - done -- Check size of int32_t -- Check size of int32_t - done -- Looking for PRId32 -- Looking for PRId32 - found -- Configuring done -- Generating done -- Build files have been written to: /Users/cr517/Documents/phd/project/Sibelia/build ➜ build git:(master) make Scanning dependencies of target lagan [ 3%] Creating directories for 'lagan' [ 6%] Performing download step (DIR copy) for 'lagan' [ 9%] No patch step for 'lagan' [ 12%] No update step for 'lagan' [ 15%] No configure step for 'lagan' [ 18%] Performing build step for 'lagan' rightinfluence.cpp:21:2: error: reference to 'end' is ambiguous end.score = -2; ^ rightinfluence.cpp:3:18: note: candidate found by name lookup is 'end' Fragment origin, end; ^ /Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/include/c++/v1/iterator:1768:1: note: candidate found by name lookup is 'std::__1::end' end(const _Cp& __c) ^ /Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/include/c++/v1/iterator:1611:1: note: candidate found by name lookup is 'std::__1::end' end(_Tp (&__array)[_Np]) ^ /Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/include/c++/v1/iterator:1760:1: note: candidate found by name lookup is 'std::__1::end' end(_Cp& __c) ^ rightinfluence.cpp:22:22: error: reference to 'end' is ambiguous origin.totalScore = end.totalScore = 0; ^ rightinfluence.cpp:3:18: note: candidate found by name lookup is 'end' Fragment origin, end; ^ /Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/include/c++/v1/iterator:1768:1: note: ... Answer: There is a link to compiled binaries for OSX on the project homepage
{ "domain": "bioinformatics.stackexchange", "id": 253, "tags": "genomics, scaffold" }
Spin-orbit coupling, degeneracy of eigenvalues
Question: I just read in a book about atomic physics that an important part of the fine structure of hydrogen is spin-orbit coupling. The Hamiltonian of spin-orbit coupling in the hydrogen atom is given by $$H_{SO} = \beta L\cdot S = \frac{\beta}{2}\left(J^2-L^2-S^2\right),$$ where $L$ is the orbital angular momentum operator, $S$ the spin operator and $J = L + S$. I want to determine the eigenvalues and degeneracies of $H_{SO}$ and the possible values for the quantum number $j$ of $J$ because the book and other sources just tell me that and don't derive it. This is what I've done so far: Since $[J^2,H]=[L^2,H]=[S^2,H]=[J_z,H]=0$, let $\psi$ be an eigenstate of $H, J^2, L^2, S^2$ and $J_z$. So we get $$ H_{SO}\psi = \frac{\hbar^2\beta}{2}\left(j(j+1)-l(l+1)-s(s+1)\right)\psi$$ and the eigenvalues of $H_{SO}$ are therefore given by $\alpha_{j,l,s} = \frac{\hbar^2\beta}{2}(j(j+1)-l(l+1)-s(s+1))$. What I'm struggling with is the degeneracy of the eigenvalues and how to determine the possible values for $j$. Can anybody help? Answer: This is basically a problem of recursive counting. Start with the uncoupled basis states, i.e. the set of states of the form $\vert \ell m_\ell \rangle \vert s m_s\rangle$. There are clearly $(2\ell+1)(2s+1)$ of these, and the job is to reorganize them. The key counting result is based on the observation that $\vert \ell m_\ell\rangle\vert sm_s\rangle$ is an eigenstate of $\hat J_z=\hat L_z+\hat S_z$ with eigenvalue $M=m_\ell+m_s$. With this in mind, organize your $\vert \ell m_\ell\rangle\vert sm_s\rangle$ states so that those with the same value of $M$ are on the same line.
Explicitly, for instance, you would have \begin{align} \begin{array}{rlll} M=\ell+s:&\vert \ell \ell\rangle\vert s s\rangle \\ M=\ell+s-1:& \vert \ell,\ell-1\rangle\vert ss\rangle& \vert\ell\ell\rangle \vert s,s-1\rangle\\ M=\ell+s-2:&\vert \ell,\ell-2\rangle \vert ss\rangle & \vert\ell,\ell-1\rangle\vert s,s-1\rangle&\vert \ell \ell\rangle \vert s,s-2\rangle\\ \vdots\qquad& \qquad\vdots \end{array} \end{align} and replace each state with a $\bullet$ to get $$ \begin{array}{rlll} \ell+s:&\bullet \\ \ell+s-1:&\bullet & \bullet\\ \ell+s-2:&\bullet&\bullet&\bullet\\ \vdots\qquad&\vdots \end{array} $$ Now, if $M=\ell+s$ is the largest value and it occurs once, the value of $j=\ell+s$ must occur once and also all the states $\vert j=\ell+s,m_j\rangle$ will occur once. There is a linear combination of the two states with $M=\ell+s-1$ that will be the state $\vert j=\ell+s,m_j=\ell+s-1\rangle$, there will be a linear combination of the three states with $M=\ell+s-2$ that will be the $\vert j=\ell+s,m_j=\ell+s-2\rangle$ state, etc. Since we are only interested in enumerating the possible resulting values of $j$, and not in the actual states per se, we can eliminate from our table the first column since it contains one state with $m_j=\ell+s$, one with $m_j=\ell+s-1$, etc. Eliminating this column yields the reduced table $$ \begin{array}{rll} \ell+s-1: & \bullet\\ \ell+s-2:&\bullet&\bullet\\ \vdots\qquad&\vdots \end{array} $$ Since the value of $m_j=\ell+s-1$ occurs once, the value $j=\ell+s-1$ must occur once, and the states $\vert \ell+s-1,m_j\rangle$ will each occur once. We take those out of the list by deleting the first column to obtain a further reduced table $$ \begin{array}{rl} \ell+s-2:&\bullet\\ \vdots\qquad &\vdots \end{array} $$ The process continues in this way until exhaustion. In the examples above we have found $j=\ell+s,\ell+s-1$, and the final reduced table of the example, if not empty, would indicate the value $j=\ell+s-2$.
It is clear that this process produces a decreasing sequence of $j$. The last value of $j$ is determined by the width of the original table. It is not hard to convince yourself that the width of the table will stop increasing once we reach $M=\vert \ell-s\vert $, and this is the last value of $j$. Thus by exhaustion you find the possible values of $j$ in the range $$ \vert \ell-s\vert\le j\le \ell+s\, . $$ As an example consider $\ell=1$ and $s=1/2$. The original table then looks like $$ \begin{array}{rlll} \frac{3}{2}:&\vert 11\rangle\vert 1/2,1/2\rangle \\ \frac{1}{2}:&\vert 10\rangle\vert 1/2,1/2\rangle & \vert 11\rangle\vert 1/2,-1/2\rangle\\ -\frac{1}{2}:&\vert 1,-1\rangle\vert 1/2,1/2\rangle&\vert 10\rangle\vert 1/2,-1/2\rangle\\ -\frac{3}{2}:&\vert 1,-1\rangle \vert 1/2,-1/2\rangle \end{array}\qquad \to \qquad \begin{array}{rlll} \frac{3}{2}:&\bullet \\ \frac{1}{2}:&\bullet & \bullet \\ -\frac{1}{2}:&\bullet &\bullet \\ -\frac{3}{2}:&\bullet \end{array} $$ It is only $2$ columns wide, and the width stops growing at $M=1/2$, indicating that the possible $j$ in this case are $3/2$ and $1/2$, and indeed $$ \vert 1-1/2\vert \le j\le 1+1/2\, . $$ Finally, note that the absolute value is required on the left because one could write the states as $\vert sm_s\rangle\vert \ell m_\ell\rangle$ without affecting the possible values of $j$.
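As a quick cross-check of the peeling argument, here is a short Python sketch (my own illustration, not part of the original answer) that tallies the uncoupled states by $M$ and reads off the allowed $j$ values from the growth of the row widths:

```python
from fractions import Fraction

def allowed_j(l, s):
    """Recover the allowed total angular momenta j by the counting
    argument above: tally uncoupled states |l,ml>|s,ms> per
    M = ml + ms, then each increase in row width marks one new j."""
    l, s = Fraction(l), Fraction(s)
    counts = {}
    ml = -l
    while ml <= l:
        ms = -s
        while ms <= s:
            counts[ml + ms] = counts.get(ml + ms, 0) + 1
            ms += 1
        ml += 1
    js, prev, M = [], 0, l + s
    while M >= 0:
        mult = counts[M]
        js.extend([M] * (mult - prev))  # width grew: a new j = M appears
        prev = mult
        M -= 1
    return sorted(js)

# l = 1, s = 1/2 (the example above): each allowed j with degeneracy 2j+1
for j in allowed_j(1, Fraction(1, 2)):
    print(j, 2 * j + 1)
```

For $\ell=1$, $s=1/2$ this reports $j=1/2$ (degeneracy 2) and $j=3/2$ (degeneracy 4), accounting for all $(2\ell+1)(2s+1)=6$ uncoupled states.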
{ "domain": "physics.stackexchange", "id": 41279, "tags": "quantum-mechanics, angular-momentum, quantum-spin, hamiltonian, hydrogen" }
Determine the configuration space for a robotic arm
Question: I'm working with a 4DOF Parallel-Mechanism arm. I'm interested in writing planners for this arm (PRM or RRT) in the configuration space, but I'm not sure how to identify obstacles/collisions. When writing planners for mobile robots in a 2D workspace, it was easy to define and visualize the workspace and obstacles in which the planner/robot was operating. This website (link) shows a great example of visualizing the workspace and configuration space for a 2DOF arm, but how can I do this for higher dimensions? Answer: This is one of the basic problems in planning robot trajectories: the transformation from configuration space to workspace via the forward (direct) kinematics is unambiguous, while the inverse way from workspace to configuration space is ambiguous (in all cases where Dim(Config-Space) > Dim(Workspace)). So what almost all planners do (especially the sampling-based ones like RRT) is sample in configuration space, transform the sample to the workspace, and perform a collision check there. If no collision occurs, the sample is added to the planning graph. The same has to be done for longer trajectories in between the samples: break them down into smaller subtrajectories (< robot/end-effector size) and do a collision check for each waypoint.
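To make the sample-and-validate loop concrete, here is a minimal sketch for a toy 2-link planar arm. The forward kinematics, the circular obstacles, and all function names are illustrative assumptions, not part of the answer above; a real 4DOF parallel mechanism would substitute its own kinematics and a proper mesh-based collision checker.

```python
import math
import random

def forward_kinematics(q, lengths=(1.0, 1.0)):
    """Toy 2-link planar arm: map joint angles q = (q1, q2) to the
    base, elbow and end-effector positions in the workspace."""
    x1 = lengths[0] * math.cos(q[0])
    y1 = lengths[0] * math.sin(q[0])
    x2 = x1 + lengths[1] * math.cos(q[0] + q[1])
    y2 = y1 + lengths[1] * math.sin(q[0] + q[1])
    return [(0.0, 0.0), (x1, y1), (x2, y2)]

def in_collision(points, obstacles, steps=20):
    """Workspace check only: sample along each link segment and test
    against circular obstacles given as ((cx, cy), radius)."""
    for (ax, ay), (bx, by) in zip(points, points[1:]):
        for i in range(steps + 1):
            t = i / steps
            px, py = ax + t * (bx - ax), ay + t * (by - ay)
            for (cx, cy), r in obstacles:
                if math.hypot(px - cx, py - cy) < r:
                    return True
    return False

def sample_free_configs(n, obstacles, seed=0):
    """PRM-style loop: sample in config space, validate in workspace,
    keep only collision-free samples as roadmap nodes."""
    rng = random.Random(seed)
    nodes = []
    while len(nodes) < n:
        q = (rng.uniform(-math.pi, math.pi), rng.uniform(-math.pi, math.pi))
        if not in_collision(forward_kinematics(q), obstacles):
            nodes.append(q)
    return nodes
```

The point of the pattern is that the roadmap lives in configuration space while `in_collision` only ever reasons in the workspace, so the high-dimensional C-space obstacles never have to be represented explicitly.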
{ "domain": "robotics.stackexchange", "id": 709, "tags": "robotic-arm, motion-planning" }
Pressure of carbon dioxide due to two simultaneous equilibriums
Question: $1~\mathrm{mol}$ each of barium carbonate and calcium carbonate are put in a closed vessel which is maintained at a temperature of $300~\mathrm{K}$. Two equilibria are established, with the respective equilibrium constants: $$\begin{align} &\ce{BaCO3(s) <=> BaO(s) +CO2} & K_p&=3\mathrm{~atm} \\ &\ce{CaCO3(s) <=> CaO(s) +CO2} & K_p&=5\mathrm{~atm} \end{align}$$ Find the final pressure exerted by carbon dioxide. Let $n_1$ moles of $\ce{CO2}$ form from barium carbonate and $n_2$ moles of $\ce{CO2}$ form from calcium carbonate. Then $$K_{p_1}=\frac{(n_1+n_2)RT}{V}$$ $$K_{p_2}=\frac{(n_1+n_2)RT}{V}$$ But the above two equations are not meaningful because the $K_p$ values are different. (What mistake am I making here?) Let barium carbonate be put into the vessel first. Then there will be $3\mathrm{~atm}$ of carbon dioxide. Then if I add calcium carbonate, will it affect the equilibrium mixture? The answer he gave was $5\mathrm{~atm}$. Answer: It is a conceptual subtlety. All of the species in a hypothetical reaction must actually be present before its equilibrium-constant equation can be required to hold. This point is often glossed over in chemistry textbooks, and the phrase "Two equilibria are established" is misleading. Briefly, you cannot use the two-equation system until you are sure that all the species coexist in an equilibrium state. Initially, both solids decompose to form products; once a pressure of 3 atm is reached, only the second reaction continues forward. As the pressure rises above 3 atm, reaction 1 runs in reverse, and this continues while CaCO$_3$ is present. Notice that up to this point no equilibrium-constant equation needs to be satisfied, because the states the system passes through are not equilibrium states. The reverse of reaction 1 continues until all the BaO disappears. The pressure then rises to 5 atm, and the first equilibrium is never established. Edit: "What happens if CaCO$_3$ is exhausted? Then won't CaO react back to reduce the pressure of CO$_2$?"
If CaCO$_3$ is exhausted before the pressure reaches 5 atm (it cannot be exhausted after that), say at 4 atm, CaO could react back; but as soon as that happens (that is, as soon as a piece of CaCO$_3$ big enough to have the intensive properties of a macroscopic crystal is formed), the tendency is to go forward again. In that case the final pressure will be 4 atm, different from both equilibrium pressures: it is yet another equilibrium state. Nothing changes for BaCO$_3$ in this analysis. Above I described two possibilities: the first, before this edit, is that the vessel is small enough for the CO$_2$ pressure to reach 5 atm; this is the supposed "right answer". The second is a final CO$_2$ pressure greater than 3 atm and smaller than 5 atm. If the vessel is very big, it can also happen that the reactants are exhausted before even 3 atm is reached, so yet another final state results.
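The case analysis above can be turned into a small numerical sketch. This is my own bookkeeping model, not part of the original problem (which gives no vessel volume): it assumes ideal-gas CO$_2$, a volume `V` in litres, and that the solids occupy negligible volume.

```python
R = 0.0821  # L atm / (mol K), ideal gas constant

def final_co2_pressure(V, T=300.0, n_ca=1.0, n_ba=1.0, kp_ba=3.0, kp_ca=5.0):
    """Case analysis from the answer above, for a vessel of volume V (L)."""
    p_ca_all = n_ca * R * T / V        # all CaCO3 decomposed, Ba ends as BaCO3
    p_all = (n_ca + n_ba) * R * T / V  # every carbonate decomposed
    if p_ca_all >= kp_ca:
        return kp_ca       # CaCO3/CaO coexist: equilibrium 2 holds, P = 5 atm
    if p_ca_all > kp_ba:
        return p_ca_all    # CaCO3 exhausted between 3 and 5 atm: neither equilibrium
    if p_all > kp_ba:
        return kp_ba       # BaCO3/BaO coexist: equilibrium 1 holds, P = 3 atm
    return p_all           # vessel so large that everything decomposes

print(final_co2_pressure(1.0))   # small vessel: the supposed 'right answer', 5 atm
print(final_co2_pressure(20.0))  # very large vessel: below 3 atm, no equilibrium
```

Varying `V` reproduces all the regimes the answer describes: exactly 5 atm, an intermediate pressure between 3 and 5 atm, exactly 3 atm, or full decomposition below 3 atm.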
{ "domain": "chemistry.stackexchange", "id": 5275, "tags": "physical-chemistry, equilibrium, gas-laws" }
Why is aluminium powder sticky?
Question: Aluminium powder is used in applications like the "etch-a-sketch" toy because it tends to stick to surfaces. Also it tends to clog abrasive tools like files and grinding wheels or stones, and the chips are quite annoying to clean after machining, because they tend to stick everywhere. What are the specific reasons related to its atomic structure for this phenomenon? Answer: For powders where the mass of an individual particle is small relative to the size of the electrostatic effects due to slight charge differences on surfaces, "sticking" to those surfaces is common. The reason why aluminum "loads" grinding wheels and files is 1) its ductility and 2) the fact that it exhibits galling, in which pieces of aluminum, when rubbed together under pressure, tend to friction-weld themselves together at asperities on their facing surfaces and then roll up into globs of aluminum that get sheared between the surfaces, glue themselves onto them, and then get sheared loose while tearing fresh aluminum from the now-roughened surfaces. So when a bit of loose aluminum gets jammed in between the teeth of a file, that aluminum rubs against the workpiece, galls itself into the interface, and then gets caught between more teeth on the file, and so on. One reason this happens with aluminum is that when fresh aluminum surfaces are exposed to air, a very thin oxide film quickly forms on them, which can be rubbed away with sliding action to bring fresh, unoxidized aluminum on those surfaces into direct contact, where "pressure welding" can occur and the galling process can be initiated.
{ "domain": "physics.stackexchange", "id": 86914, "tags": "material-science, physical-chemistry" }
What are the uses of Penrose diagrams?
Question: Are Penrose diagrams just used for a nice visual representation of compactified space and time? Are there any other applications? I figured out how to make my own Penrose diagram, built individually, curve by curve. It's slightly different in its geometry but approximately a Penrose diagram. I didn't use conformal maps, I did it using real functions. Answer: Without a Penrose diagram, I find it extremely difficult to reason about causal relationships in GR. I've seen other people try to do it using various coordinate-based representations, e.g., Kruskal-Szekeres coordinates for the Schwarzschild spacetime, and it seems painful to me. If you want to define what a black hole is, you kind of need to define null infinity, even if you don't call it that. If you can define null infinity, then you pretty much have a Penrose diagram. This relates to a more general topic called boundary constructions. When you want to talk about maximal extensions of spacetimes, it really helps to be able to refer to the relevant Penrose diagrams.
{ "domain": "physics.stackexchange", "id": 49182, "tags": "general-relativity, spacetime, causality" }
Calibrating in OpenNI_Kinect
Question: Hi all, I was wondering if it was possible to calibrate the Kinect in the openni_camera package, and if there was a standard way to do this. I did stumble across the following link: http://www.ros.org/wiki/kinect_calibration but noticed that this was associated with a deprecated package (and I was having issues getting that code to compile with the current version of libfreenect). I also noticed in openni_camera/include several different parameter files that look like they have calibration information (e.g. calibration_rgb.yaml), but I noticed that the launch file openni_camera/openni_node.launch doesn't refer to any of them. In fact, when I removed those configuration files, everything still worked. Are these files even read by OpenNI_Camera? And if they aren't, where does OpenNI_Camera find its intrinsic parameters? I apologise if I missed some other package out there. Thanks, Chris Tralie Originally posted by ctralie on ROS Answers with karma: 41 on 2011-10-17 Post score: 4 Original comments Comment by Marco on 2011-10-19: From what I understand the OpenNI package uses the factory calibration stored on the Kinect. I've been trying to figure out if it is possible to do a manual (better?) calibration but have not succeeded so far to use the data from the Kinect_Calibration in the latest OpenNI driver. Answer: The newer openni_kinect drivers get the factory calibration settings, so calibration is no longer necessary. This is why the calibration tools are associated with the deprecated package, because it doesn't use the factory calibration settings. Originally posted by drew212 with karma: 57 on 2011-11-01 This answer was ACCEPTED on the original site Post score: 1
{ "domain": "robotics.stackexchange", "id": 6993, "tags": "calibration, kinect, openni-kinect, openni-camera" }
Tic Tac Toe Android game
Question: Can you help me with suggestions regarding my code for Tic Tac Toe Game? This is my code, I've tested it and it works, but I feel like it can still be made shorter/clearer. I am not a good coder and it might look ugly to some. public class MainActivity extends Activity { /** * Called when the activity is first created. */ @Override public void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); setContentView(R.layout.activity_main); setBoard(); } int check[][]; int i,j; Button b[][]; int player=0; TextView textView; Button newGame; // Set up the game board. private void setBoard() { b = new Button[4][4]; check = new int[4][4]; textView = (TextView) findViewById(R.id.textview1); newGame = (Button) findViewById(R.id.newgame); newGame.setOnClickListener (new View.OnClickListener(){ public void onClick(View v) { if(newGame.isEnabled()) { textView.setText("Click button to start!"); player=0; setBoard(); } } }); b[1][3] = (Button) findViewById(R.id.one); b[1][2] = (Button) findViewById(R.id.two); b[1][1] = (Button) findViewById(R.id.three); b[2][3] = (Button) findViewById(R.id.four); b[2][2] = (Button) findViewById(R.id.five); b[2][1] = (Button) findViewById(R.id.six); b[3][3] = (Button) findViewById(R.id.seven); b[3][2] = (Button) findViewById(R.id.eight); b[3][1] = (Button) findViewById(R.id.nine); for (i = 1; i <= 3; i++) { for (j = 1; j <= 3; j++) check[i][j] = 2; } // add the click listeners for each button for (i = 1; i <= 3; i++) { for (j = 1; j <= 3; j++) { b[i][j].setOnClickListener(new MyClickListener(i, j)); if (!b[i][j].isEnabled()) { b[i][j].setText(""); b[i][j].setEnabled(true); } } } } class MyClickListener implements View.OnClickListener { int x; int y; public MyClickListener(int x, int y) { this.x = x; this.y = y; } public void onClick(View view) { if (b[x][y].isEnabled()) { b[x][y].setEnabled(false); if (player == 0) { b[x][y].setText("X"); check[x][y] = 0; player = 1; checkBoard(); } else { b[x][y].setText("O"); 
check[x][y] = 1; player = 0; checkBoard(); } } } // check the board to see if someone has won private boolean checkBoard() { boolean gameOver = false; if (( check[1][1] == 0 && check[2][2] == 0 && check[3][3] == 0) || ( check[1][3] == 0 && check[2][2] == 0 && check[3][1] == 0) || ( check[1][2] == 0 && check[2][2] == 0 && check[3][2] == 0) || ( check[1][3] == 0 && check[2][3] == 0 && check[3][3] == 0) || ( check[1][1] == 0 && check[1][2] == 0 && check[1][3] == 0) || ( check[2][1] == 0 && check[2][2] == 0 && check[2][3] == 0) || ( check[3][1] == 0 && check[3][2] == 0 && check[3][3] == 0) || ( check[1][1] == 0 && check[2][1] == 0 && check[3][1] == 0)) { textView.setText("Player 1: You win!"); gameOver = true; } else if (( check[1][1] == 1 && check[2][2] == 1 && check[3][3] == 1) || ( check[1][3] == 1 && check[2][2] == 1 && check[3][1] == 1) || ( check[1][2] == 1 && check[2][2] == 1 && check[3][2] == 1) || ( check[1][3] == 1 && check[2][3] == 1 && check[3][3] == 1) || ( check[1][1] == 1 && check[1][2] == 1 && check[1][3] == 1) || ( check[2][1] == 1 && check[2][2] == 1 && check[2][3] == 1) || ( check[3][1] == 1 && check[3][2] == 1 && check[3][3] == 1) || ( check[1][1] == 1 && check[2][1] == 1 && check[3][1] == 1)) { textView.setText("Player 2: You Win!"); gameOver = true; } else { boolean empty = false; for (i = 1; i <= 3; i++) { for (j = 1; j <= 3; j++) { if (check[i][j] == 2) { empty = true; break; } } } if (!empty) { gameOver = true; textView.setText("Game over. 
It's a draw!"); } }if(gameOver) for(i=1;i<=3;i++) { for(j=1;j<=3;j++) { b[i][j].setEnabled(false); } } return gameOver; } } } Answer: This is the main problem with your code: if(player==0) This one line is the biggest mistake in your code, and you use it everywhere, for instance: if (player == 0) { b[x][y].setText("X"); check[x][y] = 0; player = 1; checkBoard(); } else { b[x][y].setText("O"); check[x][y] = 1; player = 0; checkBoard(); } There are a number of places you use that type of horrid duplicate structure. Get rid of it. A (more) acceptable alternative is to do something like this: b[x][y].setText(player.getSymbol()); check[x][y] = player; player = player.next(); checkBoard(); At a bare minimum, even something like this would still make your code more readable: b[x][y].setText(symbol[player]); check[x][y] = player; player = (player + 1)%2; checkBoard(); There is no need to check if(player==0) in your code, since you are doing the exact same thing either way - the only difference is that you are trivially modifying some of the parameters. Handle it by making a separate Player class to keep track of the differences in those parameters. Once you have done that, you should refactor out a number of methods from your code, but there is no point until you take care of the first issue.
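The `symbol[player]` / `(player + 1) % 2` idea above can be sketched outside Android. This is a hypothetical Python mock-up of the click handler (the board representation and the `symbols` list are assumptions for illustration, not the asker's Java code):

```python
# Deduplicated move handling: the player is data, not a branch.
symbols = ["X", "O"]  # index 0 = player 1, index 1 = player 2

def make_move(board, x, y, player):
    """Place the current player's symbol and return the next player.

    One code path serves both players: only the symbol looked up and
    the stored player index differ, so no if/else duplication is needed.
    """
    board[x][y] = symbols[player]
    return (player + 1) % 2  # alternate 0 -> 1 -> 0 -> ...

board = [[None] * 3 for _ in range(3)]
player = 0
player = make_move(board, 0, 0, player)  # X moves, player becomes 1
player = make_move(board, 1, 1, player)  # O moves, player becomes 0
print(board[0][0], board[1][1], player)  # X O 0
```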
{ "domain": "codereview.stackexchange", "id": 12610, "tags": "java, beginner, android, tic-tac-toe" }
What does thermal conductivity actually measure?
Question: Forgive my layman, non-physicist terminology used here. Hopefully I'm not too much of a caveman to express myself properly. What does thermal conductivity actually express? Is it measuring the amount of heat that transfers through a material? Or the speed at which the heat transfers? Or some combination of the two? Or something else? For example, if I have a wall with such-and-such thermal conductivity and a heat source on one side, what does thermal conductivity actually tell me for the amount of heat that will be transferred to the other side, how long that will take, and so on? Edit: if thermal conductivity is the speed of heat transfer, what am I to make of the fact that dense materials like concrete and compressed earth blocks have a high thermal conductivity (≈1.5) relative to dedicated insulation materials (≈0.04), yet heat transfers through them slowly--this property being explicitly utilized in certain applications, in fact, such as passive solar design. Answer: Thermal conductivity measures the speed at which heat energy travels through material. That's different to the speed at which changes in temperature travel through material, which is driven by a combination of thermal conductivity and thermal mass. So, to use your example, concrete has a high thermal conductivity: it will lose heat energy quite quickly, so a hot thing inside a concrete box can cool down quite quickly. However, concrete has high thermal mass: it takes a lot of energy to raise its temperature by 1 Kelvin. So even with heat going into it quickly, its temperature will rise slowly. That's why concrete and earth walls are used in some passive solar designs: not necessarily for their insulation properties, but for their properties as a heat buffer: they can absorb a lot of heat for relatively low changes in their own temperature, and radiate it back out again. 
That gives you a wall surface with a fairly steady radiant temperature, which feels a lot more comfortable than a surface with a highly variable radiant temperature; and it gives you a huge buffer that allows you to store solar energy in the day and release it at night, thus giving you cooling during the day when you need it, and heating during the night when you need it.
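The conductivity/thermal-mass distinction can be put into numbers with Fourier's law for steady-state conduction, q = k·A·ΔT/d. A minimal sketch, using the conductivity values quoted in the question (≈1.5 vs ≈0.04 W/(m·K)) and otherwise assumed wall dimensions:

```python
def conduction_watts(k, area_m2, delta_t, thickness_m):
    """Steady-state heat flow through a slab: Fourier's law q = k*A*dT/d."""
    return k * area_m2 * delta_t / thickness_m

# Same 10 m^2 wall, 0.2 m thick, 20 K across it; only k differs.
concrete = conduction_watts(1.5, 10.0, 20.0, 0.2)     # ~1500 W
insulation = conduction_watts(0.04, 10.0, 20.0, 0.2)  # ~40 W

# High conductivity moves heat energy ~40x faster; how fast the wall's
# *temperature* changes additionally depends on its thermal mass (m*c),
# which Fourier's law does not capture.
print(concrete, insulation)
```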
{ "domain": "physics.stackexchange", "id": 67690, "tags": "thermal-conductivity" }
Does Newton’s third law of motion apply to time?
Question: I have a question about how physics works. I am asking about what the law applies to, not about whether this effect could be experienced, so please don’t answer “it doesn’t matter because causality prohibits time travel”. This might be the case, but the law might still work in this way. And please don’t state that I’m “ignoring the laws of physics”, because I am not. Newton’s third law of motion basically states that when you produce a force in one direction, it produces an equal force in the opposite direction. Given that time is also a dimension, if something had advanced technology that allowed it to exert force along the temporal axis, would Newton’s law still apply? Answer: You might have misunderstood the concepts. Newton's third law is all about how forces interact. When one object pushes or pulls another, the second object pushes or pulls back with the same strength but in the opposite direction. This law doesn't really touch on the nature of time itself. Imagine you're playing with toy cars. When one car bumps into another, the second car might move or even bump back because of the impact. This bumping back-and-forth is like Newton's third law. Now, talking about time in this scenario would be like discussing how long you've been playing with the toy cars. It's a separate concept. It doesn't change how the cars bump into each other, just like Newton's third law doesn't really explain how time behaves. Time, in this case, is just the background against which all these car interactions are happening.
{ "domain": "physics.stackexchange", "id": 98301, "tags": "newtonian-mechanics" }
Black hole collision animation: What are these extrusions?
Question: In this video uploaded by LIGO Lab Caltech, two inspiraling black holes are depicted. The video's description explains what is shown and can be summarized by: The colored surface is the space of our universe, as viewed from a hypothetical, flat, higher-dimensional universe, in which our own universe is embedded. ... the colors depict the rate at which time flows. [Space] is dragged into motion by the orbital movement of the black holes, and by their gravity and by their spins. This motion of space is depicted by silver arrows... Just before things become quiescent, regions of space around the merging black holes extrude upward. (Actually, the extrusions seem to begin sometime before things get chaotic.) If "space" here is being represented like the rubber sheet analogy, what do these extrusions mean? If gravitational forces create depressions in the sheet, then it seems to follow that those extrusions are anti-gravitational, which can't be right. Could they be regions where the equations modeling the black holes' interactions spit out nonsense? (From video, timestamp 0:52) Answer: The shape of the surface shown in the video is a depiction of the spatial curvature of the spacetime. (The relationships with time are depicted separately by the arrows and the colors.) More particularly, the shape is depicting the curvature of the equatorial plane of the binary. The depicted surface has been embedded in a (fictional) 3D space in such a way that the curvature of the surface is equal to the intrinsic curvature of the equatorial plane. Let's try to unpack what this means for the interpretation of the extrusions. First, note that there is no physical meaning to whether something is shown as an extrusion or a depression; this does not actually affect the curvature. The video makers could have also chosen to depict the depressions around the black hole as extrusions instead -- without changing the meaning.
What is relevant, however, is that some regions are shown as depressions while others are shown as extrusions. This means that somewhere in between, there must be a saddle point in the depicted surface. Saddle points correspond to regions with negative (spatial) curvature (i.e. an area where if you would draw a triangle, its angles would sum up to less than 180 degrees). The extrusions themselves quite clearly have positive curvature (a triangle would have more than 180 degrees). Note that the sign of the spatial curvature has little to do with whether gravity is attractive or repulsive. If you want an indication of direction in which gravity is working, the arrows give a better idea (although that interpretation should also be taken with a pinch of relativistic salt). Clarifying the last point a bit, the animation depicts three aspects of the spacetime curvature: The rate at which time flows (the lapse) as a color map, the rate at which space is dragged (the shift) as gray/silver arrows, and the spatial curvature as the curvature of the surface. Together these three completely characterize the curvature of spacetime. Consequently, they dictate how a test object would move in the spacetime, i.e. "how gravity acts". Although all three elements are important for the motion of particles, some give a better qualitative indication of the behavior of test particles than others. In this respect, the color map and the arrows are more important than the spatial curvature. Typically, a particle will want to move with the arrows, and along the gradient of color towards the redder regions (in both cases this generally means toward the black holes). The spatial curvature plays a somewhat secondary role, and matters mostly for particles moving at high velocities. Hence my comment that the sign of the spatial curvature is not a good indicator for whether gravity is attractive or repulsive at some point.
{ "domain": "astronomy.stackexchange", "id": 4098, "tags": "black-hole, collision" }
C Implementation of Tideman Algorithm
Question: I'm self-learning programming and following the CS50 course on YouTube. Below is my attempt to solve the PSET 3 Tideman problem. The program "seems" to work, as it doesn't produce any errors or warnings, and it seems to produce the correct result for a few test cases (3 to 5 candidates with 8 or more voters). What I would like to learn more about is as follows: whether my code contains any obvious logical errors that would fail for cases where there are a lot of tied pairs between candidates, or where there are multiple candidates who have not lost in any preference pairs, i.e., multiple candidates have never been on the losing side. How would I need to change my program to determine the result then? Since I'm new to programming, I'm using a lot of loops and nested loops; I'm not sure if this (using a whole bunch of loops in a program) is a common/good/normal practice, or if there are more "elegant" ways of doing things in the professional coding world. Any other critiques in any areas are greatly appreciated! Thank you!
#include <stdio.h> #include <string.h> #include <ctype.h> #include <stdlib.h> #include <stdbool.h> /*max number of candidates*/ #define MAX 9 /*preferences[i][j] is the number of voters who prefer i over j, i and j being candidate index*/ int preferences[MAX][MAX]; /*bool type 2-d array, if true, locked[i][j] means i is locked in over j*/ bool locked[MAX][MAX]; /*each pair has a winner and a loser index, win_margin is the number of votes won*/ struct pair { int winner; int loser; int win_margin; }; /*used for merge-sorting the pairs, based on win_margin*/ struct left { int winner; int loser; int win_margin; }; struct right { int winner; int loser; int win_margin; }; char* candidate[MAX]; struct pair pairs[MAX * (MAX - 1) / 2]; int pair_count; int candidate_count; /*prototype functions*/ bool vote(int rank, char name[], int ranks[]); void record_preferrences(int ranks[]); void add_pairs(void); void sort_pairs(void); void merge_sort(struct pair* pairs, int beg_index, int end_index); void merge(struct pair* pairs, int beg_index, int end_index, int mid); void lock_pairs(void); void print_winner(void); int main(int argc, char* argv[]) { char buffer[100]; int voter_count; pair_count = 0; /*check the commanline input validty*/ if (argc < 2 || argc > MAX) printf("Need to put in candidates' names, and max number of candidates is 9.\n"); /*populate the candidate array from command line arguments*/ candidate_count = argc - 1; /*memory allocation*/ for (int i = 0; i < MAX; i++) { if ((candidate[i] = malloc(50)) == NULL) printf("Error. 
memory allocation unseccessful.\n"); } /*populate the candidate array with candidates' names*/ for (int j = 0; j < candidate_count; j++) { strcpy(candidate[j], argv[j + 1]); printf("Candidate %d is: %s\n", j + 1, candidate[j]); } /*clear out the graph of locked in pairs*/ for (int i = 0; i < candidate_count; i++) { for (int j = 0; j < candidate_count; j++) { locked[i][j] = false; } } /*get number of voters*/ do { int flag = 0; printf("Number of voters: "); fgets(buffer, sizeof(buffer), stdin); sscanf(buffer, "%d", &voter_count); if (voter_count < 2) { printf("Enter a number that's larger than 2.\n"); continue; } for (int i = 0; i < strlen(buffer) - 1; i++) { if (isdigit(buffer[i]) == 0) /*found non-numeric character with in user input*/ { if (buffer[i] == '-' || buffer[i] == '+') continue; else { printf("Enter only whole numbers that's larger than 2.\n"); flag = 1; break; } } } if (flag == 1) continue; else break; } while (true); /*gather rank info for each vote; check each vote's validty; record preferences*/ for (int i = 0; i < voter_count; i++) { /*ranks[i] represent voter's i th preference*/ int* ranks; if((ranks = malloc(sizeof(int) * candidate_count)) == NULL) printf("memory allocation for voter[%d]'s ranks failed.\n", i + 1); char name[50]; for (int j = 0; j < candidate_count; j++) { do { int flag = 0; printf("Ballot %d, rank %d: ", i + 1, j + 1); fgets(name, sizeof(name), stdin); name[strlen(name) - 1] = '\0'; if (!vote(j, name, ranks)) { printf("Invalid name entered. Please Re-enter.\n"); continue; } if (j >= 1) { for (int k = 0; k < j; k++) { if (strcmp(candidate[ranks[k]], candidate[ranks[j]]) == 0) { flag = 1; printf("Duplicate name entered. 
Please Reenter.\n"); break; } } if (flag == 1) continue; else break; } else break; } while (true); free(ranks); } /*after each vote info has been collected, record perferences in preferences[i][j]*/ record_preferrences(ranks); printf("\n"); } /*for each pair of candidates, record winner and loser.*/ add_pairs(); /*sort the pairs, by the winning margin of each pair*/ sort_pairs(); /*lock pairs with the largest win margin first, keep locking down the list until a cycle is created*/ lock_pairs(); print_winner(); } /*check each vote's validty; update the ranks[], ranks[0] is the first rank of a vote*/ bool vote(int rank, char name[], int ranks[]) { int flag = 0; for (int k = 0; k < candidate_count; k++) { if (strcmp(name, candidate[k]) == 0) { flag = 1; ranks[rank] = k; break; } } if (flag == 0) return false; else return true; } /*record preferences*/ void record_preferrences(int ranks[]) { /*loop through each rank in ranks[], */ for (int i = 0; i < candidate_count; i++) { /*rank[0] is preferred over all other ranks; rank[1] preferred over all ranks after it */ int winner = ranks[i]; for (int j = i + 1; j < candidate_count; j++) { int loser = ranks[j]; preferences[winner][loser] += 1; } } } /*record the preferences in struct pair pairs*/ void add_pairs(void) { for (int i = 0; i < candidate_count; i++) { for (int j = i + 1; j < candidate_count; j++) { if (preferences[i][j] > preferences[j][i]) { pairs[pair_count].winner = i; pairs[pair_count].loser = j; pairs[pair_count].win_margin = preferences[i][j]; printf("pair[%d], in pair[%d][%d] winner is: %s; loser is : %s, by %d votes.\n", pair_count, i + 1, j + 1, candidate[i], candidate[j], preferences[i][j]); } if (preferences[j][i] > preferences[i][j]) { pairs[pair_count].winner = j; pairs[pair_count].loser = i; pairs[pair_count].win_margin = preferences[j][i]; printf("pair[%d], in pair[%d][%d] winner is: %s; loser is : %s, by %d votes.\n", pair_count, i + 1, j + 1, candidate[j], candidate[i], preferences[j][i]); } 
if(preferences[i][j] == preferences[j][i]) { pairs[pair_count].winner = i; pairs[pair_count].loser = j; pairs[pair_count].win_margin = 0; printf("pair[%d], in pair[%d][%d], there is a tie between %s and %s.\n", pair_count, i + 1, j + 1, candidate[i], candidate[j]); } pair_count++; } } printf("There are %d pairs.", pair_count); printf("\n"); } /*sort pairs based on the amount of winning margin in each pair*/ void sort_pairs(void) { printf("unsorted array is:"); for (int i = 0; i < pair_count; i++) { printf("%d ", pairs[i].win_margin); } /*use merge sort to sort the array*/ int beg_index = 0; int end_index = pair_count - 1; merge_sort(pairs, beg_index, pair_count - 1); printf("\n"); printf("sorted array is:"); for (int i = 0; i < pair_count; i++) { printf("%d ", pairs[i].win_margin); } printf("\n"); } /*produce the sorted array*/ void merge_sort(struct pair *pairs, int beg_index, int end_index) { if (beg_index < end_index) { int mid = (beg_index + end_index) / 2; merge_sort(pairs, beg_index, mid); merge_sort(pairs, mid + 1, end_index); merge(pairs, beg_index, end_index, mid); } } /*merge the sorted 2 half arrays*/ void merge(struct pair* pairs, int beg_index, int end_index, int mid) { int n1 = mid - beg_index + 1; int n2 = end_index - (mid + 1) + 1; struct left* left; if((left = malloc(sizeof(struct left) * (n1 + 1))) == NULL) printf("memory allocation failed for struct left array.\n"); struct right *right; if((right = malloc(sizeof(struct right) * (n2 + 1))) == NULL) printf("memory allocation failed for struct right array.\n"); for (int i = 0; i < n1; i++) { left[i].win_margin = pairs[beg_index + i].win_margin; left[i].winner = pairs[beg_index + i].winner; left[i].loser = pairs[beg_index + i].loser; } for (int j = 0; j < n2; j++) { right[j].win_margin = pairs[mid + 1 + j].win_margin; right[j].winner = pairs[mid + 1 + j].winner; right[j].loser = pairs[mid + 1 + j].loser; } int i = 0; int j = 0; int k = beg_index; while (i < n1 && j < n2) { if (left[i].win_margin <= 
right[j].win_margin) { pairs[k].win_margin = left[i].win_margin; pairs[k].winner = left[i].winner; pairs[k].loser = left[i].loser; i++; } else { pairs[k].win_margin = right[j].win_margin; pairs[k].winner = right[j].winner; pairs[k].loser = right[j].loser; j++; } k++; } while (i < n1) { pairs[k].win_margin = left[i].win_margin; pairs[k].winner = left[i].winner; pairs[k].loser = left[i].loser; i++; k++; } while (j < n2) { pairs[k].win_margin = right[j].win_margin; pairs[k].winner = right[j].winner; pairs[k].loser = right[j].loser; j++; k++; } free(left); free(right); } void lock_pairs(void) { /*the pair with the largest win_margin always got locked first*/ locked[pairs[pair_count - 1].winner][pairs[pair_count - 1].loser] = true; /*unique_ounter counts the number of unique candidate index on the loser side; if less than candidate_count, keep locking since locking won't create a cycle*/ int unique_counter = 1; int flag; for (int i = pair_count - 2; i >= 0; i--) { for (int j = pair_count - 1; j >= i; j--) { flag = 0; /*check duplicate loser index for locked pairs so far*/ if (pairs[i].loser == pairs[j].loser) { flag = 1; break; } } if (flag == 0) unique_counter += 1; if (unique_counter < candidate_count && pairs[i].win_margin != 0) locked[pairs[i].winner][pairs[i].loser] = true; if(unique_counter == candidate_count) break; } for (int i = pair_count - 1; i >= 0; i--) { if (locked[pairs[i].winner][pairs[i].loser]) { printf("pair[%d] locked.\n", i); } else { printf("pair[%d] remains unlocked.\n", i); } } } void print_winner(void) { int* lost_pool; if((lost_pool = malloc(sizeof(int) * candidate_count)) == NULL) printf("memory allocation failed for lost_pool array.\n"); int* win_pool; if((win_pool = malloc(sizeof(int) * candidate_count)) == NULL) printf("memory allocation failed for win_pool array.\n"); int flag; lost_pool[0] = pairs[pair_count - 1].loser; int k = 1; int l = 0; /*for all the locked pairs, find the candidates on the losing side and add to lost_pool array; for 
duplicates, only add once.*/ for (int i = pair_count - 2; i >= 0; i--) { flag = 0; for (int j = pair_count - 1; j > i; j--) { if (pairs[i].loser == pairs[j].loser) { flag = 1; break; } } if (locked[pairs[i].winner][pairs[i].loser] && flag == 0) { lost_pool[k] = pairs[i].loser; k++; } } printf("Candidats who lost in locked pairs: "); for (int i = 0; i < k; i++) { printf("%s ", candidate[lost_pool[i]]); } printf("\n"); /*the candidate(s) that is not in the lost_pool within the locked pairs, will be the winner*/ for (int i = 0; i < candidate_count; i++) { int flag = 0; for (int j = 0; j < k; j++) { if (i == lost_pool[j]) { flag = 1; break; } } if (flag == 0) { win_pool[l] = i; l++; } } if (l == 1) printf("The Winner is: %s!\n", candidate[win_pool[0]]); if (l > 1) { printf("The following are in the win_pool: "); for (int i = 0; i < l; i++) printf("%s ", candidate[win_pool[i]]); } free(lost_pool); free(win_pool); } Answer: Avoid global variables The larger the project, the higher the chance that if you use global variables, that you have conflicting global variable names. Try to avoid them when possible. Here are some generic rules you can follow: Declare variables in the function that first uses them. Pass variables as arguments to other functions that need to access them. This can be either by pointer or by value; if the variables are large structs or if they need to be written to, pass them by pointer, otherwise by value. If you need to pass more than few variables, consider grouping them in a struct instead. Avoid forward declarations You had to add some forward declarations above main(). You can avoid this by reversing the order in which the functions appear in the source file. Doing so avoids you having to repeat yourself, and there is less chance of mistakes. Only if there are cyclic dependencies between functions should you need forward declarations. Print errors to stderr, and handle them in some way First, always print errors to stderr. 
This is especially useful if you are redirecting the output of your program to a file for example; this way the errors don't get lost, and the output doesn't contain unexpected data. However I also see that while you check for invalid conditions and print an error message, you don't do anything about it, and just let the program continue running. In the best scenario, this leads to a crash, but in the worst scenario it looks like it is running fine but the final output will be incorrect. Whenever you have an error condition, do something about it. If you don't know how to recover from the error, then immediately exit from the program in some way, for example by calling exit(1) or abort(). Avoid assignments in if-statements Prefer to do the assignment before the if-statement. The reason is that it is quite easy to make mistakes when combining the assignment with the condition being tested (for example, writing = instead of == or the other way around). So for example, prefer to write: for (int i = 0; i < MAX; i++) { candidate[i] = malloc(50); if (!candidate[i]) { fprintf(stderr, "Error. Memory allocation unsuccessful.\n"); abort(); } } You often do this for memory allocations. I would just write a wrapper function that does the error handling, like so: void *checked_malloc(size_t size) { void *ptr = malloc(size); if (!ptr) { fprintf(stderr, "Error. Memory allocation unsuccessful.\n"); exit(1); } return ptr; } And then use it like so: for (int i = 0; i < MAX; i++) { candidate[i] = checked_malloc(50); } Avoid magic numbers Avoid writing magic numbers in your code, and instead declare them as a constant variable or a macro. For example: #define MAX_NAME_LENGTH 50 ... candidate[i] = checked_malloc(MAX_NAME_LENGTH); Guard against buffer overflows There are a few cases where you still allow buffer overflows. For example, when reading the candidate names from the command line, you use a bare strcpy() which doesn't check if the candidate's name fits in the allocated buffer.
So for example, write: strncpy(candidate[j], argv[j + 1], MAX_NAME_LENGTH); candidate[j][MAX_NAME_LENGTH - 1] = 0; // Necessary! But you could have avoided this by allocating buffers of the right size. You can do this by doing a strlen() before allocating candidate[j], but if possible use the function strdup(): candidate[j] = strdup(argv[j + 1]); But why do you need to make a copy of the name anyway? You can just have candidate[j] point directly to the right command line argument: candidate[j] = argv[j + 1]; Nested loops Nesting loops and/or conditional statements is perfectly normal, and often it's just the natural way to write things. However, if you nest too much the code might "fall off" the right-hand side of the screen and become unreadable. Sometimes it makes sense to wrap an inner loop into a function; this way you reduce the apparent nesting of loops, and make the code simpler to reason about. Proper input validation Your check for a valid number of voters is both quite complex and still wrong. For example, I can give the input 2-3+4, and this will be accepted. There will also be a potential crash if the input reaches EOF before any character (not even a newline) is read, because then strlen(buffer) will be zero. There are multiple ways to solve this. To keep the code short, I would do the following: while (true) { char dummy; printf("Number of voters: "); fgets(buffer, sizeof buffer, stdin); if (sscanf(buffer, "%d %c", &voter_count, &dummy) == 1 && voter_count > 2) { break; } fprintf(stderr, "Enter a single number larger than 2!\n"); } I used the fact that sscanf() returns the number of successful conversions, that a space in the format string matches any amount of consecutive whitespace (including the newline), and that after the newline there should not be any other character in the buffer. Thus, if it successfully read the number and there was nothing after it except for whitespace, the return value will be 1, otherwise it will be 0 or 2.
I also noted that your error message says the number should be larger than two, but you actually check whether the number is not smaller than two, which means it passes it if it is equal to or larger than two. Make flags bool, give them a proper name If you want to store a true/false value in a variable, make it a bool. You already do that in some places, but for some reason you use int for flag. Also, flag tells me what kind of variable it is, but not what is used for. Try to give a more descriptive name. For example, when checking for duplicates, name it duplicate_found. This way, you can write: if (pairs[i].loser == pairs[j].loser) { duplicate_found = true; break; } ... if (!duplicate_found) unique_counter++;
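The validate-reject-retry rule from the sscanf() example above (accept only when the whole line is a single number greater than 2) translates to other languages too; here is a hedged Python sketch of the same rule, purely illustrative and not part of the reviewed C program:

```python
def parse_voter_count(line):
    """Return the voter count if `line` is exactly one integer > 2, else None.

    Mirrors the sscanf("%d %c") trick: the line must contain one number
    surrounded by nothing but whitespace, so inputs like "2-3+4" are rejected.
    """
    token = line.strip()
    if not token.isdigit():  # rejects signs, embedded junk, empty input
        return None
    count = int(token)
    return count if count > 2 else None

print(parse_voter_count("5\n"))      # accepted
print(parse_voter_count("2-3+4\n"))  # rejected: not a single number
print(parse_voter_count("2\n"))      # rejected: must be larger than 2
```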
{ "domain": "codereview.stackexchange", "id": 39684, "tags": "c" }
What are the differences between IBMQJobManager and Qiskit aqua QuantumInstance?
Question: I have been using the Qiskit aqua QuantumInstance for a while, and have recently discovered the IBMQJobManager class, and the two seem to be quite similar. Are there differences between them? Will they be consolidated together in the future? Answer: By looking at the documentation for the two, it actually seems the two have quite different objectives; here is a brief summary: the quantum instance is mostly used to control the transpilation and execution of a circuit via many different parameters, such as the backend, the noise model (for simulation), basis gates, coupling map, etc., and is quite useful when wanting to run an Aqua algorithm on the particular instance. As for the IBMQJobManager, as said in the documentation, the main objective is to handle jobs and pulse schedules in order to be able to run them on a backend and then rebuild the results as needed. See the documentation for both, maybe this will help you better understand their differences: quantum instance and the job manager. Do you need a specific detail more explained? If so please tell me and I'll try to detail it more :)
{ "domain": "quantumcomputing.stackexchange", "id": 2435, "tags": "programming, qiskit" }
In a system of two charges, if one charge suddenly disappears, does the force on the other charge vanish instantaneously?
Question: It is commonly asked that if the Sun disappears, will Earth shoot off along a tangent instantaneously or after some time? We know from the theory of relativity that gravitational waves travel at the speed of light. And the Earth will shoot off along a tangent in about 8 mins. Is this also true for electrostatic forces? Is there anything like electrostatic force waves, an electrostatic counterpart of gravity? And what about other forces, i.e. the weak and strong nuclear forces? Answer: "Is there anything like electrostatic force waves?" Yes, also known as electromagnetic waves or light. The electromagnetic force, of which the electrostatic force is a subset, is carried by none other than photons, which travel at the speed of light. The strong force is carried by gluons, which are massless and therefore also travel at the speed of light, so disturbances in the strong force should too. The weak force, on the other hand, is carried by massive W and Z bosons, so it would travel slower than light speed.
{ "domain": "physics.stackexchange", "id": 25796, "tags": "electromagnetic-radiation, speed-of-light, relativity, gravitational-waves, causality" }
Penrose spacetime diagram for the Schwarzschild solution
Question: Consider the metric corresponding to the Schwarzschild solution. It represents a non-rotating black hole. When we want to understand the causal structure of the spacetime, we find the null geodesic equation. Outgoing radial null geodesic - The outgoing null geodesics are drawn in the regions r>2m and r<2m. For r>2m, the null geodesics go to infinity as we expect. For r<2m, they go and hit the singularity. But these outgoing null geodesics in the region r<2m seem to be coming from the horizon at time t=-(infinity). Are these outgoing null geodesics in both the regions somehow connected at t=-(infinity)? Ingoing radial null geodesic - The ingoing null geodesics are drawn similarly in the regions r<2m and r>2m. In the region r<2m, the geodesics come from the horizon and hit the singularity. They start at t=+(infinity) and hit the singularity at some finite time which is less than infinity. Do these null geodesics travel back in time in this region? Answer: You shouldn't give much physical meaning to the coordinates inside the event horizon. In particular, $r$ is the coordinate that "acts like" time, and $t$ acts like space. Something that is important to understand is that since the Schwarzschild coordinates don't cover all of the spacetime and become singular at the horizon, rigorously speaking they don't define a manifold: they define two pieces of a manifold, which a priori are not connected. A proper mathematical definition of the black hole spacetime uses (for example) the Kruskal-Szekeres coordinates, which cover the whole manifold. These are the coordinates used to make a Penrose diagram. To answer your questions: the two sets of outgoing geodesics are not connected. This makes sense, since they sort of go in opposite directions. The ones inside the horizon go towards the singularity, and those outside go off to infinity. The ingoing geodesics are connected, which again makes sense: you just have one geodesic, which goes through the horizon.
But "traveling back in time" is not a meaningful phrase, since the $t$ coordinate doesn't necessarily mean "time". You're right that along the geodesic it goes to $+\infty$ and back, but it doesn't travel back in time. You can see this in the Kruskal spacetime (ignore regions III and IV): I've drawn outgoing geodesics in green, and ingoing in orange.
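For reference, a sketch of the coordinates in question (standard textbook form, not part of the original answer; units $G=c=1$, Schwarzschild mass $M$): on the exterior patch $r>2M$ the Kruskal-Szekeres coordinates can be written as $$T=\left(\frac{r}{2M}-1\right)^{1/2}e^{r/4M}\sinh\frac{t}{4M},\qquad X=\left(\frac{r}{2M}-1\right)^{1/2}e^{r/4M}\cosh\frac{t}{4M},$$ in which radial null geodesics are the 45-degree lines $T=\pm X+\text{const}$. This is why an ingoing ray crosses the future horizon $T=X$ smoothly even though its Schwarzschild $t$ coordinate runs off to $+\infty$ there.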
{ "domain": "physics.stackexchange", "id": 43837, "tags": "general-relativity, black-holes, spacetime, causality, geodesics" }
Check whether an array can be the result of merging two smaller arrays, maintaining order
Question: I'm prepping for a tech interview coming up and trying to have a better understanding of what I should look out for in writing code. Also, feel free to correct the terminology that I use as I want to be able to talk about my code clearly and succinctly. One of the specifications for the tech interview process is that I use idiomatic JavaScript, according to the recruiter. I'm not familiar with that term and would appreciate any feedback on how to best write to that standard. Is there a specific standard that I should adhere to? Is that related to proper naming conventions? Is there a better way for me to optimize time and space in my code? When I assign values to variables within loops, are there rules that I should be aware of for when I shouldn't? For example, the current element being iterated over assigned to the currentH1 variable, in my mind, makes it more readable in understanding what's happening as opposed to half1[h1Index], and I reuse it more than once. Plus, I believe this might make it more idiomatic but I'm not sure? Am I losing out on space complexity in any way? Or is there something that I may not be aware of by checking current against undefined instead of checking the current index against the size of the array? When I assign values to variables outside of loops but within the function, are there space complexities that I should pay attention to or does garbage collection take care of this? Is this $O(1)$ space complexity as I'm just keeping track of the indices? Feel free to break down space complexity as I truly do want to have a more solid understanding of memory management. I believe I've accounted for the edge cases that I can think of, but are there more that I should be aware of? I placed the if condition for checking lengths at the top even before the index definitions because I figured that if they aren't even the same size, why bother with doing anything else. Is that weird?
function isMergedArray(half1, half2, mergedArray) { if ((half1.length + half2.length) != mergedArray.length ) { return false; } let h1Index = half1.length - 1; let h2Index = half2.length - 1; let maIndex = mergedArray.length - 1; while (maIndex >= 0) { const currentH1 = half1[h1Index]; const currentH2 = half2[h2Index]; const currentMa = mergedArray[maIndex]; if (currentH1 != undefined && currentH1 === currentMa) { h1Index--; } else if (currentH2 != undefined && currentH2 === currentMa) { h2Index--; } else { return false; } maIndex--; } return true; } Answer: Is there a specific standard that I should adhere to? Pick one, and stick to it. I use this one: https://github.com/airbnb/javascript One of the specifications for the tech interview process is that I use idiomatic JavaScript I understand that as follow the community conventions for writing code. Using a common standard helps. Is there a better way for me to optimize time and space in my code? It does not get much better than this, though you could drop the use of currentH1, currentH2, and currentMa. When I assign values to variables within loops, are there rules that I should be aware of for when I shouldn't? In my mind, if you are going to use the assigned value only once, then it does not make sense to assign it. Is there something that I may not be aware of by checking current against undefined instead of checking the current index against the size of the array? Absolutely, your code breaks in a test case where the array contains undefined as a value. When I assign values to variables outside of loops but within the function, are there space complexities that I should pay attention to or does garbage collection take care of this? Nah. Is this O(1) space complexity as I'm just keeping track of the indices? That is my understanding. I believe I've accounted for the edge cases that I can think of, but are there more that I should be aware of? As I mentioned, dupe values across the two halves.
It changes the paradigm of the routine completely. Also, as mentioned before, an array with undefined as a value. I placed the if condition for checking lengths at the top even before the index definitions because I figured that if they aren't even the same size, why bother with doing anything else. Is that weird? Not at all, however I only do this in functions that are very frequently called. Other than that, I tried to write this 'the proper way' by checking from the first value to the last, and it looked worse. From a naming convention, mergedArray is a bit of a misnomer; you don't know whether it is a merged array. It annoys me personally that you assign a value to currentH1 in a loop, but then declare it as const. It is not wrong technically, but it reads wrong to me. I rewrote the code a bit with my comments in mind: function isMergedArray(half1, half2, list) { if ((half1.length + half2.length) != list.length ) { return false; } let i = list.length - 1, h1 = half1.length - 1, h2 = half2.length - 1; while (i >= 0) { let currentElement = list[i]; if (h1 != -1 && half1[h1] === currentElement) { h1--; } else if (h2 != -1 && half2[h2] === currentElement) { h2--; } else { return false; } i--; } return true; }
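As the reviewer notes, duplicate values across the two halves change the problem completely: a single greedy scan can pick the wrong half, and the general case needs backtracking (or memoized recursion). A hedged Python sketch of that variant, illustrative only and not the reviewed JavaScript:

```python
from functools import lru_cache

def is_merged(half1, half2, merged):
    """True if `merged` is some interleaving of half1 and half2,
    with each half's internal order preserved (duplicates allowed)."""
    if len(half1) + len(half2) != len(merged):
        return False

    @lru_cache(maxsize=None)
    def go(i, j):
        k = i + j  # next position in `merged` to match
        if k == len(merged):
            return True
        if i < len(half1) and half1[i] == merged[k] and go(i + 1, j):
            return True
        return j < len(half2) and half2[j] == merged[k] and go(i, j + 1)

    return go(0, 0)

print(is_merged("ab", "cd", "acbd"))            # True
print(is_merged([1, 1], [1, 2], [1, 1, 2, 1]))  # True: needs the backtracking
print(is_merged("ab", "cd", "adbc"))            # False
```

The memoization keeps the worst case at O(len(half1) * len(half2)) instead of exponential.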
{ "domain": "codereview.stackexchange", "id": 33364, "tags": "javascript, performance, algorithm, memory-optimization" }
Why do decision tree learning algorithms preferably output the smallest decision tree?
Question: I have been following the ML course by Tom Mitchell. The inherent assumption while using a decision tree learning algorithm is: the algorithm preferably chooses a decision tree which is the smallest. Why is this so when we can have bigger extensions of the tree which could in principle perform better than the shorter tree?
In other words, the minimalist decision tree is the most clear, workable, and verifiable construct, however a clever software engineer could translate that minimalist construct to a run time optimized equivalent. Just as with compilation of source code, performance is multidimensional in meaning. There are time efficiency, memory efficiency, network bandwidth efficiency, or other performance metrics that could be used when optimizing the tree for run time. Nonetheless, the simpler tree is the best starting point for any weighted combination of these interests.
{ "domain": "ai.stackexchange", "id": 292, "tags": "machine-learning, unsupervised-learning, learning-algorithms" }
Improving Velocity estimation
Question: I have a sensor reduction model which gives me a velocity estimate of a suspension system (velocity 1). This estimated suspension system velocity is used to calculate another velocity (velocity 2) via a transfer function/plant model. Can I use velocity 2 to improve my velocity estimate (velocity 1) through Kalman filtering or through some feedback system? V1 is "estimated" using these two sensors. That is fed into a gerotor pump (Fs in diagram) which pumps fluid to manipulate the damper's viscous fluid, thereby applying resistance to the forces applied to the car body. There would be no problem if I had a velocity sensor on the spring; I could measure it accurately, but now I only have an estimate. I am trying to make the estimate better. Assume I have a model/plant or transfer function already that gives me the V2 given a V1. Answer: If you have a transfer function such that $$ \frac{V_2}{V_1} = H \\ V_2 = H V_1 \\ $$ Then wouldn't your estimate of $V_1$ be given by inverting the transfer function? $$ V_1 = H^{-1} V_2 $$ The problem is that you can't use this to measure $V_1$, and here's why: Your measurements are an estimate of $V_1$. $$ V_{est} = f(V_1) $$ You feed that estimate into the pump and get a flow output. $$ V_2 = H V_{est} $$ Now, if you invert the plant, you do NOT get a measurement of $V_1$, you get a measurement of your original estimate. $$ V_{est} = H^{-1} V_2 $$ It's like you are trying to draw your own ruler and then use that ruler to see if you drew the ruler correctly. It's a circular definition that's not going to get you anything useful.
{ "domain": "robotics.stackexchange", "id": 732, "tags": "control, sensors, pid, kalman-filter" }
Generate every possible single word of three letters
Question: I was asked to generate every possible single word of three letters, so I wrote this script. How good is my code? Would it be better doing it using recursion or would that be over-engineering? const getAllPossibleThreeLetterWords = () => { const chars = 'abcdefghijklmnopqrstuvwxyz' const arr = []; let text = ''; for (let i = 0; i < chars.length; i++) { for (let x = 0; x < chars.length; x++) { for (let j = 0; j < chars.length; j++) { text += chars[i] text += chars[x] text += chars[j] arr.push(text) text = '' } } } return arr } console.log(getAllPossibleThreeLetterWords()); Answer: It is very clear both what your code does and how it does it, this is a good thing. However, right now it is incredibly rigid. If requirements change more code will need to be changed than if you spent a bit more time on making it more generic (more on this later). You can avoid doing multiple assignments to text by combining the lines with text = chars[i] + chars[x] + chars[j]. This also removes the need to reset text to the empty string after adding the word. The variables i, x, and j seem to be randomly chosen. Not a big deal for loops this small, but it would be fairly easy to accidentally swap the order around. You might want to consider using a, b, and c instead. What would you do if you were told to modify this method so that it generates 4 letter words? How about 20 letter words? Or just n letter words? While you could write a ton of for loops, it would be better to handle a more generic case. Whether you use recursion or not is entirely up to you. To help you decide, here is a comparison. Recursive, handling n letters. 
let flatten = arr => arr.reduce((carry, item) => carry.concat(item), []) let getAllWords = wordLength => { if (typeof wordLength != 'number') throw Error('wordLength must be a number') if (wordLength < 0) throw Error('wordLength must be greater than or equal to zero') if (wordLength == 0) return [] let alphabet = 'abcdefghijklmnopqrstuvwxyz'.split('') let lengthen = word => alphabet.map(letter => word + letter) let addLetters = words => flatten(words.map(lengthen)) let _getAllWords = (letters, words = alphabet, current = 1) => { return letters == current ? words : _getAllWords(letters, addLetters(words), current + 1) } return _getAllWords(wordLength) } console.log(getAllWords(2)) Iterative, handling n letters. let getAllWords = wordLength => { if (typeof wordLength != 'number') throw Error('wordLength must be a number.') if (wordLength < 0) throw Error('wordLength must be greater than or equal to zero.') let alphabet = 'abcdefghijklmnopqrstuvwxyz'.split('') let words = [] if (wordLength != 0) words = alphabet for (let i = 1; i < wordLength; i++) { let temp = [] words.forEach(word => { alphabet.forEach(letter => temp.push(word + letter)) }) words = temp } return words } console.log(getAllWords(2)) It really is somewhat of a matter of opinion which is more clean. I personally would likely use the recursive version with the helper functions pulled out of the function itself to improve readability.
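For what it's worth, this enumeration is built into some standard libraries; a hedged sketch of the same idea in Python using itertools.product (shown just to illustrate the pattern, since the reviewed code is JavaScript):

```python
from itertools import product
from string import ascii_lowercase

def get_all_words(word_length):
    """All lowercase letter strings of the given length, in lexicographic order."""
    if not isinstance(word_length, int) or word_length < 0:
        raise ValueError("word_length must be a non-negative integer")
    if word_length == 0:
        return []  # match the answer's convention for length zero
    return ["".join(p) for p in product(ascii_lowercase, repeat=word_length)]

print(len(get_all_words(3)))  # 17576, i.e. 26 ** 3
```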
{ "domain": "codereview.stackexchange", "id": 27241, "tags": "javascript, algorithm" }
No compression algorithm can compress all input messages?
Question: I just started reading a book called Introduction to Data Compression, by Guy E. Blelloch. On page one, he states: The truth is that if any one message is shortened by an algorithm, then some other message needs to be lengthened. You can verify this in practice by running GZIP on a GIF file. It is, in fact, possible to go further and show that for a set of input messages of fixed length, if one message is compressed, then the average length of the compressed messages over all possible inputs is always going to be longer than the original input messages. Consider, for example, the 8 possible 3 bit messages. If one is compressed to two bits, it is not hard to convince yourself that two messages will have to expand to 4 bits, giving an average of 3 1/8 bits. Really? I find it very hard to convince myself of that. In fact, here's a counter example. Consider the algorithm which accepts as input any 3-bit string, and maps to the following outputs: 000 -> 0 001 -> 001 010 -> 010 011 -> 011 100 -> 100 101 -> 101 110 -> 110 111 -> 111 So there you are - no input is mapped to a longer output. There are certainly no "two messages" that have expanded to 4 bits. So what exactly is the author talking about? I suspect either there's some implicit caveat that just isn't obvious to me, or he's using language that's far too sweeping. Disclaimer: I realize that if my algorithm is applied iteratively, you do indeed lose data. Try applying it twice to the input 110: 110 -> 000 -> 0, and now you don't know which of 110 and 000 was the original input. However, if you apply it only once, it seems lossless to me. Is that related to what the author's talking about? Answer: What you are missing is that you need to consider all bits of size 3 or less. That is: if in a compression scheme for bits of size 3 or less we compress one of the 3-bit strings to a 2-bit string, then some string of size 3 or less will have to expand to 3 bits or more. 
A lossless compression scheme is a function $C$ from finite bit strings to finite bit strings which is injective, i.e., if $C(x) = C(y)$ then $x = y$, i.e., $C(x)$ uniquely determines $x$. Consider an arbitrary compression scheme $C$ and let $S$ be a set of binary strings. We can express how well $C$ works on $S$ by computing the ratio $$\text{CompressionRatio}(C,S) = \frac{\sum_{x \in S} \mathrm{length}(C(x))}{\sum_{x \in S} \mathrm{length}(x)}.$$ A small compression ratio is good. For example, if it is $1/2$ that means we can on average compress strings in $S$ by 50% using $C$. If we try to compress all strings of length at most $n$ then we are in trouble: Theorem: Let $S$ be the set of all strings of length at most $n$ and $C$ any compression scheme. Then $\text{CompressionRatio}(C,S) \geq 1$. So, the best compression scheme in the world is the identity function! Well, only if we want to compress random strings of bits. The bit strings which occur in practice are far from random and exhibit a lot of regularity. This is why it makes sense to compress data despite the above theorem.
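The theorem is ultimately a pigeonhole argument: there are $2^{n+1}-1$ binary strings of length at most $n$, but only $2^n-1$ of length at most $n-1$, so an injective $C$ cannot map everything to something shorter. A quick numeric check of the counts (Python, a hedged sketch):

```python
def count_strings_up_to(n):
    """Number of binary strings of length at most n, empty string included."""
    return sum(2 ** k for k in range(n + 1))  # equals 2 ** (n + 1) - 1

# Any injective scheme on strings of length <= n that shortens even one of
# them would need more distinct short outputs than exist: pigeonhole.
for n in range(1, 8):
    inputs = count_strings_up_to(n)           # strings we want to encode
    short_codewords = count_strings_up_to(n - 1)
    assert inputs > short_codewords

print(count_strings_up_to(3))  # 1 + 2 + 4 + 8 = 15
```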
{ "domain": "cs.stackexchange", "id": 3305, "tags": "data-compression, coding-theory" }
How to design a differential steering mechanism?
Question: I want to give my robot a differential mechanism for the system of turning and steering. Considering the case of turning a right-angled corner, the robot will achieve this by following a gradual circular arc through the intersection while maintaining a steady speed. To accomplish this end, we increase the speed of the outer wheel while slowing that of the inner. But supposing I want the turn to be within a definite radius, how do I calculate what ratio the 2 speeds have to be in? Can someone give me an insight into this? What I've done is this, although I have my doubts. If the speed of the right wheel is $V_r$ and the speed of the left wheel is $V_l$, then the ratio of their speeds while turning will be equal to the ratio of the circumferences of their corresponding quadrants. Therefore $$\frac{V_r}{V_l} =\frac{r+A}{r}$$ Is this right? I have a sinister feeling I'm missing something. Answer: Your intuition is correct. We can look at the difference in arc length that each wheel will roll for a given sector (specified by $\theta$ in degrees). $$d_l = \frac{\theta*2\pi{r}}{360}$$ $$d_r = \frac{\theta*2\pi(r+A)}{360}$$ This simplifies to: $$ \frac{d_r}{d_l} = \frac{\frac{\theta*2\pi(r+A)}{360}}{\frac{\theta*2\pi{r}}{360}} = \frac{r+A}{r}$$ Speed is just distance over time: $$ \frac{V_r}{V_l} = \frac{d_r/t}{d_l/t} = \frac{r+A}{r}$$
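To make the ratio concrete, a hedged numeric sketch (Python; $r$ is the inner-wheel turn radius and $A$ the track width, as in the question):

```python
def wheel_speed_ratio(r, a):
    """Outer/inner wheel speed ratio for inner turn radius r and track width a."""
    if r <= 0 or a < 0:
        raise ValueError("need r > 0 and a >= 0")
    return (r + a) / r

# e.g. a robot with a 0.3 m track width turning on a 1.0 m inner radius:
print(wheel_speed_ratio(1.0, 0.3))  # 1.3: the outer wheel runs 30% faster
```

Note that as r grows, the ratio approaches 1, which matches the intuition that a very wide turn needs almost no speed difference.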
{ "domain": "robotics.stackexchange", "id": 202, "tags": "kinematics" }
Taking force, mass and length as base units, find the dimensional formula of velocity
Question: My doubt is: how can force be considered as a base quantity? Is that possible? And how can I represent the dimension of velocity using it? Answer: You are missing time. Try to write force in terms of mass, length and time, and get the units of time from that equation. Then put it into the expression for speed. You can use any unit that involves time to substitute for it, but we generally tend to use basic units as fundamental units. The best example is that of current. Although current is the rate of flow of charge and charge is more fundamental than current, we use current as a base unit because we are comfortable measuring it.
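Carrying the hint through explicitly (a sketch, with $[F],[M],[L],[T]$ denoting the dimensions of force, mass, length and time): from $$[F]=[M][L][T]^{-2}\;\Rightarrow\;[T]=[M]^{1/2}[L]^{1/2}[F]^{-1/2},$$ so the dimensional formula of velocity in the base $\{F,M,L\}$ is $$[v]=[L][T]^{-1}=[F]^{1/2}[M]^{-1/2}[L]^{1/2}.$$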
{ "domain": "physics.stackexchange", "id": 13777, "tags": "homework-and-exercises, units, dimensional-analysis" }
A question on Lewis dot structures
Question: The image shown is of a carboxylate ion. I don't understand why we draw 6 electrons on the oxygen and say it has a formal negative charge on it, instead of putting 5 dots to represent that it used one of its valence electrons. Is it wrong to put an odd number of electrons on an atom in a Lewis dot structure, or is there some other reason? Answer: Starting from acetic acid, there is a single bond between oxygen and hydrogen because each of them contributes a single electron: When the acid donates this proton, both electrons remain on the more electronegative atom (in the Pauling scale, oxygen: 3.44, hydrogen 2.20, reference). Thus, this totals to six. Note, when drawing structure formulae with electrons and electron pairs, do not confuse the dash indicating the charge with the dash representing two electrons.
{ "domain": "chemistry.stackexchange", "id": 16214, "tags": "lewis-structure" }
Determine the language of the NPDA
Question: I have to write the language of the below $NPDA$ (Non-Deterministic Push Down Automaton). I think that from $q_0$ to $q_1$ and then $q_2$, we are actually building all the strings of $1$'s and $0$'s of the form $1^n0^n$ with length at least 2. But there is also a transition from $q_2$ to $q_0$, which makes a cycle, and it just reads a $1$ from the input string. For example these strings are accepted by this machine, and the point is that in all of them the last character is $0$. 1100 1 10 1 1100 111000 1 1100 1 10 1100 But, unfortunately I don't know how to write its language accurately. My idea was to write something like the below language: $L = \{ w(1^n) z (0^n) | w ∈ L \}$ and $z ∊ \{1,ɛ \}$ But it is not correct. I will be grateful for any help. Answer: Consider $L = \{1^n0^n\mid n>0\}$. It seems that the language of your PDA is $L(1L)^*$. It is not exactly the way you wanted, but I think it is an accurate and short way to write it. You could be a bit more verbose and write it as: $$\{1^{n_1}0^{n_1}11^{n_2}0^{n_2}1…11^{n_k}0^{n_k}\mid k \geqslant 1 \wedge \forall i\in \{1, …, k\}, n_i > 0\}$$
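A hedged Python sketch of a membership test for $L(1L)^*$, using the block decomposition implicit in the verbose form above: split the word into maximal runs $1^{a_i}0^{b_i}$; it is in the language iff these runs cover the whole word, $b_1=a_1\ge 1$, and every later block has $a_i\ge 2$ and $b_i=a_i-1$ (the extra $1$ being the separator between consecutive $1^n0^n$ pieces):

```python
import re

def in_language(w):
    """Membership test for L(1L)* with L = {1^n 0^n : n > 0}."""
    if not w or set(w) - {"0", "1"}:
        return False
    blocks = re.findall(r"1+0+", w)
    if "".join(blocks) != w:  # no leading 0-run, no trailing 1-run
        return False
    for i, block in enumerate(blocks):
        ones = block.index("0")
        zeros = len(block) - ones
        if i == 0:
            if zeros != ones:
                return False
        elif ones < 2 or zeros != ones - 1:
            # later 1-runs carry an extra separator 1 glued onto the front
            return False
    return True

print([w for w in ["10", "1100", "10110", "110", "01"] if in_language(w)])
# ['10', '1100', '10110']
```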
{ "domain": "cs.stackexchange", "id": 19951, "tags": "pushdown-automata" }
What motivates the trial solution of $\left[-\frac{\hbar^2\nabla^2}{2m}+\frac{e^2y^2B^2}{2m}-\frac{i\hbar eyB}{m}\frac{\partial}{\partial x}\right]\psi(x,y,z)=E\psi(x,y,z)$?
Question: The time-independent Schrodinger equation for the problem of charged particles in a uniform magnetic field ${\vec B}=B{\hat k}$, in the Coulomb gauge ${\vec A}=(-yB,0,0)$, reduces to the following differential equation $$\left[-\frac{\hbar^2\nabla^2}{2m}+\frac{e^2y^2B^2}{2m}-\frac{i\hbar eyB}{m}\frac{\partial}{\partial x}\right]\psi(x,y,z)=E\psi(x,y,z).$$ It can be solved by assuming a trial solution of the form $$\psi(x,y,z)=f(y)\exp [i(k_xx + k_zz)].$$ What motivates this trial solution? I can guess the part $\exp(ik_zz)$. What motivates the plane wave part $\exp(ik_xx)$ when the Hamiltonian couples $y$ with $p_x=-i\hbar\frac{\partial}{\partial x}$? Answer: The (not entirely) physical reason to try that ansatz is the fact that the Hamiltonian commutes with both $p_x$ and $p_z$, and they obviously commute between themselves: $$ \begin{cases} [p_x, H]=0\\ [p_z, H]=0\\ [p_x, p_z]=0 \end{cases} $$ hence $\{p_x,p_z,H\}$ is a set of commuting observables. As far as the $\hat{x}$ and $\hat{z}$ directions are concerned, we then expect the eigenfunctions of $H$ to be plane waves. You can easily see that $[p_y,H]\neq 0$, as $H$ contains $y$ explicitly, so you have to include an unknown function $f(y)$ in the trial solution. Anyway, with a different choice of gauge, for instance $\vec{A}=(0,Bx,0)$, you can switch the roles of $x,y,z$. Note that this affects only the wave function, and not the energy levels of the system, as one expects from the fact that the choice of $\vec{A}$ is arbitrary. I hope this can be of some help!
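Carrying the ansatz through (a sketch, not part of the original answer): substituting $\psi=f(y)e^{i(k_xx+k_zz)}$ into the equation above turns it into a shifted harmonic oscillator in $y$, $$\left[-\frac{\hbar^2}{2m}\frac{d^2}{dy^2}+\frac{e^2B^2}{2m}\left(y+\frac{\hbar k_x}{eB}\right)^2\right]f(y)=\left(E-\frac{\hbar^2k_z^2}{2m}\right)f(y),$$ so with $\omega_c=eB/m$ the energies are the Landau levels $E=\hbar\omega_c\left(n+\tfrac12\right)+\frac{\hbar^2k_z^2}{2m}$, independent of $k_x$; the quantum number $k_x$ only shifts the center of the oscillator.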
{ "domain": "physics.stackexchange", "id": 75450, "tags": "quantum-mechanics, wavefunction, schroedinger-equation, fourier-transform, differential-equations" }
how to augment a speech sentiment dataset?
Question: I am building an LSTM to recognize if the person is sad, happy, angry or neutral. This is done by feeding the wave of his voice into the network, as a sequence of bytes (each byte is 0-to-255). The problem is, my dataset is not large enough; are there efficient ways I could augment my dataset? I am training on short 1.5 second clips and I have 800 of those, which is not enough. My current augmentation is: to add variations in volume, and to add a bit of white noise, which makes it worse :( Reversing the sequences doesn't seem to be applicable; after all, my network will be predicting non-reversed speech when it's fully trained. Answer: Your problem is to identify whether a person is sad, happy, angry or neutral from his voice, which is an emotion classification problem. For speech, we use frames of short duration like 10-20 ms, and extract features from the same like MFCC or other frequency domain features. We extract short duration frames because speech is non-stationary over time, and frequency domain features make the features shift-invariant. I suggest you read some of the latest research papers on emotion classification to get the latest research in emotion classification. Emotion in speech is captured by variation in pitch and amplitude across time. So capturing short time energy and instantaneous pitch frequency in speech across time are basic features for emotion classification. Modifying speech by adding variation in volume and white noise will not help at all. You need to build an emotion classification system with 4 classes: sad, happy, angry or neutral, using the below steps: Extract frames of 10-30 ms duration Convert these frames to frequency domain features like STFT or extract instantaneous pitch Use these STFT features along with pitch to train an LSTM
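A minimal sketch of the framing step plus one simple feature (short-time energy) in pure Python; the sample rate, window length and hop size here are illustrative, and a real pipeline would compute MFCCs with a DSP library instead:

```python
def frame_signal(samples, frame_len, hop):
    """Split a sample sequence into overlapping fixed-length frames."""
    return [samples[i:i + frame_len]
            for i in range(0, len(samples) - frame_len + 1, hop)]

def short_time_energy(frame):
    """Mean squared amplitude of one frame (samples centered around 0)."""
    return sum(x * x for x in frame) / len(frame)

# e.g. 1.5 s of 8 kHz audio: 20 ms frames (160 samples), 10 ms hop (80).
signal = [0.0] * 12000  # stand-in for a real clip
frames = frame_signal(signal, frame_len=160, hop=80)
features = [short_time_energy(f) for f in frames]
print(len(features))  # one energy value per frame, one LSTM timestep each
```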
{ "domain": "datascience.stackexchange", "id": 3160, "tags": "sentiment-analysis, rnn" }
General approaches for grouping a continuous variable based on text data?
Question: I have a general methodological question. I have two columns of data, with one column a numeric variable for age and another column a short character variable for text responses to a question. My goal is to group the age variable (that is, create cut points for the age variable), based on the text responses. I'm unfamiliar with any general approaches for doing this sort of analysis. What general approaches would you recommend? Ideally I'd like to categorize the age variable based on linguistic similarity of the text responses. Answer: Since it is a general methodological question, let's assume we have only one text-based variable - total number of words in a sentence. First of all, it's worth visualizing your data. I will pretend I have the following data: Here we see a slight dependency between age and number of words in responses. We may assume that young people (approx. between 12 and 25) tend to use 1-4 words, while people of age 25-35 try to give longer answers. But how do we split these points? I would do it something like this: In a 2D plot it looks pretty straightforward, and this is how it works most of the time in practice. However, you asked for splitting data by a single variable - age. That is, something like this: Is it a good split? I don't know. In fact, it depends on your actual needs and interpretation of the "cut points". That's why I asked about the concrete task. Anyway, this interpretation is up to you. In practice, you will have many more text-based variables. E.g. you can use every word as a feature (don't forget to stem or lemmatize it first) with values from zero to the number of occurrences in the response. Visualizing high-dimensional data is not an easy task, so you need a way to discover groups of data without plotting them. Clustering is a general approach for this. Though clustering algorithms may work with data of arbitrary dimensionality, we still have only 2D to plot it, so let's come back to our example.
With an algorithm like k-means you can obtain 2 groups like this: the two dots - red and blue - show the cluster centres calculated by k-means. You can use the coordinates of these points to split your data by any subset of axes, even if you have 10k dimensions. But again, the most important question here is: what linguistic features will provide a reasonable grouping of ages?
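The clustering step described above can be sketched with a tiny k-means implementation (NumPy only; the (age, word-count) pairs below are made up purely for illustration, not taken from the question):

```python
import numpy as np

def kmeans(points, centers, iters=20):
    """Plain k-means: repeatedly assign every point to its nearest
    center, then move each center to the mean of its assigned points."""
    centers = np.asarray(centers, dtype=float).copy()
    for _ in range(iters):
        # squared distances: points (n, d) vs centers (k, d) -> (n, k)
        d = ((points[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
        labels = d.argmin(axis=1)
        for k in range(len(centers)):
            if np.any(labels == k):
                centers[k] = points[labels == k].mean(axis=0)
    return centers, labels

# hypothetical (age, words-in-response) pairs forming two loose groups
data = np.array([[15, 2], [18, 3], [20, 2], [22, 4],
                 [28, 8], [30, 9], [33, 7], [35, 10]], dtype=float)
centers, labels = kmeans(data, centers=[[16, 2], [34, 9]])
```

The age coordinate of each returned center is then a candidate cut point for the age variable, in the spirit of the split sketched in the answer.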
{ "domain": "datascience.stackexchange", "id": 114, "tags": "bigdata, clustering, text-mining" }
RosAria_pose orientation
Question: I am using /RosAria/pose to get position (x,y) and orientation (w). Does anyone know how to convert the w to grades? Originally posted by acp on ROS Answers with karma: 556 on 2013-05-18 Post score: 0 Answer: You probably mean degrees. You need a quaternion to degrees conversion tool. Or use for instance the bullet class. edit: tf::Pose pose; tf::poseMsgToTF(odom->pose.pose, pose); double yaw_angle = tf::getYaw(pose.getRotation()); from here Originally posted by davinci with karma: 2573 on 2013-05-18 This answer was ACCEPTED on the original site Post score: 0 Original comments Comment by acp on 2013-05-20: Aha :), sorry for the mistake, yea, I mean degrees, do you have a piece of code I can use? :) Comment by acp on 2013-05-22: Thank you for the piece of code :)
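As a side note, the w component alone does not determine the heading; you need at least z as well. A small Python sketch of the conversion (the function name is illustrative; it mirrors what tf::getYaw extracts for a planar pose, where x and y are ~0):

```python
import math

def quaternion_to_yaw_degrees(x, y, z, w):
    """Yaw (rotation about z) encoded by a unit quaternion, in degrees.
    For a planar robot, x and y are ~0 and this reduces to 2*atan2(z, w)."""
    yaw = math.atan2(2.0 * (w * z + x * y), 1.0 - 2.0 * (y * y + z * z))
    return math.degrees(yaw)

# quaternion for a 90-degree turn about z: w = cos(45 deg), z = sin(45 deg)
angle = quaternion_to_yaw_degrees(0.0, 0.0, math.sin(math.pi / 4),
                                  math.cos(math.pi / 4))
```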
{ "domain": "robotics.stackexchange", "id": 14216, "tags": "ros" }
Initial condition distributions for phase space densities
Question: I'm having difficulty understanding Liouville's theorem: I have seen the quantity $\rho(p,q;t)$ defined (i) using ensemble theory as the probability density function over realizations/copies of an orbit in phase space and described (ii) as the density of neighboring orbits in phase space corresponding to many particles. I don't understand description (ii); if, for example, orbits are calculated using Hamilton's equations, shouldn't each particle evolve along a collection of trajectories defined using various initial data? How can we construct a density in this case? Answer: Let me first note that $p,q$ here are not coordinates of a single particle, but of all the particles in the system, and a point in phase space is a state of the system. Now, an important assumption in statistical physics is ergodicity - that is, that averaging over time can be replaced by ensemble averaging. In other words, instead of observing a long trajectory of a single system, we can consider trajectories of many systems with different initial conditions. This also implies that the trajectory of the single system would be long enough to visit all the accessible points in the phase space.
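For reference (this equation is not quoted in the answer above), the statement that ties the two descriptions together is Liouville's theorem: the density is conserved along Hamiltonian trajectories,

$$ \frac{d\rho}{dt}=\frac{\partial\rho}{\partial t}+\sum_{i}\left(\frac{\partial\rho}{\partial q_{i}}\dot{q}_{i}+\frac{\partial\rho}{\partial p_{i}}\dot{p}_{i}\right)=\frac{\partial\rho}{\partial t}+\{\rho,H\}=0 $$

so an initial distribution of systems over phase space evolves like an incompressible fluid, which is what makes descriptions (i) and (ii) compatible.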
{ "domain": "physics.stackexchange", "id": 93758, "tags": "classical-mechanics, statistical-mechanics, hamiltonian-formalism, phase-space" }
What is the purpose of Sequence Length parameter in RNN (specifically on PyTorch)?
Question: I am trying to understand RNNs. I have a good sense of how they work in theory. But then in PyTorch you have two extra dimensions to your input data: batch size (number of batches) and sequence length. The model I am working on is a simple one-to-one model: it takes in a letter then estimates the following letter. The model is provided here. First, please correct me if I am wrong about the following: Batch size is used to divide the data into batches and feed it into the model running in parallel. At least this was the case in regular NNs and CNNs. This way we take advantage of the processing power. It is not "ideal" in the sense that in theory for an RNN you just go from one end to the other in an unbroken chain. But I could not find much information on sequence length. From what I understand, it breaks the data into the lengths we provide, instead of keeping it as an unbroken chain. Then it unrolls the model for the length of that sequence. If it is 50, it calculates the model for a sequence of 50. Let's think about the first sequence. We initialize a random hidden state, the model first does a forward run on these 50 inputs, then does backpropagation. But my question is, then what happens? Why don't we just continue? What happens when it starts the new sequence? Does it initialize a random hidden state for the next sequence, or does it use the hidden state calculated from the very last entry of the previous sequence? Why do we do that, and not just have one big sequence? Doesn't this break the continuity of the model? I read somewhere it is also memory related; if you put the whole text as one sequence, the gradient calculation would take all the memory, it said. Does it mean it resets the gradients after each sequence? Thank you very much for the answers. Answer: The RNN receives as input a batch of sequences of characters. The output of the RNN is a tensor with sequences of character predictions, of exactly the same size as the input tensor.
The number of sequences in each batch is the batch size. Every sequence in a single batch must be the same length. In this case, all sequences of all batches have the same length, defined by seq_length. Each position of the sequence is normally referred to as a "time step". When back-propagating an RNN, you collect gradients through all the time steps. This is called "back-propagation through time (BPTT)". You could have a single super long sequence, but the memory required for that would be large, so normally you must choose a maximum sequence length. To somewhat mitigate the need to cut the sequences, people normally apply something called "truncated BPTT". That is what the code you linked uses. It consists of having the sequences in the batches arranged so that each of the sequences in the next batch is the continuation of the text from each of the sequences in the previous batch, together with reusing the last hidden state of the previous batch as the initial hidden state of the next one.
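The batch layout just described, where sequence i of batch b+1 continues sequence i of batch b, can be sketched in plain Python (function and variable names here are illustrative, not taken from the linked code):

```python
def make_contiguous_batches(text, batch_size, seq_length):
    """Arrange text for truncated BPTT: split it into batch_size
    parallel streams, then cut each stream into seq_length chunks.
    Batch b holds chunk b of every stream, so sequence i of batch b+1
    continues exactly where sequence i of batch b left off."""
    stream_len = len(text) // batch_size
    streams = [text[i * stream_len:(i + 1) * stream_len]
               for i in range(batch_size)]
    n_batches = stream_len // seq_length
    return [[s[b * seq_length:(b + 1) * seq_length] for s in streams]
            for b in range(n_batches)]

text = "the quick brown fox jumps over the lazy dog " * 4  # 176 chars
batches = make_contiguous_batches(text, batch_size=2, seq_length=10)
```

With this layout the training loop can carry the hidden state from one batch into the next; in PyTorch code that carried state is typically detached (e.g. hidden = hidden.detach()) so gradients stop at the batch boundary, which is the "truncation" in truncated BPTT.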
{ "domain": "datascience.stackexchange", "id": 10842, "tags": "lstm, rnn, pytorch" }
[Morse simulation] Module not found: No module named 'morse.middleware.ros_request_manager'
Question: Hi there, I'm trying to use the Morse simulation but ran into some problems while running the tutorial "examples/tutorials/tutorial-1-ros.py". The error is as follows: [helpers.loading] Module not found: No module named 'morse.middleware.ros_request_manager' [helpers.loading] Could not load the class RosRequestManager in morse.middleware.ros_request_manager [helpers.loading] Could not create an instance of morse.middleware.ros_request_manager.RosRequestManager [core.services] Request Manager morse.middleware.ros_request_manager.RosRequestManager not found. Check for typos in the configuration file! I've already installed Python 3.4 and the other elements the tutorial indicates. link text And also referred to the answer: link text But still cannot solve the problem. Any ideas about this problem? Originally posted by jiangqueque on ROS Answers with karma: 1 on 2016-12-28 Post score: 0 Original comments Comment by gvdhoorn on 2016-12-28: This would really seem to be a problem with Morse and one of its plugins, would you agree? I would suggest you ask this question on a Morse support forum, as you stand a much better chance of getting (good) answers. Answer: Hello! Can you tell us: What version of Ubuntu & ROS are you using? How have you installed MORSE? For quite a while now, MORSE has supported ROS 'out of the box', and does not require any particular ROS configuration steps. Originally posted by severin with karma: 240 on 2016-12-28 This answer was ACCEPTED on the original site Post score: 0 Original comments Comment by jiangqueque on 2017-01-05: Ubuntu 14.04 Ros 1.11.19 I've already installed MORSE, and launched a simple simulation by using the commands: $ morse create mysim $ morse run mysim I want to develop a vehicle dynamics model with MORSE on ROS, so I need to use ROS modules. Does MORSE support my Ubuntu & ROS version?
{ "domain": "robotics.stackexchange", "id": 26587, "tags": "ros, morse, python3" }
Launching the JointGroupVelocityController
Question: I have some problems launching the JointGroupVelocityController. This was my .yaml file before: joint_state_controller: type: joint_state_controller/JointStateController publish_rate: 50 joint_fr_motor_controller: type: velocity_controllers/JointVelocityController joint: joint_front_right_prop pid: {p: 10000, i: 1, d: 1000} joint_fl_motor_controller: type: velocity_controllers/JointVelocityController joint: joint_front_left_prop pid: {p: 10000, i: 1, d: 1000} joint_br_motor_controller: type: velocity_controllers/JointVelocityController joint: joint_back_right_prop pid: {p: 10000, i: 1, d: 1000} joint_bl_motor_controller: type: velocity_controllers/JointVelocityController joint: joint_back_left_prop pid: {p: 10000, i: 1, d: 1000} In this way, I cannot provide velocities SIMULTANEOUSLY to all four joints. I have a .launch file that loads all these controllers, using the controller_manager pkg. This is the snippet of the controller_manager in my .launch file: <group ns="Kwad"> .................. <node name="control_spawner" pkg="controller_manager" type="spawner" respawn="false" output="screen" args="--namespace=/Kwad /Kwad/joint_state_controller /Kwad/joint_fr_motor_controller /Kwad/joint_fl_motor_controller /Kwad/joint_bl_motor_controller /Kwad/joint_br_motor_controller" /> .................... </group> I came across the JointGroupVelocityController, which provides multiple velocities to the joints SIMULTANEOUSLY, so I want to know how to change the .yaml file and the .launch file accordingly. This is my updated .yaml file, but I am not sure if it's correct: joint_state_controller: type: joint_state_controller/JointStateController publish_rate: 50 joint_motor_controller: type: velocity_controllers/JointGroupVelocityController joint: - joint_front_right_prop - joint_front_left_prop - joint_back_left_prop - joint_back_right_prop pid: {p: 10000, i: 1, d: 1000} Can someone please help me out? Thank you. 
Originally posted by Jarvis1997 on ROS Answers with karma: 81 on 2019-07-01 Post score: 0 Answer: I found the solution. The .yaml file must be as follows: joint_state_controller: type: joint_state_controller/JointStateController publish_rate: 50 joint_motor_controller: type: velocity_controllers/JointGroupVelocityController joints: - joint_front_right_prop - joint_front_left_prop - joint_back_left_prop - joint_back_right_prop gains: joint_front_right_prop: {p: 10000, i: 1, d: 1000} joint_front_left_prop: {p: 10000, i: 1, d: 1000} joint_back_left_prop: {p: 10000, i: 1, d: 1000} joint_back_right_prop: {p: 10000, i: 1, d: 1000} Make sure to change "joint" to "joints". These must include the joints you have defined in the .xacro file. In my case, they are joint_front_right_prop, joint_front_left_prop, joint_back_left_prop, joint_back_right_prop. Also, my launch file is: <node name="control_spawner" pkg="controller_manager" type="spawner" respawn="false" output="screen" args="--namespace=/Kwad joint_state_controller joint_motor_controller" /> This launches all four joints at the same time, and SIMULTANEOUSLY publishes velocities to these four joints. The command initially was: rostopic pub -1 /Kwad/joint_front_right_prop/command std_msgs/Float64 "data: 10" I had to repeat this command 4 times for 4 different joints. Now it is as follows, one command for four joints: rostopic pub -1 /Kwad/joint_motor_controller/command std_msgs/Float64MultiArray "data: [10, 10, 10, 10]" This publishes a velocity of 10 units (in the Gazebo world) to all 4 joints, SIMULTANEOUSLY. I hope this helps anyone in need. Thank you!! Originally posted by Jarvis1997 with karma: 81 on 2019-07-01 This answer was ACCEPTED on the original site Post score: 4
{ "domain": "robotics.stackexchange", "id": 33303, "tags": "ros-melodic" }
Create palindrome by rearranging letters of a word
Question: Inspired by a recent question that caught my interest (now deleted), I wrote a function in Python 3 to rearrange the letters of a given string to create a (any!) palindrome: Count the occurrences of each letter in the input Iterate over the resulting (letter, occurrences) tuples only once: If the occurrences are even, we remember them to add them around the center later A valid palindrome can contain only 0 or 1 letter(s) with an odd number of occurrences, so if we've already found the center, we raise an exception. Otherwise, we save the new-found center for later Finally, we add the sides around the center and join it to create the resulting palindrome string. I'm looking for feedback on every aspect you can think of, including readability (including docstrings), how appropriate the data structures are, whether the algorithm can be expressed in simpler terms (or replaced altogether) and the quality of my tests. Implementation: palindromes.py from collections import deque, Counter def palindrome_from(letters): """ Forms a palindrome by rearranging :letters: if possible, throwing a :ValueError: otherwise. 
:param letters: a suitable iterable, usually a string :return: a string containing a palindrome """ counter = Counter(letters) sides = [] center = deque() for letter, occurrences in counter.items(): repetitions, odd_count = divmod(occurrences, 2) if not odd_count: sides.append(letter * repetitions) continue if center: raise ValueError("no palindrome exists for '{}'".format(letters)) center.append(letter * occurrences) center.extendleft(sides) center.extend(sides) return ''.join(center) Unit tests: test_palindromes.py (using py.test; the file also needs import pytest and from palindromes import palindrome_from at the top) def test_empty_string_is_palindrome(): assert palindrome_from('') == '' def test_whitespace_string_is_palindrome(): whitespace = ' ' * 5 assert palindrome_from(whitespace) == whitespace def test_rearranges_letters_to_palindrome(): assert palindrome_from('aabbb') == 'abbba' def test_raises_exception_for_incompatible_input(): with pytest.raises(ValueError) as error: palindrome_from('asdf') assert "no palindrome exists for 'asdf'" in error.value.args Manual testing in the console while True: try: word = input('Enter a word: ') print(palindrome_from(word)) except ValueError as e: print(*e.args) except EOFError: break Answer: from collections import deque, Counter def palindrome_from(letters): """ Forms a palindrome by rearranging :letters: if possible, throwing a :ValueError: otherwise. :param letters: a suitable iterable, usually a string :return: a string containing a palindrome """ counter = Counter(letters) sides = [] center = deque() for letter, occurrences in counter.items(): repetitions, odd_count = divmod(occurrences, 2) odd_count is a bit of a strange name, because it's just whether it's odd or even, not really an odd_count if not odd_count: sides.append(letter * repetitions) continue Avoid using continue, favor putting the rest of the loop in an else block. 
It's easier to follow that way. if center: raise ValueError("no palindrome exists for '{}'".format(letters)) center.append(letter * occurrences) center.extendleft(sides) center.extend(sides) Avoid reusing variables for something different. Changing center to be the whole phrase isn't all that good an idea. I suggest using itertools.chain(sides, center, reversed(sides)) and then joining that. return ''.join(center)
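Folding the reviewer's suggestions together, one possible revision looks like this (a sketch, not code from the answer; it also differs slightly from the original in that an odd-count letter contributes its pairs to the sides and only one copy to the middle):

```python
from collections import Counter
from itertools import chain

def palindrome_from(letters):
    """Form a palindrome by rearranging `letters`,
    raising ValueError if none exists."""
    sides = []
    center = ''
    for letter, occurrences in Counter(letters).items():
        repetitions, is_odd = divmod(occurrences, 2)
        sides.append(letter * repetitions)
        if is_odd:
            if center:
                raise ValueError(
                    "no palindrome exists for '{}'".format(letters))
            center = letter  # the single odd-count letter sits in the middle
    return ''.join(chain(sides, center, reversed(sides)))
```

Since `reversed(sides)` mirrors the left half, the result is a palindrome by construction; 'aabbb' still yields 'abbba'.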
{ "domain": "codereview.stackexchange", "id": 3701, "tags": "python, algorithm, palindrome" }
regex for JSON only for one nested data
Question: Hi guys, do you know how to change this regex so that it only looks for test: Answer: You need to apply two regexes: first, get r'^test:.*$' with the m (multiline) option, then run your original regex on the result of the first.
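A sketch of that two-step approach in Python; the sample text and the second pattern here are only stand-ins, since the original data and regex aren't shown in the question:

```python
import re

text = 'name: foo\ntest: {"a": 1, "b": 2}\nother: bar'

# step 1: the m (re.MULTILINE) option makes ^ and $ match at line
# boundaries, so this grabs only the line(s) starting with "test:"
test_lines = re.findall(r'^test:.*$', text, re.MULTILINE)

# step 2: run the original regex on that result alone; this placeholder
# pattern pulls the quoted keys out of the JSON-like payload
keys = re.findall(r'"(\w+)"\s*:', test_lines[0])
```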
{ "domain": "datascience.stackexchange", "id": 8362, "tags": "python, regex" }
Standardizes Rows of a Pandas.DataFrame Object
Question: Please take a look at this little snippet of code and say whether there are any efficiency enhancements that you'd make to it. It standardizes a row of a pandas.DataFrame by adding zeros to it so that the total length of the datum is the same as the longest datum in the column. Sample "solubility.csv" file for testing SMILES,Solubility [Br-].CCCCCCCCCCCCCCCCCC[N+](C)(C)C,-3.6161271205000003 O=C1Nc2cccc3cccc1c23,-3.2547670983 [Zn++].CC(c1ccccc1)c2cc(C(C)c3ccccc3)c(O)c(c2)C([O-])=O.CC(c4ccccc4)c5cc(C(C)c6ccccc6)c(O)c(c5)C([O-])=O,-3.9244090954 C1OC1CN(CC2CO2)c3ccc(Cc4ccc(cc4)N(CC5CO5)CC6CO6)cc3,-4.6620645831 def generate_standard(): dataframe = pd.read_csv('solubility.csv', usecols = ['SMILES','Solubility']) dataframe['standard'],longest = '','' for _ in dataframe['SMILES']: if len(str(_)) > len(longest): longest = str(_) continue # index,row in dataframe for index,row in dataframe.iterrows(): # datum from column called 'SMILES' smi = row['SMILES'] # zeros = to difference between longest datum and current datum zeros = (0 for x in range(len(longest) - len(str(smi)))) # makes the zeros into type str zeros_as_str = ''.join(str(x) for x in zeros) # concatenate the two str std = str(smi) + zeros_as_str # and place it in a new column called standard at the current index dataframe.at[index,'standard'] = std return dataframe,longest Answer: Ways to improve and optimize: The function generate_standard is named too generally, whereas it loads a concrete csv file 'solubility.csv' and processes a particular column 'SMILES'. A more maintainable and flexible way is to make the function more decoupled and unified such that it accepts an input csv file name and the crucial column name.
The main purpose can be described as "aligning a specific column's values length by the longest value". Let's give the function an appropriate name with the following signature: def align_col_length(fname, col_name): """Align a specified column's values length by the longest value""" df = pd.read_csv(fname) ... Finding the longest string value in the SMILES column. Instead of a for loop, since you are dealing with pandas, which is powerful enough and allows the str accessor on pd.Series objects (pointing to column values in our case): "Series and Index are equipped with a set of string processing methods that make it easy to operate on each element of the array. Perhaps most importantly, these methods exclude missing/NA values automatically. These are accessed via the str attribute and generally have names matching the equivalent (scalar) built-in string methods." Thus, applying the flexible chain df[col_name].str.len() + pd.Series.idxmax, we would be able to get the row label/position of the maximum value. Then, just easily get the column's longest value by the row position using: longest = df.loc[df[col_name].str.len().idxmax(), col_name] New dataframe['standard'] column with padded values. 
It only remains to apply a flexible one-liner using the pd.Series.str.pad routine (to pad the col_name values up to the width of len(longest)). The final optimized function now becomes more concise and pandas-flavored: import pandas as pd def align_col_length(fname, col_name): """Align a specified column's values length by the longest value""" df = pd.read_csv(fname) longest = df.loc[df[col_name].str.len().idxmax(), col_name] df['standard'] = df[col_name].str.pad(len(longest), side='right', fillchar='0') return df, longest Testing: df, longest = align_col_length('solubility.csv', col_name='SMILES') print('longest SMILE value:', longest) print('*' * 30) # just visual separator print(df) The output: longest SMILE value: [Zn++].CC(c1ccccc1)c2cc(C(C)c3ccccc3)c(O)c(c2)C([O-])=O.CC(c4ccccc4)c5cc(C(C)c6ccccc6)c(O)c(c5)C([O-])=O ****************************** SMILES ... standard 0 [Br-].CCCCCCCCCCCCCCCCCC[N+](C)(C)C ... [Br-].CCCCCCCCCCCCCCCCCC[N+](C)(C)C000000000000000000000000000000000000000000000000000000000000000000000 1 O=C1Nc2cccc3cccc1c23 ... O=C1Nc2cccc3cccc1c23000000000000000000000000000000000000000000000000000000000000000000000000000000000000 2 [Zn++].CC(c1ccccc1)c2cc(C(C)c3ccccc3)c(O)c(c2)C([O-])=O.CC(c4ccccc4)c5cc(C(C)c6ccccc6)c(O)c(c5)C([O-])=O ... [Zn++].CC(c1ccccc1)c2cc(C(C)c3ccccc3)c(O)c(c2)C([O-])=O.CC(c4ccccc4)c5cc(C(C)c6ccccc6)c(O)c(c5)C([O-])=O 3 C1OC1CN(CC2CO2)c3ccc(Cc4ccc(cc4)N(CC5CO5)CC6CO6)cc3 ... C1OC1CN(CC2CO2)c3ccc(Cc4ccc(cc4)N(CC5CO5)CC6CO6)cc300000000000000000000000000000000000000000000000000000 [4 rows x 3 columns]
{ "domain": "codereview.stackexchange", "id": 36442, "tags": "python, strings, pandas" }
Pressure-temperature dependence of a pure substance
Question: Saturation temperature is defined as the temperature at which a pure substance, say water, changes phase at a fixed pressure. So water boils at 100°C if the external pressure is kept constant at 101.3 kPa (1 atm). At 12.35 kPa water boils at 50°C, which indicates that there is a pressure-temperature dependence at saturation points, but is this p-t dependence driven by pressure only? I.e. can we keep the pressure fixed at 1 atm and still boil water at a temperature other than 100°C, by increasing the heat supply and hence increasing the temperature? Also, is this pressure-temperature dependence a consequence of the Gibbs phase rule? Answer: I guess you cannot increase the fluid temperature while keeping the pressure constant when it is boiling. The temperature is fixed once the pressure is fixed, as determined by the fluid's vapor-pressure curve. Practically, it is more feasible, as you said, to adjust the pressure. For example, in a pressure cooker, the weight is used to adjust the pressure and thus the cooking (boiling) temperature is increased.
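For completeness (not part of the original answer), the slope of the saturation curve is set by the Clausius-Clapeyron relation,

$$ \frac{dP}{dT}=\frac{L}{T\,\Delta v}\approx\frac{LP}{RT^{2}} \quad\Rightarrow\quad P(T)\approx P_{0}\exp\left[-\frac{L}{R}\left(\frac{1}{T}-\frac{1}{T_{0}}\right)\right] $$

where the approximate form assumes an ideal vapor, negligible liquid volume, and a roughly constant latent heat $L$; this one-to-one curve is why fixing the pressure fixes the boiling temperature.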
{ "domain": "physics.stackexchange", "id": 47319, "tags": "thermodynamics, phase-transition, phase-diagram" }
More DRY way of conditionally filling variables?
Question: I'm building a chart component and it behaves differently if data has been passed in or not. Here are two ways I can think of to conditionally fill chart variables. (data will be empty {} or have non-empty values) // helper function isEmptyObject(obj) { for (const i in obj) return false; return true; } const gridInterval = 5; let yTopTickValue; let yBottomTickValue; let yTickCount; let xTopTickValue; let xBottomTickValue; let xTickCount; if (!isEmptyObject(data)) { yTopTickValue = Math.ceil(data.yMax / gridInterval) * gridInterval; yBottomTickValue = Math.floor(data.yMin / gridInterval) * gridInterval; yTickCount = (yTopTickValue - yBottomTickValue) / gridInterval + 1; xTopTickValue = Math.ceil(data.yMax / gridInterval) * gridInterval; xBottomTickValue = Math.floor(data.yMin / gridInterval) * gridInterval; xTickCount = (xTopTickValue - xBottomTickValue) / gridInterval + 1; } and const gridInterval = 5; const isDataEmpty = isEmptyObject(data); const yTopTickValue = isDataEmpty ? null: Math.ceil(data.yMax / gridInterval) * gridInterval; const yBottomTickValue = isDataEmpty ? null: Math.floor(data.yMin / gridInterval) * gridInterval; const yTickCount = isDataEmpty ? null: (yTopTickValue - yBottomTickValue) / gridInterval + 1; const xTopTickValue = isDataEmpty ? null: Math.ceil(data.yMax / gridInterval) * gridInterval; const xBottomTickValue = isDataEmpty ? null: Math.floor(data.yMin / gridInterval) * gridInterval; const xTickCount = isDataEmpty ? null: (xTopTickValue - xBottomTickValue) / gridInterval + 1; Are there any other clever versions I'm missing? Answer: Based on @CertainPerformance's feedback, here is where I landed. This is the most DRY version possible. 
const chartSettings = (gridInterval, data) => { if (isEmptyObject(data)) return; const ticks = { yTopTick: Math.ceil(+data.yMax / gridInterval) * gridInterval, yBottomTick: Math.floor(+data.yMin / gridInterval) * gridInterval, xTopTick: Math.ceil(+data.yMax / gridInterval) * gridInterval, xBottomTick: Math.floor(+data.yMin / gridInterval) * gridInterval, }; return { ...ticks, yTickCount: 1 + (ticks.yTopTick - ticks.yBottomTick) / gridInterval, xTickCount: 1 + (ticks.xTopTick - ticks.xBottomTick) / gridInterval, }; };
{ "domain": "codereview.stackexchange", "id": 39581, "tags": "javascript" }
Asus Xtion camera calibration fails to start
Question: Hi, I'm trying to calibrate Asus Xtion following tutorial in openni_launch package. I'm getting following error: $ rosrun camera_calibration cameracalibrator.py --size 8x6 --square 0.030 image:=/camera/rgb/image_raw camera:=/camera/rgb Waiting for service /camera/rgb/set_camera_info ... OK Exception in thread Thread-3: Traceback (most recent call last): File "/usr/lib/python2.6/threading.py", line 532, in __bootstrap_inner self.run() File "/opt/ros/electric/stacks/image_pipeline/camera_calibration/nodes/cameracalibrator.py", line 68, in run self.function(m) File "/opt/ros/electric/stacks/image_pipeline/camera_calibration/nodes/cameracalibrator.py", line 133, in handle_monocular drawable = self.c.handle_msg(msg) File "/opt/ros/electric/stacks/image_pipeline/camera_calibration/src/camera_calibration/calibrator.py", line 684, in handle_msg rgb = self.mkgray(msg) File "/opt/ros/electric/stacks/image_pipeline/camera_calibration/src/camera_calibration/calibrator.py", line 233, in mkgray rgb = self.br.imgmsg_to_cv(msg, "bgr8") File "/opt/ros/electric/stacks/vision_opencv/cv_bridge/src/cv_bridge/cv_bridge.py", line 106, in imgmsg_to_cv source_type = self.encoding_as_cvtype(img_msg.encoding) File "/opt/ros/electric/stacks/vision_opencv/cv_bridge/src/cv_bridge/cv_bridge.py", line 54, in encoding_as_cvtype return eval("cv.CV_%s" % encoding) File "<string>", line 1, in <module> AttributeError: 'module' object has no attribute 'CV_yuv422' I'm running electric. 
Originally posted by liborw on ROS Answers with karma: 801 on 2012-05-31 Post score: 1 Answer: Hi, I had the same problem (using Fuerte), but I was able to calibrate my Asus Xtion by changing the launch call to: rosrun camera_calibration cameracalibrator.py --size 10x7 --square 0.025 image:=/camera/rgb/image_mono camera:=/camera/rgb --no-service-check for the RGB camera, and: rosrun camera_calibration cameracalibrator.py --size 10x7 --square 0.025 image:=/camera/ir/image_raw camera:=/camera/ir --no-service-check for the IR camera. Hope this helps. Originally posted by Smot with karma: 61 on 2012-07-09 This answer was ACCEPTED on the original site Post score: 3 Original comments Comment by tbernhard on 2012-07-11: (Also on Fuerte) Worked for me, thanks! I didn't need the --no-service-check, just the switch from image:=/camera/rgb/image_raw to /camera/rgb/image_mono
{ "domain": "robotics.stackexchange", "id": 9627, "tags": "calibration, openni, xtion, camera-calibration" }
An exit condition for a do-while loop using lists
Question: I have an integer list l and a predefined integer value row (which is never manipulated inside the code). If the value of row is 5, then I want the loop to exit if l contains 1,2,3,4. If it is 3, then l should contain 1,2 and so on for any value. I have devised a way of doing this, but since I mean to use this in an app, can someone tell me a better way of doing it? do { } while(check(row, l)) boolean check(int row, list<int> l) { for(int i=1;i<row;i++) { if(l.contains(i)) continue; else return true; } return false; } Answer: check is not a good name for the function as it does not say what it checks. It returns false if all numbers below row are present and true otherwise. So a better name might be numbersAreMissing. You can make your loop body a bit shorter by inverting the condition: for (int i = 1; i < row; ++i) { if (!l.contains(i)) { return true; } } return false; You should really think about whether or not this check is something you want to do on every loop iteration. Especially if row is a bit larger this gets wasteful. There are probably better ways to organize your data so this check can be avoided but that entirely depends on the actual problem you are trying to solve.
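To illustrate the reviewer's last point about reorganizing the data (shown here in Python as a neutral sketch, since the exact C# context isn't given): if the numbers seen so far are kept in a set, the whole exit check collapses to one subset test instead of a linear scan per loop iteration.

```python
def numbers_are_missing(row, seen):
    """True if any of 1 .. row-1 is absent from `seen` (a set)."""
    return not set(range(1, row)).issubset(seen)

# the loop-exit condition from the question, as one subset test
missing = numbers_are_missing(5, {1, 2, 3})      # 4 is still missing
done = not numbers_are_missing(5, {1, 2, 3, 4})  # all present, loop may exit
```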
{ "domain": "codereview.stackexchange", "id": 5504, "tags": "c#" }
What is the difference between those two calculations? Drude model
Question: I stumbled upon 2 questions whose physical difference I cannot really understand. Question 1: Use the Drude model and its assumptions: A metal is found at constant temperature and in an external electric field $ \overrightarrow{E} $. A valence electron collides with an ion and then collides once again with another ion. Find the average energy loss of the electron in the second collision, given that the time distribution is $ f\left(t\right)=\frac{1}{\tau}e^{-\frac{t}{\tau}} $. This is not too hard a calculation, and the answer is $$ \langle\varepsilon\rangle=\frac{\left(eE\tau\right)^{2}}{m_{e}} $$ (where $ \langle\varepsilon\rangle $ denotes the expectation). Now there's question 2, which looks familiar but not the same: Question 2: Use the Drude model and its assumptions: A metal is found at constant temperature and in an external electric field $ \overrightarrow{E} $. A valence electron collides with an ion and after time $t$ collides with another ion. Find the average energy loss of the electron in the second collision. The answer for the second question is: $$ \frac{\left(eEt\right)^{2}}{2m_{e}} $$ I do not understand what the difference between the questions actually is (what is the difference from the physics point of view?). I'll show my calculations for the first question: By Newton's second law: $$ -\frac{e\overrightarrow{E}}{m_{e}}=\frac{d\overrightarrow{v}}{dt} $$ and thus $$ \overrightarrow{v}\left(t\right)=-\frac{e\overrightarrow{E}}{m_{e}}t+\overrightarrow{v_{0}} $$ where $ v_{0} $ is the velocity between collisions (I guess it's just the thermal velocity, and thus the expectation of this velocity is zero). The energy loss in the collision is due to the energy gained from the electric field, that is the energy: $$ \varepsilon\left(t\right)=\frac{1}{2}m_{e}\left(-\frac{e\overrightarrow{E}}{m_{e}}t\right)^{2}=\frac{1}{2m_{e}}\left(eE\right)^{2}t^{2} $$ and thus for the average energy we want to calculate the expectation. 
That is: $$\langle\varepsilon\rangle=\langle\frac{1}{2m_{e}}\left(eE\right)^{2}t^{2}\rangle=\frac{1}{2m_{e}}\left(eE\right)^{2}\langle t^{2}\rangle=\frac{1}{2m_{e}}\left(eE\right)^{2}\intop_{0}^{\infty}\frac{t^{2}}{\tau}e^{-\frac{t}{\tau}}dt=\frac{\left(eE\tau\right)^{2}}{m_{e}} $$ I'm not sure how exactly to do the calculation for the second question, because I'm not sure what I should do differently (and how exactly we are supposed to calculate the expectation of the energy when the distribution is not given). A clarification would be very helpful. Answer: In the first problem, the collision time is a random variable, and you're given a PDF telling you how this variable is distributed. The collision could take place at any time after $t=0$. In the second problem, the collision time is definite: we know the collision happens at a certain time $t=\tau$. If it helps, you could think about the collision time as a random variable with PDF \begin{align} f(t) = \delta(t - \tau), \end{align} where $\delta(t)$ is the Dirac delta distribution. If you repeat the calculation of the first problem using this distribution function, you will get the answer of the second problem.
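Spelling out the answer's hint, the second calculation is the same expectation taken with the delta-distributed collision time (here $t'$ is the integration variable, added for clarity, and $t$ is the fixed collision time):

$$ \langle\varepsilon\rangle=\intop_{0}^{\infty}\delta\left(t'-t\right)\frac{\left(eE\right)^{2}t'^{2}}{2m_{e}}dt'=\frac{\left(eEt\right)^{2}}{2m_{e}} $$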
{ "domain": "physics.stackexchange", "id": 76820, "tags": "quantum-mechanics, homework-and-exercises, solid-state-physics" }
What order is this butterfly?
Question: I found this butterfly in the Rocky Mountains of Colorado. It's obviously a lepidopteran, but I'm not sure what family this butterfly could be. I found it perched on a bush and it is around 40ish mm long. Thank you very much for any answers; I'm making an amateur insect collection :3 Edit: I was thinking it's a Riodinidae, but it doesn't quite look like it. Answer: "It's obviously a lepidopteran, but I'm not sure what family this butterfly could be." This is a "Painted Lady" butterfly, which belongs to the family Nymphalidae and the species Vanessa cardui. The following is a distribution map of the butterfly within the US, and then Colorado specifically. An interactive version of this map can be found here. "Following its annual spring migration the painted lady may be found anywhere in the state. However, it is primarily a species of fields and open areas. They are also common visitors to flowers in yards and gardens. The painted lady is one species that many school children have encountered as rearing one of these is now almost a rite of passage in elementary school classes. It is also the butterfly species commonly used for release at weddings and other celebratory events." source With this in mind, the painted lady is at no risk of being endangered.
{ "domain": "biology.stackexchange", "id": 7812, "tags": "classification" }
Can any animals photosynthesize?
Question: Plants and animals have the following distinct properties: Plants live from solar energy by photosynthesis, they use solar energy to make sugar and oxygen out of carbon dioxide, which gives them energy. Animals live from the sugar and oxygen plants created and produce carbon dioxide for their energy. Animals can move across the planet while plants are tied to the ground. Clearly, animals have a harder time surviving with no plants within their reach than plants have without animals coming close. This is logical because solar energy is always there while plants are not. So my question is: Are there animals that can do photosynthesis? It's obvious that an animal with plant-like immobility would be non-beneficial, since it relies on eating other plants for its energy and there may not always be plants within reach of its spot. But animals using the sun and carbon dioxide for energy production does not sound so stupid. Night animals could also gather energy in their sleep. Much more easily than plants, animals could make sure nothing blocks their sunlight. Many animals go through periods of hunger because food is scarce, and for some of them this period coincides with high sunlight levels (e.g. the dry season). (EDIT: This is just an idea; of course photosynthesis requires water, which is absent in the dry season. But still, in warm periods with enough water, there are sometimes too many animals to feed from the available vegetation.) Some things I already took into consideration: I know that plants, because they are small in mass (compared to the area with which they can collect sunlight) and static, don't need nearly as much energy as animals do. Is this the main reason? I know that reptiles, for example, and in fact all cold-blooded animals, already use the sun's energy. But they only use the heat from the sun to warm their bodies, they don't photosynthesize. Answer: There are 5 answers, all "yes" (though the first one is disputable). 
First: there exists at least one animal which can produce its own chlorophyll: A green sea slug appears to be part animal, part plant. It's the first critter discovered to produce the plant pigment chlorophyll. The sea slugs live in salt marshes in New England and Canada. In addition to burglarizing the genes needed to make the green pigment chlorophyll, the slugs also steal tiny cell parts called chloroplasts, which they use to conduct photosynthesis. The chloroplasts use the chlorophyll to convert sunlight into energy, just as plants do, eliminating the need to eat food to gain energy. The slug in the article appears to be Elysia chlorotica. Elysia chlorotica is one of the "solar-powered sea slugs", utilizing solar energy via chloroplasts from its algal food. It lives in a subcellular endosymbiotic relationship with chloroplasts of the marine heterokont alga Vaucheria litorea. UPDATE: As per @Teige's comment, this finding is somewhat disputable. Second, animals need not produce their own chlorophyll, and can instead symbiotically host organisms that use photosynthesis, e.g. algae and cyanobacteria. This approach is called photosynthetic symbiosis. Overall, 27 (49%) of the 55 eukaryotic groups identified by Baldauf (2003) have representatives which possess photosynthetic symbionts or their derivatives, the plastids. These include the three major groups of multicellular eukaryotes: the plants, which are derivatives of the most ancient symbiosis between eukaryotes and cyanobacteria; the fungi, many of which are lichenized with algae or cyanobacteria; and the animals. We, the authors, and probably many readers were taught that animals do not photosynthesize. This statement is true in the sense that the lineage giving rise to animals did not possess plastids, but false in the wider sense: many animals photosynthesize through symbiosis with algae or cyanobacteria. 
Please note that while most organisms known for this are fungi, and some rare invertebrates (corals, clams, jellyfish, sponges, sea anemones), there is at least one example of a vertebrate like this: the spotted salamander (Ambystoma maculatum). Non-chlorophyll synthesis A 2010 study by researchers at Tel Aviv University discovered that the Oriental hornet (Vespa orientalis) converts sunlight into electric power using a pigment called xanthopterin. This is the first scientific evidence of a member of the animal kingdom engaging in photosynthesis, according to Wikipedia. Another discovery from 2010 is possibly a second piece of evidence: University of Arizona biologists Nancy Moran and Tyler Jarvik discovered that pea aphids can make their own carotenoids, like a plant. "What happened is a fungal gene got into an aphid and was copied," said Moran in a press release. Their research article is http://www.sciencemag.org/content/328/5978/624, and they did not consider it conclusive: The team warns that more research will be needed before we can be sure that aphids truly have photosynthesis-like abilities. Third, depending on how you understand photosynthesis, you can include other chemical reactions converting sunlight energy. If the answer is "the usual 6H2O + 6CO2 → C6H12O6 + 6O2 reaction done via chlorophyll", then see answers #1, #2. But if you simply translate the term literally (synthesizing new molecules using light), then you can ALSO include the process of generating vitamin D from exposure to sunlight, which humans do thanks to cholesterol (link). Non-biological answer. As a side bonus, Ophiocordyceps sinensis is referred to as half-animal half-plant (not very scientifically IMHO). But it doesn't do photosynthesis.
{ "domain": "biology.stackexchange", "id": 1150, "tags": "evolution, botany, zoology, plant-physiology, photosynthesis" }
Elixir PEG Parser Generator
Question: I am trying to learn Elixir, so I decided to write a port of Ruby's Parslet library. I want to define a DSL that lets you define your grammar and given a string can parse it into an AST. I wrote something, but it's not very idiomatic. I'd love some pointers. Also, I wanted to be able to define rules as follows: rule :test do { str("a") >> (str("b") >> repeat(2)) } where the '2' is 'min_count' and have it match 'abbbbb'. With my current design I had to do: rule :test do { str("a") >> repeat(2, str("b")) } Here is my code and the tests: defmodule Parslet do @moduledoc """ Documentation for Parslet. """ # Callback invoked by `use`. # # For now it returns a quoted expression that # imports the module itself into the user code. @doc false defmacro __using__(_opts) do quote do import Parslet # Initialize @tests to an empty list @rules [] @root :undefined # Invoke Parslet.__before_compile__/1 before the module is compiled @before_compile Parslet end end @doc """ Defines a test case with the given description. 
## Examples rule :testString do str("test") end """ defmacro rule(description, do: block) do function_name = description quote do # Prepend the newly defined test to the list of rules @rules [unquote(function_name) | @rules] def unquote(function_name)(), do: unquote(block) end end defmacro root(rule_name) do quote do # Prepend the newly defined test to the list of rules @root unquote(rule_name) end end # This will be invoked right before the target module is compiled # giving us the perfect opportunity to inject the `parse/1` function @doc false defmacro __before_compile__(_env) do quote do def parse(document) do # IO.puts "Root is defined as #{@root}" # Enum.each @rules, fn name -> # IO.puts "Defined rule #{name}" # end case apply(__MODULE__, @root, []).(document) do {:ok, any, ""} -> {:ok , any} {:ok, any, rest} -> {:error, "Consumed #{inspect(any)}, but had the following remaining '#{rest}'"} error -> error end end end end def call_aux(fun, aux) do fn doc -> case fun.(doc) do {:ok, match, rest} -> aux.(rest, match) other -> other end end end # TODO ... checkout ("a" <> rest ) syntax... 
# https://stackoverflow.com/questions/25896762/how-can-pattern-matching-be-done-on-text def str(text), do: str(&Parslet.identity/1, text) def match(regex_s), do: match(&Parslet.identity/1, regex_s) def repeat(fun, min_count), do: repeat(&Parslet.identity/1, fun, min_count) def str(fun, text), do: call_aux( fun, fn (doc, matched) -> str_aux(text, doc, matched) end ) def match(fun, regex_s), do: call_aux( fun, fn (doc, matched) -> match_aux(regex_s, doc, matched) end ) def repeat(prev, fun, min_count), do: call_aux( prev, fn (doc, matched) -> repeat_aux(fun, min_count, doc, matched) end ) defp str_aux(text, doc, matched) do tlen = String.length(text) if String.starts_with?(doc, text) do {:ok, matched <> text, String.slice(doc, tlen..-1) } else {:error, "'#{doc}' does not match string '#{text}'"} end end defp match_aux(regex_s, doc, matched) do regex = ~r{^#{regex_s}} case Regex.run(regex, doc) do nil -> {:error, "'#{doc}' does not match regex '#{regex_s}'"} [match | _] -> {:ok, matched <> match, String.slice(doc, String.length(match)..-1)} end end defp repeat_aux(fun, 0, doc, matched) do case fun.(doc) do {:ok, match, rest} -> repeat_aux(fun, 0, rest, matched <> match) _ -> {:ok, matched, doc} end end defp repeat_aux(fun, count, doc, matched) do case fun.(doc) do {:ok, match, rest} -> repeat_aux(fun, count - 1, rest, matched <> match) other -> other end end def identity(doc) do {:ok, "", doc} end end Tests defmodule ParsletTest do use ExUnit.Case doctest Parslet defmodule ParsletExample do use Parslet rule :test_string do str("test") end root :test_string end test "str matches whole string" do assert ParsletExample.parse("test") == {:ok, "test"} end test "str doesnt match different strings" do assert ParsletExample.parse("tost") == {:error, "'tost' does not match string 'test'"} end test "parse reports error if not all the input document is consumed" do assert ParsletExample.parse("test_the_best") == {:error, "Consumed \"test\", but had the following remaining 
'_the_best'"} end defmodule ParsletExample2 do use Parslet rule :test_regex do match("123") end # calling another rule should just work. :) rule :document do test_regex() end root :document end test "[123]" do assert ParsletExample2.parse("123") == {:ok, "123"} assert ParsletExample2.parse("w123") == {:error, "'w123' does not match regex '123'"} assert ParsletExample2.parse("234") == {:error, "'234' does not match regex '123'"} assert ParsletExample2.parse("123the_rest") == {:error, "Consumed \"123\", but had the following remaining 'the_rest'"} end defmodule ParsletExample3 do use Parslet rule :a do repeat(str("a"), 1) end root :a end test "a+" do assert ParsletExample3.parse("a") == {:ok, "a"} assert ParsletExample3.parse("aaaaaa") == {:ok, "aaaaaa"} end defmodule ParsletExample4 do use Parslet rule :a do str("a") |> str("b") end root :a end test "a > b = ab" do assert ParsletExample4.parse("ab") == {:ok, "ab"} end defmodule ParsletExample5 do use Parslet rule :a do repeat(str("a") |> str("b") , 1) end root :a end test "(a > b)+" do assert ParsletExample5.parse("ababab") == {:ok, "ababab"} end defmodule ParsletExample6 do use Parslet rule :a do str("a") |> repeat(str("b"), 1) end root :a end test "a > b+" do assert ParsletExample6.parse("abbbbb") == {:ok, "abbbbb"} end end Answer: The latest version is at https://github.com/NigelThorne/ElixirParslet/blob/master/test/json_parser_test.exs I changed the dsl to create a data structure that then gets interpreted as the parser runs. This let me decouple the DSL from the behaviour in a way that passing functions around didn't. Here is a JSON Parser written in my language. 
defmodule JSONParser do use Parslet rule :value do one_of ([ string(), number(), object(), array(), boolean(), null(), ]) end rule :null do as(:null, str("null")) end rule :boolean do as(:boolean, one_of ([ str("true"), str("false"), ])) end rule :sp_ do repeat(match("[\s\r\n]"), 0) end rule :string do (str("\"") |> as(:string, repeat( as(:char, one_of( [ (absent?(str("\"")) |> absent?(str("\\")) |> match(".")), (str("\\") |> as(:escaped, one_of( [ match("[\"\\/bfnrt]"), (str("u") |> match("[a-fA-F0-9]") |> match("[a-fA-F0-9]") |> match("[a-fA-F0-9]") |> match("[a-fA-F0-9]")) ])) ) ])),0) ) |> str("\"")) end rule :digit, do: match("[0-9]") rule :number do as(:number, as(:integer, maybe(str("-")) |> one_of([ str("0"), (match("[1-9]") |> repeat( digit(), 0 )) ])) |> as(:decimal, maybe(str(".") |> repeat( digit(), 1 )) ) |> as(:exponent, maybe( one_of( [str("e"), str("E")] ) |> maybe( one_of( [ str("+"), str("-") ] )) |> repeat( digit(), 1) ) ) ) end rule :key_value_pair do as(:pair, as(:key, string()) |> sp_() |> str(":") |> sp_() |> as(:value, value())) end rule :object do as(:object, str("{") |> sp_() |> maybe( key_value_pair() |> repeat( sp_() |> str(",") |> sp_() |> key_value_pair(), 0) ) |> sp_() |> str("}")) end rule :array do as(:array, str("[") |> sp_() |> maybe( value() |> repeat( sp_() |> str(",") |> sp_() |> value(), 0) ) |> sp_() |> str("]")) end rule :document do sp_() |> value |> sp_() end root :document end defmodule JSONTransformer do def transform(%{escaped: val}) do {result, _} = Code.eval_string("\"\\#{val}\"") result end def transform(%{string: val}) when is_list(val) do List.to_string(val) end def transform(%{string: val}), do: val def transform(%{char: val}), do: val def transform(%{array: val}), do: val def transform(%{null: "null"}), do: :null #replace null with :null def transform(%{boolean: val}), do: val == "true" def transform(%{number: %{integer: val, decimal: "", exponent: ""}}) do {intVal, ""} = Integer.parse("#{val}") intVal end def 
transform(%{number: %{integer: val, decimal: dec, exponent: ex}}) do {intVal, ""} = Float.parse("#{val}#{dec}#{ex}") intVal end def transform(%{object: pairs}) when is_list(pairs) do for %{pair: %{key: k, value: v}} <- pairs, into: %{}, do: {k,v} end def transform(%{object: %{pair: %{key: k, value: v}}}) do %{k => v} end #default to leaving it untouched def transform(any), do: any end def parseJSON(document) do {:ok, parsed} = JSONParser.parse(document) Transformer.transform_with(&JSONTransformer.transform/1, parsed) end So calling parseJSON(~S({"bob":{"jane":234},"fre\r\n\t\u26C4ddy":"a"})) == %{"bob" => %{"jane" => 234},"fre\r\n\t⛄ddy" => "a"}
{ "domain": "codereview.stackexchange", "id": 30470, "tags": "elixir, dsl" }
Shape Functions of beam element with 3 nodes (quadratic element)
Question: For a two-node beam element there are four shape functions for four degrees of freedom. For a straight three-node beam element, what are the shape functions? Please note that the beam is straight and not curved. Answer: If the nodes are at $\xi = -1, 0, +1$ you can find the shape functions using Hermite polynomial interpolation (matching both values and derivatives at the nodes). In fact you don't need to work through the general procedure, since you can write down the general form the shape functions must take with only a few unknown parameters, and then solve for the unknown values. Consider the shape functions that are non-zero, or have a non-zero derivative, at $\xi = 1$. They must be zero, and have zero derivatives, at $\xi = -1, 0$. Therefore they must have the general form $$N(\xi) = (\xi+1)^2 \xi^2 (a\xi + b)$$ for some values of $a$ and $b$. Using the product rule, the derivative is $$N'(\xi) = 2(\xi+1)\xi(2\xi + 1)(a\xi + b) + (\xi+1)^2 \xi^2 a.$$ At $\xi = 1$ we therefore have $$\begin{align}N(1) &= 4a + 4b \\ N'(1) &= 16a + 12b\end{align}$$ For one shape function we want $N(1) = 1, N'(1) = 0$ and for the other, $N(1) = 0, N'(1) = 1$. Solving the simultaneous equations for $a$ and $b$ gives the two shape functions as $$\begin{gather}(\xi+1)^2 \xi^2 (-3\xi+4)/4 \\ (\xi+1)^2 \xi^2 (\xi-1)/4 \end{gather}$$ You can get the shape functions that are nonzero at $\xi = -1$ simply by changing $\xi$ to $-\xi$ (be careful with the sign of the non-zero slope shape function!) The shape functions for the middle node can be found in the same way, but if you see what is going on you can just write them down by inspection: $$\begin{gather}(\xi-1)^2(\xi+1)^2 \\ (\xi-1)^2 \xi (\xi+1)^2 \end{gather}$$
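The nodal conditions on each derived function (unit value or unit slope at its own node, zero value and slope at the other nodes) can be verified numerically. This is an illustrative Python sketch; the function names are my own labels and the finite-difference step is an arbitrary choice:

```python
# Shape functions for the 3-node beam element, nodes at xi = -1, 0, +1
# (function names are informal labels, not standard notation)
def N_disp_right(xi):   # value 1, slope 0 at xi = +1
    return (xi + 1)**2 * xi**2 * (-3*xi + 4) / 4

def N_slope_right(xi):  # value 0, slope 1 at xi = +1
    return (xi + 1)**2 * xi**2 * (xi - 1) / 4

def N_disp_mid(xi):     # value 1, slope 0 at xi = 0
    return (xi - 1)**2 * (xi + 1)**2

def N_slope_mid(xi):    # value 0, slope 1 at xi = 0
    return (xi - 1)**2 * xi * (xi + 1)**2

def d(f, xi, h=1e-6):   # central finite difference for the slope
    return (f(xi + h) - f(xi - h)) / (2 * h)

checks = [(N_disp_right, 1.0, 1.0, 0.0), (N_slope_right, 1.0, 0.0, 1.0),
          (N_disp_mid,   0.0, 1.0, 0.0), (N_slope_mid,   0.0, 0.0, 1.0)]
for f, node, val, slope in checks:
    assert abs(f(node) - val) < 1e-9 and abs(d(f, node) - slope) < 1e-4
    for other in (-1.0, 0.0, 1.0):   # zero value and slope at the other nodes
        if other != node:
            assert abs(f(other)) < 1e-9 and abs(d(f, other)) < 1e-4
```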
{ "domain": "engineering.stackexchange", "id": 2583, "tags": "structural-engineering, beam, finite-element-method" }
how to represent location-code as a feature in machine learning model?
Question: I am trying to predict the damage to buildings after an earthquake on a dataset which contains "district number" as a feature. I think the feature will be significant in predicting the label, but I am not sure how best to represent it. Any thoughts? Answer: You can get as creative as you want, but here are two general approaches that work for me. 1. Cluster the data into known geographical divisions and create dummy variables. For example, in the United States one can use zip codes. 2. Find the center of each known cluster (e.g. each zip code), or of clusters found by a similar unsupervised method, and use the longitude and latitude. How you choose to augment that information depends on what exactly you're trying to predict.
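Both approaches can be sketched in a few lines of illustrative Python (the district codes and coordinates below are made-up placeholders, not real data):

```python
# Toy rows of (district_code, latitude, longitude); values are made up
rows = [
    (7, 27.70, 85.32), (7, 27.72, 85.30),
    (12, 27.65, 85.40), (12, 27.66, 85.42), (12, 27.64, 85.41),
]

# Approach 1: dummy (one-hot) variables over the known district codes
districts = sorted({d for d, _, _ in rows})
def one_hot(code):
    return [1 if code == d else 0 for d in districts]

# Approach 2: replace the code with its geographic center (mean lat/lon)
def centroid(code):
    pts = [(lat, lon) for d, lat, lon in rows if d == code]
    return (sum(lat for lat, _ in pts) / len(pts),
            sum(lon for _, lon in pts) / len(pts))

assert one_hot(12) == [0, 1]              # districts sorted as [7, 12]
lat, lon = centroid(7)
assert abs(lat - 27.71) < 1e-9 and abs(lon - 85.31) < 1e-9
```

In practice the dummy-variable route is a one-liner with pandas.get_dummies, and cluster centers can come from something like scikit-learn's KMeans; the manual version above just makes the idea explicit.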
{ "domain": "datascience.stackexchange", "id": 3537, "tags": "machine-learning, feature-selection, feature-extraction, feature-engineering, feature-construction" }
Printing fizzy lines
Question: Challenge Given a test case print out a FizzBuzz Series. Specifications The first argument is a path to a file. Each line includes a test case. Each test case is comprised of three spaced delimited integers. The first two integers are the dividers X and Y. The third integer, N is how far to count. Print out the series 1 through N, replacing numbers divisible by X with F, numbers divisible by Y with B and numbers divisible by both with FB. Constraints The input file is formatted correctly. The numbers are valid positive integers. X is in range [1, 20] Y is in range [1, 20] N is in range [21, 100] Output should be one line per set with no trailing or empty spaces. Input Sample 3 5 10 2 7 15 Output Sample 1 2 F 4 B F 7 8 F B 1 F 3 F 5 F B F 9 F 11 F 13 FB 15 Source My Solution: #include <stdio.h> void print_buzzified(int fizz, int buzz, int count) { for (int i = 1; i <= count; i++) { if (i % fizz == 0 && i % buzz == 0) { printf("%s", "FB"); } else if (i % fizz == 0) { printf("%s", "F"); } else if (i % buzz == 0) { printf("%s", "B"); } else { printf("%d", i); } printf(i < count ? " " : "\n"); } } int main(int argc, const char * argv[]) { FILE *file; if (argc < 2 || !(file = fopen(argv[1], "r"))) { puts("No argument provided / File not found."); return 1; } file = fopen(argv[1], "r"); int fizz; int buzz; int count; while (!feof(file)) { fscanf(file, "%d %d %d", &fizz, &buzz, &count); print_buzzified(fizz, buzz, count); } fclose(file); } Answer: Design: Good job on setting up everything in main(), then passing off control to another function. Declare and initialize file after checking the command line arguments. Initialize fizz, buzz, and count. Don't use !feof(file) to control your loop. See this answer for more details. Check the return value of fscanf() to make sure we're reading good data. Good job remembering to close the file. 
Readability & Maintainability/Performance I'm grouping these two categories together for this review, since they happen to go hand in hand. Your for loop can be refactored down a bit: if (i % fizz == 0) fputc('F', stdout); if (i % buzz == 0) fputc('B', stdout); if (i % fizz && i % buzz) fprintf(stdout, "%d", i); Some people may find issue with the duplicated check on i with fizz and buzz, but we cut out some conditional branches doing this and the more branches we cut, the less susceptible to branch misprediction we are. There may be a better way to write this, but I can't think of it right now. Note how I used fputc() in the code above instead of printf("%s", ...). This is because those functions are \$ O(1) \$ operations instead of \$ O(n) \$ to loop over the string and format it. This may speed up your program by a hair, if that, but the main reason I recommend this change is to make clear we aren't modifying any strings. Since \$ X \$ and \$ Y \$ are in the range 1-20 and \$ N \$ is at most 100, it would be good to change the type representing these variables to something a bit more compact in memory. Depending on what we want to optimize for (memory or performance), I would recommend either uint_least8_t or uint_fast8_t. uint_least8_t: give me the smallest type of unsigned int which has at least 8 bits. Optimize for memory consumption. uint_fast8_t: give me an unsigned int of at least 8 bits. Pick a larger type if it will make my program faster, because of alignment considerations. Optimize for speed.
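The reduced-branch logic above can be cross-checked against the challenge's sample output; here is a quick sketch in Python (not C, purely to keep the check short):

```python
def fizz_line(fizz, buzz, count):
    parts = []
    for i in range(1, count + 1):
        s = ""
        if i % fizz == 0:
            s += "F"
        if i % buzz == 0:
            s += "B"
        if i % fizz and i % buzz:   # divisible by neither: keep the number
            s = str(i)
        parts.append(s)
    return " ".join(parts)

# Both lines from the challenge's sample output
assert fizz_line(3, 5, 10) == "1 2 F 4 B F 7 8 F B"
assert fizz_line(2, 7, 15) == "1 F 3 F 5 F B F 9 F 11 F 13 FB 15"
```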
{ "domain": "codereview.stackexchange", "id": 20519, "tags": "beginner, c, programming-challenge, io, fizzbuzz" }
Is it possible for two observers to observe different wavefunctions for one electron?
Question: Suppose there are 2 scientists who have decided to measure the location of an electron at the same fixed time. Is it possible that one observes the wavepacket localized at position x while the other observes the wavepacket localized at position y? The condition, however, is that x is not equal to y. Please don't worry about the degree of localization, which can vary depending on which measurement (momentum or position) is given priority :( I have little to no experience with quantum superposition and Gaussian wavepackets... kindly bear with my rough knowledge. Answer: In order to observe an electron one must interact with it in some way. For example one could shine light at it so that it scattered the light, or one could arrange for it to hit something like a multi-channel array (a charge detector with many small elements). The various observers will study some sort of large-scale signal such as a current from the array or else the light hitting a camera. They will all agree on what large-scale signal was seen, and they will agree on the chain of inference which determines what information it gives about the electron. So, in summary, the answer to your question is that they all get the same answer.
{ "domain": "physics.stackexchange", "id": 68849, "tags": "quantum-mechanics, quantum-information, schroedinger-equation, schroedingers-cat" }
Explicit check of Ward identity (Peskin & Schroeder p. 160)
Question: I am trying to check explicitly that the (Compton) amplitude $$i\mathcal{M} = -ie^2\epsilon^*_\mu(k’)\epsilon_\nu(k)\bar u(p’)\left[\frac{\gamma^\mu \not k\gamma^\nu + 2\gamma^\mu p^\nu}{2p\cdot k}+\frac{-\gamma^\nu\not{k'}\gamma^\mu+2\gamma^\nu p ^\mu}{-2p\cdot k'}\right]u(p)\tag{5.74}$$ vanishes when we replace either $\epsilon_\nu(k)$ with $k_\nu$ or $\epsilon^*_\mu(k’)$ with $k’_\mu$. This would confirm the Ward identity, as suggested on page 160 of Peskin and Schroeder’s book “Introduction to QFT”. However, with the replacement $\epsilon_\nu(k) \rightarrow k_\nu$, I obtain: $$\begin{aligned}k_\nu\mathcal{M}^\nu(k) :&= -e^2\epsilon^*_\mu(k’)k_\nu\bar u(p’)\left[\frac{\gamma^\mu \not k\gamma^\nu + 2\gamma^\mu p^\nu}{2p\cdot k}+\frac{-\gamma^\nu\not{k'}\gamma^\mu+2\gamma^\nu p ^\mu}{-2p\cdot k'}\right]u(p) \\ &= -e^2\epsilon^*_\mu(k’)\bar u(p’)\left[\frac{\gamma^\mu (\not k)^2 + 2\gamma^\mu (p\cdot k)}{2p\cdot k}+\frac{-\not k\not{k'}\gamma^\mu+2\not k p ^\mu}{-2p\cdot k'}\right]u(p) \\ &= -e^2\epsilon^*_\mu(k’)\bar u(p’)\left[\frac{2\gamma^\mu (p\cdot k)}{2p\cdot k}+\frac{-\not k\not{k'}\gamma^\mu+2\not k p ^\mu}{-2p\cdot k'}\right]u(p) \\ &= -e^2\epsilon^*_\mu(k’)\bar u(p’)\left[\gamma^\mu+\frac{-\not k\not{k'}\gamma^\mu+2\not k p ^\mu}{-2p\cdot k'}\right]u(p)\end{aligned}$$ and the terms in the bracket apparently don’t cancel each other. Are there any further possible simplifications? Answer: You can further simplify this expression by using the dirac equation $$ 0=(\not p-m)u(p)=\bar u(p')(\not p'-m) $$ and $k+p=k'+p'$. Then the second term can be expressed as $$ 2\not k p^\mu-\not k\not k'\gamma^\mu = 2(\not k'+\not p' - \not p)p^\mu -(\not k'+\not p' - \not p)\not k'\gamma^\mu\simeq 2\not k' p^\mu -(m-\not p)\not k'\gamma^\mu $$ The last equality holds only between the spinors. 
Then commuting $\not p$ through $\not k' \gamma^\mu$ gives $$ \not p\not k'\gamma^\mu = 2(p\cdot k')\gamma^\mu-\not k' \not p\gamma^\mu \simeq 2(p\cdot k')\gamma^\mu - 2\not k' p^\mu + \not k'\gamma^\mu m $$ In total we then have $$ 2\not k p^\mu-\not k\not k'\gamma^\mu \simeq 2(p\cdot k') \gamma^\mu $$ and the terms in the bracket cancel.
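One can also confirm the cancellation numerically with explicit Dirac matrices. Below is a sketch (Python/NumPy) using an arbitrary on-shell Compton kinematics choice ($m=\omega=1$, $90^\circ$ scattering) and spinors built with the projector trick $u \propto (\not p + m)\chi$, which solves the Dirac equation for any $\chi$ when $p^2 = m^2$:

```python
import numpy as np

# Dirac matrices in the Dirac basis; metric signature (+,-,-,-)
I2, Z2 = np.eye(2), np.zeros((2, 2))
sig = [np.array([[0, 1], [1, 0]], complex),
       np.array([[0, -1j], [1j, 0]], complex),
       np.array([[1, 0], [0, -1]], complex)]
g = [np.block([[I2, Z2], [Z2, -I2]]).astype(complex)]
g += [np.block([[Z2, s], [-s, Z2]]) for s in sig]

def slash(v):  # v_mu gamma^mu = v^0 gamma^0 - v . gamma
    return v[0]*g[0] - v[1]*g[1] - v[2]*g[2] - v[3]*g[3]

def dot(a, b):
    return a[0]*b[0] - a[1]*b[1] - a[2]*b[2] - a[3]*b[3]

m  = 1.0
p  = np.array([1.0, 0.0, 0.0, 0.0])   # electron at rest
k  = np.array([1.0, 0.0, 0.0, 1.0])   # incoming photon, k^2 = 0
kp = np.array([0.5, 0.5, 0.0, 0.0])   # scattered photon, k'^2 = 0
pp = p + k - kp                       # outgoing electron, p'^2 = m^2

chi  = np.array([1.0, 0.3, -0.2, 0.7], complex)   # arbitrary spinor seed
u    = (slash(p)  + m*np.eye(4)) @ chi            # slash(p) u = m u
ubar = ((slash(pp) + m*np.eye(4)) @ chi).conj() @ g[0]  # ubar slash(p') = m ubar

def ward_residual(mu):
    # k_nu * (bracket of eq. 5.74)^{mu nu}, sandwiched between ubar ... u
    total = 0j
    for nu in range(4):
        k_low = k[nu] if nu == 0 else -k[nu]
        M = (g[mu] @ slash(k) @ g[nu] + 2*g[mu]*p[nu]) / (2*dot(p, k)) \
          + (-g[nu] @ slash(kp) @ g[mu] + 2*g[nu]*p[mu]) / (-2*dot(p, kp))
        total += k_low * (ubar @ (M @ u))
    return total

for mu in range(4):
    assert abs(ward_residual(mu)) < 1e-10
```

The residual vanishes to machine precision for every $\mu$, exactly as the algebra above predicts.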
{ "domain": "physics.stackexchange", "id": 88339, "tags": "quantum-field-theory, quantum-electrodynamics, correlation-functions, ward-identity" }