anchor | positive | source |
|---|---|---|
Scrubbing user input | Question: I'm taking user input from a file in form (1, 2, 3) to create a color, and I just wanted to know if I was taking most cases into account, and if there is any way I could improve this method in terms of being robust and readable.
private Color getColorFromString(String rgb) throws MalformedColorException{
rgb = rgb.replaceAll("[()\\s]", "");
String[] colors = rgb.split(",");
if(colors.length > 3){
throw new MalformedColorException("Too many arguments");
}else if (colors.length < 3){
throw new MalformedColorException("Too little arguments");
}
try{
int red = Integer.parseInt(colors[0]);
int green = Integer.parseInt(colors[1]);
int blue = Integer.parseInt(colors[2]);
return new Color(red, green, blue);
}catch(NumberFormatException e){
throw new MalformedColorException("Malformed number", e);
}
}
Answer: I only have a few minor suggestions, but none are hard-line.
Your spacing is inconsistent: there is a space between else if and the open parenthesis, but none between } and else or between ) and {. I don't like this style of removing spaces around keywords and braces, as it becomes hard to read without syntax highlighting (and even with it).
The too few/too many messages aren't very specific. Instead, throw a single exception that says how many values were provided and how many are required:
if (colors.length != 3) {
throw new MalformedColorException("Color requires 3 values, got " + colors.length);
}
Moving the individual value parsing to a separate method would allow for more checks (range, sign, etc) and allow specifying which color was incorrect in the exception.
return new Color(
parseComponent("red", colors[0]),
parseComponent("green", colors[1]),
parseComponent("blue", colors[2])
);
private int parseComponent(String name, String value) {
try {
int parsed = Integer.parseInt(value);
if (parsed < 0 || parsed > 255) {
throw new MalformedColorException(
name + " component \"" + value + "\" out of range");
}
return parsed;
} catch (NumberFormatException e) {
throw new MalformedColorException(
name + " component \"" + value + "\" must be an integer");
}
} | {
"domain": "codereview.stackexchange",
"id": 7733,
"tags": "java, exception-handling, validation"
} |
Identifying serpentinite reworked during high-T metamorphism | Question: What kind of mineralogy results when a serpentinite is reworked during high-T or UHT metamorphism? I am envisaging a possible scenario where an obducted ophiolite terrane is buried and metamorphosed within a thickened orogen.
Since serpentinite is hydrated/altered mafic or ultramafic rock, I would assume dehydration would be the first pathway. With further heating are serpentinites likely to yield melt? Could they end up as a 'normal' mafic granulite? Are there mineralogical or geochemical characteristics that could be used to identify a mafic granulite that started out life as a serpentinite?
Answer: Of course, any ultramafic rock will ultimately melt at about 2000 deg C, but long before that there will be some interesting phase transitions, possibly involving serpentine > talc > olivine + orthopyroxene.
Various subtle changes occur in the rock during its original hydration from peridotite to serpentinite. There is high- and low-temperature serpentinite. Isotopic studies have shown that most serpentinite exposed in outcrop was hydrated at ambient groundwater temperature. Consider what happens. The Ca from the clinopyroxene in the original peridotite ends up as Ca- or Ca-Mg-carbonates, of which much is lost or precipitated elsewhere. The Fe in the original orthopyroxene, clinopyroxene and olivine is either lost from the system or re-precipitates as a mesh texture of goethite. Cr from the silicate phases is excluded from the serpentine crystal lattice and from the surface of chromite crystals, leaving a zoned magnetite-chromite structure in previously homogeneous chromite crystals. Since these are inert and refractory, they should survive any subsequent dehydration process. Any Ni will probably survive within the serpentine lattice. Summing up, when the serpentine is metamorphosed and turns back into peridotite, look for zoned chromite, reduced Ca (as clinopyroxene), and either reduced Fe, or Fe as a relict ghost sieve structure. Depending upon local circumstances there may, or may not, be a slight loss of silica during the hydration-dehydration cycle. | {
"domain": "earthscience.stackexchange",
"id": 592,
"tags": "geochemistry, petrology, subduction, metamorphism"
} |
Swaps in a uniform superposition | Question: Starting from a state $|00\rangle|01\rangle|10\rangle$ (from $0$ to $n$ without repetitions), I want to reach an state in which all possible combinations of swaps are in a uniform superposition. I mean:
$\frac{1}{\sqrt{6}}(|00\rangle|01\rangle|10\rangle + |00\rangle|10\rangle|01\rangle + |01\rangle|00\rangle|10\rangle + |01\rangle|10\rangle|00\rangle + |10\rangle|00\rangle|01\rangle + |10\rangle|01\rangle|00\rangle)$
In fact, I want to write general code for any $n$ and not just $3$ as in the example above. I can do it for cases in which $n$ is a power of $2$ like $4, 8, 16,$ etc. using an ancilla. However, I cannot do the same for $n = 3$ because I got something like:
00-01-10: 0.125
00-10-01: 0.25
01-00-10: 0.125
01-10-00: 0.25
10-00-01: 0.125
10-01-00: 0.125
(two states are repeated, so their probability is higher)
Answer: Appendix C in this paper provides an algorithm to do this based on a variant of the Fisher-Yates shuffle. The paper provides all the details you need to implement the algorithm in easy to follow steps. | {
"domain": "quantumcomputing.stackexchange",
"id": 3531,
"tags": "qiskit, programming, entanglement-swapping"
} |
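The answer above points to a construction based on the Fisher-Yates shuffle. For orientation, here is the classical version it adapts, which produces each of the $n!$ orderings with equal probability; this is only the classical shuffle as a reference point, not the quantum circuit from the paper's appendix:

```python
import random

def fisher_yates(items, rng=random):
    """Return a uniformly random permutation of items (classical shuffle)."""
    a = list(items)
    # Walk from the last position down; swap position i with a uniformly
    # chosen position j in 0..i.  Every permutation comes out with
    # probability 1/n!, with no over-weighted outcomes.
    for i in range(len(a) - 1, 0, -1):
        j = rng.randrange(i + 1)
        a[i], a[j] = a[j], a[i]
    return a

print(fisher_yates(["00", "01", "10"]))
```

Roughly speaking, the quantum variant described in the paper replaces the classical random choice of the swap index with an ancilla register prepared in superposition over the choices, which is how the skewed weights in the question's output are avoided.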
Could you know in advance specifically when a star will go supernova? | Question: I've been thinking about Star Trek 2009 and Star Trek Picard, in which they talk about a sun inside a fictional solar system that goes supernova, destroying a particularly important planet of a militaristic alien species. This got me thinking: could you know in advance, whether in days/weeks/months/years/decades, when a star is most definitely going to go supernova? Or is it rather indeterminate or chaotically random when such an event will take place?
Answer: At the moment, our understanding of the final, pre-supernova stages of stellar evolution is not good enough to give precise warnings based on the outward appearance of a star. Typically, you might be able to say that a star might explode sometime in the next 100,000 years.
However, there is an exception. There are ways to establish when a star is approaching the final few days of its existence from an acceleration of the neutrino flux and energy from its core, associated with the higher temperatures of silicon burning.
Even more precisely, you could get a few hours notice of a core collapse supernova from a spike in the neutrino signal that should emerge some hours ahead of the shock wave reaching the visible photosphere (and which was detected on Earth for SN 1987a). Similarly, one might expect a gravitational wave signal from the core collapse event. | {
"domain": "physics.stackexchange",
"id": 70202,
"tags": "astrophysics, stars, fusion, supernova, stellar-evolution"
} |
What are the advantages of the rectified linear (relu) over the sigmoid activation function in networks with many layers? | Question: The state of the art of nonlinearity is to use rectified linear units (ReLU) instead of a sigmoid function in deep neural networks. What are the advantages?
Answer: The sigmoid function asymptotically approaches either zero or one, which means that its gradients are near zero for inputs with a large absolute value.
This makes the sigmoid function prone to vanishing-gradient issues, from which the ReLU does not suffer nearly as much.
In addition, ReLU has an attribute that can be seen as either positive or negative, depending on the angle from which you approach it. The fact that ReLU is effectively zero for negative inputs and the identity for positive inputs means that it is easy to get zeros as outputs, and this leads to dead neurons. Dead neurons might sound bad, but in many cases they are not, because they allow for sparsity. In a way, ReLU does a job similar to L1 regularization, which brings some weights to zero and thus yields a sparse solution.
Sparsity is something that often leads to better generalization of the model, but there are times when it has a negative impact on performance, so it depends.
A good practice when using ReLU is to initialize the bias to a small number rather than zero so that you avoid dead neurons at the beginning of the training of the neural network which might prevent training in general. | {
"domain": "datascience.stackexchange",
"id": 1879,
"tags": "deep-learning"
} |
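The saturation argument above is easy to verify numerically: the sigmoid derivative $\sigma(x)(1-\sigma(x))$ collapses toward zero for large $|x|$, while the ReLU gradient stays at 1 everywhere on the positive side. A small sketch in plain Python, no deep-learning framework assumed:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def sigmoid_grad(x):
    s = sigmoid(x)
    return s * (1.0 - s)  # peaks at 0.25 when x = 0, saturates for large |x|

def relu_grad(x):
    return 1.0 if x > 0 else 0.0  # constant 1 on the positive side, 0 otherwise

for x in (0.0, 2.0, 10.0):
    print(x, sigmoid_grad(x), relu_grad(x))
```

At x = 10 the sigmoid gradient is already around 4.5e-5; stacking many such layers multiplies these small factors together, which is the vanishing-gradient problem. The ReLU's zero gradient for negative inputs is exactly the dead-neuron behavior discussed above.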
Can we explain Bell's Theorem without delving into Quantum Mechanics? | Question: I have recently started learning Quantum Mechanics. We learnt some things on light polarization, so I referred to this site. Luckily I stumbled upon this answer which links to this site which sated my quest for an answer. However, I recalled a video titled "Bell's Theorem: The Quantum Venn Diagram Paradox" by MinutePhysics. Is the classical answer insufficient? Why is it insufficient?
Answer: The classical answer is indeed insufficient. The way to attempt a classical description of a Bell experiment would be to imagine the probabilities for experimental outcomes as being given by a probability distribution over some set of "actual, physical states," called ontic states. The ontic states are not in general experimentally accessible; you can only measure some of their properties, so they are also called "hidden variables" (some people also use this to mean the probability distribution over them.) Crucially, the ontic states are seen as existing independent of measurement, and measurement is seen as revealing some of their preexisting properties. This allows us to preserve our classical intuitions of measurement, i.e., that when we measure some quantity, its value already exists before our measurement.
For example, in the "three-polarizer" experiment you linked to, assuming the light gets through the first filter, you could imagine the possible ontic states to be:
$$\{00\},\{01\},\{1x\},$$
where $\{00\}$ means "will make it through the second and third filters," $\{01\}$ means "will make it through the second filter, but not the third," and $\{1x\}$ means "won't make it through the second filter." Then we can describe the whole experiment classically by just assigning probability 1/4 to each of the first two ontic states and probability 1/2 to the third. These probabilities are seen as just encoding our lack of knowledge about the ontic state of each photon, rather than something intrinsic to the photon.
The Bell (EPR) experiment is more complicated, so since there are good resources available to describe it (the Wikipedia article on Bell's Theorem is fine), I will not reproduce the details here. But without going into those, what it basically shows is that one can create an experiment where the (probabilistic) predictions of quantum mechanics cannot be reproduced by any probability distribution over any set of ontic states. Since such experiments have subsequently been performed, and since they confirm the quantum mechanical predictions, we conclude that quantum mechanics cannot be described classically in this way (by a local hidden-variable theory).
"domain": "physics.stackexchange",
"id": 57621,
"tags": "quantum-mechanics, classical-mechanics, bells-inequality"
} |
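The "no distribution over ontic states" claim can be made concrete with the CHSH combination used in Bell experiments. Every deterministic local strategy (a definite ±1 outcome assigned to each measurement setting) is bounded by 2, and a probability distribution over ontic states is just a mixture of such strategies, so it cannot exceed the deterministic maximum; quantum mechanics reaches 2√2 ≈ 2.83. A brute-force check of the classical bound, as an illustration rather than a simulation of the quantum experiment:

```python
from itertools import product

# A deterministic local strategy assigns Alice outcomes A0, A1 (for her two
# settings) and Bob outcomes B0, B1, each in {-1, +1}.  The CHSH value is
# S = E(a,b) + E(a,b') + E(a',b) - E(a',b').
best = 0
for A0, A1, B0, B1 in product((-1, 1), repeat=4):
    S = A0 * B0 + A0 * B1 + A1 * B0 - A1 * B1
    best = max(best, abs(S))

print(best)  # the local bound: 2, versus 2*sqrt(2) quantum mechanically
```

Measured CHSH values above 2 therefore rule out every local hidden-variable account, which is the content of the experiments mentioned in the answer.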
Controlling robots in gazebo without using gazebo plugins | Question:
Is there a way to control robot models in gazebo by using external controller? I know usually it is controlled using plugins which we are adding in urdf file.
I mean, without using a gazebo control plugin is there a way to control robot model?
Originally posted by SVS on ROS Answers with karma: 233 on 2015-03-21
Post score: 3
Original comments
Comment by 130s on 2015-03-21:
I'm interested in the answer too. I would ask on Gazebo's forum http://answers.gazebosim.org/questions/
Answer:
You can do non-physical movements like this:
rostopic pub -1 /gazebo/set_model_state gazebo_msgs/ModelState '{model_name: testbot, pose: { position: { x: -0.32, y: 0, z: 2.1 }, orientation: {x: 0.0, y: 0.0, z: -0.766, w: 0.643 } }, reference_frame: world }'
It's okay for setting model positions while time is stopped but intersecting two things will explode the sim. It might be better to set velocities instead of positions but the model state has both, so any publish is going to overwrite both (in my example the velocities are going to be zero by default). More advanced stuff can be done with GazeboJS, but C++ plugins are going to be the most powerful.
The rawest controller plugins are the effort ones, in order to achieve a position you would have to write your own controller around them. Are those not usable for your application?
Originally posted by lucasw with karma: 8729 on 2015-03-21
This answer was ACCEPTED on the original site
Post score: 3
Original comments
Comment by SVS on 2015-03-21:
Thank u lucasw. It is useful.
Comment by FabianD on 2017-12-20:
For reference, with gazebojs and gazebo8 the equivalent would be:
gazebo.publish('gazebo.msgs.Model', '/gazebo/default/model/modify', {name: 'testbot', pose: { position: { x: -0.32, y: 0, z: 2.1 }, orientation: {x: 0.0, y: 0.0, z: -0.766, w: 0.643 } } }) | {
"domain": "robotics.stackexchange",
"id": 21190,
"tags": "ros, microcontroller, gazebo, gazebo-plugin, gazebo-ros"
} |
Period in simple harmonic motion | Question: I would like to know why period in simple harmonic motion does not depend on the amplitude of oscillation.
Answer: Simple harmonic motion obeys the differential equation $\ddot{x}+\omega^2 x=0$ for a constant $\omega>0$ with the units of frequency. Since this is a homogeneous linear equation, its solutions are closed under multiplication by constants. Doing this changes the amplitude, but not the frequency. | {
"domain": "physics.stackexchange",
"id": 44473,
"tags": "harmonic-oscillator"
} |
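The linearity argument can also be checked numerically: integrating $\ddot{x} + \omega^2 x = 0$ from rest at two very different amplitudes gives the same period $2\pi/\omega$. A sketch (the choice $\omega = 2$ is arbitrary):

```python
import math

def measured_period(amplitude, omega=2.0, dt=1e-4, t_max=10.0):
    """Integrate x'' = -omega**2 * x from rest at x = amplitude and
    return the time between two successive downward zero crossings."""
    x, v, t = amplitude, 0.0, 0.0
    crossings = []
    while t < t_max and len(crossings) < 2:
        prev = x
        v -= omega ** 2 * x * dt   # semi-implicit Euler step
        x += v * dt
        t += dt
        if prev > 0 >= x:          # downward zero crossing
            crossings.append(t)
    return crossings[1] - crossings[0]

print(measured_period(1.0), measured_period(5.0), 2 * math.pi / 2.0)
```

Quintupling the amplitude leaves the measured period at 2π/ω. Contrast this with a pendulum at large angles, where the restoring force is no longer linear in the displacement and the period does grow with amplitude.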
slam_gmapping odometry tf already published | Question:
Hey, I'm running my robot with a laser scanner and wheel odometry and the navigation stack. I already built a map from logged data and now I want to build it online. So I start my launch file for my robot (ibr_node) and the laser scanner. My ibr_node sends a transform from the map frame to the odom_base_link frame.
<launch>
<arg name="laser_x_offset_arg" default="0.72"/>
<param name="/use_sim_time" value="false"/>
<node pkg="mcm_ros_node" type="ibr_node" name="ibr_node" output="screen">
<param name="min_distance_control_on" value="false"/>
<param name="min_distance" value="1.0"/>
<param name="dev_ti_left" value="/dev/ti_left4"/>
<param name="dev_ti_right" value="/dev/ti_right4"/>
<param name="sigma_x_2" value="100.0"/>
<param name="sigma_y_2" value="100.0"/>
<param name="sigma_p_2" value="50.0"/>
<param name="sigma_xy" value="10.0"/>
<param name="sigma_xp" value="0.0"/>
<param name="sigma_yp" value="0.0"/>
<param name="laser_x_offset" value="$(arg laser_x_offset_arg)"/>
</node>
<!-- Laser Scanner -->
<node pkg="lms1xx" type="LMS1xx_node" name="LMS1xx_node"/>
<!-- static transforms -->
<node pkg="tf" type="static_transform_publisher" name="base_to_laser_broadcaster" args="0.72 0 0 0 0 0 /base_link /laser 100"/>
<node pkg="tf" type="static_transform_publisher" name="odom_base_to_base_broadcaster" args="0 0 0 0 0 0 /odom_base_link /base_link 100"/>
</launch>
Then I start a launch file for slam_gmapping:
<launch>
<node pkg="gmapping" type="slam_gmapping" name="slam_gmapping" output="screen">
<rosparam>
odom_frame: odom_base_link
base_frame: base_link
</rosparam>
</node>
</launch>
With this transform tree:
map -> odom_base_link -> base_link -> laser
The transform from odom_base_link to base link and the transform from base_link to laser are static.
roswtf shows this Error:
ERROR TF multiple authority contention:
* node [/slam_gmapping] publishing transform [odom_base_link] with parent [map] already published by node [/ibr_node]
* node [/ibr_node] publishing transform [odom_base_link] with parent [map] already published by node [/slam_gmapping]
How can I solve this?
When I just dont send the odometry transform in my ibr_node the map frame is missing in my transform tree.
Thanks in advance!! :)
Originally posted by hannjaminbutton on ROS Answers with karma: 65 on 2015-03-31
Post score: 0
Original comments
Comment by pexison on 2015-04-01:
Can you post every command that you use before launching this node (including this one)?
Comment by hannjaminbutton on 2015-04-06:
I just added my launch file for the robot :)
Answer:
You have two nodes publishing the same transform. slam_gmapping is producing your odom_base_link->base_link transform, and the static_transform_publisher is providing the same transform. Get rid of this line:
<node pkg="tf" type="static_transform_publisher" name="odom_base_to_base_broadcaster" args="0 0 0 0 0 0 /odom_base_link /base_link 100"/>
Originally posted by Tom Moore with karma: 13689 on 2015-04-06
This answer was ACCEPTED on the original site
Post score: 0
Original comments
Comment by hannjaminbutton on 2015-04-07:
When I get rid of this line and start my launch files, I have two unconnected tf trees (map -> odom_base_link and base_link -> laser) and roswtf still shows the same error.
Comment by Tom Moore on 2015-04-07:
Wait, I didn't read your errors thoroughly enough. What is ibr_node? That is also publishing the same transform. Only one node can publish a transform with a given child frame_id. | {
"domain": "robotics.stackexchange",
"id": 21308,
"tags": "slam, navigation, 2d-mapping, static-transform-publisher, gmapping"
} |
How to prove by contradiction that every nonempty hereditary language contains the empty string? | Question: A language L is called hereditary if it has the following property:
For every nonempty string x in L, there is a character in x which can be deleted from x to give another string in L.
Prove by contradiction that every nonempty hereditary language contains the empty string.
Here's my attempt:
To prove by contradiction, we assume that for every nonempty string x in L, there is no character in x which can be deleted from x to give another string in L.
This means that if a character in x is deleted an empty string is left. Since an empty string is also a string, every nonempty hereditary language contains the empty string.
I'm not exactly sure how to do a proof by contradiction. Can someone help review this?
Answer: Let L be a nonempty hereditary language. Let x be one of the shortest strings in L. x exists because L is not empty.
Removing any character from x would produce a string not in L, since x is one of the shortest strings. Because L is hereditary, we can remove a character from x giving another string in L, unless x is the empty string. Therefore x is the empty string. Therefore L contains the empty string.
If you insist on proof by contradiction: Let L be a nonempty hereditary language not containing the empty string, and let x be one of the shortest strings in L. x is not the empty string since L doesn't contain the empty string. Therefore there is a character in x which can be removed, giving a string y in L.
But y is shorter than x, therefore x is not the shortest string in L. | {
"domain": "cs.stackexchange",
"id": 14571,
"tags": "proof-techniques, check-my-answer"
} |
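For finite languages the hereditary property can be checked by brute force, which makes the answer's minimal-string argument easy to see in action. A small illustrative sketch (the helper names are my own, not from the question):

```python
def deletions(x):
    """All strings obtained by deleting exactly one character of x."""
    return {x[:i] + x[i + 1:] for i in range(len(x))}

def is_hereditary(lang):
    """Every nonempty string must have some one-character deletion in lang."""
    lang = set(lang)
    return all(deletions(x) & lang for x in lang if x)

L = {"", "a", "ab", "abc"}   # hereditary: each string deletes down one level
M = {"a", "ab"}              # not hereditary: "a" only deletes to "", not in M
print(is_hereditary(L), "" in L)
print(is_hereditary(M))
```

Note how M fails precisely at its shortest string "a", mirroring the proof: a shortest nonempty string has no shorter string in the language to delete down to.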
Finding first unique character in string | Question: Below is a solution I came up with (in C++) for an algorithm that is supposed to find the first character in a string that only appears once in the input string (the input string is guaranteed to be made up of only lower case characters from English alphabet). The catch is that the algorithm must return the index of that character from the original string, or negative one (-1) if there is no such character.
The solution below is O(n) in time and O(1) in space (const space because the list and the map can never have more than 26 entries).
I'm not convinced that my use of data structures is precise/clean.
I'm not sure if the code is very readable as is. Would it be more effective to write small functions that say what they do to replace things like: map_char_to_tracking[s[i]].char_count_in_string == 1 [so the reader can read at a higher level of abstraction without having to know the details of the data structures]?
Any improvements would be much appreciated.
#include <iostream>
#include <list>
#include <unordered_map>
typedef struct
{
int char_index_in_original_string;
std::list<char>::iterator char_position_in_list_itr;
int char_count_in_string;
}
Tracking;
// O(n) time | O(1) space, where n is the number of characters in the input string.
int FirstOccurrenceOfUniqueChar(const std::string& s)
{
std::list<char> unique_chars;
std::unordered_map<char, Tracking> map_char_to_tracking;
for (int i = 0; i < s.size(); ++i)
{
if (map_char_to_tracking[s[i]].char_count_in_string == 1)
{
// char has appeared once in string already, so remove it from unique chars list
++map_char_to_tracking[s[i]].char_count_in_string;
unique_chars.erase(map_char_to_tracking[s[i]].char_position_in_list_itr);
}
else if (map_char_to_tracking[s[i]].char_count_in_string == 0)
{
// char hasn't appeared in string yet, so add it to unique chars list and map.
unique_chars.push_back(s[i]);
map_char_to_tracking[s[i]] = Tracking{ i, --unique_chars.end(), 1 };
}
}
if (unique_chars.empty())
{
return -1;
}
return map_char_to_tracking[unique_chars.front()].char_index_in_original_string;
}
int main()
{
std::string s1{ "abcdeabcd" };
std::string s2{ "v" };
std::string s3{ "" };
std::string s4{ "aabbcc" };
std::cout << FirstOccurrenceOfUniqueChar(s1) << std::endl; // expect 4
std::cout << FirstOccurrenceOfUniqueChar(s2) << std::endl; // expect 0
std::cout << FirstOccurrenceOfUniqueChar(s3) << std::endl; // expect -1
std::cout << FirstOccurrenceOfUniqueChar(s4) << std::endl; // expect -1
return 0;
}
Answer: As already mentioned, you can do this in a one-pass algorithm. You only need to keep track of whether you have seen a character before, and what the position of its first occurence is. Here is a possible implementation:
#include <algorithm>
#include <array>
#include <climits>
#include <cstdint>
#include <string>
std::size_t FirstOccurrenceOfUniqueChar(const std::string& s) {
std::array<bool, 1 << CHAR_BIT> seen{};
std::array<std::size_t, 1 << CHAR_BIT> pos;
pos.fill(s.npos);
for (std::size_t i = 0; i < s.size(); ++i) {
auto ch = static_cast<unsigned char>(s[i]);
pos[ch] = seen[ch] ? s.npos : i;
seen[ch] = true;
}
return *std::min_element(pos.begin(), pos.end());
}
Note that in the above code, only two std::arrays are used; since there are only a fixed number of characters, you can use a fixed-size container instead of the slower std::list and std::unordered_map.
Also note that this version returns a std::size_t; this is necessary to support strings longer than an int can represent. Also, std::string::npos is used to indicate that there is no unique character. It's the largest possible value of std::size_t, which is very convenient here.
Also, if you cast std::string::npos to an int, it will become -1. | {
"domain": "codereview.stackexchange",
"id": 43839,
"tags": "c++, algorithm, strings"
} |
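For comparison, the same one-pass idea reads naturally in Python as well; this is a sketch of the technique, not a line-for-line translation of the C++, keeping the answer's convention of marking repeated characters so only first-occurrence indices of unique characters survive (returning -1 instead of npos):

```python
def first_unique_index(s):
    """Index of the first character occurring exactly once in s, else -1."""
    first = {}  # char -> index of first occurrence, or None once repeated
    for i, ch in enumerate(s):
        first[ch] = None if ch in first else i
    survivors = [i for i in first.values() if i is not None]
    return min(survivors) if survivors else -1

for s in ("abcdeabcd", "v", "", "aabbcc"):
    print(first_unique_index(s))
```

The four test strings from the question give 4, 0, -1, -1, matching the expected outputs in the original main().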
Problem of "Please point me at a wall." with turtlebot_calibration | Question:
I'm a ROS newby attempting to bring up a TurtleBot with ROS Fuerte and Ubuntu 12.04. I've got the laptop and TurtleBot configured and can get the turtlebot_dashboard going and drive the robot around with a keyboard using the keyboard teleop. I can also apply power to the Kinect device and see the three requisite Microsoft USB devices with 'lsusb':
Bus 001 Device 010: ID 045e:02b0 Microsoft Corp. Xbox NUI Motor
Bus 001 Device 017: ID 045e:02ad Microsoft Corp. Xbox NUI Audio
Bus 001 Device 018: ID 045e:02ae Microsoft Corp. Xbox NUI Camera
The problem I have is with turtlebot_calibration. I get the following output:
-------------------------------------------------------------------------
turtlebot@turtlebot ~ $ roslaunch turtlebot_calibration calibrate.launch
Checking log directory for disk usage. This may take awhile.
Press Ctrl-C to interrupt
Done checking log file disk usage. Usage is <1GB.
started roslaunch server http://192.168.0.38:56954/
SUMMARY
========
PARAMETERS
* /kinect_laser/max_height
* /kinect_laser/min_height
* /kinect_laser/output_frame_id
* /kinect_laser_narrow/max_height
* /kinect_laser_narrow/min_height
* /kinect_laser_narrow/output_frame_id
* /openni_launch/debayering
* /openni_launch/depth_frame_id
* /openni_launch/depth_mode
* /openni_launch/depth_registration
* /openni_launch/depth_time_offset
* /openni_launch/image_mode
* /openni_launch/image_time_offset
* /openni_launch/rgb_frame_id
* /pointcloud_throttle/max_rate
* /rosdistro
* /rosversion
* /scan_to_angle/max_angle
* /scan_to_angle/min_angle
NODES
/
kinect_breaker_enabler (turtlebot_node/kinect_breaker_enabler.py)
kinect_laser (nodelet/nodelet)
kinect_laser_narrow (nodelet/nodelet)
openni_launch (nodelet/nodelet)
openni_manager (nodelet/nodelet)
pointcloud_throttle (nodelet/nodelet)
scan_to_angle (turtlebot_calibration/scan_to_angle.py)
turtlebot_calibration (turtlebot_calibration/calibrate.py)
ROS_MASTER_URI=http://192.168.0.38:11311
core service [/rosout] found
Exception AttributeError: AttributeError("'_DummyThread' object has no attribute '_Thread__block'",) in <module 'threading' from '/usr/lib/python2.7/threading.pyc'> ignored
process[kinect_breaker_enabler-1]: started with pid [2132]
Exception AttributeError: AttributeError("'_DummyThread' object has no attribute '_Thread__block'",) in <module 'threading' from '/usr/lib/python2.7/threading.pyc'> ignored
process[openni_manager-2]: started with pid [2133]
Exception AttributeError: AttributeError("'_DummyThread' object has no attribute '_Thread__block'",) in <module 'threading' from '/usr/lib/python2.7/threading.pyc'> ignored
process[openni_launch-3]: started with pid [2139]
[ INFO] [1360034582.865846913]: Initializing nodelet with 2 worker threads.
Exception AttributeError: AttributeError("'_DummyThread' object has no attribute '_Thread__block'",) in <module 'threading' from '/usr/lib/python2.7/threading.pyc'> ignored
process[pointcloud_throttle-4]: started with pid [2199]
Exception AttributeError: AttributeError("'_DummyThread' object has no attribute '_Thread__block'",) in <module 'threading' from '/usr/lib/python2.7/threading.pyc'> ignored
process[kinect_laser-5]: started with pid [2218]
Exception AttributeError: AttributeError("'_DummyThread' object has no attribute '_Thread__block'",) in <module 'threading' from '/usr/lib/python2.7/threading.pyc'> ignored
process[kinect_laser_narrow-6]: started with pid [2240]
Exception AttributeError: AttributeError("'_DummyThread' object has no attribute '_Thread__block'",) in <module 'threading' from '/usr/lib/python2.7/threading.pyc'> ignored
process[scan_to_angle-7]: started with pid [2264]
Exception AttributeError: AttributeError("'_DummyThread' object has no attribute '_Thread__block'",) in <module 'threading' from '/usr/lib/python2.7/threading.pyc'> ignored
process[turtlebot_calibration-8]: started with pid [2266]
[kinect_breaker_enabler-1] process has finished cleanly
log file: /home/turtlebot/.ros/log/0f317c9e-6f42-11e2-a312-74f06d7d6e3b/kinect_breaker_enabler-1*.log
[ INFO] [1360034585.493648021]: [/openni_launch] Number devices connected: 1
[ INFO] [1360034585.494195015]: [/openni_launch] 1. device on bus 001:18 is a Xbox NUI Camera (2ae) from Microsoft (45e) with serial id 'A00366910379111A'
[ WARN] [1360034585.498586615]: [/openni_launch] device_id is not set! Using first device.
[ INFO] [1360034585.684649890]: [/openni_launch] Opened 'Xbox NUI Camera' on bus 1:18 with serial number 'A00366910379111A'
[ INFO] [1360034585.745636145]: rgb_frame_id = 'camera_rgb_optical_frame'
[ INFO] [1360034585.754150877]: depth_frame_id = 'camera_depth_optical_frame'
[INFO] [WallTime: 1360034586.442313] has_gyro True
[INFO] [WallTime: 1360034586.526588] Estimating imu drift
[INFO] [WallTime: 1360034586.829244] Still waiting for imu
[INFO] [WallTime: 1360034587.131364] Still waiting for scan
[INFO] [WallTime: 1360034587.433592] Still waiting for scan
[INFO] [WallTime: 1360034587.736142] Still waiting for scan
[INFO] [WallTime: 1360034588.038365] Still waiting for scan
[INFO] [WallTime: 1360034588.340482] Still waiting for scan
[ERROR] [WallTime: 1360034588.598461] Please point me at a wall.
[INFO] [WallTime: 1360034588.642893] Still waiting for scan
[ERROR] [WallTime: 1360034588.653256] Please point me at a wall.
[ERROR] [WallTime: 1360034588.721784] Please point me at a wall.
[ERROR] [WallTime: 1360034588.787651] Please point me at a wall.
[ERROR] [WallTime: 1360034588.855477] Please point me at a wall.
[ERROR] [WallTime: 1360034588.921614] Please point me at a wall.
[INFO] [WallTime: 1360034588.944915] Still waiting for scan
[ERROR] [WallTime: 1360034588.985250] Please point me at a wall.
[ERROR] [WallTime: 1360034589.051861] Please point me at a wall.
[ERROR] [WallTime: 1360034589.119464] Please point me at a wall.
[ERROR] [WallTime: 1360034589.195161] Please point me at a wall.
[INFO] [WallTime: 1360034589.246879] Still waiting for scan
[ERROR] [WallTime: 1360034589.257410] Please point me at a wall.
[ERROR] [WallTime: 1360034589.318238] Please point me at a wall.
[ERROR] [WallTime: 1360034589.392053] Please point me at a wall.
[ERROR] [WallTime: 1360034589.453150] Please point me at a wall.
[ERROR] [WallTime: 1360034589.519380] Please point me at a wall.
[INFO] [WallTime: 1360034589.548499] Still waiting for scan
[ERROR] [WallTime: 1360034589.586282] Please point me at a wall.
[ERROR] [WallTime: 1360034589.653485] Please point me at a wall.
[ERROR] [WallTime: 1360034589.724938] Please point me at a wall.
[ERROR] [WallTime: 1360034589.797709] Please point me at a wall.
[INFO] [WallTime: 1360034589.850087] Still waiting for scan
[ERROR] [WallTime: 1360034589.853053] Please point me at a wall.
[ERROR] [WallTime: 1360034589.921030] Please point me at a wall.
[ERROR] [WallTime: 1360034589.999481] Please point me at a wall.
[ERROR] [WallTime: 1360034590.056063] Please point me at a wall.
[ERROR] [WallTime: 1360034590.120422] Please point me at a wall.
-------------------------------------------------------------------------
The output will continue until I hit ^C. It never sees a wall.
I've tried moving the robot different distances from the wall, but nothing seems to change the results. It always asks "Please point me at a wall."
Any ideas what may be wrong or how I may go about diagnosing and fixing this so that I can correctly calibrate the turtlebot?
Thanks,
Mike Thompson
Originally posted by mpthompson on ROS Answers with karma: 153 on 2013-02-04
Post score: 0
Original comments
Comment by tfoote on 2013-02-04:
Can you view the data from the Kinect?
Comment by mpthompson on 2013-02-05:
Yes, I can view both image and depth data from the Kinect following the instructions here: http://surenkum.blogspot.com/2012/06/getting-kinect-to-work.html
Answer:
The problem here turned out to be topics not being remapped properly. This is now fixed in source (see commit here), version 2.0.2 should be released shortly, and debs will be updated in a few days.
Originally posted by fergs with karma: 13902 on 2013-03-28
This answer was ACCEPTED on the original site
Post score: 2
Original comments
Comment by Zayin on 2013-06-04:
I've applied the changes, but the problem persists. I'm using Fuerte. Any suggestions?
Comment by fergs on 2013-06-04:
I honestly have no idea what shape the Fuerte release is in. I only tried groovy, and would generally suggest using groovy at this point since releases into Fuerte are few and far between.
Comment by Zayin on 2013-06-05:
I reverted back to the old config and made it work my modifying kinect.launch. So I guess the old config was appropriate on Fuerte. I don't want to switch to Groovy because making everything work took a lot of time, and I don't want to go through that process again, at least not yet! | {
"domain": "robotics.stackexchange",
"id": 12738,
"tags": "calibration, turtlebot, ros-fuerte"
} |
Calculate concentration gradients in FPLC | Question: Our FPLC works with two solutions:
A: 20 mM Tris.HCl, 0.2 M NaCl, pH 7.8
B: 20 mM Tris.HCl, 1.6 M NaCl, pH 7.8
I'm interested in sample volume from 102 mL to 117 mL. The FPLC will have a linear concentration gradient: At 24 mL it will be 75% A and at 144 mL it will be 5% A (the rest is filled up with B).
For visualization, I made a graph (y = -7/1200*x + 0.89):
Now, I have a sample with the whole output between the two indicated points (102 mL and 117 mL, so I have 15 mL in my tube) and would like to calculate the NaCl concentration.
Answer: The percentages of both A and B change by 70% from 24 to 144 mL, with A going from 75% down to 5% and B from 25% up to 95%. The rate of change is therefore $0.70/120$ per mL. At any given point $x$ mL between 24 and 144 mL, the concentration of $\ce{NaCl}$ in $\ce{mol/L}$ is: $$\ce{[NaCl]} = (0.75 - 0.7\times (x-24)/120)\times0.2 + (0.25 + 0.7\times (x-24)/120)\times1.6$$
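Evaluating this numerically (a quick sketch in Python, using only the numbers given above):

```python
A_CONC, B_CONC = 0.2, 1.6  # M NaCl in buffers A and B

def nacl_conc(x_ml):
    """NaCl concentration (mol/L) at elution volume x_ml, for 24 <= x_ml <= 144."""
    frac_a = 0.75 - 0.7 * (x_ml - 24) / 120.0  # linear gradient: 75% A -> 5% A
    return frac_a * A_CONC + (1 - frac_a) * B_CONC

# The gradient is linear, so the average over 102-117 mL is the mean of the endpoints
avg = (nacl_conc(102) + nacl_conc(117)) / 2
print(round(avg, 3))  # about 1.248 M
```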
For a 15mL solution between 102 and 117mL, the concentration should be the average of the two points when x = 102, and 117mL. | {
"domain": "chemistry.stackexchange",
"id": 4223,
"tags": "solutions, concentration, chromatography"
} |
Faster way to parse file to array, compare to array in second file, write final file | Question: I currently have an MGF file containing MS2 spectral data (QE_2706_229_sequest_high_conf.mgf). The file template is here, as well as a snippet of example:
BEGIN IONS
TITLE=File3249 Spectrum10594 scans: 11084
PEPMASS=499.59366 927079.3
CHARGE=3+
RTINSECONDS=1710
SCANS=11084
104.053180 3866.360000
110.071530 178805.000000
111.068610 1869.210000
111.074780 10738.600000
112.087240 13117.900000
113.071150 7148.790000
114.102690 4146.490000
115.086840 11835.600000
116.070850 6230.980000
... ...
END IONS
This unannotated spectral file contains thousands of these entries, the total file size is ~150 MB.
I then have a series of text files which I need to parse. Each file is similar to the format above, with the first column being read into a NumPy array. Then the unannotated spectra file is parsed for each entry until a matching array is found from the annotated text files input.
(Filename GRPGPVAGHHQMPR)
m/z i matches
104.05318 3866.4
110.07153 178805.4
111.06861 1869.2
111.07478 10738.6
112.08724 13117.9
113.07115 7148.8
114.10269 4146.5
115.08684 11835.6
116.07085 6231.0
Once a match is found, an MGF annotated file is written that then contains the full entry information in the unannotated file, but with a line that specifies the filename of the annotated text file that matched that particular entry. The output is below:
BEGIN IONS
SEQ=GRPGPVAGHHQMPR
TITLE=File3249 Spectrum10594 scans: 11084
PEPMASS=499.59366 927079.3
... ...
END IONS
There may be a much more computationally inexpensive way to parse. Given 2,000 annotated files to search through, with the above large unannotated file, parsing currently takes ~ 12 hrs on a 2.6 GHz quad-core Intel Haswell CPU.
import numpy as np
import subprocess as sp
import sys
from pyteomics import mgf, auxiliary

def main():
    pep_files = []
    if (len(sys.argv) > 0):
        spec_in = sys.argv[1]
    else:
        print 'Not enough Command Line Arguments!'
    path = '/DeNovo/QE_2706_229_sequest_high_conf.mgf'
    print spec_in
    pep_files.append(spec_in)
    for ann_spectra in pep_files:
        seq = ann_spectra[:ann_spectra.find('-') - 1]
        print seq
        a = np.genfromtxt(ann_spectra, dtype=float, invalid_raise=False, usemask=False, filling_values=0.0, usecols=(0))
        b = np.delete(a, 0)
        entries = []
        with mgf.read(path) as reader:
            for spectrum in reader:
                if np.array_equal(b, spectrum['m/z array']):
                    entries.append(spectrum)
        file_name = 'DeNovo/good_training_seq/{}.mgf'.format(ann_spectra[:-4])
        with open(file_name, 'wb') as mgf_out:
            for entry in entries:
                mgf_out.write('BEGIN IONS')
                mgf_out.write('\nSEQ={}'.format(seq))
                mgf_out.write('\nTITLE={}'.format(entry['params']['title']))
                mgf_out.write('\nPEPMASS={} {}'.format(entry['params']['pepmass'][0], entry['params']['pepmass'][1]))
                mgf_out.write('\nCHARGE={}'.format(entry['params']['charge']))
                mgf_out.write('\nRTINSECONDS={}'.format(str(entry['params']['rtinseconds'])))
                mgf_out.write('\nSCANS={}'.format(entry['params']['scans']))
                mgf_out.write('\n')
                p = np.vstack([entry['m/z array'], entry['intensity array']])
                output = p.T
                np.savetxt(mgf_out, output, delimiter=' ', fmt='%f')
                mgf_out.write('END IONS')

if __name__ == '__main__':
    main()
The Python script is used with command line arguments with the following bash script:
for f in *.txt ; do python2.7 mgf_parser.py "$f"; done
This was used to be able to only parse a given number of files at a time. Suggestions on any more efficient parsing methods?
Answer: Disclaimer: I have very limited skill/knowledge for anything related to np. Also, I haven't run your code to identify the bottleneck, which should be the very first thing to do before trying to perform optimisations.
Common ways to make things faster are:
not to do something if you don't need to.
not to do something multiple times if once is enough.
In the first category, your import subprocess as sp and from pyteomics import auxiliary are not required: you can get rid of them. This shouldn't change anything from a performance point of view, but it's always good to make things simpler.
In the second category, it seems like you could open file_name once for each ann_spectra.
Also, for each entry, you could retrieve entry['params'] only once. Similarly, you could call mgf_out.write once instead of many times.
This is the code I have at this stage:
def main():
    pep_files = []
    if (len(sys.argv) > 0):
        spec_in = sys.argv[1]
    else:
        print 'Not enough Command Line Arguments!'
    path = '/DeNovo/QE_2706_229_sequest_high_conf.mgf'
    print spec_in
    pep_files.append(spec_in)
    for ann_spectra in pep_files:
        seq = ann_spectra[:ann_spectra.find('-') - 1]
        print seq
        a = np.genfromtxt(ann_spectra, dtype=float, invalid_raise=False, usemask=False, filling_values=0.0, usecols=(0))
        b = np.delete(a, 0)
        entries = []
        file_name = 'DeNovo/good_training_seq/{}.mgf'.format(ann_spectra[:-4])
        with open(file_name, 'wb') as mgf_out, mgf.read(path) as reader:
            for spectrum in reader:
                if np.array_equal(b, spectrum['m/z array']):
                    entries.append(spectrum)
                for entry in entries:
                    param = entry['params']
                    mgf_out.write('BEGIN IONS\nSEQ={}\nTITLE={}\nPEPMASS={} {}\nCHARGE={}\nRTINSECONDS={}\nSCANS={}\n'.format(
                        seq,
                        param['title'],
                        param['pepmass'][0],
                        param['pepmass'][1],
                        param['charge'],
                        str(param['rtinseconds']),
                        param['scans']))
                    p = np.vstack([entry['m/z array'], entry['intensity array']])
                    output = p.T
                    np.savetxt(mgf_out, output, delimiter=' ', fmt='%f')
                    mgf_out.write('END IONS')
Now, I have realised something that I find quite confusing: you keep adding elements to entries and then you loop over it: it will loop once the first time, twice the second time, n times the n-th time. Is this something we really need/want to do?
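A larger restructuring (hypothetical helper names, not part of the code above) would avoid re-reading the 150 MB MGF file once per annotated file: parse it a single time and index every spectrum by its m/z values, so each of the ~2,000 annotated files becomes a dictionary lookup instead of a full scan:

```python
def mz_key(mz_array, ndigits=5):
    # Round to a fixed precision so float-formatting differences between
    # the two file types don't break exact matching.
    return tuple(round(v, ndigits) for v in mz_array)

def build_index(spectra):
    # spectra: iterable of dicts with an 'm/z array' entry, as mgf.read yields.
    index = {}
    for spectrum in spectra:
        index.setdefault(mz_key(spectrum['m/z array']), []).append(spectrum)
    return index

def lookup(index, mz_array):
    return index.get(mz_key(mz_array), [])

# One pass over the big file, then O(1) lookups per annotated file, e.g.:
#   index = build_index(mgf.read(path))
#   entries = lookup(index, b)
```

This turns the overall cost from (number of annotated files) x (one pass over the big file) into a single pass plus cheap lookups.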
Finally, you are doing things in an unusual way: you have pep_files = [], then pep_files.append(spec_in). This could easily be written pep_files = [spec_in]. Then, you do for ann_spectra in pep_files:; you might as well get rid of pep_files, ann_spectra and the loop and work directly with spec_in (and maybe you should exit when the argument is not provided). | {
"domain": "codereview.stackexchange",
"id": 8533,
"tags": "python, performance, bioinformatics"
} |
Shortest distance scales that a string can resolve | Question: On page 5 of the notes by Veronika Hubeny on The AdS/CFT correspondence, we find the following:
Nevertheless, already at this level we encounter several intriguing surprises. Since strings are extended objects, some spacetimes which are singular in general relativity (for instance those with a timelike singularity akin to a conical one) appear regular in string theory. Spacetime topology-changing transitions can likewise have a completely controlled, non-singular description. Moreover, the so-called 'T-duality' equates geometrically distinct spacetimes: because strings can have both momentum and winding modes around compact directions, a spacetime with a compact direction of size $R$ looks the same to strings as spacetime with the compact direction having size $\ell_{s}^{2}/R$, which also implies that strings can’t resolve distances shorter than the string scale $\ell_{s}$. Indeed this idea is far more general (known as mirror symmetry [11]), and exemplifies why spacetime geometry is not as fundamental as one might naively expect.
The penultimate sentence states that
A spacetime with a compact direction of size $R$ looks the same to strings as spacetime with the compact direction having size $\ell_{s}^{2}/R$.
Why does this imply that strings can’t resolve distances shorter than the string scale $\ell_{s}$?
Answer: If $R < \ell_s$, then the T-dual spacetime has $R' = \ell_s^2/R > \ell_s$, and since the string theories on both spacetimes are equivalent, you can not meaningfully talk about $R$ being smaller than $\ell_s$ since there's always an equivalent theory where it is greater than $\ell_s$, so there are no phenomena which can only happen for very small values of $R$, and therefore there is no conceivable measurement that could test a hypothesis like $R < \ell_s$ (nor strictly speaking the hypothesis $R > \ell_s$, this is what she means by spacetime geometry not being as fundamental as expected). | {
"domain": "physics.stackexchange",
"id": 40836,
"tags": "string-theory, string, duality"
} |
Time Complexity of inserting a vector to a vector of vectors in C++ | Question: I was solving a question on LeetCode, where I had to generate all possible subsets of a given set of numbers.
Although, the solution makes sense to me, I am unable to understand the derivation of time complexity for those solutions.
A solution I found there was:
class Solution {
public:
    vector<vector<int>> subsets(vector<int>& nums) {
        vector<vector<int>> subs = {{}};
        for (int num : nums) {
            int n = subs.size();
            for (int i = 0; i < n; i++) {
                subs.push_back(subs[i]);    // <---- LINE X
                subs.back().push_back(num); // <---- LINE Y
            }
        }
        return subs;
    }
};
It is an iterative solution to solve the problem.
What I don't understand is the time complexity of the given solution and more importantly, the complexities of Line X and Line Y.
Does copying subs[i] take O(n) time and then pushing back take another O(n) time, or is it an O(1) step?
Answer: First things first:
Line X makes a copy of subs[i], creating a new vector.
In terms of vector length $n$ this step is $O(n)$.
Line Y is amortized O(1). (The vector might need to reallocate, making it O(n) in the worst case; to avoid reallocation see @MSalters' post.)
The runtime of the whole procedure is a bit more difficult.
At the $i$-th number, subs contains every possible subset of all the numbers seen so far.
Thus the length of subs is $2^i$, but we are interested in how many numbers we need to copy. This can be calculated as the sum of all subset sizes, which is $i2^{i-1}$ (meaning at step $i$ we have to copy $i2^{i-1}$ numbers and make $2^i$ push operations).
$$O\left(\sum_{i=0}^{n-1} i2^{i-1}+2^i\right) = O(n2^n)$$
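This count can be sanity-checked with a short simulation (a sketch in Python mirroring the C++ loop, counting copied elements and push operations separately):

```python
def count_subset_work(nums):
    """Mirror of the C++ loop: count elements copied (LINE X) and pushes (LINE Y)."""
    subs = [[]]
    copied = pushes = 0
    for num in nums:
        n = len(subs)                    # int n = subs.size();
        for i in range(n):
            copied += len(subs[i])       # LINE X copies len(subs[i]) elements
            subs.append(list(subs[i]))
            subs[-1].append(num)         # LINE Y: one amortized O(1) push
            pushes += 1
    return subs, copied, pushes

subs, copied, pushes = count_subset_work(range(4))
# copied = sum_{i=0}^{3} i*2^(i-1) = 17, pushes = 2^4 - 1 = 15
```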
The complexity on the right makes sense: the sizes of all the vectors in subs sum to $n2^{n-1}$ in the end. | {
"domain": "cs.stackexchange",
"id": 16580,
"tags": "time-complexity"
} |
"why is current the derivative of charge and not integral of charge?" | Question: Current is defined as the amount of charge passing a given point per unit time. The word amount throws me off. sorry if this question seems dumb, but
why can current not be equal to integral of charge from time t=t1 to t2?
since we want to know the amount of charge passing a given point, we can add up charge from time t1 until time t2
Answer:
Current is defined as the amount of charge passing a given point per unit time.
In fact, electric current is defined as the flow of electric charge. From the Wikipedia article Electric current
An electric current is a flow of electric charge.
From the Britannica article Electric current
Electric current is a measure of the flow of charge
A flow is a rate, i.e., an amount over an elapsed time. If an amount of electric charge $\Delta Q$ flows into a region in some time $\Delta t$, then there is an electric current into the region (with an average value of)
$$\bar{I} = \frac{\Delta Q}{\Delta t}$$
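Equivalently, turning the relation around shows which operation yields an "amount": the charge transferred between $t_1$ and $t_2$ is the time integral of the current,
$$Q = \int_{t_1}^{t_2} I(t)\,\mathrm{d}t,$$
so it is the current that gets integrated to give charge, and hence the current itself is the derivative $I = \mathrm{d}Q/\mathrm{d}t$. Integrating the charge over time would instead produce a quantity with units of coulomb-seconds, which is not an amount of charge.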
In the electric circuit context, we have a circuit law (Kirchhoff's Current Law) that requires the current into a region equal the current out of the region and so we can think of the current through the region, e.g., the body of a resistor. | {
"domain": "physics.stackexchange",
"id": 52534,
"tags": "electric-current, charge"
} |
Heat & thermodynamics question based on heat loss | Question:
A Sphere A is placed on a smooth table.Another sphere B is suspended as shown in the figure.Both the spheres are identical in all respects.Equal quantity of heat is supplied to both spheres.All kinds of heat loss are neglected.The final temperatures of A & B are T1 & T2 respectively,then
T1>T2
T1 = T2
T1
None of these
Please give reasons for your answer.
Answer: This is an old physics olympiad problem, I think. The answer hinges on the spheres expanding due to heating. Sphere A raises its center of mass some and sphere B lowers its center of mass some. By conservation of energy sphere A is thus slightly colder, since more of its energy went into its gravitational potential.
It's a bit of a silly problem since the effect is extremely small. We can see this because, from common experience, if you heat a metal sphere by $10\,^\circ\mathrm{C}$, the change in radius is pretty small; so small you probably won't notice without measuring it or else having something with a different expansion coefficient wrapped around the sphere. Meanwhile, if you drop a fist-sized metal sphere by $1\,\mathrm{cm}$, a distance much larger than such a sphere would expand with a $10\,^\circ\mathrm{C}$ change, the temperature change is much, much smaller than $10\,^\circ\mathrm{C}$. It's smaller than you can even notice, really. So the gravitational potential change is very small compared to the heat, and the difference in temperatures is minute. | {
"domain": "physics.stackexchange",
"id": 1357,
"tags": "thermodynamics, homework-and-exercises, heat"
} |
Why is it so hard to compress air without any machine? | Question: I know that the particles that constitute air move freely about. There must be a significant amount of empty space between the bouncing particles. So why is it so hard to compress air without any machines/devices?
Answer: By decreasing the volume of the gas, you're increasing the number of collisions the particles make with the walls of the container (over some time interval), hence the walls feel a larger force. It becomes harder and harder to compress the gas because the gas is pushing back with more and more force.
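As a rough worked example (assuming ideal-gas behaviour at constant temperature), Boyle's law $P_1V_1 = P_2V_2$ says that squeezing a sample of air to half its volume doubles its absolute pressure. Starting from atmospheric pressure ($\approx 100\ \mathrm{kPa}$), the compressed gas pushes back at $\approx 200\ \mathrm{kPa}$, so on a syringe plunger of area $10\ \mathrm{cm^2} = 10^{-3}\ \mathrm{m^2}$ the net force you must supply is about
$$F = (P_2 - P_{\mathrm{atm}})\,A \approx 100\times 10^{3}\ \mathrm{Pa} \times 10^{-3}\ \mathrm{m^2} = 100\ \mathrm{N},$$
which is roughly the weight of a 10 kg mass, just to hold the gas at half volume.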
Yes, there is space between the particles still, but the force from the collisions between the particles and the container wall doesn't arise from lack of space between the gas particles. The force the gas exerts on the container doesn't come from interactions between the gas particles (at least if we are still in the regime of an ideal gas). | {
"domain": "physics.stackexchange",
"id": 95755,
"tags": "gas"
} |
List directories, subdirectories and files, while ignoring some dirs | Question: In my application, the user may or may not want to ignore some directories. I can do that, but it seems like I am repeating myself. Does anyone have an idea to refactor that?
from os import walk, path

exclude = ['dir1/foo']
for root, dirs, files in walk('.', topdown=True):
    if exclude != None:
        dirs[:] = [d for d in dirs if d not in exclude]
        for name in files:
            for excluded in exclude:
                if excluded not in root:
                    print path.join(root, name)
    else:
        for name in files:
            print path.join(root, name)
exclude is None when there are no dirs to exclude. I thought of setting it to an empty list, but then, this loop for excluded in exclude: won't execute at all. My ambition was to avoid such a big if/else. Any ideas?
Example:
gsamaras@pc:~/mydir$ ls */*
dir1/bar:
test.txt
dir1/foo:
test.txt
dir2/bar:
test.txt
dir2/foo:
test.txt
I am getting:
./dir2/foo/test.txt
./dir2/bar/test.txt
./dir1/bar/test.txt
If I want, I can do an exclude = ['foo'], and then get:
./dir2/bar/test.txt
./dir1/bar/test.txt
meaning that I ignored all directories named "foo".
Answer: You are already modifying the list of directories, so that should be enough. But your exclude includes the full path, so your check in the list comprehension does not actually filter out the excluded directories; you only do that in the for loop below, after already having descended into those excluded directories (wasting time).
So, this should work:
from os import walk, path

exclude = {'./dir1/foo'}
for root, dirs, files in walk('.'):
    if exclude is not None:
        dirs[:] = [d for d in dirs if path.join(root, d) not in exclude]
    for name in files:
        print path.join(root, name)
Note that exclude needs to contain paths starting with the starting point of os.walk, so in this case ..
I also made exclude a set (\$\mathcal{O}(1)\$ in lookup), used the fact that topdown=True by default and used is not instead of != for comparison to None.
If you want to instead exclude folder names (regardless of their position in the directory tree), you can do that as well like this:
from os import walk, path

exclude = {'foo'}
for root, dirs, files in walk('.'):
    if exclude is not None:
        dirs[:] = [d for d in dirs if d not in exclude]
    for name in files:
        print path.join(root, name)
What is not possible with either of these two approaches is to exclude foo only in sub-directories of dir1, like in your example. However, I think this is more consistent behaviour, so you should choose one of the two, IMO.
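As a side note, the name-based variant ports directly to Python 3 (a sketch; print becomes a function):

```python
import os

def list_files(top, exclude=frozenset()):
    """Yield file paths under top, pruning any directory whose name is in exclude."""
    for root, dirs, files in os.walk(top):
        dirs[:] = [d for d in dirs if d not in exclude]
        for name in files:
            yield os.path.join(root, name)

for path in list_files('.', exclude={'foo'}):
    print(path)
```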
As a last point, you should probably switch to Python 3 sooner rather than later if at all possible, because support for Python 2 will end in a bit more than a year (at the time of writing this). | {
"domain": "codereview.stackexchange",
"id": 32777,
"tags": "python, python-2.x, file-system"
} |
Why is E85 less efficient than straight gasoline? | Question: Why is straight gasoline (or whatever the mixture was before the introduction of ethanol) more efficient (i.e., more miles/gallon) than E85? I've known since its introduction that E85 was less efficient, but why is it?
Answer: There is less chemical energy per unit mass in ethanol than there is in the major chemical components of gasoline.
Standard enthalpies of formation for:
Ethanol: -277.0 kJ/mol
n-Hexane: -40.0 kJ/mol (other hexanes have similar (40-60 kJ/mol) standard enthalpies)
Carbon Dioxide: -393.5 kJ/mol
Water: -285.83 kJ/mol
Combustion of ethanol: 1365 kJ/mol
Combustion of n-Hexane: 4381 kJ/mol
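Dividing by molar mass makes the per-gram comparison explicit (a quick check in Python; the molar masses are standard values added here, not taken from the list above):

```python
# Heats of combustion (kJ/mol) from the list above; molar masses (g/mol)
# are standard values for ethanol and n-hexane.
fuels = {
    'ethanol':  {'kj_per_mol': 1365.0, 'g_per_mol': 46.07},
    'n-hexane': {'kj_per_mol': 4381.0, 'g_per_mol': 86.18},
}

for name, f in sorted(fuels.items()):
    print('%-9s %.1f kJ/g' % (name, f['kj_per_mol'] / f['g_per_mol']))
# ethanol comes out near 30 kJ/g, n-hexane near 51 kJ/g
```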
Even taking into account that n-Hexane has a molecular mass ~2 times greater than ethanol, you can see that burning hexane releases a lot more energy. | {
"domain": "physics.stackexchange",
"id": 1300,
"tags": "energy, physical-chemistry, renewable-energy"
} |
Hardware Interface for Arduino | Question:
Hey,
Recently i started my own robot project to learn ROS, using Arduino.
So far i just have a single Arduino + L289n dual h-bridge motor controller to power 2 DC motors
The initial setup of my project is based on the Husky project, so i have a base node containing a hardware_interface::RobotHW class that communicates with the Arduino. Using the teleop_twist_keyboard i'm able to move my robot. Currently this is done using a custom message, where the RobotHW is the advertiser and the Arduino the subscriber. But just driving the motors is far from enough. I'm using powerful 24V DC motors salvaged from broken printers and big (heavy) wheels from an old RC car. Currently i'm running the motors at 12V, which seems to be enough to get the robot rolling, but it takes time to get up to speed because of the big heavy wheels, and they keep spinning once the wheel speed is set to 0.
To apply brake force to the wheels i need to reverse the motor power till the wheels stop, so i got myself some motor encoders to measure the speed of the wheels. According to the hardware interface documentation and code i have been checking to learn how it works, i need to read from the hardware, update the controller manager, then write to the hardware. So using the ROS message system doesn't seem like the correct way.
Been looking into using rosserial for c++ and using the Serial communication in the Arduino firmware instead of message advertisers/subscribers, but this requires writing some custom protocol for messages between the hardware and the hardware interface node.
So before continuing writing a custom serial comm protocol, i wanted to know if this is the correct way to go, or if i should just use advertisers/subscribers to send custom messages between the hardware/ros node?
Full code of my project in the current state of using messages can be found here:
https://github.com/DeborggraeveR/ampru
the ampru_base package, contains both the firmware and the hardware interface node.
Originally posted by RandyD on ROS Answers with karma: 161 on 2017-06-08
Post score: 0
Original comments
Comment by Soleman on 2022-02-07:
hey @RandyD. I saw your work its very impressive. I am very new to ros. I have same configuration like your as 2 dc motos with encoders, 1 arduino mega, 1 IMU sensor, 1 intel realsense D415. Currently I am facing a problem that how to set odometry using my dc motors and arduino. I am very new the brief guidelines will be highly appreciated. Thank you.
Answer:
As there were no answers to my questions, i went forward with using a custom serial protocol instead of publish/subscribe topics to communicate between the hardware interface node and Arduino.
The protocol is based on HDLC and seems to work fine :)
Code pushed to git repository.
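For anyone wondering what "based on HDLC" means in practice: the core idea is framing with byte stuffing, where a reserved flag byte delimits frames and any flag/escape byte occurring inside the payload is escaped. A minimal sketch in Python (hypothetical, not the repository's actual protocol, and without the CRC a real link would add):

```python
FLAG, ESC, XOR_MASK = 0x7E, 0x7D, 0x20  # HDLC-style framing constants

def encode_frame(payload):
    """Wrap payload bytes in FLAG delimiters, escaping reserved bytes."""
    out = bytearray([FLAG])
    for b in payload:
        if b in (FLAG, ESC):
            out += bytes([ESC, b ^ XOR_MASK])
        else:
            out.append(b)
    out.append(FLAG)
    return bytes(out)

def decode_frame(frame):
    """Inverse of encode_frame; expects one complete FLAG-delimited frame."""
    assert frame[0] == FLAG and frame[-1] == FLAG
    out, escaped = bytearray(), False
    for b in frame[1:-1]:
        if escaped:
            out.append(b ^ XOR_MASK)
            escaped = False
        elif b == ESC:
            escaped = True
        else:
            out.append(b)
    return bytes(out)
```

The receiver just scans the serial stream for FLAG bytes, which lets it resynchronise after line noise; that is the main advantage over a naive length-prefixed format.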
Originally posted by RandyD with karma: 161 on 2017-06-16
This answer was ACCEPTED on the original site
Post score: 0
Original comments
Comment by Sai Krishna on 2018-08-26:
Hi Randy,
I'm new to roscontrol.
Can I know what was the problem if you use public/subscribe to communicate between hardware interface node and Arduino.?
Can't we subscribe to the joint values from Arduino and update control manager and then publish the new joint values to Arduino.?
Thank you :)
Comment by pranavb104 on 2018-12-13:
Hi Sai & Randy, I also wanted to know if we can work with publisher/subscriber instead of a custom protocol . Just wanted an opinion before I go forward with my method. Let me know. Thanx!
Comment by RandyD on 2018-12-14:
Hi, i tested with publisher/subscriber on Arduino and it works fine, but my concern was that using it contains more code, while the programming memory on arduino is limited and there is more data going over the serial port.
Comment by pranavb104 on 2018-12-14:
Ah...makes sense....i also got it working anyway....thanx
Comment by Sai Krishna on 2018-12-15:
Hi Pranavb,
Can I know your mail id. So that we can share our works.
Mine is banda.saikrishna93@gmail.com.
Comment by pranavb104 on 2018-12-16:
my email is pranavb104@gmail.com
Comment by RandyD on 2018-12-17:
Share your code on github, might be useful to others as well ;)
Comment by pranavb104 on 2019-01-04:
sure ! https://github.com/pranavb104/Robo-moveit/tree/master
Comment by Hunterx on 2019-03-21:
Hey pranavb104
The github link is not valid anymore. Could you send me the code. :)
Comment by fjp on 2020-03-18:
Hi @pranavb104 the link to your Robo-moveit GitHub repository seems to not work anymore. Could you please update it? Thank you.
Comment by pranavb104 on 2020-03-18:
@fjp https://github.com/pranavb104/ROS-Arduino-Robot-Arm
Hope this works :D | {
"domain": "robotics.stackexchange",
"id": 28085,
"tags": "ros, arduino, serial, hardware-interface"
} |
Detecting food fraud | Question: There's undoubtedly more than one way to do this, but if a DIY biologist were to attempt to detect food fraud (e.g. as done by students from Stanford University and Trinity School, Manhattan with respect to fish samples from markets and sushi restaurants), then what would be the minimum steps and equipment?
(I know barely anything about molecular genetics, but have been reading about DremelFuge, OpenPCR, and Blue Transilluminator, and wondered whether they - or things like them - might get such an investigator some of the way towards the goal above; and what else would be required.)
Answer: There are several ways you could go about identifying species through DNA. If you want to do everything yourself, the simplest option in terms of equipment needed consists of evaluating fragment lengths observed during gel electrophoresis after amplifying specific DNA sequences using PCR.
If you are content with some outsourcing, you can also send DNA samples to a commercial company for sequence analysis.
A compromise between these options in terms of information obtained is to study Restriction Fragment Length Polymorphism (RFLP) by amplifying DNA fragments and using restriction enzymes to cut the fragments, before analyzing the fragmentation pattern using gel electrophoresis. To perform RFLP analysis, you would need to obtain restriction enzymes in addition to the chemicals mentioned below, and they can be a bit pricey.
The minimum equipment would consist of a PCR machine, one or more pipettes with matching pipette tips, a gel electrophoresis tray with power supply and a transilluminator (preferably blue-light/non-UV). A centrifuge is not strictly necessary, but can be useful for processing/filtering your DNA source.
Some chemicals will also be needed: Polymerase and dNTPs for the PCR reaction (or a pre-made "master mix" containing both), electrophoresis-grade agarose and running buffer for the electrophoresis, along with a DNA dye specific to the type of transilluminator (Usually UV or blue light). For a blue-light transilluminator, GelGreen is a suitable DNA dye. You will also want to use a "loading dye" to mix in your DNA sample before applying it to the electrophoresis gel. This can either be purchased or prepared yourself by mixing sugar and food coloring in water.
You need some form of heating to dissolve the agarose - a microwave oven is convenient for this, but take care to avoid over-heating, glass explosions or flash boiling. It is convenient but not strictly necessary to have some lab glassware. Preferably use a screw-top bottle to mix your agarose solution. Always leave the top off when heating bottles.
Photographic equipment can be also useful for documenting results of gel electrophoresis.
Finally, you will need single-stranded DNA oligomers (primers) specific to the DNA regions you want to amplify. DNA primers can be bought from a number of companies, but it varies how easy it is for non-affiliated individuals to order and make payments. Macrogen has been my choice: They both deliver DNA primers and perform DNA sequencing.
You may be interested in the following thread on the DIY Bio e-mail group: https://groups.google.com/forum/#!topic/diybio/cPzfEuiZH58
I have collected some of the primer sequences mentioned in the thread on a page on OpenWetware: http://openwetware.org/wiki/User:Jarle_Pahr/Meat | {
"domain": "biology.stackexchange",
"id": 2228,
"tags": "molecular-biology, molecular-genetics, food, diy-biology"
} |
Rayleigh-Taylor instability with negative Atwood number? | Question: I was reading a paper entitled "The Rayleigh—Taylor instability in astrophysical fluids" by Allen & Hughes (1984) that indicates the instability can occur for $ \rho_{01} < \rho_{02} $ which would indicate a negative Atwood number. But how is this possible? Does not the density gradient have to be opposite the direction of the effective gravity? Must not the Atwood number be necessarily positive for a Rayleigh-Taylor instability?
Answer: Your intuition is correct; there's no such thing as a Rayleigh-Taylor instability with a negative Atwood number. That would imply that the density of the upper fluid, $\rho_{01}$, is less than the density of the lower fluid, $\rho_{02}$, which is clearly a stable situation with respect to the R-T instability.
So how did $\rho_{01} < \rho_{02}$ appear in the Allen and Hughes paper?
I'm pretty sure it was just a typo. I read through the paper and the only place I saw anything that looked like a negative Atwood number was in section 4.2.2, where there's a sentence: "In conclusion, it may be seen that the growth of R–T instabilities saturates for large accelerations, except in the limit $\rho_{01} \ll \rho_{02}$ where the growth remains of the usual form"
$$
\omega = (gk)^{1/2}
$$
But this sentence refers to an earlier paragraph in the same section that says, "Again, for $\eta \sim 1$, compressibility has little effect and $\omega^2 \approx gk$." Since in the authors' notation, the Atwood number $\eta$ is defined as
$$
\eta = \frac{\rho_{01} - \rho_{02}}{\rho_{01} + \rho_{02}}
$$
it is obvious that $\eta$ ~ 1 implies $\rho_{01} \gg \rho_{02}$ rather than $\rho_{01} \ll \rho_{02}$. | {
"domain": "physics.stackexchange",
"id": 48899,
"tags": "fluid-dynamics, stability"
} |
Confusion about symmetry spontaneous breaking in a simple model | Question: Hamiltonian and symmetry
I was learning about spontaneous symmetry breaking (SSB), but am confused by this simple minimal model$$H=E_0\sum_{i=1}^3|i\rangle\langle i|+J\left(|1\rangle\langle 2|+|2\rangle\langle 3|+|1\rangle\langle 3|+h.c.\right)$$Its matrix form is$$H=\begin{pmatrix}E_0 & J & J \\ J & E_0 & J \\ J & J & E_0\end{pmatrix}$$which describes a particle hopping on a lattice consisting of only 3 cells with on-site energy $E_0$ and hopping strength $J$, with PBC applied. It is clear that $H$ respects the $Z_3$ symmetry under the PBC, which means all the eigenstates of $H$, $|\psi_{i=1,2,3}\rangle$, will satisfy the symmetry condition$$Z_3|\psi_{i}\rangle=|\psi_i\rangle\ \ i\in1,2,3$$as long as the $Z_3$ symmetry is not spontaneously broken (for simplicity,
$Z_3$ denotes both the group and its unitary representation).
Spectrum of $H$: The degeneracy of eigenvalues and the symmetry of eigenstates
The eigenvalues and eigenstates were very easy to obtain$$\left\{\begin{aligned}&E_e=E_0+2J\\&E^{(2)}_g=E_0-J\end{aligned}\right.\ \ \ \
\left\{\begin{aligned}&|\psi_e\rangle=\frac{1}{\sqrt{3}}(|1\rangle+|2\rangle+|3\rangle)\\&|\psi^{(2)}_g\rangle=|\psi_{a,b}\rangle\ \ \ for \ \ \langle \psi_a|\psi_e\rangle=\langle \psi_b|\psi_e\rangle=\langle \psi_a|\psi_b\rangle=0\end{aligned}\right.$$
where $E_e, E^{(2)}_g$ denote the excited-state and ground-state energies respectively, and the superscript in $E^{(2)}_g$ denotes the two-fold degeneracy of the GS.
The unique ES $|\psi_e\rangle$ stays unchanged under the symmetry transformation $$Z_3|\psi_e\rangle=|\psi_e\rangle.$$
For the two-fold degenerate GS, one can always choose two reference states $|\psi_a\rangle,|\psi_b\rangle$ that satisfy the orthogonality condition but change under the action of $Z_3$, for example$$\left\{\begin{aligned}&|\psi_a\rangle=\frac{1}{\sqrt{2}}(|1\rangle-|2\rangle)\ \ \ \ \ \ \ \ \ \ \ \ \ \ Z_3|\psi_a\rangle\ne|\psi_a\rangle\\&|\psi_b\rangle=\frac{1}{\sqrt{6}}(|1\rangle+|2\rangle-2|3\rangle)\ \ \ Z_3|\psi_b\rangle\ne|\psi_b\rangle \end{aligned}\right.$$
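These relations are easy to verify numerically (a quick sketch in Python with plain lists; the two ground states below are unnormalised versions of $|\psi_a\rangle,|\psi_b\rangle$ above):

```python
E0, J = 1.0, 0.5

H = [[E0, J, J],
     [J, E0, J],
     [J, J, E0]]

def apply(M, v):
    # Plain matrix-vector product, no external dependencies
    return [sum(M[i][j] * v[j] for j in range(3)) for i in range(3)]

# Symmetric excited state: H|psi_e> = (E0 + 2J)|psi_e>
assert apply(H, [1, 1, 1]) == [E0 + 2 * J] * 3

# Two degenerate ground states at E0 - J (each not individually Z3-symmetric)
for v in ([1, -1, 0], [1, 1, -2]):
    assert apply(H, v) == [(E0 - J) * x for x in v]
```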
Review some important properties of TFI model
Consider the TFI model $$H_{\mathrm{TFI}}=-\sum_{i}{h\sigma^x_i}+\sigma^z_{i}\sigma^{z}_{i+1}$$for $h<1$
1). The true GS keeps the $Z_2$ symmetry, and the two aligned states $|\mathrm{all\ up}\rangle,|\mathrm{all\ down}\rangle$ are neither degenerate nor eigenstates of the TFI model for finite $N$. 2). These two aligned states become both degenerate and eigenstates of the TFI model, breaking the $Z_2$ symmetry, when $N\to +\infty$.
Questions
Can I say that this $H$ has SSB due to the spontaneous $Z_3$ symmetry breaking by the two-fold degeneracy GS?
If the answer to Q1 is yes: it's obvious that I don't need anything (like $N\to\infty$) to make $|\psi^{(2)}_g\rangle$ degenerate eigenstates. So my question 2 is: what kind of quantity makes this symmetry breaking "spontaneous" in this simple model?
Answer: The question is based on a misinterpretation of the meaning of the terms symmetry and spontaneous symmetry breaking.
Symmetry
The Hamiltonian in the OP possesses a rotational symmetry (note that all this is easily generalizable to N sites with periodic boundary conditions). This does not mean that the eigenfunctions should possess the same symmetry, but only that under symmetry transformations they transform into linear combinations of the eigenfunctions, which they surely do. Where the symmetry does manifest itself is that the eigenstates correspond to the irreducible representations of the symmetry group, which they do.
Spontaneous symmetry breaking
Spontaneous symmetry breaking applies to many-body systems with symmetric Hamiltonians exhibiting non-symmetric states, such as the well-defined polarization of a ferromagnet. This is essentially a many-body/statistical-physics phenomenon, which cannot be uncovered in a system with a finite number of degrees of freedom (i.e., as @PaulMalinowski pointed out in their answer, it is possible only in the thermodynamic limit).
The standard basic reading here is the article by Phil Anderson More is Different. It could be particularly relevant here, since Anderson considers an example of an ammonia molecule, which is somewhat similar to the one in the OP. | {
"domain": "physics.stackexchange",
"id": 88230,
"tags": "quantum-mechanics, statistical-mechanics, condensed-matter, hamiltonian, symmetry-breaking"
} |
Biot-Savart and Ampere's law inconsistency | Question: Imagine the situation below where a steady current $I$ goes through the line. The radius of the semicircle is $r$. How do I compute the magnetic field at the point $P$, marked by a black dot?
Furthermore, I don't know the answer, and the two attempts that I have made gave me different results. I want to have this inconsistency resolved:
Solution 1. Biot-Savart law:
$$
\mathbf{B} = \frac{\mu_0 I}{4\pi r^2}\int_C\rm{d}\mathbf{l}\times \mathbf{\hat{r}} = \frac{\mu_0 I}{4\pi r^2}\int_C \rm{dl}=\frac{\mu_0 I}{4r}
$$
since $|\rm{d}\mathbf{l}\times \mathbf{\hat{r}}| = dl\cdot 1\cdot \sin(\frac{\pi}{2})=dl$, and $\int_C dl=\pi r$ because the curve is the semicircle.
Solution 2. Ampere's law
$$
\oint \mathbf{B}\cdot \rm{d}\mathbf{l} = B\cdot 2\pi r = I\mu_0\Rightarrow B = \frac{I\mu_0}{2\pi r}
$$
Answer: One can make an educated guess that the magnetic field at $P$ is one-half the value of the magnetic field at the center of a current loop which is given by:
$$B = \frac{\mu_0 I}{2R} $$
So your first approach is correct.
The approach of your solution 2 doesn't make any sense to me. The integral is along a closed path and, evidently, you're assuming $\mathbf B \cdot d\mathbf l$ is a constant along a closed path of constant $r$? | {
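As a sanity check on solution 1, the Biot–Savart integral for the semicircular arc can be done numerically (a sketch; the values $I = 1\ \mathrm{A}$ and $r = 0.5\ \mathrm{m}$ are illustrative):

```python
import math

mu0 = 4 * math.pi * 1e-7  # vacuum permeability, T*m/A
I, r = 1.0, 0.5           # illustrative current and radius
N = 10_000                # number of arc segments
dphi = math.pi / N

# Sum mu0*I/(4*pi*r^2) * (dl x r_hat)_z over the semicircle centered on P.
Bz = 0.0
for k in range(N):
    phi = (k + 0.5) * dphi                            # segment midpoint angle
    dlx, dly = -math.sin(phi) * r * dphi, math.cos(phi) * r * dphi  # tangent dl
    rhx, rhy = -math.cos(phi), -math.sin(phi)         # unit vector segment -> P
    Bz += mu0 * I / (4 * math.pi * r**2) * (dlx * rhy - dly * rhx)

# dl is perpendicular to r_hat everywhere, so the integral gives mu0*I/(4r)
assert abs(Bz - mu0 * I / (4 * r)) < 1e-12
```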
"domain": "physics.stackexchange",
"id": 17649,
"tags": "homework-and-exercises, electromagnetism"
} |
Temperature and the conductivity of sea water | Question: I came across a table in Griffiths' Introduction to Electrodynamics (2nd edition, p. 297), which shows the resistivities of various materials. I found it surprising that sea water is listed as a semiconductor here.
As it turned out, I could not find much online agreeing or disagreeing with this. Wikipedia introduces semiconductors as the following:
A semiconductor is a material, which has an electrical conductivity value falling between that of a conductor, such as copper, and an insulator, such as glass. Its resistivity falls as its temperature rises; metals behave in the opposite way.
I am not sure about the second property here, and I was wondering if sea water, as an electrolyte solution, actually exhibits this effect.
Answer: I asked the Internet, and the Internet came through. Yes, seawater conductivity does rise with temperature (graph from Temperature Effects on Conductivity of Seawater and Physiologic Saline,
Mechanism and Significance, Sauerheber and Heinz, Chem Sci J 2015, 6:4).
So seawater meets the Wikipedia definition of a "semiconductor".
However, in the same reference, frozen seawater has a similar conductivity but a negative conductivity vs. temperature slope, presumably kicking it out of Wikipedia's definition of "semiconductor" and leaving it homeless. | {
"domain": "physics.stackexchange",
"id": 92149,
"tags": "electrical-resistance, semiconductor-physics"
} |
doubt regarding working of base_local_planner and oscillations about the global plan | Question:
Hi all,
I have a doubt regarding the working of the base_local_planner, specifically the Trajectory Rollout planner. According to the above link, the local planner samples in the robot's control space, performs forward simulation, evaluates different trajectories and finally picks the highest-scoring trajectory. My question is: how does the local planner make sure that the robot follows that exact trajectory? Does it use a PID controller or something like that? The problem I am having is that the robot does not follow the global plan accurately but oscillates about the global plan. I have tried tuning different parameters and followed http://answers.ros.org/question/73195/navigation-oscilation-on-local-planner/ which reduced the oscillations, but it's still not very good; the robot still oscillates about the global plan. So, I was wondering, if it uses something like PID, how can I tune those parameters?
Update 1: One more thing, how does the base_local_planner use odom (nav_msgs/Odometry) to give a better estimate for /cmd_vel? There has to be some kind of a controller (maybe only P control). Sorry if I am being really naive, but it would be great if someone could clear my doubt or point me in the right direction.
Any help will be appreciated.
Thanks a lot.
Naman Kumar
Originally posted by Naman on ROS Answers with karma: 1464 on 2015-05-29
Post score: 1
Answer:
Short answer would be: It doesn't.
The local planner itself is run in a control loop by move_base, i.e., the local trajectory isn't passed off to another component to be executed. Only the velocity commands at the beginning of this trajectory are sent to the robot (which must implement these within its controller). In the next timestep the trajectories are reevaluated given the current pose, sensor data, etc.
If nothing unexpected changed, the trajectory that was best before should still be the best in the next step and thus the velocity commands are sent again. This closes the loop.
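The loop described above can be caricatured in a few lines (Python; the 1-D toy robot, the candidate velocity set, and the scoring function are illustrative inventions, not the actual move_base/base_local_planner API):

```python
def best_velocity(x, goal, candidates, dt, horizon):
    """Score each candidate by forward-simulating the whole horizon,
    then return only the first command of the winning trajectory."""
    def score(v):
        # forward simulation: where would constant v put us after `horizon` steps?
        return -abs((x + v * dt * horizon) - goal)   # higher is better
    return max(candidates, key=score)

# Toy 1-D "robot": the base controller perfectly executes each velocity command.
x, goal, dt = 0.0, 1.0, 0.1
for _ in range(200):                     # the move_base control loop
    v = best_velocity(x, goal, [-0.5, -0.1, 0.0, 0.1, 0.5], dt, horizon=5)
    x += v * dt                          # robot executes only this one command
    if abs(x - goal) < 0.05:
        break

assert abs(x - goal) < 0.05              # converged purely by re-planning each step
```

The point of the sketch is that no trajectory is ever handed off whole: convergence comes from re-scoring the candidates at every timestep and sending just the first command.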
Originally posted by dornhege with karma: 31395 on 2015-05-29
This answer was ACCEPTED on the original site
Post score: 2
Original comments
Comment by Naman on 2015-05-29:
@dornhege, Thanks! So, once the base_local_planner publishes the velocities ("cmd_vel"), there should be a base controller (like a PID) (move_base) running to make sure that the robot follows the trajectory, right?
Comment by dornhege on 2015-06-01:
No, the base controller only follows velocity commands, not a trajectory, and these are usually resent by the local planner frequently. The local planner should make sure the robot follows a trajectory. It assumes that the base controller correctly follows the given velocity commands.
"domain": "robotics.stackexchange",
"id": 21800,
"tags": "ros, navigation, base-controller, base-local-planner"
} |
Shuffling a deck | Question: My objective is to swap every element of a string array with a random element.
for (int i = 0; i < array.length; i++) { // scanning the deck
int abc = rm.nextInt(77); // random object range
String temp = array[i]; // swapping cards at random places
array[i] = array[abc];
array[abc] = temp;
}
I checked the code, and it seems to work. There’s no visible pattern of increase or decrease in the elements of the resulting array, and no repetition either. Am I right? Are there any problems in this code?
Answer: There is a problem with the distribution of your shuffle. Instead of choosing a random index from anywhere in the array, choose an index from zero to i (inclusive). This should prevent the same card from being shuffled twice* and ensure a more even distribution (think of it as being analogous to taking cards out of a deck at random and stacking them on a new pile. Once they're on the new pile, they don't move around anymore). This is essentially the Fisher-Yates shuffle.
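A minimal sketch of that algorithm (in Python rather than the question's Java; `random.randint(0, i)` is inclusive on both ends, matching the "zero to i (inclusive)" choice):

```python
import random
from collections import Counter

random.seed(0)  # deterministic for the demo

def fisher_yates(deck):
    """In-place Fisher-Yates shuffle: each position initiates exactly one swap,
    with a partner chosen uniformly from indices 0..i inclusive."""
    for i in range(len(deck) - 1, 0, -1):
        j = random.randint(0, i)          # j in [0, i], inclusive
        deck[i], deck[j] = deck[j], deck[i]
    return deck

# Distribution check over "abc", in the spirit of the counts reported here:
counts = Counter("".join(fisher_yates(list("abc"))) for _ in range(60_000))
assert set(counts) == {"abc", "acb", "bac", "bca", "cab", "cba"}
assert all(abs(n - 10_000) < 1_500 for n in counts.values())  # roughly uniform
```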
Shuffling an array of strings "a", "b", and "c" using your algorithm, I got the following results from 100,000 runs:
abc: 14974
acb: 18531
bac: 18755
bca: 18225
cab: 14694
cba: 14821
Using the algorithm I just described, I got these results:
abc: 16515
acb: 16758
bac: 16523
bca: 16706
cab: 16788
cba: 16710
* By "prevent the same card from being shuffled twice" I mean that each card can only be the "initiator" of one swap; they can still be "displaced" by another card initiating a swap. | {
"domain": "codereview.stackexchange",
"id": 7880,
"tags": "java, shuffle"
} |
What is the difference between binding energy and nuclear binding energy? | Question: So in my book there is a chapter summary that says “nuclear binding energy is the amount of energy that is released when nucleons (protons and neutrons) bind together” and the first time it was mentioned in the chapter it simply called this binding energy. However, when I looked it up online, binding energy is the amount of energy required to separate particles from a system. Is the amount of energy required to separate particles equal to the amount released when they are bound? Or is there something else I’m missing?
Answer: Nuclear binding energy can be seen as a "special form" of binding energy. You can see that the given definition for the latter
the amount of energy required to separate particles from a system
also applies to a nucleus: In this case, the particles are the nucleons and the system is the nucleus.
The only difference, as you noted is that in one of the given cases, the binding energy is said to be released, in the other case it is the energy required to break a bond. However, this is no contradiction: If you get $X$ "amounts" of energy when forming a bond between two (or more) particles, you have to "pay" the same amount $X$ of energy to break the bond.
So tl;dr: Nuclear binding energy is a subset of binding energy and yes, it is the same. | {
"domain": "physics.stackexchange",
"id": 78309,
"tags": "energy, nuclear-physics, binding-energy"
} |
Understanding sine wave generation in Python with linspace | Question: I was trying to sample a 12.8 MHz sine wave (period 78.125 ns) every 160 µs (microseconds). Since 160 µs is a multiple of the base period 78.125 ns (×2048), I expected to get samples of fixed amplitude, but instead what I am seeing is another periodic sine wave. I don't understand why.
I suspect quantization error, but shouldn't that generate uniform noise instead of creating a periodic sine wave?
import numpy as np
from matplotlib import pyplot as plt
fig2 = plt.figure()
ax2 = fig2.add_subplot(1, 1, 1)
capture_size1 = 2048
timestep1 = 160e-6
freq1 = 12.8e6
time1 = np.linspace(0, capture_size1 * timestep1, capture_size1)
w1 = np.sin(time1 * 2 * np.pi * freq1)
ax2.plot(time1, w1, '.')
plt.show()
Edit1 :
1. The 12.8 MHz signal is intentionally undersampled.
Adding the screenshot of the plot with capture_size1 = 2048, the sine wave has proper amplitude of [+1, -1]
Edit2: I tried to increase the precision by using Decimal and I see it is behaving as expected. I expect a straight line, as the sampling period is an exact multiple of the signal period.
from decimal import Decimal
from math import pi as mpi
from math import sin as msin
import numpy as np
from matplotlib import pyplot as plt
fig2 = plt.figure()
ax2 = fig2.add_subplot(1, 1, 1)
capture_size1 = 2048
timestep1 = 160e-6
freq1 = 12.8e6
time1 = np.linspace(0, capture_size1 * timestep1, capture_size1)
w1 = np.sin(time1 * 2 * np.pi * freq1)
ax2.plot(time1, w1, '.')
capture_size3 = Decimal(2048 * 16)
timestep3 = Decimal(160e-6)
freq3 = Decimal(12.8e6)
time3 = [Decimal(i) * timestep3 for i in range(capture_size1)]
w3 = [msin(Decimal(i) * timestep3 * Decimal(2) * Decimal(mpi) * freq3) for i in range(capture_size1)]
ax2.plot(time3, w3, '.')
plt.legend(["Actual", "Expected"])
plt.show()
Edit3:
I did some further analysis, thanks to the comment by @jithin. Looks like this is an issue with linspace. I tried to generate the time interval by just multiplication as shown in the code below, and removed the original plot which used linspace (this is crucial), so now I am able to see the values in the 1e-9 range as others suggest. So is there indeed an issue with linspace?
from decimal import Decimal
from math import pi as mpi
from math import sin as msin
import numpy as np
from matplotlib import pyplot as plt
fig2 = plt.figure()
ax2 = fig2.add_subplot(1, 1, 1)
capture_size1 = 2048
# timestep1 = 160e-6
# freq1 = 12.8e6
# time1 = np.linspace(0, capture_size1 * timestep1, capture_size1)
# w1 = np.sin(time1 * 2 * np.pi * freq1)
# ax2.plot(time1, w1, '.')
capture_size2 = 2048
timestep2 = 160e-6
freq2 = 12.8e6
time2 = [i * timestep2 for i in range(capture_size2)]
w2 = [np.sin(i * timestep2 * 2 * np.pi * freq2) for i in range(capture_size2)]
ax2.plot(time2, w2, '.')
capture_size3 = Decimal(2048)
timestep3 = Decimal(160e-6)
freq3 = Decimal(12.8e6)
time3 = [Decimal(i) * timestep3 for i in range(capture_size1)]
w3 = [msin(Decimal(i) * timestep3 * Decimal(2) * Decimal(mpi) * freq3) for i in range(capture_size1)]
ax2.plot(time3, w3, '.')
plt.legend(["multiply", "Decimal"], fontsize='xx-large')
plt.show()
The image of the above python code is below
Answer: Change the following line :
time1 = np.linspace(0, capture_size1 * timestep1, capture_size1)
To the following:
time1 = np.linspace(0, capture_size1 * timestep1, capture_size1, endpoint=False)
You will see correct results. Your original time instants are not what you intend, because numpy will create 2048 equally spaced points between 0 and 2048*Ts inclusive. What you want is 2048 equally spaced points starting from 0 and spaced 160 µs apart.
You can also use the following line if you don't want to use 'endpoint=False':
time1 = np.linspace(0, (capture_size1-1) * timestep1, capture_size1)
At your current sampling period of $160\ \mu s$, it will mean that you are taking one sample of the sinusoid every 2048 periods. There will be aliasing, but you are not bothered about that because you want to see a fixed amplitude across discrete time. Basically, you want aliasing to happen.
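The size of the effect can be reproduced without numpy. With the default `endpoint=True`, `linspace(0, N*Ts, N)` uses a step of `N*Ts/(N-1)`, slightly larger than `Ts`; the per-sample phase error accumulates into the slow sine the question observed (a sketch):

```python
import math

N, Ts, f = 2048, 160e-6, 12.8e6

# linspace default (endpoint=True): step = N*Ts/(N-1), slightly larger than Ts
dt_wrong = N * Ts / (N - 1)
dt_right = Ts

amp_wrong = max(abs(math.sin(2 * math.pi * f * k * dt_wrong)) for k in range(N))
amp_right = max(abs(math.sin(2 * math.pi * f * k * dt_right)) for k in range(N))

assert amp_wrong > 0.9   # spurious low-frequency sine, near full amplitude
assert amp_right < 1e-6  # only float rounding remains (the tiny values seen in Edit3)
```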
"domain": "dsp.stackexchange",
"id": 8780,
"tags": "python, tone-generation"
} |
How is this system in equilibrium? | Question: So, after mathematically calculating the answer I found it to be zero. But my teacher says I don't have to do the calculation as this system is in equilibrium and the initial acceleration is zero. But I don't get how is that. Please refer to the question number 8 in this photo.
Answer: It is not obvious that the rope is in equilibrium, but in fact it is.
Think of the rope as being in two parts, with length a on the left hand side of the wedge and length b on the right hand side. The rope is uniform, so the mass on the rope on the left hand side is ka and the mass on the right hand side is kb where k is the mass per unit length of the rope.
The forces on the rope on the left hand side are gravity and the normal force from the wedge. The rope does not move perpendicular to the wedge's surface so we only need to resolve forces in a direction parallel to the wedge's surface. The normal force has no component in this direction, and the component of the gravitational force is
$$mg\sin\alpha = kag\sin\alpha$$
Similarly the component of gravitational force parallel to the surface of the wedge on the right hand side is $$kbg\sin\beta$$
and so the net force on the rope is $$kg(a\sin\alpha - b\sin\beta)$$
But because the ends of the rope are at the same level, we know that $a\sin\alpha = b\sin\beta$, so the net force on the rope is zero, and it is indeed in equilibrium.
Without a calculation like this, I don't think it is obvious that the rope must be in equilibrium. | {
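A quick numerical check of this force balance (Python sketch; the angles, left-side length, and linear density are arbitrary illustrative values):

```python
import math

k, g = 2.0, 9.81                 # mass per unit length and gravity (illustrative)
alpha, beta = math.radians(30), math.radians(50)

a = 1.0                          # left-side length (arbitrary)
# Ends at the same level: a*sin(alpha) = b*sin(beta) fixes the right-side length
b = a * math.sin(alpha) / math.sin(beta)

# Net force component parallel to the wedge surfaces, as derived above
net = k * g * (a * math.sin(alpha) - b * math.sin(beta))
assert abs(net) < 1e-12          # net tangential force vanishes: equilibrium
```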
"domain": "physics.stackexchange",
"id": 39245,
"tags": "homework-and-exercises, newtonian-mechanics, equilibrium, statics"
} |
Getting from $E^2 - p^2c^2 = m^2c^4$ to $E = \gamma mc^2$ | Question: What is each mathematical step (in detail) that one would take to get from:
$E^2 - p^2c^2 = m^2c^4$
to
$E = \gamma mc^2$,
where $\gamma$ is the relativistic dilation factor.
This is for an object in motion.
NOTE: in the answer, I would like full explanation. E.g. when explaining how to derive $x$ from $\frac{x+2}{2}=4$, rather than giving an answer of "$\frac{x+2}{2}=4$, $x+2 = 8$, $x = 6$" give one where you describe each step, like "times 2 both sides, -2 both sides" but of course still with the numbers on display. (You'd be surprised at how people would assume not to describe in this detail).
Answer: Starting with your given equation, we add $p^2 c^2$ to both sides to get
$$ E^2=m^2 c^4 + p^2 c^2$$
now using the definition of relativistic momentum $p=\gamma m v$ we substitute that in above to get
$$E^2 = m^2 c^4 +(\gamma m v)^2 c^2=m^2 c^4 +\gamma^2 m^2 v^2 c^2$$
Now, factoring out a common $m^2 c^4$ from both terms on the RHS in anticipation of the answer we get
$$E^2=m^2 c^4 (1+\frac{v^2}{c^2}\gamma^2)$$
Now using the definition of $\gamma$ as
$$\gamma=\frac{1}{\sqrt{1-\frac{v^2}{c^2}}}$$
and substituting this in for $\gamma$ we get
$$E^2=m^2 c^4 \left(1+\frac{\frac{v^2}{c^2}}{1-\frac{v^2}{c^2}}\right)$$
and making a common denominator for the item in parenthesis we get
$$E^2=m^2 c^4 \left( \frac{1}{1-\frac{v^2}{c^2}} \right)=m^2 c^4 \gamma^2$$
Taking the square root of both sides gives
$$E=\pm \gamma mc^2$$
Hope this helps. | {
"domain": "physics.stackexchange",
"id": 3528,
"tags": "homework-and-exercises, special-relativity, mass-energy"
} |
Do not understand why log n = O(n^c) (for any c>0) | Question: Can anyone help me understand this equation?
$\log (n) = O(n^c)$ (for any $c>0$)
Does it mean that $O(\log (n)) < O(n^c)$ (for any $c>0$)?
Added:
Please also prove that $\log (n) = O(n^c)$ is true.
Answer: It's not actually an equation: $f = O(g)$ is lazy shorthand that should be written $f \in O(g)$. So if you look back at the definition of $O$, you should be able to see what $\log n \in O(n^{c})$ for any $c > 0$ means:
For every $c > 0$, there exists $n_{0} \geq 0$ and $k \geq 0$ such that $\log n \leq k\cdot n^{c}$ for all $n \geq n_{0}$.
Always remember that $O(\cdot)$ describes a set; $O(\log n) < O(n^{c})$ doesn't actually make sense (unless you make up a special meaning for $<$, but then no-one will know what you're talking about). You could say $O(\log n) = O(n^{c})$, as equality for sets has an understood meaning, though of course this statement in particular would be false.
As John Kugelman points out in the comments below, normal set relations do make sense, so $O(f) \subset O(g)$, $O(f) \subseteq O(g)$, etc., make sense. | {
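Since the question also asks for a proof that $\log n \in O(n^{c})$, here is a standard sketch (my addition, using the limit characterization of $O$: a vanishing ratio implies $\log n \leq k \cdot n^{c}$ for all sufficiently large $n$, for any fixed $k > 0$):

```latex
\lim_{n\to\infty} \frac{\log n}{n^{c}}
  \;\overset{\text{L'H\^opital}}{=}\;
  \lim_{n\to\infty} \frac{1/n}{c\, n^{c-1}}
  \;=\; \lim_{n\to\infty} \frac{1}{c\, n^{c}}
  \;=\; 0
  \qquad (c > 0).
```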
"domain": "cs.stackexchange",
"id": 4486,
"tags": "asymptotics"
} |
equivalent deduction of Non-mechanical thermodynamic work | Question: "Work (as in Thermodynamics) is said to be done by a system if sole effect on the surroundings could be the raising of a weight." So, how is non mechanical work like : Chemical work, Magnetization work or even electric current flow work be reduced to the same?
Answer: Other work is also a flow of energy between system and surroundings (across a boundary). Heat is flow of energy too. Consider these three sayings as analogous:
Mechanical work is said to be done BY a system if the sole effect on the surroundings could be the raising of a weight (in the surroundings).
Heat flow is said to be EXOTHERMIC if the sole effect on the surroundings could be an increase in the temperature of the surroundings.
Other work is said to be done BY a system if the sole effect on the surroundings could be either the raising of a weight (in the surroundings) or an increase in the temperature of the surroundings. | {
"domain": "engineering.stackexchange",
"id": 2661,
"tags": "thermodynamics, chemical-engineering"
} |
Algorithm for graphically spacing items | Question: I am developing a chart and graph library and am having trouble developing an algorithm.
**This is not a homework assignment for a student. See my open source project: https://github.com/eddieios/CoreChart
The algorithm is to space the Y axis labels in a given coordinate graphical space. It should space the labels at about 50 pixels apart, but have no less than 5 labels. All data points are assumed to be positive integers. The labels should be multiples of 5 (so as to have nice clean numbers). The last label can be smaller or larger than the maximum possible value, but it should be the closer of the two. The algorithm inputs are 1) the maximum possible value from the chart data, 2) the height of the graphical space. The output is a list of labels and their vertical positions.
For example:
Maximum possible chart value = 79
Height of graphical space = 200
Output would be:
Label: 0, Vertical Position: 0
Label: 20, Vertical Position: 50
Label: 40, Vertical Position: 100
Label: 60, Vertical Position: 150
Label: 80, Vertical Position: 200
I have written the following code (Obj-C), but I'm having trouble handling end cases. For example, if maxValue = 39, then the labels are set 5 apart. The optimal case here would be to set labels 10 apart. Something about how I'm deciding how many labels there should be isn't working for all cases.
int maxValue = 39;
float graphHeight = 259.0f;
int numLevels = (int)(graphHeight / 50.0f);
float offset = (int)(maxValue / (float)numLevels);
offset /= 5;
offset = (float)((int)(offset + 0.5));
offset *= 5;
CGFloat stepY = graphHeight * ((float)offset/maxValue);
for (int i = 0; i <= numLevels; i++) {
NSLog(@"label %f position %f", i*offset, i*stepY);
}
Answer: One possible approach which does not require any hairy arguments is to come up with a "nicety" measure, find a range of offsets, and choose the best one. Usually rounding errors are accounted for by taking a $\pm 1$ (in your case $\pm 5$) modification from the starting point. In your case, you would calculate the nicety measure for both $5$ and $10$, and if your nicety measure is reasonable, choose $10$ over $5$. | {
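For concreteness, here is one way such a heuristic could look (a Python sketch, not the asker's Obj-C; the rounding rules and the helper name `axis_labels` are my own choices):

```python
def axis_labels(max_value, height, px_per_label=50, min_intervals=4):
    """Pick a label step that is a multiple of 5, aiming for ~50 px spacing
    and at least 5 labels (min_intervals + 1, counting the 0 label)."""
    intervals = max(int(height / px_per_label), min_intervals)
    raw = max_value / intervals
    step = max(1, round(raw / 5)) * 5        # snap the step to a multiple of 5
    # Re-derive the interval count so the top label is the closest multiple
    intervals = max(round(max_value / step), min_intervals)
    py = height / intervals                  # even pixel spacing over the axis
    return [(i * step, i * py) for i in range(intervals + 1)]

# Matches the example in the question (max 79, height 200)
assert axis_labels(79, 200) == [(0, 0.0), (20, 50.0), (40, 100.0),
                                (60, 150.0), (80, 200.0)]
# The troublesome maxValue = 39 case now steps by 10, not 5
assert axis_labels(39, 259)[1][0] == 10
```

Trying both the snapped step and its neighbors (step ± 5) and scoring them with a "nicety" measure, as the answer suggests, is a natural extension of this sketch.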
"domain": "cs.stackexchange",
"id": 1067,
"tags": "algorithms"
} |
Bond length in cyclic organic compound | Question: Why is bond length $(a)$ greater than that of $(b)$?
Is this because $\ce{-NH}$ is an electron-donating group (+M) and this results in a higher electron density in the ring, causing repulsion?
Answer: So, although I stated in the comments that the second structure is very unstable and converts to phenol, as @Alchimista pointed out, tautomeric structures are different compounds and we need to solve the problem based on the compounds that are given to us, regardless of how stable they are.
However, resonance is operative in both of the above compounds. A compound such as this exists as a resonance hybrid, so we need to analyse the bond lengths in all the canonical structures and judge their relative contributions to the resonance hybrid.
Read this extract:
As noted above, we can more accurately describe the bonding in a molecule or polyatomic ion using the (weighted) average of its resonance structures. One model for estimating bond orders and charges in a compound is to simply take the average of those values from all (important) contributing resonance structures.
Note that here 'weighted' refers to giving resonance structures which are more stable more weight.
Try drawing out the resonance structures of both the compounds given above. Alternate resonance structures will have more contribution in the first as opposed to the second, as resonance in the second structure leads to an incomplete octet on the carbon which becomes a $\ce{C+}$, while in the first, only an $\ce{N+}$ results which is still a complete octet. So resonance structures in the first are more stable, and thus, the single bond character of the CO bond in the first structure increases more than in the second, and that causes its bond length to increase. So, the bond length of (a) is greater.
References: Resonance Structures, Chemlibre texts | {
"domain": "chemistry.stackexchange",
"id": 13766,
"tags": "organic-chemistry, bond"
} |
Why were solar constant measurements before TSIS-1 all about 0.3% high? | Question: Phys.org's Solar energy tracker powers down after 17 years says:
"The big surprise with TSI was that the amount of irradiance it measured was 4.6 watts per square meter less than what was expected," said Tom Woods, SORCE's principal investigator and senior research associate at the University of Colorado's Laboratory for Atmospheric and Space Physics (LASP) in Boulder, Colorado. "That started a whole scientific discussion and the development of a new calibration laboratory for TSI instruments. It turned out that the TIM was correct, and all the past irradiance measurements were erroneously high."
"It's not often in climate studies that you make a quantum leap in measurement capability, but the tenfold improvement in accuracy by the SORCE / TIM was exactly that," said Greg Kopp, TIM instrument scientist for SORCE and TSIS at LASP.
Question: What was it about either the past measurements or their analysis or calibration that made their measurements of the Sun's output about 0.3% high? Was the TIM's improvement primarily instrumental, or due to better calibration?
Answer: From the abstract of Kopp & Lean (2011) "A new, lower value of total solar irradiance: Evidence and climate significance":
Scattered light is a primary cause of the higher irradiance values measured by the earlier generation of solar radiometers in which the precision aperture defining the measured solar beam is located behind a larger, view‐limiting aperture. In the TIM, the opposite order of these apertures precludes this spurious signal by limiting the light entering the instrument.
(emphasis mine)
For example, in the case of the ACRIM instruments used on the Solar Maximum Mission, the Upper Atmosphere Research Satellite, and ACRIMSat:
Notably, for the ACRIM instrument NIST determined that diffraction from the view‐limiting aperture contributes a 0.13% signal not accounted for in data from the three ACRIM instruments [Butler et al., 2008]. This correction lowers the reported ACRIM values, resolving part of ACRIM's difference with TIM. In ACRIM and all other instruments, the precision aperture used to define the measured solar beam is deep inside the instrument with a larger view‐limiting aperture at the front, which, depending on edge imperfections, in addition to diffraction can directly scatter light into the absorbing cavity. Additionally, this design allows into the instrument interior two to three times the amount of light intended to be measured; if not completely absorbed or scattered back out, this additional light produces erroneously high signals. In contrast, the TIM's optical design places the precision aperture at the front so only light intended to be measured enters (Figure 4a).
The paper goes into a bit more detail about the testing done to determine the reasons for the instrumental differences. | {
"domain": "astronomy.stackexchange",
"id": 4801,
"tags": "the-sun, observational-astronomy, solar-system"
} |
Mechanism of reaction of benzenediazonium chloride with ethanol to form benzene and acetaldehyde | Question: In my textbook, all that was told was that ethanol is a mild reducing agent and hence reduces BDC while oxidising itself. Is there any mechanism or am I just supposed to remember this like a redox reaction?
I did search for a mechanism but did not find anything. I have no idea as to how this reaction works. Can someone please point me in the right direction?
Answer: In reality, benzene and acetaldehyde are the minor products of the reaction of benzenediazonium chloride with ethanol. The major product is ethoxybenzene (phenetole). Peter Griess, who had discovered diazonium salts in 1858, had reported in 1864 that a benzenediazonium salt (nitrate or sulfate) with ethanol undergoes the aforementioned redox reaction (Ref.1). However, by 1901, it was confirmed that the products from the redox reaction are minor (about ~9%) compared to the major product, phenetole (about ~64%), when benzenediazonium chloride was used. By 1956, Kelley (Ref.2) found that the yield is even as low as 3%, and his student, Miller (Ref.1), later confirmed this result.
Now, it is believed that the reaction follows the scheme $\bf{A}$ and $\bf{B}$ (Ref.1):
$$\ce{Ar-^+N#NX- + RCH2-OH -> Ar-O-CH2R + N2 + HX} \tag{$\bf{A}$}$$
$$\ce{Ar-^+N#NX- + RCH2-OH <=> Ar-N=N-O-CH2R + HX -> ArH + RCHO + N2 + HX} \tag{$\bf{B}$}$$
where process $\bf{A}$ proceeding through a $\mathrm{S_N2}$ displacement of the diazonium groups while process $\bf{B}$ involves a homolytic fission of the bonds to produce radicals which lead ultimately to aromatic hydrocarbon and aldehyde.
Proposed mechanism:
$$\ce{Ar-N=N-X -> Ar^. + N2 + X^.} $$
$$\ce{Ar^. + R-CH2-OH -> ArH + R-^.CH-OH} $$
$$\ce{R-^.CH-OH + X^. -> R-CHO + HX} $$
It was found that when oxygen is added to the system rate of reaction has not changed significantly. The propagation steps with $\ce{O2}$ has been suggested as:
$$\ce{R-^.CH-OH + O2 -> R-CHO + H-O-O^.} $$
$$\ce{H-O-O^. + R-CH2-OH -> R-^.CH-OH + H-O-O-H} $$
Ref.1 and Ref.2 have reported first-order kinetics for this reaction, with $k = 0.96 \times 10^{-4}$ at $\pu{25 ^\circ C}$ and $k = 2.03 \times 10^{-4}$ at $\pu{30 ^\circ C}$, which are consistent with other published data.
Note: More insight on mechanism, read Ref.3.
References:
Robert Warren Miller, "The stoichiometry of the reaction of benzenediazonium chloride with ethanol," MS Thesis 1957, University of Arizona, Arizona (PDF).
A. E. Kelley, PhD Dissertation 1956, Purdue University, Indiana.
DeLos F. DeTar, Takuo Kosuge, "Mechanisms of Diazonium Salt Reactions. VI. The Reactions of Diazonium Salts with Alcohols under Acidic Conditions; Evidence for Hydride Transfer," J. Am. Chem. Soc. 1958, 80(22), 6072–6077 (https://doi.org/10.1021/ja01555a044). | {
"domain": "chemistry.stackexchange",
"id": 14534,
"tags": "organic-chemistry, reaction-mechanism, redox"
} |
Not able to convert from NFA to DFA | Question: I have a simple problem of making a DFA which accepts all inputs starting with double letters (aa, bb) or ending with double letters (aa, bb), given $\Sigma =\{a, b\}$ is the alphabet set of the given language.
I tried to solve it in a roundabout way by:
Generating a regular expression
Making its corresponding NFA
Using powerset construction to deduce a DFA
Minimizing the number of states in DFA
Step 1: Regular expression for given problem is (among countless others):
((aa|bb)(a|b)*)|((a|b)(a|b)*(aa|bb))
Step 2: NFA for given expression is:
(NFA diagram image no longer available)
In Tabular form, NFA is:
State Input:a Input:b
->1 2,5 3,5
2 4 -
3 - 4
(4) 4 4
5 5,7 5,6
6 - 8
7 8 -
(8) - -
Step 3: Convert into a DFA using powerset construction:
Symbol, State | Symbol, State (Input:a) | Symbol, State (Input:b)
->A, {1} | B, {2,5} | C, {3,5}
B, {2,5} | D, {4,5,7} | E, {5,6}
C, {3,5} | F, {5,7} | G, {4,5,6}
(D), {4,5,7} | H, {4,5,7,8} | G, {4,5,6}
E, {5,6} | F, {5,7} | I, {5,6,8}
F, {5,7} | J, {5,7,8} | E, {5,6}
(G), {4,5,6} | D, {4,5,7} | K, {4,5,6,8}
(H), {4,5,7,8} | H, {4,5,7,8} | G, {4,5,6}
(I), {5,6,8} | F, {5,7} | I, {5,6,8}
(J), {5,7,8} | J, {5,7,8} | E, {5,6}
(K), {4,5,6,8} | D, {4,5,7} | K, {4,5,6,8}
Step 4: Minimize the DFA:
I have changed K->G, J->F, I->E first. In the next iteration, H->D and E->F. Thus, the final table is:
State | Input:a | Input:b
->A | B | C
B | D | E
C | E | D
(D) | D | D
(E) | E | E
And diagrammatically it looks like:
(source: livefilestore.com)
...which is not the required DFA! I have triple checked my result. So, where did I go wrong?
Note:
-> = initial state
() = final state
Answer: You are fine up to step 3 (the DFA) but your minimization is incorrect.
It's clear that the minimized DFA is not right, because both the inputs ba and ab (which are not in the original language, nor are they accepted by the DFA in step 3) lead to final state E.
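To make that concrete, here is a quick Python sketch (state labels taken from the step-3 table) confirming that the unminimized DFA handles these strings correctly:

```python
# Transition table of the step-3 DFA; final states are the parenthesised
# ones from the question's table: D, G, H, I, J, K.
delta = {
    ('A', 'a'): 'B', ('A', 'b'): 'C',
    ('B', 'a'): 'D', ('B', 'b'): 'E',
    ('C', 'a'): 'F', ('C', 'b'): 'G',
    ('D', 'a'): 'H', ('D', 'b'): 'G',
    ('E', 'a'): 'F', ('E', 'b'): 'I',
    ('F', 'a'): 'J', ('F', 'b'): 'E',
    ('G', 'a'): 'D', ('G', 'b'): 'K',
    ('H', 'a'): 'H', ('H', 'b'): 'G',
    ('I', 'a'): 'F', ('I', 'b'): 'I',
    ('J', 'a'): 'J', ('J', 'b'): 'E',
    ('K', 'a'): 'D', ('K', 'b'): 'K',
}
FINAL = {'D', 'G', 'H', 'I', 'J', 'K'}

def accepts(word):
    state = 'A'
    for ch in word:
        state = delta[(state, ch)]
    return state in FINAL

print(accepts('ab'), accepts('ba'))                   # False False
print(accepts('aa'), accepts('abb'), accepts('abaa')) # True True True
```

Any correct minimization must preserve exactly this accept/reject behaviour.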
Looking at your minimization steps, it seems that you have unified final and non-final states; for example J (final) -> F (not final) and I (final) -> E (not final). Merging a final state with a non-final state changes the language accepted by the automaton, leading to the acceptance of incorrect strings as noted above. | {
"domain": "cs.stackexchange",
"id": 1038,
"tags": "automata, finite-automata"
} |
Why do small mirror imperfections matter with modern computers | Question: Modern telescopes go to great lengths to have perfectly shaped parabolic mirrors. My question is, why go to the trouble of having a perfect mirror? Why not take a mirror roughly the right shape, and then correct for the distortion using computers?
Answer:
correct for the distortion
An imperfect mirror does not produce a distorted image - it produces a blurry image. With light-field sensors and phase imaging, one could possibly correct for the blur, but it is a much more challenging problem than normal lens distortion correction.
Distortion refers to a systematic change in how shapes are projected in an image. It results from a lens or mirror with good, accurate geometry that just does not produce a rectilinear projection.
Random imperfections in a mirror do not cause distortion. Every point on the surface of a mirror contributes to every pixel in the resulting image. If a single part of the mirror is at a slightly wrong angle, it does not cause a distortion in one point of the image. Instead, it projects the same image at a slightly different alignment on the same sensor. (1)
In the case of a starfield, this would cause ghost images of very dim stars to appear next to the real stars. Repeat this for a thousand imperfections, and the result is just blurry dots. Deconvolution is a process that can be used to remove blurriness, but noise and other uncertainties limit its effectiveness.
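As a toy illustration of that limit, the following Python/NumPy sketch (with a made-up 1-D blur kernel standing in for mirror error) shows naive Fourier deconvolution recovering point sources exactly without noise, while injected noise is amplified at frequencies where the kernel's transfer function is small:

```python
import numpy as np

# Two point "stars" blurred by a made-up 3-tap kernel
signal = np.zeros(64)
signal[20], signal[40] = 1.0, 0.5
K = np.fft.fft([0.2, 0.6, 0.2], n=64)   # kernel's transfer function

blurred = np.real(np.fft.ifft(np.fft.fft(signal) * K))       # circular blur
recovered = np.real(np.fft.ifft(np.fft.fft(blurred) / K))    # noiseless deconvolution
print(np.allclose(recovered, signal))                        # True: exact recovery

# A little sensor noise: the Fourier division amplifies it where |K| is small
noisy = blurred + np.random.default_rng(0).normal(0.0, 0.01, 64)
recovered_noisy = np.real(np.fft.ifft(np.fft.fft(noisy) / K))
print(np.abs(noisy - blurred).max(), np.abs(recovered_noisy - signal).max())
# the deconvolved error is typically several times the injected noise
```

This is why deconvolution works cleanly only in the noiseless idealization; real sensors always inject noise.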
(1) This may be a bit unintuitive if you think about funhouse mirrors where the image is distorted. Those work differently because they act the part of a planar mirror, where indeed each part of image is reflected by a single part of a mirror. But planar mirrors cannot form an image by themselves, instead the lens in your eye is the critical component of the image accuracy. | {
"domain": "astronomy.stackexchange",
"id": 5540,
"tags": "observational-astronomy, telescope"
} |
TypeError: unhashable type: 'numpy.ndarray' | Question: I'm trying to do a majority voting of the predictions of two deep learning models. The shapes of both y_pred and vgg16_y_pred are (200,1) and the type is 'int64'.
max_voting_pred = np.array([])
for i in range(0,len(X_test)):
max_voting_pred = np.append(max_voting_pred, statistics.mode([y_pred[i], vgg16_y_pred[i]]))
I run into the following error:
TypeError: unhashable type: 'numpy.ndarray'
How should I pass the data?
Answer: The problem is that you're passing a list of numpy arrays to the mode function.
It requires either a single list of values, or a single numpy array with values (basically any single container will do, but seemingly not a list of arrays).
This is because it must make a hash map of some kind in order to determine the most common occurrences, hence the mode. It is unable to hash a list of arrays.
One solution would be to simply index the value out of each array (which then means mode gets a list of integers). Just changing the main line to:
max_voting_pred = np.append(max_voting_pred, statistics.mode([y_pred[i][0], vgg16_y_pred[i][0]]))
Let me know if that doesn't fix things.
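For a self-contained check of that fix, here is a sketch with small made-up arrays standing in for the real (200, 1) predictions; note that on Python 3.8+ statistics.mode resolves a tie by returning the first value seen:

```python
import numpy as np
import statistics

# Made-up (4, 1) stand-ins for the two models' predictions
y_pred = np.array([[0], [1], [1], [0]])
vgg16_y_pred = np.array([[0], [1], [0], [0]])

max_voting_pred = np.array([])
for i in range(len(y_pred)):
    # index the scalar out of each (1,)-shaped row so mode() hashes plain numbers
    max_voting_pred = np.append(
        max_voting_pred,
        statistics.mode([y_pred[i][0], vgg16_y_pred[i][0]]))

print(max_voting_pred)   # [0. 1. 1. 0.]  (row 2 ties, so the first model's value wins)
```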
If you want something that is perhaps easier than fixing your original code, try using the mode function from the scipy module: scipy.stats.mode.
This version allows you to pass the whole array and simply specify an axis along which to compute the mode. Given you have the full vectors of predictions from both models:
Combine both arrays to be the two columns of one single (200, 2) matrix
results = np.concatenate((y_pred, vgg16_y_pred), axis=1)
Now you can perform the mode on that matrix across the single rows, but all in one single operation (no need for a loop):
max_votes = scipy.stats.mode(results, axis=1)
The results contain two things.
the mode values for each row
the counts of that mode within that row.
So to get the results you want (that would match your original max_voting_pred), you must take the first element from max_votes:
max_voting_pred = max_votes[0] | {
"domain": "datascience.stackexchange",
"id": 3955,
"tags": "python, scikit-learn, numpy"
} |
I installed ROS melodic for my 18.04 Ubuntu. But I couldn't launch gazebo with the gazebo command | Question:
After I typed gazebo and entered, this happened:
gazebo: symbol lookup error: /usr/lib/x86_64-linux-gnu/libgazebo_common.so.9: undefined symbol: _ZN8ignition10fuel_tools12ClientConfig12SetUserAgentERKNSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEEE
Originally posted by Sy Anh on ROS Answers with karma: 11 on 2019-07-05
Post score: 1
Original comments
Comment by ahendrix on 2019-07-05:
Looks similar to https://bitbucket.org/osrf/gazebo/issues/2448/problem-running-gazebo7 . Try upgrading the libignition-math2 package as suggested on that ticket.
Comment by Sy Anh on 2019-07-05:
Unfortunately, it was very different. Thanks for your help
Comment by drhombus on 2019-09-02:
what ended up fixing it for you?
Answer:
Upgrading the libraries worked for me: sudo apt upgrade libignition-math2
Originally posted by cambel07 with karma: 92 on 2019-12-09
This answer was ACCEPTED on the original site
Post score: 4
Original comments
Comment by SwapUNaph on 2020-07-03:
I get this when I upgrade libignition-math2
Reading package lists... Done
Building dependency tree
Reading state information... Done
E: Unable to locate package libignition-math2 | {
"domain": "robotics.stackexchange",
"id": 33348,
"tags": "ros-melodic"
} |
Why is the density of the Fermi gas in a neutron star not changing the potential depth caused by the strong nuclear interaction? | Question: In some textbooks, the neutron star is explained as a degenerate Fermi gas. To calculate the degenerate pressure of the neutron Fermi gas, the average energy of a neutron, U, is calculated when the volume V is changed: p = dU/dV. However, this does not consider that the changing neutron density under compression also changes the potential caused by the strong interaction between the neutrons. At least the (attractive) Hartree potential in the Hartree-Fock formalism grows linearly with nucleon density (which grows by 1/V). This gain in 1/V would win over the (repulsive) increase in the kinetic energy under compression, which grows only by V^-(2/3). Thus, the negative pressure caused by the increasing binding energy grows faster than the positive Fermi gas pressure due to the kinetic energy U. It is energetically advantageous to collapse.
What will certainly prevent a collapse is the "hard core" of the neutron, i.e. the strongly repulsive nuclear force for distances below 1 fm, as discussed in the question "Why is the central density of the nucleus constant". The density of nuclei is not too far away from tightly packed spheres with 1 fm radius. However, this is totally different physics, and the textbooks would wrongly explain the neutron star as a Fermi gas.
I suspect that the constant density may have something to do with the Fock term (exchange potential) in the Hartree-Fock equation, which in fact is another representation of the Pauli exclusion principle, compensating the increase in the Hartree term.
What is the nature of a neutron star?
Answer: You are quite correct that a neutron star is not supported by ideal neutron degeneracy pressure. Any book or web source that claims so should be given a wide berth.
As long ago as 1939, Oppenheimer & Volkoff showed that a neutron star supported by ideal NDP became unstable at finite density, with a maximum mass of around $0.7 M_{\odot}$. All measured neutron star masses are much higher than this.
The repulsive core of the strong nuclear force in asymmetric nuclear matter is almost certainly what supports neutron stars. The polytropic index of the pressure can exceed 2, as opposed to somewhere between 4/3 and 5/3 for ideal NDP, so it is a much harder equation of state.
The review by Lattimer (2013) does a good job of describing how observations of neutron star masses and radii provide constraints on the equation of state and uncertain parameters in the symmetry energy beyond nuclear densities. | {
"domain": "physics.stackexchange",
"id": 35490,
"tags": "stars, neutrons, strong-force, fermi-liquids"
} |
Heat Loss per Linear Ft | Question: For an assignment, I was given:
Calculate the heat loss per linear ft from $2$ $in$ nominal pipe. ($2.375$ $in$ outside diameter covered with $1$ $in$ of an insulating material having an average thermal conductivity of $0.0375$ $Btu/hrft^oF$.) Assume that the inner and outer surface temperatures of the insulation are $380^oF$ and $80^oF$ respectively.
The correct answer is $116$ $Btu/hr$ per linear $ft$
I used the formula from conduction through pipes:
$${ Q = \frac{\triangle t}{R_{total}} = \frac{\triangle t}{ \frac{\ln \frac{r_2}{r_1} }{2 \times \pi \times k \times L} } = \frac{\triangle t}{ \frac{\ln \frac{d_2}{d_1} }{2 \times \pi \times k \times L} } }$$
Where:
$${ \triangle t = 380-80= 300^oF }$$
$${ d_2 = 3.375 }$$
$${ d_1 = 2 }$$
$${ k = 0.0375 }$$
And I get close, but not correct. I calculated it to be: $135.090464$ $Btu$
What am I doing wrong?
Answer: I get 115.706...
= 116
i.e. their answer.
What you are doing wrong (in this problem and your other recent one) is going too fast and not visualising how the data given translates into the real world situation. You are applying formulae correctly and seem to have a good understanding of what is involved to turn given parameters into correctly formulated expressions. Now all you have to do is slow down a bit and think about what you are doing. Do that and you'll do well ! :-)
The pipe is described as
2 inch nominal pipe.
(2.375 inch outside diameter covered with 1 inch of an insulating material).
One interpretation of that would be a pipe of 2.375 inch finished OD and 1 inch thick insulation under the outer surface. That gives a 0.375 inch internal pipe ID. Doesn't sound likely.
A second interpretation is a 2.375 inch ID internal pipe with 1 inch of external insulation over it for an external OD of ???
A third interpretation is the one you used.
I used the 2nd one and get the correct answer.
Look at the 2nd version above.
What is the OD?
plug in the results and see what you get.
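For reference, a quick numerical check (Python sketch) of the second interpretation, where the 2.375 in pipe OD is the insulation ID and the insulation OD is therefore 4.375 in:

```python
import math

k = 0.0375          # Btu/(hr·ft·°F), insulation conductivity
dt = 380 - 80       # °F across the insulation
d1 = 2.375          # in, insulation inner diameter (pipe outside diameter)
d2 = d1 + 2 * 1.0   # in, insulation outer diameter (1 in added on each side)

q_per_ft = 2 * math.pi * k * dt / math.log(d2 / d1)
print(round(q_per_ft, 1))   # 115.7 Btu/hr per linear ft, which rounds to 116
```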
Short answer: Doh!!! :-)
Try to aim at no doh!s - we all manage them but they can be minimised with due care. | {
"domain": "engineering.stackexchange",
"id": 317,
"tags": "heat-transfer, pipelines"
} |
Computer-Generated Holograms: I'm completely lost. How are they physically implemented? | Question: I have been reading about holography, and I think I understand the general concept, but one thing that has me completely lost is how computer generated holography works in practice.
I think I get the basic idea behind how CGHs work. If we were to take a 3D object, like a Utah teapot, we could emulate the behaviour of an actual laser beam bouncing off the teapot and interfering with itself, thus forming the hologram. Now, here's where I'm confused: I've read about printing holograms (as in, with a regular printer), recording actual holograms on CCDs, patterning a holographic plate with the fringes using an LCD, and even holographic displays. What I don't get at all is how this is even vaguely possible. Aren't the interference fringes which make up the hologram much smaller than the wavelength of light? Even if we had LCDs with massive resolution, wouldn't the diffraction limit prevent using them to pattern the plate, in the same way that visible light photolithography is nearing its physical limitations in microfabrication? Basically, I've never seen a straightforward explanation of how computer holograms are actually transferred to the physical recording medium. As far as I know, it is possible, because there are companies currently doing it (such as Zebra Imaging). However, reading over patents and other papers in the literature yielded no clear understanding of how this really works; most authors seem to gloss over the implementation, and often seemingly contradict themselves. It was my understanding that one needed an electron microscope to actually make out the fringes because they are so small. If this is the case, why does one not need an electron microscope to etch the fringes?
Answer: The distance between the typical adjacent lines in a hologram is comparable to or longer than the wavelength of the light we use. After all, the lines arise from interference and the interference depends on the relative phase.
If you consider the distances of points H1, H2 from two generic points A, B, the difference between the H1-A and H1-B distances will differ from the difference between the H2-A and H2-B distances by an amount comparable to the distance between H1 and H2 themselves. So the wave is imprinted in the hologram.
However, when the object we are visualizing is sufficiently far from the screen in the normal direction, the change of the phase will actually be much smaller which means that the lines on the photographic plates will be much further from each other than the wavelength. This should be known from double-slit experiments and diffraction gratings.
At most, you need the resolution of the hologram to exceed one pixel per wavelength of the light. That's comparable to 0.5 microns. Invert it and you get roughly 50,000 wave maxima per inch, about ten times the dots-per-inch resolution of the best printers.
However, the condition above is one for a really fine hologram. In reality, you can make a hologram even when its resolution is worse than that. Note that when we look at the hologram, in each direction we see the result of the interference of pretty much all the points on the screen - it's some kind of a Fourier transform. Because there are so many points that interfere, they can effectively reconstruct the subpixel structure of the image.
It's also a well-known fact that you may break a hologram into pieces and you may still see the whole object in each piece. | {
"domain": "physics.stackexchange",
"id": 34977,
"tags": "optics, electromagnetic-radiation, hologram"
} |
Wick-rotated quantum computers e.g. to be realized with Ising-like systems? | Question: Quantum mechanics is equivalent with Feynman path ensemble, which after Wick rotation becomes Boltzmann path ensemble - and e.g. Ising model is a basic condensed matter model, which is assumed to use Boltzmann ensemble of e.g. sequences of spins - in spatial direction instead of temporal in QM.
Such spatial realization of Wick-rotated quantum mechanics seems to allow to violate Bell-like inequalities, so a natural next question is if we could build Wick-rotated quantum computers in Ising-like systems? For example to be "printed" on a surface, solving encoded problem if assuming Boltzmann ensemble among sequences?
Notice that Wick-rotated QC is different from adiabatic QC - the latter minimizes a Hamiltonian, having a huge problem with the usually exponentially growing number of local minima. The former is closer to Shor's - it exploits a path ensemble and should not have this optimization problem (?)
While quantum computers use unitary gates: with eigenspectrum in complex unitary circle, such Wick-rotated gates would have real eigenspectrum.
Hadamard gate $H$ is used to get initial superposition in quantum computers, below mixing gate $X$ can be used to get (Boltzmann) ensemble in Wick-rotated computers:
$$H=\frac{1}{\sqrt{2}} \left(\begin{array}{cc}1 & 1 \\ 1 & -1 \\ \end{array} \right)
\qquad\qquad X= \left(\begin{array}{cc}1 & 1 \\ 1 & 1 \\ \end{array} \right) $$
In theory, controlled e.g. NOT, X should be also possible, the question is what could be realized e.g. in Ising-like system?
While in quantum computers we can only fix initial amplitude in the past, a big advantage of such spatial realization is that we could fix amplitudes in both directions (left and right), what might allow e.g. to solve 3-SAT (NP-complete, end of this arxiv).
In quantum subroutine of Shor's algorithm below, we prepare ensemble (past direction) of all inputs, calculate classical function and measure its value (future direction) - restricting the initial ensemble to inputs giving the same value of classical function - period of such restricted ensemble (found with QFT) gives a hint for the factorization problem.
Analogously for Boltzmann path ensemble for 3-SAT below, but in spatial realization we can also fix values from second direction (right) - restriction (in splits) becomes to inputs satisfying all the alternatives:
Which Wick-rotated gates could be realized in Ising-like systems?
Assuming we could build e.g. above 3-SAT setting, would it work? In other words - is Boltzmann sequence ensemble a perfect assumption, or only an approximation?
Is there a literature for Wick-rotated quantum computers, gates?
Answer: I'm not exactly sure what you're asking, but note that if you just Wick rotate any old Hamiltonian, you're likely to end up with a path integral with negative Boltzmann weights, which won't actually correspond to any (local) physical statistical system, eg. Ising.
The Hamiltonians that do Wick rotate to a path integral with positive Boltzmann weights are called "stoquastic" and finding their ground state energy has its own complexity class, called StoqMA (contained somewhere in QMA and containing MA). This paper describes the complexity in some detail, but I am not expert enough to summarize it.
I found this nice diagram as Fig. 1 in this paper ("On the complexity of stoquastic Hamiltonians" by Ian Kivlichan... I couldn't find an arxiv link). | {
"domain": "physics.stackexchange",
"id": 64414,
"tags": "quantum-mechanics, statistical-mechanics, quantum-computer, ising-model, wick-rotation"
} |
ROS Rviz groovy segfault and assertion fail | Question:
Hi
RVIZ keeps segfaulting everytime i visualise a point cloud, I am not sure why.
I have attached a pcd file test.jpg (a text file; rename test.jpg to test.pcd) which causes the following error in RVIZ
rviz: ../../../../../src/glsl/ralloc.c:81: get_header: Assertion `info->canary == 0x5A1106' failed.
Mostly it just segfaults everytime I try to visualise the pointcloud 2
rosrun pcl_ros pcd_to_pointcloud test.pcd 10 cloud_pcd:=Laser _frame_id:=/map
I have been experimenting with the number of points. Set WIDTH and POINTS to N to visualise N points. When this number is 100 it segfaults, and when it's small I can see the pointcloud.
for instance if WIDTH and POINTS = 10 then the segfault does not occur.
If you gradually increase this number there is no segfault, but if you make this number large (e.g. 100) the first time you publish the pointcloud, RVIZ segfaults
Can anyone else visualise the same pointcloud? I am using Ubuntu 12.04 and groovy.
This weird segfault error only started happening recently....
Recently, after I visualised a pcd file, instead of a segfault and RVIZ crashing, my screen flashed several times, all the task bars disappeared, and the screen started floating around. I had to restart my computer to fix the problem
Do I need to reinstall RVIZ or ROS???
ALSO
I noticed ros-groovy-rviz was updated the day before this bug started happening ...
Edit Attempting to troubleshoot
export LIBGL_ALWAYS_SOFTWARE=1
rosrun rviz rviz
caused the following error
*** glibc detected *** /opt/ros/groovy/lib/rviz/rviz: double free or corruption (!prev): 0x0000000004015ef0 ***
Originally posted by Sentinal_Bias on ROS Answers with karma: 418 on 2014-02-06
Post score: 4
Answer:
I solved the issue by building RVIZ groovy 1.9.32 by source
https://github.com/ros-visualization/rviz/tree/1.9.32
Both versions 1.9.33 and 1.9.34 don't work for me
$ cd ~/old_rviz
$ mkdir build
$ cd build
$ cmake ..
$ make
$ source ./devel/setup.bash
$ rosrun rviz rviz
Note: this bug will be fixed soon, I guess.
Originally posted by Sentinal_Bias with karma: 418 on 2014-02-06
This answer was ACCEPTED on the original site
Post score: 2
Original comments
Comment by Bharadwaj on 2014-04-03:
Worked for me as well. - lubuntu 12.04 - ROS Groovy
Comment by h iman on 2014-04-22:
Hello guys, I've an embarrassing question: how do I clone RVIZ groovy 1.9.32 from github? Also, how can I (if possible) replace the current version that I have in my groovy (1.9.34) with this version?
"domain": "robotics.stackexchange",
"id": 16902,
"tags": "rviz"
} |
How to calculate the permitted resultant states of 3 quadrupole phonons ($\ell=2$)? | Question: How do I get the permitted resultant states of 3 quadrupole phonons ($\ell=2$)?
I think I'm supposed to somehow tabulate the $m$ states.
Can anyone help?
Answer: You would need to repeatedly couple $\ell=2$ states.
Step 1 is to get the Clebsch-Gordan series: $2\otimes 2=4\oplus 3\oplus 2\oplus 1\oplus 0$ if there is no symmetry/antisymmetry restriction. Next, couple $\ell=2$ again to this sum, i.e.
$2\otimes \left(4\oplus 3\oplus 2\oplus 1\oplus 0\right)$. That's a lot of states: since the dimension of the irrep $\ell=2$ is $5$, you get $5\times 5\times 5=125$ states in total. (Note that some final values of $L$ may appear more than once.)
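These multiplicities can be enumerated with a short Python sketch using only the triangle rule (this ignores any bosonic symmetry restriction; for identical quadrupole phonons the symmetric subset is the well-known $L = 0, 2, 3, 4, 6$ multiplet):

```python
from collections import Counter

l = 2
first = range(0, 2 * l + 1)    # 2 x 2 = 4 + 3 + 2 + 1 + 0
totals = Counter()
for L12 in first:
    # triangle rule |L12 - l| <= L <= L12 + l for coupling the third l = 2
    for L in range(abs(L12 - l), L12 + l + 1):
        totals[L] += 1

print(dict(sorted(totals.items())))
# {0: 1, 1: 3, 2: 5, 3: 4, 4: 3, 5: 2, 6: 1}
print(sum((2 * L + 1) * n for L, n in totals.items()))   # 125
```

The dimension check at the end confirms the $5 \times 5 \times 5 = 125$ count from above.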
If needed the step after is to actually construct the states. For $L_{12}\in \{4,3, 2, 1, 0\}$ you first construct the states
$$
\vert L_{12}M_{12}\rangle =\sum_{m_1m_2} C^{L_{12}M_{12}}_{2m_1,2m_2}
\vert 2m_1\rangle\vert 2m_2\rangle\tag{1}
$$
with $C^{L_{12}M_{12}}_{2m_1,2m_2}$ a Clebsch-Gordan coefficient.
If $L_{123}$ is in $2\otimes \left(4\oplus 3\oplus 2\oplus 1\oplus 0\right)$, you then proceed with
$$
\vert L_{123}M_{123}\rangle =\sum_{m_{12}m_3} C^{L_{123}M_{123}}_{L_{12}m_{12},2m_3}
\vert L_{12}m_{12}\rangle\vert 2m_3\rangle
$$
where $\vert L_{12}m_{12}\rangle$ is as per (1). | {
"domain": "physics.stackexchange",
"id": 44265,
"tags": "condensed-matter, angular-momentum, phonons"
} |
Reference for broader spectral lines? | Question: I'm looking for some reference listing spectral lines outside the visible spectrum. (In particular, I'm looking for an element or compound that emits strongly in the 700-900nm range.)
Answer: Assuming your question is a reference request:
I know of one resource that has a very broad range of spectral data, the
NIST Atomic Spectra Database Lines Form
This has worked great for me.
This page lists a couple of different alternatives. For the near IR case you are interested in, several collections are listed here. | {
"domain": "astronomy.stackexchange",
"id": 1389,
"tags": "spectra, infrared, resource-request"
} |
Additional Power to DC Motor via Second Power Source | Question: How can I provide more power to a DC motor that is in series behind a receiver circuit hacked out of a cheap RC car without burning up the receiver board? The board runs off two AAs at about 3V. I'm replacing the stock motor with a slightly larger one (12V, taken from a printer) and remounting it on a chassis for a homebrew robotics project... just messing around to learn more. I imagine I could go safely to 4.5V or even 6V with the receiver but I don't want to go much higher since half the stuff is epoxied and I can't really tell what's in there.
What I'd like to be able to do is add an additional two AA batteries behind the receiver to run the receiver system at 6V but add another two 3V 123A batteries to have the motor at 12V with the ability to run with the higher current draw due to the heavier load the motor will handle on its fancy new chassis... but without pulling that current through the receiver circuit.
My first thought is to simply connect my 123As negative to the motor and positive to a common ground... but I'm really not sure and I want to be careful to not damage the circuit or batteries. My next thought is to simply build a single power supply out of my 123As and use a current divider, but I've only read about them and never actually tried one.
I've been doing some of those kiddie "electronic playgrounds," a few books and probably cost Google an extra few bucks in energy costs and I'm still kinda at a loss.
Answer: It is safest to isolate the power supply for the motors from the electronics.
Normally, there is a single top-level power supply, eg. at 12V. This can supply the motors and other actuators directly. Because electronics normally run at 5V or 3V, a voltage regulator is often used to decrease the voltage (it taps into the 12V supply, and outputs a regulated 5V for the electronics). This is good, because the current motors draw can change quickly and may sometimes affect the power supply. In this case, if the voltage regulator is insufficient, capacitors may be added across the 12V supply, but this is not often required.
In your case, supposing you don't want to use a voltage regulator, you should have separate battery cells supply your electronics (at 3V), and another stack of cells at 12V to run your motors. You can then connect the ground of each circuit to each other to provide a common reference for your motor control signals. This setup again prevents the motors from affecting the voltage supplied to your electronics (assuming they draw a relatively constant, low current, you won't require voltage regulation, although you might add a few capacitors to help with this - especially when you have op-amps or other switching electronics). Also, each cell in each battery stack will have the same current drawn from it. This avoids some batteries in a stack having more charge than others, or having more current drawn.
"domain": "robotics.stackexchange",
"id": 602,
"tags": "motor, power"
} |
Why is relativistic velocity addition not symmetric? | Question: The Galilean velocity addition formula is $$u' = v + u$$ This is symmetric if one swaps $v$ and $u$.
What are the fundamental reasons why the (generalized) relativistic velocity addition formula is not symmetric?
$$\vec{u}^{\prime}=\frac{\vec{v}\left(1+\vec{v} \cdot \vec{u} /|\vec{v}|^{2}\right)+\left(\vec{u}-\vec{v}(\vec{v} \cdot \vec{u}) /|\vec{v}|^{2}\right) \sqrt{1-|\vec{v}|^{2}}}{1+\vec{v} \cdot \vec{u}}$$
Is it due to the constant, finite speed of light?
Answer: The fundamental reason behind the asymmetry is that two Lorentz boosts don’t in general commute, just like two 3D spatial rotations don’t in general commute. For Lorentz boosts and for rotations, it matters which one you do first, and thus the formula for composing them cannot be symmetric.
In terms of the infinitesimal rotation generators $J_i$ and boost generators $K_i$, the non-commutation of the transformations is clear:
$$[J_i, J_j]=\epsilon_{ijk}J_k$$
$$[K_i, K_j]=-\epsilon_{ijk}J_k$$
$$[J_i, K_j]=\epsilon_{ijk}K_k.$$
Only when the two rotations are around the same axis, or the two boosts are in the same direction, do they commute.
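A quick numerical check of both statements, with explicit $4\times 4$ boost matrices (a Python sketch, taking $c = 1$ and arbitrary example velocities):

```python
import numpy as np

# Lorentz boosts on (t, x, y, z) with c = 1
def boost_x(b):
    g = 1.0 / np.sqrt(1.0 - b * b)
    return np.array([[g, -g * b, 0, 0],
                     [-g * b, g, 0, 0],
                     [0, 0, 1, 0],
                     [0, 0, 0, 1]], dtype=float)

def boost_y(b):
    g = 1.0 / np.sqrt(1.0 - b * b)
    return np.array([[g, 0, -g * b, 0],
                     [0, 1, 0, 0],
                     [-g * b, 0, g, 0],
                     [0, 0, 0, 1]], dtype=float)

# Perpendicular boosts: order matters
print(np.allclose(boost_x(0.5) @ boost_y(0.5), boost_y(0.5) @ boost_x(0.5)))  # False
# Collinear boosts: rapidities just add, so they commute
print(np.allclose(boost_x(0.3) @ boost_x(0.5), boost_x(0.5) @ boost_x(0.3)))  # True
```

The mismatch in the perpendicular case is exactly the rotation generated by the $[K_i, K_j]$ commutator above.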
In general, it is usual for transformations not to commute, and unusual for them to commute. Since even 3D rotations are non-commutative, you should not think of the non-commutativity of Lorentz boosts as being due the finite speed of light (although in some sense they are, since Galilean boosts are commutative). I think of the non-commutativity of rotations and Lorentz boosts as being due to the dimensionality of spacetime being high enough to destroy the trivial commutativity in lower dimensions. | {
"domain": "physics.stackexchange",
"id": 61159,
"tags": "special-relativity, velocity, inertial-frames"
} |
Basic C++ IOT Weather Station | Question: I am making an IOT weather station based on a particle photon that sends data via webhook to one of my other projects. I am very new to c++ and programming in general.
What can I do better with my code? What can be optimised? And what are industry best practices that I can put into place?
Here's my code:
#include <Adafruit_DHT/Adafruit_DHT.h>
#define DHTPIN 2
#define DHTTYPE DHT11
DHT dht(DHTPIN, DHTTYPE);
void setup() {
Serial.begin(9600);
dht.begin();
}
void loop() {
delay(2000);
//Variables
//DHT11 sensor floats
float h = dht.getHumidity();
float t = dht.getTempCelcius();
float f = dht.getTempFarenheit();
float hi = dht.getHeatIndex();
//UV sensor floats
float sensorValue = analogRead(A0);
int UVLevel = 0;
//Error Retry
if (isnan(h) || isnan(t) || isnan(f))
{
Serial.println("Failed to read from DHT sensor!");
return;
}
if(sensorValue <=10)
{
UVLevel = 0;
}
if(sensorValue <= 46 && sensorValue > 10)
{
UVLevel = 1;
}
if(sensorValue <= 65 && sensorValue > 46)
{
UVLevel = 2;
}
if(sensorValue <= 83 && sensorValue > 65)
{
UVLevel = 3;
}
if(sensorValue <= 103 && sensorValue > 83)
{
UVLevel = 4;
}
if(sensorValue <= 124 && sensorValue > 103)
{
UVLevel = 5;
}
if(sensorValue <= 142 && sensorValue > 124)
{
UVLevel = 6;
}
if(sensorValue <= 162 && sensorValue > 142)
{
UVLevel = 7;
}
if(sensorValue <= 180 && sensorValue > 162)
{
UVLevel = 8;
}
if(sensorValue <= 200 && sensorValue > 180)
{
UVLevel = 9;
}
if(sensorValue <= 221 && sensorValue > 200)
{
UVLevel = 10;
}
if(sensorValue <= 240 && sensorValue > 221)
{
UVLevel = 11;
}
if(sensorValue > 240)
{
UVLevel = 12;
}
//Serial Print DHT
Serial.println();
Serial.println();
Serial.print("Humid: ");
Serial.print(h);
Serial.print("%");
Serial.println();
Serial.print("Temp: ");
Serial.print(t);
Serial.print("C ");
Serial.println();
Serial.print("Apparent Temperature: ");
Serial.print(hi);
Serial.println();
Serial.println();
//Serial Print UV
Serial.print("UV Level =");
Serial.print(UVLevel);
Serial.println();
//Publish Data To Particle Cloud
Particle.publish("Humidity", String(h));
Particle.publish("Temperature", String(t));
Particle.publish("Apparent Temperature", String(hi));
Particle.publish("UV Index",String(UVLevel));
delay(5000);
}
For reference these are the sensors I am using:
DHT 11 Temp/Humidity sensor
UV sensor
Answer: I'll assume the indentation is an artifact of pasting the code here.
The big problem here is that long series of if statements. A simple improvement is to make use of the else clause, along with knowledge of what values you've tested for already, it can be simplified to
if (sensorValue <= 10)
UVLevel = 0;
else if (sensorValue <= 46)
UVLevel = 1;
// etc.
But we can do better. Since the range of values you're checking with is contiguous, and the result (UVLevel) is linear, we can set up an array to hold the data, then just use a loop to find the right place.
static const int values[] = { 10, 46, 65, /* ... */, 240 };
const int numLevels = sizeof(values) / sizeof(values[0]);  // 12 with the full list
UVLevel = numLevels;
for (int i = 0; i < numLevels; ++i) {
    if (sensorValue <= values[i]) {
        UVLevel = i;
        break;
    }
}
We initially set UVLevel to the maximum value so that if the sensor value is greater than 240 we get the correct result without having to check after the loop.
For a larger range of possible values, the for loop used here can be replaced with a more complicated binary search. | {
"domain": "codereview.stackexchange",
"id": 30613,
"tags": "c++, arduino"
} |
Couldn't high gear ratio replace need for high torque engines? | Question: I've noticed that there is a correlation between displacement and torque, so trucks, SUVs etc. have large displacement engines to produce lots of torque to move heavy loads.
Sports cars have lower displacements than most trucks, but produce as much if not more power, but typically much less torque. They also tend to get better gas mileage. This of course has a lot to do with weight, aerodynamics etc, but I can't help but notice these engines seem more efficient.
Regardless, it costs lots of money to develop and maintain different engine lines (in 1954, for instance, ALL Chevy's had the SAME inline 6 ). So there needs to be significant advantages to different types of engines.
Now forget all that supposition about displacement and gas mileage, here is the actual question:
Why don't vehicles with more torque-intensive loads use the same engines as vehicles with more power-intensive loads, just with different transmissions and gear ratios?
Answer: Consider two engines making the same amount of power, one through displacement (blue) and one through engine speed (red).
If geared appropriately with similar shift points, you will notice that the high-rpm engine has significantly less ability to accelerate when not near peak rpm. We call this engine elasticity. The ability to accelerate with a similar force across pretty much the whole rpm range makes cars desirable to drive. The red curve would be a car that is very frustrating to drive, as constant shifting is needed to keep the rpms high.
The main difference comes from the way internal combustion engines make torque. There are compromises that need to be made to achieve high specific power (power at high rpm) that significantly reduce torque at lower rpm. The workaround for this problem is either switching cam technology (e.g. Honda VTEC) or turbocharging (e.g. Audi).
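As a back-of-the-envelope sketch (all figures are assumed, purely for illustration), gearing can indeed trade engine speed for wheel torque at constant power, which is the core of the question:

```python
import math

def power_kw(torque_nm, rpm):
    # P = tau * omega, with omega converted from rpm to rad/s
    return torque_nm * rpm * 2 * math.pi / 60 / 1000

def wheel_torque(engine_torque_nm, gear_ratio):
    # Ignoring drivetrain losses, torque is multiplied by the gear ratio
    return engine_torque_nm * gear_ratio

# Hypothetical engines making the same ~63 kW:
# a 400 N*m engine at 1500 rpm vs a 200 N*m engine at 3000 rpm
print(power_kw(400, 1500), power_kw(200, 3000))
# Doubling the gear ratio recovers the same wheel torque...
print(wheel_torque(400, 3.0) == wheel_torque(200, 6.0))  # → True
```

...but only at one operating point; off peak, the narrow torque curve of the high-rpm engine still hurts, which is the elasticity argument above.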
In the end you want whatever design will give you the widest torque curve for the power requirement. This just makes cars more usable. | {
"domain": "physics.stackexchange",
"id": 27051,
"tags": "torque, power"
} |
Centre of Mass - Mechanics | Question: Please see image attached.
In the image, moments (torque) was taken about the point $O$. They have defined $\bar{x}$ as the distance from the point $O$ and the COM. If you scroll down to the third line of workings, they say that
Taking moments about $O$:
$$ (2g \times 3)+(5g \times 4) + (3g \times 6) = Mg \times \bar{x}$$
$M = 2\,\text{kg} + 5\,\text{kg} + 3\,\text{kg} = 10\,\text{kg}$
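For reference, the moment balance above can be checked numerically (distances from $O$ as given in the working):

```python
masses = [2, 5, 3]       # kg
positions = [3, 4, 6]    # distances from O

M = sum(masses)
x_bar = sum(m * x for m, x in zip(masses, positions)) / M
print(M, x_bar)  # → 10 4.4
```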
Now, I have a few questions about the way moments (torque) were found about $O$.
Wikipedia states the following: In physics, the center of mass of a distribution of mass in space is the point where if a force is applied it moves in the direction of the force without rotating
With the above definition from Wikipedia in mind, how can $ (2g \times 3)+(5g \times 4) + (3g \times 6) = Mg \times \bar{x}$ when $Mg$ is the downward force applied at the centre of mass? According to the above definition, this would just move the system of objects downwards with no rotation.
Answer: The point of using torque in this question is only to find the center of mass: the point at which an imaginary 10 kg mass, if placed there, would behave equivalently to the combination of the 3 particle masses given in the question. There is no connection between the masses, so there is no meaning to rotation of a body here. If you imagine them connected by a massless rod with a force applied at the center of mass, the whole rod along with the 3 masses would move downwards without any rotation. The torque calculation here is only a way to find the equivalent point (the center of mass). It doesn't signify the presence of a torque, nor is there a rigid body to undergo rotation. The confusion might arise from taking the point $O$ as a reference and assuming the masses are rotating about it, which is not the case. | {
"domain": "physics.stackexchange",
"id": 48662,
"tags": "homework-and-exercises, newtonian-mechanics"
} |
How does gravity swing a door on a leaning post? | Question: A door or gate that is hinged on a leaning post will swing due to gravity. Most of the force of gravity is pulling straight downwards and the bearing surfaces of the hinge are nearly perpendicular to the direction of the gravitational force. This force causes pressure and friction within the hinge which resists movement.
The motion of the gate sideways looks like gravity sliding an object across the surface of a slightly un-level table, because the gate must move sideways before it can go down. To be specific, I imagine that opposite minor segments of the Earth's crust are pulling against the door; on one side the small slope of the hinge surface is more in line with the force than on the other, which determines which side wins.
So, is this type of motion due to an action like a balanced pencil toppling, or is it due to tidal forces from adjacent parts of the Earth's crust?
Answer: The explanation is rather simple but not so easy to depict. I've made an attempt below:
$OA$ is the post. It is located in the $xz$-plane and inclined somewhat.
The door's bottom edge $OB$ is located in the $yz$-plane.
Assume only gravity $mg$ acts on the door, on the centre of gravity.
The gravity vector $m\vec{g}$ (purple) can now be decomposed into two components, $\vec{F}_1$ and $\vec{F}_2$.
$\vec{F}_2$ acts parallel to the post and is counteracted by the door hinges (they prevent vertical movement).
$\vec{F}_1$ acts perpendicularly to the door's plane and the post.
$\vec{F}_1$ now causes a torque to arise about the post. Assuming a uniform door and if $W$ is the width of the door, this torque $\tau$ is:
$$\vec{\tau}=\vec{F}_1\frac{W}{2}$$
By Newton's second law (applied to rotation) this causes angular acceleration $\vec{\alpha}$:
$$\vec{\alpha}=\frac{\vec{\tau}}{I}=\vec{F}_1\frac{W}{2I},$$
where $I$ is the inertia moment of the door about the $AO$ axis (the post).
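As a numeric sketch of these formulas (door mass, width and lean angle are all assumed values, purely for illustration):

```python
import math

# Hypothetical door: 20 kg, 1.0 m wide, post leaning 5 degrees from vertical
m, W, g = 20.0, 1.0, 9.81
theta = math.radians(5.0)

F1 = m * g * math.sin(theta)   # component perpendicular to door plane and post
tau = F1 * W / 2               # torque about the post
I = m * W**2 / 3               # uniform door rotating about one edge
alpha = tau / I                # Newton's second law for rotation
print(round(alpha, 3))         # angular acceleration, rad/s^2
```

Even a modest 5° lean gives an appreciable angular acceleration, which matches everyday experience with gates on leaning posts.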
So the door starts rotating about the post, due to gravity alone. | {
"domain": "physics.stackexchange",
"id": 37190,
"tags": "gravity, geophysics, tidal-effect"
} |
State produced by spontaneous parametric down-conversion (SPDC) | Question: I'm researching SPDC's efficacy for use in an optical quantum computing model and I've been trying to figure out exactly what state the photons are in when they come out (as represented by a vector, for example), if I'm using type 1 SPDC and I'm looking at the polarization of the photons.
Please provide any references used =)
Answer: Background
First of all, I'll use $\lvert H\rangle$ as a horizontally polarised state and $\lvert V\rangle$ as a vertically polarised state1. There are three modes of light involved in the system: the pump (p), taken to be a coherent light source (a laser), as well as the signal and idler (s/i), the two generated photons.
The Hamiltonian for SPDC is given by $H = \hbar g\left(a^{\dagger}_sa^{\dagger}_ia_p + a^{\dagger}_pa_ia_s\right)$, where $g$ is a coupling constant dependent on the $\chi^{\left(2\right)}$ nonlinearity of the crystal and $a\left(a^{\dagger}\right)$ is the annihilation (creation) operator. That is, there is a possibility of a pump photon getting annihilated and generating two photons2 as well as a possibility of the reverse.
The phase matching conditions for frequencies, $\omega_p = \omega_s + \omega_i$ and wave vectors, $\mathbf{k}_p = \mathbf{k}_s + \mathbf{k}_i$ must also be satisfied.
Type 1 SPDC
This is where the two generated (s and i) photons have parallel polarisations, perpendicular to the polarisation of the pump, which can only be used to perform SPDC when the pump is polarised along the extraordinary axis of the crystal.
This means that defining the extraordinary axis as the vertical (horizontal) direction and inputting coherent light along that axis will generate pairs of photons in the state $\lvert HH\rangle\, \left(\lvert VV\rangle\right)$. This is not of much use, so to generate an entangled pair of photons, two crystals are placed next to each other, with extraordinary axes in orthogonal directions. The coherent source is then input with a polarisation of $45^\circ$ to this, such that if the first crystal has an extraordinary axis along the vertical (horizontal) direction, there is a probability of generating photons in the state $\lvert HH\rangle\, \left(\lvert VV\rangle\right)$ as before from the first crystal, as well as a probability of generating photons in the state $\lvert VV\rangle\, \left(\lvert HH\rangle\right)$ from the second crystal.
However, as the light from the pump is travelling through a material, it will also acquire a phase in the first crystal, such that the final state is $$\lvert\psi\rangle = \frac{1}{\sqrt{2}}\left(\lvert HH\rangle + e^{i\phi}\lvert VV\rangle\right).$$
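Since the question asks for the state as a vector: in the basis $\{\lvert HH\rangle, \lvert HV\rangle, \lvert VH\rangle, \lvert VV\rangle\}$ this state can be written down numerically, e.g. with numpy:

```python
import numpy as np

H = np.array([1.0, 0.0])
V = np.array([0.0, 1.0])

def type1_pair_state(phi):
    # (|HH> + e^{i phi} |VV>) / sqrt(2) in the {HH, HV, VH, VV} basis
    return (np.kron(H, H) + np.exp(1j * phi) * np.kron(V, V)) / np.sqrt(2)

psi = type1_pair_state(0.0)   # phi = 0 gives the Bell state |Phi+>
print(np.round(psi.real, 3))  # amplitude 1/sqrt(2) on |HH> and |VV>, 0 elsewhere
```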
Due to the phase matching conditions, the emitted photon pairs will be emitted at opposite points on a cone, as shown below in Figure 1.
Figure 1: A laser beam is input into two type 1 SPDC crystals, with orthogonal extraordinary axes. This results in a probability of emitting a pair of entangled photons at opposite points on a cone. Image taken from Wikipedia.
1 This can be mapped to qubit states using e.g. $\lvert H\rangle = \lvert 0\rangle$ and $\lvert V\rangle = \lvert 1\rangle$
2 called signal and idler for historical reasons
References:
Keiichi Edamatsu 2007 Jpn. J. Appl. Phys. 46 7175
Kwiat, P.G., Waks, E., White, A.G., Appelbaum, I. and Eberhard, P.H., 1999. Physical Review A, 60(2) - and the arXiv version | {
"domain": "quantumcomputing.stackexchange",
"id": 4,
"tags": "experimental-realization, optical-quantum-computing, spdc, quantum-state"
} |
Understanding the concept of surface tension | Question: The concept of surface tension doesn't seem to be well explained in the first course on Fluid Mechanics. Fundamentals of Fluid Mechanics writes
A tensile force may be considered to be acting in the plane of the surface along any line in the surface. The intensity of the molecular attraction per unit length along any line in the surface is called the Surface Tenison .
There are a few things that are causing me problems:
The analogy tensile force is quite hard to understand, I mean the force of attraction looks something like this . As you can see, the molecules at the top have no upward force acting on them and therefore they form something like a surface (this what others writes). Well, okay there is no upward force but we can certainly go for superposition of forces and from the diagram, we can see that the upper molecule should accelerate downwards but it doesn't, why? How all this have any correlation with tension? (the way I have understood tension till now is the force that a string exerts on an object connected to it).
The phrase along any line in the surface is causing problems because it writes in the surface not on on the surface which is quite hard to comprehend what the book intends.
I request you to please explain the concept of Surface Tension considering the problems that have written over here. If you present your personal understanding of the topic then it will be much appreciated.
Thank you.
EDIT: The concept of surface tension is causing me problem because what I’m thinking of surface tension is something like a stretched bed sheet on which things kept do not fall, but the problem is how this bed sheet analogy has arrived in fluids, I mean at the surface molecules and the mathematical definition of surface tension doesn’t make sense to me.
Answer: Separating molecules requires work to be done against the attractive forces. So because molecules in the surface don't have molecules above them, they need less energy to move down into the bulk of the liquid than is needed for molecules to move from bulk to surface. Therefore the rate of movement of molecules due to their random thermal energy is greater surface to bulk than bulk to surface. [Compare Boltzmann factors $\exp\left( -\frac{E_{S\ to\ B}}{kT}\right)$ and $\exp\left(-\frac{E_{B\ to\ S}}{kT}\right)$.] This tends to deplete the surface layer, which in turn reduces the movement of molecules from surface to bulk, re-establishing (dynamic) equilibrium (equal rates of movement to and from the surface layer).
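As a toy numeric check of those Boltzmann factors (both barrier energies are assumed here, purely for illustration — the only relevant feature is that the surface-to-bulk barrier is the smaller one):

```python
import math

k = 1.380649e-23    # Boltzmann constant, J/K
T = 300.0           # K
E_s_to_b = 2.0e-21  # J, assumed barrier for surface -> bulk (smaller)
E_b_to_s = 4.0e-21  # J, assumed barrier for bulk -> surface (larger)

ratio = math.exp(-E_s_to_b / (k * T)) / math.exp(-E_b_to_s / (k * T))
print(round(ratio, 2))  # > 1: at equal populations, surface -> bulk hops win
```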
But with this 'new' dynamic equilibrium, the molecules are further apart in the surface layer than their usual separations so, recalling the intermolecular force curve, they attract each other, in other words the surface is under tension, like a stretched balloon-skin. | {
"domain": "physics.stackexchange",
"id": 63302,
"tags": "fluid-statics, surface-tension"
} |
About qiskit's error mitigation | Question: In qiskit, the error correction using least squares is apparently in qiskit-ignis/qiskit/ignis/mitigation/measurement/filters.py, source code from github and reads:
# Apply the correction
for data_idx, _ in enumerate(raw_data2):
if method == 'pseudo_inverse':
raw_data2[data_idx] = np.dot(
pinv_cal_mat, raw_data2[data_idx])
elif method == 'least_squares':
nshots = sum(raw_data2[data_idx])
def fun(x):
return sum(
(raw_data2[data_idx] - np.dot(self._cal_matrix, x))**2)
x0 = np.random.rand(len(self._state_labels)) # ********
x0 = x0 / sum(x0) # ********
cons = ({'type': 'eq', 'fun': lambda x: nshots - sum(x)})
bnds = tuple((0, nshots) for x in x0)
res = minimize(fun, x0, method='SLSQP',
constraints=cons, bounds=bnds, tol=1e-6)
raw_data2[data_idx] = res.x
else:
raise QiskitError("Unrecognized method.")
I'm not too skilled in python and I would not like to change my qiskit's base installation. I have marked with * the two lines that seems to me strange.
My question is: in this error correction, one does least squares to minimize the function $F=|c_{\rm exp} - Mc_{\rm corr}|^2$, where $c_{\rm exp}, c_{\rm corr}$ are the experimental and corrected counts, respectively and $M$ is the "correction matrix".
As it is common, minimize requires a fair guess x0 in order to find the answer of $F$. I don't understand why the built-in function sets x0 as a random vector. Ok, I can buy that the method "does not know which previous circuit was ran", but in theory if one knows the circuit, one should choose x0 based on this information, right?
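For context, the simpler `pseudo_inverse` branch of the same correction can be reproduced standalone with numpy (the calibration matrix and raw counts below are made up):

```python
import numpy as np

cal_matrix = np.array([[0.95, 0.08],
                       [0.05, 0.92]])   # hypothetical calibration matrix
raw = np.array([880.0, 144.0])          # hypothetical raw counts

# corrected = pinv(M) @ raw; unlike the bounded SLSQP branch this can,
# in general, return negative "counts"
corrected = np.linalg.pinv(cal_matrix) @ raw
print(np.round(corrected, 1))
```

Note that the columns of the calibration matrix each sum to 1, so the total number of shots is preserved by the correction.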
Answer: So yeah it is not the best choice. My guess is the individual who programmed it did not think about the physics of the problem. In short, it is best to use the raw input data as the starting point when measurement errors are small. In practice this gives you much faster convergence. | {
"domain": "quantumcomputing.stackexchange",
"id": 2942,
"tags": "programming, qiskit, error-mitigation"
} |
Could the Large Hadron Collider accelerate one kilogram of protons at once? | Question: Is it possible to accelerate a very large number of protons in a particle accelerator as opposed to only a few as is regularly done? What's to keep someone from accidentally dumping too many particles into an accelerator and destroy the facility with its kinetic energy and enormous thrust?
Answer: The LHC cannot accelerate 1 kg of particles at once, nor would the experiments be able to cope with such a big number of collisions. I have already partially answered here: How many particles can a particle accelerator accelerate at once?. Without going into further detail (just to allow you to google them), the limiting effects at the LHC are (in order of importance): electron cloud, beam-beam, impedances-wakefields, space charge. But I would like to focus on your last question:
What's to keep someone from accidentally dumping too many particles into an accelerator and destroy the facility with its kinetic energy and enormous thrust?
This requires to understand how particles are "dumped" (or better: injected) into an accelerator. You always need to start with a source or gun. In case of protons it is a bottle of (high-purity) hydrogen gas, which is extracted and ionized for instance by an electric discharge. As the electrons leave you are left with protons which can start to gain energy for instance by a DC voltage (a capacitor). While the protons are at low energy, they are very susceptible to external perturbations, including the ones coming from the nearby protons. Therefore if for some reason the proton production is strongly increased at the source, you will start to see odd behaviours right after the source, where the low energy makes the beam much less destructive.
Before being injected into the LHC the protons have to reach 450 GeV, going through an accelerator chain (see Particle colliders: why do they need an accelerator chain) which means that the beam is prepared and transferred between many intermediate accelerators where there is plenty of time to monitor it and eventually abort the injection if the required parameters are not met. | {
"domain": "physics.stackexchange",
"id": 27475,
"tags": "particle-physics, large-hadron-collider, estimation, protons, particle-accelerators"
} |
What common/simple problem would work well as a web app? | Question: Context
I'm currently writing a simple tutorial to demonstrate a tool to data scientists and analysts that turns Jupyter Notebooks into web apps. Basically, it discusses setting up the web app as a front end, running some code in the notebook and then returning data to the web app.
Question
My question is, what is a small interesting problem in data science that I could solve in the notebook?
I'm looking for something more interesting than doubling an input but smaller/simpler than building a computer vision model.
Additional information
As you can probably tell, I am new to data science. Apologies if this is the wrong forum for this type of question.
Here is the version with a non-interesting problem being solved in the notebook, it may provide more context if needed.
Thanks in advance for any help.
Answer: Maybe you could try solving easy classification problems, like the Iris dataset or the Titanic dataset. You'll find many tutorials dealing with those subjects, and they are basic and famous exercises for someone starting in Data Science. | {
"domain": "datascience.stackexchange",
"id": 8133,
"tags": "python, beginner, jupyter"
} |
Why is a 2-electron wavefunction antisymmetric? | Question: Why does a 2-electron system have an antisymmetric wavefunction when the combination should be bosonic? I.e. If it's an overall bosonic combination, shouldn't the overall wavefunction be symmetric?
Answer: States of a system of indistinguishable fermions are antisymmetric under exchange of any two particles. This is the defining characteristic of what it means to be a fermion.
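This sign behaviour can be illustrated numerically for two spin-1/2 fermions (a minimal sketch, not a full many-body treatment):

```python
import numpy as np

up = np.array([1.0, 0.0])
down = np.array([0.0, 1.0])

# Antisymmetric two-fermion state: (|ud> - |du>) / sqrt(2)
psi = (np.kron(up, down) - np.kron(down, up)) / np.sqrt(2)

# Operator exchanging the two particles' labels
SWAP = np.array([[1, 0, 0, 0],
                 [0, 0, 1, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1]], dtype=float)

print(np.allclose(SWAP @ psi, -psi))  # → True: exchange flips the sign
# Exchanging two such *pairs* applies two fermionic exchanges,
# so the overall sign is (-1)**2 = +1.
```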
Of course, this means that they're symmetric under the exchange of any two pairs of particles. If you have e.g. two hydrogen atoms in your system, then exchanging the electronic states gives you a minus sign which is canceled out by exchanging the proton states. In that sense, the state of such a system is symmetric under exchange of hydrogen atoms (proton+electron pairs), but antisymmetric under either exchange individually. | {
"domain": "physics.stackexchange",
"id": 77716,
"tags": "quantum-mechanics, hilbert-space, wavefunction, pauli-exclusion-principle, identical-particles"
} |
Moving To Object Orientated Programming | Question: Please keep in mind I am new and still learning when reading the following.
What I am doing
I have the following code which pulls a sport, tournament and round NR, from a DB table called event where the event is still active.
The Problem
The code works and does what I want it to do, but looking at it makes me sick. I know there are more efficient ways to achieve what I am trying to do, but I am unsure of where to start improving the code below. I would like to move to a more object-orientated, or at least a more efficient, way of coding.
I would appreciate it if one of the more experienced members of the community could give the code a look and provide some pointers.
$date = date('Y-m-d');
//get sport & tournament
$sql = "Select distinct sport, tournament, round FROM event WHERE date > $date AND active = 'y'";
$result = mysqli_query($conn,$sql) or die(mysqli_error($conn));
while($row = mysqli_fetch_array($result)){
$sport[] = $row['sport'];
$tournament[] = $row['tournament'];
$round[] = $row['round'];
}
//make form with tournament & sport
?>
<form name="select" name="sport" method="post">
<!--GET SPORT ON SELECT -->
<select name="sport">
<?php
//get sport
foreach($sport as $index => $sportCode){
echo '<option value="'. $sport[$index].'">'.$sport[$index].'</option>';
}
?>
</select>
<!--GET TOURNAMENT ON SELECT -->
<select name="tournament">
<?php
//get tournament
foreach($tournament as $index => $tournamentCode){
echo '<option value="'. $tournament[$index].'">'.$tournament[$index].'</option>';
}
//get round
?>
</select>
<select name="round">
<?php
foreach($round as $index => $roundNr){
echo '<option value="'. $round[$index].'">'.$round[$index].'</option>';
}
//get round
?>
</select>
</form>
Answer: TL;DR: Read about MVC and understand the basics and then learn laravel
This is not about object oriented approach but more like a separation of concerns thing. You have two concerns for the moment:
You're collecting some data
You're displaying it
so what you have to do is to separate them, start by using a template system like twig or smarty. Collect the data in some php and pass it to the template file and render it so the html code and php code will be separated.
The template will be responsible of "how" you display the data and your php file will be responsible of "preparing the data" for the template.
Second step is to separate the code that is "dealing with the data" from the code that is "setting up the template". You can move the data-related code to some class so that you'll have something like this in the end:
<?php
// Require classes, setup things etc.
$obj = new Events;
$events = $obj->getAllEvents();
echo $tpl->render('events/index.tpl', array('events' => $events));
Third step may be about the // Require classes, setup things etc. part in the example above. You can use a front-controller to deal with that...
... and the list goes on...
After some struggling you'll start to realize that you need some helper classes to deal with the http requests, responses, sessions etc.
...and you'll need more and more "concerns" to "separate" in the future and in the end you'll understand why people are using mature frameworks these days.
I was writing like this in 1997-1998 (no kidding) because I had to, but you don't :) | {
"domain": "codereview.stackexchange",
"id": 21028,
"tags": "php, object-oriented, html, pdo, mysqli"
} |
Different materials have different temperatures? | Question: Why do two materials, under the same weather, have different temperatures?
I have a small clue about it. For example, suppose iron and wood are both under the sun's radiation; if we touch them, we'll notice a remarkable difference in temperature just at the surface, or even nearby. That is a fact about the material itself: whether it reflects the radiation totally (as iron does) or only part of it (as wood does).
That just leads to another question: if that were true, why do we find totally reversed results in a cold area? If we put the same materials in a cold place, and iron is the one that reflects energy more than wood, we find that the iron is actually colder than the wood.
That problem is really making me nervous, and I keep asking people, but no one has had a convincing answer.
Answer: When you touch something, you don't feel how hot/cold the thing is; you feel how hot/cold it makes your hand. Metal conducts heat more easily than wood. So if wood and metal are hot, the heat will flow more easily from the metal to your hand. If wood and metal are cold, the heat will flow more easily from your hand to the metal. | {
"domain": "physics.stackexchange",
"id": 44036,
"tags": "thermodynamics, temperature, thermal-radiation, thermal-conductivity"
} |
Are binary states (bits) pervasive in classical physics? | Question: If
quantum physics is a refinement of classical physics, and
quantum computing is a refinement of classical computing, and
classical computers use bits (binary digits) whereas quantum computers use qubits
then are binary states (0,1) a significant and pervasive construct underpinning the theory of classical physics as opposed to quantum physics?
Answer:
Quantum computing is not a refinement of classical computing; it's simply a different paradigm of computing aimed at solving specific categories of problems more efficiently.
Quantum computing doesn't necessarily require qubits (cf. qudit); that's just a theoretical and experimental convenience. In fact, continuous-variable quantum computing seems to be a hot research area these days.
As for the broader question of whether the universe is 'discrete' or 'quantizable', you might find the concepts of digital physics and bit string physics interesting. | {
"domain": "quantumcomputing.stackexchange",
"id": 1085,
"tags": "classical-computing"
} |
Deep DFS traverse on graph | Question: According to chess rules, the (undirected) graph generated from the Knight move on a keypad is the following:
$$
\begin{array}{ccccc}
3 & - & 4 & - & 9 \\
| & & | & & | \\
8 & & 0 & & 2 \\
| & & | & & | \\
1 & - & 6 & - & 7
\end{array}
$$
The question is:
Consider the uniform distribution on all $(n+1)$-sequences $(s_i)_{i=0}^n$ starting with $s_0=0$. What is the expected value and the standard deviation of $\sum_{i=1}^n s_i \bmod n$ under this distribution on sequences?
My naive approach is simply traverse every single sequence of fixed given length and do the calculation. But as the $n$ grows, the number of visits grows exponentially. For example, for $n=1024$, the number of sequences exceeds $6.04\times 10^{367}$. Hence a DFS or any kind of traverse is not a feasible approach.
So my second thought is using some analytic method. To get a few ideas, I first did some simulations on $\sum s_i$ (without modular reduction) with $10^5$ trials. The program generates the following distribution which doesn't match any obvious candidate so I couldn't advance further. So I am thinking if this is the correct way to tackle this problem. Any hint or help is appreciated.
Answer: You can compute the exact distribution of $\sum_{i=1}^n s_i$ using dynamic programming: for each vertex $v$, index $m \in \{0,\ldots,n\}$ and index $S \in \{0,\ldots,9m\}$, calculate the number of walks of length $m$ starting at $0$, ending at $v$, and summing to $S$. The running time is roughly $O(n^3)$ (the extra $n$ factor is due to the length of the numbers involved).
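A minimal sketch of that dynamic program, with the adjacency read off the keypad diagram in the question (note the keypad has no 5):

```python
from collections import defaultdict

# Knight moves on the keypad from the question
ADJ = {0: [4, 6], 1: [6, 8], 2: [7, 9], 3: [4, 8], 4: [0, 3, 9],
       6: [0, 1, 7], 7: [2, 6], 8: [1, 3], 9: [2, 4]}

def sum_distribution(n):
    """Exact counts of walks s_0 = 0, ..., s_n keyed by sum(s_1..s_n)."""
    dp = {(0, 0): 1}  # (current vertex, partial sum) -> number of walks
    for _ in range(n):
        nxt = defaultdict(int)
        for (v, S), cnt in dp.items():
            for w in ADJ[v]:
                nxt[(w, S + w)] += cnt
        dp = nxt
    dist = defaultdict(int)
    for (v, S), cnt in dp.items():
        dist[S] += cnt
    return dict(dist)

d = sum_distribution(2)
print(sorted(d.items()))  # → [(4, 1), (6, 1), (7, 2), (13, 2)]
```

From this exact distribution the mean and standard deviation of the sum (mod $n$ or not) follow directly; for large $n$ the state space stays polynomial, unlike the exponential enumeration of walks.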
If you want an estimate, you should first find the stationary distribution of the Markov chain corresponding to a random walk on your graph. Given that, $\sum_{i=1}^n s_i$ has roughly Gaussian distribution with parameters that you can calculate from the stationary distribution; more specifically, roughly $N(n\mu, n\sigma^2)$ (this requires a version of the central limit theorem for Markov chains). This Gaussian approximation shows that the sum lies inside the interval of width $n$ centered around $n\mu$ with high probability, so taking modulo $n$ doesn't have much effect beyond shifting this interval to $\{0,\ldots,n-1\}$. | {
"domain": "cs.stackexchange",
"id": 7088,
"tags": "graphs, probability-theory, simulation"
} |
When to use cosine simlarity over Euclidean similarity | Question: In NLP, people tend to use cosine similarity to measure document/text distances. I want to hear what do people think of the following two scenarios, which to pick, cosine similarity or Euclidean?
Overview of the task set: The task is to compute context similarities of multi-word expressions. For example, suppose we were given an MWE of put up, context refers to the words on the left side of put up and as well as the words on the right side of it in one text. Mathematically speaking, similarity in this task is about calculating
sim(context_of_using_"put_up", context_of_using_"in_short")
Note that context is the feature that built on top of word embeddings, let's assume each word has an embedding dimension of 200:
Two scenarios of representing context_of_an_expression.
concatenate the left and right context words, producing an embedding vector of dimension 200*4=800 if picking two words on each side. In other words, a feature vector of [lc1, lc2, rc1, rc2] is build for context, where lc=left_context and rc=right_context.
get the mean of the sum of left and right context words, producing a vector of 200 dimensions. In other words, a feature vector of [mean(lc1+lc2+rc1+rc2)] is built for context.
[Edited] For both scenarios, I think Euclidean distance is a better fit. Cosine similarity is known for handling scale/length effects because of normalization. But I don't think there's much to be normalized.
Answer:
When to use cosine similarity over Euclidean similarity
Cosine similarity looks at the angle between two vectors, euclidian similarity at the distance between two points.
Let's say you are in an e-commerce setting and you want to compare users for product recommendations:
User 1 bought 1x eggs, 1x flour and 1x sugar.
User 2 bought 100x eggs, 100x flour and 100x sugar
User 3 bought 1x eggs, 1x Vodka and 1x Red Bull
By cosine similarity, user 1 and user 2 are more similar. By euclidean similarity, user 3 is more similar to user 1.
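The three users above, in code (vector columns: eggs, flour, sugar, vodka, Red Bull):

```python
import numpy as np

def cos_sim(a, b):
    return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

u1 = np.array([1, 1, 1, 0, 0])
u2 = np.array([100, 100, 100, 0, 0])
u3 = np.array([1, 0, 0, 1, 1])

print(cos_sim(u1, u2))                 # ≈ 1.0: same direction
print(cos_sim(u1, u3))                 # ≈ 0.33: different taste
print(np.linalg.norm(u1 - u3))         # → 2.0: yet spatially close to u1
print(np.linalg.norm(u1 - u2))         # ≈ 171: u2 far away in euclidean terms
```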
Questions in the text
I don't understand the first part.
Cosine similarity is specialized in handling scale/length effects. For case 1, context length is fixed -- 4 words, there's no scale effects. In terms of case 2, the term frequency matters, a word appears once is different from a word appears twice, we cannot apply cosine.
This goes in the right direction, but is not completely true. For example:
$$
\cos \left (\begin{pmatrix}1\\0\end{pmatrix}, \begin{pmatrix}2\\1\end{pmatrix} \right) = \cos \left (\begin{pmatrix}1\\0\end{pmatrix}, \begin{pmatrix}4\\2\end{pmatrix} \right) \neq \cos \left (\begin{pmatrix}1\\0\end{pmatrix}, \begin{pmatrix}5\\2\end{pmatrix} \right)
$$
With cosine similarity, the following is true:
$$
\cos \left (\begin{pmatrix}a\\b\end{pmatrix}, \begin{pmatrix}c\\d\end{pmatrix} \right) = \cos \left (\begin{pmatrix}a\\b\end{pmatrix}, n \cdot \begin{pmatrix}c\\d\end{pmatrix} \right) \text{ with } n \in \mathbb{N}
$$
So frequencies are only ignored, if all features are multiplied with the same constant.
Curse of Dimensionality
When you look at the table of my blog post, you can see:
The more dimensions I have, the closer the average distance and the maximum distance between randomly placed points become.
Similarly, the average angle between uniformly randomly placed points becomes 90°.
So both measures suffer from high dimensionality. More about this: Curse of dimensionality - does cosine similarity work better and if so, why?. A key point:
Cosine is essentially the same as Euclidean on normalized data.
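That key point follows from the identity $\lVert a-b\rVert^2 = 2 - 2\cos(a,b)$ for unit vectors, which is easy to verify numerically:

```python
import numpy as np

rng = np.random.default_rng(0)
a = rng.normal(size=5)
b = rng.normal(size=5)
a /= np.linalg.norm(a)   # normalize both vectors to unit length
b /= np.linalg.norm(b)

lhs = np.linalg.norm(a - b) ** 2
rhs = 2 - 2 * np.dot(a, b)   # cos(a, b) is just a.b for unit vectors
print(np.isclose(lhs, rhs))  # → True
```

So on normalized data, ranking neighbours by euclidean distance and by cosine similarity gives the same ordering.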
Alternatives
You might be interested in metric learning. The principle is described/used in FaceNet: A Unified Embedding for Face Recognition and Clustering (my summary). Instead of taking one of the well-defined and simple metrics. You can learn a metric for the problem domain. | {
"domain": "datascience.stackexchange",
"id": 9690,
"tags": "machine-learning, nlp, clustering, similarity"
} |
What are the odometry msg type in ROS SLAM? | Question:
We are building a three-wheel holonomic motion system. Is there a standard way to publish odometry values? How are odometry values represented in the gmapping package?
Originally posted by Ricco on ROS Answers with karma: 13 on 2018-07-24
Post score: 1
Original comments
Comment by Hypomania on 2018-07-24:
Could you link the stack/package you are looking at?
Comment by stevejp on 2018-07-24:
Is this what you're looking for?
Comment by Ricco on 2018-07-24:
Actually I'm looking for how to acquire odometer data from motor encoders. I need to know what exactly I have to publish from encoders.
Answer:
The odometry information should be published by the ROS driver for your mobile robot base. Usually, the robot sends encoder ticks via serial or whatever, and the ROS driver then computes the odometry message (i.e., pose and twist; see the link by @stevejp). The formula for computing the ROS odometry message from encoder ticks depends on your kinematics.
Example 1: Generic differential drive platform
https://github.com/ros-controls/ros_controllers/blob/f5dc6f2c31cfc76419f073c54a14ab04394ab919/diff_drive_controller/src/odometry.cpp
Example 2: PR2 robot (four-wheeled holonomic robot)
https://github.com/PR2/pr2_controllers/blob/f5ad5f6b821ba2aab073703dcabc4c104cd86112/pr2_mechanism_controllers/src/pr2_odometry.cpp
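As an illustration of the kind of computation involved (a minimal sketch for the differential-drive case only; a three-wheel holonomic base needs its own kinematic model):

```python
import math

def update_odometry(x, y, theta, d_left, d_right, wheel_base):
    """One odometry update for a differential-drive base.

    d_left / d_right: distance travelled by each wheel since the last
    update (encoder ticks * metres per tick). Returns the new pose,
    integrating along the arc midpoint heading.
    """
    d_center = (d_left + d_right) / 2.0
    d_theta = (d_right - d_left) / wheel_base
    x += d_center * math.cos(theta + d_theta / 2.0)
    y += d_center * math.sin(theta + d_theta / 2.0)
    theta += d_theta
    return x, y, theta
```

The resulting pose (and its time derivative, the twist) is what goes into the nav_msgs/Odometry message.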
Originally posted by Martin Günther with karma: 11816 on 2018-07-24
This answer was ACCEPTED on the original site
Post score: 0
Original comments
Comment by Ricco on 2018-07-25:
thanks. this is what I was looking for. | {
"domain": "robotics.stackexchange",
"id": 31344,
"tags": "ros-kinetic"
} |
Best way to repeat simulations | Question:
Hi all,
I am currently working on a simple fetch-and-carry scenario in a simulation using ROS/Gazebo. It's an almost complete mobile manipulator system with a Kinect and laser scanners.
I would like to do something this:
1: launch environment and related objects
2: run the fetch and carry node
3: reset the environment (robot and object back to initial position)
4: iterate 2 and 3 for a few hundred or thousand times.
My question is, is there a good way for repeating the simulations?
EDIT: I guess it's best to describe an example. Say that I would like to run the fetch and carry scenario 100 times. Each time it runs, an object will be thrown on the floor. Over those 100 runs, I want to count how many times the mobile manipulator collides/bumps into the new object.
I'm actually imagining a big for loop that resets the simulation environment and runs the fetch and carry package again. How expensive will that be!!
So, the question is still: What's the best way to repeat simulations.
NB: I wonder how will rosbag help
Originally posted by whiterose on ROS Answers with karma: 148 on 2012-12-25
Post score: 1
Answer:
Maybe there's better solution, but I can give some advice in case you need this in a hurry.
I'll write two launch files, one for environment and related objects(let's call it A.launch), and the other for the fetch and carry node(this one, B.launch).
In a single test, first roslaunch A.launch, then you can roslaunch B.launch and check your work.
After a single test, kill these two processes. Then you can start another test by roslaunching A.launch and B.launch again.
I am not sure what the purpose is. If you want to do the iteration hundreds or thousands of times to test the robustness of your node, the method proposed above probably won't satisfy your need (you'll need to write some scripts to roslaunch the launch files and check whether your node works correctly). But if you just want to quickly run the iteration again to test the function of the fetch and carry node during development, this method is quite adequate.
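The scripted version could be sketched as below (package and launch file names are placeholders for whatever A.launch/B.launch are; startup delays and result checking would need tuning for a real Gazebo setup):

```shell
#!/bin/bash
# Sketch of a test harness: start the environment launch in the background,
# run the task launch to completion, tear down, repeat.
repeat_experiment() {
    local n=$1 env_cmd=$2 task_cmd=$3
    for ((i = 1; i <= n; i++)); do
        $env_cmd &                    # e.g. "roslaunch mypkg A.launch"
        local env_pid=$!
        $task_cmd                     # e.g. "roslaunch mypkg B.launch"
        kill "$env_pid" 2>/dev/null
        wait "$env_pid" 2>/dev/null || true
    done
}

# Usage (placeholders):
# repeat_experiment 100 "roslaunch mypkg A.launch" "roslaunch mypkg B.launch"
```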
Originally posted by Po-Jen Lai with karma: 1371 on 2012-12-26
This answer was ACCEPTED on the original site
Post score: 0 | {
"domain": "robotics.stackexchange",
"id": 12204,
"tags": "gazebo"
} |
Can BNF consist of zero rules? | Question: I am trying to use BNF to describe its own grammar to get used to it. What I can not find any information about is whether BNF must consist of zero or more rules or one or more rules. The difference would be:
<grammar> ::= <grammar> <rule> | <rule>
vs.
<grammar> ::= <grammar> <rule> | ""
The only difference I can see is that an empty language can be described by an empty BNF in the second case, but must use <grammar> ::= "" in the first case.
Answer: Concretely speaking, the Backus-Naur form is a notation for context-free grammars, applied to the (formal) description of languages at the syntactical level.
While there are extended versions of BNF that have been standardized, the BNF itself hasn't, so there is no way of determining if the empty grammar is "legal" BNF.
As a mathematical object, the possibility of an empty grammar will depend on the definition (of grammar) used. A grammar that has no productions is usually said to define the empty language, but any grammar whose set of possible terminal derivations is empty is also a candidate definition of the empty language. | {
"domain": "cs.stackexchange",
"id": 8813,
"tags": "context-free, formal-grammars"
} |
jQuery code to total the items in a shopping cart | Question: I implemented a shopping cart with asp.net and jquery on my website, and here is my js code which calculates amount and total sum on client side:
$(document).ready(function () {
update();
$(".quant").change(function() {
update();
});
function update() {
var id = $('.quant').attr('data-id');
var sum = 0.0;
var quantity;
$('#myTable > tbody > tr').each(function () {
quantity = $(this).find('.quant').val();
var price = parseFloat($(this).find('.price').attr('data-price').replace(',', '.'));
var amount = (quantity * price);
sum += amount;
$(this).find('.amount').text('' + amount + ' грн');
});
$('.total').text(sum + ' грн');
$.get(
'/Cart/AddTocart',
{
id: id,
returnUrl: '',
quantity: quantity
}
);
}
});
Some people reviewed my code and told me that it needs refactoring:
this part of code var id = $('.quant').attr('data-id');
is wrong because with one element jQuery works correctly, but with an array of elements jQuery "takes" only the first element, so attr doesn't work as expected. How can I fix it? I pass this id to the server side. I thought I needed to write something like this:
var id = $(this).attr('data-id');
but id will always be undefined.
Problem here:
var quantity;
$('#myTable > tbody > tr').each(function () {
quantity = $(this).find('.quant').val();
...
}
It recalculates quantity for every element, so quantity "saves" only the last known value; what's wrong here?
I was told that I pass the first id and the last quantity, but it is all present when I call the function in $(document).ready - please explain this to me.
Here is html code:
<thead>
<tr>
<th class="text-center">Товар</th>
<th class="text-center">К-сть</th>
<th>Назва Товару</th>
<th class="text-right">Ціна</th>
<th class="text-right">Загальна ціна</th>
</tr>
</thead>
<tbody>
@foreach (var line in Model.Cart.Lines)
{
IEnumerable<FurnitureImages> images = line.Furniture.Images;
FurnitureImages mainImage = images.Where(x => x.IsMainImage).FirstOrDefault();
<tr>
<td class="text-center">
@if (mainImage != null)
{
<img src="@Url.Content(mainImage.Path)" style="width:110px; height:70px" />
}
</td>
<td class="text-center">
<input type="text" data-id="@line.Furniture.FurnitureId" data-price="@line.Furniture.Price" value="@line.Quantity" class="quant" />
</td>
<td class="text-left">@line.Furniture.Name</td>
<td class="text-right price" data-price="@line.Furniture.Price">@((line.Furniture.Price).ToString("#.## грн"))</td>
<td class="text-right amount">@((line.Quantity * line.Furniture.Price).ToString("#.## грн"))</td>
<td>
@using (Html.BeginForm("RemoveFromCart", "Cart", new { Id = line.Furniture.FurnitureId }))
{
@Html.Hidden("Id", line.Furniture.FurnitureId)
@Html.HiddenFor(x => x.ReturnUrl)
<input class="btn btn-sm btn-warning" type="submit" value="Видалити з кошику" />
}
</td>
</tr>
}
</tbody>
<tfoot>
<tr>
<td colspan="4" class="text-right"><b>Всього до оплати:</b></td>
<td id="test" class="text-right total">@Model.Cart.ComputeTotalValue().ToString("#.## грн")</td>
</tr>
</tfoot>
Answer:
this part of code var id = $('.quant').attr('data-id');
Presuming there are multiple inputs with class "quant" (perhaps 1 for each row in the table), then this code will get a collection of elements and then, as is described in the documentation for .attr():
Get the value of an attribute for the first element in the set of matched elements or set one or more attributes for every matched element.
So yes it will get the value of the data-id attribute of the first element with that class name.
Consider the following snippet (try running it to see the result):
//find all elements with class name "quant"
var inputsWithClassQuant = $('.quant');
console.log('inputsWithClassQuant.length: ',inputsWithClassQuant.length);
//attr only returns attribute of first element:
console.log("inputsWithClassQuant.attr('data-id') (first element only): ",inputsWithClassQuant.attr('data-id'));
<script src="https://ajax.googleapis.com/ajax/libs/jquery/2.1.1/jquery.min.js"></script>
<input type="text" data-id="30" data-price="20" value="3" class="quant" />
<input type="text" data-id="35" data-price="10" value="0" class="quant" />
<input type="text" data-id="36" data-price="15" value="2" class="quant" />
Problem here:
var quantity;
$('#myTable > tbody > tr').each(function () {
quantity = $(this).find('.quant').val();
...
}
Yes, quantity = $(this).find('.quant').val(); overwrites the previous value each time so after the loop (i.e. the call to .each()), quantity will contain the value of the last element with class name quant.
I was told that I pass first id and last quantity, but it is all present when I call function in $(document) - Please explain this to me.
This is basically a summary of points #1 and #2 above.
Perhaps a better implementation would move the code to call the AJAX request (i.e. to '/Cart/AddTocart') with the id and quantity of the row that changed, then call update() to update the total. Something like the code below.
Another suggestion is to update the request sent to the server to send the entire list of items with the respective quantities. That way, if a quantity is decreased or cleared, the cart can be accurately updated. It all depends on the back-end API - i.e. if it has endpoints to add/remove/update items with quantities, etc.
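That suggestion could be sketched as follows (the /Cart/Update endpoint name is hypothetical; use whatever the real back-end API exposes):

```javascript
// Collect every quantity input into [{id, quantity}] so the whole cart
// can be sent to the server in one request.
function collectCart(inputs) {
  return Array.prototype.map.call(inputs, function (el) {
    return { id: el.getAttribute('data-id'), quantity: Number(el.value) };
  });
}

// In the page, e.g. on every .quant change (endpoint name is assumed):
//   $.post('/Cart/Update', { items: collectCart(document.querySelectorAll('.quant')) });
```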
I also changed the input type of the quantity inputs to "Number" - that way only numbers can be entered, and many browsers will add up/down controls to the side of the input for the user to click on.
$(document).ready(function() {
update();
$(".quant").change(function() {
//this: context of the input that was changed
console.log('calling /Cart/AddTocart; id:',$(this).attr('data-id'),' quantity: ', $(this).val());
$.get(
'/Cart/AddTocart', {
id: $(this).attr('data-id'),
returnUrl: '',
quantity: $(this).val()
});
update();
});
function update() {
var sum = 0.0;
var quantity;
$('#myTable > tbody > tr').each(function() {
quantity = $(this).find('.quant').val();
var price = parseFloat($(this).find('.price').attr('data-price').replace(',', '.'));
var amount = (quantity * price);
sum += amount;
$(this).find('.amount').text('' + amount + ' грн');
});
$('.total').text(sum + ' грн');
}
});
<script src="https://ajax.googleapis.com/ajax/libs/jquery/2.1.1/jquery.min.js"></script>
<table id="myTable">
<thead>
<tr>
<th class="text-center">Товар</th>
<th class="text-center">К-сть</th>
<th>Назва Товару</th>
<th class="text-right">Ціна</th>
<th class="text-right">Загальна ціна</th>
</tr>
</thead>
<tr>
<td class="text-center"></td>
<td class="text-center">
<input type="number" data-id="97" data-price="30" value="1" class="quant" />
</td>
<td class="text-left">Kart</td>
<td class="text-right price" data-price="30">30</td>
<td class="text-right amount">30</td>
<td>
<input class="btn btn-sm btn-warning" type="submit" value="Видалити з кошику" /> </td>
</tr>
<tr>
<td class="text-center"></td>
<td class="text-center">
<input type="number" data-id="99" data-price="60" value="0" class="quant" />
</td>
<td class="text-left">Kart</td>
<td class="text-right price" data-price="10">10</td>
<td class="text-right amount">30</td>
<td>
<input class="btn btn-sm btn-warning" type="submit" value="Видалити з кошику" /> </td>
</tr>
<tfoot>
<tr>
<td colspan="4" class="text-right"><b>Всього до оплати:</b></td>
<td id="test" class="text-right total">@Model.Cart.ComputeTotalValue().ToString("#.## грн")</td>
</tr>
</tfoot>
</table> | {
"domain": "codereview.stackexchange",
"id": 26759,
"tags": "javascript, jquery, dom, e-commerce"
} |
Customize CameraLens function for Wide angle (fish eye) cameras | Question:
In the CameraLens() class [r = c1*f*fun(theta/c2+c3)], I would like to define my own mapping function using theta as input (for example, a Polynomial Fish Eye Transform). I would like to know the best way of doing it.
Some important issues are:
I cannot access the value of 'theta' and so I could not define anything on my own.
I cannot set the value of 'r' for the above mentioned mapping function.
I don't have access to the source code of the CameraLens() class. If I did, I could implement something on my own.
How do I see the implementation of the CameraLens() class?
PS: I already checked this tutorial http://gazebosim.org/tutorials?tut=wide_angle_camera&branch=wideanglecamera .
I don't want to use any standard models like gnomonical, stereographic, equidistant, equisolid_angle, orthographic.
Originally posted by sekaran on Gazebo Answers with karma: 3 on 2016-10-20
Post score: 0
Answer:
Theta is the <horizontal_fov>, and r is the result of the function, which means you don't set that value.
According to the tutorial, you can set everything in SDF: http://gazebosim.org/tutorials?tut=wide_angle_camera&branch=wideanglecamera#PluginExample
All of our source code is open: https://bitbucket.org/osrf/gazebo
Originally posted by nkoenig with karma: 7676 on 2016-10-20
This answer was ACCEPTED on the original site
Post score: 1
Original comments
Comment by sekaran on 2016-10-21:
I already saw in the tutorial how to use the custom lens type with the same mapping function r = c1*f*fun(theta/c2+c3). But I need to change the function itself.
In the source code, I see the values for c1, c2, c3, etc. being set on the shader. But I cannot really find where r = c1*f*fun(theta/c2+c3) is implemented in the source code.
Comment by sekaran on 2016-10-21:
Thanks for the reply. I just found its implemented in the shader 'wide_lens_map_vs.glsl'. But <horizontal_fov> is not 'theta' exactly. The value of 'theta' is calculated in the same shader.
Comment by owais2k12 on 2018-07-24:
Hi! I am wondering if someone actually changed the mapping function according to the actual fish eye model used in opencv? if yes, can anyone point me towards the source. If not, can somebody explain me whether this function can be related with that model and produce the same results, in that case I am looking forward to some mathematical explanation particularly used in this mapping function instead of the opencv one. | {
"domain": "robotics.stackexchange",
"id": 4002,
"tags": "gazebo-camera"
} |
rosserial multi-array issue | Question:
Here is the relevant parts of my Arduino code:
#include <Arduino.h>
#include "TeensyThreads.h"
// for ROS
#include <ros.h>
#include <std_msgs/Int16.h>
#include <std_msgs/Int16MultiArray.h>
volatile int liDARfront = -1;
volatile int liDARleft = -1;
volatile int liDARright = -1;
// ROS Variables
ros::NodeHandle nh;
std_msgs::Int16 front_msg;
std_msgs::Int16 left_msg;
std_msgs::Int16 right_msg;
std_msgs::Int16MultiArray lidar; //multiarray
ros::Publisher range_front("lidar/range_front", &front_msg);
ros::Publisher range_left("lidar/range_left", &left_msg);
ros::Publisher range_right("lidar/range_right", &right_msg);
ros::Publisher lidar_pub("lidarArray", &lidar); //multiarray
void setup()
{
Serial1.begin(115200); // HW Serial for TFmini Front
Serial2.begin(115200); // HW Serial for TFmini Left
Serial3.begin(115200); // HW Serial for TFmini Right
Serial.begin(115200); // Serial output through USB to computer
delay (100); // Give a little time for things to start
// init ros node
nh.initNode();
nh.advertise(range_front);
nh.advertise(range_left);
nh.advertise(range_right);
// for multiarray
lidar.layout.dim = (std_msgs::MultiArrayDimension *)
malloc(sizeof(std_msgs::MultiArrayDimension) * 2);
lidar.layout.dim[0].label = "lidar";
lidar.layout.dim[0].size = 2;
lidar.layout.dim[0].stride = 1*2;
lidar.layout.data_offset = 0;
lidar.layout.dim_length = 1;
lidar.data_length = 8;
lidar.data = (int *)malloc(sizeof(int)*2);
nh.advertise(lidar_pub);
}
void loop()
{
delay(10); // Don't want to read too often as TFmini samples at 100Hz
Serial.println(liDARfront);
// publish to ROS
front_msg.data = liDARfront;
left_msg.data = liDARleft;
right_msg.data = liDARright;
// collect to one topic
lidar.data[0] = liDARfront;
lidar.data[1] = liDARleft;
lidar.data[2] = liDARright;
range_front.publish( &front_msg );
range_left.publish( &left_msg );
range_right.publish( &right_msg );
lidar_pub.publish( &lidar);
nh.spinOnce();
}
I get an error that reads:
Arduino: 1.8.5 (Linux), TD: 1.41, Board: "Teensy 3.2 / 3.1, Serial, 96 MHz (overclock), Faster, US English"
TFmini: In function 'void setup()':
error: cannot convert 'int*' to 'std_msgs::Int16MultiArray::_data_type* {aka short int*}' in assignment
lidar.data = (int *)malloc(sizeof(int));
^
cannot convert 'int*' to 'std_msgs::Int16MultiArray::_data_type* {aka short int*}' in assignment
any help would be much appreciated!
Originally posted by vai on ROS Answers with karma: 1 on 2018-03-16
Post score: 0
Answer:
The problem is that "int" and "short int" are different data types, so pointers to those data are also different types. It appears that on the Teensy an "int" is 32 bits, while you're trying to create a message of type Int16MultiArray.
This should work:
lidar.data = (std_msgs::Int16MultiArray::_data_type*) malloc(2 * sizeof(std_msgs::Int16MultiArray::_data_type));
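As a side note (not raised in the original answer): the sketch writes lidar.data[0..2] while only allocating space for two elements and setting data_length = 8, so the allocation, data_length, and indices should all agree. A self-contained illustration of consistent sizing, using a minimal stand-in for the generated message type (the real struct comes from rosserial's headers):

```cpp
#include <cstdint>
#include <cstdlib>

// Minimal stand-in for the generated std_msgs::Int16MultiArray type,
// just to show the sizing rule; the real type comes from rosserial.
struct Int16MultiArray {
    using _data_type = std::int16_t;
    std::uint32_t data_length = 0;
    _data_type* data = nullptr;
};

// Size the data buffer and data_length consistently for n readings
// (e.g. n = 3 for front, left, right).
inline void allocate_data(Int16MultiArray& msg, std::uint32_t n) {
    msg.data_length = n;
    msg.data = static_cast<Int16MultiArray::_data_type*>(
        std::malloc(n * sizeof(Int16MultiArray::_data_type)));
}
```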
Originally posted by Mark Rose with karma: 1563 on 2018-04-03
This answer was ACCEPTED on the original site
Post score: 1 | {
"domain": "robotics.stackexchange",
"id": 30346,
"tags": "ros-kinetic"
} |
Yet another implementation of malloc() in C | Question: I'm really curious to know someone else's opinion on this implementation. Specifically, I would be interested to know the way you would have implemented it. Here the list of issues I've found with in this primitive implementation:
It's slow. You have to traverse the entire list in order to find a free block of memory.
Overuse of sbrk().
Very high possibility of internal fragmentation.
mem.h
#ifndef __MEM__
#define __MEM__
#include<stdio.h>
#include<stdlib.h>
typedef enum { false, true } bool;
typedef struct page
{
size_t size;
bool free;
struct page* next;
struct page* prev;
} page_t;
extern void* memalloc(size_t size);
extern void memfree(void* pointer);
#endif
mem.c
#include "include/mem.h"
#include <assert.h>
#include <sys/types.h>
#include <unistd.h>
#define SBRK_ERROR (void*)(-1)
#define UNDEFINED 0
page_t* global;
page_t* last;
page_t* allocate(size_t size)
{
page_t* node = sbrk(0);
void* pointer = sbrk(size + sizeof(page_t));
if(pointer == SBRK_ERROR)
return NULL;
if(last != NULL)
last-> next = node;
last = node;
last-> size = size;
last-> free = false;
last-> next = NULL;
return last;
}
void* search(size_t size)
{
page_t* node = global;
while(node != NULL){
if(node-> size >= size || node-> size == UNDEFINED){
if(node-> free)
return node;
}
node = node-> next;
}
return NULL;
}
void* memalloc(size_t size)
{
page_t* result = NULL;
if(size >= 0){
if(global != NULL){
result = search(size);
if(result == NULL)
result = allocate(size);
}else {
global = allocate(size);
result = global;
}
}
return result != NULL ? (result + 1) : NULL;
}
page_t* to_page(void* pointer){
return pointer - sizeof(page_t);
}
void memfree(void* pointer)
{
if(pointer != NULL) {
page_t* page = to_page(pointer);
page-> size = UNDEFINED;
page-> free = true;
}
}
int main(int argc, char** argv)
{
char* pointer = memalloc(5);
memfree(pointer);
return 1;
}
What are the most common techniques (implementations) to solve the issues above? Just rough ideas of it.
Answer: Design.
The trouble with your design is that you have to search a list. Where none of the sizes may match. Even if there is a block in the list of the correct size you have to search through all the blocks that don't match your size.
Step 1
Use a list of lists. The top level list is an ordered list of sizes. So you can loop across this list quickly looking for the size you want. And stop if it does not exist.
The second level list is a list of all blocks of a specific size. So if you find the list of 24 bytes. That list contains all the 24 byte blocks that have been freed (so you just take the first one).
When an object is used you remove it from a list and when it is free'd you add it back to the list of free blocks.
global->********* -->********* -->********* -->*********
* 8 * | * 16 * | * 24 * | * 32 *
* next *-| * next *-| * next * * next *
********* ********* ********* *********
| | | |
\/ \/ \/ \/
********* ********* ********* *********
* block * * block * * block * * block *
* next * * next * * next * * next *
********* ********* ********* *********
| | | |
\/ \/ \/ \/
********* null ********* *********
* block * * block * * block *
* next * * next * * next *
********* ********* *********
| | |
\/ \/ \/
********* null *********
* block * * block *
* next * * next *
********* *********
| |
\/ \/
********* null
* block *
* next *
*********
|
\/
null
Step 2 Simple Optimizations
If you don't find a block of the exact size you can re-use a block that is too large. There are two versions of this: a) slightly too large, so you just waste a small amount of space; b) twice as big as you need, so you split the block into two pieces.
Searching the size list can still be expensive but that can be simplified by using a skip list. This should significantly reduce the time you need to search the list.
Alternatively rather than a top level list you could use a balanced tree to hold the top level structure. That way finding a value is always O(ln(x)) where x is the number of different sizes.
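The list-of-lists bookkeeping can be sketched as below (illustrative only, not a drop-in for the reviewed allocator; bucket creation and block splitting are omitted):

```c
#include <stddef.h>

/* One bucket per block size; buckets kept sorted ascending by size so
 * a search can stop as soon as the sizes get too big. */
typedef struct block { struct block *next; } block_t;

typedef struct bucket {
    size_t size;            /* block size this bucket holds     */
    block_t *blocks;        /* free blocks of exactly this size */
    struct bucket *next;    /* next (larger) size bucket        */
} bucket_t;

/* Pop a cached free block of exactly `size` bytes, or NULL. */
static void *take_block(bucket_t *head, size_t size) {
    for (bucket_t *b = head; b != NULL && b->size <= size; b = b->next) {
        if (b->size == size && b->blocks != NULL) {
            block_t *blk = b->blocks;
            b->blocks = blk->next;
            return blk;
        }
    }
    return NULL;
}

/* Return a block to its bucket (the bucket is assumed to exist here). */
static void give_block(bucket_t *head, size_t size, void *p) {
    for (bucket_t *b = head; b != NULL; b = b->next) {
        if (b->size == size) {
            block_t *blk = (block_t *)p;
            blk->next = b->blocks;
            b->blocks = blk;
            return;
        }
    }
}
```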
Code Review
Your code is fine for handling string data. But you don't take into account the alignment of the object you are allocating for. If you allocate a block of 5 you don't guarantee that the object is aligned for a structure of 5 bytes (only aligned for an array of 5 characters).
Searching across blocks that have already been allocated seems like a waste of time. Either simply remove allocated blocks from the global chain, or if you absolutely must track them then keep two lists (one for allocated objects and one for freed objects).
There is no need to add blocks onto the end. It is far simpler to add new blocks to the front of a list.
Your code does not use the prev pointer!!!! | {
"domain": "codereview.stackexchange",
"id": 29777,
"tags": "c, linked-list, reinventing-the-wheel, memory-management"
} |
How to change default sensor parameters in turtlebot3 (waffle)? | Question:
Hi everyone, I am a ROS newbie. I am using the turtlebot3 packages to perform simulations in Gazebo/RViz. I am trying to make changes to the turtlebot3(waffle) sim sensors and see how it affects the simulation outcomes. Can anyone guide me on how to localize the specific files that have sensor parameters that can be modified? Is it possible to include all modifications in ONE launch file? It would be great to make changes to the distance sensor, camera, IMU, and odometry sensors. Perhaps, adding noise to those sensors would be great (if possible). For clarification, is turtlebot3 (waffle) using a LiDAR sensor in Gazebo/RViz simulations?
I am performing my sim using Kinetic on the Ubuntu 16.04 platform.
Thank you in advance!
-Jesus
Originally posted by jesus on ROS Answers with karma: 15 on 2019-03-03
Post score: 1
Answer:
Hi Jesus,
In ROS, robots are described using a URDF, normally stored with a .urdf extension. That description may include the sensor/actuator models for Gazebo as well. Sometimes the URDF files are generated using a script language named xacro (.xacro extension).
So what I would do to find the parameters passed to gazebo for any robot that I haven't worked with before is to find any .urdf and .xacro files inside of the robot's ROS packages.
If you have installed and compiled turtlebot3 packages from source code (https://github.com/ROBOTIS-GIT/turtlebot3) you can go to your catkin workspace and execute the following command:
find ./ -type f \( -iname \*.urdf -o -iname \*.xacro \)
And then dig into any files listed on the output of the previous command.
Originally posted by Martin Peris with karma: 5625 on 2019-03-03
This answer was ACCEPTED on the original site
Post score: 1 | {
"domain": "robotics.stackexchange",
"id": 32576,
"tags": "gazebo, navigation, rviz, ros-kinetic, turtlebot3"
} |
How can we relate the Feynman rules obtained from the Lagrangian with the usual definition from canonical quantization? | Question: I am following a course on QFT that is based upon canonical quantization and not path integrals.
When calculating scattering amplitudes we compute, at the relevant order, the corresponding matrix element, which might look like
$$\int \textrm{d}^4x\textrm{d}^4y\,\langle 0|\hat{a}_p\hat{a}_q T\{\bar{\psi}_x\phi_x\psi_x\bar{\psi}_y\phi_y\psi_y\}\hat{a}^\dagger_r\hat{a}^\dagger_s|0\rangle .$$
In order to compute this matrix element, we simply replace the time ordered product by the normal order of contractions, and then we see we need to fully contract the fields with the external states, as is explained in Peskin and Schroeder and in Computing S-Matrix Elements from Feynman Diagrams.
We then come up with Feynman Rules, define propagators as vacuum expectation values and retrieve the expression for the amplitude.
However, I have just read that both propagators and Feynman Rules can be recovered from the Lagrangian of the theory. How is this related to the usual definition of propagators $$\langle 0|T\{\phi_x\phi_y\}|0\rangle=D(x-y)$$ and the Feynman rules derived from the Wick's theorem?
Also, please, could you provide me with a good reference to derive Feynman Rules from Wick's Theorem? What I have found so far is just a list of rules to apply, but I would like to understand where they come from.
Edit: Just to clarify the question. I am familiar with the definition of the propagator $\langle 0|T\{\phi_x\phi_y\}|0\rangle=D(x-y)$. How is this related to the fact that the propagator can also be retrieved from the second-order terms in the Lagrangian? See for instance https://cds.cern.ch/record/319569/files/AT00000309.pdf.
Second part of the question is: since Feynman Rules are just a way to calculate Wick contrations in the reasoning I exposed in the first part of my question, I do not see why they can also be derived from the Lagrangian, as explained in the same link. In P&S, Feynman Rules are obtained as a way to compute the matrix element of the fully contracted fields and initial/final states.
Answer:
The propagator $\langle 0\vert T\phi(x)\phi(y)\vert 0\rangle$ is a Green's function for the free equation of motion of the field $\phi$. If you write the quadratic term in the Lagrangian as $\phi D \phi$ for some differential operator $D$, then this means the propagator is the inverse of $D$ (where "inverse" means it is the Green's function for this operator), i.e. you can tell the propagator for a field by looking at its quadratic term in the Lagrangian.
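For the free scalar field this can be made concrete (a standard textbook computation, shown here for illustration in Peskin & Schroeder's conventions): integrating by parts to read off the quadratic operator, and solving the Green's function equation in momentum space,

```latex
\mathcal{L}_0
  = \tfrac{1}{2}\,\partial_\mu\phi\,\partial^\mu\phi - \tfrac{1}{2}m^2\phi^2
  = -\tfrac{1}{2}\,\phi\left(\partial^2 + m^2\right)\phi + \text{total derivative},
\qquad
\left(\partial^2 + m^2\right) D_F(x-y) = -\,i\,\delta^{(4)}(x-y)
\;\Longrightarrow\;
\tilde{D}_F(p) = \frac{i}{p^2 - m^2 + i\epsilon}.
```

The momentum-space propagator read off from the quadratic term in this way reproduces the Fourier transform of the time-ordered two-point function.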
"The Feynman rules" is a bit of a vague term that often encompasses not only the graphical rules for how to associate Feynman diagrams to contractions of operators and how to evaluate those diagrams, but also how to know which diagrams to draw to compute the S-matrix elements/scattering amplitudes for a theory with a given Lagrangian. E.g. "the only internal vertices allowed are those with four lines attached" for $\phi^4$ theory is "a Feynman rule" in this latter, broader sense.
The LSZ formula says that S-matrix elements are of the form $\langle \Omega \vert T\prod_i \phi(x_i)\vert \Omega \rangle$ where $\lvert \Omega\rangle$ is the interacting vacuum and then a computation in the interaction picture shows that those can be computed by looking at
$$ \langle 0\vert T\prod_i \phi_I(x_i) \mathrm{e}^{-\mathrm{i}\int H_I(t)\mathrm{d}t}\vert 0\rangle,$$
where $H_I$ is the interacting part of the Hamiltonian. It is here that the Lagrangian implicitly enters - for the standard cases, this $H_I$ is just the potential in the Lagrangian. A Taylor expansion of the exponential in terms of some parameter in the potential then gives us expressions of the form
$$\langle 0\vert T \prod_i \phi_I(x_i) \left(\int V(\phi_I(y))\mathrm{d}y\right)^n\vert 0\rangle$$
and it is to these expressions that we now apply Wick's theorem/the "generic" Feynman rules. So, knowing this, you can "read off" the diagrams that will occur in this expansion just by looking at the potential term in the Lagrangian, and that's what people mean when they say you can read off "the Feynman rules" from the Lagrangian. | {
"domain": "physics.stackexchange",
"id": 86084,
"tags": "quantum-field-theory, lagrangian-formalism, feynman-diagrams, path-integral, wick-theorem"
} |
Double interpolation for a large set of data in Excel | Question: How do you do double interpolation for a large set of data in Excel?
The user is required to input an x value and a y value, then the program has to find the points and do interpolation if necessary. For example, if the x value is 3.5 and the y value is 4.5 then I'll have 2 x values and 2 y values, so I need to do the double interpolation, then a final interpolation between the remaining values.
I have used a vlookup function and forecast function but it doesn't seem to be working out. I have attached a picture of the data set.
Answer: It appears that your dataset has integer row and column header values starting at 0. You can try the following code.
Turn on Excel's Developer ribbon if it's not already on.
Click the Visual Basic button.
Insert | Module.
Paste the code below into the module.
Adjust the rowOffset value to suit.
Enter the formula =Interpol2D(y, x) into your spreadsheet where you want the interpolation result displayed.
Save as .xlsm (with macro).
Every time you change one of the referenced values the calculation will be run again.
You can monitor the calculated values in the Visual Basic Immediate window.
Code:
Function Interpol2D(r, c) 'Row and column.
'By Transistor.
'https://engineering.stackexchange.com/questions/21400/double-interpolation-for-a-large-set-of-data-in-excel/21414#21414
rowOffset = 10 'The row number for the header line of the table.
r1 = Int(r + rowOffset) 'The table row number for the first parameter
r2 = Int(r) + rowOffset + 1 'The next row.
c1 = Int(c) + 2 'The table column number for the second parameter
c2 = Int(c) + 3 'The next column.
'Layout of the four adjacent cells and the interpolated result.
' (r1, c1) *---p-------* (r1, c2)
' |
' |
' r (result)
' |
' (r2, c1) *---q-------* (r2, c2)
r1c1 = ActiveSheet.Cells(r1, c1)
r2c1 = ActiveSheet.Cells(r2, c1)
r1c2 = ActiveSheet.Cells(r1, c2)
r2c2 = ActiveSheet.Cells(r2, c2)
'Interpolate by multiplying the difference between the two cells by the fractional
'part of the parameter and then add in the first cell value.
p = (r1c2 - r1c1) * (c - Int(c)) + r1c1 'The interpolation along the first horizontal.
q = (r2c2 - r2c1) * (c - Int(c)) + r2c1 'The interpolation along the second horizontal.
r = (q - p) * (r - Int(r)) + p 'The interpolation along the vertical.
'Results will be printed out in the Immediate window.
Debug.Print "r1c1:", r1c1, p, r1c2, ":r1c2"
Debug.Print "r: ", , r
Debug.Print "r2c1:", r2c1, q, r2c2, ":r2c2"
Debug.Print "================================================================================"
Interpol2D = r 'Return the result.
End Function
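For reference, the same bilinear scheme can be sketched in a few lines of Python (independent of the spreadsheet layout; r and c must lie strictly inside the table):

```python
def bilinear(table, r, c):
    """Bilinear interpolation on a 2-D grid whose row/column headers are
    consecutive integers starting at 0, mirroring the VBA routine."""
    r0, c0 = int(r), int(c)
    fr, fc = r - r0, c - c0  # fractional parts
    # Interpolate along the two horizontals, then along the vertical.
    p = table[r0][c0] + (table[r0][c0 + 1] - table[r0][c0]) * fc
    q = table[r0 + 1][c0] + (table[r0 + 1][c0 + 1] - table[r0 + 1][c0]) * fc
    return p + (q - p) * fr
```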
Figure 1. Test data.
Table 1. Debug window result.
r1c1: 12 15 18 :r1c2
r: 16.5
r2c1: 14 17.5 21 :r2c2
================================================================================ | {
"domain": "engineering.stackexchange",
"id": 2072,
"tags": "statistics"
} |
Material that reflects only a certain polarization? | Question: Do materials exist that (significantly) reflect one polarisation of light and transmit all others?
Answer: Every material. Well, at a certain angle, at least (the Brewster angle), where p-polarized light is not reflected and hence only s-polarized light is reflected.
There is a modern interest in optical metamaterials and such things to engineer polarization reflections over wider angle ranges, though they tend to be limited in frequency. | {
"domain": "physics.stackexchange",
"id": 29816,
"tags": "material-science, optical-materials"
} |
Rotate image 90 degree clockwise recursively | Question: Previous question:
Rotate image 90 degree clockwise
In this code I have used tail recursive, which is bad. Time complexity is still \$O(N^2)\$, which is worse-case. The only good part is memory complexity.
How can I improve the recursive part of this code?
#include <iostream>
#include <iomanip>
#include <algorithm>
#include <vector>
template<typename T>
using matrix = std::vector<std::vector<T>>;
// for debug info
static size_t numTotalSwaps = 0;
template<typename T>
matrix<T> rotateImage(matrix<T>&& image, size_t row = 0, size_t column = 0)
{
if (row == image.size())
{
std::cout << "\nNumber of total swaps: " << numTotalSwaps << '\n';
return image;
}
if ( ++column == image.size() )
{
std::reverse(image[row].begin(), image[row].end());
++row;
column = 0;
}
if (row != column && row < column)
{
++numTotalSwaps;
std::swap(image[row][column], image[column][row]);
}
return rotateImage(std::move(image), row, column);
}
int main()
{
const size_t SIZE = 7;
matrix<int> image(SIZE, std::vector<int>(SIZE));
std::cout << "******* original image ******\n";
int value = 0;
for (auto&& i : image)
{
for (auto&& j : i)
{
std::cout << std::setfill(' ') << std::setw(3) << (j = value++) << ' ';
}
std::cout << '\n';
}
std::cout << "******* rotated image ******\n";
for (const auto& i : rotateImage(std::move(image)))
{
for (const auto& j : i)
{
std::cout << std::setfill(' ') << std::setw(3) << j << ' ';
}
std::cout << '\n';
}
}
Answer: Nitpicks
using matrix = std::vector<std::vector<T>>;
The standard is to capitalize names of user defined types, so matrix should be Matrix. I may have misled you in my other review, so I edited it for clarity.
// for debug info
static size_t numTotalSwaps = 0;
While I understand that it can often be helpful to define a global variable to do some quick and dirty debugging, by the time that you are sending code out for code review, this should be gone. You should be done debugging your code, so you can leave just the actual code.
What I would actually like to see here are the results. Given a particular input, what is the output?
if (row == image.size())
This can be fragile.
if ( row >= image.size() )
By switching to the inequality, you can avoid a class of bugs where you start incrementing by more than one row at a time but don't update your gate condition. The check costs the same either way, so you might as well do the more expansive check.
if (row != column && row < column)
This is redundant. If row < column, then you know that row != column.
if ( row < column )
This is sufficient, as it covers both cases.
std::reverse(image[row].begin(), image[row].end());
std::swap(image[row][column], image[column][row]);
I find this harder to follow than just saying
std::swap(image[row][column], image[image.size() - 1 - column][row]);
I'm not sure which performs better. It might be worth profiling if that matters to you.
for (const auto& j : i)
{
std::cout << std::setfill(' ') << std::setw(3) << j << ' ';
I don't know what i and j are here.
for ( const auto& element : row )
{
std::cout << std::setfill(' ') << std::setw(3) << element << ' ';
Now I can easily see what I'm printing. Using i and j didn't tell me what they actually were, but row and element do.
Move Semantics
matrix<T> rotateImage(matrix<T>&& image, size_t row = 0, size_t column = 0)
return image;
return rotateImage(std::move(image), row, column);
for (const auto& i : rotateImage(std::move(image)))
This seems a bad place to use move semantics. Note that the point of move semantics is to avoid doing a copy when you don't need to do so. However, here it may create a copy that you don't need. Note that your algorithm passes an rvalue reference to the function but returns an lvalue. If each recursive call triggers a full copy on return, your algorithm goes from \$O(n^2)\$ to \$O(n^4)\$.
You might be able to fix this by saying
return std::move(image);
This will turn the named variable image back into an rvalue reference.
We shouldn't need to pass by value inside the recursive function. We can work with the actual matrix. We need to make one copy at most to avoid changing the variable outside the function.
const Matrix<T> & rotateImage(Matrix<T> & image, size_t row = 0, size_t column = 0)
return rotateImage(image, row, column);
And then insist outside
Matrix<int> rotatedImage = std::move(image);
for ( const auto& row : rotateImage(rotatedImage, 0, 0) )
This makes a more regular use of move semantics. If there is an assignment function that uses move semantics, this will just work.
We also enjoy the speed advantages of working with a reference when dealing with the recursive function. We don't have to worry if move semantics get used or not, as our function always passes a reference. We don't force the caller to use std::move, which is more commonly used with an assignment or copy constructor.
At the same time, I added the starting row and column numbers back. I don't think that the code will work without them. It looks like they got lost in an edit.
Recursion
The normal advantage of using recursion is that it allows for a more elegant solution. However, in this case, I don't think that it does. Your iterative solution traversed the grid smoothly and understandably. The recursive solution is more complex, as you have to transform the grid into a one dimensional object.
It's also problematic in that the caller has to understand more than it should about what's happening internally. It has to call std::move and add the initial row and column numbers. You can sort of fix this by providing two functions. The first function for use by the caller. The first function then calls the second function with the right arguments for the recursion. The second function does the actual work. Note that it makes sense for the second function to use references. The first function could pass by value. Calling with std::move would trigger move semantics on the copy constructor.
The problem with recursion is that it works by making use of the stack. For each function call, you have to push the current state onto the stack. However, most of the state doesn't change from call to call, so saving it is redundant. Further, since this is tail recursion, we're done with the current state by the time that we save it. A good compiler will optimize this out, but why not go ahead and do it yourself?
Rather than relying on the compiler to fix things, just write the iterative version. Tail recursion will have a natural transformation into an iterative solution. In this case, we can do even better. The recursive solution maps the grid into a vector. An iterative solution can work with a grid directly. | {
"domain": "codereview.stackexchange",
"id": 11243,
"tags": "c++, c++11, recursion, image, matrix"
} |
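As a companion to the review above, the iterative transpose-then-reverse rotation it recommends can be sketched compactly in Python (the original is C++):

```python
def rotate_clockwise(image):
    # transpose in place, then reverse each row -- the same two steps the
    # recursive C++ code performs, written as two plain loops
    n = len(image)
    for r in range(n):
        for c in range(r + 1, n):
            image[r][c], image[c][r] = image[c][r], image[r][c]
    for row in image:
        row.reverse()
    return image
```

This walks the grid directly, with no recursion depth and no mapping of the matrix onto a one-dimensional index.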
ε-NFA to DFA - initial state with only epsilon transitions | Question: I am having trouble discovering how to convert a ε-NFA to DFA (image below) when all transitions in the initial state are epsilon transitions. I already know how to convert ε-NFA to DFA (common cases), but this case I never saw before.
Thank you guys!
Answer: Here is an algorithm to convert an $\epsilon$-NFA to an equivalent DFA.
Let the $\epsilon$-NFA consist of a set of states $Q$, an initial state $q_0$, a set of accepting states $F$, and a transition function. We construct a DFA on the set of states $2^Q$ (the power set of $Q$). The initial state is the $\epsilon$-closure of $q_0$ (the set of all states in $Q$ reachable from $q_0$ by a path of $\epsilon$-transitions). A state is accepting if it intersects $F$. We define the transition function $\delta$ as follows: given a state $S \in 2^Q$ and a letter $a$, $\delta(S,a)$ consists of all states in $Q$ reachable from a state in $S$ by taking an $a$-transition followed by a path of $\epsilon$-transitions. | {
"domain": "cs.stackexchange",
"id": 20761,
"tags": "finite-automata"
} |
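A minimal Python sketch of the subset construction described in the answer above; the dictionary-based NFA encoding is my own choice, not part of the original answer:

```python
from itertools import chain

def eps_closure(states, eps):
    # all states reachable from `states` by epsilon transitions alone
    stack, seen = list(states), set(states)
    while stack:
        s = stack.pop()
        for t in eps.get(s, ()):
            if t not in seen:
                seen.add(t)
                stack.append(t)
    return frozenset(seen)

def nfa_to_dfa(q0, delta, eps, accept, alphabet):
    # DFA states are sets of NFA states; start is the eps-closure of q0
    start = eps_closure({q0}, eps)
    dfa, todo = {}, [start]
    while todo:
        S = todo.pop()
        if S in dfa:
            continue
        dfa[S] = {}
        for a in alphabet:
            # take an a-transition from any state in S, then eps-close
            moved = set(chain.from_iterable(delta.get((s, a), ()) for s in S))
            T = eps_closure(moved, eps)
            dfa[S][a] = T
            todo.append(T)
    accepting = {S for S in dfa if S & accept}
    return start, dfa, accepting
```

For an automaton whose initial state has only ε-transitions, the DFA start state is simply the ε-closure of $q_0$, which may contain several NFA states.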
What is left out by treating the nucleus of an atom as a point particle? | Question: In several computational software dealing with electronic calculations, protons and neutrons are lumped together into a point particle. This is done to simplify the problem, but I am wondering what gets left out by using this approximation. I am also wondering about what would be the magnitude of what is left out. Is the effect small enough such that we do not have to worry about it ever? What are the exceptions? This question is for electronic structure calculations.
Answer: The lowest-energy excitations in an atomic nucleus are typically thousands or millions of electron-volts, while chemical excitations are typically a few or a few tens of electron-volts. So treating the nucleus as inert during chemical reactions at thermal energies is an excellent approximation.
The electronic effect is negligible. The nuclear radius is typically $\sim 10^{-15}\rm\,m$, while the atomic radius is closer to $\sim 10^{-10}\rm\,m$.
The nucleus therefore occupies roughly $10^{-15}$ of the atomic volume, and treating it as a simple point is also a reasonable approximation. | {
"domain": "physics.stackexchange",
"id": 35446,
"tags": "computational-physics"
} |
Simple harmonic motion net force | Question: I was doing some practice questions for a physics test I have tomorrow. I encountered the following question:
However, I was quite sure the answer was B, as in position B gravity is being counteracted by tension and there are no other forces. Am I missing something obvious, or is the mark scheme wrong?
Answer: Is the acceleration equal to zero in $B$?
No, it's not, since the radial component of the equations of motion in $B$ reads
$0 \ne -m R \dot\theta_B^2 = mg - T_B = F_{r,B}^{tot}$
For completeness, the radial and azimuthal components of the equation of motion of the mass are
$\hat{r}: - m R \dot\theta^2 = mg \cos \theta - T = F_r^{tot}$
$\hat{\theta}: m R \ddot \theta = - mg \sin \theta = F_{\theta}^{tot}$ | {
"domain": "physics.stackexchange",
"id": 91659,
"tags": "homework-and-exercises, forces, harmonic-oscillator"
} |
How to find bag files in Ubuntu | Question:
Hi All,
I recorded a .bag file and then played it back, and it worked. Now I cannot find where it is saved in my filesystem. Any suggestions?
Also maybe I messed around with something because now it is showing this error:
sam@sam-ThinkStation-P300:~/catkin_ws$ rosbag info sadrecord.bag
ERROR reading sadrecord.bag: [Errno 2] No such file or directory: 'sadrecord.bag'
Originally posted by Sam on ROS Answers with karma: 13 on 2015-06-15
Post score: 0
Answer:
sudo updatedb
locate sadrecord.bag
Originally posted by lucasw with karma: 8729 on 2015-06-15
This answer was ACCEPTED on the original site
Post score: 0
Original comments
Comment by Sam on 2015-06-15:
Thanks. Found it.
"domain": "robotics.stackexchange",
"id": 21932,
"tags": "rosbag"
} |
Find minimum number of rooms required | Question: Problem statement is as follows
Given an array of time intervals (start, end) for classroom lectures (possibly overlapping), find the minimum number of rooms required.
For example, given [(30, 75), (0, 50), (60, 150)], you should return 2.
This is supposed to be an easy challenge but it was not the case for me.
Here is my solution in JavaScript
// Given an array of time intervals (start, end) for classroom lectures (possibly overlapping),
// find the minimum number of rooms required.
// For example, given [(30, 75), (0, 50), (60, 150)], you should return 2.
exports.roomsRequired = function roomsRequired(lectureIntervals) {
function Room() {return {busy: []};}
let rooms = [];
lectureIntervals.forEach(lectureInterval => {
let roomFound = false;
rooms.forEach(room => {
let roomBusyInLectureHours = false;
room.busy.forEach(reserved => {
if (lectureInterval[0] > reserved[0] && lectureInterval[0] < reserved[1]) roomBusyInLectureHours = true;
if (lectureInterval[1] > reserved[0] && lectureInterval[1] < reserved[1]) roomBusyInLectureHours = true;
if (reserved[0] > lectureInterval[0] && reserved[0] < lectureInterval[1]) roomBusyInLectureHours = true;
if (reserved[1] > lectureInterval[0] && reserved[1] < lectureInterval[1]) roomBusyInLectureHours = true;
});
if (!roomBusyInLectureHours) {
room.busy.push(lectureInterval);
roomFound = true;
}
});
if (!roomFound) {
let room = new Room();
room.busy.push(lectureInterval);
rooms.push(room);
}
});
return rooms;
};
The only test case I have so far
let rooms = exports.roomsRequired([[30, 75], [0, 50], [60, 150]]);
for (let i = 0; i < rooms.length; i++) {
console.log(rooms[i].busy)
}
Which prints
[ [ 30, 75 ] ]
[ [ 0, 50 ], [ 60, 150 ] ]
I am aware that I am not returning the number of rooms, but that is essentially the number of rows printed, so for example in the case above it is 2 as expected.
My question is, can this code be much shorter? I suspect I am missing something obvious being this challenge easy.
Pseudo code of my implementation would be something like this
For Each Interval:
For Each Room in Rooms:
For Each IntervalWithinThatRoom:
Check If IntervalWithinThatRoom overlaps with Interval
If No Overlaps Found
Push Interval to Room
If Interval Not pushed to any Room
Create new Room
Push Interval to Room
Push new Room to Rooms
Edit - Unit Tests I have
expect(roomsRequired.roomsRequired([[30, 75], [0, 50], [60, 150]]).length).eq(2);
expect(roomsRequired.roomsRequired([[5, 7], [0, 9], [5, 9]]).length).eq(3);
Answer: I know that my approach may not add much value, but I would like to share it with you.
Detecting room hours overlapping with this logic is correct, but it is hard to read and understand:-
if (lectureInterval[0] > reserved[0] && lectureInterval[0] < reserved[1]) roomBusyInLectureHours = true;
if (lectureInterval[1] > reserved[0] && lectureInterval[1] < reserved[1]) roomBusyInLectureHours = true;
if (reserved[0] > lectureInterval[0] && reserved[0] < lectureInterval[1]) roomBusyInLectureHours = true;
if (reserved[1] > lectureInterval[0] && reserved[1] < lectureInterval[1]) roomBusyInLectureHours = true;
I think following the approach below will make it more readable and understandable:-
let [reservedStart, reservedEnd] = reserved, [lectureStart, lectureEnd] = lectureInterval;
let busyHours = [...new Array(reservedEnd - reservedStart)].map((v, i)=> reservedStart+i);
let lectureHours = [...new Array(lectureEnd - lectureStart)].map((v, i)=> lectureStart+i);
roomBusyInLectureHours = busyHours.filter(hour => lectureHours.includes(hour)).length > 0; | {
"domain": "codereview.stackexchange",
"id": 39253,
"tags": "javascript, programming-challenge"
} |
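As an aside to the review above, the minimum room count can also be computed directly with a sweep line, without building the room assignments; a sketch in Python (the original is JavaScript):

```python
def rooms_required(intervals):
    # sweep line: +1 at each lecture start, -1 at each end; the peak of
    # the running sum is the minimum number of rooms needed
    events = []
    for start, end in intervals:
        events.append((start, +1))
        events.append((end, -1))
    # at equal times, process ends before starts so that back-to-back
    # lectures can share a room (matching the strict-inequality overlap test)
    events.sort(key=lambda e: (e[0], e[1]))
    best = current = 0
    for _, delta in events:
        current += delta
        best = max(best, current)
    return best
```

This runs in O(n log n) rather than the nested loops of the assignment approach.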
Charge Conjugation to Analyze CPT Invariance | Question: Source: http://hyperphysics.phy-astr.gsu.edu/hbase/Particles/cpt.html#:~:text=Charge%20conjugation(C)%3A%20reversing,like%20momentum%20and%20angular%20momentum.
In the image below, reaction (1) shows CP conservation whereas reaction (2) does not. Does this have to do with the fact that the charge conjugation of the RHS of the reaction 1 is $(-1)(-1)=1$ whereas the charge conjugation of the RHS of reaction 2 is $(-1)(-1)(-1)=-1$, which is not equivalent to the RHS Conservation of Charge C or $(1)(1)(1)=1$?
Is this how one would check for charge conjugation?
Answer: Yes. More precisely, the neutral pion $\pi^0$ is an eigenstate of the charge conjugation operator $\mathcal{C}$, with eigenvalue $+1$. A photon is also an eigenstate of this operator, but with eigenvalue $-1$.
The eigenvalue for a multiparticle state is given by the product of the eigenvalues of each individual particle (given that those particles are eigenstates themselves). So for an even number ($2n$) of photons, the C-parity of the system would be $(-1)^{2n}=+1$. Conversely, for an odd number ($2n+1$) of photons we have that the C-parity is $(-1)^{2n+1}=-1$.
To check whether charge conjugation allows a certain transition you can then compare the C-parity of the initial and final states, as you have done. The transition is allowed by charge conjugation symmetry if and only if both eigenvalues are the same. | {
"domain": "physics.stackexchange",
"id": 79185,
"tags": "particle-physics, pions, charge-conjugation, cpt-symmetry"
} |
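The counting rule from the answer above amounts to multiplying eigenvalues; a toy check of the $\pi^0$ photon decays (Python used purely for the arithmetic):

```python
def c_parity_of_photons(n: int) -> int:
    # each photon is a C eigenstate with eigenvalue -1,
    # so an n-photon state has C-parity (-1)^n
    return (-1) ** n

PI0_C_PARITY = +1  # the neutral pion is a C eigenstate with eigenvalue +1

def decay_allowed_by_c(n_photons: int) -> bool:
    # allowed iff the initial and final C eigenvalues match
    return c_parity_of_photons(n_photons) == PI0_C_PARITY
```

So $\pi^0 \to 2\gamma$ conserves C, while $\pi^0 \to 3\gamma$ is forbidden by it.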
Extracting first initial and last name | Question: How can I do this better?
$a = "Tom Smith" ; $e = $a.substring(0,1)
$ee = ($a).split(" "); $y = $e + $ee[1]; $y
TSmith
Answer: Regular expressions are another option:
$name = "Tom Smith"
$short = $name -replace "(?<=^.).*\s", ""
$short
The above code matches from the second character through to the last space and replaces that with nothing (i.e., deletes it).
The regular expression is what's called a zero-width positive lookbehind - See a tutorial here
The following examples/outputs:
Tom Smith -> TSmith
Tom Bob Tables -> TTables
Bob -> Bob
If you only want to remove to the first space (so Tom Bob Tables becomes TBob Tables, then add a ? to the expression like "(?<=^.).*?\s" | {
"domain": "codereview.stackexchange",
"id": 12976,
"tags": "strings, powershell"
} |
How come neutrons in a nucleus don't decay? | Question: I know outside a nucleus, neutrons are unstable and they have half life of about 15 minutes. But when they are together with protons inside the nucleus, they are stable. How does that happen?
I got this from wikipedia:
When bound inside of a nucleus, the instability of a single neutron to beta decay is balanced against the instability that would be acquired by the nucleus as a whole if an additional proton were to participate in repulsive interactions with the other protons that are already present in the nucleus. As such, although free neutrons are unstable, bound neutrons are not necessarily so. The same reasoning explains why protons, which are stable in empty space, may transform into neutrons when bound inside of a nucleus.
But I don't think I get what that really means. What happens inside the nucleus that makes neutrons stable?
Is it the same thing that happens inside a neutron star's core? Because neutrons seem to be stable there too.
Answer: Spontaneous processes such as neutron decay require that the final state is lower in energy than the initial state. In (stable) nuclei, this is not the case, because the energy you gain from the neutron decay is lower than the energy it costs you to have an additional proton in the core.
For neutron decay in the nuclei to be energetically favorable, the energy gained by the decay must be larger than the energy cost of adding that proton. This generally happens in neutron-rich isotopes:
An example is the $\beta^-$-decay of Cesium:
$${}^{137}_{55}\mathrm{Cs} \rightarrow {}^{137}_{56}\mathrm{Ba} + e^- + \bar{\nu}_e$$
For a first impression of the energies involved, you can consult the semi-empirical Bethe-Weizsäcker formula which lets you plug in the number of protons and neutrons and tells you the binding energy of the nucleus. By comparing the energies of two nuclei related via the $\beta^-$-decay you can tell whether or not this process should be possible. | {
"domain": "physics.stackexchange",
"id": 89967,
"tags": "particle-physics, neutron-stars, weak-interaction, neutrons"
} |
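The comparison suggested in the answer above can be sketched numerically with the semi-empirical Bethe-Weizsäcker formula; note that the coefficient values below (in MeV) are one common textbook fit and are an assumption here — exact values vary between sources:

```python
import math

def binding_energy(Z, A):
    # semi-empirical mass formula; coefficients in MeV (one common fit)
    a_v, a_s, a_c, a_a, a_p = 15.8, 18.3, 0.714, 23.2, 12.0
    N = A - Z  # neutron number
    B = (a_v * A                                # volume term
         - a_s * A ** (2 / 3)                   # surface term
         - a_c * Z * (Z - 1) / A ** (1 / 3)     # Coulomb repulsion
         - a_a * (A - 2 * Z) ** 2 / A)          # asymmetry term
    if Z % 2 == 0 and N % 2 == 0:
        B += a_p / math.sqrt(A)                 # even-even: extra binding
    elif Z % 2 == 1 and N % 2 == 1:
        B -= a_p / math.sqrt(A)                 # odd-odd: less binding
    return B
```

Comparing $^{137}\mathrm{Cs}$ (Z = 55) with $^{137}\mathrm{Ba}$ (Z = 56) shows the daughter of the $\beta^-$ decay is more bound, so the decay is energetically favorable.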
When does Coelom form exactly? | Question: Related to my other question.
I know that the coelom is derived from mesoderm.
Coelom seems to form during organogenesis within 3rd and 8th week of embryogenesis.
However, that answer is either not exact enough or it is wrong.
I am reading the thing in Kimball 5e and Gilbert 9e, but cannot find an exact mention about the thing. I know for sure that the coelom develops within gastrulation and organogenesis, since it is forming from mesoderm.
When does coelom form exactly?
Answer: In humans, the coelom forms by the splitting of the lateral plate mesoderm, which occurs during weeks 4-5 (Sweeney, 1998).
Sweeney, LJ. 1998. Basic Concepts in Embryology. McGraw-Hill. | {
"domain": "biology.stackexchange",
"id": 154,
"tags": "homework, embryology"
} |
Does SCF energy mean the same as HF energy? | Question: I use Gaussian 09 and Turbomole to run the same calculation (B3-LYP/6-311G) on the same molecule, but the results are not the same: Turbomole shows SCF total energy = -772.16125945927, while Gaussian shows SCF Done E(UB3LYP) = -772.402241773, and no UHF energy is shown. I use UB3LYP in Gaussian, while in Turbomole I cannot specify that.
I use TmoleX to create the input file for Turbomole. It does not have UB3LYP or RB3LYP in the method part, I can only select DFT > B3LYP as my method. In previous part for molecular attributes, I can select multiplicity: UHF, then generate MOs.
The input file of Gaussian is very simple: UB3LYP/6-311g followed by cartesian coordinates of the molecule.
The energy is not the same in both of these cases. What causes the different results?
Answer: Answering your question from the title: no, SCF does not necessarily mean HF. Essentially the same self-consistent field (SCF) procedure is used to solve both the Hartree-Fock (HF) equations and the Kohn-Sham (KS) one, so for the case of DFT, SCF energy means not HF energy, but rather KS energy.
Regarding discrepancies in results obtained with different programs, there might be a few reasons: different definitions of the B3LYP functional, different harmonics used for the 6-311G basis set (spherical vs. cartesian), different integration grids, different SCF convergence criteria, etc.
I checked the TURBOMOLE documentation and indeed found that to use the very same B3LYP functional that is used in Gaussian (with the VWN3 correlation) you need to use b3-lyp_Gaussian in TURBOMOLE, not b3-lyp. So, first of all, I suggest you try b3-lyp_Gaussian in TURBOMOLE instead of b3-lyp. | {
"domain": "chemistry.stackexchange",
"id": 6123,
"tags": "computational-chemistry, software"
} |
How does Yeast-two-hybrid detect interactions between several proteins in one experiment? | Question: I am trying to understand the Y2H screening method. I can understand how we can check if two specific proteins interact with each other. For example, if we want to check whether protein A and protein B interact, we fuse A with the Activation Domain (AD) of a transcription factor and B with the Binding Domain (BD) of another transcription factor. When the two fusion proteins are transfected into a yeast cell, the reporter gene is expressed only if the AD and the BD get together and form a complete transcription factor and that happens only if proteins A and B interact with each other.
However, I have seen claims that Y2H is a high throughput system, i.e., it can be used to detect several interactions at once. But most articles online that attempt to describe it, seem vague to me.
For example, suppose I fuse the "bait" protein to the BD and several different cDNAs to the AD and let the yeast grow. Next, I observe that the reporter gene is expressed. Doesn't this observation only imply that there exists some protein from the cDNA library that interacts with our bait protein? How do we know exactly which of the target proteins have interacted with the bait and resulted in the expression of the reporter gene?
Answer: You use a library of many yeast in which each expresses only one target, or 'prey' protein. Then you grow each yeast colony separately, for example in different wells.
It's definitely not a high-throughput method if you do it the old-fashioned way, i.e. making your own yeast library, or if you do it with only a few targets. But these days you can buy pre-made yeast libraries with thousands of target proteins, each separated into a well on a plate, then you run the experiment for all the targets simultaneously. With a visible marker you can then easily see which wells have produced successful interactions, and thus which targets interacted with your bait. | {
"domain": "biology.stackexchange",
"id": 466,
"tags": "proteins"
} |
Why do we use dynamic memory allocation in general and why did we use it in this particular example? | Question: Dynamic Programming C/C++ implementation of LIS(Longest Increasing Subsequence) problem
/* lis() returns the length of the longest increasing
subsequence in arr[] of size n */
int lis( int arr[], int n )
{
int *lis, i, j, max = 0;
lis = (int*) malloc ( sizeof( int ) * n );
/* Initialize LIS values for all indexes */
for (i = 0; i < n; i++ )
lis[i] = 1;
/* Compute optimized LIS values in bottom up manner */
for (i = 1; i < n; i++ )
for (j = 0; j < i; j++ )
if ( arr[i] > arr[j] && lis[i] < lis[j] + 1)
lis[i] = lis[j] + 1;
/* Pick maximum of all LIS values */
for (i = 0; i < n; i++ )
if (max < lis[i])
max = lis[i];
/* Free memory to avoid memory leak */
free(lis);
return max;
}
It is code from www.geeksforgeeks.org. I have seen explanations that memory (when allocated dynamically) is allocated on the heap, which is a free store of (very large) memory, so it makes sense to only use dynamic allocation when the memory required exceeds the memory of the stack used for storing variables and data locally allocated in functions.
Would it make a difference if I declared the array as int arr[n] (as n is given by the user in input) and then in the main function used lis(arr, n)? If so, why? If not, what else does dynamic allocation benefit from in this case and in general?
Answer: If you declared array as int arr[n] then the memory might be allocated on the Stack and arr would exist as long as the function lis exists, i.e., it would be allocated locally in the lis. malloc on the other hand, would allocate the memory on the Heap and the array would exist until you explicitly free it using free. Nevertheless, as Yuval Filmus noted, memory allocation for the variable length arrays depends on compiler implementation and environment (for example underlying environment may have no distinction between heap and stack). In addition, the GNU C Compiler allocates memory for variable length arrays on the stack.
However, regarding the time or space complexity of the algorithm, it would not make any difference. In particular, the time complexity of an algorithm has nothing to do with its implementation in a particular language. | {
"domain": "cs.stackexchange",
"id": 9556,
"tags": "dynamic-programming, memory-allocation"
} |
How do we know that the laws of physics are invariant in all inertial frames? | Question: Einstein's Special Relativity theory is based on the assumption that the laws of physics are invariant in all inertial frames, and from there - according to Maxwell's equations - it derives that the speed of light must be the same in all reference frames, thus the need for time dilation etc...
But how is the initial assumption justified? I have always been explained this assumption in an "intuitive" way, as a thought experiment, for example regarding the fact that sitting on a sofa feels no different than sitting in a plane. But I could have made the same thought experiment about the hypothesis that velocities always add together, so one could make light "faster" by shining a beam from the tip of a rocket...
Answer: Lefaroundabout's comment is important. While we are typically taught that we use science to know things, that is not actually a correct statement. Science is a very powerful tool for creating models that can be used to create educated predictions about how a system will behave, and it is founded on the idea of falsifiable hypotheses, but that doesn't mean we're never wrong. It just means it's possible to disprove our hypotheses.
Your example of making the velocities add is a great one. It's terribly intuitive that velocities add together. If I'm on a train and I throw a baseball, an observer on the ground sees the baseball hurtling through the air at the train's speed plus the speed of my throw. It would be very natural to assume that light behaves the same way. In fact, I think most people believe this is how light works until they are told otherwise by a science teacher.
Now let's bring in Maxwell's equations. Maxwell's equations do a remarkably good job of predicting how electricity and magnetism behave. You can try to falsify them by building oddly designed experiments to isolate magnetic monopoles and so forth, but we found his laws simply hold up well (at least all the way up to Quantum Mechanics, which is its own beast, and its own story). After a lot of testing, the scientific community came to a consensus that Maxwell's equations are pretty darn reliable. I can't say "they knew his equations were true," because that would be an overstatement, but their confidence was very high.
However, there's a quirk. Maxwell's equations predict a "speed of light." But if you go back to our baseball example, we see that the baseball is going at different speeds in different inertial frames. While I ride on the train at a constant velocity, I am viewing the world from an inertial frame, and I see the ball at one speed. While you are on the ground, standing still, you are viewing the world from an inertial frame, and you see the ball at a different speed. Maxwell's equations simply don't have any room for that. They just say "light has a fixed speed," leaving scientists to ponder what's up with that.
One intuitive approach is to assume the light is traveling through a medium, and the speed of light is with respect to that medium. This is intuitive when you look at effects like drag on a baseball. The drag forces on a baseball aren't dependent on how fast it's traveling with respect to me or you, but on how fast it's traveling with respect to the wind. It was theorized that light might travel in a so-called "luminiferous aether," just like our baseball travels through the air. This solves the conundrum of Maxwell's equations: the "speed of light" is the speed of light with respect to the aether.
So this was a reasonable hypothesis. Just like your "velocities add" hypothesis, it led to natural ways of thinking about light. Of course, this being a scientific hypothesis, it was designed to be falsifiable. If one could demonstrate that light's movement did not act like there was some privileged reference frame (the frame of the aether), then one would be able to refute this hypothesis. And they did.
The most famous experiment falsifying the aether theories was the Michelson–Morley experiment. Through clever use of interferometry, they were able to compare the speeds of light going in the direction of the Earth's orbit around the sun versus going across it. Their goal was to determine if the aether was stationary, or if it was somehow "dragged" along by massive objects like the Earth (like how air forms drafts behind a large vehicle). They found, curiously enough, that there was no detectable difference in the speed of light in the two directions. If indeed the aether existed (which they believed at the time), it was so tied to the movement of the earth that we couldn't discern it. It's like you were drafting behind a large vehicle, and instead of feeling the wind pull you forward, it felt more like you were encased in concrete and being dragged forcefully along!
Many other experiments also found results like this, which made aether theories start to seem very unreliable. They just called for too much "hand waving." From this, we developed the Lorentz boosts, which were modifications to Maxwell's equations that were very effective at predicting the results of experiments like these, but made the equations terribly ugly. The beauty of Maxwell's equations vanished under the Lorentz transformations.
So now enter Einstein, making his assumption that the speed of light must be the same in all reference frames. I agree with your original opinion that it's a strange thing to just assume. But it was brilliant. When he was done with the math, the ugly Lorentz boosts that defiled Maxwell's equations were neatly tucked away into this assumption that the speed of light was the same in all reference frames. It did a very good job of cleaning up a lot of ugliness in the theories. People liked it.
More than being liked, it was scientific: it was falsifiable. If we ever found two inertial frames which had different speeds of light, or if we found out that time dilation did not occur, it would have falsified Einstein's theories, and we probably wouldn't revere him as we do today. However, in hundreds (if not thousands) of experiments, we have found that Einstein's theory is extraordinarily good at predicting some really awkward and unintuitive effects.
So thus, we justify his assumption that the speed of light is the same in all inertial frames after the fact. We have found that the results of this assumption are tremendously useful and effective. At the time, the justification was that it was an elegant solution to a very difficult problem, and it produced new falsifiable hypotheses to test (like any good scientific theory does). | {
"domain": "physics.stackexchange",
"id": 45428,
"tags": "special-relativity, inertial-frames, galilean-relativity"
} |
FizzBuzz in Commodore Basic | Question: 1 REM
2 REM FIZZBUZZ IN COMMODORE BASIC
3 REM
10 FOR I = 1 TO 100
20 IF (I/3)=INT(I/3) THEN PRINT "FIZZ";
30 IF (I/5)=INT(I/5) THEN PRINT "BUZZ";
40 IF (I/3)<>INT(I/3) AND (I/5)<>INT(I/5) THEN PRINT I;
50 PRINT
60 NEXT I
I'm not happy with the tests for "multiple of 3" and "multiple of 5". Using floating-point arithmetic to determine integer multipleness feels highly inelegant, but I don't believe Commodore Basic has "integer mod" functionality, and I can't think of another similarly-compact way to determine that one integer is a multiple of another.
When run using the VICE emulator, it takes almost nine seconds to execute. Besides getting rid of the floating-point division, are there any obvious ways to speed things up?
Answer: As Jerry explained, all math operations treat numbers as floating point... even numbers stored as integers. This was my first attempt:
1 rem
2 rem fizzbuzz in commodore basic
3 rem
10 t = time
20 for i = 1 to 100
30 n = i / 15 :if n = int(n) then print "fizzbuzz" :goto 70
40 n = i / 3 :if n = int(n) then print "fizz" :goto 70
50 n = i / 5 :if n = int(n) then print "buzz" :goto 70
60 print i
70 next
80 print "ran for" (time - t) / 60 "seconds"
Skipping the rest of the conditions when a condition passes speeds it up significantly.
Assigning the value of X/Y to a variable and reusing it is faster than calculating it twice. Some places around the internet mention the formula X-INT(X/Y)*Y, but that was slower than the N = X/Y : IF N = INT(N) approach in my tests.
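As a cross-check outside the emulator, the two multiple-of tests discussed here can be compared directly. This is a quick Python sketch (not Commodore BASIC) just to confirm the division test and the X-INT(X/Y)*Y formula agree:

```python
# The BASIC "N = I/Y : IF N = INT(N)" test, translated to Python.
def is_multiple_div(i, y):
    n = i / y
    return n == int(n)

# The "X - INT(X/Y)*Y" formula some sites suggest (zero means a multiple).
def is_multiple_mod(i, y):
    return i - int(i / y) * y == 0

# Both tests agree for every value and divisor the fizzbuzz loop uses.
for i in range(1, 101):
    for y in (3, 5, 15):
        assert is_multiple_div(i, y) == is_multiple_mod(i, y)
```

Both are exact over this range because the quotients of small integers that *are* multiples come out as exact floating-point integers, which is also why the original BASIC version gives correct answers despite using floats.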
According to this article you can write your own modulo "operator" in assembly and use it in BASIC. The assembly program is available here. Seems like overkill to me.
You might think that you could replace the goto 70 with next, but it doesn't quite work. Once the loop is complete, it will go to the next line, and call the next on that line, and you'll get a "next without for" error. Technically it still does what it's supposed to do, but it doesn't exit cleanly.
You could do something like this as a compromise, but it's hackish:
1 rem
2 rem fizzbuzz in commodore basic
3 rem
10 t = time
20 for i = 1 to 100
30 n = i / 15 :if n = int(n) then print "fizzbuzz" :next
40 n = i / 3 :if n = int(n) then print "fizz" :next
50 n = i / 5 :if n = int(n) then print "buzz" :next :goto 70
60 print i: next
70 print "ran for" (time - t) / 60 "seconds"
Edward had a good idea: keep separate counters for fizz and buzz instead of doing division. Here's a variation on his approach that sacrifices speed for readability. It doesn't use goto and is about as readable as the original code in the question. It's faster than my examples above, but slower than Edward's solution (since it doesn't skip other conditions after passing one, and doesn't have a combined "fizzbuzz" case).
1 rem
2 rem fizzbuzz in commodore basic
3 rem
10 t = time
20 fizz = 0 :buzz = 0
30 for i = 1 to 100
40 fizz = fizz + 1 :buzz = buzz + 1
50 print
60 if fizz = 3 then print "fizz"; :fizz = 0
70 if buzz = 5 then print "buzz"; :buzz = 0
80 if fizz > 0 and buzz > 0 then print i;
90 next
100 print ,"ran for" (time - t) / 60 "seconds"
Incidentally, the biggest performance bottleneck seems to be printing carriage returns and moving everything up the screen. If you try the original code in the question without printing any carriage returns, it will run about three times faster.
Assuming the screen has 40 columns by 25 rows, it's just big enough to fit the fizzbuzz output neatly into columns without scrolling. This version does just that, shaving off about a second from the previous version.
1 rem
2 rem fizzbuzz in commodore basic
3 rem
10 t = time
15 print chr$(147);
20 fizz = 0 :buzz = 0 :row = 0 :col = 0
30 for i = 1 to 100
40 fizz = fizz + 1 :buzz = buzz + 1
50 row = row + 1
60 poke 211, col
70 if fizz = 3 then print "fizz"; :fizz = 0
80 if buzz = 5 then print "buzz"; :buzz = 0
90 if fizz > 0 and buzz > 0 then print i;
100 if row = 25 then print chr$(19); :col = col + 10 :row = 0
110 if row > 0 then print
120 next
130 secs = (time - t) / 60
140 poke 198, 0 :wait 198, 1
150 print chr$(147) "ran for" secs "seconds"
Notes:
chr$(147) is CLR, a form feed character which clears the screen.
poke 211, col manipulates system memory at the address where the cursor column is stored.
chr$(19) is HOME; it moves the cursor to the top row.
poke 198, 0 :wait 198, 1 waits for the user to press a key before continuing, so the timing message and READY prompt don't overwrite any of the output at the end.
Here's what the output from the final version looks like: | {
"domain": "codereview.stackexchange",
"id": 8504,
"tags": "performance, fizzbuzz, basic-lang"
} |
Conservation laws vs Einsteinian space-time | Question: The way I understand conservation laws - which I am asking you to correct - is that if I observe any slice of the universe perpendicular to the time axis and count up all the mass/energy, momentum, charge, etc., I should obtain the same sum as I would had I observed any other slice of the universe perpendicular to the time axis.
Stop me right there if you have to, but here is where I start scratching my head.
The special theory of relativity informs me that I can't observe a slice of the universe perpendicular to the time axis. The slice I observe is skewed based on my velocity. Yet worse, the general theory of relativity informs me that I can't even observe a flat slice of the universe if I'm accelerating. It's curved based on my acceleration and has all sorts of bumps and such around every massive object. Again, please re-inform me if I'm mistaken.
So let's say I make my observation and sum up some conserved quantity over the entire universe. The slice of space-time that I've integrated over is S1, and my total is Q1. I change my direction of motion but not my speed such that the slice of the universe I observe S2 is an affine transformation of S1 that is not equal to S1. I then integrate the same conserved quantity over S2 and call it Q2.
Does Q1 = Q2? If it doesn't, then how can we ever verify that conservation laws describe our universe, and how did we come up with them in the first place?
If Q1 does = Q2, would this not imply that all conserved quantities are distributed perfectly evenly, since I can tilt my observation to exclude any symmetry-ruining lump of conserved quantity from the total? Since that doesn't appear to be the case, who is the stupid one: Einstein, the guy that came up with the conservation laws, or me?
Answer: You can simplify your Gedankenexperiment even further. In a stationary universe, assume a mass which is so distant from an observer that light from that mass has not had enough time to travel to the observer. It is outside of the observer's horizon.
Now let time pass, and eventually that mass will be within the observer's horizon. You may interpret this as the sudden appearance of mass, apparently violating the conservation of mass/energy. (In reality the universe is expanding and the effect would be just the opposite - you would see mass disappearing.)
It appears you have misconceived the word observable. Not being observable is not the same as being non-existent. "Observable" is all about events, not about things. If an event is not observable, this means you cannot receive signals from that event, not that the thing experiencing the event simply isn't there.
Take for example a black hole, where the "inside" is beyond the horizon of any outside observer. This does not mean that the mass of a black hole is undetectable to an outside observer.
I think you asked an interesting question, because conservation laws are hardly ever discussed in the presence of a horizon. | {
"domain": "physics.stackexchange",
"id": 11256,
"tags": "conservation-laws, relativity"
} |
VSEPR theory and hybridization in determining the shape of a molecule | Question: Our chemistry teacher told us that both VSEPR theory (which says that the electron pairs in the valence shell of an atom arrange themselves in such a way that repulsions among them are minimized, and this arrangement of the electron pairs determines the shape of a particular molecule) and hybridization (which is the intermixing of a particular number of atomic orbitals to form an equal number of new orbitals which have the same shape and energy) can be used to determine the shape of a molecule, and that they are independent of each other. I understand how the shape of a molecule can be predicted using the VSEPR theory, but I cannot understand how hybridization helps in determining, or rather predicting, the shape of a particular molecule. I have searched the net but could not find any useful resource.
How does hybridization help in determining the shape of a molecule?
Answer: The VSEPR theory can indeed be used to predict geometry a priori. You would count the number of substituents around a central atom, counting any lone pairs as substituents also, and then find an arrangement that maximises the distance between all substituents. This theory works well for elements of the second period and compounds in which the central element is in the highest oxidation state, but tends to fail very rapidly for all others, most notably because it cannot account for anything outside of the traditional 2-electron-2-centre bond and because it considers all lone pairs equal even though they are typically not.
The hybridisation theory can only be used to predict geometry in a very limited number of cases that involve carbon atoms, and it only works because carbon is extremely regular. However, using hybridisation to predict carbon geometry is basically circling back onto VSEPR: you check how many atoms are bound to a certain carbon and thereby deduce whether this carbon atom will be $\mathrm{sp,sp^2}$ or $\mathrm{sp^3}$. Carbocations and carbanions can also be accommodated (by either counting the anionic lone pair or ignoring it — as in VSEPR).
In general, however, it is not possible to predict the geometry following a certain hybridisation. Rather, one determines the geometry by an independent mean (e.g. quantum chemical calculations, gas phase diffraction), deduces the bond angles (and bond lengths) from the geometry and then derives a hybridisation. In short: hybridisation follows geometry, not the other way around. | {
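The substituent-counting recipe in the first paragraph amounts to a lookup by steric number (bonded atoms plus lone pairs). The Python sketch below is my own illustration of that recipe, not part of the answer, and covers only the idealised textbook cases:

```python
# Idealised VSEPR electron-domain geometries, keyed by steric number.
ELECTRON_GEOMETRY = {
    2: "linear",
    3: "trigonal planar",
    4: "tetrahedral",
    5: "trigonal bipyramidal",
    6: "octahedral",
}

def electron_geometry(bonded_atoms, lone_pairs):
    """Return the idealised electron-domain geometry for a central atom."""
    return ELECTRON_GEOMETRY[bonded_atoms + lone_pairs]

print(electron_geometry(4, 0))  # CH4: tetrahedral
print(electron_geometry(2, 2))  # H2O: tetrahedral domains, hence a bent shape
```

As the answer stresses, this lookup only captures the simplest cases; it treats all lone pairs as equal and cannot handle bonding beyond the 2-electron-2-centre picture.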
"domain": "chemistry.stackexchange",
"id": 8863,
"tags": "hybridization, vsepr-theory"
} |
How to treat an exercise about the rotational acceleration during a throw? | Question: Because I am studying on my own, I don't have anyone to talk to about this when I don't understand, and I was wondering if someone could help me with a concept in rotational kinematics:
At the start of your throw of a $2.7\:\mathrm{kg}$ bowling ball, your arm is straight behind you and horizontal. Determine the rotational acceleration of your arm if the muscle is replaced. Your arm is $0.64\:\mathrm{m}$ long, has a rotational inertia of $0.48\:\mathrm{kg\:m^2}$, and has a mass of $3.5\:\mathrm{kg}$ with its center of mass $0.28\:\mathrm{m}$ from your shoulder joint.
I'm not interested in the answer, but I am interested in learning how I should treat the arm-bowling ball system. Do I treat the arm as its own rotating object with its own moment of inertia and the bowling ball as its own object with the rotational inertia of a hoop with rotational axis through its center ($I=MR^2$)? Or do I simply add the torques of each and treat this value as the arm's total torque?
Answer: After further research, I found that the moment of inertia of a system consisting of multiple objects, like the arm-bowling ball system in the problem, can be found by simply adding the moments of inertia of each object [1], though use of the parallel axis theorem may be necessary if an object is rotating around an axis parallel to its typical axis of rotation [2].
The moment of inertia of the arm is given, and the moment of inertia of the bowling ball can be modeled as the moment of inertia of a point mass, given by the equation: $I=MR^2$. Since the moment of inertia of a composite object is the sum of the moments of inertia of its parts, the rotational inertia of the arm-bowling ball system is:
$$I_{system} = 0.48\ \mathrm{kg\ m^2} + (2.7\ \mathrm{kg}) \cdot (0.64\ \mathrm{m})^2 = 1.6\ \mathrm{kg\ m^2}$$
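As a quick numerical check of that sum, here is a minimal Python sketch using the values from the problem statement:

```python
# Composite moment of inertia: arm (given) + bowling ball (point-mass model).
I_arm = 0.48        # kg m^2, given in the problem
m_ball = 2.7        # kg
r = 0.64            # m, the ball sits at the end of the arm

I_ball = m_ball * r**2       # I = M R^2 for a point mass
I_system = I_arm + I_ball
print(round(I_system, 2))    # 1.59, i.e. 1.6 kg m^2 to two significant figures
```

Given the net torque about the shoulder, the rotational acceleration would then follow from $\alpha = \tau / I_{system}$.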
"domain": "physics.stackexchange",
"id": 21008,
"tags": "homework-and-exercises, acceleration, rotational-kinematics"
} |
Gravity, matter vs antimatter | Question: I have a simple question regarding matter-antimatter gravity interaction.
Consider the following thought experiment:
If we imagine a mass $m$ and an antimass $m^-$, revolving around a large mass $M$
the potential energy of mass $m$ should be:
$$ U_1=-\frac{GmM}{R} $$
and the potential energy of mass $m^-$ should be:
$$ U_2=-\frac{GmM}{R} $$
or:
$$ U_2=\frac{GmM}{R} $$
depending on the sign of the gravity interaction between matter and antimatter.
If the two particles annihilate to energy, then the gravitational field of $M$ will interact with the emitted photons and will change their frequency.
But, as the interaction between gravity and the photons has nothing to do with the question of the gravity between matter and antimatter, can't we simply use the interaction between gravity and photons, and the energy conservation to establish the nature of the gravity interaction between matter and antimatter?
Answer: This is a perfectly good argument and one of the reasons that all the physicists I know believe that antimatter behaves just like matter in a gravitational field.
It is important to distinguish between antimatter, which is well understood from countless collider experiments, and negative matter (also known as exotic matter), which has never been observed. Antimatter does not have a negative mass. Indeed antimatter is just perfectly ordinary matter - we think it special only because we are made from matter and therefore biased. Negative/exotic matter is very different. If it existed it would cause all sorts of problems with conservation of energy and the stability of the universe. | {
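The questioner's energy-bookkeeping argument can be made concrete with a toy calculation. The Python sketch below uses made-up numbers and assumes the standard weak-field photon redshift fraction $GM/(Rc^2)$; it is an illustration of the argument, not a rigorous general-relativistic computation:

```python
# Weak-field energy bookkeeping for matter-antimatter annihilation near mass M.
G = 6.674e-11    # m^3 kg^-1 s^-2
c = 2.998e8      # m/s
M = 1.989e30     # kg (a solar-mass body; numbers are illustrative)
R = 1.0e9        # m, distance of the annihilating pair from M
m = 1.0          # kg of matter annihilating with 1 kg of antimatter

phi = G * M / (R * c**2)              # dimensionless potential depth
E_photons = 2 * m * c**2 * (1 - phi)  # photon energy after climbing out to infinity

# Case A: antimatter attracts normally -> each particle starts at U = -GmM/R.
E_initial_A = 2 * m * c**2 - 2 * G * m * M / R
# Case B: antimatter is repelled -> the two potential energies cancel beforehand.
E_initial_B = 2 * m * c**2

print(E_photons - E_initial_A)   # ~0: the books balance
print(E_photons - E_initial_B)   # large and negative: energy has gone missing
```

Only case A balances the books, which is the point of the question: gravitational redshift of the annihilation photons plus energy conservation already constrains how antimatter must gravitate.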
"domain": "physics.stackexchange",
"id": 65415,
"tags": "gravity, mass, potential-energy, antimatter"
} |
How big does an atom need to be to have dispersion forces be greater than other intermolecular forces? | Question: I know as an atom gets bigger the dispersion forces grow with it. But how big does an atom, e.g. methanoic acid, need to be to have dispersion forces that outrank dipole-dipole forces?
Answer: I think you mean molecules, not atoms. Regardless, let's say you meant molecules, and that by "intermolecular" you're really just focusing on dipole-dipole forces; the answer is that you're comparing apples to oranges. You don't look at one molecule interacting with another, but rather at a whole bunch of molecules interacting simultaneously. At some moment, electrons gravitate toward one atom in a molecule while a nearby molecule is polarized, and the relative distances can cause attractions to break and realign with another molecule. But I think what you mean is: what kind of atoms in a molecule would be impacted by dispersion forces the most? That would be large molecules and also large atoms, because a large atom like iodine can have a dipole induced by atoms it's not bonded to much more easily than a tiny atom can. How big? There really isn't any way to answer that question.
"domain": "chemistry.stackexchange",
"id": 13364,
"tags": "organic-chemistry, intermolecular-forces"
} |
Use a GPU to speed up neural net training in R | Question: I'm currently training a neural net model in R and am wanting to use a GPU to speed up this process. I've looked into this but it appears that this is unavailable to Mac users as Apple no longer uses NVIDIA GPUs.
Can anyone tell me if this is the case, and if not how I can go about utilizing a GPU?
Answer: If you're able to convert the code into Python, then you could use the Google Colab environment or Kaggle kernels. These online platforms provide free GPUs that you can utilize.
Kaggle kernels also support R directly. | {
"domain": "datascience.stackexchange",
"id": 8669,
"tags": "machine-learning, neural-network, r, gpu"
} |
ROS2 Custom Messages Not Found | Question:
While trying to run my python script, I get ModuleNotFoundError: No module named 'lightring_msgs'. I know the message types are being generated; the package files are in the install directory. Why can't python find the module?
My project layout
/lightring_driver
/lightring_driver
<driver files>
package.xml
setup.cfg
setup.py
/lightring_msgs
/msg
CMakeLists.txt
package.xml
setup.cfg
lightring_msgs/CMakeLists.txt
cmake_minimum_required(VERSION 3.5)
project(lightring_msgs)
# Default to C++14
if(NOT CMAKE_CXX_STANDARD)
set(CMAKE_CXX_STANDARD 14)
endif()
if(CMAKE_COMPILER_IS_GNUCXX OR CMAKE_CXX_COMPILER_ID MATCHES "Clang")
add_compile_options(-Wall -Wextra -Wpedantic)
endif()
find_package(ament_cmake REQUIRED)
find_package(builtin_interfaces REQUIRED)
find_package(rosidl_default_generators REQUIRED)
rosidl_generate_interfaces(lightring_msgs
"msg/LightringCommand.msg"
DEPENDENCIES builtin_interfaces
)
ament_package()
lightring_msgs/package.xml
<?xml version="1.0"?>
<?xml-model href="http://download.ros.org/schema/package_format2.xsd" schematypens="http://www.w3.org/2001/XMLSchema"?>
<package format="3">
<name>lightring_msgs</name>
<version>0.0.0</version>
<description>TODO: Package description</description>
<maintainer email="jacob@todo.todo">jacob</maintainer>
<license>TODO: License declaration</license>
<depend>builtin_interfaces</depend>
<buildtool_depend>ament_cmake</buildtool_depend>
<build_depend>rosidl_default_generators</build_depend>
<exec_depend>rosidl_default_runtime</exec_depend>
<member_of_group>rosidl_interface_packages</member_of_group>
<export>
<build_type>ament_cmake</build_type>
</export>
</package>
Originally posted by beck on ROS Answers with karma: 121 on 2018-10-23
Post score: 2
Original comments
Comment by William on 2018-10-25:
How are you building these packages? did you source the setup.bash/setup.bat file after building? Is the location that you installed to on your PYTHONPATH as a result?
Comment by beck on 2018-10-25:
Looks like not sourcing setup.sh (in my case) was the issue. Thank you!
Answer:
I forgot to source install/setup.sh after building. Thanks @William!
Originally posted by beck with karma: 121 on 2018-10-25
This answer was ACCEPTED on the original site
Post score: 1 | {
"domain": "robotics.stackexchange",
"id": 31957,
"tags": "ros2, colcon, ros-bouncy, rclpy, messages"
} |