anchor | positive | source |
|---|---|---|
Running / debugging ROS2 Python node with PyCharm on Windows | Question: I am trying to run (and debug) a ROS2 node made in Python with the PyCharm IDE.
The results won't be exactly the same as ros2 run ..., but it seems directly running the relevant script is at least close: python src/my_project/my_node/node.py
So in PyCharm I add the Python 3.8 interpreter I use for ROS2 and open the project. So far so good: by adding C:\dev\ros2_humble\Lib\site-packages to the interpreter path I get correct code completion etc.
However, when I try to run my file I am prompted with this error:
Traceback (most recent call last):
  File "C:\Users\name\project\node.py", line 12, in <module>
    import rclpy
  File "C:\dev\ros2_humble\Lib\site-packages\rclpy\__init__.py", line 49, in <module>
    from rclpy.signals import install_signal_handlers
  File "C:\dev\ros2_humble\Lib\site-packages\rclpy\signals.py", line 15, in <module>
    from rclpy.exceptions import InvalidHandle
  File "C:\dev\ros2_humble\Lib\site-packages\rclpy\exceptions.py", line 15, in <module>
    from rclpy.impl.implementation_singleton import rclpy_implementation as _rclpy
  File "C:\dev\ros2_humble\Lib\site-packages\rclpy\impl\implementation_singleton.py", line 32, in <module>
    rclpy_implementation = import_c_library('._rclpy_pybind11', package)
  File "C:\dev\ros2_humble\Lib\site-packages\rpyutils\import_c_library.py", line 39, in import_c_library
    return importlib.import_module(name, package=package)
  File "C:\Program Files\Python38\lib\importlib\__init__.py", line 127, in import_module
    return _bootstrap._gcd_import(name[level:], package, level)
ImportError: DLL load failed while importing _rclpy_pybind11:
The specified module could not be found.
The C extension 'C:\dev\ros2_humble\Lib\site-packages\rclpy\_rclpy_pybind11.cp38-win_amd64.pyd' failed to be imported while being present on the system.
Please refer to 'https://docs.ros.org/en/{distro}/Guides/Installation-Troubleshooting.html#import-failing-even-with-library-present-on-the-system' for possible solutions
I noticed I get the exact same error when running the script from a fresh (unsourced) terminal:
C:\dev\ros2_humble\.venv\Scripts\python.exe C:\Users\name\project\node.py
How could I circumvent this in PyCharm?
Answer: Partial answer so far. I was experimenting with running the script from a regular terminal.
Running:
C:\dev\ros2_humble\.venv\Scripts\python.exe C:\Users\name\project\node.py
Gives the error: (which is not unexpected)
ModuleNotFoundError: No module named 'rclpy'
I can avoid this by setting PYTHONPATH:
$Env:PYTHONPATH="C:\dev\ros2_humble\Lib\site-packages"; C:\dev\ros2_humble\.venv\Scripts\python.exe C:\Users\name\project\node.py
Now I get the same error as with PyCharm:
ImportError: DLL load failed while importing _rclpy_pybind11: The specified module could not be found.
And I found I can fix this by expanding PATH with the bin/ directory of ROS2:
$Env:PYTHONPATH="C:\dev\ros2_humble\Lib\site-packages"; $Env:PATH = "C:\dev\ros2_humble\bin;" + $Env:PATH; C:\dev\ros2_humble\.venv\Scripts\python.exe C:\Users\name\project\node.py
Now my node runs!
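For reference, the same environment can also be built programmatically before launching the node (a sketch assuming the install paths from this question; adjust them for your machine):

```python
import os

ROS2_ROOT = r"C:\dev\ros2_humble"  # install path from the question


def ros2_env(base_env):
    """Build the environment rclpy's C extension needs to load on Windows."""
    env = dict(base_env)
    env["PYTHONPATH"] = os.path.join(ROS2_ROOT, "Lib", "site-packages")
    # PATH is given in full, with ROS 2's bin/ prepended so the DLLs that
    # _rclpy_pybind11 depends on can be found.
    env["PATH"] = os.path.join(ROS2_ROOT, "bin") + os.pathsep + base_env.get("PATH", "")
    return env


# e.g.: subprocess.run([sys.executable, "node.py"], env=ros2_env(os.environ))
```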
It seems you can apply the same approach in PyCharm: edit the run configuration and, under "Environment", change "Environment Variables:" to override the PATH variable. However, when doing this, the error is not fixed...
A quick test with import os; os.environ["Path"] at the start of my script shows that PATH is not modified at all.
I'm not sure how to continue here.
EDIT: I got it, I need to type "path" under environment variables (no capitals) and I need to specify it in full (i.e. I cannot extend it), but now it works! | {
"domain": "robotics.stackexchange",
"id": 38770,
"tags": "ros2, python, ide"
} |
What does it mean to have tokens at a state in a balancing network? | Question: I was assigned as homework:
Suppose we have a width-w balancing network of depth $d$ in a quiescent
state $s$ called $B$. Let $n = 2^d$. Prove that if n tokens enter the network
on the same wire, pass through the network, and exit, then $B$ will have
the same state after the tokens exit as it did before they entered.
However, I do not understand what the question is asking (please don't actually do the problem).
I have followed chapter 12 of The Art of Multiprocessor Programming, and it's still unclear what the question is asking. Let me give you my thoughts (and confusions):
What does it mean by $s$ and $B$? According to the textbook, a balancing network is quiescent if every token that arrived on an input wire has emerged on an output wire (which makes sense because we only care when the tokens pass the network, not their order). Does a quiescent state refer to which cables the tokens have entered? Or since the order doesn't matter it just means that the same number of tokens that entered left the network?
What do they refer to as "a state"? Does it mean where the tokens are located and which wires they left, or, since we only care about quiescent states, that the total number of tokens at the beginning and the end is the same?
Answer: It seems that there is a typo – $s$ and $B$ represent the same thing. It looks like the original $s$ was changed to $B$, but the author of the question forgot to delete $s$.
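As an illustration of what a single balancer's state is (my own sketch, not part of the homework): a balancer is just a toggle that routes each arriving token alternately to one of its two output wires.

```python
class Balancer:
    """A 2x2 balancer: its state is the direction it currently points."""

    def __init__(self):
        self.up = True  # the state: which output the next token takes

    def traverse(self):
        out = 0 if self.up else 1
        self.up = not self.up  # every passing token flips the state
        return out


b = Balancer()
initial_state = b.up
outs = [b.traverse() for _ in range(2)]  # send two tokens through
# an even number of tokens leaves the balancer's state unchanged
assert b.up == initial_state
```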
The state of the network is the state of all balancers – which way they point. | {
"domain": "cs.stackexchange",
"id": 5609,
"tags": "algorithms, parallel-computing"
} |
Can the dew point temperature be more than drybulb temperature? | Question: I am extracting data from WRF model output for a specific area. The area I am extracting data for is 5000m above mean sea level. When plotting on a graph, I found that the dew point temperature is higher than the dry bulb temperature.
Is it possible that the dew point can go higher than the dry bulb? If yes, what is the inference of this phenomenon? Or can I simply assume that WRF could not calculate temperature values properly for this hill station?
Answer: Dew point gives an indication of the moisture content of air; it is the temperature to which air must be cooled to become saturated with water vapour.
The following graph shows the relations of dew point to air temperature (dry bulb) for various levels of humidity.
For humidity less than 100 percent, the air temperature is always higher than the dew point temperature and for a humidity of 100 percent air temperature equals the dew point temperature.
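This can be checked numerically with the Magnus approximation for dew point (an assumed formula used here for illustration; it is not necessarily the one behind the graph):

```python
import math


def dew_point_c(temp_c, rh_percent):
    """Magnus approximation: dew point in Celsius from temperature and RH."""
    a, b = 17.625, 243.04
    alpha = math.log(rh_percent / 100.0) + a * temp_c / (b + temp_c)
    return b * alpha / (a - alpha)


# Below saturation the dew point stays below the air temperature:
for rh in (30, 60, 90):
    assert dew_point_c(25.0, rh) < 25.0
# At 100 percent humidity the two coincide:
assert abs(dew_point_c(25.0, 100.0) - 25.0) < 1e-6
```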
The dew point cannot be higher than the dry bulb temperature; it can only be lower than or equal to it. | {
"domain": "earthscience.stackexchange",
"id": 782,
"tags": "meteorology, wrf"
} |
Why is the reaction between two hanging spheres by two strings the sine of the tension of the strings? | Question: Two smooth uniform spheres of radius 4cm and mass 5kg are suspended from the same point A by light inextensible strings of length 8cm attached to their surfaces. The spheres hang in equilibrium, touching each other. What is the reaction between them?
From how I understand this, the reaction force should be indeterminable.
Resolving horizontally:
$T\sin\theta + R$ (left hand side) $= T\sin\theta + R$ (right hand side)
So why is the reaction force then specifically the sine of the tension in one of the strings?
If the two spheres were both on the ground and touching each other, there would still be a reaction between them, so I can't understand why the tension in the string is a cause of the reaction force?
Answer: The net force on each sphere is zero because each is in equilibrium. You seem to be satisfied with the balance of vertical forces between tension in the string and the weight of the ball: $T\cos\theta=mg.$
Horizontally the forces on each ball are the horizontal component of tension and the normal reaction. These are equal and opposite: $T\sin\theta=R$.
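Plugging in this problem's numbers (the geometry used below — each centre hanging $8+4=12$ cm from A and offset one radius horizontally — is my own reading of the setup, not something stated in this answer):

```python
import math

m, g = 5.0, 9.8      # mass (kg) and gravitational acceleration (m/s^2)
r, L = 0.04, 0.08    # sphere radius and string length (m)

# String attaches to the surface, so the centre hangs L + r from A;
# the spheres touch, so each centre is offset r from the vertical through A.
sin_t = r / (L + r)
cos_t = math.sqrt(1.0 - sin_t ** 2)

T = m * g / cos_t    # vertical balance:   T cos(theta) = m g
R = T * sin_t        # horizontal balance: T sin(theta) = R  (i.e. R = m g tan(theta))
```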
You seem to be trying to balance horizontal forces on both balls at the same time. Then you have external forces $T\sin\theta$ to the right and $T\sin\theta$ to the left. Unless the tensions $T$ are different or $\theta$ is different, that does not tell us anything new.
The normal reactions $R$ are internal forces. Together they always cancel out because they are always equal and opposite pairs. To find normal reaction we have to consider the forces on each ball individually. The normal reaction from ball A is then an external force acting on ball B. | {
"domain": "physics.stackexchange",
"id": 39830,
"tags": "homework-and-exercises, newtonian-mechanics, forces, rigid-body-dynamics, statics"
} |
What will happen if all the population all over the world jumped together at the same time in sea? | Question: What would happen if a force of such huge magnitude were applied to the Earth's sea water?
The force applied would be (average mass of each person) × 7 billion × 9.8.
Answer: The force of humans on the earth is already there before they jump, because people are standing on the earth. With their weight distributed over the entire earth, their jump (which might briefly increase the force by 2-3x) will have no effect. Since you said they would be jumping in the sea, there will be a very small increase in the sea level.
Mass of all the people on earth is approximately $5\times 10^{11}~\text{kg}$; surface area of all the oceans is about $3.5\times 10^{14}~\text{m}^2$, so all of them jumping into the sea at once would raise the average level of the water by about $1.4 \times 10^{-6} ~\text{m}$ or a little over a micron.
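The arithmetic behind these numbers, with an assumed average mass of 70 kg per person:

```python
people = 7e9
avg_mass_kg = 70.0                   # assumed average mass per person
total_mass = people * avg_mass_kg    # ~5e11 kg, as above

ocean_area_m2 = 3.5e14
rho_water = 1000.0                   # kg/m^3 (fresh water; seawater is ~3% denser)

# Added water depth = displaced volume spread over the ocean surface
rise_m = total_mass / (rho_water * ocean_area_m2)
# rise_m comes out to about 1.4e-6 m, i.e. a little over a micron
```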
Barely a ripple, on the scale of the seas. | {
"domain": "physics.stackexchange",
"id": 24406,
"tags": "newtonian-mechanics, conservation-laws, earth"
} |
robot_localization tf transform jitters when stopping motion | Question:
Hi there,
We're trying to use r_l on an outdoor ground vehicle with an IMU affixed to the top, and two RTK GPS receivers fore and aft. The IMU provides roll and pitch at 10 Hz (as imu0), and the GPS provides relative position (meters from base station) and yaw, and optionally pitch, at 10 Hz (as pose0). We don't have any wheel encoders, so we don't have a typical odometry source. As a result, we simply use a static transform from map to odom, and we are using r_l to generate a transform from odom to base_link.
r_l seems to provide a very accurate transform most of the time (judged by visual inspection of rviz), but when the vehicle stops the estimated position of base_link jitters (or oscillates) for 1-2 seconds by 0.1-0.5 meters primarily along the vehicle's X axis, then stops at the correct position. However, if we remove imu0 as an input into r_l, the transform stops smoothly and quickly (as does the physical vehicle).
This jitter occurs even when the following are true:
The IMU and GPS do not provide competing measurements for the same axes of motion (i.e., we remove pitch from the GPS input)
The covariance matrix diagonal on the IMU is temporarily set to 1e12 (for debug purposes), which we would imagine would cause the r_l filters to more or less ignore the IMU input
The IMU data are all temporarily set to 0 (for debug purposes), and only an irrelevant axis (e.g., Z'') is configured to be active
An example GPS message and IMU message are below, as is the YAML configuration for r_l.
Any thoughts would be much appreciated! Thank you!
imu0:
---
header:
  seq: 4
  stamp:
    secs: 1487210998
    nsecs: 225980997
  frame_id: base_link
orientation:
  x: 0.0
  y: 0.0
  z: 0.0
  w: 1.0
orientation_covariance: [1000000000000.0, 0.0, 0.0, 0.0, 1000000000000.0, 0.0, 0.0, 0.0, 1000000000000.0]
angular_velocity:
  x: 0.0
  y: 0.0
  z: 0.0
angular_velocity_covariance: [1000000000000.0, 0.0, 0.0, 0.0, 1000000000000.0, 0.0, 0.0, 0.0, 1000000000000.0]
linear_acceleration:
  x: 0.0
  y: 0.0
  z: 0.0
linear_acceleration_covariance: [1000000000000.0, 0.0, 0.0, 0.0, 1000000000000.0, 0.0, 0.0, 0.0, 1000000000000.0]
---
pose0:
---
header:
  seq: 2395
  stamp:
    secs: 1487211196
    nsecs: 500669002
  frame_id: odom
pose:
  pose:
    position:
      x: -3.85235159044
      y: -5.17830684957
      z: 0.00309827089337
    orientation:
      x: 0.00792342224031
      y: -0.0124397107908
      z: 0.53716653655
      w: 0.843347250536
  covariance: [0.01, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.01, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.04, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 1e-12, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.04, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.01]
---
r_l YAML configuration:
robot_localization:
  map_frame: /map
  odom_frame: /odom
  base_link_frame: /base_link
  world_frame: /odom
  frequency: 25
  sensor_timeout: 0.11
  pose0: /gps_receiver/base_link
  pose0_config: [True, True, True,
                 False, False, True,
                 False, False, False,
                 False, False, False,
                 False, False, False]
  imu0: /effector/imu/cab
  imu0_config: [False, False, False,
                False, False, False,
                False, False, False,
                False, False, False,
                False, False, True]
Originally posted by roboticist17 on ROS Answers with karma: 11 on 2017-02-15
Post score: 1
Original comments
Comment by Tom Moore on 2017-02-21:
Both of your GPS sensors output in the odom frame, correct? How are you accounting for their offsets from the vehicle center?
Comment by roboticist17 on 2017-02-21:
Thanks Tom! I'm actually just computing an average of their values, weighted to account for the aft receiver being slightly closer to center. I'm then outputting that as a PoseWithCovarianceStamped in the odom frame and using it with r_l.
Answer:
I think I'd have to see a bag file and full configuration. Re: this point
The IMU data are all temporarily set
to 0 (for debug purposes), and only an
irrelevant axis (e.g., Z'') is
configured to be active
How is your IMU mounted, and what is the base_link->imu transform? I am also very suspicious of the fact that your IMU Z acceleration has (a) a value of 0 and (b) a massive covariance in the sample message. Covariance values can have subtle effects, even in other axes.
Originally posted by Tom Moore with karma: 13689 on 2017-03-01
This answer was ACCEPTED on the original site
Post score: 1
Original comments
Comment by roboticist17 on 2017-03-02:
OK, since we have good pose via RTK GPS I may try to hack something together on my own for now, then follow up if it seems like an EKF is necessary. Re IMU accel and covariance, those were just testing values--same issue with normal/correct values. I was just trying to isolate the issue. Thanks! | {
"domain": "robotics.stackexchange",
"id": 27023,
"tags": "navigation, robot-localization"
} |
Limiting bandwidth by adjusting sample rate | Question: I have an SDR that is capable of 2.4 MS/s. I am using it to measure the power of signals. Right now I am setting the sample rate to 2M, collecting a burst of samples, and calculating the average power using
10 * log10(sum(abs(sample)**2 for sample in samples) / len(samples))
This gives me the average power over 2MHz, I think. I'd like to limit that. One approach is to use a filter. But it would be simpler just to sample less frequently (sample at my target bandwidth). But will the output measurements be as accurate?
Answer: Depends on your signal. Typically we try to sample at the Nyquist rate, which is equal to two times the maximum frequency of your signal. If you sample less often than this, out-of-band energy will alias into your measurement and your answer will not be correct. | {
"domain": "dsp.stackexchange",
"id": 5801,
"tags": "signal-power"
} |
Confusion between samples per second and samples per symbol | Question: My understanding of sampling frequency is that when we have an analog signal $x(t)$ we convert it to a digital signal. Assume the sampling frequency is denoted by $f_s$; then the sampling time is $T_s = 1/f_s$. In my understanding, this denotes the number of samples per second.
In MATLAB programs, I have come across generation of a baseband signal. The authors upsampled the baseband signal, and the sampling rate was stated to be 3 samples per symbol.
My question is: what does samples per symbol mean? Assuming we have QAM symbols, are we repeating each symbol three times?
Thanks
Answer: A digital communications signal such as QAM can be generally written as $$s(t) = \sum_{n=0}^{N}a_n p(t-nT),$$ where $a_n$ are (possibly complex) amplitudes taken from a finite alphabet, and the symbol rate is $1/T$. The pulses $p(t-nT)$ are often called "symbols". Matlab commands that ask you to specify "samples per symbol" are asking how many samples you want to use to define $p(t)$.
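As a quick illustration (my own sketch, not from the question's MATLAB code), using a rectangular $p(t)$ for simplicity — with that particular pulse, 3 samples per symbol really does reduce to repeating each symbol three times; a shaped pulse would not:

```python
import numpy as np

sps = 3                                  # samples per symbol: samples of p(t) per T
rng = np.random.default_rng(0)
a = rng.choice([-1, 1], 8) + 1j * rng.choice([-1, 1], 8)   # QPSK amplitudes a_n

up = np.zeros(len(a) * sps, dtype=complex)
up[::sps] = a                            # impulses a_n placed at t = nT
p = np.ones(sps)                         # rectangular pulse, sampled at sps points
s = np.convolve(up, p)                   # s(t) = sum_n a_n p(t - nT)

# With a rectangular pulse, each symbol simply appears sps times in a row:
assert np.allclose(s[:sps], a[0])
```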
Of course, since you know the pulse duration, specifying the samples per symbol is an indirect way of specifying the sampling rate. | {
"domain": "dsp.stackexchange",
"id": 3084,
"tags": "digital-communications"
} |
NP-hardness of existence of spanning tree with given maximal degree | Question: I am trying to solve the following exercise:
Let $G = (V,E)$ be a graph. Show that the following two problems are NP-hard:
1. $G$ has a spanning tree where every node has at most $k$ neighbors, and $k$ is part of the input.
2. $G$ has a spanning tree where every node has at most 5 neighbors.
It's written that 1 is supposed to be a hint for 2. You can of course choose to ignore it and solve 2 directly.
I tried to reduce $k$-MST to Steiner tree, but I don't know if this is the right way. I read that one can use a Hamiltonian circuit, but I don't understand how. Can anybody help me?
Answer: Hint for part 1: What does it mean when $G$ has a spanning tree where every node has at most 2 neighbors?
Hint for part 2: Suppose you know that it is NP-hard to show that $G$ has a spanning tree where every node has at most 2 neighbors. Can you reduce this problem of at most 2 neighbors to the current problem of at most 5 neighbors?
This post is made out of user53923's comment as I have verified it is useful. | {
"domain": "cs.stackexchange",
"id": 11917,
"tags": "graphs, np-hard, minimum-spanning-tree"
} |
Where does this equation in the electrophysiology literature form come from? | Question: In my studies I keep coming across the form of an equation that is used in many different mathematical models for voltage gated ion channels. The most general form I have found is in the 1977 paper Reconstruction of the action potential of ventricular myocardial fibres by Beeler and Reuter. Each ion channel has a gating variable $y$ whose differential equation depends on functions $\alpha$ and $\beta$, both of which take on the general form:
$$\{\alpha,\beta\}=\frac{C_1\exp[C_2(V_m+C_3)]+C_4(V_m+C_5)}{\exp[C_6(V_m+C_3)]+C_7}$$
where $V_m$ is the membrane potential and each constant is determined to have different values depending on the channel, system, etc.
My question is where does this form come from? In all of the papers I have come across they seem to just use this form with numbers chosen to fit their data without explanation as to why this form is actually used. The Beeler and Reuter paper cites the 1952 paper by Hodgkin and Huxley, but it seems like H&H use simpler forms of this equation primarily motivated by just fitting the data rather than illuminating any underlying mechanisms. Therefore, I do not understand where this form in the B&R paper comes from that so many papers later on use as well.
Does this form come from assumptions about the workings of the ion channels, if so, what are these assumptions, and how do they give rise to this form? Or is this just a general form found to fit many data sets fairly well, and if so why was this chosen over other functions that could probably do the same thing?
Answer: They chose that equation mainly because of numerical simplicity. It can fit the rates $\alpha$ and $\beta$ for all the ion channel particles in the model.
The theoretical foundations for the equation, as in the H&H model, are loose. It resembles the equation derived for the movement of a charged particle in a constant field, but it is not an equation derived from the biophysics of ion channels. We can't blame them, considering that at that time (1952) they didn't even know for sure that ion channels were proteins.
In page 183:
In order to simplify the reporting of the actual values used in this model, we have expressed the alphas and betas entirely in terms of a generalized function, with eight defining coefficients. Table 1C gives the equation, and the defining coefficients for all rate constants.
For the generalized function note that in every case at least some of the coefficients are zero: this generalized formula is just a way to collect both possible forms for the H&H rate constants.
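For concreteness, the generalized function transcribes directly into code; the coefficient values used below are arbitrary placeholders for illustration, not the Table 1C values:

```python
import math


def rate(Vm, C1, C2, C3, C4, C5, C6, C7):
    """Evaluate the generalized B&R-style rate function at membrane potential Vm."""
    num = C1 * math.exp(C2 * (Vm + C3)) + C4 * (Vm + C5)
    den = math.exp(C6 * (Vm + C3)) + C7
    return num / den


# With C4 = C6 = 0 the denominator is constant, so the rate collapses to a
# simple exponential in Vm -- one of the H&H-style special cases.
# (Placeholder coefficients, chosen only to make the arithmetic obvious.)
assert rate(-40.0, 0.5, 0.1, 40.0, 0.0, 0.0, 0.0, 1.0) == 0.25
```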
I have read papers using the Eyring equation for a theoretical formulation of transition rates, based on thermodynamics. | {
"domain": "biology.stackexchange",
"id": 9836,
"tags": "biophysics, theoretical-biology, electrophysiology, action-potential"
} |
IUPAC Names for trihalomethanes | Question: The drinking water section of the Massachusetts DEP mandates the electronic submission of analytical results. The compound Chlorodibromomethane must be entered when reporting just trihalomethanes and must be entered as Dibromochloromethane when reporting a full list of volatile organic compounds (EPA Method 525).
The IUPAC system says that the substituents should be alphabetized. I would believe the substituents to be alphabetized are bromine and chlorine and that the "Di" prefix is not considered in the alphabetization logic.
Can anyone cite a specific reference to the proper naming convention?
Answer: Here's the relevant rule.
P-14.5.1 Simple prefixes (i.e., those describing atoms and unsubstituted substituents) are arranged alphabetically; multiplicative prefixes, if necessary, are then inserted and do not alter the alphabetical order already established.
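A toy sketch of this alphabetization logic (hypothetical code; the prefix list is deliberately incomplete and a real implementation would need more care):

```python
MULTIPLYING_PREFIXES = ("di", "tri", "tetra", "penta")  # not exhaustive


def alphabetization_key(substituent):
    """Per P-14.5.1, multiplicative prefixes are ignored when alphabetizing."""
    for p in MULTIPLYING_PREFIXES:
        if substituent.startswith(p):
            return substituent[len(p):]
    return substituent


parts = ["dibromo", "chloro"]
name = "".join(sorted(parts, key=alphabetization_key)) + "methane"
# 'dibromo' is alphabetized as 'bromo', so it precedes 'chloro'
assert name == "dibromochloromethane"
```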
You are right: the di-, tri-, etc. prefixes do not matter, and only 'chloro' and 'bromo' are considered. | {
"domain": "chemistry.stackexchange",
"id": 16822,
"tags": "nomenclature, analytical-chemistry"
} |
Omron G5V-2 relay NO pins not working | Question: I could swear that it was working for a while. I got back to my desk, tried it again, and it's no longer working. Could I have fried the NO pins on both sides? This is a DPDT relay. Everything works normally on the NC pins. I have never applied more than 5V. I do hear the relay click when I apply 5V to the coil. But when I measure voltage on the NO pins, I get 0V.
Has anyone else seen this? I have two of these relays and I can't seem to get voltage on the NO pins with either relay.
I should clarify that I'm expecting the same 5V power source to power both the coil and the common pins. If the NC pins work then I don't see why the NO pins shouldn't. In both cases the 5V is shared between the coil and any load attached to the NC/NO pins.
I did try driving the entire circuit off a 9V power supply, but that did not change the results (and that does contradict my earlier statement that I've never applied more than 5V to this relay). My circuit is based on Charles Platt's "Make: Electronics", p. 59.
Here's a pic of the schematic I am following, except that I am using a 5V relay and a 5V power supply (USB port) and I am using piezo buzzers without resistors instead of LEDs.
Answer: Problem solved. It turns out that the pin assignment on my relay is different from that in the book.
Here's the schematic for my actual relay (notice that the pin assignment is CO,NC,NO instead of NC,CO,NO as in the book)
Schematic of my actual relay: | {
"domain": "robotics.stackexchange",
"id": 260,
"tags": "electronics"
} |
How to translate KnowRob actions into actual robot movements? | Question:
After taking some time to acquaint myself with the KnowRob system, I'm now in the process of writing my own modules using its functionalities. As a simple starting point, I just want to write an action recipe telling my robot to get to the middle of the room (x=0.00, y=0.00), then just move to another point in the map (x=3.00, y=3.00).
I'm using the move_base module for navigation, and the robot is simulated in an empty room (no semantic map is needed as the space is empty).
CPL is not something I want to look into right now, as my time is limited and I think that system would add a layer of complexity that I don't need right now. I know that the KnowRob team has done experiments using action recipes in the past without using CPL and the cogito system.
In order to implement a working application, I decided to extend the KnowRob ontology with data about action execution on one hand, and to write a simple python module to query the prolog system to actually move the robot on the other hand.
This is the core of my extension to the ontology:
<owl:Class rdf:about="&move;GoToPoint">
  <rdfs:subClassOf rdf:resource="&knowrob;Translation-LocationChange"/>
  <rdfs:subClassOf>
    <owl:Class>
      <owl:intersectionOf rdf:parseType="Collection">
        <owl:Restriction>
          <owl:onProperty rdf:resource="&move;providedByMotionPrimitive"/>
          <owl:hasValue rdf:resource="&move;move_base"/>
        </owl:Restriction>
        <owl:Restriction>
          <owl:onProperty rdf:resource="&move;destXValue"/>
          <owl:cardinality rdf:datatype="&xsd;decimal">1</owl:cardinality>
        </owl:Restriction>
        <owl:Restriction>
          <owl:onProperty rdf:resource="&move;destYValue"/>
          <owl:cardinality rdf:datatype="&xsd;decimal">1</owl:cardinality>
        </owl:Restriction>
      </owl:intersectionOf>
    </owl:Class>
  </rdfs:subClassOf>
</owl:Class>
I added a MotionPrimitive class, of which move_base is a subclass. Each motion primitive provides a providedByROSAction property: a string containing the name of the appropriate ROS actionlib server. My action recipe is thus simply composed as an intersection of various GoToPoint restrictions.
The robot connects to json_prolog and, on being asked to perform the task, asks for
plan_subevents(move:'MovementTest', SEs)
then queries for the appropriate action primitives and, if it is aware of them, queries for the needed parameters and calls the corresponding servers.
I implemented this approach and it works just fine. I know there are a few minor things that should be fixed (e.g. in the ontology, points in space should be represented as properties of a PointInSpace class), but I'm worried about a major issue that this implementation brings up: maintaining both the ontology and the robot executor might become very difficult as the number of action servers grows. It would obviously also be very error-prone, as developers would have to update both, in two different languages.
Am I proceeding in the right direction with this implementation structure? Am I missing something big? Should I use some other built-in feature I didn't see/notice?
Originally posted by micpalmia on ROS Answers with karma: 58 on 2013-10-27
Post score: 1
Answer:
If you don't want to use the existing CPL system (which probably makes sense for the beginning, since learning CPL and KnowRob at the same time can be a lot), then this sounds like a reasonable way to go. You can have a look at the action recipes created by the editor, which generate a state machine-like structure that could e.g. be translated into SMACH. I've done this for a non-ROS project in Java, and it was fairly easy.
Regarding the number of interfaces that need to be maintained, I would not expect them to be that many. I guess you can get a long way using just move_base, one action for Cartesian arm control and an action for the gripper. You can also define the mapping for super-classes of the ones that you use in the action recipes and inherit these properties.
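The super-class fallback could look something like this on the executor side (a hypothetical sketch, not KnowRob or json_prolog API; the class names are illustrative only):

```python
class ActionDispatcher:
    """Maps ontology motion-primitive classes to action-server callables,
    falling back to a super-class mapping when no direct entry exists."""

    def __init__(self):
        self._handlers = {}

    def register(self, primitive_class, handler):
        self._handlers[primitive_class] = handler

    def dispatch(self, primitive_class, superclasses, **params):
        # try the most specific class first, then its super-classes
        for cls in (primitive_class, *superclasses):
            if cls in self._handlers:
                return self._handlers[cls](**params)
        raise KeyError("no handler for %s" % primitive_class)


d = ActionDispatcher()
d.register("MotionPrimitive", lambda **kw: ("generic", kw))
# 'move_base' has no direct handler, so the super-class mapping is used:
result = d.dispatch("move_base", ["MotionPrimitive"], x=3.0, y=3.0)
assert result == ("generic", {"x": 3.0, "y": 3.0})
```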
Originally posted by moritz with karma: 2673 on 2013-10-29
This answer was ACCEPTED on the original site
Post score: 2 | {
"domain": "robotics.stackexchange",
"id": 15979,
"tags": "knowrob"
} |
Effective Coulomb barrier for deuteron | Question: What is the effective Coulomb barrier for a Deuterium-deuterium fusion reaction?
I am seeing temperatures of about $40 \times 10^7 K$ online, but have no idea how they are getting this.
If we have
$^2H+^2H \rightarrow ^{3}He + ^1n$
and the coulomb barrier is: $U=\frac{ke^2}{r}$ which needs to be overcome for fusion and the strong force to dominate.
Isn't $r$ just $1.3\,(A_1^{1/3} + A_2^{1/3})\,f = 1.3\,(2^{1/3} + 2^{1/3})\,f = 3.2758\,f \quad (f=10^{-15}\ \mathrm{m})$?
Plugging this into the Coulomb equation I get about 476 keV, which is about $552\times10^7\ \mathrm{K}$.
This isn't for an assignment but for my own studies.
Answer: According to your question and the discussion in the comments there seems to be some confusion. Let's first try to answer your original question (Coulomb barrier for a D-D reaction):
The Coulomb barrier is usually defined as the point where the strong force overcomes the repulsive Coulomb force of two positively charged nuclei. A proper estimation is that the nuclei need to barely touch, thus have a distance of
$$R = R_1 + R_2,$$
where $R_1$ and $R_2$ are the radii of the nuclei. As you pointed out, a useful approximation for a nucleus' radius is
$$R=r_0 A^{1/3},$$
with $A$ the mass number and $r_0\approx1.3\cdot10^{-15}\,\mathrm{m}$. The distance at which the strong force starts to set in can thus be written as
$$R\approx r_0\left(A_1^{1/3} + A_2^{1/3}\right).$$
Let's now plug that into the Coulomb potential:
\begin{eqnarray}
V_{Coulomb}&\approx&\frac{1}{4\pi\epsilon_0}\frac{q_1q_2}{r_0\left(A_1^{1/3} + A_2^{1/3}\right)}\\
&=&\frac{e^2}{4\pi\epsilon_0}\frac{Z_1Z_2}{r_0\left(A_1^{1/3} + A_2^{1/3}\right)},
\end{eqnarray}
with $q_1$, $q_2$ the charge of the nuclei and $Z_1$ and $Z_2$ their charge numbers.
Inserting the numbers for a D-D reaction, this results in
$$V_{Coulomb}\approx440\,\mathrm{keV},$$
which corresponds to the value you got.
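For reference, the arithmetic can be checked numerically using the common shortcut $e^2/4\pi\epsilon_0 \approx 1.44\ \mathrm{MeV\,fm}$:

```python
r0 = 1.3                                   # fm
A1 = A2 = 2                                # deuteron mass numbers
Z1 = Z2 = 1                                # charge numbers
K = 1.44                                   # e^2 / (4 pi eps0) in MeV*fm

R = r0 * (A1 ** (1 / 3) + A2 ** (1 / 3))   # ~3.28 fm, nuclei barely touching
V = K * Z1 * Z2 / R                        # Coulomb barrier, ~0.44 MeV
T = V * 1e6 / 8.617e-5                     # as a temperature in kelvin, ~5e9 K
```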
This corresponds to the potential wall we need to overcome to let the two Deuterium nuclei fuse. Luckily, though, there are two effects leading to an on-average lower temperature at which fusion occurs:
Tunneling
Particles in the Maxwellian tail
The fusion rate coefficient or reactivity, $\left<\sigma_{fus}v\right>$, can therefore be calculated as the convolution of a Maxwellian distribution $f_M$ and the fusion cross section $\sigma_{fus}$ (which we know from extensive experiments). So to get the average temperature at which D-D fusion has the highest probability, we need to have a look at the fusion reactions:
D + D $\rightarrow$ T + p
D + D $\rightarrow$ $^3$He + n
They both occur with roughly the same probability. To calculate their actual reactivity, we look up data from experiments and might get something like what is shown in the following plot.
As you can see, the rate coefficient of the D-D reaction is basically increasing with energy (temperature); it asymptotically approaches a very broad maximum at temperatures which are larger by an order of magnitude (sorry for not showing this, but I couldn't find the necessary data). For comparison, the D+T reaction is also shown, and you can see why it is favored in the lab: its reactivity is larger by two orders of magnitude over almost the full range shown here.
To summarize, you don't need to climb all the way up the Coulomb wall to let the particles fuse; a finite probability for fusion exists at much lower temperatures. | {
"domain": "physics.stackexchange",
"id": 40269,
"tags": "homework-and-exercises, nuclear-physics, fusion, quantum-tunneling"
} |
Count keys in a file | Question: These classes are to count keys in a text file. The file has key, value pair like these examples:
John, 12
Sara, 2
Adam, 19
John, 1
Adam, 3
and the main class FileKeyCounter prints output as:
The total for John is 13. The total for Sara is 2. The total for Adam is 22.
It consists of 3 classes:
FileKeyCounter: This class is responsible for reading the given .txt file line by line and controlling the flow.
HashMapHandler: This class is responsible for building a hashmap with the keys (John, Adam, Sara). Since the .txt file may contain a number string too big for an Integer, it performs the addition on the two string values.
LineData: This class is responsible for validating the line, i.e. the key and value pair.
Please review this.
import java.io.BufferedReader;
import java.io.FileReader;
import java.io.IOException;

public class FileKeyCounter {

    private HashMapHandler hash;

    public FileKeyCounter() {
        hash = new HashMapHandler();
    }

    public void countKeys(String fileName) {
        FileReader fileReader = null;
        BufferedReader reader = null;
        try {
            fileReader = new FileReader(fileName);
            reader = new BufferedReader(fileReader);
            String line = reader.readLine();
            while (line != null) {
                LineData lineData = new LineData(line);
                if (lineData.isValidLine()) {
                    hash.buildHash(lineData);
                }
                line = reader.readLine();
            }
            hash.printHash();
        } catch (IOException e) {
            e.printStackTrace();
        } finally {
            close(fileReader, reader);
        }
    }

    private void close(FileReader fileReader, BufferedReader reader) {
        try {
            if (fileReader != null) {
                fileReader.close();
            }
            if (reader != null) {
                reader.close();
            }
        } catch (IOException e) {
            e.printStackTrace();
        }
    }

    public HashMapHandler getHash() {
        return hash;
    }

    public void setHash(HashMapHandler hash) {
        this.hash = hash;
    }
}
public class LineData {
private static final String COMMA = ",";
private String key;
private String value;
private boolean validLine;
public LineData(String line) {
String[] pair = line.split(COMMA);
if (validateLine(pair)) {
key = pair[ArrayEnum.KEY.getIndex()].trim();
value = pair[ArrayEnum.VALUE.getIndex()].replaceFirst("^0*","").trim();
setValidLine(true);
} else {
setValidLine(false);
}
}
/**
* The method return false under below condition
* 1. If the given array is null or the array length is not 2
* 2. If the first element of the array is empty string
* 3. If the second element of the array is not positive number or zero
*
* @param pair
* @return
*/
public boolean validateLine(String[] pair) {
if (pair == null || pair.length != ArrayEnum.SIZE.getIndex()) {
return false;
}
return validateKey(pair[ArrayEnum.KEY.getIndex()].trim())
&& validateValue(pair[ArrayEnum.VALUE.getIndex()].trim());
}
/**
* validate if the key has meaningful value. For example, emptyString ,"",
* would consider faulty input
*
* @param key
* @return true if key has certain value false if key does not have any
* value
*/
private boolean validateKey(String key) {
if (key.length() == 0) {
return false;
} else {
return true;
}
}
private boolean validateValue(String value) {
return value.matches("\\d+") && !value.startsWith("-") && !value.matches("0+");
}
public String getKey() {
return key;
}
public void setKey(String key) {
this.key = key;
}
public String getValue() {
return value;
}
public void setValue(String value) {
this.value = value;
}
public boolean isValidLine() {
return validLine;
}
public void setValidLine(boolean validLine) {
this.validLine = validLine;
}
}
public class HashMapHandler {
Map<String, String> map = new HashMap<String, String>();
public void printHash() {
StringBuffer buffer = new StringBuffer();
for (Map.Entry<String, String> element : map.entrySet()) {
buffer.append("The total for ").append(element.getKey())
.append(" is ").append(element.getValue()).append(". ");
System.out.print(buffer.toString());
buffer.delete(0, buffer.length());
}
}
public void buildHash(LineData data) {
if (map.containsKey(data.getKey())) {
String cnt = addTwoString(map.get(data.getKey()) , data.getValue());
map.put(data.getKey(), cnt);
} else {
map.put(data.getKey(), data.getValue());
}
}
public Map<String, String> getMap() {
return map;
}
public void setMap(Map<String, String> map) {
this.map = map;
}
private String addTwoString(String a, String b){
int[] result = new int[Math.max(a.length(), b.length()) + 1];
char[] aChar = a.toCharArray();
char[] bChar = b.toCharArray();
int overflowDigit = 0;
int i = a.length()-1;
int j = b.length()-1;
int k = result.length -1;
while ( i >= 0 || j >= 0 ){
int sum = 0;
if ( i < 0 ){
sum = overflowDigit + Character.getNumericValue(bChar[j]);
}else if ( j < 0 ){
sum = overflowDigit + Character.getNumericValue(aChar[i]);
}else{
sum = overflowDigit + Character.getNumericValue(aChar[i]) + Character.getNumericValue(bChar[j]);
}
overflowDigit = 0;
if ( sum > 9 ){
overflowDigit++;
result[k] = sum - 10;
}else{
result[k] = sum;
}
i--;
j--;
k--;
}
if ( overflowDigit > 0 ){
result[0] = overflowDigit;
k--;
}
StringBuffer number = new StringBuffer();
for ( int m = k +1 ; m < result.length ; m++){
number.append(result[m]);
}
return number.toString();
}
}
Answer: Use better abstractions
As I start reading the countKeys method,
it's hard to understand the purpose of this code.
From a method named countKeys and taking a file name,
I would expect something that counts keys in a file and returns the number of keys.
But it's a void method and does something else.
Looking at the implementation,
I understand that it parses lines into LineData objects,
but what is this HashMapHandler,
and what does it mean to buildHash and printHash?
My first guess would be that this is something to do with cryptography or making a digest,
but that's not the case.
So HashMapHandler, hash, buildHash, printHash are all poorly named elements that don't help the reader understand the code.
Re-think your abstractions
What is the real purpose of "LineData"? It stores some kind of data. What kind of data? A name and a count. The data comes from a line, but does that define the data? Would the meaning change if it came from a web service? No. So NameCount would be a better name.
What is the real purpose of "HashMapHandler"? Is it really about hash maps? And what does "handle" even mean? The main purpose seems to be merging NameCount objects, adding up the counts. It looks like a counter. So Counter would be a better name, and buildHash would be better named add.
Side note: this section should really have been the first. I had to take a closer look at the code to really understand how to do things better, so I wrote this section after. Nonetheless, initial impressions can be interesting too, so I'm leaving it there above for the record.
File handling
When you close the BufferedReader,
it will close the underlying FileReader too,
so you don't need to close the latter explicitly.
You can simplify the file reading using try-with-resources:
try (BufferedReader reader = new BufferedReader(new FileReader(fileName))) {
String line;
while ((line = reader.readLine()) != null) {
LineData lineData = new LineData(line);
if (lineData.isValidLine()) {
hash.buildHash(lineData);
}
}
hash.printHash();
} catch (IOException e) {
e.printStackTrace();
}
With this version, the close method is not needed anymore,
you can safely delete it.
If you are not on Java 7 yet, you should be, as versions below that are no longer supported.
I also moved the reader.readLine() into the while condition itself,
so that you don't need to write this statement twice.
Java Bean-itis?
There are a lot of setters in this code, but what for?
You don't use most of them,
and you don't need any of them.
Try to write code without setters,
and also make fields final whenever possible.
See the next section for a concrete example.
LineData
This class is really horrible in many ways.
Pointless setters
Constructor calling a setter instead of setting a variable directly
validLine not needed, a getter with return key != null would do
... it would be even better to prevent creating an invalid object in the first place
Constant named COMMA, but what happens if you change its value to something else, for example ; ? The program could work with appropriate data, but it won't make sense to see a variable named "COMMA" in it. Name constants for their purpose, not literally for their value. In this example, SEPARATOR would seem natural.
boolean validateSomething is not great naming. A function that returns boolean should typically be named isValidSomething. A function that validates that some assumptions are true would make sense as either:
named validateSomething, return void, and throw exception if an assumption is false
named getValidSomething or similar, return an object or null if an assumption is false
ArrayEnum seems to be used as a store of independent constants, which is a misuse of enums: independent constants don't belong in an enum. The name ArrayEnum is also very poor, as it doesn't describe its purpose.
Consider this alternative implementation:
class LineData {
private static final String SEPARATOR = ",";
private static final int KEY_INDEX = 0;
private static final int VALUE_INDEX = 1;
private static final int TOKEN_COUNT = 2;
private final String key;
private final String value;
private LineData(String key, String value) {
this.key = key;
this.value = value;
}
public static LineData fromLine(String line) {
String[] tokens = parseLine(line);
if (tokens == null) {
return null;
}
String key = parseKey(tokens[KEY_INDEX]);
String value = parseValue(tokens[VALUE_INDEX]);
if (key != null && value != null) {
return new LineData(key, value);
}
return null;
}
private static String parseKey(String token) {
return !token.isEmpty() ? token : null;
}
private static String parseValue(String token) {
if (token.matches("\\d+") && !token.matches("0+")) {
return token;
}
return null;
}
private static String[] parseLine(String line) {
String[] tokens = line.split(SEPARATOR);
if (tokens.length != TOKEN_COUNT) {
return null;
}
for (int i = 0; i < tokens.length; i++) {
tokens[i] = tokens[i].trim();
}
return tokens;
}
public String getKey() {
return key;
}
public String getValue() {
return value;
}
}
Notes:
LineData cannot be created directly. It can only be created from a line using the factory method LineData.fromLine
The factory method LineData.fromLine tries to parse the line, and either return a valid LineData object, or return null if parsing failed at some point
The methods have a single responsibility and hide their implementation details
fromLine doesn't know what a valid key/value is or how to extract them. It just knows that a line contains tokens, which include a key and a value, and delegates the parsing of all of those to helpers, which return null if something unexpected happens
parseLine splits the line into tokens; if it looks valid so far, it returns the tokens, otherwise null to signal the caller that something went wrong
parseKey checks the token and returns it if it's a valid key
parseValue checks the token, and returns it if it's a valid value
The parameters for parsing, such as the separator, the required number of tokens, the index of the key and value are private constants, as no other code uses these. (If they do, you can move them somewhere else.)
At no point can an invalid LineData exist. If a valid object cannot be created, the factory method returns null, and this is what callers can use to check if parsing was successful
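A side note the review above doesn't mention: if relying on the standard library is acceptable, java.math.BigInteger already implements arbitrary-precision addition, so the whole hand-rolled addTwoString in HashMapHandler could collapse to a one-liner (the class name here is made up for illustration):

```java
import java.math.BigInteger;

public class BigSum {
    // Drop-in replacement for the digit-by-digit addTwoString:
    // BigInteger handles values of any size, so no manual carry logic is needed.
    static String addTwoString(String a, String b) {
        return new BigInteger(a).add(new BigInteger(b)).toString();
    }

    public static void main(String[] args) {
        // A sum that would overflow both int and long:
        System.out.println(addTwoString("99999999999999999999", "1"));
        // prints 100000000000000000000
    }
}
```

This also removes the subtle carry-handling edge cases that the manual version has to get right.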
Poor javadoc
* @param pair
* @return
A pair of what? Return what? A good javadoc should explain these elements. | {
"domain": "codereview.stackexchange",
"id": 13857,
"tags": "java"
} |
Bandpass fundamental and harmonics in Matlab | Question: I have a signal in the time domain whose sample frequency is Fs=25600. I would like to remove the fundamental F=285Hz and all its harmonics (2*F, 3*F, etc.). I tried to use the comb filter in Matlab using this code:
Fs=25600;
N=43;
BW=285;
Apass=200;
[b, a] = iircomb(N, BW/(Fs/2), Apass);
Hd= dfilt.df2(b, a);
x1 = filter(b, a, signal);
Here is the spectrum of the original signal over the frequency interval up to around 400Hz
Here is the result after applying the filter cited above:
I don't get the expected result. Is there a way to accomplish this in Matlab?
Answer: You have a problem with the comb filter design. According to the Matlab documentation, the following line creates the filter you want:
[b,a] = iircomb(round(25600/285), 2*285/25600/35,'peak');
Looking at the frequency response of the designed filter:
Seems to be able to solve your problem... | {
"domain": "dsp.stackexchange",
"id": 5478,
"tags": "matlab, bandpass, comb"
} |
What is the electric field between two plates with a hole each (the holes also parallel)? | Question: Suppose you have two charged parallel plates, and an electron on the negative plate. It would move in the electric field to the positive plate. If, however, there was a hole in each of the plates, positioned so that an electron could move through them, would there still be an electric field in that section, allowing the electron to be accelerated through the holes? Sorry if I didn't explain my question very well.
Answer: Up to what I can understand from your question, I think the field will be halved in the vicinity of the hole, an effect generally seen in conductors. Here is an illustration as a hint -
Ref: NCERT Physics class 12 chapter 1 | {
"domain": "physics.stackexchange",
"id": 93877,
"tags": "electrostatics, electric-fields, electrons, acceleration, capacitance"
} |
Iteration over "zipped" tuples (for_each_in_tuples) | Question: I want to implement for_each_in_tuples, which takes a functor and one or more tuples of the same size and applies this functor to the i'th elements of each tuple for i = 0, ..., size - 1.
Example:
#include <iostream>
#include <tuple>
std::tuple t1(1, 2.2, false);
std::tuple t2(3.3, 'a', 888);
std::cout << std::boolalpha;
for_each_in_tuples(
[](auto a1, auto a2)
{
std::cout << a1 << ' ' << a2 << '\n';
}, t1, t2);
// Outputs:
// 1 3.3
// 2.2 a
// false 888
Implementation:
#include <cstddef>
#include <tuple>
#include <type_traits>
namespace impl
{
template<typename T, typename... Ts>
struct First { using Type = T; };
template<typename... Ts>
using First_t = typename First<Ts...>::Type;
template<auto value, auto... values>
inline constexpr auto all_same = (... && (value == values));
template<class Tuple>
inline constexpr auto tuple_size = std::tuple_size_v<std::remove_reference_t<Tuple>>;
template<std::size_t index = 0, class Function, class... Tuples>
constexpr void for_each_in_tuples(Function func, Tuples&&... tuples)
{
constexpr auto size = tuple_size<First_t<Tuples...>>;
func(std::get<index>(std::forward<Tuples>(tuples))...);
if constexpr (index + 1 < size)
for_each_in_tuples<index + 1>(func, std::forward<Tuples>(tuples)...);
}
}
template<class Function, class... Tuples>
constexpr void for_each_in_tuples(Function func, Tuples&&... tuples)
{
static_assert(sizeof...(Tuples) > 0);
static_assert(impl::all_same<impl::tuple_size<Tuples>...>);
impl::for_each_in_tuples(func, std::forward<Tuples>(tuples)...);
}
At Compiler explorer: https://godbolt.org/g/cYknQT
Main questions:
Is this implementation correct and can it be simplified?
Which name is better, for_each_in_tuple or for_each_in_tuples (or ...)?
Answer: template<typename T, typename... Ts>
struct First { using Type = T; };
template<typename... Ts>
using First_t = typename First<Ts...>::Type;
This can be done using std::tuple_element in conjunction with std::tuple.
template <typename... Ts>
using First_t = std::tuple_element_t<0, std::tuple<Ts...>>;
template<auto value, auto... values>
inline constexpr auto all_same = (... && (value == values));
For an empty pack, do you really want the value to be undefined? The default behavior for an empty pack is that && folds to true and || folds to false.
static_assert(sizeof...(Tuples) > 0);
Instead of failing, maybe try to call the function with 0 arguments and see what happens?
static_assert(impl::all_same<impl::tuple_size<Tuples>...>);
If you want zipped-like behavior, you'll want to zip tuples until one of them has exhausted its elements (the minimum size instead of the first tuple's size).
Is this implementation correct and can it be simplified?
Yes. Use an iterative approach, like sequential expansion, over recursion.
Which name is better, for_each_in_tuple or for_each_in_tuples (or ...)?
Maybe for_each_zipped. Tuples can be gathered from the signature of the function.
If you want to avoid the recursion, use fold expressions, std::index_sequence, and std::make_index_sequence.
Start with a simple helper to just invoke the function with elements across all the tuples at a specific index.
template <std::size_t Index, typename Function, typename... Tuples>
constexpr void invoke_at(Function&& func, Tuples&&... tuples) {
func(std::get<Index>(std::forward<Tuples>(tuples))...);
}
Now we need a way to call it sequentially (invoke_at<0>(args), invoke_at<1>(args), ..., invoke_at<N>(args)). Use fold expressions like you did for all_same, but with the comma operator and a unary right fold ((invoke_at<N>(args), ...)). To generate the Ns that get expanded, we use std::index_sequence.
template <std::size_t... Indices, typename Function, typename... Tuples>
constexpr void apply_sequence(Function&& func, std::index_sequence<Indices...>, Tuples&&... tuples) {
(((void)invoke_at<Indices>(std::forward<Function>(func), std::forward<Tuples>(tuples)...), ...));
}
Finally, write your function that checks preconditions, creates the index sequence, and forwards the arguments to the helper above.
template <typename Function, typename... Tuples>
constexpr void tuple_for_each(Function&& func, Tuples&&... tuples) {
static_assert(sizeof...(tuples) > 0, "Must be called with at least one tuple argument");
constexpr auto min_length = std::min({std::tuple_size_v<std::remove_reference_t<Tuples>>...});
if constexpr (min_length != 0) {
apply_sequence(std::forward<Function>(func),
std::make_index_sequence<min_length>{},
std::forward<Tuples>(tuples)...);
}
else {
func();
}
}
Note - The expansion has a cast to void that disables any overloaded shenanigans abusing the comma operator. | {
"domain": "codereview.stackexchange",
"id": 31572,
"tags": "c++, template-meta-programming, c++17"
} |
Ribosomal RNA amount in a Drosophila cell | Question: I am isolation RNA from Drosophila larvae brain with TRIzol method. What percentage of extracted RNA will be ribosomal RNA? I am only interested in mRNA, so I am trying to figure out whether I need to get rid of rRNA. Thanks
Answer: The protocol you are using will leave the sample not only with rRNA but also with non-coding RNA.
Many RNA protocols separate mRNA by the affinity of a carrier for the polyA tail. This protocol references an older paper that estimates that only 5% of RNA is mRNA. I'd be surprised if this ratio changed by more than 2-3 fold in Drosophila.
I assume that percentage is by weight, but it could be a densitometry measurement, which is interpreted similarly. | {
"domain": "biology.stackexchange",
"id": 1861,
"tags": "molecular-biology, rna, mrna, ribosome"
} |
pr2_teleop_booth moved to pr2_teleop_general | Question:
I had a hard time finding the pr2_teleop_booth package, so I'm just putting it in here hopefully to save others some time. Where did the pr2_teleop_booth package move to?
Originally posted by Steven Bellens on ROS Answers with karma: 735 on 2012-04-16
Post score: 0
Answer:
The pr2_teleop_booth package was renamed to pr2_teleop_general:
http://www.ros.org/wiki/pr2_teleop_general
Originally posted by Steven Bellens with karma: 735 on 2012-04-16
This answer was ACCEPTED on the original site
Post score: 0 | {
"domain": "robotics.stackexchange",
"id": 9005,
"tags": "teleop, pr2"
} |
Power dissipation in High Voltage Cables | Question: I was doing the following physics problem in physics class:
You have two dimensionally identical pieces of metal, one made from
aluminium, the other made from iron. It is given that aluminium has a
lower resistivity than iron. Which metal glows first when they are
connected in parallel to a battery? What about if they are connected
in series?
If they are connected in parallel, the voltage across them is the same; however, by Ohm's law, the current through the aluminium is higher. Therefore the aluminium glows first.
When they are connected in series, the current through them is the same, so they should glow at the same time. But then I remembered that the power dissipated in a resistor is $P=VI$, and the voltage drop across the iron is greater - so the iron glows first.
Then I tried to recall where my first line of reasoning came from - that heat losses depend on the current and not the voltage. And this is where high-voltage transmission lines come in.
I was taught that we transmit electricity at high voltage (and then transform it down for home usage) to allow a lower current and therefore less power dissipation. But now that I think about it, increasing the voltage to decrease the current shouldn't work: what we save with a lower current we lose with a higher voltage, according to $P=VI$.
What is going on here, can someone please explain?
Answer: But recall that power dissipated
$P= VI$
is also , from Ohm's law, expressible as
$P = I^2 R$
So the dependency of power dissipated is linear in voltage, but quadratic in current, given the same resistance.
Also remember that the voltage supplied from the power station, and the voltage drop across the transmission line - which is what is important in power loss- , are not the same voltage. The former is considerably larger than the latter.
To see why, consider supplying a fixed amount of power at the end of a transmission line with a supply voltage $V_s$ and supply current $I$.
You would use the first equation, $P= V_s I$, to compute that power.
But the voltage drop across the transmission wire is $V_{drop} = IR$, which is less than
the supply voltage. They are different quantities. The power dissipation is only quadratic in $V_{drop}$, not $V_s$.
Of course, if we are talking about transmission lines, the above is a vast oversimplification, since transmission lines carry AC power. For a short enough transmission line, it can be modeled as a resistance and inductance in series. For longer transmission lines, capacitive effects come into play. But the qualitative picture - current dominating the loss over voltage - still holds.
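To make the distinction concrete, here is a small numeric sketch (the line resistance and delivered power are made-up illustrative values, not taken from the answer):

```python
# Deliver the same power P through the same line resistance R at two
# different supply voltages; the line loss I^2 R falls with the square
# of the current.
P = 100e3   # delivered power, W
R = 0.5     # line resistance, ohm

for V_s in (10e3, 100e3):   # supply voltage, V
    I = P / V_s             # current needed to deliver P
    loss = I ** 2 * R       # dissipation in the line
    print(f"V_s = {V_s / 1e3:5.0f} kV: I = {I:5.1f} A, line loss = {loss:6.1f} W")
```

Raising the supply voltage tenfold cuts the current tenfold and the line loss a hundredfold, even though the delivered power $P = V_s I$ is unchanged.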
There is a report from Purdue University at this link that covers transmission line power loss in considerably more detail than there is room for here. | {
"domain": "physics.stackexchange",
"id": 17382,
"tags": "electricity, classical-mechanics"
} |
Sending messages to nao_controller from rostopic_pub | Question:
Hello everybody,
I am new to ROS and Python.
I would like to use the joint_angles topic to control only one joint of NAO, let's say HeadYaw.
Is it possible to let me know the procedure?
I ran the following command but it does not work:
rostopic pub /joint_angles nao_msgs/JointAnglesWithSpeed '{seq: 1, stamp: now, frame_id: Head}' '['[HeadYaw,HeadPitch]',[1,-1],2.0,0]'
Originally posted by Mohsen 2013 on ROS Answers with karma: 3 on 2013-07-26
Post score: 0
Original comments
Comment by Miguel S. on 2013-07-26:
Could you change the title to something more descriptive? Eg. 'Sending messages to nao_controller from rostopic_pub'. It'll make it easier for future users to find your question :)
Answer:
There are two issues with your command:
On ROS Groovy the line is not accepted; if you flatten your arrays and add -- for the negative numbers, it works
The relative speed must be a value between 0 and 1
So if you type something like this...
rostopic pub /joint_angles nao_msgs/JointAnglesWithSpeed -- '[ 1, now, Head]' '[HeadYaw,HeadPitch]' '[1,-1]' 1.0 0
... it should work (at least on groovy with nao_controller up and running)
Originally posted by Miguel S. with karma: 1114 on 2013-07-26
This answer was ACCEPTED on the original site
Post score: 1
Original comments
Comment by Mohsen 2013 on 2013-07-26:
Thanks alot :) | {
"domain": "robotics.stackexchange",
"id": 15061,
"tags": "ros, yaml, nao-driver"
} |
How do we know that we have captured the entire spectrum of the Harmonic Oscillator by using ladder operators? | Question: Consider standard quantum harmonic oscillator, $H = \frac{1}{2m}P^2 + \frac{1}{2}m\omega^2Q^2$.
We can solve this problem by defining the ladder operators $a$ and $a^{\dagger}$. One can show that there is a unique "ground state" eigenvector $\psi_0$ with $H\psi_0 = \frac{1}{2}\hbar\omega\psi_0$ and furthermore that given any eigenvector $\psi$ of $H$ with eigenvalue $E$, the vector $a^{\dagger}\psi$ is also an eigenvector of $H$ with eigenvalue $E + \hbar\omega$.
However, it is usually stated that we now have all eigenvectors of $H$ by considering all vectors of the form $(a^{\dagger})^n\psi_0$.
How do we know that we have not missed any eigenvectors by this process? e.g. how do we know that eigenvalues are only of the form $E_n = (n+\frac{1}{2})\hbar\omega$?
Also a slightly more technical question, how do we know that the continuous spectrum of $H$ is empty?
The technical details I am operating with are that $\mathcal{H} = L^2(\mathbb{R})$ and all operators ($H, P, Q$) are defined on Schwartz space, so that they are essentially self-adjoint with their unique self-adjoint extensions corresponding to the actual observables.
Answer: It is sufficient to prove that the vectors $|n\rangle$ form a Hilbert basis of $L^2(\mathbb R)$. This fact cannot be completely established by using the ladder operators. To prove that the span of the afore-mentioned vectors is dense in the Hilbert space, one should write down the explicit expression of the wavefunctions of the said vectors recognizing that they are the well-known Hilbert basis of Hermite functions.
Since the vectors $|n\rangle$ are a Hilbert basis, from standard results of spectral theory, the operator $$\sum_n \hbar \omega(n +1/2 ) |n\rangle \langle n | \tag{1}$$
(using the strong operator topology which defines the domain of this operator implicitly)
is self-adjoint and its spectrum is a pure point spectrum made of the numbers $\hbar \omega(n +1/2 ) $ with $n$ natural. This fact proves that the initial symmetric Hamiltonian operator you described in your post and defined on the Schwartz space admits at least one self-adjoint extension with the said spectrum (in particular, no continuous spectrum takes place). To prove that it is the unique self-adjoint extension, i.e., that the initial symmetric operator is essentially self-adjoint, the shortest way is to observe that the vectors $|n\rangle$ are necessarily analytic vectors of the initial Hamiltonian (notice that all the afore-mentioned vectors stay in the Schwartz space, which is the initial domain) because they are eigenvectors. Since they are a Hilbert basis, their span is dense. Under these hypotheses, a celebrated theorem by Nelson implies that the initial symmetric Hamiltonian operator is essentially self-adjoint and thus (1) is the only self-adjoint extension of the initial symmetric Hamiltonian operator. As a final comment, it is interesting to remark that (1) is not a differential operator, differently from the naive initial Hamiltonian, which is a differential operator but not self-adjoint. | {
"domain": "physics.stackexchange",
"id": 36436,
"tags": "quantum-mechanics, harmonic-oscillator, eigenvalue"
} |
Cellular respiration in carnivorous animals | Question: What is the equation for cellular respiration in carnivores as they don’t consume carbohydrates to break down into glucose in the following manner:
Glucose + oxygen -> water + CO2 + energy.
Do they (mammalian carnivores) primarily use ketones to produce energy without using glucose?
Does this mean that glucose is not the only energy source for humans/other animals?
Am I wrong in thinking that every source (protein, fats or carbohydrates) needs to be converted into glucose for energy?
Answer: Your guess is correct: glucose is not the sole source of energy in the cell!
While cellular respiration is the classic mechanism for energy production (in the form of ATP) in the cell, there's another process that's equally important and a little less well-known: beta oxidation. Beta oxidation is how fats and other lipids in the cell are broken down to produce energy. In this process, the last two carbons of the long chain are sliced off and transferred to coenzyme A, forming acetyl-CoA, which can then be used in the citric acid cycle to produce ATP.
Here's the equation describing it:
$$\mathrm{C}_{n}\text{-acyl-CoA} + \mathrm{FAD} + \mathrm{NAD}^{+} + \mathrm{H_{2}O} + \mathrm{CoA} \rightarrow \mathrm{C}_{n-2}\text{-acyl-CoA} + \mathrm{FADH_{2}} + \mathrm{NADH} + \mathrm{H}^{+} + \text{acetyl-CoA}$$
And here's a good diagram from the Wikipedia page that summarizes it:
Courtesy of Cruithne9 on Wikipedia [CC BY-SA 4.0 (https://creativecommons.org/licenses/by-sa/4.0)]
This process is why fats actually have a higher energy density than sugars: 9 Cal/g, as opposed to sugar's 4. It's also why organisms tend to use fats as energy storage - while sugar and cellular respiration is great for immediate energy, fats are stable and energy dense so it's an evolutionarily favored idea to use them for long-term storage.
You're also quite correct that animals don't tend to have a lot of free sugars and carbohydrates, unlike plants. Instead, we've converted those carbs into the fat molecules via the reverse process, fatty acid synthesis. So, carnivores largely get their energy from beta oxidation and the metabolism of fats, rather than glucose. | {
"domain": "biology.stackexchange",
"id": 9854,
"tags": "biochemistry, bioenergetics"
} |
How strong is this spring compressor? | Question: Disclaimer: amateur mechanic, not an engineer of any kind.
I just bought a spring compressor for my car. It's the factory recommended type so I'm assuming it's strong enough, but I'm wondering exactly how strong - since it looks (to my untrained eye) like there's a clear weakest link. The spring gets sandwiched between cast iron plates. The bottom plate rests on a threaded handle that looks really thick and sturdy. The top plate however is held in place by a hemispherical piece ('Upper Ball' in Figure 1), which is in turn held onto the main shaft by a small dowel pin ('Pin' in Figure 1). The upper ball fills a tapered hole in the upper plate, which presses down onto the coil. Everything seems like a well-machined fit - the pin slides in easily with no play.
Here's an image of the actual pieces:
The pin is 7.75mm in diameter and 56.0mm in length. The upper ball has an ID of 18.6mm and an OD of 44.3mm, so during use the pin sticks out both sides as per the image. I can't tell what it's made of, but I'm assuming either grade 5 or grade 8 steel or the equivalent metric grade.
Two questions:
(a) am I correct in assuming that this is the weakest part of the assembly?
(b) is it possible to calculate an approximate yielding/breaking strength of this part of the assembly, assuming either grade 5 of grade 8 steel?
For reference, the total force I would routinely put on this compressor is 1000 lbs (~4500 N).
Answer: I would have thought the nut and screw thread was more likely to be the failure point, but you are looking at the part and I'm only looking at a sketch of it.
The yield strength of grade 5 steel is around 600 MPa. The area of the pin is about 47 mm^2. So to permanently bend the pin by shearing it would take about 28000 N or about 6300 lbf. (Or arguably twice that, since the load goes to both sides of the pin).
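For anyone who wants to reproduce the arithmetic, a quick sketch (the 600 MPa yield strength is the assumed grade 5 value used above):

```python
import math

d = 7.75e-3        # pin diameter, m
sigma_y = 600e6    # assumed yield strength of grade 5 steel, Pa

area = math.pi * (d / 2) ** 2   # cross-sectional area of the pin, m^2
force = sigma_y * area          # single-shear estimate, N

print(f"area  = {area * 1e6:.1f} mm^2")                            # ~47.2 mm^2
print(f"force = {force / 1e3:.1f} kN (~{force / 4.448:.0f} lbf)")  # ~28.3 kN, ~6360 lbf
```

At the stated 4500 N working load, this leaves a healthy margin even in single shear.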
Nothing much to worry about, IMO. | {
"domain": "engineering.stackexchange",
"id": 3834,
"tags": "structural-analysis, automotive-engineering"
} |
Regarding the melting glaciers and icebergs, why isn't the extra water vapor in the atmosphere mentioned in discussions of global warming? | Question: As is reported on the major news networks, many glaciers and big icebergs are melting, which reportedly causes a tremendous increase in the amount of water vapor in the atmosphere.
Now, a lot of water vapor combined with increasing heat from the sun powers stronger storms and greater inequalities in the distribution of water throughout the world. Therefore, there are more incidences of drought in some places and big flooding in others. I think it all relates back to the fact that there are millions of tons more water vapor in the atmosphere. This, added to the Sun's heating, will make storms more intense.
My question is: given all the talk of global warming and instabilities in the weather, why is the tremendous amount of extra water vapor in the atmosphere never mentioned on the news or on other TV shows?
Answer: Extra water doesn't mean extra water vapor. Even without this EXTRA water, water vapor levels will increase in the current warming world, as it's the temperature rise that makes the water turn to vapor.
As temperature rises, evaporation increases and more water vapour accumulates in the atmosphere. As a greenhouse gas, the water absorbs more heat, further warming the air and causing more evaporation. When CO2 is added to the atmosphere, as a greenhouse gas it has a warming effect. This causes more water to evaporate and warm the air to a higher, stabilized level. So the warming from CO2 has an amplified effect. Water vapor is in fact the largest positive feedback in the climate system (Soden 2005). - Skeptical Science
Intensification of the water cycle is hence a product of the rise in temperature, rather than of the water content. Of course we need water to evaporate, but an increase in the water content doesn't correlate with increased water vapor. It might play some role locally in small ponds and seasonal rivers which dry up during summer, but on a global scale, with most of our evaporation occurring over oceans (86% of global evaporation), it's not significant enough.
So, coming to your question: first of all, it's not like water vapor is never mentioned -
As global temperatures rise, the atmosphere then holds more moisture ... Therefore there is more water vapor available to fall as rain, snow. - thinkprogress.org - September 5
Global warming is moistening the atmosphere - The Guardian - August 13
And if you notice, all the recent news about melting glaciers has been connected to recent developments and observations in the form of papers published in reputable journals. NASA even held a press conference explaining their latest research, which led to a series of news articles on the topic. So I'm quite sure that if there are significant developments in water vapor research, they will make the news as well. | {
"domain": "earthscience.stackexchange",
"id": 221,
"tags": "meteorology, atmosphere, global-warming, glacier, water-vapour"
} |
What are other names for planetoids that aren't orbiting a solar system, but hurtling through space? | Question: I'm trying to think of a good word for an asteroid/planetoid that has no stable orbit but has been ejected from a system and is passing close to a sun. Any help?
Answer: See nnnnnn's comment below.
The NASA website, describing 'Oumuamua, uses the term "interstellar object." Extrasolar asteroid would seem to be another option. I haven't seen a unique word, distinct from asteroid, for one that originated outside the solar system.
It seems a simple prefix is sufficient to describe objects beyond the solar system, e.g. exoplanet.
Scientific American has an article which uses the term "interstellar object."
Also, see answers to:
Is intrastellar commonly used by astronomers to refer to objects within our solar system?
Adjective for things outside our solar system | {
"domain": "astronomy.stackexchange",
"id": 4108,
"tags": "asteroids, terminology"
} |
Could submarine SONAR kill a diver? | Question: Could a diver swimming next to a submarine be killed or seriously injured by its SONAR?
What physical aspect of SONAR affects the human body in a potential harmful way?
Answer: Potentially yes it could.
There are no noise-cancelling headphones to stop the U.S. Navy's 235-decibel pressure waves of unbearable pinging and metallic shrieking. At 200 dB, the vibrations can rupture your lungs, and above 210 dB, the lethal noise can bore straight through your brain until it hemorrhages that delicate tissue. If you're not deaf after this devastating sonar blast, you're dead.
I found this from an article about killing with sound:
Killing With Sound | {
"domain": "physics.stackexchange",
"id": 20590,
"tags": "waves, acoustics, water, biology"
} |
Device to Test Radioactive Beverage | Question: In one scene of the movie "Edge of Darkness", the protagonist uses a device to test the radioactivity of milk in a glass container by placing the device near but outside the container. What is this device called, and is it sold to the public?
Answer: It's a portable, pocket size radiation detector like this one that United Nuclear sells:
http://unitednuclear.com/index.php?main_page=index&cPath=2_78
I would try Amazon though first to see if you can find a better price. | {
"domain": "physics.stackexchange",
"id": 19354,
"tags": "radioactivity"
} |
Typical number of samples per symbol | Question: Assume pulse shaping of a source of symbols (like $s=\{3,-i,2+5i\}$) with an SRRC waveform (or any other waveform). I know that it depends on many factors like the modulation scheme, but, in practice, is there any typical value or range of "number of samples per symbol" required for the pulse shaping process?
Answer: 2 to 4 samples per symbol is common in my experience. The incentive for using fewer samples per symbol is an overall lower sample rate (and thus less processing load). It depends on what point in the system you're talking about, but you also need to take the excess bandwidth of your pulse shape into account to ensure that the sample rate is high enough to avoid aliasing.
In some parts of the processing chain, you can get away with fewer samples per symbol than you might expect given the Nyquist rate and the signal's bandwidth. If the signal is perfectly synchronized, for example, then you don't need additional oversampling. In that case, you can use just one sample per symbol (basically, the symbol values themselves). You can then use that and the known pulse shape to reconstruct the signal to whatever fidelity you desire.
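A minimal sketch (my own illustration, not from the answer) of the first step of pulse shaping: zero-stuffing the symbol stream up to the chosen samples-per-symbol rate, after which the SRRC (or other) filter impulse response is applied to the upsampled stream:

```python
import numpy as np

def upsample(symbols, sps):
    """Insert sps-1 zeros between consecutive symbols (zero-stuffing)."""
    out = np.zeros(len(symbols) * sps, dtype=complex)
    out[::sps] = symbols
    return out

symbols = np.array([3, -1j, 2 + 5j])   # the example source from the question
x = upsample(symbols, sps=4)           # sample rate is now 4x the symbol rate
# convolving x with the pulse-shape impulse response completes the shaping
```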
In the absence of perfect synchronization (for example, in the various receiver blocks that perform frequency, phase, and/or timing synchronization), you'll have some amount of oversampling. In the literature you'll find 2 samples per symbol as a desirable target rate, because it's really the smallest that you're realistically going to achieve. You can always go higher if you need to. It just results in a higher sample rate (so more computational work). | {
"domain": "dsp.stackexchange",
"id": 5790,
"tags": "filters, sampling, digital-communications, modulation"
} |
how to get turtlebot pose | Question:
I would like to get the turtlebot's pose while it is moving. The following code is used, but there is no response. I guess it is a problem with the topic "/odom". Could anybody give me some ideas to fix it?
thanks.
virtual void onInit()
{
ros::NodeHandle& nh = getNodeHandle();
ros::NodeHandle& private_nh = getPrivateNodeHandle();
cmdpub_ = private_nh.advertise<geometry_msgs::Twist> ("cmd_vel", 1);
sub_= nh.subscribe<PointCloud>("depth/points", 1, &TurtlebotFollower::cloudcb, this);
ros::Subscriber sub = nh.subscribe("/odom",10, &TurtlebotFollower::poseCallback,this);
}
void poseCallback(const geometry_msgs::TwistPtr& msg){
std::cout << msg->linear.x << "\t" << msg->linear.y << "\t" << 0.0 << "\t" << std::endl;
}
I found that if I put the above code into the onInit() virtual function, it fails. But if I put it in a main() function, it succeeds. What is the difference?
Originally posted by baowei lin on ROS Answers with karma: 16 on 2015-09-08
Post score: 0
Original comments
Comment by Willson Amalraj on 2015-09-09:
Did your poseCallback work even if the message type is geometry_msgs::Twist? You are trying to use nodelets. You have to run them differently. Please follow the link to learn about running nodelets.
Comment by baowei lin on 2015-09-09:
sorry, the "succeed" means that I've changed them to nav_msgs. I will check nodelets, thank you Willson.
Answer:
I fixed the problem by changing getNodeHandle() to getMTNodeHandle(): if you want to subscribe to multiple topics, you need a multi-threaded callback queue. Thank you Willson again.
Originally posted by baowei lin with karma: 16 on 2015-09-09
This answer was ACCEPTED on the original site
Post score: 0 | {
"domain": "robotics.stackexchange",
"id": 22583,
"tags": "ros, pose, turtlebot"
} |
Can the subscriber be part of a separate package than the one that it is subscribing to? | Question:
Hello,
I have the basic idea of writing a subscriber. But I am not quite sure about where it is to be placed. Suppose I am subscribing to a topic in a particular package (not written by me). Is it necessary that the subscriber code (C++ or python) be placed inside the src folder of this package or can I create a separate package just for this subscriber?
Will creating a new package for the subscriber make writing/editing the CMakeLists.txt and package.xml more difficult?
Originally posted by skr_robo on ROS Answers with karma: 178 on 2016-07-12
Post score: 0
Answer:
Publishers and subscribers do not need to be in the same package; in fact, this is one of the strengths of ROS, because it means that you can interact with someone else's code without needing to modify it.
When you're setting up your CMakeLists and your package.xml for your subscriber, you should declare a dependency on the message package that contains the message type definition for the topic that you want to subscribe to. For some cases, this message may be in the package with the publisher (for custom messages), but in most cases the message definition will probably be in a separate package.
For example, most laser scanners publish a sensor_msgs/LaserScan, and so you should depend on the sensor_msgs package, and your code doesn't need a dependency on the specific driver for your laser, regardless of whether you're using a Hokuyo, SICK, or other brand of laser.
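Concretely, the dependency declarations the answer describes would look roughly like this in a catkin package (the exact tags depend on your package format; names here are illustrative):

```xml
<!-- package.xml: depend on the message package, not on the driver package -->
<depend>roscpp</depend>
<depend>sensor_msgs</depend>
```

```cmake
# CMakeLists.txt
find_package(catkin REQUIRED COMPONENTS roscpp sensor_msgs)
```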
Originally posted by ahendrix with karma: 47576 on 2016-07-12
This answer was ACCEPTED on the original site
Post score: 4
Original comments
Comment by skr_robo on 2016-07-13:
Thank you for the reply. I am looking to subscribe to the topic /camera/depth/image_rect_color obtained from zed-ros-wrapper. So, are the message package and zed-ros-wrapper package my only dependencies?
Comment by skr_robo on 2016-07-13:
I mean, how do I identify my other dependencies, if any?
Comment by ahendrix on 2016-07-13:
The wiki page for zed_ros_wrapper doesn't document the topic type, but I suspect it's a sensor_msgs/Image. (Image type from the sensor_msgs package). Therefore your only dependency should be the sensor_msgs package.
Comment by ahendrix on 2016-07-13:
You can also use rostopic info <topic> to determine the type of a published topic, if there isn't good documentation.
Comment by ahendrix on 2016-07-13:
Note that your node does not need a dependency on zed_ros_wrapper
Comment by skr_robo on 2016-07-13:
It is sensor_msgs. I was concerned whether I would need any other dependencies or not. Thanks a lot. | {
"domain": "robotics.stackexchange",
"id": 25224,
"tags": "ros"
} |
Particle Equilibrium and the Interpretation of Accelerations | Question: I have a simple question regarding the interpretation of acceleration and force in the context of a particle in equilibrium.
Given that the necessary and sufficient definition for particle equilibrium can be stated as;
$$
\sum\vec{F}=0
$$
The following substitution can be made;
$$
\sum(m\cdot\vec{a})=0
$$
Assuming equilibrium, the particle must be non-accelerating, and can only have a single mass. Yet intuitively the forces still pull or push. So given equilibrium, how can the forces pull with an acceleration vector of zero? I understand that it is the vector sum of the accelerations which must have a magnitude of zero, but how can any be non-zero anyway, if there is no observable acceleration? I think my interpretation is somewhat tangled. Can anybody clarify? Appreciate the time.
Answer: Say $\Sigma F=F_1+F_2+...=0$. Then your substitution, which is mathematically correct, would physically imply the following: if $F_1$ alone had acted on the body then it would accelerate at $a_1$, if $F_2$ alone had acted on the body then it would accelerate at $a_2$, and so on. So $F_1+F_2+...=0$ implies $a_1+a_2+...=0$, while the individual components themselves are not necessarily zero. Each $a_i$ can be non-zero because each of them refers to a different physical condition, namely the one in which the body is acted upon by $F_i$ alone. | {
"domain": "physics.stackexchange",
"id": 32194,
"tags": "newtonian-mechanics, classical-mechanics, vectors, equilibrium"
} |
Why does this karyotype start numbering at 11? | Question: Here is some image I found for karyotyping a chromosome:
Why does the band numbering start at 11? When I count other things I usually start with the number one, and in some cases zero. Eleven seems random.
Answer:
Why does the band numbering start at 11? When I count other things I usually start with the number one.
It does start with the number one!
What is happening here is clearly a small confusion. Looking at the centromere in this idiogram of chromosome 12 you can see two number 11, as you stated in your question, one above and one below the centromere:
An idiogram of a Giemsa-stained chromosome 12. © 1994 David Adler, University of Washington. All rights reserved.
However, those two 11s below and above the centromere are not, in fact, elevens: each is just two number 1s together, 1-1 or one-one.
According to Nature (2017):
This particular idiogram depicts the pattern of Giemsa staining at a fairly low resolution (i.e., it produces about 400 total bands in a karyotype, which is just above the threshold that is clinically useful). At this resolution, the long q arm of chromosome 12 can be subdivided into two main regions, which are designated 12q1 and 12q2. Region 12q1 can be further subdivided into five subregions, designated 12q11 through 12q15, each of which corresponds to a band detected by Giemsa staining. Orally, these subdivisions are referred to as "12q one-one" through "12q one-five" (not as "12q eleven" through "12q fifteen"). The more distal 12q2 region can be subdivided into subregions 12q21 through 12q24. In addition, subregion 12q24 can be further subdivided into regions 12q24.1 through 12q24.3, even at this relatively low resolution. (emphasis mine)
Thus, those two 11 you see above and below the centromere are (or should be), respectively, 12p1-1 and 12q1-1 (remember that this is the chromosome 12).
The confusion here, therefore, is due to:
The omission of a 12p or a 12q before all those numbers.
The omission of some separator, like a hyphen or a comma, between those numbers.
A detailed PDF from the International System for Cytogenetic Nomenclature (ISCN) can be found here.
Bonus: In case you want to know, the number 850 at the top of the idiogram refers to the number of bands:
At this higher level of resolution, approximately 850 bands can be distinguished in a karyotype
Source: Nature.com. (2017). Chromosome Mapping: Idiograms | Learn Science at Scitable. [online] Available at: https://www.nature.com/scitable/topicpage/chromosome-mapping-idiograms-302 [Accessed 23 May 2017]. | {
"domain": "biology.stackexchange",
"id": 7203,
"tags": "genetics, dna, human-genetics, karyotype"
} |
Can I sense a bright star pointing an eight foot antenna towards it? | Question: If I connect an eight foot Yagi or other comparable sized antenna to my oscilloscope and point the antenna at a bright star will I see a voltage on my oscilloscope?
I am not interested in turning the voltage into an image just wondering if I would see a voltage increase when it is on a bright star. I’d like to know your thoughts before I take the time to build the antenna. I’m thinking about in the 25cm range. I’ve heard that’s an active area. My oscilloscope will read down to about 20 millivolts.
Answer: Stars are too dim for amateur radio equipment. There are two possible radio sources that you can detect: the sun and Jupiter.
Jupiter is particularly interesting as interactions between Io and its magnetic field produce beams of radio waves that sweep past earth every 10 hours. These are detectable in the amateur range, at about 20 MHz.
NASA makes a kit for detecting these radio signals, or it is possible to use a ham antenna, but of course it must be cut for the frequency of operation. The NASA kit uses a phased dipole antenna which must be set up in a field or similar, as the antenna is about 7m long.
Stars are not very good radio sources. Supernova remnants such as Cassiopeia A or the Crab nebula are much brighter at radio wavelengths. Most supernovae are too distant to be powerful radio sources; radio supernovae are rare. A local supernova would be a radio source, but we haven't observed a supernova in the Milky Way for several hundred years.
"domain": "astronomy.stackexchange",
"id": 3240,
"tags": "star, radio-astronomy, radio-telescope"
} |
Is it really necessary for the orbital plane of a satellite to pass through the center of mass of the object around which it's orbiting? | Question: Is it really necessary for the orbital plane of a satellite to pass through the center of mass of the celestial body around which it's orbiting? Does the answer depend on, whether the celestial body is a sphere of radially symmetric density or it's irregularly shaped (for example, a comet or an asteroid)? If there exist cases where the orbital plane doesn't pass through the center of mass, how is Kepler's First Law defined for such cases, since the center of mass of the larger body doesn't lie on the orbital plane of the satellite which contains the foci of the elliptical orbit?
Please Note: The answers for the question Why do satellites orbit around the centre of a planet? did not clarify my above mentioned doubts.
Answer: It is absolutely false that the orbit of a satellite should always be a curve in a plane. And it is also false that the direction of the force should always point towards the center of mass of the attracting body.
As far as the first statement, it should be noticed that in the case of central forces the existence of a plane where the orbit is confined to stay is ensured by the conservation of angular momentum. If the attracting body has a spherical distribution of its mass, a mathematical theorem ensures that the resulting force is central and the situation is equivalent to concentrate the whole mass of the body in its center of mass. However celestial bodies can have shapes deviating from the spherical one and/or a non spherical mass distribution. As a consequence, real interaction energies, in particular at small distances, can be far from being spherical. A general description of anisotropic gravitational interaction, like that felt by low Earth orbit satellites, can be described by a multipole expansion. In such a case, angular momentum conservation does not hold and there is no planar orbit and no Kepler's first law.
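As a concrete illustration (a standard textbook form, not taken from the answer): truncating the multipole expansion of an oblate body's potential at the quadrupole ($J_2$) term gives

$$U(r,\theta) = -\frac{GM}{r}\left[1 - J_2\left(\frac{R}{r}\right)^2 \frac{3\cos^2\theta - 1}{2}\right],$$

where $\theta$ is the colatitude and $R$ the equatorial radius. The explicit $\theta$-dependence means the force is no longer central; for Earth, $J_2 \approx 1.08\times10^{-3}$, and this term is what drives effects such as the nodal precession of low Earth orbits.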
Apart from the angular dependence of the potential energy, it is also interesting to notice that in general a non-spherical mass distribution does not attract in the direction of the center of mass. This can be easily verified by examining simple examples of non-spherical mass distributions.
"domain": "physics.stackexchange",
"id": 62959,
"tags": "newtonian-mechanics, newtonian-gravity, orbital-motion, satellites"
} |
XRF analysis - recalculating results | Question: For TAS diagram plot, one need to recalculate Major oxide data (NaO, K2O, SiO2).
Do one need to recalculate the ppms of Zr,Nb and Y and % of TiO2 for Zr/TiO2 vs Nb/Y diagram plotting for rock classification?
Answer: It is best to recalculate all major oxides (including TiO2); the trace elements are not important, so there is no need to recalculate them. | {
"domain": "earthscience.stackexchange",
"id": 1908,
"tags": "geochemistry, petrology"
} |
Photons and Relativity | Question: Consider a photon from the Sun that travels with velocity $c$. Now suppose we are that photon. To us, it looks like the Sun is moving away from us with velocity $c$. So why don't we get attracted back towards the Sun, given that the mass of the Sun would be infinite for us, since it moves away from us at velocity $c$?
Answer: You have completely mixed the modern and classical concepts of relativity. If you're talking about mass increment, you shouldn't calculate the speed of the Sun based on absolute notions of time and space.
For you as a photon, space will be contracted to zero and time will be dilated to infinity. So, you can't calculate a speed (which is a time-like spacetime event) of Sun.
While it's a nice, satisfying explanation, it's not the real one.
Real Answer:
Relativistic physics doesn't allow you to take the position of a photon. In other words, relativistic physics doesn't allow a photon to be an observer. That's because a photon would see itself as stationary, which breaks the framework of relativistic physics. Relativistic physics doesn't allow photons to be at rest in any reference frame.
"domain": "physics.stackexchange",
"id": 14860,
"tags": "gravity, special-relativity, mass, speed-of-light, photons"
} |
Considering dark energy and other factors, what is the most distant object light could reach? | Question: If we send a photon from Earth at this moment, what is the most distant object that it could reach?
This question is partly inspired by this video which states we cannot ever travel beyond our local group.
I'm trying to understand why that would be. The recession velocities listed in this Wikipedia article are all in the order of hundreds of km/s, and light travels a thousand times faster. So that would seem reachable, for a photon at least.
Answer: The most distant object that light we emit today can reach in the distant future is at the event horizon
$$eH(t) = a(t)\cdot \int_{t}^{t_{max}} \frac{c\cdot \text{d}t'}{a(t')}$$
which is now approximately 17 billion lightyears away; see the future light cone in comoving coordinates, which converges to this distance.
If the light was emitted at the big bang you can use the particle horizon
$$pH(t) = \int_{0}^{t} \frac{\text{d}t'}{a(t')}$$
as the future light cone of t=0, this light has travelled some 46 billion lightyears up to now and will in the distant future converge to a comoving distance of 63 billion lightyears where the farthest galaxies it will ever reach are located today.
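As a rough numerical sketch of the event-horizon distance today (my own; the cosmological parameters below are approximate Planck-era values assumed for illustration, not copied from the linked notebook):

```python
import numpy as np
from scipy.integrate import quad

H0 = 67.9                 # km/s/Mpc (assumed)
Om, OL = 0.306, 0.694     # matter and dark-energy densities, flat universe (assumed)
c = 2.998e5               # km/s
hubble_dist_gly = (c / H0) * 3.2616e-3   # Mpc -> Gly (1 Mpc ~ 3.2616 Mly)

# Comoving distance light emitted now can still cover:
#   eH = integral_1^inf da / (a^2 * H(a)/H0),  with H(a)/H0 = sqrt(Om a^-3 + OL)
integrand = lambda a: 1.0 / (a**2 * np.sqrt(Om / a**3 + OL))
I, _ = quad(integrand, 1.0, np.inf)

event_horizon_gly = hubble_dist_gly * I   # roughly 17 Gly, matching the answer
```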
The detailed calculation with the cosmological parameters from Planck 2013 can be found here from In[24] to Out[26]. For a detailed explanation of the spacetime diagram see here at page 3. | {
"domain": "physics.stackexchange",
"id": 30992,
"tags": "cosmology, space-expansion, event-horizon, dark-energy"
} |
Is the non-trivial topology on the torus reflected on the Bloch sphere? | Question: Almost every text on topological insulators has the Bloch sphere example of a two-level system showing the non-triviality of the bundle of an eigenvector over the sphere: we can't define an eigenstate over the whole Bloch sphere; instead, we must construct two local trivializations, namely
$$
\psi_{-}^{S}(\vec{n}) = \left(\begin{matrix} -\sin\frac{\theta}{2} \\ e^{i\varphi} \cos\frac{\theta}{2} \end{matrix}\right)
\quad
\psi_{-}^{N}(\vec{n}) = \left(\begin{matrix} - e^{-i\varphi}\sin\frac{\theta}{2} \\ \cos\frac{\theta}{2} \end{matrix}\right)
$$
defined in the south and north hemispheres respectively to avoid the obstruction (to Stokes theorem). We can easily compute the Chern number via Berry curvature on the sphere on any of the states. But we are getting the Chern number integrating over the SPHERE, and not over the Brillouin TORUS. Does this result reflect directly the non-triviality of the fiber bundle over the Brillouin TORUS? (I think getting the pullback of the map from the sphere to the torus and using the chain rule may suffice to prove this, but I am not sure it's enough for a proof). Will we get the same result?
Aside: when in the Bloch sphere we choose the trivializations, we are defining a transition function like
$$ t_{NS} = e^{i\varphi} $$
that applies to the fiber space, with that definition, when going from the north to the south hemisphere. Is that $\varphi$ the berry phase? Quantitatively or qualitatively?
Answer: I'll give you a rather detailed answer in the following, but let me first briefly state the answers to your questions:
The Chern number of the eigenvector bundle over the torus can be evaluated by integration over the sphere, the integrand will indeed be the monopole Berry curvature. However the integration region will not in general be a single sweep of the surface of the sphere, because the map from $T^2$ (the Brillouin zone) to $S^2$ may wind the sphere several times.
The transition functions are rather related to the Chern number. Their winding number, i.e., the number of times they wind the equator is equal to the Chern number.
Concerning the question posed in the title: The topology of all principal $U(1)$ bundles over the $2$-torus $T^2$ is determined solely by the Chern number. (This result is not general; for example, it is not true for the torus $T^3$, which is of higher dimension.)
Details:
The Berry phase of a nondegenerate state is a holonomy of a principal $U(1)$ bundle over the parameter space $M$. There can be many topologically inequivalent $U(1)$ bundles over a given parameter space corresponding to inequivalent quantum systems. Thus the Berry phase allows a classification of parametrized quantum systems based upon the classification of principal bundles. This point of view was noticed by: Bohm, Boya, Mostafazadeh and Rudolph.
The classification theorem of principal bundles asserts the existence of a universal principal bundle $U(1)\rightarrow\eta\rightarrow B$ such that any principal $U(1)$ bundle $\lambda$ over the parameter space is the pullback of it under some map $f$. (Please see Nash and Sen for a more detailed explanation of the classification theory of principal bundles.)
$$\begin{array}{ccc}
\lambda& \overset{f^*}\leftarrow & \eta\\
\downarrow & & \downarrow \\
\mathcal{M} & \overset{f}\rightarrow & B
\end{array}$$
The classification theory of principal bundles specifies a base space (the classifying space) and a total space of the universal bundle depending only on the structure group (in our case U(1)). In the case of a nondegenerate state of a Hamiltonian unconstrained by any anti-unitary symmetry, the classifying space is known to be the infinite dimensional complex projective space $B = \mathbb{C}P^{\infty}$. (For a good account of complex projective spaces, please see chapter 4 in Bengtsson and Życzkowski. Also, the following exposition by John Baez of classifying spaces is very useful and contains some more explanation of the infinite dimensional case $\mathbb{C}P^{\infty}$.)
$\mathbb{C}P^{\infty}$ is the space of all one dimensional projectors, and the map $f$ from the parameter space to the universal base space is:
$$P: \mathcal{M}\overset{P}\rightarrow \mathbb{C}P^{\infty}$$
$$ \mathcal{M} \ni m \mapsto P(m) = |u(m)\rangle\langle u(m)|\in \mathbb{C}P^{\infty}$$
Where $|u(m)\rangle$ is the state of the system. (The bundle $\lambda$ is sometimes called the Berry bundle; when the state is an eigenstate of a Hamiltonian, synonyms are the eigenbundle or the spectral bundle.)
The construction of the Berry phase stems from the existence of a universal $U(1)$ connection over the infinite dimensional projective space whose curvature is the Fubini-Study form:
$$ dA = \frac{1}{2i}\mathrm{Tr} (P dP \wedge dP)$$
The pulled back Berry connection $P^*(A) = A(m)$ is the Berry connection whose holonomy is the Berry phase on $\mathcal{M}$, and the integral of its curvature on $\mathcal{M}$ is the first Chern class of $\lambda$.
The infinite dimensional projective space $\mathbb{C}P^{\infty}$ is the direct limit of a series of inclusions:
$$ \mathbb{C}P^{1} \subset \mathbb{C}P^{2} . . . \subset \mathbb{C}P^{\infty}$$
In the case of when the parameter space is two dimensional, it is sufficient to approximate the classifying space by its first component namely $ \mathbb{C}P^{1} = S^2$ and consider maps to the space of one dimensional projectors in two dimensions namely $S^2$, and consider the bundle map:
$$\begin{array}{ccc}
\lambda& \overset{P^*}\leftarrow & \eta\\
\downarrow & & \downarrow \\
\mathcal{M} & \overset{P}\rightarrow & S^2
\end{array}$$
(Please see, for example, Viennot for a more detailed exposition of the finite dimensional approximations of the classifying spaces.)
Here, it is very easy to write the formula for the one dimensional projector map:
$$P(m) = \frac{1}{2} \begin{bmatrix}
1-\cos \theta(m) & \sin \theta(m) e^{i\phi(m)}\\
\sin \theta(m) e^{-i\phi(m)} & 1+\cos \theta(m) \end{bmatrix}$$
This projector gives rise to the pull back to $\mathcal{M}$ of well-known magnetic monopole Berry connection:
$$A_{\pm} = \frac{1\mp\cos \theta(m) }{\sin \theta(m) } d\phi(m) $$
whose Berry curvature is proportional to the area element of the sphere
$$F = \frac{1}{2} \sin \theta(m) d\theta(m) d\phi(m) $$
The integration is performed along a path in the manifold $\mathcal{M}$, thus the Berry phase corresponding to the path $\Gamma_M$ is given by:
$$\phi_B = \int_{\Gamma_M} A(m)$$
It can be pulled back to the two sphere (by change of the integration variable), but in this case we need to integrate on the image of the path $\Gamma = P(\Gamma_M)$:
$$\phi_B = \int_{\Gamma} A$$
The same is true for the Chern class:
$$ c = \int_{\mathcal{M}} F(m) = \int_{P(\mathcal{M})} F$$
The integration region may wind the two sphere several times and the Chern number will be equal to a multiple of the charge of the monopole.
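As a numerical sanity check (my own sketch, not part of the original answer): integrating the monopole curvature $F = \frac{1}{2}\sin\theta\, d\theta\, d\phi$ over one full sweep of the sphere and dividing by $2\pi$ recovers a Chern number of 1; a map winding the sphere $k$ times multiplies this by $k$.

```python
import numpy as np
from scipy.integrate import dblquad

# F = (1/2) sin(theta) dtheta dphi; Chern number = (1/2pi) * integral over S^2
F = lambda theta, phi: 0.5 * np.sin(theta)
integral, _ = dblquad(F, 0, 2 * np.pi, 0, np.pi)  # phi outer, theta inner

chern = integral / (2 * np.pi)   # -> 1.0 for a single sweep of the sphere
```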
For the second question, we observe that:
$$\begin{align*}
\int_{P(\mathcal{M})} F
& = \int_{P(\mathcal{M}) \cap S^1} \left (A_{+}-A_{-}\right ) \\
&= \int_{P(\mathcal{M}) \cap S^1} d \phi \\
&= \frac{1}{i}\int_{P(\mathcal{M}) \cap S^1} g^{-1}d g
\end{align*}$$
Where $S^1$ is the equator and $g= e^{i \phi}$. The last term is the winding number of the mapping $g$. It is a one dimensional Wess-Zumino-Witten term. | {
"domain": "physics.stackexchange",
"id": 45263,
"tags": "condensed-matter, differential-geometry, topological-field-theory, topological-insulators, berry-pancharatnam-phase"
} |
Can molecular genetics make a boolean variable from a continuous variable? | Question: In the same vein as this question: gene expression is regulated through complex interactions. The concentration of enhancers and repressors is an important factor that dictates the level of expression of a given gene. These concentrations can take different values on a continuous scale.
Imagine a case where fitness is maximized when a given gene produces n proteins per minute if the concentration of a given protein is greater than x. If the concentration of the protein were lower than x, then the gene should not be expressed (0 proteins per minute are produced). In such a case, it would be great if a bunch of reactants were able to simulate a "switch function" that switches from "NO EXPRESSION" to "EXPRESSION" at x.
It seems to me that such a switch function should be very complicated to evolve. I would suspect that all chemical reactions, including the binding of an enhancer to a promoter region, should follow Michaelis-Menten kinetics, and the Michaelis-Menten function is not at all a switch function. So I have been thinking about cooperative binding. Hill's equation describes a function that is effectively a switch function, given that the Hill coefficient is high enough. However, searching a bit in the literature, it seems that the Hill coefficient never really exceeds 3 (or 5 for extreme estimates). A Hill coefficient within this range gives a logistic-like function that still looks quite suboptimal compared to what a perfect switch function could do.
Are there switch functions in molecular genetics that could translate a concentration into a TRUE/FALSE signal?
How well do they simulate the perfect switch function?
Are they based on cooperative binding or on some other mechanism?
References for estimate of Hill coefficients:
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC1435008/pdf/biophysj00200-0109.pdf
http://onlinelibrary.wiley.com/doi/10.1046/j.1537-2995.1983.23283172857.x/abstract
http://www.pnas.org/content/93/19/10078.short
http://www.ncbi.nlm.nih.gov/pubmed/8816754
http://www.fasebj.org/content/11/11/835.short
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC2816740/
Answer: Positive feedbacks can be one alternative. Positive feedbacks exhibit bistability and can therefore adopt one of two stable states depending on the initial condition. A famous example of a positive feedback switch would be that between cI and Cro in λ-phage, which repress each other. Positive feedbacks also display hysteresis: if the state of the system is dependent on an inducing ligand X such that the system attains state S2 at high levels of the ligand, then to switch from S2 to S1 the concentration of X would have to be reduced below the level that was required for the S1 to S2 switch; and vice-versa. In other words, the system tries to remain in its current state.
If the transcription factors that constitute the genetic positive feedback have a co-operative behaviour then you can expect a steep switching curve.
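To see how much cooperativity sharpens the response, a small sketch (plain Python; the dissociation constant K is an arbitrary illustrative choice) computes the fold-change in ligand concentration needed to drive a Hill-type response from 10% to 90% saturation:

```python
# Sketch: steepness of a Hill-type response theta(x) = x^n / (K^n + x^n).
# K and the chosen n values are arbitrary illustrative choices.

def hill(x, K, n):
    """Fractional saturation at ligand concentration x."""
    return x**n / (K**n + x**n)

def fold_change_10_to_90(n, K=1.0):
    """Ratio x(90%) / x(10%); analytically this equals 81**(1/n)."""
    x10 = K * 9.0 ** (-1.0 / n)   # theta(x10) = 0.1
    x90 = K * 9.0 ** (1.0 / n)    # theta(x90) = 0.9
    assert abs(hill(x10, K, n) - 0.1) < 1e-9
    assert abs(hill(x90, K, n) - 0.9) < 1e-9
    return x90 / x10

for n in (1, 3, 5):
    print(n, round(fold_change_10_to_90(n), 2))
```

With n = 1 an 81-fold change in concentration is needed to switch; n = 3 brings this down to about 4.3-fold and n = 5 to about 2.4-fold, which quantifies why a Hill coefficient in the reported range still falls short of a perfect switch (whose fold-change would be 1).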
For details see this review. | {
"domain": "biology.stackexchange",
"id": 3634,
"tags": "genetics, molecular-biology, molecular-genetics, gene-expression"
} |
How dependent is Earth on the other planets' gravity? | Question: How would the Earth's orbit around the Sun be affected if the Moon was destroyed? What if Mars or Jupiter was destroyed?
Answer: The orbital speed of the Moon around the Earth is 1022 m/s. The Earth is 81 times more massive than the Moon, so the Earth circles the Earth–Moon barycentre at $\frac{1022}{81}\approx 12.6\ \mathrm{m/s}$. A sudden disappearance of the Moon would therefore change the Earth's orbital velocity by about $12.6\ \mathrm{m/s}$, which would be negligible.
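A quick back-of-the-envelope check of that number (rounded masses and speeds; the 29.8 km/s orbital speed of the Earth is brought in only for comparison):

```python
# Earth orbits the Earth-Moon barycentre; its speed around it is the
# Moon's orbital speed scaled down by the mass ratio.
v_moon = 1022.0          # m/s, Moon's orbital speed (approx.)
mass_ratio = 81.0        # Earth mass / Moon mass (approx.)
v_earth_wobble = v_moon / mass_ratio

v_earth_orbit = 29780.0  # m/s, Earth's orbital speed around the Sun (approx.)
print(round(v_earth_wobble, 1))                        # ~12.6 m/s
print(round(v_earth_wobble / v_earth_orbit * 100, 2))  # ~0.04 percent
```

So the worst-case velocity kick is only a few hundredths of a percent of the Earth's orbital speed.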
The disappearance of Mars or Jupiter would have an even smaller effect. | {
"domain": "astronomy.stackexchange",
"id": 3213,
"tags": "planet, orbit, gravity, solar-system"
} |
microstrain IMU in PR2 | Question:
Hello,
Can anyone tell me in which orientation (where the sensor's x,y,z directions are) the microstrain IMU is attached to PR2?
The reason I am asking this is because I am a little confused about transforming the data coming from the IMU.
i) In http://www.ros.org/wiki/microstrain_3dmgx2_imu it says that "The orientation matrix is the transpose of the orientation matrix returned by the hardware, rotated 180 degrees around the y axis." and I could find this in the code:
tf::Quaternion quat;
(tf::Matrix3x3(-1,0,0,
0,1,0,
0,0,-1)*
tf::Matrix3x3(orientation[0], orientation[3], orientation[6],
orientation[1], orientation[4], orientation[7],
orientation[2], orientation[5], orientation[8])).getRotation(quat);
tf::quaternionTFToMsg(quat, data.orientation);
I actually don't quite get the logic behind this. I just feel that the North-East-Down (NED) convention can be converted to the East-North-Up (ENU) convention by just rotating PI around the X axis.
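Regarding (i), one can numerically check what a PI rotation about X actually does to the NED basis vectors (a plain-Python sketch, independent of the PR2 or this driver):

```python
import math

def rot_x(angle):
    """Rotation matrix about the x axis."""
    c, s = math.cos(angle), math.sin(angle)
    return [[1, 0, 0], [0, c, -s], [0, s, c]]

def apply(R, v):
    return [sum(R[i][j] * v[j] for j in range(3)) for i in range(3)]

R = rot_x(math.pi)
north, east, down = [1, 0, 0], [0, 1, 0], [0, 0, 1]
# North stays North, East maps to -East (i.e. West), Down maps to Up,
# so a pi rotation about X turns NED into a North-West-Up frame.
print([round(x) for x in apply(R, north)])  # [1, 0, 0]
print([round(x) for x in apply(R, east)])   # [0, -1, 0]
print([round(x) for x in apply(R, down)])   # [0, 0, -1]

# A true NED -> ENU change swaps the first two axes and flips the third.
# Its determinant is +1, so it is a proper rotation
# (by pi about the axis (1, 1, 0)/sqrt(2)), not just a reflection.
ned_to_enu = [[0, 1, 0], [1, 0, 0], [0, 0, -1]]
```

So a PI rotation about X gives a north-west-up frame rather than ENU, which may be part of the confusion; the driver's matrix above, diag(-1, 1, -1), is the 180-degree rotation around Y that the wiki text mentions, which is yet another convention.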
ii) In the PR2 URDF I see
<!-- imu -->
<xacro:microstrain_3dmgx2_imu_v0 name="imu" parent="${name}" imu_topic="torso_lift_imu/data" update_rate="100.0" stdev="0.00017" >
<origin xyz="-0.02977 -0.1497 0.164" rpy="0 ${M_PI} 0" />
</xacro:microstrain_3dmgx2_imu_v0>
the reason for rpy="0 ${M_PI} 0" will be clear if I know the actual IMU orientation in PR2
Thank you
Originally posted by ChickenSoup on ROS Answers with karma: 387 on 2013-06-29
Post score: 1
Answer:
Here is a mechanical drawing that shows the 3 axes of rotation on the 3DM-GX2.
http://files.microstrain.com/mechanical-prints/3dm-gx2-dimensions-rotations-axes.pdf
Originally posted by MicroStrain Support with karma: 76 on 2013-07-01
This answer was ACCEPTED on the original site
Post score: 0
Original comments
Comment by ChickenSoup on 2013-07-01:
Thank you for the drawing. But, still it does not answer my question. I need information specific to PR2
Comment by tfoote on 2013-08-12:
The best definition of where the IMU is mounted is the URDF which you cited. | {
"domain": "robotics.stackexchange",
"id": 14753,
"tags": "ros, imu, microstrain, pr2"
} |
Why is the Higgs boson spin 0? | Question: Why is the Higgs boson spin 0? Detailed equation-form answers would be great, but if possible, some explanation of the original logic behind this feature of the Higgs mechanism (e.g., "to provide particles with mass without altering their spin states") would also be appreciated.
Answer: The Higgs boson is, by definition, the excitation of the field behind the Higgs mechanism. The Higgs mechanism is a spontaneous symmetry breaking. Spontaneous symmetry breaking means that the laws of physics, or the action $S$, is symmetric with respect to some symmetry $G$, i.e.
$$\delta_G S = 0$$
however, the vacuum state of the quantum field theory isn't symmetric under the generators of this symmetry,
$$ G_i|0\rangle \neq 0$$
If we want to satisfy these conditions at the level of classical field theory, there must exist a field $\phi$ such that the vacuum expectation value
$$\langle \phi\rangle \equiv \langle 0 | \phi (x)| 0 \rangle$$ isn't symmetric under $G$,
$$\delta_G \langle \phi\rangle \neq 0 $$
However, if the field $\phi$ with the nonzero vev had a nonzero spin, the vacuum expectation value would also fail to be symmetric under the Lorentz symmetry because particular components of a vector or a tensor would be nonzero and every nonzero vector or tensor, except for functions of $g_{\mu\nu}$ and $\epsilon_{\lambda\mu\nu\kappa}$, breaks the Lorentz symmetry.
Because one only wants to break the (global part of the) gauge symmetry but not the Lorentz symmetry, the field with the nonzero vev has to be Lorentz-invariant i.e. singlet i.e. spin-zero $j=0$ field, but it must transform in a nontrivial representation of the group that should be broken, e.g. $SU(2)\times U(1)$. The Standard Model Higgs is a doublet under this $SU(2)$ with some charge under the $U(1)$ so that the vev is still invariant under a different "diagonal" $U(1)$, the electromagnetic one. The Higgs component that has a vev is electrically neutral, thus keeping the electromagnetic group unbroken, photons massless, and electromagnetism being a long-range force.
Aside from the Higgs mechanism, there exist other, less well-established proposed mechanisms for breaking the electroweak symmetry and making the W-bosons and Z-bosons massive. They go under the names "technicolor", "Higgs compositeness", and so on. The de facto discovery of the 125 GeV Higgs at the LHC has more or less excluded these theories for good. The Higgs boson seems to be comparably light to the W- and Z-bosons and weakly coupled, close to the Standard Model predictions, and the Higgs mechanism sketched above has to be the right low-energy description (up to energies well above the electroweak scale). | {
"domain": "physics.stackexchange",
"id": 98766,
"tags": "particle-physics, quantum-spin, standard-model, higgs, symmetry-breaking"
} |
How to resolve the action of an operator in a power? | Question: I'm deriving the action of the squeezing operator on a non-vacuum Fock state and I have almost finished but can't work out how to apply an operator that is stuck in the power of a constant, not an exponential. I am trying to work out the result of this action:
$$\bigg(\frac{1}{\cosh{|\zeta|}}\bigg)^{\hat{a}^{\dagger} \hat{a} +\hat{b}^{\dagger}\hat{b} + 1} |{0,b}\rangle $$
I suspect the solution is:
$$\frac{1}{\cosh{|\zeta|}}\sqrt{b}|0,b\rangle,$$
but I'm not certain how to show this. I know you can Taylor expand an exponentiated operator, but that doesn't apply here.
Answer: Note that $c^x = \exp[x\ln(c)]$, so for an operator $\hat A$ you would have
$$c^\hat A = \exp[\ln(c) \hat A] = \sum_{n=0}^\infty \frac{[\ln(c)]^n}{n!} \hat A^n$$ | {
"domain": "physics.stackexchange",
"id": 77606,
"tags": "operators, quantum-optics, mathematics, squeezed-states"
} |
Largest Sum Contiguous Subarray - Kadane's algorithm | Question: I am looking for an answer on how to improve this code.
Thanks.
Test class
package test;
import main.algorithms.LargestSumContiguousSubarray;
import org.junit.Assert;
import org.junit.Before;
import org.junit.Test;
public class LargestSumContiguousSubarrayTest {
LargestSumContiguousSubarray largestSumContiguousSubarray;
@Before
public void setUp(){
largestSumContiguousSubarray = new LargestSumContiguousSubarray();
}
@Test
public void testSumContiguousSubArray(){
int[] a = {-2, -3, 4, -1, -2, 1, 5, -3};
int sum = 7;
Assert.assertEquals(sum, largestSumContiguousSubarray.kadenesAlgo(a));
}
}
LargestSumContiguousSubarray.java class
package main.algorithms;
public class LargestSumContiguousSubarray {
// O(n)
// Kadane's algorithm
public int kadenesAlgo(int[] a) {
// This is also works for negative numbers
int max_so_far = a[0];
int curr_max = a[0];
for(int i=0;i<a.length; i++){
curr_max = Math.max(a[i], curr_max+a[i]);
max_so_far = Math.max(max_so_far, curr_max);
}
return max_so_far;
}
}
Answer: @Doi9t 's answer already covered the ways to improve your code. The first iteration of your for loop will perform these assignments:
curr_max = a[0]
//inside your loop
curr_max = Math.max(a[0], curr_max + a[0]);
max_so_far = Math.max(a[0], curr_max);
So the first iteration will return a value of max_so_far equal to 2 * a[0] if a[0] >= 0, or a[0] if a[0] < 0, and this is a bug: for example, your method will return the value 2 for the array {1}. I have seen on Wikipedia that the algorithm sets the variables max_so_far and curr_max to 0, which would solve this bug (starting the loop at i = 1 instead also fixes it, and unlike initializing to 0 it stays correct for arrays of all-negative numbers). | {
"domain": "codereview.stackexchange",
"id": 39058,
"tags": "java, interview-questions"
} |
Why do white dwarfs cool down so slowly? | Question: I read that when white dwarfs do not proceed with nuclear fusion, the heat radiation from it is solely based on heat it retained in the past
But then, it floats in an almost 0 K empty space. So, why does it take a million (or many many more) years to cool down (to a black dwarf?)?
Shouldn't this temperature difference between them and the environment (space) cool them down much faster? Or maybe the emptiness of the space causes this very slow temperature outflow?
Answer: Since space is empty, heat can only be transmitted by radiation. Stefan-Boltzmann's radiation law states that the energy flow is proportional to the surface area, times the temperature to the fourth power. It is true that very hot objects transmit a lot of energy per square meter of surface due to the fourth power: a 10 times hotter object sends out 10,000 times more energy. But the surface area is small. A typical white dwarf is about 100 times smaller than the Sun, and hence has 10,000 times smaller surface area. So as soon as the white dwarf cools to a temperature comparable to the Sun it will have a much harder time radiating away the remaining heat.
The sun could in theory radiate away all its heat in 30 million years if it did not produce more (the thermal radiation timescale), but the dwarf would need (very roughly) 10,000 times that time just due to the surface area issue - literally hundreds of billions of years.
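That scaling argument can be sketched in a few lines (all inputs are the rough, order-of-magnitude numbers from the text above):

```python
# Rough scaling only: Stefan-Boltzmann luminosity L ~ area * T^4, so at the
# same temperature a 100x smaller star radiates 10,000x less power and takes
# roughly 10,000x longer to shed a comparable amount of heat.
t_sun_thermal_yr = 3e7        # Sun's thermal radiation timescale, years (approx.)
radius_ratio = 1.0 / 100.0    # white dwarf radius / solar radius (typical)
area_ratio = radius_ratio**2  # surface area scales with radius squared

t_dwarf_yr = t_sun_thermal_yr / area_ratio
print(f"{t_dwarf_yr:.0e} years")  # ~3e+11, i.e. hundreds of billions of years
```

This ignores the internal structure effects discussed below, so it is only the surface-area part of the story.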
This calculation gets complicated by the internal structure of white dwarfs. They are solid rather than plasma, with dense degenerate matter sharing electrons between atoms. This makes their heat conductivity much bigger than stellar heat conductivity (speeding up cooling). But they also slowly crystalize when they are cool enough, and this releases heat that slows the cooling.
The actual cooling dynamics has been studied a lot since the temperature of white dwarfs give valuable information about their age. | {
"domain": "astronomy.stackexchange",
"id": 7007,
"tags": "star, white-dwarf, heat"
} |
Solving using the master theorem | Question: I am wondering why this
$T(n)=3T(n/4)+n⋅lg(n)$
recurrence can be solved by Master Theorem case 3, but this
$T(n)=2T(n/2)+n⋅lg(n)$
recurrence cannot be solved by the Master Theorem. What is the difference between these two recurrences?
When I searched on Google I found two questions related to these two recurrences.
First Question:
Solving $T(n)= 3T(\frac{n}{4}) + n\cdot \lg(n)$ using the master theorem
Second Question:
$T(n)=2T(n/2)+n\log n$ and the Master theorem
But the two of them seem to contradict each other, and I couldn't figure out the main reason.
Answer: In these cases it's always a good idea to look at the statement of the master theorem. The statement on Wikipedia states regarding case 3 that if
$T(n) = aT(n/b) + f(n)$ where
$f(n) = \Omega(n^c)$ for some $c > \log_b a$ (i.e. $b^c > a$), and
$af(n/b) \leq (1-\epsilon) f(n)$ for some $\epsilon > 0$ and large enough $n$,
then $T(n) = \Theta(f(n))$.
Let's see whether the theorem applies in your two cases. In your first case $a = 3$, $b = 4$, and $f(n) = n\log n$. We have $f(n) = \Omega(n^1)$ and $1 > \log_4 3$, and furthermore $$3f(n/4) = 3(n/4)\log(n/4) \leq (3/4)n\log n = (3/4)f(n).$$
Both conditions are satisfied, and so we can conclude that $T(n) = \Theta(n\log n)$.
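As a numerical sanity check (not a proof), one can unroll the first recurrence with an assumed base case $T(1)=1$ and watch the ratio $T(n)/(n\log n)$ stay bounded:

```python
import math
from functools import lru_cache

# Sanity check for T(n) = 3 T(n/4) + n log n, with an assumed base case T(1) = 1.
@lru_cache(maxsize=None)
def T(n):
    if n <= 1:
        return 1.0
    return 3 * T(n // 4) + n * math.log(n)

# The ratio T(n) / (n log n) stays bounded by a constant (it slowly creeps up
# towards 4, the geometric-series constant sum of (3/4)^j), consistent with
# T(n) = Theta(n log n).
for k in (4, 8, 12):
    n = 4 ** k
    print(k, round(T(n) / (n * math.log(n)), 3))
```

The analogous experiment for $T(n)=2T(n/2)+n\log n$ would show the ratio growing like $\log n$ rather than converging, matching the $\Theta(n\log^2 n)$ result below.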
In your second case $a = b = 2$ and $f(n) = n\log n$ is the same. In this case $f(n) = \Omega(n^c)$ only for $c \leq 1$, whereas we need $c > \log_2 2 = 1$, so we cannot apply case 3. The second condition also doesn't hold, since $2f(n/2) = 2(n/2)\log(n/2) = n\log n - \Theta(n)$, and so it is not true that $2f(n/2) \leq (1-\epsilon) f(n)$ for any $\epsilon > 0$.
However, in this example case 2 of the master theorem applies, showing that $T(n) = \Theta(n\log^2 n)$. | {
"domain": "cs.stackexchange",
"id": 6262,
"tags": "algorithms, algorithm-analysis, master-theorem"
} |
Can a physicist ever disprove the law of conservation of energy? | Question: We are all acquainted with the law of conservation of energy which says that energy remains conserved in an isolated system and this is a basic law of nature.
But what I want to ask is: is there any scope for disproving this famous law? What I think is that the law is too vast to leave space for contradictions.
Consider this from Feynman's lectures
In order to verify the conservation of energy, we must be careful that we have not put any in or taken any out. Second, the energy has a large number of different forms, and there is a formula for each one. These are: gravitational energy, kinetic energy, heat energy, elastic energy, electrical energy, chemical energy, radiant energy, nuclear energy, mass energy. If we total up the formulas for each of these contributions, it will not change except for energy going in and out.
I am considering energy as a number which is calculated using vast number of formulas and essentially remains the same. Now, if some physicist some day finds that the number is not the same, according to me, he/she would most likely add some more items in this long list of forms of energy and that's how it would end.
I don't know whether I am right or wrong, but this is what I feel. Maybe because I consider energy just as a number which is always the same, and that may not be the case. So firstly I want you to tell me what exactly energy is. I am unable to grasp its connection with the symmetries of this universe, and if the answer to my query hides there, then I would like to know it later.
Secondly, I want to know how you as physicist can ever disprove this law. This would ofcourse require me to know, at the first place, what energy is.
I know how mechanical energy is related to work, and have seen how this law helps us to solve a host of problems, but still the idea seems very abstract and becomes incomprehensible when the general idea of energy conservation is introduced. This is most probably because I'm not sure how something gets approved to be considered a form of energy. So I want your help with this and want to know your opinion and the idea behind this abstract concept.
I tried my best to make myself clear.
Answer: It's quite easy to disprove the law of conservation of energy:
You can find a situation where the laws of physics cannot be described by (classical or quantum) lagrangian mechanics.
You can find a situation where the laws of physics change over time, i.e. where doing an experiment at time $t$ and doing the same identical experiment at time $t+\Delta t$ produces a different result.
If you do that, then you're on your way to breaking the conservation law.
If you don't do either of those, then Noether's theorem guarantees that there will be a conserved Noether charge that corresponds to the symmetry under time translations, and generally speaking that conserved charge can always be called an energy.
(To be a bit more clear for the technically minded: to disprove the conservation of energy it is necessary to tick at least one of the two bullet points above, but they're not sufficient, either ─ you still have your work cut out even after you've done it, but one core fundamental barrier has now been removed. However, doing away with either of those two features of physical law is so far removed from our current understanding of nature that it's pretty pointless to speculate about how you might proceed after you've cleared that barrier.) | {
"domain": "physics.stackexchange",
"id": 47366,
"tags": "energy, energy-conservation, symmetry"
} |
Snakemake: Cannot find first rule? | Question: Back again with another snakemake query. This time I decided to port my read cleaning and alignment pipeline to snakemake. Repeating the steps from previous question and trying not to make any typo here, I'm now receiving this "error" while trying a dry run using snakemake snakemake -nr -s alignMake:
Building DAG of jobs...
MissingRuleException:
No rule to produce snakemake (if you use input functions make sure that they don't raise unexpected exceptions).
Below is my snakefile:
import os
shell.executable("/bin/bash")
workdir: '/proj/uppstore2018136/publication/jupyterNotebooks'
pathToPEReads = 'raw_data/reads/BL6/totalRNA/'
pathToSMReads = 'raw_data/reads/BL6/smallRNA/'
pathToHISAT2index = 'indexes/hisat2/reference'
pathToBOWTIE2index = 'indexes/bowtie2/reference'
pathToTemp = 'temp/'
pathToBAM = 'processed_data/BAM'
readsPE, = glob_wildcards('raw_data/reads/BL6/totalRNA/{sample}_1.fastq.gz')
readsSE, = glob_wildcards('raw_data/reads/BL6/smallRNA/{sample}.fq.gz')
samToolsThreads = 6
rule all:
input:
expand('processed_data/BAM/{sample}.bam', sample=readsPE),
expand('processed_data/BAM/{sample}.bam', sample=readsSE)
rule trimgalorePE:
input:
fastq1 = pathToPEReads + '{sample}_1.fastq.gz',
fastq2 = pathToPEReads + '{sample}_2.fastq.gz'
output:
output1 = pathToTemp + '{sample}_1_val_1.fq.gz',
output2 = pathToTemp + '{sample}_2_val_2.fq.gz'
shell:
'''
module load bioinfo-tools
module load cutadapt
module load TrimGalore
trim_galore --quality 20 --paired --length 80 --illumina --stringency 2 --clip_R1 11 --clip_R2 11 -o {pathToTemp} --paired {input.fastq1} {input.fastq2}
'''
rule trimgaloreSEandDust:
input:
fastq = pathToSMReads + '{sample}.fq.gz',
output:
output = pathToTemp + '{sample}_trimmed.fq.gz',
shell:
'''
module load bioinfo-tools
module load cutadapt
module load TrimGalore
trim_galore --quality 20 --length 16 --max_length 32 --small_rna --fastqc --stringency 2 -o {pathToTemp} {input.fastq}
'''
rule align_hisat2:
input:
fastq1 = pathToTemp + '{sample}_1_val_1.fq.gz',
fastq2 = pathToTemp + '{sample}_2_val_2.fq.gz'
output:
output = 'processed_data/BAM/{sample}.bam'
threads: 10
shell:
'''
module load bioinfo-tools
module load HISAT2
module load samtools
hisat2 -q -p {threads} --no-discordant --no-mixed --rna-strandness RF --dta -x {pathToHISAT2index} -1 {input.fastq1} -2 {input.fastq2} | samtools sort -@ {samToolsThreads} -o {output}
'''
rule align_bowtie2:
input:
fastq = pathToTemp + '{sample}_trimmed.fq.gz'
output:
output = 'processed_data/BAM/{sample}.bam'
threads: 10
shell:
'''
module load bioinfo-tools
module load bowtie2
module load samtools
bowtie2 --no-unal --end-to-end -D 20 -R 3 -N 1 -L 20 -i S,1,0.50 -p {threads} -x {pathToBOWTIE2index} -U {input.fastq} | samtools sort -@ {samToolsThreads} -o {output}
'''
rule clean:
shell:
'''
rm temp/*
'''
I believe it is due snakemake being unable to find the first rule ? Maybe there is something wrong with parsing ? I tested the glob_wildcards and expand function independently and they appear to work properly.
Edit: Here is the directory tree:
.
├── citations
├── doc
├── genome
│ ├── genome.fa
│ └── Mus_musculus.GRCm38.94.gtf
├── indexes
│ ├── bowtie2
│ │ ├── reference.1.bt2
│ │ ├── reference.2.bt2
│ │ ├── reference.3.bt2
│ │ ├── reference.4.bt2
│ │ ├── reference.rev.1.bt2
│ │ └── reference.rev.2.bt2
│ └── hisat2
│ ├── reference.1.ht2
│ ├── reference.2.ht2
│ ├── reference.3.ht2
│ ├── reference.4.ht2
│ ├── reference.5.ht2
│ ├── reference.6.ht2
│ ├── reference.7.ht2
│ └── reference.8.ht2
├── LICENSE
├── notebooks
├── plots
├── processed_data
│ └── BAM
├── raw_data
│ └── reads
│ └── BL6
│ ├── smallRNA
│ │ ├── smBL6_1.fq.gz
│ │ ├── smBL6_2.fq.gz
│ │ └── smBL6_3.fq.gz
│ └── totalRNA
│ ├── EKIxxxx012_1.fastq.gz
│ ├── EKIxxxx012_2.fastq.gz
│ ├── EKIxxxx013_1.fastq.gz
│ ├── EKIxxxx013_2.fastq.gz
│ ├── EKIxxxx014_1.fastq.gz
│ ├── EKIxxxx014_2.fastq.gz
│ ├── ESSxxxx077_1.fastq.gz
│ └── ESSxxxx077_2.fastq.gz
├── README.md
├── recipes
│ ├── alignMake
│ ├── cluster.json
│ ├── indexMake
│ ├── indexMakeScript.out
│ └── indexMakeScript.sh
├── results
├── scripts
├── slurmLogs
│ ├── slurm-6575541.out
│ ├── slurm-6575543.out
│ ├── slurm-6575815.out
│ └── slurm-6580867.out
├── source
└── temp
Answer: you are starting snakemake the wrong way.
snakemake snakemake
means "Running the pipeline in the Snakefile until you reach a rule named snakemake or a file named snakemake is created."
If your snakefile is called snakemake and this is the pipeline you want to start, the correct syntax would be:
snakemake -s snakemake | {
"domain": "bioinformatics.stackexchange",
"id": 814,
"tags": "python, snakemake"
} |
Hydrostatic pressure in a gas | Question: I have a physical picture of hydrostatic pressure ($dp/dh = \rho g$) in liquids, interpreting it as the weight of the water column.
Now in a gas, the molecules are much further apart than in a liquid. Still the same hydrostatic pressure exists in gases (e.g. air) as well.
How am I to imagine the "weight of an air column" if molecules are not exactly lying on each other? At the moment I am imagining all those air molecules bouncing off the ground and other surfaces (elastically).
Does the hydrostatic pressure result from some time-average of such bounces? Why is the hydrostatic pressure the same in all directions then?
Answer: When a gas is too dilute, you cannot use hydrodynamics anymore and you are going to have to rely solely on statistical mechanics.
This sets the difference between the continuum and molecular régimes.
Hydrodynamics in this case is an 'equilibrium' state where you assume the gas has a homogeneous density because it has had time to thermalise. Hence all particles everywhere in the gas have the same velocity distribution / the same rate of impinging on the wall, and hence the same pressure is exerted on all surfaces (which I reckon is referred to as Pascal's principle).
If you were to kick the gas with a piston from one side, then the 'disturbance' of density would travel across the gas at the speed of sound, and reflect off the other end in the case of a finite volume. Thermalisation, i.e. the transferral of the kick energy into equal momentum of all particles, will also happen on a timescale governed by the speed of sound in the gas.
The Knudsen number quantifies whether you can use the hydrodynamic approach. It compares the mean free path of the particles, i.e. the distance they cover before they hit another particle, to the physical lengthscale of the system. If the former is much larger than the latter, then you are in the molecular regime, governed by statistical mechanics.
To give you an idea, an Ultra High Vacuum chamber (pressures of $10^{-11}$ mbar) can have mean free paths of tens of kilometres, albeit being about a litre in volume.
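That order of magnitude can be sketched with the standard hard-sphere estimate $\lambda = k_B T/(\sqrt{2}\,\pi d^2 p)$ (the molecular diameter here is an assumed N2-like value, and the result is sensitive to it and to the exact pressure):

```python
import math

# Hard-sphere mean free path estimate; d is an assumed N2-like diameter.
k_B = 1.380649e-23   # Boltzmann constant, J/K
T = 300.0            # room temperature, K
d = 3.7e-10          # assumed hard-sphere molecular diameter, m
p = 1e-9             # 1e-11 mbar expressed in Pa

mean_free_path = k_B * T / (math.sqrt(2) * math.pi * d**2 * p)
print(f"{mean_free_path:.1e} m")

# Knudsen number for a ~0.1 m chamber: Kn >> 1, deep in the molecular regime.
Kn = mean_free_path / 0.1
```

With these particular inputs the estimate comes out even larger than tens of kilometres; whatever the exact numbers, the mean free path dwarfs the ~0.1 m chamber, so the Knudsen number is huge and the gas is deep in the molecular regime.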
In the statistical mechanics case then yes, you rely on time averages.
Which, to be fair, also gives you the real answer in the hydrodynamic case, since a continuum fluid is none other than a mass of single particles. But hydrodynamics is a self-consistent, simpler picture, so one uses that if one can.
If you wish to resolve timescales that are smaller than the average time, then you have to rely on statistical mechanics also for the time evolution of pressure oscillations etc. | {
"domain": "physics.stackexchange",
"id": 77685,
"tags": "pressure, air, fluid-statics, gas"
} |
Tension problem | Question: A weight "m" is suspended from the center of a light, taut and originally horizontal rope. After suspending the weight "m", what angle must the rope make with the horizontal if the tension in the rope is to equal the weight "m"?
Here is what I have so far.
I'm not sure what to do from here on out. I'm assuming T1 and T2 will be equal and i'm also assuming that T1 and T2 have about half the mass of M. So is the idea to solve for M then substitute I'm guessing but when I did that i ended up with something like 2sin(theta)=sin(theta).
Answer:
I'm assuming T1 and T2 will be equal
What made you assume that? They may or may not.
and i'm also assuming that T1 and T2 have about half the mass of M.
"about" when the strings are almost equal, not always
So is the idea to solve for M then substitute I'm guessing but when I did that i ended up with something like 2sin(theta)=sin(theta).
that is $2\sin\theta=\sin\theta\implies\theta=0$? that doesn't make sense
You should do:
In equilibrium, in a suitable cordinate axes the forces (or their respective components along the axis in consideration) should balance each other.
Take into account the weight of the body and the tension in the two wires, and try to balance the triple point at equilibrium. | {
"domain": "physics.stackexchange",
"id": 20644,
"tags": "homework-and-exercises, newtonian-mechanics, forces"
} |
Distribution of point charges on a line of finite length | Question: How will $N$ freely moving charges confined to a line with length $L$ be distributed? What are their equilibrium positions?
Answer: This problem has been solved by Griffiths in
Charge density of a conducting needle. David J. Griffiths and Ye Li. Am. J. Phys. 64 no. 6 (1996), p. 706. PDF from colorado.edu.
The problem is nontrivial. | {
"domain": "physics.stackexchange",
"id": 9619,
"tags": "homework-and-exercises, electromagnetism, electrostatics, charge"
} |
If there are two longest chains possible in an organic compound and both have the same number of substituents, how do we decide the parent chain? | Question: Is the parent chain in the above case determined by atomic mass, alphabetical preference or any other rule?
Answer: If two longest chains are possible with the same number of side chains, then we consider the sum of the locants of the side chains. The chain in which the sum of the locants of the side chains is lowest is taken into consideration. | {
"domain": "chemistry.stackexchange",
"id": 3327,
"tags": "organic-chemistry, nomenclature"
} |
Finite length truncated exponential sequence $\mathcal Z$-transform, zeros over circle explanation | Question: I'm looking at an example on how to obtain the $\mathcal Z$-transform from a finite length truncated exponential sequence, namely:
$$x[n] =
\begin{cases}
a^n &\text{for} & 0 \leq n \leq N-1\\[2ex]
0 & \text{otherwise}
\end{cases}
$$
The $\mathcal Z$-transform from that sequence is:
$$
\frac {1}{z^{N-1}} \cdot \frac {z^{N} - a^N}{z - a}
$$
It can be seen that there are $N-1$ poles at the origin and $N-1$ zeros at $z = a$ (because the pole in $z=a$ is cancelled by a zero). My doubt is on the location of the zeros over the $z$-plane.
I know that the $N-1$ zeros at $z = a$ can be expressed as:
$z = a\cdot1\ \text{and}\ 1 = e^{j2\pi k}$ so:
$$
z_k = ae^{j2\pi k}\quad\text{with}\quad k = 1,2,\ldots,N-1
$$
Yet in the example I'm looking at, they say that the zeros are located at
$$z_k = ae^{j\frac{2\pi k}{N}}\quad\text{with}\quad k = 1,2, \ldots, N-1$$
So they are spaced at $\frac{2\pi}{N}$ instead of being all at the same place and I don't understand where that $N$ came from.
Answer: $$ z^N = a^N \cdot 1$$ where $1$ is recognized equivalently as $$1=e^{j2\pi k}$$ allowing complex roots to be distinguished and hence, the N complex roots become : $$z^N = a^N \cdot e^{j2\pi k}$$ $$ z_k = a \cdot e^{j \frac{2\pi}{N}k}$$ for $k = 0,1,...,N-1$
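A quick numerical check (plain Python with arbitrary illustrative values of $a$ and $N$) confirms that these $z_k$ are indeed $N$ distinct roots of $z^N = a^N$, equally spaced by $2\pi/N$ on the circle of radius $a$:

```python
import cmath
import math

a, N = 0.8, 5  # arbitrary illustrative values

# The claimed roots z_k = a * exp(j 2 pi k / N), k = 0, ..., N-1.
roots = [a * cmath.exp(1j * 2 * math.pi * k / N) for k in range(N)]

for z in roots:
    assert abs(z**N - a**N) < 1e-12  # each z_k satisfies z^N = a^N
    assert abs(abs(z) - a) < 1e-12   # all lie on the circle |z| = a

# Adjacent roots are spaced by 2*pi/N in angle.
angles = sorted(cmath.phase(z) % (2 * math.pi) for z in roots)
spacings = [angles[i + 1] - angles[i] for i in range(N - 1)]
assert all(abs(s - 2 * math.pi / N) < 1e-9 for s in spacings)
```

Only $k = 1, \ldots, N-1$ survive as zeros of the transform, since the $k = 0$ root $z = a$ cancels against the pole at $z = a$.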
All the roots have the same magnitude; i.e., $|z_k| = a$ for all $k$, however they are complex numbers and as such they have phase values, which distinguishes them, by effectively distributing the roots along the circumference of a circle with radius $a$ equidistantly in angle. | {
"domain": "dsp.stackexchange",
"id": 5398,
"tags": "z-transform, poles-zeros"
} |
Completion time of autonomous navigator of ROS | Question:
Can the autonomous navigator of ROS give two different completion times for two experiments in the same environment with the same starting and ending positions?
Will the default recovery behaviors of the move_base node be exactly the same for the two experiments?
Thanks in advance.
Originally posted by RB on ROS Answers with karma: 229 on 2014-10-13
Post score: 2
Answer:
1 Yes. There are no guarantees on that, and I'd actually assume that you will not get the same results. First, I think you're obviously talking about simulation and not real robots. But even then, and even if the simulation is running without any noise, we don't have a round-based system where data is passed on synchronously. For example, a single laser scan might not arrive in the same time window between runs, leading to a slightly different costmap and thus behaviour. This is even excluding the fact that algorithms might do random sampling. This will affect everything happening after that, as it all depends on each other.
That being said, we are talking about results that are very close to the same. Depending on the stability of the algorithms and the simulation, you can assume similar values, as you would on a real-world system.
2 The settings/configuration if you pass them in will be the same. The actually observed behavior will be affected by the same things as 1.
Originally posted by dornhege with karma: 31395 on 2014-10-13
This answer was ACCEPTED on the original site
Post score: 1
Original comments
Comment by paulbovbel on 2014-10-13:
DWA (commonly used for local navigation if using the nav stack) is a sampling based planner, so there'd be a major source of instability right there.
Comment by David Lu on 2014-10-13:
Note: It is sampled in the sense that it does not find the true optimal trajectory, but samples evenly in a given velocity range. It is not random sampling. | {
"domain": "robotics.stackexchange",
"id": 19717,
"tags": "navigation, odometry, move-base"
} |
String search algorithm for finding if a string from a long list of long strings exists in a similarly sized haystack | Question: My use case is that I have a list of emails and I need to find if the email is in the chain:
emails = [
[' RE: I just got your test || This is a top level email roughly 300+ words in length on average'],
['RE:RE: Something | RE: Something | This is another entirely distinct email that is the top level'],
['RE: FooBar | A third email that is entirely distinct']
... #performance issues start occurring with emails growing to > 50 items
]
Now I would search for an email which is a reply:
test_email = """
A third email that is entirely distinct
"""
Currently I am using a KMP-type search, but I believe this is inefficient because it searches for multiple copies of the needle. I also believe the in operator is inefficient because it uses the naive method of searching the string.
An implementation of KMP in python I found and modified slightly:
class KMP:
@classmethod
def partial(self, pattern):
""" Calculate partial match table: String -> [Int]"""
ret = [0]
for i in range(1, len(pattern)):
j = ret[i - 1]
while j > 0 and pattern[j] != pattern[i]:
j = ret[j - 1]
ret.append(j + 1 if pattern[j] == pattern[i] else j)
return ret
@classmethod
def search(self, T, P):
"""
KMP search main algorithm: String -> String -> [Int]
Return all the matching position of pattern string P in S
"""
partial, ret, j = KMP.partial(P), [], 0
for i in range(len(T)):
while j > 0 and T[i] != P[j]:
j = partial[j - 1]
try:
if T[i] == P[j]: j += 1
except:
return False
if j == len(P):
return True
return False
Used here:
for choice in choices: #choice[0] is email in the reply chain
if current_email == choice[0]:
incident_type = choice[1]
break
elif len(current_email) > len(choice[0]):
if KMP.search(current_email, choice[0]):
incident_type = choice[1]
break
else:
if KMP.search(choice[0], current_email):
incident_type = choice[1]
break
Should I fix this implementation or scrap it in favor of another algo?
Answer: In the post you write:
I also believe the in operator is inefficient because it is using the naïve method of searching the string.
But if you look at the implementation of Python's in operator for strings, you find that it calls the FASTSEARCH function, which is "based on a mix between Boyer–Moore and Horspool".
The problem of searching many times for strings in a collection of documents is known as full-text search. The approach in the post is to search for the string in each document in turn: this scales linearly with the number and length of the documents. To improve on this scaling behaviour, you need to preprocess the collection of documents into an index. (Note that this only helps if you are searching many times—if you are searching only once, then you can't do better than searching each document.)
Here's a simple demonstration of the full-text search approach:
from collections import defaultdict

class SearchIndex:
    "A full-text search index."

    def __init__(self):
        # Mapping from word to set of documents containing that word.
        self._index = defaultdict(set)

    def add(self, document):
        "Add document to the search index."
        for word in document.split():
            self._index[word].add(document)

    def search(self, query):
        "Generate the documents containing the query string."
        candidates = min((self._index[word] for word in query.split()), key=len)
        for document in candidates:
            if query in document:
                yield document
This works by building a mapping from words to the sets of documents containing each word (an "inverted index"). When a document is added to the index, it is split into words by calling str.split, and the document is added to the mapping for each word. This is conveniently implemented using collections.defaultdict.
To search for a string, the string is also split into words, and the index is consulted for each word. The rarest word (the one mapping to the fewest documents) is used to get a set of candidates, and then each candidate is searched for the entire string.
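To make this concrete, here is the class repeated as a self-contained sketch together with a small usage example (the sample documents below are invented for illustration; they are not from the original post):

```python
from collections import defaultdict

class SearchIndex:
    "A full-text search index (same as above)."

    def __init__(self):
        # Mapping from word to set of documents containing that word.
        self._index = defaultdict(set)

    def add(self, document):
        "Add document to the search index."
        for word in document.split():
            self._index[word].add(document)

    def search(self, query):
        "Generate the documents containing the query string."
        candidates = min((self._index[word] for word in query.split()), key=len)
        for document in candidates:
            if query in document:
                yield document

index = SearchIndex()
index.add("RE: FooBar | A third email that is entirely distinct")
index.add("RE: Something | This is another entirely distinct email")

# The rarest query word narrows the candidate set before the exact
# substring check runs on each candidate.
print(list(index.search("A third email")))
```

Note that only the candidates containing the rarest query word are scanned with the (fast) substring check, which is where the speed-up over scanning every document comes from.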
This is a very simple approach and there are many refinements you can make. In particular, if you are going to be making queries of the same set of documents over a long period of time then you will want to make your inverted index persistent, and for that you will want a full-text search engine or database. | {
"domain": "codereview.stackexchange",
"id": 31979,
"tags": "python, performance, search"
} |
Ship a ROS package with Docker | Question:
Hi everyone,
I have a ROS package which has been well-tested with Caffe and Tensorflow + Cuda.
It is quite a burden explaining to users how to use this package and I feel it is best to ship it to users as a docker image. Now, on the docker hub, I see osrf/ros images for example, indigo as well as gazebo (which btw are dependencies for this project).
I have created the dockerfile as well as built the docker image but I find that the ros environment within the docker build is not active. I cannot catkin_init_workspace, catkin build nor catkin make. For the record, I pulled the OSRF/ros_indigo image in building my ros environment. There is no ros installation path in /opt/ros/... either.
The instructions are scarce and I wonder what I may be doing wrong.
Would appreciate your help.
Originally posted by lakehanne on ROS Answers with karma: 152 on 2017-04-19
Post score: 0
Answer:
I think you are making a mistake in your Dockerfile. You cannot layer docker images the way you have tried.
I believe that when you have:
FROM ros:indigo-ros-core
LABEL maintainer "patlekano@gmail.com"
# install ros packages
RUN apt-get update && apt-get install -y \
ros-indigo-ros-base=1.1.4-0* \
&& rm -rf /var/lib/apt/lists/*
# Install gazebo
FROM gazebo:gzserver6
FROM ubuntu:trusty
The image will actually only use ubuntu:trusty and 'forget' about the things you did before. You are probably best off starting from the image that has most of what you want, and then installing the rest on top of it (using apt). It's a bit of a hassle, but I recently created a CUDA-enabled trusty-ros-indigo-moveit image this way and it worked fine. I have removed a couple of things, so I'm not entirely certain, but I think it should work like this. The entrypoint is the same as the one from indigo-ros-core.
FROM nvidia/cuda:8.0-devel-ubuntu14.04
# setup environment
RUN locale-gen en_US.UTF-8
ENV LANG en_US.UTF-8
# setup sources.list
RUN echo "deb http://packages.ros.org/ros/ubuntu trusty main" > /etc/apt/sources.list.d/ros-latest.list
# install bootstrap tools
RUN apt-get update && apt-get install --no-install-recommends -y \
python-rosdep \
python-rosinstall \
python-vcstools \
&& rm -rf /var/lib/apt/lists/*
# bootstrap rosdep
RUN rosdep init \
&& rosdep update
# install ros packages
ENV ROS_DISTRO indigo
RUN apt-get update && apt-get install -y \
ros-indigo-desktop-full=1.1.4-0* \
&& rm -rf /var/lib/apt/lists/*
# setup entrypoint
COPY ./ros_entrypoint.sh /
ENTRYPOINT ["/ros_entrypoint.sh"]
CMD ["bash"]
# ros-indigo-ros-base
RUN apt-get update && apt-get install -y \
ros-indigo-ros-base=1.1.4-0* \
&& rm -rf /var/lib/apt/lists/*
# moveit-indigo-ci
ENV TERM xterm
# Setup catkin workspace
ENV CATKIN_WS=/root/ws_moveit
RUN mkdir -p $CATKIN_WS/src
WORKDIR $CATKIN_WS/src
# Commands are combined in single RUN statement with "apt/lists" folder removal to reduce image size
RUN wstool init . && \
# Download moveit source so that we can get necessary dependencies
wstool merge https://raw.githubusercontent.com/ros-planning/moveit/${ROS_DISTRO}-devel/moveit.rosinstall && \
wstool update && \
# Update apt-get because previous images clear this cache
apt-get -qq update && \
# Install some base dependencies
apt-get -qq install -y \
# Required for rosdep command
sudo \
# Required for installing dependencies
python-rosdep \
# Preferred build tool
python-catkin-tools \
# Not sure if necessary:
ros-$ROS_DISTRO-rosbash \
ros-$ROS_DISTRO-rospack && \
# Download all dependencies of MoveIt!
rosdep update && \
rosdep install -y --from-paths . --ignore-src --rosdistro ${ROS_DISTRO} --as-root=apt:false && \
# Remove the source code from this container. TODO: in the future we may want to keep this here for further optimization of later containers
cd .. && \
rm -rf src/ && \
# Clear apt-cache to reduce image size
rm -rf /var/lib/apt/lists/*
# Continous Integration Setting
ENV IN_DOCKER 1
# moveit-indigo-release
RUN apt-get update && \
apt-get install -y \
ros-${ROS_DISTRO}-moveit-* && \
rm -rf /var/lib/apt/lists/*
Hope this helps.
Originally posted by rbbg with karma: 1823 on 2017-04-20
This answer was ACCEPTED on the original site
Post score: 1
Original comments
Comment by lakehanne on 2017-04-23:
Thanks. I will try this. Where is this Dockerfile domiciled? I need to find the file ros_entrypoint.sh. Thanks
Comment by rbbg on 2017-04-24:
It's an extract of a bigger Dockerfile so I haven't tested this exact file and it's online. The entrypoint can be found here. good luck.
Comment by lakehanne on 2017-04-25:
Thanks. May I ask why you installed ros-indigo from source and also pulled ros-indigo-ros-base=1.1.4-0* in the same dockerfile? I just finished building my caffe image with your nvidia/cuda:8.0-devel-ubuntu14.04, ros-indigo-full and ros-indigo-base and I found that my build is about 3GB.
Comment by rbbg on 2017-04-28:
He, looks like I have some duplicate parts in there :) Pretty sure that's an error. I copied most from Dockerfiles from osrf and MoveIt! and didn't carefully inspect everything, so that's probably how it got in there. The image does get quite large but at least in my case it couldn't be helped.
Comment by lakehanne on 2017-04-28:
Okay. Thanks for the clarification. I have cleaned it up and my dockerfile is here. Thank you!
Comment by lakehanne on 2017-04-28:
BTW, how did you compile your cuda-8.0? From .deb or via the .run file? I couldn't get my image to populate a matplotlib window as I wanted when I run the docker image with DISPLAY support.
Comment by lakehanne on 2017-04-29:
I have removed the second ros-indigo-base install file and it's lesser in bytes. Thanks for your help though. | {
"domain": "robotics.stackexchange",
"id": 27656,
"tags": "ros, docker"
} |
Horizontal "external beams" on a steep section of a roof | Question: The pictures below are screen shots of what appears to be a barn in the Ukraine. One is a top down aerial view and the other a ground up, side view.
What intrigues me is the purpose of the apparent "external beams" placed horizontally along the steep part of the roof. They appear to have a triangular cross section. Also, they are mounted at differing elevations. I'm guessing they have something to do with snow on the roof during winter. Does anyone know why these horizontal "external beams" are placed where they are?
Answer: In colder climates where you can expect snow and potentially freezing rain, you can get a build-up of snow or, worse, sheets of ice. At some point the snow or ice sheet may release its bond with the roof and begin to slide; a micro-avalanche, if you wish. Smooth roofs and/or those with high slopes are more prone to this effect.
In order to protect pedestrians in areas below the edge of the roof, it is common to put up snow guards. They appear to me to be primarily designed to break up sheets so they do not land in one solid piece on the ground but instead in many smaller pieces. The staggering of the horizontal pieces could be an architectural feature or a practical one to help snow build-up pass around the sides through the resulting gaps.
Personally I would not expect the need for it on such a steep roof line for snow, but freezing rain will build up even on vertical surfaces. When the freezing rain debonds from the metal roof, these guards will help break up the sheet. | {
"domain": "engineering.stackexchange",
"id": 4702,
"tags": "building-design"
} |
Camouflage grasshopper identification | Question: I came across the following grasshopper this afternoon while walking to my car. It was initially resting amongst some fallen tree leaves, but as I got closer to my car I must have startled it, as it then ran to the tree, and is also how I noticed it.
It was calm enough for me to temporarily hold it in a cut-in-half water bottle, and I was able to take a few more pictures, and record a brief video. Afterwards, I released the grasshopper back into the grass and it promptly flew away. :)
When searching for its species, I was only able to find a desert grasshopper, Trimerotropis pallidipennis, but I live in KY, and not out west, so I don't think that's it.
Can someone ID this for me please?
Location: Central KY, USA.
Answer: You've found a Dissosteira carolina, or Carolina locust.
I'm not completely sure, but I think this is a male.
They are quite common in North America.
idtools.org/id/grasshoppers/factsheet Dissosteira carolina
uwyo.edu: Dissosteira carolina | {
"domain": "biology.stackexchange",
"id": 7835,
"tags": "species-identification, entomology"
} |
Chess board representation in Java | Question: I'm currently working on a small chess game written in Java (on GitHub). The board is modeled as a Board object with a 2D array of Piece objects :
public class Board {
    private final int ROWS = 8;
    private final int COLS = 8;
    private Piece[][] board;
    private List<Move> moveList;
    [...]
}
At first, I tried to implement all the board's possible states / legal move generation (isCheck, isCheckMate, isStaleMate, legalMoves...) inside the Board class.
For example :
private List<Move> moves(Color color) {
    List<Move> allMoves = new ArrayList<Move>();
    for (int row = 0; row < ROWS; row++) {
        for (int col = 0; col < COLS; col++) {
            Square src = new Square(row, col);
            Piece piece = getPiece(src);
            if (piece == null || !piece.isColor(color))
                continue;
            allMoves.addAll(piece.availableMoves(src, this));
        }
    }
    return allMoves;
}
However, it ended up being about 300 lines and was not very easy to read (especially because of the duplication of the board iteration loops).
So I decided to try another approach : I removed all the state evaluation code and replaced it with this method :
public void accept(BoardVisitor bv) {
    for (int row = 0; row < rows; row++) {
        for (int col = 0; col < cols; col++) {
            bv.visit(board[row][col], new Square(row, col));
        }
    }
}
I then created a set of classes to "evaluate" the different states :
public class CheckEvaluator implements BoardVisitor {
    private Square kingSquare;
    private Board board;
    private Color color;
    private boolean isCheck = false;

    public CheckEvaluator(Color color, Board board) {
        this.board = board;
        this.color = color;
    }

    @Override
    public void visit(Piece piece, Square src) {
        isCheck = isCheck || piece.canGoTo(src, kingSquare, board);
    }

    public boolean getResult() {
        this.kingSquare = board.findKing(color);
        board.accept(this);
        return isCheck;
    }
}
I regrouped all these evaluators inside a single class :
public class BoardEvaluator {
    private Board board;

    public BoardEvaluator(Board board) {
        this.board = board;
    }

    public boolean isCheck(Color color) {
        CheckEvaluator ce = new CheckEvaluator(color, board);
        return ce.getResult();
    }

    public boolean isCheckMate(Color color) {
        CheckMateEvaluator cme = new CheckMateEvaluator(color, board);
        return cme.getResult();
    }

    public boolean isStaleMate() {
        StaleMateEvaluator sme = new StaleMateEvaluator(board);
        return sme.getResult();
    }

    public List<Move> legalMoves(Color color) {
        LegalMovesEvaluator lme = new LegalMovesEvaluator(color, board);
        return lme.getResult();
    }
}
This version seems clearer and easier to me but I don't have a lot of experience and I'd be very glad to get some feedback about it:
Do you think this is a valid design?
Is my BoardVisitor a good (if simple) implementation of the Visitor pattern?
Answer: Warning! Arm Chair Quarterbacking in progress. Given that I offer this.
Game Class
Why is this Board.moveList in the Board class? You need a "driver" for a chess game and that would be a Game class. "A game consists of (has) moves" makes more sense.
The Game gives us a conceptual framework for a richer chess game. A Game has players, may have a timer for speed chess, and can keep track of pieces removed from the board; and of course records the moves.
Board Class
The chess board is a data structure. Don't make more of it than it is; nor less.
In the Visitor Pattern the data structure has-a element that has an accept method. That element seems to be a Square. I'm not certain if it's better than the Board being visited, but certainly the point is that we're evaluating the state at that one square? I don't see a big deal in giving a board reference to each square.
OR .. maybe the Pieces are visited. To test if the piece is "inCheck" for example. This perspective makes more sense than a square is in check. Is this why your board is Piece[][] and not Square[][]?
Whether we are visiting the board and iterating the squares; or iterating the board and visiting the squares; or iterating the squares and visiting the pieces may be more than semantics. I vote for whatever best reflects intent, gives me good code expressions, and sensible building blocks.
In any case I agree with @bowmore about refactoring Piece[][] to Square[][].
Pieces
Even given a rich Piece class, I like the idea of using an enumeration for names. This makes for nicer coding and expressiveness overall (and my pet peeve - it avoids strings). Maybe two enumerations. As in White.Knight and Black.Queen; or Pieces.WhiteKnight, Pieces.BlackQueen. And a value to represent an empty square might be nice: Pieces.none or Pieces.undefined.
Maybe Piece has a Square reference so it knows where it is. This may have a nice effect on the visit code.
Visitor Pattern
Nice call.
I agree with @MarcoForgerg, the visitors do not need to keep state. Just pass in the needed parameters and forget-about-it when done. And, instead of Singletons perhaps just static.
Nested visitors? Ok, so the board gets "visited" which in turn "visits" each square, which in turn, finally gets to Piece.accept(Evaluator xxxx). Visitors, by definition, understand their visited data structure so I'm thinking board iteration is wrapped in the board visitor, and the square visitor knows to check for an occupying piece and knows what Evaluator(s) to pass to the piece. It feels like nicely layered (code) logic to me. And note how the iteration logic is in the visitors, not the board (data structure). And subsequently all the business logic is in the visitor as well.
SO instead of this
public void accept(BoardVisitor bv) {
    for (Square square : board) {
        bv.visit(square);
    }
}
THIS - Let the Visitor do the walking and talking (decision making). And the decision to evaluate empty squares is delayed as long as possible - push details down. Note that Board, Square, Piece visiting logic is decoupled/layered.
// in Board class
public void accept(BoardVisitor bv) { bv.visit(this); }

// BoardVisitor class
public void visit (Board board) {
    // "board level" logic as needed
    for (Square square : board) {
        square.accept (this.squareVisitor);
    }
}

// Square class
public void accept (SquareVisitor sv) {
    sv.visit(this);
}

// SquareVisitor class
public void visit (Square square) {
    // "square level" logic as needed
    if(!square.isEmpty) {
        // maybe we target Evaluators for the particular piece on the square
        this.evaluator(square); // square has references to its piece and board.
        // maybe the square.piece has "visit"
    }
}
} | {
"domain": "codereview.stackexchange",
"id": 3991,
"tags": "java, chess, visitor-pattern"
} |
need topological map online, graph_slam doesn't work | Question:
Has anybody had success using the graph_slam package described at
http://www.ros.org/wiki/graph_slam
I tried checking out the source and building it but the dependencies do not build.
Or do you have any suggestions how I can build a topological map out of a SLAM gridmap fairly efficiently on the fly?
Edit: I am using the cturtle-pr2all stack
Originally posted by Dimitar Simeonov on ROS Answers with karma: 535 on 2011-03-29
Post score: 0
Original comments
Comment by Dimitar Simeonov on 2011-03-30:
CTurtle, sorry about that
Comment by Eric Perko on 2011-03-29:
Could you include which release (CTurtle or Diamondback) you are trying to use?
Answer:
I didn't see this question till now, unfortunately. If you still need this, you can check out the new version of the topological_navigation stack (see www.ros.org/wiki/topological_navigation for instructions). In particular, the topological_map_2d package defines the topological map type and the laser_slam_mapper package constructs a node that constructs it.
Originally posted by bhaskara with karma: 1479 on 2011-05-18
This answer was ACCEPTED on the original site
Post score: 1 | {
"domain": "robotics.stackexchange",
"id": 5241,
"tags": "ros, slam, navigation, mapping, graph"
} |
“The Fourier transform cannot measure two phases at the same frequency.” Why not? | Question: I have read that the Fourier transform cannot distinguish components with the same frequency but different phase. For example, in Mathoverflow, or xrayphysics, where I got the title of my question from: "The Fourier transform cannot measure two phases at the same frequency."
Why is this true mathematically?
Answer: It's because the simultaneous presence of two sinusoidal signals with the same frequency and different phases is actually equivalent to a single sinusoid at the same frequency, but with a new phase and amplitude, as follows:
Let the two sinusoidal components be summed like this:
$$ x(t) = a \cos(\omega_0 t + \phi) + b \cos(\omega_0 t + \theta) $$
Then, by trigonometric manipulations it can be shown that :
$$ x(t) = A \cos(\omega_0 t + \Phi) $$
where
$$A = \sqrt{ a^2 + b^2 + 2 a b \cos(\theta-\phi) } $$ and
$$ \Phi = \tan^{-1}\left(\frac{ a \sin(\phi) + b\sin(\theta) } { a \cos(\phi) + b\cos(\theta) } \right) $$
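These amplitude and phase formulas are easy to verify numerically. A quick sketch (the particular values of a, b, φ, θ, ω₀ below are arbitrary, chosen only for the check):

```python
import numpy as np

a, b, phi, theta, w0 = 1.0, 2.0, 0.3, 1.1, 2.0
t = np.linspace(0.0, 10.0, 1000)

# Sum of two same-frequency sinusoids with different phases...
x = a * np.cos(w0 * t + phi) + b * np.cos(w0 * t + theta)

# ...equals a single sinusoid with the combined amplitude and phase.
A = np.sqrt(a**2 + b**2 + 2 * a * b * np.cos(theta - phi))
Phi = np.arctan2(a * np.sin(phi) + b * np.sin(theta),
                 a * np.cos(phi) + b * np.cos(theta))

assert np.allclose(x, A * np.cos(w0 * t + Phi))
```

Using arctan2 (rather than a plain arctangent of the ratio) keeps the phase in the correct quadrant.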
hence you actually have a single sinusoidal (with a new phase and amplitude), and therefore nothing to distinguish indeed... | {
"domain": "dsp.stackexchange",
"id": 12349,
"tags": "fourier-transform, fourier"
} |
On uniform randomness of the weight of the remaining edges of a graph after deleting some of them | Question: Suppose we have a graph $G(V,E,W)$, where $V$ and $E$ are the sets of vertices and edges and $W$ is a non-negative weight function on the edges. Let $w(e)$ be the weight of edge $e$ and $N(e)$ be the neighboring edges of $e$. An edge $e$ is locally subdominant if its weight is smaller than that of all of its neighbors. With this background, consider the following algorithm,
for e in E
    if w(e) is locally sub-dominant
        delete e from the graph G
        double weights of all e in N(e)
My question is: after this loop ends, what can we say about the uniform randomness of the remaining edge weights? Are they still uniformly random?
Answer: Let us consider the simplest case, that of a path of two edges, whose weights are drawn uniformly and independently from $[0,1]$. Let us suppose that the first edge is sub-dominant, which happens with probability 1/2. The probability that the weight of the second edge is at most $t \in [0,1]$ is
$$
\Pr[x \leq y \leq t \mid x \leq y] = \Pr[x,y \leq t] = t^2.
$$
Here $x$ is the weight of the first edge, and $y$ is the weight of the second edge. We see that even before doubling, the distribution of $y$ is non-uniform.
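The $t^2$ law for the two-edge path can be checked with a quick Monte Carlo sketch (the sample size and the test point $t = 0.5$ are arbitrary choices for the check):

```python
import random

random.seed(0)

t = 0.5
samples = []
for _ in range(200_000):
    x, y = random.random(), random.random()
    if x <= y:  # condition on the first edge being sub-dominant
        samples.append(y <= t)

# Empirical P[y <= t | x <= y] should be close to t**2 = 0.25,
# not to the unconditional (uniform) value t = 0.5.
estimate = sum(samples) / len(samples)
print(estimate)
```

The gap between the estimate and 0.5 makes the non-uniformity of the conditioned distribution visible directly.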
Next, let us examine the case of a star with three edges, again drawn uniformly and independently from $[0,1]$. Again suppose that the first edge is sub-dominant, which happens with probability 1/3. The probability that the weight of the second edge is at most $t \in [0,1]$ while that of the third edge is at most $u \in [0,1]$ is
$$
\Pr[y \leq t, z \leq u \mid x \leq y,z] =
3 \int_0^{\min(t,u)} (t-x)(u-x) \, dx = \\
3tum - \frac{3}{2}(t+u)m^2 + m^3.
$$
Here $x,y,z$ are the weights of the first, second, and third edges, respectively, and $m = \min(t,u)$. Substituting $u = 1$, we obtain
$$
\Pr[y \leq t \mid x \leq y,z] = \frac{3t^2-t^3}{2}.
$$
Calculation shows that for generic $t,u$,
$$
\Pr[y \leq t, z \leq u \mid x \leq y,z] \neq
\Pr[y \leq t \mid x \leq y,z] \Pr[z \leq u \mid x \leq y,z],
$$
that is, the remaining weights are no longer independent. | {
"domain": "cs.stackexchange",
"id": 12322,
"tags": "algorithms, graphs, randomness"
} |
What does a Space radiator/cooler look like exactly? | Question: I understand the principles of radiating heat to space using heat pipes and whatnot but what exactly do these devices look like in real world applications? The closest example for a space base application is the electra deployable cooling radiator.
But this is covered in second-surface mirrors and I cannot really find a COTS example or product to cool electronics like the Nvidia Jetson. I am familiar with Kerbal Space Program, but those coolers are depicted like car radiators, which may work but are probably just an artist's depiction/interpretation. Is it truly as simple as adding a surface coating and protecting it from the sun? Or a large reflective surface area? I imagine there is a network of heatpipes bonded to the inside surface of these second-surface mirrors? I am trying to design a cooler for electronics serving in a LEO application, just to provide some context here.
Answer: Here's a picture of the HETE-2 spacecraft in test at MIT's Lincoln Laboratory.
The silver areas are the parts we wanted to keep cool: they are covered with silver-teflon tape. The silver has low emissivity at optical wavelengths, so it reflects sunlight. The teflon has high emissivity in the thermal infrared, so it radiates heat. Most of the rest of what you see is gold: it absorbs some visible light, but it has very low infrared emissivity. We used it where we wanted to retain the heat. The light gold is gold-plated aluminum structure, while the dark gold is plastic thermal blanket with gold coating.
There are no heat pipes: those are only needed if the heat can't be adequately delivered to the radiating surface via the metal structure of the spacecraft. | {
"domain": "physics.stackexchange",
"id": 91287,
"tags": "thermal-radiation, electronics, satellites"
} |
Waves on water generated by a falling object | Question:
Let an object of mass $m$ and volume $V$ be dropped in water from height $h$, and $a$ be the amplitude of the wave generated. What is the relation between $a$ and $h$. How many waves are generated? What is the relation between the amplitudes of successive waves? Does it depend on the shape of the particle?
Assume the particle is spherical. What would be the shape of the water that rises, creating the first wave?
Answer: This is crude.
Maybe there can be an energy approach. Initially the mass has potential energy $T=m g h$. At the point of peak splash-back let's assume all the energy has been transferred to the water, with peak potential energy related to the radial wave height function $y(r)$, to be determined. A small volume of water a distance $r$ from impact has differential volume ${\rm d}V = y(r) 2\pi r {\rm d}r$.
The potential energy of the small volume of water is ${\rm d}T = \rho g \frac{y}{2} {\rm d}V$ where $\rho$ is density of water. The total energy is thus:
$$ T = \int_0^\infty \rho g \frac{y(r)^2}{2} 2\pi r {\rm d} r $$
Putting a nice smooth wave height function of $$y(r) = Y \exp(-\beta\, r) \left(\cos(\kappa\, r) +\frac{\beta}{\kappa} \sin(\kappa\, r)\right)$$ with $Y$ a height coefficient. This has the properties of ${\rm d}y/{\rm d}r=0$ at $r=0$ with $y(0)=Y$.
$$ T = \frac{\pi Y^2 g \rho \left(9 \beta^4+2 \beta^2 \kappa^2+\kappa^4\right)}{8 \beta^2 \left( \beta^2+\kappa^2 \right)^2 } = m g h $$
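As a sanity check, the closed-form integral above can be compared against direct numerical integration. A sketch (the parameter values $Y$, $\beta$, $\kappa$, $\rho$, $g$ are arbitrary and not part of the original derivation):

```python
import math

Y, beta, kappa, rho, g = 1.0, 1.0, 2.0, 1.0, 1.0

def y(r):
    # Assumed wave height profile y(r) from the text.
    return Y * math.exp(-beta * r) * (math.cos(kappa * r)
                                      + (beta / kappa) * math.sin(kappa * r))

# Trapezoidal integration of T = integral of rho*g*y(r)^2/2 * 2*pi*r dr;
# the integrand decays like exp(-2*beta*r), so R = 30 is effectively infinity.
R, n = 30.0, 100_000
h = R / n
total = 0.0
for i in range(n + 1):
    r = i * h
    f = rho * g * y(r)**2 / 2 * 2 * math.pi * r
    total += f * (h / 2 if i in (0, n) else h)

closed = (math.pi * Y**2 * g * rho * (9*beta**4 + 2*beta**2*kappa**2 + kappa**4)
          / (8 * beta**2 * (beta**2 + kappa**2)**2))

print(total, closed)
```

For these values both numbers agree to several decimal places, supporting the closed form.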
So wave height should be $$ Y \propto \sqrt{ \frac{h m}{\rho \pi }} $$ | {
"domain": "physics.stackexchange",
"id": 3787,
"tags": "homework-and-exercises, newtonian-mechanics, fluid-dynamics, waves"
} |
How do you save ROS tutorial subscriber data? | Question:
In the "Writing a Simple Publisher and Subscriber (C++)" tutorial it states that in:
void chatterCallback(const std_msgs::String::ConstPtr& msg)
{
ROS_INFO("I heard: [%s]", msg->data.c_str());
}
The message is passed in a boost shared_ptr, which means you can store it off if you want.
How exactly can I use this to store my data in a text file? I have no clue where to begin. I went through the boost shared_ptr documentation, but I do not understand it.
Originally posted by sw14928 on ROS Answers with karma: 3 on 2019-07-23
Post score: 0
Answer:
What speaks against doing a normal string write?
#include <fstream>
#include <string>
#include <iostream>

void chatterCallback(const std_msgs::String::ConstPtr& msg)
{
    // Open in append mode so each incoming message is added to the file
    // instead of overwriting the previous one.
    std::ofstream out("output.txt", std::ios::app);
    out << msg->data << "\n";
    out.close();
}
Originally posted by Mehdi. with karma: 3339 on 2019-07-23
This answer was ACCEPTED on the original site
Post score: 1
Original comments
Comment by sw14928 on 2019-07-23:
Thanks for your advice - it works, but how can I make it print a list? cheers
Comment by Mehdi. on 2019-07-23:
std::vector<std::string> str_vector;
and then append msg->data each time a message arrives,
then loop over the vector and print the contents
Comment by sw14928 on 2019-07-23:
Sorry my C++ is not very good. Would that write my list to a vector, then how would I print that into my file iteratively? use append.msg? | {
"domain": "robotics.stackexchange",
"id": 33501,
"tags": "ros, ros-melodic, publisher"
} |
Landau and Lifshitz - collisions between particles | Question: In the book mechanics from Landau & Lifshitz, section 17 collisions between particles there are those two equations in page 46:
$$\tan \theta_1 = \frac{m_2 \sin \chi}{m_1+m_2\cos\chi}, \quad \quad \theta_2 = \frac{1}{2}(\pi-\chi).$$
How did they derive these equations?
The vectors in the image are:
$$\textbf{p}_1' = m v \textbf{n}_0 + m_1\frac{\textbf{p}_1+\textbf{p}_2}{m_1+m_2}$$
$$ \textbf{p}_2' = -m v \textbf{n}_0 + m_2 \frac{\textbf{p}_1+\textbf{p}_2}{m_1+m_2},$$
and from these we get
$$\vec{AO} = \frac{m_1}{m_1+m_2} (\textbf{p}_{1} + \textbf{p}_{2})$$
$$\vec{OB} = \frac{m_2}{m_1+m_2} (\textbf{p}_{1} + \textbf{p}_{2})$$
$$\vec{OC} = m v,$$
given that $m$ is the reduced mass
$$m = \frac{m_1 m_2}{m_1+m_2}.$$
A link to the book:
https://cimec.org.ar/foswiki/pub/Main/Cimec/MecanicaRacional/84178116-Vol-1-Landau-Lifshitz-Mechanics-3Rd-Edition-197P.pdf
Answer: You got so close to an answer.
The authors are considering a situation when $|\vec p_1|= m_1v$ and $|\vec p_2|= 0$
This means that $|\vec{AO}| = \dfrac{m_1^2\,v}{m_1+m_2}$ and $|\vec{OB}| = \dfrac{m_2\,m_1\,v}{m_1+m_2} =|\vec OC|$
Taking the common factor, $\dfrac{m_1\,v}{m_1+m_2}$ out of each of the lengths results in the following diagram.
Noting that $OB=OC$ for the second relationship the required equation,
$\tan \theta_1 = \dfrac{m_2 \sin \chi}{m_1+m_2\cos\chi}$ and $\theta_2 = \dfrac{1}{2}(\pi-\chi),$
then follow.
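For completeness, the first relation can be read off from the scaled diagram by resolving $\vec{OC}$ parallel and perpendicular to $\vec{AO}$ (using the lengths stated above):

$$\tan \theta_1 = \frac{|\vec{OC}|\sin\chi}{|\vec{AO}|+|\vec{OC}|\cos\chi} = \frac{\dfrac{m_1 m_2 v}{m_1+m_2}\sin\chi}{\dfrac{m_1^2 v}{m_1+m_2}+\dfrac{m_1 m_2 v}{m_1+m_2}\cos\chi} = \frac{m_2\sin\chi}{m_1+m_2\cos\chi},$$

while $\theta_2 = \frac{1}{2}(\pi-\chi)$ follows because triangle $OBC$ is isosceles ($OB=OC$), so each of its two base angles is half of $\pi-\chi$.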
Although they do not answer your question you might find these two articles of interest?
Diagrammatic Approach for Investigating Two Dimensional Elastic Collisions in Momentum Space I: Newtonian Mechanics
Diagrammatic Approach for Investigating Two Dimensional Elastic Collisions in Momentum Space II: Special Relativity | {
"domain": "physics.stackexchange",
"id": 84766,
"tags": "homework-and-exercises, newtonian-mechanics, conservation-laws, collision, scattering"
} |
If CO2 is nonpolar how come much more dissolves in water than O2? | Question: Why is it that $\ce{CO2}$ is considerably more soluble in water than $\ce{O2}$ is?
$\ce{CO2}$ is nonpolar but dissolves in water, which, $\ce{CO2}$ being nonpolar, doesn't make any sense.
Is it the bond polarity, and is bond polarity more important in solubility?
If not, then why is $\ce{CO2}$ more soluble in water than $\ce{O2}$?
Answer: Taken from my answer to your original question
There are a couple of reasons why $\ce{CO2}$ is more soluble in water than $\ce{O2}$. Because the two $\ce{C=O}$ bonds in $\ce{CO2}$ are polarized (whereas in $\ce{O2}$ the bond is not polarized) it makes it easier for the polar water molecule to solvate it and to form hydrogen bonds. Both of these factors will stabilize a $\ce{CO2}$ molecule more than an $\ce{O2}$ molecule in water; stabilization translates into greater solubility. Another factor enhancing the solubility of $\ce{CO2}$ in water is the fact that $\ce{CO2}$ reacts with water to set up an equilibrium with carbonic acid.
$$\ce{CO2(aq) + H2O <=> H2CO3(aq)}$$
This reaction will also enhance $\ce{CO2}$'s solubility in water compared to oxygen which does not react with water. | {
"domain": "chemistry.stackexchange",
"id": 6135,
"tags": "solubility, polarity"
} |
Integral Calculation in Matlab for Filter Optimization | Question: I am trying to calculate the area under my FIR filter kernel using trapz but I'm getting weird results. The reason I want to know the area is eventually I would like to loop through filter orders in a small range and pick the order which most closely matches my ideal filter window.
When you run this code in Matlab the area under the kernel is given as 9.5e3. However, if you look at my ideal filter window its area is 2e3.
What am I missing here and why is my integral result way off?
Matlab Code:
clc,clear;
close all;
srate = 250e3;
nyquist = srate/2;
npnts = srate*5; %generate number of points for 5 seconds of sampling
time = (0:npnts-1)/srate;
hz = linspace(0,srate/2,floor(npnts/2)+1)/1e3; %go from 0 to nyquist, frequency resolution is defined by last term
% FIR filter specs
passband = [25e3 27e3];
transw = 0.01; %how the filter's edges taper
test_order = 13;
order = round(test_order*srate/passband(1)); % define the number of time points for the filter kernel
shape = [ 0 0 1 1 0 0 ]; %FIR shape
%define frequency shape of the FIR shape, firls function requires
%frequencies to by normalized by nyquist
frex = [0 passband(1)-passband(1)*transw passband passband(2)+passband(2)*transw nyquist]/nyquist;
filter_kernel = firls(order, frex, shape);
% power spectrum of filter kernel
filtkernX = abs(fft(filter_kernel,npnts)).^2;
% calculate area under filter kernel
disp(trapz( filtkernX(1:length(hz))) )
%plot
figure;
plot(hz,filtkernX(1:length(hz)),'-','linew',2,'markersize',1)
hold on
plot([0 passband(1) passband passband(2) nyquist]./1e3,[0 0 1 1 0 0],'ro-','linew',2,'markerfacecolor','w')
xlim([0 60])
xlabel('Frequency (kHz)')
ylabel('Amplitude a.u.')
title('Frequency Response')
legend({'Actual';'Ideal'})
Answer:
However, if you look at my ideal filter window its area is 2e3.
Nope. It's 10000. The value trapz returns counts DFT bins, not hertz: the spectrum is a spectral density sampled on the frequency grid. In your case the bandwidth is 2000 Hz and the frequency resolution is 0.2 Hz ($\Delta f = \frac{f_s}{N_{FFT}}$). That means your passband spans $M = \frac{2000\ \text{Hz}}{0.2\ \text{Hz}} = 10000$ points in the DFT, and if you integrate 10000 ones you get 10000.
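The same count can be checked in a few lines (plain Python; the numbers are taken from the script in the question):

```python
# Reproduce the bin-counting argument with the question's numbers.
srate = 250e3                      # sampling rate, Hz
npnts = int(srate * 5)             # 1,250,000 samples (5 s)
df = srate / npnts                 # frequency resolution: 0.2 Hz per DFT bin

passband_hz = 27e3 - 25e3          # 2 kHz ideal passband
bins_in_passband = round(passband_hz / df)
print(bins_in_passband)            # -> 10000
```

So summing the ideal (0/1) window over DFT bins gives 1e4, not 2e3; multiplying the trapz result by $\Delta f$ turns the bin sum into an integral in hertz.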
Side comment: in most cases it's preferable to integrate the power, not the amplitude. | {
"domain": "dsp.stackexchange",
"id": 11938,
"tags": "matlab, filters, filter-design, finite-impulse-response, integration"
} |
Normalising squeezed position eigenket? | Question: I want to find the effect of squeezing operator $S(r) = \exp \big[r(\hat{a}^2 - \hat{a}^{{\dagger}^2})\big]$
on $|q\rangle$ i.e. $S(r)|q\rangle$.
I proceed as follows:
$$S(r)\hat{q}|q\rangle = S(r)q |q\rangle
$$
Also
$$S(r)\hat{q}|q\rangle = S(r)\hat{q}S(r)^{\dagger}S(r)|q\rangle = e^r \hat{q}(S(r)|q\rangle)
$$
using $S(r)\hat{q}S(r)^{\dagger} = e^r \hat{q}$.
Combining above two equations yield
$$\hat{q}(S(r)|q\rangle) = e^{-r} q (S(r)|q\rangle)
$$
Hence, $S(r)|q\rangle$ is eigenket of $\hat{q}$ operator with eigenvalue $ e^{-r} q $ i.e.
$$S(r)|q\rangle = N |e^{-r} q\rangle
$$
where N is the normalization factor.
How to determine normalization factor $N$?
For more details see Chapter 8 of Introduction to Optical Quantum Information Processing
By Pieter Kok, Brendon W. Lovett.
Another ref: P.No. 12, arxiv: 1212.5340
Answer: If
$$
S(z) = \exp\{z (\hat a^\dagger)^2 -z^*\hat a^2\}
$$
then $S^\dagger(z) = S(-z)=S^{-1}(z)$, so $S(z)$ is unitary. It therefore does not change the normalization of any state. You should be careful, however: your
$|q\rangle$ is an eigenstate of the position operator, so it is not itself normalizable to start with. | {
"domain": "physics.stackexchange",
"id": 57666,
"tags": "quantum-mechanics, wigner-transform, squeezed-states"
} |
When do SARSA and Q-Learning converge to optimal Q values? | Question: Here's another interesting multiple-choice question that puzzles me a bit.
In tabular MDPs, if using a decision policy that visits all states an infinite number of times, and in each state, randomly selects an action, then:
Q-learning will converge to the optimal Q-values
SARSA will converge to the optimal Q-values
Q-learning is learning off-policy
SARSA is learning off-policy
My thoughts, and question: Since the actions are being sampled randomly from the action space, learning definitely seems to be off-policy (correct me if I'm wrong, please!). So that rules 3. and 4. as incorrect. Coming to the first two options, I'm not quite sure whether Q-learning and/or SARSA would converge in this case. All that I'm able to understand from the question is that the agent explores more than it exploits, since it visits all states (an infinite number of times) and also takes random actions (and not the best action!). How can this piece of information help me deduce if either process converges to the optimal Q-values or not?
Source: Slide 2/55
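For intuition (a sketch, not part of the linked slides), here is a minimal tabular Q-learning run under exactly the stated conditions — uniformly random actions, every state visited over and over — on a hypothetical two-state MDP:

```python
import random

# Hypothetical 2-state MDP: transitions[s][a] = (next_state, reward).
# Action 1 in state 1 pays reward 1; everything else pays 0.
transitions = {0: {0: (0, 0.0), 1: (1, 0.0)},
               1: {0: (0, 0.0), 1: (1, 1.0)}}
gamma, alpha = 0.9, 0.1
Q = [[0.0, 0.0], [0.0, 0.0]]

random.seed(0)
s = 0
for _ in range(200_000):
    a = random.randrange(2)          # behaviour policy: uniformly random action
    s2, r = transitions[s][a]
    # Off-policy target: bootstrap from max_a Q(s', a), not the action taken.
    Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
    s = s2

# Analytic optimal values: Q*(1,1) = 1/(1-0.9) = 10, Q*(0,1) = 9,
# Q*(0,0) = Q*(1,0) = 8.1 -- the learned table approaches them closely.
print([[round(q, 2) for q in row] for row in Q])
```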
Answer: The true answers are 1 and 3.
1 is true because the required condition for tabular Q-learning to converge is that each state-action pair is visited infinitely often, and Q-learning learns directly about the greedy policy, $\pi(s) := \arg \max_a Q(s,a)$; because Q-learning converges to the optimal Q-value function, we know that this policy will be optimal (the optimal policy is the greedy policy with respect to the optimal Q-function).
3 is true because Q-learning is by definition an off-policy algorithm, because we learn about the greedy policy whilst following some arbitrary policy.
2 is false because SARSA is on-policy, so it will be learning the Q-function under the random policy.
4 is false because SARSA is strictly on-policy, for reasons analogous to why Q-learning is off-policy. | {
"domain": "ai.stackexchange",
"id": 2232,
"tags": "reinforcement-learning, q-learning, convergence, sarsa"
} |
A question about a comment from Byron and Fuller, pg 533 | Question: Seeing the equation,
\begin{equation*} (\hat{A} -\lambda)G_\lambda
(\mathbf{x},\mathbf{y})=\delta^{(3)}(\mathbf{x}-\mathbf{y}) \tag{1}
\end{equation*}
in the answer
What is different between resolvent and green function
prompts me to ask a question about a comment, found in Byron and Fuller$^1$, on pg 533
The comment is
the Green’s function of the operator $(~I - \lambda A~)$ is
$(~I+\lambda R_A~)$
Reference 1, continues
this may be seen by looking at Eq(9.23)
Now, (9.23) is
\begin{equation*} (~I - \lambda A~)^{-1}=(~I+\lambda R_A~) \tag{9.23}
\end{equation*}
with $I$ the unit operator, $A$ and $R_A$ operators, the latter the resolvent of $A$.
It appears that some liberty is being taken here, because $(~I+\lambda R_A~)$ is an operator, not a function.
My question is: Might the comment, from Byron and Fuller, be intended to bring to mind, that
\begin{equation*}
(~I - \lambda A~)~(~I - \lambda A~)^{-1}=1
\end{equation*}
which means
\begin{equation*}
(~I - \lambda A~)~(~I + \lambda R_A~)=1
\end{equation*}
can be thought of as analogous to (1)?
Reference:
1, Frederick W. Byron, Jr. and Robert W. Fuller, Mathematics of Classical and Quantum Physics, Dover 1992
Answer: Yes, it is the same formula, even though people are loose with notations and conventions.
Your formula (1) is predicated on integral kernels, that is, operator multiplication is integration of a "function" kernel with the function argument. So $\hat G_x ~\psi(x) \equiv \int\!\! dy ~ G(x,y)\psi(y)$, by virtue of $\langle x|\hat G |\psi\rangle= \int\!\! dy ~ \langle x|\hat G |y\rangle \langle y|\psi\rangle $.
Given that, you may think of (1) as
$$
(\hat A -\lambda I)~ (\hat A -\lambda I)^{-1}= I.
$$
The resolvent is normally defined as
$$
\hat R (\lambda; \hat A) = (\hat A -\lambda I)^{-1}.
$$
For large eigenvalues λ, you might think of it as $(-1/\lambda)(I+\hat A/\lambda+ \hat A^2/\lambda^2+...)$. Hilbert's original introduction of this resolvent was to explore the set of its singularities as the spectrum of operators such as $\hat A$.
Now, B&F (9.23) is the same formula, having suppressed the operator carets; except for $\mu = 1/\lambda$, and an overall minus sign introduction,
$$
(I- \mu A)~ (I-\mu A)^{-1}= I. \tag{9.23}
$$
However, now (9.23) is defining its resolvent slightly differently than the mainstream convention, namely as
$$
R_A= A(I-\mu A)^{-1}= A+\mu A^2+\mu^2 A^3+...,
$$
so evidently still a function of A and μ.
It follows that
$$
(I+\mu R_A) (I-\mu A)=(I +\mu A(I-\mu A)^{-1})(I-\mu A) =I.
$$
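The identity is easy to confirm numerically; a small check with NumPy (a hypothetical random $A$, using B&F's convention $R_A = A(I-\mu A)^{-1}$):

```python
import numpy as np

# Hypothetical random A; B&F's convention R_A = A (I - mu A)^{-1}.
rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
mu = 0.1                              # small, so I - mu*A is safely invertible here
I = np.eye(4)

R_A = A @ np.linalg.inv(I - mu * A)
lhs = (I + mu * R_A) @ (I - mu * A)   # should reproduce the identity matrix
print(np.allclose(lhs, I))            # -> True
```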
The various notational options are predicated on whether one is focussing on Neumann series of Fredholm inhomogeneous equations as in B&F, or elegant contour integrations picking up the singularities of the resolvent. | {
"domain": "physics.stackexchange",
"id": 94037,
"tags": "notation, greens-functions, dirac-delta-distributions"
} |
Algorithm for maximizing correspondence between 2 sets | Question: I am trying to figure out how to solve the following problem: I have 2 sets of objects, set A and set B. I have a metric that calculates how closely a given object from set A corresponds to an object from set B. I'd like to maximize the overall correspondence of the entire set.
What I tried first was a sort of greedy algorithm, where I went through each object in set A and calculated that object's correspondence with each object remaining in set B. I then replaced the object in set B that had the highest correspondence with the object in set A. For the next object in set A, I did the same thing, leaving out the objects in set B that already had a correspondence.
The problem with this approach is that it works very well for the first part of the sets, but as I near the end, there are fewer objects in set B to match with, so objects from set A tend to have worse correspondence with those objects.
I think that it might be better to do a full calculation of the correspondence of each item in set A to each item in set B, and from there choose, for each item in set A, the item in set B with which it has the highest correspondence.
However, I'm left wondering if you could have a situation where, for example, object 1 from set A has, let's say, a 95% correspondence to object 6 in set B, but its next best correspondence is only 50% with object 14 in set B. Meanwhile, object 2 from set A has a correspondence of 97% to object 6 from set B, but it has a 96% correspondence to object 28 in set B. In my mind, I think it would be better for object 1 from set A to be matched with object 6 from set B (95% correspondence) and object 2 from set A to be matched with object 28 in set B (96% correspondence), instead of matching object 2 in set A to object 6 in set B and object 1 in set A to object 14 in set B.
Is there a name for such an algorithm? I feel like it's related to the Knapsack Problem, but is not exactly the same. There will always be the same number of objects in set A and set B. I suppose it's possible that there will be more than 1 set of mappings that are "most optimal". In that case, I don't care which one is chosen.
In my application there will be between a few hundred to 10 or 20 thousand objects in each set. The average will probably be around 5,000 or so.
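To make the worry above concrete, here is a brute-force search over all assignments on a tiny three-object example with hypothetical scores; it picks exactly the pairing argued for (object 1 with 6, object 2 with 28), where a greedy best-first pass would not:

```python
from itertools import permutations

# Hypothetical correspondence scores for the 3-object example in the question.
# A greedy best-first pass takes A2-B6 (0.97), then A3-B14 (0.80), leaving
# A1-B28 (0.10): total 1.87. The optimal assignment totals 2.71.
score = {
    ("A1", "B6"): 0.95, ("A1", "B14"): 0.50, ("A1", "B28"): 0.10,
    ("A2", "B6"): 0.97, ("A2", "B14"): 0.20, ("A2", "B28"): 0.96,
    ("A3", "B6"): 0.30, ("A3", "B14"): 0.80, ("A3", "B28"): 0.25,
}
a_items = ["A1", "A2", "A3"]
b_items = ["B6", "B14", "B28"]

def best_assignment(a_items, b_items, score):
    """Exhaustive search over all |A|! pairings -- fine only for tiny inputs;
    polynomial-time algorithms exist for the thousands-of-objects case."""
    best, best_total = None, float("-inf")
    for perm in permutations(b_items):
        total = sum(score[a, b] for a, b in zip(a_items, perm))
        if total > best_total:
            best, best_total = list(zip(a_items, perm)), total
    return best, best_total

matching, total = best_assignment(a_items, b_items, score)
print(matching, round(total, 2))   # A1-B6 and A2-B28, as argued above
```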
Answer: This is known as the assignment problem. There are polynomial-time algorithms for it. | {
"domain": "cs.stackexchange",
"id": 8578,
"tags": "algorithms"
} |
Can there be an algorithm giving an exponential speedup over Grover's? | Question: This question was inspired by the following reference:
Reference paper
We use the usual notation. $N = 2^n$ , the number of all possible n-bit strings . The oracle $U_\omega$ can be put in the form
$U_\omega=I - 2 (\vert\omega\rangle\langle\omega \vert )$
$U_\omega$ is a reflection of any vector on the hyperplane orthogonal to $\vert\omega\rangle$
The vector $\vert s \rangle $ and operator $U_s$ are introduced.
$\vert s \rangle = H^{\otimes n} \vert 0 \rangle^{\otimes n}$ , where $H^{\otimes n}$ is the n-qubit Hadamard transform.
Operator $U_s$ reflects any vector with respect to $\vert s \rangle$
$U_s = 2\vert s \rangle \langle s \vert - I$
The Grover iteration is $U_{Grover} = U_s U_\omega$
$U_{Grover}$ rotates (at every iteration) the initial vector $\vert s \rangle$ towards the desired vector $\vert\omega\rangle$ by the angle $2\theta$, where $\sin\theta = \frac{1}{\sqrt{N}}$
We note that a reflection is expressed by a unitary matrix. That means that the operator defined below is represented by a unitary matrix, therefore a quantum circuit can be designed in order to implement this operator (Edit. This statement was proven false by the answer to this question).
We define the operator:
$U(\vert x \rangle , \vert y \rangle ) = ( \vert x \rangle , U_x \vert y \rangle) $ , where $U_x \vert y \rangle$ represents the reflection of $\vert y \rangle$ with respect to $\vert x \rangle$
In the following relations the vectors $\vert \xi_i \rangle$ are implicitly defined based on the action of the operator U.
We consider the following sequence of transformations (based on the definition of the operator U):
$U(\vert s \rangle , U_\omega\vert s \rangle ) = (\vert s \rangle , U_sU_\omega \vert s \rangle) = (\vert s \rangle , \vert \xi_1 \rangle )$
$U(\vert \xi_1 \rangle , U_\omega\vert s \rangle ) = (\vert \xi_1 \rangle , U_{\xi_1}U_\omega \vert s \rangle) = (\vert \xi_1 \rangle , \vert \xi_2 \rangle )$
$U(\vert \xi_2 \rangle , U_\omega\vert s \rangle ) = (\vert \xi_2 \rangle , U_{\xi_2}U_\omega \vert s \rangle) = (\vert \xi_2 \rangle , \vert \xi_3 \rangle )$
.......................and so on..........................
$U(\vert \xi_{n-1} \rangle , U_\omega\vert s \rangle ) = (\vert \xi_{n-1} \rangle , U_{\xi_{n-1}}U_\omega \vert s \rangle) = (\vert \xi_{n-1} \rangle , \vert \xi_n \rangle )$
In other words, the vector to be reflected is fixed but the reflection axis is variable (in the original Grover algorithm it's the other way around ).
At every step $K$ of the algorithm above the initial vector $\vert s \rangle$ is rotated towards the desired vector $\vert\omega\rangle$ by an angle of about $2^K\theta$ (as order of magnitude), where $\sin\theta = \frac{1}{\sqrt{N}}$. That means that this algorithm will only need about $\log_2 N$ (as order of magnitude) steps to reach the target.
Question 1. Can a quantum circuit be designed, that implements this algorithm, in principle ?
Question 2. Does this algorithm present an exponential speedup, when compared to Grover's algorithm?
Edit. Unfortunately nothing from what I tried seems to work. You need a quantum circuit that takes as input the vector to be reflected and the vector that represents the reflection axis. The output of the quantum circuit must contain the reflected vector. That does not seem possible, as far as I understand. This reflection implementation problem, if ever solved, would lead to an exponential speedup of Grover's algorithm.
Related question
Answer: TLDR: your operation $U$ does not exist (so the answer to question 2 is irrelevant, and I haven't thought about it).
You can show that $U$ does not exist in a very similar way to the way that you show cloning is impossible. I'll give the very crude sketch here. There are mathematically more robust versions.
It suffices to show that the transformation is not unitary, provided we include an ancilla in the operation (any CP map can be described by a unitary operator on a sufficiently extended system). So, we want a transformation
$$
|0\rangle|\psi\rangle|r\rangle\mapsto |0\rangle(I-2|0\rangle\langle 0|)|\psi\rangle|s\rangle
$$
and a second one
$$
|\phi\rangle|\psi\rangle|r\rangle\mapsto |\phi\rangle(I-2|\phi\rangle\langle \phi|)|\psi\rangle|s'\rangle.
$$
Let's consider the inner products. Before the transformation, we have $\langle\phi|0\rangle$, which we'll assume to be non-zero. After the transformation, we have
$$
\langle\phi|0\rangle \langle\psi|(I-2|\phi\rangle\langle\phi|)(I-2|0\rangle\langle 0|)|\psi\rangle\langle s'|s\rangle.
$$
The two can only be equal (as required for a unitary) if $|s\rangle=|s'\rangle$ and
$$
\langle\psi|(I-2|\phi\rangle\langle\phi|)(I-2|0\rangle\langle 0|)|\psi\rangle=1-2|\langle\phi|\psi\rangle|^2-2|\langle0|\psi\rangle|^2+4\langle\psi|\phi\rangle\langle\phi|0\rangle\langle0|\psi\rangle=1.
$$
It's easy to find a counter-example to this. For example, $|\psi\rangle=|0\rangle$ and $|\phi\rangle=\cos\theta|0\rangle+\sin\theta|1\rangle$ provided $0<\theta<\pi/2$. | {
"domain": "quantumcomputing.stackexchange",
"id": 1514,
"tags": "grovers-algorithm"
} |
Is it possible that a person with myopia will see a blurry picture as normal? | Question: I am trying to process an image in good quality to appear blurred to a normal person and good to a person suffering from myopia
as seen in this source.
Is it possible that a picture that is blurry will appear normal to a person suffering from myopia (nearsightedness)?
Answer: A quick footnote to Nathaniel's answer:
If an image looks blurred to you it's because you are viewing it in a plane that isn't the focal plane.
If you put a screen where I've drawn the red dotted line then the image on the screen will look blurred.
If you measure the light in the red dotted plane then at every point in that plane the light wave will have an intensity and a relative phase. If you know the intensity and phase then you can reconstruct the in-focus image using the Huygens construction, and indeed the process is known as Huygens deconvolution. The trouble is that when you take a photograph, the photographic process only records the light intensity and loses the phase.
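A toy numerical version of this point (a sketch assuming NumPy; the inverse FFT merely stands in for some refocusing operation): two fields with identical recorded intensity but different phases "refocus" to different images.

```python
import numpy as np

# Two different complex fields in the "blurred" plane with the SAME intensity
# pattern but different (unrecorded) phases.
n = 64
rng = np.random.default_rng(1)
amp = rng.random(n)                                # shared amplitude, so |f1| == |f2|
f1 = amp * np.exp(2j * np.pi * rng.random(n))
f2 = amp * np.exp(2j * np.pi * rng.random(n))

# A photograph records only the intensity -- the two are indistinguishable:
print(np.allclose(np.abs(f1), np.abs(f2)))         # -> True
# But "refocusing" (here: an inverse FFT) yields different images:
print(np.allclose(np.abs(np.fft.ifft(f1)), np.abs(np.fft.ifft(f2))))  # -> False
```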
So if you're starting from a photograph you've lost half the information originally present, i.e. the relative phase, and that means it's impossible to reconstruct a perfectly focussed image. A blurred photograph won't look normal to anyone - myopic or otherwise. However it is usually possible to improve the blurred picture to some extent, which is why Huygens deconvolution software is so widely available. | {
"domain": "physics.stackexchange",
"id": 27101,
"tags": "optics, vision, image-processing"
} |
Do sidebands corresponds to real photons at that frequency? | Question: Say I have a carrier laser (optical) frequency $\omega_c$: $E=E_0 e^{i\omega_c t}$.
I propagate it through an electro-optical modulator that modulates the phase by $\beta \sin\Omega t$: $E = E_0 e^{i\omega_c t + i\beta \sin(\Omega t)}$.
If $\beta \ll 1$, the field can be expanded to first order in $\beta$:
$$ E \propto e^{i\omega_c t} + \frac{\beta}{2} e^{i(\omega_c+\Omega) t} - \frac{\beta}{2} e^{i(\omega_c-\Omega) t} , $$
where the new $\omega_c\pm\Omega$ are the sidebands.
Question:
The laser emits photons at energy $\hbar\omega_c$. After the modulation, are there actually photons at energies $\hbar(\omega_c\pm\Omega)$?
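The expansion can be checked numerically; a sketch assuming NumPy (all parameters hypothetical, in arbitrary units): the spectrum of the phase-modulated field has discrete lines at $\omega_c$ and $\omega_c \pm \Omega$.

```python
import numpy as np

# Phase-modulated field with hypothetical parameters (arbitrary units).
fc, fm, beta = 200.0, 10.0, 0.3        # carrier, modulation frequency, beta << 1
n = 4096
t = np.arange(n) / n                    # one "second", so FFT bin k = frequency k
field = np.exp(1j * 2 * np.pi * fc * t + 1j * beta * np.sin(2 * np.pi * fm * t))
spectrum = np.abs(np.fft.fft(field)) / n

# The three strongest spectral lines sit at fc and fc +/- fm, with weights
# given by the Bessel functions J0(beta), J1(beta) (Jacobi-Anger expansion).
lines = np.argsort(spectrum)[-3:]
print(sorted(lines.tolist()))           # -> [190, 200, 210]
```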
Answer: First of all, I believe the answer to the question in the title is yes, they are real photons. As Jon Custer mentioned, in a spectrometer there would be photons detected at the frequencies $\omega_c \pm \Omega$. Moreover, if you make the EOM modulation depth very strong, e.g. $\beta = 2.405$ (the first zero of the Bessel function $J_0$), you can completely extinguish the carrier: there are no photons of $\omega_c$ left.
The distinction between creating sidebands and so-called 'non-linear effects' (eg. second-harmonic generation in optics) I would explain as follows.
High-harmonic generation (HHG)
The products of HHG are photons which have a large energy mismatch compared to the initial photons. Therefore, you need a large power (strong non-linearity) to create them. Similarly, in an electrical circuit, you can create harmonics using a mixer, a highly non-linear element.
Sidebands
The products of creating sidebands usually have very similar energies as the initial photons, essentially no energy has to be added or taken away from the initial photons. Therefore a 'small' non-linearity is enough to prepare them. The EOM does act as a non-linear element, the non-linearity given by the response of a polarising beam splitter on a turning polarisation, which is sinusoidal, that is, non-linear. (You can create sidebands with an EOM operating on the linear slope the sine and then you only get sidebands at $\pm\Omega$ but no higher harmonics than that.) Similarly, in an electrical circuit you can simply have the input voltage of a voltage-controlled oscillator (VCO) oscillate and you will get some sidebands at $n\Omega$. But the bandwidth of this input voltage is not usually enough to create multiples of the carrier frequency $\omega_c$. | {
"domain": "physics.stackexchange",
"id": 69828,
"tags": "electromagnetic-radiation, photons"
} |
Find Divisible Sum Pairs in an array in O(n) time | Question: You are given an array of n integers a[0], a[1], …, a[n-1] and a positive integer k. Find and print the number of pairs (i, j) where i < j and a[i] + a[j] is evenly divisible by k (i.e., (a[i] + a[j]) % k == 0). This problem has been taken from Hackerrank.
We need a solution in O(n) time.
An explanation is that we can do this by separating elements into buckets depending on their mod k. For example, you have the elements: 1 3 2 6 4 5 9 and k = 3
mod 3 == 0 : 3 6 9
mod 3 == 1 : 1 4
mod 3 == 2 : 2 5
Now, you can make pairs like so:
Elements with mod 3 == 0 will match with elements with (3 - 0) mod k = 0, i.e. other elements in the mod 3 == 0 list, like so: (3, 6) (3, 9) (6, 9)
Further:
There will be n * (n - 1) / 2 such pairs, where n is the length of the list, because the list is the same and i != j. Elements with mod 3 == 1 will match with elements with (3 - 1) mod k = 2, i.e. elements in the mod 3 == 2 list, like so: (1, 2) (1, 5) (4, 2) (4, 5)
It makes sense that (3, 6) (3, 9) (6, 9) (all items in the 0th bucket can be paired with each other), since if a % k = 0 and b % k = 0 then (a + b) % k = 0.
What isn't clear is how the other pairs (1, 2) (1, 5) (4, 2) (4, 5) were generated by combining elements of the 1st (mod 3 == 1) and the 2nd (mod 3 == 2) buckets, and why there would be n * (n - 1) / 2 pairs.
Answer: Consider in your example the set $S_1=\{1, 4\}$ and the set $S_2=\{2,5\}$. Let $n_1 = 2$ and $n_2=2$, the sizes of sets $S_1$ and $S_2$. All of the elements $x\in S_1$ satisfy $x\equiv 1\pmod 3$ and so are of the form $x=3s+1$ for some $s$. Similarly all $y\in S_2$ are of the form $y=3t+2$ so $x+y=3s+1+3t+2=3(s+t+1)$, which is divisible by 3. The number of such pairs is $n_1\cdot n_2$: pick one element, $x$, from $S_1$ and one element, $y$, from $S_2$ to form the pair $(x, y)$, rearranging, if necessary so that the first element is less than the second. In sum, there will be $n_1\cdot n_2=2\cdot 2$ pairs: $(1, 2), (1, 5), (4, 2), (4, 5)$, which, after arranging the pairs in component orders, gives $(1, 2), (1, 5), (2, 4), (2, 5)$ (your highlighted quote didn't do that).
For a general $k$, then, you'll have sets $S_i$, for $i=1, \cdots k-1$ (ignoring $S_0$ for the moment), so if $n_i$ represents the size of $S_i$, you'll have pairs $(x,y)$ where $x\in S_i$ and $y\in S_{k-i}$. The number of such pairs is $n_1\cdot n_{k-1}+n_2\cdot n_{k-2}+\dotsc$.
[You have to be a bit careful, since, first, the sum only should include those products $n_i\cdot n_{k-i}$ for which $i<k-i$, and second, you'll have to account for the cases where $i$ and $k-i$ are equal. You'll also have to deal with the pairs you get from $S_0$, which have to be counted differently.]
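Putting the answer's counting together, including the caveats in brackets, a sketch in Python:

```python
def divisible_sum_pairs(a, k):
    """Count pairs i < j with (a[i] + a[j]) % k == 0 in O(n + k) time."""
    counts = [0] * k
    for x in a:
        counts[x % k] += 1
    # Pairs within S_0: both residues are 0, so any two of its elements work.
    total = counts[0] * (counts[0] - 1) // 2
    # Pairs across complementary buckets S_i and S_{k-i}.
    for i in range(1, k // 2 + 1):
        if i == k - i:                 # k even: S_{k/2} pairs with itself
            total += counts[i] * (counts[i] - 1) // 2
        else:
            total += counts[i] * counts[k - i]
    return total

print(divisible_sum_pairs([1, 3, 2, 6, 4, 5, 9], 3))   # -> 7
```

On the example above this returns 3 pairs from the 0-bucket plus 2 × 2 cross pairs, i.e. 7.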
I'd suggest ignoring the sentence where $n(n-1)/2$ appears: it's true but irrelevant. | {
"domain": "cs.stackexchange",
"id": 7052,
"tags": "algorithms, arrays, modular-arithmetic"
} |
Spectrophotometric assay by pNPP | Question: Here is a series of related questions that I want to ask.
Background
The activity of acid phosphatase is measured by an enzymatic reaction that converts para-nitrophenyl phosphate (pNPP) to para-nitrophenol (pNP), liberating phosphate. The product, pNP, absorbs light at a wavelength of $\pu{400 nm}$ with an absorption coefficient $\varepsilon(\pu{400 nm})$ of $\pu{19000 M-1 cm-1}$ at extremely alkaline pH. The reaction mixture for an acid phosphatase is slightly acidic, so it must be alkalinized for quantification of pNP.
Two enzyme concentrations are to be examined - They are $\ce{1X}$ and $\ce{0.1X}$ (The $\ce{0.1X}$ enzyme is made by mixing $\pu{1 ml}$ of $\ce{1X}$ enzyme in $\pu{9 ml}$ of $\ce{NaCl}$.) Reaction times for each enzyme is $1$, $10$ and $20$ minutes.
Procedure
Protocol for measurement of acid phosphatase activity:
Mix $\pu{0.12 ml}$ of $\pu{0.5 M}$ $\ce{Na}$ acetate buffer (pH $5.6$) and $\pu{0.24 ml}$ of $\pu{5 mM}$ pNPP in a test tube. Start the reaction by adding $\pu{0.24 ml}$ of an enzyme solution.
After the reaction times of $1$, $10$, and $\pu{20 min}$, respectively, stop the reaction by adding $\pu{0.6 ml}$ of $\pu{0.5 M}$ $\ce{NaOH}$. $\ce{NaOH}$ stops the reaction and converts the pNP produced into a yellow-colored (A400-absorbing) form.
After all reactions are stopped, measure A400 of the samples.
Assay of potato acid phosphatase:
\begin{array}{ll}
\pu{0.5 M} \text{Na acetate buffer (pH 5.6)} & \pu{0.12 ml}\\
\pu{5 mM} \ce{pNPP} & \pu{0.24 ml}\\
\text{Enzyme} & \pu{0.24 ml}\\
\pu{0.5 M} \ce{NaOH} & \pu{0.6 ml}\\\hline
\text{Sum} & \pu{1.2 ml}
\end{array}
There are some things I am supposed to calculate after I obtain the results.
First I am going to plot a graph of absorbance versus time and find the slope for each of the lines ($\ce{1X}$ and $\ce{0.1X}$).
I have been asked to find the absorbance change/min/1ml of $\ce{1X}$ enzyme.
My answer: Divide the slope obtained by $0.24$ (the volume of enzyme solution in ml).
I have to convert the absorbance change to concentration change when
$L$ is $\pu{1 cm}$ and e400 of pNP is $\pu{19000 M-1 cm-1}$.
My answer: This can be found out by $A = eCL$.
The next question is to convert the concentration change to a change in the amount of substance of pNP.
I have no clue how to do this.
Finally, I have to calculate total activity (in moles per minute) in $\pu{4 ml}$ of $\ce{1X}$ enzyme solution.
My answer: Multiply the answer I obtain in the above question by $4$.
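Putting the whole unit chain together numerically (a sketch; the slope value is hypothetical, everything else comes from the protocol above):

```python
# Worked chain of conversions for the assay above (the slope is hypothetical;
# all other numbers come from the protocol).
eps = 19000.0          # M^-1 cm^-1, molar absorption coefficient of pNP at 400 nm
path = 1.0             # cm, cuvette path length L
v_total = 1.2e-3       # L, final volume in the cuvette after NaOH addition
v_enzyme = 0.24        # mL of enzyme solution per assay

slope = 0.095                                    # hypothetical dA400/min from the plot
dA_per_min_per_ml = slope / v_enzyme             # absorbance change /min /mL enzyme
dc_per_min = slope / (eps * path)                # M/min, from A = eps * C * L
dn_per_min = dc_per_min * v_total                # mol/min in the cuvette (dn = dc * V)
# mol/min for 4 mL of 1X enzyme: normalise to the 0.24 mL assayed, then scale.
total_activity = dn_per_min / v_enzyme * 4.0
print(f"{dn_per_min:.3e} mol/min in the cuvette")
```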
Answer: For the first subquestion:
Using unit analysis, the answer is very quickly found: The unit of the slope $m$, assuming dimensionless values for the absorbance, is
$$[m] = \frac{[\Delta y]}{[\Delta x]} = \frac{[\Delta A]}{[\Delta t]} = \frac{1}{\text{min}}$$
Now you need a volume in the denominator as well, so dividing by a volume seems like a good idea. Since you need the change in absorption per $\pu{mL}$ of $\ce{1X}$ solution, you divide the slope of the $\ce{1X}$ reaction by $\pu{0.24 mL}$ for the correct value.
For the subquestion with which you seem to have the most trouble: simply apply $c=n/V$.
You will find the amount of substance changed via
$$ \Delta n = \Delta c\times V = \Delta c \times \pu{1.2 mL}$$ | {
"domain": "chemistry.stackexchange",
"id": 940,
"tags": "biochemistry, spectrophotometry, enzymes"
} |
Airplanes and the earth’s rotation | Question: A helicopter is on a moving treadmill, suddenly becomes airborne and hovers over the moving treadmill. The pilot of the helicopter must consider the movement of the treadmill if they wish to briefly land the helicopter on the exact spot that the helicopter took off from.
My question: How is a hypothetical airplane taking off from a due east-west runway different from my hypothetical helicopter taking off from a moving treadmill?
Answer: The difference is that the air is in motion as well, when considering the rotation of the earth. It isn't on the treadmill. The airplane taking off is "carried forwards" by the already moving air. The helicopter taking off will have a forwards motion which is suddenly resisted by the stationary air.
Winds, air fluctuations etc. might of course alter this locally. | {
"domain": "physics.stackexchange",
"id": 80072,
"tags": "newtonian-mechanics, reference-frames"
} |
How to calculate ∆rGº from entropy and ∆fGº in different temperatures? | Question: "The total oxidation of glucose occurs according to the following chemical equation:
C6H12O6 (s) + 6O2 (g) -> 6CO2 (g) + 6H2O (l)
The following table gives us the free energies of standard formation and the standard molar entropies of
compounds involved in the previous reaction.
Compound | ∆fGº(298 K), kJ/mol | Smº(298 K), J/K/mol
C6H12O6 | -917.2 | 212.10
O2 | 0 | 205.14
CO2 | -394.36 | 213.14
H2O | -273.13 | 69.91
Based on the previous data, determine the ∆rGº
for glucose oxidation at 308 K"
It's a question of a doctoral selection process and I want to figure out if it is formulated wrong or if there is a way to an answer.
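For concreteness, a short script that combines the tabulated values (a sketch; it treats the reaction entropy as constant over the 10 K step):

```python
# C6H12O6(s) + 6 O2(g) -> 6 CO2(g) + 6 H2O(l); values exactly as tabulated.
dGf = {"glucose": -917.2, "O2": 0.0, "CO2": -394.36, "H2O": -273.13}  # kJ/mol
Sm  = {"glucose": 212.10, "O2": 205.14, "CO2": 213.14, "H2O": 69.91}  # J/K/mol

dG298 = 6 * dGf["CO2"] + 6 * dGf["H2O"] - (dGf["glucose"] + 6 * dGf["O2"])
dS298 = 6 * Sm["CO2"] + 6 * Sm["H2O"] - (Sm["glucose"] + 6 * Sm["O2"])  # J/K/mol

# dG = -S dT over the small 10 K step:
dG308 = dG298 - dS298 / 1000.0 * 10.0           # kJ/mol at 308 K
print(round(dG298, 2), round(dS298, 2), round(dG308, 2))
```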
Answer: For a small temperature change like that, you can make use of $\mathrm{d}G=-S\,\mathrm{d}T$. Determine $\Delta_r G^0$ and $\Delta_r S^0$ at 298 K, multiply the latter by 10 K, and subtract the result from the former to get $\Delta_r G^0$ at 308 K. | {
"domain": "chemistry.stackexchange",
"id": 14868,
"tags": "physical-chemistry, thermodynamics, entropy, free-energy"
} |
Is the equation $\vec{B}=\mu\vec{H}$ correct in general? How shall we derive it? | Question: The relation between magnetic flux density $(\vec{B})$ and magnetic field intensity $(\vec{H})$, in general, is:
\begin{equation}
\vec{B}=\mu_0(\vec{H}+\vec{M})
\tag{1}
\end{equation}
One of my colleagues told equation (1) can be written as:
\begin{equation}
\vec{B}=\mu\vec{H}
\tag{2}
\end{equation}
where $\mu$ is the permeability of the medium.
Is this correct in general? If yes, how can we derive $(2)$ from $(1)$?
Answer: The magnetization is gained as a result of an external magnetic field applied, so there must be a relation between the two vectors (i.e. a transformation) that takes the form of a tensor function. This transformation is known as the magnetic susceptibility.
Your equation $\mathbf{B} = \mu \mathbf{H}$ is only correct in linear homogeneous isotropic magnetic materials. "Linear" means that the magnetization is in the same direction as the magnetic field (i.e. the tensor function reduces to a scalar function), "homogeneous" means that the susceptibility is the same throughout, and "isotropic" means that susceptibility is the same in all directions. So, to answer your question, it is not correct in general. Most materials, however, do behave in this way when the applied magnetic field is not too strong.
In such a material, the magnetization is directly proportional to the applied magnetic field: $$\mathbf{M} = \chi_m \mathbf{H}$$ where $\chi_m$ is the magnetic susceptibility. So we have $$\mathbf{B} = \mu_0 (\mathbf{H} + \mathbf{M}) = \mu_0 (1 + \chi_m) \mathbf{H} = \mu \mathbf{H}$$ The quantity $(1 + \chi_m)$ is known as the relative permeability. | {
"domain": "physics.stackexchange",
"id": 51955,
"tags": "electromagnetism, magnetic-fields, magnetostatics"
} |
Representing and handling Data Sizes | Question: In a very specific application I have, I needed the ability to easily convert between different data sizes. I.e. when I give an input of 1,048,576KiB, I needed it to say 1GiB, etc.
So, I built a struct for it.
It's pretty robust, includes operations for subtraction, addition, multiplication and division, == and !=, IsSame etc.
I'd like to think it might be useful for others as well.
First bit is the struct:
public struct DataSize
{
public ulong SizeInBytes { get; }
public SizeScale Scale { get; }
public double Size => GetSize(Scale);
public DataSize(ulong sizeInBytes)
{
Scale = SizeScale.Bytes;
SizeInBytes = sizeInBytes;
}
public DataSize(ulong sizeInBytes, SizeScale scale)
{
Scale = scale;
SizeInBytes = sizeInBytes;
}
public DataSize(double size, SizeScale scale)
{
Scale = scale;
if (scale == SizeScale.Bits)
{
SizeInBytes = (ulong)(size / 8); // ulong, not uint: avoid truncating sizes above uint.MaxValue
return;
}
if (((int)scale & 0x03) == (int)SizeScale.Bytes)
{
SizeInBytes = (ulong)(size * Math.Pow(10, 3 * (((int)scale & 0xFF00) >> 8)));
return;
}
SizeInBytes = (ulong)(size * Math.Pow(2, 10 * (((int)scale & 0xFF00) >> 8)));
}
public double GetSize(SizeScale scale)
{
if (scale == SizeScale.Bits)
{
return SizeInBytes * 8.0;
}
if (((int)scale & 0x03) == (int)SizeScale.Bytes)
{
return SizeInBytes / Math.Pow(10, 3 * (((int)scale & 0xFF00) >> 8));
}
return SizeInBytes / Math.Pow(2, 10 * (((int)scale & 0xFF00) >> 8));
}
/// <summary>
/// Returns a <see cref="DataSize"/> that is the highest value which will have a non-zero whole-number <see cref="Size"/> component.
/// </summary>
/// <param name="scaleType">When set to <see cref="SizeScale.Bytes"/> the result will be a <code>B</code> type, when set to <see cref="SizeScale.Bits"/> the result will be a <code>iB</code> type. If set to <see cref="SizeScale.None"/> the same base unit as the source value will be used.</param>
/// <returns>A <see cref="DataSize"/> object.</returns>
public DataSize GetLargestWholeSize(SizeScale scaleType = SizeScale.None)
{
var limit = 1000ul;
if (scaleType == SizeScale.None)
{
scaleType = (SizeScale)((int)Scale & 0x00FF);
}
if (scaleType == SizeScale.Bits)
{
limit = 1024ul;
}
var iterations = 0;
var currSize = (double)SizeInBytes;
while (currSize >= limit)
{
currSize /= limit;
iterations++;
}
return new DataSize(currSize, (SizeScale)((iterations << 8) | ((int)scaleType & 0x00FF)));
}
/// <summary>
/// Returns a <see cref="DataSize"/> that is the smallest value which will have a zero whole-number <see cref="Size"/> component.
/// </summary>
/// <param name="scaleType">When set to <see cref="SizeScale.Bytes"/> the result will be a <code>B</code> type, when set to <see cref="SizeScale.Bits"/> the result will be a <code>iB</code> type. If set to <see cref="SizeScale.None"/> the same base unit as the source value will be used.</param>
/// <returns>A <see cref="DataSize"/> object.</returns>
public DataSize GetSmallestPartialSize(SizeScale scaleType = SizeScale.None)
{
var limit = 1000ul;
if (scaleType == SizeScale.None)
{
scaleType = (SizeScale)((int)Scale & 0x00FF);
}
if (scaleType == SizeScale.Bits)
{
limit = 1024ul;
}
var iterations = 0;
var currSize = (double)SizeInBytes;
while (currSize >= limit)
{
currSize /= limit;
iterations++;
}
iterations++;
return new DataSize(currSize, (SizeScale)((iterations << 8) | ((int)scaleType & 0x00FF)));
}
public override bool Equals(object obj) => obj is DataSize && (DataSize)obj == this;
public override int GetHashCode() => Size.GetHashCode();
public override string ToString() => $"{Size} {Scale.Abbreviation()}";
public string ToString(string numberFormat) => $"{Size.ToString(numberFormat)} {Scale.Abbreviation()}";
public string ToString(SizeScale scale) => $"{GetSize(scale)} {scale.Abbreviation()}";
public string ToString(string numberFormat, SizeScale scale) => $"{GetSize(scale).ToString(numberFormat)} {scale.Abbreviation()}";
public bool IsSame(DataSize comparison) => SizeInBytes == comparison.SizeInBytes && Scale == comparison.Scale;
public static bool IsSame(DataSize left, DataSize right) => left.SizeInBytes == right.SizeInBytes && left.Scale == right.Scale;
public static bool operator ==(DataSize left, DataSize right) => left.SizeInBytes == right.SizeInBytes;
public static bool operator !=(DataSize left, DataSize right) => left.SizeInBytes != right.SizeInBytes;
public static DataSize operator +(DataSize left, DataSize right) => new DataSize(left.SizeInBytes + right.SizeInBytes, left.Scale);
public static DataSize operator -(DataSize left, DataSize right) => new DataSize(left.SizeInBytes - right.SizeInBytes, left.Scale);
public static DataSize operator *(DataSize left, ulong right) => new DataSize(left.SizeInBytes * right, left.Scale);
public static DataSize operator /(DataSize left, ulong right) => new DataSize(left.SizeInBytes / right, left.Scale);
public static DataSize operator *(DataSize left, double right) => new DataSize((ulong)(left.SizeInBytes * right), left.Scale);
public static DataSize operator /(DataSize left, double right) => new DataSize((ulong)(left.SizeInBytes / right), left.Scale);
}
Next I have a SizeScale enum, along with extension methods for it:
public static class SizeScaleExtensions
{
public static string Abbreviation(this SizeScale scale)
{
if (scale == SizeScale.None)
{
return null;
}
if (scale == SizeScale.Bytes)
{
return "B";
}
if (scale == SizeScale.Bits)
{
return "b";
}
var firstLetter = scale.ToString()[0] + "";
if (((int)scale & 0x00FF) == (int)SizeScale.Bits)
{
return firstLetter + "iB";
}
return firstLetter + "B";
}
}
public enum SizeScale : int
{
None = 0x0000,
Bytes = 0x0001,
Bits = 0x0002,
Kilobytes = 0x0101,
Kibibytes = 0x0102,
Megabytes = 0x0201,
Mebibytes = 0x0202,
Gigabytes = 0x0301,
Gibibytes = 0x0302,
Terabytes = 0x0401,
Tibibytes = 0x0402,
Petabytes = 0x0501,
Pibibytes = 0x0502,
Exabytes = 0x0601,
Exbibytes = 0x0602,
Zettabytes = 0x0701,
Zebibytes = 0x0702,
Yottabytes = 0x0801,
Yobibytes = 0x0802,
}
Both the extensions and that enum declaration are in the same file, which means that the extension method is easily available.
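To make the packed-enum arithmetic explicit, here is a small Python sketch of the same scheme (my own illustration, not part of the reviewed C#; the base-1000 vs. base-1024 split is inferred from the unit tests below): the low byte selects the unit family (0x01 for decimal "B" units, 0x02 for binary "iB" units) and the high byte is the power index.

```python
# Sketch of the SizeScale packing: 0xPPFF, where PP is the power index
# (1 = kilo/kibi, 2 = mega/mebi, ...) and FF is the unit family
# (0x01 = decimal, base 1000; 0x02 = binary, base 1024).
KILOBYTES, KIBIBYTES, MEGABYTES, MEBIBYTES = 0x0101, 0x0102, 0x0201, 0x0202

def get_size(size_in_bytes, scale):
    family = scale & 0x00FF          # unit family (low byte)
    power = (scale & 0xFF00) >> 8    # power index (high byte)
    base = 1000 if family == 0x01 else 1024
    return size_in_bytes / base ** power

print(get_size(1000, KILOBYTES))   # 1.0
print(get_size(1024, KIBIBYTES))   # 1.0
```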
Lastly, I have some tests (I know I need a lot more):
[TestClass]
public class DataSizeTests
{
[TestMethod, TestCategory("Data Size Tests")]
public void GetSize_1_SizeScale_Bits()
{
var expected = 8.0;
var input = 1u;
var actual = new DataSize(input).GetSize(SizeScale.Bits);
Assert.AreEqual(expected, actual);
}
[TestMethod, TestCategory("Data Size Tests")]
public void GetSize_1_SizeScale_Bytes()
{
var expected = 1.0;
var input = 1u;
var actual = new DataSize(input).GetSize(SizeScale.Bytes);
Assert.AreEqual(expected, actual);
}
[TestMethod, TestCategory("Data Size Tests")]
public void GetSize_1000_SizeScale_Bytes()
{
var expected = 1000.0;
var input = 1000u;
var actual = new DataSize(input).GetSize(SizeScale.Bytes);
Assert.AreEqual(expected, actual);
}
[TestMethod, TestCategory("Data Size Tests")]
public void GetSize_1024_SizeScale_Bytes()
{
var expected = 1024.0;
var input = 1024u;
var actual = new DataSize(input).GetSize(SizeScale.Bytes);
Assert.AreEqual(expected, actual);
}
[TestMethod, TestCategory("Data Size Tests")]
public void GetSize_1000_SizeScale_Kilobytes()
{
var expected = 1.0;
var input = 1000u;
var actual = new DataSize(input).GetSize(SizeScale.Kilobytes);
Assert.AreEqual(expected, actual);
}
[TestMethod, TestCategory("Data Size Tests")]
public void GetSize_1024_SizeScale_Kilobytes()
{
var expected = 1.024;
var input = 1024u;
var actual = new DataSize(input).GetSize(SizeScale.Kilobytes);
Assert.AreEqual(expected, actual);
}
[TestMethod, TestCategory("Data Size Tests")]
public void GetSize_1000_SizeScale_Kibibytes()
{
var expected = 0.9765625;
var input = 1000u;
var actual = new DataSize(input).GetSize(SizeScale.Kibibytes);
Assert.AreEqual(expected, actual);
}
[TestMethod, TestCategory("Data Size Tests")]
public void GetSize_1024_SizeScale_Kibibytes()
{
var expected = 1.0;
var input = 1024u;
var actual = new DataSize(input).GetSize(SizeScale.Kibibytes);
Assert.AreEqual(expected, actual);
}
[TestMethod, TestCategory("Data Size Tests")]
public void GetSize_1000000000_SizeScale_Gigabytes()
{
var expected = 1.0;
var input = 1000000000u;
var actual = new DataSize(input).GetSize(SizeScale.Gigabytes);
Assert.AreEqual(expected, actual);
}
[TestMethod, TestCategory("Data Size Tests")]
public void GetSize_1073741824_SizeScale_Gigabytes()
{
var expected = 1.073741824;
var input = 1073741824ul;
var actual = new DataSize(input).GetSize(SizeScale.Gigabytes);
Assert.AreEqual(expected, actual);
}
[TestMethod, TestCategory("Data Size Tests")]
public void GetSize_1000000000_SizeScale_Gibibytes()
{
var expected = 0.931322574615478515625;
var input = 1000000000u;
var actual = new DataSize(input).GetSize(SizeScale.Gibibytes);
Assert.AreEqual(expected, actual);
}
[TestMethod, TestCategory("Data Size Tests")]
public void GetSize_1073741824_SizeScale_Gibibytes()
{
var expected = 1.0;
var input = 1073741824ul;
var actual = new DataSize(input).GetSize(SizeScale.Gibibytes);
Assert.AreEqual(expected, actual);
}
[TestMethod, TestCategory("Data Size Tests")]
public void Construct_8_SizeScale_Bits()
{
var expected = new DataSize(1u);
var input = 8u;
var actual = new DataSize((double)input, SizeScale.Bits);
Assert.AreEqual(expected, actual);
}
[TestMethod, TestCategory("Data Size Tests")]
public void Construct_1_SizeScale_Bytes()
{
var expected = new DataSize(1u);
var input = 1u;
var actual = new DataSize(input, SizeScale.Bytes);
Assert.AreEqual(expected, actual);
}
[TestMethod, TestCategory("Data Size Tests")]
public void GetLargestWholeSize_SizeScale_Bits_1024_SizeScale_Kibibytes()
{
var expected = new DataSize(1.0, SizeScale.Mebibytes);
var input = 1024u;
var actual = new DataSize(input, SizeScale.Kibibytes).GetLargestWholeSize(SizeScale.Bits);
Assert.AreEqual(expected.Size, actual.Size);
}
[TestMethod, TestCategory("Data Size Tests")]
public void GetLargestWholeSize_SizeScale_Bytes_1000_SizeScale_Kilobytes()
{
var expected = new DataSize(1.0, SizeScale.Megabytes);
var input = 1000u;
var actual = new DataSize(input, SizeScale.Kilobytes).GetLargestWholeSize(SizeScale.Bytes);
Assert.AreEqual(expected.Size, actual.Size);
}
[TestMethod, TestCategory("Data Size Tests")]
public void Subtract_2_SizeScale_Bytes_1_SizeScale_Bytes()
{
var expected = new DataSize(1u, SizeScale.Bytes);
var initial = new DataSize(2u, SizeScale.Bytes);
var subtract = new DataSize(1u, SizeScale.Bytes);
var actual = initial - subtract;
Assert.AreEqual(expected, actual);
}
[TestMethod, TestCategory("Data Size Tests")]
public void Add_1_SizeScale_Bytes_1_SizeScale_Bytes()
{
var expected = new DataSize(2u, SizeScale.Bytes);
var initial = new DataSize(1u, SizeScale.Bytes);
var add = new DataSize(1u, SizeScale.Bytes);
var actual = initial + add;
Assert.AreEqual(expected, actual);
}
}
Here are the tests for SizeScale:
[TestClass]
public class SizeScaleTests
{
[TestMethod, TestCategory("Size Scale Tests")]
public void Abbreviation_None()
{
var input = SizeScale.None;
var actual = input.Abbreviation();
Assert.IsNull(actual);
}
[TestMethod, TestCategory("Size Scale Tests")]
public void Abbreviation_Bytes()
{
var expected = "B";
var input = SizeScale.Bytes;
var actual = input.Abbreviation();
Assert.AreEqual(expected, actual);
}
[TestMethod, TestCategory("Size Scale Tests")]
public void Abbreviation_Bits()
{
var expected = "b";
var input = SizeScale.Bits;
var actual = input.Abbreviation();
Assert.AreEqual(expected, actual);
}
[TestMethod, TestCategory("Size Scale Tests")]
public void Abbreviation_Kilobytes()
{
var expected = "KB";
var input = SizeScale.Kilobytes;
var actual = input.Abbreviation();
Assert.AreEqual(expected, actual);
}
[TestMethod, TestCategory("Size Scale Tests")]
public void Abbreviation_Kibibytes()
{
var expected = "KiB";
var input = SizeScale.Kibibytes;
var actual = input.Abbreviation();
Assert.AreEqual(expected, actual);
}
[TestMethod, TestCategory("Size Scale Tests")]
public void Abbreviation_Megabytes()
{
var expected = "MB";
var input = SizeScale.Megabytes;
var actual = input.Abbreviation();
Assert.AreEqual(expected, actual);
}
[TestMethod, TestCategory("Size Scale Tests")]
public void Abbreviation_Mebibytes()
{
var expected = "MiB";
var input = SizeScale.Mebibytes;
var actual = input.Abbreviation();
Assert.AreEqual(expected, actual);
}
[TestMethod, TestCategory("Size Scale Tests")]
public void Abbreviation_Gigabytes()
{
var expected = "GB";
var input = SizeScale.Gigabytes;
var actual = input.Abbreviation();
Assert.AreEqual(expected, actual);
}
[TestMethod, TestCategory("Size Scale Tests")]
public void Abbreviation_Gibibytes()
{
var expected = "GiB";
var input = SizeScale.Gibibytes;
var actual = input.Abbreviation();
Assert.AreEqual(expected, actual);
}
[TestMethod, TestCategory("Size Scale Tests")]
public void Abbreviation_Terabytes()
{
var expected = "TB";
var input = SizeScale.Terabytes;
var actual = input.Abbreviation();
Assert.AreEqual(expected, actual);
}
[TestMethod, TestCategory("Size Scale Tests")]
public void Abbreviation_Tibibytes()
{
var expected = "TiB";
var input = SizeScale.Tibibytes;
var actual = input.Abbreviation();
Assert.AreEqual(expected, actual);
}
}
All the tests pass at the moment.
Answer: I would define the API a little differently. Let's go with a couple of types, SizeUnit and DataSize, so they can be used as:
using static SizeUnit;
using static Console;
class Program
{
static void Main(string[] args)
{
double one = 1.0;
DataSize size = one.In(Kilobyte);
WriteLine(size); // 1 kB
SizeUnit unit = Byte;
DataSize size2 = size.To(unit);
WriteLine(size2); // 1024 B
WriteLine(one.In(Byte) + one.In(Kilobyte)); // 1025 B
WriteLine(one.In(Bit) + one.In(Byte)); // 9 b
}
}
Where the library code is (slightly simplified, just to demonstrate the API):
public class SizeUnit
{
public static readonly SizeUnit Bit = new SizeUnit("b", 0.125);
public static readonly SizeUnit Byte = new SizeUnit("B", 1);
public static readonly SizeUnit Kilobyte = new SizeUnit("kB", 1024);
// etc...
string Symbol { get; }
double Value { get; }
SizeUnit(string symbol, double value)
{
Symbol = symbol;
Value = value;
}
public override string ToString() => Symbol;
internal double ToSize(double bytes) => bytes / Value;
internal double ToBytes(double size) => size * Value;
}
And:
public struct DataSize
{
public static DataSize operator +(DataSize left, DataSize right) =>
new DataSize(left.Bytes + right.Bytes).To(left.Unit);
public static DataSize operator -(DataSize left, DataSize right) =>
new DataSize(left.Bytes - right.Bytes).To(left.Unit);
public static DataSize operator *(DataSize left, ulong right) =>
new DataSize(left.Bytes * right).To(left.Unit);
// etc...
DataSize(double bytes)
: this(bytes, Byte)
{
}
public DataSize(double bytes, SizeUnit unit)
{
Bytes = bytes;
Unit = unit;
}
public override string ToString() => $"{Value} {Unit}";
public double Bytes { get; }
public double Value => Unit.ToSize(Bytes);
public SizeUnit Unit { get; }
public DataSize To(SizeUnit unit) =>
new DataSize(Bytes, unit);
}
And:
public static class Conversions
{
public static DataSize In(this double value, SizeUnit unit) =>
new DataSize(unit.ToBytes(value), unit);
} | {
"domain": "codereview.stackexchange",
"id": 21808,
"tags": "c#, .net, unit-testing"
} |
URDF model spawns with no link | Question:
Hey, so I've been making a package for a competition and made a custom model for an arena in Blender, exporting the .dae file to work with ROS Melodic + Gazebo 9.
The model spawns successfully but has no link. The code for the .launch file and URDF file is below.
.launch file
<launch>
<!-- these are the arguments you can pass this launch file, for example paused:=true -->
<arg name="paused" default="true"/>
<arg name="use_sim_time" default="true"/>
<arg name="gui" default="true"/>
<arg name="headless" default="false"/>
<arg name="debug" default="false"/>
<arg name="arena" default="$(find grid3)/src/urdf/arena1.urdf"/>
<arg name="extra_gazebo_args" default="--verbose"/>
<include file="$(find gazebo_ros)/launch/empty_world.launch">
<arg name="debug" value="$(arg debug)" />
<arg name="gui" value="$(arg gui)" />
<arg name="paused" value="$(arg paused)"/>
<arg name="use_sim_time" value="$(arg use_sim_time)"/>
<arg name="headless" value="$(arg headless)"/>
<arg name="extra_gazebo_args" value="$(arg extra_gazebo_args)"/>
</include>
<param name="arena_description" command="$(find xacro)/xacro $(arg arena)"/>
<node name="urdf_spawner" pkg="gazebo_ros" type="spawn_model"
args="-urdf -z 0.0 -model arena -param arena_description" respawn="false" output="screen" />
</launch>
URDF file:
<?xml version="1.0" ?>
<robot name="arena_one" xmlns:xacro="https://www.ros.org/wiki/xacro" >
<gazebo>
<static>true</static>
</gazebo>
<link name="base_link">
<visual name="visual">
<geometry>
<mesh filename="package://grid3/src/mesh/Arena_ps2.dae" scale="1 1 1"/>
</geometry>
<material>
<texture filename="package://grid3/src/mesh/Screenshot from 2021-07-30 15-50-19.png"/>
</material>
</visual>
<collision name="collision">
<geometry>
<mesh filename="package://grid3/src/mesh/Arena_ps2.dae" scale="1 1 1"/>
</geometry>
<surface>
<friction>
<ode>
<mu>1</mu>
<mu2>1</mu2>
<slip1>0</slip1>
<slip2>0</slip2>
</ode>
</friction>
</surface>
</collision>
</link>
</robot>
roslaunch says that the model has spawned successfully, but in Gazebo it doesn't seem to have any link: image.
It's my first time working with models of my own, thanks for your help.
Originally posted by devrajPriyadarshi on ROS Answers with karma: 1 on 2021-08-15
Post score: 0
Answer:
This is more of a https://answers.gazebosim.org/questions/ question than a ROS question, so I'd encourage you to use the dedicated Gazebo site in the future... But this is a very common issue and almost always comes down to Gazebo not finding your .dae file on its model path.
There are 3 log files that you can check that often shed some light:
~/.gazebo/gzclientXXX/default.log
~/.gazebo/gzserverXXX/default.log
~/.gazebo/ogre.log
You will likely find a "file not found" error
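If you'd rather not eyeball the logs, a small Python helper (my own sketch, not part of any Gazebo tooling) can grep them for the usual missing-resource messages:

```python
import glob
import os
import re

# Phrases Gazebo/Ogre typically emit when a mesh or texture can't be located.
ERR = re.compile(r"not found|does not exist|unable to find", re.IGNORECASE)

def find_resource_errors(patterns):
    """Return (path, line) pairs for likely missing-resource log messages."""
    hits = []
    for pat in patterns:
        for path in sorted(glob.glob(os.path.expanduser(pat))):
            with open(path, errors="replace") as fh:
                hits.extend((path, ln.strip()) for ln in fh if ERR.search(ln))
    return hits

if __name__ == "__main__":
    for path, line in find_resource_errors([
        "~/.gazebo/gzclient*/default.log",
        "~/.gazebo/gzserver*/default.log",
        "~/.gazebo/ogre.log",
    ]):
        print(path, line)
```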
(Also see:)
https://answers.ros.org/question/250898/model-from-world-file-not-visible-in-gazebo-when-using-roslaunch/
https://answers.gazebosim.org//question/1940/problem-with-including-a-model/
https://answers.ros.org/question/304925/gazebo-model-does-not-load-the-materials/
Originally posted by shonigmann with karma: 1567 on 2021-08-16
This answer was ACCEPTED on the original site
Post score: 1 | {
"domain": "robotics.stackexchange",
"id": 36802,
"tags": "ros, gazebo, urdf, ros-melodic, base-link"
} |
Domino tiling of a 2xN rectangle in O(ln n) | Question: I solved this problem using dynamic programming in $\mathcal{O}(n)$ time. I found that it is equivalent to the Fibonacci Numbers.
$F(0) = F(1) = 1$
$F(n) = F(n-1)+F(n-2)$
Where the $F(n-1)$ term is from fixing the left most domino vertically, and the $F(n-2)$ from fixing it horizontally (which implies that the domino under it must also placed horizontally).
Because the Fibonacci Numbers can be generated in $\mathcal{O}(\lg n)$ using
$F(2n) = F(n)F(n+1)+F(n-1)F(n)$
$F(2n+1) = F(n)F(n)+F(n-1)F(n-1)$
Then I try to find the same expression from the domino tiling. Again, I classify all possible tilings into two sets. First, all tilings like the one below
Therefore, $F_H(n) = F(i)F(n-i-2)$. Because I want to split in the middle (or close) I consider $n=2k, i=k$ and $n=2k+1, i=k$.
Then
$F_H(2k) = F(k)F(2k-k-2) = F(k)F(k-2)$ and $F_H(2k+1) = F(k)F(2k+1-k-2) = F(k)F(k-1)$.
Then, all tilings of the form
Again, $F_V(n) = F(i)F(n-i-1)$ and when $n=2k, i=k$ and $n=2k+1, i=k$ we have
$F_V(2k) = F(k)F(2k-k-1) = F(k)F(k-1)$ and $F_V(2k+1) = F(k)F(2k+1-k-1) = F(k)F(k)$
Combining all the former expressions,
$F(2k) = F_H(2k) + F_V(2k) = F(k)F(k-2) + F(k)F(k-1)$
$F(2k+1) = F_H(2k+1) + F_H(2k+1) = F(k)F(k-1) + F(k)F(k)$
However, those final expressions do not produce the same numbers. Because they look really similar, I believe my approach is not completely wrong, but I cannot find where I made my mistake.
Final notes:
In the first classification, I fix the topmost domino horizontally, and if I consider a 1-tile-shifted horizontal domino (left or right) under it, then the rectangle cannot be tiled.
For the (correct) expression $F(2n+1)=F(n)F(n)+F(n-1)F(n-1)$, notice that $F(n)F(n)$ can be mapped to the second case, because $n+1+n = 2n+1$, but the other term $F(n-1)F(n-1)$ cannot (at least not in the same way), since $(n-1)+2+(n-1)\neq 2n+1$. The same can be noticed in $F(2n)$, where again one term can be mapped to the second case while the other cannot (to the first case). I believe my mistake should be around the first case.
Answer: In your second attempt you tried to fix the $(k+1)$th term. But instead of the two cases
1 2 ... k k+1 k+2 ... n
|
|
and
1 2 ... k k+1 k+2 ... n
-----
-----
there exists a third case:
1 2 ... k k+1 k+2 ... n
----
----
So if $n = 2 k$, then you have $f(k)f(k-1)$ for the first case, $f(k)f(k-2)$ for the second one, and $f(k-1)f(k-1)$ for the third one. This gives in total:
$f(2k) = f(k)f(k-1) + f(k)f(k-2) + f(k-1)f(k-1) = f(k)[f(k-1) + f(k-2)] + f(k-1)f(k-1) = f(k)^2 + f(k-1)^2$.
Now this formula looks like the formula for $F(2n+1)$ that you posted.
Because the Fibonacci Numbers can be generated in O(lgn) using
F(2n)=F(n)F(n+1)+F(n−1)F(n)
F(2n+1)=F(n)F(n)+F(n−1)F(n−1)
Simple reason: the formula assumes that $F(1) = F(2) = 1$ while you have $f(0) = f(1) = 1$. So after an index shift it should match.
Identical procedure for the odd case $n = 2k + 1$.
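Putting the doubling identities to work gives the $\mathcal{O}(\lg n)$ routine the question was after. A Python sketch of my own, using the tiling convention $f(0)=f(1)=1$ (i.e. $f(n)=F(n+1)$ in the standard indexing); the even case is exactly the $f(2k)=f(k-1)^2+f(k)^2$ derived above, and the odd-case identity is one standard form obtainable by the same index shift:

```python
def tilings(n):
    """Number of domino tilings of a 2 x n board: f(0) = f(1) = 1,
    f(n) = f(n-1) + f(n-2), computed in O(log n) by fast doubling with
        f(2m)   = f(m-1)^2 + f(m)^2
        f(2m+1) = f(m) * (2*f(m+1) - f(m))
    """
    def pair(k):
        # Returns (f(k), f(k+1)).
        if k == 0:
            return (1, 1)
        a, b = pair(k // 2)            # a = f(m), b = f(m+1), m = k // 2
        even = (b - a) ** 2 + a * a    # f(2m), using f(m-1) = f(m+1) - f(m)
        odd = a * (2 * b - a)          # f(2m+1)
        return (even, odd) if k % 2 == 0 else (odd, even + odd)
    return pair(n)[0]

print(tilings(10))  # 89
```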
edit: Tried to do the index shift but failed. Reason was, that you wrote down the wrong formula for $F(2n+1)$. It should be $F(2n+1)=F(n)^2+F(n+1)^2$. Then the index shift works:
$f(2k) = F(2k+1) = F(k)^2+F(k+1)^2 = f(k-1)^2 + f(k)^2$. | {
"domain": "cs.stackexchange",
"id": 9523,
"tags": "dynamic-programming, tiling"
} |
ros_arduino_bridge configuration error | Question:
Hi,
I followed the readme tutorial of ros_arduino_bridge at this link (link text), and when I run roslaunch ros_arduino_python arduino.launch, I receive this error:
Connecting to Arduino on port /dev/ttyACM0 ...
Traceback (most recent call last):
  File "/home/irisecesi/catkin_ws/src/ros_arduino_bridge/ros_arduino_python/nodes/arduino_node.py", line 195, in <module>
    myArduino = ArduinoROS()
  File "/home/irisecesi/catkin_ws/src/ros_arduino_bridge/ros_arduino_python/nodes/arduino_node.py", line 87, in __init__
    self.controller.connect()
  File "/home/irisecesi/catkin_ws/src/ros_arduino_bridge/ros_arduino_python/src/ros_arduino_python/arduino_driver.py", line 68, in connect
    test = self.get_baud()
  File "/home/irisecesi/catkin_ws/src/ros_arduino_bridge/ros_arduino_python/src/ros_arduino_python/arduino_driver.py", line 261, in get_baud
    return int(self.execute('b'));
  File "/home/irisecesi/catkin_ws/src/ros_arduino_bridge/ros_arduino_python/src/ros_arduino_python/arduino_driver.py", line 178, in execute
    return int(value)
TypeError: int() argument must be a string or a number, not 'NoneType'
[INFO] [WallTime: 1468423834.069478] Stopping the robot...
[INFO] [WallTime: 1468423834.070452] Shutting down Arduino Node...
[arduino-1] process has died [pid 10979, exit code 1, cmd /home/irisecesi/catkin_ws/src/ros_arduino_bridge/ros_arduino_python/nodes/arduino_node.py __name:=arduino __log:=/home/irisecesi/.ros/log/3ed0dd4c-4904-11e6-9d6d-eca86bf97b81/arduino-1.log].
log file: /home/irisecesi/.ros/log/3ed0dd4c-4904-11e6-9d6d-eca86bf97b81/arduino-1*.log
all processes on machine have died, roslaunch will exit
shutting down processing monitor...
... shutting down processing monitor complete
done
What can I do, please?
Originally posted by Emilien on ROS Answers with karma: 167 on 2016-07-13
Post score: 0
Original comments
Comment by Pi Robot on 2016-07-13:
Can you please provide the following additional information: (1) Your version of Ubuntu, (2) Your version of ROS, (3) Which branch of ros_arduino_bridge are you using (e.g. hydro-devel, indigo-devel), (4) which Arduino board are you using (e.g. Uno, Mega, etc.)
Comment by Emilien on 2016-07-14:
I use ubuntu 14.04 , ros indigo, arduino mega and ros_arduino_bridge indigo-devel
Answer:
Thanks for the additional info. I have added a try-except around the get_baud() function that should fix this. Please do a "git pull" and see if it works for you. It might also help to increase the timeout parameter in your params file from the default of 0.1 to 0.5. There should be no need to rerun catkin_make.
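The guard described is roughly of this shape (a hypothetical Python sketch, not the actual ros_arduino_bridge source; `execute` stands in for the driver's serial query method). The first reply after opening the port is often empty because the Arduino resets, so the query is retried instead of calling int() on None:

```python
def get_baud(execute, retries=3):
    """Query the firmware's baud rate, tolerating a dropped first reply."""
    for _ in range(retries):
        try:
            value = execute('b')       # firmware replies with its baud rate
            if value is not None:
                return int(value)
        except (TypeError, ValueError):
            pass                       # garbled or empty reply: try again
    return None
```

With a longer serial timeout (e.g. 0.5 s, as suggested) the first query is also more likely to succeed outright.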
Originally posted by Pi Robot with karma: 4046 on 2016-07-14
This answer was ACCEPTED on the original site
Post score: 0 | {
"domain": "robotics.stackexchange",
"id": 25233,
"tags": "ros, ros-arduino-bridge"
} |
Is the overall force of a test charge in an electromagnetic field always the same? | Question: I had a conversation with my physics teacher today about electric fields, and he told me something I couldn't believe. Because I found no information elsewhere, I am going to ask you guys.
Imagine a radially symmetric electric field like this:
The inner circle is positive and the outer ring is negative (or the other way around)
Imagine putting a test charge near the middle of the field and measuring/calculating its force (let's call it $f_1$), and putting another test charge near the outer ring and measuring/calculating its force (let's call it $f_2$). Would $f_1=f_2$, $f_1>f_2$ or $f_1<f_2$?
I am talking about the overall force, not the force between the test charge and the negatively/positively charged ring or circle.
Answer: The result that is most relevant here for understanding this is Gauss' Law. Gauss' Law says in particular that the electric field 'through' a surface (i.e. the flux) depends only upon (and in fact is proportional to) the charge contained inside the surface. Here this implies that the force on a test charge inside the outer shell does not depend at all on the charge on the outer shell. In fact, if the total charge in the inner shell is $q$, then for a point in between the shells a distance $r$ from the center, the electric field will be
$$
\frac{1}{4\pi \varepsilon _0}\frac{q}{r^2}\hat{\mathbf{r}}.
$$
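As a quick numerical illustration (with made-up values for $q$, $r_1$ and $r_2$), the ratio of the forces depends only on the distances:

```python
import math

EPS0 = 8.854e-12                 # vacuum permittivity, F/m

def field(q, r):
    """Magnitude of E between the shells: only the enclosed charge matters."""
    return q / (4 * math.pi * EPS0 * r ** 2)

q = 1e-9                         # hypothetical 1 nC on the inner conductor
r1, r2 = 0.01, 0.05              # hypothetical distances from the center, m
ratio = field(q, r1) / field(q, r2)   # = (r2 / r1)**2 = 25.0
```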
Thus, $f_2<f_1$ (because $r_2>r_1$, where $r_2$ is the distance the charge near the outer shell is from the center and $r_1$ is the distance the charge in the middle is from the center). | {
"domain": "physics.stackexchange",
"id": 17748,
"tags": "homework-and-exercises, electromagnetism, electric-fields"
} |
Is perfect thermal insulation possible? | Question: Supposing someone says they have a sample of insulanium with the following properties:
If you make a thermos out of insulanium, it will keep a drink hot or cold forever.
If you put your hand on a sheet of insulanium, it won't feel hot or cold, because no heat is exchanged with your hand.
A sheet of insulanium looks like a mirror at all frequencies, because it neither absorbs nor emits radiation.
Is such a material possible? Could you use the above properties of insulanium to design an experiment that would break the laws of physics?
Answer: I don't see how the existence of insulanium violates any of the laws of thermodynamics:
The 1st Law is a statement that energy is conserved. But there is no suggestion that energy is being created or destroyed here.
The 2nd Law states that the entropy of an isolated system only increases, never decreases. This is not saying that the entropy cannot stay the same for a very long time. When "forever" is reached, the insulanium will be at the same temperature as the contents of the flask - and/or its surroundings. The entropy will then have increased. The fact that it takes "forever" to reach thermal equilibrium is not a violation. Neither is it being claimed that the insulanium will get colder while the contents get hotter. That would violate the 2nd Law.
The 3rd Law states that any isothermal reversible process always increases entropy, except at absolute zero when $\Delta S=0$. No reversible process is being performed here. So I don't see how this Law applies.
The only thing I can find "wrong" with the description of insulanium is that its properties are not well defined. How long is "forever"? In physics - well, in engineering and technology! - unlike maths, we do not deal with infinities, and we have to be careful about zeros also. "Forever" could mean 10 years, which is far longer than any domestic thermos flask could keep something warm. It could mean 1000 years. That would be incredible, but it is not physically impossible.
Likewise, the tolerance for 100% reflectivity is undefined. Is this >95% or >99.5% or >99.995% or what? Even within a stated tolerance, "perfect" reflectivity at all wavelengths might be unrealistic but it is not physically impossible. We define a blackbody as an ideal absorber/emitter at all wavelengths. Although no real material with this property exists, it would not violate the laws of physics if it did.
Insulanium might have unrealistic properties. It might be highly unlikely that it could ever exist. But as far as I can see it does not violate any laws of physics. | {
"domain": "physics.stackexchange",
"id": 32578,
"tags": "thermodynamics, thermal-conductivity"
} |
how to install ros on unbuntu 13.10 | Question:
I tried to follow the "Ubuntu install of ROS Hydro" guide to install, but failed.
Can ROS be installed on Ubuntu 13.10?
Originally posted by chi on ROS Answers with karma: 1 on 2014-01-14
Post score: 0
Original comments
Comment by Hamid Didari on 2014-01-14:
What kind of error did you receive?
Comment by dornhege on 2014-01-15:
failed says nothing! What exactly did you do? What did you observe? What did you expect to happen and worked out differently?
Answer:
According to this REP (REP 003), Hydro is targeted to Ubuntu 12.04 LTS, 12.10 and 13.04, not for 13.10.
Originally posted by gustavo.velascoh with karma: 756 on 2014-01-15
This answer was ACCEPTED on the original site
Post score: 1 | {
"domain": "robotics.stackexchange",
"id": 16653,
"tags": "ros"
} |
Is the beta barium borate crystal an observer in the delayed choice quantum eraser double split experiment? | Question: I'm a little confused about the top answer to this question:
Variation of delayed choice quantum eraser
He says "if you simply detect all signal photons and make no distinction between them, there will be no interference pattern on the screen"
But in the standard vanilla double slit experiment with no observer, there is an interference pattern between all the photons.
Question. Does this mean the Beta Barium Borate Crystal is effectively an "observer" for the set of all the signal photons that hit D0?
If not, then what is?
EDIT: I guess my question wasn't clear
I want to know why there is no interference pattern among all the signal photons. This is a double-slit experiment, and normally a double-slit experiment causes an interference pattern (among all the photons) unless there's an observer at the slits. So why doesn't this one?
EDIT 2: I understand that the interference pattern shown by only the D1 events and that shown by only the D2 events cancel out to produce no interference pattern. However, that is only a correlated result. It doesn't explain why there is no interference pattern. My question pertains only to the cause of the lack of interference.
Answer: If we look at the double slit experiment, the states corresponding to the slits can be called $| A \rangle, | B \rangle$, and are assumed to be normalized. They are not orthogonal because, otherwise, there would be no interference. One may write:
$| B \rangle = \cos k x| A \rangle + \sin k x| A_\perp \rangle$ , where $A_\perp $ is a state orthogonal to $A$.
The total state is $|\psi \rangle = | A \rangle + | B \rangle = (1 + \cos kx)| A \rangle + \sin kx | A_\perp \rangle \quad \quad\quad\quad\quad\quad\quad\quad\quad\quad (1)$,
and the probability is proportionnal to :
$|\psi|^2 = 2 + 2 \cos kx \quad \quad\quad\quad\quad\quad\quad\quad\quad\quad (2) $
So we find interference, as expected.
Now, with the delayed-choice quantum eraser double-slit experiment, a very simple model of the total state could be written:
$|\psi \rangle = (| A_1 A_2 \rangle + | A'_1 A'_2 \rangle) + (|B_1 B_2 \rangle + | B'_1 B'_2 \rangle)\quad\quad\quad\quad\quad\quad\quad\quad\quad (3)$
Here the couples $(A_1,A'_1)$, $(A_2,A'_2)$,$(B_1,B'_1)$, $(B_2,B'_2)$ are orthogonal states, because of the photons entanglement.
Now the relation between $B_1$ and $A_1$ or $B_2$ and $A_2$ is the same as the relation between $B$ and $A$ in the double slit experiment (before the BBO doubling).
$| B_1 \rangle = \cos k x| A_1 \rangle + \sin k x| A_{1\perp} \rangle\quad\quad\quad\quad\quad\quad\quad\quad\quad (4)$
$| B_2 \rangle = \cos k x| A_2 \rangle + \sin k x| A_{2\perp} \rangle\quad\quad\quad\quad\quad\quad\quad\quad\quad (5)$
And now, we define $B'_1$ and $B'_2$ such that $B_1, B'_1$ and $B_2, B'_2$ are orthogonal.
$| B'_1 \rangle = -\sin k x| A_1 \rangle + \cos k x| A_{1\perp} \rangle\quad\quad\quad\quad\quad\quad\quad\quad\quad (6)$
$| B'_2 \rangle = -\sin k x| A_2 \rangle + \cos k x| A_{2\perp} \rangle\quad\quad\quad\quad\quad\quad\quad\quad\quad (7)$
So, the final expression for $\psi$ is :
$|\psi \rangle = (| A_1 A_2 \rangle + | A'_1 A'_2 \rangle )+ (|A_1 A_2 \rangle + | A_{1\perp} A_{2\perp} \rangle)\quad\quad\quad\quad\quad\quad\quad\quad\quad (8)$
We see that the phase dependence in $x$ has disappeared, this means that the 2-qbit density matrix has no dependence in $x$.
Now, if we measure only the first qbit (the signal photon), we have to take the partial trace of the 2-qbit density matrix, to obtain the 1-qbit density matrix.
But, because, in the 2-qbit density matrix, there is no phase dependence in x, it will be the same thing in the 1-qbit density matrix.
So, finally, there is no global interference pattern.
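This cancellation can be checked numerically (a quick numpy sketch of my own; the orthogonal states $A_i$, $A_{i\perp}$ are represented by basis vectors): the B-part of the total state in (3) comes out independent of $x$.

```python
import numpy as np

A = np.array([1.0, 0.0])      # stands for |A_i>
Ap = np.array([0.0, 1.0])     # stands for the orthogonal |A_i perp>

def b_part(kx):
    """The (|B1 B2> + |B'1 B'2>) piece of state (3), built per (4)-(7)."""
    c, s = np.cos(kx), np.sin(kx)
    B1 = c * A + s * Ap
    B2 = c * A + s * Ap
    B1p = -s * A + c * Ap
    B2p = -s * A + c * Ap
    return np.kron(B1, B2) + np.kron(B1p, B2p)

# Same vector for every kx: the cos/sin cross terms cancel, as in (8).
```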
You will note that the fact that the signal and idler photons are entangled is fundamental; for instance, the state $| A_1 A_2 \rangle + |B_1 B_2 \rangle$ has a phase dependence in $x$.
You may be interested in the original article | {
"domain": "physics.stackexchange",
"id": 8873,
"tags": "quantum-mechanics, double-slit-experiment, observers"
} |
Examining Simple Service and Client Tutorial | Question:
When going through the first part of the tutorial in the subject title, this is what happens in the terminal:
warhost@warhost-Latitude-E6400:~/catkin_ws$ rosrun beginner_tutorials add_two_ints_server
[ERROR] [1422561896.227835500]: [registerPublisher] Failed to contact master at [localhost:11311]. Retrying...
It stays there until ^c'd out.
Any idea what I may be doing wrong? I did this in C++
Do you need me to post code?
Originally posted by warhost on ROS Answers with karma: 1 on 2015-01-29
Post score: 0
Answer:
That is expected; the "service server" will just sit there waiting for requests from clients. To exercise the server, you'll need to start another terminal window and run client(s) that make requests to the server.
Originally posted by Morgan with karma: 521 on 2015-01-29
This answer was ACCEPTED on the original site
Post score: 0 | {
"domain": "robotics.stackexchange",
"id": 20731,
"tags": "ros, c++, tutorial, server"
} |
Why is the deprotonated form of dopamine more reactive than the protonated/neutral form of dopamine? | Question: See this (from the Wikipedia article on dopamine):
Dopamine, like most amines, is an organic base. At neutral or acidic pH levels it is generally protonated. The protonated form is highly water-soluble and relatively stable, though it is capable of oxidizing if exposed to oxygen or other oxidants. At basic pH levels, dopamine becomes deprotonated. In this free base form it is less soluble and also highly reactive and easily oxidized.
I'm very curious about this since I'm curious about dopamine auto-oxidation, as it is often a cause of the neurodegeneration observed in those who take amphetamines (and seems more likely in the cytosol than in the vesicles it would otherwise be in). Though I'm also curious how much auto-oxidation would happen for extracellular dopamine as well.
Answer: In general the free amine will react as a nucleophile, i.e. by "donating" an available lone pair of electrons. In its protonated form, this lone pair is "blocked" and thus it has to lose a "blocking" proton to react (I use the words in "quotes" rather loosely - I'm a chemist and not a biochemist, but hopefully it will help).
However, regarding oxidation of dopamine, the reactive site is not the nitrogen (at least not primarily - I will come back to this later) but the catechol part. Because of the electron-donating effect of the two neighboring OH groups, the aromatic ring is electron-rich and more prone to oxidation, giving o-quinone products. This oxidation reaction depends to an extent on the pH, but mainly on the presence of (obviously enough) reactive oxygen species (ROS) and also on enzymes (oxidases) or even Fe ions that can catalyse it.
Coming back to the nitrogen, after oxidation occurs with creation of reactive sites on the quinone part, it can act now as a nucleophile and attack, creating indolines that will further oxidize to the more stable but toxic indole products. | {
"domain": "chemistry.stackexchange",
"id": 1603,
"tags": "biochemistry, amines"
} |
How can we evaluate DBSCAN parameters? | Question: yes, DBSCAN parameters, and in particular the parameter eps (size of the epsilon neighborhood).
In the documentation we have a "Look for the knee in the plot".
Fine, but it requires a visual analysis. And it doesn't really work if we want to make things automatic.
So, I was wondering if it was possible to find a good eps in a few lines of code.
Let's imagine something like:
1. evaluate the kNN distance
2. sort these values
3. scale them (so that the values are always between 0 and 1)
4. evaluate the derivative
5. find the first point where the derivative is higher than a certain value, let's try with 1
In R, it would look like this (using the iris dataset as in the DBSCAN documentation):
# evaluate kNN distance
dist <- dbscan::kNNdist(iris, 4)
# order result
dist <- dist[order(dist)]
# scale
dist <- dist / max(dist)
# derivative
ddist <- diff(dist) / ( 1 / length(dist))
# get first point where derivative is higher than 1
knee <- dist[length(ddist)- length(ddist[ddist > 1])]
and the result is 0.536, which looks quite good.
Is this approach relevant or totally nonsense?
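For what it's worth, here is a rough Python translation of the same recipe, using only numpy and two synthetic blobs in place of iris; the final indexing step is a simplification (first index where the slope exceeds 1), not an exact port of the R line:

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy data: two well-separated Gaussian blobs (stand-in for the iris data)
X = np.vstack([rng.normal(0, 0.3, (100, 2)), rng.normal(3, 0.3, (100, 2))])

k = 4
# 1. evaluate kNN distance: full pairwise matrix, then the k-th smallest
#    per row (column 0 is each point's zero distance to itself)
d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
knn_dist = np.sort(d, axis=1)[:, k]

# 2. sort and 3. scale so the values lie in [0, 1]
dist = np.sort(knn_dist)
scaled = dist / dist[-1]

# 4. finite-difference derivative against a unit-length x-axis
ddist = np.diff(scaled) * len(scaled)

# 5. first point where the derivative exceeds 1 (argmax returns 0 if the
#    threshold is never crossed, so check that case in real code)
idx = int(np.argmax(ddist > 1))
eps = float(dist[idx])
print(eps)
```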
Answer: OPTICS gets rid of $\varepsilon$, you might want to have a look at it. Especially the reachability plot is a way to visualize what good choices of $\varepsilon$ in DBSCAN might be.
Wikipedia (article) illustrates it pretty well. The image on the top left shows the data points, the image on the bottom left is the reachability plot:
The $y$-axis are different values for $\varepsilon$, the valleys are the clusters. Each "bar" is for a single point, where the height of the bar is the minimal distance to the already printed points. | {
"domain": "datascience.stackexchange",
"id": 843,
"tags": "r, clustering"
} |
Customize data types in a generated model with hibernate | Question: I have a Spring/Hibernate application with the following domain class (irrelevant code stripped for brevity):
@Entity
@Data
public class Program implements Serializable {
@Id
@GeneratedValue(strategy = GenerationType.AUTO)
private Long tid;
@Column
@Convert(converter = ProgramStatusConverter.class)
private ProgramStatus status;
@Column
@Length(max=64)
private String containerId;
@Column
private String sourceCode;
@Column
private String compilationOutput;
@Column
@OneToMany(fetch=FetchType.LAZY, mappedBy="program")
private Set<Execution> executions;
}
The problem is that the database schema (currently an in-memory h2 database) is generated when I startup the application. It works fine in almost all cases, but the "compilationOutput" and "sourceCode" columns are generated as VARCHAR(255), which is too short.
The solution I used was simply to add @Length(max=10000) on the two columns, but I don't like the solution for two reasons:
It's a semantic annotation and I'm using it to solve a purely technical issue. I have absolutely no reason to limit these fields to 10000 characters.
What if I get a sourceCode that's longer than 10000 characters ? This solution is not a real solution.
Is there a better way?
Answer: It's not unreasonable that a maximum length should be specified for a field that is backed by a VARCHAR. In some frameworks, for example Django (Python), a max_length parameter is mandatory to create a CharField, but optional for a TextField. This makes sense: how else can the framework know the size of the column? A VARCHAR column needs a max size.
Make the maximum size a conscious design decision when using a relational database as storage. If the data is text and you need to support arbitrary length, then in the database it should be stored as a TEXT field instead of a VARCHAR. For that, use @Column(columnDefinition="text") or @Type(type="text"), as @ferada said in a comment, and as discussed in this related post.
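As an aside, the same VARCHAR-vs-TEXT decision exists in other ORMs; here is a sketch in SQLAlchemy (Python) of a hypothetical model mirroring the entity above (table and column names are illustrative, not part of the original code):

```python
from sqlalchemy import Column, Integer, String, Text
from sqlalchemy.orm import declarative_base

Base = declarative_base()

class Program(Base):
    __tablename__ = "program"
    tid = Column(Integer, primary_key=True)
    # bounded field: a conscious maximum -> VARCHAR(64)
    container_id = Column(String(64))
    # arbitrary-length text: no artificial cap -> TEXT
    source_code = Column(Text)
```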
You remarked in a comment that H2 used the type VARCHAR(2147483647).
That's an irrelevant implementation detail of how H2 supports TEXT type.
It's indeed a bit funny. | {
"domain": "codereview.stackexchange",
"id": 16272,
"tags": "java, database, spring, hibernate"
} |
Orthogonal self-intersection of geodesics | Question: I learnt that geodesics parallel transport their velocity vectors. Does that mean a geodesic cannot intersect itself orthogonally?
Answer: No, I don't see how the implication follows. It is very possible for geodesics to intersect themselves orthogonally, even in 2-dimensional manifolds. One example that is easy to visualize is a geodesic on an ellipsoid:
In fact, this is the trajectory of a satellite in low Earth orbit due to the oblateness of the Earth. However, if you consider 4-dimensional spacetime, your statement is true for massive particles as they follow timelike trajectories and two timelike vectors can never be orthogonal to each other. But again, this does not follow from your first statement that "geodesics parallel transport their velocity vectors". | {
"domain": "physics.stackexchange",
"id": 100420,
"tags": "general-relativity, differential-geometry, vectors, curvature, geodesics"
} |
Understanding How Double Precision Numbers are Stored in a Computer | Question: I am reading Numerical Analysis by Walter Gautschi. I am somewhat confused by the following quote from page $5$:
To increase the precision, one can use two machine registers to represent a machine number. In effect, one then embeds $\mathbb{R}(t, s) \subset \mathbb{R}(2t, s)$, and calls $x \in \mathbb{R}(2t, s)$ a double-precision number.
(Here $t$ represents the number of allowable binary digits in the mantissa, and $s$ represents the number of binary digits allowable in the exponent.)
Can someone please explain what is going on here with the "machine register"? Some questions that I have are: instead of using two registers, why not just use one of bigger size? Apparently some registers have a different size, because the exponent is also stored in a register, and $t$ may not equal $s$.
Secondly, double precision seems to be defined in terms of a "native precision" already intrinsic to the machine. But on the other hand, I thought double precision was a fixed thing determined by IEEE.
My Background I am a math major taking my first Numerical Analysis course. I do not have any prior experience with computers (except day-to-day use of course) or numerical mathematics.
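For concreteness (and as a caveat to the book's model), the fixed IEEE 754 layouts can be inspected in Python; note that IEEE double precision is not literally $\mathbb{R}(2t, s)$, since the exponent field also grows, from 8 to 11 bits:

```python
import struct

x = 0.1
single = struct.pack('>f', x)  # 32 bits: 1 sign + 8 exponent + 23 mantissa
double = struct.pack('>d', x)  # 64 bits: 1 sign + 11 exponent + 52 mantissa

print(len(single) * 8, format(int.from_bytes(single, 'big'), '032b'))
print(len(double) * 8, format(int.from_bytes(double, 'big'), '064b'))
```

Round-tripping 0.1 through the single-precision format loses information, while the double-precision format preserves the Python float exactly, which is the practical meaning of "doubling the precision" here.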
Answer: I don't know how old your book is, but some architectures (I know of the early SPARCs, there may be others) only had 32 bit floating point registers. Double precision instructions used pairs of registers for storage. This became much less common on general-purpose hardware when 64-bit integer registers became the norm.
If anything, we have the converse today, thanks to vector/SIMD operations. If you have, say, a 128 bit vector register, you can fit four single-precision or two double-precision numbers in it. | {
"domain": "cs.stackexchange",
"id": 15311,
"tags": "computer-architecture, floating-point"
} |
Manage repetitive/similar structures (logging & try/catch blocks) | Question: I'm writing a web bot using puppeteer and I'm logging (using winston) every action the bots does and running it inside try/catch blocks.
For automating these processes and avoid code repetition I created this function:
module.exports = {
webAction: async function (logger, msg, func, ...args)
{
return new Promise(async (resolve) => {
let actionMessage, actionLevel;
try
{
await func(...args);
actionMessage = msg;
actionLevel = 'info';
}
catch (error)
{
actionMessage = "Error while executing (" + msg + ") function.\n---\n" + error.message + "\n---";
actionLevel = 'error';
}
finally
{
logger.log({
level: actionLevel,
message: actionMessage
})
}
resolve('Fulfilment value of Promise');
})
},
};
I use the previous function with arrow functions and normal/regular functions as arguments for making it as dynamic as it can be:
const htmlCode = await page.evaluate(() => document.querySelector('*').outerHTML);
await utils.webAction(
logger, 'Save web code as html file',
(() => (utils.saveStrAsFile('htmlCodeMP_NewItem.html', htmlCode)))
);
let titleID, priceID;
await utils.webAction(
logger, 'Find Ids',
(() => ([titleID, priceID] = utils.findDynamicIdsMP(htmlCode)))
);
await utils.webAction(
logger, 'Enter item title', utils.webType,
page, 'FooTitle', `input[id="${titleID}"]` // Arguments of `webType`
);
await utils.webAction(
logger, 'Enter item price', utils.webType,
page, '1234', `input[id="${priceID}"]` // Arguments of `webType`
);
await utils.webAction(
logger, 'Click item category dropdown menu',
(() => (page.click(constants.cssMpCreateItemCategory))),
);
Is this a good practice?
I honestly think it avoids a lot of code repetition (at least the try/catch blocks and the logger block).
But on the other hand, I'm thinking that there must be a better or more professional approach.
I'm no professional nor skilled in JavaScript; I'm an amateur.
Any advice is highly appreciated.
Thanks in advance ;)
Answer: Passing in arguments as a list of arguments into webAction becomes hard to read. My suggestion would be as follows:
1. Make the arguments an object.
2. Make func a callback that is invoked with no arguments; don't have webAction know anything about what function it is running.
3. Return func's return value from webAction.
const utils = {
webAction: async ({ logger, msg, func }) => {
return new Promise(async (resolve) => {
let res
let actionMessage, actionLevel
try {
res = await func()
actionMessage = msg
actionLevel = 'info'
}
catch (error) {
actionMessage = "Error while executing (" + msg + ") function.\n---\n" + error.message + "\n---"
actionLevel = 'error'
}
finally {
logger.log({
level: actionLevel,
message: actionMessage,
})
}
resolve(res)
})
},
}
Notice how func is invoked with no arguments. Then in its usage:
const htmlCode = await page.evaluate(() => document.querySelector('*').outerHTML);
await utils.webAction({
logger,
msg: 'Save web code as html file',
func: () => utils.saveStrAsFile('htmlCodeMP_NewItem.html', htmlCode)
})
const [titleID, priceID] = await utils.webAction({
func: () => utils.findDynamicIdsMP(htmlCode),
logger,
msg: 'Find Ids',
})
await utils.webAction({
func: () => utils.webType(page, 'FooTitle', `input[id="${titleID}"]`),
logger,
msg: 'Enter item title',
}) | {
"domain": "codereview.stackexchange",
"id": 45472,
"tags": "javascript, logging"
} |
JSON API call in an Angular service | Question: I have followed this format in my application to call the API and get data from it:
getCheckoutDetails(): Observable<UserDetail> {
let query = `7668`;
return this.http
.get(this.appConfig.getAPIUrl()
+ `/buy/getDetails?${query}`)
.map(this.extractData)
.catch(this.handleErrors);
}
private extractData(res: Response) {
let data = res.json();
return data.body ? data.body.message ? data.body.message : {} : {};
}
private handleErrors(error: Response | any) {
let errMsg: string;
if (error instanceof Response) {
const body = error.json() || '';
const err = body.error || JSON.stringify(body);
errMsg = `${error.status} - ${error.statusText || ''} ${err}`;
} else {
errMsg = error.message ? error.message : error.toString();
}
console.error(errMsg);
return Observable.throw(errMsg);
}
This is sample code from one service. The same pattern is followed in all services to handle data and errors. When I run the gulp cpd command to detect duplicate code, it lists all the files. Is there a way to handle this without duplicating?
Answer: Create one HttpService and extend it in your services, for example:
HttpService:
export class HttpService {
protected userInfo:IUserModel;
constructor(public http: Http, public user: User) {
this.userInfo = this.user.getInfo();
}
/**
* method http get
* @param url
* @param params
* @returns {Http}
*/
fetch(url, params, addData) {
let headers = new Headers();
headers.append('Content-Type', 'application/json');
headers.append('User-Id', this.userInfo.id);
headers.append('Token', this.userInfo.accessToken);
if(addData.accessType) {
headers.append('Access-Type', '1');
}
if(addData.limit) {
headers.append('Offset-Step', addData.limit);
}
if(addData.page || addData.page === 0) {
let count = (parseInt(addData.page )- 1) * parseInt(addData.limit);
headers.append('Count', count.toString());
}
let options = new RequestOptions(
{
headers: headers,
search: params,
});
return this.http
.get(
url,
options
)
.map(this.extractData)
.catch(this.handleErrors);
}
/**
* method http post
* @param url
* @param data
* @returns {Http}
*/
send(url, data) {
let body = JSON.stringify(data);
let headers = new Headers({
'Content-Type': 'application/json',
'User-Id': this.userInfo.id,
'Token': this.userInfo.accessToken
});
let options = new RequestOptions({ headers: headers });
return this.http.post(url, body, options)
.map(this.extractData)
.catch(this.handleErrors);
}
/**
* method http put
* @param url
* @param data
* @returns {Http}
*/
stick(url, data) {
let body = JSON.stringify(data);
let headers = new Headers();
headers.append('Content-Type', 'application/json');
headers.append('User-Id', this.userInfo.id);
headers.append('Token', this.userInfo.accessToken);
return this.http
.put(url, body, {headers:headers})
.map(this.extractData)
.catch(this.handleErrors);
}
/**
* method http delete
* @param url
* @param params
* @returns {Http}
*/
remove(url, params) {
//var data = Object.keys(params).map(function(k) {
// return encodeURIComponent(k) + '=' + encodeURIComponent(params[k])
//}).join('&');
//url += '?' + data;
let headers = new Headers();
headers.append('Content-Type', 'application/json');
headers.append('User-Id', this.userInfo.id);
headers.append('Token', this.userInfo.accessToken);
for(var i in params) {
headers.append(i, params[i]);
}
return this.http.delete(url, {headers:headers})
.map(this.extractData)
.catch(this.handleErrors);
}
private extractData(res: Response) {
let data = res.json();
return data.body ? data.body.message ? data.body.message : {} : {};
}
private handleErrors(error: Response | any) {
let errMsg: string;
if (error instanceof Response) {
const body = error.json() || '';
const err = body.error || JSON.stringify(body);
errMsg = `${error.status} - ${error.statusText || ''} ${err}`;
} else {
errMsg = error.message ? error.message : error.toString();
}
console.error(errMsg);
return Observable.throw(errMsg);
}
}
And in your child service extends by HttpService:
export class SomeService extends HttpService {
constructor(private urls: Urls, http: Http, user: User) {
super(http, user);
}
getCheckoutDetails(data): Observable<UserDetail> {
let query = `7668`;
return this.fetch(this.appConfig.getAPIUrl() + `/buy/getDetails?${query}`, null, {});
}
}
You may need to adapt HttpService to your own needs.
"domain": "codereview.stackexchange",
"id": 27738,
"tags": "json, typescript, angular-2+"
} |
Trouble launching moveit2 configurations | Question: Hi, I am using ROS2 humble with moveit2. I created a configuration with the setup_assistant of moveit2. However, when I try to launch the demo launch file created by it, it gives me the error:
[ERROR] [launch]: Caught exception in launch (see debug for traceback): 'capabilities'
I could not find anything related to capabilities. What does it mean? May someone please help me?
Answer: I found the solution thanks to @ssarkar. To fix this error, first open the file called launches.py and go to line 203 (the simplest way to locate it is to run the ros2 launch command with --debug).
After opening it, you need to revert the change mentioned here. That is, copy
ld.add_action(DeclareLaunchArgument("capabilities", default_value=""))
and then paste it instead of
ld.add_action(
DeclareLaunchArgument(
"capabilities",
default_value=moveit_config.move_group_capabilities["capabilities"],
)
)
After that, save the file with Ctrl+S and then rebuild your colcon workspace. Done. | {
"domain": "robotics.stackexchange",
"id": 39081,
"tags": "moveit, launch"
} |
Electric Field in order for Fusion to occur | Question: If I want to do D-D -> He + n fusion in an electric field - what potential would I need?
So I know the Coulomb barrier is at $U = k \frac{e^2}{10^{-15}\,\mathrm{m}} \approx 1.44\ \mathrm{MeV}$
This is when the strong force takes over essentially.
Does this mean if I put a deuterium atom in a 1.44 MV electric field and let it accelerate into the other, they will fuse? This seems very low?
What am I missing here? What field would I need?
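The 1.44 MeV figure itself checks out numerically (constants rounded; this only verifies the barrier estimate, not the fusion question):

```python
k = 8.9875e9   # Coulomb constant, N*m^2/C^2
e = 1.602e-19  # elementary charge, C
r = 1e-15      # separation, m (~nuclear radius)

U_joules = k * e**2 / r               # U = k e^2 / r
U_MeV = U_joules / (1.602e-19 * 1e6)  # convert J -> MeV
print(round(U_MeV, 2))  # ~1.44
```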
Answer: What you're missing is the difficulty of actually getting the nuclei that you are working with to actually hit each other. Nuclei are tiny, so if you try to aim them at each other, you will probably miss.
This page suggests that at the energies in the core of the sun, only 1 in every $10^{26}$ collision events actually fuses. Now this isn't pure D-D, and the energies are different from your example, but it does give a scale for the problem.
This page has at the bottom a graphic showing the cross section of reaction for common fusion products over a range of energies. To make it work more than just occasionally, you need to have very high densities. People do make tabletop fusors that generate small numbers of fusion events with high voltage. It's just not useful for energy production. | {
"domain": "physics.stackexchange",
"id": 20685,
"tags": "nuclear-physics, fusion"
} |